Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-4.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char / misc driver updates from Greg KH:
"Here's the big char and misc driver update for 4.7-rc1.

Lots of different tiny driver subsystems have updates here with new
drivers and functionality. Details in the shortlog.

All have been in linux-next with no reported issues for a while"

* tag 'char-misc-4.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (125 commits)
mcb: Delete num_cells variable which is not required
mcb: Fixed bar number assignment for the gdd
mcb: Replace ioremap and request_region with the devm version
mcb: Implement bus->dev.release callback
mcb: export bus information via sysfs
mcb: Correctly initialize the bus's device
mei: bus: call mei_cl_read_start under device lock
coresight: etb10: adjust read pointer only when needed
coresight: configuring ETF in FIFO mode when acting as link
coresight: tmc: implementing TMC-ETF AUX space API
coresight: moving struct cs_buffers to header file
coresight: tmc: keep track of memory width
coresight: tmc: make sysFS and Perf mode mutually exclusive
coresight: tmc: dump system memory content only when needed
coresight: tmc: adding mode of operation for link/sinks
coresight: tmc: getting rid of multiple read access
coresight: tmc: allocating memory when needed
coresight: tmc: making prepare/unprepare functions generic
coresight: tmc: splitting driver in ETB/ETF and ETR components
coresight: tmc: cleaning up header file
...

+6422 -3778
+62 -7
Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10
··· 6 6 source for a single sink. 7 7 ex: echo 1 > /sys/bus/coresight/devices/20010000.etb/enable_sink 8 8 9 - What: /sys/bus/coresight/devices/<memory_map>.etb/status 10 - Date: November 2014 11 - KernelVersion: 3.19 12 - Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 13 - Description: (R) List various control and status registers. The specific 14 - layout and content is driver specific. 15 - 16 9 What: /sys/bus/coresight/devices/<memory_map>.etb/trigger_cntr 17 10 Date: November 2014 18 11 KernelVersion: 3.19 ··· 15 22 following the trigger event. The number of 32-bit words written 16 23 into the Trace RAM following the trigger event is equal to the 17 24 value stored in this register+1 (from ARM ETB-TRM). 25 + 26 + What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/rdp 27 + Date: March 2016 28 + KernelVersion: 4.7 29 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 30 + Description: (R) Defines the depth, in words, of the trace RAM in powers of 31 + 2. The value is read directly from HW register RDP, 0x004. 32 + 33 + What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/sts 34 + Date: March 2016 35 + KernelVersion: 4.7 36 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 37 + Description: (R) Shows the value held by the ETB status register. The value 38 + is read directly from HW register STS, 0x00C. 39 + 40 + What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/rrp 41 + Date: March 2016 42 + KernelVersion: 4.7 43 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 44 + Description: (R) Shows the value held by the ETB RAM Read Pointer register 45 + that is used to read entries from the Trace RAM over the APB 46 + interface. The value is read directly from HW register RRP, 47 + 0x014. 
48 + 49 + What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/rwp 50 + Date: March 2016 51 + KernelVersion: 4.7 52 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 53 + Description: (R) Shows the value held by the ETB RAM Write Pointer register 54 + that is used to set the write pointer to write entries from 55 + the CoreSight bus into the Trace RAM. The value is read directly 56 + from HW register RWP, 0x018. 57 + 58 + What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/trg 59 + Date: March 2016 60 + KernelVersion: 4.7 61 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 62 + Description: (R) Similar to "trigger_cntr" above except that this value is 63 + read directly from HW register TRG, 0x01C. 64 + 65 + What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/ctl 66 + Date: March 2016 67 + KernelVersion: 4.7 68 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 69 + Description: (R) Shows the value held by the ETB Control register. The value 70 + is read directly from HW register CTL, 0x020. 71 + 72 + What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/ffsr 73 + Date: March 2016 74 + KernelVersion: 4.7 75 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 76 + Description: (R) Shows the value held by the ETB Formatter and Flush Status 77 + register. The value is read directly from HW register FFSR, 78 + 0x300. 79 + 80 + What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/ffcr 81 + Date: March 2016 82 + KernelVersion: 4.7 83 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 84 + Description: (R) Shows the value held by the ETB Formatter and Flush Control 85 + register. The value is read directly from HW register FFCR, 86 + 0x304.
+13
Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x
··· 359 359 Description: (R) Print the content of the Peripheral ID3 Register 360 360 (0xFEC). The value is taken directly from the HW. 361 361 362 + What: /sys/bus/coresight/devices/<memory_map>.etm/mgmt/trcconfig 363 + Date: February 2016 364 + KernelVersion: 4.07 365 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 366 + Description: (R) Print the content of the trace configuration register 367 + (0x010) as currently set by SW. 368 + 369 + What: /sys/bus/coresight/devices/<memory_map>.etm/mgmt/trctraceid 370 + Date: February 2016 371 + KernelVersion: 4.07 372 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 373 + Description: (R) Print the content of the trace ID register (0x040). 374 + 362 375 What: /sys/bus/coresight/devices/<memory_map>.etm/trcidr/trcidr0 363 376 Date: April 2015 364 377 KernelVersion: 4.01
+53
Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
··· 1 + What: /sys/bus/coresight/devices/<memory_map>.stm/enable_source 2 + Date: April 2016 3 + KernelVersion: 4.7 4 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 5 + Description: (RW) Enable/disable tracing on this specific trace macrocell. 6 + Enabling the trace macrocell implies it has been configured 7 + properly and a sink has been identified for it. The path 8 + of coresight components linking the source to the sink is 9 + configured and managed automatically by the coresight framework. 10 + 11 + What: /sys/bus/coresight/devices/<memory_map>.stm/hwevent_enable 12 + Date: April 2016 13 + KernelVersion: 4.7 14 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 15 + Description: (RW) Provides access to the HW event enable register, used in 16 + conjunction with HW event bank select register. 17 + 18 + What: /sys/bus/coresight/devices/<memory_map>.stm/hwevent_select 19 + Date: April 2016 20 + KernelVersion: 4.7 21 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 22 + Description: (RW) Gives access to the HW event block select register 23 + (STMHEBSR) in order to configure up to 256 channels. Used in 24 + conjunction with "hwevent_enable" register as described above. 25 + 26 + What: /sys/bus/coresight/devices/<memory_map>.stm/port_enable 27 + Date: April 2016 28 + KernelVersion: 4.7 29 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 30 + Description: (RW) Provides access to the stimulus port enable register 31 + (STMSPER). Used in conjunction with "port_select" described 32 + below. 33 + 34 + What: /sys/bus/coresight/devices/<memory_map>.stm/port_select 35 + Date: April 2016 36 + KernelVersion: 4.7 37 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 38 + Description: (RW) Used to determine which bank of stimulus ports the 39 + bits in register STMSPER (see above) apply to. 
40 + 41 + What: /sys/bus/coresight/devices/<memory_map>.stm/status 42 + Date: April 2016 43 + KernelVersion: 4.7 44 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 45 + Description: (R) List various control and status registers. The specific 46 + layout and content is driver specific. 47 + 48 + What: /sys/bus/coresight/devices/<memory_map>.stm/traceid 49 + Date: April 2016 50 + KernelVersion: 4.7 51 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 52 + Description: (RW) Holds the trace ID that will appear in the trace stream 53 + coming from this trace entity.
+77
Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc
··· 6 6 formatter after a defined number of words have been stored 7 7 following the trigger event. Additional interfaces for this 8 8 driver are expected to be added as it matures. 9 + 10 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rsz 11 + Date: March 2016 12 + KernelVersion: 4.7 13 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 14 + Description: (R) Defines the size, in 32-bit words, of the local RAM buffer. 15 + The value is read directly from HW register RSZ, 0x004. 16 + 17 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/sts 18 + Date: March 2016 19 + KernelVersion: 4.7 20 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 21 + Description: (R) Shows the value held by the TMC status register. The value 22 + is read directly from HW register STS, 0x00C. 23 + 24 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rrp 25 + Date: March 2016 26 + KernelVersion: 4.7 27 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 28 + Description: (R) Shows the value held by the TMC RAM Read Pointer register 29 + that is used to read entries from the Trace RAM over the APB 30 + interface. The value is read directly from HW register RRP, 31 + 0x014. 32 + 33 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rwp 34 + Date: March 2016 35 + KernelVersion: 4.7 36 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 37 + Description: (R) Shows the value held by the TMC RAM Write Pointer register 38 + that is used to set the write pointer to write entries from 39 + the CoreSight bus into the Trace RAM. The value is read directly 40 + from HW register RWP, 0x018. 41 + 42 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/trg 43 + Date: March 2016 44 + KernelVersion: 4.7 45 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 46 + Description: (R) Similar to "trigger_cntr" above except that this value is 47 + read directly from HW register TRG, 0x01C. 
48 + 49 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ctl 50 + Date: March 2016 51 + KernelVersion: 4.7 52 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 53 + Description: (R) Shows the value held by the TMC Control register. The value 54 + is read directly from HW register CTL, 0x020. 55 + 56 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ffsr 57 + Date: March 2016 58 + KernelVersion: 4.7 59 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 60 + Description: (R) Shows the value held by the TMC Formatter and Flush Status 61 + register. The value is read directly from HW register FFSR, 62 + 0x300. 63 + 64 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ffcr 65 + Date: March 2016 66 + KernelVersion: 4.7 67 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 68 + Description: (R) Shows the value held by the TMC Formatter and Flush Control 69 + register. The value is read directly from HW register FFCR, 70 + 0x304. 71 + 72 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/mode 73 + Date: March 2016 74 + KernelVersion: 4.7 75 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 76 + Description: (R) Shows the value held by the TMC Mode register, which 77 + indicates the mode the device has been configured to enact. 78 + The value is read directly from the MODE register, 0x028. 79 + 80 + What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/devid 81 + Date: March 2016 82 + KernelVersion: 4.7 83 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 84 + Description: (R) Indicates the capabilities of the Coresight TMC. 85 + The value is read directly from the DEVID register, 0xFC8.
+29
Documentation/ABI/testing/sysfs-bus-mcb
··· 1 + What: /sys/bus/mcb/devices/mcb:X 2 + Date: March 2016 3 + KernelVersion: 4.7 4 + Contact: Johannes Thumshirn <jth@kernel.org> 5 + Description: Hardware chip or device hosting the MEN chameleon bus 6 + 7 + What: /sys/bus/mcb/devices/mcb:X/revision 8 + Date: March 2016 9 + KernelVersion: 4.7 10 + Contact: Johannes Thumshirn <jth@kernel.org> 11 + Description: The FPGA's revision number 12 + 13 + What: /sys/bus/mcb/devices/mcb:X/minor 14 + Date: March 2016 15 + KernelVersion: 4.7 16 + Contact: Johannes Thumshirn <jth@kernel.org> 17 + Description: The FPGA's minor number 18 + 19 + What: /sys/bus/mcb/devices/mcb:X/model 20 + Date: March 2016 21 + KernelVersion: 4.7 22 + Contact: Johannes Thumshirn <jth@kernel.org> 23 + Description: The FPGA's model number 24 + 25 + What: /sys/bus/mcb/devices/mcb:X/name 26 + Date: March 2016 27 + KernelVersion: 4.7 28 + Contact: Johannes Thumshirn <jth@kernel.org> 29 + Description: The FPGA's name
+10
Documentation/ABI/testing/sysfs-class-stm
··· 12 12 Contact: Alexander Shishkin <alexander.shishkin@linux.intel.com> 13 13 Description: 14 14 Shows the number of channels per master on this STM device. 15 + 16 + What: /sys/class/stm/<stm>/hw_override 17 + Date: March 2016 18 + KernelVersion: 4.7 19 + Contact: Alexander Shishkin <alexander.shishkin@linux.intel.com> 20 + Description: 21 + Reads as 0 if master numbers in the STP stream produced by 22 + this stm device will match the master numbers assigned by 23 + the software or 1 if the stm hardware overrides software 24 + assigned masters.
+28
Documentation/devicetree/bindings/arm/coresight.txt
··· 19 19 - "arm,coresight-etm3x", "arm,primecell"; 20 20 - "arm,coresight-etm4x", "arm,primecell"; 21 21 - "qcom,coresight-replicator1x", "arm,primecell"; 22 + - "arm,coresight-stm", "arm,primecell"; [1] 22 23 23 24 * reg: physical base address and length of the register 24 25 set(s) of the component. ··· 36 35 * port or ports: The representation of the component's port 37 36 layout using the generic DT graph presentation found in 38 37 "bindings/graph.txt". 38 + 39 + * Additional required properties for System Trace Macrocells (STM): 40 + * reg: along with the physical base address and length of the register 41 + set as described above, another entry is required to describe the 42 + mapping of the extended stimulus port area. 43 + 44 + * reg-names: the only acceptable values are "stm-base" and 45 + "stm-stimulus-base", each corresponding to the areas defined in "reg". 39 46 40 47 * Required properties for devices that don't show up on the AMBA bus, such as 41 48 non-configurable replicators: ··· 211 202 }; 212 203 }; 213 204 }; 205 + 206 + 4. STM 207 + stm@20100000 { 208 + compatible = "arm,coresight-stm", "arm,primecell"; 209 + reg = <0 0x20100000 0 0x1000>, 210 + <0 0x28000000 0 0x180000>; 211 + reg-names = "stm-base", "stm-stimulus-base"; 212 + 213 + clocks = <&soc_smc50mhz>; 214 + clock-names = "apb_pclk"; 215 + port { 216 + stm_out_port: endpoint { 217 + remote-endpoint = <&main_funnel_in_port2>; 218 + }; 219 + }; 220 + }; 221 + 222 + [1]. There are currently two versions of STM: STM32 and STM500. Both 223 + have the same HW interface and as such don't need an explicit binding name.
+35 -2
Documentation/trace/coresight.txt
··· 190 190 Last but not least, "struct module *owner" is expected to be set to reflect 191 191 the information carried in "THIS_MODULE". 192 192 193 - How to use 194 - ---------- 193 + How to use the tracer modules 194 + ----------------------------- 195 195 196 196 Before trace collection can start, a coresight sink needs to be identified. 197 197 There is no limit on the amount of sinks (nor sources) that can be enabled at ··· 297 297 Instruction 13570831 0x8026B584 E28DD00C false ADD sp,sp,#0xc 298 298 Instruction 0 0x8026B588 E8BD8000 true LDM sp!,{pc} 299 299 Timestamp Timestamp: 17107041535 300 + 301 + How to use the STM module 302 + ------------------------- 303 + 304 + Using the System Trace Macrocell module is the same as for the tracers - the 305 + only difference is that clients drive the trace capture rather 306 + than the program flow through the code. 307 + 308 + As with any other CoreSight component, specifics about the STM tracer can be 309 + found in sysfs, with more information on each entry found in [1]: 310 + 311 + root@genericarmv8:~# ls /sys/bus/coresight/devices/20100000.stm 312 + enable_source hwevent_select port_enable subsystem uevent 313 + hwevent_enable mgmt port_select traceid 314 + root@genericarmv8:~# 315 + 316 + Like any other source a sink needs to be identified and the STM enabled before 317 + being used: 318 + 319 + root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20010000.etf/enable_sink 320 + root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20100000.stm/enable_source 321 + 322 + From there user space applications can request and use channels using the devfs 323 + interface provided for that purpose by the generic STM API: 324 + 325 + root@genericarmv8:~# ls -l /dev/20100000.stm 326 + crw------- 1 root root 10, 61 Jan 3 18:11 /dev/20100000.stm 327 + root@genericarmv8:~# 328 + 329 + Details on how to use the generic STM API can be found here [2]. 330 + 331 + [1]. 
Documentation/ABI/testing/sysfs-bus-coresight-devices-stm 332 + [2]. Documentation/trace/stm.txt
+9 -1
Documentation/w1/slaves/w1_therm
··· 33 33 powered it would be possible to convert all the devices at the same 34 34 time and then go back to read individual sensors. That isn't 35 35 currently supported. The driver also doesn't support reduced 36 - precision (which would also reduce the conversion time). 36 + precision (which would also reduce the conversion time) when reading values. 37 + 38 + Writing a value between 9 and 12 to the sysfs w1_slave file will change the 39 + precision of the sensor for the next readings. This value is in (volatile) 40 + SRAM, so it is reset when the sensor gets power-cycled. 41 + 42 + To store the current precision configuration into EEPROM, the value 0 43 + has to be written to the sysfs w1_slave file. Since the EEPROM has a limited 44 + amount of writes (>50k), this command should be used wisely. 37 45 38 46 The module parameter strong_pullup can be set to 0 to disable the 39 47 strong pullup, 1 to enable autodetection or 2 to force strong pullup.
+1
MAINTAINERS
··· 9843 9843 SYSTEM TRACE MODULE CLASS 9844 9844 M: Alexander Shishkin <alexander.shishkin@linux.intel.com> 9845 9845 S: Maintained 9846 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/ash/stm.git 9846 9847 F: Documentation/trace/stm.txt 9847 9848 F: drivers/hwtracing/stm/ 9848 9849 F: include/linux/stm.h
+1 -3
drivers/char/Kconfig
··· 279 279 280 280 config RTC 281 281 tristate "Enhanced Real Time Clock Support (legacy PC RTC driver)" 282 - depends on !PPC && !PARISC && !IA64 && !M68K && !SPARC && !FRV \ 283 - && !ARM && !SUPERH && !S390 && !AVR32 && !BLACKFIN && !UML 282 + depends on ALPHA || (MIPS && MACH_LOONGSON64) || MN10300 284 283 ---help--- 285 284 If you say Y here and create a character special file /dev/rtc with 286 285 major number 10 and minor number 135 using mknod ("man mknod"), you ··· 584 585 585 586 config DEVPORT 586 587 bool 587 - depends on !M68K 588 588 depends on ISA || PCI 589 589 default y 590 590
+1 -10
drivers/char/xillybus/xillybus_of.c
··· 81 81 { 82 82 dma_addr_t addr; 83 83 struct xilly_mapping *this; 84 - int rc; 85 84 86 85 this = kzalloc(sizeof(*this), GFP_KERNEL); 87 86 if (!this) ··· 100 101 101 102 *ret_dma_handle = addr; 102 103 103 - rc = devm_add_action(ep->dev, xilly_of_unmap, this); 104 - 105 - if (rc) { 106 - dma_unmap_single(ep->dev, addr, size, direction); 107 - kfree(this); 108 - return rc; 109 - } 110 - 111 - return 0; 104 + return devm_add_action_or_reset(ep->dev, xilly_of_unmap, this); 112 105 } 113 106 114 107 static struct xilly_endpoint_hardware of_hw = {
+1 -9
drivers/char/xillybus/xillybus_pcie.c
··· 98 98 int pci_direction; 99 99 dma_addr_t addr; 100 100 struct xilly_mapping *this; 101 - int rc; 102 101 103 102 this = kzalloc(sizeof(*this), GFP_KERNEL); 104 103 if (!this) ··· 119 120 120 121 *ret_dma_handle = addr; 121 122 122 - rc = devm_add_action(ep->dev, xilly_pci_unmap, this); 123 - if (rc) { 124 - pci_unmap_single(ep->pdev, addr, size, pci_direction); 125 - kfree(this); 126 - return rc; 127 - } 128 - 129 - return 0; 123 + return devm_add_action_or_reset(ep->dev, xilly_pci_unmap, this); 130 124 } 131 125 132 126 static struct xilly_endpoint_hardware pci_hw = {
+43 -15
drivers/hv/channel_mgmt.c
··· 597 597 598 598 static void vmbus_wait_for_unload(void) 599 599 { 600 - int cpu = smp_processor_id(); 601 - void *page_addr = hv_context.synic_message_page[cpu]; 602 - struct hv_message *msg = (struct hv_message *)page_addr + 603 - VMBUS_MESSAGE_SINT; 600 + int cpu; 601 + void *page_addr; 602 + struct hv_message *msg; 604 603 struct vmbus_channel_message_header *hdr; 605 - bool unloaded = false; 604 + u32 message_type; 606 605 606 + /* 607 + * CHANNELMSG_UNLOAD_RESPONSE is always delivered to the CPU which was 608 + * used for initial contact or to CPU0 depending on host version. When 609 + * we're crashing on a different CPU let's hope that IRQ handler on 610 + * the cpu which receives CHANNELMSG_UNLOAD_RESPONSE is still 611 + * functional and vmbus_unload_response() will complete 612 + * vmbus_connection.unload_event. If not, the last thing we can do is 613 + * read message pages for all CPUs directly. 614 + */ 607 615 while (1) { 608 - if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) { 609 - mdelay(10); 610 - continue; 616 + if (completion_done(&vmbus_connection.unload_event)) 617 + break; 618 + 619 + for_each_online_cpu(cpu) { 620 + page_addr = hv_context.synic_message_page[cpu]; 621 + msg = (struct hv_message *)page_addr + 622 + VMBUS_MESSAGE_SINT; 623 + 624 + message_type = READ_ONCE(msg->header.message_type); 625 + if (message_type == HVMSG_NONE) 626 + continue; 627 + 628 + hdr = (struct vmbus_channel_message_header *) 629 + msg->u.payload; 630 + 631 + if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE) 632 + complete(&vmbus_connection.unload_event); 633 + 634 + vmbus_signal_eom(msg, message_type); 611 635 } 612 636 613 - hdr = (struct vmbus_channel_message_header *)msg->u.payload; 614 - if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE) 615 - unloaded = true; 637 + mdelay(10); 638 + } 616 639 617 - vmbus_signal_eom(msg); 618 - 619 - if (unloaded) 620 - break; 640 + /* 641 + * We're crashing and already got the UNLOAD_RESPONSE, cleanup all 642 + * 
maybe-pending messages on all CPUs to be able to receive new 643 + * messages after we reconnect. 644 + */ 645 + for_each_online_cpu(cpu) { 646 + page_addr = hv_context.synic_message_page[cpu]; 647 + msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT; 648 + msg->header.message_type = HVMSG_NONE; 621 649 } 622 650 } 623 651
+1
drivers/hv/connection.c
··· 495 495 496 496 hv_do_hypercall(HVCALL_SIGNAL_EVENT, channel->sig_event, NULL); 497 497 } 498 + EXPORT_SYMBOL_GPL(vmbus_set_event);
+3 -2
drivers/hv/hv_balloon.c
··· 714 714 * If the pfn range we are dealing with is not in the current 715 715 * "hot add block", move on. 716 716 */ 717 - if ((start_pfn >= has->end_pfn)) 717 + if (start_pfn < has->start_pfn || start_pfn >= has->end_pfn) 718 718 continue; 719 719 /* 720 720 * If the current hot add-request extends beyond ··· 768 768 * If the pfn range we are dealing with is not in the current 769 769 * "hot add block", move on. 770 770 */ 771 - if ((start_pfn >= has->end_pfn)) 771 + if (start_pfn < has->start_pfn || start_pfn >= has->end_pfn) 772 772 continue; 773 773 774 774 old_covered_state = has->covered_end_pfn; ··· 1400 1400 * This is a normal hot-add request specifying 1401 1401 * hot-add memory. 1402 1402 */ 1403 + dm->host_specified_ha_region = false; 1403 1404 ha_pg_range = &ha_msg->range; 1404 1405 dm->ha_wrk.ha_page_range = *ha_pg_range; 1405 1406 dm->ha_wrk.ha_region_range.page_range = 0;
+31
drivers/hv/hv_kvp.c
··· 78 78 79 79 static void kvp_respond_to_host(struct hv_kvp_msg *msg, int error); 80 80 static void kvp_timeout_func(struct work_struct *dummy); 81 + static void kvp_host_handshake_func(struct work_struct *dummy); 81 82 static void kvp_register(int); 82 83 83 84 static DECLARE_DELAYED_WORK(kvp_timeout_work, kvp_timeout_func); 85 + static DECLARE_DELAYED_WORK(kvp_host_handshake_work, kvp_host_handshake_func); 84 86 static DECLARE_WORK(kvp_sendkey_work, kvp_send_key); 85 87 86 88 static const char kvp_devname[] = "vmbus/hv_kvp"; ··· 132 130 hv_poll_channel(kvp_transaction.recv_channel, kvp_poll_wrapper); 133 131 } 134 132 133 + static void kvp_host_handshake_func(struct work_struct *dummy) 134 + { 135 + hv_poll_channel(kvp_transaction.recv_channel, hv_kvp_onchannelcallback); 136 + } 137 + 135 138 static int kvp_handle_handshake(struct hv_kvp_msg *msg) 136 139 { 137 140 switch (msg->kvp_hdr.operation) { ··· 161 154 pr_debug("KVP: userspace daemon ver. %d registered\n", 162 155 KVP_OP_REGISTER); 163 156 kvp_register(dm_reg_value); 157 + 158 + /* 159 + * If we're still negotiating with the host cancel the timeout 160 + * work to not poll the channel twice. 161 + */ 162 + cancel_delayed_work_sync(&kvp_host_handshake_work); 164 163 hv_poll_channel(kvp_transaction.recv_channel, kvp_poll_wrapper); 165 164 166 165 return 0; ··· 607 594 struct icmsg_negotiate *negop = NULL; 608 595 int util_fw_version; 609 596 int kvp_srv_version; 597 + static enum {NEGO_NOT_STARTED, 598 + NEGO_IN_PROGRESS, 599 + NEGO_FINISHED} host_negotiatied = NEGO_NOT_STARTED; 610 600 601 + if (host_negotiatied == NEGO_NOT_STARTED && 602 + kvp_transaction.state < HVUTIL_READY) { 603 + /* 604 + * If userspace daemon is not connected and host is asking 605 + * us to negotiate we need to delay to not lose messages. 606 + * This is important for Failover IP setting. 
607 + */ 608 + host_negotiatied = NEGO_IN_PROGRESS; 609 + schedule_delayed_work(&kvp_host_handshake_work, 610 + HV_UTIL_NEGO_TIMEOUT * HZ); 611 + return; 612 + } 611 613 if (kvp_transaction.state > HVUTIL_READY) 612 614 return; 613 615 ··· 700 672 vmbus_sendpacket(channel, recv_buffer, 701 673 recvlen, requestid, 702 674 VM_PKT_DATA_INBAND, 0); 675 + 676 + host_negotiatied = NEGO_FINISHED; 703 677 } 704 678 705 679 } ··· 738 708 void hv_kvp_deinit(void) 739 709 { 740 710 kvp_transaction.state = HVUTIL_DEVICE_DYING; 711 + cancel_delayed_work_sync(&kvp_host_handshake_work); 741 712 cancel_delayed_work_sync(&kvp_timeout_work); 742 713 cancel_work_sync(&kvp_sendkey_work); 743 714 hvutil_transport_destroy(hvt);
+19 -4
drivers/hv/hyperv_vmbus.h
··· 36 36 #define HV_UTIL_TIMEOUT 30 37 37 38 38 /* 39 + * Timeout for guest-host handshake for services. 40 + */ 41 + #define HV_UTIL_NEGO_TIMEOUT 60 42 + 43 + /* 39 44 * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent 40 45 * is set by CPUID(HVCPUID_VERSION_FEATURES). 41 46 */ ··· 625 620 channel_message_table[CHANNELMSG_COUNT]; 626 621 627 622 /* Free the message slot and signal end-of-message if required */ 628 - static inline void vmbus_signal_eom(struct hv_message *msg) 623 + static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type) 629 624 { 630 - msg->header.message_type = HVMSG_NONE; 625 + /* 626 + * On crash we're reading some other CPU's message page and we need 627 + * to be careful: this other CPU may already had cleared the header 628 + * and the host may already had delivered some other message there. 629 + * In case we blindly write msg->header.message_type we're going 630 + * to lose it. We can still lose a message of the same type but 631 + * we count on the fact that there can only be one 632 + * CHANNELMSG_UNLOAD_RESPONSE and we don't care about other messages 633 + * on crash. 634 + */ 635 + if (cmpxchg(&msg->header.message_type, old_msg_type, 636 + HVMSG_NONE) != old_msg_type) 637 + return; 631 638 632 639 /* 633 640 * Make sure the write to MessageType (ie set to ··· 683 666 void vmbus_disconnect(void); 684 667 685 668 int vmbus_post_msg(void *buffer, size_t buflen); 686 - 687 - void vmbus_set_event(struct vmbus_channel *channel); 688 669 689 670 void vmbus_on_event(unsigned long data); 690 671 void vmbus_on_msg_dpc(unsigned long data);
+12 -83
drivers/hv/ring_buffer.c
··· 33 33 void hv_begin_read(struct hv_ring_buffer_info *rbi) 34 34 { 35 35 rbi->ring_buffer->interrupt_mask = 1; 36 - mb(); 36 + virt_mb(); 37 37 } 38 38 39 39 u32 hv_end_read(struct hv_ring_buffer_info *rbi) 40 40 { 41 - u32 read; 42 - u32 write; 43 41 44 42 rbi->ring_buffer->interrupt_mask = 0; 45 - mb(); 43 + virt_mb(); 46 44 47 45 /* 48 46 * Now check to see if the ring buffer is still empty. 49 47 * If it is not, we raced and we need to process new 50 48 * incoming messages. 51 49 */ 52 - hv_get_ringbuffer_availbytes(rbi, &read, &write); 53 - 54 - return read; 50 + return hv_get_bytes_to_read(rbi); 55 51 } 56 52 57 53 /* ··· 68 72 69 73 static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi) 70 74 { 71 - mb(); 72 - if (rbi->ring_buffer->interrupt_mask) 75 + virt_mb(); 76 + if (READ_ONCE(rbi->ring_buffer->interrupt_mask)) 73 77 return false; 74 78 75 79 /* check interrupt_mask before read_index */ 76 - rmb(); 80 + virt_rmb(); 77 81 /* 78 82 * This is the only case we need to signal when the 79 83 * ring transitions from being empty to non-empty. 80 84 */ 81 - if (old_write == rbi->ring_buffer->read_index) 82 - return true; 83 - 84 - return false; 85 - } 86 - 87 - /* 88 - * To optimize the flow management on the send-side, 89 - * when the sender is blocked because of lack of 90 - * sufficient space in the ring buffer, potential the 91 - * consumer of the ring buffer can signal the producer. 92 - * This is controlled by the following parameters: 93 - * 94 - * 1. pending_send_sz: This is the size in bytes that the 95 - * producer is trying to send. 96 - * 2. The feature bit feat_pending_send_sz set to indicate if 97 - * the consumer of the ring will signal when the ring 98 - * state transitions from being full to a state where 99 - * there is room for the producer to send the pending packet. 
100 - */ 101 - 102 - static bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi) 103 - { 104 - u32 cur_write_sz; 105 - u32 r_size; 106 - u32 write_loc; 107 - u32 read_loc = rbi->ring_buffer->read_index; 108 - u32 pending_sz; 109 - 110 - /* 111 - * Issue a full memory barrier before making the signaling decision. 112 - * Here is the reason for having this barrier: 113 - * If the reading of the pend_sz (in this function) 114 - * were to be reordered and read before we commit the new read 115 - * index (in the calling function) we could 116 - * have a problem. If the host were to set the pending_sz after we 117 - * have sampled pending_sz and go to sleep before we commit the 118 - * read index, we could miss sending the interrupt. Issue a full 119 - * memory barrier to address this. 120 - */ 121 - mb(); 122 - 123 - pending_sz = rbi->ring_buffer->pending_send_sz; 124 - write_loc = rbi->ring_buffer->write_index; 125 - /* If the other end is not blocked on write don't bother. */ 126 - if (pending_sz == 0) 127 - return false; 128 - 129 - r_size = rbi->ring_datasize; 130 - cur_write_sz = write_loc >= read_loc ? r_size - (write_loc - read_loc) : 131 - read_loc - write_loc; 132 - 133 - if (cur_write_sz >= pending_sz) 85 + if (old_write == READ_ONCE(rbi->ring_buffer->read_index)) 134 86 return true; 135 87 136 88 return false; ··· 132 188 u32 next_read_location) 133 189 { 134 190 ring_info->ring_buffer->read_index = next_read_location; 191 + ring_info->priv_read_index = next_read_location; 135 192 } 136 - 137 - 138 - /* Get the start of the ring buffer. */ 139 - static inline void * 140 - hv_get_ring_buffer(struct hv_ring_buffer_info *ring_info) 141 - { 142 - return (void *)ring_info->ring_buffer->buffer; 143 - } 144 - 145 193 146 194 /* Get the size of the ring buffer. 
*/ 147 195 static inline u32 ··· 268 332 { 269 333 int i = 0; 270 334 u32 bytes_avail_towrite; 271 - u32 bytes_avail_toread; 272 335 u32 totalbytes_towrite = 0; 273 336 274 337 u32 next_write_location; ··· 283 348 if (lock) 284 349 spin_lock_irqsave(&outring_info->ring_lock, flags); 285 350 286 - hv_get_ringbuffer_availbytes(outring_info, 287 - &bytes_avail_toread, 288 - &bytes_avail_towrite); 351 + bytes_avail_towrite = hv_get_bytes_to_write(outring_info); 289 352 290 353 /* 291 354 * If there is only room for the packet, assume it is full. ··· 317 384 sizeof(u64)); 318 385 319 386 /* Issue a full memory barrier before updating the write index */ 320 - mb(); 387 + virt_mb(); 321 388 322 389 /* Now, update the write location */ 323 390 hv_set_next_write_location(outring_info, next_write_location); ··· 334 401 void *buffer, u32 buflen, u32 *buffer_actual_len, 335 402 u64 *requestid, bool *signal, bool raw) 336 403 { 337 - u32 bytes_avail_towrite; 338 404 u32 bytes_avail_toread; 339 405 u32 next_read_location = 0; 340 406 u64 prev_indices = 0; ··· 349 417 *buffer_actual_len = 0; 350 418 *requestid = 0; 351 419 352 - hv_get_ringbuffer_availbytes(inring_info, 353 - &bytes_avail_toread, 354 - &bytes_avail_towrite); 355 - 420 + bytes_avail_toread = hv_get_bytes_to_read(inring_info); 356 421 /* Make sure there is something to read */ 357 422 if (bytes_avail_toread < sizeof(desc)) { 358 423 /* ··· 393 464 * the writer may start writing to the read area once the read index 394 465 * is updated. 395 466 */ 396 - mb(); 467 + virt_mb(); 397 468 398 469 /* Update the read index */ 399 470 hv_set_next_read_location(inring_info, next_read_location);
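Editor's note: the ring_buffer.c hunks above replace the read/write pair returned by `hv_get_ringbuffer_availbytes()` with single-purpose `hv_get_bytes_to_read()`/`hv_get_bytes_to_write()` calls. The underlying modular arithmetic can be sketched with standalone helpers (hypothetical names, not the kernel's actual implementation, which reads the indices from the shared ring-buffer page):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical model of Hyper-V ring-buffer accounting: the ring is
 * 'size' bytes, and the read/write indices chase each other modulo size.
 */
static uint32_t bytes_to_read(uint32_t size, uint32_t read_idx,
			      uint32_t write_idx)
{
	/* Data available to the consumer. */
	return (write_idx >= read_idx) ? (write_idx - read_idx)
				       : (size - read_idx + write_idx);
}

static uint32_t bytes_to_write(uint32_t size, uint32_t read_idx,
			       uint32_t write_idx)
{
	/* Free space for the producer: total minus what is still unread. */
	return size - bytes_to_read(size, read_idx, write_idx);
}
```

This also shows why `hv_end_read()` only needs the to-read count: after clearing `interrupt_mask`, a non-zero result means the reader raced with a writer and must process the new data.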
+108 -40
drivers/hv/vmbus_drv.c
··· 41 41 #include <linux/ptrace.h> 42 42 #include <linux/screen_info.h> 43 43 #include <linux/kdebug.h> 44 + #include <linux/efi.h> 44 45 #include "hyperv_vmbus.h" 45 46 46 47 static struct acpi_device *hv_acpi_dev; ··· 102 101 .notifier_call = hyperv_panic_event, 103 102 }; 104 103 104 + static const char *fb_mmio_name = "fb_range"; 105 + static struct resource *fb_mmio; 105 106 struct resource *hyperv_mmio; 107 + DEFINE_SEMAPHORE(hyperv_mmio_lock); 106 108 107 109 static int vmbus_exists(void) 108 110 { ··· 712 708 if (dev->event_handler) 713 709 dev->event_handler(dev); 714 710 715 - vmbus_signal_eom(msg); 711 + vmbus_signal_eom(msg, HVMSG_TIMER_EXPIRED); 716 712 } 717 713 718 714 void vmbus_on_msg_dpc(unsigned long data) ··· 724 720 struct vmbus_channel_message_header *hdr; 725 721 struct vmbus_channel_message_table_entry *entry; 726 722 struct onmessage_work_context *ctx; 723 + u32 message_type = msg->header.message_type; 727 724 728 - if (msg->header.message_type == HVMSG_NONE) 725 + if (message_type == HVMSG_NONE) 729 726 /* no msg */ 730 727 return; 731 728 ··· 751 746 entry->message_handler(hdr); 752 747 753 748 msg_handled: 754 - vmbus_signal_eom(msg); 749 + vmbus_signal_eom(msg, message_type); 755 750 } 756 751 757 752 static void vmbus_isr(void) ··· 1053 1048 new_res->end = end; 1054 1049 1055 1050 /* 1056 - * Stick ranges from higher in address space at the front of the list. 1057 1051 * If two ranges are adjacent, merge them. 
1058 1052 */ 1059 1053 do { ··· 1073 1069 break; 1074 1070 } 1075 1071 1076 - if ((*old_res)->end < new_res->start) { 1072 + if ((*old_res)->start > new_res->end) { 1077 1073 new_res->sibling = *old_res; 1078 1074 if (prev_res) 1079 1075 (*prev_res)->sibling = new_res; ··· 1095 1091 struct resource *next_res; 1096 1092 1097 1093 if (hyperv_mmio) { 1094 + if (fb_mmio) { 1095 + __release_region(hyperv_mmio, fb_mmio->start, 1096 + resource_size(fb_mmio)); 1097 + fb_mmio = NULL; 1098 + } 1099 + 1098 1100 for (cur_res = hyperv_mmio; cur_res; cur_res = next_res) { 1099 1101 next_res = cur_res->sibling; 1100 1102 kfree(cur_res); ··· 1108 1098 } 1109 1099 1110 1100 return 0; 1101 + } 1102 + 1103 + static void vmbus_reserve_fb(void) 1104 + { 1105 + int size; 1106 + /* 1107 + * Make a claim for the frame buffer in the resource tree under the 1108 + * first node, which will be the one below 4GB. The length seems to 1109 + * be underreported, particularly in a Generation 1 VM. So start out 1110 + * reserving a larger area and make it smaller until it succeeds. 
1111 + */ 1112 + 1113 + if (screen_info.lfb_base) { 1114 + if (efi_enabled(EFI_BOOT)) 1115 + size = max_t(__u32, screen_info.lfb_size, 0x800000); 1116 + else 1117 + size = max_t(__u32, screen_info.lfb_size, 0x4000000); 1118 + 1119 + for (; !fb_mmio && (size >= 0x100000); size >>= 1) { 1120 + fb_mmio = __request_region(hyperv_mmio, 1121 + screen_info.lfb_base, size, 1122 + fb_mmio_name, 0); 1123 + } 1124 + } 1111 1125 } 1112 1126 1113 1127 /** ··· 1162 1128 resource_size_t size, resource_size_t align, 1163 1129 bool fb_overlap_ok) 1164 1130 { 1165 - struct resource *iter; 1166 - resource_size_t range_min, range_max, start, local_min, local_max; 1131 + struct resource *iter, *shadow; 1132 + resource_size_t range_min, range_max, start; 1167 1133 const char *dev_n = dev_name(&device_obj->device); 1168 - u32 fb_end = screen_info.lfb_base + (screen_info.lfb_size << 1); 1169 - int i; 1134 + int retval; 1135 + 1136 + retval = -ENXIO; 1137 + down(&hyperv_mmio_lock); 1138 + 1139 + /* 1140 + * If overlaps with frame buffers are allowed, then first attempt to 1141 + * make the allocation from within the reserved region. Because it 1142 + * is already reserved, no shadow allocation is necessary. 
1143 + */ 1144 + if (fb_overlap_ok && fb_mmio && !(min > fb_mmio->end) && 1145 + !(max < fb_mmio->start)) { 1146 + 1147 + range_min = fb_mmio->start; 1148 + range_max = fb_mmio->end; 1149 + start = (range_min + align - 1) & ~(align - 1); 1150 + for (; start + size - 1 <= range_max; start += align) { 1151 + *new = request_mem_region_exclusive(start, size, dev_n); 1152 + if (*new) { 1153 + retval = 0; 1154 + goto exit; 1155 + } 1156 + } 1157 + } 1170 1158 1171 1159 for (iter = hyperv_mmio; iter; iter = iter->sibling) { 1172 1160 if ((iter->start >= max) || (iter->end <= min)) ··· 1196 1140 1197 1141 range_min = iter->start; 1198 1142 range_max = iter->end; 1143 + start = (range_min + align - 1) & ~(align - 1); 1144 + for (; start + size - 1 <= range_max; start += align) { 1145 + shadow = __request_region(iter, start, size, NULL, 1146 + IORESOURCE_BUSY); 1147 + if (!shadow) 1148 + continue; 1199 1149 1200 - /* If this range overlaps the frame buffer, split it into 1201 - two tries. */ 1202 - for (i = 0; i < 2; i++) { 1203 - local_min = range_min; 1204 - local_max = range_max; 1205 - if (fb_overlap_ok || (range_min >= fb_end) || 1206 - (range_max <= screen_info.lfb_base)) { 1207 - i++; 1208 - } else { 1209 - if ((range_min <= screen_info.lfb_base) && 1210 - (range_max >= screen_info.lfb_base)) { 1211 - /* 1212 - * The frame buffer is in this window, 1213 - * so trim this into the part that 1214 - * preceeds the frame buffer. 
1215 - */ 1216 - local_max = screen_info.lfb_base - 1; 1217 - range_min = fb_end; 1218 - } else { 1219 - range_min = fb_end; 1220 - continue; 1221 - } 1150 + *new = request_mem_region_exclusive(start, size, dev_n); 1151 + if (*new) { 1152 + shadow->name = (char *)*new; 1153 + retval = 0; 1154 + goto exit; 1222 1155 } 1223 1156 1224 - start = (local_min + align - 1) & ~(align - 1); 1225 - for (; start + size - 1 <= local_max; start += align) { 1226 - *new = request_mem_region_exclusive(start, size, 1227 - dev_n); 1228 - if (*new) 1229 - return 0; 1230 - } 1157 + __release_region(iter, start, size); 1231 1158 } 1232 1159 } 1233 1160 1234 - return -ENXIO; 1161 + exit: 1162 + up(&hyperv_mmio_lock); 1163 + return retval; 1235 1164 } 1236 1165 EXPORT_SYMBOL_GPL(vmbus_allocate_mmio); 1166 + 1167 + /** 1168 + * vmbus_free_mmio() - Free a memory-mapped I/O range. 1169 + * @start: Base address of region to release. 1170 + * @size: Size of the range to be allocated 1171 + * 1172 + * This function releases anything requested by 1173 + * vmbus_mmio_allocate(). 1174 + */ 1175 + void vmbus_free_mmio(resource_size_t start, resource_size_t size) 1176 + { 1177 + struct resource *iter; 1178 + 1179 + down(&hyperv_mmio_lock); 1180 + for (iter = hyperv_mmio; iter; iter = iter->sibling) { 1181 + if ((iter->start >= start + size) || (iter->end <= start)) 1182 + continue; 1183 + 1184 + __release_region(iter, start, size); 1185 + } 1186 + release_mem_region(start, size); 1187 + up(&hyperv_mmio_lock); 1188 + 1189 + } 1190 + EXPORT_SYMBOL_GPL(vmbus_free_mmio); 1237 1191 1238 1192 /** 1239 1193 * vmbus_cpu_number_to_vp_number() - Map CPU to VP. ··· 1285 1219 1286 1220 if (ACPI_FAILURE(result)) 1287 1221 continue; 1288 - if (hyperv_mmio) 1222 + if (hyperv_mmio) { 1223 + vmbus_reserve_fb(); 1289 1224 break; 1225 + } 1290 1226 } 1291 1227 ret_val = 0; 1292 1228
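Editor's note: two idioms recur in the vmbus_drv.c hunks above, the power-of-two alignment step `(start + align - 1) & ~(align - 1)` in `vmbus_allocate_mmio()`, and the "claim a large frame-buffer region, then halve until it succeeds" loop in `vmbus_reserve_fb()`. A minimal model, with `try_reserve()` as a hypothetical stand-in for `__request_region()`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Round addr up to the next multiple of align (align a power of two). */
static uint64_t align_up(uint64_t addr, uint64_t align)
{
	return (addr + align - 1) & ~(align - 1);
}

/* Hypothetical stand-in for __request_region(): the claim succeeds only
 * if [base, base + size) fits below 'limit'. */
static bool try_reserve(uint64_t base, uint64_t size, uint64_t limit)
{
	return base + size <= limit;
}

/* Start with a generous size and halve it until the reservation
 * succeeds or drops below a 1 MiB floor; returns the reserved size,
 * or 0 on failure. */
static uint64_t reserve_fb(uint64_t base, uint64_t size, uint64_t limit)
{
	for (; size >= 0x100000; size >>= 1)
		if (try_reserve(base, size, limit))
			return size;
	return 0;
}
```

The halving strategy matches the hunk's rationale: the reported `screen_info.lfb_size` may be too small, so the driver over-claims and backs off rather than under-reserving.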
+11
drivers/hwtracing/coresight/Kconfig
··· 78 78 programmable ATB replicator sends the ATB trace stream from the 79 79 ETB/ETF to the TPIUi and ETR. 80 80 81 + config CORESIGHT_STM 82 + bool "CoreSight System Trace Macrocell driver" 83 + depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64 84 + select CORESIGHT_LINKS_AND_SINKS 85 + select STM 86 + help 87 + This driver provides support for hardware assisted software 88 + instrumentation based tracing. This is primarily used for 89 + logging useful software events or data coming from various entities 90 + in the system, possibly running different OSs 91 + 81 92 endif
+8 -5
drivers/hwtracing/coresight/Makefile
··· 1 1 # 2 2 # Makefile for CoreSight drivers. 3 3 # 4 - obj-$(CONFIG_CORESIGHT) += coresight.o 4 + obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o 5 5 obj-$(CONFIG_OF) += of_coresight.o 6 - obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o 6 + obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o \ 7 + coresight-tmc-etf.o \ 8 + coresight-tmc-etr.o 7 9 obj-$(CONFIG_CORESIGHT_SINK_TPIU) += coresight-tpiu.o 8 10 obj-$(CONFIG_CORESIGHT_SINK_ETBV10) += coresight-etb10.o 9 11 obj-$(CONFIG_CORESIGHT_LINKS_AND_SINKS) += coresight-funnel.o \ 10 12 coresight-replicator.o 11 13 obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \ 12 - coresight-etm3x-sysfs.o \ 13 - coresight-etm-perf.o 14 - obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o 14 + coresight-etm3x-sysfs.o 15 + obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \ 16 + coresight-etm4x-sysfs.o 15 17 obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o 18 + obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
+39 -64
drivers/hwtracing/coresight/coresight-etb10.c
··· 71 71 #define ETB_FRAME_SIZE_WORDS 4 72 72 73 73 /** 74 - * struct cs_buffer - keep track of a recording session' specifics 75 - * @cur: index of the current buffer 76 - * @nr_pages: max number of pages granted to us 77 - * @offset: offset within the current buffer 78 - * @data_size: how much we collected in this run 79 - * @lost: other than zero if we had a HW buffer wrap around 80 - * @snapshot: is this run in snapshot mode 81 - * @data_pages: a handle the ring buffer 82 - */ 83 - struct cs_buffers { 84 - unsigned int cur; 85 - unsigned int nr_pages; 86 - unsigned long offset; 87 - local_t data_size; 88 - local_t lost; 89 - bool snapshot; 90 - void **data_pages; 91 - }; 92 - 93 - /** 94 74 * struct etb_drvdata - specifics associated to an ETB component 95 75 * @base: memory mapped base address for this component. 96 76 * @dev: the device entity associated to this component. ··· 420 440 u32 mask = ~(ETB_FRAME_SIZE_WORDS - 1); 421 441 422 442 /* The new read pointer must be frame size aligned */ 423 - to_read -= handle->size & mask; 443 + to_read = handle->size & mask; 424 444 /* 425 445 * Move the RAM read pointer up, keeping in mind that 426 446 * everything is in frame size units. 
··· 428 448 read_ptr = (write_ptr + drvdata->buffer_depth) - 429 449 to_read / ETB_FRAME_SIZE_WORDS; 430 450 /* Wrap around if need be*/ 431 - read_ptr &= ~(drvdata->buffer_depth - 1); 451 + if (read_ptr > (drvdata->buffer_depth - 1)) 452 + read_ptr -= drvdata->buffer_depth; 432 453 /* let the decoder know we've skipped ahead */ 433 454 local_inc(&buf->lost); 434 455 } ··· 560 579 .llseek = no_llseek, 561 580 }; 562 581 563 - static ssize_t status_show(struct device *dev, 564 - struct device_attribute *attr, char *buf) 565 - { 566 - unsigned long flags; 567 - u32 etb_rdr, etb_sr, etb_rrp, etb_rwp; 568 - u32 etb_trg, etb_cr, etb_ffsr, etb_ffcr; 569 - struct etb_drvdata *drvdata = dev_get_drvdata(dev->parent); 582 + #define coresight_etb10_simple_func(name, offset) \ 583 + coresight_simple_func(struct etb_drvdata, name, offset) 570 584 571 - pm_runtime_get_sync(drvdata->dev); 572 - spin_lock_irqsave(&drvdata->spinlock, flags); 573 - CS_UNLOCK(drvdata->base); 585 + coresight_etb10_simple_func(rdp, ETB_RAM_DEPTH_REG); 586 + coresight_etb10_simple_func(sts, ETB_STATUS_REG); 587 + coresight_etb10_simple_func(rrp, ETB_RAM_READ_POINTER); 588 + coresight_etb10_simple_func(rwp, ETB_RAM_WRITE_POINTER); 589 + coresight_etb10_simple_func(trg, ETB_TRG); 590 + coresight_etb10_simple_func(ctl, ETB_CTL_REG); 591 + coresight_etb10_simple_func(ffsr, ETB_FFSR); 592 + coresight_etb10_simple_func(ffcr, ETB_FFCR); 574 593 575 - etb_rdr = readl_relaxed(drvdata->base + ETB_RAM_DEPTH_REG); 576 - etb_sr = readl_relaxed(drvdata->base + ETB_STATUS_REG); 577 - etb_rrp = readl_relaxed(drvdata->base + ETB_RAM_READ_POINTER); 578 - etb_rwp = readl_relaxed(drvdata->base + ETB_RAM_WRITE_POINTER); 579 - etb_trg = readl_relaxed(drvdata->base + ETB_TRG); 580 - etb_cr = readl_relaxed(drvdata->base + ETB_CTL_REG); 581 - etb_ffsr = readl_relaxed(drvdata->base + ETB_FFSR); 582 - etb_ffcr = readl_relaxed(drvdata->base + ETB_FFCR); 583 - 584 - CS_LOCK(drvdata->base); 585 - 
spin_unlock_irqrestore(&drvdata->spinlock, flags); 586 - 587 - pm_runtime_put(drvdata->dev); 588 - 589 - return sprintf(buf, 590 - "Depth:\t\t0x%x\n" 591 - "Status:\t\t0x%x\n" 592 - "RAM read ptr:\t0x%x\n" 593 - "RAM wrt ptr:\t0x%x\n" 594 - "Trigger cnt:\t0x%x\n" 595 - "Control:\t0x%x\n" 596 - "Flush status:\t0x%x\n" 597 - "Flush ctrl:\t0x%x\n", 598 - etb_rdr, etb_sr, etb_rrp, etb_rwp, 599 - etb_trg, etb_cr, etb_ffsr, etb_ffcr); 600 - 601 - return -EINVAL; 602 - } 603 - static DEVICE_ATTR_RO(status); 594 + static struct attribute *coresight_etb_mgmt_attrs[] = { 595 + &dev_attr_rdp.attr, 596 + &dev_attr_sts.attr, 597 + &dev_attr_rrp.attr, 598 + &dev_attr_rwp.attr, 599 + &dev_attr_trg.attr, 600 + &dev_attr_ctl.attr, 601 + &dev_attr_ffsr.attr, 602 + &dev_attr_ffcr.attr, 603 + NULL, 604 + }; 604 605 605 606 static ssize_t trigger_cntr_show(struct device *dev, 606 607 struct device_attribute *attr, char *buf) ··· 612 649 613 650 static struct attribute *coresight_etb_attrs[] = { 614 651 &dev_attr_trigger_cntr.attr, 615 - &dev_attr_status.attr, 616 652 NULL, 617 653 }; 618 - ATTRIBUTE_GROUPS(coresight_etb); 654 + 655 + static const struct attribute_group coresight_etb_group = { 656 + .attrs = coresight_etb_attrs, 657 + }; 658 + 659 + static const struct attribute_group coresight_etb_mgmt_group = { 660 + .attrs = coresight_etb_mgmt_attrs, 661 + .name = "mgmt", 662 + }; 663 + 664 + const struct attribute_group *coresight_etb_groups[] = { 665 + &coresight_etb_group, 666 + &coresight_etb_mgmt_group, 667 + NULL, 668 + }; 619 669 620 670 static int etb_probe(struct amba_device *adev, const struct amba_id *id) 621 671 { ··· 705 729 if (ret) 706 730 goto err_misc_register; 707 731 708 - dev_info(dev, "ETB initialized\n"); 709 732 return 0; 710 733 711 734 err_misc_register:
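Editor's note: the etb10.c hunk above changes the RAM read-pointer wrap from a mask (`read_ptr &= ~(depth - 1)`) to a conditional subtraction, which remains correct when the buffer depth is not a power of two. The rewind can be modelled in isolation (hypothetical helper working in frame units, not the driver's exact code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * After a hardware wrap-around, back the read pointer up from the write
 * pointer by the number of frames we intend to read. Adding 'depth'
 * first keeps the subtraction non-negative; the conditional subtraction
 * then wraps the result into [0, depth) for any depth.
 */
static uint32_t rewind_read_ptr(uint32_t write_ptr, uint32_t depth,
				uint32_t frames_to_read)
{
	uint32_t read_ptr = (write_ptr + depth) - frames_to_read;

	/* A mask-based wrap would silently corrupt the pointer here
	 * whenever depth is not a power of two. */
	if (read_ptr > depth - 1)
		read_ptr -= depth;
	return read_ptr;
}
```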
+12 -19
drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
··· 1221 1221 NULL, 1222 1222 }; 1223 1223 1224 - #define coresight_simple_func(name, offset) \ 1225 - static ssize_t name##_show(struct device *_dev, \ 1226 - struct device_attribute *attr, char *buf) \ 1227 - { \ 1228 - struct etm_drvdata *drvdata = dev_get_drvdata(_dev->parent); \ 1229 - return scnprintf(buf, PAGE_SIZE, "0x%x\n", \ 1230 - readl_relaxed(drvdata->base + offset)); \ 1231 - } \ 1232 - DEVICE_ATTR_RO(name) 1224 + #define coresight_etm3x_simple_func(name, offset) \ 1225 + coresight_simple_func(struct etm_drvdata, name, offset) 1233 1226 1234 - coresight_simple_func(etmccr, ETMCCR); 1235 - coresight_simple_func(etmccer, ETMCCER); 1236 - coresight_simple_func(etmscr, ETMSCR); 1237 - coresight_simple_func(etmidr, ETMIDR); 1238 - coresight_simple_func(etmcr, ETMCR); 1239 - coresight_simple_func(etmtraceidr, ETMTRACEIDR); 1240 - coresight_simple_func(etmteevr, ETMTEEVR); 1241 - coresight_simple_func(etmtssvr, ETMTSSCR); 1242 - coresight_simple_func(etmtecr1, ETMTECR1); 1243 - coresight_simple_func(etmtecr2, ETMTECR2); 1227 + coresight_etm3x_simple_func(etmccr, ETMCCR); 1228 + coresight_etm3x_simple_func(etmccer, ETMCCER); 1229 + coresight_etm3x_simple_func(etmscr, ETMSCR); 1230 + coresight_etm3x_simple_func(etmidr, ETMIDR); 1231 + coresight_etm3x_simple_func(etmcr, ETMCR); 1232 + coresight_etm3x_simple_func(etmtraceidr, ETMTRACEIDR); 1233 + coresight_etm3x_simple_func(etmteevr, ETMTEEVR); 1234 + coresight_etm3x_simple_func(etmtssvr, ETMTSSCR); 1235 + coresight_etm3x_simple_func(etmtecr1, ETMTECR1); 1236 + coresight_etm3x_simple_func(etmtecr2, ETMTECR2); 1244 1237 1245 1238 static struct attribute *coresight_etm_mgmt_attrs[] = { 1246 1239 &dev_attr_etmccr.attr,
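Editor's note: the etm3x-sysfs.c hunk above turns the open-coded `coresight_simple_func()` macro into a shared, struct-parameterized version so etb10 and etm4x can reuse it. The generating idiom, one macro invocation stamping out one read-only `_show` accessor per register offset, can be modelled outside the kernel with an in-memory register file (hypothetical names, no device model):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical register file standing in for the memory-mapped device. */
static uint32_t regs[0x100];

/*
 * Model of the coresight_simple_func() idiom: each invocation generates
 * a read-only accessor named <name>_show for the given byte offset.
 */
#define simple_reg_func(name, offset)					\
	static int name##_show(char *buf, size_t len)			\
	{								\
		return snprintf(buf, len, "0x%x\n",			\
				(unsigned)regs[(offset) / 4]);		\
	}

/* Offsets mirror the ETB ones used above: RDP at 0x004, STS at 0x00C. */
simple_reg_func(rdp, 0x004)
simple_reg_func(sts, 0x00C)
```

The payoff is visible in the diff itself: the per-driver wrappers (`coresight_etm3x_simple_func`, `coresight_etb10_simple_func`) only supply the drvdata type, and each register becomes a one-line declaration instead of a hand-written show function.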
+2126
drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
··· 1 + /* 2 + * Copyright(C) 2015 Linaro Limited. All rights reserved. 3 + * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms of the GNU General Public License version 2 as published by 7 + * the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + * You should have received a copy of the GNU General Public License along with 15 + * this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #include <linux/pm_runtime.h> 19 + #include <linux/sysfs.h> 20 + #include "coresight-etm4x.h" 21 + 22 + static int etm4_set_mode_exclude(struct etmv4_drvdata *drvdata, bool exclude) 23 + { 24 + u8 idx; 25 + struct etmv4_config *config = &drvdata->config; 26 + 27 + idx = config->addr_idx; 28 + 29 + /* 30 + * TRCACATRn.TYPE bit[1:0]: type of comparison 31 + * the trace unit performs 32 + */ 33 + if (BMVAL(config->addr_acc[idx], 0, 1) == ETM_INSTR_ADDR) { 34 + if (idx % 2 != 0) 35 + return -EINVAL; 36 + 37 + /* 38 + * We are performing instruction address comparison. Set the 39 + * relevant bit of ViewInst Include/Exclude Control register 40 + * for corresponding address comparator pair. 
41 + */ 42 + if (config->addr_type[idx] != ETM_ADDR_TYPE_RANGE || 43 + config->addr_type[idx + 1] != ETM_ADDR_TYPE_RANGE) 44 + return -EINVAL; 45 + 46 + if (exclude == true) { 47 + /* 48 + * Set exclude bit and unset the include bit 49 + * corresponding to comparator pair 50 + */ 51 + config->viiectlr |= BIT(idx / 2 + 16); 52 + config->viiectlr &= ~BIT(idx / 2); 53 + } else { 54 + /* 55 + * Set include bit and unset exclude bit 56 + * corresponding to comparator pair 57 + */ 58 + config->viiectlr |= BIT(idx / 2); 59 + config->viiectlr &= ~BIT(idx / 2 + 16); 60 + } 61 + } 62 + return 0; 63 + } 64 + 65 + static ssize_t nr_pe_cmp_show(struct device *dev, 66 + struct device_attribute *attr, 67 + char *buf) 68 + { 69 + unsigned long val; 70 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 71 + 72 + val = drvdata->nr_pe_cmp; 73 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 74 + } 75 + static DEVICE_ATTR_RO(nr_pe_cmp); 76 + 77 + static ssize_t nr_addr_cmp_show(struct device *dev, 78 + struct device_attribute *attr, 79 + char *buf) 80 + { 81 + unsigned long val; 82 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 83 + 84 + val = drvdata->nr_addr_cmp; 85 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 86 + } 87 + static DEVICE_ATTR_RO(nr_addr_cmp); 88 + 89 + static ssize_t nr_cntr_show(struct device *dev, 90 + struct device_attribute *attr, 91 + char *buf) 92 + { 93 + unsigned long val; 94 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 95 + 96 + val = drvdata->nr_cntr; 97 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 98 + } 99 + static DEVICE_ATTR_RO(nr_cntr); 100 + 101 + static ssize_t nr_ext_inp_show(struct device *dev, 102 + struct device_attribute *attr, 103 + char *buf) 104 + { 105 + unsigned long val; 106 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 107 + 108 + val = drvdata->nr_ext_inp; 109 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 110 + } 111 + static DEVICE_ATTR_RO(nr_ext_inp); 
112 + 113 + static ssize_t numcidc_show(struct device *dev, 114 + struct device_attribute *attr, 115 + char *buf) 116 + { 117 + unsigned long val; 118 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 119 + 120 + val = drvdata->numcidc; 121 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 122 + } 123 + static DEVICE_ATTR_RO(numcidc); 124 + 125 + static ssize_t numvmidc_show(struct device *dev, 126 + struct device_attribute *attr, 127 + char *buf) 128 + { 129 + unsigned long val; 130 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 131 + 132 + val = drvdata->numvmidc; 133 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 134 + } 135 + static DEVICE_ATTR_RO(numvmidc); 136 + 137 + static ssize_t nrseqstate_show(struct device *dev, 138 + struct device_attribute *attr, 139 + char *buf) 140 + { 141 + unsigned long val; 142 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 143 + 144 + val = drvdata->nrseqstate; 145 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 146 + } 147 + static DEVICE_ATTR_RO(nrseqstate); 148 + 149 + static ssize_t nr_resource_show(struct device *dev, 150 + struct device_attribute *attr, 151 + char *buf) 152 + { 153 + unsigned long val; 154 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 155 + 156 + val = drvdata->nr_resource; 157 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 158 + } 159 + static DEVICE_ATTR_RO(nr_resource); 160 + 161 + static ssize_t nr_ss_cmp_show(struct device *dev, 162 + struct device_attribute *attr, 163 + char *buf) 164 + { 165 + unsigned long val; 166 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 167 + 168 + val = drvdata->nr_ss_cmp; 169 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 170 + } 171 + static DEVICE_ATTR_RO(nr_ss_cmp); 172 + 173 + static ssize_t reset_store(struct device *dev, 174 + struct device_attribute *attr, 175 + const char *buf, size_t size) 176 + { 177 + int i; 178 + unsigned long val; 179 + struct etmv4_drvdata 
*drvdata = dev_get_drvdata(dev->parent); 180 + struct etmv4_config *config = &drvdata->config; 181 + 182 + if (kstrtoul(buf, 16, &val)) 183 + return -EINVAL; 184 + 185 + spin_lock(&drvdata->spinlock); 186 + if (val) 187 + config->mode = 0x0; 188 + 189 + /* Disable data tracing: do not trace load and store data transfers */ 190 + config->mode &= ~(ETM_MODE_LOAD | ETM_MODE_STORE); 191 + config->cfg &= ~(BIT(1) | BIT(2)); 192 + 193 + /* Disable data value and data address tracing */ 194 + config->mode &= ~(ETM_MODE_DATA_TRACE_ADDR | 195 + ETM_MODE_DATA_TRACE_VAL); 196 + config->cfg &= ~(BIT(16) | BIT(17)); 197 + 198 + /* Disable all events tracing */ 199 + config->eventctrl0 = 0x0; 200 + config->eventctrl1 = 0x0; 201 + 202 + /* Disable timestamp event */ 203 + config->ts_ctrl = 0x0; 204 + 205 + /* Disable stalling */ 206 + config->stall_ctrl = 0x0; 207 + 208 + /* Reset trace synchronization period to 2^8 = 256 bytes*/ 209 + if (drvdata->syncpr == false) 210 + config->syncfreq = 0x8; 211 + 212 + /* 213 + * Enable ViewInst to trace everything with start-stop logic in 214 + * started state. ARM recommends start-stop logic is set before 215 + * each trace run. 
216 + */ 217 + config->vinst_ctrl |= BIT(0); 218 + if (drvdata->nr_addr_cmp == true) { 219 + config->mode |= ETM_MODE_VIEWINST_STARTSTOP; 220 + /* SSSTATUS, bit[9] */ 221 + config->vinst_ctrl |= BIT(9); 222 + } 223 + 224 + /* No address range filtering for ViewInst */ 225 + config->viiectlr = 0x0; 226 + 227 + /* No start-stop filtering for ViewInst */ 228 + config->vissctlr = 0x0; 229 + 230 + /* Disable seq events */ 231 + for (i = 0; i < drvdata->nrseqstate-1; i++) 232 + config->seq_ctrl[i] = 0x0; 233 + config->seq_rst = 0x0; 234 + config->seq_state = 0x0; 235 + 236 + /* Disable external input events */ 237 + config->ext_inp = 0x0; 238 + 239 + config->cntr_idx = 0x0; 240 + for (i = 0; i < drvdata->nr_cntr; i++) { 241 + config->cntrldvr[i] = 0x0; 242 + config->cntr_ctrl[i] = 0x0; 243 + config->cntr_val[i] = 0x0; 244 + } 245 + 246 + config->res_idx = 0x0; 247 + for (i = 0; i < drvdata->nr_resource; i++) 248 + config->res_ctrl[i] = 0x0; 249 + 250 + for (i = 0; i < drvdata->nr_ss_cmp; i++) { 251 + config->ss_ctrl[i] = 0x0; 252 + config->ss_pe_cmp[i] = 0x0; 253 + } 254 + 255 + config->addr_idx = 0x0; 256 + for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) { 257 + config->addr_val[i] = 0x0; 258 + config->addr_acc[i] = 0x0; 259 + config->addr_type[i] = ETM_ADDR_TYPE_NONE; 260 + } 261 + 262 + config->ctxid_idx = 0x0; 263 + for (i = 0; i < drvdata->numcidc; i++) { 264 + config->ctxid_pid[i] = 0x0; 265 + config->ctxid_vpid[i] = 0x0; 266 + } 267 + 268 + config->ctxid_mask0 = 0x0; 269 + config->ctxid_mask1 = 0x0; 270 + 271 + config->vmid_idx = 0x0; 272 + for (i = 0; i < drvdata->numvmidc; i++) 273 + config->vmid_val[i] = 0x0; 274 + config->vmid_mask0 = 0x0; 275 + config->vmid_mask1 = 0x0; 276 + 277 + drvdata->trcid = drvdata->cpu + 1; 278 + 279 + spin_unlock(&drvdata->spinlock); 280 + 281 + return size; 282 + } 283 + static DEVICE_ATTR_WO(reset); 284 + 285 + static ssize_t mode_show(struct device *dev, 286 + struct device_attribute *attr, 287 + char *buf) 288 + { 289 + unsigned 
long val; 290 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 291 + struct etmv4_config *config = &drvdata->config; 292 + 293 + val = config->mode; 294 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 295 + } 296 + 297 + static ssize_t mode_store(struct device *dev, 298 + struct device_attribute *attr, 299 + const char *buf, size_t size) 300 + { 301 + unsigned long val, mode; 302 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 303 + struct etmv4_config *config = &drvdata->config; 304 + 305 + if (kstrtoul(buf, 16, &val)) 306 + return -EINVAL; 307 + 308 + spin_lock(&drvdata->spinlock); 309 + config->mode = val & ETMv4_MODE_ALL; 310 + 311 + if (config->mode & ETM_MODE_EXCLUDE) 312 + etm4_set_mode_exclude(drvdata, true); 313 + else 314 + etm4_set_mode_exclude(drvdata, false); 315 + 316 + if (drvdata->instrp0 == true) { 317 + /* start by clearing instruction P0 field */ 318 + config->cfg &= ~(BIT(1) | BIT(2)); 319 + if (config->mode & ETM_MODE_LOAD) 320 + /* 0b01 Trace load instructions as P0 instructions */ 321 + config->cfg |= BIT(1); 322 + if (config->mode & ETM_MODE_STORE) 323 + /* 0b10 Trace store instructions as P0 instructions */ 324 + config->cfg |= BIT(2); 325 + if (config->mode & ETM_MODE_LOAD_STORE) 326 + /* 327 + * 0b11 Trace load and store instructions 328 + * as P0 instructions 329 + */ 330 + config->cfg |= BIT(1) | BIT(2); 331 + } 332 + 333 + /* bit[3], Branch broadcast mode */ 334 + if ((config->mode & ETM_MODE_BB) && (drvdata->trcbb == true)) 335 + config->cfg |= BIT(3); 336 + else 337 + config->cfg &= ~BIT(3); 338 + 339 + /* bit[4], Cycle counting instruction trace bit */ 340 + if ((config->mode & ETMv4_MODE_CYCACC) && 341 + (drvdata->trccci == true)) 342 + config->cfg |= BIT(4); 343 + else 344 + config->cfg &= ~BIT(4); 345 + 346 + /* bit[6], Context ID tracing bit */ 347 + if ((config->mode & ETMv4_MODE_CTXID) && (drvdata->ctxid_size)) 348 + config->cfg |= BIT(6); 349 + else 350 + config->cfg &= ~BIT(6); 351 + 352 + 
if ((config->mode & ETM_MODE_VMID) && (drvdata->vmid_size)) 353 + config->cfg |= BIT(7); 354 + else 355 + config->cfg &= ~BIT(7); 356 + 357 + /* bits[10:8], Conditional instruction tracing bit */ 358 + mode = ETM_MODE_COND(config->mode); 359 + if (drvdata->trccond == true) { 360 + config->cfg &= ~(BIT(8) | BIT(9) | BIT(10)); 361 + config->cfg |= mode << 8; 362 + } 363 + 364 + /* bit[11], Global timestamp tracing bit */ 365 + if ((config->mode & ETMv4_MODE_TIMESTAMP) && (drvdata->ts_size)) 366 + config->cfg |= BIT(11); 367 + else 368 + config->cfg &= ~BIT(11); 369 + 370 + /* bit[12], Return stack enable bit */ 371 + if ((config->mode & ETM_MODE_RETURNSTACK) && 372 + (drvdata->retstack == true)) 373 + config->cfg |= BIT(12); 374 + else 375 + config->cfg &= ~BIT(12); 376 + 377 + /* bits[14:13], Q element enable field */ 378 + mode = ETM_MODE_QELEM(config->mode); 379 + /* start by clearing QE bits */ 380 + config->cfg &= ~(BIT(13) | BIT(14)); 381 + /* if supported, Q elements with instruction counts are enabled */ 382 + if ((mode & BIT(0)) && (drvdata->q_support & BIT(0))) 383 + config->cfg |= BIT(13); 384 + /* 385 + * if supported, Q elements with and without instruction 386 + * counts are enabled 387 + */ 388 + if ((mode & BIT(1)) && (drvdata->q_support & BIT(1))) 389 + config->cfg |= BIT(14); 390 + 391 + /* bit[11], AMBA Trace Bus (ATB) trigger enable bit */ 392 + if ((config->mode & ETM_MODE_ATB_TRIGGER) && 393 + (drvdata->atbtrig == true)) 394 + config->eventctrl1 |= BIT(11); 395 + else 396 + config->eventctrl1 &= ~BIT(11); 397 + 398 + /* bit[12], Low-power state behavior override bit */ 399 + if ((config->mode & ETM_MODE_LPOVERRIDE) && 400 + (drvdata->lpoverride == true)) 401 + config->eventctrl1 |= BIT(12); 402 + else 403 + config->eventctrl1 &= ~BIT(12); 404 + 405 + /* bit[8], Instruction stall bit */ 406 + if (config->mode & ETM_MODE_ISTALL_EN) 407 + config->stall_ctrl |= BIT(8); 408 + else 409 + config->stall_ctrl &= ~BIT(8); 410 + 411 + /* bit[10], 
Prioritize instruction trace bit */ 412 + if (config->mode & ETM_MODE_INSTPRIO) 413 + config->stall_ctrl |= BIT(10); 414 + else 415 + config->stall_ctrl &= ~BIT(10); 416 + 417 + /* bit[13], Trace overflow prevention bit */ 418 + if ((config->mode & ETM_MODE_NOOVERFLOW) && 419 + (drvdata->nooverflow == true)) 420 + config->stall_ctrl |= BIT(13); 421 + else 422 + config->stall_ctrl &= ~BIT(13); 423 + 424 + /* bit[9] Start/stop logic control bit */ 425 + if (config->mode & ETM_MODE_VIEWINST_STARTSTOP) 426 + config->vinst_ctrl |= BIT(9); 427 + else 428 + config->vinst_ctrl &= ~BIT(9); 429 + 430 + /* bit[10], Whether a trace unit must trace a Reset exception */ 431 + if (config->mode & ETM_MODE_TRACE_RESET) 432 + config->vinst_ctrl |= BIT(10); 433 + else 434 + config->vinst_ctrl &= ~BIT(10); 435 + 436 + /* bit[11], Whether a trace unit must trace a system error exception */ 437 + if ((config->mode & ETM_MODE_TRACE_ERR) && 438 + (drvdata->trc_error == true)) 439 + config->vinst_ctrl |= BIT(11); 440 + else 441 + config->vinst_ctrl &= ~BIT(11); 442 + 443 + if (config->mode & (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER)) 444 + etm4_config_trace_mode(config); 445 + 446 + spin_unlock(&drvdata->spinlock); 447 + 448 + return size; 449 + } 450 + static DEVICE_ATTR_RW(mode); 451 + 452 + static ssize_t pe_show(struct device *dev, 453 + struct device_attribute *attr, 454 + char *buf) 455 + { 456 + unsigned long val; 457 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 458 + struct etmv4_config *config = &drvdata->config; 459 + 460 + val = config->pe_sel; 461 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 462 + } 463 + 464 + static ssize_t pe_store(struct device *dev, 465 + struct device_attribute *attr, 466 + const char *buf, size_t size) 467 + { 468 + unsigned long val; 469 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 470 + struct etmv4_config *config = &drvdata->config; 471 + 472 + if (kstrtoul(buf, 16, &val)) 473 + return -EINVAL; 474 + 475 + 
spin_lock(&drvdata->spinlock); 476 + if (val > drvdata->nr_pe) { 477 + spin_unlock(&drvdata->spinlock); 478 + return -EINVAL; 479 + } 480 + 481 + config->pe_sel = val; 482 + spin_unlock(&drvdata->spinlock); 483 + return size; 484 + } 485 + static DEVICE_ATTR_RW(pe); 486 + 487 + static ssize_t event_show(struct device *dev, 488 + struct device_attribute *attr, 489 + char *buf) 490 + { 491 + unsigned long val; 492 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 493 + struct etmv4_config *config = &drvdata->config; 494 + 495 + val = config->eventctrl0; 496 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 497 + } 498 + 499 + static ssize_t event_store(struct device *dev, 500 + struct device_attribute *attr, 501 + const char *buf, size_t size) 502 + { 503 + unsigned long val; 504 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 505 + struct etmv4_config *config = &drvdata->config; 506 + 507 + if (kstrtoul(buf, 16, &val)) 508 + return -EINVAL; 509 + 510 + spin_lock(&drvdata->spinlock); 511 + switch (drvdata->nr_event) { 512 + case 0x0: 513 + /* EVENT0, bits[7:0] */ 514 + config->eventctrl0 = val & 0xFF; 515 + break; 516 + case 0x1: 517 + /* EVENT1, bits[15:8] */ 518 + config->eventctrl0 = val & 0xFFFF; 519 + break; 520 + case 0x2: 521 + /* EVENT2, bits[23:16] */ 522 + config->eventctrl0 = val & 0xFFFFFF; 523 + break; 524 + case 0x3: 525 + /* EVENT3, bits[31:24] */ 526 + config->eventctrl0 = val; 527 + break; 528 + default: 529 + break; 530 + } 531 + spin_unlock(&drvdata->spinlock); 532 + return size; 533 + } 534 + static DEVICE_ATTR_RW(event); 535 + 536 + static ssize_t event_instren_show(struct device *dev, 537 + struct device_attribute *attr, 538 + char *buf) 539 + { 540 + unsigned long val; 541 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 542 + struct etmv4_config *config = &drvdata->config; 543 + 544 + val = BMVAL(config->eventctrl1, 0, 3); 545 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 546 + } 547 + 548 + 
static ssize_t event_instren_store(struct device *dev, 549 + struct device_attribute *attr, 550 + const char *buf, size_t size) 551 + { 552 + unsigned long val; 553 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 554 + struct etmv4_config *config = &drvdata->config; 555 + 556 + if (kstrtoul(buf, 16, &val)) 557 + return -EINVAL; 558 + 559 + spin_lock(&drvdata->spinlock); 560 + /* start by clearing all instruction event enable bits */ 561 + config->eventctrl1 &= ~(BIT(0) | BIT(1) | BIT(2) | BIT(3)); 562 + switch (drvdata->nr_event) { 563 + case 0x0: 564 + /* generate Event element for event 1 */ 565 + config->eventctrl1 |= val & BIT(1); 566 + break; 567 + case 0x1: 568 + /* generate Event element for event 1 and 2 */ 569 + config->eventctrl1 |= val & (BIT(0) | BIT(1)); 570 + break; 571 + case 0x2: 572 + /* generate Event element for event 1, 2 and 3 */ 573 + config->eventctrl1 |= val & (BIT(0) | BIT(1) | BIT(2)); 574 + break; 575 + case 0x3: 576 + /* generate Event element for all 4 events */ 577 + config->eventctrl1 |= val & 0xF; 578 + break; 579 + default: 580 + break; 581 + } 582 + spin_unlock(&drvdata->spinlock); 583 + return size; 584 + } 585 + static DEVICE_ATTR_RW(event_instren); 586 + 587 + static ssize_t event_ts_show(struct device *dev, 588 + struct device_attribute *attr, 589 + char *buf) 590 + { 591 + unsigned long val; 592 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 593 + struct etmv4_config *config = &drvdata->config; 594 + 595 + val = config->ts_ctrl; 596 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 597 + } 598 + 599 + static ssize_t event_ts_store(struct device *dev, 600 + struct device_attribute *attr, 601 + const char *buf, size_t size) 602 + { 603 + unsigned long val; 604 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 605 + struct etmv4_config *config = &drvdata->config; 606 + 607 + if (kstrtoul(buf, 16, &val)) 608 + return -EINVAL; 609 + if (!drvdata->ts_size) 610 + return -EINVAL; 611 + 
612 + config->ts_ctrl = val & ETMv4_EVENT_MASK; 613 + return size; 614 + } 615 + static DEVICE_ATTR_RW(event_ts); 616 + 617 + static ssize_t syncfreq_show(struct device *dev, 618 + struct device_attribute *attr, 619 + char *buf) 620 + { 621 + unsigned long val; 622 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 623 + struct etmv4_config *config = &drvdata->config; 624 + 625 + val = config->syncfreq; 626 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 627 + } 628 + 629 + static ssize_t syncfreq_store(struct device *dev, 630 + struct device_attribute *attr, 631 + const char *buf, size_t size) 632 + { 633 + unsigned long val; 634 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 635 + struct etmv4_config *config = &drvdata->config; 636 + 637 + if (kstrtoul(buf, 16, &val)) 638 + return -EINVAL; 639 + if (drvdata->syncpr == true) 640 + return -EINVAL; 641 + 642 + config->syncfreq = val & ETMv4_SYNC_MASK; 643 + return size; 644 + } 645 + static DEVICE_ATTR_RW(syncfreq); 646 + 647 + static ssize_t cyc_threshold_show(struct device *dev, 648 + struct device_attribute *attr, 649 + char *buf) 650 + { 651 + unsigned long val; 652 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 653 + struct etmv4_config *config = &drvdata->config; 654 + 655 + val = config->ccctlr; 656 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 657 + } 658 + 659 + static ssize_t cyc_threshold_store(struct device *dev, 660 + struct device_attribute *attr, 661 + const char *buf, size_t size) 662 + { 663 + unsigned long val; 664 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 665 + struct etmv4_config *config = &drvdata->config; 666 + 667 + if (kstrtoul(buf, 16, &val)) 668 + return -EINVAL; 669 + if (val < drvdata->ccitmin) 670 + return -EINVAL; 671 + 672 + config->ccctlr = val & ETM_CYC_THRESHOLD_MASK; 673 + return size; 674 + } 675 + static DEVICE_ATTR_RW(cyc_threshold); 676 + 677 + static ssize_t bb_ctrl_show(struct device *dev, 678 + 
struct device_attribute *attr, 679 + char *buf) 680 + { 681 + unsigned long val; 682 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 683 + struct etmv4_config *config = &drvdata->config; 684 + 685 + val = config->bb_ctrl; 686 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 687 + } 688 + 689 + static ssize_t bb_ctrl_store(struct device *dev, 690 + struct device_attribute *attr, 691 + const char *buf, size_t size) 692 + { 693 + unsigned long val; 694 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 695 + struct etmv4_config *config = &drvdata->config; 696 + 697 + if (kstrtoul(buf, 16, &val)) 698 + return -EINVAL; 699 + if (drvdata->trcbb == false) 700 + return -EINVAL; 701 + if (!drvdata->nr_addr_cmp) 702 + return -EINVAL; 703 + /* 704 + * Bit[7:0] selects which address range comparator is used for 705 + * branch broadcast control. 706 + */ 707 + if (BMVAL(val, 0, 7) > drvdata->nr_addr_cmp) 708 + return -EINVAL; 709 + 710 + config->bb_ctrl = val; 711 + return size; 712 + } 713 + static DEVICE_ATTR_RW(bb_ctrl); 714 + 715 + static ssize_t event_vinst_show(struct device *dev, 716 + struct device_attribute *attr, 717 + char *buf) 718 + { 719 + unsigned long val; 720 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 721 + struct etmv4_config *config = &drvdata->config; 722 + 723 + val = config->vinst_ctrl & ETMv4_EVENT_MASK; 724 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 725 + } 726 + 727 + static ssize_t event_vinst_store(struct device *dev, 728 + struct device_attribute *attr, 729 + const char *buf, size_t size) 730 + { 731 + unsigned long val; 732 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 733 + struct etmv4_config *config = &drvdata->config; 734 + 735 + if (kstrtoul(buf, 16, &val)) 736 + return -EINVAL; 737 + 738 + spin_lock(&drvdata->spinlock); 739 + val &= ETMv4_EVENT_MASK; 740 + config->vinst_ctrl &= ~ETMv4_EVENT_MASK; 741 + config->vinst_ctrl |= val; 742 + spin_unlock(&drvdata->spinlock); 
743 + return size; 744 + } 745 + static DEVICE_ATTR_RW(event_vinst); 746 + 747 + static ssize_t s_exlevel_vinst_show(struct device *dev, 748 + struct device_attribute *attr, 749 + char *buf) 750 + { 751 + unsigned long val; 752 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 753 + struct etmv4_config *config = &drvdata->config; 754 + 755 + val = BMVAL(config->vinst_ctrl, 16, 19); 756 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 757 + } 758 + 759 + static ssize_t s_exlevel_vinst_store(struct device *dev, 760 + struct device_attribute *attr, 761 + const char *buf, size_t size) 762 + { 763 + unsigned long val; 764 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 765 + struct etmv4_config *config = &drvdata->config; 766 + 767 + if (kstrtoul(buf, 16, &val)) 768 + return -EINVAL; 769 + 770 + spin_lock(&drvdata->spinlock); 771 + /* clear all EXLEVEL_S bits (bit[18] is never implemented) */ 772 + config->vinst_ctrl &= ~(BIT(16) | BIT(17) | BIT(19)); 773 + /* enable instruction tracing for corresponding exception level */ 774 + val &= drvdata->s_ex_level; 775 + config->vinst_ctrl |= (val << 16); 776 + spin_unlock(&drvdata->spinlock); 777 + return size; 778 + } 779 + static DEVICE_ATTR_RW(s_exlevel_vinst); 780 + 781 + static ssize_t ns_exlevel_vinst_show(struct device *dev, 782 + struct device_attribute *attr, 783 + char *buf) 784 + { 785 + unsigned long val; 786 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 787 + struct etmv4_config *config = &drvdata->config; 788 + 789 + /* EXLEVEL_NS, bits[23:20] */ 790 + val = BMVAL(config->vinst_ctrl, 20, 23); 791 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 792 + } 793 + 794 + static ssize_t ns_exlevel_vinst_store(struct device *dev, 795 + struct device_attribute *attr, 796 + const char *buf, size_t size) 797 + { 798 + unsigned long val; 799 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 800 + struct etmv4_config *config = &drvdata->config; 801 + 802 + if 
(kstrtoul(buf, 16, &val)) 803 + return -EINVAL; 804 + 805 + spin_lock(&drvdata->spinlock); 806 + /* clear EXLEVEL_NS bits (bit[23] is never implemented) */ 807 + config->vinst_ctrl &= ~(BIT(20) | BIT(21) | BIT(22)); 808 + /* enable instruction tracing for corresponding exception level */ 809 + val &= drvdata->ns_ex_level; 810 + config->vinst_ctrl |= (val << 20); 811 + spin_unlock(&drvdata->spinlock); 812 + return size; 813 + } 814 + static DEVICE_ATTR_RW(ns_exlevel_vinst); 815 + 816 + static ssize_t addr_idx_show(struct device *dev, 817 + struct device_attribute *attr, 818 + char *buf) 819 + { 820 + unsigned long val; 821 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 822 + struct etmv4_config *config = &drvdata->config; 823 + 824 + val = config->addr_idx; 825 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 826 + } 827 + 828 + static ssize_t addr_idx_store(struct device *dev, 829 + struct device_attribute *attr, 830 + const char *buf, size_t size) 831 + { 832 + unsigned long val; 833 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 834 + struct etmv4_config *config = &drvdata->config; 835 + 836 + if (kstrtoul(buf, 16, &val)) 837 + return -EINVAL; 838 + if (val >= drvdata->nr_addr_cmp * 2) 839 + return -EINVAL; 840 + 841 + /* 842 + * Use spinlock to ensure index doesn't change while it gets 843 + * dereferenced multiple times within a spinlock block elsewhere.
844 + */ 845 + spin_lock(&drvdata->spinlock); 846 + config->addr_idx = val; 847 + spin_unlock(&drvdata->spinlock); 848 + return size; 849 + } 850 + static DEVICE_ATTR_RW(addr_idx); 851 + 852 + static ssize_t addr_instdatatype_show(struct device *dev, 853 + struct device_attribute *attr, 854 + char *buf) 855 + { 856 + ssize_t len; 857 + u8 val, idx; 858 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 859 + struct etmv4_config *config = &drvdata->config; 860 + 861 + spin_lock(&drvdata->spinlock); 862 + idx = config->addr_idx; 863 + val = BMVAL(config->addr_acc[idx], 0, 1); 864 + len = scnprintf(buf, PAGE_SIZE, "%s\n", 865 + val == ETM_INSTR_ADDR ? "instr" : 866 + (val == ETM_DATA_LOAD_ADDR ? "data_load" : 867 + (val == ETM_DATA_STORE_ADDR ? "data_store" : 868 + "data_load_store"))); 869 + spin_unlock(&drvdata->spinlock); 870 + return len; 871 + } 872 + 873 + static ssize_t addr_instdatatype_store(struct device *dev, 874 + struct device_attribute *attr, 875 + const char *buf, size_t size) 876 + { 877 + u8 idx; 878 + char str[20] = ""; 879 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 880 + struct etmv4_config *config = &drvdata->config; 881 + 882 + if (strlen(buf) >= 20) 883 + return -EINVAL; 884 + if (sscanf(buf, "%s", str) != 1) 885 + return -EINVAL; 886 + 887 + spin_lock(&drvdata->spinlock); 888 + idx = config->addr_idx; 889 + if (!strcmp(str, "instr")) 890 + /* TYPE, bits[1:0] */ 891 + config->addr_acc[idx] &= ~(BIT(0) | BIT(1)); 892 + 893 + spin_unlock(&drvdata->spinlock); 894 + return size; 895 + } 896 + static DEVICE_ATTR_RW(addr_instdatatype); 897 + 898 + static ssize_t addr_single_show(struct device *dev, 899 + struct device_attribute *attr, 900 + char *buf) 901 + { 902 + u8 idx; 903 + unsigned long val; 904 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 905 + struct etmv4_config *config = &drvdata->config; 906 + 907 + idx = config->addr_idx; 908 + spin_lock(&drvdata->spinlock); 909 + if 
(!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE || 910 + config->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) { 911 + spin_unlock(&drvdata->spinlock); 912 + return -EPERM; 913 + } 914 + val = (unsigned long)config->addr_val[idx]; 915 + spin_unlock(&drvdata->spinlock); 916 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 917 + } 918 + 919 + static ssize_t addr_single_store(struct device *dev, 920 + struct device_attribute *attr, 921 + const char *buf, size_t size) 922 + { 923 + u8 idx; 924 + unsigned long val; 925 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 926 + struct etmv4_config *config = &drvdata->config; 927 + 928 + if (kstrtoul(buf, 16, &val)) 929 + return -EINVAL; 930 + 931 + spin_lock(&drvdata->spinlock); 932 + idx = config->addr_idx; 933 + if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE || 934 + config->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) { 935 + spin_unlock(&drvdata->spinlock); 936 + return -EPERM; 937 + } 938 + 939 + config->addr_val[idx] = (u64)val; 940 + config->addr_type[idx] = ETM_ADDR_TYPE_SINGLE; 941 + spin_unlock(&drvdata->spinlock); 942 + return size; 943 + } 944 + static DEVICE_ATTR_RW(addr_single); 945 + 946 + static ssize_t addr_range_show(struct device *dev, 947 + struct device_attribute *attr, 948 + char *buf) 949 + { 950 + u8 idx; 951 + unsigned long val1, val2; 952 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 953 + struct etmv4_config *config = &drvdata->config; 954 + 955 + spin_lock(&drvdata->spinlock); 956 + idx = config->addr_idx; 957 + if (idx % 2 != 0) { 958 + spin_unlock(&drvdata->spinlock); 959 + return -EPERM; 960 + } 961 + if (!((config->addr_type[idx] == ETM_ADDR_TYPE_NONE && 962 + config->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) || 963 + (config->addr_type[idx] == ETM_ADDR_TYPE_RANGE && 964 + config->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) { 965 + spin_unlock(&drvdata->spinlock); 966 + return -EPERM; 967 + } 968 + 969 + val1 = (unsigned long)config->addr_val[idx]; 970 + val2 = 
(unsigned long)config->addr_val[idx + 1]; 971 + spin_unlock(&drvdata->spinlock); 972 + return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2); 973 + } 974 + 975 + static ssize_t addr_range_store(struct device *dev, 976 + struct device_attribute *attr, 977 + const char *buf, size_t size) 978 + { 979 + u8 idx; 980 + unsigned long val1, val2; 981 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 982 + struct etmv4_config *config = &drvdata->config; 983 + 984 + if (sscanf(buf, "%lx %lx", &val1, &val2) != 2) 985 + return -EINVAL; 986 + /* lower address comparator cannot have a higher address value */ 987 + if (val1 > val2) 988 + return -EINVAL; 989 + 990 + spin_lock(&drvdata->spinlock); 991 + idx = config->addr_idx; 992 + if (idx % 2 != 0) { 993 + spin_unlock(&drvdata->spinlock); 994 + return -EPERM; 995 + } 996 + 997 + if (!((config->addr_type[idx] == ETM_ADDR_TYPE_NONE && 998 + config->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) || 999 + (config->addr_type[idx] == ETM_ADDR_TYPE_RANGE && 1000 + config->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) { 1001 + spin_unlock(&drvdata->spinlock); 1002 + return -EPERM; 1003 + } 1004 + 1005 + config->addr_val[idx] = (u64)val1; 1006 + config->addr_type[idx] = ETM_ADDR_TYPE_RANGE; 1007 + config->addr_val[idx + 1] = (u64)val2; 1008 + config->addr_type[idx + 1] = ETM_ADDR_TYPE_RANGE; 1009 + /* 1010 + * Program include or exclude control bits for vinst or vdata 1011 + * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE 1012 + */ 1013 + if (config->mode & ETM_MODE_EXCLUDE) 1014 + etm4_set_mode_exclude(drvdata, true); 1015 + else 1016 + etm4_set_mode_exclude(drvdata, false); 1017 + 1018 + spin_unlock(&drvdata->spinlock); 1019 + return size; 1020 + } 1021 + static DEVICE_ATTR_RW(addr_range); 1022 + 1023 + static ssize_t addr_start_show(struct device *dev, 1024 + struct device_attribute *attr, 1025 + char *buf) 1026 + { 1027 + u8 idx; 1028 + unsigned long val; 1029 + struct etmv4_drvdata *drvdata = 
dev_get_drvdata(dev->parent); 1030 + struct etmv4_config *config = &drvdata->config; 1031 + 1032 + spin_lock(&drvdata->spinlock); 1033 + idx = config->addr_idx; 1034 + 1035 + if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1036 + config->addr_type[idx] == ETM_ADDR_TYPE_START)) { 1037 + spin_unlock(&drvdata->spinlock); 1038 + return -EPERM; 1039 + } 1040 + 1041 + val = (unsigned long)config->addr_val[idx]; 1042 + spin_unlock(&drvdata->spinlock); 1043 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1044 + } 1045 + 1046 + static ssize_t addr_start_store(struct device *dev, 1047 + struct device_attribute *attr, 1048 + const char *buf, size_t size) 1049 + { 1050 + u8 idx; 1051 + unsigned long val; 1052 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1053 + struct etmv4_config *config = &drvdata->config; 1054 + 1055 + if (kstrtoul(buf, 16, &val)) 1056 + return -EINVAL; 1057 + 1058 + spin_lock(&drvdata->spinlock); 1059 + idx = config->addr_idx; 1060 + if (!drvdata->nr_addr_cmp) { 1061 + spin_unlock(&drvdata->spinlock); 1062 + return -EINVAL; 1063 + } 1064 + if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1065 + config->addr_type[idx] == ETM_ADDR_TYPE_START)) { 1066 + spin_unlock(&drvdata->spinlock); 1067 + return -EPERM; 1068 + } 1069 + 1070 + config->addr_val[idx] = (u64)val; 1071 + config->addr_type[idx] = ETM_ADDR_TYPE_START; 1072 + config->vissctlr |= BIT(idx); 1073 + /* SSSTATUS, bit[9] - turn on start/stop logic */ 1074 + config->vinst_ctrl |= BIT(9); 1075 + spin_unlock(&drvdata->spinlock); 1076 + return size; 1077 + } 1078 + static DEVICE_ATTR_RW(addr_start); 1079 + 1080 + static ssize_t addr_stop_show(struct device *dev, 1081 + struct device_attribute *attr, 1082 + char *buf) 1083 + { 1084 + u8 idx; 1085 + unsigned long val; 1086 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1087 + struct etmv4_config *config = &drvdata->config; 1088 + 1089 + spin_lock(&drvdata->spinlock); 1090 + idx = config->addr_idx; 1091 + 1092 
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1093 + config->addr_type[idx] == ETM_ADDR_TYPE_STOP)) { 1094 + spin_unlock(&drvdata->spinlock); 1095 + return -EPERM; 1096 + } 1097 + 1098 + val = (unsigned long)config->addr_val[idx]; 1099 + spin_unlock(&drvdata->spinlock); 1100 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1101 + } 1102 + 1103 + static ssize_t addr_stop_store(struct device *dev, 1104 + struct device_attribute *attr, 1105 + const char *buf, size_t size) 1106 + { 1107 + u8 idx; 1108 + unsigned long val; 1109 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1110 + struct etmv4_config *config = &drvdata->config; 1111 + 1112 + if (kstrtoul(buf, 16, &val)) 1113 + return -EINVAL; 1114 + 1115 + spin_lock(&drvdata->spinlock); 1116 + idx = config->addr_idx; 1117 + if (!drvdata->nr_addr_cmp) { 1118 + spin_unlock(&drvdata->spinlock); 1119 + return -EINVAL; 1120 + } 1121 + if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1122 + config->addr_type[idx] == ETM_ADDR_TYPE_STOP)) { 1123 + spin_unlock(&drvdata->spinlock); 1124 + return -EPERM; 1125 + } 1126 + 1127 + config->addr_val[idx] = (u64)val; 1128 + config->addr_type[idx] = ETM_ADDR_TYPE_STOP; 1129 + config->vissctlr |= BIT(idx + 16); 1130 + /* SSSTATUS, bit[9] - turn on start/stop logic */ 1131 + config->vinst_ctrl |= BIT(9); 1132 + spin_unlock(&drvdata->spinlock); 1133 + return size; 1134 + } 1135 + static DEVICE_ATTR_RW(addr_stop); 1136 + 1137 + static ssize_t addr_ctxtype_show(struct device *dev, 1138 + struct device_attribute *attr, 1139 + char *buf) 1140 + { 1141 + ssize_t len; 1142 + u8 idx, val; 1143 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1144 + struct etmv4_config *config = &drvdata->config; 1145 + 1146 + spin_lock(&drvdata->spinlock); 1147 + idx = config->addr_idx; 1148 + /* CONTEXTTYPE, bits[3:2] */ 1149 + val = BMVAL(config->addr_acc[idx], 2, 3); 1150 + len = scnprintf(buf, PAGE_SIZE, "%s\n", val == ETM_CTX_NONE ? 
"none" : 1151 + (val == ETM_CTX_CTXID ? "ctxid" : 1152 + (val == ETM_CTX_VMID ? "vmid" : "all"))); 1153 + spin_unlock(&drvdata->spinlock); 1154 + return len; 1155 + } 1156 + 1157 + static ssize_t addr_ctxtype_store(struct device *dev, 1158 + struct device_attribute *attr, 1159 + const char *buf, size_t size) 1160 + { 1161 + u8 idx; 1162 + char str[10] = ""; 1163 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1164 + struct etmv4_config *config = &drvdata->config; 1165 + 1166 + if (strlen(buf) >= 10) 1167 + return -EINVAL; 1168 + if (sscanf(buf, "%s", str) != 1) 1169 + return -EINVAL; 1170 + 1171 + spin_lock(&drvdata->spinlock); 1172 + idx = config->addr_idx; 1173 + if (!strcmp(str, "none")) 1174 + /* start by clearing context type bits */ 1175 + config->addr_acc[idx] &= ~(BIT(2) | BIT(3)); 1176 + else if (!strcmp(str, "ctxid")) { 1177 + /* 0b01 The trace unit performs a Context ID */ 1178 + if (drvdata->numcidc) { 1179 + config->addr_acc[idx] |= BIT(2); 1180 + config->addr_acc[idx] &= ~BIT(3); 1181 + } 1182 + } else if (!strcmp(str, "vmid")) { 1183 + /* 0b10 The trace unit performs a VMID */ 1184 + if (drvdata->numvmidc) { 1185 + config->addr_acc[idx] &= ~BIT(2); 1186 + config->addr_acc[idx] |= BIT(3); 1187 + } 1188 + } else if (!strcmp(str, "all")) { 1189 + /* 1190 + * 0b11 The trace unit performs a Context ID 1191 + * comparison and a VMID 1192 + */ 1193 + if (drvdata->numcidc) 1194 + config->addr_acc[idx] |= BIT(2); 1195 + if (drvdata->numvmidc) 1196 + config->addr_acc[idx] |= BIT(3); 1197 + } 1198 + spin_unlock(&drvdata->spinlock); 1199 + return size; 1200 + } 1201 + static DEVICE_ATTR_RW(addr_ctxtype); 1202 + 1203 + static ssize_t addr_context_show(struct device *dev, 1204 + struct device_attribute *attr, 1205 + char *buf) 1206 + { 1207 + u8 idx; 1208 + unsigned long val; 1209 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1210 + struct etmv4_config *config = &drvdata->config; 1211 + 1212 + spin_lock(&drvdata->spinlock); 1213 + 
idx = config->addr_idx; 1214 + /* context ID comparator bits[6:4] */ 1215 + val = BMVAL(config->addr_acc[idx], 4, 6); 1216 + spin_unlock(&drvdata->spinlock); 1217 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1218 + } 1219 + 1220 + static ssize_t addr_context_store(struct device *dev, 1221 + struct device_attribute *attr, 1222 + const char *buf, size_t size) 1223 + { 1224 + u8 idx; 1225 + unsigned long val; 1226 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1227 + struct etmv4_config *config = &drvdata->config; 1228 + 1229 + if (kstrtoul(buf, 16, &val)) 1230 + return -EINVAL; 1231 + if ((drvdata->numcidc <= 1) && (drvdata->numvmidc <= 1)) 1232 + return -EINVAL; 1233 + if (val >= (drvdata->numcidc >= drvdata->numvmidc ? 1234 + drvdata->numcidc : drvdata->numvmidc)) 1235 + return -EINVAL; 1236 + 1237 + spin_lock(&drvdata->spinlock); 1238 + idx = config->addr_idx; 1239 + /* clear context ID comparator bits[6:4] */ 1240 + config->addr_acc[idx] &= ~(BIT(4) | BIT(5) | BIT(6)); 1241 + config->addr_acc[idx] |= (val << 4); 1242 + spin_unlock(&drvdata->spinlock); 1243 + return size; 1244 + } 1245 + static DEVICE_ATTR_RW(addr_context); 1246 + 1247 + static ssize_t seq_idx_show(struct device *dev, 1248 + struct device_attribute *attr, 1249 + char *buf) 1250 + { 1251 + unsigned long val; 1252 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1253 + struct etmv4_config *config = &drvdata->config; 1254 + 1255 + val = config->seq_idx; 1256 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1257 + } 1258 + 1259 + static ssize_t seq_idx_store(struct device *dev, 1260 + struct device_attribute *attr, 1261 + const char *buf, size_t size) 1262 + { 1263 + unsigned long val; 1264 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1265 + struct etmv4_config *config = &drvdata->config; 1266 + 1267 + if (kstrtoul(buf, 16, &val)) 1268 + return -EINVAL; 1269 + if (val >= drvdata->nrseqstate - 1) 1270 + return -EINVAL; 1271 + 1272 + /* 1273 + * 
Use spinlock to ensure index doesn't change while it gets 1274 + * dereferenced multiple times within a spinlock block elsewhere. 1275 + */ 1276 + spin_lock(&drvdata->spinlock); 1277 + config->seq_idx = val; 1278 + spin_unlock(&drvdata->spinlock); 1279 + return size; 1280 + } 1281 + static DEVICE_ATTR_RW(seq_idx); 1282 + 1283 + static ssize_t seq_state_show(struct device *dev, 1284 + struct device_attribute *attr, 1285 + char *buf) 1286 + { 1287 + unsigned long val; 1288 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1289 + struct etmv4_config *config = &drvdata->config; 1290 + 1291 + val = config->seq_state; 1292 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1293 + } 1294 + 1295 + static ssize_t seq_state_store(struct device *dev, 1296 + struct device_attribute *attr, 1297 + const char *buf, size_t size) 1298 + { 1299 + unsigned long val; 1300 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1301 + struct etmv4_config *config = &drvdata->config; 1302 + 1303 + if (kstrtoul(buf, 16, &val)) 1304 + return -EINVAL; 1305 + if (val >= drvdata->nrseqstate) 1306 + return -EINVAL; 1307 + 1308 + config->seq_state = val; 1309 + return size; 1310 + } 1311 + static DEVICE_ATTR_RW(seq_state); 1312 + 1313 + static ssize_t seq_event_show(struct device *dev, 1314 + struct device_attribute *attr, 1315 + char *buf) 1316 + { 1317 + u8 idx; 1318 + unsigned long val; 1319 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1320 + struct etmv4_config *config = &drvdata->config; 1321 + 1322 + spin_lock(&drvdata->spinlock); 1323 + idx = config->seq_idx; 1324 + val = config->seq_ctrl[idx]; 1325 + spin_unlock(&drvdata->spinlock); 1326 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1327 + } 1328 + 1329 + static ssize_t seq_event_store(struct device *dev, 1330 + struct device_attribute *attr, 1331 + const char *buf, size_t size) 1332 + { 1333 + u8 idx; 1334 + unsigned long val; 1335 + struct etmv4_drvdata *drvdata = 
dev_get_drvdata(dev->parent); 1336 + struct etmv4_config *config = &drvdata->config; 1337 + 1338 + if (kstrtoul(buf, 16, &val)) 1339 + return -EINVAL; 1340 + 1341 + spin_lock(&drvdata->spinlock); 1342 + idx = config->seq_idx; 1343 + /* RST, bits[7:0] */ 1344 + config->seq_ctrl[idx] = val & 0xFF; 1345 + spin_unlock(&drvdata->spinlock); 1346 + return size; 1347 + } 1348 + static DEVICE_ATTR_RW(seq_event); 1349 + 1350 + static ssize_t seq_reset_event_show(struct device *dev, 1351 + struct device_attribute *attr, 1352 + char *buf) 1353 + { 1354 + unsigned long val; 1355 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1356 + struct etmv4_config *config = &drvdata->config; 1357 + 1358 + val = config->seq_rst; 1359 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1360 + } 1361 + 1362 + static ssize_t seq_reset_event_store(struct device *dev, 1363 + struct device_attribute *attr, 1364 + const char *buf, size_t size) 1365 + { 1366 + unsigned long val; 1367 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1368 + struct etmv4_config *config = &drvdata->config; 1369 + 1370 + if (kstrtoul(buf, 16, &val)) 1371 + return -EINVAL; 1372 + if (!(drvdata->nrseqstate)) 1373 + return -EINVAL; 1374 + 1375 + config->seq_rst = val & ETMv4_EVENT_MASK; 1376 + return size; 1377 + } 1378 + static DEVICE_ATTR_RW(seq_reset_event); 1379 + 1380 + static ssize_t cntr_idx_show(struct device *dev, 1381 + struct device_attribute *attr, 1382 + char *buf) 1383 + { 1384 + unsigned long val; 1385 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1386 + struct etmv4_config *config = &drvdata->config; 1387 + 1388 + val = config->cntr_idx; 1389 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1390 + } 1391 + 1392 + static ssize_t cntr_idx_store(struct device *dev, 1393 + struct device_attribute *attr, 1394 + const char *buf, size_t size) 1395 + { 1396 + unsigned long val; 1397 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1398 + struct 
etmv4_config *config = &drvdata->config; 1399 + 1400 + if (kstrtoul(buf, 16, &val)) 1401 + return -EINVAL; 1402 + if (val >= drvdata->nr_cntr) 1403 + return -EINVAL; 1404 + 1405 + /* 1406 + * Use spinlock to ensure index doesn't change while it gets 1407 + * dereferenced multiple times within a spinlock block elsewhere. 1408 + */ 1409 + spin_lock(&drvdata->spinlock); 1410 + config->cntr_idx = val; 1411 + spin_unlock(&drvdata->spinlock); 1412 + return size; 1413 + } 1414 + static DEVICE_ATTR_RW(cntr_idx); 1415 + 1416 + static ssize_t cntrldvr_show(struct device *dev, 1417 + struct device_attribute *attr, 1418 + char *buf) 1419 + { 1420 + u8 idx; 1421 + unsigned long val; 1422 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1423 + struct etmv4_config *config = &drvdata->config; 1424 + 1425 + spin_lock(&drvdata->spinlock); 1426 + idx = config->cntr_idx; 1427 + val = config->cntrldvr[idx]; 1428 + spin_unlock(&drvdata->spinlock); 1429 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1430 + } 1431 + 1432 + static ssize_t cntrldvr_store(struct device *dev, 1433 + struct device_attribute *attr, 1434 + const char *buf, size_t size) 1435 + { 1436 + u8 idx; 1437 + unsigned long val; 1438 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1439 + struct etmv4_config *config = &drvdata->config; 1440 + 1441 + if (kstrtoul(buf, 16, &val)) 1442 + return -EINVAL; 1443 + if (val > ETM_CNTR_MAX_VAL) 1444 + return -EINVAL; 1445 + 1446 + spin_lock(&drvdata->spinlock); 1447 + idx = config->cntr_idx; 1448 + config->cntrldvr[idx] = val; 1449 + spin_unlock(&drvdata->spinlock); 1450 + return size; 1451 + } 1452 + static DEVICE_ATTR_RW(cntrldvr); 1453 + 1454 + static ssize_t cntr_val_show(struct device *dev, 1455 + struct device_attribute *attr, 1456 + char *buf) 1457 + { 1458 + u8 idx; 1459 + unsigned long val; 1460 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1461 + struct etmv4_config *config = &drvdata->config; 1462 + 1463 + 
spin_lock(&drvdata->spinlock); 1464 + idx = config->cntr_idx; 1465 + val = config->cntr_val[idx]; 1466 + spin_unlock(&drvdata->spinlock); 1467 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1468 + } 1469 + 1470 + static ssize_t cntr_val_store(struct device *dev, 1471 + struct device_attribute *attr, 1472 + const char *buf, size_t size) 1473 + { 1474 + u8 idx; 1475 + unsigned long val; 1476 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1477 + struct etmv4_config *config = &drvdata->config; 1478 + 1479 + if (kstrtoul(buf, 16, &val)) 1480 + return -EINVAL; 1481 + if (val > ETM_CNTR_MAX_VAL) 1482 + return -EINVAL; 1483 + 1484 + spin_lock(&drvdata->spinlock); 1485 + idx = config->cntr_idx; 1486 + config->cntr_val[idx] = val; 1487 + spin_unlock(&drvdata->spinlock); 1488 + return size; 1489 + } 1490 + static DEVICE_ATTR_RW(cntr_val); 1491 + 1492 + static ssize_t cntr_ctrl_show(struct device *dev, 1493 + struct device_attribute *attr, 1494 + char *buf) 1495 + { 1496 + u8 idx; 1497 + unsigned long val; 1498 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1499 + struct etmv4_config *config = &drvdata->config; 1500 + 1501 + spin_lock(&drvdata->spinlock); 1502 + idx = config->cntr_idx; 1503 + val = config->cntr_ctrl[idx]; 1504 + spin_unlock(&drvdata->spinlock); 1505 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1506 + } 1507 + 1508 + static ssize_t cntr_ctrl_store(struct device *dev, 1509 + struct device_attribute *attr, 1510 + const char *buf, size_t size) 1511 + { 1512 + u8 idx; 1513 + unsigned long val; 1514 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1515 + struct etmv4_config *config = &drvdata->config; 1516 + 1517 + if (kstrtoul(buf, 16, &val)) 1518 + return -EINVAL; 1519 + 1520 + spin_lock(&drvdata->spinlock); 1521 + idx = config->cntr_idx; 1522 + config->cntr_ctrl[idx] = val; 1523 + spin_unlock(&drvdata->spinlock); 1524 + return size; 1525 + } 1526 + static DEVICE_ATTR_RW(cntr_ctrl); 1527 + 1528 + static 
ssize_t res_idx_show(struct device *dev, 1529 + struct device_attribute *attr, 1530 + char *buf) 1531 + { 1532 + unsigned long val; 1533 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1534 + struct etmv4_config *config = &drvdata->config; 1535 + 1536 + val = config->res_idx; 1537 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1538 + } 1539 + 1540 + static ssize_t res_idx_store(struct device *dev, 1541 + struct device_attribute *attr, 1542 + const char *buf, size_t size) 1543 + { 1544 + unsigned long val; 1545 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1546 + struct etmv4_config *config = &drvdata->config; 1547 + 1548 + if (kstrtoul(buf, 16, &val)) 1549 + return -EINVAL; 1550 + /* Resource selector pair 0 is always implemented and reserved */ 1551 + if ((val == 0) || (val >= drvdata->nr_resource)) 1552 + return -EINVAL; 1553 + 1554 + /* 1555 + * Use spinlock to ensure index doesn't change while it gets 1556 + * dereferenced multiple times within a spinlock block elsewhere. 
1557 + */ 1558 + spin_lock(&drvdata->spinlock); 1559 + config->res_idx = val; 1560 + spin_unlock(&drvdata->spinlock); 1561 + return size; 1562 + } 1563 + static DEVICE_ATTR_RW(res_idx); 1564 + 1565 + static ssize_t res_ctrl_show(struct device *dev, 1566 + struct device_attribute *attr, 1567 + char *buf) 1568 + { 1569 + u8 idx; 1570 + unsigned long val; 1571 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1572 + struct etmv4_config *config = &drvdata->config; 1573 + 1574 + spin_lock(&drvdata->spinlock); 1575 + idx = config->res_idx; 1576 + val = config->res_ctrl[idx]; 1577 + spin_unlock(&drvdata->spinlock); 1578 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1579 + } 1580 + 1581 + static ssize_t res_ctrl_store(struct device *dev, 1582 + struct device_attribute *attr, 1583 + const char *buf, size_t size) 1584 + { 1585 + u8 idx; 1586 + unsigned long val; 1587 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1588 + struct etmv4_config *config = &drvdata->config; 1589 + 1590 + if (kstrtoul(buf, 16, &val)) 1591 + return -EINVAL; 1592 + 1593 + spin_lock(&drvdata->spinlock); 1594 + idx = config->res_idx; 1595 + /* For odd idx pair inversion bit is RES0 */ 1596 + if (idx % 2 != 0) 1597 + /* PAIRINV, bit[21] */ 1598 + val &= ~BIT(21); 1599 + config->res_ctrl[idx] = val; 1600 + spin_unlock(&drvdata->spinlock); 1601 + return size; 1602 + } 1603 + static DEVICE_ATTR_RW(res_ctrl); 1604 + 1605 + static ssize_t ctxid_idx_show(struct device *dev, 1606 + struct device_attribute *attr, 1607 + char *buf) 1608 + { 1609 + unsigned long val; 1610 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1611 + struct etmv4_config *config = &drvdata->config; 1612 + 1613 + val = config->ctxid_idx; 1614 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1615 + } 1616 + 1617 + static ssize_t ctxid_idx_store(struct device *dev, 1618 + struct device_attribute *attr, 1619 + const char *buf, size_t size) 1620 + { 1621 + unsigned long val; 1622 + struct 
etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1623 + struct etmv4_config *config = &drvdata->config; 1624 + 1625 + if (kstrtoul(buf, 16, &val)) 1626 + return -EINVAL; 1627 + if (val >= drvdata->numcidc) 1628 + return -EINVAL; 1629 + 1630 + /* 1631 + * Use spinlock to ensure index doesn't change while it gets 1632 + * dereferenced multiple times within a spinlock block elsewhere. 1633 + */ 1634 + spin_lock(&drvdata->spinlock); 1635 + config->ctxid_idx = val; 1636 + spin_unlock(&drvdata->spinlock); 1637 + return size; 1638 + } 1639 + static DEVICE_ATTR_RW(ctxid_idx); 1640 + 1641 + static ssize_t ctxid_pid_show(struct device *dev, 1642 + struct device_attribute *attr, 1643 + char *buf) 1644 + { 1645 + u8 idx; 1646 + unsigned long val; 1647 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1648 + struct etmv4_config *config = &drvdata->config; 1649 + 1650 + spin_lock(&drvdata->spinlock); 1651 + idx = config->ctxid_idx; 1652 + val = (unsigned long)config->ctxid_vpid[idx]; 1653 + spin_unlock(&drvdata->spinlock); 1654 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1655 + } 1656 + 1657 + static ssize_t ctxid_pid_store(struct device *dev, 1658 + struct device_attribute *attr, 1659 + const char *buf, size_t size) 1660 + { 1661 + u8 idx; 1662 + unsigned long vpid, pid; 1663 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1664 + struct etmv4_config *config = &drvdata->config; 1665 + 1666 + /* 1667 + * only implemented when ctxid tracing is enabled, i.e. 
at least one 1668 + * ctxid comparator is implemented and ctxid is greater than 0 bits 1669 + * in length 1670 + */ 1671 + if (!drvdata->ctxid_size || !drvdata->numcidc) 1672 + return -EINVAL; 1673 + if (kstrtoul(buf, 16, &vpid)) 1674 + return -EINVAL; 1675 + 1676 + pid = coresight_vpid_to_pid(vpid); 1677 + 1678 + spin_lock(&drvdata->spinlock); 1679 + idx = config->ctxid_idx; 1680 + config->ctxid_pid[idx] = (u64)pid; 1681 + config->ctxid_vpid[idx] = (u64)vpid; 1682 + spin_unlock(&drvdata->spinlock); 1683 + return size; 1684 + } 1685 + static DEVICE_ATTR_RW(ctxid_pid); 1686 + 1687 + static ssize_t ctxid_masks_show(struct device *dev, 1688 + struct device_attribute *attr, 1689 + char *buf) 1690 + { 1691 + unsigned long val1, val2; 1692 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1693 + struct etmv4_config *config = &drvdata->config; 1694 + 1695 + spin_lock(&drvdata->spinlock); 1696 + val1 = config->ctxid_mask0; 1697 + val2 = config->ctxid_mask1; 1698 + spin_unlock(&drvdata->spinlock); 1699 + return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2); 1700 + } 1701 + 1702 + static ssize_t ctxid_masks_store(struct device *dev, 1703 + struct device_attribute *attr, 1704 + const char *buf, size_t size) 1705 + { 1706 + u8 i, j, maskbyte; 1707 + unsigned long val1, val2, mask; 1708 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1709 + struct etmv4_config *config = &drvdata->config; 1710 + 1711 + /* 1712 + * only implemented when ctxid tracing is enabled, i.e. 
at least one 1713 + * ctxid comparator is implemented and ctxid is greater than 0 bits 1714 + * in length 1715 + */ 1716 + if (!drvdata->ctxid_size || !drvdata->numcidc) 1717 + return -EINVAL; 1718 + if (sscanf(buf, "%lx %lx", &val1, &val2) != 2) 1719 + return -EINVAL; 1720 + 1721 + spin_lock(&drvdata->spinlock); 1722 + /* 1723 + * each byte[0..3] controls mask value applied to ctxid 1724 + * comparator[0..3] 1725 + */ 1726 + switch (drvdata->numcidc) { 1727 + case 0x1: 1728 + /* COMP0, bits[7:0] */ 1729 + config->ctxid_mask0 = val1 & 0xFF; 1730 + break; 1731 + case 0x2: 1732 + /* COMP1, bits[15:8] */ 1733 + config->ctxid_mask0 = val1 & 0xFFFF; 1734 + break; 1735 + case 0x3: 1736 + /* COMP2, bits[23:16] */ 1737 + config->ctxid_mask0 = val1 & 0xFFFFFF; 1738 + break; 1739 + case 0x4: 1740 + /* COMP3, bits[31:24] */ 1741 + config->ctxid_mask0 = val1; 1742 + break; 1743 + case 0x5: 1744 + /* COMP4, bits[7:0] */ 1745 + config->ctxid_mask0 = val1; 1746 + config->ctxid_mask1 = val2 & 0xFF; 1747 + break; 1748 + case 0x6: 1749 + /* COMP5, bits[15:8] */ 1750 + config->ctxid_mask0 = val1; 1751 + config->ctxid_mask1 = val2 & 0xFFFF; 1752 + break; 1753 + case 0x7: 1754 + /* COMP6, bits[23:16] */ 1755 + config->ctxid_mask0 = val1; 1756 + config->ctxid_mask1 = val2 & 0xFFFFFF; 1757 + break; 1758 + case 0x8: 1759 + /* COMP7, bits[31:24] */ 1760 + config->ctxid_mask0 = val1; 1761 + config->ctxid_mask1 = val2; 1762 + break; 1763 + default: 1764 + break; 1765 + } 1766 + /* 1767 + * If software sets a mask bit to 1, it must program relevant byte 1768 + * of ctxid comparator value 0x0, otherwise behavior is unpredictable. 1769 + * For example, if bit[3] of ctxid_mask0 is 1, we must clear bits[31:24] 1770 + * of ctxid comparator0 value (corresponding to byte 0) register. 
1771 + */ 1772 + mask = config->ctxid_mask0; 1773 + for (i = 0; i < drvdata->numcidc; i++) { 1774 + /* mask value of corresponding ctxid comparator */ 1775 + maskbyte = mask & ETMv4_EVENT_MASK; 1776 + /* 1777 + * each bit corresponds to a byte of respective ctxid comparator 1778 + * value register 1779 + */ 1780 + for (j = 0; j < 8; j++) { 1781 + if (maskbyte & 1) 1782 + config->ctxid_pid[i] &= ~(0xFF << (j * 8)); 1783 + maskbyte >>= 1; 1784 + } 1785 + /* Select the next ctxid comparator mask value */ 1786 + if (i == 3) 1787 + /* ctxid comparators[4-7] */ 1788 + mask = config->ctxid_mask1; 1789 + else 1790 + mask >>= 0x8; 1791 + } 1792 + 1793 + spin_unlock(&drvdata->spinlock); 1794 + return size; 1795 + } 1796 + static DEVICE_ATTR_RW(ctxid_masks); 1797 + 1798 + static ssize_t vmid_idx_show(struct device *dev, 1799 + struct device_attribute *attr, 1800 + char *buf) 1801 + { 1802 + unsigned long val; 1803 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1804 + struct etmv4_config *config = &drvdata->config; 1805 + 1806 + val = config->vmid_idx; 1807 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1808 + } 1809 + 1810 + static ssize_t vmid_idx_store(struct device *dev, 1811 + struct device_attribute *attr, 1812 + const char *buf, size_t size) 1813 + { 1814 + unsigned long val; 1815 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1816 + struct etmv4_config *config = &drvdata->config; 1817 + 1818 + if (kstrtoul(buf, 16, &val)) 1819 + return -EINVAL; 1820 + if (val >= drvdata->numvmidc) 1821 + return -EINVAL; 1822 + 1823 + /* 1824 + * Use spinlock to ensure index doesn't change while it gets 1825 + * dereferenced multiple times within a spinlock block elsewhere. 
1826 + */ 1827 + spin_lock(&drvdata->spinlock); 1828 + config->vmid_idx = val; 1829 + spin_unlock(&drvdata->spinlock); 1830 + return size; 1831 + } 1832 + static DEVICE_ATTR_RW(vmid_idx); 1833 + 1834 + static ssize_t vmid_val_show(struct device *dev, 1835 + struct device_attribute *attr, 1836 + char *buf) 1837 + { 1838 + unsigned long val; 1839 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1840 + struct etmv4_config *config = &drvdata->config; 1841 + 1842 + val = (unsigned long)config->vmid_val[config->vmid_idx]; 1843 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1844 + } 1845 + 1846 + static ssize_t vmid_val_store(struct device *dev, 1847 + struct device_attribute *attr, 1848 + const char *buf, size_t size) 1849 + { 1850 + unsigned long val; 1851 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1852 + struct etmv4_config *config = &drvdata->config; 1853 + 1854 + /* 1855 + * only implemented when vmid tracing is enabled, i.e. at least one 1856 + * vmid comparator is implemented and at least 8 bit vmid size 1857 + */ 1858 + if (!drvdata->vmid_size || !drvdata->numvmidc) 1859 + return -EINVAL; 1860 + if (kstrtoul(buf, 16, &val)) 1861 + return -EINVAL; 1862 + 1863 + spin_lock(&drvdata->spinlock); 1864 + config->vmid_val[config->vmid_idx] = (u64)val; 1865 + spin_unlock(&drvdata->spinlock); 1866 + return size; 1867 + } 1868 + static DEVICE_ATTR_RW(vmid_val); 1869 + 1870 + static ssize_t vmid_masks_show(struct device *dev, 1871 + struct device_attribute *attr, char *buf) 1872 + { 1873 + unsigned long val1, val2; 1874 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1875 + struct etmv4_config *config = &drvdata->config; 1876 + 1877 + spin_lock(&drvdata->spinlock); 1878 + val1 = config->vmid_mask0; 1879 + val2 = config->vmid_mask1; 1880 + spin_unlock(&drvdata->spinlock); 1881 + return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2); 1882 + } 1883 + 1884 + static ssize_t vmid_masks_store(struct device *dev, 1885 + 
struct device_attribute *attr, 1886 + const char *buf, size_t size) 1887 + { 1888 + u8 i, j, maskbyte; 1889 + unsigned long val1, val2, mask; 1890 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1891 + struct etmv4_config *config = &drvdata->config; 1892 + 1893 + /* 1894 + * only implemented when vmid tracing is enabled, i.e. at least one 1895 + * vmid comparator is implemented and at least 8 bit vmid size 1896 + */ 1897 + if (!drvdata->vmid_size || !drvdata->numvmidc) 1898 + return -EINVAL; 1899 + if (sscanf(buf, "%lx %lx", &val1, &val2) != 2) 1900 + return -EINVAL; 1901 + 1902 + spin_lock(&drvdata->spinlock); 1903 + 1904 + /* 1905 + * each byte[0..3] controls mask value applied to vmid 1906 + * comparator[0..3] 1907 + */ 1908 + switch (drvdata->numvmidc) { 1909 + case 0x1: 1910 + /* COMP0, bits[7:0] */ 1911 + config->vmid_mask0 = val1 & 0xFF; 1912 + break; 1913 + case 0x2: 1914 + /* COMP1, bits[15:8] */ 1915 + config->vmid_mask0 = val1 & 0xFFFF; 1916 + break; 1917 + case 0x3: 1918 + /* COMP2, bits[23:16] */ 1919 + config->vmid_mask0 = val1 & 0xFFFFFF; 1920 + break; 1921 + case 0x4: 1922 + /* COMP3, bits[31:24] */ 1923 + config->vmid_mask0 = val1; 1924 + break; 1925 + case 0x5: 1926 + /* COMP4, bits[7:0] */ 1927 + config->vmid_mask0 = val1; 1928 + config->vmid_mask1 = val2 & 0xFF; 1929 + break; 1930 + case 0x6: 1931 + /* COMP5, bits[15:8] */ 1932 + config->vmid_mask0 = val1; 1933 + config->vmid_mask1 = val2 & 0xFFFF; 1934 + break; 1935 + case 0x7: 1936 + /* COMP6, bits[23:16] */ 1937 + config->vmid_mask0 = val1; 1938 + config->vmid_mask1 = val2 & 0xFFFFFF; 1939 + break; 1940 + case 0x8: 1941 + /* COMP7, bits[31:24] */ 1942 + config->vmid_mask0 = val1; 1943 + config->vmid_mask1 = val2; 1944 + break; 1945 + default: 1946 + break; 1947 + } 1948 + 1949 + /* 1950 + * If software sets a mask bit to 1, it must program relevant byte 1951 + * of vmid comparator value 0x0, otherwise behavior is unpredictable. 
1952 + * For example, if bit[3] of vmid_mask0 is 1, we must clear bits[31:24] 1953 + * of vmid comparator0 value (corresponding to byte 0) register. 1954 + */ 1955 + mask = config->vmid_mask0; 1956 + for (i = 0; i < drvdata->numvmidc; i++) { 1957 + /* mask value of corresponding vmid comparator */ 1958 + maskbyte = mask & ETMv4_EVENT_MASK; 1959 + /* 1960 + * each bit corresponds to a byte of respective vmid comparator 1961 + * value register 1962 + */ 1963 + for (j = 0; j < 8; j++) { 1964 + if (maskbyte & 1) 1965 + config->vmid_val[i] &= ~(0xFF << (j * 8)); 1966 + maskbyte >>= 1; 1967 + } 1968 + /* Select the next vmid comparator mask value */ 1969 + if (i == 3) 1970 + /* vmid comparators[4-7] */ 1971 + mask = config->vmid_mask1; 1972 + else 1973 + mask >>= 0x8; 1974 + } 1975 + spin_unlock(&drvdata->spinlock); 1976 + return size; 1977 + } 1978 + static DEVICE_ATTR_RW(vmid_masks); 1979 + 1980 + static ssize_t cpu_show(struct device *dev, 1981 + struct device_attribute *attr, char *buf) 1982 + { 1983 + int val; 1984 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1985 + 1986 + val = drvdata->cpu; 1987 + return scnprintf(buf, PAGE_SIZE, "%d\n", val); 1988 + 1989 + } 1990 + static DEVICE_ATTR_RO(cpu); 1991 + 1992 + static struct attribute *coresight_etmv4_attrs[] = { 1993 + &dev_attr_nr_pe_cmp.attr, 1994 + &dev_attr_nr_addr_cmp.attr, 1995 + &dev_attr_nr_cntr.attr, 1996 + &dev_attr_nr_ext_inp.attr, 1997 + &dev_attr_numcidc.attr, 1998 + &dev_attr_numvmidc.attr, 1999 + &dev_attr_nrseqstate.attr, 2000 + &dev_attr_nr_resource.attr, 2001 + &dev_attr_nr_ss_cmp.attr, 2002 + &dev_attr_reset.attr, 2003 + &dev_attr_mode.attr, 2004 + &dev_attr_pe.attr, 2005 + &dev_attr_event.attr, 2006 + &dev_attr_event_instren.attr, 2007 + &dev_attr_event_ts.attr, 2008 + &dev_attr_syncfreq.attr, 2009 + &dev_attr_cyc_threshold.attr, 2010 + &dev_attr_bb_ctrl.attr, 2011 + &dev_attr_event_vinst.attr, 2012 + &dev_attr_s_exlevel_vinst.attr, 2013 + &dev_attr_ns_exlevel_vinst.attr, 2014 + 
&dev_attr_addr_idx.attr, 2015 + &dev_attr_addr_instdatatype.attr, 2016 + &dev_attr_addr_single.attr, 2017 + &dev_attr_addr_range.attr, 2018 + &dev_attr_addr_start.attr, 2019 + &dev_attr_addr_stop.attr, 2020 + &dev_attr_addr_ctxtype.attr, 2021 + &dev_attr_addr_context.attr, 2022 + &dev_attr_seq_idx.attr, 2023 + &dev_attr_seq_state.attr, 2024 + &dev_attr_seq_event.attr, 2025 + &dev_attr_seq_reset_event.attr, 2026 + &dev_attr_cntr_idx.attr, 2027 + &dev_attr_cntrldvr.attr, 2028 + &dev_attr_cntr_val.attr, 2029 + &dev_attr_cntr_ctrl.attr, 2030 + &dev_attr_res_idx.attr, 2031 + &dev_attr_res_ctrl.attr, 2032 + &dev_attr_ctxid_idx.attr, 2033 + &dev_attr_ctxid_pid.attr, 2034 + &dev_attr_ctxid_masks.attr, 2035 + &dev_attr_vmid_idx.attr, 2036 + &dev_attr_vmid_val.attr, 2037 + &dev_attr_vmid_masks.attr, 2038 + &dev_attr_cpu.attr, 2039 + NULL, 2040 + }; 2041 + 2042 + #define coresight_etm4x_simple_func(name, offset) \ 2043 + coresight_simple_func(struct etmv4_drvdata, name, offset) 2044 + 2045 + coresight_etm4x_simple_func(trcoslsr, TRCOSLSR); 2046 + coresight_etm4x_simple_func(trcpdcr, TRCPDCR); 2047 + coresight_etm4x_simple_func(trcpdsr, TRCPDSR); 2048 + coresight_etm4x_simple_func(trclsr, TRCLSR); 2049 + coresight_etm4x_simple_func(trcconfig, TRCCONFIGR); 2050 + coresight_etm4x_simple_func(trctraceid, TRCTRACEIDR); 2051 + coresight_etm4x_simple_func(trcauthstatus, TRCAUTHSTATUS); 2052 + coresight_etm4x_simple_func(trcdevid, TRCDEVID); 2053 + coresight_etm4x_simple_func(trcdevtype, TRCDEVTYPE); 2054 + coresight_etm4x_simple_func(trcpidr0, TRCPIDR0); 2055 + coresight_etm4x_simple_func(trcpidr1, TRCPIDR1); 2056 + coresight_etm4x_simple_func(trcpidr2, TRCPIDR2); 2057 + coresight_etm4x_simple_func(trcpidr3, TRCPIDR3); 2058 + 2059 + static struct attribute *coresight_etmv4_mgmt_attrs[] = { 2060 + &dev_attr_trcoslsr.attr, 2061 + &dev_attr_trcpdcr.attr, 2062 + &dev_attr_trcpdsr.attr, 2063 + &dev_attr_trclsr.attr, 2064 + &dev_attr_trcconfig.attr, 2065 + &dev_attr_trctraceid.attr, 2066 
+ &dev_attr_trcauthstatus.attr, 2067 + &dev_attr_trcdevid.attr, 2068 + &dev_attr_trcdevtype.attr, 2069 + &dev_attr_trcpidr0.attr, 2070 + &dev_attr_trcpidr1.attr, 2071 + &dev_attr_trcpidr2.attr, 2072 + &dev_attr_trcpidr3.attr, 2073 + NULL, 2074 + }; 2075 + 2076 + coresight_etm4x_simple_func(trcidr0, TRCIDR0); 2077 + coresight_etm4x_simple_func(trcidr1, TRCIDR1); 2078 + coresight_etm4x_simple_func(trcidr2, TRCIDR2); 2079 + coresight_etm4x_simple_func(trcidr3, TRCIDR3); 2080 + coresight_etm4x_simple_func(trcidr4, TRCIDR4); 2081 + coresight_etm4x_simple_func(trcidr5, TRCIDR5); 2082 + /* trcidr[6,7] are reserved */ 2083 + coresight_etm4x_simple_func(trcidr8, TRCIDR8); 2084 + coresight_etm4x_simple_func(trcidr9, TRCIDR9); 2085 + coresight_etm4x_simple_func(trcidr10, TRCIDR10); 2086 + coresight_etm4x_simple_func(trcidr11, TRCIDR11); 2087 + coresight_etm4x_simple_func(trcidr12, TRCIDR12); 2088 + coresight_etm4x_simple_func(trcidr13, TRCIDR13); 2089 + 2090 + static struct attribute *coresight_etmv4_trcidr_attrs[] = { 2091 + &dev_attr_trcidr0.attr, 2092 + &dev_attr_trcidr1.attr, 2093 + &dev_attr_trcidr2.attr, 2094 + &dev_attr_trcidr3.attr, 2095 + &dev_attr_trcidr4.attr, 2096 + &dev_attr_trcidr5.attr, 2097 + /* trcidr[6,7] are reserved */ 2098 + &dev_attr_trcidr8.attr, 2099 + &dev_attr_trcidr9.attr, 2100 + &dev_attr_trcidr10.attr, 2101 + &dev_attr_trcidr11.attr, 2102 + &dev_attr_trcidr12.attr, 2103 + &dev_attr_trcidr13.attr, 2104 + NULL, 2105 + }; 2106 + 2107 + static const struct attribute_group coresight_etmv4_group = { 2108 + .attrs = coresight_etmv4_attrs, 2109 + }; 2110 + 2111 + static const struct attribute_group coresight_etmv4_mgmt_group = { 2112 + .attrs = coresight_etmv4_mgmt_attrs, 2113 + .name = "mgmt", 2114 + }; 2115 + 2116 + static const struct attribute_group coresight_etmv4_trcidr_group = { 2117 + .attrs = coresight_etmv4_trcidr_attrs, 2118 + .name = "trcidr", 2119 + }; 2120 + 2121 + const struct attribute_group *coresight_etmv4_groups[] = { 2122 + 
&coresight_etmv4_group, 2123 + &coresight_etmv4_mgmt_group, 2124 + &coresight_etmv4_trcidr_group, 2125 + NULL, 2126 + };
+279 -2147
drivers/hwtracing/coresight/coresight-etm4x.c
··· 26 26 #include <linux/clk.h> 27 27 #include <linux/cpu.h> 28 28 #include <linux/coresight.h> 29 + #include <linux/coresight-pmu.h> 29 30 #include <linux/pm_wakeup.h> 30 31 #include <linux/amba/bus.h> 31 32 #include <linux/seq_file.h> 32 33 #include <linux/uaccess.h> 34 + #include <linux/perf_event.h> 33 35 #include <linux/pm_runtime.h> 34 36 #include <linux/perf_event.h> 35 37 #include <asm/sections.h> 38 + #include <asm/local.h> 36 39 37 40 #include "coresight-etm4x.h" 41 + #include "coresight-etm-perf.h" 38 42 39 43 static int boot_enable; 40 44 module_param_named(boot_enable, boot_enable, int, S_IRUGO); ··· 46 42 /* The number of ETMv4 currently registered */ 47 43 static int etm4_count; 48 44 static struct etmv4_drvdata *etmdrvdata[NR_CPUS]; 45 + static void etm4_set_default(struct etmv4_config *config); 49 46 50 - static void etm4_os_unlock(void *info) 47 + static void etm4_os_unlock(struct etmv4_drvdata *drvdata) 51 48 { 52 - struct etmv4_drvdata *drvdata = (struct etmv4_drvdata *)info; 53 - 54 49 /* Writing any value to ETMOSLAR unlocks the trace registers */ 55 50 writel_relaxed(0x0, drvdata->base + TRCOSLAR); 51 + drvdata->os_unlock = true; 56 52 isb(); 57 53 } 58 54 ··· 80 76 unsigned long flags; 81 77 int trace_id = -1; 82 78 83 - if (!drvdata->enable) 79 + if (!local_read(&drvdata->mode)) 84 80 return drvdata->trcid; 85 81 86 82 spin_lock_irqsave(&drvdata->spinlock, flags); ··· 99 95 { 100 96 int i; 101 97 struct etmv4_drvdata *drvdata = info; 98 + struct etmv4_config *config = &drvdata->config; 102 99 103 100 CS_UNLOCK(drvdata->base); 104 101 ··· 114 109 "timeout observed when probing at offset %#x\n", 115 110 TRCSTATR); 116 111 117 - writel_relaxed(drvdata->pe_sel, drvdata->base + TRCPROCSELR); 118 - writel_relaxed(drvdata->cfg, drvdata->base + TRCCONFIGR); 112 + writel_relaxed(config->pe_sel, drvdata->base + TRCPROCSELR); 113 + writel_relaxed(config->cfg, drvdata->base + TRCCONFIGR); 119 114 /* nothing specific implemented */ 120 115 
writel_relaxed(0x0, drvdata->base + TRCAUXCTLR); 121 - writel_relaxed(drvdata->eventctrl0, drvdata->base + TRCEVENTCTL0R); 122 - writel_relaxed(drvdata->eventctrl1, drvdata->base + TRCEVENTCTL1R); 123 - writel_relaxed(drvdata->stall_ctrl, drvdata->base + TRCSTALLCTLR); 124 - writel_relaxed(drvdata->ts_ctrl, drvdata->base + TRCTSCTLR); 125 - writel_relaxed(drvdata->syncfreq, drvdata->base + TRCSYNCPR); 126 - writel_relaxed(drvdata->ccctlr, drvdata->base + TRCCCCTLR); 127 - writel_relaxed(drvdata->bb_ctrl, drvdata->base + TRCBBCTLR); 116 + writel_relaxed(config->eventctrl0, drvdata->base + TRCEVENTCTL0R); 117 + writel_relaxed(config->eventctrl1, drvdata->base + TRCEVENTCTL1R); 118 + writel_relaxed(config->stall_ctrl, drvdata->base + TRCSTALLCTLR); 119 + writel_relaxed(config->ts_ctrl, drvdata->base + TRCTSCTLR); 120 + writel_relaxed(config->syncfreq, drvdata->base + TRCSYNCPR); 121 + writel_relaxed(config->ccctlr, drvdata->base + TRCCCCTLR); 122 + writel_relaxed(config->bb_ctrl, drvdata->base + TRCBBCTLR); 128 123 writel_relaxed(drvdata->trcid, drvdata->base + TRCTRACEIDR); 129 - writel_relaxed(drvdata->vinst_ctrl, drvdata->base + TRCVICTLR); 130 - writel_relaxed(drvdata->viiectlr, drvdata->base + TRCVIIECTLR); 131 - writel_relaxed(drvdata->vissctlr, 124 + writel_relaxed(config->vinst_ctrl, drvdata->base + TRCVICTLR); 125 + writel_relaxed(config->viiectlr, drvdata->base + TRCVIIECTLR); 126 + writel_relaxed(config->vissctlr, 132 127 drvdata->base + TRCVISSCTLR); 133 - writel_relaxed(drvdata->vipcssctlr, 128 + writel_relaxed(config->vipcssctlr, 134 129 drvdata->base + TRCVIPCSSCTLR); 135 130 for (i = 0; i < drvdata->nrseqstate - 1; i++) 136 - writel_relaxed(drvdata->seq_ctrl[i], 131 + writel_relaxed(config->seq_ctrl[i], 137 132 drvdata->base + TRCSEQEVRn(i)); 138 - writel_relaxed(drvdata->seq_rst, drvdata->base + TRCSEQRSTEVR); 139 - writel_relaxed(drvdata->seq_state, drvdata->base + TRCSEQSTR); 140 - writel_relaxed(drvdata->ext_inp, drvdata->base + TRCEXTINSELR); 133 
+ writel_relaxed(config->seq_rst, drvdata->base + TRCSEQRSTEVR); 134 + writel_relaxed(config->seq_state, drvdata->base + TRCSEQSTR); 135 + writel_relaxed(config->ext_inp, drvdata->base + TRCEXTINSELR); 141 136 for (i = 0; i < drvdata->nr_cntr; i++) { 142 - writel_relaxed(drvdata->cntrldvr[i], 137 + writel_relaxed(config->cntrldvr[i], 143 138 drvdata->base + TRCCNTRLDVRn(i)); 144 - writel_relaxed(drvdata->cntr_ctrl[i], 139 + writel_relaxed(config->cntr_ctrl[i], 145 140 drvdata->base + TRCCNTCTLRn(i)); 146 - writel_relaxed(drvdata->cntr_val[i], 141 + writel_relaxed(config->cntr_val[i], 147 142 drvdata->base + TRCCNTVRn(i)); 148 143 } 149 144 150 145 /* Resource selector pair 0 is always implemented and reserved */ 151 - for (i = 2; i < drvdata->nr_resource * 2; i++) 152 - writel_relaxed(drvdata->res_ctrl[i], 146 + for (i = 0; i < drvdata->nr_resource * 2; i++) 147 + writel_relaxed(config->res_ctrl[i], 153 148 drvdata->base + TRCRSCTLRn(i)); 154 149 155 150 for (i = 0; i < drvdata->nr_ss_cmp; i++) { 156 - writel_relaxed(drvdata->ss_ctrl[i], 151 + writel_relaxed(config->ss_ctrl[i], 157 152 drvdata->base + TRCSSCCRn(i)); 158 - writel_relaxed(drvdata->ss_status[i], 153 + writel_relaxed(config->ss_status[i], 159 154 drvdata->base + TRCSSCSRn(i)); 160 - writel_relaxed(drvdata->ss_pe_cmp[i], 155 + writel_relaxed(config->ss_pe_cmp[i], 161 156 drvdata->base + TRCSSPCICRn(i)); 162 157 } 163 158 for (i = 0; i < drvdata->nr_addr_cmp; i++) { 164 - writeq_relaxed(drvdata->addr_val[i], 159 + writeq_relaxed(config->addr_val[i], 165 160 drvdata->base + TRCACVRn(i)); 166 - writeq_relaxed(drvdata->addr_acc[i], 161 + writeq_relaxed(config->addr_acc[i], 167 162 drvdata->base + TRCACATRn(i)); 168 163 } 169 164 for (i = 0; i < drvdata->numcidc; i++) 170 - writeq_relaxed(drvdata->ctxid_pid[i], 165 + writeq_relaxed(config->ctxid_pid[i], 171 166 drvdata->base + TRCCIDCVRn(i)); 172 - writel_relaxed(drvdata->ctxid_mask0, drvdata->base + TRCCIDCCTLR0); 173 - writel_relaxed(drvdata->ctxid_mask1, 
drvdata->base + TRCCIDCCTLR1); 167 + writel_relaxed(config->ctxid_mask0, drvdata->base + TRCCIDCCTLR0); 168 + writel_relaxed(config->ctxid_mask1, drvdata->base + TRCCIDCCTLR1); 174 169 175 170 for (i = 0; i < drvdata->numvmidc; i++) 176 - writeq_relaxed(drvdata->vmid_val[i], 171 + writeq_relaxed(config->vmid_val[i], 177 172 drvdata->base + TRCVMIDCVRn(i)); 178 - writel_relaxed(drvdata->vmid_mask0, drvdata->base + TRCVMIDCCTLR0); 179 - writel_relaxed(drvdata->vmid_mask1, drvdata->base + TRCVMIDCCTLR1); 173 + writel_relaxed(config->vmid_mask0, drvdata->base + TRCVMIDCCTLR0); 174 + writel_relaxed(config->vmid_mask1, drvdata->base + TRCVMIDCCTLR1); 180 175 181 176 /* Enable the trace unit */ 182 177 writel_relaxed(1, drvdata->base + TRCPRGCTLR); ··· 192 187 dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu); 193 188 } 194 189 195 - static int etm4_enable(struct coresight_device *csdev, 196 - struct perf_event_attr *attr, u32 mode) 190 + static int etm4_parse_event_config(struct etmv4_drvdata *drvdata, 191 + struct perf_event_attr *attr) 192 + { 193 + struct etmv4_config *config = &drvdata->config; 194 + 195 + if (!attr) 196 + return -EINVAL; 197 + 198 + /* Clear configuration from previous run */ 199 + memset(config, 0, sizeof(struct etmv4_config)); 200 + 201 + if (attr->exclude_kernel) 202 + config->mode = ETM_MODE_EXCL_KERN; 203 + 204 + if (attr->exclude_user) 205 + config->mode = ETM_MODE_EXCL_USER; 206 + 207 + /* Always start from the default config */ 208 + etm4_set_default(config); 209 + 210 + /* 211 + * By default the tracers are configured to trace the whole address 212 + * range. Narrow the field only if requested by user space. 
213 + */ 214 + if (config->mode) 215 + etm4_config_trace_mode(config); 216 + 217 + /* Go from generic option to ETMv4 specifics */ 218 + if (attr->config & BIT(ETM_OPT_CYCACC)) 219 + config->cfg |= ETMv4_MODE_CYCACC; 220 + if (attr->config & BIT(ETM_OPT_TS)) 221 + config->cfg |= ETMv4_MODE_TIMESTAMP; 222 + 223 + return 0; 224 + } 225 + 226 + static int etm4_enable_perf(struct coresight_device *csdev, 227 + struct perf_event_attr *attr) 228 + { 229 + struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 230 + 231 + if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id())) 232 + return -EINVAL; 233 + 234 + /* Configure the tracer based on the session's specifics */ 235 + etm4_parse_event_config(drvdata, attr); 236 + /* And enable it */ 237 + etm4_enable_hw(drvdata); 238 + 239 + return 0; 240 + } 241 + 242 + static int etm4_enable_sysfs(struct coresight_device *csdev) 197 243 { 198 244 struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 199 245 int ret; ··· 259 203 etm4_enable_hw, drvdata, 1); 260 204 if (ret) 261 205 goto err; 262 - drvdata->enable = true; 263 - drvdata->sticky_enable = true; 264 206 207 + drvdata->sticky_enable = true; 265 208 spin_unlock(&drvdata->spinlock); 266 209 267 210 dev_info(drvdata->dev, "ETM tracing enabled\n"); 268 211 return 0; 212 + 269 213 err: 270 214 spin_unlock(&drvdata->spinlock); 215 + return ret; 216 + } 217 + 218 + static int etm4_enable(struct coresight_device *csdev, 219 + struct perf_event_attr *attr, u32 mode) 220 + { 221 + int ret; 222 + u32 val; 223 + struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 224 + 225 + val = local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, mode); 226 + 227 + /* Someone is already using the tracer */ 228 + if (val) 229 + return -EBUSY; 230 + 231 + switch (mode) { 232 + case CS_MODE_SYSFS: 233 + ret = etm4_enable_sysfs(csdev); 234 + break; 235 + case CS_MODE_PERF: 236 + ret = etm4_enable_perf(csdev, attr); 237 + break; 238 + default: 239 + ret = -EINVAL; 
240 + } 241 + 242 + /* The tracer didn't start */ 243 + if (ret) 244 + local_set(&drvdata->mode, CS_MODE_DISABLED); 245 + 271 246 return ret; 272 247 } 273 248 ··· 324 237 dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu); 325 238 } 326 239 327 - static void etm4_disable(struct coresight_device *csdev) 240 + static int etm4_disable_perf(struct coresight_device *csdev) 241 + { 242 + struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 243 + 244 + if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id())) 245 + return -EINVAL; 246 + 247 + etm4_disable_hw(drvdata); 248 + return 0; 249 + } 250 + 251 + static void etm4_disable_sysfs(struct coresight_device *csdev) 328 252 { 329 253 struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 330 254 ··· 353 255 * ensures that register writes occur when cpu is powered. 354 256 */ 355 257 smp_call_function_single(drvdata->cpu, etm4_disable_hw, drvdata, 1); 356 - drvdata->enable = false; 357 258 358 259 spin_unlock(&drvdata->spinlock); 359 260 put_online_cpus(); 360 261 361 262 dev_info(drvdata->dev, "ETM tracing disabled\n"); 263 + } 264 + 265 + static void etm4_disable(struct coresight_device *csdev) 266 + { 267 + u32 mode; 268 + struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 269 + 270 + /* 271 + * For as long as the tracer isn't disabled another entity can't 272 + * change its status. As such we can read the status here without 273 + * fearing it will change under us. 
274 + */ 275 + mode = local_read(&drvdata->mode); 276 + 277 + switch (mode) { 278 + case CS_MODE_DISABLED: 279 + break; 280 + case CS_MODE_SYSFS: 281 + etm4_disable_sysfs(csdev); 282 + break; 283 + case CS_MODE_PERF: 284 + etm4_disable_perf(csdev); 285 + break; 286 + } 287 + 288 + if (mode) 289 + local_set(&drvdata->mode, CS_MODE_DISABLED); 362 290 } 363 291 364 292 static const struct coresight_ops_source etm4_source_ops = { ··· 398 274 .source_ops = &etm4_source_ops, 399 275 }; 400 276 401 - static int etm4_set_mode_exclude(struct etmv4_drvdata *drvdata, bool exclude) 402 - { 403 - u8 idx = drvdata->addr_idx; 404 - 405 - /* 406 - * TRCACATRn.TYPE bit[1:0]: type of comparison 407 - * the trace unit performs 408 - */ 409 - if (BMVAL(drvdata->addr_acc[idx], 0, 1) == ETM_INSTR_ADDR) { 410 - if (idx % 2 != 0) 411 - return -EINVAL; 412 - 413 - /* 414 - * We are performing instruction address comparison. Set the 415 - * relevant bit of ViewInst Include/Exclude Control register 416 - * for corresponding address comparator pair. 
417 - */ 418 - if (drvdata->addr_type[idx] != ETM_ADDR_TYPE_RANGE || 419 - drvdata->addr_type[idx + 1] != ETM_ADDR_TYPE_RANGE) 420 - return -EINVAL; 421 - 422 - if (exclude == true) { 423 - /* 424 - * Set exclude bit and unset the include bit 425 - * corresponding to comparator pair 426 - */ 427 - drvdata->viiectlr |= BIT(idx / 2 + 16); 428 - drvdata->viiectlr &= ~BIT(idx / 2); 429 - } else { 430 - /* 431 - * Set include bit and unset exclude bit 432 - * corresponding to comparator pair 433 - */ 434 - drvdata->viiectlr |= BIT(idx / 2); 435 - drvdata->viiectlr &= ~BIT(idx / 2 + 16); 436 - } 437 - } 438 - return 0; 439 - } 440 - 441 - static ssize_t nr_pe_cmp_show(struct device *dev, 442 - struct device_attribute *attr, 443 - char *buf) 444 - { 445 - unsigned long val; 446 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 447 - 448 - val = drvdata->nr_pe_cmp; 449 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 450 - } 451 - static DEVICE_ATTR_RO(nr_pe_cmp); 452 - 453 - static ssize_t nr_addr_cmp_show(struct device *dev, 454 - struct device_attribute *attr, 455 - char *buf) 456 - { 457 - unsigned long val; 458 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 459 - 460 - val = drvdata->nr_addr_cmp; 461 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 462 - } 463 - static DEVICE_ATTR_RO(nr_addr_cmp); 464 - 465 - static ssize_t nr_cntr_show(struct device *dev, 466 - struct device_attribute *attr, 467 - char *buf) 468 - { 469 - unsigned long val; 470 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 471 - 472 - val = drvdata->nr_cntr; 473 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 474 - } 475 - static DEVICE_ATTR_RO(nr_cntr); 476 - 477 - static ssize_t nr_ext_inp_show(struct device *dev, 478 - struct device_attribute *attr, 479 - char *buf) 480 - { 481 - unsigned long val; 482 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 483 - 484 - val = drvdata->nr_ext_inp; 485 - return scnprintf(buf, PAGE_SIZE, 
"%#lx\n", val); 486 - } 487 - static DEVICE_ATTR_RO(nr_ext_inp); 488 - 489 - static ssize_t numcidc_show(struct device *dev, 490 - struct device_attribute *attr, 491 - char *buf) 492 - { 493 - unsigned long val; 494 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 495 - 496 - val = drvdata->numcidc; 497 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 498 - } 499 - static DEVICE_ATTR_RO(numcidc); 500 - 501 - static ssize_t numvmidc_show(struct device *dev, 502 - struct device_attribute *attr, 503 - char *buf) 504 - { 505 - unsigned long val; 506 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 507 - 508 - val = drvdata->numvmidc; 509 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 510 - } 511 - static DEVICE_ATTR_RO(numvmidc); 512 - 513 - static ssize_t nrseqstate_show(struct device *dev, 514 - struct device_attribute *attr, 515 - char *buf) 516 - { 517 - unsigned long val; 518 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 519 - 520 - val = drvdata->nrseqstate; 521 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 522 - } 523 - static DEVICE_ATTR_RO(nrseqstate); 524 - 525 - static ssize_t nr_resource_show(struct device *dev, 526 - struct device_attribute *attr, 527 - char *buf) 528 - { 529 - unsigned long val; 530 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 531 - 532 - val = drvdata->nr_resource; 533 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 534 - } 535 - static DEVICE_ATTR_RO(nr_resource); 536 - 537 - static ssize_t nr_ss_cmp_show(struct device *dev, 538 - struct device_attribute *attr, 539 - char *buf) 540 - { 541 - unsigned long val; 542 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 543 - 544 - val = drvdata->nr_ss_cmp; 545 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 546 - } 547 - static DEVICE_ATTR_RO(nr_ss_cmp); 548 - 549 - static ssize_t reset_store(struct device *dev, 550 - struct device_attribute *attr, 551 - const char *buf, size_t size) 552 - { 553 
- int i; 554 - unsigned long val; 555 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 556 - 557 - if (kstrtoul(buf, 16, &val)) 558 - return -EINVAL; 559 - 560 - spin_lock(&drvdata->spinlock); 561 - if (val) 562 - drvdata->mode = 0x0; 563 - 564 - /* Disable data tracing: do not trace load and store data transfers */ 565 - drvdata->mode &= ~(ETM_MODE_LOAD | ETM_MODE_STORE); 566 - drvdata->cfg &= ~(BIT(1) | BIT(2)); 567 - 568 - /* Disable data value and data address tracing */ 569 - drvdata->mode &= ~(ETM_MODE_DATA_TRACE_ADDR | 570 - ETM_MODE_DATA_TRACE_VAL); 571 - drvdata->cfg &= ~(BIT(16) | BIT(17)); 572 - 573 - /* Disable all events tracing */ 574 - drvdata->eventctrl0 = 0x0; 575 - drvdata->eventctrl1 = 0x0; 576 - 577 - /* Disable timestamp event */ 578 - drvdata->ts_ctrl = 0x0; 579 - 580 - /* Disable stalling */ 581 - drvdata->stall_ctrl = 0x0; 582 - 583 - /* Reset trace synchronization period to 2^8 = 256 bytes */ 584 - if (drvdata->syncpr == false) 585 - drvdata->syncfreq = 0x8; 586 - 587 - /* 588 - * Enable ViewInst to trace everything with start-stop logic in 589 - * started state. ARM recommends start-stop logic is set before 590 - * each trace run.
591 - */ 592 - drvdata->vinst_ctrl |= BIT(0); 593 - if (drvdata->nr_addr_cmp == true) { 594 - drvdata->mode |= ETM_MODE_VIEWINST_STARTSTOP; 595 - /* SSSTATUS, bit[9] */ 596 - drvdata->vinst_ctrl |= BIT(9); 597 - } 598 - 599 - /* No address range filtering for ViewInst */ 600 - drvdata->viiectlr = 0x0; 601 - 602 - /* No start-stop filtering for ViewInst */ 603 - drvdata->vissctlr = 0x0; 604 - 605 - /* Disable seq events */ 606 - for (i = 0; i < drvdata->nrseqstate-1; i++) 607 - drvdata->seq_ctrl[i] = 0x0; 608 - drvdata->seq_rst = 0x0; 609 - drvdata->seq_state = 0x0; 610 - 611 - /* Disable external input events */ 612 - drvdata->ext_inp = 0x0; 613 - 614 - drvdata->cntr_idx = 0x0; 615 - for (i = 0; i < drvdata->nr_cntr; i++) { 616 - drvdata->cntrldvr[i] = 0x0; 617 - drvdata->cntr_ctrl[i] = 0x0; 618 - drvdata->cntr_val[i] = 0x0; 619 - } 620 - 621 - /* Resource selector pair 0 is always implemented and reserved */ 622 - drvdata->res_idx = 0x2; 623 - for (i = 2; i < drvdata->nr_resource * 2; i++) 624 - drvdata->res_ctrl[i] = 0x0; 625 - 626 - for (i = 0; i < drvdata->nr_ss_cmp; i++) { 627 - drvdata->ss_ctrl[i] = 0x0; 628 - drvdata->ss_pe_cmp[i] = 0x0; 629 - } 630 - 631 - drvdata->addr_idx = 0x0; 632 - for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) { 633 - drvdata->addr_val[i] = 0x0; 634 - drvdata->addr_acc[i] = 0x0; 635 - drvdata->addr_type[i] = ETM_ADDR_TYPE_NONE; 636 - } 637 - 638 - drvdata->ctxid_idx = 0x0; 639 - for (i = 0; i < drvdata->numcidc; i++) { 640 - drvdata->ctxid_pid[i] = 0x0; 641 - drvdata->ctxid_vpid[i] = 0x0; 642 - } 643 - 644 - drvdata->ctxid_mask0 = 0x0; 645 - drvdata->ctxid_mask1 = 0x0; 646 - 647 - drvdata->vmid_idx = 0x0; 648 - for (i = 0; i < drvdata->numvmidc; i++) 649 - drvdata->vmid_val[i] = 0x0; 650 - drvdata->vmid_mask0 = 0x0; 651 - drvdata->vmid_mask1 = 0x0; 652 - 653 - drvdata->trcid = drvdata->cpu + 1; 654 - spin_unlock(&drvdata->spinlock); 655 - return size; 656 - } 657 - static DEVICE_ATTR_WO(reset); 658 - 659 - static ssize_t 
mode_show(struct device *dev, 660 - struct device_attribute *attr, 661 - char *buf) 662 - { 663 - unsigned long val; 664 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 665 - 666 - val = drvdata->mode; 667 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 668 - } 669 - 670 - static ssize_t mode_store(struct device *dev, 671 - struct device_attribute *attr, 672 - const char *buf, size_t size) 673 - { 674 - unsigned long val, mode; 675 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 676 - 677 - if (kstrtoul(buf, 16, &val)) 678 - return -EINVAL; 679 - 680 - spin_lock(&drvdata->spinlock); 681 - drvdata->mode = val & ETMv4_MODE_ALL; 682 - 683 - if (drvdata->mode & ETM_MODE_EXCLUDE) 684 - etm4_set_mode_exclude(drvdata, true); 685 - else 686 - etm4_set_mode_exclude(drvdata, false); 687 - 688 - if (drvdata->instrp0 == true) { 689 - /* start by clearing instruction P0 field */ 690 - drvdata->cfg &= ~(BIT(1) | BIT(2)); 691 - if (drvdata->mode & ETM_MODE_LOAD) 692 - /* 0b01 Trace load instructions as P0 instructions */ 693 - drvdata->cfg |= BIT(1); 694 - if (drvdata->mode & ETM_MODE_STORE) 695 - /* 0b10 Trace store instructions as P0 instructions */ 696 - drvdata->cfg |= BIT(2); 697 - if (drvdata->mode & ETM_MODE_LOAD_STORE) 698 - /* 699 - * 0b11 Trace load and store instructions 700 - * as P0 instructions 701 - */ 702 - drvdata->cfg |= BIT(1) | BIT(2); 703 - } 704 - 705 - /* bit[3], Branch broadcast mode */ 706 - if ((drvdata->mode & ETM_MODE_BB) && (drvdata->trcbb == true)) 707 - drvdata->cfg |= BIT(3); 708 - else 709 - drvdata->cfg &= ~BIT(3); 710 - 711 - /* bit[4], Cycle counting instruction trace bit */ 712 - if ((drvdata->mode & ETMv4_MODE_CYCACC) && 713 - (drvdata->trccci == true)) 714 - drvdata->cfg |= BIT(4); 715 - else 716 - drvdata->cfg &= ~BIT(4); 717 - 718 - /* bit[6], Context ID tracing bit */ 719 - if ((drvdata->mode & ETMv4_MODE_CTXID) && (drvdata->ctxid_size)) 720 - drvdata->cfg |= BIT(6); 721 - else 722 - drvdata->cfg &= 
~BIT(6); 723 - 724 - if ((drvdata->mode & ETM_MODE_VMID) && (drvdata->vmid_size)) 725 - drvdata->cfg |= BIT(7); 726 - else 727 - drvdata->cfg &= ~BIT(7); 728 - 729 - /* bits[10:8], Conditional instruction tracing bit */ 730 - mode = ETM_MODE_COND(drvdata->mode); 731 - if (drvdata->trccond == true) { 732 - drvdata->cfg &= ~(BIT(8) | BIT(9) | BIT(10)); 733 - drvdata->cfg |= mode << 8; 734 - } 735 - 736 - /* bit[11], Global timestamp tracing bit */ 737 - if ((drvdata->mode & ETMv4_MODE_TIMESTAMP) && (drvdata->ts_size)) 738 - drvdata->cfg |= BIT(11); 739 - else 740 - drvdata->cfg &= ~BIT(11); 741 - 742 - /* bit[12], Return stack enable bit */ 743 - if ((drvdata->mode & ETM_MODE_RETURNSTACK) && 744 - (drvdata->retstack == true)) 745 - drvdata->cfg |= BIT(12); 746 - else 747 - drvdata->cfg &= ~BIT(12); 748 - 749 - /* bits[14:13], Q element enable field */ 750 - mode = ETM_MODE_QELEM(drvdata->mode); 751 - /* start by clearing QE bits */ 752 - drvdata->cfg &= ~(BIT(13) | BIT(14)); 753 - /* if supported, Q elements with instruction counts are enabled */ 754 - if ((mode & BIT(0)) && (drvdata->q_support & BIT(0))) 755 - drvdata->cfg |= BIT(13); 756 - /* 757 - * if supported, Q elements with and without instruction 758 - * counts are enabled 759 - */ 760 - if ((mode & BIT(1)) && (drvdata->q_support & BIT(1))) 761 - drvdata->cfg |= BIT(14); 762 - 763 - /* bit[11], AMBA Trace Bus (ATB) trigger enable bit */ 764 - if ((drvdata->mode & ETM_MODE_ATB_TRIGGER) && 765 - (drvdata->atbtrig == true)) 766 - drvdata->eventctrl1 |= BIT(11); 767 - else 768 - drvdata->eventctrl1 &= ~BIT(11); 769 - 770 - /* bit[12], Low-power state behavior override bit */ 771 - if ((drvdata->mode & ETM_MODE_LPOVERRIDE) && 772 - (drvdata->lpoverride == true)) 773 - drvdata->eventctrl1 |= BIT(12); 774 - else 775 - drvdata->eventctrl1 &= ~BIT(12); 776 - 777 - /* bit[8], Instruction stall bit */ 778 - if (drvdata->mode & ETM_MODE_ISTALL_EN) 779 - drvdata->stall_ctrl |= BIT(8); 780 - else 781 - drvdata->stall_ctrl 
&= ~BIT(8); 782 - 783 - /* bit[10], Prioritize instruction trace bit */ 784 - if (drvdata->mode & ETM_MODE_INSTPRIO) 785 - drvdata->stall_ctrl |= BIT(10); 786 - else 787 - drvdata->stall_ctrl &= ~BIT(10); 788 - 789 - /* bit[13], Trace overflow prevention bit */ 790 - if ((drvdata->mode & ETM_MODE_NOOVERFLOW) && 791 - (drvdata->nooverflow == true)) 792 - drvdata->stall_ctrl |= BIT(13); 793 - else 794 - drvdata->stall_ctrl &= ~BIT(13); 795 - 796 - /* bit[9] Start/stop logic control bit */ 797 - if (drvdata->mode & ETM_MODE_VIEWINST_STARTSTOP) 798 - drvdata->vinst_ctrl |= BIT(9); 799 - else 800 - drvdata->vinst_ctrl &= ~BIT(9); 801 - 802 - /* bit[10], Whether a trace unit must trace a Reset exception */ 803 - if (drvdata->mode & ETM_MODE_TRACE_RESET) 804 - drvdata->vinst_ctrl |= BIT(10); 805 - else 806 - drvdata->vinst_ctrl &= ~BIT(10); 807 - 808 - /* bit[11], Whether a trace unit must trace a system error exception */ 809 - if ((drvdata->mode & ETM_MODE_TRACE_ERR) && 810 - (drvdata->trc_error == true)) 811 - drvdata->vinst_ctrl |= BIT(11); 812 - else 813 - drvdata->vinst_ctrl &= ~BIT(11); 814 - 815 - spin_unlock(&drvdata->spinlock); 816 - return size; 817 - } 818 - static DEVICE_ATTR_RW(mode); 819 - 820 - static ssize_t pe_show(struct device *dev, 821 - struct device_attribute *attr, 822 - char *buf) 823 - { 824 - unsigned long val; 825 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 826 - 827 - val = drvdata->pe_sel; 828 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 829 - } 830 - 831 - static ssize_t pe_store(struct device *dev, 832 - struct device_attribute *attr, 833 - const char *buf, size_t size) 834 - { 835 - unsigned long val; 836 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 837 - 838 - if (kstrtoul(buf, 16, &val)) 839 - return -EINVAL; 840 - 841 - spin_lock(&drvdata->spinlock); 842 - if (val > drvdata->nr_pe) { 843 - spin_unlock(&drvdata->spinlock); 844 - return -EINVAL; 845 - } 846 - 847 - drvdata->pe_sel = val; 848 - 
spin_unlock(&drvdata->spinlock); 849 - return size; 850 - } 851 - static DEVICE_ATTR_RW(pe); 852 - 853 - static ssize_t event_show(struct device *dev, 854 - struct device_attribute *attr, 855 - char *buf) 856 - { 857 - unsigned long val; 858 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 859 - 860 - val = drvdata->eventctrl0; 861 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 862 - } 863 - 864 - static ssize_t event_store(struct device *dev, 865 - struct device_attribute *attr, 866 - const char *buf, size_t size) 867 - { 868 - unsigned long val; 869 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 870 - 871 - if (kstrtoul(buf, 16, &val)) 872 - return -EINVAL; 873 - 874 - spin_lock(&drvdata->spinlock); 875 - switch (drvdata->nr_event) { 876 - case 0x0: 877 - /* EVENT0, bits[7:0] */ 878 - drvdata->eventctrl0 = val & 0xFF; 879 - break; 880 - case 0x1: 881 - /* EVENT1, bits[15:8] */ 882 - drvdata->eventctrl0 = val & 0xFFFF; 883 - break; 884 - case 0x2: 885 - /* EVENT2, bits[23:16] */ 886 - drvdata->eventctrl0 = val & 0xFFFFFF; 887 - break; 888 - case 0x3: 889 - /* EVENT3, bits[31:24] */ 890 - drvdata->eventctrl0 = val; 891 - break; 892 - default: 893 - break; 894 - } 895 - spin_unlock(&drvdata->spinlock); 896 - return size; 897 - } 898 - static DEVICE_ATTR_RW(event); 899 - 900 - static ssize_t event_instren_show(struct device *dev, 901 - struct device_attribute *attr, 902 - char *buf) 903 - { 904 - unsigned long val; 905 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 906 - 907 - val = BMVAL(drvdata->eventctrl1, 0, 3); 908 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 909 - } 910 - 911 - static ssize_t event_instren_store(struct device *dev, 912 - struct device_attribute *attr, 913 - const char *buf, size_t size) 914 - { 915 - unsigned long val; 916 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 917 - 918 - if (kstrtoul(buf, 16, &val)) 919 - return -EINVAL; 920 - 921 - 
spin_lock(&drvdata->spinlock); 922 - /* start by clearing all instruction event enable bits */ 923 - drvdata->eventctrl1 &= ~(BIT(0) | BIT(1) | BIT(2) | BIT(3)); 924 - switch (drvdata->nr_event) { 925 - case 0x0: 926 - /* generate Event element for event 1 */ 927 - drvdata->eventctrl1 |= val & BIT(1); 928 - break; 929 - case 0x1: 930 - /* generate Event element for event 1 and 2 */ 931 - drvdata->eventctrl1 |= val & (BIT(0) | BIT(1)); 932 - break; 933 - case 0x2: 934 - /* generate Event element for event 1, 2 and 3 */ 935 - drvdata->eventctrl1 |= val & (BIT(0) | BIT(1) | BIT(2)); 936 - break; 937 - case 0x3: 938 - /* generate Event element for all 4 events */ 939 - drvdata->eventctrl1 |= val & 0xF; 940 - break; 941 - default: 942 - break; 943 - } 944 - spin_unlock(&drvdata->spinlock); 945 - return size; 946 - } 947 - static DEVICE_ATTR_RW(event_instren); 948 - 949 - static ssize_t event_ts_show(struct device *dev, 950 - struct device_attribute *attr, 951 - char *buf) 952 - { 953 - unsigned long val; 954 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 955 - 956 - val = drvdata->ts_ctrl; 957 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 958 - } 959 - 960 - static ssize_t event_ts_store(struct device *dev, 961 - struct device_attribute *attr, 962 - const char *buf, size_t size) 963 - { 964 - unsigned long val; 965 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 966 - 967 - if (kstrtoul(buf, 16, &val)) 968 - return -EINVAL; 969 - if (!drvdata->ts_size) 970 - return -EINVAL; 971 - 972 - drvdata->ts_ctrl = val & ETMv4_EVENT_MASK; 973 - return size; 974 - } 975 - static DEVICE_ATTR_RW(event_ts); 976 - 977 - static ssize_t syncfreq_show(struct device *dev, 978 - struct device_attribute *attr, 979 - char *buf) 980 - { 981 - unsigned long val; 982 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 983 - 984 - val = drvdata->syncfreq; 985 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 986 - } 987 - 988 - static ssize_t 
syncfreq_store(struct device *dev, 989 - struct device_attribute *attr, 990 - const char *buf, size_t size) 991 - { 992 - unsigned long val; 993 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 994 - 995 - if (kstrtoul(buf, 16, &val)) 996 - return -EINVAL; 997 - if (drvdata->syncpr == true) 998 - return -EINVAL; 999 - 1000 - drvdata->syncfreq = val & ETMv4_SYNC_MASK; 1001 - return size; 1002 - } 1003 - static DEVICE_ATTR_RW(syncfreq); 1004 - 1005 - static ssize_t cyc_threshold_show(struct device *dev, 1006 - struct device_attribute *attr, 1007 - char *buf) 1008 - { 1009 - unsigned long val; 1010 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1011 - 1012 - val = drvdata->ccctlr; 1013 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1014 - } 1015 - 1016 - static ssize_t cyc_threshold_store(struct device *dev, 1017 - struct device_attribute *attr, 1018 - const char *buf, size_t size) 1019 - { 1020 - unsigned long val; 1021 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1022 - 1023 - if (kstrtoul(buf, 16, &val)) 1024 - return -EINVAL; 1025 - if (val < drvdata->ccitmin) 1026 - return -EINVAL; 1027 - 1028 - drvdata->ccctlr = val & ETM_CYC_THRESHOLD_MASK; 1029 - return size; 1030 - } 1031 - static DEVICE_ATTR_RW(cyc_threshold); 1032 - 1033 - static ssize_t bb_ctrl_show(struct device *dev, 1034 - struct device_attribute *attr, 1035 - char *buf) 1036 - { 1037 - unsigned long val; 1038 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1039 - 1040 - val = drvdata->bb_ctrl; 1041 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1042 - } 1043 - 1044 - static ssize_t bb_ctrl_store(struct device *dev, 1045 - struct device_attribute *attr, 1046 - const char *buf, size_t size) 1047 - { 1048 - unsigned long val; 1049 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1050 - 1051 - if (kstrtoul(buf, 16, &val)) 1052 - return -EINVAL; 1053 - if (drvdata->trcbb == false) 1054 - return -EINVAL; 1055 - if 
(!drvdata->nr_addr_cmp) 1056 - return -EINVAL; 1057 - /* 1058 - * Bit[7:0] selects which address range comparator is used for 1059 - * branch broadcast control. 1060 - */ 1061 - if (BMVAL(val, 0, 7) > drvdata->nr_addr_cmp) 1062 - return -EINVAL; 1063 - 1064 - drvdata->bb_ctrl = val; 1065 - return size; 1066 - } 1067 - static DEVICE_ATTR_RW(bb_ctrl); 1068 - 1069 - static ssize_t event_vinst_show(struct device *dev, 1070 - struct device_attribute *attr, 1071 - char *buf) 1072 - { 1073 - unsigned long val; 1074 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1075 - 1076 - val = drvdata->vinst_ctrl & ETMv4_EVENT_MASK; 1077 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1078 - } 1079 - 1080 - static ssize_t event_vinst_store(struct device *dev, 1081 - struct device_attribute *attr, 1082 - const char *buf, size_t size) 1083 - { 1084 - unsigned long val; 1085 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1086 - 1087 - if (kstrtoul(buf, 16, &val)) 1088 - return -EINVAL; 1089 - 1090 - spin_lock(&drvdata->spinlock); 1091 - val &= ETMv4_EVENT_MASK; 1092 - drvdata->vinst_ctrl &= ~ETMv4_EVENT_MASK; 1093 - drvdata->vinst_ctrl |= val; 1094 - spin_unlock(&drvdata->spinlock); 1095 - return size; 1096 - } 1097 - static DEVICE_ATTR_RW(event_vinst); 1098 - 1099 - static ssize_t s_exlevel_vinst_show(struct device *dev, 1100 - struct device_attribute *attr, 1101 - char *buf) 1102 - { 1103 - unsigned long val; 1104 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1105 - 1106 - val = BMVAL(drvdata->vinst_ctrl, 16, 19); 1107 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1108 - } 1109 - 1110 - static ssize_t s_exlevel_vinst_store(struct device *dev, 1111 - struct device_attribute *attr, 1112 - const char *buf, size_t size) 1113 - { 1114 - unsigned long val; 1115 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1116 - 1117 - if (kstrtoul(buf, 16, &val)) 1118 - return -EINVAL; 1119 - 1120 - spin_lock(&drvdata->spinlock); 
1121 - /* clear all EXLEVEL_S bits (bit[18] is never implemented) */ 1122 - drvdata->vinst_ctrl &= ~(BIT(16) | BIT(17) | BIT(19)); 1123 - /* enable instruction tracing for corresponding exception level */ 1124 - val &= drvdata->s_ex_level; 1125 - drvdata->vinst_ctrl |= (val << 16); 1126 - spin_unlock(&drvdata->spinlock); 1127 - return size; 1128 - } 1129 - static DEVICE_ATTR_RW(s_exlevel_vinst); 1130 - 1131 - static ssize_t ns_exlevel_vinst_show(struct device *dev, 1132 - struct device_attribute *attr, 1133 - char *buf) 1134 - { 1135 - unsigned long val; 1136 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1137 - 1138 - /* EXLEVEL_NS, bits[23:20] */ 1139 - val = BMVAL(drvdata->vinst_ctrl, 20, 23); 1140 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1141 - } 1142 - 1143 - static ssize_t ns_exlevel_vinst_store(struct device *dev, 1144 - struct device_attribute *attr, 1145 - const char *buf, size_t size) 1146 - { 1147 - unsigned long val; 1148 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1149 - 1150 - if (kstrtoul(buf, 16, &val)) 1151 - return -EINVAL; 1152 - 1153 - spin_lock(&drvdata->spinlock); 1154 - /* clear EXLEVEL_NS bits (bit[23] is never implemented) */ 1155 - drvdata->vinst_ctrl &= ~(BIT(20) | BIT(21) | BIT(22)); 1156 - /* enable instruction tracing for corresponding exception level */ 1157 - val &= drvdata->ns_ex_level; 1158 - drvdata->vinst_ctrl |= (val << 20); 1159 - spin_unlock(&drvdata->spinlock); 1160 - return size; 1161 - } 1162 - static DEVICE_ATTR_RW(ns_exlevel_vinst); 1163 - 1164 - static ssize_t addr_idx_show(struct device *dev, 1165 - struct device_attribute *attr, 1166 - char *buf) 1167 - { 1168 - unsigned long val; 1169 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1170 - 1171 - val = drvdata->addr_idx; 1172 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1173 - } 1174 - 1175 - static ssize_t addr_idx_store(struct device *dev, 1176 - struct device_attribute *attr, 1177 - const char *buf,
size_t size) 1178 - { 1179 - unsigned long val; 1180 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1181 - 1182 - if (kstrtoul(buf, 16, &val)) 1183 - return -EINVAL; 1184 - if (val >= drvdata->nr_addr_cmp * 2) 1185 - return -EINVAL; 1186 - 1187 - /* 1188 - * Use spinlock to ensure index doesn't change while it gets 1189 - * dereferenced multiple times within a spinlock block elsewhere. 1190 - */ 1191 - spin_lock(&drvdata->spinlock); 1192 - drvdata->addr_idx = val; 1193 - spin_unlock(&drvdata->spinlock); 1194 - return size; 1195 - } 1196 - static DEVICE_ATTR_RW(addr_idx); 1197 - 1198 - static ssize_t addr_instdatatype_show(struct device *dev, 1199 - struct device_attribute *attr, 1200 - char *buf) 1201 - { 1202 - ssize_t len; 1203 - u8 val, idx; 1204 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1205 - 1206 - spin_lock(&drvdata->spinlock); 1207 - idx = drvdata->addr_idx; 1208 - val = BMVAL(drvdata->addr_acc[idx], 0, 1); 1209 - len = scnprintf(buf, PAGE_SIZE, "%s\n", 1210 - val == ETM_INSTR_ADDR ? "instr" : 1211 - (val == ETM_DATA_LOAD_ADDR ? "data_load" : 1212 - (val == ETM_DATA_STORE_ADDR ? 
"data_store" : 1213 - "data_load_store"))); 1214 - spin_unlock(&drvdata->spinlock); 1215 - return len; 1216 - } 1217 - 1218 - static ssize_t addr_instdatatype_store(struct device *dev, 1219 - struct device_attribute *attr, 1220 - const char *buf, size_t size) 1221 - { 1222 - u8 idx; 1223 - char str[20] = ""; 1224 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1225 - 1226 - if (strlen(buf) >= 20) 1227 - return -EINVAL; 1228 - if (sscanf(buf, "%s", str) != 1) 1229 - return -EINVAL; 1230 - 1231 - spin_lock(&drvdata->spinlock); 1232 - idx = drvdata->addr_idx; 1233 - if (!strcmp(str, "instr")) 1234 - /* TYPE, bits[1:0] */ 1235 - drvdata->addr_acc[idx] &= ~(BIT(0) | BIT(1)); 1236 - 1237 - spin_unlock(&drvdata->spinlock); 1238 - return size; 1239 - } 1240 - static DEVICE_ATTR_RW(addr_instdatatype); 1241 - 1242 - static ssize_t addr_single_show(struct device *dev, 1243 - struct device_attribute *attr, 1244 - char *buf) 1245 - { 1246 - u8 idx; 1247 - unsigned long val; 1248 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1249 - 1250 - idx = drvdata->addr_idx; 1251 - spin_lock(&drvdata->spinlock); 1252 - if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1253 - drvdata->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) { 1254 - spin_unlock(&drvdata->spinlock); 1255 - return -EPERM; 1256 - } 1257 - val = (unsigned long)drvdata->addr_val[idx]; 1258 - spin_unlock(&drvdata->spinlock); 1259 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1260 - } 1261 - 1262 - static ssize_t addr_single_store(struct device *dev, 1263 - struct device_attribute *attr, 1264 - const char *buf, size_t size) 1265 - { 1266 - u8 idx; 1267 - unsigned long val; 1268 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1269 - 1270 - if (kstrtoul(buf, 16, &val)) 1271 - return -EINVAL; 1272 - 1273 - spin_lock(&drvdata->spinlock); 1274 - idx = drvdata->addr_idx; 1275 - if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1276 - drvdata->addr_type[idx] == 
ETM_ADDR_TYPE_SINGLE)) { 1277 - spin_unlock(&drvdata->spinlock); 1278 - return -EPERM; 1279 - } 1280 - 1281 - drvdata->addr_val[idx] = (u64)val; 1282 - drvdata->addr_type[idx] = ETM_ADDR_TYPE_SINGLE; 1283 - spin_unlock(&drvdata->spinlock); 1284 - return size; 1285 - } 1286 - static DEVICE_ATTR_RW(addr_single); 1287 - 1288 - static ssize_t addr_range_show(struct device *dev, 1289 - struct device_attribute *attr, 1290 - char *buf) 1291 - { 1292 - u8 idx; 1293 - unsigned long val1, val2; 1294 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1295 - 1296 - spin_lock(&drvdata->spinlock); 1297 - idx = drvdata->addr_idx; 1298 - if (idx % 2 != 0) { 1299 - spin_unlock(&drvdata->spinlock); 1300 - return -EPERM; 1301 - } 1302 - if (!((drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE && 1303 - drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) || 1304 - (drvdata->addr_type[idx] == ETM_ADDR_TYPE_RANGE && 1305 - drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) { 1306 - spin_unlock(&drvdata->spinlock); 1307 - return -EPERM; 1308 - } 1309 - 1310 - val1 = (unsigned long)drvdata->addr_val[idx]; 1311 - val2 = (unsigned long)drvdata->addr_val[idx + 1]; 1312 - spin_unlock(&drvdata->spinlock); 1313 - return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2); 1314 - } 1315 - 1316 - static ssize_t addr_range_store(struct device *dev, 1317 - struct device_attribute *attr, 1318 - const char *buf, size_t size) 1319 - { 1320 - u8 idx; 1321 - unsigned long val1, val2; 1322 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1323 - 1324 - if (sscanf(buf, "%lx %lx", &val1, &val2) != 2) 1325 - return -EINVAL; 1326 - /* lower address comparator cannot have a higher address value */ 1327 - if (val1 > val2) 1328 - return -EINVAL; 1329 - 1330 - spin_lock(&drvdata->spinlock); 1331 - idx = drvdata->addr_idx; 1332 - if (idx % 2 != 0) { 1333 - spin_unlock(&drvdata->spinlock); 1334 - return -EPERM; 1335 - } 1336 - 1337 - if (!((drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE && 
1338 - drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) || 1339 - (drvdata->addr_type[idx] == ETM_ADDR_TYPE_RANGE && 1340 - drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) { 1341 - spin_unlock(&drvdata->spinlock); 1342 - return -EPERM; 1343 - } 1344 - 1345 - drvdata->addr_val[idx] = (u64)val1; 1346 - drvdata->addr_type[idx] = ETM_ADDR_TYPE_RANGE; 1347 - drvdata->addr_val[idx + 1] = (u64)val2; 1348 - drvdata->addr_type[idx + 1] = ETM_ADDR_TYPE_RANGE; 1349 - /* 1350 - * Program include or exclude control bits for vinst or vdata 1351 - * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE 1352 - */ 1353 - if (drvdata->mode & ETM_MODE_EXCLUDE) 1354 - etm4_set_mode_exclude(drvdata, true); 1355 - else 1356 - etm4_set_mode_exclude(drvdata, false); 1357 - 1358 - spin_unlock(&drvdata->spinlock); 1359 - return size; 1360 - } 1361 - static DEVICE_ATTR_RW(addr_range); 1362 - 1363 - static ssize_t addr_start_show(struct device *dev, 1364 - struct device_attribute *attr, 1365 - char *buf) 1366 - { 1367 - u8 idx; 1368 - unsigned long val; 1369 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1370 - 1371 - spin_lock(&drvdata->spinlock); 1372 - idx = drvdata->addr_idx; 1373 - 1374 - if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1375 - drvdata->addr_type[idx] == ETM_ADDR_TYPE_START)) { 1376 - spin_unlock(&drvdata->spinlock); 1377 - return -EPERM; 1378 - } 1379 - 1380 - val = (unsigned long)drvdata->addr_val[idx]; 1381 - spin_unlock(&drvdata->spinlock); 1382 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1383 - } 1384 - 1385 - static ssize_t addr_start_store(struct device *dev, 1386 - struct device_attribute *attr, 1387 - const char *buf, size_t size) 1388 - { 1389 - u8 idx; 1390 - unsigned long val; 1391 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1392 - 1393 - if (kstrtoul(buf, 16, &val)) 1394 - return -EINVAL; 1395 - 1396 - spin_lock(&drvdata->spinlock); 1397 - idx = drvdata->addr_idx; 1398 - if (!drvdata->nr_addr_cmp) 
{ 1399 - spin_unlock(&drvdata->spinlock); 1400 - return -EINVAL; 1401 - } 1402 - if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1403 - drvdata->addr_type[idx] == ETM_ADDR_TYPE_START)) { 1404 - spin_unlock(&drvdata->spinlock); 1405 - return -EPERM; 1406 - } 1407 - 1408 - drvdata->addr_val[idx] = (u64)val; 1409 - drvdata->addr_type[idx] = ETM_ADDR_TYPE_START; 1410 - drvdata->vissctlr |= BIT(idx); 1411 - /* SSSTATUS, bit[9] - turn on start/stop logic */ 1412 - drvdata->vinst_ctrl |= BIT(9); 1413 - spin_unlock(&drvdata->spinlock); 1414 - return size; 1415 - } 1416 - static DEVICE_ATTR_RW(addr_start); 1417 - 1418 - static ssize_t addr_stop_show(struct device *dev, 1419 - struct device_attribute *attr, 1420 - char *buf) 1421 - { 1422 - u8 idx; 1423 - unsigned long val; 1424 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1425 - 1426 - spin_lock(&drvdata->spinlock); 1427 - idx = drvdata->addr_idx; 1428 - 1429 - if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1430 - drvdata->addr_type[idx] == ETM_ADDR_TYPE_STOP)) { 1431 - spin_unlock(&drvdata->spinlock); 1432 - return -EPERM; 1433 - } 1434 - 1435 - val = (unsigned long)drvdata->addr_val[idx]; 1436 - spin_unlock(&drvdata->spinlock); 1437 - return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1438 - } 1439 - 1440 - static ssize_t addr_stop_store(struct device *dev, 1441 - struct device_attribute *attr, 1442 - const char *buf, size_t size) 1443 - { 1444 - u8 idx; 1445 - unsigned long val; 1446 - struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1447 - 1448 - if (kstrtoul(buf, 16, &val)) 1449 - return -EINVAL; 1450 - 1451 - spin_lock(&drvdata->spinlock); 1452 - idx = drvdata->addr_idx; 1453 - if (!drvdata->nr_addr_cmp) { 1454 - spin_unlock(&drvdata->spinlock); 1455 - return -EINVAL; 1456 - } 1457 - if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE || 1458 - drvdata->addr_type[idx] == ETM_ADDR_TYPE_STOP)) { 1459 - spin_unlock(&drvdata->spinlock); 1460 - return -EPERM; 1461 - } 1462 - 
1463 -	drvdata->addr_val[idx] = (u64)val;
1464 -	drvdata->addr_type[idx] = ETM_ADDR_TYPE_STOP;
1465 -	drvdata->vissctlr |= BIT(idx + 16);
1466 -	/* SSSTATUS, bit[9] - turn on start/stop logic */
1467 -	drvdata->vinst_ctrl |= BIT(9);
1468 -	spin_unlock(&drvdata->spinlock);
1469 -	return size;
1470 - }
1471 - static DEVICE_ATTR_RW(addr_stop);
1472 -
1473 - static ssize_t addr_ctxtype_show(struct device *dev,
1474 -				 struct device_attribute *attr,
1475 -				 char *buf)
1476 - {
1477 -	ssize_t len;
1478 -	u8 idx, val;
1479 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1480 -
1481 -	spin_lock(&drvdata->spinlock);
1482 -	idx = drvdata->addr_idx;
1483 -	/* CONTEXTTYPE, bits[3:2] */
1484 -	val = BMVAL(drvdata->addr_acc[idx], 2, 3);
1485 -	len = scnprintf(buf, PAGE_SIZE, "%s\n", val == ETM_CTX_NONE ? "none" :
1486 -			(val == ETM_CTX_CTXID ? "ctxid" :
1487 -			(val == ETM_CTX_VMID ? "vmid" : "all")));
1488 -	spin_unlock(&drvdata->spinlock);
1489 -	return len;
1490 - }
1491 -
1492 - static ssize_t addr_ctxtype_store(struct device *dev,
1493 -				  struct device_attribute *attr,
1494 -				  const char *buf, size_t size)
1495 - {
1496 -	u8 idx;
1497 -	char str[10] = "";
1498 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1499 -
1500 -	if (strlen(buf) >= 10)
1501 -		return -EINVAL;
1502 -	if (sscanf(buf, "%s", str) != 1)
1503 -		return -EINVAL;
1504 -
1505 -	spin_lock(&drvdata->spinlock);
1506 -	idx = drvdata->addr_idx;
1507 -	if (!strcmp(str, "none"))
1508 -		/* start by clearing context type bits */
1509 -		drvdata->addr_acc[idx] &= ~(BIT(2) | BIT(3));
1510 -	else if (!strcmp(str, "ctxid")) {
1511 -		/* 0b01 The trace unit performs a Context ID */
1512 -		if (drvdata->numcidc) {
1513 -			drvdata->addr_acc[idx] |= BIT(2);
1514 -			drvdata->addr_acc[idx] &= ~BIT(3);
1515 -		}
1516 -	} else if (!strcmp(str, "vmid")) {
1517 -		/* 0b10 The trace unit performs a VMID */
1518 -		if (drvdata->numvmidc) {
1519 -			drvdata->addr_acc[idx] &= ~BIT(2);
1520 -			drvdata->addr_acc[idx] |= BIT(3);
1521 -		}
1522 -	} else if (!strcmp(str, "all")) {
1523 -		/*
1524 -		 * 0b11 The trace unit performs a Context ID
1525 -		 * comparison and a VMID
1526 -		 */
1527 -		if (drvdata->numcidc)
1528 -			drvdata->addr_acc[idx] |= BIT(2);
1529 -		if (drvdata->numvmidc)
1530 -			drvdata->addr_acc[idx] |= BIT(3);
1531 -	}
1532 -	spin_unlock(&drvdata->spinlock);
1533 -	return size;
1534 - }
1535 - static DEVICE_ATTR_RW(addr_ctxtype);
1536 -
1537 - static ssize_t addr_context_show(struct device *dev,
1538 -				 struct device_attribute *attr,
1539 -				 char *buf)
1540 - {
1541 -	u8 idx;
1542 -	unsigned long val;
1543 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1544 -
1545 -	spin_lock(&drvdata->spinlock);
1546 -	idx = drvdata->addr_idx;
1547 -	/* context ID comparator bits[6:4] */
1548 -	val = BMVAL(drvdata->addr_acc[idx], 4, 6);
1549 -	spin_unlock(&drvdata->spinlock);
1550 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1551 - }
1552 -
1553 - static ssize_t addr_context_store(struct device *dev,
1554 -				  struct device_attribute *attr,
1555 -				  const char *buf, size_t size)
1556 - {
1557 -	u8 idx;
1558 -	unsigned long val;
1559 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1560 -
1561 -	if (kstrtoul(buf, 16, &val))
1562 -		return -EINVAL;
1563 -	if ((drvdata->numcidc <= 1) && (drvdata->numvmidc <= 1))
1564 -		return -EINVAL;
1565 -	if (val >= (drvdata->numcidc >= drvdata->numvmidc ?
1566 -		    drvdata->numcidc : drvdata->numvmidc))
1567 -		return -EINVAL;
1568 -
1569 -	spin_lock(&drvdata->spinlock);
1570 -	idx = drvdata->addr_idx;
1571 -	/* clear context ID comparator bits[6:4] */
1572 -	drvdata->addr_acc[idx] &= ~(BIT(4) | BIT(5) | BIT(6));
1573 -	drvdata->addr_acc[idx] |= (val << 4);
1574 -	spin_unlock(&drvdata->spinlock);
1575 -	return size;
1576 - }
1577 - static DEVICE_ATTR_RW(addr_context);
1578 -
1579 - static ssize_t seq_idx_show(struct device *dev,
1580 -			    struct device_attribute *attr,
1581 -			    char *buf)
1582 - {
1583 -	unsigned long val;
1584 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1585 -
1586 -	val = drvdata->seq_idx;
1587 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1588 - }
1589 -
1590 - static ssize_t seq_idx_store(struct device *dev,
1591 -			     struct device_attribute *attr,
1592 -			     const char *buf, size_t size)
1593 - {
1594 -	unsigned long val;
1595 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1596 -
1597 -	if (kstrtoul(buf, 16, &val))
1598 -		return -EINVAL;
1599 -	if (val >= drvdata->nrseqstate - 1)
1600 -		return -EINVAL;
1601 -
1602 -	/*
1603 -	 * Use spinlock to ensure index doesn't change while it gets
1604 -	 * dereferenced multiple times within a spinlock block elsewhere.
1605 -	 */
1606 -	spin_lock(&drvdata->spinlock);
1607 -	drvdata->seq_idx = val;
1608 -	spin_unlock(&drvdata->spinlock);
1609 -	return size;
1610 - }
1611 - static DEVICE_ATTR_RW(seq_idx);
1612 -
1613 - static ssize_t seq_state_show(struct device *dev,
1614 -			      struct device_attribute *attr,
1615 -			      char *buf)
1616 - {
1617 -	unsigned long val;
1618 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1619 -
1620 -	val = drvdata->seq_state;
1621 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1622 - }
1623 -
1624 - static ssize_t seq_state_store(struct device *dev,
1625 -			       struct device_attribute *attr,
1626 -			       const char *buf, size_t size)
1627 - {
1628 -	unsigned long val;
1629 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1630 -
1631 -	if (kstrtoul(buf, 16, &val))
1632 -		return -EINVAL;
1633 -	if (val >= drvdata->nrseqstate)
1634 -		return -EINVAL;
1635 -
1636 -	drvdata->seq_state = val;
1637 -	return size;
1638 - }
1639 - static DEVICE_ATTR_RW(seq_state);
1640 -
1641 - static ssize_t seq_event_show(struct device *dev,
1642 -			      struct device_attribute *attr,
1643 -			      char *buf)
1644 - {
1645 -	u8 idx;
1646 -	unsigned long val;
1647 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1648 -
1649 -	spin_lock(&drvdata->spinlock);
1650 -	idx = drvdata->seq_idx;
1651 -	val = drvdata->seq_ctrl[idx];
1652 -	spin_unlock(&drvdata->spinlock);
1653 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1654 - }
1655 -
1656 - static ssize_t seq_event_store(struct device *dev,
1657 -			       struct device_attribute *attr,
1658 -			       const char *buf, size_t size)
1659 - {
1660 -	u8 idx;
1661 -	unsigned long val;
1662 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1663 -
1664 -	if (kstrtoul(buf, 16, &val))
1665 -		return -EINVAL;
1666 -
1667 -	spin_lock(&drvdata->spinlock);
1668 -	idx = drvdata->seq_idx;
1669 -	/* RST, bits[7:0] */
1670 -	drvdata->seq_ctrl[idx] = val & 0xFF;
1671 -	spin_unlock(&drvdata->spinlock);
1672 -	return size;
1673 - }
1674 - static DEVICE_ATTR_RW(seq_event);
1675 -
1676 - static ssize_t seq_reset_event_show(struct device *dev,
1677 -				    struct device_attribute *attr,
1678 -				    char *buf)
1679 - {
1680 -	unsigned long val;
1681 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1682 -
1683 -	val = drvdata->seq_rst;
1684 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1685 - }
1686 -
1687 - static ssize_t seq_reset_event_store(struct device *dev,
1688 -				     struct device_attribute *attr,
1689 -				     const char *buf, size_t size)
1690 - {
1691 -	unsigned long val;
1692 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1693 -
1694 -	if (kstrtoul(buf, 16, &val))
1695 -		return -EINVAL;
1696 -	if (!(drvdata->nrseqstate))
1697 -		return -EINVAL;
1698 -
1699 -	drvdata->seq_rst = val & ETMv4_EVENT_MASK;
1700 -	return size;
1701 - }
1702 - static DEVICE_ATTR_RW(seq_reset_event);
1703 -
1704 - static ssize_t cntr_idx_show(struct device *dev,
1705 -			     struct device_attribute *attr,
1706 -			     char *buf)
1707 - {
1708 -	unsigned long val;
1709 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1710 -
1711 -	val = drvdata->cntr_idx;
1712 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1713 - }
1714 -
1715 - static ssize_t cntr_idx_store(struct device *dev,
1716 -			      struct device_attribute *attr,
1717 -			      const char *buf, size_t size)
1718 - {
1719 -	unsigned long val;
1720 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1721 -
1722 -	if (kstrtoul(buf, 16, &val))
1723 -		return -EINVAL;
1724 -	if (val >= drvdata->nr_cntr)
1725 -		return -EINVAL;
1726 -
1727 -	/*
1728 -	 * Use spinlock to ensure index doesn't change while it gets
1729 -	 * dereferenced multiple times within a spinlock block elsewhere.
1730 -	 */
1731 -	spin_lock(&drvdata->spinlock);
1732 -	drvdata->cntr_idx = val;
1733 -	spin_unlock(&drvdata->spinlock);
1734 -	return size;
1735 - }
1736 - static DEVICE_ATTR_RW(cntr_idx);
1737 -
1738 - static ssize_t cntrldvr_show(struct device *dev,
1739 -			     struct device_attribute *attr,
1740 -			     char *buf)
1741 - {
1742 -	u8 idx;
1743 -	unsigned long val;
1744 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1745 -
1746 -	spin_lock(&drvdata->spinlock);
1747 -	idx = drvdata->cntr_idx;
1748 -	val = drvdata->cntrldvr[idx];
1749 -	spin_unlock(&drvdata->spinlock);
1750 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1751 - }
1752 -
1753 - static ssize_t cntrldvr_store(struct device *dev,
1754 -			      struct device_attribute *attr,
1755 -			      const char *buf, size_t size)
1756 - {
1757 -	u8 idx;
1758 -	unsigned long val;
1759 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1760 -
1761 -	if (kstrtoul(buf, 16, &val))
1762 -		return -EINVAL;
1763 -	if (val > ETM_CNTR_MAX_VAL)
1764 -		return -EINVAL;
1765 -
1766 -	spin_lock(&drvdata->spinlock);
1767 -	idx = drvdata->cntr_idx;
1768 -	drvdata->cntrldvr[idx] = val;
1769 -	spin_unlock(&drvdata->spinlock);
1770 -	return size;
1771 - }
1772 - static DEVICE_ATTR_RW(cntrldvr);
1773 -
1774 - static ssize_t cntr_val_show(struct device *dev,
1775 -			     struct device_attribute *attr,
1776 -			     char *buf)
1777 - {
1778 -	u8 idx;
1779 -	unsigned long val;
1780 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1781 -
1782 -	spin_lock(&drvdata->spinlock);
1783 -	idx = drvdata->cntr_idx;
1784 -	val = drvdata->cntr_val[idx];
1785 -	spin_unlock(&drvdata->spinlock);
1786 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1787 - }
1788 -
1789 - static ssize_t cntr_val_store(struct device *dev,
1790 -			      struct device_attribute *attr,
1791 -			      const char *buf, size_t size)
1792 - {
1793 -	u8 idx;
1794 -	unsigned long val;
1795 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1796 -
1797 -	if (kstrtoul(buf, 16, &val))
1798 -		return -EINVAL;
1799 -	if (val > ETM_CNTR_MAX_VAL)
1800 -		return -EINVAL;
1801 -
1802 -	spin_lock(&drvdata->spinlock);
1803 -	idx = drvdata->cntr_idx;
1804 -	drvdata->cntr_val[idx] = val;
1805 -	spin_unlock(&drvdata->spinlock);
1806 -	return size;
1807 - }
1808 - static DEVICE_ATTR_RW(cntr_val);
1809 -
1810 - static ssize_t cntr_ctrl_show(struct device *dev,
1811 -			      struct device_attribute *attr,
1812 -			      char *buf)
1813 - {
1814 -	u8 idx;
1815 -	unsigned long val;
1816 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1817 -
1818 -	spin_lock(&drvdata->spinlock);
1819 -	idx = drvdata->cntr_idx;
1820 -	val = drvdata->cntr_ctrl[idx];
1821 -	spin_unlock(&drvdata->spinlock);
1822 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1823 - }
1824 -
1825 - static ssize_t cntr_ctrl_store(struct device *dev,
1826 -			       struct device_attribute *attr,
1827 -			       const char *buf, size_t size)
1828 - {
1829 -	u8 idx;
1830 -	unsigned long val;
1831 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1832 -
1833 -	if (kstrtoul(buf, 16, &val))
1834 -		return -EINVAL;
1835 -
1836 -	spin_lock(&drvdata->spinlock);
1837 -	idx = drvdata->cntr_idx;
1838 -	drvdata->cntr_ctrl[idx] = val;
1839 -	spin_unlock(&drvdata->spinlock);
1840 -	return size;
1841 - }
1842 - static DEVICE_ATTR_RW(cntr_ctrl);
1843 -
1844 - static ssize_t res_idx_show(struct device *dev,
1845 -			    struct device_attribute *attr,
1846 -			    char *buf)
1847 - {
1848 -	unsigned long val;
1849 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1850 -
1851 -	val = drvdata->res_idx;
1852 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1853 - }
1854 -
1855 - static ssize_t res_idx_store(struct device *dev,
1856 -			     struct device_attribute *attr,
1857 -			     const char *buf, size_t size)
1858 - {
1859 -	unsigned long val;
1860 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1861 -
1862 -	if (kstrtoul(buf, 16, &val))
1863 -		return -EINVAL;
1864 -	/* Resource selector pair 0 is always implemented and reserved */
1865 -	if (val < 2 || val >= drvdata->nr_resource * 2)
1866 -		return -EINVAL;
1867 -
1868 -	/*
1869 -	 * Use spinlock to ensure index doesn't change while it gets
1870 -	 * dereferenced multiple times within a spinlock block elsewhere.
1871 -	 */
1872 -	spin_lock(&drvdata->spinlock);
1873 -	drvdata->res_idx = val;
1874 -	spin_unlock(&drvdata->spinlock);
1875 -	return size;
1876 - }
1877 - static DEVICE_ATTR_RW(res_idx);
1878 -
1879 - static ssize_t res_ctrl_show(struct device *dev,
1880 -			     struct device_attribute *attr,
1881 -			     char *buf)
1882 - {
1883 -	u8 idx;
1884 -	unsigned long val;
1885 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1886 -
1887 -	spin_lock(&drvdata->spinlock);
1888 -	idx = drvdata->res_idx;
1889 -	val = drvdata->res_ctrl[idx];
1890 -	spin_unlock(&drvdata->spinlock);
1891 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1892 - }
1893 -
1894 - static ssize_t res_ctrl_store(struct device *dev,
1895 -			      struct device_attribute *attr,
1896 -			      const char *buf, size_t size)
1897 - {
1898 -	u8 idx;
1899 -	unsigned long val;
1900 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1901 -
1902 -	if (kstrtoul(buf, 16, &val))
1903 -		return -EINVAL;
1904 -
1905 -	spin_lock(&drvdata->spinlock);
1906 -	idx = drvdata->res_idx;
1907 -	/* For odd idx the pair inversion bit is RES0 */
1908 -	if (idx % 2 != 0)
1909 -		/* PAIRINV, bit[21] */
1910 -		val &= ~BIT(21);
1911 -	drvdata->res_ctrl[idx] = val;
1912 -	spin_unlock(&drvdata->spinlock);
1913 -	return size;
1914 - }
1915 - static DEVICE_ATTR_RW(res_ctrl);
1916 -
1917 - static ssize_t ctxid_idx_show(struct device *dev,
1918 -			      struct device_attribute *attr,
1919 -			      char *buf)
1920 - {
1921 -	unsigned long val;
1922 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1923 -
1924 -	val = drvdata->ctxid_idx;
1925 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1926 - }
1927 -
1928 - static ssize_t ctxid_idx_store(struct device *dev,
1929 -			       struct device_attribute *attr,
1930 -			       const char *buf, size_t size)
1931 - {
1932 -	unsigned long val;
1933 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1934 -
1935 -	if (kstrtoul(buf, 16, &val))
1936 -		return -EINVAL;
1937 -	if (val >= drvdata->numcidc)
1938 -		return -EINVAL;
1939 -
1940 -	/*
1941 -	 * Use spinlock to ensure index doesn't change while it gets
1942 -	 * dereferenced multiple times within a spinlock block elsewhere.
1943 -	 */
1944 -	spin_lock(&drvdata->spinlock);
1945 -	drvdata->ctxid_idx = val;
1946 -	spin_unlock(&drvdata->spinlock);
1947 -	return size;
1948 - }
1949 - static DEVICE_ATTR_RW(ctxid_idx);
1950 -
1951 - static ssize_t ctxid_pid_show(struct device *dev,
1952 -			      struct device_attribute *attr,
1953 -			      char *buf)
1954 - {
1955 -	u8 idx;
1956 -	unsigned long val;
1957 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1958 -
1959 -	spin_lock(&drvdata->spinlock);
1960 -	idx = drvdata->ctxid_idx;
1961 -	val = (unsigned long)drvdata->ctxid_vpid[idx];
1962 -	spin_unlock(&drvdata->spinlock);
1963 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
1964 - }
1965 -
1966 - static ssize_t ctxid_pid_store(struct device *dev,
1967 -			       struct device_attribute *attr,
1968 -			       const char *buf, size_t size)
1969 - {
1970 -	u8 idx;
1971 -	unsigned long vpid, pid;
1972 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
1973 -
1974 -	/*
1975 -	 * only implemented when ctxid tracing is enabled, i.e. at least one
1976 -	 * ctxid comparator is implemented and ctxid is greater than 0 bits
1977 -	 * in length
1978 -	 */
1979 -	if (!drvdata->ctxid_size || !drvdata->numcidc)
1980 -		return -EINVAL;
1981 -	if (kstrtoul(buf, 16, &vpid))
1982 -		return -EINVAL;
1983 -
1984 -	pid = coresight_vpid_to_pid(vpid);
1985 -
1986 -	spin_lock(&drvdata->spinlock);
1987 -	idx = drvdata->ctxid_idx;
1988 -	drvdata->ctxid_pid[idx] = (u64)pid;
1989 -	drvdata->ctxid_vpid[idx] = (u64)vpid;
1990 -	spin_unlock(&drvdata->spinlock);
1991 -	return size;
1992 - }
1993 - static DEVICE_ATTR_RW(ctxid_pid);
1994 -
1995 - static ssize_t ctxid_masks_show(struct device *dev,
1996 -				struct device_attribute *attr,
1997 -				char *buf)
1998 - {
1999 -	unsigned long val1, val2;
2000 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2001 -
2002 -	spin_lock(&drvdata->spinlock);
2003 -	val1 = drvdata->ctxid_mask0;
2004 -	val2 = drvdata->ctxid_mask1;
2005 -	spin_unlock(&drvdata->spinlock);
2006 -	return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
2007 - }
2008 -
2009 - static ssize_t ctxid_masks_store(struct device *dev,
2010 -				 struct device_attribute *attr,
2011 -				 const char *buf, size_t size)
2012 - {
2013 -	u8 i, j, maskbyte;
2014 -	unsigned long val1, val2, mask;
2015 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2016 -
2017 -	/*
2018 -	 * only implemented when ctxid tracing is enabled, i.e. at least one
2019 -	 * ctxid comparator is implemented and ctxid is greater than 0 bits
2020 -	 * in length
2021 -	 */
2022 -	if (!drvdata->ctxid_size || !drvdata->numcidc)
2023 -		return -EINVAL;
2024 -	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
2025 -		return -EINVAL;
2026 -
2027 -	spin_lock(&drvdata->spinlock);
2028 -	/*
2029 -	 * each byte[0..3] controls mask value applied to ctxid
2030 -	 * comparator[0..3]
2031 -	 */
2032 -	switch (drvdata->numcidc) {
2033 -	case 0x1:
2034 -		/* COMP0, bits[7:0] */
2035 -		drvdata->ctxid_mask0 = val1 & 0xFF;
2036 -		break;
2037 -	case 0x2:
2038 -		/* COMP1, bits[15:8] */
2039 -		drvdata->ctxid_mask0 = val1 & 0xFFFF;
2040 -		break;
2041 -	case 0x3:
2042 -		/* COMP2, bits[23:16] */
2043 -		drvdata->ctxid_mask0 = val1 & 0xFFFFFF;
2044 -		break;
2045 -	case 0x4:
2046 -		/* COMP3, bits[31:24] */
2047 -		drvdata->ctxid_mask0 = val1;
2048 -		break;
2049 -	case 0x5:
2050 -		/* COMP4, bits[7:0] */
2051 -		drvdata->ctxid_mask0 = val1;
2052 -		drvdata->ctxid_mask1 = val2 & 0xFF;
2053 -		break;
2054 -	case 0x6:
2055 -		/* COMP5, bits[15:8] */
2056 -		drvdata->ctxid_mask0 = val1;
2057 -		drvdata->ctxid_mask1 = val2 & 0xFFFF;
2058 -		break;
2059 -	case 0x7:
2060 -		/* COMP6, bits[23:16] */
2061 -		drvdata->ctxid_mask0 = val1;
2062 -		drvdata->ctxid_mask1 = val2 & 0xFFFFFF;
2063 -		break;
2064 -	case 0x8:
2065 -		/* COMP7, bits[31:24] */
2066 -		drvdata->ctxid_mask0 = val1;
2067 -		drvdata->ctxid_mask1 = val2;
2068 -		break;
2069 -	default:
2070 -		break;
2071 -	}
2072 -	/*
2073 -	 * If software sets a mask bit to 1, it must program the relevant byte
2074 -	 * of the ctxid comparator value to 0x0, otherwise behavior is
2075 -	 * unpredictable. For example, if bit[3] of ctxid_mask0 is 1, we must
2076 -	 * clear bits[31:24] of the ctxid comparator0 value register.
2077 -	 */
2078 -	mask = drvdata->ctxid_mask0;
2079 -	for (i = 0; i < drvdata->numcidc; i++) {
2080 -		/* mask value of corresponding ctxid comparator */
2081 -		maskbyte = mask & ETMv4_EVENT_MASK;
2082 -		/*
2083 -		 * each bit corresponds to a byte of respective ctxid comparator
2084 -		 * value register
2085 -		 */
2086 -		for (j = 0; j < 8; j++) {
2087 -			if (maskbyte & 1)
2088 -				drvdata->ctxid_pid[i] &= ~(0xFF << (j * 8));
2089 -			maskbyte >>= 1;
2090 -		}
2091 -		/* Select the next ctxid comparator mask value */
2092 -		if (i == 3)
2093 -			/* ctxid comparators[4-7] */
2094 -			mask = drvdata->ctxid_mask1;
2095 -		else
2096 -			mask >>= 0x8;
2097 -	}
2098 -
2099 -	spin_unlock(&drvdata->spinlock);
2100 -	return size;
2101 - }
2102 - static DEVICE_ATTR_RW(ctxid_masks);
2103 -
2104 - static ssize_t vmid_idx_show(struct device *dev,
2105 -			     struct device_attribute *attr,
2106 -			     char *buf)
2107 - {
2108 -	unsigned long val;
2109 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2110 -
2111 -	val = drvdata->vmid_idx;
2112 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
2113 - }
2114 -
2115 - static ssize_t vmid_idx_store(struct device *dev,
2116 -			      struct device_attribute *attr,
2117 -			      const char *buf, size_t size)
2118 - {
2119 -	unsigned long val;
2120 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2121 -
2122 -	if (kstrtoul(buf, 16, &val))
2123 -		return -EINVAL;
2124 -	if (val >= drvdata->numvmidc)
2125 -		return -EINVAL;
2126 -
2127 -	/*
2128 -	 * Use spinlock to ensure index doesn't change while it gets
2129 -	 * dereferenced multiple times within a spinlock block elsewhere.
2130 -	 */
2131 -	spin_lock(&drvdata->spinlock);
2132 -	drvdata->vmid_idx = val;
2133 -	spin_unlock(&drvdata->spinlock);
2134 -	return size;
2135 - }
2136 - static DEVICE_ATTR_RW(vmid_idx);
2137 -
2138 - static ssize_t vmid_val_show(struct device *dev,
2139 -			     struct device_attribute *attr,
2140 -			     char *buf)
2141 - {
2142 -	unsigned long val;
2143 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2144 -
2145 -	val = (unsigned long)drvdata->vmid_val[drvdata->vmid_idx];
2146 -	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
2147 - }
2148 -
2149 - static ssize_t vmid_val_store(struct device *dev,
2150 -			      struct device_attribute *attr,
2151 -			      const char *buf, size_t size)
2152 - {
2153 -	unsigned long val;
2154 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2155 -
2156 -	/*
2157 -	 * only implemented when vmid tracing is enabled, i.e. at least one
2158 -	 * vmid comparator is implemented and at least 8 bit vmid size
2159 -	 */
2160 -	if (!drvdata->vmid_size || !drvdata->numvmidc)
2161 -		return -EINVAL;
2162 -	if (kstrtoul(buf, 16, &val))
2163 -		return -EINVAL;
2164 -
2165 -	spin_lock(&drvdata->spinlock);
2166 -	drvdata->vmid_val[drvdata->vmid_idx] = (u64)val;
2167 -	spin_unlock(&drvdata->spinlock);
2168 -	return size;
2169 - }
2170 - static DEVICE_ATTR_RW(vmid_val);
2171 -
2172 - static ssize_t vmid_masks_show(struct device *dev,
2173 -			       struct device_attribute *attr, char *buf)
2174 - {
2175 -	unsigned long val1, val2;
2176 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2177 -
2178 -	spin_lock(&drvdata->spinlock);
2179 -	val1 = drvdata->vmid_mask0;
2180 -	val2 = drvdata->vmid_mask1;
2181 -	spin_unlock(&drvdata->spinlock);
2182 -	return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
2183 - }
2184 -
2185 - static ssize_t vmid_masks_store(struct device *dev,
2186 -				struct device_attribute *attr,
2187 -				const char *buf, size_t size)
2188 - {
2189 -	u8 i, j, maskbyte;
2190 -	unsigned long val1, val2, mask;
2191 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2192 -	/*
2193 -	 * only implemented when vmid tracing is enabled, i.e. at least one
2194 -	 * vmid comparator is implemented and at least 8 bit vmid size
2195 -	 */
2196 -	if (!drvdata->vmid_size || !drvdata->numvmidc)
2197 -		return -EINVAL;
2198 -	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
2199 -		return -EINVAL;
2200 -
2201 -	spin_lock(&drvdata->spinlock);
2202 -
2203 -	/*
2204 -	 * each byte[0..3] controls mask value applied to vmid
2205 -	 * comparator[0..3]
2206 -	 */
2207 -	switch (drvdata->numvmidc) {
2208 -	case 0x1:
2209 -		/* COMP0, bits[7:0] */
2210 -		drvdata->vmid_mask0 = val1 & 0xFF;
2211 -		break;
2212 -	case 0x2:
2213 -		/* COMP1, bits[15:8] */
2214 -		drvdata->vmid_mask0 = val1 & 0xFFFF;
2215 -		break;
2216 -	case 0x3:
2217 -		/* COMP2, bits[23:16] */
2218 -		drvdata->vmid_mask0 = val1 & 0xFFFFFF;
2219 -		break;
2220 -	case 0x4:
2221 -		/* COMP3, bits[31:24] */
2222 -		drvdata->vmid_mask0 = val1;
2223 -		break;
2224 -	case 0x5:
2225 -		/* COMP4, bits[7:0] */
2226 -		drvdata->vmid_mask0 = val1;
2227 -		drvdata->vmid_mask1 = val2 & 0xFF;
2228 -		break;
2229 -	case 0x6:
2230 -		/* COMP5, bits[15:8] */
2231 -		drvdata->vmid_mask0 = val1;
2232 -		drvdata->vmid_mask1 = val2 & 0xFFFF;
2233 -		break;
2234 -	case 0x7:
2235 -		/* COMP6, bits[23:16] */
2236 -		drvdata->vmid_mask0 = val1;
2237 -		drvdata->vmid_mask1 = val2 & 0xFFFFFF;
2238 -		break;
2239 -	case 0x8:
2240 -		/* COMP7, bits[31:24] */
2241 -		drvdata->vmid_mask0 = val1;
2242 -		drvdata->vmid_mask1 = val2;
2243 -		break;
2244 -	default:
2245 -		break;
2246 -	}
2247 -
2248 -	/*
2249 -	 * If software sets a mask bit to 1, it must program the relevant byte
2250 -	 * of the vmid comparator value to 0x0, otherwise behavior is
2251 -	 * unpredictable. For example, if bit[3] of vmid_mask0 is 1, we must
2252 -	 * clear bits[31:24] of the vmid comparator0 value register.
2253 -	 */
2254 -	mask = drvdata->vmid_mask0;
2255 -	for (i = 0; i < drvdata->numvmidc; i++) {
2256 -		/* mask value of corresponding vmid comparator */
2257 -		maskbyte = mask & ETMv4_EVENT_MASK;
2258 -		/*
2259 -		 * each bit corresponds to a byte of respective vmid comparator
2260 -		 * value register
2261 -		 */
2262 -		for (j = 0; j < 8; j++) {
2263 -			if (maskbyte & 1)
2264 -				drvdata->vmid_val[i] &= ~(0xFF << (j * 8));
2265 -			maskbyte >>= 1;
2266 -		}
2267 -		/* Select the next vmid comparator mask value */
2268 -		if (i == 3)
2269 -			/* vmid comparators[4-7] */
2270 -			mask = drvdata->vmid_mask1;
2271 -		else
2272 -			mask >>= 0x8;
2273 -	}
2274 -	spin_unlock(&drvdata->spinlock);
2275 -	return size;
2276 - }
2277 - static DEVICE_ATTR_RW(vmid_masks);
2278 -
2279 - static ssize_t cpu_show(struct device *dev,
2280 -			struct device_attribute *attr, char *buf)
2281 - {
2282 -	int val;
2283 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
2284 -
2285 -	val = drvdata->cpu;
2286 -	return scnprintf(buf, PAGE_SIZE, "%d\n", val);
2287 -
2288 - }
2289 - static DEVICE_ATTR_RO(cpu);
2290 -
2291 - static struct attribute *coresight_etmv4_attrs[] = {
2292 -	&dev_attr_nr_pe_cmp.attr,
2293 -	&dev_attr_nr_addr_cmp.attr,
2294 -	&dev_attr_nr_cntr.attr,
2295 -	&dev_attr_nr_ext_inp.attr,
2296 -	&dev_attr_numcidc.attr,
2297 -	&dev_attr_numvmidc.attr,
2298 -	&dev_attr_nrseqstate.attr,
2299 -	&dev_attr_nr_resource.attr,
2300 -	&dev_attr_nr_ss_cmp.attr,
2301 -	&dev_attr_reset.attr,
2302 -	&dev_attr_mode.attr,
2303 -	&dev_attr_pe.attr,
2304 -	&dev_attr_event.attr,
2305 -	&dev_attr_event_instren.attr,
2306 -	&dev_attr_event_ts.attr,
2307 -	&dev_attr_syncfreq.attr,
2308 -	&dev_attr_cyc_threshold.attr,
2309 -	&dev_attr_bb_ctrl.attr,
2310 -	&dev_attr_event_vinst.attr,
2311 -	&dev_attr_s_exlevel_vinst.attr,
2312 -	&dev_attr_ns_exlevel_vinst.attr,
2313 -	&dev_attr_addr_idx.attr,
2314 -	&dev_attr_addr_instdatatype.attr,
2315 -	&dev_attr_addr_single.attr,
2316 -	&dev_attr_addr_range.attr,
2317 -	&dev_attr_addr_start.attr,
2318 -	&dev_attr_addr_stop.attr,
2319 -	&dev_attr_addr_ctxtype.attr,
2320 -	&dev_attr_addr_context.attr,
2321 -	&dev_attr_seq_idx.attr,
2322 -	&dev_attr_seq_state.attr,
2323 -	&dev_attr_seq_event.attr,
2324 -	&dev_attr_seq_reset_event.attr,
2325 -	&dev_attr_cntr_idx.attr,
2326 -	&dev_attr_cntrldvr.attr,
2327 -	&dev_attr_cntr_val.attr,
2328 -	&dev_attr_cntr_ctrl.attr,
2329 -	&dev_attr_res_idx.attr,
2330 -	&dev_attr_res_ctrl.attr,
2331 -	&dev_attr_ctxid_idx.attr,
2332 -	&dev_attr_ctxid_pid.attr,
2333 -	&dev_attr_ctxid_masks.attr,
2334 -	&dev_attr_vmid_idx.attr,
2335 -	&dev_attr_vmid_val.attr,
2336 -	&dev_attr_vmid_masks.attr,
2337 -	&dev_attr_cpu.attr,
2338 -	NULL,
2339 - };
2340 -
2341 - #define coresight_simple_func(name, offset)				\
2342 - static ssize_t name##_show(struct device *_dev,			\
2343 -			   struct device_attribute *attr, char *buf)	\
2344 - {									\
2345 -	struct etmv4_drvdata *drvdata = dev_get_drvdata(_dev->parent);	\
2346 -	return scnprintf(buf, PAGE_SIZE, "0x%x\n",			\
2347 -			 readl_relaxed(drvdata->base + offset));	\
2348 - }									\
2349 - static DEVICE_ATTR_RO(name)
2350 -
2351 - coresight_simple_func(trcoslsr, TRCOSLSR);
2352 - coresight_simple_func(trcpdcr, TRCPDCR);
2353 - coresight_simple_func(trcpdsr, TRCPDSR);
2354 - coresight_simple_func(trclsr, TRCLSR);
2355 - coresight_simple_func(trcauthstatus, TRCAUTHSTATUS);
2356 - coresight_simple_func(trcdevid, TRCDEVID);
2357 - coresight_simple_func(trcdevtype, TRCDEVTYPE);
2358 - coresight_simple_func(trcpidr0, TRCPIDR0);
2359 - coresight_simple_func(trcpidr1, TRCPIDR1);
2360 - coresight_simple_func(trcpidr2, TRCPIDR2);
2361 - coresight_simple_func(trcpidr3, TRCPIDR3);
2362 -
2363 - static struct attribute *coresight_etmv4_mgmt_attrs[] = {
2364 -	&dev_attr_trcoslsr.attr,
2365 -	&dev_attr_trcpdcr.attr,
2366 -	&dev_attr_trcpdsr.attr,
2367 -	&dev_attr_trclsr.attr,
2368 -	&dev_attr_trcauthstatus.attr,
2369 -	&dev_attr_trcdevid.attr,
2370 -	&dev_attr_trcdevtype.attr,
2371 -	&dev_attr_trcpidr0.attr,
2372 -	&dev_attr_trcpidr1.attr,
2373 -	&dev_attr_trcpidr2.attr,
2374 -	&dev_attr_trcpidr3.attr,
2375 -	NULL,
2376 - };
2377 -
2378 - coresight_simple_func(trcidr0, TRCIDR0);
2379 - coresight_simple_func(trcidr1, TRCIDR1);
2380 - coresight_simple_func(trcidr2, TRCIDR2);
2381 - coresight_simple_func(trcidr3, TRCIDR3);
2382 - coresight_simple_func(trcidr4, TRCIDR4);
2383 - coresight_simple_func(trcidr5, TRCIDR5);
2384 - /* trcidr[6,7] are reserved */
2385 - coresight_simple_func(trcidr8, TRCIDR8);
2386 - coresight_simple_func(trcidr9, TRCIDR9);
2387 - coresight_simple_func(trcidr10, TRCIDR10);
2388 - coresight_simple_func(trcidr11, TRCIDR11);
2389 - coresight_simple_func(trcidr12, TRCIDR12);
2390 - coresight_simple_func(trcidr13, TRCIDR13);
2391 -
2392 - static struct attribute *coresight_etmv4_trcidr_attrs[] = {
2393 -	&dev_attr_trcidr0.attr,
2394 -	&dev_attr_trcidr1.attr,
2395 -	&dev_attr_trcidr2.attr,
2396 -	&dev_attr_trcidr3.attr,
2397 -	&dev_attr_trcidr4.attr,
2398 -	&dev_attr_trcidr5.attr,
2399 -	/* trcidr[6,7] are reserved */
2400 -	&dev_attr_trcidr8.attr,
2401 -	&dev_attr_trcidr9.attr,
2402 -	&dev_attr_trcidr10.attr,
2403 -	&dev_attr_trcidr11.attr,
2404 -	&dev_attr_trcidr12.attr,
2405 -	&dev_attr_trcidr13.attr,
2406 -	NULL,
2407 - };
2408 -
2409 - static const struct attribute_group coresight_etmv4_group = {
2410 -	.attrs = coresight_etmv4_attrs,
2411 - };
2412 -
2413 - static const struct attribute_group coresight_etmv4_mgmt_group = {
2414 -	.attrs = coresight_etmv4_mgmt_attrs,
2415 -	.name = "mgmt",
2416 - };
2417 -
2418 - static const struct attribute_group coresight_etmv4_trcidr_group = {
2419 -	.attrs = coresight_etmv4_trcidr_attrs,
2420 -	.name = "trcidr",
2421 - };
2422 -
2423 - static const struct attribute_group *coresight_etmv4_groups[] = {
2424 -	&coresight_etmv4_group,
2425 -	&coresight_etmv4_mgmt_group,
2426 -	&coresight_etmv4_trcidr_group,
2427 -	NULL,
2428 - };
2429 -
2430 277 static void etm4_init_arch_data(void *info)
2431 278 {
2432 279	u32 etmidr0;
···
407 2312	u32 etmidr4;
408 2313	u32 etmidr5;
409 2314	struct etmv4_drvdata *drvdata = info;
2315 +
2316 +	/* Make sure all registers are accessible */
2317 +	etm4_os_unlock(drvdata);
410 2318
411 2319	CS_UNLOCK(drvdata->base);
···
562 2464	CS_LOCK(drvdata->base);
563 2465 }
564 2466
565 - static void etm4_init_default_data(struct etmv4_drvdata *drvdata)
2467 + static void etm4_set_default(struct etmv4_config *config)
566 2468 {
567 -	int i;
2469 +	if (WARN_ON_ONCE(!config))
2470 +		return;
568 2471
569 -	drvdata->pe_sel = 0x0;
570 -	drvdata->cfg = (ETMv4_MODE_CTXID | ETM_MODE_VMID |
571 -			ETMv4_MODE_TIMESTAMP | ETM_MODE_RETURNSTACK);
2472 +	/*
2473 +	 * Make default initialisation trace everything
2474 +	 *
2475 +	 * Select the "always true" resource selector on the
2476 +	 * "Enabling Event" line and configure address range comparator
2477 +	 * '0' to trace all the possible address range. From there
2478 +	 * configure the "include/exclude" engine to include address
2479 +	 * range comparator '0'.
2480 +	 */
572 2481
573 2482	/* disable all events tracing */
574 -	drvdata->eventctrl0 = 0x0;
575 -	drvdata->eventctrl1 = 0x0;
2483 +	config->eventctrl0 = 0x0;
2484 +	config->eventctrl1 = 0x0;
576 2485
577 2486	/* disable stalling */
578 -	drvdata->stall_ctrl = 0x0;
2487 +	config->stall_ctrl = 0x0;
2488 +
2489 +	/* enable trace synchronization every 4096 bytes, if available */
2490 +	config->syncfreq = 0xC;
579 2491
580 2492	/* disable timestamp event */
581 -	drvdata->ts_ctrl = 0x0;
2493 +	config->ts_ctrl = 0x0;
582 2494
583 -	/* enable trace synchronization every 4096 bytes for trace */
584 -	if (drvdata->syncpr == false)
585 -		drvdata->syncfreq = 0xC;
2495 +	/* TRCVICTLR::EVENT = 0x01, select the always on logic */
2496 +	config->vinst_ctrl |= BIT(0);
586 2497
587 2498	/*
588 -	 * enable viewInst to trace everything with start-stop logic in
589 -	 * started state
2499 +	 * TRCVICTLR::SSSTATUS == 1, the start-stop logic is
2500 +	 * in the started state
590 2501	 */
591 -	drvdata->vinst_ctrl |= BIT(0);
592 -	/* set initial state of start-stop logic */
593 -	if (drvdata->nr_addr_cmp)
594 -		drvdata->vinst_ctrl |= BIT(9);
2502 +	config->vinst_ctrl |= BIT(9);
595 2503
596 -	/* no address range filtering for ViewInst */
597 -	drvdata->viiectlr = 0x0;
2504 +	/*
2505 +	 * Configure address range comparator '0' to encompass all
2506 +	 * possible addresses.
2507 +	 */
2508 +
2509 +	/* First half of default address comparator: start at address 0 */
2510 +	config->addr_val[ETM_DEFAULT_ADDR_COMP] = 0x0;
2511 +	/* trace instruction addresses */
2512 +	config->addr_acc[ETM_DEFAULT_ADDR_COMP] &= ~(BIT(0) | BIT(1));
2513 +	/* EXLEVEL_NS, bits[15:12], only trace application and kernel space */
2514 +	config->addr_acc[ETM_DEFAULT_ADDR_COMP] |= ETM_EXLEVEL_NS_HYP;
2515 +	/* EXLEVEL_S, bits[11:8], don't trace anything in secure state */
2516 +	config->addr_acc[ETM_DEFAULT_ADDR_COMP] |= (ETM_EXLEVEL_S_APP |
2517 +						    ETM_EXLEVEL_S_OS |
2518 +						    ETM_EXLEVEL_S_HYP);
2519 +	config->addr_type[ETM_DEFAULT_ADDR_COMP] = ETM_ADDR_TYPE_RANGE;
2520 +
2521 +	/*
2522 +	 * Second half of default address comparator: go all
2523 +	 * the way to the top.
2524 +	 */
2525 +	config->addr_val[ETM_DEFAULT_ADDR_COMP + 1] = ~0x0;
2526 +	/* trace instruction addresses */
2527 +	config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] &= ~(BIT(0) | BIT(1));
2528 +	/* Address comparator type must be equal for both halves */
2529 +	config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] =
2530 +		config->addr_acc[ETM_DEFAULT_ADDR_COMP];
2531 +	config->addr_type[ETM_DEFAULT_ADDR_COMP + 1] = ETM_ADDR_TYPE_RANGE;
2532 +
2533 +	/*
2534 +	 * Configure the ViewInst function to filter on address range
2535 +	 * comparator '0'.
2536 + */ 2537 + config->viiectlr = BIT(0); 2538 + 598 2539 /* no start-stop filtering for ViewInst */ 599 - drvdata->vissctlr = 0x0; 2540 + config->vissctlr = 0x0; 2541 + } 600 2542 601 - /* disable seq events */ 602 - for (i = 0; i < drvdata->nrseqstate-1; i++) 603 - drvdata->seq_ctrl[i] = 0x0; 604 - drvdata->seq_rst = 0x0; 605 - drvdata->seq_state = 0x0; 2543 + void etm4_config_trace_mode(struct etmv4_config *config) 2544 + { 2545 + u32 addr_acc, mode; 606 2546 607 - /* disable external input events */ 608 - drvdata->ext_inp = 0x0; 2547 + mode = config->mode; 2548 + mode &= (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER); 609 2549 610 - for (i = 0; i < drvdata->nr_cntr; i++) { 611 - drvdata->cntrldvr[i] = 0x0; 612 - drvdata->cntr_ctrl[i] = 0x0; 613 - drvdata->cntr_val[i] = 0x0; 614 - } 2550 + /* excluding kernel AND user space doesn't make sense */ 2551 + WARN_ON_ONCE(mode == (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER)); 615 2552 616 - /* Resource selector pair 0 is always implemented and reserved */ 617 - drvdata->res_idx = 0x2; 618 - for (i = 2; i < drvdata->nr_resource * 2; i++) 619 - drvdata->res_ctrl[i] = 0x0; 2553 + /* nothing to do if neither flags are set */ 2554 + if (!(mode & ETM_MODE_EXCL_KERN) && !(mode & ETM_MODE_EXCL_USER)) 2555 + return; 620 2556 621 - for (i = 0; i < drvdata->nr_ss_cmp; i++) { 622 - drvdata->ss_ctrl[i] = 0x0; 623 - drvdata->ss_pe_cmp[i] = 0x0; 624 - } 625 - 626 - if (drvdata->nr_addr_cmp >= 1) { 627 - drvdata->addr_val[0] = (unsigned long)_stext; 628 - drvdata->addr_val[1] = (unsigned long)_etext; 629 - drvdata->addr_type[0] = ETM_ADDR_TYPE_RANGE; 630 - drvdata->addr_type[1] = ETM_ADDR_TYPE_RANGE; 631 - } 632 - 633 - for (i = 0; i < drvdata->numcidc; i++) { 634 - drvdata->ctxid_pid[i] = 0x0; 635 - drvdata->ctxid_vpid[i] = 0x0; 636 - } 637 - 638 - drvdata->ctxid_mask0 = 0x0; 639 - drvdata->ctxid_mask1 = 0x0; 640 - 641 - for (i = 0; i < drvdata->numvmidc; i++) 642 - drvdata->vmid_val[i] = 0x0; 643 - drvdata->vmid_mask0 = 0x0; 644 - 
drvdata->vmid_mask1 = 0x0; 2557 + addr_acc = config->addr_acc[ETM_DEFAULT_ADDR_COMP]; 2558 + /* clear default config */ 2559 + addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS); 645 2560 646 2561 /* 647 - * A trace ID value of 0 is invalid, so let's start at some 648 - * random value that fits in 7 bits. ETMv3.x has 0x10 so let's 649 - * start at 0x20. 2562 + * EXLEVEL_NS, bits[15:12] 2563 + * The Exception levels are: 2564 + * Bit[12] Exception level 0 - Application 2565 + * Bit[13] Exception level 1 - OS 2566 + * Bit[14] Exception level 2 - Hypervisor 2567 + * Bit[15] Never implemented 650 2568 */ 651 - drvdata->trcid = 0x20 + drvdata->cpu; 2569 + if (mode & ETM_MODE_EXCL_KERN) 2570 + addr_acc |= ETM_EXLEVEL_NS_OS; 2571 + else 2572 + addr_acc |= ETM_EXLEVEL_NS_APP; 2573 + 2574 + config->addr_acc[ETM_DEFAULT_ADDR_COMP] = addr_acc; 2575 + config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] = addr_acc; 652 2576 } 653 2577 654 2578 static int etm4_cpu_callback(struct notifier_block *nfb, unsigned long action, ··· 689 2569 etmdrvdata[cpu]->os_unlock = true; 690 2570 } 691 2571 692 - if (etmdrvdata[cpu]->enable) 2572 + if (local_read(&etmdrvdata[cpu]->mode)) 693 2573 etm4_enable_hw(etmdrvdata[cpu]); 694 2574 spin_unlock(&etmdrvdata[cpu]->spinlock); 695 2575 break; ··· 702 2582 703 2583 case CPU_DYING: 704 2584 spin_lock(&etmdrvdata[cpu]->spinlock); 705 - if (etmdrvdata[cpu]->enable) 2585 + if (local_read(&etmdrvdata[cpu]->mode)) 706 2586 etm4_disable_hw(etmdrvdata[cpu]); 707 2587 spin_unlock(&etmdrvdata[cpu]->spinlock); 708 2588 break; ··· 714 2594 static struct notifier_block etm4_cpu_notifier = { 715 2595 .notifier_call = etm4_cpu_callback, 716 2596 }; 2597 + 2598 + static void etm4_init_trace_id(struct etmv4_drvdata *drvdata) 2599 + { 2600 + drvdata->trcid = coresight_get_trace_id(drvdata->cpu); 2601 + } 717 2602 718 2603 static int etm4_probe(struct amba_device *adev, const struct amba_id *id) 719 2604 { ··· 763 2638 get_online_cpus(); 764 2639 etmdrvdata[drvdata->cpu] 
= drvdata; 765 2640 766 - if (!smp_call_function_single(drvdata->cpu, etm4_os_unlock, drvdata, 1)) 767 - drvdata->os_unlock = true; 768 - 769 2641 if (smp_call_function_single(drvdata->cpu, 770 2642 etm4_init_arch_data, drvdata, 1)) 771 2643 dev_err(dev, "ETM arch init failed\n"); ··· 776 2654 ret = -EINVAL; 777 2655 goto err_arch_supported; 778 2656 } 779 - etm4_init_default_data(drvdata); 780 2657 781 - pm_runtime_put(&adev->dev); 2658 + etm4_init_trace_id(drvdata); 2659 + etm4_set_default(&drvdata->config); 782 2660 783 2661 desc->type = CORESIGHT_DEV_TYPE_SOURCE; 784 2662 desc->subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC; ··· 789 2667 drvdata->csdev = coresight_register(desc); 790 2668 if (IS_ERR(drvdata->csdev)) { 791 2669 ret = PTR_ERR(drvdata->csdev); 792 - goto err_coresight_register; 2670 + goto err_arch_supported; 793 2671 } 794 2672 2673 + ret = etm_perf_symlink(drvdata->csdev, true); 2674 + if (ret) { 2675 + coresight_unregister(drvdata->csdev); 2676 + goto err_arch_supported; 2677 + } 2678 + 2679 + pm_runtime_put(&adev->dev); 795 2680 dev_info(dev, "%s initialized\n", (char *)id->data); 796 2681 797 2682 if (boot_enable) { ··· 809 2680 return 0; 810 2681 811 2682 err_arch_supported: 812 - pm_runtime_put(&adev->dev); 813 - err_coresight_register: 814 2683 if (--etm4_count == 0) 815 2684 unregister_hotcpu_notifier(&etm4_cpu_notifier); 816 2685 return ret; ··· 824 2697 .id = 0x000bb95e, 825 2698 .mask = 0x000fffff, 826 2699 .data = "ETM 4.0", 2700 + }, 2701 + { /* ETM 4.0 - A72, Maia, HiSilicon */ 2702 + .id = 0x000bb95a, 2703 + .mask = 0x000fffff, 2704 + .data = "ETM 4.0", 827 2705 }, 828 2706 { 0, 0}, 829 2707 };
+124 -98
drivers/hwtracing/coresight/coresight-etm4x.h
··· 13 13 #ifndef _CORESIGHT_CORESIGHT_ETM_H 14 14 #define _CORESIGHT_CORESIGHT_ETM_H 15 15 16 + #include <asm/local.h> 16 17 #include <linux/spinlock.h> 17 18 #include "coresight-priv.h" 18 19 ··· 176 175 #define ETM_MODE_TRACE_RESET BIT(25) 177 176 #define ETM_MODE_TRACE_ERR BIT(26) 178 177 #define ETM_MODE_VIEWINST_STARTSTOP BIT(27) 179 - #define ETMv4_MODE_ALL 0xFFFFFFF 178 + #define ETMv4_MODE_ALL (GENMASK(27, 0) | \ 179 + ETM_MODE_EXCL_KERN | \ 180 + ETM_MODE_EXCL_USER) 180 181 181 182 #define TRCSTATR_IDLE_BIT 0 183 + #define ETM_DEFAULT_ADDR_COMP 0 184 + 185 + /* secure state access levels */ 186 + #define ETM_EXLEVEL_S_APP BIT(8) 187 + #define ETM_EXLEVEL_S_OS BIT(9) 188 + #define ETM_EXLEVEL_S_NA BIT(10) 189 + #define ETM_EXLEVEL_S_HYP BIT(11) 190 + /* non-secure state access levels */ 191 + #define ETM_EXLEVEL_NS_APP BIT(12) 192 + #define ETM_EXLEVEL_NS_OS BIT(13) 193 + #define ETM_EXLEVEL_NS_HYP BIT(14) 194 + #define ETM_EXLEVEL_NS_NA BIT(15) 182 195 183 196 /** 184 - * struct etm4_drvdata - specifics associated to an ETM component 185 - * @base: Memory mapped base address for this component. 186 - * @dev: The device entity associated to this component. 187 - * @csdev: Component vitals needed by the framework. 188 - * @spinlock: Only one at a time pls. 189 - * @cpu: The cpu this component is affined to. 190 - * @arch: ETM version number. 191 - * @enable: Is this ETM currently tracing. 192 - * @sticky_enable: true if ETM base configuration has been done. 193 - * @boot_enable:True if we should start tracing at boot time. 194 - * @os_unlock: True if access to management registers is allowed. 195 - * @nr_pe: The number of processing entity available for tracing. 196 - * @nr_pe_cmp: The number of processing entity comparator inputs that are 197 - * available for tracing. 198 - * @nr_addr_cmp:Number of pairs of address comparators available 199 - * as found in ETMIDR4 0-3. 200 - * @nr_cntr: Number of counters as found in ETMIDR5 bit 28-30. 
201 - * @nr_ext_inp: Number of external input. 202 - * @numcidc: Number of contextID comparators. 203 - * @numvmidc: Number of VMID comparators. 204 - * @nrseqstate: The number of sequencer states that are implemented. 205 - * @nr_event: Indicates how many events the trace unit support. 206 - * @nr_resource:The number of resource selection pairs available for tracing. 207 - * @nr_ss_cmp: Number of single-shot comparator controls that are available. 197 + * struct etmv4_config - configuration information related to an ETMv4 208 198 * @mode: Controls various modes supported by this ETM. 209 - * @trcid: value of the current ID for this component. 210 - * @trcid_size: Indicates the trace ID width. 211 - * @instrp0: Tracing of load and store instructions 212 - * as P0 elements is supported. 213 - * @trccond: If the trace unit supports conditional 214 - * instruction tracing. 215 - * @retstack: Indicates if the implementation supports a return stack. 216 - * @trc_error: Whether a trace unit can trace a system 217 - * error exception. 218 - * @atbtrig: If the implementation can support ATB triggers 219 - * @lpoverride: If the implementation can support low-power state over. 220 199 * @pe_sel: Controls which PE to trace. 221 200 * @cfg: Controls the tracing options. 222 201 * @eventctrl0: Controls the tracing of arbitrary events. 223 202 * @eventctrl1: Controls the behavior of the events that @event_ctrl0 selects. 224 203 * @stallctl: If functionality that prevents trace unit buffer overflows 225 204 * is available. 226 - * @sysstall: Does the system support stall control of the PE? 227 - * @nooverflow: Indicate if overflow prevention is supported. 228 - * @stall_ctrl: Enables trace unit functionality that prevents trace 229 - * unit buffer overflows. 230 - * @ts_size: Global timestamp size field. 231 205 * @ts_ctrl: Controls the insertion of global timestamps in the 232 206 * trace streams. 
233 - * @syncpr: Indicates if an implementation has a fixed 234 - * synchronization period. 235 207 * @syncfreq: Controls how often trace synchronization requests occur. 236 - * @trccci: Indicates if the trace unit supports cycle counting 237 - * for instruction. 238 - * @ccsize: Indicates the size of the cycle counter in bits. 239 - * @ccitmin: minimum value that can be programmed in 240 208 * the TRCCCCTLR register. 241 209 * @ccctlr: Sets the threshold value for cycle counting. 242 - * @trcbb: Indicates if the trace unit supports branch broadcast tracing. 243 - * @q_support: Q element support characteristics. 244 210 * @vinst_ctrl: Controls instruction trace filtering. 245 211 * @viiectlr: Set or read, the address range comparators. 246 212 * @vissctlr: Set, or read, the single address comparators that control the ··· 232 264 * @addr_acc: Address comparator access type. 233 265 * @addr_type: Current status of the comparator register. 234 266 * @ctxid_idx: Context ID index selector. 235 - * @ctxid_size: Size of the context ID field to consider. 236 267 * @ctxid_pid: Value of the context ID comparator. 237 268 * @ctxid_vpid: Virtual PID seen by users if PID namespace is enabled, otherwise 238 269 * the same value of ctxid_pid. 239 270 * @ctxid_mask0:Context ID comparator mask for comparator 0-3. 240 271 * @ctxid_mask1:Context ID comparator mask for comparator 4-7. 241 272 * @vmid_idx: VM ID index selector. 242 - * @vmid_size: Size of the VM ID comparator to consider. 243 273 * @vmid_val: Value of the VM ID comparator. 244 274 * @vmid_mask0: VM ID comparator mask for comparator 0-3. 245 275 * @vmid_mask1: VM ID comparator mask for comparator 4-7. 246 - * @s_ex_level: In secure state, indicates whether instruction tracing is 247 - * supported for the corresponding Exception level. 248 - * @ns_ex_level:In non-secure state, indicates whether instruction tracing is 249 - * supported for the corresponding Exception level. 250 276 * @ext_inp: External input selection. 
251 277 */ 252 - struct etmv4_drvdata { 253 - void __iomem *base; 254 - struct device *dev; 255 - struct coresight_device *csdev; 256 - spinlock_t spinlock; 257 - int cpu; 258 - u8 arch; 259 - bool enable; 260 - bool sticky_enable; 261 - bool boot_enable; 262 - bool os_unlock; 263 - u8 nr_pe; 264 - u8 nr_pe_cmp; 265 - u8 nr_addr_cmp; 266 - u8 nr_cntr; 267 - u8 nr_ext_inp; 268 - u8 numcidc; 269 - u8 numvmidc; 270 - u8 nrseqstate; 271 - u8 nr_event; 272 - u8 nr_resource; 273 - u8 nr_ss_cmp; 278 + struct etmv4_config { 274 279 u32 mode; 275 - u8 trcid; 276 - u8 trcid_size; 277 - bool instrp0; 278 - bool trccond; 279 - bool retstack; 280 - bool trc_error; 281 - bool atbtrig; 282 - bool lpoverride; 283 280 u32 pe_sel; 284 281 u32 cfg; 285 282 u32 eventctrl0; 286 283 u32 eventctrl1; 287 - bool stallctl; 288 - bool sysstall; 289 - bool nooverflow; 290 284 u32 stall_ctrl; 291 - u8 ts_size; 292 285 u32 ts_ctrl; 293 - bool syncpr; 294 286 u32 syncfreq; 295 - bool trccci; 296 - u8 ccsize; 297 - u8 ccitmin; 298 287 u32 ccctlr; 299 - bool trcbb; 300 288 u32 bb_ctrl; 301 - bool q_support; 302 289 u32 vinst_ctrl; 303 290 u32 viiectlr; 304 291 u32 vissctlr; ··· 276 353 u64 addr_acc[ETM_MAX_SINGLE_ADDR_CMP]; 277 354 u8 addr_type[ETM_MAX_SINGLE_ADDR_CMP]; 278 355 u8 ctxid_idx; 279 - u8 ctxid_size; 280 356 u64 ctxid_pid[ETMv4_MAX_CTXID_CMP]; 281 357 u64 ctxid_vpid[ETMv4_MAX_CTXID_CMP]; 282 358 u32 ctxid_mask0; 283 359 u32 ctxid_mask1; 284 360 u8 vmid_idx; 285 - u8 vmid_size; 286 361 u64 vmid_val[ETM_MAX_VMID_CMP]; 287 362 u32 vmid_mask0; 288 363 u32 vmid_mask1; 364 + u32 ext_inp; 365 + }; 366 + 367 + /** 368 + * struct etm4_drvdata - specifics associated to an ETM component 369 + * @base: Memory mapped base address for this component. 370 + * @dev: The device entity associated to this component. 371 + * @csdev: Component vitals needed by the framework. 372 + * @spinlock: Only one at a time pls. 373 + * @mode: This tracer's mode, i.e sysFS, Perf or disabled. 
374 + * @cpu: The cpu this component is affined to. 375 + * @arch: ETM version number. 376 + * @nr_pe: The number of processing entity available for tracing. 377 + * @nr_pe_cmp: The number of processing entity comparator inputs that are 378 + * available for tracing. 379 + * @nr_addr_cmp:Number of pairs of address comparators available 380 + * as found in ETMIDR4 0-3. 381 + * @nr_cntr: Number of counters as found in ETMIDR5 bit 28-30. 382 + * @nr_ext_inp: Number of external input. 383 + * @numcidc: Number of contextID comparators. 384 + * @numvmidc: Number of VMID comparators. 385 + * @nrseqstate: The number of sequencer states that are implemented. 386 + * @nr_event: Indicates how many events the trace unit support. 387 + * @nr_resource:The number of resource selection pairs available for tracing. 388 + * @nr_ss_cmp: Number of single-shot comparator controls that are available. 389 + * @trcid: value of the current ID for this component. 390 + * @trcid_size: Indicates the trace ID width. 391 + * @ts_size: Global timestamp size field. 392 + * @ctxid_size: Size of the context ID field to consider. 393 + * @vmid_size: Size of the VM ID comparator to consider. 394 + * @ccsize: Indicates the size of the cycle counter in bits. 395 + * @ccitmin: minimum value that can be programmed in 396 + * @s_ex_level: In secure state, indicates whether instruction tracing is 397 + * supported for the corresponding Exception level. 398 + * @ns_ex_level:In non-secure state, indicates whether instruction tracing is 399 + * supported for the corresponding Exception level. 400 + * @sticky_enable: true if ETM base configuration has been done. 401 + * @boot_enable:True if we should start tracing at boot time. 402 + * @os_unlock: True if access to management registers is allowed. 403 + * @instrp0: Tracing of load and store instructions 404 + * as P0 elements is supported. 405 + * @trcbb: Indicates if the trace unit supports branch broadcast tracing. 
406 + * @trccond: If the trace unit supports conditional 407 + * instruction tracing. 408 + * @retstack: Indicates if the implementation supports a return stack. 409 + * @trccci: Indicates if the trace unit supports cycle counting 410 + * for instruction. 411 + * @q_support: Q element support characteristics. 412 + * @trc_error: Whether a trace unit can trace a system 413 + * error exception. 414 + * @syncpr: Indicates if an implementation has a fixed 415 + * synchronization period. 416 + * @stall_ctrl: Enables trace unit functionality that prevents trace 417 + * unit buffer overflows. 418 + * @sysstall: Does the system support stall control of the PE? 419 + * @nooverflow: Indicate if overflow prevention is supported. 420 + * @atbtrig: If the implementation can support ATB triggers 421 + * @lpoverride: If the implementation can support low-power state over. 422 + * @config: structure holding configuration parameters. 423 + */ 424 + struct etmv4_drvdata { 425 + void __iomem *base; 426 + struct device *dev; 427 + struct coresight_device *csdev; 428 + spinlock_t spinlock; 429 + local_t mode; 430 + int cpu; 431 + u8 arch; 432 + u8 nr_pe; 433 + u8 nr_pe_cmp; 434 + u8 nr_addr_cmp; 435 + u8 nr_cntr; 436 + u8 nr_ext_inp; 437 + u8 numcidc; 438 + u8 numvmidc; 439 + u8 nrseqstate; 440 + u8 nr_event; 441 + u8 nr_resource; 442 + u8 nr_ss_cmp; 443 + u8 trcid; 444 + u8 trcid_size; 445 + u8 ts_size; 446 + u8 ctxid_size; 447 + u8 vmid_size; 448 + u8 ccsize; 449 + u8 ccitmin; 289 450 u8 s_ex_level; 290 451 u8 ns_ex_level; 291 - u32 ext_inp; 452 + u8 q_support; 453 + bool sticky_enable; 454 + bool boot_enable; 455 + bool os_unlock; 456 + bool instrp0; 457 + bool trcbb; 458 + bool trccond; 459 + bool retstack; 460 + bool trccci; 461 + bool trc_error; 462 + bool syncpr; 463 + bool stallctl; 464 + bool sysstall; 465 + bool nooverflow; 466 + bool atbtrig; 467 + bool lpoverride; 468 + struct etmv4_config config; 292 469 }; 293 470 294 471 /* Address comparator access types */ ··· 414 391 
ETM_ADDR_TYPE_START, 415 392 ETM_ADDR_TYPE_STOP, 416 393 }; 394 + 395 + extern const struct attribute_group *coresight_etmv4_groups[]; 396 + void etm4_config_trace_mode(struct etmv4_config *config); 417 397 #endif
-1
drivers/hwtracing/coresight/coresight-funnel.c
··· 221 221 if (IS_ERR(drvdata->csdev)) 222 222 return PTR_ERR(drvdata->csdev); 223 223 224 - dev_info(dev, "FUNNEL initialized\n"); 225 224 return 0; 226 225 } 227 226
+30
drivers/hwtracing/coresight/coresight-priv.h
··· 37 37 #define ETM_MODE_EXCL_KERN BIT(30) 38 38 #define ETM_MODE_EXCL_USER BIT(31) 39 39 40 + #define coresight_simple_func(type, name, offset) \ 41 + static ssize_t name##_show(struct device *_dev, \ 42 + struct device_attribute *attr, char *buf) \ 43 + { \ 44 + type *drvdata = dev_get_drvdata(_dev->parent); \ 45 + return scnprintf(buf, PAGE_SIZE, "0x%x\n", \ 46 + readl_relaxed(drvdata->base + offset)); \ 47 + } \ 48 + static DEVICE_ATTR_RO(name) 49 + 40 50 enum cs_mode { 41 51 CS_MODE_DISABLED, 42 52 CS_MODE_SYSFS, 43 53 CS_MODE_PERF, 54 + }; 55 + 56 + /** 57 + * struct cs_buffers - keep track of a recording session's specifics 58 + * @cur: index of the current buffer 59 + * @nr_pages: max number of pages granted to us 60 + * @offset: offset within the current buffer 61 + * @data_size: how much we collected in this run 62 + * @lost: non-zero if we had a HW buffer wrap around 63 + * @snapshot: is this run in snapshot mode 64 + * @data_pages: a handle to the ring buffer 65 + */ 66 + struct cs_buffers { 67 + unsigned int cur; 68 + unsigned int nr_pages; 69 + unsigned long offset; 70 + local_t data_size; 71 + local_t lost; 72 + bool snapshot; 73 + void **data_pages; 44 74 }; 45 75 46 76 static inline void CS_LOCK(void __iomem *addr)
-1
drivers/hwtracing/coresight/coresight-replicator.c
··· 114 114 115 115 pm_runtime_put(&pdev->dev); 116 116 117 - dev_info(dev, "REPLICATOR initialized\n"); 118 117 return 0; 119 118 120 119 out_disable_pm:
+920
drivers/hwtracing/coresight/coresight-stm.c
··· 1 + /* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved. 2 + * 3 + * Description: CoreSight System Trace Macrocell driver 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 and 7 + * only version 2 as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * Initial implementation by Pratik Patel 15 + * (C) 2014-2015 Pratik Patel <pratikp@codeaurora.org> 16 + * 17 + * Serious refactoring, code cleanup and upgrading to the Coresight upstream 18 + * framework by Mathieu Poirier 19 + * (C) 2015-2016 Mathieu Poirier <mathieu.poirier@linaro.org> 20 + * 21 + * Guaranteed timing and support for various packet type coming from the 22 + * generic STM API by Chunyan Zhang 23 + * (C) 2015-2016 Chunyan Zhang <zhang.chunyan@linaro.org> 24 + */ 25 + #include <asm/local.h> 26 + #include <linux/amba/bus.h> 27 + #include <linux/bitmap.h> 28 + #include <linux/clk.h> 29 + #include <linux/coresight.h> 30 + #include <linux/coresight-stm.h> 31 + #include <linux/err.h> 32 + #include <linux/kernel.h> 33 + #include <linux/moduleparam.h> 34 + #include <linux/of_address.h> 35 + #include <linux/perf_event.h> 36 + #include <linux/pm_runtime.h> 37 + #include <linux/stm.h> 38 + 39 + #include "coresight-priv.h" 40 + 41 + #define STMDMASTARTR 0xc04 42 + #define STMDMASTOPR 0xc08 43 + #define STMDMASTATR 0xc0c 44 + #define STMDMACTLR 0xc10 45 + #define STMDMAIDR 0xcfc 46 + #define STMHEER 0xd00 47 + #define STMHETER 0xd20 48 + #define STMHEBSR 0xd60 49 + #define STMHEMCR 0xd64 50 + #define STMHEMASTR 0xdf4 51 + #define STMHEFEAT1R 0xdf8 52 + #define STMHEIDR 0xdfc 53 + #define STMSPER 0xe00 54 + #define STMSPTER 0xe20 
55 + #define STMPRIVMASKR 0xe40 56 + #define STMSPSCR 0xe60 57 + #define STMSPMSCR 0xe64 58 + #define STMSPOVERRIDER 0xe68 59 + #define STMSPMOVERRIDER 0xe6c 60 + #define STMSPTRIGCSR 0xe70 61 + #define STMTCSR 0xe80 62 + #define STMTSSTIMR 0xe84 63 + #define STMTSFREQR 0xe8c 64 + #define STMSYNCR 0xe90 65 + #define STMAUXCR 0xe94 66 + #define STMSPFEAT1R 0xea0 67 + #define STMSPFEAT2R 0xea4 68 + #define STMSPFEAT3R 0xea8 69 + #define STMITTRIGGER 0xee8 70 + #define STMITATBDATA0 0xeec 71 + #define STMITATBCTR2 0xef0 72 + #define STMITATBID 0xef4 73 + #define STMITATBCTR0 0xef8 74 + 75 + #define STM_32_CHANNEL 32 76 + #define BYTES_PER_CHANNEL 256 77 + #define STM_TRACE_BUF_SIZE 4096 78 + #define STM_SW_MASTER_END 127 79 + 80 + /* Register bit definition */ 81 + #define STMTCSR_BUSY_BIT 23 82 + /* Reserve the first 10 channels for kernel usage */ 83 + #define STM_CHANNEL_OFFSET 0 84 + 85 + enum stm_pkt_type { 86 + STM_PKT_TYPE_DATA = 0x98, 87 + STM_PKT_TYPE_FLAG = 0xE8, 88 + STM_PKT_TYPE_TRIG = 0xF8, 89 + }; 90 + 91 + #define stm_channel_addr(drvdata, ch) (drvdata->chs.base + \ 92 + (ch * BYTES_PER_CHANNEL)) 93 + #define stm_channel_off(type, opts) (type & ~opts) 94 + 95 + static int boot_nr_channel; 96 + 97 + /* 98 + * Not really modular but using module_param is the easiest way to 99 + * remain consistent with existing use cases for now. 100 + */ 101 + module_param_named( 102 + boot_nr_channel, boot_nr_channel, int, S_IRUGO 103 + ); 104 + 105 + /** 106 + * struct channel_space - central management entity for extended ports 107 + * @base: memory mapped base address where channels start. 108 + * @guaranteed: is the channel delivery guaranteed. 109 + */ 110 + struct channel_space { 111 + void __iomem *base; 112 + unsigned long *guaranteed; 113 + }; 114 + 115 + /** 116 + * struct stm_drvdata - specifics associated to an STM component 117 + * @base: memory mapped base address for this component. 118 + * @dev: the device entity associated to this component.
119 + * @atclk: optional clock for the core parts of the STM. 120 + * @csdev: component vitals needed by the framework. 121 + * @spinlock: only one at a time pls. 122 + * @chs: the channels associated to this STM. 123 + * @stm: structure associated to the generic STM interface. 124 + * @mode: this tracer's mode, i.e. sysFS, or disabled. 125 + * @traceid: value of the current ID for this component. 126 + * @write_bytes: Maximum bytes this STM can write at a time. 127 + * @stmsper: settings for register STMSPER. 128 + * @stmspscr: settings for register STMSPSCR. 129 + * @numsp: the total number of stimulus ports supported by this STM. 130 + * @stmheer: settings for register STMHEER. 131 + * @stmheter: settings for register STMHETER. 132 + * @stmhebsr: settings for register STMHEBSR. 133 + */ 134 + struct stm_drvdata { 135 + void __iomem *base; 136 + struct device *dev; 137 + struct clk *atclk; 138 + struct coresight_device *csdev; 139 + spinlock_t spinlock; 140 + struct channel_space chs; 141 + struct stm_data stm; 142 + local_t mode; 143 + u8 traceid; 144 + u32 write_bytes; 145 + u32 stmsper; 146 + u32 stmspscr; 147 + u32 numsp; 148 + u32 stmheer; 149 + u32 stmheter; 150 + u32 stmhebsr; 151 + }; 152 + 153 + static void stm_hwevent_enable_hw(struct stm_drvdata *drvdata) 154 + { 155 + CS_UNLOCK(drvdata->base); 156 + 157 + writel_relaxed(drvdata->stmhebsr, drvdata->base + STMHEBSR); 158 + writel_relaxed(drvdata->stmheter, drvdata->base + STMHETER); 159 + writel_relaxed(drvdata->stmheer, drvdata->base + STMHEER); 160 + writel_relaxed(0x01 | /* Enable HW event tracing */ 161 + 0x04, /* Error detection on event tracing */ 162 + drvdata->base + STMHEMCR); 163 + 164 + CS_LOCK(drvdata->base); 165 + } 166 + 167 + static void stm_port_enable_hw(struct stm_drvdata *drvdata) 168 + { 169 + CS_UNLOCK(drvdata->base); 170 + /* ATB trigger enable on direct writes to TRIG locations */ 171 + writel_relaxed(0x10, 172 + drvdata->base + STMSPTRIGCSR); 173 + writel_relaxed(drvdata->stmspscr,
drvdata->base + STMSPSCR); 174 + writel_relaxed(drvdata->stmsper, drvdata->base + STMSPER); 175 + 176 + CS_LOCK(drvdata->base); 177 + } 178 + 179 + static void stm_enable_hw(struct stm_drvdata *drvdata) 180 + { 181 + if (drvdata->stmheer) 182 + stm_hwevent_enable_hw(drvdata); 183 + 184 + stm_port_enable_hw(drvdata); 185 + 186 + CS_UNLOCK(drvdata->base); 187 + 188 + /* 4096 byte between synchronisation packets */ 189 + writel_relaxed(0xFFF, drvdata->base + STMSYNCR); 190 + writel_relaxed((drvdata->traceid << 16 | /* trace id */ 191 + 0x02 | /* timestamp enable */ 192 + 0x01), /* global STM enable */ 193 + drvdata->base + STMTCSR); 194 + 195 + CS_LOCK(drvdata->base); 196 + } 197 + 198 + static int stm_enable(struct coresight_device *csdev, 199 + struct perf_event_attr *attr, u32 mode) 200 + { 201 + u32 val; 202 + struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 203 + 204 + if (mode != CS_MODE_SYSFS) 205 + return -EINVAL; 206 + 207 + val = local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, mode); 208 + 209 + /* Someone is already using the tracer */ 210 + if (val) 211 + return -EBUSY; 212 + 213 + pm_runtime_get_sync(drvdata->dev); 214 + 215 + spin_lock(&drvdata->spinlock); 216 + stm_enable_hw(drvdata); 217 + spin_unlock(&drvdata->spinlock); 218 + 219 + dev_info(drvdata->dev, "STM tracing enabled\n"); 220 + return 0; 221 + } 222 + 223 + static void stm_hwevent_disable_hw(struct stm_drvdata *drvdata) 224 + { 225 + CS_UNLOCK(drvdata->base); 226 + 227 + writel_relaxed(0x0, drvdata->base + STMHEMCR); 228 + writel_relaxed(0x0, drvdata->base + STMHEER); 229 + writel_relaxed(0x0, drvdata->base + STMHETER); 230 + 231 + CS_LOCK(drvdata->base); 232 + } 233 + 234 + static void stm_port_disable_hw(struct stm_drvdata *drvdata) 235 + { 236 + CS_UNLOCK(drvdata->base); 237 + 238 + writel_relaxed(0x0, drvdata->base + STMSPER); 239 + writel_relaxed(0x0, drvdata->base + STMSPTRIGCSR); 240 + 241 + CS_LOCK(drvdata->base); 242 + } 243 + 244 + static void stm_disable_hw(struct 
stm_drvdata *drvdata) 245 + { 246 + u32 val; 247 + 248 + CS_UNLOCK(drvdata->base); 249 + 250 + val = readl_relaxed(drvdata->base + STMTCSR); 251 + val &= ~0x1; /* clear global STM enable [0] */ 252 + writel_relaxed(val, drvdata->base + STMTCSR); 253 + 254 + CS_LOCK(drvdata->base); 255 + 256 + stm_port_disable_hw(drvdata); 257 + if (drvdata->stmheer) 258 + stm_hwevent_disable_hw(drvdata); 259 + } 260 + 261 + static void stm_disable(struct coresight_device *csdev) 262 + { 263 + struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 264 + 265 + /* 266 + * For as long as the tracer isn't disabled another entity can't 267 + * change its status. As such we can read the status here without 268 + * fearing it will change under us. 269 + */ 270 + if (local_read(&drvdata->mode) == CS_MODE_SYSFS) { 271 + spin_lock(&drvdata->spinlock); 272 + stm_disable_hw(drvdata); 273 + spin_unlock(&drvdata->spinlock); 274 + 275 + /* Wait until the engine has completely stopped */ 276 + coresight_timeout(drvdata, STMTCSR, STMTCSR_BUSY_BIT, 0); 277 + 278 + pm_runtime_put(drvdata->dev); 279 + 280 + local_set(&drvdata->mode, CS_MODE_DISABLED); 281 + dev_info(drvdata->dev, "STM tracing disabled\n"); 282 + } 283 + } 284 + 285 + static int stm_trace_id(struct coresight_device *csdev) 286 + { 287 + struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 288 + 289 + return drvdata->traceid; 290 + } 291 + 292 + static const struct coresight_ops_source stm_source_ops = { 293 + .trace_id = stm_trace_id, 294 + .enable = stm_enable, 295 + .disable = stm_disable, 296 + }; 297 + 298 + static const struct coresight_ops stm_cs_ops = { 299 + .source_ops = &stm_source_ops, 300 + }; 301 + 302 + static inline bool stm_addr_unaligned(const void *addr, u8 write_bytes) 303 + { 304 + return ((unsigned long)addr & (write_bytes - 1)); 305 + } 306 + 307 + static void stm_send(void *addr, const void *data, u32 size, u8 write_bytes) 308 + { 309 + u8 paload[8]; 310 + 311 + if (stm_addr_unaligned(data, 
write_bytes)) { 312 + memcpy(paload, data, size); 313 + data = paload; 314 + } 315 + 316 + /* now we are 64bit/32bit aligned */ 317 + switch (size) { 318 + #ifdef CONFIG_64BIT 319 + case 8: 320 + writeq_relaxed(*(u64 *)data, addr); 321 + break; 322 + #endif 323 + case 4: 324 + writel_relaxed(*(u32 *)data, addr); 325 + break; 326 + case 2: 327 + writew_relaxed(*(u16 *)data, addr); 328 + break; 329 + case 1: 330 + writeb_relaxed(*(u8 *)data, addr); 331 + break; 332 + default: 333 + break; 334 + } 335 + } 336 + 337 + static int stm_generic_link(struct stm_data *stm_data, 338 + unsigned int master, unsigned int channel) 339 + { 340 + struct stm_drvdata *drvdata = container_of(stm_data, 341 + struct stm_drvdata, stm); 342 + if (!drvdata || !drvdata->csdev) 343 + return -EINVAL; 344 + 345 + return coresight_enable(drvdata->csdev); 346 + } 347 + 348 + static void stm_generic_unlink(struct stm_data *stm_data, 349 + unsigned int master, unsigned int channel) 350 + { 351 + struct stm_drvdata *drvdata = container_of(stm_data, 352 + struct stm_drvdata, stm); 353 + if (!drvdata || !drvdata->csdev) 354 + return; 355 + 356 + stm_disable(drvdata->csdev); 357 + } 358 + 359 + static long stm_generic_set_options(struct stm_data *stm_data, 360 + unsigned int master, 361 + unsigned int channel, 362 + unsigned int nr_chans, 363 + unsigned long options) 364 + { 365 + struct stm_drvdata *drvdata = container_of(stm_data, 366 + struct stm_drvdata, stm); 367 + if (!(drvdata && local_read(&drvdata->mode))) 368 + return -EINVAL; 369 + 370 + if (channel >= drvdata->numsp) 371 + return -EINVAL; 372 + 373 + switch (options) { 374 + case STM_OPTION_GUARANTEED: 375 + set_bit(channel, drvdata->chs.guaranteed); 376 + break; 377 + 378 + case STM_OPTION_INVARIANT: 379 + clear_bit(channel, drvdata->chs.guaranteed); 380 + break; 381 + 382 + default: 383 + return -EINVAL; 384 + } 385 + 386 + return 0; 387 + } 388 + 389 + static ssize_t stm_generic_packet(struct stm_data *stm_data, 390 + unsigned int 
master, 391 + unsigned int channel, 392 + unsigned int packet, 393 + unsigned int flags, 394 + unsigned int size, 395 + const unsigned char *payload) 396 + { 397 + unsigned long ch_addr; 398 + struct stm_drvdata *drvdata = container_of(stm_data, 399 + struct stm_drvdata, stm); 400 + 401 + if (!(drvdata && local_read(&drvdata->mode))) 402 + return 0; 403 + 404 + if (channel >= drvdata->numsp) 405 + return 0; 406 + 407 + ch_addr = (unsigned long)stm_channel_addr(drvdata, channel); 408 + 409 + flags = (flags == STP_PACKET_TIMESTAMPED) ? STM_FLAG_TIMESTAMPED : 0; 410 + flags |= test_bit(channel, drvdata->chs.guaranteed) ? 411 + STM_FLAG_GUARANTEED : 0; 412 + 413 + if (size > drvdata->write_bytes) 414 + size = drvdata->write_bytes; 415 + else 416 + size = rounddown_pow_of_two(size); 417 + 418 + switch (packet) { 419 + case STP_PACKET_FLAG: 420 + ch_addr |= stm_channel_off(STM_PKT_TYPE_FLAG, flags); 421 + 422 + /* 423 + * The generic STM core sets a size of '0' on flag packets. 424 + * As such send a flag packet of size '1' and tell the 425 + * core we did so. 
426 + */ 427 + stm_send((void *)ch_addr, payload, 1, drvdata->write_bytes); 428 + size = 1; 429 + break; 430 + 431 + case STP_PACKET_DATA: 432 + ch_addr |= stm_channel_off(STM_PKT_TYPE_DATA, flags); 433 + stm_send((void *)ch_addr, payload, size, 434 + drvdata->write_bytes); 435 + break; 436 + 437 + default: 438 + return -ENOTSUPP; 439 + } 440 + 441 + return size; 442 + } 443 + 444 + static ssize_t hwevent_enable_show(struct device *dev, 445 + struct device_attribute *attr, char *buf) 446 + { 447 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 448 + unsigned long val = drvdata->stmheer; 449 + 450 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 451 + } 452 + 453 + static ssize_t hwevent_enable_store(struct device *dev, 454 + struct device_attribute *attr, 455 + const char *buf, size_t size) 456 + { 457 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 458 + unsigned long val; 459 + int ret = 0; 460 + 461 + ret = kstrtoul(buf, 16, &val); 462 + if (ret) 463 + return -EINVAL; 464 + 465 + drvdata->stmheer = val; 466 + /* HW event enable and trigger go hand in hand */ 467 + drvdata->stmheter = val; 468 + 469 + return size; 470 + } 471 + static DEVICE_ATTR_RW(hwevent_enable); 472 + 473 + static ssize_t hwevent_select_show(struct device *dev, 474 + struct device_attribute *attr, char *buf) 475 + { 476 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 477 + unsigned long val = drvdata->stmhebsr; 478 + 479 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 480 + } 481 + 482 + static ssize_t hwevent_select_store(struct device *dev, 483 + struct device_attribute *attr, 484 + const char *buf, size_t size) 485 + { 486 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 487 + unsigned long val; 488 + int ret = 0; 489 + 490 + ret = kstrtoul(buf, 16, &val); 491 + if (ret) 492 + return -EINVAL; 493 + 494 + drvdata->stmhebsr = val; 495 + 496 + return size; 497 + } 498 + static DEVICE_ATTR_RW(hwevent_select); 499 + 500 + static ssize_t 
port_select_show(struct device *dev, 501 + struct device_attribute *attr, char *buf) 502 + { 503 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 504 + unsigned long val; 505 + 506 + if (!local_read(&drvdata->mode)) { 507 + val = drvdata->stmspscr; 508 + } else { 509 + spin_lock(&drvdata->spinlock); 510 + val = readl_relaxed(drvdata->base + STMSPSCR); 511 + spin_unlock(&drvdata->spinlock); 512 + } 513 + 514 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 515 + } 516 + 517 + static ssize_t port_select_store(struct device *dev, 518 + struct device_attribute *attr, 519 + const char *buf, size_t size) 520 + { 521 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 522 + unsigned long val, stmsper; 523 + int ret = 0; 524 + 525 + ret = kstrtoul(buf, 16, &val); 526 + if (ret) 527 + return ret; 528 + 529 + spin_lock(&drvdata->spinlock); 530 + drvdata->stmspscr = val; 531 + 532 + if (local_read(&drvdata->mode)) { 533 + CS_UNLOCK(drvdata->base); 534 + /* Process as per ARM's TRM recommendation */ 535 + stmsper = readl_relaxed(drvdata->base + STMSPER); 536 + writel_relaxed(0x0, drvdata->base + STMSPER); 537 + writel_relaxed(drvdata->stmspscr, drvdata->base + STMSPSCR); 538 + writel_relaxed(stmsper, drvdata->base + STMSPER); 539 + CS_LOCK(drvdata->base); 540 + } 541 + spin_unlock(&drvdata->spinlock); 542 + 543 + return size; 544 + } 545 + static DEVICE_ATTR_RW(port_select); 546 + 547 + static ssize_t port_enable_show(struct device *dev, 548 + struct device_attribute *attr, char *buf) 549 + { 550 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 551 + unsigned long val; 552 + 553 + if (!local_read(&drvdata->mode)) { 554 + val = drvdata->stmsper; 555 + } else { 556 + spin_lock(&drvdata->spinlock); 557 + val = readl_relaxed(drvdata->base + STMSPER); 558 + spin_unlock(&drvdata->spinlock); 559 + } 560 + 561 + return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 562 + } 563 + 564 + static ssize_t port_enable_store(struct device *dev, 565 + struct 
device_attribute *attr, 566 + const char *buf, size_t size) 567 + { 568 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 569 + unsigned long val; 570 + int ret = 0; 571 + 572 + ret = kstrtoul(buf, 16, &val); 573 + if (ret) 574 + return ret; 575 + 576 + spin_lock(&drvdata->spinlock); 577 + drvdata->stmsper = val; 578 + 579 + if (local_read(&drvdata->mode)) { 580 + CS_UNLOCK(drvdata->base); 581 + writel_relaxed(drvdata->stmsper, drvdata->base + STMSPER); 582 + CS_LOCK(drvdata->base); 583 + } 584 + spin_unlock(&drvdata->spinlock); 585 + 586 + return size; 587 + } 588 + static DEVICE_ATTR_RW(port_enable); 589 + 590 + static ssize_t traceid_show(struct device *dev, 591 + struct device_attribute *attr, char *buf) 592 + { 593 + unsigned long val; 594 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 595 + 596 + val = drvdata->traceid; 597 + return sprintf(buf, "%#lx\n", val); 598 + } 599 + 600 + static ssize_t traceid_store(struct device *dev, 601 + struct device_attribute *attr, 602 + const char *buf, size_t size) 603 + { 604 + int ret; 605 + unsigned long val; 606 + struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent); 607 + 608 + ret = kstrtoul(buf, 16, &val); 609 + if (ret) 610 + return ret; 611 + 612 + /* traceid field is 7bit wide on STM32 */ 613 + drvdata->traceid = val & 0x7f; 614 + return size; 615 + } 616 + static DEVICE_ATTR_RW(traceid); 617 + 618 + #define coresight_stm_simple_func(name, offset) \ 619 + coresight_simple_func(struct stm_drvdata, name, offset) 620 + 621 + coresight_stm_simple_func(tcsr, STMTCSR); 622 + coresight_stm_simple_func(tsfreqr, STMTSFREQR); 623 + coresight_stm_simple_func(syncr, STMSYNCR); 624 + coresight_stm_simple_func(sper, STMSPER); 625 + coresight_stm_simple_func(spter, STMSPTER); 626 + coresight_stm_simple_func(privmaskr, STMPRIVMASKR); 627 + coresight_stm_simple_func(spscr, STMSPSCR); 628 + coresight_stm_simple_func(spmscr, STMSPMSCR); 629 + coresight_stm_simple_func(spfeat1r, STMSPFEAT1R); 630 + 
coresight_stm_simple_func(spfeat2r, STMSPFEAT2R); 631 + coresight_stm_simple_func(spfeat3r, STMSPFEAT3R); 632 + coresight_stm_simple_func(devid, CORESIGHT_DEVID); 633 + 634 + static struct attribute *coresight_stm_attrs[] = { 635 + &dev_attr_hwevent_enable.attr, 636 + &dev_attr_hwevent_select.attr, 637 + &dev_attr_port_enable.attr, 638 + &dev_attr_port_select.attr, 639 + &dev_attr_traceid.attr, 640 + NULL, 641 + }; 642 + 643 + static struct attribute *coresight_stm_mgmt_attrs[] = { 644 + &dev_attr_tcsr.attr, 645 + &dev_attr_tsfreqr.attr, 646 + &dev_attr_syncr.attr, 647 + &dev_attr_sper.attr, 648 + &dev_attr_spter.attr, 649 + &dev_attr_privmaskr.attr, 650 + &dev_attr_spscr.attr, 651 + &dev_attr_spmscr.attr, 652 + &dev_attr_spfeat1r.attr, 653 + &dev_attr_spfeat2r.attr, 654 + &dev_attr_spfeat3r.attr, 655 + &dev_attr_devid.attr, 656 + NULL, 657 + }; 658 + 659 + static const struct attribute_group coresight_stm_group = { 660 + .attrs = coresight_stm_attrs, 661 + }; 662 + 663 + static const struct attribute_group coresight_stm_mgmt_group = { 664 + .attrs = coresight_stm_mgmt_attrs, 665 + .name = "mgmt", 666 + }; 667 + 668 + static const struct attribute_group *coresight_stm_groups[] = { 669 + &coresight_stm_group, 670 + &coresight_stm_mgmt_group, 671 + NULL, 672 + }; 673 + 674 + static int stm_get_resource_byname(struct device_node *np, 675 + char *ch_base, struct resource *res) 676 + { 677 + const char *name = NULL; 678 + int index = 0, found = 0; 679 + 680 + while (!of_property_read_string_index(np, "reg-names", index, &name)) { 681 + if (strcmp(ch_base, name)) { 682 + index++; 683 + continue; 684 + } 685 + 686 + /* We have a match and @index is where it's at */ 687 + found = 1; 688 + break; 689 + } 690 + 691 + if (!found) 692 + return -EINVAL; 693 + 694 + return of_address_to_resource(np, index, res); 695 + } 696 + 697 + static u32 stm_fundamental_data_size(struct stm_drvdata *drvdata) 698 + { 699 + u32 stmspfeat2r; 700 + 701 + if (!IS_ENABLED(CONFIG_64BIT)) 702 + 
return 4; 703 + 704 + stmspfeat2r = readl_relaxed(drvdata->base + STMSPFEAT2R); 705 + 706 + /* 707 + * bit[15:12] represents the fundamental data size 708 + * 0 - 32-bit data 709 + * 1 - 64-bit data 710 + */ 711 + return BMVAL(stmspfeat2r, 12, 15) ? 8 : 4; 712 + } 713 + 714 + static u32 stm_num_stimulus_port(struct stm_drvdata *drvdata) 715 + { 716 + u32 numsp; 717 + 718 + numsp = readl_relaxed(drvdata->base + CORESIGHT_DEVID); 719 + /* 720 + * NUMSP in STMDEVID is 17 bits long and if equal to 0x0, 721 + * 32 stimulus ports are supported. 722 + */ 723 + numsp &= 0x1ffff; 724 + if (!numsp) 725 + numsp = STM_32_CHANNEL; 726 + return numsp; 727 + } 728 + 729 + static void stm_init_default_data(struct stm_drvdata *drvdata) 730 + { 731 + /* Don't use port selection */ 732 + drvdata->stmspscr = 0x0; 733 + /* 734 + * Enable all channels regardless of their number. When port 735 + * selection isn't used (see above) STMSPER applies to all 736 + * 32 channel groups available, hence setting all 32 bits to 1 737 + */ 738 + drvdata->stmsper = ~0x0; 739 + 740 + /* 741 + * The trace ID value for *ETM* tracers starts at CPU_ID * 2 + 0x10 and 742 + * anything equal to or higher than 0x70 is reserved. Since 0x00 is 743 + * also reserved the STM trace ID needs to be higher than 0x00 and 744 + * lower than 0x10. 745 + */ 746 + drvdata->traceid = 0x1; 747 + 748 + /* Set invariant transaction timing on all channels */ 749 + bitmap_clear(drvdata->chs.guaranteed, 0, drvdata->numsp); 750 + } 751 + 752 + static void stm_init_generic_data(struct stm_drvdata *drvdata) 753 + { 754 + drvdata->stm.name = dev_name(drvdata->dev); 755 + 756 + /* 757 + * MasterIDs are assigned at HW design phase. As such the core is 758 + * using a single master for interaction with this device.
759 + */ 760 + drvdata->stm.sw_start = 1; 761 + drvdata->stm.sw_end = 1; 762 + drvdata->stm.hw_override = true; 763 + drvdata->stm.sw_nchannels = drvdata->numsp; 764 + drvdata->stm.packet = stm_generic_packet; 765 + drvdata->stm.link = stm_generic_link; 766 + drvdata->stm.unlink = stm_generic_unlink; 767 + drvdata->stm.set_options = stm_generic_set_options; 768 + } 769 + 770 + static int stm_probe(struct amba_device *adev, const struct amba_id *id) 771 + { 772 + int ret; 773 + void __iomem *base; 774 + unsigned long *guaranteed; 775 + struct device *dev = &adev->dev; 776 + struct coresight_platform_data *pdata = NULL; 777 + struct stm_drvdata *drvdata; 778 + struct resource *res = &adev->res; 779 + struct resource ch_res; 780 + size_t res_size, bitmap_size; 781 + struct coresight_desc *desc; 782 + struct device_node *np = adev->dev.of_node; 783 + 784 + if (np) { 785 + pdata = of_get_coresight_platform_data(dev, np); 786 + if (IS_ERR(pdata)) 787 + return PTR_ERR(pdata); 788 + adev->dev.platform_data = pdata; 789 + } 790 + drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); 791 + if (!drvdata) 792 + return -ENOMEM; 793 + 794 + drvdata->dev = &adev->dev; 795 + drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */ 796 + if (!IS_ERR(drvdata->atclk)) { 797 + ret = clk_prepare_enable(drvdata->atclk); 798 + if (ret) 799 + return ret; 800 + } 801 + dev_set_drvdata(dev, drvdata); 802 + 803 + base = devm_ioremap_resource(dev, res); 804 + if (IS_ERR(base)) 805 + return PTR_ERR(base); 806 + drvdata->base = base; 807 + 808 + ret = stm_get_resource_byname(np, "stm-stimulus-base", &ch_res); 809 + if (ret) 810 + return ret; 811 + 812 + base = devm_ioremap_resource(dev, &ch_res); 813 + if (IS_ERR(base)) 814 + return PTR_ERR(base); 815 + drvdata->chs.base = base; 816 + 817 + drvdata->write_bytes = stm_fundamental_data_size(drvdata); 818 + 819 + if (boot_nr_channel) { 820 + drvdata->numsp = boot_nr_channel; 821 + res_size = min((resource_size_t)(boot_nr_channel * 
822 + BYTES_PER_CHANNEL), resource_size(res)); 823 + } else { 824 + drvdata->numsp = stm_num_stimulus_port(drvdata); 825 + res_size = min((resource_size_t)(drvdata->numsp * 826 + BYTES_PER_CHANNEL), resource_size(res)); 827 + } 828 + bitmap_size = BITS_TO_LONGS(drvdata->numsp) * sizeof(long); 829 + 830 + guaranteed = devm_kzalloc(dev, bitmap_size, GFP_KERNEL); 831 + if (!guaranteed) 832 + return -ENOMEM; 833 + drvdata->chs.guaranteed = guaranteed; 834 + 835 + spin_lock_init(&drvdata->spinlock); 836 + 837 + stm_init_default_data(drvdata); 838 + stm_init_generic_data(drvdata); 839 + 840 + if (stm_register_device(dev, &drvdata->stm, THIS_MODULE)) { 841 + dev_info(dev, 842 + "stm_register_device failed, probing deferred\n"); 843 + return -EPROBE_DEFER; 844 + } 845 + 846 + desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL); 847 + if (!desc) { 848 + ret = -ENOMEM; 849 + goto stm_unregister; 850 + } 851 + 852 + desc->type = CORESIGHT_DEV_TYPE_SOURCE; 853 + desc->subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE; 854 + desc->ops = &stm_cs_ops; 855 + desc->pdata = pdata; 856 + desc->dev = dev; 857 + desc->groups = coresight_stm_groups; 858 + drvdata->csdev = coresight_register(desc); 859 + if (IS_ERR(drvdata->csdev)) { 860 + ret = PTR_ERR(drvdata->csdev); 861 + goto stm_unregister; 862 + } 863 + 864 + pm_runtime_put(&adev->dev); 865 + 866 + dev_info(dev, "%s initialized\n", (char *)id->data); 867 + return 0; 868 + 869 + stm_unregister: 870 + stm_unregister_device(&drvdata->stm); 871 + return ret; 872 + } 873 + 874 + #ifdef CONFIG_PM 875 + static int stm_runtime_suspend(struct device *dev) 876 + { 877 + struct stm_drvdata *drvdata = dev_get_drvdata(dev); 878 + 879 + if (drvdata && !IS_ERR(drvdata->atclk)) 880 + clk_disable_unprepare(drvdata->atclk); 881 + 882 + return 0; 883 + } 884 + 885 + static int stm_runtime_resume(struct device *dev) 886 + { 887 + struct stm_drvdata *drvdata = dev_get_drvdata(dev); 888 + 889 + if (drvdata && !IS_ERR(drvdata->atclk)) 890 +
clk_prepare_enable(drvdata->atclk); 891 + 892 + return 0; 893 + } 894 + #endif 895 + 896 + static const struct dev_pm_ops stm_dev_pm_ops = { 897 + SET_RUNTIME_PM_OPS(stm_runtime_suspend, stm_runtime_resume, NULL) 898 + }; 899 + 900 + static struct amba_id stm_ids[] = { 901 + { 902 + .id = 0x0003b962, 903 + .mask = 0x0003ffff, 904 + .data = "STM32", 905 + }, 906 + { 0, 0}, 907 + }; 908 + 909 + static struct amba_driver stm_driver = { 910 + .drv = { 911 + .name = "coresight-stm", 912 + .owner = THIS_MODULE, 913 + .pm = &stm_dev_pm_ops, 914 + .suppress_bind_attrs = true, 915 + }, 916 + .probe = stm_probe, 917 + .id_table = stm_ids, 918 + }; 919 + 920 + builtin_amba_driver(stm_driver);
+604
drivers/hwtracing/coresight/coresight-tmc-etf.c
··· 1 + /* 2 + * Copyright(C) 2016 Linaro Limited. All rights reserved. 3 + * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms of the GNU General Public License version 2 as published by 7 + * the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + * You should have received a copy of the GNU General Public License along with 15 + * this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #include <linux/circ_buf.h> 19 + #include <linux/coresight.h> 20 + #include <linux/perf_event.h> 21 + #include <linux/slab.h> 22 + #include "coresight-priv.h" 23 + #include "coresight-tmc.h" 24 + 25 + void tmc_etb_enable_hw(struct tmc_drvdata *drvdata) 26 + { 27 + CS_UNLOCK(drvdata->base); 28 + 29 + /* Wait for TMCSReady bit to be set */ 30 + tmc_wait_for_tmcready(drvdata); 31 + 32 + writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE); 33 + writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI | 34 + TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT | 35 + TMC_FFCR_TRIGON_TRIGIN, 36 + drvdata->base + TMC_FFCR); 37 + 38 + writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG); 39 + tmc_enable_hw(drvdata); 40 + 41 + CS_LOCK(drvdata->base); 42 + } 43 + 44 + static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata) 45 + { 46 + char *bufp; 47 + u32 read_data; 48 + int i; 49 + 50 + bufp = drvdata->buf; 51 + while (1) { 52 + for (i = 0; i < drvdata->memwidth; i++) { 53 + read_data = readl_relaxed(drvdata->base + TMC_RRD); 54 + if (read_data == 0xFFFFFFFF) 55 + return; 56 + memcpy(bufp, &read_data, 4); 57 + bufp += 4; 58 + } 59 + } 60 + } 61 + 62 + static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata) 
63 + { 64 + CS_UNLOCK(drvdata->base); 65 + 66 + tmc_flush_and_stop(drvdata); 67 + /* 68 + * When operating in sysFS mode the content of the buffer needs to be 69 + * read before the TMC is disabled. 70 + */ 71 + if (local_read(&drvdata->mode) == CS_MODE_SYSFS) 72 + tmc_etb_dump_hw(drvdata); 73 + tmc_disable_hw(drvdata); 74 + 75 + CS_LOCK(drvdata->base); 76 + } 77 + 78 + static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata) 79 + { 80 + CS_UNLOCK(drvdata->base); 81 + 82 + /* Wait for TMCSReady bit to be set */ 83 + tmc_wait_for_tmcready(drvdata); 84 + 85 + writel_relaxed(TMC_MODE_HARDWARE_FIFO, drvdata->base + TMC_MODE); 86 + writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI, 87 + drvdata->base + TMC_FFCR); 88 + writel_relaxed(0x0, drvdata->base + TMC_BUFWM); 89 + tmc_enable_hw(drvdata); 90 + 91 + CS_LOCK(drvdata->base); 92 + } 93 + 94 + static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata) 95 + { 96 + CS_UNLOCK(drvdata->base); 97 + 98 + tmc_flush_and_stop(drvdata); 99 + tmc_disable_hw(drvdata); 100 + 101 + CS_LOCK(drvdata->base); 102 + } 103 + 104 + static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev, u32 mode) 105 + { 106 + int ret = 0; 107 + bool used = false; 108 + char *buf = NULL; 109 + long val; 110 + unsigned long flags; 111 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 112 + 113 + /* This shouldn't be happening */ 114 + if (WARN_ON(mode != CS_MODE_SYSFS)) 115 + return -EINVAL; 116 + 117 + /* 118 + * If we don't have a buffer release the lock and allocate memory. 119 + * Otherwise keep the lock and move along. 
120 + */ 121 + spin_lock_irqsave(&drvdata->spinlock, flags); 122 + if (!drvdata->buf) { 123 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 124 + 125 + /* Allocating the memory here while outside of the spinlock */ 126 + buf = kzalloc(drvdata->size, GFP_KERNEL); 127 + if (!buf) 128 + return -ENOMEM; 129 + 130 + /* Let's try again */ 131 + spin_lock_irqsave(&drvdata->spinlock, flags); 132 + } 133 + 134 + if (drvdata->reading) { 135 + ret = -EBUSY; 136 + goto out; 137 + } 138 + 139 + val = local_xchg(&drvdata->mode, mode); 140 + /* 141 + * In sysFS mode we can have multiple writers per sink. Since this 142 + * sink is already enabled no memory is needed and the HW need not be 143 + * touched. 144 + */ 145 + if (val == CS_MODE_SYSFS) 146 + goto out; 147 + 148 + /* 149 + * If drvdata::buf isn't NULL, memory was allocated for a previous 150 + * trace run but wasn't read. If so simply zero-out the memory. 151 + * Otherwise use the memory allocated above. 152 + * 153 + * The memory is freed when users read the buffer using the 154 + * /dev/xyz.{etf|etb} interface. See tmc_read_unprepare_etf() for 155 + * details. 
156 + */ 157 + if (drvdata->buf) { 158 + memset(drvdata->buf, 0, drvdata->size); 159 + } else { 160 + used = true; 161 + drvdata->buf = buf; 162 + } 163 + 164 + tmc_etb_enable_hw(drvdata); 165 + out: 166 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 167 + 168 + /* Free memory outside the spinlock if need be */ 169 + if (!used && buf) 170 + kfree(buf); 171 + 172 + if (!ret) 173 + dev_info(drvdata->dev, "TMC-ETB/ETF enabled\n"); 174 + 175 + return ret; 176 + } 177 + 178 + static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, u32 mode) 179 + { 180 + int ret = 0; 181 + long val; 182 + unsigned long flags; 183 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 184 + 185 + /* This shouldn't be happening */ 186 + if (WARN_ON(mode != CS_MODE_PERF)) 187 + return -EINVAL; 188 + 189 + spin_lock_irqsave(&drvdata->spinlock, flags); 190 + if (drvdata->reading) { 191 + ret = -EINVAL; 192 + goto out; 193 + } 194 + 195 + val = local_xchg(&drvdata->mode, mode); 196 + /* 197 + * In Perf mode there can be only one writer per sink. There 198 + * is also no need to continue if the ETB/ETR is already operated 199 + * from sysFS. 
200 + */ 201 + if (val != CS_MODE_DISABLED) { 202 + ret = -EINVAL; 203 + goto out; 204 + } 205 + 206 + tmc_etb_enable_hw(drvdata); 207 + out: 208 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 209 + 210 + return ret; 211 + } 212 + 213 + static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode) 214 + { 215 + switch (mode) { 216 + case CS_MODE_SYSFS: 217 + return tmc_enable_etf_sink_sysfs(csdev, mode); 218 + case CS_MODE_PERF: 219 + return tmc_enable_etf_sink_perf(csdev, mode); 220 + } 221 + 222 + /* We shouldn't be here */ 223 + return -EINVAL; 224 + } 225 + 226 + static void tmc_disable_etf_sink(struct coresight_device *csdev) 227 + { 228 + long val; 229 + unsigned long flags; 230 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 231 + 232 + spin_lock_irqsave(&drvdata->spinlock, flags); 233 + if (drvdata->reading) { 234 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 235 + return; 236 + } 237 + 238 + val = local_xchg(&drvdata->mode, CS_MODE_DISABLED); 239 + /* Disable the TMC only if it needs to */ 240 + if (val != CS_MODE_DISABLED) 241 + tmc_etb_disable_hw(drvdata); 242 + 243 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 244 + 245 + dev_info(drvdata->dev, "TMC-ETB/ETF disabled\n"); 246 + } 247 + 248 + static int tmc_enable_etf_link(struct coresight_device *csdev, 249 + int inport, int outport) 250 + { 251 + unsigned long flags; 252 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 253 + 254 + spin_lock_irqsave(&drvdata->spinlock, flags); 255 + if (drvdata->reading) { 256 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 257 + return -EBUSY; 258 + } 259 + 260 + tmc_etf_enable_hw(drvdata); 261 + local_set(&drvdata->mode, CS_MODE_SYSFS); 262 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 263 + 264 + dev_info(drvdata->dev, "TMC-ETF enabled\n"); 265 + return 0; 266 + } 267 + 268 + static void tmc_disable_etf_link(struct coresight_device *csdev, 269 + int inport, int outport) 270 + { 271 + 
unsigned long flags; 272 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 273 + 274 + spin_lock_irqsave(&drvdata->spinlock, flags); 275 + if (drvdata->reading) { 276 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 277 + return; 278 + } 279 + 280 + tmc_etf_disable_hw(drvdata); 281 + local_set(&drvdata->mode, CS_MODE_DISABLED); 282 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 283 + 284 + dev_info(drvdata->dev, "TMC disabled\n"); 285 + } 286 + 287 + static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, int cpu, 288 + void **pages, int nr_pages, bool overwrite) 289 + { 290 + int node; 291 + struct cs_buffers *buf; 292 + 293 + if (cpu == -1) 294 + cpu = smp_processor_id(); 295 + node = cpu_to_node(cpu); 296 + 297 + /* Allocate memory structure for interaction with Perf */ 298 + buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node); 299 + if (!buf) 300 + return NULL; 301 + 302 + buf->snapshot = overwrite; 303 + buf->nr_pages = nr_pages; 304 + buf->data_pages = pages; 305 + 306 + return buf; 307 + } 308 + 309 + static void tmc_free_etf_buffer(void *config) 310 + { 311 + struct cs_buffers *buf = config; 312 + 313 + kfree(buf); 314 + } 315 + 316 + static int tmc_set_etf_buffer(struct coresight_device *csdev, 317 + struct perf_output_handle *handle, 318 + void *sink_config) 319 + { 320 + int ret = 0; 321 + unsigned long head; 322 + struct cs_buffers *buf = sink_config; 323 + 324 + /* wrap head around to the amount of space we have */ 325 + head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1); 326 + 327 + /* find the page to write to */ 328 + buf->cur = head / PAGE_SIZE; 329 + 330 + /* and offset within that page */ 331 + buf->offset = head % PAGE_SIZE; 332 + 333 + local_set(&buf->data_size, 0); 334 + 335 + return ret; 336 + } 337 + 338 + static unsigned long tmc_reset_etf_buffer(struct coresight_device *csdev, 339 + struct perf_output_handle *handle, 340 + void *sink_config, bool *lost) 341 + { 342 + long size = 0; 
343 + struct cs_buffers *buf = sink_config; 344 + 345 + if (buf) { 346 + /* 347 + * In snapshot mode ->data_size holds the new address of the 348 + * ring buffer's head. The size itself is the whole address 349 + * range since we want the latest information. 350 + */ 351 + if (buf->snapshot) 352 + handle->head = local_xchg(&buf->data_size, 353 + buf->nr_pages << PAGE_SHIFT); 354 + /* 355 + * Tell the tracer PMU how much we got in this run and if 356 + * something went wrong along the way. Nobody else can use 357 + * this cs_buffers instance until we are done. As such 358 + * resetting parameters here and squaring off with the ring 359 + * buffer API in the tracer PMU is fine. 360 + */ 361 + *lost = !!local_xchg(&buf->lost, 0); 362 + size = local_xchg(&buf->data_size, 0); 363 + } 364 + 365 + return size; 366 + } 367 + 368 + static void tmc_update_etf_buffer(struct coresight_device *csdev, 369 + struct perf_output_handle *handle, 370 + void *sink_config) 371 + { 372 + int i, cur; 373 + u32 *buf_ptr; 374 + u32 read_ptr, write_ptr; 375 + u32 status, to_read; 376 + unsigned long offset; 377 + struct cs_buffers *buf = sink_config; 378 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 379 + 380 + if (!buf) 381 + return; 382 + 383 + /* This shouldn't happen */ 384 + if (WARN_ON_ONCE(local_read(&drvdata->mode) != CS_MODE_PERF)) 385 + return; 386 + 387 + CS_UNLOCK(drvdata->base); 388 + 389 + tmc_flush_and_stop(drvdata); 390 + 391 + read_ptr = readl_relaxed(drvdata->base + TMC_RRP); 392 + write_ptr = readl_relaxed(drvdata->base + TMC_RWP); 393 + 394 + /* 395 + * Get a hold of the status register and see if a wrap around 396 + * has occurred. If so adjust things accordingly. 
397 + */ 398 + status = readl_relaxed(drvdata->base + TMC_STS); 399 + if (status & TMC_STS_FULL) { 400 + local_inc(&buf->lost); 401 + to_read = drvdata->size; 402 + } else { 403 + to_read = CIRC_CNT(write_ptr, read_ptr, drvdata->size); 404 + } 405 + 406 + /* 407 + * The TMC RAM buffer may be bigger than the space available in the 408 + * perf ring buffer (handle->size). If so advance the RRP so that we 409 + * get the latest trace data. 410 + */ 411 + if (to_read > handle->size) { 412 + u32 mask = 0; 413 + 414 + /* 415 + * The value written to RRP must be byte-address aligned to 416 + * the width of the trace memory databus _and_ to a frame 417 + * boundary (16 byte), whichever is the biggest. For example, 418 + * for 32-bit, 64-bit and 128-bit wide trace memory, the four 419 + * LSBs must be 0s. For 256-bit wide trace memory, the five 420 + * LSBs must be 0s. 421 + */ 422 + switch (drvdata->memwidth) { 423 + case TMC_MEM_INTF_WIDTH_32BITS: 424 + case TMC_MEM_INTF_WIDTH_64BITS: 425 + case TMC_MEM_INTF_WIDTH_128BITS: 426 + mask = GENMASK(31, 5); 427 + break; 428 + case TMC_MEM_INTF_WIDTH_256BITS: 429 + mask = GENMASK(31, 6); 430 + break; 431 + } 432 + 433 + /* 434 + * Make sure the new size is aligned in accordance with the 435 + * requirement explained above. 
436 + */ 437 + to_read = handle->size & mask; 438 + /* Move the RAM read pointer up */ 439 + read_ptr = (write_ptr + drvdata->size) - to_read; 440 + /* Make sure we are still within our limits */ 441 + if (read_ptr > (drvdata->size - 1)) 442 + read_ptr -= drvdata->size; 443 + /* Tell the HW */ 444 + writel_relaxed(read_ptr, drvdata->base + TMC_RRP); 445 + local_inc(&buf->lost); 446 + } 447 + 448 + cur = buf->cur; 449 + offset = buf->offset; 450 + 451 + /* for every byte to read */ 452 + for (i = 0; i < to_read; i += 4) { 453 + buf_ptr = buf->data_pages[cur] + offset; 454 + *buf_ptr = readl_relaxed(drvdata->base + TMC_RRD); 455 + 456 + offset += 4; 457 + if (offset >= PAGE_SIZE) { 458 + offset = 0; 459 + cur++; 460 + /* wrap around at the end of the buffer */ 461 + cur &= buf->nr_pages - 1; 462 + } 463 + } 464 + 465 + /* 466 + * In snapshot mode all we have to do is communicate to 467 + * perf_aux_output_end() the address of the current head. In full 468 + * trace mode the same function expects a size to move rb->aux_head 469 + * forward. 
470 + */ 471 + if (buf->snapshot) 472 + local_set(&buf->data_size, (cur * PAGE_SIZE) + offset); 473 + else 474 + local_add(to_read, &buf->data_size); 475 + 476 + CS_LOCK(drvdata->base); 477 + } 478 + 479 + static const struct coresight_ops_sink tmc_etf_sink_ops = { 480 + .enable = tmc_enable_etf_sink, 481 + .disable = tmc_disable_etf_sink, 482 + .alloc_buffer = tmc_alloc_etf_buffer, 483 + .free_buffer = tmc_free_etf_buffer, 484 + .set_buffer = tmc_set_etf_buffer, 485 + .reset_buffer = tmc_reset_etf_buffer, 486 + .update_buffer = tmc_update_etf_buffer, 487 + }; 488 + 489 + static const struct coresight_ops_link tmc_etf_link_ops = { 490 + .enable = tmc_enable_etf_link, 491 + .disable = tmc_disable_etf_link, 492 + }; 493 + 494 + const struct coresight_ops tmc_etb_cs_ops = { 495 + .sink_ops = &tmc_etf_sink_ops, 496 + }; 497 + 498 + const struct coresight_ops tmc_etf_cs_ops = { 499 + .sink_ops = &tmc_etf_sink_ops, 500 + .link_ops = &tmc_etf_link_ops, 501 + }; 502 + 503 + int tmc_read_prepare_etb(struct tmc_drvdata *drvdata) 504 + { 505 + long val; 506 + enum tmc_mode mode; 507 + int ret = 0; 508 + unsigned long flags; 509 + 510 + /* config types are set at boot time and never change */ 511 + if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB && 512 + drvdata->config_type != TMC_CONFIG_TYPE_ETF)) 513 + return -EINVAL; 514 + 515 + spin_lock_irqsave(&drvdata->spinlock, flags); 516 + 517 + if (drvdata->reading) { 518 + ret = -EBUSY; 519 + goto out; 520 + } 521 + 522 + /* There is no point in reading a TMC in HW FIFO mode */ 523 + mode = readl_relaxed(drvdata->base + TMC_MODE); 524 + if (mode != TMC_MODE_CIRCULAR_BUFFER) { 525 + ret = -EINVAL; 526 + goto out; 527 + } 528 + 529 + val = local_read(&drvdata->mode); 530 + /* Don't interfere if operated from Perf */ 531 + if (val == CS_MODE_PERF) { 532 + ret = -EINVAL; 533 + goto out; 534 + } 535 + 536 + /* If drvdata::buf is NULL the trace data has been read already */ 537 + if (drvdata->buf == NULL) { 538 + ret =
-EINVAL; 539 + goto out; 540 + } 541 + 542 + /* Disable the TMC if need be */ 543 + if (val == CS_MODE_SYSFS) 544 + tmc_etb_disable_hw(drvdata); 545 + 546 + drvdata->reading = true; 547 + out: 548 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 549 + 550 + return ret; 551 + } 552 + 553 + int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata) 554 + { 555 + char *buf = NULL; 556 + enum tmc_mode mode; 557 + unsigned long flags; 558 + 559 + /* config types are set at boot time and never change */ 560 + if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB && 561 + drvdata->config_type != TMC_CONFIG_TYPE_ETF)) 562 + return -EINVAL; 563 + 564 + spin_lock_irqsave(&drvdata->spinlock, flags); 565 + 566 + /* There is no point in reading a TMC in HW FIFO mode */ 567 + mode = readl_relaxed(drvdata->base + TMC_MODE); 568 + if (mode != TMC_MODE_CIRCULAR_BUFFER) { 569 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 570 + return -EINVAL; 571 + } 572 + 573 + /* Re-enable the TMC if need be */ 574 + if (local_read(&drvdata->mode) == CS_MODE_SYSFS) { 575 + /* 576 + * The trace run will continue with the same allocated trace 577 + * buffer. As such zero-out the buffer so that we don't end 578 + * up with stale data. 579 + * 580 + * Since the tracer is still enabled drvdata::buf 581 + * can't be NULL. 582 + */ 583 + memset(drvdata->buf, 0, drvdata->size); 584 + tmc_etb_enable_hw(drvdata); 585 + } else { 586 + /* 587 + * The ETB/ETF is not tracing and the buffer was just read. 588 + * As such prepare to free the trace buffer. 589 + */ 590 + buf = drvdata->buf; 591 + drvdata->buf = NULL; 592 + } 593 + 594 + drvdata->reading = false; 595 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 596 + 597 + /* 598 + * Free allocated memory outside of the spinlock. There is no need 599 + * to assert the validity of 'buf' since calling kfree(NULL) is safe. 600 + */ 601 + kfree(buf); 602 + 603 + return 0; 604 + }
+329
drivers/hwtracing/coresight/coresight-tmc-etr.c
··· 1 + /* 2 + * Copyright(C) 2016 Linaro Limited. All rights reserved. 3 + * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms of the GNU General Public License version 2 as published by 7 + * the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + * You should have received a copy of the GNU General Public License along with 15 + * this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #include <linux/coresight.h> 19 + #include <linux/dma-mapping.h> 20 + #include "coresight-priv.h" 21 + #include "coresight-tmc.h" 22 + 23 + void tmc_etr_enable_hw(struct tmc_drvdata *drvdata) 24 + { 25 + u32 axictl; 26 + 27 + /* Zero out the memory to help with debug */ 28 + memset(drvdata->vaddr, 0, drvdata->size); 29 + 30 + CS_UNLOCK(drvdata->base); 31 + 32 + /* Wait for TMCSReady bit to be set */ 33 + tmc_wait_for_tmcready(drvdata); 34 + 35 + writel_relaxed(drvdata->size / 4, drvdata->base + TMC_RSZ); 36 + writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE); 37 + 38 + axictl = readl_relaxed(drvdata->base + TMC_AXICTL); 39 + axictl |= TMC_AXICTL_WR_BURST_16; 40 + writel_relaxed(axictl, drvdata->base + TMC_AXICTL); 41 + axictl &= ~TMC_AXICTL_SCT_GAT_MODE; 42 + writel_relaxed(axictl, drvdata->base + TMC_AXICTL); 43 + axictl = (axictl & 44 + ~(TMC_AXICTL_PROT_CTL_B0 | TMC_AXICTL_PROT_CTL_B1)) | 45 + TMC_AXICTL_PROT_CTL_B1; 46 + writel_relaxed(axictl, drvdata->base + TMC_AXICTL); 47 + 48 + writel_relaxed(drvdata->paddr, drvdata->base + TMC_DBALO); 49 + writel_relaxed(0x0, drvdata->base + TMC_DBAHI); 50 + writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI | 51 + TMC_FFCR_FON_FLIN | 
TMC_FFCR_FON_TRIG_EVT | 52 + TMC_FFCR_TRIGON_TRIGIN, 53 + drvdata->base + TMC_FFCR); 54 + writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG); 55 + tmc_enable_hw(drvdata); 56 + 57 + CS_LOCK(drvdata->base); 58 + } 59 + 60 + static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata) 61 + { 62 + u32 rwp, val; 63 + 64 + rwp = readl_relaxed(drvdata->base + TMC_RWP); 65 + val = readl_relaxed(drvdata->base + TMC_STS); 66 + 67 + /* How much memory do we still have */ 68 + if (val & BIT(0)) 69 + drvdata->buf = drvdata->vaddr + rwp - drvdata->paddr; 70 + else 71 + drvdata->buf = drvdata->vaddr; 72 + } 73 + 74 + static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata) 75 + { 76 + CS_UNLOCK(drvdata->base); 77 + 78 + tmc_flush_and_stop(drvdata); 79 + /* 80 + * When operating in sysFS mode the content of the buffer needs to be 81 + * read before the TMC is disabled. 82 + */ 83 + if (local_read(&drvdata->mode) == CS_MODE_SYSFS) 84 + tmc_etr_dump_hw(drvdata); 85 + tmc_disable_hw(drvdata); 86 + 87 + CS_LOCK(drvdata->base); 88 + } 89 + 90 + static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev, u32 mode) 91 + { 92 + int ret = 0; 93 + bool used = false; 94 + long val; 95 + unsigned long flags; 96 + void __iomem *vaddr = NULL; 97 + dma_addr_t paddr; 98 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 99 + 100 + /* This shouldn't be happening */ 101 + if (WARN_ON(mode != CS_MODE_SYSFS)) 102 + return -EINVAL; 103 + 104 + /* 105 + * If we don't have a buffer release the lock and allocate memory. 106 + * Otherwise keep the lock and move along. 107 + */ 108 + spin_lock_irqsave(&drvdata->spinlock, flags); 109 + if (!drvdata->vaddr) { 110 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 111 + 112 + /* 113 + * Contiguous memory can't be allocated while a spinlock is 114 + * held. As such allocate memory here and free it if a buffer 115 + * has already been allocated (from a previous session). 
116 + */ 117 + vaddr = dma_alloc_coherent(drvdata->dev, drvdata->size, 118 + &paddr, GFP_KERNEL); 119 + if (!vaddr) 120 + return -ENOMEM; 121 + 122 + /* Let's try again */ 123 + spin_lock_irqsave(&drvdata->spinlock, flags); 124 + } 125 + 126 + if (drvdata->reading) { 127 + ret = -EBUSY; 128 + goto out; 129 + } 130 + 131 + val = local_xchg(&drvdata->mode, mode); 132 + /* 133 + * In sysFS mode we can have multiple writers per sink. Since this 134 + * sink is already enabled no memory is needed and the HW need not be 135 + * touched. 136 + */ 137 + if (val == CS_MODE_SYSFS) 138 + goto out; 139 + 140 + /* 141 + * If drvdata::buf == NULL, use the memory allocated above. 142 + * Otherwise a buffer still exists from a previous session, so 143 + * simply use that. 144 + */ 145 + if (drvdata->buf == NULL) { 146 + used = true; 147 + drvdata->vaddr = vaddr; 148 + drvdata->paddr = paddr; 149 + drvdata->buf = drvdata->vaddr; 150 + } 151 + 152 + memset(drvdata->vaddr, 0, drvdata->size); 153 + 154 + tmc_etr_enable_hw(drvdata); 155 + out: 156 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 157 + 158 + /* Free memory outside the spinlock if need be */ 159 + if (!used && vaddr) 160 + dma_free_coherent(drvdata->dev, drvdata->size, vaddr, paddr); 161 + 162 + if (!ret) 163 + dev_info(drvdata->dev, "TMC-ETR enabled\n"); 164 + 165 + return ret; 166 + } 167 + 168 + static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, u32 mode) 169 + { 170 + int ret = 0; 171 + long val; 172 + unsigned long flags; 173 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 174 + 175 + /* This shouldn't be happening */ 176 + if (WARN_ON(mode != CS_MODE_PERF)) 177 + return -EINVAL; 178 + 179 + spin_lock_irqsave(&drvdata->spinlock, flags); 180 + if (drvdata->reading) { 181 + ret = -EINVAL; 182 + goto out; 183 + } 184 + 185 + val = local_xchg(&drvdata->mode, mode); 186 + /* 187 + * In Perf mode there can be only one writer per sink. 
There 188 + * is also no need to continue if the ETR is already operated 189 + * from sysFS. 190 + */ 191 + if (val != CS_MODE_DISABLED) { 192 + ret = -EINVAL; 193 + goto out; 194 + } 195 + 196 + tmc_etr_enable_hw(drvdata); 197 + out: 198 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 199 + 200 + return ret; 201 + } 202 + 203 + static int tmc_enable_etr_sink(struct coresight_device *csdev, u32 mode) 204 + { 205 + switch (mode) { 206 + case CS_MODE_SYSFS: 207 + return tmc_enable_etr_sink_sysfs(csdev, mode); 208 + case CS_MODE_PERF: 209 + return tmc_enable_etr_sink_perf(csdev, mode); 210 + } 211 + 212 + /* We shouldn't be here */ 213 + return -EINVAL; 214 + } 215 + 216 + static void tmc_disable_etr_sink(struct coresight_device *csdev) 217 + { 218 + long val; 219 + unsigned long flags; 220 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 221 + 222 + spin_lock_irqsave(&drvdata->spinlock, flags); 223 + if (drvdata->reading) { 224 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 225 + return; 226 + } 227 + 228 + val = local_xchg(&drvdata->mode, CS_MODE_DISABLED); 229 + /* Disable the TMC only if it needs to */ 230 + if (val != CS_MODE_DISABLED) 231 + tmc_etr_disable_hw(drvdata); 232 + 233 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 234 + 235 + dev_info(drvdata->dev, "TMC-ETR disabled\n"); 236 + } 237 + 238 + static const struct coresight_ops_sink tmc_etr_sink_ops = { 239 + .enable = tmc_enable_etr_sink, 240 + .disable = tmc_disable_etr_sink, 241 + }; 242 + 243 + const struct coresight_ops tmc_etr_cs_ops = { 244 + .sink_ops = &tmc_etr_sink_ops, 245 + }; 246 + 247 + int tmc_read_prepare_etr(struct tmc_drvdata *drvdata) 248 + { 249 + int ret = 0; 250 + long val; 251 + unsigned long flags; 252 + 253 + /* config types are set at boot time and never change */ 254 + if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR)) 255 + return -EINVAL; 256 + 257 + spin_lock_irqsave(&drvdata->spinlock, flags); 258 + if (drvdata->reading) { 259 +
ret = -EBUSY; 260 + goto out; 261 + } 262 + 263 + val = local_read(&drvdata->mode); 264 + /* Don't interfere if operated from Perf */ 265 + if (val == CS_MODE_PERF) { 266 + ret = -EINVAL; 267 + goto out; 268 + } 269 + 270 + /* If drvdata::buf is NULL the trace data has been read already */ 271 + if (drvdata->buf == NULL) { 272 + ret = -EINVAL; 273 + goto out; 274 + } 275 + 276 + /* Disable the TMC if need be */ 277 + if (val == CS_MODE_SYSFS) 278 + tmc_etr_disable_hw(drvdata); 279 + 280 + drvdata->reading = true; 281 + out: 282 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 283 + 284 + return ret; 285 + } 286 + 287 + int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata) 288 + { 289 + unsigned long flags; 290 + dma_addr_t paddr; 291 + void __iomem *vaddr = NULL; 292 + 293 + /* config types are set at boot time and never change */ 294 + if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR)) 295 + return -EINVAL; 296 + 297 + spin_lock_irqsave(&drvdata->spinlock, flags); 298 + 299 + /* Re-enable the TMC if need be */ 300 + if (local_read(&drvdata->mode) == CS_MODE_SYSFS) { 301 + /* 302 + * The trace run will continue with the same allocated trace 303 + * buffer. As such zero-out the buffer so that we don't end 304 + * up with stale data. 305 + * 306 + * Since the tracer is still enabled drvdata::buf 307 + * can't be NULL. 308 + */ 309 + memset(drvdata->buf, 0, drvdata->size); 310 + tmc_etr_enable_hw(drvdata); 311 + } else { 312 + /* 313 + * The ETR is not tracing and the buffer was just read. 314 + * As such prepare to free the trace buffer. 315 + */ 316 + vaddr = drvdata->vaddr; 317 + paddr = drvdata->paddr; 318 + drvdata->buf = NULL; 319 + } 320 + 321 + drvdata->reading = false; 322 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 323 + 324 + /* Free allocated memory outside of the spinlock */ 325 + if (vaddr) 326 + dma_free_coherent(drvdata->dev, drvdata->size, vaddr, paddr); 327 + 328 + return 0; 329 + }
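tmc_etr_dump_hw() above turns the hardware write pointer into a CPU-visible read position: RWP is a bus address inside [paddr, paddr + size), so when the STS full bit (bit 0) reports that the RAM has wrapped, the oldest data sits at the RWP offset; otherwise the data starts at the buffer base. A stand-alone sketch of that arithmetic (function name and integer types are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define STS_FULL (1u << 0)	/* TMC_STS bit 0: trace RAM has wrapped */

/* Models the pointer math in tmc_etr_dump_hw(): apply the bus-address
 * offset (rwp - paddr) to the CPU mapping vaddr when the buffer has
 * wrapped, otherwise read from the start of the buffer. */
static uintptr_t etr_read_pos(uintptr_t vaddr, uint64_t paddr,
			      uint64_t rwp, uint32_t sts)
{
	if (sts & STS_FULL)
		return vaddr + (uintptr_t)(rwp - paddr);	/* oldest data */
	return vaddr;	/* not wrapped: data starts at the base */
}
```

The same offset-translation idea is why the driver keeps both vaddr and paddr for the dma_alloc_coherent() region.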
+117 -489
drivers/hwtracing/coresight/coresight-tmc.c
··· 30 30 #include <linux/amba/bus.h> 31 31 32 32 #include "coresight-priv.h" 33 + #include "coresight-tmc.h" 33 34 34 - #define TMC_RSZ 0x004 35 - #define TMC_STS 0x00c 36 - #define TMC_RRD 0x010 37 - #define TMC_RRP 0x014 38 - #define TMC_RWP 0x018 39 - #define TMC_TRG 0x01c 40 - #define TMC_CTL 0x020 41 - #define TMC_RWD 0x024 42 - #define TMC_MODE 0x028 43 - #define TMC_LBUFLEVEL 0x02c 44 - #define TMC_CBUFLEVEL 0x030 45 - #define TMC_BUFWM 0x034 46 - #define TMC_RRPHI 0x038 47 - #define TMC_RWPHI 0x03c 48 - #define TMC_AXICTL 0x110 49 - #define TMC_DBALO 0x118 50 - #define TMC_DBAHI 0x11c 51 - #define TMC_FFSR 0x300 52 - #define TMC_FFCR 0x304 53 - #define TMC_PSCR 0x308 54 - #define TMC_ITMISCOP0 0xee0 55 - #define TMC_ITTRFLIN 0xee8 56 - #define TMC_ITATBDATA0 0xeec 57 - #define TMC_ITATBCTR2 0xef0 58 - #define TMC_ITATBCTR1 0xef4 59 - #define TMC_ITATBCTR0 0xef8 60 - 61 - /* register description */ 62 - /* TMC_CTL - 0x020 */ 63 - #define TMC_CTL_CAPT_EN BIT(0) 64 - /* TMC_STS - 0x00C */ 65 - #define TMC_STS_TRIGGERED BIT(1) 66 - /* TMC_AXICTL - 0x110 */ 67 - #define TMC_AXICTL_PROT_CTL_B0 BIT(0) 68 - #define TMC_AXICTL_PROT_CTL_B1 BIT(1) 69 - #define TMC_AXICTL_SCT_GAT_MODE BIT(7) 70 - #define TMC_AXICTL_WR_BURST_LEN 0xF00 71 - /* TMC_FFCR - 0x304 */ 72 - #define TMC_FFCR_EN_FMT BIT(0) 73 - #define TMC_FFCR_EN_TI BIT(1) 74 - #define TMC_FFCR_FON_FLIN BIT(4) 75 - #define TMC_FFCR_FON_TRIG_EVT BIT(5) 76 - #define TMC_FFCR_FLUSHMAN BIT(6) 77 - #define TMC_FFCR_TRIGON_TRIGIN BIT(8) 78 - #define TMC_FFCR_STOP_ON_FLUSH BIT(12) 79 - 80 - #define TMC_STS_TRIGGERED_BIT 2 81 - #define TMC_FFCR_FLUSHMAN_BIT 6 82 - 83 - enum tmc_config_type { 84 - TMC_CONFIG_TYPE_ETB, 85 - TMC_CONFIG_TYPE_ETR, 86 - TMC_CONFIG_TYPE_ETF, 87 - }; 88 - 89 - enum tmc_mode { 90 - TMC_MODE_CIRCULAR_BUFFER, 91 - TMC_MODE_SOFTWARE_FIFO, 92 - TMC_MODE_HARDWARE_FIFO, 93 - }; 94 - 95 - enum tmc_mem_intf_width { 96 - TMC_MEM_INTF_WIDTH_32BITS = 0x2, 97 - TMC_MEM_INTF_WIDTH_64BITS = 0x3, 98 - 
TMC_MEM_INTF_WIDTH_128BITS = 0x4, 99 - TMC_MEM_INTF_WIDTH_256BITS = 0x5, 100 - }; 101 - 102 - /** 103 - * struct tmc_drvdata - specifics associated to an TMC component 104 - * @base: memory mapped base address for this component. 105 - * @dev: the device entity associated to this component. 106 - * @csdev: component vitals needed by the framework. 107 - * @miscdev: specifics to handle "/dev/xyz.tmc" entry. 108 - * @spinlock: only one at a time pls. 109 - * @read_count: manages preparation of buffer for reading. 110 - * @buf: area of memory where trace data get sent. 111 - * @paddr: DMA start location in RAM. 112 - * @vaddr: virtual representation of @paddr. 113 - * @size: @buf size. 114 - * @enable: this TMC is being used. 115 - * @config_type: TMC variant, must be of type @tmc_config_type. 116 - * @trigger_cntr: amount of words to store after a trigger. 117 - */ 118 - struct tmc_drvdata { 119 - void __iomem *base; 120 - struct device *dev; 121 - struct coresight_device *csdev; 122 - struct miscdevice miscdev; 123 - spinlock_t spinlock; 124 - int read_count; 125 - bool reading; 126 - char *buf; 127 - dma_addr_t paddr; 128 - void *vaddr; 129 - u32 size; 130 - bool enable; 131 - enum tmc_config_type config_type; 132 - u32 trigger_cntr; 133 - }; 134 - 135 - static void tmc_wait_for_ready(struct tmc_drvdata *drvdata) 35 + void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata) 136 36 { 137 37 /* Ensure formatter, unformatter and hardware fifo are empty */ 138 38 if (coresight_timeout(drvdata->base, 139 - TMC_STS, TMC_STS_TRIGGERED_BIT, 1)) { 39 + TMC_STS, TMC_STS_TMCREADY_BIT, 1)) { 140 40 dev_err(drvdata->dev, 141 41 "timeout observed when probing at offset %#x\n", 142 42 TMC_STS); 143 43 } 144 44 } 145 45 146 - static void tmc_flush_and_stop(struct tmc_drvdata *drvdata) 46 + void tmc_flush_and_stop(struct tmc_drvdata *drvdata) 147 47 { 148 48 u32 ffcr; 149 49 150 50 ffcr = readl_relaxed(drvdata->base + TMC_FFCR); 151 51 ffcr |= TMC_FFCR_STOP_ON_FLUSH; 152 52 
writel_relaxed(ffcr, drvdata->base + TMC_FFCR); 153 - ffcr |= TMC_FFCR_FLUSHMAN; 53 + ffcr |= BIT(TMC_FFCR_FLUSHMAN_BIT); 154 54 writel_relaxed(ffcr, drvdata->base + TMC_FFCR); 155 55 /* Ensure flush completes */ 156 56 if (coresight_timeout(drvdata->base, ··· 60 160 TMC_FFCR); 61 161 } 62 162 63 - tmc_wait_for_ready(drvdata); 163 + tmc_wait_for_tmcready(drvdata); 64 164 } 65 165 66 - static void tmc_enable_hw(struct tmc_drvdata *drvdata) 166 + void tmc_enable_hw(struct tmc_drvdata *drvdata) 67 167 { 68 168 writel_relaxed(TMC_CTL_CAPT_EN, drvdata->base + TMC_CTL); 69 169 } 70 170 71 - static void tmc_disable_hw(struct tmc_drvdata *drvdata) 171 + void tmc_disable_hw(struct tmc_drvdata *drvdata) 72 172 { 73 173 writel_relaxed(0x0, drvdata->base + TMC_CTL); 74 174 } 75 175 76 - static void tmc_etb_enable_hw(struct tmc_drvdata *drvdata) 77 - { 78 - /* Zero out the memory to help with debug */ 79 - memset(drvdata->buf, 0, drvdata->size); 80 - 81 - CS_UNLOCK(drvdata->base); 82 - 83 - writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE); 84 - writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI | 85 - TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT | 86 - TMC_FFCR_TRIGON_TRIGIN, 87 - drvdata->base + TMC_FFCR); 88 - 89 - writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG); 90 - tmc_enable_hw(drvdata); 91 - 92 - CS_LOCK(drvdata->base); 93 - } 94 - 95 - static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata) 96 - { 97 - u32 axictl; 98 - 99 - /* Zero out the memory to help with debug */ 100 - memset(drvdata->vaddr, 0, drvdata->size); 101 - 102 - CS_UNLOCK(drvdata->base); 103 - 104 - writel_relaxed(drvdata->size / 4, drvdata->base + TMC_RSZ); 105 - writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE); 106 - 107 - axictl = readl_relaxed(drvdata->base + TMC_AXICTL); 108 - axictl |= TMC_AXICTL_WR_BURST_LEN; 109 - writel_relaxed(axictl, drvdata->base + TMC_AXICTL); 110 - axictl &= ~TMC_AXICTL_SCT_GAT_MODE; 111 - writel_relaxed(axictl, drvdata->base + 
TMC_AXICTL); 112 - axictl = (axictl & 113 - ~(TMC_AXICTL_PROT_CTL_B0 | TMC_AXICTL_PROT_CTL_B1)) | 114 - TMC_AXICTL_PROT_CTL_B1; 115 - writel_relaxed(axictl, drvdata->base + TMC_AXICTL); 116 - 117 - writel_relaxed(drvdata->paddr, drvdata->base + TMC_DBALO); 118 - writel_relaxed(0x0, drvdata->base + TMC_DBAHI); 119 - writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI | 120 - TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT | 121 - TMC_FFCR_TRIGON_TRIGIN, 122 - drvdata->base + TMC_FFCR); 123 - writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG); 124 - tmc_enable_hw(drvdata); 125 - 126 - CS_LOCK(drvdata->base); 127 - } 128 - 129 - static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata) 130 - { 131 - CS_UNLOCK(drvdata->base); 132 - 133 - writel_relaxed(TMC_MODE_HARDWARE_FIFO, drvdata->base + TMC_MODE); 134 - writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI, 135 - drvdata->base + TMC_FFCR); 136 - writel_relaxed(0x0, drvdata->base + TMC_BUFWM); 137 - tmc_enable_hw(drvdata); 138 - 139 - CS_LOCK(drvdata->base); 140 - } 141 - 142 - static int tmc_enable(struct tmc_drvdata *drvdata, enum tmc_mode mode) 143 - { 144 - unsigned long flags; 145 - 146 - spin_lock_irqsave(&drvdata->spinlock, flags); 147 - if (drvdata->reading) { 148 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 149 - return -EBUSY; 150 - } 151 - 152 - if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) { 153 - tmc_etb_enable_hw(drvdata); 154 - } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 155 - tmc_etr_enable_hw(drvdata); 156 - } else { 157 - if (mode == TMC_MODE_CIRCULAR_BUFFER) 158 - tmc_etb_enable_hw(drvdata); 159 - else 160 - tmc_etf_enable_hw(drvdata); 161 - } 162 - drvdata->enable = true; 163 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 164 - 165 - dev_info(drvdata->dev, "TMC enabled\n"); 166 - return 0; 167 - } 168 - 169 - static int tmc_enable_sink(struct coresight_device *csdev, u32 mode) 170 - { 171 - struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 172 - 173 - 
return tmc_enable(drvdata, TMC_MODE_CIRCULAR_BUFFER); 174 - } 175 - 176 - static int tmc_enable_link(struct coresight_device *csdev, int inport, 177 - int outport) 178 - { 179 - struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 180 - 181 - return tmc_enable(drvdata, TMC_MODE_HARDWARE_FIFO); 182 - } 183 - 184 - static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata) 185 - { 186 - enum tmc_mem_intf_width memwidth; 187 - u8 memwords; 188 - char *bufp; 189 - u32 read_data; 190 - int i; 191 - 192 - memwidth = BMVAL(readl_relaxed(drvdata->base + CORESIGHT_DEVID), 8, 10); 193 - if (memwidth == TMC_MEM_INTF_WIDTH_32BITS) 194 - memwords = 1; 195 - else if (memwidth == TMC_MEM_INTF_WIDTH_64BITS) 196 - memwords = 2; 197 - else if (memwidth == TMC_MEM_INTF_WIDTH_128BITS) 198 - memwords = 4; 199 - else 200 - memwords = 8; 201 - 202 - bufp = drvdata->buf; 203 - while (1) { 204 - for (i = 0; i < memwords; i++) { 205 - read_data = readl_relaxed(drvdata->base + TMC_RRD); 206 - if (read_data == 0xFFFFFFFF) 207 - return; 208 - memcpy(bufp, &read_data, 4); 209 - bufp += 4; 210 - } 211 - } 212 - } 213 - 214 - static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata) 215 - { 216 - CS_UNLOCK(drvdata->base); 217 - 218 - tmc_flush_and_stop(drvdata); 219 - tmc_etb_dump_hw(drvdata); 220 - tmc_disable_hw(drvdata); 221 - 222 - CS_LOCK(drvdata->base); 223 - } 224 - 225 - static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata) 226 - { 227 - u32 rwp, val; 228 - 229 - rwp = readl_relaxed(drvdata->base + TMC_RWP); 230 - val = readl_relaxed(drvdata->base + TMC_STS); 231 - 232 - /* How much memory do we still have */ 233 - if (val & BIT(0)) 234 - drvdata->buf = drvdata->vaddr + rwp - drvdata->paddr; 235 - else 236 - drvdata->buf = drvdata->vaddr; 237 - } 238 - 239 - static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata) 240 - { 241 - CS_UNLOCK(drvdata->base); 242 - 243 - tmc_flush_and_stop(drvdata); 244 - tmc_etr_dump_hw(drvdata); 245 - tmc_disable_hw(drvdata); 246 - 247 - 
CS_LOCK(drvdata->base); 248 - } 249 - 250 - static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata) 251 - { 252 - CS_UNLOCK(drvdata->base); 253 - 254 - tmc_flush_and_stop(drvdata); 255 - tmc_disable_hw(drvdata); 256 - 257 - CS_LOCK(drvdata->base); 258 - } 259 - 260 - static void tmc_disable(struct tmc_drvdata *drvdata, enum tmc_mode mode) 261 - { 262 - unsigned long flags; 263 - 264 - spin_lock_irqsave(&drvdata->spinlock, flags); 265 - if (drvdata->reading) 266 - goto out; 267 - 268 - if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) { 269 - tmc_etb_disable_hw(drvdata); 270 - } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 271 - tmc_etr_disable_hw(drvdata); 272 - } else { 273 - if (mode == TMC_MODE_CIRCULAR_BUFFER) 274 - tmc_etb_disable_hw(drvdata); 275 - else 276 - tmc_etf_disable_hw(drvdata); 277 - } 278 - out: 279 - drvdata->enable = false; 280 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 281 - 282 - dev_info(drvdata->dev, "TMC disabled\n"); 283 - } 284 - 285 - static void tmc_disable_sink(struct coresight_device *csdev) 286 - { 287 - struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 288 - 289 - tmc_disable(drvdata, TMC_MODE_CIRCULAR_BUFFER); 290 - } 291 - 292 - static void tmc_disable_link(struct coresight_device *csdev, int inport, 293 - int outport) 294 - { 295 - struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 296 - 297 - tmc_disable(drvdata, TMC_MODE_HARDWARE_FIFO); 298 - } 299 - 300 - static const struct coresight_ops_sink tmc_sink_ops = { 301 - .enable = tmc_enable_sink, 302 - .disable = tmc_disable_sink, 303 - }; 304 - 305 - static const struct coresight_ops_link tmc_link_ops = { 306 - .enable = tmc_enable_link, 307 - .disable = tmc_disable_link, 308 - }; 309 - 310 - static const struct coresight_ops tmc_etb_cs_ops = { 311 - .sink_ops = &tmc_sink_ops, 312 - }; 313 - 314 - static const struct coresight_ops tmc_etr_cs_ops = { 315 - .sink_ops = &tmc_sink_ops, 316 - }; 317 - 318 - static const struct 
coresight_ops tmc_etf_cs_ops = { 319 - .sink_ops = &tmc_sink_ops, 320 - .link_ops = &tmc_link_ops, 321 - }; 322 - 323 176 static int tmc_read_prepare(struct tmc_drvdata *drvdata) 324 177 { 325 - int ret; 326 - unsigned long flags; 327 - enum tmc_mode mode; 178 + int ret = 0; 328 179 329 - spin_lock_irqsave(&drvdata->spinlock, flags); 330 - if (!drvdata->enable) 331 - goto out; 332 - 333 - if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) { 334 - tmc_etb_disable_hw(drvdata); 335 - } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 336 - tmc_etr_disable_hw(drvdata); 337 - } else { 338 - mode = readl_relaxed(drvdata->base + TMC_MODE); 339 - if (mode == TMC_MODE_CIRCULAR_BUFFER) { 340 - tmc_etb_disable_hw(drvdata); 341 - } else { 342 - ret = -ENODEV; 343 - goto err; 344 - } 180 + switch (drvdata->config_type) { 181 + case TMC_CONFIG_TYPE_ETB: 182 + case TMC_CONFIG_TYPE_ETF: 183 + ret = tmc_read_prepare_etb(drvdata); 184 + break; 185 + case TMC_CONFIG_TYPE_ETR: 186 + ret = tmc_read_prepare_etr(drvdata); 187 + break; 188 + default: 189 + ret = -EINVAL; 345 190 } 346 - out: 347 - drvdata->reading = true; 348 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 349 191 350 - dev_info(drvdata->dev, "TMC read start\n"); 351 - return 0; 352 - err: 353 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 192 + if (!ret) 193 + dev_info(drvdata->dev, "TMC read start\n"); 194 + 354 195 return ret; 355 196 } 356 197 357 - static void tmc_read_unprepare(struct tmc_drvdata *drvdata) 198 + static int tmc_read_unprepare(struct tmc_drvdata *drvdata) 358 199 { 359 - unsigned long flags; 360 - enum tmc_mode mode; 200 + int ret = 0; 361 201 362 - spin_lock_irqsave(&drvdata->spinlock, flags); 363 - if (!drvdata->enable) 364 - goto out; 365 - 366 - if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) { 367 - tmc_etb_enable_hw(drvdata); 368 - } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 369 - tmc_etr_enable_hw(drvdata); 370 - } else { 371 - mode = readl_relaxed(drvdata->base + 
TMC_MODE); 372 - if (mode == TMC_MODE_CIRCULAR_BUFFER) 373 - tmc_etb_enable_hw(drvdata); 202 + switch (drvdata->config_type) { 203 + case TMC_CONFIG_TYPE_ETB: 204 + case TMC_CONFIG_TYPE_ETF: 205 + ret = tmc_read_unprepare_etb(drvdata); 206 + break; 207 + case TMC_CONFIG_TYPE_ETR: 208 + ret = tmc_read_unprepare_etr(drvdata); 209 + break; 210 + default: 211 + ret = -EINVAL; 374 212 } 375 - out: 376 - drvdata->reading = false; 377 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 378 213 379 - dev_info(drvdata->dev, "TMC read end\n"); 214 + if (!ret) 215 + dev_info(drvdata->dev, "TMC read end\n"); 216 + 217 + return ret; 380 218 } 381 219 382 220 static int tmc_open(struct inode *inode, struct file *file) 383 221 { 222 + int ret; 384 223 struct tmc_drvdata *drvdata = container_of(file->private_data, 385 224 struct tmc_drvdata, miscdev); 386 - int ret = 0; 387 - 388 - if (drvdata->read_count++) 389 - goto out; 390 225 391 226 ret = tmc_read_prepare(drvdata); 392 227 if (ret) 393 228 return ret; 394 - out: 229 + 395 230 nonseekable_open(inode, file); 396 231 397 232 dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__); ··· 166 531 167 532 static int tmc_release(struct inode *inode, struct file *file) 168 533 { 534 + int ret; 169 535 struct tmc_drvdata *drvdata = container_of(file->private_data, 170 536 struct tmc_drvdata, miscdev); 171 537 172 - if (--drvdata->read_count) { 173 - if (drvdata->read_count < 0) { 174 - dev_err(drvdata->dev, "mismatched close\n"); 175 - drvdata->read_count = 0; 176 - } 177 - goto out; 178 - } 538 + ret = tmc_read_unprepare(drvdata); 539 + if (ret) 540 + return ret; 179 541 180 - tmc_read_unprepare(drvdata); 181 - out: 182 542 dev_dbg(drvdata->dev, "%s: released\n", __func__); 183 543 return 0; 184 544 } ··· 186 556 .llseek = no_llseek, 187 557 }; 188 558 189 - static ssize_t status_show(struct device *dev, 190 - struct device_attribute *attr, char *buf) 559 + static enum tmc_mem_intf_width tmc_get_memwidth(u32 devid) 191 560 { 
192 - unsigned long flags; 193 - u32 tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg; 194 - u32 tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr; 195 - u32 devid; 196 - struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent); 561 + enum tmc_mem_intf_width memwidth; 197 562 198 - pm_runtime_get_sync(drvdata->dev); 199 - spin_lock_irqsave(&drvdata->spinlock, flags); 200 - CS_UNLOCK(drvdata->base); 563 + /* 564 + * Excerpt from the TRM: 565 + * 566 + * DEVID::MEMWIDTH[10:8] 567 + * 0x2 Memory interface databus is 32 bits wide. 568 + * 0x3 Memory interface databus is 64 bits wide. 569 + * 0x4 Memory interface databus is 128 bits wide. 570 + * 0x5 Memory interface databus is 256 bits wide. 571 + */ 572 + switch (BMVAL(devid, 8, 10)) { 573 + case 0x2: 574 + memwidth = TMC_MEM_INTF_WIDTH_32BITS; 575 + break; 576 + case 0x3: 577 + memwidth = TMC_MEM_INTF_WIDTH_64BITS; 578 + break; 579 + case 0x4: 580 + memwidth = TMC_MEM_INTF_WIDTH_128BITS; 581 + break; 582 + case 0x5: 583 + memwidth = TMC_MEM_INTF_WIDTH_256BITS; 584 + break; 585 + default: 586 + memwidth = 0; 587 + } 201 588 202 - tmc_rsz = readl_relaxed(drvdata->base + TMC_RSZ); 203 - tmc_sts = readl_relaxed(drvdata->base + TMC_STS); 204 - tmc_rrp = readl_relaxed(drvdata->base + TMC_RRP); 205 - tmc_rwp = readl_relaxed(drvdata->base + TMC_RWP); 206 - tmc_trg = readl_relaxed(drvdata->base + TMC_TRG); 207 - tmc_ctl = readl_relaxed(drvdata->base + TMC_CTL); 208 - tmc_ffsr = readl_relaxed(drvdata->base + TMC_FFSR); 209 - tmc_ffcr = readl_relaxed(drvdata->base + TMC_FFCR); 210 - tmc_mode = readl_relaxed(drvdata->base + TMC_MODE); 211 - tmc_pscr = readl_relaxed(drvdata->base + TMC_PSCR); 212 - devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID); 213 - 214 - CS_LOCK(drvdata->base); 215 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 216 - pm_runtime_put(drvdata->dev); 217 - 218 - return sprintf(buf, 219 - "Depth:\t\t0x%x\n" 220 - "Status:\t\t0x%x\n" 221 - "RAM read ptr:\t0x%x\n" 222 - "RAM wrt ptr:\t0x%x\n" 223 - "Trigger 
cnt:\t0x%x\n" 224 - "Control:\t0x%x\n" 225 - "Flush status:\t0x%x\n" 226 - "Flush ctrl:\t0x%x\n" 227 - "Mode:\t\t0x%x\n" 228 - "PSRC:\t\t0x%x\n" 229 - "DEVID:\t\t0x%x\n", 230 - tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg, 231 - tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr, devid); 232 - 233 - return -EINVAL; 589 + return memwidth; 234 590 } 235 - static DEVICE_ATTR_RO(status); 236 591 237 - static ssize_t trigger_cntr_show(struct device *dev, 238 - struct device_attribute *attr, char *buf) 592 + #define coresight_tmc_simple_func(name, offset) \ 593 + coresight_simple_func(struct tmc_drvdata, name, offset) 594 + 595 + coresight_tmc_simple_func(rsz, TMC_RSZ); 596 + coresight_tmc_simple_func(sts, TMC_STS); 597 + coresight_tmc_simple_func(rrp, TMC_RRP); 598 + coresight_tmc_simple_func(rwp, TMC_RWP); 599 + coresight_tmc_simple_func(trg, TMC_TRG); 600 + coresight_tmc_simple_func(ctl, TMC_CTL); 601 + coresight_tmc_simple_func(ffsr, TMC_FFSR); 602 + coresight_tmc_simple_func(ffcr, TMC_FFCR); 603 + coresight_tmc_simple_func(mode, TMC_MODE); 604 + coresight_tmc_simple_func(pscr, TMC_PSCR); 605 + coresight_tmc_simple_func(devid, CORESIGHT_DEVID); 606 + 607 + static struct attribute *coresight_tmc_mgmt_attrs[] = { 608 + &dev_attr_rsz.attr, 609 + &dev_attr_sts.attr, 610 + &dev_attr_rrp.attr, 611 + &dev_attr_rwp.attr, 612 + &dev_attr_trg.attr, 613 + &dev_attr_ctl.attr, 614 + &dev_attr_ffsr.attr, 615 + &dev_attr_ffcr.attr, 616 + &dev_attr_mode.attr, 617 + &dev_attr_pscr.attr, 618 + &dev_attr_devid.attr, 619 + NULL, 620 + }; 621 + 622 + ssize_t trigger_cntr_show(struct device *dev, 623 + struct device_attribute *attr, char *buf) 239 624 { 240 625 struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent); 241 626 unsigned long val = drvdata->trigger_cntr; ··· 275 630 } 276 631 static DEVICE_ATTR_RW(trigger_cntr); 277 632 278 - static struct attribute *coresight_etb_attrs[] = { 633 + static struct attribute *coresight_tmc_attrs[] = { 279 634 &dev_attr_trigger_cntr.attr, 280 - 
&dev_attr_status.attr, 281 635 NULL, 282 636 }; 283 - ATTRIBUTE_GROUPS(coresight_etb); 284 637 285 - static struct attribute *coresight_etr_attrs[] = { 286 - &dev_attr_trigger_cntr.attr, 287 - &dev_attr_status.attr, 288 - NULL, 638 + static const struct attribute_group coresight_tmc_group = { 639 + .attrs = coresight_tmc_attrs, 289 640 }; 290 - ATTRIBUTE_GROUPS(coresight_etr); 291 641 292 - static struct attribute *coresight_etf_attrs[] = { 293 - &dev_attr_trigger_cntr.attr, 294 - &dev_attr_status.attr, 642 + static const struct attribute_group coresight_tmc_mgmt_group = { 643 + .attrs = coresight_tmc_mgmt_attrs, 644 + .name = "mgmt", 645 + }; 646 + 647 + const struct attribute_group *coresight_tmc_groups[] = { 648 + &coresight_tmc_group, 649 + &coresight_tmc_mgmt_group, 295 650 NULL, 296 651 }; 297 - ATTRIBUTE_GROUPS(coresight_etf); 298 652 299 653 static int tmc_probe(struct amba_device *adev, const struct amba_id *id) 300 654 { ··· 332 688 333 689 devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID); 334 690 drvdata->config_type = BMVAL(devid, 6, 7); 691 + drvdata->memwidth = tmc_get_memwidth(devid); 335 692 336 693 if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 337 694 if (np) ··· 347 702 348 703 pm_runtime_put(&adev->dev); 349 704 350 - if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 351 - drvdata->vaddr = dma_alloc_coherent(dev, drvdata->size, 352 - &drvdata->paddr, GFP_KERNEL); 353 - if (!drvdata->vaddr) 354 - return -ENOMEM; 355 - 356 - memset(drvdata->vaddr, 0, drvdata->size); 357 - drvdata->buf = drvdata->vaddr; 358 - } else { 359 - drvdata->buf = devm_kzalloc(dev, drvdata->size, GFP_KERNEL); 360 - if (!drvdata->buf) 361 - return -ENOMEM; 362 - } 363 - 364 705 desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL); 365 706 if (!desc) { 366 707 ret = -ENOMEM; ··· 356 725 desc->pdata = pdata; 357 726 desc->dev = dev; 358 727 desc->subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER; 728 + desc->groups = coresight_tmc_groups; 359 729 360 730 if 
(drvdata->config_type == TMC_CONFIG_TYPE_ETB) { 361 731 desc->type = CORESIGHT_DEV_TYPE_SINK; 362 732 desc->ops = &tmc_etb_cs_ops; 363 - desc->groups = coresight_etb_groups; 364 733 } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 365 734 desc->type = CORESIGHT_DEV_TYPE_SINK; 366 735 desc->ops = &tmc_etr_cs_ops; 367 - desc->groups = coresight_etr_groups; 368 736 } else { 369 737 desc->type = CORESIGHT_DEV_TYPE_LINKSINK; 370 738 desc->subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_FIFO; 371 739 desc->ops = &tmc_etf_cs_ops; 372 - desc->groups = coresight_etf_groups; 373 740 } 374 741 375 742 drvdata->csdev = coresight_register(desc); ··· 383 754 if (ret) 384 755 goto err_misc_register; 385 756 386 - dev_info(dev, "TMC initialized\n"); 387 757 return 0; 388 758 389 759 err_misc_register:
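The new tmc_get_memwidth() above decodes DEVID::MEMWIDTH[10:8] using the encoding quoted from the TRM. A self-contained sketch of the same decode, returning the width in bits (the BMVAL macro reproduces the kernel's field extraction; the function name and bit-width return convention are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Same field extraction as the kernel's BMVAL(val, lsb, msb). */
#define BMVAL(val, lsb, msb) \
	(((val) >> (lsb)) & ((1u << ((msb) - (lsb) + 1)) - 1))

/* Models tmc_get_memwidth(): DEVID[10:8] encodes the memory interface
 * databus width; return it in bits, or 0 for a reserved encoding. */
static unsigned int tmc_memwidth_bits(uint32_t devid)
{
	switch (BMVAL(devid, 8, 10)) {
	case 0x2: return 32;
	case 0x3: return 64;
	case 0x4: return 128;
	case 0x5: return 256;
	default:  return 0;	/* reserved encoding */
	}
}
```

Keeping the decode in one switch (rather than the old if/else ladder in tmc_etb_dump_hw()) lets probe cache the width once in drvdata->memwidth, which is how the refactored driver uses it.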
+140
drivers/hwtracing/coresight/coresight-tmc.h
··· 1 + /* 2 + * Copyright(C) 2015 Linaro Limited. All rights reserved. 3 + * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms of the GNU General Public License version 2 as published by 7 + * the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + * You should have received a copy of the GNU General Public License along with 15 + * this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #ifndef _CORESIGHT_TMC_H 19 + #define _CORESIGHT_TMC_H 20 + 21 + #include <linux/miscdevice.h> 22 + 23 + #define TMC_RSZ 0x004 24 + #define TMC_STS 0x00c 25 + #define TMC_RRD 0x010 26 + #define TMC_RRP 0x014 27 + #define TMC_RWP 0x018 28 + #define TMC_TRG 0x01c 29 + #define TMC_CTL 0x020 30 + #define TMC_RWD 0x024 31 + #define TMC_MODE 0x028 32 + #define TMC_LBUFLEVEL 0x02c 33 + #define TMC_CBUFLEVEL 0x030 34 + #define TMC_BUFWM 0x034 35 + #define TMC_RRPHI 0x038 36 + #define TMC_RWPHI 0x03c 37 + #define TMC_AXICTL 0x110 38 + #define TMC_DBALO 0x118 39 + #define TMC_DBAHI 0x11c 40 + #define TMC_FFSR 0x300 41 + #define TMC_FFCR 0x304 42 + #define TMC_PSCR 0x308 43 + #define TMC_ITMISCOP0 0xee0 44 + #define TMC_ITTRFLIN 0xee8 45 + #define TMC_ITATBDATA0 0xeec 46 + #define TMC_ITATBCTR2 0xef0 47 + #define TMC_ITATBCTR1 0xef4 48 + #define TMC_ITATBCTR0 0xef8 49 + 50 + /* register description */ 51 + /* TMC_CTL - 0x020 */ 52 + #define TMC_CTL_CAPT_EN BIT(0) 53 + /* TMC_STS - 0x00C */ 54 + #define TMC_STS_TMCREADY_BIT 2 55 + #define TMC_STS_FULL BIT(0) 56 + #define TMC_STS_TRIGGERED BIT(1) 57 + /* TMC_AXICTL - 0x110 */ 58 + #define TMC_AXICTL_PROT_CTL_B0 BIT(0) 59 + #define TMC_AXICTL_PROT_CTL_B1 BIT(1) 60 + 
#define TMC_AXICTL_SCT_GAT_MODE BIT(7) 61 + #define TMC_AXICTL_WR_BURST_16 0xF00 62 + /* TMC_FFCR - 0x304 */ 63 + #define TMC_FFCR_FLUSHMAN_BIT 6 64 + #define TMC_FFCR_EN_FMT BIT(0) 65 + #define TMC_FFCR_EN_TI BIT(1) 66 + #define TMC_FFCR_FON_FLIN BIT(4) 67 + #define TMC_FFCR_FON_TRIG_EVT BIT(5) 68 + #define TMC_FFCR_TRIGON_TRIGIN BIT(8) 69 + #define TMC_FFCR_STOP_ON_FLUSH BIT(12) 70 + 71 + 72 + enum tmc_config_type { 73 + TMC_CONFIG_TYPE_ETB, 74 + TMC_CONFIG_TYPE_ETR, 75 + TMC_CONFIG_TYPE_ETF, 76 + }; 77 + 78 + enum tmc_mode { 79 + TMC_MODE_CIRCULAR_BUFFER, 80 + TMC_MODE_SOFTWARE_FIFO, 81 + TMC_MODE_HARDWARE_FIFO, 82 + }; 83 + 84 + enum tmc_mem_intf_width { 85 + TMC_MEM_INTF_WIDTH_32BITS = 1, 86 + TMC_MEM_INTF_WIDTH_64BITS = 2, 87 + TMC_MEM_INTF_WIDTH_128BITS = 4, 88 + TMC_MEM_INTF_WIDTH_256BITS = 8, 89 + }; 90 + 91 + /** 92 + * struct tmc_drvdata - specifics associated to an TMC component 93 + * @base: memory mapped base address for this component. 94 + * @dev: the device entity associated to this component. 95 + * @csdev: component vitals needed by the framework. 96 + * @miscdev: specifics to handle "/dev/xyz.tmc" entry. 97 + * @spinlock: only one at a time pls. 98 + * @buf: area of memory where trace data get sent. 99 + * @paddr: DMA start location in RAM. 100 + * @vaddr: virtual representation of @paddr. 101 + * @size: @buf size. 102 + * @mode: how this TMC is being used. 103 + * @config_type: TMC variant, must be of type @tmc_config_type. 104 + * @memwidth: width of the memory interface databus, in bytes. 105 + * @trigger_cntr: amount of words to store after a trigger. 
106 + */ 107 + struct tmc_drvdata { 108 + void __iomem *base; 109 + struct device *dev; 110 + struct coresight_device *csdev; 111 + struct miscdevice miscdev; 112 + spinlock_t spinlock; 113 + bool reading; 114 + char *buf; 115 + dma_addr_t paddr; 116 + void __iomem *vaddr; 117 + u32 size; 118 + local_t mode; 119 + enum tmc_config_type config_type; 120 + enum tmc_mem_intf_width memwidth; 121 + u32 trigger_cntr; 122 + }; 123 + 124 + /* Generic functions */ 125 + void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata); 126 + void tmc_flush_and_stop(struct tmc_drvdata *drvdata); 127 + void tmc_enable_hw(struct tmc_drvdata *drvdata); 128 + void tmc_disable_hw(struct tmc_drvdata *drvdata); 129 + 130 + /* ETB/ETF functions */ 131 + int tmc_read_prepare_etb(struct tmc_drvdata *drvdata); 132 + int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata); 133 + extern const struct coresight_ops tmc_etb_cs_ops; 134 + extern const struct coresight_ops tmc_etf_cs_ops; 135 + 136 + /* ETR functions */ 137 + int tmc_read_prepare_etr(struct tmc_drvdata *drvdata); 138 + int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata); 139 + extern const struct coresight_ops tmc_etr_cs_ops; 140 + #endif
-1
drivers/hwtracing/coresight/coresight-tpiu.c
··· 167 167 if (IS_ERR(drvdata->csdev)) 168 168 return PTR_ERR(drvdata->csdev); 169 169 170 - dev_info(dev, "TPIU initialized\n"); 171 170 return 0; 172 171 } 173 172
+112 -30
drivers/hwtracing/coresight/coresight.c
··· 43 43 * When operating Coresight drivers from the sysFS interface, only a single 44 44 * path can exist from a tracer (associated to a CPU) to a sink. 45 45 */ 46 - static DEFINE_PER_CPU(struct list_head *, sysfs_path); 46 + static DEFINE_PER_CPU(struct list_head *, tracer_path); 47 + 48 + /* 49 + * As of this writing only a single STM can be found in CS topologies. Since 50 + * there is no way to know if we'll ever see more and what kind of 51 + * configuration they will enact, for the time being only define a single path 52 + * for STM. 53 + */ 54 + static struct list_head *stm_path; 47 55 48 56 static int coresight_id_match(struct device *dev, void *data) 49 57 { ··· 265 257 266 258 void coresight_disable_path(struct list_head *path) 267 259 { 260 + u32 type; 268 261 struct coresight_node *nd; 269 262 struct coresight_device *csdev, *parent, *child; 270 263 271 264 list_for_each_entry(nd, path, link) { 272 265 csdev = nd->csdev; 266 + type = csdev->type; 273 267 274 - switch (csdev->type) { 268 + /* 269 + * ETF devices are tricky... They can be a link or a sink, 270 + * depending on how they are configured. If an ETF has been 271 + * "activated" it will be configured as a sink, otherwise 272 + * go ahead with the link configuration. 273 + */ 274 + if (type == CORESIGHT_DEV_TYPE_LINKSINK) 275 + type = (csdev == coresight_get_sink(path)) ? 276 + CORESIGHT_DEV_TYPE_SINK : 277 + CORESIGHT_DEV_TYPE_LINK; 278 + 279 + switch (type) { 275 280 case CORESIGHT_DEV_TYPE_SINK: 276 - case CORESIGHT_DEV_TYPE_LINKSINK: 277 281 coresight_disable_sink(csdev); 278 282 break; 279 283 case CORESIGHT_DEV_TYPE_SOURCE: ··· 306 286 { 307 287 308 288 int ret = 0; 289 + u32 type; 309 290 struct coresight_node *nd; 310 291 struct coresight_device *csdev, *parent, *child; 311 292 312 293 list_for_each_entry_reverse(nd, path, link) { 313 294 csdev = nd->csdev; 295 + type = csdev->type; 314 296 315 - switch (csdev->type) { 297 + /* 298 + * ETF devices are tricky... 
They can be a link or a sink, 299 + * depending on how they are configured. If an ETF has been 300 + * "activated" it will be configured as a sink, otherwise 301 + * go ahead with the link configuration. 302 + */ 303 + if (type == CORESIGHT_DEV_TYPE_LINKSINK) 304 + type = (csdev == coresight_get_sink(path)) ? 305 + CORESIGHT_DEV_TYPE_SINK : 306 + CORESIGHT_DEV_TYPE_LINK; 307 + 308 + switch (type) { 316 309 case CORESIGHT_DEV_TYPE_SINK: 317 - case CORESIGHT_DEV_TYPE_LINKSINK: 318 310 ret = coresight_enable_sink(csdev, mode); 319 311 if (ret) 320 312 goto err; ··· 464 432 path = NULL; 465 433 } 466 434 435 + /** coresight_validate_source - make sure a source has the right credentials 436 + * @csdev: the device structure for a source. 437 + * @function: the function this was called from. 438 + * 439 + * Assumes the coresight_mutex is held. 440 + */ 441 + static int coresight_validate_source(struct coresight_device *csdev, 442 + const char *function) 443 + { 444 + u32 type, subtype; 445 + 446 + type = csdev->type; 447 + subtype = csdev->subtype.source_subtype; 448 + 449 + if (type != CORESIGHT_DEV_TYPE_SOURCE) { 450 + dev_err(&csdev->dev, "wrong device type in %s\n", function); 451 + return -EINVAL; 452 + } 453 + 454 + if (subtype != CORESIGHT_DEV_SUBTYPE_SOURCE_PROC && 455 + subtype != CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE) { 456 + dev_err(&csdev->dev, "wrong device subtype in %s\n", function); 457 + return -EINVAL; 458 + } 459 + 460 + return 0; 461 + } 462 + 467 463 int coresight_enable(struct coresight_device *csdev) 468 464 { 469 - int ret = 0; 470 - int cpu; 465 + int cpu, ret = 0; 471 466 struct list_head *path; 472 467 473 468 mutex_lock(&coresight_mutex); 474 - if (csdev->type != CORESIGHT_DEV_TYPE_SOURCE) { 475 - ret = -EINVAL; 476 - dev_err(&csdev->dev, "wrong device type in %s\n", __func__); 469 + 470 + ret = coresight_validate_source(csdev, __func__); 471 + if (ret) 477 472 goto out; 478 - } 473 + 479 474 if (csdev->enable) 480 475 goto out; 481 476 ··· 520 
461 if (ret) 521 462 goto err_source; 522 463 523 - /* 524 - * When working from sysFS it is important to keep track 525 - * of the paths that were created so that they can be 526 - * undone in 'coresight_disable()'. Since there can only 527 - * be a single session per tracer (when working from sysFS) 528 - * a per-cpu variable will do just fine. 529 - */ 530 - cpu = source_ops(csdev)->cpu_id(csdev); 531 - per_cpu(sysfs_path, cpu) = path; 464 + switch (csdev->subtype.source_subtype) { 465 + case CORESIGHT_DEV_SUBTYPE_SOURCE_PROC: 466 + /* 467 + * When working from sysFS it is important to keep track 468 + * of the paths that were created so that they can be 469 + * undone in 'coresight_disable()'. Since there can only 470 + * be a single session per tracer (when working from sysFS) 471 + * a per-cpu variable will do just fine. 472 + */ 473 + cpu = source_ops(csdev)->cpu_id(csdev); 474 + per_cpu(tracer_path, cpu) = path; 475 + break; 476 + case CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE: 477 + stm_path = path; 478 + break; 479 + default: 480 + /* We can't be here */ 481 + break; 482 + } 532 483 533 484 out: 534 485 mutex_unlock(&coresight_mutex); ··· 555 486 556 487 void coresight_disable(struct coresight_device *csdev) 557 488 { 558 - int cpu; 559 - struct list_head *path; 489 + int cpu, ret; 490 + struct list_head *path = NULL; 560 491 561 492 mutex_lock(&coresight_mutex); 562 - if (csdev->type != CORESIGHT_DEV_TYPE_SOURCE) { 563 - dev_err(&csdev->dev, "wrong device type in %s\n", __func__); 493 + 494 + ret = coresight_validate_source(csdev, __func__); 495 + if (ret) 564 496 goto out; 565 - } 497 + 566 498 if (!csdev->enable) 567 499 goto out; 568 500 569 - cpu = source_ops(csdev)->cpu_id(csdev); 570 - path = per_cpu(sysfs_path, cpu); 501 + switch (csdev->subtype.source_subtype) { 502 + case CORESIGHT_DEV_SUBTYPE_SOURCE_PROC: 503 + cpu = source_ops(csdev)->cpu_id(csdev); 504 + path = per_cpu(tracer_path, cpu); 505 + per_cpu(tracer_path, cpu) = NULL; 506 + break; 507 + 
case CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE: 508 + path = stm_path; 509 + stm_path = NULL; 510 + break; 511 + default: 512 + /* We can't be here */ 513 + break; 514 + } 515 + 571 516 coresight_disable_source(csdev); 572 517 coresight_disable_path(path); 573 518 coresight_release_path(path); 574 - per_cpu(sysfs_path, cpu) = NULL; 575 519 576 520 out: 577 521 mutex_unlock(&coresight_mutex); ··· 596 514 { 597 515 struct coresight_device *csdev = to_coresight_device(dev); 598 516 599 - return scnprintf(buf, PAGE_SIZE, "%u\n", (unsigned)csdev->activated); 517 + return scnprintf(buf, PAGE_SIZE, "%u\n", csdev->activated); 600 518 } 601 519 602 520 static ssize_t enable_sink_store(struct device *dev, ··· 626 544 { 627 545 struct coresight_device *csdev = to_coresight_device(dev); 628 546 629 - return scnprintf(buf, PAGE_SIZE, "%u\n", (unsigned)csdev->enable); 547 + return scnprintf(buf, PAGE_SIZE, "%u\n", csdev->enable); 630 548 } 631 549 632 550 static ssize_t enable_source_store(struct device *dev,
+27 -2
drivers/hwtracing/intel_th/core.c
··· 71 71 if (ret) 72 72 return ret; 73 73 74 + if (thdrv->attr_group) { 75 + ret = sysfs_create_group(&thdev->dev.kobj, thdrv->attr_group); 76 + if (ret) { 77 + thdrv->remove(thdev); 78 + 79 + return ret; 80 + } 81 + } 82 + 74 83 if (thdev->type == INTEL_TH_OUTPUT && 75 84 !intel_th_output_assigned(thdev)) 76 85 ret = hubdrv->assign(hub, thdev); ··· 99 90 if (err) 100 91 return err; 101 92 } 93 + 94 + if (thdrv->attr_group) 95 + sysfs_remove_group(&thdev->dev.kobj, thdrv->attr_group); 102 96 103 97 thdrv->remove(thdev); 104 98 ··· 183 171 184 172 static int intel_th_output_activate(struct intel_th_device *thdev) 185 173 { 186 - struct intel_th_driver *thdrv = to_intel_th_driver(thdev->dev.driver); 174 + struct intel_th_driver *thdrv = 175 + to_intel_th_driver_or_null(thdev->dev.driver); 176 + 177 + if (!thdrv) 178 + return -ENODEV; 179 + 180 + if (!try_module_get(thdrv->driver.owner)) 181 + return -ENODEV; 187 182 188 183 if (thdrv->activate) 189 184 return thdrv->activate(thdev); ··· 202 183 203 184 static void intel_th_output_deactivate(struct intel_th_device *thdev) 204 185 { 205 - struct intel_th_driver *thdrv = to_intel_th_driver(thdev->dev.driver); 186 + struct intel_th_driver *thdrv = 187 + to_intel_th_driver_or_null(thdev->dev.driver); 188 + 189 + if (!thdrv) 190 + return; 206 191 207 192 if (thdrv->deactivate) 208 193 thdrv->deactivate(thdev); 209 194 else 210 195 intel_th_trace_disable(thdev); 196 + 197 + module_put(thdrv->driver.owner); 211 198 } 212 199 213 200 static ssize_t active_show(struct device *dev, struct device_attribute *attr,
+6
drivers/hwtracing/intel_th/intel_th.h
··· 115 115 * @enable: enable tracing for a given output device 116 116 * @disable: disable tracing for a given output device 117 117 * @fops: file operations for device nodes 118 + * @attr_group: attributes provided by the driver 118 119 * 119 120 * Callbacks @probe and @remove are required for all device types. 120 121 * Switch device driver needs to fill in @assign, @enable and @disable ··· 140 139 void (*deactivate)(struct intel_th_device *thdev); 141 140 /* file_operations for those who want a device node */ 142 141 const struct file_operations *fops; 142 + /* optional attributes */ 143 + struct attribute_group *attr_group; 143 144 144 145 /* source ops */ 145 146 int (*set_output)(struct intel_th_device *thdev, ··· 150 147 151 148 #define to_intel_th_driver(_d) \ 152 149 container_of((_d), struct intel_th_driver, driver) 150 + 151 + #define to_intel_th_driver_or_null(_d) \ 152 + ((_d) ? to_intel_th_driver(_d) : NULL) 153 153 154 154 static inline struct intel_th_device * 155 155 to_intel_th_hub(struct intel_th_device *thdev)
+68 -48
drivers/hwtracing/intel_th/msu.c
··· 122 122 atomic_t mmap_count; 123 123 struct mutex buf_mutex; 124 124 125 - struct mutex iter_mutex; 126 125 struct list_head iter_list; 127 126 128 127 /* config */ ··· 256 257 257 258 iter = kzalloc(sizeof(*iter), GFP_KERNEL); 258 259 if (!iter) 259 - return NULL; 260 + return ERR_PTR(-ENOMEM); 261 + 262 + mutex_lock(&msc->buf_mutex); 263 + 264 + /* 265 + * Reading and tracing are mutually exclusive; if msc is 266 + * enabled, open() will fail; otherwise existing readers 267 + * will prevent enabling the msc and the rest of fops don't 268 + * need to worry about it. 269 + */ 270 + if (msc->enabled) { 271 + kfree(iter); 272 + iter = ERR_PTR(-EBUSY); 273 + goto unlock; 274 + } 260 275 261 276 msc_iter_init(iter); 262 277 iter->msc = msc; 263 278 264 - mutex_lock(&msc->iter_mutex); 265 279 list_add_tail(&iter->entry, &msc->iter_list); 266 - mutex_unlock(&msc->iter_mutex); 280 + unlock: 281 + mutex_unlock(&msc->buf_mutex); 267 282 268 283 return iter; 269 284 } 270 285 271 286 static void msc_iter_remove(struct msc_iter *iter, struct msc *msc) 272 287 { 273 - mutex_lock(&msc->iter_mutex); 288 + mutex_lock(&msc->buf_mutex); 274 289 list_del(&iter->entry); 275 - mutex_unlock(&msc->iter_mutex); 290 + mutex_unlock(&msc->buf_mutex); 276 291 277 292 kfree(iter); 278 293 } ··· 467 454 { 468 455 struct msc_window *win; 469 456 470 - mutex_lock(&msc->buf_mutex); 471 457 list_for_each_entry(win, &msc->win_list, entry) { 472 458 unsigned int blk; 473 459 size_t hw_sz = sizeof(struct msc_block_desc) - ··· 478 466 memset(&bdesc->hw_tag, 0, hw_sz); 479 467 } 480 468 } 481 - mutex_unlock(&msc->buf_mutex); 482 469 } 483 470 484 471 /** ··· 485 474 * @msc: the MSC device to configure 486 475 * 487 476 * Program storage mode, wrapping, burst length and trace buffer address 488 - * into a given MSC. If msc::enabled is set, enable the trace, too. 477 + * into a given MSC. Then, enable tracing and set msc::enabled. 
478 + * The latter is serialized on msc::buf_mutex, so make sure to hold it. 489 479 */ 490 480 static int msc_configure(struct msc *msc) 491 481 { 492 482 u32 reg; 483 + 484 + lockdep_assert_held(&msc->buf_mutex); 493 485 494 486 if (msc->mode > MSC_MODE_MULTI) 495 487 return -ENOTSUPP; ··· 511 497 reg = ioread32(msc->reg_base + REG_MSU_MSC0CTL); 512 498 reg &= ~(MSC_MODE | MSC_WRAPEN | MSC_EN | MSC_RD_HDR_OVRD); 513 499 500 + reg |= MSC_EN; 514 501 reg |= msc->mode << __ffs(MSC_MODE); 515 502 reg |= msc->burst_len << __ffs(MSC_LEN); 516 - /*if (msc->mode == MSC_MODE_MULTI) 517 - reg |= MSC_RD_HDR_OVRD; */ 503 + 518 504 if (msc->wrap) 519 505 reg |= MSC_WRAPEN; 520 - if (msc->enabled) 521 - reg |= MSC_EN; 522 506 523 507 iowrite32(reg, msc->reg_base + REG_MSU_MSC0CTL); 524 508 525 - if (msc->enabled) { 526 - msc->thdev->output.multiblock = msc->mode == MSC_MODE_MULTI; 527 - intel_th_trace_enable(msc->thdev); 528 - } 509 + msc->thdev->output.multiblock = msc->mode == MSC_MODE_MULTI; 510 + intel_th_trace_enable(msc->thdev); 511 + msc->enabled = 1; 512 + 529 513 530 514 return 0; 531 515 } ··· 533 521 * @msc: MSC device to disable 534 522 * 535 523 * If @msc is enabled, disable tracing on the switch and then disable MSC 536 - * storage. 524 + * storage. Caller must hold msc::buf_mutex. 
537 525 */ 538 526 static void msc_disable(struct msc *msc) 539 527 { 540 528 unsigned long count; 541 529 u32 reg; 542 530 543 - if (!msc->enabled) 544 - return; 531 + lockdep_assert_held(&msc->buf_mutex); 545 532 546 533 intel_th_trace_disable(msc->thdev); 547 534 ··· 580 569 static int intel_th_msc_activate(struct intel_th_device *thdev) 581 570 { 582 571 struct msc *msc = dev_get_drvdata(&thdev->dev); 583 - int ret = 0; 572 + int ret = -EBUSY; 584 573 585 574 if (!atomic_inc_unless_negative(&msc->user_count)) 586 575 return -ENODEV; 587 576 588 - mutex_lock(&msc->iter_mutex); 589 - if (!list_empty(&msc->iter_list)) 590 - ret = -EBUSY; 591 - mutex_unlock(&msc->iter_mutex); 577 + mutex_lock(&msc->buf_mutex); 592 578 593 - if (ret) { 579 + /* if there are readers, refuse */ 580 + if (list_empty(&msc->iter_list)) 581 + ret = msc_configure(msc); 582 + 583 + mutex_unlock(&msc->buf_mutex); 584 + 585 + if (ret) 594 586 atomic_dec(&msc->user_count); 595 - return ret; 596 - } 597 587 598 - msc->enabled = 1; 599 - 600 - return msc_configure(msc); 588 + return ret; 601 589 } 602 590 603 591 static void intel_th_msc_deactivate(struct intel_th_device *thdev) 604 592 { 605 593 struct msc *msc = dev_get_drvdata(&thdev->dev); 606 594 607 - msc_disable(msc); 608 - 609 - atomic_dec(&msc->user_count); 595 + mutex_lock(&msc->buf_mutex); 596 + if (msc->enabled) { 597 + msc_disable(msc); 598 + atomic_dec(&msc->user_count); 599 + } 600 + mutex_unlock(&msc->buf_mutex); 610 601 } 611 602 612 603 /** ··· 1048 1035 return -EPERM; 1049 1036 1050 1037 iter = msc_iter_install(msc); 1051 - if (!iter) 1052 - return -ENOMEM; 1038 + if (IS_ERR(iter)) 1039 + return PTR_ERR(iter); 1053 1040 1054 1041 file->private_data = iter; 1055 1042 ··· 1113 1100 1114 1101 if (!atomic_inc_unless_negative(&msc->user_count)) 1115 1102 return 0; 1116 - 1117 - if (msc->enabled) { 1118 - ret = -EBUSY; 1119 - goto put_count; 1120 - } 1121 1103 1122 1104 if (msc->mode == MSC_MODE_SINGLE && !msc->single_wrap) 1123 
1105 size = msc->single_sz; ··· 1253 1245 .read = intel_th_msc_read, 1254 1246 .mmap = intel_th_msc_mmap, 1255 1247 .llseek = no_llseek, 1248 + .owner = THIS_MODULE, 1256 1249 }; 1257 1250 1258 1251 static int intel_th_msc_init(struct msc *msc) ··· 1263 1254 msc->mode = MSC_MODE_MULTI; 1264 1255 mutex_init(&msc->buf_mutex); 1265 1256 INIT_LIST_HEAD(&msc->win_list); 1266 - 1267 - mutex_init(&msc->iter_mutex); 1268 1257 INIT_LIST_HEAD(&msc->iter_list); 1269 1258 1270 1259 msc->burst_len = ··· 1400 1393 do { 1401 1394 end = memchr(p, ',', len); 1402 1395 s = kstrndup(p, end ? end - p : len, GFP_KERNEL); 1396 + if (!s) { 1397 + ret = -ENOMEM; 1398 + goto free_win; 1399 + } 1400 + 1403 1401 ret = kstrtoul(s, 10, &val); 1404 1402 kfree(s); 1405 1403 ··· 1485 1473 if (err) 1486 1474 return err; 1487 1475 1488 - err = sysfs_create_group(&dev->kobj, &msc_output_group); 1489 - if (err) 1490 - return err; 1491 - 1492 1476 dev_set_drvdata(dev, msc); 1493 1477 1494 1478 return 0; ··· 1492 1484 1493 1485 static void intel_th_msc_remove(struct intel_th_device *thdev) 1494 1486 { 1495 - sysfs_remove_group(&thdev->dev.kobj, &msc_output_group); 1487 + struct msc *msc = dev_get_drvdata(&thdev->dev); 1488 + int ret; 1489 + 1490 + intel_th_msc_deactivate(thdev); 1491 + 1492 + /* 1493 + * Buffers should not be used at this point except if the 1494 + * output character device is still open and the parent 1495 + * device gets detached from its bus, which is a FIXME. 1496 + */ 1497 + ret = msc_buffer_free_unless_used(msc); 1498 + WARN_ON_ONCE(ret); 1496 1499 } 1497 1500 1498 1501 static struct intel_th_driver intel_th_msc_driver = { ··· 1512 1493 .activate = intel_th_msc_activate, 1513 1494 .deactivate = intel_th_msc_deactivate, 1514 1495 .fops = &intel_th_msc_fops, 1496 + .attr_group = &msc_output_group, 1515 1497 .driver = { 1516 1498 .name = "msc", 1517 1499 .owner = THIS_MODULE,
+5
drivers/hwtracing/intel_th/pci.c
··· 75 75 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0a80), 76 76 .driver_data = (kernel_ulong_t)0, 77 77 }, 78 + { 79 + /* Broxton B-step */ 80 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1a8e), 81 + .driver_data = (kernel_ulong_t)0, 82 + }, 78 83 { 0 }, 79 84 }; 80 85
+1 -5
drivers/hwtracing/intel_th/pti.c
··· 200 200 struct resource *res; 201 201 struct pti_device *pti; 202 202 void __iomem *base; 203 - int ret; 204 203 205 204 res = intel_th_device_get_resource(thdev, IORESOURCE_MEM, 0); 206 205 if (!res) ··· 218 219 219 220 read_hw_config(pti); 220 221 221 - ret = sysfs_create_group(&dev->kobj, &pti_output_group); 222 - if (ret) 223 - return ret; 224 - 225 222 dev_set_drvdata(dev, pti); 226 223 227 224 return 0; ··· 232 237 .remove = intel_th_pti_remove, 233 238 .activate = intel_th_pti_activate, 234 239 .deactivate = intel_th_pti_deactivate, 240 + .attr_group = &pti_output_group, 235 241 .driver = { 236 242 .name = "pti", 237 243 .owner = THIS_MODULE,
+29 -13
drivers/hwtracing/stm/core.c
··· 67 67 68 68 static DEVICE_ATTR_RO(channels); 69 69 70 + static ssize_t hw_override_show(struct device *dev, 71 + struct device_attribute *attr, 72 + char *buf) 73 + { 74 + struct stm_device *stm = to_stm_device(dev); 75 + int ret; 76 + 77 + ret = sprintf(buf, "%u\n", stm->data->hw_override); 78 + 79 + return ret; 80 + } 81 + 82 + static DEVICE_ATTR_RO(hw_override); 83 + 70 84 static struct attribute *stm_attrs[] = { 71 85 &dev_attr_masters.attr, 72 86 &dev_attr_channels.attr, 87 + &dev_attr_hw_override.attr, 73 88 NULL, 74 89 }; 75 90 ··· 561 546 if (ret) 562 547 goto err_free; 563 548 564 - ret = 0; 565 - 566 549 if (stm->data->link) 567 550 ret = stm->data->link(stm->data, stmf->output.master, 568 551 stmf->output.channel); ··· 681 668 stm->dev.parent = parent; 682 669 stm->dev.release = stm_device_release; 683 670 671 + mutex_init(&stm->link_mutex); 672 + spin_lock_init(&stm->link_lock); 673 + INIT_LIST_HEAD(&stm->link_list); 674 + 675 + /* initialize the object before it is accessible via sysfs */ 676 + spin_lock_init(&stm->mc_lock); 677 + mutex_init(&stm->policy_mutex); 678 + stm->sw_nmasters = nmasters; 679 + stm->owner = owner; 680 + stm->data = stm_data; 681 + stm_data->stm = stm; 682 + 684 683 err = kobject_set_name(&stm->dev.kobj, "%s", stm_data->name); 685 684 if (err) 686 685 goto err_device; ··· 701 676 if (err) 702 677 goto err_device; 703 678 704 - mutex_init(&stm->link_mutex); 705 - spin_lock_init(&stm->link_lock); 706 - INIT_LIST_HEAD(&stm->link_list); 707 - 708 - spin_lock_init(&stm->mc_lock); 709 - mutex_init(&stm->policy_mutex); 710 - stm->sw_nmasters = nmasters; 711 - stm->owner = owner; 712 - stm->data = stm_data; 713 - stm_data->stm = stm; 714 - 715 679 return 0; 716 680 717 681 err_device: 682 + unregister_chrdev(stm->major, stm_data->name); 683 + 718 684 /* matches device_initialize() above */ 719 685 put_device(&stm->dev); 720 686 err_free:
+5 -9
drivers/hwtracing/stm/dummy_stm.c
··· 46 46 47 47 static int nr_dummies = 4; 48 48 49 - module_param(nr_dummies, int, 0600); 50 - 51 - static unsigned int dummy_stm_nr; 49 + module_param(nr_dummies, int, 0400); 52 50 53 51 static unsigned int fail_mode; 54 52 ··· 63 65 64 66 static int dummy_stm_init(void) 65 67 { 66 - int i, ret = -ENOMEM, __nr_dummies = ACCESS_ONCE(nr_dummies); 68 + int i, ret = -ENOMEM; 67 69 68 - if (__nr_dummies < 0 || __nr_dummies > DUMMY_STM_MAX) 70 + if (nr_dummies < 0 || nr_dummies > DUMMY_STM_MAX) 69 71 return -EINVAL; 70 72 71 - for (i = 0; i < __nr_dummies; i++) { 73 + for (i = 0; i < nr_dummies; i++) { 72 74 dummy_stm[i].name = kasprintf(GFP_KERNEL, "dummy_stm.%d", i); 73 75 if (!dummy_stm[i].name) 74 76 goto fail_unregister; ··· 83 85 if (ret) 84 86 goto fail_free; 85 87 } 86 - 87 - dummy_stm_nr = __nr_dummies; 88 88 89 89 return 0; 90 90 ··· 101 105 { 102 106 int i; 103 107 104 - for (i = 0; i < dummy_stm_nr; i++) { 108 + for (i = 0; i < nr_dummies; i++) { 105 109 stm_unregister_device(&dummy_stm[i]); 106 110 kfree(dummy_stm[i].name); 107 111 }
+5 -9
drivers/hwtracing/stm/heartbeat.c
··· 26 26 static int nr_devs = 4; 27 27 static int interval_ms = 10; 28 28 29 - module_param(nr_devs, int, 0600); 29 + module_param(nr_devs, int, 0400); 30 30 module_param(interval_ms, int, 0600); 31 31 32 32 static struct stm_heartbeat { ··· 34 34 struct hrtimer hrtimer; 35 35 unsigned int active; 36 36 } stm_heartbeat[STM_HEARTBEAT_MAX]; 37 - 38 - static unsigned int nr_instances; 39 37 40 38 static const char str[] = "heartbeat stm source driver is here to serve you"; 41 39 ··· 72 74 73 75 static int stm_heartbeat_init(void) 74 76 { 75 - int i, ret = -ENOMEM, __nr_instances = ACCESS_ONCE(nr_devs); 77 + int i, ret = -ENOMEM; 76 78 77 - if (__nr_instances < 0 || __nr_instances > STM_HEARTBEAT_MAX) 79 + if (nr_devs < 0 || nr_devs > STM_HEARTBEAT_MAX) 78 80 return -EINVAL; 79 81 80 - for (i = 0; i < __nr_instances; i++) { 82 + for (i = 0; i < nr_devs; i++) { 81 83 stm_heartbeat[i].data.name = 82 84 kasprintf(GFP_KERNEL, "heartbeat.%d", i); 83 85 if (!stm_heartbeat[i].data.name) ··· 96 98 goto fail_free; 97 99 } 98 100 99 - nr_instances = __nr_instances; 100 - 101 101 return 0; 102 102 103 103 fail_unregister: ··· 112 116 { 113 117 int i; 114 118 115 - for (i = 0; i < nr_instances; i++) { 119 + for (i = 0; i < nr_devs; i++) { 116 120 stm_source_unregister_device(&stm_heartbeat[i].data); 117 121 kfree(stm_heartbeat[i].data.name); 118 122 }
+2 -3
drivers/hwtracing/stm/policy.c
··· 107 107 goto unlock; 108 108 109 109 /* must be within [sw_start..sw_end], which is an inclusive range */ 110 - if (first > INT_MAX || last > INT_MAX || first > last || 111 - first < stm->data->sw_start || 110 + if (first > last || first < stm->data->sw_start || 112 111 last > stm->data->sw_end) { 113 112 ret = -ERANGE; 114 113 goto unlock; ··· 341 342 return ERR_PTR(-EINVAL); 342 343 } 343 344 344 - *p++ = '\0'; 345 + *p = '\0'; 345 346 346 347 stm = stm_find_device(devname); 347 348 kfree(devname);
+90 -9
drivers/mcb/mcb-core.c
··· 83 83 84 84 static void mcb_shutdown(struct device *dev) 85 85 { 86 + struct mcb_driver *mdrv = to_mcb_driver(dev->driver); 86 87 struct mcb_device *mdev = to_mcb_device(dev); 87 - struct mcb_driver *mdrv = mdev->driver; 88 88 89 89 if (mdrv && mdrv->shutdown) 90 90 mdrv->shutdown(mdev); 91 91 } 92 + 93 + static ssize_t revision_show(struct device *dev, struct device_attribute *attr, 94 + char *buf) 95 + { 96 + struct mcb_bus *bus = to_mcb_bus(dev); 97 + 98 + return scnprintf(buf, PAGE_SIZE, "%d\n", bus->revision); 99 + } 100 + static DEVICE_ATTR_RO(revision); 101 + 102 + static ssize_t model_show(struct device *dev, struct device_attribute *attr, 103 + char *buf) 104 + { 105 + struct mcb_bus *bus = to_mcb_bus(dev); 106 + 107 + return scnprintf(buf, PAGE_SIZE, "%c\n", bus->model); 108 + } 109 + static DEVICE_ATTR_RO(model); 110 + 111 + static ssize_t minor_show(struct device *dev, struct device_attribute *attr, 112 + char *buf) 113 + { 114 + struct mcb_bus *bus = to_mcb_bus(dev); 115 + 116 + return scnprintf(buf, PAGE_SIZE, "%d\n", bus->minor); 117 + } 118 + static DEVICE_ATTR_RO(minor); 119 + 120 + static ssize_t name_show(struct device *dev, struct device_attribute *attr, 121 + char *buf) 122 + { 123 + struct mcb_bus *bus = to_mcb_bus(dev); 124 + 125 + return scnprintf(buf, PAGE_SIZE, "%s\n", bus->name); 126 + } 127 + static DEVICE_ATTR_RO(name); 128 + 129 + static struct attribute *mcb_bus_attrs[] = { 130 + &dev_attr_revision.attr, 131 + &dev_attr_model.attr, 132 + &dev_attr_minor.attr, 133 + &dev_attr_name.attr, 134 + NULL, 135 + }; 136 + 137 + static const struct attribute_group mcb_carrier_group = { 138 + .attrs = mcb_bus_attrs, 139 + }; 140 + 141 + static const struct attribute_group *mcb_carrier_groups[] = { 142 + &mcb_carrier_group, 143 + NULL, 144 + }; 145 + 92 146 93 147 static struct bus_type mcb_bus_type = { 94 148 .name = "mcb", ··· 151 97 .probe = mcb_probe, 152 98 .remove = mcb_remove, 153 99 .shutdown = mcb_shutdown, 100 + }; 101 + 102 + static 
struct device_type mcb_carrier_device_type = { 103 + .name = "mcb-carrier", 104 + .groups = mcb_carrier_groups, 154 105 }; 155 106 156 107 /** ··· 214 155 int device_id; 215 156 216 157 device_initialize(&dev->dev); 158 + mcb_bus_get(bus); 217 159 dev->dev.bus = &mcb_bus_type; 218 160 dev->dev.parent = bus->dev.parent; 219 161 dev->dev.release = mcb_release_dev; ··· 238 178 } 239 179 EXPORT_SYMBOL_GPL(mcb_device_register); 240 180 181 + static void mcb_free_bus(struct device *dev) 182 + { 183 + struct mcb_bus *bus = to_mcb_bus(dev); 184 + 185 + put_device(bus->carrier); 186 + ida_simple_remove(&mcb_ida, bus->bus_nr); 187 + kfree(bus); 188 + } 189 + 241 190 /** 242 191 * mcb_alloc_bus() - Allocate a new @mcb_bus 243 192 * ··· 256 187 { 257 188 struct mcb_bus *bus; 258 189 int bus_nr; 190 + int rc; 259 191 260 192 bus = kzalloc(sizeof(struct mcb_bus), GFP_KERNEL); 261 193 if (!bus) ··· 264 194 265 195 bus_nr = ida_simple_get(&mcb_ida, 0, 0, GFP_KERNEL); 266 196 if (bus_nr < 0) { 267 - kfree(bus); 268 - return ERR_PTR(bus_nr); 197 + rc = bus_nr; 198 + goto err_free; 269 199 } 270 200 271 - INIT_LIST_HEAD(&bus->children); 272 201 bus->bus_nr = bus_nr; 273 - bus->carrier = carrier; 202 + bus->carrier = get_device(carrier); 203 + 204 + device_initialize(&bus->dev); 205 + bus->dev.parent = carrier; 206 + bus->dev.bus = &mcb_bus_type; 207 + bus->dev.type = &mcb_carrier_device_type; 208 + bus->dev.release = &mcb_free_bus; 209 + 210 + dev_set_name(&bus->dev, "mcb:%d", bus_nr); 211 + rc = device_add(&bus->dev); 212 + if (rc) 213 + goto err_free; 214 + 274 215 return bus; 216 + err_free: 217 + put_device(carrier); 218 + kfree(bus); 219 + return ERR_PTR(rc); 275 220 } 276 221 EXPORT_SYMBOL_GPL(mcb_alloc_bus); 277 222 ··· 309 224 void mcb_release_bus(struct mcb_bus *bus) 310 225 { 311 226 mcb_devices_unregister(bus); 312 - 313 - ida_simple_remove(&mcb_ida, bus->bus_nr); 314 - 315 - kfree(bus); 316 227 } 317 228 EXPORT_SYMBOL_GPL(mcb_release_bus); 318 229
-1
drivers/mcb/mcb-internal.h
··· 5 5 6 6 #define PCI_VENDOR_ID_MEN 0x1a88 7 7 #define PCI_DEVICE_ID_MEN_CHAMELEON 0x4d45 8 - #define CHAMELEON_FILENAME_LEN 12 9 8 #define CHAMELEONV2_MAGIC 0xabce 10 9 #define CHAM_HEADER_SIZE 0x200 11 10
+6 -11
drivers/mcb/mcb-parse.c
··· 57 57 mdev->id = GDD_DEV(reg1); 58 58 mdev->rev = GDD_REV(reg1); 59 59 mdev->var = GDD_VAR(reg1); 60 - mdev->bar = GDD_BAR(reg1); 60 + mdev->bar = GDD_BAR(reg2); 61 61 mdev->group = GDD_GRP(reg2); 62 62 mdev->inst = GDD_INS(reg2); 63 63 ··· 113 113 } 114 114 p += hsize; 115 115 116 - pr_debug("header->revision = %d\n", header->revision); 117 - pr_debug("header->model = 0x%x ('%c')\n", header->model, 118 - header->model); 119 - pr_debug("header->minor = %d\n", header->minor); 120 - pr_debug("header->bus_type = 0x%x\n", header->bus_type); 121 - 122 - 123 - pr_debug("header->magic = 0x%x\n", header->magic); 124 - pr_debug("header->filename = \"%.*s\"\n", CHAMELEON_FILENAME_LEN, 125 - header->filename); 116 + bus->revision = header->revision; 117 + bus->model = header->model; 118 + bus->minor = header->minor; 119 + snprintf(bus->name, CHAMELEON_FILENAME_LEN + 1, "%s", 120 + header->filename); 126 121 127 122 for_each_chameleon_cell(dtype, p) { 128 123 switch (dtype) {
+8 -15
drivers/mcb/mcb-pci.c
··· 35 35 struct resource *res; 36 36 struct priv *priv; 37 37 int ret; 38 - int num_cells; 39 38 unsigned long flags; 40 39 41 40 priv = devm_kzalloc(&pdev->dev, sizeof(struct priv), GFP_KERNEL); ··· 54 55 goto out_disable; 55 56 } 56 57 57 - res = request_mem_region(priv->mapbase, CHAM_HEADER_SIZE, 58 - KBUILD_MODNAME); 58 + res = devm_request_mem_region(&pdev->dev, priv->mapbase, 59 + CHAM_HEADER_SIZE, 60 + KBUILD_MODNAME); 59 61 if (!res) { 60 62 dev_err(&pdev->dev, "Failed to request PCI memory\n"); 61 63 ret = -EBUSY; 62 64 goto out_disable; 63 65 } 64 66 65 - priv->base = ioremap(priv->mapbase, CHAM_HEADER_SIZE); 67 + priv->base = devm_ioremap(&pdev->dev, priv->mapbase, CHAM_HEADER_SIZE); 66 68 if (!priv->base) { 67 69 dev_err(&pdev->dev, "Cannot ioremap\n"); 68 70 ret = -ENOMEM; 69 - goto out_release; 71 + goto out_disable; 70 72 } 71 73 72 74 flags = pci_resource_flags(pdev, 0); ··· 75 75 ret = -ENOTSUPP; 76 76 dev_err(&pdev->dev, 77 77 "IO mapped PCI devices are not supported\n"); 78 - goto out_iounmap; 78 + goto out_disable; 79 79 } 80 80 81 81 pci_set_drvdata(pdev, priv); ··· 83 83 priv->bus = mcb_alloc_bus(&pdev->dev); 84 84 if (IS_ERR(priv->bus)) { 85 85 ret = PTR_ERR(priv->bus); 86 - goto out_iounmap; 86 + goto out_disable; 87 87 } 88 88 89 89 priv->bus->get_irq = mcb_pci_get_irq; ··· 91 91 ret = chameleon_parse_cells(priv->bus, priv->mapbase, priv->base); 92 92 if (ret < 0) 93 93 goto out_mcb_bus; 94 - num_cells = ret; 95 94 96 - dev_dbg(&pdev->dev, "Found %d cells\n", num_cells); 95 + dev_dbg(&pdev->dev, "Found %d cells\n", ret); 97 96 98 97 mcb_bus_add_devices(priv->bus); 99 98 ··· 100 101 101 102 out_mcb_bus: 102 103 mcb_release_bus(priv->bus); 103 - out_iounmap: 104 - iounmap(priv->base); 105 - out_release: 106 - pci_release_region(pdev, 0); 107 104 out_disable: 108 105 pci_disable_device(pdev); 109 106 return ret; ··· 111 116 112 117 mcb_release_bus(priv->bus); 113 118 114 - iounmap(priv->base); 115 - release_region(priv->mapbase, 
CHAM_HEADER_SIZE); 116 119 pci_disable_device(pdev); 117 120 } 118 121
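The mcb-pci.c hunks swap ioremap/request_mem_region for their devm_* counterparts, which is why the out_iounmap and out_release labels disappear: devres-managed resources are released automatically, in reverse acquisition order, when the device is unbound. A toy model of that mechanism (this is an illustrative sketch, not the real driver-core implementation):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of devm-style managed resources: cleanups registered
 * against a device run in reverse order when the device goes away.
 * All names and limits here are illustrative. */
#define MAX_ACTIONS 8

struct toy_device {
    void (*action[MAX_ACTIONS])(void *);
    void *data[MAX_ACTIONS];
    int n;
};

static int toy_devm_add_action(struct toy_device *dev,
                               void (*fn)(void *), void *data)
{
    if (dev->n >= MAX_ACTIONS)
        return -1;
    dev->action[dev->n] = fn;
    dev->data[dev->n] = data;
    dev->n++;
    return 0;
}

static void toy_device_release(struct toy_device *dev)
{
    while (dev->n > 0) {           /* LIFO teardown, like devres */
        dev->n--;
        dev->action[dev->n](dev->data[dev->n]);
    }
}

static void record(void *p) { *(int *)p += 1; }
```

The practical consequence is visible in the diff: every error path can collapse to `goto out_disable`, because the mapped region and the ioremap are no longer the probe function's responsibility to undo.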
+1 -1
drivers/memory/of_memory.c
··· 109 109 struct lpddr2_timings *timings = NULL; 110 110 u32 arr_sz = 0, i = 0; 111 111 struct device_node *np_tim; 112 - char *tim_compat; 112 + char *tim_compat = NULL; 113 113 114 114 switch (device_type) { 115 115 case DDR_TYPE_LPDDR2_S2:
-2
drivers/misc/eeprom/Kconfig
··· 3 3 config EEPROM_AT24 4 4 tristate "I2C EEPROMs / RAMs / ROMs from most vendors" 5 5 depends on I2C && SYSFS 6 - select REGMAP 7 6 select NVMEM 8 7 help 9 8 Enable this driver to get read/write support to most I2C EEPROMs ··· 31 32 config EEPROM_AT25 32 33 tristate "SPI EEPROMs from most vendors" 33 34 depends on SPI && SYSFS 34 - select REGMAP 35 35 select NVMEM 36 36 help 37 37 Enable this driver to get read/write support to most SPI EEPROMs,
+22 -81
drivers/misc/eeprom/at24.c
··· 23 23 #include <linux/acpi.h> 24 24 #include <linux/i2c.h> 25 25 #include <linux/nvmem-provider.h> 26 - #include <linux/regmap.h> 27 26 #include <linux/platform_data/at24.h> 28 27 29 28 /* ··· 68 69 unsigned write_max; 69 70 unsigned num_addresses; 70 71 71 - struct regmap_config regmap_config; 72 72 struct nvmem_config nvmem_config; 73 73 struct nvmem_device *nvmem; 74 74 ··· 249 251 return -ETIMEDOUT; 250 252 } 251 253 252 - static ssize_t at24_read(struct at24_data *at24, 253 - char *buf, loff_t off, size_t count) 254 + static int at24_read(void *priv, unsigned int off, void *val, size_t count) 254 255 { 255 - ssize_t retval = 0; 256 + struct at24_data *at24 = priv; 257 + char *buf = val; 256 258 257 259 if (unlikely(!count)) 258 260 return count; ··· 264 266 mutex_lock(&at24->lock); 265 267 266 268 while (count) { 267 - ssize_t status; 269 + int status; 268 270 269 271 status = at24_eeprom_read(at24, buf, off, count); 270 - if (status <= 0) { 271 - if (retval == 0) 272 - retval = status; 273 - break; 272 + if (status < 0) { 273 + mutex_unlock(&at24->lock); 274 + return status; 274 275 } 275 276 buf += status; 276 277 off += status; 277 278 count -= status; 278 - retval += status; 279 279 } 280 280 281 281 mutex_unlock(&at24->lock); 282 282 283 - return retval; 283 + return 0; 284 284 } 285 285 286 286 /* ··· 366 370 return -ETIMEDOUT; 367 371 } 368 372 369 - static ssize_t at24_write(struct at24_data *at24, const char *buf, loff_t off, 370 - size_t count) 373 + static int at24_write(void *priv, unsigned int off, void *val, size_t count) 371 374 { 372 - ssize_t retval = 0; 375 + struct at24_data *at24 = priv; 376 + char *buf = val; 373 377 374 378 if (unlikely(!count)) 375 - return count; 379 + return -EINVAL; 376 380 377 381 /* 378 382 * Write data to chip, protecting against concurrent updates ··· 381 385 mutex_lock(&at24->lock); 382 386 383 387 while (count) { 384 - ssize_t status; 388 + int status; 385 389 386 390 status = at24_eeprom_write(at24, buf, 
off, count); 387 - if (status <= 0) { 388 - if (retval == 0) 389 - retval = status; 390 - break; 391 + if (status < 0) { 392 + mutex_unlock(&at24->lock); 393 + return status; 391 394 } 392 395 buf += status; 393 396 off += status; 394 397 count -= status; 395 - retval += status; 396 398 } 397 399 398 400 mutex_unlock(&at24->lock); 399 401 400 - return retval; 401 - } 402 - 403 - /*-------------------------------------------------------------------------*/ 404 - 405 - /* 406 - * Provide a regmap interface, which is registered with the NVMEM 407 - * framework 408 - */ 409 - static int at24_regmap_read(void *context, const void *reg, size_t reg_size, 410 - void *val, size_t val_size) 411 - { 412 - struct at24_data *at24 = context; 413 - off_t offset = *(u32 *)reg; 414 - int err; 415 - 416 - err = at24_read(at24, val, offset, val_size); 417 - if (err) 418 - return err; 419 402 return 0; 420 403 } 421 - 422 - static int at24_regmap_write(void *context, const void *data, size_t count) 423 - { 424 - struct at24_data *at24 = context; 425 - const char *buf; 426 - u32 offset; 427 - size_t len; 428 - int err; 429 - 430 - memcpy(&offset, data, sizeof(offset)); 431 - buf = (const char *)data + sizeof(offset); 432 - len = count - sizeof(offset); 433 - 434 - err = at24_write(at24, buf, offset, len); 435 - if (err) 436 - return err; 437 - return 0; 438 - } 439 - 440 - static const struct regmap_bus at24_regmap_bus = { 441 - .read = at24_regmap_read, 442 - .write = at24_regmap_write, 443 - .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, 444 - }; 445 - 446 - /*-------------------------------------------------------------------------*/ 447 404 448 405 #ifdef CONFIG_OF 449 406 static void at24_get_ofdata(struct i2c_client *client, ··· 429 480 struct at24_data *at24; 430 481 int err; 431 482 unsigned i, num_addresses; 432 - struct regmap *regmap; 433 483 434 484 if (client->dev.platform_data) { 435 485 chip = *(struct at24_platform_data *)client->dev.platform_data; ··· 555 607 } 556 
608 } 557 609 558 - at24->regmap_config.reg_bits = 32; 559 - at24->regmap_config.val_bits = 8; 560 - at24->regmap_config.reg_stride = 1; 561 - at24->regmap_config.max_register = chip.byte_len - 1; 562 - 563 - regmap = devm_regmap_init(&client->dev, &at24_regmap_bus, at24, 564 - &at24->regmap_config); 565 - if (IS_ERR(regmap)) { 566 - dev_err(&client->dev, "regmap init failed\n"); 567 - err = PTR_ERR(regmap); 568 - goto err_clients; 569 - } 570 - 571 610 at24->nvmem_config.name = dev_name(&client->dev); 572 611 at24->nvmem_config.dev = &client->dev; 573 612 at24->nvmem_config.read_only = !writable; ··· 562 627 at24->nvmem_config.owner = THIS_MODULE; 563 628 at24->nvmem_config.compat = true; 564 629 at24->nvmem_config.base_dev = &client->dev; 630 + at24->nvmem_config.reg_read = at24_read; 631 + at24->nvmem_config.reg_write = at24_write; 632 + at24->nvmem_config.priv = at24; 633 + at24->nvmem_config.stride = 4; 634 + at24->nvmem_config.word_size = 1; 635 + at24->nvmem_config.size = chip.byte_len; 565 636 566 637 at24->nvmem = nvmem_register(&at24->nvmem_config); 567 638
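The at24 conversion changes the read/write contract: the old sysfs-style helpers returned a byte count (possibly partial), while the new nvmem reg_read/reg_write callbacks loop over partial hardware transfers internally and return 0 on success or a negative errno, never a count. A self-contained sketch of that loop under the new contract (backing store and chunk size are invented for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Illustrative: a fake EEPROM that can only transfer 4 bytes at a
 * time, like a page-limited I2C read in at24_eeprom_read(). */
#define CHUNK 4

static const char backing[16] = "0123456789abcdef";

static int eeprom_xfer(char *buf, unsigned int off, size_t count)
{
    size_t n = count < CHUNK ? count : CHUNK;

    if (off + n > sizeof(backing))
        return -EINVAL;
    memcpy(buf, backing + off, n);
    return (int)n;   /* partial transfer length */
}

/* nvmem-style callback: loop until 'count' is satisfied, return
 * 0 or a negative errno - no partial byte counts. */
static int reg_read(void *priv, unsigned int off, void *val, size_t count)
{
    char *buf = val;

    (void)priv;
    while (count) {
        int status = eeprom_xfer(buf, off, count);

        if (status < 0)
            return status;   /* propagate errno, drop partial result */
        buf += status;
        off += status;
        count -= status;
    }
    return 0;
}
```

This also explains why the diff drops the `retval` accumulator and unlocks the mutex directly on the error branch: there is no partial success to report back to the nvmem core.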
+20 -71
drivers/misc/eeprom/at25.c
··· 17 17 #include <linux/sched.h> 18 18 19 19 #include <linux/nvmem-provider.h> 20 - #include <linux/regmap.h> 21 20 #include <linux/spi/spi.h> 22 21 #include <linux/spi/eeprom.h> 23 22 #include <linux/property.h> ··· 33 34 struct mutex lock; 34 35 struct spi_eeprom chip; 35 36 unsigned addrlen; 36 - struct regmap_config regmap_config; 37 37 struct nvmem_config nvmem_config; 38 38 struct nvmem_device *nvmem; 39 39 }; ··· 63 65 64 66 #define io_limit PAGE_SIZE /* bytes */ 65 67 66 - static ssize_t 67 - at25_ee_read( 68 - struct at25_data *at25, 69 - char *buf, 70 - unsigned offset, 71 - size_t count 72 - ) 68 + static int at25_ee_read(void *priv, unsigned int offset, 69 + void *val, size_t count) 73 70 { 71 + struct at25_data *at25 = priv; 72 + char *buf = val; 74 73 u8 command[EE_MAXADDRLEN + 1]; 75 74 u8 *cp; 76 75 ssize_t status; ··· 76 81 u8 instr; 77 82 78 83 if (unlikely(offset >= at25->chip.byte_len)) 79 - return 0; 84 + return -EINVAL; 80 85 if ((offset + count) > at25->chip.byte_len) 81 86 count = at25->chip.byte_len - offset; 82 87 if (unlikely(!count)) 83 - return count; 88 + return -EINVAL; 84 89 85 90 cp = command; 86 91 ··· 126 131 count, offset, (int) status); 127 132 128 133 mutex_unlock(&at25->lock); 129 - return status ? 
status : count; 134 + return status; 130 135 } 131 136 132 - static int at25_regmap_read(void *context, const void *reg, size_t reg_size, 133 - void *val, size_t val_size) 137 + static int at25_ee_write(void *priv, unsigned int off, void *val, size_t count) 134 138 { 135 - struct at25_data *at25 = context; 136 - off_t offset = *(u32 *)reg; 137 - int err; 138 - 139 - err = at25_ee_read(at25, val, offset, val_size); 140 - if (err) 141 - return err; 142 - return 0; 143 - } 144 - 145 - static ssize_t 146 - at25_ee_write(struct at25_data *at25, const char *buf, loff_t off, 147 - size_t count) 148 - { 149 - ssize_t status = 0; 150 - unsigned written = 0; 139 + struct at25_data *at25 = priv; 140 + const char *buf = val; 141 + int status = 0; 151 142 unsigned buf_size; 152 143 u8 *bounce; 153 144 ··· 142 161 if ((off + count) > at25->chip.byte_len) 143 162 count = at25->chip.byte_len - off; 144 163 if (unlikely(!count)) 145 - return count; 164 + return -EINVAL; 146 165 147 166 /* Temp buffer starts with command and address */ 148 167 buf_size = at25->chip.page_size; ··· 237 256 off += segment; 238 257 buf += segment; 239 258 count -= segment; 240 - written += segment; 241 259 242 260 } while (count > 0); 243 261 244 262 mutex_unlock(&at25->lock); 245 263 246 264 kfree(bounce); 247 - return written ? 
written : status; 265 + return status; 248 266 } 249 - 250 - static int at25_regmap_write(void *context, const void *data, size_t count) 251 - { 252 - struct at25_data *at25 = context; 253 - const char *buf; 254 - u32 offset; 255 - size_t len; 256 - int err; 257 - 258 - memcpy(&offset, data, sizeof(offset)); 259 - buf = (const char *)data + sizeof(offset); 260 - len = count - sizeof(offset); 261 - 262 - err = at25_ee_write(at25, buf, offset, len); 263 - if (err) 264 - return err; 265 - return 0; 266 - } 267 - 268 - static const struct regmap_bus at25_regmap_bus = { 269 - .read = at25_regmap_read, 270 - .write = at25_regmap_write, 271 - .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, 272 - }; 273 267 274 268 /*-------------------------------------------------------------------------*/ 275 269 ··· 305 349 { 306 350 struct at25_data *at25 = NULL; 307 351 struct spi_eeprom chip; 308 - struct regmap *regmap; 309 352 int err; 310 353 int sr; 311 354 int addrlen; ··· 345 390 346 391 mutex_init(&at25->lock); 347 392 at25->chip = chip; 348 - at25->spi = spi_dev_get(spi); 393 + at25->spi = spi; 349 394 spi_set_drvdata(spi, at25); 350 395 at25->addrlen = addrlen; 351 - 352 - at25->regmap_config.reg_bits = 32; 353 - at25->regmap_config.val_bits = 8; 354 - at25->regmap_config.reg_stride = 1; 355 - at25->regmap_config.max_register = chip.byte_len - 1; 356 - 357 - regmap = devm_regmap_init(&spi->dev, &at25_regmap_bus, at25, 358 - &at25->regmap_config); 359 - if (IS_ERR(regmap)) { 360 - dev_err(&spi->dev, "regmap init failed\n"); 361 - return PTR_ERR(regmap); 362 - } 363 396 364 397 at25->nvmem_config.name = dev_name(&spi->dev); 365 398 at25->nvmem_config.dev = &spi->dev; ··· 356 413 at25->nvmem_config.owner = THIS_MODULE; 357 414 at25->nvmem_config.compat = true; 358 415 at25->nvmem_config.base_dev = &spi->dev; 416 + at25->nvmem_config.reg_read = at25_ee_read; 417 + at25->nvmem_config.reg_write = at25_ee_write; 418 + at25->nvmem_config.priv = at25; 419 + 
at25->nvmem_config.stride = 4; 420 + at25->nvmem_config.word_size = 1; 421 + at25->nvmem_config.size = chip.byte_len; 359 422 360 423 at25->nvmem = nvmem_register(&at25->nvmem_config); 361 424 if (IS_ERR(at25->nvmem))
+19 -73
drivers/misc/eeprom/eeprom_93xx46.c
··· 20 20 #include <linux/slab.h> 21 21 #include <linux/spi/spi.h> 22 22 #include <linux/nvmem-provider.h> 23 - #include <linux/regmap.h> 24 23 #include <linux/eeprom_93xx46.h> 25 24 26 25 #define OP_START 0x4 ··· 42 43 struct spi_device *spi; 43 44 struct eeprom_93xx46_platform_data *pdata; 44 45 struct mutex lock; 45 - struct regmap_config regmap_config; 46 46 struct nvmem_config nvmem_config; 47 47 struct nvmem_device *nvmem; 48 48 int addrlen; ··· 58 60 return edev->pdata->quirks & EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH; 59 61 } 60 62 61 - static ssize_t 62 - eeprom_93xx46_read(struct eeprom_93xx46_dev *edev, char *buf, 63 - unsigned off, size_t count) 63 + static int eeprom_93xx46_read(void *priv, unsigned int off, 64 + void *val, size_t count) 64 65 { 65 - ssize_t ret = 0; 66 + struct eeprom_93xx46_dev *edev = priv; 67 + char *buf = val; 68 + int err = 0; 66 69 67 70 if (unlikely(off >= edev->size)) 68 71 return 0; ··· 83 84 u16 cmd_addr = OP_READ << edev->addrlen; 84 85 size_t nbytes = count; 85 86 int bits; 86 - int err; 87 87 88 88 if (edev->addrlen == 7) { 89 89 cmd_addr |= off & 0x7f; ··· 118 120 if (err) { 119 121 dev_err(&edev->spi->dev, "read %zu bytes at %d: err. 
%d\n", 120 122 nbytes, (int)off, err); 121 - ret = err; 122 123 break; 123 124 } 124 125 125 126 buf += nbytes; 126 127 off += nbytes; 127 128 count -= nbytes; 128 - ret += nbytes; 129 129 } 130 130 131 131 if (edev->pdata->finish) 132 132 edev->pdata->finish(edev); 133 133 134 134 mutex_unlock(&edev->lock); 135 - return ret; 135 + 136 + return err; 136 137 } 137 138 138 139 static int eeprom_93xx46_ew(struct eeprom_93xx46_dev *edev, int is_on) ··· 227 230 return ret; 228 231 } 229 232 230 - static ssize_t 231 - eeprom_93xx46_write(struct eeprom_93xx46_dev *edev, const char *buf, 232 - loff_t off, size_t count) 233 + static int eeprom_93xx46_write(void *priv, unsigned int off, 234 + void *val, size_t count) 233 235 { 236 + struct eeprom_93xx46_dev *edev = priv; 237 + char *buf = val; 234 238 int i, ret, step = 1; 235 239 236 240 if (unlikely(off >= edev->size)) ··· 273 275 274 276 /* erase/write disable */ 275 277 eeprom_93xx46_ew(edev, 0); 276 - return ret ? : count; 278 + return ret; 277 279 } 278 - 279 - /* 280 - * Provide a regmap interface, which is registered with the NVMEM 281 - * framework 282 - */ 283 - static int eeprom_93xx46_regmap_read(void *context, const void *reg, 284 - size_t reg_size, void *val, 285 - size_t val_size) 286 - { 287 - struct eeprom_93xx46_dev *eeprom_93xx46 = context; 288 - off_t offset = *(u32 *)reg; 289 - int err; 290 - 291 - err = eeprom_93xx46_read(eeprom_93xx46, val, offset, val_size); 292 - if (err) 293 - return err; 294 - return 0; 295 - } 296 - 297 - static int eeprom_93xx46_regmap_write(void *context, const void *data, 298 - size_t count) 299 - { 300 - struct eeprom_93xx46_dev *eeprom_93xx46 = context; 301 - const char *buf; 302 - u32 offset; 303 - size_t len; 304 - int err; 305 - 306 - memcpy(&offset, data, sizeof(offset)); 307 - buf = (const char *)data + sizeof(offset); 308 - len = count - sizeof(offset); 309 - 310 - err = eeprom_93xx46_write(eeprom_93xx46, buf, offset, len); 311 - if (err) 312 - return err; 313 - return 
0; 314 - } 315 - 316 - static const struct regmap_bus eeprom_93xx46_regmap_bus = { 317 - .read = eeprom_93xx46_regmap_read, 318 - .write = eeprom_93xx46_regmap_write, 319 - .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, 320 - }; 321 280 322 281 static int eeprom_93xx46_eral(struct eeprom_93xx46_dev *edev) 323 282 { ··· 435 480 { 436 481 struct eeprom_93xx46_platform_data *pd; 437 482 struct eeprom_93xx46_dev *edev; 438 - struct regmap *regmap; 439 483 int err; 440 484 441 485 if (spi->dev.of_node) { ··· 465 511 466 512 mutex_init(&edev->lock); 467 513 468 - edev->spi = spi_dev_get(spi); 514 + edev->spi = spi; 469 515 edev->pdata = pd; 470 516 471 517 edev->size = 128; 472 - 473 - edev->regmap_config.reg_bits = 32; 474 - edev->regmap_config.val_bits = 8; 475 - edev->regmap_config.reg_stride = 1; 476 - edev->regmap_config.max_register = edev->size - 1; 477 - 478 - regmap = devm_regmap_init(&spi->dev, &eeprom_93xx46_regmap_bus, edev, 479 - &edev->regmap_config); 480 - if (IS_ERR(regmap)) { 481 - dev_err(&spi->dev, "regmap init failed\n"); 482 - err = PTR_ERR(regmap); 483 - goto fail; 484 - } 485 - 486 518 edev->nvmem_config.name = dev_name(&spi->dev); 487 519 edev->nvmem_config.dev = &spi->dev; 488 520 edev->nvmem_config.read_only = pd->flags & EE_READONLY; ··· 476 536 edev->nvmem_config.owner = THIS_MODULE; 477 537 edev->nvmem_config.compat = true; 478 538 edev->nvmem_config.base_dev = &spi->dev; 539 + edev->nvmem_config.reg_read = eeprom_93xx46_read; 540 + edev->nvmem_config.reg_write = eeprom_93xx46_write; 541 + edev->nvmem_config.priv = edev; 542 + edev->nvmem_config.stride = 4; 543 + edev->nvmem_config.word_size = 1; 544 + edev->nvmem_config.size = edev->size; 479 545 480 546 edev->nvmem = nvmem_register(&edev->nvmem_config); 481 547 if (IS_ERR(edev->nvmem)) {
+3 -1
drivers/misc/mei/amthif.c
··· 380 380 381 381 dev = cl->dev; 382 382 383 - if (dev->iamthif_state != MEI_IAMTHIF_READING) 383 + if (dev->iamthif_state != MEI_IAMTHIF_READING) { 384 + mei_irq_discard_msg(dev, mei_hdr); 384 385 return 0; 386 + } 385 387 386 388 ret = mei_cl_irq_read_msg(cl, mei_hdr, cmpl_list); 387 389 if (ret)
+22 -20
drivers/misc/mei/bus.c
··· 220 220 static void mei_cl_bus_event_work(struct work_struct *work) 221 221 { 222 222 struct mei_cl_device *cldev; 223 + struct mei_device *bus; 223 224 224 225 cldev = container_of(work, struct mei_cl_device, event_work); 226 + 227 + bus = cldev->bus; 225 228 226 229 if (cldev->event_cb) 227 230 cldev->event_cb(cldev, cldev->events, cldev->event_context); ··· 232 229 cldev->events = 0; 233 230 234 231 /* Prepare for the next read */ 235 - if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) 232 + if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) { 233 + mutex_lock(&bus->device_lock); 236 234 mei_cl_read_start(cldev->cl, 0, NULL); 235 + mutex_unlock(&bus->device_lock); 236 + } 237 237 } 238 238 239 239 /** ··· 310 304 unsigned long events_mask, 311 305 mei_cldev_event_cb_t event_cb, void *context) 312 306 { 307 + struct mei_device *bus = cldev->bus; 313 308 int ret; 314 309 315 310 if (cldev->event_cb) ··· 323 316 INIT_WORK(&cldev->event_work, mei_cl_bus_event_work); 324 317 325 318 if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) { 319 + mutex_lock(&bus->device_lock); 326 320 ret = mei_cl_read_start(cldev->cl, 0, NULL); 321 + mutex_unlock(&bus->device_lock); 327 322 if (ret && ret != -EBUSY) 328 323 return ret; 329 324 } 330 325 331 326 if (cldev->events_mask & BIT(MEI_CL_EVENT_NOTIF)) { 332 - mutex_lock(&cldev->cl->dev->device_lock); 327 + mutex_lock(&bus->device_lock); 333 328 ret = mei_cl_notify_request(cldev->cl, NULL, event_cb ? 
1 : 0); 334 - mutex_unlock(&cldev->cl->dev->device_lock); 329 + mutex_unlock(&bus->device_lock); 335 330 if (ret) 336 331 return ret; 337 332 } ··· 589 580 struct mei_cl_device *cldev; 590 581 struct mei_cl_driver *cldrv; 591 582 const struct mei_cl_device_id *id; 583 + int ret; 592 584 593 585 cldev = to_mei_cl_device(dev); 594 586 cldrv = to_mei_cl_driver(dev->driver); ··· 604 594 if (!id) 605 595 return -ENODEV; 606 596 607 - __module_get(THIS_MODULE); 597 + ret = cldrv->probe(cldev, id); 598 + if (ret) 599 + return ret; 608 600 609 - return cldrv->probe(cldev, id); 601 + __module_get(THIS_MODULE); 602 + return 0; 610 603 } 611 604 612 605 /** ··· 647 634 char *buf) 648 635 { 649 636 struct mei_cl_device *cldev = to_mei_cl_device(dev); 650 - size_t len; 651 637 652 - len = snprintf(buf, PAGE_SIZE, "%s", cldev->name); 653 - 654 - return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len; 638 + return scnprintf(buf, PAGE_SIZE, "%s", cldev->name); 655 639 } 656 640 static DEVICE_ATTR_RO(name); 657 641 ··· 657 647 { 658 648 struct mei_cl_device *cldev = to_mei_cl_device(dev); 659 649 const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl); 660 - size_t len; 661 650 662 - len = snprintf(buf, PAGE_SIZE, "%pUl", uuid); 663 - 664 - return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len; 651 + return scnprintf(buf, PAGE_SIZE, "%pUl", uuid); 665 652 } 666 653 static DEVICE_ATTR_RO(uuid); 667 654 ··· 667 660 { 668 661 struct mei_cl_device *cldev = to_mei_cl_device(dev); 669 662 u8 version = mei_me_cl_ver(cldev->me_cl); 670 - size_t len; 671 663 672 - len = snprintf(buf, PAGE_SIZE, "%02X", version); 673 - 674 - return (len >= PAGE_SIZE) ? 
(PAGE_SIZE - 1) : len; 664 + return scnprintf(buf, PAGE_SIZE, "%02X", version); 675 665 } 676 666 static DEVICE_ATTR_RO(version); 677 667 ··· 677 673 { 678 674 struct mei_cl_device *cldev = to_mei_cl_device(dev); 679 675 const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl); 680 - size_t len; 681 676 682 - len = snprintf(buf, PAGE_SIZE, "mei:%s:%pUl:", cldev->name, uuid); 683 - return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len; 677 + return scnprintf(buf, PAGE_SIZE, "mei:%s:%pUl:", cldev->name, uuid); 684 678 } 685 679 static DEVICE_ATTR_RO(modalias); 686 680
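The sysfs show-function hunks above replace snprintf()-plus-clamping with scnprintf(). The difference is the return value: snprintf() reports the length the output *would* have needed, which can exceed the buffer, while the kernel's scnprintf() reports the characters actually written, which is exactly what a show function must return. A userspace re-creation of scnprintf() demonstrating the distinction:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>

/* Userspace re-creation of the kernel's scnprintf(): identical to
 * snprintf() except it returns the number of characters actually
 * written into buf (excluding the NUL), never the would-be length. */
static int scnprintf(char *buf, size_t size, const char *fmt, ...)
{
    va_list args;
    int i;

    va_start(args, fmt);
    i = vsnprintf(buf, size, fmt, args);
    va_end(args);

    if (i < (int)size)
        return i;
    return size ? (int)size - 1 : 0;
}
```

With scnprintf() the manual `(len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len` dance in the old code becomes unnecessary, which is the entire content of those hunks.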
+20 -10
drivers/misc/mei/client.c
··· 727 727 cl_dbg(dev, cl, "Waking up waiting for event clients!\n"); 728 728 wake_up_interruptible(&cl->ev_wait); 729 729 } 730 + /* synchronized under device mutex */ 731 + if (waitqueue_active(&cl->wait)) { 732 + cl_dbg(dev, cl, "Waking up ctrl write clients!\n"); 733 + wake_up_interruptible(&cl->wait); 734 + } 730 735 } 731 736 732 737 /** ··· 884 879 } 885 880 886 881 mutex_unlock(&dev->device_lock); 887 - wait_event_timeout(cl->wait, cl->state == MEI_FILE_DISCONNECT_REPLY, 882 + wait_event_timeout(cl->wait, 883 + cl->state == MEI_FILE_DISCONNECT_REPLY || 884 + cl->state == MEI_FILE_DISCONNECTED, 888 885 mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT)); 889 886 mutex_lock(&dev->device_lock); 890 887 891 888 rets = cl->status; 892 - if (cl->state != MEI_FILE_DISCONNECT_REPLY) { 889 + if (cl->state != MEI_FILE_DISCONNECT_REPLY && 890 + cl->state != MEI_FILE_DISCONNECTED) { 893 891 cl_dbg(dev, cl, "timeout on disconnect from FW client.\n"); 894 892 rets = -ETIME; 895 893 } ··· 1093 1085 mutex_unlock(&dev->device_lock); 1094 1086 wait_event_timeout(cl->wait, 1095 1087 (cl->state == MEI_FILE_CONNECTED || 1088 + cl->state == MEI_FILE_DISCONNECTED || 1096 1089 cl->state == MEI_FILE_DISCONNECT_REQUIRED || 1097 1090 cl->state == MEI_FILE_DISCONNECT_REPLY), 1098 1091 mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT)); ··· 1342 1333 } 1343 1334 1344 1335 mutex_unlock(&dev->device_lock); 1345 - wait_event_timeout(cl->wait, cl->notify_en == request, 1346 - mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT)); 1336 + wait_event_timeout(cl->wait, 1337 + cl->notify_en == request || !mei_cl_is_connected(cl), 1338 + mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT)); 1347 1339 mutex_lock(&dev->device_lock); 1348 1340 1349 - if (cl->notify_en != request) { 1350 - mei_io_list_flush(&dev->ctrl_rd_list, cl); 1351 - mei_io_list_flush(&dev->ctrl_wr_list, cl); 1352 - if (!cl->status) 1353 - cl->status = -EFAULT; 1354 - } 1341 + if (cl->notify_en != request && !cl->status) 1342 + cl->status = -EFAULT; 1355 
1343 1356 1344 rets = cl->status; 1357 1345 ··· 1772 1766 if (waitqueue_active(&cl->wait)) 1773 1767 wake_up(&cl->wait); 1774 1768 1769 + break; 1770 + case MEI_FOP_DISCONNECT_RSP: 1771 + mei_io_cb_free(cb); 1772 + mei_cl_set_disconnected(cl); 1775 1773 break; 1776 1774 default: 1777 1775 BUG_ON(0);
+9 -17
drivers/misc/mei/hbm.c
··· 113 113 */ 114 114 void mei_hbm_reset(struct mei_device *dev) 115 115 { 116 - dev->me_client_index = 0; 117 - 118 116 mei_me_cl_rm_all(dev); 119 117 120 118 mei_hbm_idle(dev); ··· 528 530 * mei_hbm_prop_req - request property for a single client 529 531 * 530 532 * @dev: the device structure 533 + * @start_idx: client index to start search 531 534 * 532 535 * Return: 0 on success and < 0 on failure 533 536 */ 534 - 535 - static int mei_hbm_prop_req(struct mei_device *dev) 537 + static int mei_hbm_prop_req(struct mei_device *dev, unsigned long start_idx) 536 538 { 537 - 538 539 struct mei_msg_hdr *mei_hdr = &dev->wr_msg.hdr; 539 540 struct hbm_props_request *prop_req; 540 541 const size_t len = sizeof(struct hbm_props_request); 541 - unsigned long next_client_index; 542 + unsigned long addr; 542 543 int ret; 543 544 544 - next_client_index = find_next_bit(dev->me_clients_map, MEI_CLIENTS_MAX, 545 - dev->me_client_index); 545 + addr = find_next_bit(dev->me_clients_map, MEI_CLIENTS_MAX, start_idx); 546 546 547 547 /* We got all client properties */ 548 - if (next_client_index == MEI_CLIENTS_MAX) { 548 + if (addr == MEI_CLIENTS_MAX) { 549 549 dev->hbm_state = MEI_HBM_STARTED; 550 550 mei_host_client_init(dev); 551 551 ··· 556 560 memset(prop_req, 0, sizeof(struct hbm_props_request)); 557 561 558 562 prop_req->hbm_cmd = HOST_CLIENT_PROPERTIES_REQ_CMD; 559 - prop_req->me_addr = next_client_index; 563 + prop_req->me_addr = addr; 560 564 561 565 ret = mei_write_message(dev, mei_hdr, dev->wr_msg.data); 562 566 if (ret) { ··· 566 570 } 567 571 568 572 dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT; 569 - dev->me_client_index = next_client_index; 570 573 571 574 return 0; 572 575 } ··· 877 882 cb = mei_io_cb_init(cl, MEI_FOP_DISCONNECT_RSP, NULL); 878 883 if (!cb) 879 884 return -ENOMEM; 880 - cl_dbg(dev, cl, "add disconnect response as first\n"); 881 - list_add(&cb->list, &dev->ctrl_wr_list.list); 885 + list_add_tail(&cb->list, &dev->ctrl_wr_list.list); 882 886 } 883 
887 return 0; 884 888 } ··· 1146 1152 1147 1153 mei_hbm_me_cl_add(dev, props_res); 1148 1154 1149 - dev->me_client_index++; 1150 - 1151 1155 /* request property for the next client */ 1152 - if (mei_hbm_prop_req(dev)) 1156 + if (mei_hbm_prop_req(dev, props_res->me_addr + 1)) 1153 1157 return -EIO; 1154 1158 1155 1159 break; ··· 1173 1181 dev->hbm_state = MEI_HBM_CLIENT_PROPERTIES; 1174 1182 1175 1183 /* first property request */ 1176 - if (mei_hbm_prop_req(dev)) 1184 + if (mei_hbm_prop_req(dev, 0)) 1177 1185 return -EIO; 1178 1186 1179 1187 break;
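The hbm.c change drops the cached `me_client_index` from struct mei_device and instead derives the next client to enumerate from the bitmap itself, passing `props_res->me_addr + 1` as the search start to find_next_bit(). A minimal re-creation of find_next_bit() over a word array shows the contract the new code relies on (size is returned when no further bit is set):

```c
#include <assert.h>

/* Minimal re-creation of the kernel's find_next_bit(): return the
 * index of the first set bit at or after 'start', or 'size' if none.
 * The kernel version is word-at-a-time; this bit-at-a-time sketch
 * keeps only the semantics. */
#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

static unsigned long find_next_bit(const unsigned long *map,
                                   unsigned long size,
                                   unsigned long start)
{
    unsigned long i;

    for (i = start; i < size; i++)
        if (map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
            return i;
    return size;   /* "not found": enumeration is complete */
}
```

That `== size` result is what mei_hbm_prop_req() tests against MEI_CLIENTS_MAX to decide enumeration has finished, making the stored index (and the state it could get out of sync with) redundant.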
+1 -5
drivers/misc/mei/interrupt.c
··· 76 76 * @dev: mei device 77 77 * @hdr: message header 78 78 */ 79 - static inline 80 79 void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr) 81 80 { 82 81 /* ··· 193 194 return -EMSGSIZE; 194 195 195 196 ret = mei_hbm_cl_disconnect_rsp(dev, cl); 196 - mei_cl_set_disconnected(cl); 197 - mei_io_cb_free(cb); 198 - mei_me_cl_put(cl->me_cl); 199 - cl->me_cl = NULL; 197 + list_move_tail(&cb->list, &cmpl_list->list); 200 198 201 199 return ret; 202 200 }
+2 -2
drivers/misc/mei/mei_dev.h
··· 396 396 * @me_clients : list of FW clients 397 397 * @me_clients_map : FW clients bit map 398 398 * @host_clients_map : host clients id pool 399 - * @me_client_index : last FW client index in enumeration 400 399 * 401 400 * @allow_fixed_address: allow user space to connect a fixed client 402 401 * @override_fixed_address: force allow fixed address behavior ··· 485 486 struct list_head me_clients; 486 487 DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX); 487 488 DECLARE_BITMAP(host_clients_map, MEI_CLIENTS_MAX); 488 - unsigned long me_client_index; 489 489 490 490 bool allow_fixed_address; 491 491 bool override_fixed_address; ··· 701 703 bool mei_hbuf_acquire(struct mei_device *dev); 702 704 703 705 bool mei_write_is_idle(struct mei_device *dev); 706 + 707 + void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr); 704 708 705 709 #if IS_ENABLED(CONFIG_DEBUG_FS) 706 710 int mei_dbgfs_register(struct mei_device *dev, const char *name);
+1
drivers/misc/mic/Kconfig
··· 132 132 tristate "VOP Driver" 133 133 depends on 64BIT && PCI && X86 && VOP_BUS 134 134 select VHOST_RING 135 + select VIRTIO 135 136 help 136 137 This enables VOP (Virtio over PCIe) Driver support for the Intel 137 138 Many Integrated Core (MIC) family of PCIe form factor coprocessor
+3 -3
drivers/misc/mic/host/mic_boot.c
··· 76 76 { 77 77 struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev); 78 78 79 - return mic_free_irq(mdev, cookie, data); 79 + mic_free_irq(mdev, cookie, data); 80 80 } 81 81 82 82 static void __mic_ack_interrupt(struct vop_device *vpdev, int num) ··· 272 272 { 273 273 struct mic_device *mdev = scdev_to_mdev(scdev); 274 274 275 - return mic_free_irq(mdev, cookie, data); 275 + mic_free_irq(mdev, cookie, data); 276 276 } 277 277 278 278 static void ___mic_ack_interrupt(struct scif_hw_dev *scdev, int num) ··· 362 362 static void _mic_free_irq(struct mbus_device *mbdev, 363 363 struct mic_irq *cookie, void *data) 364 364 { 365 - return mic_free_irq(mbdev_to_mdev(mbdev), cookie, data); 365 + mic_free_irq(mbdev_to_mdev(mbdev), cookie, data); 366 366 } 367 367 368 368 static void _mic_ack_interrupt(struct mbus_device *mbdev, int num)
+2 -1
drivers/misc/mic/scif/scif_fence.c
··· 27 27 void scif_recv_mark(struct scif_dev *scifdev, struct scifmsg *msg) 28 28 { 29 29 struct scif_endpt *ep = (struct scif_endpt *)msg->payload[0]; 30 - int mark, err; 30 + int mark = 0; 31 + int err; 31 32 32 33 err = _scif_fence_mark(ep, &mark); 33 34 if (err)
+2 -1
drivers/misc/qcom-coincell.c
··· 94 94 { 95 95 struct device_node *node = pdev->dev.of_node; 96 96 struct qcom_coincell chgr; 97 - u32 rset, vset; 97 + u32 rset = 0; 98 + u32 vset = 0; 98 99 bool enable; 99 100 int rc; 100 101
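Several small hunks above (of_memory.c's `tim_compat`, scif_fence.c's `mark`, qcom-coincell.c's `rset`/`vset`) share one pattern: a local is only assigned on a helper's success path, yet may be read after the call, so it is now zero-initialized. A sketch of the hazard and the fix (names invented for illustration):

```c
#include <assert.h>
#include <errno.h>

/* Illustrative helper: writes *out only on success, like
 * _scif_fence_mark() or of_property_read_u32() in the hunks above. */
static int read_setting(int fail, unsigned int *out)
{
    if (fail)
        return -EINVAL;   /* *out deliberately left untouched */
    *out = 42;
    return 0;
}

static unsigned int setting_or_zero(int fail)
{
    unsigned int vset = 0;   /* the fix: defined value on all paths */

    (void)read_setting(fail, &vset);
    return vset;
}
```

Without the `= 0`, the failure path reads an indeterminate value (undefined behavior, and a -Wmaybe-uninitialized warning); with it, downstream use is well defined even when the helper's return code is only logged.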
+2 -2
drivers/misc/sram.c
··· 364 364 sram->virt_base = devm_ioremap(sram->dev, res->start, size); 365 365 else 366 366 sram->virt_base = devm_ioremap_wc(sram->dev, res->start, size); 367 - if (IS_ERR(sram->virt_base)) 368 - return PTR_ERR(sram->virt_base); 367 + if (!sram->virt_base) 368 + return -ENOMEM; 369 369 370 370 sram->pool = devm_gen_pool_create(sram->dev, ilog2(SRAM_GRANULARITY), 371 371 NUMA_NO_NODE, NULL);
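The sram.c fix is a one-liner but it encodes a real kernel convention: some allocators report failure with NULL (devm_ioremap and devm_ioremap_wc do), others encode an errno in the returned pointer (ERR_PTR). Checking `IS_ERR()` on a NULL-returning function, as the old code did, silently misses every failure, because NULL is not in the ERR_PTR range. A userspace re-creation of the err.h helpers makes this concrete:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Userspace re-creation of the kernel's ERR_PTR/IS_ERR helpers from
 * include/linux/err.h: errnos are encoded in the top page of the
 * address space, which no valid pointer occupies. */
#define MAX_ERRNO 4095

static void *ERR_PTR(long error)
{
    return (void *)error;
}

static long PTR_ERR(const void *ptr)
{
    return (long)ptr;
}

static int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

Because `IS_ERR(NULL)` is false, the corrected sram.c hunk must test `!sram->virt_base` and return -ENOMEM itself rather than relying on PTR_ERR().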
-1
drivers/misc/ti-st/st_kim.c
··· 78 78 memcpy(kim_gdata->resp_buffer, 79 79 kim_gdata->rx_skb->data, 80 80 kim_gdata->rx_skb->len); 81 - complete_all(&kim_gdata->kim_rcvd); 82 81 kim_gdata->rx_state = ST_W4_PACKET_TYPE; 83 82 kim_gdata->rx_skb = NULL; 84 83 kim_gdata->rx_count = 0;
+2 -3
drivers/nvmem/Kconfig
··· 1 1 menuconfig NVMEM 2 2 tristate "NVMEM Support" 3 - select REGMAP 4 3 help 5 4 Support for NVMEM(Non Volatile Memory) devices like EEPROM, EFUSES... 6 5 ··· 27 28 config NVMEM_LPC18XX_EEPROM 28 29 tristate "NXP LPC18XX EEPROM Memory Support" 29 30 depends on ARCH_LPC18XX || COMPILE_TEST 31 + depends on HAS_IOMEM 30 32 help 31 33 Say Y here to include support for NXP LPC18xx EEPROM memory found in 32 34 NXP LPC185x/3x and LPC435x/3x/2x/1x devices. ··· 49 49 config MTK_EFUSE 50 50 tristate "Mediatek SoCs EFUSE support" 51 51 depends on ARCH_MEDIATEK || COMPILE_TEST 52 + depends on HAS_IOMEM 52 53 select REGMAP_MMIO 53 54 help 54 55 This is a driver to access hardware related data like sensor ··· 62 61 tristate "QCOM QFPROM Support" 63 62 depends on ARCH_QCOM || COMPILE_TEST 64 63 depends on HAS_IOMEM 65 - select REGMAP_MMIO 66 64 help 67 65 Say y here to enable QFPROM support. The QFPROM provides access 68 66 functions for QFPROM data to rest of the drivers via nvmem interface. ··· 83 83 config NVMEM_SUNXI_SID 84 84 tristate "Allwinner SoCs SID support" 85 85 depends on ARCH_SUNXI 86 - select REGMAP_MMIO 87 86 help 88 87 This is a driver for the 'security ID' available on various Allwinner 89 88 devices.
+40 -27
drivers/nvmem/core.c
··· 23 23 #include <linux/nvmem-consumer.h> 24 24 #include <linux/nvmem-provider.h> 25 25 #include <linux/of.h> 26 - #include <linux/regmap.h> 27 26 #include <linux/slab.h> 28 27 29 28 struct nvmem_device { 30 29 const char *name; 31 - struct regmap *regmap; 32 30 struct module *owner; 33 31 struct device dev; 34 32 int stride; ··· 39 41 int flags; 40 42 struct bin_attribute eeprom; 41 43 struct device *base_dev; 44 + nvmem_reg_read_t reg_read; 45 + nvmem_reg_write_t reg_write; 46 + void *priv; 42 47 }; 43 48 44 49 #define FLAG_COMPAT BIT(0) ··· 67 66 #endif 68 67 69 68 #define to_nvmem_device(d) container_of(d, struct nvmem_device, dev) 69 + static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset, 70 + void *val, size_t bytes) 71 + { 72 + if (nvmem->reg_read) 73 + return nvmem->reg_read(nvmem->priv, offset, val, bytes); 74 + 75 + return -EINVAL; 76 + } 77 + 78 + static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset, 79 + void *val, size_t bytes) 80 + { 81 + if (nvmem->reg_write) 82 + return nvmem->reg_write(nvmem->priv, offset, val, bytes); 83 + 84 + return -EINVAL; 85 + } 70 86 71 87 static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj, 72 88 struct bin_attribute *attr, ··· 111 93 112 94 count = round_down(count, nvmem->word_size); 113 95 114 - rc = regmap_raw_read(nvmem->regmap, pos, buf, count); 96 + rc = nvmem_reg_read(nvmem, pos, buf, count); 115 97 116 98 if (IS_ERR_VALUE(rc)) 117 99 return rc; ··· 145 127 146 128 count = round_down(count, nvmem->word_size); 147 129 148 - rc = regmap_raw_write(nvmem->regmap, pos, buf, count); 130 + rc = nvmem_reg_write(nvmem, pos, buf, count); 149 131 150 132 if (IS_ERR_VALUE(rc)) 151 133 return rc; ··· 439 421 { 440 422 struct nvmem_device *nvmem; 441 423 struct device_node *np; 442 - struct regmap *rm; 443 424 int rval; 444 425 445 426 if (!config->dev) 446 427 return ERR_PTR(-EINVAL); 447 - 448 - rm = dev_get_regmap(config->dev, NULL); 449 - if (!rm) { 450 - 
dev_err(config->dev, "Regmap not found\n"); 451 - return ERR_PTR(-EINVAL); 452 - } 453 428 454 429 nvmem = kzalloc(sizeof(*nvmem), GFP_KERNEL); 455 430 if (!nvmem) ··· 455 444 } 456 445 457 446 nvmem->id = rval; 458 - nvmem->regmap = rm; 459 447 nvmem->owner = config->owner; 460 - nvmem->stride = regmap_get_reg_stride(rm); 461 - nvmem->word_size = regmap_get_val_bytes(rm); 462 - nvmem->size = regmap_get_max_register(rm) + nvmem->stride; 448 + nvmem->stride = config->stride; 449 + nvmem->word_size = config->word_size; 450 + nvmem->size = config->size; 463 451 nvmem->dev.type = &nvmem_provider_type; 464 452 nvmem->dev.bus = &nvmem_bus_type; 465 453 nvmem->dev.parent = config->dev; 454 + nvmem->priv = config->priv; 455 + nvmem->reg_read = config->reg_read; 456 + nvmem->reg_write = config->reg_write; 466 457 np = config->dev->of_node; 467 458 nvmem->dev.of_node = np; 468 459 dev_set_name(&nvmem->dev, "%s%d", ··· 961 948 { 962 949 int rc; 963 950 964 - rc = regmap_raw_read(nvmem->regmap, cell->offset, buf, cell->bytes); 951 + rc = nvmem_reg_read(nvmem, cell->offset, buf, cell->bytes); 965 952 966 953 if (IS_ERR_VALUE(rc)) 967 954 return rc; ··· 990 977 u8 *buf; 991 978 int rc; 992 979 993 - if (!nvmem || !nvmem->regmap) 980 + if (!nvmem) 994 981 return ERR_PTR(-EINVAL); 995 982 996 983 buf = kzalloc(cell->bytes, GFP_KERNEL); ··· 1027 1014 *b <<= bit_offset; 1028 1015 1029 1016 /* setup the first byte with lsb bits from nvmem */ 1030 - rc = regmap_raw_read(nvmem->regmap, cell->offset, &v, 1); 1017 + rc = nvmem_reg_read(nvmem, cell->offset, &v, 1); 1031 1018 *b++ |= GENMASK(bit_offset - 1, 0) & v; 1032 1019 1033 1020 /* setup rest of the byte if any */ ··· 1044 1031 /* if it's not end on byte boundary */ 1045 1032 if ((nbits + bit_offset) % BITS_PER_BYTE) { 1046 1033 /* setup the last byte with msb bits from nvmem */ 1047 - rc = regmap_raw_read(nvmem->regmap, 1034 + rc = nvmem_reg_read(nvmem, 1048 1035 cell->offset + cell->bytes - 1, &v, 1); 1049 1036 *p |= GENMASK(7, 
(nbits + bit_offset) % BITS_PER_BYTE) & v; 1050 1037 ··· 1067 1054 struct nvmem_device *nvmem = cell->nvmem; 1068 1055 int rc; 1069 1056 1070 - if (!nvmem || !nvmem->regmap || nvmem->read_only || 1057 + if (!nvmem || nvmem->read_only || 1071 1058 (cell->bit_offset == 0 && len != cell->bytes)) 1072 1059 return -EINVAL; 1073 1060 ··· 1077 1064 return PTR_ERR(buf); 1078 1065 } 1079 1066 1080 - rc = regmap_raw_write(nvmem->regmap, cell->offset, buf, cell->bytes); 1067 + rc = nvmem_reg_write(nvmem, cell->offset, buf, cell->bytes); 1081 1068 1082 1069 /* free the tmp buffer */ 1083 1070 if (cell->bit_offset || cell->nbits) ··· 1107 1094 int rc; 1108 1095 ssize_t len; 1109 1096 1110 - if (!nvmem || !nvmem->regmap) 1097 + if (!nvmem) 1111 1098 return -EINVAL; 1112 1099 1113 1100 rc = nvmem_cell_info_to_nvmem_cell(nvmem, info, &cell); ··· 1137 1124 struct nvmem_cell cell; 1138 1125 int rc; 1139 1126 1140 - if (!nvmem || !nvmem->regmap) 1127 + if (!nvmem) 1141 1128 return -EINVAL; 1142 1129 1143 1130 rc = nvmem_cell_info_to_nvmem_cell(nvmem, info, &cell); ··· 1165 1152 { 1166 1153 int rc; 1167 1154 1168 - if (!nvmem || !nvmem->regmap) 1155 + if (!nvmem) 1169 1156 return -EINVAL; 1170 1157 1171 - rc = regmap_raw_read(nvmem->regmap, offset, buf, bytes); 1158 + rc = nvmem_reg_read(nvmem, offset, buf, bytes); 1172 1159 1173 1160 if (IS_ERR_VALUE(rc)) 1174 1161 return rc; ··· 1193 1180 { 1194 1181 int rc; 1195 1182 1196 - if (!nvmem || !nvmem->regmap) 1183 + if (!nvmem) 1197 1184 return -EINVAL; 1198 1185 1199 - rc = regmap_raw_write(nvmem->regmap, offset, buf, bytes); 1186 + rc = nvmem_reg_write(nvmem, offset, buf, bytes); 1200 1187 1201 1188 if (IS_ERR_VALUE(rc)) 1202 1189 return rc;
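The core.c hunk above is the heart of the series: `struct nvmem_device` drops its `regmap` pointer and instead carries `reg_read`/`reg_write` callbacks plus a `priv` cookie taken from `nvmem_config`, with `nvmem_reg_read()`/`nvmem_reg_write()` falling back to `-EINVAL` when a callback is absent. A minimal userspace sketch of that dispatch shape (all names here are illustrative stand-ins, not the kernel API):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical mirror of the new provider interface: the core no longer
 * talks to a regmap, it calls whatever callbacks the provider registered. */
typedef int (*reg_read_t)(void *priv, unsigned int offset, void *val, size_t bytes);
typedef int (*reg_write_t)(void *priv, unsigned int offset, void *val, size_t bytes);

struct fake_nvmem {
	reg_read_t reg_read;
	reg_write_t reg_write;
	void *priv;
};

/* Core-side helpers: return -EINVAL when a callback is missing, the same
 * shape as nvmem_reg_read()/nvmem_reg_write() in the diff above. */
static int fake_reg_read(struct fake_nvmem *n, unsigned int off, void *val, size_t bytes)
{
	if (n->reg_read)
		return n->reg_read(n->priv, off, val, bytes);
	return -EINVAL;
}

static int fake_reg_write(struct fake_nvmem *n, unsigned int off, void *val, size_t bytes)
{
	if (n->reg_write)
		return n->reg_write(n->priv, off, val, bytes);
	return -EINVAL;
}

/* A toy read-only provider backed by a plain memory buffer. */
static int mem_read(void *priv, unsigned int off, void *val, size_t bytes)
{
	memcpy(val, (char *)priv + off, bytes);
	return 0;
}
```

The design win is visible in every driver below: OTP and fuse blocks with strobe sequences or non-linear register layouts never fit the regmap model well, so letting providers implement raw byte access directly removes a lot of `regmap_bus` boilerplate.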
+11 -44
drivers/nvmem/imx-ocotp.c
··· 22 22 #include <linux/of.h> 23 23 #include <linux/of_device.h> 24 24 #include <linux/platform_device.h> 25 - #include <linux/regmap.h> 26 25 #include <linux/slab.h> 27 26 28 27 struct ocotp_priv { ··· 30 31 unsigned int nregs; 31 32 }; 32 33 33 - static int imx_ocotp_read(void *context, const void *reg, size_t reg_size, 34 - void *val, size_t val_size) 34 + static int imx_ocotp_read(void *context, unsigned int offset, 35 + void *val, size_t bytes) 35 36 { 36 37 struct ocotp_priv *priv = context; 37 - unsigned int offset = *(u32 *)reg; 38 38 unsigned int count; 39 + u32 *buf = val; 39 40 int i; 40 41 u32 index; 41 42 42 43 index = offset >> 2; 43 - count = val_size >> 2; 44 + count = bytes >> 2; 44 45 45 46 if (count > (priv->nregs - index)) 46 47 count = priv->nregs - index; 47 48 48 - for (i = index; i < (index + count); i++) { 49 - *(u32 *)val = readl(priv->base + 0x400 + i * 0x10); 50 - val += 4; 51 - } 49 + for (i = index; i < (index + count); i++) 50 + *buf++ = readl(priv->base + 0x400 + i * 0x10); 52 51 53 52 return 0; 54 53 } 55 - 56 - static int imx_ocotp_write(void *context, const void *data, size_t count) 57 - { 58 - /* Not implemented */ 59 - return 0; 60 - } 61 - 62 - static struct regmap_bus imx_ocotp_bus = { 63 - .read = imx_ocotp_read, 64 - .write = imx_ocotp_write, 65 - .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, 66 - .val_format_endian_default = REGMAP_ENDIAN_NATIVE, 67 - }; 68 - 69 - static bool imx_ocotp_writeable_reg(struct device *dev, unsigned int reg) 70 - { 71 - return false; 72 - } 73 - 74 - static struct regmap_config imx_ocotp_regmap_config = { 75 - .reg_bits = 32, 76 - .val_bits = 32, 77 - .reg_stride = 4, 78 - .writeable_reg = imx_ocotp_writeable_reg, 79 - .name = "imx-ocotp", 80 - }; 81 54 82 55 static struct nvmem_config imx_ocotp_nvmem_config = { 83 56 .name = "imx-ocotp", 84 57 .read_only = true, 58 + .word_size = 4, 59 + .stride = 4, 85 60 .owner = THIS_MODULE, 61 + .reg_read = imx_ocotp_read, 86 62 }; 87 63 88 64 static 
const struct of_device_id imx_ocotp_dt_ids[] = { ··· 73 99 const struct of_device_id *of_id; 74 100 struct device *dev = &pdev->dev; 75 101 struct resource *res; 76 - struct regmap *regmap; 77 102 struct ocotp_priv *priv; 78 103 struct nvmem_device *nvmem; 79 104 ··· 87 114 88 115 of_id = of_match_device(imx_ocotp_dt_ids, dev); 89 116 priv->nregs = (unsigned int)of_id->data; 90 - imx_ocotp_regmap_config.max_register = 4 * priv->nregs - 4; 91 - 92 - regmap = devm_regmap_init(dev, &imx_ocotp_bus, priv, 93 - &imx_ocotp_regmap_config); 94 - if (IS_ERR(regmap)) { 95 - dev_err(dev, "regmap init failed\n"); 96 - return PTR_ERR(regmap); 97 - } 117 + imx_ocotp_nvmem_config.size = 4 * priv->nregs; 98 118 imx_ocotp_nvmem_config.dev = dev; 119 + imx_ocotp_nvmem_config.priv = priv; 99 120 nvmem = nvmem_register(&imx_ocotp_nvmem_config); 100 121 if (IS_ERR(nvmem)) 101 122 return PTR_ERR(nvmem);
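With the regmap gone, `imx_ocotp_read()` above does its own bounds math: the byte offset and length are converted to 32-bit word indices and the word count is clamped to the fuse bank size. A hedged userspace sketch of that arithmetic (the explicit `index >= nregs` guard is an addition for the standalone example; in the driver the nvmem core bounds-checks the request first):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper mirroring the index/count math in imx_ocotp_read():
 * a byte offset/length becomes 32-bit word indices, and the word count is
 * clamped so a read never runs past the last fuse register. */
static size_t ocotp_clamp_words(unsigned int offset, size_t bytes, unsigned int nregs)
{
	unsigned int index = offset >> 2;	/* byte offset -> word index */
	size_t count = bytes >> 2;		/* byte length -> word count */

	if (index >= nregs)
		return 0;
	if (count > nregs - index)
		count = nregs - index;
	return count;
}
```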
+26 -68
drivers/nvmem/lpc18xx_eeprom.c
··· 16 16 #include <linux/module.h> 17 17 #include <linux/nvmem-provider.h> 18 18 #include <linux/platform_device.h> 19 - #include <linux/regmap.h> 20 19 #include <linux/reset.h> 21 20 22 21 /* Registers */ ··· 50 51 struct nvmem_device *nvmem; 51 52 unsigned reg_bytes; 52 53 unsigned val_bytes; 53 - }; 54 - 55 - static struct regmap_config lpc18xx_regmap_config = { 56 - .reg_bits = 32, 57 - .reg_stride = 4, 58 - .val_bits = 32, 54 + int size; 59 55 }; 60 56 61 57 static inline void lpc18xx_eeprom_writel(struct lpc18xx_eeprom_dev *eeprom, ··· 89 95 return -ETIMEDOUT; 90 96 } 91 97 92 - static int lpc18xx_eeprom_gather_write(void *context, const void *reg, 93 - size_t reg_size, const void *val, 94 - size_t val_size) 98 + static int lpc18xx_eeprom_gather_write(void *context, unsigned int reg, 99 + void *val, size_t bytes) 95 100 { 96 101 struct lpc18xx_eeprom_dev *eeprom = context; 97 - unsigned int offset = *(u32 *)reg; 102 + unsigned int offset = reg; 98 103 int ret; 99 104 100 - if (offset % lpc18xx_regmap_config.reg_stride) 105 + /* 106 + * The last page contains the EEPROM initialization data and is not 107 + * writable. 
108 + */ 109 + if ((reg > eeprom->size - LPC18XX_EEPROM_PAGE_SIZE) || 110 + (reg + bytes > eeprom->size - LPC18XX_EEPROM_PAGE_SIZE)) 101 111 return -EINVAL; 112 + 102 113 103 114 lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN, 104 115 LPC18XX_EEPROM_PWRDWN_NO); ··· 111 112 /* Wait 100 us while the EEPROM wakes up */ 112 113 usleep_range(100, 200); 113 114 114 - while (val_size) { 115 + while (bytes) { 115 116 writel(*(u32 *)val, eeprom->mem_base + offset); 116 117 ret = lpc18xx_eeprom_busywait_until_prog(eeprom); 117 118 if (ret < 0) 118 119 return ret; 119 120 120 - val_size -= eeprom->val_bytes; 121 + bytes -= eeprom->val_bytes; 121 122 val += eeprom->val_bytes; 122 123 offset += eeprom->val_bytes; 123 124 } ··· 128 129 return 0; 129 130 } 130 131 131 - static int lpc18xx_eeprom_write(void *context, const void *data, size_t count) 132 + static int lpc18xx_eeprom_read(void *context, unsigned int offset, 133 + void *val, size_t bytes) 132 134 { 133 135 struct lpc18xx_eeprom_dev *eeprom = context; 134 - unsigned int offset = eeprom->reg_bytes; 135 - 136 - if (count <= offset) 137 - return -EINVAL; 138 - 139 - return lpc18xx_eeprom_gather_write(context, data, eeprom->reg_bytes, 140 - data + offset, count - offset); 141 - } 142 - 143 - static int lpc18xx_eeprom_read(void *context, const void *reg, size_t reg_size, 144 - void *val, size_t val_size) 145 - { 146 - struct lpc18xx_eeprom_dev *eeprom = context; 147 - unsigned int offset = *(u32 *)reg; 148 136 149 137 lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN, 150 138 LPC18XX_EEPROM_PWRDWN_NO); ··· 139 153 /* Wait 100 us while the EEPROM wakes up */ 140 154 usleep_range(100, 200); 141 155 142 - while (val_size) { 156 + while (bytes) { 143 157 *(u32 *)val = readl(eeprom->mem_base + offset); 144 - val_size -= eeprom->val_bytes; 158 + bytes -= eeprom->val_bytes; 145 159 val += eeprom->val_bytes; 146 160 offset += eeprom->val_bytes; 147 161 } ··· 152 166 return 0; 153 167 } 154 168 155 - static struct regmap_bus 
lpc18xx_eeprom_bus = { 156 - .write = lpc18xx_eeprom_write, 157 - .gather_write = lpc18xx_eeprom_gather_write, 158 - .read = lpc18xx_eeprom_read, 159 - .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, 160 - .val_format_endian_default = REGMAP_ENDIAN_NATIVE, 161 - }; 162 - 163 - static bool lpc18xx_eeprom_writeable_reg(struct device *dev, unsigned int reg) 164 - { 165 - /* 166 - * The last page contains the EEPROM initialization data and is not 167 - * writable. 168 - */ 169 - return reg <= lpc18xx_regmap_config.max_register - 170 - LPC18XX_EEPROM_PAGE_SIZE; 171 - } 172 - 173 - static bool lpc18xx_eeprom_readable_reg(struct device *dev, unsigned int reg) 174 - { 175 - return reg <= lpc18xx_regmap_config.max_register; 176 - } 177 169 178 170 static struct nvmem_config lpc18xx_nvmem_config = { 179 171 .name = "lpc18xx-eeprom", 172 + .stride = 4, 173 + .word_size = 4, 174 + .reg_read = lpc18xx_eeprom_read, 175 + .reg_write = lpc18xx_eeprom_gather_write, 180 176 .owner = THIS_MODULE, 181 177 }; 182 178 ··· 168 200 struct device *dev = &pdev->dev; 169 201 struct reset_control *rst; 170 202 unsigned long clk_rate; 171 - struct regmap *regmap; 172 203 struct resource *res; 173 204 int ret; 174 205 ··· 210 243 goto err_clk; 211 244 } 212 245 213 - eeprom->val_bytes = lpc18xx_regmap_config.val_bits / BITS_PER_BYTE; 214 - eeprom->reg_bytes = lpc18xx_regmap_config.reg_bits / BITS_PER_BYTE; 246 + eeprom->val_bytes = 4; 247 + eeprom->reg_bytes = 4; 215 248 216 249 /* 217 250 * Clock rate is generated by dividing the system bus clock by the ··· 231 264 lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN, 232 265 LPC18XX_EEPROM_PWRDWN_YES); 233 266 234 - lpc18xx_regmap_config.max_register = resource_size(res) - 1; 235 - lpc18xx_regmap_config.writeable_reg = lpc18xx_eeprom_writeable_reg; 236 - lpc18xx_regmap_config.readable_reg = lpc18xx_eeprom_readable_reg; 237 - 238 - regmap = devm_regmap_init(dev, &lpc18xx_eeprom_bus, eeprom, 239 - &lpc18xx_regmap_config); 240 - if 
(IS_ERR(regmap)) { 241 - dev_err(dev, "regmap init failed: %ld\n", PTR_ERR(regmap)); 242 - ret = PTR_ERR(regmap); 243 - goto err_clk; 244 - } 245 - 267 + eeprom->size = resource_size(res); 268 + lpc18xx_nvmem_config.size = resource_size(res); 246 269 lpc18xx_nvmem_config.dev = dev; 270 + lpc18xx_nvmem_config.priv = eeprom; 247 271 248 272 eeprom->nvmem = nvmem_register(&lpc18xx_nvmem_config); 249 273 if (IS_ERR(eeprom->nvmem)) {
+37 -19
drivers/nvmem/qfprom.c
··· 13 13 14 14 #include <linux/device.h> 15 15 #include <linux/module.h> 16 + #include <linux/io.h> 16 17 #include <linux/nvmem-provider.h> 17 18 #include <linux/platform_device.h> 18 - #include <linux/regmap.h> 19 19 20 - static struct regmap_config qfprom_regmap_config = { 21 - .reg_bits = 32, 22 - .val_bits = 8, 23 - .reg_stride = 1, 24 - .val_format_endian = REGMAP_ENDIAN_LITTLE, 25 - }; 20 + static int qfprom_reg_read(void *context, 21 + unsigned int reg, void *_val, size_t bytes) 22 + { 23 + void __iomem *base = context; 24 + u32 *val = _val; 25 + int i = 0, words = bytes / 4; 26 26 27 - static struct nvmem_config econfig = { 28 - .name = "qfprom", 29 - .owner = THIS_MODULE, 30 - }; 27 + while (words--) 28 + *val++ = readl(base + reg + (i++ * 4)); 29 + 30 + return 0; 31 + } 32 + 33 + static int qfprom_reg_write(void *context, 34 + unsigned int reg, void *_val, size_t bytes) 35 + { 36 + void __iomem *base = context; 37 + u32 *val = _val; 38 + int i = 0, words = bytes / 4; 39 + 40 + while (words--) 41 + writel(*val++, base + reg + (i++ * 4)); 42 + 43 + return 0; 44 + } 31 45 32 46 static int qfprom_remove(struct platform_device *pdev) 33 47 { ··· 50 36 return nvmem_unregister(nvmem); 51 37 } 52 38 39 + static struct nvmem_config econfig = { 40 + .name = "qfprom", 41 + .owner = THIS_MODULE, 42 + .stride = 4, 43 + .word_size = 1, 44 + .reg_read = qfprom_reg_read, 45 + .reg_write = qfprom_reg_write, 46 + }; 47 + 53 48 static int qfprom_probe(struct platform_device *pdev) 54 49 { 55 50 struct device *dev = &pdev->dev; 56 51 struct resource *res; 57 52 struct nvmem_device *nvmem; 58 - struct regmap *regmap; 59 53 void __iomem *base; 60 54 61 55 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ··· 71 49 if (IS_ERR(base)) 72 50 return PTR_ERR(base); 73 51 74 - qfprom_regmap_config.max_register = resource_size(res) - 1; 75 - 76 - regmap = devm_regmap_init_mmio(dev, base, &qfprom_regmap_config); 77 - if (IS_ERR(regmap)) { 78 - dev_err(dev, "regmap init failed\n"); 
79 - return PTR_ERR(regmap); 80 - } 52 + econfig.size = resource_size(res); 81 53 econfig.dev = dev; 54 + econfig.priv = base; 55 + 82 56 nvmem = nvmem_register(&econfig); 83 57 if (IS_ERR(nvmem)) 84 58 return PTR_ERR(nvmem);
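The qfprom conversion above replaces the MMIO regmap with plain `readl()`/`writel()` loops that move the fuse region one 32-bit word at a time. The same loop shape over a plain array (`readl()` swapped for an array load, purely illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for qfprom_reg_read(): copy `bytes` out of a
 * word-addressed backing store, 32 bits at a time. `base` plays the role
 * of the ioremapped region. */
static int fake_qfprom_read(const uint32_t *base, unsigned int reg, void *_val, size_t bytes)
{
	uint32_t *val = _val;
	size_t words = bytes / 4;
	unsigned int i = 0;

	while (words--)
		*val++ = base[(reg / 4) + i++];
	return 0;
}
```

Note the `nvmem_config` pairing above: `stride = 4` keeps accesses word-aligned for these loops while `word_size = 1` still lets consumers describe byte-sized cells.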
+9 -40
drivers/nvmem/rockchip-efuse.c
··· 23 23 #include <linux/slab.h> 24 24 #include <linux/of.h> 25 25 #include <linux/platform_device.h> 26 - #include <linux/regmap.h> 27 26 28 27 #define EFUSE_A_SHIFT 6 29 28 #define EFUSE_A_MASK 0x3ff ··· 40 41 struct clk *clk; 41 42 }; 42 43 43 - static int rockchip_efuse_write(void *context, const void *data, size_t count) 44 + static int rockchip_efuse_read(void *context, unsigned int offset, 45 + void *val, size_t bytes) 44 46 { 45 - /* Nothing TBD, Read-Only */ 46 - return 0; 47 - } 48 - 49 - static int rockchip_efuse_read(void *context, 50 - const void *reg, size_t reg_size, 51 - void *val, size_t val_size) 52 - { 53 - unsigned int offset = *(u32 *)reg; 54 47 struct rockchip_efuse_chip *efuse = context; 55 48 u8 *buf = val; 56 49 int ret; ··· 55 64 56 65 writel(EFUSE_LOAD | EFUSE_PGENB, efuse->base + REG_EFUSE_CTRL); 57 66 udelay(1); 58 - while (val_size) { 67 + while (bytes--) { 59 68 writel(readl(efuse->base + REG_EFUSE_CTRL) & 60 69 (~(EFUSE_A_MASK << EFUSE_A_SHIFT)), 61 70 efuse->base + REG_EFUSE_CTRL); 62 71 writel(readl(efuse->base + REG_EFUSE_CTRL) | 63 - ((offset & EFUSE_A_MASK) << EFUSE_A_SHIFT), 72 + ((offset++ & EFUSE_A_MASK) << EFUSE_A_SHIFT), 64 73 efuse->base + REG_EFUSE_CTRL); 65 74 udelay(1); 66 75 writel(readl(efuse->base + REG_EFUSE_CTRL) | ··· 70 79 writel(readl(efuse->base + REG_EFUSE_CTRL) & 71 80 (~EFUSE_STROBE), efuse->base + REG_EFUSE_CTRL); 72 81 udelay(1); 73 - 74 - val_size -= 1; 75 - offset += 1; 76 82 } 77 83 78 84 /* Switch to standby mode */ ··· 80 92 return 0; 81 93 } 82 94 83 - static struct regmap_bus rockchip_efuse_bus = { 84 - .read = rockchip_efuse_read, 85 - .write = rockchip_efuse_write, 86 - .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, 87 - .val_format_endian_default = REGMAP_ENDIAN_NATIVE, 88 - }; 89 - 90 - static struct regmap_config rockchip_efuse_regmap_config = { 91 - .reg_bits = 32, 92 - .reg_stride = 1, 93 - .val_bits = 8, 94 - }; 95 - 96 95 static struct nvmem_config econfig = { 97 96 .name = 
"rockchip-efuse", 98 97 .owner = THIS_MODULE, 98 + .stride = 1, 99 + .word_size = 1, 99 100 .read_only = true, 100 101 }; 101 102 ··· 98 121 { 99 122 struct resource *res; 100 123 struct nvmem_device *nvmem; 101 - struct regmap *regmap; 102 124 struct rockchip_efuse_chip *efuse; 103 125 104 126 efuse = devm_kzalloc(&pdev->dev, sizeof(struct rockchip_efuse_chip), ··· 115 139 return PTR_ERR(efuse->clk); 116 140 117 141 efuse->dev = &pdev->dev; 118 - 119 - rockchip_efuse_regmap_config.max_register = resource_size(res) - 1; 120 - 121 - regmap = devm_regmap_init(efuse->dev, &rockchip_efuse_bus, 122 - efuse, &rockchip_efuse_regmap_config); 123 - if (IS_ERR(regmap)) { 124 - dev_err(efuse->dev, "regmap init failed\n"); 125 - return PTR_ERR(regmap); 126 - } 127 - 142 + econfig.size = resource_size(res); 143 + econfig.reg_read = rockchip_efuse_read; 144 + econfig.priv = efuse; 128 145 econfig.dev = efuse->dev; 129 146 nvmem = nvmem_register(&econfig); 130 147 if (IS_ERR(nvmem))
+9 -45
drivers/nvmem/sunxi_sid.c
··· 21 21 #include <linux/nvmem-provider.h> 22 22 #include <linux/of.h> 23 23 #include <linux/platform_device.h> 24 - #include <linux/regmap.h> 25 24 #include <linux/slab.h> 26 25 #include <linux/random.h> 27 26 28 27 static struct nvmem_config econfig = { 29 28 .name = "sunxi-sid", 30 29 .read_only = true, 30 + .stride = 4, 31 + .word_size = 1, 31 32 .owner = THIS_MODULE, 32 33 }; 33 34 ··· 52 51 return sid_key; /* Only return the last byte */ 53 52 } 54 53 55 - static int sunxi_sid_read(void *context, 56 - const void *reg, size_t reg_size, 57 - void *val, size_t val_size) 54 + static int sunxi_sid_read(void *context, unsigned int offset, 55 + void *val, size_t bytes) 58 56 { 59 57 struct sunxi_sid *sid = context; 60 - unsigned int offset = *(u32 *)reg; 61 58 u8 *buf = val; 62 59 63 - while (val_size) { 64 - *buf++ = sunxi_sid_read_byte(sid, offset); 65 - val_size--; 66 - offset++; 67 - } 60 + while (bytes--) 61 + *buf++ = sunxi_sid_read_byte(sid, offset++); 68 62 69 63 return 0; 70 64 } 71 - 72 - static int sunxi_sid_write(void *context, const void *data, size_t count) 73 - { 74 - /* Unimplemented, dummy to keep regmap core happy */ 75 - return 0; 76 - } 77 - 78 - static struct regmap_bus sunxi_sid_bus = { 79 - .read = sunxi_sid_read, 80 - .write = sunxi_sid_write, 81 - .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, 82 - .val_format_endian_default = REGMAP_ENDIAN_NATIVE, 83 - }; 84 - 85 - static bool sunxi_sid_writeable_reg(struct device *dev, unsigned int reg) 86 - { 87 - return false; 88 - } 89 - 90 - static struct regmap_config sunxi_sid_regmap_config = { 91 - .reg_bits = 32, 92 - .val_bits = 8, 93 - .reg_stride = 1, 94 - .writeable_reg = sunxi_sid_writeable_reg, 95 - }; 96 65 97 66 static int sunxi_sid_probe(struct platform_device *pdev) 98 67 { 99 68 struct device *dev = &pdev->dev; 100 69 struct resource *res; 101 70 struct nvmem_device *nvmem; 102 - struct regmap *regmap; 103 71 struct sunxi_sid *sid; 104 72 int ret, i, size; 105 73 char *randomness; 
··· 83 113 return PTR_ERR(sid->base); 84 114 85 115 size = resource_size(res) - 1; 86 - sunxi_sid_regmap_config.max_register = size; 87 - 88 - regmap = devm_regmap_init(dev, &sunxi_sid_bus, sid, 89 - &sunxi_sid_regmap_config); 90 - if (IS_ERR(regmap)) { 91 - dev_err(dev, "regmap init failed\n"); 92 - return PTR_ERR(regmap); 93 - } 94 - 116 + econfig.size = resource_size(res); 95 117 econfig.dev = dev; 118 + econfig.reg_read = sunxi_sid_read; 119 + econfig.priv = sid; 96 120 nvmem = nvmem_register(&econfig); 97 121 if (IS_ERR(nvmem)) 98 122 return PTR_ERR(nvmem);
+10 -34
drivers/nvmem/vf610-ocotp.c
··· 25 25 #include <linux/nvmem-provider.h> 26 26 #include <linux/of.h> 27 27 #include <linux/platform_device.h> 28 - #include <linux/regmap.h> 29 28 #include <linux/slab.h> 30 29 31 30 /* OCOTP Register Offsets */ ··· 151 152 return -EINVAL; 152 153 } 153 154 154 - static int vf610_ocotp_write(void *context, const void *data, size_t count) 155 - { 156 - return 0; 157 - } 158 - 159 - static int vf610_ocotp_read(void *context, 160 - const void *off, size_t reg_size, 161 - void *val, size_t val_size) 155 + static int vf610_ocotp_read(void *context, unsigned int offset, 156 + void *val, size_t bytes) 162 157 { 163 158 struct vf610_ocotp *ocotp = context; 164 159 void __iomem *base = ocotp->base; 165 - unsigned int offset = *(u32 *)off; 166 160 u32 reg, *buf = val; 167 161 int fuse_addr; 168 162 int ret; 169 163 170 - while (val_size > 0) { 164 + while (bytes > 0) { 171 165 fuse_addr = vf610_get_fuse_address(offset); 172 166 if (fuse_addr > 0) { 173 167 writel(ocotp->timing, base + OCOTP_TIMING); ··· 197 205 } 198 206 199 207 buf++; 200 - val_size--; 201 - offset += reg_size; 208 + bytes -= 4; 209 + offset += 4; 202 210 } 203 211 204 212 return 0; 205 213 } 206 214 207 - static struct regmap_bus vf610_ocotp_bus = { 208 - .read = vf610_ocotp_read, 209 - .write = vf610_ocotp_write, 210 - .reg_format_endian_default = REGMAP_ENDIAN_NATIVE, 211 - .val_format_endian_default = REGMAP_ENDIAN_NATIVE, 212 - }; 213 - 214 - static struct regmap_config ocotp_regmap_config = { 215 - .reg_bits = 32, 216 - .val_bits = 32, 217 - .reg_stride = 4, 218 - }; 219 - 220 215 static struct nvmem_config ocotp_config = { 221 216 .name = "ocotp", 222 217 .owner = THIS_MODULE, 218 + .stride = 4, 219 + .word_size = 4, 220 + .reg_read = vf610_ocotp_read, 223 221 }; 224 222 225 223 static const struct of_device_id ocotp_of_match[] = { ··· 229 247 { 230 248 struct device *dev = &pdev->dev; 231 249 struct resource *res; 232 - struct regmap *regmap; 233 250 struct vf610_ocotp *ocotp_dev; 234 251 235 252 
ocotp_dev = devm_kzalloc(&pdev->dev, ··· 248 267 return PTR_ERR(ocotp_dev->clk); 249 268 } 250 269 251 - ocotp_regmap_config.max_register = resource_size(res); 252 - regmap = devm_regmap_init(dev, 253 - &vf610_ocotp_bus, ocotp_dev, &ocotp_regmap_config); 254 - if (IS_ERR(regmap)) { 255 - dev_err(dev, "regmap init failed\n"); 256 - return PTR_ERR(regmap); 257 - } 270 + ocotp_config.size = resource_size(res); 271 + ocotp_config.priv = ocotp_dev; 258 272 ocotp_config.dev = dev; 259 273 260 274 ocotp_dev->nvmem = nvmem_register(&ocotp_config);
+1 -1
drivers/parport/procfs.c
··· 617 617 } 618 618 #endif 619 619 620 - module_init(parport_default_proc_register) 620 + subsys_initcall(parport_default_proc_register) 621 621 module_exit(parport_default_proc_unregister)
+7 -7
drivers/pci/host/pci-hyperv.c
··· 1809 1809 1810 1810 if (hbus->low_mmio_space && hbus->low_mmio_res) { 1811 1811 hbus->low_mmio_res->flags |= IORESOURCE_BUSY; 1812 - release_mem_region(hbus->low_mmio_res->start, 1813 - resource_size(hbus->low_mmio_res)); 1812 + vmbus_free_mmio(hbus->low_mmio_res->start, 1813 + resource_size(hbus->low_mmio_res)); 1814 1814 } 1815 1815 1816 1816 if (hbus->high_mmio_space && hbus->high_mmio_res) { 1817 1817 hbus->high_mmio_res->flags |= IORESOURCE_BUSY; 1818 - release_mem_region(hbus->high_mmio_res->start, 1819 - resource_size(hbus->high_mmio_res)); 1818 + vmbus_free_mmio(hbus->high_mmio_res->start, 1819 + resource_size(hbus->high_mmio_res)); 1820 1820 } 1821 1821 } 1822 1822 ··· 1894 1894 1895 1895 release_low_mmio: 1896 1896 if (hbus->low_mmio_res) { 1897 - release_mem_region(hbus->low_mmio_res->start, 1898 - resource_size(hbus->low_mmio_res)); 1897 + vmbus_free_mmio(hbus->low_mmio_res->start, 1898 + resource_size(hbus->low_mmio_res)); 1899 1899 } 1900 1900 1901 1901 return ret; ··· 1938 1938 1939 1939 static void hv_free_config_window(struct hv_pcibus_device *hbus) 1940 1940 { 1941 - release_mem_region(hbus->mem_config->start, PCI_CONFIG_MMIO_LENGTH); 1941 + vmbus_free_mmio(hbus->mem_config->start, PCI_CONFIG_MMIO_LENGTH); 1942 1942 } 1943 1943 1944 1944 /**
+10 -2
drivers/spmi/spmi.c
··· 25 25 #define CREATE_TRACE_POINTS 26 26 #include <trace/events/spmi.h> 27 27 28 + static bool is_registered; 28 29 static DEFINE_IDA(ctrl_ida); 29 30 30 31 static void spmi_dev_release(struct device *dev) ··· 508 507 int ret; 509 508 510 509 /* Can't register until after driver model init */ 511 - if (WARN_ON(!spmi_bus_type.p)) 510 + if (WARN_ON(!is_registered)) 512 511 return -EAGAIN; 513 512 514 513 ret = device_add(&ctrl->dev); ··· 577 576 578 577 static int __init spmi_init(void) 579 578 { 580 - return bus_register(&spmi_bus_type); 579 + int ret; 580 + 581 + ret = bus_register(&spmi_bus_type); 582 + if (ret) 583 + return ret; 584 + 585 + is_registered = true; 586 + return 0; 581 587 } 582 588 postcore_initcall(spmi_init); 583 589
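The spmi.c change above stops peeking at `spmi_bus_type.p`, a driver-core private field, and instead tracks bus readiness with the driver's own `is_registered` flag, set only after `bus_register()` succeeds. The guard in miniature (a sketch, with `bus_register()` simulated):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative model of the spmi.c fix: keep a private flag instead of
 * inspecting driver-core internals; controller registration fails with
 * -EAGAIN until the bus itself has been registered. */
static bool is_registered;

static int fake_bus_register(int simulate_failure)
{
	if (simulate_failure)
		return -EINVAL;		/* bus_register() failed; flag stays false */
	is_registered = true;
	return 0;
}

static int fake_ctrl_add(void)
{
	if (!is_registered)
		return -EAGAIN;		/* too early: driver model not ready */
	return 0;
}
```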
+12 -4
drivers/uio/uio.c
··· 271 271 map_found = 1; 272 272 idev->map_dir = kobject_create_and_add("maps", 273 273 &idev->dev->kobj); 274 - if (!idev->map_dir) 274 + if (!idev->map_dir) { 275 + ret = -ENOMEM; 275 276 goto err_map; 277 + } 276 278 } 277 279 map = kzalloc(sizeof(*map), GFP_KERNEL); 278 - if (!map) 280 + if (!map) { 281 + ret = -ENOMEM; 279 282 goto err_map_kobj; 283 + } 280 284 kobject_init(&map->kobj, &map_attr_type); 281 285 map->mem = mem; 282 286 mem->map = map; ··· 300 296 portio_found = 1; 301 297 idev->portio_dir = kobject_create_and_add("portio", 302 298 &idev->dev->kobj); 303 - if (!idev->portio_dir) 299 + if (!idev->portio_dir) { 300 + ret = -ENOMEM; 304 301 goto err_portio; 302 + } 305 303 } 306 304 portio = kzalloc(sizeof(*portio), GFP_KERNEL); 307 - if (!portio) 305 + if (!portio) { 306 + ret = -ENOMEM; 308 307 goto err_portio_kobj; 308 + } 309 309 kobject_init(&portio->kobj, &portio_attr_type); 310 310 portio->port = port; 311 311 port->portio = portio;
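The uio.c hunk above fixes error paths that jumped to a cleanup label without setting `ret`, so a failed `kobject_create_and_add()` or `kzalloc()` could let the function return the stale (typically 0) value and report success. The bug pattern and fix in miniature, with `malloc()` standing in for the kernel allocators:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Miniature of the uio fix: every failing step must set `ret` before
 * jumping to the unwind label, otherwise the caller sees the stale value
 * and believes setup succeeded. */
static int fake_setup(int fail_first, int fail_second)
{
	void *a, *b;
	int ret = 0;

	a = fail_first ? NULL : malloc(16);
	if (!a) {
		ret = -ENOMEM;		/* the missing assignment the patch adds */
		goto err;
	}
	b = fail_second ? NULL : malloc(16);
	if (!b) {
		ret = -ENOMEM;
		goto err_free_a;
	}
	free(b);
err_free_a:
	free(a);
err:
	return ret;
}
```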
+2 -2
drivers/video/fbdev/hyperv_fb.c
··· 743 743 err3: 744 744 iounmap(fb_virt); 745 745 err2: 746 - release_mem_region(par->mem->start, screen_fb_size); 746 + vmbus_free_mmio(par->mem->start, screen_fb_size); 747 747 par->mem = NULL; 748 748 err1: 749 749 if (!gen2vm) ··· 758 758 struct hvfb_par *par = info->par; 759 759 760 760 iounmap(info->screen_base); 761 - release_mem_region(par->mem->start, screen_fb_size); 761 + vmbus_free_mmio(par->mem->start, screen_fb_size); 762 762 par->mem = NULL; 763 763 } 764 764
+1 -8
drivers/vme/bridges/vme_ca91cx42.c
··· 204 204 /* Need pdev */ 205 205 pdev = to_pci_dev(ca91cx42_bridge->parent); 206 206 207 - INIT_LIST_HEAD(&ca91cx42_bridge->vme_error_handlers); 208 - 209 - mutex_init(&ca91cx42_bridge->irq_mtx); 210 - 211 207 /* Disable interrupts from PCI to VME */ 212 208 iowrite32(0, bridge->base + VINT_EN); 213 209 ··· 1622 1626 retval = -ENOMEM; 1623 1627 goto err_struct; 1624 1628 } 1629 + vme_init_bridge(ca91cx42_bridge); 1625 1630 1626 1631 ca91cx42_device = kzalloc(sizeof(struct ca91cx42_driver), GFP_KERNEL); 1627 1632 ··· 1683 1686 } 1684 1687 1685 1688 /* Add master windows to list */ 1686 - INIT_LIST_HEAD(&ca91cx42_bridge->master_resources); 1687 1689 for (i = 0; i < CA91C142_MAX_MASTER; i++) { 1688 1690 master_image = kmalloc(sizeof(struct vme_master_resource), 1689 1691 GFP_KERNEL); ··· 1709 1713 } 1710 1714 1711 1715 /* Add slave windows to list */ 1712 - INIT_LIST_HEAD(&ca91cx42_bridge->slave_resources); 1713 1716 for (i = 0; i < CA91C142_MAX_SLAVE; i++) { 1714 1717 slave_image = kmalloc(sizeof(struct vme_slave_resource), 1715 1718 GFP_KERNEL); ··· 1736 1741 } 1737 1742 1738 1743 /* Add dma engines to list */ 1739 - INIT_LIST_HEAD(&ca91cx42_bridge->dma_resources); 1740 1744 for (i = 0; i < CA91C142_MAX_DMA; i++) { 1741 1745 dma_ctrlr = kmalloc(sizeof(struct vme_dma_resource), 1742 1746 GFP_KERNEL); ··· 1758 1764 } 1759 1765 1760 1766 /* Add location monitor to list */ 1761 - INIT_LIST_HEAD(&ca91cx42_bridge->lm_resources); 1762 1767 lm = kmalloc(sizeof(struct vme_lm_resource), GFP_KERNEL); 1763 1768 if (lm == NULL) { 1764 1769 dev_err(&pdev->dev, "Failed to allocate memory for "
+1 -8
drivers/vme/bridges/vme_tsi148.c
··· 314 314 315 315 bridge = tsi148_bridge->driver_priv; 316 316 317 - INIT_LIST_HEAD(&tsi148_bridge->vme_error_handlers); 318 - 319 - mutex_init(&tsi148_bridge->irq_mtx); 320 - 321 317 result = request_irq(pdev->irq, 322 318 tsi148_irqhandler, 323 319 IRQF_SHARED, ··· 2297 2301 retval = -ENOMEM; 2298 2302 goto err_struct; 2299 2303 } 2304 + vme_init_bridge(tsi148_bridge); 2300 2305 2301 2306 tsi148_device = kzalloc(sizeof(struct tsi148_driver), GFP_KERNEL); 2302 2307 if (tsi148_device == NULL) { ··· 2384 2387 } 2385 2388 2386 2389 /* Add master windows to list */ 2387 - INIT_LIST_HEAD(&tsi148_bridge->master_resources); 2388 2390 for (i = 0; i < master_num; i++) { 2389 2391 master_image = kmalloc(sizeof(struct vme_master_resource), 2390 2392 GFP_KERNEL); ··· 2413 2417 } 2414 2418 2415 2419 /* Add slave windows to list */ 2416 - INIT_LIST_HEAD(&tsi148_bridge->slave_resources); 2417 2420 for (i = 0; i < TSI148_MAX_SLAVE; i++) { 2418 2421 slave_image = kmalloc(sizeof(struct vme_slave_resource), 2419 2422 GFP_KERNEL); ··· 2437 2442 } 2438 2443 2439 2444 /* Add dma engines to list */ 2440 - INIT_LIST_HEAD(&tsi148_bridge->dma_resources); 2441 2445 for (i = 0; i < TSI148_MAX_DMA; i++) { 2442 2446 dma_ctrlr = kmalloc(sizeof(struct vme_dma_resource), 2443 2447 GFP_KERNEL); ··· 2461 2467 } 2462 2468 2463 2469 /* Add location monitor to list */ 2464 - INIT_LIST_HEAD(&tsi148_bridge->lm_resources); 2465 2470 lm = kmalloc(sizeof(struct vme_lm_resource), GFP_KERNEL); 2466 2471 if (lm == NULL) { 2467 2472 dev_err(&pdev->dev, "Failed to allocate memory for "
+20 -6
drivers/vme/vme.c
··· 782 782 783 783 dma_list = kmalloc(sizeof(struct vme_dma_list), GFP_KERNEL); 784 784 if (dma_list == NULL) { 785 - printk(KERN_ERR "Unable to allocate memory for new dma list\n"); 785 + printk(KERN_ERR "Unable to allocate memory for new DMA list\n"); 786 786 return NULL; 787 787 } 788 788 INIT_LIST_HEAD(&dma_list->entries); ··· 846 846 847 847 pci_attr = kmalloc(sizeof(struct vme_dma_pci), GFP_KERNEL); 848 848 if (pci_attr == NULL) { 849 - printk(KERN_ERR "Unable to allocate memory for pci attributes\n"); 849 + printk(KERN_ERR "Unable to allocate memory for PCI attributes\n"); 850 850 goto err_pci; 851 851 } 852 852 ··· 884 884 885 885 vme_attr = kmalloc(sizeof(struct vme_dma_vme), GFP_KERNEL); 886 886 if (vme_attr == NULL) { 887 - printk(KERN_ERR "Unable to allocate memory for vme attributes\n"); 887 + printk(KERN_ERR "Unable to allocate memory for VME attributes\n"); 888 888 goto err_vme; 889 889 } 890 890 ··· 975 975 } 976 976 977 977 /* 978 - * Empty out all of the entries from the dma list. We need to go to the 979 - * low level driver as dma entries are driver specific. 978 + * Empty out all of the entries from the DMA list. We need to go to the 979 + * low level driver as DMA entries are driver specific. 
980 980 */ 981 981 retval = bridge->dma_list_empty(list); 982 982 if (retval) { ··· 1091 1091 if (call != NULL) 1092 1092 call(level, statid, priv_data); 1093 1093 else 1094 - printk(KERN_WARNING "Spurilous VME interrupt, level:%x, vector:%x\n", 1094 + printk(KERN_WARNING "Spurious VME interrupt, level:%x, vector:%x\n", 1095 1095 level, statid); 1096 1096 } 1097 1097 EXPORT_SYMBOL(vme_irq_handler); ··· 1428 1428 { 1429 1429 kfree(dev_to_vme_dev(dev)); 1430 1430 } 1431 + 1432 + /* Common bridge initialization */ 1433 + struct vme_bridge *vme_init_bridge(struct vme_bridge *bridge) 1434 + { 1435 + INIT_LIST_HEAD(&bridge->vme_error_handlers); 1436 + INIT_LIST_HEAD(&bridge->master_resources); 1437 + INIT_LIST_HEAD(&bridge->slave_resources); 1438 + INIT_LIST_HEAD(&bridge->dma_resources); 1439 + INIT_LIST_HEAD(&bridge->lm_resources); 1440 + mutex_init(&bridge->irq_mtx); 1441 + 1442 + return bridge; 1443 + } 1444 + EXPORT_SYMBOL(vme_init_bridge); 1431 1445 1432 1446 int vme_register_bridge(struct vme_bridge *bridge) 1433 1447 {
+1
drivers/vme/vme_bridge.h
··· 177 177 unsigned long long address, int am); 178 178 void vme_irq_handler(struct vme_bridge *, int, int); 179 179 180 + struct vme_bridge *vme_init_bridge(struct vme_bridge *); 180 181 int vme_register_bridge(struct vme_bridge *); 181 182 void vme_unregister_bridge(struct vme_bridge *); 182 183 struct vme_error_handler *vme_register_error_handler(
+18
drivers/w1/masters/ds2482.c
··· 24 24 #include "../w1_int.h" 25 25 26 26 /** 27 + * Allow the active pullup to be disabled, default is enabled. 28 + * 29 + * Note from the DS2482 datasheet: 30 + * The APU bit controls whether an active pullup (controlled slew-rate 31 + * transistor) or a passive pullup (Rwpu resistor) will be used to drive 32 + * a 1-Wire line from low to high. When APU = 0, active pullup is disabled 33 + * (resistor mode). Active Pullup should always be selected unless there is 34 + * only a single slave on the 1-Wire line. 35 + */ 36 + static int ds2482_active_pullup = 1; 37 + module_param_named(active_pullup, ds2482_active_pullup, int, 0644); 38 + 39 + /** 27 40 * The DS2482 registers - there are 3 registers that are addressed by a read 28 41 * pointer. The read pointer is set by the last command executed. 29 42 * ··· 151 138 */ 152 139 static inline u8 ds2482_calculate_config(u8 conf) 153 140 { 141 + if (ds2482_active_pullup) 142 + conf |= DS2482_REG_CFG_APU; 143 + 154 144 return conf | ((~conf & 0x0f) << 4); 155 145 } 156 146 ··· 562 546 563 547 module_i2c_driver(ds2482_driver); 564 548 549 + MODULE_PARM_DESC(active_pullup, "Active pullup (apply to all buses): " \ 550 + "0-disable, 1-enable (default)"); 565 551 MODULE_AUTHOR("Ben Gardner <bgardner@wabtec.com>"); 566 552 MODULE_DESCRIPTION("DS2482 driver"); 567 553 MODULE_LICENSE("GPL");
+211 -7
drivers/w1/slaves/w1_therm.c
··· 92 92 static ssize_t w1_slave_show(struct device *device, 93 93 struct device_attribute *attr, char *buf); 94 94 95 + static ssize_t w1_slave_store(struct device *device, 96 + struct device_attribute *attr, const char *buf, size_t size); 97 + 95 98 static ssize_t w1_seq_show(struct device *device, 96 99 struct device_attribute *attr, char *buf); 97 100 98 - static DEVICE_ATTR_RO(w1_slave); 101 + static DEVICE_ATTR_RW(w1_slave); 99 102 static DEVICE_ATTR_RO(w1_seq); 100 103 101 104 static struct attribute *w1_therm_attrs[] = { ··· 157 154 u16 reserved; 158 155 struct w1_family *f; 159 156 int (*convert)(u8 rom[9]); 157 + int (*precision)(struct device *device, int val); 158 + int (*eeprom)(struct device *device); 160 159 }; 160 + 161 + /* write configuration to eeprom */ 162 + static inline int w1_therm_eeprom(struct device *device); 163 + 164 + /* Set precision for conversion */ 165 + static inline int w1_DS18B20_precision(struct device *device, int val); 166 + static inline int w1_DS18S20_precision(struct device *device, int val); 161 167 162 168 /* The return value is millidegrees Centigrade. 
*/ 163 169 static inline int w1_DS18B20_convert_temp(u8 rom[9]); ··· 175 163 static struct w1_therm_family_converter w1_therm_families[] = { 176 164 { 177 165 .f = &w1_therm_family_DS18S20, 178 - .convert = w1_DS18S20_convert_temp 166 + .convert = w1_DS18S20_convert_temp, 167 + .precision = w1_DS18S20_precision, 168 + .eeprom = w1_therm_eeprom 179 169 }, 180 170 { 181 171 .f = &w1_therm_family_DS1822, 182 - .convert = w1_DS18B20_convert_temp 172 + .convert = w1_DS18B20_convert_temp, 173 + .precision = w1_DS18S20_precision, 174 + .eeprom = w1_therm_eeprom 183 175 }, 184 176 { 185 177 .f = &w1_therm_family_DS18B20, 186 - .convert = w1_DS18B20_convert_temp 178 + .convert = w1_DS18B20_convert_temp, 179 + .precision = w1_DS18B20_precision, 180 + .eeprom = w1_therm_eeprom 187 181 }, 188 182 { 189 183 .f = &w1_therm_family_DS28EA00, 190 - .convert = w1_DS18B20_convert_temp 184 + .convert = w1_DS18B20_convert_temp, 185 + .precision = w1_DS18S20_precision, 186 + .eeprom = w1_therm_eeprom 191 187 }, 192 188 { 193 189 .f = &w1_therm_family_DS1825, 194 - .convert = w1_DS18B20_convert_temp 190 + .convert = w1_DS18B20_convert_temp, 191 + .precision = w1_DS18S20_precision, 192 + .eeprom = w1_therm_eeprom 195 193 } 196 194 }; 195 + 196 + static inline int w1_therm_eeprom(struct device *device) 197 + { 198 + struct w1_slave *sl = dev_to_w1_slave(device); 199 + struct w1_master *dev = sl->master; 200 + u8 rom[9], external_power; 201 + int ret, max_trying = 10; 202 + u8 *family_data = sl->family_data; 203 + 204 + ret = mutex_lock_interruptible(&dev->bus_mutex); 205 + if (ret != 0) 206 + goto post_unlock; 207 + 208 + if (!sl->family_data) { 209 + ret = -ENODEV; 210 + goto pre_unlock; 211 + } 212 + 213 + /* prevent the slave from going away in sleep */ 214 + atomic_inc(THERM_REFCNT(family_data)); 215 + memset(rom, 0, sizeof(rom)); 216 + 217 + while (max_trying--) { 218 + if (!w1_reset_select_slave(sl)) { 219 + unsigned int tm = 10; 220 + unsigned long sleep_rem; 221 + 222 + /* check if 
in parasite mode */ 223 + w1_write_8(dev, W1_READ_PSUPPLY); 224 + external_power = w1_read_8(dev); 225 + 226 + if (w1_reset_select_slave(sl)) 227 + continue; 228 + 229 + /* 10ms strong pullup/delay after the copy command */ 230 + if (w1_strong_pullup == 2 || 231 + (!external_power && w1_strong_pullup)) 232 + w1_next_pullup(dev, tm); 233 + 234 + w1_write_8(dev, W1_COPY_SCRATCHPAD); 235 + 236 + if (external_power) { 237 + mutex_unlock(&dev->bus_mutex); 238 + 239 + sleep_rem = msleep_interruptible(tm); 240 + if (sleep_rem != 0) { 241 + ret = -EINTR; 242 + goto post_unlock; 243 + } 244 + 245 + ret = mutex_lock_interruptible(&dev->bus_mutex); 246 + if (ret != 0) 247 + goto post_unlock; 248 + } else if (!w1_strong_pullup) { 249 + sleep_rem = msleep_interruptible(tm); 250 + if (sleep_rem != 0) { 251 + ret = -EINTR; 252 + goto pre_unlock; 253 + } 254 + } 255 + 256 + break; 257 + } 258 + } 259 + 260 + pre_unlock: 261 + mutex_unlock(&dev->bus_mutex); 262 + 263 + post_unlock: 264 + atomic_dec(THERM_REFCNT(family_data)); 265 + return ret; 266 + } 267 + 268 + /* DS18S20 does not feature configuration register */ 269 + static inline int w1_DS18S20_precision(struct device *device, int val) 270 + { 271 + return 0; 272 + } 273 + 274 + static inline int w1_DS18B20_precision(struct device *device, int val) 275 + { 276 + struct w1_slave *sl = dev_to_w1_slave(device); 277 + struct w1_master *dev = sl->master; 278 + u8 rom[9], crc; 279 + int ret, max_trying = 10; 280 + u8 *family_data = sl->family_data; 281 + uint8_t precision_bits; 282 + uint8_t mask = 0x60; 283 + 284 + if(val > 12 || val < 9) { 285 + pr_warn("Unsupported precision\n"); 286 + return -1; 287 + } 288 + 289 + ret = mutex_lock_interruptible(&dev->bus_mutex); 290 + if (ret != 0) 291 + goto post_unlock; 292 + 293 + if (!sl->family_data) { 294 + ret = -ENODEV; 295 + goto pre_unlock; 296 + } 297 + 298 + /* prevent the slave from going away in sleep */ 299 + atomic_inc(THERM_REFCNT(family_data)); 300 + memset(rom, 0, 
sizeof(rom)); 301 + 302 + /* translate precision to bitmask (see datasheet page 9) */ 303 + switch (val) { 304 + case 9: 305 + precision_bits = 0x00; 306 + break; 307 + case 10: 308 + precision_bits = 0x20; 309 + break; 310 + case 11: 311 + precision_bits = 0x40; 312 + break; 313 + case 12: 314 + default: 315 + precision_bits = 0x60; 316 + break; 317 + } 318 + 319 + while (max_trying--) { 320 + crc = 0; 321 + 322 + if (!w1_reset_select_slave(sl)) { 323 + int count = 0; 324 + 325 + /* read values to only alter precision bits */ 326 + w1_write_8(dev, W1_READ_SCRATCHPAD); 327 + if ((count = w1_read_block(dev, rom, 9)) != 9) 328 + dev_warn(device, "w1_read_block() returned %u instead of 9.\n", count); 329 + 330 + crc = w1_calc_crc8(rom, 8); 331 + if (rom[8] == crc) { 332 + rom[4] = (rom[4] & ~mask) | (precision_bits & mask); 333 + 334 + if (!w1_reset_select_slave(sl)) { 335 + w1_write_8(dev, W1_WRITE_SCRATCHPAD); 336 + w1_write_8(dev, rom[2]); 337 + w1_write_8(dev, rom[3]); 338 + w1_write_8(dev, rom[4]); 339 + 340 + break; 341 + } 342 + } 343 + } 344 + } 345 + 346 + pre_unlock: 347 + mutex_unlock(&dev->bus_mutex); 348 + 349 + post_unlock: 350 + atomic_dec(THERM_REFCNT(family_data)); 351 + return ret; 352 + } 197 353 198 354 static inline int w1_DS18B20_convert_temp(u8 rom[9]) 199 355 { ··· 400 220 return 0; 401 221 } 402 222 223 + static ssize_t w1_slave_store(struct device *device, 224 + struct device_attribute *attr, const char *buf, 225 + size_t size) 226 + { 227 + int val, ret; 228 + struct w1_slave *sl = dev_to_w1_slave(device); 229 + int i; 230 + 231 + ret = kstrtoint(buf, 0, &val); 232 + if (ret) 233 + return ret; 234 + 235 + for (i = 0; i < ARRAY_SIZE(w1_therm_families); ++i) { 236 + if (w1_therm_families[i].f->fid == sl->family->fid) { 237 + /* zero value indicates to write current configuration to eeprom */ 238 + if (0 == val) 239 + ret = w1_therm_families[i].eeprom(device); 240 + else 241 + ret = w1_therm_families[i].precision(device, val); 242 + break; 243 
+ } 244 + } 245 + return ret ? : size; 246 + } 403 247 404 248 static ssize_t w1_slave_show(struct device *device, 405 249 struct device_attribute *attr, char *buf) ··· 515 311 for (i = 0; i < 9; ++i) 516 312 c -= snprintf(buf + PAGE_SIZE - c, c, "%02x ", rom[i]); 517 313 c -= snprintf(buf + PAGE_SIZE - c, c, ": crc=%02x %s\n", 518 - crc, (verdict) ? "YES" : "NO"); 314 + crc, (verdict) ? "YES" : "NO"); 519 315 if (verdict) 520 316 memcpy(family_data, rom, sizeof(rom)); 521 317 else
+1 -1
drivers/w1/w1.c
··· 335 335 int tmp; 336 336 struct w1_master *md = dev_to_w1_master(dev); 337 337 338 - if (kstrtoint(buf, 0, &tmp) == -EINVAL || tmp < 1) 338 + if (kstrtoint(buf, 0, &tmp) || tmp < 1) 339 339 return -EINVAL; 340 340 341 341 mutex_lock(&md->mutex);
+2
drivers/w1/w1.h
··· 58 58 #define W1_ALARM_SEARCH 0xEC 59 59 #define W1_CONVERT_TEMP 0x44 60 60 #define W1_SKIP_ROM 0xCC 61 + #define W1_COPY_SCRATCHPAD 0x48 62 + #define W1_WRITE_SCRATCHPAD 0x4E 61 63 #define W1_READ_SCRATCHPAD 0xBE 62 64 #define W1_READ_ROM 0x33 63 65 #define W1_READ_PSUPPLY 0xB4
+6
include/linux/coresight-stm.h
··· 1 + #ifndef __LINUX_CORESIGHT_STM_H_ 2 + #define __LINUX_CORESIGHT_STM_H_ 3 + 4 + #include <uapi/linux/coresight-stm.h> 5 + 6 + #endif
+169 -1
include/linux/hyperv.h
··· 126 126 127 127 u32 ring_datasize; /* < ring_size */ 128 128 u32 ring_data_startoffset; 129 + u32 priv_write_index; 130 + u32 priv_read_index; 129 131 }; 130 132 131 133 /* ··· 151 149 *write = write_loc >= read_loc ? dsize - (write_loc - read_loc) : 152 150 read_loc - write_loc; 153 151 *read = dsize - *write; 152 + } 153 + 154 + static inline u32 hv_get_bytes_to_read(struct hv_ring_buffer_info *rbi) 155 + { 156 + u32 read_loc, write_loc, dsize, read; 157 + 158 + dsize = rbi->ring_datasize; 159 + read_loc = rbi->ring_buffer->read_index; 160 + write_loc = READ_ONCE(rbi->ring_buffer->write_index); 161 + 162 + read = write_loc >= read_loc ? (write_loc - read_loc) : 163 + (dsize - read_loc) + write_loc; 164 + 165 + return read; 166 + } 167 + 168 + static inline u32 hv_get_bytes_to_write(struct hv_ring_buffer_info *rbi) 169 + { 170 + u32 read_loc, write_loc, dsize, write; 171 + 172 + dsize = rbi->ring_datasize; 173 + read_loc = READ_ONCE(rbi->ring_buffer->read_index); 174 + write_loc = rbi->ring_buffer->write_index; 175 + 176 + write = write_loc >= read_loc ? dsize - (write_loc - read_loc) : 177 + read_loc - write_loc; 178 + return write; 154 179 } 155 180 156 181 /* ··· 1120 1091 resource_size_t min, resource_size_t max, 1121 1092 resource_size_t size, resource_size_t align, 1122 1093 bool fb_overlap_ok); 1123 - 1094 + void vmbus_free_mmio(resource_size_t start, resource_size_t size); 1124 1095 int vmbus_cpu_number_to_vp_number(int cpu_number); 1125 1096 u64 hv_do_hypercall(u64 control, void *input, void *output); 1126 1097 ··· 1367 1338 1368 1339 int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id, 1369 1340 const uuid_le *shv_host_servie_id); 1341 + void vmbus_set_event(struct vmbus_channel *channel); 1342 + 1343 + /* Get the start of the ring buffer. 
*/ 1344 + static inline void * 1345 + hv_get_ring_buffer(struct hv_ring_buffer_info *ring_info) 1346 + { 1347 + return (void *)ring_info->ring_buffer->buffer; 1348 + } 1349 + 1350 + /* 1351 + * To optimize the flow management on the send-side, 1352 + * when the sender is blocked because of lack of 1353 + * sufficient space in the ring buffer, potential the 1354 + * consumer of the ring buffer can signal the producer. 1355 + * This is controlled by the following parameters: 1356 + * 1357 + * 1. pending_send_sz: This is the size in bytes that the 1358 + * producer is trying to send. 1359 + * 2. The feature bit feat_pending_send_sz set to indicate if 1360 + * the consumer of the ring will signal when the ring 1361 + * state transitions from being full to a state where 1362 + * there is room for the producer to send the pending packet. 1363 + */ 1364 + 1365 + static inline bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi) 1366 + { 1367 + u32 cur_write_sz; 1368 + u32 pending_sz; 1369 + 1370 + /* 1371 + * Issue a full memory barrier before making the signaling decision. 1372 + * Here is the reason for having this barrier: 1373 + * If the reading of the pend_sz (in this function) 1374 + * were to be reordered and read before we commit the new read 1375 + * index (in the calling function) we could 1376 + * have a problem. If the host were to set the pending_sz after we 1377 + * have sampled pending_sz and go to sleep before we commit the 1378 + * read index, we could miss sending the interrupt. Issue a full 1379 + * memory barrier to address this. 1380 + */ 1381 + virt_mb(); 1382 + 1383 + pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz); 1384 + /* If the other end is not blocked on write don't bother. 
*/ 1385 + if (pending_sz == 0) 1386 + return false; 1387 + 1388 + cur_write_sz = hv_get_bytes_to_write(rbi); 1389 + 1390 + if (cur_write_sz >= pending_sz) 1391 + return true; 1392 + 1393 + return false; 1394 + } 1395 + 1396 + /* 1397 + * An API to support in-place processing of incoming VMBUS packets. 1398 + */ 1399 + #define VMBUS_PKT_TRAILER 8 1400 + 1401 + static inline struct vmpacket_descriptor * 1402 + get_next_pkt_raw(struct vmbus_channel *channel) 1403 + { 1404 + struct hv_ring_buffer_info *ring_info = &channel->inbound; 1405 + u32 read_loc = ring_info->priv_read_index; 1406 + void *ring_buffer = hv_get_ring_buffer(ring_info); 1407 + struct vmpacket_descriptor *cur_desc; 1408 + u32 packetlen; 1409 + u32 dsize = ring_info->ring_datasize; 1410 + u32 delta = read_loc - ring_info->ring_buffer->read_index; 1411 + u32 bytes_avail_toread = (hv_get_bytes_to_read(ring_info) - delta); 1412 + 1413 + if (bytes_avail_toread < sizeof(struct vmpacket_descriptor)) 1414 + return NULL; 1415 + 1416 + if ((read_loc + sizeof(*cur_desc)) > dsize) 1417 + return NULL; 1418 + 1419 + cur_desc = ring_buffer + read_loc; 1420 + packetlen = cur_desc->len8 << 3; 1421 + 1422 + /* 1423 + * If the packet under consideration is wrapping around, 1424 + * return failure. 1425 + */ 1426 + if ((read_loc + packetlen + VMBUS_PKT_TRAILER) > (dsize - 1)) 1427 + return NULL; 1428 + 1429 + return cur_desc; 1430 + } 1431 + 1432 + /* 1433 + * A helper function to step through packets "in-place" 1434 + * This API is to be called after each successful call 1435 + * get_next_pkt_raw(). 
1436 + */ 1437 + static inline void put_pkt_raw(struct vmbus_channel *channel, 1438 + struct vmpacket_descriptor *desc) 1439 + { 1440 + struct hv_ring_buffer_info *ring_info = &channel->inbound; 1441 + u32 read_loc = ring_info->priv_read_index; 1442 + u32 packetlen = desc->len8 << 3; 1443 + u32 dsize = ring_info->ring_datasize; 1444 + 1445 + if ((read_loc + packetlen + VMBUS_PKT_TRAILER) > dsize) 1446 + BUG(); 1447 + /* 1448 + * Include the packet trailer. 1449 + */ 1450 + ring_info->priv_read_index += packetlen + VMBUS_PKT_TRAILER; 1451 + } 1452 + 1453 + /* 1454 + * This call commits the read index and potentially signals the host. 1455 + * Here is the pattern for using the "in-place" consumption APIs: 1456 + * 1457 + * while (get_next_pkt_raw() { 1458 + * process the packet "in-place"; 1459 + * put_pkt_raw(); 1460 + * } 1461 + * if (packets processed in place) 1462 + * commit_rd_index(); 1463 + */ 1464 + static inline void commit_rd_index(struct vmbus_channel *channel) 1465 + { 1466 + struct hv_ring_buffer_info *ring_info = &channel->inbound; 1467 + /* 1468 + * Make sure all reads are done before we update the read index since 1469 + * the writer may start writing to the read area once the read index 1470 + * is updated. 1471 + */ 1472 + virt_rmb(); 1473 + ring_info->ring_buffer->read_index = ring_info->priv_read_index; 1474 + 1475 + if (hv_need_to_signal_on_read(ring_info)) 1476 + vmbus_set_event(channel); 1477 + } 1478 + 1479 + 1370 1480 #endif /* _HYPERV_H */
+11 -3
include/linux/mcb.h
··· 15 15 #include <linux/device.h> 16 16 #include <linux/irqreturn.h> 17 17 18 + #define CHAMELEON_FILENAME_LEN 12 19 + 18 20 struct mcb_driver; 19 21 struct mcb_device; 20 22 21 23 /** 22 24 * struct mcb_bus - MEN Chameleon Bus 23 25 * 24 - * @dev: pointer to carrier device 25 - * @children: the child busses 26 + * @dev: bus device 27 + * @carrier: pointer to carrier device 26 28 * @bus_nr: mcb bus number 27 29 * @get_irq: callback to get IRQ number 30 + * @revision: the FPGA's revision number 31 + * @model: the FPGA's model number 32 + * @filename: the FPGA's name 28 33 */ 29 34 struct mcb_bus { 30 - struct list_head children; 31 35 struct device dev; 32 36 struct device *carrier; 33 37 int bus_nr; 38 + u8 revision; 39 + char model; 40 + u8 minor; 41 + char name[CHAMELEON_FILENAME_LEN + 1]; 34 42 int (*get_irq)(struct mcb_device *dev); 35 43 }; 36 44 #define to_mcb_bus(b) container_of((b), struct mcb_bus, dev)
+10
include/linux/nvmem-provider.h
··· 14 14 15 15 struct nvmem_device; 16 16 struct nvmem_cell_info; 17 + typedef int (*nvmem_reg_read_t)(void *priv, unsigned int offset, 18 + void *val, size_t bytes); 19 + typedef int (*nvmem_reg_write_t)(void *priv, unsigned int offset, 20 + void *val, size_t bytes); 17 21 18 22 struct nvmem_config { 19 23 struct device *dev; ··· 28 24 int ncells; 29 25 bool read_only; 30 26 bool root_only; 27 + nvmem_reg_read_t reg_read; 28 + nvmem_reg_write_t reg_write; 29 + int size; 30 + int word_size; 31 + int stride; 32 + void *priv; 31 33 /* To be only used by old driver/misc/eeprom drivers */ 32 34 bool compat; 33 35 struct device *base_dev;
+3
include/linux/stm.h
··· 50 50 * @sw_end: last STP master available to software 51 51 * @sw_nchannels: number of STP channels per master 52 52 * @sw_mmiosz: size of one channel's IO space, for mmap, optional 53 + * @hw_override: masters in the STP stream will not match the ones 54 + * assigned by software, but are up to the STM hardware 53 55 * @packet: callback that sends an STP packet 54 56 * @mmio_addr: mmap callback, optional 55 57 * @link: called when a new stm_source gets linked to us, optional ··· 87 85 unsigned int sw_end; 88 86 unsigned int sw_nchannels; 89 87 unsigned int sw_mmiosz; 88 + unsigned int hw_override; 90 89 ssize_t (*packet)(struct stm_data *, unsigned int, 91 90 unsigned int, unsigned int, 92 91 unsigned int, unsigned int,
+21
include/uapi/linux/coresight-stm.h
··· 1 + #ifndef __UAPI_CORESIGHT_STM_H_ 2 + #define __UAPI_CORESIGHT_STM_H_ 3 + 4 + #define STM_FLAG_TIMESTAMPED BIT(3) 5 + #define STM_FLAG_GUARANTEED BIT(7) 6 + 7 + /* 8 + * The CoreSight STM supports guaranteed and invariant timing 9 + * transactions. Guaranteed transactions are guaranteed to be 10 + * traced, this might involve stalling the bus or system to 11 + * ensure the transaction is accepted by the STM. While invariant 12 + * timing transactions are not guaranteed to be traced, they 13 + * will take an invariant amount of time regardless of the 14 + * state of the STM. 15 + */ 16 + enum { 17 + STM_OPTION_GUARANTEED = 0, 18 + STM_OPTION_INVARIANT, 19 + }; 20 + 21 + #endif
+1 -1
scripts/checkkconfigsymbols.py
··· 89 89 90 90 if opts.diff and not re.match(r"^[\w\-\.]+\.\.[\w\-\.]+$", opts.diff): 91 91 sys.exit("Please specify valid input in the following format: " 92 - "\'commmit1..commit2\'") 92 + "\'commit1..commit2\'") 93 93 94 94 if opts.commit or opts.diff: 95 95 if not opts.force and tree_is_dirty():
+1
tools/hv/lsvmbus
··· 35 35 '{ba6163d9-04a1-4d29-b605-72e2ffb1dc7f}' : 'Synthetic SCSI Controller', 36 36 '{2f9bcc4a-0069-4af3-b76b-6fd0be528cda}' : 'Synthetic fiber channel adapter', 37 37 '{8c2eaf3d-32a7-4b09-ab99-bd1f1c86b501}' : 'Synthetic RDMA adapter', 38 + '{44c4f61d-4444-4400-9d52-802e27ede19f}' : 'PCI Express pass-through', 38 39 '{276aacf4-ac15-426c-98dd-7521ad3f01fe}' : '[Reserved system device]', 39 40 '{f8e65716-3cb3-4a06-9a60-1889c5cccab5}' : '[Reserved system device]', 40 41 '{3375baf4-9e15-4b30-b765-67acb10d607b}' : '[Reserved system device]',