Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'coresight-next-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/coresight/linux into char-misc-next

Suzuki writes:
"coresight: Changes for v6.1

Coresight trace subsystem updates for v6.1 include:
- Support for HiSilicon PTT trace
- Cleanup of CoreSight sysfs accessor functions, reducing
  code size.
- Expose coresight timestamp source for ETMv4+
- DT binding updates to include missing properties
- Minor documentation, Kconfig text fixes.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>"

* tag 'coresight-next-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/coresight/linux:
hwtracing: hisi_ptt: Fix up for "iommu/dma: Make header private"
MAINTAINERS: Add maintainer for HiSilicon PTT driver
docs: trace: Add HiSilicon PTT device driver documentation
hwtracing: hisi_ptt: Add tune function support for HiSilicon PCIe Tune and Trace device
hwtracing: hisi_ptt: Add trace function support for HiSilicon PCIe Tune and Trace device
iommu/arm-smmu-v3: Make default domain type of HiSilicon PTT device to identity
coresight: cti-sysfs: Mark coresight_cti_reg_store() as __maybe_unused
coresight: Make new csdev_access offsets unsigned
coresight: cti-sysfs: Re-use same functions for similar sysfs register accessors
coresight: Re-use same function for similar sysfs register accessors
coresight: Simplify sysfs accessors by using csdev_access abstraction
coresight: Remove unused function parameter
coresight: etm4x: docs: Add documentation for 'ts_source' sysfs interface
coresight: etm4x: Expose default timestamp source in sysfs
dt-bindings: arm: coresight-tmc: Add 'iommu' property
dt-bindings: arm: coresight: Add 'power-domains' property
coresight: docs: Fix a broken reference
coresight: trbe: fix Kconfig "its" grammar

+1973 -312
+8
Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x
··· 516 516 Description: (Read) Returns the number of special conditional P1 right-hand keys 517 517 that the trace unit can use (0x194). The value is taken 518 518 directly from the HW. 519 + 520 + What: /sys/bus/coresight/devices/etm<N>/ts_source 521 + Date: October 2022 522 + KernelVersion: 6.1 523 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> or Suzuki K Poulose <suzuki.poulose@arm.com> 524 + Description: (Read) When FEAT_TRF is implemented, value of TRFCR_ELx.TS used for 525 + trace session. Otherwise -1 indicates an unknown time source. Check 526 + trcidr0.tssize to see if a global timestamp is available.
+61
Documentation/ABI/testing/sysfs-devices-hisi_ptt
··· 1 + What: /sys/devices/hisi_ptt<sicl_id>_<core_id>/tune 2 + Date: October 2022 3 + KernelVersion: 6.1 4 + Contact: Yicong Yang <yangyicong@hisilicon.com> 5 + Description: This directory contains files for tuning the PCIe link 6 + parameters (events). Each file is named after the event 7 + of the PCIe link. 8 + 9 + See Documentation/trace/hisi-ptt.rst for more information. 10 + 11 + What: /sys/devices/hisi_ptt<sicl_id>_<core_id>/tune/qos_tx_cpl 12 + Date: October 2022 13 + KernelVersion: 6.1 14 + Contact: Yicong Yang <yangyicong@hisilicon.com> 15 + Description: (RW) Controls the weight of Tx completion TLPs, which influences 16 + the proportion of outbound completion TLPs on the PCIe link. 17 + The available tune data is [0, 1, 2]. Writing a negative value 18 + will return an error, and out-of-range values will be converted 19 + to 2. The value indicates a probable level of the event. 20 + 21 + What: /sys/devices/hisi_ptt<sicl_id>_<core_id>/tune/qos_tx_np 22 + Date: October 2022 23 + KernelVersion: 6.1 24 + Contact: Yicong Yang <yangyicong@hisilicon.com> 25 + Description: (RW) Controls the weight of Tx non-posted TLPs, which influences 26 + the proportion of outbound non-posted TLPs on the PCIe link. 27 + The available tune data is [0, 1, 2]. Writing a negative value 28 + will return an error, and out-of-range values will be converted 29 + to 2. The value indicates a probable level of the event. 30 + 31 + What: /sys/devices/hisi_ptt<sicl_id>_<core_id>/tune/qos_tx_p 32 + Date: October 2022 33 + KernelVersion: 6.1 34 + Contact: Yicong Yang <yangyicong@hisilicon.com> 35 + Description: (RW) Controls the weight of Tx posted TLPs, which influences the 36 + proportion of outbound posted TLPs on the PCIe link. 37 + The available tune data is [0, 1, 2]. Writing a negative value 38 + will return an error, and out-of-range values will be converted 39 + to 2. The value indicates a probable level of the event. 
40 + 41 + What: /sys/devices/hisi_ptt<sicl_id>_<core_id>/tune/rx_alloc_buf_level 42 + Date: October 2022 43 + KernelVersion: 6.1 44 + Contact: Yicong Yang <yangyicong@hisilicon.com> 45 + Description: (RW) Controls the allocated buffer watermark for inbound packets. 46 + The packets will be stored in the buffer first and then transmitted 47 + either when the watermark is reached or when a timeout occurs. 48 + The available tune data is [0, 1, 2]. Writing a negative value 49 + will return an error, and out-of-range values will be converted 50 + to 2. The value indicates a probable level of the event. 51 + 52 + What: /sys/devices/hisi_ptt<sicl_id>_<core_id>/tune/tx_alloc_buf_level 53 + Date: October 2022 54 + KernelVersion: 6.1 55 + Contact: Yicong Yang <yangyicong@hisilicon.com> 56 + Description: (RW) Controls the allocated buffer watermark of outbound packets. 57 + The packets will be stored in the buffer first and then transmitted 58 + either when the watermark is reached or when a timeout occurs. 59 + The available tune data is [0, 1, 2]. Writing a negative value 60 + will return an error, and out-of-range values will be converted 61 + to 2. The value indicates a probable level of the event.
+3
Documentation/devicetree/bindings/arm/arm,coresight-catu.yaml
··· 61 61 maxItems: 1 62 62 description: Address translation error interrupt 63 63 64 + power-domains: 65 + maxItems: 1 66 + 64 67 in-ports: 65 68 $ref: /schemas/graph.yaml#/properties/ports 66 69 additionalProperties: false
+3
Documentation/devicetree/bindings/arm/arm,coresight-cti.yaml
··· 98 98 base cti node if compatible string arm,coresight-cti-v8-arch is used, 99 99 or may appear in a trig-conns child node when appropriate. 100 100 101 + power-domains: 102 + maxItems: 1 103 + 101 104 arm,cti-ctm-id: 102 105 $ref: /schemas/types.yaml#/definitions/uint32 103 106 description:
+3
Documentation/devicetree/bindings/arm/arm,coresight-dynamic-funnel.yaml
··· 54 54 - const: apb_pclk 55 55 - const: atclk 56 56 57 + power-domains: 58 + maxItems: 1 59 + 57 60 in-ports: 58 61 $ref: /schemas/graph.yaml#/properties/ports 59 62
+3
Documentation/devicetree/bindings/arm/arm,coresight-dynamic-replicator.yaml
··· 54 54 - const: apb_pclk 55 55 - const: atclk 56 56 57 + power-domains: 58 + maxItems: 1 59 + 57 60 qcom,replicator-loses-context: 58 61 type: boolean 59 62 description:
+3
Documentation/devicetree/bindings/arm/arm,coresight-etb10.yaml
··· 54 54 - const: apb_pclk 55 55 - const: atclk 56 56 57 + power-domains: 58 + maxItems: 1 59 + 57 60 in-ports: 58 61 $ref: /schemas/graph.yaml#/properties/ports 59 62 additionalProperties: false
+3
Documentation/devicetree/bindings/arm/arm,coresight-etm.yaml
··· 73 73 - const: apb_pclk 74 74 - const: atclk 75 75 76 + power-domains: 77 + maxItems: 1 78 + 76 79 arm,coresight-loses-context-with-cpu: 77 80 type: boolean 78 81 description:
+3
Documentation/devicetree/bindings/arm/arm,coresight-static-funnel.yaml
··· 27 27 compatible: 28 28 const: arm,coresight-static-funnel 29 29 30 + power-domains: 31 + maxItems: 1 32 + 30 33 in-ports: 31 34 $ref: /schemas/graph.yaml#/properties/ports 32 35
+3
Documentation/devicetree/bindings/arm/arm,coresight-static-replicator.yaml
··· 27 27 compatible: 28 28 const: arm,coresight-static-replicator 29 29 30 + power-domains: 31 + maxItems: 1 32 + 30 33 in-ports: 31 34 $ref: /schemas/graph.yaml#/properties/ports 32 35 additionalProperties: false
+3
Documentation/devicetree/bindings/arm/arm,coresight-stm.yaml
··· 61 61 - const: apb_pclk 62 62 - const: atclk 63 63 64 + power-domains: 65 + maxItems: 1 66 + 64 67 out-ports: 65 68 $ref: /schemas/graph.yaml#/properties/ports 66 69 additionalProperties: false
+6
Documentation/devicetree/bindings/arm/arm,coresight-tmc.yaml
··· 55 55 - const: apb_pclk 56 56 - const: atclk 57 57 58 + iommus: 59 + maxItems: 1 60 + 61 + power-domains: 62 + maxItems: 1 63 + 58 64 arm,buffer-size: 59 65 $ref: /schemas/types.yaml#/definitions/uint32 60 66 deprecated: true
+3
Documentation/devicetree/bindings/arm/arm,coresight-tpiu.yaml
··· 54 54 - const: apb_pclk 55 55 - const: atclk 56 56 57 + power-domains: 58 + maxItems: 1 59 + 57 60 in-ports: 58 61 $ref: /schemas/graph.yaml#/properties/ports 59 62 additionalProperties: false
+3
Documentation/devicetree/bindings/arm/arm,embedded-trace-extension.yaml
··· 33 33 Handle to the cpu this ETE is bound to. 34 34 $ref: /schemas/types.yaml#/definitions/phandle 35 35 36 + power-domains: 37 + maxItems: 1 38 + 36 39 out-ports: 37 40 description: | 38 41 Output connections from the ETE to legacy CoreSight trace bus.
+2 -1
Documentation/trace/coresight/coresight-cpu-debug.rst
··· 117 117 Device Tree Bindings 118 118 -------------------- 119 119 120 - See Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt for details. 120 + See Documentation/devicetree/bindings/arm/arm,coresight-cpu-debug.yaml for 121 + details. 121 122 122 123 123 124 How to use the module
+14
Documentation/trace/coresight/coresight-etm4x-reference.rst
··· 71 71 72 72 ---- 73 73 74 + :File: ``ts_source`` (ro) 75 + :Trace Registers: None. 76 + :Notes: 77 + When FEAT_TRF is implemented, value of TRFCR_ELx.TS used for trace session. Otherwise -1 78 + indicates an unknown time source. Check trcidr0.tssize to see if a global timestamp is 79 + available. 80 + 81 + :Example: 82 + ``$> cat ts_source`` 83 + 84 + ``$> 1`` 85 + 86 + ---- 87 + 74 88 :File: ``addr_idx`` (rw) 75 89 :Trace Registers: None. 76 90 :Notes:
+298
Documentation/trace/hisi-ptt.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ====================================== 4 + HiSilicon PCIe Tune and Trace device 5 + ====================================== 6 + 7 + Introduction 8 + ============ 9 + 10 + HiSilicon PCIe tune and trace device (PTT) is a PCIe Root Complex 11 + integrated Endpoint (RCiEP) device, providing the capability 12 + to dynamically monitor and tune the PCIe link's events (tune), 13 + and trace the TLP headers (trace). The two functions are independent, 14 + but it is recommended to use them together to analyze and enhance the 15 + PCIe link's performance. 16 + 17 + On Kunpeng 930 SoC, the PCIe Root Complex is composed of several 18 + PCIe cores. Each PCIe core includes several Root Ports and a PTT 19 + RCiEP, like below. The PTT device is capable of tuning and 20 + tracing the links of the PCIe core. 21 + :: 22 + 23 + +--------------Core 0-------+ 24 + | | [ PTT ] | 25 + | | [Root Port]---[Endpoint] 26 + | | [Root Port]---[Endpoint] 27 + | | [Root Port]---[Endpoint] 28 + Root Complex |------Core 1-------+ 29 + | | [ PTT ] | 30 + | | [Root Port]---[ Switch ]---[Endpoint] 31 + | | [Root Port]---[Endpoint] `-[Endpoint] 32 + | | [Root Port]---[Endpoint] 33 + +---------------------------+ 34 + 35 + The PTT device driver registers one PMU device for each PTT device. 36 + The name of each PTT device is composed of the 'hisi_ptt' prefix with 37 + the ids of the SICL and the Core where it is located. The Kunpeng 930 38 + SoC encapsulates multiple CPU dies (SCCL, Super CPU Cluster) and 39 + IO dies (SICL, Super I/O Cluster), where there's one PCIe Root 40 + Complex for each SICL. 41 + :: 42 + 43 + /sys/devices/hisi_ptt<sicl_id>_<core_id> 44 + 45 + Tune 46 + ==== 47 + 48 + PTT tune is designed for monitoring and adjusting PCIe link parameters (events). 49 + Currently we support events in two classes. The scope of the events 50 + covers the PCIe core to which the PTT device belongs. 
51 + 52 + Each event is presented as a file under $(PTT PMU dir)/tune, and 53 + a simple open/read/write/close cycle will be used to tune the event. 54 + :: 55 + 56 + $ cd /sys/devices/hisi_ptt<sicl_id>_<core_id>/tune 57 + $ ls 58 + qos_tx_cpl qos_tx_np qos_tx_p 59 + tx_path_rx_req_alloc_buf_level 60 + tx_path_tx_req_alloc_buf_level 61 + $ cat qos_tx_p 62 + 1 63 + $ echo 2 > qos_tx_p 64 + $ cat qos_tx_p 65 + 2 66 + 67 + The current (numerical) value of the event can simply be read 68 + from the file, and the desired value can be written to the file to tune it. 69 + 70 + 1. Tx Path QoS Control 71 + ------------------------ 72 + 73 + The following files are provided to tune the QoS of the tx path of 74 + the PCIe core. 75 + 76 + - qos_tx_cpl: weight of Tx completion TLPs 77 + - qos_tx_np: weight of Tx non-posted TLPs 78 + - qos_tx_p: weight of Tx posted TLPs 79 + 80 + The weight influences the proportion of certain packets on the PCIe link. 81 + For example, in a storage scenario, increasing the proportion 82 + of completion packets on the link can enhance the performance, as 83 + more completions are consumed. 84 + 85 + The available tune data of these events is [0, 1, 2]. 86 + Writing a negative value will return an error, and out-of-range 87 + values will be converted to 2. Note that the event value just 88 + indicates a probable level, but is not precise. 89 + 90 + 2. Tx Path Buffer Control 91 + ------------------------- 92 + 93 + The following files are provided to tune the buffer of the tx path of the PCIe core. 94 + 95 + - rx_alloc_buf_level: watermark of Rx requested 96 + - tx_alloc_buf_level: watermark of Tx requested 97 + 98 + These events influence the watermark of the buffer allocated for each 99 + type. Rx means inbound while Tx means outbound. The packets will 100 + be stored in the buffer first and then transmitted either when the 101 + watermark is reached or when a timeout occurs. 
For a busy direction, you should 102 + increase the related buffer watermark to avoid frequent posting and 103 + thus enhance the performance. In most cases just keep the default value. 104 + 105 + The available tune data of the above events is [0, 1, 2]. 106 + Writing a negative value will return an error, and out-of-range 107 + values will be converted to 2. Note that the event value just 108 + indicates a probable level, but is not precise. 109 + 110 + Trace 111 + ===== 112 + 113 + PTT trace is designed for dumping the TLP headers to memory, which 114 + can be used to analyze the transactions and usage conditions of the PCIe 115 + Link. You can choose to filter the traced headers by either Requester ID, 116 + or by a set of Root Ports on the same core as the PTT device (tracing the 117 + headers downstream of those ports). Tracing only headers of a certain type 118 + and direction is also supported. 119 + 120 + You can use the perf command `perf record` to set the parameters, start 121 + the trace and get the data. Decoding the trace data with `perf report` 122 + is also supported. The control parameters for the trace are input 123 + as an event code for each event, which will be further illustrated later. 124 + An example usage is: 125 + :: 126 + 127 + $ perf record -e hisi_ptt0_2/filter=0x80001,type=1,direction=1, 128 + format=1/ -- sleep 5 129 + 130 + This will trace the TLP headers downstream of Root Port 0000:00:10.1 (the event 131 + code for event 'filter' is 0x80001) with the type set to posted TLP requests, 132 + the direction to inbound, and the traced data format to 8DW. 133 + 134 + 1. Filter 135 + --------- 136 + 137 + The TLP headers to trace can be filtered by the Root Ports or the Requester ID 138 + of the Endpoint, which are located on the same core as the PTT device. You can 139 + set the filter by specifying the `filter` parameter, which is required to start 140 + the trace. The parameter value is 20 bits. Bit 19 indicates the filter type: 
141 + 1 for the Root Port filter and 0 for the Requester filter. Bit[15:0] indicates the 142 + filter value. The value for a Root Port is a mask of the core port id, which is 143 + calculated from its PCI Slot ID as (slotid & 7) * 2. The value for a Requester 144 + is the Requester ID (Device ID of the PCIe function). Bit[18:16] is currently 145 + reserved for extension. 146 + 147 + For example, if the desired filter is Endpoint function 0000:01:00.1, the filter 148 + value will be 0x00101. If the desired filter is Root Port 0000:00:10.0, then 149 + the filter value is calculated as 0x80001. 150 + 151 + Note that multiple Root Ports can be specified at one time, but only one 152 + Endpoint function can be specified in one trace. Specifying both a Root Port 153 + and a function at the same time is not supported. The driver maintains a list of 154 + available filters and will reject invalid inputs. 155 + 156 + Currently the available filters are detected at driver probe. If the supported 157 + devices are removed/added after probe, you may need to reload the driver to update 158 + the filters. 159 + 160 + 2. Type 161 + ------- 162 + 163 + You can trace the TLP headers of certain types by specifying the `type` 164 + parameter, which is required to start the trace. The parameter value is 165 + 8 bits. Currently supported types and related values are shown below: 166 + 167 + - 8'b00000001: posted requests (P) 168 + - 8'b00000010: non-posted requests (NP) 169 + - 8'b00000100: completions (CPL) 170 + 171 + You can specify multiple types when tracing inbound TLP headers, but can only 172 + specify one when tracing outbound TLP headers. 173 + 174 + 3. Direction 175 + ------------ 176 + 177 + You can trace the TLP headers from a certain direction, which is relative 178 + to the Root Port or the PCIe core, by specifying the `direction` parameter. 179 + This is optional and the default is inbound. The parameter value 180 + is 4 bits. 
When the desired format is 4DW, the directions and related values 181 + supported are shown below: 182 + 183 + - 4'b0000: inbound TLPs (P, NP, CPL) 184 + - 4'b0001: outbound TLPs (P, NP, CPL) 185 + - 4'b0010: outbound TLPs (P, NP, CPL) and inbound TLPs (P, NP, CPL B) 186 + - 4'b0011: outbound TLPs (P, NP, CPL) and inbound TLPs (CPL A) 187 + 188 + When the desired format is 8DW, the directions and related values supported are 189 + shown below: 190 + 191 + - 4'b0000: reserved 192 + - 4'b0001: outbound TLPs (P, NP, CPL) 193 + - 4'b0010: inbound TLPs (P, NP, CPL B) 194 + - 4'b0011: inbound TLPs (CPL A) 195 + 196 + Inbound completions are classified into two types: 197 + 198 + - completion A (CPL A): completion of CHI/DMA/Native non-posted requests, except for CPL B 199 + - completion B (CPL B): completion of DMA remote2local and P2P non-posted requests 200 + 201 + 4. Format 202 + -------------- 203 + 204 + You can change the format of the traced TLP headers by specifying the 205 + `format` parameter. The default format is 4DW. The parameter value is 4 bits. 206 + Currently supported formats and related values are shown below: 207 + 208 + - 4'b0000: 4DW length per TLP header 209 + - 4'b0001: 8DW length per TLP header 210 + 211 + The traced TLP header format is different from the PCIe standard. 212 + 213 + When using the 8DW data format, the entire TLP header is logged 214 + (Header DW0-3 shown below). For example, the TLP header for Memory 215 + Reads with 64-bit addresses is shown in PCIe r5.0, Figure 2-17; 216 + the header for Configuration Requests is shown in Figure 2-20, etc. 217 + 218 + In addition, 8DW trace buffer entries contain a timestamp and 219 + possibly a PASID TLP prefix (see Figure 6-20, PCIe r5.0). 220 + Otherwise the prefix field will be all 0. 221 + 222 + Bits [31:11] of DW0 are always 0x1fffff, which can be 223 + used to distinguish the data format. 
The 8DW format is as follows 224 + :: 225 + 226 + bits [ 31:11 ][ 10:0 ] 227 + |---------------------------------------|-------------------| 228 + DW0 [ 0x1fffff ][ Reserved (0x7ff) ] 229 + DW1 [ Prefix ] 230 + DW2 [ Header DW0 ] 231 + DW3 [ Header DW1 ] 232 + DW4 [ Header DW2 ] 233 + DW5 [ Header DW3 ] 234 + DW6 [ Reserved (0x0) ] 235 + DW7 [ Time ] 236 + 237 + When using the 4DW data format, DW0 of the trace buffer entry 238 + contains selected fields of DW0 of the TLP, together with a 239 + timestamp. DW1-DW3 of the trace buffer entry contain DW1-DW3 240 + directly from the TLP header. 241 + 242 + The 4DW format is as follows 243 + :: 244 + 245 + bits [31:30] [ 29:25 ][24][23][22][21][ 20:11 ][ 10:0 ] 246 + |-----|---------|---|---|---|---|-------------|-------------| 247 + DW0 [ Fmt ][ Type ][T9][T8][TH][SO][ Length ][ Time ] 248 + DW1 [ Header DW1 ] 249 + DW2 [ Header DW2 ] 250 + DW3 [ Header DW3 ] 251 + 252 + 5. Memory Management 253 + -------------------- 254 + 255 + The traced TLP headers will be written to the memory allocated 256 + by the driver. The hardware accepts 4 DMA addresses of the same size, 257 + and writes the buffers sequentially as below. If DMA addr 3 is 258 + finished and the trace is still on, it will return to addr 0. 259 + :: 260 + 261 + +->[DMA addr 0]->[DMA addr 1]->[DMA addr 2]->[DMA addr 3]-+ 262 + +---------------------------------------------------------+ 263 + 264 + The driver will allocate each DMA buffer at 4MiB. The finished buffer 265 + will be copied to the perf AUX buffer allocated by the perf core. 266 + Once the AUX buffer is full while the trace is still on, the driver 267 + will commit the AUX buffer first and then apply for a new one of 268 + the same size. The size of the AUX buffer defaults to 16MiB. Users can 269 + adjust the size by specifying the `-m` parameter of the perf command. 270 + 271 + 6. 
Decoding 272 + ----------- 273 + 274 + You can decode the traced data with the `perf report -D` command (currently 275 + only dumping the raw trace data is supported). The traced data will be decoded 276 + according to the format described previously (take 8DW as an example): 277 + :: 278 + 279 + [...perf headers and other information] 280 + . ... HISI PTT data: size 4194304 bytes 281 + . 00000000: 00 00 00 00 Prefix 282 + . 00000004: 01 00 00 60 Header DW0 283 + . 00000008: 0f 1e 00 01 Header DW1 284 + . 0000000c: 04 00 00 00 Header DW2 285 + . 00000010: 40 00 81 02 Header DW3 286 + . 00000014: 33 c0 04 00 Time 287 + . 00000020: 00 00 00 00 Prefix 288 + . 00000024: 01 00 00 60 Header DW0 289 + . 00000028: 0f 1e 00 01 Header DW1 290 + . 0000002c: 04 00 00 00 Header DW2 291 + . 00000030: 40 00 81 02 Header DW3 292 + . 00000034: 02 00 00 00 Time 293 + . 00000040: 00 00 00 00 Prefix 294 + . 00000044: 01 00 00 60 Header DW0 295 + . 00000048: 0f 1e 00 01 Header DW1 296 + . 0000004c: 04 00 00 00 Header DW2 297 + . 00000050: 40 00 81 02 Header DW3 298 + [...]
+1
Documentation/trace/index.rst
··· 33 33 coresight/index 34 34 user_events 35 35 rv/index 36 + hisi-ptt
+8
MAINTAINERS
··· 9180 9180 F: Documentation/admin-guide/perf/hns3-pmu.rst 9181 9181 F: drivers/perf/hisilicon/hns3_pmu.c 9182 9182 9183 + HISILICON PTT DRIVER 9184 + M: Yicong Yang <yangyicong@hisilicon.com> 9185 + L: linux-kernel@vger.kernel.org 9186 + S: Maintained 9187 + F: Documentation/ABI/testing/sysfs-devices-hisi_ptt 9188 + F: Documentation/trace/hisi-ptt.rst 9189 + F: drivers/hwtracing/ptt/ 9190 + 9183 9191 HISILICON QM DRIVER 9184 9192 M: Weili Qian <qianweili@huawei.com> 9185 9193 M: Zhou Wang <wangzhou1@hisilicon.com>
+1
arch/arm64/include/asm/sysreg.h
··· 1021 1021 #define SYS_MPIDR_SAFE_VAL (BIT(31)) 1022 1022 1023 1023 #define TRFCR_ELx_TS_SHIFT 5 1024 + #define TRFCR_ELx_TS_MASK ((0x3UL) << TRFCR_ELx_TS_SHIFT) 1024 1025 #define TRFCR_ELx_TS_VIRTUAL ((0x1UL) << TRFCR_ELx_TS_SHIFT) 1025 1026 #define TRFCR_ELx_TS_GUEST_PHYSICAL ((0x2UL) << TRFCR_ELx_TS_SHIFT) 1026 1027 #define TRFCR_ELx_TS_PHYSICAL ((0x3UL) << TRFCR_ELx_TS_SHIFT)
+1
drivers/Makefile
··· 175 175 obj-$(CONFIG_CORESIGHT) += hwtracing/coresight/ 176 176 obj-y += hwtracing/intel_th/ 177 177 obj-$(CONFIG_STM) += hwtracing/stm/ 178 + obj-$(CONFIG_HISI_PTT) += hwtracing/ptt/ 178 179 obj-y += android/ 179 180 obj-$(CONFIG_NVMEM) += nvmem/ 180 181 obj-$(CONFIG_FPGA) += fpga/
+2
drivers/hwtracing/Kconfig
··· 5 5 6 6 source "drivers/hwtracing/intel_th/Kconfig" 7 7 8 + source "drivers/hwtracing/ptt/Kconfig" 9 + 8 10 endmenu
+2 -2
drivers/hwtracing/coresight/Kconfig
··· 193 193 depends on ARM64 && CORESIGHT_SOURCE_ETM4X 194 194 help 195 195 This driver provides support for percpu Trace Buffer Extension (TRBE). 196 - TRBE always needs to be used along with it's corresponding percpu ETE 196 + TRBE always needs to be used along with its corresponding percpu ETE 197 197 component. ETE generates trace data which is then captured with TRBE. 198 198 Unlike traditional sink devices, TRBE is a CPU feature accessible via 199 - system registers. But it's explicit dependency with trace unit (ETE) 199 + system registers. But its explicit dependency with trace unit (ETE) 200 200 requires it to be plugged in as a coresight sink device. 201 201 202 202 To compile this driver as a module, choose M here: the module will be
+8 -19
drivers/hwtracing/coresight/coresight-catu.c
··· 365 365 .get_data = catu_get_data_etr_buf, 366 366 }; 367 367 368 - coresight_simple_reg32(struct catu_drvdata, devid, CORESIGHT_DEVID); 369 - coresight_simple_reg32(struct catu_drvdata, control, CATU_CONTROL); 370 - coresight_simple_reg32(struct catu_drvdata, status, CATU_STATUS); 371 - coresight_simple_reg32(struct catu_drvdata, mode, CATU_MODE); 372 - coresight_simple_reg32(struct catu_drvdata, axictrl, CATU_AXICTRL); 373 - coresight_simple_reg32(struct catu_drvdata, irqen, CATU_IRQEN); 374 - coresight_simple_reg64(struct catu_drvdata, sladdr, 375 - CATU_SLADDRLO, CATU_SLADDRHI); 376 - coresight_simple_reg64(struct catu_drvdata, inaddr, 377 - CATU_INADDRLO, CATU_INADDRHI); 378 - 379 368 static struct attribute *catu_mgmt_attrs[] = { 380 - &dev_attr_devid.attr, 381 - &dev_attr_control.attr, 382 - &dev_attr_status.attr, 383 - &dev_attr_mode.attr, 384 - &dev_attr_axictrl.attr, 385 - &dev_attr_irqen.attr, 386 - &dev_attr_sladdr.attr, 387 - &dev_attr_inaddr.attr, 369 + coresight_simple_reg32(devid, CORESIGHT_DEVID), 370 + coresight_simple_reg32(control, CATU_CONTROL), 371 + coresight_simple_reg32(status, CATU_STATUS), 372 + coresight_simple_reg32(mode, CATU_MODE), 373 + coresight_simple_reg32(axictrl, CATU_AXICTRL), 374 + coresight_simple_reg32(irqen, CATU_IRQEN), 375 + coresight_simple_reg64(sladdr, CATU_SLADDRLO, CATU_SLADDRHI), 376 + coresight_simple_reg64(inaddr, CATU_INADDRLO, CATU_INADDRHI), 388 377 NULL, 389 378 }; 390 379
+4 -4
drivers/hwtracing/coresight/coresight-catu.h
··· 70 70 static inline u32 \ 71 71 catu_read_##name(struct catu_drvdata *drvdata) \ 72 72 { \ 73 - return coresight_read_reg_pair(drvdata->base, offset, -1); \ 73 + return csdev_access_relaxed_read32(&drvdata->csdev->access, offset); \ 74 74 } \ 75 75 static inline void \ 76 76 catu_write_##name(struct catu_drvdata *drvdata, u32 val) \ 77 77 { \ 78 - coresight_write_reg_pair(drvdata->base, val, offset, -1); \ 78 + csdev_access_relaxed_write32(&drvdata->csdev->access, val, offset); \ 79 79 } 80 80 81 81 #define CATU_REG_PAIR(name, lo_off, hi_off) \ 82 82 static inline u64 \ 83 83 catu_read_##name(struct catu_drvdata *drvdata) \ 84 84 { \ 85 - return coresight_read_reg_pair(drvdata->base, lo_off, hi_off); \ 85 + return csdev_access_relaxed_read_pair(&drvdata->csdev->access, lo_off, hi_off); \ 86 86 } \ 87 87 static inline void \ 88 88 catu_write_##name(struct catu_drvdata *drvdata, u64 val) \ 89 89 { \ 90 - coresight_write_reg_pair(drvdata->base, val, lo_off, hi_off); \ 90 + csdev_access_relaxed_write_pair(&drvdata->csdev->access, val, lo_off, hi_off); \ 91 91 } 92 92 93 93 CATU_REG32(control, CATU_CONTROL);
+28
drivers/hwtracing/coresight/coresight-core.c
··· 60 60 61 61 static const struct cti_assoc_op *cti_assoc_ops; 62 62 63 + ssize_t coresight_simple_show_pair(struct device *_dev, 64 + struct device_attribute *attr, char *buf) 65 + { 66 + struct coresight_device *csdev = container_of(_dev, struct coresight_device, dev); 67 + struct cs_pair_attribute *cs_attr = container_of(attr, struct cs_pair_attribute, attr); 68 + u64 val; 69 + 70 + pm_runtime_get_sync(_dev->parent); 71 + val = csdev_access_relaxed_read_pair(&csdev->access, cs_attr->lo_off, cs_attr->hi_off); 72 + pm_runtime_put_sync(_dev->parent); 73 + return sysfs_emit(buf, "0x%llx\n", val); 74 + } 75 + EXPORT_SYMBOL_GPL(coresight_simple_show_pair); 76 + 77 + ssize_t coresight_simple_show32(struct device *_dev, 78 + struct device_attribute *attr, char *buf) 79 + { 80 + struct coresight_device *csdev = container_of(_dev, struct coresight_device, dev); 81 + struct cs_off_attribute *cs_attr = container_of(attr, struct cs_off_attribute, attr); 82 + u64 val; 83 + 84 + pm_runtime_get_sync(_dev->parent); 85 + val = csdev_access_relaxed_read32(&csdev->access, cs_attr->off); 86 + pm_runtime_put_sync(_dev->parent); 87 + return sysfs_emit(buf, "0x%llx\n", val); 88 + } 89 + EXPORT_SYMBOL_GPL(coresight_simple_show32); 90 + 63 91 void coresight_set_cti_ops(const struct cti_assoc_op *cti_op) 64 92 { 65 93 cti_assoc_ops = cti_op;
+86 -127
drivers/hwtracing/coresight/coresight-cti-sysfs.c
··· 163 163 164 164 /* register based attributes */ 165 165 166 - /* macro to access RO registers with power check only (no enable check). */ 167 - #define coresight_cti_reg(name, offset) \ 168 - static ssize_t name##_show(struct device *dev, \ 169 - struct device_attribute *attr, char *buf) \ 170 - { \ 171 - struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent); \ 172 - u32 val = 0; \ 173 - pm_runtime_get_sync(dev->parent); \ 174 - spin_lock(&drvdata->spinlock); \ 175 - if (drvdata->config.hw_powered) \ 176 - val = readl_relaxed(drvdata->base + offset); \ 177 - spin_unlock(&drvdata->spinlock); \ 178 - pm_runtime_put_sync(dev->parent); \ 179 - return sprintf(buf, "0x%x\n", val); \ 180 - } \ 181 - static DEVICE_ATTR_RO(name) 166 + /* Read registers with power check only (no enable check). */ 167 + static ssize_t coresight_cti_reg_show(struct device *dev, 168 + struct device_attribute *attr, char *buf) 169 + { 170 + struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent); 171 + struct cs_off_attribute *cti_attr = container_of(attr, struct cs_off_attribute, attr); 172 + u32 val = 0; 173 + 174 + pm_runtime_get_sync(dev->parent); 175 + spin_lock(&drvdata->spinlock); 176 + if (drvdata->config.hw_powered) 177 + val = readl_relaxed(drvdata->base + cti_attr->off); 178 + spin_unlock(&drvdata->spinlock); 179 + pm_runtime_put_sync(dev->parent); 180 + return sysfs_emit(buf, "0x%x\n", val); 181 + } 182 + 183 + /* Write registers with power check only (no enable check). 
*/ 184 + static __maybe_unused ssize_t coresight_cti_reg_store(struct device *dev, 185 + struct device_attribute *attr, 186 + const char *buf, size_t size) 187 + { 188 + struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent); 189 + struct cs_off_attribute *cti_attr = container_of(attr, struct cs_off_attribute, attr); 190 + unsigned long val = 0; 191 + 192 + if (kstrtoul(buf, 0, &val)) 193 + return -EINVAL; 194 + 195 + pm_runtime_get_sync(dev->parent); 196 + spin_lock(&drvdata->spinlock); 197 + if (drvdata->config.hw_powered) 198 + cti_write_single_reg(drvdata, cti_attr->off, val); 199 + spin_unlock(&drvdata->spinlock); 200 + pm_runtime_put_sync(dev->parent); 201 + return size; 202 + } 203 + 204 + #define coresight_cti_reg(name, offset) \ 205 + (&((struct cs_off_attribute[]) { \ 206 + { \ 207 + __ATTR(name, 0444, coresight_cti_reg_show, NULL), \ 208 + offset \ 209 + } \ 210 + })[0].attr.attr) 211 + 212 + #define coresight_cti_reg_rw(name, offset) \ 213 + (&((struct cs_off_attribute[]) { \ 214 + { \ 215 + __ATTR(name, 0644, coresight_cti_reg_show, \ 216 + coresight_cti_reg_store), \ 217 + offset \ 218 + } \ 219 + })[0].attr.attr) 220 + 221 + #define coresight_cti_reg_wo(name, offset) \ 222 + (&((struct cs_off_attribute[]) { \ 223 + { \ 224 + __ATTR(name, 0200, NULL, coresight_cti_reg_store), \ 225 + offset \ 226 + } \ 227 + })[0].attr.attr) 182 228 183 229 /* coresight management registers */ 184 - coresight_cti_reg(devaff0, CTIDEVAFF0); 185 - coresight_cti_reg(devaff1, CTIDEVAFF1); 186 - coresight_cti_reg(authstatus, CORESIGHT_AUTHSTATUS); 187 - coresight_cti_reg(devarch, CORESIGHT_DEVARCH); 188 - coresight_cti_reg(devid, CORESIGHT_DEVID); 189 - coresight_cti_reg(devtype, CORESIGHT_DEVTYPE); 190 - coresight_cti_reg(pidr0, CORESIGHT_PERIPHIDR0); 191 - coresight_cti_reg(pidr1, CORESIGHT_PERIPHIDR1); 192 - coresight_cti_reg(pidr2, CORESIGHT_PERIPHIDR2); 193 - coresight_cti_reg(pidr3, CORESIGHT_PERIPHIDR3); 194 - coresight_cti_reg(pidr4, CORESIGHT_PERIPHIDR4); 195 - 
196 230 static struct attribute *coresight_cti_mgmt_attrs[] = { 197 - &dev_attr_devaff0.attr, 198 - &dev_attr_devaff1.attr, 199 - &dev_attr_authstatus.attr, 200 - &dev_attr_devarch.attr, 201 - &dev_attr_devid.attr, 202 - &dev_attr_devtype.attr, 203 - &dev_attr_pidr0.attr, 204 - &dev_attr_pidr1.attr, 205 - &dev_attr_pidr2.attr, 206 - &dev_attr_pidr3.attr, 207 - &dev_attr_pidr4.attr, 231 + coresight_cti_reg(devaff0, CTIDEVAFF0), 232 + coresight_cti_reg(devaff1, CTIDEVAFF1), 233 + coresight_cti_reg(authstatus, CORESIGHT_AUTHSTATUS), 234 + coresight_cti_reg(devarch, CORESIGHT_DEVARCH), 235 + coresight_cti_reg(devid, CORESIGHT_DEVID), 236 + coresight_cti_reg(devtype, CORESIGHT_DEVTYPE), 237 + coresight_cti_reg(pidr0, CORESIGHT_PERIPHIDR0), 238 + coresight_cti_reg(pidr1, CORESIGHT_PERIPHIDR1), 239 + coresight_cti_reg(pidr2, CORESIGHT_PERIPHIDR2), 240 + coresight_cti_reg(pidr3, CORESIGHT_PERIPHIDR3), 241 + coresight_cti_reg(pidr4, CORESIGHT_PERIPHIDR4), 208 242 NULL, 209 243 }; 210 244 ··· 488 454 } 489 455 static DEVICE_ATTR_WO(apppulse); 490 456 491 - coresight_cti_reg(triginstatus, CTITRIGINSTATUS); 492 - coresight_cti_reg(trigoutstatus, CTITRIGOUTSTATUS); 493 - coresight_cti_reg(chinstatus, CTICHINSTATUS); 494 - coresight_cti_reg(choutstatus, CTICHOUTSTATUS); 495 - 496 457 /* 497 458 * Define CONFIG_CORESIGHT_CTI_INTEGRATION_REGS to enable the access to the 498 459 * integration control registers. Normally only used to investigate connection 499 460 * data. 500 461 */ 501 - #ifdef CONFIG_CORESIGHT_CTI_INTEGRATION_REGS 502 - 503 - /* macro to access RW registers with power check only (no enable check). 
*/ 504 - #define coresight_cti_reg_rw(name, offset) \ 505 - static ssize_t name##_show(struct device *dev, \ 506 - struct device_attribute *attr, char *buf) \ 507 - { \ 508 - struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent); \ 509 - u32 val = 0; \ 510 - pm_runtime_get_sync(dev->parent); \ 511 - spin_lock(&drvdata->spinlock); \ 512 - if (drvdata->config.hw_powered) \ 513 - val = readl_relaxed(drvdata->base + offset); \ 514 - spin_unlock(&drvdata->spinlock); \ 515 - pm_runtime_put_sync(dev->parent); \ 516 - return sprintf(buf, "0x%x\n", val); \ 517 - } \ 518 - \ 519 - static ssize_t name##_store(struct device *dev, \ 520 - struct device_attribute *attr, \ 521 - const char *buf, size_t size) \ 522 - { \ 523 - struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent); \ 524 - unsigned long val = 0; \ 525 - if (kstrtoul(buf, 0, &val)) \ 526 - return -EINVAL; \ 527 - \ 528 - pm_runtime_get_sync(dev->parent); \ 529 - spin_lock(&drvdata->spinlock); \ 530 - if (drvdata->config.hw_powered) \ 531 - cti_write_single_reg(drvdata, offset, val); \ 532 - spin_unlock(&drvdata->spinlock); \ 533 - pm_runtime_put_sync(dev->parent); \ 534 - return size; \ 535 - } \ 536 - static DEVICE_ATTR_RW(name) 537 - 538 - /* macro to access WO registers with power check only (no enable check). 
*/ 539 - #define coresight_cti_reg_wo(name, offset) \ 540 - static ssize_t name##_store(struct device *dev, \ 541 - struct device_attribute *attr, \ 542 - const char *buf, size_t size) \ 543 - { \ 544 - struct cti_drvdata *drvdata = dev_get_drvdata(dev->parent); \ 545 - unsigned long val = 0; \ 546 - if (kstrtoul(buf, 0, &val)) \ 547 - return -EINVAL; \ 548 - \ 549 - pm_runtime_get_sync(dev->parent); \ 550 - spin_lock(&drvdata->spinlock); \ 551 - if (drvdata->config.hw_powered) \ 552 - cti_write_single_reg(drvdata, offset, val); \ 553 - spin_unlock(&drvdata->spinlock); \ 554 - pm_runtime_put_sync(dev->parent); \ 555 - return size; \ 556 - } \ 557 - static DEVICE_ATTR_WO(name) 558 - 559 - coresight_cti_reg_rw(itchout, ITCHOUT); 560 - coresight_cti_reg_rw(ittrigout, ITTRIGOUT); 561 - coresight_cti_reg_rw(itctrl, CORESIGHT_ITCTRL); 562 - coresight_cti_reg_wo(itchinack, ITCHINACK); 563 - coresight_cti_reg_wo(ittriginack, ITTRIGINACK); 564 - coresight_cti_reg(ittrigin, ITTRIGIN); 565 - coresight_cti_reg(itchin, ITCHIN); 566 - coresight_cti_reg(itchoutack, ITCHOUTACK); 567 - coresight_cti_reg(ittrigoutack, ITTRIGOUTACK); 568 - 569 - #endif /* CORESIGHT_CTI_INTEGRATION_REGS */ 570 - 571 462 static struct attribute *coresight_cti_regs_attrs[] = { 572 463 &dev_attr_inout_sel.attr, 573 464 &dev_attr_inen.attr, ··· 503 544 &dev_attr_appset.attr, 504 545 &dev_attr_appclear.attr, 505 546 &dev_attr_apppulse.attr, 506 - &dev_attr_triginstatus.attr, 507 - &dev_attr_trigoutstatus.attr, 508 - &dev_attr_chinstatus.attr, 509 - &dev_attr_choutstatus.attr, 547 + coresight_cti_reg(triginstatus, CTITRIGINSTATUS), 548 + coresight_cti_reg(trigoutstatus, CTITRIGOUTSTATUS), 549 + coresight_cti_reg(chinstatus, CTICHINSTATUS), 550 + coresight_cti_reg(choutstatus, CTICHOUTSTATUS), 510 551 #ifdef CONFIG_CORESIGHT_CTI_INTEGRATION_REGS 511 - &dev_attr_itctrl.attr, 512 - &dev_attr_ittrigin.attr, 513 - &dev_attr_itchin.attr, 514 - &dev_attr_ittrigout.attr, 515 - &dev_attr_itchout.attr, 516 - 
&dev_attr_itchoutack.attr, 517 - &dev_attr_ittrigoutack.attr, 518 - &dev_attr_ittriginack.attr, 519 - &dev_attr_itchinack.attr, 552 + coresight_cti_reg_rw(itctrl, CORESIGHT_ITCTRL), 553 + coresight_cti_reg(ittrigin, ITTRIGIN), 554 + coresight_cti_reg(itchin, ITCHIN), 555 + coresight_cti_reg_rw(ittrigout, ITTRIGOUT), 556 + coresight_cti_reg_rw(itchout, ITCHOUT), 557 + coresight_cti_reg(itchoutack, ITCHOUTACK), 558 + coresight_cti_reg(ittrigoutack, ITTRIGOUTACK), 559 + coresight_cti_reg_wo(ittriginack, ITTRIGINACK), 560 + coresight_cti_reg_wo(itchinack, ITCHINACK), 520 561 #endif 521 562 NULL, 522 563 };
+8 -20
drivers/hwtracing/coresight/coresight-etb10.c
··· 655 655 .llseek = no_llseek, 656 656 }; 657 657 658 - #define coresight_etb10_reg(name, offset) \ 659 - coresight_simple_reg32(struct etb_drvdata, name, offset) 660 - 661 - coresight_etb10_reg(rdp, ETB_RAM_DEPTH_REG); 662 - coresight_etb10_reg(sts, ETB_STATUS_REG); 663 - coresight_etb10_reg(rrp, ETB_RAM_READ_POINTER); 664 - coresight_etb10_reg(rwp, ETB_RAM_WRITE_POINTER); 665 - coresight_etb10_reg(trg, ETB_TRG); 666 - coresight_etb10_reg(ctl, ETB_CTL_REG); 667 - coresight_etb10_reg(ffsr, ETB_FFSR); 668 - coresight_etb10_reg(ffcr, ETB_FFCR); 669 - 670 658 static struct attribute *coresight_etb_mgmt_attrs[] = { 671 - &dev_attr_rdp.attr, 672 - &dev_attr_sts.attr, 673 - &dev_attr_rrp.attr, 674 - &dev_attr_rwp.attr, 675 - &dev_attr_trg.attr, 676 - &dev_attr_ctl.attr, 677 - &dev_attr_ffsr.attr, 678 - &dev_attr_ffcr.attr, 659 + coresight_simple_reg32(rdp, ETB_RAM_DEPTH_REG), 660 + coresight_simple_reg32(sts, ETB_STATUS_REG), 661 + coresight_simple_reg32(rrp, ETB_RAM_READ_POINTER), 662 + coresight_simple_reg32(rwp, ETB_RAM_WRITE_POINTER), 663 + coresight_simple_reg32(trg, ETB_TRG), 664 + coresight_simple_reg32(ctl, ETB_CTL_REG), 665 + coresight_simple_reg32(ffsr, ETB_FFSR), 666 + coresight_simple_reg32(ffcr, ETB_FFCR), 679 667 NULL, 680 668 }; 681 669
+10 -24
drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
··· 1252 1252 NULL, 1253 1253 }; 1254 1254 1255 - #define coresight_etm3x_reg(name, offset) \ 1256 - coresight_simple_reg32(struct etm_drvdata, name, offset) 1257 - 1258 - coresight_etm3x_reg(etmccr, ETMCCR); 1259 - coresight_etm3x_reg(etmccer, ETMCCER); 1260 - coresight_etm3x_reg(etmscr, ETMSCR); 1261 - coresight_etm3x_reg(etmidr, ETMIDR); 1262 - coresight_etm3x_reg(etmcr, ETMCR); 1263 - coresight_etm3x_reg(etmtraceidr, ETMTRACEIDR); 1264 - coresight_etm3x_reg(etmteevr, ETMTEEVR); 1265 - coresight_etm3x_reg(etmtssvr, ETMTSSCR); 1266 - coresight_etm3x_reg(etmtecr1, ETMTECR1); 1267 - coresight_etm3x_reg(etmtecr2, ETMTECR2); 1268 - 1269 1255 static struct attribute *coresight_etm_mgmt_attrs[] = { 1270 - &dev_attr_etmccr.attr, 1271 - &dev_attr_etmccer.attr, 1272 - &dev_attr_etmscr.attr, 1273 - &dev_attr_etmidr.attr, 1274 - &dev_attr_etmcr.attr, 1275 - &dev_attr_etmtraceidr.attr, 1276 - &dev_attr_etmteevr.attr, 1277 - &dev_attr_etmtssvr.attr, 1278 - &dev_attr_etmtecr1.attr, 1279 - &dev_attr_etmtecr2.attr, 1256 + coresight_simple_reg32(etmccr, ETMCCR), 1257 + coresight_simple_reg32(etmccer, ETMCCER), 1258 + coresight_simple_reg32(etmscr, ETMSCR), 1259 + coresight_simple_reg32(etmidr, ETMIDR), 1260 + coresight_simple_reg32(etmcr, ETMCR), 1261 + coresight_simple_reg32(etmtraceidr, ETMTRACEIDR), 1262 + coresight_simple_reg32(etmteevr, ETMTEEVR), 1263 + coresight_simple_reg32(etmtssvr, ETMTSSCR), 1264 + coresight_simple_reg32(etmtecr1, ETMTECR1), 1265 + coresight_simple_reg32(etmtecr2, ETMTECR2), 1280 1266 NULL, 1281 1267 }; 1282 1268
+29
drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
··· 2306 2306 } 2307 2307 static DEVICE_ATTR_RO(cpu); 2308 2308 2309 + static ssize_t ts_source_show(struct device *dev, 2310 + struct device_attribute *attr, 2311 + char *buf) 2312 + { 2313 + int val; 2314 + struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 2315 + 2316 + if (!drvdata->trfcr) { 2317 + val = -1; 2318 + goto out; 2319 + } 2320 + 2321 + switch (drvdata->trfcr & TRFCR_ELx_TS_MASK) { 2322 + case TRFCR_ELx_TS_VIRTUAL: 2323 + case TRFCR_ELx_TS_GUEST_PHYSICAL: 2324 + case TRFCR_ELx_TS_PHYSICAL: 2325 + val = FIELD_GET(TRFCR_ELx_TS_MASK, drvdata->trfcr); 2326 + break; 2327 + default: 2328 + val = -1; 2329 + break; 2330 + } 2331 + 2332 + out: 2333 + return sysfs_emit(buf, "%d\n", val); 2334 + } 2335 + static DEVICE_ATTR_RO(ts_source); 2336 + 2309 2337 static struct attribute *coresight_etmv4_attrs[] = { 2310 2338 &dev_attr_nr_pe_cmp.attr, 2311 2339 &dev_attr_nr_addr_cmp.attr, ··· 2388 2360 &dev_attr_vmid_val.attr, 2389 2361 &dev_attr_vmid_masks.attr, 2390 2362 &dev_attr_cpu.attr, 2363 + &dev_attr_ts_source.attr, 2391 2364 NULL, 2392 2365 }; 2393 2366
+29 -43
drivers/hwtracing/coresight/coresight-priv.h
··· 39 39
40 40 #define ETM_MODE_EXCL_KERN BIT(30)
41 41 #define ETM_MODE_EXCL_USER BIT(31)
42 + struct cs_pair_attribute {
43 + struct device_attribute attr;
44 + u32 lo_off;
45 + u32 hi_off;
46 + };
42 47
43 - typedef u32 (*coresight_read_fn)(const struct device *, u32 offset);
44 - #define __coresight_simple_func(type, func, name, lo_off, hi_off) \
45 - static ssize_t name##_show(struct device *_dev, \
46 - struct device_attribute *attr, char *buf) \
47 - { \
48 - type *drvdata = dev_get_drvdata(_dev->parent); \
49 - coresight_read_fn fn = func; \
50 - u64 val; \
51 - pm_runtime_get_sync(_dev->parent); \
52 - if (fn) \
53 - val = (u64)fn(_dev->parent, lo_off); \
54 - else \
55 - val = coresight_read_reg_pair(drvdata->base, \
56 - lo_off, hi_off); \
57 - pm_runtime_put_sync(_dev->parent); \
58 - return scnprintf(buf, PAGE_SIZE, "0x%llx\n", val); \
59 - } \
60 - static DEVICE_ATTR_RO(name)
48 + struct cs_off_attribute {
49 + struct device_attribute attr;
50 + u32 off;
51 + };
61 52
62 - #define coresight_simple_func(type, func, name, offset) \
63 - __coresight_simple_func(type, func, name, offset, -1)
64 - #define coresight_simple_reg32(type, name, offset) \
65 - __coresight_simple_func(type, NULL, name, offset, -1)
66 - #define coresight_simple_reg64(type, name, lo_off, hi_off) \
67 - __coresight_simple_func(type, NULL, name, lo_off, hi_off)
53 + extern ssize_t coresight_simple_show32(struct device *_dev,
54 + struct device_attribute *attr, char *buf);
55 + extern ssize_t coresight_simple_show_pair(struct device *_dev,
56 + struct device_attribute *attr, char *buf);
57 +
58 + #define coresight_simple_reg32(name, offset) \
59 + (&((struct cs_off_attribute[]) { \
60 + { \
61 + __ATTR(name, 0444, coresight_simple_show32, NULL), \
62 + offset \
63 + } \
64 + })[0].attr.attr)
65 +
66 + #define coresight_simple_reg64(name, lo_off, hi_off) \
67 + (&((struct cs_pair_attribute[]) { \
68 + { \
69 + __ATTR(name, 0444, coresight_simple_show_pair, NULL), \
70 + lo_off, hi_off \
71 + } \
72 + })[0].attr.attr)
68 73
69 74 extern const u32 coresight_barrier_pkt[4];
70 75 #define CORESIGHT_BARRIER_PKT_SIZE (sizeof(coresight_barrier_pkt))
··· 130 125 /* Make sure everyone has seen this */
131 126 mb();
132 127 } while (0);
133 - }
134 -
135 - static inline u64
136 - coresight_read_reg_pair(void __iomem *addr, s32 lo_offset, s32 hi_offset)
137 - {
138 - u64 val;
139 -
140 - val = readl_relaxed(addr + lo_offset);
141 - val |= (hi_offset < 0) ? 0 :
142 - (u64)readl_relaxed(addr + hi_offset) << 32;
143 - return val;
144 - }
145 -
146 - static inline void coresight_write_reg_pair(void __iomem *addr, u64 val,
147 - s32 lo_offset, s32 hi_offset)
148 - {
149 - writel_relaxed((u32)val, addr + lo_offset);
150 - if (hi_offset >= 0)
151 - writel_relaxed((u32)(val >> 32), addr + hi_offset);
152 128 }
153 129
154 130 void coresight_disable_path(struct list_head *path);
+2 -8
drivers/hwtracing/coresight/coresight-replicator.c
··· 196 196 .link_ops = &replicator_link_ops, 197 197 }; 198 198 199 - #define coresight_replicator_reg(name, offset) \ 200 - coresight_simple_reg32(struct replicator_drvdata, name, offset) 201 - 202 - coresight_replicator_reg(idfilter0, REPLICATOR_IDFILTER0); 203 - coresight_replicator_reg(idfilter1, REPLICATOR_IDFILTER1); 204 - 205 199 static struct attribute *replicator_mgmt_attrs[] = { 206 - &dev_attr_idfilter0.attr, 207 - &dev_attr_idfilter1.attr, 200 + coresight_simple_reg32(idfilter0, REPLICATOR_IDFILTER0), 201 + coresight_simple_reg32(idfilter1, REPLICATOR_IDFILTER1), 208 202 NULL, 209 203 }; 210 204
+12 -28
drivers/hwtracing/coresight/coresight-stm.c
··· 634 634 } 635 635 static DEVICE_ATTR_RW(traceid); 636 636 637 - #define coresight_stm_reg(name, offset) \ 638 - coresight_simple_reg32(struct stm_drvdata, name, offset) 639 - 640 - coresight_stm_reg(tcsr, STMTCSR); 641 - coresight_stm_reg(tsfreqr, STMTSFREQR); 642 - coresight_stm_reg(syncr, STMSYNCR); 643 - coresight_stm_reg(sper, STMSPER); 644 - coresight_stm_reg(spter, STMSPTER); 645 - coresight_stm_reg(privmaskr, STMPRIVMASKR); 646 - coresight_stm_reg(spscr, STMSPSCR); 647 - coresight_stm_reg(spmscr, STMSPMSCR); 648 - coresight_stm_reg(spfeat1r, STMSPFEAT1R); 649 - coresight_stm_reg(spfeat2r, STMSPFEAT2R); 650 - coresight_stm_reg(spfeat3r, STMSPFEAT3R); 651 - coresight_stm_reg(devid, CORESIGHT_DEVID); 652 - 653 637 static struct attribute *coresight_stm_attrs[] = { 654 638 &dev_attr_hwevent_enable.attr, 655 639 &dev_attr_hwevent_select.attr, ··· 644 660 }; 645 661 646 662 static struct attribute *coresight_stm_mgmt_attrs[] = { 647 - &dev_attr_tcsr.attr, 648 - &dev_attr_tsfreqr.attr, 649 - &dev_attr_syncr.attr, 650 - &dev_attr_sper.attr, 651 - &dev_attr_spter.attr, 652 - &dev_attr_privmaskr.attr, 653 - &dev_attr_spscr.attr, 654 - &dev_attr_spmscr.attr, 655 - &dev_attr_spfeat1r.attr, 656 - &dev_attr_spfeat2r.attr, 657 - &dev_attr_spfeat3r.attr, 658 - &dev_attr_devid.attr, 663 + coresight_simple_reg32(tcsr, STMTCSR), 664 + coresight_simple_reg32(tsfreqr, STMTSFREQR), 665 + coresight_simple_reg32(syncr, STMSYNCR), 666 + coresight_simple_reg32(sper, STMSPER), 667 + coresight_simple_reg32(spter, STMSPTER), 668 + coresight_simple_reg32(privmaskr, STMPRIVMASKR), 669 + coresight_simple_reg32(spscr, STMSPSCR), 670 + coresight_simple_reg32(spmscr, STMSPMSCR), 671 + coresight_simple_reg32(spfeat1r, STMSPFEAT1R), 672 + coresight_simple_reg32(spfeat2r, STMSPFEAT2R), 673 + coresight_simple_reg32(spfeat3r, STMSPFEAT3R), 674 + coresight_simple_reg32(devid, CORESIGHT_DEVID), 659 675 NULL, 660 676 }; 661 677
+14 -34
drivers/hwtracing/coresight/coresight-tmc-core.c
··· 251 251 return memwidth;
252 252 }
253 253
254 - #define coresight_tmc_reg(name, offset) \
255 - coresight_simple_reg32(struct tmc_drvdata, name, offset)
256 - #define coresight_tmc_reg64(name, lo_off, hi_off) \
257 - coresight_simple_reg64(struct tmc_drvdata, name, lo_off, hi_off)
258 -
259 - coresight_tmc_reg(rsz, TMC_RSZ);
260 - coresight_tmc_reg(sts, TMC_STS);
261 - coresight_tmc_reg(trg, TMC_TRG);
262 - coresight_tmc_reg(ctl, TMC_CTL);
263 - coresight_tmc_reg(ffsr, TMC_FFSR);
264 - coresight_tmc_reg(ffcr, TMC_FFCR);
265 - coresight_tmc_reg(mode, TMC_MODE);
266 - coresight_tmc_reg(pscr, TMC_PSCR);
267 - coresight_tmc_reg(axictl, TMC_AXICTL);
268 - coresight_tmc_reg(authstatus, TMC_AUTHSTATUS);
269 - coresight_tmc_reg(devid, CORESIGHT_DEVID);
270 - coresight_tmc_reg64(rrp, TMC_RRP, TMC_RRPHI);
271 - coresight_tmc_reg64(rwp, TMC_RWP, TMC_RWPHI);
272 - coresight_tmc_reg64(dba, TMC_DBALO, TMC_DBAHI);
273 -
274 254 static struct attribute *coresight_tmc_mgmt_attrs[] = {
275 - &dev_attr_rsz.attr,
276 - &dev_attr_sts.attr,
277 - &dev_attr_rrp.attr,
278 - &dev_attr_rwp.attr,
279 - &dev_attr_trg.attr,
280 - &dev_attr_ctl.attr,
281 - &dev_attr_ffsr.attr,
282 - &dev_attr_ffcr.attr,
283 - &dev_attr_mode.attr,
284 - &dev_attr_pscr.attr,
285 - &dev_attr_devid.attr,
286 - &dev_attr_dba.attr,
287 - &dev_attr_axictl.attr,
288 - &dev_attr_authstatus.attr,
255 + coresight_simple_reg32(rsz, TMC_RSZ),
256 + coresight_simple_reg32(sts, TMC_STS),
257 + coresight_simple_reg64(rrp, TMC_RRP, TMC_RRPHI),
258 + coresight_simple_reg64(rwp, TMC_RWP, TMC_RWPHI),
259 + coresight_simple_reg32(trg, TMC_TRG),
260 + coresight_simple_reg32(ctl, TMC_CTL),
261 + coresight_simple_reg32(ffsr, TMC_FFSR),
262 + coresight_simple_reg32(ffcr, TMC_FFCR),
263 + coresight_simple_reg32(mode, TMC_MODE),
264 + coresight_simple_reg32(pscr, TMC_PSCR),
265 + coresight_simple_reg32(devid, CORESIGHT_DEVID),
266 + coresight_simple_reg64(dba, TMC_DBALO, TMC_DBAHI),
267 + coresight_simple_reg32(axictl, TMC_AXICTL),
268 + coresight_simple_reg32(authstatus, TMC_AUTHSTATUS), 289 269 NULL, 290 270 }; 291 271
+2 -2
drivers/hwtracing/coresight/coresight-tmc.h
··· 282 282 static inline u64 \ 283 283 tmc_read_##name(struct tmc_drvdata *drvdata) \ 284 284 { \ 285 - return coresight_read_reg_pair(drvdata->base, lo_off, hi_off); \ 285 + return csdev_access_relaxed_read_pair(&drvdata->csdev->access, lo_off, hi_off); \ 286 286 } \ 287 287 static inline void \ 288 288 tmc_write_##name(struct tmc_drvdata *drvdata, u64 val) \ 289 289 { \ 290 - coresight_write_reg_pair(drvdata->base, val, lo_off, hi_off); \ 290 + csdev_access_relaxed_write_pair(&drvdata->csdev->access, val, lo_off, hi_off); \ 291 291 } 292 292 293 293 TMC_REG_PAIR(rrp, TMC_RRP, TMC_RRPHI)
+12
drivers/hwtracing/ptt/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + config HISI_PTT 3 + tristate "HiSilicon PCIe Tune and Trace Device" 4 + depends on ARM64 || (COMPILE_TEST && 64BIT) 5 + depends on PCI && HAS_DMA && HAS_IOMEM && PERF_EVENTS 6 + help 7 + HiSilicon PCIe Tune and Trace device exists as a PCIe RCiEP 8 + device, and it provides support for PCIe traffic tuning and 9 + tracing TLP headers to the memory. 10 + 11 + This driver can also be built as a module. If so, the module 12 + will be called hisi_ptt.
+2
drivers/hwtracing/ptt/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + obj-$(CONFIG_HISI_PTT) += hisi_ptt.o
+1046
drivers/hwtracing/ptt/hisi_ptt.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for HiSilicon PCIe tune and trace device 4 + * 5 + * Copyright (c) 2022 HiSilicon Technologies Co., Ltd. 6 + * Author: Yicong Yang <yangyicong@hisilicon.com> 7 + */ 8 + 9 + #include <linux/bitfield.h> 10 + #include <linux/bitops.h> 11 + #include <linux/cpuhotplug.h> 12 + #include <linux/delay.h> 13 + #include <linux/dma-mapping.h> 14 + #include <linux/interrupt.h> 15 + #include <linux/io.h> 16 + #include <linux/iommu.h> 17 + #include <linux/iopoll.h> 18 + #include <linux/module.h> 19 + #include <linux/sysfs.h> 20 + #include <linux/vmalloc.h> 21 + 22 + #include "hisi_ptt.h" 23 + 24 + /* Dynamic CPU hotplug state used by PTT */ 25 + static enum cpuhp_state hisi_ptt_pmu_online; 26 + 27 + static bool hisi_ptt_wait_tuning_finish(struct hisi_ptt *hisi_ptt) 28 + { 29 + u32 val; 30 + 31 + return !readl_poll_timeout(hisi_ptt->iobase + HISI_PTT_TUNING_INT_STAT, 32 + val, !(val & HISI_PTT_TUNING_INT_STAT_MASK), 33 + HISI_PTT_WAIT_POLL_INTERVAL_US, 34 + HISI_PTT_WAIT_TUNE_TIMEOUT_US); 35 + } 36 + 37 + static ssize_t hisi_ptt_tune_attr_show(struct device *dev, 38 + struct device_attribute *attr, 39 + char *buf) 40 + { 41 + struct hisi_ptt *hisi_ptt = to_hisi_ptt(dev_get_drvdata(dev)); 42 + struct dev_ext_attribute *ext_attr; 43 + struct hisi_ptt_tune_desc *desc; 44 + u32 reg; 45 + u16 val; 46 + 47 + ext_attr = container_of(attr, struct dev_ext_attribute, attr); 48 + desc = ext_attr->var; 49 + 50 + mutex_lock(&hisi_ptt->tune_lock); 51 + 52 + reg = readl(hisi_ptt->iobase + HISI_PTT_TUNING_CTRL); 53 + reg &= ~(HISI_PTT_TUNING_CTRL_CODE | HISI_PTT_TUNING_CTRL_SUB); 54 + reg |= FIELD_PREP(HISI_PTT_TUNING_CTRL_CODE | HISI_PTT_TUNING_CTRL_SUB, 55 + desc->event_code); 56 + writel(reg, hisi_ptt->iobase + HISI_PTT_TUNING_CTRL); 57 + 58 + /* Write all 1 to indicates it's the read process */ 59 + writel(~0U, hisi_ptt->iobase + HISI_PTT_TUNING_DATA); 60 + 61 + if (!hisi_ptt_wait_tuning_finish(hisi_ptt)) { 62 + 
mutex_unlock(&hisi_ptt->tune_lock); 63 + return -ETIMEDOUT; 64 + } 65 + 66 + reg = readl(hisi_ptt->iobase + HISI_PTT_TUNING_DATA); 67 + reg &= HISI_PTT_TUNING_DATA_VAL_MASK; 68 + val = FIELD_GET(HISI_PTT_TUNING_DATA_VAL_MASK, reg); 69 + 70 + mutex_unlock(&hisi_ptt->tune_lock); 71 + return sysfs_emit(buf, "%u\n", val); 72 + } 73 + 74 + static ssize_t hisi_ptt_tune_attr_store(struct device *dev, 75 + struct device_attribute *attr, 76 + const char *buf, size_t count) 77 + { 78 + struct hisi_ptt *hisi_ptt = to_hisi_ptt(dev_get_drvdata(dev)); 79 + struct dev_ext_attribute *ext_attr; 80 + struct hisi_ptt_tune_desc *desc; 81 + u32 reg; 82 + u16 val; 83 + 84 + ext_attr = container_of(attr, struct dev_ext_attribute, attr); 85 + desc = ext_attr->var; 86 + 87 + if (kstrtou16(buf, 10, &val)) 88 + return -EINVAL; 89 + 90 + mutex_lock(&hisi_ptt->tune_lock); 91 + 92 + reg = readl(hisi_ptt->iobase + HISI_PTT_TUNING_CTRL); 93 + reg &= ~(HISI_PTT_TUNING_CTRL_CODE | HISI_PTT_TUNING_CTRL_SUB); 94 + reg |= FIELD_PREP(HISI_PTT_TUNING_CTRL_CODE | HISI_PTT_TUNING_CTRL_SUB, 95 + desc->event_code); 96 + writel(reg, hisi_ptt->iobase + HISI_PTT_TUNING_CTRL); 97 + writel(FIELD_PREP(HISI_PTT_TUNING_DATA_VAL_MASK, val), 98 + hisi_ptt->iobase + HISI_PTT_TUNING_DATA); 99 + 100 + if (!hisi_ptt_wait_tuning_finish(hisi_ptt)) { 101 + mutex_unlock(&hisi_ptt->tune_lock); 102 + return -ETIMEDOUT; 103 + } 104 + 105 + mutex_unlock(&hisi_ptt->tune_lock); 106 + return count; 107 + } 108 + 109 + #define HISI_PTT_TUNE_ATTR(_name, _val, _show, _store) \ 110 + static struct hisi_ptt_tune_desc _name##_desc = { \ 111 + .name = #_name, \ 112 + .event_code = (_val), \ 113 + }; \ 114 + static struct dev_ext_attribute hisi_ptt_##_name##_attr = { \ 115 + .attr = __ATTR(_name, 0600, _show, _store), \ 116 + .var = &_name##_desc, \ 117 + } 118 + 119 + #define HISI_PTT_TUNE_ATTR_COMMON(_name, _val) \ 120 + HISI_PTT_TUNE_ATTR(_name, _val, \ 121 + hisi_ptt_tune_attr_show, \ 122 + hisi_ptt_tune_attr_store) 123 + 124 + /* 125 
+ * The value of the tuning event are composed of two parts: main event code 126 + * in BIT[0,15] and subevent code in BIT[16,23]. For example, qox_tx_cpl is 127 + * a subevent of 'Tx path QoS control' which for tuning the weight of Tx 128 + * completion TLPs. See hisi_ptt.rst documentation for more information. 129 + */ 130 + #define HISI_PTT_TUNE_QOS_TX_CPL (0x4 | (3 << 16)) 131 + #define HISI_PTT_TUNE_QOS_TX_NP (0x4 | (4 << 16)) 132 + #define HISI_PTT_TUNE_QOS_TX_P (0x4 | (5 << 16)) 133 + #define HISI_PTT_TUNE_RX_ALLOC_BUF_LEVEL (0x5 | (6 << 16)) 134 + #define HISI_PTT_TUNE_TX_ALLOC_BUF_LEVEL (0x5 | (7 << 16)) 135 + 136 + HISI_PTT_TUNE_ATTR_COMMON(qos_tx_cpl, HISI_PTT_TUNE_QOS_TX_CPL); 137 + HISI_PTT_TUNE_ATTR_COMMON(qos_tx_np, HISI_PTT_TUNE_QOS_TX_NP); 138 + HISI_PTT_TUNE_ATTR_COMMON(qos_tx_p, HISI_PTT_TUNE_QOS_TX_P); 139 + HISI_PTT_TUNE_ATTR_COMMON(rx_alloc_buf_level, HISI_PTT_TUNE_RX_ALLOC_BUF_LEVEL); 140 + HISI_PTT_TUNE_ATTR_COMMON(tx_alloc_buf_level, HISI_PTT_TUNE_TX_ALLOC_BUF_LEVEL); 141 + 142 + static struct attribute *hisi_ptt_tune_attrs[] = { 143 + &hisi_ptt_qos_tx_cpl_attr.attr.attr, 144 + &hisi_ptt_qos_tx_np_attr.attr.attr, 145 + &hisi_ptt_qos_tx_p_attr.attr.attr, 146 + &hisi_ptt_rx_alloc_buf_level_attr.attr.attr, 147 + &hisi_ptt_tx_alloc_buf_level_attr.attr.attr, 148 + NULL, 149 + }; 150 + 151 + static struct attribute_group hisi_ptt_tune_group = { 152 + .name = "tune", 153 + .attrs = hisi_ptt_tune_attrs, 154 + }; 155 + 156 + static u16 hisi_ptt_get_filter_val(u16 devid, bool is_port) 157 + { 158 + if (is_port) 159 + return BIT(HISI_PCIE_CORE_PORT_ID(devid & 0xff)); 160 + 161 + return devid; 162 + } 163 + 164 + static bool hisi_ptt_wait_trace_hw_idle(struct hisi_ptt *hisi_ptt) 165 + { 166 + u32 val; 167 + 168 + return !readl_poll_timeout_atomic(hisi_ptt->iobase + HISI_PTT_TRACE_STS, 169 + val, val & HISI_PTT_TRACE_IDLE, 170 + HISI_PTT_WAIT_POLL_INTERVAL_US, 171 + HISI_PTT_WAIT_TRACE_TIMEOUT_US); 172 + } 173 + 174 + static void 
hisi_ptt_wait_dma_reset_done(struct hisi_ptt *hisi_ptt) 175 + { 176 + u32 val; 177 + 178 + readl_poll_timeout_atomic(hisi_ptt->iobase + HISI_PTT_TRACE_WR_STS, 179 + val, !val, HISI_PTT_RESET_POLL_INTERVAL_US, 180 + HISI_PTT_RESET_TIMEOUT_US); 181 + } 182 + 183 + static void hisi_ptt_trace_end(struct hisi_ptt *hisi_ptt) 184 + { 185 + writel(0, hisi_ptt->iobase + HISI_PTT_TRACE_CTRL); 186 + hisi_ptt->trace_ctrl.started = false; 187 + } 188 + 189 + static int hisi_ptt_trace_start(struct hisi_ptt *hisi_ptt) 190 + { 191 + struct hisi_ptt_trace_ctrl *ctrl = &hisi_ptt->trace_ctrl; 192 + u32 val; 193 + int i; 194 + 195 + /* Check device idle before start trace */ 196 + if (!hisi_ptt_wait_trace_hw_idle(hisi_ptt)) { 197 + pci_err(hisi_ptt->pdev, "Failed to start trace, the device is still busy\n"); 198 + return -EBUSY; 199 + } 200 + 201 + ctrl->started = true; 202 + 203 + /* Reset the DMA before start tracing */ 204 + val = readl(hisi_ptt->iobase + HISI_PTT_TRACE_CTRL); 205 + val |= HISI_PTT_TRACE_CTRL_RST; 206 + writel(val, hisi_ptt->iobase + HISI_PTT_TRACE_CTRL); 207 + 208 + hisi_ptt_wait_dma_reset_done(hisi_ptt); 209 + 210 + val = readl(hisi_ptt->iobase + HISI_PTT_TRACE_CTRL); 211 + val &= ~HISI_PTT_TRACE_CTRL_RST; 212 + writel(val, hisi_ptt->iobase + HISI_PTT_TRACE_CTRL); 213 + 214 + /* Reset the index of current buffer */ 215 + hisi_ptt->trace_ctrl.buf_index = 0; 216 + 217 + /* Zero the trace buffers */ 218 + for (i = 0; i < HISI_PTT_TRACE_BUF_CNT; i++) 219 + memset(ctrl->trace_buf[i].addr, 0, HISI_PTT_TRACE_BUF_SIZE); 220 + 221 + /* Clear the interrupt status */ 222 + writel(HISI_PTT_TRACE_INT_STAT_MASK, hisi_ptt->iobase + HISI_PTT_TRACE_INT_STAT); 223 + writel(0, hisi_ptt->iobase + HISI_PTT_TRACE_INT_MASK); 224 + 225 + /* Set the trace control register */ 226 + val = FIELD_PREP(HISI_PTT_TRACE_CTRL_TYPE_SEL, ctrl->type); 227 + val |= FIELD_PREP(HISI_PTT_TRACE_CTRL_RXTX_SEL, ctrl->direction); 228 + val |= FIELD_PREP(HISI_PTT_TRACE_CTRL_DATA_FORMAT, ctrl->format); 229 + 
val |= FIELD_PREP(HISI_PTT_TRACE_CTRL_TARGET_SEL, hisi_ptt->trace_ctrl.filter); 230 + if (!hisi_ptt->trace_ctrl.is_port) 231 + val |= HISI_PTT_TRACE_CTRL_FILTER_MODE; 232 + 233 + /* Start the Trace */ 234 + val |= HISI_PTT_TRACE_CTRL_EN; 235 + writel(val, hisi_ptt->iobase + HISI_PTT_TRACE_CTRL); 236 + 237 + return 0; 238 + } 239 + 240 + static int hisi_ptt_update_aux(struct hisi_ptt *hisi_ptt, int index, bool stop) 241 + { 242 + struct hisi_ptt_trace_ctrl *ctrl = &hisi_ptt->trace_ctrl; 243 + struct perf_output_handle *handle = &ctrl->handle; 244 + struct perf_event *event = handle->event; 245 + struct hisi_ptt_pmu_buf *buf; 246 + size_t size; 247 + void *addr; 248 + 249 + buf = perf_get_aux(handle); 250 + if (!buf || !handle->size) 251 + return -EINVAL; 252 + 253 + addr = ctrl->trace_buf[ctrl->buf_index].addr; 254 + 255 + /* 256 + * If we're going to stop, read the size of already traced data from 257 + * HISI_PTT_TRACE_WR_STS. Otherwise we're coming from the interrupt, 258 + * the data size is always HISI_PTT_TRACE_BUF_SIZE. 259 + */ 260 + if (stop) { 261 + u32 reg; 262 + 263 + reg = readl(hisi_ptt->iobase + HISI_PTT_TRACE_WR_STS); 264 + size = FIELD_GET(HISI_PTT_TRACE_WR_STS_WRITE, reg); 265 + } else { 266 + size = HISI_PTT_TRACE_BUF_SIZE; 267 + } 268 + 269 + memcpy(buf->base + buf->pos, addr, size); 270 + buf->pos += size; 271 + 272 + /* 273 + * Just commit the traced data if we're going to stop. Otherwise if the 274 + * resident AUX buffer cannot contain the data of next trace buffer, 275 + * apply a new one. 
276 + */ 277 + if (stop) { 278 + perf_aux_output_end(handle, buf->pos); 279 + } else if (buf->length - buf->pos < HISI_PTT_TRACE_BUF_SIZE) { 280 + perf_aux_output_end(handle, buf->pos); 281 + 282 + buf = perf_aux_output_begin(handle, event); 283 + if (!buf) 284 + return -EINVAL; 285 + 286 + buf->pos = handle->head % buf->length; 287 + if (buf->length - buf->pos < HISI_PTT_TRACE_BUF_SIZE) { 288 + perf_aux_output_end(handle, 0); 289 + return -EINVAL; 290 + } 291 + } 292 + 293 + return 0; 294 + } 295 + 296 + static irqreturn_t hisi_ptt_isr(int irq, void *context) 297 + { 298 + struct hisi_ptt *hisi_ptt = context; 299 + u32 status, buf_idx; 300 + 301 + status = readl(hisi_ptt->iobase + HISI_PTT_TRACE_INT_STAT); 302 + if (!(status & HISI_PTT_TRACE_INT_STAT_MASK)) 303 + return IRQ_NONE; 304 + 305 + buf_idx = ffs(status) - 1; 306 + 307 + /* Clear the interrupt status of buffer @buf_idx */ 308 + writel(status, hisi_ptt->iobase + HISI_PTT_TRACE_INT_STAT); 309 + 310 + /* 311 + * Update the AUX buffer and cache the current buffer index, 312 + * as we need to know this and save the data when the trace 313 + * is ended out of the interrupt handler. End the trace 314 + * if the updating fails. 
315 + */ 316 + if (hisi_ptt_update_aux(hisi_ptt, buf_idx, false)) 317 + hisi_ptt_trace_end(hisi_ptt); 318 + else 319 + hisi_ptt->trace_ctrl.buf_index = (buf_idx + 1) % HISI_PTT_TRACE_BUF_CNT; 320 + 321 + return IRQ_HANDLED; 322 + } 323 + 324 + static void hisi_ptt_irq_free_vectors(void *pdev) 325 + { 326 + pci_free_irq_vectors(pdev); 327 + } 328 + 329 + static int hisi_ptt_register_irq(struct hisi_ptt *hisi_ptt) 330 + { 331 + struct pci_dev *pdev = hisi_ptt->pdev; 332 + int ret; 333 + 334 + ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI); 335 + if (ret < 0) { 336 + pci_err(pdev, "failed to allocate irq vector, ret = %d\n", ret); 337 + return ret; 338 + } 339 + 340 + ret = devm_add_action_or_reset(&pdev->dev, hisi_ptt_irq_free_vectors, pdev); 341 + if (ret < 0) 342 + return ret; 343 + 344 + ret = devm_request_threaded_irq(&pdev->dev, 345 + pci_irq_vector(pdev, HISI_PTT_TRACE_DMA_IRQ), 346 + NULL, hisi_ptt_isr, 0, 347 + DRV_NAME, hisi_ptt); 348 + if (ret) { 349 + pci_err(pdev, "failed to request irq %d, ret = %d\n", 350 + pci_irq_vector(pdev, HISI_PTT_TRACE_DMA_IRQ), ret); 351 + return ret; 352 + } 353 + 354 + return 0; 355 + } 356 + 357 + static int hisi_ptt_init_filters(struct pci_dev *pdev, void *data) 358 + { 359 + struct hisi_ptt_filter_desc *filter; 360 + struct hisi_ptt *hisi_ptt = data; 361 + 362 + /* 363 + * We won't fail the probe if filter allocation failed here. The filters 364 + * should be partial initialized and users would know which filter fails 365 + * through the log. Other functions of PTT device are still available. 
366 + */ 367 + filter = kzalloc(sizeof(*filter), GFP_KERNEL); 368 + if (!filter) { 369 + pci_err(hisi_ptt->pdev, "failed to add filter %s\n", pci_name(pdev)); 370 + return -ENOMEM; 371 + } 372 + 373 + filter->devid = PCI_DEVID(pdev->bus->number, pdev->devfn); 374 + 375 + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT) { 376 + filter->is_port = true; 377 + list_add_tail(&filter->list, &hisi_ptt->port_filters); 378 + 379 + /* Update the available port mask */ 380 + hisi_ptt->port_mask |= hisi_ptt_get_filter_val(filter->devid, true); 381 + } else { 382 + list_add_tail(&filter->list, &hisi_ptt->req_filters); 383 + } 384 + 385 + return 0; 386 + } 387 + 388 + static void hisi_ptt_release_filters(void *data) 389 + { 390 + struct hisi_ptt_filter_desc *filter, *tmp; 391 + struct hisi_ptt *hisi_ptt = data; 392 + 393 + list_for_each_entry_safe(filter, tmp, &hisi_ptt->req_filters, list) { 394 + list_del(&filter->list); 395 + kfree(filter); 396 + } 397 + 398 + list_for_each_entry_safe(filter, tmp, &hisi_ptt->port_filters, list) { 399 + list_del(&filter->list); 400 + kfree(filter); 401 + } 402 + } 403 + 404 + static int hisi_ptt_config_trace_buf(struct hisi_ptt *hisi_ptt) 405 + { 406 + struct hisi_ptt_trace_ctrl *ctrl = &hisi_ptt->trace_ctrl; 407 + struct device *dev = &hisi_ptt->pdev->dev; 408 + int i; 409 + 410 + ctrl->trace_buf = devm_kcalloc(dev, HISI_PTT_TRACE_BUF_CNT, 411 + sizeof(*ctrl->trace_buf), GFP_KERNEL); 412 + if (!ctrl->trace_buf) 413 + return -ENOMEM; 414 + 415 + for (i = 0; i < HISI_PTT_TRACE_BUF_CNT; ++i) { 416 + ctrl->trace_buf[i].addr = dmam_alloc_coherent(dev, HISI_PTT_TRACE_BUF_SIZE, 417 + &ctrl->trace_buf[i].dma, 418 + GFP_KERNEL); 419 + if (!ctrl->trace_buf[i].addr) 420 + return -ENOMEM; 421 + } 422 + 423 + /* Configure the trace DMA buffer */ 424 + for (i = 0; i < HISI_PTT_TRACE_BUF_CNT; i++) { 425 + writel(lower_32_bits(ctrl->trace_buf[i].dma), 426 + hisi_ptt->iobase + HISI_PTT_TRACE_ADDR_BASE_LO_0 + 427 + i * HISI_PTT_TRACE_ADDR_STRIDE); 428 + 
writel(upper_32_bits(ctrl->trace_buf[i].dma), 429 + hisi_ptt->iobase + HISI_PTT_TRACE_ADDR_BASE_HI_0 + 430 + i * HISI_PTT_TRACE_ADDR_STRIDE); 431 + } 432 + writel(HISI_PTT_TRACE_BUF_SIZE, hisi_ptt->iobase + HISI_PTT_TRACE_ADDR_SIZE); 433 + 434 + return 0; 435 + } 436 + 437 + static int hisi_ptt_init_ctrls(struct hisi_ptt *hisi_ptt) 438 + { 439 + struct pci_dev *pdev = hisi_ptt->pdev; 440 + struct pci_bus *bus; 441 + int ret; 442 + u32 reg; 443 + 444 + INIT_LIST_HEAD(&hisi_ptt->port_filters); 445 + INIT_LIST_HEAD(&hisi_ptt->req_filters); 446 + 447 + ret = hisi_ptt_config_trace_buf(hisi_ptt); 448 + if (ret) 449 + return ret; 450 + 451 + /* 452 + * The device range register provides the information about the root 453 + * ports which the RCiEP can control and trace. The RCiEP and the root 454 + * ports which it supports are on the same PCIe core, with the same domain 455 + * number but possibly different bus numbers. The device range register 456 + * tells us which root ports we can support: Bit[31:16] indicates 457 + * the upper BDF number of the root ports, while Bit[15:0] indicates 458 + * the lower. 
459 + */ 460 + reg = readl(hisi_ptt->iobase + HISI_PTT_DEVICE_RANGE); 461 + hisi_ptt->upper_bdf = FIELD_GET(HISI_PTT_DEVICE_RANGE_UPPER, reg); 462 + hisi_ptt->lower_bdf = FIELD_GET(HISI_PTT_DEVICE_RANGE_LOWER, reg); 463 + 464 + bus = pci_find_bus(pci_domain_nr(pdev->bus), PCI_BUS_NUM(hisi_ptt->upper_bdf)); 465 + if (bus) 466 + pci_walk_bus(bus, hisi_ptt_init_filters, hisi_ptt); 467 + 468 + ret = devm_add_action_or_reset(&pdev->dev, hisi_ptt_release_filters, hisi_ptt); 469 + if (ret) 470 + return ret; 471 + 472 + hisi_ptt->trace_ctrl.on_cpu = -1; 473 + return 0; 474 + } 475 + 476 + static ssize_t cpumask_show(struct device *dev, struct device_attribute *attr, 477 + char *buf) 478 + { 479 + struct hisi_ptt *hisi_ptt = to_hisi_ptt(dev_get_drvdata(dev)); 480 + const cpumask_t *cpumask = cpumask_of_node(dev_to_node(&hisi_ptt->pdev->dev)); 481 + 482 + return cpumap_print_to_pagebuf(true, buf, cpumask); 483 + } 484 + static DEVICE_ATTR_RO(cpumask); 485 + 486 + static struct attribute *hisi_ptt_cpumask_attrs[] = { 487 + &dev_attr_cpumask.attr, 488 + NULL 489 + }; 490 + 491 + static const struct attribute_group hisi_ptt_cpumask_attr_group = { 492 + .attrs = hisi_ptt_cpumask_attrs, 493 + }; 494 + 495 + /* 496 + * Bit 19 indicates the filter type, 1 for Root Port filter and 0 for Requester 497 + * filter. Bit[15:0] indicates the filter value, for Root Port filter it's 498 + * a bit mask of desired ports and for Requester filter it's the Requester ID 499 + * of the desired PCIe function. Bit[18:16] is reserved for extension. 500 + * 501 + * See hisi_ptt.rst documentation for detailed information. 
502 + */ 503 + PMU_FORMAT_ATTR(filter, "config:0-19"); 504 + PMU_FORMAT_ATTR(direction, "config:20-23"); 505 + PMU_FORMAT_ATTR(type, "config:24-31"); 506 + PMU_FORMAT_ATTR(format, "config:32-35"); 507 + 508 + static struct attribute *hisi_ptt_pmu_format_attrs[] = { 509 + &format_attr_filter.attr, 510 + &format_attr_direction.attr, 511 + &format_attr_type.attr, 512 + &format_attr_format.attr, 513 + NULL 514 + }; 515 + 516 + static struct attribute_group hisi_ptt_pmu_format_group = { 517 + .name = "format", 518 + .attrs = hisi_ptt_pmu_format_attrs, 519 + }; 520 + 521 + static const struct attribute_group *hisi_ptt_pmu_groups[] = { 522 + &hisi_ptt_cpumask_attr_group, 523 + &hisi_ptt_pmu_format_group, 524 + &hisi_ptt_tune_group, 525 + NULL 526 + }; 527 + 528 + static int hisi_ptt_trace_valid_direction(u32 val) 529 + { 530 + /* 531 + * The direction values have different effects according to the data 532 + * format (specified in the parentheses). TLP set A/B means different 533 + * set of TLP types. See hisi_ptt.rst documentation for more details. 534 + */ 535 + static const u32 hisi_ptt_trace_available_direction[] = { 536 + 0, /* inbound(4DW) or reserved(8DW) */ 537 + 1, /* outbound(4DW) */ 538 + 2, /* {in, out}bound(4DW) or inbound(8DW), TLP set A */ 539 + 3, /* {in, out}bound(4DW) or inbound(8DW), TLP set B */ 540 + }; 541 + int i; 542 + 543 + for (i = 0; i < ARRAY_SIZE(hisi_ptt_trace_available_direction); i++) { 544 + if (val == hisi_ptt_trace_available_direction[i]) 545 + return 0; 546 + } 547 + 548 + return -EINVAL; 549 + } 550 + 551 + static int hisi_ptt_trace_valid_type(u32 val) 552 + { 553 + /* Different types can be set simultaneously */ 554 + static const u32 hisi_ptt_trace_available_type[] = { 555 + 1, /* posted_request */ 556 + 2, /* non-posted_request */ 557 + 4, /* completion */ 558 + }; 559 + int i; 560 + 561 + if (!val) 562 + return -EINVAL; 563 + 564 + /* 565 + * Walk the available list and clear the valid bits of 566 + * the config. 
If there is any residual bit after the 567 + * walk then the config is invalid. 568 + */ 569 + for (i = 0; i < ARRAY_SIZE(hisi_ptt_trace_available_type); i++) 570 + val &= ~hisi_ptt_trace_available_type[i]; 571 + 572 + if (val) 573 + return -EINVAL; 574 + 575 + return 0; 576 + } 577 + 578 + static int hisi_ptt_trace_valid_format(u32 val) 579 + { 580 + static const u32 hisi_ptt_trace_available_format[] = { 581 + 0, /* 4DW */ 582 + 1, /* 8DW */ 583 + }; 584 + int i; 585 + 586 + for (i = 0; i < ARRAY_SIZE(hisi_ptt_trace_available_format); i++) { 587 + if (val == hisi_ptt_trace_available_format[i]) 588 + return 0; 589 + } 590 + 591 + return -EINVAL; 592 + } 593 + 594 + static int hisi_ptt_trace_valid_filter(struct hisi_ptt *hisi_ptt, u64 config) 595 + { 596 + unsigned long val, port_mask = hisi_ptt->port_mask; 597 + struct hisi_ptt_filter_desc *filter; 598 + 599 + hisi_ptt->trace_ctrl.is_port = FIELD_GET(HISI_PTT_PMU_FILTER_IS_PORT, config); 600 + val = FIELD_GET(HISI_PTT_PMU_FILTER_VAL_MASK, config); 601 + 602 + /* 603 + * Port filters are defined as a bit mask. For port filters, check that 604 + * the bits in @val are within the range of hisi_ptt->port_mask 605 + * and that the mask is not empty; otherwise the user has specified 606 + * some unsupported root ports. 607 + * 608 + * For Requester ID filters, walk the available filter list to see 609 + * whether we have one matched. 
610 + */ 611 + if (!hisi_ptt->trace_ctrl.is_port) { 612 + list_for_each_entry(filter, &hisi_ptt->req_filters, list) { 613 + if (val == hisi_ptt_get_filter_val(filter->devid, filter->is_port)) 614 + return 0; 615 + } 616 + } else if (bitmap_subset(&val, &port_mask, BITS_PER_LONG)) { 617 + return 0; 618 + } 619 + 620 + return -EINVAL; 621 + } 622 + 623 + static void hisi_ptt_pmu_init_configs(struct hisi_ptt *hisi_ptt, struct perf_event *event) 624 + { 625 + struct hisi_ptt_trace_ctrl *ctrl = &hisi_ptt->trace_ctrl; 626 + u32 val; 627 + 628 + val = FIELD_GET(HISI_PTT_PMU_FILTER_VAL_MASK, event->attr.config); 629 + hisi_ptt->trace_ctrl.filter = val; 630 + 631 + val = FIELD_GET(HISI_PTT_PMU_DIRECTION_MASK, event->attr.config); 632 + ctrl->direction = val; 633 + 634 + val = FIELD_GET(HISI_PTT_PMU_TYPE_MASK, event->attr.config); 635 + ctrl->type = val; 636 + 637 + val = FIELD_GET(HISI_PTT_PMU_FORMAT_MASK, event->attr.config); 638 + ctrl->format = val; 639 + } 640 + 641 + static int hisi_ptt_pmu_event_init(struct perf_event *event) 642 + { 643 + struct hisi_ptt *hisi_ptt = to_hisi_ptt(event->pmu); 644 + int ret; 645 + u32 val; 646 + 647 + if (event->cpu < 0) { 648 + dev_dbg(event->pmu->dev, "Per-task mode not supported\n"); 649 + return -EOPNOTSUPP; 650 + } 651 + 652 + if (event->attr.type != hisi_ptt->hisi_ptt_pmu.type) 653 + return -ENOENT; 654 + 655 + ret = hisi_ptt_trace_valid_filter(hisi_ptt, event->attr.config); 656 + if (ret < 0) 657 + return ret; 658 + 659 + val = FIELD_GET(HISI_PTT_PMU_DIRECTION_MASK, event->attr.config); 660 + ret = hisi_ptt_trace_valid_direction(val); 661 + if (ret < 0) 662 + return ret; 663 + 664 + val = FIELD_GET(HISI_PTT_PMU_TYPE_MASK, event->attr.config); 665 + ret = hisi_ptt_trace_valid_type(val); 666 + if (ret < 0) 667 + return ret; 668 + 669 + val = FIELD_GET(HISI_PTT_PMU_FORMAT_MASK, event->attr.config); 670 + return hisi_ptt_trace_valid_format(val); 671 + } 672 + 673 + static void *hisi_ptt_pmu_setup_aux(struct perf_event *event, void 
**pages, 674 + int nr_pages, bool overwrite) 675 + { 676 + struct hisi_ptt_pmu_buf *buf; 677 + struct page **pagelist; 678 + int i; 679 + 680 + if (overwrite) { 681 + dev_warn(event->pmu->dev, "Overwrite mode is not supported\n"); 682 + return NULL; 683 + } 684 + 685 + /* If the AUX buffer is smaller than the total trace buffer size, we cannot start trace */ 686 + if (nr_pages < HISI_PTT_TRACE_TOTAL_BUF_SIZE / PAGE_SIZE) 687 + return NULL; 688 + 689 + buf = kzalloc(sizeof(*buf), GFP_KERNEL); 690 + if (!buf) 691 + return NULL; 692 + 693 + pagelist = kcalloc(nr_pages, sizeof(*pagelist), GFP_KERNEL); 694 + if (!pagelist) 695 + goto err; 696 + 697 + for (i = 0; i < nr_pages; i++) 698 + pagelist[i] = virt_to_page(pages[i]); 699 + 700 + buf->base = vmap(pagelist, nr_pages, VM_MAP, PAGE_KERNEL); 701 + if (!buf->base) { 702 + kfree(pagelist); 703 + goto err; 704 + } 705 + 706 + buf->nr_pages = nr_pages; 707 + buf->length = nr_pages * PAGE_SIZE; 708 + buf->pos = 0; 709 + 710 + kfree(pagelist); 711 + return buf; 712 + err: 713 + kfree(buf); 714 + return NULL; 715 + } 716 + 717 + static void hisi_ptt_pmu_free_aux(void *aux) 718 + { 719 + struct hisi_ptt_pmu_buf *buf = aux; 720 + 721 + vunmap(buf->base); 722 + kfree(buf); 723 + } 724 + 725 + static void hisi_ptt_pmu_start(struct perf_event *event, int flags) 726 + { 727 + struct hisi_ptt *hisi_ptt = to_hisi_ptt(event->pmu); 728 + struct perf_output_handle *handle = &hisi_ptt->trace_ctrl.handle; 729 + struct hw_perf_event *hwc = &event->hw; 730 + struct device *dev = event->pmu->dev; 731 + struct hisi_ptt_pmu_buf *buf; 732 + int cpu = event->cpu; 733 + int ret; 734 + 735 + hwc->state = 0; 736 + 737 + /* Serialize the perf process if the user specified several CPUs */ 738 + spin_lock(&hisi_ptt->pmu_lock); 739 + if (hisi_ptt->trace_ctrl.started) { 740 + dev_dbg(dev, "trace has already started\n"); 741 + goto stop; 742 + } 743 + 744 + /* 745 + * Handle the interrupt on the same cpu which starts the trace to avoid 746 + * context mismatch. 
Otherwise we'll trigger the WARN from the perf 747 + * core in event_function_local(). If the CPU passed in is offline we'll fail 748 + * here; just log it, since there is nothing more we can do. 749 + */ 750 + ret = irq_set_affinity(pci_irq_vector(hisi_ptt->pdev, HISI_PTT_TRACE_DMA_IRQ), 751 + cpumask_of(cpu)); 752 + if (ret) 753 + dev_warn(dev, "failed to set the affinity of trace interrupt\n"); 754 + 755 + hisi_ptt->trace_ctrl.on_cpu = cpu; 756 + 757 + buf = perf_aux_output_begin(handle, event); 758 + if (!buf) { 759 + dev_dbg(dev, "aux output begin failed\n"); 760 + goto stop; 761 + } 762 + 763 + buf->pos = handle->head % buf->length; 764 + 765 + hisi_ptt_pmu_init_configs(hisi_ptt, event); 766 + 767 + ret = hisi_ptt_trace_start(hisi_ptt); 768 + if (ret) { 769 + dev_dbg(dev, "trace start failed, ret = %d\n", ret); 770 + perf_aux_output_end(handle, 0); 771 + goto stop; 772 + } 773 + 774 + spin_unlock(&hisi_ptt->pmu_lock); 775 + return; 776 + stop: 777 + event->hw.state |= PERF_HES_STOPPED; 778 + spin_unlock(&hisi_ptt->pmu_lock); 779 + } 780 + 781 + static void hisi_ptt_pmu_stop(struct perf_event *event, int flags) 782 + { 783 + struct hisi_ptt *hisi_ptt = to_hisi_ptt(event->pmu); 784 + struct hw_perf_event *hwc = &event->hw; 785 + 786 + if (hwc->state & PERF_HES_STOPPED) 787 + return; 788 + 789 + spin_lock(&hisi_ptt->pmu_lock); 790 + if (hisi_ptt->trace_ctrl.started) { 791 + hisi_ptt_trace_end(hisi_ptt); 792 + 793 + if (!hisi_ptt_wait_trace_hw_idle(hisi_ptt)) 794 + dev_warn(event->pmu->dev, "Device is still busy\n"); 795 + 796 + hisi_ptt_update_aux(hisi_ptt, hisi_ptt->trace_ctrl.buf_index, true); 797 + } 798 + spin_unlock(&hisi_ptt->pmu_lock); 799 + 800 + hwc->state |= PERF_HES_STOPPED; 801 + perf_event_update_userpage(event); 802 + hwc->state |= PERF_HES_UPTODATE; 803 + } 804 + 805 + static int hisi_ptt_pmu_add(struct perf_event *event, int flags) 806 + { 807 + struct hisi_ptt *hisi_ptt = to_hisi_ptt(event->pmu); 808 + struct hw_perf_event *hwc = &event->hw; 809 + int cpu = 
event->cpu; 810 + 811 + /* Only allow the cpus on the device's node to add the event */ 812 + if (!cpumask_test_cpu(cpu, cpumask_of_node(dev_to_node(&hisi_ptt->pdev->dev)))) 813 + return 0; 814 + 815 + hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; 816 + 817 + if (flags & PERF_EF_START) { 818 + hisi_ptt_pmu_start(event, PERF_EF_RELOAD); 819 + if (hwc->state & PERF_HES_STOPPED) 820 + return -EINVAL; 821 + } 822 + 823 + return 0; 824 + } 825 + 826 + static void hisi_ptt_pmu_del(struct perf_event *event, int flags) 827 + { 828 + hisi_ptt_pmu_stop(event, PERF_EF_UPDATE); 829 + } 830 + 831 + static void hisi_ptt_remove_cpuhp_instance(void *hotplug_node) 832 + { 833 + cpuhp_state_remove_instance_nocalls(hisi_ptt_pmu_online, hotplug_node); 834 + } 835 + 836 + static void hisi_ptt_unregister_pmu(void *pmu) 837 + { 838 + perf_pmu_unregister(pmu); 839 + } 840 + 841 + static int hisi_ptt_register_pmu(struct hisi_ptt *hisi_ptt) 842 + { 843 + u16 core_id, sicl_id; 844 + char *pmu_name; 845 + u32 reg; 846 + int ret; 847 + 848 + ret = cpuhp_state_add_instance_nocalls(hisi_ptt_pmu_online, 849 + &hisi_ptt->hotplug_node); 850 + if (ret) 851 + return ret; 852 + 853 + ret = devm_add_action_or_reset(&hisi_ptt->pdev->dev, 854 + hisi_ptt_remove_cpuhp_instance, 855 + &hisi_ptt->hotplug_node); 856 + if (ret) 857 + return ret; 858 + 859 + mutex_init(&hisi_ptt->tune_lock); 860 + spin_lock_init(&hisi_ptt->pmu_lock); 861 + 862 + hisi_ptt->hisi_ptt_pmu = (struct pmu) { 863 + .module = THIS_MODULE, 864 + .capabilities = PERF_PMU_CAP_EXCLUSIVE | PERF_PMU_CAP_ITRACE, 865 + .task_ctx_nr = perf_sw_context, 866 + .attr_groups = hisi_ptt_pmu_groups, 867 + .event_init = hisi_ptt_pmu_event_init, 868 + .setup_aux = hisi_ptt_pmu_setup_aux, 869 + .free_aux = hisi_ptt_pmu_free_aux, 870 + .start = hisi_ptt_pmu_start, 871 + .stop = hisi_ptt_pmu_stop, 872 + .add = hisi_ptt_pmu_add, 873 + .del = hisi_ptt_pmu_del, 874 + }; 875 + 876 + reg = readl(hisi_ptt->iobase + HISI_PTT_LOCATION); 877 + core_id = 
FIELD_GET(HISI_PTT_CORE_ID, reg); 878 + sicl_id = FIELD_GET(HISI_PTT_SICL_ID, reg); 879 + 880 + pmu_name = devm_kasprintf(&hisi_ptt->pdev->dev, GFP_KERNEL, "hisi_ptt%u_%u", 881 + sicl_id, core_id); 882 + if (!pmu_name) 883 + return -ENOMEM; 884 + 885 + ret = perf_pmu_register(&hisi_ptt->hisi_ptt_pmu, pmu_name, -1); 886 + if (ret) 887 + return ret; 888 + 889 + return devm_add_action_or_reset(&hisi_ptt->pdev->dev, 890 + hisi_ptt_unregister_pmu, 891 + &hisi_ptt->hisi_ptt_pmu); 892 + } 893 + 894 + /* 895 + * The DMA of PTT trace can only use direct mappings due to some 896 + * hardware restriction. Check that there is either no IOMMU or that the 897 + * policy of the IOMMU domain is passthrough; otherwise the trace 898 + * cannot work. 899 + * 900 + * The PTT device is supposed to be behind an ARM SMMUv3, which 901 + * should have put the device in passthrough mode via a quirk. 902 + */ 903 + static int hisi_ptt_check_iommu_mapping(struct pci_dev *pdev) 904 + { 905 + struct iommu_domain *iommu_domain; 906 + 907 + iommu_domain = iommu_get_domain_for_dev(&pdev->dev); 908 + if (!iommu_domain || iommu_domain->type == IOMMU_DOMAIN_IDENTITY) 909 + return 0; 910 + 911 + return -EOPNOTSUPP; 912 + } 913 + 914 + static int hisi_ptt_probe(struct pci_dev *pdev, 915 + const struct pci_device_id *id) 916 + { 917 + struct hisi_ptt *hisi_ptt; 918 + int ret; 919 + 920 + ret = hisi_ptt_check_iommu_mapping(pdev); 921 + if (ret) { 922 + pci_err(pdev, "requires direct DMA mappings\n"); 923 + return ret; 924 + } 925 + 926 + hisi_ptt = devm_kzalloc(&pdev->dev, sizeof(*hisi_ptt), GFP_KERNEL); 927 + if (!hisi_ptt) 928 + return -ENOMEM; 929 + 930 + hisi_ptt->pdev = pdev; 931 + pci_set_drvdata(pdev, hisi_ptt); 932 + 933 + ret = pcim_enable_device(pdev); 934 + if (ret) { 935 + pci_err(pdev, "failed to enable device, ret = %d\n", ret); 936 + return ret; 937 + } 938 + 939 + ret = pcim_iomap_regions(pdev, BIT(2), DRV_NAME); 940 + if (ret) { 941 + pci_err(pdev, "failed to remap io memory, ret = %d\n", ret); 942 + return ret; 
943 + } 944 + 945 + hisi_ptt->iobase = pcim_iomap_table(pdev)[2]; 946 + 947 + ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 948 + if (ret) { 949 + pci_err(pdev, "failed to set 64 bit dma mask, ret = %d\n", ret); 950 + return ret; 951 + } 952 + 953 + pci_set_master(pdev); 954 + 955 + ret = hisi_ptt_register_irq(hisi_ptt); 956 + if (ret) 957 + return ret; 958 + 959 + ret = hisi_ptt_init_ctrls(hisi_ptt); 960 + if (ret) { 961 + pci_err(pdev, "failed to init controls, ret = %d\n", ret); 962 + return ret; 963 + } 964 + 965 + ret = hisi_ptt_register_pmu(hisi_ptt); 966 + if (ret) { 967 + pci_err(pdev, "failed to register PMU device, ret = %d", ret); 968 + return ret; 969 + } 970 + 971 + return 0; 972 + } 973 + 974 + static const struct pci_device_id hisi_ptt_id_tbl[] = { 975 + { PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, 0xa12e) }, 976 + { } 977 + }; 978 + MODULE_DEVICE_TABLE(pci, hisi_ptt_id_tbl); 979 + 980 + static struct pci_driver hisi_ptt_driver = { 981 + .name = DRV_NAME, 982 + .id_table = hisi_ptt_id_tbl, 983 + .probe = hisi_ptt_probe, 984 + }; 985 + 986 + static int hisi_ptt_cpu_teardown(unsigned int cpu, struct hlist_node *node) 987 + { 988 + struct hisi_ptt *hisi_ptt; 989 + struct device *dev; 990 + int target, src; 991 + 992 + hisi_ptt = hlist_entry_safe(node, struct hisi_ptt, hotplug_node); 993 + src = hisi_ptt->trace_ctrl.on_cpu; 994 + dev = hisi_ptt->hisi_ptt_pmu.dev; 995 + 996 + if (!hisi_ptt->trace_ctrl.started || src != cpu) 997 + return 0; 998 + 999 + target = cpumask_any_but(cpumask_of_node(dev_to_node(&hisi_ptt->pdev->dev)), cpu); 1000 + if (target >= nr_cpu_ids) { 1001 + dev_err(dev, "no available cpu for perf context migration\n"); 1002 + return 0; 1003 + } 1004 + 1005 + perf_pmu_migrate_context(&hisi_ptt->hisi_ptt_pmu, src, target); 1006 + 1007 + /* 1008 + * Also make sure the interrupt is bound to the migrated CPU as well. Warn 1009 + * the user on failure here. 
1010 + */ 1011 + if (irq_set_affinity(pci_irq_vector(hisi_ptt->pdev, HISI_PTT_TRACE_DMA_IRQ), 1012 + cpumask_of(target))) 1013 + dev_warn(dev, "failed to set the affinity of trace interrupt\n"); 1014 + 1015 + hisi_ptt->trace_ctrl.on_cpu = target; 1016 + return 0; 1017 + } 1018 + 1019 + static int __init hisi_ptt_init(void) 1020 + { 1021 + int ret; 1022 + 1023 + ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, DRV_NAME, NULL, 1024 + hisi_ptt_cpu_teardown); 1025 + if (ret < 0) 1026 + return ret; 1027 + hisi_ptt_pmu_online = ret; 1028 + 1029 + ret = pci_register_driver(&hisi_ptt_driver); 1030 + if (ret) 1031 + cpuhp_remove_multi_state(hisi_ptt_pmu_online); 1032 + 1033 + return ret; 1034 + } 1035 + module_init(hisi_ptt_init); 1036 + 1037 + static void __exit hisi_ptt_exit(void) 1038 + { 1039 + pci_unregister_driver(&hisi_ptt_driver); 1040 + cpuhp_remove_multi_state(hisi_ptt_pmu_online); 1041 + } 1042 + module_exit(hisi_ptt_exit); 1043 + 1044 + MODULE_LICENSE("GPL"); 1045 + MODULE_AUTHOR("Yicong Yang <yangyicong@hisilicon.com>"); 1046 + MODULE_DESCRIPTION("Driver for HiSilicon PCIe tune and trace device");
+200
drivers/hwtracing/ptt/hisi_ptt.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Driver for HiSilicon PCIe tune and trace device 4 + * 5 + * Copyright (c) 2022 HiSilicon Technologies Co., Ltd. 6 + * Author: Yicong Yang <yangyicong@hisilicon.com> 7 + */ 8 + 9 + #ifndef _HISI_PTT_H 10 + #define _HISI_PTT_H 11 + 12 + #include <linux/bits.h> 13 + #include <linux/cpumask.h> 14 + #include <linux/list.h> 15 + #include <linux/mutex.h> 16 + #include <linux/pci.h> 17 + #include <linux/perf_event.h> 18 + #include <linux/spinlock.h> 19 + #include <linux/types.h> 20 + 21 + #define DRV_NAME "hisi_ptt" 22 + 23 + /* 24 + * The definition of the device registers and register fields. 25 + */ 26 + #define HISI_PTT_TUNING_CTRL 0x0000 27 + #define HISI_PTT_TUNING_CTRL_CODE GENMASK(15, 0) 28 + #define HISI_PTT_TUNING_CTRL_SUB GENMASK(23, 16) 29 + #define HISI_PTT_TUNING_DATA 0x0004 30 + #define HISI_PTT_TUNING_DATA_VAL_MASK GENMASK(15, 0) 31 + #define HISI_PTT_TRACE_ADDR_SIZE 0x0800 32 + #define HISI_PTT_TRACE_ADDR_BASE_LO_0 0x0810 33 + #define HISI_PTT_TRACE_ADDR_BASE_HI_0 0x0814 34 + #define HISI_PTT_TRACE_ADDR_STRIDE 0x8 35 + #define HISI_PTT_TRACE_CTRL 0x0850 36 + #define HISI_PTT_TRACE_CTRL_EN BIT(0) 37 + #define HISI_PTT_TRACE_CTRL_RST BIT(1) 38 + #define HISI_PTT_TRACE_CTRL_RXTX_SEL GENMASK(3, 2) 39 + #define HISI_PTT_TRACE_CTRL_TYPE_SEL GENMASK(7, 4) 40 + #define HISI_PTT_TRACE_CTRL_DATA_FORMAT BIT(14) 41 + #define HISI_PTT_TRACE_CTRL_FILTER_MODE BIT(15) 42 + #define HISI_PTT_TRACE_CTRL_TARGET_SEL GENMASK(31, 16) 43 + #define HISI_PTT_TRACE_INT_STAT 0x0890 44 + #define HISI_PTT_TRACE_INT_STAT_MASK GENMASK(3, 0) 45 + #define HISI_PTT_TRACE_INT_MASK 0x0894 46 + #define HISI_PTT_TUNING_INT_STAT 0x0898 47 + #define HISI_PTT_TUNING_INT_STAT_MASK BIT(0) 48 + #define HISI_PTT_TRACE_WR_STS 0x08a0 49 + #define HISI_PTT_TRACE_WR_STS_WRITE GENMASK(27, 0) 50 + #define HISI_PTT_TRACE_WR_STS_BUFFER GENMASK(29, 28) 51 + #define HISI_PTT_TRACE_STS 0x08b0 52 + #define HISI_PTT_TRACE_IDLE BIT(0) 53 + #define 
HISI_PTT_DEVICE_RANGE 0x0fe0 54 + #define HISI_PTT_DEVICE_RANGE_UPPER GENMASK(31, 16) 55 + #define HISI_PTT_DEVICE_RANGE_LOWER GENMASK(15, 0) 56 + #define HISI_PTT_LOCATION 0x0fe8 57 + #define HISI_PTT_CORE_ID GENMASK(15, 0) 58 + #define HISI_PTT_SICL_ID GENMASK(31, 16) 59 + 60 + /* Parameters of PTT trace DMA part. */ 61 + #define HISI_PTT_TRACE_DMA_IRQ 0 62 + #define HISI_PTT_TRACE_BUF_CNT 4 63 + #define HISI_PTT_TRACE_BUF_SIZE SZ_4M 64 + #define HISI_PTT_TRACE_TOTAL_BUF_SIZE (HISI_PTT_TRACE_BUF_SIZE * \ 65 + HISI_PTT_TRACE_BUF_CNT) 66 + /* Wait time for hardware DMA to reset */ 67 + #define HISI_PTT_RESET_TIMEOUT_US 10UL 68 + #define HISI_PTT_RESET_POLL_INTERVAL_US 1UL 69 + /* Poll timeout and interval for waiting hardware work to finish */ 70 + #define HISI_PTT_WAIT_TUNE_TIMEOUT_US 1000000UL 71 + #define HISI_PTT_WAIT_TRACE_TIMEOUT_US 100UL 72 + #define HISI_PTT_WAIT_POLL_INTERVAL_US 10UL 73 + 74 + #define HISI_PCIE_CORE_PORT_ID(devfn) ((PCI_SLOT(devfn) & 0x7) << 1) 75 + 76 + /* Definition of the PMU configs */ 77 + #define HISI_PTT_PMU_FILTER_IS_PORT BIT(19) 78 + #define HISI_PTT_PMU_FILTER_VAL_MASK GENMASK(15, 0) 79 + #define HISI_PTT_PMU_DIRECTION_MASK GENMASK(23, 20) 80 + #define HISI_PTT_PMU_TYPE_MASK GENMASK(31, 24) 81 + #define HISI_PTT_PMU_FORMAT_MASK GENMASK(35, 32) 82 + 83 + /** 84 + * struct hisi_ptt_tune_desc - Describe tune event for PTT tune 85 + * @hisi_ptt: PTT device this tune event belongs to 86 + * @name: name of this event 87 + * @event_code: code of the event 88 + */ 89 + struct hisi_ptt_tune_desc { 90 + struct hisi_ptt *hisi_ptt; 91 + const char *name; 92 + u32 event_code; 93 + }; 94 + 95 + /** 96 + * struct hisi_ptt_dma_buffer - Describe a single trace buffer of PTT trace. 97 + * The detail of the data format is described 98 + * in the documentation of PTT device. 
99 + * @dma: DMA address of this buffer visible to the device 100 + * @addr: virtual address of this buffer visible to the cpu 101 + */ 102 + struct hisi_ptt_dma_buffer { 103 + dma_addr_t dma; 104 + void *addr; 105 + }; 106 + 107 + /** 108 + * struct hisi_ptt_trace_ctrl - Control and status of PTT trace 109 + * @trace_buf: array of the trace buffers for holding the trace data. 110 + * The length is HISI_PTT_TRACE_BUF_CNT. 111 + * @handle: perf output handle of current trace session 112 + * @buf_index: the index of the trace buffer currently in use 113 + * @on_cpu: current tracing cpu 114 + * @started: current trace status, true for started 115 + * @is_port: whether we're tracing a root port or not 116 + * @direction: direction of the TLP headers to trace 117 + * @filter: filter value for tracing the TLP headers 118 + * @format: format of the TLP headers to trace 119 + * @type: type of the TLP headers to trace 120 + */ 121 + struct hisi_ptt_trace_ctrl { 122 + struct hisi_ptt_dma_buffer *trace_buf; 123 + struct perf_output_handle handle; 124 + u32 buf_index; 125 + int on_cpu; 126 + bool started; 127 + bool is_port; 128 + u32 direction:2; 129 + u32 filter:16; 130 + u32 format:1; 131 + u32 type:4; 132 + }; 133 + 134 + /** 135 + * struct hisi_ptt_filter_desc - Descriptor of the PTT trace filter 136 + * @list: entry of this descriptor in the filter list 137 + * @is_port: whether the PCI device of the filter is a Root Port 138 + * @devid: the devid of the filter's PCI device 139 + */ 140 + struct hisi_ptt_filter_desc { 141 + struct list_head list; 142 + bool is_port; 143 + u16 devid; 144 + }; 145 + 146 + /** 147 + * struct hisi_ptt_pmu_buf - Descriptor of the AUX buffer of PTT trace 148 + * @length: size of the AUX buffer 149 + * @nr_pages: number of pages of the AUX buffer 150 + * @base: start address of AUX buffer 151 + * @pos: position in the AUX buffer to commit traced data 152 + */ 153 + struct hisi_ptt_pmu_buf { 154 + size_t length; 155 + int nr_pages; 156 + void *base; 
157 + long pos; 158 + }; 159 + 160 + /** 161 + * struct hisi_ptt - Per PTT device data 162 + * @trace_ctrl: the control information of PTT trace 163 + * @hotplug_node: node for registering the cpu hotplug event 164 + * @hisi_ptt_pmu: the PMU device for trace 165 + * @iobase: base IO address of the device 166 + * @pdev: pci_dev of this PTT device 167 + * @tune_lock: lock to serialize the tune process 168 + * @pmu_lock: lock to serialize the perf process 169 + * @upper_bdf: the upper BDF range of the PCI devices managed by this PTT device 170 + * @lower_bdf: the lower BDF range of the PCI devices managed by this PTT device 171 + * @port_filters: the filter list of root ports 172 + * @req_filters: the filter list of requester IDs 173 + * @port_mask: port mask of the managed root ports 174 + */ 175 + struct hisi_ptt { 176 + struct hisi_ptt_trace_ctrl trace_ctrl; 177 + struct hlist_node hotplug_node; 178 + struct pmu hisi_ptt_pmu; 179 + void __iomem *iobase; 180 + struct pci_dev *pdev; 181 + struct mutex tune_lock; 182 + spinlock_t pmu_lock; 183 + u32 upper_bdf; 184 + u32 lower_bdf; 185 + 186 + /* 187 + * The trace TLP headers can be filtered either by certain 188 + * root ports or by the requester ID. Organize the filters 189 + * by @port_filters and @req_filters here. The mask of all 190 + * the valid ports is also cached for doing a sanity check 191 + * of user input. 192 + */ 193 + struct list_head port_filters; 194 + struct list_head req_filters; 195 + u16 port_mask; 196 + }; 197 + 198 + #define to_hisi_ptt(pmu) container_of(pmu, struct hisi_ptt, hisi_ptt_pmu) 199 + 200 + #endif /* _HISI_PTT_H */
+21
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
··· 2817 2817 } 2818 2818 } 2819 2819 2820 + /* 2821 + * HiSilicon PCIe tune and trace device can be used to trace TLP headers on the 2822 + * PCIe link and save the data to memory by DMA. The hardware is restricted to 2823 + * use identity mapping only. 2824 + */ 2825 + #define IS_HISI_PTT_DEVICE(pdev) ((pdev)->vendor == PCI_VENDOR_ID_HUAWEI && \ 2826 + (pdev)->device == 0xa12e) 2827 + 2828 + static int arm_smmu_def_domain_type(struct device *dev) 2829 + { 2830 + if (dev_is_pci(dev)) { 2831 + struct pci_dev *pdev = to_pci_dev(dev); 2832 + 2833 + if (IS_HISI_PTT_DEVICE(pdev)) 2834 + return IOMMU_DOMAIN_IDENTITY; 2835 + } 2836 + 2837 + return 0; 2838 + } 2839 + 2820 2840 static struct iommu_ops arm_smmu_ops = { 2821 2841 .capable = arm_smmu_capable, 2822 2842 .domain_alloc = arm_smmu_domain_alloc, ··· 2851 2831 .sva_unbind = arm_smmu_sva_unbind, 2852 2832 .sva_get_pasid = arm_smmu_sva_get_pasid, 2853 2833 .page_response = arm_smmu_page_response, 2834 + .def_domain_type = arm_smmu_def_domain_type, 2854 2835 .pgsize_bitmap = -1UL, /* Restricted during device attach */ 2855 2836 .owner = THIS_MODULE, 2856 2837 .default_domain_ops = &(const struct iommu_domain_ops) {
+23
include/linux/coresight.h
··· 372 372 return csa->read(offset, true, false); 373 373 } 374 374 375 + static inline u64 csdev_access_relaxed_read_pair(struct csdev_access *csa, 376 + u32 lo_offset, u32 hi_offset) 377 + { 378 + if (likely(csa->io_mem)) { 379 + return readl_relaxed(csa->base + lo_offset) | 380 + ((u64)readl_relaxed(csa->base + hi_offset) << 32); 381 + } 382 + 383 + return csa->read(lo_offset, true, false) | (csa->read(hi_offset, true, false) << 32); 384 + } 385 + 386 + static inline void csdev_access_relaxed_write_pair(struct csdev_access *csa, u64 val, 387 + u32 lo_offset, u32 hi_offset) 388 + { 389 + if (likely(csa->io_mem)) { 390 + writel_relaxed((u32)val, csa->base + lo_offset); 391 + writel_relaxed((u32)(val >> 32), csa->base + hi_offset); 392 + } else { 393 + csa->write((u32)val, lo_offset, true, false); 394 + csa->write((u32)(val >> 32), hi_offset, true, false); 395 + } 396 + } 397 + 375 398 static inline u32 csdev_access_read32(struct csdev_access *csa, u32 offset) 376 399 { 377 400 if (likely(csa->io_mem))