
Merge branch 'add-ppe-driver-for-qualcomm-ipq9574-soc'

Luo Jie says:

====================
Add PPE driver for Qualcomm IPQ9574 SoC

The PPE (Packet Process Engine) hardware block is available in Qualcomm
IPQ chipsets that support the PPE architecture, such as IPQ9574 and
IPQ5332. The PPE in the IPQ9574 SoC includes six Ethernet ports (6 GMAC
and 6 XGMAC), which connect to external PHY devices through the PCS. The
PPE also includes packet processing offload capabilities for various
networking functions such as routed and bridged flows, VLANs, different
tunnel protocols and VPN. It also includes an L2 switch function for
bridging packets among the 6 Ethernet ports and the CPU port. The CPU
port enables packet transfer between the Ethernet ports and the ARM
cores in the SoC, using the Ethernet DMA.

This patch series is the first part of a three-part series that will
together enable the Ethernet function for the IPQ9574 SoC. While support
is initially being added for the IPQ9574 SoC, the driver can be easily
extended to enable Ethernet support for other IPQ SoCs such as the
IPQ5332. The driver can also be extended later to add support for the
L2/L3 network offload features that the PPE supports. The functionality
to be enabled by each of the three series (to be posted sequentially)
is as follows:

Part 1: The PPE patch series (this series), which enables the platform
driver, probe and initialization/configuration of different PPE hardware
blocks.

Part 2: The PPE MAC patch series, which enables the phylink operations
for the PPE Ethernet ports.

Part 3: The PPE EDMA patch series, which enables the Rx/Tx Ethernet DMA
and netdevice driver for the 6 PPE Ethernet ports.

A more detailed description of the functions enabled by part 1 is below:
1. Initialize PPE device hardware functions such as buffer management,
   queue management, the scheduler and clocks in order to bring up the
   PPE device.
2. Enable the platform driver and probe functions.
3. Register a debugfs file to provide access to various PPE packet
   counters. These statistics are recorded by the various hardware
   process counters, such as port RX/TX, CPU code and hardware queue
   counters.
4. Provide a detailed introduction of the PPE, along with the PPE
   hardware diagram, in the first two patches (dt-bindings and
   documentation).

Below is a reference to an earlier RFC discussion with the community
about enabling Ethernet driver support for the Qualcomm IPQ9574 SoC.
This writeup can help provide a higher-level architectural view of the
various other drivers that support the PPE, such as the clock and PCS
drivers.
Topic: RFC: Advice on adding support for Qualcomm IPQ9574 SoC Ethernet.
https://lore.kernel.org/linux-arm-msm/d2929bd2-bc9e-4733-a89f-2a187e8bf917@quicinc.com/

Signed-off-by: Luo Jie <quic_luoj@quicinc.com>
====================

Link: https://patch.msgid.link/20250818-qcom_ipq_ppe-v8-0-1d4ff641fce9@quicinc.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

Diffstat: 4842 insertions(+) in the full merge; per-file additions are
noted with each file below.

Documentation/devicetree/bindings/net/qcom,ipq9574-ppe.yaml (new file, +533 lines):

# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/qcom,ipq9574-ppe.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm IPQ packet process engine (PPE)

maintainers:
  - Luo Jie <quic_luoj@quicinc.com>
  - Lei Wei <quic_leiwei@quicinc.com>
  - Suruchi Agarwal <quic_suruchia@quicinc.com>
  - Pavithra R <quic_pavir@quicinc.com>

description: |
  The Ethernet functionality in the PPE (Packet Process Engine) is comprised
  of three components: the switch core, port wrapper and Ethernet DMA.

  The switch core in the IPQ9574 PPE has a maximum of 6 front panel ports and
  two FIFO interfaces. One of the two FIFO interfaces is used for Ethernet
  port to host CPU communication using Ethernet DMA. The other is used for
  communicating with the EIP engine, which is used for IPsec offload. On the
  IPQ9574, the PPE includes 6 GMAC/XGMACs that can be connected with external
  Ethernet PHYs. The switch core also includes BM (Buffer Management), QM
  (Queue Management) and SCH (Scheduler) modules for supporting the packet
  processing.

  The port wrapper provides connections from the 6 GMAC/XGMACs to the UNIPHY
  (PCS), supporting various modes such as SGMII/QSGMII/PSGMII/USXGMII/
  10G-BASER. There are 3 UNIPHY (PCS) instances supported on the IPQ9574.

  Ethernet DMA is used to transmit and receive packets between the six
  Ethernet ports and the ARM host CPU.

  The following diagram shows the PPE hardware block along with its
  connectivity to the external hardware blocks, such as the clock blocks
  (CMN PLL, GCC, NSS clock controller) and the Ethernet PCS/PHY blocks. For
  depicting the PHY connectivity, one 4x1 Gbps PHY (QCA8075) and two 10 Gbps
  PHYs are used as an example.

                              +---------+
                              | 48 MHZ  |
                              +----+----+
                                   |(clock)
                                   v
                              +----+----+
         +--------------------| CMN PLL |
         |                    +----+----+
         |                         |(clock)
         |                         v
         |    +----+----+     +----+----+  (clock)  +----+----+
         | +--|  NSSCC  |     |   GCC   |---------->|  MDIO   |
         | |  +----+----+     +----+----+           +----+----+
         | |       |(clock & reset)  |(clock)
         | |       v                 v
         | |  +----+----------------+----+----------+----------+---------+
         | |  |      +-----+             |EDMA FIFO |          | EIP FIFO|
         | |  |      | SCH |             +----------+          +---------+
         | |  |      +-----+                                             |
         | |  |  +------+  +------+                +-------------------+ |
         | |  |  |  BM  |  |  QM  |  IPQ9574-PPE  |  L2/L3 Process    | |
         | |  |  +------+  +------+                +-------------------+ |
         | |  |                                                          |
         | |  | +-------+ +-------+ +-------+ +-------+ +-------+ +-------+ |
         | |  | | MAC0  | | MAC1  | | MAC2  | | MAC3  | |XGMAC4 | |XGMAC5 | |
         | |  | +---+---+ +---+---+ +---+---+ +---+---+ +---+---+ +---+---+ |
         | |  |     |         |         |         |         |         |     |
         | |  +-----+---------+---------+---------+---------+---------+-----+
         | |        |         |         |         |         |         |
         | |    +---+---------+---------+---------+---+ +---+---+ +---+---+
         +-+--->|                PCS0                  | | PCS1  | | PCS2  |
           |    +---+---------+---------+---------+---+ +---+---+ +---+---+
           |(clock) |         |         |         |         |         |
           |    +---+---------+---------+---------+---+ +---+---+ +---+---+
           +--->|             QCA8075 PHY              | | PHY4  | | PHY5  |
        (clock) +-------------------------------------+ +-------+ +-------+

properties:
  compatible:
    enum:
      - qcom,ipq9574-ppe

  reg:
    maxItems: 1

  clocks:
    items:
      - description: PPE core clock
      - description: PPE APB (Advanced Peripheral Bus) clock
      - description: PPE IPE (Ingress Process Engine) clock
      - description: PPE BM, QM and scheduler clock

  clock-names:
    items:
      - const: ppe
      - const: apb
      - const: ipe
      - const: btq

  resets:
    maxItems: 1
    description: PPE reset, which is necessary before configuring PPE hardware

  interrupts:
    maxItems: 1
    description: PPE switch miscellaneous interrupt

  interconnects:
    items:
      - description: Bus interconnect path leading to PPE switch core function
      - description: Bus interconnect path leading to PPE register access
      - description: Bus interconnect path leading to QoS generation
      - description: Bus interconnect path leading to timeout reference
      - description: Bus interconnect path leading to NSS NOC from memory NOC
      - description: Bus interconnect path leading to memory NOC from NSS NOC
      - description: Bus interconnect path leading to enhanced memory NOC from NSS NOC

  interconnect-names:
    items:
      - const: ppe
      - const: ppe_cfg
      - const: qos_gen
      - const: timeout_ref
      - const: nssnoc_memnoc
      - const: memnoc_nssnoc
      - const: memnoc_nssnoc_1

  ethernet-dma:
    type: object
    additionalProperties: false
    description:
      EDMA (Ethernet DMA) is used to transmit packets between the PPE and the
      ARM host CPU. There are 32 TX descriptor rings, 32 TX completion rings,
      24 RX descriptor rings and 8 RX fill rings supported.

    properties:
      clocks:
        items:
          - description: EDMA system clock
          - description: EDMA APB (Advanced Peripheral Bus) clock

      clock-names:
        items:
          - const: sys
          - const: apb

      resets:
        maxItems: 1
        description: EDMA reset

      interrupts:
        minItems: 65
        maxItems: 65

      interrupt-names:
        minItems: 65
        maxItems: 65
        items:
          oneOf:
            - pattern: '^txcmpl_([1-2]?[0-9]|3[01])$'
            - pattern: '^rxfill_[0-7]$'
            - pattern: '^rxdesc_(1?[0-9]|2[0-3])$'
            - const: misc
        description:
          Interrupts "txcmpl_[0-31]" are the Ethernet DMA TX completion ring
          interrupts. Interrupts "rxfill_[0-7]" are the Ethernet DMA RX fill
          ring interrupts. Interrupts "rxdesc_[0-23]" are the Ethernet DMA RX
          descriptor ring interrupts. Interrupt "misc" is the Ethernet DMA
          miscellaneous error interrupt.

    required:
      - clocks
      - clock-names
      - resets
      - interrupts
      - interrupt-names

  ethernet-ports:
    patternProperties:
      "^ethernet-port@[1-6]$":
        type: object
        unevaluatedProperties: false
        $ref: ethernet-switch-port.yaml#

        properties:
          reg:
            minimum: 1
            maximum: 6
            description: PPE Ethernet port ID

          clocks:
            items:
              - description: Port MAC clock
              - description: Port RX clock
              - description: Port TX clock

          clock-names:
            items:
              - const: mac
              - const: rx
              - const: tx

          resets:
            items:
              - description: Port MAC reset
              - description: Port RX reset
              - description: Port TX reset

          reset-names:
            items:
              - const: mac
              - const: rx
              - const: tx

        required:
          - reg
          - clocks
          - clock-names
          - resets
          - reset-names

required:
  - compatible
  - reg
  - clocks
  - clock-names
  - resets
  - interconnects
  - interconnect-names
  - ethernet-dma

allOf:
  - $ref: ethernet-switch.yaml

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/clock/qcom,ipq9574-gcc.h>
    #include <dt-bindings/clock/qcom,ipq9574-nsscc.h>
    #include <dt-bindings/interconnect/qcom,ipq9574.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/reset/qcom,ipq9574-nsscc.h>

    ethernet-switch@3a000000 {
        compatible = "qcom,ipq9574-ppe";
        reg = <0x3a000000 0xbef800>;
        clocks = <&nsscc NSS_CC_PPE_SWITCH_CLK>,
                 <&nsscc NSS_CC_PPE_SWITCH_CFG_CLK>,
                 <&nsscc NSS_CC_PPE_SWITCH_IPE_CLK>,
                 <&nsscc NSS_CC_PPE_SWITCH_BTQ_CLK>;
        clock-names = "ppe",
                      "apb",
                      "ipe",
                      "btq";
        resets = <&nsscc PPE_FULL_RESET>;
        interrupts = <GIC_SPI 498 IRQ_TYPE_LEVEL_HIGH>;
        interconnects = <&nsscc MASTER_NSSNOC_PPE &nsscc SLAVE_NSSNOC_PPE>,
                        <&nsscc MASTER_NSSNOC_PPE_CFG &nsscc SLAVE_NSSNOC_PPE_CFG>,
                        <&gcc MASTER_NSSNOC_QOSGEN_REF &gcc SLAVE_NSSNOC_QOSGEN_REF>,
                        <&gcc MASTER_NSSNOC_TIMEOUT_REF &gcc SLAVE_NSSNOC_TIMEOUT_REF>,
                        <&gcc MASTER_MEM_NOC_NSSNOC &gcc SLAVE_MEM_NOC_NSSNOC>,
                        <&gcc MASTER_NSSNOC_MEMNOC &gcc SLAVE_NSSNOC_MEMNOC>,
                        <&gcc MASTER_NSSNOC_MEM_NOC_1 &gcc SLAVE_NSSNOC_MEM_NOC_1>;
        interconnect-names = "ppe",
                             "ppe_cfg",
                             "qos_gen",
                             "timeout_ref",
                             "nssnoc_memnoc",
                             "memnoc_nssnoc",
                             "memnoc_nssnoc_1";

        ethernet-dma {
            clocks = <&nsscc NSS_CC_PPE_EDMA_CLK>,
                     <&nsscc NSS_CC_PPE_EDMA_CFG_CLK>;
            clock-names = "sys",
                          "apb";
            resets = <&nsscc EDMA_HW_RESET>;
            interrupts = <GIC_SPI 363 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 364 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 365 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 366 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 367 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 368 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 369 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 370 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 371 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 372 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 373 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 376 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 377 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 378 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 379 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 380 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 381 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 382 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 383 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 384 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 509 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 508 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 507 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 506 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 505 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 504 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 503 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 502 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 501 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 500 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 355 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 356 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 357 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 358 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 359 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 360 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 361 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 362 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 331 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 332 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 333 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 338 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 343 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 344 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 345 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 346 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 347 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 348 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 349 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 350 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 351 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 352 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 353 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 354 IRQ_TYPE_LEVEL_HIGH>,
                         <GIC_SPI 499 IRQ_TYPE_LEVEL_HIGH>;
            interrupt-names = "txcmpl_0", "txcmpl_1", "txcmpl_2", "txcmpl_3",
                              "txcmpl_4", "txcmpl_5", "txcmpl_6", "txcmpl_7",
                              "txcmpl_8", "txcmpl_9", "txcmpl_10", "txcmpl_11",
                              "txcmpl_12", "txcmpl_13", "txcmpl_14", "txcmpl_15",
                              "txcmpl_16", "txcmpl_17", "txcmpl_18", "txcmpl_19",
                              "txcmpl_20", "txcmpl_21", "txcmpl_22", "txcmpl_23",
                              "txcmpl_24", "txcmpl_25", "txcmpl_26", "txcmpl_27",
                              "txcmpl_28", "txcmpl_29", "txcmpl_30", "txcmpl_31",
                              "rxfill_0", "rxfill_1", "rxfill_2", "rxfill_3",
                              "rxfill_4", "rxfill_5", "rxfill_6", "rxfill_7",
                              "rxdesc_0", "rxdesc_1", "rxdesc_2", "rxdesc_3",
                              "rxdesc_4", "rxdesc_5", "rxdesc_6", "rxdesc_7",
                              "rxdesc_8", "rxdesc_9", "rxdesc_10", "rxdesc_11",
                              "rxdesc_12", "rxdesc_13", "rxdesc_14", "rxdesc_15",
                              "rxdesc_16", "rxdesc_17", "rxdesc_18", "rxdesc_19",
                              "rxdesc_20", "rxdesc_21", "rxdesc_22", "rxdesc_23",
                              "misc";
        };

        ethernet-ports {
            #address-cells = <1>;
            #size-cells = <0>;

            ethernet-port@1 {
                reg = <1>;
                phy-mode = "qsgmii";
                managed = "in-band-status";
                phy-handle = <&phy0>;
                pcs-handle = <&pcs0_ch0>;
                clocks = <&nsscc NSS_CC_PORT1_MAC_CLK>,
                         <&nsscc NSS_CC_PORT1_RX_CLK>,
                         <&nsscc NSS_CC_PORT1_TX_CLK>;
                clock-names = "mac", "rx", "tx";
                resets = <&nsscc PORT1_MAC_ARES>,
                         <&nsscc PORT1_RX_ARES>,
                         <&nsscc PORT1_TX_ARES>;
                reset-names = "mac", "rx", "tx";
            };

            ethernet-port@2 {
                reg = <2>;
                phy-mode = "qsgmii";
                managed = "in-band-status";
                phy-handle = <&phy1>;
                pcs-handle = <&pcs0_ch1>;
                clocks = <&nsscc NSS_CC_PORT2_MAC_CLK>,
                         <&nsscc NSS_CC_PORT2_RX_CLK>,
                         <&nsscc NSS_CC_PORT2_TX_CLK>;
                clock-names = "mac", "rx", "tx";
                resets = <&nsscc PORT2_MAC_ARES>,
                         <&nsscc PORT2_RX_ARES>,
                         <&nsscc PORT2_TX_ARES>;
                reset-names = "mac", "rx", "tx";
            };

            ethernet-port@3 {
                reg = <3>;
                phy-mode = "qsgmii";
                managed = "in-band-status";
                phy-handle = <&phy2>;
                pcs-handle = <&pcs0_ch2>;
                clocks = <&nsscc NSS_CC_PORT3_MAC_CLK>,
                         <&nsscc NSS_CC_PORT3_RX_CLK>,
                         <&nsscc NSS_CC_PORT3_TX_CLK>;
                clock-names = "mac", "rx", "tx";
                resets = <&nsscc PORT3_MAC_ARES>,
                         <&nsscc PORT3_RX_ARES>,
                         <&nsscc PORT3_TX_ARES>;
                reset-names = "mac", "rx", "tx";
            };

            ethernet-port@4 {
                reg = <4>;
                phy-mode = "qsgmii";
                managed = "in-band-status";
                phy-handle = <&phy3>;
                pcs-handle = <&pcs0_ch3>;
                clocks = <&nsscc NSS_CC_PORT4_MAC_CLK>,
                         <&nsscc NSS_CC_PORT4_RX_CLK>,
                         <&nsscc NSS_CC_PORT4_TX_CLK>;
                clock-names = "mac", "rx", "tx";
                resets = <&nsscc PORT4_MAC_ARES>,
                         <&nsscc PORT4_RX_ARES>,
                         <&nsscc PORT4_TX_ARES>;
                reset-names = "mac", "rx", "tx";
            };

            ethernet-port@5 {
                reg = <5>;
                phy-mode = "usxgmii";
                managed = "in-band-status";
                phy-handle = <&phy4>;
                pcs-handle = <&pcs1_ch0>;
                clocks = <&nsscc NSS_CC_PORT5_MAC_CLK>,
                         <&nsscc NSS_CC_PORT5_RX_CLK>,
                         <&nsscc NSS_CC_PORT5_TX_CLK>;
                clock-names = "mac", "rx", "tx";
                resets = <&nsscc PORT5_MAC_ARES>,
                         <&nsscc PORT5_RX_ARES>,
                         <&nsscc PORT5_TX_ARES>;
                reset-names = "mac", "rx", "tx";
            };

            ethernet-port@6 {
                reg = <6>;
                phy-mode = "usxgmii";
                managed = "in-band-status";
                phy-handle = <&phy5>;
                pcs-handle = <&pcs2_ch0>;
                clocks = <&nsscc NSS_CC_PORT6_MAC_CLK>,
                         <&nsscc NSS_CC_PORT6_RX_CLK>,
                         <&nsscc NSS_CC_PORT6_TX_CLK>;
                clock-names = "mac", "rx", "tx";
                resets = <&nsscc PORT6_MAC_ARES>,
                         <&nsscc PORT6_RX_ARES>,
                         <&nsscc PORT6_TX_ARES>;
                reset-names = "mac", "rx", "tx";
            };
        };
    };
Documentation/networking/device_drivers/ethernet/index.rst (+1 line):

    neterion/s2io
    netronome/nfp
    pensando/ionic
+   qualcomm/ppe/ppe
    smsc/smc9
    stmicro/stmmac
    ti/cpsw
Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst (new file, +194 lines):

.. SPDX-License-Identifier: GPL-2.0

===============================================
PPE Ethernet Driver for Qualcomm IPQ SoC Family
===============================================

Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.

Author: Lei Wei <quic_leiwei@quicinc.com>


Contents
========

- `PPE Overview`_
- `PPE Driver Overview`_
- `PPE Driver Supported SoCs`_
- `Enabling the Driver`_
- `Debugging`_


PPE Overview
============

The IPQ (Qualcomm Internet Processor) SoC (System-on-Chip) series is
Qualcomm's series of networking SoCs for Wi-Fi access points. The PPE
(Packet Process Engine) is the Ethernet packet process engine in the IPQ SoC.

Below is a simplified hardware diagram of the IPQ9574 SoC, which includes the
PPE engine and other blocks which are in the SoC but outside the PPE engine.
These blocks work together to enable Ethernet for the IPQ SoC::

   +------+ +------+ +------+ +------+ +------+ +------+  start  +-------+
   |netdev| |netdev| |netdev| |netdev| |netdev| |netdev|<--------|PHYLINK|
   +------+ +------+ +------+ +------+ +------+ +------+  stop   +-+-+-+-+
       |                                                           ^ | |
  +-------+  +--------------------------------+--------+---------+ | | |
  |  GCC  |  |              PPE               |  EDMA  |          | | | |
  +---+---+  |                                +---+----+          | | | |
      | clk  |                                    |               | | | |
      +----->| +------------------------------+---+--+----------+| | | |
             | | Switch Core                  |Port0 |           || | | |
             | |                              +------+           || | | |
             | |                    +-----------------+          || | | |
             | |                    | Port7 (EIP FIFO)|          || | | |
             | |                    +-----------------+          || | | |
  +-------+  | |  +---+  +---+  +----+     +--------+            || | | |
  |CMN PLL|  | |  |BM |  |QM |  |SCH |     | L2/L3  |  .......   || | | |
  +---+---+  | |  +---+  +---+  +----+     +--------+            || | | |
      |      | |                                                 || | | |
      v      | | +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ || | | |
  +------+   | | |Port1| |Port2| |Port3| |Port4| |Port5| |Port6| || | | |
  |NSSCC |   | | +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ ||mac| |
  +-+-+--+   | | |MAC0 | |MAC1 | |MAC2 | |MAC3 | |MAC4 | |MAC5 |<++---+ |
    ^ | clk  | | +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ +--+--+ || ops  |
    | +----->| +----|-------|-------|-------|-------|-------|----+|      |
    |        +------|-------|-------|-------|-------|-------|-----+      |
    |    MII clk    |     QSGMII    |       |   USXGMII  USXGMII         |
    +-------------->|       |       |       |       |       |            |
    |           +---+-------+-------+-------+--+ +--+-----+ +-+-------+  |
    | 125/312.5MHz clk |       (PCS0)          | | (PCS1) | | (PCS2)  |  |
    +----------------->|       UNIPHY0         | | UNIPHY1| | UNIPHY2 |<-+ pcs ops
    | 31.25MHz ref clk +-----------------------+ +--------+ +---------+  |
    +----------------->|       |       |       |      |          |       |
    |                  |       |       |       |      |          |       |
    | 25/50MHz ref clk +-------+-------+-------+--+ +------+ +------+    | link
    +----------------->|       QUAD PHY          | | PHY4 | | PHY5 |-----+ change
                       +-------------------------+ +------+ +------+
                       |                                           |
                       |                 MDIO bus                  |
                       +-------------------------------------------+

The CMN (Common) PLL, NSSCC (Networking Sub System Clock Controller) and GCC
(Global Clock Controller) blocks are in the SoC and act as clock providers.

The UNIPHY block is in the SoC and provides the PCS (Physical Coding Sublayer)
and XPCS (10-Gigabit Physical Coding Sublayer) functions to support different
interface modes between the PPE MAC and the external PHY.

This documentation focuses on the PPE engine and the PPE driver.

The Ethernet functionality in the PPE (Packet Process Engine) is comprised of
three components: the switch core, port wrapper and Ethernet DMA.

The switch core in the IPQ9574 PPE has a maximum of 6 front panel ports and
two FIFO interfaces. One of the two FIFO interfaces is used for Ethernet port
to host CPU communication using Ethernet DMA. The other one is used to
communicate with the EIP engine, which is used for IPsec offload. On the
IPQ9574, the PPE includes 6 GMAC/XGMACs that can be connected with external
Ethernet PHYs. The switch core also includes BM (Buffer Management), QM
(Queue Management) and SCH (Scheduler) modules for supporting the packet
processing.

The port wrapper provides connections from the 6 GMAC/XGMACs to the UNIPHY
(PCS), supporting various modes such as SGMII/QSGMII/PSGMII/USXGMII/10G-BASER.
There are 3 UNIPHY (PCS) instances supported on the IPQ9574.

Ethernet DMA is used to transmit and receive packets between the Ethernet
subsystem and the ARM host CPU.

The following lists the main blocks in the PPE engine which are driven by
this PPE driver:

- BM
   BM is the hardware buffer manager for the PPE switch ports.
- QM
   The Queue Manager manages the egress hardware queues of the PPE switch
   ports.
- SCH
   The scheduler manages the hardware traffic scheduling for the PPE switch
   ports.
- L2
   The L2 block performs the packet bridging in the switch core. The bridge
   domain is represented by the VSI (Virtual Switch Instance) domain in the
   PPE. FDB learning can be enabled based on the VSI domain, and bridge
   forwarding occurs within the VSI domain.
- MAC
   The PPE in the IPQ9574 supports up to six MACs (MAC0 to MAC5), which
   correspond to the six switch ports (port1 to port6). The MAC block is
   connected with the external PHY through the UNIPHY PCS block. Each MAC
   block includes the GMAC and XGMAC blocks, and the switch port can select
   either GMAC or XGMAC through a MUX selection according to the external
   PHY's capability.
- EDMA (Ethernet DMA)
   The Ethernet DMA is used to transmit and receive Ethernet packets between
   the PPE ports and the ARM cores.

A packet received on a PPE MAC port can be forwarded to another PPE MAC port.
It can also be forwarded to the internal switch port0 so that the packet can
be delivered to the ARM cores using the Ethernet DMA (EDMA) engine. The
Ethernet DMA driver delivers the packet to the corresponding 'netdevice'
interface.

The software instantiations of the PPE MAC (netdevice), PCS and external PHYs
interact with the Linux PHYLINK framework to manage the connectivity between
the PPE ports and the connected PHYs, and the port link states. This is also
illustrated in the above diagram.


PPE Driver Overview
===================

The PPE driver is the Ethernet driver for the Qualcomm IPQ SoC. It is a
single platform driver which includes the PPE part and the Ethernet DMA part.
The PPE part initializes and drives the various blocks in the PPE switch
core, such as the BM/QM/L2 blocks and the PPE MACs. The EDMA part drives the
Ethernet DMA for packet transfer between the PPE ports and the ARM cores, and
enables the netdevice driver for the PPE ports.

The PPE driver files in drivers/net/ethernet/qualcomm/ppe/ are listed below:

- Makefile
- ppe.c
- ppe.h
- ppe_config.c
- ppe_config.h
- ppe_debugfs.c
- ppe_debugfs.h
- ppe_regs.h

The ppe.c file contains the main PPE platform driver and undertakes the
initialization of PPE switch core blocks such as QM, BM and L2. The
configuration APIs for these hardware blocks are provided in the ppe_config.c
file.

The ppe.h file defines the PPE device data structure which is used by the PPE
driver functions.

The ppe_debugfs.c file enables the PPE statistics counters, such as the PPE
port Rx and Tx counters, CPU code counters and queue counters.


PPE Driver Supported SoCs
=========================

The PPE driver supports the following IPQ SoCs:

- IPQ9574


Enabling the Driver
===================

The driver is located in the menu structure at::

   -> Device Drivers
      -> Network device support (NETDEVICES [=y])
         -> Ethernet driver support
            -> Qualcomm devices
               -> Qualcomm Technologies, Inc. PPE Ethernet support

If the driver is built as a module, the module will be called qcom-ppe.

The PPE driver functionally depends on the CMN PLL and NSSCC clock controller
drivers. Please make sure the dependent modules are installed before
installing the PPE driver module.


Debugging
=========

The PPE hardware counters can be accessed using the debugfs interface under
the ``/sys/kernel/debug/ppe/`` directory.
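The documentation above only names the debugfs directory, ``/sys/kernel/debug/ppe/``, not the individual counter files it contains. As a hedged sketch (the file names are unknown here, so the loop simply dumps whatever entries the driver happens to expose), inspecting the counters could look like this:

```shell
#!/bin/sh
# Sketch: dump all PPE debugfs counters. Only the directory name comes from
# the documentation above; the individual entries under it are assumptions,
# so we iterate over whatever exists rather than naming specific files.
PPE_DBG=/sys/kernel/debug/ppe

if [ -d "$PPE_DBG" ]; then
    for f in "$PPE_DBG"/*; do
        printf '== %s ==\n' "$f"
        cat "$f"
    done
else
    # debugfs may not be mounted, or the qcom-ppe module may not be loaded.
    echo "PPE debugfs not present; mount debugfs and load qcom-ppe first"
fi
```

On a system without the hardware this prints the fallback message, which makes the script safe to keep in a test image.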
MAINTAINERS (+8 lines):

 F:	Documentation/devicetree/bindings/power/supply/qcom,pmi8998-charger.yaml
 F:	drivers/power/supply/qcom_smbx.c

+QUALCOMM PPE DRIVER
+M:	Luo Jie <quic_luoj@quicinc.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/net/qcom,ipq9574-ppe.yaml
+F:	Documentation/networking/device_drivers/ethernet/qualcomm/ppe/ppe.rst
+F:	drivers/net/ethernet/qualcomm/ppe/
+
 QUALCOMM QSEECOM DRIVER
 M:	Maximilian Luz <luzmaximilian@gmail.com>
 L:	linux-arm-msm@vger.kernel.org
drivers/net/ethernet/qualcomm/Kconfig (+15 lines):

 	  low power, Receive-Side Scaling (RSS), and IEEE 1588-2008
 	  Precision Clock Synchronization Protocol.

+config QCOM_PPE
+	tristate "Qualcomm Technologies, Inc. PPE Ethernet support"
+	depends on HAS_IOMEM && OF
+	depends on COMMON_CLK
+	select REGMAP_MMIO
+	help
+	  This driver supports the Qualcomm Technologies, Inc. packet
+	  process engine (PPE) available with IPQ SoC. The PPE includes
+	  the Ethernet MACs, Ethernet DMA (EDMA) and switch core that
+	  supports L3 flow offload, L2 switch function, RSS and tunnel
+	  offload.
+
+	  To compile this driver as a module, choose M here. The module
+	  will be called qcom-ppe.
+
 source "drivers/net/ethernet/qualcomm/rmnet/Kconfig"

 endif # NET_VENDOR_QUALCOMM
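For reference, a kernel configuration fragment that builds the driver above as a module might look as follows. Only CONFIG_QCOM_PPE itself is defined by this series; the other options are a sketch of the usual prerequisites implied by its `depends on` lines and menu location, and may already be set in a typical IPQ defconfig.

```
# Hypothetical .config fragment (values are assumptions, not part of the patch)
CONFIG_NETDEVICES=y
CONFIG_ETHERNET=y
CONFIG_NET_VENDOR_QUALCOMM=y
CONFIG_COMMON_CLK=y
CONFIG_QCOM_PPE=m
```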
drivers/net/ethernet/qualcomm/Makefile (+1 line):

 obj-y += emac/

+obj-$(CONFIG_QCOM_PPE) += ppe/
 obj-$(CONFIG_RMNET) += rmnet/
drivers/net/ethernet/qualcomm/ppe/Makefile (new file, +7 lines):

# SPDX-License-Identifier: GPL-2.0-only
#
# Makefile for the device driver of PPE (Packet Process Engine) in IPQ SoC
#

obj-$(CONFIG_QCOM_PPE) += qcom-ppe.o
qcom-ppe-objs := ppe.o ppe_config.o ppe_debugfs.o
+239
drivers/net/ethernet/qualcomm/ppe/ppe.c
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
 */

/* PPE platform device probe, DTSI parser and PPE clock initializations. */

#include <linux/clk.h>
#include <linux/interconnect.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset.h>

#include "ppe.h"
#include "ppe_config.h"
#include "ppe_debugfs.h"

#define PPE_PORT_MAX		8
#define PPE_CLK_RATE		353000000

/* Interconnect (ICC) paths for enabling the PPE device. Entries whose
 * avg_bw and peak_bw are 0 are updated at probe time based on the PPE
 * clock rate.
 */
static const struct icc_bulk_data ppe_icc_data[] = {
	{
		.name = "ppe",
		.avg_bw = 0,
		.peak_bw = 0,
	},
	{
		.name = "ppe_cfg",
		.avg_bw = 0,
		.peak_bw = 0,
	},
	{
		.name = "qos_gen",
		.avg_bw = 6000,
		.peak_bw = 6000,
	},
	{
		.name = "timeout_ref",
		.avg_bw = 6000,
		.peak_bw = 6000,
	},
	{
		.name = "nssnoc_memnoc",
		.avg_bw = 533333,
		.peak_bw = 533333,
	},
	{
		.name = "memnoc_nssnoc",
		.avg_bw = 533333,
		.peak_bw = 533333,
	},
	{
		.name = "memnoc_nssnoc_1",
		.avg_bw = 533333,
		.peak_bw = 533333,
	},
};

static const struct regmap_range ppe_readable_ranges[] = {
	regmap_reg_range(0x0, 0x1ff),		/* Global */
	regmap_reg_range(0x400, 0x5ff),		/* LPI CSR */
	regmap_reg_range(0x1000, 0x11ff),	/* GMAC0 */
	regmap_reg_range(0x1200, 0x13ff),	/* GMAC1 */
	regmap_reg_range(0x1400, 0x15ff),	/* GMAC2 */
	regmap_reg_range(0x1600, 0x17ff),	/* GMAC3 */
	regmap_reg_range(0x1800, 0x19ff),	/* GMAC4 */
	regmap_reg_range(0x1a00, 0x1bff),	/* GMAC5 */
	regmap_reg_range(0xb000, 0xefff),	/* PRX CSR */
	regmap_reg_range(0xf000, 0x1efff),	/* IPE */
	regmap_reg_range(0x20000, 0x5ffff),	/* PTX CSR */
	regmap_reg_range(0x60000, 0x9ffff),	/* IPE L2 CSR */
	regmap_reg_range(0xb0000, 0xeffff),	/* IPO CSR */
	regmap_reg_range(0x100000, 0x17ffff),	/* IPE PC */
	regmap_reg_range(0x180000, 0x1bffff),	/* PRE IPO CSR */
	regmap_reg_range(0x1d0000, 0x1dffff),	/* Tunnel parser */
	regmap_reg_range(0x1e0000, 0x1effff),	/* Ingress parse */
	regmap_reg_range(0x200000, 0x2fffff),	/* IPE L3 */
	regmap_reg_range(0x300000, 0x3fffff),	/* IPE tunnel */
	regmap_reg_range(0x400000, 0x4fffff),	/* Scheduler */
	regmap_reg_range(0x500000, 0x503fff),	/* XGMAC0 */
	regmap_reg_range(0x504000, 0x507fff),	/* XGMAC1 */
	regmap_reg_range(0x508000, 0x50bfff),	/* XGMAC2 */
	regmap_reg_range(0x50c000, 0x50ffff),	/* XGMAC3 */
	regmap_reg_range(0x510000, 0x513fff),	/* XGMAC4 */
	regmap_reg_range(0x514000, 0x517fff),	/* XGMAC5 */
	regmap_reg_range(0x600000, 0x6fffff),	/* BM */
	regmap_reg_range(0x800000, 0x9fffff),	/* QM */
	regmap_reg_range(0xb00000, 0xbef800),	/* EDMA */
};

static const struct regmap_access_table ppe_reg_table = {
	.yes_ranges = ppe_readable_ranges,
	.n_yes_ranges = ARRAY_SIZE(ppe_readable_ranges),
};

static const struct regmap_config regmap_config_ipq9574 = {
	.reg_bits = 32,
	.reg_stride = 4,
	.val_bits = 32,
	.rd_table = &ppe_reg_table,
	.wr_table = &ppe_reg_table,
	.max_register = 0xbef800,
	.fast_io = true,
};

static int ppe_clock_init_and_reset(struct ppe_device *ppe_dev)
{
	unsigned long ppe_rate = ppe_dev->clk_rate;
	struct device *dev = ppe_dev->dev;
	struct reset_control *rstc;
	struct clk_bulk_data *clks;
	struct clk *clk;
	int ret, i;

	for (i = 0; i < ppe_dev->num_icc_paths; i++) {
		ppe_dev->icc_paths[i].name = ppe_icc_data[i].name;
		ppe_dev->icc_paths[i].avg_bw = ppe_icc_data[i].avg_bw ? :
					       Bps_to_icc(ppe_rate);

		/* PPE does not have an explicit peak bandwidth requirement,
		 * so set the peak bandwidth to be equal to the average
		 * bandwidth.
		 */
		ppe_dev->icc_paths[i].peak_bw = ppe_icc_data[i].peak_bw ? :
						Bps_to_icc(ppe_rate);
	}

	ret = devm_of_icc_bulk_get(dev, ppe_dev->num_icc_paths,
				   ppe_dev->icc_paths);
	if (ret)
		return ret;

	ret = icc_bulk_set_bw(ppe_dev->num_icc_paths, ppe_dev->icc_paths);
	if (ret)
		return ret;

	/* The PPE clocks have a common parent clock. Setting the clock
	 * rate of "ppe" ensures the clock rate of all PPE clocks is
	 * configured to the same rate.
	 */
	clk = devm_clk_get(dev, "ppe");
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	ret = clk_set_rate(clk, ppe_rate);
	if (ret)
		return ret;

	ret = devm_clk_bulk_get_all_enabled(dev, &clks);
	if (ret < 0)
		return ret;

	/* Reset the PPE. */
	rstc = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(rstc))
		return PTR_ERR(rstc);

	ret = reset_control_assert(rstc);
	if (ret)
		return ret;

	/* A 10 ms delay in the asserted state is necessary to reset the PPE. */
	usleep_range(10000, 11000);

	return reset_control_deassert(rstc);
}

static int qcom_ppe_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct ppe_device *ppe_dev;
	void __iomem *base;
	int ret, num_icc;

	num_icc = ARRAY_SIZE(ppe_icc_data);
	ppe_dev = devm_kzalloc(dev, struct_size(ppe_dev, icc_paths, num_icc),
			       GFP_KERNEL);
	if (!ppe_dev)
		return -ENOMEM;

	base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(base))
		return dev_err_probe(dev, PTR_ERR(base), "PPE ioremap failed\n");

	ppe_dev->regmap = devm_regmap_init_mmio(dev, base, &regmap_config_ipq9574);
	if (IS_ERR(ppe_dev->regmap))
		return dev_err_probe(dev, PTR_ERR(ppe_dev->regmap),
				     "PPE initialize regmap failed\n");
	ppe_dev->dev = dev;
	ppe_dev->clk_rate = PPE_CLK_RATE;
	ppe_dev->num_ports = PPE_PORT_MAX;
	ppe_dev->num_icc_paths = num_icc;

	ret = ppe_clock_init_and_reset(ppe_dev);
	if (ret)
		return dev_err_probe(dev, ret, "PPE clock config failed\n");

	ret = ppe_hw_config(ppe_dev);
	if (ret)
		return dev_err_probe(dev, ret, "PPE HW config failed\n");

	ppe_debugfs_setup(ppe_dev);
	platform_set_drvdata(pdev, ppe_dev);

	return 0;
}

static void qcom_ppe_remove(struct platform_device *pdev)
{
	struct ppe_device *ppe_dev;

	ppe_dev = platform_get_drvdata(pdev);
	ppe_debugfs_teardown(ppe_dev);
}

static const struct of_device_id qcom_ppe_of_match[] = {
	{ .compatible = "qcom,ipq9574-ppe" },
	{}
};
MODULE_DEVICE_TABLE(of, qcom_ppe_of_match);

static struct platform_driver qcom_ppe_driver = {
	.driver = {
		.name = "qcom_ppe",
		.of_match_table = qcom_ppe_of_match,
	},
	.probe = qcom_ppe_probe,
	.remove = qcom_ppe_remove,
};
module_platform_driver(qcom_ppe_driver);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Qualcomm Technologies, Inc. IPQ PPE driver");
+39
drivers/net/ethernet/qualcomm/ppe/ppe.h
/* SPDX-License-Identifier: GPL-2.0-only
 *
 * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
 */

#ifndef __PPE_H__
#define __PPE_H__

#include <linux/compiler.h>
#include <linux/interconnect.h>

struct device;
struct regmap;
struct dentry;

/**
 * struct ppe_device - PPE device private data.
 * @dev: PPE device structure.
 * @regmap: PPE register map.
 * @clk_rate: PPE clock rate.
 * @num_ports: Number of PPE ports.
 * @debugfs_root: Debugfs root entry.
 * @num_icc_paths: Number of interconnect paths.
 * @icc_paths: Interconnect path array.
 *
 * PPE device is the instance of PPE hardware, which is used to
 * configure PPE packet process modules such as BM (buffer management),
 * QM (queue management), and scheduler.
 */
struct ppe_device {
	struct device *dev;
	struct regmap *regmap;
	unsigned long clk_rate;
	unsigned int num_ports;
	struct dentry *debugfs_root;
	unsigned int num_icc_paths;
	struct icc_bulk_data icc_paths[] __counted_by(num_icc_paths);
};
#endif
+2034
drivers/net/ethernet/qualcomm/ppe/ppe_config.c
1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 4 + */ 5 + 6 + /* PPE HW initialization configs such as BM (buffer management), 7 + * QM (queue management) and scheduler configs. 8 + */ 9 + 10 + #include <linux/bitfield.h> 11 + #include <linux/bitmap.h> 12 + #include <linux/bits.h> 13 + #include <linux/device.h> 14 + #include <linux/regmap.h> 15 + 16 + #include "ppe.h" 17 + #include "ppe_config.h" 18 + #include "ppe_regs.h" 19 + 20 + #define PPE_QUEUE_SCH_PRI_NUM 8 21 + 22 + /** 23 + * struct ppe_bm_port_config - PPE BM port configuration. 24 + * @port_id_start: The first BM port ID to configure. 25 + * @port_id_end: The last BM port ID to configure. 26 + * @pre_alloc: BM port dedicated buffer number. 27 + * @in_fly_buf: Buffer number for receiving the packet after pause frame sent. 28 + * @ceil: Ceil to generate the back pressure. 29 + * @weight: Weight value. 30 + * @resume_offset: Resume offset from the threshold value. 31 + * @resume_ceil: Ceil to resume from the back pressure state. 32 + * @dynamic: Dynamic threshold used or not. 33 + * 34 + * This is for configuring the threshold that impacts the port 35 + * flow control. 36 + */ 37 + struct ppe_bm_port_config { 38 + unsigned int port_id_start; 39 + unsigned int port_id_end; 40 + unsigned int pre_alloc; 41 + unsigned int in_fly_buf; 42 + unsigned int ceil; 43 + unsigned int weight; 44 + unsigned int resume_offset; 45 + unsigned int resume_ceil; 46 + bool dynamic; 47 + }; 48 + 49 + /** 50 + * struct ppe_qm_queue_config - PPE queue config. 51 + * @queue_start: PPE start of queue ID. 52 + * @queue_end: PPE end of queue ID. 53 + * @prealloc_buf: Queue dedicated buffer number. 54 + * @ceil: Ceil to start drop packet from queue. 55 + * @weight: Weight value. 56 + * @resume_offset: Resume offset from the threshold. 57 + * @dynamic: Threshold value is decided dynamically or statically.
58 + * 59 + * Queue configuration decides the threshold to drop packet from PPE 60 + * hardware queue. 61 + */ 62 + struct ppe_qm_queue_config { 63 + unsigned int queue_start; 64 + unsigned int queue_end; 65 + unsigned int prealloc_buf; 66 + unsigned int ceil; 67 + unsigned int weight; 68 + unsigned int resume_offset; 69 + bool dynamic; 70 + }; 71 + 72 + /** 73 + * enum ppe_scheduler_direction - PPE scheduler direction for packet. 74 + * @PPE_SCH_INGRESS: Scheduler for the packet on ingress, 75 + * @PPE_SCH_EGRESS: Scheduler for the packet on egress, 76 + */ 77 + enum ppe_scheduler_direction { 78 + PPE_SCH_INGRESS = 0, 79 + PPE_SCH_EGRESS = 1, 80 + }; 81 + 82 + /** 83 + * struct ppe_scheduler_bm_config - PPE arbitration for buffer config. 84 + * @valid: Arbitration entry valid or not. 85 + * @dir: Arbitration entry for egress or ingress. 86 + * @port: Port ID to use arbitration entry. 87 + * @backup_port_valid: Backup port valid or not. 88 + * @backup_port: Backup port ID to use. 89 + * 90 + * Configure the scheduler settings for accessing and releasing the PPE buffers. 91 + */ 92 + struct ppe_scheduler_bm_config { 93 + bool valid; 94 + enum ppe_scheduler_direction dir; 95 + unsigned int port; 96 + bool backup_port_valid; 97 + unsigned int backup_port; 98 + }; 99 + 100 + /** 101 + * struct ppe_scheduler_qm_config - PPE arbitration for scheduler config. 102 + * @ensch_port_bmp: Port bit map for enqueue scheduler. 103 + * @ensch_port: Port ID to enqueue scheduler. 104 + * @desch_port: Port ID to dequeue scheduler. 105 + * @desch_backup_port_valid: Dequeue for the backup port valid or not. 106 + * @desch_backup_port: Backup port ID to dequeue scheduler. 107 + * 108 + * Configure the scheduler settings for enqueuing and dequeuing packets on 109 + * the PPE port. 
110 + */ 111 + struct ppe_scheduler_qm_config { 112 + unsigned int ensch_port_bmp; 113 + unsigned int ensch_port; 114 + unsigned int desch_port; 115 + bool desch_backup_port_valid; 116 + unsigned int desch_backup_port; 117 + }; 118 + 119 + /** 120 + * struct ppe_scheduler_port_config - PPE port scheduler config. 121 + * @port: Port ID to be scheduled. 122 + * @flow_level: Scheduler flow level or not. 123 + * @node_id: Node ID, for level 0, queue ID is used. 124 + * @loop_num: Loop number of scheduler config. 125 + * @pri_max: Max priority configured. 126 + * @flow_id: Strict priority ID. 127 + * @drr_node_id: Node ID for scheduler. 128 + * 129 + * PPE port scheduler configuration which decides the priority in the 130 + * packet scheduler for the egress port. 131 + */ 132 + struct ppe_scheduler_port_config { 133 + unsigned int port; 134 + bool flow_level; 135 + unsigned int node_id; 136 + unsigned int loop_num; 137 + unsigned int pri_max; 138 + unsigned int flow_id; 139 + unsigned int drr_node_id; 140 + }; 141 + 142 + /** 143 + * struct ppe_port_schedule_resource - PPE port scheduler resource. 144 + * @ucastq_start: Unicast queue start ID. 145 + * @ucastq_end: Unicast queue end ID. 146 + * @mcastq_start: Multicast queue start ID. 147 + * @mcastq_end: Multicast queue end ID. 148 + * @flow_id_start: Flow start ID. 149 + * @flow_id_end: Flow end ID. 150 + * @l0node_start: Scheduler node start ID for queue level. 151 + * @l0node_end: Scheduler node end ID for queue level. 152 + * @l1node_start: Scheduler node start ID for flow level. 153 + * @l1node_end: Scheduler node end ID for flow level. 154 + * 155 + * PPE scheduler resource allocated among the PPE ports. 
156 + */ 157 + struct ppe_port_schedule_resource { 158 + unsigned int ucastq_start; 159 + unsigned int ucastq_end; 160 + unsigned int mcastq_start; 161 + unsigned int mcastq_end; 162 + unsigned int flow_id_start; 163 + unsigned int flow_id_end; 164 + unsigned int l0node_start; 165 + unsigned int l0node_end; 166 + unsigned int l1node_start; 167 + unsigned int l1node_end; 168 + }; 169 + 170 + /* There are a total of 2048 buffers available in PPE, out of which some 171 + * buffers are reserved for some specific purposes per PPE port. The 172 + * rest of the pool of 1550 buffers are assigned to the general 'group0' 173 + * which is shared among all ports of the PPE. 174 + */ 175 + static const int ipq9574_ppe_bm_group_config = 1550; 176 + 177 + /* The buffer configurations per PPE port. There are 15 BM ports and 178 + * 4 BM groups supported by PPE. BM port (0-7) is for EDMA port 0, 179 + * BM port (8-13) is for PPE physical port 1-6 and BM port 14 is for 180 + * EIP port. 181 + */ 182 + static const struct ppe_bm_port_config ipq9574_ppe_bm_port_config[] = { 183 + { 184 + /* Buffer configuration for the BM port ID 0 of EDMA. */ 185 + .port_id_start = 0, 186 + .port_id_end = 0, 187 + .pre_alloc = 0, 188 + .in_fly_buf = 100, 189 + .ceil = 1146, 190 + .weight = 7, 191 + .resume_offset = 8, 192 + .resume_ceil = 0, 193 + .dynamic = true, 194 + }, 195 + { 196 + /* Buffer configuration for the BM port ID 1-7 of EDMA. */ 197 + .port_id_start = 1, 198 + .port_id_end = 7, 199 + .pre_alloc = 0, 200 + .in_fly_buf = 100, 201 + .ceil = 250, 202 + .weight = 4, 203 + .resume_offset = 36, 204 + .resume_ceil = 0, 205 + .dynamic = true, 206 + }, 207 + { 208 + /* Buffer configuration for the BM port ID 8-13 of PPE ports. */ 209 + .port_id_start = 8, 210 + .port_id_end = 13, 211 + .pre_alloc = 0, 212 + .in_fly_buf = 128, 213 + .ceil = 250, 214 + .weight = 4, 215 + .resume_offset = 36, 216 + .resume_ceil = 0, 217 + .dynamic = true, 218 + }, 219 + { 220 + /* Buffer configuration for the BM port ID 14 of EIP. */ 221 + .port_id_start = 14, 222 + .port_id_end = 14, 223 + .pre_alloc = 0, 224 + .in_fly_buf = 40, 225 + .ceil = 250, 226 + .weight = 4, 227 + .resume_offset = 36, 228 + .resume_ceil = 0, 229 + .dynamic = true, 230 + }, 231 + }; 232 + 233 + /* QM fetches the packet from PPE buffer management for transmitting the 234 + * packet out. The QM group configuration limits the total number of buffers 235 + * enqueued by all PPE hardware queues. 236 + * There are a total of 2048 buffers available, out of which some buffers are 237 + * dedicated to hardware exception handlers. The remaining buffers are 238 + * assigned to the general 'group0', which is the group assigned to all 239 + * queues by default. 240 + */ 241 + static const int ipq9574_ppe_qm_group_config = 2000; 242 + 243 + /* Default QM settings for unicast and multicast queues for IPQ9574. */ 244 + static const struct ppe_qm_queue_config ipq9574_ppe_qm_queue_config[] = { 245 + { 246 + /* QM settings for unicast queues 0 to 255. */ 247 + .queue_start = 0, 248 + .queue_end = 255, 249 + .prealloc_buf = 0, 250 + .ceil = 1200, 251 + .weight = 7, 252 + .resume_offset = 36, 253 + .dynamic = true, 254 + }, 255 + { 256 + /* QM settings for multicast queues 256 to 299. */ 257 + .queue_start = 256, 258 + .queue_end = 299, 259 + .prealloc_buf = 0, 260 + .ceil = 250, 261 + .weight = 0, 262 + .resume_offset = 36, 263 + .dynamic = false, 264 + }, 265 + }; 266 + 267 + /* PPE scheduler configuration for BM includes multiple entries. Each entry 268 + * indicates the primary port to be assigned the buffers for the ingress or 269 + * to release the buffers for the egress. Backup port ID will be used when 270 + * the primary port ID is down.
271 + */ 272 + static const struct ppe_scheduler_bm_config ipq9574_ppe_sch_bm_config[] = { 273 + {true, PPE_SCH_INGRESS, 0, false, 0}, 274 + {true, PPE_SCH_EGRESS, 0, false, 0}, 275 + {true, PPE_SCH_INGRESS, 5, false, 0}, 276 + {true, PPE_SCH_EGRESS, 5, false, 0}, 277 + {true, PPE_SCH_INGRESS, 6, false, 0}, 278 + {true, PPE_SCH_EGRESS, 6, false, 0}, 279 + {true, PPE_SCH_INGRESS, 1, false, 0}, 280 + {true, PPE_SCH_EGRESS, 1, false, 0}, 281 + {true, PPE_SCH_INGRESS, 0, false, 0}, 282 + {true, PPE_SCH_EGRESS, 0, false, 0}, 283 + {true, PPE_SCH_INGRESS, 5, false, 0}, 284 + {true, PPE_SCH_EGRESS, 5, false, 0}, 285 + {true, PPE_SCH_INGRESS, 6, false, 0}, 286 + {true, PPE_SCH_EGRESS, 6, false, 0}, 287 + {true, PPE_SCH_INGRESS, 7, false, 0}, 288 + {true, PPE_SCH_EGRESS, 7, false, 0}, 289 + {true, PPE_SCH_INGRESS, 0, false, 0}, 290 + {true, PPE_SCH_EGRESS, 0, false, 0}, 291 + {true, PPE_SCH_INGRESS, 1, false, 0}, 292 + {true, PPE_SCH_EGRESS, 1, false, 0}, 293 + {true, PPE_SCH_INGRESS, 5, false, 0}, 294 + {true, PPE_SCH_EGRESS, 5, false, 0}, 295 + {true, PPE_SCH_INGRESS, 6, false, 0}, 296 + {true, PPE_SCH_EGRESS, 6, false, 0}, 297 + {true, PPE_SCH_INGRESS, 2, false, 0}, 298 + {true, PPE_SCH_EGRESS, 2, false, 0}, 299 + {true, PPE_SCH_INGRESS, 0, false, 0}, 300 + {true, PPE_SCH_EGRESS, 0, false, 0}, 301 + {true, PPE_SCH_INGRESS, 5, false, 0}, 302 + {true, PPE_SCH_EGRESS, 5, false, 0}, 303 + {true, PPE_SCH_INGRESS, 6, false, 0}, 304 + {true, PPE_SCH_EGRESS, 6, false, 0}, 305 + {true, PPE_SCH_INGRESS, 1, false, 0}, 306 + {true, PPE_SCH_EGRESS, 1, false, 0}, 307 + {true, PPE_SCH_INGRESS, 3, false, 0}, 308 + {true, PPE_SCH_EGRESS, 3, false, 0}, 309 + {true, PPE_SCH_INGRESS, 0, false, 0}, 310 + {true, PPE_SCH_EGRESS, 0, false, 0}, 311 + {true, PPE_SCH_INGRESS, 5, false, 0}, 312 + {true, PPE_SCH_EGRESS, 5, false, 0}, 313 + {true, PPE_SCH_INGRESS, 6, false, 0}, 314 + {true, PPE_SCH_EGRESS, 6, false, 0}, 315 + {true, PPE_SCH_INGRESS, 7, false, 0}, 316 + {true, PPE_SCH_EGRESS, 7, 
false, 0}, 317 + {true, PPE_SCH_INGRESS, 0, false, 0}, 318 + {true, PPE_SCH_EGRESS, 0, false, 0}, 319 + {true, PPE_SCH_INGRESS, 1, false, 0}, 320 + {true, PPE_SCH_EGRESS, 1, false, 0}, 321 + {true, PPE_SCH_INGRESS, 5, false, 0}, 322 + {true, PPE_SCH_EGRESS, 5, false, 0}, 323 + {true, PPE_SCH_INGRESS, 6, false, 0}, 324 + {true, PPE_SCH_EGRESS, 6, false, 0}, 325 + {true, PPE_SCH_INGRESS, 4, false, 0}, 326 + {true, PPE_SCH_EGRESS, 4, false, 0}, 327 + {true, PPE_SCH_INGRESS, 0, false, 0}, 328 + {true, PPE_SCH_EGRESS, 0, false, 0}, 329 + {true, PPE_SCH_INGRESS, 5, false, 0}, 330 + {true, PPE_SCH_EGRESS, 5, false, 0}, 331 + {true, PPE_SCH_INGRESS, 6, false, 0}, 332 + {true, PPE_SCH_EGRESS, 6, false, 0}, 333 + {true, PPE_SCH_INGRESS, 1, false, 0}, 334 + {true, PPE_SCH_EGRESS, 1, false, 0}, 335 + {true, PPE_SCH_INGRESS, 0, false, 0}, 336 + {true, PPE_SCH_EGRESS, 0, false, 0}, 337 + {true, PPE_SCH_INGRESS, 5, false, 0}, 338 + {true, PPE_SCH_EGRESS, 5, false, 0}, 339 + {true, PPE_SCH_INGRESS, 6, false, 0}, 340 + {true, PPE_SCH_EGRESS, 6, false, 0}, 341 + {true, PPE_SCH_INGRESS, 2, false, 0}, 342 + {true, PPE_SCH_EGRESS, 2, false, 0}, 343 + {true, PPE_SCH_INGRESS, 0, false, 0}, 344 + {true, PPE_SCH_EGRESS, 0, false, 0}, 345 + {true, PPE_SCH_INGRESS, 7, false, 0}, 346 + {true, PPE_SCH_EGRESS, 7, false, 0}, 347 + {true, PPE_SCH_INGRESS, 5, false, 0}, 348 + {true, PPE_SCH_EGRESS, 5, false, 0}, 349 + {true, PPE_SCH_INGRESS, 6, false, 0}, 350 + {true, PPE_SCH_EGRESS, 6, false, 0}, 351 + {true, PPE_SCH_INGRESS, 1, false, 0}, 352 + {true, PPE_SCH_EGRESS, 1, false, 0}, 353 + {true, PPE_SCH_INGRESS, 0, false, 0}, 354 + {true, PPE_SCH_EGRESS, 0, false, 0}, 355 + {true, PPE_SCH_INGRESS, 5, false, 0}, 356 + {true, PPE_SCH_EGRESS, 5, false, 0}, 357 + {true, PPE_SCH_INGRESS, 6, false, 0}, 358 + {true, PPE_SCH_EGRESS, 6, false, 0}, 359 + {true, PPE_SCH_INGRESS, 3, false, 0}, 360 + {true, PPE_SCH_EGRESS, 3, false, 0}, 361 + {true, PPE_SCH_INGRESS, 1, false, 0}, 362 + {true, PPE_SCH_EGRESS, 
1, false, 0}, 363 + {true, PPE_SCH_INGRESS, 0, false, 0}, 364 + {true, PPE_SCH_EGRESS, 0, false, 0}, 365 + {true, PPE_SCH_INGRESS, 5, false, 0}, 366 + {true, PPE_SCH_EGRESS, 5, false, 0}, 367 + {true, PPE_SCH_INGRESS, 6, false, 0}, 368 + {true, PPE_SCH_EGRESS, 6, false, 0}, 369 + {true, PPE_SCH_INGRESS, 4, false, 0}, 370 + {true, PPE_SCH_EGRESS, 4, false, 0}, 371 + {true, PPE_SCH_INGRESS, 7, false, 0}, 372 + {true, PPE_SCH_EGRESS, 7, false, 0}, 373 + }; 374 + 375 + /* PPE scheduler configuration for QM includes multiple entries. Each entry 376 + * contains ports to be dispatched for enqueueing and dequeueing. The backup 377 + * port for dequeueing is supported to be used when the primary port for 378 + * dequeueing is down. 379 + */ 380 + static const struct ppe_scheduler_qm_config ipq9574_ppe_sch_qm_config[] = { 381 + {0x98, 6, 0, true, 1}, 382 + {0x94, 5, 6, true, 3}, 383 + {0x86, 0, 5, true, 4}, 384 + {0x8C, 1, 6, true, 0}, 385 + {0x1C, 7, 5, true, 1}, 386 + {0x98, 2, 6, true, 0}, 387 + {0x1C, 5, 7, true, 1}, 388 + {0x34, 3, 6, true, 0}, 389 + {0x8C, 4, 5, true, 1}, 390 + {0x98, 2, 6, true, 0}, 391 + {0x8C, 5, 4, true, 1}, 392 + {0xA8, 0, 6, true, 2}, 393 + {0x98, 5, 1, true, 0}, 394 + {0x98, 6, 5, true, 2}, 395 + {0x89, 1, 6, true, 4}, 396 + {0xA4, 3, 0, true, 1}, 397 + {0x8C, 5, 6, true, 4}, 398 + {0xA8, 0, 2, true, 1}, 399 + {0x98, 6, 5, true, 0}, 400 + {0xC4, 4, 3, true, 1}, 401 + {0x94, 6, 5, true, 0}, 402 + {0x1C, 7, 6, true, 1}, 403 + {0x98, 2, 5, true, 0}, 404 + {0x1C, 6, 7, true, 1}, 405 + {0x1C, 5, 6, true, 0}, 406 + {0x94, 3, 5, true, 1}, 407 + {0x8C, 4, 6, true, 0}, 408 + {0x94, 1, 5, true, 3}, 409 + {0x94, 6, 1, true, 0}, 410 + {0xD0, 3, 5, true, 2}, 411 + {0x98, 6, 0, true, 1}, 412 + {0x94, 5, 6, true, 3}, 413 + {0x94, 1, 5, true, 0}, 414 + {0x98, 2, 6, true, 1}, 415 + {0x8C, 4, 5, true, 0}, 416 + {0x1C, 7, 6, true, 1}, 417 + {0x8C, 0, 5, true, 4}, 418 + {0x89, 1, 6, true, 2}, 419 + {0x98, 5, 0, true, 1}, 420 + {0x94, 6, 5, true, 3}, 421 + {0x92, 
0, 6, true, 2}, 422 + {0x98, 1, 5, true, 0}, 423 + {0x98, 6, 2, true, 1}, 424 + {0xD0, 0, 5, true, 3}, 425 + {0x94, 6, 0, true, 1}, 426 + {0x8C, 5, 6, true, 4}, 427 + {0x8C, 1, 5, true, 0}, 428 + {0x1C, 6, 7, true, 1}, 429 + {0x1C, 5, 6, true, 0}, 430 + {0xB0, 2, 3, true, 1}, 431 + {0xC4, 4, 5, true, 0}, 432 + {0x8C, 6, 4, true, 1}, 433 + {0xA4, 3, 6, true, 0}, 434 + {0x1C, 5, 7, true, 1}, 435 + {0x4C, 0, 5, true, 4}, 436 + {0x8C, 6, 0, true, 1}, 437 + {0x34, 7, 6, true, 3}, 438 + {0x94, 5, 0, true, 1}, 439 + {0x98, 6, 5, true, 2}, 440 + }; 441 + 442 + static const struct ppe_scheduler_port_config ppe_port_sch_config[] = { 443 + { 444 + .port = 0, 445 + .flow_level = true, 446 + .node_id = 0, 447 + .loop_num = 1, 448 + .pri_max = 1, 449 + .flow_id = 0, 450 + .drr_node_id = 0, 451 + }, 452 + { 453 + .port = 0, 454 + .flow_level = false, 455 + .node_id = 0, 456 + .loop_num = 8, 457 + .pri_max = 8, 458 + .flow_id = 0, 459 + .drr_node_id = 0, 460 + }, 461 + { 462 + .port = 0, 463 + .flow_level = false, 464 + .node_id = 8, 465 + .loop_num = 8, 466 + .pri_max = 8, 467 + .flow_id = 0, 468 + .drr_node_id = 0, 469 + }, 470 + { 471 + .port = 0, 472 + .flow_level = false, 473 + .node_id = 16, 474 + .loop_num = 8, 475 + .pri_max = 8, 476 + .flow_id = 0, 477 + .drr_node_id = 0, 478 + }, 479 + { 480 + .port = 0, 481 + .flow_level = false, 482 + .node_id = 24, 483 + .loop_num = 8, 484 + .pri_max = 8, 485 + .flow_id = 0, 486 + .drr_node_id = 0, 487 + }, 488 + { 489 + .port = 0, 490 + .flow_level = false, 491 + .node_id = 32, 492 + .loop_num = 8, 493 + .pri_max = 8, 494 + .flow_id = 0, 495 + .drr_node_id = 0, 496 + }, 497 + { 498 + .port = 0, 499 + .flow_level = false, 500 + .node_id = 40, 501 + .loop_num = 8, 502 + .pri_max = 8, 503 + .flow_id = 0, 504 + .drr_node_id = 0, 505 + }, 506 + { 507 + .port = 0, 508 + .flow_level = false, 509 + .node_id = 48, 510 + .loop_num = 8, 511 + .pri_max = 8, 512 + .flow_id = 0, 513 + .drr_node_id = 0, 514 + }, 515 + { 516 + .port = 0, 517 + 
.flow_level = false, 518 + .node_id = 56, 519 + .loop_num = 8, 520 + .pri_max = 8, 521 + .flow_id = 0, 522 + .drr_node_id = 0, 523 + }, 524 + { 525 + .port = 0, 526 + .flow_level = false, 527 + .node_id = 256, 528 + .loop_num = 8, 529 + .pri_max = 8, 530 + .flow_id = 0, 531 + .drr_node_id = 0, 532 + }, 533 + { 534 + .port = 0, 535 + .flow_level = false, 536 + .node_id = 264, 537 + .loop_num = 8, 538 + .pri_max = 8, 539 + .flow_id = 0, 540 + .drr_node_id = 0, 541 + }, 542 + { 543 + .port = 1, 544 + .flow_level = true, 545 + .node_id = 36, 546 + .loop_num = 2, 547 + .pri_max = 0, 548 + .flow_id = 1, 549 + .drr_node_id = 8, 550 + }, 551 + { 552 + .port = 1, 553 + .flow_level = false, 554 + .node_id = 144, 555 + .loop_num = 16, 556 + .pri_max = 8, 557 + .flow_id = 36, 558 + .drr_node_id = 48, 559 + }, 560 + { 561 + .port = 1, 562 + .flow_level = false, 563 + .node_id = 272, 564 + .loop_num = 4, 565 + .pri_max = 4, 566 + .flow_id = 36, 567 + .drr_node_id = 48, 568 + }, 569 + { 570 + .port = 2, 571 + .flow_level = true, 572 + .node_id = 40, 573 + .loop_num = 2, 574 + .pri_max = 0, 575 + .flow_id = 2, 576 + .drr_node_id = 12, 577 + }, 578 + { 579 + .port = 2, 580 + .flow_level = false, 581 + .node_id = 160, 582 + .loop_num = 16, 583 + .pri_max = 8, 584 + .flow_id = 40, 585 + .drr_node_id = 64, 586 + }, 587 + { 588 + .port = 2, 589 + .flow_level = false, 590 + .node_id = 276, 591 + .loop_num = 4, 592 + .pri_max = 4, 593 + .flow_id = 40, 594 + .drr_node_id = 64, 595 + }, 596 + { 597 + .port = 3, 598 + .flow_level = true, 599 + .node_id = 44, 600 + .loop_num = 2, 601 + .pri_max = 0, 602 + .flow_id = 3, 603 + .drr_node_id = 16, 604 + }, 605 + { 606 + .port = 3, 607 + .flow_level = false, 608 + .node_id = 176, 609 + .loop_num = 16, 610 + .pri_max = 8, 611 + .flow_id = 44, 612 + .drr_node_id = 80, 613 + }, 614 + { 615 + .port = 3, 616 + .flow_level = false, 617 + .node_id = 280, 618 + .loop_num = 4, 619 + .pri_max = 4, 620 + .flow_id = 44, 621 + .drr_node_id = 80, 622 + }, 623 
+ { 624 + .port = 4, 625 + .flow_level = true, 626 + .node_id = 48, 627 + .loop_num = 2, 628 + .pri_max = 0, 629 + .flow_id = 4, 630 + .drr_node_id = 20, 631 + }, 632 + { 633 + .port = 4, 634 + .flow_level = false, 635 + .node_id = 192, 636 + .loop_num = 16, 637 + .pri_max = 8, 638 + .flow_id = 48, 639 + .drr_node_id = 96, 640 + }, 641 + { 642 + .port = 4, 643 + .flow_level = false, 644 + .node_id = 284, 645 + .loop_num = 4, 646 + .pri_max = 4, 647 + .flow_id = 48, 648 + .drr_node_id = 96, 649 + }, 650 + { 651 + .port = 5, 652 + .flow_level = true, 653 + .node_id = 52, 654 + .loop_num = 2, 655 + .pri_max = 0, 656 + .flow_id = 5, 657 + .drr_node_id = 24, 658 + }, 659 + { 660 + .port = 5, 661 + .flow_level = false, 662 + .node_id = 208, 663 + .loop_num = 16, 664 + .pri_max = 8, 665 + .flow_id = 52, 666 + .drr_node_id = 112, 667 + }, 668 + { 669 + .port = 5, 670 + .flow_level = false, 671 + .node_id = 288, 672 + .loop_num = 4, 673 + .pri_max = 4, 674 + .flow_id = 52, 675 + .drr_node_id = 112, 676 + }, 677 + { 678 + .port = 6, 679 + .flow_level = true, 680 + .node_id = 56, 681 + .loop_num = 2, 682 + .pri_max = 0, 683 + .flow_id = 6, 684 + .drr_node_id = 28, 685 + }, 686 + { 687 + .port = 6, 688 + .flow_level = false, 689 + .node_id = 224, 690 + .loop_num = 16, 691 + .pri_max = 8, 692 + .flow_id = 56, 693 + .drr_node_id = 128, 694 + }, 695 + { 696 + .port = 6, 697 + .flow_level = false, 698 + .node_id = 292, 699 + .loop_num = 4, 700 + .pri_max = 4, 701 + .flow_id = 56, 702 + .drr_node_id = 128, 703 + }, 704 + { 705 + .port = 7, 706 + .flow_level = true, 707 + .node_id = 60, 708 + .loop_num = 2, 709 + .pri_max = 0, 710 + .flow_id = 7, 711 + .drr_node_id = 32, 712 + }, 713 + { 714 + .port = 7, 715 + .flow_level = false, 716 + .node_id = 240, 717 + .loop_num = 16, 718 + .pri_max = 8, 719 + .flow_id = 60, 720 + .drr_node_id = 144, 721 + }, 722 + { 723 + .port = 7, 724 + .flow_level = false, 725 + .node_id = 296, 726 + .loop_num = 4, 727 + .pri_max = 4, 728 + .flow_id = 60, 
729 + .drr_node_id = 144, 730 + }, 731 + }; 732 + 733 + /* The scheduler resource is applied to each PPE port. The resource 734 + * includes the unicast & multicast queues, flow nodes and DRR nodes. 735 + */ 736 + static const struct ppe_port_schedule_resource ppe_scheduler_res[] = { 737 + { .ucastq_start = 0, 738 + .ucastq_end = 63, 739 + .mcastq_start = 256, 740 + .mcastq_end = 271, 741 + .flow_id_start = 0, 742 + .flow_id_end = 0, 743 + .l0node_start = 0, 744 + .l0node_end = 7, 745 + .l1node_start = 0, 746 + .l1node_end = 0, 747 + }, 748 + { .ucastq_start = 144, 749 + .ucastq_end = 159, 750 + .mcastq_start = 272, 751 + .mcastq_end = 275, 752 + .flow_id_start = 36, 753 + .flow_id_end = 39, 754 + .l0node_start = 48, 755 + .l0node_end = 63, 756 + .l1node_start = 8, 757 + .l1node_end = 11, 758 + }, 759 + { .ucastq_start = 160, 760 + .ucastq_end = 175, 761 + .mcastq_start = 276, 762 + .mcastq_end = 279, 763 + .flow_id_start = 40, 764 + .flow_id_end = 43, 765 + .l0node_start = 64, 766 + .l0node_end = 79, 767 + .l1node_start = 12, 768 + .l1node_end = 15, 769 + }, 770 + { .ucastq_start = 176, 771 + .ucastq_end = 191, 772 + .mcastq_start = 280, 773 + .mcastq_end = 283, 774 + .flow_id_start = 44, 775 + .flow_id_end = 47, 776 + .l0node_start = 80, 777 + .l0node_end = 95, 778 + .l1node_start = 16, 779 + .l1node_end = 19, 780 + }, 781 + { .ucastq_start = 192, 782 + .ucastq_end = 207, 783 + .mcastq_start = 284, 784 + .mcastq_end = 287, 785 + .flow_id_start = 48, 786 + .flow_id_end = 51, 787 + .l0node_start = 96, 788 + .l0node_end = 111, 789 + .l1node_start = 20, 790 + .l1node_end = 23, 791 + }, 792 + { .ucastq_start = 208, 793 + .ucastq_end = 223, 794 + .mcastq_start = 288, 795 + .mcastq_end = 291, 796 + .flow_id_start = 52, 797 + .flow_id_end = 55, 798 + .l0node_start = 112, 799 + .l0node_end = 127, 800 + .l1node_start = 24, 801 + .l1node_end = 27, 802 + }, 803 + { .ucastq_start = 224, 804 + .ucastq_end = 239, 805 + .mcastq_start = 292, 806 + .mcastq_end = 295, 807 +
.flow_id_start = 56, 808 + .flow_id_end = 59, 809 + .l0node_start = 128, 810 + .l0node_end = 143, 811 + .l1node_start = 28, 812 + .l1node_end = 31, 813 + }, 814 + { .ucastq_start = 240, 815 + .ucastq_end = 255, 816 + .mcastq_start = 296, 817 + .mcastq_end = 299, 818 + .flow_id_start = 60, 819 + .flow_id_end = 63, 820 + .l0node_start = 144, 821 + .l0node_end = 159, 822 + .l1node_start = 32, 823 + .l1node_end = 35, 824 + }, 825 + { .ucastq_start = 64, 826 + .ucastq_end = 143, 827 + .mcastq_start = 0, 828 + .mcastq_end = 0, 829 + .flow_id_start = 1, 830 + .flow_id_end = 35, 831 + .l0node_start = 8, 832 + .l0node_end = 47, 833 + .l1node_start = 1, 834 + .l1node_end = 7, 835 + }, 836 + }; 837 + 838 + /* Set the PPE queue level scheduler configuration. */ 839 + static int ppe_scheduler_l0_queue_map_set(struct ppe_device *ppe_dev, 840 + int node_id, int port, 841 + struct ppe_scheduler_cfg scheduler_cfg) 842 + { 843 + u32 val, reg; 844 + int ret; 845 + 846 + reg = PPE_L0_FLOW_MAP_TBL_ADDR + node_id * PPE_L0_FLOW_MAP_TBL_INC; 847 + val = FIELD_PREP(PPE_L0_FLOW_MAP_TBL_FLOW_ID, scheduler_cfg.flow_id); 848 + val |= FIELD_PREP(PPE_L0_FLOW_MAP_TBL_C_PRI, scheduler_cfg.pri); 849 + val |= FIELD_PREP(PPE_L0_FLOW_MAP_TBL_E_PRI, scheduler_cfg.pri); 850 + val |= FIELD_PREP(PPE_L0_FLOW_MAP_TBL_C_NODE_WT, scheduler_cfg.drr_node_wt); 851 + val |= FIELD_PREP(PPE_L0_FLOW_MAP_TBL_E_NODE_WT, scheduler_cfg.drr_node_wt); 852 + 853 + ret = regmap_write(ppe_dev->regmap, reg, val); 854 + if (ret) 855 + return ret; 856 + 857 + reg = PPE_L0_C_FLOW_CFG_TBL_ADDR + 858 + (scheduler_cfg.flow_id * PPE_QUEUE_SCH_PRI_NUM + scheduler_cfg.pri) * 859 + PPE_L0_C_FLOW_CFG_TBL_INC; 860 + val = FIELD_PREP(PPE_L0_C_FLOW_CFG_TBL_NODE_ID, scheduler_cfg.drr_node_id); 861 + val |= FIELD_PREP(PPE_L0_C_FLOW_CFG_TBL_NODE_CREDIT_UNIT, scheduler_cfg.unit_is_packet); 862 + 863 + ret = regmap_write(ppe_dev->regmap, reg, val); 864 + if (ret) 865 + return ret; 866 + 867 + reg = PPE_L0_E_FLOW_CFG_TBL_ADDR + 868 + 
+	      (scheduler_cfg.flow_id * PPE_QUEUE_SCH_PRI_NUM + scheduler_cfg.pri) *
+	      PPE_L0_E_FLOW_CFG_TBL_INC;
+	val = FIELD_PREP(PPE_L0_E_FLOW_CFG_TBL_NODE_ID, scheduler_cfg.drr_node_id);
+	val |= FIELD_PREP(PPE_L0_E_FLOW_CFG_TBL_NODE_CREDIT_UNIT, scheduler_cfg.unit_is_packet);
+
+	ret = regmap_write(ppe_dev->regmap, reg, val);
+	if (ret)
+		return ret;
+
+	reg = PPE_L0_FLOW_PORT_MAP_TBL_ADDR + node_id * PPE_L0_FLOW_PORT_MAP_TBL_INC;
+	val = FIELD_PREP(PPE_L0_FLOW_PORT_MAP_TBL_PORT_NUM, port);
+
+	ret = regmap_write(ppe_dev->regmap, reg, val);
+	if (ret)
+		return ret;
+
+	reg = PPE_L0_COMP_CFG_TBL_ADDR + node_id * PPE_L0_COMP_CFG_TBL_INC;
+	val = FIELD_PREP(PPE_L0_COMP_CFG_TBL_NODE_METER_LEN, scheduler_cfg.frame_mode);
+
+	return regmap_update_bits(ppe_dev->regmap, reg,
+				  PPE_L0_COMP_CFG_TBL_NODE_METER_LEN,
+				  val);
+}
+
+/* Set the PPE flow level scheduler configuration. */
+static int ppe_scheduler_l1_queue_map_set(struct ppe_device *ppe_dev,
+					  int node_id, int port,
+					  struct ppe_scheduler_cfg scheduler_cfg)
+{
+	u32 val, reg;
+	int ret;
+
+	val = FIELD_PREP(PPE_L1_FLOW_MAP_TBL_FLOW_ID, scheduler_cfg.flow_id);
+	val |= FIELD_PREP(PPE_L1_FLOW_MAP_TBL_C_PRI, scheduler_cfg.pri);
+	val |= FIELD_PREP(PPE_L1_FLOW_MAP_TBL_E_PRI, scheduler_cfg.pri);
+	val |= FIELD_PREP(PPE_L1_FLOW_MAP_TBL_C_NODE_WT, scheduler_cfg.drr_node_wt);
+	val |= FIELD_PREP(PPE_L1_FLOW_MAP_TBL_E_NODE_WT, scheduler_cfg.drr_node_wt);
+	reg = PPE_L1_FLOW_MAP_TBL_ADDR + node_id * PPE_L1_FLOW_MAP_TBL_INC;
+
+	ret = regmap_write(ppe_dev->regmap, reg, val);
+	if (ret)
+		return ret;
+
+	val = FIELD_PREP(PPE_L1_C_FLOW_CFG_TBL_NODE_ID, scheduler_cfg.drr_node_id);
+	val |= FIELD_PREP(PPE_L1_C_FLOW_CFG_TBL_NODE_CREDIT_UNIT, scheduler_cfg.unit_is_packet);
+	reg = PPE_L1_C_FLOW_CFG_TBL_ADDR +
+	      (scheduler_cfg.flow_id * PPE_QUEUE_SCH_PRI_NUM + scheduler_cfg.pri) *
+	      PPE_L1_C_FLOW_CFG_TBL_INC;
+
+	ret = regmap_write(ppe_dev->regmap, reg, val);
+	if (ret)
+		return ret;
+
+	val = FIELD_PREP(PPE_L1_E_FLOW_CFG_TBL_NODE_ID, scheduler_cfg.drr_node_id);
+	val |= FIELD_PREP(PPE_L1_E_FLOW_CFG_TBL_NODE_CREDIT_UNIT, scheduler_cfg.unit_is_packet);
+	reg = PPE_L1_E_FLOW_CFG_TBL_ADDR +
+	      (scheduler_cfg.flow_id * PPE_QUEUE_SCH_PRI_NUM + scheduler_cfg.pri) *
+	      PPE_L1_E_FLOW_CFG_TBL_INC;
+
+	ret = regmap_write(ppe_dev->regmap, reg, val);
+	if (ret)
+		return ret;
+
+	val = FIELD_PREP(PPE_L1_FLOW_PORT_MAP_TBL_PORT_NUM, port);
+	reg = PPE_L1_FLOW_PORT_MAP_TBL_ADDR + node_id * PPE_L1_FLOW_PORT_MAP_TBL_INC;
+
+	ret = regmap_write(ppe_dev->regmap, reg, val);
+	if (ret)
+		return ret;
+
+	reg = PPE_L1_COMP_CFG_TBL_ADDR + node_id * PPE_L1_COMP_CFG_TBL_INC;
+	val = FIELD_PREP(PPE_L1_COMP_CFG_TBL_NODE_METER_LEN, scheduler_cfg.frame_mode);
+
+	return regmap_update_bits(ppe_dev->regmap, reg, PPE_L1_COMP_CFG_TBL_NODE_METER_LEN, val);
+}
+
+/**
+ * ppe_queue_scheduler_set - Configure scheduler for PPE hardware queue
+ * @ppe_dev: PPE device
+ * @node_id: PPE queue ID or flow ID
+ * @flow_level: Flow level scheduler or queue level scheduler
+ * @port: PPE port ID to set the scheduler configuration for
+ * @scheduler_cfg: PPE scheduler configuration
+ *
+ * PPE scheduler configuration supports queue level and flow level on
+ * the PPE egress port.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_scheduler_set(struct ppe_device *ppe_dev,
+			    int node_id, bool flow_level, int port,
+			    struct ppe_scheduler_cfg scheduler_cfg)
+{
+	if (flow_level)
+		return ppe_scheduler_l1_queue_map_set(ppe_dev, node_id,
+						      port, scheduler_cfg);
+
+	return ppe_scheduler_l0_queue_map_set(ppe_dev, node_id,
+					      port, scheduler_cfg);
+}
+
+/**
+ * ppe_queue_ucast_base_set - Set PPE unicast queue base ID and profile ID
+ * @ppe_dev: PPE device
+ * @queue_dst: PPE queue destination configuration
+ * @queue_base: PPE queue base ID
+ * @profile_id: Profile ID
+ *
+ * The PPE unicast queue base ID and profile ID are configured based on the
+ * destination port information, which can be a service code, a CPU code or
+ * the destination port.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev,
+			     struct ppe_queue_ucast_dest queue_dst,
+			     int queue_base, int profile_id)
+{
+	int index, profile_size;
+	u32 val, reg;
+
+	profile_size = queue_dst.src_profile << 8;
+	if (queue_dst.service_code_en)
+		index = PPE_QUEUE_BASE_SERVICE_CODE + profile_size +
+			queue_dst.service_code;
+	else if (queue_dst.cpu_code_en)
+		index = PPE_QUEUE_BASE_CPU_CODE + profile_size +
+			queue_dst.cpu_code;
+	else
+		index = profile_size + queue_dst.dest_port;
+
+	val = FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID, profile_id);
+	val |= FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID, queue_base);
+	reg = PPE_UCAST_QUEUE_MAP_TBL_ADDR + index * PPE_UCAST_QUEUE_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_queue_ucast_offset_pri_set - Set PPE unicast queue offset based on priority
+ * @ppe_dev: PPE device
+ * @profile_id: Profile ID
+ * @priority: PPE internal priority to be used to set queue offset
+ * @queue_offset: Queue offset used for calculating the destination queue ID
+ *
+ * The PPE unicast queue offset is configured based on the PPE
+ * internal priority.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_offset_pri_set(struct ppe_device *ppe_dev,
+				   int profile_id,
+				   int priority,
+				   int queue_offset)
+{
+	u32 val, reg;
+	int index;
+
+	index = (profile_id << 4) + priority;
+	val = FIELD_PREP(PPE_UCAST_PRIORITY_MAP_TBL_CLASS, queue_offset);
+	reg = PPE_UCAST_PRIORITY_MAP_TBL_ADDR + index * PPE_UCAST_PRIORITY_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_queue_ucast_offset_hash_set - Set PPE unicast queue offset based on hash
+ * @ppe_dev: PPE device
+ * @profile_id: Profile ID
+ * @rss_hash: Packet hash value to be used to set queue offset
+ * @queue_offset: Queue offset used for calculating the destination queue ID
+ *
+ * The PPE unicast queue offset is configured based on the RSS hash value.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev,
+				    int profile_id,
+				    int rss_hash,
+				    int queue_offset)
+{
+	u32 val, reg;
+	int index;
+
+	index = (profile_id << 8) + rss_hash;
+	val = FIELD_PREP(PPE_UCAST_HASH_MAP_TBL_HASH, queue_offset);
+	reg = PPE_UCAST_HASH_MAP_TBL_ADDR + index * PPE_UCAST_HASH_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_port_resource_get - Get PPE resource per port
+ * @ppe_dev: PPE device
+ * @port: PPE port
+ * @type: Resource type
+ * @res_start: Resource start ID returned
+ * @res_end: Resource end ID returned
+ *
+ * PPE resource is assigned per PPE port, which is acquired for QoS scheduler.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_port_resource_get(struct ppe_device *ppe_dev, int port,
+			  enum ppe_resource_type type,
+			  int *res_start, int *res_end)
+{
+	struct ppe_port_schedule_resource res;
+
+	/* The reserved resource with the maximum port ID of PPE is
+	 * also allowed to be acquired.
+	 */
+	if (port > ppe_dev->num_ports)
+		return -EINVAL;
+
+	res = ppe_scheduler_res[port];
+	switch (type) {
+	case PPE_RES_UCAST:
+		*res_start = res.ucastq_start;
+		*res_end = res.ucastq_end;
+		break;
+	case PPE_RES_MCAST:
+		*res_start = res.mcastq_start;
+		*res_end = res.mcastq_end;
+		break;
+	case PPE_RES_FLOW_ID:
+		*res_start = res.flow_id_start;
+		*res_end = res.flow_id_end;
+		break;
+	case PPE_RES_L0_NODE:
+		*res_start = res.l0node_start;
+		*res_end = res.l0node_end;
+		break;
+	case PPE_RES_L1_NODE:
+		*res_start = res.l1node_start;
+		*res_end = res.l1node_end;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/**
+ * ppe_sc_config_set - Set PPE service code configuration
+ * @ppe_dev: PPE device
+ * @sc: Service ID, 0-255 supported by PPE
+ * @cfg: Service code configuration
+ *
+ * PPE service code is used by the PPE during its packet processing stages,
+ * to perform or bypass certain selected packet operations on the packet.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_sc_config_set(struct ppe_device *ppe_dev, int sc, struct ppe_sc_cfg cfg)
+{
+	u32 val, reg, servcode_val[2] = {};
+	unsigned long bitmap_value;
+	int ret;
+
+	val = FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_PORT_ID_VALID, cfg.dest_port_valid);
+	val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_PORT_ID, cfg.dest_port);
+	val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_DIRECTION, cfg.is_src);
+
+	bitmap_value = bitmap_read(cfg.bitmaps.egress, 0, PPE_SC_BYPASS_EGRESS_SIZE);
+	val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_DST_BYPASS_BITMAP, bitmap_value);
+	val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_RX_CNT_EN,
+			  test_bit(PPE_SC_BYPASS_COUNTER_RX, cfg.bitmaps.counter));
+	val |= FIELD_PREP(PPE_IN_L2_SERVICE_TBL_TX_CNT_EN,
+			  test_bit(PPE_SC_BYPASS_COUNTER_TX, cfg.bitmaps.counter));
+	reg = PPE_IN_L2_SERVICE_TBL_ADDR + PPE_IN_L2_SERVICE_TBL_INC * sc;
+
+	ret = regmap_write(ppe_dev->regmap, reg, val);
+	if (ret)
+		return ret;
+
+	bitmap_value = bitmap_read(cfg.bitmaps.ingress, 0, PPE_SC_BYPASS_INGRESS_SIZE);
+	PPE_SERVICE_SET_BYPASS_BITMAP(servcode_val, bitmap_value);
+	PPE_SERVICE_SET_RX_CNT_EN(servcode_val,
+				  test_bit(PPE_SC_BYPASS_COUNTER_RX_VLAN, cfg.bitmaps.counter));
+	reg = PPE_SERVICE_TBL_ADDR + PPE_SERVICE_TBL_INC * sc;
+
+	ret = regmap_bulk_write(ppe_dev->regmap, reg,
+				servcode_val, ARRAY_SIZE(servcode_val));
+	if (ret)
+		return ret;
+
+	reg = PPE_EG_SERVICE_TBL_ADDR + PPE_EG_SERVICE_TBL_INC * sc;
+	ret = regmap_bulk_read(ppe_dev->regmap, reg,
+			       servcode_val, ARRAY_SIZE(servcode_val));
+	if (ret)
+		return ret;
+
+	PPE_EG_SERVICE_SET_NEXT_SERVCODE(servcode_val, cfg.next_service_code);
+	PPE_EG_SERVICE_SET_UPDATE_ACTION(servcode_val, cfg.eip_field_update_bitmap);
+	PPE_EG_SERVICE_SET_HW_SERVICE(servcode_val, cfg.eip_hw_service);
+	PPE_EG_SERVICE_SET_OFFSET_SEL(servcode_val, cfg.eip_offset_sel);
+	PPE_EG_SERVICE_SET_TX_CNT_EN(servcode_val,
+				     test_bit(PPE_SC_BYPASS_COUNTER_TX_VLAN, cfg.bitmaps.counter));
+
+	ret = regmap_bulk_write(ppe_dev->regmap, reg,
+				servcode_val, ARRAY_SIZE(servcode_val));
+	if (ret)
+		return ret;
+
+	bitmap_value = bitmap_read(cfg.bitmaps.tunnel, 0, PPE_SC_BYPASS_TUNNEL_SIZE);
+	val = FIELD_PREP(PPE_TL_SERVICE_TBL_BYPASS_BITMAP, bitmap_value);
+	reg = PPE_TL_SERVICE_TBL_ADDR + PPE_TL_SERVICE_TBL_INC * sc;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_counter_enable_set - Enable PPE port counters
+ * @ppe_dev: PPE device
+ * @port: PPE port ID
+ *
+ * Enable PPE counters on the given port for the unicast packet, multicast
+ * packet and VLAN packet received and transmitted by PPE.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_counter_enable_set(struct ppe_device *ppe_dev, int port)
+{
+	u32 reg, mru_mtu_val[3];
+	int ret;
+
+	reg = PPE_MRU_MTU_CTRL_TBL_ADDR + PPE_MRU_MTU_CTRL_TBL_INC * port;
+	ret = regmap_bulk_read(ppe_dev->regmap, reg,
+			       mru_mtu_val, ARRAY_SIZE(mru_mtu_val));
+	if (ret)
+		return ret;
+
+	PPE_MRU_MTU_CTRL_SET_RX_CNT_EN(mru_mtu_val, true);
+	PPE_MRU_MTU_CTRL_SET_TX_CNT_EN(mru_mtu_val, true);
+	ret = regmap_bulk_write(ppe_dev->regmap, reg,
+				mru_mtu_val, ARRAY_SIZE(mru_mtu_val));
+	if (ret)
+		return ret;
+
+	reg = PPE_MC_MTU_CTRL_TBL_ADDR + PPE_MC_MTU_CTRL_TBL_INC * port;
+	ret = regmap_set_bits(ppe_dev->regmap, reg, PPE_MC_MTU_CTRL_TBL_TX_CNT_EN);
+	if (ret)
+		return ret;
+
+	reg = PPE_PORT_EG_VLAN_TBL_ADDR + PPE_PORT_EG_VLAN_TBL_INC * port;
+
+	return regmap_set_bits(ppe_dev->regmap, reg, PPE_PORT_EG_VLAN_TBL_TX_COUNTING_EN);
+}
+
+static int ppe_rss_hash_ipv4_config(struct ppe_device *ppe_dev, int index,
+				    struct ppe_rss_hash_cfg cfg)
+{
+	u32 reg, val;
+
+	switch (index) {
+	case 0:
+		val = cfg.hash_sip_mix[0];
+		break;
+	case 1:
+		val = cfg.hash_dip_mix[0];
+		break;
+	case 2:
+		val = cfg.hash_protocol_mix;
+		break;
+	case 3:
+		val = cfg.hash_dport_mix;
+		break;
+	case 4:
+		val = cfg.hash_sport_mix;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	reg = PPE_RSS_HASH_MIX_IPV4_ADDR + index * PPE_RSS_HASH_MIX_IPV4_INC;
+
+	return regmap_update_bits(ppe_dev->regmap, reg,
+				  PPE_RSS_HASH_MIX_IPV4_VAL,
+				  FIELD_PREP(PPE_RSS_HASH_MIX_IPV4_VAL, val));
+}
+
+static int ppe_rss_hash_ipv6_config(struct ppe_device *ppe_dev, int index,
+				    struct ppe_rss_hash_cfg cfg)
+{
+	u32 reg, val;
+
+	switch (index) {
+	case 0 ... 3:
+		val = cfg.hash_sip_mix[index];
+		break;
+	case 4 ... 7:
+		val = cfg.hash_dip_mix[index - 4];
+		break;
+	case 8:
+		val = cfg.hash_protocol_mix;
+		break;
+	case 9:
+		val = cfg.hash_dport_mix;
+		break;
+	case 10:
+		val = cfg.hash_sport_mix;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	reg = PPE_RSS_HASH_MIX_ADDR + index * PPE_RSS_HASH_MIX_INC;
+
+	return regmap_update_bits(ppe_dev->regmap, reg,
+				  PPE_RSS_HASH_MIX_VAL,
+				  FIELD_PREP(PPE_RSS_HASH_MIX_VAL, val));
+}
+
+/**
+ * ppe_rss_hash_config_set - Configure the PPE hash settings for received packets
+ * @ppe_dev: PPE device
+ * @mode: RSS hash mode, for the packet type IPv4 or IPv6
+ * @cfg: RSS hash configuration
+ *
+ * PPE RSS hash settings are configured for the packet types IPv4 and IPv6.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_rss_hash_config_set(struct ppe_device *ppe_dev, int mode,
+			    struct ppe_rss_hash_cfg cfg)
+{
+	u32 val, reg;
+	int i, ret;
+
+	if (mode & PPE_RSS_HASH_MODE_IPV4) {
+		val = FIELD_PREP(PPE_RSS_HASH_MASK_IPV4_HASH_MASK, cfg.hash_mask);
+		val |= FIELD_PREP(PPE_RSS_HASH_MASK_IPV4_FRAGMENT, cfg.hash_fragment_mode);
+		ret = regmap_write(ppe_dev->regmap, PPE_RSS_HASH_MASK_IPV4_ADDR, val);
+		if (ret)
+			return ret;
+
+		val = FIELD_PREP(PPE_RSS_HASH_SEED_IPV4_VAL, cfg.hash_seed);
+		ret = regmap_write(ppe_dev->regmap, PPE_RSS_HASH_SEED_IPV4_ADDR, val);
+		if (ret)
+			return ret;
+
+		for (i = 0; i < PPE_RSS_HASH_MIX_IPV4_ENTRIES; i++) {
+			ret = ppe_rss_hash_ipv4_config(ppe_dev, i, cfg);
+			if (ret)
+				return ret;
+		}
+
+		for (i = 0; i < PPE_RSS_HASH_FIN_IPV4_ENTRIES; i++) {
+			val = FIELD_PREP(PPE_RSS_HASH_FIN_IPV4_INNER, cfg.hash_fin_inner[i]);
+			val |= FIELD_PREP(PPE_RSS_HASH_FIN_IPV4_OUTER, cfg.hash_fin_outer[i]);
+			reg = PPE_RSS_HASH_FIN_IPV4_ADDR + i * PPE_RSS_HASH_FIN_IPV4_INC;
+
+			ret = regmap_write(ppe_dev->regmap, reg, val);
+			if (ret)
+				return ret;
+		}
+	}
+
+	if (mode & PPE_RSS_HASH_MODE_IPV6) {
+		val = FIELD_PREP(PPE_RSS_HASH_MASK_HASH_MASK, cfg.hash_mask);
+		val |= FIELD_PREP(PPE_RSS_HASH_MASK_FRAGMENT, cfg.hash_fragment_mode);
+		ret = regmap_write(ppe_dev->regmap, PPE_RSS_HASH_MASK_ADDR, val);
+		if (ret)
+			return ret;
+
+		val = FIELD_PREP(PPE_RSS_HASH_SEED_VAL, cfg.hash_seed);
+		ret = regmap_write(ppe_dev->regmap, PPE_RSS_HASH_SEED_ADDR, val);
+		if (ret)
+			return ret;
+
+		for (i = 0; i < PPE_RSS_HASH_MIX_ENTRIES; i++) {
+			ret = ppe_rss_hash_ipv6_config(ppe_dev, i, cfg);
+			if (ret)
+				return ret;
+		}
+
+		for (i = 0; i < PPE_RSS_HASH_FIN_ENTRIES; i++) {
+			val = FIELD_PREP(PPE_RSS_HASH_FIN_INNER, cfg.hash_fin_inner[i]);
+			val |= FIELD_PREP(PPE_RSS_HASH_FIN_OUTER, cfg.hash_fin_outer[i]);
+			reg = PPE_RSS_HASH_FIN_ADDR + i * PPE_RSS_HASH_FIN_INC;
+
+			ret = regmap_write(ppe_dev->regmap, reg, val);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * ppe_ring_queue_map_set - Set the PPE queue to Ethernet DMA ring mapping
+ * @ppe_dev: PPE device
+ * @ring_id: Ethernet DMA ring ID
+ * @queue_map: Bit map of queue IDs mapped to the given Ethernet DMA ring
+ *
+ * Configure the mapping from a set of PPE queues to a given Ethernet DMA ring.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_ring_queue_map_set(struct ppe_device *ppe_dev, int ring_id, u32 *queue_map)
+{
+	u32 reg, queue_bitmap_val[PPE_RING_TO_QUEUE_BITMAP_WORD_CNT];
+
+	memcpy(queue_bitmap_val, queue_map, sizeof(queue_bitmap_val));
+	reg = PPE_RING_Q_MAP_TBL_ADDR + PPE_RING_Q_MAP_TBL_INC * ring_id;
+
+	return regmap_bulk_write(ppe_dev->regmap, reg,
+				 queue_bitmap_val,
+				 ARRAY_SIZE(queue_bitmap_val));
+}
+
+static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id,
+				   const struct ppe_bm_port_config port_cfg)
+{
+	u32 reg, val, bm_fc_val[2];
+	int ret;
+
+	reg = PPE_BM_PORT_FC_CFG_TBL_ADDR + PPE_BM_PORT_FC_CFG_TBL_INC * bm_port_id;
+	ret = regmap_bulk_read(ppe_dev->regmap, reg,
+			       bm_fc_val, ARRAY_SIZE(bm_fc_val));
+	if (ret)
+		return ret;
+
+	/* Configure the BM flow control related thresholds. */
+	PPE_BM_PORT_FC_SET_WEIGHT(bm_fc_val, port_cfg.weight);
+	PPE_BM_PORT_FC_SET_RESUME_OFFSET(bm_fc_val, port_cfg.resume_offset);
+	PPE_BM_PORT_FC_SET_RESUME_THRESHOLD(bm_fc_val, port_cfg.resume_ceil);
+	PPE_BM_PORT_FC_SET_DYNAMIC(bm_fc_val, port_cfg.dynamic);
+	PPE_BM_PORT_FC_SET_REACT_LIMIT(bm_fc_val, port_cfg.in_fly_buf);
+	PPE_BM_PORT_FC_SET_PRE_ALLOC(bm_fc_val, port_cfg.pre_alloc);
+
+	/* Configure the low/high bits of the ceiling for the BM port. */
+	val = FIELD_GET(GENMASK(2, 0), port_cfg.ceil);
+	PPE_BM_PORT_FC_SET_CEILING_LOW(bm_fc_val, val);
+	val = FIELD_GET(GENMASK(10, 3), port_cfg.ceil);
+	PPE_BM_PORT_FC_SET_CEILING_HIGH(bm_fc_val, val);
+
+	ret = regmap_bulk_write(ppe_dev->regmap, reg,
+				bm_fc_val, ARRAY_SIZE(bm_fc_val));
+	if (ret)
+		return ret;
+
+	/* Assign the default group ID 0 to the BM port. */
+	val = FIELD_PREP(PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID, 0);
+	reg = PPE_BM_PORT_GROUP_ID_ADDR + PPE_BM_PORT_GROUP_ID_INC * bm_port_id;
+	ret = regmap_update_bits(ppe_dev->regmap, reg,
+				 PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID,
+				 val);
+	if (ret)
+		return ret;
+
+	/* Enable BM port flow control. */
+	reg = PPE_BM_PORT_FC_MODE_ADDR + PPE_BM_PORT_FC_MODE_INC * bm_port_id;
+
+	return regmap_set_bits(ppe_dev->regmap, reg, PPE_BM_PORT_FC_MODE_EN);
+}
+
+/* Configure the buffer threshold for the port flow control function. */
+static int ppe_config_bm(struct ppe_device *ppe_dev)
+{
+	const struct ppe_bm_port_config *port_cfg;
+	unsigned int i, bm_port_id, port_cfg_cnt;
+	u32 reg, val;
+	int ret;
+
+	/* Configure the allocated buffer number only for group 0.
+	 * The buffer number of groups 1-3 is already cleared to 0
+	 * after the PPE reset during the probe of the PPE driver.
+	 */
+	reg = PPE_BM_SHARED_GROUP_CFG_ADDR;
+	val = FIELD_PREP(PPE_BM_SHARED_GROUP_CFG_SHARED_LIMIT,
+			 ipq9574_ppe_bm_group_config);
+	ret = regmap_update_bits(ppe_dev->regmap, reg,
+				 PPE_BM_SHARED_GROUP_CFG_SHARED_LIMIT,
+				 val);
+	if (ret)
+		goto bm_config_fail;
+
+	/* Configure the buffer thresholds for the BM ports. */
+	port_cfg = ipq9574_ppe_bm_port_config;
+	port_cfg_cnt = ARRAY_SIZE(ipq9574_ppe_bm_port_config);
+	for (i = 0; i < port_cfg_cnt; i++) {
+		for (bm_port_id = port_cfg[i].port_id_start;
+		     bm_port_id <= port_cfg[i].port_id_end; bm_port_id++) {
+			ret = ppe_config_bm_threshold(ppe_dev, bm_port_id,
+						      port_cfg[i]);
+			if (ret)
+				goto bm_config_fail;
+		}
+	}
+
+	return 0;
+
+bm_config_fail:
+	dev_err(ppe_dev->dev, "PPE BM config error %d\n", ret);
+	return ret;
+}
+
+/* Configure the PPE hardware queue depth, which is decided by the queue
+ * threshold.
+ */
+static int ppe_config_qm(struct ppe_device *ppe_dev)
+{
+	const struct ppe_qm_queue_config *queue_cfg;
+	int ret, i, queue_id, queue_cfg_count;
+	u32 reg, multicast_queue_cfg[5];
+	u32 unicast_queue_cfg[4];
+	u32 group_cfg[3];
+
+	/* Assign the buffer number to group 0 by default. */
+	reg = PPE_AC_GRP_CFG_TBL_ADDR;
+	ret = regmap_bulk_read(ppe_dev->regmap, reg,
+			       group_cfg, ARRAY_SIZE(group_cfg));
+	if (ret)
+		goto qm_config_fail;
+
+	PPE_AC_GRP_SET_BUF_LIMIT(group_cfg, ipq9574_ppe_qm_group_config);
+
+	ret = regmap_bulk_write(ppe_dev->regmap, reg,
+				group_cfg, ARRAY_SIZE(group_cfg));
+	if (ret)
+		goto qm_config_fail;
+
+	queue_cfg = ipq9574_ppe_qm_queue_config;
+	queue_cfg_count = ARRAY_SIZE(ipq9574_ppe_qm_queue_config);
+	for (i = 0; i < queue_cfg_count; i++) {
+		queue_id = queue_cfg[i].queue_start;
+
+		/* Configure the threshold for dropping packets separately for
+		 * unicast and multicast PPE queues.
+		 */
+		while (queue_id <= queue_cfg[i].queue_end) {
+			if (queue_id < PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES) {
+				reg = PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR +
+				      PPE_AC_UNICAST_QUEUE_CFG_TBL_INC * queue_id;
+
+				ret = regmap_bulk_read(ppe_dev->regmap, reg,
+						       unicast_queue_cfg,
+						       ARRAY_SIZE(unicast_queue_cfg));
+				if (ret)
+					goto qm_config_fail;
+
+				PPE_AC_UNICAST_QUEUE_SET_EN(unicast_queue_cfg, true);
+				PPE_AC_UNICAST_QUEUE_SET_GRP_ID(unicast_queue_cfg, 0);
+				PPE_AC_UNICAST_QUEUE_SET_PRE_LIMIT(unicast_queue_cfg,
+								   queue_cfg[i].prealloc_buf);
+				PPE_AC_UNICAST_QUEUE_SET_DYNAMIC(unicast_queue_cfg,
+								 queue_cfg[i].dynamic);
+				PPE_AC_UNICAST_QUEUE_SET_WEIGHT(unicast_queue_cfg,
+								queue_cfg[i].weight);
+				PPE_AC_UNICAST_QUEUE_SET_THRESHOLD(unicast_queue_cfg,
+								   queue_cfg[i].ceil);
+				PPE_AC_UNICAST_QUEUE_SET_GRN_RESUME(unicast_queue_cfg,
+								    queue_cfg[i].resume_offset);
+
+				ret = regmap_bulk_write(ppe_dev->regmap, reg,
+							unicast_queue_cfg,
+							ARRAY_SIZE(unicast_queue_cfg));
+				if (ret)
+					goto qm_config_fail;
+			} else {
+				reg = PPE_AC_MULTICAST_QUEUE_CFG_TBL_ADDR +
+				      PPE_AC_MULTICAST_QUEUE_CFG_TBL_INC * queue_id;
+
+				ret = regmap_bulk_read(ppe_dev->regmap, reg,
+						       multicast_queue_cfg,
+						       ARRAY_SIZE(multicast_queue_cfg));
+				if (ret)
+					goto qm_config_fail;
+
+				PPE_AC_MULTICAST_QUEUE_SET_EN(multicast_queue_cfg, true);
+				PPE_AC_MULTICAST_QUEUE_SET_GRN_GRP_ID(multicast_queue_cfg, 0);
+				PPE_AC_MULTICAST_QUEUE_SET_GRN_PRE_LIMIT(multicast_queue_cfg,
+									 queue_cfg[i].prealloc_buf);
+				PPE_AC_MULTICAST_QUEUE_SET_GRN_THRESHOLD(multicast_queue_cfg,
+									 queue_cfg[i].ceil);
+				PPE_AC_MULTICAST_QUEUE_SET_GRN_RESUME(multicast_queue_cfg,
+								      queue_cfg[i].resume_offset);
+
+				ret = regmap_bulk_write(ppe_dev->regmap, reg,
+							multicast_queue_cfg,
+							ARRAY_SIZE(multicast_queue_cfg));
+				if (ret)
+					goto qm_config_fail;
+			}
+
+			/* Enable enqueue. */
+			reg = PPE_ENQ_OPR_TBL_ADDR + PPE_ENQ_OPR_TBL_INC * queue_id;
+			ret = regmap_clear_bits(ppe_dev->regmap, reg,
+						PPE_ENQ_OPR_TBL_ENQ_DISABLE);
+			if (ret)
+				goto qm_config_fail;
+
+			/* Enable dequeue. */
+			reg = PPE_DEQ_OPR_TBL_ADDR + PPE_DEQ_OPR_TBL_INC * queue_id;
+			ret = regmap_clear_bits(ppe_dev->regmap, reg,
+						PPE_DEQ_OPR_TBL_DEQ_DISABLE);
+			if (ret)
+				goto qm_config_fail;
+
+			queue_id++;
+		}
+	}
+
+	/* Enable the queue counter for all PPE hardware queues. */
+	ret = regmap_set_bits(ppe_dev->regmap, PPE_EG_BRIDGE_CONFIG_ADDR,
+			      PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN);
+	if (ret)
+		goto qm_config_fail;
+
+	return 0;
+
+qm_config_fail:
+	dev_err(ppe_dev->dev, "PPE QM config error %d\n", ret);
+	return ret;
+}
+
+static int ppe_node_scheduler_config(struct ppe_device *ppe_dev,
+				     const struct ppe_scheduler_port_config config)
+{
+	struct ppe_scheduler_cfg sch_cfg;
+	int ret, i;
+
+	for (i = 0; i < config.loop_num; i++) {
+		if (!config.pri_max) {
+			/* Round-robin scheduler without priority. */
+			sch_cfg.flow_id = config.flow_id;
+			sch_cfg.pri = 0;
+			sch_cfg.drr_node_id = config.drr_node_id;
+		} else {
+			sch_cfg.flow_id = config.flow_id + (i / config.pri_max);
+			sch_cfg.pri = i % config.pri_max;
+			sch_cfg.drr_node_id = config.drr_node_id + i;
+		}
+
+		/* The scheduler weight must be more than 0. */
+		sch_cfg.drr_node_wt = 1;
+		/* Byte based scheduling. */
+		sch_cfg.unit_is_packet = false;
+		/* Frame + CRC calculated. */
+		sch_cfg.frame_mode = PPE_SCH_WITH_FRAME_CRC;
+
+		ret = ppe_queue_scheduler_set(ppe_dev, config.node_id + i,
+					      config.flow_level,
+					      config.port,
+					      sch_cfg);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/* Initialize scheduler settings for PPE buffer utilization and dispatching
+ * packets on the PPE queues.
+ */
+static int ppe_config_scheduler(struct ppe_device *ppe_dev)
+{
+	const struct ppe_scheduler_port_config *port_cfg;
+	const struct ppe_scheduler_qm_config *qm_cfg;
+	const struct ppe_scheduler_bm_config *bm_cfg;
+	int ret, i, count;
+	u32 val, reg;
+
+	count = ARRAY_SIZE(ipq9574_ppe_sch_bm_config);
+	bm_cfg = ipq9574_ppe_sch_bm_config;
+
+	/* Configure the depth of the BM scheduler entries. */
+	val = FIELD_PREP(PPE_BM_SCH_CTRL_SCH_DEPTH, count);
+	val |= FIELD_PREP(PPE_BM_SCH_CTRL_SCH_OFFSET, 0);
+	val |= FIELD_PREP(PPE_BM_SCH_CTRL_SCH_EN, 1);
+
+	ret = regmap_write(ppe_dev->regmap, PPE_BM_SCH_CTRL_ADDR, val);
+	if (ret)
+		goto sch_config_fail;
+
+	/* Configure each BM scheduler entry with a valid ingress port and
+	 * egress port; the second port takes effect when the specified port
+	 * is in the inactive state.
+	 */
+	for (i = 0; i < count; i++) {
+		val = FIELD_PREP(PPE_BM_SCH_CFG_TBL_VALID, bm_cfg[i].valid);
+		val |= FIELD_PREP(PPE_BM_SCH_CFG_TBL_DIR, bm_cfg[i].dir);
+		val |= FIELD_PREP(PPE_BM_SCH_CFG_TBL_PORT_NUM, bm_cfg[i].port);
+		val |= FIELD_PREP(PPE_BM_SCH_CFG_TBL_SECOND_PORT_VALID,
+				  bm_cfg[i].backup_port_valid);
+		val |= FIELD_PREP(PPE_BM_SCH_CFG_TBL_SECOND_PORT,
+				  bm_cfg[i].backup_port);
+
+		reg = PPE_BM_SCH_CFG_TBL_ADDR + i * PPE_BM_SCH_CFG_TBL_INC;
+		ret = regmap_write(ppe_dev->regmap, reg, val);
+		if (ret)
+			goto sch_config_fail;
+	}
+
+	count = ARRAY_SIZE(ipq9574_ppe_sch_qm_config);
+	qm_cfg = ipq9574_ppe_sch_qm_config;
+
+	/* Configure the depth of the QM scheduler entries. */
+	val = FIELD_PREP(PPE_PSCH_SCH_DEPTH_CFG_SCH_DEPTH, count);
+	ret = regmap_write(ppe_dev->regmap, PPE_PSCH_SCH_DEPTH_CFG_ADDR, val);
+	if (ret)
+		goto sch_config_fail;
+
+	/* Configure each QM scheduler entry with an enqueue port and a
+	 * dequeue port; the second port takes effect when the specified
+	 * dequeue port is in the inactive state.
+	 */
+	for (i = 0; i < count; i++) {
+		val = FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_ENS_PORT_BITMAP,
+				 qm_cfg[i].ensch_port_bmp);
+		val |= FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_ENS_PORT,
+				  qm_cfg[i].ensch_port);
+		val |= FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_DES_PORT,
+				  qm_cfg[i].desch_port);
+		val |= FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_DES_SECOND_PORT_EN,
+				  qm_cfg[i].desch_backup_port_valid);
+		val |= FIELD_PREP(PPE_PSCH_SCH_CFG_TBL_DES_SECOND_PORT,
+				  qm_cfg[i].desch_backup_port);
+
+		reg = PPE_PSCH_SCH_CFG_TBL_ADDR + i * PPE_PSCH_SCH_CFG_TBL_INC;
+		ret = regmap_write(ppe_dev->regmap, reg, val);
+		if (ret)
+			goto sch_config_fail;
+	}
+
+	count = ARRAY_SIZE(ppe_port_sch_config);
+	port_cfg = ppe_port_sch_config;
+
+	/* Configure the scheduler per PPE queue or flow. */
+	for (i = 0; i < count; i++) {
+		if (port_cfg[i].port >= ppe_dev->num_ports)
+			break;
+
+		ret = ppe_node_scheduler_config(ppe_dev, port_cfg[i]);
+		if (ret)
+			goto sch_config_fail;
+	}
+
+	return 0;
+
+sch_config_fail:
+	dev_err(ppe_dev->dev, "PPE scheduler arbitration config error %d\n", ret);
+	return ret;
+}
+
+/* Configure the PPE queue destination of each PPE port. */
+static int ppe_queue_dest_init(struct ppe_device *ppe_dev)
+{
+	int ret, port_id, index, q_base, q_offset, res_start, res_end, pri_max;
+	struct ppe_queue_ucast_dest queue_dst;
+
+	for (port_id = 0; port_id < ppe_dev->num_ports; port_id++) {
+		memset(&queue_dst, 0, sizeof(queue_dst));
+
+		ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_UCAST,
+					    &res_start, &res_end);
+		if (ret)
+			return ret;
+
+		q_base = res_start;
+		queue_dst.dest_port = port_id;
+
+		/* Configure the queue base ID and profile ID, which are the
+		 * same as the physical port ID.
+		 */
+		ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst,
+					       q_base, port_id);
+		if (ret)
+			return ret;
+
+		/* Queue priority range supported by each PPE port. */
+		ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_L0_NODE,
+					    &res_start, &res_end);
+		if (ret)
+			return ret;
+
+		pri_max = res_end - res_start;
+
+		/* Redirect ARP reply packets with the maximum priority on the
+		 * CPU port, which keeps the ARP reply directed to the CPU
+		 * (CPU code is 101) with the highest priority queue of EDMA.
+		 */
+		if (port_id == 0) {
+			memset(&queue_dst, 0, sizeof(queue_dst));
+
+			queue_dst.cpu_code_en = true;
+			queue_dst.cpu_code = 101;
+			ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst,
+						       q_base + pri_max,
+						       0);
+			if (ret)
+				return ret;
+		}
+
+		/* Initialize the queue offset of internal priority. */
+		for (index = 0; index < PPE_QUEUE_INTER_PRI_NUM; index++) {
+			q_offset = index > pri_max ? pri_max : index;
+
+			ret = ppe_queue_ucast_offset_pri_set(ppe_dev, port_id,
+							     index, q_offset);
+			if (ret)
+				return ret;
+		}
+
+		/* Initialize the queue offset of the RSS hash to 0 to avoid
+		 * the random hardware value that would lead to an unexpected
+		 * destination queue being generated.
+		 */
+		for (index = 0; index < PPE_QUEUE_HASH_NUM; index++) {
+			ret = ppe_queue_ucast_offset_hash_set(ppe_dev, port_id,
+							      index, 0);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+/* Initialize the service code 1 used by the CPU port. */
+static int ppe_servcode_init(struct ppe_device *ppe_dev)
+{
+	struct ppe_sc_cfg sc_cfg = {};
+
+	bitmap_zero(sc_cfg.bitmaps.counter, PPE_SC_BYPASS_COUNTER_SIZE);
+	bitmap_zero(sc_cfg.bitmaps.tunnel, PPE_SC_BYPASS_TUNNEL_SIZE);
+
+	bitmap_fill(sc_cfg.bitmaps.ingress, PPE_SC_BYPASS_INGRESS_SIZE);
+	clear_bit(PPE_SC_BYPASS_INGRESS_FAKE_MAC_HEADER, sc_cfg.bitmaps.ingress);
+	clear_bit(PPE_SC_BYPASS_INGRESS_SERVICE_CODE, sc_cfg.bitmaps.ingress);
+	clear_bit(PPE_SC_BYPASS_INGRESS_FAKE_L2_PROTO, sc_cfg.bitmaps.ingress);
+
+	bitmap_fill(sc_cfg.bitmaps.egress, PPE_SC_BYPASS_EGRESS_SIZE);
+	clear_bit(PPE_SC_BYPASS_EGRESS_ACL_POST_ROUTING_CHECK, sc_cfg.bitmaps.egress);
+
+	return ppe_sc_config_set(ppe_dev, PPE_EDMA_SC_BYPASS_ID, sc_cfg);
+}
+
+/* Initialize PPE port configurations. */
+static int ppe_port_config_init(struct ppe_device *ppe_dev)
+{
+	u32 reg, val, mru_mtu_val[3];
+	int i, ret;
+
+	/* MTU and MRU settings are not required for CPU port 0. */
+	for (i = 1; i < ppe_dev->num_ports; i++) {
+		/* Enable the Ethernet port counter. */
+		ret = ppe_counter_enable_set(ppe_dev, i);
+		if (ret)
+			return ret;
+
+		reg = PPE_MRU_MTU_CTRL_TBL_ADDR + PPE_MRU_MTU_CTRL_TBL_INC * i;
+		ret = regmap_bulk_read(ppe_dev->regmap, reg,
+				       mru_mtu_val, ARRAY_SIZE(mru_mtu_val));
+		if (ret)
+			return ret;
+
+		/* Drop the packet when the packet size is more than the MTU
+		 * and redirect the packet to the CPU port when the received
+		 * packet size is more than the MRU of the physical interface.
+		 */
+		PPE_MRU_MTU_CTRL_SET_MRU_CMD(mru_mtu_val, PPE_ACTION_REDIRECT_TO_CPU);
+		PPE_MRU_MTU_CTRL_SET_MTU_CMD(mru_mtu_val, PPE_ACTION_DROP);
+		ret = regmap_bulk_write(ppe_dev->regmap, reg,
+					mru_mtu_val, ARRAY_SIZE(mru_mtu_val));
+		if (ret)
+			return ret;
+
+		reg = PPE_MC_MTU_CTRL_TBL_ADDR + PPE_MC_MTU_CTRL_TBL_INC * i;
+		val = FIELD_PREP(PPE_MC_MTU_CTRL_TBL_MTU_CMD, PPE_ACTION_DROP);
+		ret = regmap_update_bits(ppe_dev->regmap, reg,
+					 PPE_MC_MTU_CTRL_TBL_MTU_CMD,
+					 val);
+		if (ret)
+			return ret;
+	}
+
+	/* Enable CPU port counters. */
+	return ppe_counter_enable_set(ppe_dev, 0);
+}
+
+/* Initialize the PPE RSS configuration for IPv4 and IPv6 packet receive.
+ * RSS settings are to calculate the random RSS hash value generated during
+ * packet receive. This hash is then used to generate the queue offset used
+ * to determine the queue used to transmit the packet.
+ */
+static int ppe_rss_hash_init(struct ppe_device *ppe_dev)
+{
+	u16 fins[PPE_RSS_HASH_TUPLES] = { 0x205, 0x264, 0x227, 0x245, 0x201 };
+	u8 ips[PPE_RSS_HASH_IP_LENGTH] = { 0x13, 0xb, 0x13, 0xb };
+	struct ppe_rss_hash_cfg hash_cfg;
+	int i, ret;
+
+	hash_cfg.hash_seed = get_random_u32();
+	hash_cfg.hash_mask = 0xfff;
+
+	/* Use the 5-tuple as the RSS hash key for the first fragment of TCP,
+	 * UDP and UDP-Lite packets.
+	 */
+	hash_cfg.hash_fragment_mode = false;
+
+	/* The final common seed configs used to calculate the RSS hash value,
+	 * which is available for both IPv4 and IPv6 packets.
+	 */
+	for (i = 0; i < ARRAY_SIZE(fins); i++) {
+		hash_cfg.hash_fin_inner[i] = fins[i] & 0x1f;
+		hash_cfg.hash_fin_outer[i] = fins[i] >> 5;
+	}
+
+	/* RSS seeds for the IP protocol, L4 destination & source port and
+	 * destination & source IP used to calculate the RSS hash value.
+	 */
+	hash_cfg.hash_protocol_mix = 0x13;
+	hash_cfg.hash_dport_mix = 0xb;
+	hash_cfg.hash_sport_mix = 0x13;
+	hash_cfg.hash_dip_mix[0] = 0xb;
+	hash_cfg.hash_sip_mix[0] = 0x13;
+
+	/* Configure the RSS seed configs for IPv4 packets. */
+	ret = ppe_rss_hash_config_set(ppe_dev, PPE_RSS_HASH_MODE_IPV4, hash_cfg);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < ARRAY_SIZE(ips); i++) {
+		hash_cfg.hash_sip_mix[i] = ips[i];
+		hash_cfg.hash_dip_mix[i] = ips[i];
+	}
+
+	/* Configure the RSS seed configs for IPv6 packets. */
+	return ppe_rss_hash_config_set(ppe_dev, PPE_RSS_HASH_MODE_IPV6, hash_cfg);
+}
+
+/* Initialize the mapping between the PPE queues assigned to CPU port 0
+ * and Ethernet DMA ring 0.
+ */
+static int ppe_queues_to_ring_init(struct ppe_device *ppe_dev)
+{
+	u32 queue_bmap[PPE_RING_TO_QUEUE_BITMAP_WORD_CNT] = {};
+	int ret, queue_id, queue_max;
+
+	ret = ppe_port_resource_get(ppe_dev, 0, PPE_RES_UCAST,
+				    &queue_id, &queue_max);
+	if (ret)
+		return ret;
+
+	for (; queue_id <= queue_max; queue_id++)
+		queue_bmap[queue_id / 32] |= BIT_MASK(queue_id % 32);
+
+	return ppe_ring_queue_map_set(ppe_dev, 0, queue_bmap);
+}
+
+/* Initialize PPE bridge settings to only enable L2 frame receive and
+ * transmit between CPU port and PPE Ethernet ports.
1925 + */ 1926 + static int ppe_bridge_init(struct ppe_device *ppe_dev) 1927 + { 1928 + u32 reg, mask, port_cfg[4], vsi_cfg[2]; 1929 + int ret, i; 1930 + 1931 + /* Configure the following settings for CPU port0: 1932 + * a.) Enable Bridge TX 1933 + * b.) Disable FDB new address learning 1934 + * c.) Disable station move address learning 1935 + */ 1936 + mask = PPE_PORT_BRIDGE_TXMAC_EN; 1937 + mask |= PPE_PORT_BRIDGE_NEW_LRN_EN; 1938 + mask |= PPE_PORT_BRIDGE_STA_MOVE_LRN_EN; 1939 + ret = regmap_update_bits(ppe_dev->regmap, 1940 + PPE_PORT_BRIDGE_CTRL_ADDR, 1941 + mask, 1942 + PPE_PORT_BRIDGE_TXMAC_EN); 1943 + if (ret) 1944 + return ret; 1945 + 1946 + for (i = 1; i < ppe_dev->num_ports; i++) { 1947 + /* Enable invalid VSI forwarding for all the physical ports 1948 + * to CPU port0, in case no VSI is assigned to the physical 1949 + * port. 1950 + */ 1951 + reg = PPE_L2_VP_PORT_TBL_ADDR + PPE_L2_VP_PORT_TBL_INC * i; 1952 + ret = regmap_bulk_read(ppe_dev->regmap, reg, 1953 + port_cfg, ARRAY_SIZE(port_cfg)); 1954 + 1955 + if (ret) 1956 + return ret; 1957 + 1958 + PPE_L2_PORT_SET_INVALID_VSI_FWD_EN(port_cfg, true); 1959 + PPE_L2_PORT_SET_DST_INFO(port_cfg, 0); 1960 + 1961 + ret = regmap_bulk_write(ppe_dev->regmap, reg, 1962 + port_cfg, ARRAY_SIZE(port_cfg)); 1963 + if (ret) 1964 + return ret; 1965 + } 1966 + 1967 + for (i = 0; i < PPE_VSI_TBL_ENTRIES; i++) { 1968 + /* Set the VSI forward membership to include only CPU port0. 1969 + * FDB learning and forwarding take place only after switchdev 1970 + * is supported later to create the VSI and join the physical 1971 + * ports to the VSI port member. 
1972 + */ 1973 + reg = PPE_VSI_TBL_ADDR + PPE_VSI_TBL_INC * i; 1974 + ret = regmap_bulk_read(ppe_dev->regmap, reg, 1975 + vsi_cfg, ARRAY_SIZE(vsi_cfg)); 1976 + if (ret) 1977 + return ret; 1978 + 1979 + PPE_VSI_SET_MEMBER_PORT_BITMAP(vsi_cfg, BIT(0)); 1980 + PPE_VSI_SET_UUC_BITMAP(vsi_cfg, BIT(0)); 1981 + PPE_VSI_SET_UMC_BITMAP(vsi_cfg, BIT(0)); 1982 + PPE_VSI_SET_BC_BITMAP(vsi_cfg, BIT(0)); 1983 + PPE_VSI_SET_NEW_ADDR_LRN_EN(vsi_cfg, true); 1984 + PPE_VSI_SET_NEW_ADDR_FWD_CMD(vsi_cfg, PPE_ACTION_FORWARD); 1985 + PPE_VSI_SET_STATION_MOVE_LRN_EN(vsi_cfg, true); 1986 + PPE_VSI_SET_STATION_MOVE_FWD_CMD(vsi_cfg, PPE_ACTION_FORWARD); 1987 + 1988 + ret = regmap_bulk_write(ppe_dev->regmap, reg, 1989 + vsi_cfg, ARRAY_SIZE(vsi_cfg)); 1990 + if (ret) 1991 + return ret; 1992 + } 1993 + 1994 + return 0; 1995 + } 1996 + 1997 + int ppe_hw_config(struct ppe_device *ppe_dev) 1998 + { 1999 + int ret; 2000 + 2001 + ret = ppe_config_bm(ppe_dev); 2002 + if (ret) 2003 + return ret; 2004 + 2005 + ret = ppe_config_qm(ppe_dev); 2006 + if (ret) 2007 + return ret; 2008 + 2009 + ret = ppe_config_scheduler(ppe_dev); 2010 + if (ret) 2011 + return ret; 2012 + 2013 + ret = ppe_queue_dest_init(ppe_dev); 2014 + if (ret) 2015 + return ret; 2016 + 2017 + ret = ppe_servcode_init(ppe_dev); 2018 + if (ret) 2019 + return ret; 2020 + 2021 + ret = ppe_port_config_init(ppe_dev); 2022 + if (ret) 2023 + return ret; 2024 + 2025 + ret = ppe_rss_hash_init(ppe_dev); 2026 + if (ret) 2027 + return ret; 2028 + 2029 + ret = ppe_queues_to_ring_init(ppe_dev); 2030 + if (ret) 2031 + return ret; 2032 + 2033 + return ppe_bridge_init(ppe_dev); 2034 + }
+317
drivers/net/ethernet/qualcomm/ppe/ppe_config.h
/* SPDX-License-Identifier: GPL-2.0-only
 *
 * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
 */

#ifndef __PPE_CONFIG_H__
#define __PPE_CONFIG_H__

#include <linux/types.h>

#include "ppe.h"

/* There are different table index ranges for configuring the queue base
 * ID of the destination port, CPU code and service code.
 */
#define PPE_QUEUE_BASE_DEST_PORT	0
#define PPE_QUEUE_BASE_CPU_CODE		1024
#define PPE_QUEUE_BASE_SERVICE_CODE	2048

#define PPE_QUEUE_INTER_PRI_NUM		16
#define PPE_QUEUE_HASH_NUM		256

/* The service code used by the EDMA port to transmit packets to the PPE. */
#define PPE_EDMA_SC_BYPASS_ID		1

/* The PPE RSS hash is configured separately for IPv4 and IPv6 packets. */
#define PPE_RSS_HASH_MODE_IPV4		BIT(0)
#define PPE_RSS_HASH_MODE_IPV6		BIT(1)
#define PPE_RSS_HASH_IP_LENGTH		4
#define PPE_RSS_HASH_TUPLES		5

/* PPE supports 300 queues, each bit represents one queue. */
#define PPE_RING_TO_QUEUE_BITMAP_WORD_CNT	10

/**
 * enum ppe_scheduler_frame_mode - PPE scheduler frame mode.
 * @PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC: The scheduled frame includes IPG,
 * preamble, Ethernet packet and CRC.
 * @PPE_SCH_WITH_FRAME_CRC: The scheduled frame includes Ethernet frame and CRC
 * excluding IPG and preamble.
 * @PPE_SCH_WITH_L3_PAYLOAD: The scheduled frame includes layer 3 packet data.
 */
enum ppe_scheduler_frame_mode {
	PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC = 0,
	PPE_SCH_WITH_FRAME_CRC = 1,
	PPE_SCH_WITH_L3_PAYLOAD = 2,
};

/**
 * struct ppe_scheduler_cfg - PPE scheduler configuration.
 * @flow_id: PPE flow ID.
 * @pri: Scheduler priority.
 * @drr_node_id: Node ID for scheduled traffic.
 * @drr_node_wt: Weight for scheduled traffic.
 * @unit_is_packet: Packet based or byte based unit for scheduled traffic.
 * @frame_mode: Packet mode to be scheduled.
 *
 * PPE scheduler supports commit rate and exceed rate configurations.
 */
struct ppe_scheduler_cfg {
	int flow_id;
	int pri;
	int drr_node_id;
	int drr_node_wt;
	bool unit_is_packet;
	enum ppe_scheduler_frame_mode frame_mode;
};

/**
 * enum ppe_resource_type - PPE resource type.
 * @PPE_RES_UCAST: Unicast queue resource.
 * @PPE_RES_MCAST: Multicast queue resource.
 * @PPE_RES_L0_NODE: Level 0 for queue based node resource.
 * @PPE_RES_L1_NODE: Level 1 for flow based node resource.
 * @PPE_RES_FLOW_ID: Flow based node resource.
 */
enum ppe_resource_type {
	PPE_RES_UCAST,
	PPE_RES_MCAST,
	PPE_RES_L0_NODE,
	PPE_RES_L1_NODE,
	PPE_RES_FLOW_ID,
};

/**
 * struct ppe_queue_ucast_dest - PPE unicast queue destination.
 * @src_profile: Source profile.
 * @service_code_en: Enable service code to map the queue base ID.
 * @service_code: Service code.
 * @cpu_code_en: Enable CPU code to map the queue base ID.
 * @cpu_code: CPU code.
 * @dest_port: Destination port.
 *
 * The PPE egress queue ID is decided by the service code if enabled,
 * otherwise by the CPU code if enabled, or by the destination port if
 * both service code and CPU code are disabled.
 */
struct ppe_queue_ucast_dest {
	int src_profile;
	bool service_code_en;
	int service_code;
	bool cpu_code_en;
	int cpu_code;
	int dest_port;
};

/* Hardware bitmaps for bypassing features of the ingress packet. */
enum ppe_sc_ingress_type {
	PPE_SC_BYPASS_INGRESS_VLAN_TAG_FMT_CHECK = 0,
	PPE_SC_BYPASS_INGRESS_VLAN_MEMBER_CHECK = 1,
	PPE_SC_BYPASS_INGRESS_VLAN_TRANSLATE = 2,
	PPE_SC_BYPASS_INGRESS_MY_MAC_CHECK = 3,
	PPE_SC_BYPASS_INGRESS_DIP_LOOKUP = 4,
	PPE_SC_BYPASS_INGRESS_FLOW_LOOKUP = 5,
	PPE_SC_BYPASS_INGRESS_FLOW_ACTION = 6,
	PPE_SC_BYPASS_INGRESS_ACL = 7,
	PPE_SC_BYPASS_INGRESS_FAKE_MAC_HEADER = 8,
	PPE_SC_BYPASS_INGRESS_SERVICE_CODE = 9,
	PPE_SC_BYPASS_INGRESS_WRONG_PKT_FMT_L2 = 10,
	PPE_SC_BYPASS_INGRESS_WRONG_PKT_FMT_L3_IPV4 = 11,
	PPE_SC_BYPASS_INGRESS_WRONG_PKT_FMT_L3_IPV6 = 12,
	PPE_SC_BYPASS_INGRESS_WRONG_PKT_FMT_L4 = 13,
	PPE_SC_BYPASS_INGRESS_FLOW_SERVICE_CODE = 14,
	PPE_SC_BYPASS_INGRESS_ACL_SERVICE_CODE = 15,
	PPE_SC_BYPASS_INGRESS_FAKE_L2_PROTO = 16,
	PPE_SC_BYPASS_INGRESS_PPPOE_TERMINATION = 17,
	PPE_SC_BYPASS_INGRESS_DEFAULT_VLAN = 18,
	PPE_SC_BYPASS_INGRESS_DEFAULT_PCP = 19,
	PPE_SC_BYPASS_INGRESS_VSI_ASSIGN = 20,
	/* Values 21-23 are not specified by hardware. */
	PPE_SC_BYPASS_INGRESS_VLAN_ASSIGN_FAIL = 24,
	PPE_SC_BYPASS_INGRESS_SOURCE_GUARD = 25,
	PPE_SC_BYPASS_INGRESS_MRU_MTU_CHECK = 26,
	PPE_SC_BYPASS_INGRESS_FLOW_SRC_CHECK = 27,
	PPE_SC_BYPASS_INGRESS_FLOW_QOS = 28,
	/* This must be last as it determines the size of the BITMAP. */
	PPE_SC_BYPASS_INGRESS_SIZE,
};

/* Hardware bitmaps for bypassing features of the egress packet. */
enum ppe_sc_egress_type {
	PPE_SC_BYPASS_EGRESS_VLAN_MEMBER_CHECK = 0,
	PPE_SC_BYPASS_EGRESS_VLAN_TRANSLATE = 1,
	PPE_SC_BYPASS_EGRESS_VLAN_TAG_FMT_CTRL = 2,
	PPE_SC_BYPASS_EGRESS_FDB_LEARN = 3,
	PPE_SC_BYPASS_EGRESS_FDB_REFRESH = 4,
	PPE_SC_BYPASS_EGRESS_L2_SOURCE_SECURITY = 5,
	PPE_SC_BYPASS_EGRESS_MANAGEMENT_FWD = 6,
	PPE_SC_BYPASS_EGRESS_BRIDGING_FWD = 7,
	PPE_SC_BYPASS_EGRESS_IN_STP_FLTR = 8,
	PPE_SC_BYPASS_EGRESS_EG_STP_FLTR = 9,
	PPE_SC_BYPASS_EGRESS_SOURCE_FLTR = 10,
	PPE_SC_BYPASS_EGRESS_POLICER = 11,
	PPE_SC_BYPASS_EGRESS_L2_PKT_EDIT = 12,
	PPE_SC_BYPASS_EGRESS_L3_PKT_EDIT = 13,
	PPE_SC_BYPASS_EGRESS_ACL_POST_ROUTING_CHECK = 14,
	PPE_SC_BYPASS_EGRESS_PORT_ISOLATION = 15,
	PPE_SC_BYPASS_EGRESS_PRE_ACL_QOS = 16,
	PPE_SC_BYPASS_EGRESS_POST_ACL_QOS = 17,
	PPE_SC_BYPASS_EGRESS_DSCP_QOS = 18,
	PPE_SC_BYPASS_EGRESS_PCP_QOS = 19,
	PPE_SC_BYPASS_EGRESS_PREHEADER_QOS = 20,
	PPE_SC_BYPASS_EGRESS_FAKE_MAC_DROP = 21,
	PPE_SC_BYPASS_EGRESS_TUNL_CONTEXT = 22,
	PPE_SC_BYPASS_EGRESS_FLOW_POLICER = 23,
	/* This must be last as it determines the size of the BITMAP. */
	PPE_SC_BYPASS_EGRESS_SIZE,
};

/* Hardware bitmaps for bypassing the packet counters. */
enum ppe_sc_counter_type {
	PPE_SC_BYPASS_COUNTER_RX_VLAN = 0,
	PPE_SC_BYPASS_COUNTER_RX = 1,
	PPE_SC_BYPASS_COUNTER_TX_VLAN = 2,
	PPE_SC_BYPASS_COUNTER_TX = 3,
	/* This must be last as it determines the size of the BITMAP. */
	PPE_SC_BYPASS_COUNTER_SIZE,
};

/* Hardware bitmaps for bypassing features of the tunnel packet. */
enum ppe_sc_tunnel_type {
	PPE_SC_BYPASS_TUNNEL_SERVICE_CODE = 0,
	PPE_SC_BYPASS_TUNNEL_TUNNEL_HANDLE = 1,
	PPE_SC_BYPASS_TUNNEL_L3_IF_CHECK = 2,
	PPE_SC_BYPASS_TUNNEL_VLAN_CHECK = 3,
	PPE_SC_BYPASS_TUNNEL_DMAC_CHECK = 4,
	PPE_SC_BYPASS_TUNNEL_UDP_CSUM_0_CHECK = 5,
	PPE_SC_BYPASS_TUNNEL_TBL_DE_ACCE_CHECK = 6,
	PPE_SC_BYPASS_TUNNEL_PPPOE_MC_TERM_CHECK = 7,
	PPE_SC_BYPASS_TUNNEL_TTL_EXCEED_CHECK = 8,
	PPE_SC_BYPASS_TUNNEL_MAP_SRC_CHECK = 9,
	PPE_SC_BYPASS_TUNNEL_MAP_DST_CHECK = 10,
	PPE_SC_BYPASS_TUNNEL_LPM_DST_LOOKUP = 11,
	PPE_SC_BYPASS_TUNNEL_LPM_LOOKUP = 12,
	PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_L2 = 13,
	PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_L3_IPV4 = 14,
	PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_L3_IPV6 = 15,
	PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_L4 = 16,
	PPE_SC_BYPASS_TUNNEL_WRONG_PKT_FMT_TUNNEL = 17,
	/* Values 18-19 are not specified by hardware. */
	PPE_SC_BYPASS_TUNNEL_PRE_IPO = 20,
	/* This must be last as it determines the size of the BITMAP. */
	PPE_SC_BYPASS_TUNNEL_SIZE,
};

/**
 * struct ppe_sc_bypass - PPE service bypass bitmaps
 * @ingress: Bitmap of features that can be bypassed on the ingress packet.
 * @egress: Bitmap of features that can be bypassed on the egress packet.
 * @counter: Bitmap of features that can be bypassed on the counter type.
 * @tunnel: Bitmap of features that can be bypassed on the tunnel packet.
 */
struct ppe_sc_bypass {
	DECLARE_BITMAP(ingress, PPE_SC_BYPASS_INGRESS_SIZE);
	DECLARE_BITMAP(egress, PPE_SC_BYPASS_EGRESS_SIZE);
	DECLARE_BITMAP(counter, PPE_SC_BYPASS_COUNTER_SIZE);
	DECLARE_BITMAP(tunnel, PPE_SC_BYPASS_TUNNEL_SIZE);
};

/**
 * struct ppe_sc_cfg - PPE service code configuration.
 * @dest_port_valid: Generate destination port or not.
 * @dest_port: Destination port ID.
 * @bitmaps: Bitmap of bypass features.
 * @is_src: Destination port acts as source port, packet sent to CPU.
 * @next_service_code: New service code generated.
 * @eip_field_update_bitmap: Fields updated as actions taken for EIP.
 * @eip_hw_service: Selected hardware functions for EIP.
 * @eip_offset_sel: Packet offset selection, using the packet's layer 4
 * offset or the packet's layer 3 offset for EIP.
 *
 * The service code is generated while the packet passes through the PPE.
 */
struct ppe_sc_cfg {
	bool dest_port_valid;
	int dest_port;
	struct ppe_sc_bypass bitmaps;
	bool is_src;
	int next_service_code;
	int eip_field_update_bitmap;
	int eip_hw_service;
	int eip_offset_sel;
};

/**
 * enum ppe_action_type - PPE action of the received packet.
 * @PPE_ACTION_FORWARD: Packet forwarded per L2/L3 process.
 * @PPE_ACTION_DROP: Packet dropped by PPE.
 * @PPE_ACTION_COPY_TO_CPU: Packet copied to CPU port per multicast queue.
 * @PPE_ACTION_REDIRECT_TO_CPU: Packet redirected to CPU port per unicast queue.
 */
enum ppe_action_type {
	PPE_ACTION_FORWARD = 0,
	PPE_ACTION_DROP = 1,
	PPE_ACTION_COPY_TO_CPU = 2,
	PPE_ACTION_REDIRECT_TO_CPU = 3,
};

/**
 * struct ppe_rss_hash_cfg - PPE RSS hash configuration.
 * @hash_mask: Mask of the generated hash value.
 * @hash_fragment_mode: Hash generation mode for the first fragment of TCP,
 * UDP and UDP-Lite packets, to use either 3 tuple or 5 tuple for RSS hash
 * key computation.
 * @hash_seed: Seed to generate RSS hash.
 * @hash_sip_mix: Source IP selection.
 * @hash_dip_mix: Destination IP selection.
 * @hash_protocol_mix: Protocol selection.
 * @hash_sport_mix: Source L4 port selection.
 * @hash_dport_mix: Destination L4 port selection.
 * @hash_fin_inner: RSS hash value first selection.
 * @hash_fin_outer: RSS hash value second selection.
 *
 * The PPE RSS hash value is generated for the packet based on the RSS
 * hash configured.
 */
struct ppe_rss_hash_cfg {
	u32 hash_mask;
	bool hash_fragment_mode;
	u32 hash_seed;
	u8 hash_sip_mix[PPE_RSS_HASH_IP_LENGTH];
	u8 hash_dip_mix[PPE_RSS_HASH_IP_LENGTH];
	u8 hash_protocol_mix;
	u8 hash_sport_mix;
	u8 hash_dport_mix;
	u8 hash_fin_inner[PPE_RSS_HASH_TUPLES];
	u8 hash_fin_outer[PPE_RSS_HASH_TUPLES];
};

int ppe_hw_config(struct ppe_device *ppe_dev);
int ppe_queue_scheduler_set(struct ppe_device *ppe_dev,
			    int node_id, bool flow_level, int port,
			    struct ppe_scheduler_cfg scheduler_cfg);
int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev,
			     struct ppe_queue_ucast_dest queue_dst,
			     int queue_base,
			     int profile_id);
int ppe_queue_ucast_offset_pri_set(struct ppe_device *ppe_dev,
				   int profile_id,
				   int priority,
				   int queue_offset);
int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev,
				    int profile_id,
				    int rss_hash,
				    int queue_offset);
int ppe_port_resource_get(struct ppe_device *ppe_dev, int port,
			  enum ppe_resource_type type,
			  int *res_start, int *res_end);
int ppe_sc_config_set(struct ppe_device *ppe_dev, int sc,
		      struct ppe_sc_cfg cfg);
int ppe_counter_enable_set(struct ppe_device *ppe_dev, int port);
int ppe_rss_hash_config_set(struct ppe_device *ppe_dev, int mode,
			    struct ppe_rss_hash_cfg hash_cfg);
int ppe_ring_queue_map_set(struct ppe_device *ppe_dev,
			   int ring_id,
			   u32 *queue_map);
#endif
+847
drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.c
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
 */

/* PPE debugfs routines for display of PPE counters useful for debug. */

#include <linux/bitfield.h>
#include <linux/debugfs.h>
#include <linux/dev_printk.h>
#include <linux/device.h>
#include <linux/regmap.h>
#include <linux/seq_file.h>

#include "ppe.h"
#include "ppe_config.h"
#include "ppe_debugfs.h"
#include "ppe_regs.h"

#define PPE_PKT_CNT_TBL_SIZE		3
#define PPE_DROP_PKT_CNT_TBL_SIZE	5

#define PPE_W0_PKT_CNT			GENMASK(31, 0)
#define PPE_W2_DROP_PKT_CNT_LOW		GENMASK(31, 8)
#define PPE_W3_DROP_PKT_CNT_HIGH	GENMASK(7, 0)

#define PPE_GET_PKT_CNT(tbl_cnt)		\
	FIELD_GET(PPE_W0_PKT_CNT, *(tbl_cnt))
#define PPE_GET_DROP_PKT_CNT_LOW(tbl_cnt)	\
	FIELD_GET(PPE_W2_DROP_PKT_CNT_LOW, *((tbl_cnt) + 0x2))
#define PPE_GET_DROP_PKT_CNT_HIGH(tbl_cnt)	\
	FIELD_GET(PPE_W3_DROP_PKT_CNT_HIGH, *((tbl_cnt) + 0x3))

/**
 * enum ppe_cnt_size_type - PPE counter size type
 * @PPE_PKT_CNT_SIZE_1WORD: Counter size with single register
 * @PPE_PKT_CNT_SIZE_3WORD: Counter size with table of 3 words
 * @PPE_PKT_CNT_SIZE_5WORD: Counter size with table of 5 words
 *
 * The PPE uses registers of different sizes to record the packet
 * counters: a single register, or a register table of 3 or 5 words.
 * A counter with a 5-word table also records the drop counter. There
 * are also some other counter types occupying sizes less than 32 bits,
 * which are not covered by this enumeration type.
 */
enum ppe_cnt_size_type {
	PPE_PKT_CNT_SIZE_1WORD,
	PPE_PKT_CNT_SIZE_3WORD,
	PPE_PKT_CNT_SIZE_5WORD,
};

/**
 * enum ppe_cnt_type - PPE counter type.
 * @PPE_CNT_BM: Packet counter processed by BM.
 * @PPE_CNT_PARSE: Packet counter parsed on ingress.
 * @PPE_CNT_PORT_RX: Packet counter on the ingress port.
 * @PPE_CNT_VLAN_RX: VLAN packet counter received.
 * @PPE_CNT_L2_FWD: Packet counter processed by L2 forwarding.
 * @PPE_CNT_CPU_CODE: Packet counter marked with various CPU codes.
 * @PPE_CNT_VLAN_TX: VLAN packet counter transmitted.
 * @PPE_CNT_PORT_TX: Packet counter on the egress port.
 * @PPE_CNT_QM: Packet counter processed by QM.
 */
enum ppe_cnt_type {
	PPE_CNT_BM,
	PPE_CNT_PARSE,
	PPE_CNT_PORT_RX,
	PPE_CNT_VLAN_RX,
	PPE_CNT_L2_FWD,
	PPE_CNT_CPU_CODE,
	PPE_CNT_VLAN_TX,
	PPE_CNT_PORT_TX,
	PPE_CNT_QM,
};

/**
 * struct ppe_debugfs_entry - PPE debugfs entry.
 * @name: Debugfs file name.
 * @counter_type: PPE packet counter type.
 * @ppe: PPE device.
 *
 * The PPE debugfs entry is used to create the debugfs file and passed
 * to debugfs_create_file() as private data.
 */
struct ppe_debugfs_entry {
	const char *name;
	enum ppe_cnt_type counter_type;
	struct ppe_device *ppe;
};

static const struct ppe_debugfs_entry debugfs_files[] = {
	{
		.name		= "bm",
		.counter_type	= PPE_CNT_BM,
	},
	{
		.name		= "parse",
		.counter_type	= PPE_CNT_PARSE,
	},
	{
		.name		= "port_rx",
		.counter_type	= PPE_CNT_PORT_RX,
	},
	{
		.name		= "vlan_rx",
		.counter_type	= PPE_CNT_VLAN_RX,
	},
	{
		.name		= "l2_forward",
		.counter_type	= PPE_CNT_L2_FWD,
	},
	{
		.name		= "cpu_code",
		.counter_type	= PPE_CNT_CPU_CODE,
	},
	{
		.name		= "vlan_tx",
		.counter_type	= PPE_CNT_VLAN_TX,
	},
	{
		.name		= "port_tx",
		.counter_type	= PPE_CNT_PORT_TX,
	},
	{
		.name		= "qm",
		.counter_type	= PPE_CNT_QM,
	},
};

static int ppe_pkt_cnt_get(struct ppe_device *ppe_dev, u32 reg,
			   enum ppe_cnt_size_type cnt_type,
			   u32 *cnt, u32 *drop_cnt)
{
	u32 drop_pkt_cnt[PPE_DROP_PKT_CNT_TBL_SIZE];
	u32 pkt_cnt[PPE_PKT_CNT_TBL_SIZE];
	u32 value;
	int ret;

	switch (cnt_type) {
	case PPE_PKT_CNT_SIZE_1WORD:
		ret = regmap_read(ppe_dev->regmap, reg, &value);
		if (ret)
			return ret;

		*cnt = value;
		break;
	case PPE_PKT_CNT_SIZE_3WORD:
		ret = regmap_bulk_read(ppe_dev->regmap, reg,
				       pkt_cnt, ARRAY_SIZE(pkt_cnt));
		if (ret)
			return ret;

		*cnt = PPE_GET_PKT_CNT(pkt_cnt);
		break;
	case PPE_PKT_CNT_SIZE_5WORD:
		ret = regmap_bulk_read(ppe_dev->regmap, reg,
				       drop_pkt_cnt, ARRAY_SIZE(drop_pkt_cnt));
		if (ret)
			return ret;

		*cnt = PPE_GET_PKT_CNT(drop_pkt_cnt);

		/* Drop counter with low 24 bits. */
		value = PPE_GET_DROP_PKT_CNT_LOW(drop_pkt_cnt);
		*drop_cnt = FIELD_PREP(GENMASK(23, 0), value);

		/* Drop counter with high 8 bits. */
		value = PPE_GET_DROP_PKT_CNT_HIGH(drop_pkt_cnt);
		*drop_cnt |= FIELD_PREP(GENMASK(31, 24), value);
		break;
	}

	return 0;
}

static void ppe_tbl_pkt_cnt_clear(struct ppe_device *ppe_dev, u32 reg,
				  enum ppe_cnt_size_type cnt_type)
{
	u32 drop_pkt_cnt[PPE_DROP_PKT_CNT_TBL_SIZE] = {};
	u32 pkt_cnt[PPE_PKT_CNT_TBL_SIZE] = {};

	switch (cnt_type) {
	case PPE_PKT_CNT_SIZE_1WORD:
		regmap_write(ppe_dev->regmap, reg, 0);
		break;
	case PPE_PKT_CNT_SIZE_3WORD:
		regmap_bulk_write(ppe_dev->regmap, reg,
				  pkt_cnt, ARRAY_SIZE(pkt_cnt));
		break;
	case PPE_PKT_CNT_SIZE_5WORD:
		regmap_bulk_write(ppe_dev->regmap, reg,
				  drop_pkt_cnt, ARRAY_SIZE(drop_pkt_cnt));
		break;
	}
}

static int ppe_bm_counter_get(struct ppe_device *ppe_dev, struct seq_file *seq)
{
	u32 reg, val, pkt_cnt, pkt_cnt1;
	int ret, i, tag;

	seq_printf(seq, "%-24s", "BM SILENT_DROP:");
	tag = 0;
	for (i = 0; i < PPE_DROP_CNT_TBL_ENTRIES; i++) {
		reg = PPE_DROP_CNT_TBL_ADDR + i * PPE_DROP_CNT_TBL_INC;
		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD,
				      &pkt_cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (pkt_cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u(%s=%04d)", pkt_cnt, "port", i);
		}
	}

	seq_putc(seq, '\n');

	/* The number of packets dropped because the hardware buffers were
	 * only partially available for the packet.
	 */
	seq_printf(seq, "%-24s", "BM OVERFLOW_DROP:");
	tag = 0;
	for (i = 0; i < PPE_DROP_STAT_TBL_ENTRIES; i++) {
		reg = PPE_DROP_STAT_TBL_ADDR + PPE_DROP_STAT_TBL_INC * i;

		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD,
				      &pkt_cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (pkt_cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u(%s=%04d)", pkt_cnt, "port", i);
		}
	}

	seq_putc(seq, '\n');

	/* The number of currently occupied buffers that can't be flushed. */
	seq_printf(seq, "%-24s", "BM USED/REACT:");
	tag = 0;
	for (i = 0; i < PPE_BM_USED_CNT_TBL_ENTRIES; i++) {
		reg = PPE_BM_USED_CNT_TBL_ADDR + i * PPE_BM_USED_CNT_TBL_INC;
		ret = regmap_read(ppe_dev->regmap, reg, &val);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		/* The number of PPE buffers used for caching the received
		 * packets before the pause frame is sent.
		 */
		pkt_cnt = FIELD_GET(PPE_BM_USED_CNT_VAL, val);

		reg = PPE_BM_REACT_CNT_TBL_ADDR + i * PPE_BM_REACT_CNT_TBL_INC;
		ret = regmap_read(ppe_dev->regmap, reg, &val);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		/* The number of PPE buffers used for caching the received
		 * packets after the pause frame is sent out.
		 */
		pkt_cnt1 = FIELD_GET(PPE_BM_REACT_CNT_VAL, val);

		if (pkt_cnt > 0 || pkt_cnt1 > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, pkt_cnt1,
				   "port", i);
		}
	}

	seq_putc(seq, '\n');

	return 0;
}

/* The number of packets processed by the ingress parser module of PPE. */
static int ppe_parse_pkt_counter_get(struct ppe_device *ppe_dev,
				     struct seq_file *seq)
{
	u32 reg, cnt = 0, tunnel_cnt = 0;
	int i, ret, tag = 0;

	seq_printf(seq, "%-24s", "PARSE TPRX/IPRX:");
	for (i = 0; i < PPE_IPR_PKT_CNT_TBL_ENTRIES; i++) {
		reg = PPE_TPR_PKT_CNT_TBL_ADDR + i * PPE_TPR_PKT_CNT_TBL_INC;
		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD,
				      &tunnel_cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		reg = PPE_IPR_PKT_CNT_TBL_ADDR + i * PPE_IPR_PKT_CNT_TBL_INC;
		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD,
				      &cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (tunnel_cnt > 0 || cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u/%u(%s=%04d)", tunnel_cnt, cnt,
				   "port", i);
		}
	}

	seq_putc(seq, '\n');

	return 0;
}

/* The number of packets received or dropped on the ingress port. */
static int ppe_port_rx_counter_get(struct ppe_device *ppe_dev,
				   struct seq_file *seq)
{
	u32 reg, pkt_cnt = 0, drop_cnt = 0;
	int ret, i, tag;

	seq_printf(seq, "%-24s", "PORT RX/RX_DROP:");
	tag = 0;
	for (i = 0; i < PPE_PHY_PORT_RX_CNT_TBL_ENTRIES; i++) {
		reg = PPE_PHY_PORT_RX_CNT_TBL_ADDR + PPE_PHY_PORT_RX_CNT_TBL_INC * i;
		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD,
				      &pkt_cnt, &drop_cnt);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (pkt_cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt,
				   "port", i);
		}
	}

	seq_putc(seq, '\n');

	seq_printf(seq, "%-24s", "VPORT RX/RX_DROP:");
	tag = 0;
	for (i = 0; i < PPE_PORT_RX_CNT_TBL_ENTRIES; i++) {
		reg = PPE_PORT_RX_CNT_TBL_ADDR + PPE_PORT_RX_CNT_TBL_INC * i;
		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD,
				      &pkt_cnt, &drop_cnt);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (pkt_cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt,
				   "port", i);
		}
	}

	seq_putc(seq, '\n');

	return 0;
}

/* The number of packets received or dropped by layer 2 processing. */
static int ppe_l2_counter_get(struct ppe_device *ppe_dev,
			      struct seq_file *seq)
{
	u32 reg, pkt_cnt = 0, drop_cnt = 0;
	int ret, i, tag = 0;

	seq_printf(seq, "%-24s", "L2 RX/RX_DROP:");
	for (i = 0; i < PPE_PRE_L2_CNT_TBL_ENTRIES; i++) {
		reg = PPE_PRE_L2_CNT_TBL_ADDR + PPE_PRE_L2_CNT_TBL_INC * i;
		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD,
				      &pkt_cnt, &drop_cnt);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (pkt_cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt,
				   "vsi", i);
		}
	}

	seq_putc(seq, '\n');

	return 0;
}

/* The number of VLAN packets received by PPE. */
static int ppe_vlan_rx_counter_get(struct ppe_device *ppe_dev,
				   struct seq_file *seq)
{
	u32 reg, pkt_cnt = 0;
	int ret, i, tag = 0;

	seq_printf(seq, "%-24s", "VLAN RX:");
	for (i = 0; i < PPE_VLAN_CNT_TBL_ENTRIES; i++) {
		reg = PPE_VLAN_CNT_TBL_ADDR + PPE_VLAN_CNT_TBL_INC * i;

		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD,
				      &pkt_cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (pkt_cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u(%s=%04d)", pkt_cnt, "vsi", i);
		}
	}

	seq_putc(seq, '\n');

	return 0;
}

/* The number of packets handed to the CPU by PPE. */
static int ppe_cpu_code_counter_get(struct ppe_device *ppe_dev,
				    struct seq_file *seq)
{
	u32 reg, pkt_cnt = 0;
	int ret, i;

	seq_printf(seq, "%-24s", "CPU CODE:");
	for (i = 0; i < PPE_DROP_CPU_CNT_TBL_ENTRIES; i++) {
		reg = PPE_DROP_CPU_CNT_TBL_ADDR + PPE_DROP_CPU_CNT_TBL_INC * i;

		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD,
				      &pkt_cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (!pkt_cnt)
			continue;

		/* There are 256 CPU codes saved in the first 256 entries
		 * of the register table, and 128 drop codes for each PPE
		 * port (0-7), for a total of 256 + 8 * 128 entries.
		 */
		if (i < 256)
			seq_printf(seq, "%10u(cpucode:%d)", pkt_cnt, i);
		else
			seq_printf(seq, "%10u(port=%04d),dropcode:%d", pkt_cnt,
				   (i - 256) % 8, (i - 256) / 8);
		seq_putc(seq, '\n');
		seq_printf(seq, "%-24s", "");
	}

	seq_putc(seq, '\n');

	return 0;
}

/* The number of packets forwarded by VLAN on the egress direction. */
static int ppe_vlan_tx_counter_get(struct ppe_device *ppe_dev,
				   struct seq_file *seq)
{
	u32 reg, pkt_cnt = 0;
	int ret, i, tag = 0;

	seq_printf(seq, "%-24s", "VLAN TX:");
	for (i = 0; i < PPE_EG_VSI_COUNTER_TBL_ENTRIES; i++) {
		reg = PPE_EG_VSI_COUNTER_TBL_ADDR + PPE_EG_VSI_COUNTER_TBL_INC * i;

		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD,
				      &pkt_cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (pkt_cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u(%s=%04d)", pkt_cnt, "vsi", i);
		}
	}

	seq_putc(seq, '\n');

	return 0;
}

/* The number of packets transmitted or dropped on the egress port. */
static int ppe_port_tx_counter_get(struct ppe_device *ppe_dev,
				   struct seq_file *seq)
{
	u32 reg, pkt_cnt = 0, drop_cnt = 0;
	int ret, i, tag;

	seq_printf(seq, "%-24s", "VPORT TX/TX_DROP:");
	tag = 0;
	for (i = 0; i < PPE_VPORT_TX_COUNTER_TBL_ENTRIES; i++) {
		reg = PPE_VPORT_TX_COUNTER_TBL_ADDR + PPE_VPORT_TX_COUNTER_TBL_INC * i;
		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD,
				      &pkt_cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		reg = PPE_VPORT_TX_DROP_CNT_TBL_ADDR + PPE_VPORT_TX_DROP_CNT_TBL_INC * i;
		ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD,
				      &drop_cnt, NULL);
		if (ret) {
			dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret);
			return ret;
		}

		if (pkt_cnt > 0 || drop_cnt > 0) {
			if (!((++tag) % 4))
				seq_printf(seq, "\n%-24s", "");

			seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt,
				   "port", i);
		}
	}

	seq_putc(seq, '\n');

	seq_printf(seq, "%-24s", "PORT 
TX/TX_DROP:"); 553 + tag = 0; 554 + for (i = 0; i < PPE_PORT_TX_COUNTER_TBL_ENTRIES; i++) { 555 + reg = PPE_PORT_TX_COUNTER_TBL_ADDR + PPE_PORT_TX_COUNTER_TBL_INC * i; 556 + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, 557 + &pkt_cnt, NULL); 558 + if (ret) { 559 + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); 560 + return ret; 561 + } 562 + 563 + reg = PPE_PORT_TX_DROP_CNT_TBL_ADDR + PPE_PORT_TX_DROP_CNT_TBL_INC * i; 564 + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, 565 + &drop_cnt, NULL); 566 + if (ret) { 567 + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); 568 + return ret; 569 + } 570 + 571 + if (pkt_cnt > 0 || drop_cnt > 0) { 572 + if (!((++tag) % 4)) 573 + seq_printf(seq, "\n%-24s", ""); 574 + 575 + seq_printf(seq, "%10u/%u(%s=%04d)", pkt_cnt, drop_cnt, 576 + "port", i); 577 + } 578 + } 579 + 580 + seq_putc(seq, '\n'); 581 + 582 + return 0; 583 + } 584 + 585 + /* The number of packets transmitted or pending by the PPE queue. */ 586 + static int ppe_queue_counter_get(struct ppe_device *ppe_dev, 587 + struct seq_file *seq) 588 + { 589 + u32 reg, val, pkt_cnt = 0, pend_cnt = 0, drop_cnt = 0; 590 + int ret, i, tag = 0; 591 + 592 + seq_printf(seq, "%-24s", "QUEUE TX/PEND/DROP:"); 593 + for (i = 0; i < PPE_QUEUE_TX_COUNTER_TBL_ENTRIES; i++) { 594 + reg = PPE_QUEUE_TX_COUNTER_TBL_ADDR + PPE_QUEUE_TX_COUNTER_TBL_INC * i; 595 + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, 596 + &pkt_cnt, NULL); 597 + if (ret) { 598 + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); 599 + return ret; 600 + } 601 + 602 + if (i < PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES) { 603 + reg = PPE_AC_UNICAST_QUEUE_CNT_TBL_ADDR + 604 + PPE_AC_UNICAST_QUEUE_CNT_TBL_INC * i; 605 + ret = regmap_read(ppe_dev->regmap, reg, &val); 606 + if (ret) { 607 + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); 608 + return ret; 609 + } 610 + 611 + pend_cnt = FIELD_GET(PPE_AC_UNICAST_QUEUE_CNT_TBL_PEND_CNT, val); 612 + 613 + reg = PPE_UNICAST_DROP_CNT_TBL_ADDR + 614 + 
PPE_AC_UNICAST_QUEUE_CNT_TBL_INC * 615 + (i * PPE_UNICAST_DROP_TYPES + PPE_UNICAST_DROP_FORCE_OFFSET); 616 + 617 + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, 618 + &drop_cnt, NULL); 619 + if (ret) { 620 + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); 621 + return ret; 622 + } 623 + } else { 624 + int mq_offset = i - PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES; 625 + 626 + reg = PPE_AC_MULTICAST_QUEUE_CNT_TBL_ADDR + 627 + PPE_AC_MULTICAST_QUEUE_CNT_TBL_INC * mq_offset; 628 + ret = regmap_read(ppe_dev->regmap, reg, &val); 629 + if (ret) { 630 + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); 631 + return ret; 632 + } 633 + 634 + pend_cnt = FIELD_GET(PPE_AC_MULTICAST_QUEUE_CNT_TBL_PEND_CNT, val); 635 + 636 + if (mq_offset < PPE_P0_MULTICAST_QUEUE_NUM) { 637 + reg = PPE_CPU_PORT_MULTICAST_FORCE_DROP_CNT_TBL_ADDR(mq_offset); 638 + } else { 639 + mq_offset -= PPE_P0_MULTICAST_QUEUE_NUM; 640 + 641 + reg = PPE_P1_MULTICAST_DROP_CNT_TBL_ADDR; 642 + reg += (mq_offset / PPE_MULTICAST_QUEUE_NUM) * 643 + PPE_MULTICAST_QUEUE_PORT_ADDR_INC; 644 + reg += (mq_offset % PPE_MULTICAST_QUEUE_NUM) * 645 + PPE_MULTICAST_DROP_CNT_TBL_INC * 646 + PPE_MULTICAST_DROP_TYPES; 647 + } 648 + 649 + ret = ppe_pkt_cnt_get(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD, 650 + &drop_cnt, NULL); 651 + if (ret) { 652 + dev_err(ppe_dev->dev, "CNT ERROR %d\n", ret); 653 + return ret; 654 + } 655 + } 656 + 657 + if (pkt_cnt > 0 || pend_cnt > 0 || drop_cnt > 0) { 658 + if (!((++tag) % 4)) 659 + seq_printf(seq, "\n%-24s", ""); 660 + 661 + seq_printf(seq, "%10u/%u/%u(%s=%04d)", 662 + pkt_cnt, pend_cnt, drop_cnt, "queue", i); 663 + } 664 + } 665 + 666 + seq_putc(seq, '\n'); 667 + 668 + return 0; 669 + } 670 + 671 + /* Display the various packet counters of PPE. 
*/ 672 + static int ppe_packet_counter_show(struct seq_file *seq, void *v) 673 + { 674 + struct ppe_debugfs_entry *entry = seq->private; 675 + struct ppe_device *ppe_dev = entry->ppe; 676 + int ret; 677 + 678 + switch (entry->counter_type) { 679 + case PPE_CNT_BM: 680 + ret = ppe_bm_counter_get(ppe_dev, seq); 681 + break; 682 + case PPE_CNT_PARSE: 683 + ret = ppe_parse_pkt_counter_get(ppe_dev, seq); 684 + break; 685 + case PPE_CNT_PORT_RX: 686 + ret = ppe_port_rx_counter_get(ppe_dev, seq); 687 + break; 688 + case PPE_CNT_VLAN_RX: 689 + ret = ppe_vlan_rx_counter_get(ppe_dev, seq); 690 + break; 691 + case PPE_CNT_L2_FWD: 692 + ret = ppe_l2_counter_get(ppe_dev, seq); 693 + break; 694 + case PPE_CNT_CPU_CODE: 695 + ret = ppe_cpu_code_counter_get(ppe_dev, seq); 696 + break; 697 + case PPE_CNT_VLAN_TX: 698 + ret = ppe_vlan_tx_counter_get(ppe_dev, seq); 699 + break; 700 + case PPE_CNT_PORT_TX: 701 + ret = ppe_port_tx_counter_get(ppe_dev, seq); 702 + break; 703 + case PPE_CNT_QM: 704 + ret = ppe_queue_counter_get(ppe_dev, seq); 705 + break; 706 + default: 707 + ret = -EINVAL; 708 + break; 709 + } 710 + 711 + return ret; 712 + } 713 + 714 + /* Flush the various packet counters of PPE. 
*/ 715 + static ssize_t ppe_packet_counter_write(struct file *file, 716 + const char __user *buf, 717 + size_t count, loff_t *pos) 718 + { 719 + struct ppe_debugfs_entry *entry = file_inode(file)->i_private; 720 + struct ppe_device *ppe_dev = entry->ppe; 721 + u32 reg; 722 + int i; 723 + 724 + switch (entry->counter_type) { 725 + case PPE_CNT_BM: 726 + for (i = 0; i < PPE_DROP_CNT_TBL_ENTRIES; i++) { 727 + reg = PPE_DROP_CNT_TBL_ADDR + i * PPE_DROP_CNT_TBL_INC; 728 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); 729 + } 730 + 731 + for (i = 0; i < PPE_DROP_STAT_TBL_ENTRIES; i++) { 732 + reg = PPE_DROP_STAT_TBL_ADDR + PPE_DROP_STAT_TBL_INC * i; 733 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 734 + } 735 + 736 + break; 737 + case PPE_CNT_PARSE: 738 + for (i = 0; i < PPE_IPR_PKT_CNT_TBL_ENTRIES; i++) { 739 + reg = PPE_IPR_PKT_CNT_TBL_ADDR + i * PPE_IPR_PKT_CNT_TBL_INC; 740 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); 741 + 742 + reg = PPE_TPR_PKT_CNT_TBL_ADDR + i * PPE_TPR_PKT_CNT_TBL_INC; 743 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_1WORD); 744 + } 745 + 746 + break; 747 + case PPE_CNT_PORT_RX: 748 + for (i = 0; i < PPE_PORT_RX_CNT_TBL_ENTRIES; i++) { 749 + reg = PPE_PORT_RX_CNT_TBL_ADDR + PPE_PORT_RX_CNT_TBL_INC * i; 750 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); 751 + } 752 + 753 + for (i = 0; i < PPE_PHY_PORT_RX_CNT_TBL_ENTRIES; i++) { 754 + reg = PPE_PHY_PORT_RX_CNT_TBL_ADDR + PPE_PHY_PORT_RX_CNT_TBL_INC * i; 755 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); 756 + } 757 + 758 + break; 759 + case PPE_CNT_VLAN_RX: 760 + for (i = 0; i < PPE_VLAN_CNT_TBL_ENTRIES; i++) { 761 + reg = PPE_VLAN_CNT_TBL_ADDR + PPE_VLAN_CNT_TBL_INC * i; 762 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 763 + } 764 + 765 + break; 766 + case PPE_CNT_L2_FWD: 767 + for (i = 0; i < PPE_PRE_L2_CNT_TBL_ENTRIES; i++) { 768 + reg = PPE_PRE_L2_CNT_TBL_ADDR + PPE_PRE_L2_CNT_TBL_INC 
* i; 769 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_5WORD); 770 + } 771 + 772 + break; 773 + case PPE_CNT_CPU_CODE: 774 + for (i = 0; i < PPE_DROP_CPU_CNT_TBL_ENTRIES; i++) { 775 + reg = PPE_DROP_CPU_CNT_TBL_ADDR + PPE_DROP_CPU_CNT_TBL_INC * i; 776 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 777 + } 778 + 779 + break; 780 + case PPE_CNT_VLAN_TX: 781 + for (i = 0; i < PPE_EG_VSI_COUNTER_TBL_ENTRIES; i++) { 782 + reg = PPE_EG_VSI_COUNTER_TBL_ADDR + PPE_EG_VSI_COUNTER_TBL_INC * i; 783 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 784 + } 785 + 786 + break; 787 + case PPE_CNT_PORT_TX: 788 + for (i = 0; i < PPE_PORT_TX_COUNTER_TBL_ENTRIES; i++) { 789 + reg = PPE_PORT_TX_DROP_CNT_TBL_ADDR + PPE_PORT_TX_DROP_CNT_TBL_INC * i; 790 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 791 + 792 + reg = PPE_PORT_TX_COUNTER_TBL_ADDR + PPE_PORT_TX_COUNTER_TBL_INC * i; 793 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 794 + } 795 + 796 + for (i = 0; i < PPE_VPORT_TX_COUNTER_TBL_ENTRIES; i++) { 797 + reg = PPE_VPORT_TX_COUNTER_TBL_ADDR + PPE_VPORT_TX_COUNTER_TBL_INC * i; 798 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 799 + 800 + reg = PPE_VPORT_TX_DROP_CNT_TBL_ADDR + PPE_VPORT_TX_DROP_CNT_TBL_INC * i; 801 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 802 + } 803 + 804 + break; 805 + case PPE_CNT_QM: 806 + for (i = 0; i < PPE_QUEUE_TX_COUNTER_TBL_ENTRIES; i++) { 807 + reg = PPE_QUEUE_TX_COUNTER_TBL_ADDR + PPE_QUEUE_TX_COUNTER_TBL_INC * i; 808 + ppe_tbl_pkt_cnt_clear(ppe_dev, reg, PPE_PKT_CNT_SIZE_3WORD); 809 + } 810 + 811 + break; 812 + default: 813 + break; 814 + } 815 + 816 + return count; 817 + } 818 + DEFINE_SHOW_STORE_ATTRIBUTE(ppe_packet_counter); 819 + 820 + void ppe_debugfs_setup(struct ppe_device *ppe_dev) 821 + { 822 + struct ppe_debugfs_entry *entry; 823 + int i; 824 + 825 + ppe_dev->debugfs_root = debugfs_create_dir("ppe", NULL); 826 + if 
(IS_ERR(ppe_dev->debugfs_root)) 827 + return; 828 + 829 + for (i = 0; i < ARRAY_SIZE(debugfs_files); i++) { 830 + entry = devm_kzalloc(ppe_dev->dev, sizeof(*entry), GFP_KERNEL); 831 + if (!entry) 832 + return; 833 + 834 + entry->ppe = ppe_dev; 835 + entry->counter_type = debugfs_files[i].counter_type; 836 + 837 + debugfs_create_file(debugfs_files[i].name, 0444, 838 + ppe_dev->debugfs_root, entry, 839 + &ppe_packet_counter_fops); 840 + } 841 + } 842 + 843 + void ppe_debugfs_teardown(struct ppe_device *ppe_dev) 844 + { 845 + debugfs_remove_recursive(ppe_dev->debugfs_root); 846 + ppe_dev->debugfs_root = NULL; 847 + }
drivers/net/ethernet/qualcomm/ppe/ppe_debugfs.h (+16 lines)
+ /* SPDX-License-Identifier: GPL-2.0-only
+  *
+  * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+  */
+
+ /* PPE debugfs counters setup. */
+
+ #ifndef __PPE_DEBUGFS_H__
+ #define __PPE_DEBUGFS_H__
+
+ #include "ppe.h"
+
+ void ppe_debugfs_setup(struct ppe_device *ppe_dev);
+ void ppe_debugfs_teardown(struct ppe_device *ppe_dev);
+
+ #endif
drivers/net/ethernet/qualcomm/ppe/ppe_regs.h (+591 lines)
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only 2 + * 3 + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 4 + */ 5 + 6 + /* PPE hardware register and table declarations. */ 7 + #ifndef __PPE_REGS_H__ 8 + #define __PPE_REGS_H__ 9 + 10 + #include <linux/bitfield.h> 11 + 12 + /* PPE scheduler configurations for buffer manager block. */ 13 + #define PPE_BM_SCH_CTRL_ADDR 0xb000 14 + #define PPE_BM_SCH_CTRL_INC 4 15 + #define PPE_BM_SCH_CTRL_SCH_DEPTH GENMASK(7, 0) 16 + #define PPE_BM_SCH_CTRL_SCH_OFFSET GENMASK(14, 8) 17 + #define PPE_BM_SCH_CTRL_SCH_EN BIT(31) 18 + 19 + /* PPE drop counters. */ 20 + #define PPE_DROP_CNT_TBL_ADDR 0xb024 21 + #define PPE_DROP_CNT_TBL_ENTRIES 8 22 + #define PPE_DROP_CNT_TBL_INC 4 23 + 24 + /* BM port drop counters. */ 25 + #define PPE_DROP_STAT_TBL_ADDR 0xe000 26 + #define PPE_DROP_STAT_TBL_ENTRIES 30 27 + #define PPE_DROP_STAT_TBL_INC 0x10 28 + 29 + /* Egress VLAN counters. */ 30 + #define PPE_EG_VSI_COUNTER_TBL_ADDR 0x41000 31 + #define PPE_EG_VSI_COUNTER_TBL_ENTRIES 64 32 + #define PPE_EG_VSI_COUNTER_TBL_INC 0x10 33 + 34 + /* Port TX counters. */ 35 + #define PPE_PORT_TX_COUNTER_TBL_ADDR 0x45000 36 + #define PPE_PORT_TX_COUNTER_TBL_ENTRIES 8 37 + #define PPE_PORT_TX_COUNTER_TBL_INC 0x10 38 + 39 + /* Virtual port TX counters. */ 40 + #define PPE_VPORT_TX_COUNTER_TBL_ADDR 0x47000 41 + #define PPE_VPORT_TX_COUNTER_TBL_ENTRIES 256 42 + #define PPE_VPORT_TX_COUNTER_TBL_INC 0x10 43 + 44 + /* Queue counters. */ 45 + #define PPE_QUEUE_TX_COUNTER_TBL_ADDR 0x4a000 46 + #define PPE_QUEUE_TX_COUNTER_TBL_ENTRIES 300 47 + #define PPE_QUEUE_TX_COUNTER_TBL_INC 0x10 48 + 49 + /* RSS settings are to calculate the random RSS hash value generated during 50 + * packet receive to ARM cores. This hash is then used to generate the queue 51 + * offset used to determine the queue used to transmit the packet to ARM cores. 
52 + */ 53 + #define PPE_RSS_HASH_MASK_ADDR 0xb4318 54 + #define PPE_RSS_HASH_MASK_HASH_MASK GENMASK(20, 0) 55 + #define PPE_RSS_HASH_MASK_FRAGMENT BIT(28) 56 + 57 + #define PPE_RSS_HASH_SEED_ADDR 0xb431c 58 + #define PPE_RSS_HASH_SEED_VAL GENMASK(31, 0) 59 + 60 + #define PPE_RSS_HASH_MIX_ADDR 0xb4320 61 + #define PPE_RSS_HASH_MIX_ENTRIES 11 62 + #define PPE_RSS_HASH_MIX_INC 4 63 + #define PPE_RSS_HASH_MIX_VAL GENMASK(4, 0) 64 + 65 + #define PPE_RSS_HASH_FIN_ADDR 0xb4350 66 + #define PPE_RSS_HASH_FIN_ENTRIES 5 67 + #define PPE_RSS_HASH_FIN_INC 4 68 + #define PPE_RSS_HASH_FIN_INNER GENMASK(4, 0) 69 + #define PPE_RSS_HASH_FIN_OUTER GENMASK(9, 5) 70 + 71 + #define PPE_RSS_HASH_MASK_IPV4_ADDR 0xb4380 72 + #define PPE_RSS_HASH_MASK_IPV4_HASH_MASK GENMASK(20, 0) 73 + #define PPE_RSS_HASH_MASK_IPV4_FRAGMENT BIT(28) 74 + 75 + #define PPE_RSS_HASH_SEED_IPV4_ADDR 0xb4384 76 + #define PPE_RSS_HASH_SEED_IPV4_VAL GENMASK(31, 0) 77 + 78 + #define PPE_RSS_HASH_MIX_IPV4_ADDR 0xb4390 79 + #define PPE_RSS_HASH_MIX_IPV4_ENTRIES 5 80 + #define PPE_RSS_HASH_MIX_IPV4_INC 4 81 + #define PPE_RSS_HASH_MIX_IPV4_VAL GENMASK(4, 0) 82 + 83 + #define PPE_RSS_HASH_FIN_IPV4_ADDR 0xb43b0 84 + #define PPE_RSS_HASH_FIN_IPV4_ENTRIES 5 85 + #define PPE_RSS_HASH_FIN_IPV4_INC 4 86 + #define PPE_RSS_HASH_FIN_IPV4_INNER GENMASK(4, 0) 87 + #define PPE_RSS_HASH_FIN_IPV4_OUTER GENMASK(9, 5) 88 + 89 + #define PPE_BM_SCH_CFG_TBL_ADDR 0xc000 90 + #define PPE_BM_SCH_CFG_TBL_ENTRIES 128 91 + #define PPE_BM_SCH_CFG_TBL_INC 0x10 92 + #define PPE_BM_SCH_CFG_TBL_PORT_NUM GENMASK(3, 0) 93 + #define PPE_BM_SCH_CFG_TBL_DIR BIT(4) 94 + #define PPE_BM_SCH_CFG_TBL_VALID BIT(5) 95 + #define PPE_BM_SCH_CFG_TBL_SECOND_PORT_VALID BIT(6) 96 + #define PPE_BM_SCH_CFG_TBL_SECOND_PORT GENMASK(11, 8) 97 + 98 + /* PPE service code configuration for the ingress direction functions, 99 + * including bypass configuration for relevant PPE switch core functions 100 + * such as flow entry lookup bypass. 
101 + */ 102 + #define PPE_SERVICE_TBL_ADDR 0x15000 103 + #define PPE_SERVICE_TBL_ENTRIES 256 104 + #define PPE_SERVICE_TBL_INC 0x10 105 + #define PPE_SERVICE_W0_BYPASS_BITMAP GENMASK(31, 0) 106 + #define PPE_SERVICE_W1_RX_CNT_EN BIT(0) 107 + 108 + #define PPE_SERVICE_SET_BYPASS_BITMAP(tbl_cfg, value) \ 109 + FIELD_MODIFY(PPE_SERVICE_W0_BYPASS_BITMAP, tbl_cfg, value) 110 + #define PPE_SERVICE_SET_RX_CNT_EN(tbl_cfg, value) \ 111 + FIELD_MODIFY(PPE_SERVICE_W1_RX_CNT_EN, (tbl_cfg) + 0x1, value) 112 + 113 + /* PPE port egress VLAN configurations. */ 114 + #define PPE_PORT_EG_VLAN_TBL_ADDR 0x20020 115 + #define PPE_PORT_EG_VLAN_TBL_ENTRIES 8 116 + #define PPE_PORT_EG_VLAN_TBL_INC 4 117 + #define PPE_PORT_EG_VLAN_TBL_VLAN_TYPE BIT(0) 118 + #define PPE_PORT_EG_VLAN_TBL_CTAG_MODE GENMASK(2, 1) 119 + #define PPE_PORT_EG_VLAN_TBL_STAG_MODE GENMASK(4, 3) 120 + #define PPE_PORT_EG_VLAN_TBL_VSI_TAG_MODE_EN BIT(5) 121 + #define PPE_PORT_EG_VLAN_TBL_PCP_PROP_CMD BIT(6) 122 + #define PPE_PORT_EG_VLAN_TBL_DEI_PROP_CMD BIT(7) 123 + #define PPE_PORT_EG_VLAN_TBL_TX_COUNTING_EN BIT(8) 124 + 125 + /* PPE queue counters enable/disable control. */ 126 + #define PPE_EG_BRIDGE_CONFIG_ADDR 0x20044 127 + #define PPE_EG_BRIDGE_CONFIG_QUEUE_CNT_EN BIT(2) 128 + 129 + /* PPE service code configuration on the egress direction. 
*/ 130 + #define PPE_EG_SERVICE_TBL_ADDR 0x43000 131 + #define PPE_EG_SERVICE_TBL_ENTRIES 256 132 + #define PPE_EG_SERVICE_TBL_INC 0x10 133 + #define PPE_EG_SERVICE_W0_UPDATE_ACTION GENMASK(31, 0) 134 + #define PPE_EG_SERVICE_W1_NEXT_SERVCODE GENMASK(7, 0) 135 + #define PPE_EG_SERVICE_W1_HW_SERVICE GENMASK(13, 8) 136 + #define PPE_EG_SERVICE_W1_OFFSET_SEL BIT(14) 137 + #define PPE_EG_SERVICE_W1_TX_CNT_EN BIT(15) 138 + 139 + #define PPE_EG_SERVICE_SET_UPDATE_ACTION(tbl_cfg, value) \ 140 + FIELD_MODIFY(PPE_EG_SERVICE_W0_UPDATE_ACTION, tbl_cfg, value) 141 + #define PPE_EG_SERVICE_SET_NEXT_SERVCODE(tbl_cfg, value) \ 142 + FIELD_MODIFY(PPE_EG_SERVICE_W1_NEXT_SERVCODE, (tbl_cfg) + 0x1, value) 143 + #define PPE_EG_SERVICE_SET_HW_SERVICE(tbl_cfg, value) \ 144 + FIELD_MODIFY(PPE_EG_SERVICE_W1_HW_SERVICE, (tbl_cfg) + 0x1, value) 145 + #define PPE_EG_SERVICE_SET_OFFSET_SEL(tbl_cfg, value) \ 146 + FIELD_MODIFY(PPE_EG_SERVICE_W1_OFFSET_SEL, (tbl_cfg) + 0x1, value) 147 + #define PPE_EG_SERVICE_SET_TX_CNT_EN(tbl_cfg, value) \ 148 + FIELD_MODIFY(PPE_EG_SERVICE_W1_TX_CNT_EN, (tbl_cfg) + 0x1, value) 149 + 150 + /* PPE port bridge configuration */ 151 + #define PPE_PORT_BRIDGE_CTRL_ADDR 0x60300 152 + #define PPE_PORT_BRIDGE_CTRL_ENTRIES 8 153 + #define PPE_PORT_BRIDGE_CTRL_INC 4 154 + #define PPE_PORT_BRIDGE_NEW_LRN_EN BIT(0) 155 + #define PPE_PORT_BRIDGE_STA_MOVE_LRN_EN BIT(3) 156 + #define PPE_PORT_BRIDGE_TXMAC_EN BIT(16) 157 + 158 + /* PPE port control configurations for the traffic to the multicast queues. 
*/ 159 + #define PPE_MC_MTU_CTRL_TBL_ADDR 0x60a00 160 + #define PPE_MC_MTU_CTRL_TBL_ENTRIES 8 161 + #define PPE_MC_MTU_CTRL_TBL_INC 4 162 + #define PPE_MC_MTU_CTRL_TBL_MTU GENMASK(13, 0) 163 + #define PPE_MC_MTU_CTRL_TBL_MTU_CMD GENMASK(15, 14) 164 + #define PPE_MC_MTU_CTRL_TBL_TX_CNT_EN BIT(16) 165 + 166 + /* PPE VSI configurations */ 167 + #define PPE_VSI_TBL_ADDR 0x63800 168 + #define PPE_VSI_TBL_ENTRIES 64 169 + #define PPE_VSI_TBL_INC 0x10 170 + #define PPE_VSI_W0_MEMBER_PORT_BITMAP GENMASK(7, 0) 171 + #define PPE_VSI_W0_UUC_BITMAP GENMASK(15, 8) 172 + #define PPE_VSI_W0_UMC_BITMAP GENMASK(23, 16) 173 + #define PPE_VSI_W0_BC_BITMAP GENMASK(31, 24) 174 + #define PPE_VSI_W1_NEW_ADDR_LRN_EN BIT(0) 175 + #define PPE_VSI_W1_NEW_ADDR_FWD_CMD GENMASK(2, 1) 176 + #define PPE_VSI_W1_STATION_MOVE_LRN_EN BIT(3) 177 + #define PPE_VSI_W1_STATION_MOVE_FWD_CMD GENMASK(5, 4) 178 + 179 + #define PPE_VSI_SET_MEMBER_PORT_BITMAP(tbl_cfg, value) \ 180 + FIELD_MODIFY(PPE_VSI_W0_MEMBER_PORT_BITMAP, tbl_cfg, value) 181 + #define PPE_VSI_SET_UUC_BITMAP(tbl_cfg, value) \ 182 + FIELD_MODIFY(PPE_VSI_W0_UUC_BITMAP, tbl_cfg, value) 183 + #define PPE_VSI_SET_UMC_BITMAP(tbl_cfg, value) \ 184 + FIELD_MODIFY(PPE_VSI_W0_UMC_BITMAP, tbl_cfg, value) 185 + #define PPE_VSI_SET_BC_BITMAP(tbl_cfg, value) \ 186 + FIELD_MODIFY(PPE_VSI_W0_BC_BITMAP, tbl_cfg, value) 187 + #define PPE_VSI_SET_NEW_ADDR_LRN_EN(tbl_cfg, value) \ 188 + FIELD_MODIFY(PPE_VSI_W1_NEW_ADDR_LRN_EN, (tbl_cfg) + 0x1, value) 189 + #define PPE_VSI_SET_NEW_ADDR_FWD_CMD(tbl_cfg, value) \ 190 + FIELD_MODIFY(PPE_VSI_W1_NEW_ADDR_FWD_CMD, (tbl_cfg) + 0x1, value) 191 + #define PPE_VSI_SET_STATION_MOVE_LRN_EN(tbl_cfg, value) \ 192 + FIELD_MODIFY(PPE_VSI_W1_STATION_MOVE_LRN_EN, (tbl_cfg) + 0x1, value) 193 + #define PPE_VSI_SET_STATION_MOVE_FWD_CMD(tbl_cfg, value) \ 194 + FIELD_MODIFY(PPE_VSI_W1_STATION_MOVE_FWD_CMD, (tbl_cfg) + 0x1, value) 195 + 196 + /* PPE port control configurations for the traffic to the unicast queues. 
*/ 197 + #define PPE_MRU_MTU_CTRL_TBL_ADDR 0x65000 198 + #define PPE_MRU_MTU_CTRL_TBL_ENTRIES 256 199 + #define PPE_MRU_MTU_CTRL_TBL_INC 0x10 200 + #define PPE_MRU_MTU_CTRL_W0_MRU GENMASK(13, 0) 201 + #define PPE_MRU_MTU_CTRL_W0_MRU_CMD GENMASK(15, 14) 202 + #define PPE_MRU_MTU_CTRL_W0_MTU GENMASK(29, 16) 203 + #define PPE_MRU_MTU_CTRL_W0_MTU_CMD GENMASK(31, 30) 204 + #define PPE_MRU_MTU_CTRL_W1_RX_CNT_EN BIT(0) 205 + #define PPE_MRU_MTU_CTRL_W1_TX_CNT_EN BIT(1) 206 + #define PPE_MRU_MTU_CTRL_W1_SRC_PROFILE GENMASK(3, 2) 207 + #define PPE_MRU_MTU_CTRL_W1_INNER_PREC_LOW BIT(31) 208 + #define PPE_MRU_MTU_CTRL_W2_INNER_PREC_HIGH GENMASK(1, 0) 209 + 210 + #define PPE_MRU_MTU_CTRL_SET_MRU(tbl_cfg, value) \ 211 + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W0_MRU, tbl_cfg, value) 212 + #define PPE_MRU_MTU_CTRL_SET_MRU_CMD(tbl_cfg, value) \ 213 + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W0_MRU_CMD, tbl_cfg, value) 214 + #define PPE_MRU_MTU_CTRL_SET_MTU(tbl_cfg, value) \ 215 + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W0_MTU, tbl_cfg, value) 216 + #define PPE_MRU_MTU_CTRL_SET_MTU_CMD(tbl_cfg, value) \ 217 + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W0_MTU_CMD, tbl_cfg, value) 218 + #define PPE_MRU_MTU_CTRL_SET_RX_CNT_EN(tbl_cfg, value) \ 219 + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W1_RX_CNT_EN, (tbl_cfg) + 0x1, value) 220 + #define PPE_MRU_MTU_CTRL_SET_TX_CNT_EN(tbl_cfg, value) \ 221 + FIELD_MODIFY(PPE_MRU_MTU_CTRL_W1_TX_CNT_EN, (tbl_cfg) + 0x1, value) 222 + 223 + /* PPE service code configuration for destination port and counter. 
*/ 224 + #define PPE_IN_L2_SERVICE_TBL_ADDR 0x66000 225 + #define PPE_IN_L2_SERVICE_TBL_ENTRIES 256 226 + #define PPE_IN_L2_SERVICE_TBL_INC 0x10 227 + #define PPE_IN_L2_SERVICE_TBL_DST_PORT_ID_VALID BIT(0) 228 + #define PPE_IN_L2_SERVICE_TBL_DST_PORT_ID GENMASK(4, 1) 229 + #define PPE_IN_L2_SERVICE_TBL_DST_DIRECTION BIT(5) 230 + #define PPE_IN_L2_SERVICE_TBL_DST_BYPASS_BITMAP GENMASK(29, 6) 231 + #define PPE_IN_L2_SERVICE_TBL_RX_CNT_EN BIT(30) 232 + #define PPE_IN_L2_SERVICE_TBL_TX_CNT_EN BIT(31) 233 + 234 + /* L2 Port configurations */ 235 + #define PPE_L2_VP_PORT_TBL_ADDR 0x98000 236 + #define PPE_L2_VP_PORT_TBL_ENTRIES 256 237 + #define PPE_L2_VP_PORT_TBL_INC 0x10 238 + #define PPE_L2_VP_PORT_W0_INVALID_VSI_FWD_EN BIT(0) 239 + #define PPE_L2_VP_PORT_W0_DST_INFO GENMASK(9, 2) 240 + 241 + #define PPE_L2_PORT_SET_INVALID_VSI_FWD_EN(tbl_cfg, value) \ 242 + FIELD_MODIFY(PPE_L2_VP_PORT_W0_INVALID_VSI_FWD_EN, tbl_cfg, value) 243 + #define PPE_L2_PORT_SET_DST_INFO(tbl_cfg, value) \ 244 + FIELD_MODIFY(PPE_L2_VP_PORT_W0_DST_INFO, tbl_cfg, value) 245 + 246 + /* Port RX and RX drop counters. */ 247 + #define PPE_PORT_RX_CNT_TBL_ADDR 0x150000 248 + #define PPE_PORT_RX_CNT_TBL_ENTRIES 256 249 + #define PPE_PORT_RX_CNT_TBL_INC 0x20 250 + 251 + /* Physical port RX and RX drop counters. */ 252 + #define PPE_PHY_PORT_RX_CNT_TBL_ADDR 0x156000 253 + #define PPE_PHY_PORT_RX_CNT_TBL_ENTRIES 8 254 + #define PPE_PHY_PORT_RX_CNT_TBL_INC 0x20 255 + 256 + /* Counters for the packet to CPU port. */ 257 + #define PPE_DROP_CPU_CNT_TBL_ADDR 0x160000 258 + #define PPE_DROP_CPU_CNT_TBL_ENTRIES 1280 259 + #define PPE_DROP_CPU_CNT_TBL_INC 0x10 260 + 261 + /* VLAN counters. */ 262 + #define PPE_VLAN_CNT_TBL_ADDR 0x178000 263 + #define PPE_VLAN_CNT_TBL_ENTRIES 64 264 + #define PPE_VLAN_CNT_TBL_INC 0x10 265 + 266 + /* PPE L2 counters. 
*/ 267 + #define PPE_PRE_L2_CNT_TBL_ADDR 0x17c000 268 + #define PPE_PRE_L2_CNT_TBL_ENTRIES 64 269 + #define PPE_PRE_L2_CNT_TBL_INC 0x20 270 + 271 + /* Port TX drop counters. */ 272 + #define PPE_PORT_TX_DROP_CNT_TBL_ADDR 0x17d000 273 + #define PPE_PORT_TX_DROP_CNT_TBL_ENTRIES 8 274 + #define PPE_PORT_TX_DROP_CNT_TBL_INC 0x10 275 + 276 + /* Virtual port TX counters. */ 277 + #define PPE_VPORT_TX_DROP_CNT_TBL_ADDR 0x17e000 278 + #define PPE_VPORT_TX_DROP_CNT_TBL_ENTRIES 256 279 + #define PPE_VPORT_TX_DROP_CNT_TBL_INC 0x10 280 + 281 + /* Counters for the tunnel packet. */ 282 + #define PPE_TPR_PKT_CNT_TBL_ADDR 0x1d0080 283 + #define PPE_TPR_PKT_CNT_TBL_ENTRIES 8 284 + #define PPE_TPR_PKT_CNT_TBL_INC 4 285 + 286 + /* Counters for the all packet received. */ 287 + #define PPE_IPR_PKT_CNT_TBL_ADDR 0x1e0080 288 + #define PPE_IPR_PKT_CNT_TBL_ENTRIES 8 289 + #define PPE_IPR_PKT_CNT_TBL_INC 4 290 + 291 + /* PPE service code configuration for the tunnel packet. */ 292 + #define PPE_TL_SERVICE_TBL_ADDR 0x306000 293 + #define PPE_TL_SERVICE_TBL_ENTRIES 256 294 + #define PPE_TL_SERVICE_TBL_INC 4 295 + #define PPE_TL_SERVICE_TBL_BYPASS_BITMAP GENMASK(31, 0) 296 + 297 + /* Port scheduler global config. */ 298 + #define PPE_PSCH_SCH_DEPTH_CFG_ADDR 0x400000 299 + #define PPE_PSCH_SCH_DEPTH_CFG_INC 4 300 + #define PPE_PSCH_SCH_DEPTH_CFG_SCH_DEPTH GENMASK(7, 0) 301 + 302 + /* PPE queue level scheduler configurations. 
*/ 303 + #define PPE_L0_FLOW_MAP_TBL_ADDR 0x402000 304 + #define PPE_L0_FLOW_MAP_TBL_ENTRIES 300 305 + #define PPE_L0_FLOW_MAP_TBL_INC 0x10 306 + #define PPE_L0_FLOW_MAP_TBL_FLOW_ID GENMASK(5, 0) 307 + #define PPE_L0_FLOW_MAP_TBL_C_PRI GENMASK(8, 6) 308 + #define PPE_L0_FLOW_MAP_TBL_E_PRI GENMASK(11, 9) 309 + #define PPE_L0_FLOW_MAP_TBL_C_NODE_WT GENMASK(21, 12) 310 + #define PPE_L0_FLOW_MAP_TBL_E_NODE_WT GENMASK(31, 22) 311 + 312 + #define PPE_L0_C_FLOW_CFG_TBL_ADDR 0x404000 313 + #define PPE_L0_C_FLOW_CFG_TBL_ENTRIES 512 314 + #define PPE_L0_C_FLOW_CFG_TBL_INC 0x10 315 + #define PPE_L0_C_FLOW_CFG_TBL_NODE_ID GENMASK(7, 0) 316 + #define PPE_L0_C_FLOW_CFG_TBL_NODE_CREDIT_UNIT BIT(8) 317 + 318 + #define PPE_L0_E_FLOW_CFG_TBL_ADDR 0x406000 319 + #define PPE_L0_E_FLOW_CFG_TBL_ENTRIES 512 320 + #define PPE_L0_E_FLOW_CFG_TBL_INC 0x10 321 + #define PPE_L0_E_FLOW_CFG_TBL_NODE_ID GENMASK(7, 0) 322 + #define PPE_L0_E_FLOW_CFG_TBL_NODE_CREDIT_UNIT BIT(8) 323 + 324 + #define PPE_L0_FLOW_PORT_MAP_TBL_ADDR 0x408000 325 + #define PPE_L0_FLOW_PORT_MAP_TBL_ENTRIES 300 326 + #define PPE_L0_FLOW_PORT_MAP_TBL_INC 0x10 327 + #define PPE_L0_FLOW_PORT_MAP_TBL_PORT_NUM GENMASK(3, 0) 328 + 329 + #define PPE_L0_COMP_CFG_TBL_ADDR 0x428000 330 + #define PPE_L0_COMP_CFG_TBL_ENTRIES 300 331 + #define PPE_L0_COMP_CFG_TBL_INC 0x10 332 + #define PPE_L0_COMP_CFG_TBL_SHAPER_METER_LEN GENMASK(1, 0) 333 + #define PPE_L0_COMP_CFG_TBL_NODE_METER_LEN GENMASK(3, 2) 334 + 335 + /* PPE queue to Ethernet DMA ring mapping table. */ 336 + #define PPE_RING_Q_MAP_TBL_ADDR 0x42a000 337 + #define PPE_RING_Q_MAP_TBL_ENTRIES 24 338 + #define PPE_RING_Q_MAP_TBL_INC 0x40 339 + 340 + /* Table addresses for per-queue dequeue setting. */ 341 + #define PPE_DEQ_OPR_TBL_ADDR 0x430000 342 + #define PPE_DEQ_OPR_TBL_ENTRIES 300 343 + #define PPE_DEQ_OPR_TBL_INC 0x10 344 + #define PPE_DEQ_OPR_TBL_DEQ_DISABLE BIT(0) 345 + 346 + /* PPE flow level scheduler configurations. 
*/ 347 + #define PPE_L1_FLOW_MAP_TBL_ADDR 0x440000 348 + #define PPE_L1_FLOW_MAP_TBL_ENTRIES 64 349 + #define PPE_L1_FLOW_MAP_TBL_INC 0x10 350 + #define PPE_L1_FLOW_MAP_TBL_FLOW_ID GENMASK(3, 0) 351 + #define PPE_L1_FLOW_MAP_TBL_C_PRI GENMASK(6, 4) 352 + #define PPE_L1_FLOW_MAP_TBL_E_PRI GENMASK(9, 7) 353 + #define PPE_L1_FLOW_MAP_TBL_C_NODE_WT GENMASK(19, 10) 354 + #define PPE_L1_FLOW_MAP_TBL_E_NODE_WT GENMASK(29, 20) 355 + 356 + #define PPE_L1_C_FLOW_CFG_TBL_ADDR 0x442000 357 + #define PPE_L1_C_FLOW_CFG_TBL_ENTRIES 64 358 + #define PPE_L1_C_FLOW_CFG_TBL_INC 0x10 359 + #define PPE_L1_C_FLOW_CFG_TBL_NODE_ID GENMASK(5, 0) 360 + #define PPE_L1_C_FLOW_CFG_TBL_NODE_CREDIT_UNIT BIT(6) 361 + 362 + #define PPE_L1_E_FLOW_CFG_TBL_ADDR 0x444000 363 + #define PPE_L1_E_FLOW_CFG_TBL_ENTRIES 64 364 + #define PPE_L1_E_FLOW_CFG_TBL_INC 0x10 365 + #define PPE_L1_E_FLOW_CFG_TBL_NODE_ID GENMASK(5, 0) 366 + #define PPE_L1_E_FLOW_CFG_TBL_NODE_CREDIT_UNIT BIT(6) 367 + 368 + #define PPE_L1_FLOW_PORT_MAP_TBL_ADDR 0x446000 369 + #define PPE_L1_FLOW_PORT_MAP_TBL_ENTRIES 64 370 + #define PPE_L1_FLOW_PORT_MAP_TBL_INC 0x10 371 + #define PPE_L1_FLOW_PORT_MAP_TBL_PORT_NUM GENMASK(3, 0) 372 + 373 + #define PPE_L1_COMP_CFG_TBL_ADDR 0x46a000 374 + #define PPE_L1_COMP_CFG_TBL_ENTRIES 64 375 + #define PPE_L1_COMP_CFG_TBL_INC 0x10 376 + #define PPE_L1_COMP_CFG_TBL_SHAPER_METER_LEN GENMASK(1, 0) 377 + #define PPE_L1_COMP_CFG_TBL_NODE_METER_LEN GENMASK(3, 2) 378 + 379 + /* PPE port scheduler configurations for egress. 
+ */
+#define PPE_PSCH_SCH_CFG_TBL_ADDR		0x47a000
+#define PPE_PSCH_SCH_CFG_TBL_ENTRIES		128
+#define PPE_PSCH_SCH_CFG_TBL_INC		0x10
+#define PPE_PSCH_SCH_CFG_TBL_DES_PORT		GENMASK(3, 0)
+#define PPE_PSCH_SCH_CFG_TBL_ENS_PORT		GENMASK(7, 4)
+#define PPE_PSCH_SCH_CFG_TBL_ENS_PORT_BITMAP	GENMASK(15, 8)
+#define PPE_PSCH_SCH_CFG_TBL_DES_SECOND_PORT_EN	BIT(16)
+#define PPE_PSCH_SCH_CFG_TBL_DES_SECOND_PORT	GENMASK(20, 17)
+
+/* There are 15 BM ports and 4 BM groups supported by PPE.
+ * BM ports 0-7 are for EDMA port 0, BM ports 8-13 are for
+ * PPE physical ports 1-6, and BM port 14 is for the EIP port.
+ */
+#define PPE_BM_PORT_FC_MODE_ADDR		0x600100
+#define PPE_BM_PORT_FC_MODE_ENTRIES		15
+#define PPE_BM_PORT_FC_MODE_INC			0x4
+#define PPE_BM_PORT_FC_MODE_EN			BIT(0)
+
+#define PPE_BM_PORT_GROUP_ID_ADDR		0x600180
+#define PPE_BM_PORT_GROUP_ID_ENTRIES		15
+#define PPE_BM_PORT_GROUP_ID_INC		0x4
+#define PPE_BM_PORT_GROUP_ID_SHARED_GROUP_ID	GENMASK(1, 0)
+
+/* Counters for PPE buffers used for cached packets. */
+#define PPE_BM_USED_CNT_TBL_ADDR		0x6001c0
+#define PPE_BM_USED_CNT_TBL_ENTRIES		15
+#define PPE_BM_USED_CNT_TBL_INC			0x4
+#define PPE_BM_USED_CNT_VAL			GENMASK(10, 0)
+
+/* Counters for PPE buffers used for packets received after the pause
+ * frame is sent.
+ */
+#define PPE_BM_REACT_CNT_TBL_ADDR		0x600240
+#define PPE_BM_REACT_CNT_TBL_ENTRIES		15
+#define PPE_BM_REACT_CNT_TBL_INC		0x4
+#define PPE_BM_REACT_CNT_VAL			GENMASK(8, 0)
+
+#define PPE_BM_SHARED_GROUP_CFG_ADDR		0x600290
+#define PPE_BM_SHARED_GROUP_CFG_ENTRIES		4
+#define PPE_BM_SHARED_GROUP_CFG_INC		0x4
+#define PPE_BM_SHARED_GROUP_CFG_SHARED_LIMIT	GENMASK(10, 0)
+
+#define PPE_BM_PORT_FC_CFG_TBL_ADDR		0x601000
+#define PPE_BM_PORT_FC_CFG_TBL_ENTRIES		15
+#define PPE_BM_PORT_FC_CFG_TBL_INC		0x10
+#define PPE_BM_PORT_FC_W0_REACT_LIMIT		GENMASK(8, 0)
+#define PPE_BM_PORT_FC_W0_RESUME_THRESHOLD	GENMASK(17, 9)
+#define PPE_BM_PORT_FC_W0_RESUME_OFFSET		GENMASK(28, 18)
+#define PPE_BM_PORT_FC_W0_CEILING_LOW		GENMASK(31, 29)
+#define PPE_BM_PORT_FC_W1_CEILING_HIGH		GENMASK(7, 0)
+#define PPE_BM_PORT_FC_W1_WEIGHT		GENMASK(10, 8)
+#define PPE_BM_PORT_FC_W1_DYNAMIC		BIT(11)
+#define PPE_BM_PORT_FC_W1_PRE_ALLOC		GENMASK(22, 12)
+
+#define PPE_BM_PORT_FC_SET_REACT_LIMIT(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_BM_PORT_FC_W0_REACT_LIMIT, tbl_cfg, value)
+#define PPE_BM_PORT_FC_SET_RESUME_THRESHOLD(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_BM_PORT_FC_W0_RESUME_THRESHOLD, tbl_cfg, value)
+#define PPE_BM_PORT_FC_SET_RESUME_OFFSET(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_BM_PORT_FC_W0_RESUME_OFFSET, tbl_cfg, value)
+#define PPE_BM_PORT_FC_SET_CEILING_LOW(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_BM_PORT_FC_W0_CEILING_LOW, tbl_cfg, value)
+#define PPE_BM_PORT_FC_SET_CEILING_HIGH(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_BM_PORT_FC_W1_CEILING_HIGH, (tbl_cfg) + 0x1, value)
+#define PPE_BM_PORT_FC_SET_WEIGHT(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_BM_PORT_FC_W1_WEIGHT, (tbl_cfg) + 0x1, value)
+#define PPE_BM_PORT_FC_SET_DYNAMIC(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_BM_PORT_FC_W1_DYNAMIC, (tbl_cfg) + 0x1, value)
+#define PPE_BM_PORT_FC_SET_PRE_ALLOC(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_BM_PORT_FC_W1_PRE_ALLOC, (tbl_cfg) + 0x1, value)
+
+/* The queue base configurations based on destination port,
+ * service code or CPU code.
+ */
+#define PPE_UCAST_QUEUE_MAP_TBL_ADDR		0x810000
+#define PPE_UCAST_QUEUE_MAP_TBL_ENTRIES		3072
+#define PPE_UCAST_QUEUE_MAP_TBL_INC		0x10
+#define PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID	GENMASK(3, 0)
+#define PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID	GENMASK(11, 4)
+
+/* The queue offset configurations based on the RSS hash value. */
+#define PPE_UCAST_HASH_MAP_TBL_ADDR		0x830000
+#define PPE_UCAST_HASH_MAP_TBL_ENTRIES		4096
+#define PPE_UCAST_HASH_MAP_TBL_INC		0x10
+#define PPE_UCAST_HASH_MAP_TBL_HASH		GENMASK(7, 0)
+
+/* The queue offset configurations based on PPE internal priority. */
+#define PPE_UCAST_PRIORITY_MAP_TBL_ADDR		0x842000
+#define PPE_UCAST_PRIORITY_MAP_TBL_ENTRIES	256
+#define PPE_UCAST_PRIORITY_MAP_TBL_INC		0x10
+#define PPE_UCAST_PRIORITY_MAP_TBL_CLASS	GENMASK(3, 0)
+
+/* PPE unicast queue (0-255) configurations. */
+#define PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR	0x848000
+#define PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES	256
+#define PPE_AC_UNICAST_QUEUE_CFG_TBL_INC	0x10
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_EN		BIT(0)
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_WRED_EN	BIT(1)
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_FC_EN	BIT(2)
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_CLR_AWARE	BIT(3)
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_GRP_ID	GENMASK(5, 4)
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_PRE_LIMIT	GENMASK(16, 6)
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_DYNAMIC	BIT(17)
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_WEIGHT	GENMASK(20, 18)
+#define PPE_AC_UNICAST_QUEUE_CFG_W0_THRESHOLD	GENMASK(31, 21)
+#define PPE_AC_UNICAST_QUEUE_CFG_W3_GRN_RESUME	GENMASK(23, 13)
+
+#define PPE_AC_UNICAST_QUEUE_SET_EN(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_EN, tbl_cfg, value)
+#define PPE_AC_UNICAST_QUEUE_SET_GRP_ID(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_GRP_ID, tbl_cfg, value)
+#define PPE_AC_UNICAST_QUEUE_SET_PRE_LIMIT(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_PRE_LIMIT, tbl_cfg, value)
+#define PPE_AC_UNICAST_QUEUE_SET_DYNAMIC(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_DYNAMIC, tbl_cfg, value)
+#define PPE_AC_UNICAST_QUEUE_SET_WEIGHT(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_WEIGHT, tbl_cfg, value)
+#define PPE_AC_UNICAST_QUEUE_SET_THRESHOLD(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W0_THRESHOLD, tbl_cfg, value)
+#define PPE_AC_UNICAST_QUEUE_SET_GRN_RESUME(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_UNICAST_QUEUE_CFG_W3_GRN_RESUME, (tbl_cfg) + 0x3, value)
+
+/* PPE multicast queue (256-299) configurations. */
+#define PPE_AC_MULTICAST_QUEUE_CFG_TBL_ADDR	0x84a000
+#define PPE_AC_MULTICAST_QUEUE_CFG_TBL_ENTRIES	44
+#define PPE_AC_MULTICAST_QUEUE_CFG_TBL_INC	0x10
+#define PPE_AC_MULTICAST_QUEUE_CFG_W0_EN	BIT(0)
+#define PPE_AC_MULTICAST_QUEUE_CFG_W0_FC_EN	BIT(1)
+#define PPE_AC_MULTICAST_QUEUE_CFG_W0_CLR_AWARE	BIT(2)
+#define PPE_AC_MULTICAST_QUEUE_CFG_W0_GRP_ID	GENMASK(4, 3)
+#define PPE_AC_MULTICAST_QUEUE_CFG_W0_PRE_LIMIT	GENMASK(15, 5)
+#define PPE_AC_MULTICAST_QUEUE_CFG_W0_THRESHOLD	GENMASK(26, 16)
+#define PPE_AC_MULTICAST_QUEUE_CFG_W2_RESUME	GENMASK(17, 7)
+
+#define PPE_AC_MULTICAST_QUEUE_SET_EN(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W0_EN, tbl_cfg, value)
+#define PPE_AC_MULTICAST_QUEUE_SET_GRN_GRP_ID(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W0_GRP_ID, tbl_cfg, value)
+#define PPE_AC_MULTICAST_QUEUE_SET_GRN_PRE_LIMIT(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W0_PRE_LIMIT, tbl_cfg, value)
+#define PPE_AC_MULTICAST_QUEUE_SET_GRN_THRESHOLD(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W0_THRESHOLD, tbl_cfg, value)
+#define PPE_AC_MULTICAST_QUEUE_SET_GRN_RESUME(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_MULTICAST_QUEUE_CFG_W2_RESUME, (tbl_cfg) + 0x2, value)
+
+/* PPE admission control group (0-3) configurations. */
+#define PPE_AC_GRP_CFG_TBL_ADDR			0x84c000
+#define PPE_AC_GRP_CFG_TBL_ENTRIES		0x4
+#define PPE_AC_GRP_CFG_TBL_INC			0x10
+#define PPE_AC_GRP_W0_AC_EN			BIT(0)
+#define PPE_AC_GRP_W0_AC_FC_EN			BIT(1)
+#define PPE_AC_GRP_W0_CLR_AWARE			BIT(2)
+#define PPE_AC_GRP_W0_THRESHOLD_LOW		GENMASK(31, 25)
+#define PPE_AC_GRP_W1_THRESHOLD_HIGH		GENMASK(3, 0)
+#define PPE_AC_GRP_W1_BUF_LIMIT			GENMASK(14, 4)
+#define PPE_AC_GRP_W2_RESUME_GRN		GENMASK(15, 5)
+#define PPE_AC_GRP_W2_PRE_ALLOC			GENMASK(26, 16)
+
+#define PPE_AC_GRP_SET_BUF_LIMIT(tbl_cfg, value)	\
+	FIELD_MODIFY(PPE_AC_GRP_W1_BUF_LIMIT, (tbl_cfg) + 0x1, value)
+
+/* Counters for packets handled by unicast queues (0-255). */
+#define PPE_AC_UNICAST_QUEUE_CNT_TBL_ADDR	0x84e000
+#define PPE_AC_UNICAST_QUEUE_CNT_TBL_ENTRIES	256
+#define PPE_AC_UNICAST_QUEUE_CNT_TBL_INC	0x10
+#define PPE_AC_UNICAST_QUEUE_CNT_TBL_PEND_CNT	GENMASK(12, 0)
+
+/* Counters for packets handled by multicast queues (256-299). */
+#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_ADDR	0x852000
+#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_ENTRIES	44
+#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_INC	0x10
+#define PPE_AC_MULTICAST_QUEUE_CNT_TBL_PEND_CNT	GENMASK(12, 0)
+
+/* Table addresses for the per-queue enqueue setting. */
+#define PPE_ENQ_OPR_TBL_ADDR			0x85c000
+#define PPE_ENQ_OPR_TBL_ENTRIES			300
+#define PPE_ENQ_OPR_TBL_INC			0x10
+#define PPE_ENQ_OPR_TBL_ENQ_DISABLE		BIT(0)
+
+/* The unicast drop count includes the possible drops with WRED for the
+ * green, yellow and red categories.
+ */
+#define PPE_UNICAST_DROP_CNT_TBL_ADDR		0x9e0000
+#define PPE_UNICAST_DROP_CNT_TBL_ENTRIES	1536
+#define PPE_UNICAST_DROP_CNT_TBL_INC		0x10
+#define PPE_UNICAST_DROP_TYPES			6
+#define PPE_UNICAST_DROP_FORCE_OFFSET		3
+
+/* There are 16 multicast queues dedicated to CPU port 0. The multicast drop
+ * count includes the force drop for green, yellow and red category packets.
+ */
+#define PPE_P0_MULTICAST_DROP_CNT_TBL_ADDR	0x9f0000
+#define PPE_P0_MULTICAST_DROP_CNT_TBL_ENTRIES	48
+#define PPE_P0_MULTICAST_DROP_CNT_TBL_INC	0x10
+#define PPE_P0_MULTICAST_QUEUE_NUM		16
+
+/* Each PPE physical port has four dedicated multicast queues, giving
+ * 12 drop-count entries per port (one per queue per color). The multicast
+ * drop count includes forced drops for green, yellow and red category
+ * packets.
+ */
+#define PPE_MULTICAST_QUEUE_PORT_ADDR_INC	0x1000
+#define PPE_MULTICAST_DROP_CNT_TBL_INC		0x10
+#define PPE_MULTICAST_DROP_TYPES		3
+#define PPE_MULTICAST_QUEUE_NUM		4
+#define PPE_MULTICAST_DROP_CNT_TBL_ENTRIES	12
+
+#define PPE_CPU_PORT_MULTICAST_FORCE_DROP_CNT_TBL_ADDR(mq_offset)	\
+	(PPE_P0_MULTICAST_DROP_CNT_TBL_ADDR +	\
+	 (mq_offset) * PPE_P0_MULTICAST_DROP_CNT_TBL_INC *	\
+	 PPE_MULTICAST_DROP_TYPES)
+
+#define PPE_P1_MULTICAST_DROP_CNT_TBL_ADDR	\
+	(PPE_P0_MULTICAST_DROP_CNT_TBL_ADDR + PPE_MULTICAST_QUEUE_PORT_ADDR_INC)
+#endif