Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'cpsw-switchdev'

Grygorii Strashko says:

====================
net: ethernet: ti: introduce new cpsw switchdev based driver

Thank you all for the review of v6.

There are no significant changes in this version; it just addresses review comments on v6.

--- v6
The major change in this version is the conversion of the DT bindings to
json-schema, plus fixes for other comments on v5. Also added a patch to clean
up the ALE on init and netif restart.

--- v5
The major part of the work done in this iteration is rebasing on top of
net-next with the XDP series from Ivan Khoronzhuk [3], and enabling XDP support
in the new CPSW switchdev driver (it was a little bit painful ;(). There are
mostly no functional changes in the new CPSW driver, just a few fixes, a sync
with the old driver and cleanups/optimizations. So, I've kept the rest of the
cover letter unchanged.

---
This series is originally based on work [1][2] done by
Ilias Apalodimas <ilias.apalodimas@linaro.org>.

This is RFC v5, which introduces a new CPSW switchdev based driver that
operates in dual-emac mode by default, thus working as 2 individual
network interfaces. Switch mode can be enabled by setting the devlink driver
parameter "switch_mode" to 1/true:
devlink dev param set platform/48484000.switch \
name switch_mode value 1 cmode runtime
This can be done regardless of the state of the ports' netdev devices (UP or
DOWN), but the ports' netdev devices have to be UP before joining the bridge,
to avoid overwriting the bridge configuration: the CPSW switch driver
completely reloads its configuration when the first port changes its state
to UP.
Once both interfaces have joined the bridge, the CPSW switch driver will start
marking packets with the offload_fwd_mark flag unless "ale_bypass=0".
All configuration is implemented via switchdev API.
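For reference, the ordering constraint above can be collected into one hedged
shell sketch. The devlink device path (platform/48484000.switch) and port
names (sw0p1/sw0p2) are taken from this cover letter; "run=echo" keeps it a
dry run so the sequence can be inspected on machines without CPSW hardware:

```shell
#!/bin/sh
# Dry-run sketch of the switch-mode bring-up order described above.
# Set run= (empty) to execute for real on a CPSW board.
run=${run:-echo}

cpsw_switch_bringup() {
    # 1) flip the devlink driver parameter first
    $run devlink dev param set platform/48484000.switch \
        name switch_mode value 1 cmode runtime
    $run ip link add name br0 type bridge
    # 2) ports must be UP *before* joining the bridge: the driver fully
    #    reloads its configuration when the first port goes UP, which
    #    would otherwise overwrite what the bridge has programmed
    for p in sw0p1 sw0p2; do
        $run ip link set dev "$p" up
    done
    # 3) only now enslave both ports
    for p in sw0p1 sw0p2; do
        $run ip link set dev "$p" master br0
    done
}

cpsw_switch_bringup
```

With run=echo the function only prints the commands, which makes the required
"UP before master" ordering easy to verify before running it on real hardware.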

The previous solution, tracking when both ports had joined the bridge
(from a netdevice_notifier), proved to be incorrect: changing the CPSW switch
driver mode requires a cleanup of the ALE table and CPSW settings, which would
happen while the second port was joining the bridge, so the configuration
already loaded by the bridge for the first port became corrupted.

The introduction of the new CPSW switchdev based driver (cpsw_new.c) is split
into two parts: Part 1 - basic dual-emac driver; Part 2 - switchdev support.
This approach has simplified code development and testing a lot. And, I hope,
it will help with review.

patches #1 - 5: preparation patches which also move common code to cpsw_priv.c
patches #6 - 9: Introduce TI CPSW switch driver based on switchdev and new
DT bindings
patch #10: new CPSW switchdev driver documentation
patch #11: adds DT nodes for the new CPSW switchdev driver to the DRA7 SoC
patch #12: adds DT nodes for new cpsw switchdev driver for am571x-idk board
patch #13: enables the build of the TI CPSW switchdev driver

Most of the contents of the previous cover letter have been moved into the
new driver documentation, so please refer to that for configuration,
testing and future work.

These patches can be found at (the branch contains some additional patches
required for testing on top of net-next):
https://github.com/grygoriyS/linux.git
branch: lkml-5.4-switch-tbd-v7

changes in v7:
- patch 2: added check for devm_kmalloc_array() return value
- patch 6: addressed review comments

changes in v6: https://lkml.org/lkml/2019/11/9/108
- DT bindings converted to json-schema
- netdev initialization is split into creation and registration.
The netdev registration now happens at the end of probe.
- reworked cpsw_set_pauseparam() to use PHYlib APIs.
- other comments on v5 addressed

v5: https://patchwork.kernel.org/cover/11208785/
- rebased on top of net-next with the XDP series from Ivan Khoronzhuk [3],
and enabled XDP support in the new CPSW switchdev driver
(tested XDP_DROP only)
- synced with the old cpsw driver
- implemented comments from Ivan Khoronzhuk and Rob Herring
- fixed a "NETDEV WATCHDOG: .." warning after interface UP/DOWN, caused by a
missed TX wake in cpsw_adjust_link()

v4: https://patchwork.kernel.org/cover/11010523/
- finished split of common CPSW code
- added devlink support
- changed the CPSW mode configuration approach: from netdevice_notifier to a
devlink parameter
- refactored and cleaned up the ALE changes, which allows modifying VLAN/MDB
entries
- added missing support for port QDISC_CBS and QDISC_MQPRIO
- the CPSW is split into two parts: a basic dual_mac driver and switchdev
support
- added the missing .ndo_get_port_parent_id() callback
- reworked ingress frame marking in switch mode (offload_fwd_mark)
- applied comments from Andrew Lunn

v3: https://lwn.net/Articles/786677/
Changes in v3:
- a lot of work done to properly split common code between the legacy and
switchdev CPSW drivers and to clean up the code
- CPSW switchdev interface updated to the current LKML switchdev interface
- the new CPSW switchdev based driver is actually introduced
- optimized dual_mac mode in the new driver. The main change is that in
promiscuous mode P0_UNI_FLOOD (both ports) is enabled in addition to ALLMULTI
(current port) instead of ALE_BYPASS. So, a port in non-promiscuous mode keeps
the possibility of mcast and vlan filtering.
- changed the bridge join sequence: switch mode is now enabled only when both
ports have joined the bridge. CPSW is switched back to dual_mac mode if any
port leaves the bridge. The ALE table is completely cleared and then refilled
while switching to switch mode - this simplifies the code a lot, but
introduces some limitations on the bridge setup sequence:
ip link add name br0 type bridge
ip link set dev br0 type bridge ageing_time 1000
ip link set dev br0 type bridge vlan_filtering 0 <- disable
echo 0 > /sys/class/net/br0/bridge/default_vlan

ip link set dev sw0p1 up <- add ports
ip link set dev sw0p2 up
ip link set dev sw0p1 master br0
ip link set dev sw0p2 master br0

echo 1 > /sys/class/net/br0/bridge/default_vlan <- enable
ip link set dev br0 type bridge vlan_filtering 1
bridge vlan add dev br0 vid 1 pvid untagged self
- STP tested with vlan_filtering 1/0. To make STP work I had to set
NO_SA_UPDATE for all slave ports (see the comment in the code). It also
required statically registering the STP mcast address {0x01, 0x80, 0xc2, 0x0, 0x0, 0x0};
- allowed building both the TI_CPSW and TI_CPSW_SWITCHDEV drivers
- PTP can be enabled on both ports in dual_mac mode

[1] https://patchwork.ozlabs.org/cover/929367/
[2] https://patches.linaro.org/cover/136709/
[3] https://patchwork.kernel.org/cover/11035813/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+4704 -1293
+240
Documentation/devicetree/bindings/net/ti,cpsw-switch.yaml
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/ti,cpsw-switch.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: TI SoC Ethernet Switch Controller (CPSW) Device Tree Bindings

maintainers:
  - Grygorii Strashko <grygorii.strashko@ti.com>
  - Sekhar Nori <nsekhar@ti.com>

description:
  The 3-port switch gigabit ethernet subsystem provides ethernet packet
  communication and can be configured as an ethernet switch. It provides the
  gigabit media independent interface (GMII), reduced gigabit media
  independent interface (RGMII), reduced media independent interface (RMII),
  and the management data input output (MDIO) for physical layer device (PHY)
  management.

properties:
  compatible:
    oneOf:
      - const: ti,cpsw-switch
      - items:
          - const: ti,am335x-cpsw-switch
          - const: ti,cpsw-switch
      - items:
          - const: ti,am4372-cpsw-switch
          - const: ti,cpsw-switch
      - items:
          - const: ti,dra7-cpsw-switch
          - const: ti,cpsw-switch

  reg:
    maxItems: 1
    description:
      The physical base address and size of the full CPSW module IO range

  ranges: true

  clocks:
    maxItems: 1
    description: CPSW functional clock

  clock-names:
    maxItems: 1
    items:
      - const: fck

  interrupts:
    items:
      - description: RX_THRESH interrupt
      - description: RX interrupt
      - description: TX interrupt
      - description: MISC interrupt

  interrupt-names:
    items:
      - const: "rx_thresh"
      - const: "rx"
      - const: "tx"
      - const: "misc"

  pinctrl-names: true

  syscon:
    $ref: /schemas/types.yaml#definitions/phandle
    description:
      Phandle to the system control device node which provides access to
      efuse IO range with MAC addresses

  ethernet-ports:
    type: object
    properties:
      '#address-cells':
        const: 1
      '#size-cells':
        const: 0

    patternProperties:
      "^port@[0-9]+$":
        type: object
        minItems: 1
        maxItems: 2
        description: CPSW external ports

        allOf:
          - $ref: ethernet-controller.yaml#

        properties:
          reg:
            maxItems: 1
            enum: [1, 2]
            description: CPSW port number

          phys:
            $ref: /schemas/types.yaml#definitions/phandle-array
            maxItems: 1
            description: phandle on phy-gmii-sel PHY

          label:
            $ref: /schemas/types.yaml#/definitions/string-array
            maxItems: 1
            description: label associated with this port

          ti,dual-emac-pvid:
            $ref: /schemas/types.yaml#/definitions/uint32
            maxItems: 1
            minimum: 1
            maximum: 1024
            description:
              Specifies default PORT VID to be used to segregate
              ports. Default value - CPSW port number.

        required:
          - reg
          - phys

  mdio:
    type: object
    allOf:
      - $ref: "ti,davinci-mdio.yaml#"
    description:
      CPSW MDIO bus.

  cpts:
    type: object
    description:
      The Common Platform Time Sync (CPTS) module

    properties:
      clocks:
        maxItems: 1
        description: CPTS reference clock

      clock-names:
        maxItems: 1
        items:
          - const: cpts

      cpts_clock_mult:
        $ref: /schemas/types.yaml#/definitions/uint32
        description:
          Numerator to convert input clock ticks into ns

      cpts_clock_shift:
        $ref: /schemas/types.yaml#/definitions/uint32
        description:
          Denominator to convert input clock ticks into ns.
          Mult and shift will be calculated basing on CPTS rftclk frequency if
          both cpts_clock_shift and cpts_clock_mult properties are not provided.

    required:
      - clocks
      - clock-names

required:
  - compatible
  - reg
  - ranges
  - clocks
  - clock-names
  - interrupts
  - interrupt-names
  - '#address-cells'
  - '#size-cells'

examples:
  - |
    #include <dt-bindings/interrupt-controller/irq.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/clock/dra7.h>

    mac_sw: switch@0 {
        compatible = "ti,dra7-cpsw-switch","ti,cpsw-switch";
        reg = <0x0 0x4000>;
        ranges = <0 0 0x4000>;
        clocks = <&gmac_main_clk>;
        clock-names = "fck";
        #address-cells = <1>;
        #size-cells = <1>;
        syscon = <&scm_conf>;
        pinctrl-names = "default", "sleep";

        interrupts = <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>;
        interrupt-names = "rx_thresh", "rx", "tx", "misc";

        ethernet-ports {
            #address-cells = <1>;
            #size-cells = <0>;

            cpsw_port1: port@1 {
                reg = <1>;
                label = "port1";
                mac-address = [ 00 00 00 00 00 00 ];
                phys = <&phy_gmii_sel 1>;
                phy-handle = <&ethphy0_sw>;
                phy-mode = "rgmii";
                ti,dual-emac-pvid = <1>;
            };

            cpsw_port2: port@2 {
                reg = <2>;
                label = "wan";
                mac-address = [ 00 00 00 00 00 00 ];
                phys = <&phy_gmii_sel 2>;
                phy-handle = <&ethphy1_sw>;
                phy-mode = "rgmii";
                ti,dual-emac-pvid = <2>;
            };
        };

        davinci_mdio_sw: mdio@1000 {
            compatible = "ti,cpsw-mdio","ti,davinci_mdio";
            reg = <0x1000 0x100>;
            clocks = <&gmac_clkctrl DRA7_GMAC_GMAC_CLKCTRL 0>;
            clock-names = "fck";
            #address-cells = <1>;
            #size-cells = <0>;
            bus_freq = <1000000>;

            ethphy0_sw: ethernet-phy@0 {
                reg = <0>;
            };

            ethphy1_sw: ethernet-phy@1 {
                reg = <1>;
            };
        };

        cpts {
            clocks = <&gmac_clkctrl DRA7_GMAC_GMAC_CLKCTRL 25>;
            clock-names = "cpts";
        };
    };
+209
Documentation/networking/device_drivers/ti/cpsw_switchdev.txt
* Texas Instruments CPSW switchdev based ethernet driver 2.0

- Port renaming
On older udev versions renaming of ethX to swXpY will not be automatically
supported.
In order to rename via udev:
	ip -d link show dev sw0p1 | grep switchid

	SUBSYSTEM=="net", ACTION=="add", ATTR{phys_switch_id}==<switchid>, \
		ATTR{phys_port_name}!="", NAME="sw0$attr{phys_port_name}"


====================
# Dual mac mode
====================
- The new (cpsw_new.c) driver is operating in dual-emac mode by default, thus
working as 2 individual network interfaces. Main differences from the legacy
CPSW driver are:
- optimized promiscuous mode: The P0_UNI_FLOOD (both ports) is enabled in
addition to ALLMULTI (current port) instead of ALE_BYPASS.
So, ports in promiscuous mode will keep the possibility of mcast and vlan
filtering, which provides significant benefits when ports are joined to the
same bridge without enabling "switch" mode, or to different bridges.
- learning disabled on ports as it does not make much sense for
segregated ports - no forwarding in HW.
- enabled basic support for devlink.

	devlink dev show
		platform/48484000.switch

	devlink dev param show
	platform/48484000.switch:
	name switch_mode type driver-specific
	values:
		cmode runtime value false
	name ale_bypass type driver-specific
	values:
		cmode runtime value false

Devlink configuration parameters
====================
See Documentation/networking/devlink-params-ti-cpsw-switch.txt

====================
# Bridging in dual mac mode
====================
The dual_mac mode requires two vids to be reserved for internal purposes,
which, by default, equal the CPSW port numbers. As a result, the bridge has to
be configured in vlan unaware mode or default_pvid has to be adjusted.

	ip link add name br0 type bridge
	ip link set dev br0 type bridge vlan_filtering 0
	echo 0 > /sys/class/net/br0/bridge/default_pvid
	ip link set dev sw0p1 master br0
	ip link set dev sw0p2 master br0
- or -
	ip link add name br0 type bridge
	ip link set dev br0 type bridge vlan_filtering 0
	echo 100 > /sys/class/net/br0/bridge/default_pvid
	ip link set dev br0 type bridge vlan_filtering 1
	ip link set dev sw0p1 master br0
	ip link set dev sw0p2 master br0

====================
# Enabling "switch"
====================
The Switch mode can be enabled by configuring the devlink driver parameter
"switch_mode" to 1/true:
	devlink dev param set platform/48484000.switch \
	name switch_mode value 1 cmode runtime

This can be done regardless of the state of the ports' netdev devices
(UP/DOWN), but the ports' netdev devices have to be UP before joining the
bridge to avoid overwriting the bridge configuration, as the CPSW switch
driver completely reloads its configuration when the first port changes its
state to UP.

When both interfaces have joined the bridge - the CPSW switch driver will
enable marking packets with the offload_fwd_mark flag unless "ale_bypass=0"

All configuration is implemented via switchdev API.

====================
# Bridge setup
====================
	devlink dev param set platform/48484000.switch \
	name switch_mode value 1 cmode runtime

	ip link add name br0 type bridge
	ip link set dev br0 type bridge ageing_time 1000
	ip link set dev sw0p1 up
	ip link set dev sw0p2 up
	ip link set dev sw0p1 master br0
	ip link set dev sw0p2 master br0
	[*] bridge vlan add dev br0 vid 1 pvid untagged self

	[*] if vlan_filtering=1. where default_pvid=1

	Note. Steps [*] are mandatory.

=================
# On/off STP
=================
	ip link set dev BRDEV type bridge stp_state 1/0

====================
# VLAN configuration
====================
	bridge vlan add dev br0 vid 1 pvid untagged self <---- add cpu port to VLAN 1

Note. This step is mandatory for bridge/default_pvid.

=================
# Add extra VLANs
=================
1. untagged:
	bridge vlan add dev sw0p1 vid 100 pvid untagged master
	bridge vlan add dev sw0p2 vid 100 pvid untagged master
	bridge vlan add dev br0 vid 100 pvid untagged self <---- Add cpu port to VLAN100

2. tagged:
	bridge vlan add dev sw0p1 vid 100 master
	bridge vlan add dev sw0p2 vid 100 master
	bridge vlan add dev br0 vid 100 pvid tagged self <---- Add cpu port to VLAN100

====
FDBs
====
FDBs are automatically added on the appropriate switch port upon detection

Manually adding FDBs:
	bridge fdb add aa:bb:cc:dd:ee:ff dev sw0p1 master vlan 100
	bridge fdb add aa:bb:cc:dd:ee:fe dev sw0p2 master <---- Add on all VLANs

====
MDBs
====
MDBs are automatically added on the appropriate switch port upon detection

Manually adding MDBs:
	bridge mdb add dev br0 port sw0p1 grp 239.1.1.1 permanent vid 100
	bridge mdb add dev br0 port sw0p1 grp 239.1.1.1 permanent <---- Add on all VLANs

==================
Multicast flooding
==================
CPU port mcast_flooding is always on

Turning flooding on/off on switch ports:
	bridge link set dev sw0p1 mcast_flood on/off

==================
Access and Trunk port
==================
	bridge vlan add dev sw0p1 vid 100 pvid untagged master
	bridge vlan add dev sw0p2 vid 100 master


	bridge vlan add dev br0 vid 100 self
	ip link add link br0 name br0.100 type vlan id 100

Note. Setting PVID on the bridge device itself works only for
the default VLAN (default_pvid).

=====================
NFS
=====================
The only way for NFS to work is by chrooting to a minimal environment when
switch configuration that will affect connectivity is needed.
Assuming you are booting NFS with the eth1 interface (the script is hacky and
it's just there to prove NFS is doable).

setup.sh:
	#!/bin/sh
	mkdir proc
	mount -t proc none /proc
	ifconfig br0 > /dev/null
	if [ $? -ne 0 ]; then
		echo "Setting up bridge"
		ip link add name br0 type bridge
		ip link set dev br0 type bridge ageing_time 1000
		ip link set dev br0 type bridge vlan_filtering 1

		ip link set eth1 down
		ip link set eth1 name sw0p1
		ip link set dev sw0p1 up
		ip link set dev sw0p2 up
		ip link set dev sw0p2 master br0
		ip link set dev sw0p1 master br0
		bridge vlan add dev br0 vid 1 pvid untagged self
		ifconfig sw0p1 0.0.0.0
		udhcpc -i br0
	fi
	umount /proc

run_nfs.sh:
	#!/bin/sh
	mkdir -p /tmp/root/bin
	mkdir -p /tmp/root/lib

	cp -r /lib/ /tmp/root/
	cp -r /bin/ /tmp/root/
	cp /sbin/ip /tmp/root/bin
	cp /sbin/bridge /tmp/root/bin
	cp /sbin/ifconfig /tmp/root/bin
	cp /sbin/udhcpc /tmp/root/bin
	cp /path/to/setup.sh /tmp/root/bin
	chroot /tmp/root/ busybox sh /bin/setup.sh

run ./run_nfs.sh
+27
arch/arm/boot/dts/am571x-idk.dts
 	pinctrl-1 = <&mmc2_pins_hs>;
 	pinctrl-2 = <&mmc2_pins_ddr_rev20 &mmc2_iodelay_ddr_conf>;
 };
+
+&mac_sw {
+	pinctrl-names = "default", "sleep";
+	status = "okay";
+};
+
+&cpsw_port1 {
+	phy-handle = <&ethphy0_sw>;
+	phy-mode = "rgmii";
+	ti,dual-emac-pvid = <1>;
+};
+
+&cpsw_port2 {
+	phy-handle = <&ethphy1_sw>;
+	phy-mode = "rgmii";
+	ti,dual-emac-pvid = <2>;
+};
+
+&davinci_mdio_sw {
+	ethphy0_sw: ethernet-phy@0 {
+		reg = <0>;
+	};
+
+	ethphy1_sw: ethernet-phy@1 {
+		reg = <1>;
+	};
+};
+5
arch/arm/boot/dts/am572x-idk.dts
 	pinctrl-1 = <&mmc2_pins_hs>;
 	pinctrl-2 = <&mmc2_pins_ddr_rev20>;
 };
+
+&mac {
+	status = "okay";
+	dual_emac;
+};
+5
arch/arm/boot/dts/am574x-idk.dts
 	pinctrl-1 = <&mmc2_pins_default>;
 	pinctrl-2 = <&mmc2_pins_default>;
 };
+
+&mac {
+	status = "okay";
+	dual_emac;
+};
-5
arch/arm/boot/dts/am57xx-idk-common.dtsi
 	ext-clk-src;
 };

-&mac {
-	status = "okay";
-	dual_emac;
-};
-
 &cpsw_emac0 {
 	phy-handle = <&ethphy0>;
 	phy-mode = "rgmii";
+52
arch/arm/boot/dts/dra7-l4.dtsi
 				phys = <&phy_gmii_sel 2>;
 			};
 		};
+
+		mac_sw: switch@0 {
+			compatible = "ti,dra7-cpsw-switch","ti,cpsw-switch";
+			reg = <0x0 0x4000>;
+			ranges = <0 0 0x4000>;
+			clocks = <&gmac_main_clk>;
+			clock-names = "fck";
+			#address-cells = <1>;
+			#size-cells = <1>;
+			syscon = <&scm_conf>;
+			status = "disabled";
+
+			interrupts = <GIC_SPI 334 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 335 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 336 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 337 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "rx_thresh", "rx", "tx", "misc";
+
+			ethernet-ports {
+				#address-cells = <1>;
+				#size-cells = <0>;
+
+				cpsw_port1: port@1 {
+					reg = <1>;
+					label = "port1";
+					mac-address = [ 00 00 00 00 00 00 ];
+					phys = <&phy_gmii_sel 1>;
+				};
+
+				cpsw_port2: port@2 {
+					reg = <2>;
+					label = "port2";
+					mac-address = [ 00 00 00 00 00 00 ];
+					phys = <&phy_gmii_sel 2>;
+				};
+			};
+
+			davinci_mdio_sw: mdio@1000 {
+				compatible = "ti,cpsw-mdio","ti,davinci_mdio";
+				clocks = <&gmac_main_clk>;
+				clock-names = "fck";
+				#address-cells = <1>;
+				#size-cells = <0>;
+				bus_freq = <1000000>;
+				reg = <0x1000 0x100>;
+			};
+
+			cpts {
+				clocks = <&gmac_clkctrl DRA7_GMAC_GMAC_CLKCTRL 25>;
+				clock-names = "cpts";
+			};
+		};
 	};
 };
 };
+1
arch/arm/configs/omap2plus_defconfig
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_SCHEDSTATS=y
 # CONFIG_DEBUG_BUGVERBOSE is not set
+CONFIG_TI_CPSW_SWITCHDEV=y
+17 -2
drivers/net/ethernet/ti/Kconfig
 	  To compile this driver as a module, choose M here: the module
 	  will be called cpsw.

+config TI_CPSW_SWITCHDEV
+	tristate "TI CPSW Switch Support with switchdev"
+	depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
+	select NET_SWITCHDEV
+	select TI_DAVINCI_MDIO
+	select MFD_SYSCON
+	select REGMAP
+	select NET_DEVLINK
+	imply PHY_TI_GMII_SEL
+	help
+	  This driver supports TI's CPSW Ethernet Switch.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called cpsw_new.
+
 config TI_CPTS
 	bool "TI Common Platform Time Sync (CPTS) Support"
-	depends on TI_CPSW || TI_KEYSTONE_NETCP || COMPILE_TEST
+	depends on TI_CPSW || TI_KEYSTONE_NETCP || TI_CPSW_SWITCHDEV || COMPILE_TEST
 	depends on COMMON_CLK
 	depends on POSIX_TIMERS
 	---help---
···
 config TI_CPTS_MOD
 	tristate
 	depends on TI_CPTS
-	default y if TI_CPSW=y || TI_KEYSTONE_NETCP=y
+	default y if TI_CPSW=y || TI_KEYSTONE_NETCP=y || TI_CPSW_SWITCHDEV=y
 	select NET_PTP_CLASSIFY
 	imply PTP_1588_CLOCK
 	default m
+2
drivers/net/ethernet/ti/Makefile
 obj-$(CONFIG_TI_CPTS_MOD) += cpts.o
 obj-$(CONFIG_TI_CPSW) += ti_cpsw.o
 ti_cpsw-y := cpsw.o davinci_cpdma.o cpsw_ale.o cpsw_priv.o cpsw_sl.o cpsw_ethtool.o
+obj-$(CONFIG_TI_CPSW_SWITCHDEV) += ti_cpsw_new.o
+ti_cpsw_new-y := cpsw_switchdev.o cpsw_new.o davinci_cpdma.o cpsw_ale.o cpsw_sl.o cpsw_priv.o cpsw_ethtool.o

 obj-$(CONFIG_TI_KEYSTONE_NETCP) += keystone_netcp.o
 keystone_netcp-y := netcp_core.o cpsw_ale.o
+17 -1263
drivers/net/ethernet/ti/cpsw.c
··· 34 34 #include <net/page_pool.h> 35 35 #include <linux/bpf.h> 36 36 #include <linux/bpf_trace.h> 37 - #include <linux/filter.h> 38 37 39 38 #include <linux/pinctrl/consumer.h> 40 39 #include <net/pkt_cls.h> ··· 63 64 module_param(descs_pool_size, int, 0444); 64 65 MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool"); 65 66 66 - /* The buf includes headroom compatible with both skb and xdpf */ 67 - #define CPSW_HEADROOM_NA (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN) 68 - #define CPSW_HEADROOM ALIGN(CPSW_HEADROOM_NA, sizeof(long)) 69 - 70 67 #define for_each_slave(priv, func, arg...) \ 71 68 do { \ 72 69 struct cpsw_slave *slave; \ ··· 77 82 (func)(slave++, ##arg); \ 78 83 } while (0) 79 84 80 - #define CPSW_XMETA_OFFSET ALIGN(sizeof(struct xdp_frame), sizeof(long)) 85 + static int cpsw_slave_index_priv(struct cpsw_common *cpsw, 86 + struct cpsw_priv *priv) 87 + { 88 + return cpsw->data.dual_emac ? priv->emac_port : cpsw->data.active_slave; 89 + } 81 90 82 - #define CPSW_XDP_CONSUMED 1 83 - #define CPSW_XDP_PASS 0 91 + static int cpsw_get_slave_port(u32 slave_num) 92 + { 93 + return slave_num + 1; 94 + } 84 95 85 96 static int cpsw_ndo_vlan_rx_add_vid(struct net_device *ndev, 86 97 __be16 proto, u16 vid); ··· 333 332 cpsw_del_mc_addr); 334 333 } 335 334 336 - void cpsw_intr_enable(struct cpsw_common *cpsw) 337 - { 338 - writel_relaxed(0xFF, &cpsw->wr_regs->tx_en); 339 - writel_relaxed(0xFF, &cpsw->wr_regs->rx_en); 340 - 341 - cpdma_ctlr_int_ctrl(cpsw->dma, true); 342 - return; 343 - } 344 - 345 - void cpsw_intr_disable(struct cpsw_common *cpsw) 346 - { 347 - writel_relaxed(0, &cpsw->wr_regs->tx_en); 348 - writel_relaxed(0, &cpsw->wr_regs->rx_en); 349 - 350 - cpdma_ctlr_int_ctrl(cpsw->dma, false); 351 - return; 352 - } 353 - 354 - static int cpsw_is_xdpf_handle(void *handle) 355 - { 356 - return (unsigned long)handle & BIT(0); 357 - } 358 - 359 - static void *cpsw_xdpf_to_handle(struct xdp_frame *xdpf) 360 - { 361 - return (void 
*)((unsigned long)xdpf | BIT(0)); 362 - } 363 - 364 - static struct xdp_frame *cpsw_handle_to_xdpf(void *handle) 365 - { 366 - return (struct xdp_frame *)((unsigned long)handle & ~BIT(0)); 367 - } 368 - 369 - struct __aligned(sizeof(long)) cpsw_meta_xdp { 370 - struct net_device *ndev; 371 - int ch; 372 - }; 373 - 374 - void cpsw_tx_handler(void *token, int len, int status) 375 - { 376 - struct cpsw_meta_xdp *xmeta; 377 - struct xdp_frame *xdpf; 378 - struct net_device *ndev; 379 - struct netdev_queue *txq; 380 - struct sk_buff *skb; 381 - int ch; 382 - 383 - if (cpsw_is_xdpf_handle(token)) { 384 - xdpf = cpsw_handle_to_xdpf(token); 385 - xmeta = (void *)xdpf + CPSW_XMETA_OFFSET; 386 - ndev = xmeta->ndev; 387 - ch = xmeta->ch; 388 - xdp_return_frame(xdpf); 389 - } else { 390 - skb = token; 391 - ndev = skb->dev; 392 - ch = skb_get_queue_mapping(skb); 393 - cpts_tx_timestamp(ndev_to_cpsw(ndev)->cpts, skb); 394 - dev_kfree_skb_any(skb); 395 - } 396 - 397 - /* Check whether the queue is stopped due to stalled tx dma, if the 398 - * queue is stopped then start the queue as we have free desc for tx 399 - */ 400 - txq = netdev_get_tx_queue(ndev, ch); 401 - if (unlikely(netif_tx_queue_stopped(txq))) 402 - netif_tx_wake_queue(txq); 403 - 404 - ndev->stats.tx_packets++; 405 - ndev->stats.tx_bytes += len; 406 - } 407 - 408 - static void cpsw_rx_vlan_encap(struct sk_buff *skb) 409 - { 410 - struct cpsw_priv *priv = netdev_priv(skb->dev); 411 - struct cpsw_common *cpsw = priv->cpsw; 412 - u32 rx_vlan_encap_hdr = *((u32 *)skb->data); 413 - u16 vtag, vid, prio, pkt_type; 414 - 415 - /* Remove VLAN header encapsulation word */ 416 - skb_pull(skb, CPSW_RX_VLAN_ENCAP_HDR_SIZE); 417 - 418 - pkt_type = (rx_vlan_encap_hdr >> 419 - CPSW_RX_VLAN_ENCAP_HDR_PKT_TYPE_SHIFT) & 420 - CPSW_RX_VLAN_ENCAP_HDR_PKT_TYPE_MSK; 421 - /* Ignore unknown & Priority-tagged packets*/ 422 - if (pkt_type == CPSW_RX_VLAN_ENCAP_HDR_PKT_RESERV || 423 - pkt_type == CPSW_RX_VLAN_ENCAP_HDR_PKT_PRIO_TAG) 424 - 
return; 425 - 426 - vid = (rx_vlan_encap_hdr >> 427 - CPSW_RX_VLAN_ENCAP_HDR_VID_SHIFT) & 428 - VLAN_VID_MASK; 429 - /* Ignore vid 0 and pass packet as is */ 430 - if (!vid) 431 - return; 432 - /* Ignore default vlans in dual mac mode */ 433 - if (cpsw->data.dual_emac && 434 - vid == cpsw->slaves[priv->emac_port].port_vlan) 435 - return; 436 - 437 - prio = (rx_vlan_encap_hdr >> 438 - CPSW_RX_VLAN_ENCAP_HDR_PRIO_SHIFT) & 439 - CPSW_RX_VLAN_ENCAP_HDR_PRIO_MSK; 440 - 441 - vtag = (prio << VLAN_PRIO_SHIFT) | vid; 442 - __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vtag); 443 - 444 - /* strip vlan tag for VLAN-tagged packet */ 445 - if (pkt_type == CPSW_RX_VLAN_ENCAP_HDR_PKT_VLAN_TAG) { 446 - memmove(skb->data + VLAN_HLEN, skb->data, 2 * ETH_ALEN); 447 - skb_pull(skb, VLAN_HLEN); 448 - } 449 - } 450 - 451 - static int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct xdp_frame *xdpf, 452 - struct page *page) 453 - { 454 - struct cpsw_common *cpsw = priv->cpsw; 455 - struct cpsw_meta_xdp *xmeta; 456 - struct cpdma_chan *txch; 457 - dma_addr_t dma; 458 - int ret, port; 459 - 460 - xmeta = (void *)xdpf + CPSW_XMETA_OFFSET; 461 - xmeta->ndev = priv->ndev; 462 - xmeta->ch = 0; 463 - txch = cpsw->txv[0].ch; 464 - 465 - port = priv->emac_port + cpsw->data.dual_emac; 466 - if (page) { 467 - dma = page_pool_get_dma_addr(page); 468 - dma += xdpf->headroom + sizeof(struct xdp_frame); 469 - ret = cpdma_chan_submit_mapped(txch, cpsw_xdpf_to_handle(xdpf), 470 - dma, xdpf->len, port); 471 - } else { 472 - if (sizeof(*xmeta) > xdpf->headroom) { 473 - xdp_return_frame_rx_napi(xdpf); 474 - return -EINVAL; 475 - } 476 - 477 - ret = cpdma_chan_submit(txch, cpsw_xdpf_to_handle(xdpf), 478 - xdpf->data, xdpf->len, port); 479 - } 480 - 481 - if (ret) { 482 - priv->ndev->stats.tx_dropped++; 483 - xdp_return_frame_rx_napi(xdpf); 484 - } 485 - 486 - return ret; 487 - } 488 - 489 - static int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp, 490 - struct page *page) 491 - { 492 - 
struct cpsw_common *cpsw = priv->cpsw; 493 - struct net_device *ndev = priv->ndev; 494 - int ret = CPSW_XDP_CONSUMED; 495 - struct xdp_frame *xdpf; 496 - struct bpf_prog *prog; 497 - u32 act; 498 - 499 - rcu_read_lock(); 500 - 501 - prog = READ_ONCE(priv->xdp_prog); 502 - if (!prog) { 503 - ret = CPSW_XDP_PASS; 504 - goto out; 505 - } 506 - 507 - act = bpf_prog_run_xdp(prog, xdp); 508 - switch (act) { 509 - case XDP_PASS: 510 - ret = CPSW_XDP_PASS; 511 - break; 512 - case XDP_TX: 513 - xdpf = convert_to_xdp_frame(xdp); 514 - if (unlikely(!xdpf)) 515 - goto drop; 516 - 517 - cpsw_xdp_tx_frame(priv, xdpf, page); 518 - break; 519 - case XDP_REDIRECT: 520 - if (xdp_do_redirect(ndev, xdp, prog)) 521 - goto drop; 522 - 523 - /* Have to flush here, per packet, instead of doing it in bulk 524 - * at the end of the napi handler. The RX devices on this 525 - * particular hardware is sharing a common queue, so the 526 - * incoming device might change per packet. 527 - */ 528 - xdp_do_flush_map(); 529 - break; 530 - default: 531 - bpf_warn_invalid_xdp_action(act); 532 - /* fall through */ 533 - case XDP_ABORTED: 534 - trace_xdp_exception(ndev, prog, act); 535 - /* fall through -- handle aborts by dropping packet */ 536 - case XDP_DROP: 537 - goto drop; 538 - } 539 - out: 540 - rcu_read_unlock(); 541 - return ret; 542 - drop: 543 - rcu_read_unlock(); 544 - page_pool_recycle_direct(cpsw->page_pool[ch], page); 545 - return ret; 546 - } 547 - 548 335 static unsigned int cpsw_rxbuf_total_len(unsigned int len) 549 336 { 550 337 len += CPSW_HEADROOM; 551 338 len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 552 339 553 340 return SKB_DATA_ALIGN(len); 554 - } 555 - 556 - static struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw, 557 - int size) 558 - { 559 - struct page_pool_params pp_params; 560 - struct page_pool *pool; 561 - 562 - pp_params.order = 0; 563 - pp_params.flags = PP_FLAG_DMA_MAP; 564 - pp_params.pool_size = size; 565 - pp_params.nid = NUMA_NO_NODE; 
-	pp_params.dma_dir = DMA_BIDIRECTIONAL;
-	pp_params.dev = cpsw->dev;
-
-	pool = page_pool_create(&pp_params);
-	if (IS_ERR(pool))
-		dev_err(cpsw->dev, "cannot create rx page pool\n");
-
-	return pool;
-}
-
-static int cpsw_ndev_create_xdp_rxq(struct cpsw_priv *priv, int ch)
-{
-	struct cpsw_common *cpsw = priv->cpsw;
-	struct xdp_rxq_info *rxq;
-	struct page_pool *pool;
-	int ret;
-
-	pool = cpsw->page_pool[ch];
-	rxq = &priv->xdp_rxq[ch];
-
-	ret = xdp_rxq_info_reg(rxq, priv->ndev, ch);
-	if (ret)
-		return ret;
-
-	ret = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, pool);
-	if (ret)
-		xdp_rxq_info_unreg(rxq);
-
-	return ret;
-}
-
-static void cpsw_ndev_destroy_xdp_rxq(struct cpsw_priv *priv, int ch)
-{
-	struct xdp_rxq_info *rxq = &priv->xdp_rxq[ch];
-
-	if (!xdp_rxq_info_is_reg(rxq))
-		return;
-
-	xdp_rxq_info_unreg(rxq);
-}
-
-static int cpsw_create_rx_pool(struct cpsw_common *cpsw, int ch)
-{
-	struct page_pool *pool;
-	int ret = 0, pool_size;
-
-	pool_size = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
-	pool = cpsw_create_page_pool(cpsw, pool_size);
-	if (IS_ERR(pool))
-		ret = PTR_ERR(pool);
-	else
-		cpsw->page_pool[ch] = pool;
-
-	return ret;
-}
-
-void cpsw_destroy_xdp_rxqs(struct cpsw_common *cpsw)
-{
-	struct net_device *ndev;
-	int i, ch;
-
-	for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
-		for (i = 0; i < cpsw->data.slaves; i++) {
-			ndev = cpsw->slaves[i].ndev;
-			if (!ndev)
-				continue;
-
-			cpsw_ndev_destroy_xdp_rxq(netdev_priv(ndev), ch);
-		}
-
-		page_pool_destroy(cpsw->page_pool[ch]);
-		cpsw->page_pool[ch] = NULL;
-	}
-}
-
-int cpsw_create_xdp_rxqs(struct cpsw_common *cpsw)
-{
-	struct net_device *ndev;
-	int i, ch, ret;
-
-	for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
-		ret = cpsw_create_rx_pool(cpsw, ch);
-		if (ret)
-			goto err_cleanup;
-
-		/* using same page pool is allowed as no running rx handlers
-		 * simultaneously for both ndevs
-		 */
-		for (i = 0; i < cpsw->data.slaves; i++) {
-			ndev = cpsw->slaves[i].ndev;
-			if (!ndev)
-				continue;
-
-			ret = cpsw_ndev_create_xdp_rxq(netdev_priv(ndev), ch);
-			if (ret)
-				goto err_cleanup;
-		}
-	}
-
-	return 0;
-
-err_cleanup:
-	cpsw_destroy_xdp_rxqs(cpsw);
-
-	return ret;
 }
 
 static void cpsw_rx_handler(void *token, int len, int status)
···
 	xdp.data_hard_start = pa;
 	xdp.rxq = &priv->xdp_rxq[ch];
 
-	ret = cpsw_run_xdp(priv, ch, &xdp, page);
+	port = priv->emac_port + cpsw->data.dual_emac;
+	ret = cpsw_run_xdp(priv, ch, &xdp, page, port);
 	if (ret != CPSW_XDP_PASS)
 		goto requeue;
···
 	WARN_ON(ret == -ENOMEM);
 	page_pool_recycle_direct(pool, new_page);
 	}
-}
-
-void cpsw_split_res(struct cpsw_common *cpsw)
-{
-	u32 consumed_rate = 0, bigest_rate = 0;
-	struct cpsw_vector *txv = cpsw->txv;
-	int i, ch_weight, rlim_ch_num = 0;
-	int budget, bigest_rate_ch = 0;
-	u32 ch_rate, max_rate;
-	int ch_budget = 0;
-
-	for (i = 0; i < cpsw->tx_ch_num; i++) {
-		ch_rate = cpdma_chan_get_rate(txv[i].ch);
-		if (!ch_rate)
-			continue;
-
-		rlim_ch_num++;
-		consumed_rate += ch_rate;
-	}
-
-	if (cpsw->tx_ch_num == rlim_ch_num) {
-		max_rate = consumed_rate;
-	} else if (!rlim_ch_num) {
-		ch_budget = CPSW_POLL_WEIGHT / cpsw->tx_ch_num;
-		bigest_rate = 0;
-		max_rate = consumed_rate;
-	} else {
-		max_rate = cpsw->speed * 1000;
-
-		/* if max_rate is less then expected due to reduced link speed,
-		 * split proportionally according next potential max speed
-		 */
-		if (max_rate < consumed_rate)
-			max_rate *= 10;
-
-		if (max_rate < consumed_rate)
-			max_rate *= 10;
-
-		ch_budget = (consumed_rate * CPSW_POLL_WEIGHT) / max_rate;
-		ch_budget = (CPSW_POLL_WEIGHT - ch_budget) /
-			    (cpsw->tx_ch_num - rlim_ch_num);
-		bigest_rate = (max_rate - consumed_rate) /
-			      (cpsw->tx_ch_num - rlim_ch_num);
-	}
-
-	/* split tx weight/budget */
-	budget = CPSW_POLL_WEIGHT;
-	for (i = 0; i < cpsw->tx_ch_num; i++) {
-		ch_rate = cpdma_chan_get_rate(txv[i].ch);
-		if (ch_rate) {
-			txv[i].budget = (ch_rate * CPSW_POLL_WEIGHT) / max_rate;
-			if (!txv[i].budget)
-				txv[i].budget++;
-			if (ch_rate > bigest_rate) {
-				bigest_rate_ch = i;
-				bigest_rate = ch_rate;
-			}
-
-			ch_weight = (ch_rate * 100) / max_rate;
-			if (!ch_weight)
-				ch_weight++;
-			cpdma_chan_set_weight(cpsw->txv[i].ch, ch_weight);
-		} else {
-			txv[i].budget = ch_budget;
-			if (!bigest_rate_ch)
-				bigest_rate_ch = i;
-			cpdma_chan_set_weight(cpsw->txv[i].ch, 0);
-		}
-
-		budget -= txv[i].budget;
-	}
-
-	if (budget)
-		txv[bigest_rate_ch].budget += budget;
-
-	/* split rx budget */
-	budget = CPSW_POLL_WEIGHT;
-	ch_budget = budget / cpsw->rx_ch_num;
-	for (i = 0; i < cpsw->rx_ch_num; i++) {
-		cpsw->rxv[i].budget = ch_budget;
-		budget -= ch_budget;
-	}
-
-	if (budget)
-		cpsw->rxv[0].budget += budget;
-}
-
-static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
-{
-	struct cpsw_common *cpsw = dev_id;
-
-	writel(0, &cpsw->wr_regs->tx_en);
-	cpdma_ctlr_eoi(cpsw->dma, CPDMA_EOI_TX);
-
-	if (cpsw->quirk_irq) {
-		disable_irq_nosync(cpsw->irqs_table[1]);
-		cpsw->tx_irq_disabled = true;
-	}
-
-	napi_schedule(&cpsw->napi_tx);
-	return IRQ_HANDLED;
-}
-
-static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
-{
-	struct cpsw_common *cpsw = dev_id;
-
-	cpdma_ctlr_eoi(cpsw->dma, CPDMA_EOI_RX);
-	writel(0, &cpsw->wr_regs->rx_en);
-
-	if (cpsw->quirk_irq) {
-		disable_irq_nosync(cpsw->irqs_table[0]);
-		cpsw->rx_irq_disabled = true;
-	}
-
-	napi_schedule(&cpsw->napi_rx);
-	return IRQ_HANDLED;
-}
-
-static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget)
-{
-	u32 ch_map;
-	int num_tx, cur_budget, ch;
-	struct cpsw_common *cpsw = napi_to_cpsw(napi_tx);
-	struct cpsw_vector *txv;
-
-	/* process every unprocessed channel */
-	ch_map = cpdma_ctrl_txchs_state(cpsw->dma);
-	for (ch = 0, num_tx = 0; ch_map & 0xff; ch_map <<= 1, ch++) {
-		if (!(ch_map & 0x80))
-			continue;
-
-		txv = &cpsw->txv[ch];
-		if (unlikely(txv->budget > budget - num_tx))
-			cur_budget = budget - num_tx;
-		else
-			cur_budget = txv->budget;
-
-		num_tx += cpdma_chan_process(txv->ch, cur_budget);
-		if (num_tx >= budget)
-			break;
-	}
-
-	if (num_tx < budget) {
-		napi_complete(napi_tx);
-		writel(0xff, &cpsw->wr_regs->tx_en);
-	}
-
-	return num_tx;
-}
-
-static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
-{
-	struct cpsw_common *cpsw = napi_to_cpsw(napi_tx);
-	int num_tx;
-
-	num_tx = cpdma_chan_process(cpsw->txv[0].ch, budget);
-	if (num_tx < budget) {
-		napi_complete(napi_tx);
-		writel(0xff, &cpsw->wr_regs->tx_en);
-		if (cpsw->tx_irq_disabled) {
-			cpsw->tx_irq_disabled = false;
-			enable_irq(cpsw->irqs_table[1]);
-		}
-	}
-
-	return num_tx;
-}
-
-static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
-{
-	u32 ch_map;
-	int num_rx, cur_budget, ch;
-	struct cpsw_common *cpsw = napi_to_cpsw(napi_rx);
-	struct cpsw_vector *rxv;
-
-	/* process every unprocessed channel */
-	ch_map = cpdma_ctrl_rxchs_state(cpsw->dma);
-	for (ch = 0, num_rx = 0; ch_map; ch_map >>= 1, ch++) {
-		if (!(ch_map & 0x01))
-			continue;
-
-		rxv = &cpsw->rxv[ch];
-		if (unlikely(rxv->budget > budget - num_rx))
-			cur_budget = budget - num_rx;
-		else
-			cur_budget = rxv->budget;
-
-		num_rx += cpdma_chan_process(rxv->ch, cur_budget);
-		if (num_rx >= budget)
-			break;
-	}
-
-	if (num_rx < budget) {
-		napi_complete_done(napi_rx, num_rx);
-		writel(0xff, &cpsw->wr_regs->rx_en);
-	}
-
-	return num_rx;
-}
-
-static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
-{
-	struct cpsw_common *cpsw = napi_to_cpsw(napi_rx);
-	int num_rx;
-
-	num_rx = cpdma_chan_process(cpsw->rxv[0].ch, budget);
-	if (num_rx < budget) {
-		napi_complete_done(napi_rx, num_rx);
-		writel(0xff, &cpsw->wr_regs->rx_en);
-		if (cpsw->rx_irq_disabled) {
-			cpsw->rx_irq_disabled = false;
-			enable_irq(cpsw->irqs_table[0]);
-		}
-	}
-
-	return num_rx;
-}
-
-static inline void soft_reset(const char *module, void __iomem *reg)
-{
-	unsigned long timeout = jiffies + HZ;
-
-	writel_relaxed(1, reg);
-	do {
-		cpu_relax();
-	} while ((readl_relaxed(reg) & 1) && time_after(timeout, jiffies));
-
-	WARN(readl_relaxed(reg) & 1, "failed to soft-reset %s\n", module);
-}
-
-static void cpsw_set_slave_mac(struct cpsw_slave *slave,
-			       struct cpsw_priv *priv)
-{
-	slave_write(slave, mac_hi(priv->mac_addr), SA_HI);
-	slave_write(slave, mac_lo(priv->mac_addr), SA_LO);
-}
-
-static bool cpsw_shp_is_off(struct cpsw_priv *priv)
-{
-	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
-	u32 shift, mask, val;
-
-	val = readl_relaxed(&cpsw->regs->ptype);
-
-	slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
-	shift = CPSW_FIFO_SHAPE_EN_SHIFT + 3 * slave->slave_num;
-	mask = 7 << shift;
-	val = val & mask;
-
-	return !val;
-}
-
-static void cpsw_fifo_shp_on(struct cpsw_priv *priv, int fifo, int on)
-{
-	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
-	u32 shift, mask, val;
-
-	val = readl_relaxed(&cpsw->regs->ptype);
-
-	slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
-	shift = CPSW_FIFO_SHAPE_EN_SHIFT + 3 * slave->slave_num;
-	mask = (1 << --fifo) << shift;
-	val = on ? val | mask : val & ~mask;
-
-	writel_relaxed(val, &cpsw->regs->ptype);
 }
 
 static void _cpsw_adjust_link(struct cpsw_slave *slave,
···
 	phy_print_status(phy);
 
 	slave->mac_control = mac_control;
-}
-
-static int cpsw_get_common_speed(struct cpsw_common *cpsw)
-{
-	int i, speed;
-
-	for (i = 0, speed = 0; i < cpsw->data.slaves; i++)
-		if (cpsw->slaves[i].phy && cpsw->slaves[i].phy->link)
-			speed += cpsw->slaves[i].phy->speed;
-
-	return speed;
-}
-
-static int cpsw_need_resplit(struct cpsw_common *cpsw)
-{
-	int i, rlim_ch_num;
-	int speed, ch_rate;
-
-	/* re-split resources only in case speed was changed */
-	speed = cpsw_get_common_speed(cpsw);
-	if (speed == cpsw->speed || !speed)
-		return 0;
-
-	cpsw->speed = speed;
-
-	for (i = 0, rlim_ch_num = 0; i < cpsw->tx_ch_num; i++) {
-		ch_rate = cpdma_chan_get_rate(cpsw->txv[i].ch);
-		if (!ch_rate)
-			break;
-
-		rlim_ch_num++;
-	}
-
-	/* cases not dependent on speed */
-	if (!rlim_ch_num || rlim_ch_num == cpsw->tx_ch_num)
-		return 0;
-
-	return 1;
 }
 
 static void cpsw_adjust_link(struct net_device *ndev)
···
 	}
 }
 
-int cpsw_fill_rx_channels(struct cpsw_priv *priv)
-{
-	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_meta_xdp *xmeta;
-	struct page_pool *pool;
-	struct page *page;
-	int ch_buf_num;
-	int ch, i, ret;
-	dma_addr_t dma;
-
-	for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
-		pool = cpsw->page_pool[ch];
-		ch_buf_num = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
-		for (i = 0; i < ch_buf_num; i++) {
-			page = page_pool_dev_alloc_pages(pool);
-			if (!page) {
-				cpsw_err(priv, ifup, "allocate rx page err\n");
-				return -ENOMEM;
-			}
-
-			xmeta = page_address(page) + CPSW_XMETA_OFFSET;
-			xmeta->ndev = priv->ndev;
-			xmeta->ch = ch;
-
-			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
-			ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch,
-							    page, dma,
-							    cpsw->rx_packet_max,
-							    0);
-			if (ret < 0) {
-				cpsw_err(priv, ifup,
-					 "cannot submit page to channel %d rx, error %d\n",
-					 ch, ret);
-				page_pool_recycle_direct(pool, page);
-				return ret;
-			}
-		}
-
-		cpsw_info(priv, ifup, "ch %d rx, submitted %d descriptors\n",
-			  ch, ch_buf_num);
-	}
-
-	return 0;
-}
-
 static void cpsw_slave_stop(struct cpsw_slave *slave, struct cpsw_common *cpsw)
 {
 	u32 slave_port;
···
 			   ALE_PORT_STATE, ALE_PORT_STATE_DISABLE);
 	cpsw_sl_reset(slave->mac_sl, 100);
 	cpsw_sl_ctl_reset(slave->mac_sl);
-}
-
-static int cpsw_tc_to_fifo(int tc, int num_tc)
-{
-	if (tc == num_tc - 1)
-		return 0;
-
-	return CPSW_FIFO_SHAPERS_NUM - tc;
-}
-
-static int cpsw_set_fifo_bw(struct cpsw_priv *priv, int fifo, int bw)
-{
-	struct cpsw_common *cpsw = priv->cpsw;
-	u32 val = 0, send_pct, shift;
-	struct cpsw_slave *slave;
-	int pct = 0, i;
-
-	if (bw > priv->shp_cfg_speed * 1000)
-		goto err;
-
-	/* shaping has to stay enabled for highest fifos linearly
-	 * and fifo bw no more then interface can allow
-	 */
-	slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
-	send_pct = slave_read(slave, SEND_PERCENT);
-	for (i = CPSW_FIFO_SHAPERS_NUM; i > 0; i--) {
-		if (!bw) {
-			if (i >= fifo || !priv->fifo_bw[i])
-				continue;
-
-			dev_warn(priv->dev, "Prev FIFO%d is shaped", i);
-			continue;
-		}
-
-		if (!priv->fifo_bw[i] && i > fifo) {
-			dev_err(priv->dev, "Upper FIFO%d is not shaped", i);
-			return -EINVAL;
-		}
-
-		shift = (i - 1) * 8;
-		if (i == fifo) {
-			send_pct &= ~(CPSW_PCT_MASK << shift);
-			val = DIV_ROUND_UP(bw, priv->shp_cfg_speed * 10);
-			if (!val)
-				val = 1;
-
-			send_pct |= val << shift;
-			pct += val;
-			continue;
-		}
-
-		if (priv->fifo_bw[i])
-			pct += (send_pct >> shift) & CPSW_PCT_MASK;
-	}
-
-	if (pct >= 100)
-		goto err;
-
-	slave_write(slave, send_pct, SEND_PERCENT);
-	priv->fifo_bw[fifo] = bw;
-
-	dev_warn(priv->dev, "set FIFO%d bw = %d\n", fifo,
-		 DIV_ROUND_CLOSEST(val * priv->shp_cfg_speed, 100));
-
-	return 0;
-err:
-	dev_err(priv->dev, "Bandwidth doesn't fit in tc configuration");
-	return -EINVAL;
-}
-
-static int cpsw_set_fifo_rlimit(struct cpsw_priv *priv, int fifo, int bw)
-{
-	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
-	u32 tx_in_ctl_rg, val;
-	int ret;
-
-	ret = cpsw_set_fifo_bw(priv, fifo, bw);
-	if (ret)
-		return ret;
-
-	slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
-	tx_in_ctl_rg = cpsw->version == CPSW_VERSION_1 ?
-		       CPSW1_TX_IN_CTL : CPSW2_TX_IN_CTL;
-
-	if (!bw)
-		cpsw_fifo_shp_on(priv, fifo, bw);
-
-	val = slave_read(slave, tx_in_ctl_rg);
-	if (cpsw_shp_is_off(priv)) {
-		/* disable FIFOs rate limited queues */
-		val &= ~(0xf << CPSW_FIFO_RATE_EN_SHIFT);
-
-		/* set type of FIFO queues to normal priority mode */
-		val &= ~(3 << CPSW_FIFO_QUEUE_TYPE_SHIFT);
-
-		/* set type of FIFO queues to be rate limited */
-		if (bw)
-			val |= 2 << CPSW_FIFO_QUEUE_TYPE_SHIFT;
-		else
-			priv->shp_cfg_speed = 0;
-	}
-
-	/* toggle a FIFO rate limited queue */
-	if (bw)
-		val |= BIT(fifo + CPSW_FIFO_RATE_EN_SHIFT);
-	else
-		val &= ~BIT(fifo + CPSW_FIFO_RATE_EN_SHIFT);
-	slave_write(slave, val, tx_in_ctl_rg);
-
-	/* FIFO transmit shape enable */
-	cpsw_fifo_shp_on(priv, fifo, bw);
-	return 0;
-}
-
-/* Defaults:
- * class A - prio 3
- * class B - prio 2
- * shaping for class A should be set first
- */
-static int cpsw_set_cbs(struct net_device *ndev,
-			struct tc_cbs_qopt_offload *qopt)
-{
-	struct cpsw_priv *priv = netdev_priv(ndev);
-	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
-	int prev_speed = 0;
-	int tc, ret, fifo;
-	u32 bw = 0;
-
-	tc = netdev_txq_to_tc(priv->ndev, qopt->queue);
-
-	/* enable channels in backward order, as highest FIFOs must be rate
-	 * limited first and for compliance with CPDMA rate limited channels
-	 * that also used in bacward order. FIFO0 cannot be rate limited.
-	 */
-	fifo = cpsw_tc_to_fifo(tc, ndev->num_tc);
-	if (!fifo) {
-		dev_err(priv->dev, "Last tc%d can't be rate limited", tc);
-		return -EINVAL;
-	}
-
-	/* do nothing, it's disabled anyway */
-	if (!qopt->enable && !priv->fifo_bw[fifo])
-		return 0;
-
-	/* shapers can be set if link speed is known */
-	slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
-	if (slave->phy && slave->phy->link) {
-		if (priv->shp_cfg_speed &&
-		    priv->shp_cfg_speed != slave->phy->speed)
-			prev_speed = priv->shp_cfg_speed;
-
-		priv->shp_cfg_speed = slave->phy->speed;
-	}
-
-	if (!priv->shp_cfg_speed) {
-		dev_err(priv->dev, "Link speed is not known");
-		return -1;
-	}
-
-	ret = pm_runtime_get_sync(cpsw->dev);
-	if (ret < 0) {
-		pm_runtime_put_noidle(cpsw->dev);
-		return ret;
-	}
-
-	bw = qopt->enable ? qopt->idleslope : 0;
-	ret = cpsw_set_fifo_rlimit(priv, fifo, bw);
-	if (ret) {
-		priv->shp_cfg_speed = prev_speed;
-		prev_speed = 0;
-	}
-
-	if (bw && prev_speed)
-		dev_warn(priv->dev,
-			 "Speed was changed, CBS shaper speeds are changed!");
-
-	pm_runtime_put_sync(cpsw->dev);
-	return ret;
-}
-
-static void cpsw_cbs_resume(struct cpsw_slave *slave, struct cpsw_priv *priv)
-{
-	int fifo, bw;
-
-	for (fifo = CPSW_FIFO_SHAPERS_NUM; fifo > 0; fifo--) {
-		bw = priv->fifo_bw[fifo];
-		if (!bw)
-			continue;
-
-		cpsw_set_fifo_rlimit(priv, fifo, bw);
-	}
-}
-
-static void cpsw_mqprio_resume(struct cpsw_slave *slave, struct cpsw_priv *priv)
-{
-	struct cpsw_common *cpsw = priv->cpsw;
-	u32 tx_prio_map = 0;
-	int i, tc, fifo;
-	u32 tx_prio_rg;
-
-	if (!priv->mqprio_hw)
-		return;
-
-	for (i = 0; i < 8; i++) {
-		tc = netdev_get_prio_tc_map(priv->ndev, i);
-		fifo = CPSW_FIFO_SHAPERS_NUM - tc;
-		tx_prio_map |= fifo << (4 * i);
-	}
-
-	tx_prio_rg = cpsw->version == CPSW_VERSION_1 ?
-		     CPSW1_TX_PRI_MAP : CPSW2_TX_PRI_MAP;
-
-	slave_write(slave, tx_prio_map, tx_prio_rg);
 }
 
 static int cpsw_restore_vlans(struct net_device *vdev, int vid, void *arg)
···
 		return NETDEV_TX_BUSY;
 }
 
-#if IS_ENABLED(CONFIG_TI_CPTS)
-
-static void cpsw_hwtstamp_v1(struct cpsw_priv *priv)
-{
-	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave = &cpsw->slaves[cpsw->data.active_slave];
-	u32 ts_en, seq_id;
-
-	if (!priv->tx_ts_enabled && !priv->rx_ts_enabled) {
-		slave_write(slave, 0, CPSW1_TS_CTL);
-		return;
-	}
-
-	seq_id = (30 << CPSW_V1_SEQ_ID_OFS_SHIFT) | ETH_P_1588;
-	ts_en = EVENT_MSG_BITS << CPSW_V1_MSG_TYPE_OFS;
-
-	if (priv->tx_ts_enabled)
-		ts_en |= CPSW_V1_TS_TX_EN;
-
-	if (priv->rx_ts_enabled)
-		ts_en |= CPSW_V1_TS_RX_EN;
-
-	slave_write(slave, ts_en, CPSW1_TS_CTL);
-	slave_write(slave, seq_id, CPSW1_TS_SEQ_LTYPE);
-}
-
-static void cpsw_hwtstamp_v2(struct cpsw_priv *priv)
-{
-	struct cpsw_slave *slave;
-	struct cpsw_common *cpsw = priv->cpsw;
-	u32 ctrl, mtype;
-
-	slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
-
-	ctrl = slave_read(slave, CPSW2_CONTROL);
-	switch (cpsw->version) {
-	case CPSW_VERSION_2:
-		ctrl &= ~CTRL_V2_ALL_TS_MASK;
-
-		if (priv->tx_ts_enabled)
-			ctrl |= CTRL_V2_TX_TS_BITS;
-
-		if (priv->rx_ts_enabled)
-			ctrl |= CTRL_V2_RX_TS_BITS;
-		break;
-	case CPSW_VERSION_3:
-	default:
-		ctrl &= ~CTRL_V3_ALL_TS_MASK;
-
-		if (priv->tx_ts_enabled)
-			ctrl |= CTRL_V3_TX_TS_BITS;
-
-		if (priv->rx_ts_enabled)
-			ctrl |= CTRL_V3_RX_TS_BITS;
-		break;
-	}
-
-	mtype = (30 << TS_SEQ_ID_OFFSET_SHIFT) | EVENT_MSG_BITS;
-
-	slave_write(slave, mtype, CPSW2_TS_SEQ_MTYPE);
-	slave_write(slave, ctrl, CPSW2_CONTROL);
-	writel_relaxed(ETH_P_1588, &cpsw->regs->ts_ltype);
-	writel_relaxed(ETH_P_8021Q, &cpsw->regs->vlan_ltype);
-}
-
-static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
-{
-	struct cpsw_priv *priv = netdev_priv(dev);
-	struct hwtstamp_config cfg;
-	struct cpsw_common *cpsw = priv->cpsw;
-
-	if (cpsw->version != CPSW_VERSION_1 &&
-	    cpsw->version != CPSW_VERSION_2 &&
-	    cpsw->version != CPSW_VERSION_3)
-		return -EOPNOTSUPP;
-
-	if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
-		return -EFAULT;
-
-	/* reserved for future extensions */
-	if (cfg.flags)
-		return -EINVAL;
-
-	if (cfg.tx_type != HWTSTAMP_TX_OFF && cfg.tx_type != HWTSTAMP_TX_ON)
-		return -ERANGE;
-
-	switch (cfg.rx_filter) {
-	case HWTSTAMP_FILTER_NONE:
-		priv->rx_ts_enabled = 0;
-		break;
-	case HWTSTAMP_FILTER_ALL:
-	case HWTSTAMP_FILTER_NTP_ALL:
-		return -ERANGE;
-	case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
-	case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
-	case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
-		priv->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
-		cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT;
-		break;
-	case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
-	case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
-	case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
-	case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
-	case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
-	case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
-	case HWTSTAMP_FILTER_PTP_V2_EVENT:
-	case HWTSTAMP_FILTER_PTP_V2_SYNC:
-	case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
-		priv->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V2_EVENT;
-		cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
-		break;
-	default:
-		return -ERANGE;
-	}
-
-	priv->tx_ts_enabled = cfg.tx_type == HWTSTAMP_TX_ON;
-
-	switch (cpsw->version) {
-	case CPSW_VERSION_1:
-		cpsw_hwtstamp_v1(priv);
-		break;
-	case CPSW_VERSION_2:
-	case CPSW_VERSION_3:
-		cpsw_hwtstamp_v2(priv);
-		break;
-	default:
-		WARN_ON(1);
-	}
-
-	return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
-}
-
-static int cpsw_hwtstamp_get(struct net_device *dev, struct ifreq *ifr)
-{
-	struct cpsw_common *cpsw = ndev_to_cpsw(dev);
-	struct cpsw_priv *priv = netdev_priv(dev);
-	struct hwtstamp_config cfg;
-
-	if (cpsw->version != CPSW_VERSION_1 &&
-	    cpsw->version != CPSW_VERSION_2 &&
-	    cpsw->version != CPSW_VERSION_3)
-		return -EOPNOTSUPP;
-
-	cfg.flags = 0;
-	cfg.tx_type = priv->tx_ts_enabled ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF;
-	cfg.rx_filter = priv->rx_ts_enabled;
-
-	return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
-}
-#else
-static int cpsw_hwtstamp_get(struct net_device *dev, struct ifreq *ifr)
-{
-	return -EOPNOTSUPP;
-}
-
-static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
-{
-	return -EOPNOTSUPP;
-}
-#endif /*CONFIG_TI_CPTS*/
-
-static int cpsw_ndo_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
-{
-	struct cpsw_priv *priv = netdev_priv(dev);
-	struct cpsw_common *cpsw = priv->cpsw;
-	int slave_no = cpsw_slave_index(cpsw, priv);
-
-	if (!netif_running(dev))
-		return -EINVAL;
-
-	switch (cmd) {
-	case SIOCSHWTSTAMP:
-		return cpsw_hwtstamp_set(dev, req);
-	case SIOCGHWTSTAMP:
-		return cpsw_hwtstamp_get(dev, req);
-	}
-
-	if (!cpsw->slaves[slave_no].phy)
-		return -EOPNOTSUPP;
-	return phy_mii_ioctl(cpsw->slaves[slave_no].phy, req, cmd);
-}
-
-static void cpsw_ndo_tx_timeout(struct net_device *ndev)
-{
-	struct cpsw_priv *priv = netdev_priv(ndev);
-	struct cpsw_common *cpsw = priv->cpsw;
-	int ch;
-
-	cpsw_err(priv, tx_err, "transmit timeout, restarting dma\n");
-	ndev->stats.tx_errors++;
-	cpsw_intr_disable(cpsw);
-	for (ch = 0; ch < cpsw->tx_ch_num; ch++) {
-		cpdma_chan_stop(cpsw->txv[ch].ch);
-		cpdma_chan_start(cpsw->txv[ch].ch);
-	}
-
-	cpsw_intr_enable(cpsw);
-	netif_trans_update(ndev);
-	netif_tx_wake_all_queues(ndev);
-}
-
 static int cpsw_ndo_set_mac_address(struct net_device *ndev, void *p)
 {
 	struct cpsw_priv *priv = netdev_priv(ndev);
···
 	return ret;
 }
 
-static int cpsw_ndo_set_tx_maxrate(struct net_device *ndev, int queue, u32 rate)
-{
-	struct cpsw_priv *priv = netdev_priv(ndev);
-	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
-	u32 min_rate;
-	u32 ch_rate;
-	int i, ret;
-
-	ch_rate = netdev_get_tx_queue(ndev, queue)->tx_maxrate;
-	if (ch_rate == rate)
-		return 0;
-
-	ch_rate = rate * 1000;
-	min_rate = cpdma_chan_get_min_rate(cpsw->dma);
-	if ((ch_rate < min_rate && ch_rate)) {
-		dev_err(priv->dev, "The channel rate cannot be less than %dMbps",
-			min_rate);
-		return -EINVAL;
-	}
-
-	if (rate > cpsw->speed) {
-		dev_err(priv->dev, "The channel rate cannot be more than 2Gbps");
-		return -EINVAL;
-	}
-
-	ret = pm_runtime_get_sync(cpsw->dev);
-	if (ret < 0) {
-		pm_runtime_put_noidle(cpsw->dev);
-		return ret;
-	}
-
-	ret = cpdma_chan_set_rate(cpsw->txv[queue].ch, ch_rate);
-	pm_runtime_put(cpsw->dev);
-
-	if (ret)
-		return ret;
-
-	/* update rates for slaves tx queues */
-	for (i = 0; i < cpsw->data.slaves; i++) {
-		slave = &cpsw->slaves[i];
-		if (!slave->ndev)
-			continue;
-
-		netdev_get_tx_queue(slave->ndev, queue)->tx_maxrate = rate;
-	}
-
-	cpsw_split_res(cpsw);
-	return ret;
-}
-
-static int cpsw_set_mqprio(struct net_device *ndev, void *type_data)
-{
-	struct tc_mqprio_qopt_offload *mqprio = type_data;
-	struct cpsw_priv *priv = netdev_priv(ndev);
-	struct cpsw_common *cpsw = priv->cpsw;
-	int fifo, num_tc, count, offset;
-	struct cpsw_slave *slave;
-	u32 tx_prio_map = 0;
-	int i, tc, ret;
-
-	num_tc = mqprio->qopt.num_tc;
-	if (num_tc > CPSW_TC_NUM)
-		return -EINVAL;
-
-	if (mqprio->mode != TC_MQPRIO_MODE_DCB)
-		return -EINVAL;
-
-	ret = pm_runtime_get_sync(cpsw->dev);
-	if (ret < 0) {
-		pm_runtime_put_noidle(cpsw->dev);
-		return ret;
-	}
-
-	if (num_tc) {
-		for (i = 0; i < 8; i++) {
-			tc = mqprio->qopt.prio_tc_map[i];
-			fifo = cpsw_tc_to_fifo(tc, num_tc);
-			tx_prio_map |= fifo << (4 * i);
-		}
-
-		netdev_set_num_tc(ndev, num_tc);
-		for (i = 0; i < num_tc; i++) {
-			count = mqprio->qopt.count[i];
-			offset = mqprio->qopt.offset[i];
-			netdev_set_tc_queue(ndev, i, count, offset);
-		}
-	}
-
-	if (!mqprio->qopt.hw) {
-		/* restore default configuration */
-		netdev_reset_tc(ndev);
-		tx_prio_map = TX_PRIORITY_MAPPING;
-	}
-
-	priv->mqprio_hw = mqprio->qopt.hw;
-
-	offset = cpsw->version == CPSW_VERSION_1 ?
-		 CPSW1_TX_PRI_MAP : CPSW2_TX_PRI_MAP;
-
-	slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
-	slave_write(slave, tx_prio_map, offset);
-
-	pm_runtime_put_sync(cpsw->dev);
-
-	return 0;
-}
-
-static int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
-			     void *type_data)
-{
-	switch (type) {
-	case TC_SETUP_QDISC_CBS:
-		return cpsw_set_cbs(ndev, type_data);
-
-	case TC_SETUP_QDISC_MQPRIO:
-		return cpsw_set_mqprio(ndev, type_data);
-
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
-static int cpsw_xdp_prog_setup(struct cpsw_priv *priv, struct netdev_bpf *bpf)
-{
-	struct bpf_prog *prog = bpf->prog;
-
-	if (!priv->xdpi.prog && !prog)
-		return 0;
-
-	if (!xdp_attachment_flags_ok(&priv->xdpi, bpf))
-		return -EBUSY;
-
-	WRITE_ONCE(priv->xdp_prog, prog);
-
-	xdp_attachment_setup(&priv->xdpi, bpf);
-
-	return 0;
-}
-
-static int cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
-{
-	struct cpsw_priv *priv = netdev_priv(ndev);
-
-	switch (bpf->command) {
-	case XDP_SETUP_PROG:
-		return cpsw_xdp_prog_setup(priv, bpf);
-
-	case XDP_QUERY_PROG:
-		return xdp_attachment_query(&priv->xdpi, bpf);
-
-	default:
-		return -EINVAL;
-	}
-}
-
 static int cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
 			     struct xdp_frame **frames, u32 flags)
 {
 	struct cpsw_priv *priv = netdev_priv(ndev);
+	struct cpsw_common *cpsw = priv->cpsw;
 	struct xdp_frame *xdpf;
-	int i, drops = 0;
+	int i, drops = 0, port;
 
 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
 		return -EINVAL;
···
 			continue;
 		}
 
-		if (cpsw_xdp_tx_frame(priv, xdpf, NULL))
+		port = priv->emac_port + cpsw->data.dual_emac;
+		if (cpsw_xdp_tx_frame(priv, xdpf, NULL, port))
 			drops++;
 	}
···
 		return -ENOMEM;
 
 	platform_set_drvdata(pdev, cpsw);
+	cpsw_slave_index = cpsw_slave_index_priv;
+
 	cpsw->dev = dev;
 
 	mode = devm_gpiod_get_array_optional(dev, "mode", GPIOD_OUT_LOW);
drivers/net/ethernet/ti/cpsw_ale.c (+140 -10)
···
  * Copyright (C) 2012 Texas Instruments
  *
  */
+#include <linux/bitmap.h>
+#include <linux/if_vlan.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
···
 			  int flags, u16 vid)
 {
 	u32 ale_entry[ALE_ENTRY_WORDS] = {0, 0, 0};
+	int mcast_members;
 	int idx;

 	idx = cpsw_ale_match_addr(ale, addr, (flags & ALE_VLAN) ? vid : 0);
···

 	cpsw_ale_read(ale, idx, ale_entry);

-	if (port_mask)
-		cpsw_ale_set_port_mask(ale_entry, port_mask,
+	if (port_mask) {
+		mcast_members = cpsw_ale_get_port_mask(ale_entry,
+						       ale->port_mask_bits);
+		mcast_members &= ~port_mask;
+		cpsw_ale_set_port_mask(ale_entry, mcast_members,
 				       ale->port_mask_bits);
-	else
+	} else {
 		cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE);
+	}

 	cpsw_ale_write(ale, idx, ale_entry);
 	return 0;
···
 	writel(unreg_mcast, ale->params.ale_regs + ALE_VLAN_MASK_MUX(idx));
 }

-int cpsw_ale_add_vlan(struct cpsw_ale *ale, u16 vid, int port, int untag,
+static void cpsw_ale_set_vlan_untag(struct cpsw_ale *ale, u32 *ale_entry,
+				    u16 vid, int untag_mask)
+{
+	cpsw_ale_set_vlan_untag_force(ale_entry,
+				      untag_mask, ale->vlan_field_bits);
+	if (untag_mask & ALE_PORT_HOST)
+		bitmap_set(ale->p0_untag_vid_mask, vid, 1);
+	else
+		bitmap_clear(ale->p0_untag_vid_mask, vid, 1);
+}
+
+int cpsw_ale_add_vlan(struct cpsw_ale *ale, u16 vid, int port_mask, int untag,
 		      int reg_mcast, int unreg_mcast)
 {
 	u32 ale_entry[ALE_ENTRY_WORDS] = {0, 0, 0};
···

 	cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_VLAN);
 	cpsw_ale_set_vlan_id(ale_entry, vid);
+	cpsw_ale_set_vlan_untag(ale, ale_entry, vid, untag);

-	cpsw_ale_set_vlan_untag_force(ale_entry, untag, ale->vlan_field_bits);
 	if (!ale->params.nu_switch_ale) {
 		cpsw_ale_set_vlan_reg_mcast(ale_entry, reg_mcast,
 					    ale->vlan_field_bits);
···
 	} else {
 		cpsw_ale_set_vlan_mcast(ale, ale_entry, reg_mcast, unreg_mcast);
 	}
-	cpsw_ale_set_vlan_member_list(ale_entry, port, ale->vlan_field_bits);
+	cpsw_ale_set_vlan_member_list(ale_entry, port_mask,
+				      ale->vlan_field_bits);

 	if (idx < 0)
 		idx = cpsw_ale_match_free(ale);
···

 	cpsw_ale_write(ale, idx, ale_entry);
 	return 0;
+}
+
+static void cpsw_ale_del_vlan_modify(struct cpsw_ale *ale, u32 *ale_entry,
+				     u16 vid, int port_mask)
+{
+	int reg_mcast, unreg_mcast;
+	int members, untag;
+
+	members = cpsw_ale_get_vlan_member_list(ale_entry,
+						ale->vlan_field_bits);
+	members &= ~port_mask;
+
+	untag = cpsw_ale_get_vlan_untag_force(ale_entry,
+					      ale->vlan_field_bits);
+	reg_mcast = cpsw_ale_get_vlan_reg_mcast(ale_entry,
+						ale->vlan_field_bits);
+	unreg_mcast = cpsw_ale_get_vlan_unreg_mcast(ale_entry,
+						    ale->vlan_field_bits);
+	untag &= members;
+	reg_mcast &= members;
+	unreg_mcast &= members;
+
+	cpsw_ale_set_vlan_untag(ale, ale_entry, vid, untag);
+
+	if (!ale->params.nu_switch_ale) {
+		cpsw_ale_set_vlan_reg_mcast(ale_entry, reg_mcast,
+					    ale->vlan_field_bits);
+		cpsw_ale_set_vlan_unreg_mcast(ale_entry, unreg_mcast,
+					      ale->vlan_field_bits);
+	} else {
+		cpsw_ale_set_vlan_mcast(ale, ale_entry, reg_mcast,
+					unreg_mcast);
+	}
+	cpsw_ale_set_vlan_member_list(ale_entry, members,
+				      ale->vlan_field_bits);
 }

 int cpsw_ale_del_vlan(struct cpsw_ale *ale, u16 vid, int port_mask)
···

 	cpsw_ale_read(ale, idx, ale_entry);

-	if (port_mask)
-		cpsw_ale_set_vlan_member_list(ale_entry, port_mask,
-					      ale->vlan_field_bits);
-	else
+	if (port_mask) {
+		cpsw_ale_del_vlan_modify(ale, ale_entry, vid, port_mask);
+	} else {
+		cpsw_ale_set_vlan_untag(ale, ale_entry, vid, 0);
 		cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE);
+	}

 	cpsw_ale_write(ale, idx, ale_entry);
+
 	return 0;
+}
+
+int cpsw_ale_vlan_add_modify(struct cpsw_ale *ale, u16 vid, int port_mask,
+			     int untag_mask, int reg_mask, int unreg_mask)
+{
+	u32 ale_entry[ALE_ENTRY_WORDS] = {0, 0, 0};
+	int reg_mcast_members, unreg_mcast_members;
+	int vlan_members, untag_members;
+	int idx, ret = 0;
+
+	idx = cpsw_ale_match_vlan(ale, vid);
+	if (idx >= 0)
+		cpsw_ale_read(ale, idx, ale_entry);
+
+	vlan_members = cpsw_ale_get_vlan_member_list(ale_entry,
+						     ale->vlan_field_bits);
+	reg_mcast_members = cpsw_ale_get_vlan_reg_mcast(ale_entry,
+							ale->vlan_field_bits);
+	unreg_mcast_members =
+		cpsw_ale_get_vlan_unreg_mcast(ale_entry,
+					      ale->vlan_field_bits);
+	untag_members = cpsw_ale_get_vlan_untag_force(ale_entry,
+						      ale->vlan_field_bits);
+
+	vlan_members |= port_mask;
+	untag_members = (untag_members & ~port_mask) | untag_mask;
+	reg_mcast_members = (reg_mcast_members & ~port_mask) | reg_mask;
+	unreg_mcast_members = (unreg_mcast_members & ~port_mask) | unreg_mask;
+
+	ret = cpsw_ale_add_vlan(ale, vid, vlan_members, untag_members,
+				reg_mcast_members, unreg_mcast_members);
+	if (ret) {
+		dev_err(ale->params.dev, "Unable to add vlan\n");
+		return ret;
+	}
+	dev_dbg(ale->params.dev, "port mask 0x%x untag 0x%x\n", vlan_members,
+		untag_mask);
+
+	return ret;
+}
+
+void cpsw_ale_set_unreg_mcast(struct cpsw_ale *ale, int unreg_mcast_mask,
+			      bool add)
+{
+	u32 ale_entry[ALE_ENTRY_WORDS];
+	int unreg_members = 0;
+	int type, idx;
+
+	for (idx = 0; idx < ale->params.ale_entries; idx++) {
+		cpsw_ale_read(ale, idx, ale_entry);
+		type = cpsw_ale_get_entry_type(ale_entry);
+		if (type != ALE_TYPE_VLAN)
+			continue;
+
+		unreg_members =
+			cpsw_ale_get_vlan_unreg_mcast(ale_entry,
+						      ale->vlan_field_bits);
+		if (add)
+			unreg_members |= unreg_mcast_mask;
+		else
+			unreg_members &= ~unreg_mcast_mask;
+		cpsw_ale_set_vlan_unreg_mcast(ale_entry, unreg_members,
+					      ale->vlan_field_bits);
+		cpsw_ale_write(ale, idx, ale_entry);
+	}
 }

 void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti, int port)
···
 void cpsw_ale_stop(struct cpsw_ale *ale)
 {
 	del_timer_sync(&ale->timer);
+	cpsw_ale_control_set(ale, 0, ALE_CLEAR, 1);
 	cpsw_ale_control_set(ale, 0, ALE_ENABLE, 0);
 }

···
 	ale = devm_kzalloc(params->dev, sizeof(*ale), GFP_KERNEL);
 	if (!ale)
 		return NULL;
+
+	ale->p0_untag_vid_mask =
+		devm_kmalloc_array(params->dev, BITS_TO_LONGS(VLAN_N_VID),
+				   sizeof(unsigned long),
+				   GFP_KERNEL);
+	if (!ale->p0_untag_vid_mask)
+		return ERR_PTR(-ENOMEM);

 	ale->params = *params;
 	ale->ageout = ale->params.ale_ageout * HZ;
···
 			ALE_UNKNOWNVLAN_FORCE_UNTAG_EGRESS;
 	}

+	cpsw_ale_control_set(ale, 0, ALE_CLEAR, 1);
 	return ale;
 }
+11
drivers/net/ethernet/ti/cpsw_ale.h
···
 	u32 port_mask_bits;
 	u32 port_num_bits;
 	u32 vlan_field_bits;
+	unsigned long *p0_untag_vid_mask;
 };

 enum cpsw_ale_control {
···
 int cpsw_ale_control_set(struct cpsw_ale *ale, int port,
 			 int control, int value);
 void cpsw_ale_dump(struct cpsw_ale *ale, u32 *data);
+
+static inline int cpsw_ale_get_vlan_p0_untag(struct cpsw_ale *ale, u16 vid)
+{
+	return test_bit(vid, ale->p0_untag_vid_mask);
+}
+
+int cpsw_ale_vlan_add_modify(struct cpsw_ale *ale, u16 vid, int port_mask,
+			     int untag_mask, int reg_mcast, int unreg_mcast);
+void cpsw_ale_set_unreg_mcast(struct cpsw_ale *ale, int unreg_mcast_mask,
+			      bool add);

 #endif
+2048
drivers/net/ethernet/ti/cpsw_new.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Texas Instruments Ethernet Switch Driver 4 + * 5 + * Copyright (C) 2019 Texas Instruments 6 + */ 7 + 8 + #include <linux/io.h> 9 + #include <linux/clk.h> 10 + #include <linux/timer.h> 11 + #include <linux/module.h> 12 + #include <linux/irqreturn.h> 13 + #include <linux/interrupt.h> 14 + #include <linux/if_ether.h> 15 + #include <linux/etherdevice.h> 16 + #include <linux/net_tstamp.h> 17 + #include <linux/phy.h> 18 + #include <linux/phy/phy.h> 19 + #include <linux/delay.h> 20 + #include <linux/pm_runtime.h> 21 + #include <linux/gpio/consumer.h> 22 + #include <linux/of.h> 23 + #include <linux/of_mdio.h> 24 + #include <linux/of_net.h> 25 + #include <linux/of_device.h> 26 + #include <linux/if_vlan.h> 27 + #include <linux/kmemleak.h> 28 + #include <linux/sys_soc.h> 29 + 30 + #include <net/page_pool.h> 31 + #include <net/pkt_cls.h> 32 + #include <net/devlink.h> 33 + 34 + #include "cpsw.h" 35 + #include "cpsw_ale.h" 36 + #include "cpsw_priv.h" 37 + #include "cpsw_sl.h" 38 + #include "cpsw_switchdev.h" 39 + #include "cpts.h" 40 + #include "davinci_cpdma.h" 41 + 42 + #include <net/pkt_sched.h> 43 + 44 + static int debug_level; 45 + static int ale_ageout = CPSW_ALE_AGEOUT_DEFAULT; 46 + static int rx_packet_max = CPSW_MAX_PACKET_SIZE; 47 + static int descs_pool_size = CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT; 48 + 49 + struct cpsw_devlink { 50 + struct cpsw_common *cpsw; 51 + }; 52 + 53 + enum cpsw_devlink_param_id { 54 + CPSW_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX, 55 + CPSW_DL_PARAM_SWITCH_MODE, 56 + CPSW_DL_PARAM_ALE_BYPASS, 57 + }; 58 + 59 + /* struct cpsw_common is not needed, kept here for compatibility 60 + * reasons witrh the old driver 61 + */ 62 + static int cpsw_slave_index_priv(struct cpsw_common *cpsw, 63 + struct cpsw_priv *priv) 64 + { 65 + if (priv->emac_port == HOST_PORT_NUM) 66 + return -1; 67 + 68 + return priv->emac_port - 1; 69 + } 70 + 71 + static bool cpsw_is_switch_en(struct cpsw_common 
*cpsw) 72 + { 73 + return !cpsw->data.dual_emac; 74 + } 75 + 76 + static void cpsw_set_promiscious(struct net_device *ndev, bool enable) 77 + { 78 + struct cpsw_common *cpsw = ndev_to_cpsw(ndev); 79 + bool enable_uni = false; 80 + int i; 81 + 82 + if (cpsw_is_switch_en(cpsw)) 83 + return; 84 + 85 + /* Enabling promiscuous mode for one interface will be 86 + * common for both the interface as the interface shares 87 + * the same hardware resource. 88 + */ 89 + for (i = 0; i < cpsw->data.slaves; i++) 90 + if (cpsw->slaves[i].ndev && 91 + (cpsw->slaves[i].ndev->flags & IFF_PROMISC)) 92 + enable_uni = true; 93 + 94 + if (!enable && enable_uni) { 95 + enable = enable_uni; 96 + dev_dbg(cpsw->dev, "promiscuity not disabled as the other interface is still in promiscuity mode\n"); 97 + } 98 + 99 + if (enable) { 100 + /* Enable unknown unicast, reg/unreg mcast */ 101 + cpsw_ale_control_set(cpsw->ale, HOST_PORT_NUM, 102 + ALE_P0_UNI_FLOOD, 1); 103 + 104 + dev_dbg(cpsw->dev, "promiscuity enabled\n"); 105 + } else { 106 + /* Disable unknown unicast */ 107 + cpsw_ale_control_set(cpsw->ale, HOST_PORT_NUM, 108 + ALE_P0_UNI_FLOOD, 0); 109 + dev_dbg(cpsw->dev, "promiscuity disabled\n"); 110 + } 111 + } 112 + 113 + /** 114 + * cpsw_set_mc - adds multicast entry to the table if it's not added or deletes 115 + * if it's not deleted 116 + * @ndev: device to sync 117 + * @addr: address to be added or deleted 118 + * @vid: vlan id, if vid < 0 set/unset address for real device 119 + * @add: add address if the flag is set or remove otherwise 120 + */ 121 + static int cpsw_set_mc(struct net_device *ndev, const u8 *addr, 122 + int vid, int add) 123 + { 124 + struct cpsw_priv *priv = netdev_priv(ndev); 125 + struct cpsw_common *cpsw = priv->cpsw; 126 + int mask, flags, ret, slave_no; 127 + 128 + slave_no = cpsw_slave_index(cpsw, priv); 129 + if (vid < 0) 130 + vid = cpsw->slaves[slave_no].port_vlan; 131 + 132 + mask = ALE_PORT_HOST; 133 + flags = vid ? 
ALE_VLAN : 0; 134 + 135 + if (add) 136 + ret = cpsw_ale_add_mcast(cpsw->ale, addr, mask, flags, vid, 0); 137 + else 138 + ret = cpsw_ale_del_mcast(cpsw->ale, addr, 0, flags, vid); 139 + 140 + return ret; 141 + } 142 + 143 + static int cpsw_update_vlan_mc(struct net_device *vdev, int vid, void *ctx) 144 + { 145 + struct addr_sync_ctx *sync_ctx = ctx; 146 + struct netdev_hw_addr *ha; 147 + int found = 0, ret = 0; 148 + 149 + if (!vdev || !(vdev->flags & IFF_UP)) 150 + return 0; 151 + 152 + /* vlan address is relevant if its sync_cnt != 0 */ 153 + netdev_for_each_mc_addr(ha, vdev) { 154 + if (ether_addr_equal(ha->addr, sync_ctx->addr)) { 155 + found = ha->sync_cnt; 156 + break; 157 + } 158 + } 159 + 160 + if (found) 161 + sync_ctx->consumed++; 162 + 163 + if (sync_ctx->flush) { 164 + if (!found) 165 + cpsw_set_mc(sync_ctx->ndev, sync_ctx->addr, vid, 0); 166 + return 0; 167 + } 168 + 169 + if (found) 170 + ret = cpsw_set_mc(sync_ctx->ndev, sync_ctx->addr, vid, 1); 171 + 172 + return ret; 173 + } 174 + 175 + static int cpsw_add_mc_addr(struct net_device *ndev, const u8 *addr, int num) 176 + { 177 + struct addr_sync_ctx sync_ctx; 178 + int ret; 179 + 180 + sync_ctx.consumed = 0; 181 + sync_ctx.addr = addr; 182 + sync_ctx.ndev = ndev; 183 + sync_ctx.flush = 0; 184 + 185 + ret = vlan_for_each(ndev, cpsw_update_vlan_mc, &sync_ctx); 186 + if (sync_ctx.consumed < num && !ret) 187 + ret = cpsw_set_mc(ndev, addr, -1, 1); 188 + 189 + return ret; 190 + } 191 + 192 + static int cpsw_del_mc_addr(struct net_device *ndev, const u8 *addr, int num) 193 + { 194 + struct addr_sync_ctx sync_ctx; 195 + 196 + sync_ctx.consumed = 0; 197 + sync_ctx.addr = addr; 198 + sync_ctx.ndev = ndev; 199 + sync_ctx.flush = 1; 200 + 201 + vlan_for_each(ndev, cpsw_update_vlan_mc, &sync_ctx); 202 + if (sync_ctx.consumed == num) 203 + cpsw_set_mc(ndev, addr, -1, 0); 204 + 205 + return 0; 206 + } 207 + 208 + static int cpsw_purge_vlan_mc(struct net_device *vdev, int vid, void *ctx) 209 + { 210 + struct 
addr_sync_ctx *sync_ctx = ctx; 211 + struct netdev_hw_addr *ha; 212 + int found = 0; 213 + 214 + if (!vdev || !(vdev->flags & IFF_UP)) 215 + return 0; 216 + 217 + /* vlan address is relevant if its sync_cnt != 0 */ 218 + netdev_for_each_mc_addr(ha, vdev) { 219 + if (ether_addr_equal(ha->addr, sync_ctx->addr)) { 220 + found = ha->sync_cnt; 221 + break; 222 + } 223 + } 224 + 225 + if (!found) 226 + return 0; 227 + 228 + sync_ctx->consumed++; 229 + cpsw_set_mc(sync_ctx->ndev, sync_ctx->addr, vid, 0); 230 + return 0; 231 + } 232 + 233 + static int cpsw_purge_all_mc(struct net_device *ndev, const u8 *addr, int num) 234 + { 235 + struct addr_sync_ctx sync_ctx; 236 + 237 + sync_ctx.addr = addr; 238 + sync_ctx.ndev = ndev; 239 + sync_ctx.consumed = 0; 240 + 241 + vlan_for_each(ndev, cpsw_purge_vlan_mc, &sync_ctx); 242 + if (sync_ctx.consumed < num) 243 + cpsw_set_mc(ndev, addr, -1, 0); 244 + 245 + return 0; 246 + } 247 + 248 + static void cpsw_ndo_set_rx_mode(struct net_device *ndev) 249 + { 250 + struct cpsw_priv *priv = netdev_priv(ndev); 251 + struct cpsw_common *cpsw = priv->cpsw; 252 + 253 + if (ndev->flags & IFF_PROMISC) { 254 + /* Enable promiscuous mode */ 255 + cpsw_set_promiscious(ndev, true); 256 + cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, priv->emac_port); 257 + return; 258 + } 259 + 260 + /* Disable promiscuous mode */ 261 + cpsw_set_promiscious(ndev, false); 262 + 263 + /* Restore allmulti on vlans if necessary */ 264 + cpsw_ale_set_allmulti(cpsw->ale, 265 + ndev->flags & IFF_ALLMULTI, priv->emac_port); 266 + 267 + /* add/remove mcast address either for real netdev or for vlan */ 268 + __hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr, 269 + cpsw_del_mc_addr); 270 + } 271 + 272 + static unsigned int cpsw_rxbuf_total_len(unsigned int len) 273 + { 274 + len += CPSW_HEADROOM; 275 + len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); 276 + 277 + return SKB_DATA_ALIGN(len); 278 + } 279 + 280 + static void cpsw_rx_handler(void *token, int len, int 
status) 281 + { 282 + struct page *new_page, *page = token; 283 + void *pa = page_address(page); 284 + int headroom = CPSW_HEADROOM; 285 + struct cpsw_meta_xdp *xmeta; 286 + struct cpsw_common *cpsw; 287 + struct net_device *ndev; 288 + int port, ch, pkt_size; 289 + struct cpsw_priv *priv; 290 + struct page_pool *pool; 291 + struct sk_buff *skb; 292 + struct xdp_buff xdp; 293 + int ret = 0; 294 + dma_addr_t dma; 295 + 296 + xmeta = pa + CPSW_XMETA_OFFSET; 297 + cpsw = ndev_to_cpsw(xmeta->ndev); 298 + ndev = xmeta->ndev; 299 + pkt_size = cpsw->rx_packet_max; 300 + ch = xmeta->ch; 301 + 302 + if (status >= 0) { 303 + port = CPDMA_RX_SOURCE_PORT(status); 304 + if (port) 305 + ndev = cpsw->slaves[--port].ndev; 306 + } 307 + 308 + priv = netdev_priv(ndev); 309 + pool = cpsw->page_pool[ch]; 310 + 311 + if (unlikely(status < 0) || unlikely(!netif_running(ndev))) { 312 + /* In dual emac mode check for all interfaces */ 313 + if (cpsw->usage_count && status >= 0) { 314 + /* The packet received is for the interface which 315 + * is already down and the other interface is up 316 + * and running, instead of freeing which results 317 + * in reducing of the number of rx descriptor in 318 + * DMA engine, requeue page back to cpdma. 
319 + */ 320 + new_page = page; 321 + goto requeue; 322 + } 323 + 324 + /* the interface is going down, pages are purged */ 325 + page_pool_recycle_direct(pool, page); 326 + return; 327 + } 328 + 329 + new_page = page_pool_dev_alloc_pages(pool); 330 + if (unlikely(!new_page)) { 331 + new_page = page; 332 + ndev->stats.rx_dropped++; 333 + goto requeue; 334 + } 335 + 336 + if (priv->xdp_prog) { 337 + if (status & CPDMA_RX_VLAN_ENCAP) { 338 + xdp.data = pa + CPSW_HEADROOM + 339 + CPSW_RX_VLAN_ENCAP_HDR_SIZE; 340 + xdp.data_end = xdp.data + len - 341 + CPSW_RX_VLAN_ENCAP_HDR_SIZE; 342 + } else { 343 + xdp.data = pa + CPSW_HEADROOM; 344 + xdp.data_end = xdp.data + len; 345 + } 346 + 347 + xdp_set_data_meta_invalid(&xdp); 348 + 349 + xdp.data_hard_start = pa; 350 + xdp.rxq = &priv->xdp_rxq[ch]; 351 + 352 + ret = cpsw_run_xdp(priv, ch, &xdp, page, priv->emac_port); 353 + if (ret != CPSW_XDP_PASS) 354 + goto requeue; 355 + 356 + /* XDP prog might have changed packet data and boundaries */ 357 + len = xdp.data_end - xdp.data; 358 + headroom = xdp.data - xdp.data_hard_start; 359 + 360 + /* XDP prog can modify vlan tag, so can't use encap header */ 361 + status &= ~CPDMA_RX_VLAN_ENCAP; 362 + } 363 + 364 + /* pass skb to netstack if no XDP prog or returned XDP_PASS */ 365 + skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size)); 366 + if (!skb) { 367 + ndev->stats.rx_dropped++; 368 + page_pool_recycle_direct(pool, page); 369 + goto requeue; 370 + } 371 + 372 + skb->offload_fwd_mark = priv->offload_fwd_mark; 373 + skb_reserve(skb, headroom); 374 + skb_put(skb, len); 375 + skb->dev = ndev; 376 + if (status & CPDMA_RX_VLAN_ENCAP) 377 + cpsw_rx_vlan_encap(skb); 378 + if (priv->rx_ts_enabled) 379 + cpts_rx_timestamp(cpsw->cpts, skb); 380 + skb->protocol = eth_type_trans(skb, ndev); 381 + 382 + /* unmap page as no netstack skb page recycling */ 383 + page_pool_release_page(pool, page); 384 + netif_receive_skb(skb); 385 + 386 + ndev->stats.rx_bytes += len; 387 + 
ndev->stats.rx_packets++; 388 + 389 + requeue: 390 + xmeta = page_address(new_page) + CPSW_XMETA_OFFSET; 391 + xmeta->ndev = ndev; 392 + xmeta->ch = ch; 393 + 394 + dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM; 395 + ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma, 396 + pkt_size, 0); 397 + if (ret < 0) { 398 + WARN_ON(ret == -ENOMEM); 399 + page_pool_recycle_direct(pool, new_page); 400 + } 401 + } 402 + 403 + static int cpsw_add_vlan_ale_entry(struct cpsw_priv *priv, 404 + unsigned short vid) 405 + { 406 + struct cpsw_common *cpsw = priv->cpsw; 407 + int unreg_mcast_mask = 0; 408 + int mcast_mask; 409 + u32 port_mask; 410 + int ret; 411 + 412 + port_mask = (1 << priv->emac_port) | ALE_PORT_HOST; 413 + 414 + mcast_mask = ALE_PORT_HOST; 415 + if (priv->ndev->flags & IFF_ALLMULTI) 416 + unreg_mcast_mask = mcast_mask; 417 + 418 + ret = cpsw_ale_add_vlan(cpsw->ale, vid, port_mask, 0, port_mask, 419 + unreg_mcast_mask); 420 + if (ret != 0) 421 + return ret; 422 + 423 + ret = cpsw_ale_add_ucast(cpsw->ale, priv->mac_addr, 424 + HOST_PORT_NUM, ALE_VLAN, vid); 425 + if (ret != 0) 426 + goto clean_vid; 427 + 428 + ret = cpsw_ale_add_mcast(cpsw->ale, priv->ndev->broadcast, 429 + mcast_mask, ALE_VLAN, vid, 0); 430 + if (ret != 0) 431 + goto clean_vlan_ucast; 432 + return 0; 433 + 434 + clean_vlan_ucast: 435 + cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr, 436 + HOST_PORT_NUM, ALE_VLAN, vid); 437 + clean_vid: 438 + cpsw_ale_del_vlan(cpsw->ale, vid, 0); 439 + return ret; 440 + } 441 + 442 + static int cpsw_ndo_vlan_rx_add_vid(struct net_device *ndev, 443 + __be16 proto, u16 vid) 444 + { 445 + struct cpsw_priv *priv = netdev_priv(ndev); 446 + struct cpsw_common *cpsw = priv->cpsw; 447 + int ret, i; 448 + 449 + if (cpsw_is_switch_en(cpsw)) { 450 + dev_dbg(cpsw->dev, ".ndo_vlan_rx_add_vid called in switch mode\n"); 451 + return 0; 452 + } 453 + 454 + if (vid == cpsw->data.default_vlan) 455 + return 0; 456 + 457 + ret = pm_runtime_get_sync(cpsw->dev); 458 + 
if (ret < 0) { 459 + pm_runtime_put_noidle(cpsw->dev); 460 + return ret; 461 + } 462 + 463 + /* In dual EMAC, reserved VLAN id should not be used for 464 + * creating VLAN interfaces as this can break the dual 465 + * EMAC port separation 466 + */ 467 + for (i = 0; i < cpsw->data.slaves; i++) { 468 + if (cpsw->slaves[i].ndev && 469 + vid == cpsw->slaves[i].port_vlan) { 470 + ret = -EINVAL; 471 + goto err; 472 + } 473 + } 474 + 475 + dev_dbg(priv->dev, "Adding vlanid %d to vlan filter\n", vid); 476 + ret = cpsw_add_vlan_ale_entry(priv, vid); 477 + err: 478 + pm_runtime_put(cpsw->dev); 479 + return ret; 480 + } 481 + 482 + static int cpsw_restore_vlans(struct net_device *vdev, int vid, void *arg) 483 + { 484 + struct cpsw_priv *priv = arg; 485 + 486 + if (!vdev || !vid) 487 + return 0; 488 + 489 + cpsw_ndo_vlan_rx_add_vid(priv->ndev, 0, vid); 490 + return 0; 491 + } 492 + 493 + /* restore resources after port reset */ 494 + static void cpsw_restore(struct cpsw_priv *priv) 495 + { 496 + struct cpsw_common *cpsw = priv->cpsw; 497 + 498 + /* restore vlan configurations */ 499 + vlan_for_each(priv->ndev, cpsw_restore_vlans, priv); 500 + 501 + /* restore MQPRIO offload */ 502 + cpsw_mqprio_resume(&cpsw->slaves[priv->emac_port - 1], priv); 503 + 504 + /* restore CBS offload */ 505 + cpsw_cbs_resume(&cpsw->slaves[priv->emac_port - 1], priv); 506 + } 507 + 508 + static void cpsw_init_stp_ale_entry(struct cpsw_common *cpsw) 509 + { 510 + char stpa[] = {0x01, 0x80, 0xc2, 0x0, 0x0, 0x0}; 511 + 512 + cpsw_ale_add_mcast(cpsw->ale, stpa, 513 + ALE_PORT_HOST, ALE_SUPER, 0, 514 + ALE_MCAST_BLOCK_LEARN_FWD); 515 + } 516 + 517 + static void cpsw_init_host_port_switch(struct cpsw_common *cpsw) 518 + { 519 + int vlan = cpsw->data.default_vlan; 520 + 521 + writel(CPSW_FIFO_NORMAL_MODE, &cpsw->host_port_regs->tx_in_ctl); 522 + 523 + writel(vlan, &cpsw->host_port_regs->port_vlan); 524 + 525 + cpsw_ale_add_vlan(cpsw->ale, vlan, ALE_ALL_PORTS, 526 + ALE_ALL_PORTS, ALE_ALL_PORTS, 527 + 
ALE_PORT_1 | ALE_PORT_2); 528 + 529 + cpsw_init_stp_ale_entry(cpsw); 530 + 531 + cpsw_ale_control_set(cpsw->ale, HOST_PORT_NUM, ALE_P0_UNI_FLOOD, 1); 532 + dev_dbg(cpsw->dev, "Set P0_UNI_FLOOD\n"); 533 + cpsw_ale_control_set(cpsw->ale, HOST_PORT_NUM, ALE_PORT_NOLEARN, 0); 534 + } 535 + 536 + static void cpsw_init_host_port_dual_mac(struct cpsw_common *cpsw) 537 + { 538 + int vlan = cpsw->data.default_vlan; 539 + 540 + writel(CPSW_FIFO_DUAL_MAC_MODE, &cpsw->host_port_regs->tx_in_ctl); 541 + 542 + cpsw_ale_control_set(cpsw->ale, HOST_PORT_NUM, ALE_P0_UNI_FLOOD, 0); 543 + dev_dbg(cpsw->dev, "unset P0_UNI_FLOOD\n"); 544 + 545 + writel(vlan, &cpsw->host_port_regs->port_vlan); 546 + 547 + cpsw_ale_add_vlan(cpsw->ale, vlan, ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0); 548 + /* learning make no sense in dual_mac mode */ 549 + cpsw_ale_control_set(cpsw->ale, HOST_PORT_NUM, ALE_PORT_NOLEARN, 1); 550 + } 551 + 552 + static void cpsw_init_host_port(struct cpsw_priv *priv) 553 + { 554 + struct cpsw_common *cpsw = priv->cpsw; 555 + u32 control_reg; 556 + 557 + /* soft reset the controller and initialize ale */ 558 + soft_reset("cpsw", &cpsw->regs->soft_reset); 559 + cpsw_ale_start(cpsw->ale); 560 + 561 + /* switch to vlan unaware mode */ 562 + cpsw_ale_control_set(cpsw->ale, HOST_PORT_NUM, ALE_VLAN_AWARE, 563 + CPSW_ALE_VLAN_AWARE); 564 + control_reg = readl(&cpsw->regs->control); 565 + control_reg |= CPSW_VLAN_AWARE | CPSW_RX_VLAN_ENCAP; 566 + writel(control_reg, &cpsw->regs->control); 567 + 568 + /* setup host port priority mapping */ 569 + writel_relaxed(CPDMA_TX_PRIORITY_MAP, 570 + &cpsw->host_port_regs->cpdma_tx_pri_map); 571 + writel_relaxed(0, &cpsw->host_port_regs->cpdma_rx_chan_map); 572 + 573 + /* disable priority elevation */ 574 + writel_relaxed(0, &cpsw->regs->ptype); 575 + 576 + /* enable statistics collection only on all ports */ 577 + writel_relaxed(0x7, &cpsw->regs->stat_port_en); 578 + 579 + /* Enable internal fifo flow control */ 580 + writel(0x7, 
&cpsw->regs->flow_control); 581 + 582 + if (cpsw_is_switch_en(cpsw)) 583 + cpsw_init_host_port_switch(cpsw); 584 + else 585 + cpsw_init_host_port_dual_mac(cpsw); 586 + 587 + cpsw_ale_control_set(cpsw->ale, HOST_PORT_NUM, 588 + ALE_PORT_STATE, ALE_PORT_STATE_FORWARD); 589 + } 590 + 591 + static void cpsw_port_add_dual_emac_def_ale_entries(struct cpsw_priv *priv, 592 + struct cpsw_slave *slave) 593 + { 594 + u32 port_mask = 1 << priv->emac_port | ALE_PORT_HOST; 595 + struct cpsw_common *cpsw = priv->cpsw; 596 + u32 reg; 597 + 598 + reg = (cpsw->version == CPSW_VERSION_1) ? CPSW1_PORT_VLAN : 599 + CPSW2_PORT_VLAN; 600 + slave_write(slave, slave->port_vlan, reg); 601 + 602 + cpsw_ale_add_vlan(cpsw->ale, slave->port_vlan, port_mask, 603 + port_mask, port_mask, 0); 604 + cpsw_ale_add_mcast(cpsw->ale, priv->ndev->broadcast, 605 + ALE_PORT_HOST, ALE_VLAN, slave->port_vlan, 606 + ALE_MCAST_FWD); 607 + cpsw_ale_add_ucast(cpsw->ale, priv->mac_addr, 608 + HOST_PORT_NUM, ALE_VLAN | 609 + ALE_SECURE, slave->port_vlan); 610 + cpsw_ale_control_set(cpsw->ale, priv->emac_port, 611 + ALE_PORT_DROP_UNKNOWN_VLAN, 1); 612 + /* learning make no sense in dual_mac mode */ 613 + cpsw_ale_control_set(cpsw->ale, priv->emac_port, 614 + ALE_PORT_NOLEARN, 1); 615 + } 616 + 617 + static void cpsw_port_add_switch_def_ale_entries(struct cpsw_priv *priv, 618 + struct cpsw_slave *slave) 619 + { 620 + u32 port_mask = 1 << priv->emac_port | ALE_PORT_HOST; 621 + struct cpsw_common *cpsw = priv->cpsw; 622 + u32 reg; 623 + 624 + cpsw_ale_control_set(cpsw->ale, priv->emac_port, 625 + ALE_PORT_DROP_UNKNOWN_VLAN, 0); 626 + cpsw_ale_control_set(cpsw->ale, priv->emac_port, 627 + ALE_PORT_NOLEARN, 0); 628 + /* disabling SA_UPDATE required to make stp work, without this setting 629 + * Host MAC addresses will jump between ports. 
630 + * As per TRM MAC address can be defined as unicast supervisory (super) 631 + * by setting both (ALE_BLOCKED | ALE_SECURE) which should prevent 632 + * SA_UPDATE, but HW seems works incorrectly and setting ALE_SECURE 633 + * causes STP packets to be dropped due to ingress filter 634 + * if (source address found) and (secure) and 635 + * (receive port number != port_number)) 636 + * then discard the packet 637 + */ 638 + cpsw_ale_control_set(cpsw->ale, priv->emac_port, 639 + ALE_PORT_NO_SA_UPDATE, 1); 640 + 641 + cpsw_ale_add_mcast(cpsw->ale, priv->ndev->broadcast, 642 + port_mask, ALE_VLAN, slave->port_vlan, 643 + ALE_MCAST_FWD_2); 644 + cpsw_ale_add_ucast(cpsw->ale, priv->mac_addr, 645 + HOST_PORT_NUM, ALE_VLAN, slave->port_vlan); 646 + 647 + reg = (cpsw->version == CPSW_VERSION_1) ? CPSW1_PORT_VLAN : 648 + CPSW2_PORT_VLAN; 649 + slave_write(slave, slave->port_vlan, reg); 650 + } 651 + 652 + static void cpsw_adjust_link(struct net_device *ndev) 653 + { 654 + struct cpsw_priv *priv = netdev_priv(ndev); 655 + struct cpsw_common *cpsw = priv->cpsw; 656 + struct cpsw_slave *slave; 657 + struct phy_device *phy; 658 + u32 mac_control = 0; 659 + 660 + slave = &cpsw->slaves[priv->emac_port - 1]; 661 + phy = slave->phy; 662 + 663 + if (!phy) 664 + return; 665 + 666 + if (phy->link) { 667 + mac_control = CPSW_SL_CTL_GMII_EN; 668 + 669 + if (phy->speed == 1000) 670 + mac_control |= CPSW_SL_CTL_GIG; 671 + if (phy->duplex) 672 + mac_control |= CPSW_SL_CTL_FULLDUPLEX; 673 + 674 + /* set speed_in input in case RMII mode is used in 100Mbps */ 675 + if (phy->speed == 100) 676 + mac_control |= CPSW_SL_CTL_IFCTL_A; 677 + /* in band mode only works in 10Mbps RGMII mode */ 678 + else if ((phy->speed == 10) && phy_interface_is_rgmii(phy)) 679 + mac_control |= CPSW_SL_CTL_EXT_EN; /* In Band mode */ 680 + 681 + if (priv->rx_pause) 682 + mac_control |= CPSW_SL_CTL_RX_FLOW_EN; 683 + 684 + if (priv->tx_pause) 685 + mac_control |= CPSW_SL_CTL_TX_FLOW_EN; 686 + 687 + if (mac_control != 
slave->mac_control) 688 + cpsw_sl_ctl_set(slave->mac_sl, mac_control); 689 + 690 + /* enable forwarding */ 691 + cpsw_ale_control_set(cpsw->ale, priv->emac_port, 692 + ALE_PORT_STATE, ALE_PORT_STATE_FORWARD); 693 + 694 + netif_tx_wake_all_queues(ndev); 695 + 696 + if (priv->shp_cfg_speed && 697 + priv->shp_cfg_speed != slave->phy->speed && 698 + !cpsw_shp_is_off(priv)) 699 + dev_warn(priv->dev, "Speed was changed, CBS shaper speeds are changed!"); 700 + } else { 701 + netif_tx_stop_all_queues(ndev); 702 + 703 + mac_control = 0; 704 + /* disable forwarding */ 705 + cpsw_ale_control_set(cpsw->ale, priv->emac_port, 706 + ALE_PORT_STATE, ALE_PORT_STATE_DISABLE); 707 + 708 + cpsw_sl_wait_for_idle(slave->mac_sl, 100); 709 + 710 + cpsw_sl_ctl_reset(slave->mac_sl); 711 + } 712 + 713 + if (mac_control != slave->mac_control) 714 + phy_print_status(phy); 715 + 716 + slave->mac_control = mac_control; 717 + 718 + if (phy->link && cpsw_need_resplit(cpsw)) 719 + cpsw_split_res(cpsw); 720 + } 721 + 722 + static void cpsw_slave_open(struct cpsw_slave *slave, struct cpsw_priv *priv) 723 + { 724 + struct cpsw_common *cpsw = priv->cpsw; 725 + struct phy_device *phy; 726 + 727 + cpsw_sl_reset(slave->mac_sl, 100); 728 + cpsw_sl_ctl_reset(slave->mac_sl); 729 + 730 + /* setup priority mapping */ 731 + cpsw_sl_reg_write(slave->mac_sl, CPSW_SL_RX_PRI_MAP, 732 + RX_PRIORITY_MAPPING); 733 + 734 + switch (cpsw->version) { 735 + case CPSW_VERSION_1: 736 + slave_write(slave, TX_PRIORITY_MAPPING, CPSW1_TX_PRI_MAP); 737 + /* Increase RX FIFO size to 5 for supporting fullduplex 738 + * flow control mode 739 + */ 740 + slave_write(slave, 741 + (CPSW_MAX_BLKS_TX << CPSW_MAX_BLKS_TX_SHIFT) | 742 + CPSW_MAX_BLKS_RX, CPSW1_MAX_BLKS); 743 + break; 744 + case CPSW_VERSION_2: 745 + case CPSW_VERSION_3: 746 + case CPSW_VERSION_4: 747 + slave_write(slave, TX_PRIORITY_MAPPING, CPSW2_TX_PRI_MAP); 748 + /* Increase RX FIFO size to 5 for supporting fullduplex 749 + * flow control mode 750 + */ 751 + 
		slave_write(slave, (CPSW_MAX_BLKS_TX << CPSW_MAX_BLKS_TX_SHIFT) |
			    CPSW_MAX_BLKS_RX, CPSW2_MAX_BLKS);
		break;
	}

	/* setup max packet size, and mac address */
	cpsw_sl_reg_write(slave->mac_sl, CPSW_SL_RX_MAXLEN,
			  cpsw->rx_packet_max);
	cpsw_set_slave_mac(slave, priv);

	slave->mac_control = 0;	/* no link yet */

	if (cpsw_is_switch_en(cpsw))
		cpsw_port_add_switch_def_ale_entries(priv, slave);
	else
		cpsw_port_add_dual_emac_def_ale_entries(priv, slave);

	if (!slave->data->phy_node)
		dev_err(priv->dev, "no phy found on slave %d\n",
			slave->slave_num);
	phy = of_phy_connect(priv->ndev, slave->data->phy_node,
			     &cpsw_adjust_link, 0, slave->data->phy_if);
	if (!phy) {
		dev_err(priv->dev, "phy \"%pOF\" not found on slave %d\n",
			slave->data->phy_node,
			slave->slave_num);
		return;
	}
	slave->phy = phy;

	phy_attached_info(slave->phy);

	phy_start(slave->phy);

	/* Configure GMII_SEL register */
	phy_set_mode_ext(slave->data->ifphy, PHY_MODE_ETHERNET,
			 slave->data->phy_if);
}

static int cpsw_ndo_stop(struct net_device *ndev)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct cpsw_common *cpsw = priv->cpsw;
	struct cpsw_slave *slave;

	cpsw_info(priv, ifdown, "shutting down ndev\n");
	slave = &cpsw->slaves[priv->emac_port - 1];
	if (slave->phy)
		phy_stop(slave->phy);

	netif_tx_stop_all_queues(priv->ndev);

	if (slave->phy) {
		phy_disconnect(slave->phy);
		slave->phy = NULL;
	}

	__hw_addr_ref_unsync_dev(&ndev->mc, ndev, cpsw_purge_all_mc);

	if (cpsw->usage_count <= 1) {
		napi_disable(&cpsw->napi_rx);
		napi_disable(&cpsw->napi_tx);
		cpts_unregister(cpsw->cpts);
		cpsw_intr_disable(cpsw);
		cpdma_ctlr_stop(cpsw->dma);
		cpsw_ale_stop(cpsw->ale);
		cpsw_destroy_xdp_rxqs(cpsw);
	}

	if (cpsw_need_resplit(cpsw))
		cpsw_split_res(cpsw);

	cpsw->usage_count--;
	pm_runtime_put_sync(cpsw->dev);
	return 0;
}

static int cpsw_ndo_open(struct net_device *ndev)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct cpsw_common *cpsw = priv->cpsw;
	int ret;

	dev_info(priv->dev, "starting ndev. mode: %s\n",
		 cpsw_is_switch_en(cpsw) ? "switch" : "dual_mac");
	ret = pm_runtime_get_sync(cpsw->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(cpsw->dev);
		return ret;
	}

	/* Notify the stack of the actual queue counts. */
	ret = netif_set_real_num_tx_queues(ndev, cpsw->tx_ch_num);
	if (ret) {
		dev_err(priv->dev, "cannot set real number of tx queues\n");
		goto pm_cleanup;
	}

	ret = netif_set_real_num_rx_queues(ndev, cpsw->rx_ch_num);
	if (ret) {
		dev_err(priv->dev, "cannot set real number of rx queues\n");
		goto pm_cleanup;
	}

	/* Initialize host and slave ports */
	if (!cpsw->usage_count)
		cpsw_init_host_port(priv);
	cpsw_slave_open(&cpsw->slaves[priv->emac_port - 1], priv);

	/* initialize shared resources for every ndev */
	if (!cpsw->usage_count) {
		/* create rxqs for both infs in dual mac as they use same pool
		 * and must be destroyed together when no users.
		 */
		ret = cpsw_create_xdp_rxqs(cpsw);
		if (ret < 0)
			goto err_cleanup;

		ret = cpsw_fill_rx_channels(priv);
		if (ret < 0)
			goto err_cleanup;

		if (cpts_register(cpsw->cpts))
			dev_err(priv->dev, "error registering cpts device\n");

		napi_enable(&cpsw->napi_rx);
		napi_enable(&cpsw->napi_tx);

		if (cpsw->tx_irq_disabled) {
			cpsw->tx_irq_disabled = false;
			enable_irq(cpsw->irqs_table[1]);
		}

		if (cpsw->rx_irq_disabled) {
			cpsw->rx_irq_disabled = false;
			enable_irq(cpsw->irqs_table[0]);
		}
	}

	cpsw_restore(priv);

	/* Enable Interrupt pacing if configured */
	if (cpsw->coal_intvl != 0) {
		struct ethtool_coalesce coal;

		coal.rx_coalesce_usecs = cpsw->coal_intvl;
		cpsw_set_coalesce(ndev, &coal);
	}

	cpdma_ctlr_start(cpsw->dma);
	cpsw_intr_enable(cpsw);
	cpsw->usage_count++;

	return 0;

err_cleanup:
	cpsw_ndo_stop(ndev);

pm_cleanup:
	pm_runtime_put_sync(cpsw->dev);
	return ret;
}

static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff *skb,
				       struct net_device *ndev)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct cpsw_common *cpsw = priv->cpsw;
	struct cpts *cpts = cpsw->cpts;
	struct netdev_queue *txq;
	struct cpdma_chan *txch;
	int ret, q_idx;

	if (skb_padto(skb, CPSW_MIN_PACKET_SIZE)) {
		cpsw_err(priv, tx_err, "packet pad failed\n");
		ndev->stats.tx_dropped++;
		return NET_XMIT_DROP;
	}

	if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP &&
	    priv->tx_ts_enabled && cpts_can_timestamp(cpts, skb))
		skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;

	q_idx = skb_get_queue_mapping(skb);
	if (q_idx >= cpsw->tx_ch_num)
		q_idx = q_idx % cpsw->tx_ch_num;

	txch = cpsw->txv[q_idx].ch;
	txq = netdev_get_tx_queue(ndev, q_idx);
	skb_tx_timestamp(skb);
	ret = cpdma_chan_submit(txch, skb, skb->data, skb->len,
				priv->emac_port);
	if (unlikely(ret != 0)) {
		cpsw_err(priv, tx_err, "desc submit failed\n");
		goto fail;
	}

	/* If there is no more tx desc left free then we need to
	 * tell the kernel to stop sending us tx frames.
	 */
	if (unlikely(!cpdma_check_free_tx_desc(txch))) {
		netif_tx_stop_queue(txq);

		/* Barrier, so that stop_queue visible to other cpus */
		smp_mb__after_atomic();

		if (cpdma_check_free_tx_desc(txch))
			netif_tx_wake_queue(txq);
	}

	return NETDEV_TX_OK;
fail:
	ndev->stats.tx_dropped++;
	netif_tx_stop_queue(txq);

	/* Barrier, so that stop_queue visible to other cpus */
	smp_mb__after_atomic();

	if (cpdma_check_free_tx_desc(txch))
		netif_tx_wake_queue(txq);

	return NETDEV_TX_BUSY;
}

static int cpsw_ndo_set_mac_address(struct net_device *ndev, void *p)
{
	struct sockaddr *addr = (struct sockaddr *)p;
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct cpsw_common *cpsw = priv->cpsw;
	int ret, slave_no;
	int flags = 0;
	u16 vid = 0;

	slave_no = cpsw_slave_index(cpsw, priv);
	if (!is_valid_ether_addr(addr->sa_data))
		return -EADDRNOTAVAIL;

	ret = pm_runtime_get_sync(cpsw->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(cpsw->dev);
		return ret;
	}

	vid = cpsw->slaves[slave_no].port_vlan;
	flags = ALE_VLAN | ALE_SECURE;

	cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr, HOST_PORT_NUM,
			   flags, vid);
	cpsw_ale_add_ucast(cpsw->ale, addr->sa_data, HOST_PORT_NUM,
			   flags, vid);

	ether_addr_copy(priv->mac_addr, addr->sa_data);
	ether_addr_copy(ndev->dev_addr, priv->mac_addr);
	cpsw_set_slave_mac(&cpsw->slaves[slave_no], priv);

	pm_runtime_put(cpsw->dev);

	return 0;
}

static int cpsw_ndo_vlan_rx_kill_vid(struct net_device *ndev,
				     __be16 proto, u16 vid)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct cpsw_common *cpsw = priv->cpsw;
	int ret;
	int i;

	if (cpsw_is_switch_en(cpsw)) {
		dev_dbg(cpsw->dev, "ndo del vlan is called in switch mode\n");
		return 0;
	}

	if (vid == cpsw->data.default_vlan)
		return 0;

	ret = pm_runtime_get_sync(cpsw->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(cpsw->dev);
		return ret;
	}

	for (i = 0; i < cpsw->data.slaves; i++) {
		if (cpsw->slaves[i].ndev &&
		    vid == cpsw->slaves[i].port_vlan)
			goto err;
	}

	dev_dbg(priv->dev, "removing vlanid %d from vlan filter\n", vid);
	cpsw_ale_del_vlan(cpsw->ale, vid, 0);
	cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr,
			   HOST_PORT_NUM, ALE_VLAN, vid);
	cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
			   0, ALE_VLAN, vid);
	cpsw_ale_flush_multicast(cpsw->ale, 0, vid);
err:
	pm_runtime_put(cpsw->dev);
	return ret;
}

static int cpsw_ndo_get_phys_port_name(struct net_device *ndev, char *name,
				       size_t len)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	int err;

	err = snprintf(name, len, "p%d", priv->emac_port);

	if (err >= len)
		return -EINVAL;

	return 0;
}

#ifdef CONFIG_NET_POLL_CONTROLLER
static void cpsw_ndo_poll_controller(struct net_device *ndev)
{
	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);

	cpsw_intr_disable(cpsw);
	cpsw_rx_interrupt(cpsw->irqs_table[0], cpsw);
	cpsw_tx_interrupt(cpsw->irqs_table[1], cpsw);
	cpsw_intr_enable(cpsw);
}
#endif

static int cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
			     struct xdp_frame **frames, u32 flags)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct xdp_frame *xdpf;
	int i, drops = 0;

	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
		return -EINVAL;

	for (i = 0; i < n; i++) {
		xdpf = frames[i];
		if (xdpf->len < CPSW_MIN_PACKET_SIZE) {
			xdp_return_frame_rx_napi(xdpf);
			drops++;
			continue;
		}

		if (cpsw_xdp_tx_frame(priv, xdpf, NULL, priv->emac_port))
			drops++;
	}

	return n - drops;
}

static int cpsw_get_port_parent_id(struct net_device *ndev,
				   struct netdev_phys_item_id *ppid)
{
	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);

	ppid->id_len = sizeof(cpsw->base_mac);
	memcpy(&ppid->id, &cpsw->base_mac, ppid->id_len);

	return 0;
}

static const struct net_device_ops cpsw_netdev_ops = {
	.ndo_open		= cpsw_ndo_open,
	.ndo_stop		= cpsw_ndo_stop,
	.ndo_start_xmit		= cpsw_ndo_start_xmit,
	.ndo_set_mac_address	= cpsw_ndo_set_mac_address,
	.ndo_do_ioctl		= cpsw_ndo_ioctl,
	.ndo_validate_addr	= eth_validate_addr,
	.ndo_tx_timeout		= cpsw_ndo_tx_timeout,
	.ndo_set_rx_mode	= cpsw_ndo_set_rx_mode,
	.ndo_set_tx_maxrate	= cpsw_ndo_set_tx_maxrate,
#ifdef CONFIG_NET_POLL_CONTROLLER
	.ndo_poll_controller	= cpsw_ndo_poll_controller,
#endif
	.ndo_vlan_rx_add_vid	= cpsw_ndo_vlan_rx_add_vid,
	.ndo_vlan_rx_kill_vid	= cpsw_ndo_vlan_rx_kill_vid,
	.ndo_setup_tc		= cpsw_ndo_setup_tc,
	.ndo_get_phys_port_name	= cpsw_ndo_get_phys_port_name,
	.ndo_bpf		= cpsw_ndo_bpf,
	.ndo_xdp_xmit		= cpsw_ndo_xdp_xmit,
	.ndo_get_port_parent_id	= cpsw_get_port_parent_id,
};

static void cpsw_get_drvinfo(struct net_device *ndev,
			     struct ethtool_drvinfo *info)
{
	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
	struct platform_device *pdev;

	pdev = to_platform_device(cpsw->dev);
	strlcpy(info->driver, "cpsw-switch", sizeof(info->driver));
	strlcpy(info->version, "2.0", sizeof(info->version));
	strlcpy(info->bus_info, pdev->name, sizeof(info->bus_info));
}

static int cpsw_set_pauseparam(struct net_device *ndev,
			       struct ethtool_pauseparam *pause)
{
	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
	struct cpsw_priv *priv = netdev_priv(ndev);
	int slave_no;

	slave_no = cpsw_slave_index(cpsw, priv);
	if (!cpsw->slaves[slave_no].phy)
		return -EINVAL;

	if (!phy_validate_pause(cpsw->slaves[slave_no].phy, pause))
		return -EINVAL;

	priv->rx_pause = pause->rx_pause ? true : false;
	priv->tx_pause = pause->tx_pause ? true : false;

	phy_set_asym_pause(cpsw->slaves[slave_no].phy,
			   priv->rx_pause, priv->tx_pause);

	return 0;
}

static int cpsw_set_channels(struct net_device *ndev,
			     struct ethtool_channels *chs)
{
	return cpsw_set_channels_common(ndev, chs, cpsw_rx_handler);
}

static const struct ethtool_ops cpsw_ethtool_ops = {
	.get_drvinfo		= cpsw_get_drvinfo,
	.get_msglevel		= cpsw_get_msglevel,
	.set_msglevel		= cpsw_set_msglevel,
	.get_link		= ethtool_op_get_link,
	.get_ts_info		= cpsw_get_ts_info,
	.get_coalesce		= cpsw_get_coalesce,
	.set_coalesce		= cpsw_set_coalesce,
	.get_sset_count		= cpsw_get_sset_count,
	.get_strings		= cpsw_get_strings,
	.get_ethtool_stats	= cpsw_get_ethtool_stats,
	.get_pauseparam		= cpsw_get_pauseparam,
	.set_pauseparam		= cpsw_set_pauseparam,
	.get_wol		= cpsw_get_wol,
	.set_wol		= cpsw_set_wol,
	.get_regs_len		= cpsw_get_regs_len,
	.get_regs		= cpsw_get_regs,
	.begin			= cpsw_ethtool_op_begin,
	.complete		= cpsw_ethtool_op_complete,
	.get_channels		= cpsw_get_channels,
	.set_channels		= cpsw_set_channels,
	.get_link_ksettings	= cpsw_get_link_ksettings,
	.set_link_ksettings	= cpsw_set_link_ksettings,
	.get_eee		= cpsw_get_eee,
	.set_eee		= cpsw_set_eee,
	.nway_reset		= cpsw_nway_reset,
	.get_ringparam		= cpsw_get_ringparam,
	.set_ringparam		= cpsw_set_ringparam,
};

static int cpsw_probe_dt(struct cpsw_common *cpsw)
{
	struct device_node *node = cpsw->dev->of_node, *tmp_node, *port_np;
	struct cpsw_platform_data *data = &cpsw->data;
	struct device *dev = cpsw->dev;
	int ret;
	u32 prop;

	if (!node)
		return -EINVAL;

	tmp_node = of_get_child_by_name(node, "ethernet-ports");
	if (!tmp_node)
		return -ENOENT;
	data->slaves = of_get_child_count(tmp_node);
	if (data->slaves != CPSW_SLAVE_PORTS_NUM) {
		of_node_put(tmp_node);
		return -ENOENT;
	}

	data->active_slave = 0;
	data->channels = CPSW_MAX_QUEUES;
	data->ale_entries = CPSW_ALE_NUM_ENTRIES;
	data->dual_emac = 1;
	data->bd_ram_size = CPSW_BD_RAM_SIZE;
	data->mac_control = 0;

	data->slave_data = devm_kcalloc(dev, CPSW_SLAVE_PORTS_NUM,
					sizeof(struct cpsw_slave_data),
					GFP_KERNEL);
	if (!data->slave_data)
		return -ENOMEM;

	/* Populate all the child nodes here...
	 */
	ret = devm_of_platform_populate(dev);
	/* We do not want to force this, as in some cases may not have child */
	if (ret)
		dev_warn(dev, "Doesn't have any child node\n");

	for_each_child_of_node(tmp_node, port_np) {
		struct cpsw_slave_data *slave_data;
		const void *mac_addr;
		u32 port_id;

		ret = of_property_read_u32(port_np, "reg", &port_id);
		if (ret < 0) {
			dev_err(dev, "%pOF error reading port_id %d\n",
				port_np, ret);
			goto err_node_put;
		}

		if (!port_id || port_id > CPSW_SLAVE_PORTS_NUM) {
			dev_err(dev, "%pOF has invalid port_id %u\n",
				port_np, port_id);
			ret = -EINVAL;
			goto err_node_put;
		}

		slave_data = &data->slave_data[port_id - 1];

		slave_data->disabled = !of_device_is_available(port_np);
		if (slave_data->disabled)
			continue;

		slave_data->slave_node = port_np;
		slave_data->ifphy = devm_of_phy_get(dev, port_np, NULL);
		if (IS_ERR(slave_data->ifphy)) {
			ret = PTR_ERR(slave_data->ifphy);
			dev_err(dev, "%pOF: Error retrieving port phy: %d\n",
				port_np, ret);
			goto err_node_put;
		}

		if (of_phy_is_fixed_link(port_np)) {
			ret = of_phy_register_fixed_link(port_np);
			if (ret) {
				if (ret != -EPROBE_DEFER)
					dev_err(dev, "%pOF failed to register fixed-link phy: %d\n",
						port_np, ret);
				goto err_node_put;
			}
			slave_data->phy_node = of_node_get(port_np);
		} else {
			slave_data->phy_node =
				of_parse_phandle(port_np, "phy-handle", 0);
		}

		if (!slave_data->phy_node) {
			dev_err(dev, "%pOF no phy found\n", port_np);
			ret = -ENODEV;
			goto err_node_put;
		}

		ret = of_get_phy_mode(port_np, &slave_data->phy_if);
		if (ret) {
			dev_err(dev, "%pOF read phy-mode err %d\n",
				port_np, ret);
			goto err_node_put;
		}

		mac_addr = of_get_mac_address(port_np);
		if (!IS_ERR(mac_addr)) {
			ether_addr_copy(slave_data->mac_addr, mac_addr);
		} else {
			ret = ti_cm_get_macid(dev, port_id - 1,
					      slave_data->mac_addr);
			if (ret)
				goto err_node_put;
		}

		if (of_property_read_u32(port_np, "ti,dual-emac-pvid",
					 &prop)) {
			dev_err(dev, "%pOF Missing dual_emac_res_vlan in DT.\n",
				port_np);
			slave_data->dual_emac_res_vlan = port_id;
			dev_err(dev, "%pOF Using %d as Reserved VLAN\n",
				port_np, slave_data->dual_emac_res_vlan);
		} else {
			slave_data->dual_emac_res_vlan = prop;
		}
	}

	of_node_put(tmp_node);
	return 0;

err_node_put:
	of_node_put(port_np);
	return ret;
}

static void cpsw_remove_dt(struct cpsw_common *cpsw)
{
	struct cpsw_platform_data *data = &cpsw->data;
	int i = 0;

	for (i = 0; i < cpsw->data.slaves; i++) {
		struct cpsw_slave_data *slave_data = &data->slave_data[i];
		struct device_node *port_np = slave_data->phy_node;

		if (port_np) {
			if (of_phy_is_fixed_link(port_np))
				of_phy_deregister_fixed_link(port_np);

			of_node_put(port_np);
		}
	}
}

static int cpsw_create_ports(struct cpsw_common *cpsw)
{
	struct cpsw_platform_data *data = &cpsw->data;
	struct net_device *ndev, *napi_ndev = NULL;
	struct device *dev = cpsw->dev;
	struct cpsw_priv *priv;
	int ret = 0, i = 0;

	for (i = 0; i < cpsw->data.slaves; i++) {
		struct cpsw_slave_data *slave_data = &data->slave_data[i];

		if (slave_data->disabled)
			continue;

		ndev = devm_alloc_etherdev_mqs(dev, sizeof(struct cpsw_priv),
					       CPSW_MAX_QUEUES,
					       CPSW_MAX_QUEUES);
		if (!ndev) {
			dev_err(dev, "error allocating net_device\n");
			return -ENOMEM;
		}

		priv = netdev_priv(ndev);
		priv->cpsw = cpsw;
		priv->ndev = ndev;
		priv->dev = dev;
		priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
		priv->emac_port = i + 1;

		if (is_valid_ether_addr(slave_data->mac_addr)) {
			ether_addr_copy(priv->mac_addr, slave_data->mac_addr);
			dev_info(cpsw->dev, "Detected MACID = %pM\n",
				 priv->mac_addr);
		} else {
			eth_random_addr(slave_data->mac_addr);
			dev_info(cpsw->dev, "Random MACID = %pM\n",
				 priv->mac_addr);
		}
		ether_addr_copy(ndev->dev_addr, slave_data->mac_addr);
		ether_addr_copy(priv->mac_addr, slave_data->mac_addr);

		cpsw->slaves[i].ndev = ndev;

		ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
				  NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_NETNS_LOCAL;

		ndev->netdev_ops = &cpsw_netdev_ops;
		ndev->ethtool_ops = &cpsw_ethtool_ops;
		SET_NETDEV_DEV(ndev, dev);

		if (!napi_ndev) {
			/* CPSW Host port CPDMA interface is shared between
			 * ports and there is only one TX and one RX IRQs
			 * available for all possible TX and RX channels
			 * accordingly.
			 */
			netif_napi_add(ndev, &cpsw->napi_rx,
				       cpsw->quirk_irq ?
				       cpsw_rx_poll : cpsw_rx_mq_poll,
				       CPSW_POLL_WEIGHT);
			netif_tx_napi_add(ndev, &cpsw->napi_tx,
					  cpsw->quirk_irq ?
					  cpsw_tx_poll : cpsw_tx_mq_poll,
					  CPSW_POLL_WEIGHT);
		}

		napi_ndev = ndev;
	}

	return ret;
}

static void cpsw_unregister_ports(struct cpsw_common *cpsw)
{
	int i = 0;

	for (i = 0; i < cpsw->data.slaves; i++) {
		if (!cpsw->slaves[i].ndev)
			continue;

		unregister_netdev(cpsw->slaves[i].ndev);
	}
}

static int cpsw_register_ports(struct cpsw_common *cpsw)
{
	int ret = 0, i = 0;

	for (i = 0; i < cpsw->data.slaves; i++) {
		if (!cpsw->slaves[i].ndev)
			continue;

		/* register the network device */
		ret = register_netdev(cpsw->slaves[i].ndev);
		if (ret) {
			dev_err(cpsw->dev,
				"cpsw: err registering net device%d\n", i);
			cpsw->slaves[i].ndev = NULL;
			break;
		}
	}

	if (ret)
		cpsw_unregister_ports(cpsw);
	return ret;
}

bool cpsw_port_dev_check(const struct net_device *ndev)
{
	if (ndev->netdev_ops == &cpsw_netdev_ops) {
		struct cpsw_common *cpsw = ndev_to_cpsw(ndev);

		return !cpsw->data.dual_emac;
	}

	return false;
}

static void cpsw_port_offload_fwd_mark_update(struct cpsw_common *cpsw)
{
	int set_val = 0;
	int i;

	if (!cpsw->ale_bypass &&
	    (cpsw->br_members == (ALE_PORT_1 | ALE_PORT_2)))
		set_val = 1;

	dev_dbg(cpsw->dev, "set offload_fwd_mark %d\n", set_val);

	for (i = 0; i < cpsw->data.slaves; i++) {
		struct net_device *sl_ndev = cpsw->slaves[i].ndev;
		struct cpsw_priv *priv = netdev_priv(sl_ndev);

		priv->offload_fwd_mark = set_val;
	}
}

static int cpsw_netdevice_port_link(struct net_device *ndev,
				    struct net_device *br_ndev)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct cpsw_common *cpsw = priv->cpsw;

	if (!cpsw->br_members) {
		cpsw->hw_bridge_dev = br_ndev;
	} else {
		/* This is adding the port to a second bridge, this is
		 * unsupported
		 */
		if (cpsw->hw_bridge_dev != br_ndev)
			return -EOPNOTSUPP;
	}

	cpsw->br_members |= BIT(priv->emac_port);

	cpsw_port_offload_fwd_mark_update(cpsw);

	return NOTIFY_DONE;
}

static void cpsw_netdevice_port_unlink(struct net_device *ndev)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct cpsw_common *cpsw = priv->cpsw;

	cpsw->br_members &= ~BIT(priv->emac_port);

	cpsw_port_offload_fwd_mark_update(cpsw);

	if (!cpsw->br_members)
		cpsw->hw_bridge_dev = NULL;
}

/* netdev notifier */
static int cpsw_netdevice_event(struct notifier_block *unused,
				unsigned long event, void *ptr)
{
	struct net_device *ndev = netdev_notifier_info_to_dev(ptr);
	struct netdev_notifier_changeupper_info *info;
	int ret = NOTIFY_DONE;

	if (!cpsw_port_dev_check(ndev))
		return NOTIFY_DONE;

	switch (event) {
	case NETDEV_CHANGEUPPER:
		info = ptr;

		if (netif_is_bridge_master(info->upper_dev)) {
			if (info->linking)
				ret = cpsw_netdevice_port_link(ndev,
							       info->upper_dev);
			else
				cpsw_netdevice_port_unlink(ndev);
		}
		break;
	default:
		return NOTIFY_DONE;
	}

	return notifier_from_errno(ret);
}

static struct notifier_block cpsw_netdevice_nb __read_mostly = {
	.notifier_call = cpsw_netdevice_event,
};

static int cpsw_register_notifiers(struct cpsw_common *cpsw)
{
	int ret = 0;

	ret = register_netdevice_notifier(&cpsw_netdevice_nb);
	if (ret) {
		dev_err(cpsw->dev, "can't register netdevice notifier\n");
		return ret;
	}

	ret = cpsw_switchdev_register_notifiers(cpsw);
	if (ret)
		unregister_netdevice_notifier(&cpsw_netdevice_nb);

	return ret;
}

static void cpsw_unregister_notifiers(struct cpsw_common *cpsw)
{
	cpsw_switchdev_unregister_notifiers(cpsw);
	unregister_netdevice_notifier(&cpsw_netdevice_nb);
}

static const struct devlink_ops cpsw_devlink_ops = {
};

static int cpsw_dl_switch_mode_get(struct devlink *dl, u32 id,
				   struct devlink_param_gset_ctx *ctx)
{
	struct cpsw_devlink *dl_priv = devlink_priv(dl);
	struct cpsw_common *cpsw = dl_priv->cpsw;

	dev_dbg(cpsw->dev, "%s id:%u\n", __func__, id);

	if (id != CPSW_DL_PARAM_SWITCH_MODE)
		return -EOPNOTSUPP;

	ctx->val.vbool = !cpsw->data.dual_emac;

	return 0;
}

static int cpsw_dl_switch_mode_set(struct devlink *dl, u32 id,
				   struct devlink_param_gset_ctx *ctx)
{
	struct cpsw_devlink *dl_priv = devlink_priv(dl);
	struct cpsw_common *cpsw = dl_priv->cpsw;
	int vlan = cpsw->data.default_vlan;
	bool switch_en = ctx->val.vbool;
	bool if_running = false;
	int i;

	dev_dbg(cpsw->dev, "%s id:%u\n", __func__, id);

	if (id != CPSW_DL_PARAM_SWITCH_MODE)
		return -EOPNOTSUPP;

	if (switch_en == !cpsw->data.dual_emac)
		return 0;

	if (!switch_en && cpsw->br_members) {
		dev_err(cpsw->dev, "Remove ports from BR before disabling switch mode\n");
		return -EINVAL;
	}

	rtnl_lock();

	for (i = 0; i < cpsw->data.slaves; i++) {
		struct cpsw_slave *slave = &cpsw->slaves[i];
		struct net_device *sl_ndev = slave->ndev;

		if (!sl_ndev || !netif_running(sl_ndev))
			continue;

		if_running = true;
	}

	if (!if_running) {
		/* all ndevs are down */
		cpsw->data.dual_emac = !switch_en;
		for (i = 0; i < cpsw->data.slaves; i++) {
			struct cpsw_slave *slave = &cpsw->slaves[i];
			struct net_device *sl_ndev = slave->ndev;
			struct cpsw_priv *priv;

			if (!sl_ndev)
				continue;

			priv = netdev_priv(sl_ndev);
			if (switch_en)
				vlan = cpsw->data.default_vlan;
			else
				vlan = slave->data->dual_emac_res_vlan;
			slave->port_vlan = vlan;
		}
		goto exit;
	}

	if (switch_en) {
		dev_info(cpsw->dev, "Enable switch mode\n");

		/* enable bypass - no forwarding; all traffic goes to Host */
		cpsw_ale_control_set(cpsw->ale, 0, ALE_BYPASS, 1);

		/* clean up ALE table */
		cpsw_ale_control_set(cpsw->ale, 0, ALE_CLEAR, 1);
		cpsw_ale_control_get(cpsw->ale, 0, ALE_AGEOUT);

		cpsw_init_host_port_switch(cpsw);

		for (i = 0; i < cpsw->data.slaves; i++) {
			struct cpsw_slave *slave = &cpsw->slaves[i];
			struct net_device *sl_ndev = slave->ndev;
			struct cpsw_priv *priv;

			if (!sl_ndev)
				continue;

			priv = netdev_priv(sl_ndev);
			slave->port_vlan = vlan;
			if (netif_running(sl_ndev))
				cpsw_port_add_switch_def_ale_entries(priv,
								     slave);
		}

		cpsw_ale_control_set(cpsw->ale, 0, ALE_BYPASS, 0);
		cpsw->data.dual_emac = false;
	} else {
		dev_info(cpsw->dev, "Disable switch mode\n");

		/* enable bypass - no forwarding; all traffic goes to Host */
		cpsw_ale_control_set(cpsw->ale, 0, ALE_BYPASS, 1);

		cpsw_ale_control_set(cpsw->ale, 0, ALE_CLEAR, 1);
		cpsw_ale_control_get(cpsw->ale, 0, ALE_AGEOUT);

		cpsw_init_host_port_dual_mac(cpsw);

		for (i = 0; i < cpsw->data.slaves; i++) {
			struct cpsw_slave *slave = &cpsw->slaves[i];
			struct net_device *sl_ndev = slave->ndev;
			struct cpsw_priv *priv;

			if (!sl_ndev)
				continue;

			priv = netdev_priv(slave->ndev);
			slave->port_vlan = slave->data->dual_emac_res_vlan;
			cpsw_port_add_dual_emac_def_ale_entries(priv, slave);
		}

		cpsw_ale_control_set(cpsw->ale, 0, ALE_BYPASS, 0);
		cpsw->data.dual_emac = true;
	}
exit:
	rtnl_unlock();

	return 0;
}

static int cpsw_dl_ale_ctrl_get(struct devlink *dl, u32 id,
				struct devlink_param_gset_ctx *ctx)
{
	struct cpsw_devlink *dl_priv = devlink_priv(dl);
	struct cpsw_common *cpsw = dl_priv->cpsw;

	dev_dbg(cpsw->dev, "%s id:%u\n", __func__, id);

	switch (id) {
	case CPSW_DL_PARAM_ALE_BYPASS:
		ctx->val.vbool = cpsw_ale_control_get(cpsw->ale, 0, ALE_BYPASS);
		break;
	default:
		return -EOPNOTSUPP;
	}

	return 0;
}

static int cpsw_dl_ale_ctrl_set(struct devlink *dl, u32 id,
				struct devlink_param_gset_ctx *ctx)
{
	struct cpsw_devlink *dl_priv = devlink_priv(dl);
	struct cpsw_common *cpsw = dl_priv->cpsw;
	int ret = -EOPNOTSUPP;

	dev_dbg(cpsw->dev, "%s id:%u\n", __func__, id);

	switch (id) {
	case CPSW_DL_PARAM_ALE_BYPASS:
		ret = cpsw_ale_control_set(cpsw->ale, 0, ALE_BYPASS,
					   ctx->val.vbool);
		if (!ret) {
			cpsw->ale_bypass = ctx->val.vbool;
			cpsw_port_offload_fwd_mark_update(cpsw);
		}
		break;
	default:
		return -EOPNOTSUPP;
	}

	return 0;
}

static const struct devlink_param cpsw_devlink_params[] = {
	DEVLINK_PARAM_DRIVER(CPSW_DL_PARAM_SWITCH_MODE,
			     "switch_mode", DEVLINK_PARAM_TYPE_BOOL,
			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			     cpsw_dl_switch_mode_get, cpsw_dl_switch_mode_set,
			     NULL),
	DEVLINK_PARAM_DRIVER(CPSW_DL_PARAM_ALE_BYPASS,
			     "ale_bypass", DEVLINK_PARAM_TYPE_BOOL,
			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			     cpsw_dl_ale_ctrl_get, cpsw_dl_ale_ctrl_set, NULL),
};

static int cpsw_register_devlink(struct cpsw_common *cpsw)
{
	struct device *dev = cpsw->dev;
	struct cpsw_devlink *dl_priv;
	int ret = 0;

	cpsw->devlink = devlink_alloc(&cpsw_devlink_ops, sizeof(*dl_priv));
	if (!cpsw->devlink)
		return -ENOMEM;

	dl_priv = devlink_priv(cpsw->devlink);
	dl_priv->cpsw = cpsw;

	ret = devlink_register(cpsw->devlink, dev);
	if (ret) {
		dev_err(dev, "DL reg fail ret:%d\n", ret);
		goto dl_free;
	}

	ret = devlink_params_register(cpsw->devlink, cpsw_devlink_params,
				      ARRAY_SIZE(cpsw_devlink_params));
	if (ret) {
		dev_err(dev, "DL params reg fail ret:%d\n", ret);
		goto dl_unreg;
	}

	devlink_params_publish(cpsw->devlink);
	return ret;

dl_unreg:
	devlink_unregister(cpsw->devlink);
dl_free:
	devlink_free(cpsw->devlink);
	return ret;
}

static void cpsw_unregister_devlink(struct cpsw_common *cpsw)
{
	devlink_params_unpublish(cpsw->devlink);
	devlink_params_unregister(cpsw->devlink, cpsw_devlink_params,
				  ARRAY_SIZE(cpsw_devlink_params));
	devlink_unregister(cpsw->devlink);
	devlink_free(cpsw->devlink);
}

static const struct of_device_id cpsw_of_mtable[] = {
	{ .compatible = "ti,cpsw-switch"},
	{ .compatible = "ti,am335x-cpsw-switch"},
	{ .compatible = "ti,am4372-cpsw-switch"},
	{ .compatible = "ti,dra7-cpsw-switch"},
	{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, cpsw_of_mtable);

static const struct soc_device_attribute cpsw_soc_devices[] = {
	{ .family = "AM33xx", .revision = "ES1.0"},
	{ /* sentinel */ }
};

static int cpsw_probe(struct platform_device *pdev)
{
	const struct soc_device_attribute *soc;
	struct device *dev = &pdev->dev;
	struct cpsw_common *cpsw;
	struct resource *ss_res;
	struct gpio_descs *mode;
	void __iomem *ss_regs;
	int ret = 0, ch;
	struct clk *clk;
	int irq;

	cpsw = devm_kzalloc(dev, sizeof(struct cpsw_common), GFP_KERNEL);
	if (!cpsw)
		return -ENOMEM;

	cpsw_slave_index = cpsw_slave_index_priv;

	cpsw->dev = dev;

	cpsw->slaves = devm_kcalloc(dev,
				    CPSW_SLAVE_PORTS_NUM,
				    sizeof(struct cpsw_slave),
				    GFP_KERNEL);
	if (!cpsw->slaves)
		return -ENOMEM;

	mode = devm_gpiod_get_array_optional(dev, "mode", GPIOD_OUT_LOW);
	if (IS_ERR(mode)) {
		ret = PTR_ERR(mode);
		dev_err(dev, "gpio request failed, ret %d\n", ret);
		return ret;
	}

	clk = devm_clk_get(dev, "fck");
	if (IS_ERR(clk)) {
		ret = PTR_ERR(clk);
		dev_err(dev, "fck is not found %d\n", ret);
		return ret;
	}
	cpsw->bus_freq_mhz = clk_get_rate(clk) / 1000000;

	ss_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	ss_regs = devm_ioremap_resource(dev, ss_res);
	if (IS_ERR(ss_regs)) {
		ret = PTR_ERR(ss_regs);
		return ret;
	}
	cpsw->regs = ss_regs;

	irq = platform_get_irq_byname(pdev, "rx");
	if (irq < 0)
		return irq;
	cpsw->irqs_table[0] = irq;

	irq = platform_get_irq_byname(pdev, "tx");
	if (irq < 0)
		return irq;
	cpsw->irqs_table[1] = irq;

	platform_set_drvdata(pdev, cpsw);
	/* This may be required here for child devices. */
	pm_runtime_enable(dev);

	/* Need to enable clocks with runtime PM api to access module
	 * registers
	 */
	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		pm_runtime_put_noidle(dev);
		pm_runtime_disable(dev);
		return ret;
	}

	ret = cpsw_probe_dt(cpsw);
	if (ret)
		goto clean_dt_ret;

	soc = soc_device_match(cpsw_soc_devices);
	if (soc)
		cpsw->quirk_irq = 1;

	cpsw->rx_packet_max = rx_packet_max;
	cpsw->descs_pool_size = descs_pool_size;
	eth_random_addr(cpsw->base_mac);

	ret = cpsw_init_common(cpsw, ss_regs, ale_ageout,
			       (u32 __force)ss_res->start + CPSW2_BD_OFFSET,
			       descs_pool_size);
	if (ret)
		goto clean_dt_ret;

	cpsw->wr_regs = cpsw->version == CPSW_VERSION_1 ?
			ss_regs + CPSW1_WR_OFFSET :
			ss_regs + CPSW2_WR_OFFSET;

	ch = cpsw->quirk_irq ? 0 : 7;
	cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, ch, cpsw_tx_handler, 0);
	if (IS_ERR(cpsw->txv[0].ch)) {
		dev_err(dev, "error initializing tx dma channel\n");
		ret = PTR_ERR(cpsw->txv[0].ch);
		goto clean_cpts;
	}

	cpsw->rxv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_rx_handler, 1);
	if (IS_ERR(cpsw->rxv[0].ch)) {
		dev_err(dev, "error initializing rx dma channel\n");
		ret = PTR_ERR(cpsw->rxv[0].ch);
		goto clean_cpts;
	}
	cpsw_split_res(cpsw);

	/* setup netdevs */
	ret = cpsw_create_ports(cpsw);
	if (ret)
		goto clean_unregister_netdev;

	/* Grab RX and TX IRQs. Note that we also have RX_THRESHOLD and
	 * MISC IRQs which are always kept disabled with this driver so
	 * we will not request them.
	 *
	 * If anyone wants to implement support for those, make sure to
	 * first request and append them to irqs_table array.
	 */

	ret = devm_request_irq(dev, cpsw->irqs_table[0], cpsw_rx_interrupt,
			       0, dev_name(dev), cpsw);
	if (ret < 0) {
		dev_err(dev, "error attaching irq (%d)\n", ret);
		goto clean_unregister_netdev;
	}

	ret = devm_request_irq(dev, cpsw->irqs_table[1], cpsw_tx_interrupt,
			       0, dev_name(dev), cpsw);
	if (ret < 0) {
		dev_err(dev, "error attaching irq (%d)\n", ret);
		goto clean_unregister_netdev;
	}

	ret = cpsw_register_notifiers(cpsw);
	if (ret)
		goto clean_unregister_netdev;

	ret = cpsw_register_devlink(cpsw);
	if (ret)
		goto clean_unregister_notifiers;

	ret = cpsw_register_ports(cpsw);
	if (ret)
		goto clean_unregister_notifiers;

	dev_notice(dev, "initialized (regs %pa, pool size %d) hw_ver:%08X %d.%d (%d)\n",
		   &ss_res->start, descs_pool_size,
		   cpsw->version, CPSW_MAJOR_VERSION(cpsw->version),
		   CPSW_MINOR_VERSION(cpsw->version),
		   CPSW_RTL_VERSION(cpsw->version));

	pm_runtime_put(dev);

	return 0;

clean_unregister_notifiers:
	cpsw_unregister_notifiers(cpsw);
clean_unregister_netdev:
	cpsw_unregister_ports(cpsw);
clean_cpts:
	cpts_release(cpsw->cpts);
	cpdma_ctlr_destroy(cpsw->dma);
clean_dt_ret:
	cpsw_remove_dt(cpsw);
	pm_runtime_put_sync(dev);
	pm_runtime_disable(dev);
	return ret;
}

static int cpsw_remove(struct platform_device *pdev)
{
	struct cpsw_common *cpsw = platform_get_drvdata(pdev);
	int ret;

	ret = pm_runtime_get_sync(&pdev->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(&pdev->dev);
		return ret;
	}

	cpsw_unregister_notifiers(cpsw);
	cpsw_unregister_devlink(cpsw);
	cpsw_unregister_ports(cpsw);

	cpts_release(cpsw->cpts);
	cpdma_ctlr_destroy(cpsw->dma);
	cpsw_remove_dt(cpsw);
	pm_runtime_put_sync(&pdev->dev);
	pm_runtime_disable(&pdev->dev);
	return 0;
}

static struct platform_driver cpsw_driver = {
	.driver = {
		.name		= "cpsw-switch",
		.of_match_table	= cpsw_of_mtable,
	},
	.probe = cpsw_probe,
	.remove = cpsw_remove,
};

module_platform_driver(cpsw_driver);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("TI CPSW switchdev Ethernet driver");
+1245 -1
drivers/net/ethernet/ti/cpsw_priv.c
··· 5 5 * Copyright (C) 2019 Texas Instruments 6 6 */ 7 7 8 + #include <linux/bpf.h> 9 + #include <linux/bpf_trace.h> 8 10 #include <linux/if_ether.h> 9 11 #include <linux/if_vlan.h> 12 + #include <linux/kmemleak.h> 10 13 #include <linux/module.h> 11 14 #include <linux/netdevice.h> 15 + #include <linux/net_tstamp.h> 16 + #include <linux/of.h> 12 17 #include <linux/phy.h> 13 18 #include <linux/platform_device.h> 19 + #include <linux/pm_runtime.h> 14 20 #include <linux/skbuff.h> 21 + #include <net/page_pool.h> 22 + #include <net/pkt_cls.h> 15 23 24 + #include "cpsw.h" 16 25 #include "cpts.h" 17 26 #include "cpsw_ale.h" 18 27 #include "cpsw_priv.h" 19 28 #include "cpsw_sl.h" 20 29 #include "davinci_cpdma.h" 30 + 31 + int (*cpsw_slave_index)(struct cpsw_common *cpsw, struct cpsw_priv *priv); 32 + 33 + void cpsw_intr_enable(struct cpsw_common *cpsw) 34 + { 35 + writel_relaxed(0xFF, &cpsw->wr_regs->tx_en); 36 + writel_relaxed(0xFF, &cpsw->wr_regs->rx_en); 37 + 38 + cpdma_ctlr_int_ctrl(cpsw->dma, true); 39 + } 40 + 41 + void cpsw_intr_disable(struct cpsw_common *cpsw) 42 + { 43 + writel_relaxed(0, &cpsw->wr_regs->tx_en); 44 + writel_relaxed(0, &cpsw->wr_regs->rx_en); 45 + 46 + cpdma_ctlr_int_ctrl(cpsw->dma, false); 47 + } 48 + 49 + void cpsw_tx_handler(void *token, int len, int status) 50 + { 51 + struct cpsw_meta_xdp *xmeta; 52 + struct xdp_frame *xdpf; 53 + struct net_device *ndev; 54 + struct netdev_queue *txq; 55 + struct sk_buff *skb; 56 + int ch; 57 + 58 + if (cpsw_is_xdpf_handle(token)) { 59 + xdpf = cpsw_handle_to_xdpf(token); 60 + xmeta = (void *)xdpf + CPSW_XMETA_OFFSET; 61 + ndev = xmeta->ndev; 62 + ch = xmeta->ch; 63 + xdp_return_frame(xdpf); 64 + } else { 65 + skb = token; 66 + ndev = skb->dev; 67 + ch = skb_get_queue_mapping(skb); 68 + cpts_tx_timestamp(ndev_to_cpsw(ndev)->cpts, skb); 69 + dev_kfree_skb_any(skb); 70 + } 71 + 72 + /* Check whether the queue is stopped due to stalled tx dma, if the 73 + * queue is stopped then start the queue as we have free 
desc for tx 74 + */ 75 + txq = netdev_get_tx_queue(ndev, ch); 76 + if (unlikely(netif_tx_queue_stopped(txq))) 77 + netif_tx_wake_queue(txq); 78 + 79 + ndev->stats.tx_packets++; 80 + ndev->stats.tx_bytes += len; 81 + } 82 + 83 + irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id) 84 + { 85 + struct cpsw_common *cpsw = dev_id; 86 + 87 + writel(0, &cpsw->wr_regs->tx_en); 88 + cpdma_ctlr_eoi(cpsw->dma, CPDMA_EOI_TX); 89 + 90 + if (cpsw->quirk_irq) { 91 + disable_irq_nosync(cpsw->irqs_table[1]); 92 + cpsw->tx_irq_disabled = true; 93 + } 94 + 95 + napi_schedule(&cpsw->napi_tx); 96 + return IRQ_HANDLED; 97 + } 98 + 99 + irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id) 100 + { 101 + struct cpsw_common *cpsw = dev_id; 102 + 103 + cpdma_ctlr_eoi(cpsw->dma, CPDMA_EOI_RX); 104 + writel(0, &cpsw->wr_regs->rx_en); 105 + 106 + if (cpsw->quirk_irq) { 107 + disable_irq_nosync(cpsw->irqs_table[0]); 108 + cpsw->rx_irq_disabled = true; 109 + } 110 + 111 + napi_schedule(&cpsw->napi_rx); 112 + return IRQ_HANDLED; 113 + } 114 + 115 + int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget) 116 + { 117 + struct cpsw_common *cpsw = napi_to_cpsw(napi_tx); 118 + int num_tx, cur_budget, ch; 119 + u32 ch_map; 120 + struct cpsw_vector *txv; 121 + 122 + /* process every unprocessed channel */ 123 + ch_map = cpdma_ctrl_txchs_state(cpsw->dma); 124 + for (ch = 0, num_tx = 0; ch_map & 0xff; ch_map <<= 1, ch++) { 125 + if (!(ch_map & 0x80)) 126 + continue; 127 + 128 + txv = &cpsw->txv[ch]; 129 + if (unlikely(txv->budget > budget - num_tx)) 130 + cur_budget = budget - num_tx; 131 + else 132 + cur_budget = txv->budget; 133 + 134 + num_tx += cpdma_chan_process(txv->ch, cur_budget); 135 + if (num_tx >= budget) 136 + break; 137 + } 138 + 139 + if (num_tx < budget) { 140 + napi_complete(napi_tx); 141 + writel(0xff, &cpsw->wr_regs->tx_en); 142 + } 143 + 144 + return num_tx; 145 + } 146 + 147 + int cpsw_tx_poll(struct napi_struct *napi_tx, int budget) 148 + { 149 + struct cpsw_common *cpsw = 
napi_to_cpsw(napi_tx); 150 + int num_tx; 151 + 152 + num_tx = cpdma_chan_process(cpsw->txv[0].ch, budget); 153 + if (num_tx < budget) { 154 + napi_complete(napi_tx); 155 + writel(0xff, &cpsw->wr_regs->tx_en); 156 + if (cpsw->tx_irq_disabled) { 157 + cpsw->tx_irq_disabled = false; 158 + enable_irq(cpsw->irqs_table[1]); 159 + } 160 + } 161 + 162 + return num_tx; 163 + } 164 + 165 + int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget) 166 + { 167 + struct cpsw_common *cpsw = napi_to_cpsw(napi_rx); 168 + int num_rx, cur_budget, ch; 169 + u32 ch_map; 170 + struct cpsw_vector *rxv; 171 + 172 + /* process every unprocessed channel */ 173 + ch_map = cpdma_ctrl_rxchs_state(cpsw->dma); 174 + for (ch = 0, num_rx = 0; ch_map; ch_map >>= 1, ch++) { 175 + if (!(ch_map & 0x01)) 176 + continue; 177 + 178 + rxv = &cpsw->rxv[ch]; 179 + if (unlikely(rxv->budget > budget - num_rx)) 180 + cur_budget = budget - num_rx; 181 + else 182 + cur_budget = rxv->budget; 183 + 184 + num_rx += cpdma_chan_process(rxv->ch, cur_budget); 185 + if (num_rx >= budget) 186 + break; 187 + } 188 + 189 + if (num_rx < budget) { 190 + napi_complete_done(napi_rx, num_rx); 191 + writel(0xff, &cpsw->wr_regs->rx_en); 192 + } 193 + 194 + return num_rx; 195 + } 196 + 197 + int cpsw_rx_poll(struct napi_struct *napi_rx, int budget) 198 + { 199 + struct cpsw_common *cpsw = napi_to_cpsw(napi_rx); 200 + int num_rx; 201 + 202 + num_rx = cpdma_chan_process(cpsw->rxv[0].ch, budget); 203 + if (num_rx < budget) { 204 + napi_complete_done(napi_rx, num_rx); 205 + writel(0xff, &cpsw->wr_regs->rx_en); 206 + if (cpsw->rx_irq_disabled) { 207 + cpsw->rx_irq_disabled = false; 208 + enable_irq(cpsw->irqs_table[0]); 209 + } 210 + } 211 + 212 + return num_rx; 213 + } 214 + 215 + void cpsw_rx_vlan_encap(struct sk_buff *skb) 216 + { 217 + struct cpsw_priv *priv = netdev_priv(skb->dev); 218 + u32 rx_vlan_encap_hdr = *((u32 *)skb->data); 219 + struct cpsw_common *cpsw = priv->cpsw; 220 + u16 vtag, vid, prio, pkt_type; 221 + 222 + /* 
Remove VLAN header encapsulation word */ 223 + skb_pull(skb, CPSW_RX_VLAN_ENCAP_HDR_SIZE); 224 + 225 + pkt_type = (rx_vlan_encap_hdr >> 226 + CPSW_RX_VLAN_ENCAP_HDR_PKT_TYPE_SHIFT) & 227 + CPSW_RX_VLAN_ENCAP_HDR_PKT_TYPE_MSK; 228 + /* Ignore unknown & Priority-tagged packets*/ 229 + if (pkt_type == CPSW_RX_VLAN_ENCAP_HDR_PKT_RESERV || 230 + pkt_type == CPSW_RX_VLAN_ENCAP_HDR_PKT_PRIO_TAG) 231 + return; 232 + 233 + vid = (rx_vlan_encap_hdr >> 234 + CPSW_RX_VLAN_ENCAP_HDR_VID_SHIFT) & 235 + VLAN_VID_MASK; 236 + /* Ignore vid 0 and pass packet as is */ 237 + if (!vid) 238 + return; 239 + 240 + /* Untag P0 packets if set for vlan */ 241 + if (!cpsw_ale_get_vlan_p0_untag(cpsw->ale, vid)) { 242 + prio = (rx_vlan_encap_hdr >> 243 + CPSW_RX_VLAN_ENCAP_HDR_PRIO_SHIFT) & 244 + CPSW_RX_VLAN_ENCAP_HDR_PRIO_MSK; 245 + 246 + vtag = (prio << VLAN_PRIO_SHIFT) | vid; 247 + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vtag); 248 + } 249 + 250 + /* strip vlan tag for VLAN-tagged packet */ 251 + if (pkt_type == CPSW_RX_VLAN_ENCAP_HDR_PKT_VLAN_TAG) { 252 + memmove(skb->data + VLAN_HLEN, skb->data, 2 * ETH_ALEN); 253 + skb_pull(skb, VLAN_HLEN); 254 + } 255 + } 256 + 257 + void cpsw_set_slave_mac(struct cpsw_slave *slave, struct cpsw_priv *priv) 258 + { 259 + slave_write(slave, mac_hi(priv->mac_addr), SA_HI); 260 + slave_write(slave, mac_lo(priv->mac_addr), SA_LO); 261 + } 262 + 263 + void soft_reset(const char *module, void __iomem *reg) 264 + { 265 + unsigned long timeout = jiffies + HZ; 266 + 267 + writel_relaxed(1, reg); 268 + do { 269 + cpu_relax(); 270 + } while ((readl_relaxed(reg) & 1) && time_after(timeout, jiffies)); 271 + 272 + WARN(readl_relaxed(reg) & 1, "failed to soft-reset %s\n", module); 273 + } 274 + 275 + void cpsw_ndo_tx_timeout(struct net_device *ndev) 276 + { 277 + struct cpsw_priv *priv = netdev_priv(ndev); 278 + struct cpsw_common *cpsw = priv->cpsw; 279 + int ch; 280 + 281 + cpsw_err(priv, tx_err, "transmit timeout, restarting dma\n"); 282 + 
ndev->stats.tx_errors++; 283 + cpsw_intr_disable(cpsw); 284 + for (ch = 0; ch < cpsw->tx_ch_num; ch++) { 285 + cpdma_chan_stop(cpsw->txv[ch].ch); 286 + cpdma_chan_start(cpsw->txv[ch].ch); 287 + } 288 + 289 + cpsw_intr_enable(cpsw); 290 + netif_trans_update(ndev); 291 + netif_tx_wake_all_queues(ndev); 292 + } 293 + 294 + static int cpsw_get_common_speed(struct cpsw_common *cpsw) 295 + { 296 + int i, speed; 297 + 298 + for (i = 0, speed = 0; i < cpsw->data.slaves; i++) 299 + if (cpsw->slaves[i].phy && cpsw->slaves[i].phy->link) 300 + speed += cpsw->slaves[i].phy->speed; 301 + 302 + return speed; 303 + } 304 + 305 + int cpsw_need_resplit(struct cpsw_common *cpsw) 306 + { 307 + int i, rlim_ch_num; 308 + int speed, ch_rate; 309 + 310 + /* re-split resources only in case speed was changed */ 311 + speed = cpsw_get_common_speed(cpsw); 312 + if (speed == cpsw->speed || !speed) 313 + return 0; 314 + 315 + cpsw->speed = speed; 316 + 317 + for (i = 0, rlim_ch_num = 0; i < cpsw->tx_ch_num; i++) { 318 + ch_rate = cpdma_chan_get_rate(cpsw->txv[i].ch); 319 + if (!ch_rate) 320 + break; 321 + 322 + rlim_ch_num++; 323 + } 324 + 325 + /* cases not dependent on speed */ 326 + if (!rlim_ch_num || rlim_ch_num == cpsw->tx_ch_num) 327 + return 0; 328 + 329 + return 1; 330 + } 331 + 332 + void cpsw_split_res(struct cpsw_common *cpsw) 333 + { 334 + u32 consumed_rate = 0, bigest_rate = 0; 335 + struct cpsw_vector *txv = cpsw->txv; 336 + int i, ch_weight, rlim_ch_num = 0; 337 + int budget, bigest_rate_ch = 0; 338 + u32 ch_rate, max_rate; 339 + int ch_budget = 0; 340 + 341 + for (i = 0; i < cpsw->tx_ch_num; i++) { 342 + ch_rate = cpdma_chan_get_rate(txv[i].ch); 343 + if (!ch_rate) 344 + continue; 345 + 346 + rlim_ch_num++; 347 + consumed_rate += ch_rate; 348 + } 349 + 350 + if (cpsw->tx_ch_num == rlim_ch_num) { 351 + max_rate = consumed_rate; 352 + } else if (!rlim_ch_num) { 353 + ch_budget = CPSW_POLL_WEIGHT / cpsw->tx_ch_num; 354 + bigest_rate = 0; 355 + max_rate = consumed_rate; 356 + } 
else { 357 + max_rate = cpsw->speed * 1000; 358 + 359 + /* if max_rate is less than expected due to reduced link speed, 360 + * split proportionally according to the next potential max speed 361 + */ 362 + if (max_rate < consumed_rate) 363 + max_rate *= 10; 364 + 365 + if (max_rate < consumed_rate) 366 + max_rate *= 10; 367 + 368 + ch_budget = (consumed_rate * CPSW_POLL_WEIGHT) / max_rate; 369 + ch_budget = (CPSW_POLL_WEIGHT - ch_budget) / 370 + (cpsw->tx_ch_num - rlim_ch_num); 371 + bigest_rate = (max_rate - consumed_rate) / 372 + (cpsw->tx_ch_num - rlim_ch_num); 373 + } 374 + 375 + /* split tx weight/budget */ 376 + budget = CPSW_POLL_WEIGHT; 377 + for (i = 0; i < cpsw->tx_ch_num; i++) { 378 + ch_rate = cpdma_chan_get_rate(txv[i].ch); 379 + if (ch_rate) { 380 + txv[i].budget = (ch_rate * CPSW_POLL_WEIGHT) / max_rate; 381 + if (!txv[i].budget) 382 + txv[i].budget++; 383 + if (ch_rate > bigest_rate) { 384 + bigest_rate_ch = i; 385 + bigest_rate = ch_rate; 386 + } 387 + 388 + ch_weight = (ch_rate * 100) / max_rate; 389 + if (!ch_weight) 390 + ch_weight++; 391 + cpdma_chan_set_weight(cpsw->txv[i].ch, ch_weight); 392 + } else { 393 + txv[i].budget = ch_budget; 394 + if (!bigest_rate_ch) 395 + bigest_rate_ch = i; 396 + cpdma_chan_set_weight(cpsw->txv[i].ch, 0); 397 + } 398 + 399 + budget -= txv[i].budget; 400 + } 401 + 402 + if (budget) 403 + txv[bigest_rate_ch].budget += budget; 404 + 405 + /* split rx budget */ 406 + budget = CPSW_POLL_WEIGHT; 407 + ch_budget = budget / cpsw->rx_ch_num; 408 + for (i = 0; i < cpsw->rx_ch_num; i++) { 409 + cpsw->rxv[i].budget = ch_budget; 410 + budget -= ch_budget; 411 + } 412 + 413 + if (budget) 414 + cpsw->rxv[0].budget += budget; 415 + } 21 416 22 417 int cpsw_init_common(struct cpsw_common *cpsw, void __iomem *ss_regs, 23 418 int ale_ageout, phys_addr_t desc_mem_phys, ··· 423 28 struct cpsw_platform_data *data; 424 29 struct cpdma_params dma_params; 425 30 struct device *dev = cpsw->dev; 31 + struct device_node *cpts_node; 426 32 void 
__iomem *cpts_regs; 427 33 int ret = 0, i; 428 34 ··· 518 122 return -ENOMEM; 519 123 } 520 124 521 - cpsw->cpts = cpts_create(cpsw->dev, cpts_regs, cpsw->dev->of_node); 125 + cpts_node = of_get_child_by_name(cpsw->dev->of_node, "cpts"); 126 + if (!cpts_node) 127 + cpts_node = cpsw->dev->of_node; 128 + 129 + cpsw->cpts = cpts_create(cpsw->dev, cpts_regs, cpts_node); 522 130 if (IS_ERR(cpsw->cpts)) { 523 131 ret = PTR_ERR(cpsw->cpts); 524 132 cpdma_ctlr_destroy(cpsw->dma); 525 133 } 134 + of_node_put(cpts_node); 526 135 136 + return ret; 137 + } 138 + 139 + #if IS_ENABLED(CONFIG_TI_CPTS) 140 + 141 + static void cpsw_hwtstamp_v1(struct cpsw_priv *priv) 142 + { 143 + struct cpsw_common *cpsw = priv->cpsw; 144 + struct cpsw_slave *slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)]; 145 + u32 ts_en, seq_id; 146 + 147 + if (!priv->tx_ts_enabled && !priv->rx_ts_enabled) { 148 + slave_write(slave, 0, CPSW1_TS_CTL); 149 + return; 150 + } 151 + 152 + seq_id = (30 << CPSW_V1_SEQ_ID_OFS_SHIFT) | ETH_P_1588; 153 + ts_en = EVENT_MSG_BITS << CPSW_V1_MSG_TYPE_OFS; 154 + 155 + if (priv->tx_ts_enabled) 156 + ts_en |= CPSW_V1_TS_TX_EN; 157 + 158 + if (priv->rx_ts_enabled) 159 + ts_en |= CPSW_V1_TS_RX_EN; 160 + 161 + slave_write(slave, ts_en, CPSW1_TS_CTL); 162 + slave_write(slave, seq_id, CPSW1_TS_SEQ_LTYPE); 163 + } 164 + 165 + static void cpsw_hwtstamp_v2(struct cpsw_priv *priv) 166 + { 167 + struct cpsw_common *cpsw = priv->cpsw; 168 + struct cpsw_slave *slave; 169 + u32 ctrl, mtype; 170 + 171 + slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)]; 172 + 173 + ctrl = slave_read(slave, CPSW2_CONTROL); 174 + switch (cpsw->version) { 175 + case CPSW_VERSION_2: 176 + ctrl &= ~CTRL_V2_ALL_TS_MASK; 177 + 178 + if (priv->tx_ts_enabled) 179 + ctrl |= CTRL_V2_TX_TS_BITS; 180 + 181 + if (priv->rx_ts_enabled) 182 + ctrl |= CTRL_V2_RX_TS_BITS; 183 + break; 184 + case CPSW_VERSION_3: 185 + default: 186 + ctrl &= ~CTRL_V3_ALL_TS_MASK; 187 + 188 + if (priv->tx_ts_enabled) 189 + ctrl |= 
CTRL_V3_TX_TS_BITS; 190 + 191 + if (priv->rx_ts_enabled) 192 + ctrl |= CTRL_V3_RX_TS_BITS; 193 + break; 194 + } 195 + 196 + mtype = (30 << TS_SEQ_ID_OFFSET_SHIFT) | EVENT_MSG_BITS; 197 + 198 + slave_write(slave, mtype, CPSW2_TS_SEQ_MTYPE); 199 + slave_write(slave, ctrl, CPSW2_CONTROL); 200 + writel_relaxed(ETH_P_1588, &cpsw->regs->ts_ltype); 201 + writel_relaxed(ETH_P_8021Q, &cpsw->regs->vlan_ltype); 202 + } 203 + 204 + static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) 205 + { 206 + struct cpsw_priv *priv = netdev_priv(dev); 207 + struct cpsw_common *cpsw = priv->cpsw; 208 + struct hwtstamp_config cfg; 209 + 210 + if (cpsw->version != CPSW_VERSION_1 && 211 + cpsw->version != CPSW_VERSION_2 && 212 + cpsw->version != CPSW_VERSION_3) 213 + return -EOPNOTSUPP; 214 + 215 + if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg))) 216 + return -EFAULT; 217 + 218 + /* reserved for future extensions */ 219 + if (cfg.flags) 220 + return -EINVAL; 221 + 222 + if (cfg.tx_type != HWTSTAMP_TX_OFF && cfg.tx_type != HWTSTAMP_TX_ON) 223 + return -ERANGE; 224 + 225 + switch (cfg.rx_filter) { 226 + case HWTSTAMP_FILTER_NONE: 227 + priv->rx_ts_enabled = 0; 228 + break; 229 + case HWTSTAMP_FILTER_ALL: 230 + case HWTSTAMP_FILTER_NTP_ALL: 231 + return -ERANGE; 232 + case HWTSTAMP_FILTER_PTP_V1_L4_EVENT: 233 + case HWTSTAMP_FILTER_PTP_V1_L4_SYNC: 234 + case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ: 235 + priv->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; 236 + cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; 237 + break; 238 + case HWTSTAMP_FILTER_PTP_V2_L4_EVENT: 239 + case HWTSTAMP_FILTER_PTP_V2_L4_SYNC: 240 + case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ: 241 + case HWTSTAMP_FILTER_PTP_V2_L2_EVENT: 242 + case HWTSTAMP_FILTER_PTP_V2_L2_SYNC: 243 + case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ: 244 + case HWTSTAMP_FILTER_PTP_V2_EVENT: 245 + case HWTSTAMP_FILTER_PTP_V2_SYNC: 246 + case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ: 247 + priv->rx_ts_enabled = HWTSTAMP_FILTER_PTP_V2_EVENT; 248 + 
cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; 249 + break; 250 + default: 251 + return -ERANGE; 252 + } 253 + 254 + priv->tx_ts_enabled = cfg.tx_type == HWTSTAMP_TX_ON; 255 + 256 + switch (cpsw->version) { 257 + case CPSW_VERSION_1: 258 + cpsw_hwtstamp_v1(priv); 259 + break; 260 + case CPSW_VERSION_2: 261 + case CPSW_VERSION_3: 262 + cpsw_hwtstamp_v2(priv); 263 + break; 264 + default: 265 + WARN_ON(1); 266 + } 267 + 268 + return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0; 269 + } 270 + 271 + static int cpsw_hwtstamp_get(struct net_device *dev, struct ifreq *ifr) 272 + { 273 + struct cpsw_common *cpsw = ndev_to_cpsw(dev); 274 + struct cpsw_priv *priv = netdev_priv(dev); 275 + struct hwtstamp_config cfg; 276 + 277 + if (cpsw->version != CPSW_VERSION_1 && 278 + cpsw->version != CPSW_VERSION_2 && 279 + cpsw->version != CPSW_VERSION_3) 280 + return -EOPNOTSUPP; 281 + 282 + cfg.flags = 0; 283 + cfg.tx_type = priv->tx_ts_enabled ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF; 284 + cfg.rx_filter = priv->rx_ts_enabled; 285 + 286 + return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? 
-EFAULT : 0; 287 + } 288 + #else 289 + static int cpsw_hwtstamp_get(struct net_device *dev, struct ifreq *ifr) 290 + { 291 + return -EOPNOTSUPP; 292 + } 293 + 294 + static int cpsw_hwtstamp_set(struct net_device *dev, struct ifreq *ifr) 295 + { 296 + return -EOPNOTSUPP; 297 + } 298 + #endif /*CONFIG_TI_CPTS*/ 299 + 300 + int cpsw_ndo_ioctl(struct net_device *dev, struct ifreq *req, int cmd) 301 + { 302 + struct cpsw_priv *priv = netdev_priv(dev); 303 + struct cpsw_common *cpsw = priv->cpsw; 304 + int slave_no = cpsw_slave_index(cpsw, priv); 305 + 306 + if (!netif_running(dev)) 307 + return -EINVAL; 308 + 309 + switch (cmd) { 310 + case SIOCSHWTSTAMP: 311 + return cpsw_hwtstamp_set(dev, req); 312 + case SIOCGHWTSTAMP: 313 + return cpsw_hwtstamp_get(dev, req); 314 + } 315 + 316 + if (!cpsw->slaves[slave_no].phy) 317 + return -EOPNOTSUPP; 318 + return phy_mii_ioctl(cpsw->slaves[slave_no].phy, req, cmd); 319 + } 320 + 321 + int cpsw_ndo_set_tx_maxrate(struct net_device *ndev, int queue, u32 rate) 322 + { 323 + struct cpsw_priv *priv = netdev_priv(ndev); 324 + struct cpsw_common *cpsw = priv->cpsw; 325 + struct cpsw_slave *slave; 326 + u32 min_rate; 327 + u32 ch_rate; 328 + int i, ret; 329 + 330 + ch_rate = netdev_get_tx_queue(ndev, queue)->tx_maxrate; 331 + if (ch_rate == rate) 332 + return 0; 333 + 334 + ch_rate = rate * 1000; 335 + min_rate = cpdma_chan_get_min_rate(cpsw->dma); 336 + if ((ch_rate < min_rate && ch_rate)) { 337 + dev_err(priv->dev, "The channel rate cannot be less than %dMbps", 338 + min_rate); 339 + return -EINVAL; 340 + } 341 + 342 + if (rate > cpsw->speed) { 343 + dev_err(priv->dev, "The channel rate cannot be more than 2Gbps"); 344 + return -EINVAL; 345 + } 346 + 347 + ret = pm_runtime_get_sync(cpsw->dev); 348 + if (ret < 0) { 349 + pm_runtime_put_noidle(cpsw->dev); 350 + return ret; 351 + } 352 + 353 + ret = cpdma_chan_set_rate(cpsw->txv[queue].ch, ch_rate); 354 + pm_runtime_put(cpsw->dev); 355 + 356 + if (ret) 357 + return ret; 358 + 359 + /* 
update rates for slaves tx queues */ 360 + for (i = 0; i < cpsw->data.slaves; i++) { 361 + slave = &cpsw->slaves[i]; 362 + if (!slave->ndev) 363 + continue; 364 + 365 + netdev_get_tx_queue(slave->ndev, queue)->tx_maxrate = rate; 366 + } 367 + 368 + cpsw_split_res(cpsw); 369 + return ret; 370 + } 371 + 372 + static int cpsw_tc_to_fifo(int tc, int num_tc) 373 + { 374 + if (tc == num_tc - 1) 375 + return 0; 376 + 377 + return CPSW_FIFO_SHAPERS_NUM - tc; 378 + } 379 + 380 + bool cpsw_shp_is_off(struct cpsw_priv *priv) 381 + { 382 + struct cpsw_common *cpsw = priv->cpsw; 383 + struct cpsw_slave *slave; 384 + u32 shift, mask, val; 385 + 386 + val = readl_relaxed(&cpsw->regs->ptype); 387 + 388 + slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)]; 389 + shift = CPSW_FIFO_SHAPE_EN_SHIFT + 3 * slave->slave_num; 390 + mask = 7 << shift; 391 + val = val & mask; 392 + 393 + return !val; 394 + } 395 + 396 + static void cpsw_fifo_shp_on(struct cpsw_priv *priv, int fifo, int on) 397 + { 398 + struct cpsw_common *cpsw = priv->cpsw; 399 + struct cpsw_slave *slave; 400 + u32 shift, mask, val; 401 + 402 + val = readl_relaxed(&cpsw->regs->ptype); 403 + 404 + slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)]; 405 + shift = CPSW_FIFO_SHAPE_EN_SHIFT + 3 * slave->slave_num; 406 + mask = (1 << --fifo) << shift; 407 + val = on ? 
val | mask : val & ~mask; 408 + 409 + writel_relaxed(val, &cpsw->regs->ptype); 410 + } 411 + 412 + static int cpsw_set_fifo_bw(struct cpsw_priv *priv, int fifo, int bw) 413 + { 414 + struct cpsw_common *cpsw = priv->cpsw; 415 + u32 val = 0, send_pct, shift; 416 + struct cpsw_slave *slave; 417 + int pct = 0, i; 418 + 419 + if (bw > priv->shp_cfg_speed * 1000) 420 + goto err; 421 + 422 + /* shaping has to stay enabled for highest fifos linearly 423 + * and fifo bw no more than the interface can allow 424 + */ 425 + slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)]; 426 + send_pct = slave_read(slave, SEND_PERCENT); 427 + for (i = CPSW_FIFO_SHAPERS_NUM; i > 0; i--) { 428 + if (!bw) { 429 + if (i >= fifo || !priv->fifo_bw[i]) 430 + continue; 431 + 432 + dev_warn(priv->dev, "Prev FIFO%d is shaped", i); 433 + continue; 434 + } 435 + 436 + if (!priv->fifo_bw[i] && i > fifo) { 437 + dev_err(priv->dev, "Upper FIFO%d is not shaped", i); 438 + return -EINVAL; 439 + } 440 + 441 + shift = (i - 1) * 8; 442 + if (i == fifo) { 443 + send_pct &= ~(CPSW_PCT_MASK << shift); 444 + val = DIV_ROUND_UP(bw, priv->shp_cfg_speed * 10); 445 + if (!val) 446 + val = 1; 447 + 448 + send_pct |= val << shift; 449 + pct += val; 450 + continue; 451 + } 452 + 453 + if (priv->fifo_bw[i]) 454 + pct += (send_pct >> shift) & CPSW_PCT_MASK; 455 + } 456 + 457 + if (pct >= 100) 458 + goto err; 459 + 460 + slave_write(slave, send_pct, SEND_PERCENT); 461 + priv->fifo_bw[fifo] = bw; 462 + 463 + dev_warn(priv->dev, "set FIFO%d bw = %d\n", fifo, 464 + DIV_ROUND_CLOSEST(val * priv->shp_cfg_speed, 100)); 465 + 466 + return 0; 467 + err: 468 + dev_err(priv->dev, "Bandwidth doesn't fit in tc configuration"); 469 + return -EINVAL; 470 + } 471 + 472 + static int cpsw_set_fifo_rlimit(struct cpsw_priv *priv, int fifo, int bw) 473 + { 474 + struct cpsw_common *cpsw = priv->cpsw; 475 + struct cpsw_slave *slave; 476 + u32 tx_in_ctl_rg, val; 477 + int ret; 478 + 479 + ret = cpsw_set_fifo_bw(priv, fifo, bw); 480 + if (ret) 481 
+ return ret; 482 + 483 + slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)]; 484 + tx_in_ctl_rg = cpsw->version == CPSW_VERSION_1 ? 485 + CPSW1_TX_IN_CTL : CPSW2_TX_IN_CTL; 486 + 487 + if (!bw) 488 + cpsw_fifo_shp_on(priv, fifo, bw); 489 + 490 + val = slave_read(slave, tx_in_ctl_rg); 491 + if (cpsw_shp_is_off(priv)) { 492 + /* disable FIFOs rate limited queues */ 493 + val &= ~(0xf << CPSW_FIFO_RATE_EN_SHIFT); 494 + 495 + /* set type of FIFO queues to normal priority mode */ 496 + val &= ~(3 << CPSW_FIFO_QUEUE_TYPE_SHIFT); 497 + 498 + /* set type of FIFO queues to be rate limited */ 499 + if (bw) 500 + val |= 2 << CPSW_FIFO_QUEUE_TYPE_SHIFT; 501 + else 502 + priv->shp_cfg_speed = 0; 503 + } 504 + 505 + /* toggle a FIFO rate limited queue */ 506 + if (bw) 507 + val |= BIT(fifo + CPSW_FIFO_RATE_EN_SHIFT); 508 + else 509 + val &= ~BIT(fifo + CPSW_FIFO_RATE_EN_SHIFT); 510 + slave_write(slave, val, tx_in_ctl_rg); 511 + 512 + /* FIFO transmit shape enable */ 513 + cpsw_fifo_shp_on(priv, fifo, bw); 514 + return 0; 515 + } 516 + 517 + /* Defaults: 518 + * class A - prio 3 519 + * class B - prio 2 520 + * shaping for class A should be set first 521 + */ 522 + static int cpsw_set_cbs(struct net_device *ndev, 523 + struct tc_cbs_qopt_offload *qopt) 524 + { 525 + struct cpsw_priv *priv = netdev_priv(ndev); 526 + struct cpsw_common *cpsw = priv->cpsw; 527 + struct cpsw_slave *slave; 528 + int prev_speed = 0; 529 + int tc, ret, fifo; 530 + u32 bw = 0; 531 + 532 + tc = netdev_txq_to_tc(priv->ndev, qopt->queue); 533 + 534 + /* enable channels in backward order, as highest FIFOs must be rate 535 + * limited first and for compliance with CPDMA rate limited channels 536 + * that are also used in backward order. FIFO0 cannot be rate limited. 
537 + */ 538 + fifo = cpsw_tc_to_fifo(tc, ndev->num_tc); 539 + if (!fifo) { 540 + dev_err(priv->dev, "Last tc%d can't be rate limited", tc); 541 + return -EINVAL; 542 + } 543 + 544 + /* do nothing, it's disabled anyway */ 545 + if (!qopt->enable && !priv->fifo_bw[fifo]) 546 + return 0; 547 + 548 + /* shapers can be set if link speed is known */ 549 + slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)]; 550 + if (slave->phy && slave->phy->link) { 551 + if (priv->shp_cfg_speed && 552 + priv->shp_cfg_speed != slave->phy->speed) 553 + prev_speed = priv->shp_cfg_speed; 554 + 555 + priv->shp_cfg_speed = slave->phy->speed; 556 + } 557 + 558 + if (!priv->shp_cfg_speed) { 559 + dev_err(priv->dev, "Link speed is not known"); 560 + return -1; 561 + } 562 + 563 + ret = pm_runtime_get_sync(cpsw->dev); 564 + if (ret < 0) { 565 + pm_runtime_put_noidle(cpsw->dev); 566 + return ret; 567 + } 568 + 569 + bw = qopt->enable ? qopt->idleslope : 0; 570 + ret = cpsw_set_fifo_rlimit(priv, fifo, bw); 571 + if (ret) { 572 + priv->shp_cfg_speed = prev_speed; 573 + prev_speed = 0; 574 + } 575 + 576 + if (bw && prev_speed) 577 + dev_warn(priv->dev, 578 + "Speed was changed, CBS shaper speeds are changed!"); 579 + 580 + pm_runtime_put_sync(cpsw->dev); 581 + return ret; 582 + } 583 + 584 + static int cpsw_set_mqprio(struct net_device *ndev, void *type_data) 585 + { 586 + struct tc_mqprio_qopt_offload *mqprio = type_data; 587 + struct cpsw_priv *priv = netdev_priv(ndev); 588 + struct cpsw_common *cpsw = priv->cpsw; 589 + int fifo, num_tc, count, offset; 590 + struct cpsw_slave *slave; 591 + u32 tx_prio_map = 0; 592 + int i, tc, ret; 593 + 594 + num_tc = mqprio->qopt.num_tc; 595 + if (num_tc > CPSW_TC_NUM) 596 + return -EINVAL; 597 + 598 + if (mqprio->mode != TC_MQPRIO_MODE_DCB) 599 + return -EINVAL; 600 + 601 + ret = pm_runtime_get_sync(cpsw->dev); 602 + if (ret < 0) { 603 + pm_runtime_put_noidle(cpsw->dev); 604 + return ret; 605 + } 606 + 607 + if (num_tc) { 608 + for (i = 0; i < 8; i++) { 609 + 
tc = mqprio->qopt.prio_tc_map[i]; 610 + fifo = cpsw_tc_to_fifo(tc, num_tc); 611 + tx_prio_map |= fifo << (4 * i); 612 + } 613 + 614 + netdev_set_num_tc(ndev, num_tc); 615 + for (i = 0; i < num_tc; i++) { 616 + count = mqprio->qopt.count[i]; 617 + offset = mqprio->qopt.offset[i]; 618 + netdev_set_tc_queue(ndev, i, count, offset); 619 + } 620 + } 621 + 622 + if (!mqprio->qopt.hw) { 623 + /* restore default configuration */ 624 + netdev_reset_tc(ndev); 625 + tx_prio_map = TX_PRIORITY_MAPPING; 626 + } 627 + 628 + priv->mqprio_hw = mqprio->qopt.hw; 629 + 630 + offset = cpsw->version == CPSW_VERSION_1 ? 631 + CPSW1_TX_PRI_MAP : CPSW2_TX_PRI_MAP; 632 + 633 + slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)]; 634 + slave_write(slave, tx_prio_map, offset); 635 + 636 + pm_runtime_put_sync(cpsw->dev); 637 + 638 + return 0; 639 + } 640 + 641 + int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type, 642 + void *type_data) 643 + { 644 + switch (type) { 645 + case TC_SETUP_QDISC_CBS: 646 + return cpsw_set_cbs(ndev, type_data); 647 + 648 + case TC_SETUP_QDISC_MQPRIO: 649 + return cpsw_set_mqprio(ndev, type_data); 650 + 651 + default: 652 + return -EOPNOTSUPP; 653 + } 654 + } 655 + 656 + void cpsw_cbs_resume(struct cpsw_slave *slave, struct cpsw_priv *priv) 657 + { 658 + int fifo, bw; 659 + 660 + for (fifo = CPSW_FIFO_SHAPERS_NUM; fifo > 0; fifo--) { 661 + bw = priv->fifo_bw[fifo]; 662 + if (!bw) 663 + continue; 664 + 665 + cpsw_set_fifo_rlimit(priv, fifo, bw); 666 + } 667 + } 668 + 669 + void cpsw_mqprio_resume(struct cpsw_slave *slave, struct cpsw_priv *priv) 670 + { 671 + struct cpsw_common *cpsw = priv->cpsw; 672 + u32 tx_prio_map = 0; 673 + int i, tc, fifo; 674 + u32 tx_prio_rg; 675 + 676 + if (!priv->mqprio_hw) 677 + return; 678 + 679 + for (i = 0; i < 8; i++) { 680 + tc = netdev_get_prio_tc_map(priv->ndev, i); 681 + fifo = CPSW_FIFO_SHAPERS_NUM - tc; 682 + tx_prio_map |= fifo << (4 * i); 683 + } 684 + 685 + tx_prio_rg = cpsw->version == CPSW_VERSION_1 ? 
```diff
+			CPSW1_TX_PRI_MAP : CPSW2_TX_PRI_MAP;
+
+	slave_write(slave, tx_prio_map, tx_prio_rg);
+}
+
+int cpsw_fill_rx_channels(struct cpsw_priv *priv)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct cpsw_meta_xdp *xmeta;
+	struct page_pool *pool;
+	struct page *page;
+	int ch_buf_num;
+	int ch, i, ret;
+	dma_addr_t dma;
+
+	for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
+		pool = cpsw->page_pool[ch];
+		ch_buf_num = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
+		for (i = 0; i < ch_buf_num; i++) {
+			page = page_pool_dev_alloc_pages(pool);
+			if (!page) {
+				cpsw_err(priv, ifup, "allocate rx page err\n");
+				return -ENOMEM;
+			}
+
+			xmeta = page_address(page) + CPSW_XMETA_OFFSET;
+			xmeta->ndev = priv->ndev;
+			xmeta->ch = ch;
+
+			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
+			ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch,
+							    page, dma,
+							    cpsw->rx_packet_max,
+							    0);
+			if (ret < 0) {
+				cpsw_err(priv, ifup,
+					 "cannot submit page to channel %d rx, error %d\n",
+					 ch, ret);
+				page_pool_recycle_direct(pool, page);
+				return ret;
+			}
+		}
+
+		cpsw_info(priv, ifup, "ch %d rx, submitted %d descriptors\n",
+			  ch, ch_buf_num);
+	}
+
+	return 0;
+}
+
+static struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw,
+					       int size)
+{
+	struct page_pool_params pp_params;
+	struct page_pool *pool;
+
+	pp_params.order = 0;
+	pp_params.flags = PP_FLAG_DMA_MAP;
+	pp_params.pool_size = size;
+	pp_params.nid = NUMA_NO_NODE;
+	pp_params.dma_dir = DMA_BIDIRECTIONAL;
+	pp_params.dev = cpsw->dev;
+
+	pool = page_pool_create(&pp_params);
+	if (IS_ERR(pool))
+		dev_err(cpsw->dev, "cannot create rx page pool\n");
+
+	return pool;
+}
+
+static int cpsw_create_rx_pool(struct cpsw_common *cpsw, int ch)
+{
+	struct page_pool *pool;
+	int ret = 0, pool_size;
+
+	pool_size = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
+	pool = cpsw_create_page_pool(cpsw, pool_size);
+	if (IS_ERR(pool))
+		ret = PTR_ERR(pool);
+	else
+		cpsw->page_pool[ch] = pool;
+
+	return ret;
+}
+
+static int cpsw_ndev_create_xdp_rxq(struct cpsw_priv *priv, int ch)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct xdp_rxq_info *rxq;
+	struct page_pool *pool;
+	int ret;
+
+	pool = cpsw->page_pool[ch];
+	rxq = &priv->xdp_rxq[ch];
+
+	ret = xdp_rxq_info_reg(rxq, priv->ndev, ch);
+	if (ret)
+		return ret;
+
+	ret = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, pool);
+	if (ret)
+		xdp_rxq_info_unreg(rxq);
+
+	return ret;
+}
+
+static void cpsw_ndev_destroy_xdp_rxq(struct cpsw_priv *priv, int ch)
+{
+	struct xdp_rxq_info *rxq = &priv->xdp_rxq[ch];
+
+	if (!xdp_rxq_info_is_reg(rxq))
+		return;
+
+	xdp_rxq_info_unreg(rxq);
+}
+
+void cpsw_destroy_xdp_rxqs(struct cpsw_common *cpsw)
+{
+	struct net_device *ndev;
+	int i, ch;
+
+	for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
+		for (i = 0; i < cpsw->data.slaves; i++) {
+			ndev = cpsw->slaves[i].ndev;
+			if (!ndev)
+				continue;
+
+			cpsw_ndev_destroy_xdp_rxq(netdev_priv(ndev), ch);
+		}
+
+		page_pool_destroy(cpsw->page_pool[ch]);
+		cpsw->page_pool[ch] = NULL;
+	}
+}
+
+int cpsw_create_xdp_rxqs(struct cpsw_common *cpsw)
+{
+	struct net_device *ndev;
+	int i, ch, ret;
+
+	for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
+		ret = cpsw_create_rx_pool(cpsw, ch);
+		if (ret)
+			goto err_cleanup;
+
+		/* Using the same page pool is allowed as there are no rx
+		 * handlers running simultaneously for both ndevs
+		 */
+		for (i = 0; i < cpsw->data.slaves; i++) {
+			ndev = cpsw->slaves[i].ndev;
+			if (!ndev)
+				continue;
+
+			ret = cpsw_ndev_create_xdp_rxq(netdev_priv(ndev), ch);
+			if (ret)
+				goto err_cleanup;
+		}
+	}
+
+	return 0;
+
+err_cleanup:
+	cpsw_destroy_xdp_rxqs(cpsw);
+
+	return ret;
+}
+
+static int cpsw_xdp_prog_setup(struct cpsw_priv *priv, struct netdev_bpf *bpf)
+{
+	struct bpf_prog *prog = bpf->prog;
+
+	if (!priv->xdpi.prog && !prog)
+		return 0;
+
+	if (!xdp_attachment_flags_ok(&priv->xdpi, bpf))
+		return -EBUSY;
+
+	WRITE_ONCE(priv->xdp_prog, prog);
+
+	xdp_attachment_setup(&priv->xdpi, bpf);
+
+	return 0;
+}
+
+int cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+
+	switch (bpf->command) {
+	case XDP_SETUP_PROG:
+		return cpsw_xdp_prog_setup(priv, bpf);
+
+	case XDP_QUERY_PROG:
+		return xdp_attachment_query(&priv->xdpi, bpf);
+
+	default:
+		return -EINVAL;
+	}
+}
+
+int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct xdp_frame *xdpf,
+		      struct page *page, int port)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct cpsw_meta_xdp *xmeta;
+	struct cpdma_chan *txch;
+	dma_addr_t dma;
+	int ret;
+
+	xmeta = (void *)xdpf + CPSW_XMETA_OFFSET;
+	xmeta->ndev = priv->ndev;
+	xmeta->ch = 0;
+	txch = cpsw->txv[0].ch;
+
+	if (page) {
+		dma = page_pool_get_dma_addr(page);
+		dma += xdpf->headroom + sizeof(struct xdp_frame);
+		ret = cpdma_chan_submit_mapped(txch, cpsw_xdpf_to_handle(xdpf),
+					       dma, xdpf->len, port);
+	} else {
+		if (sizeof(*xmeta) > xdpf->headroom) {
+			xdp_return_frame_rx_napi(xdpf);
+			return -EINVAL;
+		}
+
+		ret = cpdma_chan_submit(txch, cpsw_xdpf_to_handle(xdpf),
+					xdpf->data, xdpf->len, port);
+	}
+
+	if (ret) {
+		priv->ndev->stats.tx_dropped++;
+		xdp_return_frame_rx_napi(xdpf);
+	}
+
+	return ret;
+}
+
+int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
+		 struct page *page, int port)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct net_device *ndev = priv->ndev;
+	int ret = CPSW_XDP_CONSUMED;
+	struct xdp_frame *xdpf;
+	struct bpf_prog *prog;
+	u32 act;
+
+	rcu_read_lock();
+
+	prog = READ_ONCE(priv->xdp_prog);
+	if (!prog) {
+		ret = CPSW_XDP_PASS;
+		goto out;
+	}
+
+	act = bpf_prog_run_xdp(prog, xdp);
+	switch (act) {
+	case XDP_PASS:
+		ret = CPSW_XDP_PASS;
+		break;
+	case XDP_TX:
+		xdpf = convert_to_xdp_frame(xdp);
+		if (unlikely(!xdpf))
+			goto drop;
+
+		cpsw_xdp_tx_frame(priv, xdpf, page, port);
+		break;
+	case XDP_REDIRECT:
+		if (xdp_do_redirect(ndev, xdp, prog))
+			goto drop;
+
+		/* Have to flush here, per packet, instead of doing it in bulk
+		 * at the end of the napi handler. The RX devices on this
+		 * particular hardware are sharing a common queue, so the
+		 * incoming device might change per packet.
+		 */
+		xdp_do_flush_map();
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		/* fall through */
+	case XDP_ABORTED:
+		trace_xdp_exception(ndev, prog, act);
+		/* fall through -- handle aborts by dropping packet */
+	case XDP_DROP:
+		goto drop;
+	}
+out:
+	rcu_read_unlock();
+	return ret;
+drop:
+	rcu_read_unlock();
+	page_pool_recycle_direct(cpsw->page_pool[ch], page);
 	return ret;
 }
```
+69 -10
drivers/net/ethernet/ti/cpsw_priv.h
```diff
···
 
 #define HOST_PORT_NUM		0
 #define CPSW_ALE_PORTS_NUM	3
+#define CPSW_SLAVE_PORTS_NUM	2
 #define SLIVER_SIZE		0x40
 
 #define CPSW1_HOST_PORT_OFFSET	0x028
···
 #define CPSW1_CPTS_OFFSET	0x500
 #define CPSW1_ALE_OFFSET	0x600
 #define CPSW1_SLIVER_OFFSET	0x700
+#define CPSW1_WR_OFFSET		0x900
 
 #define CPSW2_HOST_PORT_OFFSET	0x108
 #define CPSW2_SLAVE_OFFSET	0x200
···
 #define CPSW2_ALE_OFFSET	0xd00
 #define CPSW2_SLIVER_OFFSET	0xd80
 #define CPSW2_BD_OFFSET		0x2000
+#define CPSW2_WR_OFFSET		0x1200
 
 #define CPDMA_RXTHRESH		0x0c0
 #define CPDMA_RXFREE		0x0e0
···
 #define IRQ_NUM			2
 #define CPSW_MAX_QUEUES		8
 #define CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT 256
+#define CPSW_ALE_AGEOUT_DEFAULT	10 /* sec */
+#define CPSW_ALE_NUM_ENTRIES	1024
 #define CPSW_FIFO_QUEUE_TYPE_SHIFT	16
 #define CPSW_FIFO_SHAPE_EN_SHIFT	16
 #define CPSW_FIFO_RATE_EN_SHIFT		20
 #define CPSW_TC_NUM			4
 #define CPSW_FIFO_SHAPERS_NUM		(CPSW_TC_NUM - 1)
 #define CPSW_PCT_MASK			0x7f
+#define CPSW_BD_RAM_SIZE		0x2000
 
 #define CPSW_RX_VLAN_ENCAP_HDR_PRIO_SHIFT	29
 #define CPSW_RX_VLAN_ENCAP_HDR_PRIO_MSK		GENMASK(2, 0)
···
 	u8				mac_addr[ETH_ALEN];
 	u16				dual_emac_res_vlan;	/* Reserved VLAN for DualEMAC */
 	struct phy			*ifphy;
+	bool				disabled;
 };
 
 struct cpsw_platform_data {
···
 	u32	ss_reg_ofs;	/* Subsystem control register offset */
 	u32	channels;	/* number of cpdma channels (symmetric) */
 	u32	slaves;		/* number of slave cpgmac ports */
-	u32	active_slave;	/* time stamping, ethtool and SIOCGMIIPHY slave */
+	u32	active_slave;/* time stamping, ethtool and SIOCGMIIPHY slave */
 	u32	ale_entries;	/* ale table size */
-	u32	bd_ram_size;	/*buffer descriptor ram size */
+	u32	bd_ram_size;  /*buffer descriptor ram size */
 	u32	mac_control;	/* Mac control register */
 	u16	default_vlan;	/* Def VLAN for ALE lookup in VLAN aware mode*/
 	bool	dual_emac;	/* Enable Dual EMAC mode */
···
 	bool				tx_irq_disabled;
 	u32				irqs_table[IRQ_NUM];
 	struct cpts			*cpts;
+	struct devlink			*devlink;
 	int				rx_ch_num, tx_ch_num;
 	int				speed;
 	int				usage_count;
 	struct page_pool		*page_pool[CPSW_MAX_QUEUES];
+	u8				br_members;
+	struct net_device		*hw_bridge_dev;
+	bool				ale_bypass;
+	u8				base_mac[ETH_ALEN];
 };
 
 struct cpsw_priv {
···
 
 	u32 emac_port;
 	struct cpsw_common *cpsw;
+	int offload_fwd_mark;
 };
 
 #define ndev_to_cpsw(ndev) (((struct cpsw_priv *)netdev_priv(ndev))->cpsw)
 #define napi_to_cpsw(napi)	container_of(napi, struct cpsw_common, napi)
 
-#define cpsw_slave_index(cpsw, priv)				\
-		((cpsw->data.dual_emac) ? priv->emac_port :	\
-		cpsw->data.active_slave)
-
-static inline int cpsw_get_slave_port(u32 slave_num)
-{
-	return slave_num + 1;
-}
+extern int (*cpsw_slave_index)(struct cpsw_common *cpsw,
+			       struct cpsw_priv *priv);
 
 struct addr_sync_ctx {
 	struct net_device *ndev;
···
 	int consumed;		/* number of address instances */
 	int flush;		/* flush flag */
 };
+
+#define CPSW_XMETA_OFFSET	ALIGN(sizeof(struct xdp_frame), sizeof(long))
+
+#define CPSW_XDP_CONSUMED	1
+#define CPSW_XDP_PASS		0
+
+struct __aligned(sizeof(long)) cpsw_meta_xdp {
+	struct net_device *ndev;
+	int ch;
+};
+
+/* The buf includes headroom compatible with both skb and xdpf */
+#define CPSW_HEADROOM_NA (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN)
+#define CPSW_HEADROOM  ALIGN(CPSW_HEADROOM_NA, sizeof(long))
+
+static inline int cpsw_is_xdpf_handle(void *handle)
+{
+	return (unsigned long)handle & BIT(0);
+}
+
+static inline void *cpsw_xdpf_to_handle(struct xdp_frame *xdpf)
+{
+	return (void *)((unsigned long)xdpf | BIT(0));
+}
+
+static inline struct xdp_frame *cpsw_handle_to_xdpf(void *handle)
+{
+	return (struct xdp_frame *)((unsigned long)handle & ~BIT(0));
+}
 
 int cpsw_init_common(struct cpsw_common *cpsw, void __iomem *ss_regs,
 		     int ale_ageout, phys_addr_t desc_mem_phys,
···
 void cpsw_tx_handler(void *token, int len, int status);
 int cpsw_create_xdp_rxqs(struct cpsw_common *cpsw);
 void cpsw_destroy_xdp_rxqs(struct cpsw_common *cpsw);
+int cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf);
+int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct xdp_frame *xdpf,
+		      struct page *page, int port);
+int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
+		 struct page *page, int port);
+irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id);
+irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id);
+int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget);
+int cpsw_tx_poll(struct napi_struct *napi_tx, int budget);
+int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget);
+int cpsw_rx_poll(struct napi_struct *napi_rx, int budget);
+void cpsw_rx_vlan_encap(struct sk_buff *skb);
+void soft_reset(const char *module, void __iomem *reg);
+void cpsw_set_slave_mac(struct cpsw_slave *slave, struct cpsw_priv *priv);
+void cpsw_ndo_tx_timeout(struct net_device *ndev);
+int cpsw_need_resplit(struct cpsw_common *cpsw);
+int cpsw_ndo_ioctl(struct net_device *dev, struct ifreq *req, int cmd);
+int cpsw_ndo_set_tx_maxrate(struct net_device *ndev, int queue, u32 rate);
+int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+		      void *type_data);
+bool cpsw_shp_is_off(struct cpsw_priv *priv);
+void cpsw_cbs_resume(struct cpsw_slave *slave, struct cpsw_priv *priv);
+void cpsw_mqprio_resume(struct cpsw_slave *slave, struct cpsw_priv *priv);
 
 /* ethtool */
 u32 cpsw_get_msglevel(struct net_device *ndev);
```
+589
drivers/net/ethernet/ti/cpsw_switchdev.c
```diff
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Texas Instruments switchdev Driver
+ *
+ * Copyright (C) 2019 Texas Instruments
+ *
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/if_bridge.h>
+#include <linux/netdevice.h>
+#include <linux/workqueue.h>
+#include <net/switchdev.h>
+
+#include "cpsw.h"
+#include "cpsw_ale.h"
+#include "cpsw_priv.h"
+#include "cpsw_switchdev.h"
+
+struct cpsw_switchdev_event_work {
+	struct work_struct work;
+	struct switchdev_notifier_fdb_info fdb_info;
+	struct cpsw_priv *priv;
+	unsigned long event;
+};
+
+static int cpsw_port_stp_state_set(struct cpsw_priv *priv,
+				   struct switchdev_trans *trans, u8 state)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	u8 cpsw_state;
+	int ret = 0;
+
+	if (switchdev_trans_ph_prepare(trans))
+		return 0;
+
+	switch (state) {
+	case BR_STATE_FORWARDING:
+		cpsw_state = ALE_PORT_STATE_FORWARD;
+		break;
+	case BR_STATE_LEARNING:
+		cpsw_state = ALE_PORT_STATE_LEARN;
+		break;
+	case BR_STATE_DISABLED:
+		cpsw_state = ALE_PORT_STATE_DISABLE;
+		break;
+	case BR_STATE_LISTENING:
+	case BR_STATE_BLOCKING:
+		cpsw_state = ALE_PORT_STATE_BLOCK;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	ret = cpsw_ale_control_set(cpsw->ale, priv->emac_port,
+				   ALE_PORT_STATE, cpsw_state);
+	dev_dbg(priv->dev, "ale state: %u\n", cpsw_state);
+
+	return ret;
+}
+
+static int cpsw_port_attr_br_flags_set(struct cpsw_priv *priv,
+				       struct switchdev_trans *trans,
+				       struct net_device *orig_dev,
+				       unsigned long brport_flags)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	bool unreg_mcast_add = false;
+
+	if (switchdev_trans_ph_prepare(trans))
+		return 0;
+
+	if (brport_flags & BR_MCAST_FLOOD)
+		unreg_mcast_add = true;
+	dev_dbg(priv->dev, "BR_MCAST_FLOOD: %d port %u\n",
+		unreg_mcast_add, priv->emac_port);
+
+	cpsw_ale_set_unreg_mcast(cpsw->ale, BIT(priv->emac_port),
+				 unreg_mcast_add);
+
+	return 0;
+}
+
+static int cpsw_port_attr_br_flags_pre_set(struct net_device *netdev,
+					   struct switchdev_trans *trans,
+					   unsigned long flags)
+{
+	if (flags & ~(BR_LEARNING | BR_MCAST_FLOOD))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int cpsw_port_attr_set(struct net_device *ndev,
+			      const struct switchdev_attr *attr,
+			      struct switchdev_trans *trans)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	int ret;
+
+	dev_dbg(priv->dev, "attr: id %u port: %u\n", attr->id, priv->emac_port);
+
+	switch (attr->id) {
+	case SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS:
+		ret = cpsw_port_attr_br_flags_pre_set(ndev, trans,
+						      attr->u.brport_flags);
+		break;
+	case SWITCHDEV_ATTR_ID_PORT_STP_STATE:
+		ret = cpsw_port_stp_state_set(priv, trans, attr->u.stp_state);
+		dev_dbg(priv->dev, "stp state: %u\n", attr->u.stp_state);
+		break;
+	case SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS:
+		ret = cpsw_port_attr_br_flags_set(priv, trans, attr->orig_dev,
+						  attr->u.brport_flags);
+		break;
+	default:
+		ret = -EOPNOTSUPP;
+		break;
+	}
+
+	return ret;
+}
+
+static u16 cpsw_get_pvid(struct cpsw_priv *priv)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	u32 __iomem *port_vlan_reg;
+	u32 pvid;
+
+	if (priv->emac_port) {
+		int reg = CPSW2_PORT_VLAN;
+
+		if (cpsw->version == CPSW_VERSION_1)
+			reg = CPSW1_PORT_VLAN;
+		pvid = slave_read(cpsw->slaves + (priv->emac_port - 1), reg);
+	} else {
+		port_vlan_reg = &cpsw->host_port_regs->port_vlan;
+		pvid = readl(port_vlan_reg);
+	}
+
+	pvid = pvid & 0xfff;
+
+	return pvid;
+}
+
+static void cpsw_set_pvid(struct cpsw_priv *priv, u16 vid, bool cfi, u32 cos)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	void __iomem *port_vlan_reg;
+	u32 pvid;
+
+	pvid = vid;
+	pvid |= cfi ? BIT(12) : 0;
+	pvid |= (cos & 0x7) << 13;
+
+	if (priv->emac_port) {
+		int reg = CPSW2_PORT_VLAN;
+
+		if (cpsw->version == CPSW_VERSION_1)
+			reg = CPSW1_PORT_VLAN;
+		/* no barrier */
+		slave_write(cpsw->slaves + (priv->emac_port - 1), pvid, reg);
+	} else {
+		/* CPU port */
+		port_vlan_reg = &cpsw->host_port_regs->port_vlan;
+		writel(pvid, port_vlan_reg);
+	}
+}
+
+static int cpsw_port_vlan_add(struct cpsw_priv *priv, bool untag, bool pvid,
+			      u16 vid, struct net_device *orig_dev)
+{
+	bool cpu_port = netif_is_bridge_master(orig_dev);
+	struct cpsw_common *cpsw = priv->cpsw;
+	int unreg_mcast_mask = 0;
+	int reg_mcast_mask = 0;
+	int untag_mask = 0;
+	int port_mask;
+	int ret = 0;
+	u32 flags;
+
+	if (cpu_port) {
+		port_mask = BIT(HOST_PORT_NUM);
+		flags = orig_dev->flags;
+		unreg_mcast_mask = port_mask;
+	} else {
+		port_mask = BIT(priv->emac_port);
+		flags = priv->ndev->flags;
+	}
+
+	if (flags & IFF_MULTICAST)
+		reg_mcast_mask = port_mask;
+
+	if (untag)
+		untag_mask = port_mask;
+
+	ret = cpsw_ale_vlan_add_modify(cpsw->ale, vid, port_mask, untag_mask,
+				       reg_mcast_mask, unreg_mcast_mask);
+	if (ret) {
+		dev_err(priv->dev, "Unable to add vlan\n");
+		return ret;
+	}
+
+	if (cpu_port)
+		cpsw_ale_add_ucast(cpsw->ale, priv->mac_addr,
+				   HOST_PORT_NUM, ALE_VLAN, vid);
+	if (!pvid)
+		return ret;
+
+	cpsw_set_pvid(priv, vid, 0, 0);
+
+	dev_dbg(priv->dev, "VID add: %s: vid:%u ports:%X\n",
+		priv->ndev->name, vid, port_mask);
+	return ret;
+}
+
+static int cpsw_port_vlan_del(struct cpsw_priv *priv, u16 vid,
+			      struct net_device *orig_dev)
+{
+	bool cpu_port = netif_is_bridge_master(orig_dev);
+	struct cpsw_common *cpsw = priv->cpsw;
+	int port_mask;
+	int ret = 0;
+
+	if (cpu_port)
+		port_mask = BIT(HOST_PORT_NUM);
+	else
+		port_mask = BIT(priv->emac_port);
+
+	ret = cpsw_ale_del_vlan(cpsw->ale, vid, port_mask);
+	if (ret != 0)
+		return ret;
+
+	/* We don't care for the return value here, error is returned only if
+	 * the unicast entry is not present
+	 */
+	if (cpu_port)
+		cpsw_ale_del_ucast(cpsw->ale, priv->mac_addr,
+				   HOST_PORT_NUM, ALE_VLAN, vid);
+
+	if (vid == cpsw_get_pvid(priv))
+		cpsw_set_pvid(priv, 0, 0, 0);
+
+	/* We don't care for the return value here, error is returned only if
+	 * the multicast entry is not present
+	 */
+	cpsw_ale_del_mcast(cpsw->ale, priv->ndev->broadcast,
+			   port_mask, ALE_VLAN, vid);
+	dev_dbg(priv->dev, "VID del: %s: vid:%u ports:%X\n",
+		priv->ndev->name, vid, port_mask);
+
+	return ret;
+}
+
+static int cpsw_port_vlans_add(struct cpsw_priv *priv,
+			       const struct switchdev_obj_port_vlan *vlan,
+			       struct switchdev_trans *trans)
+{
+	bool untag = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
+	struct net_device *orig_dev = vlan->obj.orig_dev;
+	bool cpu_port = netif_is_bridge_master(orig_dev);
+	bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
+	u16 vid;
+
+	dev_dbg(priv->dev, "VID add: %s: vid:%u flags:%X\n",
+		priv->ndev->name, vlan->vid_begin, vlan->flags);
+
+	if (cpu_port && !(vlan->flags & BRIDGE_VLAN_INFO_BRENTRY))
+		return 0;
+
+	if (switchdev_trans_ph_prepare(trans))
+		return 0;
+
+	for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+		int err;
+
+		err = cpsw_port_vlan_add(priv, untag, pvid, vid, orig_dev);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int cpsw_port_vlans_del(struct cpsw_priv *priv,
+			       const struct switchdev_obj_port_vlan *vlan)
+
+{
+	struct net_device *orig_dev = vlan->obj.orig_dev;
+	u16 vid;
+
+	for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+		int err;
+
+		err = cpsw_port_vlan_del(priv, vid, orig_dev);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int cpsw_port_mdb_add(struct cpsw_priv *priv,
+			     struct switchdev_obj_port_mdb *mdb,
+			     struct switchdev_trans *trans)
+
+{
+	struct net_device *orig_dev = mdb->obj.orig_dev;
+	bool cpu_port = netif_is_bridge_master(orig_dev);
+	struct cpsw_common *cpsw = priv->cpsw;
+	int port_mask;
+	int err;
+
+	if (switchdev_trans_ph_prepare(trans))
+		return 0;
+
+	if (cpu_port)
+		port_mask = BIT(HOST_PORT_NUM);
+	else
+		port_mask = BIT(priv->emac_port);
+
+	err = cpsw_ale_add_mcast(cpsw->ale, mdb->addr, port_mask,
+				 ALE_VLAN, mdb->vid, 0);
+	dev_dbg(priv->dev, "MDB add: %s: vid %u:%pM ports: %X\n",
+		priv->ndev->name, mdb->vid, mdb->addr, port_mask);
+
+	return err;
+}
+
+static int cpsw_port_mdb_del(struct cpsw_priv *priv,
+			     struct switchdev_obj_port_mdb *mdb)
+
+{
+	struct net_device *orig_dev = mdb->obj.orig_dev;
+	bool cpu_port = netif_is_bridge_master(orig_dev);
+	struct cpsw_common *cpsw = priv->cpsw;
+	int del_mask;
+	int err;
+
+	if (cpu_port)
+		del_mask = BIT(HOST_PORT_NUM);
+	else
+		del_mask = BIT(priv->emac_port);
+
+	err = cpsw_ale_del_mcast(cpsw->ale, mdb->addr, del_mask,
+				 ALE_VLAN, mdb->vid);
+	dev_dbg(priv->dev, "MDB del: %s: vid %u:%pM ports: %X\n",
+		priv->ndev->name, mdb->vid, mdb->addr, del_mask);
+
+	return err;
+}
+
+static int cpsw_port_obj_add(struct net_device *ndev,
+			     const struct switchdev_obj *obj,
+			     struct switchdev_trans *trans,
+			     struct netlink_ext_ack *extack)
+{
+	struct switchdev_obj_port_vlan *vlan = SWITCHDEV_OBJ_PORT_VLAN(obj);
+	struct switchdev_obj_port_mdb *mdb = SWITCHDEV_OBJ_PORT_MDB(obj);
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	int err = 0;
+
+	dev_dbg(priv->dev, "obj_add: id %u port: %u\n",
+		obj->id, priv->emac_port);
+
+	switch (obj->id) {
+	case SWITCHDEV_OBJ_ID_PORT_VLAN:
+		err = cpsw_port_vlans_add(priv, vlan, trans);
+		break;
+	case SWITCHDEV_OBJ_ID_PORT_MDB:
+	case SWITCHDEV_OBJ_ID_HOST_MDB:
+		err = cpsw_port_mdb_add(priv, mdb, trans);
+		break;
+	default:
+		err = -EOPNOTSUPP;
+		break;
+	}
+
+	return err;
+}
+
+static int cpsw_port_obj_del(struct net_device *ndev,
+			     const struct switchdev_obj *obj)
+{
+	struct switchdev_obj_port_vlan *vlan = SWITCHDEV_OBJ_PORT_VLAN(obj);
+	struct switchdev_obj_port_mdb *mdb = SWITCHDEV_OBJ_PORT_MDB(obj);
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	int err = 0;
+
+	dev_dbg(priv->dev, "obj_del: id %u port: %u\n",
+		obj->id, priv->emac_port);
+
+	switch (obj->id) {
+	case SWITCHDEV_OBJ_ID_PORT_VLAN:
+		err = cpsw_port_vlans_del(priv, vlan);
+		break;
+	case SWITCHDEV_OBJ_ID_PORT_MDB:
+	case SWITCHDEV_OBJ_ID_HOST_MDB:
+		err = cpsw_port_mdb_del(priv, mdb);
+		break;
+	default:
+		err = -EOPNOTSUPP;
+		break;
+	}
+
+	return err;
+}
+
+static void cpsw_fdb_offload_notify(struct net_device *ndev,
+				    struct switchdev_notifier_fdb_info *rcv)
+{
+	struct switchdev_notifier_fdb_info info;
+
+	info.addr = rcv->addr;
+	info.vid = rcv->vid;
+	info.offloaded = true;
+	call_switchdev_notifiers(SWITCHDEV_FDB_OFFLOADED,
+				 ndev, &info.info, NULL);
+}
+
+static void cpsw_switchdev_event_work(struct work_struct *work)
+{
+	struct cpsw_switchdev_event_work *switchdev_work =
+		container_of(work, struct cpsw_switchdev_event_work, work);
+	struct cpsw_priv *priv = switchdev_work->priv;
+	struct switchdev_notifier_fdb_info *fdb;
+	struct cpsw_common *cpsw = priv->cpsw;
+	int port = priv->emac_port;
+
+	rtnl_lock();
+	switch (switchdev_work->event) {
+	case SWITCHDEV_FDB_ADD_TO_DEVICE:
+		fdb = &switchdev_work->fdb_info;
+
+		dev_dbg(cpsw->dev, "cpsw_fdb_add: MACID = %pM vid = %u flags = %u %u -- port %d\n",
+			fdb->addr, fdb->vid, fdb->added_by_user,
+			fdb->offloaded, port);
+
+		if (!fdb->added_by_user)
+			break;
+		if (memcmp(priv->mac_addr, (u8 *)fdb->addr, ETH_ALEN) == 0)
+			port = HOST_PORT_NUM;
+
+		cpsw_ale_add_ucast(cpsw->ale, (u8 *)fdb->addr, port,
+				   fdb->vid ? ALE_VLAN : 0, fdb->vid);
+		cpsw_fdb_offload_notify(priv->ndev, fdb);
+		break;
+	case SWITCHDEV_FDB_DEL_TO_DEVICE:
+		fdb = &switchdev_work->fdb_info;
+
+		dev_dbg(cpsw->dev, "cpsw_fdb_del: MACID = %pM vid = %u flags = %u %u -- port %d\n",
+			fdb->addr, fdb->vid, fdb->added_by_user,
+			fdb->offloaded, port);
+
+		if (!fdb->added_by_user)
+			break;
+		if (memcmp(priv->mac_addr, (u8 *)fdb->addr, ETH_ALEN) == 0)
+			port = HOST_PORT_NUM;
+
+		cpsw_ale_del_ucast(cpsw->ale, (u8 *)fdb->addr, port,
+				   fdb->vid ? ALE_VLAN : 0, fdb->vid);
+		break;
+	default:
+		break;
+	}
+	rtnl_unlock();
+
+	kfree(switchdev_work->fdb_info.addr);
+	kfree(switchdev_work);
+	dev_put(priv->ndev);
+}
+
+/* called under rcu_read_lock() */
+static int cpsw_switchdev_event(struct notifier_block *unused,
+				unsigned long event, void *ptr)
+{
+	struct net_device *ndev = switchdev_notifier_info_to_dev(ptr);
+	struct switchdev_notifier_fdb_info *fdb_info = ptr;
+	struct cpsw_switchdev_event_work *switchdev_work;
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	int err;
+
+	if (event == SWITCHDEV_PORT_ATTR_SET) {
+		err = switchdev_handle_port_attr_set(ndev, ptr,
+						     cpsw_port_dev_check,
+						     cpsw_port_attr_set);
+		return notifier_from_errno(err);
+	}
+
+	if (!cpsw_port_dev_check(ndev))
+		return NOTIFY_DONE;
+
+	switchdev_work = kzalloc(sizeof(*switchdev_work), GFP_ATOMIC);
+	if (WARN_ON(!switchdev_work))
+		return NOTIFY_BAD;
+
+	INIT_WORK(&switchdev_work->work, cpsw_switchdev_event_work);
+	switchdev_work->priv = priv;
+	switchdev_work->event = event;
+
+	switch (event) {
+	case SWITCHDEV_FDB_ADD_TO_DEVICE:
+	case SWITCHDEV_FDB_DEL_TO_DEVICE:
+		memcpy(&switchdev_work->fdb_info, ptr,
+		       sizeof(switchdev_work->fdb_info));
+		switchdev_work->fdb_info.addr = kzalloc(ETH_ALEN, GFP_ATOMIC);
+		if (!switchdev_work->fdb_info.addr)
+			goto err_addr_alloc;
+		ether_addr_copy((u8 *)switchdev_work->fdb_info.addr,
+				fdb_info->addr);
+		dev_hold(ndev);
+		break;
+	default:
+		kfree(switchdev_work);
+		return NOTIFY_DONE;
+	}
+
+	queue_work(system_long_wq, &switchdev_work->work);
+
+	return NOTIFY_DONE;
+
+err_addr_alloc:
+	kfree(switchdev_work);
+	return NOTIFY_BAD;
+}
+
+static struct notifier_block cpsw_switchdev_notifier = {
+	.notifier_call = cpsw_switchdev_event,
+};
+
+static int cpsw_switchdev_blocking_event(struct notifier_block *unused,
+					 unsigned long event, void *ptr)
+{
+	struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
+	int err;
+
+	switch (event) {
+	case SWITCHDEV_PORT_OBJ_ADD:
+		err = switchdev_handle_port_obj_add(dev, ptr,
+						    cpsw_port_dev_check,
+						    cpsw_port_obj_add);
+		return notifier_from_errno(err);
+	case SWITCHDEV_PORT_OBJ_DEL:
+		err = switchdev_handle_port_obj_del(dev, ptr,
+						    cpsw_port_dev_check,
+						    cpsw_port_obj_del);
+		return notifier_from_errno(err);
+	case SWITCHDEV_PORT_ATTR_SET:
+		err = switchdev_handle_port_attr_set(dev, ptr,
+						     cpsw_port_dev_check,
+						     cpsw_port_attr_set);
+		return notifier_from_errno(err);
+	default:
+		break;
+	}
+
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block cpsw_switchdev_bl_notifier = {
+	.notifier_call = cpsw_switchdev_blocking_event,
+};
+
+int cpsw_switchdev_register_notifiers(struct cpsw_common *cpsw)
+{
+	int ret = 0;
+
+	ret = register_switchdev_notifier(&cpsw_switchdev_notifier);
+	if (ret) {
+		dev_err(cpsw->dev, "register switchdev notifier fail ret:%d\n",
+			ret);
+		return ret;
+	}
+
+	ret = register_switchdev_blocking_notifier(&cpsw_switchdev_bl_notifier);
+	if (ret) {
+		dev_err(cpsw->dev, "register switchdev blocking notifier ret:%d\n",
+			ret);
+		unregister_switchdev_notifier(&cpsw_switchdev_notifier);
+	}
+
+	return ret;
+}
+
+void cpsw_switchdev_unregister_notifiers(struct cpsw_common *cpsw)
+{
+	unregister_switchdev_blocking_notifier(&cpsw_switchdev_bl_notifier);
+	unregister_switchdev_notifier(&cpsw_switchdev_notifier);
+}
```
+15
drivers/net/ethernet/ti/cpsw_switchdev.h
```diff
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Texas Instruments Ethernet Switch Driver
+ */
+
+#ifndef DRIVERS_NET_ETHERNET_TI_CPSW_SWITCHDEV_H_
+#define DRIVERS_NET_ETHERNET_TI_CPSW_SWITCHDEV_H_
+
+#include <net/switchdev.h>
+
+bool cpsw_port_dev_check(const struct net_device *dev);
+int cpsw_switchdev_register_notifiers(struct cpsw_common *cpsw);
+void cpsw_switchdev_unregister_notifiers(struct cpsw_common *cpsw);
+
+#endif /* DRIVERS_NET_ETHERNET_TI_CPSW_SWITCHDEV_H_ */
```
+2 -2
drivers/phy/ti/Kconfig
```diff
···
 
 config PHY_TI_GMII_SEL
 	tristate
-	default y if TI_CPSW=y
-	depends on TI_CPSW || COMPILE_TEST
+	default y if TI_CPSW=y || TI_CPSW_SWITCHDEV=y
+	depends on TI_CPSW || TI_CPSW_SWITCHDEV || COMPILE_TEST
 	select GENERIC_PHY
 	select REGMAP
 	default m
```