Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

net: dsa: Use conduit and user terms

Use more inclusive terms throughout the DSA subsystem by moving away
from "master" which is replaced by "conduit" and "slave" which is
replaced by "user". No functional changes.

Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Florian Fainelli <florian.fainelli@broadcom.com>
Link: https://lore.kernel.org/r/20231023181729.1191071-2-florian.fainelli@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

Authored by Florian Fainelli, committed by Jakub Kicinski
commit 6ca80638, parent 00e984cb

+1556 -1553
+1 -1
Documentation/devicetree/bindings/net/dsa/mediatek,mt7530.yaml
@@ -60,7 +60,7 @@
 
 Check out example 6.
 
-  - Port 5 can be wired to an external phy. Port 5 becomes a DSA slave.
+  - Port 5 can be wired to an external phy. Port 5 becomes a DSA user port.
 
 For the multi-chip module MT7530, the external phy must be wired TX to TX
 to gmac1 of the SoC for this to work. Ubiquiti EdgeRouter X SFP is wired
+7 -7
Documentation/networking/dsa/b53.rst
@@ -52,7 +52,7 @@
 it untagged, undesirable.
 
 In difference to the configuration described in :ref:`dsa-vlan-configuration`
-the default VLAN 1 has to be removed from the slave interface configuration in
+the default VLAN 1 has to be removed from the user interface configuration in
 single port and gateway configuration, while there is no need to add an extra
 VLAN configuration in the bridge showcase.
 
@@ -68,13 +68,13 @@
 ip link add link eth0 name eth0.2 type vlan id 2
 ip link add link eth0 name eth0.3 type vlan id 3
 
-# The master interface needs to be brought up before the slave ports.
+# The conduit interface needs to be brought up before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 ip link set eth0.2 up
 ip link set eth0.3 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
@@ -113,11 +113,11 @@
 # tag traffic on CPU port
 ip link add link eth0 name eth0.1 type vlan id 1
 
-# The master interface needs to be brought up before the slave ports.
+# The conduit interface needs to be brought up before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
@@ -149,12 +149,12 @@
 ip link add link eth0 name eth0.1 type vlan id 1
 ip link add link eth0 name eth0.2 type vlan id 2
 
-# The master interface needs to be brought up before the slave ports.
+# The conduit interface needs to be brought up before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 ip link set eth0.2 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
+1 -1
Documentation/networking/dsa/bcm_sf2.rst
@@ -67,7 +67,7 @@
 ----------------------
 
 Due to a limitation in how Broadcom switches have been designed, external
-Broadcom switches connected to a SF2 require the use of the DSA slave MDIO bus
+Broadcom switches connected to a SF2 require the use of the DSA user MDIO bus
 in order to properly configure them. By default, the SF2 pseudo-PHY address, and
 an external switch pseudo-PHY address will both be snooping for incoming MDIO
 transactions, since they are at the same address (30), resulting in some kind of
+50 -50
Documentation/networking/dsa/configuration.rst
@@ -31,38 +31,38 @@
 
 Through DSA every port of a switch is handled like a normal linux Ethernet
 interface. The CPU port is the switch port connected to an Ethernet MAC chip.
-The corresponding linux Ethernet interface is called the master interface.
-All other corresponding linux interfaces are called slave interfaces.
+The corresponding linux Ethernet interface is called the conduit interface.
+All other corresponding linux interfaces are called user interfaces.
 
-The slave interfaces depend on the master interface being up in order for them
-to send or receive traffic. Prior to kernel v5.12, the state of the master
+The user interfaces depend on the conduit interface being up in order for them
+to send or receive traffic. Prior to kernel v5.12, the state of the conduit
 interface had to be managed explicitly by the user. Starting with kernel v5.12,
 the behavior is as follows:
 
-- when a DSA slave interface is brought up, the master interface is
+- when a DSA user interface is brought up, the conduit interface is
   automatically brought up.
-- when the master interface is brought down, all DSA slave interfaces are
+- when the conduit interface is brought down, all DSA user interfaces are
   automatically brought down.
 
 In this documentation the following Ethernet interfaces are used:
 
 *eth0*
-  the master interface
+  the conduit interface
 
 *eth1*
-  another master interface
+  another conduit interface
 
 *lan1*
-  a slave interface
+  a user interface
 
 *lan2*
-  another slave interface
+  another user interface
 
 *lan3*
-  a third slave interface
+  a third user interface
 
 *wan*
-  A slave interface dedicated for upstream traffic
+  A user interface dedicated for upstream traffic
 
 Further Ethernet interfaces can be configured similar.
 The configured IPs and networks are:
@@ -96,11 +96,11 @@
 ip addr add 192.0.2.5/30 dev lan2
 ip addr add 192.0.2.9/30 dev lan3
 
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set lan1 up
 ip link set lan2 up
 ip link set lan3 up
@@ -108,11 +108,11 @@
 *bridge*
 .. code-block:: sh
 
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set lan1 up
 ip link set lan2 up
 ip link set lan3 up
@@ -134,11 +134,11 @@
 *gateway*
 .. code-block:: sh
 
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
@@ -178,14 +178,14 @@
 ip link add link eth0 name eth0.2 type vlan id 2
 ip link add link eth0 name eth0.3 type vlan id 3
 
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 ip link set eth0.2 up
 ip link set eth0.3 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set lan1 up
 ip link set lan2 up
 ip link set lan3 up
@@ -221,12 +221,12 @@
 # tag traffic on CPU port
 ip link add link eth0 name eth0.1 type vlan id 1
 
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set lan1 up
 ip link set lan2 up
 ip link set lan3 up
@@ -261,13 +261,13 @@
 ip link add link eth0 name eth0.1 type vlan id 1
 ip link add link eth0 name eth0.2 type vlan id 2
 
-# For kernels earlier than v5.12, the master interface needs to be
-# brought up manually before the slave ports.
+# For kernels earlier than v5.12, the conduit interface needs to be
+# brought up manually before the user ports.
 ip link set eth0 up
 ip link set eth0.1 up
 ip link set eth0.2 up
 
-# bring up the slave interfaces
+# bring up the user interfaces
 ip link set wan up
 ip link set lan1 up
 ip link set lan2 up
@@ -380,22 +380,22 @@
 
 Secondly, it is possible to perform load balancing between CPU ports on a per
 packet basis, rather than statically assigning user ports to CPU ports.
-This can be achieved by placing the DSA masters under a LAG interface (bonding
+This can be achieved by placing the DSA conduits under a LAG interface (bonding
 or team). DSA monitors this operation and creates a mirror of this software LAG
-on the CPU ports facing the physical DSA masters that constitute the LAG slave
+on the CPU ports facing the physical DSA conduits that constitute the LAG slave
 devices.
 
 To make use of multiple CPU ports, the firmware (device tree) description of
-the switch must mark all the links between CPU ports and their DSA masters
+the switch must mark all the links between CPU ports and their DSA conduits
 using the ``ethernet`` reference/phandle. At startup, only a single CPU port
-and DSA master will be used - the numerically first port from the firmware
+and DSA conduit will be used - the numerically first port from the firmware
 description which has an ``ethernet`` property. It is up to the user to
-configure the system for the switch to use other masters.
+configure the system for the switch to use other conduits.
 
 DSA uses the ``rtnl_link_ops`` mechanism (with a "dsa" ``kind``) to allow
-changing the DSA master of a user port. The ``IFLA_DSA_MASTER`` u32 netlink
-attribute contains the ifindex of the master device that handles each slave
-device. The DSA master must be a valid candidate based on firmware node
+changing the DSA conduit of a user port. The ``IFLA_DSA_MASTER`` u32 netlink
+attribute contains the ifindex of the conduit device that handles each user
+device. The DSA conduit must be a valid candidate based on firmware node
 information, or a LAG interface which contains only slaves which are valid
 candidates.
 
@@ -403,7 +403,7 @@
 
 .. code-block:: sh
 
-# See the DSA master in current use
+# See the DSA conduit in current use
 ip -d link show dev swp0
 (...)
 dsa master eth0
@@ -414,7 +414,7 @@
 ip link set swp2 type dsa master eth1
 ip link set swp3 type dsa master eth0
 
-# CPU ports in LAG, using explicit assignment of the DSA master
+# CPU ports in LAG, using explicit assignment of the DSA conduit
 ip link add bond0 type bond mode balance-xor && ip link set bond0 up
 ip link set eth1 down && ip link set eth1 master bond0
 ip link set swp0 type dsa master bond0
@@ -426,7 +426,7 @@
 (...)
 dsa master bond0
 
-# CPU ports in LAG, relying on implicit migration of the DSA master
+# CPU ports in LAG, relying on implicit migration of the DSA conduit
 ip link add bond0 type bond mode balance-xor && ip link set bond0 up
 ip link set eth0 down && ip link set eth0 master bond0
 ip link set eth1 down && ip link set eth1 master bond0
@@ -436,23 +436,23 @@
 
 Notice that in the case of CPU ports under a LAG, the use of the
 ``IFLA_DSA_MASTER`` netlink attribute is not strictly needed, but rather, DSA
-reacts to the ``IFLA_MASTER`` attribute change of its present master (``eth0``)
+reacts to the ``IFLA_MASTER`` attribute change of its present conduit (``eth0``)
 and migrates all user ports to the new upper of ``eth0``, ``bond0``. Similarly,
 when ``bond0`` is destroyed using ``RTM_DELLINK``, DSA migrates the user ports
-that were assigned to this interface to the first physical DSA master which is
+that were assigned to this interface to the first physical DSA conduit which is
 eligible, based on the firmware description (it effectively reverts to the
 startup configuration).
 
 In a setup with more than 2 physical CPU ports, it is therefore possible to mix
-static user to CPU port assignment with LAG between DSA masters. It is not
-possible to statically assign a user port towards a DSA master that has any
-upper interfaces (this includes LAG devices - the master must always be the LAG
+static user to CPU port assignment with LAG between DSA conduits. It is not
+possible to statically assign a user port towards a DSA conduit that has any
+upper interfaces (this includes LAG devices - the conduit must always be the LAG
 in this case).
 
-Live changing of the DSA master (and thus CPU port) affinity of a user port is
+Live changing of the DSA conduit (and thus CPU port) affinity of a user port is
 permitted, in order to allow dynamic redistribution in response to traffic.
 
-Physical DSA masters are allowed to join and leave at any time a LAG interface
-used as a DSA master; however, DSA will reject a LAG interface as a valid
-candidate for being a DSA master unless it has at least one physical DSA master
+Physical DSA conduits are allowed to join and leave at any time a LAG interface
+used as a DSA conduit; however, DSA will reject a LAG interface as a valid
+candidate for being a DSA conduit unless it has at least one physical DSA conduit
 as a slave device.
+83 -79
Documentation/networking/dsa/dsa.rst
@@ -25,7 +25,7 @@
 receiving Ethernet frames from the switch. This is a very common setup for all
 kinds of Ethernet switches found in Small Home and Office products: routers,
 gateways, or even top-of-rack switches. This host Ethernet controller will
-be later referred to as "master" and "cpu" in DSA terminology and code.
+be later referred to as "conduit" and "cpu" in DSA terminology and code.
 
 The D in DSA stands for Distributed, because the subsystem has been designed
 with the ability to configure and manage cascaded switches on top of each other
@@ -35,7 +35,7 @@
 
 For each front-panel port, DSA creates specialized network devices which are
 used as controlling and data-flowing endpoints for use by the Linux networking
-stack. These specialized network interfaces are referred to as "slave" network
+stack. These specialized network interfaces are referred to as "user" network
 interfaces in DSA terminology and code.
 
 The ideal case for using DSA is when an Ethernet switch supports a "switch tag"
@@ -56,11 +56,15 @@
 
 - the "cpu" port is the Ethernet switch facing side of the management
   controller, and as such, would create a duplication of feature, since you
-  would get two interfaces for the same conduit: master netdev, and "cpu" netdev
+  would get two interfaces for the same conduit: conduit netdev, and "cpu" netdev
 
 - the "dsa" port(s) are just conduits between two or more switches, and as such
   cannot really be used as proper network interfaces either, only the
   downstream, or the top-most upstream interface makes sense with that model
+
+NB: for the past 15 years, the DSA subsystem had been making use of the terms
+"master" (rather than "conduit") and "slave" (rather than "user"). These terms
+have been removed from the DSA codebase and phased out of the uAPI.
 
 Switch tagging protocols
 ------------------------
@@ -84,14 +80,14 @@
 Tagging protocols generally fall in one of three categories:
 
 1. The switch-specific frame header is located before the Ethernet header,
-   shifting to the right (from the perspective of the DSA master's frame
+   shifting to the right (from the perspective of the DSA conduit's frame
    parser) the MAC DA, MAC SA, EtherType and the entire L2 payload.
 2. The switch-specific frame header is located before the EtherType, keeping
-   the MAC DA and MAC SA in place from the DSA master's perspective, but
+   the MAC DA and MAC SA in place from the DSA conduit's perspective, but
    shifting the 'real' EtherType and L2 payload to the right.
 3. The switch-specific frame header is located at the tail of the packet,
    keeping all frame headers in place and not altering the view of the packet
-   that the DSA master's frame parser has.
+   that the DSA conduit's frame parser has.
 
 A tagging protocol may tag all packets with switch tags of the same length, or
 the tag length might vary (for example packets with PTP timestamps might
@@ -99,7 +95,7 @@
 different one on RX). Either way, the tagging protocol driver must populate the
 ``struct dsa_device_ops::needed_headroom`` and/or ``struct dsa_device_ops::needed_tailroom``
 with the length in octets of the longest switch frame header/trailer. The DSA
-framework will automatically adjust the MTU of the master interface to
+framework will automatically adjust the MTU of the conduit interface to
 accommodate for this extra size in order for DSA user ports to support the
 standard MTU (L2 payload length) of 1500 octets. The ``needed_headroom`` and
 ``needed_tailroom`` properties are also used to request from the network stack,
@@ -144,18 +140,18 @@
 It is possible to construct cascaded setups of DSA switches even if their
 tagging protocols are not compatible with one another. In this case, there are
 no DSA links in this fabric, and each switch constitutes a disjoint DSA switch
-tree. The DSA links are viewed as simply a pair of a DSA master (the out-facing
+tree. The DSA links are viewed as simply a pair of a DSA conduit (the out-facing
 port of the upstream DSA switch) and a CPU port (the in-facing port of the
 downstream DSA switch).
 
 The tagging protocol of the attached DSA switch tree can be viewed through the
-``dsa/tagging`` sysfs attribute of the DSA master::
+``dsa/tagging`` sysfs attribute of the DSA conduit::
 
     cat /sys/class/net/eth0/dsa/tagging
 
 If the hardware and driver are capable, the tagging protocol of the DSA switch
 tree can be changed at runtime. This is done by writing the new tagging
-protocol name to the same sysfs device attribute as above (the DSA master and
+protocol name to the same sysfs device attribute as above (the DSA conduit and
 all attached switch ports must be down while doing this).
 
 It is desirable that all tagging protocols are testable with the ``dsa_loop``
@@ -163,7 +159,7 @@
 any network interface should be capable of transmitting the same packet in the
 same way, and the tagger should decode the same received packet in the same way
 regardless of the driver used for the switch control path, and the driver used
-for the DSA master.
+for the DSA conduit.
 
 The transmission of a packet goes through the tagger's ``xmit`` function.
 The passed ``struct sk_buff *skb`` has ``skb->data`` pointing at
@@ -187,44 +183,44 @@
 switch port that the packet was received on.
 
 Since tagging protocols in category 1 and 2 break software (and most often also
-hardware) packet dissection on the DSA master, features such as RPS (Receive
-Packet Steering) on the DSA master would be broken. The DSA framework deals
+hardware) packet dissection on the DSA conduit, features such as RPS (Receive
+Packet Steering) on the DSA conduit would be broken. The DSA framework deals
 with this by hooking into the flow dissector and shifting the offset at which
-the IP header is to be found in the tagged frame as seen by the DSA master.
+the IP header is to be found in the tagged frame as seen by the DSA conduit.
 This behavior is automatic based on the ``overhead`` value of the tagging
 protocol. If not all packets are of equal size, the tagger can implement the
 ``flow_dissect`` method of the ``struct dsa_device_ops`` and override this
 default behavior by specifying the correct offset incurred by each individual
 RX packet. Tail taggers do not cause issues to the flow dissector.
 
-Checksum offload should work with category 1 and 2 taggers when the DSA master
+Checksum offload should work with category 1 and 2 taggers when the DSA conduit
 driver declares NETIF_F_HW_CSUM in vlan_features and looks at csum_start and
 csum_offset. For those cases, DSA will shift the checksum start and offset by
-the tag size. If the DSA master driver still uses the legacy NETIF_F_IP_CSUM
+the tag size. If the DSA conduit driver still uses the legacy NETIF_F_IP_CSUM
 or NETIF_F_IPV6_CSUM in vlan_features, the offload might only work if the
 offload hardware already expects that specific tag (perhaps due to matching
-vendors). DSA slaves inherit those flags from the master port, and it is up to
+vendors). DSA user ports inherit those flags from the conduit, and it is up to
 the driver to correctly fall back to software checksum when the IP header is not
 where the hardware expects. If that check is ineffective, the packets might go
 to the network without a proper checksum (the checksum field will have the
 pseudo IP header sum). For category 3, when the offload hardware does not
 already expect the switch tag in use, the checksum must be calculated before any
-tag is inserted (i.e. inside the tagger). Otherwise, the DSA master would
+tag is inserted (i.e. inside the tagger). Otherwise, the DSA conduit would
 include the tail tag in the (software or hardware) checksum calculation. Then,
 when the tag gets stripped by the switch during transmission, it will leave an
 incorrect IP checksum in place.
 
 Due to various reasons (most common being category 1 taggers being associated
-with DSA-unaware masters, mangling what the master perceives as MAC DA), the
-tagging protocol may require the DSA master to operate in promiscuous mode, to
+with DSA-unaware conduits, mangling what the conduit perceives as MAC DA), the
+tagging protocol may require the DSA conduit to operate in promiscuous mode, to
 receive all frames regardless of the value of the MAC DA. This can be done by
-setting the ``promisc_on_master`` property of the ``struct dsa_device_ops``.
-Note that this assumes a DSA-unaware master driver, which is the norm.
+setting the ``promisc_on_conduit`` property of the ``struct dsa_device_ops``.
+Note that this assumes a DSA-unaware conduit driver, which is the norm.
 
-Master network devices
-----------------------
+Conduit network devices
+-----------------------
 
-Master network devices are regular, unmodified Linux network device drivers for
+Conduit network devices are regular, unmodified Linux network device drivers for
 the CPU/management Ethernet interface. Such a driver might occasionally need to
 know whether DSA is enabled (e.g.: to enable/disable specific offload features),
 but the DSA subsystem has been proven to work with industry standard drivers:
@@ -236,14 +232,14 @@
 Networking stack hooks
 ----------------------
 
-When a master netdev is used with DSA, a small hook is placed in the
+When a conduit netdev is used with DSA, a small hook is placed in the
 networking stack is in order to have the DSA subsystem process the Ethernet
 switch specific tagging protocol. DSA accomplishes this by registering a
 specific (and fake) Ethernet type (later becoming ``skb->protocol``) with the
 networking stack, this is also known as a ``ptype`` or ``packet_type``. A typical
 Ethernet Frame receive sequence looks like this:
 
-Master network device (e.g.: e1000e):
+Conduit network device (e.g.: e1000e):
 
 1. Receive interrupt fires:
 
@@ -273,16 +269,16 @@
 
 - inspect and strip switch tag protocol to determine originating port
 - locate per-port network device
-- invoke ``eth_type_trans()`` with the DSA slave network device
+- invoke ``eth_type_trans()`` with the DSA user network device
 - invoked ``netif_receive_skb()``
 
-Past this point, the DSA slave network devices get delivered regular Ethernet
+Past this point, the DSA user network devices get delivered regular Ethernet
 frames that can be processed by the networking stack.
 
-Slave network devices
----------------------
+User network devices
+--------------------
 
-Slave network devices created by DSA are stacked on top of their master network
+User network devices created by DSA are stacked on top of their conduit network
 device, each of these network interfaces will be responsible for being a
 controlling and data-flowing end-point for each front-panel port of the switch.
 These interfaces are specialized in order to:
@@ -293,31 +289,31 @@
   Wake-on-LAN, register dumps...
 - manage external/internal PHY: link, auto-negotiation, etc.
 
-These slave network devices have custom net_device_ops and ethtool_ops function
+These user network devices have custom net_device_ops and ethtool_ops function
 pointers which allow DSA to introduce a level of layering between the networking
 stack/ethtool and the switch driver implementation.
 
-Upon frame transmission from these slave network devices, DSA will look up which
+Upon frame transmission from these user network devices, DSA will look up which
 switch tagging protocol is currently registered with these network devices and
 invoke a specific transmit routine which takes care of adding the relevant
 switch tag in the Ethernet frames.
 
-These frames are then queued for transmission using the master network device
+These frames are then queued for transmission using the conduit network device
 ``ndo_start_xmit()`` function. Since they contain the appropriate switch tag, the
 Ethernet switch will be able to process these incoming frames from the
 management interface and deliver them to the physical switch port.
 
 When using multiple CPU ports, it is possible to stack a LAG (bonding/team)
-device between the DSA slave devices and the physical DSA masters. The LAG
-device is thus also a DSA master, but the LAG slave devices continue to be DSA
-masters as well (just with no user port assigned to them; this is needed for
-recovery in case the LAG DSA master disappears). Thus, the data path of the LAG
-DSA master is used asymmetrically. On RX, the ``ETH_P_XDSA`` handler, which
-calls ``dsa_switch_rcv()``, is invoked early (on the physical DSA master;
-LAG slave). Therefore, the RX data path of the LAG DSA master is not used.
-On the other hand, TX takes place linearly: ``dsa_slave_xmit`` calls
-``dsa_enqueue_skb``, which calls ``dev_queue_xmit`` towards the LAG DSA master.
-The latter calls ``dev_queue_xmit`` towards one physical DSA master or the
+device between the DSA user devices and the physical DSA conduits. The LAG
+device is thus also a DSA conduit, but the LAG slave devices continue to be DSA
+conduits as well (just with no user port assigned to them; this is needed for
+recovery in case the LAG DSA conduit disappears). Thus, the data path of the LAG
+DSA conduit is used asymmetrically. On RX, the ``ETH_P_XDSA`` handler, which
+calls ``dsa_switch_rcv()``, is invoked early (on the physical DSA conduit;
+LAG slave). Therefore, the RX data path of the LAG DSA conduit is not used.
+On the other hand, TX takes place linearly: ``dsa_user_xmit`` calls
+``dsa_enqueue_skb``, which calls ``dev_queue_xmit`` towards the LAG DSA conduit.
+The latter calls ``dev_queue_xmit`` towards one physical DSA conduit or the
 other, and in both cases, the packet exits the system through a hardware path
 towards the switch.
 
@@ -356,11 +352,11 @@
 || swp0 | | swp1 | | swp2 | | swp3 ||
 ++------+-+------+-+------+-+------++
 
-Slave MDIO bus
---------------
+User MDIO bus
+-------------
 
-In order to be able to read to/from a switch PHY built into it, DSA creates a
-slave MDIO bus which allows a specific switch driver to divert and intercept
+In order to be able to read to/from a switch PHY built into it, DSA creates an
+user MDIO bus which allows a specific switch driver to divert and intercept
 MDIO reads/writes towards specific PHY addresses. In most MDIO-connected
 switches, these functions would utilize direct or indirect PHY addressing mode
 to return standard MII registers from the switch builtin PHYs, allowing the PHY
@@ -368,7 +364,7 @@
 results, etc.
 
 For Ethernet switches which have both external and internal MDIO buses, the
-slave MII bus can be utilized to mux/demux MDIO reads and writes towards either
+user MII bus can be utilized to mux/demux MDIO reads and writes towards either
 internal or external MDIO devices this switch might be connected to: internal
 PHYs, external PHYs, or even external switches.
 
@@ -385,10 +381,10 @@
 
 - ``dsa_platform_data``: platform device configuration data which can reference
   a collection of dsa_chip_data structures if multiple switches are cascaded,
-  the master network device this switch tree is attached to needs to be
+  the conduit network device this switch tree is attached to needs to be
   referenced
 
-- ``dsa_switch_tree``: structure assigned to the master network device under
+- ``dsa_switch_tree``: structure assigned to the conduit network device under
   ``dsa_ptr``, this structure references a dsa_platform_data structure as well as
   the tagging protocol supported by the switch tree, and which receive/transmit
  function hooks should be invoked, information about the directly attached
@@ -396,7 +392,7 @@
  referenced to address individual switches in the tree.
 
 - ``dsa_switch``: structure describing a switch device in the tree, referencing
-  a ``dsa_switch_tree`` as a backpointer, slave network devices, master network
+  a ``dsa_switch_tree`` as a backpointer, user network devices, conduit network
   device, and a reference to the backing``dsa_switch_ops``
 
 - ``dsa_switch_ops``: structure referencing function pointers, see below for a
@@ -408,7 +404,7 @@
 Lack of CPU/DSA network devices
 -------------------------------
 
-DSA does not currently create slave network devices for the CPU or DSA ports, as
+DSA does not currently create user network devices for the CPU or DSA ports, as
 described before. This might be an issue in the following cases:
 
 - inability to fetch switch CPU port statistics counters using ethtool, which
@@ -423,7 +419,7 @@
 Common pitfalls using DSA setups
 --------------------------------
 
-Once a master network device is configured to use DSA (dev->dsa_ptr becomes
+Once a conduit network device is configured to use DSA (dev->dsa_ptr becomes
 non-NULL), and the switch behind it expects a tagging protocol, this network
 interface can only exclusively be used as a conduit interface. Sending packets
 directly through this interface (e.g.: opening a socket using this interface)
@@ -444,7 +440,7 @@
 MDIO/PHY library
 ----------------
 
-Slave network devices exposed by DSA may or may not be interfacing with PHY
+User network devices exposed by DSA may or may not be interfacing with PHY
 devices (``struct phy_device`` as defined in ``include/linux/phy.h)``, but the DSA
 subsystem deals with all possible combinations:
 
@@ -454,7 +450,7 @@
 - special, non-autonegotiated or non MDIO-managed PHY devices: SFPs, MoCA; a.k.a
   fixed PHYs
 
-The PHY configuration is done by the ``dsa_slave_phy_setup()`` function and the
+The PHY configuration is done by the ``dsa_user_phy_setup()`` function and the
 logic basically looks like this:
 
 - if Device Tree is used, the PHY device is looked up using the standard
@@ -467,7 +463,7 @@
   and connected transparently using the special fixed MDIO bus driver
 
 - finally, if the PHY is built into the switch, as is very common with
-  standalone switch packages, the PHY is probed using the slave MII bus created
+  standalone switch packages, the PHY is probed using the user MII bus created
   by DSA
 
 
@@ -476,7 +472,7 @@
 
 DSA directly utilizes SWITCHDEV when interfacing with the bridge layer, and
 more specifically with its VLAN filtering portion when configuring VLANs on top
-of per-port slave network devices. As of today, the only SWITCHDEV objects
+of per-port user network devices. As of today, the only SWITCHDEV objects
 supported by DSA are the FDB and VLAN objects.
 
 Devlink
@@ -593,8 +589,8 @@
 It is mandatory for DSA switch drivers to implement the ``shutdown()`` callback
 of their respective bus, and call ``dsa_switch_shutdown()`` from it (a minimal
 version of the full teardown performed by ``dsa_unregister_switch()``).
-The reason is that DSA keeps a reference on the master net device, and if the
-driver for the master device decides to unbind on shutdown, DSA's reference
+The reason is that DSA keeps a reference on the conduit net device, and if the
+driver for the conduit device decides to unbind on shutdown, DSA's reference
 will block that operation from finalizing.
 
 Either ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` must be called,
@@ -619,7 +615,7 @@
 tag formats.
 
 - ``change_tag_protocol``: when the default tagging protocol has compatibility
-  problems with the master or other issues, the driver may support changing it
+  problems with the conduit or other issues, the driver may support changing it
   at runtime, either through a device tree property or through sysfs. In that
   case, further calls to ``get_tag_protocol`` should report the protocol in
   current use.
@@ -647,6 +643,6 @@
 PHY cannot be found. In this case, probing of the DSA switch continues
 without that particular port.
 
-- ``port_change_master``: method through which the affinity (association used
+- ``port_change_conduit``: method through which the affinity (association used
   for traffic termination purposes) between a user port and a CPU port can be
   changed.
By default all user ports from a tree are assigned to the first 653 649 available CPU port that makes sense for them (most of the times this means 654 650 the user ports of a tree are all assigned to the same CPU port, except for H 655 651 topologies as described in commit 2c0b03258b8b). The ``port`` argument 656 - represents the index of the user port, and the ``master`` argument represents 657 - the new DSA master ``net_device``. The CPU port associated with the new 658 - master can be retrieved by looking at ``struct dsa_port *cpu_dp = 659 - master->dsa_ptr``. Additionally, the master can also be a LAG device where 660 - all the slave devices are physical DSA masters. LAG DSA masters also have a 661 - valid ``master->dsa_ptr`` pointer, however this is not unique, but rather a 662 - duplicate of the first physical DSA master's (LAG slave) ``dsa_ptr``. In case 663 - of a LAG DSA master, a further call to ``port_lag_join`` will be emitted 652 + represents the index of the user port, and the ``conduit`` argument represents 653 + the new DSA conduit ``net_device``. The CPU port associated with the new 654 + conduit can be retrieved by looking at ``struct dsa_port *cpu_dp = 655 + conduit->dsa_ptr``. Additionally, the conduit can also be a LAG device where 656 + all the slave devices are physical DSA conduits. LAG DSA conduits also have a 657 + valid ``conduit->dsa_ptr`` pointer, however this is not unique, but rather a 658 + duplicate of the first physical DSA conduit's (LAG slave) ``dsa_ptr``. In case 659 + of a LAG DSA conduit, a further call to ``port_lag_join`` will be emitted 664 660 separately for the physical CPU ports associated with the physical DSA 665 661 conduits, requesting them to create a hardware LAG associated with the LAG
667 663 668 664 PHY devices and link management ··· 674 670 should return a 32-bit bitmask of "flags" that is private between the switch 675 671 driver and the Ethernet PHY driver in ``drivers/net/phy/\*``. 676 672 677 - - ``phy_read``: Function invoked by the DSA slave MDIO bus when attempting to read 673 + - ``phy_read``: Function invoked by the DSA user MDIO bus when attempting to read 678 674 the switch port MDIO registers. If unavailable, return 0xffff for each read. 679 675 For builtin switch Ethernet PHYs, this function should allow reading the link 680 676 status, auto-negotiation results, link partner pages, etc. 681 677 682 - - ``phy_write``: Function invoked by the DSA slave MDIO bus when attempting to write 678 + - ``phy_write``: Function invoked by the DSA user MDIO bus when attempting to write 683 679 to the switch port MDIO registers. If unavailable return a negative error 684 680 code. 685 681 686 - - ``adjust_link``: Function invoked by the PHY library when a slave network device 682 + - ``adjust_link``: Function invoked by the PHY library when a user network device 687 683 is attached to a PHY device. This function is responsible for appropriately 688 684 configuring the switch port link parameters: speed, duplex, pause based on 689 685 what the ``phy_device`` is providing. ··· 702 698 typically return statistics strings, private flags strings, etc. 703 699 704 700 - ``get_ethtool_stats``: ethtool function used to query per-port statistics and 705 - return their values. DSA overlays slave network devices general statistics: 701 + return their values. 
DSA overlays user network devices general statistics: 706 702 RX/TX counters from the network device, with switch driver specific statistics 707 703 per port 708 704 709 705 - ``get_sset_count``: ethtool function used to query the number of statistics items 710 706 711 707 - ``get_wol``: ethtool function used to obtain Wake-on-LAN settings per-port, this 712 - function may for certain implementations also query the master network device 708 + function may for certain implementations also query the conduit network device 713 709 Wake-on-LAN settings if this interface needs to participate in Wake-on-LAN 714 710 715 711 - ``set_wol``: ethtool function used to configure Wake-on-LAN settings per-port, ··· 751 747 should resume all Ethernet switch activities and re-configure the switch to be 752 748 in a fully active state 753 749 754 - - ``port_enable``: function invoked by the DSA slave network device ndo_open 750 + - ``port_enable``: function invoked by the DSA user network device ndo_open 755 751 function when a port is administratively brought up, this function should 756 752 fully enable a given switch port. DSA takes care of marking the port with 757 753 ``BR_STATE_BLOCKING`` if the port is a bridge member, or ``BR_STATE_FORWARDING`` if it 758 754 was not, and propagating these changes down to the hardware 759 755 760 - - ``port_disable``: function invoked by the DSA slave network device ndo_close 756 + - ``port_disable``: function invoked by the DSA user network device ndo_close 761 757 function when a port is administratively brought down, this function should 762 758 fully disable a given switch port. DSA takes care of marking the port with 763 759 ``BR_STATE_DISABLED`` and propagating changes to the hardware if this port is
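The conduit/user relationship that the dsa.rst hunks above rename can be sketched as a tiny userspace C model. This is a minimal, illustrative sketch only: the real ``struct net_device`` and ``struct dsa_port`` live in ``include/net/dsa.h`` and ``include/linux/netdevice.h`` with far more fields, and only the renamed member and accessor names (``conduit``, ``user``, ``dsa_port_to_conduit()``) are taken from the diff.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures; everything here is
 * illustrative except the renamed member/accessor names from the diff.
 */
struct dsa_port;

struct net_device {
        const char *name;
        struct dsa_port *dsa_ptr;   /* non-NULL only on a conduit netdev */
};

struct dsa_port {
        struct net_device *conduit; /* was "master": host-facing netdev */
        struct net_device *user;    /* was "slave": per-port user netdev */
        int index;
};

/* Mirrors the shape of the renamed dsa_port_to_conduit() accessor. */
static struct net_device *dsa_port_to_conduit(const struct dsa_port *dp)
{
        return dp->conduit;
}
```

A user port resolves its host-facing interface through the conduit pointer, which is exactly the association ``port_change_conduit`` (above) allows to be rewired at runtime.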
+1 -1
Documentation/networking/dsa/lan9303.rst
··· 4 4 5 5 The LAN9303 is a three port 10/100 Mbps ethernet switch with integrated phys for 6 6 the two external ethernet ports. The third port is an RMII/MII interface to a 7 - host master network interface (e.g. fixed link). 7 + host conduit network interface (e.g. fixed link). 8 8 9 9 10 10 Driver details
+3 -3
Documentation/networking/dsa/sja1105.rst
··· 79 79 decodes the VLAN information from the 802.1Q tag. Advanced VLAN classification 80 80 is not possible. Once attributed a VLAN tag, frames are checked against the 81 81 port's membership rules and dropped at ingress if they don't match any VLAN. 82 - This behavior is available when switch ports are enslaved to a bridge with 82 + This behavior is available when switch ports join a bridge with 83 83 ``vlan_filtering 1``. 84 84 85 85 Normally the hardware is not configurable with respect to VLAN awareness, but ··· 122 122 offloaded flows can be steered to TX queues based on the VLAN PCP, but the DSA 123 123 net devices are no longer able to do that. To inject frames into a hardware TX 124 124 queue with VLAN awareness active, it is necessary to create a VLAN 125 - sub-interface on the DSA master port, and send normal (0x8100) VLAN-tagged 125 + sub-interface on the DSA conduit port, and send normal (0x8100) VLAN-tagged 126 126 traffic towards the switch, with the VLAN PCP bits set appropriately. 127 127 128 128 Management traffic (having DMAC 01-80-C2-xx-xx-xx or 01-19-1B-xx-xx-xx) is the ··· 389 389 The SJA1105 does not have an MDIO bus and does not perform in-band AN either. 390 390 Therefore there is no link state notification coming from the switch device. 391 391 A board would need to hook up the PHYs connected to the switch to any other 392 - MDIO bus available to Linux within the system (e.g. to the DSA master's MDIO 392 + MDIO bus available to Linux within the system (e.g. to the DSA conduit's MDIO 393 393 bus). Link state management then works by the driver manually keeping in sync 394 394 (over SPI commands) the MAC link speed with the settings negotiated by the PHY. 395 395
+1 -1
arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
··· 13 13 / { 14 14 aliases { 15 15 ethernet0 = &eth0; 16 - /* for dsa slave device */ 16 + /* for DSA user port device */ 17 17 ethernet1 = &switch0port1; 18 18 ethernet2 = &switch0port2; 19 19 ethernet3 = &switch0port3;
+2 -2
drivers/net/dsa/b53/b53_common.c
··· 757 757 758 758 /* Create an untagged VLAN entry for the default PVID in case 759 759 * CONFIG_VLAN_8021Q is disabled and there are no calls to 760 - * dsa_slave_vlan_rx_add_vid() to create the default VLAN 760 + * dsa_user_vlan_rx_add_vid() to create the default VLAN 761 761 * entry. Do this only when the tagging protocol is not 762 762 * DSA_TAG_PROTO_NONE 763 763 */ ··· 958 958 return NULL; 959 959 } 960 960 961 - return mdiobus_get_phy(ds->slave_mii_bus, port); 961 + return mdiobus_get_phy(ds->user_mii_bus, port); 962 962 } 963 963 964 964 void b53_get_strings(struct dsa_switch *ds, int port, u32 stringset,
+1 -1
drivers/net/dsa/b53/b53_mdio.c
··· 329 329 * layer setup 330 330 */ 331 331 if (of_machine_is_compatible("brcm,bcm7445d0") && 332 - strcmp(mdiodev->bus->name, "sf2 slave mii")) 332 + strcmp(mdiodev->bus->name, "sf2 user mii")) 333 333 return -EPROBE_DEFER; 334 334 335 335 dev = b53_switch_alloc(&mdiodev->dev, &b53_mdio_ops, mdiodev->bus);
+20 -21
drivers/net/dsa/bcm_sf2.c
··· 623 623 624 624 priv->master_mii_dn = dn; 625 625 626 - priv->slave_mii_bus = mdiobus_alloc(); 627 - if (!priv->slave_mii_bus) { 626 + priv->user_mii_bus = mdiobus_alloc(); 627 + if (!priv->user_mii_bus) { 628 628 err = -ENOMEM; 629 629 goto err_put_master_mii_bus_dev; 630 630 } 631 631 632 - priv->slave_mii_bus->priv = priv; 633 - priv->slave_mii_bus->name = "sf2 slave mii"; 634 - priv->slave_mii_bus->read = bcm_sf2_sw_mdio_read; 635 - priv->slave_mii_bus->write = bcm_sf2_sw_mdio_write; 636 - snprintf(priv->slave_mii_bus->id, MII_BUS_ID_SIZE, "sf2-%d", 632 + priv->user_mii_bus->priv = priv; 633 + priv->user_mii_bus->name = "sf2 user mii"; 634 + priv->user_mii_bus->read = bcm_sf2_sw_mdio_read; 635 + priv->user_mii_bus->write = bcm_sf2_sw_mdio_write; 636 + snprintf(priv->user_mii_bus->id, MII_BUS_ID_SIZE, "sf2-%d", 637 637 index++); 638 - priv->slave_mii_bus->dev.of_node = dn; 638 + priv->user_mii_bus->dev.of_node = dn; 639 639 640 640 /* Include the pseudo-PHY address to divert reads towards our 641 641 * workaround. 
This is only required for 7445D0, since 7445E0 ··· 653 653 priv->indir_phy_mask = 0; 654 654 655 655 ds->phys_mii_mask = priv->indir_phy_mask; 656 - ds->slave_mii_bus = priv->slave_mii_bus; 657 - priv->slave_mii_bus->parent = ds->dev->parent; 658 - priv->slave_mii_bus->phy_mask = ~priv->indir_phy_mask; 656 + ds->user_mii_bus = priv->user_mii_bus; 657 + priv->user_mii_bus->parent = ds->dev->parent; 658 + priv->user_mii_bus->phy_mask = ~priv->indir_phy_mask; 659 659 660 660 /* We need to make sure that of_phy_connect() will not work by 661 661 * removing the 'phandle' and 'linux,phandle' properties and ··· 682 682 phy_device_remove(phydev); 683 683 } 684 684 685 - err = mdiobus_register(priv->slave_mii_bus); 685 + err = mdiobus_register(priv->user_mii_bus); 686 686 if (err && dn) 687 - goto err_free_slave_mii_bus; 687 + goto err_free_user_mii_bus; 688 688 689 689 return 0; 690 690 691 - err_free_slave_mii_bus: 692 - mdiobus_free(priv->slave_mii_bus); 691 + err_free_user_mii_bus: 692 + mdiobus_free(priv->user_mii_bus); 693 693 err_put_master_mii_bus_dev: 694 694 put_device(&priv->master_mii_bus->dev); 695 695 err_of_node_put: ··· 699 699 700 700 static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv) 701 701 { 702 - mdiobus_unregister(priv->slave_mii_bus); 703 - mdiobus_free(priv->slave_mii_bus); 702 + mdiobus_unregister(priv->user_mii_bus); 703 + mdiobus_free(priv->user_mii_bus); 704 704 put_device(&priv->master_mii_bus->dev); 705 - of_node_put(priv->master_mii_dn); 706 705 } 707 706 708 707 static u32 bcm_sf2_sw_get_phy_flags(struct dsa_switch *ds, int port) ··· 914 915 * state machine and make it go in PHY_FORCING state instead. 
915 916 */ 916 917 if (!status->link) 917 - netif_carrier_off(dsa_to_port(ds, port)->slave); 918 + netif_carrier_off(dsa_to_port(ds, port)->user); 918 919 status->duplex = DUPLEX_FULL; 919 920 } else { 920 921 status->link = true; ··· 988 989 static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port, 989 990 struct ethtool_wolinfo *wol) 990 991 { 991 - struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port)); 992 + struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port)); 992 993 struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); 993 994 struct ethtool_wolinfo pwol = { }; 994 995 ··· 1012 1013 static int bcm_sf2_sw_set_wol(struct dsa_switch *ds, int port, 1013 1014 struct ethtool_wolinfo *wol) 1014 1015 { 1015 - struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port)); 1016 + struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port)); 1016 1017 struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); 1017 1018 s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index; 1018 1019 struct ethtool_wolinfo pwol = { };
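The bcm_sf2 hunks above keep the driver's alloc/register/unwind shape while renaming the error label to ``err_free_user_mii_bus``. The pattern can be shown with a toy userspace sketch; every name and return code below is an illustrative stand-in, not kernel API, but the control flow (each failure exit releases exactly what was acquired so far) matches the hunk.

```c
#include <assert.h>
#include <stdlib.h>

struct toy_bus {
        int registered;
};

static int toy_register_should_fail; /* test knob, not part of the pattern */

static struct toy_bus *toy_mdiobus_alloc(void)
{
        return calloc(1, sizeof(struct toy_bus));
}

static int toy_mdiobus_register(struct toy_bus *bus)
{
        if (toy_register_should_fail)
                return -1;
        bus->registered = 1;
        return 0;
}

static void toy_mdiobus_free(struct toy_bus *bus)
{
        free(bus);
}

/* Mirrors the alloc -> register -> goto-unwind shape of the hunk above. */
static int toy_mdio_register(struct toy_bus **out)
{
        struct toy_bus *bus;
        int err;

        bus = toy_mdiobus_alloc();
        if (!bus)
                return -1;

        err = toy_mdiobus_register(bus);
        if (err)
                goto err_free_user_mii_bus;

        *out = bus;
        return 0;

err_free_user_mii_bus:
        toy_mdiobus_free(bus);
        return err;
}
```

On the failure path nothing leaks and the caller's pointer is left untouched, which is the invariant the renamed labels preserve.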
+1 -1
drivers/net/dsa/bcm_sf2.h
··· 108 108 /* Master and slave MDIO bus controller */ 109 109 unsigned int indir_phy_mask; 110 110 struct device_node *master_mii_dn; 111 - struct mii_bus *slave_mii_bus; 111 + struct mii_bus *user_mii_bus; 112 112 struct mii_bus *master_mii_bus; 113 113 114 114 /* Bitmask of ports needing BRCM tags */
+2 -2
drivers/net/dsa/bcm_sf2_cfp.c
··· 1102 1102 int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port, 1103 1103 struct ethtool_rxnfc *nfc, u32 *rule_locs) 1104 1104 { 1105 - struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port)); 1105 + struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port)); 1106 1106 struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); 1107 1107 int ret = 0; 1108 1108 ··· 1145 1145 int bcm_sf2_set_rxnfc(struct dsa_switch *ds, int port, 1146 1146 struct ethtool_rxnfc *nfc) 1147 1147 { 1148 - struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port)); 1148 + struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port)); 1149 1149 struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); 1150 1150 int ret = 0; 1151 1151
+2 -2
drivers/net/dsa/lan9303-core.c
··· 1084 1084 if (!dsa_port_is_user(dp)) 1085 1085 return 0; 1086 1086 1087 - vlan_vid_add(dsa_port_to_master(dp), htons(ETH_P_8021Q), port); 1087 + vlan_vid_add(dsa_port_to_conduit(dp), htons(ETH_P_8021Q), port); 1088 1088 1089 1089 return lan9303_enable_processing_port(chip, port); 1090 1090 } ··· 1097 1097 if (!dsa_port_is_user(dp)) 1098 1098 return; 1099 1099 1100 - vlan_vid_del(dsa_port_to_master(dp), htons(ETH_P_8021Q), port); 1100 + vlan_vid_del(dsa_port_to_conduit(dp), htons(ETH_P_8021Q), port); 1101 1101 1102 1102 lan9303_disable_processing_port(chip, port); 1103 1103 lan9303_phy_write(ds, chip->phy_addr_base + port, MII_BMCR, BMCR_PDOWN);
+17 -17
drivers/net/dsa/lantiq_gswip.c
··· 510 510 struct dsa_switch *ds = priv->ds; 511 511 int err; 512 512 513 - ds->slave_mii_bus = mdiobus_alloc(); 514 - if (!ds->slave_mii_bus) 513 + ds->user_mii_bus = mdiobus_alloc(); 514 + if (!ds->user_mii_bus) 515 515 return -ENOMEM; 516 516 517 - ds->slave_mii_bus->priv = priv; 518 - ds->slave_mii_bus->read = gswip_mdio_rd; 519 - ds->slave_mii_bus->write = gswip_mdio_wr; 520 - ds->slave_mii_bus->name = "lantiq,xrx200-mdio"; 521 - snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, "%s-mii", 517 + ds->user_mii_bus->priv = priv; 518 + ds->user_mii_bus->read = gswip_mdio_rd; 519 + ds->user_mii_bus->write = gswip_mdio_wr; 520 + ds->user_mii_bus->name = "lantiq,xrx200-mdio"; 521 + snprintf(ds->user_mii_bus->id, MII_BUS_ID_SIZE, "%s-mii", 522 522 dev_name(priv->dev)); 523 - ds->slave_mii_bus->parent = priv->dev; 524 - ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask; 523 + ds->user_mii_bus->parent = priv->dev; 524 + ds->user_mii_bus->phy_mask = ~ds->phys_mii_mask; 525 525 526 - err = of_mdiobus_register(ds->slave_mii_bus, mdio_np); 526 + err = of_mdiobus_register(ds->user_mii_bus, mdio_np); 527 527 if (err) 528 - mdiobus_free(ds->slave_mii_bus); 528 + mdiobus_free(ds->user_mii_bus); 529 529 530 530 return err; 531 531 } ··· 2196 2196 dsa_unregister_switch(priv->ds); 2197 2197 mdio_bus: 2198 2198 if (mdio_np) { 2199 - mdiobus_unregister(priv->ds->slave_mii_bus); 2200 - mdiobus_free(priv->ds->slave_mii_bus); 2199 + mdiobus_unregister(priv->ds->user_mii_bus); 2200 + mdiobus_free(priv->ds->user_mii_bus); 2201 2201 } 2202 2202 put_mdio_node: 2203 2203 of_node_put(mdio_np); ··· 2219 2219 2220 2220 dsa_unregister_switch(priv->ds); 2221 2221 2222 - if (priv->ds->slave_mii_bus) { 2223 - mdiobus_unregister(priv->ds->slave_mii_bus); 2224 - of_node_put(priv->ds->slave_mii_bus->dev.of_node); 2225 - mdiobus_free(priv->ds->slave_mii_bus); 2222 + if (priv->ds->user_mii_bus) { 2223 + mdiobus_unregister(priv->ds->user_mii_bus); 2224 + of_node_put(priv->ds->user_mii_bus->dev.of_node); 
2225 + mdiobus_free(priv->ds->user_mii_bus); 2226 2226 } 2227 2227 2228 2228 for (i = 0; i < priv->num_gphy_fw; i++)
+3 -3
drivers/net/dsa/microchip/ksz9477.c
··· 1170 1170 void ksz9477_hsr_join(struct dsa_switch *ds, int port, struct net_device *hsr) 1171 1171 { 1172 1172 struct ksz_device *dev = ds->priv; 1173 - struct net_device *slave; 1173 + struct net_device *user; 1174 1174 struct dsa_port *hsr_dp; 1175 1175 u8 data, hsr_ports = 0; 1176 1176 ··· 1202 1202 ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, PORT_SRC_ADDR_FILTER, true); 1203 1203 1204 1204 /* Setup HW supported features for lan HSR ports */ 1205 - slave = dsa_to_port(ds, port)->slave; 1206 - slave->features |= KSZ9477_SUPPORTED_HSR_FEATURES; 1205 + user = dsa_to_port(ds, port)->user; 1206 + user->features |= KSZ9477_SUPPORTED_HSR_FEATURES; 1207 1207 } 1208 1208 1209 1209 void ksz9477_hsr_leave(struct dsa_switch *ds, int port, struct net_device *hsr)
+10 -10
drivers/net/dsa/microchip/ksz_common.c
··· 1945 1945 ret = irq; 1946 1946 goto out; 1947 1947 } 1948 - ds->slave_mii_bus->irq[phy] = irq; 1948 + ds->user_mii_bus->irq[phy] = irq; 1949 1949 } 1950 1950 } 1951 1951 return 0; 1952 1952 out: 1953 1953 while (phy--) 1954 1954 if (BIT(phy) & ds->phys_mii_mask) 1955 - irq_dispose_mapping(ds->slave_mii_bus->irq[phy]); 1955 + irq_dispose_mapping(ds->user_mii_bus->irq[phy]); 1956 1956 1957 1957 return ret; 1958 1958 } ··· 1964 1964 1965 1965 for (phy = 0; phy < KSZ_MAX_NUM_PORTS; phy++) 1966 1966 if (BIT(phy) & ds->phys_mii_mask) 1967 - irq_dispose_mapping(ds->slave_mii_bus->irq[phy]); 1967 + irq_dispose_mapping(ds->user_mii_bus->irq[phy]); 1968 1968 } 1969 1969 1970 1970 static int ksz_mdio_register(struct ksz_device *dev) ··· 1987 1987 bus->priv = dev; 1988 1988 bus->read = ksz_sw_mdio_read; 1989 1989 bus->write = ksz_sw_mdio_write; 1990 - bus->name = "ksz slave smi"; 1990 + bus->name = "ksz user smi"; 1991 1991 snprintf(bus->id, MII_BUS_ID_SIZE, "SMI-%d", ds->index); 1992 1992 bus->parent = ds->dev; 1993 1993 bus->phy_mask = ~ds->phys_mii_mask; 1994 1994 1995 - ds->slave_mii_bus = bus; 1995 + ds->user_mii_bus = bus; 1996 1996 1997 1997 if (dev->irq > 0) { 1998 1998 ret = ksz_irq_phy_setup(dev); ··· 2344 2344 if (!p->read) { 2345 2345 const struct dsa_port *dp = dsa_to_port(dev->ds, i); 2346 2346 2347 - if (!netif_carrier_ok(dp->slave)) 2347 + if (!netif_carrier_ok(dp->user)) 2348 2348 mib->cnt_ptr = dev->info->reg_mib_cnt; 2349 2349 } 2350 2350 port_r_cnt(dev, i); ··· 2464 2464 mutex_lock(&mib->cnt_mutex); 2465 2465 2466 2466 /* Only read dropped counters if no link. 
*/ 2467 - if (!netif_carrier_ok(dp->slave)) 2467 + if (!netif_carrier_ok(dp->user)) 2468 2468 mib->cnt_ptr = dev->info->reg_mib_cnt; 2469 2469 port_r_cnt(dev, port); 2470 2470 memcpy(buf, mib->counters, dev->info->mib_cnt * sizeof(u64)); ··· 2574 2574 if (!dsa_is_user_port(ds, port)) 2575 2575 return 0; 2576 2576 2577 - /* setup slave port */ 2577 + /* setup user port */ 2578 2578 dev->dev_ops->port_setup(dev, port, false); 2579 2579 2580 2580 /* port_stp_state_set() will be called after to enable the port so ··· 3567 3567 static int ksz_switch_macaddr_get(struct dsa_switch *ds, int port, 3568 3568 struct netlink_ext_ack *extack) 3569 3569 { 3570 - struct net_device *slave = dsa_to_port(ds, port)->slave; 3571 - const unsigned char *addr = slave->dev_addr; 3570 + struct net_device *user = dsa_to_port(ds, port)->user; 3571 + const unsigned char *addr = user->dev_addr; 3572 3572 struct ksz_switch_macaddr *switch_macaddr; 3573 3573 struct ksz_device *dev = ds->priv; 3574 3574 const u16 *regs = dev->info->regs;
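The ksz_common.c hunks above retain the driver's partial-unwind idiom: per-PHY IRQ mappings are created for each bit set in ``phys_mii_mask``, and on failure only the mappings already created are disposed with a ``while (phy--)`` walk. A toy sketch of that idiom, with stand-in names (``toy_*``) replacing ``irq_create_mapping()``/``irq_dispose_mapping()``:

```c
#include <assert.h>

#define TOY_MAX_PHYS 8

static int toy_mapped[TOY_MAX_PHYS];

/* Stand-in for irq_create_mapping(); fail_at forces a failure for tests. */
static int toy_create_mapping(int phy, int fail_at)
{
        if (phy == fail_at)
                return -1;
        toy_mapped[phy] = 1;
        return 0;
}

/* Mirrors the ksz_irq_phy_setup() shape: set up each PHY in the mask,
 * and on error unwind only what was already set up.
 */
static int toy_irq_phy_setup(unsigned int mask, int fail_at)
{
        int phy;

        for (phy = 0; phy < TOY_MAX_PHYS; phy++) {
                if (!(mask & (1u << phy)))
                        continue;
                if (toy_create_mapping(phy, fail_at) < 0)
                        goto out;
        }
        return 0;
out:
        while (phy--)
                if (mask & (1u << phy))
                        toy_mapped[phy] = 0; /* irq_dispose_mapping() stand-in */
        return -1;
}
```

The ``while (phy--)`` loop revisits exactly the indices the forward loop completed, so a mid-mask failure never disposes a mapping that was never created.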
+1 -1
drivers/net/dsa/microchip/ksz_ptp.c
··· 557 557 struct skb_shared_hwtstamps hwtstamps = {}; 558 558 int ret; 559 559 560 - /* timeout must include DSA master to transmit data, tstamp latency, 560 + /* timeout must include DSA conduit to transmit data, tstamp latency, 561 561 * IRQ latency and time for reading the time stamp. 562 562 */ 563 563 ret = wait_for_completion_timeout(&prt->tstamp_msg_comp,
+9 -9
drivers/net/dsa/mt7530.c
··· 1113 1113 u32 val; 1114 1114 1115 1115 /* When a new MTU is set, DSA always set the CPU port's MTU to the 1116 - * largest MTU of the slave ports. Because the switch only has a global 1116 + * largest MTU of the user ports. Because the switch only has a global 1117 1117 * RX length register, only allowing CPU port here is enough. 1118 1118 */ 1119 1119 if (!dsa_is_cpu_port(ds, port)) ··· 2069 2069 unsigned int irq; 2070 2070 2071 2071 irq = irq_create_mapping(priv->irq_domain, p); 2072 - ds->slave_mii_bus->irq[p] = irq; 2072 + ds->user_mii_bus->irq[p] = irq; 2073 2073 } 2074 2074 } 2075 2075 } ··· 2163 2163 if (!bus) 2164 2164 return -ENOMEM; 2165 2165 2166 - ds->slave_mii_bus = bus; 2166 + ds->user_mii_bus = bus; 2167 2167 bus->priv = priv; 2168 2168 bus->name = KBUILD_MODNAME "-mii"; 2169 2169 snprintf(bus->id, MII_BUS_ID_SIZE, KBUILD_MODNAME "-%d", idx++); ··· 2200 2200 u32 id, val; 2201 2201 int ret, i; 2202 2202 2203 - /* The parent node of master netdev which holds the common system 2203 + /* The parent node of conduit netdev which holds the common system 2204 2204 * controller also is the container for two GMACs nodes representing 2205 2205 * as two netdev instances. 2206 2206 */ 2207 2207 dsa_switch_for_each_cpu_port(cpu_dp, ds) { 2208 - dn = cpu_dp->master->dev.of_node->parent; 2208 + dn = cpu_dp->conduit->dev.of_node->parent; 2209 2209 /* It doesn't matter which CPU port is found first, 2210 - * their masters should share the same parent OF node 2210 + * their conduits should share the same parent OF node 2211 2211 */ 2212 2212 break; 2213 2213 } 2214 2214 2215 2215 if (!dn) { 2216 - dev_err(ds->dev, "parent OF node of DSA master not found"); 2216 + dev_err(ds->dev, "parent OF node of DSA conduit not found"); 2217 2217 return -EINVAL; 2218 2218 } 2219 2219 ··· 2488 2488 if (mt7531_dual_sgmii_supported(priv)) { 2489 2489 priv->p5_intf_sel = P5_INTF_SEL_GMAC5_SGMII; 2490 2490 2491 - /* Let ds->slave_mii_bus be able to access external phy. 
*/ 2491 + /* Let ds->user_mii_bus be able to access external phy. */ 2492 2492 mt7530_rmw(priv, MT7531_GPIO_MODE1, MT7531_GPIO11_RG_RXD2_MASK, 2493 2493 MT7531_EXT_P_MDC_11); 2494 2494 mt7530_rmw(priv, MT7531_GPIO_MODE1, MT7531_GPIO12_RG_RXD3_MASK, ··· 2717 2717 case PHY_INTERFACE_MODE_RGMII_RXID: 2718 2718 case PHY_INTERFACE_MODE_RGMII_TXID: 2719 2719 dp = dsa_to_port(ds, port); 2720 - phydev = dp->slave->phydev; 2720 + phydev = dp->user->phydev; 2721 2721 return mt7531_rgmii_setup(priv, port, interface, phydev); 2722 2722 case PHY_INTERFACE_MODE_SGMII: 2723 2723 case PHY_INTERFACE_MODE_NA:
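The mt7530.c hunk above walks the CPU ports and takes the first conduit's parent OF node, relying on all conduits sharing one parent. The lookup shape can be sketched with simplified stand-in structures (the real code uses ``dsa_switch_for_each_cpu_port`` and ``struct device_node``; all ``toy_*`` names below are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

struct toy_node    { const char *name; };
struct toy_conduit { struct toy_node *parent; };
struct toy_cpu_port { struct toy_conduit *conduit; };

/* It doesn't matter which CPU port is found first: by assumption their
 * conduits all share the same parent node, as the hunk's comment states.
 */
static struct toy_node *
toy_find_conduit_parent(const struct toy_cpu_port *ports, int nports)
{
        int i;

        for (i = 0; i < nports; i++)
                if (ports[i].conduit)
                        return ports[i].conduit->parent;
        return NULL; /* "parent OF node of DSA conduit not found" case */
}
```

Returning ``NULL`` here corresponds to the driver's ``-EINVAL`` path when no conduit is attached.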
+2 -2
drivers/net/dsa/mv88e6xxx/chip.c
··· 2486 2486 else 2487 2487 member = MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_TAGGED; 2488 2488 2489 - /* net/dsa/slave.c will call dsa_port_vlan_add() for the affected port 2489 + /* net/dsa/user.c will call dsa_port_vlan_add() for the affected port 2490 2490 * and then the CPU port. Do not warn for duplicates for the CPU port. 2491 2491 */ 2492 2492 warn = !dsa_is_cpu_port(ds, port) && !dsa_is_dsa_port(ds, port); ··· 3719 3719 return err; 3720 3720 3721 3721 chip->ds = ds; 3722 - ds->slave_mii_bus = mv88e6xxx_default_mdio_bus(chip); 3722 + ds->user_mii_bus = mv88e6xxx_default_mdio_bus(chip); 3723 3723 3724 3724 /* Since virtual bridges are mapped in the PVT, the number we support 3725 3725 * depends on the physical switch topology. We need to let DSA figure
+34 -34
drivers/net/dsa/ocelot/felix.c
··· 42 42 } 43 43 } 44 44 45 - static int felix_cpu_port_for_master(struct dsa_switch *ds, 46 - struct net_device *master) 45 + static int felix_cpu_port_for_conduit(struct dsa_switch *ds, 46 + struct net_device *conduit) 47 47 { 48 48 struct ocelot *ocelot = ds->priv; 49 49 struct dsa_port *cpu_dp; 50 50 int lag; 51 51 52 - if (netif_is_lag_master(master)) { 52 + if (netif_is_lag_master(conduit)) { 53 53 mutex_lock(&ocelot->fwd_domain_lock); 54 - lag = ocelot_bond_get_id(ocelot, master); 54 + lag = ocelot_bond_get_id(ocelot, conduit); 55 55 mutex_unlock(&ocelot->fwd_domain_lock); 56 56 57 57 return lag; 58 58 } 59 59 60 - cpu_dp = master->dsa_ptr; 60 + cpu_dp = conduit->dsa_ptr; 61 61 return cpu_dp->index; 62 62 } 63 63 ··· 366 366 * is the mode through which frames can be injected from and extracted to an 367 367 * external CPU, over Ethernet. In NXP SoCs, the "external CPU" is the ARM CPU 368 368 * running Linux, and this forms a DSA setup together with the enetc or fman 369 - * DSA master. 369 + * DSA conduit. 370 370 */ 371 371 static void felix_npi_port_init(struct ocelot *ocelot, int port) 372 372 { ··· 441 441 return BIT(ocelot->num_phys_ports); 442 442 } 443 443 444 - static int felix_tag_npi_change_master(struct dsa_switch *ds, int port, 445 - struct net_device *master, 446 - struct netlink_ext_ack *extack) 444 + static int felix_tag_npi_change_conduit(struct dsa_switch *ds, int port, 445 + struct net_device *conduit, 446 + struct netlink_ext_ack *extack) 447 447 { 448 448 struct dsa_port *dp = dsa_to_port(ds, port), *other_dp; 449 449 struct ocelot *ocelot = ds->priv; 450 450 451 - if (netif_is_lag_master(master)) { 451 + if (netif_is_lag_master(conduit)) { 452 452 NL_SET_ERR_MSG_MOD(extack, 453 - "LAG DSA master only supported using ocelot-8021q"); 453 + "LAG DSA conduit only supported using ocelot-8021q"); 454 454 return -EOPNOTSUPP; 455 455 } 456 456 ··· 459 459 * come back up until they're all changed to the new one. 
460 460 */ 461 461 dsa_switch_for_each_user_port(other_dp, ds) { 462 - struct net_device *slave = other_dp->slave; 462 + struct net_device *user = other_dp->user; 463 463 464 - if (other_dp != dp && (slave->flags & IFF_UP) && 465 - dsa_port_to_master(other_dp) != master) { 464 + if (other_dp != dp && (user->flags & IFF_UP) && 465 + dsa_port_to_conduit(other_dp) != conduit) { 466 466 NL_SET_ERR_MSG_MOD(extack, 467 - "Cannot change while old master still has users"); 467 + "Cannot change while old conduit still has users"); 468 468 return -EOPNOTSUPP; 469 469 } 470 470 } 471 471 472 472 felix_npi_port_deinit(ocelot, ocelot->npi); 473 - felix_npi_port_init(ocelot, felix_cpu_port_for_master(ds, master)); 473 + felix_npi_port_init(ocelot, felix_cpu_port_for_conduit(ds, conduit)); 474 474 475 475 return 0; 476 476 } 477 477 478 478 /* Alternatively to using the NPI functionality, that same hardware MAC 479 - * connected internally to the enetc or fman DSA master can be configured to 479 + * connected internally to the enetc or fman DSA conduit can be configured to 480 480 * use the software-defined tag_8021q frame format. 
As far as the hardware is 481 481 * concerned, it thinks it is a "dumb switch" - the queues of the CPU port 482 482 * module are now disconnected from it, but can still be accessed through ··· 486 486 .setup = felix_tag_npi_setup, 487 487 .teardown = felix_tag_npi_teardown, 488 488 .get_host_fwd_mask = felix_tag_npi_get_host_fwd_mask, 489 - .change_master = felix_tag_npi_change_master, 489 + .change_conduit = felix_tag_npi_change_conduit, 490 490 }; 491 491 492 492 static int felix_tag_8021q_setup(struct dsa_switch *ds) ··· 561 561 return dsa_cpu_ports(ds); 562 562 } 563 563 564 - static int felix_tag_8021q_change_master(struct dsa_switch *ds, int port, 565 - struct net_device *master, 566 - struct netlink_ext_ack *extack) 564 + static int felix_tag_8021q_change_conduit(struct dsa_switch *ds, int port, 565 + struct net_device *conduit, 566 + struct netlink_ext_ack *extack) 567 567 { 568 - int cpu = felix_cpu_port_for_master(ds, master); 568 + int cpu = felix_cpu_port_for_conduit(ds, conduit); 569 569 struct ocelot *ocelot = ds->priv; 570 570 571 571 ocelot_port_unassign_dsa_8021q_cpu(ocelot, port); ··· 578 578 .setup = felix_tag_8021q_setup, 579 579 .teardown = felix_tag_8021q_teardown, 580 580 .get_host_fwd_mask = felix_tag_8021q_get_host_fwd_mask, 581 - .change_master = felix_tag_8021q_change_master, 581 + .change_conduit = felix_tag_8021q_change_conduit, 582 582 }; 583 583 584 584 static void felix_set_host_flood(struct dsa_switch *ds, unsigned long mask, ··· 741 741 !!felix->host_flood_mc_mask, true); 742 742 } 743 743 744 - static int felix_port_change_master(struct dsa_switch *ds, int port, 745 - struct net_device *master, 746 - struct netlink_ext_ack *extack) 744 + static int felix_port_change_conduit(struct dsa_switch *ds, int port, 745 + struct net_device *conduit, 746 + struct netlink_ext_ack *extack) 747 747 { 748 748 struct ocelot *ocelot = ds->priv; 749 749 struct felix *felix = ocelot_to_felix(ocelot); 750 750 751 - return 
felix->tag_proto_ops->change_master(ds, port, master, extack); 751 + return felix->tag_proto_ops->change_conduit(ds, port, conduit, extack); 752 752 } 753 753 754 754 static int felix_set_ageing_time(struct dsa_switch *ds, ··· 953 953 if (!dsa_is_cpu_port(ds, port)) 954 954 return 0; 955 955 956 - return felix_port_change_master(ds, port, lag.dev, extack); 956 + return felix_port_change_conduit(ds, port, lag.dev, extack); 957 957 } 958 958 959 959 static int felix_lag_leave(struct dsa_switch *ds, int port, ··· 967 967 if (!dsa_is_cpu_port(ds, port)) 968 968 return 0; 969 969 970 - return felix_port_change_master(ds, port, lag.dev, NULL); 970 + return felix_port_change_conduit(ds, port, lag.dev, NULL); 971 971 } 972 972 973 973 static int felix_lag_change(struct dsa_switch *ds, int port) ··· 1116 1116 return 0; 1117 1117 1118 1118 if (ocelot->npi >= 0) { 1119 - struct net_device *master = dsa_port_to_master(dp); 1119 + struct net_device *conduit = dsa_port_to_conduit(dp); 1120 1120 1121 - if (felix_cpu_port_for_master(ds, master) != ocelot->npi) { 1122 - dev_err(ds->dev, "Multiple masters are not allowed\n"); 1121 + if (felix_cpu_port_for_conduit(ds, conduit) != ocelot->npi) { 1122 + dev_err(ds->dev, "Multiple conduits are not allowed\n"); 1123 1123 return -EINVAL; 1124 1124 } 1125 1125 } ··· 2164 2164 .port_add_dscp_prio = felix_port_add_dscp_prio, 2165 2165 .port_del_dscp_prio = felix_port_del_dscp_prio, 2166 2166 .port_set_host_flood = felix_port_set_host_flood, 2167 - .port_change_master = felix_port_change_master, 2167 + .port_change_conduit = felix_port_change_conduit, 2168 2168 }; 2169 2169 EXPORT_SYMBOL_GPL(felix_switch_ops); 2170 2170 ··· 2176 2176 if (!dsa_is_user_port(ds, port)) 2177 2177 return NULL; 2178 2178 2179 - return dsa_to_port(ds, port)->slave; 2179 + return dsa_to_port(ds, port)->user; 2180 2180 } 2181 2181 EXPORT_SYMBOL_GPL(felix_port_to_netdev); 2182 2182
+3 -3
drivers/net/dsa/ocelot/felix.h
··· 77 77 int (*setup)(struct dsa_switch *ds); 78 78 void (*teardown)(struct dsa_switch *ds); 79 79 unsigned long (*get_host_fwd_mask)(struct dsa_switch *ds); 80 - int (*change_master)(struct dsa_switch *ds, int port, 81 - struct net_device *master, 82 - struct netlink_ext_ack *extack); 80 + int (*change_conduit)(struct dsa_switch *ds, int port, 81 + struct net_device *conduit, 82 + struct netlink_ext_ack *extack); 83 83 }; 84 84 85 85 extern const struct dsa_switch_ops felix_switch_ops;
+25 -25
drivers/net/dsa/qca/qca8k-8xxx.c
··· 323 323 324 324 mutex_lock(&mgmt_eth_data->mutex); 325 325 326 - /* Check mgmt_master if is operational */ 327 - if (!priv->mgmt_master) { 326 + /* Check if the mgmt_conduit if is operational */ 327 + if (!priv->mgmt_conduit) { 328 328 kfree_skb(skb); 329 329 mutex_unlock(&mgmt_eth_data->mutex); 330 330 return -EINVAL; 331 331 } 332 332 333 - skb->dev = priv->mgmt_master; 333 + skb->dev = priv->mgmt_conduit; 334 334 335 335 reinit_completion(&mgmt_eth_data->rw_done); 336 336 ··· 375 375 376 376 mutex_lock(&mgmt_eth_data->mutex); 377 377 378 - /* Check mgmt_master if is operational */ 379 - if (!priv->mgmt_master) { 378 + /* Check if the mgmt_conduit if is operational */ 379 + if (!priv->mgmt_conduit) { 380 380 kfree_skb(skb); 381 381 mutex_unlock(&mgmt_eth_data->mutex); 382 382 return -EINVAL; 383 383 } 384 384 385 - skb->dev = priv->mgmt_master; 385 + skb->dev = priv->mgmt_conduit; 386 386 387 387 reinit_completion(&mgmt_eth_data->rw_done); 388 388 ··· 508 508 struct qca8k_priv *priv = ctx; 509 509 u32 reg = *(u16 *)reg_buf; 510 510 511 - if (priv->mgmt_master && 511 + if (priv->mgmt_conduit && 512 512 !qca8k_read_eth(priv, reg, val_buf, val_len)) 513 513 return 0; 514 514 ··· 531 531 u32 reg = *(u16 *)reg_buf; 532 532 u32 *val = (u32 *)val_buf; 533 533 534 - if (priv->mgmt_master && 534 + if (priv->mgmt_conduit && 535 535 !qca8k_write_eth(priv, reg, val, val_len)) 536 536 return 0; 537 537 ··· 626 626 struct sk_buff *write_skb, *clear_skb, *read_skb; 627 627 struct qca8k_mgmt_eth_data *mgmt_eth_data; 628 628 u32 write_val, clear_val = 0, val; 629 - struct net_device *mgmt_master; 629 + struct net_device *mgmt_conduit; 630 630 int ret, ret1; 631 631 bool ack; 632 632 ··· 683 683 */ 684 684 mutex_lock(&mgmt_eth_data->mutex); 685 685 686 - /* Check if mgmt_master is operational */ 687 - mgmt_master = priv->mgmt_master; 688 - if (!mgmt_master) { 686 + /* Check if mgmt_conduit is operational */ 687 + mgmt_conduit = priv->mgmt_conduit; 688 + if (!mgmt_conduit) { 
689 689 mutex_unlock(&mgmt_eth_data->mutex); 690 690 mutex_unlock(&priv->bus->mdio_lock); 691 691 ret = -EINVAL; 692 - goto err_mgmt_master; 692 + goto err_mgmt_conduit; 693 693 } 694 694 695 - read_skb->dev = mgmt_master; 696 - clear_skb->dev = mgmt_master; 697 - write_skb->dev = mgmt_master; 695 + read_skb->dev = mgmt_conduit; 696 + clear_skb->dev = mgmt_conduit; 697 + write_skb->dev = mgmt_conduit; 698 698 699 699 reinit_completion(&mgmt_eth_data->rw_done); 700 700 ··· 780 780 return ret; 781 781 782 782 /* Error handling before lock */ 783 - err_mgmt_master: 783 + err_mgmt_conduit: 784 784 kfree_skb(read_skb); 785 785 err_read_skb: 786 786 kfree_skb(clear_skb); ··· 959 959 ds->dst->index, ds->index); 960 960 bus->parent = ds->dev; 961 961 bus->phy_mask = ~ds->phys_mii_mask; 962 - ds->slave_mii_bus = bus; 962 + ds->user_mii_bus = bus; 963 963 964 964 /* Check if the devicetree declare the port:phy mapping */ 965 965 mdio = of_get_child_by_name(priv->dev->of_node, "mdio"); 966 966 if (of_device_is_available(mdio)) { 967 - bus->name = "qca8k slave mii"; 967 + bus->name = "qca8k user mii"; 968 968 bus->read = qca8k_internal_mdio_read; 969 969 bus->write = qca8k_internal_mdio_write; 970 970 return devm_of_mdiobus_register(priv->dev, bus, mdio); ··· 973 973 /* If a mapping can't be found the legacy mapping is used, 974 974 * using the qca8k_port_to_phy function 975 975 */ 976 - bus->name = "qca8k-legacy slave mii"; 976 + bus->name = "qca8k-legacy user mii"; 977 977 bus->read = qca8k_legacy_mdio_read; 978 978 bus->write = qca8k_legacy_mdio_write; 979 979 return devm_mdiobus_register(priv->dev, bus); ··· 1728 1728 } 1729 1729 1730 1730 static void 1731 - qca8k_master_change(struct dsa_switch *ds, const struct net_device *master, 1732 - bool operational) 1731 + qca8k_conduit_change(struct dsa_switch *ds, const struct net_device *conduit, 1732 + bool operational) 1733 1733 { 1734 - struct dsa_port *dp = master->dsa_ptr; 1734 + struct dsa_port *dp = conduit->dsa_ptr; 1735 
1735 struct qca8k_priv *priv = ds->priv; 1736 1736 1737 1737 /* Ethernet MIB/MDIO is only supported for CPU port 0 */ ··· 1741 1741 mutex_lock(&priv->mgmt_eth_data.mutex); 1742 1742 mutex_lock(&priv->mib_eth_data.mutex); 1743 1743 1744 - priv->mgmt_master = operational ? (struct net_device *)master : NULL; 1744 + priv->mgmt_conduit = operational ? (struct net_device *)conduit : NULL; 1745 1745 1746 1746 mutex_unlock(&priv->mib_eth_data.mutex); 1747 1747 mutex_unlock(&priv->mgmt_eth_data.mutex); ··· 2016 2016 .get_phy_flags = qca8k_get_phy_flags, 2017 2017 .port_lag_join = qca8k_port_lag_join, 2018 2018 .port_lag_leave = qca8k_port_lag_leave, 2019 - .master_state_change = qca8k_master_change, 2019 + .conduit_state_change = qca8k_conduit_change, 2020 2020 .connect_tag_protocol = qca8k_connect_tag_protocol, 2021 2021 }; 2022 2022
+2 -2
drivers/net/dsa/qca/qca8k-common.c
··· 499 499 u32 hi = 0; 500 500 int ret; 501 501 502 - if (priv->mgmt_master && priv->info->ops->autocast_mib && 502 + if (priv->mgmt_conduit && priv->info->ops->autocast_mib && 503 503 priv->info->ops->autocast_mib(ds, port, data) > 0) 504 504 return; 505 505 ··· 761 761 int ret; 762 762 763 763 /* We have only have a general MTU setting. 764 - * DSA always set the CPU port's MTU to the largest MTU of the slave 764 + * DSA always set the CPU port's MTU to the largest MTU of the user 765 765 * ports. 766 766 * Setting MTU just for the CPU port is sufficient to correctly set a 767 767 * value for every port.
+3 -3
drivers/net/dsa/qca/qca8k-leds.c
··· 356 356 dp = dsa_to_port(priv->ds, qca8k_phy_to_port(led->port_num)); 357 357 if (!dp) 358 358 return NULL; 359 - if (dp->slave) 360 - return &dp->slave->dev; 359 + if (dp->user) 360 + return &dp->user->dev; 361 361 return NULL; 362 362 } 363 363 ··· 429 429 init_data.default_label = ":port"; 430 430 init_data.fwnode = led; 431 431 init_data.devname_mandatory = true; 432 - init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d", ds->slave_mii_bus->id, 432 + init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d", ds->user_mii_bus->id, 433 433 port_num); 434 434 if (!init_data.devicename) 435 435 return -ENOMEM;
+1 -1
drivers/net/dsa/qca/qca8k.h
··· 458 458 struct mutex reg_mutex; 459 459 struct device *dev; 460 460 struct gpio_desc *reset_gpio; 461 - struct net_device *mgmt_master; /* Track if mdio/mib Ethernet is available */ 461 + struct net_device *mgmt_conduit; /* Track if mdio/mib Ethernet is available */ 462 462 struct qca8k_mgmt_eth_data mgmt_eth_data; 463 463 struct qca8k_mib_eth_data mib_eth_data; 464 464 struct qca8k_mdio_cache mdio_cache;
+14 -14
drivers/net/dsa/realtek/realtek-smi.c
··· 378 378 return -ENODEV; 379 379 } 380 380 381 - priv->slave_mii_bus = devm_mdiobus_alloc(priv->dev); 382 - if (!priv->slave_mii_bus) { 381 + priv->user_mii_bus = devm_mdiobus_alloc(priv->dev); 382 + if (!priv->user_mii_bus) { 383 383 ret = -ENOMEM; 384 384 goto err_put_node; 385 385 } 386 - priv->slave_mii_bus->priv = priv; 387 - priv->slave_mii_bus->name = "SMI slave MII"; 388 - priv->slave_mii_bus->read = realtek_smi_mdio_read; 389 - priv->slave_mii_bus->write = realtek_smi_mdio_write; 390 - snprintf(priv->slave_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d", 386 + priv->user_mii_bus->priv = priv; 387 + priv->user_mii_bus->name = "SMI user MII"; 388 + priv->user_mii_bus->read = realtek_smi_mdio_read; 389 + priv->user_mii_bus->write = realtek_smi_mdio_write; 390 + snprintf(priv->user_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d", 391 391 ds->index); 392 - priv->slave_mii_bus->dev.of_node = mdio_np; 393 - priv->slave_mii_bus->parent = priv->dev; 394 - ds->slave_mii_bus = priv->slave_mii_bus; 392 + priv->user_mii_bus->dev.of_node = mdio_np; 393 + priv->user_mii_bus->parent = priv->dev; 394 + ds->user_mii_bus = priv->user_mii_bus; 395 395 396 - ret = devm_of_mdiobus_register(priv->dev, priv->slave_mii_bus, mdio_np); 396 + ret = devm_of_mdiobus_register(priv->dev, priv->user_mii_bus, mdio_np); 397 397 if (ret) { 398 398 dev_err(priv->dev, "unable to register MDIO bus %s\n", 399 - priv->slave_mii_bus->id); 399 + priv->user_mii_bus->id); 400 400 goto err_put_node; 401 401 } 402 402 ··· 514 514 return; 515 515 516 516 dsa_unregister_switch(priv->ds); 517 - if (priv->slave_mii_bus) 518 - of_node_put(priv->slave_mii_bus->dev.of_node); 517 + if (priv->user_mii_bus) 518 + of_node_put(priv->user_mii_bus->dev.of_node); 519 519 520 520 /* leave the device reset asserted */ 521 521 if (priv->reset)
+1 -1
drivers/net/dsa/realtek/realtek.h
··· 54 54 struct regmap *map; 55 55 struct regmap *map_nolock; 56 56 struct mutex map_lock; 57 - struct mii_bus *slave_mii_bus; 57 + struct mii_bus *user_mii_bus; 58 58 struct mii_bus *bus; 59 59 int mdio_addr; 60 60
+1 -1
drivers/net/dsa/realtek/rtl8365mb.c
··· 1144 1144 int frame_size; 1145 1145 1146 1146 /* When a new MTU is set, DSA always sets the CPU port's MTU to the 1147 - * largest MTU of the slave ports. Because the switch only has a global 1147 + * largest MTU of the user ports. Because the switch only has a global 1148 1148 * RX length register, only allowing CPU port here is enough. 1149 1149 */ 1150 1150 if (!dsa_is_cpu_port(ds, port))
+2 -2
drivers/net/dsa/sja1105/sja1105_main.c
··· 2688 2688 } 2689 2689 2690 2690 /* Transfer skb to the host port. */ 2691 - dsa_enqueue_skb(skb, dsa_to_port(ds, port)->slave); 2691 + dsa_enqueue_skb(skb, dsa_to_port(ds, port)->user); 2692 2692 2693 2693 /* Wait until the switch has processed the frame */ 2694 2694 do { ··· 3081 3081 * ref_clk pin. So port clocking needs to be initialized early, before 3082 3082 * connecting to PHYs is attempted, otherwise they won't respond through MDIO. 3083 3083 * Setting correct PHY link speed does not matter now. 3084 - * But dsa_slave_phy_setup is called later than sja1105_setup, so the PHY 3084 + * But dsa_user_phy_setup is called later than sja1105_setup, so the PHY 3085 3085 * bindings are not yet parsed by DSA core. We need to parse early so that we 3086 3086 * can populate the xMII mode parameters table. 3087 3087 */
+6 -6
drivers/net/dsa/xrs700x/xrs700x.c
··· 554 554 unsigned int val = XRS_HSR_CFG_HSR_PRP; 555 555 struct dsa_port *partner = NULL, *dp; 556 556 struct xrs700x *priv = ds->priv; 557 - struct net_device *slave; 557 + struct net_device *user; 558 558 int ret, i, hsr_pair[2]; 559 559 enum hsr_version ver; 560 560 bool fwd = false; ··· 638 638 hsr_pair[0] = port; 639 639 hsr_pair[1] = partner->index; 640 640 for (i = 0; i < ARRAY_SIZE(hsr_pair); i++) { 641 - slave = dsa_to_port(ds, hsr_pair[i])->slave; 642 - slave->features |= XRS7000X_SUPPORTED_HSR_FEATURES; 641 + user = dsa_to_port(ds, hsr_pair[i])->user; 642 + user->features |= XRS7000X_SUPPORTED_HSR_FEATURES; 643 643 } 644 644 645 645 return 0; ··· 650 650 { 651 651 struct dsa_port *partner = NULL, *dp; 652 652 struct xrs700x *priv = ds->priv; 653 - struct net_device *slave; 653 + struct net_device *user; 654 654 int i, hsr_pair[2]; 655 655 unsigned int val; 656 656 ··· 692 692 hsr_pair[0] = port; 693 693 hsr_pair[1] = partner->index; 694 694 for (i = 0; i < ARRAY_SIZE(hsr_pair); i++) { 695 - slave = dsa_to_port(ds, hsr_pair[i])->slave; 696 - slave->features &= ~XRS7000X_SUPPORTED_HSR_FEATURES; 695 + user = dsa_to_port(ds, hsr_pair[i])->user; 696 + user->features &= ~XRS7000X_SUPPORTED_HSR_FEATURES; 697 697 } 698 698 699 699 return 0;
+1 -1
drivers/net/ethernet/broadcom/bcmsysport.c
··· 2430 2430 if (dev->netdev_ops != &bcm_sysport_netdev_ops) 2431 2431 return NOTIFY_DONE; 2432 2432 2433 - if (!dsa_slave_dev_check(info->upper_dev)) 2433 + if (!dsa_user_dev_check(info->upper_dev)) 2434 2434 return NOTIFY_DONE; 2435 2435 2436 2436 if (info->linking)
+1 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 3329 3329 return NOTIFY_DONE; 3330 3330 3331 3331 found: 3332 - if (!dsa_slave_dev_check(dev)) 3332 + if (!dsa_user_dev_check(dev)) 3333 3333 return NOTIFY_DONE; 3334 3334 3335 3335 if (__ethtool_get_link_ksettings(dev, &s))
+1 -1
drivers/net/ethernet/mediatek/mtk_ppe_offload.c
··· 175 175 if (dp->cpu_dp->tag_ops->proto != DSA_TAG_PROTO_MTK) 176 176 return -ENODEV; 177 177 178 - *dev = dsa_port_to_master(dp); 178 + *dev = dsa_port_to_conduit(dp); 179 179 180 180 return dp->index; 181 181 #else
+1 -1
include/linux/dsa/sja1105.h
··· 28 28 /* Source and Destination MAC of follow-up meta frames. 29 29 * Whereas the choice of SMAC only affects the unique identification of the 30 30 * switch as sender of meta frames, the DMAC must be an address that is present 31 - * in the DSA master port's multicast MAC filter. 31 + * in the DSA conduit port's multicast MAC filter. 32 32 * 01-80-C2-00-00-0E is a good choice for this, as all profiles of IEEE 1588 33 33 * over L2 use this address for some purpose already. 34 34 */
+28 -28
include/net/dsa.h
··· 102 102 const char *name; 103 103 enum dsa_tag_protocol proto; 104 104 /* Some tagging protocols either mangle or shift the destination MAC 105 - * address, in which case the DSA master would drop packets on ingress 105 + * address, in which case the DSA conduit would drop packets on ingress 106 106 * if what it understands out of the destination MAC address is not in 107 107 * its RX filter. 108 108 */ 109 - bool promisc_on_master; 109 + bool promisc_on_conduit; 110 110 }; 111 111 112 112 struct dsa_lag { ··· 236 236 }; 237 237 238 238 struct dsa_port { 239 - /* A CPU port is physically connected to a master device. 240 - * A user port exposed to userspace has a slave device. 239 + /* A CPU port is physically connected to a conduit device. A user port 240 + * exposes a network device to user-space, called 'user' here. 241 241 */ 242 242 union { 243 - struct net_device *master; 244 - struct net_device *slave; 243 + struct net_device *conduit; 244 + struct net_device *user; 245 245 }; 246 246 247 247 /* Copy of the tagging protocol operations, for quicker access ··· 249 249 */ 250 250 const struct dsa_device_ops *tag_ops; 251 251 252 - /* Copies for faster access in master receive hot path */ 252 + /* Copies for faster access in conduit receive hot path */ 253 253 struct dsa_switch_tree *dst; 254 254 struct sk_buff *(*rcv)(struct sk_buff *skb, struct net_device *dev); 255 255 ··· 281 281 282 282 u8 lag_tx_enabled:1; 283 283 284 - /* Master state bits, valid only on CPU ports */ 285 - u8 master_admin_up:1; 286 - u8 master_oper_up:1; 284 + /* conduit state bits, valid only on CPU ports */ 285 + u8 conduit_admin_up:1; 286 + u8 conduit_oper_up:1; 287 287 288 288 /* Valid only on user ports */ 289 289 u8 cpu_port_in_lag:1; ··· 303 303 struct list_head list; 304 304 305 305 /* 306 - * Original copy of the master netdev ethtool_ops 306 + * Original copy of the conduit netdev ethtool_ops 307 307 */ 308 308 const struct ethtool_ops *orig_ethtool_ops; 309 309 ··· 452 452 
const struct dsa_switch_ops *ops; 453 453 454 454 /* 455 - * Slave mii_bus and devices for the individual ports. 455 + * User mii_bus and devices for the individual ports. 456 456 */ 457 457 u32 phys_mii_mask; 458 - struct mii_bus *slave_mii_bus; 458 + struct mii_bus *user_mii_bus; 459 459 460 460 /* Ageing Time limits in msecs */ 461 461 unsigned int ageing_time_min; ··· 520 520 return dp->type == DSA_PORT_TYPE_UNUSED; 521 521 } 522 522 523 - static inline bool dsa_port_master_is_operational(struct dsa_port *dp) 523 + static inline bool dsa_port_conduit_is_operational(struct dsa_port *dp) 524 524 { 525 - return dsa_port_is_cpu(dp) && dp->master_admin_up && 526 - dp->master_oper_up; 525 + return dsa_port_is_cpu(dp) && dp->conduit_admin_up && 526 + dp->conduit_oper_up; 527 527 } 528 528 529 529 static inline bool dsa_is_unused_port(struct dsa_switch *ds, int p) ··· 713 713 return dsa_port_lag_dev_get(dp) == lag->dev; 714 714 } 715 715 716 - static inline struct net_device *dsa_port_to_master(const struct dsa_port *dp) 716 + static inline struct net_device *dsa_port_to_conduit(const struct dsa_port *dp) 717 717 { 718 718 if (dp->cpu_port_in_lag) 719 719 return dsa_port_lag_dev_get(dp->cpu_dp); 720 720 721 - return dp->cpu_dp->master; 721 + return dp->cpu_dp->conduit; 722 722 } 723 723 724 724 static inline ··· 732 732 else if (dp->hsr_dev) 733 733 return dp->hsr_dev; 734 734 735 - return dp->slave; 735 + return dp->user; 736 736 } 737 737 738 738 static inline struct net_device * ··· 834 834 int (*connect_tag_protocol)(struct dsa_switch *ds, 835 835 enum dsa_tag_protocol proto); 836 836 837 - int (*port_change_master)(struct dsa_switch *ds, int port, 838 - struct net_device *master, 839 - struct netlink_ext_ack *extack); 837 + int (*port_change_conduit)(struct dsa_switch *ds, int port, 838 + struct net_device *conduit, 839 + struct netlink_ext_ack *extack); 840 840 841 841 /* Optional switch-wide initialization and destruction methods */ 842 842 int (*setup)(struct 
dsa_switch *ds); ··· 1233 1233 int (*tag_8021q_vlan_del)(struct dsa_switch *ds, int port, u16 vid); 1234 1234 1235 1235 /* 1236 - * DSA master tracking operations 1236 + * DSA conduit tracking operations 1237 1237 */ 1238 - void (*master_state_change)(struct dsa_switch *ds, 1239 - const struct net_device *master, 1240 - bool operational); 1238 + void (*conduit_state_change)(struct dsa_switch *ds, 1239 + const struct net_device *conduit, 1240 + bool operational); 1241 1241 }; 1242 1242 1243 1243 #define DSA_DEVLINK_PARAM_DRIVER(_id, _name, _type, _cmodes) \ ··· 1374 1374 #endif /* CONFIG_PM_SLEEP */ 1375 1375 1376 1376 #if IS_ENABLED(CONFIG_NET_DSA) 1377 - bool dsa_slave_dev_check(const struct net_device *dev); 1377 + bool dsa_user_dev_check(const struct net_device *dev); 1378 1378 #else 1379 - static inline bool dsa_slave_dev_check(const struct net_device *dev) 1379 + static inline bool dsa_user_dev_check(const struct net_device *dev) 1380 1380 { 1381 1381 return false; 1382 1382 }
+11 -11
include/net/dsa_stubs.h
··· 13 13 extern const struct dsa_stubs *dsa_stubs; 14 14 15 15 struct dsa_stubs { 16 - int (*master_hwtstamp_validate)(struct net_device *dev, 17 - const struct kernel_hwtstamp_config *config, 18 - struct netlink_ext_ack *extack); 16 + int (*conduit_hwtstamp_validate)(struct net_device *dev, 17 + const struct kernel_hwtstamp_config *config, 18 + struct netlink_ext_ack *extack); 19 19 }; 20 20 21 - static inline int dsa_master_hwtstamp_validate(struct net_device *dev, 22 - const struct kernel_hwtstamp_config *config, 23 - struct netlink_ext_ack *extack) 21 + static inline int dsa_conduit_hwtstamp_validate(struct net_device *dev, 22 + const struct kernel_hwtstamp_config *config, 23 + struct netlink_ext_ack *extack) 24 24 { 25 25 if (!netdev_uses_dsa(dev)) 26 26 return 0; ··· 29 29 * netdev_uses_dsa() returns true, the dsa_core module is still 30 30 * registered, and so, dsa_unregister_stubs() couldn't have run. 31 31 * For netdev_uses_dsa() to start returning false, it would imply that 32 - * dsa_master_teardown() has executed, which requires rtnl_lock(). 32 + * dsa_conduit_teardown() has executed, which requires rtnl_lock(). 33 33 */ 34 34 ASSERT_RTNL(); 35 35 36 - return dsa_stubs->master_hwtstamp_validate(dev, config, extack); 36 + return dsa_stubs->conduit_hwtstamp_validate(dev, config, extack); 37 37 } 38 38 39 39 #else 40 40 41 - static inline int dsa_master_hwtstamp_validate(struct net_device *dev, 42 - const struct kernel_hwtstamp_config *config, 43 - struct netlink_ext_ack *extack) 41 + static inline int dsa_conduit_hwtstamp_validate(struct net_device *dev, 42 + const struct kernel_hwtstamp_config *config, 43 + struct netlink_ext_ack *extack) 44 44 { 45 45 return 0; 46 46 }
+1 -1
net/core/dev_ioctl.c
··· 382 382 if (err) 383 383 return err; 384 384 385 - err = dsa_master_hwtstamp_validate(dev, &kernel_cfg, &extack); 385 + err = dsa_conduit_hwtstamp_validate(dev, &kernel_cfg, &extack); 386 386 if (err) { 387 387 if (extack._msg) 388 388 netdev_err(dev, "%s\n", extack._msg);
+3 -3
net/dsa/Makefile
··· 8 8 # the core 9 9 obj-$(CONFIG_NET_DSA) += dsa_core.o 10 10 dsa_core-y += \ 11 + conduit.o \ 11 12 devlink.o \ 12 13 dsa.o \ 13 - master.o \ 14 14 netlink.o \ 15 15 port.o \ 16 - slave.o \ 17 16 switch.o \ 18 17 tag.o \ 19 18 tag_8021q.o \ 20 - trace.o 19 + trace.o \ 20 + user.o 21 21 22 22 # tagging formats 23 23 obj-$(CONFIG_NET_DSA_TAG_AR9331) += tag_ar9331.o
+22
net/dsa/conduit.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + 3 + #ifndef __DSA_CONDUIT_H 4 + #define __DSA_CONDUIT_H 5 + 6 + struct dsa_port; 7 + struct net_device; 8 + struct netdev_lag_upper_info; 9 + struct netlink_ext_ack; 10 + 11 + int dsa_conduit_setup(struct net_device *dev, struct dsa_port *cpu_dp); 12 + void dsa_conduit_teardown(struct net_device *dev); 13 + int dsa_conduit_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp, 14 + struct netdev_lag_upper_info *uinfo, 15 + struct netlink_ext_ack *extack); 16 + void dsa_conduit_lag_teardown(struct net_device *lag_dev, 17 + struct dsa_port *cpu_dp); 18 + int __dsa_conduit_hwtstamp_validate(struct net_device *dev, 19 + const struct kernel_hwtstamp_config *config, 20 + struct netlink_ext_ack *extack); 21 + 22 + #endif
+123 -123
net/dsa/dsa.c
··· 20 20 #include <net/dsa_stubs.h> 21 21 #include <net/sch_generic.h> 22 22 23 + #include "conduit.h" 23 24 #include "devlink.h" 24 25 #include "dsa.h" 25 - #include "master.h" 26 26 #include "netlink.h" 27 27 #include "port.h" 28 - #include "slave.h" 29 28 #include "switch.h" 30 29 #include "tag.h" 30 + #include "user.h" 31 31 32 32 #define DSA_MAX_NUM_OFFLOADING_BRIDGES BITS_PER_LONG 33 33 ··· 365 365 return NULL; 366 366 } 367 367 368 - struct net_device *dsa_tree_find_first_master(struct dsa_switch_tree *dst) 368 + struct net_device *dsa_tree_find_first_conduit(struct dsa_switch_tree *dst) 369 369 { 370 370 struct device_node *ethernet; 371 - struct net_device *master; 371 + struct net_device *conduit; 372 372 struct dsa_port *cpu_dp; 373 373 374 374 cpu_dp = dsa_tree_find_first_cpu(dst); 375 375 ethernet = of_parse_phandle(cpu_dp->dn, "ethernet", 0); 376 - master = of_find_net_device_by_node(ethernet); 376 + conduit = of_find_net_device_by_node(ethernet); 377 377 of_node_put(ethernet); 378 378 379 - return master; 379 + return conduit; 380 380 } 381 381 382 382 /* Assign the default CPU port (the first one in the tree) to all ports of the ··· 517 517 break; 518 518 case DSA_PORT_TYPE_USER: 519 519 of_get_mac_address(dp->dn, dp->mac); 520 - err = dsa_slave_create(dp); 520 + err = dsa_user_create(dp); 521 521 break; 522 522 } 523 523 ··· 554 554 dsa_shared_port_link_unregister_of(dp); 555 555 break; 556 556 case DSA_PORT_TYPE_USER: 557 - if (dp->slave) { 558 - dsa_slave_destroy(dp->slave); 559 - dp->slave = NULL; 557 + if (dp->user) { 558 + dsa_user_destroy(dp->user); 559 + dp->user = NULL; 560 560 } 561 561 break; 562 562 } ··· 632 632 if (ds->setup) 633 633 return 0; 634 634 635 - /* Initialize ds->phys_mii_mask before registering the slave MDIO bus 635 + /* Initialize ds->phys_mii_mask before registering the user MDIO bus 636 636 * driver and before ops->setup() has run, since the switch drivers and 637 - * the slave MDIO bus driver rely on these values for 
probing PHY 637 + * the user MDIO bus driver rely on these values for probing PHY 638 638 * devices or not 639 639 */ 640 640 ds->phys_mii_mask |= dsa_user_ports(ds); ··· 657 657 if (err) 658 658 goto teardown; 659 659 660 - if (!ds->slave_mii_bus && ds->ops->phy_read) { 661 - ds->slave_mii_bus = mdiobus_alloc(); 662 - if (!ds->slave_mii_bus) { 660 + if (!ds->user_mii_bus && ds->ops->phy_read) { 661 + ds->user_mii_bus = mdiobus_alloc(); 662 + if (!ds->user_mii_bus) { 663 663 err = -ENOMEM; 664 664 goto teardown; 665 665 } 666 666 667 - dsa_slave_mii_bus_init(ds); 667 + dsa_user_mii_bus_init(ds); 668 668 669 669 dn = of_get_child_by_name(ds->dev->of_node, "mdio"); 670 670 671 - err = of_mdiobus_register(ds->slave_mii_bus, dn); 671 + err = of_mdiobus_register(ds->user_mii_bus, dn); 672 672 of_node_put(dn); 673 673 if (err < 0) 674 - goto free_slave_mii_bus; 674 + goto free_user_mii_bus; 675 675 } 676 676 677 677 dsa_switch_devlink_register(ds); ··· 679 679 ds->setup = true; 680 680 return 0; 681 681 682 - free_slave_mii_bus: 683 - if (ds->slave_mii_bus && ds->ops->phy_read) 684 - mdiobus_free(ds->slave_mii_bus); 682 + free_user_mii_bus: 683 + if (ds->user_mii_bus && ds->ops->phy_read) 684 + mdiobus_free(ds->user_mii_bus); 685 685 teardown: 686 686 if (ds->ops->teardown) 687 687 ds->ops->teardown(ds); ··· 699 699 700 700 dsa_switch_devlink_unregister(ds); 701 701 702 - if (ds->slave_mii_bus && ds->ops->phy_read) { 703 - mdiobus_unregister(ds->slave_mii_bus); 704 - mdiobus_free(ds->slave_mii_bus); 705 - ds->slave_mii_bus = NULL; 702 + if (ds->user_mii_bus && ds->ops->phy_read) { 703 + mdiobus_unregister(ds->user_mii_bus); 704 + mdiobus_free(ds->user_mii_bus); 705 + ds->user_mii_bus = NULL; 706 706 } 707 707 708 708 dsa_switch_teardown_tag_protocol(ds); ··· 793 793 return err; 794 794 } 795 795 796 - static int dsa_tree_setup_master(struct dsa_switch_tree *dst) 796 + static int dsa_tree_setup_conduit(struct dsa_switch_tree *dst) 797 797 { 798 798 struct dsa_port 
*cpu_dp; 799 799 int err = 0; ··· 801 801 rtnl_lock(); 802 802 803 803 dsa_tree_for_each_cpu_port(cpu_dp, dst) { 804 - struct net_device *master = cpu_dp->master; 805 - bool admin_up = (master->flags & IFF_UP) && 806 - !qdisc_tx_is_noop(master); 804 + struct net_device *conduit = cpu_dp->conduit; 805 + bool admin_up = (conduit->flags & IFF_UP) && 806 + !qdisc_tx_is_noop(conduit); 807 807 808 - err = dsa_master_setup(master, cpu_dp); 808 + err = dsa_conduit_setup(conduit, cpu_dp); 809 809 if (err) 810 810 break; 811 811 812 - /* Replay master state event */ 813 - dsa_tree_master_admin_state_change(dst, master, admin_up); 814 - dsa_tree_master_oper_state_change(dst, master, 815 - netif_oper_up(master)); 812 + /* Replay conduit state event */ 813 + dsa_tree_conduit_admin_state_change(dst, conduit, admin_up); 814 + dsa_tree_conduit_oper_state_change(dst, conduit, 815 + netif_oper_up(conduit)); 816 816 } 817 817 818 818 rtnl_unlock(); ··· 820 820 return err; 821 821 } 822 822 823 - static void dsa_tree_teardown_master(struct dsa_switch_tree *dst) 823 + static void dsa_tree_teardown_conduit(struct dsa_switch_tree *dst) 824 824 { 825 825 struct dsa_port *cpu_dp; 826 826 827 827 rtnl_lock(); 828 828 829 829 dsa_tree_for_each_cpu_port(cpu_dp, dst) { 830 - struct net_device *master = cpu_dp->master; 830 + struct net_device *conduit = cpu_dp->conduit; 831 831 832 832 /* Synthesizing an "admin down" state is sufficient for 833 - * the switches to get a notification if the master is 833 + * the switches to get a notification if the conduit is 834 834 * currently up and running. 
835 835 */ 836 - dsa_tree_master_admin_state_change(dst, master, false); 836 + dsa_tree_conduit_admin_state_change(dst, conduit, false); 837 837 838 - dsa_master_teardown(master); 838 + dsa_conduit_teardown(conduit); 839 839 } 840 840 841 841 rtnl_unlock(); ··· 894 894 if (err) 895 895 goto teardown_switches; 896 896 897 - err = dsa_tree_setup_master(dst); 897 + err = dsa_tree_setup_conduit(dst); 898 898 if (err) 899 899 goto teardown_ports; 900 900 901 901 err = dsa_tree_setup_lags(dst); 902 902 if (err) 903 - goto teardown_master; 903 + goto teardown_conduit; 904 904 905 905 dst->setup = true; 906 906 ··· 908 908 909 909 return 0; 910 910 911 - teardown_master: 912 - dsa_tree_teardown_master(dst); 911 + teardown_conduit: 912 + dsa_tree_teardown_conduit(dst); 913 913 teardown_ports: 914 914 dsa_tree_teardown_ports(dst); 915 915 teardown_switches: ··· 929 929 930 930 dsa_tree_teardown_lags(dst); 931 931 932 - dsa_tree_teardown_master(dst); 932 + dsa_tree_teardown_conduit(dst); 933 933 934 934 dsa_tree_teardown_ports(dst); 935 935 ··· 978 978 return err; 979 979 } 980 980 981 - /* Since the dsa/tagging sysfs device attribute is per master, the assumption 981 + /* Since the dsa/tagging sysfs device attribute is per conduit, the assumption 982 982 * is that all DSA switches within a tree share the same tagger, otherwise 983 983 * they would have formed disjoint trees (different "dsa,member" values). 984 984 */ ··· 999 999 * restriction, there needs to be another mutex which serializes this. 
1000 1000 */ 1001 1001 dsa_tree_for_each_user_port(dp, dst) { 1002 - if (dsa_port_to_master(dp)->flags & IFF_UP) 1002 + if (dsa_port_to_conduit(dp)->flags & IFF_UP) 1003 1003 goto out_unlock; 1004 1004 1005 - if (dp->slave->flags & IFF_UP) 1005 + if (dp->user->flags & IFF_UP) 1006 1006 goto out_unlock; 1007 1007 } 1008 1008 ··· 1028 1028 return err; 1029 1029 } 1030 1030 1031 - static void dsa_tree_master_state_change(struct dsa_switch_tree *dst, 1032 - struct net_device *master) 1031 + static void dsa_tree_conduit_state_change(struct dsa_switch_tree *dst, 1032 + struct net_device *conduit) 1033 1033 { 1034 - struct dsa_notifier_master_state_info info; 1035 - struct dsa_port *cpu_dp = master->dsa_ptr; 1034 + struct dsa_notifier_conduit_state_info info; 1035 + struct dsa_port *cpu_dp = conduit->dsa_ptr; 1036 1036 1037 - info.master = master; 1038 - info.operational = dsa_port_master_is_operational(cpu_dp); 1037 + info.conduit = conduit; 1038 + info.operational = dsa_port_conduit_is_operational(cpu_dp); 1039 1039 1040 - dsa_tree_notify(dst, DSA_NOTIFIER_MASTER_STATE_CHANGE, &info); 1040 + dsa_tree_notify(dst, DSA_NOTIFIER_CONDUIT_STATE_CHANGE, &info); 1041 1041 } 1042 1042 1043 - void dsa_tree_master_admin_state_change(struct dsa_switch_tree *dst, 1044 - struct net_device *master, 1043 + void dsa_tree_conduit_admin_state_change(struct dsa_switch_tree *dst, 1044 + struct net_device *conduit, 1045 + bool up) 1046 + { 1047 + struct dsa_port *cpu_dp = conduit->dsa_ptr; 1048 + bool notify = false; 1049 + 1050 + /* Don't keep track of admin state on LAG DSA conduits, 1051 + * but rather just of physical DSA conduits 1052 + */ 1053 + if (netif_is_lag_master(conduit)) 1054 + return; 1055 + 1056 + if ((dsa_port_conduit_is_operational(cpu_dp)) != 1057 + (up && cpu_dp->conduit_oper_up)) 1058 + notify = true; 1059 + 1060 + cpu_dp->conduit_admin_up = up; 1061 + 1062 + if (notify) 1063 + dsa_tree_conduit_state_change(dst, conduit); 1064 + } 1065 + 1066 + void 
dsa_tree_conduit_oper_state_change(struct dsa_switch_tree *dst, 1067 + struct net_device *conduit, 1045 1068 bool up) 1046 1069 { 1047 - struct dsa_port *cpu_dp = master->dsa_ptr; 1070 + struct dsa_port *cpu_dp = conduit->dsa_ptr; 1048 1071 bool notify = false; 1049 1072 1050 - /* Don't keep track of admin state on LAG DSA masters, 1051 - * but rather just of physical DSA masters 1073 + /* Don't keep track of oper state on LAG DSA conduits, 1074 + * but rather just of physical DSA conduits 1052 1075 */ 1053 - if (netif_is_lag_master(master)) 1076 + if (netif_is_lag_master(conduit)) 1054 1077 return; 1055 1078 1056 - if ((dsa_port_master_is_operational(cpu_dp)) != 1057 - (up && cpu_dp->master_oper_up)) 1079 + if ((dsa_port_conduit_is_operational(cpu_dp)) != 1080 + (cpu_dp->conduit_admin_up && up)) 1058 1081 notify = true; 1059 1082 1060 - cpu_dp->master_admin_up = up; 1083 + cpu_dp->conduit_oper_up = up; 1061 1084 1062 1085 if (notify) 1063 - dsa_tree_master_state_change(dst, master); 1064 - } 1065 - 1066 - void dsa_tree_master_oper_state_change(struct dsa_switch_tree *dst, 1067 - struct net_device *master, 1068 - bool up) 1069 - { 1070 - struct dsa_port *cpu_dp = master->dsa_ptr; 1071 - bool notify = false; 1072 - 1073 - /* Don't keep track of oper state on LAG DSA masters, 1074 - * but rather just of physical DSA masters 1075 - */ 1076 - if (netif_is_lag_master(master)) 1077 - return; 1078 - 1079 - if ((dsa_port_master_is_operational(cpu_dp)) != 1080 - (cpu_dp->master_admin_up && up)) 1081 - notify = true; 1082 - 1083 - cpu_dp->master_oper_up = up; 1084 - 1085 - if (notify) 1086 - dsa_tree_master_state_change(dst, master); 1086 + dsa_tree_conduit_state_change(dst, conduit); 1087 1087 } 1088 1088 1089 1089 static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index) ··· 1129 1129 } 1130 1130 1131 1131 static enum dsa_tag_protocol dsa_get_tag_protocol(struct dsa_port *dp, 1132 - struct net_device *master) 1132 + struct net_device *conduit) 1133 1133 { 
1134 1134 enum dsa_tag_protocol tag_protocol = DSA_TAG_PROTO_NONE; 1135 1135 struct dsa_switch *mds, *ds = dp->ds; ··· 1140 1140 * happens the switch driver may want to know if its tagging protocol 1141 1141 * is going to work in such a configuration. 1142 1142 */ 1143 - if (dsa_slave_dev_check(master)) { 1144 - mdp = dsa_slave_to_port(master); 1143 + if (dsa_user_dev_check(conduit)) { 1144 + mdp = dsa_user_to_port(conduit); 1145 1145 mds = mdp->ds; 1146 1146 mdp_upstream = dsa_upstream_port(mds, mdp->index); 1147 1147 tag_protocol = mds->ops->get_tag_protocol(mds, mdp_upstream, 1148 1148 DSA_TAG_PROTO_NONE); 1149 1149 } 1150 1150 1151 - /* If the master device is not itself a DSA slave in a disjoint DSA 1151 + /* If the conduit device is not itself a DSA user in a disjoint DSA 1152 1152 * tree, then return immediately. 1153 1153 */ 1154 1154 return ds->ops->get_tag_protocol(ds, dp->index, tag_protocol); 1155 1155 } 1156 1156 1157 - static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master, 1157 + static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *conduit, 1158 1158 const char *user_protocol) 1159 1159 { 1160 1160 const struct dsa_device_ops *tag_ops = NULL; ··· 1163 1163 enum dsa_tag_protocol default_proto; 1164 1164 1165 1165 /* Find out which protocol the switch would prefer. 
*/ 1166 - default_proto = dsa_get_tag_protocol(dp, master); 1166 + default_proto = dsa_get_tag_protocol(dp, conduit); 1167 1167 if (dst->default_proto) { 1168 1168 if (dst->default_proto != default_proto) { 1169 1169 dev_err(ds->dev, ··· 1218 1218 dst->tag_ops = tag_ops; 1219 1219 } 1220 1220 1221 - dp->master = master; 1221 + dp->conduit = conduit; 1222 1222 dp->type = DSA_PORT_TYPE_CPU; 1223 1223 dsa_port_set_tag_protocol(dp, dst->tag_ops); 1224 1224 dp->dst = dst; ··· 1248 1248 dp->dn = dn; 1249 1249 1250 1250 if (ethernet) { 1251 - struct net_device *master; 1251 + struct net_device *conduit; 1252 1252 const char *user_protocol; 1253 1253 1254 - master = of_find_net_device_by_node(ethernet); 1254 + conduit = of_find_net_device_by_node(ethernet); 1255 1255 of_node_put(ethernet); 1256 - if (!master) 1256 + if (!conduit) 1257 1257 return -EPROBE_DEFER; 1258 1258 1259 1259 user_protocol = of_get_property(dn, "dsa-tag-protocol", NULL); 1260 - return dsa_port_parse_cpu(dp, master, user_protocol); 1260 + return dsa_port_parse_cpu(dp, conduit, user_protocol); 1261 1261 } 1262 1262 1263 1263 if (link) ··· 1412 1412 struct device *dev) 1413 1413 { 1414 1414 if (!strcmp(name, "cpu")) { 1415 - struct net_device *master; 1415 + struct net_device *conduit; 1416 1416 1417 - master = dsa_dev_to_net_device(dev); 1418 - if (!master) 1417 + conduit = dsa_dev_to_net_device(dev); 1418 + if (!conduit) 1419 1419 return -EPROBE_DEFER; 1420 1420 1421 - dev_put(master); 1421 + dev_put(conduit); 1422 1422 1423 - return dsa_port_parse_cpu(dp, master, NULL); 1423 + return dsa_port_parse_cpu(dp, conduit, NULL); 1424 1424 } 1425 1425 1426 1426 if (!strcmp(name, "dsa")) ··· 1566 1566 } 1567 1567 EXPORT_SYMBOL_GPL(dsa_unregister_switch); 1568 1568 1569 - /* If the DSA master chooses to unregister its net_device on .shutdown, DSA is 1569 + /* If the DSA conduit chooses to unregister its net_device on .shutdown, DSA is 1570 1570 * blocking that operation from completion, due to the dev_hold 
taken inside 1571 - * netdev_upper_dev_link. Unlink the DSA slave interfaces from being uppers of 1572 - * the DSA master, so that the system can reboot successfully. 1571 + * netdev_upper_dev_link. Unlink the DSA user interfaces from being uppers of 1572 + * the DSA conduit, so that the system can reboot successfully. 1573 1573 */ 1574 1574 void dsa_switch_shutdown(struct dsa_switch *ds) 1575 1575 { 1576 - struct net_device *master, *slave_dev; 1576 + struct net_device *conduit, *user_dev; 1577 1577 struct dsa_port *dp; 1578 1578 1579 1579 mutex_lock(&dsa2_mutex); ··· 1584 1584 rtnl_lock(); 1585 1585 1586 1586 dsa_switch_for_each_user_port(dp, ds) { 1587 - master = dsa_port_to_master(dp); 1588 - slave_dev = dp->slave; 1587 + conduit = dsa_port_to_conduit(dp); 1588 + user_dev = dp->user; 1589 1589 1590 - netdev_upper_dev_unlink(master, slave_dev); 1590 + netdev_upper_dev_unlink(conduit, user_dev); 1591 1591 } 1592 1592 1593 - /* Disconnect from further netdevice notifiers on the master, 1593 + /* Disconnect from further netdevice notifiers on the conduit, 1594 1594 * since netdev_uses_dsa() will now return false. 
1595 1595 */ 1596 1596 dsa_switch_for_each_cpu_port(dp, ds) 1597 - dp->master->dsa_ptr = NULL; 1597 + dp->conduit->dsa_ptr = NULL; 1598 1598 1599 1599 rtnl_unlock(); 1600 1600 out: ··· 1605 1605 #ifdef CONFIG_PM_SLEEP 1606 1606 static bool dsa_port_is_initialized(const struct dsa_port *dp) 1607 1607 { 1608 - return dp->type == DSA_PORT_TYPE_USER && dp->slave; 1608 + return dp->type == DSA_PORT_TYPE_USER && dp->user; 1609 1609 } 1610 1610 1611 1611 int dsa_switch_suspend(struct dsa_switch *ds) ··· 1613 1613 struct dsa_port *dp; 1614 1614 int ret = 0; 1615 1615 1616 - /* Suspend slave network devices */ 1616 + /* Suspend user network devices */ 1617 1617 dsa_switch_for_each_port(dp, ds) { 1618 1618 if (!dsa_port_is_initialized(dp)) 1619 1619 continue; 1620 1620 1621 - ret = dsa_slave_suspend(dp->slave); 1621 + ret = dsa_user_suspend(dp->user); 1622 1622 if (ret) 1623 1623 return ret; 1624 1624 } ··· 1641 1641 if (ret) 1642 1642 return ret; 1643 1643 1644 - /* Resume slave network devices */ 1644 + /* Resume user network devices */ 1645 1645 dsa_switch_for_each_port(dp, ds) { 1646 1646 if (!dsa_port_is_initialized(dp)) 1647 1647 continue; 1648 1648 1649 - ret = dsa_slave_resume(dp->slave); 1649 + ret = dsa_user_resume(dp->user); 1650 1650 if (ret) 1651 1651 return ret; 1652 1652 } ··· 1658 1658 1659 1659 struct dsa_port *dsa_port_from_netdev(struct net_device *netdev) 1660 1660 { 1661 - if (!netdev || !dsa_slave_dev_check(netdev)) 1661 + if (!netdev || !dsa_user_dev_check(netdev)) 1662 1662 return ERR_PTR(-ENODEV); 1663 1663 1664 - return dsa_slave_to_port(netdev); 1664 + return dsa_user_to_port(netdev); 1665 1665 } 1666 1666 EXPORT_SYMBOL_GPL(dsa_port_from_netdev); 1667 1667 ··· 1726 1726 EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db); 1727 1727 1728 1728 static const struct dsa_stubs __dsa_stubs = { 1729 - .master_hwtstamp_validate = __dsa_master_hwtstamp_validate, 1729 + .conduit_hwtstamp_validate = __dsa_conduit_hwtstamp_validate, 1730 1730 }; 1731 1731 1732 1732 
static void dsa_register_stubs(void) ··· 1748 1748 if (!dsa_owq) 1749 1749 return -ENOMEM; 1750 1750 1751 - rc = dsa_slave_register_notifier(); 1751 + rc = dsa_user_register_notifier(); 1752 1752 if (rc) 1753 1753 goto register_notifier_fail; 1754 1754 ··· 1763 1763 return 0; 1764 1764 1765 1765 netlink_register_fail: 1766 - dsa_slave_unregister_notifier(); 1766 + dsa_user_unregister_notifier(); 1767 1767 dev_remove_pack(&dsa_pack_type); 1768 1768 register_notifier_fail: 1769 1769 destroy_workqueue(dsa_owq); ··· 1778 1778 1779 1779 rtnl_link_unregister(&dsa_link_ops); 1780 1780 1781 - dsa_slave_unregister_notifier(); 1781 + dsa_user_unregister_notifier(); 1782 1782 dev_remove_pack(&dsa_pack_type); 1783 1783 destroy_workqueue(dsa_owq); 1784 1784 }
+6 -6
net/dsa/dsa.h
··· 21 21 void dsa_lag_unmap(struct dsa_switch_tree *dst, struct dsa_lag *lag); 22 22 struct dsa_lag *dsa_tree_lag_find(struct dsa_switch_tree *dst, 23 23 const struct net_device *lag_dev); 24 - struct net_device *dsa_tree_find_first_master(struct dsa_switch_tree *dst); 24 + struct net_device *dsa_tree_find_first_conduit(struct dsa_switch_tree *dst); 25 25 int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst, 26 26 const struct dsa_device_ops *tag_ops, 27 27 const struct dsa_device_ops *old_tag_ops); 28 - void dsa_tree_master_admin_state_change(struct dsa_switch_tree *dst, 29 - struct net_device *master, 28 + void dsa_tree_conduit_admin_state_change(struct dsa_switch_tree *dst, 29 + struct net_device *conduit, 30 + bool up); 31 + void dsa_tree_conduit_oper_state_change(struct dsa_switch_tree *dst, 32 + struct net_device *conduit, 30 33 bool up); 31 - void dsa_tree_master_oper_state_change(struct dsa_switch_tree *dst, 32 - struct net_device *master, 33 - bool up); 34 34 unsigned int dsa_bridge_num_get(const struct net_device *bridge_dev, int max); 35 35 void dsa_bridge_num_put(const struct net_device *bridge_dev, 36 36 unsigned int bridge_num);
+59 -59
net/dsa/master.c net/dsa/conduit.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 /* 3 - * Handling of a master device, switching frames via its switch fabric CPU port 3 + * Handling of a conduit device, switching frames via its switch fabric CPU port 4 4 * 5 5 * Copyright (c) 2017 Savoir-faire Linux Inc. 6 6 * Vivien Didelot <vivien.didelot@savoirfairelinux.com> ··· 11 11 #include <linux/netlink.h> 12 12 #include <net/dsa.h> 13 13 14 + #include "conduit.h" 14 15 #include "dsa.h" 15 - #include "master.h" 16 16 #include "port.h" 17 17 #include "tag.h" 18 18 19 - static int dsa_master_get_regs_len(struct net_device *dev) 19 + static int dsa_conduit_get_regs_len(struct net_device *dev) 20 20 { 21 21 struct dsa_port *cpu_dp = dev->dsa_ptr; 22 22 const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; ··· 45 45 return ret; 46 46 } 47 47 48 - static void dsa_master_get_regs(struct net_device *dev, 49 - struct ethtool_regs *regs, void *data) 48 + static void dsa_conduit_get_regs(struct net_device *dev, 49 + struct ethtool_regs *regs, void *data) 50 50 { 51 51 struct dsa_port *cpu_dp = dev->dsa_ptr; 52 52 const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; ··· 80 80 } 81 81 } 82 82 83 - static void dsa_master_get_ethtool_stats(struct net_device *dev, 84 - struct ethtool_stats *stats, 85 - uint64_t *data) 83 + static void dsa_conduit_get_ethtool_stats(struct net_device *dev, 84 + struct ethtool_stats *stats, 85 + uint64_t *data) 86 86 { 87 87 struct dsa_port *cpu_dp = dev->dsa_ptr; 88 88 const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; ··· 99 99 ds->ops->get_ethtool_stats(ds, port, data + count); 100 100 } 101 101 102 - static void dsa_master_get_ethtool_phy_stats(struct net_device *dev, 103 - struct ethtool_stats *stats, 104 - uint64_t *data) 102 + static void dsa_conduit_get_ethtool_phy_stats(struct net_device *dev, 103 + struct ethtool_stats *stats, 104 + uint64_t *data) 105 105 { 106 106 struct dsa_port *cpu_dp = dev->dsa_ptr; 107 107 const struct ethtool_ops *ops = 
cpu_dp->orig_ethtool_ops; ··· 125 125 ds->ops->get_ethtool_phy_stats(ds, port, data + count); 126 126 } 127 127 128 - static int dsa_master_get_sset_count(struct net_device *dev, int sset) 128 + static int dsa_conduit_get_sset_count(struct net_device *dev, int sset) 129 129 { 130 130 struct dsa_port *cpu_dp = dev->dsa_ptr; 131 131 const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; ··· 147 147 return count; 148 148 } 149 149 150 - static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset, 151 - uint8_t *data) 150 + static void dsa_conduit_get_strings(struct net_device *dev, uint32_t stringset, 151 + uint8_t *data) 152 152 { 153 153 struct dsa_port *cpu_dp = dev->dsa_ptr; 154 154 const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops; ··· 195 195 } 196 196 } 197 197 198 - /* Deny PTP operations on master if there is at least one switch in the tree 198 + /* Deny PTP operations on conduit if there is at least one switch in the tree 199 199 * that is PTP capable. 
200 200 */ 201 - int __dsa_master_hwtstamp_validate(struct net_device *dev, 202 - const struct kernel_hwtstamp_config *config, 203 - struct netlink_ext_ack *extack) 201 + int __dsa_conduit_hwtstamp_validate(struct net_device *dev, 202 + const struct kernel_hwtstamp_config *config, 203 + struct netlink_ext_ack *extack) 204 204 { 205 205 struct dsa_port *cpu_dp = dev->dsa_ptr; 206 206 struct dsa_switch *ds = cpu_dp->ds; ··· 212 212 list_for_each_entry(dp, &dst->ports, list) { 213 213 if (dsa_port_supports_hwtstamp(dp)) { 214 214 NL_SET_ERR_MSG(extack, 215 - "HW timestamping not allowed on DSA master when switch supports the operation"); 215 + "HW timestamping not allowed on DSA conduit when switch supports the operation"); 216 216 return -EBUSY; 217 217 } 218 218 } ··· 220 220 return 0; 221 221 } 222 222 223 - static int dsa_master_ethtool_setup(struct net_device *dev) 223 + static int dsa_conduit_ethtool_setup(struct net_device *dev) 224 224 { 225 225 struct dsa_port *cpu_dp = dev->dsa_ptr; 226 226 struct dsa_switch *ds = cpu_dp->ds; ··· 237 237 if (cpu_dp->orig_ethtool_ops) 238 238 memcpy(ops, cpu_dp->orig_ethtool_ops, sizeof(*ops)); 239 239 240 - ops->get_regs_len = dsa_master_get_regs_len; 241 - ops->get_regs = dsa_master_get_regs; 242 - ops->get_sset_count = dsa_master_get_sset_count; 243 - ops->get_ethtool_stats = dsa_master_get_ethtool_stats; 244 - ops->get_strings = dsa_master_get_strings; 245 - ops->get_ethtool_phy_stats = dsa_master_get_ethtool_phy_stats; 240 + ops->get_regs_len = dsa_conduit_get_regs_len; 241 + ops->get_regs = dsa_conduit_get_regs; 242 + ops->get_sset_count = dsa_conduit_get_sset_count; 243 + ops->get_ethtool_stats = dsa_conduit_get_ethtool_stats; 244 + ops->get_strings = dsa_conduit_get_strings; 245 + ops->get_ethtool_phy_stats = dsa_conduit_get_ethtool_phy_stats; 246 246 247 247 dev->ethtool_ops = ops; 248 248 249 249 return 0; 250 250 } 251 251 252 - static void dsa_master_ethtool_teardown(struct net_device *dev) 252 + static void 
dsa_conduit_ethtool_teardown(struct net_device *dev) 253 253 { 254 254 struct dsa_port *cpu_dp = dev->dsa_ptr; 255 255 ··· 260 260 cpu_dp->orig_ethtool_ops = NULL; 261 261 } 262 262 263 - /* Keep the master always promiscuous if the tagging protocol requires that 263 + /* Keep the conduit always promiscuous if the tagging protocol requires that 264 264 * (garbles MAC DA) or if it doesn't support unicast filtering, case in which 265 265 * it would revert to promiscuous mode as soon as we call dev_uc_add() on it 266 266 * anyway. 267 267 */ 268 - static void dsa_master_set_promiscuity(struct net_device *dev, int inc) 268 + static void dsa_conduit_set_promiscuity(struct net_device *dev, int inc) 269 269 { 270 270 const struct dsa_device_ops *ops = dev->dsa_ptr->tag_ops; 271 271 272 - if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_master) 272 + if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_conduit) 273 273 return; 274 274 275 275 ASSERT_RTNL(); ··· 336 336 } 337 337 static DEVICE_ATTR_RW(tagging); 338 338 339 - static struct attribute *dsa_slave_attrs[] = { 339 + static struct attribute *dsa_user_attrs[] = { 340 340 &dev_attr_tagging.attr, 341 341 NULL 342 342 }; 343 343 344 344 static const struct attribute_group dsa_group = { 345 345 .name = "dsa", 346 - .attrs = dsa_slave_attrs, 346 + .attrs = dsa_user_attrs, 347 347 }; 348 348 349 - static void dsa_master_reset_mtu(struct net_device *dev) 349 + static void dsa_conduit_reset_mtu(struct net_device *dev) 350 350 { 351 351 int err; 352 352 ··· 356 356 "Unable to reset MTU to exclude DSA overheads\n"); 357 357 } 358 358 359 - int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp) 359 + int dsa_conduit_setup(struct net_device *dev, struct dsa_port *cpu_dp) 360 360 { 361 361 const struct dsa_device_ops *tag_ops = cpu_dp->tag_ops; 362 362 struct dsa_switch *ds = cpu_dp->ds; ··· 365 365 366 366 mtu = ETH_DATA_LEN + dsa_tag_protocol_overhead(tag_ops); 367 367 368 - /* The DSA master 
must use SET_NETDEV_DEV for this to work. */ 368 + /* The DSA conduit must use SET_NETDEV_DEV for this to work. */ 369 369 if (!netif_is_lag_master(dev)) { 370 370 consumer_link = device_link_add(ds->dev, dev->dev.parent, 371 371 DL_FLAG_AUTOREMOVE_CONSUMER); ··· 376 376 } 377 377 378 378 /* The switch driver may not implement ->port_change_mtu(), case in 379 - * which dsa_slave_change_mtu() will not update the master MTU either, 379 + * which dsa_user_change_mtu() will not update the conduit MTU either, 380 380 * so we need to do that here. 381 381 */ 382 382 ret = dev_set_mtu(dev, mtu); ··· 392 392 393 393 dev->dsa_ptr = cpu_dp; 394 394 395 - dsa_master_set_promiscuity(dev, 1); 395 + dsa_conduit_set_promiscuity(dev, 1); 396 396 397 - ret = dsa_master_ethtool_setup(dev); 397 + ret = dsa_conduit_ethtool_setup(dev); 398 398 if (ret) 399 399 goto out_err_reset_promisc; 400 400 ··· 405 405 return ret; 406 406 407 407 out_err_ethtool_teardown: 408 - dsa_master_ethtool_teardown(dev); 408 + dsa_conduit_ethtool_teardown(dev); 409 409 out_err_reset_promisc: 410 - dsa_master_set_promiscuity(dev, -1); 410 + dsa_conduit_set_promiscuity(dev, -1); 411 411 return ret; 412 412 } 413 413 414 - void dsa_master_teardown(struct net_device *dev) 414 + void dsa_conduit_teardown(struct net_device *dev) 415 415 { 416 416 sysfs_remove_group(&dev->dev.kobj, &dsa_group); 417 - dsa_master_ethtool_teardown(dev); 418 - dsa_master_reset_mtu(dev); 419 - dsa_master_set_promiscuity(dev, -1); 417 + dsa_conduit_ethtool_teardown(dev); 418 + dsa_conduit_reset_mtu(dev); 419 + dsa_conduit_set_promiscuity(dev, -1); 420 420 421 421 dev->dsa_ptr = NULL; 422 422 ··· 427 427 wmb(); 428 428 } 429 429 430 - int dsa_master_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp, 431 - struct netdev_lag_upper_info *uinfo, 432 - struct netlink_ext_ack *extack) 430 + int dsa_conduit_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp, 431 + struct netdev_lag_upper_info *uinfo, 432 + struct 
netlink_ext_ack *extack) 433 433 { 434 - bool master_setup = false; 434 + bool conduit_setup = false; 435 435 int err; 436 436 437 437 if (!netdev_uses_dsa(lag_dev)) { 438 - err = dsa_master_setup(lag_dev, cpu_dp); 438 + err = dsa_conduit_setup(lag_dev, cpu_dp); 439 439 if (err) 440 440 return err; 441 441 442 - master_setup = true; 442 + conduit_setup = true; 443 443 } 444 444 445 445 err = dsa_port_lag_join(cpu_dp, lag_dev, uinfo, extack); 446 446 if (err) { 447 447 NL_SET_ERR_MSG_WEAK_MOD(extack, "CPU port failed to join LAG"); 448 - goto out_master_teardown; 448 + goto out_conduit_teardown; 449 449 } 450 450 451 451 return 0; 452 452 453 - out_master_teardown: 454 - if (master_setup) 455 - dsa_master_teardown(lag_dev); 453 + out_conduit_teardown: 454 + if (conduit_setup) 455 + dsa_conduit_teardown(lag_dev); 456 456 return err; 457 457 } 458 458 459 - /* Tear down a master if there isn't any other user port on it, 459 + /* Tear down a conduit if there isn't any other user port on it, 460 460 * optionally also destroying LAG information. 461 461 */ 462 - void dsa_master_lag_teardown(struct net_device *lag_dev, 463 - struct dsa_port *cpu_dp) 462 + void dsa_conduit_lag_teardown(struct net_device *lag_dev, 463 + struct dsa_port *cpu_dp) 464 464 { 465 465 struct net_device *upper; 466 466 struct list_head *iter; ··· 468 468 dsa_port_lag_leave(cpu_dp, lag_dev); 469 469 470 470 netdev_for_each_upper_dev_rcu(lag_dev, upper, iter) 471 - if (dsa_slave_dev_check(upper)) 471 + if (dsa_user_dev_check(upper)) 472 472 return; 473 473 474 - dsa_master_teardown(lag_dev); 474 + dsa_conduit_teardown(lag_dev); 475 475 }
-22
net/dsa/master.h
···
1 - /* SPDX-License-Identifier: GPL-2.0-or-later */
2 - 
3 - #ifndef __DSA_MASTER_H
4 - #define __DSA_MASTER_H
5 - 
6 - struct dsa_port;
7 - struct net_device;
8 - struct netdev_lag_upper_info;
9 - struct netlink_ext_ack;
10 - 
11 - int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp);
12 - void dsa_master_teardown(struct net_device *dev);
13 - int dsa_master_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
14 - struct netdev_lag_upper_info *uinfo,
15 - struct netlink_ext_ack *extack);
16 - void dsa_master_lag_teardown(struct net_device *lag_dev,
17 - struct dsa_port *cpu_dp);
18 - int __dsa_master_hwtstamp_validate(struct net_device *dev,
19 - const struct kernel_hwtstamp_config *config,
20 - struct netlink_ext_ack *extack);
21 - 
22 - #endif
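The declarations removed with net/dsa/master.h reappear under the new names in net/dsa/conduit.h (included as "conduit.h" in conduit.c above). The commit's mapping is purely mechanical; a hypothetical sketch of it, not part of the commit itself:

```python
# Illustrative helper (not from the commit) showing the mechanical rename
# applied across the subsystem: "master" -> "conduit", "slave" -> "user".
# It applies to lowercase identifiers only; the IFLA_DSA_MASTER uAPI
# attribute in net/dsa/netlink.c is deliberately left untouched.
def rename(symbol: str) -> str:
    return symbol.replace("master", "conduit").replace("slave", "user")

print(rename("dsa_master_setup"))   # dsa_conduit_setup
print(rename("dsa_slave_to_port"))  # dsa_user_to_port
```

The one exception visible in this diff is the IFLA_DSA_MASTER netlink attribute, which keeps its name because it is part of the user-visible uAPI.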
+7 -7
net/dsa/netlink.c
···
5 5 #include <net/rtnetlink.h>
6 6 
7 7 #include "netlink.h"
8 - #include "slave.h"
8 + #include "user.h"
9 9 
10 10 static const struct nla_policy dsa_policy[IFLA_DSA_MAX + 1] = {
11 11 [IFLA_DSA_MASTER] = { .type = NLA_U32 },
···
22 22 
23 23 if (data[IFLA_DSA_MASTER]) {
24 24 u32 ifindex = nla_get_u32(data[IFLA_DSA_MASTER]);
25 - struct net_device *master;
25 + struct net_device *conduit;
26 26 
27 - master = __dev_get_by_index(dev_net(dev), ifindex);
28 - if (!master)
27 + conduit = __dev_get_by_index(dev_net(dev), ifindex);
28 + if (!conduit)
29 29 return -EINVAL;
30 30 
31 - err = dsa_slave_change_master(dev, master, extack);
31 + err = dsa_user_change_conduit(dev, conduit, extack);
32 32 if (err)
33 33 return err;
34 34 }
···
44 44 
45 45 static int dsa_fill_info(struct sk_buff *skb, const struct net_device *dev)
46 46 {
47 - struct net_device *master = dsa_slave_to_master(dev);
47 + struct net_device *conduit = dsa_user_to_conduit(dev);
48 48 
49 - if (nla_put_u32(skb, IFLA_DSA_MASTER, master->ifindex))
49 + if (nla_put_u32(skb, IFLA_DSA_MASTER, conduit->ifindex))
50 50 return -EMSGSIZE;
51 51 
52 52 return 0;
+62 -62
net/dsa/port.c
··· 14 14 15 15 #include "dsa.h" 16 16 #include "port.h" 17 - #include "slave.h" 18 17 #include "switch.h" 19 18 #include "tag_8021q.h" 19 + #include "user.h" 20 20 21 21 /** 22 22 * dsa_port_notify - Notify the switching fabric of changes to a port ··· 289 289 } 290 290 291 291 /* If the bridge was vlan_filtering, the bridge core doesn't trigger an 292 - * event for changing vlan_filtering setting upon slave ports leaving 292 + * event for changing vlan_filtering setting upon user ports leaving 293 293 * it. That is a good thing, because that lets us handle it and also 294 294 * handle the case where the switch's vlan_filtering setting is global 295 295 * (not per port). When that happens, the correct moment to trigger the ··· 489 489 .dp = dp, 490 490 .extack = extack, 491 491 }; 492 - struct net_device *dev = dp->slave; 492 + struct net_device *dev = dp->user; 493 493 struct net_device *brport_dev; 494 494 int err; 495 495 ··· 514 514 dp->bridge->tx_fwd_offload = info.tx_fwd_offload; 515 515 516 516 err = switchdev_bridge_port_offload(brport_dev, dev, dp, 517 - &dsa_slave_switchdev_notifier, 518 - &dsa_slave_switchdev_blocking_notifier, 517 + &dsa_user_switchdev_notifier, 518 + &dsa_user_switchdev_blocking_notifier, 519 519 dp->bridge->tx_fwd_offload, extack); 520 520 if (err) 521 521 goto out_rollback_unbridge; ··· 528 528 529 529 out_rollback_unoffload: 530 530 switchdev_bridge_port_unoffload(brport_dev, dp, 531 - &dsa_slave_switchdev_notifier, 532 - &dsa_slave_switchdev_blocking_notifier); 531 + &dsa_user_switchdev_notifier, 532 + &dsa_user_switchdev_blocking_notifier); 533 533 dsa_flush_workqueue(); 534 534 out_rollback_unbridge: 535 535 dsa_broadcast(DSA_NOTIFIER_BRIDGE_LEAVE, &info); ··· 547 547 return; 548 548 549 549 switchdev_bridge_port_unoffload(brport_dev, dp, 550 - &dsa_slave_switchdev_notifier, 551 - &dsa_slave_switchdev_blocking_notifier); 550 + &dsa_user_switchdev_notifier, 551 + &dsa_user_switchdev_blocking_notifier); 552 552 553 553 
dsa_flush_workqueue(); 554 554 } ··· 741 741 */ 742 742 if (vlan_filtering && dsa_port_is_user(dp)) { 743 743 struct net_device *br = dsa_port_bridge_dev_get(dp); 744 - struct net_device *upper_dev, *slave = dp->slave; 744 + struct net_device *upper_dev, *user = dp->user; 745 745 struct list_head *iter; 746 746 747 - netdev_for_each_upper_dev_rcu(slave, upper_dev, iter) { 747 + netdev_for_each_upper_dev_rcu(user, upper_dev, iter) { 748 748 struct bridge_vlan_info br_info; 749 749 u16 vid; 750 750 ··· 803 803 if (!ds->ops->port_vlan_filtering) 804 804 return -EOPNOTSUPP; 805 805 806 - /* We are called from dsa_slave_switchdev_blocking_event(), 806 + /* We are called from dsa_user_switchdev_blocking_event(), 807 807 * which is not under rcu_read_lock(), unlike 808 - * dsa_slave_switchdev_event(). 808 + * dsa_user_switchdev_event(). 809 809 */ 810 810 rcu_read_lock(); 811 811 apply = dsa_port_can_apply_vlan_filtering(dp, vlan_filtering, extack); ··· 827 827 ds->vlan_filtering = vlan_filtering; 828 828 829 829 dsa_switch_for_each_user_port(other_dp, ds) { 830 - struct net_device *slave = other_dp->slave; 830 + struct net_device *user = other_dp->user; 831 831 832 832 /* We might be called in the unbind path, so not 833 - * all slave devices might still be registered. 833 + * all user devices might still be registered. 
834 834 */ 835 - if (!slave) 835 + if (!user) 836 836 continue; 837 837 838 - err = dsa_slave_manage_vlan_filtering(slave, 839 - vlan_filtering); 838 + err = dsa_user_manage_vlan_filtering(user, 839 + vlan_filtering); 840 840 if (err) 841 841 goto restore; 842 842 } 843 843 } else { 844 844 dp->vlan_filtering = vlan_filtering; 845 845 846 - err = dsa_slave_manage_vlan_filtering(dp->slave, 847 - vlan_filtering); 846 + err = dsa_user_manage_vlan_filtering(dp->user, 847 + vlan_filtering); 848 848 if (err) 849 849 goto restore; 850 850 } ··· 863 863 } 864 864 865 865 /* This enforces legacy behavior for switch drivers which assume they can't 866 - * receive VLAN configuration when enslaved to a bridge with vlan_filtering=0 866 + * receive VLAN configuration when joining a bridge with vlan_filtering=0 867 867 */ 868 868 bool dsa_port_skip_vlan_configuration(struct dsa_port *dp) 869 869 { ··· 1047 1047 int dsa_port_bridge_host_fdb_add(struct dsa_port *dp, 1048 1048 const unsigned char *addr, u16 vid) 1049 1049 { 1050 - struct net_device *master = dsa_port_to_master(dp); 1050 + struct net_device *conduit = dsa_port_to_conduit(dp); 1051 1051 struct dsa_db db = { 1052 1052 .type = DSA_DB_BRIDGE, 1053 1053 .bridge = *dp->bridge, ··· 1057 1057 if (!dp->ds->fdb_isolation) 1058 1058 db.bridge.num = 0; 1059 1059 1060 - /* Avoid a call to __dev_set_promiscuity() on the master, which 1060 + /* Avoid a call to __dev_set_promiscuity() on the conduit, which 1061 1061 * requires rtnl_lock(), since we can't guarantee that is held here, 1062 1062 * and we can't take it either. 
1063 1063 */ 1064 - if (master->priv_flags & IFF_UNICAST_FLT) { 1065 - err = dev_uc_add(master, addr); 1064 + if (conduit->priv_flags & IFF_UNICAST_FLT) { 1065 + err = dev_uc_add(conduit, addr); 1066 1066 if (err) 1067 1067 return err; 1068 1068 } ··· 1098 1098 int dsa_port_bridge_host_fdb_del(struct dsa_port *dp, 1099 1099 const unsigned char *addr, u16 vid) 1100 1100 { 1101 - struct net_device *master = dsa_port_to_master(dp); 1101 + struct net_device *conduit = dsa_port_to_conduit(dp); 1102 1102 struct dsa_db db = { 1103 1103 .type = DSA_DB_BRIDGE, 1104 1104 .bridge = *dp->bridge, ··· 1108 1108 if (!dp->ds->fdb_isolation) 1109 1109 db.bridge.num = 0; 1110 1110 1111 - if (master->priv_flags & IFF_UNICAST_FLT) { 1112 - err = dev_uc_del(master, addr); 1111 + if (conduit->priv_flags & IFF_UNICAST_FLT) { 1112 + err = dev_uc_del(conduit, addr); 1113 1113 if (err) 1114 1114 return err; 1115 1115 } ··· 1229 1229 int dsa_port_bridge_host_mdb_add(const struct dsa_port *dp, 1230 1230 const struct switchdev_obj_port_mdb *mdb) 1231 1231 { 1232 - struct net_device *master = dsa_port_to_master(dp); 1232 + struct net_device *conduit = dsa_port_to_conduit(dp); 1233 1233 struct dsa_db db = { 1234 1234 .type = DSA_DB_BRIDGE, 1235 1235 .bridge = *dp->bridge, ··· 1239 1239 if (!dp->ds->fdb_isolation) 1240 1240 db.bridge.num = 0; 1241 1241 1242 - err = dev_mc_add(master, mdb->addr); 1242 + err = dev_mc_add(conduit, mdb->addr); 1243 1243 if (err) 1244 1244 return err; 1245 1245 ··· 1273 1273 int dsa_port_bridge_host_mdb_del(const struct dsa_port *dp, 1274 1274 const struct switchdev_obj_port_mdb *mdb) 1275 1275 { 1276 - struct net_device *master = dsa_port_to_master(dp); 1276 + struct net_device *conduit = dsa_port_to_conduit(dp); 1277 1277 struct dsa_db db = { 1278 1278 .type = DSA_DB_BRIDGE, 1279 1279 .bridge = *dp->bridge, ··· 1283 1283 if (!dp->ds->fdb_isolation) 1284 1284 db.bridge.num = 0; 1285 1285 1286 - err = dev_mc_del(master, mdb->addr); 1286 + err = dev_mc_del(conduit, 
mdb->addr); 1287 1287 if (err) 1288 1288 return err; 1289 1289 ··· 1318 1318 const struct switchdev_obj_port_vlan *vlan, 1319 1319 struct netlink_ext_ack *extack) 1320 1320 { 1321 - struct net_device *master = dsa_port_to_master(dp); 1321 + struct net_device *conduit = dsa_port_to_conduit(dp); 1322 1322 struct dsa_notifier_vlan_info info = { 1323 1323 .dp = dp, 1324 1324 .vlan = vlan, ··· 1330 1330 if (err && err != -EOPNOTSUPP) 1331 1331 return err; 1332 1332 1333 - vlan_vid_add(master, htons(ETH_P_8021Q), vlan->vid); 1333 + vlan_vid_add(conduit, htons(ETH_P_8021Q), vlan->vid); 1334 1334 1335 1335 return err; 1336 1336 } ··· 1338 1338 int dsa_port_host_vlan_del(struct dsa_port *dp, 1339 1339 const struct switchdev_obj_port_vlan *vlan) 1340 1340 { 1341 - struct net_device *master = dsa_port_to_master(dp); 1341 + struct net_device *conduit = dsa_port_to_conduit(dp); 1342 1342 struct dsa_notifier_vlan_info info = { 1343 1343 .dp = dp, 1344 1344 .vlan = vlan, ··· 1349 1349 if (err && err != -EOPNOTSUPP) 1350 1350 return err; 1351 1351 1352 - vlan_vid_del(master, htons(ETH_P_8021Q), vlan->vid); 1352 + vlan_vid_del(conduit, htons(ETH_P_8021Q), vlan->vid); 1353 1353 1354 1354 return err; 1355 1355 } ··· 1398 1398 return ds->ops->port_mrp_del_ring_role(ds, dp->index, mrp); 1399 1399 } 1400 1400 1401 - static int dsa_port_assign_master(struct dsa_port *dp, 1402 - struct net_device *master, 1403 - struct netlink_ext_ack *extack, 1404 - bool fail_on_err) 1401 + static int dsa_port_assign_conduit(struct dsa_port *dp, 1402 + struct net_device *conduit, 1403 + struct netlink_ext_ack *extack, 1404 + bool fail_on_err) 1405 1405 { 1406 1406 struct dsa_switch *ds = dp->ds; 1407 1407 int port = dp->index, err; 1408 1408 1409 - err = ds->ops->port_change_master(ds, port, master, extack); 1409 + err = ds->ops->port_change_conduit(ds, port, conduit, extack); 1410 1410 if (err && !fail_on_err) 1411 - dev_err(ds->dev, "port %d failed to assign master %s: %pe\n", 1412 - port, 
master->name, ERR_PTR(err)); 1411 + dev_err(ds->dev, "port %d failed to assign conduit %s: %pe\n", 1412 + port, conduit->name, ERR_PTR(err)); 1413 1413 1414 1414 if (err && fail_on_err) 1415 1415 return err; 1416 1416 1417 - dp->cpu_dp = master->dsa_ptr; 1418 - dp->cpu_port_in_lag = netif_is_lag_master(master); 1417 + dp->cpu_dp = conduit->dsa_ptr; 1418 + dp->cpu_port_in_lag = netif_is_lag_master(conduit); 1419 1419 1420 1420 return 0; 1421 1421 } ··· 1428 1428 * the old CPU port before changing it, and restore it on errors during the 1429 1429 * bringup of the new one. 1430 1430 */ 1431 - int dsa_port_change_master(struct dsa_port *dp, struct net_device *master, 1432 - struct netlink_ext_ack *extack) 1431 + int dsa_port_change_conduit(struct dsa_port *dp, struct net_device *conduit, 1432 + struct netlink_ext_ack *extack) 1433 1433 { 1434 1434 struct net_device *bridge_dev = dsa_port_bridge_dev_get(dp); 1435 - struct net_device *old_master = dsa_port_to_master(dp); 1436 - struct net_device *dev = dp->slave; 1435 + struct net_device *old_conduit = dsa_port_to_conduit(dp); 1436 + struct net_device *dev = dp->user; 1437 1437 struct dsa_switch *ds = dp->ds; 1438 1438 bool vlan_filtering; 1439 1439 int err, tmp; ··· 1454 1454 */ 1455 1455 vlan_filtering = dsa_port_is_vlan_filtering(dp); 1456 1456 if (vlan_filtering) { 1457 - err = dsa_slave_manage_vlan_filtering(dev, false); 1457 + err = dsa_user_manage_vlan_filtering(dev, false); 1458 1458 if (err) { 1459 1459 NL_SET_ERR_MSG_MOD(extack, 1460 1460 "Failed to remove standalone VLANs"); ··· 1465 1465 /* Standalone addresses, and addresses of upper interfaces like 1466 1466 * VLAN, LAG, HSR need to be migrated. 
1467 1467 */ 1468 - dsa_slave_unsync_ha(dev); 1468 + dsa_user_unsync_ha(dev); 1469 1469 1470 - err = dsa_port_assign_master(dp, master, extack, true); 1470 + err = dsa_port_assign_conduit(dp, conduit, extack, true); 1471 1471 if (err) 1472 1472 goto rewind_old_addrs; 1473 1473 1474 - dsa_slave_sync_ha(dev); 1474 + dsa_user_sync_ha(dev); 1475 1475 1476 1476 if (vlan_filtering) { 1477 - err = dsa_slave_manage_vlan_filtering(dev, true); 1477 + err = dsa_user_manage_vlan_filtering(dev, true); 1478 1478 if (err) { 1479 1479 NL_SET_ERR_MSG_MOD(extack, 1480 1480 "Failed to restore standalone VLANs"); ··· 1495 1495 1496 1496 rewind_new_vlan: 1497 1497 if (vlan_filtering) 1498 - dsa_slave_manage_vlan_filtering(dev, false); 1498 + dsa_user_manage_vlan_filtering(dev, false); 1499 1499 1500 1500 rewind_new_addrs: 1501 - dsa_slave_unsync_ha(dev); 1501 + dsa_user_unsync_ha(dev); 1502 1502 1503 - dsa_port_assign_master(dp, old_master, NULL, false); 1503 + dsa_port_assign_conduit(dp, old_conduit, NULL, false); 1504 1504 1505 1505 /* Restore the objects on the old CPU port */ 1506 1506 rewind_old_addrs: 1507 - dsa_slave_sync_ha(dev); 1507 + dsa_user_sync_ha(dev); 1508 1508 1509 1509 if (vlan_filtering) { 1510 - tmp = dsa_slave_manage_vlan_filtering(dev, true); 1510 + tmp = dsa_user_manage_vlan_filtering(dev, true); 1511 1511 if (tmp) { 1512 1512 dev_err(ds->dev, 1513 1513 "port %d failed to restore standalone VLANs: %pe\n", ··· 1620 1620 struct dsa_switch *ds = dp->ds; 1621 1621 1622 1622 if (dsa_port_is_user(dp)) 1623 - phydev = dp->slave->phydev; 1623 + phydev = dp->user->phydev; 1624 1624 1625 1625 if (!ds->ops->phylink_mac_link_down) { 1626 1626 if (ds->ops->adjust_link && phydev) ··· 1808 1808 * their type. 
1809 1809 * 1810 1810 * User ports with no phy-handle or fixed-link are expected to connect to an 1811 - * internal PHY located on the ds->slave_mii_bus at an MDIO address equal to 1811 + * internal PHY located on the ds->user_mii_bus at an MDIO address equal to 1812 1812 * the port number. This description is still actively supported. 1813 1813 * 1814 1814 * Shared (CPU and DSA) ports with no phy-handle or fixed-link are expected to ··· 1829 1829 * a fixed-link, a phy-handle, or a managed = "in-band-status" property. 1830 1830 * It becomes the responsibility of the driver to ensure that these ports 1831 1831 * operate at the maximum speed (whatever this means) and will interoperate 1832 - * with the DSA master or other cascade port, since phylink methods will not be 1832 + * with the DSA conduit or other cascade port, since phylink methods will not be 1833 1833 * invoked for them. 1834 1834 * 1835 1835 * If you are considering expanding this table for newly introduced switches,
+2 -2
net/dsa/port.h
··· 109 109 int dsa_port_tag_8021q_vlan_add(struct dsa_port *dp, u16 vid, bool broadcast); 110 110 void dsa_port_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid, bool broadcast); 111 111 void dsa_port_set_host_flood(struct dsa_port *dp, bool uc, bool mc); 112 - int dsa_port_change_master(struct dsa_port *dp, struct net_device *master, 113 - struct netlink_ext_ack *extack); 112 + int dsa_port_change_conduit(struct dsa_port *dp, struct net_device *conduit, 113 + struct netlink_ext_ack *extack); 114 114 115 115 #endif
+730 -730
net/dsa/slave.c → net/dsa/user.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 /* 3 - * net/dsa/slave.c - Slave device handling 3 + * net/dsa/user.c - user device handling 4 4 * Copyright (c) 2008-2009 Marvell Semiconductor 5 5 */ 6 6 ··· 23 23 #include <linux/netpoll.h> 24 24 #include <linux/string.h> 25 25 26 + #include "conduit.h" 26 27 #include "dsa.h" 27 - #include "port.h" 28 - #include "master.h" 29 28 #include "netlink.h" 30 - #include "slave.h" 29 + #include "port.h" 31 30 #include "switch.h" 32 31 #include "tag.h" 32 + #include "user.h" 33 33 34 34 struct dsa_switchdev_event_work { 35 35 struct net_device *dev; ··· 79 79 !ds->needs_standalone_vlan_filtering; 80 80 } 81 81 82 - static void dsa_slave_standalone_event_work(struct work_struct *work) 82 + static void dsa_user_standalone_event_work(struct work_struct *work) 83 83 { 84 84 struct dsa_standalone_event_work *standalone_work = 85 85 container_of(work, struct dsa_standalone_event_work, work); 86 86 const unsigned char *addr = standalone_work->addr; 87 87 struct net_device *dev = standalone_work->dev; 88 - struct dsa_port *dp = dsa_slave_to_port(dev); 88 + struct dsa_port *dp = dsa_user_to_port(dev); 89 89 struct switchdev_obj_port_mdb mdb; 90 90 struct dsa_switch *ds = dp->ds; 91 91 u16 vid = standalone_work->vid; ··· 140 140 kfree(standalone_work); 141 141 } 142 142 143 - static int dsa_slave_schedule_standalone_work(struct net_device *dev, 144 - enum dsa_standalone_event event, 145 - const unsigned char *addr, 146 - u16 vid) 143 + static int dsa_user_schedule_standalone_work(struct net_device *dev, 144 + enum dsa_standalone_event event, 145 + const unsigned char *addr, 146 + u16 vid) 147 147 { 148 148 struct dsa_standalone_event_work *standalone_work; 149 149 ··· 151 151 if (!standalone_work) 152 152 return -ENOMEM; 153 153 154 - INIT_WORK(&standalone_work->work, dsa_slave_standalone_event_work); 154 + INIT_WORK(&standalone_work->work, dsa_user_standalone_event_work); 155 155 standalone_work->event = event; 156 156 
standalone_work->dev = dev; 157 157 ··· 163 163 return 0; 164 164 } 165 165 166 - static int dsa_slave_host_vlan_rx_filtering(void *arg, int vid) 166 + static int dsa_user_host_vlan_rx_filtering(void *arg, int vid) 167 167 { 168 168 struct dsa_host_vlan_rx_filtering_ctx *ctx = arg; 169 169 170 - return dsa_slave_schedule_standalone_work(ctx->dev, ctx->event, 170 + return dsa_user_schedule_standalone_work(ctx->dev, ctx->event, 171 171 ctx->addr, vid); 172 172 } 173 173 174 - static int dsa_slave_vlan_for_each(struct net_device *dev, 175 - int (*cb)(void *arg, int vid), void *arg) 174 + static int dsa_user_vlan_for_each(struct net_device *dev, 175 + int (*cb)(void *arg, int vid), void *arg) 176 176 { 177 - struct dsa_port *dp = dsa_slave_to_port(dev); 177 + struct dsa_port *dp = dsa_user_to_port(dev); 178 178 struct dsa_vlan *v; 179 179 int err; 180 180 ··· 193 193 return 0; 194 194 } 195 195 196 - static int dsa_slave_sync_uc(struct net_device *dev, 197 - const unsigned char *addr) 196 + static int dsa_user_sync_uc(struct net_device *dev, 197 + const unsigned char *addr) 198 198 { 199 - struct net_device *master = dsa_slave_to_master(dev); 200 - struct dsa_port *dp = dsa_slave_to_port(dev); 199 + struct net_device *conduit = dsa_user_to_conduit(dev); 200 + struct dsa_port *dp = dsa_user_to_port(dev); 201 201 struct dsa_host_vlan_rx_filtering_ctx ctx = { 202 202 .dev = dev, 203 203 .addr = addr, 204 204 .event = DSA_UC_ADD, 205 205 }; 206 206 207 - dev_uc_add(master, addr); 207 + dev_uc_add(conduit, addr); 208 208 209 209 if (!dsa_switch_supports_uc_filtering(dp->ds)) 210 210 return 0; 211 211 212 - return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, 212 + return dsa_user_vlan_for_each(dev, dsa_user_host_vlan_rx_filtering, 213 213 &ctx); 214 214 } 215 215 216 - static int dsa_slave_unsync_uc(struct net_device *dev, 217 - const unsigned char *addr) 216 + static int dsa_user_unsync_uc(struct net_device *dev, 217 + const unsigned char *addr) 218 218 { 
219 - struct net_device *master = dsa_slave_to_master(dev); 220 - struct dsa_port *dp = dsa_slave_to_port(dev); 219 + struct net_device *conduit = dsa_user_to_conduit(dev); 220 + struct dsa_port *dp = dsa_user_to_port(dev); 221 221 struct dsa_host_vlan_rx_filtering_ctx ctx = { 222 222 .dev = dev, 223 223 .addr = addr, 224 224 .event = DSA_UC_DEL, 225 225 }; 226 226 227 - dev_uc_del(master, addr); 227 + dev_uc_del(conduit, addr); 228 228 229 229 if (!dsa_switch_supports_uc_filtering(dp->ds)) 230 230 return 0; 231 231 232 - return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, 232 + return dsa_user_vlan_for_each(dev, dsa_user_host_vlan_rx_filtering, 233 233 &ctx); 234 234 } 235 235 236 - static int dsa_slave_sync_mc(struct net_device *dev, 237 - const unsigned char *addr) 236 + static int dsa_user_sync_mc(struct net_device *dev, 237 + const unsigned char *addr) 238 238 { 239 - struct net_device *master = dsa_slave_to_master(dev); 240 - struct dsa_port *dp = dsa_slave_to_port(dev); 239 + struct net_device *conduit = dsa_user_to_conduit(dev); 240 + struct dsa_port *dp = dsa_user_to_port(dev); 241 241 struct dsa_host_vlan_rx_filtering_ctx ctx = { 242 242 .dev = dev, 243 243 .addr = addr, 244 244 .event = DSA_MC_ADD, 245 245 }; 246 246 247 - dev_mc_add(master, addr); 247 + dev_mc_add(conduit, addr); 248 248 249 249 if (!dsa_switch_supports_mc_filtering(dp->ds)) 250 250 return 0; 251 251 252 - return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, 252 + return dsa_user_vlan_for_each(dev, dsa_user_host_vlan_rx_filtering, 253 253 &ctx); 254 254 } 255 255 256 - static int dsa_slave_unsync_mc(struct net_device *dev, 257 - const unsigned char *addr) 256 + static int dsa_user_unsync_mc(struct net_device *dev, 257 + const unsigned char *addr) 258 258 { 259 - struct net_device *master = dsa_slave_to_master(dev); 260 - struct dsa_port *dp = dsa_slave_to_port(dev); 259 + struct net_device *conduit = dsa_user_to_conduit(dev); 260 + struct dsa_port *dp 
= dsa_user_to_port(dev); 261 261 struct dsa_host_vlan_rx_filtering_ctx ctx = { 262 262 .dev = dev, 263 263 .addr = addr, 264 264 .event = DSA_MC_DEL, 265 265 }; 266 266 267 - dev_mc_del(master, addr); 267 + dev_mc_del(conduit, addr); 268 268 269 269 if (!dsa_switch_supports_mc_filtering(dp->ds)) 270 270 return 0; 271 271 272 - return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering, 272 + return dsa_user_vlan_for_each(dev, dsa_user_host_vlan_rx_filtering, 273 273 &ctx); 274 274 } 275 275 276 - void dsa_slave_sync_ha(struct net_device *dev) 276 + void dsa_user_sync_ha(struct net_device *dev) 277 277 { 278 - struct dsa_port *dp = dsa_slave_to_port(dev); 278 + struct dsa_port *dp = dsa_user_to_port(dev); 279 279 struct dsa_switch *ds = dp->ds; 280 280 struct netdev_hw_addr *ha; 281 281 282 282 netif_addr_lock_bh(dev); 283 283 284 284 netdev_for_each_synced_mc_addr(ha, dev) 285 - dsa_slave_sync_mc(dev, ha->addr); 285 + dsa_user_sync_mc(dev, ha->addr); 286 286 287 287 netdev_for_each_synced_uc_addr(ha, dev) 288 - dsa_slave_sync_uc(dev, ha->addr); 288 + dsa_user_sync_uc(dev, ha->addr); 289 289 290 290 netif_addr_unlock_bh(dev); 291 291 ··· 294 294 dsa_flush_workqueue(); 295 295 } 296 296 297 - void dsa_slave_unsync_ha(struct net_device *dev) 297 + void dsa_user_unsync_ha(struct net_device *dev) 298 298 { 299 - struct dsa_port *dp = dsa_slave_to_port(dev); 299 + struct dsa_port *dp = dsa_user_to_port(dev); 300 300 struct dsa_switch *ds = dp->ds; 301 301 struct netdev_hw_addr *ha; 302 302 303 303 netif_addr_lock_bh(dev); 304 304 305 305 netdev_for_each_synced_uc_addr(ha, dev) 306 - dsa_slave_unsync_uc(dev, ha->addr); 306 + dsa_user_unsync_uc(dev, ha->addr); 307 307 308 308 netdev_for_each_synced_mc_addr(ha, dev) 309 - dsa_slave_unsync_mc(dev, ha->addr); 309 + dsa_user_unsync_mc(dev, ha->addr); 310 310 311 311 netif_addr_unlock_bh(dev); 312 312 ··· 315 315 dsa_flush_workqueue(); 316 316 } 317 317 318 - /* slave mii_bus handling 
***************************************************/ 319 - static int dsa_slave_phy_read(struct mii_bus *bus, int addr, int reg) 318 + /* user mii_bus handling ***************************************************/ 319 + static int dsa_user_phy_read(struct mii_bus *bus, int addr, int reg) 320 320 { 321 321 struct dsa_switch *ds = bus->priv; 322 322 ··· 326 326 return 0xffff; 327 327 } 328 328 329 - static int dsa_slave_phy_write(struct mii_bus *bus, int addr, int reg, u16 val) 329 + static int dsa_user_phy_write(struct mii_bus *bus, int addr, int reg, u16 val) 330 330 { 331 331 struct dsa_switch *ds = bus->priv; 332 332 ··· 336 336 return 0; 337 337 } 338 338 339 - void dsa_slave_mii_bus_init(struct dsa_switch *ds) 339 + void dsa_user_mii_bus_init(struct dsa_switch *ds) 340 340 { 341 - ds->slave_mii_bus->priv = (void *)ds; 342 - ds->slave_mii_bus->name = "dsa slave smi"; 343 - ds->slave_mii_bus->read = dsa_slave_phy_read; 344 - ds->slave_mii_bus->write = dsa_slave_phy_write; 345 - snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, "dsa-%d.%d", 341 + ds->user_mii_bus->priv = (void *)ds; 342 + ds->user_mii_bus->name = "dsa user smi"; 343 + ds->user_mii_bus->read = dsa_user_phy_read; 344 + ds->user_mii_bus->write = dsa_user_phy_write; 345 + snprintf(ds->user_mii_bus->id, MII_BUS_ID_SIZE, "dsa-%d.%d", 346 346 ds->dst->index, ds->index); 347 - ds->slave_mii_bus->parent = ds->dev; 348 - ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask; 347 + ds->user_mii_bus->parent = ds->dev; 348 + ds->user_mii_bus->phy_mask = ~ds->phys_mii_mask; 349 349 } 350 350 351 351 352 - /* slave device handling ****************************************************/ 353 - static int dsa_slave_get_iflink(const struct net_device *dev) 352 + /* user device handling ****************************************************/ 353 + static int dsa_user_get_iflink(const struct net_device *dev) 354 354 { 355 - return dsa_slave_to_master(dev)->ifindex; 355 + return dsa_user_to_conduit(dev)->ifindex; 356 356 } 357 
357 358 - static int dsa_slave_open(struct net_device *dev) 358 + static int dsa_user_open(struct net_device *dev) 359 359 { 360 - struct net_device *master = dsa_slave_to_master(dev); 361 - struct dsa_port *dp = dsa_slave_to_port(dev); 360 + struct net_device *conduit = dsa_user_to_conduit(dev); 361 + struct dsa_port *dp = dsa_user_to_port(dev); 362 362 struct dsa_switch *ds = dp->ds; 363 363 int err; 364 364 365 - err = dev_open(master, NULL); 365 + err = dev_open(conduit, NULL); 366 366 if (err < 0) { 367 - netdev_err(dev, "failed to open master %s\n", master->name); 367 + netdev_err(dev, "failed to open conduit %s\n", conduit->name); 368 368 goto out; 369 369 } 370 370 ··· 374 374 goto out; 375 375 } 376 376 377 - if (!ether_addr_equal(dev->dev_addr, master->dev_addr)) { 378 - err = dev_uc_add(master, dev->dev_addr); 377 + if (!ether_addr_equal(dev->dev_addr, conduit->dev_addr)) { 378 + err = dev_uc_add(conduit, dev->dev_addr); 379 379 if (err < 0) 380 380 goto del_host_addr; 381 381 } ··· 387 387 return 0; 388 388 389 389 del_unicast: 390 - if (!ether_addr_equal(dev->dev_addr, master->dev_addr)) 391 - dev_uc_del(master, dev->dev_addr); 390 + if (!ether_addr_equal(dev->dev_addr, conduit->dev_addr)) 391 + dev_uc_del(conduit, dev->dev_addr); 392 392 del_host_addr: 393 393 if (dsa_switch_supports_uc_filtering(ds)) 394 394 dsa_port_standalone_host_fdb_del(dp, dev->dev_addr, 0); ··· 396 396 return err; 397 397 } 398 398 399 - static int dsa_slave_close(struct net_device *dev) 399 + static int dsa_user_close(struct net_device *dev) 400 400 { 401 - struct net_device *master = dsa_slave_to_master(dev); 402 - struct dsa_port *dp = dsa_slave_to_port(dev); 401 + struct net_device *conduit = dsa_user_to_conduit(dev); 402 + struct dsa_port *dp = dsa_user_to_port(dev); 403 403 struct dsa_switch *ds = dp->ds; 404 404 405 405 dsa_port_disable_rt(dp); 406 406 407 - if (!ether_addr_equal(dev->dev_addr, master->dev_addr)) 408 - dev_uc_del(master, dev->dev_addr); 407 + if 
(!ether_addr_equal(dev->dev_addr, conduit->dev_addr)) 408 + dev_uc_del(conduit, dev->dev_addr); 409 409 410 410 if (dsa_switch_supports_uc_filtering(ds)) 411 411 dsa_port_standalone_host_fdb_del(dp, dev->dev_addr, 0); ··· 413 413 return 0; 414 414 } 415 415 416 - static void dsa_slave_manage_host_flood(struct net_device *dev) 416 + static void dsa_user_manage_host_flood(struct net_device *dev) 417 417 { 418 418 bool mc = dev->flags & (IFF_PROMISC | IFF_ALLMULTI); 419 - struct dsa_port *dp = dsa_slave_to_port(dev); 419 + struct dsa_port *dp = dsa_user_to_port(dev); 420 420 bool uc = dev->flags & IFF_PROMISC; 421 421 422 422 dsa_port_set_host_flood(dp, uc, mc); 423 423 } 424 424 425 - static void dsa_slave_change_rx_flags(struct net_device *dev, int change) 425 + static void dsa_user_change_rx_flags(struct net_device *dev, int change) 426 426 { 427 - struct net_device *master = dsa_slave_to_master(dev); 428 - struct dsa_port *dp = dsa_slave_to_port(dev); 427 + struct net_device *conduit = dsa_user_to_conduit(dev); 428 + struct dsa_port *dp = dsa_user_to_port(dev); 429 429 struct dsa_switch *ds = dp->ds; 430 430 431 431 if (change & IFF_ALLMULTI) 432 - dev_set_allmulti(master, 432 + dev_set_allmulti(conduit, 433 433 dev->flags & IFF_ALLMULTI ? 1 : -1); 434 434 if (change & IFF_PROMISC) 435 - dev_set_promiscuity(master, 435 + dev_set_promiscuity(conduit, 436 436 dev->flags & IFF_PROMISC ? 
1 : -1); 437 437 438 438 if (dsa_switch_supports_uc_filtering(ds) && 439 439 dsa_switch_supports_mc_filtering(ds)) 440 - dsa_slave_manage_host_flood(dev); 440 + dsa_user_manage_host_flood(dev); 441 441 } 442 442 443 - static void dsa_slave_set_rx_mode(struct net_device *dev) 443 + static void dsa_user_set_rx_mode(struct net_device *dev) 444 444 { 445 - __dev_mc_sync(dev, dsa_slave_sync_mc, dsa_slave_unsync_mc); 446 - __dev_uc_sync(dev, dsa_slave_sync_uc, dsa_slave_unsync_uc); 445 + __dev_mc_sync(dev, dsa_user_sync_mc, dsa_user_unsync_mc); 446 + __dev_uc_sync(dev, dsa_user_sync_uc, dsa_user_unsync_uc); 447 447 } 448 448 449 - static int dsa_slave_set_mac_address(struct net_device *dev, void *a) 449 + static int dsa_user_set_mac_address(struct net_device *dev, void *a) 450 450 { 451 - struct net_device *master = dsa_slave_to_master(dev); 452 - struct dsa_port *dp = dsa_slave_to_port(dev); 451 + struct net_device *conduit = dsa_user_to_conduit(dev); 452 + struct dsa_port *dp = dsa_user_to_port(dev); 453 453 struct dsa_switch *ds = dp->ds; 454 454 struct sockaddr *addr = a; 455 455 int err; ··· 465 465 } 466 466 467 467 /* If the port is down, the address isn't synced yet to hardware or 468 - * to the DSA master, so there is nothing to change. 468 + * to the DSA conduit, so there is nothing to change. 
469 469 */ 470 470 if (!(dev->flags & IFF_UP)) 471 471 goto out_change_dev_addr; ··· 476 476 return err; 477 477 } 478 478 479 - if (!ether_addr_equal(addr->sa_data, master->dev_addr)) { 480 - err = dev_uc_add(master, addr->sa_data); 479 + if (!ether_addr_equal(addr->sa_data, conduit->dev_addr)) { 480 + err = dev_uc_add(conduit, addr->sa_data); 481 481 if (err < 0) 482 482 goto del_unicast; 483 483 } 484 484 485 - if (!ether_addr_equal(dev->dev_addr, master->dev_addr)) 486 - dev_uc_del(master, dev->dev_addr); 485 + if (!ether_addr_equal(dev->dev_addr, conduit->dev_addr)) 486 + dev_uc_del(conduit, dev->dev_addr); 487 487 488 488 if (dsa_switch_supports_uc_filtering(ds)) 489 489 dsa_port_standalone_host_fdb_del(dp, dev->dev_addr, 0); ··· 500 500 return err; 501 501 } 502 502 503 - struct dsa_slave_dump_ctx { 503 + struct dsa_user_dump_ctx { 504 504 struct net_device *dev; 505 505 struct sk_buff *skb; 506 506 struct netlink_callback *cb; ··· 508 508 }; 509 509 510 510 static int 511 - dsa_slave_port_fdb_do_dump(const unsigned char *addr, u16 vid, 512 - bool is_static, void *data) 511 + dsa_user_port_fdb_do_dump(const unsigned char *addr, u16 vid, 512 + bool is_static, void *data) 513 513 { 514 - struct dsa_slave_dump_ctx *dump = data; 514 + struct dsa_user_dump_ctx *dump = data; 515 515 u32 portid = NETLINK_CB(dump->cb->skb).portid; 516 516 u32 seq = dump->cb->nlh->nlmsg_seq; 517 517 struct nlmsghdr *nlh; ··· 552 552 } 553 553 554 554 static int 555 - dsa_slave_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb, 556 - struct net_device *dev, struct net_device *filter_dev, 557 - int *idx) 555 + dsa_user_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb, 556 + struct net_device *dev, struct net_device *filter_dev, 557 + int *idx) 558 558 { 559 - struct dsa_port *dp = dsa_slave_to_port(dev); 560 - struct dsa_slave_dump_ctx dump = { 559 + struct dsa_port *dp = dsa_user_to_port(dev); 560 + struct dsa_user_dump_ctx dump = { 561 561 .dev = dev, 562 562 .skb = 
skb, 563 563 .cb = cb, ··· 565 565 }; 566 566 int err; 567 567 568 - err = dsa_port_fdb_dump(dp, dsa_slave_port_fdb_do_dump, &dump); 568 + err = dsa_port_fdb_dump(dp, dsa_user_port_fdb_do_dump, &dump); 569 569 *idx = dump.idx; 570 570 571 571 return err; 572 572 } 573 573 574 - static int dsa_slave_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) 574 + static int dsa_user_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) 575 575 { 576 - struct dsa_slave_priv *p = netdev_priv(dev); 576 + struct dsa_user_priv *p = netdev_priv(dev); 577 577 struct dsa_switch *ds = p->dp->ds; 578 578 int port = p->dp->index; 579 579 ··· 592 592 return phylink_mii_ioctl(p->dp->pl, ifr, cmd); 593 593 } 594 594 595 - static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx, 596 - const struct switchdev_attr *attr, 597 - struct netlink_ext_ack *extack) 595 + static int dsa_user_port_attr_set(struct net_device *dev, const void *ctx, 596 + const struct switchdev_attr *attr, 597 + struct netlink_ext_ack *extack) 598 598 { 599 - struct dsa_port *dp = dsa_slave_to_port(dev); 599 + struct dsa_port *dp = dsa_user_to_port(dev); 600 600 int ret; 601 601 602 602 if (ctx && ctx != dp) ··· 663 663 664 664 /* Must be called under rcu_read_lock() */ 665 665 static int 666 - dsa_slave_vlan_check_for_8021q_uppers(struct net_device *slave, 667 - const struct switchdev_obj_port_vlan *vlan) 666 + dsa_user_vlan_check_for_8021q_uppers(struct net_device *user, 667 + const struct switchdev_obj_port_vlan *vlan) 668 668 { 669 669 struct net_device *upper_dev; 670 670 struct list_head *iter; 671 671 672 - netdev_for_each_upper_dev_rcu(slave, upper_dev, iter) { 672 + netdev_for_each_upper_dev_rcu(user, upper_dev, iter) { 673 673 u16 vid; 674 674 675 675 if (!is_vlan_dev(upper_dev)) ··· 683 683 return 0; 684 684 } 685 685 686 - static int dsa_slave_vlan_add(struct net_device *dev, 687 - const struct switchdev_obj *obj, 688 - struct netlink_ext_ack *extack) 686 + static int 
dsa_user_vlan_add(struct net_device *dev, 687 + const struct switchdev_obj *obj, 688 + struct netlink_ext_ack *extack) 689 689 { 690 - struct dsa_port *dp = dsa_slave_to_port(dev); 690 + struct dsa_port *dp = dsa_user_to_port(dev); 691 691 struct switchdev_obj_port_vlan *vlan; 692 692 int err; 693 693 ··· 703 703 */ 704 704 if (br_vlan_enabled(dsa_port_bridge_dev_get(dp))) { 705 705 rcu_read_lock(); 706 - err = dsa_slave_vlan_check_for_8021q_uppers(dev, vlan); 706 + err = dsa_user_vlan_check_for_8021q_uppers(dev, vlan); 707 707 rcu_read_unlock(); 708 708 if (err) { 709 709 NL_SET_ERR_MSG_MOD(extack, ··· 718 718 /* Offload a VLAN installed on the bridge or on a foreign interface by 719 719 * installing it as a VLAN towards the CPU port. 720 720 */ 721 - static int dsa_slave_host_vlan_add(struct net_device *dev, 722 - const struct switchdev_obj *obj, 723 - struct netlink_ext_ack *extack) 721 + static int dsa_user_host_vlan_add(struct net_device *dev, 722 + const struct switchdev_obj *obj, 723 + struct netlink_ext_ack *extack) 724 724 { 725 - struct dsa_port *dp = dsa_slave_to_port(dev); 725 + struct dsa_port *dp = dsa_user_to_port(dev); 726 726 struct switchdev_obj_port_vlan vlan; 727 727 728 728 /* Do nothing if this is a software bridge */ ··· 744 744 return dsa_port_host_vlan_add(dp, &vlan, extack); 745 745 } 746 746 747 - static int dsa_slave_port_obj_add(struct net_device *dev, const void *ctx, 748 - const struct switchdev_obj *obj, 749 - struct netlink_ext_ack *extack) 747 + static int dsa_user_port_obj_add(struct net_device *dev, const void *ctx, 748 + const struct switchdev_obj *obj, 749 + struct netlink_ext_ack *extack) 750 750 { 751 - struct dsa_port *dp = dsa_slave_to_port(dev); 751 + struct dsa_port *dp = dsa_user_to_port(dev); 752 752 int err; 753 753 754 754 if (ctx && ctx != dp) ··· 769 769 break; 770 770 case SWITCHDEV_OBJ_ID_PORT_VLAN: 771 771 if (dsa_port_offloads_bridge_port(dp, obj->orig_dev)) 772 - err = dsa_slave_vlan_add(dev, obj, extack); 772 
+ err = dsa_user_vlan_add(dev, obj, extack); 773 773 else 774 - err = dsa_slave_host_vlan_add(dev, obj, extack); 774 + err = dsa_user_host_vlan_add(dev, obj, extack); 775 775 break; 776 776 case SWITCHDEV_OBJ_ID_MRP: 777 777 if (!dsa_port_offloads_bridge_dev(dp, obj->orig_dev)) ··· 794 794 return err; 795 795 } 796 796 797 - static int dsa_slave_vlan_del(struct net_device *dev, 798 - const struct switchdev_obj *obj) 797 + static int dsa_user_vlan_del(struct net_device *dev, 798 + const struct switchdev_obj *obj) 799 799 { 800 - struct dsa_port *dp = dsa_slave_to_port(dev); 800 + struct dsa_port *dp = dsa_user_to_port(dev); 801 801 struct switchdev_obj_port_vlan *vlan; 802 802 803 803 if (dsa_port_skip_vlan_configuration(dp)) ··· 808 808 return dsa_port_vlan_del(dp, vlan); 809 809 } 810 810 811 - static int dsa_slave_host_vlan_del(struct net_device *dev, 812 - const struct switchdev_obj *obj) 811 + static int dsa_user_host_vlan_del(struct net_device *dev, 812 + const struct switchdev_obj *obj) 813 813 { 814 - struct dsa_port *dp = dsa_slave_to_port(dev); 814 + struct dsa_port *dp = dsa_user_to_port(dev); 815 815 struct switchdev_obj_port_vlan *vlan; 816 816 817 817 /* Do nothing if this is a software bridge */ ··· 826 826 return dsa_port_host_vlan_del(dp, vlan); 827 827 } 828 828 829 - static int dsa_slave_port_obj_del(struct net_device *dev, const void *ctx, 830 - const struct switchdev_obj *obj) 829 + static int dsa_user_port_obj_del(struct net_device *dev, const void *ctx, 830 + const struct switchdev_obj *obj) 831 831 { 832 - struct dsa_port *dp = dsa_slave_to_port(dev); 832 + struct dsa_port *dp = dsa_user_to_port(dev); 833 833 int err; 834 834 835 835 if (ctx && ctx != dp) ··· 850 850 break; 851 851 case SWITCHDEV_OBJ_ID_PORT_VLAN: 852 852 if (dsa_port_offloads_bridge_port(dp, obj->orig_dev)) 853 - err = dsa_slave_vlan_del(dev, obj); 853 + err = dsa_user_vlan_del(dev, obj); 854 854 else 855 - err = dsa_slave_host_vlan_del(dev, obj); 855 + err = 
dsa_user_host_vlan_del(dev, obj); 856 856 break; 857 857 case SWITCHDEV_OBJ_ID_MRP: 858 858 if (!dsa_port_offloads_bridge_dev(dp, obj->orig_dev)) ··· 875 875 return err; 876 876 } 877 877 878 - static inline netdev_tx_t dsa_slave_netpoll_send_skb(struct net_device *dev, 879 - struct sk_buff *skb) 878 + static inline netdev_tx_t dsa_user_netpoll_send_skb(struct net_device *dev, 879 + struct sk_buff *skb) 880 880 { 881 881 #ifdef CONFIG_NET_POLL_CONTROLLER 882 - struct dsa_slave_priv *p = netdev_priv(dev); 882 + struct dsa_user_priv *p = netdev_priv(dev); 883 883 884 884 return netpoll_send_skb(p->netpoll, skb); 885 885 #else ··· 888 888 #endif 889 889 } 890 890 891 - static void dsa_skb_tx_timestamp(struct dsa_slave_priv *p, 891 + static void dsa_skb_tx_timestamp(struct dsa_user_priv *p, 892 892 struct sk_buff *skb) 893 893 { 894 894 struct dsa_switch *ds = p->dp->ds; ··· 908 908 * tag to be successfully transmitted 909 909 */ 910 910 if (unlikely(netpoll_tx_running(dev))) 911 - return dsa_slave_netpoll_send_skb(dev, skb); 911 + return dsa_user_netpoll_send_skb(dev, skb); 912 912 913 913 /* Queue the SKB for transmission on the parent interface, but 914 914 * do not modify its EtherType 915 915 */ 916 - skb->dev = dsa_slave_to_master(dev); 916 + skb->dev = dsa_user_to_conduit(dev); 917 917 dev_queue_xmit(skb); 918 918 919 919 return NETDEV_TX_OK; ··· 927 927 928 928 /* For tail taggers, we need to pad short frames ourselves, to ensure 929 929 * that the tail tag does not fail at its role of being at the end of 930 - * the packet, once the master interface pads the frame. Account for 930 + * the packet, once the conduit interface pads the frame. Account for 931 931 * that pad length here, and pad later. 
932 932 */ 933 933 if (unlikely(needed_tailroom && skb->len < ETH_ZLEN)) ··· 944 944 GFP_ATOMIC); 945 945 } 946 946 947 - static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev) 947 + static netdev_tx_t dsa_user_xmit(struct sk_buff *skb, struct net_device *dev) 948 948 { 949 - struct dsa_slave_priv *p = netdev_priv(dev); 949 + struct dsa_user_priv *p = netdev_priv(dev); 950 950 struct sk_buff *nskb; 951 951 952 952 dev_sw_netstats_tx_add(dev, 1, skb->len); ··· 981 981 982 982 /* ethtool operations *******************************************************/ 983 983 984 - static void dsa_slave_get_drvinfo(struct net_device *dev, 985 - struct ethtool_drvinfo *drvinfo) 984 + static void dsa_user_get_drvinfo(struct net_device *dev, 985 + struct ethtool_drvinfo *drvinfo) 986 986 { 987 987 strscpy(drvinfo->driver, "dsa", sizeof(drvinfo->driver)); 988 988 strscpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version)); 989 989 strscpy(drvinfo->bus_info, "platform", sizeof(drvinfo->bus_info)); 990 990 } 991 991 992 - static int dsa_slave_get_regs_len(struct net_device *dev) 992 + static int dsa_user_get_regs_len(struct net_device *dev) 993 993 { 994 - struct dsa_port *dp = dsa_slave_to_port(dev); 994 + struct dsa_port *dp = dsa_user_to_port(dev); 995 995 struct dsa_switch *ds = dp->ds; 996 996 997 997 if (ds->ops->get_regs_len) ··· 1001 1001 } 1002 1002 1003 1003 static void 1004 - dsa_slave_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *_p) 1004 + dsa_user_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *_p) 1005 1005 { 1006 - struct dsa_port *dp = dsa_slave_to_port(dev); 1006 + struct dsa_port *dp = dsa_user_to_port(dev); 1007 1007 struct dsa_switch *ds = dp->ds; 1008 1008 1009 1009 if (ds->ops->get_regs) 1010 1010 ds->ops->get_regs(ds, dp->index, regs, _p); 1011 1011 } 1012 1012 1013 - static int dsa_slave_nway_reset(struct net_device *dev) 1013 + static int dsa_user_nway_reset(struct net_device *dev) 1014 1014 { 
1015 - struct dsa_port *dp = dsa_slave_to_port(dev); 1015 + struct dsa_port *dp = dsa_user_to_port(dev); 1016 1016 1017 1017 return phylink_ethtool_nway_reset(dp->pl); 1018 1018 } 1019 1019 1020 - static int dsa_slave_get_eeprom_len(struct net_device *dev) 1020 + static int dsa_user_get_eeprom_len(struct net_device *dev) 1021 1021 { 1022 - struct dsa_port *dp = dsa_slave_to_port(dev); 1022 + struct dsa_port *dp = dsa_user_to_port(dev); 1023 1023 struct dsa_switch *ds = dp->ds; 1024 1024 1025 1025 if (ds->cd && ds->cd->eeprom_len) ··· 1031 1031 return 0; 1032 1032 } 1033 1033 1034 - static int dsa_slave_get_eeprom(struct net_device *dev, 1035 - struct ethtool_eeprom *eeprom, u8 *data) 1034 + static int dsa_user_get_eeprom(struct net_device *dev, 1035 + struct ethtool_eeprom *eeprom, u8 *data) 1036 1036 { 1037 - struct dsa_port *dp = dsa_slave_to_port(dev); 1037 + struct dsa_port *dp = dsa_user_to_port(dev); 1038 1038 struct dsa_switch *ds = dp->ds; 1039 1039 1040 1040 if (ds->ops->get_eeprom) ··· 1043 1043 return -EOPNOTSUPP; 1044 1044 } 1045 1045 1046 - static int dsa_slave_set_eeprom(struct net_device *dev, 1047 - struct ethtool_eeprom *eeprom, u8 *data) 1046 + static int dsa_user_set_eeprom(struct net_device *dev, 1047 + struct ethtool_eeprom *eeprom, u8 *data) 1048 1048 { 1049 - struct dsa_port *dp = dsa_slave_to_port(dev); 1049 + struct dsa_port *dp = dsa_user_to_port(dev); 1050 1050 struct dsa_switch *ds = dp->ds; 1051 1051 1052 1052 if (ds->ops->set_eeprom) ··· 1055 1055 return -EOPNOTSUPP; 1056 1056 } 1057 1057 1058 - static void dsa_slave_get_strings(struct net_device *dev, 1059 - uint32_t stringset, uint8_t *data) 1058 + static void dsa_user_get_strings(struct net_device *dev, 1059 + uint32_t stringset, uint8_t *data) 1060 1060 { 1061 - struct dsa_port *dp = dsa_slave_to_port(dev); 1061 + struct dsa_port *dp = dsa_user_to_port(dev); 1062 1062 struct dsa_switch *ds = dp->ds; 1063 1063 1064 1064 if (stringset == ETH_SS_STATS) { ··· 1077 1077 1078 1078 } 1079 
1079 1079
1080 - static void dsa_slave_get_ethtool_stats(struct net_device *dev,
1081 -                                         struct ethtool_stats *stats,
1082 -                                         uint64_t *data)
1080 + static void dsa_user_get_ethtool_stats(struct net_device *dev,
1081 +                                        struct ethtool_stats *stats,
1082 +                                        uint64_t *data)
1083 1083 {
1084 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1084 +         struct dsa_port *dp = dsa_user_to_port(dev);
1085 1085         struct dsa_switch *ds = dp->ds;
1086 1086         struct pcpu_sw_netstats *s;
1087 1087         unsigned int start;
··· 1107 1107                 ds->ops->get_ethtool_stats(ds, dp->index, data + 4);
1108 1108 }
1109 1109
1110 - static int dsa_slave_get_sset_count(struct net_device *dev, int sset)
1110 + static int dsa_user_get_sset_count(struct net_device *dev, int sset)
1111 1111 {
1112 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1112 +         struct dsa_port *dp = dsa_user_to_port(dev);
1113 1113         struct dsa_switch *ds = dp->ds;
1114 1114
1115 1115         if (sset == ETH_SS_STATS) {
··· 1129 1129         return -EOPNOTSUPP;
1130 1130 }
1131 1131
1132 - static void dsa_slave_get_eth_phy_stats(struct net_device *dev,
1133 -                                         struct ethtool_eth_phy_stats *phy_stats)
1132 + static void dsa_user_get_eth_phy_stats(struct net_device *dev,
1133 +                                        struct ethtool_eth_phy_stats *phy_stats)
1134 1134 {
1135 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1135 +         struct dsa_port *dp = dsa_user_to_port(dev);
1136 1136         struct dsa_switch *ds = dp->ds;
1137 1137
1138 1138         if (ds->ops->get_eth_phy_stats)
1139 1139                 ds->ops->get_eth_phy_stats(ds, dp->index, phy_stats);
1140 1140 }
1141 1141
1142 - static void dsa_slave_get_eth_mac_stats(struct net_device *dev,
1143 -                                         struct ethtool_eth_mac_stats *mac_stats)
1142 + static void dsa_user_get_eth_mac_stats(struct net_device *dev,
1143 +                                        struct ethtool_eth_mac_stats *mac_stats)
1144 1144 {
1145 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1145 +         struct dsa_port *dp = dsa_user_to_port(dev);
1146 1146         struct dsa_switch *ds = dp->ds;
1147 1147
1148 1148         if (ds->ops->get_eth_mac_stats)
··· 1150 1150 }
1151 1151
1152 1152 static void
1153 - dsa_slave_get_eth_ctrl_stats(struct net_device *dev,
1154 -                              struct ethtool_eth_ctrl_stats *ctrl_stats)
1153 + dsa_user_get_eth_ctrl_stats(struct net_device *dev,
1154 +                             struct ethtool_eth_ctrl_stats *ctrl_stats)
1155 1155 {
1156 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1156 +         struct dsa_port *dp = dsa_user_to_port(dev);
1157 1157         struct dsa_switch *ds = dp->ds;
1158 1158
1159 1159         if (ds->ops->get_eth_ctrl_stats)
··· 1161 1161 }
1162 1162
1163 1163 static void
1164 - dsa_slave_get_rmon_stats(struct net_device *dev,
1165 -                          struct ethtool_rmon_stats *rmon_stats,
1166 -                          const struct ethtool_rmon_hist_range **ranges)
1164 + dsa_user_get_rmon_stats(struct net_device *dev,
1165 +                         struct ethtool_rmon_stats *rmon_stats,
1166 +                         const struct ethtool_rmon_hist_range **ranges)
1167 1167 {
1168 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1168 +         struct dsa_port *dp = dsa_user_to_port(dev);
1169 1169         struct dsa_switch *ds = dp->ds;
1170 1170
1171 1171         if (ds->ops->get_rmon_stats)
1172 1172                 ds->ops->get_rmon_stats(ds, dp->index, rmon_stats, ranges);
1173 1173 }
1174 1174
1175 - static void dsa_slave_net_selftest(struct net_device *ndev,
1176 -                                    struct ethtool_test *etest, u64 *buf)
1175 + static void dsa_user_net_selftest(struct net_device *ndev,
1176 +                                   struct ethtool_test *etest, u64 *buf)
1177 1177 {
1178 -         struct dsa_port *dp = dsa_slave_to_port(ndev);
1178 +         struct dsa_port *dp = dsa_user_to_port(ndev);
1179 1179         struct dsa_switch *ds = dp->ds;
1180 1180
1181 1181         if (ds->ops->self_test) {
··· 1186 1186         net_selftest(ndev, etest, buf);
1187 1187 }
1188 1188
1189 - static int dsa_slave_get_mm(struct net_device *dev,
1190 -                             struct ethtool_mm_state *state)
1189 + static int dsa_user_get_mm(struct net_device *dev,
1190 +                            struct ethtool_mm_state *state)
1191 1191 {
1192 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1192 +         struct dsa_port *dp = dsa_user_to_port(dev);
1193 1193         struct dsa_switch *ds = dp->ds;
1194 1194
1195 1195         if (!ds->ops->get_mm)
··· 1198 1198         return ds->ops->get_mm(ds, dp->index, state);
1199 1199 }
1200 1200
1201 - static int dsa_slave_set_mm(struct net_device *dev, struct ethtool_mm_cfg *cfg,
1202 -                             struct netlink_ext_ack *extack)
1201 + static int dsa_user_set_mm(struct net_device *dev, struct ethtool_mm_cfg *cfg,
1202 +                            struct netlink_ext_ack *extack)
1203 1203 {
1204 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1204 +         struct dsa_port *dp = dsa_user_to_port(dev);
1205 1205         struct dsa_switch *ds = dp->ds;
1206 1206
1207 1207         if (!ds->ops->set_mm)
··· 1210 1210         return ds->ops->set_mm(ds, dp->index, cfg, extack);
1211 1211 }
1212 1212
1213 - static void dsa_slave_get_mm_stats(struct net_device *dev,
1214 -                                    struct ethtool_mm_stats *stats)
1213 + static void dsa_user_get_mm_stats(struct net_device *dev,
1214 +                                   struct ethtool_mm_stats *stats)
1215 1215 {
1216 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1216 +         struct dsa_port *dp = dsa_user_to_port(dev);
1217 1217         struct dsa_switch *ds = dp->ds;
1218 1218
1219 1219         if (ds->ops->get_mm_stats)
1220 1220                 ds->ops->get_mm_stats(ds, dp->index, stats);
1221 1221 }
1222 1222
1223 - static void dsa_slave_get_wol(struct net_device *dev, struct ethtool_wolinfo *w)
1223 + static void dsa_user_get_wol(struct net_device *dev, struct ethtool_wolinfo *w)
1224 1224 {
1225 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1225 +         struct dsa_port *dp = dsa_user_to_port(dev);
1226 1226         struct dsa_switch *ds = dp->ds;
1227 1227
1228 1228         phylink_ethtool_get_wol(dp->pl, w);
··· 1231 1231                 ds->ops->get_wol(ds, dp->index, w);
1232 1232 }
1233 1233
1234 - static int dsa_slave_set_wol(struct net_device *dev, struct ethtool_wolinfo *w)
1234 + static int dsa_user_set_wol(struct net_device *dev, struct ethtool_wolinfo *w)
1235 1235 {
1236 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1236 +         struct dsa_port *dp = dsa_user_to_port(dev);
1237 1237         struct dsa_switch *ds = dp->ds;
1238 1238         int ret = -EOPNOTSUPP;
1239 1239
··· 1245 1245         return ret;
1246 1246 }
1247 1247
1248 - static int dsa_slave_set_eee(struct net_device *dev, struct ethtool_eee *e)
1248 + static int dsa_user_set_eee(struct net_device *dev, struct ethtool_eee *e)
1249 1249 {
1250 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1250 +         struct dsa_port *dp = dsa_user_to_port(dev);
1251 1251         struct dsa_switch *ds = dp->ds;
1252 1252         int ret;
1253 1253
··· 1265 1265         return phylink_ethtool_set_eee(dp->pl, e);
1266 1266 }
1267 1267
1268 - static int dsa_slave_get_eee(struct net_device *dev, struct ethtool_eee *e)
1268 + static int dsa_user_get_eee(struct net_device *dev, struct ethtool_eee *e)
1269 1269 {
1270 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1270 +         struct dsa_port *dp = dsa_user_to_port(dev);
1271 1271         struct dsa_switch *ds = dp->ds;
1272 1272         int ret;
1273 1273
··· 1285 1285         return phylink_ethtool_get_eee(dp->pl, e);
1286 1286 }
1287 1287
1288 - static int dsa_slave_get_link_ksettings(struct net_device *dev,
1289 -                                         struct ethtool_link_ksettings *cmd)
1288 + static int dsa_user_get_link_ksettings(struct net_device *dev,
1289 +                                        struct ethtool_link_ksettings *cmd)
1290 1290 {
1291 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1291 +         struct dsa_port *dp = dsa_user_to_port(dev);
1292 1292
1293 1293         return phylink_ethtool_ksettings_get(dp->pl, cmd);
1294 1294 }
1295 1295
1296 - static int dsa_slave_set_link_ksettings(struct net_device *dev,
1297 -                                         const struct ethtool_link_ksettings *cmd)
1296 + static int dsa_user_set_link_ksettings(struct net_device *dev,
1297 +                                        const struct ethtool_link_ksettings *cmd)
1298 1298 {
1299 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1299 +         struct dsa_port *dp = dsa_user_to_port(dev);
1300 1300
1301 1301         return phylink_ethtool_ksettings_set(dp->pl, cmd);
1302 1302 }
1303 1303
1304 - static void dsa_slave_get_pause_stats(struct net_device *dev,
1305 -                                       struct ethtool_pause_stats *pause_stats)
1304 + static void dsa_user_get_pause_stats(struct net_device *dev,
1305 +                                      struct ethtool_pause_stats *pause_stats)
1306 1306 {
1307 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1307 +         struct dsa_port *dp = dsa_user_to_port(dev);
1308 1308         struct dsa_switch *ds = dp->ds;
1309 1309
1310 1310         if (ds->ops->get_pause_stats)
1311 1311                 ds->ops->get_pause_stats(ds, dp->index, pause_stats);
1312 1312 }
1313 1313
1314 - static void dsa_slave_get_pauseparam(struct net_device *dev,
1315 -                                      struct ethtool_pauseparam *pause)
1314 + static void dsa_user_get_pauseparam(struct net_device *dev,
1315 +                                     struct ethtool_pauseparam *pause)
1316 1316 {
1317 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1317 +         struct dsa_port *dp = dsa_user_to_port(dev);
1318 1318
1319 1319         phylink_ethtool_get_pauseparam(dp->pl, pause);
1320 1320 }
1321 1321
1322 - static int dsa_slave_set_pauseparam(struct net_device *dev,
1323 -                                     struct ethtool_pauseparam *pause)
1322 + static int dsa_user_set_pauseparam(struct net_device *dev,
1323 +                                    struct ethtool_pauseparam *pause)
1324 1324 {
1325 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1325 +         struct dsa_port *dp = dsa_user_to_port(dev);
1326 1326
1327 1327         return phylink_ethtool_set_pauseparam(dp->pl, pause);
1328 1328 }
1329 1329
1330 1330 #ifdef CONFIG_NET_POLL_CONTROLLER
1331 - static int dsa_slave_netpoll_setup(struct net_device *dev,
1332 -                                    struct netpoll_info *ni)
1331 + static int dsa_user_netpoll_setup(struct net_device *dev,
1332 +                                   struct netpoll_info *ni)
1333 1333 {
1334 -         struct net_device *master = dsa_slave_to_master(dev);
1335 -         struct dsa_slave_priv *p = netdev_priv(dev);
1334 +         struct net_device *conduit = dsa_user_to_conduit(dev);
1335 +         struct dsa_user_priv *p = netdev_priv(dev);
1336 1336         struct netpoll *netpoll;
1337 1337         int err = 0;
1338 1338
··· 1340 1340         if (!netpoll)
1341 1341                 return -ENOMEM;
1342 1342
1343 -         err = __netpoll_setup(netpoll, master);
1343 +         err = __netpoll_setup(netpoll, conduit);
1344 1344         if (err) {
1345 1345                 kfree(netpoll);
1346 1346                 goto out;
··· 1351 1351         return err;
1352 1352 }
1353 1353
1354 - static void dsa_slave_netpoll_cleanup(struct net_device *dev)
1354 + static void dsa_user_netpoll_cleanup(struct net_device *dev)
1355 1355 {
1356 -         struct dsa_slave_priv *p = netdev_priv(dev);
1356 +         struct dsa_user_priv *p = netdev_priv(dev);
1357 1357         struct netpoll *netpoll = p->netpoll;
1358 1358
1359 1359         if (!netpoll)
··· 1364 1364         __netpoll_free(netpoll);
1365 1365 }
1366 1366
1367 - static void dsa_slave_poll_controller(struct net_device *dev)
1367 + static void dsa_user_poll_controller(struct net_device *dev)
1368 1368 {
1369 1369 }
1370 1370 #endif
1371 1371
1372 1372 static struct dsa_mall_tc_entry *
1373 - dsa_slave_mall_tc_entry_find(struct net_device *dev, unsigned long cookie)
1373 + dsa_user_mall_tc_entry_find(struct net_device *dev, unsigned long cookie)
1374 1374 {
1375 -         struct dsa_slave_priv *p = netdev_priv(dev);
1375 +         struct dsa_user_priv *p = netdev_priv(dev);
1376 1376         struct dsa_mall_tc_entry *mall_tc_entry;
1377 1377
1378 1378         list_for_each_entry(mall_tc_entry, &p->mall_tc_list, list)
··· 1383 1383 }
1384 1384
1385 1385 static int
1386 - dsa_slave_add_cls_matchall_mirred(struct net_device *dev,
1387 -                                   struct tc_cls_matchall_offload *cls,
1388 -                                   bool ingress)
1386 + dsa_user_add_cls_matchall_mirred(struct net_device *dev,
1387 +                                  struct tc_cls_matchall_offload *cls,
1388 +                                  bool ingress)
1389 1389 {
1390 1390         struct netlink_ext_ack *extack = cls->common.extack;
1391 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1392 -         struct dsa_slave_priv *p = netdev_priv(dev);
1391 +         struct dsa_port *dp = dsa_user_to_port(dev);
1392 +         struct dsa_user_priv *p = netdev_priv(dev);
1393 1393         struct dsa_mall_mirror_tc_entry *mirror;
1394 1394         struct dsa_mall_tc_entry *mall_tc_entry;
1395 1395         struct dsa_switch *ds = dp->ds;
··· 1409 1409         if (!act->dev)
1410 1410                 return -EINVAL;
1411 1411
1412 -         if (!dsa_slave_dev_check(act->dev))
1412 +         if (!dsa_user_dev_check(act->dev))
1413 1413                 return -EOPNOTSUPP;
1414 1414
1415 1415         mall_tc_entry = kzalloc(sizeof(*mall_tc_entry), GFP_KERNEL);
··· 1420 1420         mall_tc_entry->type = DSA_PORT_MALL_MIRROR;
1421 1421         mirror = &mall_tc_entry->mirror;
1422 1422
1423 -         to_dp = dsa_slave_to_port(act->dev);
1423 +         to_dp = dsa_user_to_port(act->dev);
1424 1424
1425 1425         mirror->to_local_port = to_dp->index;
1426 1426         mirror->ingress = ingress;
··· 1437 1437 }
1438 1438
1439 1439 static int
1440 - dsa_slave_add_cls_matchall_police(struct net_device *dev,
1441 -                                   struct tc_cls_matchall_offload *cls,
1442 -                                   bool ingress)
1440 + dsa_user_add_cls_matchall_police(struct net_device *dev,
1441 +                                  struct tc_cls_matchall_offload *cls,
1442 +                                  bool ingress)
1443 1443 {
1444 1444         struct netlink_ext_ack *extack = cls->common.extack;
1445 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1446 -         struct dsa_slave_priv *p = netdev_priv(dev);
1445 +         struct dsa_port *dp = dsa_user_to_port(dev);
1446 +         struct dsa_user_priv *p = netdev_priv(dev);
1447 1447         struct dsa_mall_policer_tc_entry *policer;
1448 1448         struct dsa_mall_tc_entry *mall_tc_entry;
1449 1449         struct dsa_switch *ds = dp->ds;
··· 1497 1497         return err;
1498 1498 }
1499 1499
1500 - static int dsa_slave_add_cls_matchall(struct net_device *dev,
1501 -                                       struct tc_cls_matchall_offload *cls,
1502 -                                       bool ingress)
1500 + static int dsa_user_add_cls_matchall(struct net_device *dev,
1501 +                                      struct tc_cls_matchall_offload *cls,
1502 +                                      bool ingress)
1503 1503 {
1504 1504         int err = -EOPNOTSUPP;
1505 1505
1506 1506         if (cls->common.protocol == htons(ETH_P_ALL) &&
1507 1507             flow_offload_has_one_action(&cls->rule->action) &&
1508 1508             cls->rule->action.entries[0].id == FLOW_ACTION_MIRRED)
1509 -                 err = dsa_slave_add_cls_matchall_mirred(dev, cls, ingress);
1509 +                 err = dsa_user_add_cls_matchall_mirred(dev, cls, ingress);
1510 1510         else if (flow_offload_has_one_action(&cls->rule->action) &&
1511 1511                  cls->rule->action.entries[0].id == FLOW_ACTION_POLICE)
1512 -                 err = dsa_slave_add_cls_matchall_police(dev, cls, ingress);
1512 +                 err = dsa_user_add_cls_matchall_police(dev, cls, ingress);
1513 1513
1514 1514         return err;
1515 1515 }
1516 1516
1517 - static void dsa_slave_del_cls_matchall(struct net_device *dev,
1518 -                                        struct tc_cls_matchall_offload *cls)
1517 + static void dsa_user_del_cls_matchall(struct net_device *dev,
1518 +                                       struct tc_cls_matchall_offload *cls)
1519 1519 {
1520 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1520 +         struct dsa_port *dp = dsa_user_to_port(dev);
1521 1521         struct dsa_mall_tc_entry *mall_tc_entry;
1522 1522         struct dsa_switch *ds = dp->ds;
1523 1523
1524 -         mall_tc_entry = dsa_slave_mall_tc_entry_find(dev, cls->cookie);
1524 +         mall_tc_entry = dsa_user_mall_tc_entry_find(dev, cls->cookie);
1525 1525         if (!mall_tc_entry)
1526 1526                 return;
··· 1544 1544         kfree(mall_tc_entry);
1545 1545 }
1546 1546
1547 - static int dsa_slave_setup_tc_cls_matchall(struct net_device *dev,
1548 -                                            struct tc_cls_matchall_offload *cls,
1549 -                                            bool ingress)
1547 + static int dsa_user_setup_tc_cls_matchall(struct net_device *dev,
1548 +                                           struct tc_cls_matchall_offload *cls,
1549 +                                           bool ingress)
1550 1550 {
1551 1551         if (cls->common.chain_index)
1552 1552                 return -EOPNOTSUPP;
1553 1553
1554 1554         switch (cls->command) {
1555 1555         case TC_CLSMATCHALL_REPLACE:
1556 -                 return dsa_slave_add_cls_matchall(dev, cls, ingress);
1556 +                 return dsa_user_add_cls_matchall(dev, cls, ingress);
1557 1557         case TC_CLSMATCHALL_DESTROY:
1558 -                 dsa_slave_del_cls_matchall(dev, cls);
1558 +                 dsa_user_del_cls_matchall(dev, cls);
1559 1559                 return 0;
1560 1560         default:
1561 1561                 return -EOPNOTSUPP;
1562 1562         }
1563 1563 }
1564 1564
1565 - static int dsa_slave_add_cls_flower(struct net_device *dev,
1566 -                                     struct flow_cls_offload *cls,
1567 -                                     bool ingress)
1565 + static int dsa_user_add_cls_flower(struct net_device *dev,
1566 +                                    struct flow_cls_offload *cls,
1567 +                                    bool ingress)
1568 1568 {
1569 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1569 +         struct dsa_port *dp = dsa_user_to_port(dev);
1570 1570         struct dsa_switch *ds = dp->ds;
1571 1571         int port = dp->index;
1572 1572
··· 1576 1576         return ds->ops->cls_flower_add(ds, port, cls, ingress);
1577 1577 }
1578 1578
1579 - static int dsa_slave_del_cls_flower(struct net_device *dev,
1580 -                                     struct flow_cls_offload *cls,
1581 -                                     bool ingress)
1579 + static int dsa_user_del_cls_flower(struct net_device *dev,
1580 +                                    struct flow_cls_offload *cls,
1581 +                                    bool ingress)
1582 1582 {
1583 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1583 +         struct dsa_port *dp = dsa_user_to_port(dev);
1584 1584         struct dsa_switch *ds = dp->ds;
1585 1585         int port = dp->index;
1586 1586
··· 1590 1590         return ds->ops->cls_flower_del(ds, port, cls, ingress);
1591 1591 }
1592 1592
1593 - static int dsa_slave_stats_cls_flower(struct net_device *dev,
1594 -                                       struct flow_cls_offload *cls,
1595 -                                       bool ingress)
1593 + static int dsa_user_stats_cls_flower(struct net_device *dev,
1594 +                                      struct flow_cls_offload *cls,
1595 +                                      bool ingress)
1596 1596 {
1597 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1597 +         struct dsa_port *dp = dsa_user_to_port(dev);
1598 1598         struct dsa_switch *ds = dp->ds;
1599 1599         int port = dp->index;
1600 1600
··· 1604 1604         return ds->ops->cls_flower_stats(ds, port, cls, ingress);
1605 1605 }
1606 1606
1607 - static int dsa_slave_setup_tc_cls_flower(struct net_device *dev,
1608 -                                          struct flow_cls_offload *cls,
1609 -                                          bool ingress)
1607 + static int dsa_user_setup_tc_cls_flower(struct net_device *dev,
1608 +                                         struct flow_cls_offload *cls,
1609 +                                         bool ingress)
1610 1610 {
1611 1611         switch (cls->command) {
1612 1612         case FLOW_CLS_REPLACE:
1613 -                 return dsa_slave_add_cls_flower(dev, cls, ingress);
1613 +                 return dsa_user_add_cls_flower(dev, cls, ingress);
1614 1614         case FLOW_CLS_DESTROY:
1615 -                 return dsa_slave_del_cls_flower(dev, cls, ingress);
1615 +                 return dsa_user_del_cls_flower(dev, cls, ingress);
1616 1616         case FLOW_CLS_STATS:
1617 -                 return dsa_slave_stats_cls_flower(dev, cls, ingress);
1617 +                 return dsa_user_stats_cls_flower(dev, cls, ingress);
1618 1618         default:
1619 1619                 return -EOPNOTSUPP;
1620 1620         }
1621 1621 }
1622 1622
1623 - static int dsa_slave_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
1624 -                                        void *cb_priv, bool ingress)
1623 + static int dsa_user_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
1624 +                                       void *cb_priv, bool ingress)
1625 1625 {
1626 1626         struct net_device *dev = cb_priv;
1627 1627
··· 1630 1630
1631 1631         switch (type) {
1632 1632         case TC_SETUP_CLSMATCHALL:
1633 -                 return dsa_slave_setup_tc_cls_matchall(dev, type_data, ingress);
1633 +                 return dsa_user_setup_tc_cls_matchall(dev, type_data, ingress);
1634 1634         case TC_SETUP_CLSFLOWER:
1635 -                 return dsa_slave_setup_tc_cls_flower(dev, type_data, ingress);
1635 +                 return dsa_user_setup_tc_cls_flower(dev, type_data, ingress);
1636 1636         default:
1637 1637                 return -EOPNOTSUPP;
1638 1638         }
1639 1639 }
1640 1640
1641 - static int dsa_slave_setup_tc_block_cb_ig(enum tc_setup_type type,
1642 -                                           void *type_data, void *cb_priv)
1641 + static int dsa_user_setup_tc_block_cb_ig(enum tc_setup_type type,
1642 +                                          void *type_data, void *cb_priv)
1643 1643 {
1644 -         return dsa_slave_setup_tc_block_cb(type, type_data, cb_priv, true);
1644 +         return dsa_user_setup_tc_block_cb(type, type_data, cb_priv, true);
1645 1645 }
1646 1646
1647 - static int dsa_slave_setup_tc_block_cb_eg(enum tc_setup_type type,
1648 -                                           void *type_data, void *cb_priv)
1647 + static int dsa_user_setup_tc_block_cb_eg(enum tc_setup_type type,
1648 +                                          void *type_data, void *cb_priv)
1649 1649 {
1650 -         return dsa_slave_setup_tc_block_cb(type, type_data, cb_priv, false);
1650 +         return dsa_user_setup_tc_block_cb(type, type_data, cb_priv, false);
1651 1651 }
1652 1652
1653 - static LIST_HEAD(dsa_slave_block_cb_list);
1653 + static LIST_HEAD(dsa_user_block_cb_list);
1654 1654
1655 - static int dsa_slave_setup_tc_block(struct net_device *dev,
1656 -                                     struct flow_block_offload *f)
1655 + static int dsa_user_setup_tc_block(struct net_device *dev,
1656 +                                    struct flow_block_offload *f)
1657 1657 {
1658 1658         struct flow_block_cb *block_cb;
1659 1659         flow_setup_cb_t *cb;
1660 1660
1661 1661         if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
1662 -                 cb = dsa_slave_setup_tc_block_cb_ig;
1662 +                 cb = dsa_user_setup_tc_block_cb_ig;
1663 1663         else if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
1664 -                 cb = dsa_slave_setup_tc_block_cb_eg;
1664 +                 cb = dsa_user_setup_tc_block_cb_eg;
1665 1665         else
1666 1666                 return -EOPNOTSUPP;
1667 1667
1668 -         f->driver_block_list = &dsa_slave_block_cb_list;
1668 +         f->driver_block_list = &dsa_user_block_cb_list;
1669 1669
1670 1670         switch (f->command) {
1671 1671         case FLOW_BLOCK_BIND:
1672 -                 if (flow_block_cb_is_busy(cb, dev, &dsa_slave_block_cb_list))
1672 +                 if (flow_block_cb_is_busy(cb, dev, &dsa_user_block_cb_list))
1673 1673                         return -EBUSY;
1674 1674
1675 1675                 block_cb = flow_block_cb_alloc(cb, dev, dev, NULL);
··· 1677 1677                         return PTR_ERR(block_cb);
1678 1678
1679 1679                 flow_block_cb_add(block_cb, f);
1680 -                 list_add_tail(&block_cb->driver_list, &dsa_slave_block_cb_list);
1680 +                 list_add_tail(&block_cb->driver_list, &dsa_user_block_cb_list);
1681 1681                 return 0;
1682 1682         case FLOW_BLOCK_UNBIND:
1683 1683                 block_cb = flow_block_cb_lookup(f->block, cb, dev);
··· 1692 1692         }
1693 1693 }
1694 1694
1695 - static int dsa_slave_setup_ft_block(struct dsa_switch *ds, int port,
1696 -                                     void *type_data)
1695 + static int dsa_user_setup_ft_block(struct dsa_switch *ds, int port,
1696 +                                    void *type_data)
1697 1697 {
1698 -         struct net_device *master = dsa_port_to_master(dsa_to_port(ds, port));
1698 +         struct net_device *conduit = dsa_port_to_conduit(dsa_to_port(ds, port));
1699 1699
1700 -         if (!master->netdev_ops->ndo_setup_tc)
1700 +         if (!conduit->netdev_ops->ndo_setup_tc)
1701 1701                 return -EOPNOTSUPP;
1702 1702
1703 -         return master->netdev_ops->ndo_setup_tc(master, TC_SETUP_FT, type_data);
1703 +         return conduit->netdev_ops->ndo_setup_tc(conduit, TC_SETUP_FT, type_data);
1704 1704 }
1705 1705
1706 - static int dsa_slave_setup_tc(struct net_device *dev, enum tc_setup_type type,
1707 -                               void *type_data)
1706 + static int dsa_user_setup_tc(struct net_device *dev, enum tc_setup_type type,
1707 +                              void *type_data)
1708 1708 {
1709 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1709 +         struct dsa_port *dp = dsa_user_to_port(dev);
1710 1710         struct dsa_switch *ds = dp->ds;
1711 1711
1712 1712         switch (type) {
1713 1713         case TC_SETUP_BLOCK:
1714 -                 return dsa_slave_setup_tc_block(dev, type_data);
1714 +                 return dsa_user_setup_tc_block(dev, type_data);
1715 1715         case TC_SETUP_FT:
1716 -                 return dsa_slave_setup_ft_block(ds, dp->index, type_data);
1716 +                 return dsa_user_setup_ft_block(ds, dp->index, type_data);
1717 1717         default:
1718 1718                 break;
1719 1719         }
··· 1724 1724         return ds->ops->port_setup_tc(ds, dp->index, type, type_data);
1725 1725 }
1726 1726
1727 - static int dsa_slave_get_rxnfc(struct net_device *dev,
1728 -                                struct ethtool_rxnfc *nfc, u32 *rule_locs)
1727 + static int dsa_user_get_rxnfc(struct net_device *dev,
1728 +                               struct ethtool_rxnfc *nfc, u32 *rule_locs)
1729 1729 {
1730 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1730 +         struct dsa_port *dp = dsa_user_to_port(dev);
1731 1731         struct dsa_switch *ds = dp->ds;
1732 1732
1733 1733         if (!ds->ops->get_rxnfc)
··· 1736 1736         return ds->ops->get_rxnfc(ds, dp->index, nfc, rule_locs);
1737 1737 }
1738 1738
1739 - static int dsa_slave_set_rxnfc(struct net_device *dev,
1740 -                                struct ethtool_rxnfc *nfc)
1739 + static int dsa_user_set_rxnfc(struct net_device *dev,
1740 +                               struct ethtool_rxnfc *nfc)
1741 1741 {
1742 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1742 +         struct dsa_port *dp = dsa_user_to_port(dev);
1743 1743         struct dsa_switch *ds = dp->ds;
1744 1744
1745 1745         if (!ds->ops->set_rxnfc)
··· 1748 1748         return ds->ops->set_rxnfc(ds, dp->index, nfc);
1749 1749 }
1750 1750
1751 - static int dsa_slave_get_ts_info(struct net_device *dev,
1752 -                                  struct ethtool_ts_info *ts)
1751 + static int dsa_user_get_ts_info(struct net_device *dev,
1752 +                                 struct ethtool_ts_info *ts)
1753 1753 {
1754 -         struct dsa_slave_priv *p = netdev_priv(dev);
1754 +         struct dsa_user_priv *p = netdev_priv(dev);
1755 1755         struct dsa_switch *ds = p->dp->ds;
1756 1756
1757 1757         if (!ds->ops->get_ts_info)
··· 1760 1760         return ds->ops->get_ts_info(ds, p->dp->index, ts);
1761 1761 }
1762 1762
1763 - static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
1764 -                                      u16 vid)
1763 + static int dsa_user_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
1764 +                                     u16 vid)
1765 1765 {
1766 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1766 +         struct dsa_port *dp = dsa_user_to_port(dev);
1767 1767         struct switchdev_obj_port_vlan vlan = {
1768 1768                 .obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
1769 1769                 .vid = vid,
··· 1810 1810
1811 1811         if (dsa_switch_supports_mc_filtering(ds)) {
1812 1812                 netdev_for_each_synced_mc_addr(ha, dev) {
1813 -                         dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD,
1814 -                                                            ha->addr, vid);
1813 +                         dsa_user_schedule_standalone_work(dev, DSA_MC_ADD,
1814 +                                                           ha->addr, vid);
1815 1815                 }
1816 1816         }
1817 1817
1818 1818         if (dsa_switch_supports_uc_filtering(ds)) {
1819 1819                 netdev_for_each_synced_uc_addr(ha, dev) {
1820 -                         dsa_slave_schedule_standalone_work(dev, DSA_UC_ADD,
1821 -                                                            ha->addr, vid);
1820 +                         dsa_user_schedule_standalone_work(dev, DSA_UC_ADD,
1821 +                                                           ha->addr, vid);
1822 1822                 }
1823 1823         }
1824 1824
··· 1835 1835         return ret;
1836 1836 }
1837 1837
1838 - static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
1839 -                                       u16 vid)
1838 + static int dsa_user_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
1839 +                                      u16 vid)
1840 1840 {
1841 -         struct dsa_port *dp = dsa_slave_to_port(dev);
1841 +         struct dsa_port *dp = dsa_user_to_port(dev);
1842 1842         struct switchdev_obj_port_vlan vlan = {
1843 1843                 .vid = vid,
1844 1844                 /* This API only allows programming tagged, non-PVID VIDs */
··· 1874 1874
1875 1875         if (dsa_switch_supports_mc_filtering(ds)) {
1876 1876                 netdev_for_each_synced_mc_addr(ha, dev) {
1877 -                         dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL,
1878 -                                                            ha->addr, vid);
1877 +                         dsa_user_schedule_standalone_work(dev, DSA_MC_DEL,
1878 +                                                           ha->addr, vid);
1879 1879                 }
1880 1880         }
1881 1881
1882 1882         if (dsa_switch_supports_uc_filtering(ds)) {
1883 1883                 netdev_for_each_synced_uc_addr(ha, dev) {
1884 -                         dsa_slave_schedule_standalone_work(dev, DSA_UC_DEL,
1885 -                                                            ha->addr, vid);
1884 +                         dsa_user_schedule_standalone_work(dev, DSA_UC_DEL,
1885 +                                                           ha->addr, vid);
1886 1886                 }
1887 1887         }
1888 1888
··· 1893 1893         return 0;
1894 1894 }
1895 1895
1896 - static int dsa_slave_restore_vlan(struct net_device *vdev, int vid, void *arg)
1896 + static int dsa_user_restore_vlan(struct net_device *vdev, int vid, void *arg)
1897 1897 {
1898 1898         __be16 proto = vdev ? vlan_dev_vlan_proto(vdev) : htons(ETH_P_8021Q);
1899 1899
1900 -         return dsa_slave_vlan_rx_add_vid(arg, proto, vid);
1900 +         return dsa_user_vlan_rx_add_vid(arg, proto, vid);
1901 1901 }
1902 1902
1903 - static int dsa_slave_clear_vlan(struct net_device *vdev, int vid, void *arg)
1903 + static int dsa_user_clear_vlan(struct net_device *vdev, int vid, void *arg)
1904 1904 {
1905 1905         __be16 proto = vdev ? vlan_dev_vlan_proto(vdev) : htons(ETH_P_8021Q);
1906 1906
1907 -         return dsa_slave_vlan_rx_kill_vid(arg, proto, vid);
1907 +         return dsa_user_vlan_rx_kill_vid(arg, proto, vid);
1908 1908 }
1909 1909
1910 1910 /* Keep the VLAN RX filtering list in sync with the hardware only if VLAN
··· 1938 1938  * - the bridge VLANs
1939 1939  * - the 8021q upper VLANs
1940 1940  */
1941 - int dsa_slave_manage_vlan_filtering(struct net_device *slave,
1942 -                                     bool vlan_filtering)
1941 + int dsa_user_manage_vlan_filtering(struct net_device *user,
1942 +                                    bool vlan_filtering)
1943 1943 {
1944 1944         int err;
1945 1945
1946 1946         if (vlan_filtering) {
1947 -                 slave->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
1947 +                 user->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
1948 1948
1949 -                 err = vlan_for_each(slave, dsa_slave_restore_vlan, slave);
1949 +                 err = vlan_for_each(user, dsa_user_restore_vlan, user);
1950 1950                 if (err) {
1951 -                         vlan_for_each(slave, dsa_slave_clear_vlan, slave);
1952 -                         slave->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
1951 +                         vlan_for_each(user, dsa_user_clear_vlan, user);
1952 +                         user->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
1953 1953                         return err;
1954 1954                 }
1955 1955         } else {
1956 -                 err = vlan_for_each(slave, dsa_slave_clear_vlan, slave);
1956 +                 err = vlan_for_each(user, dsa_user_clear_vlan, user);
1957 1957                 if (err)
1958 1958                         return err;
1959 1959
1960 -                 slave->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
1960 +                 user->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
1961 1961         }
1962 1962
1963 1963         return 0;
··· 2028 2028         list_for_each_entry(dst, &dsa_tree_list, list) {
2029 2029                 list_for_each_entry(other_dp, &dst->ports, list) {
2030 2030                         struct dsa_hw_port *hw_port;
2031 -                         struct net_device *slave;
2031 +                         struct net_device *user;
2032 2032
2033 2033                         if (other_dp->type != DSA_PORT_TYPE_USER)
2034 2034                                 continue;
··· 2039 2039                         if (!other_dp->ds->mtu_enforcement_ingress)
2040 2040                                 continue;
2041 2041
2042 -                         slave = other_dp->slave;
2042 +                         user = other_dp->user;
2043 2043
2044 -                         if (min_mtu > slave->mtu)
2045 -                                 min_mtu = slave->mtu;
2044 +                         if (min_mtu > user->mtu)
2045 +                                 min_mtu = user->mtu;
2046 2046
2047 2047                         hw_port = kzalloc(sizeof(*hw_port), GFP_KERNEL);
2048 2048                         if (!hw_port)
2049 2049                                 goto out;
2050 2050
2051 -                         hw_port->dev = slave;
2052 -                         hw_port->old_mtu = slave->mtu;
2051 +                         hw_port->dev = user;
2052 +                         hw_port->old_mtu = user->mtu;
2053 2053
2054 2054                         list_add(&hw_port->list, &hw_port_list);
2055 2055                 }
··· 2059 2059          * interface's MTU first, regardless of whether the intention of the
2060 2060          * user was to raise or lower it.
2061 2061          */
2062 -         err = dsa_hw_port_list_set_mtu(&hw_port_list, dp->slave->mtu);
2062 +         err = dsa_hw_port_list_set_mtu(&hw_port_list, dp->user->mtu);
2063 2063         if (!err)
2064 2064                 goto out;
··· 2073 2073         dsa_hw_port_list_free(&hw_port_list);
2074 2074 }
2075 2075
2076 - int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
2076 + int dsa_user_change_mtu(struct net_device *dev, int new_mtu)
2077 2077 {
2078 -         struct net_device *master = dsa_slave_to_master(dev);
2079 -         struct dsa_port *dp = dsa_slave_to_port(dev);
2078 +         struct net_device *conduit = dsa_user_to_conduit(dev);
2079 +         struct dsa_port *dp = dsa_user_to_port(dev);
2080 2080         struct dsa_port *cpu_dp = dp->cpu_dp;
2081 2081         struct dsa_switch *ds = dp->ds;
2082 2082         struct dsa_port *other_dp;
2083 2083         int largest_mtu = 0;
2084 -         int new_master_mtu;
2085 -         int old_master_mtu;
2084 +         int new_conduit_mtu;
2085 +         int old_conduit_mtu;
2086 2086         int mtu_limit;
2087 2087         int overhead;
2088 2088         int cpu_mtu;
··· 2092 2092                 return -EOPNOTSUPP;
2093 2093
2094 2094         dsa_tree_for_each_user_port(other_dp, ds->dst) {
2095 -                 int slave_mtu;
2095 +                 int user_mtu;
2096 2096
2097 -                 /* During probe, this function will be called for each slave
2097 +                 /* During probe, this function will be called for each user
2098 2098                  * device, while not all of them have been allocated. That's
2099 2099                  * ok, it doesn't change what the maximum is, so ignore it.
2100 2100                  */
2101 -                 if (!other_dp->slave)
2101 +                 if (!other_dp->user)
2102 2102                         continue;
2103 2103
2104 2104                 /* Pretend that we already applied the setting, which we
2105 2105                  * actually haven't (still haven't done all integrity checks)
2106 2106                  */
2107 2107                 if (dp == other_dp)
2108 -                         slave_mtu = new_mtu;
2108 +                         user_mtu = new_mtu;
2109 2109                 else
2110 -                         slave_mtu = other_dp->slave->mtu;
2110 +                         user_mtu = other_dp->user->mtu;
2111 2111
2112 -                 if (largest_mtu < slave_mtu)
2113 -                         largest_mtu = slave_mtu;
2112 +                 if (largest_mtu < user_mtu)
2113 +                         largest_mtu = user_mtu;
2114 2114         }
2115 2115
2116 2116         overhead = dsa_tag_protocol_overhead(cpu_dp->tag_ops);
2117 -         mtu_limit = min_t(int, master->max_mtu, dev->max_mtu + overhead);
2118 -         old_master_mtu = master->mtu;
2119 -         new_master_mtu = largest_mtu + overhead;
2120 -         if (new_master_mtu > mtu_limit)
2117 +         mtu_limit = min_t(int, conduit->max_mtu, dev->max_mtu + overhead);
2118 +         old_conduit_mtu = conduit->mtu;
2119 +         new_conduit_mtu = largest_mtu + overhead;
2120 +         if (new_conduit_mtu > mtu_limit)
2121 2121                 return -ERANGE;
2122 2122
2123 -         /* If the master MTU isn't over limit, there's no need to check the CPU
2123 +         /* If the conduit MTU isn't over limit, there's no need to check the CPU
2124 2124          * MTU, since that surely isn't either.
2125 2125          */
2126 2126         cpu_mtu = largest_mtu;
2127 2127
2128 2128         /* Start applying stuff */
2129 -         if (new_master_mtu != old_master_mtu) {
2130 -                 err = dev_set_mtu(master, new_master_mtu);
2129 +         if (new_conduit_mtu != old_conduit_mtu) {
2130 +                 err = dev_set_mtu(conduit, new_conduit_mtu);
2131 2131                 if (err < 0)
2132 -                         goto out_master_failed;
2132 +                         goto out_conduit_failed;
2133 2133
2134 2134                 /* We only need to propagate the MTU of the CPU port to
2135 2135                  * upstream switches, so emit a notifier which updates them.
··· 2150 2150         return 0;
2151 2151
2152 2152 out_port_failed:
2153 -         if (new_master_mtu != old_master_mtu)
2154 -                 dsa_port_mtu_change(cpu_dp, old_master_mtu - overhead);
2153 +         if (new_conduit_mtu != old_conduit_mtu)
2154 +                 dsa_port_mtu_change(cpu_dp, old_conduit_mtu - overhead);
2155 2155 out_cpu_failed:
2156 -         if (new_master_mtu != old_master_mtu)
2157 -                 dev_set_mtu(master, old_master_mtu);
2158 - out_master_failed:
2156 +         if (new_conduit_mtu != old_conduit_mtu)
2157 +                 dev_set_mtu(conduit, old_conduit_mtu);
2158 + out_conduit_failed:
2159 2159         return err;
2160 2160 }
2161 2161
2162 2162 static int __maybe_unused
2163 - dsa_slave_dcbnl_set_default_prio(struct net_device *dev, struct dcb_app *app)
2163 + dsa_user_dcbnl_set_default_prio(struct net_device *dev, struct dcb_app *app)
2164 2164 {
2165 -         struct dsa_port *dp = dsa_slave_to_port(dev);
2165 +         struct dsa_port *dp = dsa_user_to_port(dev);
2166 2166         struct dsa_switch *ds = dp->ds;
2167 2167         unsigned long mask, new_prio;
2168 2168         int err, port = dp->index;
··· 2187 2187 }
2188 2188
2189 2189 static int __maybe_unused
2190 - dsa_slave_dcbnl_add_dscp_prio(struct net_device *dev, struct dcb_app *app)
2190 + dsa_user_dcbnl_add_dscp_prio(struct net_device *dev, struct dcb_app *app)
2191 2191 {
2192 -         struct dsa_port *dp = dsa_slave_to_port(dev);
2192 +         struct dsa_port *dp = dsa_user_to_port(dev);
2193 2193         struct dsa_switch *ds = dp->ds;
2194 2194         unsigned long mask, new_prio;
2195 2195         int err, port = dp->index;
··· 2220 2220         return 0;
2221 2221 }
2222 2222
2223 - static int __maybe_unused dsa_slave_dcbnl_ieee_setapp(struct net_device *dev,
2224 -                                                       struct dcb_app *app)
2223 + static int __maybe_unused dsa_user_dcbnl_ieee_setapp(struct net_device *dev,
2224 +                                                      struct dcb_app *app)
2225 2225 {
2226 2226         switch (app->selector) {
2227 2227         case IEEE_8021QAZ_APP_SEL_ETHERTYPE:
2228 2228                 switch (app->protocol) {
2229 2229                 case 0:
2230 -                         return dsa_slave_dcbnl_set_default_prio(dev, app);
2230 +                         return dsa_user_dcbnl_set_default_prio(dev, app);
2231 2231                 default:
2232 2232                         return -EOPNOTSUPP;
2233 2233                 }
2234 2234                 break;
2235 2235         case IEEE_8021QAZ_APP_SEL_DSCP:
2236 -                 return dsa_slave_dcbnl_add_dscp_prio(dev, app);
2236 +                 return dsa_user_dcbnl_add_dscp_prio(dev, app);
2237 2237         default:
2238 2238                 return -EOPNOTSUPP;
2239 2239         }
2240 2240 }
2241 2241
2242 2242 static int __maybe_unused
2243 - dsa_slave_dcbnl_del_default_prio(struct net_device *dev, struct dcb_app *app)
2243 + dsa_user_dcbnl_del_default_prio(struct net_device *dev, struct dcb_app *app)
2244 2244 {
2245 -         struct dsa_port *dp = dsa_slave_to_port(dev);
2245 +         struct dsa_port *dp = dsa_user_to_port(dev);
2246 2246         struct dsa_switch *ds = dp->ds;
2247 2247         unsigned long mask, new_prio;
2248 2248         int err, port = dp->index;
··· 2267 2267 }
2268 2268
2269 2269 static int __maybe_unused
2270 - dsa_slave_dcbnl_del_dscp_prio(struct net_device *dev, struct dcb_app *app)
2270 + dsa_user_dcbnl_del_dscp_prio(struct net_device *dev, struct dcb_app *app)
2271 2271 {
2272 -         struct dsa_port *dp = dsa_slave_to_port(dev);
2272 +         struct dsa_port *dp = dsa_user_to_port(dev);
2273 2273         struct dsa_switch *ds = dp->ds;
2274 2274         int err, port = dp->index;
2275 2275         u8 dscp = app->protocol;
··· 2290 2290         return 0;
2291 2291 }
2292 2292
2293 - static int __maybe_unused dsa_slave_dcbnl_ieee_delapp(struct net_device *dev,
2294 -                                                       struct dcb_app *app)
2293 + static int __maybe_unused dsa_user_dcbnl_ieee_delapp(struct net_device *dev,
2294 +                                                      struct dcb_app *app)
2295 2295 {
2296 2296         switch (app->selector) {
2297 2297         case IEEE_8021QAZ_APP_SEL_ETHERTYPE:
2298 2298                 switch (app->protocol) {
2299 2299                 case 0:
2300 -                         return dsa_slave_dcbnl_del_default_prio(dev, app);
2300 +                         return dsa_user_dcbnl_del_default_prio(dev, app);
2301 2301                 default:
2302 2302                         return -EOPNOTSUPP;
2303 2303                 }
2304 2304                 break;
2305 2305         case IEEE_8021QAZ_APP_SEL_DSCP:
2306 -                 return dsa_slave_dcbnl_del_dscp_prio(dev, app);
2306 +                 return dsa_user_dcbnl_del_dscp_prio(dev, app);
2307 2307         default:
2308 2308                 return -EOPNOTSUPP;
2309 2309         }
··· 2312 2312 /* Pre-populate the DCB application priority table with the priorities
2313 2313  * configured during switch setup, which we read from hardware here.
2314 2314  */
2315 - static int dsa_slave_dcbnl_init(struct net_device *dev)
2315 + static int dsa_user_dcbnl_init(struct net_device *dev)
2316 2316 {
2317 -         struct dsa_port *dp = dsa_slave_to_port(dev);
2317 +         struct dsa_port *dp = dsa_user_to_port(dev);
2318 2318         struct dsa_switch *ds = dp->ds;
2319 2319         int port = dp->index;
2320 2320         int err;
··· 2362 2362         return 0;
2363 2363 }
2364 2364
2365 - static const struct ethtool_ops dsa_slave_ethtool_ops = {
2366 -         .get_drvinfo = dsa_slave_get_drvinfo,
2367 -         .get_regs_len = dsa_slave_get_regs_len,
2368 -         .get_regs = dsa_slave_get_regs,
2369 -         .nway_reset = dsa_slave_nway_reset,
2365 + static const struct ethtool_ops dsa_user_ethtool_ops = {
2366 +         .get_drvinfo = dsa_user_get_drvinfo,
2367 +         .get_regs_len = dsa_user_get_regs_len,
2368 +         .get_regs = dsa_user_get_regs,
2369 +         .nway_reset = dsa_user_nway_reset,
2370 2370         .get_link = ethtool_op_get_link,
2371 -         .get_eeprom_len = dsa_slave_get_eeprom_len,
2372 -         .get_eeprom = dsa_slave_get_eeprom,
2373 -         .set_eeprom = dsa_slave_set_eeprom,
2374 -         .get_strings = dsa_slave_get_strings,
2375 -         .get_ethtool_stats = dsa_slave_get_ethtool_stats,
2376 -         .get_sset_count = dsa_slave_get_sset_count,
2377 -         .get_eth_phy_stats = dsa_slave_get_eth_phy_stats,
2378 -         .get_eth_mac_stats = dsa_slave_get_eth_mac_stats,
2379 -         .get_eth_ctrl_stats = dsa_slave_get_eth_ctrl_stats,
2380 -         .get_rmon_stats = dsa_slave_get_rmon_stats,
2381 -         .set_wol = dsa_slave_set_wol,
2382 -         .get_wol = dsa_slave_get_wol,
2383 -         .set_eee = dsa_slave_set_eee,
2384 -         .get_eee = dsa_slave_get_eee,
2385 -         .get_link_ksettings = dsa_slave_get_link_ksettings,
2386 -         .set_link_ksettings = dsa_slave_set_link_ksettings,
2387 -         .get_pause_stats =
dsa_slave_get_pause_stats, 2388 - .get_pauseparam = dsa_slave_get_pauseparam, 2389 - .set_pauseparam = dsa_slave_set_pauseparam, 2390 - .get_rxnfc = dsa_slave_get_rxnfc, 2391 - .set_rxnfc = dsa_slave_set_rxnfc, 2392 - .get_ts_info = dsa_slave_get_ts_info, 2393 - .self_test = dsa_slave_net_selftest, 2394 - .get_mm = dsa_slave_get_mm, 2395 - .set_mm = dsa_slave_set_mm, 2396 - .get_mm_stats = dsa_slave_get_mm_stats, 2371 + .get_eeprom_len = dsa_user_get_eeprom_len, 2372 + .get_eeprom = dsa_user_get_eeprom, 2373 + .set_eeprom = dsa_user_set_eeprom, 2374 + .get_strings = dsa_user_get_strings, 2375 + .get_ethtool_stats = dsa_user_get_ethtool_stats, 2376 + .get_sset_count = dsa_user_get_sset_count, 2377 + .get_eth_phy_stats = dsa_user_get_eth_phy_stats, 2378 + .get_eth_mac_stats = dsa_user_get_eth_mac_stats, 2379 + .get_eth_ctrl_stats = dsa_user_get_eth_ctrl_stats, 2380 + .get_rmon_stats = dsa_user_get_rmon_stats, 2381 + .set_wol = dsa_user_set_wol, 2382 + .get_wol = dsa_user_get_wol, 2383 + .set_eee = dsa_user_set_eee, 2384 + .get_eee = dsa_user_get_eee, 2385 + .get_link_ksettings = dsa_user_get_link_ksettings, 2386 + .set_link_ksettings = dsa_user_set_link_ksettings, 2387 + .get_pause_stats = dsa_user_get_pause_stats, 2388 + .get_pauseparam = dsa_user_get_pauseparam, 2389 + .set_pauseparam = dsa_user_set_pauseparam, 2390 + .get_rxnfc = dsa_user_get_rxnfc, 2391 + .set_rxnfc = dsa_user_set_rxnfc, 2392 + .get_ts_info = dsa_user_get_ts_info, 2393 + .self_test = dsa_user_net_selftest, 2394 + .get_mm = dsa_user_get_mm, 2395 + .set_mm = dsa_user_set_mm, 2396 + .get_mm_stats = dsa_user_get_mm_stats, 2397 2397 }; 2398 2398 2399 - static const struct dcbnl_rtnl_ops __maybe_unused dsa_slave_dcbnl_ops = { 2400 - .ieee_setapp = dsa_slave_dcbnl_ieee_setapp, 2401 - .ieee_delapp = dsa_slave_dcbnl_ieee_delapp, 2399 + static const struct dcbnl_rtnl_ops __maybe_unused dsa_user_dcbnl_ops = { 2400 + .ieee_setapp = dsa_user_dcbnl_ieee_setapp, 2401 + .ieee_delapp = dsa_user_dcbnl_ieee_delapp, 
2402 2402 };
2403 2403 
2404      - static void dsa_slave_get_stats64(struct net_device *dev,
2405      -				  struct rtnl_link_stats64 *s)
2404      + static void dsa_user_get_stats64(struct net_device *dev,
2405      +				 struct rtnl_link_stats64 *s)
2406 2406 {
2407      -	struct dsa_port *dp = dsa_slave_to_port(dev);
2407      +	struct dsa_port *dp = dsa_user_to_port(dev);
2408 2408 	struct dsa_switch *ds = dp->ds;
2409 2409 
2410 2410 	if (ds->ops->get_stats64)
···
2413 2413 		dev_get_tstats64(dev, s);
2414 2414 }
2415 2415 
2416      - static int dsa_slave_fill_forward_path(struct net_device_path_ctx *ctx,
2417      -				       struct net_device_path *path)
2416      + static int dsa_user_fill_forward_path(struct net_device_path_ctx *ctx,
2417      +				      struct net_device_path *path)
2418 2418 {
2419      -	struct dsa_port *dp = dsa_slave_to_port(ctx->dev);
2420      -	struct net_device *master = dsa_port_to_master(dp);
2419      +	struct dsa_port *dp = dsa_user_to_port(ctx->dev);
2420      +	struct net_device *conduit = dsa_port_to_conduit(dp);
2421 2421 	struct dsa_port *cpu_dp = dp->cpu_dp;
2422 2422 
2423 2423 	path->dev = ctx->dev;
2424 2424 	path->type = DEV_PATH_DSA;
2425 2425 	path->dsa.proto = cpu_dp->tag_ops->proto;
2426 2426 	path->dsa.port = dp->index;
2427      -	ctx->dev = master;
2427      +	ctx->dev = conduit;
2428 2428 
2429 2429 	return 0;
2430 2430 }
2431 2431 
2432      - static const struct net_device_ops dsa_slave_netdev_ops = {
2433      -	.ndo_open		= dsa_slave_open,
2434      -	.ndo_stop		= dsa_slave_close,
2435      -	.ndo_start_xmit		= dsa_slave_xmit,
2436      -	.ndo_change_rx_flags	= dsa_slave_change_rx_flags,
2437      -	.ndo_set_rx_mode	= dsa_slave_set_rx_mode,
2438      -	.ndo_set_mac_address	= dsa_slave_set_mac_address,
2439      -	.ndo_fdb_dump		= dsa_slave_fdb_dump,
2440      -	.ndo_eth_ioctl		= dsa_slave_ioctl,
2441      -	.ndo_get_iflink		= dsa_slave_get_iflink,
2432      + static const struct net_device_ops dsa_user_netdev_ops = {
2433      +	.ndo_open		= dsa_user_open,
2434      +	.ndo_stop		= dsa_user_close,
2435      +	.ndo_start_xmit		= dsa_user_xmit,
2436      +	.ndo_change_rx_flags	= dsa_user_change_rx_flags,
2437      +	.ndo_set_rx_mode	= dsa_user_set_rx_mode,
2438      +	.ndo_set_mac_address	= dsa_user_set_mac_address,
2439      +	.ndo_fdb_dump		= dsa_user_fdb_dump,
2440      +	.ndo_eth_ioctl		= dsa_user_ioctl,
2441      +	.ndo_get_iflink		= dsa_user_get_iflink,
2442 2442 #ifdef CONFIG_NET_POLL_CONTROLLER
2443      -	.ndo_netpoll_setup	= dsa_slave_netpoll_setup,
2444      -	.ndo_netpoll_cleanup	= dsa_slave_netpoll_cleanup,
2445      -	.ndo_poll_controller	= dsa_slave_poll_controller,
2443      +	.ndo_netpoll_setup	= dsa_user_netpoll_setup,
2444      +	.ndo_netpoll_cleanup	= dsa_user_netpoll_cleanup,
2445      +	.ndo_poll_controller	= dsa_user_poll_controller,
2446 2446 #endif
2447      -	.ndo_setup_tc		= dsa_slave_setup_tc,
2448      -	.ndo_get_stats64	= dsa_slave_get_stats64,
2449      -	.ndo_vlan_rx_add_vid	= dsa_slave_vlan_rx_add_vid,
2450      -	.ndo_vlan_rx_kill_vid	= dsa_slave_vlan_rx_kill_vid,
2451      -	.ndo_change_mtu		= dsa_slave_change_mtu,
2452      -	.ndo_fill_forward_path	= dsa_slave_fill_forward_path,
2447      +	.ndo_setup_tc		= dsa_user_setup_tc,
2448      +	.ndo_get_stats64	= dsa_user_get_stats64,
2449      +	.ndo_vlan_rx_add_vid	= dsa_user_vlan_rx_add_vid,
2450      +	.ndo_vlan_rx_kill_vid	= dsa_user_vlan_rx_kill_vid,
2451      +	.ndo_change_mtu		= dsa_user_change_mtu,
2452      +	.ndo_fill_forward_path	= dsa_user_fill_forward_path,
2453 2453 };
2454 2454 
2455 2455 static struct device_type dsa_type = {
···
2465 2465 }
2466 2466 EXPORT_SYMBOL_GPL(dsa_port_phylink_mac_change);
2467 2467 
2468      - static void dsa_slave_phylink_fixed_state(struct phylink_config *config,
2469      -					  struct phylink_link_state *state)
2468      + static void dsa_user_phylink_fixed_state(struct phylink_config *config,
2469      +					 struct phylink_link_state *state)
2470 2470 {
2471 2471 	struct dsa_port *dp = container_of(config, struct dsa_port, pl_config);
2472 2472 	struct dsa_switch *ds = dp->ds;
···
2477 2477 	ds->ops->phylink_fixed_state(ds, dp->index, state);
2478 2478 }
2479 2479 
2480      - /* slave device setup *******************************************************/
2481      - static int dsa_slave_phy_connect(struct net_device *slave_dev, int addr,
2482      -				 u32 flags)
2480      + /* user device setup *******************************************************/
2481      + static int dsa_user_phy_connect(struct net_device *user_dev, int addr,
2482      +				u32 flags)
2483 2483 {
2484      -	struct dsa_port *dp = dsa_slave_to_port(slave_dev);
2484      +	struct dsa_port *dp = dsa_user_to_port(user_dev);
2485 2485 	struct dsa_switch *ds = dp->ds;
2486 2486 
2487      -	slave_dev->phydev = mdiobus_get_phy(ds->slave_mii_bus, addr);
2488      -	if (!slave_dev->phydev) {
2489      -		netdev_err(slave_dev, "no phy at %d\n", addr);
2487      +	user_dev->phydev = mdiobus_get_phy(ds->user_mii_bus, addr);
2488      +	if (!user_dev->phydev) {
2489      +		netdev_err(user_dev, "no phy at %d\n", addr);
2490 2490 		return -ENODEV;
2491 2491 	}
2492 2492 
2493      -	slave_dev->phydev->dev_flags |= flags;
2493      +	user_dev->phydev->dev_flags |= flags;
2494 2494 
2495      -	return phylink_connect_phy(dp->pl, slave_dev->phydev);
2495      +	return phylink_connect_phy(dp->pl, user_dev->phydev);
2496 2496 }
2497 2497 
2498      - static int dsa_slave_phy_setup(struct net_device *slave_dev)
2498      + static int dsa_user_phy_setup(struct net_device *user_dev)
2499 2499 {
2500      -	struct dsa_port *dp = dsa_slave_to_port(slave_dev);
2500      +	struct dsa_port *dp = dsa_user_to_port(user_dev);
2501 2501 	struct device_node *port_dn = dp->dn;
2502 2502 	struct dsa_switch *ds = dp->ds;
2503 2503 	u32 phy_flags = 0;
2504 2504 	int ret;
2505 2505 
2506      -	dp->pl_config.dev = &slave_dev->dev;
2506      +	dp->pl_config.dev = &user_dev->dev;
2507 2507 	dp->pl_config.type = PHYLINK_NETDEV;
2508 2508 
2509 2509 	/* The get_fixed_state callback takes precedence over polling the
···
2511 2511 	 * this if the switch provides such a callback.
2512 2512 	 */
2513 2513 	if (ds->ops->phylink_fixed_state) {
2514      -		dp->pl_config.get_fixed_state = dsa_slave_phylink_fixed_state;
2514      +		dp->pl_config.get_fixed_state = dsa_user_phylink_fixed_state;
2515 2515 		dp->pl_config.poll_fixed_state = true;
2516 2516 	}
2517 2517 
···
2523 2523 		phy_flags = ds->ops->get_phy_flags(ds, dp->index);
2524 2524 
2525 2525 	ret = phylink_of_phy_connect(dp->pl, port_dn, phy_flags);
2526      -	if (ret == -ENODEV && ds->slave_mii_bus) {
2526      +	if (ret == -ENODEV && ds->user_mii_bus) {
2527 2527 		/* We could not connect to a designated PHY or SFP, so try to
2528 2528 		 * use the switch internal MDIO bus instead
2529 2529 		 */
2530      -		ret = dsa_slave_phy_connect(slave_dev, dp->index, phy_flags);
2530      +		ret = dsa_user_phy_connect(user_dev, dp->index, phy_flags);
2531 2531 	}
2532 2532 	if (ret) {
2533      -		netdev_err(slave_dev, "failed to connect to PHY: %pe\n",
2533      +		netdev_err(user_dev, "failed to connect to PHY: %pe\n",
2534 2534 			   ERR_PTR(ret));
2535 2535 		dsa_port_phylink_destroy(dp);
2536 2536 	}
···
2538 2538 	return ret;
2539 2539 }
2540 2540 
2541      - void dsa_slave_setup_tagger(struct net_device *slave)
2541      + void dsa_user_setup_tagger(struct net_device *user)
2542 2542 {
2543      -	struct dsa_port *dp = dsa_slave_to_port(slave);
2544      -	struct net_device *master = dsa_port_to_master(dp);
2545      -	struct dsa_slave_priv *p = netdev_priv(slave);
2543      +	struct dsa_port *dp = dsa_user_to_port(user);
2544      +	struct net_device *conduit = dsa_port_to_conduit(dp);
2545      +	struct dsa_user_priv *p = netdev_priv(user);
2546 2546 	const struct dsa_port *cpu_dp = dp->cpu_dp;
2547 2547 	const struct dsa_switch *ds = dp->ds;
2548 2548 
2549      -	slave->needed_headroom = cpu_dp->tag_ops->needed_headroom;
2550      -	slave->needed_tailroom = cpu_dp->tag_ops->needed_tailroom;
2551      -	/* Try to save one extra realloc later in the TX path (in the master)
2552      -	 * by also inheriting the master's needed headroom and tailroom.
2549      +	user->needed_headroom = cpu_dp->tag_ops->needed_headroom;
2550      +	user->needed_tailroom = cpu_dp->tag_ops->needed_tailroom;
2551      +	/* Try to save one extra realloc later in the TX path (in the conduit)
2552      +	 * by also inheriting the conduit's needed headroom and tailroom.
2553 2553 	 * The 8021q driver also does this.
2554 2554 	 */
2555      -	slave->needed_headroom += master->needed_headroom;
2556      -	slave->needed_tailroom += master->needed_tailroom;
2555      +	user->needed_headroom += conduit->needed_headroom;
2556      +	user->needed_tailroom += conduit->needed_tailroom;
2557 2557 
2558 2558 	p->xmit = cpu_dp->tag_ops->xmit;
2559 2559 
2560      -	slave->features = master->vlan_features | NETIF_F_HW_TC;
2561      -	slave->hw_features |= NETIF_F_HW_TC;
2562      -	slave->features |= NETIF_F_LLTX;
2563      -	if (slave->needed_tailroom)
2564      -		slave->features &= ~(NETIF_F_SG | NETIF_F_FRAGLIST);
2560      +	user->features = conduit->vlan_features | NETIF_F_HW_TC;
2561      +	user->hw_features |= NETIF_F_HW_TC;
2562      +	user->features |= NETIF_F_LLTX;
2563      +	if (user->needed_tailroom)
2564      +		user->features &= ~(NETIF_F_SG | NETIF_F_FRAGLIST);
2565 2565 	if (ds->needs_standalone_vlan_filtering)
2566      -		slave->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
2566      +		user->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
2567 2567 }
2568 2568 
2569      - int dsa_slave_suspend(struct net_device *slave_dev)
2569      + int dsa_user_suspend(struct net_device *user_dev)
2570 2570 {
2571      -	struct dsa_port *dp = dsa_slave_to_port(slave_dev);
2571      +	struct dsa_port *dp = dsa_user_to_port(user_dev);
2572 2572 
2573      -	if (!netif_running(slave_dev))
2573      +	if (!netif_running(user_dev))
2574 2574 		return 0;
2575 2575 
2576      -	netif_device_detach(slave_dev);
2576      +	netif_device_detach(user_dev);
2577 2577 
2578 2578 	rtnl_lock();
2579 2579 	phylink_stop(dp->pl);
···
2582 2582 	return 0;
2583 2583 }
2584 2584 
2585      - int dsa_slave_resume(struct net_device *slave_dev)
2585      + int dsa_user_resume(struct net_device *user_dev)
2586 2586 {
2587      -	struct dsa_port *dp = dsa_slave_to_port(slave_dev);
2587      +	struct dsa_port *dp = dsa_user_to_port(user_dev);
2588 2588 
2589      -	if (!netif_running(slave_dev))
2589      +	if (!netif_running(user_dev))
2590 2590 		return 0;
2591 2591 
2592      -	netif_device_attach(slave_dev);
2592      +	netif_device_attach(user_dev);
2593 2593 
2594 2594 	rtnl_lock();
2595 2595 	phylink_start(dp->pl);
···
2598 2598 	return 0;
2599 2599 }
2600 2600 
2601      - int dsa_slave_create(struct dsa_port *port)
2601      + int dsa_user_create(struct dsa_port *port)
2602 2602 {
2603      -	struct net_device *master = dsa_port_to_master(port);
2603      +	struct net_device *conduit = dsa_port_to_conduit(port);
2604 2604 	struct dsa_switch *ds = port->ds;
2605      -	struct net_device *slave_dev;
2606      -	struct dsa_slave_priv *p;
2605      +	struct net_device *user_dev;
2606      +	struct dsa_user_priv *p;
2607 2607 	const char *name;
2608 2608 	int assign_type;
2609 2609 	int ret;
···
2619 2619 		assign_type = NET_NAME_ENUM;
2620 2620 	}
2621 2621 
2622      -	slave_dev = alloc_netdev_mqs(sizeof(struct dsa_slave_priv), name,
2623      -				     assign_type, ether_setup,
2624      -				     ds->num_tx_queues, 1);
2625      -	if (slave_dev == NULL)
2622      +	user_dev = alloc_netdev_mqs(sizeof(struct dsa_user_priv), name,
2623      +				    assign_type, ether_setup,
2624      +				    ds->num_tx_queues, 1);
2625      +	if (user_dev == NULL)
2626 2626 		return -ENOMEM;
2627 2627 
2628      -	slave_dev->rtnl_link_ops = &dsa_link_ops;
2629      -	slave_dev->ethtool_ops = &dsa_slave_ethtool_ops;
2628      +	user_dev->rtnl_link_ops = &dsa_link_ops;
2629      +	user_dev->ethtool_ops = &dsa_user_ethtool_ops;
2630 2630 #if IS_ENABLED(CONFIG_DCB)
2631      -	slave_dev->dcbnl_ops = &dsa_slave_dcbnl_ops;
2631      +	user_dev->dcbnl_ops = &dsa_user_dcbnl_ops;
2632 2632 #endif
2633 2633 	if (!is_zero_ether_addr(port->mac))
2634      -		eth_hw_addr_set(slave_dev, port->mac);
2634      +		eth_hw_addr_set(user_dev, port->mac);
2635 2635 	else
2636      -		eth_hw_addr_inherit(slave_dev, master);
2637      -	slave_dev->priv_flags |= IFF_NO_QUEUE;
2636      +		eth_hw_addr_inherit(user_dev, conduit);
2637      +	user_dev->priv_flags |= IFF_NO_QUEUE;
2638 2638 	if (dsa_switch_supports_uc_filtering(ds))
2639      -		slave_dev->priv_flags |= IFF_UNICAST_FLT;
2640      -	slave_dev->netdev_ops = &dsa_slave_netdev_ops;
2639      +		user_dev->priv_flags |= IFF_UNICAST_FLT;
2640      +	user_dev->netdev_ops = &dsa_user_netdev_ops;
2641 2641 	if (ds->ops->port_max_mtu)
2642      -		slave_dev->max_mtu = ds->ops->port_max_mtu(ds, port->index);
2643      -	SET_NETDEV_DEVTYPE(slave_dev, &dsa_type);
2642      +		user_dev->max_mtu = ds->ops->port_max_mtu(ds, port->index);
2643      +	SET_NETDEV_DEVTYPE(user_dev, &dsa_type);
2644 2644 
2645      -	SET_NETDEV_DEV(slave_dev, port->ds->dev);
2646      -	SET_NETDEV_DEVLINK_PORT(slave_dev, &port->devlink_port);
2647      -	slave_dev->dev.of_node = port->dn;
2648      -	slave_dev->vlan_features = master->vlan_features;
2645      +	SET_NETDEV_DEV(user_dev, port->ds->dev);
2646      +	SET_NETDEV_DEVLINK_PORT(user_dev, &port->devlink_port);
2647      +	user_dev->dev.of_node = port->dn;
2648      +	user_dev->vlan_features = conduit->vlan_features;
2649 2649 
2650      -	p = netdev_priv(slave_dev);
2651      -	slave_dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
2652      -	if (!slave_dev->tstats) {
2653      -		free_netdev(slave_dev);
2650      +	p = netdev_priv(user_dev);
2651      +	user_dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
2652      +	if (!user_dev->tstats) {
2653      +		free_netdev(user_dev);
2654 2654 		return -ENOMEM;
2655 2655 	}
2656 2656 
2657      -	ret = gro_cells_init(&p->gcells, slave_dev);
2657      +	ret = gro_cells_init(&p->gcells, user_dev);
2658 2658 	if (ret)
2659 2659 		goto out_free;
2660 2660 
2661 2661 	p->dp = port;
2662 2662 	INIT_LIST_HEAD(&p->mall_tc_list);
2663      -	port->slave = slave_dev;
2664      -	dsa_slave_setup_tagger(slave_dev);
2663      +	port->user = user_dev;
2664      +	dsa_user_setup_tagger(user_dev);
2665 2665 
2666      -	netif_carrier_off(slave_dev);
2666      +	netif_carrier_off(user_dev);
2667 2667 
2668      -	ret = dsa_slave_phy_setup(slave_dev);
2668      +	ret = dsa_user_phy_setup(user_dev);
2669 2669 	if (ret) {
2670      -		netdev_err(slave_dev,
2670      +		netdev_err(user_dev,
2671 2671 			   "error %d setting up PHY for tree %d, switch %d, port %d\n",
2672 2672 			   ret, ds->dst->index, ds->index, port->index);
2673 2673 		goto out_gcells;
···
2675 2675 
2676 2676 	rtnl_lock();
2677 2677 
2678      -	ret = dsa_slave_change_mtu(slave_dev, ETH_DATA_LEN);
2678      +	ret = dsa_user_change_mtu(user_dev, ETH_DATA_LEN);
2679 2679 	if (ret && ret != -EOPNOTSUPP)
2680 2680 		dev_warn(ds->dev, "nonfatal error %d setting MTU to %d on port %d\n",
2681 2681 			 ret, ETH_DATA_LEN, port->index);
2682 2682 
2683      -	ret = register_netdevice(slave_dev);
2683      +	ret = register_netdevice(user_dev);
2684 2684 	if (ret) {
2685      -		netdev_err(master, "error %d registering interface %s\n",
2686      -			   ret, slave_dev->name);
2685      +		netdev_err(conduit, "error %d registering interface %s\n",
2686      +			   ret, user_dev->name);
2687 2687 		rtnl_unlock();
2688 2688 		goto out_phy;
2689 2689 	}
2690 2690 
2691 2691 	if (IS_ENABLED(CONFIG_DCB)) {
2692      -		ret = dsa_slave_dcbnl_init(slave_dev);
2692      +		ret = dsa_user_dcbnl_init(user_dev);
2693 2693 		if (ret) {
2694      -			netdev_err(slave_dev,
2694      +			netdev_err(user_dev,
2695 2695 				   "failed to initialize DCB: %pe\n",
2696 2696 				   ERR_PTR(ret));
2697 2697 			rtnl_unlock();
···
2699 2699 		}
2700 2700 	}
2701 2701 
2702      -	ret = netdev_upper_dev_link(master, slave_dev, NULL);
2702      +	ret = netdev_upper_dev_link(conduit, user_dev, NULL);
2703 2703 
2704 2704 	rtnl_unlock();
2705 2705 
···
2709 2709 	return 0;
2710 2710 
2711 2711 out_unregister:
2712      -	unregister_netdev(slave_dev);
2712      +	unregister_netdev(user_dev);
2713 2713 out_phy:
2714 2714 	rtnl_lock();
2715 2715 	phylink_disconnect_phy(p->dp->pl);
···
2718 2718 out_gcells:
2719 2719 	gro_cells_destroy(&p->gcells);
2720 2720 out_free:
2721      -	free_percpu(slave_dev->tstats);
2722      -	free_netdev(slave_dev);
2723      -	port->slave = NULL;
2721      +	free_percpu(user_dev->tstats);
2722      +	free_netdev(user_dev);
2723      +	port->user = NULL;
2724 2724 	return ret;
2725 2725 }
2726 2726 
2727      - void dsa_slave_destroy(struct net_device *slave_dev)
2727      + void dsa_user_destroy(struct net_device *user_dev)
2728 2728 {
2729      -	struct net_device *master = dsa_slave_to_master(slave_dev);
2730      -	struct dsa_port *dp = dsa_slave_to_port(slave_dev);
2731      -	struct dsa_slave_priv *p = netdev_priv(slave_dev);
2729      +	struct net_device *conduit = dsa_user_to_conduit(user_dev);
2730      +	struct dsa_port *dp = dsa_user_to_port(user_dev);
2731      +	struct dsa_user_priv *p = netdev_priv(user_dev);
2732 2732 
2733      -	netif_carrier_off(slave_dev);
2733      +	netif_carrier_off(user_dev);
2734 2734 	rtnl_lock();
2735      -	netdev_upper_dev_unlink(master, slave_dev);
2736      -	unregister_netdevice(slave_dev);
2735      +	netdev_upper_dev_unlink(conduit, user_dev);
2736      +	unregister_netdevice(user_dev);
2737 2737 	phylink_disconnect_phy(dp->pl);
2738 2738 	rtnl_unlock();
2739 2739 
2740 2740 	dsa_port_phylink_destroy(dp);
2741 2741 	gro_cells_destroy(&p->gcells);
2742      -	free_percpu(slave_dev->tstats);
2743      -	free_netdev(slave_dev);
2742      +	free_percpu(user_dev->tstats);
2743      +	free_netdev(user_dev);
2744 2744 }
2745 2745 
2746      - int dsa_slave_change_master(struct net_device *dev, struct net_device *master,
2746      + int dsa_user_change_conduit(struct net_device *dev, struct net_device *conduit,
2747 2747 			    struct netlink_ext_ack *extack)
2748 2748 {
2749      -	struct net_device *old_master = dsa_slave_to_master(dev);
2750      -	struct dsa_port *dp = dsa_slave_to_port(dev);
2749      +	struct net_device *old_conduit = dsa_user_to_conduit(dev);
2750      +	struct dsa_port *dp = dsa_user_to_port(dev);
2751 2751 	struct dsa_switch *ds = dp->ds;
2752 2752 	struct net_device *upper;
2753 2753 	struct list_head *iter;
2754 2754 	int err;
2755 2755 
2756      -	if (master == old_master)
2756      +	if (conduit == old_conduit)
2757 2757 		return 0;
2758 2758 
2759      -	if (!ds->ops->port_change_master) {
2759      +	if (!ds->ops->port_change_conduit) {
2760 2760 		NL_SET_ERR_MSG_MOD(extack,
2761      -				   "Driver does not support changing DSA master");
2761      +				   "Driver does not support changing DSA conduit");
2762 2762 		return -EOPNOTSUPP;
2763 2763 	}
2764 2764 
2765      -	if (!netdev_uses_dsa(master)) {
2765      +	if (!netdev_uses_dsa(conduit)) {
2766 2766 		NL_SET_ERR_MSG_MOD(extack,
2767      -				   "Interface not eligible as DSA master");
2767      +				   "Interface not eligible as DSA conduit");
2768 2768 		return -EOPNOTSUPP;
2769 2769 	}
2770 2770 
2771      -	netdev_for_each_upper_dev_rcu(master, upper, iter) {
2772      -		if (dsa_slave_dev_check(upper))
2771      +	netdev_for_each_upper_dev_rcu(conduit, upper, iter) {
2772      +		if (dsa_user_dev_check(upper))
2773 2773 			continue;
2774 2774 		if (netif_is_bridge_master(upper))
2775 2775 			continue;
2776      -		NL_SET_ERR_MSG_MOD(extack, "Cannot join master with unknown uppers");
2776      +		NL_SET_ERR_MSG_MOD(extack, "Cannot join conduit with unknown uppers");
2777 2777 		return -EOPNOTSUPP;
2778 2778 	}
2779 2779 
2780      -	/* Since we allow live-changing the DSA master, plus we auto-open the
2781      -	 * DSA master when the user port opens => we need to ensure that the
2782      -	 * new DSA master is open too.
2780      +	/* Since we allow live-changing the DSA conduit, plus we auto-open the
2781      +	 * DSA conduit when the user port opens => we need to ensure that the
2782      +	 * new DSA conduit is open too.
2783 2783 	 */
2784 2784 	if (dev->flags & IFF_UP) {
2785      -		err = dev_open(master, extack);
2785      +		err = dev_open(conduit, extack);
2786 2786 		if (err)
2787 2787 			return err;
2788 2788 	}
2789 2789 
2790      -	netdev_upper_dev_unlink(old_master, dev);
2790      +	netdev_upper_dev_unlink(old_conduit, dev);
2791 2791 
2792      -	err = netdev_upper_dev_link(master, dev, extack);
2792      +	err = netdev_upper_dev_link(conduit, dev, extack);
2793 2793 	if (err)
2794      -		goto out_revert_old_master_unlink;
2794      +		goto out_revert_old_conduit_unlink;
2795 2795 
2796      -	err = dsa_port_change_master(dp, master, extack);
2796      +	err = dsa_port_change_conduit(dp, conduit, extack);
2797 2797 	if (err)
2798      -		goto out_revert_master_link;
2798      +		goto out_revert_conduit_link;
2799 2799 
2800 2800 	/* Update the MTU of the new CPU port through cross-chip notifiers */
2801      -	err = dsa_slave_change_mtu(dev, dev->mtu);
2801      +	err = dsa_user_change_mtu(dev, dev->mtu);
2802 2802 	if (err && err != -EOPNOTSUPP) {
2803 2803 		netdev_warn(dev,
2804      -			    "nonfatal error updating MTU with new master: %pe\n",
2804      +			    "nonfatal error updating MTU with new conduit: %pe\n",
2805 2805 			    ERR_PTR(err));
2806 2806 	}
2807 2807 
2808 2808 	/* If the port doesn't have its own MAC address and relies on the DSA
2809      -	 * master's one, inherit it again from the new DSA master.
2809      +	 * conduit's one, inherit it again from the new DSA conduit.
2810 2810 	 */
2811 2811 	if (is_zero_ether_addr(dp->mac))
2812      -		eth_hw_addr_inherit(dev, master);
2812      +		eth_hw_addr_inherit(dev, conduit);
2813 2813 
2814 2814 	return 0;
2815 2815 
2816      - out_revert_master_link:
2817      -	netdev_upper_dev_unlink(master, dev);
2818      - out_revert_old_master_unlink:
2819      -	netdev_upper_dev_link(old_master, dev, NULL);
2816      + out_revert_conduit_link:
2817      +	netdev_upper_dev_unlink(conduit, dev);
2818      + out_revert_old_conduit_unlink:
2819      +	netdev_upper_dev_link(old_conduit, dev, NULL);
2820 2820 	return err;
2821 2821 }
2822 2822 
2823      - bool dsa_slave_dev_check(const struct net_device *dev)
2823      + bool dsa_user_dev_check(const struct net_device *dev)
2824 2824 {
2825      -	return dev->netdev_ops == &dsa_slave_netdev_ops;
2825      +	return dev->netdev_ops == &dsa_user_netdev_ops;
2826 2826 }
2827      - EXPORT_SYMBOL_GPL(dsa_slave_dev_check);
2827      + EXPORT_SYMBOL_GPL(dsa_user_dev_check);
2828 2828 
2829      - static int dsa_slave_changeupper(struct net_device *dev,
2830      -				 struct netdev_notifier_changeupper_info *info)
2829      + static int dsa_user_changeupper(struct net_device *dev,
2830      +				struct netdev_notifier_changeupper_info *info)
2831 2831 {
2832      -	struct dsa_port *dp = dsa_slave_to_port(dev);
2832      +	struct dsa_port *dp = dsa_user_to_port(dev);
2833 2833 	struct netlink_ext_ack *extack;
2834 2834 	int err = NOTIFY_DONE;
2835 2835 
2836      -	if (!dsa_slave_dev_check(dev))
2836      +	if (!dsa_user_dev_check(dev))
2837 2837 		return err;
2838 2838 
2839 2839 	extack = netdev_notifier_info_to_extack(&info->info);
···
2885 2885 	return err;
2886 2886 }
2887 2887 
2888      - static int dsa_slave_prechangeupper(struct net_device *dev,
2889      -				    struct netdev_notifier_changeupper_info *info)
2888      + static int dsa_user_prechangeupper(struct net_device *dev,
2889      +				   struct netdev_notifier_changeupper_info *info)
2890 2890 {
2891      -	struct dsa_port *dp = dsa_slave_to_port(dev);
2891      +	struct dsa_port *dp = dsa_user_to_port(dev);
2892 2892 
2893      -	if (!dsa_slave_dev_check(dev))
2893      +	if (!dsa_user_dev_check(dev))
2894 2894 		return NOTIFY_DONE;
2895 2895 
2896 2896 	if (netif_is_bridge_master(info->upper_dev) && !info->linking)
2897 2897 		dsa_port_pre_bridge_leave(dp, info->upper_dev);
2898 2898 	else if (netif_is_lag_master(info->upper_dev) && !info->linking)
2899 2899 		dsa_port_pre_lag_leave(dp, info->upper_dev);
2900      -	/* dsa_port_pre_hsr_leave is not yet necessary since hsr cannot be
2901      -	 * meaningfully enslaved to a bridge yet
2900      +	/* dsa_port_pre_hsr_leave is not yet necessary since hsr devices cannot
2901      +	 * meaningfully placed under a bridge yet
2902 2902 	 */
2903 2903 
2904 2904 	return NOTIFY_DONE;
2905 2905 }
2906 2906 
2907 2907 static int
2908      - dsa_slave_lag_changeupper(struct net_device *dev,
2909      -			  struct netdev_notifier_changeupper_info *info)
2908      + dsa_user_lag_changeupper(struct net_device *dev,
2909      +			 struct netdev_notifier_changeupper_info *info)
2910 2910 {
2911 2911 	struct net_device *lower;
2912 2912 	struct list_head *iter;
···
2917 2917 		return err;
2918 2918 
2919 2919 	netdev_for_each_lower_dev(dev, lower, iter) {
2920      -		if (!dsa_slave_dev_check(lower))
2920      +		if (!dsa_user_dev_check(lower))
2921 2921 			continue;
2922 2922 
2923      -		dp = dsa_slave_to_port(lower);
2923      +		dp = dsa_user_to_port(lower);
2924 2924 		if (!dp->lag)
2925 2925 			/* Software LAG */
2926 2926 			continue;
2927 2927 
2928      -		err = dsa_slave_changeupper(lower, info);
2928      +		err = dsa_user_changeupper(lower, info);
2929 2929 		if (notifier_to_errno(err))
2930 2930 			break;
2931 2931 	}
···
2933 2933 	return err;
2934 2934 }
2935 2935 
2936      - /* Same as dsa_slave_lag_changeupper() except that it calls
2937      -  * dsa_slave_prechangeupper()
2936      + /* Same as dsa_user_lag_changeupper() except that it calls
2937      +  * dsa_user_prechangeupper()
2938 2938  */
2939 2939 static int
2940      - dsa_slave_lag_prechangeupper(struct net_device *dev,
2941      -			     struct netdev_notifier_changeupper_info *info)
2940      + dsa_user_lag_prechangeupper(struct net_device *dev,
2941      +			    struct netdev_notifier_changeupper_info *info)
2942 2942 {
2943 2943 	struct net_device *lower;
2944 2944 	struct list_head *iter;
···
2949 2949 		return err;
2950 2950 
2951 2951 	netdev_for_each_lower_dev(dev, lower, iter) {
2952      -		if (!dsa_slave_dev_check(lower))
2952      +		if (!dsa_user_dev_check(lower))
2953 2953 			continue;
2954 2954 
2955      -		dp = dsa_slave_to_port(lower);
2955      +		dp = dsa_user_to_port(lower);
2956 2956 		if (!dp->lag)
2957 2957 			/* Software LAG */
2958 2958 			continue;
2959 2959 
2960      -		err = dsa_slave_prechangeupper(lower, info);
2960      +		err = dsa_user_prechangeupper(lower, info);
2961 2961 		if (notifier_to_errno(err))
2962 2962 			break;
2963 2963 	}
···
2970 2970 				  struct netdev_notifier_changeupper_info *info)
2971 2971 {
2972 2972 	struct netlink_ext_ack *ext_ack;
2973      -	struct net_device *slave, *br;
2973      +	struct net_device *user, *br;
2974 2974 	struct dsa_port *dp;
2975 2975 
2976 2976 	ext_ack = netdev_notifier_info_to_extack(&info->info);
···
2978 2978 	if (!is_vlan_dev(dev))
2979 2979 		return NOTIFY_DONE;
2980 2980 
2981      -	slave = vlan_dev_real_dev(dev);
2982      -	if (!dsa_slave_dev_check(slave))
2981      +	user = vlan_dev_real_dev(dev);
2982      +	if (!dsa_user_dev_check(user))
2983 2983 		return NOTIFY_DONE;
2984 2984 
2985      -	dp = dsa_slave_to_port(slave);
2985      +	dp = dsa_user_to_port(user);
2986 2986 	br = dsa_port_bridge_dev_get(dp);
2987 2987 	if (!br)
2988 2988 		return NOTIFY_DONE;
···
2991 2991 	if (br_vlan_enabled(br) &&
2992 2992 	    netif_is_bridge_master(info->upper_dev) && info->linking) {
2993 2993 		NL_SET_ERR_MSG_MOD(ext_ack,
2994      -				   "Cannot enslave VLAN device into VLAN aware bridge");
2994      +				   "Cannot make VLAN device join VLAN-aware bridge");
2995 2995 		return notifier_from_errno(-EINVAL);
2996 2996 	}
···
2999 2999 }
3000 3000 
3001 3001 static int
3002      - dsa_slave_check_8021q_upper(struct net_device *dev,
3003      -			    struct netdev_notifier_changeupper_info *info)
3002      + dsa_user_check_8021q_upper(struct net_device *dev,
3003      +			   struct netdev_notifier_changeupper_info *info)
3004 3004 {
3005      -	struct dsa_port *dp = dsa_slave_to_port(dev);
3005      +	struct dsa_port *dp = dsa_user_to_port(dev);
3006 3006 	struct net_device *br = dsa_port_bridge_dev_get(dp);
3007 3007 	struct bridge_vlan_info br_info;
3008 3008 	struct netlink_ext_ack *extack;
···
3030 3030 }
3031 3031 
3032 3032 static int
3033      - dsa_slave_prechangeupper_sanity_check(struct net_device *dev,
3034      -				      struct netdev_notifier_changeupper_info *info)
3033      + dsa_user_prechangeupper_sanity_check(struct net_device *dev,
3034      +				     struct netdev_notifier_changeupper_info *info)
3035 3035 {
3036 3036 	struct dsa_switch *ds;
3037 3037 	struct dsa_port *dp;
3038 3038 	int err;
3039 3039 
3040      -	if (!dsa_slave_dev_check(dev))
3040      +	if (!dsa_user_dev_check(dev))
3041 3041 		return dsa_prevent_bridging_8021q_upper(dev, info);
3042 3042 
3043      -	dp = dsa_slave_to_port(dev);
3043      +	dp = dsa_user_to_port(dev);
3044 3044 	ds = dp->ds;
3045 3045 
3046 3046 	if (ds->ops->port_prechangeupper) {
···
3050 3050 	}
3051 3051 
3052 3052 	if (is_vlan_dev(info->upper_dev))
3053      -		return dsa_slave_check_8021q_upper(dev, info);
3053      +		return dsa_user_check_8021q_upper(dev, info);
3054 3054 
3055 3055 	return NOTIFY_DONE;
3056 3056 }
3057 3057 
3058      - /* To be eligible as a DSA master, a LAG must have all lower interfaces be
3059      -  * eligible DSA masters. Additionally, all LAG slaves must be DSA masters of
3058      + /* To be eligible as a DSA conduit, a LAG must have all lower interfaces be
3059      +  * eligible DSA conduits. Additionally, all LAG slaves must be DSA conduits of
3060 3060  * switches in the same switch tree.
3061 3061  */
3062      - static int dsa_lag_master_validate(struct net_device *lag_dev,
3063      -				   struct netlink_ext_ack *extack)
3062      + static int dsa_lag_conduit_validate(struct net_device *lag_dev,
3063      +				    struct netlink_ext_ack *extack)
3064 3064 {
3065 3065 	struct net_device *lower1, *lower2;
3066 3066 	struct list_head *iter1, *iter2;
···
3070 3070 		if (!netdev_uses_dsa(lower1) ||
3071 3071 		    !netdev_uses_dsa(lower2)) {
3072 3072 			NL_SET_ERR_MSG_MOD(extack,
3073      -					   "All LAG ports must be eligible as DSA masters");
3073      +					   "All LAG ports must be eligible as DSA conduits");
3074 3074 			return notifier_from_errno(-EINVAL);
3075 3075 		}
···
3080 3080 		if (!dsa_port_tree_same(lower1->dsa_ptr,
3081 3081 					lower2->dsa_ptr)) {
3082 3082 			NL_SET_ERR_MSG_MOD(extack,
3083      -					   "LAG contains DSA masters of disjoint switch trees");
3083      +					   "LAG contains DSA conduits of disjoint switch trees");
3084 3084 			return notifier_from_errno(-EINVAL);
3085 3085 		}
···
3090 3090 }
3091 3091 
3092 3092 static int
3093      - dsa_master_prechangeupper_sanity_check(struct net_device *master,
3094      -				       struct netdev_notifier_changeupper_info *info)
3093      + dsa_conduit_prechangeupper_sanity_check(struct net_device *conduit,
3094      +					struct netdev_notifier_changeupper_info *info)
3095 3095 {
3096 3096 	struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(&info->info);
3097 3097 
3098      -	if (!netdev_uses_dsa(master))
3098      +	if (!netdev_uses_dsa(conduit))
3099 3099 		return NOTIFY_DONE;
3100 3100 
3101 3101 	if (!info->linking)
3102 3102 		return NOTIFY_DONE;
3103 3103 
3104 3104 	/* Allow DSA switch uppers */
3105      -	if (dsa_slave_dev_check(info->upper_dev))
3105      +	if (dsa_user_dev_check(info->upper_dev))
3106 3106 		return NOTIFY_DONE;
3107 3107 
3108      -	/* Allow bridge uppers of DSA masters, subject to further
3108      +	/* Allow bridge uppers of DSA conduits, subject to further
3109 3109 	 * restrictions in dsa_bridge_prechangelower_sanity_check()
3110 3110 	 */
3111 3111 	if (netif_is_bridge_master(info->upper_dev))
3112 3112 return NOTIFY_DONE; 3113 3113 3114 3114 /* Allow LAG uppers, subject to further restrictions in 3115 - * dsa_lag_master_prechangelower_sanity_check() 3115 + * dsa_lag_conduit_prechangelower_sanity_check() 3116 3116 */ 3117 3117 if (netif_is_lag_master(info->upper_dev)) 3118 - return dsa_lag_master_validate(info->upper_dev, extack); 3118 + return dsa_lag_conduit_validate(info->upper_dev, extack); 3119 3119 3120 3120 NL_SET_ERR_MSG_MOD(extack, 3121 - "DSA master cannot join unknown upper interfaces"); 3121 + "DSA conduit cannot join unknown upper interfaces"); 3122 3122 return notifier_from_errno(-EBUSY); 3123 3123 } 3124 3124 3125 3125 static int 3126 - dsa_lag_master_prechangelower_sanity_check(struct net_device *dev, 3127 - struct netdev_notifier_changeupper_info *info) 3126 + dsa_lag_conduit_prechangelower_sanity_check(struct net_device *dev, 3127 + struct netdev_notifier_changeupper_info *info) 3128 3128 { 3129 3129 struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(&info->info); 3130 3130 struct net_device *lag_dev = info->upper_dev; ··· 3139 3139 3140 3140 if (!netdev_uses_dsa(dev)) { 3141 3141 NL_SET_ERR_MSG(extack, 3142 - "Only DSA masters can join a LAG DSA master"); 3142 + "Only DSA conduits can join a LAG DSA conduit"); 3143 3143 return notifier_from_errno(-EINVAL); 3144 3144 } 3145 3145 3146 3146 netdev_for_each_lower_dev(lag_dev, lower, iter) { 3147 3147 if (!dsa_port_tree_same(dev->dsa_ptr, lower->dsa_ptr)) { 3148 3148 NL_SET_ERR_MSG(extack, 3149 - "Interface is DSA master for a different switch tree than this LAG"); 3149 + "Interface is DSA conduit for a different switch tree than this LAG"); 3150 3150 return notifier_from_errno(-EINVAL); 3151 3151 } 3152 3152 ··· 3156 3156 return NOTIFY_DONE; 3157 3157 } 3158 3158 3159 - /* Don't allow bridging of DSA masters, since the bridge layer rx_handler 3159 + /* Don't allow bridging of DSA conduits, since the bridge layer rx_handler 3160 3160 * prevents the DSA fake ethertype handler to 
be invoked, so we don't get the 3161 3161 * chance to strip off and parse the DSA switch tag protocol header (the bridge 3162 3162 * layer just returns RX_HANDLER_CONSUMED, stopping RX processing for these 3163 3163 * frames). 3164 3164 * The only case where that would not be an issue is when bridging can already 3165 - * be offloaded, such as when the DSA master is itself a DSA or plain switchdev 3165 + * be offloaded, such as when the DSA conduit is itself a DSA or plain switchdev 3166 3166 * port, and is bridged only with other ports from the same hardware device. 3167 3167 */ 3168 3168 static int ··· 3188 3188 3189 3189 if (!netdev_port_same_parent_id(lower, new_lower)) { 3190 3190 NL_SET_ERR_MSG(extack, 3191 - "Cannot do software bridging with a DSA master"); 3191 + "Cannot do software bridging with a DSA conduit"); 3192 3192 return notifier_from_errno(-EINVAL); 3193 3193 } 3194 3194 } ··· 3196 3196 return NOTIFY_DONE; 3197 3197 } 3198 3198 3199 - static void dsa_tree_migrate_ports_from_lag_master(struct dsa_switch_tree *dst, 3200 - struct net_device *lag_dev) 3199 + static void dsa_tree_migrate_ports_from_lag_conduit(struct dsa_switch_tree *dst, 3200 + struct net_device *lag_dev) 3201 3201 { 3202 - struct net_device *new_master = dsa_tree_find_first_master(dst); 3202 + struct net_device *new_conduit = dsa_tree_find_first_conduit(dst); 3203 3203 struct dsa_port *dp; 3204 3204 int err; 3205 3205 3206 3206 dsa_tree_for_each_user_port(dp, dst) { 3207 - if (dsa_port_to_master(dp) != lag_dev) 3207 + if (dsa_port_to_conduit(dp) != lag_dev) 3208 3208 continue; 3209 3209 3210 - err = dsa_slave_change_master(dp->slave, new_master, NULL); 3210 + err = dsa_user_change_conduit(dp->user, new_conduit, NULL); 3211 3211 if (err) { 3212 - netdev_err(dp->slave, 3213 - "failed to restore master to %s: %pe\n", 3214 - new_master->name, ERR_PTR(err)); 3212 + netdev_err(dp->user, 3213 + "failed to restore conduit to %s: %pe\n", 3214 + new_conduit->name, ERR_PTR(err)); 3215 3215 } 
3216 3216 } 3217 3217 } 3218 3218 3219 - static int dsa_master_lag_join(struct net_device *master, 3220 - struct net_device *lag_dev, 3221 - struct netdev_lag_upper_info *uinfo, 3222 - struct netlink_ext_ack *extack) 3219 + static int dsa_conduit_lag_join(struct net_device *conduit, 3220 + struct net_device *lag_dev, 3221 + struct netdev_lag_upper_info *uinfo, 3222 + struct netlink_ext_ack *extack) 3223 3223 { 3224 - struct dsa_port *cpu_dp = master->dsa_ptr; 3224 + struct dsa_port *cpu_dp = conduit->dsa_ptr; 3225 3225 struct dsa_switch_tree *dst = cpu_dp->dst; 3226 3226 struct dsa_port *dp; 3227 3227 int err; 3228 3228 3229 - err = dsa_master_lag_setup(lag_dev, cpu_dp, uinfo, extack); 3229 + err = dsa_conduit_lag_setup(lag_dev, cpu_dp, uinfo, extack); 3230 3230 if (err) 3231 3231 return err; 3232 3232 3233 3233 dsa_tree_for_each_user_port(dp, dst) { 3234 - if (dsa_port_to_master(dp) != master) 3234 + if (dsa_port_to_conduit(dp) != conduit) 3235 3235 continue; 3236 3236 3237 - err = dsa_slave_change_master(dp->slave, lag_dev, extack); 3237 + err = dsa_user_change_conduit(dp->user, lag_dev, extack); 3238 3238 if (err) 3239 3239 goto restore; 3240 3240 } ··· 3243 3243 3244 3244 restore: 3245 3245 dsa_tree_for_each_user_port_continue_reverse(dp, dst) { 3246 - if (dsa_port_to_master(dp) != lag_dev) 3246 + if (dsa_port_to_conduit(dp) != lag_dev) 3247 3247 continue; 3248 3248 3249 - err = dsa_slave_change_master(dp->slave, master, NULL); 3249 + err = dsa_user_change_conduit(dp->user, conduit, NULL); 3250 3250 if (err) { 3251 - netdev_err(dp->slave, 3252 - "failed to restore master to %s: %pe\n", 3253 - master->name, ERR_PTR(err)); 3251 + netdev_err(dp->user, 3252 + "failed to restore conduit to %s: %pe\n", 3253 + conduit->name, ERR_PTR(err)); 3254 3254 } 3255 3255 } 3256 3256 3257 - dsa_master_lag_teardown(lag_dev, master->dsa_ptr); 3257 + dsa_conduit_lag_teardown(lag_dev, conduit->dsa_ptr); 3258 3258 3259 3259 return err; 3260 3260 } 3261 3261 3262 - static void 
dsa_master_lag_leave(struct net_device *master, 3263 - struct net_device *lag_dev) 3262 + static void dsa_conduit_lag_leave(struct net_device *conduit, 3263 + struct net_device *lag_dev) 3264 3264 { 3265 3265 struct dsa_port *dp, *cpu_dp = lag_dev->dsa_ptr; 3266 3266 struct dsa_switch_tree *dst = cpu_dp->dst; ··· 3277 3277 3278 3278 if (new_cpu_dp) { 3279 3279 /* Update the CPU port of the user ports still under the LAG 3280 - * so that dsa_port_to_master() continues to work properly 3280 + * so that dsa_port_to_conduit() continues to work properly 3281 3281 */ 3282 3282 dsa_tree_for_each_user_port(dp, dst) 3283 - if (dsa_port_to_master(dp) == lag_dev) 3283 + if (dsa_port_to_conduit(dp) == lag_dev) 3284 3284 dp->cpu_dp = new_cpu_dp; 3285 3285 3286 3286 /* Update the index of the virtual CPU port to match the lowest ··· 3289 3289 lag_dev->dsa_ptr = new_cpu_dp; 3290 3290 wmb(); 3291 3291 } else { 3292 - /* If the LAG DSA master has no ports left, migrate back all 3292 + /* If the LAG DSA conduit has no ports left, migrate back all 3293 3293 * user ports to the first physical CPU port 3294 3294 */ 3295 - dsa_tree_migrate_ports_from_lag_master(dst, lag_dev); 3295 + dsa_tree_migrate_ports_from_lag_conduit(dst, lag_dev); 3296 3296 } 3297 3297 3298 - /* This DSA master has left its LAG in any case, so let 3298 + /* This DSA conduit has left its LAG in any case, so let 3299 3299 * the CPU port leave the hardware LAG as well 3300 3300 */ 3301 - dsa_master_lag_teardown(lag_dev, master->dsa_ptr); 3301 + dsa_conduit_lag_teardown(lag_dev, conduit->dsa_ptr); 3302 3302 } 3303 3303 3304 - static int dsa_master_changeupper(struct net_device *dev, 3305 - struct netdev_notifier_changeupper_info *info) 3304 + static int dsa_conduit_changeupper(struct net_device *dev, 3305 + struct netdev_notifier_changeupper_info *info) 3306 3306 { 3307 3307 struct netlink_ext_ack *extack; 3308 3308 int err = NOTIFY_DONE; ··· 3314 3314 3315 3315 if (netif_is_lag_master(info->upper_dev)) { 3316 3316 if 
(info->linking) { 3317 - err = dsa_master_lag_join(dev, info->upper_dev, 3318 - info->upper_info, extack); 3317 + err = dsa_conduit_lag_join(dev, info->upper_dev, 3318 + info->upper_info, extack); 3319 3319 err = notifier_from_errno(err); 3320 3320 } else { 3321 - dsa_master_lag_leave(dev, info->upper_dev); 3321 + dsa_conduit_lag_leave(dev, info->upper_dev); 3322 3322 err = NOTIFY_OK; 3323 3323 } 3324 3324 } ··· 3326 3326 return err; 3327 3327 } 3328 3328 3329 - static int dsa_slave_netdevice_event(struct notifier_block *nb, 3330 - unsigned long event, void *ptr) 3329 + static int dsa_user_netdevice_event(struct notifier_block *nb, 3330 + unsigned long event, void *ptr) 3331 3331 { 3332 3332 struct net_device *dev = netdev_notifier_info_to_dev(ptr); 3333 3333 ··· 3336 3336 struct netdev_notifier_changeupper_info *info = ptr; 3337 3337 int err; 3338 3338 3339 - err = dsa_slave_prechangeupper_sanity_check(dev, info); 3339 + err = dsa_user_prechangeupper_sanity_check(dev, info); 3340 3340 if (notifier_to_errno(err)) 3341 3341 return err; 3342 3342 3343 - err = dsa_master_prechangeupper_sanity_check(dev, info); 3343 + err = dsa_conduit_prechangeupper_sanity_check(dev, info); 3344 3344 if (notifier_to_errno(err)) 3345 3345 return err; 3346 3346 3347 - err = dsa_lag_master_prechangelower_sanity_check(dev, info); 3347 + err = dsa_lag_conduit_prechangelower_sanity_check(dev, info); 3348 3348 if (notifier_to_errno(err)) 3349 3349 return err; 3350 3350 ··· 3352 3352 if (notifier_to_errno(err)) 3353 3353 return err; 3354 3354 3355 - err = dsa_slave_prechangeupper(dev, ptr); 3355 + err = dsa_user_prechangeupper(dev, ptr); 3356 3356 if (notifier_to_errno(err)) 3357 3357 return err; 3358 3358 3359 - err = dsa_slave_lag_prechangeupper(dev, ptr); 3359 + err = dsa_user_lag_prechangeupper(dev, ptr); 3360 3360 if (notifier_to_errno(err)) 3361 3361 return err; 3362 3362 ··· 3365 3365 case NETDEV_CHANGEUPPER: { 3366 3366 int err; 3367 3367 3368 - err = dsa_slave_changeupper(dev, ptr); 
3368 + err = dsa_user_changeupper(dev, ptr); 3369 3369 if (notifier_to_errno(err)) 3370 3370 return err; 3371 3371 3372 - err = dsa_slave_lag_changeupper(dev, ptr); 3372 + err = dsa_user_lag_changeupper(dev, ptr); 3373 3373 if (notifier_to_errno(err)) 3374 3374 return err; 3375 3375 3376 - err = dsa_master_changeupper(dev, ptr); 3376 + err = dsa_conduit_changeupper(dev, ptr); 3377 3377 if (notifier_to_errno(err)) 3378 3378 return err; 3379 3379 ··· 3384 3384 struct dsa_port *dp; 3385 3385 int err = 0; 3386 3386 3387 - if (dsa_slave_dev_check(dev)) { 3388 - dp = dsa_slave_to_port(dev); 3387 + if (dsa_user_dev_check(dev)) { 3388 + dp = dsa_user_to_port(dev); 3389 3389 3390 3390 err = dsa_port_lag_change(dp, info->lower_state_info); 3391 3391 } 3392 3392 3393 - /* Mirror LAG port events on DSA masters that are in 3393 + /* Mirror LAG port events on DSA conduits that are in 3394 3394 * a LAG towards their respective switch CPU ports 3395 3395 */ 3396 3396 if (netdev_uses_dsa(dev)) { ··· 3403 3403 } 3404 3404 case NETDEV_CHANGE: 3405 3405 case NETDEV_UP: { 3406 - /* Track state of master port. 3407 - * DSA driver may require the master port (and indirectly 3406 + /* Track state of conduit port. 3407 + * DSA driver may require the conduit port (and indirectly 3408 3408 * the tagger) to be available for some special operation. 3409 3409 */ 3410 3410 if (netdev_uses_dsa(dev)) { 3411 3411 struct dsa_port *cpu_dp = dev->dsa_ptr; 3412 3412 struct dsa_switch_tree *dst = cpu_dp->ds->dst; 3413 3413 3414 - /* Track when the master port is UP */ 3415 - dsa_tree_master_oper_state_change(dst, dev, 3416 - netif_oper_up(dev)); 3414 + /* Track when the conduit port is UP */ 3415 + dsa_tree_conduit_oper_state_change(dst, dev, 3416 + netif_oper_up(dev)); 3417 3417 3418 - /* Track when the master port is ready and can accept 3418 + /* Track when the conduit port is ready and can accept 3419 3419 * packet. 3420 3420 * NETDEV_UP event is not enough to flag a port as ready. 
3421 3421 * We also have to wait for linkwatch_do_dev to dev_activate 3422 3422 * and emit a NETDEV_CHANGE event. 3423 - * We check if a master port is ready by checking if the dev 3423 + * We check if a conduit port is ready by checking if the dev 3424 3424 * have a qdisc assigned and is not noop. 3425 3425 */ 3426 - dsa_tree_master_admin_state_change(dst, dev, 3427 - !qdisc_tx_is_noop(dev)); 3426 + dsa_tree_conduit_admin_state_change(dst, dev, 3427 + !qdisc_tx_is_noop(dev)); 3428 3428 3429 3429 return NOTIFY_OK; 3430 3430 } ··· 3442 3442 cpu_dp = dev->dsa_ptr; 3443 3443 dst = cpu_dp->ds->dst; 3444 3444 3445 - dsa_tree_master_admin_state_change(dst, dev, false); 3445 + dsa_tree_conduit_admin_state_change(dst, dev, false); 3446 3446 3447 3447 list_for_each_entry(dp, &dst->ports, list) { 3448 3448 if (!dsa_port_is_user(dp)) ··· 3451 3451 if (dp->cpu_dp != cpu_dp) 3452 3452 continue; 3453 3453 3454 - list_add(&dp->slave->close_list, &close_list); 3454 + list_add(&dp->user->close_list, &close_list); 3455 3455 } 3456 3456 3457 3457 dev_close_many(&close_list, true); ··· 3477 3477 switchdev_work->orig_dev, &info.info, NULL); 3478 3478 } 3479 3479 3480 - static void dsa_slave_switchdev_event_work(struct work_struct *work) 3480 + static void dsa_user_switchdev_event_work(struct work_struct *work) 3481 3481 { 3482 3482 struct dsa_switchdev_event_work *switchdev_work = 3483 3483 container_of(work, struct dsa_switchdev_event_work, work); ··· 3488 3488 struct dsa_port *dp; 3489 3489 int err; 3490 3490 3491 - dp = dsa_slave_to_port(dev); 3491 + dp = dsa_user_to_port(dev); 3492 3492 ds = dp->ds; 3493 3493 3494 3494 switch (switchdev_work->event) { ··· 3530 3530 static bool dsa_foreign_dev_check(const struct net_device *dev, 3531 3531 const struct net_device *foreign_dev) 3532 3532 { 3533 - const struct dsa_port *dp = dsa_slave_to_port(dev); 3533 + const struct dsa_port *dp = dsa_user_to_port(dev); 3534 3534 struct dsa_switch_tree *dst = dp->ds->dst; 3535 3535 3536 3536 if 
(netif_is_bridge_master(foreign_dev)) ··· 3543 3543 return true; 3544 3544 } 3545 3545 3546 - static int dsa_slave_fdb_event(struct net_device *dev, 3547 - struct net_device *orig_dev, 3548 - unsigned long event, const void *ctx, 3549 - const struct switchdev_notifier_fdb_info *fdb_info) 3546 + static int dsa_user_fdb_event(struct net_device *dev, 3547 + struct net_device *orig_dev, 3548 + unsigned long event, const void *ctx, 3549 + const struct switchdev_notifier_fdb_info *fdb_info) 3550 3550 { 3551 3551 struct dsa_switchdev_event_work *switchdev_work; 3552 - struct dsa_port *dp = dsa_slave_to_port(dev); 3552 + struct dsa_port *dp = dsa_user_to_port(dev); 3553 3553 bool host_addr = fdb_info->is_local; 3554 3554 struct dsa_switch *ds = dp->ds; 3555 3555 ··· 3598 3598 orig_dev->name, fdb_info->addr, fdb_info->vid, 3599 3599 host_addr ? " as host address" : ""); 3600 3600 3601 - INIT_WORK(&switchdev_work->work, dsa_slave_switchdev_event_work); 3601 + INIT_WORK(&switchdev_work->work, dsa_user_switchdev_event_work); 3602 3602 switchdev_work->event = event; 3603 3603 switchdev_work->dev = dev; 3604 3604 switchdev_work->orig_dev = orig_dev; ··· 3613 3613 } 3614 3614 3615 3615 /* Called under rcu_read_lock() */ 3616 - static int dsa_slave_switchdev_event(struct notifier_block *unused, 3617 - unsigned long event, void *ptr) 3616 + static int dsa_user_switchdev_event(struct notifier_block *unused, 3617 + unsigned long event, void *ptr) 3618 3618 { 3619 3619 struct net_device *dev = switchdev_notifier_info_to_dev(ptr); 3620 3620 int err; ··· 3622 3622 switch (event) { 3623 3623 case SWITCHDEV_PORT_ATTR_SET: 3624 3624 err = switchdev_handle_port_attr_set(dev, ptr, 3625 - dsa_slave_dev_check, 3626 - dsa_slave_port_attr_set); 3625 + dsa_user_dev_check, 3626 + dsa_user_port_attr_set); 3627 3627 return notifier_from_errno(err); 3628 3628 case SWITCHDEV_FDB_ADD_TO_DEVICE: 3629 3629 case SWITCHDEV_FDB_DEL_TO_DEVICE: 3630 3630 err = switchdev_handle_fdb_event_to_device(dev, event, 
ptr, 3631 - dsa_slave_dev_check, 3631 + dsa_user_dev_check, 3632 3632 dsa_foreign_dev_check, 3633 - dsa_slave_fdb_event); 3633 + dsa_user_fdb_event); 3634 3634 return notifier_from_errno(err); 3635 3635 default: 3636 3636 return NOTIFY_DONE; ··· 3639 3639 return NOTIFY_OK; 3640 3640 } 3641 3641 3642 - static int dsa_slave_switchdev_blocking_event(struct notifier_block *unused, 3643 - unsigned long event, void *ptr) 3642 + static int dsa_user_switchdev_blocking_event(struct notifier_block *unused, 3643 + unsigned long event, void *ptr) 3644 3644 { 3645 3645 struct net_device *dev = switchdev_notifier_info_to_dev(ptr); 3646 3646 int err; ··· 3648 3648 switch (event) { 3649 3649 case SWITCHDEV_PORT_OBJ_ADD: 3650 3650 err = switchdev_handle_port_obj_add_foreign(dev, ptr, 3651 - dsa_slave_dev_check, 3651 + dsa_user_dev_check, 3652 3652 dsa_foreign_dev_check, 3653 - dsa_slave_port_obj_add); 3653 + dsa_user_port_obj_add); 3654 3654 return notifier_from_errno(err); 3655 3655 case SWITCHDEV_PORT_OBJ_DEL: 3656 3656 err = switchdev_handle_port_obj_del_foreign(dev, ptr, 3657 - dsa_slave_dev_check, 3657 + dsa_user_dev_check, 3658 3658 dsa_foreign_dev_check, 3659 - dsa_slave_port_obj_del); 3659 + dsa_user_port_obj_del); 3660 3660 return notifier_from_errno(err); 3661 3661 case SWITCHDEV_PORT_ATTR_SET: 3662 3662 err = switchdev_handle_port_attr_set(dev, ptr, 3663 - dsa_slave_dev_check, 3664 - dsa_slave_port_attr_set); 3663 + dsa_user_dev_check, 3664 + dsa_user_port_attr_set); 3665 3665 return notifier_from_errno(err); 3666 3666 } 3667 3667 3668 3668 return NOTIFY_DONE; 3669 3669 } 3670 3670 3671 - static struct notifier_block dsa_slave_nb __read_mostly = { 3672 - .notifier_call = dsa_slave_netdevice_event, 3671 + static struct notifier_block dsa_user_nb __read_mostly = { 3672 + .notifier_call = dsa_user_netdevice_event, 3673 3673 }; 3674 3674 3675 - struct notifier_block dsa_slave_switchdev_notifier = { 3676 - .notifier_call = dsa_slave_switchdev_event, 3675 + struct 
notifier_block dsa_user_switchdev_notifier = { 3676 + .notifier_call = dsa_user_switchdev_event, 3677 3677 }; 3678 3678 3679 - struct notifier_block dsa_slave_switchdev_blocking_notifier = { 3680 - .notifier_call = dsa_slave_switchdev_blocking_event, 3679 + struct notifier_block dsa_user_switchdev_blocking_notifier = { 3680 + .notifier_call = dsa_user_switchdev_blocking_event, 3681 3681 }; 3682 3682 3683 - int dsa_slave_register_notifier(void) 3683 + int dsa_user_register_notifier(void) 3684 3684 { 3685 3685 struct notifier_block *nb; 3686 3686 int err; 3687 3687 3688 - err = register_netdevice_notifier(&dsa_slave_nb); 3688 + err = register_netdevice_notifier(&dsa_user_nb); 3689 3689 if (err) 3690 3690 return err; 3691 3691 3692 - err = register_switchdev_notifier(&dsa_slave_switchdev_notifier); 3692 + err = register_switchdev_notifier(&dsa_user_switchdev_notifier); 3693 3693 if (err) 3694 3694 goto err_switchdev_nb; 3695 3695 3696 - nb = &dsa_slave_switchdev_blocking_notifier; 3696 + nb = &dsa_user_switchdev_blocking_notifier; 3697 3697 err = register_switchdev_blocking_notifier(nb); 3698 3698 if (err) 3699 3699 goto err_switchdev_blocking_nb; ··· 3701 3701 return 0; 3702 3702 3703 3703 err_switchdev_blocking_nb: 3704 - unregister_switchdev_notifier(&dsa_slave_switchdev_notifier); 3704 + unregister_switchdev_notifier(&dsa_user_switchdev_notifier); 3705 3705 err_switchdev_nb: 3706 - unregister_netdevice_notifier(&dsa_slave_nb); 3706 + unregister_netdevice_notifier(&dsa_user_nb); 3707 3707 return err; 3708 3708 } 3709 3709 3710 - void dsa_slave_unregister_notifier(void) 3710 + void dsa_user_unregister_notifier(void) 3711 3711 { 3712 3712 struct notifier_block *nb; 3713 3713 int err; 3714 3714 3715 - nb = &dsa_slave_switchdev_blocking_notifier; 3715 + nb = &dsa_user_switchdev_blocking_notifier; 3716 3716 err = unregister_switchdev_blocking_notifier(nb); 3717 3717 if (err) 3718 3718 pr_err("DSA: failed to unregister switchdev blocking notifier (%d)\n", err); 3719 3719 
3720 - err = unregister_switchdev_notifier(&dsa_slave_switchdev_notifier); 3720 + err = unregister_switchdev_notifier(&dsa_user_switchdev_notifier); 3721 3721 if (err) 3722 3722 pr_err("DSA: failed to unregister switchdev notifier (%d)\n", err); 3723 3723 3724 - err = unregister_netdevice_notifier(&dsa_slave_nb); 3724 + err = unregister_netdevice_notifier(&dsa_user_nb); 3725 3725 if (err) 3726 - pr_err("DSA: failed to unregister slave notifier (%d)\n", err); 3726 + pr_err("DSA: failed to unregister user notifier (%d)\n", err); 3727 3727 }
-69
net/dsa/slave.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - 3 - #ifndef __DSA_SLAVE_H 4 - #define __DSA_SLAVE_H 5 - 6 - #include <linux/if_bridge.h> 7 - #include <linux/if_vlan.h> 8 - #include <linux/list.h> 9 - #include <linux/netpoll.h> 10 - #include <linux/types.h> 11 - #include <net/dsa.h> 12 - #include <net/gro_cells.h> 13 - 14 - struct net_device; 15 - struct netlink_ext_ack; 16 - 17 - extern struct notifier_block dsa_slave_switchdev_notifier; 18 - extern struct notifier_block dsa_slave_switchdev_blocking_notifier; 19 - 20 - struct dsa_slave_priv { 21 - /* Copy of CPU port xmit for faster access in slave transmit hot path */ 22 - struct sk_buff * (*xmit)(struct sk_buff *skb, 23 - struct net_device *dev); 24 - 25 - struct gro_cells gcells; 26 - 27 - /* DSA port data, such as switch, port index, etc. */ 28 - struct dsa_port *dp; 29 - 30 - #ifdef CONFIG_NET_POLL_CONTROLLER 31 - struct netpoll *netpoll; 32 - #endif 33 - 34 - /* TC context */ 35 - struct list_head mall_tc_list; 36 - }; 37 - 38 - void dsa_slave_mii_bus_init(struct dsa_switch *ds); 39 - int dsa_slave_create(struct dsa_port *dp); 40 - void dsa_slave_destroy(struct net_device *slave_dev); 41 - int dsa_slave_suspend(struct net_device *slave_dev); 42 - int dsa_slave_resume(struct net_device *slave_dev); 43 - int dsa_slave_register_notifier(void); 44 - void dsa_slave_unregister_notifier(void); 45 - void dsa_slave_sync_ha(struct net_device *dev); 46 - void dsa_slave_unsync_ha(struct net_device *dev); 47 - void dsa_slave_setup_tagger(struct net_device *slave); 48 - int dsa_slave_change_mtu(struct net_device *dev, int new_mtu); 49 - int dsa_slave_change_master(struct net_device *dev, struct net_device *master, 50 - struct netlink_ext_ack *extack); 51 - int dsa_slave_manage_vlan_filtering(struct net_device *dev, 52 - bool vlan_filtering); 53 - 54 - static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev) 55 - { 56 - struct dsa_slave_priv *p = netdev_priv(dev); 57 - 58 - return p->dp; 59 
- } 60 - 61 - static inline struct net_device * 62 - dsa_slave_to_master(const struct net_device *dev) 63 - { 64 - struct dsa_port *dp = dsa_slave_to_port(dev); 65 - 66 - return dsa_port_to_master(dp); 67 - } 68 - 69 - #endif
+10 -10
net/dsa/switch.c
··· 15 15 #include "dsa.h" 16 16 #include "netlink.h" 17 17 #include "port.h" 18 - #include "slave.h" 19 18 #include "switch.h" 20 19 #include "tag_8021q.h" 21 20 #include "trace.h" 21 + #include "user.h" 22 22 23 23 static unsigned int dsa_switch_fastest_ageing_time(struct dsa_switch *ds, 24 24 unsigned int ageing_time) ··· 894 894 * bits that depend on the tagger, such as the MTU. 895 895 */ 896 896 dsa_switch_for_each_user_port(dp, ds) { 897 - struct net_device *slave = dp->slave; 897 + struct net_device *user = dp->user; 898 898 899 - dsa_slave_setup_tagger(slave); 899 + dsa_user_setup_tagger(user); 900 900 901 901 /* rtnl_mutex is held in dsa_tree_change_tag_proto */ 902 - dsa_slave_change_mtu(slave, slave->mtu); 902 + dsa_user_change_mtu(user, user->mtu); 903 903 } 904 904 905 905 return 0; ··· 960 960 } 961 961 962 962 static int 963 - dsa_switch_master_state_change(struct dsa_switch *ds, 964 - struct dsa_notifier_master_state_info *info) 963 + dsa_switch_conduit_state_change(struct dsa_switch *ds, 964 + struct dsa_notifier_conduit_state_info *info) 965 965 { 966 - if (!ds->ops->master_state_change) 966 + if (!ds->ops->conduit_state_change) 967 967 return 0; 968 968 969 - ds->ops->master_state_change(ds, info->master, info->operational); 969 + ds->ops->conduit_state_change(ds, info->conduit, info->operational); 970 970 971 971 return 0; 972 972 } ··· 1056 1056 case DSA_NOTIFIER_TAG_8021Q_VLAN_DEL: 1057 1057 err = dsa_switch_tag_8021q_vlan_del(ds, info); 1058 1058 break; 1059 - case DSA_NOTIFIER_MASTER_STATE_CHANGE: 1060 - err = dsa_switch_master_state_change(ds, info); 1059 + case DSA_NOTIFIER_CONDUIT_STATE_CHANGE: 1060 + err = dsa_switch_conduit_state_change(ds, info); 1061 1061 break; 1062 1062 default: 1063 1063 err = -EOPNOTSUPP;
+4 -4
net/dsa/switch.h
··· 34 34 DSA_NOTIFIER_TAG_PROTO_DISCONNECT, 35 35 DSA_NOTIFIER_TAG_8021Q_VLAN_ADD, 36 36 DSA_NOTIFIER_TAG_8021Q_VLAN_DEL, 37 - DSA_NOTIFIER_MASTER_STATE_CHANGE, 37 + DSA_NOTIFIER_CONDUIT_STATE_CHANGE, 38 38 }; 39 39 40 40 /* DSA_NOTIFIER_AGEING_TIME */ ··· 105 105 u16 vid; 106 106 }; 107 107 108 - /* DSA_NOTIFIER_MASTER_STATE_CHANGE */ 109 - struct dsa_notifier_master_state_info { 110 - const struct net_device *master; 108 + /* DSA_NOTIFIER_CONDUIT_STATE_CHANGE */ 109 + struct dsa_notifier_conduit_state_info { 110 + const struct net_device *conduit; 111 111 bool operational; 112 112 }; 113 113
+5 -5
net/dsa/tag.c
··· 13 13 #include <net/dsa.h> 14 14 #include <net/dst_metadata.h> 15 15 16 - #include "slave.h" 17 16 #include "tag.h" 17 + #include "user.h" 18 18 19 19 static LIST_HEAD(dsa_tag_drivers_list); 20 20 static DEFINE_MUTEX(dsa_tag_drivers_lock); ··· 27 27 * switch, the DSA driver owning the interface to which the packet is 28 28 * delivered is never notified unless we do so here. 29 29 */ 30 - static bool dsa_skb_defer_rx_timestamp(struct dsa_slave_priv *p, 30 + static bool dsa_skb_defer_rx_timestamp(struct dsa_user_priv *p, 31 31 struct sk_buff *skb) 32 32 { 33 33 struct dsa_switch *ds = p->dp->ds; ··· 57 57 struct metadata_dst *md_dst = skb_metadata_dst(skb); 58 58 struct dsa_port *cpu_dp = dev->dsa_ptr; 59 59 struct sk_buff *nskb = NULL; 60 - struct dsa_slave_priv *p; 60 + struct dsa_user_priv *p; 61 61 62 62 if (unlikely(!cpu_dp)) { 63 63 kfree_skb(skb); ··· 75 75 if (!skb_has_extensions(skb)) 76 76 skb->slow_gro = 0; 77 77 78 - skb->dev = dsa_master_find_slave(dev, 0, port); 78 + skb->dev = dsa_conduit_find_user(dev, 0, port); 79 79 if (likely(skb->dev)) { 80 80 dsa_default_offload_fwd_mark(skb); 81 81 nskb = skb; ··· 94 94 skb->pkt_type = PACKET_HOST; 95 95 skb->protocol = eth_type_trans(skb, skb->dev); 96 96 97 - if (unlikely(!dsa_slave_dev_check(skb->dev))) { 97 + if (unlikely(!dsa_user_dev_check(skb->dev))) { 98 98 /* Packet is to be injected directly on an upper 99 99 * device, e.g. a team/bond, so skip all DSA-port 100 100 * specific actions.
+13 -13
net/dsa/tag.h
··· 9 9 #include <net/dsa.h> 10 10 11 11 #include "port.h" 12 - #include "slave.h" 12 + #include "user.h" 13 13 14 14 struct dsa_tag_driver { 15 15 const struct dsa_device_ops *ops; ··· 29 29 return ops->needed_headroom + ops->needed_tailroom; 30 30 } 31 31 32 - static inline struct net_device *dsa_master_find_slave(struct net_device *dev, 32 + static inline struct net_device *dsa_conduit_find_user(struct net_device *dev, 33 33 int device, int port) 34 34 { 35 35 struct dsa_port *cpu_dp = dev->dsa_ptr; ··· 39 39 list_for_each_entry(dp, &dst->ports, list) 40 40 if (dp->ds->index == device && dp->index == port && 41 41 dp->type == DSA_PORT_TYPE_USER) 42 - return dp->slave; 42 + return dp->user; 43 43 44 44 return NULL; 45 45 } ··· 49 49 */ 50 50 static inline struct sk_buff *dsa_untag_bridge_pvid(struct sk_buff *skb) 51 51 { 52 - struct dsa_port *dp = dsa_slave_to_port(skb->dev); 52 + struct dsa_port *dp = dsa_user_to_port(skb->dev); 53 53 struct net_device *br = dsa_port_bridge_dev_get(dp); 54 54 struct net_device *dev = skb->dev; 55 55 struct net_device *upper_dev; ··· 107 107 * to support termination through the bridge. 
108 108 */ 109 109 static inline struct net_device * 110 - dsa_find_designated_bridge_port_by_vid(struct net_device *master, u16 vid) 110 + dsa_find_designated_bridge_port_by_vid(struct net_device *conduit, u16 vid) 111 111 { 112 - struct dsa_port *cpu_dp = master->dsa_ptr; 112 + struct dsa_port *cpu_dp = conduit->dsa_ptr; 113 113 struct dsa_switch_tree *dst = cpu_dp->dst; 114 114 struct bridge_vlan_info vinfo; 115 - struct net_device *slave; 115 + struct net_device *user; 116 116 struct dsa_port *dp; 117 117 int err; 118 118 ··· 134 134 if (dp->cpu_dp != cpu_dp) 135 135 continue; 136 136 137 - slave = dp->slave; 137 + user = dp->user; 138 138 139 - err = br_vlan_get_info_rcu(slave, vid, &vinfo); 139 + err = br_vlan_get_info_rcu(user, vid, &vinfo); 140 140 if (err) 141 141 continue; 142 142 143 - return slave; 143 + return user; 144 144 } 145 145 146 146 return NULL; ··· 155 155 */ 156 156 static inline void dsa_default_offload_fwd_mark(struct sk_buff *skb) 157 157 { 158 - struct dsa_port *dp = dsa_slave_to_port(skb->dev); 158 + struct dsa_port *dp = dsa_user_to_port(skb->dev); 159 159 160 160 skb->offload_fwd_mark = !!(dp->bridge); 161 161 } ··· 215 215 memmove(skb->data, skb->data + len, 2 * ETH_ALEN); 216 216 } 217 217 218 - /* On RX, eth_type_trans() on the DSA master pulls ETH_HLEN bytes starting from 218 + /* On RX, eth_type_trans() on the DSA conduit pulls ETH_HLEN bytes starting from 219 219 * skb_mac_header(skb), which leaves skb->data pointing at the first byte after 220 - * what the DSA master perceives as the EtherType (the beginning of the L3 220 + * what the DSA conduit perceives as the EtherType (the beginning of the L3 221 221 * protocol). Since DSA EtherType header taggers treat the EtherType as part of 222 222 * the DSA tag itself, and the EtherType is 2 bytes in length, the DSA header 223 223 * is located 2 bytes behind skb->data. Note that EtherType in this context
+11 -11
net/dsa/tag_8021q.c
··· 73 73 struct dsa_8021q_context {
74 74 struct dsa_switch *ds;
75 75 struct list_head vlans;
76 - /* EtherType of RX VID, used for filtering on master interface */
76 + /* EtherType of RX VID, used for filtering on conduit interface */
77 77 __be16 proto;
78 78 };
79 79
··· 338 338 struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
339 339 struct dsa_port *dp = dsa_to_port(ds, port);
340 340 u16 vid = dsa_tag_8021q_standalone_vid(dp);
341 - struct net_device *master;
341 + struct net_device *conduit;
342 342 int err;
343 343
344 344 /* The CPU port is implicitly configured by
··· 347 347 if (!dsa_port_is_user(dp))
348 348 return 0;
349 349
350 - master = dsa_port_to_master(dp);
350 + conduit = dsa_port_to_conduit(dp);
351 351
352 352 err = dsa_port_tag_8021q_vlan_add(dp, vid, false);
353 353 if (err) {
··· 357 357 return err;
358 358 }
359 359
360 - /* Add the VLAN to the master's RX filter. */
361 - vlan_vid_add(master, ctx->proto, vid);
360 + /* Add the VLAN to the conduit's RX filter. */
361 + vlan_vid_add(conduit, ctx->proto, vid);
362 362
363 363 return err;
364 364 }
··· 368 368 struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
369 369 struct dsa_port *dp = dsa_to_port(ds, port);
370 370 u16 vid = dsa_tag_8021q_standalone_vid(dp);
371 - struct net_device *master;
371 + struct net_device *conduit;
372 372
373 373 /* The CPU port is implicitly configured by
374 374 * configuring the front-panel ports
··· 376 376 if (!dsa_port_is_user(dp))
377 377 return;
378 378
379 - master = dsa_port_to_master(dp);
379 + conduit = dsa_port_to_conduit(dp);
380 380
381 381 dsa_port_tag_8021q_vlan_del(dp, vid, false);
382 382
383 - vlan_vid_del(master, ctx->proto, vid);
383 + vlan_vid_del(conduit, ctx->proto, vid);
384 384 }
385 385
386 386 static int dsa_tag_8021q_setup(struct dsa_switch *ds)
··· 468 468 }
469 469 EXPORT_SYMBOL_GPL(dsa_8021q_xmit);
470 470
471 - struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *master,
471 + struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *conduit,
472 472 int vbid)
473 473 {
474 - struct dsa_port *cpu_dp = master->dsa_ptr;
474 + struct dsa_port *cpu_dp = conduit->dsa_ptr;
475 475 struct dsa_switch_tree *dst = cpu_dp->dst;
476 476 struct dsa_port *dp;
477 477
··· 490 490 continue;
491 491
492 492 if (dsa_port_bridge_num_get(dp) == vbid)
493 - return dp->slave;
493 + return dp->user;
494 494
495 495 return NULL;
496 496
+1 -1
net/dsa/tag_8021q.h
··· 16 16 void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id, 17 17 int *vbid); 18 18 19 - struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *master, 19 + struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *conduit, 20 20 int vbid); 21 21 22 22 int dsa_switch_tag_8021q_vlan_add(struct dsa_switch *ds,
+2 -2
net/dsa/tag_ar9331.c
··· 29 29 static struct sk_buff *ar9331_tag_xmit(struct sk_buff *skb, 30 30 struct net_device *dev) 31 31 { 32 - struct dsa_port *dp = dsa_slave_to_port(dev); 32 + struct dsa_port *dp = dsa_user_to_port(dev); 33 33 __le16 *phdr; 34 34 u16 hdr; 35 35 ··· 74 74 /* Get source port information */ 75 75 port = FIELD_GET(AR9331_HDR_PORT_NUM_MASK, hdr); 76 76 77 - skb->dev = dsa_master_find_slave(ndev, 0, port); 77 + skb->dev = dsa_conduit_find_user(ndev, 0, port); 78 78 if (!skb->dev) 79 79 return NULL; 80 80
+7 -7
net/dsa/tag_brcm.c
··· 85 85 struct net_device *dev, 86 86 unsigned int offset) 87 87 { 88 - struct dsa_port *dp = dsa_slave_to_port(dev); 88 + struct dsa_port *dp = dsa_user_to_port(dev); 89 89 u16 queue = skb_get_queue_mapping(skb); 90 90 u8 *brcm_tag; 91 91 ··· 96 96 * (including FCS and tag) because the length verification is done after 97 97 * the Broadcom tag is stripped off the ingress packet. 98 98 * 99 - * Let dsa_slave_xmit() free the SKB 99 + * Let dsa_user_xmit() free the SKB 100 100 */ 101 101 if (__skb_put_padto(skb, ETH_ZLEN + BRCM_TAG_LEN, false)) 102 102 return NULL; ··· 119 119 brcm_tag[2] = BRCM_IG_DSTMAP2_MASK; 120 120 brcm_tag[3] = (1 << dp->index) & BRCM_IG_DSTMAP1_MASK; 121 121 122 - /* Now tell the master network device about the desired output queue 122 + /* Now tell the conduit network device about the desired output queue 123 123 * as well 124 124 */ 125 125 skb_set_queue_mapping(skb, BRCM_TAG_SET_PORT_QUEUE(dp->index, queue)); ··· 164 164 /* Locate which port this is coming from */ 165 165 source_port = brcm_tag[3] & BRCM_EG_PID_MASK; 166 166 167 - skb->dev = dsa_master_find_slave(dev, 0, source_port); 167 + skb->dev = dsa_conduit_find_user(dev, 0, source_port); 168 168 if (!skb->dev) 169 169 return NULL; 170 170 ··· 216 216 static struct sk_buff *brcm_leg_tag_xmit(struct sk_buff *skb, 217 217 struct net_device *dev) 218 218 { 219 - struct dsa_port *dp = dsa_slave_to_port(dev); 219 + struct dsa_port *dp = dsa_user_to_port(dev); 220 220 u8 *brcm_tag; 221 221 222 222 /* The Ethernet switch we are interfaced with needs packets to be at ··· 226 226 * (including FCS and tag) because the length verification is done after 227 227 * the Broadcom tag is stripped off the ingress packet. 
228 228 * 229 - * Let dsa_slave_xmit() free the SKB 229 + * Let dsa_user_xmit() free the SKB 230 230 */ 231 231 if (__skb_put_padto(skb, ETH_ZLEN + BRCM_LEG_TAG_LEN, false)) 232 232 return NULL; ··· 264 264 265 265 source_port = brcm_tag[5] & BRCM_LEG_PORT_ID; 266 266 267 - skb->dev = dsa_master_find_slave(dev, 0, source_port); 267 + skb->dev = dsa_conduit_find_user(dev, 0, source_port); 268 268 if (!skb->dev) 269 269 return NULL; 270 270
+3 -3
net/dsa/tag_dsa.c
··· 129 129 static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev, 130 130 u8 extra) 131 131 { 132 - struct dsa_port *dp = dsa_slave_to_port(dev); 132 + struct dsa_port *dp = dsa_user_to_port(dev); 133 133 struct net_device *br_dev; 134 134 u8 tag_dev, tag_port; 135 135 enum dsa_cmd cmd; ··· 267 267 lag = dsa_lag_by_id(cpu_dp->dst, source_port + 1); 268 268 skb->dev = lag ? lag->dev : NULL; 269 269 } else { 270 - skb->dev = dsa_master_find_slave(dev, source_device, 270 + skb->dev = dsa_conduit_find_user(dev, source_device, 271 271 source_port); 272 272 } 273 273 274 274 if (!skb->dev) 275 275 return NULL; 276 276 277 - /* When using LAG offload, skb->dev is not a DSA slave interface, 277 + /* When using LAG offload, skb->dev is not a DSA user interface, 278 278 * so we cannot call dsa_default_offload_fwd_mark and we need to 279 279 * special-case it. 280 280 */
+2 -2
net/dsa/tag_gswip.c
··· 61 61 static struct sk_buff *gswip_tag_xmit(struct sk_buff *skb, 62 62 struct net_device *dev) 63 63 { 64 - struct dsa_port *dp = dsa_slave_to_port(dev); 64 + struct dsa_port *dp = dsa_user_to_port(dev); 65 65 u8 *gswip_tag; 66 66 67 67 skb_push(skb, GSWIP_TX_HEADER_LEN); ··· 89 89 90 90 /* Get source port information */ 91 91 port = (gswip_tag[7] & GSWIP_RX_SPPID_MASK) >> GSWIP_RX_SPPID_SHIFT; 92 - skb->dev = dsa_master_find_slave(dev, 0, port); 92 + skb->dev = dsa_conduit_find_user(dev, 0, port); 93 93 if (!skb->dev) 94 94 return NULL; 95 95
+2 -2
net/dsa/tag_hellcreek.c
··· 20 20 static struct sk_buff *hellcreek_xmit(struct sk_buff *skb, 21 21 struct net_device *dev) 22 22 { 23 - struct dsa_port *dp = dsa_slave_to_port(dev); 23 + struct dsa_port *dp = dsa_user_to_port(dev); 24 24 u8 *tag; 25 25 26 26 /* Calculate checksums (if required) before adding the trailer tag to ··· 45 45 u8 *tag = skb_tail_pointer(skb) - HELLCREEK_TAG_LEN; 46 46 unsigned int port = tag[0] & 0x03; 47 47 48 - skb->dev = dsa_master_find_slave(dev, 0, port); 48 + skb->dev = dsa_conduit_find_user(dev, 0, port); 49 49 if (!skb->dev) { 50 50 netdev_warn_once(dev, "Failed to get source port: %d\n", port); 51 51 return NULL;
+6 -6
net/dsa/tag_ksz.c
··· 87 87 struct net_device *dev, 88 88 unsigned int port, unsigned int len) 89 89 { 90 - skb->dev = dsa_master_find_slave(dev, 0, port); 90 + skb->dev = dsa_conduit_find_user(dev, 0, port); 91 91 if (!skb->dev) 92 92 return NULL; 93 93 ··· 119 119 120 120 static struct sk_buff *ksz8795_xmit(struct sk_buff *skb, struct net_device *dev) 121 121 { 122 - struct dsa_port *dp = dsa_slave_to_port(dev); 122 + struct dsa_port *dp = dsa_user_to_port(dev); 123 123 struct ethhdr *hdr; 124 124 u8 *tag; 125 125 ··· 256 256 return NULL; 257 257 258 258 kthread_init_work(&xmit_work->work, xmit_work_fn); 259 - /* Increase refcount so the kfree_skb in dsa_slave_xmit 259 + /* Increase refcount so the kfree_skb in dsa_user_xmit 260 260 * won't really free the packet. 261 261 */ 262 262 xmit_work->dp = dp; ··· 272 272 { 273 273 u16 queue_mapping = skb_get_queue_mapping(skb); 274 274 u8 prio = netdev_txq_to_tc(dev, queue_mapping); 275 - struct dsa_port *dp = dsa_slave_to_port(dev); 275 + struct dsa_port *dp = dsa_user_to_port(dev); 276 276 struct ethhdr *hdr; 277 277 __be16 *tag; 278 278 u16 val; ··· 344 344 { 345 345 u16 queue_mapping = skb_get_queue_mapping(skb); 346 346 u8 prio = netdev_txq_to_tc(dev, queue_mapping); 347 - struct dsa_port *dp = dsa_slave_to_port(dev); 347 + struct dsa_port *dp = dsa_user_to_port(dev); 348 348 struct ethhdr *hdr; 349 349 u8 *tag; 350 350 ··· 410 410 { 411 411 u16 queue_mapping = skb_get_queue_mapping(skb); 412 412 u8 prio = netdev_txq_to_tc(dev, queue_mapping); 413 - struct dsa_port *dp = dsa_slave_to_port(dev); 413 + struct dsa_port *dp = dsa_user_to_port(dev); 414 414 const struct ethhdr *hdr = eth_hdr(skb); 415 415 __be16 *tag; 416 416 u16 val;
+2 -2
net/dsa/tag_lan9303.c
··· 56 56 57 57 static struct sk_buff *lan9303_xmit(struct sk_buff *skb, struct net_device *dev) 58 58 { 59 - struct dsa_port *dp = dsa_slave_to_port(dev); 59 + struct dsa_port *dp = dsa_user_to_port(dev); 60 60 __be16 *lan9303_tag; 61 61 u16 tag; 62 62 ··· 99 99 100 100 source_port = lan9303_tag1 & 0x3; 101 101 102 - skb->dev = dsa_master_find_slave(dev, 0, source_port); 102 + skb->dev = dsa_conduit_find_user(dev, 0, source_port); 103 103 if (!skb->dev) { 104 104 dev_warn_ratelimited(&dev->dev, "Dropping packet due to invalid source port\n"); 105 105 return NULL;
+2 -2
net/dsa/tag_mtk.c
··· 23 23 static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb, 24 24 struct net_device *dev) 25 25 { 26 - struct dsa_port *dp = dsa_slave_to_port(dev); 26 + struct dsa_port *dp = dsa_user_to_port(dev); 27 27 u8 xmit_tpid; 28 28 u8 *mtk_tag; 29 29 ··· 85 85 /* Get source port information */ 86 86 port = (hdr & MTK_HDR_RECV_SOURCE_PORT_MASK); 87 87 88 - skb->dev = dsa_master_find_slave(dev, 0, port); 88 + skb->dev = dsa_conduit_find_user(dev, 0, port); 89 89 if (!skb->dev) 90 90 return NULL; 91 91
+3 -3
net/dsa/tag_none.c
··· 12 12 13 13 #define NONE_NAME "none" 14 14 15 - static struct sk_buff *dsa_slave_notag_xmit(struct sk_buff *skb, 16 - struct net_device *dev) 15 + static struct sk_buff *dsa_user_notag_xmit(struct sk_buff *skb, 16 + struct net_device *dev) 17 17 { 18 18 /* Just return the original SKB */ 19 19 return skb; ··· 22 22 static const struct dsa_device_ops none_ops = { 23 23 .name = NONE_NAME, 24 24 .proto = DSA_TAG_PROTO_NONE, 25 - .xmit = dsa_slave_notag_xmit, 25 + .xmit = dsa_user_notag_xmit, 26 26 }; 27 27 28 28 module_dsa_tag_driver(none_ops);
+11 -11
net/dsa/tag_ocelot.c
··· 45 45 static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
46 46 __be32 ifh_prefix, void **ifh)
47 47 {
48 - struct dsa_port *dp = dsa_slave_to_port(netdev);
48 + struct dsa_port *dp = dsa_user_to_port(netdev);
49 49 struct dsa_switch *ds = dp->ds;
50 50 u64 vlan_tci, tag_type;
51 51 void *injection;
··· 79 79 static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
80 80 struct net_device *netdev)
81 81 {
82 - struct dsa_port *dp = dsa_slave_to_port(netdev);
82 + struct dsa_port *dp = dsa_user_to_port(netdev);
83 83 void *injection;
84 84
85 85 ocelot_xmit_common(skb, netdev, cpu_to_be32(0x8880000a), &injection);
··· 91 91 static struct sk_buff *seville_xmit(struct sk_buff *skb,
92 92 struct net_device *netdev)
93 93 {
94 - struct dsa_port *dp = dsa_slave_to_port(netdev);
94 + struct dsa_port *dp = dsa_user_to_port(netdev);
95 95 void *injection;
96 96
97 97 ocelot_xmit_common(skb, netdev, cpu_to_be32(0x88800005), &injection);
··· 111 111 u16 vlan_tpid;
112 112 u64 rew_val;
113 113
114 - /* Revert skb->data by the amount consumed by the DSA master,
114 + /* Revert skb->data by the amount consumed by the DSA conduit,
115 115 * so it points to the beginning of the frame.
116 116 */
117 117 skb_push(skb, ETH_HLEN);
118 118 /* We don't care about the short prefix, it is just for easy entrance
119 - * into the DSA master's RX filter. Discard it now by moving it into
119 + * into the DSA conduit's RX filter. Discard it now by moving it into
120 120 * the headroom.
121 121 */
122 122 skb_pull(skb, OCELOT_SHORT_PREFIX_LEN);
··· 141 141 ocelot_xfh_get_vlan_tci(extraction, &vlan_tci);
142 142 ocelot_xfh_get_rew_val(extraction, &rew_val);
143 143
144 - skb->dev = dsa_master_find_slave(netdev, 0, src_port);
144 + skb->dev = dsa_conduit_find_user(netdev, 0, src_port);
145 145 if (!skb->dev)
146 146 /* The switch will reflect back some frames sent through
147 - * sockets opened on the bare DSA master. These will come back
147 + * sockets opened on the bare DSA conduit. These will come back
148 148 * with src_port equal to the index of the CPU port, for which
149 - * there is no slave registered. So don't print any error
149 + * there is no user registered. So don't print any error
150 150 * message here (ignore and drop those frames).
151 151 */
152 152 return NULL;
··· 170 170 * equal to the pvid of the ingress port and should not be used for
171 171 * processing.
172 172 */
173 - dp = dsa_slave_to_port(skb->dev);
173 + dp = dsa_user_to_port(skb->dev);
174 174 vlan_tpid = tag_type ? ETH_P_8021AD : ETH_P_8021Q;
175 175
176 176 if (dsa_port_is_vlan_filtering(dp) &&
··· 192 192 .xmit = ocelot_xmit,
193 193 .rcv = ocelot_rcv,
194 194 .needed_headroom = OCELOT_TOTAL_TAG_LEN,
195 - .promisc_on_master = true,
195 + .promisc_on_conduit = true,
196 196 };
197 197
198 198 DSA_TAG_DRIVER(ocelot_netdev_ops);
··· 204 204 .xmit = seville_xmit,
205 205 .rcv = ocelot_rcv,
206 206 .needed_headroom = OCELOT_TOTAL_TAG_LEN,
207 - .promisc_on_master = true,
207 + .promisc_on_conduit = true,
208 208 };
209 209
210 210 DSA_TAG_DRIVER(seville_netdev_ops);
+6 -6
net/dsa/tag_ocelot_8021q.c
··· 37 37 return NULL; 38 38 39 39 /* PTP over IP packets need UDP checksumming. We may have inherited 40 - * NETIF_F_HW_CSUM from the DSA master, but these packets are not sent 41 - * through the DSA master, so calculate the checksum here. 40 + * NETIF_F_HW_CSUM from the DSA conduit, but these packets are not sent 41 + * through the DSA conduit, so calculate the checksum here. 42 42 */ 43 43 if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb)) 44 44 return NULL; ··· 49 49 50 50 /* Calls felix_port_deferred_xmit in felix.c */ 51 51 kthread_init_work(&xmit_work->work, xmit_work_fn); 52 - /* Increase refcount so the kfree_skb in dsa_slave_xmit 52 + /* Increase refcount so the kfree_skb in dsa_user_xmit 53 53 * won't really free the packet. 54 54 */ 55 55 xmit_work->dp = dp; ··· 63 63 static struct sk_buff *ocelot_xmit(struct sk_buff *skb, 64 64 struct net_device *netdev) 65 65 { 66 - struct dsa_port *dp = dsa_slave_to_port(netdev); 66 + struct dsa_port *dp = dsa_user_to_port(netdev); 67 67 u16 queue_mapping = skb_get_queue_mapping(skb); 68 68 u8 pcp = netdev_txq_to_tc(netdev, queue_mapping); 69 69 u16 tx_vid = dsa_tag_8021q_standalone_vid(dp); ··· 83 83 84 84 dsa_8021q_rcv(skb, &src_port, &switch_id, NULL); 85 85 86 - skb->dev = dsa_master_find_slave(netdev, switch_id, src_port); 86 + skb->dev = dsa_conduit_find_user(netdev, switch_id, src_port); 87 87 if (!skb->dev) 88 88 return NULL; 89 89 ··· 130 130 .connect = ocelot_connect, 131 131 .disconnect = ocelot_disconnect, 132 132 .needed_headroom = VLAN_HLEN, 133 - .promisc_on_master = true, 133 + .promisc_on_conduit = true, 134 134 }; 135 135 136 136 MODULE_LICENSE("GPL v2");
+3 -3
net/dsa/tag_qca.c
··· 14 14 15 15 static struct sk_buff *qca_tag_xmit(struct sk_buff *skb, struct net_device *dev) 16 16 { 17 - struct dsa_port *dp = dsa_slave_to_port(dev); 17 + struct dsa_port *dp = dsa_user_to_port(dev); 18 18 __be16 *phdr; 19 19 u16 hdr; 20 20 ··· 78 78 /* Get source port information */ 79 79 port = FIELD_GET(QCA_HDR_RECV_SOURCE_PORT, hdr); 80 80 81 - skb->dev = dsa_master_find_slave(dev, 0, port); 81 + skb->dev = dsa_conduit_find_user(dev, 0, port); 82 82 if (!skb->dev) 83 83 return NULL; 84 84 ··· 116 116 .xmit = qca_tag_xmit, 117 117 .rcv = qca_tag_rcv, 118 118 .needed_headroom = QCA_HDR_LEN, 119 - .promisc_on_master = true, 119 + .promisc_on_conduit = true, 120 120 }; 121 121 122 122 MODULE_LICENSE("GPL");
+3 -3
net/dsa/tag_rtl4_a.c
··· 36 36 static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb, 37 37 struct net_device *dev) 38 38 { 39 - struct dsa_port *dp = dsa_slave_to_port(dev); 39 + struct dsa_port *dp = dsa_user_to_port(dev); 40 40 __be16 *p; 41 41 u8 *tag; 42 42 u16 out; ··· 97 97 } 98 98 port = protport & 0xff; 99 99 100 - skb->dev = dsa_master_find_slave(dev, 0, port); 100 + skb->dev = dsa_conduit_find_user(dev, 0, port); 101 101 if (!skb->dev) { 102 - netdev_dbg(dev, "could not find slave for port %d\n", port); 102 + netdev_dbg(dev, "could not find user for port %d\n", port); 103 103 return NULL; 104 104 } 105 105
+3 -3
net/dsa/tag_rtl8_4.c
··· 103 103 static void rtl8_4_write_tag(struct sk_buff *skb, struct net_device *dev, 104 104 void *tag) 105 105 { 106 - struct dsa_port *dp = dsa_slave_to_port(dev); 106 + struct dsa_port *dp = dsa_user_to_port(dev); 107 107 __be16 tag16[RTL8_4_TAG_LEN / 2]; 108 108 109 109 /* Set Realtek EtherType */ ··· 180 180 181 181 /* Parse TX (switch->CPU) */ 182 182 port = FIELD_GET(RTL8_4_TX, ntohs(tag16[3])); 183 - skb->dev = dsa_master_find_slave(dev, 0, port); 183 + skb->dev = dsa_conduit_find_user(dev, 0, port); 184 184 if (!skb->dev) { 185 185 dev_warn_ratelimited(&dev->dev, 186 - "could not find slave for port %d\n", 186 + "could not find user for port %d\n", 187 187 port); 188 188 return -ENOENT; 189 189 }
+2 -2
net/dsa/tag_rzn1_a5psw.c
··· 39 39 40 40 static struct sk_buff *a5psw_tag_xmit(struct sk_buff *skb, struct net_device *dev) 41 41 { 42 - struct dsa_port *dp = dsa_slave_to_port(dev); 42 + struct dsa_port *dp = dsa_user_to_port(dev); 43 43 struct a5psw_tag *ptag; 44 44 u32 data2_val; 45 45 ··· 90 90 91 91 port = FIELD_GET(A5PSW_CTRL_DATA_PORT, ntohs(tag->ctrl_data)); 92 92 93 - skb->dev = dsa_master_find_slave(dev, 0, port); 93 + skb->dev = dsa_conduit_find_user(dev, 0, port); 94 94 if (!skb->dev) 95 95 return NULL; 96 96
+15 -15
net/dsa/tag_sja1105.c
··· 157 157 return NULL; 158 158 159 159 kthread_init_work(&xmit_work->work, xmit_work_fn); 160 - /* Increase refcount so the kfree_skb in dsa_slave_xmit 160 + /* Increase refcount so the kfree_skb in dsa_user_xmit 161 161 * won't really free the packet. 162 162 */ 163 163 xmit_work->dp = dp; ··· 210 210 static struct sk_buff *sja1105_imprecise_xmit(struct sk_buff *skb, 211 211 struct net_device *netdev) 212 212 { 213 - struct dsa_port *dp = dsa_slave_to_port(netdev); 213 + struct dsa_port *dp = dsa_user_to_port(netdev); 214 214 unsigned int bridge_num = dsa_port_bridge_num_get(dp); 215 215 struct net_device *br = dsa_port_bridge_dev_get(dp); 216 216 u16 tx_vid; ··· 235 235 236 236 /* Transform untagged control packets into pvid-tagged control packets so that 237 237 * all packets sent by this tagger are VLAN-tagged and we can configure the 238 - * switch to drop untagged packets coming from the DSA master. 238 + * switch to drop untagged packets coming from the DSA conduit. 239 239 */ 240 240 static struct sk_buff *sja1105_pvid_tag_control_pkt(struct dsa_port *dp, 241 241 struct sk_buff *skb, u8 pcp) ··· 266 266 static struct sk_buff *sja1105_xmit(struct sk_buff *skb, 267 267 struct net_device *netdev) 268 268 { 269 - struct dsa_port *dp = dsa_slave_to_port(netdev); 269 + struct dsa_port *dp = dsa_user_to_port(netdev); 270 270 u16 queue_mapping = skb_get_queue_mapping(skb); 271 271 u8 pcp = netdev_txq_to_tc(netdev, queue_mapping); 272 272 u16 tx_vid = dsa_tag_8021q_standalone_vid(dp); ··· 294 294 struct net_device *netdev) 295 295 { 296 296 struct sk_buff *clone = SJA1105_SKB_CB(skb)->clone; 297 - struct dsa_port *dp = dsa_slave_to_port(netdev); 297 + struct dsa_port *dp = dsa_user_to_port(netdev); 298 298 u16 queue_mapping = skb_get_queue_mapping(skb); 299 299 u8 pcp = netdev_txq_to_tc(netdev, queue_mapping); 300 300 u16 tx_vid = dsa_tag_8021q_standalone_vid(dp); ··· 383 383 * Buffer it until we get its meta frame. 
384 384 */
385 385 if (is_link_local) {
386 - struct dsa_port *dp = dsa_slave_to_port(skb->dev);
386 + struct dsa_port *dp = dsa_user_to_port(skb->dev);
387 387 struct sja1105_tagger_private *priv;
388 388 struct dsa_switch *ds = dp->ds;
389 389
··· 396 396 if (priv->stampable_skb) {
397 397 dev_err_ratelimited(ds->dev,
398 398 "Expected meta frame, is %12llx "
399 - "in the DSA master multicast filter?\n",
399 + "in the DSA conduit multicast filter?\n",
400 400 SJA1105_META_DMAC);
401 401 kfree_skb(priv->stampable_skb);
402 402 }
··· 417 417 * frame, which serves no further purpose).
418 418 */
419 419 } else if (is_meta) {
420 - struct dsa_port *dp = dsa_slave_to_port(skb->dev);
420 + struct dsa_port *dp = dsa_user_to_port(skb->dev);
421 421 struct sja1105_tagger_private *priv;
422 422 struct dsa_switch *ds = dp->ds;
423 423 struct sk_buff *stampable_skb;
··· 550 550 }
551 551
552 552 if (source_port != -1 && switch_id != -1)
553 - skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
553 + skb->dev = dsa_conduit_find_user(netdev, switch_id, source_port);
554 554 else if (vbid >= 1)
555 555 skb->dev = dsa_tag_8021q_find_port_by_vbid(netdev, vbid);
556 556 else
··· 573 573 int switch_id = SJA1110_RX_HEADER_SWITCH_ID(rx_header);
574 574 int n_ts = SJA1110_RX_HEADER_N_TS(rx_header);
575 575 struct sja1105_tagger_data *tagger_data;
576 - struct net_device *master = skb->dev;
576 + struct net_device *conduit = skb->dev;
577 577 struct dsa_port *cpu_dp;
578 578 struct dsa_switch *ds;
579 579 int i;
580 580
581 - cpu_dp = master->dsa_ptr;
581 + cpu_dp = conduit->dsa_ptr;
582 582 ds = dsa_switch_find(cpu_dp->dst->index, switch_id);
583 583 if (!ds) {
584 584 net_err_ratelimited("%s: cannot find switch id %d\n",
585 - master->name, switch_id);
585 + conduit->name, switch_id);
586 586 return NULL;
587 587 }
588 588
··· 649 649
650 650 /* skb->len counts from skb->data, while start_of_padding
651 651 * counts from the destination MAC address. Right now skb->data
652 - * is still as set by the DSA master, so to trim away the
652 + * is still as set by the DSA conduit, so to trim away the
653 653 * padding and trailer we need to account for the fact that
654 654 * skb->data points to skb_mac_header(skb) + ETH_HLEN.
655 655 */
··· 698 698 else if (source_port == -1 || switch_id == -1)
699 699 skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid);
700 700 else
701 - skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
701 + skb->dev = dsa_conduit_find_user(netdev, switch_id, source_port);
702 702 if (!skb->dev) {
703 703 netdev_warn(netdev, "Couldn't decode source port\n");
704 704 return NULL;
··· 778 778 .disconnect = sja1105_disconnect,
779 779 .needed_headroom = VLAN_HLEN,
780 780 .flow_dissect = sja1105_flow_dissect,
781 - .promisc_on_master = true,
781 + .promisc_on_conduit = true,
782 782 };
783 783
784 784 DSA_TAG_DRIVER(sja1105_netdev_ops);
+2 -2
net/dsa/tag_trailer.c
··· 14 14 15 15 static struct sk_buff *trailer_xmit(struct sk_buff *skb, struct net_device *dev) 16 16 { 17 - struct dsa_port *dp = dsa_slave_to_port(dev); 17 + struct dsa_port *dp = dsa_user_to_port(dev); 18 18 u8 *trailer; 19 19 20 20 trailer = skb_put(skb, 4); ··· 41 41 42 42 source_port = trailer[1] & 7; 43 43 44 - skb->dev = dsa_master_find_slave(dev, 0, source_port); 44 + skb->dev = dsa_conduit_find_user(dev, 0, source_port); 45 45 if (!skb->dev) 46 46 return NULL; 47 47
+2 -2
net/dsa/tag_xrs700x.c
··· 13 13 14 14 static struct sk_buff *xrs700x_xmit(struct sk_buff *skb, struct net_device *dev) 15 15 { 16 - struct dsa_port *partner, *dp = dsa_slave_to_port(dev); 16 + struct dsa_port *partner, *dp = dsa_user_to_port(dev); 17 17 u8 *trailer; 18 18 19 19 trailer = skb_put(skb, 1); ··· 39 39 if (source_port < 0) 40 40 return NULL; 41 41 42 - skb->dev = dsa_master_find_slave(dev, 0, source_port); 42 + skb->dev = dsa_conduit_find_user(dev, 0, source_port); 43 43 if (!skb->dev) 44 44 return NULL; 45 45
+69
net/dsa/user.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */
2 +
3 + #ifndef __DSA_USER_H
4 + #define __DSA_USER_H
5 +
6 + #include <linux/if_bridge.h>
7 + #include <linux/if_vlan.h>
8 + #include <linux/list.h>
9 + #include <linux/netpoll.h>
10 + #include <linux/types.h>
11 + #include <net/dsa.h>
12 + #include <net/gro_cells.h>
13 +
14 + struct net_device;
15 + struct netlink_ext_ack;
16 +
17 + extern struct notifier_block dsa_user_switchdev_notifier;
18 + extern struct notifier_block dsa_user_switchdev_blocking_notifier;
19 +
20 + struct dsa_user_priv {
21 + /* Copy of CPU port xmit for faster access in user transmit hot path */
22 + struct sk_buff * (*xmit)(struct sk_buff *skb,
23 + struct net_device *dev);
24 +
25 + struct gro_cells gcells;
26 +
27 + /* DSA port data, such as switch, port index, etc. */
28 + struct dsa_port *dp;
29 +
30 + #ifdef CONFIG_NET_POLL_CONTROLLER
31 + struct netpoll *netpoll;
32 + #endif
33 +
34 + /* TC context */
35 + struct list_head mall_tc_list;
36 + };
37 +
38 + void dsa_user_mii_bus_init(struct dsa_switch *ds);
39 + int dsa_user_create(struct dsa_port *dp);
40 + void dsa_user_destroy(struct net_device *user_dev);
41 + int dsa_user_suspend(struct net_device *user_dev);
42 + int dsa_user_resume(struct net_device *user_dev);
43 + int dsa_user_register_notifier(void);
44 + void dsa_user_unregister_notifier(void);
45 + void dsa_user_sync_ha(struct net_device *dev);
46 + void dsa_user_unsync_ha(struct net_device *dev);
47 + void dsa_user_setup_tagger(struct net_device *user);
48 + int dsa_user_change_mtu(struct net_device *dev, int new_mtu);
49 + int dsa_user_change_conduit(struct net_device *dev, struct net_device *conduit,
50 + struct netlink_ext_ack *extack);
51 + int dsa_user_manage_vlan_filtering(struct net_device *dev,
52 + bool vlan_filtering);
53 +
54 + static inline struct dsa_port *dsa_user_to_port(const struct net_device *dev)
55 + {
56 + struct dsa_user_priv *p = netdev_priv(dev);
57 +
58 + return p->dp;
59 + }
60 +
61 + static inline struct net_device *
62 + dsa_user_to_conduit(const struct net_device *dev)
63 + {
64 + struct dsa_port *dp = dsa_user_to_port(dev);
65 +
66 + return dsa_port_to_conduit(dp);
67 + }
68 +
69 + #endif
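Nearly every tagger's `.rcv` hook in this patch follows the same renamed pattern: decode the source (switch, port) pair from the hardware tag, then map it back to a user netdev via `dsa_conduit_find_user()`, dropping the frame on a NULL result. A minimal userspace sketch of that lookup follows; the structs here (`sw_index`, `next`, the `find_user()` helper) are illustrative stand-ins, not the kernel's real definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified mirror of dsa_conduit_find_user(): scan the
 * switch tree's port list for the user port matching (device, port).
 */
enum dsa_port_type { DSA_PORT_TYPE_CPU, DSA_PORT_TYPE_USER };

struct net_device {
	char name[16];
};

struct dsa_port {
	int sw_index;              /* which switch in the tree */
	int index;                 /* port number on that switch */
	enum dsa_port_type type;
	struct net_device *user;   /* user netdev, if a user port */
	struct dsa_port *next;     /* stand-in for the kernel's list */
};

/* Return the user netdev for (device, port), or NULL so the caller
 * can drop frames whose source port cannot be decoded.
 */
static struct net_device *find_user(const struct dsa_port *ports,
				    int device, int port)
{
	for (const struct dsa_port *dp = ports; dp; dp = dp->next)
		if (dp->sw_index == device && dp->index == port &&
		    dp->type == DSA_PORT_TYPE_USER)
			return dp->user;
	return NULL;
}
```

Note that the CPU (conduit-facing) port intentionally never matches: frames reflected back with the CPU port as source, as in the ocelot tagger's comment, resolve to NULL and are dropped.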