Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from netfilter.

Current release - regressions:

- four fixes for the netdev per-instance locking

Current release - new code bugs:

- consolidate more code between the existing Rx zero-copy path and io_uring
  so that the latter doesn't miss / have to duplicate the safety checks

Previous releases - regressions:

- ipv6: fix omitted Netlink attributes when using SKIP_STATS

Previous releases - always broken:

- net: fix geneve_opt length integer overflow

- udp: fix multiple wrap arounds of sk->sk_rmem_alloc when it
approaches INT_MAX

- eth: mvpp2: add a lock to avoid corruption of the shared TCAM

- eth: airoha: fix issues with traffic QoS configuration / offload,
  and flow table offload

Misc:

- touch up the Netlink YAML specs of old families to make them usable
for user space C codegen"
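The UDP ``sk->sk_rmem_alloc`` wraparound fix mentioned above can be illustrated with a small userspace sketch (purely illustrative — the names and the simplified check below are not the kernel code, which uses atomics on a shared socket field): if the receive-memory check only looks at the counter's old value before adding, repeated charges near INT_MAX can push the sum past the limit and eventually wrap; bounding the *new* value in a wider type avoids that.

```c
#include <assert.h>
#include <limits.h>

/* Hypothetical miniature of a receive-memory accounting check.
 * naive_would_accept() models the broken pattern: it only tests the
 * counter's current value, so a large charge right below INT_MAX is
 * still let through. */
static int naive_would_accept(int rmem, int limit)
{
	return rmem <= limit;
}

/* Fixed idea: compute the would-be new value in a wider type and
 * bound it against both the socket limit and INT_MAX before storing. */
static int rmem_charge_fixed(int *rmem, int size, int limit)
{
	long long next = (long long)*rmem + size;

	if (next > limit || next > INT_MAX)
		return -1;	/* refuse the charge, no wraparound possible */
	*rmem = (int)next;
	return 0;
}
```

With the counter already near INT_MAX, the naive check still accepts a charge while the bounded check refuses anything that would overflow.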

* tag 'net-6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (56 commits)
selftests: net: amt: indicate progress in the stress test
netlink: specs: rt_route: pull the ifa- prefix out of the names
netlink: specs: rt_addr: pull the ifa- prefix out of the names
netlink: specs: rt_addr: fix get multi command name
netlink: specs: rt_addr: fix the spec format / schema failures
net: avoid false positive warnings in __net_mp_close_rxq()
net: move mp dev config validation to __net_mp_open_rxq()
net: ibmveth: make veth_pool_store stop hanging
arcnet: Add NULL check in com20020pci_probe()
ipv6: Do not consider link down nexthops in path selection
ipv6: Start path selection from the first nexthop
usbnet:fix NPE during rx_complete
net: octeontx2: Handle XDP_ABORTED and XDP invalid as XDP_DROP
net: fix geneve_opt length integer overflow
io_uring/zcrx: fix selftests w/ updated netdev Python helpers
selftests: net: use netdevsim in netns test
docs: net: document netdev notifier expectations
net: dummy: request ops lock
netdevsim: add dummy device notifiers
net: rename rtnl_net_debug to lock_debug
...

+865 -443
+4
CREDITS
@@ -3670 +3670 @@
 S: Kingston, Ontario
 S: Canada K7L 2P4
 
+N: Pravin B Shelar
+E: pshelar@ovn.org
+D: Open vSwitch maintenance and contributions
+
 N: John Shifflett
 E: john@geolog.com
 E: jshiffle@netcom.com
+22 -20
Documentation/netlink/specs/rt_addr.yaml
@@ -78 +78 @@
 attribute-sets:
   -
     name: addr-attrs
+    name-prefix: ifa-
     attributes:
       -
-        name: ifa-address
+        name: address
         type: binary
         display-hint: ipv4
       -
-        name: ifa-local
+        name: local
         type: binary
         display-hint: ipv4
       -
-        name: ifa-label
+        name: label
         type: string
       -
-        name: ifa-broadcast
+        name: broadcast
         type: binary
         display-hint: ipv4
       -
-        name: ifa-anycast
+        name: anycast
         type: binary
       -
-        name: ifa-cacheinfo
+        name: cacheinfo
         type: binary
         struct: ifa-cacheinfo
       -
-        name: ifa-multicast
+        name: multicast
         type: binary
       -
-        name: ifa-flags
+        name: flags
         type: u32
         enum: ifa-flags
         enum-as-flags: true
       -
-        name: ifa-rt-priority
+        name: rt-priority
         type: u32
       -
-        name: ifa-target-netnsid
+        name: target-netnsid
         type: binary
       -
-        name: ifa-proto
+        name: proto
         type: u8
 
@@ -138 +137 @@
           - ifa-prefixlen
           - ifa-scope
           - ifa-index
-          - ifa-address
-          - ifa-label
-          - ifa-local
-          - ifa-cacheinfo
+          - address
+          - label
+          - local
+          - cacheinfo
   -
     name: deladdr
     doc: Remove address
@@ -155 +154 @@
           - ifa-prefixlen
           - ifa-scope
           - ifa-index
-          - ifa-address
-          - ifa-local
+          - address
+          - local
   -
     name: getaddr
     doc: Dump address information.
@@ -170 +169 @@
         value: 20
         attributes: *ifaddr-all
   -
-    name: getmaddrs
+    name: getmulticast
     doc: Get / dump IPv4/IPv6 multicast addresses.
     attribute-set: addr-attrs
     fixed-header: ifaddrmsg
@@ -183 +182 @@
       reply:
         value: 58
         attributes: &mcaddr-attrs
-          - ifa-multicast
-          - ifa-cacheinfo
+          - multicast
+          - cacheinfo
     dump:
       request:
         value: 58
+        attributes:
           - ifa-family
       reply:
         value: 58
+91 -89
Documentation/netlink/specs/rt_route.yaml
@@ -80 +80 @@
 attribute-sets:
   -
     name: route-attrs
+    name-prefix: rta-
     attributes:
       -
-        name: rta-dst
+        name: dst
         type: binary
         display-hint: ipv4
       -
-        name: rta-src
+        name: src
         type: binary
         display-hint: ipv4
       -
-        name: rta-iif
+        name: iif
         type: u32
       -
-        name: rta-oif
+        name: oif
         type: u32
       -
-        name: rta-gateway
+        name: gateway
         type: binary
         display-hint: ipv4
       -
-        name: rta-priority
+        name: priority
         type: u32
       -
-        name: rta-prefsrc
+        name: prefsrc
         type: binary
         display-hint: ipv4
       -
-        name: rta-metrics
+        name: metrics
         type: nest
-        nested-attributes: rta-metrics
+        nested-attributes: metrics
       -
-        name: rta-multipath
+        name: multipath
         type: binary
       -
-        name: rta-protoinfo # not used
+        name: protoinfo # not used
         type: binary
       -
-        name: rta-flow
+        name: flow
         type: u32
       -
-        name: rta-cacheinfo
+        name: cacheinfo
         type: binary
         struct: rta-cacheinfo
       -
-        name: rta-session # not used
+        name: session # not used
         type: binary
       -
-        name: rta-mp-algo # not used
+        name: mp-algo # not used
         type: binary
       -
-        name: rta-table
+        name: table
         type: u32
       -
-        name: rta-mark
+        name: mark
         type: u32
       -
-        name: rta-mfc-stats
+        name: mfc-stats
         type: binary
       -
-        name: rta-via
+        name: via
         type: binary
       -
-        name: rta-newdst
+        name: newdst
         type: binary
       -
-        name: rta-pref
+        name: pref
         type: u8
       -
-        name: rta-encap-type
+        name: encap-type
         type: u16
       -
-        name: rta-encap
+        name: encap
         type: binary # tunnel specific nest
       -
-        name: rta-expires
+        name: expires
         type: u32
       -
-        name: rta-pad
+        name: pad
         type: binary
       -
-        name: rta-uid
+        name: uid
         type: u32
       -
-        name: rta-ttl-propagate
+        name: ttl-propagate
         type: u8
       -
-        name: rta-ip-proto
+        name: ip-proto
         type: u8
       -
-        name: rta-sport
+        name: sport
         type: u16
       -
-        name: rta-dport
+        name: dport
         type: u16
       -
-        name: rta-nh-id
+        name: nh-id
         type: u32
       -
-        name: rta-flowlabel
+        name: flowlabel
         type: u32
         byte-order: big-endian
         display-hint: hex
   -
-    name: rta-metrics
+    name: metrics
+    name-prefix: rtax-
     attributes:
       -
-        name: rtax-unspec
+        name: unspec
         type: unused
         value: 0
       -
-        name: rtax-lock
+        name: lock
         type: u32
       -
-        name: rtax-mtu
+        name: mtu
         type: u32
       -
-        name: rtax-window
+        name: window
         type: u32
       -
-        name: rtax-rtt
+        name: rtt
         type: u32
       -
-        name: rtax-rttvar
+        name: rttvar
         type: u32
       -
-        name: rtax-ssthresh
+        name: ssthresh
         type: u32
       -
-        name: rtax-cwnd
+        name: cwnd
         type: u32
       -
-        name: rtax-advmss
+        name: advmss
         type: u32
       -
-        name: rtax-reordering
+        name: reordering
         type: u32
       -
-        name: rtax-hoplimit
+        name: hoplimit
         type: u32
       -
-        name: rtax-initcwnd
+        name: initcwnd
         type: u32
       -
-        name: rtax-features
+        name: features
         type: u32
       -
-        name: rtax-rto-min
+        name: rto-min
         type: u32
       -
-        name: rtax-initrwnd
+        name: initrwnd
         type: u32
       -
-        name: rtax-quickack
+        name: quickack
         type: u32
       -
-        name: rtax-cc-algo
+        name: cc-algo
         type: string
       -
-        name: rtax-fastopen-no-cookie
+        name: fastopen-no-cookie
         type: u32
 
 operations:
@@ -256 +254 @@
         value: 26
         attributes:
           - rtm-family
-          - rta-src
+          - src
           - rtm-src-len
-          - rta-dst
+          - dst
           - rtm-dst-len
-          - rta-iif
-          - rta-oif
-          - rta-ip-proto
-          - rta-sport
-          - rta-dport
-          - rta-mark
-          - rta-uid
-          - rta-flowlabel
+          - iif
+          - oif
+          - ip-proto
+          - sport
+          - dport
+          - mark
+          - uid
+          - flowlabel
       reply:
         value: 24
         attributes: &all-route-attrs
@@ -280 +278 @@
           - rtm-scope
           - rtm-type
           - rtm-flags
-          - rta-dst
-          - rta-src
-          - rta-iif
-          - rta-oif
-          - rta-gateway
-          - rta-priority
-          - rta-prefsrc
-          - rta-metrics
-          - rta-multipath
-          - rta-flow
-          - rta-cacheinfo
-          - rta-table
-          - rta-mark
-          - rta-mfc-stats
-          - rta-via
-          - rta-newdst
-          - rta-pref
-          - rta-encap-type
-          - rta-encap
-          - rta-expires
-          - rta-pad
-          - rta-uid
-          - rta-ttl-propagate
-          - rta-ip-proto
-          - rta-sport
-          - rta-dport
-          - rta-nh-id
-          - rta-flowlabel
+          - dst
+          - src
+          - iif
+          - oif
+          - gateway
+          - priority
+          - prefsrc
+          - metrics
+          - multipath
+          - flow
+          - cacheinfo
+          - table
+          - mark
+          - mfc-stats
+          - via
+          - newdst
+          - pref
+          - encap-type
+          - encap
+          - expires
+          - pad
+          - uid
+          - ttl-propagate
+          - ip-proto
+          - sport
+          - dport
+          - nh-id
+          - flowlabel
     dump:
       request:
         value: 26
+23
Documentation/networking/netdevices.rst
@@ -343 +343 @@
 acquiring the instance lock themselves, while the ``netif_xxx`` functions
 assume that the driver has already acquired the instance lock.
 
+Notifiers and netdev instance lock
+==================================
+
+For device drivers that implement shaping or queue management APIs,
+some of the notifiers (``enum netdev_cmd``) are running under the netdev
+instance lock.
+
+For devices with locked ops, currently only the following notifiers are
+running under the lock:
+  * ``NETDEV_REGISTER``
+  * ``NETDEV_UP``
+
+The following notifiers are running without the lock:
+  * ``NETDEV_UNREGISTER``
+
+There are no clear expectations for the remaining notifiers. Notifiers not on
+the list may run with or without the instance lock, potentially even invoking
+the same notifier type with and without the lock from different code paths.
+The goal is to eventually ensure that all (or most, with a few documented
+exceptions) notifiers run under the instance lock. Please extend this
+documentation whenever you make explicit assumption about lock being held
+from a notifier.
+
 NETDEV_INTERNAL symbol namespace
 ================================
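The documentation change above describes a contract where some notifier events are delivered with a per-device lock held (``NETDEV_REGISTER``, ``NETDEV_UP``) and others without it (``NETDEV_UNREGISTER``). A hedged userspace model of that contract — the names, the shadow ``locked`` flag, and the mutex are illustrative only, not the kernel's notifier or netdev-lock API:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Illustrative event set mirroring the documented split. */
enum ev { EV_REGISTER, EV_UP, EV_UNREGISTER, EV_COUNT };

struct dev {
	pthread_mutex_t lock;
	bool locked;	/* shadow flag so the callback can observe lock state */
};

static bool seen_locked[EV_COUNT];

/* A "notifier" callback: records whether it ran under the device lock. */
static void notifier_cb(struct dev *d, enum ev e)
{
	seen_locked[e] = d->locked;
}

/* Deliver an event, taking the device lock only for the events the
 * documentation says run under the instance lock. */
static void notify(struct dev *d, enum ev e)
{
	bool under_lock = (e == EV_REGISTER || e == EV_UP);

	if (under_lock) {
		pthread_mutex_lock(&d->lock);
		d->locked = true;
	}
	notifier_cb(d, e);
	if (under_lock) {
		d->locked = false;
		pthread_mutex_unlock(&d->lock);
	}
}
```

The point of the documented list is exactly what the model shows: a notifier callback may only assume the lock is held for events on the "under the lock" list.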
+6 -4
MAINTAINERS
@@ -18131 +18131 @@
 F: drivers/irqchip/irq-or1k-*
 
 OPENVSWITCH
-M: Pravin B Shelar <pshelar@ovn.org>
+M: Aaron Conole <aconole@redhat.com>
+M: Eelco Chaudron <echaudro@redhat.com>
+M: Ilya Maximets <i.maximets@ovn.org>
 L: netdev@vger.kernel.org
 L: dev@openvswitch.org
 S: Maintained
@@ -19905 +19903 @@
 F: drivers/i2c/busses/i2c-qcom-geni.c
 
 QUALCOMM I2C CCI DRIVER
-M: Loic Poulain <loic.poulain@linaro.org>
+M: Loic Poulain <loic.poulain@oss.qualcomm.com>
 M: Robert Foss <rfoss@kernel.org>
 L: linux-i2c@vger.kernel.org
 L: linux-arm-msm@vger.kernel.org
@@ -20038 +20036 @@
 F: drivers/media/platform/qcom/venus/
 
 QUALCOMM WCN36XX WIRELESS DRIVER
-M: Loic Poulain <loic.poulain@linaro.org>
+M: Loic Poulain <loic.poulain@oss.qualcomm.com>
 L: wcn36xx@lists.infradead.org
 S: Supported
 W: https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
@@ -26077 +26075 @@
 F: kernel/workqueue_internal.h
 
 WWAN DRIVERS
-M: Loic Poulain <loic.poulain@linaro.org>
+M: Loic Poulain <loic.poulain@oss.qualcomm.com>
 M: Sergey Ryazanov <ryazanov.s.a@gmail.com>
 R: Johannes Berg <johannes@sipsolutions.net>
 L: netdev@vger.kernel.org
+16 -1
drivers/net/arcnet/com20020-pci.c
@@ -251 +251 @@
 	card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
 						GFP_KERNEL, "arc%d-%d-tx",
 						dev->dev_id, i);
+	if (!card->tx_led.default_trigger) {
+		ret = -ENOMEM;
+		goto err_free_arcdev;
+	}
 	card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
 					"pci:green:tx:%d-%d",
 					dev->dev_id, i);
-
+	if (!card->tx_led.name) {
+		ret = -ENOMEM;
+		goto err_free_arcdev;
+	}
 	card->tx_led.dev = &dev->dev;
 	card->recon_led.brightness_set = led_recon_set;
 	card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
 						GFP_KERNEL, "arc%d-%d-recon",
 						dev->dev_id, i);
+	if (!card->recon_led.default_trigger) {
+		ret = -ENOMEM;
+		goto err_free_arcdev;
+	}
 	card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
 					"pci:red:recon:%d-%d",
 					dev->dev_id, i);
+	if (!card->recon_led.name) {
+		ret = -ENOMEM;
+		goto err_free_arcdev;
+	}
 	card->recon_led.dev = &dev->dev;
 
 	ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
+7 -4
drivers/net/dsa/mv88e6xxx/chip.c
@@ -7350 +7350 @@
 	err = mv88e6xxx_switch_reset(chip);
 	mv88e6xxx_reg_unlock(chip);
 	if (err)
-		goto out;
+		goto out_phy;
 
 	if (np) {
 		chip->irq = of_irq_get(np, 0);
 		if (chip->irq == -EPROBE_DEFER) {
 			err = chip->irq;
-			goto out;
+			goto out_phy;
 		}
 	}
@@ -7375 +7375 @@
 	mv88e6xxx_reg_unlock(chip);
 
 	if (err)
-		goto out;
+		goto out_phy;
 
 	if (chip->info->g2_irqs > 0) {
 		err = mv88e6xxx_g2_irq_setup(chip);
@@ -7409 +7409 @@
 		mv88e6xxx_g1_irq_free(chip);
 	else
 		mv88e6xxx_irq_poll_free(chip);
+out_phy:
+	mv88e6xxx_phy_destroy(chip);
 out:
 	if (pdata)
 		dev_put(pdata->netdev);
@@ -7433 +7431 @@
 		mv88e6xxx_ptp_free(chip);
 	}
 
-	mv88e6xxx_phy_destroy(chip);
 	mv88e6xxx_unregister_switch(chip);
 
 	mv88e6xxx_g1_vtu_prob_irq_free(chip);
@@ -7445 +7444 @@
 		mv88e6xxx_g1_irq_free(chip);
 	else
 		mv88e6xxx_irq_poll_free(chip);
+
+	mv88e6xxx_phy_destroy(chip);
 }
 
 static void mv88e6xxx_shutdown(struct mdio_device *mdiodev)
+3
drivers/net/dsa/mv88e6xxx/phy.c
@@ -229 +229 @@
 
 static void mv88e6xxx_phy_ppu_state_destroy(struct mv88e6xxx_chip *chip)
 {
+	mutex_lock(&chip->ppu_mutex);
 	del_timer_sync(&chip->ppu_timer);
+	cancel_work_sync(&chip->ppu_work);
+	mutex_unlock(&chip->ppu_mutex);
 }
 
 int mv88e6185_phy_ppu_read(struct mv88e6xxx_chip *chip, struct mii_bus *bus,
+1
drivers/net/dummy.c
@@ -105 +105 @@
 	dev->netdev_ops = &dummy_netdev_ops;
 	dev->ethtool_ops = &dummy_ethtool_ops;
 	dev->needs_free_netdev = true;
+	dev->request_ops_lock = true;
 
 	/* Fill in device structure with ethernet-generic values. */
 	dev->flags |= IFF_NOARP;
+22 -9
drivers/net/ethernet/airoha/airoha_eth.c
@@ -2028 +2028 @@
 	struct tc_ets_qopt_offload_replace_params *p = &opt->replace_params;
 	enum tx_sched_mode mode = TC_SCH_SP;
 	u16 w[AIROHA_NUM_QOS_QUEUES] = {};
-	int i, nstrict = 0, nwrr, qidx;
+	int i, nstrict = 0;
 
 	if (p->bands > AIROHA_NUM_QOS_QUEUES)
 		return -EINVAL;
@@ -2046 +2046 @@
 	 * lowest priorities with respect to SP ones.
 	 * e.g: WRR0, WRR1, .., WRRm, SP0, SP1, .., SPn
 	 */
-	nwrr = p->bands - nstrict;
-	qidx = nstrict && nwrr ? nstrict : 0;
-	for (i = 1; i <= p->bands; i++) {
-		if (p->priomap[i % AIROHA_NUM_QOS_QUEUES] != qidx)
+	for (i = 0; i < nstrict; i++) {
+		if (p->priomap[p->bands - i - 1] != i)
 			return -EINVAL;
-
-		qidx = i == nwrr ? 0 : qidx + 1;
 	}
 
-	for (i = 0; i < nwrr; i++)
+	for (i = 0; i < p->bands - nstrict; i++) {
+		if (p->priomap[i] != nstrict + i)
+			return -EINVAL;
+
 		w[i] = p->weights[nstrict + i];
+	}
 
 	if (!nstrict)
 		mode = TC_SCH_WRR8;
@@ -2358 +2358 @@
 		return -EINVAL;
 	}
 
-	opt->qid = channel;
+	opt->qid = AIROHA_NUM_TX_RING + channel;
 
 	return 0;
 }
@@ -2452 +2452 @@
 
 		metadata_dst_free(port->dsa_meta[i]);
 	}
+}
+
+bool airoha_is_valid_gdm_port(struct airoha_eth *eth,
+			      struct airoha_gdm_port *port)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(eth->ports); i++) {
+		if (eth->ports[i] == port)
+			return true;
+	}
+
+	return false;
 }
 
 static int airoha_alloc_gdm_port(struct airoha_eth *eth,
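The reworked ETS validation in airoha_eth.c accepts a priomap only when the strict-priority bands occupy the highest priorities and the WRR bands the lowest (WRR0..WRRm-1, then SP0..SPn-1). A hedged userspace re-statement of that check — function name, constant, and types are illustrative, not the driver's:

```c
#include <assert.h>

#define NUM_QOS_QUEUES 8	/* stand-in for AIROHA_NUM_QOS_QUEUES */

/* Returns 0 iff the priomap is consistent with "m WRR bands followed by
 * n strict bands": strict band i must map to priority bands-1-i, and
 * WRR band i must map to priority nstrict+i (mirrors the fixed loops). */
static int ets_validate_priomap(const unsigned int *priomap,
				int bands, int nstrict)
{
	int i;

	if (bands > NUM_QOS_QUEUES || nstrict > bands || nstrict < 0)
		return -1;

	for (i = 0; i < nstrict; i++)
		if (priomap[bands - i - 1] != (unsigned int)i)
			return -1;

	for (i = 0; i < bands - nstrict; i++)
		if (priomap[i] != (unsigned int)(nstrict + i))
			return -1;

	return 0;
}
```

For example, with 4 bands and 2 strict bands the only accepted map is {2, 3, 1, 0}: WRR bands 0 and 1 take priorities 2 and 3, strict bands take 1 and 0.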
+3
drivers/net/ethernet/airoha/airoha_eth.h
@@ -532 +532 @@
 #define airoha_qdma_clear(qdma, offset, val)			\
 	airoha_rmw((qdma)->regs, (offset), (val), 0)
 
+bool airoha_is_valid_gdm_port(struct airoha_eth *eth,
+			      struct airoha_gdm_port *port);
+
 void airoha_ppe_check_skb(struct airoha_ppe *ppe, u16 hash);
 int airoha_ppe_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
 				 void *cb_priv);
+6 -2
drivers/net/ethernet/airoha/airoha_ppe.c
@@ -197 +197 @@
 #endif
 }
 
-static int airoha_ppe_foe_entry_prepare(struct airoha_foe_entry *hwe,
+static int airoha_ppe_foe_entry_prepare(struct airoha_eth *eth,
+					struct airoha_foe_entry *hwe,
 					struct net_device *dev, int type,
 					struct airoha_flow_data *data,
 					int l4proto)
@@ -225 +224 @@
 	if (dev) {
 		struct airoha_gdm_port *port = netdev_priv(dev);
 		u8 pse_port;
+
+		if (!airoha_is_valid_gdm_port(eth, port))
+			return -EINVAL;
 
 		if (dsa_port >= 0)
 			pse_port = port->id == 4 ? FE_PSE_PORT_GDM4 : port->id;
@@ -637 +633 @@
 	    !is_valid_ether_addr(data.eth.h_dest))
 		return -EINVAL;
 
-	err = airoha_ppe_foe_entry_prepare(&hwe, odev, offload_type,
+	err = airoha_ppe_foe_entry_prepare(eth, &hwe, odev, offload_type,
 					   &data, l4proto);
 	if (err)
 		return err;
+3 -3
drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -15909 +15909 @@
 		goto err_reset;
 	}
 
-	napi_enable(&bnapi->napi);
+	napi_enable_locked(&bnapi->napi);
 	bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons);
 
 	for (i = 0; i < bp->nr_vnics; i++) {
@@ -15931 +15931 @@
 err_reset:
 	netdev_err(bp->dev, "Unexpected HWRM error during queue start rc: %d\n",
 		   rc);
-	napi_enable(&bnapi->napi);
+	napi_enable_locked(&bnapi->napi);
 	bnxt_db_nq_arm(bp, &cpr->cp_db, cpr->cp_raw_cons);
 	bnxt_reset_task(bp, true);
 	return rc;
@@ -15971 +15971 @@
 	 * completion is handled in NAPI to guarantee no more DMA on that ring
 	 * after seeing the completion.
 	 */
-	napi_disable(&bnapi->napi);
+	napi_disable_locked(&bnapi->napi);
 
 	if (bp->tph_mode) {
 		bnxt_hwrm_cp_ring_free(bp, rxr->rx_cpr);
+3 -1
drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -392 +392 @@
 			 */
 			data[i++] = 0;
 			data[i++] = 0;
-			data[i++] = tx->dqo_tx.tail - tx->dqo_tx.head;
+			data[i++] =
+				(tx->dqo_tx.tail - tx->dqo_tx.head) &
+				tx->mask;
 		}
 		do {
 			start =
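The gve change above masks the tail/head difference when reporting ring occupancy. The reason is a general ring-buffer property worth spelling out (a hedged sketch; the helper below is illustrative, not the driver's code): with a power-of-two ring, tail and head indices can wrap, so the raw unsigned difference can be a huge value; ANDing with `ring_size - 1` folds it back into the valid range.

```c
#include <assert.h>
#include <stdint.h>

/* Occupancy of a power-of-two ring tracked by wrapping u32 indices.
 * mask must be ring_size - 1 with ring_size a power of two; unsigned
 * subtraction makes the result correct even when tail has wrapped
 * around past head. */
static uint32_t ring_pending(uint32_t tail, uint32_t head, uint32_t mask)
{
	return (tail - head) & mask;
}
```

Without the mask, a wrapped tail would make `tail - head` underflow to a value near `UINT32_MAX`, which is exactly the bogus statistic the fix removes.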
+27 -12
drivers/net/ethernet/ibm/ibmveth.c
@@ -1802 +1802 @@
 	long value = simple_strtol(buf, NULL, 10);
 	long rc;
 
+	rtnl_lock();
+
 	if (attr == &veth_active_attr) {
 		if (value && !pool->active) {
 			if (netif_running(netdev)) {
 				if (ibmveth_alloc_buffer_pool(pool)) {
 					netdev_err(netdev,
 						   "unable to alloc pool\n");
-					return -ENOMEM;
+					rc = -ENOMEM;
+					goto unlock_err;
 				}
 				pool->active = 1;
 				ibmveth_close(netdev);
-				if ((rc = ibmveth_open(netdev)))
-					return rc;
+				rc = ibmveth_open(netdev);
+				if (rc)
+					goto unlock_err;
 			} else {
 				pool->active = 1;
 			}
@@ -1837 +1833 @@
 
 			if (i == IBMVETH_NUM_BUFF_POOLS) {
 				netdev_err(netdev, "no active pool >= MTU\n");
-				return -EPERM;
+				rc = -EPERM;
+				goto unlock_err;
 			}
 
 			if (netif_running(netdev)) {
 				ibmveth_close(netdev);
 				pool->active = 0;
-				if ((rc = ibmveth_open(netdev)))
-					return rc;
+				rc = ibmveth_open(netdev);
+				if (rc)
+					goto unlock_err;
 			}
 			pool->active = 0;
 		}
 	} else if (attr == &veth_num_attr) {
 		if (value <= 0 || value > IBMVETH_MAX_POOL_COUNT) {
-			return -EINVAL;
+			rc = -EINVAL;
+			goto unlock_err;
 		} else {
 			if (netif_running(netdev)) {
 				ibmveth_close(netdev);
 				pool->size = value;
-				if ((rc = ibmveth_open(netdev)))
-					return rc;
+				rc = ibmveth_open(netdev);
+				if (rc)
+					goto unlock_err;
 			} else {
 				pool->size = value;
 			}
 		}
 	} else if (attr == &veth_size_attr) {
 		if (value <= IBMVETH_BUFF_OH || value > IBMVETH_MAX_BUF_SIZE) {
-			return -EINVAL;
+			rc = -EINVAL;
+			goto unlock_err;
 		} else {
 			if (netif_running(netdev)) {
 				ibmveth_close(netdev);
 				pool->buff_size = value;
-				if ((rc = ibmveth_open(netdev)))
-					return rc;
+				rc = ibmveth_open(netdev);
+				if (rc)
+					goto unlock_err;
 			} else {
 				pool->buff_size = value;
 			}
 		}
 	}
+	rtnl_unlock();
 
 	/* kick the interrupt handler to allocate/deallocate pools */
 	ibmveth_interrupt(netdev->irq, netdev);
 	return count;
+
+unlock_err:
+	rtnl_unlock();
+	return rc;
 }
+3
drivers/net/ethernet/intel/e1000e/defines.h
@@ -803 +803 @@
 /* SerDes Control */
 #define E1000_GEN_POLL_TIMEOUT          640
 
+#define E1000_FEXTNVM12_PHYPD_CTRL_MASK	0x00C00000
+#define E1000_FEXTNVM12_PHYPD_CTRL_P1	0x00800000
+
 #endif /* _E1000_DEFINES_H_ */
+75 -5
drivers/net/ethernet/intel/e1000e/ich8lan.c
@@ -286 +286 @@
 }
 
 /**
+ * e1000_reconfigure_k1_exit_timeout - reconfigure K1 exit timeout to
+ * align to MTP and later platform requirements.
+ * @hw: pointer to the HW structure
+ *
+ * Context: PHY semaphore must be held by caller.
+ * Return: 0 on success, negative on failure
+ */
+static s32 e1000_reconfigure_k1_exit_timeout(struct e1000_hw *hw)
+{
+	u16 phy_timeout;
+	u32 fextnvm12;
+	s32 ret_val;
+
+	if (hw->mac.type < e1000_pch_mtp)
+		return 0;
+
+	/* Change Kumeran K1 power down state from P0s to P1 */
+	fextnvm12 = er32(FEXTNVM12);
+	fextnvm12 &= ~E1000_FEXTNVM12_PHYPD_CTRL_MASK;
+	fextnvm12 |= E1000_FEXTNVM12_PHYPD_CTRL_P1;
+	ew32(FEXTNVM12, fextnvm12);
+
+	/* Wait for the interface the settle */
+	usleep_range(1000, 1100);
+
+	/* Change K1 exit timeout */
+	ret_val = e1e_rphy_locked(hw, I217_PHY_TIMEOUTS_REG,
+				  &phy_timeout);
+	if (ret_val)
+		return ret_val;
+
+	phy_timeout &= ~I217_PHY_TIMEOUTS_K1_EXIT_TO_MASK;
+	phy_timeout |= 0xF00;
+
+	return e1e_wphy_locked(hw, I217_PHY_TIMEOUTS_REG,
+			       phy_timeout);
+}
+
+/**
  * e1000_init_phy_workarounds_pchlan - PHY initialization workarounds
  * @hw: pointer to the HW structure
  *
@@ -366 +327 @@
 	 * LANPHYPC Value bit to force the interconnect to PCIe mode.
 	 */
 	switch (hw->mac.type) {
+	case e1000_pch_mtp:
+	case e1000_pch_lnp:
+	case e1000_pch_ptp:
+	case e1000_pch_nvp:
+		/* At this point the PHY might be inaccessible so don't
+		 * propagate the failure
+		 */
+		if (e1000_reconfigure_k1_exit_timeout(hw))
+			e_dbg("Failed to reconfigure K1 exit timeout\n");
+
+		fallthrough;
 	case e1000_pch_lpt:
 	case e1000_pch_spt:
 	case e1000_pch_cnp:
 	case e1000_pch_tgp:
 	case e1000_pch_adp:
-	case e1000_pch_mtp:
-	case e1000_pch_lnp:
-	case e1000_pch_ptp:
-	case e1000_pch_nvp:
 		if (e1000_phy_is_accessible_pchlan(hw))
 			break;
@@ -465 +419 @@
 		 * the PHY is in.
 		 */
 		ret_val = hw->phy.ops.check_reset_block(hw);
-		if (ret_val)
+		if (ret_val) {
 			e_err("ME blocked access to PHY after reset\n");
+			goto out;
+		}
+
+		if (hw->mac.type >= e1000_pch_mtp) {
+			ret_val = hw->phy.ops.acquire(hw);
+			if (ret_val) {
+				e_err("Failed to reconfigure K1 exit timeout\n");
+				goto out;
+			}
+			ret_val = e1000_reconfigure_k1_exit_timeout(hw);
+			hw->phy.ops.release(hw);
+		}
 	}
 
 out:
@@ -4946 +4888 @@
 	u16 i;
 
 	e1000_initialize_hw_bits_ich8lan(hw);
+	if (hw->mac.type >= e1000_pch_mtp) {
+		ret_val = hw->phy.ops.acquire(hw);
+		if (ret_val)
+			return ret_val;
+
+		ret_val = e1000_reconfigure_k1_exit_timeout(hw);
+		hw->phy.ops.release(hw);
+		if (ret_val) {
+			e_dbg("Error failed to reconfigure K1 exit timeout\n");
+			return ret_val;
+		}
+	}
 
 	/* Initialize identification LED */
 	ret_val = mac->ops.id_led_init(hw);
+4
drivers/net/ethernet/intel/e1000e/ich8lan.h
@@ -219 +219 @@
 #define I217_PLL_CLOCK_GATE_REG		PHY_REG(772, 28)
 #define I217_PLL_CLOCK_GATE_MASK	0x07FF
 
+/* PHY Timeouts */
+#define I217_PHY_TIMEOUTS_REG			PHY_REG(770, 21)
+#define I217_PHY_TIMEOUTS_K1_EXIT_TO_MASK	0x0FC0
+
 #define SW_FLAG_TIMEOUT	1000 /* SW Semaphore flag timeout in ms */
 
 /* Inband Control */
+5 -1
drivers/net/ethernet/intel/idpf/idpf_main.c
@@ -87 +87 @@
  */
 static void idpf_shutdown(struct pci_dev *pdev)
 {
-	idpf_remove(pdev);
+	struct idpf_adapter *adapter = pci_get_drvdata(pdev);
+
+	cancel_delayed_work_sync(&adapter->vc_event_task);
+	idpf_vc_core_deinit(adapter);
+	idpf_deinit_dflt_mbx(adapter);
 
 	if (system_state == SYSTEM_POWER_OFF)
 		pci_set_power_state(pdev, PCI_D3hot);
-2
drivers/net/ethernet/intel/igc/igc.h
@@ -337 +337 @@
 	struct igc_led_classdev *leds;
 };
 
-void igc_set_queue_napi(struct igc_adapter *adapter, int q_idx,
-			struct napi_struct *napi);
 void igc_up(struct igc_adapter *adapter);
 void igc_down(struct igc_adapter *adapter);
 int igc_open(struct net_device *netdev);
+3 -3
drivers/net/ethernet/intel/igc/igc_main.c
@@ -3042 +3042 @@
 	 * descriptors. Therefore, to be safe, we always ensure we have at least
 	 * 4 descriptors available.
 	 */
-	while (xsk_tx_peek_desc(pool, &xdp_desc) && budget >= 4) {
+	while (budget >= 4 && xsk_tx_peek_desc(pool, &xdp_desc)) {
 		struct igc_metadata_request meta_req;
 		struct xsk_tx_metadata *meta = NULL;
 		struct igc_tx_buffer *bi;
@@ -5022 +5022 @@
 	return 0;
 }
 
-void igc_set_queue_napi(struct igc_adapter *adapter, int vector,
-			struct napi_struct *napi)
+static void igc_set_queue_napi(struct igc_adapter *adapter, int vector,
+			       struct napi_struct *napi)
 {
 	struct igc_q_vector *q_vector = adapter->q_vector[vector];
-2
drivers/net/ethernet/intel/igc/igc_xdp.c
@@ -97 +97 @@
 		napi_disable(napi);
 	}
 
-	igc_set_queue_napi(adapter, queue_id, NULL);
 	set_bit(IGC_RING_FLAG_AF_XDP_ZC, &rx_ring->flags);
 	set_bit(IGC_RING_FLAG_AF_XDP_ZC, &tx_ring->flags);
@@ -146 +147 @@
 	xsk_pool_dma_unmap(pool, IGC_RX_DMA_ATTR);
 	clear_bit(IGC_RING_FLAG_AF_XDP_ZC, &rx_ring->flags);
 	clear_bit(IGC_RING_FLAG_AF_XDP_ZC, &tx_ring->flags);
-	igc_set_queue_napi(adapter, queue_id, napi);
 
 	if (needs_reset) {
 		napi_enable(napi);
+3 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_e610.c
@@ -1453 +1453 @@
 		hw->link.link_info.phy_type_low = 0;
 	} else {
 		highest_bit = fls64(le64_to_cpu(pcaps.phy_type_low));
-		if (highest_bit)
+		if (highest_bit) {
 			hw->link.link_info.phy_type_low =
 				BIT_ULL(highest_bit - 1);
+			hw->link.link_info.phy_type_high = 0;
+		}
 	}
 }
+3
drivers/net/ethernet/marvell/mvpp2/mvpp2.h
@@ -1113 +1113 @@
 
 	/* Spinlocks for CM3 shared memory configuration */
 	spinlock_t mss_spinlock;
+
+	/* Spinlock for shared PRS parser memory and shadow table */
+	spinlock_t prs_spinlock;
 };
 
 struct mvpp2_pcpu_stats {
+2 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -7723 +7723 @@
 	if (mvpp2_read(priv, MVPP2_VER_ID_REG) == MVPP2_VER_PP23)
 		priv->hw_version = MVPP23;
 
-	/* Init mss lock */
+	/* Init locks for shared packet processor resources */
 	spin_lock_init(&priv->mss_spinlock);
+	spin_lock_init(&priv->prs_spinlock);
 
 	/* Initialize network controller */
 	err = mvpp2_init(pdev, priv);
+135 -66
drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
@@ -23 +23 @@
 {
 	int i;
 
+	lockdep_assert_held(&priv->prs_spinlock);
+
 	if (pe->index > MVPP2_PRS_TCAM_SRAM_SIZE - 1)
 		return -EINVAL;
@@ -45 +43 @@
 }
 
 /* Initialize tcam entry from hw */
-int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe,
-			   int tid)
+static int __mvpp2_prs_init_from_hw(struct mvpp2 *priv,
+				    struct mvpp2_prs_entry *pe, int tid)
 {
 	int i;
+
+	lockdep_assert_held(&priv->prs_spinlock);
 
 	if (tid > MVPP2_PRS_TCAM_SRAM_SIZE - 1)
 		return -EINVAL;
@@ -75 +71 @@
 		pe->sram[i] = mvpp2_read(priv, MVPP2_PRS_SRAM_DATA_REG(i));
 
 	return 0;
+}
+
+int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe,
+			   int tid)
+{
+	int err;
+
+	spin_lock_bh(&priv->prs_spinlock);
+	err = __mvpp2_prs_init_from_hw(priv, pe, tid);
+	spin_unlock_bh(&priv->prs_spinlock);
+
+	return err;
 }
 
 /* Invalidate tcam hw entry */
@@ -390 +374 @@
 		    priv->prs_shadow[tid].lu != MVPP2_PRS_LU_FLOWS)
 			continue;
 
-		mvpp2_prs_init_from_hw(priv, &pe, tid);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid);
 		bits = mvpp2_prs_sram_ai_get(&pe);
 
 		/* Sram store classification lookup ID in AI bits [5:0] */
@@ -457 +441 @@
 
 	if (priv->prs_shadow[MVPP2_PE_DROP_ALL].valid) {
 		/* Entry exist - update port only */
-		mvpp2_prs_init_from_hw(priv, &pe, MVPP2_PE_DROP_ALL);
+		__mvpp2_prs_init_from_hw(priv, &pe, MVPP2_PE_DROP_ALL);
 	} else {
 		/* Entry doesn't exist - create new */
 		memset(&pe, 0, sizeof(pe));
@@ -485 +469 @@
 }
 
 /* Set port to unicast or multicast promiscuous mode */
-void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
-			       enum mvpp2_prs_l2_cast l2_cast, bool add)
+static void __mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
+					enum mvpp2_prs_l2_cast l2_cast,
+					bool add)
 {
 	struct mvpp2_prs_entry pe;
 	unsigned char cast_match;
 	unsigned int ri;
 	int tid;
+
+	lockdep_assert_held(&priv->prs_spinlock);
 
 	if (l2_cast == MVPP2_PRS_L2_UNI_CAST) {
 		cast_match = MVPP2_PRS_UCAST_VAL;
@@ -508 +489 @@
 
 	/* promiscuous mode - Accept unknown unicast or multicast packets */
 	if (priv->prs_shadow[tid].valid) {
-		mvpp2_prs_init_from_hw(priv, &pe, tid);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid);
 	} else {
 		memset(&pe, 0, sizeof(pe));
 		mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC);
@@ -541 +522 @@
 	mvpp2_prs_hw_write(priv, &pe);
 }
 
+void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
+			       enum mvpp2_prs_l2_cast l2_cast, bool add)
+{
+	spin_lock_bh(&priv->prs_spinlock);
+	__mvpp2_prs_mac_promisc_set(priv, port, l2_cast, add);
+	spin_unlock_bh(&priv->prs_spinlock);
+}
+
 /* Set entry for dsa packets */
 static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add,
 				  bool tagged, bool extend)
@@ -566 +539 @@
 
 	if (priv->prs_shadow[tid].valid) {
 		/* Entry exist - update port only */
-		mvpp2_prs_init_from_hw(priv, &pe, tid);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid);
 	} else {
 		/* Entry doesn't exist - create new */
 		memset(&pe, 0, sizeof(pe));
@@ -637 +610 @@
 
 	if (priv->prs_shadow[tid].valid) {
 		/* Entry exist - update port only */
-		mvpp2_prs_init_from_hw(priv, &pe, tid);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid);
 	} else {
 		/* Entry doesn't exist - create new */
 		memset(&pe, 0, sizeof(pe));
@@ -700 +673 @@
 		    priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VLAN)
 			continue;
 
-		mvpp2_prs_init_from_hw(priv, &pe, tid);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid);
 		match = mvpp2_prs_tcam_data_cmp(&pe, 0, tpid);
 		if (!match)
 			continue;
@@ -753 +726 @@
 		    priv->prs_shadow[tid_aux].lu != MVPP2_PRS_LU_VLAN)
 			continue;
 
-		mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
 		ri_bits = mvpp2_prs_sram_ri_get(&pe);
 		if ((ri_bits & MVPP2_PRS_RI_VLAN_MASK) ==
 		    MVPP2_PRS_RI_VLAN_DOUBLE)
@@ -787 +760 @@
 
 		mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VLAN);
 	} else {
-		mvpp2_prs_init_from_hw(priv, &pe, tid);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid);
 	}
 	/* Update ports' mask */
 	mvpp2_prs_tcam_port_map_set(&pe, port_map);
@@ -827 +800 @@
 		    priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VLAN)
 			continue;
 
-		mvpp2_prs_init_from_hw(priv, &pe, tid);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid);
 
 		match = mvpp2_prs_tcam_data_cmp(&pe, 0, tpid1) &&
 			mvpp2_prs_tcam_data_cmp(&pe, 4, tpid2);
@@ -876 +849 @@
 		    priv->prs_shadow[tid_aux].lu != MVPP2_PRS_LU_VLAN)
 			continue;
 
-		mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
 		ri_bits = mvpp2_prs_sram_ri_get(&pe);
 		ri_bits &= MVPP2_PRS_RI_VLAN_MASK;
 		if (ri_bits == MVPP2_PRS_RI_VLAN_SINGLE ||
@@ -907 +880 @@
 
 		mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VLAN);
 	} else {
-		mvpp2_prs_init_from_hw(priv, &pe, tid);
+		__mvpp2_prs_init_from_hw(priv, &pe, tid);
 	}
 
 	/* Update ports' mask */
@@ -1240 +1213 @@
 	/* Create dummy entries for drop all and promiscuous modes */
 	mvpp2_prs_drop_fc(priv);
 	mvpp2_prs_mac_drop_all_set(priv, 0, false);
-	mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false);
-	mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false);
+	__mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false);
+	__mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false);
 }
 
 /* Set default entries for various types of dsa packets */
@@ -1559 +1532 @@
 {
1533 struct mvpp2_prs_entry pe; 1561 1534 int err; 1562 - 1563 - priv->prs_double_vlans = devm_kcalloc(&pdev->dev, sizeof(bool), 1564 - MVPP2_PRS_DBL_VLANS_MAX, 1565 - GFP_KERNEL); 1566 - if (!priv->prs_double_vlans) 1567 - return -ENOMEM; 1568 1535 1569 1536 /* Double VLAN: 0x88A8, 0x8100 */ 1570 1537 err = mvpp2_prs_double_vlan_add(priv, ETH_P_8021AD, ETH_P_8021Q, ··· 1962 1941 port->priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VID) 1963 1942 continue; 1964 1943 1965 - mvpp2_prs_init_from_hw(port->priv, &pe, tid); 1944 + __mvpp2_prs_init_from_hw(port->priv, &pe, tid); 1966 1945 1967 1946 mvpp2_prs_tcam_data_byte_get(&pe, 2, &byte[0], &enable[0]); 1968 1947 mvpp2_prs_tcam_data_byte_get(&pe, 3, &byte[1], &enable[1]); ··· 1991 1970 1992 1971 memset(&pe, 0, sizeof(pe)); 1993 1972 1973 + spin_lock_bh(&priv->prs_spinlock); 1974 + 1994 1975 /* Scan TCAM and see if entry with this <vid,port> already exist */ 1995 1976 tid = mvpp2_prs_vid_range_find(port, vid, mask); 1996 1977 ··· 2011 1988 MVPP2_PRS_VLAN_FILT_MAX_ENTRY); 2012 1989 2013 1990 /* There isn't room for a new VID filter */ 2014 - if (tid < 0) 1991 + if (tid < 0) { 1992 + spin_unlock_bh(&priv->prs_spinlock); 2015 1993 return tid; 1994 + } 2016 1995 2017 1996 mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_VID); 2018 1997 pe.index = tid; ··· 2022 1997 /* Mask all ports */ 2023 1998 mvpp2_prs_tcam_port_map_set(&pe, 0); 2024 1999 } else { 2025 - mvpp2_prs_init_from_hw(priv, &pe, tid); 2000 + __mvpp2_prs_init_from_hw(priv, &pe, tid); 2026 2001 } 2027 2002 2028 2003 /* Enable the current port */ ··· 2044 2019 mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VID); 2045 2020 mvpp2_prs_hw_write(priv, &pe); 2046 2021 2022 + spin_unlock_bh(&priv->prs_spinlock); 2047 2023 return 0; 2048 2024 } 2049 2025 ··· 2054 2028 struct mvpp2 *priv = port->priv; 2055 2029 int tid; 2056 2030 2057 - /* Scan TCAM and see if entry with this <vid,port> already exist */ 2031 + spin_lock_bh(&priv->prs_spinlock); 2032 + 2033 + /* Invalidate TCAM entry 
with this <vid,port>, if it exists */ 2058 2034 tid = mvpp2_prs_vid_range_find(port, vid, 0xfff); 2035 + if (tid >= 0) { 2036 + mvpp2_prs_hw_inv(priv, tid); 2037 + priv->prs_shadow[tid].valid = false; 2038 + } 2059 2039 2060 - /* No such entry */ 2061 - if (tid < 0) 2062 - return; 2063 - 2064 - mvpp2_prs_hw_inv(priv, tid); 2065 - priv->prs_shadow[tid].valid = false; 2040 + spin_unlock_bh(&priv->prs_spinlock); 2066 2041 } 2067 2042 2068 2043 /* Remove all existing VID filters on this port */ ··· 2072 2045 struct mvpp2 *priv = port->priv; 2073 2046 int tid; 2074 2047 2048 + spin_lock_bh(&priv->prs_spinlock); 2049 + 2075 2050 for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id); 2076 2051 tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) { 2077 2052 if (priv->prs_shadow[tid].valid) { ··· 2081 2052 priv->prs_shadow[tid].valid = false; 2082 2053 } 2083 2054 } 2055 + 2056 + spin_unlock_bh(&priv->prs_spinlock); 2084 2057 } 2085 2058 2086 2059 /* Remove VID filering entry for this port */ ··· 2091 2060 unsigned int tid = MVPP2_PRS_VID_PORT_DFLT(port->id); 2092 2061 struct mvpp2 *priv = port->priv; 2093 2062 2063 + spin_lock_bh(&priv->prs_spinlock); 2064 + 2094 2065 /* Invalidate the guard entry */ 2095 2066 mvpp2_prs_hw_inv(priv, tid); 2096 2067 2097 2068 priv->prs_shadow[tid].valid = false; 2069 + 2070 + spin_unlock_bh(&priv->prs_spinlock); 2098 2071 } 2099 2072 2100 2073 /* Add guard entry that drops packets when no VID is matched on this port */ ··· 2113 2078 return; 2114 2079 2115 2080 memset(&pe, 0, sizeof(pe)); 2081 + 2082 + spin_lock_bh(&priv->prs_spinlock); 2116 2083 2117 2084 pe.index = tid; 2118 2085 ··· 2148 2111 /* Update shadow table */ 2149 2112 mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VID); 2150 2113 mvpp2_prs_hw_write(priv, &pe); 2114 + 2115 + spin_unlock_bh(&priv->prs_spinlock); 2151 2116 } 2152 2117 2153 2118 /* Parser default initialization */ 2154 2119 int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv) 2155 2120 { 2156 2121 
int err, index, i; 2122 + 2123 + priv->prs_shadow = devm_kcalloc(&pdev->dev, MVPP2_PRS_TCAM_SRAM_SIZE, 2124 + sizeof(*priv->prs_shadow), 2125 + GFP_KERNEL); 2126 + if (!priv->prs_shadow) 2127 + return -ENOMEM; 2128 + 2129 + priv->prs_double_vlans = devm_kcalloc(&pdev->dev, sizeof(bool), 2130 + MVPP2_PRS_DBL_VLANS_MAX, 2131 + GFP_KERNEL); 2132 + if (!priv->prs_double_vlans) 2133 + return -ENOMEM; 2134 + 2135 + spin_lock_bh(&priv->prs_spinlock); 2157 2136 2158 2137 /* Enable tcam table */ 2159 2138 mvpp2_write(priv, MVPP2_PRS_TCAM_CTRL_REG, MVPP2_PRS_TCAM_EN_MASK); ··· 2189 2136 for (index = 0; index < MVPP2_PRS_TCAM_SRAM_SIZE; index++) 2190 2137 mvpp2_prs_hw_inv(priv, index); 2191 2138 2192 - priv->prs_shadow = devm_kcalloc(&pdev->dev, MVPP2_PRS_TCAM_SRAM_SIZE, 2193 - sizeof(*priv->prs_shadow), 2194 - GFP_KERNEL); 2195 - if (!priv->prs_shadow) 2196 - return -ENOMEM; 2197 - 2198 2139 /* Always start from lookup = 0 */ 2199 2140 for (index = 0; index < MVPP2_MAX_PORTS; index++) 2200 2141 mvpp2_prs_hw_port_init(priv, index, MVPP2_PRS_LU_MH, ··· 2205 2158 mvpp2_prs_vid_init(priv); 2206 2159 2207 2160 err = mvpp2_prs_etype_init(priv); 2208 - if (err) 2209 - return err; 2161 + err = err ? : mvpp2_prs_vlan_init(pdev, priv); 2162 + err = err ? : mvpp2_prs_pppoe_init(priv); 2163 + err = err ? : mvpp2_prs_ip6_init(priv); 2164 + err = err ? 
: mvpp2_prs_ip4_init(priv); 2210 2165 2211 - err = mvpp2_prs_vlan_init(pdev, priv); 2212 - if (err) 2213 - return err; 2214 - 2215 - err = mvpp2_prs_pppoe_init(priv); 2216 - if (err) 2217 - return err; 2218 - 2219 - err = mvpp2_prs_ip6_init(priv); 2220 - if (err) 2221 - return err; 2222 - 2223 - err = mvpp2_prs_ip4_init(priv); 2224 - if (err) 2225 - return err; 2226 - 2227 - return 0; 2166 + spin_unlock_bh(&priv->prs_spinlock); 2167 + return err; 2228 2168 } 2229 2169 2230 2170 /* Compare MAC DA with tcam entry data */ ··· 2251 2217 (priv->prs_shadow[tid].udf != udf_type)) 2252 2218 continue; 2253 2219 2254 - mvpp2_prs_init_from_hw(priv, &pe, tid); 2220 + __mvpp2_prs_init_from_hw(priv, &pe, tid); 2255 2221 entry_pmap = mvpp2_prs_tcam_port_map_get(&pe); 2256 2222 2257 2223 if (mvpp2_prs_mac_range_equals(&pe, da, mask) && ··· 2263 2229 } 2264 2230 2265 2231 /* Update parser's mac da entry */ 2266 - int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add) 2232 + static int __mvpp2_prs_mac_da_accept(struct mvpp2_port *port, 2233 + const u8 *da, bool add) 2267 2234 { 2268 2235 unsigned char mask[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }; 2269 2236 struct mvpp2 *priv = port->priv; ··· 2296 2261 /* Mask all ports */ 2297 2262 mvpp2_prs_tcam_port_map_set(&pe, 0); 2298 2263 } else { 2299 - mvpp2_prs_init_from_hw(priv, &pe, tid); 2264 + __mvpp2_prs_init_from_hw(priv, &pe, tid); 2300 2265 } 2301 2266 2302 2267 mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC); ··· 2352 2317 return 0; 2353 2318 } 2354 2319 2320 + int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add) 2321 + { 2322 + int err; 2323 + 2324 + spin_lock_bh(&port->priv->prs_spinlock); 2325 + err = __mvpp2_prs_mac_da_accept(port, da, add); 2326 + spin_unlock_bh(&port->priv->prs_spinlock); 2327 + 2328 + return err; 2329 + } 2330 + 2355 2331 int mvpp2_prs_update_mac_da(struct net_device *dev, const u8 *da) 2356 2332 { 2357 2333 struct mvpp2_port *port = netdev_priv(dev); 
··· 2391 2345 unsigned long pmap; 2392 2346 int index, tid; 2393 2347 2348 + spin_lock_bh(&priv->prs_spinlock); 2349 + 2394 2350 for (tid = MVPP2_PE_MAC_RANGE_START; 2395 2351 tid <= MVPP2_PE_MAC_RANGE_END; tid++) { 2396 2352 unsigned char da[ETH_ALEN], da_mask[ETH_ALEN]; ··· 2402 2354 (priv->prs_shadow[tid].udf != MVPP2_PRS_UDF_MAC_DEF)) 2403 2355 continue; 2404 2356 2405 - mvpp2_prs_init_from_hw(priv, &pe, tid); 2357 + __mvpp2_prs_init_from_hw(priv, &pe, tid); 2406 2358 2407 2359 pmap = mvpp2_prs_tcam_port_map_get(&pe); 2408 2360 ··· 2423 2375 continue; 2424 2376 2425 2377 /* Remove entry from TCAM */ 2426 - mvpp2_prs_mac_da_accept(port, da, false); 2378 + __mvpp2_prs_mac_da_accept(port, da, false); 2427 2379 } 2380 + 2381 + spin_unlock_bh(&priv->prs_spinlock); 2428 2382 } 2429 2383 2430 2384 int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type) 2431 2385 { 2432 2386 switch (type) { 2433 2387 case MVPP2_TAG_TYPE_EDSA: 2388 + spin_lock_bh(&priv->prs_spinlock); 2434 2389 /* Add port to EDSA entries */ 2435 2390 mvpp2_prs_dsa_tag_set(priv, port, true, 2436 2391 MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA); ··· 2444 2393 MVPP2_PRS_TAGGED, MVPP2_PRS_DSA); 2445 2394 mvpp2_prs_dsa_tag_set(priv, port, false, 2446 2395 MVPP2_PRS_UNTAGGED, MVPP2_PRS_DSA); 2396 + spin_unlock_bh(&priv->prs_spinlock); 2447 2397 break; 2448 2398 2449 2399 case MVPP2_TAG_TYPE_DSA: 2400 + spin_lock_bh(&priv->prs_spinlock); 2450 2401 /* Add port to DSA entries */ 2451 2402 mvpp2_prs_dsa_tag_set(priv, port, true, 2452 2403 MVPP2_PRS_TAGGED, MVPP2_PRS_DSA); ··· 2459 2406 MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA); 2460 2407 mvpp2_prs_dsa_tag_set(priv, port, false, 2461 2408 MVPP2_PRS_UNTAGGED, MVPP2_PRS_EDSA); 2409 + spin_unlock_bh(&priv->prs_spinlock); 2462 2410 break; 2463 2411 2464 2412 case MVPP2_TAG_TYPE_MH: 2465 2413 case MVPP2_TAG_TYPE_NONE: 2414 + spin_lock_bh(&priv->prs_spinlock); 2466 2415 /* Remove port form EDSA and DSA entries */ 2467 2416 mvpp2_prs_dsa_tag_set(priv, port, false, 2468 2417 
MVPP2_PRS_TAGGED, MVPP2_PRS_DSA); ··· 2474 2419 MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA); 2475 2420 mvpp2_prs_dsa_tag_set(priv, port, false, 2476 2421 MVPP2_PRS_UNTAGGED, MVPP2_PRS_EDSA); 2422 + spin_unlock_bh(&priv->prs_spinlock); 2477 2423 break; 2478 2424 2479 2425 default: ··· 2493 2437 2494 2438 memset(&pe, 0, sizeof(pe)); 2495 2439 2440 + spin_lock_bh(&priv->prs_spinlock); 2441 + 2496 2442 tid = mvpp2_prs_tcam_first_free(priv, 2497 2443 MVPP2_PE_LAST_FREE_TID, 2498 2444 MVPP2_PE_FIRST_FREE_TID); 2499 - if (tid < 0) 2445 + if (tid < 0) { 2446 + spin_unlock_bh(&priv->prs_spinlock); 2500 2447 return tid; 2448 + } 2501 2449 2502 2450 pe.index = tid; 2503 2451 ··· 2521 2461 mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK); 2522 2462 mvpp2_prs_hw_write(priv, &pe); 2523 2463 2464 + spin_unlock_bh(&priv->prs_spinlock); 2524 2465 return 0; 2525 2466 } 2526 2467 ··· 2533 2472 2534 2473 memset(&pe, 0, sizeof(pe)); 2535 2474 2475 + spin_lock_bh(&port->priv->prs_spinlock); 2476 + 2536 2477 tid = mvpp2_prs_flow_find(port->priv, port->id); 2537 2478 2538 2479 /* Such entry not exist */ ··· 2543 2480 tid = mvpp2_prs_tcam_first_free(port->priv, 2544 2481 MVPP2_PE_LAST_FREE_TID, 2545 2482 MVPP2_PE_FIRST_FREE_TID); 2546 - if (tid < 0) 2483 + if (tid < 0) { 2484 + spin_unlock_bh(&port->priv->prs_spinlock); 2547 2485 return tid; 2486 + } 2548 2487 2549 2488 pe.index = tid; 2550 2489 ··· 2557 2492 /* Update shadow table */ 2558 2493 mvpp2_prs_shadow_set(port->priv, pe.index, MVPP2_PRS_LU_FLOWS); 2559 2494 } else { 2560 - mvpp2_prs_init_from_hw(port->priv, &pe, tid); 2495 + __mvpp2_prs_init_from_hw(port->priv, &pe, tid); 2561 2496 } 2562 2497 2563 2498 mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_FLOWS); 2564 2499 mvpp2_prs_tcam_port_map_set(&pe, (1 << port->id)); 2565 2500 mvpp2_prs_hw_write(port->priv, &pe); 2566 2501 2502 + spin_unlock_bh(&port->priv->prs_spinlock); 2567 2503 return 0; 2568 2504 } 2569 2505 ··· 2575 2509 if (index > MVPP2_PRS_TCAM_SRAM_SIZE) 2576 2510 return -EINVAL; 
2577 2511 2512 + spin_lock_bh(&priv->prs_spinlock); 2513 + 2578 2514 mvpp2_write(priv, MVPP2_PRS_TCAM_HIT_IDX_REG, index); 2579 2515 2580 2516 val = mvpp2_read(priv, MVPP2_PRS_TCAM_HIT_CNT_REG); 2581 2517 2582 2518 val &= MVPP2_PRS_TCAM_HIT_CNT_MASK; 2583 2519 2520 + spin_unlock_bh(&priv->prs_spinlock); 2584 2521 return val; 2585 2522 }
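The mvpp2 patch above repeatedly applies one idiom: the real work moves into a double-underscore helper that asserts the lock is held (`lockdep_assert_held`), while the exported function becomes a thin wrapper that takes and releases `prs_spinlock`. Internal callers that already hold the lock call the `__` variant directly. A minimal userspace sketch of that shape, using pthreads and invented names (not the driver's API):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical parser state guarded by one lock, standing in for
 * priv->prs_spinlock and the TCAM shadow table in the patch above. */
static pthread_mutex_t prs_lock = PTHREAD_MUTEX_INITIALIZER;
static int shadow_table[16];

/* Internal helper: caller must hold prs_lock. Call sites that already
 * hold the lock (e.g. bulk-init paths) use this directly. */
static int __prs_update(int tid, int val)
{
	if (tid < 0 || tid >= 16)
		return -1;              /* like -EINVAL */
	shadow_table[tid] = val;        /* safe unlocked access: lock held */
	return 0;
}

/* Public entry point: a thin wrapper that only manages the lock. */
int prs_update(int tid, int val)
{
	int err;

	pthread_mutex_lock(&prs_lock);
	err = __prs_update(tid, val);
	pthread_mutex_unlock(&prs_lock);
	return err;
}
```

The split keeps the locking rule auditable: every exported symbol locks exactly once, and no `__` helper ever locks, so nesting cannot deadlock.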
+4 -5
drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
···
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(pfvf->netdev, prog, act);
-		break;
+		fallthrough;
 	case XDP_ABORTED:
-		if (xsk_buff)
-			xsk_buff_free(xsk_buff);
-		trace_xdp_exception(pfvf->netdev, prog, act);
-		break;
+		if (act == XDP_ABORTED)
+			trace_xdp_exception(pfvf->netdev, prog, act);
+		fallthrough;
 	case XDP_DROP:
 		cq->pool_ptrs++;
 		if (xsk_buff) {
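The octeontx2 change above funnels both invalid verdicts and XDP_ABORTED into the XDP_DROP case via `fallthrough`, so the drop/recycle bookkeeping lives in exactly one place. A standalone sketch of the same control flow, with invented names and counters standing in for the driver's warn/trace/recycle calls:

```c
#include <assert.h>

enum verdict { V_PASS, V_ABORTED, V_DROP, V_BOGUS = 99 };

/* Stand-ins for bpf_warn_invalid_xdp_action(), trace_xdp_exception()
 * and the shared buffer-recycle path in the driver. */
static int warned, traced, dropped;

static int handle(enum verdict act)
{
	switch (act) {
	case V_PASS:
		return 1;             /* packet continues up the stack */
	default:
		warned++;             /* unknown verdict: warn, then... */
		/* fall through */
	case V_ABORTED:
		if (act == V_ABORTED)
			traced++;     /* trace only genuine aborts */
		/* fall through */
	case V_DROP:
		dropped++;            /* single drop/recycle path */
		return 0;
	}
}
```

Because the later cases re-check `act`, each verdict still gets its specific side effect while sharing the final cleanup, which is what the `fallthrough` rewrite buys over duplicated `break` arms.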
+1
drivers/net/ethernet/mellanox/mlx4/Kconfig
···
 	tristate "Mellanox Technologies 1/10/40Gbit Ethernet support"
 	depends on PCI && NETDEVICES && ETHERNET && INET
 	depends on PTP_1588_CLOCK_OPTIONAL
+	select PAGE_POOL
 	select MLX4_CORE
 	help
 	  This driver supports Mellanox Technologies ConnectX Ethernet
+3 -3
drivers/net/ethernet/sfc/ef100_netdev.c
···
 	net_dev->hw_enc_features |= efx->type->offload_features;
 	net_dev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_SG |
 				  NETIF_F_HIGHDMA | NETIF_F_ALL_TSO;
-	netif_set_tso_max_segs(net_dev,
-			       ESE_EF100_DP_GZ_TSO_MAX_HDR_NUM_SEGS_DEFAULT);
+	nic_data = efx->nic_data;
+	netif_set_tso_max_size(efx->net_dev, nic_data->tso_max_payload_len);
+	netif_set_tso_max_segs(efx->net_dev, nic_data->tso_max_payload_num_segs);
 
 	rc = efx_ef100_init_datapath_caps(efx);
 	if (rc < 0)
···
 	/* Don't fail init if RSS setup doesn't work. */
 	efx_mcdi_push_default_indir_table(efx, efx->n_rx_channels);
 
-	nic_data = efx->nic_data;
 	rc = ef100_get_mac_address(efx, net_dev->perm_addr, CLIENT_HANDLE_SELF,
 				   efx->type->is_vf);
 	if (rc)
+21 -26
drivers/net/ethernet/sfc/ef100_nic.c
···
 	case ESE_EF100_DP_GZ_TSO_MAX_HDR_NUM_SEGS:
 		/* We always put HDR_NUM_SEGS=1 in our TSO descriptors */
 		if (!reader->value) {
-			netif_err(efx, probe, efx->net_dev,
-				  "TSO_MAX_HDR_NUM_SEGS < 1\n");
+			pci_err(efx->pci_dev, "TSO_MAX_HDR_NUM_SEGS < 1\n");
 			return -EOPNOTSUPP;
 		}
 		return 0;
···
 		 */
 		if (!reader->value || reader->value > EFX_MIN_DMAQ_SIZE ||
 		    EFX_MIN_DMAQ_SIZE % (u32)reader->value) {
-			netif_err(efx, probe, efx->net_dev,
-				  "%s size granularity is %llu, can't guarantee safety\n",
-				  reader->type == ESE_EF100_DP_GZ_RXQ_SIZE_GRANULARITY ? "RXQ" : "TXQ",
-				  reader->value);
+			pci_err(efx->pci_dev,
+				"%s size granularity is %llu, can't guarantee safety\n",
+				reader->type == ESE_EF100_DP_GZ_RXQ_SIZE_GRANULARITY ? "RXQ" : "TXQ",
+				reader->value);
 			return -EOPNOTSUPP;
 		}
 		return 0;
 	case ESE_EF100_DP_GZ_TSO_MAX_PAYLOAD_LEN:
 		nic_data->tso_max_payload_len = min_t(u64, reader->value,
 						      GSO_LEGACY_MAX_SIZE);
-		netif_set_tso_max_size(efx->net_dev,
-				       nic_data->tso_max_payload_len);
 		return 0;
 	case ESE_EF100_DP_GZ_TSO_MAX_PAYLOAD_NUM_SEGS:
 		nic_data->tso_max_payload_num_segs = min_t(u64, reader->value, 0xffff);
-		netif_set_tso_max_segs(efx->net_dev,
-				       nic_data->tso_max_payload_num_segs);
 		return 0;
 	case ESE_EF100_DP_GZ_TSO_MAX_NUM_FRAMES:
 		nic_data->tso_max_frames = min_t(u64, reader->value, 0xffff);
 		return 0;
 	case ESE_EF100_DP_GZ_COMPAT:
 		if (reader->value) {
-			netif_err(efx, probe, efx->net_dev,
-				  "DP_COMPAT has unknown bits %#llx, driver not compatible with this hw\n",
-				  reader->value);
+			pci_err(efx->pci_dev,
+				"DP_COMPAT has unknown bits %#llx, driver not compatible with this hw\n",
+				reader->value);
 			return -EOPNOTSUPP;
 		}
 		return 0;
···
 		 * So the value of this shouldn't matter.
 		 */
 		if (reader->value != ESE_EF100_DP_GZ_VI_STRIDES_DEFAULT)
-			netif_dbg(efx, probe, efx->net_dev,
-				  "NIC has other than default VI_STRIDES (mask "
-				  "%#llx), early probing might use wrong one\n",
-				  reader->value);
+			pci_dbg(efx->pci_dev,
+				"NIC has other than default VI_STRIDES (mask "
+				"%#llx), early probing might use wrong one\n",
+				reader->value);
 		return 0;
 	case ESE_EF100_DP_GZ_RX_MAX_RUNT:
 		/* Driver doesn't look at L2_STATUS:LEN_ERR bit, so we don't
···
 		/* Host interface says "Drivers should ignore design parameters
 		 * that they do not recognise."
 		 */
-		netif_dbg(efx, probe, efx->net_dev,
-			  "Ignoring unrecognised design parameter %u\n",
-			  reader->type);
+		pci_dbg(efx->pci_dev,
+			"Ignoring unrecognised design parameter %u\n",
+			reader->type);
 		return 0;
 	}
 }
···
 	 */
 	if (reader.state != EF100_TLV_TYPE) {
 		if (reader.state == EF100_TLV_TYPE_CONT)
-			netif_err(efx, probe, efx->net_dev,
-				  "truncated design parameter (incomplete type %u)\n",
-				  reader.type);
+			pci_err(efx->pci_dev,
+				"truncated design parameter (incomplete type %u)\n",
+				reader.type);
 		else
-			netif_err(efx, probe, efx->net_dev,
-				  "truncated design parameter %u\n",
-				  reader.type);
+			pci_err(efx->pci_dev,
+				"truncated design parameter %u\n",
+				reader.type);
 		rc = -EIO;
 	}
 out:
+13
drivers/net/netdevsim/netdev.c
···
 	ns->netdev->netdev_ops = &nsim_netdev_ops;
 	ns->netdev->stat_ops = &nsim_stat_ops;
 	ns->netdev->queue_mgmt_ops = &nsim_queue_mgmt_ops;
+	netdev_lockdep_set_classes(ns->netdev);
 
 	err = nsim_udp_tunnels_info_create(ns->nsim_dev, ns->netdev);
 	if (err)
···
 	if (err)
 		goto err_ipsec_teardown;
 	rtnl_unlock();
+
+	if (IS_ENABLED(CONFIG_DEBUG_NET)) {
+		ns->nb.notifier_call = netdev_debug_event;
+		if (register_netdevice_notifier_dev_net(ns->netdev, &ns->nb,
+							&ns->nn))
+			ns->nb.notifier_call = NULL;
+	}
+
 	return 0;
 
 err_ipsec_teardown:
···
 
 	debugfs_remove(ns->qr_dfs);
 	debugfs_remove(ns->pp_dfs);
+
+	if (ns->nb.notifier_call)
+		unregister_netdevice_notifier_dev_net(ns->netdev, &ns->nb,
+						      &ns->nn);
 
 	rtnl_lock();
 	peer = rtnl_dereference(ns->peer);
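The netdevsim hunk above uses a compact idiom for optional registrations: the callback pointer itself records whether registration succeeded (cleared on failure), so teardown only unregisters if `notifier_call` is still set. A self-contained sketch of that idea with a toy one-slot registry (names are illustrative, not the kernel's notifier API):

```c
#include <assert.h>
#include <stddef.h>

typedef int (*notifier_fn)(int event);

struct notifier_block { notifier_fn notifier_call; };

static struct notifier_block *registered;   /* fake registry: one slot */

static int register_notifier(struct notifier_block *nb)
{
	if (registered)
		return -1;              /* registration failed */
	registered = nb;
	return 0;
}

static void unregister_notifier(struct notifier_block *nb)
{
	if (registered == nb)
		registered = NULL;
}

static int debug_event(int event) { return event; }

static void setup(struct notifier_block *nb, int debug_enabled)
{
	if (debug_enabled) {
		nb->notifier_call = debug_event;
		if (register_notifier(nb))
			nb->notifier_call = NULL;   /* roll back on failure */
	}
}

static void teardown(struct notifier_block *nb)
{
	if (nb->notifier_call)          /* only if actually registered */
		unregister_notifier(nb);
}
```

No separate "is registered" flag is needed, and teardown is safe to call whether or not the debug option was on.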
+3
drivers/net/netdevsim/netdevsim.h
···
 
 	struct nsim_ethtool ethtool;
 	struct netdevsim __rcu *peer;
+
+	struct notifier_block nb;
+	struct netdev_net_notifier nn;
 };
 
 struct netdevsim *
+3 -3
drivers/net/usb/usbnet.c
···
 	    netif_device_present (dev->net) &&
 	    test_bit(EVENT_DEV_OPEN, &dev->flags) &&
 	    !test_bit (EVENT_RX_HALT, &dev->flags) &&
-	    !test_bit (EVENT_DEV_ASLEEP, &dev->flags)) {
+	    !test_bit (EVENT_DEV_ASLEEP, &dev->flags) &&
+	    !usbnet_going_away(dev)) {
 		switch (retval = usb_submit_urb (urb, GFP_ATOMIC)) {
 		case -EPIPE:
 			usbnet_defer_kevent (dev, EVENT_RX_HALT);
···
 			tasklet_schedule (&dev->bh);
 			break;
 		case 0:
-			if (!usbnet_going_away(dev))
-				__usbnet_queue_skb(&dev->rxq, skb, rx_start);
+			__usbnet_queue_skb(&dev->rxq, skb, rx_start);
 		}
 	} else {
 		netif_dbg(dev, ifdown, dev->net, "rx: stopped\n");
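The usbnet fix above moves the `usbnet_going_away()` check to *before* the URB is submitted: previously a buffer could be handed to the hardware and then never queued for completion handling, leading to the NULL dereference in rx_complete mentioned in the merge summary. A minimal sketch of the corrected ordering, with invented names and counters standing in for URB submission and RX-queue tracking:

```c
#include <assert.h>
#include <stdbool.h>

/* Teardown flag and simple bookkeeping standing in for the device
 * flags and the dev->rxq skb queue in the patch above. */
static bool going_away;
static int urbs_in_flight;
static int queued;

static bool device_usable(void)
{
	/* usbnet also checks open/halt/asleep bits in the same place */
	return !going_away;
}

static int rx_submit(void)
{
	if (!device_usable())
		return -1;      /* refuse: teardown already started */
	urbs_in_flight++;       /* hardware now owns the buffer... */
	queued++;               /* ...and it is tracked unconditionally */
	return 0;
}
```

The key property: once the submission succeeds, the buffer is always tracked, so the completion path never sees an untracked buffer mid-teardown.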
+1 -1
include/linux/netdevice.h
···
 int netif_set_alias(struct net_device *dev, const char *alias, size_t len);
 int dev_set_alias(struct net_device *, const char *, size_t);
 int dev_get_alias(const struct net_device *, char *, size_t);
-int netif_change_net_namespace(struct net_device *dev, struct net *net,
+int __dev_change_net_namespace(struct net_device *dev, struct net *net,
 			       const char *pat, int new_ifindex,
 			       struct netlink_ext_ack *extack);
 int dev_change_net_namespace(struct net_device *dev, struct net *net,
+8 -8
include/net/ip.h
···
 	memcpy(buf, &naddr, sizeof(naddr));
 }
 
-#if IS_MODULE(CONFIG_IPV6)
-#define EXPORT_IPV6_MOD(X) EXPORT_SYMBOL(X)
-#define EXPORT_IPV6_MOD_GPL(X) EXPORT_SYMBOL_GPL(X)
-#else
-#define EXPORT_IPV6_MOD(X)
-#define EXPORT_IPV6_MOD_GPL(X)
-#endif
-
 #if IS_ENABLED(CONFIG_IPV6)
 #include <linux/ipv6.h>
 #endif
···
 #endif
 }
 
+#endif
+
+#if IS_MODULE(CONFIG_IPV6)
+#define EXPORT_IPV6_MOD(X) EXPORT_SYMBOL(X)
+#define EXPORT_IPV6_MOD_GPL(X) EXPORT_SYMBOL_GPL(X)
+#else
+#define EXPORT_IPV6_MOD(X)
+#define EXPORT_IPV6_MOD_GPL(X)
+#endif
 
 static inline unsigned int ipv4_addr_hash(__be32 ip)
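The ip.h hunk above relocates the `EXPORT_IPV6_MOD()` macros so they are defined in every configuration, with a do-nothing expansion when IPv6 is not a module: code using the macro then compiles identically under all configs. A toy preprocessor sketch of the same pattern, with an invented `CONFIG_FOO` feature switch:

```c
#include <assert.h>

/* Hypothetical feature switch; 0 simulates the feature compiled out. */
#define CONFIG_FOO 0

#if CONFIG_FOO
#define FOO_HOOK(x) foo_handle(x)          /* real hook when enabled */
#else
#define FOO_HOOK(x) ((void)(x), 0)         /* safe no-op fallback */
#endif

/* Callers never need their own #ifdefs around the hook. */
int process(int v)
{
	FOO_HOOK(v);    /* compiles in both configurations */
	return v * 2;
}
```

The bug class the kernel patch fixes is exactly the missing `#else` arm: a macro defined only inside the feature's `#if` block breaks every build where the feature is off.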
+3
include/net/netdev_lock.h
···
 				    &qdisc_xmit_lock_key);	\
 	}
 
+int netdev_debug_event(struct notifier_block *nb, unsigned long event,
+		       void *ptr);
+
 #endif
+6
include/net/page_pool/memory_provider.h
···
 #include <net/page_pool/types.h>
 
 struct netdev_rx_queue;
+struct netlink_ext_ack;
 struct sk_buff;
 
 struct memory_provider_ops {
···
 
 int net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx,
 		    struct pp_memory_provider_params *p);
+int __net_mp_open_rxq(struct net_device *dev, unsigned int rxq_idx,
+		      const struct pp_memory_provider_params *p,
+		      struct netlink_ext_ack *extack);
 void net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx,
 		      struct pp_memory_provider_params *old_p);
+void __net_mp_close_rxq(struct net_device *dev, unsigned int rxq_idx,
+			const struct pp_memory_provider_params *old_p);
 
 /**
  * net_mp_netmem_place_in_cache() - give a netmem to a page pool
+1 -1
net/core/Makefile
···
 obj-$(CONFIG_OF)	+= of_net.o
 obj-$(CONFIG_NET_TEST) += net_test.o
 obj-$(CONFIG_NET_DEVMEM) += devmem.o
-obj-$(CONFIG_DEBUG_NET_SMALL_RTNL) += rtnl_net_debug.o
+obj-$(CONFIG_DEBUG_NET) += lock_debug.o
 obj-$(CONFIG_FAIL_SKB_REALLOC) += skb_fault_injection.o
+12 -3
net/core/dev.c
···
 		netdev_unlock_ops(lower_dev);
 	}
 }
+EXPORT_IPV6_MOD(netif_disable_lro);
 
 /**
  * dev_disable_gro_hw - disable HW Generic Receive Offload on a device
···
 	int err;
 
 	for_each_netdev(net, dev) {
+		netdev_lock_ops(dev);
 		err = call_netdevice_register_notifiers(nb, dev);
+		netdev_unlock_ops(dev);
 		if (err)
 			goto rollback;
 	}
···
 		goto unlock;
 	}
 
+	netdev_lock_ops(dev);
 	err = dev_xdp_attach_link(dev, &extack, link);
+	netdev_unlock_ops(dev);
 	rtnl_unlock();
 
 	if (err) {
···
 		memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
 
 	/* Notify protocols, that a new device appeared. */
+	netdev_lock_ops(dev);
 	ret = call_netdevice_notifiers(NETDEV_REGISTER, dev);
+	netdev_unlock_ops(dev);
 	ret = notifier_to_errno(ret);
 	if (ret) {
 		/* Expect explicit free_netdev() on failure */
···
 }
 EXPORT_SYMBOL(unregister_netdev);
 
-int netif_change_net_namespace(struct net_device *dev, struct net *net,
+int __dev_change_net_namespace(struct net_device *dev, struct net *net,
 			       const char *pat, int new_ifindex,
 			       struct netlink_ext_ack *extack)
 {
···
 	 * And now a mini version of register_netdevice unregister_netdevice.
 	 */
 
+	netdev_lock_ops(dev);
 	/* If device is running close it first. */
 	netif_close(dev);
-
 	/* And unlink it from device chain */
 	unlist_netdevice(dev);
+	netdev_unlock_ops(dev);
 
 	synchronize_net();
 
···
 	err = netdev_change_owner(dev, net_old, net);
 	WARN_ON(err);
 
+	netdev_lock_ops(dev);
 	/* Add the device back in the hashes */
 	list_netdevice(dev);
-
 	/* Notify protocols, that a new device appeared. */
 	call_netdevice_notifiers(NETDEV_REGISTER, dev);
+	netdev_unlock_ops(dev);
 
 	/*
 	 * Prevent userspace races by waiting until the network
+1 -7
net/core/dev_api.c
···
 int dev_change_net_namespace(struct net_device *dev, struct net *net,
 			     const char *pat)
 {
-	int ret;
-
-	netdev_lock_ops(dev);
-	ret = netif_change_net_namespace(dev, net, pat, 0, NULL);
-	netdev_unlock_ops(dev);
-
-	return ret;
+	return __dev_change_net_namespace(dev, net, pat, 0, NULL);
 }
 EXPORT_SYMBOL_GPL(dev_change_net_namespace);
+16 -48
net/core/devmem.c
···
  */
 
 #include <linux/dma-buf.h>
-#include <linux/ethtool_netlink.h>
 #include <linux/genalloc.h>
 #include <linux/mm.h>
 #include <linux/netdevice.h>
···
 	struct netdev_rx_queue *rxq;
 	unsigned long xa_idx;
 	unsigned int rxq_idx;
-	int err;
 
 	if (binding->list.next)
 		list_del(&binding->list);
 
 	xa_for_each(&binding->bound_rxqs, xa_idx, rxq) {
-		WARN_ON(rxq->mp_params.mp_priv != binding);
-
-		rxq->mp_params.mp_priv = NULL;
-		rxq->mp_params.mp_ops = NULL;
+		const struct pp_memory_provider_params mp_params = {
+			.mp_priv = binding,
+			.mp_ops = &dmabuf_devmem_ops,
+		};
 
 		rxq_idx = get_netdev_rx_queue_index(rxq);
 
-		err = netdev_rx_queue_restart(binding->dev, rxq_idx);
-		WARN_ON(err && err != -ENETDOWN);
+		__net_mp_close_rxq(binding->dev, rxq_idx, &mp_params);
 	}
 
 	xa_erase(&net_devmem_dmabuf_bindings, binding->id);
···
 			  struct net_devmem_dmabuf_binding *binding,
 			  struct netlink_ext_ack *extack)
 {
+	struct pp_memory_provider_params mp_params = {
+		.mp_priv = binding,
+		.mp_ops = &dmabuf_devmem_ops,
+	};
 	struct netdev_rx_queue *rxq;
 	u32 xa_idx;
 	int err;
 
-	if (rxq_idx >= dev->real_num_rx_queues) {
-		NL_SET_ERR_MSG(extack, "rx queue index out of range");
-		return -ERANGE;
-	}
-
-	if (dev->cfg->hds_config != ETHTOOL_TCP_DATA_SPLIT_ENABLED) {
-		NL_SET_ERR_MSG(extack, "tcp-data-split is disabled");
-		return -EINVAL;
-	}
-
-	if (dev->cfg->hds_thresh) {
-		NL_SET_ERR_MSG(extack, "hds-thresh is not zero");
-		return -EINVAL;
-	}
-
-	rxq = __netif_get_rx_queue(dev, rxq_idx);
-	if (rxq->mp_params.mp_ops) {
-		NL_SET_ERR_MSG(extack, "designated queue already memory provider bound");
-		return -EEXIST;
-	}
-
-#ifdef CONFIG_XDP_SOCKETS
-	if (rxq->pool) {
-		NL_SET_ERR_MSG(extack, "designated queue already in use by AF_XDP");
-		return -EBUSY;
-	}
-#endif
-
-	err = xa_alloc(&binding->bound_rxqs, &xa_idx, rxq, xa_limit_32b,
-		       GFP_KERNEL);
+	err = __net_mp_open_rxq(dev, rxq_idx, &mp_params, extack);
 	if (err)
 		return err;
 
-	rxq->mp_params.mp_priv = binding;
-	rxq->mp_params.mp_ops = &dmabuf_devmem_ops;
-
-	err = netdev_rx_queue_restart(dev, rxq_idx);
+	rxq = __netif_get_rx_queue(dev, rxq_idx);
+	err = xa_alloc(&binding->bound_rxqs, &xa_idx, rxq, xa_limit_32b,
+		       GFP_KERNEL);
 	if (err)
-		goto err_xa_erase;
+		goto err_close_rxq;
 
 	return 0;
 
-err_xa_erase:
-	rxq->mp_params.mp_priv = NULL;
-	rxq->mp_params.mp_ops = NULL;
-	xa_erase(&binding->bound_rxqs, xa_idx);
-
+err_close_rxq:
+	__net_mp_close_rxq(dev, rxq_idx, &mp_params);
 	return err;
 }
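The devmem refactor above is the consolidation the merge summary mentions: all queue safety checks move into `__net_mp_open_rxq()`, and both the devmem binder and the io_uring zero-copy path become thin callers, so neither can drift out of sync with the other's checks. A compact sketch of that structure, with invented fields and errno-style return values:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy device config standing in for real_num_rx_queues, hds_config
 * and the per-queue memory-provider binding. */
struct dev_cfg {
	int  num_queues;
	bool hds_enabled;
	bool queue_busy[8];
};

/* Single validation + claim point shared by every caller. */
static int open_rxq_checked(struct dev_cfg *cfg, int idx)
{
	if (idx < 0 || idx >= cfg->num_queues)
		return -34;             /* like -ERANGE: index out of range */
	if (!cfg->hds_enabled)
		return -22;             /* like -EINVAL: data split required */
	if (cfg->queue_busy[idx])
		return -17;             /* like -EEXIST: already bound */
	cfg->queue_busy[idx] = true;    /* claim the queue */
	return 0;
}

/* Two thin entry points, mirroring devmem and io_uring zcrx. */
static int devmem_bind(struct dev_cfg *cfg, int idx)
{
	return open_rxq_checked(cfg, idx);
}

static int iouring_bind(struct dev_cfg *cfg, int idx)
{
	return open_rxq_checked(cfg, idx);
}
```

Adding a new precondition now means editing one function, and both user-visible APIs pick it up automatically.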
+8
net/core/dst.c
···
 void dst_release(struct dst_entry *dst)
 {
 	if (dst && rcuref_put(&dst->__rcuref)) {
+#ifdef CONFIG_DST_CACHE
+		if (dst->flags & DST_METADATA) {
+			struct metadata_dst *md_dst = (struct metadata_dst *)dst;
+
+			if (md_dst->type == METADATA_IP_TUNNEL)
+				dst_cache_reset_now(&md_dst->u.tun_info.dst_cache);
+		}
+#endif
 		dst_count_dec(dst);
 		call_rcu_hurry(&dst->rcu_head, dst_destroy_rcu);
 	}
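The dst.c hunk above shows the general shape of reference-counted teardown: the put that drops the last reference runs type-specific cleanup (here, resetting the tunnel dst cache) before the object is destroyed. A self-contained C11 sketch of that "last put cleans up, then destroys" ordering; `rcuref` and the metadata-dst details are kernel-specific, and these names are invented:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct obj {
	atomic_int refs;
	bool is_metadata;
	int  cache_entries;   /* stand-in for tun_info.dst_cache */
	bool destroyed;
};

/* Returns true only when this call dropped the final reference. */
static bool obj_put(struct obj *o)
{
	return atomic_fetch_sub(&o->refs, 1) == 1;
}

static void obj_release(struct obj *o)
{
	if (o && obj_put(o)) {
		if (o->is_metadata)
			o->cache_entries = 0; /* reset caches before teardown */
		o->destroyed = true;          /* kernel defers this via RCU */
	}
}
```

Only the final `obj_release()` performs cleanup; earlier puts are pure counter decrements, which is why the cache reset in the patch is safe to do right at the release site.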
-6
net/core/netdev-genl.c
···
 		goto err_unlock;
 	}
 
-	if (dev_xdp_prog_count(netdev)) {
-		NL_SET_ERR_MSG(info->extack, "unable to bind dmabuf to device with XDP program attached");
-		err = -EEXIST;
-		goto err_unlock;
-	}
-
 	binding = net_devmem_bind_dmabuf(netdev, dmabuf_fd, info->extack);
 	if (IS_ERR(binding)) {
 		err = PTR_ERR(binding);
+41 -12
net/core/netdev_rx_queue.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 3 + #include <linux/ethtool_netlink.h> 3 4 #include <linux/netdevice.h> 4 5 #include <net/netdev_lock.h> 5 6 #include <net/netdev_queues.h> ··· 87 86 } 88 87 EXPORT_SYMBOL_NS_GPL(netdev_rx_queue_restart, "NETDEV_INTERNAL"); 89 88 90 - static int __net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx, 91 - struct pp_memory_provider_params *p) 89 + int __net_mp_open_rxq(struct net_device *dev, unsigned int rxq_idx, 90 + const struct pp_memory_provider_params *p, 91 + struct netlink_ext_ack *extack) 92 92 { 93 93 struct netdev_rx_queue *rxq; 94 94 int ret; ··· 97 95 if (!netdev_need_ops_lock(dev)) 98 96 return -EOPNOTSUPP; 99 97 100 - if (ifq_idx >= dev->real_num_rx_queues) 98 + if (rxq_idx >= dev->real_num_rx_queues) 101 99 return -EINVAL; 102 - ifq_idx = array_index_nospec(ifq_idx, dev->real_num_rx_queues); 100 + rxq_idx = array_index_nospec(rxq_idx, dev->real_num_rx_queues); 103 101 104 - rxq = __netif_get_rx_queue(dev, ifq_idx); 105 - if (rxq->mp_params.mp_ops) 102 + if (rxq_idx >= dev->real_num_rx_queues) { 103 + NL_SET_ERR_MSG(extack, "rx queue index out of range"); 104 + return -ERANGE; 105 + } 106 + if (dev->cfg->hds_config != ETHTOOL_TCP_DATA_SPLIT_ENABLED) { 107 + NL_SET_ERR_MSG(extack, "tcp-data-split is disabled"); 108 + return -EINVAL; 109 + } 110 + if (dev->cfg->hds_thresh) { 111 + NL_SET_ERR_MSG(extack, "hds-thresh is not zero"); 112 + return -EINVAL; 113 + } 114 + if (dev_xdp_prog_count(dev)) { 115 + NL_SET_ERR_MSG(extack, "unable to custom memory provider to device with XDP program attached"); 106 116 return -EEXIST; 117 + } 118 + 119 + rxq = __netif_get_rx_queue(dev, rxq_idx); 120 + if (rxq->mp_params.mp_ops) { 121 + NL_SET_ERR_MSG(extack, "designated queue already memory provider bound"); 122 + return -EEXIST; 123 + } 124 + #ifdef CONFIG_XDP_SOCKETS 125 + if (rxq->pool) { 126 + NL_SET_ERR_MSG(extack, "designated queue already in use by AF_XDP"); 127 + return -EBUSY; 128 + } 129 + #endif 
107 130 108 131 rxq->mp_params = *p; 109 - ret = netdev_rx_queue_restart(dev, ifq_idx); 132 + ret = netdev_rx_queue_restart(dev, rxq_idx); 110 133 if (ret) { 111 134 rxq->mp_params.mp_ops = NULL; 112 135 rxq->mp_params.mp_priv = NULL; ··· 139 112 return ret; 140 113 } 141 114 142 - int net_mp_open_rxq(struct net_device *dev, unsigned ifq_idx, 115 + int net_mp_open_rxq(struct net_device *dev, unsigned int rxq_idx, 143 116 struct pp_memory_provider_params *p) 144 117 { 145 118 int ret; 146 119 147 120 netdev_lock(dev); 148 - ret = __net_mp_open_rxq(dev, ifq_idx, p); 121 + ret = __net_mp_open_rxq(dev, rxq_idx, p, NULL); 149 122 netdev_unlock(dev); 150 123 return ret; 151 124 } 152 125 153 - static void __net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx, 154 - struct pp_memory_provider_params *old_p) 126 + void __net_mp_close_rxq(struct net_device *dev, unsigned int ifq_idx, 127 + const struct pp_memory_provider_params *old_p) 155 128 { 156 129 struct netdev_rx_queue *rxq; 130 + int err; 157 131 158 132 if (WARN_ON_ONCE(ifq_idx >= dev->real_num_rx_queues)) 159 133 return; ··· 174 146 175 147 rxq->mp_params.mp_ops = NULL; 176 148 rxq->mp_params.mp_priv = NULL; 177 - WARN_ON(netdev_rx_queue_restart(dev, ifq_idx)); 149 + err = netdev_rx_queue_restart(dev, ifq_idx); 150 + WARN_ON(err && err != -ENETDOWN); 178 151 } 179 152 180 153 void net_mp_close_rxq(struct net_device *dev, unsigned ifq_idx,
+4 -4
net/core/rtnetlink.c
··· 3025 3025 char ifname[IFNAMSIZ]; 3026 3026 int err; 3027 3027 3028 - netdev_lock_ops(dev); 3029 - 3030 3028 err = validate_linkmsg(dev, tb, extack); 3031 3029 if (err < 0) 3032 3030 goto errout; ··· 3040 3042 3041 3043 new_ifindex = nla_get_s32_default(tb[IFLA_NEW_IFINDEX], 0); 3042 3044 3043 - err = netif_change_net_namespace(dev, tgt_net, pat, 3045 + err = __dev_change_net_namespace(dev, tgt_net, pat, 3044 3046 new_ifindex, extack); 3045 3047 if (err) 3046 - goto errout; 3048 + return err; 3047 3049 3048 3050 status |= DO_SETLINK_MODIFIED; 3049 3051 } 3052 + 3053 + netdev_lock_ops(dev); 3050 3054 3051 3055 if (tb[IFLA_MAP]) { 3052 3056 struct rtnl_link_ifmap *u_map;
+10 -6
net/core/rtnl_net_debug.c → net/core/lock_debug.c
··· 6 6 #include <linux/notifier.h> 7 7 #include <linux/rtnetlink.h> 8 8 #include <net/net_namespace.h> 9 + #include <net/netdev_lock.h> 9 10 #include <net/netns/generic.h> 10 11 11 - static int rtnl_net_debug_event(struct notifier_block *nb, 12 - unsigned long event, void *ptr) 12 + int netdev_debug_event(struct notifier_block *nb, unsigned long event, 13 + void *ptr) 13 14 { 14 15 struct net_device *dev = netdev_notifier_info_to_dev(ptr); 15 16 struct net *net = dev_net(dev); ··· 18 17 19 18 /* Keep enum and don't add default to trigger -Werror=switch */ 20 19 switch (cmd) { 20 + case NETDEV_REGISTER: 21 21 case NETDEV_UP: 22 + netdev_ops_assert_locked(dev); 23 + fallthrough; 22 24 case NETDEV_DOWN: 23 25 case NETDEV_REBOOT: 24 26 case NETDEV_CHANGE: 25 - case NETDEV_REGISTER: 26 27 case NETDEV_UNREGISTER: 27 28 case NETDEV_CHANGEMTU: 28 29 case NETDEV_CHANGEADDR: ··· 69 66 70 67 return NOTIFY_DONE; 71 68 } 69 + EXPORT_SYMBOL_NS_GPL(netdev_debug_event, "NETDEV_INTERNAL"); 72 70 73 71 static int rtnl_net_debug_net_id; 74 72 ··· 78 74 struct notifier_block *nb; 79 75 80 76 nb = net_generic(net, rtnl_net_debug_net_id); 81 - nb->notifier_call = rtnl_net_debug_event; 77 + nb->notifier_call = netdev_debug_event; 82 78 83 79 return register_netdevice_notifier_net(net, nb); 84 80 } ··· 99 95 }; 100 96 101 97 static struct notifier_block rtnl_net_debug_block = { 102 - .notifier_call = rtnl_net_debug_event, 98 + .notifier_call = netdev_debug_event, 103 99 }; 104 100 105 101 static int __init rtnl_net_debug_init(void) 106 102 { 107 103 int ret; 108 104 109 - ret = register_pernet_device(&rtnl_net_debug_net_ops); 105 + ret = register_pernet_subsys(&rtnl_net_debug_net_ops); 110 106 if (ret) 111 107 return ret; 112 108
+1 -1
net/ipv4/devinet.c
··· 281 281 if (!in_dev->arp_parms) 282 282 goto out_kfree; 283 283 if (IPV4_DEVCONF(in_dev->cnf, FORWARDING)) 284 - dev_disable_lro(dev); 284 + netif_disable_lro(dev); 285 285 /* Reference in_dev->dev */ 286 286 netdev_hold(dev, &in_dev->dev_tracker, GFP_KERNEL); 287 287 /* Account for reference dev->ip_ptr (below) */
+2 -2
net/ipv4/ip_tunnel_core.c
··· 416 416 417 417 skb_dst_update_pmtu_no_confirm(skb, mtu); 418 418 419 - if (!reply || skb->pkt_type == PACKET_HOST) 419 + if (!reply) 420 420 return 0; 421 421 422 422 if (skb->protocol == htons(ETH_P_IP)) ··· 451 451 geneve_opt_policy[LWTUNNEL_IP_OPT_GENEVE_MAX + 1] = { 452 452 [LWTUNNEL_IP_OPT_GENEVE_CLASS] = { .type = NLA_U16 }, 453 453 [LWTUNNEL_IP_OPT_GENEVE_TYPE] = { .type = NLA_U8 }, 454 - [LWTUNNEL_IP_OPT_GENEVE_DATA] = { .type = NLA_BINARY, .len = 128 }, 454 + [LWTUNNEL_IP_OPT_GENEVE_DATA] = { .type = NLA_BINARY, .len = 127 }, 455 455 }; 456 456 457 457 static const struct nla_policy
+24 -18
net/ipv4/udp.c
··· 1625 1625 } 1626 1626 1627 1627 /* fully reclaim rmem/fwd memory allocated for skb */ 1628 - static void udp_rmem_release(struct sock *sk, int size, int partial, 1629 - bool rx_queue_lock_held) 1628 + static void udp_rmem_release(struct sock *sk, unsigned int size, 1629 + int partial, bool rx_queue_lock_held) 1630 1630 { 1631 1631 struct udp_sock *up = udp_sk(sk); 1632 1632 struct sk_buff_head *sk_queue; 1633 - int amt; 1633 + unsigned int amt; 1634 1634 1635 1635 if (likely(partial)) { 1636 1636 up->forward_deficit += size; ··· 1650 1650 if (!rx_queue_lock_held) 1651 1651 spin_lock(&sk_queue->lock); 1652 1652 1653 - 1654 - sk_forward_alloc_add(sk, size); 1655 - amt = (sk->sk_forward_alloc - partial) & ~(PAGE_SIZE - 1); 1656 - sk_forward_alloc_add(sk, -amt); 1653 + amt = (size + sk->sk_forward_alloc - partial) & ~(PAGE_SIZE - 1); 1654 + sk_forward_alloc_add(sk, size - amt); 1657 1655 1658 1656 if (amt) 1659 1657 __sk_mem_reduce_allocated(sk, amt >> PAGE_SHIFT); ··· 1723 1725 int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb) 1724 1726 { 1725 1727 struct sk_buff_head *list = &sk->sk_receive_queue; 1726 - int rmem, err = -ENOMEM; 1728 + unsigned int rmem, rcvbuf; 1727 1729 spinlock_t *busy = NULL; 1728 - int size, rcvbuf; 1730 + int size, err = -ENOMEM; 1729 1731 1730 - /* Immediately drop when the receive queue is full. 1731 - * Always allow at least one packet. 1732 - */ 1733 1732 rmem = atomic_read(&sk->sk_rmem_alloc); 1734 1733 rcvbuf = READ_ONCE(sk->sk_rcvbuf); 1735 - if (rmem > rcvbuf) 1736 - goto drop; 1734 + size = skb->truesize; 1735 + 1736 + /* Immediately drop when the receive queue is full. 1737 + * Cast to unsigned int performs the boundary check for INT_MAX. 1738 + */ 1739 + if (rmem + size > rcvbuf) { 1740 + if (rcvbuf > INT_MAX >> 1) 1741 + goto drop; 1742 + 1743 + /* Always allow at least one packet for small buffer. 
*/ 1744 + if (rmem > rcvbuf) 1745 + goto drop; 1746 + } 1737 1747 1738 1748 /* Under mem pressure, it might be helpful to help udp_recvmsg() 1739 1749 * having linear skbs : ··· 1751 1745 */ 1752 1746 if (rmem > (rcvbuf >> 1)) { 1753 1747 skb_condense(skb); 1754 - 1748 + size = skb->truesize; 1755 1749 busy = busylock_acquire(sk); 1756 1750 } 1757 - size = skb->truesize; 1751 + 1758 1752 udp_set_dev_scratch(skb); 1759 1753 1760 1754 atomic_add(size, &sk->sk_rmem_alloc); ··· 1841 1835 1842 1836 static struct sk_buff *__first_packet_length(struct sock *sk, 1843 1837 struct sk_buff_head *rcvq, 1844 - int *total) 1838 + unsigned int *total) 1845 1839 { 1846 1840 struct sk_buff *skb; 1847 1841 ··· 1874 1868 { 1875 1869 struct sk_buff_head *rcvq = &udp_sk(sk)->reader_queue; 1876 1870 struct sk_buff_head *sk_queue = &sk->sk_receive_queue; 1871 + unsigned int total = 0; 1877 1872 struct sk_buff *skb; 1878 - int total = 0; 1879 1873 int res; 1880 1874 1881 1875 spin_lock_bh(&rcvq->lock);
+38 -14
net/ipv6/addrconf.c
··· 80 80 #include <net/netlink.h> 81 81 #include <net/pkt_sched.h> 82 82 #include <net/l3mdev.h> 83 + #include <net/netdev_lock.h> 83 84 #include <linux/if_tunnel.h> 84 85 #include <linux/rtnetlink.h> 85 86 #include <linux/netconf.h> ··· 378 377 int err = -ENOMEM; 379 378 380 379 ASSERT_RTNL(); 380 + netdev_ops_assert_locked(dev); 381 381 382 382 if (dev->mtu < IPV6_MIN_MTU && dev != blackhole_netdev) 383 383 return ERR_PTR(-EINVAL); ··· 404 402 return ERR_PTR(err); 405 403 } 406 404 if (ndev->cnf.forwarding) 407 - dev_disable_lro(dev); 405 + netif_disable_lro(dev); 408 406 /* We refer to the device */ 409 407 netdev_hold(dev, &ndev->dev_tracker, GFP_KERNEL); 410 408 ··· 3154 3152 3155 3153 rtnl_net_lock(net); 3156 3154 dev = __dev_get_by_index(net, ireq.ifr6_ifindex); 3155 + netdev_lock_ops(dev); 3157 3156 if (dev) 3158 3157 err = inet6_addr_add(net, dev, &cfg, 0, 0, NULL); 3159 3158 else 3160 3159 err = -ENODEV; 3160 + netdev_unlock_ops(dev); 3161 3161 rtnl_net_unlock(net); 3162 3162 return err; 3163 3163 } ··· 5030 5026 if (!dev) { 5031 5027 NL_SET_ERR_MSG_MOD(extack, "Unable to find the interface"); 5032 5028 err = -ENODEV; 5033 - goto unlock; 5029 + goto unlock_rtnl; 5034 5030 } 5035 5031 5032 + netdev_lock_ops(dev); 5036 5033 idev = ipv6_find_idev(dev); 5037 5034 if (IS_ERR(idev)) { 5038 5035 err = PTR_ERR(idev); ··· 5070 5065 5071 5066 in6_ifa_put(ifa); 5072 5067 unlock: 5068 + netdev_unlock_ops(dev); 5069 + unlock_rtnl: 5073 5070 rtnl_net_unlock(net); 5074 5071 5075 5072 return err; ··· 5791 5784 } 5792 5785 } 5793 5786 5787 + static int inet6_fill_ifla6_stats_attrs(struct sk_buff *skb, 5788 + struct inet6_dev *idev) 5789 + { 5790 + struct nlattr *nla; 5791 + 5792 + nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64)); 5793 + if (!nla) 5794 + goto nla_put_failure; 5795 + snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla)); 5796 + 5797 + nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64)); 5798 + 
if (!nla) 5799 + goto nla_put_failure; 5800 + snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla)); 5801 + 5802 + return 0; 5803 + 5804 + nla_put_failure: 5805 + return -EMSGSIZE; 5806 + } 5807 + 5794 5808 static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev, 5795 5809 u32 ext_filter_mask) 5796 5810 { ··· 5834 5806 5835 5807 /* XXX - MC not implemented */ 5836 5808 5837 - if (ext_filter_mask & RTEXT_FILTER_SKIP_STATS) 5838 - return 0; 5839 - 5840 - nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64)); 5841 - if (!nla) 5842 - goto nla_put_failure; 5843 - snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla)); 5844 - 5845 - nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64)); 5846 - if (!nla) 5847 - goto nla_put_failure; 5848 - snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla)); 5809 + if (!(ext_filter_mask & RTEXT_FILTER_SKIP_STATS)) { 5810 + if (inet6_fill_ifla6_stats_attrs(skb, idev) < 0) 5811 + goto nla_put_failure; 5812 + } 5849 5813 5850 5814 nla = nla_reserve(skb, IFLA_INET6_TOKEN, sizeof(struct in6_addr)); 5851 5815 if (!nla) ··· 6523 6503 6524 6504 if (idev->cnf.addr_gen_mode != new_val) { 6525 6505 WRITE_ONCE(idev->cnf.addr_gen_mode, new_val); 6506 + netdev_lock_ops(idev->dev); 6526 6507 addrconf_init_auto_addrs(idev->dev); 6508 + netdev_unlock_ops(idev->dev); 6527 6509 } 6528 6510 } else if (&net->ipv6.devconf_all->addr_gen_mode == ctl->data) { 6529 6511 struct net_device *dev; ··· 6537 6515 idev->cnf.addr_gen_mode != new_val) { 6538 6516 WRITE_ONCE(idev->cnf.addr_gen_mode, 6539 6517 new_val); 6518 + netdev_lock_ops(idev->dev); 6540 6519 addrconf_init_auto_addrs(idev->dev); 6520 + netdev_unlock_ops(idev->dev); 6541 6521 } 6542 6522 } 6543 6523 }
+18 -3
net/ipv6/calipso.c
··· 1072 1072 struct ipv6_opt_hdr *hop; 1073 1073 int opt_len, len, ret_val = -ENOMSG, offset; 1074 1074 unsigned char *opt; 1075 - struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); 1075 + struct ipv6_pinfo *pinfo = inet6_sk(sk); 1076 + struct ipv6_txoptions *txopts; 1076 1077 1078 + if (!pinfo) 1079 + return -EAFNOSUPPORT; 1080 + 1081 + txopts = txopt_get(pinfo); 1077 1082 if (!txopts || !txopts->hopopt) 1078 1083 goto done; 1079 1084 ··· 1130 1125 { 1131 1126 int ret_val; 1132 1127 struct ipv6_opt_hdr *old, *new; 1133 - struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); 1128 + struct ipv6_pinfo *pinfo = inet6_sk(sk); 1129 + struct ipv6_txoptions *txopts; 1134 1130 1131 + if (!pinfo) 1132 + return -EAFNOSUPPORT; 1133 + 1134 + txopts = txopt_get(pinfo); 1135 1135 old = NULL; 1136 1136 if (txopts) 1137 1137 old = txopts->hopopt; ··· 1163 1153 static void calipso_sock_delattr(struct sock *sk) 1164 1154 { 1165 1155 struct ipv6_opt_hdr *new_hop; 1166 - struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk)); 1156 + struct ipv6_pinfo *pinfo = inet6_sk(sk); 1157 + struct ipv6_txoptions *txopts; 1167 1158 1159 + if (!pinfo) 1160 + return; 1161 + 1162 + txopts = txopt_get(pinfo); 1168 1163 if (!txopts || !txopts->hopopt) 1169 1164 goto done; 1170 1165
+38 -4
net/ipv6/route.c
··· 412 412 return false; 413 413 } 414 414 415 + static struct fib6_info * 416 + rt6_multipath_first_sibling_rcu(const struct fib6_info *rt) 417 + { 418 + struct fib6_info *iter; 419 + struct fib6_node *fn; 420 + 421 + fn = rcu_dereference(rt->fib6_node); 422 + if (!fn) 423 + goto out; 424 + iter = rcu_dereference(fn->leaf); 425 + if (!iter) 426 + goto out; 427 + 428 + while (iter) { 429 + if (iter->fib6_metric == rt->fib6_metric && 430 + rt6_qualify_for_ecmp(iter)) 431 + return iter; 432 + iter = rcu_dereference(iter->fib6_next); 433 + } 434 + 435 + out: 436 + return NULL; 437 + } 438 + 415 439 void fib6_select_path(const struct net *net, struct fib6_result *res, 416 440 struct flowi6 *fl6, int oif, bool have_oif_match, 417 441 const struct sk_buff *skb, int strict) 418 442 { 419 - struct fib6_info *match = res->f6i; 443 + struct fib6_info *first, *match = res->f6i; 420 444 struct fib6_info *sibling; 445 + int hash; 421 446 422 447 if (!match->nh && (!match->fib6_nsiblings || have_oif_match)) 423 448 goto out; ··· 465 440 return; 466 441 } 467 442 468 - if (fl6->mp_hash <= atomic_read(&match->fib6_nh->fib_nh_upper_bound)) 443 + first = rt6_multipath_first_sibling_rcu(match); 444 + if (!first) 469 445 goto out; 470 446 471 - list_for_each_entry_rcu(sibling, &match->fib6_siblings, 447 + hash = fl6->mp_hash; 448 + if (hash <= atomic_read(&first->fib6_nh->fib_nh_upper_bound) && 449 + rt6_score_route(first->fib6_nh, first->fib6_flags, oif, 450 + strict) >= 0) { 451 + match = first; 452 + goto out; 453 + } 454 + 455 + list_for_each_entry_rcu(sibling, &first->fib6_siblings, 472 456 fib6_siblings) { 473 457 const struct fib6_nh *nh = sibling->fib6_nh; 474 458 int nh_upper_bound; 475 459 476 460 nh_upper_bound = atomic_read(&nh->fib_nh_upper_bound); 477 - if (fl6->mp_hash > nh_upper_bound) 461 + if (hash > nh_upper_bound) 478 462 continue; 479 463 if (rt6_score_route(nh, sibling->fib6_flags, oif, strict) < 0) 480 464 break;
+2 -2
net/netfilter/nf_tables_api.c
··· 2839 2839 err = nft_netdev_register_hooks(ctx->net, &hook.list); 2840 2840 if (err < 0) 2841 2841 goto err_hooks; 2842 + 2843 + unregister = true; 2842 2844 } 2843 2845 } 2844 - 2845 - unregister = true; 2846 2846 2847 2847 if (nla[NFTA_CHAIN_COUNTERS]) { 2848 2848 if (!nft_is_base_chain(chain)) {
+2 -1
net/netfilter/nft_set_hash.c
··· 309 309 310 310 nft_setelem_expr_foreach(expr, elem_expr, size) { 311 311 if (expr->ops->gc && 312 - expr->ops->gc(read_pnet(&set->net), expr)) 312 + expr->ops->gc(read_pnet(&set->net), expr) && 313 + set->flags & NFT_SET_EVAL) 313 314 return true; 314 315 } 315 316
+3 -3
net/netfilter/nft_tunnel.c
··· 335 335 static const struct nla_policy nft_tunnel_opts_geneve_policy[NFTA_TUNNEL_KEY_GENEVE_MAX + 1] = { 336 336 [NFTA_TUNNEL_KEY_GENEVE_CLASS] = { .type = NLA_U16 }, 337 337 [NFTA_TUNNEL_KEY_GENEVE_TYPE] = { .type = NLA_U8 }, 338 - [NFTA_TUNNEL_KEY_GENEVE_DATA] = { .type = NLA_BINARY, .len = 128 }, 338 + [NFTA_TUNNEL_KEY_GENEVE_DATA] = { .type = NLA_BINARY, .len = 127 }, 339 339 }; 340 340 341 341 static int nft_tunnel_obj_geneve_init(const struct nlattr *attr, 342 342 struct nft_tunnel_opts *opts) 343 343 { 344 - struct geneve_opt *opt = (struct geneve_opt *)opts->u.data + opts->len; 344 + struct geneve_opt *opt = (struct geneve_opt *)(opts->u.data + opts->len); 345 345 struct nlattr *tb[NFTA_TUNNEL_KEY_GENEVE_MAX + 1]; 346 346 int err, data_len; 347 347 ··· 625 625 if (!inner) 626 626 goto failure; 627 627 while (opts->len > offset) { 628 - opt = (struct geneve_opt *)opts->u.data + offset; 628 + opt = (struct geneve_opt *)(opts->u.data + offset); 629 629 if (nla_put_be16(skb, NFTA_TUNNEL_KEY_GENEVE_CLASS, 630 630 opt->opt_class) || 631 631 nla_put_u8(skb, NFTA_TUNNEL_KEY_GENEVE_TYPE,
-6
net/openvswitch/actions.c
··· 947 947 pskb_trim(skb, ovs_mac_header_len(key)); 948 948 } 949 949 950 - /* Need to set the pkt_type to involve the routing layer. The 951 - * packet movement through the OVS datapath doesn't generally 952 - * use routing, but this is needed for tunnel cases. 953 - */ 954 - skb->pkt_type = PACKET_OUTGOING; 955 - 956 950 if (likely(!mru || 957 951 (skb->len <= mru + vport->dev->hard_header_len))) { 958 952 ovs_vport_send(vport, skb, ovs_key_mac_proto(key));
+1 -1
net/sched/act_tunnel_key.c
··· 68 68 [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_CLASS] = { .type = NLA_U16 }, 69 69 [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_TYPE] = { .type = NLA_U8 }, 70 70 [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_DATA] = { .type = NLA_BINARY, 71 - .len = 128 }, 71 + .len = 127 }, 72 72 }; 73 73 74 74 static const struct nla_policy
+1 -1
net/sched/cls_flower.c
··· 766 766 [TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS] = { .type = NLA_U16 }, 767 767 [TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE] = { .type = NLA_U8 }, 768 768 [TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA] = { .type = NLA_BINARY, 769 - .len = 128 }, 769 + .len = 127 }, 770 770 }; 771 771 772 772 static const struct nla_policy
-3
net/sched/sch_skbprio.c
··· 123 123 /* Check to update highest and lowest priorities. */ 124 124 if (skb_queue_empty(lp_qdisc)) { 125 125 if (q->lowest_prio == q->highest_prio) { 126 - /* The incoming packet is the only packet in queue. */ 127 - BUG_ON(sch->q.qlen != 1); 128 126 q->lowest_prio = prio; 129 127 q->highest_prio = prio; 130 128 } else { ··· 154 156 /* Update highest priority field. */ 155 157 if (skb_queue_empty(hpq)) { 156 158 if (q->lowest_prio == q->highest_prio) { 157 - BUG_ON(sch->q.qlen); 158 159 q->highest_prio = 0; 159 160 q->lowest_prio = SKBPRIO_MAX_PRIORITY - 1; 160 161 } else {
+4
net/sctp/sysctl.c
··· 525 525 return ret; 526 526 } 527 527 528 + static DEFINE_MUTEX(sctp_sysctl_mutex); 529 + 528 530 static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write, 529 531 void *buffer, size_t *lenp, loff_t *ppos) 530 532 { ··· 551 549 if (new_value > max || new_value < min) 552 550 return -EINVAL; 553 551 552 + mutex_lock(&sctp_sysctl_mutex); 554 553 net->sctp.udp_port = new_value; 555 554 sctp_udp_sock_stop(net); 556 555 if (new_value) { ··· 564 561 lock_sock(sk); 565 562 sctp_sk(sk)->udp_port = htons(net->sctp.udp_port); 566 563 release_sock(sk); 564 + mutex_unlock(&sctp_sysctl_mutex); 567 565 } 568 566 569 567 return ret;
+5 -1
net/vmw_vsock/af_vsock.c
··· 1551 1551 timeout = vsk->connect_timeout; 1552 1552 prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 1553 1553 1554 - while (sk->sk_state != TCP_ESTABLISHED && sk->sk_err == 0) { 1554 + /* If the socket is already closing or it is in an error state, there 1555 + * is no point in waiting. 1556 + */ 1557 + while (sk->sk_state != TCP_ESTABLISHED && 1558 + sk->sk_state != TCP_CLOSING && sk->sk_err == 0) { 1555 1559 if (flags & O_NONBLOCK) { 1556 1560 /* If we're not going to block, we schedule a timeout 1557 1561 * function to generate a timeout on the connection
+4 -4
tools/testing/selftests/drivers/net/hw/iou-zcrx.py
··· 27 27 28 28 29 29 def test_zcrx(cfg) -> None: 30 - cfg.require_v6() 30 + cfg.require_ipver('6') 31 31 32 32 combined_chans = _get_combined_channels(cfg) 33 33 if combined_chans < 2: ··· 40 40 flow_rule_id = _set_flow_rule(cfg, combined_chans - 1) 41 41 42 42 rx_cmd = f"{cfg.bin_remote} -s -p 9999 -i {cfg.ifname} -q {combined_chans - 1}" 43 - tx_cmd = f"{cfg.bin_local} -c -h {cfg.remote_v6} -p 9999 -l 12840" 43 + tx_cmd = f"{cfg.bin_local} -c -h {cfg.remote_addr_v['6']} -p 9999 -l 12840" 44 44 with bkg(rx_cmd, host=cfg.remote, exit_wait=True): 45 45 wait_port_listen(9999, proto="tcp", host=cfg.remote) 46 46 cmd(tx_cmd) ··· 51 51 52 52 53 53 def test_zcrx_oneshot(cfg) -> None: 54 - cfg.require_v6() 54 + cfg.require_ipver('6') 55 55 56 56 combined_chans = _get_combined_channels(cfg) 57 57 if combined_chans < 2: ··· 64 64 flow_rule_id = _set_flow_rule(cfg, combined_chans - 1) 65 65 66 66 rx_cmd = f"{cfg.bin_remote} -s -p 9999 -i {cfg.ifname} -q {combined_chans - 1} -o 4" 67 - tx_cmd = f"{cfg.bin_local} -c -h {cfg.remote_v6} -p 9999 -l 4096 -z 16384" 67 + tx_cmd = f"{cfg.bin_local} -c -h {cfg.remote_addr_v['6']} -p 9999 -l 4096 -z 16384" 68 68 with bkg(rx_cmd, host=cfg.remote, exit_wait=True): 69 69 wait_port_listen(9999, proto="tcp", host=cfg.remote) 70 70 cmd(tx_cmd)
+14 -6
tools/testing/selftests/net/amt.sh
··· 194 194 195 195 send_mcast_torture4() 196 196 { 197 - ip netns exec "${SOURCE}" bash -c \ 198 - 'cat /dev/urandom | head -c 1G | nc -w 1 -u 239.0.0.1 4001' 197 + for i in `seq 10`; do 198 + ip netns exec "${SOURCE}" bash -c \ 199 + 'cat /dev/urandom | head -c 100M | nc -w 1 -u 239.0.0.1 4001' 200 + echo -n "." 201 + done 199 202 } 200 203 201 204 202 205 send_mcast_torture6() 203 206 { 204 - ip netns exec "${SOURCE}" bash -c \ 205 - 'cat /dev/urandom | head -c 1G | nc -w 1 -u ff0e::5:6 6001' 207 + for i in `seq 10`; do 208 + ip netns exec "${SOURCE}" bash -c \ 209 + 'cat /dev/urandom | head -c 100M | nc -w 1 -u ff0e::5:6 6001' 210 + echo -n "." 211 + done 206 212 } 207 213 208 214 check_features() ··· 284 278 if [ $err -eq 1 ]; then 285 279 ERR=1 286 280 fi 281 + printf "TEST: %-50s" "IPv4 amt traffic forwarding torture" 287 282 send_mcast_torture4 288 - printf "TEST: %-60s [ OK ]\n" "IPv4 amt traffic forwarding torture" 283 + printf " [ OK ]\n" 284 + printf "TEST: %-50s" "IPv6 amt traffic forwarding torture" 289 285 send_mcast_torture6 290 - printf "TEST: %-60s [ OK ]\n" "IPv6 amt traffic forwarding torture" 286 + printf " [ OK ]\n" 291 287 sleep 5 292 288 if [ "${ERR}" -eq 1 ]; then 293 289 echo "Some tests failed." >&2
+25
tools/testing/selftests/net/lib.sh
··· 222 222 NS_LIST+=("${ns_list[@]}") 223 223 } 224 224 225 + # Create netdevsim with given id and net namespace. 226 + create_netdevsim() { 227 + local id="$1" 228 + local ns="$2" 229 + 230 + modprobe netdevsim &> /dev/null 231 + udevadm settle 232 + 233 + echo "$id 1" | ip netns exec $ns tee /sys/bus/netdevsim/new_device >/dev/null 234 + local dev=$(ip netns exec $ns ls /sys/bus/netdevsim/devices/netdevsim$id/net) 235 + ip -netns $ns link set dev $dev name nsim$id 236 + ip -netns $ns link set dev nsim$id up 237 + 238 + echo nsim$id 239 + } 240 + 241 + # Remove netdevsim with given id. 242 + cleanup_netdevsim() { 243 + local id="$1" 244 + 245 + if [ -d "/sys/bus/netdevsim/devices/netdevsim$id/net" ]; then 246 + echo "$id" > /sys/bus/netdevsim/del_device 247 + fi 248 + } 249 + 225 250 tc_rule_stats_get() 226 251 { 227 252 local dev=$1; shift
+9 -4
tools/testing/selftests/net/netns-name.sh
··· 7 7 DEV=dummy-dev0 8 8 DEV2=dummy-dev1 9 9 ALT_NAME=some-alt-name 10 + NSIM_ADDR=2025 10 11 11 12 RET_CODE=0 12 13 13 14 cleanup() { 15 + cleanup_netdevsim $NSIM_ADDR 14 16 cleanup_ns $NS $test_ns 15 17 } 16 18 ··· 27 25 28 26 # 29 27 # Test basic move without a rename 28 + # Use netdevsim because it has extra asserts for notifiers. 30 29 # 31 - ip -netns $NS link add name $DEV type dummy || fail 32 - ip -netns $NS link set dev $DEV netns $test_ns || 30 + 31 + nsim=$(create_netdevsim $NSIM_ADDR $NS) 32 + ip -netns $NS link set dev $nsim netns $test_ns || 33 33 fail "Can't perform a netns move" 34 - ip -netns $test_ns link show dev $DEV >> /dev/null || fail "Device not found after move" 35 - ip -netns $test_ns link del $DEV || fail 34 + ip -netns $test_ns link show dev $nsim >> /dev/null || 35 + fail "Device not found after move" 36 + cleanup_netdevsim $NSIM_ADDR 36 37 37 38 # 38 39 # Test move with a conflict
+2 -2
tools/testing/selftests/net/rtnetlink.py
··· 12 12 At least the loopback interface should have this address. 13 13 """ 14 14 15 - addresses = rtnl.getmaddrs({"ifa-family": socket.AF_INET}, dump=True) 15 + addresses = rtnl.getmulticast({"ifa-family": socket.AF_INET}, dump=True) 16 16 17 17 all_host_multicasts = [ 18 - addr for addr in addresses if addr['ifa-multicast'] == IPV4_ALL_HOSTS_MULTICAST 18 + addr for addr in addresses if addr['multicast'] == IPV4_ALL_HOSTS_MULTICAST 19 19 ] 20 20 21 21 ksft_ge(len(all_host_multicasts), 1,
+3
tools/testing/selftests/net/tcp_ao/self-connect.c
··· 16 16 17 17 if (link_set_up(lo_intf)) 18 18 test_error("Failed to bring %s up", lo_intf); 19 + 20 + if (ip_route_add(lo_intf, TEST_FAMILY, local_addr, local_addr)) 21 + test_error("Failed to add a local route %s", lo_intf); 19 22 } 20 23 21 24 static void setup_lo_intf(const char *lo_intf)
+7 -7
tools/testing/selftests/tc-testing/tc-tests/actions/nat.json
··· 305 305 "cmdUnderTest": "$TC actions add action nat ingress default 10.10.10.1 index 12", 306 306 "expExitCode": "0", 307 307 "verifyCmd": "$TC actions get action nat index 12", 308 - "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/32 10.10.10.1 pass.*index 12 ref", 308 + "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/0 10.10.10.1 pass.*index 12 ref", 309 309 "matchCount": "1", 310 310 "teardown": [ 311 311 "$TC actions flush action nat" ··· 332 332 "cmdUnderTest": "$TC actions add action nat ingress any 10.10.10.1 index 12", 333 333 "expExitCode": "0", 334 334 "verifyCmd": "$TC actions get action nat index 12", 335 - "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/32 10.10.10.1 pass.*index 12 ref", 335 + "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/0 10.10.10.1 pass.*index 12 ref", 336 336 "matchCount": "1", 337 337 "teardown": [ 338 338 "$TC actions flush action nat" ··· 359 359 "cmdUnderTest": "$TC actions add action nat ingress all 10.10.10.1 index 12", 360 360 "expExitCode": "0", 361 361 "verifyCmd": "$TC actions get action nat index 12", 362 - "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/32 10.10.10.1 pass.*index 12 ref", 362 + "matchPattern": "action order [0-9]+: nat ingress 0.0.0.0/0 10.10.10.1 pass.*index 12 ref", 363 363 "matchCount": "1", 364 364 "teardown": [ 365 365 "$TC actions flush action nat" ··· 548 548 "cmdUnderTest": "$TC actions add action nat egress default 20.20.20.1 pipe index 10", 549 549 "expExitCode": "0", 550 550 "verifyCmd": "$TC actions get action nat index 10", 551 - "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/32 20.20.20.1 pipe.*index 10 ref", 551 + "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/0 20.20.20.1 pipe.*index 10 ref", 552 552 "matchCount": "1", 553 553 "teardown": [ 554 554 "$TC actions flush action nat" ··· 575 575 "cmdUnderTest": "$TC actions add action nat egress any 20.20.20.1 pipe index 10", 576 576 "expExitCode": "0", 577 577 
"verifyCmd": "$TC actions get action nat index 10", 578 - "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/32 20.20.20.1 pipe.*index 10 ref", 578 + "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/0 20.20.20.1 pipe.*index 10 ref", 579 579 "matchCount": "1", 580 580 "teardown": [ 581 581 "$TC actions flush action nat" ··· 602 602 "cmdUnderTest": "$TC actions add action nat egress all 20.20.20.1 pipe index 10", 603 603 "expExitCode": "0", 604 604 "verifyCmd": "$TC actions get action nat index 10", 605 - "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/32 20.20.20.1 pipe.*index 10 ref", 605 + "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/0 20.20.20.1 pipe.*index 10 ref", 606 606 "matchCount": "1", 607 607 "teardown": [ 608 608 "$TC actions flush action nat" ··· 629 629 "cmdUnderTest": "$TC actions add action nat egress all 20.20.20.1 pipe index 10 cookie aa1bc2d3eeff112233445566778800a1", 630 630 "expExitCode": "0", 631 631 "verifyCmd": "$TC actions get action nat index 10", 632 - "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/32 20.20.20.1 pipe.*index 10 ref.*cookie aa1bc2d3eeff112233445566778800a1", 632 + "matchPattern": "action order [0-9]+: nat egress 0.0.0.0/0 20.20.20.1 pipe.*index 10 ref.*cookie aa1bc2d3eeff112233445566778800a1", 633 633 "matchCount": "1", 634 634 "teardown": [ 635 635 "$TC actions flush action nat"
+33 -1
tools/testing/selftests/tc-testing/tc-tests/infra/qdiscs.json
··· 126 126 "$TC qdisc del dev $DUMMY root handle 1: drr", 127 127 "$IP addr del 10.10.10.10/24 dev $DUMMY" 128 128 ] 129 - } 129 + }, 130 + { 131 + "id": "c024", 132 + "name": "Test TBF with SKBPRIO - catch qlen corner cases", 133 + "category": [ 134 + "qdisc", 135 + "tbf", 136 + "skbprio" 137 + ], 138 + "plugins": { 139 + "requires": "nsPlugin" 140 + }, 141 + "setup": [ 142 + "$IP link set dev $DUMMY up || true", 143 + "$IP addr add 10.10.10.10/24 dev $DUMMY || true", 144 + "$TC qdisc add dev $DUMMY handle 1: root tbf rate 100bit burst 2000 limit 1000", 145 + "$TC qdisc add dev $DUMMY parent 1: handle 10: skbprio limit 1", 146 + "ping -c 1 -W 0.1 -Q 0x00 -s 1400 -I $DUMMY 10.10.10.1 > /dev/null || true", 147 + "ping -c 1 -W 0.1 -Q 0x1c -s 1400 -I $DUMMY 10.10.10.1 > /dev/null || true", 148 + "ping -c 1 -W 0.1 -Q 0x00 -s 1400 -I $DUMMY 10.10.10.1 > /dev/null || true", 149 + "ping -c 1 -W 0.1 -Q 0x1c -s 1400 -I $DUMMY 10.10.10.1 > /dev/null || true", 150 + "sleep 0.5" 151 + ], 152 + "cmdUnderTest": "$TC -s qdisc show dev $DUMMY", 153 + "expExitCode": "0", 154 + "verifyCmd": "$TC -s qdisc show dev $DUMMY | grep -A 5 'qdisc skbprio'", 155 + "matchPattern": "dropped [1-9][0-9]*", 156 + "matchCount": "1", 157 + "teardown": [ 158 + "$TC qdisc del dev $DUMMY handle 1: root", 159 + "$IP addr del 10.10.10.10/24 dev $DUMMY || true" 160 + ] 161 + } 130 162 ]