Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'net-fix-nested-device-bugs'

Taehee Yoo says:

====================
net: fix nested device bugs

This patchset fixes several bugs related to the nested device
infrastructure.
The current nesting infrastructure code doesn't limit the depth of
nested devices, which are handled recursively. With deep nesting,
this needs a huge amount of memory and a stack overflow could occur.
The following device types have the same bug:
VLAN, BONDING, TEAM, MACSEC, MACVLAN, IPVLAN, and VXLAN.
But I couldn't test all interface types, so there could be more device
types with similar problems.
qmi_wwan.c might have the same problem as well.
So, I would appreciate it if someone could test qmi_wwan.c and other
modules.

Test commands:
ip link add dummy0 type dummy
ip link add vlan1 link dummy0 type vlan id 1

for i in {2..100}
do
let A=$i-1
ip link add name vlan$i link vlan$A type vlan id $i
done
ip link del dummy0

The 1st patch fixes the actual root cause.
It adds new common variables {upper/lower}_level that represent the
depth level: upper_level is the depth of the upper device chain and
lower_level is the depth of the lower device chain.

        [U][L]          [U][L]
vlan1    1  5    vlan4   1  4
vlan2    2  4    vlan5   2  3
vlan3    3  3      |
  |                |
  +-------+--------+
          |
vlan6     4  2
dummy0    5  1

After this patch, the nesting infrastructure code uses these variables
to check the depth level.

The 2nd patch fixes a Qdisc lockdep-related problem.
Before this patch, devices used a static lockdep map, so if devices of
the same type were nested, lockdep warned about a recursive locking
situation.
This patch makes these devices use a dynamic lockdep key instead of a
static lock class or subclass.

The 3rd patch fixes an unexpected IFF_BONDING bit unset.
In a nested bonding interface scenario, a bonding interface could lose
its IFF_BONDING flag. This should not happen.
This patch adds a condition before unsetting IFF_BONDING.

The 4th patch fixes a nested locking problem in the bonding interface.
The bonding interface has its own lock, which uses a static lockdep key.
Bonding interfaces can be nested while sharing that same key, so false
lockdep warnings occur.

The 5th patch fixes a nested locking problem in the team interface.
The team interface has its own lock, which uses a static lockdep key.
Team interfaces can be nested while sharing that same key, so false
lockdep warnings occur.

The 6th patch fixes a refcnt leak in the macsec module.
When the macsec module is unloaded, refcnt leaks occur.
But holding that refcnt is actually unnecessary,
so this patch simply removes that code.

The 7th patch adds an ignore flag to the adjacent structure.
In order to exchange an adjacent node safely, an ignore flag is needed.

The 8th patch makes vxlan add an adjacent link to limit the depth level.
A vxlan interface can set its lower interface, and these lower
interfaces are handled recursively, so if the chain of lower interfaces
is too deep, a stack overflow could happen.

The 9th patch removes unnecessary variables and a callback.
After the 1st patch, the subclass callback and related variables are
unnecessary, so this patch removes them.

The 10th patch fixes refcnt leaks in the virt_wifi module.
As with every nested interface, the upper interface should be deleted
before the lower interface is deleted.
In order to enforce this, a notifier routine is added in this patch.

v4 -> v5 :
- Update log messages
- Move variables position, 1st patch
- Fix iterator routine, 1st patch
- Add generic lockdep key code, replacing the previous 2nd, 4th, 5th, 6th, and 7th patches.
- Log message update, 10th patch
- Fix wrong error value in error path of __init routine, 10th patch
- Hold module refcnt when an interface is created, 10th patch
v3 -> v4 :
- Add new 12th patch to fix refcnt leaks in the virt_wifi module
- Fix wrong usage of netdev_upper_dev_link() in vxlan.c
- Preserve reverse christmas tree variable ordering in the vxlan.c
- Add missing static keyword in the dev.c
- Expose netdev_adjacent_change_{prepare/commit/abort} instead of
netdev_adjacent_dev_{enable/disable}
v2 -> v3 :
- Modify the nesting infrastructure code to use an iterator instead of recursion.
v1 -> v2 :
- Make the 3rd patch not add a new priv_flag.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>

+628 -521
+1 -1
drivers/net/bonding/bond_alb.c
··· 952 952 struct bond_vlan_tag *tags; 953 953 954 954 if (is_vlan_dev(upper) && 955 - bond->nest_level == vlan_get_encap_level(upper) - 1) { 955 + bond->dev->lower_level == upper->lower_level - 1) { 956 956 if (upper->addr_assign_type == NET_ADDR_STOLEN) { 957 957 alb_send_lp_vid(slave, mac_addr, 958 958 vlan_dev_vlan_proto(upper),
+10 -20
drivers/net/bonding/bond_main.c
··· 1733 1733 goto err_upper_unlink; 1734 1734 } 1735 1735 1736 - bond->nest_level = dev_get_nest_level(bond_dev) + 1; 1737 - 1738 1736 /* If the mode uses primary, then the following is handled by 1739 1737 * bond_change_active_slave(). 1740 1738 */ ··· 1814 1816 slave_disable_netpoll(new_slave); 1815 1817 1816 1818 err_close: 1817 - slave_dev->priv_flags &= ~IFF_BONDING; 1819 + if (!netif_is_bond_master(slave_dev)) 1820 + slave_dev->priv_flags &= ~IFF_BONDING; 1818 1821 dev_close(slave_dev); 1819 1822 1820 1823 err_restore_mac: ··· 1955 1956 if (!bond_has_slaves(bond)) { 1956 1957 bond_set_carrier(bond); 1957 1958 eth_hw_addr_random(bond_dev); 1958 - bond->nest_level = SINGLE_DEPTH_NESTING; 1959 - } else { 1960 - bond->nest_level = dev_get_nest_level(bond_dev) + 1; 1961 1959 } 1962 1960 1963 1961 unblock_netpoll_tx(); ··· 2013 2017 else 2014 2018 dev_set_mtu(slave_dev, slave->original_mtu); 2015 2019 2016 - slave_dev->priv_flags &= ~IFF_BONDING; 2020 + if (!netif_is_bond_master(slave_dev)) 2021 + slave_dev->priv_flags &= ~IFF_BONDING; 2017 2022 2018 2023 bond_free_slave(slave); 2019 2024 ··· 3439 3442 } 3440 3443 } 3441 3444 3442 - static int bond_get_nest_level(struct net_device *bond_dev) 3443 - { 3444 - struct bonding *bond = netdev_priv(bond_dev); 3445 - 3446 - return bond->nest_level; 3447 - } 3448 - 3449 3445 static void bond_get_stats(struct net_device *bond_dev, 3450 3446 struct rtnl_link_stats64 *stats) 3451 3447 { ··· 3447 3457 struct list_head *iter; 3448 3458 struct slave *slave; 3449 3459 3450 - spin_lock_nested(&bond->stats_lock, bond_get_nest_level(bond_dev)); 3460 + spin_lock(&bond->stats_lock); 3451 3461 memcpy(stats, &bond->bond_stats, sizeof(*stats)); 3452 3462 3453 3463 rcu_read_lock(); ··· 4258 4268 .ndo_neigh_setup = bond_neigh_setup, 4259 4269 .ndo_vlan_rx_add_vid = bond_vlan_rx_add_vid, 4260 4270 .ndo_vlan_rx_kill_vid = bond_vlan_rx_kill_vid, 4261 - .ndo_get_lock_subclass = bond_get_nest_level, 4262 4271 #ifdef CONFIG_NET_POLL_CONTROLLER 
4263 4272 .ndo_netpoll_setup = bond_netpoll_setup, 4264 4273 .ndo_netpoll_cleanup = bond_netpoll_cleanup, ··· 4284 4295 { 4285 4296 struct bonding *bond = netdev_priv(bond_dev); 4286 4297 4287 - spin_lock_init(&bond->mode_lock); 4288 - spin_lock_init(&bond->stats_lock); 4289 4298 bond->params = bonding_defaults; 4290 4299 4291 4300 /* Initialize pointers */ ··· 4352 4365 4353 4366 list_del(&bond->bond_list); 4354 4367 4368 + lockdep_unregister_key(&bond->stats_lock_key); 4355 4369 bond_debug_unregister(bond); 4356 4370 } 4357 4371 ··· 4756 4768 if (!bond->wq) 4757 4769 return -ENOMEM; 4758 4770 4759 - bond->nest_level = SINGLE_DEPTH_NESTING; 4760 - netdev_lockdep_set_classes(bond_dev); 4771 + spin_lock_init(&bond->mode_lock); 4772 + spin_lock_init(&bond->stats_lock); 4773 + lockdep_register_key(&bond->stats_lock_key); 4774 + lockdep_set_class(&bond->stats_lock, &bond->stats_lock_key); 4761 4775 4762 4776 list_add_tail(&bond->bond_list, &bn->dev_list); 4763 4777
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 3160 3160 struct mlx5_esw_flow_attr *attr, 3161 3161 u32 *action) 3162 3162 { 3163 - int nest_level = vlan_get_encap_level(attr->parse_attr->filter_dev); 3163 + int nest_level = attr->parse_attr->filter_dev->lower_level; 3164 3164 struct flow_action_entry vlan_act = { 3165 3165 .id = FLOW_ACTION_VLAN_POP, 3166 3166 };
-18
drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
··· 299 299 nfp_port_free(repr->port); 300 300 } 301 301 302 - static struct lock_class_key nfp_repr_netdev_xmit_lock_key; 303 - static struct lock_class_key nfp_repr_netdev_addr_lock_key; 304 - 305 - static void nfp_repr_set_lockdep_class_one(struct net_device *dev, 306 - struct netdev_queue *txq, 307 - void *_unused) 308 - { 309 - lockdep_set_class(&txq->_xmit_lock, &nfp_repr_netdev_xmit_lock_key); 310 - } 311 - 312 - static void nfp_repr_set_lockdep_class(struct net_device *dev) 313 - { 314 - lockdep_set_class(&dev->addr_list_lock, &nfp_repr_netdev_addr_lock_key); 315 - netdev_for_each_tx_queue(dev, nfp_repr_set_lockdep_class_one, NULL); 316 - } 317 - 318 302 int nfp_repr_init(struct nfp_app *app, struct net_device *netdev, 319 303 u32 cmsg_port_id, struct nfp_port *port, 320 304 struct net_device *pf_netdev) ··· 307 323 struct nfp_net *nn = netdev_priv(pf_netdev); 308 324 u32 repr_cap = nn->tlv_caps.repr_cap; 309 325 int err; 310 - 311 - nfp_repr_set_lockdep_class(netdev); 312 326 313 327 repr->port = port; 314 328 repr->dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX, GFP_KERNEL);
-22
drivers/net/hamradio/bpqether.c
··· 107 107 108 108 static LIST_HEAD(bpq_devices); 109 109 110 - /* 111 - * bpqether network devices are paired with ethernet devices below them, so 112 - * form a special "super class" of normal ethernet devices; split their locks 113 - * off into a separate class since they always nest. 114 - */ 115 - static struct lock_class_key bpq_netdev_xmit_lock_key; 116 - static struct lock_class_key bpq_netdev_addr_lock_key; 117 - 118 - static void bpq_set_lockdep_class_one(struct net_device *dev, 119 - struct netdev_queue *txq, 120 - void *_unused) 121 - { 122 - lockdep_set_class(&txq->_xmit_lock, &bpq_netdev_xmit_lock_key); 123 - } 124 - 125 - static void bpq_set_lockdep_class(struct net_device *dev) 126 - { 127 - lockdep_set_class(&dev->addr_list_lock, &bpq_netdev_addr_lock_key); 128 - netdev_for_each_tx_queue(dev, bpq_set_lockdep_class_one, NULL); 129 - } 130 - 131 110 /* ------------------------------------------------------------------------ */ 132 111 133 112 ··· 477 498 err = register_netdevice(ndev); 478 499 if (err) 479 500 goto error; 480 - bpq_set_lockdep_class(ndev); 481 501 482 502 /* List protected by RTNL */ 483 503 list_add_rcu(&bpq->bpq_list, &bpq_devices);
-2
drivers/net/hyperv/netvsc_drv.c
··· 2335 2335 NETIF_F_HW_VLAN_CTAG_RX; 2336 2336 net->vlan_features = net->features; 2337 2337 2338 - netdev_lockdep_set_classes(net); 2339 - 2340 2338 /* MTU range: 68 - 1500 or 65521 */ 2341 2339 net->min_mtu = NETVSC_MTU_MIN; 2342 2340 if (nvdev->nvsp_version >= NVSP_PROTOCOL_VERSION_2)
-2
drivers/net/ipvlan/ipvlan_main.c
··· 131 131 dev->gso_max_segs = phy_dev->gso_max_segs; 132 132 dev->hard_header_len = phy_dev->hard_header_len; 133 133 134 - netdev_lockdep_set_classes(dev); 135 - 136 134 ipvlan->pcpu_stats = netdev_alloc_pcpu_stats(struct ipvl_pcpu_stats); 137 135 if (!ipvlan->pcpu_stats) 138 136 return -ENOMEM;
-18
drivers/net/macsec.c
··· 267 267 struct pcpu_secy_stats __percpu *stats; 268 268 struct list_head secys; 269 269 struct gro_cells gro_cells; 270 - unsigned int nest_level; 271 270 }; 272 271 273 272 /** ··· 2749 2750 2750 2751 #define MACSEC_FEATURES \ 2751 2752 (NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST) 2752 - static struct lock_class_key macsec_netdev_addr_lock_key; 2753 2753 2754 2754 static int macsec_dev_init(struct net_device *dev) 2755 2755 { ··· 2956 2958 return macsec_priv(dev)->real_dev->ifindex; 2957 2959 } 2958 2960 2959 - static int macsec_get_nest_level(struct net_device *dev) 2960 - { 2961 - return macsec_priv(dev)->nest_level; 2962 - } 2963 - 2964 2961 static const struct net_device_ops macsec_netdev_ops = { 2965 2962 .ndo_init = macsec_dev_init, 2966 2963 .ndo_uninit = macsec_dev_uninit, ··· 2969 2976 .ndo_start_xmit = macsec_start_xmit, 2970 2977 .ndo_get_stats64 = macsec_get_stats64, 2971 2978 .ndo_get_iflink = macsec_get_iflink, 2972 - .ndo_get_lock_subclass = macsec_get_nest_level, 2973 2979 }; 2974 2980 2975 2981 static const struct device_type macsec_type = { ··· 2993 3001 static void macsec_free_netdev(struct net_device *dev) 2994 3002 { 2995 3003 struct macsec_dev *macsec = macsec_priv(dev); 2996 - struct net_device *real_dev = macsec->real_dev; 2997 3004 2998 3005 free_percpu(macsec->stats); 2999 3006 free_percpu(macsec->secy.tx_sc.stats); 3000 3007 3001 - dev_put(real_dev); 3002 3008 } 3003 3009 3004 3010 static void macsec_setup(struct net_device *dev) ··· 3250 3260 err = register_netdevice(dev); 3251 3261 if (err < 0) 3252 3262 return err; 3253 - 3254 - dev_hold(real_dev); 3255 - 3256 - macsec->nest_level = dev_get_nest_level(real_dev) + 1; 3257 - netdev_lockdep_set_classes(dev); 3258 - lockdep_set_class_and_subclass(&dev->addr_list_lock, 3259 - &macsec_netdev_addr_lock_key, 3260 - macsec_get_nest_level(dev)); 3261 3263 3262 3264 err = netdev_upper_dev_link(real_dev, dev, extack); 3263 3265 if (err < 0)
-19
drivers/net/macvlan.c
··· 852 852 * "super class" of normal network devices; split their locks off into a 853 853 * separate class since they always nest. 854 854 */ 855 - static struct lock_class_key macvlan_netdev_addr_lock_key; 856 - 857 855 #define ALWAYS_ON_OFFLOADS \ 858 856 (NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE | \ 859 857 NETIF_F_GSO_ROBUST | NETIF_F_GSO_ENCAP_ALL) ··· 866 868 867 869 #define MACVLAN_STATE_MASK \ 868 870 ((1<<__LINK_STATE_NOCARRIER) | (1<<__LINK_STATE_DORMANT)) 869 - 870 - static int macvlan_get_nest_level(struct net_device *dev) 871 - { 872 - return ((struct macvlan_dev *)netdev_priv(dev))->nest_level; 873 - } 874 - 875 - static void macvlan_set_lockdep_class(struct net_device *dev) 876 - { 877 - netdev_lockdep_set_classes(dev); 878 - lockdep_set_class_and_subclass(&dev->addr_list_lock, 879 - &macvlan_netdev_addr_lock_key, 880 - macvlan_get_nest_level(dev)); 881 - } 882 871 883 872 static int macvlan_init(struct net_device *dev) 884 873 { ··· 884 899 dev->gso_max_size = lowerdev->gso_max_size; 885 900 dev->gso_max_segs = lowerdev->gso_max_segs; 886 901 dev->hard_header_len = lowerdev->hard_header_len; 887 - 888 - macvlan_set_lockdep_class(dev); 889 902 890 903 vlan->pcpu_stats = netdev_alloc_pcpu_stats(struct vlan_pcpu_stats); 891 904 if (!vlan->pcpu_stats) ··· 1144 1161 .ndo_fdb_add = macvlan_fdb_add, 1145 1162 .ndo_fdb_del = macvlan_fdb_del, 1146 1163 .ndo_fdb_dump = ndo_dflt_fdb_dump, 1147 - .ndo_get_lock_subclass = macvlan_get_nest_level, 1148 1164 #ifdef CONFIG_NET_POLL_CONTROLLER 1149 1165 .ndo_poll_controller = macvlan_dev_poll_controller, 1150 1166 .ndo_netpoll_setup = macvlan_dev_netpoll_setup, ··· 1427 1445 vlan->dev = dev; 1428 1446 vlan->port = port; 1429 1447 vlan->set_features = MACVLAN_FEATURES; 1430 - vlan->nest_level = dev_get_nest_level(lowerdev) + 1; 1431 1448 1432 1449 vlan->mode = MACVLAN_MODE_VEPA; 1433 1450 if (data && data[IFLA_MACVLAN_MODE])
-2
drivers/net/ppp/ppp_generic.c
··· 1324 1324 { 1325 1325 struct ppp *ppp; 1326 1326 1327 - netdev_lockdep_set_classes(dev); 1328 - 1329 1327 ppp = netdev_priv(dev); 1330 1328 /* Let the netdevice take a reference on the ppp file. This ensures 1331 1329 * that ppp_destroy_interface() won't run before the device gets
+12 -4
drivers/net/team/team.c
··· 1615 1615 int err; 1616 1616 1617 1617 team->dev = dev; 1618 - mutex_init(&team->lock); 1619 1618 team_set_no_mode(team); 1620 1619 1621 1620 team->pcpu_stats = netdev_alloc_pcpu_stats(struct team_pcpu_stats); ··· 1641 1642 goto err_options_register; 1642 1643 netif_carrier_off(dev); 1643 1644 1644 - netdev_lockdep_set_classes(dev); 1645 + lockdep_register_key(&team->team_lock_key); 1646 + __mutex_init(&team->lock, "team->team_lock_key", &team->team_lock_key); 1645 1647 1646 1648 return 0; 1647 1649 ··· 1673 1673 team_queue_override_fini(team); 1674 1674 mutex_unlock(&team->lock); 1675 1675 netdev_change_features(dev); 1676 + lockdep_unregister_key(&team->team_lock_key); 1676 1677 } 1677 1678 1678 1679 static void team_destructor(struct net_device *dev) ··· 1977 1976 err = team_port_del(team, port_dev); 1978 1977 mutex_unlock(&team->lock); 1979 1978 1980 - if (!err) 1981 - netdev_change_features(dev); 1979 + if (err) 1980 + return err; 1981 + 1982 + if (netif_is_team_master(port_dev)) { 1983 + lockdep_unregister_key(&team->team_lock_key); 1984 + lockdep_register_key(&team->team_lock_key); 1985 + lockdep_set_class(&team->lock, &team->team_lock_key); 1986 + } 1987 + netdev_change_features(dev); 1982 1988 1983 1989 return err; 1984 1990 }
-1
drivers/net/vrf.c
··· 865 865 866 866 /* similarly, oper state is irrelevant; set to up to avoid confusion */ 867 867 dev->operstate = IF_OPER_UP; 868 - netdev_lockdep_set_classes(dev); 869 868 return 0; 870 869 871 870 out_rth:
+43 -10
drivers/net/vxlan.c
··· 3566 3566 { 3567 3567 struct vxlan_net *vn = net_generic(net, vxlan_net_id); 3568 3568 struct vxlan_dev *vxlan = netdev_priv(dev); 3569 + struct net_device *remote_dev = NULL; 3569 3570 struct vxlan_fdb *f = NULL; 3570 3571 bool unregister = false; 3572 + struct vxlan_rdst *dst; 3571 3573 int err; 3572 3574 3575 + dst = &vxlan->default_dst; 3573 3576 err = vxlan_dev_configure(net, dev, conf, false, extack); 3574 3577 if (err) 3575 3578 return err; ··· 3580 3577 dev->ethtool_ops = &vxlan_ethtool_ops; 3581 3578 3582 3579 /* create an fdb entry for a valid default destination */ 3583 - if (!vxlan_addr_any(&vxlan->default_dst.remote_ip)) { 3580 + if (!vxlan_addr_any(&dst->remote_ip)) { 3584 3581 err = vxlan_fdb_create(vxlan, all_zeros_mac, 3585 - &vxlan->default_dst.remote_ip, 3582 + &dst->remote_ip, 3586 3583 NUD_REACHABLE | NUD_PERMANENT, 3587 3584 vxlan->cfg.dst_port, 3588 - vxlan->default_dst.remote_vni, 3589 - vxlan->default_dst.remote_vni, 3590 - vxlan->default_dst.remote_ifindex, 3585 + dst->remote_vni, 3586 + dst->remote_vni, 3587 + dst->remote_ifindex, 3591 3588 NTF_SELF, &f); 3592 3589 if (err) 3593 3590 return err; ··· 3598 3595 goto errout; 3599 3596 unregister = true; 3600 3597 3598 + if (dst->remote_ifindex) { 3599 + remote_dev = __dev_get_by_index(net, dst->remote_ifindex); 3600 + if (!remote_dev) 3601 + goto errout; 3602 + 3603 + err = netdev_upper_dev_link(remote_dev, dev, extack); 3604 + if (err) 3605 + goto errout; 3606 + } 3607 + 3601 3608 err = rtnl_configure_link(dev, NULL); 3602 3609 if (err) 3603 - goto errout; 3610 + goto unlink; 3604 3611 3605 3612 if (f) { 3606 - vxlan_fdb_insert(vxlan, all_zeros_mac, 3607 - vxlan->default_dst.remote_vni, f); 3613 + vxlan_fdb_insert(vxlan, all_zeros_mac, dst->remote_vni, f); 3608 3614 3609 3615 /* notify default fdb entry */ 3610 3616 err = vxlan_fdb_notify(vxlan, f, first_remote_rtnl(f), 3611 3617 RTM_NEWNEIGH, true, extack); 3612 3618 if (err) { 3613 3619 vxlan_fdb_destroy(vxlan, f, false, false); 3620 
+ if (remote_dev) 3621 + netdev_upper_dev_unlink(remote_dev, dev); 3614 3622 goto unregister; 3615 3623 } 3616 3624 } 3617 3625 3618 3626 list_add(&vxlan->next, &vn->vxlan_list); 3627 + if (remote_dev) 3628 + dst->remote_dev = remote_dev; 3619 3629 return 0; 3620 - 3630 + unlink: 3631 + if (remote_dev) 3632 + netdev_upper_dev_unlink(remote_dev, dev); 3621 3633 errout: 3622 3634 /* unregister_netdevice() destroys the default FDB entry with deletion 3623 3635 * notification. But the addition notification was not sent yet, so ··· 3950 3932 struct netlink_ext_ack *extack) 3951 3933 { 3952 3934 struct vxlan_dev *vxlan = netdev_priv(dev); 3953 - struct vxlan_rdst *dst = &vxlan->default_dst; 3954 3935 struct net_device *lowerdev; 3955 3936 struct vxlan_config conf; 3937 + struct vxlan_rdst *dst; 3956 3938 int err; 3957 3939 3940 + dst = &vxlan->default_dst; 3958 3941 err = vxlan_nl2conf(tb, data, dev, &conf, true, extack); 3959 3942 if (err) 3960 3943 return err; 3961 3944 3962 3945 err = vxlan_config_validate(vxlan->net, &conf, &lowerdev, 3963 3946 vxlan, extack); 3947 + if (err) 3948 + return err; 3949 + 3950 + err = netdev_adjacent_change_prepare(dst->remote_dev, lowerdev, dev, 3951 + extack); 3964 3952 if (err) 3965 3953 return err; 3966 3954 ··· 3986 3962 NTF_SELF, true, extack); 3987 3963 if (err) { 3988 3964 spin_unlock_bh(&vxlan->hash_lock[hash_index]); 3965 + netdev_adjacent_change_abort(dst->remote_dev, 3966 + lowerdev, dev); 3989 3967 return err; 3990 3968 } 3991 3969 } ··· 4005 3979 if (conf.age_interval != vxlan->cfg.age_interval) 4006 3980 mod_timer(&vxlan->age_timer, jiffies); 4007 3981 3982 + netdev_adjacent_change_commit(dst->remote_dev, lowerdev, dev); 3983 + if (lowerdev && lowerdev != dst->remote_dev) 3984 + dst->remote_dev = lowerdev; 3985 + 3986 + netdev_update_lockdep_key(lowerdev); 4008 3987 vxlan_config_apply(dev, &conf, lowerdev, vxlan->net, true); 4009 3988 return 0; 4010 3989 } ··· 4022 3991 4023 3992 list_del(&vxlan->next); 4024 3993 
unregister_netdevice_queue(dev, head); 3994 + if (vxlan->default_dst.remote_dev) 3995 + netdev_upper_dev_unlink(vxlan->default_dst.remote_dev, dev); 4025 3996 } 4026 3997 4027 3998 static size_t vxlan_get_size(const struct net_device *dev)
-25
drivers/net/wireless/intersil/hostap/hostap_hw.c
··· 3041 3041 } 3042 3042 } 3043 3043 3044 - 3045 - /* 3046 - * HostAP uses two layers of net devices, where the inner 3047 - * layer gets called all the time from the outer layer. 3048 - * This is a natural nesting, which needs a split lock type. 3049 - */ 3050 - static struct lock_class_key hostap_netdev_xmit_lock_key; 3051 - static struct lock_class_key hostap_netdev_addr_lock_key; 3052 - 3053 - static void prism2_set_lockdep_class_one(struct net_device *dev, 3054 - struct netdev_queue *txq, 3055 - void *_unused) 3056 - { 3057 - lockdep_set_class(&txq->_xmit_lock, 3058 - &hostap_netdev_xmit_lock_key); 3059 - } 3060 - 3061 - static void prism2_set_lockdep_class(struct net_device *dev) 3062 - { 3063 - lockdep_set_class(&dev->addr_list_lock, 3064 - &hostap_netdev_addr_lock_key); 3065 - netdev_for_each_tx_queue(dev, prism2_set_lockdep_class_one, NULL); 3066 - } 3067 - 3068 3044 static struct net_device * 3069 3045 prism2_init_local_data(struct prism2_helper_functions *funcs, int card_idx, 3070 3046 struct device *sdev) ··· 3199 3223 if (ret >= 0) 3200 3224 ret = register_netdevice(dev); 3201 3225 3202 - prism2_set_lockdep_class(dev); 3203 3226 rtnl_unlock(); 3204 3227 if (ret < 0) { 3205 3228 printk(KERN_WARNING "%s: register netdevice failed!\n",
+52 -2
drivers/net/wireless/virt_wifi.c
··· 548 548 priv->is_connected = false; 549 549 priv->is_up = false; 550 550 INIT_DELAYED_WORK(&priv->connect, virt_wifi_connect_complete); 551 + __module_get(THIS_MODULE); 551 552 552 553 return 0; 553 554 unregister_netdev: ··· 579 578 netdev_upper_dev_unlink(priv->lowerdev, dev); 580 579 581 580 unregister_netdevice_queue(dev, head); 581 + module_put(THIS_MODULE); 582 582 583 583 /* Deleting the wiphy is handled in the module destructor. */ 584 584 } ··· 592 590 .priv_size = sizeof(struct virt_wifi_netdev_priv), 593 591 }; 594 592 593 + static bool netif_is_virt_wifi_dev(const struct net_device *dev) 594 + { 595 + return rcu_access_pointer(dev->rx_handler) == virt_wifi_rx_handler; 596 + } 597 + 598 + static int virt_wifi_event(struct notifier_block *this, unsigned long event, 599 + void *ptr) 600 + { 601 + struct net_device *lower_dev = netdev_notifier_info_to_dev(ptr); 602 + struct virt_wifi_netdev_priv *priv; 603 + struct net_device *upper_dev; 604 + LIST_HEAD(list_kill); 605 + 606 + if (!netif_is_virt_wifi_dev(lower_dev)) 607 + return NOTIFY_DONE; 608 + 609 + switch (event) { 610 + case NETDEV_UNREGISTER: 611 + priv = rtnl_dereference(lower_dev->rx_handler_data); 612 + if (!priv) 613 + return NOTIFY_DONE; 614 + 615 + upper_dev = priv->upperdev; 616 + 617 + upper_dev->rtnl_link_ops->dellink(upper_dev, &list_kill); 618 + unregister_netdevice_many(&list_kill); 619 + break; 620 + } 621 + 622 + return NOTIFY_DONE; 623 + } 624 + 625 + static struct notifier_block virt_wifi_notifier = { 626 + .notifier_call = virt_wifi_event, 627 + }; 628 + 595 629 /* Acquires and releases the rtnl lock. */ 596 630 static int __init virt_wifi_init_module(void) 597 631 { ··· 636 598 /* Guaranteed to be locallly-administered and not multicast. 
*/ 637 599 eth_random_addr(fake_router_bssid); 638 600 601 + err = register_netdevice_notifier(&virt_wifi_notifier); 602 + if (err) 603 + return err; 604 + 605 + err = -ENOMEM; 639 606 common_wiphy = virt_wifi_make_wiphy(); 640 607 if (!common_wiphy) 641 - return -ENOMEM; 608 + goto notifier; 642 609 643 610 err = rtnl_link_register(&virt_wifi_link_ops); 644 611 if (err) 645 - virt_wifi_destroy_wiphy(common_wiphy); 612 + goto destroy_wiphy; 646 613 614 + return 0; 615 + 616 + destroy_wiphy: 617 + virt_wifi_destroy_wiphy(common_wiphy); 618 + notifier: 619 + unregister_netdevice_notifier(&virt_wifi_notifier); 647 620 return err; 648 621 } 649 622 ··· 664 615 /* Will delete any devices that depend on the wiphy. */ 665 616 rtnl_link_unregister(&virt_wifi_link_ops); 666 617 virt_wifi_destroy_wiphy(common_wiphy); 618 + unregister_netdevice_notifier(&virt_wifi_notifier); 667 619 } 668 620 669 621 module_init(virt_wifi_init_module);
-1
include/linux/if_macvlan.h
··· 29 29 netdev_features_t set_features; 30 30 enum macvlan_mode mode; 31 31 u16 flags; 32 - int nest_level; 33 32 unsigned int macaddr_count; 34 33 #ifdef CONFIG_NET_POLL_CONTROLLER 35 34 struct netpoll *netpoll;
+1
include/linux/if_team.h
··· 223 223 atomic_t count_pending; 224 224 struct delayed_work dw; 225 225 } mcast_rejoin; 226 + struct lock_class_key team_lock_key; 226 227 long mode_priv[TEAM_MODE_PRIV_LONGS]; 227 228 }; 228 229
-11
include/linux/if_vlan.h
··· 182 182 #ifdef CONFIG_NET_POLL_CONTROLLER 183 183 struct netpoll *netpoll; 184 184 #endif 185 - unsigned int nest_level; 186 185 }; 187 186 188 187 static inline struct vlan_dev_priv *vlan_dev_priv(const struct net_device *dev) ··· 220 221 221 222 extern bool vlan_uses_dev(const struct net_device *dev); 222 223 223 - static inline int vlan_get_encap_level(struct net_device *dev) 224 - { 225 - BUG_ON(!is_vlan_dev(dev)); 226 - return vlan_dev_priv(dev)->nest_level; 227 - } 228 224 #else 229 225 static inline struct net_device * 230 226 __vlan_find_dev_deep_rcu(struct net_device *real_dev, ··· 288 294 static inline bool vlan_uses_dev(const struct net_device *dev) 289 295 { 290 296 return false; 291 - } 292 - static inline int vlan_get_encap_level(struct net_device *dev) 293 - { 294 - BUG(); 295 - return 0; 296 297 } 297 298 #endif 298 299
+27 -34
include/linux/netdevice.h
··· 925 925 struct devlink; 926 926 struct tlsdev_ops; 927 927 928 + 928 929 /* 929 930 * This structure defines the management hooks for network devices. 930 931 * The following hooks can be defined; unless noted otherwise, they are ··· 1422 1421 void (*ndo_dfwd_del_station)(struct net_device *pdev, 1423 1422 void *priv); 1424 1423 1425 - int (*ndo_get_lock_subclass)(struct net_device *dev); 1426 1424 int (*ndo_set_tx_maxrate)(struct net_device *dev, 1427 1425 int queue_index, 1428 1426 u32 maxrate); ··· 1649 1649 * @perm_addr: Permanent hw address 1650 1650 * @addr_assign_type: Hw address assignment type 1651 1651 * @addr_len: Hardware address length 1652 + * @upper_level: Maximum depth level of upper devices. 1653 + * @lower_level: Maximum depth level of lower devices. 1652 1654 * @neigh_priv_len: Used in neigh_alloc() 1653 1655 * @dev_id: Used to differentiate devices that share 1654 1656 * the same link layer address ··· 1760 1758 * @phydev: Physical device may attach itself 1761 1759 * for hardware timestamping 1762 1760 * @sfp_bus: attached &struct sfp_bus structure. 
1763 - * 1764 - * @qdisc_tx_busylock: lockdep class annotating Qdisc->busylock spinlock 1765 - * @qdisc_running_key: lockdep class annotating Qdisc->running seqcount 1761 + * @qdisc_tx_busylock_key: lockdep class annotating Qdisc->busylock 1762 + spinlock 1763 + * @qdisc_running_key: lockdep class annotating Qdisc->running seqcount 1764 + * @qdisc_xmit_lock_key: lockdep class annotating 1765 + * netdev_queue->_xmit_lock spinlock 1766 + * @addr_list_lock_key: lockdep class annotating 1767 + * net_device->addr_list_lock spinlock 1766 1768 * 1767 1769 * @proto_down: protocol port state information can be sent to the 1768 1770 * switch driver and used to set the phys state of the ··· 1881 1875 unsigned char perm_addr[MAX_ADDR_LEN]; 1882 1876 unsigned char addr_assign_type; 1883 1877 unsigned char addr_len; 1878 + unsigned char upper_level; 1879 + unsigned char lower_level; 1884 1880 unsigned short neigh_priv_len; 1885 1881 unsigned short dev_id; 1886 1882 unsigned short dev_port; ··· 2053 2045 #endif 2054 2046 struct phy_device *phydev; 2055 2047 struct sfp_bus *sfp_bus; 2056 - struct lock_class_key *qdisc_tx_busylock; 2057 - struct lock_class_key *qdisc_running_key; 2048 + struct lock_class_key qdisc_tx_busylock_key; 2049 + struct lock_class_key qdisc_running_key; 2050 + struct lock_class_key qdisc_xmit_lock_key; 2051 + struct lock_class_key addr_list_lock_key; 2058 2052 bool proto_down; 2059 2053 unsigned wol_enabled:1; 2060 2054 }; ··· 2132 2122 2133 2123 for (i = 0; i < dev->num_tx_queues; i++) 2134 2124 f(dev, &dev->_tx[i], arg); 2135 - } 2136 - 2137 - #define netdev_lockdep_set_classes(dev) \ 2138 - { \ 2139 - static struct lock_class_key qdisc_tx_busylock_key; \ 2140 - static struct lock_class_key qdisc_running_key; \ 2141 - static struct lock_class_key qdisc_xmit_lock_key; \ 2142 - static struct lock_class_key dev_addr_list_lock_key; \ 2143 - unsigned int i; \ 2144 - \ 2145 - (dev)->qdisc_tx_busylock = &qdisc_tx_busylock_key; \ 2146 - (dev)->qdisc_running_key = 
&qdisc_running_key; \ 2147 - lockdep_set_class(&(dev)->addr_list_lock, \ 2148 - &dev_addr_list_lock_key); \ 2149 - for (i = 0; i < (dev)->num_tx_queues; i++) \ 2150 - lockdep_set_class(&(dev)->_tx[i]._xmit_lock, \ 2151 - &qdisc_xmit_lock_key); \ 2152 2125 } 2153 2126 2154 2127 u16 netdev_pick_tx(struct net_device *dev, struct sk_buff *skb, ··· 3132 3139 } 3133 3140 3134 3141 void netif_tx_stop_all_queues(struct net_device *dev); 3142 + void netdev_update_lockdep_key(struct net_device *dev); 3135 3143 3136 3144 static inline bool netif_tx_queue_stopped(const struct netdev_queue *dev_queue) 3137 3145 { ··· 4050 4056 spin_lock(&dev->addr_list_lock); 4051 4057 } 4052 4058 4053 - static inline void netif_addr_lock_nested(struct net_device *dev) 4054 - { 4055 - int subclass = SINGLE_DEPTH_NESTING; 4056 - 4057 - if (dev->netdev_ops->ndo_get_lock_subclass) 4058 - subclass = dev->netdev_ops->ndo_get_lock_subclass(dev); 4059 - 4060 - spin_lock_nested(&dev->addr_list_lock, subclass); 4061 - } 4062 - 4063 4059 static inline void netif_addr_lock_bh(struct net_device *dev) 4064 4060 { 4065 4061 spin_lock_bh(&dev->addr_list_lock); ··· 4313 4329 struct netlink_ext_ack *extack); 4314 4330 void netdev_upper_dev_unlink(struct net_device *dev, 4315 4331 struct net_device *upper_dev); 4332 + int netdev_adjacent_change_prepare(struct net_device *old_dev, 4333 + struct net_device *new_dev, 4334 + struct net_device *dev, 4335 + struct netlink_ext_ack *extack); 4336 + void netdev_adjacent_change_commit(struct net_device *old_dev, 4337 + struct net_device *new_dev, 4338 + struct net_device *dev); 4339 + void netdev_adjacent_change_abort(struct net_device *old_dev, 4340 + struct net_device *new_dev, 4341 + struct net_device *dev); 4316 4342 void netdev_adjacent_rename_links(struct net_device *dev, char *oldname); 4317 4343 void *netdev_lower_dev_get_private(struct net_device *dev, 4318 4344 struct net_device *lower_dev); ··· 4334 4340 extern u8 netdev_rss_key[NETDEV_RSS_KEY_LEN] 
__read_mostly; 4335 4341 void netdev_rss_key_fill(void *buffer, size_t len); 4336 4342 4337 - int dev_get_nest_level(struct net_device *dev); 4338 4343 int skb_checksum_help(struct sk_buff *skb); 4339 4344 int skb_crc32c_csum_help(struct sk_buff *skb); 4340 4345 int skb_csum_hwoffload_help(struct sk_buff *skb,
+1 -1
include/net/bonding.h
··· 203 203 struct slave __rcu *primary_slave; 204 204 struct bond_up_slave __rcu *slave_arr; /* Array of usable slaves */ 205 205 bool force_primary; 206 - u32 nest_level; 207 206 s32 slave_cnt; /* never change this value outside the attach/detach wrappers */ 208 207 int (*recv_probe)(const struct sk_buff *, struct bonding *, 209 208 struct slave *); ··· 238 239 struct dentry *debug_dir; 239 240 #endif /* CONFIG_DEBUG_FS */ 240 241 struct rtnl_link_stats64 bond_stats; 242 + struct lock_class_key stats_lock_key; 241 243 }; 242 244 243 245 #define bond_slave_get_rcu(dev) \
+1
include/net/vxlan.h

···
 	u8 offloaded:1;
 	__be32 remote_vni;
 	u32 remote_ifindex;
+	struct net_device *remote_dev;
 	struct list_head list;
 	struct rcu_head rcu;
 	struct dst_cache dst_cache;
-1
net/8021q/vlan.c

···
 	if (err < 0)
 		goto out_uninit_mvrp;
 
-	vlan->nest_level = dev_get_nest_level(real_dev) + 1;
 	err = register_netdevice(dev);
 	if (err < 0)
 		goto out_uninit_mvrp;
-33
net/8021q/vlan_dev.c

···
 	dev_uc_sync(vlan_dev_priv(vlan_dev)->real_dev, vlan_dev);
 }
 
-/*
- * vlan network devices have devices nesting below it, and are a special
- * "super class" of normal network devices; split their locks off into a
- * separate class since they always nest.
- */
-static struct lock_class_key vlan_netdev_xmit_lock_key;
-static struct lock_class_key vlan_netdev_addr_lock_key;
-
-static void vlan_dev_set_lockdep_one(struct net_device *dev,
-				     struct netdev_queue *txq,
-				     void *_subclass)
-{
-	lockdep_set_class_and_subclass(&txq->_xmit_lock,
-				       &vlan_netdev_xmit_lock_key,
-				       *(int *)_subclass);
-}
-
-static void vlan_dev_set_lockdep_class(struct net_device *dev, int subclass)
-{
-	lockdep_set_class_and_subclass(&dev->addr_list_lock,
-				       &vlan_netdev_addr_lock_key,
-				       subclass);
-	netdev_for_each_tx_queue(dev, vlan_dev_set_lockdep_one, &subclass);
-}
-
-static int vlan_dev_get_lock_subclass(struct net_device *dev)
-{
-	return vlan_dev_priv(dev)->nest_level;
-}
-
 static const struct header_ops vlan_header_ops = {
 	.create	 = vlan_dev_hard_header,
 	.parse	 = eth_header_parse,
···
 	dev->netdev_ops = &vlan_netdev_ops;
 
 	SET_NETDEV_DEVTYPE(dev, &vlan_type);
-
-	vlan_dev_set_lockdep_class(dev, vlan_dev_get_lock_subclass(dev));
 
 	vlan->vlan_pcpu_stats = netdev_alloc_pcpu_stats(struct vlan_pcpu_stats);
 	if (!vlan->vlan_pcpu_stats)
···
 	.ndo_netpoll_cleanup	= vlan_dev_netpoll_cleanup,
 #endif
 	.ndo_fix_features	= vlan_dev_fix_features,
-	.ndo_get_lock_subclass  = vlan_dev_get_lock_subclass,
 	.ndo_get_iflink		= vlan_dev_get_iflink,
 };
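The code removed above worked around lockdep with lockdep_set_class_and_subclass(), feeding nest_level in as the subclass; that caps stacking at lockdep's fixed subclass budget and still needed per-driver boilerplate. A toy model of the underlying problem follows (toy_key, toy_lock and would_warn() are illustrative, not lockdep's API): lockdep identifies a lock's class by the address of its key, so two nested vlan devices sharing one static key look like the same lock taken recursively, while per-device dynamic keys yield distinct classes.

```c
#include <stddef.h>

/* Toy model of lockdep classification: a lock's "class" is simply the
 * address of the key it was initialized with. */
struct toy_key {
	int unused;
};

struct toy_lock {
	const struct toy_key *class;
};

/* lockdep reports possible recursive locking when a task holding "held"
 * acquires "next" and both locks belong to the same class (ignoring
 * subclasses, which are limited and were the old workaround). */
static int would_warn(const struct toy_lock *held, const struct toy_lock *next)
{
	return held->class == next->class;
}
```

This is why the series replaces every static "split their locks off into a separate class" key in the drivers below with dynamic, per-netdev keys registered in net/core/dev.c: nesting the same device type then produces distinct classes automatically, at any depth.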
-32
net/batman-adv/soft-interface.c

···
 	return 0;
 }
 
-/* batman-adv network devices have devices nesting below it and are a special
- * "super class" of normal network devices; split their locks off into a
- * separate class since they always nest.
- */
-static struct lock_class_key batadv_netdev_xmit_lock_key;
-static struct lock_class_key batadv_netdev_addr_lock_key;
-
-/**
- * batadv_set_lockdep_class_one() - Set lockdep class for a single tx queue
- * @dev: device which owns the tx queue
- * @txq: tx queue to modify
- * @_unused: always NULL
- */
-static void batadv_set_lockdep_class_one(struct net_device *dev,
-					 struct netdev_queue *txq,
-					 void *_unused)
-{
-	lockdep_set_class(&txq->_xmit_lock, &batadv_netdev_xmit_lock_key);
-}
-
-/**
- * batadv_set_lockdep_class() - Set txq and addr_list lockdep class
- * @dev: network device to modify
- */
-static void batadv_set_lockdep_class(struct net_device *dev)
-{
-	lockdep_set_class(&dev->addr_list_lock, &batadv_netdev_addr_lock_key);
-	netdev_for_each_tx_queue(dev, batadv_set_lockdep_class_one, NULL);
-}
-
 /**
  * batadv_softif_init_late() - late stage initialization of soft interface
  * @dev: registered network device to modify
···
 	u32 random_seqno;
 	int ret;
 	size_t cnt_len = sizeof(u64) * BATADV_CNT_NUM;
-
-	batadv_set_lockdep_class(dev);
 
 	bat_priv = netdev_priv(dev);
 	bat_priv->soft_iface = dev;
-8
net/bluetooth/6lowpan.c

···
 	return err < 0 ? NET_XMIT_DROP : err;
 }
 
-static int bt_dev_init(struct net_device *dev)
-{
-	netdev_lockdep_set_classes(dev);
-
-	return 0;
-}
-
 static const struct net_device_ops netdev_ops = {
-	.ndo_init		= bt_dev_init,
 	.ndo_start_xmit		= bt_xmit,
 };
-8
net/bridge/br_device.c

···
 const struct nf_br_ops __rcu *nf_br_ops __read_mostly;
 EXPORT_SYMBOL_GPL(nf_br_ops);
 
-static struct lock_class_key bridge_netdev_addr_lock_key;
-
 /* net device transmit always called with BH disabled */
 netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 {
···
 	return NETDEV_TX_OK;
 }
 
-static void br_set_lockdep_class(struct net_device *dev)
-{
-	lockdep_set_class(&dev->addr_list_lock, &bridge_netdev_addr_lock_key);
-}
-
 static int br_dev_init(struct net_device *dev)
 {
 	struct net_bridge *br = netdev_priv(dev);
···
 		br_mdb_hash_fini(br);
 		br_fdb_hash_fini(br);
 	}
-	br_set_lockdep_class(dev);
 
 	return err;
 }
+464 -154
net/core/dev.c
··· 146 146 #include "net-sysfs.h" 147 147 148 148 #define MAX_GRO_SKBS 8 149 + #define MAX_NEST_DEV 8 149 150 150 151 /* This should be increased if a protocol with a bigger head is added. */ 151 152 #define GRO_MAX_HEAD (MAX_HEADER + 128) ··· 276 275 277 276 DEFINE_PER_CPU_ALIGNED(struct softnet_data, softnet_data); 278 277 EXPORT_PER_CPU_SYMBOL(softnet_data); 279 - 280 - #ifdef CONFIG_LOCKDEP 281 - /* 282 - * register_netdevice() inits txq->_xmit_lock and sets lockdep class 283 - * according to dev->type 284 - */ 285 - static const unsigned short netdev_lock_type[] = { 286 - ARPHRD_NETROM, ARPHRD_ETHER, ARPHRD_EETHER, ARPHRD_AX25, 287 - ARPHRD_PRONET, ARPHRD_CHAOS, ARPHRD_IEEE802, ARPHRD_ARCNET, 288 - ARPHRD_APPLETLK, ARPHRD_DLCI, ARPHRD_ATM, ARPHRD_METRICOM, 289 - ARPHRD_IEEE1394, ARPHRD_EUI64, ARPHRD_INFINIBAND, ARPHRD_SLIP, 290 - ARPHRD_CSLIP, ARPHRD_SLIP6, ARPHRD_CSLIP6, ARPHRD_RSRVD, 291 - ARPHRD_ADAPT, ARPHRD_ROSE, ARPHRD_X25, ARPHRD_HWX25, 292 - ARPHRD_PPP, ARPHRD_CISCO, ARPHRD_LAPB, ARPHRD_DDCMP, 293 - ARPHRD_RAWHDLC, ARPHRD_TUNNEL, ARPHRD_TUNNEL6, ARPHRD_FRAD, 294 - ARPHRD_SKIP, ARPHRD_LOOPBACK, ARPHRD_LOCALTLK, ARPHRD_FDDI, 295 - ARPHRD_BIF, ARPHRD_SIT, ARPHRD_IPDDP, ARPHRD_IPGRE, 296 - ARPHRD_PIMREG, ARPHRD_HIPPI, ARPHRD_ASH, ARPHRD_ECONET, 297 - ARPHRD_IRDA, ARPHRD_FCPP, ARPHRD_FCAL, ARPHRD_FCPL, 298 - ARPHRD_FCFABRIC, ARPHRD_IEEE80211, ARPHRD_IEEE80211_PRISM, 299 - ARPHRD_IEEE80211_RADIOTAP, ARPHRD_PHONET, ARPHRD_PHONET_PIPE, 300 - ARPHRD_IEEE802154, ARPHRD_VOID, ARPHRD_NONE}; 301 - 302 - static const char *const netdev_lock_name[] = { 303 - "_xmit_NETROM", "_xmit_ETHER", "_xmit_EETHER", "_xmit_AX25", 304 - "_xmit_PRONET", "_xmit_CHAOS", "_xmit_IEEE802", "_xmit_ARCNET", 305 - "_xmit_APPLETLK", "_xmit_DLCI", "_xmit_ATM", "_xmit_METRICOM", 306 - "_xmit_IEEE1394", "_xmit_EUI64", "_xmit_INFINIBAND", "_xmit_SLIP", 307 - "_xmit_CSLIP", "_xmit_SLIP6", "_xmit_CSLIP6", "_xmit_RSRVD", 308 - "_xmit_ADAPT", "_xmit_ROSE", "_xmit_X25", "_xmit_HWX25", 309 - 
"_xmit_PPP", "_xmit_CISCO", "_xmit_LAPB", "_xmit_DDCMP", 310 - "_xmit_RAWHDLC", "_xmit_TUNNEL", "_xmit_TUNNEL6", "_xmit_FRAD", 311 - "_xmit_SKIP", "_xmit_LOOPBACK", "_xmit_LOCALTLK", "_xmit_FDDI", 312 - "_xmit_BIF", "_xmit_SIT", "_xmit_IPDDP", "_xmit_IPGRE", 313 - "_xmit_PIMREG", "_xmit_HIPPI", "_xmit_ASH", "_xmit_ECONET", 314 - "_xmit_IRDA", "_xmit_FCPP", "_xmit_FCAL", "_xmit_FCPL", 315 - "_xmit_FCFABRIC", "_xmit_IEEE80211", "_xmit_IEEE80211_PRISM", 316 - "_xmit_IEEE80211_RADIOTAP", "_xmit_PHONET", "_xmit_PHONET_PIPE", 317 - "_xmit_IEEE802154", "_xmit_VOID", "_xmit_NONE"}; 318 - 319 - static struct lock_class_key netdev_xmit_lock_key[ARRAY_SIZE(netdev_lock_type)]; 320 - static struct lock_class_key netdev_addr_lock_key[ARRAY_SIZE(netdev_lock_type)]; 321 - 322 - static inline unsigned short netdev_lock_pos(unsigned short dev_type) 323 - { 324 - int i; 325 - 326 - for (i = 0; i < ARRAY_SIZE(netdev_lock_type); i++) 327 - if (netdev_lock_type[i] == dev_type) 328 - return i; 329 - /* the last key is used by default */ 330 - return ARRAY_SIZE(netdev_lock_type) - 1; 331 - } 332 - 333 - static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock, 334 - unsigned short dev_type) 335 - { 336 - int i; 337 - 338 - i = netdev_lock_pos(dev_type); 339 - lockdep_set_class_and_name(lock, &netdev_xmit_lock_key[i], 340 - netdev_lock_name[i]); 341 - } 342 - 343 - static inline void netdev_set_addr_lockdep_class(struct net_device *dev) 344 - { 345 - int i; 346 - 347 - i = netdev_lock_pos(dev->type); 348 - lockdep_set_class_and_name(&dev->addr_list_lock, 349 - &netdev_addr_lock_key[i], 350 - netdev_lock_name[i]); 351 - } 352 - #else 353 - static inline void netdev_set_xmit_lockdep_class(spinlock_t *lock, 354 - unsigned short dev_type) 355 - { 356 - } 357 - static inline void netdev_set_addr_lockdep_class(struct net_device *dev) 358 - { 359 - } 360 - #endif 361 278 362 279 /******************************************************************************* 363 280 * ··· 6408 6489 /* 
upper master flag, there can only be one master device per list */ 6409 6490 bool master; 6410 6491 6492 + /* lookup ignore flag */ 6493 + bool ignore; 6494 + 6411 6495 /* counter for the number of times this device was added to us */ 6412 6496 u16 ref_nr; 6413 6497 ··· 6433 6511 return NULL; 6434 6512 } 6435 6513 6436 - static int __netdev_has_upper_dev(struct net_device *upper_dev, void *data) 6514 + static int ____netdev_has_upper_dev(struct net_device *upper_dev, void *data) 6437 6515 { 6438 6516 struct net_device *dev = data; 6439 6517 ··· 6454 6532 { 6455 6533 ASSERT_RTNL(); 6456 6534 6457 - return netdev_walk_all_upper_dev_rcu(dev, __netdev_has_upper_dev, 6535 + return netdev_walk_all_upper_dev_rcu(dev, ____netdev_has_upper_dev, 6458 6536 upper_dev); 6459 6537 } 6460 6538 EXPORT_SYMBOL(netdev_has_upper_dev); ··· 6472 6550 bool netdev_has_upper_dev_all_rcu(struct net_device *dev, 6473 6551 struct net_device *upper_dev) 6474 6552 { 6475 - return !!netdev_walk_all_upper_dev_rcu(dev, __netdev_has_upper_dev, 6553 + return !!netdev_walk_all_upper_dev_rcu(dev, ____netdev_has_upper_dev, 6476 6554 upper_dev); 6477 6555 } 6478 6556 EXPORT_SYMBOL(netdev_has_upper_dev_all_rcu); ··· 6515 6593 return NULL; 6516 6594 } 6517 6595 EXPORT_SYMBOL(netdev_master_upper_dev_get); 6596 + 6597 + static struct net_device *__netdev_master_upper_dev_get(struct net_device *dev) 6598 + { 6599 + struct netdev_adjacent *upper; 6600 + 6601 + ASSERT_RTNL(); 6602 + 6603 + if (list_empty(&dev->adj_list.upper)) 6604 + return NULL; 6605 + 6606 + upper = list_first_entry(&dev->adj_list.upper, 6607 + struct netdev_adjacent, list); 6608 + if (likely(upper->master) && !upper->ignore) 6609 + return upper->dev; 6610 + return NULL; 6611 + } 6518 6612 6519 6613 /** 6520 6614 * netdev_has_any_lower_dev - Check if device is linked to some device ··· 6582 6644 } 6583 6645 EXPORT_SYMBOL(netdev_upper_get_next_dev_rcu); 6584 6646 6647 + static struct net_device *__netdev_next_upper_dev(struct net_device *dev, 
6648 + struct list_head **iter, 6649 + bool *ignore) 6650 + { 6651 + struct netdev_adjacent *upper; 6652 + 6653 + upper = list_entry((*iter)->next, struct netdev_adjacent, list); 6654 + 6655 + if (&upper->list == &dev->adj_list.upper) 6656 + return NULL; 6657 + 6658 + *iter = &upper->list; 6659 + *ignore = upper->ignore; 6660 + 6661 + return upper->dev; 6662 + } 6663 + 6585 6664 static struct net_device *netdev_next_upper_dev_rcu(struct net_device *dev, 6586 6665 struct list_head **iter) 6587 6666 { ··· 6616 6661 return upper->dev; 6617 6662 } 6618 6663 6664 + static int __netdev_walk_all_upper_dev(struct net_device *dev, 6665 + int (*fn)(struct net_device *dev, 6666 + void *data), 6667 + void *data) 6668 + { 6669 + struct net_device *udev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6670 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6671 + int ret, cur = 0; 6672 + bool ignore; 6673 + 6674 + now = dev; 6675 + iter = &dev->adj_list.upper; 6676 + 6677 + while (1) { 6678 + if (now != dev) { 6679 + ret = fn(now, data); 6680 + if (ret) 6681 + return ret; 6682 + } 6683 + 6684 + next = NULL; 6685 + while (1) { 6686 + udev = __netdev_next_upper_dev(now, &iter, &ignore); 6687 + if (!udev) 6688 + break; 6689 + if (ignore) 6690 + continue; 6691 + 6692 + next = udev; 6693 + niter = &udev->adj_list.upper; 6694 + dev_stack[cur] = now; 6695 + iter_stack[cur++] = iter; 6696 + break; 6697 + } 6698 + 6699 + if (!next) { 6700 + if (!cur) 6701 + return 0; 6702 + next = dev_stack[--cur]; 6703 + niter = iter_stack[cur]; 6704 + } 6705 + 6706 + now = next; 6707 + iter = niter; 6708 + } 6709 + 6710 + return 0; 6711 + } 6712 + 6619 6713 int netdev_walk_all_upper_dev_rcu(struct net_device *dev, 6620 6714 int (*fn)(struct net_device *dev, 6621 6715 void *data), 6622 6716 void *data) 6623 6717 { 6624 - struct net_device *udev; 6625 - struct list_head *iter; 6626 - int ret; 6718 + struct net_device *udev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6719 + struct list_head 
*niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6720 + int ret, cur = 0; 6627 6721 6628 - for (iter = &dev->adj_list.upper, 6629 - udev = netdev_next_upper_dev_rcu(dev, &iter); 6630 - udev; 6631 - udev = netdev_next_upper_dev_rcu(dev, &iter)) { 6632 - /* first is the upper device itself */ 6633 - ret = fn(udev, data); 6634 - if (ret) 6635 - return ret; 6722 + now = dev; 6723 + iter = &dev->adj_list.upper; 6636 6724 6637 - /* then look at all of its upper devices */ 6638 - ret = netdev_walk_all_upper_dev_rcu(udev, fn, data); 6639 - if (ret) 6640 - return ret; 6725 + while (1) { 6726 + if (now != dev) { 6727 + ret = fn(now, data); 6728 + if (ret) 6729 + return ret; 6730 + } 6731 + 6732 + next = NULL; 6733 + while (1) { 6734 + udev = netdev_next_upper_dev_rcu(now, &iter); 6735 + if (!udev) 6736 + break; 6737 + 6738 + next = udev; 6739 + niter = &udev->adj_list.upper; 6740 + dev_stack[cur] = now; 6741 + iter_stack[cur++] = iter; 6742 + break; 6743 + } 6744 + 6745 + if (!next) { 6746 + if (!cur) 6747 + return 0; 6748 + next = dev_stack[--cur]; 6749 + niter = iter_stack[cur]; 6750 + } 6751 + 6752 + now = next; 6753 + iter = niter; 6641 6754 } 6642 6755 6643 6756 return 0; 6644 6757 } 6645 6758 EXPORT_SYMBOL_GPL(netdev_walk_all_upper_dev_rcu); 6759 + 6760 + static bool __netdev_has_upper_dev(struct net_device *dev, 6761 + struct net_device *upper_dev) 6762 + { 6763 + ASSERT_RTNL(); 6764 + 6765 + return __netdev_walk_all_upper_dev(dev, ____netdev_has_upper_dev, 6766 + upper_dev); 6767 + } 6646 6768 6647 6769 /** 6648 6770 * netdev_lower_get_next_private - Get the next ->private from the ··· 6817 6785 return lower->dev; 6818 6786 } 6819 6787 6788 + static struct net_device *__netdev_next_lower_dev(struct net_device *dev, 6789 + struct list_head **iter, 6790 + bool *ignore) 6791 + { 6792 + struct netdev_adjacent *lower; 6793 + 6794 + lower = list_entry((*iter)->next, struct netdev_adjacent, list); 6795 + 6796 + if (&lower->list == &dev->adj_list.lower) 6797 + return NULL; 6798 
+ 6799 + *iter = &lower->list; 6800 + *ignore = lower->ignore; 6801 + 6802 + return lower->dev; 6803 + } 6804 + 6820 6805 int netdev_walk_all_lower_dev(struct net_device *dev, 6821 6806 int (*fn)(struct net_device *dev, 6822 6807 void *data), 6823 6808 void *data) 6824 6809 { 6825 - struct net_device *ldev; 6826 - struct list_head *iter; 6827 - int ret; 6810 + struct net_device *ldev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6811 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6812 + int ret, cur = 0; 6828 6813 6829 - for (iter = &dev->adj_list.lower, 6830 - ldev = netdev_next_lower_dev(dev, &iter); 6831 - ldev; 6832 - ldev = netdev_next_lower_dev(dev, &iter)) { 6833 - /* first is the lower device itself */ 6834 - ret = fn(ldev, data); 6835 - if (ret) 6836 - return ret; 6814 + now = dev; 6815 + iter = &dev->adj_list.lower; 6837 6816 6838 - /* then look at all of its lower devices */ 6839 - ret = netdev_walk_all_lower_dev(ldev, fn, data); 6840 - if (ret) 6841 - return ret; 6817 + while (1) { 6818 + if (now != dev) { 6819 + ret = fn(now, data); 6820 + if (ret) 6821 + return ret; 6822 + } 6823 + 6824 + next = NULL; 6825 + while (1) { 6826 + ldev = netdev_next_lower_dev(now, &iter); 6827 + if (!ldev) 6828 + break; 6829 + 6830 + next = ldev; 6831 + niter = &ldev->adj_list.lower; 6832 + dev_stack[cur] = now; 6833 + iter_stack[cur++] = iter; 6834 + break; 6835 + } 6836 + 6837 + if (!next) { 6838 + if (!cur) 6839 + return 0; 6840 + next = dev_stack[--cur]; 6841 + niter = iter_stack[cur]; 6842 + } 6843 + 6844 + now = next; 6845 + iter = niter; 6842 6846 } 6843 6847 6844 6848 return 0; 6845 6849 } 6846 6850 EXPORT_SYMBOL_GPL(netdev_walk_all_lower_dev); 6851 + 6852 + static int __netdev_walk_all_lower_dev(struct net_device *dev, 6853 + int (*fn)(struct net_device *dev, 6854 + void *data), 6855 + void *data) 6856 + { 6857 + struct net_device *ldev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6858 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 
1]; 6859 + int ret, cur = 0; 6860 + bool ignore; 6861 + 6862 + now = dev; 6863 + iter = &dev->adj_list.lower; 6864 + 6865 + while (1) { 6866 + if (now != dev) { 6867 + ret = fn(now, data); 6868 + if (ret) 6869 + return ret; 6870 + } 6871 + 6872 + next = NULL; 6873 + while (1) { 6874 + ldev = __netdev_next_lower_dev(now, &iter, &ignore); 6875 + if (!ldev) 6876 + break; 6877 + if (ignore) 6878 + continue; 6879 + 6880 + next = ldev; 6881 + niter = &ldev->adj_list.lower; 6882 + dev_stack[cur] = now; 6883 + iter_stack[cur++] = iter; 6884 + break; 6885 + } 6886 + 6887 + if (!next) { 6888 + if (!cur) 6889 + return 0; 6890 + next = dev_stack[--cur]; 6891 + niter = iter_stack[cur]; 6892 + } 6893 + 6894 + now = next; 6895 + iter = niter; 6896 + } 6897 + 6898 + return 0; 6899 + } 6847 6900 6848 6901 static struct net_device *netdev_next_lower_dev_rcu(struct net_device *dev, 6849 6902 struct list_head **iter) ··· 6944 6827 return lower->dev; 6945 6828 } 6946 6829 6830 + static u8 __netdev_upper_depth(struct net_device *dev) 6831 + { 6832 + struct net_device *udev; 6833 + struct list_head *iter; 6834 + u8 max_depth = 0; 6835 + bool ignore; 6836 + 6837 + for (iter = &dev->adj_list.upper, 6838 + udev = __netdev_next_upper_dev(dev, &iter, &ignore); 6839 + udev; 6840 + udev = __netdev_next_upper_dev(dev, &iter, &ignore)) { 6841 + if (ignore) 6842 + continue; 6843 + if (max_depth < udev->upper_level) 6844 + max_depth = udev->upper_level; 6845 + } 6846 + 6847 + return max_depth; 6848 + } 6849 + 6850 + static u8 __netdev_lower_depth(struct net_device *dev) 6851 + { 6852 + struct net_device *ldev; 6853 + struct list_head *iter; 6854 + u8 max_depth = 0; 6855 + bool ignore; 6856 + 6857 + for (iter = &dev->adj_list.lower, 6858 + ldev = __netdev_next_lower_dev(dev, &iter, &ignore); 6859 + ldev; 6860 + ldev = __netdev_next_lower_dev(dev, &iter, &ignore)) { 6861 + if (ignore) 6862 + continue; 6863 + if (max_depth < ldev->lower_level) 6864 + max_depth = ldev->lower_level; 6865 + } 6866 + 6867 
+ return max_depth; 6868 + } 6869 + 6870 + static int __netdev_update_upper_level(struct net_device *dev, void *data) 6871 + { 6872 + dev->upper_level = __netdev_upper_depth(dev) + 1; 6873 + return 0; 6874 + } 6875 + 6876 + static int __netdev_update_lower_level(struct net_device *dev, void *data) 6877 + { 6878 + dev->lower_level = __netdev_lower_depth(dev) + 1; 6879 + return 0; 6880 + } 6881 + 6947 6882 int netdev_walk_all_lower_dev_rcu(struct net_device *dev, 6948 6883 int (*fn)(struct net_device *dev, 6949 6884 void *data), 6950 6885 void *data) 6951 6886 { 6952 - struct net_device *ldev; 6953 - struct list_head *iter; 6954 - int ret; 6887 + struct net_device *ldev, *next, *now, *dev_stack[MAX_NEST_DEV + 1]; 6888 + struct list_head *niter, *iter, *iter_stack[MAX_NEST_DEV + 1]; 6889 + int ret, cur = 0; 6955 6890 6956 - for (iter = &dev->adj_list.lower, 6957 - ldev = netdev_next_lower_dev_rcu(dev, &iter); 6958 - ldev; 6959 - ldev = netdev_next_lower_dev_rcu(dev, &iter)) { 6960 - /* first is the lower device itself */ 6961 - ret = fn(ldev, data); 6962 - if (ret) 6963 - return ret; 6891 + now = dev; 6892 + iter = &dev->adj_list.lower; 6964 6893 6965 - /* then look at all of its lower devices */ 6966 - ret = netdev_walk_all_lower_dev_rcu(ldev, fn, data); 6967 - if (ret) 6968 - return ret; 6894 + while (1) { 6895 + if (now != dev) { 6896 + ret = fn(now, data); 6897 + if (ret) 6898 + return ret; 6899 + } 6900 + 6901 + next = NULL; 6902 + while (1) { 6903 + ldev = netdev_next_lower_dev_rcu(now, &iter); 6904 + if (!ldev) 6905 + break; 6906 + 6907 + next = ldev; 6908 + niter = &ldev->adj_list.lower; 6909 + dev_stack[cur] = now; 6910 + iter_stack[cur++] = iter; 6911 + break; 6912 + } 6913 + 6914 + if (!next) { 6915 + if (!cur) 6916 + return 0; 6917 + next = dev_stack[--cur]; 6918 + niter = iter_stack[cur]; 6919 + } 6920 + 6921 + now = next; 6922 + iter = niter; 6969 6923 } 6970 6924 6971 6925 return 0; ··· 7140 6952 adj->master = master; 7141 6953 adj->ref_nr = 1; 7142 
6954 adj->private = private; 6955 + adj->ignore = false; 7143 6956 dev_hold(adj_dev); 7144 6957 7145 6958 pr_debug("Insert adjacency: dev %s adj_dev %s adj->ref_nr %d; dev_hold on %s\n", ··· 7291 7102 return -EBUSY; 7292 7103 7293 7104 /* To prevent loops, check if dev is not upper device to upper_dev. */ 7294 - if (netdev_has_upper_dev(upper_dev, dev)) 7105 + if (__netdev_has_upper_dev(upper_dev, dev)) 7295 7106 return -EBUSY; 7296 7107 7108 + if ((dev->lower_level + upper_dev->upper_level) > MAX_NEST_DEV) 7109 + return -EMLINK; 7110 + 7297 7111 if (!master) { 7298 - if (netdev_has_upper_dev(dev, upper_dev)) 7112 + if (__netdev_has_upper_dev(dev, upper_dev)) 7299 7113 return -EEXIST; 7300 7114 } else { 7301 - master_dev = netdev_master_upper_dev_get(dev); 7115 + master_dev = __netdev_master_upper_dev_get(dev); 7302 7116 if (master_dev) 7303 7117 return master_dev == upper_dev ? -EEXIST : -EBUSY; 7304 7118 } ··· 7322 7130 ret = notifier_to_errno(ret); 7323 7131 if (ret) 7324 7132 goto rollback; 7133 + 7134 + __netdev_update_upper_level(dev, NULL); 7135 + __netdev_walk_all_lower_dev(dev, __netdev_update_upper_level, NULL); 7136 + 7137 + __netdev_update_lower_level(upper_dev, NULL); 7138 + __netdev_walk_all_upper_dev(upper_dev, __netdev_update_lower_level, 7139 + NULL); 7325 7140 7326 7141 return 0; 7327 7142 ··· 7412 7213 7413 7214 call_netdevice_notifiers_info(NETDEV_CHANGEUPPER, 7414 7215 &changeupper_info.info); 7216 + 7217 + __netdev_update_upper_level(dev, NULL); 7218 + __netdev_walk_all_lower_dev(dev, __netdev_update_upper_level, NULL); 7219 + 7220 + __netdev_update_lower_level(upper_dev, NULL); 7221 + __netdev_walk_all_upper_dev(upper_dev, __netdev_update_lower_level, 7222 + NULL); 7415 7223 } 7416 7224 EXPORT_SYMBOL(netdev_upper_dev_unlink); 7225 + 7226 + static void __netdev_adjacent_dev_set(struct net_device *upper_dev, 7227 + struct net_device *lower_dev, 7228 + bool val) 7229 + { 7230 + struct netdev_adjacent *adj; 7231 + 7232 + adj = 
__netdev_find_adj(lower_dev, &upper_dev->adj_list.lower); 7233 + if (adj) 7234 + adj->ignore = val; 7235 + 7236 + adj = __netdev_find_adj(upper_dev, &lower_dev->adj_list.upper); 7237 + if (adj) 7238 + adj->ignore = val; 7239 + } 7240 + 7241 + static void netdev_adjacent_dev_disable(struct net_device *upper_dev, 7242 + struct net_device *lower_dev) 7243 + { 7244 + __netdev_adjacent_dev_set(upper_dev, lower_dev, true); 7245 + } 7246 + 7247 + static void netdev_adjacent_dev_enable(struct net_device *upper_dev, 7248 + struct net_device *lower_dev) 7249 + { 7250 + __netdev_adjacent_dev_set(upper_dev, lower_dev, false); 7251 + } 7252 + 7253 + int netdev_adjacent_change_prepare(struct net_device *old_dev, 7254 + struct net_device *new_dev, 7255 + struct net_device *dev, 7256 + struct netlink_ext_ack *extack) 7257 + { 7258 + int err; 7259 + 7260 + if (!new_dev) 7261 + return 0; 7262 + 7263 + if (old_dev && new_dev != old_dev) 7264 + netdev_adjacent_dev_disable(dev, old_dev); 7265 + 7266 + err = netdev_upper_dev_link(new_dev, dev, extack); 7267 + if (err) { 7268 + if (old_dev && new_dev != old_dev) 7269 + netdev_adjacent_dev_enable(dev, old_dev); 7270 + return err; 7271 + } 7272 + 7273 + return 0; 7274 + } 7275 + EXPORT_SYMBOL(netdev_adjacent_change_prepare); 7276 + 7277 + void netdev_adjacent_change_commit(struct net_device *old_dev, 7278 + struct net_device *new_dev, 7279 + struct net_device *dev) 7280 + { 7281 + if (!new_dev || !old_dev) 7282 + return; 7283 + 7284 + if (new_dev == old_dev) 7285 + return; 7286 + 7287 + netdev_adjacent_dev_enable(dev, old_dev); 7288 + netdev_upper_dev_unlink(old_dev, dev); 7289 + } 7290 + EXPORT_SYMBOL(netdev_adjacent_change_commit); 7291 + 7292 + void netdev_adjacent_change_abort(struct net_device *old_dev, 7293 + struct net_device *new_dev, 7294 + struct net_device *dev) 7295 + { 7296 + if (!new_dev) 7297 + return; 7298 + 7299 + if (old_dev && new_dev != old_dev) 7300 + netdev_adjacent_dev_enable(dev, old_dev); 7301 + 7302 + 
netdev_upper_dev_unlink(new_dev, dev); 7303 + } 7304 + EXPORT_SYMBOL(netdev_adjacent_change_abort); 7417 7305 7418 7306 /** 7419 7307 * netdev_bonding_info_change - Dispatch event about slave change ··· 7614 7328 } 7615 7329 EXPORT_SYMBOL(netdev_lower_dev_get_private); 7616 7330 7617 - 7618 - int dev_get_nest_level(struct net_device *dev) 7619 - { 7620 - struct net_device *lower = NULL; 7621 - struct list_head *iter; 7622 - int max_nest = -1; 7623 - int nest; 7624 - 7625 - ASSERT_RTNL(); 7626 - 7627 - netdev_for_each_lower_dev(dev, lower, iter) { 7628 - nest = dev_get_nest_level(lower); 7629 - if (max_nest < nest) 7630 - max_nest = nest; 7631 - } 7632 - 7633 - return max_nest + 1; 7634 - } 7635 - EXPORT_SYMBOL(dev_get_nest_level); 7636 7331 7637 7332 /** 7638 7333 * netdev_lower_change - Dispatch event about lower device state change ··· 8886 8619 { 8887 8620 /* Initialize queue lock */ 8888 8621 spin_lock_init(&queue->_xmit_lock); 8889 - netdev_set_xmit_lockdep_class(&queue->_xmit_lock, dev->type); 8622 + lockdep_set_class(&queue->_xmit_lock, &dev->qdisc_xmit_lock_key); 8890 8623 queue->xmit_lock_owner = -1; 8891 8624 netdev_queue_numa_node_write(queue, NUMA_NO_NODE); 8892 8625 queue->dev = dev; ··· 8933 8666 } 8934 8667 EXPORT_SYMBOL(netif_tx_stop_all_queues); 8935 8668 8669 + static void netdev_register_lockdep_key(struct net_device *dev) 8670 + { 8671 + lockdep_register_key(&dev->qdisc_tx_busylock_key); 8672 + lockdep_register_key(&dev->qdisc_running_key); 8673 + lockdep_register_key(&dev->qdisc_xmit_lock_key); 8674 + lockdep_register_key(&dev->addr_list_lock_key); 8675 + } 8676 + 8677 + static void netdev_unregister_lockdep_key(struct net_device *dev) 8678 + { 8679 + lockdep_unregister_key(&dev->qdisc_tx_busylock_key); 8680 + lockdep_unregister_key(&dev->qdisc_running_key); 8681 + lockdep_unregister_key(&dev->qdisc_xmit_lock_key); 8682 + lockdep_unregister_key(&dev->addr_list_lock_key); 8683 + } 8684 + 8685 + void netdev_update_lockdep_key(struct net_device 
*dev) 8686 + { 8687 + struct netdev_queue *queue; 8688 + int i; 8689 + 8690 + lockdep_unregister_key(&dev->qdisc_xmit_lock_key); 8691 + lockdep_unregister_key(&dev->addr_list_lock_key); 8692 + 8693 + lockdep_register_key(&dev->qdisc_xmit_lock_key); 8694 + lockdep_register_key(&dev->addr_list_lock_key); 8695 + 8696 + lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key); 8697 + for (i = 0; i < dev->num_tx_queues; i++) { 8698 + queue = netdev_get_tx_queue(dev, i); 8699 + 8700 + lockdep_set_class(&queue->_xmit_lock, 8701 + &dev->qdisc_xmit_lock_key); 8702 + } 8703 + } 8704 + EXPORT_SYMBOL(netdev_update_lockdep_key); 8705 + 8936 8706 /** 8937 8707 * register_netdevice - register a network device 8938 8708 * @dev: device to register ··· 9004 8700 BUG_ON(!net); 9005 8701 9006 8702 spin_lock_init(&dev->addr_list_lock); 9007 - netdev_set_addr_lockdep_class(dev); 8703 + lockdep_set_class(&dev->addr_list_lock, &dev->addr_list_lock_key); 9008 8704 9009 8705 ret = dev_get_valid_name(net, dev, dev->name); 9010 8706 if (ret < 0) ··· 9514 9210 9515 9211 dev_net_set(dev, &init_net); 9516 9212 9213 + netdev_register_lockdep_key(dev); 9214 + 9517 9215 dev->gso_max_size = GSO_MAX_SIZE; 9518 9216 dev->gso_max_segs = GSO_MAX_SEGS; 9217 + dev->upper_level = 1; 9218 + dev->lower_level = 1; 9519 9219 9520 9220 INIT_LIST_HEAD(&dev->napi_list); 9521 9221 INIT_LIST_HEAD(&dev->unreg_list); ··· 9599 9291 9600 9292 free_percpu(dev->pcpu_refcnt); 9601 9293 dev->pcpu_refcnt = NULL; 9294 + 9295 + netdev_unregister_lockdep_key(dev); 9602 9296 9603 9297 /* Compatibility with error handling in drivers */ 9604 9298 if (dev->reg_state == NETREG_UNINITIALIZED) {
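The reworked walkers in net/core/dev.c above (netdev_walk_all_{upper,lower}_dev and their rcu/ignore variants) all share one shape: recursion is replaced by an explicit stack of (device, iterator-position) pairs sized MAX_NEST_DEV + 1, which is safe because the depth check at link time guarantees no stack can nest deeper. A user-space sketch of that shape, not the kernel code itself (struct toy_dev, walk_all_lower() and the fixed-size lower[] array are simplifications of the real adjacency lists):

```c
#include <stddef.h>

#define MAX_NEST_DEV 8
#define MAX_ADJ 4

/* Toy device graph: an array of directly stacked lower devices. */
struct toy_dev {
	struct toy_dev *lower[MAX_ADJ];
	int n_lower;
	int id;
};

/* Iterative depth-first walk over all lower devices. Stack usage is
 * bounded by MAX_NEST_DEV instead of growing with the nesting depth,
 * which is what the recursive walkers got wrong. */
static int walk_all_lower(struct toy_dev *dev,
			  int (*fn)(struct toy_dev *d, void *data),
			  void *data)
{
	struct toy_dev *dev_stack[MAX_NEST_DEV + 1];
	int iter_stack[MAX_NEST_DEV + 1];
	struct toy_dev *now = dev;
	int iter = 0, cur = 0, ret;

	while (1) {
		if (iter < now->n_lower) {
			struct toy_dev *child = now->lower[iter++];

			ret = fn(child, data);	/* visit on first descent */
			if (ret)
				return ret;	/* early exit, as in the kernel */
			dev_stack[cur] = now;	/* save resume point, descend */
			iter_stack[cur++] = iter;
			now = child;
			iter = 0;
		} else {
			if (!cur)
				return 0;	/* back at the root: done */
			now = dev_stack[--cur];	/* pop and resume the parent */
			iter = iter_stack[cur];
		}
	}
}
```

The link-time depth limit and the bounded walk are two halves of one fix: without the -EMLINK check, `cur` could overrun the fixed stacks; without the iterative walk, a deep stack of devices could still exhaust the kernel stack before the limit was ever consulted.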
+6 -6
net/core/dev_addr_lists.c

···
 	if (to->addr_len != from->addr_len)
 		return -EINVAL;
 
-	netif_addr_lock_nested(to);
+	netif_addr_lock(to);
 	err = __hw_addr_sync(&to->uc, &from->uc, to->addr_len);
 	if (!err)
 		__dev_set_rx_mode(to);
···
 	if (to->addr_len != from->addr_len)
 		return -EINVAL;
 
-	netif_addr_lock_nested(to);
+	netif_addr_lock(to);
 	err = __hw_addr_sync_multiple(&to->uc, &from->uc, to->addr_len);
 	if (!err)
 		__dev_set_rx_mode(to);
···
 		return;
 
 	netif_addr_lock_bh(from);
-	netif_addr_lock_nested(to);
+	netif_addr_lock(to);
 	__hw_addr_unsync(&to->uc, &from->uc, to->addr_len);
 	__dev_set_rx_mode(to);
 	netif_addr_unlock(to);
···
 	if (to->addr_len != from->addr_len)
 		return -EINVAL;
 
-	netif_addr_lock_nested(to);
+	netif_addr_lock(to);
 	err = __hw_addr_sync(&to->mc, &from->mc, to->addr_len);
 	if (!err)
 		__dev_set_rx_mode(to);
···
 	if (to->addr_len != from->addr_len)
 		return -EINVAL;
 
-	netif_addr_lock_nested(to);
+	netif_addr_lock(to);
 	err = __hw_addr_sync_multiple(&to->mc, &from->mc, to->addr_len);
 	if (!err)
 		__dev_set_rx_mode(to);
···
 		return;
 
 	netif_addr_lock_bh(from);
-	netif_addr_lock_nested(to);
+	netif_addr_lock(to);
 	__hw_addr_unsync(&to->mc, &from->mc, to->addr_len);
 	__dev_set_rx_mode(to);
 	netif_addr_unlock(to);
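The netdev_adjacent_change_{prepare,commit,abort}() helpers added in net/core/dev.c above exist so a master (e.g. bonding changing its active slave) can replace one adjacency with another without tripping the duplicate/loop checks: the old adjacency is temporarily marked "ignore" so lookups skip it, then the change is either committed or rolled back. A toy model of that two-phase pattern (struct adj, struct master and the change_* helpers here are illustrative stand-ins, with the kernel calls they model noted in comments):

```c
struct adj {
	int linked;	/* adjacency exists */
	int ignore;	/* adjacency invisible to lookups while replacing */
};

struct master {
	struct adj old_adj, new_adj;	/* old and new active device links */
};

/* Hide the old link, then try to install the new one; on failure the
 * old state must be restored untouched. link_fails stands in for
 * netdev_upper_dev_link() returning an error. */
static int change_prepare(struct master *m, int link_fails)
{
	m->old_adj.ignore = 1;		/* netdev_adjacent_dev_disable() */
	if (link_fails) {
		m->old_adj.ignore = 0;	/* roll back on error */
		return -1;
	}
	m->new_adj.linked = 1;		/* netdev_upper_dev_link() */
	return 0;
}

static void change_commit(struct master *m)
{
	m->old_adj.ignore = 0;		/* netdev_adjacent_dev_enable() */
	m->old_adj.linked = 0;		/* netdev_upper_dev_unlink(old, dev) */
}

static void change_abort(struct master *m)
{
	m->old_adj.ignore = 0;		/* old link stays active */
	m->new_adj.linked = 0;		/* drop the half-installed new link */
}
```

The invariant the real helpers preserve is the same as here: at every exit point exactly one non-ignored active adjacency remains, whether the swap succeeded or not.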
+1
net/core/rtnetlink.c

···
 			err = ops->ndo_del_slave(upper_dev, dev);
 			if (err)
 				return err;
+			netdev_update_lockdep_key(dev);
 		} else {
 			return -EOPNOTSUPP;
 		}
-5
net/dsa/master.c

···
 	rtnl_unlock();
 }
 
-static struct lock_class_key dsa_master_addr_list_lock_key;
-
 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
 {
 	int ret;
···
 	wmb();
 
 	dev->dsa_ptr = cpu_dp;
-	lockdep_set_class(&dev->addr_list_lock,
-			  &dsa_master_addr_list_lock_key);
-
 	ret = dsa_master_ethtool_setup(dev);
 	if (ret)
 		return ret;
-12
net/dsa/slave.c

···
 	return ret;
 }
 
-static struct lock_class_key dsa_slave_netdev_xmit_lock_key;
-static void dsa_slave_set_lockdep_class_one(struct net_device *dev,
-					    struct netdev_queue *txq,
-					    void *_unused)
-{
-	lockdep_set_class(&txq->_xmit_lock,
-			  &dsa_slave_netdev_xmit_lock_key);
-}
-
 int dsa_slave_suspend(struct net_device *slave_dev)
 {
 	struct dsa_port *dp = dsa_slave_to_port(slave_dev);
···
 	slave_dev->min_mtu = 0;
 	slave_dev->max_mtu = ETH_MAX_MTU;
 	SET_NETDEV_DEVTYPE(slave_dev, &dsa_type);
-
-	netdev_for_each_tx_queue(slave_dev, dsa_slave_set_lockdep_class_one,
-				 NULL);
 
 	SET_NETDEV_DEV(slave_dev, port->ds->dev);
 	slave_dev->dev.of_node = port->dn;
-8
net/ieee802154/6lowpan/core.c

···
 	.create	= lowpan_header_create,
 };
 
-static int lowpan_dev_init(struct net_device *ldev)
-{
-	netdev_lockdep_set_classes(ldev);
-
-	return 0;
-}
-
 static int lowpan_open(struct net_device *dev)
 {
 	if (!open_count)
···
 }
 
 static const struct net_device_ops lowpan_netdev_ops = {
-	.ndo_init		= lowpan_dev_init,
 	.ndo_start_xmit		= lowpan_xmit,
 	.ndo_open		= lowpan_open,
 	.ndo_stop		= lowpan_stop,
net/l2tp/l2tp_eth.c (-1)

···
 {
 	eth_hw_addr_random(dev);
 	eth_broadcast_addr(dev->broadcast);
-	netdev_lockdep_set_classes(dev);
 
 	return 0;
 }
net/netrom/af_netrom.c (-23)

···
 static const struct proto_ops nr_proto_ops;
 
 /*
- * NETROM network devices are virtual network devices encapsulating NETROM
- * frames into AX.25 which will be sent through an AX.25 device, so form a
- * special "super class" of normal net devices; split their locks off into a
- * separate class since they always nest.
- */
-static struct lock_class_key nr_netdev_xmit_lock_key;
-static struct lock_class_key nr_netdev_addr_lock_key;
-
-static void nr_set_lockdep_one(struct net_device *dev,
-			       struct netdev_queue *txq,
-			       void *_unused)
-{
-	lockdep_set_class(&txq->_xmit_lock, &nr_netdev_xmit_lock_key);
-}
-
-static void nr_set_lockdep_key(struct net_device *dev)
-{
-	lockdep_set_class(&dev->addr_list_lock, &nr_netdev_addr_lock_key);
-	netdev_for_each_tx_queue(dev, nr_set_lockdep_one, NULL);
-}
-
-/*
  * Socket removal during an interrupt is now safe.
  */
 static void nr_remove_socket(struct sock *sk)
···
 			free_netdev(dev);
 			goto fail;
 		}
-		nr_set_lockdep_key(dev);
 		dev_nr[i] = dev;
 	}
net/rose/af_rose.c (-23)

···
 ax25_address rose_callsign;
 
 /*
- * ROSE network devices are virtual network devices encapsulating ROSE
- * frames into AX.25 which will be sent through an AX.25 device, so form a
- * special "super class" of normal net devices; split their locks off into a
- * separate class since they always nest.
- */
-static struct lock_class_key rose_netdev_xmit_lock_key;
-static struct lock_class_key rose_netdev_addr_lock_key;
-
-static void rose_set_lockdep_one(struct net_device *dev,
-				 struct netdev_queue *txq,
-				 void *_unused)
-{
-	lockdep_set_class(&txq->_xmit_lock, &rose_netdev_xmit_lock_key);
-}
-
-static void rose_set_lockdep_key(struct net_device *dev)
-{
-	lockdep_set_class(&dev->addr_list_lock, &rose_netdev_addr_lock_key);
-	netdev_for_each_tx_queue(dev, rose_set_lockdep_one, NULL);
-}
-
-/*
  * Convert a ROSE address into text.
  */
 char *rose2asc(char *buf, const rose_address *addr)
···
 			free_netdev(dev);
 			goto fail;
 		}
-		rose_set_lockdep_key(dev);
 		dev_rose[i] = dev;
 	}
net/sched/sch_generic.c (+6 -11)

···
 };
 EXPORT_SYMBOL(pfifo_fast_ops);
 
-static struct lock_class_key qdisc_tx_busylock;
-static struct lock_class_key qdisc_running_key;
-
 struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
 			  const struct Qdisc_ops *ops,
 			  struct netlink_ext_ack *extack)
···
 	}
 
 	spin_lock_init(&sch->busylock);
-	lockdep_set_class(&sch->busylock,
-			  dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
-
 	/* seqlock has the same scope of busylock, for NOLOCK qdisc */
 	spin_lock_init(&sch->seqlock);
-	lockdep_set_class(&sch->busylock,
-			  dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
-
 	seqcount_init(&sch->running);
-	lockdep_set_class(&sch->running,
-			  dev->qdisc_running_key ?: &qdisc_running_key);
 
 	sch->ops = ops;
 	sch->flags = ops->static_flags;
···
 	sch->empty = true;
 	dev_hold(dev);
 	refcount_set(&sch->refcnt, 1);
+
+	if (sch != &noop_qdisc) {
+		lockdep_set_class(&sch->busylock, &dev->qdisc_tx_busylock_key);
+		lockdep_set_class(&sch->seqlock, &dev->qdisc_tx_busylock_key);
+		lockdep_set_class(&sch->running, &dev->qdisc_running_key);
+	}
 
 	return sch;
 errout1:
net/smc/smc_core.c (+1 -1)

···
 	}
 
 	rtnl_lock();
-	nest_lvl = dev_get_nest_level(ndev);
+	nest_lvl = ndev->lower_level;
 	for (i = 0; i < nest_lvl; i++) {
 		struct list_head *lower = &ndev->adj_list.lower;
 
net/smc/smc_pnet.c (+1 -1)

···
 	int i, nest_lvl;
 
 	rtnl_lock();
-	nest_lvl = dev_get_nest_level(ndev);
+	nest_lvl = ndev->lower_level;
 	for (i = 0; i < nest_lvl; i++) {
 		struct list_head *lower = &ndev->adj_list.lower;
 