Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Lots more phydev and probe error path leak fixes in various drivers,
from Johan Hovold.

2) Fix race in packet_set_ring(), from Philip Pettersson.

3) Use after free in dccp_invalid_packet(), from Eric Dumazet.

4) Signedness overflow in SO_{SND,RCV}BUFFORCE, also from Eric
Dumazet.

5) When tunneling between ipv4 and ipv6, we can be left with the wrong
skb->protocol value as we enter the IPsec engine, and this causes all
kinds of problems. The fix, from Eli Cooper, sets it before the output
path makes any dst_output() calls.

6) bcmgenet uses wrong device struct pointer in DMA API calls, fix from
Florian Fainelli.

7) Various netfilter NAT bug fixes from Florian Westphal.

8) Fix memory leak in ipvlan_link_new(), from Gao Feng.

9) Locking fixes, particularly wrt. socket lookups, in l2tp from
Guillaume Nault.

10) Avoid invoking rhashtable teardowns in atomic context by running the
netlink cb->done() dump completion from a worker thread. Fix from
Herbert Xu.

11) Buffer refcount problems in tun and macvtap on errors, from Jason
Wang.

12) We don't set Kconfig symbol DEFAULT_TCP_CONG properly when the user
selects BBR. Fix from Julian Wollrath.

13) Fix deadlock in the transmit path of the Altera TSE driver, from
Lino Sanfilippo.

14) Fix unbalanced reference counting in dsa_switch_tree, from Nikita
Yushchenko.

15) tc_tunnel_key needs to be properly exported to userspace via uapi,
fix from Roi Dayan.

16) rds_tcp_init_net() doesn't unregister notifier in error path, fix
from Sowmini Varadhan.

17) Stale packet header pointer access after pskb_expand_head() in the
geneve driver, fix from Sabrina Dubroca.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (103 commits)
net: avoid signed overflows for SO_{SND|RCV}BUFFORCE
geneve: avoid use-after-free of skb->data
tipc: check minimum bearer MTU
net: renesas: ravb: unintialized return value
sh_eth: remove unchecked interrupts for RZ/A1
net: bcmgenet: Utilize correct struct device for all DMA operations
NET: usb: qmi_wwan: add support for Telit LE922A PID 0x1040
cdc_ether: Fix handling connection notification
ip6_offload: check segs for NULL in ipv6_gso_segment.
RDS: TCP: unregister_netdevice_notifier() in error path of rds_tcp_init_net
Revert: "ip6_tunnel: Update skb->protocol to ETH_P_IPV6 in ip6_tnl_xmit()"
ipv6: Set skb->protocol properly for local output
ipv4: Set skb->protocol properly for local output
packet: fix race condition in packet_set_ring
net: ethernet: altera: TSE: do not use tx queue lock in tx completion handler
net: ethernet: altera: TSE: Remove unneeded dma sync for tx buffers
net: ethernet: stmmac: fix of-node and fixed-link-phydev leaks
net: ethernet: stmmac: platform: fix outdated function header
net: ethernet: stmmac: dwmac-meson8b: fix probe error path
net: ethernet: stmmac: dwmac-generic: fix probe error path
...

+1065 -451
+20 -4
Documentation/devicetree/bindings/net/ethernet.txt
··· 9 9 - max-speed: number, specifies maximum speed in Mbit/s supported by the device; 10 10 - max-frame-size: number, maximum transfer unit (IEEE defined MTU), rather than 11 11 the maximum frame size (there's contradiction in ePAPR). 12 - - phy-mode: string, operation mode of the PHY interface; supported values are 13 - "mii", "gmii", "sgmii", "qsgmii", "tbi", "rev-mii", "rmii", "rgmii", "rgmii-id", 14 - "rgmii-rxid", "rgmii-txid", "rtbi", "smii", "xgmii", "trgmii"; this is now a 15 - de-facto standard property; 12 + - phy-mode: string, operation mode of the PHY interface. This is now a de-facto 13 + standard property; supported values are: 14 + * "mii" 15 + * "gmii" 16 + * "sgmii" 17 + * "qsgmii" 18 + * "tbi" 19 + * "rev-mii" 20 + * "rmii" 21 + * "rgmii" (RX and TX delays are added by the MAC when required) 22 + * "rgmii-id" (RGMII with internal RX and TX delays provided by the PHY, the 23 + MAC should not add the RX or TX delays in this case) 24 + * "rgmii-rxid" (RGMII with internal RX delay provided by the PHY, the MAC 25 + should not add an RX delay in this case) 26 + * "rgmii-txid" (RGMII with internal TX delay provided by the PHY, the MAC 27 + should not add an TX delay in this case) 28 + * "rtbi" 29 + * "smii" 30 + * "xgmii" 31 + * "trgmii" 16 32 - phy-connection-type: the same as "phy-mode" property but described in ePAPR; 17 33 - phy-handle: phandle, specifies a reference to a node representing a PHY 18 34 device; this property is described in ePAPR and so preferred;
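For readers unfamiliar with the binding above, a minimal Ethernet node using these properties might look like the following; the node name, address, and compatible string are made up for illustration:

```dts
ethernet@40000000 {
        compatible = "vendor,example-mac";   /* hypothetical */
        reg = <0x40000000 0x10000>;
        max-speed = <1000>;
        phy-mode = "rgmii-id";  /* PHY provides both RX and TX delays */
        phy-handle = <&phy0>;
};
```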
+5 -2
Documentation/networking/nf_conntrack-sysctl.txt
··· 62 62 protocols. 63 63 64 64 nf_conntrack_helper - BOOLEAN 65 - 0 - disabled 66 - not 0 - enabled (default) 65 + 0 - disabled (default) 66 + not 0 - enabled 67 67 68 68 Enable automatic conntrack helper assignment. 69 + If disabled it is required to set up iptables rules to assign 70 + helpers to connections. See the CT target description in the 71 + iptables-extensions(8) man page for further information. 69 72 70 73 nf_conntrack_icmp_timeout - INTEGER (seconds) 71 74 default 30
+29 -8
drivers/net/can/usb/peak_usb/pcan_ucan.h
··· 43 43 u16 args[3]; 44 44 }; 45 45 46 + #define PUCAN_TSLOW_BRP_BITS 10 47 + #define PUCAN_TSLOW_TSGEG1_BITS 8 48 + #define PUCAN_TSLOW_TSGEG2_BITS 7 49 + #define PUCAN_TSLOW_SJW_BITS 7 50 + 51 + #define PUCAN_TSLOW_BRP_MASK ((1 << PUCAN_TSLOW_BRP_BITS) - 1) 52 + #define PUCAN_TSLOW_TSEG1_MASK ((1 << PUCAN_TSLOW_TSGEG1_BITS) - 1) 53 + #define PUCAN_TSLOW_TSEG2_MASK ((1 << PUCAN_TSLOW_TSGEG2_BITS) - 1) 54 + #define PUCAN_TSLOW_SJW_MASK ((1 << PUCAN_TSLOW_SJW_BITS) - 1) 55 + 46 56 /* uCAN TIMING_SLOW command fields */ 47 - #define PUCAN_TSLOW_SJW_T(s, t) (((s) & 0xf) | ((!!(t)) << 7)) 48 - #define PUCAN_TSLOW_TSEG2(t) ((t) & 0xf) 49 - #define PUCAN_TSLOW_TSEG1(t) ((t) & 0x3f) 50 - #define PUCAN_TSLOW_BRP(b) ((b) & 0x3ff) 57 + #define PUCAN_TSLOW_SJW_T(s, t) (((s) & PUCAN_TSLOW_SJW_MASK) | \ 58 + ((!!(t)) << 7)) 59 + #define PUCAN_TSLOW_TSEG2(t) ((t) & PUCAN_TSLOW_TSEG2_MASK) 60 + #define PUCAN_TSLOW_TSEG1(t) ((t) & PUCAN_TSLOW_TSEG1_MASK) 61 + #define PUCAN_TSLOW_BRP(b) ((b) & PUCAN_TSLOW_BRP_MASK) 51 62 52 63 struct __packed pucan_timing_slow { 53 64 __le16 opcode_channel; ··· 71 60 __le16 brp; /* BaudRate Prescaler */ 72 61 }; 73 62 63 + #define PUCAN_TFAST_BRP_BITS 10 64 + #define PUCAN_TFAST_TSGEG1_BITS 5 65 + #define PUCAN_TFAST_TSGEG2_BITS 4 66 + #define PUCAN_TFAST_SJW_BITS 4 67 + 68 + #define PUCAN_TFAST_BRP_MASK ((1 << PUCAN_TFAST_BRP_BITS) - 1) 69 + #define PUCAN_TFAST_TSEG1_MASK ((1 << PUCAN_TFAST_TSGEG1_BITS) - 1) 70 + #define PUCAN_TFAST_TSEG2_MASK ((1 << PUCAN_TFAST_TSGEG2_BITS) - 1) 71 + #define PUCAN_TFAST_SJW_MASK ((1 << PUCAN_TFAST_SJW_BITS) - 1) 72 + 74 73 /* uCAN TIMING_FAST command fields */ 75 - #define PUCAN_TFAST_SJW(s) ((s) & 0x3) 76 - #define PUCAN_TFAST_TSEG2(t) ((t) & 0x7) 77 - #define PUCAN_TFAST_TSEG1(t) ((t) & 0xf) 78 - #define PUCAN_TFAST_BRP(b) ((b) & 0x3ff) 74 + #define PUCAN_TFAST_SJW(s) ((s) & PUCAN_TFAST_SJW_MASK) 75 + #define PUCAN_TFAST_TSEG2(t) ((t) & PUCAN_TFAST_TSEG2_MASK) 76 + #define PUCAN_TFAST_TSEG1(t) ((t) & 
PUCAN_TFAST_TSEG1_MASK) 77 + #define PUCAN_TFAST_BRP(b) ((b) & PUCAN_TFAST_BRP_MASK) 79 78 80 79 struct __packed pucan_timing_fast { 81 80 __le16 opcode_channel;
+2
drivers/net/can/usb/peak_usb/pcan_usb_core.c
··· 39 39 {USB_DEVICE(PCAN_USB_VENDOR_ID, PCAN_USBPRO_PRODUCT_ID)}, 40 40 {USB_DEVICE(PCAN_USB_VENDOR_ID, PCAN_USBFD_PRODUCT_ID)}, 41 41 {USB_DEVICE(PCAN_USB_VENDOR_ID, PCAN_USBPROFD_PRODUCT_ID)}, 42 + {USB_DEVICE(PCAN_USB_VENDOR_ID, PCAN_USBX6_PRODUCT_ID)}, 42 43 {} /* Terminating entry */ 43 44 }; 44 45 ··· 51 50 &pcan_usb_pro, 52 51 &pcan_usb_fd, 53 52 &pcan_usb_pro_fd, 53 + &pcan_usb_x6, 54 54 }; 55 55 56 56 /*
+2
drivers/net/can/usb/peak_usb/pcan_usb_core.h
··· 27 27 #define PCAN_USBPRO_PRODUCT_ID 0x000d 28 28 #define PCAN_USBPROFD_PRODUCT_ID 0x0011 29 29 #define PCAN_USBFD_PRODUCT_ID 0x0012 30 + #define PCAN_USBX6_PRODUCT_ID 0x0014 30 31 31 32 #define PCAN_USB_DRIVER_NAME "peak_usb" 32 33 ··· 91 90 extern const struct peak_usb_adapter pcan_usb_pro; 92 91 extern const struct peak_usb_adapter pcan_usb_fd; 93 92 extern const struct peak_usb_adapter pcan_usb_pro_fd; 93 + extern const struct peak_usb_adapter pcan_usb_x6; 94 94 95 95 struct peak_time_ref { 96 96 struct timeval tv_host_0, tv_host;
+88 -16
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 993 993 static const struct can_bittiming_const pcan_usb_fd_const = { 994 994 .name = "pcan_usb_fd", 995 995 .tseg1_min = 1, 996 - .tseg1_max = 64, 996 + .tseg1_max = (1 << PUCAN_TSLOW_TSGEG1_BITS), 997 997 .tseg2_min = 1, 998 - .tseg2_max = 16, 999 - .sjw_max = 16, 998 + .tseg2_max = (1 << PUCAN_TSLOW_TSGEG2_BITS), 999 + .sjw_max = (1 << PUCAN_TSLOW_SJW_BITS), 1000 1000 .brp_min = 1, 1001 - .brp_max = 1024, 1001 + .brp_max = (1 << PUCAN_TSLOW_BRP_BITS), 1002 1002 .brp_inc = 1, 1003 1003 }; 1004 1004 1005 1005 static const struct can_bittiming_const pcan_usb_fd_data_const = { 1006 1006 .name = "pcan_usb_fd", 1007 1007 .tseg1_min = 1, 1008 - .tseg1_max = 16, 1008 + .tseg1_max = (1 << PUCAN_TFAST_TSGEG1_BITS), 1009 1009 .tseg2_min = 1, 1010 - .tseg2_max = 8, 1011 - .sjw_max = 4, 1010 + .tseg2_max = (1 << PUCAN_TFAST_TSGEG2_BITS), 1011 + .sjw_max = (1 << PUCAN_TFAST_SJW_BITS), 1012 1012 .brp_min = 1, 1013 - .brp_max = 1024, 1013 + .brp_max = (1 << PUCAN_TFAST_BRP_BITS), 1014 1014 .brp_inc = 1, 1015 1015 }; 1016 1016 ··· 1065 1065 static const struct can_bittiming_const pcan_usb_pro_fd_const = { 1066 1066 .name = "pcan_usb_pro_fd", 1067 1067 .tseg1_min = 1, 1068 - .tseg1_max = 64, 1068 + .tseg1_max = (1 << PUCAN_TSLOW_TSGEG1_BITS), 1069 1069 .tseg2_min = 1, 1070 - .tseg2_max = 16, 1071 - .sjw_max = 16, 1070 + .tseg2_max = (1 << PUCAN_TSLOW_TSGEG2_BITS), 1071 + .sjw_max = (1 << PUCAN_TSLOW_SJW_BITS), 1072 1072 .brp_min = 1, 1073 - .brp_max = 1024, 1073 + .brp_max = (1 << PUCAN_TSLOW_BRP_BITS), 1074 1074 .brp_inc = 1, 1075 1075 }; 1076 1076 1077 1077 static const struct can_bittiming_const pcan_usb_pro_fd_data_const = { 1078 1078 .name = "pcan_usb_pro_fd", 1079 1079 .tseg1_min = 1, 1080 - .tseg1_max = 16, 1080 + .tseg1_max = (1 << PUCAN_TFAST_TSGEG1_BITS), 1081 1081 .tseg2_min = 1, 1082 - .tseg2_max = 8, 1083 - .sjw_max = 4, 1082 + .tseg2_max = (1 << PUCAN_TFAST_TSGEG2_BITS), 1083 + .sjw_max = (1 << PUCAN_TFAST_SJW_BITS), 1084 1084 .brp_min = 1, 1085 - .brp_max = 
1024, 1085 + .brp_max = (1 << PUCAN_TFAST_BRP_BITS), 1086 1086 .brp_inc = 1, 1087 1087 }; 1088 1088 ··· 1097 1097 }, 1098 1098 .bittiming_const = &pcan_usb_pro_fd_const, 1099 1099 .data_bittiming_const = &pcan_usb_pro_fd_data_const, 1100 + 1101 + /* size of device private data */ 1102 + .sizeof_dev_private = sizeof(struct pcan_usb_fd_device), 1103 + 1104 + /* timestamps usage */ 1105 + .ts_used_bits = 32, 1106 + .ts_period = 1000000, /* calibration period in ts. */ 1107 + .us_per_ts_scale = 1, /* us = (ts * scale) >> shift */ 1108 + .us_per_ts_shift = 0, 1109 + 1110 + /* give here messages in/out endpoints */ 1111 + .ep_msg_in = PCAN_USBPRO_EP_MSGIN, 1112 + .ep_msg_out = {PCAN_USBPRO_EP_MSGOUT_0, PCAN_USBPRO_EP_MSGOUT_1}, 1113 + 1114 + /* size of rx/tx usb buffers */ 1115 + .rx_buffer_size = PCAN_UFD_RX_BUFFER_SIZE, 1116 + .tx_buffer_size = PCAN_UFD_TX_BUFFER_SIZE, 1117 + 1118 + /* device callbacks */ 1119 + .intf_probe = pcan_usb_pro_probe, /* same as PCAN-USB Pro */ 1120 + .dev_init = pcan_usb_fd_init, 1121 + 1122 + .dev_exit = pcan_usb_fd_exit, 1123 + .dev_free = pcan_usb_fd_free, 1124 + .dev_set_bus = pcan_usb_fd_set_bus, 1125 + .dev_set_bittiming = pcan_usb_fd_set_bittiming_slow, 1126 + .dev_set_data_bittiming = pcan_usb_fd_set_bittiming_fast, 1127 + .dev_decode_buf = pcan_usb_fd_decode_buf, 1128 + .dev_start = pcan_usb_fd_start, 1129 + .dev_stop = pcan_usb_fd_stop, 1130 + .dev_restart_async = pcan_usb_fd_restart_async, 1131 + .dev_encode_msg = pcan_usb_fd_encode_msg, 1132 + 1133 + .do_get_berr_counter = pcan_usb_fd_get_berr_counter, 1134 + }; 1135 + 1136 + /* describes the PCAN-USB X6 adapter */ 1137 + static const struct can_bittiming_const pcan_usb_x6_const = { 1138 + .name = "pcan_usb_x6", 1139 + .tseg1_min = 1, 1140 + .tseg1_max = (1 << PUCAN_TSLOW_TSGEG1_BITS), 1141 + .tseg2_min = 1, 1142 + .tseg2_max = (1 << PUCAN_TSLOW_TSGEG2_BITS), 1143 + .sjw_max = (1 << PUCAN_TSLOW_SJW_BITS), 1144 + .brp_min = 1, 1145 + .brp_max = (1 << PUCAN_TSLOW_BRP_BITS), 1146 + 
.brp_inc = 1, 1147 + }; 1148 + 1149 + static const struct can_bittiming_const pcan_usb_x6_data_const = { 1150 + .name = "pcan_usb_x6", 1151 + .tseg1_min = 1, 1152 + .tseg1_max = (1 << PUCAN_TFAST_TSGEG1_BITS), 1153 + .tseg2_min = 1, 1154 + .tseg2_max = (1 << PUCAN_TFAST_TSGEG2_BITS), 1155 + .sjw_max = (1 << PUCAN_TFAST_SJW_BITS), 1156 + .brp_min = 1, 1157 + .brp_max = (1 << PUCAN_TFAST_BRP_BITS), 1158 + .brp_inc = 1, 1159 + }; 1160 + 1161 + const struct peak_usb_adapter pcan_usb_x6 = { 1162 + .name = "PCAN-USB X6", 1163 + .device_id = PCAN_USBX6_PRODUCT_ID, 1164 + .ctrl_count = PCAN_USBPROFD_CHANNEL_COUNT, 1165 + .ctrlmode_supported = CAN_CTRLMODE_FD | 1166 + CAN_CTRLMODE_3_SAMPLES | CAN_CTRLMODE_LISTENONLY, 1167 + .clock = { 1168 + .freq = PCAN_UFD_CRYSTAL_HZ, 1169 + }, 1170 + .bittiming_const = &pcan_usb_x6_const, 1171 + .data_bittiming_const = &pcan_usb_x6_data_const, 1100 1172 1101 1173 /* size of device private data */ 1102 1174 .sizeof_dev_private = sizeof(struct pcan_usb_fd_device),
+8 -13
drivers/net/ethernet/altera/altera_tse_main.c
··· 400 400 401 401 skb_put(skb, pktlength); 402 402 403 - /* make cache consistent with receive packet buffer */ 404 - dma_sync_single_for_cpu(priv->device, 405 - priv->rx_ring[entry].dma_addr, 406 - priv->rx_ring[entry].len, 407 - DMA_FROM_DEVICE); 408 - 409 403 dma_unmap_single(priv->device, priv->rx_ring[entry].dma_addr, 410 404 priv->rx_ring[entry].len, DMA_FROM_DEVICE); 411 405 ··· 463 469 464 470 if (unlikely(netif_queue_stopped(priv->dev) && 465 471 tse_tx_avail(priv) > TSE_TX_THRESH(priv))) { 466 - netif_tx_lock(priv->dev); 467 472 if (netif_queue_stopped(priv->dev) && 468 473 tse_tx_avail(priv) > TSE_TX_THRESH(priv)) { 469 474 if (netif_msg_tx_done(priv)) ··· 470 477 __func__); 471 478 netif_wake_queue(priv->dev); 472 479 } 473 - netif_tx_unlock(priv->dev); 474 480 } 475 481 476 482 spin_unlock(&priv->tx_lock); ··· 583 591 buffer->skb = skb; 584 592 buffer->dma_addr = dma_addr; 585 593 buffer->len = nopaged_len; 586 - 587 - /* Push data out of the cache hierarchy into main memory */ 588 - dma_sync_single_for_device(priv->device, buffer->dma_addr, 589 - buffer->len, DMA_TO_DEVICE); 590 594 591 595 priv->dmaops->tx_buffer(priv, buffer); 592 596 ··· 807 819 808 820 if (!phydev) { 809 821 netdev_err(dev, "Could not find the PHY\n"); 822 + if (fixed_link) 823 + of_phy_deregister_fixed_link(priv->device->of_node); 810 824 return -ENODEV; 811 825 } 812 826 ··· 1535 1545 static int altera_tse_remove(struct platform_device *pdev) 1536 1546 { 1537 1547 struct net_device *ndev = platform_get_drvdata(pdev); 1548 + struct altera_tse_private *priv = netdev_priv(ndev); 1538 1549 1539 - if (ndev->phydev) 1550 + if (ndev->phydev) { 1540 1551 phy_disconnect(ndev->phydev); 1552 + 1553 + if (of_phy_is_fixed_link(priv->device->of_node)) 1554 + of_phy_deregister_fixed_link(priv->device->of_node); 1555 + } 1541 1556 1542 1557 platform_set_drvdata(pdev, NULL); 1543 1558 altera_tse_mdio_destroy(ndev);
+2 -2
drivers/net/ethernet/amd/xgbe/xgbe-main.c
··· 829 829 return 0; 830 830 } 831 831 832 - #ifdef CONFIG_PM 832 + #ifdef CONFIG_PM_SLEEP 833 833 static int xgbe_suspend(struct device *dev) 834 834 { 835 835 struct net_device *netdev = dev_get_drvdata(dev); ··· 874 874 875 875 return ret; 876 876 } 877 - #endif /* CONFIG_PM */ 877 + #endif /* CONFIG_PM_SLEEP */ 878 878 879 879 #ifdef CONFIG_ACPI 880 880 static const struct acpi_device_id xgbe_acpi_match[] = {
+7 -2
drivers/net/ethernet/aurora/nb8800.c
··· 1466 1466 1467 1467 ret = nb8800_hw_init(dev); 1468 1468 if (ret) 1469 - goto err_free_bus; 1469 + goto err_deregister_fixed_link; 1470 1470 1471 1471 if (ops && ops->init) { 1472 1472 ret = ops->init(dev); 1473 1473 if (ret) 1474 - goto err_free_bus; 1474 + goto err_deregister_fixed_link; 1475 1475 } 1476 1476 1477 1477 dev->netdev_ops = &nb8800_netdev_ops; ··· 1504 1504 1505 1505 err_free_dma: 1506 1506 nb8800_dma_free(dev); 1507 + err_deregister_fixed_link: 1508 + if (of_phy_is_fixed_link(pdev->dev.of_node)) 1509 + of_phy_deregister_fixed_link(pdev->dev.of_node); 1507 1510 err_free_bus: 1508 1511 of_node_put(priv->phy_node); 1509 1512 mdiobus_unregister(bus); ··· 1524 1521 struct nb8800_priv *priv = netdev_priv(ndev); 1525 1522 1526 1523 unregister_netdev(ndev); 1524 + if (of_phy_is_fixed_link(pdev->dev.of_node)) 1525 + of_phy_deregister_fixed_link(pdev->dev.of_node); 1527 1526 of_node_put(priv->phy_node); 1528 1527 1529 1528 mdiobus_unregister(priv->mii_bus);
+12 -5
drivers/net/ethernet/broadcom/bcmsysport.c
··· 1755 1755 if (priv->irq0 <= 0 || priv->irq1 <= 0) { 1756 1756 dev_err(&pdev->dev, "invalid interrupts\n"); 1757 1757 ret = -EINVAL; 1758 - goto err; 1758 + goto err_free_netdev; 1759 1759 } 1760 1760 1761 1761 priv->base = devm_ioremap_resource(&pdev->dev, r); 1762 1762 if (IS_ERR(priv->base)) { 1763 1763 ret = PTR_ERR(priv->base); 1764 - goto err; 1764 + goto err_free_netdev; 1765 1765 } 1766 1766 1767 1767 priv->netdev = dev; ··· 1779 1779 ret = of_phy_register_fixed_link(dn); 1780 1780 if (ret) { 1781 1781 dev_err(&pdev->dev, "failed to register fixed PHY\n"); 1782 - goto err; 1782 + goto err_free_netdev; 1783 1783 } 1784 1784 1785 1785 priv->phy_dn = dn; ··· 1821 1821 ret = register_netdev(dev); 1822 1822 if (ret) { 1823 1823 dev_err(&pdev->dev, "failed to register net_device\n"); 1824 - goto err; 1824 + goto err_deregister_fixed_link; 1825 1825 } 1826 1826 1827 1827 priv->rev = topctrl_readl(priv, REV_CNTL) & REV_MASK; ··· 1832 1832 priv->base, priv->irq0, priv->irq1, txq, rxq); 1833 1833 1834 1834 return 0; 1835 - err: 1835 + 1836 + err_deregister_fixed_link: 1837 + if (of_phy_is_fixed_link(dn)) 1838 + of_phy_deregister_fixed_link(dn); 1839 + err_free_netdev: 1836 1840 free_netdev(dev); 1837 1841 return ret; 1838 1842 } ··· 1844 1840 static int bcm_sysport_remove(struct platform_device *pdev) 1845 1841 { 1846 1842 struct net_device *dev = dev_get_drvdata(&pdev->dev); 1843 + struct device_node *dn = pdev->dev.of_node; 1847 1844 1848 1845 /* Not much to do, ndo_close has been called 1849 1846 * and we use managed allocations 1850 1847 */ 1851 1848 unregister_netdev(dev); 1849 + if (of_phy_is_fixed_link(dn)) 1850 + of_phy_deregister_fixed_link(dn); 1852 1851 free_netdev(dev); 1853 1852 dev_set_drvdata(&pdev->dev, NULL); 1854 1853
+5 -3
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1172 1172 struct bcmgenet_tx_ring *ring) 1173 1173 { 1174 1174 struct bcmgenet_priv *priv = netdev_priv(dev); 1175 + struct device *kdev = &priv->pdev->dev; 1175 1176 struct enet_cb *tx_cb_ptr; 1176 1177 struct netdev_queue *txq; 1177 1178 unsigned int pkts_compl = 0; ··· 1200 1199 if (tx_cb_ptr->skb) { 1201 1200 pkts_compl++; 1202 1201 bytes_compl += GENET_CB(tx_cb_ptr->skb)->bytes_sent; 1203 - dma_unmap_single(&dev->dev, 1202 + dma_unmap_single(kdev, 1204 1203 dma_unmap_addr(tx_cb_ptr, dma_addr), 1205 1204 dma_unmap_len(tx_cb_ptr, dma_len), 1206 1205 DMA_TO_DEVICE); 1207 1206 bcmgenet_free_cb(tx_cb_ptr); 1208 1207 } else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) { 1209 - dma_unmap_page(&dev->dev, 1208 + dma_unmap_page(kdev, 1210 1209 dma_unmap_addr(tx_cb_ptr, dma_addr), 1211 1210 dma_unmap_len(tx_cb_ptr, dma_len), 1212 1211 DMA_TO_DEVICE); ··· 1776 1775 1777 1776 static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv) 1778 1777 { 1778 + struct device *kdev = &priv->pdev->dev; 1779 1779 struct enet_cb *cb; 1780 1780 int i; 1781 1781 ··· 1784 1782 cb = &priv->rx_cbs[i]; 1785 1783 1786 1784 if (dma_unmap_addr(cb, dma_addr)) { 1787 - dma_unmap_single(&priv->dev->dev, 1785 + dma_unmap_single(kdev, 1788 1786 dma_unmap_addr(cb, dma_addr), 1789 1787 priv->rx_buf_len, DMA_FROM_DEVICE); 1790 1788 dma_unmap_addr_set(cb, dma_addr, 0);
+9 -1
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 542 542 /* Make sure we initialize MoCA PHYs with a link down */ 543 543 if (phy_mode == PHY_INTERFACE_MODE_MOCA) { 544 544 phydev = of_phy_find_device(dn); 545 - if (phydev) 545 + if (phydev) { 546 546 phydev->link = 0; 547 + put_device(&phydev->mdio.dev); 548 + } 547 549 } 548 550 549 551 return 0; ··· 627 625 int bcmgenet_mii_init(struct net_device *dev) 628 626 { 629 627 struct bcmgenet_priv *priv = netdev_priv(dev); 628 + struct device_node *dn = priv->pdev->dev.of_node; 630 629 int ret; 631 630 632 631 ret = bcmgenet_mii_alloc(priv); ··· 641 638 return 0; 642 639 643 640 out: 641 + if (of_phy_is_fixed_link(dn)) 642 + of_phy_deregister_fixed_link(dn); 644 643 of_node_put(priv->phy_dn); 645 644 mdiobus_unregister(priv->mii_bus); 646 645 mdiobus_free(priv->mii_bus); ··· 652 647 void bcmgenet_mii_exit(struct net_device *dev) 653 648 { 654 649 struct bcmgenet_priv *priv = netdev_priv(dev); 650 + struct device_node *dn = priv->pdev->dev.of_node; 655 651 652 + if (of_phy_is_fixed_link(dn)) 653 + of_phy_deregister_fixed_link(dn); 656 654 of_node_put(priv->phy_dn); 657 655 mdiobus_unregister(priv->mii_bus); 658 656 mdiobus_free(priv->mii_bus);
+3 -2
drivers/net/ethernet/cadence/macb.c
··· 975 975 addr += bp->rx_buffer_size; 976 976 } 977 977 bp->rx_ring[RX_RING_SIZE - 1].addr |= MACB_BIT(RX_WRAP); 978 + bp->rx_tail = 0; 978 979 } 979 980 980 981 static int macb_rx(struct macb *bp, int budget) ··· 1157 1156 if (status & MACB_BIT(RXUBR)) { 1158 1157 ctrl = macb_readl(bp, NCR); 1159 1158 macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE)); 1159 + wmb(); 1160 1160 macb_writel(bp, NCR, ctrl | MACB_BIT(RE)); 1161 1161 1162 1162 if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) ··· 1618 1616 bp->queues[0].tx_head = 0; 1619 1617 bp->queues[0].tx_tail = 0; 1620 1618 bp->queues[0].tx_ring[TX_RING_SIZE - 1].ctrl |= MACB_BIT(TX_WRAP); 1621 - 1622 - bp->rx_tail = 0; 1623 1619 } 1624 1620 1625 1621 static void macb_reset_hw(struct macb *bp) ··· 2770 2770 if (intstatus & MACB_BIT(RXUBR)) { 2771 2771 ctl = macb_readl(lp, NCR); 2772 2772 macb_writel(lp, NCR, ctl & ~MACB_BIT(RE)); 2773 + wmb(); 2773 2774 macb_writel(lp, NCR, ctl | MACB_BIT(RE)); 2774 2775 } 2775 2776
+1
drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
··· 168 168 CH_PCI_ID_TABLE_FENTRY(0x509a), /* Custom T520-CR */ 169 169 CH_PCI_ID_TABLE_FENTRY(0x509b), /* Custom T540-CR LOM */ 170 170 CH_PCI_ID_TABLE_FENTRY(0x509c), /* Custom T520-CR*/ 171 + CH_PCI_ID_TABLE_FENTRY(0x509d), /* Custom T540-CR*/ 171 172 172 173 /* T6 adapters: 173 174 */
+2
drivers/net/ethernet/freescale/fec.h
··· 574 574 unsigned int reload_period; 575 575 int pps_enable; 576 576 unsigned int next_counter; 577 + 578 + u64 ethtool_stats[0]; 577 579 }; 578 580 579 581 void fec_ptp_init(struct platform_device *pdev);
+24 -4
drivers/net/ethernet/freescale/fec_main.c
··· 2313 2313 { "IEEE_rx_octets_ok", IEEE_R_OCTETS_OK }, 2314 2314 }; 2315 2315 2316 - static void fec_enet_get_ethtool_stats(struct net_device *dev, 2317 - struct ethtool_stats *stats, u64 *data) 2316 + static void fec_enet_update_ethtool_stats(struct net_device *dev) 2318 2317 { 2319 2318 struct fec_enet_private *fep = netdev_priv(dev); 2320 2319 int i; 2321 2320 2322 2321 for (i = 0; i < ARRAY_SIZE(fec_stats); i++) 2323 - data[i] = readl(fep->hwp + fec_stats[i].offset); 2322 + fep->ethtool_stats[i] = readl(fep->hwp + fec_stats[i].offset); 2323 + } 2324 + 2325 + static void fec_enet_get_ethtool_stats(struct net_device *dev, 2326 + struct ethtool_stats *stats, u64 *data) 2327 + { 2328 + struct fec_enet_private *fep = netdev_priv(dev); 2329 + 2330 + if (netif_running(dev)) 2331 + fec_enet_update_ethtool_stats(dev); 2332 + 2333 + memcpy(data, fep->ethtool_stats, ARRAY_SIZE(fec_stats) * sizeof(u64)); 2324 2334 } 2325 2335 2326 2336 static void fec_enet_get_strings(struct net_device *netdev, ··· 2884 2874 if (fep->quirks & FEC_QUIRK_ERR006687) 2885 2875 imx6q_cpuidle_fec_irqs_unused(); 2886 2876 2877 + fec_enet_update_ethtool_stats(ndev); 2878 + 2887 2879 fec_enet_clk_enable(ndev, false); 2888 2880 pinctrl_pm_select_sleep_state(&fep->pdev->dev); 2889 2881 pm_runtime_mark_last_busy(&fep->pdev->dev); ··· 3192 3180 3193 3181 fec_restart(ndev); 3194 3182 3183 + fec_enet_update_ethtool_stats(ndev); 3184 + 3195 3185 return 0; 3196 3186 } 3197 3187 ··· 3292 3278 fec_enet_get_queue_num(pdev, &num_tx_qs, &num_rx_qs); 3293 3279 3294 3280 /* Init network device */ 3295 - ndev = alloc_etherdev_mqs(sizeof(struct fec_enet_private), 3281 + ndev = alloc_etherdev_mqs(sizeof(struct fec_enet_private) + 3282 + ARRAY_SIZE(fec_stats) * sizeof(u64), 3296 3283 num_tx_qs, num_rx_qs); 3297 3284 if (!ndev) 3298 3285 return -ENOMEM; ··· 3490 3475 failed_clk_ipg: 3491 3476 fec_enet_clk_enable(ndev, false); 3492 3477 failed_clk: 3478 + if (of_phy_is_fixed_link(np)) 3479 + 
of_phy_deregister_fixed_link(np); 3493 3480 failed_phy: 3494 3481 of_node_put(phy_node); 3495 3482 failed_ioremap: ··· 3505 3488 { 3506 3489 struct net_device *ndev = platform_get_drvdata(pdev); 3507 3490 struct fec_enet_private *fep = netdev_priv(ndev); 3491 + struct device_node *np = pdev->dev.of_node; 3508 3492 3509 3493 cancel_work_sync(&fep->tx_timeout_work); 3510 3494 fec_ptp_stop(pdev); ··· 3513 3495 fec_enet_mii_remove(fep); 3514 3496 if (fep->reg_phy) 3515 3497 regulator_disable(fep->reg_phy); 3498 + if (of_phy_is_fixed_link(np)) 3499 + of_phy_deregister_fixed_link(np); 3516 3500 of_node_put(fep->phy_node); 3517 3501 free_netdev(ndev); 3518 3502
+3
drivers/net/ethernet/freescale/fman/fman_memac.c
··· 1107 1107 { 1108 1108 free_init_resources(memac); 1109 1109 1110 + if (memac->pcsphy) 1111 + put_device(&memac->pcsphy->mdio.dev); 1112 + 1110 1113 kfree(memac->memac_drv_param); 1111 1114 kfree(memac); 1112 1115
+2
drivers/net/ethernet/freescale/fman/mac.c
··· 892 892 priv->fixed_link->duplex = phy->duplex; 893 893 priv->fixed_link->pause = phy->pause; 894 894 priv->fixed_link->asym_pause = phy->asym_pause; 895 + 896 + put_device(&phy->mdio.dev); 895 897 } 896 898 897 899 err = mac_dev->init(mac_dev);
+6 -1
drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
··· 980 980 err = clk_prepare_enable(clk); 981 981 if (err) { 982 982 ret = err; 983 - goto out_free_fpi; 983 + goto out_deregister_fixed_link; 984 984 } 985 985 fpi->clk_per = clk; 986 986 } ··· 1061 1061 of_node_put(fpi->phy_node); 1062 1062 if (fpi->clk_per) 1063 1063 clk_disable_unprepare(fpi->clk_per); 1064 + out_deregister_fixed_link: 1065 + if (of_phy_is_fixed_link(ofdev->dev.of_node)) 1066 + of_phy_deregister_fixed_link(ofdev->dev.of_node); 1064 1067 out_free_fpi: 1065 1068 kfree(fpi); 1066 1069 return ret; ··· 1082 1079 of_node_put(fep->fpi->phy_node); 1083 1080 if (fep->fpi->clk_per) 1084 1081 clk_disable_unprepare(fep->fpi->clk_per); 1082 + if (of_phy_is_fixed_link(ofdev->dev.of_node)) 1083 + of_phy_deregister_fixed_link(ofdev->dev.of_node); 1085 1084 free_netdev(ndev); 1086 1085 return 0; 1087 1086 }
+8
drivers/net/ethernet/freescale/gianfar.c
··· 1312 1312 */ 1313 1313 static int gfar_probe(struct platform_device *ofdev) 1314 1314 { 1315 + struct device_node *np = ofdev->dev.of_node; 1315 1316 struct net_device *dev = NULL; 1316 1317 struct gfar_private *priv = NULL; 1317 1318 int err = 0, i; ··· 1463 1462 return 0; 1464 1463 1465 1464 register_fail: 1465 + if (of_phy_is_fixed_link(np)) 1466 + of_phy_deregister_fixed_link(np); 1466 1467 unmap_group_regs(priv); 1467 1468 gfar_free_rx_queues(priv); 1468 1469 gfar_free_tx_queues(priv); ··· 1477 1474 static int gfar_remove(struct platform_device *ofdev) 1478 1475 { 1479 1476 struct gfar_private *priv = platform_get_drvdata(ofdev); 1477 + struct device_node *np = ofdev->dev.of_node; 1480 1478 1481 1479 of_node_put(priv->phy_node); 1482 1480 of_node_put(priv->tbi_node); 1483 1481 1484 1482 unregister_netdev(priv->ndev); 1483 + 1484 + if (of_phy_is_fixed_link(np)) 1485 + of_phy_deregister_fixed_link(np); 1486 + 1485 1487 unmap_group_regs(priv); 1486 1488 gfar_free_rx_queues(priv); 1487 1489 gfar_free_tx_queues(priv);
+16 -7
drivers/net/ethernet/freescale/ucc_geth.c
··· 3868 3868 dev = alloc_etherdev(sizeof(*ugeth)); 3869 3869 3870 3870 if (dev == NULL) { 3871 - of_node_put(ug_info->tbi_node); 3872 - of_node_put(ug_info->phy_node); 3873 - return -ENOMEM; 3871 + err = -ENOMEM; 3872 + goto err_deregister_fixed_link; 3874 3873 } 3875 3874 3876 3875 ugeth = netdev_priv(dev); ··· 3906 3907 if (netif_msg_probe(ugeth)) 3907 3908 pr_err("%s: Cannot register net device, aborting\n", 3908 3909 dev->name); 3909 - free_netdev(dev); 3910 - of_node_put(ug_info->tbi_node); 3911 - of_node_put(ug_info->phy_node); 3912 - return err; 3910 + goto err_free_netdev; 3913 3911 } 3914 3912 3915 3913 mac_addr = of_get_mac_address(np); ··· 3919 3923 ugeth->node = np; 3920 3924 3921 3925 return 0; 3926 + 3927 + err_free_netdev: 3928 + free_netdev(dev); 3929 + err_deregister_fixed_link: 3930 + if (of_phy_is_fixed_link(np)) 3931 + of_phy_deregister_fixed_link(np); 3932 + of_node_put(ug_info->tbi_node); 3933 + of_node_put(ug_info->phy_node); 3934 + 3935 + return err; 3922 3936 } 3923 3937 3924 3938 static int ucc_geth_remove(struct platform_device* ofdev) 3925 3939 { 3926 3940 struct net_device *dev = platform_get_drvdata(ofdev); 3927 3941 struct ucc_geth_private *ugeth = netdev_priv(dev); 3942 + struct device_node *np = ofdev->dev.of_node; 3928 3943 3929 3944 unregister_netdev(dev); 3930 3945 free_netdev(dev); 3931 3946 ucc_geth_memclean(ugeth); 3947 + if (of_phy_is_fixed_link(np)) 3948 + of_phy_deregister_fixed_link(np); 3932 3949 of_node_put(ugeth->ug_info->tbi_node); 3933 3950 of_node_put(ugeth->ug_info->phy_node); 3934 3951
+6 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 4931 4931 4932 4932 /* initialize outer IP header fields */ 4933 4933 if (ip.v4->version == 4) { 4934 + unsigned char *csum_start = skb_checksum_start(skb); 4935 + unsigned char *trans_start = ip.hdr + (ip.v4->ihl * 4); 4936 + 4934 4937 /* IP header will have to cancel out any data that 4935 4938 * is not a part of the outer IP header 4936 4939 */ 4937 - ip.v4->check = csum_fold(csum_add(lco_csum(skb), 4938 - csum_unfold(l4.tcp->check))); 4940 + ip.v4->check = csum_fold(csum_partial(trans_start, 4941 + csum_start - trans_start, 4942 + 0)); 4939 4943 type_tucmd |= E1000_ADVTXD_TUCMD_IPV4; 4940 4944 4941 4945 ip.v4->tot_len = 0;
+6 -2
drivers/net/ethernet/intel/igbvf/netdev.c
··· 1965 1965 1966 1966 /* initialize outer IP header fields */ 1967 1967 if (ip.v4->version == 4) { 1968 + unsigned char *csum_start = skb_checksum_start(skb); 1969 + unsigned char *trans_start = ip.hdr + (ip.v4->ihl * 4); 1970 + 1968 1971 /* IP header will have to cancel out any data that 1969 1972 * is not a part of the outer IP header 1970 1973 */ 1971 - ip.v4->check = csum_fold(csum_add(lco_csum(skb), 1972 - csum_unfold(l4.tcp->check))); 1974 + ip.v4->check = csum_fold(csum_partial(trans_start, 1975 + csum_start - trans_start, 1976 + 0)); 1973 1977 type_tucmd |= E1000_ADVTXD_TUCMD_IPV4; 1974 1978 1975 1979 ip.v4->tot_len = 0;
+6 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 7277 7277 7278 7278 /* initialize outer IP header fields */ 7279 7279 if (ip.v4->version == 4) { 7280 + unsigned char *csum_start = skb_checksum_start(skb); 7281 + unsigned char *trans_start = ip.hdr + (ip.v4->ihl * 4); 7282 + 7280 7283 /* IP header will have to cancel out any data that 7281 7284 * is not a part of the outer IP header 7282 7285 */ 7283 - ip.v4->check = csum_fold(csum_add(lco_csum(skb), 7284 - csum_unfold(l4.tcp->check))); 7286 + ip.v4->check = csum_fold(csum_partial(trans_start, 7287 + csum_start - trans_start, 7288 + 0)); 7285 7289 type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4; 7286 7290 7287 7291 ip.v4->tot_len = 0;
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c (+6 -2)
···
 
 	/* initialize outer IP header fields */
 	if (ip.v4->version == 4) {
+		unsigned char *csum_start = skb_checksum_start(skb);
+		unsigned char *trans_start = ip.hdr + (ip.v4->ihl * 4);
+
 		/* IP header will have to cancel out any data that
 		 * is not a part of the outer IP header
 		 */
-		ip.v4->check = csum_fold(csum_add(lco_csum(skb),
-						  csum_unfold(l4.tcp->check)));
+		ip.v4->check = csum_fold(csum_partial(trans_start,
+						      csum_start - trans_start,
+						      0));
 		type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
 
 		ip.v4->tot_len = 0;
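The four Intel hunks above all switch the outer IPv4 header checksum from an `lco_csum()`-based derivation to a direct `csum_partial()`/`csum_fold()` over the bytes between the inner network header and the checksum start. As a standalone illustration of the underlying one's-complement fold (a userspace stand-in, not the kernel helpers), here is a sketch that computes an IPv4 header checksum over a byte range:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One's-complement sum over a byte range, folded to 16 bits -- the same
 * arithmetic csum_partial()+csum_fold() perform in the patches above.
 * The function name is illustrative, not a kernel API. */
static uint16_t csum_fold_range(const uint8_t *data, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	/* sum 16-bit big-endian words */
	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)((data[i] << 8) | data[i + 1]);
	if (len & 1)
		sum += (uint32_t)(data[len - 1] << 8);

	/* fold carries back into the low 16 bits, then complement */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

Computing this over an IPv4 header with the checksum field zeroed yields the value to store; re-summing the header with the correct checksum in place yields 0, which is how receivers verify it.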
drivers/net/ethernet/marvell/mvneta.c (+5)
···
 	clk_disable_unprepare(pp->clk);
 err_put_phy_node:
 	of_node_put(phy_node);
+	if (of_phy_is_fixed_link(dn))
+		of_phy_deregister_fixed_link(dn);
 err_free_irq:
 	irq_dispose_mapping(dev->irq);
 err_free_netdev:
···
 static int mvneta_remove(struct platform_device *pdev)
 {
 	struct net_device *dev = platform_get_drvdata(pdev);
+	struct device_node *dn = pdev->dev.of_node;
 	struct mvneta_port *pp = netdev_priv(dev);
 
 	unregister_netdev(dev);
···
 	clk_disable_unprepare(pp->clk);
 	free_percpu(pp->ports);
 	free_percpu(pp->stats);
+	if (of_phy_is_fixed_link(dn))
+		of_phy_deregister_fixed_link(dn);
 	irq_dispose_mapping(dev->irq);
 	of_node_put(pp->phy_node);
 	free_netdev(dev);
drivers/net/ethernet/mediatek/mtk_eth_soc.c (+4)
···
 	return 0;
 
 err_phy:
+	if (of_phy_is_fixed_link(mac->of_node))
+		of_phy_deregister_fixed_link(mac->of_node);
 	of_node_put(np);
 	dev_err(eth->dev, "%s: invalid phy\n", __func__);
 	return -EINVAL;
···
 	struct mtk_eth *eth = mac->hw;
 
 	phy_disconnect(dev->phydev);
+	if (of_phy_is_fixed_link(mac->of_node))
+		of_phy_deregister_fixed_link(mac->of_node);
 	mtk_irq_disable(eth, MTK_QDMA_INT_MASK, ~0);
 	mtk_irq_disable(eth, MTK_PDMA_INT_MASK, ~0);
 }
drivers/net/ethernet/mellanox/mlx4/en_netdev.c (+2 -15)
···
 	return -ENOMEM;
 }
 
-static void mlx4_en_shutdown(struct net_device *dev)
-{
-	rtnl_lock();
-	netif_device_detach(dev);
-	mlx4_en_close(dev);
-	rtnl_unlock();
-}
 
 static int mlx4_en_copy_priv(struct mlx4_en_priv *dst,
 			     struct mlx4_en_priv *src,
···
 {
 	struct mlx4_en_priv *priv = netdev_priv(dev);
 	struct mlx4_en_dev *mdev = priv->mdev;
-	bool shutdown = mdev->dev->persist->interface_state &
-			MLX4_INTERFACE_STATE_SHUTDOWN;
 
 	en_dbg(DRV, priv, "Destroying netdev on port:%d\n", priv->port);
 
···
 	if (priv->registered) {
 		devlink_port_type_clear(mlx4_get_devlink_port(mdev->dev,
 							      priv->port));
-		if (shutdown)
-			mlx4_en_shutdown(dev);
-		else
-			unregister_netdev(dev);
+		unregister_netdev(dev);
 	}
 
 	if (priv->allocated)
···
 	kfree(priv->tx_ring);
 	kfree(priv->tx_cq);
 
-	if (!shutdown)
-		free_netdev(dev);
+	free_netdev(dev);
 }
 
 static int mlx4_en_change_mtu(struct net_device *dev, int new_mtu)
drivers/net/ethernet/mellanox/mlx4/main.c (+1 -4)
···
 
 	mlx4_info(persist->dev, "mlx4_shutdown was called\n");
 	mutex_lock(&persist->interface_state_mutex);
-	if (persist->interface_state & MLX4_INTERFACE_STATE_UP) {
-		/* Notify mlx4 clients that the kernel is being shut down */
-		persist->interface_state |= MLX4_INTERFACE_STATE_SHUTDOWN;
+	if (persist->interface_state & MLX4_INTERFACE_STATE_UP)
 		mlx4_unload_one(pdev);
-	}
 	mutex_unlock(&persist->interface_state_mutex);
 }
drivers/net/ethernet/mellanox/mlx4/mcg.c (+6 -1)
···
 int mlx4_flow_steer_promisc_add(struct mlx4_dev *dev, u8 port,
 				u32 qpn, enum mlx4_net_trans_promisc_mode mode)
 {
-	struct mlx4_net_trans_rule rule;
+	struct mlx4_net_trans_rule rule = {
+		.queue_mode = MLX4_NET_TRANS_Q_FIFO,
+		.exclusive = 0,
+		.allow_loopback = 1,
+	};
+
 	u64 *regid_p;
 
 	switch (mode) {
drivers/net/ethernet/qualcomm/emac/emac-phy.c (+1)
···
 
 		phy_np = of_parse_phandle(np, "phy-handle", 0);
 		adpt->phydev = of_phy_find_device(phy_np);
+		of_node_put(phy_np);
 	}
 
 	if (!adpt->phydev) {
drivers/net/ethernet/qualcomm/emac/emac.c (+4)
···
 err_undo_napi:
 	netif_napi_del(&adpt->rx_q.napi);
 err_undo_mdiobus:
+	if (!has_acpi_companion(&pdev->dev))
+		put_device(&adpt->phydev->mdio.dev);
 	mdiobus_unregister(adpt->mii_bus);
 err_undo_clocks:
 	emac_clks_teardown(adpt);
···
 
 	emac_clks_teardown(adpt);
 
+	if (!has_acpi_companion(&pdev->dev))
+		put_device(&adpt->phydev->mdio.dev);
 	mdiobus_unregister(adpt->mii_bus);
 	free_netdev(netdev);
 
drivers/net/ethernet/renesas/ravb_main.c (+14 -5)
···
 	of_node_put(pn);
 	if (!phydev) {
 		netdev_err(ndev, "failed to connect PHY\n");
-		return -ENOENT;
+		err = -ENOENT;
+		goto err_deregister_fixed_link;
 	}
 
 	/* This driver only support 10/100Mbit speeds on Gen3
 	 * at this time.
 	 */
 	if (priv->chip_id == RCAR_GEN3) {
-		int err;
-
 		err = phy_set_max_speed(phydev, SPEED_100);
 		if (err) {
 			netdev_err(ndev, "failed to limit PHY to 100Mbit/s\n");
-			phy_disconnect(phydev);
-			return err;
+			goto err_phy_disconnect;
 		}
 
 		netdev_info(ndev, "limited PHY to 100Mbit/s\n");
···
 	phy_attached_info(phydev);
 
 	return 0;
+
+err_phy_disconnect:
+	phy_disconnect(phydev);
+err_deregister_fixed_link:
+	if (of_phy_is_fixed_link(np))
+		of_phy_deregister_fixed_link(np);
+
+	return err;
 }
 
 /* PHY control start function */
···
 /* Device close function for Ethernet AVB */
 static int ravb_close(struct net_device *ndev)
 {
+	struct device_node *np = ndev->dev.parent->of_node;
 	struct ravb_private *priv = netdev_priv(ndev);
 	struct ravb_tstamp_skb *ts_skb, *ts_skb2;
 
···
 	if (ndev->phydev) {
 		phy_stop(ndev->phydev);
 		phy_disconnect(ndev->phydev);
+		if (of_phy_is_fixed_link(np))
+			of_phy_deregister_fixed_link(np);
 	}
 
 	if (priv->chip_id != RCAR_GEN2) {
drivers/net/ethernet/renesas/sh_eth.c (+1 -1)
···
 
 	.ecsr_value	= ECSR_ICD,
 	.ecsipr_value	= ECSIPR_ICDIP,
-	.eesipr_value	= 0xff7f009f,
+	.eesipr_value	= 0xe77f009f,
 
 	.tx_check	= EESR_TC1 | EESR_FTC,
 	.eesr_err_check	= EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT |
drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c (+15 -2)
···
 	if (plat_dat->init) {
 		ret = plat_dat->init(pdev, plat_dat->bsp_priv);
 		if (ret)
-			return ret;
+			goto err_remove_config_dt;
 	}
 
-	return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (ret)
+		goto err_exit;
+
+	return 0;
+
+err_exit:
+	if (plat_dat->exit)
+		plat_dat->exit(pdev, plat_dat->bsp_priv);
+err_remove_config_dt:
+	if (pdev->dev.of_node)
+		stmmac_remove_config_dt(pdev, plat_dat);
+
+	return ret;
 }
 
 static const struct of_device_id dwmac_generic_match[] = {
drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c (+19 -6)
···
 		return PTR_ERR(plat_dat);
 
 	gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL);
-	if (!gmac)
-		return -ENOMEM;
+	if (!gmac) {
+		err = -ENOMEM;
+		goto err_remove_config_dt;
+	}
 
 	gmac->pdev = pdev;
 
 	err = ipq806x_gmac_of_parse(gmac);
 	if (err) {
 		dev_err(dev, "device tree parsing error\n");
-		return err;
+		goto err_remove_config_dt;
 	}
 
 	regmap_write(gmac->qsgmii_csr, QSGMII_PCS_CAL_LCKDT_CTL,
···
 	default:
 		dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
 			phy_modes(gmac->phy_mode));
-		return -EINVAL;
+		err = -EINVAL;
+		goto err_remove_config_dt;
 	}
 	regmap_write(gmac->nss_common, NSS_COMMON_GMAC_CTL(gmac->id), val);
 
···
 	default:
 		dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
 			phy_modes(gmac->phy_mode));
-		return -EINVAL;
+		err = -EINVAL;
+		goto err_remove_config_dt;
 	}
 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_SRC_CTRL, val);
 
···
 	plat_dat->bsp_priv = gmac;
 	plat_dat->fix_mac_speed = ipq806x_gmac_fix_mac_speed;
 
-	return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (err)
+		goto err_remove_config_dt;
+
+	return 0;
+
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
+
+	return err;
 }
 
 static const struct of_device_id ipq806x_gmac_dwmac_match[] = {
drivers/net/ethernet/stmicro/stmmac/dwmac-lpc18xx.c (+14 -3)
···
 	reg = syscon_regmap_lookup_by_compatible("nxp,lpc1850-creg");
 	if (IS_ERR(reg)) {
 		dev_err(&pdev->dev, "syscon lookup failed\n");
-		return PTR_ERR(reg);
+		ret = PTR_ERR(reg);
+		goto err_remove_config_dt;
 	}
 
 	if (plat_dat->interface == PHY_INTERFACE_MODE_MII) {
···
 		ethmode = LPC18XX_CREG_CREG6_ETHMODE_RMII;
 	} else {
 		dev_err(&pdev->dev, "Only MII and RMII mode supported\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto err_remove_config_dt;
 	}
 
 	regmap_update_bits(reg, LPC18XX_CREG_CREG6,
 			   LPC18XX_CREG_CREG6_ETHMODE_MASK, ethmode);
 
-	return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (ret)
+		goto err_remove_config_dt;
+
+	return 0;
+
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
+
+	return ret;
 }
 
 static const struct of_device_id lpc18xx_dwmac_match[] = {
drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c (+18 -5)
···
 		return PTR_ERR(plat_dat);
 
 	dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
-	if (!dwmac)
-		return -ENOMEM;
+	if (!dwmac) {
+		ret = -ENOMEM;
+		goto err_remove_config_dt;
+	}
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 	dwmac->reg = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(dwmac->reg))
-		return PTR_ERR(dwmac->reg);
+	if (IS_ERR(dwmac->reg)) {
+		ret = PTR_ERR(dwmac->reg);
+		goto err_remove_config_dt;
+	}
 
 	plat_dat->bsp_priv = dwmac;
 	plat_dat->fix_mac_speed = meson6_dwmac_fix_mac_speed;
 
-	return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (ret)
+		goto err_remove_config_dt;
+
+	return 0;
+
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
+
+	return ret;
 }
 
 static const struct of_device_id meson6_dwmac_match[] = {
drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c (+24 -8)
···
 		return PTR_ERR(plat_dat);
 
 	dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
-	if (!dwmac)
-		return -ENOMEM;
+	if (!dwmac) {
+		ret = -ENOMEM;
+		goto err_remove_config_dt;
+	}
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
 	dwmac->regs = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(dwmac->regs))
-		return PTR_ERR(dwmac->regs);
+	if (IS_ERR(dwmac->regs)) {
+		ret = PTR_ERR(dwmac->regs);
+		goto err_remove_config_dt;
+	}
 
 	dwmac->pdev = pdev;
 	dwmac->phy_mode = of_get_phy_mode(pdev->dev.of_node);
 	if (dwmac->phy_mode < 0) {
 		dev_err(&pdev->dev, "missing phy-mode property\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto err_remove_config_dt;
 	}
 
 	ret = meson8b_init_clk(dwmac);
 	if (ret)
-		return ret;
+		goto err_remove_config_dt;
 
 	ret = meson8b_init_prg_eth(dwmac);
 	if (ret)
-		return ret;
+		goto err_remove_config_dt;
 
 	plat_dat->bsp_priv = dwmac;
 
-	return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (ret)
+		goto err_clk_disable;
+
+	return 0;
+
+err_clk_disable:
+	clk_disable_unprepare(dwmac->m25_div_clk);
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
+
+	return ret;
 }
 
 static int meson8b_dwmac_remove(struct platform_device *pdev)
drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c (+17 -4)
···
 	plat_dat->resume = rk_gmac_resume;
 
 	plat_dat->bsp_priv = rk_gmac_setup(pdev, data);
-	if (IS_ERR(plat_dat->bsp_priv))
-		return PTR_ERR(plat_dat->bsp_priv);
+	if (IS_ERR(plat_dat->bsp_priv)) {
+		ret = PTR_ERR(plat_dat->bsp_priv);
+		goto err_remove_config_dt;
+	}
 
 	ret = rk_gmac_init(pdev, plat_dat->bsp_priv);
 	if (ret)
-		return ret;
+		goto err_remove_config_dt;
 
-	return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (ret)
+		goto err_gmac_exit;
+
+	return 0;
+
+err_gmac_exit:
+	rk_gmac_exit(pdev, plat_dat->bsp_priv);
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
+
+	return ret;
 }
 
 static const struct of_device_id rk_gmac_dwmac_match[] = {
drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c (+26 -13)
···
 	struct device *dev = &pdev->dev;
 	int ret;
 	struct socfpga_dwmac *dwmac;
+	struct net_device *ndev;
+	struct stmmac_priv *stpriv;
 
 	ret = stmmac_get_platform_resources(pdev, &stmmac_res);
 	if (ret)
···
 		return PTR_ERR(plat_dat);
 
 	dwmac = devm_kzalloc(dev, sizeof(*dwmac), GFP_KERNEL);
-	if (!dwmac)
-		return -ENOMEM;
+	if (!dwmac) {
+		ret = -ENOMEM;
+		goto err_remove_config_dt;
+	}
 
 	ret = socfpga_dwmac_parse_data(dwmac, dev);
 	if (ret) {
 		dev_err(dev, "Unable to parse OF data\n");
-		return ret;
+		goto err_remove_config_dt;
 	}
 
 	plat_dat->bsp_priv = dwmac;
 	plat_dat->fix_mac_speed = socfpga_dwmac_fix_mac_speed;
 
 	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (ret)
+		goto err_remove_config_dt;
 
-	if (!ret) {
-		struct net_device *ndev = platform_get_drvdata(pdev);
-		struct stmmac_priv *stpriv = netdev_priv(ndev);
+	ndev = platform_get_drvdata(pdev);
+	stpriv = netdev_priv(ndev);
 
-		/* The socfpga driver needs to control the stmmac reset to
-		 * set the phy mode. Create a copy of the core reset handel
-		 * so it can be used by the driver later.
-		 */
-		dwmac->stmmac_rst = stpriv->stmmac_rst;
+	/* The socfpga driver needs to control the stmmac reset to set the phy
+	 * mode. Create a copy of the core reset handle so it can be used by
+	 * the driver later.
+	 */
+	dwmac->stmmac_rst = stpriv->stmmac_rst;
 
-		ret = socfpga_dwmac_set_phy_mode(dwmac);
-	}
+	ret = socfpga_dwmac_set_phy_mode(dwmac);
+	if (ret)
+		goto err_dvr_remove;
+
+	return 0;
+
+err_dvr_remove:
+	stmmac_dvr_remove(&pdev->dev);
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
 
 	return ret;
 }
drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c (+18 -5)
···
 		return PTR_ERR(plat_dat);
 
 	dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
-	if (!dwmac)
-		return -ENOMEM;
+	if (!dwmac) {
+		ret = -ENOMEM;
+		goto err_remove_config_dt;
+	}
 
 	ret = sti_dwmac_parse_data(dwmac, pdev);
 	if (ret) {
 		dev_err(&pdev->dev, "Unable to parse OF data\n");
-		return ret;
+		goto err_remove_config_dt;
 	}
 
 	dwmac->fix_retime_src = data->fix_retime_src;
···
 
 	ret = sti_dwmac_init(pdev, plat_dat->bsp_priv);
 	if (ret)
-		return ret;
+		goto err_remove_config_dt;
 
-	return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+	if (ret)
+		goto err_dwmac_exit;
+
+	return 0;
+
+err_dwmac_exit:
+	sti_dwmac_exit(pdev, plat_dat->bsp_priv);
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
+
+	return ret;
 }
 
 static const struct sti_dwmac_of_data stih4xx_dwmac_data = {
drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c (+14 -5)
···
 		return PTR_ERR(plat_dat);
 
 	dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
-	if (!dwmac)
-		return -ENOMEM;
+	if (!dwmac) {
+		ret = -ENOMEM;
+		goto err_remove_config_dt;
+	}
 
 	ret = stm32_dwmac_parse_data(dwmac, &pdev->dev);
 	if (ret) {
 		dev_err(&pdev->dev, "Unable to parse OF data\n");
-		return ret;
+		goto err_remove_config_dt;
 	}
 
 	plat_dat->bsp_priv = dwmac;
 
 	ret = stm32_dwmac_init(plat_dat);
 	if (ret)
-		return ret;
+		goto err_remove_config_dt;
 
 	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
 	if (ret)
-		stm32_dwmac_clk_disable(dwmac);
+		goto err_clk_disable;
+
+	return 0;
+
+err_clk_disable:
+	stm32_dwmac_clk_disable(dwmac);
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
 
 	return ret;
 }
drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c (+19 -7)
···
 		return PTR_ERR(plat_dat);
 
 	gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL);
-	if (!gmac)
-		return -ENOMEM;
+	if (!gmac) {
+		ret = -ENOMEM;
+		goto err_remove_config_dt;
+	}
 
 	gmac->interface = of_get_phy_mode(dev->of_node);
 
 	gmac->tx_clk = devm_clk_get(dev, "allwinner_gmac_tx");
 	if (IS_ERR(gmac->tx_clk)) {
 		dev_err(dev, "could not get tx clock\n");
-		return PTR_ERR(gmac->tx_clk);
+		ret = PTR_ERR(gmac->tx_clk);
+		goto err_remove_config_dt;
 	}
 
 	/* Optional regulator for PHY */
 	gmac->regulator = devm_regulator_get_optional(dev, "phy");
 	if (IS_ERR(gmac->regulator)) {
-		if (PTR_ERR(gmac->regulator) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
+		if (PTR_ERR(gmac->regulator) == -EPROBE_DEFER) {
+			ret = -EPROBE_DEFER;
+			goto err_remove_config_dt;
+		}
 		dev_info(dev, "no regulator found\n");
 		gmac->regulator = NULL;
 	}
···
 
 	ret = sun7i_gmac_init(pdev, plat_dat->bsp_priv);
 	if (ret)
-		return ret;
+		goto err_remove_config_dt;
 
 	ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
 	if (ret)
-		sun7i_gmac_exit(pdev, plat_dat->bsp_priv);
+		goto err_gmac_exit;
+
+	return 0;
+
+err_gmac_exit:
+	sun7i_gmac_exit(pdev, plat_dat->bsp_priv);
+err_remove_config_dt:
+	stmmac_remove_config_dt(pdev, plat_dat);
 
 	return ret;
 }
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c (-1)
···
 	stmmac_set_mac(priv->ioaddr, false);
 	netif_carrier_off(ndev);
 	unregister_netdev(ndev);
-	of_node_put(priv->plat->phy_node);
 	if (priv->stmmac_rst)
 		reset_control_assert(priv->stmmac_rst);
 	clk_disable_unprepare(priv->pclk);
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c (+29 -4)
···
 /**
  * stmmac_probe_config_dt - parse device-tree driver parameters
  * @pdev: platform_device structure
- * @plat: driver data platform structure
  * @mac: MAC address to use
  * Description:
  * this function is to read the driver parameters from device-tree and
···
 		dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg),
 				       GFP_KERNEL);
 		if (!dma_cfg) {
-			of_node_put(plat->phy_node);
+			stmmac_remove_config_dt(pdev, plat);
 			return ERR_PTR(-ENOMEM);
 		}
 		plat->dma_cfg = dma_cfg;
···
 
 	return plat;
 }
+
+/**
+ * stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt()
+ * @pdev: platform_device structure
+ * @plat: driver data platform structure
+ *
+ * Release resources claimed by stmmac_probe_config_dt().
+ */
+void stmmac_remove_config_dt(struct platform_device *pdev,
+			     struct plat_stmmacenet_data *plat)
+{
+	struct device_node *np = pdev->dev.of_node;
+
+	if (of_phy_is_fixed_link(np))
+		of_phy_deregister_fixed_link(np);
+	of_node_put(plat->phy_node);
+}
 #else
 struct plat_stmmacenet_data *
 stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
 {
 	return ERR_PTR(-ENOSYS);
 }
+
+void stmmac_remove_config_dt(struct platform_device *pdev,
+			     struct plat_stmmacenet_data *plat)
+{
+}
 #endif /* CONFIG_OF */
 EXPORT_SYMBOL_GPL(stmmac_probe_config_dt);
+EXPORT_SYMBOL_GPL(stmmac_remove_config_dt);
 
 int stmmac_get_platform_resources(struct platform_device *pdev,
 				  struct stmmac_resources *stmmac_res)
···
 {
 	struct net_device *ndev = platform_get_drvdata(pdev);
 	struct stmmac_priv *priv = netdev_priv(ndev);
+	struct plat_stmmacenet_data *plat = priv->plat;
 	int ret = stmmac_dvr_remove(&pdev->dev);
 
-	if (priv->plat->exit)
-		priv->plat->exit(pdev, priv->plat->bsp_priv);
+	if (plat->exit)
+		plat->exit(pdev, plat->bsp_priv);
+
+	stmmac_remove_config_dt(pdev, plat);
 
 	return ret;
 }
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.h (+2)
···
 
 struct plat_stmmacenet_data *
 stmmac_probe_config_dt(struct platform_device *pdev, const char **mac);
+void stmmac_remove_config_dt(struct platform_device *pdev,
+			     struct plat_stmmacenet_data *plat);
 
 int stmmac_get_platform_resources(struct platform_device *pdev,
 				  struct stmmac_resources *stmmac_res);
drivers/net/ethernet/synopsys/dwc_eth_qos.c (+13 -7)
···
 	ret = of_get_phy_mode(lp->pdev->dev.of_node);
 	if (ret < 0) {
 		dev_err(&lp->pdev->dev, "error in getting phy i/f\n");
-		goto err_out_clk_dis_phy;
+		goto err_out_deregister_fixed_link;
 	}
 
 	lp->phy_interface = ret;
···
 	ret = dwceqos_mii_init(lp);
 	if (ret) {
 		dev_err(&lp->pdev->dev, "error in dwceqos_mii_init\n");
-		goto err_out_clk_dis_phy;
+		goto err_out_deregister_fixed_link;
 	}
 
 	ret = dwceqos_mii_probe(ndev);
 	if (ret != 0) {
 		netdev_err(ndev, "mii_probe fail.\n");
 		ret = -ENXIO;
-		goto err_out_clk_dis_phy;
+		goto err_out_deregister_fixed_link;
 	}
 
 	dwceqos_set_umac_addr(lp, lp->ndev->dev_addr, 0);
···
 	if (ret) {
 		dev_err(&lp->pdev->dev, "Unable to retrieve DT, error %d\n",
 			ret);
-		goto err_out_clk_dis_phy;
+		goto err_out_deregister_fixed_link;
 	}
 	dev_info(&lp->pdev->dev, "pdev->id %d, baseaddr 0x%08lx, irq %d\n",
 		 pdev->id, ndev->base_addr, ndev->irq);
···
 	if (ret) {
 		dev_err(&lp->pdev->dev, "Unable to request IRQ %d, error %d\n",
 			ndev->irq, ret);
-		goto err_out_clk_dis_phy;
+		goto err_out_deregister_fixed_link;
 	}
 
 	if (netif_msg_probe(lp))
···
 	ret = register_netdev(ndev);
 	if (ret) {
 		dev_err(&pdev->dev, "Cannot register net device, aborting.\n");
-		goto err_out_clk_dis_phy;
+		goto err_out_deregister_fixed_link;
 	}
 
 	return 0;
 
+err_out_deregister_fixed_link:
+	if (of_phy_is_fixed_link(pdev->dev.of_node))
+		of_phy_deregister_fixed_link(pdev->dev.of_node);
 err_out_clk_dis_phy:
 	clk_disable_unprepare(lp->phy_ref_clk);
 err_out_clk_dis_aper:
···
 	if (ndev) {
 		lp = netdev_priv(ndev);
 
-		if (ndev->phydev)
+		if (ndev->phydev) {
 			phy_disconnect(ndev->phydev);
+			if (of_phy_is_fixed_link(pdev->dev.of_node))
+				of_phy_deregister_fixed_link(pdev->dev.of_node);
+		}
 		mdiobus_unregister(lp->mii_bus);
 		mdiobus_free(lp->mii_bus);
 
drivers/net/ethernet/ti/cpsw.c (+6 -14)
···
 		if (strcmp(slave_node->name, "slave"))
 			continue;
 
-		if (of_phy_is_fixed_link(slave_node)) {
-			struct phy_device *phydev;
-
-			phydev = of_phy_find_device(slave_node);
-			if (phydev) {
-				fixed_phy_unregister(phydev);
-				/* Put references taken by
-				 * of_phy_find_device() and
-				 * of_phy_register_fixed_link().
-				 */
-				phy_device_free(phydev);
-				phy_device_free(phydev);
-			}
-		}
+		if (of_phy_is_fixed_link(slave_node))
+			of_phy_deregister_fixed_link(slave_node);
 
 		of_node_put(slave_data->phy_node);
 
···
 	/* Select default pin state */
 	pinctrl_pm_select_default_state(dev);
 
+	/* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */
+	rtnl_lock();
 	if (cpsw->data.dual_emac) {
 		int i;
 
···
 		if (netif_running(ndev))
 			cpsw_ndo_open(ndev);
 	}
+	rtnl_unlock();
+
 	return 0;
 }
 #endif
drivers/net/ethernet/ti/davinci_emac.c (+9 -1)
···
  */
 static int davinci_emac_probe(struct platform_device *pdev)
 {
+	struct device_node *np = pdev->dev.of_node;
 	int rc = 0;
 	struct resource *res, *res_ctrl;
 	struct net_device *ndev;
···
 	if (!pdata) {
 		dev_err(&pdev->dev, "no platform data\n");
 		rc = -ENODEV;
-		goto no_pdata;
+		goto err_free_netdev;
 	}
 
 	/* MAC addr and PHY mask , RMII enable info from platform_data */
···
 	cpdma_chan_destroy(priv->rxchan);
 	cpdma_ctlr_destroy(priv->dma);
 no_pdata:
+	if (of_phy_is_fixed_link(np))
+		of_phy_deregister_fixed_link(np);
+	of_node_put(priv->phy_node);
+err_free_netdev:
 	free_netdev(ndev);
 	return rc;
 }
···
 {
 	struct net_device *ndev = platform_get_drvdata(pdev);
 	struct emac_priv *priv = netdev_priv(ndev);
+	struct device_node *np = pdev->dev.of_node;
 
 	dev_notice(&ndev->dev, "DaVinci EMAC: davinci_emac_remove()\n");
 
···
 	unregister_netdev(ndev);
 	of_node_put(priv->phy_node);
 	pm_runtime_disable(&pdev->dev);
+	if (of_phy_is_fixed_link(np))
+		of_phy_deregister_fixed_link(np);
 	free_netdev(ndev);
 
 	return 0;
drivers/net/geneve.c (+4 -10)
···
 	struct geneve_dev *geneve = netdev_priv(dev);
 	struct geneve_sock *gs4;
 	struct rtable *rt = NULL;
-	const struct iphdr *iip; /* interior IP header */
 	int err = -EINVAL;
 	struct flowi4 fl4;
 	__u8 tos, ttl;
···
 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	skb_reset_mac_header(skb);
 
-	iip = ip_hdr(skb);
-
 	if (info) {
 		const struct ip_tunnel_key *key = &info->key;
 		u8 *opts = NULL;
···
 		if (unlikely(err))
 			goto tx_error;
 
-		tos = ip_tunnel_ecn_encap(key->tos, iip, skb);
+		tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
 		ttl = key->ttl;
 		df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
 	} else {
···
 		if (unlikely(err))
 			goto tx_error;
 
-		tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, iip, skb);
+		tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, ip_hdr(skb), skb);
 		ttl = geneve->ttl;
 		if (!ttl && IN_MULTICAST(ntohl(fl4.daddr)))
 			ttl = 1;
···
 {
 	struct geneve_dev *geneve = netdev_priv(dev);
 	struct dst_entry *dst = NULL;
-	const struct iphdr *iip; /* interior IP header */
 	struct geneve_sock *gs6;
 	int err = -EINVAL;
 	struct flowi6 fl6;
···
 	sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
 	skb_reset_mac_header(skb);
 
-	iip = ip_hdr(skb);
-
 	if (info) {
 		const struct ip_tunnel_key *key = &info->key;
 		u8 *opts = NULL;
···
 		if (unlikely(err))
 			goto tx_error;
 
-		prio = ip_tunnel_ecn_encap(key->tos, iip, skb);
+		prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
 		ttl = key->ttl;
 		label = info->key.label;
 	} else {
···
 			goto tx_error;
 
 		prio = ip_tunnel_ecn_encap(ip6_tclass(fl6.flowlabel),
-					   iip, skb);
+					   ip_hdr(skb), skb);
 		ttl = geneve->ttl;
 		if (!ttl && ipv6_addr_is_multicast(&fl6.daddr))
 			ttl = 1;
drivers/net/ipvlan/ipvlan_main.c (+12 -5)
···
 	struct net_device *phy_dev;
 	int err;
 	u16 mode = IPVLAN_MODE_L3;
+	bool create = false;
 
 	if (!tb[IFLA_LINK])
 		return -EINVAL;
···
 		err = ipvlan_port_create(phy_dev);
 		if (err < 0)
 			return err;
+		create = true;
 	}
 
 	if (data && data[IFLA_IPVLAN_MODE])
···
 
 	err = register_netdevice(dev);
 	if (err < 0)
-		return err;
+		goto destroy_ipvlan_port;
 
 	err = netdev_upper_dev_link(phy_dev, dev);
 	if (err) {
-		unregister_netdevice(dev);
-		return err;
+		goto unregister_netdev;
 	}
 	err = ipvlan_set_port_mode(port, mode);
 	if (err) {
-		unregister_netdevice(dev);
-		return err;
+		goto unregister_netdev;
 	}
 
 	list_add_tail_rcu(&ipvlan->pnode, &port->ipvlans);
 	netif_stacked_transfer_operstate(phy_dev, dev);
 	return 0;
+
+unregister_netdev:
+	unregister_netdevice(dev);
+destroy_ipvlan_port:
+	if (create)
+		ipvlan_port_destroy(phy_dev);
+	return err;
 }
 
 static void ipvlan_link_delete(struct net_device *dev, struct list_head *head)
drivers/net/irda/w83977af_ir.c (+3 -1)
···
 
 	mtt = irda_get_mtt(skb);
 	pr_debug("%s(%ld), mtt=%d\n", __func__ , jiffies, mtt);
-	if (mtt)
+	if (mtt > 1000)
+		mdelay(mtt/1000);
+	else if (mtt)
 		udelay(mtt);
 
 	/* Enable DMA interrupt */
drivers/net/macvtap.c (+12 -7)
···
 	/* Don't put anything that may fail after macvlan_common_newlink
 	 * because we can't undo what it does.
 	 */
-	return macvlan_common_newlink(src_net, dev, tb, data);
+	err = macvlan_common_newlink(src_net, dev, tb, data);
+	if (err) {
+		netdev_rx_handler_unregister(dev);
+		return err;
+	}
+
+	return 0;
 }
 
 static void macvtap_dellink(struct net_device *dev,
···
 
 	if (zerocopy)
 		err = zerocopy_sg_from_iter(skb, from);
-	else {
+	else
 		err = skb_copy_datagram_from_iter(skb, 0, from, len);
-		if (!err && m && m->msg_control) {
-			struct ubuf_info *uarg = m->msg_control;
-			uarg->callback(uarg, false);
-		}
-	}
 
 	if (err)
 		goto err_kfree;
···
 		skb_shinfo(skb)->destructor_arg = m->msg_control;
 		skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 		skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG;
+	} else if (m && m->msg_control) {
+		struct ubuf_info *uarg = m->msg_control;
+		uarg->callback(uarg, false);
 	}
+
 	if (vlan) {
 		skb->dev = vlan->dev;
 		dev_queue_xmit(skb);
+12 -8
drivers/net/phy/realtek.c
···
 	if (ret < 0)
 		return ret;

-	if (phydev->interface == PHY_INTERFACE_MODE_RGMII) {
-		/* enable TXDLY */
-		phy_write(phydev, RTL8211F_PAGE_SELECT, 0xd08);
-		reg = phy_read(phydev, 0x11);
+	phy_write(phydev, RTL8211F_PAGE_SELECT, 0xd08);
+	reg = phy_read(phydev, 0x11);
+
+	/* enable TX-delay for rgmii-id and rgmii-txid, otherwise disable it */
+	if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+	    phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
 		reg |= RTL8211F_TX_DELAY;
-	phy_write(phydev, 0x11, reg);
-	/* restore to default page 0 */
-	phy_write(phydev, RTL8211F_PAGE_SELECT, 0x0);
-	}
+	else
+		reg &= ~RTL8211F_TX_DELAY;
+
+	phy_write(phydev, 0x11, reg);
+	/* restore to default page 0 */
+	phy_write(phydev, RTL8211F_PAGE_SELECT, 0x0);

 	return 0;
 }
+4 -6
drivers/net/tun.c
···
 	if (zerocopy)
 		err = zerocopy_sg_from_iter(skb, from);
-	else {
+	else
 		err = skb_copy_datagram_from_iter(skb, 0, from, len);
-		if (!err && msg_control) {
-			struct ubuf_info *uarg = msg_control;
-			uarg->callback(uarg, false);
-		}
-	}

 	if (err) {
 		this_cpu_inc(tun->pcpu_stats->rx_dropped);
···
 		skb_shinfo(skb)->destructor_arg = msg_control;
 		skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
 		skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG;
+	} else if (msg_control) {
+		struct ubuf_info *uarg = msg_control;
+		uarg->callback(uarg, false);
 	}

 	skb_reset_network_header(skb);
+3 -3
drivers/net/usb/asix_devices.c
···
 	u16 medium;

 	/* Stop MAC operation */
-	medium = asix_read_medium_status(dev, 0);
+	medium = asix_read_medium_status(dev, 1);
 	medium &= ~AX_MEDIUM_RE;
-	asix_write_medium_mode(dev, medium, 0);
+	asix_write_medium_mode(dev, medium, 1);

 	netdev_dbg(dev->net, "ax88772_suspend: medium=0x%04x\n",
-		   asix_read_medium_status(dev, 0));
+		   asix_read_medium_status(dev, 1));

 	/* Preserve BMCR for restoring */
 	priv->presvd_phy_bmcr =
+31 -7
drivers/net/usb/cdc_ether.c
···
 	case USB_CDC_NOTIFY_NETWORK_CONNECTION:
 		netif_dbg(dev, timer, dev->net, "CDC: carrier %s\n",
 			  event->wValue ? "on" : "off");
-
-		/* Work-around for devices with broken off-notifications */
-		if (event->wValue &&
-		    !test_bit(__LINK_STATE_NOCARRIER, &dev->net->state))
-			usbnet_link_change(dev, 0, 0);
-
 		usbnet_link_change(dev, !!event->wValue, 0);
 		break;
 	case USB_CDC_NOTIFY_SPEED_CHANGE:	/* tx/rx rates */
···
 	return 1;
 }

+/* Ensure correct link state
+ *
+ * Some devices (ZTE MF823/831/910) export two carrier on notifications when
+ * connected. This causes the link state to be incorrect. Work around this by
+ * always setting the state to off, then on.
+ */
+void usbnet_cdc_zte_status(struct usbnet *dev, struct urb *urb)
+{
+	struct usb_cdc_notification *event;
+
+	if (urb->actual_length < sizeof(*event))
+		return;
+
+	event = urb->transfer_buffer;
+
+	if (event->bNotificationType != USB_CDC_NOTIFY_NETWORK_CONNECTION) {
+		usbnet_cdc_status(dev, urb);
+		return;
+	}
+
+	netif_dbg(dev, timer, dev->net, "CDC: carrier %s\n",
+		  event->wValue ? "on" : "off");
+
+	if (event->wValue &&
+	    netif_carrier_ok(dev->net))
+		netif_carrier_off(dev->net);
+
+	usbnet_link_change(dev, !!event->wValue, 0);
+}
+
 static const struct driver_info	cdc_info = {
 	.description =	"CDC Ethernet Device",
 	.flags =	FLAG_ETHER | FLAG_POINTTOPOINT,
···
 	.flags =	FLAG_ETHER | FLAG_POINTTOPOINT,
 	.bind =		usbnet_cdc_zte_bind,
 	.unbind =	usbnet_cdc_unbind,
-	.status =	usbnet_cdc_status,
+	.status =	usbnet_cdc_zte_status,
 	.set_rx_mode =	usbnet_cdc_update_filter,
 	.manage_power =	usbnet_manage_power,
 	.rx_fixup = usbnet_cdc_zte_rx_fixup,
+1
drivers/net/usb/qmi_wwan.c
···
 	{QMI_FIXED_INTF(0x1bbb, 0x0203, 2)},	/* Alcatel L800MA */
 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
 	{QMI_FIXED_INTF(0x2357, 0x9000, 4)},	/* TP-LINK MA260 */
+	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)},	/* Telit LE922A */
 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
 	{QMI_FIXED_INTF(0x1bc7, 0x1201, 2)},	/* Telit LE920 */
 	{QMI_FIXED_INTF(0x1c9e, 0x9b01, 3)},	/* XS Stick W100-2 from 4G Systems */
+7 -3
drivers/net/vxlan.c
···
 	struct vxlan_rdst *rd = NULL;
 	struct vxlan_fdb *f;
 	int notify = 0;
+	int rc;

 	f = __vxlan_find_mac(vxlan, mac);
 	if (f) {
···
 		if ((flags & NLM_F_APPEND) &&
 		    (is_multicast_ether_addr(f->eth_addr) ||
 		     is_zero_ether_addr(f->eth_addr))) {
-			int rc = vxlan_fdb_append(f, ip, port, vni, ifindex,
-						  &rd);
+			rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);

 			if (rc < 0)
 				return rc;
···
 	INIT_LIST_HEAD(&f->remotes);
 	memcpy(f->eth_addr, mac, ETH_ALEN);

-	vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
+	rc = vxlan_fdb_append(f, ip, port, vni, ifindex, &rd);
+	if (rc < 0) {
+		kfree(f);
+		return rc;
+	}

 	++vxlan->addrcnt;
 	hlist_add_head_rcu(&f->hlist,
+7 -6
drivers/net/wireless/marvell/mwifiex/cfg80211.c
···
 		is_scanning_required = 1;
 	} else {
 		mwifiex_dbg(priv->adapter, MSG,
-			    "info: trying to associate to '%s' bssid %pM\n",
-			    (char *)req_ssid.ssid, bss->bssid);
+			    "info: trying to associate to '%.*s' bssid %pM\n",
+			    req_ssid.ssid_len, (char *)req_ssid.ssid,
+			    bss->bssid);
 		memcpy(&priv->cfg_bssid, bss->bssid, ETH_ALEN);
 		break;
 	}
···
 	}

 	mwifiex_dbg(adapter, INFO,
-		    "info: Trying to associate to %s and bssid %pM\n",
-		    (char *)sme->ssid, sme->bssid);
+		    "info: Trying to associate to %.*s and bssid %pM\n",
+		    (int)sme->ssid_len, (char *)sme->ssid, sme->bssid);

 	if (!mwifiex_stop_bg_scan(priv))
 		cfg80211_sched_scan_stopped_rtnl(priv->wdev.wiphy);
···
 	}

 	mwifiex_dbg(priv->adapter, MSG,
-		    "info: trying to join to %s and bssid %pM\n",
-		    (char *)params->ssid, params->bssid);
+		    "info: trying to join to %.*s and bssid %pM\n",
+		    params->ssid_len, (char *)params->ssid, params->bssid);

 	mwifiex_set_ibss_params(priv, params);
+15
drivers/of/of_mdio.c
···
 	return -ENODEV;
 }
 EXPORT_SYMBOL(of_phy_register_fixed_link);
+
+void of_phy_deregister_fixed_link(struct device_node *np)
+{
+	struct phy_device *phydev;
+
+	phydev = of_phy_find_device(np);
+	if (!phydev)
+		return;
+
+	fixed_phy_unregister(phydev);
+
+	put_device(&phydev->mdio.dev);	/* of_phy_find_device() */
+	phy_device_free(phydev);	/* fixed_phy_register() */
+}
+EXPORT_SYMBOL(of_phy_deregister_fixed_link);
-1
include/linux/mlx4/device.h
···
 enum {
 	MLX4_INTERFACE_STATE_UP		= 1 << 0,
 	MLX4_INTERFACE_STATE_DELETION	= 1 << 1,
-	MLX4_INTERFACE_STATE_SHUTDOWN	= 1 << 2,
 };

 #define MSTR_SM_CHANGE_MASK (MLX4_EQ_PORT_INFO_MSTR_SM_SL_CHANGE_MASK | \
+4
include/linux/of_mdio.h
···
 extern struct mii_bus *of_mdio_find_bus(struct device_node *mdio_np);
 extern int of_mdio_parse_addr(struct device *dev, const struct device_node *np);
 extern int of_phy_register_fixed_link(struct device_node *np);
+extern void of_phy_deregister_fixed_link(struct device_node *np);
 extern bool of_phy_is_fixed_link(struct device_node *np);

 #else /* CONFIG_OF */
···
 static inline int of_phy_register_fixed_link(struct device_node *np)
 {
 	return -ENOSYS;
 }
+static inline void of_phy_deregister_fixed_link(struct device_node *np)
+{
+}
 static inline bool of_phy_is_fixed_link(struct device_node *np)
 {
+2
include/net/ipv6.h
···
 int compat_ipv6_getsockopt(struct sock *sk, int level, int optname,
 		char __user *optval, int __user *optlen);

+int __ip6_datagram_connect(struct sock *sk, struct sockaddr *addr,
+			   int addr_len);
 int ip6_datagram_connect(struct sock *sk, struct sockaddr *addr, int addr_len);
 int ip6_datagram_connect_v6_only(struct sock *sk, struct sockaddr *addr,
 				 int addr_len);
+3 -3
include/net/netfilter/nf_conntrack.h
···
 
 	possible_net_t ct_net;

+#if IS_ENABLED(CONFIG_NF_NAT)
+	struct rhlist_head nat_bysource;
+#endif
 	/* all members below initialized via memset */
 	u8 __nfct_init_offset[0];

···
 	/* Extensions */
 	struct nf_ct_ext *ext;

-#if IS_ENABLED(CONFIG_NF_NAT)
-	struct rhash_head nat_bysource;
-#endif
 	/* Storage reserved for other modules, must be the last member */
 	union nf_conntrack_proto proto;
 };
+1 -1
include/net/netfilter/nf_tables.h
···
  *	@size: maximum set size
  *	@nelems: number of elements
  *	@ndeact: number of deactivated elements queued for removal
- *	@timeout: default timeout value in msecs
+ *	@timeout: default timeout value in jiffies
  *	@gc_int: garbage collection interval in msecs
  *	@policy: set parameterization (see enum nft_set_policies)
  *	@udlen: user data length
+1
include/uapi/linux/tc_act/Kbuild
···
 header-y += tc_bpf.h
 header-y += tc_connmark.h
 header-y += tc_ife.h
+header-y += tc_tunnel_key.h
+8 -2
kernel/bpf/verifier.c
···
 			 struct bpf_verifier_state *old,
 			 struct bpf_verifier_state *cur)
 {
+	bool varlen_map_access = env->varlen_map_value_access;
 	struct bpf_reg_state *rold, *rcur;
 	int i;

···
 		/* If the ranges were not the same, but everything else was and
 		 * we didn't do a variable access into a map then we are a-ok.
 		 */
-		if (!env->varlen_map_value_access &&
+		if (!varlen_map_access &&
 		    rold->type == rcur->type && rold->imm == rcur->imm)
 			continue;

+		/* If we didn't map access then again we don't care about the
+		 * mismatched range values and it's ok if our old type was
+		 * UNKNOWN and we didn't go to a NOT_INIT'ed reg.
+		 */
 		if (rold->type == NOT_INIT ||
-		    (rold->type == UNKNOWN_VALUE && rcur->type != NOT_INIT))
+		    (!varlen_map_access && rold->type == UNKNOWN_VALUE &&
+		     rcur->type != NOT_INIT))
 			continue;

 		if (rold->type == PTR_TO_PACKET && rcur->type == PTR_TO_PACKET &&
+2 -4
net/core/flow.c
···
 	list_for_each_entry_safe(fce, n, &gc_list, u.gc_list) {
 		flow_entry_kill(fce, xfrm);
 		atomic_dec(&xfrm->flow_cache_gc_count);
-		WARN_ON(atomic_read(&xfrm->flow_cache_gc_count) < 0);
 	}
 }

···
 	if (fcp->hash_count > fc->high_watermark)
 		flow_cache_shrink(fc, fcp);

-	if (fcp->hash_count > 2 * fc->high_watermark ||
-	    atomic_read(&net->xfrm.flow_cache_gc_count) > fc->high_watermark) {
-		atomic_inc(&net->xfrm.flow_cache_genid);
+	if (atomic_read(&net->xfrm.flow_cache_gc_count) >
+	    2 * num_online_cpus() * fc->high_watermark) {
 		flo = ERR_PTR(-ENOBUFS);
 		goto ret_object;
 	}
+2 -2
net/core/rtnetlink.c
···
 	       + nla_total_size(4) /* IFLA_PROMISCUITY */
 	       + nla_total_size(4) /* IFLA_NUM_TX_QUEUES */
 	       + nla_total_size(4) /* IFLA_NUM_RX_QUEUES */
-	       + nla_total_size(4) /* IFLA_MAX_GSO_SEGS */
-	       + nla_total_size(4) /* IFLA_MAX_GSO_SIZE */
+	       + nla_total_size(4) /* IFLA_GSO_MAX_SEGS */
+	       + nla_total_size(4) /* IFLA_GSO_MAX_SIZE */
 	       + nla_total_size(1) /* IFLA_OPERSTATE */
 	       + nla_total_size(1) /* IFLA_LINKMODE */
 	       + nla_total_size(4) /* IFLA_CARRIER_CHANGES */
+2 -2
net/core/sock.c
···
 		val = min_t(u32, val, sysctl_wmem_max);
 set_sndbuf:
 		sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
-		sk->sk_sndbuf = max_t(u32, val * 2, SOCK_MIN_SNDBUF);
+		sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);
 		/* Wake up sending tasks if we upped the value. */
 		sk->sk_write_space(sk);
 		break;
···
 		 * returning the value we actually used in getsockopt
 		 * is the most desirable behavior.
 		 */
-		sk->sk_rcvbuf = max_t(u32, val * 2, SOCK_MIN_RCVBUF);
+		sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF);
 		break;

 	case SO_RCVBUFFORCE:
+7 -5
net/dccp/ipv4.c
···
 {
 	const struct dccp_hdr *dh;
 	unsigned int cscov;
+	u8 dccph_doff;

 	if (skb->pkt_type != PACKET_HOST)
 		return 1;
···
 	/*
 	 * If P.Data Offset is too small for packet type, drop packet and return
 	 */
-	if (dh->dccph_doff < dccp_hdr_len(skb) / sizeof(u32)) {
-		DCCP_WARN("P.Data Offset(%u) too small\n", dh->dccph_doff);
+	dccph_doff = dh->dccph_doff;
+	if (dccph_doff < dccp_hdr_len(skb) / sizeof(u32)) {
+		DCCP_WARN("P.Data Offset(%u) too small\n", dccph_doff);
 		return 1;
 	}
 	/*
 	 * If P.Data Offset is too too large for packet, drop packet and return
 	 */
-	if (!pskb_may_pull(skb, dh->dccph_doff * sizeof(u32))) {
-		DCCP_WARN("P.Data Offset(%u) too large\n", dh->dccph_doff);
+	if (!pskb_may_pull(skb, dccph_doff * sizeof(u32))) {
+		DCCP_WARN("P.Data Offset(%u) too large\n", dccph_doff);
 		return 1;
 	}
-
+	dh = dccp_hdr(skb);
 	/*
 	 * If P.type is not Data, Ack, or DataAck and P.X == 0 (the packet
 	 * has short sequence numbers), drop packet and return
+4 -9
net/dsa/dsa.c
···
 		genphy_read_status(phydev);
 		if (ds->ops->adjust_link)
 			ds->ops->adjust_link(ds, port, phydev);
+
+		put_device(&phydev->mdio.dev);
 	}

 	return 0;
···

 void dsa_cpu_dsa_destroy(struct device_node *port_dn)
 {
-	struct phy_device *phydev;
-
-	if (of_phy_is_fixed_link(port_dn)) {
-		phydev = of_phy_find_device(port_dn);
-		if (phydev) {
-			phy_device_free(phydev);
-			fixed_phy_unregister(phydev);
-		}
-	}
+	if (of_phy_is_fixed_link(port_dn))
+		of_phy_deregister_fixed_link(port_dn);
 }

 static void dsa_switch_destroy(struct dsa_switch *ds)
+3 -1
net/dsa/dsa2.c
···
 	struct dsa_switch_tree *dst;

 	list_for_each_entry(dst, &dsa_switch_trees, list)
-		if (dst->tree == tree)
+		if (dst->tree == tree) {
+			kref_get(&dst->refcount);
 			return dst;
+		}
 	return NULL;
 }
+16 -3
net/dsa/slave.c
···
 	p->phy_interface = mode;

 	phy_dn = of_parse_phandle(port_dn, "phy-handle", 0);
-	if (of_phy_is_fixed_link(port_dn)) {
+	if (!phy_dn && of_phy_is_fixed_link(port_dn)) {
 		/* In the case of a fixed PHY, the DT node associated
 		 * to the fixed PHY is the Port DT node
 		 */
···
 			return ret;
 		}
 		phy_is_fixed = true;
-		phy_dn = port_dn;
+		phy_dn = of_node_get(port_dn);
 	}

 	if (ds->ops->get_phy_flags)
···
 			ret = dsa_slave_phy_connect(p, slave_dev, phy_id);
 			if (ret) {
 				netdev_err(slave_dev, "failed to connect to phy%d: %d\n", phy_id, ret);
+				of_node_put(phy_dn);
 				return ret;
 			}
 		} else {
···
 						phy_flags,
 						p->phy_interface);
 		}
+
+		of_node_put(phy_dn);
 	}

 	if (p->phy && phy_is_fixed)
···
 		ret = dsa_slave_phy_connect(p, slave_dev, p->port);
 		if (ret) {
 			netdev_err(slave_dev, "failed to connect to port %d: %d\n", p->port, ret);
+			if (phy_is_fixed)
+				of_phy_deregister_fixed_link(port_dn);
 			return ret;
 		}
 	}
···
 void dsa_slave_destroy(struct net_device *slave_dev)
 {
 	struct dsa_slave_priv *p = netdev_priv(slave_dev);
+	struct dsa_switch *ds = p->parent;
+	struct device_node *port_dn;
+
+	port_dn = ds->ports[p->port].dn;

 	netif_carrier_off(slave_dev);
-	if (p->phy)
+	if (p->phy) {
 		phy_disconnect(p->phy);
+
+		if (of_phy_is_fixed_link(port_dn))
+			of_phy_deregister_fixed_link(port_dn);
+	}
 	unregister_netdev(slave_dev);
 	free_netdev(slave_dev);
 }
+1
net/ipv4/Kconfig
···
 	default "reno" if DEFAULT_RENO
 	default "dctcp" if DEFAULT_DCTCP
 	default "cdg" if DEFAULT_CDG
+	default "bbr" if DEFAULT_BBR
 	default "cubic"

 config TCP_MD5SIG
+1 -1
net/ipv4/af_inet.c
···
 		fixedid = !!(skb_shinfo(skb)->gso_type & SKB_GSO_TCP_FIXEDID);

 		/* fixed ID is invalid if DF bit is not set */
-		if (fixedid && !(iph->frag_off & htons(IP_DF)))
+		if (fixedid && !(ip_hdr(skb)->frag_off & htons(IP_DF)))
 			goto out;
 	}
+1 -1
net/ipv4/esp4.c
···
 		esph = (void *)skb_push(skb, 4);
 		*seqhi = esph->spi;
 		esph->spi = esph->seq_no;
-		esph->seq_no = htonl(XFRM_SKB_CB(skb)->seq.input.hi);
+		esph->seq_no = XFRM_SKB_CB(skb)->seq.input.hi;
 		aead_request_set_callback(req, 0, esp_input_done_esn, skb);
 	}
+2
net/ipv4/ip_output.c
···
 	if (unlikely(!skb))
 		return 0;

+	skb->protocol = htons(ETH_P_IP);
+
 	return nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT,
 		       net, sk, skb, NULL, skb_dst(skb)->dev,
 		       dst_output);
+4 -1
net/ipv4/netfilter.c
···
 	struct flowi4 fl4 = {};
 	__be32 saddr = iph->saddr;
 	__u8 flags = skb->sk ? inet_sk_flowi_flags(skb->sk) : 0;
+	struct net_device *dev = skb_dst(skb)->dev;
 	unsigned int hh_len;

 	if (addr_type == RTN_UNSPEC)
-		addr_type = inet_addr_type(net, saddr);
+		addr_type = inet_addr_type_dev_table(net, dev, saddr);
 	if (addr_type == RTN_LOCAL || addr_type == RTN_UNICAST)
 		flags |= FLOWI_FLAG_ANYSRC;
 	else
···
 	fl4.saddr = saddr;
 	fl4.flowi4_tos = RT_TOS(iph->tos);
 	fl4.flowi4_oif = skb->sk ? skb->sk->sk_bound_dev_if : 0;
+	if (!fl4.flowi4_oif)
+		fl4.flowi4_oif = l3mdev_master_ifindex(dev);
 	fl4.flowi4_mark = skb->mark;
 	fl4.flowi4_flags = flags;
 	rt = ip_route_output_key(net, &fl4);
+2 -2
net/ipv4/netfilter/arp_tables.c
···
 	newinfo->number = compatr->num_entries;
 	for (i = 0; i < NF_ARP_NUMHOOKS; i++) {
-		newinfo->hook_entry[i] = info->hook_entry[i];
-		newinfo->underflow[i] = info->underflow[i];
+		newinfo->hook_entry[i] = compatr->hook_entry[i];
+		newinfo->underflow[i] = compatr->underflow[i];
 	}
 	entry1 = newinfo->entries;
 	pos = entry1;
+3 -1
net/ipv6/datagram.c
···
 }
 EXPORT_SYMBOL_GPL(ip6_datagram_release_cb);

-static int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr,
+			   int addr_len)
 {
 	struct sockaddr_in6 *usin = (struct sockaddr_in6 *) uaddr;
 	struct inet_sock *inet = inet_sk(sk);
···
 out:
 	return err;
 }
+EXPORT_SYMBOL_GPL(__ip6_datagram_connect);

 int ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
 {
+1 -1
net/ipv6/esp6.c
···
 		esph = (void *)skb_push(skb, 4);
 		*seqhi = esph->spi;
 		esph->spi = esph->seq_no;
-		esph->seq_no = htonl(XFRM_SKB_CB(skb)->seq.input.hi);
+		esph->seq_no = XFRM_SKB_CB(skb)->seq.input.hi;
 		aead_request_set_callback(req, 0, esp_input_done_esn, skb);
 	}
+4 -2
net/ipv6/icmp.c
···
 
 	if (__ipv6_addr_needs_scope_id(addr_type))
 		iif = skb->dev->ifindex;
-	else
-		iif = l3mdev_master_ifindex(skb_dst(skb)->dev);
+	else {
+		dst = skb_dst(skb);
+		iif = l3mdev_master_ifindex(dst ? dst->dev : skb->dev);
+	}

 	/*
 	 *	Must not send error if the source does not uniquely
+1 -1
net/ipv6/ip6_offload.c
···
 		segs = ops->callbacks.gso_segment(skb, features);
 	}

-	if (IS_ERR(segs))
+	if (IS_ERR_OR_NULL(segs))
 		goto out;

 	gso_partial = !!(skb_shinfo(segs)->gso_type & SKB_GSO_PARTIAL);
-1
net/ipv6/ip6_tunnel.c
···
 	if (err)
 		return err;

-	skb->protocol = htons(ETH_P_IPV6);
 	skb_push(skb, sizeof(struct ipv6hdr));
 	skb_reset_network_header(skb);
 	ipv6h = ipv6_hdr(skb);
+31
net/ipv6/ip6_vti.c
···
 	.priority	=	100,
 };

+static bool is_vti6_tunnel(const struct net_device *dev)
+{
+	return dev->netdev_ops == &vti6_netdev_ops;
+}
+
+static int vti6_device_event(struct notifier_block *unused,
+			     unsigned long event, void *ptr)
+{
+	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+	struct ip6_tnl *t = netdev_priv(dev);
+
+	if (!is_vti6_tunnel(dev))
+		return NOTIFY_DONE;
+
+	switch (event) {
+	case NETDEV_DOWN:
+		if (!net_eq(t->net, dev_net(dev)))
+			xfrm_garbage_collect(t->net);
+		break;
+	}
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block vti6_notifier_block __read_mostly = {
+	.notifier_call = vti6_device_event,
+};
+
 /**
  * vti6_tunnel_init - register protocol and reserve needed resources
  *
···
 {
 	const char *msg;
 	int err;
+
+	register_netdevice_notifier(&vti6_notifier_block);

 	msg = "tunnel device";
 	err = register_pernet_device(&vti6_net_ops);
···
 xfrm_proto_esp_failed:
 	unregister_pernet_device(&vti6_net_ops);
 pernet_dev_failed:
+	unregister_netdevice_notifier(&vti6_notifier_block);
 	pr_err("vti6 init: failed to register %s\n", msg);
 	return err;
 }
···
 	xfrm6_protocol_deregister(&vti_ah6_protocol, IPPROTO_AH);
 	xfrm6_protocol_deregister(&vti_esp6_protocol, IPPROTO_ESP);
 	unregister_pernet_device(&vti6_net_ops);
+	unregister_netdevice_notifier(&vti6_notifier_block);
 }

 module_init(vti6_tunnel_init);
+2 -2
net/ipv6/netfilter/nf_conntrack_reasm.c
···
 	/* Jumbo payload inhibits frag. header */
 	if (ipv6_hdr(skb)->payload_len == 0) {
 		pr_debug("payload len = 0\n");
-		return -EINVAL;
+		return 0;
 	}

 	if (find_prev_fhdr(skb, &prevhdr, &nhoff, &fhoff) < 0)
-		return -EINVAL;
+		return 0;

 	if (!pskb_may_pull(skb, fhoff + sizeof(*fhdr)))
 		return -ENOMEM;
+1 -1
net/ipv6/netfilter/nf_defrag_ipv6_hooks.c
···
 	if (err == -EINPROGRESS)
 		return NF_STOLEN;

-	return NF_ACCEPT;
+	return err == 0 ? NF_ACCEPT : NF_DROP;
 }

 static struct nf_hook_ops ipv6_defrag_ops[] = {
+1
net/ipv6/netfilter/nf_reject_ipv6.c
···
 	fl6.daddr = oip6h->saddr;
 	fl6.fl6_sport = otcph->dest;
 	fl6.fl6_dport = otcph->source;
+	fl6.flowi6_oif = l3mdev_master_ifindex(skb_dst(oldskb)->dev);
 	security_skb_classify_flow(oldskb, flowi6_to_flowi(&fl6));
 	dst = ip6_route_output(net, NULL, &fl6);
 	if (dst->error) {
+2
net/ipv6/output_core.c
···
 	if (unlikely(!skb))
 		return 0;

+	skb->protocol = htons(ETH_P_IPV6);
+
 	return nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT,
 		       net, sk, skb, NULL, skb_dst(skb)->dev,
 		       dst_output);
+35 -30
net/l2tp/l2tp_ip.c
···
 		if ((l2tp->conn_id == tunnel_id) &&
 		    net_eq(sock_net(sk), net) &&
 		    !(inet->inet_rcv_saddr && inet->inet_rcv_saddr != laddr) &&
-		    !(sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif))
+		    (!sk->sk_bound_dev_if || !dif ||
+		     sk->sk_bound_dev_if == dif))
 			goto found;
 	}
···
 		struct iphdr *iph = (struct iphdr *) skb_network_header(skb);

 		read_lock_bh(&l2tp_ip_lock);
-		sk = __l2tp_ip_bind_lookup(net, iph->daddr, 0, tunnel_id);
+		sk = __l2tp_ip_bind_lookup(net, iph->daddr, inet_iif(skb),
+					   tunnel_id);
+		if (!sk) {
+			read_unlock_bh(&l2tp_ip_lock);
+			goto discard;
+		}
+
+		sock_hold(sk);
 		read_unlock_bh(&l2tp_ip_lock);
 	}
-
-	if (sk == NULL)
-		goto discard;
-
-	sock_hold(sk);

 	if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb))
 		goto discard_put;
···
 	if (addr->l2tp_family != AF_INET)
 		return -EINVAL;

-	ret = -EADDRINUSE;
-	read_lock_bh(&l2tp_ip_lock);
-	if (__l2tp_ip_bind_lookup(net, addr->l2tp_addr.s_addr,
-				  sk->sk_bound_dev_if, addr->l2tp_conn_id))
-		goto out_in_use;
-
-	read_unlock_bh(&l2tp_ip_lock);
-
 	lock_sock(sk);
+
+	ret = -EINVAL;
 	if (!sock_flag(sk, SOCK_ZAPPED))
 		goto out;
···
 	inet->inet_rcv_saddr = inet->inet_saddr = addr->l2tp_addr.s_addr;
 	if (chk_addr_ret == RTN_MULTICAST || chk_addr_ret == RTN_BROADCAST)
 		inet->inet_saddr = 0;  /* Use device */

 	write_lock_bh(&l2tp_ip_lock);
+	if (__l2tp_ip_bind_lookup(net, addr->l2tp_addr.s_addr,
+				  sk->sk_bound_dev_if, addr->l2tp_conn_id)) {
+		write_unlock_bh(&l2tp_ip_lock);
+		ret = -EADDRINUSE;
+		goto out;
+	}
+
+	sk_dst_reset(sk);
+	l2tp_ip_sk(sk)->conn_id = addr->l2tp_conn_id;
+
 	sk_add_bind_node(sk, &l2tp_ip_bind_table);
 	sk_del_node_init(sk);
 	write_unlock_bh(&l2tp_ip_lock);
+
 	ret = 0;
 	sock_reset_flag(sk, SOCK_ZAPPED);

 out:
 	release_sock(sk);
-
-	return ret;
-
-out_in_use:
-	read_unlock_bh(&l2tp_ip_lock);

 	return ret;
 }
···
 	struct sockaddr_l2tpip *lsa = (struct sockaddr_l2tpip *) uaddr;
 	int rc;

-	if (sock_flag(sk, SOCK_ZAPPED)) /* Must bind first - autobinding does not work */
-		return -EINVAL;
-
 	if (addr_len < sizeof(*lsa))
 		return -EINVAL;

 	if (ipv4_is_multicast(lsa->l2tp_addr.s_addr))
 		return -EINVAL;

-	rc = ip4_datagram_connect(sk, uaddr, addr_len);
-	if (rc < 0)
-		return rc;
-
 	lock_sock(sk);
+
+	/* Must bind first - autobinding does not work */
+	if (sock_flag(sk, SOCK_ZAPPED)) {
+		rc = -EINVAL;
+		goto out_sk;
+	}
+
+	rc = __ip4_datagram_connect(sk, uaddr, addr_len);
+	if (rc < 0)
+		goto out_sk;

 	l2tp_ip_sk(sk)->peer_conn_id = lsa->l2tp_conn_id;

···
 	sk_add_bind_node(sk, &l2tp_ip_bind_table);
 	write_unlock_bh(&l2tp_ip_lock);

+out_sk:
 	release_sock(sk);
+
 	return rc;
 }
+42 -37
net/l2tp/l2tp_ip6.c
···
 
 		if ((l2tp->conn_id == tunnel_id) &&
 		    net_eq(sock_net(sk), net) &&
-		    !(addr && ipv6_addr_equal(addr, laddr)) &&
-		    !(sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif))
+		    (!addr || ipv6_addr_equal(addr, laddr)) &&
+		    (!sk->sk_bound_dev_if || !dif ||
+		     sk->sk_bound_dev_if == dif))
 			goto found;
 	}
···
 		struct ipv6hdr *iph = ipv6_hdr(skb);

 		read_lock_bh(&l2tp_ip6_lock);
-		sk = __l2tp_ip6_bind_lookup(net, &iph->daddr,
-					    0, tunnel_id);
+		sk = __l2tp_ip6_bind_lookup(net, &iph->daddr, inet6_iif(skb),
+					    tunnel_id);
+		if (!sk) {
+			read_unlock_bh(&l2tp_ip6_lock);
+			goto discard;
+		}
+
+		sock_hold(sk);
 		read_unlock_bh(&l2tp_ip6_lock);
 	}
-
-	if (sk == NULL)
-		goto discard;
-
-	sock_hold(sk);

 	if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
 		goto discard_put;
···
 	struct sockaddr_l2tpip6 *addr = (struct sockaddr_l2tpip6 *) uaddr;
 	struct net *net = sock_net(sk);
 	__be32 v4addr = 0;
+	int bound_dev_if;
 	int addr_type;
 	int err;
···
 	if (addr_type & IPV6_ADDR_MULTICAST)
 		return -EADDRNOTAVAIL;

-	err = -EADDRINUSE;
-	read_lock_bh(&l2tp_ip6_lock);
-	if (__l2tp_ip6_bind_lookup(net, &addr->l2tp_addr,
-				   sk->sk_bound_dev_if, addr->l2tp_conn_id))
-		goto out_in_use;
-	read_unlock_bh(&l2tp_ip6_lock);
-
 	lock_sock(sk);

 	err = -EINVAL;
···
 	if (sk->sk_state != TCP_CLOSE)
 		goto out_unlock;

+	bound_dev_if = sk->sk_bound_dev_if;
+
 	/* Check if the address belongs to the host. */
 	rcu_read_lock();
 	if (addr_type != IPV6_ADDR_ANY) {
 		struct net_device *dev = NULL;

 		if (addr_type & IPV6_ADDR_LINKLOCAL) {
-			if (addr_len >= sizeof(struct sockaddr_in6) &&
-			    addr->l2tp_scope_id) {
-				/* Override any existing binding, if another
-				 * one is supplied by user.
-				 */
-				sk->sk_bound_dev_if = addr->l2tp_scope_id;
-			}
+			if (addr->l2tp_scope_id)
+				bound_dev_if = addr->l2tp_scope_id;

 			/* Binding to link-local address requires an
-			   interface */
-			if (!sk->sk_bound_dev_if)
+			 * interface.
+			 */
+			if (!bound_dev_if)
 				goto out_unlock_rcu;

 			err = -ENODEV;
-			dev = dev_get_by_index_rcu(sock_net(sk),
-						   sk->sk_bound_dev_if);
+			dev = dev_get_by_index_rcu(sock_net(sk), bound_dev_if);
 			if (!dev)
 				goto out_unlock_rcu;
 		}
···
 	}
 	rcu_read_unlock();

-	inet->inet_rcv_saddr = inet->inet_saddr = v4addr;
+	write_lock_bh(&l2tp_ip6_lock);
+	if (__l2tp_ip6_bind_lookup(net, &addr->l2tp_addr, bound_dev_if,
+				   addr->l2tp_conn_id)) {
+		write_unlock_bh(&l2tp_ip6_lock);
+		err = -EADDRINUSE;
+		goto out_unlock;
+	}
+
+	inet->inet_saddr = v4addr;
+	inet->inet_rcv_saddr = v4addr;
+	sk->sk_bound_dev_if = bound_dev_if;
 	sk->sk_v6_rcv_saddr = addr->l2tp_addr;
 	np->saddr = addr->l2tp_addr;

 	l2tp_ip6_sk(sk)->conn_id = addr->l2tp_conn_id;

-	write_lock_bh(&l2tp_ip6_lock);
 	sk_add_bind_node(sk, &l2tp_ip6_bind_table);
 	sk_del_node_init(sk);
 	write_unlock_bh(&l2tp_ip6_lock);
···
 	rcu_read_unlock();
 out_unlock:
 	release_sock(sk);
-	return err;

-out_in_use:
-	read_unlock_bh(&l2tp_ip6_lock);
 	return err;
 }
···
 	struct in6_addr	*daddr;
 	int	addr_type;
 	int rc;

-	if (sock_flag(sk, SOCK_ZAPPED)) /* Must bind first - autobinding does not work */
-		return -EINVAL;
-
 	if (addr_len < sizeof(*lsa))
 		return -EINVAL;
···
 		return -EINVAL;
 	}

-	rc = ip6_datagram_connect(sk, uaddr, addr_len);
-
 	lock_sock(sk);
+
+	/* Must bind first - autobinding does not work */
+	if (sock_flag(sk, SOCK_ZAPPED)) {
+		rc = -EINVAL;
+		goto out_sk;
+	}
+
+	rc = __ip6_datagram_connect(sk, uaddr, addr_len);
+	if (rc < 0)
+		goto out_sk;

 	l2tp_ip6_sk(sk)->peer_conn_id = lsa->l2tp_conn_id;
···
 	sk_add_bind_node(sk, &l2tp_ip6_bind_table);
 	write_unlock_bh(&l2tp_ip6_lock);

+out_sk:
 	release_sock(sk);

 	return rc;
+30 -19
net/netfilter/nf_nat_core.c
···
 	const struct nf_conntrack_zone *zone;
 };
 
-static struct rhashtable nf_nat_bysource_table;
+static struct rhltable nf_nat_bysource_table;
 
 inline const struct nf_nat_l3proto *
 __nf_nat_l3proto_find(u8 family)
···
 	const struct nf_nat_conn_key *key = arg->key;
 	const struct nf_conn *ct = obj;
 
-	return same_src(ct, key->tuple) &&
-	       net_eq(nf_ct_net(ct), key->net) &&
-	       nf_ct_zone_equal(ct, key->zone, IP_CT_DIR_ORIGINAL);
+	if (!same_src(ct, key->tuple) ||
+	    !net_eq(nf_ct_net(ct), key->net) ||
+	    !nf_ct_zone_equal(ct, key->zone, IP_CT_DIR_ORIGINAL))
+		return 1;
+
+	return 0;
 }
 
 static struct rhashtable_params nf_nat_bysource_params = {
···
 	.obj_cmpfn = nf_nat_bysource_cmp,
 	.nelem_hint = 256,
 	.min_size = 1024,
-	.nulls_base = (1U << RHT_BASE_SHIFT),
 };
 
 /* Only called for SRC manip */
···
 		.tuple = tuple,
 		.zone = zone
 	};
+	struct rhlist_head *hl;
 
-	ct = rhashtable_lookup_fast(&nf_nat_bysource_table, &key,
-				    nf_nat_bysource_params);
-	if (!ct)
+	hl = rhltable_lookup(&nf_nat_bysource_table, &key,
+			     nf_nat_bysource_params);
+	if (!hl)
 		return 0;
+
+	ct = container_of(hl, typeof(*ct), nat_bysource);
 
 	nf_ct_invert_tuplepr(result,
 			     &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
···
 	}
 
 	if (maniptype == NF_NAT_MANIP_SRC) {
+		struct nf_nat_conn_key key = {
+			.net = nf_ct_net(ct),
+			.tuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple,
+			.zone = nf_ct_zone(ct),
+		};
 		int err;
 
-		err = rhashtable_insert_fast(&nf_nat_bysource_table,
-					     &ct->nat_bysource,
-					     nf_nat_bysource_params);
+		err = rhltable_insert_key(&nf_nat_bysource_table,
+					  &key,
+					  &ct->nat_bysource,
+					  nf_nat_bysource_params);
 		if (err)
 			return NF_DROP;
 	}
···
 	 * will delete entry from already-freed table.
 	 */
 	ct->status &= ~IPS_NAT_DONE_MASK;
-	rhashtable_remove_fast(&nf_nat_bysource_table, &ct->nat_bysource,
-			       nf_nat_bysource_params);
+	rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource,
+			nf_nat_bysource_params);
 
 	/* don't delete conntrack. Although that would make things a lot
 	 * simpler, we'd end up flushing all conntracks on nat rmmod.
···
 	if (!nat)
 		return;
 
-	rhashtable_remove_fast(&nf_nat_bysource_table, &ct->nat_bysource,
-			       nf_nat_bysource_params);
+	rhltable_remove(&nf_nat_bysource_table, &ct->nat_bysource,
+			nf_nat_bysource_params);
 }
 
 static struct nf_ct_ext_type nat_extend __read_mostly = {
···
 {
 	int ret;
 
-	ret = rhashtable_init(&nf_nat_bysource_table, &nf_nat_bysource_params);
+	ret = rhltable_init(&nf_nat_bysource_table, &nf_nat_bysource_params);
 	if (ret)
 		return ret;
 
 	ret = nf_ct_extend_register(&nat_extend);
 	if (ret < 0) {
-		rhashtable_destroy(&nf_nat_bysource_table);
+		rhltable_destroy(&nf_nat_bysource_table);
 		printk(KERN_ERR "nf_nat_core: Unable to register extension\n");
 		return ret;
 	}
···
 	return 0;
 
 cleanup_extend:
-	rhashtable_destroy(&nf_nat_bysource_table);
+	rhltable_destroy(&nf_nat_bysource_table);
 	nf_ct_extend_unregister(&nat_extend);
 	return ret;
 }
···
 	for (i = 0; i < NFPROTO_NUMPROTO; i++)
 		kfree(nf_nat_l4protos[i]);
 
-	rhashtable_destroy(&nf_nat_bysource_table);
+	rhltable_destroy(&nf_nat_bysource_table);
 }
 
 MODULE_LICENSE("GPL");
+9 -5
net/netfilter/nf_tables_api.c
···
 	}
 
 	if (set->timeout &&
-	    nla_put_be64(skb, NFTA_SET_TIMEOUT, cpu_to_be64(set->timeout),
+	    nla_put_be64(skb, NFTA_SET_TIMEOUT,
+			 cpu_to_be64(jiffies_to_msecs(set->timeout)),
 			 NFTA_SET_PAD))
 		goto nla_put_failure;
 	if (set->gc_int &&
···
 	if (nla[NFTA_SET_TIMEOUT] != NULL) {
 		if (!(flags & NFT_SET_TIMEOUT))
 			return -EINVAL;
-		timeout = be64_to_cpu(nla_get_be64(nla[NFTA_SET_TIMEOUT]));
+		timeout = msecs_to_jiffies(be64_to_cpu(nla_get_be64(
+						nla[NFTA_SET_TIMEOUT])));
 	}
 	gc_int = 0;
 	if (nla[NFTA_SET_GC_INTERVAL] != NULL) {
···
 
 	if (nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT) &&
 	    nla_put_be64(skb, NFTA_SET_ELEM_TIMEOUT,
-			 cpu_to_be64(*nft_set_ext_timeout(ext)),
+			 cpu_to_be64(jiffies_to_msecs(
+						*nft_set_ext_timeout(ext))),
 			 NFTA_SET_ELEM_PAD))
 		goto nla_put_failure;
 
···
 	memcpy(nft_set_ext_data(ext), data, set->dlen);
 	if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPIRATION))
 		*nft_set_ext_expiration(ext) =
-			jiffies + msecs_to_jiffies(timeout);
+			jiffies + timeout;
 	if (nft_set_ext_exists(ext, NFT_SET_EXT_TIMEOUT))
 		*nft_set_ext_timeout(ext) = timeout;
 
···
 	if (nla[NFTA_SET_ELEM_TIMEOUT] != NULL) {
 		if (!(set->flags & NFT_SET_TIMEOUT))
 			return -EINVAL;
-		timeout = be64_to_cpu(nla_get_be64(nla[NFTA_SET_ELEM_TIMEOUT]));
+		timeout = msecs_to_jiffies(be64_to_cpu(nla_get_be64(
+						nla[NFTA_SET_ELEM_TIMEOUT])));
 	} else if (set->flags & NFT_SET_TIMEOUT) {
 		timeout = set->timeout;
 	}
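The nf_tables hunk above fixes a unit mismatch: userspace speaks milliseconds over netlink, while the kernel keeps set-element timeouts in jiffies, so conversion must happen exactly once at the netlink boundary. A simplified, self-contained model of that conversion (the real `msecs_to_jiffies()`/`jiffies_to_msecs()` also handle overflow and non-divisor HZ values; HZ=100 here is just an assumption for the sketch):

```c
#include <assert.h>

#define HZ 100	/* assumed tick rate for this model only */

/* Convert milliseconds to jiffies, rounding up to a full tick, like the
 * kernel helper does for small timeouts. */
static unsigned long msecs_to_jiffies_model(unsigned long msecs)
{
	return (msecs * HZ + 999) / 1000;
}

/* Convert jiffies back to milliseconds (exact when 1000 % HZ == 0). */
static unsigned long jiffies_to_msecs_model(unsigned long j)
{
	return j * (1000 / HZ);
}
```

Storing the raw netlink value as if it were jiffies (the pre-fix behavior) would make a "1000 ms" timeout expire after 1000 ticks, i.e. 10 seconds at HZ=100.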
+5 -2
net/netfilter/nft_hash.c
···
 {
 	struct nft_hash *priv = nft_expr_priv(expr);
 	u32 len;
+	int err;
 
 	if (!tb[NFTA_HASH_SREG] ||
 	    !tb[NFTA_HASH_DREG] ||
···
 	priv->sreg = nft_parse_register(tb[NFTA_HASH_SREG]);
 	priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]);
 
-	len = ntohl(nla_get_be32(tb[NFTA_HASH_LEN]));
-	if (len == 0 || len > U8_MAX)
+	err = nft_parse_u32_check(tb[NFTA_HASH_LEN], U8_MAX, &len);
+	if (err < 0)
+		return err;
+	if (len == 0)
 		return -ERANGE;
 
 	priv->len = len;
+6
net/netfilter/nft_range.c
···
 	int err;
 	u32 op;
 
+	if (!tb[NFTA_RANGE_SREG] ||
+	    !tb[NFTA_RANGE_OP] ||
+	    !tb[NFTA_RANGE_FROM_DATA] ||
+	    !tb[NFTA_RANGE_TO_DATA])
+		return -EINVAL;
+
 	err = nft_data_init(NULL, &priv->data_from, sizeof(priv->data_from),
 			    &desc_from, tb[NFTA_RANGE_FROM_DATA]);
 	if (err < 0)
+23 -4
net/netlink/af_netlink.c
···
 	sk_mem_charge(sk, skb->truesize);
 }
 
-static void netlink_sock_destruct(struct sock *sk)
+static void __netlink_sock_destruct(struct sock *sk)
 {
 	struct netlink_sock *nlk = nlk_sk(sk);
 
 	if (nlk->cb_running) {
-		if (nlk->cb.done)
-			nlk->cb.done(&nlk->cb);
-
 		module_put(nlk->cb.module);
 		kfree_skb(nlk->cb.skb);
 	}
···
 	WARN_ON(atomic_read(&sk->sk_rmem_alloc));
 	WARN_ON(atomic_read(&sk->sk_wmem_alloc));
 	WARN_ON(nlk_sk(sk)->groups);
+}
+
+static void netlink_sock_destruct_work(struct work_struct *work)
+{
+	struct netlink_sock *nlk = container_of(work, struct netlink_sock,
+						work);
+
+	nlk->cb.done(&nlk->cb);
+	__netlink_sock_destruct(&nlk->sk);
+}
+
+static void netlink_sock_destruct(struct sock *sk)
+{
+	struct netlink_sock *nlk = nlk_sk(sk);
+
+	if (nlk->cb_running && nlk->cb.done) {
+		INIT_WORK(&nlk->work, netlink_sock_destruct_work);
+		schedule_work(&nlk->work);
+		return;
+	}
+
+	__netlink_sock_destruct(sk);
 }
 
 /* This lock without WQ_FLAG_EXCLUSIVE is good on UP and it is _very_ bad on
+2
net/netlink/af_netlink.h
···
 
 #include <linux/rhashtable.h>
 #include <linux/atomic.h>
+#include <linux/workqueue.h>
 #include <net/sock.h>
 
 #define NLGRPSZ(x)	(ALIGN(x, sizeof(unsigned long) * 8) / 8)
···
 
 	struct rhash_head	node;
 	struct rcu_head		rcu;
+	struct work_struct	work;
 };
 
 static inline struct netlink_sock *nlk_sk(struct sock *sk)
+4 -1
net/openvswitch/conntrack.c
···
 		skb_orphan(skb);
 		memset(IP6CB(skb), 0, sizeof(struct inet6_skb_parm));
 		err = nf_ct_frag6_gather(net, skb, user);
-		if (err)
+		if (err) {
+			if (err != -EINPROGRESS)
+				kfree_skb(skb);
 			return err;
+		}
 
 		key->ip.proto = ipv6_hdr(skb)->nexthdr;
 		ovs_cb.mru = IP6CB(skb)->frag_max_size;
+12 -6
net/packet/af_packet.c
···
 
 		if (optlen != sizeof(val))
 			return -EINVAL;
-		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)
-			return -EBUSY;
 		if (copy_from_user(&val, optval, sizeof(val)))
 			return -EFAULT;
 		switch (val) {
 		case TPACKET_V1:
 		case TPACKET_V2:
 		case TPACKET_V3:
-			po->tp_version = val;
-			return 0;
+			break;
 		default:
 			return -EINVAL;
 		}
+		lock_sock(sk);
+		if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
+			ret = -EBUSY;
+		} else {
+			po->tp_version = val;
+			ret = 0;
+		}
+		release_sock(sk);
+		return ret;
 	}
 	case PACKET_RESERVE:
 	{
···
 	/* Added to avoid minimal code churn */
 	struct tpacket_req *req = &req_u->req;
 
+	lock_sock(sk);
 	/* Opening a Tx-ring is NOT supported in TPACKET_V3 */
 	if (!closing && tx_ring && (po->tp_version > TPACKET_V2)) {
 		net_warn_ratelimited("Tx-ring is not supported.\n");
···
 		goto out;
 	}
 
-	lock_sock(sk);
 
 	/* Detach socket from network */
 	spin_lock(&po->bind_lock);
···
 		if (!tx_ring)
 			prb_shutdown_retire_blk_timer(po, rb_queue);
 	}
-	release_sock(sk);
 
 	if (pg_vec)
 		free_pg_vec(pg_vec, order, req->tp_block_nr);
 out:
+	release_sock(sk);
 	return err;
 }
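The af_packet hunk closes the `packet_set_ring()` race (item 2 in the merge summary): both the `PACKET_VERSION` setsockopt and the ring setup now run under `lock_sock()`, so the "is a ring mapped?" check and the `tp_version` store are one atomic step. A hypothetical userspace model of that pattern (`packet_sock_model` and `set_version` are illustrative names, not kernel API):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

/* Toy stand-in for struct packet_sock: the mutex plays the role of the
 * socket lock taken by lock_sock()/release_sock() in the patch. */
struct packet_sock_model {
	pthread_mutex_t lock;
	int ring_mapped;	/* models po->rx_ring.pg_vec/po->tx_ring.pg_vec */
	int tp_version;
};

int set_version(struct packet_sock_model *po, int val)
{
	int ret;

	pthread_mutex_lock(&po->lock);
	if (po->ring_mapped) {
		ret = -EBUSY;		/* ring already set up: refuse */
	} else {
		po->tp_version = val;	/* store while still holding the lock */
		ret = 0;
	}
	pthread_mutex_unlock(&po->lock);
	return ret;
}
```

In the pre-fix code the check and the store were not serialized against ring setup, so a concurrent `PACKET_RX_RING` could observe a half-updated version, which is what made the bug exploitable.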
+2
net/rds/tcp.c
···
 out_pernet:
 	unregister_pernet_subsys(&rds_tcp_net_ops);
 out_slab:
+	if (unregister_netdevice_notifier(&rds_tcp_dev_notifier))
+		pr_warn("could not unregister rds_tcp_dev_notifier\n");
 	kmem_cache_destroy(rds_tcp_conn_slab);
 out:
 	return ret;
+20 -4
net/sched/act_pedit.c
···
 	kfree(keys);
 }
 
+static bool offset_valid(struct sk_buff *skb, int offset)
+{
+	if (offset > 0 && offset > skb->len)
+		return false;
+
+	if (offset < 0 && -offset > skb_headroom(skb))
+		return false;
+
+	return true;
+}
+
 static int tcf_pedit(struct sk_buff *skb, const struct tc_action *a,
 		     struct tcf_result *res)
 {
···
 			if (tkey->offmask) {
 				char *d, _d;
 
+				if (!offset_valid(skb, off + tkey->at)) {
+					pr_info("tc filter pedit 'at' offset %d out of bounds\n",
+						off + tkey->at);
+					goto bad;
+				}
 				d = skb_header_pointer(skb, off + tkey->at, 1,
 						       &_d);
 				if (!d)
···
 					       " offset must be on 32 bit boundaries\n");
 				goto bad;
 			}
-			if (offset > 0 && offset > skb->len) {
-				pr_info("tc filter pedit"
-					" offset %d can't exceed pkt length %d\n",
-					offset, skb->len);
+
+			if (!offset_valid(skb, off + offset)) {
+				pr_info("tc filter pedit offset %d out of bounds\n",
+					offset);
 				goto bad;
 			}
 
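The new `offset_valid()` helper above rejects both directions of out-of-bounds access: a positive offset past the packet data and a negative offset deeper than the available headroom. A userspace sketch of the same predicate, with `skb->len` and `skb_headroom()` modelled as plain integers (the `_model` names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* pkt_len models skb->len; headroom models skb_headroom(skb). */
static bool offset_valid_model(int pkt_len, int headroom, int offset)
{
	if (offset > 0 && offset > pkt_len)
		return false;		/* would read past the packet data */

	if (offset < 0 && -offset > headroom)
		return false;		/* would read before the headroom */

	return true;
}
```

The pre-fix code only checked the positive direction, so a crafted pedit rule with a large negative offset could reach memory in front of the skb data.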
-4
net/sched/cls_basic.c
···
 	struct basic_head *head = rtnl_dereference(tp->root);
 	struct basic_filter *f;
 
-	if (head == NULL)
-		return 0UL;
-
 	list_for_each_entry(f, &head->flist, link) {
 		if (f->handle == handle) {
 			l = (unsigned long) f;
···
 		tcf_unbind_filter(tp, &f->res);
 		call_rcu(&f->rcu, basic_delete_filter);
 	}
-	RCU_INIT_POINTER(tp->root, NULL);
 	kfree_rcu(head, rcu);
 	return true;
 }
-4
net/sched/cls_bpf.c
···
 		call_rcu(&prog->rcu, __cls_bpf_delete_prog);
 	}
 
-	RCU_INIT_POINTER(tp->root, NULL);
 	kfree_rcu(head, rcu);
 	return true;
 }
···
 	struct cls_bpf_head *head = rtnl_dereference(tp->root);
 	struct cls_bpf_prog *prog;
 	unsigned long ret = 0UL;
-
-	if (head == NULL)
-		return 0UL;
 
 	list_for_each_entry(prog, &head->plist, link) {
 		if (prog->handle == handle) {
+3 -4
net/sched/cls_cgroup.c
···
 
 	if (!force)
 		return false;
-
-	if (head) {
-		RCU_INIT_POINTER(tp->root, NULL);
+	/* Head can still be NULL due to cls_cgroup_init(). */
+	if (head)
 		call_rcu(&head->rcu, cls_cgroup_destroy_rcu);
-	}
+
 	return true;
 }
-1
net/sched/cls_flow.c
···
 		list_del_rcu(&f->list);
 		call_rcu(&f->rcu, flow_destroy_filter);
 	}
-	RCU_INIT_POINTER(tp->root, NULL);
 	kfree_rcu(head, rcu);
 	return true;
 }
+32 -9
net/sched/cls_flower.c
···
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/rhashtable.h>
+#include <linux/workqueue.h>
 
 #include <linux/if_ether.h>
 #include <linux/in6.h>
···
 	bool mask_assigned;
 	struct list_head filters;
 	struct rhashtable_params ht_params;
-	struct rcu_head rcu;
+	union {
+		struct work_struct work;
+		struct rcu_head	rcu;
+	};
 };
 
 struct cls_fl_filter {
···
 	dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle, tp->protocol, &tc);
 }
 
+static void fl_destroy_sleepable(struct work_struct *work)
+{
+	struct cls_fl_head *head = container_of(work, struct cls_fl_head,
+						work);
+
+	if (head->mask_assigned)
+		rhashtable_destroy(&head->ht);
+	kfree(head);
+	module_put(THIS_MODULE);
+}
+
+static void fl_destroy_rcu(struct rcu_head *rcu)
+{
+	struct cls_fl_head *head = container_of(rcu, struct cls_fl_head, rcu);
+
+	INIT_WORK(&head->work, fl_destroy_sleepable);
+	schedule_work(&head->work);
+}
+
 static bool fl_destroy(struct tcf_proto *tp, bool force)
 {
 	struct cls_fl_head *head = rtnl_dereference(tp->root);
···
 		list_del_rcu(&f->list);
 		call_rcu(&f->rcu, fl_destroy_filter);
 	}
-	RCU_INIT_POINTER(tp->root, NULL);
-	if (head->mask_assigned)
-		rhashtable_destroy(&head->ht);
-	kfree_rcu(head, rcu);
+
+	__module_get(THIS_MODULE);
+	call_rcu(&head->rcu, fl_destroy_rcu);
 	return true;
 }
···
 		goto errout;
 
 	if (fold) {
-		rhashtable_remove_fast(&head->ht, &fold->ht_node,
-				       head->ht_params);
+		if (!tc_skip_sw(fold->flags))
+			rhashtable_remove_fast(&head->ht, &fold->ht_node,
+					       head->ht_params);
 		fl_hw_destroy_filter(tp, (unsigned long)fold);
 	}
···
 	struct cls_fl_head *head = rtnl_dereference(tp->root);
 	struct cls_fl_filter *f = (struct cls_fl_filter *) arg;
 
-	rhashtable_remove_fast(&head->ht, &f->ht_node,
-			       head->ht_params);
+	if (!tc_skip_sw(f->flags))
+		rhashtable_remove_fast(&head->ht, &f->ht_node,
+				       head->ht_params);
 	list_del_rcu(&f->list);
 	fl_hw_destroy_filter(tp, (unsigned long)f);
 	tcf_unbind_filter(tp, &f->res);
-1
net/sched/cls_matchall.c
···
 
 		call_rcu(&f->rcu, mall_destroy_filter);
 	}
-	RCU_INIT_POINTER(tp->root, NULL);
 	kfree_rcu(head, rcu);
 	return true;
 }
+2 -1
net/sched/cls_rsvp.h
···
 		return -1;
 	nhptr = ip_hdr(skb);
 #endif
-
+	if (unlikely(!head))
+		return -1;
 restart:
 
 #if RSVP_DST_LEN == 4
-1
net/sched/cls_tcindex.c
···
 	walker.fn = tcindex_destroy_element;
 	tcindex_walk(tp, &walker);
 
-	RCU_INIT_POINTER(tp->root, NULL);
 	call_rcu(&p->rcu, __tcindex_destroy);
 	return true;
 }
+9 -2
net/tipc/bearer.c
···
 	dev = dev_get_by_name(net, driver_name);
 	if (!dev)
 		return -ENODEV;
+	if (tipc_mtu_bad(dev, 0)) {
+		dev_put(dev);
+		return -EINVAL;
+	}
 
 	/* Associate TIPC bearer with L2 bearer */
 	rcu_assign_pointer(b->media_ptr, dev);
···
 	if (!b)
 		return NOTIFY_DONE;
 
-	b->mtu = dev->mtu;
-
 	switch (evt) {
 	case NETDEV_CHANGE:
 		if (netif_carrier_ok(dev))
···
 		tipc_reset_bearer(net, b);
 		break;
 	case NETDEV_CHANGEMTU:
+		if (tipc_mtu_bad(dev, 0)) {
+			bearer_disable(net, b);
+			break;
+		}
+		b->mtu = dev->mtu;
 		tipc_reset_bearer(net, b);
 		break;
 	case NETDEV_CHANGEADDR:
+13
net/tipc/bearer.h
···
 
 #include "netlink.h"
 #include "core.h"
+#include "msg.h"
 #include <net/genetlink.h>
 
 #define MAX_MEDIA	3
···
 #define TIPC_MEDIA_TYPE_ETH	1
 #define TIPC_MEDIA_TYPE_IB	2
 #define TIPC_MEDIA_TYPE_UDP	3
+
+/* minimum bearer MTU */
+#define TIPC_MIN_BEARER_MTU	(MAX_H_SIZE + INT_H_SIZE)
 
 /**
  * struct tipc_media_addr - destination address used by TIPC bearers
···
 			  struct tipc_media_addr *dst);
 void tipc_bearer_bc_xmit(struct net *net, u32 bearer_id,
 			 struct sk_buff_head *xmitq);
+
+/* check if device MTU is too low for tipc headers */
+static inline bool tipc_mtu_bad(struct net_device *dev, unsigned int reserve)
+{
+	if (dev->mtu >= TIPC_MIN_BEARER_MTU + reserve)
+		return false;
+	netdev_warn(dev, "MTU too low for tipc bearer\n");
+	return true;
+}
 
 #endif	/* _TIPC_BEARER_H */
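The new `tipc_mtu_bad()` inline above enforces a floor on the device MTU, with a `reserve` argument so that UDP bearers can account for the IP and UDP headers they add (as udp_media.c does further down). A standalone sketch of the check; the constant 60 below is only a stand-in for `MAX_H_SIZE + INT_H_SIZE`, whose real values live in TIPC's msg.h:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for TIPC_MIN_BEARER_MTU (MAX_H_SIZE + INT_H_SIZE); the real
 * value comes from the kernel headers, 60 is assumed here. */
#define TIPC_MIN_BEARER_MTU_MODEL 60

static bool tipc_mtu_bad_model(unsigned int dev_mtu, unsigned int reserve)
{
	/* UDP bearers pass sizeof(iphdr) + sizeof(udphdr) as "reserve",
	 * so the floor applies to the usable payload, not the raw MTU. */
	return dev_mtu < TIPC_MIN_BEARER_MTU_MODEL + reserve;
}
```

Without this floor, attaching TIPC to a device with a pathologically small MTU (or shrinking it at runtime, hence the `NETDEV_CHANGEMTU` handling in bearer.c) could leave the bearer unable to carry even a minimal TIPC header.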
+19 -16
net/tipc/link.c
···
 #include <linux/pkt_sched.h>
 
 struct tipc_stats {
-	u32 sent_info;		/* used in counting # sent packets */
-	u32 recv_info;		/* used in counting # recv'd packets */
+	u32 sent_pkts;
+	u32 recv_pkts;
 	u32 sent_states;
 	u32 recv_states;
 	u32 sent_probes;
···
 	l->acked = 0;
 	l->silent_intv_cnt = 0;
 	l->rst_cnt = 0;
-	l->stats.recv_info = 0;
 	l->stale_count = 0;
 	l->bc_peer_is_up = false;
 	memset(&l->mon_state, 0, sizeof(l->mon_state));
···
 	struct sk_buff_head *transmq = &l->transmq;
 	struct sk_buff_head *backlogq = &l->backlogq;
 	struct sk_buff *skb, *_skb, *bskb;
+	int pkt_cnt = skb_queue_len(list);
 
 	/* Match msg importance against this and all higher backlog limits: */
 	if (!skb_queue_empty(backlogq)) {
···
 	if (unlikely(msg_size(hdr) > mtu)) {
 		skb_queue_purge(list);
 		return -EMSGSIZE;
 	}
+
+	if (pkt_cnt > 1) {
+		l->stats.sent_fragmented++;
+		l->stats.sent_fragments += pkt_cnt;
+	}
 
 	/* Prepare each packet for sending, and add to relevant queue: */
···
 			__skb_queue_tail(xmitq, _skb);
 			TIPC_SKB_CB(skb)->ackers = l->ackers;
 			l->rcv_unacked = 0;
+			l->stats.sent_pkts++;
 			seqno++;
 			continue;
 		}
···
 		msg_set_ack(hdr, ack);
 		msg_set_bcast_ack(hdr, bc_ack);
 		l->rcv_unacked = 0;
+		l->stats.sent_pkts++;
 		seqno++;
 	}
 	l->snd_nxt = seqno;
···
 
 		/* Deliver packet */
 		l->rcv_nxt++;
-		l->stats.recv_info++;
+		l->stats.recv_pkts++;
 		if (!tipc_data_input(l, skb, l->inputq))
 			rc |= tipc_link_input(l, skb, l->inputq);
 		if (unlikely(++l->rcv_unacked >= TIPC_MIN_LINK_WIN))
···
 void tipc_link_reset_stats(struct tipc_link *l)
 {
 	memset(&l->stats, 0, sizeof(l->stats));
-	if (!link_is_bc_sndlink(l)) {
-		l->stats.sent_info = l->snd_nxt;
-		l->stats.recv_info = l->rcv_nxt;
-	}
 }
 
 static void link_print(struct tipc_link *l, const char *str)
···
 	};
 
 	struct nla_map map[] = {
-		{TIPC_NLA_STATS_RX_INFO, s->recv_info},
+		{TIPC_NLA_STATS_RX_INFO, 0},
 		{TIPC_NLA_STATS_RX_FRAGMENTS, s->recv_fragments},
 		{TIPC_NLA_STATS_RX_FRAGMENTED, s->recv_fragmented},
 		{TIPC_NLA_STATS_RX_BUNDLES, s->recv_bundles},
 		{TIPC_NLA_STATS_RX_BUNDLED, s->recv_bundled},
-		{TIPC_NLA_STATS_TX_INFO, s->sent_info},
+		{TIPC_NLA_STATS_TX_INFO, 0},
 		{TIPC_NLA_STATS_TX_FRAGMENTS, s->sent_fragments},
 		{TIPC_NLA_STATS_TX_FRAGMENTED, s->sent_fragmented},
 		{TIPC_NLA_STATS_TX_BUNDLES, s->sent_bundles},
···
 		goto attr_msg_full;
 	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_MTU, link->mtu))
 		goto attr_msg_full;
-	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, link->rcv_nxt))
+	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, link->stats.recv_pkts))
 		goto attr_msg_full;
-	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, link->snd_nxt))
+	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, link->stats.sent_pkts))
 		goto attr_msg_full;
 
 	if (tipc_link_is_up(link))
···
 	};
 
 	struct nla_map map[] = {
-		{TIPC_NLA_STATS_RX_INFO, stats->recv_info},
+		{TIPC_NLA_STATS_RX_INFO, stats->recv_pkts},
 		{TIPC_NLA_STATS_RX_FRAGMENTS, stats->recv_fragments},
 		{TIPC_NLA_STATS_RX_FRAGMENTED, stats->recv_fragmented},
 		{TIPC_NLA_STATS_RX_BUNDLES, stats->recv_bundles},
 		{TIPC_NLA_STATS_RX_BUNDLED, stats->recv_bundled},
-		{TIPC_NLA_STATS_TX_INFO, stats->sent_info},
+		{TIPC_NLA_STATS_TX_INFO, stats->sent_pkts},
 		{TIPC_NLA_STATS_TX_FRAGMENTS, stats->sent_fragments},
 		{TIPC_NLA_STATS_TX_FRAGMENTED, stats->sent_fragmented},
 		{TIPC_NLA_STATS_TX_BUNDLES, stats->sent_bundles},
···
 		goto attr_msg_full;
 	if (nla_put_string(msg->skb, TIPC_NLA_LINK_NAME, bcl->name))
 		goto attr_msg_full;
-	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, bcl->rcv_nxt))
+	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_RX, 0))
 		goto attr_msg_full;
-	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, bcl->snd_nxt))
+	if (nla_put_u32(msg->skb, TIPC_NLA_LINK_TX, 0))
 		goto attr_msg_full;
 
 	prop = nla_nest_start(msg->skb, TIPC_NLA_LINK_PROP);
+5
net/tipc/udp_media.c
···
 		udp_conf.local_ip.s_addr = htonl(INADDR_ANY);
 		udp_conf.use_udp_checksums = false;
 		ub->ifindex = dev->ifindex;
+		if (tipc_mtu_bad(dev, sizeof(struct iphdr) +
+				      sizeof(struct udphdr))) {
+			err = -EINVAL;
+			goto err;
+		}
 		b->mtu = dev->mtu - sizeof(struct iphdr)
 			- sizeof(struct udphdr);
 #if IS_ENABLED(CONFIG_IPV6)
+6 -4
net/xfrm/xfrm_policy.c
···
 			err = security_xfrm_policy_lookup(pol->security,
 						      fl->flowi_secid,
 						      policy_to_flow_dir(dir));
-			if (!err && !xfrm_pol_hold_rcu(pol))
-				goto again;
-			else if (err == -ESRCH)
+			if (!err) {
+				if (!xfrm_pol_hold_rcu(pol))
+					goto again;
+			} else if (err == -ESRCH) {
 				pol = NULL;
-			else
+			} else {
 				pol = ERR_PTR(err);
+			}
 		} else
 			pol = NULL;
 	}
+1 -1
net/xfrm/xfrm_user.c
···
 
 #ifdef CONFIG_COMPAT
 	if (in_compat_syscall())
-		return -ENOTSUPP;
+		return -EOPNOTSUPP;
 #endif
 
 	type = nlh->nlmsg_type;
+1 -1
samples/bpf/bpf_helpers.h
···
 #define PT_REGS_FP(x) ((x)->gprs[11]) /* Works only with CONFIG_FRAME_POINTER */
 #define PT_REGS_RC(x) ((x)->gprs[2])
 #define PT_REGS_SP(x) ((x)->gprs[15])
-#define PT_REGS_IP(x) ((x)->ip)
+#define PT_REGS_IP(x) ((x)->psw.addr)
 
 #elif defined(__aarch64__)
 
+1 -1
samples/bpf/sampleip_kern.c
···
 	u64 ip;
 	u32 *value, init_val = 1;
 
-	ip = ctx->regs.ip;
+	ip = PT_REGS_IP(&ctx->regs);
 	value = bpf_map_lookup_elem(&ip_map, &ip);
 	if (value)
 		*value += 1;
+1 -1
samples/bpf/trace_event_kern.c
···
 	key.userstack = bpf_get_stackid(ctx, &stackmap, USER_STACKID_FLAGS);
 	if ((int)key.kernstack < 0 && (int)key.userstack < 0) {
 		bpf_trace_printk(fmt, sizeof(fmt), cpu, ctx->sample_period,
-				 ctx->regs.ip);
+				 PT_REGS_IP(&ctx->regs));
 		return 0;
 	}
 