Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from netfilter, wireless and bluetooth.

Kalle Valo steps down after serving as the WiFi driver maintainer for
over a decade.

Current release - fix to a fix:

- vsock: orphan socket after transport release, avoid null-deref

- Bluetooth: L2CAP: fix corrupted list in hci_chan_del

Current release - regressions:

- eth:
- stmmac: correct Rx buffer layout when SPH is enabled
- iavf: fix a locking bug in an error path

- rxrpc: fix alteration of headers whilst zerocopy pending

- s390/qeth: move netif_napi_add_tx() and napi_enable() from under BH

- Revert "netfilter: flowtable: teardown flow if cached mtu is stale"

Current release - new code bugs:

- rxrpc: fix ipv6 path MTU discovery, only ipv4 worked

- pse-pd: fix deadlock in current limit functions

Previous releases - regressions:

- rtnetlink: fix netns refleak with rtnl_setlink()

- wifi: brcmfmac: use random seed flag for BCM4355 and BCM4364
firmware

Previous releases - always broken:

- add missing RCU protection of struct net throughout the stack

- can: rockchip: bail out if skb cannot be allocated

- eth: ti: am65-cpsw: base XDP support fixes

Misc:

- ethtool: tsconfig: update the format of hwtstamp flags, changes the
uAPI but this uAPI was not in any release yet"

* tag 'net-6.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (72 commits)
net: pse-pd: Fix deadlock in current limit functions
rxrpc: Fix ipv6 path MTU discovery
Reapply "net: skb: introduce and use a single page frag cache"
s390/qeth: move netif_napi_add_tx() and napi_enable() from under BH
mlxsw: Add return value check for mlxsw_sp_port_get_stats_raw()
ipv6: mcast: add RCU protection to mld_newpack()
team: better TEAM_OPTION_TYPE_STRING validation
Bluetooth: L2CAP: Fix corrupted list in hci_chan_del
Bluetooth: btintel_pcie: Fix a potential race condition
Bluetooth: L2CAP: Fix slab-use-after-free Read in l2cap_send_cmd
net: ethernet: ti: am65_cpsw: fix tx_cleanup for XDP case
net: ethernet: ti: am65-cpsw: fix RX & TX statistics for XDP_TX case
net: ethernet: ti: am65-cpsw: fix memleak in certain XDP cases
vsock/test: Add test for SO_LINGER null ptr deref
vsock: Orphan socket after transport release
MAINTAINERS: Add sctp headers to the general netdev entry
Revert "netfilter: flowtable: teardown flow if cached mtu is stale"
iavf: Fix a locking bug in an error path
rxrpc: Fix alteration of headers whilst zerocopy pending
net: phylink: make configuring clock-stop dependent on MAC support
...

+709 -458
+1
.mailmap
··· 376 376 Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com> 377 377 Iskren Chernev <me@iskren.info> <iskren.chernev@gmail.com> 378 378 Kalle Valo <kvalo@kernel.org> <kvalo@codeaurora.org> 379 + Kalle Valo <kvalo@kernel.org> <quic_kvalo@quicinc.com> 379 380 Kalyan Thota <quic_kalyant@quicinc.com> <kalyan_t@codeaurora.org> 380 381 Karthikeyan Periyasamy <quic_periyasa@quicinc.com> <periyasa@codeaurora.org> 381 382 Kathiravan T <quic_kathirav@quicinc.com> <kathirav@codeaurora.org>
-1
Documentation/devicetree/bindings/net/wireless/qcom,ath10k.yaml
··· 7 7 title: Qualcomm Technologies ath10k wireless devices 8 8 9 9 maintainers: 10 - - Kalle Valo <kvalo@kernel.org> 11 10 - Jeff Johnson <jjohnson@kernel.org> 12 11 13 12 description:
-1
Documentation/devicetree/bindings/net/wireless/qcom,ath11k-pci.yaml
··· 8 8 title: Qualcomm Technologies ath11k wireless devices (PCIe) 9 9 10 10 maintainers: 11 - - Kalle Valo <kvalo@kernel.org> 12 11 - Jeff Johnson <jjohnson@kernel.org> 13 12 14 13 description: |
-1
Documentation/devicetree/bindings/net/wireless/qcom,ath11k.yaml
··· 8 8 title: Qualcomm Technologies ath11k wireless devices 9 9 10 10 maintainers: 11 - - Kalle Valo <kvalo@kernel.org> 12 11 - Jeff Johnson <jjohnson@kernel.org> 13 12 14 13 description: |
-1
Documentation/devicetree/bindings/net/wireless/qcom,ath12k-wsi.yaml
··· 9 9 10 10 maintainers: 11 11 - Jeff Johnson <jjohnson@kernel.org> 12 - - Kalle Valo <kvalo@kernel.org> 13 12 14 13 description: | 15 14 Qualcomm Technologies IEEE 802.11be PCIe devices with WSI interface.
-1
Documentation/devicetree/bindings/net/wireless/qcom,ath12k.yaml
··· 9 9 10 10 maintainers: 11 11 - Jeff Johnson <quic_jjohnson@quicinc.com> 12 - - Kalle Valo <kvalo@kernel.org> 13 12 14 13 description: 15 14 Qualcomm Technologies IEEE 802.11be PCIe devices.
+2 -1
Documentation/netlink/specs/ethtool.yaml
··· 1524 1524 nested-attributes: bitset 1525 1525 - 1526 1526 name: hwtstamp-flags 1527 - type: u32 1527 + type: nest 1528 + nested-attributes: bitset 1528 1529 1529 1530 operations: 1530 1531 enum-model: directional
+2 -2
Documentation/networking/iso15765-2.rst
··· 369 369 370 370 addr.can_family = AF_CAN; 371 371 addr.can_ifindex = if_nametoindex("can0"); 372 - addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG; 373 - addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG; 372 + addr.can_addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG; 373 + addr.can_addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG; 374 374 375 375 ret = bind(s, (struct sockaddr *)&addr, sizeof(addr)); 376 376 if (ret < 0)
+3 -5
MAINTAINERS
··· 3654 3654 F: drivers/phy/qualcomm/phy-ath79-usb.c 3655 3655 3656 3656 ATHEROS ATH GENERIC UTILITIES 3657 - M: Kalle Valo <kvalo@kernel.org> 3658 3657 M: Jeff Johnson <jjohnson@kernel.org> 3659 3658 L: linux-wireless@vger.kernel.org 3660 3659 S: Supported ··· 16437 16438 X: drivers/net/wireless/ 16438 16439 16439 16440 NETWORKING DRIVERS (WIRELESS) 16440 - M: Kalle Valo <kvalo@kernel.org> 16441 + M: Johannes Berg <johannes@sipsolutions.net> 16441 16442 L: linux-wireless@vger.kernel.org 16442 16443 S: Maintained 16443 16444 W: https://wireless.wiki.kernel.org/ ··· 16508 16509 F: include/linux/netlink.h 16509 16510 F: include/linux/netpoll.h 16510 16511 F: include/linux/rtnetlink.h 16512 + F: include/linux/sctp.h 16511 16513 F: include/linux/seq_file_net.h 16512 16514 F: include/linux/skbuff* 16513 16515 F: include/net/ ··· 16525 16525 F: include/uapi/linux/netlink.h 16526 16526 F: include/uapi/linux/netlink_diag.h 16527 16527 F: include/uapi/linux/rtnetlink.h 16528 + F: include/uapi/linux/sctp.h 16528 16529 F: lib/net_utils.c 16529 16530 F: lib/random32.c 16530 16531 F: net/ ··· 19356 19355 F: drivers/media/tuners/qt1010* 19357 19356 19358 19357 QUALCOMM ATH12K WIRELESS DRIVER 19359 - M: Kalle Valo <kvalo@kernel.org> 19360 19358 M: Jeff Johnson <jjohnson@kernel.org> 19361 19359 L: ath12k@lists.infradead.org 19362 19360 S: Supported ··· 19365 19365 N: ath12k 19366 19366 19367 19367 QUALCOMM ATHEROS ATH10K WIRELESS DRIVER 19368 - M: Kalle Valo <kvalo@kernel.org> 19369 19368 M: Jeff Johnson <jjohnson@kernel.org> 19370 19369 L: ath10k@lists.infradead.org 19371 19370 S: Supported ··· 19374 19375 N: ath10k 19375 19376 19376 19377 QUALCOMM ATHEROS ATH11K WIRELESS DRIVER 19377 - M: Kalle Valo <kvalo@kernel.org> 19378 19378 M: Jeff Johnson <jjohnson@kernel.org> 19379 19379 L: ath11k@lists.infradead.org 19380 19380 S: Supported
+4 -1
drivers/bluetooth/btintel_pcie.c
··· 1320 1320 if (opcode == 0xfc01) 1321 1321 btintel_pcie_inject_cmd_complete(hdev, opcode); 1322 1322 } 1323 + /* Firmware raises alive interrupt on HCI_OP_RESET */ 1324 + if (opcode == HCI_OP_RESET) 1325 + data->gp0_received = false; 1326 + 1323 1327 hdev->stat.cmd_tx++; 1324 1328 break; 1325 1329 case HCI_ACLDATA_PKT: ··· 1361 1357 opcode, btintel_pcie_alivectxt_state2str(old_ctxt), 1362 1358 btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt)); 1363 1359 if (opcode == HCI_OP_RESET) { 1364 - data->gp0_received = false; 1365 1360 ret = wait_event_timeout(data->gp0_wait_q, 1366 1361 data->gp0_received, 1367 1362 msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
+3 -2
drivers/net/can/c_can/c_can_platform.c
··· 385 385 if (ret) { 386 386 dev_err(&pdev->dev, "registering %s failed (err=%d)\n", 387 387 KBUILD_MODNAME, ret); 388 - goto exit_free_device; 388 + goto exit_pm_runtime; 389 389 } 390 390 391 391 dev_info(&pdev->dev, "%s device registered (regs=%p, irq=%d)\n", 392 392 KBUILD_MODNAME, priv->base, dev->irq); 393 393 return 0; 394 394 395 - exit_free_device: 395 + exit_pm_runtime: 396 396 pm_runtime_disable(priv->device); 397 + exit_free_device: 397 398 free_c_can_dev(dev); 398 399 exit: 399 400 dev_err(&pdev->dev, "probe failed\n");
+6 -4
drivers/net/can/ctucanfd/ctucanfd_base.c
··· 867 867 } 868 868 break; 869 869 case CAN_STATE_ERROR_ACTIVE: 870 - cf->can_id |= CAN_ERR_CNT; 871 - cf->data[1] = CAN_ERR_CRTL_ACTIVE; 872 - cf->data[6] = bec.txerr; 873 - cf->data[7] = bec.rxerr; 870 + if (skb) { 871 + cf->can_id |= CAN_ERR_CNT; 872 + cf->data[1] = CAN_ERR_CRTL_ACTIVE; 873 + cf->data[6] = bec.txerr; 874 + cf->data[7] = bec.rxerr; 875 + } 874 876 break; 875 877 default: 876 878 netdev_warn(ndev, "unhandled error state (%d:%s)!\n",
+1 -1
drivers/net/can/rockchip/rockchip_canfd-core.c
··· 622 622 netdev_dbg(priv->ndev, "RX-FIFO overflow\n"); 623 623 624 624 skb = rkcanfd_alloc_can_err_skb(priv, &cf, &timestamp); 625 - if (skb) 625 + if (!skb) 626 626 return 0; 627 627 628 628 rkcanfd_get_berr_counter_corrected(priv, &bec);
+5 -1
drivers/net/can/usb/etas_es58x/es58x_devlink.c
··· 248 248 return ret; 249 249 } 250 250 251 - return devlink_info_serial_number_put(req, es58x_dev->udev->serial); 251 + if (es58x_dev->udev->serial) 252 + ret = devlink_info_serial_number_put(req, 253 + es58x_dev->udev->serial); 254 + 255 + return ret; 252 256 } 253 257 254 258 const struct devlink_ops es58x_dl_ops = {
+1 -1
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 2903 2903 } 2904 2904 2905 2905 mutex_unlock(&adapter->crit_lock); 2906 - netdev_unlock(netdev); 2907 2906 restart_watchdog: 2907 + netdev_unlock(netdev); 2908 2908 if (adapter->state >= __IAVF_DOWN) 2909 2909 queue_work(adapter->wq, &adapter->adminq_task); 2910 2910 if (adapter->aq_required)
+5
drivers/net/ethernet/intel/idpf/idpf_lib.c
··· 2159 2159 idpf_vport_ctrl_lock(netdev); 2160 2160 vport = idpf_netdev_to_vport(netdev); 2161 2161 2162 + err = idpf_set_real_num_queues(vport); 2163 + if (err) 2164 + goto unlock; 2165 + 2162 2166 err = idpf_vport_open(vport); 2163 2167 2168 + unlock: 2164 2169 idpf_vport_ctrl_unlock(netdev); 2165 2170 2166 2171 return err;
+1 -4
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 3008 3008 return -EINVAL; 3009 3009 3010 3010 rsc_segments = DIV_ROUND_UP(skb->data_len, rsc_seg_len); 3011 - if (unlikely(rsc_segments == 1)) 3012 - return 0; 3013 3011 3014 3012 NAPI_GRO_CB(skb)->count = rsc_segments; 3015 3013 skb_shinfo(skb)->gso_size = rsc_seg_len; ··· 3070 3072 idpf_rx_hash(rxq, skb, rx_desc, decoded); 3071 3073 3072 3074 skb->protocol = eth_type_trans(skb, rxq->netdev); 3075 + skb_record_rx_queue(skb, rxq->idx); 3073 3076 3074 3077 if (le16_get_bits(rx_desc->hdrlen_flags, 3075 3078 VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M)) ··· 3078 3079 3079 3080 csum_bits = idpf_rx_splitq_extract_csum_bits(rx_desc); 3080 3081 idpf_rx_csum(rxq, skb, csum_bits, decoded); 3081 - 3082 - skb_record_rx_queue(skb, rxq->idx); 3083 3082 3084 3083 return 0; 3085 3084 }
+13 -9
drivers/net/ethernet/intel/igc/igc_main.c
··· 1096 1096 return -ENOMEM; 1097 1097 } 1098 1098 1099 + buffer->type = IGC_TX_BUFFER_TYPE_SKB; 1099 1100 buffer->skb = skb; 1100 1101 buffer->protocol = 0; 1101 1102 buffer->bytecount = skb->len; ··· 2702 2701 } 2703 2702 2704 2703 static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring, 2705 - struct xdp_buff *xdp) 2704 + struct igc_xdp_buff *ctx) 2706 2705 { 2706 + struct xdp_buff *xdp = &ctx->xdp; 2707 2707 unsigned int totalsize = xdp->data_end - xdp->data_meta; 2708 2708 unsigned int metasize = xdp->data - xdp->data_meta; 2709 2709 struct sk_buff *skb; ··· 2723 2721 __skb_pull(skb, metasize); 2724 2722 } 2725 2723 2724 + if (ctx->rx_ts) { 2725 + skb_shinfo(skb)->tx_flags |= SKBTX_HW_TSTAMP_NETDEV; 2726 + skb_hwtstamps(skb)->netdev_data = ctx->rx_ts; 2727 + } 2728 + 2726 2729 return skb; 2727 2730 } 2728 2731 2729 2732 static void igc_dispatch_skb_zc(struct igc_q_vector *q_vector, 2730 2733 union igc_adv_rx_desc *desc, 2731 - struct xdp_buff *xdp, 2732 - ktime_t timestamp) 2734 + struct igc_xdp_buff *ctx) 2733 2735 { 2734 2736 struct igc_ring *ring = q_vector->rx.ring; 2735 2737 struct sk_buff *skb; 2736 2738 2737 - skb = igc_construct_skb_zc(ring, xdp); 2739 + skb = igc_construct_skb_zc(ring, ctx); 2738 2740 if (!skb) { 2739 2741 ring->rx_stats.alloc_failed++; 2740 2742 set_bit(IGC_RING_FLAG_RX_ALLOC_FAILED, &ring->flags); 2741 2743 return; 2742 2744 } 2743 - 2744 - if (timestamp) 2745 - skb_hwtstamps(skb)->hwtstamp = timestamp; 2746 2745 2747 2746 if (igc_cleanup_headers(ring, desc, skb)) 2748 2747 return; ··· 2780 2777 union igc_adv_rx_desc *desc; 2781 2778 struct igc_rx_buffer *bi; 2782 2779 struct igc_xdp_buff *ctx; 2783 - ktime_t timestamp = 0; 2784 2780 unsigned int size; 2785 2781 int res; 2786 2782 ··· 2809 2807 */ 2810 2808 bi->xdp->data_meta += IGC_TS_HDR_LEN; 2811 2809 size -= IGC_TS_HDR_LEN; 2810 + } else { 2811 + ctx->rx_ts = NULL; 2812 2812 } 2813 2813 2814 2814 bi->xdp->data_end = bi->xdp->data + size; ··· 2819 2815 res = __igc_xdp_run_prog(adapter, prog, bi->xdp); 2820 2816 switch (res) { 2821 2817 case IGC_XDP_PASS: 2822 - igc_dispatch_skb_zc(q_vector, desc, bi->xdp, timestamp); 2818 + igc_dispatch_skb_zc(q_vector, desc, ctx); 2823 2819 fallthrough; 2824 2820 case IGC_XDP_CONSUMED: 2825 2821 xsk_buff_free(bi->xdp);
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 2105 2105 /* hand second half of page back to the ring */ 2106 2106 ixgbe_reuse_rx_page(rx_ring, rx_buffer); 2107 2107 } else { 2108 - if (!IS_ERR(skb) && IXGBE_CB(skb)->dma == rx_buffer->dma) { 2108 + if (skb && IXGBE_CB(skb)->dma == rx_buffer->dma) { 2109 2109 /* the page has been released from the ring */ 2110 2110 IXGBE_CB(skb)->page_released = true; 2111 2111 } else {
+3 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
··· 768 768 err = mlxsw_sp_get_hw_stats_by_group(&hw_stats, &len, grp); 769 769 if (err) 770 770 return; 771 - mlxsw_sp_port_get_stats_raw(dev, grp, prio, ppcnt_pl); 771 + err = mlxsw_sp_port_get_stats_raw(dev, grp, prio, ppcnt_pl); 772 + if (err) 773 + return; 772 774 for (i = 0; i < len; i++) { 773 775 data[data_index + i] = hw_stats[i].getter(ppcnt_pl); 774 776 if (!hw_stats[i].cells_bytes)
+5
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2094 2094 pp_params.offset = stmmac_rx_offset(priv); 2095 2095 pp_params.max_len = dma_conf->dma_buf_sz; 2096 2096 2097 + if (priv->sph) { 2098 + pp_params.offset = 0; 2099 + pp_params.max_len += stmmac_rx_offset(priv); 2100 + } 2101 + 2097 2102 rx_q->page_pool = page_pool_create(&pp_params); 2098 2103 if (IS_ERR(rx_q->page_pool)) { 2099 2104 ret = PTR_ERR(rx_q->page_pool);
+31 -19
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 828 828 static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma) 829 829 { 830 830 struct am65_cpsw_tx_chn *tx_chn = data; 831 + enum am65_cpsw_tx_buf_type buf_type; 831 832 struct cppi5_host_desc_t *desc_tx; 833 + struct xdp_frame *xdpf; 832 834 struct sk_buff *skb; 834 836 void **swdata; 835 837 836 838 desc_tx = k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma); 837 839 swdata = cppi5_hdesc_get_swdata(desc_tx); 838 - skb = *(swdata); 839 - am65_cpsw_nuss_xmit_free(tx_chn, desc_tx); 840 + buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma); 841 + if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) { 842 + skb = *(swdata); 843 + dev_kfree_skb_any(skb); 844 + } else { 845 + xdpf = *(swdata); 846 + xdp_return_frame(xdpf); 847 + } 839 848 840 - dev_kfree_skb_any(skb); 849 + am65_cpsw_nuss_xmit_free(tx_chn, desc_tx); 841 850 } 842 851 843 852 static struct sk_buff *am65_cpsw_build_skb(void *page_addr, 844 853 struct net_device *ndev, 845 - unsigned int len) 854 + unsigned int len, 855 + unsigned int headroom) 846 856 { 847 857 struct sk_buff *skb; 848 858 ··· 861 852 if (unlikely(!skb)) 862 853 return NULL; 863 854 864 - skb_reserve(skb, AM65_CPSW_HEADROOM); 855 + skb_reserve(skb, headroom); 865 856 skb->dev = ndev; 866 857 867 858 return skb; ··· 1178 1169 struct xdp_frame *xdpf; 1179 1170 struct bpf_prog *prog; 1180 1171 struct page *page; 1172 + int pkt_len; 1181 1173 u32 act; 1182 1174 int err; 1183 1175 1176 + pkt_len = *len; 1184 1177 prog = READ_ONCE(port->xdp_prog); 1185 1178 if (!prog) 1186 1179 return AM65_CPSW_XDP_PASS; ··· 1200 1189 netif_txq = netdev_get_tx_queue(ndev, tx_chn->id); 1201 1190 1202 1191 xdpf = xdp_convert_buff_to_frame(xdp); 1203 - if (unlikely(!xdpf)) 1192 + if (unlikely(!xdpf)) { 1193 + ndev->stats.tx_dropped++; 1204 1194 goto drop; 1195 + } 1205 1196 1206 1197 __netif_tx_lock(netif_txq, cpu); 1207 1198 err = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf, ··· 1212 1199 if (err) 1213 1200 goto drop; 1214 1201 1215 - dev_sw_netstats_tx_add(ndev, 1, *len); 1202 + dev_sw_netstats_rx_add(ndev, pkt_len); 1216 1203 ret = AM65_CPSW_XDP_CONSUMED; 1217 1204 goto out; 1218 1205 case XDP_REDIRECT: 1219 1206 if (unlikely(xdp_do_redirect(ndev, xdp, prog))) 1220 1207 goto drop; 1221 1208 1222 - dev_sw_netstats_rx_add(ndev, *len); 1209 + dev_sw_netstats_rx_add(ndev, pkt_len); 1223 1210 ret = AM65_CPSW_XDP_REDIRECT; 1224 1211 goto out; 1225 1212 default: ··· 1328 1315 dev_dbg(dev, "%s rx csum_info:%#x\n", __func__, csum_info); 1329 1316 1330 1317 dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE); 1331 1318 k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx); 1332 1319 1341 1320 if (port->xdp_prog) { 1342 1321 xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq[flow->id]); ··· 1339 1334 if (*xdp_state != AM65_CPSW_XDP_PASS) 1340 1335 goto allocate; 1341 1336 1342 - /* Compute additional headroom to be reserved */ 1343 - headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb); 1344 - skb_reserve(skb, headroom); 1337 + headroom = xdp.data - xdp.data_hard_start; 1338 + } else { 1339 + headroom = AM65_CPSW_HEADROOM; 1340 + } 1341 + 1342 + skb = am65_cpsw_build_skb(page_addr, ndev, 1343 + AM65_CPSW_MAX_PACKET_SIZE, headroom); 1344 + if (unlikely(!skb)) { 1345 + new_page = page; 1346 + goto requeue; 1345 1347 } 1346 1348 1347 1349 ndev_priv = netdev_priv(ndev);
+9 -6
drivers/net/phy/phylink.c
··· 2265 2265 /* Allow the MAC to stop its clock if the PHY has the capability */ 2266 2266 pl->mac_tx_clk_stop = phy_eee_tx_clock_stop_capable(phy) > 0; 2267 2267 2268 - /* Explicitly configure whether the PHY is allowed to stop it's 2269 - * receive clock. 2270 - */ 2271 - ret = phy_eee_rx_clock_stop(phy, pl->config->eee_rx_clk_stop_enable); 2272 - if (ret == -EOPNOTSUPP) 2273 - ret = 0; 2268 + if (pl->mac_supports_eee_ops) { 2269 + /* Explicitly configure whether the PHY is allowed to stop it's 2270 + * receive clock. 2271 + */ 2272 + ret = phy_eee_rx_clock_stop(phy, 2273 + pl->config->eee_rx_clk_stop_enable); 2274 + if (ret == -EOPNOTSUPP) 2275 + ret = 0; 2276 + } 2274 2277 2275 2278 return ret; 2276 2279 }
+2 -2
drivers/net/pse-pd/pse_core.c
··· 319 319 goto out; 320 320 mW = ret; 321 321 322 - ret = pse_pi_get_voltage(rdev); 322 + ret = _pse_pi_get_voltage(rdev); 323 323 if (!ret) { 324 324 dev_err(pcdev->dev, "Voltage null\n"); 325 325 ret = -ERANGE; ··· 356 356 357 357 id = rdev_get_id(rdev); 358 358 mutex_lock(&pcdev->lock); 359 - ret = pse_pi_get_voltage(rdev); 359 + ret = _pse_pi_get_voltage(rdev); 360 360 if (!ret) { 361 361 dev_err(pcdev->dev, "Voltage null\n"); 362 362 ret = -ERANGE;
+3 -1
drivers/net/team/team_core.c
··· 2639 2639 ctx.data.u32_val = nla_get_u32(attr_data); 2640 2640 break; 2641 2641 case TEAM_OPTION_TYPE_STRING: 2642 - if (nla_len(attr_data) > TEAM_STRING_MAX_LEN) { 2642 + if (nla_len(attr_data) > TEAM_STRING_MAX_LEN || 2643 + !memchr(nla_data(attr_data), '\0', 2644 + nla_len(attr_data))) { 2643 2645 err = -EINVAL; 2644 2646 goto team_put; 2645 2647 }
+5 -2
drivers/net/vxlan/vxlan_core.c
··· 2898 2898 struct vxlan_dev *vxlan = netdev_priv(dev); 2899 2899 int err; 2900 2900 2901 - if (vxlan->cfg.flags & VXLAN_F_VNIFILTER) 2902 - vxlan_vnigroup_init(vxlan); 2901 + if (vxlan->cfg.flags & VXLAN_F_VNIFILTER) { 2902 + err = vxlan_vnigroup_init(vxlan); 2903 + if (err) 2904 + return err; 2905 + } 2903 2906 2904 2907 err = gro_cells_init(&vxlan->gro_cells, dev); 2905 2908 if (err)
+45 -16
drivers/net/wireless/ath/ath12k/wmi.c
··· 4851 4851 return reg_rule_ptr; 4852 4852 } 4853 4853 4854 + static u8 ath12k_wmi_ignore_num_extra_rules(struct ath12k_wmi_reg_rule_ext_params *rule, 4855 + u32 num_reg_rules) 4856 + { 4857 + u8 num_invalid_5ghz_rules = 0; 4858 + u32 count, start_freq; 4859 + 4860 + for (count = 0; count < num_reg_rules; count++) { 4861 + start_freq = le32_get_bits(rule[count].freq_info, REG_RULE_START_FREQ); 4862 + 4863 + if (start_freq >= ATH12K_MIN_6G_FREQ) 4864 + num_invalid_5ghz_rules++; 4865 + } 4866 + 4867 + return num_invalid_5ghz_rules; 4868 + } 4869 + 4854 4870 static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab, 4855 4871 struct sk_buff *skb, 4856 4872 struct ath12k_reg_info *reg_info) ··· 4877 4861 u32 num_2g_reg_rules, num_5g_reg_rules; 4878 4862 u32 num_6g_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE]; 4879 4863 u32 num_6g_reg_rules_cl[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE]; 4864 + u8 num_invalid_5ghz_ext_rules; 4880 4865 u32 total_reg_rules = 0; 4881 4866 int ret, i, j; 4882 4867 ··· 4970 4953 } 4971 4954 4972 4955 memcpy(reg_info->alpha2, &ev->alpha2, REG_ALPHA2_LEN); 4973 - 4974 - /* FIXME: Currently FW includes 6G reg rule also in 5G rule 4975 - * list for country US. 4976 - * Having same 6G reg rule in 5G and 6G rules list causes 4977 - * intersect check to be true, and same rules will be shown 4978 - * multiple times in iw cmd. So added hack below to avoid 4979 - * parsing 6G rule from 5G reg rule list, and this can be 4980 - * removed later, after FW updates to remove 6G reg rule 4981 - * from 5G rules list. 4982 - */ 4983 - if (memcmp(reg_info->alpha2, "US", 2) == 0) { 4984 - reg_info->num_5g_reg_rules = REG_US_5G_NUM_REG_RULES; 4985 - num_5g_reg_rules = reg_info->num_5g_reg_rules; 4986 - } 4987 4956 4988 4957 reg_info->dfs_region = le32_to_cpu(ev->dfs_region); 4989 4958 reg_info->phybitmap = le32_to_cpu(ev->phybitmap); ··· 5073 5070 } 5074 5071 } 5075 5072 5073 + ext_wmi_reg_rule += num_2g_reg_rules; 5074 + 5075 + /* Firmware might include 6 GHz reg rule in 5 GHz rule list 5076 + * for few countries along with separate 6 GHz rule. 5077 + * Having same 6 GHz reg rule in 5 GHz and 6 GHz rules list 5078 + * causes intersect check to be true, and same rules will be 5079 + * shown multiple times in iw cmd. 5080 + * Hence, avoid parsing 6 GHz rule from 5 GHz reg rule list 5081 + */ 5082 + num_invalid_5ghz_ext_rules = ath12k_wmi_ignore_num_extra_rules(ext_wmi_reg_rule, 5083 + num_5g_reg_rules); 5084 + 5085 + if (num_invalid_5ghz_ext_rules) { 5086 + ath12k_dbg(ab, ATH12K_DBG_WMI, 5087 + "CC: %s 5 GHz reg rules number %d from fw, %d number of invalid 5 GHz rules", 5088 + reg_info->alpha2, reg_info->num_5g_reg_rules, 5089 + num_invalid_5ghz_ext_rules); 5090 + 5091 + num_5g_reg_rules = num_5g_reg_rules - num_invalid_5ghz_ext_rules; 5092 + reg_info->num_5g_reg_rules = num_5g_reg_rules; 5093 + } 5094 + 5076 5095 if (num_5g_reg_rules) { 5077 - ext_wmi_reg_rule += num_2g_reg_rules; 5078 5096 reg_info->reg_rules_5g_ptr = 5079 5097 create_ext_reg_rules_from_wmi(num_5g_reg_rules, 5080 5098 ext_wmi_reg_rule); ··· 5107 5083 } 5108 5084 } 5109 5085 5110 - ext_wmi_reg_rule += num_5g_reg_rules; 5086 + /* We have adjusted the number of 5 GHz reg rules above. But still those 5087 + * many rules needs to be adjusted in ext_wmi_reg_rule. 5088 + * 5089 + * NOTE: num_invalid_5ghz_ext_rules will be 0 for rest other cases. 5090 + */ 5091 + ext_wmi_reg_rule += (num_5g_reg_rules + num_invalid_5ghz_ext_rules); 5111 5092 5112 5093 for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) { 5113 5094 reg_info->reg_rules_6g_ap_ptr[i] =
-1
drivers/net/wireless/ath/ath12k/wmi.h
··· 4073 4073 #define MAX_REG_RULES 10 4074 4074 #define REG_ALPHA2_LEN 2 4075 4075 #define MAX_6G_REG_RULES 5 4076 - #define REG_US_5G_NUM_REG_RULES 4 4077 4076 4078 4077 enum wmi_start_event_param { 4079 4078 WMI_VDEV_START_RESP_EVENT = 0,
+2 -2
drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
··· 2712 2712 BRCMF_PCIE_DEVICE(BRCM_PCIE_4350_DEVICE_ID, WCC), 2713 2713 BRCMF_PCIE_DEVICE_SUB(0x4355, BRCM_PCIE_VENDOR_ID_BROADCOM, 0x4355, WCC), 2714 2714 BRCMF_PCIE_DEVICE(BRCM_PCIE_4354_RAW_DEVICE_ID, WCC), 2715 - BRCMF_PCIE_DEVICE(BRCM_PCIE_4355_DEVICE_ID, WCC), 2715 + BRCMF_PCIE_DEVICE(BRCM_PCIE_4355_DEVICE_ID, WCC_SEED), 2716 2716 BRCMF_PCIE_DEVICE(BRCM_PCIE_4356_DEVICE_ID, WCC), 2717 2717 BRCMF_PCIE_DEVICE(BRCM_PCIE_43567_DEVICE_ID, WCC), 2718 2718 BRCMF_PCIE_DEVICE(BRCM_PCIE_43570_DEVICE_ID, WCC), ··· 2723 2723 BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_2G_DEVICE_ID, WCC), 2724 2724 BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_5G_DEVICE_ID, WCC), 2725 2725 BRCMF_PCIE_DEVICE(BRCM_PCIE_43602_RAW_DEVICE_ID, WCC), 2726 - BRCMF_PCIE_DEVICE(BRCM_PCIE_4364_DEVICE_ID, WCC), 2726 + BRCMF_PCIE_DEVICE(BRCM_PCIE_4364_DEVICE_ID, WCC_SEED), 2727 2727 BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_DEVICE_ID, BCA), 2728 2728 BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_2G_DEVICE_ID, BCA), 2729 2729 BRCMF_PCIE_DEVICE(BRCM_PCIE_4365_5G_DEVICE_ID, BCA),
+21 -26
drivers/ptp/ptp_vmclock.c
··· 414 414 } 415 415 416 416 static const struct file_operations vmclock_miscdev_fops = { 417 + .owner = THIS_MODULE, 417 418 .mmap = vmclock_miscdev_mmap, 418 419 .read = vmclock_miscdev_read, 419 420 }; 420 421 421 422 /* module operations */ 422 423 423 - static void vmclock_remove(struct platform_device *pdev) 424 + static void vmclock_remove(void *data) 424 425 { 425 - struct device *dev = &pdev->dev; 426 - struct vmclock_state *st = dev_get_drvdata(dev); 426 + struct vmclock_state *st = data; 427 427 428 428 if (st->ptp_clock) 429 429 ptp_clock_unregister(st->ptp_clock); ··· 506 506 507 507 if (ret) { 508 508 dev_info(dev, "Failed to obtain physical address: %d\n", ret); 509 - goto out; 509 + return ret; 510 510 } 511 511 512 512 if (resource_size(&st->res) < VMCLOCK_MIN_SIZE) { 513 513 dev_info(dev, "Region too small (0x%llx)\n", 514 514 resource_size(&st->res)); 515 - ret = -EINVAL; 516 - goto out; 515 + return -EINVAL; 517 516 } 518 517 st->clk = devm_memremap(dev, st->res.start, resource_size(&st->res), 519 518 MEMREMAP_WB | MEMREMAP_DEC); ··· 520 521 ret = PTR_ERR(st->clk); 521 522 dev_info(dev, "failed to map shared memory\n"); 522 523 st->clk = NULL; 523 - goto out; 524 + return ret; 524 525 } 525 526 526 527 if (le32_to_cpu(st->clk->magic) != VMCLOCK_MAGIC || 527 528 le32_to_cpu(st->clk->size) > resource_size(&st->res) || 528 529 le16_to_cpu(st->clk->version) != 1) { 529 530 dev_info(dev, "vmclock magic fields invalid\n"); 530 - ret = -EINVAL; 531 - goto out; 531 + return -EINVAL; 532 532 } 533 533 534 534 ret = ida_alloc(&vmclock_ida, GFP_KERNEL); 535 535 if (ret < 0) 536 - goto out; 536 + return ret; 537 537 538 538 st->index = ret; 539 539 ret = devm_add_action_or_reset(&pdev->dev, vmclock_put_idx, st); 540 540 if (ret) 541 - goto out; 541 + return ret; 542 542 543 543 st->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "vmclock%d", st->index); 544 - if (!st->name) { 545 - ret = -ENOMEM; 546 - goto out; 547 - } 544 + if (!st->name) 545 + return -ENOMEM; 546 + 547 + st->miscdev.minor = MISC_DYNAMIC_MINOR; 548 + 549 + ret = devm_add_action_or_reset(&pdev->dev, vmclock_remove, st); 550 + if (ret) 551 + return ret; 548 552 549 553 /* 550 554 * If the structure is big enough, it can be mapped to userspace. ··· 556 554 * cross that bridge if/when we come to it. 557 555 */ 558 556 if (le32_to_cpu(st->clk->size) >= PAGE_SIZE) { 559 - st->miscdev.minor = MISC_DYNAMIC_MINOR; 560 557 st->miscdev.fops = &vmclock_miscdev_fops; 561 558 st->miscdev.name = st->name; 562 559 563 560 ret = misc_register(&st->miscdev); 564 561 if (ret) 565 - goto out; 562 + return ret; 566 563 } 567 564 568 565 /* If there is valid clock information, register a PTP clock */ ··· 571 570 if (IS_ERR(st->ptp_clock)) { 572 571 ret = PTR_ERR(st->ptp_clock); 573 572 st->ptp_clock = NULL; 574 - vmclock_remove(pdev); 575 - goto out; 573 + return ret; 576 574 } 577 575 } 578 576 579 577 if (!st->miscdev.minor && !st->ptp_clock) { 580 578 /* Neither miscdev nor PTP registered */ 581 579 dev_info(dev, "vmclock: Neither miscdev nor PTP available; not registering\n"); 582 - ret = -ENODEV; 583 - goto out; 580 + return -ENODEV; 584 581 } 585 582 586 583 dev_info(dev, "%s: registered %s%s%s\n", st->name, ··· 586 587 (st->miscdev.minor && st->ptp_clock) ? ", " : "", 587 588 st->ptp_clock ? "PTP" : ""); 588 589 589 - dev_set_drvdata(dev, st); 590 - 591 - out: 592 - return ret; 590 + return 0; 593 591 } 594 592 595 593 static const struct acpi_device_id vmclock_acpi_ids[] = { ··· 597 601 598 602 static struct platform_driver vmclock_platform_driver = { 599 603 .probe = vmclock_probe, 600 - .remove = vmclock_remove, 601 604 .driver = { 602 605 .name = "vmclock", 603 606 .acpi_match_table = vmclock_acpi_ids,
+5 -3
drivers/s390/net/qeth_core_main.c
··· 7050 7050 card->data.state = CH_STATE_UP; 7051 7051 netif_tx_start_all_queues(dev); 7052 7052 7053 - local_bh_disable(); 7054 7053 qeth_for_each_output_queue(card, queue, i) { 7055 7054 netif_napi_add_tx(dev, &queue->napi, qeth_tx_poll); 7056 7055 napi_enable(&queue->napi); 7056 + } 7057 + napi_enable(&card->napi); 7058 + 7059 + local_bh_disable(); 7060 + qeth_for_each_output_queue(card, queue, i) { 7057 7061 napi_schedule(&queue->napi); 7058 7062 } 7059 - 7060 - napi_enable(&card->napi); 7061 7063 napi_schedule(&card->napi); 7062 7064 /* kick-start the NAPI softirq: */ 7063 7065 local_bh_enable();
+6
include/linux/netdevice.h
··· 2664 2664 } 2665 2665 2666 2666 static inline 2667 + struct net *dev_net_rcu(const struct net_device *dev) 2668 + { 2669 + return read_pnet_rcu(&dev->nd_net); 2670 + } 2671 + 2672 + static inline 2667 2673 void dev_net_set(struct net_device *dev, struct net *net) 2668 2674 { 2669 2675 write_pnet(&dev->nd_net, net);
+2 -1
include/net/bluetooth/l2cap.h
··· 668 668 struct l2cap_chan *smp; 669 669 670 670 struct list_head chan_l; 671 - struct mutex chan_lock; 671 + struct mutex lock; 672 672 struct kref ref; 673 673 struct list_head users; 674 674 }; ··· 970 970 void l2cap_send_conn_req(struct l2cap_chan *chan); 971 971 972 972 struct l2cap_conn *l2cap_conn_get(struct l2cap_conn *conn); 973 + struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *conn); 973 974 void l2cap_conn_put(struct l2cap_conn *conn); 974 975 975 976 int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user);
+10 -3
include/net/ip.h
··· 471 471 bool forwarding) 472 472 { 473 473 const struct rtable *rt = dst_rtable(dst); 474 - struct net *net = dev_net(dst->dev); 475 - unsigned int mtu; 474 + unsigned int mtu, res; 475 + struct net *net; 476 476 477 + rcu_read_lock(); 478 + 479 + net = dev_net_rcu(dst->dev); 477 480 if (READ_ONCE(net->ipv4.sysctl_ip_fwd_use_pmtu) || 478 481 ip_mtu_locked(dst) || 479 482 !forwarding) { ··· 500 497 out: 501 498 mtu = min_t(unsigned int, mtu, IP_MAX_MTU); 502 499 503 - return mtu - lwtunnel_headroom(dst->lwtstate, mtu); 500 + res = mtu - lwtunnel_headroom(dst->lwtstate, mtu); 501 + 502 + rcu_read_unlock(); 503 + 504 + return res; 504 505 } 505 506 506 507 static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
+2
include/net/l3mdev.h
··· 198 198 if (netif_is_l3_slave(dev)) { 199 199 struct net_device *master; 200 200 201 + rcu_read_lock(); 201 202 master = netdev_master_upper_dev_get_rcu(dev); 202 203 if (master && master->l3mdev_ops->l3mdev_l3_out) 203 204 skb = master->l3mdev_ops->l3mdev_l3_out(master, sk, 204 205 skb, proto); 206 + rcu_read_unlock(); 205 207 } 206 208 207 209 return skb;
include/net/net_namespace.h (+1 -1)
··· 398 398 #endif 399 399 } 400 400 401 - static inline struct net *read_pnet_rcu(possible_net_t *pnet) 401 + static inline struct net *read_pnet_rcu(const possible_net_t *pnet) 402 402 { 403 403 #ifdef CONFIG_NET_NS 404 404 return rcu_dereference(pnet->net);
include/net/route.h (+7 -2)
··· 382 382 static inline int ip4_dst_hoplimit(const struct dst_entry *dst) 383 383 { 384 384 int hoplimit = dst_metric_raw(dst, RTAX_HOPLIMIT); 385 - struct net *net = dev_net(dst->dev); 386 385 387 - if (hoplimit == 0) 386 + if (hoplimit == 0) { 387 + const struct net *net; 388 + 389 + rcu_read_lock(); 390 + net = dev_net_rcu(dst->dev); 388 391 hoplimit = READ_ONCE(net->ipv4.sysctl_ip_default_ttl); 392 + rcu_read_unlock(); 393 + } 389 394 return hoplimit; 390 395 } 391 396
include/uapi/linux/ethtool.h (+2)
··· 682 682 * @ETH_SS_STATS_ETH_CTRL: names of IEEE 802.3 MAC Control statistics 683 683 * @ETH_SS_STATS_RMON: names of RMON statistics 684 684 * @ETH_SS_STATS_PHY: names of PHY(dev) statistics 685 + * @ETH_SS_TS_FLAGS: hardware timestamping flags 685 686 * 686 687 * @ETH_SS_COUNT: number of defined string sets 687 688 */ ··· 709 708 ETH_SS_STATS_ETH_CTRL, 710 709 ETH_SS_STATS_RMON, 711 710 ETH_SS_STATS_PHY, 711 + ETH_SS_TS_FLAGS, 712 712 713 713 /* add new constants above here */ 714 714 ETH_SS_COUNT
net/ax25/af_ax25.c (+11)
··· 685 685 break; 686 686 } 687 687 688 + if (ax25->ax25_dev) { 689 + if (dev == ax25->ax25_dev->dev) { 690 + rcu_read_unlock(); 691 + break; 692 + } 693 + netdev_put(ax25->ax25_dev->dev, &ax25->dev_tracker); 694 + ax25_dev_put(ax25->ax25_dev); 695 + } 696 + 688 697 ax25->ax25_dev = ax25_dev_ax25dev(dev); 689 698 if (!ax25->ax25_dev) { 690 699 rcu_read_unlock(); ··· 701 692 break; 702 693 } 703 694 ax25_fillin_cb(ax25, ax25->ax25_dev); 695 + netdev_hold(dev, &ax25->dev_tracker, GFP_ATOMIC); 696 + ax25_dev_hold(ax25->ax25_dev); 704 697 rcu_read_unlock(); 705 698 break; 706 699
net/batman-adv/bat_v.c (-2)
··· 113 113 batadv_v_hardif_neigh_init(struct batadv_hardif_neigh_node *hardif_neigh) 114 114 { 115 115 ewma_throughput_init(&hardif_neigh->bat_v.throughput); 116 - INIT_WORK(&hardif_neigh->bat_v.metric_work, 117 - batadv_v_elp_throughput_metric_update); 118 116 } 119 117 120 118 /**
net/batman-adv/bat_v_elp.c (+86 -36)
··· 18 18 #include <linux/if_ether.h> 19 19 #include <linux/jiffies.h> 20 20 #include <linux/kref.h> 21 + #include <linux/list.h> 21 22 #include <linux/minmax.h> 22 23 #include <linux/netdevice.h> 23 24 #include <linux/nl80211.h> ··· 27 26 #include <linux/rcupdate.h> 28 27 #include <linux/rtnetlink.h> 29 28 #include <linux/skbuff.h> 29 + #include <linux/slab.h> 30 30 #include <linux/stddef.h> 31 31 #include <linux/string.h> 32 32 #include <linux/types.h> ··· 42 40 #include "originator.h" 43 41 #include "routing.h" 44 42 #include "send.h" 43 + 44 + /** 45 + * struct batadv_v_metric_queue_entry - list of hardif neighbors which require 46 + * a metric update 47 + */ 48 + struct batadv_v_metric_queue_entry { 49 + /** @hardif_neigh: hardif neighbor scheduled for metric update */ 50 + struct batadv_hardif_neigh_node *hardif_neigh; 51 + 52 + /** @list: list node for metric_queue */ 53 + struct list_head list; 54 + }; 45 55 46 56 /** 47 57 * batadv_v_elp_start_timer() - restart timer for ELP periodic work ··· 73 59 /** 74 60 * batadv_v_elp_get_throughput() - get the throughput towards a neighbour 75 61 * @neigh: the neighbour for which the throughput has to be obtained 62 + * @pthroughput: calculated throughput towards the given neighbour in multiples 63 + * of 100kbps (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc). 76 64 * 77 - * Return: The throughput towards the given neighbour in multiples of 100kbps 78 - * (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc). 
65 + * Return: true when value behind @pthroughput was set 79 66 */ 80 - static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh) 67 + static bool batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh, 68 + u32 *pthroughput) 81 69 { 82 70 struct batadv_hard_iface *hard_iface = neigh->if_incoming; 71 + struct net_device *soft_iface = hard_iface->soft_iface; 83 72 struct ethtool_link_ksettings link_settings; 84 73 struct net_device *real_netdev; 85 74 struct station_info sinfo; 86 75 u32 throughput; 87 76 int ret; 88 77 78 + /* don't query throughput when no longer associated with any 79 + * batman-adv interface 80 + */ 81 + if (!soft_iface) 82 + return false; 83 + 89 84 /* if the user specified a customised value for this interface, then 90 85 * return it directly 91 86 */ 92 87 throughput = atomic_read(&hard_iface->bat_v.throughput_override); 93 - if (throughput != 0) 94 - return throughput; 88 + if (throughput != 0) { 89 + *pthroughput = throughput; 90 + return true; 91 + } 95 92 96 93 /* if this is a wireless device, then ask its throughput through 97 94 * cfg80211 API ··· 129 104 * possible to delete this neighbor. For now set 130 105 * the throughput metric to 0. 
131 106 */ 132 - return 0; 107 + *pthroughput = 0; 108 + return true; 133 109 } 134 110 if (ret) 135 111 goto default_throughput; 136 112 137 - if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)) 138 - return sinfo.expected_throughput / 100; 113 + if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)) { 114 + *pthroughput = sinfo.expected_throughput / 100; 115 + return true; 116 + } 139 117 140 118 /* try to estimate the expected throughput based on reported tx 141 119 * rates 142 120 */ 143 - if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE)) 144 - return cfg80211_calculate_bitrate(&sinfo.txrate) / 3; 121 + if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE)) { 122 + *pthroughput = cfg80211_calculate_bitrate(&sinfo.txrate) / 3; 123 + return true; 124 + } 145 125 146 126 goto default_throughput; 147 127 } 148 128 129 + /* only use rtnl_trylock because the elp worker will be cancelled while 130 + * the rtnl_lock is held. the cancel_delayed_work_sync() would otherwise 131 + * wait forever when the elp work_item was started and it is then also 132 + * trying to rtnl_lock 133 + */ 134 + if (!rtnl_trylock()) 135 + return false; 136 + 149 137 /* if not a wifi interface, check if this device provides data via 150 138 /* ethtool (e.g. 
an Ethernet adapter) 151 139 */ 152 - rtnl_lock(); 153 140 ret = __ethtool_get_link_ksettings(hard_iface->net_dev, &link_settings); 154 141 rtnl_unlock(); 155 142 if (ret == 0) { ··· 172 135 hard_iface->bat_v.flags &= ~BATADV_FULL_DUPLEX; 173 136 174 137 throughput = link_settings.base.speed; 175 - if (throughput && throughput != SPEED_UNKNOWN) 176 - return throughput * 10; 138 + if (throughput && throughput != SPEED_UNKNOWN) { 139 + *pthroughput = throughput * 10; 140 + return true; 141 + } 177 142 } 178 143 179 144 default_throughput: 180 145 if (!(hard_iface->bat_v.flags & BATADV_WARNING_DEFAULT)) { 181 - batadv_info(hard_iface->soft_iface, 146 + batadv_info(soft_iface, 182 147 "WiFi driver or ethtool info does not provide information about link speeds on interface %s, therefore defaulting to hardcoded throughput values of %u.%1u Mbps. Consider overriding the throughput manually or checking your driver.\n", 183 148 hard_iface->net_dev->name, 184 149 BATADV_THROUGHPUT_DEFAULT_VALUE / 10, ··· 189 150 } 190 151 191 152 /* if none of the above cases apply, return the base_throughput */ 192 - return BATADV_THROUGHPUT_DEFAULT_VALUE; 153 + *pthroughput = BATADV_THROUGHPUT_DEFAULT_VALUE; 154 + return true; 193 155 } 194 156 195 157 /** 196 158 * batadv_v_elp_throughput_metric_update() - worker updating the throughput 197 159 * metric of a single hop neighbour 198 - * @work: the work queue item 160 + * @neigh: the neighbour to probe 199 161 */ 200 - void batadv_v_elp_throughput_metric_update(struct work_struct *work) 162 + static void 163 + batadv_v_elp_throughput_metric_update(struct batadv_hardif_neigh_node *neigh) 201 164 { 202 - struct batadv_hardif_neigh_node_bat_v *neigh_bat_v; 203 - struct batadv_hardif_neigh_node *neigh; 165 + u32 throughput; 166 + bool valid; 204 167 205 - neigh_bat_v = container_of(work, struct batadv_hardif_neigh_node_bat_v, 206 - metric_work); 207 - neigh = container_of(neigh_bat_v, struct batadv_hardif_neigh_node, 208 - bat_v); 168 + valid = 
batadv_v_elp_get_throughput(neigh, &throughput); 169 + if (!valid) 170 + return; 209 171 210 - ewma_throughput_add(&neigh->bat_v.throughput, 211 - batadv_v_elp_get_throughput(neigh)); 212 - 213 - /* decrement refcounter to balance increment performed before scheduling 214 - * this task 215 - */ 216 - batadv_hardif_neigh_put(neigh); 172 + ewma_throughput_add(&neigh->bat_v.throughput, throughput); 217 173 } 218 174 219 175 /** ··· 282 248 */ 283 249 static void batadv_v_elp_periodic_work(struct work_struct *work) 284 250 { 251 + struct batadv_v_metric_queue_entry *metric_entry; 252 + struct batadv_v_metric_queue_entry *metric_safe; 285 253 struct batadv_hardif_neigh_node *hardif_neigh; 286 254 struct batadv_hard_iface *hard_iface; 287 255 struct batadv_hard_iface_bat_v *bat_v; 288 256 struct batadv_elp_packet *elp_packet; 257 + struct list_head metric_queue; 289 258 struct batadv_priv *bat_priv; 290 259 struct sk_buff *skb; 291 260 u32 elp_interval; 292 - bool ret; 293 261 294 262 bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work); 295 263 hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v); ··· 327 291 328 292 atomic_inc(&hard_iface->bat_v.elp_seqno); 329 293 294 + INIT_LIST_HEAD(&metric_queue); 295 + 330 296 /* The throughput metric is updated on each sent packet. This way, if a 331 297 * node is dead and no longer sends packets, batman-adv is still able to 332 298 * react timely to its death. ··· 353 315 354 316 /* Reading the estimated throughput from cfg80211 is a task that 355 317 * may sleep and that is not allowed in an rcu protected 356 - * context. Therefore schedule a task for that. 318 + * context. Therefore add it to metric_queue and process it 319 + * outside rcu protected context. 
357 320 */ 358 - ret = queue_work(batadv_event_workqueue, 359 - &hardif_neigh->bat_v.metric_work); 360 - 361 - if (!ret) 321 + metric_entry = kzalloc(sizeof(*metric_entry), GFP_ATOMIC); 322 + if (!metric_entry) { 362 323 batadv_hardif_neigh_put(hardif_neigh); 324 + continue; 325 + } 326 + 327 + metric_entry->hardif_neigh = hardif_neigh; 328 + list_add(&metric_entry->list, &metric_queue); 363 329 } 364 330 rcu_read_unlock(); 331 + 332 + list_for_each_entry_safe(metric_entry, metric_safe, &metric_queue, list) { 333 + batadv_v_elp_throughput_metric_update(metric_entry->hardif_neigh); 334 + 335 + batadv_hardif_neigh_put(metric_entry->hardif_neigh); 336 + list_del(&metric_entry->list); 337 + kfree(metric_entry); 338 + } 365 339 366 340 restart_timer: 367 341 batadv_v_elp_start_timer(hard_iface);
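The ELP rework above replaces a per-neighbour worker with a queue: under the RCU-protected iteration, neighbours are only appended to a local list, and the (possibly sleeping) metric update runs after `rcu_read_unlock()`. A userspace model of that collect-then-process shape (names are illustrative, not the kernel API):

```c
#include <stdlib.h>

/* Stand-in for batadv_v_metric_queue_entry on a local singly linked list. */
struct entry { int neigh_id; struct entry *next; };

/* Called inside the non-sleeping section: only queues, never sleeps. */
static struct entry *queue_neigh(struct entry *head, int id)
{
	struct entry *e = malloc(sizeof(*e));

	if (!e)
		return head;	/* ~ the GFP_ATOMIC failure path: skip entry */
	e->neigh_id = id;
	e->next = head;
	return e;
}

/* Called after the section is left: may do slow work, frees as it goes. */
static int process_queue(struct entry *head)
{
	int n = 0;

	while (head) {
		struct entry *next = head->next;

		n++;		/* ~ throughput metric update + neigh put */
		free(head);
		head = next;
	}
	return n;
}
```

This is the same reason the real hunk replaces `queue_work()` with `kzalloc(..., GFP_ATOMIC)` plus `list_add()`: nothing inside the RCU section is allowed to block.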
net/batman-adv/bat_v_elp.h (-2)
··· 10 10 #include "main.h" 11 11 12 12 #include <linux/skbuff.h> 13 - #include <linux/workqueue.h> 14 13 15 14 int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface); 16 15 void batadv_v_elp_iface_disable(struct batadv_hard_iface *hard_iface); ··· 18 19 void batadv_v_elp_primary_iface_set(struct batadv_hard_iface *primary_iface); 19 20 int batadv_v_elp_packet_recv(struct sk_buff *skb, 20 21 struct batadv_hard_iface *if_incoming); 21 - void batadv_v_elp_throughput_metric_update(struct work_struct *work); 22 22 23 23 #endif /* _NET_BATMAN_ADV_BAT_V_ELP_H_ */
net/batman-adv/translation-table.c (+5 -7)
··· 3937 3937 struct batadv_tvlv_tt_change *tt_change; 3938 3938 struct batadv_tvlv_tt_data *tt_data; 3939 3939 u16 num_entries, num_vlan; 3940 - size_t flex_size; 3940 + size_t tt_data_sz; 3941 3941 3942 3942 if (tvlv_value_len < sizeof(*tt_data)) 3943 3943 return; 3944 3944 3945 3945 tt_data = tvlv_value; 3946 - tvlv_value_len -= sizeof(*tt_data); 3947 - 3948 3946 num_vlan = ntohs(tt_data->num_vlan); 3949 3947 3950 - flex_size = flex_array_size(tt_data, vlan_data, num_vlan); 3951 - if (tvlv_value_len < flex_size) 3948 + tt_data_sz = struct_size(tt_data, vlan_data, num_vlan); 3949 + if (tvlv_value_len < tt_data_sz) 3952 3950 return; 3953 3951 3954 3952 tt_change = (struct batadv_tvlv_tt_change *)((void *)tt_data 3955 - + flex_size); 3956 - tvlv_value_len -= flex_size; 3953 + + tt_data_sz); 3954 + tvlv_value_len -= tt_data_sz; 3957 3955 3958 3956 num_entries = batadv_tt_entries(tvlv_value_len); 3959 3957
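The translation-table fix swaps `flex_array_size()` (array part only) for `struct_size()` (header plus array part) when validating `tvlv_value_len`. A userspace model with a hypothetical flexible-array struct shows the difference the bounds check now accounts for:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical miniature of a TVLV header with a flexible array. */
struct tt_vlan { uint16_t vid; uint16_t pad; };
struct tt_data {
	uint8_t flags;
	uint8_t ttvn;
	uint16_t num_vlan;
	struct tt_vlan vlan_data[];
};

/* ~ flex_array_size(tt_data, vlan_data, n): the members alone. */
#define flex_array_size_model(n) ((size_t)(n) * sizeof(struct tt_vlan))

/* ~ struct_size(tt_data, vlan_data, n): header plus the members. */
#define struct_size_model(n) \
	(offsetof(struct tt_data, vlan_data) + (size_t)(n) * sizeof(struct tt_vlan))
```

Checking the buffer length against only `flex_array_size()` under-counts by the fixed header, which is exactly what the hunk corrects (the kernel helpers also saturate on overflow, which this sketch omits).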
net/batman-adv/types.h (-3)
··· 596 596 * neighbor 597 597 */ 598 598 unsigned long last_unicast_tx; 599 - 600 - /** @metric_work: work queue callback item for metric update */ 601 - struct work_struct metric_work; 602 599 }; 603 600 604 601 /**
net/bluetooth/l2cap_core.c (+79 -90)
··· 119 119 { 120 120 struct l2cap_chan *c; 121 121 122 - mutex_lock(&conn->chan_lock); 123 122 c = __l2cap_get_chan_by_scid(conn, cid); 124 123 if (c) { 125 124 /* Only lock if chan reference is not 0 */ ··· 126 127 if (c) 127 128 l2cap_chan_lock(c); 128 129 } 129 - mutex_unlock(&conn->chan_lock); 130 130 131 131 return c; 132 132 } ··· 138 140 { 139 141 struct l2cap_chan *c; 140 142 141 - mutex_lock(&conn->chan_lock); 142 143 c = __l2cap_get_chan_by_dcid(conn, cid); 143 144 if (c) { 144 145 /* Only lock if chan reference is not 0 */ ··· 145 148 if (c) 146 149 l2cap_chan_lock(c); 147 150 } 148 - mutex_unlock(&conn->chan_lock); 149 151 150 152 return c; 151 153 } ··· 414 418 if (!conn) 415 419 return; 416 420 417 - mutex_lock(&conn->chan_lock); 421 + mutex_lock(&conn->lock); 418 422 /* __set_chan_timer() calls l2cap_chan_hold(chan) while scheduling 419 423 * this work. No need to call l2cap_chan_hold(chan) here again. 420 424 */ ··· 435 439 l2cap_chan_unlock(chan); 436 440 l2cap_chan_put(chan); 437 441 438 - mutex_unlock(&conn->chan_lock); 442 + mutex_unlock(&conn->lock); 439 443 } 440 444 441 445 struct l2cap_chan *l2cap_chan_create(void) ··· 637 641 638 642 void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan) 639 643 { 640 - mutex_lock(&conn->chan_lock); 644 + mutex_lock(&conn->lock); 641 645 __l2cap_chan_add(conn, chan); 642 - mutex_unlock(&conn->chan_lock); 646 + mutex_unlock(&conn->lock); 643 647 } 644 648 645 649 void l2cap_chan_del(struct l2cap_chan *chan, int err) ··· 727 731 if (!conn) 728 732 return; 729 733 730 - mutex_lock(&conn->chan_lock); 734 + mutex_lock(&conn->lock); 731 735 __l2cap_chan_list(conn, func, data); 732 - mutex_unlock(&conn->chan_lock); 736 + mutex_unlock(&conn->lock); 733 737 } 734 738 735 739 EXPORT_SYMBOL_GPL(l2cap_chan_list); ··· 741 745 struct hci_conn *hcon = conn->hcon; 742 746 struct l2cap_chan *chan; 743 747 744 - mutex_lock(&conn->chan_lock); 748 + mutex_lock(&conn->lock); 745 749 746 750 
list_for_each_entry(chan, &conn->chan_l, list) { 747 751 l2cap_chan_lock(chan); ··· 750 754 l2cap_chan_unlock(chan); 751 755 } 752 756 753 - mutex_unlock(&conn->chan_lock); 757 + mutex_unlock(&conn->lock); 754 758 } 755 759 756 760 static void l2cap_chan_le_connect_reject(struct l2cap_chan *chan) ··· 944 948 return id; 945 949 } 946 950 951 + static void l2cap_send_acl(struct l2cap_conn *conn, struct sk_buff *skb, 952 + u8 flags) 953 + { 954 + /* Check if the hcon still valid before attempting to send */ 955 + if (hci_conn_valid(conn->hcon->hdev, conn->hcon)) 956 + hci_send_acl(conn->hchan, skb, flags); 957 + else 958 + kfree_skb(skb); 959 + } 960 + 947 961 static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len, 948 962 void *data) 949 963 { ··· 976 970 bt_cb(skb)->force_active = BT_POWER_FORCE_ACTIVE_ON; 977 971 skb->priority = HCI_PRIO_MAX; 978 972 979 - hci_send_acl(conn->hchan, skb, flags); 973 + l2cap_send_acl(conn, skb, flags); 980 974 } 981 975 982 976 static void l2cap_do_send(struct l2cap_chan *chan, struct sk_buff *skb) ··· 1503 1497 1504 1498 BT_DBG("conn %p", conn); 1505 1499 1506 - mutex_lock(&conn->chan_lock); 1507 - 1508 1500 list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) { 1509 1501 l2cap_chan_lock(chan); 1510 1502 ··· 1571 1567 1572 1568 l2cap_chan_unlock(chan); 1573 1569 } 1574 - 1575 - mutex_unlock(&conn->chan_lock); 1576 1570 } 1577 1571 1578 1572 static void l2cap_le_conn_ready(struct l2cap_conn *conn) ··· 1616 1614 if (hcon->type == ACL_LINK) 1617 1615 l2cap_request_info(conn); 1618 1616 1619 - mutex_lock(&conn->chan_lock); 1617 + mutex_lock(&conn->lock); 1620 1618 1621 1619 list_for_each_entry(chan, &conn->chan_l, list) { 1622 1620 ··· 1634 1632 l2cap_chan_unlock(chan); 1635 1633 } 1636 1634 1637 - mutex_unlock(&conn->chan_lock); 1635 + mutex_unlock(&conn->lock); 1638 1636 1639 1637 if (hcon->type == LE_LINK) 1640 1638 l2cap_le_conn_ready(conn); ··· 1649 1647 1650 1648 BT_DBG("conn %p", conn); 1651 1649 1652 
- mutex_lock(&conn->chan_lock); 1653 - 1654 1650 list_for_each_entry(chan, &conn->chan_l, list) { 1655 1651 if (test_bit(FLAG_FORCE_RELIABLE, &chan->flags)) 1656 1652 l2cap_chan_set_err(chan, err); 1657 1653 } 1658 - 1659 - mutex_unlock(&conn->chan_lock); 1660 1654 } 1661 1655 1662 1656 static void l2cap_info_timeout(struct work_struct *work) ··· 1663 1665 conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_DONE; 1664 1666 conn->info_ident = 0; 1665 1667 1668 + mutex_lock(&conn->lock); 1666 1669 l2cap_conn_start(conn); 1670 + mutex_unlock(&conn->lock); 1667 1671 } 1668 1672 1669 1673 /* ··· 1757 1757 1758 1758 BT_DBG("hcon %p conn %p, err %d", hcon, conn, err); 1759 1759 1760 + mutex_lock(&conn->lock); 1761 + 1760 1762 kfree_skb(conn->rx_skb); 1761 1763 1762 1764 skb_queue_purge(&conn->pending_rx); ··· 1777 1775 /* Force the connection to be immediately dropped */ 1778 1776 hcon->disc_timeout = 0; 1779 1777 1780 - mutex_lock(&conn->chan_lock); 1781 - 1782 1778 /* Kill channels */ 1783 1779 list_for_each_entry_safe(chan, l, &conn->chan_l, list) { 1784 1780 l2cap_chan_hold(chan); ··· 1790 1790 l2cap_chan_put(chan); 1791 1791 } 1792 1792 1793 - mutex_unlock(&conn->chan_lock); 1794 - 1795 - hci_chan_del(conn->hchan); 1796 - 1797 1793 if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT) 1798 1794 cancel_delayed_work_sync(&conn->info_timer); 1799 1795 1800 - hcon->l2cap_data = NULL; 1796 + hci_chan_del(conn->hchan); 1801 1797 conn->hchan = NULL; 1798 + 1799 + hcon->l2cap_data = NULL; 1800 + mutex_unlock(&conn->lock); 1802 1801 l2cap_conn_put(conn); 1803 1802 } 1804 1803 ··· 2915 2916 2916 2917 BT_DBG("conn %p", conn); 2917 2918 2918 - mutex_lock(&conn->chan_lock); 2919 - 2920 2919 list_for_each_entry(chan, &conn->chan_l, list) { 2921 2920 if (chan->chan_type != L2CAP_CHAN_RAW) 2922 2921 continue; ··· 2929 2932 if (chan->ops->recv(chan, nskb)) 2930 2933 kfree_skb(nskb); 2931 2934 } 2932 - 2933 - mutex_unlock(&conn->chan_lock); 2934 2935 } 2935 2936 2936 2937 /* ---- L2CAP 
signalling commands ---- */ ··· 3947 3952 goto response; 3948 3953 } 3949 3954 3950 - mutex_lock(&conn->chan_lock); 3951 3955 l2cap_chan_lock(pchan); 3952 3956 3953 3957 /* Check if the ACL is secure enough (if not SDP) */ ··· 4053 4059 } 4054 4060 4055 4061 l2cap_chan_unlock(pchan); 4056 - mutex_unlock(&conn->chan_lock); 4057 4062 l2cap_chan_put(pchan); 4058 4063 } 4059 4064 ··· 4091 4098 BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x", 4092 4099 dcid, scid, result, status); 4093 4100 4094 - mutex_lock(&conn->chan_lock); 4095 - 4096 4101 if (scid) { 4097 4102 chan = __l2cap_get_chan_by_scid(conn, scid); 4098 - if (!chan) { 4099 - err = -EBADSLT; 4100 - goto unlock; 4101 - } 4103 + if (!chan) 4104 + return -EBADSLT; 4102 4105 } else { 4103 4106 chan = __l2cap_get_chan_by_ident(conn, cmd->ident); 4104 - if (!chan) { 4105 - err = -EBADSLT; 4106 - goto unlock; 4107 - } 4107 + if (!chan) 4108 + return -EBADSLT; 4108 4109 } 4109 4110 4110 4111 chan = l2cap_chan_hold_unless_zero(chan); 4111 - if (!chan) { 4112 - err = -EBADSLT; 4113 - goto unlock; 4114 - } 4112 + if (!chan) 4113 + return -EBADSLT; 4115 4114 4116 4115 err = 0; 4117 4116 ··· 4140 4155 4141 4156 l2cap_chan_unlock(chan); 4142 4157 l2cap_chan_put(chan); 4143 - 4144 - unlock: 4145 - mutex_unlock(&conn->chan_lock); 4146 4158 4147 4159 return err; 4148 4160 } ··· 4428 4446 4429 4447 chan->ops->set_shutdown(chan); 4430 4448 4431 - l2cap_chan_unlock(chan); 4432 - mutex_lock(&conn->chan_lock); 4433 - l2cap_chan_lock(chan); 4434 4449 l2cap_chan_del(chan, ECONNRESET); 4435 - mutex_unlock(&conn->chan_lock); 4436 4450 4437 4451 chan->ops->close(chan); 4438 4452 ··· 4465 4487 return 0; 4466 4488 } 4467 4489 4468 - l2cap_chan_unlock(chan); 4469 - mutex_lock(&conn->chan_lock); 4470 - l2cap_chan_lock(chan); 4471 4490 l2cap_chan_del(chan, 0); 4472 - mutex_unlock(&conn->chan_lock); 4473 4491 4474 4492 chan->ops->close(chan); 4475 4493 ··· 4663 4689 BT_DBG("dcid 0x%4.4x mtu %u mps %u credits %u result 
0x%2.2x", 4664 4690 dcid, mtu, mps, credits, result); 4665 4691 4666 - mutex_lock(&conn->chan_lock); 4667 - 4668 4692 chan = __l2cap_get_chan_by_ident(conn, cmd->ident); 4669 - if (!chan) { 4670 - err = -EBADSLT; 4671 - goto unlock; 4672 - } 4693 + if (!chan) 4694 + return -EBADSLT; 4673 4695 4674 4696 err = 0; 4675 4697 ··· 4712 4742 } 4713 4743 4714 4744 l2cap_chan_unlock(chan); 4715 - 4716 - unlock: 4717 - mutex_unlock(&conn->chan_lock); 4718 4745 4719 4746 return err; 4720 4747 } ··· 4824 4857 goto response; 4825 4858 } 4826 4859 4827 - mutex_lock(&conn->chan_lock); 4828 4860 l2cap_chan_lock(pchan); 4829 4861 4830 4862 if (!smp_sufficient_security(conn->hcon, pchan->sec_level, ··· 4889 4923 4890 4924 response_unlock: 4891 4925 l2cap_chan_unlock(pchan); 4892 - mutex_unlock(&conn->chan_lock); 4893 4926 l2cap_chan_put(pchan); 4894 4927 4895 4928 if (result == L2CAP_CR_PEND) ··· 5022 5057 goto response; 5023 5058 } 5024 5059 5025 - mutex_lock(&conn->chan_lock); 5026 5060 l2cap_chan_lock(pchan); 5027 5061 5028 5062 if (!smp_sufficient_security(conn->hcon, pchan->sec_level, ··· 5096 5132 5097 5133 unlock: 5098 5134 l2cap_chan_unlock(pchan); 5099 - mutex_unlock(&conn->chan_lock); 5100 5135 l2cap_chan_put(pchan); 5101 5136 5102 5137 response: ··· 5131 5168 5132 5169 BT_DBG("mtu %u mps %u credits %u result 0x%4.4x", mtu, mps, credits, 5133 5170 result); 5134 - 5135 - mutex_lock(&conn->chan_lock); 5136 5171 5137 5172 cmd_len -= sizeof(*rsp); 5138 5173 ··· 5216 5255 5217 5256 l2cap_chan_unlock(chan); 5218 5257 } 5219 - 5220 - mutex_unlock(&conn->chan_lock); 5221 5258 5222 5259 return err; 5223 5260 } ··· 5329 5370 if (cmd_len < sizeof(*rej)) 5330 5371 return -EPROTO; 5331 5372 5332 - mutex_lock(&conn->chan_lock); 5333 - 5334 5373 chan = __l2cap_get_chan_by_ident(conn, cmd->ident); 5335 5374 if (!chan) 5336 5375 goto done; ··· 5343 5386 l2cap_chan_put(chan); 5344 5387 5345 5388 done: 5346 - mutex_unlock(&conn->chan_lock); 5347 5389 return 0; 5348 5390 } 5349 5391 ··· 6797 
6841 6798 6842 BT_DBG(""); 6799 6843 6844 + mutex_lock(&conn->lock); 6845 + 6800 6846 while ((skb = skb_dequeue(&conn->pending_rx))) 6801 6847 l2cap_recv_frame(conn, skb); 6848 + 6849 + mutex_unlock(&conn->lock); 6802 6850 } 6803 6851 6804 6852 static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon) ··· 6841 6881 conn->local_fixed_chan |= L2CAP_FC_SMP_BREDR; 6842 6882 6843 6883 mutex_init(&conn->ident_lock); 6844 - mutex_init(&conn->chan_lock); 6884 + mutex_init(&conn->lock); 6845 6885 6846 6886 INIT_LIST_HEAD(&conn->chan_l); 6847 6887 INIT_LIST_HEAD(&conn->users); ··· 7032 7072 } 7033 7073 } 7034 7074 7035 - mutex_lock(&conn->chan_lock); 7075 + mutex_lock(&conn->lock); 7036 7076 l2cap_chan_lock(chan); 7037 7077 7038 7078 if (cid && __l2cap_get_chan_by_dcid(conn, cid)) { ··· 7073 7113 7074 7114 chan_unlock: 7075 7115 l2cap_chan_unlock(chan); 7076 - mutex_unlock(&conn->chan_lock); 7116 + mutex_unlock(&conn->lock); 7077 7117 done: 7078 7118 hci_dev_unlock(hdev); 7079 7119 hci_dev_put(hdev); ··· 7285 7325 7286 7326 BT_DBG("conn %p status 0x%2.2x encrypt %u", conn, status, encrypt); 7287 7327 7288 - mutex_lock(&conn->chan_lock); 7328 + mutex_lock(&conn->lock); 7289 7329 7290 7330 list_for_each_entry(chan, &conn->chan_l, list) { 7291 7331 l2cap_chan_lock(chan); ··· 7359 7399 l2cap_chan_unlock(chan); 7360 7400 } 7361 7401 7362 - mutex_unlock(&conn->chan_lock); 7402 + mutex_unlock(&conn->lock); 7363 7403 } 7364 7404 7365 7405 /* Append fragment into frame respecting the maximum len of rx_skb */ ··· 7426 7466 conn->rx_len = 0; 7427 7467 } 7428 7468 7469 + struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *c) 7470 + { 7471 + if (!c) 7472 + return NULL; 7473 + 7474 + BT_DBG("conn %p orig refcnt %u", c, kref_read(&c->ref)); 7475 + 7476 + if (!kref_get_unless_zero(&c->ref)) 7477 + return NULL; 7478 + 7479 + return c; 7480 + } 7481 + 7429 7482 void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags) 7430 7483 { 7431 - struct 
l2cap_conn *conn = hcon->l2cap_data; 7484 + struct l2cap_conn *conn; 7432 7485 int len; 7486 + 7487 + /* Lock hdev to access l2cap_data to avoid race with l2cap_conn_del */ 7488 + hci_dev_lock(hcon->hdev); 7489 + 7490 + conn = hcon->l2cap_data; 7433 7491 7434 7492 if (!conn) 7435 7493 conn = l2cap_conn_add(hcon); 7436 7494 7437 - if (!conn) 7438 - goto drop; 7495 + conn = l2cap_conn_hold_unless_zero(conn); 7496 + 7497 + hci_dev_unlock(hcon->hdev); 7498 + 7499 + if (!conn) { 7500 + kfree_skb(skb); 7501 + return; 7502 + } 7439 7503 7440 7504 BT_DBG("conn %p len %u flags 0x%x", conn, skb->len, flags); 7505 + 7506 + mutex_lock(&conn->lock); 7441 7507 7442 7508 switch (flags) { 7443 7509 case ACL_START: ··· 7489 7503 if (len == skb->len) { 7490 7504 /* Complete frame received */ 7491 7505 l2cap_recv_frame(conn, skb); 7492 - return; 7506 + goto unlock; 7493 7507 } 7494 7508 7495 7509 BT_DBG("Start: total len %d, frag len %u", len, skb->len); ··· 7553 7567 7554 7568 drop: 7555 7569 kfree_skb(skb); 7570 + unlock: 7571 + mutex_unlock(&conn->lock); 7572 + l2cap_conn_put(conn); 7556 7573 } 7557 7574 7558 7575 static struct hci_cb l2cap_cb = {
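The new `l2cap_conn_hold_unless_zero()` is built on `kref_get_unless_zero()`: take a reference only if the object has not already dropped to zero (i.e. is not mid-teardown). A userspace model of that primitive using C11 atomics (illustrative, not the kernel kref implementation):

```c
#include <stdatomic.h>

struct obj { atomic_int ref; };

/* ~ kref_get_unless_zero(): returns 1 and bumps the count, or returns 0
 * if the count already hit zero and the object must not be touched. */
static int get_unless_zero(struct obj *o)
{
	int cur = atomic_load(&o->ref);

	while (cur != 0) {
		/* try cur -> cur + 1; on a race, cur is reloaded and we retry */
		if (atomic_compare_exchange_weak(&o->ref, &cur, cur + 1))
			return 1;
	}
	return 0;
}
```

This is why `l2cap_recv_acldata()` in the hunk above can safely race with `l2cap_conn_del()`: a conn whose last reference is gone is simply skipped and the skb freed.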
net/bluetooth/l2cap_sock.c (+7 -8)
··· 1326 1326 /* prevent sk structure from being freed whilst unlocked */ 1327 1327 sock_hold(sk); 1328 1328 1329 - chan = l2cap_pi(sk)->chan; 1330 1329 /* prevent chan structure from being freed whilst unlocked */ 1331 - l2cap_chan_hold(chan); 1330 + chan = l2cap_chan_hold_unless_zero(l2cap_pi(sk)->chan); 1331 + if (!chan) 1332 + goto shutdown_already; 1332 1333 1333 1334 BT_DBG("chan %p state %s", chan, state_to_string(chan->state)); 1334 1335 ··· 1359 1358 release_sock(sk); 1360 1359 1361 1360 l2cap_chan_lock(chan); 1362 - conn = chan->conn; 1363 - if (conn) 1364 - /* prevent conn structure from being freed */ 1365 - l2cap_conn_get(conn); 1361 + /* prevent conn structure from being freed */ 1362 + conn = l2cap_conn_hold_unless_zero(chan->conn); 1366 1363 l2cap_chan_unlock(chan); 1367 1364 1368 1365 if (conn) 1369 1366 /* mutex lock must be taken before l2cap_chan_lock() */ 1370 - mutex_lock(&conn->chan_lock); 1367 + mutex_lock(&conn->lock); 1371 1368 1372 1369 l2cap_chan_lock(chan); 1373 1370 l2cap_chan_close(chan, 0); 1374 1371 l2cap_chan_unlock(chan); 1375 1372 1376 1373 if (conn) { 1377 - mutex_unlock(&conn->chan_lock); 1374 + mutex_unlock(&conn->lock); 1378 1375 l2cap_conn_put(conn); 1379 1376 } 1380 1377
net/can/j1939/socket.c (+2 -2)
··· 1132 1132 1133 1133 todo_size = size; 1134 1134 1135 - while (todo_size) { 1135 + do { 1136 1136 struct j1939_sk_buff_cb *skcb; 1137 1137 1138 1138 segment_size = min_t(size_t, J1939_MAX_TP_PACKET_SIZE, ··· 1177 1177 1178 1178 todo_size -= segment_size; 1179 1179 session->total_queued_size += segment_size; 1180 - } 1180 + } while (todo_size); 1181 1181 1182 1182 switch (ret) { 1183 1183 case 0: /* OK */
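The j1939 change turns `while (todo_size)` into a `do { ... } while (todo_size)` so that a zero-length datagram still queues exactly one (empty) segment instead of none. A hypothetical segment-counting model of the two loop shapes:

```c
#include <stddef.h>

/* Old shape: a zero-byte send never enters the loop body. */
static int segments_while(size_t todo)
{
	int n = 0;

	while (todo) {
		size_t seg = todo > 7 ? 7 : todo;	/* illustrative max */
		todo -= seg;
		n++;
	}
	return n;
}

/* New shape: the body runs at least once, so size == 0 yields 1 segment. */
static int segments_do_while(size_t todo)
{
	int n = 0;

	do {
		size_t seg = todo > 7 ? 7 : todo;
		todo -= seg;
		n++;
	} while (todo);
	return n;
}
```

For any non-zero size the two loops agree; they differ only in the zero-size case the fix targets.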
net/can/j1939/transport.c (+3 -2)
··· 382 382 skb_queue_walk(&session->skb_queue, do_skb) { 383 383 do_skcb = j1939_skb_to_cb(do_skb); 384 384 385 - if (offset_start >= do_skcb->offset && 386 - offset_start < (do_skcb->offset + do_skb->len)) { 385 + if ((offset_start >= do_skcb->offset && 386 + offset_start < (do_skcb->offset + do_skb->len)) || 387 + (offset_start == 0 && do_skcb->offset == 0 && do_skb->len == 0)) { 387 388 skb = do_skb; 388 389 } 389 390 }
net/core/fib_rules.c (+13 -11)
··· 37 37 38 38 bool fib_rule_matchall(const struct fib_rule *rule) 39 39 { 40 - if (rule->iifindex || rule->oifindex || rule->mark || rule->tun_id || 41 - rule->flags) 40 + if (READ_ONCE(rule->iifindex) || READ_ONCE(rule->oifindex) || 41 + rule->mark || rule->tun_id || rule->flags) 42 42 return false; 43 43 if (rule->suppress_ifgroup != -1 || rule->suppress_prefixlen != -1) 44 44 return false; ··· 261 261 struct flowi *fl, int flags, 262 262 struct fib_lookup_arg *arg) 263 263 { 264 - int ret = 0; 264 + int iifindex, oifindex, ret = 0; 265 265 266 - if (rule->iifindex && (rule->iifindex != fl->flowi_iif)) 266 + iifindex = READ_ONCE(rule->iifindex); 267 + if (iifindex && (iifindex != fl->flowi_iif)) 267 268 goto out; 268 269 269 - if (rule->oifindex && (rule->oifindex != fl->flowi_oif)) 270 + oifindex = READ_ONCE(rule->oifindex); 271 + if (oifindex && (oifindex != fl->flowi_oif)) 270 272 goto out; 271 273 272 274 if ((rule->mark ^ fl->flowi_mark) & rule->mark_mask) ··· 1043 1041 if (rule->iifname[0]) { 1044 1042 if (nla_put_string(skb, FRA_IIFNAME, rule->iifname)) 1045 1043 goto nla_put_failure; 1046 - if (rule->iifindex == -1) 1044 + if (READ_ONCE(rule->iifindex) == -1) 1047 1045 frh->flags |= FIB_RULE_IIF_DETACHED; 1048 1046 } 1049 1047 1050 1048 if (rule->oifname[0]) { 1051 1049 if (nla_put_string(skb, FRA_OIFNAME, rule->oifname)) 1052 1050 goto nla_put_failure; 1053 - if (rule->oifindex == -1) 1051 + if (READ_ONCE(rule->oifindex) == -1) 1054 1052 frh->flags |= FIB_RULE_OIF_DETACHED; 1055 1053 } 1056 1054 ··· 1222 1220 list_for_each_entry(rule, rules, list) { 1223 1221 if (rule->iifindex == -1 && 1224 1222 strcmp(dev->name, rule->iifname) == 0) 1225 - rule->iifindex = dev->ifindex; 1223 + WRITE_ONCE(rule->iifindex, dev->ifindex); 1226 1224 if (rule->oifindex == -1 && 1227 1225 strcmp(dev->name, rule->oifname) == 0) 1228 - rule->oifindex = dev->ifindex; 1226 + WRITE_ONCE(rule->oifindex, dev->ifindex); 1229 1227 } 1230 1228 } 1231 1229 ··· 1235 1233 1236 1234 
list_for_each_entry(rule, rules, list) { 1237 1235 if (rule->iifindex == dev->ifindex) 1238 - rule->iifindex = -1; 1236 + WRITE_ONCE(rule->iifindex, -1); 1239 1237 if (rule->oifindex == dev->ifindex) 1240 - rule->oifindex = -1; 1238 + WRITE_ONCE(rule->oifindex, -1); 1241 1239 } 1242 1240 } 1243 1241
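The fib_rules hunk wraps every lockless access to `iifindex`/`oifindex` in `READ_ONCE()`/`WRITE_ONCE()`, because device register/unregister can flip the field while rule matching reads it concurrently. A userspace sketch of the same idea using relaxed C11 atomics as the marked-access stand-in (names are illustrative):

```c
#include <stdatomic.h>

struct fib_rule_model { atomic_int iifindex; };

static void rule_attach(struct fib_rule_model *r, int ifindex)
{
	/* ~ WRITE_ONCE(rule->iifindex, dev->ifindex) */
	atomic_store_explicit(&r->iifindex, ifindex, memory_order_relaxed);
}

static int rule_match_iif(struct fib_rule_model *r, int flowi_iif)
{
	/* ~ iifindex = READ_ONCE(rule->iifindex): one stable snapshot,
	 * used for both the "is it set" and the comparison test. */
	int iif = atomic_load_explicit(&r->iifindex, memory_order_relaxed);

	return !iif || iif == flowi_iif;
}
```

Taking a single snapshot also mirrors why the kernel hunk introduces the local `iifindex`/`oifindex` variables rather than reading the field twice.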
net/core/flow_dissector.c (+11 -10)
··· 1108 1108 FLOW_DISSECTOR_KEY_BASIC, 1109 1109 target_container); 1110 1110 1111 + rcu_read_lock(); 1112 + 1111 1113 if (skb) { 1112 1114 if (!net) { 1113 1115 if (skb->dev) 1114 - net = dev_net(skb->dev); 1116 + net = dev_net_rcu(skb->dev); 1115 1117 else if (skb->sk) 1116 1118 net = sock_net(skb->sk); 1117 1119 } ··· 1124 1122 enum netns_bpf_attach_type type = NETNS_BPF_FLOW_DISSECTOR; 1125 1123 struct bpf_prog_array *run_array; 1126 1124 1127 - rcu_read_lock(); 1128 1125 run_array = rcu_dereference(init_net.bpf.run_array[type]); 1129 1126 if (!run_array) 1130 1127 run_array = rcu_dereference(net->bpf.run_array[type]); ··· 1151 1150 prog = READ_ONCE(run_array->items[0].prog); 1152 1151 result = bpf_flow_dissect(prog, &ctx, n_proto, nhoff, 1153 1152 hlen, flags); 1154 - if (result == BPF_FLOW_DISSECTOR_CONTINUE) 1155 - goto dissect_continue; 1156 - __skb_flow_bpf_to_target(&flow_keys, flow_dissector, 1157 - target_container); 1158 - rcu_read_unlock(); 1159 - return result == BPF_OK; 1153 + if (result != BPF_FLOW_DISSECTOR_CONTINUE) { 1154 + __skb_flow_bpf_to_target(&flow_keys, flow_dissector, 1155 + target_container); 1156 + rcu_read_unlock(); 1157 + return result == BPF_OK; 1158 + } 1160 1159 } 1161 - dissect_continue: 1162 - rcu_read_unlock(); 1163 1160 } 1161 + 1162 + rcu_read_unlock(); 1164 1163 1165 1164 if (dissector_uses_key(flow_dissector, 1166 1165 FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
net/core/neighbour.c (+6 -2)
··· 3447 3447 static void __neigh_notify(struct neighbour *n, int type, int flags, 3448 3448 u32 pid) 3449 3449 { 3450 - struct net *net = dev_net(n->dev); 3451 3450 struct sk_buff *skb; 3452 3451 int err = -ENOBUFS; 3452 + struct net *net; 3453 3453 3454 + rcu_read_lock(); 3455 + net = dev_net_rcu(n->dev); 3454 3456 skb = nlmsg_new(neigh_nlmsg_size(), GFP_ATOMIC); 3455 3457 if (skb == NULL) 3456 3458 goto errout; ··· 3465 3463 goto errout; 3466 3464 } 3467 3465 rtnl_notify(skb, net, 0, RTNLGRP_NEIGH, NULL, GFP_ATOMIC); 3468 - return; 3466 + goto out; 3469 3467 errout: 3470 3468 rtnl_set_sk_err(net, RTNLGRP_NEIGH, err); 3469 + out: 3470 + rcu_read_unlock(); 3471 3471 } 3472 3472 3473 3473 void neigh_app_ns(struct neighbour *n)
+1
net/core/rtnetlink.c
@@ -3432 +3432 @@
 		err = -ENODEV;
 
 	rtnl_nets_unlock(&rtnl_nets);
+	rtnl_nets_destroy(&rtnl_nets);
 errout:
 	return err;
 }
+5
net/ethtool/common.c
@@ -462 +462 @@
 };
 static_assert(ARRAY_SIZE(ts_rx_filter_names) == __HWTSTAMP_FILTER_CNT);
 
+const char ts_flags_names[][ETH_GSTRING_LEN] = {
+	[const_ilog2(HWTSTAMP_FLAG_BONDED_PHC_INDEX)] = "bonded-phc-index",
+};
+static_assert(ARRAY_SIZE(ts_flags_names) == __HWTSTAMP_FLAG_CNT);
+
 const char udp_tunnel_type_names[][ETH_GSTRING_LEN] = {
 	[ETHTOOL_UDP_TUNNEL_TYPE_VXLAN] = "vxlan",
 	[ETHTOOL_UDP_TUNNEL_TYPE_GENEVE] = "geneve",
+2
net/ethtool/common.h
@@ -13 +13 @@
 	ETHTOOL_LINK_MODE_ ## speed ## base ## type ## _ ## duplex ## _BIT
 
 #define __SOF_TIMESTAMPING_CNT (const_ilog2(SOF_TIMESTAMPING_LAST) + 1)
+#define __HWTSTAMP_FLAG_CNT (const_ilog2(HWTSTAMP_FLAG_LAST) + 1)
 
 struct link_mode_info {
 	int speed;
@@ -39 +38 @@
 extern const char sof_timestamping_names[][ETH_GSTRING_LEN];
 extern const char ts_tx_type_names[][ETH_GSTRING_LEN];
 extern const char ts_rx_filter_names[][ETH_GSTRING_LEN];
+extern const char ts_flags_names[][ETH_GSTRING_LEN];
 extern const char udp_tunnel_type_names[][ETH_GSTRING_LEN];
 
 int __ethtool_get_link(struct net_device *dev);
+5
net/ethtool/strset.c
@@ -75 +75 @@
 		.count	= __HWTSTAMP_FILTER_CNT,
 		.strings = ts_rx_filter_names,
 	},
+	[ETH_SS_TS_FLAGS] = {
+		.per_dev = false,
+		.count = __HWTSTAMP_FLAG_CNT,
+		.strings = ts_flags_names,
+	},
 	[ETH_SS_UDP_TUNNEL_TYPES] = {
 		.per_dev = false,
 		.count = __ETHTOOL_UDP_TUNNEL_TYPE_CNT,
+23 -10
net/ethtool/tsconfig.c
@@ -54 +54 @@
 
 	data->hwtst_config.tx_type = BIT(cfg.tx_type);
 	data->hwtst_config.rx_filter = BIT(cfg.rx_filter);
-	data->hwtst_config.flags = BIT(cfg.flags);
+	data->hwtst_config.flags = cfg.flags;
 
 	data->hwprov_desc.index = -1;
 	hwprov = rtnl_dereference(dev->hwprov);
@@ -91 +91 @@
 
 	BUILD_BUG_ON(__HWTSTAMP_TX_CNT > 32);
 	BUILD_BUG_ON(__HWTSTAMP_FILTER_CNT > 32);
+	BUILD_BUG_ON(__HWTSTAMP_FLAG_CNT > 32);
 
-	if (data->hwtst_config.flags)
-		/* _TSCONFIG_HWTSTAMP_FLAGS */
-		len += nla_total_size(sizeof(u32));
+	if (data->hwtst_config.flags) {
+		ret = ethnl_bitset32_size(&data->hwtst_config.flags,
+					  NULL, __HWTSTAMP_FLAG_CNT,
+					  ts_flags_names, compact);
+		if (ret < 0)
+			return ret;
+		len += ret;	/* _TSCONFIG_HWTSTAMP_FLAGS */
+	}
 
 	if (data->hwtst_config.tx_type) {
 		ret = ethnl_bitset32_size(&data->hwtst_config.tx_type,
@@ -136 +130 @@
 	int ret;
 
 	if (data->hwtst_config.flags) {
-		ret = nla_put_u32(skb, ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS,
-				  data->hwtst_config.flags);
+		ret = ethnl_put_bitset32(skb, ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS,
+					 &data->hwtst_config.flags, NULL,
+					 __HWTSTAMP_FLAG_CNT,
+					 ts_flags_names, compact);
 		if (ret < 0)
 			return ret;
 	}
@@ -188 +180 @@
 	[ETHTOOL_A_TSCONFIG_HEADER] = NLA_POLICY_NESTED(ethnl_header_policy),
 	[ETHTOOL_A_TSCONFIG_HWTSTAMP_PROVIDER] =
 		NLA_POLICY_NESTED(ethnl_ts_hwtst_prov_policy),
-	[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS] = { .type = NLA_U32 },
+	[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS] = { .type = NLA_NESTED },
 	[ETHTOOL_A_TSCONFIG_RX_FILTERS] = { .type = NLA_NESTED },
 	[ETHTOOL_A_TSCONFIG_TX_TYPES] = { .type = NLA_NESTED },
 };
@@ -304 +296 @@
 
 	BUILD_BUG_ON(__HWTSTAMP_TX_CNT >= 32);
 	BUILD_BUG_ON(__HWTSTAMP_FILTER_CNT >= 32);
+	BUILD_BUG_ON(__HWTSTAMP_FLAG_CNT > 32);
 
 	if (!netif_device_present(dev))
 		return -ENODEV;
@@ -386 +377 @@
 	}
 
 	if (tb[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS]) {
-		ethnl_update_u32(&hwtst_config.flags,
-				 tb[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS],
-				 &config_mod);
+		ret = ethnl_update_bitset32(&hwtst_config.flags,
+					    __HWTSTAMP_FLAG_CNT,
+					    tb[ETHTOOL_A_TSCONFIG_HWTSTAMP_FLAGS],
+					    ts_flags_names, info->extack,
+					    &config_mod);
+		if (ret < 0)
+			goto err_free_hwprov;
 	}
 
 	ret = net_hwtstamp_validate(&hwtst_config);
+3 -1
net/ipv4/arp.c
@@ -659 +659 @@
  */
 void arp_xmit(struct sk_buff *skb)
 {
+	rcu_read_lock();
 	/* Send it off, maybe filter it using firewalling first. */
 	NF_HOOK(NFPROTO_ARP, NF_ARP_OUT,
-		dev_net(skb->dev), NULL, skb, NULL, skb->dev,
+		dev_net_rcu(skb->dev), NULL, skb, NULL, skb->dev,
 		arp_xmit_finish);
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(arp_xmit);
 
+2 -1
net/ipv4/devinet.c
@@ -1371 +1371 @@
 	__be32 addr = 0;
 	unsigned char localnet_scope = RT_SCOPE_HOST;
 	struct in_device *in_dev;
-	struct net *net = dev_net(dev);
+	struct net *net;
 	int master_idx;
 
 	rcu_read_lock();
+	net = dev_net_rcu(dev);
 	in_dev = __in_dev_get_rcu(dev);
 	if (!in_dev)
 		goto no_in_dev;
+17 -14
net/ipv4/icmp.c
@@ -399 +399 @@
 
 static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
 {
-	struct ipcm_cookie ipc;
 	struct rtable *rt = skb_rtable(skb);
-	struct net *net = dev_net(rt->dst.dev);
+	struct net *net = dev_net_rcu(rt->dst.dev);
 	bool apply_ratelimit = false;
+	struct ipcm_cookie ipc;
 	struct flowi4 fl4;
 	struct sock *sk;
 	struct inet_sock *inet;
@@ -608 +608 @@
 	struct sock *sk;
 
 	if (!rt)
-		goto out;
+		return;
+
+	rcu_read_lock();
 
 	if (rt->dst.dev)
-		net = dev_net(rt->dst.dev);
+		net = dev_net_rcu(rt->dst.dev);
 	else if (skb_in->dev)
-		net = dev_net(skb_in->dev);
+		net = dev_net_rcu(skb_in->dev);
 	else
 		goto out;
@@ -787 +785 @@
 	icmp_xmit_unlock(sk);
 out_bh_enable:
 	local_bh_enable();
-out:;
+out:
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(__icmp_send);
 
@@ -837 +834 @@
 	 * avoid additional coding at protocol handlers.
 	 */
 	if (!pskb_may_pull(skb, iph->ihl * 4 + 8)) {
-		__ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
+		__ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
 		return;
 	}
 
@@ -871 +868 @@
 	struct net *net;
 	u32 info = 0;
 
-	net = dev_net(skb_dst(skb)->dev);
+	net = dev_net_rcu(skb_dst(skb)->dev);
 
 	/*
 	 * Incomplete header ?
@@ -982 +979 @@
 static enum skb_drop_reason icmp_redirect(struct sk_buff *skb)
 {
 	if (skb->len < sizeof(struct iphdr)) {
-		__ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
+		__ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
 		return SKB_DROP_REASON_PKT_TOO_SMALL;
 	}
 
@@ -1014 +1011 @@
 	struct icmp_bxm icmp_param;
 	struct net *net;
 
-	net = dev_net(skb_dst(skb)->dev);
+	net = dev_net_rcu(skb_dst(skb)->dev);
 	/* should there be an ICMP stat for ignored echos? */
 	if (READ_ONCE(net->ipv4.sysctl_icmp_echo_ignore_all))
 		return SKB_NOT_DROPPED_YET;
@@ -1043 +1040 @@
 
 bool icmp_build_probe(struct sk_buff *skb, struct icmphdr *icmphdr)
 {
+	struct net *net = dev_net_rcu(skb->dev);
 	struct icmp_ext_hdr *ext_hdr, _ext_hdr;
 	struct icmp_ext_echo_iio *iio, _iio;
-	struct net *net = dev_net(skb->dev);
 	struct inet6_dev *in6_dev;
 	struct in_device *in_dev;
 	struct net_device *dev;
@@ -1184 +1181 @@
 		return SKB_NOT_DROPPED_YET;
 
 out_err:
-	__ICMP_INC_STATS(dev_net(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
+	__ICMP_INC_STATS(dev_net_rcu(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
 	return SKB_DROP_REASON_PKT_TOO_SMALL;
 }
 
@@ -1201 +1198 @@
 {
 	enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
 	struct rtable *rt = skb_rtable(skb);
-	struct net *net = dev_net(rt->dst.dev);
+	struct net *net = dev_net_rcu(rt->dst.dev);
 	struct icmphdr *icmph;
 
 	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
@@ -1374 +1371 @@
 	struct iphdr *iph = (struct iphdr *)skb->data;
 	int offset = iph->ihl<<2;
 	struct icmphdr *icmph = (struct icmphdr *)(skb->data + offset);
+	struct net *net = dev_net_rcu(skb->dev);
 	int type = icmp_hdr(skb)->type;
 	int code = icmp_hdr(skb)->code;
-	struct net *net = dev_net(skb->dev);
 
 	/*
 	 * Use ping_err to handle all icmp errors except those
+21 -9
net/ipv4/route.c
@@ -390 +390 @@
 
 static inline bool rt_is_expired(const struct rtable *rth)
 {
-	return rth->rt_genid != rt_genid_ipv4(dev_net(rth->dst.dev));
+	bool res;
+
+	rcu_read_lock();
+	res = rth->rt_genid != rt_genid_ipv4(dev_net_rcu(rth->dst.dev));
+	rcu_read_unlock();
+
+	return res;
 }
 
 void rt_cache_flush(struct net *net)
@@ -1008 +1002 @@
 static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
 {
 	struct dst_entry *dst = &rt->dst;
-	struct net *net = dev_net(dst->dev);
 	struct fib_result res;
 	bool lock = false;
+	struct net *net;
 	u32 old_mtu;
 
 	if (ip_mtu_locked(dst))
@@ -1020 +1014 @@
 	if (old_mtu < mtu)
 		return;
 
+	rcu_read_lock();
+	net = dev_net_rcu(dst->dev);
 	if (mtu < net->ipv4.ip_rt_min_pmtu) {
 		lock = true;
 		mtu = min(old_mtu, net->ipv4.ip_rt_min_pmtu);
@@ -1029 +1021 @@
 
 	if (rt->rt_pmtu == mtu && !lock &&
 	    time_before(jiffies, dst->expires - net->ipv4.ip_rt_mtu_expires / 2))
-		return;
+		goto out;
 
-	rcu_read_lock();
 	if (fib_lookup(net, fl4, &res, 0) == 0) {
 		struct fib_nh_common *nhc;
 
@@ -1044 +1037 @@
 			update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
 					      jiffies + net->ipv4.ip_rt_mtu_expires);
 		}
-		rcu_read_unlock();
-		return;
+		goto out;
 	}
 #endif /* CONFIG_IP_ROUTE_MULTIPATH */
 	nhc = FIB_RES_NHC(res);
 	update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
 			      jiffies + net->ipv4.ip_rt_mtu_expires);
 	}
+out:
 	rcu_read_unlock();
 }
 
@@ -1314 +1307 @@
 
 static unsigned int ipv4_default_advmss(const struct dst_entry *dst)
 {
-	struct net *net = dev_net(dst->dev);
 	unsigned int header_size = sizeof(struct tcphdr) + sizeof(struct iphdr);
-	unsigned int advmss = max_t(unsigned int, ipv4_mtu(dst) - header_size,
-				    net->ipv4.ip_rt_min_advmss);
+	unsigned int advmss;
+	struct net *net;
+
+	rcu_read_lock();
+	net = dev_net_rcu(dst->dev);
+	advmss = max_t(unsigned int, ipv4_mtu(dst) - header_size,
+		       net->ipv4.ip_rt_min_advmss);
+	rcu_read_unlock();
 
 	return min(advmss, IPV4_MAX_PMTU - header_size);
 }
+23 -19
net/ipv6/icmp.c
@@ -76 +76 @@
 {
 	/* icmpv6_notify checks 8 bytes can be pulled, icmp6hdr is 8 bytes */
 	struct icmp6hdr *icmp6 = (struct icmp6hdr *) (skb->data + offset);
-	struct net *net = dev_net(skb->dev);
+	struct net *net = dev_net_rcu(skb->dev);
 
 	if (type == ICMPV6_PKT_TOOBIG)
 		ip6_update_pmtu(skb, net, info, skb->dev->ifindex, 0, sock_net_uid(net, NULL));
@@ -473 +473 @@
 
 	if (!skb->dev)
 		return;
-	net = dev_net(skb->dev);
+
+	rcu_read_lock();
+
+	net = dev_net_rcu(skb->dev);
 	mark = IP6_REPLY_MARK(net, skb->mark);
 	/*
 	 * Make sure we respect the rules
@@ -499 +496 @@
 	    !(type == ICMPV6_PARAMPROB &&
 	      code == ICMPV6_UNK_OPTION &&
 	      (opt_unrec(skb, info))))
-		return;
+		goto out;
 
 	saddr = NULL;
 	}
@@ -529 +526 @@
 	if ((addr_type == IPV6_ADDR_ANY) || (addr_type & IPV6_ADDR_MULTICAST)) {
 		net_dbg_ratelimited("icmp6_send: addr_any/mcast source [%pI6c > %pI6c]\n",
 				    &hdr->saddr, &hdr->daddr);
-		return;
+		goto out;
 	}
 
 	/*
@@ -538 +535 @@
 	if (is_ineligible(skb)) {
 		net_dbg_ratelimited("icmp6_send: no reply to icmp error [%pI6c > %pI6c]\n",
 				    &hdr->saddr, &hdr->daddr);
-		return;
+		goto out;
 	}
 
 	/* Needed by both icmpv6_global_allow and icmpv6_xmit_lock */
@@ -585 +582 @@
 	np = inet6_sk(sk);
 
 	if (!icmpv6_xrlim_allow(sk, type, &fl6, apply_ratelimit))
-		goto out;
+		goto out_unlock;
 
 	tmp_hdr.icmp6_type = type;
 	tmp_hdr.icmp6_code = code;
@@ -603 +600 @@
 
 	dst = icmpv6_route_lookup(net, skb, sk, &fl6);
 	if (IS_ERR(dst))
-		goto out;
+		goto out_unlock;
 
 	ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
 
@@ -619 +616 @@
 		goto out_dst_release;
 	}
 
-	rcu_read_lock();
 	idev = __in6_dev_get(skb->dev);
 
 	if (ip6_append_data(sk, icmpv6_getfrag, &msg,
@@ -632 +630 @@
 		icmpv6_push_pending_frames(sk, &fl6, &tmp_hdr,
 					   len + sizeof(struct icmp6hdr));
 	}
-	rcu_read_unlock();
+
 out_dst_release:
 	dst_release(dst);
-out:
+out_unlock:
 	icmpv6_xmit_unlock(sk);
 out_bh_enable:
 	local_bh_enable();
+out:
+	rcu_read_unlock();
 }
 EXPORT_SYMBOL(icmp6_send);
 
@@ -683 +679 @@
 	skb_pull(skb2, nhs);
 	skb_reset_network_header(skb2);
 
-	rt = rt6_lookup(dev_net(skb->dev), &ipv6_hdr(skb2)->saddr, NULL, 0,
-			skb, 0);
+	rt = rt6_lookup(dev_net_rcu(skb->dev), &ipv6_hdr(skb2)->saddr,
+			NULL, 0, skb, 0);
 
 	if (rt && rt->dst.dev)
 		skb2->dev = rt->dst.dev;
@@ -721 +717 @@
 
 static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
 {
-	struct net *net = dev_net(skb->dev);
+	struct net *net = dev_net_rcu(skb->dev);
 	struct sock *sk;
 	struct inet6_dev *idev;
 	struct ipv6_pinfo *np;
@@ -836 +832 @@
 			  u8 code, __be32 info)
 {
 	struct inet6_skb_parm *opt = IP6CB(skb);
-	struct net *net = dev_net(skb->dev);
+	struct net *net = dev_net_rcu(skb->dev);
 	const struct inet6_protocol *ipprot;
 	enum skb_drop_reason reason;
 	int inner_offset;
@@ -893 +889 @@
 static int icmpv6_rcv(struct sk_buff *skb)
 {
 	enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
-	struct net *net = dev_net(skb->dev);
+	struct net *net = dev_net_rcu(skb->dev);
 	struct net_device *dev = icmp6_dev(skb);
 	struct inet6_dev *idev = __in6_dev_get(dev);
 	const struct in6_addr *saddr, *daddr;
@@ -925 +921 @@
 		skb_set_network_header(skb, nh);
 	}
 
-	__ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INMSGS);
+	__ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_INMSGS);
 
 	saddr = &ipv6_hdr(skb)->saddr;
 	daddr = &ipv6_hdr(skb)->daddr;
@@ -943 +939 @@
 
 	type = hdr->icmp6_type;
 
-	ICMP6MSGIN_INC_STATS(dev_net(dev), idev, type);
+	ICMP6MSGIN_INC_STATS(dev_net_rcu(dev), idev, type);
 
 	switch (type) {
 	case ICMPV6_ECHO_REQUEST:
@@ -1038 +1034 @@
 
 csum_error:
 	reason = SKB_DROP_REASON_ICMP_CSUM;
-	__ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_CSUMERRORS);
+	__ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_CSUMERRORS);
 discard_it:
-	__ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INERRORS);
+	__ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_INERRORS);
 drop_no_count:
 	kfree_skb_reason(skb, reason);
 	return 0;
+9 -5
net/ipv6/ip6_input.c
@@ -477 +477 @@
 static int ip6_input_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
 	skb_clear_delivery_time(skb);
-	rcu_read_lock();
 	ip6_protocol_deliver_rcu(net, skb, 0, false);
-	rcu_read_unlock();
 
 	return 0;
 }
@@ -485 +487 @@
 
 int ip6_input(struct sk_buff *skb)
 {
-	return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_IN,
-		       dev_net(skb->dev), NULL, skb, skb->dev, NULL,
-		       ip6_input_finish);
+	int res;
+
+	rcu_read_lock();
+	res = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_IN,
+		      dev_net_rcu(skb->dev), NULL, skb, skb->dev, NULL,
+		      ip6_input_finish);
+	rcu_read_unlock();
+
+	return res;
 }
 EXPORT_SYMBOL_GPL(ip6_input);
 
+25 -20
net/ipv6/mcast.c
@@ -1773 +1773 @@
 	struct net_device *dev = idev->dev;
 	int hlen = LL_RESERVED_SPACE(dev);
 	int tlen = dev->needed_tailroom;
-	struct net *net = dev_net(dev);
 	const struct in6_addr *saddr;
 	struct in6_addr addr_buf;
 	struct mld2_report *pmr;
 	struct sk_buff *skb;
 	unsigned int size;
 	struct sock *sk;
-	int err;
+	struct net *net;
 
-	sk = net->ipv6.igmp_sk;
 	/* we assume size > sizeof(ra) here
 	 * Also try to not allocate high-order pages for big MTU
 	 */
 	size = min_t(int, mtu, PAGE_SIZE / 2) + hlen + tlen;
-	skb = sock_alloc_send_skb(sk, size, 1, &err);
+	skb = alloc_skb(size, GFP_KERNEL);
 	if (!skb)
 		return NULL;
 
 	skb->priority = TC_PRIO_CONTROL;
 	skb_reserve(skb, hlen);
 	skb_tailroom_reserve(skb, mtu, tlen);
+
+	rcu_read_lock();
+
+	net = dev_net_rcu(dev);
+	sk = net->ipv6.igmp_sk;
+	skb_set_owner_w(skb, sk);
 
 	if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) {
 		/* <draft-ietf-magma-mld-source-05.txt>:
@@ -1809 +1805 @@
 		saddr = &addr_buf;
 
 	ip6_mc_hdr(sk, skb, dev, saddr, &mld2_all_mcr, NEXTHDR_HOP, 0);
+
+	rcu_read_unlock();
 
 	skb_put_data(skb, ra, sizeof(ra));
 
@@ -2171 +2165 @@
 
 static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
 {
-	struct net *net = dev_net(dev);
-	struct sock *sk = net->ipv6.igmp_sk;
+	const struct in6_addr *snd_addr, *saddr;
+	int err, len, payload_len, full_len;
+	struct in6_addr addr_buf;
 	struct inet6_dev *idev;
 	struct sk_buff *skb;
 	struct mld_msg *hdr;
-	const struct in6_addr *snd_addr, *saddr;
-	struct in6_addr addr_buf;
 	int hlen = LL_RESERVED_SPACE(dev);
 	int tlen = dev->needed_tailroom;
-	int err, len, payload_len, full_len;
 	u8 ra[8] = { IPPROTO_ICMPV6, 0,
 		     IPV6_TLV_ROUTERALERT, 2, 0, 0,
 		     IPV6_TLV_PADN, 0 };
-	struct flowi6 fl6;
 	struct dst_entry *dst;
+	struct flowi6 fl6;
+	struct net *net;
+	struct sock *sk;
 
 	if (type == ICMPV6_MGM_REDUCTION)
 		snd_addr = &in6addr_linklocal_allrouters;
@@ -2196 +2190 @@
 	payload_len = len + sizeof(ra);
 	full_len = sizeof(struct ipv6hdr) + payload_len;
 
+	skb = alloc_skb(hlen + tlen + full_len, GFP_KERNEL);
+
 	rcu_read_lock();
-	IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_OUTREQUESTS);
-	rcu_read_unlock();
 
-	skb = sock_alloc_send_skb(sk, hlen + tlen + full_len, 1, &err);
-
+	net = dev_net_rcu(dev);
+	idev = __in6_dev_get(dev);
+	IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
 	if (!skb) {
-		rcu_read_lock();
-		IP6_INC_STATS(net, __in6_dev_get(dev),
-			      IPSTATS_MIB_OUTDISCARDS);
+		IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
 		rcu_read_unlock();
 		return;
 	}
+	sk = net->ipv6.igmp_sk;
+	skb_set_owner_w(skb, sk);
+
 	skb->priority = TC_PRIO_CONTROL;
 	skb_reserve(skb, hlen);
 
@@ -2234 +2226 @@
 	hdr->mld_cksum = csum_ipv6_magic(saddr, snd_addr, len,
 					 IPPROTO_ICMPV6,
 					 csum_partial(hdr, len, 0));
-
-	rcu_read_lock();
-	idev = __in6_dev_get(skb->dev);
 
 	icmpv6_flow_init(sk, &fl6, type,
 			 &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr,
+15 -13
net/ipv6/ndisc.c
@@ -418 +418 @@
 {
 	int hlen = LL_RESERVED_SPACE(dev);
 	int tlen = dev->needed_tailroom;
-	struct sock *sk = dev_net(dev)->ipv6.ndisc_sk;
 	struct sk_buff *skb;
 
 	skb = alloc_skb(hlen + sizeof(struct ipv6hdr) + len + tlen, GFP_ATOMIC);
-	if (!skb) {
-		ND_PRINTK(0, err, "ndisc: %s failed to allocate an skb\n",
-			  __func__);
+	if (!skb)
 		return NULL;
-	}
 
 	skb->protocol = htons(ETH_P_IPV6);
 	skb->dev = dev;
@@ -433 +437 @@
 	/* Manually assign socket ownership as we avoid calling
 	 * sock_alloc_send_pskb() to bypass wmem buffer limits
 	 */
-	skb_set_owner_w(skb, sk);
+	rcu_read_lock();
+	skb_set_owner_w(skb, dev_net_rcu(dev)->ipv6.ndisc_sk);
+	rcu_read_unlock();
 
 	return skb;
 }
@@ -471 +473 @@
 void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
 		    const struct in6_addr *saddr)
 {
-	struct dst_entry *dst = skb_dst(skb);
-	struct net *net = dev_net(skb->dev);
-	struct sock *sk = net->ipv6.ndisc_sk;
-	struct inet6_dev *idev;
-	int err;
 	struct icmp6hdr *icmp6h = icmp6_hdr(skb);
+	struct dst_entry *dst = skb_dst(skb);
+	struct inet6_dev *idev;
+	struct net *net;
+	struct sock *sk;
+	int err;
 	u8 type;
 
 	type = icmp6h->icmp6_type;
 
+	rcu_read_lock();
+
+	net = dev_net_rcu(skb->dev);
+	sk = net->ipv6.ndisc_sk;
 	if (!dst) {
 		struct flowi6 fl6;
 		int oif = skb->dev->ifindex;
@@ -492 +490 @@
 		icmpv6_flow_init(sk, &fl6, type, saddr, daddr, oif);
 		dst = icmp6_dst_alloc(skb->dev, &fl6);
 		if (IS_ERR(dst)) {
+			rcu_read_unlock();
 			kfree_skb(skb);
 			return;
 		}
@@ -507 +504 @@
 
 	ip6_nd_hdr(skb, saddr, daddr, READ_ONCE(inet6_sk(sk)->hop_limit), skb->len);
 
-	rcu_read_lock();
 	idev = __in6_dev_get(dst->dev);
 	IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
 
@@ -1696 +1694 @@
 	bool ret;
 
 	if (netif_is_l3_master(skb->dev)) {
-		dev = __dev_get_by_index(dev_net(skb->dev), IPCB(skb)->iif);
+		dev = dev_get_by_index_rcu(dev_net(skb->dev), IPCB(skb)->iif);
 		if (!dev)
 			return;
 	}
+6 -1
net/ipv6/route.c
@@ -3196 +3196 @@
 {
 	struct net_device *dev = dst->dev;
 	unsigned int mtu = dst_mtu(dst);
-	struct net *net = dev_net(dev);
+	struct net *net;
 
 	mtu -= sizeof(struct ipv6hdr) + sizeof(struct tcphdr);
 
+	rcu_read_lock();
+
+	net = dev_net_rcu(dev);
 	if (mtu < net->ipv6.sysctl.ip6_rt_min_advmss)
 		mtu = net->ipv6.sysctl.ip6_rt_min_advmss;
+
+	rcu_read_unlock();
 
 	/*
 	 * Maximal non-jumbo IPv6 payload is IPV6_MAXPLEN and
+2 -6
net/netfilter/nf_flow_table_ip.c
@@ -381 +381 @@
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 
 	mtu = flow->tuplehash[dir].tuple.mtu + ctx->offset;
-	if (unlikely(nf_flow_exceeds_mtu(skb, mtu))) {
-		flow_offload_teardown(flow);
+	if (unlikely(nf_flow_exceeds_mtu(skb, mtu)))
 		return 0;
-	}
 
 	iph = (struct iphdr *)(skb_network_header(skb) + ctx->offset);
 	thoff = (iph->ihl * 4) + ctx->offset;
@@ -660 +662 @@
 	flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
 
 	mtu = flow->tuplehash[dir].tuple.mtu + ctx->offset;
-	if (unlikely(nf_flow_exceeds_mtu(skb, mtu))) {
-		flow_offload_teardown(flow);
+	if (unlikely(nf_flow_exceeds_mtu(skb, mtu)))
 		return 0;
-	}
 
 	ip6h = (struct ipv6hdr *)(skb_network_header(skb) + ctx->offset);
 	thoff = sizeof(*ip6h) + ctx->offset;
+9 -3
net/openvswitch/datapath.c
@@ -2101 +2101 @@
 {
 	struct ovs_header *ovs_header;
 	struct ovs_vport_stats vport_stats;
+	struct net *net_vport;
 	int err;
 
 	ovs_header = genlmsg_put(skb, portid, seq, &dp_vport_genl_family,
@@ -2118 +2117 @@
 	    nla_put_u32(skb, OVS_VPORT_ATTR_IFINDEX, vport->dev->ifindex))
 		goto nla_put_failure;
 
-	if (!net_eq(net, dev_net(vport->dev))) {
-		int id = peernet2id_alloc(net, dev_net(vport->dev), gfp);
+	rcu_read_lock();
+	net_vport = dev_net_rcu(vport->dev);
+	if (!net_eq(net, net_vport)) {
+		int id = peernet2id_alloc(net, net_vport, GFP_ATOMIC);
 
 		if (nla_put_s32(skb, OVS_VPORT_ATTR_NETNSID, id))
-			goto nla_put_failure;
+			goto nla_put_failure_unlock;
 	}
+	rcu_read_unlock();
 
 	ovs_vport_get_stats(vport, &vport_stats);
 	if (nla_put_64bit(skb, OVS_VPORT_ATTR_STATS,
@@ -2147 +2143 @@
 	genlmsg_end(skb, ovs_header);
 	return 0;
 
+nla_put_failure_unlock:
+	rcu_read_unlock();
 nla_put_failure:
 	err = -EMSGSIZE;
 error:
+3 -4
net/rxrpc/ar-internal.h
@@ -327 +327 @@
 	 * packet with a maximum set of jumbo subpackets or a PING ACK padded
	 * out to 64K with zeropages for PMTUD.
	 */
-	struct kvec kvec[RXRPC_MAX_NR_JUMBO > 3 + 16 ?
-			 RXRPC_MAX_NR_JUMBO : 3 + 16];
+	struct kvec kvec[1 + RXRPC_MAX_NR_JUMBO > 3 + 16 ?
+			 1 + RXRPC_MAX_NR_JUMBO : 3 + 16];
 };
 
 /*
@@ -874 +874 @@
 #define RXRPC_TXBUF_RESENT	0x100	/* Set if has been resent */
 	__be16		cksum;		/* Checksum to go in header */
 	bool		jumboable;	/* Can be non-terminal jumbo subpacket */
-	u8		nr_kvec;	/* Amount of kvec[] used */
-	struct kvec	kvec[1];
+	void		*data;		/* Data with preceding jumbo header */
 };
 
 static inline bool rxrpc_sending_to_server(const struct rxrpc_txbuf *txb)
+35 -15
net/rxrpc/output.c
@@ -428 +428 @@
 static size_t rxrpc_prepare_data_subpacket(struct rxrpc_call *call,
 					   struct rxrpc_send_data_req *req,
 					   struct rxrpc_txbuf *txb,
+					   struct rxrpc_wire_header *whdr,
 					   rxrpc_serial_t serial, int subpkt)
 {
-	struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;
-	struct rxrpc_jumbo_header *jumbo = (void *)(whdr + 1) - sizeof(*jumbo);
+	struct rxrpc_jumbo_header *jumbo = txb->data - sizeof(*jumbo);
 	enum rxrpc_req_ack_trace why;
 	struct rxrpc_connection *conn = call->conn;
-	struct kvec *kv = &call->local->kvec[subpkt];
+	struct kvec *kv = &call->local->kvec[1 + subpkt];
 	size_t len = txb->pkt_len;
 	bool last;
 	u8 flags;
@@ -491 +491 @@
 	}
 dont_set_request_ack:
 
-	/* The jumbo header overlays the wire header in the txbuf. */
+	/* There's a jumbo header prepended to the data if we need it. */
 	if (subpkt < req->n - 1)
 		flags |= RXRPC_JUMBO_PACKET;
 	else
 		flags &= ~RXRPC_JUMBO_PACKET;
 	if (subpkt == 0) {
 		whdr->flags	= flags;
-		whdr->serial	= htonl(txb->serial);
 		whdr->cksum	= txb->cksum;
-		whdr->serviceId	= htons(conn->service_id);
-		kv->iov_base	= whdr;
-		len += sizeof(*whdr);
+		kv->iov_base	= txb->data;
 	} else {
 		jumbo->flags	= flags;
 		jumbo->pad	= 0;
@@ -532 +535 @@
 /*
  * Prepare a (jumbo) packet for transmission.
  */
-static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call, struct rxrpc_send_data_req *req)
+static size_t rxrpc_prepare_data_packet(struct rxrpc_call *call,
+					struct rxrpc_send_data_req *req,
+					struct rxrpc_wire_header *whdr)
 {
 	struct rxrpc_txqueue *tq = req->tq;
 	rxrpc_serial_t serial;
@@ -547 +548 @@
 
 	/* Each transmission of a Tx packet needs a new serial number */
 	serial = rxrpc_get_next_serials(call->conn, req->n);
+
+	whdr->epoch		= htonl(call->conn->proto.epoch);
+	whdr->cid		= htonl(call->cid);
+	whdr->callNumber	= htonl(call->call_id);
+	whdr->seq		= htonl(seq);
+	whdr->serial		= htonl(serial);
+	whdr->type		= RXRPC_PACKET_TYPE_DATA;
+	whdr->flags		= 0;
+	whdr->userStatus	= 0;
+	whdr->securityIndex	= call->security_ix;
+	whdr->_rsvd		= 0;
+	whdr->serviceId		= htons(call->conn->service_id);
 
 	call->tx_last_serial = serial + req->n - 1;
 	call->tx_last_sent = req->now;
@@ -587 +576 @@
 		if (i + 1 == req->n)
 			/* Only sample the last subpacket in a jumbo. */
 			__set_bit(ix, &tq->rtt_samples);
-		len += rxrpc_prepare_data_subpacket(call, req, txb, serial, i);
+		len += rxrpc_prepare_data_subpacket(call, req, txb, whdr, serial, i);
 		serial++;
 		seq++;
 		i++;
@@ -629 +618 @@
 	}
 
 	rxrpc_set_keepalive(call, req->now);
+	page_frag_free(whdr);
 	return len;
 }
 
@@ -638 +626 @@
  */
 void rxrpc_send_data_packet(struct rxrpc_call *call, struct rxrpc_send_data_req *req)
 {
+	struct rxrpc_wire_header *whdr;
 	struct rxrpc_connection *conn = call->conn;
 	enum rxrpc_tx_point frag;
 	struct rxrpc_txqueue *tq = req->tq;
 	struct rxrpc_txbuf *txb;
 	struct msghdr msg;
 	rxrpc_seq_t seq = req->seq;
-	size_t len;
+	size_t len = sizeof(*whdr);
 	bool new_call = test_bit(RXRPC_CALL_BEGAN_RX_TIMER, &call->flags);
 	int ret, stat_ix;
 
 	_enter("%x,%x-%x", tq->qbase, seq, seq + req->n - 1);
 
+	whdr = page_frag_alloc(&call->local->tx_alloc, sizeof(*whdr), GFP_NOFS);
+	if (!whdr)
+		return; /* Drop the packet if no memory. */
+
+	call->local->kvec[0].iov_base = whdr;
+	call->local->kvec[0].iov_len = sizeof(*whdr);
+
 	stat_ix = umin(req->n, ARRAY_SIZE(call->rxnet->stat_tx_jumbo)) - 1;
 	atomic_inc(&call->rxnet->stat_tx_jumbo[stat_ix]);
 
-	len = rxrpc_prepare_data_packet(call, req);
+	len += rxrpc_prepare_data_packet(call, req, whdr);
 	txb = tq->bufs[seq & RXRPC_TXQ_MASK];
 
-	iov_iter_kvec(&msg.msg_iter, WRITE, call->local->kvec, req->n, len);
+	iov_iter_kvec(&msg.msg_iter, WRITE, call->local->kvec, 1 + req->n, len);
 
 	msg.msg_name	= &call->peer->srx.transport;
 	msg.msg_namelen	= call->peer->srx.transport_len;
@@ -715 +695 @@
 
 	if (ret == -EMSGSIZE) {
 		rxrpc_inc_stat(call->rxnet, stat_tx_data_send_msgsize);
-		trace_rxrpc_tx_packet(call->debug_id, call->local->kvec[0].iov_base, frag);
+		trace_rxrpc_tx_packet(call->debug_id, whdr, frag);
 		ret = 0;
 	} else if (ret < 0) {
 		rxrpc_inc_stat(call->rxnet, stat_tx_data_send_fail);
 		trace_rxrpc_tx_fail(call->debug_id, txb->serial, ret, frag);
 	} else {
-		trace_rxrpc_tx_packet(call->debug_id, call->local->kvec[0].iov_base, frag);
+		trace_rxrpc_tx_packet(call->debug_id, whdr, frag);
 	}
 
 	rxrpc_tx_backoff(call, ret);
+7
net/rxrpc/peer_event.c
@@ -169 +169 @@
 		goto out;
 	}
 
+	if ((serr->ee.ee_origin == SO_EE_ORIGIN_ICMP6 &&
+	     serr->ee.ee_type == ICMPV6_PKT_TOOBIG &&
+	     serr->ee.ee_code == 0)) {
+		rxrpc_adjust_mtu(peer, serr->ee.ee_info);
+		goto out;
+	}
+
 	rxrpc_store_error(peer, skb);
 out:
 	rxrpc_put_peer(peer, rxrpc_peer_put_input_error);
+5 -8
net/rxrpc/rxkad.c
@@ -257 +257 @@
 			       struct rxrpc_txbuf *txb,
 			       struct skcipher_request *req)
 {
-	struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;
-	struct rxkad_level1_hdr *hdr = (void *)(whdr + 1);
+	struct rxkad_level1_hdr *hdr = txb->data;
 	struct rxrpc_crypt iv;
 	struct scatterlist sg;
 	size_t pad;
@@ -273 +274 @@
 	pad = RXKAD_ALIGN - pad;
 	pad &= RXKAD_ALIGN - 1;
 	if (pad) {
-		memset(txb->kvec[0].iov_base + txb->offset, 0, pad);
+		memset(txb->data + txb->offset, 0, pad);
 		txb->pkt_len += pad;
 	}
 
@@ -299 +300 @@
 			       struct skcipher_request *req)
 {
 	const struct rxrpc_key_token *token;
-	struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;
-	struct rxkad_level2_hdr *rxkhdr = (void *)(whdr + 1);
+	struct rxkad_level2_hdr *rxkhdr = txb->data;
 	struct rxrpc_crypt iv;
 	struct scatterlist sg;
 	size_t content, pad;
@@ -317 +319 @@
 	txb->pkt_len = round_up(content, RXKAD_ALIGN);
 	pad = txb->pkt_len - content;
 	if (pad)
-		memset(txb->kvec[0].iov_base + txb->offset, 0, pad);
+		memset(txb->data + txb->offset, 0, pad);
 
 	/* encrypt from the session key */
 	token = call->conn->key->payload.data[0];
@@ -405 +407 @@
 
 	/* Clear excess space in the packet */
 	if (txb->pkt_len < txb->alloc_size) {
-		struct rxrpc_wire_header *whdr = txb->kvec[0].iov_base;
 		size_t gap = txb->alloc_size - txb->pkt_len;
-		void *p = whdr + 1;
+		void *p = txb->data;
 
 		memset(p + txb->pkt_len, 0, gap);
 	}
+1 -3
net/rxrpc/sendmsg.c
···
 419  419  			size_t copy = umin(txb->space, msg_data_left(msg));
 420  420  
 421  421  			_debug("add %zu", copy);
 422      -			if (!copy_from_iter_full(txb->kvec[0].iov_base + txb->offset,
      422 +			if (!copy_from_iter_full(txb->data + txb->offset,
 423  423  						 copy, &msg->msg_iter))
 424  424  				goto efault;
 425  425  			_debug("added");
···
 445  445  			ret = call->security->secure_packet(call, txb);
 446  446  			if (ret < 0)
 447  447  				goto out;
 448      -
 449      -			txb->kvec[0].iov_len += txb->len;
 450  448  			rxrpc_queue_packet(rx, call, txb, notify_end_tx);
 451  449  			txb = NULL;
 452  450  		}
+10 -27
net/rxrpc/txbuf.c
···
  19   19  struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_size,
  20   20  					   size_t data_align, gfp_t gfp)
  21   21  {
  22      -	struct rxrpc_wire_header *whdr;
  23   22  	struct rxrpc_txbuf *txb;
  24      -	size_t total, hoff;
       23 +	size_t total, doff, jsize = sizeof(struct rxrpc_jumbo_header);
  25   24  	void *buf;
  26   25  
  27   26  	txb = kzalloc(sizeof(*txb), gfp);
  28   27  	if (!txb)
  29   28  		return NULL;
  30   29  
  31      -	hoff = round_up(sizeof(*whdr), data_align) - sizeof(*whdr);
  32      -	total = hoff + sizeof(*whdr) + data_size;
       30 +	/* We put a jumbo header in the buffer, but not a full wire header to
       31 +	 * avoid delayed-corruption problems with zerocopy.
       32 +	 */
       33 +	doff = round_up(jsize, data_align);
       34 +	total = doff + data_size;
  33   35  
  34   36  	data_align = umax(data_align, L1_CACHE_BYTES);
  35   37  	mutex_lock(&call->conn->tx_data_alloc_lock);
···
  43   41  		return NULL;
  44   42  	}
  45   43  
  46      -	whdr = buf + hoff;
  47      -
  48   44  	refcount_set(&txb->ref, 1);
  49   45  	txb->call_debug_id = call->debug_id;
  50   46  	txb->debug_id = atomic_inc_return(&rxrpc_txbuf_debug_ids);
  51   47  	txb->alloc_size = data_size;
  52   48  	txb->space = data_size;
  53      -	txb->offset = sizeof(*whdr);
       49 +	txb->offset = 0;
  54   50  	txb->flags = call->conn->out_clientflag;
  55   51  	txb->seq = call->send_top + 1;
  56      -	txb->nr_kvec = 1;
  57      -	txb->kvec[0].iov_base = whdr;
  58      -	txb->kvec[0].iov_len = sizeof(*whdr);
  59      -
  60      -	whdr->epoch = htonl(call->conn->proto.epoch);
  61      -	whdr->cid = htonl(call->cid);
  62      -	whdr->callNumber = htonl(call->call_id);
  63      -	whdr->seq = htonl(txb->seq);
  64      -	whdr->type = RXRPC_PACKET_TYPE_DATA;
  65      -	whdr->flags = 0;
  66      -	whdr->userStatus = 0;
  67      -	whdr->securityIndex = call->security_ix;
  68      -	whdr->_rsvd = 0;
  69      -	whdr->serviceId = htons(call->dest_srx.srx_service);
       52 +	txb->data = buf + doff;
  70   53  
  71   54  	trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 1,
  72   55  			  rxrpc_txbuf_alloc_data);
···
  77   90  
  78   91  static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb)
  79   92  {
  80      -	int i;
  81      -
  82   93  	trace_rxrpc_txbuf(txb->debug_id, txb->call_debug_id, txb->seq, 0,
  83   94  			  rxrpc_txbuf_free);
  84      -	for (i = 0; i < txb->nr_kvec; i++)
  85      -		if (txb->kvec[i].iov_base &&
  86      -		    !is_zero_pfn(page_to_pfn(virt_to_page(txb->kvec[i].iov_base))))
  87      -			page_frag_free(txb->kvec[i].iov_base);
       95 +	if (txb->data)
       96 +		page_frag_free(txb->data);
  88   97  	kfree(txb);
  89   98  	atomic_dec(&rxrpc_nr_txbuf);
  90   99  }
+7 -1
net/vmw_vsock/af_vsock.c
···
 824  824  	 */
 825  825  	lock_sock_nested(sk, level);
 826  826  
 827      -	sock_orphan(sk);
      827 +	/* Indicate to vsock_remove_sock() that the socket is being released and
      828 +	 * can be removed from the bound_table. Unlike transport reassignment
      829 +	 * case, where the socket must remain bound despite vsock_remove_sock()
      830 +	 * being called from the transport release() callback.
      831 +	 */
      832 +	sock_set_flag(sk, SOCK_DEAD);
 828  833  
 829  834  	if (vsk->transport)
 830  835  		vsk->transport->release(vsk);
 831  836  	else if (sock_type_connectible(sk->sk_type))
 832  837  		vsock_remove_sock(vsk);
 833  838  
      839 +	sock_orphan(sk);
 834  840  	sk->sk_shutdown = SHUTDOWN_MASK;
 835  841  
 836  842  	skb_queue_purge(&sk->sk_receive_queue);
+41
tools/testing/vsock/vsock_test.c
···
 1788 1788  	close(fd);
 1789 1789  }
 1790 1790  
      1791 +static void test_stream_linger_client(const struct test_opts *opts)
      1792 +{
      1793 +	struct linger optval = {
      1794 +		.l_onoff = 1,
      1795 +		.l_linger = 1
      1796 +	};
      1797 +	int fd;
      1798 +
      1799 +	fd = vsock_stream_connect(opts->peer_cid, opts->peer_port);
      1800 +	if (fd < 0) {
      1801 +		perror("connect");
      1802 +		exit(EXIT_FAILURE);
      1803 +	}
      1804 +
      1805 +	if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &optval, sizeof(optval))) {
      1806 +		perror("setsockopt(SO_LINGER)");
      1807 +		exit(EXIT_FAILURE);
      1808 +	}
      1809 +
      1810 +	close(fd);
      1811 +}
      1812 +
      1813 +static void test_stream_linger_server(const struct test_opts *opts)
      1814 +{
      1815 +	int fd;
      1816 +
      1817 +	fd = vsock_stream_accept(VMADDR_CID_ANY, opts->peer_port, NULL);
      1818 +	if (fd < 0) {
      1819 +		perror("accept");
      1820 +		exit(EXIT_FAILURE);
      1821 +	}
      1822 +
      1823 +	vsock_wait_remote_close(fd);
      1824 +	close(fd);
      1825 +}
      1826 +
 1791 1827  static struct test_case test_cases[] = {
 1792 1828  	{
 1793 1829  		.name = "SOCK_STREAM connection reset",
···
 1978 1942  		.name = "SOCK_STREAM retry failed connect()",
 1979 1943  		.run_client = test_stream_connect_retry_client,
 1980 1944  		.run_server = test_stream_connect_retry_server,
 1981 1945  	},
      1946 +	{
      1947 +		.name = "SOCK_STREAM SO_LINGER null-ptr-deref",
      1948 +		.run_client = test_stream_linger_client,
      1949 +		.run_server = test_stream_linger_server,
      1950 +	},
 1982 1951  	{},
 1983 1952  };