Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.11-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"We have a few fixes for long standing issues, in particular Eric's fix
to not underestimate the skb sizes, and my fix for brokenness of
register_netdevice() error path. They may uncover other bugs so we
will keep an eye on them. Also included are Willem's fixes for
kmap(_atomic).

Looking at the "current release" fixes, it seems we are about one rc
behind a normal cycle. We've previously seen an uptick of "people had
run their test suites" / "humans actually tried to use new features"
fixes between rc2 and rc3.

Summary:

Current release - regressions:

- fix feature enforcement to allow NETIF_F_HW_TLS_TX if IP_CSUM &&
IPV6_CSUM

- dcb: accept RTM_GETDCB messages carrying set-like DCB commands if
user is admin for backward-compatibility

- selftests/tls: fix selftests build after adding ChaCha20-Poly1305

Current release - always broken:

- ppp: fix refcount underflow on channel unbridge

- bnxt_en: clear DEFRAG flag in firmware message when retrying flashing

- smc: fix out-of-bounds access in the new netlink interface

Previous releases - regressions:

- fix use-after-free with UDP GRO by frags

- mptcp: better msk-level shutdown

- rndis_host: set proper input size for OID_GEN_PHYSICAL_MEDIUM
request

- i40e: xsk: fix potential NULL pointer dereferencing

Previous releases - always broken:

- skb frag: kmap_atomic fixes

- avoid 32 x truesize under-estimation for tiny skbs

- fix issues around register_netdevice() failures

- udp: prevent reuseport_select_sock from reading uninitialized socks

- dsa: unbind all switches from tree when DSA master unbinds

- dsa: clear devlink port type before unregistering slave netdevs

- can: isotp: isotp_getname(): fix kernel information leak

- mlxsw: core: Thermal control fixes

- ipv6: validate GSO SKB against MTU before finish IPv6 processing

- stmmac: use __napi_schedule() for PREEMPT_RT

- net: mvpp2: remove Pause and Asym_Pause support

Misc:

- remove from MAINTAINERS folks who had been inactive for >5yrs"

* tag 'net-5.11-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (58 commits)
mptcp: fix locking in mptcp_disconnect()
net: Allow NETIF_F_HW_TLS_TX if IP_CSUM && IPV6_CSUM
MAINTAINERS: dccp: move Gerrit Renker to CREDITS
MAINTAINERS: ipvs: move Wensong Zhang to CREDITS
MAINTAINERS: tls: move Aviad to CREDITS
MAINTAINERS: ena: remove Zorik Machulsky from reviewers
MAINTAINERS: vrf: move Shrijeet to CREDITS
MAINTAINERS: net: move Alexey Kuznetsov to CREDITS
MAINTAINERS: altx: move Jay Cliburn to CREDITS
net: avoid 32 x truesize under-estimation for tiny skbs
nt: usb: USB_RTL8153_ECM should not default to y
net: stmmac: fix taprio configuration when base_time is in the past
net: stmmac: fix taprio schedule configuration
net: tip: fix a couple kernel-doc markups
net: sit: unregister_netdevice on newlink's error path
net: stmmac: Fixed mtu channged by cache aligned
cxgb4/chtls: Fix tid stuck due to wrong update of qid
i40e: fix potential NULL pointer dereferencing
net: stmmac: use __napi_schedule() for PREEMPT_RT
can: mcp251xfd: mcp251xfd_handle_rxif_one(): fix wrong NULL pointer check
...

+570 -224
+24
CREDITS
···
 S: Las Heras, Mendoza CP 5539
 S: Argentina
 
+N: Jay Cliburn
+E: jcliburn@gmail.com
+D: ATLX Ethernet drivers
+
 N: Steven P. Cole
 E: scole@lanl.gov
 E: elenstev@mesatop.com
···
 D: Major kbuild rework during the 2.5 cycle
 D: ISDN Maintainer
 S: USA
+
+N: Gerrit Renker
+E: gerrit@erg.abdn.ac.uk
+D: DCCP protocol support.
 
 N: Philip Gladstone
 E: philip@gladstonefamily.net
···
 E: seasons@makosteszta.sote.hu
 D: Original author of software suspend
 
+N: Alexey Kuznetsov
+E: kuznet@ms2.inr.ac.ru
+D: Author and maintainer of large parts of the networking stack
+
 N: Jaroslav Kysela
 E: perex@perex.cz
 W: https://www.perex.cz
···
 N: Wolfgang Muees
 E: wolfgang@iksw-muees.de
 D: Auerswald USB driver
+
+N: Shrijeet Mukherjee
+E: shrijeet@gmail.com
+D: Network routing domains (VRF).
 
 N: Paul Mundt
 E: paul.mundt@gmail.com
···
 S: 16 Baliqiao Nanjie, Beijing 101100
 S: People's Repulic of China
 
+N: Aviad Yehezkel
+E: aviadye@nvidia.com
+D: Kernel TLS implementation and offload support.
+
 N: Victor Yodaiken
 E: yodaiken@fsmlabs.com
 D: RTLinux (RealTime Linux)
···
 S: 1507 145th Place SE #B5
 S: Bellevue, Washington 98007
 S: USA
+
+N: Wensong Zhang
+E: wensong@linux-vs.org
+D: IP virtual server (IPVS).
 
 N: Haojian Zhuang
 E: haojian.zhuang@gmail.com
+1
Documentation/devicetree/bindings/net/renesas,etheravb.yaml
···
       enum:
         - renesas,etheravb-r8a774a1
         - renesas,etheravb-r8a774b1
+        - renesas,etheravb-r8a774e1
         - renesas,etheravb-r8a7795
         - renesas,etheravb-r8a7796
         - renesas,etheravb-r8a77961
+6 -2
Documentation/devicetree/bindings/net/snps,dwmac.yaml
···
           * snps,route-dcbcp, DCB Control Packets
           * snps,route-up, Untagged Packets
           * snps,route-multi-broad, Multicast & Broadcast Packets
-          * snps,priority, RX queue priority (Range 0x0 to 0xF)
+          * snps,priority, bitmask of the tagged frames priorities assigned to
+            the queue
 
   snps,mtl-tx-config:
     $ref: /schemas/types.yaml#/definitions/phandle
···
           * snps,idle_slope, unlock on WoL
           * snps,high_credit, max write outstanding req. limit
           * snps,low_credit, max read outstanding req. limit
-          * snps,priority, TX queue priority (Range 0x0 to 0xF)
+          * snps,priority, bitmask of the priorities assigned to the queue.
+            When a PFC frame is received with priorities matching the bitmask,
+            the queue is blocked from transmitting for the pause time specified
+            in the PFC frame.
 
   snps,reset-gpio:
     deprecated: true
+165 -6
Documentation/networking/netdevices.rst
···
 The following is a random collection of documentation regarding
 network devices.
 
-struct net_device allocation rules
-==================================
+struct net_device lifetime rules
+================================
 Network device structures need to persist even after module is unloaded and
 must be allocated with alloc_netdev_mqs() and friends.
 If device has registered successfully, it will be freed on last use
-by free_netdev(). This is required to handle the pathologic case cleanly
-(example: rmmod mydriver </sys/class/net/myeth/mtu )
+by free_netdev(). This is required to handle the pathological case cleanly
+(example: ``rmmod mydriver </sys/class/net/myeth/mtu``)
 
-alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver
+alloc_netdev_mqs() / alloc_netdev() reserve extra space for driver
 private data which gets freed when the network device is freed. If
 separately allocated data is attached to the network device
-(netdev_priv(dev)) then it is up to the module exit handler to free that.
+(netdev_priv()) then it is up to the module exit handler to free that.
+
+There are two groups of APIs for registering struct net_device.
+First group can be used in normal contexts where ``rtnl_lock`` is not already
+held: register_netdev(), unregister_netdev().
+Second group can be used when ``rtnl_lock`` is already held:
+register_netdevice(), unregister_netdevice(), free_netdev().
+
+Simple drivers
+--------------
+
+Most drivers (especially device drivers) handle the lifetime of struct
+net_device in a context where ``rtnl_lock`` is not held (e.g. driver probe
+and remove paths).
+
+In that case the struct net_device registration is done using
+the register_netdev() and unregister_netdev() functions:
+
+.. code-block:: c
+
+  int probe()
+  {
+    struct net_device *dev;
+    struct my_device_priv *priv;
+    int err;
+
+    dev = alloc_netdev_mqs(...);
+    if (!dev)
+      return -ENOMEM;
+    priv = netdev_priv(dev);
+
+    /* ... do all device setup before calling register_netdev() ...
+     */
+
+    err = register_netdev(dev);
+    if (err)
+      goto err_undo;
+
+    /* net_device is visible to the user! */
+    return 0;
+
+  err_undo:
+    /* ... undo the device setup ... */
+    free_netdev(dev);
+    return err;
+  }
+
+  void remove()
+  {
+    unregister_netdev(dev);
+    free_netdev(dev);
+  }
+
+Note that after calling register_netdev() the device is visible in the system.
+Users can open it and start sending / receiving traffic immediately,
+or run any other callback, so all initialization must be done prior to
+registration.
+
+unregister_netdev() closes the device and waits for all users to be done
+with it. The memory of struct net_device itself may still be referenced
+by sysfs but all operations on that device will fail.
+
+free_netdev() can be called after unregister_netdev() returns or when
+register_netdev() failed.
+
+Device management under RTNL
+----------------------------
+
+Registering struct net_device while in a context which already holds
+the ``rtnl_lock`` requires extra care. In those scenarios most drivers
+will want to make use of struct net_device's ``needs_free_netdev``
+and ``priv_destructor`` members for freeing of state.
+
+Example flow of netdev handling under ``rtnl_lock``:
+
+.. code-block:: c
+
+  static void my_setup(struct net_device *dev)
+  {
+    dev->needs_free_netdev = true;
+  }
+
+  static void my_destructor(struct net_device *dev)
+  {
+    struct my_device_priv *priv = netdev_priv(dev);
+
+    some_obj_destroy(priv->obj);
+    some_uninit(priv);
+  }
+
+  int create_link()
+  {
+    struct net_device *dev;
+    struct my_device_priv *priv;
+    int err;
+
+    ASSERT_RTNL();
+
+    dev = alloc_netdev(sizeof(*priv), "net%d", NET_NAME_UNKNOWN, my_setup);
+    if (!dev)
+      return -ENOMEM;
+    priv = netdev_priv(dev);
+
+    /* Implicit constructor */
+    err = some_init(priv);
+    if (err)
+      goto err_free_dev;
+
+    priv->obj = some_obj_create();
+    if (!priv->obj) {
+      err = -ENOMEM;
+      goto err_some_uninit;
+    }
+    /* End of constructor, set the destructor: */
+    dev->priv_destructor = my_destructor;
+
+    err = register_netdevice(dev);
+    if (err)
+      /* register_netdevice() calls destructor on failure */
+      goto err_free_dev;
+
+    /* If anything fails now unregister_netdevice() (or unregister_netdev())
+     * will take care of calling my_destructor and free_netdev().
+     */
+
+    return 0;
+
+  err_some_uninit:
+    some_uninit(priv);
+  err_free_dev:
+    free_netdev(dev);
+    return err;
+  }
+
+If struct net_device.priv_destructor is set it will be called by the core
+some time after unregister_netdevice(); it will also be called if
+register_netdevice() fails. The callback may be invoked with or without
+``rtnl_lock`` held.
+
+There is no explicit constructor callback; the driver "constructs" the private
+netdev state after allocating it and before registration.
+
+Setting struct net_device.needs_free_netdev makes the core call free_netdev()
+automatically after unregister_netdevice() when all references to the device
+are gone. It only takes effect after a successful call to register_netdevice(),
+so if register_netdevice() fails the driver is responsible for calling
+free_netdev().
+
+free_netdev() is safe to call on error paths right after unregister_netdevice()
+or when register_netdevice() fails. Parts of the netdev (de)registration process
+happen after ``rtnl_lock`` is released, therefore in those cases free_netdev()
+will defer some of the processing until ``rtnl_lock`` is released.
+
+Devices spawned from struct rtnl_link_ops should never free the
+struct net_device directly.
+
+.ndo_init and .ndo_uninit
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``.ndo_init`` and ``.ndo_uninit`` callbacks are called during net_device
+registration and de-registration, under ``rtnl_lock``. Drivers can use
+those e.g. when parts of their init process need to run under ``rtnl_lock``.
+
+``.ndo_init`` runs before the device is visible in the system; ``.ndo_uninit``
+runs during de-registration after the device is closed, but other subsystems
+may still have outstanding references to the netdevice.
 
 MTU
 ===
+1 -1
Documentation/networking/tls-offload.rst
···
 offloads, old connections will remain active after flags are cleared.
 
 TLS encryption cannot be offloaded to devices without checksum calculation
-offload. Hence, TLS TX device feature flag requires NETIF_F_HW_CSUM being set.
+offload. Hence, TLS TX device feature flag requires TX csum offload being set.
 Disabling the latter implies clearing the former. Disabling TX checksum offload
 should not affect old connections, and drivers should make sure checksum
 calculation does not break for them.
+1 -8
MAINTAINERS
···
 M: Arthur Kiyanovski <akiyano@amazon.com>
 R: Guy Tzalik <gtzalik@amazon.com>
 R: Saeed Bishara <saeedb@amazon.com>
-R: Zorik Machulsky <zorik@amazon.com>
 L: netdev@vger.kernel.org
 S: Supported
 F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst
···
 F: drivers/hwmon/asus_atk0110.c
 
 ATLX ETHERNET DRIVERS
-M: Jay Cliburn <jcliburn@gmail.com>
 M: Chris Snook <chris.snook@gmail.com>
 L: netdev@vger.kernel.org
 S: Maintained
···
 F: drivers/scsi/dc395x.*
 
 DCCP PROTOCOL
-M: Gerrit Renker <gerrit@erg.abdn.ac.uk>
 L: dccp@vger.kernel.org
-S: Maintained
+S: Orphan
 W: http://www.linuxfoundation.org/collaborate/workgroups/networking/dccp
 F: include/linux/dccp.h
 F: include/linux/tfrc.h
···
 F: drivers/scsi/ips*
 
 IPVS
-M: Wensong Zhang <wensong@linux-vs.org>
 M: Simon Horman <horms@verge.net.au>
 M: Julian Anastasov <ja@ssi.bg>
 L: netdev@vger.kernel.org
···
 
 NETWORKING [IPv4/IPv6]
 M: "David S. Miller" <davem@davemloft.net>
-M: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
 M: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
 L: netdev@vger.kernel.org
 S: Maintained
···
 
 NETWORKING [TLS]
 M: Boris Pismenny <borisp@nvidia.com>
-M: Aviad Yehezkel <aviadye@nvidia.com>
 M: John Fastabend <john.fastabend@gmail.com>
 M: Daniel Borkmann <daniel@iogearbox.net>
 M: Jakub Kicinski <kuba@kernel.org>
···
 
 VRF
 M: David Ahern <dsahern@kernel.org>
-M: Shrijeet Mukherjee <shrijeet@gmail.com>
 L: netdev@vger.kernel.org
 S: Maintained
 F: Documentation/networking/vrf.rst
+1 -1
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
···
 	else
 		skb = alloc_can_skb(priv->ndev, (struct can_frame **)&cfd);
 
-	if (!cfd) {
+	if (!skb) {
 		stats->rx_dropped++;
 		return 0;
 	}
+2 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
···
 
 	if (rc && ((struct hwrm_err_output *)&resp)->cmd_err ==
 	    NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) {
-		install.flags |=
+		install.flags =
 			cpu_to_le16(NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG);
 
 		rc = _hwrm_send_message_silent(bp, &install,
···
 		 * UPDATE directory and try the flash again
 		 */
 		defrag_attempted = true;
+		install.flags = 0;
 		rc = __bnxt_flash_nvram(bp->dev,
 					BNX_DIR_TYPE_UPDATE,
 					BNX_DIR_ORDINAL_FIRST,
+6 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
···
 
 int bnxt_get_ulp_stat_ctxs(struct bnxt *bp)
 {
-	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP))
-		return BNXT_MIN_ROCE_STAT_CTXS;
+	if (bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) {
+		struct bnxt_en_dev *edev = bp->edev;
+
+		if (edev->ulp_tbl[BNXT_ROCE_ULP].msix_requested)
+			return BNXT_MIN_ROCE_STAT_CTXS;
+	}
 
 	return 0;
 }
+7
drivers/net/ethernet/chelsio/cxgb4/t4_tcb.h
···
 #define TCB_L2T_IX_M		0xfffULL
 #define TCB_L2T_IX_V(x)		((x) << TCB_L2T_IX_S)
 
+#define TCB_T_FLAGS_W		1
+#define TCB_T_FLAGS_S		0
+#define TCB_T_FLAGS_M		0xffffffffffffffffULL
+#define TCB_T_FLAGS_V(x)	((__u64)(x) << TCB_T_FLAGS_S)
+
+#define TCB_FIELD_COOKIE_TFLAG	1
+
 #define TCB_SMAC_SEL_W		0
 #define TCB_SMAC_SEL_S		24
 #define TCB_SMAC_SEL_M		0xffULL
+4
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
···
 void chtls_tcp_push(struct sock *sk, int flags);
 int chtls_push_frames(struct chtls_sock *csk, int comp);
 int chtls_set_tcb_tflag(struct sock *sk, unsigned int bit_pos, int val);
+void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word,
+				 u64 mask, u64 val, u8 cookie,
+				 int through_l2t);
 int chtls_setkey(struct chtls_sock *csk, u32 keylen, u32 mode, int cipher_type);
+void chtls_set_quiesce_ctrl(struct sock *sk, int val);
 void skb_entail(struct sock *sk, struct sk_buff *skb, int flags);
 unsigned int keyid_to_addr(int start_addr, int keyid);
 void free_tls_keyid(struct sock *sk);
+30 -2
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
···
 #include "chtls.h"
 #include "chtls_cm.h"
 #include "clip_tbl.h"
+#include "t4_tcb.h"
 
 /*
  * State transitions and actions for close. Note that if we are in SYN_SENT
···
 	if (sk->sk_state != TCP_SYN_RECV)
 		chtls_send_abort(sk, mode, skb);
 	else
-		goto out;
+		chtls_set_tcb_field_rpl_skb(sk, TCB_T_FLAGS_W,
+					    TCB_T_FLAGS_V(TCB_T_FLAGS_M), 0,
+					    TCB_FIELD_COOKIE_TFLAG, 1);
 
 	return;
 out:
···
 	else if (tcp_sk(sk)->linger2 < 0 &&
 		 !csk_flag_nochk(csk, CSK_ABORT_SHUTDOWN))
 		chtls_abort_conn(sk, skb);
+	else if (csk_flag_nochk(csk, CSK_TX_DATA_SENT))
+		chtls_set_quiesce_ctrl(sk, 0);
 	break;
 default:
 	pr_info("close_con_rpl in bad state %d\n", sk->sk_state);
···
 	return 0;
 }
 
+static int chtls_set_tcb_rpl(struct chtls_dev *cdev, struct sk_buff *skb)
+{
+	struct cpl_set_tcb_rpl *rpl = cplhdr(skb) + RSS_HDR;
+	unsigned int hwtid = GET_TID(rpl);
+	struct sock *sk;
+
+	sk = lookup_tid(cdev->tids, hwtid);
+
+	/* return EINVAL if socket doesn't exist */
+	if (!sk)
+		return -EINVAL;
+
+	/* Reusing the skb as size of cpl_set_tcb_field structure
+	 * is greater than cpl_abort_req
+	 */
+	if (TCB_COOKIE_G(rpl->cookie) == TCB_FIELD_COOKIE_TFLAG)
+		chtls_send_abort(sk, CPL_ABORT_SEND_RST, NULL);
+
+	kfree_skb(skb);
+	return 0;
+}
+
 chtls_handler_func chtls_handlers[NUM_CPL_CMDS] = {
 	[CPL_PASS_OPEN_RPL]     = chtls_pass_open_rpl,
 	[CPL_CLOSE_LISTSRV_RPL] = chtls_close_listsrv_rpl,
···
 	[CPL_CLOSE_CON_RPL]     = chtls_conn_cpl,
 	[CPL_ABORT_REQ_RSS]     = chtls_conn_cpl,
 	[CPL_ABORT_RPL_RSS]     = chtls_conn_cpl,
-	[CPL_FW4_ACK]           = chtls_wr_ack,
+	[CPL_FW4_ACK]           = chtls_wr_ack,
+	[CPL_SET_TCB_RPL]       = chtls_set_tcb_rpl,
 };
+41
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_hw.c
···
 	return ret < 0 ? ret : 0;
 }
 
+void chtls_set_tcb_field_rpl_skb(struct sock *sk, u16 word,
+				 u64 mask, u64 val, u8 cookie,
+				 int through_l2t)
+{
+	struct sk_buff *skb;
+	unsigned int wrlen;
+
+	wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata);
+	wrlen = roundup(wrlen, 16);
+
+	skb = alloc_skb(wrlen, GFP_KERNEL | __GFP_NOFAIL);
+	if (!skb)
+		return;
+
+	__set_tcb_field(sk, skb, word, mask, val, cookie, 0);
+	send_or_defer(sk, tcp_sk(sk), skb, through_l2t);
+}
+
 /*
  * Set one of the t_flags bits in the TCB.
  */
···
 {
 	return chtls_set_tcb_field(sk, 1, (1ULL << TF_RX_QUIESCE_S),
 				   TF_RX_QUIESCE_V(val));
+}
+
+void chtls_set_quiesce_ctrl(struct sock *sk, int val)
+{
+	struct chtls_sock *csk;
+	struct sk_buff *skb;
+	unsigned int wrlen;
+	int ret;
+
+	wrlen = sizeof(struct cpl_set_tcb_field) + sizeof(struct ulptx_idata);
+	wrlen = roundup(wrlen, 16);
+
+	skb = alloc_skb(wrlen, GFP_ATOMIC);
+	if (!skb)
+		return;
+
+	csk = rcu_dereference_sk_user_data(sk);
+
+	__set_tcb_field(sk, skb, 1, TF_RX_QUIESCE_V(1), 0, 0, 1);
+	set_wr_txq(skb, CPL_PRIORITY_CONTROL, csk->port_id);
+	ret = cxgb4_ofld_send(csk->egress_dev, skb);
+	if (ret < 0)
+		kfree_skb(skb);
 }
 
 /* TLS Key bitmap processing */
+1 -1
drivers/net/ethernet/intel/i40e/i40e_xsk.c
···
 		 * SBP is *not* set in PRT_SBPVSI (default not set).
 		 */
 		skb = i40e_construct_skb_zc(rx_ring, *bi);
-		*bi = NULL;
 		if (!skb) {
 			rx_ring->rx_stats.alloc_buff_failed++;
 			break;
 		}
 
+		*bi = NULL;
 		cleaned_count++;
 		i40e_inc_ntc(rx_ring);
-2
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
···
 
 	phylink_set(mask, Autoneg);
 	phylink_set_port_modes(mask);
-	phylink_set(mask, Pause);
-	phylink_set(mask, Asym_Pause);
 
 	switch (state->interface) {
 	case PHY_INTERFACE_MODE_10GBASER:
+8 -5
drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
···
 #define MLXSW_THERMAL_ASIC_TEMP_NORM	75000	/* 75C */
 #define MLXSW_THERMAL_ASIC_TEMP_HIGH	85000	/* 85C */
 #define MLXSW_THERMAL_ASIC_TEMP_HOT	105000	/* 105C */
-#define MLXSW_THERMAL_ASIC_TEMP_CRIT	110000	/* 110C */
+#define MLXSW_THERMAL_ASIC_TEMP_CRIT	140000	/* 140C */
 #define MLXSW_THERMAL_HYSTERESIS_TEMP	5000	/* 5C */
 #define MLXSW_THERMAL_MODULE_TEMP_SHIFT	(MLXSW_THERMAL_HYSTERESIS_TEMP * 2)
 #define MLXSW_THERMAL_ZONE_MAX_NAME	16
···
 	if (err)
 		return err;
 
+	if (crit_temp > emerg_temp) {
+		dev_warn(dev, "%s : Critical threshold %d is above emergency threshold %d\n",
+			 tz->tzdev->type, crit_temp, emerg_temp);
+		return 0;
+	}
+
 	/* According to the system thermal requirements, the thermal zones are
 	 * defined with four trip points. The critical and emergency
 	 * temperature thresholds, provided by QSFP module are set as "active"
···
 	tz->trips[MLXSW_THERMAL_TEMP_TRIP_NORM].temp = crit_temp;
 	tz->trips[MLXSW_THERMAL_TEMP_TRIP_HIGH].temp = crit_temp;
 	tz->trips[MLXSW_THERMAL_TEMP_TRIP_HOT].temp = emerg_temp;
-	if (emerg_temp > crit_temp)
-		tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
-					MLXSW_THERMAL_MODULE_TEMP_SHIFT;
-	else
-		tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp;
+	tz->trips[MLXSW_THERMAL_TEMP_TRIP_CRIT].temp = emerg_temp +
+					MLXSW_THERMAL_MODULE_TEMP_SHIFT;
 
 	return 0;
 }
+1 -6
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
···
 	.ndo_set_features = netxen_set_features,
 };
 
-static inline bool netxen_function_zero(struct pci_dev *pdev)
-{
-	return (PCI_FUNC(pdev->devfn) == 0) ? true : false;
-}
-
 static inline void netxen_set_interrupt_mode(struct netxen_adapter *adapter,
 					     u32 mode)
 {
···
 	netxen_initialize_interrupt_registers(adapter);
 	netxen_set_msix_bit(pdev, 0);
 
-	if (netxen_function_zero(pdev)) {
+	if (adapter->portnum == 0) {
 		if (!netxen_setup_msi_interrupts(adapter, num_msix))
 			netxen_set_interrupt_mode(adapter, NETXEN_MSI_MODE);
 		else
+4 -48
drivers/net/ethernet/stmicro/stmmac/dwmac5.c
···
 int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg,
			  unsigned int ptp_rate)
 {
-	u32 speed, total_offset, offset, ctrl, ctr_low;
-	u32 extcfg = readl(ioaddr + GMAC_EXT_CONFIG);
-	u32 mac_cfg = readl(ioaddr + GMAC_CONFIG);
 	int i, ret = 0x0;
-	u64 total_ctr;
-
-	if (extcfg & GMAC_CONFIG_EIPG_EN) {
-		offset = (extcfg & GMAC_CONFIG_EIPG) >> GMAC_CONFIG_EIPG_SHIFT;
-		offset = 104 + (offset * 8);
-	} else {
-		offset = (mac_cfg & GMAC_CONFIG_IPG) >> GMAC_CONFIG_IPG_SHIFT;
-		offset = 96 - (offset * 8);
-	}
-
-	speed = mac_cfg & (GMAC_CONFIG_PS | GMAC_CONFIG_FES);
-	speed = speed >> GMAC_CONFIG_FES_SHIFT;
-
-	switch (speed) {
-	case 0x0:
-		offset = offset * 1000;		/* 1G */
-		break;
-	case 0x1:
-		offset = offset * 400;		/* 2.5G */
-		break;
-	case 0x2:
-		offset = offset * 100000;	/* 10M */
-		break;
-	case 0x3:
-		offset = offset * 10000;	/* 100M */
-		break;
-	default:
-		return -EINVAL;
-	}
-
-	offset = offset / 1000;
+	u32 ctrl;
 
 	ret |= dwmac5_est_write(ioaddr, BTR_LOW, cfg->btr[0], false);
 	ret |= dwmac5_est_write(ioaddr, BTR_HIGH, cfg->btr[1], false);
 	ret |= dwmac5_est_write(ioaddr, TER, cfg->ter, false);
 	ret |= dwmac5_est_write(ioaddr, LLR, cfg->gcl_size, false);
+	ret |= dwmac5_est_write(ioaddr, CTR_LOW, cfg->ctr[0], false);
+	ret |= dwmac5_est_write(ioaddr, CTR_HIGH, cfg->ctr[1], false);
 	if (ret)
 		return ret;
 
-	total_offset = 0;
 	for (i = 0; i < cfg->gcl_size; i++) {
-		ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i] + offset, true);
+		ret = dwmac5_est_write(ioaddr, i, cfg->gcl[i], true);
 		if (ret)
 			return ret;
-
-		total_offset += offset;
 	}
-
-	total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000ULL;
-	total_ctr += total_offset;
-
-	ctr_low = do_div(total_ctr, 1000000000);
-
-	ret |= dwmac5_est_write(ioaddr, CTR_LOW, ctr_low, false);
-	ret |= dwmac5_est_write(ioaddr, CTR_HIGH, total_ctr, false);
-	if (ret)
-		return ret;
 
 	ctrl = readl(ioaddr + MTL_EST_CONTROL);
 	ctrl &= ~PTOV;
+4 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
 		spin_lock_irqsave(&ch->lock, flags);
 		stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 1, 0);
 		spin_unlock_irqrestore(&ch->lock, flags);
-		__napi_schedule_irqoff(&ch->rx_napi);
+		__napi_schedule(&ch->rx_napi);
 	}
 }
···
 		spin_lock_irqsave(&ch->lock, flags);
 		stmmac_disable_dma_irq(priv, priv->ioaddr, chan, 0, 1);
 		spin_unlock_irqrestore(&ch->lock, flags);
-		__napi_schedule_irqoff(&ch->tx_napi);
+		__napi_schedule(&ch->tx_napi);
 	}
 }
···
 {
 	struct stmmac_priv *priv = netdev_priv(dev);
 	int txfifosz = priv->plat->tx_fifo_size;
+	const int mtu = new_mtu;
 
 	if (txfifosz == 0)
 		txfifosz = priv->dma_cap.tx_fifo_size;
···
 	if ((txfifosz < new_mtu) || (new_mtu > BUF_SIZE_16KiB))
 		return -EINVAL;
 
-	dev->mtu = new_mtu;
+	dev->mtu = mtu;
 
 	netdev_update_features(dev);
+18 -2
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
···
 {
 	u32 size, wid = priv->dma_cap.estwid, dep = priv->dma_cap.estdep;
 	struct plat_stmmacenet_data *plat = priv->plat;
-	struct timespec64 time;
+	struct timespec64 time, current_time;
+	ktime_t current_time_ns;
 	bool fpe = false;
 	int i, ret = 0;
 	u64 ctr;
···
 	}
 
 	/* Adjust for real system time */
-	time = ktime_to_timespec64(qopt->base_time);
+	priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time);
+	current_time_ns = timespec64_to_ktime(current_time);
+	if (ktime_after(qopt->base_time, current_time_ns)) {
+		time = ktime_to_timespec64(qopt->base_time);
+	} else {
+		ktime_t base_time;
+		s64 n;
+
+		n = div64_s64(ktime_sub_ns(current_time_ns, qopt->base_time),
+			      qopt->cycle_time);
+		base_time = ktime_add_ns(qopt->base_time,
+					 (n + 1) * qopt->cycle_time);
+
+		time = ktime_to_timespec64(base_time);
+	}
+
 	priv->plat->est->btr[0] = (u32)time.tv_nsec;
 	priv->plat->est->btr[1] = (u32)time.tv_sec;
+1
drivers/net/ipa/ipa_modem.c
···
 	ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]->netdev = netdev;
 	ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->netdev = netdev;
 
+	SET_NETDEV_DEV(netdev, &ipa->pdev->dev);
 	priv = netdev_priv(netdev);
 	priv->ipa = ipa;
+2 -1
drivers/net/phy/smsc.c
···
 	/* Make clk optional to keep DTB backward compatibility. */
 	priv->refclk = clk_get_optional(dev, NULL);
 	if (IS_ERR(priv->refclk))
-		dev_err_probe(dev, PTR_ERR(priv->refclk), "Failed to request clock\n");
+		return dev_err_probe(dev, PTR_ERR(priv->refclk),
+				     "Failed to request clock\n");
 
 	ret = clk_prepare_enable(priv->refclk);
 	if (ret)
+9 -3
drivers/net/ppp/ppp_generic.c
···
 		write_unlock_bh(&pch->upl);
 		return -EALREADY;
 	}
+	refcount_inc(&pchb->file.refcnt);
 	rcu_assign_pointer(pch->bridge, pchb);
 	write_unlock_bh(&pch->upl);
 
···
 		write_unlock_bh(&pchb->upl);
 		goto err_unset;
 	}
+	refcount_inc(&pch->file.refcnt);
 	rcu_assign_pointer(pchb->bridge, pch);
 	write_unlock_bh(&pchb->upl);
-
-	refcount_inc(&pch->file.refcnt);
-	refcount_inc(&pchb->file.refcnt);
 
 	return 0;
 
 err_unset:
 	write_lock_bh(&pch->upl);
+	/* Re-read pch->bridge with upl held in case it was modified concurrently */
+	pchb = rcu_dereference_protected(pch->bridge, lockdep_is_held(&pch->upl));
 	RCU_INIT_POINTER(pch->bridge, NULL);
 	write_unlock_bh(&pch->upl);
 	synchronize_rcu();
+
+	if (pchb)
+		if (refcount_dec_and_test(&pchb->file.refcnt))
+			ppp_destroy_channel(pchb);
+
 	return -EALREADY;
 }
-1
drivers/net/usb/Kconfig
···
 config USB_RTL8153_ECM
 	tristate "RTL8153 ECM support"
 	depends on USB_NET_CDCETHER && (USB_RTL8152 || USB_RTL8152=n)
-	default y
 	help
 	  This option supports ECM mode for RTL8153 ethernet adapter, when
 	  CONFIG_USB_RTL8152 is not set, or the RTL8153 device is not
+7
drivers/net/usb/cdc_ether.c
···
 	.driver_info = 0,
 },
 
+/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */
+{
+	USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x721e, USB_CLASS_COMM,
+			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+	.driver_info = 0,
+},
+
 /* ThinkPad USB-C Dock Gen 2 (based on Realtek RTL8153) */
 {
 	USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa387, USB_CLASS_COMM,
+1
drivers/net/usb/r8152.c
···
 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x7205)},
 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x720c)},
 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x7214)},
+	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x721e)},
 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0xa387)},
 	{REALTEK_USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041)},
 	{REALTEK_USB_DEVICE(VENDOR_ID_NVIDIA,  0x09ff)},
+8
drivers/net/usb/r8153_ecm.c
···
 };

 static const struct usb_device_id products[] = {
+	/* Realtek RTL8153 Based USB 3.0 Ethernet Adapters */
 	{
 		USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_REALTEK, 0x8153, USB_CLASS_COMM,
+					      USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+		.driver_info = (unsigned long)&r8153_info,
+	},
+
+	/* Lenovo Powered USB-C Travel Hub (4X90S92381, based on Realtek RTL8153) */
+	{
+		USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_LENOVO, 0x721e, USB_CLASS_COMM,
 					      USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
 		.driver_info = (unsigned long)&r8153_info,
 	},
+1 -1
drivers/net/usb/rndis_host.c
···
 	reply_len = sizeof *phym;
 	retval = rndis_query(dev, intf, u.buf,
 			     RNDIS_OID_GEN_PHYSICAL_MEDIUM,
-			     0, (void **) &phym, &reply_len);
+			     reply_len, (void **)&phym, &reply_len);
 	if (retval != 0 || !phym) {
 		/* OID is optional so don't fail here. */
 		phym_unspec = cpu_to_le32(RNDIS_PHYSICAL_MEDIUM_UNSPECIFIED);
+2 -1
include/linux/skbuff.h
···
 static inline bool skb_frag_must_loop(struct page *p)
 {
 #if defined(CONFIG_HIGHMEM)
-	if (PageHighMem(p))
+	if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) || PageHighMem(p))
 		return true;
 #endif
 	return false;
···
 	struct sk_buff	*root_skb;
 	struct sk_buff	*cur_skb;
 	__u8		*frag_data;
+	__u32		frag_off;
 };

 void skb_prepare_seq_read(struct sk_buff *skb, unsigned int from,
+1 -3
net/8021q/vlan.c
···
 	return 0;

 out_free_newdev:
-	if (new_dev->reg_state == NETREG_UNINITIALIZED ||
-	    new_dev->reg_state == NETREG_UNREGISTERED)
-		free_netdev(new_dev);
+	free_netdev(new_dev);
 	return err;
 }
+1
net/can/isotp.c
···
 	if (peer)
 		return -EOPNOTSUPP;

+	memset(addr, 0, sizeof(*addr));
 	addr->can_family = AF_CAN;
 	addr->can_ifindex = so->ifindex;
 	addr->can_addr.tp.rx_id = so->rxid;
+24 -13
net/core/dev.c
···
 		}
 	}

-	if ((features & NETIF_F_HW_TLS_TX) && !(features & NETIF_F_HW_CSUM)) {
-		netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
-		features &= ~NETIF_F_HW_TLS_TX;
+	if (features & NETIF_F_HW_TLS_TX) {
+		bool ip_csum = (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) ==
+			       (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
+		bool hw_csum = features & NETIF_F_HW_CSUM;
+
+		if (!ip_csum && !hw_csum) {
+			netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
+			features &= ~NETIF_F_HW_TLS_TX;
+		}
 	}

 	return features;
···
 	ret = call_netdevice_notifiers(NETDEV_REGISTER, dev);
 	ret = notifier_to_errno(ret);
 	if (ret) {
+		/* Expect explicit free_netdev() on failure */
+		dev->needs_free_netdev = false;
 		rollback_registered(dev);
-		rcu_barrier();
-
-		dev->reg_state = NETREG_UNREGISTERED;
-		/* We should put the kobject that hold in
-		 * netdev_unregister_kobject(), otherwise
-		 * the net device cannot be freed when
-		 * driver calls free_netdev(), because the
-		 * kobject is being hold.
-		 */
-		kobject_put(&dev->dev.kobj);
+		net_set_todo(dev);
+		goto out;
 	}
 	/*
 	 * Prevent userspace races by waiting until the network
···
 	struct napi_struct *p, *n;

 	might_sleep();
+
+	/* When called immediately after register_netdevice() failed the unwind
+	 * handling may still be dismantling the device. Handle that case by
+	 * deferring the free.
+	 */
+	if (dev->reg_state == NETREG_UNREGISTERING) {
+		ASSERT_RTNL();
+		dev->needs_free_netdev = true;
+		return;
+	}
+
 	netif_free_tx_queues(dev);
 	netif_free_rx_queues(dev);
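The new feature check keeps NETIF_F_HW_TLS_TX when the device offers either NETIF_F_HW_CSUM or *both* protocol-specific checksum flags. The predicate can be sketched in userspace with stand-in flag values (the real bits come from include/linux/netdev_features.h; the `F_*` names here are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag values, not the real NETIF_F_* bit positions. */
#define F_IP_CSUM   (1u << 0)
#define F_IPV6_CSUM (1u << 1)
#define F_HW_CSUM   (1u << 2)
#define F_HW_TLS_TX (1u << 3)

/* Keep TLS TX offload if the device can checksum either generically
 * (HW_CSUM) or via both protocol-specific flags (IP_CSUM + IPV6_CSUM). */
static uint32_t fix_features(uint32_t features)
{
    if (features & F_HW_TLS_TX) {
        bool ip_csum = (features & (F_IP_CSUM | F_IPV6_CSUM)) ==
                       (F_IP_CSUM | F_IPV6_CSUM);
        bool hw_csum = features & F_HW_CSUM;

        if (!ip_csum && !hw_csum)
            features &= ~F_HW_TLS_TX;   /* no way to checksum: drop TLS TX */
    }
    return features;
}
```

Note that having only one of the protocol-specific flags is not enough: the equality comparison against the combined mask requires both bits.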
+6 -17
net/core/rtnetlink.c
···
 	dev->ifindex = ifm->ifi_index;

-	if (ops->newlink) {
+	if (ops->newlink)
 		err = ops->newlink(link_net ? : net, dev, tb, data, extack);
-		/* Drivers should call free_netdev() in ->destructor
-		 * and unregister it on failure after registration
-		 * so that device could be finally freed in rtnl_unlock.
-		 */
-		if (err < 0) {
-			/* If device is not registered at all, free it now */
-			if (dev->reg_state == NETREG_UNINITIALIZED ||
-			    dev->reg_state == NETREG_UNREGISTERED)
-				free_netdev(dev);
-			goto out;
-		}
-	} else {
+	else
 		err = register_netdevice(dev);
-		if (err < 0) {
-			free_netdev(dev);
-			goto out;
-		}
+	if (err < 0) {
+		free_netdev(dev);
+		goto out;
 	}
+
 	err = rtnl_configure_link(dev, ifm);
 	if (err < 0)
 		goto out_unregister;
+50 -9
net/core/skbuff.c
···
 struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 				 gfp_t gfp_mask)
 {
-	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
+	struct napi_alloc_cache *nc;
 	struct sk_buff *skb;
 	void *data;

 	len += NET_SKB_PAD + NET_IP_ALIGN;

-	if ((len > SKB_WITH_OVERHEAD(PAGE_SIZE)) ||
+	/* If requested length is either too small or too big,
+	 * we use kmalloc() for skb->head allocation.
+	 */
+	if (len <= SKB_WITH_OVERHEAD(1024) ||
+	    len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
 	    (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
 		skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
 		if (!skb)
···
 		goto skb_success;
 	}

+	nc = this_cpu_ptr(&napi_alloc_cache);
 	len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	len = SKB_DATA_ALIGN(len);
···
 	st->root_skb = st->cur_skb = skb;
 	st->frag_idx = st->stepped_offset = 0;
 	st->frag_data = NULL;
+	st->frag_off = 0;
 }
 EXPORT_SYMBOL(skb_prepare_seq_read);
···
 	st->stepped_offset += skb_headlen(st->cur_skb);

 	while (st->frag_idx < skb_shinfo(st->cur_skb)->nr_frags) {
-		frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx];
-		block_limit = skb_frag_size(frag) + st->stepped_offset;
+		unsigned int pg_idx, pg_off, pg_sz;

+		frag = &skb_shinfo(st->cur_skb)->frags[st->frag_idx];
+
+		pg_idx = 0;
+		pg_off = skb_frag_off(frag);
+		pg_sz = skb_frag_size(frag);
+
+		if (skb_frag_must_loop(skb_frag_page(frag))) {
+			pg_idx = (pg_off + st->frag_off) >> PAGE_SHIFT;
+			pg_off = offset_in_page(pg_off + st->frag_off);
+			pg_sz = min_t(unsigned int, pg_sz - st->frag_off,
+				      PAGE_SIZE - pg_off);
+		}
+
+		block_limit = pg_sz + st->stepped_offset;
 		if (abs_offset < block_limit) {
 			if (!st->frag_data)
-				st->frag_data = kmap_atomic(skb_frag_page(frag));
+				st->frag_data = kmap_atomic(skb_frag_page(frag) + pg_idx);

-			*data = (u8 *) st->frag_data + skb_frag_off(frag) +
+			*data = (u8 *)st->frag_data + pg_off +
 				(abs_offset - st->stepped_offset);

 			return block_limit - abs_offset;
···
 			st->frag_data = NULL;
 		}

-		st->frag_idx++;
-		st->stepped_offset += skb_frag_size(frag);
+		st->stepped_offset += pg_sz;
+		st->frag_off += pg_sz;
+		if (st->frag_off == skb_frag_size(frag)) {
+			st->frag_off = 0;
+			st->frag_idx++;
+		}
 	}

 	if (st->frag_data) {
···
 	unsigned int delta_truesize = 0;
 	unsigned int delta_len = 0;
 	struct sk_buff *tail = NULL;
-	struct sk_buff *nskb;
+	struct sk_buff *nskb, *tmp;
+	int err;

 	skb_push(skb, -skb_network_offset(skb) + offset);
···
 		nskb = list_skb;
 		list_skb = list_skb->next;

+		err = 0;
+		if (skb_shared(nskb)) {
+			tmp = skb_clone(nskb, GFP_ATOMIC);
+			if (tmp) {
+				consume_skb(nskb);
+				nskb = tmp;
+				err = skb_unclone(nskb, GFP_ATOMIC);
+			} else {
+				err = -ENOMEM;
+			}
+		}
+
 		if (!tail)
 			skb->next = nskb;
 		else
 			tail->next = nskb;
+
+		if (unlikely(err)) {
+			nskb->next = list_skb;
+			goto err_linearize;
+		}

 		tail = nskb;
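The reworked skb_seq_read() maps one page of a compound highmem fragment at a time: a page index, in-page offset, and chunk length are derived from the fragment's start offset and the running cursor st->frag_off. That arithmetic, isolated as pure helpers with an assumed 4 KiB page (the kernel uses PAGE_SHIFT, offset_in_page() and min_t(); the helper names here are invented for illustration):

```c
#include <assert.h>

#define PG_SHIFT 12u
#define PG_SIZE  (1u << PG_SHIFT)   /* assume 4 KiB pages */

/* Which page of the compound page does the cursor currently sit in? */
static unsigned int chunk_page(unsigned int frag_start, unsigned int cursor)
{
    return (frag_start + cursor) >> PG_SHIFT;
}

/* Offset within that page. */
static unsigned int chunk_off(unsigned int frag_start, unsigned int cursor)
{
    return (frag_start + cursor) & (PG_SIZE - 1);
}

/* How many bytes can be read before hitting either the page boundary
 * or the end of the fragment -- the pg_sz value in the fixed loop. */
static unsigned int chunk_len(unsigned int frag_start, unsigned int frag_size,
                              unsigned int cursor)
{
    unsigned int remain = frag_size - cursor;
    unsigned int in_page = PG_SIZE - chunk_off(frag_start, cursor);

    return remain < in_page ? remain : in_page;
}
```

Advancing the cursor by chunk_len() each iteration walks the fragment page by page, which is what lets each kmap_atomic() cover exactly one page instead of assuming the whole fragment is contiguously mapped.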
+1 -1
net/core/sock_reuseport.c
···
 	i = j = reciprocal_scale(hash, socks);
 	while (reuse->socks[i]->sk_state == TCP_ESTABLISHED) {
 		i++;
-		if (i >= reuse->num_socks)
+		if (i >= socks)
 			i = 0;
 		if (i == j)
 			goto out;
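The reuseport fix bounds the wraparound by the snapshot `socks` taken earlier, rather than re-reading reuse->num_socks, which may already count sockets whose array slots are not yet fully published. The circular-scan shape, as a self-contained sketch (`find_non_established()` and the integer state encoding are invented for illustration; in the kernel the loop inspects sk_state against TCP_ESTABLISHED):

```c
#include <assert.h>
#include <stddef.h>

/* Scan the first `socks` entries circularly, starting at a hashed
 * index, for a socket that is not established. Returns its index,
 * or -1 after one full lap. Using the caller's snapshot `socks` as
 * the wrap bound keeps the walk away from slots a concurrent writer
 * may still be initializing. */
static int find_non_established(const int *state, size_t socks, size_t start)
{
    size_t i = start, j = start;

    while (state[i] == 1 /* "established" */) {
        i++;
        if (i >= socks)         /* wrap within the snapshot, not num_socks */
            i = 0;
        if (i == j)
            return -1;          /* every candidate was established */
    }
    return (int)i;
}
```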
+1 -1
net/dcb/dcbnl.c
···
 	fn = &reply_funcs[dcb->cmd];
 	if (!fn->cb)
 		return -EOPNOTSUPP;
-	if (fn->type != nlh->nlmsg_type)
+	if (fn->type == RTM_SETDCB && !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;

 	if (!tb[DCB_ATTR_IFNAME])
+4
net/dsa/dsa2.c
···

 static void dsa_port_teardown(struct dsa_port *dp)
 {
+	struct devlink_port *dlp = &dp->devlink_port;
+
 	if (!dp->setup)
 		return;
+
+	devlink_port_type_clear(dlp);

 	switch (dp->type) {
 	case DSA_PORT_TYPE_UNUSED:
+10
net/dsa/master.c
···
 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
 {
 	int mtu = ETH_DATA_LEN + cpu_dp->tag_ops->overhead;
+	struct dsa_switch *ds = cpu_dp->ds;
+	struct device_link *consumer_link;
 	int ret;
+
+	/* The DSA master must use SET_NETDEV_DEV for this to work. */
+	consumer_link = device_link_add(ds->dev, dev->dev.parent,
+					DL_FLAG_AUTOREMOVE_CONSUMER);
+	if (!consumer_link)
+		netdev_err(dev,
+			   "Failed to create a device link to DSA switch %s\n",
+			   dev_name(ds->dev));

 	rtnl_lock();
 	ret = dev_set_mtu(dev, mtu);
+1 -6
net/ipv4/esp4.c
···
 int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
 {
 	u8 *tail;
-	u8 *vaddr;
 	int nfrags;
 	int esph_offset;
 	struct page *page;
···
 		page = pfrag->page;
 		get_page(page);

-		vaddr = kmap_atomic(page);
-
-		tail = vaddr + pfrag->offset;
+		tail = page_address(page) + pfrag->offset;

 		esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
-
-		kunmap_atomic(vaddr);

 		nfrags = skb_shinfo(skb)->nr_frags;
+1 -6
net/ipv6/esp6.c
···
 int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
 {
 	u8 *tail;
-	u8 *vaddr;
 	int nfrags;
 	int esph_offset;
 	struct page *page;
···
 		page = pfrag->page;
 		get_page(page);

-		vaddr = kmap_atomic(page);
-
-		tail = vaddr + pfrag->offset;
+		tail = page_address(page) + pfrag->offset;

 		esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
-
-		kunmap_atomic(vaddr);

 		nfrags = skb_shinfo(skb)->nr_frags;
+40 -1
net/ipv6/ip6_output.c
···
 	return -EINVAL;
 }

+static int
+ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk,
+				    struct sk_buff *skb, unsigned int mtu)
+{
+	struct sk_buff *segs, *nskb;
+	netdev_features_t features;
+	int ret = 0;
+
+	/* Please see corresponding comment in ip_finish_output_gso
+	 * describing the cases where GSO segment length exceeds the
+	 * egress MTU.
+	 */
+	features = netif_skb_features(skb);
+	segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+	if (IS_ERR_OR_NULL(segs)) {
+		kfree_skb(skb);
+		return -ENOMEM;
+	}
+
+	consume_skb(skb);
+
+	skb_list_walk_safe(segs, segs, nskb) {
+		int err;
+
+		skb_mark_not_on_list(segs);
+		err = ip6_fragment(net, sk, segs, ip6_finish_output2);
+		if (err && ret == 0)
+			ret = err;
+	}
+
+	return ret;
+}
+
 static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb)
 {
+	unsigned int mtu;
+
 #if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
 	/* Policy lookup after SNAT yielded a new policy */
 	if (skb_dst(skb)->xfrm) {
···
 	}
 #endif

-	if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) ||
+	mtu = ip6_skb_dst_mtu(skb);
+	if (skb_is_gso(skb) && !skb_gso_validate_network_len(skb, mtu))
+		return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu);
+
+	if ((skb->len > mtu && !skb_is_gso(skb)) ||
 	    dst_allfrag(skb_dst(skb)) ||
 	    (IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size))
 		return ip6_fragment(net, sk, skb, ip6_finish_output2);
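The new slow path software-segments the oversized GSO skb and fragments each resulting segment independently, remembering only the first error instead of aborting the whole batch. The control flow can be modelled on a plain singly-linked list (`struct seg`, `walk_segments()` and the handler are stand-ins for sk_buff, the skb_list_walk_safe() loop and ip6_fragment()):

```c
#include <assert.h>
#include <stddef.h>

struct seg {
    struct seg *next;
    int payload;
};

/* Process each segment in isolation; keep the first error but keep
 * going, as the fixed loop does with ip6_fragment(). */
static int walk_segments(struct seg *segs, int (*handle)(struct seg *))
{
    int ret = 0;

    while (segs) {
        struct seg *next = segs->next;

        segs->next = NULL;      /* skb_mark_not_on_list() equivalent */
        int err = handle(segs);
        if (err && ret == 0)
            ret = err;          /* remember only the first failure */
        segs = next;
    }
    return ret;
}

/* Illustrative handler: fails on payloads 2 and 3 with distinct codes. */
static int fail_on(struct seg *s)
{
    if (s->payload == 2)
        return -5;
    if (s->payload == 3)
        return -7;
    return 0;
}

static int demo_walk(void)
{
    struct seg c = { NULL, 3 }, b = { &c, 2 }, a = { &b, 1 };

    return walk_segments(&a, fail_on);
}
```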
+4 -1
net/ipv6/sit.c
···
 	}

 #ifdef CONFIG_IPV6_SIT_6RD
-	if (ipip6_netlink_6rd_parms(data, &ip6rd))
+	if (ipip6_netlink_6rd_parms(data, &ip6rd)) {
 		err = ipip6_tunnel_update_6rd(nt, &ip6rd);
+		if (err < 0)
+			unregister_netdevice_queue(dev, NULL);
+	}
 #endif

 	return err;
+23 -46
net/mptcp/protocol.c
···
 static bool tcp_can_send_ack(const struct sock *ssk)
 {
 	return !((1 << inet_sk_state_load(ssk)) &
-	       (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE));
+	       (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN));
 }

 static void mptcp_send_ack(struct mptcp_sock *msk)
···
 static int mptcp_disconnect(struct sock *sk, int flags)
 {
-	/* Should never be called.
-	 * inet_stream_connect() calls ->disconnect, but that
-	 * refers to the subflow socket, not the mptcp one.
-	 */
-	WARN_ON_ONCE(1);
+	struct mptcp_subflow_context *subflow;
+	struct mptcp_sock *msk = mptcp_sk(sk);
+
+	__mptcp_flush_join_list(msk);
+	mptcp_for_each_subflow(msk, subflow) {
+		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+
+		lock_sock(ssk);
+		tcp_disconnect(ssk, flags);
+		release_sock(ssk);
+	}
 	return 0;
 }
···
 	return true;
 }

+static void mptcp_shutdown(struct sock *sk, int how)
+{
+	pr_debug("sk=%p, how=%d", sk, how);
+
+	if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk))
+		__mptcp_wr_shutdown(sk);
+}
+
 static struct proto mptcp_prot = {
 	.name		= "MPTCP",
 	.owner		= THIS_MODULE,
···
 	.accept		= mptcp_accept,
 	.setsockopt	= mptcp_setsockopt,
 	.getsockopt	= mptcp_getsockopt,
-	.shutdown	= tcp_shutdown,
+	.shutdown	= mptcp_shutdown,
 	.destroy	= mptcp_destroy,
 	.sendmsg	= mptcp_sendmsg,
 	.recvmsg	= mptcp_recvmsg,
···
 	return mask;
 }

-static int mptcp_shutdown(struct socket *sock, int how)
-{
-	struct mptcp_sock *msk = mptcp_sk(sock->sk);
-	struct sock *sk = sock->sk;
-	int ret = 0;
-
-	pr_debug("sk=%p, how=%d", msk, how);
-
-	lock_sock(sk);
-
-	how++;
-	if ((how & ~SHUTDOWN_MASK) || !how) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
-	if (sock->state == SS_CONNECTING) {
-		if ((1 << sk->sk_state) &
-		    (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_CLOSE))
-			sock->state = SS_DISCONNECTING;
-		else
-			sock->state = SS_CONNECTED;
-	}
-
-	sk->sk_shutdown |= how;
-	if ((how & SEND_SHUTDOWN) && mptcp_close_state(sk))
-		__mptcp_wr_shutdown(sk);
-
-	/* Wake up anyone sleeping in poll. */
-	sk->sk_state_change(sk);
-
-out_unlock:
-	release_sock(sk);
-
-	return ret;
-}
-
 static const struct proto_ops mptcp_stream_ops = {
 	.family		= PF_INET,
 	.owner		= THIS_MODULE,
···
 	.ioctl		= inet_ioctl,
 	.gettstamp	= sock_gettstamp,
 	.listen		= mptcp_listen,
-	.shutdown	= mptcp_shutdown,
+	.shutdown	= inet_shutdown,
 	.setsockopt	= sock_common_setsockopt,
 	.getsockopt	= sock_common_getsockopt,
 	.sendmsg	= inet_sendmsg,
···
 	.ioctl		= inet6_ioctl,
 	.gettstamp	= sock_gettstamp,
 	.listen		= mptcp_listen,
-	.shutdown	= mptcp_shutdown,
+	.shutdown	= inet_shutdown,
 	.setsockopt	= sock_common_setsockopt,
 	.getsockopt	= sock_common_getsockopt,
 	.sendmsg	= inet6_sendmsg,
+3
net/netfilter/nf_conntrack_standalone.c
···
 {
 	int ret;

+	/* module_param hashsize could have changed value */
+	nf_conntrack_htable_size_user = nf_conntrack_htable_size;
+
 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
 	if (ret < 0 || !write)
 		return ret;
+1
net/netfilter/nf_nat_core.c
···
 	ret = register_pernet_subsys(&nat_net_ops);
 	if (ret < 0) {
 		nf_ct_extend_unregister(&nat_extend);
+		kvfree(nf_nat_bysource);
 		return ret;
 	}
+1 -1
net/rxrpc/input.c
···
 		return;
 	}

-	if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST) {
+	if (state == RXRPC_CALL_SERVER_RECV_REQUEST) {
 		unsigned long timo = READ_ONCE(call->next_req_timo);
 		unsigned long now, expect_req_by;
+4 -2
net/rxrpc/key.c
···
 		default: /* we have a ticket we can't encode */
 			pr_err("Unsupported key token type (%u)\n",
 			       token->security_index);
-			continue;
+			return -ENOPKG;
 		}

 		_debug("token[%u]: toksize=%u", ntoks, toksize);
···
 			break;

 		default:
-			break;
+			pr_err("Unsupported key token type (%u)\n",
+			       token->security_index);
+			return -ENOPKG;
 		}

 		ASSERTCMP((unsigned long)xdr - (unsigned long)oldxdr, ==,
+13 -7
net/smc/smc_core.c
···
 		goto errattr;
 	smc_clc_get_hostname(&host);
 	if (host) {
-		snprintf(hostname, sizeof(hostname), "%s", host);
+		memcpy(hostname, host, SMC_MAX_HOSTNAME_LEN);
+		hostname[SMC_MAX_HOSTNAME_LEN] = 0;
 		if (nla_put_string(skb, SMC_NLA_SYS_LOCAL_HOST, hostname))
 			goto errattr;
 	}
···
 	smc_ism_get_system_eid(smcd_dev, &seid);
 	mutex_unlock(&smcd_dev_list.mutex);
 	if (seid && smc_ism_is_v2_capable()) {
-		snprintf(smc_seid, sizeof(smc_seid), "%s", seid);
+		memcpy(smc_seid, seid, SMC_MAX_EID_LEN);
+		smc_seid[SMC_MAX_EID_LEN] = 0;
 		if (nla_put_string(skb, SMC_NLA_SYS_SEID, smc_seid))
 			goto errattr;
 	}
···
 		goto errattr;
 	if (nla_put_u8(skb, SMC_NLA_LGR_R_VLAN_ID, lgr->vlan_id))
 		goto errattr;
-	snprintf(smc_target, sizeof(smc_target), "%s", lgr->pnet_id);
+	memcpy(smc_target, lgr->pnet_id, SMC_MAX_PNETID_LEN);
+	smc_target[SMC_MAX_PNETID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_LGR_R_PNETID, smc_target))
 		goto errattr;
···
 			   struct sk_buff *skb,
 			   struct netlink_callback *cb)
 {
-	char smc_ibname[IB_DEVICE_NAME_MAX + 1];
+	char smc_ibname[IB_DEVICE_NAME_MAX];
 	u8 smc_gid_target[41];
 	struct nlattr *attrs;
 	u32 link_uid = 0;
···
 		goto errattr;
 	if (nla_put_u32(skb, SMC_NLA_LGR_D_CHID, smc_ism_get_chid(lgr->smcd)))
 		goto errattr;
-	snprintf(smc_pnet, sizeof(smc_pnet), "%s", lgr->smcd->pnetid);
+	memcpy(smc_pnet, lgr->smcd->pnetid, SMC_MAX_PNETID_LEN);
+	smc_pnet[SMC_MAX_PNETID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_LGR_D_PNETID, smc_pnet))
 		goto errattr;
···
 		goto errv2attr;
 	if (nla_put_u8(skb, SMC_NLA_LGR_V2_OS, lgr->peer_os))
 		goto errv2attr;
-	snprintf(smc_host, sizeof(smc_host), "%s", lgr->peer_hostname);
+	memcpy(smc_host, lgr->peer_hostname, SMC_MAX_HOSTNAME_LEN);
+	smc_host[SMC_MAX_HOSTNAME_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_LGR_V2_PEER_HOST, smc_host))
 		goto errv2attr;
-	snprintf(smc_eid, sizeof(smc_eid), "%s", lgr->negotiated_eid);
+	memcpy(smc_eid, lgr->negotiated_eid, SMC_MAX_EID_LEN);
+	smc_eid[SMC_MAX_EID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_LGR_V2_NEG_EID, smc_eid))
 		goto errv2attr;
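The SMC identifiers are fixed-width fields that are not guaranteed to be NUL-terminated, so `snprintf(..., "%s", src)` could read past the end of the source; the fix copies the full field width and terminates explicitly. A userspace sketch of the pattern (the width of 16 matches SMC_MAX_PNETID_LEN in the kernel headers as far as I know, but treat it and `copy_pnetid()` as illustrative):

```c
#include <assert.h>
#include <string.h>

#define PNETID_LEN 16   /* fixed field width, no terminator guaranteed */

/* Copy a possibly-unterminated fixed-width field and NUL-terminate it. */
static void copy_pnetid(char dst[PNETID_LEN + 1], const char src[PNETID_LEN])
{
    memcpy(dst, src, PNETID_LEN);   /* bounded read: never past the field */
    dst[PNETID_LEN] = '\0';
}

/* Length seen by a string reader after copying a field with no NUL. */
static size_t demo_len(void)
{
    char src[PNETID_LEN];
    char dst[PNETID_LEN + 1];

    memset(src, 'A', sizeof(src));  /* no terminator inside the field */
    copy_pnetid(dst, src);
    return strlen(dst);
}
```

With `%s`, the length of the copy depends on finding a NUL in `src`; with memcpy plus explicit termination, the read is bounded by the field width regardless of the source's contents.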
+3 -3
net/smc/smc_ib.c
···
 	if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR,
 		       smcibdev->pnetid_by_user[port]))
 		goto errattr;
-	snprintf(smc_pnet, sizeof(smc_pnet), "%s",
-		 (char *)&smcibdev->pnetid[port]);
+	memcpy(smc_pnet, &smcibdev->pnetid[port], SMC_MAX_PNETID_LEN);
+	smc_pnet[SMC_MAX_PNETID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet))
 		goto errattr;
 	if (nla_put_u32(skb, SMC_NLA_DEV_PORT_NETDEV,
···
 			    struct sk_buff *skb,
 			    struct netlink_callback *cb)
 {
-	char smc_ibname[IB_DEVICE_NAME_MAX + 1];
+	char smc_ibname[IB_DEVICE_NAME_MAX];
 	struct smc_pci_dev smc_pci_dev;
 	struct pci_dev *pci_dev;
 	unsigned char is_crit;
+2 -1
net/smc/smc_ism.c
···
 		goto errattr;
 	if (nla_put_u8(skb, SMC_NLA_DEV_PORT_PNET_USR, smcd->pnetid_by_user))
 		goto errportattr;
-	snprintf(smc_pnet, sizeof(smc_pnet), "%s", smcd->pnetid);
+	memcpy(smc_pnet, smcd->pnetid, SMC_MAX_PNETID_LEN);
+	smc_pnet[SMC_MAX_PNETID_LEN] = 0;
 	if (nla_put_string(skb, SMC_NLA_DEV_PORT_PNETID, smc_pnet))
 		goto errportattr;
+8 -3
net/tipc/link.c
···
 int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
 		   struct sk_buff_head *xmitq)
 {
-	struct tipc_msg *hdr = buf_msg(skb_peek(list));
 	struct sk_buff_head *backlogq = &l->backlogq;
 	struct sk_buff_head *transmq = &l->transmq;
 	struct sk_buff *skb, *_skb;
···
 	u16 ack = l->rcv_nxt - 1;
 	u16 seqno = l->snd_nxt;
 	int pkt_cnt = skb_queue_len(list);
-	int imp = msg_importance(hdr);
 	unsigned int mss = tipc_link_mss(l);
 	unsigned int cwin = l->window;
 	unsigned int mtu = l->mtu;
+	struct tipc_msg *hdr;
 	bool new_bundle;
 	int rc = 0;
+	int imp;

+	if (pkt_cnt <= 0)
+		return 0;
+
+	hdr = buf_msg(skb_peek(list));
 	if (unlikely(msg_size(hdr) > mtu)) {
 		pr_warn("Too large msg, purging xmit list %d %d %d %d %d!\n",
 			skb_queue_len(list), msg_user(hdr),
···
 		return -EMSGSIZE;
 	}

+	imp = msg_importance(hdr);
 	/* Allow oversubscription of one data msg per source at congestion */
 	if (unlikely(l->backlog[imp].len >= l->backlog[imp].limit)) {
 		if (imp == TIPC_SYSTEM_IMPORTANCE) {
···
 }

 /**
- * link_reset_stats - reset link statistics
+ * tipc_link_reset_stats - reset link statistics
  * @l: pointer to link
  */
 void tipc_link_reset_stats(struct tipc_link *l)
+1 -1
net/tipc/node.c
···
 }

 /**
- * tipc_node_xmit() is the general link level function for message sending
+ * tipc_node_xmit() - general link level function for message sending
  * @net: the applicable net namespace
  * @list: chain of buffers containing message
  * @dnode: address of destination node
+2 -2
tools/testing/selftests/net/tls.c
···

 FIXTURE_VARIANT(tls)
 {
-	u16 tls_version;
-	u16 cipher_type;
+	uint16_t tls_version;
+	uint16_t cipher_type;
 };

 FIXTURE_VARIANT_ADD(tls, 12_gcm)
+9 -3
tools/testing/selftests/netfilter/nft_conntrack_helper.sh
···
 	local message=$2
 	local port=$3

-	ip netns exec ${netns} conntrack -L -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
+	if echo $message |grep -q 'ipv6';then
+		local family="ipv6"
+	else
+		local family="ipv4"
+	fi
+
+	ip netns exec ${netns} conntrack -L -f $family -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp'
 	if [ $? -ne 0 ] ; then
 		echo "FAIL: ${netns} did not show attached helper $message" 1>&2
 		ret=1
···
 	sleep 3 | ip netns exec ${ns2} nc -w 2 -l -p $port > /dev/null &

-	sleep 1
 	sleep 1 | ip netns exec ${ns1} nc -w 2 10.0.1.2 $port > /dev/null &
+	sleep 1

 	check_for_helper "$ns1" "ip $msg" $port
 	check_for_helper "$ns2" "ip $msg" $port
···
 	sleep 3 | ip netns exec ${ns2} nc -w 2 -6 -l -p $port > /dev/null &

-	sleep 1
 	sleep 1 | ip netns exec ${ns1} nc -w 2 -6 dead:1::2 $port > /dev/null &
+	sleep 1

 	check_for_helper "$ns1" "ipv6 $msg" $port
 	check_for_helper "$ns2" "ipv6 $msg" $port