
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:
"Various fixes, all over:

1) Fix overflow in ptp_qoriq_adjfine(), from Yangbo Lu.

2) Always store the rx queue mapping in veth, from Maciej
Fijalkowski.

3) Don't allow vmlinux btf in map_create, from Alexei Starovoitov.

4) Fix memory leak in octeontx2-af, from Colin Ian King.

5) Use kvalloc in bpf x86 JIT for storing jit'd addresses, from
Yonghong Song.

6) Fix tx ptp stats in mlx5, from Aya Levin.

7) Check correct ip version in tun decap, from Roi Dayan.

8) Fix rate calculation in mlx5 E-Switch code, from Parav Pandit.

9) Work item memory leak in mlx5, from Shay Drory.

10) Fix ip6ip6 tunnel crash with bpf, from Daniel Borkmann.

11) Lack of preemption awareness in macvlan, from Eric Dumazet.

12) Fix data race in pxa168_eth, from Pavel Andrianov.

13) Range validate stab in red_check_params(), from Eric Dumazet.

14) Inherit vlan filtering setting properly in b53 driver, from
Florian Fainelli.

15) Fix rtnl locking in igc driver, from Sasha Neftin.

16) Pause handling fixes in igc driver, from Muhammad Husaini
Zulkifli.

17) Missing rtnl locking in e1000_reset_task, from Vitaly Lifshits.

18) Use after free in qlcnic, from Lv Yunlong.

19) Fix crash in fritzpci mISDN, from Tong Zhang.

20) Premature rx buffer reuse in igb, from Li RongQing.

21) Missing termination of ipa driver message handler arrays, from
Alex Elder.

22) Fix race between "x25_close" and "x25_xmit"/"x25_rx" in hdlc_x25
driver, from Xie He.

23) Use after free in c_can_pci_remove(), from Tong Zhang.

24) Uninitialized variable use in nl80211, from Jarod Wilson.

25) Off by one size calc in bpf verifier, from Piotr Krysiuk.

26) Use delayed work instead of deferrable for flowtable GC, from
Yinjun Zhang.

27) Fix infinite loop in NPC unmap of octeontx2 driver, from
Hariprasad Kelam.

28) Fix being unable to change MTU of dwmac-sun8i devices due to lack
of fifo sizes, from Corentin Labbe.

29) DMA use after free in r8169 with WoL, from Heiner Kallweit.

30) Mismatched prototypes in isdn-capi, from Arnd Bergmann.

31) Fix psample UAPI breakage, from Ido Schimmel"
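Several of the entries above are instances of well-known C pitfalls. The adjfine overflow in (1), for example, is the classic 32-bit-intermediate bug: multiplying two 32-bit values produces a 32-bit product unless one operand is widened first. A minimal userspace sketch of the bug class (scale_wrong/scale_fixed are hypothetical helpers, not the ptp_qoriq code):

```c
#include <stdint.h>

/* Hypothetical sketch of the overflow class fixed in (1): both operands
 * are 32-bit, so the multiply is done in 32 bits and silently wraps
 * before the result is stored into a u64. */
static uint64_t scale_wrong(uint32_t ticks, uint32_t ppb)
{
    return ticks * ppb;            /* 32-bit multiply, may wrap */
}

static uint64_t scale_fixed(uint32_t ticks, uint32_t ppb)
{
    return (uint64_t)ticks * ppb;  /* widen one operand first */
}
```

With ticks = 10^9 and ppb = 5 the true product (5 * 10^9) exceeds 2^32, so the unwidened version wraps while the widened one does not.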

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (171 commits)
psample: Fix user API breakage
math: Export mul_u64_u64_div_u64
ch_ktls: fix enum-conversion warning
octeontx2-af: Fix memory leak of object buf
ptp_qoriq: fix overflow in ptp_qoriq_adjfine() u64 calculation
net: bridge: don't notify switchdev for local FDB addresses
net/sched: act_ct: clear post_ct if doing ct_clear
net: dsa: don't assign an error value to tag_ops
isdn: capi: fix mismatched prototypes
net/mlx5: SF, do not use ecpu bit for vhca state processing
net/mlx5e: Fix division by 0 in mlx5e_select_queue
net/mlx5e: Fix error path for ethtool set-priv-flag
net/mlx5e: Offload tuple rewrite for non-CT flows
net/mlx5e: Allow to match on MPLS parameters only for MPLS over UDP
net/mlx5: Add back multicast stats for uplink representor
net: ipconfig: ic_dev can be NULL in ic_close_devs
MAINTAINERS: Combine "QLOGIC QLGE 10Gb ETHERNET DRIVER" sections into one
docs: networking: Fix a typo
r8169: fix DMA being used after buffer free if WoL is enabled
net: ipa: fix init header command validation
...

+1820 -788
+1 -1
Documentation/networking/device_drivers/ethernet/amazon/ena.rst
···
 Tx
 --
 
-end_start_xmit() is called by the stack. This function does the following:
+ena_start_xmit() is called by the stack. This function does the following:
 
 - Maps data buffers (skb->data and frags).
 - Populates ena_buf for the push buffer (if the driver and device are
+1 -1
Documentation/networking/xfrm_device.rst
···
 
 The NIC driver offering ipsec offload will need to implement these
 callbacks to make the offload available to the network stack's
-XFRM subsytem. Additionally, the feature bits NETIF_F_HW_ESP and
+XFRM subsystem. Additionally, the feature bits NETIF_F_HW_ESP and
 NETIF_F_HW_ESP_TX_CSUM will signal the availability of the offload.
 
 
+7 -8
MAINTAINERS
···
 M: Dany Madden <drt@linux.ibm.com>
 M: Lijun Pan <ljp@linux.ibm.com>
 M: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
+R: Thomas Falcon <tlfalcon@linux.ibm.com>
 L: netdev@vger.kernel.org
 S: Supported
 F: drivers/net/ethernet/ibm/ibmvnic.*
···
 M: Mat Martineau <mathew.j.martineau@linux.intel.com>
 M: Matthieu Baerts <matthieu.baerts@tessares.net>
 L: netdev@vger.kernel.org
-L: mptcp@lists.01.org
+L: mptcp@lists.linux.dev
 S: Maintained
 W: https://github.com/multipath-tcp/mptcp_net-next/wiki
 B: https://github.com/multipath-tcp/mptcp_net-next/issues
···
 QLOGIC QLGE 10Gb ETHERNET DRIVER
 M: Manish Chopra <manishc@marvell.com>
 M: GR-Linux-NIC-Dev@marvell.com
-L: netdev@vger.kernel.org
-S: Supported
-F: drivers/staging/qlge/
-
-QLOGIC QLGE 10Gb ETHERNET DRIVER
 M: Coiby Xu <coiby.xu@gmail.com>
 L: netdev@vger.kernel.org
-S: Maintained
+S: Supported
 F: Documentation/networking/device_drivers/qlogic/qlge.rst
+F: drivers/staging/qlge/
 
 QM1D1B0004 MEDIA DRIVER
 M: Akihiro Tsukada <tskd08@gmail.com>
···
 
 SPIDERNET NETWORK DRIVER for CELL
 M: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
+M: Geoff Levand <geoff@infradead.org>
 L: netdev@vger.kernel.org
-S: Supported
+L: linuxppc-dev@lists.ozlabs.org
+S: Maintained
 F: Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst
 F: drivers/net/ethernet/toshiba/spider_net*
 
+25 -6
arch/x86/net/bpf_jit_comp.c
···
  * add rsp, 8 // skip eth_type_trans's frame
  * ret // return to its caller
  */
-int arch_prepare_bpf_trampoline(void *image, void *image_end,
+int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
                const struct btf_func_model *m, u32 flags,
                struct bpf_tramp_progs *tprogs,
                void *orig_call)
···
 
     save_regs(m, &prog, nr_args, stack_size);
 
+    if (flags & BPF_TRAMP_F_CALL_ORIG) {
+        /* arg1: mov rdi, im */
+        emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);
+        if (emit_call(&prog, __bpf_tramp_enter, prog)) {
+            ret = -EINVAL;
+            goto cleanup;
+        }
+    }
+
     if (fentry->nr_progs)
         if (invoke_bpf(m, &prog, fentry, stack_size))
             return -EINVAL;
···
     }
 
     if (flags & BPF_TRAMP_F_CALL_ORIG) {
-        if (fentry->nr_progs || fmod_ret->nr_progs)
-            restore_regs(m, &prog, nr_args, stack_size);
+        restore_regs(m, &prog, nr_args, stack_size);
 
         /* call original function */
         if (emit_call(&prog, orig_call, prog)) {
···
         }
         /* remember return value in a stack for bpf prog to access */
         emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+        im->ip_after_call = prog;
+        memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE);
+        prog += X86_PATCH_SIZE;
     }
 
     if (fmod_ret->nr_progs) {
···
      * the return value is only updated on the stack and still needs to be
      * restored to R0.
      */
-    if (flags & BPF_TRAMP_F_CALL_ORIG)
+    if (flags & BPF_TRAMP_F_CALL_ORIG) {
+        im->ip_epilogue = prog;
+        /* arg1: mov rdi, im */
+        emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);
+        if (emit_call(&prog, __bpf_tramp_exit, prog)) {
+            ret = -EINVAL;
+            goto cleanup;
+        }
         /* restore original return value back into RAX */
         emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+    }
 
     EMIT1(0x5B); /* pop rbx */
     EMIT1(0xC9); /* leave */
···
         padding = true;
         goto skip_init_addrs;
     }
-    addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
+    addrs = kvmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);
     if (!addrs) {
         prog = orig_prog;
         goto out_addrs;
···
     if (image)
         bpf_prog_fill_jited_linfo(prog, addrs + 1);
 out_addrs:
-    kfree(addrs);
+    kvfree(addrs);
     kfree(jit_data);
     prog->aux->jit_data = NULL;
 }
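The kmalloc_array() to kvmalloc_array() switch above lets large JIT address arrays fall back to vmalloc, with kvfree() as the matching release for either path. Independent of that fallback, the *_array helpers also guard the n * size multiplication against overflow before allocating; a userspace sketch of that check (alloc_array_checked() is a hypothetical stand-in, not a kernel API):

```c
#include <stdint.h>
#include <stdlib.h>

/* Userspace sketch (not kernel code) of the guard the *_array
 * allocators provide: refuse when n * size would overflow size_t,
 * instead of handing the allocator a silently-wrapped request. */
static void *alloc_array_checked(size_t n, size_t size)
{
    if (size != 0 && n > SIZE_MAX / size)
        return NULL;               /* would overflow: refuse */
    return malloc(n * size);
}
```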
+14 -4
drivers/infiniband/hw/mlx5/qp.c
···
 
     qpc = MLX5_ADDR_OF(create_qp_in, *in, qpc);
     MLX5_SET(qpc, qpc, uar_page, uar_index);
-    MLX5_SET(qpc, qpc, ts_format, MLX5_QPC_TIMESTAMP_FORMAT_DEFAULT);
+    MLX5_SET(qpc, qpc, ts_format, mlx5_get_qp_default_ts(dev->mdev));
     MLX5_SET(qpc, qpc, log_page_size, qp->buf.page_shift - MLX5_ADAPTER_PAGE_SHIFT);
 
     /* Set "fast registration enabled" for all kernel QPs */
···
         }
         return MLX5_RQC_TIMESTAMP_FORMAT_FREE_RUNNING;
     }
-    return MLX5_RQC_TIMESTAMP_FORMAT_DEFAULT;
+    return fr_supported ? MLX5_RQC_TIMESTAMP_FORMAT_FREE_RUNNING :
+                  MLX5_RQC_TIMESTAMP_FORMAT_DEFAULT;
 }
 
 static int get_sq_ts_format(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *send_cq)
···
         }
         return MLX5_SQC_TIMESTAMP_FORMAT_FREE_RUNNING;
     }
-    return MLX5_SQC_TIMESTAMP_FORMAT_DEFAULT;
+    return fr_supported ? MLX5_SQC_TIMESTAMP_FORMAT_FREE_RUNNING :
+                  MLX5_SQC_TIMESTAMP_FORMAT_DEFAULT;
 }
 
 static int get_qp_ts_format(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *send_cq,
···
         MLX5_QP_TIMESTAMP_FORMAT_CAP_FREE_RUNNING ||
         MLX5_CAP_ROCE(dev->mdev, qp_ts_format) ==
         MLX5_QP_TIMESTAMP_FORMAT_CAP_FREE_RUNNING_AND_REAL_TIME;
-    int ts_format = MLX5_QPC_TIMESTAMP_FORMAT_DEFAULT;
+    int ts_format = fr_supported ? MLX5_QPC_TIMESTAMP_FORMAT_FREE_RUNNING :
+                       MLX5_QPC_TIMESTAMP_FORMAT_DEFAULT;
 
     if (recv_cq &&
         recv_cq->create_flags & IB_UVERBS_CQ_FLAGS_TIMESTAMP_COMPLETION)
···
     if (qp->flags & IB_QP_CREATE_MANAGED_RECV)
         MLX5_SET(qpc, qpc, cd_slave_receive, 1);
 
+    MLX5_SET(qpc, qpc, ts_format, mlx5_get_qp_default_ts(dev->mdev));
     MLX5_SET(qpc, qpc, rq_type, MLX5_SRQ_RQ);
     MLX5_SET(qpc, qpc, no_sq, 1);
     MLX5_SET(qpc, qpc, cqn_rcv, to_mcq(devr->c0)->mcq.cqn);
···
     struct mlx5_ib_dev *dev;
     int has_net_offloads;
     __be64 *rq_pas0;
+    int ts_format;
     void *in;
     void *rqc;
     void *wq;
···
     int err;
 
     dev = to_mdev(pd->device);
+
+    ts_format = get_rq_ts_format(dev, to_mcq(init_attr->cq));
+    if (ts_format < 0)
+        return ts_format;
 
     inlen = MLX5_ST_SZ_BYTES(create_rq_in) + sizeof(u64) * rwq->rq_num_pas;
     in = kvzalloc(inlen, GFP_KERNEL);
···
     rqc = MLX5_ADDR_OF(create_rq_in, in, ctx);
     MLX5_SET(rqc, rqc, mem_rq_type,
          MLX5_RQC_MEM_RQ_TYPE_MEMORY_RQ_INLINE);
+    MLX5_SET(rqc, rqc, ts_format, ts_format);
     MLX5_SET(rqc, rqc, user_index, rwq->user_index);
     MLX5_SET(rqc, rqc, cqn, to_mcq(init_attr->cq)->mcq.cqn);
     MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RST);
+2 -2
drivers/isdn/capi/kcapi.c
···
  * Return value: CAPI result code
  */
 
-u16 capi20_get_manufacturer(u32 contr, u8 *buf)
+u16 capi20_get_manufacturer(u32 contr, u8 buf[CAPI_MANUFACTURER_LEN])
 {
     struct capi_ctr *ctr;
     u16 ret;
···
  * Return value: CAPI result code
  */
 
-u16 capi20_get_serial(u32 contr, u8 *serial)
+u16 capi20_get_serial(u32 contr, u8 serial[CAPI_SERIAL_LEN])
 {
     struct capi_ctr *ctr;
     u16 ret;
+1 -1
drivers/isdn/hardware/mISDN/mISDNipac.c
···
 {
     if (isac->type & IPAC_TYPE_ISACX)
         WriteISAC(isac, ISACX_MASK, 0xff);
-    else
+    else if (isac->type != 0)
         WriteISAC(isac, ISAC_MASK, 0xff);
     if (isac->dch.timer.function != NULL) {
         del_timer(&isac->dch.timer);
+19 -15
drivers/net/arcnet/com20020-pci.c
···
     int i, ioaddr, ret;
     struct resource *r;
 
+    ret = 0;
+
     if (pci_enable_device(pdev))
         return -EIO;
···
     ci = (struct com20020_pci_card_info *)id->driver_data;
     priv->ci = ci;
     mm = &ci->misc_map;
+
+    pci_set_drvdata(pdev, priv);
 
     INIT_LIST_HEAD(&priv->list_dev);
···
         dev = alloc_arcdev(device);
         if (!dev) {
             ret = -ENOMEM;
-            goto out_port;
+            break;
         }
         dev->dev_port = i;
···
             pr_err("IO region %xh-%xh already allocated\n",
                    ioaddr, ioaddr + cm->size - 1);
             ret = -EBUSY;
-            goto out_port;
+            goto err_free_arcdev;
         }
 
         /* Dummy access after Reset
···
         if (arcnet_inb(ioaddr, COM20020_REG_R_STATUS) == 0xFF) {
             pr_err("IO address %Xh is empty!\n", ioaddr);
             ret = -EIO;
-            goto out_port;
+            goto err_free_arcdev;
         }
         if (com20020_check(dev)) {
             ret = -EIO;
-            goto out_port;
+            goto err_free_arcdev;
         }
 
         card = devm_kzalloc(&pdev->dev, sizeof(struct com20020_dev),
                     GFP_KERNEL);
         if (!card) {
             ret = -ENOMEM;
-            goto out_port;
+            goto err_free_arcdev;
         }
···
 
         ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
         if (ret)
-            goto out_port;
+            goto err_free_arcdev;
 
         ret = devm_led_classdev_register(&pdev->dev, &card->recon_led);
         if (ret)
-            goto out_port;
+            goto err_free_arcdev;
 
         dev_set_drvdata(&dev->dev, card);
 
         ret = com20020_found(dev, IRQF_SHARED);
         if (ret)
-            goto out_port;
+            goto err_free_arcdev;
 
         devm_arcnet_led_init(dev, dev->dev_id, i);
 
         list_add(&card->list, &priv->list_dev);
+        continue;
+
+err_free_arcdev:
+        free_arcdev(dev);
+        break;
     }
-
-    pci_set_drvdata(pdev, priv);
-
-    return 0;
-
-out_port:
-    com20020pci_remove(pdev);
+    if (ret)
+        com20020pci_remove(pdev);
     return ret;
 }
 
+2 -6
drivers/net/bonding/bond_main.c
···
 
     rcu_read_lock();
     slave = bond_first_slave_rcu(bond);
-    if (!slave) {
-        ret = -EINVAL;
+    if (!slave)
         goto out;
-    }
     slave_ops = slave->dev->netdev_ops;
-    if (!slave_ops->ndo_neigh_setup) {
-        ret = -EINVAL;
+    if (!slave_ops->ndo_neigh_setup)
         goto out;
-    }
 
     /* TODO: find another way [1] to implement this.
      * Passing a zeroed structure is fragile,
+1 -23
drivers/net/can/c_can/c_can.c
···
     .brp_inc = 1,
 };
 
-static inline void c_can_pm_runtime_enable(const struct c_can_priv *priv)
-{
-    if (priv->device)
-        pm_runtime_enable(priv->device);
-}
-
-static inline void c_can_pm_runtime_disable(const struct c_can_priv *priv)
-{
-    if (priv->device)
-        pm_runtime_disable(priv->device);
-}
-
 static inline void c_can_pm_runtime_get_sync(const struct c_can_priv *priv)
 {
     if (priv->device)
···
 
 int register_c_can_dev(struct net_device *dev)
 {
-    struct c_can_priv *priv = netdev_priv(dev);
     int err;
 
     /* Deactivate pins to prevent DRA7 DCAN IP from being
···
      */
     pinctrl_pm_select_sleep_state(dev->dev.parent);
 
-    c_can_pm_runtime_enable(priv);
-
     dev->flags |= IFF_ECHO; /* we support local echo */
     dev->netdev_ops = &c_can_netdev_ops;
 
     err = register_candev(dev);
-    if (err)
-        c_can_pm_runtime_disable(priv);
-    else
+    if (!err)
         devm_can_led_init(dev);
-
     return err;
 }
 EXPORT_SYMBOL_GPL(register_c_can_dev);
 
 void unregister_c_can_dev(struct net_device *dev)
 {
-    struct c_can_priv *priv = netdev_priv(dev);
-
     unregister_candev(dev);
-
-    c_can_pm_runtime_disable(priv);
 }
 EXPORT_SYMBOL_GPL(unregister_c_can_dev);
 
+2 -1
drivers/net/can/c_can/c_can_pci.c
···
 {
     struct net_device *dev = pci_get_drvdata(pdev);
     struct c_can_priv *priv = netdev_priv(dev);
+    void __iomem *addr = priv->base;
 
     unregister_c_can_dev(dev);
 
     free_c_can_dev(dev);
 
-    pci_iounmap(pdev, priv->base);
+    pci_iounmap(pdev, addr);
     pci_disable_msi(pdev);
     pci_clear_master(pdev);
     pci_release_regions(pdev);
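The c_can_pci_remove() fix above is a use-after-free repair: priv lives inside the netdev that free_c_can_dev() destroys, so priv->base must be copied into a local before the free and only the copy used afterwards. A hypothetical miniature of the pattern in plain C (fake_priv/teardown are illustrative only, not driver code):

```c
#include <stdlib.h>

/* Hypothetical miniature of the c_can_pci_remove() fix: the container
 * object embeds the field we still need for teardown, so copy the
 * field out *before* freeing the container, then use only the copy. */
struct fake_priv { void *base; };

static void *teardown(struct fake_priv *priv)
{
    void *addr = priv->base;  /* copy out first ...            */
    free(priv);               /* ... then free the container ... */
    return addr;              /* ... and touch only the copy.   */
}
```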
+5 -1
drivers/net/can/c_can/c_can_platform.c
···
 #include <linux/list.h>
 #include <linux/io.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/clk.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
···
     platform_set_drvdata(pdev, dev);
     SET_NETDEV_DEV(dev, &pdev->dev);
 
+    pm_runtime_enable(priv->device);
     ret = register_c_can_dev(dev);
     if (ret) {
         dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
···
     return 0;
 
 exit_free_device:
+    pm_runtime_disable(priv->device);
     free_c_can_dev(dev);
 exit:
     dev_err(&pdev->dev, "probe failed\n");
···
 static int c_can_plat_remove(struct platform_device *pdev)
 {
     struct net_device *dev = platform_get_drvdata(pdev);
+    struct c_can_priv *priv = netdev_priv(dev);
 
     unregister_c_can_dev(dev);
-
+    pm_runtime_disable(priv->device);
     free_c_can_dev(dev);
 
     return 0;
+1
drivers/net/can/dev/netlink.c
···
 
 struct rtnl_link_ops can_link_ops __read_mostly = {
     .kind = "can",
+    .netns_refund = true,
     .maxtype = IFLA_CAN_MAX,
     .policy = can_policy,
     .setup = can_setup,
+7 -1
drivers/net/can/flexcan.c
···
 static int flexcan_chip_freeze(struct flexcan_priv *priv)
 {
     struct flexcan_regs __iomem *regs = priv->regs;
-    unsigned int timeout = 1000 * 1000 * 10 / priv->can.bittiming.bitrate;
+    unsigned int timeout;
+    u32 bitrate = priv->can.bittiming.bitrate;
     u32 reg;
+
+    if (bitrate)
+        timeout = 1000 * 1000 * 10 / bitrate;
+    else
+        timeout = FLEXCAN_TIMEOUT_US / 10;
 
     reg = priv->read(&regs->mcr);
     reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT;
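The flexcan_chip_freeze() change above guards a division by a bitrate that can still be zero when the chip is frozen before bittiming is configured. The shape of the fix is a checked divide with a fixed fallback budget; a sketch under the assumption of an illustrative fallback constant (FALLBACK_TIMEOUT_US here stands in for the driver's FLEXCAN_TIMEOUT_US, whose value is not shown in the diff):

```c
/* Sketch of the flexcan_chip_freeze() guard: derive a poll timeout from
 * the configured bitrate, but fall back to a fixed budget when the
 * bitrate is still 0. FALLBACK_TIMEOUT_US is an assumed stand-in. */
#define FALLBACK_TIMEOUT_US 5000

static unsigned int freeze_timeout_us(unsigned int bitrate)
{
    if (bitrate)
        return 1000 * 1000 * 10 / bitrate;  /* ~10 bit times, in us */
    return FALLBACK_TIMEOUT_US / 10;        /* bitrate unknown: fixed budget */
}
```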
+4
drivers/net/can/kvaser_pciefd.c
···
 #define KVASER_PCIEFD_KCAN_STAT_REG 0x418
 #define KVASER_PCIEFD_KCAN_MODE_REG 0x41c
 #define KVASER_PCIEFD_KCAN_BTRN_REG 0x420
+#define KVASER_PCIEFD_KCAN_BUS_LOAD_REG 0x424
 #define KVASER_PCIEFD_KCAN_BTRD_REG 0x428
 #define KVASER_PCIEFD_KCAN_PWM_REG 0x430
 /* Loopback control register */
···
         init_completion(&can->flush_comp);
         timer_setup(&can->bec_poll_timer, kvaser_pciefd_bec_poll_timer,
                 0);
+
+        /* Disable Bus load reporting */
+        iowrite32(0, can->reg_base + KVASER_PCIEFD_KCAN_BUS_LOAD_REG);
 
         tx_npackets = ioread32(can->reg_base +
                        KVASER_PCIEFD_KCAN_TX_NPACKETS_REG);
+1 -4
drivers/net/can/m_can/m_can.c
···
     }
 
     while ((rxfs & RXFS_FFL_MASK) && (quota > 0)) {
-        if (rxfs & RXFS_RFL)
-            netdev_warn(dev, "Rx FIFO 0 Message Lost\n");
-
         m_can_read_fifo(dev, rxfs);
 
         quota--;
···
 {
     struct m_can_classdev *cdev = netdev_priv(dev);
 
-    m_can_rx_handler(dev, 1);
+    m_can_rx_handler(dev, M_CAN_NAPI_WEIGHT);
 
     m_can_enable_all_interrupts(cdev);
 
+1
drivers/net/can/usb/Kconfig
···
       - Kvaser Memorator Pro 5xHS
       - Kvaser USBcan Light 4xHS
       - Kvaser USBcan Pro 2xHS v2
+      - Kvaser USBcan Pro 4xHS
       - Kvaser USBcan Pro 5xHS
       - Kvaser U100
       - Kvaser U100P
+3 -1
drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
···
 #define USB_U100_PRODUCT_ID 273
 #define USB_U100P_PRODUCT_ID 274
 #define USB_U100S_PRODUCT_ID 275
+#define USB_USBCAN_PRO_4HS_PRODUCT_ID 276
 #define USB_HYDRA_PRODUCT_ID_END \
-    USB_U100S_PRODUCT_ID
+    USB_USBCAN_PRO_4HS_PRODUCT_ID
 
 static inline bool kvaser_is_leaf(const struct usb_device_id *id)
 {
···
     { USB_DEVICE(KVASER_VENDOR_ID, USB_U100_PRODUCT_ID) },
     { USB_DEVICE(KVASER_VENDOR_ID, USB_U100P_PRODUCT_ID) },
     { USB_DEVICE(KVASER_VENDOR_ID, USB_U100S_PRODUCT_ID) },
+    { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_4HS_PRODUCT_ID) },
     { }
 };
 MODULE_DEVICE_TABLE(usb, kvaser_usb_table);
+7 -7
drivers/net/dsa/b53/b53_common.c
···
         b53_disable_port(ds, port);
     }
 
-    /* Let DSA handle the case were multiple bridges span the same switch
-     * device and different VLAN awareness settings are requested, which
-     * would be breaking filtering semantics for any of the other bridge
-     * devices. (not hardware supported)
-     */
-    ds->vlan_filtering_is_global = true;
-
     return b53_setup_devlink_resources(ds);
 }
 
···
     ds->ops = &b53_switch_ops;
     ds->untag_bridge_pvid = true;
     dev->vlan_enabled = true;
+    /* Let DSA handle the case were multiple bridges span the same switch
+     * device and different VLAN awareness settings are requested, which
+     * would be breaking filtering semantics for any of the other bridge
+     * devices. (not hardware supported)
+     */
+    ds->vlan_filtering_is_global = true;
+
     mutex_init(&dev->reg_mutex);
     mutex_init(&dev->stats_mutex);
 
+8 -3
drivers/net/dsa/bcm_sf2.c
···
     /* Force link status for IMP port */
     reg = core_readl(priv, offset);
     reg |= (MII_SW_OR | LINK_STS);
-    reg &= ~GMII_SPEED_UP_2G;
+    if (priv->type == BCM4908_DEVICE_ID)
+        reg |= GMII_SPEED_UP_2G;
+    else
+        reg &= ~GMII_SPEED_UP_2G;
     core_writel(priv, reg, offset);
 
     /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */
···
      * in bits 15:8 and the patch level in bits 7:0 which is exactly what
      * the REG_PHY_REVISION register layout is.
      */
-
-    return priv->hw_params.gphy_rev;
+    if (priv->int_phy_mask & BIT(port))
+        return priv->hw_params.gphy_rev;
+    else
+        return 0;
 }
 
 static void bcm_sf2_sw_validate(struct dsa_switch *ds, int port,
+22 -24
drivers/net/dsa/mt7530.c
···
              TD_DM_DRVP(8) | TD_DM_DRVN(8));
 
     /* Setup core clock for MT7530 */
-    if (!trgint) {
-        /* Disable MT7530 core clock */
-        core_clear(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
+    /* Disable MT7530 core clock */
+    core_clear(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
 
-        /* Disable PLL, since phy_device has not yet been created
-         * provided for phy_[read,write]_mmd_indirect is called, we
-         * provide our own core_write_mmd_indirect to complete this
-         * function.
-         */
-        core_write_mmd_indirect(priv,
-                    CORE_GSWPLL_GRP1,
-                    MDIO_MMD_VEND2,
-                    0);
+    /* Disable PLL, since phy_device has not yet been created
+     * provided for phy_[read,write]_mmd_indirect is called, we
+     * provide our own core_write_mmd_indirect to complete this
+     * function.
+     */
+    core_write_mmd_indirect(priv,
+                CORE_GSWPLL_GRP1,
+                MDIO_MMD_VEND2,
+                0);
 
-        /* Set core clock into 500Mhz */
-        core_write(priv, CORE_GSWPLL_GRP2,
-               RG_GSWPLL_POSDIV_500M(1) |
-               RG_GSWPLL_FBKDIV_500M(25));
+    /* Set core clock into 500Mhz */
+    core_write(priv, CORE_GSWPLL_GRP2,
+           RG_GSWPLL_POSDIV_500M(1) |
+           RG_GSWPLL_FBKDIV_500M(25));
 
-        /* Enable PLL */
-        core_write(priv, CORE_GSWPLL_GRP1,
-               RG_GSWPLL_EN_PRE |
-               RG_GSWPLL_POSDIV_200M(2) |
-               RG_GSWPLL_FBKDIV_200M(32));
+    /* Enable PLL */
+    core_write(priv, CORE_GSWPLL_GRP1,
+           RG_GSWPLL_EN_PRE |
+           RG_GSWPLL_POSDIV_200M(2) |
+           RG_GSWPLL_FBKDIV_200M(32));
 
-        /* Enable MT7530 core clock */
-        core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
-    }
+    /* Enable MT7530 core clock */
+    core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
 
     /* Setup the MT7530 TRGMII Tx Clock */
     core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
+1 -1
drivers/net/ethernet/broadcom/Kconfig
···
 config BCM4908_ENET
     tristate "Broadcom BCM4908 internal mac support"
     depends on ARCH_BCM4908 || COMPILE_TEST
-    default y
+    default y if ARCH_BCM4908
     help
       This driver supports Ethernet controller integrated into Broadcom
       BCM4908 family SoCs.
+1 -1
drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
···
         kvfree(tx_info);
         return 0;
     }
-    tx_info->open_state = false;
+    tx_info->open_state = CH_KTLS_OPEN_SUCCESS;
     spin_unlock(&tx_info->lock);
 
     complete(&tx_info->completion);
+1
drivers/net/ethernet/faraday/ftgmac100.c
···
      */
     if (unlikely(priv->need_mac_restart)) {
         ftgmac100_start_hw(priv);
+        priv->need_mac_restart = false;
 
         /* Re-enable "bad" interrupts */
         iowrite32(FTGMAC100_INT_BAD,
+2
drivers/net/ethernet/intel/e1000e/82571.c
···
     } else {
         data &= ~IGP02E1000_PM_D0_LPLU;
         ret_val = e1e_wphy(hw, IGP02E1000_PHY_POWER_MGMT, data);
+        if (ret_val)
+            return ret_val;
         /* LPLU and SmartSpeed are mutually exclusive. LPLU is used
          * during Dx states where the power conservation is most
          * important. During driver activity we should enable
+3 -3
drivers/net/ethernet/intel/e1000e/hw.h
···
 /* SPDX-License-Identifier: GPL-2.0 */
 /* Copyright(c) 1999 - 2018 Intel Corporation. */
 
-#ifndef _E1000_HW_H_
-#define _E1000_HW_H_
+#ifndef _E1000E_HW_H_
+#define _E1000E_HW_H_
 
 #include "regs.h"
 #include "defines.h"
···
 #include "80003es2lan.h"
 #include "ich8lan.h"
 
-#endif
+#endif /* _E1000E_HW_H_ */
+5 -1
drivers/net/ethernet/intel/e1000e/netdev.c
···
     struct e1000_adapter *adapter;
     adapter = container_of(work, struct e1000_adapter, reset_task);
 
+    rtnl_lock();
     /* don't run the task if already down */
-    if (test_bit(__E1000_DOWN, &adapter->state))
+    if (test_bit(__E1000_DOWN, &adapter->state)) {
+        rtnl_unlock();
         return;
+    }
 
     if (!(adapter->flags & FLAG_RESTART_NOW)) {
         e1000e_dump(adapter);
         e_err("Reset adapter unexpectedly\n");
     }
     e1000e_reinit_locked(adapter);
+    rtnl_unlock();
 }
 
 /**
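The e1000_reset_task() change above widens the lock scope so the "already down" check and the reset run atomically with respect to a concurrent close; checking the flag outside the lock leaves a window for the device to go down between check and reset. A userspace analogue using a pthread mutex in place of rtnl (reset_task, adapter_down and resets_done are stand-ins, not driver symbols):

```c
#include <pthread.h>
#include <stdbool.h>

/* Sketch of the lock-scope fix: take the lock *before* the down-check
 * so check and reset cannot be split by a concurrent close. */
static pthread_mutex_t cfg_lock = PTHREAD_MUTEX_INITIALIZER;
static bool adapter_down;
static int resets_done;

static void reset_task(void)
{
    pthread_mutex_lock(&cfg_lock);
    if (adapter_down) {          /* don't run the task if already down */
        pthread_mutex_unlock(&cfg_lock);
        return;
    }
    resets_done++;               /* stands in for the actual reinit work */
    pthread_mutex_unlock(&cfg_lock);
}
```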
+13
drivers/net/ethernet/intel/i40e/i40e_main.c
···
 }
 
 /**
+ * i40e_rx_offset - Return expected offset into page to access data
+ * @rx_ring: Ring we are requesting offset of
+ *
+ * Returns the offset value for ring into the data buffer.
+ */
+static unsigned int i40e_rx_offset(struct i40e_ring *rx_ring)
+{
+    return ring_uses_build_skb(rx_ring) ? I40E_SKB_PAD : 0;
+}
+
+/**
  * i40e_configure_rx_ring - Configure a receive ring context
  * @ring: The Rx ring to configure
  *
···
         clear_ring_build_skb_enabled(ring);
     else
         set_ring_build_skb_enabled(ring);
+
+    ring->rx_offset = i40e_rx_offset(ring);
 
     /* cache tail for quicker writes, and clear the reg before use */
     ring->tail = hw->hw_addr + I40E_QRX_TAIL(pf_q);
-12
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···
 }
 
 /**
- * i40e_rx_offset - Return expected offset into page to access data
- * @rx_ring: Ring we are requesting offset of
- *
- * Returns the offset value for ring into the data buffer.
- */
-static unsigned int i40e_rx_offset(struct i40e_ring *rx_ring)
-{
-    return ring_uses_build_skb(rx_ring) ? I40E_SKB_PAD : 0;
-}
-
-/**
  * i40e_setup_rx_descriptors - Allocate Rx descriptors
  * @rx_ring: Rx descriptor ring (for a specific queue) to setup
  *
···
     rx_ring->next_to_alloc = 0;
     rx_ring->next_to_clean = 0;
     rx_ring->next_to_use = 0;
-    rx_ring->rx_offset = i40e_rx_offset(rx_ring);
 
     /* XDP RX-queue info only needed for RX rings exposed to XDP */
     if (rx_ring->vsi->type == I40E_VSI_MAIN) {
+22 -2
drivers/net/ethernet/intel/ice/ice_base.c
···
 }
 
 /**
+ * ice_rx_offset - Return expected offset into page to access data
+ * @rx_ring: Ring we are requesting offset of
+ *
+ * Returns the offset value for ring into the data buffer.
+ */
+static unsigned int ice_rx_offset(struct ice_ring *rx_ring)
+{
+    if (ice_ring_uses_build_skb(rx_ring))
+        return ICE_SKB_PAD;
+    else if (ice_is_xdp_ena_vsi(rx_ring->vsi))
+        return XDP_PACKET_HEADROOM;
+
+    return 0;
+}
+
+/**
  * ice_setup_rx_ctx - Configure a receive ring context
  * @ring: The Rx ring to configure
  *
···
     else
         ice_set_ring_build_skb_ena(ring);
 
+    ring->rx_offset = ice_rx_offset(ring);
+
     /* init queue specific tail register */
     ring->tail = hw->hw_addr + QRX_TAIL(pf_q);
     writel(0, ring->tail);
 
     if (ring->xsk_pool) {
+        bool ok;
+
         if (!xsk_buff_can_alloc(ring->xsk_pool, num_bufs)) {
             dev_warn(dev, "XSK buffer pool does not provide enough addresses to fill %d buffers on Rx ring %d\n",
                  num_bufs, ring->q_index);
···
             return 0;
         }
 
-        err = ice_alloc_rx_bufs_zc(ring, num_bufs);
-        if (err)
+        ok = ice_alloc_rx_bufs_zc(ring, num_bufs);
+        if (!ok)
             dev_info(dev, "Failed to allocate some buffers on XSK buffer pool enabled Rx ring %d (pf_q %d)\n",
                  ring->q_index, pf_q);
         return 0;
-17
drivers/net/ethernet/intel/ice/ice_txrx.c
···
 }
 
 /**
- * ice_rx_offset - Return expected offset into page to access data
- * @rx_ring: Ring we are requesting offset of
- *
- * Returns the offset value for ring into the data buffer.
- */
-static unsigned int ice_rx_offset(struct ice_ring *rx_ring)
-{
-    if (ice_ring_uses_build_skb(rx_ring))
-        return ICE_SKB_PAD;
-    else if (ice_is_xdp_ena_vsi(rx_ring->vsi))
-        return XDP_PACKET_HEADROOM;
-
-    return 0;
-}
-
-/**
  * ice_setup_rx_ring - Allocate the Rx descriptors
  * @rx_ring: the Rx ring to set up
  *
···
 
     rx_ring->next_to_use = 0;
     rx_ring->next_to_clean = 0;
-    rx_ring->rx_offset = ice_rx_offset(rx_ring);
 
     if (ice_is_xdp_ena_vsi(rx_ring->vsi))
         WRITE_ONCE(rx_ring->xdp_prog, rx_ring->vsi->xdp_prog);
+5 -5
drivers/net/ethernet/intel/ice/ice_xsk.c
···
  * This function allocates a number of Rx buffers from the fill ring
  * or the internal recycle mechanism and places them on the Rx ring.
  *
- * Returns false if all allocations were successful, true if any fail.
+ * Returns true if all allocations were successful, false if any fail.
  */
 bool ice_alloc_rx_bufs_zc(struct ice_ring *rx_ring, u16 count)
 {
     union ice_32b_rx_flex_desc *rx_desc;
     u16 ntu = rx_ring->next_to_use;
     struct ice_rx_buf *rx_buf;
-    bool ret = false;
+    bool ok = true;
     dma_addr_t dma;
 
     if (!count)
-        return false;
+        return true;
 
     rx_desc = ICE_RX_DESC(rx_ring, ntu);
     rx_buf = &rx_ring->rx_buf[ntu];
···
     do {
         rx_buf->xdp = xsk_buff_alloc(rx_ring->xsk_pool);
         if (!rx_buf->xdp) {
-            ret = true;
+            ok = false;
             break;
         }
···
         ice_release_rx_desc(rx_ring, ntu);
     }
 
-    return ret;
+    return ok;
 }
 
 /**
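The ice_alloc_rx_bufs_zc() change above flips an inverted boolean convention ("true on failure") to the conventional "true on success", renaming ret to ok so callers read naturally. A minimal sketch of the flipped convention (fill_ring and alloc_one are hypothetical stand-ins for ice_alloc_rx_bufs_zc() and xsk_buff_alloc(), with an artificial pool budget):

```c
#include <stdbool.h>

/* alloc_one() pretends the buffer pool runs dry once budget hits 0. */
static bool alloc_one(int budget_left)
{
    return budget_left > 0;
}

/* "ok" convention: true means every requested buffer was placed; a
 * partial fill is reported as false but the buffers obtained are kept. */
static bool fill_ring(int count, int pool_budget)
{
    bool ok = true;

    for (int i = 0; i < count; i++) {
        if (!alloc_one(pool_budget - i)) {
            ok = false;   /* pool exhausted: report partial fill */
            break;
        }
    }
    return ok;            /* count == 0 is vacuously successful */
}
```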
+3 -3
drivers/net/ethernet/intel/igb/e1000_hw.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* Copyright(c) 2007 - 2018 Intel Corporation. */ 3 3 4 - #ifndef _E1000_HW_H_ 5 - #define _E1000_HW_H_ 4 + #ifndef _E1000_IGB_HW_H_ 5 + #define _E1000_IGB_HW_H_ 6 6 7 7 #include <linux/types.h> 8 8 #include <linux/delay.h> ··· 551 551 552 552 void igb_read_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); 553 553 void igb_write_pci_cfg(struct e1000_hw *hw, u32 reg, u16 *value); 554 - #endif /* _E1000_HW_H_ */ 554 + #endif /* _E1000_IGB_HW_H_ */
+2 -2
drivers/net/ethernet/intel/igb/igb.h
··· 748 748 void igb_ptp_rx_hang(struct igb_adapter *adapter); 749 749 void igb_ptp_tx_hang(struct igb_adapter *adapter); 750 750 void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector, struct sk_buff *skb); 751 - void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va, 752 - struct sk_buff *skb); 751 + int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va, 752 + struct sk_buff *skb); 753 753 int igb_ptp_set_ts_config(struct net_device *netdev, struct ifreq *ifr); 754 754 int igb_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr); 755 755 void igb_set_flag_queue_pairs(struct igb_adapter *, const u32);
+21 -12
drivers/net/ethernet/intel/igb/igb_main.c
··· 8214 8214 new_buff->pagecnt_bias = old_buff->pagecnt_bias; 8215 8215 } 8216 8216 8217 - static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer) 8217 + static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer, 8218 + int rx_buf_pgcnt) 8218 8219 { 8219 8220 unsigned int pagecnt_bias = rx_buffer->pagecnt_bias; 8220 8221 struct page *page = rx_buffer->page; ··· 8226 8225 8227 8226 #if (PAGE_SIZE < 8192) 8228 8227 /* if we are only owner of page we can reuse it */ 8229 - if (unlikely((page_ref_count(page) - pagecnt_bias) > 1)) 8228 + if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1)) 8230 8229 return false; 8231 8230 #else 8232 8231 #define IGB_LAST_OFFSET \ ··· 8302 8301 return NULL; 8303 8302 8304 8303 if (unlikely(igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP))) { 8305 - igb_ptp_rx_pktstamp(rx_ring->q_vector, xdp->data, skb); 8306 - xdp->data += IGB_TS_HDR_LEN; 8307 - size -= IGB_TS_HDR_LEN; 8304 + if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, xdp->data, skb)) { 8305 + xdp->data += IGB_TS_HDR_LEN; 8306 + size -= IGB_TS_HDR_LEN; 8307 + } 8308 8308 } 8309 8309 8310 8310 /* Determine available headroom for copy */ ··· 8366 8364 8367 8365 /* pull timestamp out of packet data */ 8368 8366 if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) { 8369 - igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb); 8370 - __skb_pull(skb, IGB_TS_HDR_LEN); 8367 + if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb)) 8368 + __skb_pull(skb, IGB_TS_HDR_LEN); 8371 8369 } 8372 8370 8373 8371 /* update buffer offset */ ··· 8616 8614 } 8617 8615 8618 8616 static struct igb_rx_buffer *igb_get_rx_buffer(struct igb_ring *rx_ring, 8619 - const unsigned int size) 8617 + const unsigned int size, int *rx_buf_pgcnt) 8620 8618 { 8621 8619 struct igb_rx_buffer *rx_buffer; 8622 8620 8623 8621 rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean]; 8622 + *rx_buf_pgcnt = 8623 + #if (PAGE_SIZE < 8192) 8624 + page_count(rx_buffer->page); 8625 + #else 8626 + 0; 8627 + 
#endif 8624 8628 prefetchw(rx_buffer->page); 8625 8629 8626 8630 /* we are reusing so sync this buffer for CPU use */ ··· 8642 8634 } 8643 8635 8644 8636 static void igb_put_rx_buffer(struct igb_ring *rx_ring, 8645 - struct igb_rx_buffer *rx_buffer) 8637 + struct igb_rx_buffer *rx_buffer, int rx_buf_pgcnt) 8646 8638 { 8647 - if (igb_can_reuse_rx_page(rx_buffer)) { 8639 + if (igb_can_reuse_rx_page(rx_buffer, rx_buf_pgcnt)) { 8648 8640 /* hand second half of page back to the ring */ 8649 8641 igb_reuse_rx_page(rx_ring, rx_buffer); 8650 8642 } else { ··· 8672 8664 unsigned int xdp_xmit = 0; 8673 8665 struct xdp_buff xdp; 8674 8666 u32 frame_sz = 0; 8667 + int rx_buf_pgcnt; 8675 8668 8676 8669 /* Frame size depend on rx_ring setup when PAGE_SIZE=4K */ 8677 8670 #if (PAGE_SIZE < 8192) ··· 8702 8693 */ 8703 8694 dma_rmb(); 8704 8695 8705 - rx_buffer = igb_get_rx_buffer(rx_ring, size); 8696 + rx_buffer = igb_get_rx_buffer(rx_ring, size, &rx_buf_pgcnt); 8706 8697 8707 8698 /* retrieve a buffer from the ring */ 8708 8699 if (!skb) { ··· 8745 8736 break; 8746 8737 } 8747 8738 8748 - igb_put_rx_buffer(rx_ring, rx_buffer); 8739 + igb_put_rx_buffer(rx_ring, rx_buffer, rx_buf_pgcnt); 8749 8740 cleaned_count++; 8750 8741 8751 8742 /* fetch next buffer in frame if non-eop */
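The igb_main.c change above (fix #20 in the summary, premature rx buffer reuse) snapshots page_count() once and feeds that snapshot into the reuse decision instead of re-reading it later. The single-owner test itself reduces to a pure comparison, modeled here in userspace (PAGE_SIZE < 8192 path only; the name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* The half-page may be recycled only when the snapshotted reference
 * count minus our own bias proves nobody else still holds the page.
 * Re-reading page_ref_count() at decision time is what allowed
 * premature reuse; the caller now passes a count taken earlier. */
static bool can_reuse_rx_page(int rx_buf_pgcnt, int pagecnt_bias)
{
	return (rx_buf_pgcnt - pagecnt_bias) <= 1;
}
```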
+24 -7
drivers/net/ethernet/intel/igb/igb_ptp.c
··· 856 856 dev_kfree_skb_any(skb); 857 857 } 858 858 859 + #define IGB_RET_PTP_DISABLED 1 860 + #define IGB_RET_PTP_INVALID 2 861 + 859 862 /** 860 863 * igb_ptp_rx_pktstamp - retrieve Rx per packet timestamp 861 864 * @q_vector: Pointer to interrupt specific structure ··· 867 864 * 868 865 * This function is meant to retrieve a timestamp from the first buffer of an 869 866 * incoming frame. The value is stored in little endian format starting on 870 - * byte 8. 867 + * byte 8 868 + * 869 + * Returns: 0 if success, nonzero if failure 871 870 **/ 872 - void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va, 873 - struct sk_buff *skb) 871 + int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va, 872 + struct sk_buff *skb) 874 873 { 875 - __le64 *regval = (__le64 *)va; 876 874 struct igb_adapter *adapter = q_vector->adapter; 875 + __le64 *regval = (__le64 *)va; 877 876 int adjust = 0; 877 + 878 + if (!(adapter->ptp_flags & IGB_PTP_ENABLED)) 879 + return IGB_RET_PTP_DISABLED; 878 880 879 881 /* The timestamp is recorded in little endian format. 880 882 * DWORD: 0 1 2 3 881 883 * Field: Reserved Reserved SYSTIML SYSTIMH 882 884 */ 885 + 886 + /* check reserved dwords are zero, be/le doesn't matter for zero */ 887 + if (regval[0]) 888 + return IGB_RET_PTP_INVALID; 889 + 883 890 igb_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb), 884 891 le64_to_cpu(regval[1])); 885 892 ··· 909 896 } 910 897 skb_hwtstamps(skb)->hwtstamp = 911 898 ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust); 899 + 900 + return 0; 912 901 } 913 902 914 903 /** ··· 921 906 * This function is meant to retrieve a timestamp from the internal registers 922 907 * of the adapter and store it in the skb. 
923 908 **/ 924 - void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector, 925 - struct sk_buff *skb) 909 + void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector, struct sk_buff *skb) 926 910 { 927 911 struct igb_adapter *adapter = q_vector->adapter; 928 912 struct e1000_hw *hw = &adapter->hw; 929 - u64 regval; 930 913 int adjust = 0; 914 + u64 regval; 915 + 916 + if (!(adapter->ptp_flags & IGB_PTP_ENABLED)) 917 + return; 931 918 932 919 /* If this bit is set, then the RX registers contain the time stamp. No 933 920 * other packet will be time stamped until we read these registers, so
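The igb_ptp.c hunk makes igb_ptp_rx_pktstamp() reject timestamps whose reserved dwords are nonzero instead of blindly consuming them. A self-contained sketch of that validation over the documented 16-byte layout (parse_rx_pktstamp() and its wrapper are illustrative helpers, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RET_PTP_INVALID 2	/* mirrors IGB_RET_PTP_INVALID in the patch */

/* Layout per the driver comment: dwords 0-1 reserved (must be zero),
 * dwords 2-3 hold SYSTIML/SYSTIMH in little endian, i.e. bytes 8-15. */
static int parse_rx_pktstamp(const uint8_t *va, uint64_t *systim)
{
	uint64_t reserved = 0, ts = 0;

	memcpy(&reserved, va, 8);
	if (reserved)		/* any byte set => not a valid timestamp */
		return RET_PTP_INVALID;

	for (int i = 7; i >= 0; i--)	/* endian-independent le64 load */
		ts = (ts << 8) | va[8 + i];
	*systim = ts;
	return 0;
}

/* Convenience wrapper for exercising the happy path. */
static uint64_t parse_ok_value(const uint8_t *va)
{
	uint64_t v = 0;

	parse_rx_pktstamp(va, &v);
	return v;
}
```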
+1 -1
drivers/net/ethernet/intel/igc/igc.h
··· 547 547 void igc_ptp_reset(struct igc_adapter *adapter); 548 548 void igc_ptp_suspend(struct igc_adapter *adapter); 549 549 void igc_ptp_stop(struct igc_adapter *adapter); 550 - void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, void *va, 550 + void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va, 551 551 struct sk_buff *skb); 552 552 int igc_ptp_set_ts_config(struct net_device *netdev, struct ifreq *ifr); 553 553 int igc_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
+4 -3
drivers/net/ethernet/intel/igc/igc_ethtool.c
··· 1711 1711 Autoneg); 1712 1712 } 1713 1713 1714 + /* Set pause flow control settings */ 1715 + ethtool_link_ksettings_add_link_mode(cmd, supported, Pause); 1716 + 1714 1717 switch (hw->fc.requested_mode) { 1715 1718 case igc_fc_full: 1716 1719 ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause); ··· 1728 1725 Asym_Pause); 1729 1726 break; 1730 1727 default: 1731 - ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause); 1732 - ethtool_link_ksettings_add_link_mode(cmd, advertising, 1733 - Asym_Pause); 1728 + break; 1734 1729 } 1735 1730 1736 1731 status = pm_runtime_suspended(&adapter->pdev->dev) ?
+9
drivers/net/ethernet/intel/igc/igc_main.c
··· 3831 3831 3832 3832 adapter = container_of(work, struct igc_adapter, reset_task); 3833 3833 3834 + rtnl_lock(); 3835 + /* If we're already down or resetting, just bail */ 3836 + if (test_bit(__IGC_DOWN, &adapter->state) || 3837 + test_bit(__IGC_RESETTING, &adapter->state)) { 3838 + rtnl_unlock(); 3839 + return; 3840 + } 3841 + 3834 3842 igc_rings_dump(adapter); 3835 3843 igc_regs_dump(adapter); 3836 3844 netdev_err(adapter->netdev, "Reset adapter\n"); 3837 3845 igc_reinit_locked(adapter); 3846 + rtnl_unlock(); 3838 3847 } 3839 3848 3840 3849 /**
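The igc_main.c hunk (fix #15, rtnl locking in igc) takes the rtnl lock around the whole reset task and bails out, with the lock released, when the adapter is already down or resetting. The lock/check/early-unlock shape can be modeled in plain C (fake_lock() and the state bits are stand-ins for rtnl_lock() and __IGC_DOWN/__IGC_RESETTING):

```c
#include <assert.h>
#include <stdbool.h>

enum { STATE_DOWN = 1 << 0, STATE_RESETTING = 1 << 1 };

static int lock_depth;	/* stand-in for the rtnl lock; must end at 0 */

static void fake_lock(void)   { lock_depth++; }
static void fake_unlock(void) { lock_depth--; }

/* Returns true if a reset was actually performed. Every path out of
 * the function releases the lock exactly once. */
static bool reset_task(unsigned int state)
{
	fake_lock();
	if (state & (STATE_DOWN | STATE_RESETTING)) {
		fake_unlock();
		return false;	/* already down/resetting: just bail */
	}
	/* ... dump rings/regs and reinit the adapter under the lock ... */
	fake_unlock();
	return true;
}
```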
+38 -30
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 152 152 } 153 153 154 154 /** 155 - * igc_ptp_rx_pktstamp - retrieve Rx per packet timestamp 155 + * igc_ptp_rx_pktstamp - Retrieve timestamp from Rx packet buffer 156 156 * @q_vector: Pointer to interrupt specific structure 157 157 * @va: Pointer to address containing Rx buffer 158 158 * @skb: Buffer containing timestamp and packet 159 159 * 160 - * This function is meant to retrieve the first timestamp from the 161 - * first buffer of an incoming frame. The value is stored in little 162 - * endian format starting on byte 0. There's a second timestamp 163 - * starting on byte 8. 164 - **/ 165 - void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, void *va, 160 + * This function retrieves the timestamp saved in the beginning of packet 161 + * buffer. While two timestamps are available, one in timer0 reference and the 162 + * other in timer1 reference, this function considers only the timestamp in 163 + * timer0 reference. 164 + */ 165 + void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va, 166 166 struct sk_buff *skb) 167 167 { 168 168 struct igc_adapter *adapter = q_vector->adapter; 169 - __le64 *regval = (__le64 *)va; 170 - int adjust = 0; 169 + u64 regval; 170 + int adjust; 171 171 172 - /* The timestamp is recorded in little endian format. 173 - * DWORD: | 0 | 1 | 2 | 3 174 - * Field: | Timer0 Low | Timer0 High | Timer1 Low | Timer1 High 172 + /* Timestamps are saved in little endian at the beginning of the packet 173 + * buffer following the layout: 174 + * 175 + * DWORD: | 0 | 1 | 2 | 3 | 176 + * Field: | Timer1 SYSTIML | Timer1 SYSTIMH | Timer0 SYSTIML | Timer0 SYSTIMH | 177 + * 178 + * SYSTIML holds the nanoseconds part while SYSTIMH holds the seconds 179 + * part of the timestamp. 
175 180 */ 176 - igc_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb), 177 - le64_to_cpu(regval[0])); 181 + regval = le32_to_cpu(va[2]); 182 + regval |= (u64)le32_to_cpu(va[3]) << 32; 183 + igc_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb), regval); 178 184 179 - /* adjust timestamp for the RX latency based on link speed */ 180 - if (adapter->hw.mac.type == igc_i225) { 181 - switch (adapter->link_speed) { 182 - case SPEED_10: 183 - adjust = IGC_I225_RX_LATENCY_10; 184 - break; 185 - case SPEED_100: 186 - adjust = IGC_I225_RX_LATENCY_100; 187 - break; 188 - case SPEED_1000: 189 - adjust = IGC_I225_RX_LATENCY_1000; 190 - break; 191 - case SPEED_2500: 192 - adjust = IGC_I225_RX_LATENCY_2500; 193 - break; 194 - } 185 + /* Adjust timestamp for the RX latency based on link speed */ 186 + switch (adapter->link_speed) { 187 + case SPEED_10: 188 + adjust = IGC_I225_RX_LATENCY_10; 189 + break; 190 + case SPEED_100: 191 + adjust = IGC_I225_RX_LATENCY_100; 192 + break; 193 + case SPEED_1000: 194 + adjust = IGC_I225_RX_LATENCY_1000; 195 + break; 196 + case SPEED_2500: 197 + adjust = IGC_I225_RX_LATENCY_2500; 198 + break; 199 + default: 200 + adjust = 0; 201 + netdev_warn_once(adapter->netdev, "Imprecise timestamp\n"); 202 + break; 195 203 } 196 204 skb_hwtstamps(skb)->hwtstamp = 197 205 ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
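The igc_ptp.c rewrite reads Timer0's value from dwords 2 and 3 of the packet buffer; the old code read a __le64 at dword 0, which holds the Timer1 value. With the dwords already converted to CPU order, the assembly is two shifts, sketched below (timer0_systim() is an illustrative helper, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Buffer layout per the updated comment (all little endian):
 *   dword 0: Timer1 SYSTIML   dword 1: Timer1 SYSTIMH
 *   dword 2: Timer0 SYSTIML   dword 3: Timer0 SYSTIMH
 * SYSTIML is the nanoseconds part, SYSTIMH the seconds part; only the
 * Timer0 value is consumed. */
static uint64_t timer0_systim(const uint32_t *va)
{
	uint64_t regval = va[2];		/* nanoseconds part */

	regval |= (uint64_t)va[3] << 32;	/* seconds part */
	return regval;
}
```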
+2 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 4118 4118 #endif 4119 4119 } 4120 4120 4121 + ring->rx_offset = ixgbe_rx_offset(ring); 4122 + 4121 4123 if (ring->xsk_pool && hw->mac.type != ixgbe_mac_82599EB) { 4122 4124 u32 xsk_buf_len = xsk_pool_get_rx_frame_size(ring->xsk_pool); 4123 4125 ··· 6580 6578 6581 6579 rx_ring->next_to_clean = 0; 6582 6580 rx_ring->next_to_use = 0; 6583 - rx_ring->rx_offset = ixgbe_rx_offset(rx_ring); 6584 6581 6585 6582 /* XDP RX-queue info */ 6586 6583 if (xdp_rxq_info_reg(&rx_ring->xdp_rxq, adapter->netdev,
+2 -2
drivers/net/ethernet/marvell/Kconfig
··· 6 6 config NET_VENDOR_MARVELL 7 7 bool "Marvell devices" 8 8 default y 9 - depends on PCI || CPU_PXA168 || MV64X60 || PPC32 || PLAT_ORION || INET || COMPILE_TEST 9 + depends on PCI || CPU_PXA168 || PPC32 || PLAT_ORION || INET || COMPILE_TEST 10 10 help 11 11 If you have a network (Ethernet) card belonging to this class, say Y. 12 12 ··· 19 19 20 20 config MV643XX_ETH 21 21 tristate "Marvell Discovery (643XX) and Orion ethernet support" 22 - depends on MV64X60 || PPC32 || PLAT_ORION || COMPILE_TEST 22 + depends on PPC32 || PLAT_ORION || COMPILE_TEST 23 23 depends on INET 24 24 select PHYLIB 25 25 select MVMDIO
+1 -1
drivers/net/ethernet/marvell/mv643xx_eth.c
··· 2684 2684 MODULE_DEVICE_TABLE(of, mv643xx_eth_shared_ids); 2685 2685 #endif 2686 2686 2687 - #if defined(CONFIG_OF_IRQ) && !defined(CONFIG_MV64X60) 2687 + #ifdef CONFIG_OF_IRQ 2688 2688 #define mv643xx_eth_property(_np, _name, _v) \ 2689 2689 do { \ 2690 2690 u32 tmp; \
-2
drivers/net/ethernet/marvell/octeontx2/af/npc_profile.h
··· 13499 13499 [NPC_LT_LC_IP] = { 13500 13500 /* SIP+DIP: 8 bytes, KW2[63:0] */ 13501 13501 KEX_LD_CFG(0x07, 0xc, 0x1, 0x0, 0x10), 13502 - /* TOS: 1 byte, KW1[63:56] */ 13503 - KEX_LD_CFG(0x0, 0x1, 0x1, 0x0, 0xf), 13504 13502 }, 13505 13503 /* Layer C: IPv6 */ 13506 13504 [NPC_LT_LC_IP6] = {
+4 -2
drivers/net/ethernet/marvell/octeontx2/af/rvu.c
··· 2462 2462 INTR_MASK(rvu->hw->total_pfs) & ~1ULL); 2463 2463 2464 2464 for (irq = 0; irq < rvu->num_vec; irq++) { 2465 - if (rvu->irq_allocated[irq]) 2465 + if (rvu->irq_allocated[irq]) { 2466 2466 free_irq(pci_irq_vector(rvu->pdev, irq), rvu); 2467 + rvu->irq_allocated[irq] = false; 2468 + } 2467 2469 } 2468 2470 2469 2471 pci_free_irq_vectors(rvu->pdev); ··· 2977 2975 struct rvu *rvu = pci_get_drvdata(pdev); 2978 2976 2979 2977 rvu_dbg_exit(rvu); 2980 - rvu_unregister_interrupts(rvu); 2981 2978 rvu_unregister_dl(rvu); 2979 + rvu_unregister_interrupts(rvu); 2982 2980 rvu_flr_wq_destroy(rvu); 2983 2981 rvu_cgx_exit(rvu); 2984 2982 rvu_fwdata_exit(rvu);
+1
drivers/net/ethernet/marvell/octeontx2/af/rvu.h
··· 678 678 u8 *intf, u8 *ena); 679 679 bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature); 680 680 u32 rvu_cgx_get_fifolen(struct rvu *rvu); 681 + void *rvu_first_cgx_pdata(struct rvu *rvu); 681 682 682 683 /* CPT APIs */ 683 684 int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int lf, int slot);
+16 -2
drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
··· 89 89 return rvu->cgx_idmap[cgx_id]; 90 90 } 91 91 92 + /* Return first enabled CGX instance if none are enabled then return NULL */ 93 + void *rvu_first_cgx_pdata(struct rvu *rvu) 94 + { 95 + int first_enabled_cgx = 0; 96 + void *cgxd = NULL; 97 + 98 + for (; first_enabled_cgx < rvu->cgx_cnt_max; first_enabled_cgx++) { 99 + cgxd = rvu_cgx_pdata(first_enabled_cgx, rvu); 100 + if (cgxd) 101 + break; 102 + } 103 + 104 + return cgxd; 105 + } 106 + 92 107 /* Based on P2X connectivity find mapped NIX block for a PF */ 93 108 static void rvu_map_cgx_nix_block(struct rvu *rvu, int pf, 94 109 int cgx_id, int lmac_id) ··· 726 711 u32 rvu_cgx_get_fifolen(struct rvu *rvu) 727 712 { 728 713 struct mac_ops *mac_ops; 729 - int rvu_def_cgx_id = 0; 730 714 u32 fifo_len; 731 715 732 - mac_ops = get_mac_ops(rvu_cgx_pdata(rvu_def_cgx_id, rvu)); 716 + mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu)); 733 717 fifo_len = mac_ops ? mac_ops->fifo_len : 0; 734 718 735 719 return fifo_len;
+36 -21
drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
··· 234 234 char __user *buffer, 235 235 size_t count, loff_t *ppos) 236 236 { 237 - int index, off = 0, flag = 0, go_back = 0, off_prev; 237 + int index, off = 0, flag = 0, go_back = 0, len = 0; 238 238 struct rvu *rvu = filp->private_data; 239 239 int lf, pf, vf, pcifunc; 240 240 struct rvu_block block; 241 241 int bytes_not_copied; 242 + int lf_str_size = 12; 242 243 int buf_size = 2048; 244 + char *lfs; 243 245 char *buf; 244 246 245 247 /* don't allow partial reads */ ··· 251 249 buf = kzalloc(buf_size, GFP_KERNEL); 252 250 if (!buf) 253 251 return -ENOSPC; 254 - off += scnprintf(&buf[off], buf_size - 1 - off, "\npcifunc\t\t"); 252 + 253 + lfs = kzalloc(lf_str_size, GFP_KERNEL); 254 + if (!lfs) { 255 + kfree(buf); 256 + return -ENOMEM; 257 + } 258 + off += scnprintf(&buf[off], buf_size - 1 - off, "%-*s", lf_str_size, 259 + "pcifunc"); 255 260 for (index = 0; index < BLK_COUNT; index++) 256 - if (strlen(rvu->hw->block[index].name)) 257 - off += scnprintf(&buf[off], buf_size - 1 - off, 258 - "%*s\t", (index - 1) * 2, 259 - rvu->hw->block[index].name); 261 + if (strlen(rvu->hw->block[index].name)) { 262 + off += scnprintf(&buf[off], buf_size - 1 - off, 263 + "%-*s", lf_str_size, 264 + rvu->hw->block[index].name); 265 + } 260 266 off += scnprintf(&buf[off], buf_size - 1 - off, "\n"); 261 267 for (pf = 0; pf < rvu->hw->total_pfs; pf++) { 262 268 for (vf = 0; vf <= rvu->hw->total_vfs; vf++) { ··· 273 263 continue; 274 264 275 265 if (vf) { 266 + sprintf(lfs, "PF%d:VF%d", pf, vf - 1); 276 267 go_back = scnprintf(&buf[off], 277 268 buf_size - 1 - off, 278 - "PF%d:VF%d\t\t", pf, 279 - vf - 1); 269 + "%-*s", lf_str_size, lfs); 280 270 } else { 271 + sprintf(lfs, "PF%d", pf); 281 272 go_back = scnprintf(&buf[off], 282 273 buf_size - 1 - off, 283 - "PF%d\t\t", pf); 274 + "%-*s", lf_str_size, lfs); 284 275 } 285 276 286 277 off += go_back; ··· 289 278 block = rvu->hw->block[index]; 290 279 if (!strlen(block.name)) 291 280 continue; 292 - off_prev = off; 281 + len = 0; 282 
+ lfs[len] = '\0'; 293 283 for (lf = 0; lf < block.lf.max; lf++) { 294 284 if (block.fn_map[lf] != pcifunc) 295 285 continue; 296 286 flag = 1; 297 - off += scnprintf(&buf[off], buf_size - 1 298 - - off, "%3d,", lf); 287 + len += sprintf(&lfs[len], "%d,", lf); 299 288 } 300 - if (flag && off_prev != off) 301 - off--; 302 - else 303 - go_back++; 289 + 290 + if (flag) 291 + len--; 292 + lfs[len] = '\0'; 304 293 off += scnprintf(&buf[off], buf_size - 1 - off, 305 - "\t"); 294 + "%-*s", lf_str_size, lfs); 295 + if (!strlen(lfs)) 296 + go_back += lf_str_size; 306 297 } 307 298 if (!flag) 308 299 off -= go_back; ··· 316 303 } 317 304 318 305 bytes_not_copied = copy_to_user(buffer, buf, off); 306 + kfree(lfs); 319 307 kfree(buf); 320 308 321 309 if (bytes_not_copied) ··· 333 319 struct rvu *rvu = filp->private; 334 320 struct pci_dev *pdev = NULL; 335 321 struct mac_ops *mac_ops; 336 - int rvu_def_cgx_id = 0; 337 322 char cgx[10], lmac[10]; 338 323 struct rvu_pfvf *pfvf; 339 324 int pf, domain, blkid; ··· 340 327 u16 pcifunc; 341 328 342 329 domain = 2; 343 - mac_ops = get_mac_ops(rvu_cgx_pdata(rvu_def_cgx_id, rvu)); 330 + mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu)); 331 + /* There can be no CGX devices at all */ 332 + if (!mac_ops) 333 + return 0; 344 334 seq_printf(filp, "PCI dev\t\tRVU PF Func\tNIX block\t%s\tLMAC\n", 345 335 mac_ops->name); 346 336 for (pf = 0; pf < rvu->hw->total_pfs; pf++) { ··· 1834 1818 { 1835 1819 struct mac_ops *mac_ops; 1836 1820 unsigned long lmac_bmap; 1837 - int rvu_def_cgx_id = 0; 1838 1821 int i, lmac_id; 1839 1822 char dname[20]; 1840 1823 void *cgx; ··· 1841 1826 if (!cgx_get_cgxcnt_max()) 1842 1827 return; 1843 1828 1844 - mac_ops = get_mac_ops(rvu_cgx_pdata(rvu_def_cgx_id, rvu)); 1829 + mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu)); 1845 1830 if (!mac_ops) 1846 1831 return; 1847 1832
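The rvu_debugfs.c rework drops tab-based padding in favor of fixed-width `%-*s` fields, so the resource table stays aligned however long the PF/VF and LF strings get. The formatting idiom in isolation (format_row() is a simplified two-column stand-in; the 12-character width matches the patch's lf_str_size):

```c
#include <assert.h>
#include <stdio.h>

#define LF_STR_SIZE 12	/* per-column width used by the patch */

/* "%-*s" left-justifies the string and pads it to LF_STR_SIZE
 * characters, so "PF0" and "PF10:VF3" occupy identical columns. */
static int format_row(char *buf, size_t len, const char *pcifunc,
		      const char *lfs)
{
	return snprintf(buf, len, "%-*s%-*s", LF_STR_SIZE, pcifunc,
			LF_STR_SIZE, lfs);
}

/* Test helper: character at position i of a rendered row. */
static char row_char(int i, const char *pcifunc, const char *lfs)
{
	char buf[64];

	format_row(buf, sizeof(buf), pcifunc, lfs);
	return buf[i];
}
```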
+1 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
··· 2629 2629 struct nix_rx_flowkey_alg *field; 2630 2630 struct nix_rx_flowkey_alg tmp; 2631 2631 u32 key_type, valid_key; 2632 - int l4_key_offset; 2632 + int l4_key_offset = 0; 2633 2633 2634 2634 if (!alg) 2635 2635 return -EINVAL;
+1 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
··· 2490 2490 index = find_next_bit(mcam->bmap, mcam->bmap_entries, entry); 2491 2491 if (index >= mcam->bmap_entries) 2492 2492 break; 2493 + entry = index + 1; 2493 2494 if (mcam->entry2cntr_map[index] != req->cntr) 2494 2495 continue; 2495 2496 2496 - entry = index + 1; 2497 2497 npc_unmap_mcam_entry_and_cntr(rvu, mcam, blkaddr, 2498 2498 index, req->cntr); 2499 2499 }
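The rvu_npc.c hunk is fix #27 (infinite loop in NPC unmap): `entry` now advances past the found bit before the counter-match filter, so a non-matching entry can no longer make find_next_bit() return the same index forever. The corrected scan, modeled with a byte-array bitmap (the helper names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal find_next_bit(): index of the first set bit at or after
 * 'start', or 'nbits' if none remains. */
static size_t next_set_bit(const unsigned char *bmap, size_t nbits,
			   size_t start)
{
	for (size_t i = start; i < nbits; i++)
		if (bmap[i / 8] & (1u << (i % 8)))
			return i;
	return nbits;
}

/* Count bitmap entries mapped to 'cntr'. Advancing 'entry' *before*
 * the filter is the fix: with the old order, a set bit whose counter
 * did not match left 'entry' unchanged and the loop never terminated. */
static int count_entries_for_cntr(const unsigned char *bmap, size_t nbits,
				  const int *entry2cntr, int cntr)
{
	size_t entry = 0, index;
	int found = 0;

	while (entry < nbits) {
		index = next_set_bit(bmap, nbits, entry);
		if (index >= nbits)
			break;
		entry = index + 1;	/* always move past this bit */
		if (entry2cntr[index] != cntr)
			continue;
		found++;
	}
	return found;
}

/* Test helper: bits 0-2 set, mapped to counters {5, 7, 5}. */
static int demo_count(int cntr)
{
	const unsigned char bmap[1] = { 0x07 };
	const int entry2cntr[3] = { 5, 7, 5 };

	return count_entries_for_cntr(bmap, 3, entry2cntr, cntr);
}
```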
+3 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
··· 257 257 int otx2_get_all_flows(struct otx2_nic *pfvf, struct ethtool_rxnfc *nfc, 258 258 u32 *rule_locs) 259 259 { 260 + u32 rule_cnt = nfc->rule_cnt; 260 261 u32 location = 0; 261 262 int idx = 0; 262 263 int err = 0; 263 264 264 265 nfc->data = pfvf->flow_cfg->ntuple_max_flows; 265 - while ((!err || err == -ENOENT) && idx < nfc->rule_cnt) { 266 + while ((!err || err == -ENOENT) && idx < rule_cnt) { 266 267 err = otx2_get_flow(pfvf, nfc, location); 267 268 if (!err) 268 269 rule_locs[idx++] = location; 269 270 location++; 270 271 } 272 + nfc->rule_cnt = rule_cnt; 271 273 272 274 return err; 273 275 }
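The otx2_flows.c change caches nfc->rule_cnt before the loop, because the per-rule lookup can modify the structure it is handed, and restores the field for the caller afterwards. The snapshot-and-restore shape in a reduced userspace model (struct rxnfc and collect_rules() are simplified stand-ins):

```c
#include <assert.h>

struct rxnfc { unsigned int rule_cnt; };

/* Walk up to rule_cnt rules into 'locs'. The per-rule helper (modeled
 * by the assignment below) may scribble on nfc->rule_cnt, so the bound
 * is snapshotted first and the field restored before returning. */
static int collect_rules(struct rxnfc *nfc, unsigned int *locs,
			 unsigned int max_locs)
{
	unsigned int rule_cnt = nfc->rule_cnt;	/* snapshot the bound */
	unsigned int idx = 0, location = 0;

	while (idx < rule_cnt && idx < max_locs) {
		nfc->rule_cnt = 0;	/* models the callee clobbering it */
		locs[idx++] = location++;
	}
	nfc->rule_cnt = rule_cnt;	/* restore for the caller */
	return (int)idx;
}

/* Test helper: rule_cnt must survive the walk intact. */
static unsigned int rule_cnt_after(unsigned int n)
{
	struct rxnfc nfc = { n };
	unsigned int locs[16];

	collect_rules(&nfc, locs, 16);
	return nfc.rule_cnt;
}
```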
+5
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
··· 1672 1672 struct otx2_nic *pf = netdev_priv(netdev); 1673 1673 struct otx2_cq_poll *cq_poll = NULL; 1674 1674 struct otx2_qset *qset = &pf->qset; 1675 + struct otx2_rss_info *rss; 1675 1676 int qidx, vec, wrk; 1676 1677 1677 1678 netif_carrier_off(netdev); ··· 1684 1683 1685 1684 /* First stop packet Rx/Tx */ 1686 1685 otx2_rxtx_enable(pf, false); 1686 + 1687 + /* Clear RSS enable flag */ 1688 + rss = &pf->hw.rss_info; 1689 + rss->enable = false; 1687 1690 1688 1691 /* Cleanup Queue IRQ */ 1689 1692 vec = pci_irq_vector(pf->pdev,
+1 -1
drivers/net/ethernet/marvell/pxa168_eth.c
··· 1544 1544 clk_disable_unprepare(pep->clk); 1545 1545 mdiobus_unregister(pep->smi_bus); 1546 1546 mdiobus_free(pep->smi_bus); 1547 - unregister_netdev(dev); 1548 1547 cancel_work_sync(&pep->tx_timeout_task); 1548 + unregister_netdev(dev); 1549 1549 free_netdev(dev); 1550 1550 return 0; 1551 1551 }
+4 -3
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 92 92 MLX5_MPWRQ_LOG_WQE_SZ - PAGE_SHIFT : 0) 93 93 #define MLX5_MPWRQ_PAGES_PER_WQE BIT(MLX5_MPWRQ_WQE_PAGE_ORDER) 94 94 95 - #define MLX5_MTT_OCTW(npages) (ALIGN(npages, 8) / 2) 95 + #define MLX5_ALIGN_MTTS(mtts) (ALIGN(mtts, 8)) 96 + #define MLX5_ALIGNED_MTTS_OCTW(mtts) ((mtts) / 2) 97 + #define MLX5_MTT_OCTW(mtts) (MLX5_ALIGNED_MTTS_OCTW(MLX5_ALIGN_MTTS(mtts))) 96 98 /* Add another page to MLX5E_REQUIRED_WQE_MTTS as a buffer between 97 99 * WQEs, This page will absorb write overflow by the hardware, when 98 100 * receiving packets larger than MTU. These oversize packets are 99 101 * dropped by the driver at a later stage. 100 102 */ 101 - #define MLX5E_REQUIRED_WQE_MTTS (ALIGN(MLX5_MPWRQ_PAGES_PER_WQE + 1, 8)) 102 - #define MLX5E_LOG_ALIGNED_MPWQE_PPW (ilog2(MLX5E_REQUIRED_WQE_MTTS)) 103 + #define MLX5E_REQUIRED_WQE_MTTS (MLX5_ALIGN_MTTS(MLX5_MPWRQ_PAGES_PER_WQE + 1)) 103 104 #define MLX5E_REQUIRED_MTTS(wqes) (wqes * MLX5E_REQUIRED_WQE_MTTS) 104 105 #define MLX5E_MAX_RQ_NUM_MTTS \ 105 106 ((1 << 16) * 2) /* So that MLX5_MTT_OCTW(num_mtts) fits into u16 */
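The en.h hunk splits the old octword macro into an explicit align step and a divide, which the UMR path in en_rx.c reuses further down. The arithmetic in isolation (ALIGN_POW2 stands in for the kernel's ALIGN; values are plain integers):

```c
#include <assert.h>

/* Kernel-style ALIGN for a power-of-two alignment 'a'. */
#define ALIGN_POW2(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* An MTT count is first padded to a multiple of 8 entries, then
 * converted to octwords at two MTTs per octword. */
#define MLX5_ALIGN_MTTS(mtts)		ALIGN_POW2(mtts, 8)
#define MLX5_ALIGNED_MTTS_OCTW(mtts)	((mtts) / 2)
#define MLX5_MTT_OCTW(mtts) \
	MLX5_ALIGNED_MTTS_OCTW(MLX5_ALIGN_MTTS(mtts))
```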
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c
··· 1181 1181 1182 1182 mlx5e_tc_match_to_reg_get_match(spec, CTSTATE_TO_REG, 1183 1183 &ctstate, &ctstate_mask); 1184 - if (ctstate_mask) 1184 + 1185 + if ((ctstate & ctstate_mask) == MLX5_CT_STATE_TRK_BIT) 1185 1186 return -EOPNOTSUPP; 1186 1187 1187 1188 ctstate_mask |= MLX5_CT_STATE_TRK_BIT;
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
··· 685 685 u16 vport_num; 686 686 int err = 0; 687 687 688 - if (flow_attr->ip_version == 4) { 688 + if (flow_attr->tun_ip_version == 4) { 689 689 /* Addresses are swapped for decap */ 690 690 attr.fl.fl4.saddr = esw_attr->rx_tun_attr->dst_ip.v4; 691 691 attr.fl.fl4.daddr = esw_attr->rx_tun_attr->src_ip.v4; 692 692 err = mlx5e_route_lookup_ipv4_get(priv, priv->netdev, &attr); 693 693 } 694 694 #if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6) 695 - else if (flow_attr->ip_version == 6) { 695 + else if (flow_attr->tun_ip_version == 6) { 696 696 /* Addresses are swapped for decap */ 697 697 attr.fl.fl6.saddr = esw_attr->rx_tun_attr->dst_ip.v6; 698 698 attr.fl.fl6.daddr = esw_attr->rx_tun_attr->src_ip.v6; ··· 718 718 esw_attr->rx_tun_attr->decap_vport = vport_num; 719 719 720 720 out: 721 - if (flow_attr->ip_version == 4) 721 + if (flow_attr->tun_ip_version == 4) 722 722 mlx5e_route_lookup_ipv4_put(&attr); 723 723 #if IS_ENABLED(CONFIG_INET) && IS_ENABLED(CONFIG_IPV6) 724 - else if (flow_attr->ip_version == 6) 724 + else if (flow_attr->tun_ip_version == 6) 725 725 mlx5e_route_lookup_ipv6_put(&attr); 726 726 #endif 727 727 return err;
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
··· 89 89 * required to establish routing. 90 90 */ 91 91 flow_flag_set(flow, TUN_RX); 92 + flow->attr->tun_ip_version = ip_version; 92 93 return 0; 93 94 } 94 95 ··· 1092 1091 if (err || !esw_attr->rx_tun_attr->decap_vport) 1093 1092 goto out; 1094 1093 1095 - key.ip_version = attr->ip_version; 1094 + key.ip_version = attr->tun_ip_version; 1096 1095 if (key.ip_version == 4) 1097 1096 key.endpoint_ip.v4 = esw_attr->rx_tun_attr->dst_ip.v4; 1098 1097 else
+4
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_geneve.c
··· 227 227 option_key = (struct geneve_opt *)&enc_opts.key->data[0]; 228 228 option_mask = (struct geneve_opt *)&enc_opts.mask->data[0]; 229 229 230 + if (option_mask->opt_class == 0 && option_mask->type == 0 && 231 + !memchr_inv(option_mask->opt_data, 0, option_mask->length * 4)) 232 + return 0; 233 + 230 234 if (option_key->length > max_tlv_option_data_len) { 231 235 NL_SET_ERR_MSG_MOD(extack, 232 236 "Matching on GENEVE options: unsupported option len");
+10 -1
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
··· 1887 1887 { 1888 1888 struct mlx5e_priv *priv = netdev_priv(netdev); 1889 1889 struct mlx5_core_dev *mdev = priv->mdev; 1890 + int err; 1890 1891 1891 1892 if (!MLX5_CAP_GEN(mdev, cqe_compression)) 1892 1893 return -EOPNOTSUPP; ··· 1897 1896 return -EINVAL; 1898 1897 } 1899 1898 1900 - mlx5e_modify_rx_cqe_compression_locked(priv, enable); 1899 + err = mlx5e_modify_rx_cqe_compression_locked(priv, enable); 1900 + if (err) 1901 + return err; 1902 + 1901 1903 priv->channels.params.rx_cqe_compress_def = enable; 1902 1904 1903 1905 return 0; ··· 2018 2014 */ 2019 2015 2020 2016 if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 2017 + struct mlx5e_params old_params; 2018 + 2019 + old_params = priv->channels.params; 2021 2020 priv->channels.params = new_channels.params; 2022 2021 err = mlx5e_num_channels_changed(priv); 2022 + if (err) 2023 + priv->channels.params = old_params; 2023 2024 goto out; 2024 2025 } 2025 2026
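The en_ethtool.c hunk saves the old channel parameters and rolls back when mlx5e_num_channels_changed() fails, instead of leaving half-applied state behind. The save/commit/rollback pattern, reduced to a pure function (set_num_channels() and the failure flag are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Apply 'next' over 'cur'; if the commit step fails, restore the old
 * value so the caller never observes a half-applied configuration.
 * Returns the value that is live afterwards. */
static int set_num_channels(int cur, int next, bool commit_fails)
{
	int old = cur;

	cur = next;		/* tentatively apply */
	if (commit_fails)
		cur = old;	/* roll back on error */
	return cur;
}
```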
+57 -24
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 334 334 rq->wqe_overflow.addr); 335 335 } 336 336 337 - static inline u64 mlx5e_get_mpwqe_offset(struct mlx5e_rq *rq, u16 wqe_ix) 337 + static u64 mlx5e_get_mpwqe_offset(u16 wqe_ix) 338 338 { 339 - return (wqe_ix << MLX5E_LOG_ALIGNED_MPWQE_PPW) << PAGE_SHIFT; 339 + return MLX5E_REQUIRED_MTTS(wqe_ix) << PAGE_SHIFT; 340 340 } 341 341 342 342 static void mlx5e_init_frags_partition(struct mlx5e_rq *rq) ··· 577 577 mlx5_wq_ll_get_wqe(&rq->mpwqe.wq, i); 578 578 u32 byte_count = 579 579 rq->mpwqe.num_strides << rq->mpwqe.log_stride_sz; 580 - u64 dma_offset = mlx5e_get_mpwqe_offset(rq, i); 580 + u64 dma_offset = mlx5e_get_mpwqe_offset(i); 581 581 582 582 wqe->data[0].addr = cpu_to_be64(dma_offset + rq->buff.headroom); 583 583 wqe->data[0].byte_count = cpu_to_be32(byte_count); ··· 2368 2368 { 2369 2369 switch (params->rq_wq_type) { 2370 2370 case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: 2371 - return order_base_2(MLX5E_UMR_WQEBBS) + 2372 - mlx5e_get_rq_log_wq_sz(rqp->rqc); 2371 + return max_t(u8, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE, 2372 + order_base_2(MLX5E_UMR_WQEBBS) + 2373 + mlx5e_get_rq_log_wq_sz(rqp->rqc)); 2373 2374 default: /* MLX5_WQ_TYPE_CYCLIC */ 2374 2375 return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE; 2375 2376 } ··· 2503 2502 { 2504 2503 int i; 2505 2504 2506 - if (chs->port_ptp) 2505 + if (chs->port_ptp) { 2507 2506 mlx5e_port_ptp_close(chs->port_ptp); 2507 + chs->port_ptp = NULL; 2508 + } 2508 2509 2509 2510 for (i = 0; i < chs->num; i++) 2510 2511 mlx5e_close_channel(chs->c[i]); ··· 3818 3815 s->tx_dropped += sq_stats->dropped; 3819 3816 } 3820 3817 } 3818 + if (priv->port_ptp_opened) { 3819 + for (i = 0; i < priv->max_opened_tc; i++) { 3820 + struct mlx5e_sq_stats *sq_stats = &priv->port_ptp_stats.sq[i]; 3821 + 3822 + s->tx_packets += sq_stats->packets; 3823 + s->tx_bytes += sq_stats->bytes; 3824 + s->tx_dropped += sq_stats->dropped; 3825 + } 3826 + } 3821 3827 } 3822 3828 3823 3829 void ··· 3846 3834 } 3847 3835 3848 3836 if (mlx5e_is_uplink_rep(priv)) { 3837 + 
+	struct mlx5e_vport_stats *vstats = &priv->stats.vport;
+
 	stats->rx_packets = PPORT_802_3_GET(pstats, a_frames_received_ok);
 	stats->rx_bytes = PPORT_802_3_GET(pstats, a_octets_received_ok);
 	stats->tx_packets = PPORT_802_3_GET(pstats, a_frames_transmitted_ok);
 	stats->tx_bytes = PPORT_802_3_GET(pstats, a_octets_transmitted_ok);
+
+	/* vport multicast also counts packets that are dropped due to steering
+	 * or rx out of buffer
+	 */
+	stats->multicast = VPORT_COUNTER_GET(vstats, received_eth_multicast.packets);
 } else {
 	mlx5e_fold_sw_stats64(priv, stats);
 }
···
 		struct mlx5e_channel *c = priv->channels.c[i];

 		mlx5e_rq_replace_xdp_prog(&c->rq, prog);
-		if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
+		if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state)) {
+			bpf_prog_inc(prog);
 			mlx5e_rq_replace_xdp_prog(&c->xskrq, prog);
+		}
 	}

 unlock:
···
 	params->num_channels = min_t(unsigned int, MLX5E_MAX_NUM_CHANNELS / 2,
 				     priv->max_nch);
 	params->num_tc = 1;
+
+	/* Set an initial non-zero value, so that mlx5e_select_queue won't
+	 * divide by zero if called before first activating channels.
+	 */
+	priv->num_tc_x_num_ch = params->num_channels * params->num_tc;

 	/* SQ */
 	params->log_sq_size = is_kdump_kernel() ?
···
 			    struct net_device *netdev,
 			    struct mlx5_core_dev *mdev)
 {
-	memset(priv, 0, sizeof(*priv));
-
 	/* priv init */
 	priv->mdev = mdev;
 	priv->netdev = netdev;
···
 {
 	int i;

+	/* bail if change profile failed and also rollback failed */
+	if (!priv->mdev)
+		return;
+
 	destroy_workqueue(priv->wq);
 	free_cpumask_var(priv->scratchpad.cpumask);

 	for (i = 0; i < priv->htb.max_qos_sqs; i++)
 		kfree(priv->htb.qos_sq_stats[i]);
 	kvfree(priv->htb.qos_sq_stats);
+
+	memset(priv, 0, sizeof(*priv));
 }

 struct net_device *
···
 }

 static int
-mlx5e_netdev_attach_profile(struct mlx5e_priv *priv,
+mlx5e_netdev_attach_profile(struct net_device *netdev, struct mlx5_core_dev *mdev,
 			    const struct mlx5e_profile *new_profile, void *new_ppriv)
 {
-	struct net_device *netdev = priv->netdev;
-	struct mlx5_core_dev *mdev = priv->mdev;
+	struct mlx5e_priv *priv = netdev_priv(netdev);
 	int err;

 	err = mlx5e_priv_init(priv, netdev, mdev);
···
 	priv->ppriv = new_ppriv;
 	err = new_profile->init(priv->mdev, priv->netdev);
 	if (err)
-		return err;
+		goto priv_cleanup;
 	err = mlx5e_attach_netdev(priv);
 	if (err)
-		new_profile->cleanup(priv);
+		goto profile_cleanup;
+	return err;
+
+profile_cleanup:
+	new_profile->cleanup(priv);
+priv_cleanup:
+	mlx5e_priv_cleanup(priv);
 	return err;
 }
···
 {
 	unsigned int new_max_nch = mlx5e_calc_max_nch(priv, new_profile);
 	const struct mlx5e_profile *orig_profile = priv->profile;
+	struct net_device *netdev = priv->netdev;
+	struct mlx5_core_dev *mdev = priv->mdev;
 	void *orig_ppriv = priv->ppriv;
 	int err, rollback_err;

 	/* sanity */
 	if (new_max_nch != priv->max_nch) {
-		netdev_warn(priv->netdev,
-			    "%s: Replacing profile with different max channels\n",
+		netdev_warn(netdev, "%s: Replacing profile with different max channels\n",
 			    __func__);
 		return -EINVAL;
 	}
···
 	priv->profile->cleanup(priv);
 	mlx5e_priv_cleanup(priv);

-	err = mlx5e_netdev_attach_profile(priv, new_profile, new_ppriv);
+	err = mlx5e_netdev_attach_profile(netdev, mdev, new_profile, new_ppriv);
 	if (err) { /* roll back to original profile */
-		netdev_warn(priv->netdev, "%s: new profile init failed, %d\n",
-			    __func__, err);
+		netdev_warn(netdev, "%s: new profile init failed, %d\n", __func__, err);
 		goto rollback;
 	}

 	return 0;

 rollback:
-	rollback_err = mlx5e_netdev_attach_profile(priv, orig_profile, orig_ppriv);
-	if (rollback_err) {
-		netdev_err(priv->netdev,
-			   "%s: failed to rollback to orig profile, %d\n",
+	rollback_err = mlx5e_netdev_attach_profile(netdev, mdev, orig_profile, orig_ppriv);
+	if (rollback_err)
+		netdev_err(netdev, "%s: failed to rollback to orig profile, %d\n",
 			   __func__, rollback_err);
-	}
 	return err;
 }
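The attach-profile change above converts early returns into a goto-unwind ladder, the kernel's usual idiom for multi-step initialization. A standalone sketch of the pattern — all names here are invented for illustration, not the mlx5 API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy two-step init with goto-based unwinding: a failure at step N jumps
 * to a label that undoes steps N-1..1 in reverse order, mirroring the
 * profile_cleanup/priv_cleanup labels added in the diff above. */
static bool priv_up, profile_up;

static int step_priv_init(void) { priv_up = true; return 0; }
static int step_profile_init(int fail) { if (fail) return -1; profile_up = true; return 0; }
static int step_attach(int fail) { return fail ? -1 : 0; }
static void undo_profile(void) { profile_up = false; }
static void undo_priv(void) { priv_up = false; }

static int attach_profile(int fail_init, int fail_attach)
{
	int err;

	err = step_priv_init();
	if (err)
		return err;
	err = step_profile_init(fail_init);
	if (err)
		goto priv_cleanup;	/* only priv was set up */
	err = step_attach(fail_attach);
	if (err)
		goto profile_cleanup;	/* unwind both, newest first */
	return 0;

profile_cleanup:
	undo_profile();
priv_cleanup:
	undo_priv();
	return err;
}
```

The point of the fix is visible in the ladder: before it, a failing `new_profile->init()` returned without undoing `mlx5e_priv_init()`, leaking the partially built state.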
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
···
 	struct mlx5e_icosq *sq = rq->icosq;
 	struct mlx5_wq_cyc *wq = &sq->wq;
 	struct mlx5e_umr_wqe *umr_wqe;
-	u16 xlt_offset = ix << (MLX5E_LOG_ALIGNED_MPWQE_PPW - 1);
 	u16 pi;
 	int err;
 	int i;
···
 	umr_wqe->ctrl.opmod_idx_opcode =
 		cpu_to_be32((sq->pc << MLX5_WQE_CTRL_WQE_INDEX_SHIFT) |
 			    MLX5_OPCODE_UMR);
-	umr_wqe->uctrl.xlt_offset = cpu_to_be16(xlt_offset);
+	umr_wqe->uctrl.xlt_offset =
+		cpu_to_be16(MLX5_ALIGNED_MTTS_OCTW(MLX5E_REQUIRED_MTTS(ix)));

 	sq->db.wqe_info[pi] = (struct mlx5e_icosq_wqe_info) {
 		.wqe_type = MLX5E_ICOSQ_WQE_UMR_RX,
+45 -12
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···
 		*match_level = MLX5_MATCH_L4;
 	}

+	/* Currently supported only for MPLS over UDP */
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS) &&
+	    !netif_is_bareudp(filter_dev)) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "Matching on MPLS is supported only for MPLS over UDP");
+		netdev_err(priv->netdev,
+			   "Matching on MPLS is supported only for MPLS over UDP\n");
+		return -EOPNOTSUPP;
+	}
+
 	return 0;
 }
···
 	return 0;
 }

+static bool modify_tuple_supported(bool modify_tuple, bool ct_clear,
+				   bool ct_flow, struct netlink_ext_ack *extack,
+				   struct mlx5e_priv *priv,
+				   struct mlx5_flow_spec *spec)
+{
+	if (!modify_tuple || ct_clear)
+		return true;
+
+	if (ct_flow) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "can't offload tuple modification with non-clear ct()");
+		netdev_info(priv->netdev,
+			    "can't offload tuple modification with non-clear ct()");
+		return false;
+	}
+
+	/* Add ct_state=-trk match so it will be offloaded for non ct flows
+	 * (or after clear action), as otherwise, since the tuple is changed,
+	 * we can't restore ct state
+	 */
+	if (mlx5_tc_ct_add_no_trk_match(spec)) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "can't offload tuple modification with ct matches and no ct(clear) action");
+		netdev_info(priv->netdev,
+			    "can't offload tuple modification with ct matches and no ct(clear) action");
+		return false;
+	}
+
+	return true;
+}
+
 static bool modify_header_match_supported(struct mlx5e_priv *priv,
 					  struct mlx5_flow_spec *spec,
 					  struct flow_action *flow_action,
···
 		return err;
 	}

-	/* Add ct_state=-trk match so it will be offloaded for non ct flows
-	 * (or after clear action), as otherwise, since the tuple is changed,
-	 * we can't restore ct state
-	 */
-	if (!ct_clear && modify_tuple &&
-	    mlx5_tc_ct_add_no_trk_match(spec)) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "can't offload tuple modify header with ct matches");
-		netdev_info(priv->netdev,
-			    "can't offload tuple modify header with ct matches");
+	if (!modify_tuple_supported(modify_tuple, ct_clear, ct_flow, extack,
+				    priv, spec))
 		return false;
-	}

 	ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol);
 	if (modify_ip_header && ip_proto != IPPROTO_TCP &&
···
 	 */
 	if (rate) {
 		rate = (rate * BITS_PER_BYTE) + 500000;
-		rate_mbps = max_t(u64, do_div(rate, 1000000), 1);
+		do_div(rate, 1000000);
+		rate_mbps = max_t(u32, rate, 1);
 	}

 	err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps);
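The rate-conversion fix at the end of this hunk hinges on the kernel's `do_div()` contract: the macro divides the 64-bit dividend in place and returns the remainder, so the removed line was feeding the remainder, not the Mb/s quotient, into `max_t()`. A userspace stand-in (`do_div_sketch` is illustrative, not the kernel macro):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's do_div(): divide the 64-bit dividend in
 * place and return the REMAINDER -- the quotient is left in *n. */
static uint32_t do_div_sketch(uint64_t *n, uint32_t base)
{
	uint32_t rem = (uint32_t)(*n % base);

	*n /= base;
	return rem;
}
```

With `rate = 2500000` bits/s, the quotient 2 lands in `rate` and the call returns 500000 — the remainder that the old `max_t(u64, do_div(rate, 1000000), 1)` mistakenly treated as the rate.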
+1
drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
···
 	u8 inner_match_level;
 	u8 outer_match_level;
 	u8 ip_version;
+	u8 tun_ip_version;
 	u32 flags;
 	union {
 		struct mlx5_esw_flow_attr esw_attr[0];
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
···

 	if (!mlx5_eswitch_termtbl_required(esw, attr, flow_act, spec) &&
 	    MLX5_CAP_GEN(esw_attr->in_mdev, reg_c_preserve) &&
-	    mlx5_eswitch_vport_match_metadata_enabled(esw))
+	    mlx5_eswitch_vport_match_metadata_enabled(esw) &&
+	    MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ignore_flow_level))
 		attr->flags |= MLX5_ESW_ATTR_FLAG_SRC_REWRITE;

 	if (attr->dest_ft) {
+1
drivers/net/ethernet/mellanox/mlx5/core/fpga/conn.c
···
 	MLX5_SET(qpc, qpc, log_sq_size, ilog2(conn->qp.sq.size));
 	MLX5_SET(qpc, qpc, cqn_snd, conn->cq.mcq.cqn);
 	MLX5_SET(qpc, qpc, cqn_rcv, conn->cq.mcq.cqn);
+	MLX5_SET(qpc, qpc, ts_format, mlx5_get_qp_default_ts(mdev));
 	MLX5_SET64(qpc, qpc, dbr_addr, conn->qp.wq_ctrl.db.dma);
 	if (MLX5_CAP_GEN(mdev, cqe_version) == 1)
 		MLX5_SET(qpc, qpc, user_index, 0xFFFFFF);
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
···
 	}

 	qpc = MLX5_ADDR_OF(create_qp_in, in, qpc);
+	MLX5_SET(qpc, qpc, ts_format, mlx5_get_qp_default_ts(priv->mdev));
 	MLX5_SET(qpc, qpc, st, MLX5_QP_ST_UD);
 	MLX5_SET(qpc, qpc, pm_state, MLX5_QP_PM_MIGRATED);
 	MLX5_SET(qpc, qpc, ulp_stateless_offload_mode,
···
 static void mlx5_rdma_netdev_free(struct net_device *netdev)
 {
 	struct mlx5e_priv *priv = mlx5i_epriv(netdev);
+	struct mlx5_core_dev *mdev = priv->mdev;
 	struct mlx5i_priv *ipriv = priv->ppriv;
 	const struct mlx5e_profile *profile = priv->profile;
···

 	if (!ipriv->sub_interface) {
 		mlx5i_pkey_qpn_ht_cleanup(netdev);
-		mlx5e_destroy_mdev_resources(priv->mdev);
+		mlx5e_destroy_mdev_resources(mdev);
 	}
 }
+4 -4
drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
···
 		return -EINVAL;

 	field_select = MLX5_MTPPS_FS_ENABLE;
+	pin = ptp_find_pin(clock->ptp, PTP_PF_PEROUT, rq->perout.index);
+	if (pin < 0)
+		return -EBUSY;
+
 	if (on) {
 		bool rt_mode = mlx5_real_time_mode(mdev);
 		u32 nsec;
 		s64 sec;
-
-		pin = ptp_find_pin(clock->ptp, PTP_PF_PEROUT, rq->perout.index);
-		if (pin < 0)
-			return -EBUSY;

 		pin_mode = MLX5_PIN_MODE_OUT;
 		pattern = MLX5_OUT_PATTERN_PERIODIC;
+1 -3
drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c
···
 	u16 max_functions;
 	u16 function_id;
 	int err = 0;
-	bool ecpu;
 	int i;

 	max_functions = mlx5_sf_max_functions(dev);
 	function_id = MLX5_CAP_GEN(dev, sf_base_id);
-	ecpu = mlx5_read_embedded_cpu(dev);
 	/* Arm the vhca context as the vhca event notifier */
 	for (i = 0; i < max_functions; i++) {
-		err = mlx5_vhca_event_arm(dev, function_id, ecpu);
+		err = mlx5_vhca_event_arm(dev, function_id);
 		if (err)
 			return err;
+4 -6
drivers/net/ethernet/mellanox/mlx5/core/sf/hw_table.c
···
 #include "sf.h"
 #include "mlx5_ifc_vhca_event.h"
 #include "vhca_event.h"
-#include "ecpf.h"
+#include "mlx5_core.h"

 struct mlx5_sf_hw {
 	u32 usr_sfnum;
···
 	struct mlx5_core_dev *dev;
 	struct mlx5_sf_hw *sfs;
 	int max_local_functions;
-	u8 ecpu: 1;
 	struct mutex table_lock; /* Serializes sf deletion and vhca state change handler. */
 	struct notifier_block vhca_nb;
 };
···
 	}
 	if (sw_id == -ENOSPC) {
 		err = -ENOSPC;
-		goto err;
+		goto exist_err;
 	}

 	hw_fn_id = mlx5_sf_sw_to_hw_id(table->dev, sw_id);
···
 	if (err)
 		goto err;

-	err = mlx5_modify_vhca_sw_id(dev, hw_fn_id, table->ecpu, usr_sfnum);
+	err = mlx5_modify_vhca_sw_id(dev, hw_fn_id, usr_sfnum);
 	if (err)
 		goto vhca_err;
···

 	hw_fn_id = mlx5_sf_sw_to_hw_id(dev, id);
 	mutex_lock(&table->table_lock);
-	err = mlx5_cmd_query_vhca_state(dev, hw_fn_id, table->ecpu, out, sizeof(out));
+	err = mlx5_cmd_query_vhca_state(dev, hw_fn_id, out, sizeof(out));
 	if (err)
 		goto err;
 	state = MLX5_GET(query_vhca_state_out, out, vhca_state_context.vhca_state);
···
 	table->dev = dev;
 	table->sfs = sfs;
 	table->max_local_functions = max_functions;
-	table->ecpu = mlx5_read_embedded_cpu(dev);
 	dev->priv.sf_hw_table = table;
 	mlx5_core_dbg(dev, "SF HW table: max sfs = %d\n", max_functions);
 	return 0;
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/sf/mlx5_ifc_vhca_event.h
···

 	u8 sw_function_id[0x20];

-	u8 reserved_at_40[0x80];
+	u8 reserved_at_40[0x40];
 };

 struct mlx5_ifc_query_vhca_state_out_bits {
+11 -12
drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c
···
 	struct mlx5_vhca_state_event event;
 };

-int mlx5_cmd_query_vhca_state(struct mlx5_core_dev *dev, u16 function_id,
-			      bool ecpu, u32 *out, u32 outlen)
+int mlx5_cmd_query_vhca_state(struct mlx5_core_dev *dev, u16 function_id, u32 *out, u32 outlen)
 {
 	u32 in[MLX5_ST_SZ_DW(query_vhca_state_in)] = {};

 	MLX5_SET(query_vhca_state_in, in, opcode, MLX5_CMD_OP_QUERY_VHCA_STATE);
 	MLX5_SET(query_vhca_state_in, in, function_id, function_id);
-	MLX5_SET(query_vhca_state_in, in, embedded_cpu_function, ecpu);
+	MLX5_SET(query_vhca_state_in, in, embedded_cpu_function, 0);

 	return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen);
 }

 static int mlx5_cmd_modify_vhca_state(struct mlx5_core_dev *dev, u16 function_id,
-				      bool ecpu, u32 *in, u32 inlen)
+				      u32 *in, u32 inlen)
 {
 	u32 out[MLX5_ST_SZ_DW(modify_vhca_state_out)] = {};

 	MLX5_SET(modify_vhca_state_in, in, opcode, MLX5_CMD_OP_MODIFY_VHCA_STATE);
 	MLX5_SET(modify_vhca_state_in, in, function_id, function_id);
-	MLX5_SET(modify_vhca_state_in, in, embedded_cpu_function, ecpu);
+	MLX5_SET(modify_vhca_state_in, in, embedded_cpu_function, 0);

 	return mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
 }

-int mlx5_modify_vhca_sw_id(struct mlx5_core_dev *dev, u16 function_id, bool ecpu, u32 sw_fn_id)
+int mlx5_modify_vhca_sw_id(struct mlx5_core_dev *dev, u16 function_id, u32 sw_fn_id)
 {
 	u32 out[MLX5_ST_SZ_DW(modify_vhca_state_out)] = {};
 	u32 in[MLX5_ST_SZ_DW(modify_vhca_state_in)] = {};

 	MLX5_SET(modify_vhca_state_in, in, opcode, MLX5_CMD_OP_MODIFY_VHCA_STATE);
 	MLX5_SET(modify_vhca_state_in, in, function_id, function_id);
-	MLX5_SET(modify_vhca_state_in, in, embedded_cpu_function, ecpu);
+	MLX5_SET(modify_vhca_state_in, in, embedded_cpu_function, 0);
 	MLX5_SET(modify_vhca_state_in, in, vhca_state_field_select.sw_function_id, 1);
 	MLX5_SET(modify_vhca_state_in, in, vhca_state_context.sw_function_id, sw_fn_id);

 	return mlx5_cmd_exec_inout(dev, modify_vhca_state, in, out);
 }

-int mlx5_vhca_event_arm(struct mlx5_core_dev *dev, u16 function_id, bool ecpu)
+int mlx5_vhca_event_arm(struct mlx5_core_dev *dev, u16 function_id)
 {
 	u32 in[MLX5_ST_SZ_DW(modify_vhca_state_in)] = {};

 	MLX5_SET(modify_vhca_state_in, in, vhca_state_context.arm_change_event, 1);
 	MLX5_SET(modify_vhca_state_in, in, vhca_state_field_select.arm_change_event, 1);

-	return mlx5_cmd_modify_vhca_state(dev, function_id, ecpu, in, sizeof(in));
+	return mlx5_cmd_modify_vhca_state(dev, function_id, in, sizeof(in));
 }

 static void
···
 	u32 out[MLX5_ST_SZ_DW(query_vhca_state_out)] = {};
 	int err;

-	err = mlx5_cmd_query_vhca_state(dev, event->function_id, event->ecpu, out, sizeof(out));
+	err = mlx5_cmd_query_vhca_state(dev, event->function_id, out, sizeof(out));
 	if (err)
 		return;
···
 	event->new_vhca_state = MLX5_GET(query_vhca_state_out, out,
 					 vhca_state_context.vhca_state);

-	mlx5_vhca_event_arm(dev, event->function_id, event->ecpu);
+	mlx5_vhca_event_arm(dev, event->function_id);

 	blocking_notifier_call_chain(&dev->priv.vhca_state_notifier->n_head, 0, event);
 }
···
 	struct mlx5_core_dev *dev = notifier->dev;

 	mlx5_vhca_event_notify(dev, &work->event);
+	kfree(work);
 }

 static int
···
 	INIT_WORK(&work->work, &mlx5_vhca_state_work_handler);
 	work->notifier = notifier;
 	work->event.function_id = be16_to_cpu(eqe->data.vhca_state.function_id);
-	work->event.ecpu = be16_to_cpu(eqe->data.vhca_state.ec_function);
 	mlx5_events_work_enqueue(notifier->dev, &work->work);
 	return NOTIFY_OK;
 }
+3 -4
drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.h
···
 	u16 function_id;
 	u16 sw_function_id;
 	u8 new_vhca_state;
-	bool ecpu;
 };

 static inline bool mlx5_vhca_event_supported(const struct mlx5_core_dev *dev)
···
 void mlx5_vhca_event_stop(struct mlx5_core_dev *dev);
 int mlx5_vhca_event_notifier_register(struct mlx5_core_dev *dev, struct notifier_block *nb);
 void mlx5_vhca_event_notifier_unregister(struct mlx5_core_dev *dev, struct notifier_block *nb);
-int mlx5_modify_vhca_sw_id(struct mlx5_core_dev *dev, u16 function_id, bool ecpu, u32 sw_fn_id);
-int mlx5_vhca_event_arm(struct mlx5_core_dev *dev, u16 function_id, bool ecpu);
+int mlx5_modify_vhca_sw_id(struct mlx5_core_dev *dev, u16 function_id, u32 sw_fn_id);
+int mlx5_vhca_event_arm(struct mlx5_core_dev *dev, u16 function_id);
 int mlx5_cmd_query_vhca_state(struct mlx5_core_dev *dev, u16 function_id,
-			      bool ecpu, u32 *out, u32 outlen);
+			      u32 *out, u32 outlen);
 #else

 static inline void mlx5_vhca_state_cap_handle(struct mlx5_core_dev *dev, void *set_hca_cap)
+1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
···
 	MLX5_SET(qpc, qpc, log_rq_size, ilog2(dr_qp->rq.wqe_cnt));
 	MLX5_SET(qpc, qpc, rq_type, MLX5_NON_ZERO_RQ);
 	MLX5_SET(qpc, qpc, log_sq_size, ilog2(dr_qp->sq.wqe_cnt));
+	MLX5_SET(qpc, qpc, ts_format, mlx5_get_qp_default_ts(mdev));
 	MLX5_SET64(qpc, qpc, dbr_addr, dr_qp->wq_ctrl.db.dma);
 	if (MLX5_CAP_GEN(mdev, cqe_version) == 1)
 		MLX5_SET(qpc, qpc, user_index, 0xFFFFFF);
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_ste_v1.c
···
 static u64 dr_ste_v1_get_miss_addr(u8 *hw_ste_p)
 {
 	u64 index =
-		(MLX5_GET(ste_match_bwc_v1, hw_ste_p, miss_address_31_6) |
-		 MLX5_GET(ste_match_bwc_v1, hw_ste_p, miss_address_39_32) << 26);
+		((u64)MLX5_GET(ste_match_bwc_v1, hw_ste_p, miss_address_31_6) |
+		 ((u64)MLX5_GET(ste_match_bwc_v1, hw_ste_p, miss_address_39_32)) << 26);

 	return index << 6;
 }
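The dr_ste_v1 change above is a 32-bit promotion fix: `MLX5_GET()` yields a u32, so OR-ing the two halves and shifting the high byte left by 26 without first widening to u64 drops high address bits. A self-contained sketch — helper names invented, field widths as in the diff (bits 31:6 and 39:32 of a 40-bit address):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t miss_addr_buggy(uint32_t addr_31_6, uint32_t addr_39_32)
{
	/* OR and shift performed in 32 bits: anything that would land
	 * above bit 31 of the index is silently truncated. */
	uint32_t index = addr_31_6 | (addr_39_32 << 26);

	return (uint64_t)index << 6;
}

static uint64_t miss_addr_fixed(uint32_t addr_31_6, uint32_t addr_39_32)
{
	/* Widen both halves to 64 bits before shifting, as the fix does. */
	uint64_t index = (uint64_t)addr_31_6 | ((uint64_t)addr_39_32 << 26);

	return index << 6;
}
```

With the high byte at 0xFF, the 32-bit version keeps only the bits of `0xFF << 26` that fit in 32 bits, so the reconstructed address loses its top bits.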
+17 -7
drivers/net/ethernet/netronome/nfp/flower/metadata.c
···
 		goto err_free_ctx_entry;
 	}

+	/* Do not allocate a mask-id for pre_tun_rules. These flows are used to
+	 * configure the pre_tun table and are never actually sent to the
+	 * firmware as an add-flow message. This causes the mask-id allocation
+	 * on the firmware to get out of sync if allocated here.
+	 */
 	new_mask_id = 0;
-	if (!nfp_check_mask_add(app, nfp_flow->mask_data,
+	if (!nfp_flow->pre_tun_rule.dev &&
+	    !nfp_check_mask_add(app, nfp_flow->mask_data,
 				nfp_flow->meta.mask_len,
 				&nfp_flow->meta.flags, &new_mask_id)) {
 		NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot allocate a new mask id");
···
 		goto err_remove_mask;
 	}

-	if (!nfp_check_mask_remove(app, nfp_flow->mask_data,
+	if (!nfp_flow->pre_tun_rule.dev &&
+	    !nfp_check_mask_remove(app, nfp_flow->mask_data,
 				   nfp_flow->meta.mask_len,
 				   NULL, &new_mask_id)) {
 		NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot release mask id");
···
 	return 0;

 err_remove_mask:
-	nfp_check_mask_remove(app, nfp_flow->mask_data, nfp_flow->meta.mask_len,
-			      NULL, &new_mask_id);
+	if (!nfp_flow->pre_tun_rule.dev)
+		nfp_check_mask_remove(app, nfp_flow->mask_data,
+				      nfp_flow->meta.mask_len,
+				      NULL, &new_mask_id);
 err_remove_rhash:
 	WARN_ON_ONCE(rhashtable_remove_fast(&priv->stats_ctx_table,
 					    &ctx_entry->ht_node,
···

 	__nfp_modify_flow_metadata(priv, nfp_flow);

-	nfp_check_mask_remove(app, nfp_flow->mask_data,
-			      nfp_flow->meta.mask_len, &nfp_flow->meta.flags,
-			      &new_mask_id);
+	if (!nfp_flow->pre_tun_rule.dev)
+		nfp_check_mask_remove(app, nfp_flow->mask_data,
+				      nfp_flow->meta.mask_len, &nfp_flow->meta.flags,
+				      &new_mask_id);

 	/* Update flow payload with mask ids. */
 	nfp_flow->unmasked_data[NFP_FL_MASK_ID_LOCATION] = new_mask_id;
+18
drivers/net/ethernet/netronome/nfp/flower/offload.c
···
 		return -EOPNOTSUPP;
 	}

+	if (!(key_layer & NFP_FLOWER_LAYER_IPV4) &&
+	    !(key_layer & NFP_FLOWER_LAYER_IPV6)) {
+		NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: match on ipv4/ipv6 eth_type must be present");
+		return -EOPNOTSUPP;
+	}
+
 	/* Skip fields known to exist. */
 	mask += sizeof(struct nfp_flower_meta_tci);
 	ext += sizeof(struct nfp_flower_meta_tci);
···
 	mask += sizeof(struct nfp_flower_in_port);
 	ext += sizeof(struct nfp_flower_in_port);

+	/* Ensure destination MAC address matches pre_tun_dev. */
+	mac = (struct nfp_flower_mac_mpls *)ext;
+	if (memcmp(&mac->mac_dst[0], flow->pre_tun_rule.dev->dev_addr, 6)) {
+		NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: dest MAC must match output dev MAC");
+		return -EOPNOTSUPP;
+	}
+
 	/* Ensure destination MAC address is fully matched. */
 	mac = (struct nfp_flower_mac_mpls *)mask;
 	if (!is_broadcast_ether_addr(&mac->mac_dst[0])) {
 		NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: dest MAC field must not be masked");
 		return -EOPNOTSUPP;
 	}
+
+	if (mac->mpls_lse) {
+		NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: MPLS not supported");
+		return -EOPNOTSUPP;
+	}
+13 -2
drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
···
 #define NFP_FL_MAX_ROUTES		32

 #define NFP_TUN_PRE_TUN_RULE_LIMIT	32
-#define NFP_TUN_PRE_TUN_RULE_DEL	0x1
-#define NFP_TUN_PRE_TUN_IDX_BIT		0x8
+#define NFP_TUN_PRE_TUN_RULE_DEL	BIT(0)
+#define NFP_TUN_PRE_TUN_IDX_BIT		BIT(3)
+#define NFP_TUN_PRE_TUN_IPV6_BIT	BIT(7)

 /**
  * struct nfp_tun_pre_run_rule - rule matched before decap
···
 {
 	struct nfp_flower_priv *app_priv = app->priv;
 	struct nfp_tun_offloaded_mac *mac_entry;
+	struct nfp_flower_meta_tci *key_meta;
 	struct nfp_tun_pre_tun_rule payload;
 	struct net_device *internal_dev;
 	int err;
···
 					    internal_dev->dev_addr);
 	if (!mac_entry)
 		return -ENOENT;
+
+	/* Set/clear IPV6 bit. cpu_to_be16() swap will lead to MSB being
+	 * set/clear for port_idx.
+	 */
+	key_meta = (struct nfp_flower_meta_tci *)flow->unmasked_data;
+	if (key_meta->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV6)
+		mac_entry->index |= NFP_TUN_PRE_TUN_IPV6_BIT;
+	else
+		mac_entry->index &= ~NFP_TUN_PRE_TUN_IPV6_BIT;

 	payload.port_idx = cpu_to_be16(mac_entry->index);
+7 -6
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
···
 {
 	int sg_elems = q->lif->qtype_info[IONIC_QTYPE_TXQ].max_sg_elems;
 	struct ionic_tx_stats *stats = q_to_tx_stats(q);
+	int ndescs;
 	int err;

-	/* If TSO, need roundup(skb->len/mss) descs */
+	/* Each desc is mss long max, so a descriptor for each gso_seg */
 	if (skb_is_gso(skb))
-		return (skb->len / skb_shinfo(skb)->gso_size) + 1;
+		ndescs = skb_shinfo(skb)->gso_segs;
+	else
+		ndescs = 1;

-	/* If non-TSO, just need 1 desc and nr_frags sg elems */
 	if (skb_shinfo(skb)->nr_frags <= sg_elems)
-		return 1;
+		return ndescs;

 	/* Too many frags, so linearize */
 	err = skb_linearize(skb);
···
 	stats->linearize++;

-	/* Need 1 desc and zero sg elems */
-	return 1;
+	return ndescs;
 }

 static int ionic_maybe_stop_tx(struct ionic_queue *q, int ndescs)
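The ionic change above swaps an arithmetic estimate for the stack's own segment count: `gso_segs` is the ceiling of payload over mss, so reserving one descriptor per segment is exact, whereas the removed `skb->len / mss + 1` reserves one too many whenever the length divides evenly. Illustrative arithmetic only — no skb involved, and the real `skb->len` also includes headers:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t div_round_up(uint32_t n, uint32_t d)
{
	return (n + d - 1) / d;
}

/* Old estimate from the removed line: len / mss + 1 */
static uint32_t ndescs_old(uint32_t len, uint32_t mss)
{
	return len / mss + 1;
}

/* New estimate: one descriptor per GSO segment, computed the way the
 * stack derives gso_segs from the payload length. */
static uint32_t ndescs_new(uint32_t payload_len, uint32_t mss)
{
	return div_round_up(payload_len, mss);
}
```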
+3
drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
···

 	if (fw_dump->tmpl_hdr == NULL || current_version > prev_version) {
 		vfree(fw_dump->tmpl_hdr);
+		fw_dump->tmpl_hdr = NULL;

 		if (qlcnic_83xx_md_check_extended_dump_capability(adapter))
 			extended = !qlcnic_83xx_extend_md_capab(adapter);
···
 	struct qlcnic_83xx_dump_template_hdr *hdr;

 	hdr = fw_dump->tmpl_hdr;
+	if (!hdr)
+		return;
 	hdr->drv_cap_mask = 0x1f;
 	fw_dump->cap_mask = 0x1f;
 	dev_info(&pdev->dev,
+4 -2
drivers/net/ethernet/realtek/r8169_main.c
···

 	rtl8169_update_counters(tp);

+	pci_clear_master(tp->pci_dev);
+	rtl_pci_commit(tp);
+
 	rtl8169_cleanup(tp, true);

 	rtl_prepare_power_down(tp);
···

 static void rtl8169_up(struct rtl8169_private *tp)
 {
+	pci_set_master(tp->pci_dev);
 	phy_resume(tp->phydev);
 	rtl8169_init_phy(tp);
 	napi_enable(&tp->napi);
···
 	rtl_hw_initialize(tp);

 	rtl_hw_reset(tp);
-
-	pci_set_master(pdev);

 	rc = rtl_alloc_irq(tp);
 	if (rc < 0) {
+6 -3
drivers/net/ethernet/socionext/netsec.c
···
 		goto err1;

 	/* set phy power down */
-	data = netsec_phy_read(priv->mii_bus, priv->phy_addr, MII_BMCR) |
-		BMCR_PDOWN;
-	netsec_phy_write(priv->mii_bus, priv->phy_addr, MII_BMCR, data);
+	data = netsec_phy_read(priv->mii_bus, priv->phy_addr, MII_BMCR);
+	netsec_phy_write(priv->mii_bus, priv->phy_addr, MII_BMCR,
+			 data | BMCR_PDOWN);

 	ret = netsec_reset_hardware(priv, true);
 	if (ret)
 		goto err2;
+
+	/* Restore phy power state */
+	netsec_phy_write(priv->mii_bus, priv->phy_addr, MII_BMCR, data);

 	spin_lock_init(&priv->desc_ring[NETSEC_RING_TX].lock);
 	spin_lock_init(&priv->desc_ring[NETSEC_RING_RX].lock);
+2
drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
···
 	plat_dat->init = sun8i_dwmac_init;
 	plat_dat->exit = sun8i_dwmac_exit;
 	plat_dat->setup = sun8i_dwmac_setup;
+	plat_dat->tx_fifo_size = 4096;
+	plat_dat->rx_fifo_size = 16384;

 	ret = sun8i_dwmac_set_syscon(&pdev->dev, plat_dat);
 	if (ret)
+25 -12
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
···
 	if (IS_ERR(lp->regs)) {
 		dev_err(&pdev->dev, "could not map Axi Ethernet regs.\n");
 		ret = PTR_ERR(lp->regs);
-		goto free_netdev;
+		goto cleanup_clk;
 	}
 	lp->regs_start = ethres->start;
···
 			break;
 		default:
 			ret = -EINVAL;
-			goto free_netdev;
+			goto cleanup_clk;
 		}
 	} else {
 		ret = of_get_phy_mode(pdev->dev.of_node, &lp->phy_mode);
 		if (ret)
-			goto free_netdev;
+			goto cleanup_clk;
 	}
 	if (lp->switch_x_sgmii && lp->phy_mode != PHY_INTERFACE_MODE_SGMII &&
 	    lp->phy_mode != PHY_INTERFACE_MODE_1000BASEX) {
 		dev_err(&pdev->dev, "xlnx,switch-x-sgmii only supported with SGMII or 1000BaseX\n");
 		ret = -EINVAL;
-		goto free_netdev;
+		goto cleanup_clk;
 	}

 	/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
···
 		dev_err(&pdev->dev,
 			"unable to get DMA resource\n");
 		of_node_put(np);
-		goto free_netdev;
+		goto cleanup_clk;
 	}
 	lp->dma_regs = devm_ioremap_resource(&pdev->dev,
 					     &dmares);
···
 	if (IS_ERR(lp->dma_regs)) {
 		dev_err(&pdev->dev, "could not map DMA regs\n");
 		ret = PTR_ERR(lp->dma_regs);
-		goto free_netdev;
+		goto cleanup_clk;
 	}
 	if ((lp->rx_irq <= 0) || (lp->tx_irq <= 0)) {
 		dev_err(&pdev->dev, "could not determine irqs\n");
 		ret = -ENOMEM;
-		goto free_netdev;
+		goto cleanup_clk;
 	}

 	/* Autodetect the need for 64-bit DMA pointers.
···
 	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
 	if (ret) {
 		dev_err(&pdev->dev, "No suitable DMA available\n");
-		goto free_netdev;
+		goto cleanup_clk;
 	}

 	/* Check for Ethernet core IRQ (optional) */
···
 		if (!lp->phy_node) {
 			dev_err(&pdev->dev, "phy-handle required for 1000BaseX/SGMII\n");
 			ret = -EINVAL;
-			goto free_netdev;
+			goto cleanup_mdio;
 		}
 		lp->pcs_phy = of_mdio_find_device(lp->phy_node);
 		if (!lp->pcs_phy) {
 			ret = -EPROBE_DEFER;
-			goto free_netdev;
+			goto cleanup_mdio;
 		}
 		lp->phylink_config.pcs_poll = true;
 	}
···
 	if (IS_ERR(lp->phylink)) {
 		ret = PTR_ERR(lp->phylink);
 		dev_err(&pdev->dev, "phylink_create error (%i)\n", ret);
-		goto free_netdev;
+		goto cleanup_mdio;
 	}

 	ret = register_netdev(lp->ndev);
 	if (ret) {
 		dev_err(lp->dev, "register_netdev() error (%i)\n", ret);
-		goto free_netdev;
+		goto cleanup_phylink;
 	}

 	return 0;
+
+cleanup_phylink:
+	phylink_destroy(lp->phylink);
+
+cleanup_mdio:
+	if (lp->pcs_phy)
+		put_device(&lp->pcs_phy->dev);
+	if (lp->mii_bus)
+		axienet_mdio_teardown(lp);
+	of_node_put(lp->phy_node);
+
+cleanup_clk:
+	clk_disable_unprepare(lp->clk);

 free_netdev:
 	free_netdev(ndev);
+33 -17
drivers/net/ipa/ipa_cmd.c
···
 		: field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
 	if (mem->offset > offset_max ||
 	    ipa->mem_offset > offset_max - mem->offset) {
-		dev_err(dev, "IPv%c %s%s table region offset too large "
-			"(0x%04x + 0x%04x > 0x%04x)\n",
-			ipv6 ? '6' : '4', hashed ? "hashed " : "",
-			route ? "route" : "filter",
-			ipa->mem_offset, mem->offset, offset_max);
+		dev_err(dev, "IPv%c %s%s table region offset too large\n",
+			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+			route ? "route" : "filter");
+		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
+			ipa->mem_offset, mem->offset, offset_max);
+
 		return false;
 	}

 	if (mem->offset > ipa->mem_size ||
 	    mem->size > ipa->mem_size - mem->offset) {
-		dev_err(dev, "IPv%c %s%s table region out of range "
-			"(0x%04x + 0x%04x > 0x%04x)\n",
-			ipv6 ? '6' : '4', hashed ? "hashed " : "",
-			route ? "route" : "filter",
-			mem->offset, mem->size, ipa->mem_size);
+		dev_err(dev, "IPv%c %s%s table region out of range\n",
+			ipv6 ? '6' : '4', hashed ? "hashed " : "",
+			route ? "route" : "filter");
+		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
+			mem->offset, mem->size, ipa->mem_size);
+
 		return false;
 	}
···
 	u32 size_max;
 	u32 size;

+	/* In ipa_cmd_hdr_init_local_add() we record the offset and size
+	 * of the header table memory area.  Make sure the offset and size
+	 * fit in the fields that need to hold them, and that the entire
+	 * range is within the overall IPA memory range.
+	 */
 	offset_max = field_max(HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
 	if (mem->offset > offset_max ||
 	    ipa->mem_offset > offset_max - mem->offset) {
-		dev_err(dev, "header table region offset too large "
-			"(0x%04x + 0x%04x > 0x%04x)\n",
-			ipa->mem_offset + mem->offset, offset_max);
+		dev_err(dev, "header table region offset too large\n");
+		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
+			ipa->mem_offset, mem->offset, offset_max);
+
 		return false;
 	}

 	size_max = field_max(HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
 	size = ipa->mem[IPA_MEM_MODEM_HEADER].size;
 	size += ipa->mem[IPA_MEM_AP_HEADER].size;
+
+	if (size > size_max) {
+		dev_err(dev, "header table region size too large\n");
+		dev_err(dev, "    (0x%04x > 0x%08x)\n", size, size_max);
+
+		return false;
+	}
+	if (size > ipa->mem_size || mem->offset > ipa->mem_size - size) {
+		dev_err(dev, "header table region out of range\n");
+		dev_err(dev, "    (0x%04x + 0x%04x > 0x%04x)\n",
+			mem->offset, size, ipa->mem_size);
+
+		return false;
+	}
-	if (mem->offset > ipa->mem_size || size > ipa->mem_size - mem->offset) {
-		dev_err(dev, "header table region out of range "
-			"(0x%04x + 0x%04x > 0x%04x)\n",
-			mem->offset, size, ipa->mem_size);
-		return false;
-	}
+2
drivers/net/ipa/ipa_qmi.c
···
 		.decoded_size	= IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ,
 		.fn		= ipa_server_driver_init_complete,
 	},
+	{ },
 };

 /* Handle an INIT_DRIVER response message from the modem. */
···
 		.decoded_size	= IPA_QMI_INIT_DRIVER_RSP_SZ,
 		.fn		= ipa_client_init_driver,
 	},
+	{ },
 };

 /* Return a pointer to an init modem driver request structure, which contains
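The ipa_qmi fix above (item 21 in the merge summary) adds the empty `{ }` entries because the consumer of these arrays walks them until it hits an all-zero sentinel; without one, the walk runs past the end of the array. A generic sketch of the convention, with invented names rather than the QMI types:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical handler table walked until an all-zero sentinel entry,
 * the same convention the terminating { } restores in the diff above. */
struct handler {
	int msg_id;
	int (*fn)(int arg);
};

static int double_it(int x) { return 2 * x; }

static const struct handler handlers[] = {
	{ .msg_id = 21, .fn = double_it },
	{ },	/* sentinel: fn == NULL stops the walk */
};

static int dispatch(const struct handler *h, int msg_id, int arg)
{
	for (; h->fn; h++)	/* terminates at the sentinel */
		if (h->msg_id == msg_id)
			return h->fn(arg);
	return -1;	/* no handler registered for this message */
}
```

Dropping the `{ }` here would make the `h->fn` test read past `handlers[]`, which is exactly the out-of-bounds walk the kernel fix prevents.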
+9
drivers/net/phy/broadcom.c
···
 	bcm54xx_adjust_rxrefclk(phydev);

 	switch (BRCM_PHY_MODEL(phydev)) {
+	case PHY_ID_BCM50610:
+	case PHY_ID_BCM50610M:
+		err = bcm54xx_config_clock_delay(phydev);
+		break;
 	case PHY_ID_BCM54210E:
 		err = bcm54210e_config_init(phydev);
 		break;
···
 	ret = genphy_resume(phydev);
 	if (ret < 0)
 		return ret;
+
+	/* Upon exiting power down, the PHY remains in an internal reset state
+	 * for 40us
+	 */
+	fsleep(40);

 	return bcm54xx_config_init(phydev);
 }
+1 -1
drivers/net/phy/phylink.c
···
 		err = pl->mac_ops->mac_finish(pl->config, pl->cur_link_an_mode,
 					      state->interface);
 		if (err < 0)
-			phylink_err(pl, "mac_prepare failed: %pe\n",
+			phylink_err(pl, "mac_finish failed: %pe\n",
 				    ERR_PTR(err));
 	}
 }
drivers/net/usb/cdc-phonet.c (+2)

@@
 	err = register_netdev(dev);
 	if (err) {
+		/* Set disconnected flag so that disconnect() returns early. */
+		pnd->disconnected = 1;
 		usb_driver_release_interface(&usbpn_driver, data_intf);
 		goto out;
 	}
drivers/net/usb/r8152.c (+4 -1)

@@
 		ops->in_nway = rtl8153_in_nway;
 		ops->hw_phy_cfg = r8153_hw_phy_cfg;
 		ops->autosuspend_en = rtl8153_runtime_enable;
-		tp->rx_buf_sz = 32 * 1024;
+		if (tp->udev->speed < USB_SPEED_SUPER)
+			tp->rx_buf_sz = 16 * 1024;
+		else
+			tp->rx_buf_sz = 32 * 1024;
 		tp->eee_en = true;
 		tp->eee_adv = MDIO_EEE_1000T | MDIO_EEE_100TX;
 		break;
drivers/net/veth.c (+1 -2)

@@
 	if (rxq < rcv->real_num_rx_queues) {
 		rq = &rcv_priv->rq[rxq];
 		rcv_xdp = rcu_access_pointer(rq->xdp_prog);
-		if (rcv_xdp)
-			skb_record_rx_queue(skb, rxq);
+		skb_record_rx_queue(skb, rxq);
 	}

 	skb_tx_timestamp(skb);
drivers/net/wan/hdlc_x25.c (+42 -2)

@@
 struct x25_state {
 	x25_hdlc_proto settings;
+	bool up;
+	spinlock_t up_lock; /* Protects "up" */
 };

 static int x25_ioctl(struct net_device *dev, struct ifreq *ifr);
@@
 static netdev_tx_t x25_xmit(struct sk_buff *skb, struct net_device *dev)
 {
+	hdlc_device *hdlc = dev_to_hdlc(dev);
+	struct x25_state *x25st = state(hdlc);
 	int result;

 	/* There should be a pseudo header of 1 byte added by upper layers.
@@
 		return NETDEV_TX_OK;
 	}

+	spin_lock_bh(&x25st->up_lock);
+	if (!x25st->up) {
+		spin_unlock_bh(&x25st->up_lock);
+		kfree_skb(skb);
+		return NETDEV_TX_OK;
+	}
+
 	switch (skb->data[0]) {
 	case X25_IFACE_DATA:	/* Data to be transmitted */
 		skb_pull(skb, 1);
 		if ((result = lapb_data_request(dev, skb)) != LAPB_OK)
 			dev_kfree_skb(skb);
+		spin_unlock_bh(&x25st->up_lock);
 		return NETDEV_TX_OK;

 	case X25_IFACE_CONNECT:
@@
 		break;
 	}

+	spin_unlock_bh(&x25st->up_lock);
 	dev_kfree_skb(skb);
 	return NETDEV_TX_OK;
 }
@@
 		.data_transmit = x25_data_transmit,
 	};
 	hdlc_device *hdlc = dev_to_hdlc(dev);
+	struct x25_state *x25st = state(hdlc);
 	struct lapb_parms_struct params;
 	int result;

@@
 	if (result != LAPB_OK)
 		return -EINVAL;

+	spin_lock_bh(&x25st->up_lock);
+	x25st->up = true;
+	spin_unlock_bh(&x25st->up_lock);
+
 	return 0;
 }
@@

 static void x25_close(struct net_device *dev)
 {
+	hdlc_device *hdlc = dev_to_hdlc(dev);
+	struct x25_state *x25st = state(hdlc);
+
+	spin_lock_bh(&x25st->up_lock);
+	x25st->up = false;
+	spin_unlock_bh(&x25st->up_lock);
+
 	lapb_unregister(dev);
 }
@@
 static int x25_rx(struct sk_buff *skb)
 {
 	struct net_device *dev = skb->dev;
+	hdlc_device *hdlc = dev_to_hdlc(dev);
+	struct x25_state *x25st = state(hdlc);

 	if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL) {
 		dev->stats.rx_dropped++;
 		return NET_RX_DROP;
 	}

-	if (lapb_data_received(dev, skb) == LAPB_OK)
-		return NET_RX_SUCCESS;
+	spin_lock_bh(&x25st->up_lock);
+	if (!x25st->up) {
+		spin_unlock_bh(&x25st->up_lock);
+		kfree_skb(skb);
+		dev->stats.rx_dropped++;
+		return NET_RX_DROP;
+	}

+	if (lapb_data_received(dev, skb) == LAPB_OK) {
+		spin_unlock_bh(&x25st->up_lock);
+		return NET_RX_SUCCESS;
+	}
+
+	spin_unlock_bh(&x25st->up_lock);
 	dev->stats.rx_errors++;
 	dev_kfree_skb_any(skb);
 	return NET_RX_DROP;
@@
 		return result;

 	memcpy(&state(hdlc)->settings, &new_settings, size);
+	state(hdlc)->up = false;
+	spin_lock_init(&state(hdlc)->up_lock);

 	/* There's no header_ops so hard_header_len should be 0. */
 	dev->hard_header_len = 0;
drivers/ptp/ptp_qoriq.c (+7 -6)

@@
 	tmr_add = ptp_qoriq->tmr_add;
 	adj = tmr_add;

-	/* calculate diff as adj*(scaled_ppm/65536)/1000000
-	 * and round() to the nearest integer
+	/*
+	 * Calculate diff and round() to the nearest integer
+	 *
+	 * diff = adj * (ppb / 1000000000)
+	 *      = adj * scaled_ppm / 65536000000
 	 */
-	adj *= scaled_ppm;
-	diff = div_u64(adj, 8000000);
-	diff = (diff >> 13) + ((diff >> 12) & 1);
+	diff = mul_u64_u64_div_u64(adj, scaled_ppm, 32768000000);
+	diff = DIV64_U64_ROUND_UP(diff, 2);

 	tmr_add = neg_adj ? tmr_add - diff : tmr_add + diff;
-
 	ptp_qoriq->write(&regs->ctrl_regs->tmr_add, tmr_add);

 	return 0;
include/linux/bpf.h (+25 -8)

@@
 #include <linux/capability.h>
 #include <linux/sched/mm.h>
 #include <linux/slab.h>
+#include <linux/percpu-refcount.h>

 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@
  * fentry = a set of program to run before calling original function
  * fexit = a set of program to run after original function
  */
-int arch_prepare_bpf_trampoline(void *image, void *image_end,
+struct bpf_tramp_image;
+int arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,
 				const struct btf_func_model *m, u32 flags,
 				struct bpf_tramp_progs *tprogs,
 				void *orig_call);
@@
 void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start);
 u64 notrace __bpf_prog_enter_sleepable(struct bpf_prog *prog);
 void notrace __bpf_prog_exit_sleepable(struct bpf_prog *prog, u64 start);
+void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr);
+void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr);

 struct bpf_ksym {
 	unsigned long start;
@@
 	BPF_TRAMP_MODIFY_RETURN,
 	BPF_TRAMP_MAX,
 	BPF_TRAMP_REPLACE, /* more than MAX */
+};
+
+struct bpf_tramp_image {
+	void *image;
+	struct bpf_ksym ksym;
+	struct percpu_ref pcref;
+	void *ip_after_call;
+	void *ip_epilogue;
+	union {
+		struct rcu_head rcu;
+		struct work_struct work;
+	};
 };

 struct bpf_trampoline {
@@
 	/* Number of attached programs. A counter per kind. */
 	int progs_cnt[BPF_TRAMP_MAX];
 	/* Executable image of trampoline */
-	void *image;
+	struct bpf_tramp_image *cur_image;
 	u64 selector;
-	struct bpf_ksym ksym;
 };

 struct bpf_attach_target_info {
@@
 void bpf_image_ksym_del(struct bpf_ksym *ksym);
 void bpf_ksym_add(struct bpf_ksym *ksym);
 void bpf_ksym_del(struct bpf_ksym *ksym);
+int bpf_jit_charge_modmem(u32 pages);
+void bpf_jit_uncharge_modmem(u32 pages);
 #else
 static inline int bpf_trampoline_link_prog(struct bpf_prog *prog,
 					   struct bpf_trampoline *tr)
@@
 	bool func_proto_unreliable;
 	bool sleepable;
 	bool tail_call_reachable;
-	enum bpf_tramp_prog_type trampoline_prog_type;
 	struct hlist_node tramp_hlist;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
@@
 		_ret;				\
 	})

-#define __BPF_PROG_RUN_ARRAY(array, ctx, func, check_non_null)	\
+#define __BPF_PROG_RUN_ARRAY(array, ctx, func, check_non_null, set_cg_storage)	\
 	({						\
 		struct bpf_prog_array_item *_item;	\
 		struct bpf_prog *_prog;			\
@@
 			goto _out;			\
 		_item = &_array->items[0];		\
 		while ((_prog = READ_ONCE(_item->prog))) {		\
-			bpf_cgroup_storage_set(_item->cgroup_storage);	\
+			if (set_cg_storage)		\
+				bpf_cgroup_storage_set(_item->cgroup_storage);	\
 			_ret &= func(_prog, ctx);	\
 			_item++;			\
 		}					\
@@
 	})

 #define BPF_PROG_RUN_ARRAY(array, ctx, func)		\
-	__BPF_PROG_RUN_ARRAY(array, ctx, func, false)
+	__BPF_PROG_RUN_ARRAY(array, ctx, func, false, true)

 #define BPF_PROG_RUN_ARRAY_CHECK(array, ctx, func)	\
-	__BPF_PROG_RUN_ARRAY(array, ctx, func, true)
+	__BPF_PROG_RUN_ARRAY(array, ctx, func, true, false)

 #ifdef CONFIG_BPF_SYSCALL
 DECLARE_PER_CPU(int, bpf_prog_active);
include/linux/if_macvlan.h (+2 -1)

@@
 	if (likely(success)) {
 		struct vlan_pcpu_stats *pcpu_stats;

-		pcpu_stats = this_cpu_ptr(vlan->pcpu_stats);
+		pcpu_stats = get_cpu_ptr(vlan->pcpu_stats);
 		u64_stats_update_begin(&pcpu_stats->syncp);
 		pcpu_stats->rx_packets++;
 		pcpu_stats->rx_bytes += len;
 		if (multicast)
 			pcpu_stats->rx_multicast++;
 		u64_stats_update_end(&pcpu_stats->syncp);
+		put_cpu_ptr(vlan->pcpu_stats);
 	} else {
 		this_cpu_inc(vlan->pcpu_stats->rx_errors);
 	}
include/linux/mlx5/qp.h (+7)

@@
 	}
 }

+static inline int mlx5_get_qp_default_ts(struct mlx5_core_dev *dev)
+{
+	return !MLX5_CAP_ROCE(dev, qp_ts_format) ?
+		       MLX5_QPC_TIMESTAMP_FORMAT_FREE_RUNNING :
+		       MLX5_QPC_TIMESTAMP_FORMAT_DEFAULT;
+}
+
 #endif /* MLX5_QP_H */
include/linux/netdevice.h (+2)

@@
 	NAPI_STATE_IN_BUSY_POLL,	/* sk_busy_loop() owns this NAPI */
 	NAPI_STATE_PREFER_BUSY_POLL,	/* prefer busy-polling over softirq processing*/
 	NAPI_STATE_THREADED,		/* The poll is performed inside its own thread*/
+	NAPI_STATE_SCHED_THREADED,	/* Napi is currently scheduled in threaded mode */
 };

 enum {
@@
 	NAPIF_STATE_IN_BUSY_POLL	= BIT(NAPI_STATE_IN_BUSY_POLL),
 	NAPIF_STATE_PREFER_BUSY_POLL	= BIT(NAPI_STATE_PREFER_BUSY_POLL),
 	NAPIF_STATE_THREADED		= BIT(NAPI_STATE_THREADED),
+	NAPIF_STATE_SCHED_THREADED	= BIT(NAPI_STATE_SCHED_THREADED),
 };

 enum gro_result {
include/linux/netfilter/x_tables.h (+2 -5)

@@
 	unsigned int valid_hooks;

 	/* Man behind the curtain... */
-	struct xt_table_info __rcu *private;
+	struct xt_table_info *private;

 	/* Set this to THIS_MODULE if you are a module, otherwise NULL */
 	struct module *me;
@@
 	 * since addend is most likely 1
 	 */
 	__this_cpu_add(xt_recseq.sequence, addend);
-	smp_wmb();
+	smp_mb();

 	return addend;
 }
@@
 }

 struct nf_hook_ops *xt_hook_ops_alloc(const struct xt_table *, nf_hookfn *);
-
-struct xt_table_info
-*xt_table_get_private_protected(const struct xt_table *table);

 #ifdef CONFIG_COMPAT
 #include <net/compat.h>
include/linux/skbuff.h (+1)

@@
 struct tc_skb_ext {
 	__u32 chain;
 	__u16 mru;
+	bool post_ct;
 };
 #endif
include/linux/usermode_driver.h (+1)

@@
 int umd_load_blob(struct umd_info *info, const void *data, size_t len);
 int umd_unload_blob(struct umd_info *info);
 int fork_usermode_driver(struct umd_info *info);
+void umd_cleanup_helper(struct umd_info *info);

 #endif /* __LINUX_USERMODE_DRIVER_H__ */
include/net/dst.h (+11)

@@
 		dst->ops->update_pmtu(dst, NULL, skb, mtu, false);
 }

+struct dst_entry *dst_blackhole_check(struct dst_entry *dst, u32 cookie);
+void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
+			       struct sk_buff *skb, u32 mtu, bool confirm_neigh);
+void dst_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
+			    struct sk_buff *skb);
+u32 *dst_blackhole_cow_metrics(struct dst_entry *dst, unsigned long old);
+struct neighbour *dst_blackhole_neigh_lookup(const struct dst_entry *dst,
+					     struct sk_buff *skb,
+					     const void *daddr);
+unsigned int dst_blackhole_mtu(const struct dst_entry *dst);
+
 #endif /* _NET_DST_H */
include/net/inet_connection_sock.h (+1 -1)

@@
 	return inet_csk_reqsk_queue_len(sk) >= sk->sk_max_ack_backlog;
 }

-void inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
+bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
 void inet_csk_reqsk_queue_drop_and_put(struct sock *sk, struct request_sock *req);

 static inline void inet_csk_prepare_for_destroy_sock(struct sock *sk)
include/net/netfilter/nf_tables.h (+3)

@@
 	struct nft_flowtable		*flowtable;
 	bool				update;
 	struct list_head		hook_list;
+	u32				flags;
 };

 #define nft_trans_flowtable(trans)	\
@@
 	(((struct nft_trans_flowtable *)trans->data)->update)
 #define nft_trans_flowtable_hooks(trans)	\
 	(((struct nft_trans_flowtable *)trans->data)->hook_list)
+#define nft_trans_flowtable_flags(trans)	\
+	(((struct nft_trans_flowtable *)trans->data)->flags)

 int __init nft_chain_filter_init(void);
 void nft_chain_filter_fini(void);
include/net/nexthop.h (+24)

@@
 int fib6_check_nexthop(struct nexthop *nh, struct fib6_config *cfg,
 		       struct netlink_ext_ack *extack);

+/* Caller should either hold rcu_read_lock(), or RTNL. */
 static inline struct fib6_nh *nexthop_fib6_nh(struct nexthop *nh)
 {
 	struct nh_info *nhi;
@@
 	}

 	nhi = rcu_dereference_rtnl(nh->nh_info);
+	if (nhi->family == AF_INET6)
+		return &nhi->fib6_nh;
+
+	return NULL;
+}
+
+/* Variant of nexthop_fib6_nh().
+ * Caller should either hold rcu_read_lock_bh(), or RTNL.
+ */
+static inline struct fib6_nh *nexthop_fib6_nh_bh(struct nexthop *nh)
+{
+	struct nh_info *nhi;
+
+	if (nh->is_group) {
+		struct nh_group *nh_grp;
+
+		nh_grp = rcu_dereference_bh_rtnl(nh->nh_grp);
+		nh = nexthop_mpath_select(nh_grp, 0);
+		if (!nh)
+			return NULL;
+	}
+
+	nhi = rcu_dereference_bh_rtnl(nh->nh_info);
 	if (nhi->family == AF_INET6)
 		return &nhi->fib6_nh;
include/net/red.h (+10 -2)

@@
 	v->qcount = -1;
 }

-static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, u8 Scell_log)
+static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog,
+				    u8 Scell_log, u8 *stab)
 {
 	if (fls(qth_min) + Wlog > 32)
 		return false;
@@
 		return false;
 	if (qth_max < qth_min)
 		return false;
+	if (stab) {
+		int i;
+
+		for (i = 0; i < RED_STAB_SIZE; i++)
+			if (stab[i] >= 32)
+				return false;
+	}
 	return true;
 }
@@
 	int shift;

 	/*
-	 * The problem: ideally, average length queue recalcultion should
+	 * The problem: ideally, average length queue recalculation should
 	 * be done over constant clock intervals. This is too expensive, so
 	 * that the calculation is driven by outgoing packets.
 	 * When the queue is idle we have to model this clock by hand.
include/net/rtnetlink.h (+2)

@@
  *
  *	@list: Used internally
  *	@kind: Identifier
+ *	@netns_refund: Physical device, move to init_net on netns exit
  *	@maxtype: Highest device specific netlink attribute number
  *	@policy: Netlink policy for device specific attribute validation
  *	@validate: Optional validation function for netlink/changelink parameters
@@
 	size_t			priv_size;
 	void			(*setup)(struct net_device *dev);

+	bool			netns_refund;
 	unsigned int		maxtype;
 	const struct nla_policy	*policy;
 	int			(*validate)(struct nlattr *tb[],
include/net/sock.h (+1 -1)

@@
 static inline bool sk_acceptq_is_full(const struct sock *sk)
 {
-	return READ_ONCE(sk->sk_ack_backlog) > READ_ONCE(sk->sk_max_ack_backlog);
+	return READ_ONCE(sk->sk_ack_backlog) >= READ_ONCE(sk->sk_max_ack_backlog);
 }

 /*
include/uapi/linux/bpf.h (+11 -5)

@@
  *
  * long bpf_check_mtu(void *ctx, u32 ifindex, u32 *mtu_len, s32 len_diff, u64 flags)
  *	Description
- *		Check ctx packet size against exceeding MTU of net device (based
+ *		Check packet size against exceeding MTU of net device (based
  *		on *ifindex*). This helper will likely be used in combination
  *		with helpers that adjust/change the packet size.
  *
@@
  *		Specifying *ifindex* zero means the MTU check is performed
  *		against the current net device. This is practical if this isn't
  *		used prior to redirect.
+ *
+ *		On input *mtu_len* must be a valid pointer, else verifier will
+ *		reject BPF program. If the value *mtu_len* is initialized to
+ *		zero then the ctx packet size is use. When value *mtu_len* is
+ *		provided as input this specify the L3 length that the MTU check
+ *		is done against. Remember XDP and TC length operate at L2, but
+ *		this value is L3 as this correlate to MTU and IP-header tot_len
+ *		values which are L3 (similar behavior as bpf_fib_lookup).
  *
  *		The Linux kernel route table can configure MTUs on a more
  *		specific per route level, which is not provided by this helper.
@@
  *
  *		On return *mtu_len* pointer contains the MTU value of the net
  *		device. Remember the net device configured MTU is the L3 size,
- *		which is returned here and XDP and TX length operate at L2.
+ *		which is returned here and XDP and TC length operate at L2.
  *		Helper take this into account for you, but remember when using
- *		MTU value in your BPF-code. On input *mtu_len* must be a valid
- *		pointer and be initialized (to zero), else verifier will reject
- *		BPF program.
+ *		MTU value in your BPF-code.
  *
  *	Return
  *		* 0 on success, and populate MTU value in *mtu_len* pointer.
include/uapi/linux/psample.h (+1 -4)

@@
 #define __UAPI_PSAMPLE_H

 enum {
-	/* sampled packet metadata */
 	PSAMPLE_ATTR_IIFINDEX,
 	PSAMPLE_ATTR_OIFINDEX,
 	PSAMPLE_ATTR_ORIGSIZE,
@@
 	PSAMPLE_ATTR_GROUP_SEQ,
 	PSAMPLE_ATTR_SAMPLE_RATE,
 	PSAMPLE_ATTR_DATA,
-	PSAMPLE_ATTR_TUNNEL,
-
-	/* commands attributes */
 	PSAMPLE_ATTR_GROUP_REFCOUNT,
+	PSAMPLE_ATTR_TUNNEL,

 	__PSAMPLE_ATTR_MAX
 };
kernel/bpf/bpf_inode_storage.c (+1 -1)

@@
 	fd = *(int *)key;
 	f = fget_raw(fd);
 	if (!f)
-		return NULL;
+		return ERR_PTR(-EBADF);

 	sdata = inode_storage_lookup(f->f_inode, map, true);
 	fput(f);
kernel/bpf/bpf_struct_ops.c (+1 -1)

@@
 		tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
 		tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
-		err = arch_prepare_bpf_trampoline(image,
+		err = arch_prepare_bpf_trampoline(NULL, image,
 						  st_map->image + PAGE_SIZE,
 						  &st_ops->func_models[i], 0,
 						  tprogs, NULL);
kernel/bpf/core.c (+2 -2)

@@
 }
 pure_initcall(bpf_jit_charge_init);

-static int bpf_jit_charge_modmem(u32 pages)
+int bpf_jit_charge_modmem(u32 pages)
 {
 	if (atomic_long_add_return(pages, &bpf_jit_current) >
 	    (bpf_jit_limit >> PAGE_SHIFT)) {
@@
 	return 0;
 }

-static void bpf_jit_uncharge_modmem(u32 pages)
+void bpf_jit_uncharge_modmem(u32 pages)
 {
 	atomic_long_sub(pages, &bpf_jit_current);
 }
kernel/bpf/preload/bpf_preload_kern.c (+15 -4)

@@
 			     &magic, sizeof(magic), &pos);
 	if (n != sizeof(magic))
 		return -EPIPE;
+
 	tgid = umd_ops.info.tgid;
-	wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
-	umd_ops.info.tgid = NULL;
+	if (tgid) {
+		wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
+		umd_cleanup_helper(&umd_ops.info);
+	}
 	return 0;
 }
@@
 static void __exit fini_umd(void)
 {
+	struct pid *tgid;
+
 	bpf_preload_ops = NULL;
+
 	/* kill UMD in case it's still there due to earlier error */
-	kill_pid(umd_ops.info.tgid, SIGKILL, 1);
-	umd_ops.info.tgid = NULL;
+	tgid = umd_ops.info.tgid;
+	if (tgid) {
+		kill_pid(tgid, SIGKILL, 1);
+
+		wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
+		umd_cleanup_helper(&umd_ops.info);
+	}
 	umd_unload_blob(&umd_ops.info);
 }
 late_initcall(load_umd);
kernel/bpf/syscall.c (+5)

@@
 			err = PTR_ERR(btf);
 			goto free_map;
 		}
+		if (btf_is_kernel(btf)) {
+			btf_put(btf);
+			err = -EACCES;
+			goto free_map;
+		}
 		map->btf = btf;

 		if (attr->btf_value_type_id) {
kernel/bpf/trampoline.c (+168 -50)

@@
 			   PAGE_SIZE, true, ksym->name);
 }

-static void bpf_trampoline_ksym_add(struct bpf_trampoline *tr)
-{
-	struct bpf_ksym *ksym = &tr->ksym;
-
-	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", tr->key);
-	bpf_image_ksym_add(tr->image, ksym);
-}
-
 static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
 	struct hlist_head *head;
-	void *image;
 	int i;

 	mutex_lock(&trampoline_mutex);
@@
 	if (!tr)
 		goto out;

-	/* is_root was checked earlier. No need for bpf_jit_charge_modmem() */
-	image = bpf_jit_alloc_exec_page();
-	if (!image) {
-		kfree(tr);
-		tr = NULL;
-		goto out;
-	}
-
 	tr->key = key;
 	INIT_HLIST_NODE(&tr->hlist);
 	hlist_add_head(&tr->hlist, head);
@@
 	mutex_init(&tr->mutex);
 	for (i = 0; i < BPF_TRAMP_MAX; i++)
 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
-	tr->image = image;
-	INIT_LIST_HEAD_RCU(&tr->ksym.lnode);
-	bpf_trampoline_ksym_add(tr);
 out:
 	mutex_unlock(&trampoline_mutex);
 	return tr;
@@
 	return tprogs;
 }

+static void __bpf_tramp_image_put_deferred(struct work_struct *work)
+{
+	struct bpf_tramp_image *im;
+
+	im = container_of(work, struct bpf_tramp_image, work);
+	bpf_image_ksym_del(&im->ksym);
+	bpf_jit_free_exec(im->image);
+	bpf_jit_uncharge_modmem(1);
+	percpu_ref_exit(&im->pcref);
+	kfree_rcu(im, rcu);
+}
+
+/* callback, fexit step 3 or fentry step 2 */
+static void __bpf_tramp_image_put_rcu(struct rcu_head *rcu)
+{
+	struct bpf_tramp_image *im;
+
+	im = container_of(rcu, struct bpf_tramp_image, rcu);
+	INIT_WORK(&im->work, __bpf_tramp_image_put_deferred);
+	schedule_work(&im->work);
+}
+
+/* callback, fexit step 2. Called after percpu_ref_kill confirms. */
+static void __bpf_tramp_image_release(struct percpu_ref *pcref)
+{
+	struct bpf_tramp_image *im;
+
+	im = container_of(pcref, struct bpf_tramp_image, pcref);
+	call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu);
+}
+
+/* callback, fexit or fentry step 1 */
+static void __bpf_tramp_image_put_rcu_tasks(struct rcu_head *rcu)
+{
+	struct bpf_tramp_image *im;
+
+	im = container_of(rcu, struct bpf_tramp_image, rcu);
+	if (im->ip_after_call)
+		/* the case of fmod_ret/fexit trampoline and CONFIG_PREEMPTION=y */
+		percpu_ref_kill(&im->pcref);
+	else
+		/* the case of fentry trampoline */
+		call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu);
+}
+
+static void bpf_tramp_image_put(struct bpf_tramp_image *im)
+{
+	/* The trampoline image that calls original function is using:
+	 * rcu_read_lock_trace to protect sleepable bpf progs
+	 * rcu_read_lock to protect normal bpf progs
+	 * percpu_ref to protect trampoline itself
+	 * rcu tasks to protect trampoline asm not covered by percpu_ref
+	 * (which are few asm insns before __bpf_tramp_enter and
+	 *  after __bpf_tramp_exit)
+	 *
+	 * The trampoline is unreachable before bpf_tramp_image_put().
+	 *
+	 * First, patch the trampoline to avoid calling into fexit progs.
+	 * The progs will be freed even if the original function is still
+	 * executing or sleeping.
+	 * In case of CONFIG_PREEMPT=y use call_rcu_tasks() to wait on
+	 * first few asm instructions to execute and call into
+	 * __bpf_tramp_enter->percpu_ref_get.
+	 * Then use percpu_ref_kill to wait for the trampoline and the original
+	 * function to finish.
+	 * Then use call_rcu_tasks() to make sure few asm insns in
+	 * the trampoline epilogue are done as well.
+	 *
+	 * In !PREEMPT case the task that got interrupted in the first asm
+	 * insns won't go through an RCU quiescent state which the
+	 * percpu_ref_kill will be waiting for. Hence the first
+	 * call_rcu_tasks() is not necessary.
+	 */
+	if (im->ip_after_call) {
+		int err = bpf_arch_text_poke(im->ip_after_call, BPF_MOD_JUMP,
+					     NULL, im->ip_epilogue);
+		WARN_ON(err);
+		if (IS_ENABLED(CONFIG_PREEMPTION))
+			call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
+		else
+			percpu_ref_kill(&im->pcref);
+		return;
+	}
+
+	/* The trampoline without fexit and fmod_ret progs doesn't call original
+	 * function and doesn't use percpu_ref.
+	 * Use call_rcu_tasks_trace() to wait for sleepable progs to finish.
+	 * Then use call_rcu_tasks() to wait for the rest of trampoline asm
+	 * and normal progs.
+	 */
+	call_rcu_tasks_trace(&im->rcu, __bpf_tramp_image_put_rcu_tasks);
+}
+
+static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
+{
+	struct bpf_tramp_image *im;
+	struct bpf_ksym *ksym;
+	void *image;
+	int err = -ENOMEM;
+
+	im = kzalloc(sizeof(*im), GFP_KERNEL);
+	if (!im)
+		goto out;
+
+	err = bpf_jit_charge_modmem(1);
+	if (err)
+		goto out_free_im;
+
+	err = -ENOMEM;
+	im->image = image = bpf_jit_alloc_exec_page();
+	if (!image)
+		goto out_uncharge;
+
+	err = percpu_ref_init(&im->pcref, __bpf_tramp_image_release, 0, GFP_KERNEL);
+	if (err)
+		goto out_free_image;
+
+	ksym = &im->ksym;
+	INIT_LIST_HEAD_RCU(&ksym->lnode);
+	snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu_%u", key, idx);
+	bpf_image_ksym_add(image, ksym);
+	return im;
+
+out_free_image:
+	bpf_jit_free_exec(im->image);
+out_uncharge:
+	bpf_jit_uncharge_modmem(1);
+out_free_im:
+	kfree(im);
+out:
+	return ERR_PTR(err);
+}
+
 static int bpf_trampoline_update(struct bpf_trampoline *tr)
 {
-	void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2;
-	void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2;
+	struct bpf_tramp_image *im;
 	struct bpf_tramp_progs *tprogs;
 	u32 flags = BPF_TRAMP_F_RESTORE_REGS;
 	int err, total;
@@
 		return PTR_ERR(tprogs);

 	if (total == 0) {
-		err = unregister_fentry(tr, old_image);
+		err = unregister_fentry(tr, tr->cur_image->image);
+		bpf_tramp_image_put(tr->cur_image);
+		tr->cur_image = NULL;
 		tr->selector = 0;
 		goto out;
 	}

+	im = bpf_tramp_image_alloc(tr->key, tr->selector);
+	if (IS_ERR(im)) {
+		err = PTR_ERR(im);
+		goto out;
+	}
+
@@
 	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;

-	/* Though the second half of trampoline page is unused a task could be
-	 * preempted in the middle of the first half of trampoline and two
-	 * updates to trampoline would change the code from underneath the
-	 * preempted task. Hence wait for tasks to voluntarily schedule or go
-	 * to userspace.
-	 * The same trampoline can hold both sleepable and non-sleepable progs.
-	 * synchronize_rcu_tasks_trace() is needed to make sure all sleepable
-	 * programs finish executing.
-	 * Wait for these two grace periods together.
-	 */
-	synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace);
-
-	err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
+	err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
 					  &tr->func.model, flags, tprogs,
 					  tr->func.addr);
 	if (err < 0)
 		goto out;

-	if (tr->selector)
+	WARN_ON(tr->cur_image && tr->selector == 0);
+	WARN_ON(!tr->cur_image && tr->selector);
+	if (tr->cur_image)
 		/* progs already running at this address */
-		err = modify_fentry(tr, old_image, new_image);
+		err = modify_fentry(tr, tr->cur_image->image, im->image);
 	else
 		/* first time registering */
-		err = register_fentry(tr, new_image);
+		err = register_fentry(tr, im->image);
 	if (err)
 		goto out;
+	if (tr->cur_image)
+		bpf_tramp_image_put(tr->cur_image);
+	tr->cur_image = im;
 	tr->selector++;
 out:
 	kfree(tprogs);
@@
 		goto out;
 	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
 		goto out;
-	bpf_image_ksym_del(&tr->ksym);
-	/* This code will be executed when all bpf progs (both sleepable and
-	 * non-sleepable) went through
-	 * bpf_prog_put()->call_rcu[_tasks_trace]()->bpf_prog_free_deferred().
-	 * Hence no need for another synchronize_rcu_tasks_trace() here,
-	 * but synchronize_rcu_tasks() is still needed, since trampoline
-	 * may not have had any sleepable programs and we need to wait
-	 * for tasks to get out of trampoline code before freeing it.
+	/* This code will be executed even when the last bpf_tramp_image
+	 * is alive. All progs are detached from the trampoline and the
+	 * trampoline image is patched with jmp into epilogue to skip
+	 * fexit progs. The fentry-only trampoline will be freed via
+	 * multiple rcu callbacks.
 	 */
-	synchronize_rcu_tasks();
-	bpf_jit_free_exec(tr->image);
 	hlist_del(&tr->hlist);
 	kfree(tr);
 out:
@@
 	rcu_read_unlock_trace();
 }

+void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr)
+{
+	percpu_ref_get(&tr->pcref);
+}
+
+void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr)
+{
+	percpu_ref_put(&tr->pcref);
+}
+
 int __weak
-arch_prepare_bpf_trampoline(void *image, void *image_end,
+arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,
 			    const struct btf_func_model *m, u32 flags,
 			    struct bpf_tramp_progs *tprogs,
 			    void *orig_call)
kernel/bpf/verifier.c (+25 -12)
···
 {
 	bool mask_to_left = (opcode == BPF_ADD && off_is_neg) ||
 			    (opcode == BPF_SUB && !off_is_neg);
-	u32 off;
+	u32 off, max;
 
 	switch (ptr_reg->type) {
 	case PTR_TO_STACK:
+		/* Offset 0 is out-of-bounds, but acceptable start for the
+		 * left direction, see BPF_REG_FP.
+		 */
+		max = MAX_BPF_STACK + mask_to_left;
 		/* Indirect variable offset stack access is prohibited in
 		 * unprivileged mode so it's not handled here.
 		 */
···
 		if (mask_to_left)
 			*ptr_limit = MAX_BPF_STACK + off;
 		else
-			*ptr_limit = -off;
-		return 0;
+			*ptr_limit = -off - 1;
+		return *ptr_limit >= max ? -ERANGE : 0;
 	case PTR_TO_MAP_VALUE:
+		max = ptr_reg->map_ptr->value_size;
 		if (mask_to_left) {
 			*ptr_limit = ptr_reg->umax_value + ptr_reg->off;
 		} else {
 			off = ptr_reg->smin_value + ptr_reg->off;
-			*ptr_limit = ptr_reg->map_ptr->value_size - off;
+			*ptr_limit = ptr_reg->map_ptr->value_size - off - 1;
 		}
-		return 0;
+		return *ptr_limit >= max ? -ERANGE : 0;
 	default:
 		return -EINVAL;
 	}
···
 	u32 alu_state, alu_limit;
 	struct bpf_reg_state tmp;
 	bool ret;
+	int err;
 
 	if (can_skip_alu_sanitation(env, insn))
 		return 0;
···
 	alu_state |= ptr_is_dst_reg ?
 		     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
 
-	if (retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg))
-		return 0;
-	if (update_alu_sanitation_state(aux, alu_state, alu_limit))
-		return -EACCES;
+	err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);
+	if (err < 0)
+		return err;
+
+	err = update_alu_sanitation_state(aux, alu_state, alu_limit);
+	if (err < 0)
+		return err;
 do_sim:
 	/* Simulate and find potential out-of-bounds access under
 	 * speculative execution from truncation as a result of
···
 	case BPF_ADD:
 		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
 		if (ret < 0) {
-			verbose(env, "R%d tried to add from different maps or paths\n", dst);
+			verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);
 			return ret;
 		}
 		/* We can take a fixed offset as long as it doesn't overflow
···
 	case BPF_SUB:
 		ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);
 		if (ret < 0) {
-			verbose(env, "R%d tried to sub from different maps or paths\n", dst);
+			verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);
 			return ret;
 		}
 		if (dst_reg == off_reg) {
···
 	btf = btf_get_by_fd(attr->prog_btf_fd);
 	if (IS_ERR(btf))
 		return PTR_ERR(btf);
+	if (btf_is_kernel(btf)) {
+		btf_put(btf);
+		return -EACCES;
+	}
 	env->prog->aux->btf = btf;
 
 	err = check_btf_func(env, attr, uattr);
···
 	off_reg = issrc ? insn->src_reg : insn->dst_reg;
 	if (isneg)
 		*patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
-	*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit - 1);
+	*patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);
 	*patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);
 	*patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);
 	*patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
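The verifier patch above emits the sanitation sequence as BPF instructions (MOV limit, SUB, OR, NEG, then an arithmetic shift and AND in the following, elided lines). A hedged userspace sketch of that mask logic, written as plain C rather than the kernel's code, shows why `alu_limit` must now hold the largest *valid* offset (the `- 1` moved into `retrieve_ptr_limit()`). It assumes gcc/clang's arithmetic right shift for signed types.

```c
#include <stdint.h>

/* Hypothetical stand-in for the verifier-patched BPF sequence:
 * an in-range offset passes through, anything else collapses to 0
 * before it can steer a speculative access. */
static int64_t sanitize_off(int64_t off, int64_t alu_limit)
{
	int64_t ax = alu_limit;

	ax -= off;         /* BPF_SUB: negative iff off > alu_limit */
	ax |= off;         /* BPF_OR:  negative iff off is negative */
	ax = -ax;          /* BPF_NEG */
	ax >>= 63;         /* BPF_ARSH: all-ones mask iff off in [0, limit] */
	return off & ax;   /* BPF_AND: out-of-range offsets become 0 */
}
```

With the old `aux->alu_limit - 1` at patch time *and* the new `- 1` in `retrieve_ptr_limit()`, the limit would have been off by one; the fix keeps the subtraction in exactly one place.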
kernel/trace/ftrace.c (+38 -5)
···
 	return NULL;
 }
 
+static struct ftrace_direct_func *ftrace_alloc_direct_func(unsigned long addr)
+{
+	struct ftrace_direct_func *direct;
+
+	direct = kmalloc(sizeof(*direct), GFP_KERNEL);
+	if (!direct)
+		return NULL;
+	direct->addr = addr;
+	direct->count = 0;
+	list_add_rcu(&direct->next, &ftrace_direct_funcs);
+	ftrace_direct_func_count++;
+	return direct;
+}
+
 /**
  * register_ftrace_direct - Call a custom trampoline directly
  * @ip: The address of the nop at the beginning of a function
···
 
 	direct = ftrace_find_direct_func(addr);
 	if (!direct) {
-		direct = kmalloc(sizeof(*direct), GFP_KERNEL);
+		direct = ftrace_alloc_direct_func(addr);
 		if (!direct) {
 			kfree(entry);
 			goto out_unlock;
 		}
-		direct->addr = addr;
-		direct->count = 0;
-		list_add_rcu(&direct->next, &ftrace_direct_funcs);
-		ftrace_direct_func_count++;
 	}
 
 	entry->ip = ip;
···
 int modify_ftrace_direct(unsigned long ip,
 			 unsigned long old_addr, unsigned long new_addr)
 {
+	struct ftrace_direct_func *direct, *new_direct = NULL;
 	struct ftrace_func_entry *entry;
 	struct dyn_ftrace *rec;
 	int ret = -ENODEV;
···
 	if (entry->direct != old_addr)
 		goto out_unlock;
 
+	direct = ftrace_find_direct_func(old_addr);
+	if (WARN_ON(!direct))
+		goto out_unlock;
+	if (direct->count > 1) {
+		ret = -ENOMEM;
+		new_direct = ftrace_alloc_direct_func(new_addr);
+		if (!new_direct)
+			goto out_unlock;
+		direct->count--;
+		new_direct->count++;
+	} else {
+		direct->addr = new_addr;
+	}
+
 	/*
 	 * If there's no other ftrace callback on the rec->ip location,
 	 * then it can be changed directly by the architecture.
···
 	} else {
 		entry->direct = new_addr;
 		ret = 0;
+	}
+
+	if (unlikely(ret && new_direct)) {
+		direct->count++;
+		list_del_rcu(&new_direct->next);
+		synchronize_rcu_tasks();
+		kfree(new_direct);
+		ftrace_direct_func_count--;
 	}
 
 out_unlock:
kernel/usermode_driver.c (+15 -6)
···
 	struct umd_info *umd_info = info->data;
 
 	/* cleanup if umh_setup() was successful but exec failed */
-	if (info->retval) {
-		fput(umd_info->pipe_to_umh);
-		fput(umd_info->pipe_from_umh);
-		put_pid(umd_info->tgid);
-		umd_info->tgid = NULL;
-	}
+	if (info->retval)
+		umd_cleanup_helper(umd_info);
 }
+
+/**
+ * umd_cleanup_helper - release the resources which were allocated in umd_setup
+ * @info: information about usermode driver
+ */
+void umd_cleanup_helper(struct umd_info *info)
+{
+	fput(info->pipe_to_umh);
+	fput(info->pipe_from_umh);
+	put_pid(info->tgid);
+	info->tgid = NULL;
+}
+EXPORT_SYMBOL_GPL(umd_cleanup_helper);
 
 /**
  * fork_usermode_driver - fork a usermode driver
lib/math/div64.c (+1)
···
 
 	return res + div64_u64(a * b, c);
 }
+EXPORT_SYMBOL(mul_u64_u64_div_u64);
 #endif
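The newly exported helper exists so drivers (here, the ptp_qoriq adjfine fix) can compute `a * b / c` without the intermediate product overflowing u64. A hedged userspace sketch of that property, using the gcc/clang `unsigned __int128` extension rather than the kernel's portable implementation in lib/math/div64.c:

```c
#include <stdint.h>

/* Hypothetical stand-in for mul_u64_u64_div_u64(): the product is
 * widened to 128 bits before the divide, so a * b may exceed 2^64
 * without truncation. Assumes a 64-bit gcc/clang target. */
static uint64_t mul_div_sketch(uint64_t a, uint64_t b, uint64_t c)
{
	return (uint64_t)(((unsigned __int128)a * b) / c);
}
```

A naive `a * b / c` in 64 bits would silently wrap for operands like `a = b = 1 << 40`, which is exactly the class of overflow fixed in ptp_qoriq_adjfine().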
net/bridge/br_switchdev.c (+2)
···
 {
 	if (!fdb->dst)
 		return;
+	if (test_bit(BR_FDB_LOCAL, &fdb->flags))
+		return;
 
 	switch (type) {
 	case RTM_DELNEIGH:
net/can/isotp.c (+8 -10)
···
 	nskb->dev = dev;
 	can_skb_set_owner(nskb, sk);
 	ncf = (struct canfd_frame *)nskb->data;
-	skb_put(nskb, so->ll.mtu);
+	skb_put_zero(nskb, so->ll.mtu);
 
 	/* create & send flow control reply */
 	ncf->can_id = so->txid;
···
 	if (ae)
 		ncf->data[0] = so->opt.ext_address;
 
-	if (so->ll.mtu == CANFD_MTU)
-		ncf->flags = so->ll.tx_flags;
+	ncf->flags = so->ll.tx_flags;
 
 	can_send_ret = can_send(nskb, 1);
 	if (can_send_ret)
···
 	can_skb_prv(skb)->skbcnt = 0;
 
 	cf = (struct canfd_frame *)skb->data;
-	skb_put(skb, so->ll.mtu);
+	skb_put_zero(skb, so->ll.mtu);
 
 	/* create consecutive frame */
 	isotp_fill_dataframe(cf, so, ae, 0);
···
 	so->tx.sn %= 16;
 	so->tx.bs++;
 
-	if (so->ll.mtu == CANFD_MTU)
-		cf->flags = so->ll.tx_flags;
+	cf->flags = so->ll.tx_flags;
 
 	skb->dev = dev;
 	can_skb_set_owner(skb, sk);
···
 	so->tx.idx = 0;
 
 	cf = (struct canfd_frame *)skb->data;
-	skb_put(skb, so->ll.mtu);
+	skb_put_zero(skb, so->ll.mtu);
 
 	/* check for single frame transmission depending on TX_DL */
 	if (size <= so->tx.ll_dl - SF_PCI_SZ4 - ae - off) {
···
 	}
 
 	/* send the first or only CAN frame */
-	if (so->ll.mtu == CANFD_MTU)
-		cf->flags = so->ll.tx_flags;
+	cf->flags = so->ll.tx_flags;
 
 	skb->dev = dev;
 	skb->sk  = sk;
···
 	if (ll.mtu != CAN_MTU && ll.mtu != CANFD_MTU)
 		return -EINVAL;
 
-	if (ll.mtu == CAN_MTU && ll.tx_dl > CAN_MAX_DLEN)
+	if (ll.mtu == CAN_MTU &&
+	    (ll.tx_dl > CAN_MAX_DLEN || ll.tx_flags != 0))
 		return -EINVAL;
 
 	memcpy(&so->ll, &ll, sizeof(ll));
net/core/dev.c (+31 -2)
···
 		return -ENOMEM;
 
 	for_each_netdev(net, d) {
+		struct netdev_name_node *name_node;
+		list_for_each_entry(name_node, &d->name_node->list, list) {
+			if (!sscanf(name_node->name, name, &i))
+				continue;
+			if (i < 0 || i >= max_netdevices)
+				continue;
+
+			/* avoid cases where sscanf is not exact inverse of printf */
+			snprintf(buf, IFNAMSIZ, name, i);
+			if (!strncmp(buf, name_node->name, IFNAMSIZ))
+				set_bit(i, inuse);
+		}
 		if (!sscanf(d->name, name, &i))
 			continue;
 		if (i < 0 || i >= max_netdevices)
···
 	 */
 	thread = READ_ONCE(napi->thread);
 	if (thread) {
+		/* Avoid doing set_bit() if the thread is in
+		 * INTERRUPTIBLE state, cause napi_thread_wait()
+		 * makes sure to proceed with napi polling
+		 * if the thread is explicitly woken from here.
+		 */
+		if (READ_ONCE(thread->state) != TASK_INTERRUPTIBLE)
+			set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);
 		wake_up_process(thread);
 		return;
 	}
···
 		WARN_ON_ONCE(!(val & NAPIF_STATE_SCHED));
 
 		new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED |
+			      NAPIF_STATE_SCHED_THREADED |
 			      NAPIF_STATE_PREFER_BUSY_POLL);
 
 		/* If STATE_MISSED was set, leave STATE_SCHED set,
···
 
 static int napi_thread_wait(struct napi_struct *napi)
 {
+	bool woken = false;
+
 	set_current_state(TASK_INTERRUPTIBLE);
 
 	while (!kthread_should_stop() && !napi_disable_pending(napi)) {
-		if (test_bit(NAPI_STATE_SCHED, &napi->state)) {
+		/* Testing SCHED_THREADED bit here to make sure the current
+		 * kthread owns this napi and could poll on this napi.
+		 * Testing SCHED bit is not enough because SCHED bit might be
+		 * set by some other busy poll thread or by napi_disable().
+		 */
+		if (test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {
 			WARN_ON(!list_empty(&napi->poll_list));
 			__set_current_state(TASK_RUNNING);
 			return 0;
 		}
 
 		schedule();
+		/* woken being true indicates this thread owns this napi. */
+		woken = true;
 		set_current_state(TASK_INTERRUPTIBLE);
 	}
 	__set_current_state(TASK_RUNNING);
···
 			continue;
 
 		/* Leave virtual devices for the generic cleanup */
-		if (dev->rtnl_link_ops)
+		if (dev->rtnl_link_ops && !dev->rtnl_link_ops->netns_refund)
 			continue;
 
 		/* Push remaining network devices to init_net */
net/core/drop_monitor.c (+23)
···
 	return 0;
 
 err_module_put:
+	for_each_possible_cpu(cpu) {
+		struct per_cpu_dm_data *hw_data = &per_cpu(dm_hw_cpu_data, cpu);
+		struct sk_buff *skb;
+
+		del_timer_sync(&hw_data->send_timer);
+		cancel_work_sync(&hw_data->dm_alert_work);
+		while ((skb = __skb_dequeue(&hw_data->drop_queue))) {
+			struct devlink_trap_metadata *hw_metadata;
+
+			hw_metadata = NET_DM_SKB_CB(skb)->hw_metadata;
+			net_dm_hw_metadata_free(hw_metadata);
+			consume_skb(skb);
+		}
+	}
 	module_put(THIS_MODULE);
 	return rc;
 }
···
 err_unregister_trace:
 	unregister_trace_kfree_skb(ops->kfree_skb_probe, NULL);
 err_module_put:
+	for_each_possible_cpu(cpu) {
+		struct per_cpu_dm_data *data = &per_cpu(dm_cpu_data, cpu);
+		struct sk_buff *skb;
+
+		del_timer_sync(&data->send_timer);
+		cancel_work_sync(&data->dm_alert_work);
+		while ((skb = __skb_dequeue(&data->drop_queue)))
+			consume_skb(skb);
+	}
 	module_put(THIS_MODULE);
 	return rc;
 }
net/core/dst.c (+47 -22)
···
 }
 EXPORT_SYMBOL(__dst_destroy_metrics_generic);
 
-static struct dst_ops md_dst_ops = {
-	.family = AF_UNSPEC,
+struct dst_entry *dst_blackhole_check(struct dst_entry *dst, u32 cookie)
+{
+	return NULL;
+}
+
+u32 *dst_blackhole_cow_metrics(struct dst_entry *dst, unsigned long old)
+{
+	return NULL;
+}
+
+struct neighbour *dst_blackhole_neigh_lookup(const struct dst_entry *dst,
+					     struct sk_buff *skb,
+					     const void *daddr)
+{
+	return NULL;
+}
+
+void dst_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
+			       struct sk_buff *skb, u32 mtu,
+			       bool confirm_neigh)
+{
+}
+EXPORT_SYMBOL_GPL(dst_blackhole_update_pmtu);
+
+void dst_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
+			    struct sk_buff *skb)
+{
+}
+EXPORT_SYMBOL_GPL(dst_blackhole_redirect);
+
+unsigned int dst_blackhole_mtu(const struct dst_entry *dst)
+{
+	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
+
+	return mtu ? : dst->dev->mtu;
+}
+EXPORT_SYMBOL_GPL(dst_blackhole_mtu);
+
+static struct dst_ops dst_blackhole_ops = {
+	.family		= AF_UNSPEC,
+	.neigh_lookup	= dst_blackhole_neigh_lookup,
+	.check		= dst_blackhole_check,
+	.cow_metrics	= dst_blackhole_cow_metrics,
+	.update_pmtu	= dst_blackhole_update_pmtu,
+	.redirect	= dst_blackhole_redirect,
+	.mtu		= dst_blackhole_mtu,
 };
-
-static int dst_md_discard_out(struct net *net, struct sock *sk, struct sk_buff *skb)
-{
-	WARN_ONCE(1, "Attempting to call output on metadata dst\n");
-	kfree_skb(skb);
-	return 0;
-}
-
-static int dst_md_discard(struct sk_buff *skb)
-{
-	WARN_ONCE(1, "Attempting to call input on metadata dst\n");
-	kfree_skb(skb);
-	return 0;
-}
 
 static void __metadata_dst_init(struct metadata_dst *md_dst,
 				enum metadata_type type, u8 optslen)
-
 {
 	struct dst_entry *dst;
 
 	dst = &md_dst->dst;
-	dst_init(dst, &md_dst_ops, NULL, 1, DST_OBSOLETE_NONE,
+	dst_init(dst, &dst_blackhole_ops, NULL, 1, DST_OBSOLETE_NONE,
 		 DST_METADATA | DST_NOCOUNT);
-
-	dst->input = dst_md_discard;
-	dst->output = dst_md_discard_out;
-
 	memset(dst + 1, 0, sizeof(*md_dst) + optslen - sizeof(*dst));
 	md_dst->type = type;
 }
net/core/filter.c (+10 -2)
···
 	if (unlikely(flags & ~(BPF_MTU_CHK_SEGS)))
 		return -EINVAL;
 
-	if (unlikely(flags & BPF_MTU_CHK_SEGS && len_diff))
+	if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len)))
 		return -EINVAL;
 
 	dev = __dev_via_ifindex(dev, ifindex);
···
 	mtu = READ_ONCE(dev->mtu);
 
 	dev_len = mtu + dev->hard_header_len;
-	skb_len = skb->len + len_diff; /* minus result pass check */
+
+	/* If set use *mtu_len as input, L3 as iph->tot_len (like fib_lookup) */
+	skb_len = *mtu_len ? *mtu_len + dev->hard_header_len : skb->len;
+
+	skb_len += len_diff; /* minus result pass check */
 	if (skb_len <= dev_len) {
 		ret = BPF_MTU_CHK_RET_SUCCESS;
 		goto out;
···
 
 	/* Add L2-header as dev MTU is L3 size */
 	dev_len = mtu + dev->hard_header_len;
+
+	/* Use *mtu_len as input, L3 as iph->tot_len (like fib_lookup) */
+	if (*mtu_len)
+		xdp_len = *mtu_len + dev->hard_header_len;
 
 	xdp_len += len_diff; /* minus result pass check */
 	if (xdp_len > dev_len)
net/core/flow_dissector.c (+1 -1)
···
 	 * avoid confusion with packets without such field
 	 */
 	if (icmp_has_id(ih->type))
-		key_icmp->id = ih->un.echo.id ? : 1;
+		key_icmp->id = ih->un.echo.id ? ntohs(ih->un.echo.id) : 1;
 	else
 		key_icmp->id = 0;
 }
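The one-liner above matters because the echo id travels big-endian on the wire; storing the raw `__be16` gives a byte-swapped key on little-endian hosts. A hedged sketch of the conversion by hand, with `load_be16()` as a hypothetical stand-in for what `ntohs()` does here:

```c
#include <stdint.h>

/* Read a 16-bit big-endian (network order) field from a byte buffer:
 * most significant byte first, regardless of host endianness. */
static uint16_t load_be16(const uint8_t *p)
{
	return (uint16_t)((p[0] << 8) | p[1]);
}
```

Keys built this way are stable across architectures, which is what the dissector needs for consistent hashing.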
net/core/sock.c (+28 -16)
···
 	twsk_prot->twsk_slab = NULL;
 }
 
+static int tw_prot_init(const struct proto *prot)
+{
+	struct timewait_sock_ops *twsk_prot = prot->twsk_prot;
+
+	if (!twsk_prot)
+		return 0;
+
+	twsk_prot->twsk_slab_name = kasprintf(GFP_KERNEL, "tw_sock_%s",
+					      prot->name);
+	if (!twsk_prot->twsk_slab_name)
+		return -ENOMEM;
+
+	twsk_prot->twsk_slab =
+		kmem_cache_create(twsk_prot->twsk_slab_name,
+				  twsk_prot->twsk_obj_size, 0,
+				  SLAB_ACCOUNT | prot->slab_flags,
+				  NULL);
+	if (!twsk_prot->twsk_slab) {
+		pr_crit("%s: Can't create timewait sock SLAB cache!\n",
+			prot->name);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
 static void req_prot_cleanup(struct request_sock_ops *rsk_prot)
 {
 	if (!rsk_prot)
···
 		if (req_prot_init(prot))
 			goto out_free_request_sock_slab;
 
-		if (prot->twsk_prot != NULL) {
-			prot->twsk_prot->twsk_slab_name = kasprintf(GFP_KERNEL, "tw_sock_%s", prot->name);
-
-			if (prot->twsk_prot->twsk_slab_name == NULL)
-				goto out_free_request_sock_slab;
-
-			prot->twsk_prot->twsk_slab =
-				kmem_cache_create(prot->twsk_prot->twsk_slab_name,
-						  prot->twsk_prot->twsk_obj_size,
-						  0,
-						  SLAB_ACCOUNT |
-						  prot->slab_flags,
-						  NULL);
-			if (prot->twsk_prot->twsk_slab == NULL)
-				goto out_free_timewait_sock_slab;
-		}
+		if (tw_prot_init(prot))
+			goto out_free_timewait_sock_slab;
 	}
 
 	mutex_lock(&proto_list_mutex);
net/dccp/ipv6.c (+5)
···
 	if (!ipv6_unicast_destination(skb))
 		return 0;	/* discard, don't send a reset here */
 
+	if (ipv6_addr_v4mapped(&ipv6_hdr(skb)->saddr)) {
+		__IP6_INC_STATS(sock_net(sk), NULL, IPSTATS_MIB_INHDRERRORS);
+		return 0;
+	}
+
 	if (dccp_bad_service_code(sk, service)) {
 		dcb->dccpd_reset_code = DCCP_RESET_CODE_BAD_SERVICE_CODE;
 		goto drop;
net/dsa/dsa2.c (+7 -4)
···
 {
 	struct dsa_switch *ds = dp->ds;
 	struct dsa_switch_tree *dst = ds->dst;
+	const struct dsa_device_ops *tag_ops;
 	enum dsa_tag_protocol tag_protocol;
 
 	tag_protocol = dsa_get_tag_protocol(dp, master);
···
 	 * nothing to do here.
 	 */
 	} else {
-		dst->tag_ops = dsa_tag_driver_get(tag_protocol);
-		if (IS_ERR(dst->tag_ops)) {
-			if (PTR_ERR(dst->tag_ops) == -ENOPROTOOPT)
+		tag_ops = dsa_tag_driver_get(tag_protocol);
+		if (IS_ERR(tag_ops)) {
+			if (PTR_ERR(tag_ops) == -ENOPROTOOPT)
 				return -EPROBE_DEFER;
 			dev_warn(ds->dev, "No tagger for this switch\n");
 			dp->master = NULL;
-			return PTR_ERR(dst->tag_ops);
+			return PTR_ERR(tag_ops);
 		}
+
+		dst->tag_ops = tag_ops;
 	}
 
 	dp->master = master;
net/ipv4/inet_connection_sock.c (+5 -2)
···
 	return found;
 }
 
-void inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
+bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
 {
-	if (reqsk_queue_unlink(req)) {
+	bool unlinked = reqsk_queue_unlink(req);
+
+	if (unlinked) {
 		reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
 		reqsk_put(req);
 	}
+	return unlinked;
 }
 EXPORT_SYMBOL(inet_csk_reqsk_queue_drop);
net/ipv4/ipconfig.c (+8 -6)
···
  */
 static void __init ic_close_devs(void)
 {
-	struct net_device *selected_dev = ic_dev->dev;
+	struct net_device *selected_dev = ic_dev ? ic_dev->dev : NULL;
 	struct ic_device *d, *next;
 	struct net_device *dev;
 
···
 	next = ic_first_dev;
 	while ((d = next)) {
 		bool bring_down = (d != ic_dev);
-		struct net_device *lower_dev;
+		struct net_device *lower;
 		struct list_head *iter;
 
 		next = d->next;
 		dev = d->dev;
 
-		netdev_for_each_lower_dev(selected_dev, lower_dev, iter) {
-			if (dev == lower_dev) {
-				bring_down = false;
-				break;
+		if (selected_dev) {
+			netdev_for_each_lower_dev(selected_dev, lower, iter) {
+				if (dev == lower) {
+					bring_down = false;
+					break;
+				}
 			}
 		}
 		if (bring_down) {
net/ipv4/netfilter/arp_tables.c (+8 -8)
···
 
 	local_bh_disable();
 	addend = xt_write_recseq_begin();
-	private = rcu_access_pointer(table->private);
+	private = READ_ONCE(table->private); /* Address dependency. */
 	cpu     = smp_processor_id();
 	table_base = private->entries;
 	jumpstack  = (struct arpt_entry **)private->jumpstack[cpu];
···
 {
 	unsigned int countersize;
 	struct xt_counters *counters;
-	const struct xt_table_info *private = xt_table_get_private_protected(table);
+	const struct xt_table_info *private = table->private;
 
 	/* We need atomic snapshot of counters: rest doesn't change
 	 * (other than comefrom, which userspace doesn't care
···
 	unsigned int off, num;
 	const struct arpt_entry *e;
 	struct xt_counters *counters;
-	struct xt_table_info *private = xt_table_get_private_protected(table);
+	struct xt_table_info *private = table->private;
 	int ret = 0;
 	void *loc_cpu_entry;
 
···
 	t = xt_request_find_table_lock(net, NFPROTO_ARP, name);
 	if (!IS_ERR(t)) {
 		struct arpt_getinfo info;
-		const struct xt_table_info *private = xt_table_get_private_protected(t);
+		const struct xt_table_info *private = t->private;
 #ifdef CONFIG_COMPAT
 		struct xt_table_info tmp;
 
···
 
 	t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
 	if (!IS_ERR(t)) {
-		const struct xt_table_info *private = xt_table_get_private_protected(t);
+		const struct xt_table_info *private = t->private;
 
 		if (get.size == private->size)
 			ret = copy_entries_to_user(private->size,
···
 	}
 
 	local_bh_disable();
-	private = xt_table_get_private_protected(t);
+	private = t->private;
 	if (private->number != tmp.num_counters) {
 		ret = -EINVAL;
 		goto unlock_up_free;
···
 			    void __user *userptr)
 {
 	struct xt_counters *counters;
-	const struct xt_table_info *private = xt_table_get_private_protected(table);
+	const struct xt_table_info *private = table->private;
 	void __user *pos;
 	unsigned int size;
 	int ret = 0;
···
 	xt_compat_lock(NFPROTO_ARP);
 	t = xt_find_table_lock(net, NFPROTO_ARP, get.name);
 	if (!IS_ERR(t)) {
-		const struct xt_table_info *private = xt_table_get_private_protected(t);
+		const struct xt_table_info *private = t->private;
 		struct xt_table_info info;
 
 		ret = compat_table_info(private, &info);
net/ipv4/netfilter/ip_tables.c (+8 -8)
···
 	WARN_ON(!(table->valid_hooks & (1 << hook)));
 	local_bh_disable();
 	addend = xt_write_recseq_begin();
-	private = rcu_access_pointer(table->private);
+	private = READ_ONCE(table->private); /* Address dependency. */
 	cpu        = smp_processor_id();
 	table_base = private->entries;
 	jumpstack  = (struct ipt_entry **)private->jumpstack[cpu];
···
 {
 	unsigned int countersize;
 	struct xt_counters *counters;
-	const struct xt_table_info *private = xt_table_get_private_protected(table);
+	const struct xt_table_info *private = table->private;
 
 	/* We need atomic snapshot of counters: rest doesn't change
 	   (other than comefrom, which userspace doesn't care
···
 	unsigned int off, num;
 	const struct ipt_entry *e;
 	struct xt_counters *counters;
-	const struct xt_table_info *private = xt_table_get_private_protected(table);
+	const struct xt_table_info *private = table->private;
 	int ret = 0;
 	const void *loc_cpu_entry;
 
···
 	t = xt_request_find_table_lock(net, AF_INET, name);
 	if (!IS_ERR(t)) {
 		struct ipt_getinfo info;
-		const struct xt_table_info *private = xt_table_get_private_protected(t);
+		const struct xt_table_info *private = t->private;
 #ifdef CONFIG_COMPAT
 		struct xt_table_info tmp;
 
···
 
 	t = xt_find_table_lock(net, AF_INET, get.name);
 	if (!IS_ERR(t)) {
-		const struct xt_table_info *private = xt_table_get_private_protected(t);
+		const struct xt_table_info *private = t->private;
 		if (get.size == private->size)
 			ret = copy_entries_to_user(private->size,
 						   t, uptr->entrytable);
···
 	}
 
 	local_bh_disable();
-	private = xt_table_get_private_protected(t);
+	private = t->private;
 	if (private->number != tmp.num_counters) {
 		ret = -EINVAL;
 		goto unlock_up_free;
···
 			    void __user *userptr)
 {
 	struct xt_counters *counters;
-	const struct xt_table_info *private = xt_table_get_private_protected(table);
+	const struct xt_table_info *private = table->private;
 	void __user *pos;
 	unsigned int size;
 	int ret = 0;
···
 	xt_compat_lock(AF_INET);
 	t = xt_find_table_lock(net, AF_INET, get.name);
 	if (!IS_ERR(t)) {
-		const struct xt_table_info *private = xt_table_get_private_protected(t);
+		const struct xt_table_info *private = t->private;
 		struct xt_table_info info;
 		ret = compat_table_info(private, &info);
 		if (!ret && get.size == info.size)
net/ipv4/route.c (+8 -37)
···
 	return rth;
 }
 
-static struct dst_entry *ipv4_blackhole_dst_check(struct dst_entry *dst, u32 cookie)
-{
-	return NULL;
-}
-
-static unsigned int ipv4_blackhole_mtu(const struct dst_entry *dst)
-{
-	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
-
-	return mtu ? : dst->dev->mtu;
-}
-
-static void ipv4_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
-					  struct sk_buff *skb, u32 mtu,
-					  bool confirm_neigh)
-{
-}
-
-static void ipv4_rt_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
-				       struct sk_buff *skb)
-{
-}
-
-static u32 *ipv4_rt_blackhole_cow_metrics(struct dst_entry *dst,
-					  unsigned long old)
-{
-	return NULL;
-}
-
 static struct dst_ops ipv4_dst_blackhole_ops = {
-	.family			= AF_INET,
-	.check			= ipv4_blackhole_dst_check,
-	.mtu			= ipv4_blackhole_mtu,
-	.default_advmss		= ipv4_default_advmss,
-	.update_pmtu		= ipv4_rt_blackhole_update_pmtu,
-	.redirect		= ipv4_rt_blackhole_redirect,
-	.cow_metrics		= ipv4_rt_blackhole_cow_metrics,
-	.neigh_lookup		= ipv4_neigh_lookup,
+	.family		= AF_INET,
+	.default_advmss	= ipv4_default_advmss,
+	.neigh_lookup	= ipv4_neigh_lookup,
+	.check		= dst_blackhole_check,
+	.cow_metrics	= dst_blackhole_cow_metrics,
+	.update_pmtu	= dst_blackhole_update_pmtu,
+	.redirect	= dst_blackhole_redirect,
+	.mtu		= dst_blackhole_mtu,
 };
 
 struct dst_entry *ipv4_blackhole_route(struct net *net, struct dst_entry *dst_orig)
net/ipv4/tcp_minisocks.c (+5 -2)
···
 			tcp_reset(sk, skb);
 	}
 	if (!fastopen) {
-		inet_csk_reqsk_queue_drop(sk, req);
-		__NET_INC_STATS(sock_net(sk), LINUX_MIB_EMBRYONICRSTS);
+		bool unlinked = inet_csk_reqsk_queue_drop(sk, req);
+
+		if (unlinked)
+			__NET_INC_STATS(sock_net(sk), LINUX_MIB_EMBRYONICRSTS);
+		*req_stolen = !unlinked;
 	}
 	return NULL;
 }
net/ipv6/ip6_fib.c (+1 -1)
···
 	const struct net_device *dev;
 
 	if (rt->nh)
-		fib6_nh = nexthop_fib6_nh(rt->nh);
+		fib6_nh = nexthop_fib6_nh_bh(rt->nh);
 
 	seq_printf(seq, "%pi6 %02x ", &rt->fib6_dst.addr, rt->fib6_dst.plen);
net/ipv6/ip6_input.c (-10)
···
 	if (ipv6_addr_is_multicast(&hdr->saddr))
 		goto err;
 
-	/* While RFC4291 is not explicit about v4mapped addresses
-	 * in IPv6 headers, it seems clear linux dual-stack
-	 * model can not deal properly with these.
-	 * Security models could be fooled by ::ffff:127.0.0.1 for example.
-	 *
-	 * https://tools.ietf.org/html/draft-itojun-v6ops-v4mapped-harmful-02
-	 */
-	if (ipv6_addr_v4mapped(&hdr->saddr))
-		goto err;
-
 	skb->transport_header = skb->network_header + sizeof(*hdr);
 	IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr);
net/ipv6/netfilter/ip6_tables.c (+8 -8)
···
 
 	local_bh_disable();
 	addend = xt_write_recseq_begin();
-	private = rcu_access_pointer(table->private);
+	private = READ_ONCE(table->private); /* Address dependency. */
 	cpu        = smp_processor_id();
 	table_base = private->entries;
 	jumpstack  = (struct ip6t_entry **)private->jumpstack[cpu];
···
 {
 	unsigned int countersize;
 	struct xt_counters *counters;
-	const struct xt_table_info *private = xt_table_get_private_protected(table);
+	const struct xt_table_info *private = table->private;
 
 	/* We need atomic snapshot of counters: rest doesn't change
 	   (other than comefrom, which userspace doesn't care
···
 	unsigned int off, num;
 	const struct ip6t_entry *e;
 	struct xt_counters *counters;
-	const struct xt_table_info *private = xt_table_get_private_protected(table);
+	const struct xt_table_info *private = table->private;
 	int ret = 0;
 	const void *loc_cpu_entry;
 
···
 	t = xt_request_find_table_lock(net, AF_INET6, name);
 	if (!IS_ERR(t)) {
 		struct ip6t_getinfo info;
-		const struct xt_table_info *private = xt_table_get_private_protected(t);
+		const struct xt_table_info *private = t->private;
 #ifdef CONFIG_COMPAT
 		struct xt_table_info tmp;
 
···
 
 	t = xt_find_table_lock(net, AF_INET6, get.name);
 	if (!IS_ERR(t)) {
-		struct xt_table_info *private = xt_table_get_private_protected(t);
+		struct xt_table_info *private = t->private;
 		if (get.size == private->size)
 			ret = copy_entries_to_user(private->size,
 						   t, uptr->entrytable);
···
 	}
 
 	local_bh_disable();
-	private = xt_table_get_private_protected(t);
+	private = t->private;
 	if (private->number != tmp.num_counters) {
 		ret = -EINVAL;
 		goto unlock_up_free;
···
 			    void __user *userptr)
 {
 	struct xt_counters *counters;
-	const struct xt_table_info *private = xt_table_get_private_protected(table);
+	const struct xt_table_info *private = table->private;
 	void __user *pos;
 	unsigned int size;
 	int ret = 0;
···
 	xt_compat_lock(AF_INET6);
 	t = xt_find_table_lock(net, AF_INET6, get.name);
 	if (!IS_ERR(t)) {
-		const struct xt_table_info *private = xt_table_get_private_protected(t);
+		const struct xt_table_info *private = t->private;
 		struct xt_table_info info;
 		ret = compat_table_info(private, &info);
 		if (!ret && get.size == info.size)
net/ipv6/route.c (+9 -27)
···
 	.confirm_neigh		= ip6_confirm_neigh,
 };
 
-static unsigned int ip6_blackhole_mtu(const struct dst_entry *dst)
-{
-	unsigned int mtu = dst_metric_raw(dst, RTAX_MTU);
-
-	return mtu ? : dst->dev->mtu;
-}
-
-static void ip6_rt_blackhole_update_pmtu(struct dst_entry *dst, struct sock *sk,
-					 struct sk_buff *skb, u32 mtu,
-					 bool confirm_neigh)
-{
-}
-
-static void ip6_rt_blackhole_redirect(struct dst_entry *dst, struct sock *sk,
-				      struct sk_buff *skb)
-{
-}
-
 static struct dst_ops ip6_dst_blackhole_ops = {
-	.family			= AF_INET6,
-	.destroy		= ip6_dst_destroy,
-	.check			= ip6_dst_check,
-	.mtu			= ip6_blackhole_mtu,
-	.default_advmss		= ip6_default_advmss,
-	.update_pmtu		= ip6_rt_blackhole_update_pmtu,
-	.redirect		= ip6_rt_blackhole_redirect,
-	.cow_metrics		= dst_cow_metrics_generic,
-	.neigh_lookup		= ip6_dst_neigh_lookup,
+	.family		= AF_INET6,
+	.default_advmss	= ip6_default_advmss,
+	.neigh_lookup	= ip6_dst_neigh_lookup,
+	.check		= ip6_dst_check,
+	.destroy	= ip6_dst_destroy,
+	.cow_metrics	= dst_cow_metrics_generic,
+	.update_pmtu	= dst_blackhole_update_pmtu,
+	.redirect	= dst_blackhole_redirect,
+	.mtu		= dst_blackhole_mtu,
 };
 
 static const u32 ip6_template_metrics[RTAX_MAX] = {
+5
net/ipv6/tcp_ipv6.c
··· 1175 1175 if (!ipv6_unicast_destination(skb)) 1176 1176 goto drop; 1177 1177 1178 + if (ipv6_addr_v4mapped(&ipv6_hdr(skb)->saddr)) { 1179 + __IP6_INC_STATS(sock_net(sk), NULL, IPSTATS_MIB_INHDRERRORS); 1180 + return 0; 1181 + } 1182 + 1178 1183 return tcp_conn_request(&tcp6_request_sock_ops, 1179 1184 &tcp_request_sock_ipv6_ops, sk, skb); 1180 1185
+3 -2
net/mac80211/aead_api.c
··· 23 23 struct aead_request *aead_req; 24 24 int reqsize = sizeof(*aead_req) + crypto_aead_reqsize(tfm); 25 25 u8 *__aad; 26 + int ret; 26 27 27 28 aead_req = kzalloc(reqsize + aad_len, GFP_ATOMIC); 28 29 if (!aead_req) ··· 41 40 aead_request_set_crypt(aead_req, sg, sg, data_len, b_0); 42 41 aead_request_set_ad(aead_req, sg[0].length); 43 42 44 - crypto_aead_encrypt(aead_req); 43 + ret = crypto_aead_encrypt(aead_req); 45 44 kfree_sensitive(aead_req); 46 45 47 - return 0; 46 + return ret; 48 47 } 49 48 50 49 int aead_decrypt(struct crypto_aead *tfm, u8 *b_0, u8 *aad, size_t aad_len,
+3 -2
net/mac80211/aes_gmac.c
··· 22 22 struct aead_request *aead_req; 23 23 int reqsize = sizeof(*aead_req) + crypto_aead_reqsize(tfm); 24 24 const __le16 *fc; 25 + int ret; 25 26 26 27 if (data_len < GMAC_MIC_LEN) 27 28 return -EINVAL; ··· 60 59 aead_request_set_crypt(aead_req, sg, sg, 0, iv); 61 60 aead_request_set_ad(aead_req, GMAC_AAD_LEN + data_len); 62 61 63 - crypto_aead_encrypt(aead_req); 62 + ret = crypto_aead_encrypt(aead_req); 64 63 kfree_sensitive(aead_req); 65 64 66 - return 0; 65 + return ret; 67 66 } 68 67 69 68 struct crypto_aead *ieee80211_aes_gmac_key_setup(const u8 key[],
+2 -2
net/mac80211/cfg.c
··· 2950 2950 continue; 2951 2951 2952 2952 for (j = 0; j < IEEE80211_HT_MCS_MASK_LEN; j++) { 2953 - if (~sdata->rc_rateidx_mcs_mask[i][j]) { 2953 + if (sdata->rc_rateidx_mcs_mask[i][j] != 0xff) { 2954 2954 sdata->rc_has_mcs_mask[i] = true; 2955 2955 break; 2956 2956 } 2957 2957 } 2958 2958 2959 2959 for (j = 0; j < NL80211_VHT_NSS_MAX; j++) { 2960 - if (~sdata->rc_rateidx_vht_mcs_mask[i][j]) { 2960 + if (sdata->rc_rateidx_vht_mcs_mask[i][j] != 0xffff) { 2961 2961 sdata->rc_has_vht_mcs_mask[i] = true; 2962 2962 break; 2963 2963 }
+2
net/mac80211/ibss.c
··· 1874 1874 1875 1875 /* remove beacon */ 1876 1876 kfree(sdata->u.ibss.ie); 1877 + sdata->u.ibss.ie = NULL; 1878 + sdata->u.ibss.ie_len = 0; 1877 1879 1878 1880 /* on the next join, re-program HT parameters */ 1879 1881 memset(&ifibss->ht_capa, 0, sizeof(ifibss->ht_capa));
+12 -1
net/mac80211/main.c
··· 973 973 continue; 974 974 975 975 if (!dflt_chandef.chan) { 976 + /* 977 + * Assign the first enabled channel to dflt_chandef 978 + * from the list of channels 979 + */ 980 + for (i = 0; i < sband->n_channels; i++) 981 + if (!(sband->channels[i].flags & 982 + IEEE80211_CHAN_DISABLED)) 983 + break; 984 + /* if none found then use the first anyway */ 985 + if (i == sband->n_channels) 986 + i = 0; 976 987 cfg80211_chandef_create(&dflt_chandef, 977 - &sband->channels[0], 988 + &sband->channels[i], 978 989 NL80211_CHAN_NO_HT); 979 990 /* init channel we're on */ 980 991 if (!local->use_chanctx && !local->_oper_chandef.chan) {
+1 -1
net/mac80211/mlme.c
··· 5071 5071 he_oper_ie = cfg80211_find_ext_ie(WLAN_EID_EXT_HE_OPERATION, 5072 5072 ies->data, ies->len); 5073 5073 if (he_oper_ie && 5074 - he_oper_ie[1] == ieee80211_he_oper_size(&he_oper_ie[3])) 5074 + he_oper_ie[1] >= ieee80211_he_oper_size(&he_oper_ie[3])) 5075 5075 he_oper = (void *)(he_oper_ie + 3); 5076 5076 else 5077 5077 he_oper = NULL;
-2
net/mac80211/rc80211_minstrel_ht.c
··· 805 805 static u16 806 806 minstrel_ht_next_inc_rate(struct minstrel_ht_sta *mi, u32 fast_rate_dur) 807 807 { 808 - struct minstrel_mcs_group_data *mg; 809 808 u8 type = MINSTREL_SAMPLE_TYPE_INC; 810 809 int i, index = 0; 811 810 u8 group; ··· 812 813 group = mi->sample[type].sample_group; 813 814 for (i = 0; i < ARRAY_SIZE(minstrel_mcs_groups); i++) { 814 815 group = (group + 1) % ARRAY_SIZE(minstrel_mcs_groups); 815 - mg = &mi->groups[group]; 816 816 817 817 index = minstrel_ht_group_min_rate_offset(mi, group, 818 818 fast_rate_dur);
+1 -1
net/mac80211/util.c
··· 968 968 break; 969 969 case WLAN_EID_EXT_HE_OPERATION: 970 970 if (len >= sizeof(*elems->he_operation) && 971 - len == ieee80211_he_oper_size(data) - 1) { 971 + len >= ieee80211_he_oper_size(data) - 1) { 972 972 if (crc) 973 973 *crc = crc32_be(*crc, (void *)elem, 974 974 elem->datalen + 2);
+14 -10
net/mptcp/options.c
··· 567 567 } 568 568 569 569 static u64 add_addr_generate_hmac(u64 key1, u64 key2, u8 addr_id, 570 - struct in_addr *addr) 570 + struct in_addr *addr, u16 port) 571 571 { 572 572 u8 hmac[SHA256_DIGEST_SIZE]; 573 573 u8 msg[7]; 574 574 575 575 msg[0] = addr_id; 576 576 memcpy(&msg[1], &addr->s_addr, 4); 577 - msg[5] = 0; 578 - msg[6] = 0; 577 + msg[5] = port >> 8; 578 + msg[6] = port & 0xFF; 579 579 580 580 mptcp_crypto_hmac_sha(key1, key2, msg, 7, hmac); 581 581 ··· 584 584 585 585 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 586 586 static u64 add_addr6_generate_hmac(u64 key1, u64 key2, u8 addr_id, 587 - struct in6_addr *addr) 587 + struct in6_addr *addr, u16 port) 588 588 { 589 589 u8 hmac[SHA256_DIGEST_SIZE]; 590 590 u8 msg[19]; 591 591 592 592 msg[0] = addr_id; 593 593 memcpy(&msg[1], &addr->s6_addr, 16); 594 - msg[17] = 0; 595 - msg[18] = 0; 594 + msg[17] = port >> 8; 595 + msg[18] = port & 0xFF; 596 596 597 597 mptcp_crypto_hmac_sha(key1, key2, msg, 19, hmac); 598 598 ··· 646 646 opts->ahmac = add_addr_generate_hmac(msk->local_key, 647 647 msk->remote_key, 648 648 opts->addr_id, 649 - &opts->addr); 649 + &opts->addr, 650 + opts->port); 650 651 } 651 652 } 652 653 #if IS_ENABLED(CONFIG_MPTCP_IPV6) ··· 658 657 opts->ahmac = add_addr6_generate_hmac(msk->local_key, 659 658 msk->remote_key, 660 659 opts->addr_id, 661 - &opts->addr6); 660 + &opts->addr6, 661 + opts->port); 662 662 } 663 663 } 664 664 #endif ··· 964 962 if (mp_opt->family == MPTCP_ADDR_IPVERSION_4) 965 963 hmac = add_addr_generate_hmac(msk->remote_key, 966 964 msk->local_key, 967 - mp_opt->addr_id, &mp_opt->addr); 965 + mp_opt->addr_id, &mp_opt->addr, 966 + mp_opt->port); 968 967 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 969 968 else 970 969 hmac = add_addr6_generate_hmac(msk->remote_key, 971 970 msk->local_key, 972 - mp_opt->addr_id, &mp_opt->addr6); 971 + mp_opt->addr_id, &mp_opt->addr6, 972 + mp_opt->port); 973 973 #endif 974 974 975 975 pr_debug("msk=%p, ahmac=%llu, mp_opt->ahmac=%llu\n",
+2 -2
net/mptcp/protocol.c
··· 2968 2968 for (;;) { 2969 2969 flags = 0; 2970 2970 if (test_and_clear_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->flags)) 2971 - flags |= MPTCP_PUSH_PENDING; 2971 + flags |= BIT(MPTCP_PUSH_PENDING); 2972 2972 if (!flags) 2973 2973 break; 2974 2974 ··· 2981 2981 */ 2982 2982 2983 2983 spin_unlock_bh(&sk->sk_lock.slock); 2984 - if (flags & MPTCP_PUSH_PENDING) 2984 + if (flags & BIT(MPTCP_PUSH_PENDING)) 2985 2985 __mptcp_push_pending(sk, 0); 2986 2986 2987 2987 cond_resched();
+5
net/mptcp/subflow.c
··· 477 477 if (!ipv6_unicast_destination(skb)) 478 478 goto drop; 479 479 480 + if (ipv6_addr_v4mapped(&ipv6_hdr(skb)->saddr)) { 481 + __IP6_INC_STATS(sock_net(sk), NULL, IPSTATS_MIB_INHDRERRORS); 482 + return 0; 483 + } 484 + 480 485 return tcp_conn_request(&mptcp_subflow_request_sock_ops, 481 486 &subflow_request_sock_ipv6_ops, sk, skb); 482 487
+1
net/netfilter/nf_conntrack_netlink.c
··· 2962 2962 memset(&m, 0xFF, sizeof(m)); 2963 2963 memcpy(&m.src.u3, &mask->src.u3, sizeof(m.src.u3)); 2964 2964 m.src.u.all = mask->src.u.all; 2965 + m.src.l3num = tuple->src.l3num; 2965 2966 m.dst.protonum = tuple->dst.protonum; 2966 2967 2967 2968 nest_parms = nla_nest_start(skb, CTA_EXPECT_MASK);
-3
net/netfilter/nf_conntrack_proto_gre.c
··· 218 218 enum ip_conntrack_info ctinfo, 219 219 const struct nf_hook_state *state) 220 220 { 221 - if (state->pf != NFPROTO_IPV4) 222 - return -NF_ACCEPT; 223 - 224 221 if (!nf_ct_is_confirmed(ct)) { 225 222 unsigned int *timeouts = nf_ct_timeout_lookup(ct); 226 223
+1 -1
net/netfilter/nf_flow_table_core.c
··· 506 506 { 507 507 int err; 508 508 509 - INIT_DEFERRABLE_WORK(&flowtable->gc_work, nf_flow_offload_work_gc); 509 + INIT_DELAYED_WORK(&flowtable->gc_work, nf_flow_offload_work_gc); 510 510 flow_block_init(&flowtable->flow_block); 511 511 init_rwsem(&flowtable->flow_block_lock); 512 512
+21 -1
net/netfilter/nf_tables_api.c
··· 6783 6783 6784 6784 list_for_each_entry(hook, hook_list, list) { 6785 6785 list_for_each_entry(ft, &table->flowtables, list) { 6786 + if (!nft_is_active_next(net, ft)) 6787 + continue; 6788 + 6786 6789 list_for_each_entry(hook2, &ft->hook_list, list) { 6787 6790 if (hook->ops.dev == hook2->ops.dev && 6788 6791 hook->ops.pf == hook2->ops.pf) { ··· 6845 6842 struct nft_hook *hook, *next; 6846 6843 struct nft_trans *trans; 6847 6844 bool unregister = false; 6845 + u32 flags; 6848 6846 int err; 6849 6847 6850 6848 err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK], ··· 6858 6854 list_del(&hook->list); 6859 6855 kfree(hook); 6860 6856 } 6857 + } 6858 + 6859 + if (nla[NFTA_FLOWTABLE_FLAGS]) { 6860 + flags = ntohl(nla_get_be32(nla[NFTA_FLOWTABLE_FLAGS])); 6861 + if (flags & ~NFT_FLOWTABLE_MASK) 6862 + return -EOPNOTSUPP; 6863 + if ((flowtable->data.flags & NFT_FLOWTABLE_HW_OFFLOAD) ^ 6864 + (flags & NFT_FLOWTABLE_HW_OFFLOAD)) 6865 + return -EOPNOTSUPP; 6866 + } else { 6867 + flags = flowtable->data.flags; 6861 6868 } 6862 6869 6863 6870 err = nft_register_flowtable_net_hooks(ctx->net, ctx->table, ··· 6884 6869 goto err_flowtable_update_hook; 6885 6870 } 6886 6871 6872 + nft_trans_flowtable_flags(trans) = flags; 6887 6873 nft_trans_flowtable(trans) = flowtable; 6888 6874 nft_trans_flowtable_update(trans) = true; 6889 6875 INIT_LIST_HEAD(&nft_trans_flowtable_hooks(trans)); ··· 6979 6963 if (nla[NFTA_FLOWTABLE_FLAGS]) { 6980 6964 flowtable->data.flags = 6981 6965 ntohl(nla_get_be32(nla[NFTA_FLOWTABLE_FLAGS])); 6982 - if (flowtable->data.flags & ~NFT_FLOWTABLE_MASK) 6966 + if (flowtable->data.flags & ~NFT_FLOWTABLE_MASK) { 6967 + err = -EOPNOTSUPP; 6983 6968 goto err3; 6969 + } 6984 6970 } 6985 6971 6986 6972 write_pnet(&flowtable->data.net, net); ··· 8194 8176 break; 8195 8177 case NFT_MSG_NEWFLOWTABLE: 8196 8178 if (nft_trans_flowtable_update(trans)) { 8179 + nft_trans_flowtable(trans)->data.flags = 8180 + nft_trans_flowtable_flags(trans); 8197 8181 nf_tables_flowtable_notify(&trans->ctx, 8198 8182 nft_trans_flowtable(trans), 8199 8183 &nft_trans_flowtable_hooks(trans),
+34 -15
net/netfilter/x_tables.c
··· 1351 1351 } 1352 1352 EXPORT_SYMBOL(xt_counters_alloc); 1353 1353 1354 - struct xt_table_info 1355 - *xt_table_get_private_protected(const struct xt_table *table) 1356 - { 1357 - return rcu_dereference_protected(table->private, 1358 - mutex_is_locked(&xt[table->af].mutex)); 1359 - } 1360 - EXPORT_SYMBOL(xt_table_get_private_protected); 1361 - 1362 1354 struct xt_table_info * 1363 1355 xt_replace_table(struct xt_table *table, 1364 1356 unsigned int num_counters, ··· 1358 1366 int *error) 1359 1367 { 1360 1368 struct xt_table_info *private; 1369 + unsigned int cpu; 1361 1370 int ret; 1362 1371 1363 1372 ret = xt_jumpstack_alloc(newinfo); ··· 1368 1375 } 1369 1376 1370 1377 /* Do the substitution. */ 1371 - private = xt_table_get_private_protected(table); 1378 + local_bh_disable(); 1379 + private = table->private; 1372 1380 1373 1381 /* Check inside lock: is the old number correct? */ 1374 1382 if (num_counters != private->number) { 1375 1383 pr_debug("num_counters != table->private->number (%u/%u)\n", 1376 1384 num_counters, private->number); 1385 + local_bh_enable(); 1377 1386 *error = -EAGAIN; 1378 1387 return NULL; 1379 1388 } 1380 1389 1381 1390 newinfo->initial_entries = private->initial_entries; 1391 + /* 1392 + * Ensure contents of newinfo are visible before assigning to 1393 + * private. 1394 + */ 1395 + smp_wmb(); 1396 + table->private = newinfo; 1382 1397 1383 - rcu_assign_pointer(table->private, newinfo); 1384 - synchronize_rcu(); 1398 + /* make sure all cpus see new ->private value */ 1399 + smp_mb(); 1400 + 1401 + /* 1402 + * Even though table entries have now been swapped, other CPU's 1403 + * may still be using the old entries... 1404 + */ 1405 + local_bh_enable(); 1406 + 1407 + /* ... so wait for even xt_recseq on all cpus */ 1408 + for_each_possible_cpu(cpu) { 1409 + seqcount_t *s = &per_cpu(xt_recseq, cpu); 1410 + u32 seq = raw_read_seqcount(s); 1411 + 1412 + if (seq & 1) { 1413 + do { 1414 + cond_resched(); 1415 + cpu_relax(); 1416 + } while (seq == raw_read_seqcount(s)); 1417 + } 1418 + } 1385 1419 1386 1420 audit_log_nfcfg(table->name, table->af, private->number, 1387 1421 !private->number ? AUDIT_XT_OP_REGISTER : ··· 1444 1424 } 1445 1425 1446 1426 /* Simplifies replace_table code. */ 1447 - rcu_assign_pointer(table->private, bootstrap); 1427 + table->private = bootstrap; 1448 1428 1449 1429 if (!xt_replace_table(table, 0, newinfo, &ret)) 1450 1430 goto unlock; 1451 1431 1452 - private = xt_table_get_private_protected(table); 1432 + private = table->private; 1453 1433 pr_debug("table->private->number = %u\n", private->number); 1454 1434 1455 1435 /* save number of initial entries */ ··· 1472 1452 struct xt_table_info *private; 1473 1453 1474 1454 mutex_lock(&xt[table->af].mutex); 1475 - private = xt_table_get_private_protected(table); 1476 - RCU_INIT_POINTER(table->private, NULL); 1455 + private = table->private; 1477 1456 list_del(&table->list); 1478 1457 mutex_unlock(&xt[table->af].mutex); 1479 1458 audit_log_nfcfg(table->name, table->af, private->number,
+5 -3
net/openvswitch/conntrack.c
··· 271 271 /* This is called to initialize CT key fields possibly coming in from the local 272 272 * stack. 273 273 */ 274 - void ovs_ct_fill_key(const struct sk_buff *skb, struct sw_flow_key *key) 274 + void ovs_ct_fill_key(const struct sk_buff *skb, 275 + struct sw_flow_key *key, 276 + bool post_ct) 275 277 { 276 - ovs_ct_update_key(skb, NULL, key, false, false); 278 + ovs_ct_update_key(skb, NULL, key, post_ct, false); 277 279 } 278 280 279 281 int ovs_ct_put_key(const struct sw_flow_key *swkey, ··· 1334 1332 if (skb_nfct(skb)) { 1335 1333 nf_conntrack_put(skb_nfct(skb)); 1336 1334 nf_ct_set(skb, NULL, IP_CT_UNTRACKED); 1337 - ovs_ct_fill_key(skb, key); 1335 + ovs_ct_fill_key(skb, key, false); 1338 1336 } 1339 1337 1340 1338 return 0;
+4 -2
net/openvswitch/conntrack.h
··· 25 25 const struct ovs_conntrack_info *); 26 26 int ovs_ct_clear(struct sk_buff *skb, struct sw_flow_key *key); 27 27 28 - void ovs_ct_fill_key(const struct sk_buff *skb, struct sw_flow_key *key); 28 + void ovs_ct_fill_key(const struct sk_buff *skb, struct sw_flow_key *key, 29 + bool post_ct); 29 30 int ovs_ct_put_key(const struct sw_flow_key *swkey, 30 31 const struct sw_flow_key *output, struct sk_buff *skb); 31 32 void ovs_ct_free_action(const struct nlattr *a); ··· 75 74 } 76 75 77 76 static inline void ovs_ct_fill_key(const struct sk_buff *skb, 78 - struct sw_flow_key *key) 77 + struct sw_flow_key *key, 78 + bool post_ct) 79 79 { 80 80 key->ct_state = 0; 81 81 key->ct_zone = 0;
+3 -1
net/openvswitch/flow.c
··· 857 857 #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT) 858 858 struct tc_skb_ext *tc_ext; 859 859 #endif 860 + bool post_ct = false; 860 861 int res, err; 861 862 862 863 /* Extract metadata from packet. */ ··· 896 895 tc_ext = skb_ext_find(skb, TC_SKB_EXT); 897 896 key->recirc_id = tc_ext ? tc_ext->chain : 0; 898 897 OVS_CB(skb)->mru = tc_ext ? tc_ext->mru : 0; 898 + post_ct = tc_ext ? tc_ext->post_ct : false; 899 899 } else { 900 900 key->recirc_id = 0; 901 901 } ··· 906 904 907 905 err = key_extract(skb, key); 908 906 if (!err) 909 - ovs_ct_fill_key(skb, key); /* Must be after key_extract(). */ 907 + ovs_ct_fill_key(skb, key, post_ct); /* Must be after key_extract(). */ 910 908 return err; 911 909 } 912 910
+5
net/qrtr/qrtr.c
··· 1058 1058 rc = copied; 1059 1059 1060 1060 if (addr) { 1061 + /* There is an anonymous 2-byte hole after sq_family, 1062 + * make sure to clear it. 1063 + */ 1064 + memset(addr, 0, sizeof(*addr)); 1065 + 1061 1066 addr->sq_family = AF_QIPCRTR; 1062 1067 addr->sq_node = cb->src_node; 1063 1068 addr->sq_port = cb->src_port;
+4 -2
net/sched/act_ct.c
··· 945 945 tcf_lastuse_update(&c->tcf_tm); 946 946 947 947 if (clear) { 948 + qdisc_skb_cb(skb)->post_ct = false; 948 949 ct = nf_ct_get(skb, &ctinfo); 949 950 if (ct) { 950 951 nf_conntrack_put(&ct->ct_general); 951 952 nf_ct_set(skb, NULL, IP_CT_UNTRACKED); 952 953 } 953 954 954 - goto out; 955 + goto out_clear; 955 956 } 956 957 957 958 family = tcf_ct_skb_nf_family(skb); ··· 1031 1030 skb_push_rcsum(skb, nh_ofs); 1032 1031 1033 1032 out: 1034 - tcf_action_update_bstats(&c->common, skb); 1035 1033 qdisc_skb_cb(skb)->post_ct = true; 1034 + out_clear: 1035 + tcf_action_update_bstats(&c->common, skb); 1036 1036 if (defrag) 1037 1037 qdisc_skb_cb(skb)->pkt_len = skb->len; 1038 1038 return retval;
+1
net/sched/cls_api.c
··· 1629 1629 return TC_ACT_SHOT; 1630 1630 ext->chain = last_executed_chain; 1631 1631 ext->mru = qdisc_skb_cb(skb)->mru; 1632 + ext->post_ct = qdisc_skb_cb(skb)->post_ct; 1632 1633 } 1633 1634 1634 1635 return ret;
+1 -1
net/sched/cls_flower.c
··· 1451 1451 &mask->ct_state, TCA_FLOWER_KEY_CT_STATE_MASK, 1452 1452 sizeof(key->ct_state)); 1453 1453 1454 - err = fl_validate_ct_state(mask->ct_state, 1454 + err = fl_validate_ct_state(key->ct_state & mask->ct_state, 1455 1455 tb[TCA_FLOWER_KEY_CT_STATE_MASK], 1456 1456 extack); 1457 1457 if (err)
+4 -3
net/sched/sch_choke.c
··· 345 345 struct sk_buff **old = NULL; 346 346 unsigned int mask; 347 347 u32 max_P; 348 + u8 *stab; 348 349 349 350 if (opt == NULL) 350 351 return -EINVAL; ··· 362 361 max_P = tb[TCA_CHOKE_MAX_P] ? nla_get_u32(tb[TCA_CHOKE_MAX_P]) : 0; 363 362 364 363 ctl = nla_data(tb[TCA_CHOKE_PARMS]); 365 - 366 - if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) 364 + stab = nla_data(tb[TCA_CHOKE_STAB]); 365 + if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log, stab)) 367 366 return -EINVAL; 368 367 369 368 if (ctl->limit > CHOKE_MAX_QUEUE) ··· 413 412 414 413 red_set_parms(&q->parms, ctl->qth_min, ctl->qth_max, ctl->Wlog, 415 414 ctl->Plog, ctl->Scell_log, 416 - nla_data(tb[TCA_CHOKE_STAB]), 415 + stab, 417 416 max_P); 418 417 red_set_vars(&q->vars); 419 418
+1 -1
net/sched/sch_gred.c
··· 480 480 struct gred_sched *table = qdisc_priv(sch); 481 481 struct gred_sched_data *q = table->tab[dp]; 482 482 483 - if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) { 483 + if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log, stab)) { 484 484 NL_SET_ERR_MSG_MOD(extack, "invalid RED parameters"); 485 485 return -EINVAL; 486 486 }
+13 -6
net/sched/sch_htb.c
··· 1020 1020 struct nlattr *tb[TCA_HTB_MAX + 1]; 1021 1021 struct tc_htb_glob *gopt; 1022 1022 unsigned int ntx; 1023 + bool offload; 1023 1024 int err; 1024 1025 1025 1026 qdisc_watchdog_init(&q->watchdog, sch); ··· 1045 1044 if (gopt->version != HTB_VER >> 16) 1046 1045 return -EINVAL; 1047 1046 1048 - q->offload = nla_get_flag(tb[TCA_HTB_OFFLOAD]); 1047 + offload = nla_get_flag(tb[TCA_HTB_OFFLOAD]); 1049 1048 1050 - if (q->offload) { 1049 + if (offload) { 1051 1050 if (sch->parent != TC_H_ROOT) 1052 1051 return -EOPNOTSUPP; 1053 1052 ··· 1077 1076 q->rate2quantum = 1; 1078 1077 q->defcls = gopt->defcls; 1079 1078 1080 - if (!q->offload) 1079 + if (!offload) 1081 1080 return 0; 1082 1081 1083 1082 for (ntx = 0; ntx < q->num_direct_qdiscs; ntx++) { ··· 1108 1107 if (err) 1109 1108 goto err_free_qdiscs; 1110 1109 1110 + /* Defer this assignment, so that htb_destroy skips offload-related 1111 + * parts (especially calling ndo_setup_tc) on errors. 1112 + */ 1113 + q->offload = true; 1114 + 1111 1115 return 0; 1112 1116 1113 1117 err_free_qdiscs: 1114 - /* TC_HTB_CREATE call failed, avoid any further calls to the driver. */ 1115 - q->offload = false; 1116 - 1117 1118 for (ntx = 0; ntx < q->num_direct_qdiscs && q->direct_qdiscs[ntx]; 1118 1119 ntx++) 1119 1120 qdisc_put(q->direct_qdiscs[ntx]); ··· 1343 1340 { 1344 1341 struct net_device *dev = qdisc_dev(sch); 1345 1342 struct tc_htb_qopt_offload offload_opt; 1343 + struct htb_sched *q = qdisc_priv(sch); 1346 1344 int err; 1345 + 1346 + if (!q->offload) 1347 + return sch->dev_queue; 1347 1348 1348 1349 offload_opt = (struct tc_htb_qopt_offload) { 1349 1350 .command = TC_HTB_LEAF_QUERY_QUEUE,
+5 -2
net/sched/sch_red.c
··· 242 242 unsigned char flags; 243 243 int err; 244 244 u32 max_P; 245 + u8 *stab; 245 246 246 247 if (tb[TCA_RED_PARMS] == NULL || 247 248 tb[TCA_RED_STAB] == NULL) ··· 251 250 max_P = tb[TCA_RED_MAX_P] ? nla_get_u32(tb[TCA_RED_MAX_P]) : 0; 252 251 253 252 ctl = nla_data(tb[TCA_RED_PARMS]); 254 - if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log)) 253 + stab = nla_data(tb[TCA_RED_STAB]); 254 + if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, 255 + ctl->Scell_log, stab)) 255 256 return -EINVAL; 256 257 257 258 err = red_get_flags(ctl->flags, TC_RED_HISTORIC_FLAGS, ··· 291 288 red_set_parms(&q->parms, 292 289 ctl->qth_min, ctl->qth_max, ctl->Wlog, 293 290 ctl->Plog, ctl->Scell_log, 294 - nla_data(tb[TCA_RED_STAB]), 291 + stab, 295 292 max_P); 296 293 red_set_vars(&q->vars); 297 294
+1 -1
net/sched/sch_sfq.c
··· 647 647 } 648 648 649 649 if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max, 650 - ctl_v1->Wlog, ctl_v1->Scell_log)) 650 + ctl_v1->Wlog, ctl_v1->Scell_log, NULL)) 651 651 return -EINVAL; 652 652 if (ctl_v1 && ctl_v1->qth_min) { 653 653 p = kmalloc(sizeof(*p), GFP_KERNEL);
-7
net/sctp/output.c
··· 584 584 goto out; 585 585 } 586 586 587 - rcu_read_lock(); 588 - if (__sk_dst_get(sk) != tp->dst) { 589 - dst_hold(tp->dst); 590 - sk_setup_caps(sk, tp->dst); 591 - } 592 - rcu_read_unlock(); 593 - 594 587 /* pack up chunks */ 595 588 pkt_count = sctp_packet_pack(packet, head, gso, gfp); 596 589 if (!pkt_count) {
+7
net/sctp/outqueue.c
··· 1135 1135 1136 1136 static void sctp_outq_flush_transports(struct sctp_flush_ctx *ctx) 1137 1137 { 1138 + struct sock *sk = ctx->asoc->base.sk; 1138 1139 struct list_head *ltransport; 1139 1140 struct sctp_packet *packet; 1140 1141 struct sctp_transport *t; ··· 1145 1144 t = list_entry(ltransport, struct sctp_transport, send_ready); 1146 1145 packet = &t->packet; 1147 1146 if (!sctp_packet_empty(packet)) { 1147 + rcu_read_lock(); 1148 + if (t->dst && __sk_dst_get(sk) != t->dst) { 1149 + dst_hold(t->dst); 1150 + sk_setup_caps(sk, t->dst); 1151 + } 1152 + rcu_read_unlock(); 1148 1153 error = sctp_packet_transmit(packet, ctx->gfp); 1149 1154 if (error < 0) 1150 1155 ctx->q->asoc->base.sk->sk_err = -error;
+8 -3
net/tipc/node.c
··· 2895 2895 2896 2896 #ifdef CONFIG_TIPC_CRYPTO 2897 2897 static int tipc_nl_retrieve_key(struct nlattr **attrs, 2898 - struct tipc_aead_key **key) 2898 + struct tipc_aead_key **pkey) 2899 2899 { 2900 2900 struct nlattr *attr = attrs[TIPC_NLA_NODE_KEY]; 2901 + struct tipc_aead_key *key; 2901 2902 2902 2903 if (!attr) 2903 2904 return -ENODATA; 2904 2905 2905 - *key = (struct tipc_aead_key *)nla_data(attr); 2906 - if (nla_len(attr) < tipc_aead_key_size(*key)) 2906 + if (nla_len(attr) < sizeof(*key)) 2907 + return -EINVAL; 2908 + key = (struct tipc_aead_key *)nla_data(attr); 2909 + if (key->keylen > TIPC_AEAD_KEYLEN_MAX || 2910 + nla_len(attr) < tipc_aead_key_size(key)) 2907 2911 return -EINVAL; 2908 2912 2913 + *pkey = key; 2909 2914 return 0; 2910 2915 } 2911 2916
+1
net/vmw_vsock/af_vsock.c
··· 755 755 vsk->buffer_size = psk->buffer_size; 756 756 vsk->buffer_min_size = psk->buffer_min_size; 757 757 vsk->buffer_max_size = psk->buffer_max_size; 758 + security_sk_clone(parent, sk); 758 759 } else { 759 760 vsk->trusted = ns_capable_noaudit(&init_user_ns, CAP_NET_ADMIN); 760 761 vsk->owner = get_current_cred();
+8 -4
net/wireless/nl80211.c
··· 70 70 struct wireless_dev *result = NULL; 71 71 bool have_ifidx = attrs[NL80211_ATTR_IFINDEX]; 72 72 bool have_wdev_id = attrs[NL80211_ATTR_WDEV]; 73 - u64 wdev_id; 73 + u64 wdev_id = 0; 74 74 int wiphy_idx = -1; 75 75 int ifidx = -1; 76 76 ··· 14789 14789 #define NL80211_FLAG_NEED_WDEV_UP (NL80211_FLAG_NEED_WDEV |\ 14790 14790 NL80211_FLAG_CHECK_NETDEV_UP) 14791 14791 #define NL80211_FLAG_CLEAR_SKB 0x20 14792 + #define NL80211_FLAG_NO_WIPHY_MTX 0x40 14792 14793 14793 14794 static int nl80211_pre_doit(const struct genl_ops *ops, struct sk_buff *skb, 14794 14795 struct genl_info *info) ··· 14841 14840 info->user_ptr[0] = rdev; 14842 14841 } 14843 14842 14844 - if (rdev) { 14843 + if (rdev && !(ops->internal_flags & NL80211_FLAG_NO_WIPHY_MTX)) { 14845 14844 wiphy_lock(&rdev->wiphy); 14846 14845 /* we keep the mutex locked until post_doit */ 14847 14846 __release(&rdev->wiphy.mtx); ··· 14866 14865 } 14867 14866 } 14868 14867 14869 - if (info->user_ptr[0]) { 14868 + if (info->user_ptr[0] && 14869 + !(ops->internal_flags & NL80211_FLAG_NO_WIPHY_MTX)) { 14870 14870 struct cfg80211_registered_device *rdev = info->user_ptr[0]; 14871 14871 14872 14872 /* we kept the mutex locked since pre_doit */ ··· 15331 15329 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP, 15332 15330 .doit = nl80211_wiphy_netns, 15333 15331 .flags = GENL_UNS_ADMIN_PERM, 15334 - .internal_flags = NL80211_FLAG_NEED_WIPHY, 15332 + .internal_flags = NL80211_FLAG_NEED_WIPHY | 15333 + NL80211_FLAG_NEED_RTNL | 15334 + NL80211_FLAG_NO_WIPHY_MTX, 15335 15335 }, 15336 15336 { 15337 15337 .cmd = NL80211_CMD_GET_SURVEY,
+1 -1
tools/lib/bpf/Makefile
··· 215 215 if [ ! -d '$(DESTDIR_SQ)$2' ]; then \ 216 216 $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$2'; \ 217 217 fi; \ 218 - $(INSTALL) $1 $(if $3,-m $3,) '$(DESTDIR_SQ)$2' 218 + $(INSTALL) $(if $3,-m $3,) $1 '$(DESTDIR_SQ)$2' 219 219 endef 220 220 221 221 install_lib: all_cmd
+1 -1
tools/lib/bpf/btf_dump.c
··· 462 462 return err; 463 463 464 464 case BTF_KIND_ARRAY: 465 - return btf_dump_order_type(d, btf_array(t)->type, through_ptr); 465 + return btf_dump_order_type(d, btf_array(t)->type, false); 466 466 467 467 case BTF_KIND_STRUCT: 468 468 case BTF_KIND_UNION: {
+2 -1
tools/lib/bpf/libbpf.c
··· 1181 1181 if (!elf_rawdata(elf_getscn(obj->efile.elf, obj->efile.shstrndx), NULL)) { 1182 1182 pr_warn("elf: failed to get section names strings from %s: %s\n", 1183 1183 obj->path, elf_errmsg(-1)); 1184 - return -LIBBPF_ERRNO__FORMAT; 1184 + err = -LIBBPF_ERRNO__FORMAT; 1185 + goto errout; 1185 1186 } 1186 1187 1187 1188 /* Old LLVM set e_machine to EM_NONE */
+1 -1
tools/lib/bpf/netlink.c
··· 40 40 memset(&sa, 0, sizeof(sa)); 41 41 sa.nl_family = AF_NETLINK; 42 42 43 - sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE); 43 + sock = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, NETLINK_ROUTE); 44 44 if (sock < 0) 45 45 return -errno; 46 46
+4
tools/testing/selftests/bpf/prog_tests/check_mtu.c
··· 128 128 test_check_mtu_run_xdp(skel, skel->progs.xdp_use_helper, mtu); 129 129 test_check_mtu_run_xdp(skel, skel->progs.xdp_exceed_mtu, mtu); 130 130 test_check_mtu_run_xdp(skel, skel->progs.xdp_minus_delta, mtu); 131 + test_check_mtu_run_xdp(skel, skel->progs.xdp_input_len, mtu); 132 + test_check_mtu_run_xdp(skel, skel->progs.xdp_input_len_exceed, mtu); 131 133 132 134 cleanup: 133 135 test_check_mtu__destroy(skel); ··· 189 187 test_check_mtu_run_tc(skel, skel->progs.tc_exceed_mtu, mtu); 190 188 test_check_mtu_run_tc(skel, skel->progs.tc_exceed_mtu_da, mtu); 191 189 test_check_mtu_run_tc(skel, skel->progs.tc_minus_delta, mtu); 190 + test_check_mtu_run_tc(skel, skel->progs.tc_input_len, mtu); 191 + test_check_mtu_run_tc(skel, skel->progs.tc_input_len_exceed, mtu); 192 192 cleanup: 193 193 test_check_mtu__destroy(skel); 194 194 }
+82
tools/testing/selftests/bpf/prog_tests/fexit_sleep.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2021 Facebook */ 3 + #define _GNU_SOURCE 4 + #include <sched.h> 5 + #include <test_progs.h> 6 + #include <time.h> 7 + #include <sys/mman.h> 8 + #include <sys/syscall.h> 9 + #include "fexit_sleep.skel.h" 10 + 11 + static int do_sleep(void *skel) 12 + { 13 + struct fexit_sleep *fexit_skel = skel; 14 + struct timespec ts1 = { .tv_nsec = 1 }; 15 + struct timespec ts2 = { .tv_sec = 10 }; 16 + 17 + fexit_skel->bss->pid = getpid(); 18 + (void)syscall(__NR_nanosleep, &ts1, NULL); 19 + (void)syscall(__NR_nanosleep, &ts2, NULL); 20 + return 0; 21 + } 22 + 23 + #define STACK_SIZE (1024 * 1024) 24 + static char child_stack[STACK_SIZE]; 25 + 26 + void test_fexit_sleep(void) 27 + { 28 + struct fexit_sleep *fexit_skel = NULL; 29 + int wstatus, duration = 0; 30 + pid_t cpid; 31 + int err, fexit_cnt; 32 + 33 + fexit_skel = fexit_sleep__open_and_load(); 34 + if (CHECK(!fexit_skel, "fexit_skel_load", "fexit skeleton failed\n")) 35 + goto cleanup; 36 + 37 + err = fexit_sleep__attach(fexit_skel); 38 + if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err)) 39 + goto cleanup; 40 + 41 + cpid = clone(do_sleep, child_stack + STACK_SIZE, CLONE_FILES | SIGCHLD, fexit_skel); 42 + if (CHECK(cpid == -1, "clone", strerror(errno))) 43 + goto cleanup; 44 + 45 + /* wait until first sys_nanosleep ends and second sys_nanosleep starts */ 46 + while (READ_ONCE(fexit_skel->bss->fentry_cnt) != 2); 47 + fexit_cnt = READ_ONCE(fexit_skel->bss->fexit_cnt); 48 + if (CHECK(fexit_cnt != 1, "fexit_cnt", "%d", fexit_cnt)) 49 + goto cleanup; 50 + 51 + /* close progs and detach them. That will trigger two nop5->jmp5 rewrites 52 + * in the trampolines to skip nanosleep_fexit prog. 53 + * The nanosleep_fentry prog will get detached first. 54 + * The nanosleep_fexit prog will get detached second. 55 + * Detaching will trigger freeing of both progs JITed images. 56 + There will be two dying bpf_tramp_image-s, but only the initial 57 + bpf_tramp_image (with both _fentry and _fexit progs will be stuck 58 + waiting for percpu_ref_kill to confirm). The other one 59 + will be freed quickly. 60 + */ 61 + close(bpf_program__fd(fexit_skel->progs.nanosleep_fentry)); 62 + close(bpf_program__fd(fexit_skel->progs.nanosleep_fexit)); 63 + fexit_sleep__detach(fexit_skel); 64 + 65 + /* kill the thread to unwind sys_nanosleep stack through the trampoline */ 66 + kill(cpid, 9); 67 + 68 + if (CHECK(waitpid(cpid, &wstatus, 0) == -1, "waitpid", strerror(errno))) 69 + goto cleanup; 70 + if (CHECK(WEXITSTATUS(wstatus) != 0, "exitstatus", "failed")) 71 + goto cleanup; 72 + 73 + /* The bypassed nanosleep_fexit prog shouldn't have executed. 74 + * Unlike progs the maps were not freed and directly accessible. 75 + */ 76 + fexit_cnt = READ_ONCE(fexit_skel->bss->fexit_cnt); 77 + if (CHECK(fexit_cnt != 1, "fexit_cnt", "%d", fexit_cnt)) 78 + goto cleanup; 79 + 80 + cleanup: 81 + fexit_sleep__destroy(fexit_skel);  82 + }
+8
tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
··· 174 174 }; 175 175 }; 176 176 177 + struct struct_in_array {}; 178 + 179 + struct struct_in_array_typed {}; 180 + 181 + typedef struct struct_in_array_typed struct_in_array_t[2]; 182 + 177 183 struct struct_with_embedded_stuff { 178 184 int a; 179 185 struct { ··· 209 203 } r[5]; 210 204 struct struct_in_struct s[10]; 211 205 int t[11]; 206 + struct struct_in_array (*u)[2]; 207 + struct_in_array_t *v; 212 208 }; 213 209 214 210 struct root_struct {
+31
tools/testing/selftests/bpf/progs/fexit_sleep.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) 2021 Facebook */ 3 + #include "vmlinux.h" 4 + #include <bpf/bpf_helpers.h> 5 + #include <bpf/bpf_tracing.h> 6 + 7 + char LICENSE[] SEC("license") = "GPL"; 8 + 9 + int pid = 0; 10 + int fentry_cnt = 0; 11 + int fexit_cnt = 0; 12 + 13 + SEC("fentry/__x64_sys_nanosleep") 14 + int BPF_PROG(nanosleep_fentry, const struct pt_regs *regs) 15 + { 16 + if ((int)bpf_get_current_pid_tgid() != pid) 17 + return 0; 18 + 19 + fentry_cnt++; 20 + return 0; 21 + } 22 + 23 + SEC("fexit/__x64_sys_nanosleep") 24 + int BPF_PROG(nanosleep_fexit, const struct pt_regs *regs, int ret) 25 + { 26 + if ((int)bpf_get_current_pid_tgid() != pid) 27 + return 0; 28 + 29 + fexit_cnt++; 30 + return 0; 31 + }
+92
tools/testing/selftests/bpf/progs/test_check_mtu.c
··· 105 105 return retval; 106 106 } 107 107 108 + SEC("xdp") 109 + int xdp_input_len(struct xdp_md *ctx) 110 + { 111 + int retval = XDP_PASS; /* Expected retval on successful test */ 112 + void *data_end = (void *)(long)ctx->data_end; 113 + void *data = (void *)(long)ctx->data; 114 + __u32 ifindex = GLOBAL_USER_IFINDEX; 115 + __u32 data_len = data_end - data; 116 + 117 + /* API allow user give length to check as input via mtu_len param, 118 + * resulting MTU value is still output in mtu_len param after call. 119 + * 120 + * Input len is L3, like MTU and iph->tot_len. 121 + * Remember XDP data_len is L2. 122 + */ 123 + __u32 mtu_len = data_len - ETH_HLEN; 124 + 125 + if (bpf_check_mtu(ctx, ifindex, &mtu_len, 0, 0)) 126 + retval = XDP_ABORTED; 127 + 128 + global_bpf_mtu_xdp = mtu_len; 129 + return retval; 130 + } 131 + 132 + SEC("xdp") 133 + int xdp_input_len_exceed(struct xdp_md *ctx) 134 + { 135 + int retval = XDP_ABORTED; /* Fail */ 136 + __u32 ifindex = GLOBAL_USER_IFINDEX; 137 + int err; 138 + 139 + /* API allow user give length to check as input via mtu_len param, 140 + * resulting MTU value is still output in mtu_len param after call. 141 + * 142 + * Input length value is L3 size like MTU. 143 + */ 144 + __u32 mtu_len = GLOBAL_USER_MTU; 145 + 146 + mtu_len += 1; /* Exceed with 1 */ 147 + 148 + err = bpf_check_mtu(ctx, ifindex, &mtu_len, 0, 0); 149 + if (err == BPF_MTU_CHK_RET_FRAG_NEEDED) 150 + retval = XDP_PASS ; /* Success in exceeding MTU check */ 151 + 152 + global_bpf_mtu_xdp = mtu_len; 153 + return retval; 154 + } 155 + 108 156 SEC("classifier") 109 157 int tc_use_helper(struct __sk_buff *ctx) 110 158 { ··· 240 192 */ 241 193 if (bpf_check_mtu(ctx, ifindex, &mtu_len, delta, 0)) 242 194 retval = BPF_DROP; 195 + 196 + global_bpf_mtu_xdp = mtu_len; 197 + return retval; 198 + } 199 + 200 + SEC("classifier") 201 + int tc_input_len(struct __sk_buff *ctx) 202 + { 203 + int retval = BPF_OK; /* Expected retval on successful test */ 204 + __u32 ifindex = GLOBAL_USER_IFINDEX; 205 + 206 + /* API allow user give length to check as input via mtu_len param, 207 + * resulting MTU value is still output in mtu_len param after call. 208 + * 209 + * Input length value is L3 size. 210 + */ 211 + __u32 mtu_len = GLOBAL_USER_MTU; 212 + 213 + if (bpf_check_mtu(ctx, ifindex, &mtu_len, 0, 0)) 214 + retval = BPF_DROP; 215 + 216 + global_bpf_mtu_xdp = mtu_len; 217 + return retval; 218 + } 219 + 220 + SEC("classifier") 221 + int tc_input_len_exceed(struct __sk_buff *ctx) 222 + { 223 + int retval = BPF_DROP; /* Fail */ 224 + __u32 ifindex = GLOBAL_USER_IFINDEX; 225 + int err; 226 + 227 + /* API allow user give length to check as input via mtu_len param, 228 + * resulting MTU value is still output in mtu_len param after call. 229 + * 230 + * Input length value is L3 size like MTU. 231 + */ 232 + __u32 mtu_len = GLOBAL_USER_MTU; 233 + 234 + mtu_len += 1; /* Exceed with 1 */ 235 + 236 + err = bpf_check_mtu(ctx, ifindex, &mtu_len, 0, 0); 237 + if (err == BPF_MTU_CHK_RET_FRAG_NEEDED) 238 + retval = BPF_OK; /* Success in exceeding MTU check */ 243 239 244 240 global_bpf_mtu_xdp = mtu_len; 245 241 return retval;
+2 -4
tools/testing/selftests/bpf/progs/test_tunnel_kern.c
···
 	}
 
 	ret = bpf_skb_get_tunnel_opt(skb, &gopt, sizeof(gopt));
-	if (ret < 0) {
-		ERROR(ret);
-		return TC_ACT_SHOT;
-	}
+	if (ret < 0)
+		gopt.opt_class = 0;
 
 	bpf_trace_printk(fmt, sizeof(fmt),
 			 key.tunnel_id, key.remote_ipv4, gopt.opt_class);
+19 -8
tools/testing/selftests/bpf/verifier/bounds_deduction.c
···
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
 	},
-	.result = REJECT,
+	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
 	.errstr = "R0 tried to subtract pointer from scalar",
+	.result = REJECT,
 },
 {
 	"check deducing bounds from const, 2",
···
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
 	BPF_EXIT_INSN(),
 	},
+	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
+	.result_unpriv = REJECT,
 	.result = ACCEPT,
 	.retval = 1,
 },
···
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
 	},
-	.result = REJECT,
+	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
 	.errstr = "R0 tried to subtract pointer from scalar",
+	.result = REJECT,
 },
 {
 	"check deducing bounds from const, 4",
···
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
 	BPF_EXIT_INSN(),
 	},
+	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
+	.result_unpriv = REJECT,
 	.result = ACCEPT,
 },
 {
···
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
 	},
-	.result = REJECT,
+	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
 	.errstr = "R0 tried to subtract pointer from scalar",
+	.result = REJECT,
 },
 {
 	"check deducing bounds from const, 6",
···
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
 	},
-	.result = REJECT,
+	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
 	.errstr = "R0 tried to subtract pointer from scalar",
+	.result = REJECT,
 },
 {
 	"check deducing bounds from const, 7",
···
 		    offsetof(struct __sk_buff, mark)),
 	BPF_EXIT_INSN(),
 	},
-	.result = REJECT,
+	.errstr_unpriv = "R1 tried to sub from different maps, paths, or prohibited types",
 	.errstr = "dereference of modified ctx ptr",
+	.result = REJECT,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
···
 		    offsetof(struct __sk_buff, mark)),
 	BPF_EXIT_INSN(),
 	},
-	.result = REJECT,
+	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
 	.errstr = "dereference of modified ctx ptr",
+	.result = REJECT,
 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 },
 {
···
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
 	},
-	.result = REJECT,
+	.errstr_unpriv = "R0 tried to sub from different maps, paths, or prohibited types",
 	.errstr = "R0 tried to subtract pointer from scalar",
+	.result = REJECT,
 },
 {
 	"check deducing bounds from const, 10",
···
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
 	},
-	.result = REJECT,
 	.errstr = "math between ctx pointer and register with unbounded min value is not allowed",
+	.result = REJECT,
},
+4
tools/testing/selftests/bpf/verifier/map_ptr.c
···
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_16b = { 4 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
 	.result = ACCEPT,
 },
 {
···
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_16b = { 4 },
+	.result_unpriv = REJECT,
+	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
 	.result = ACCEPT,
 },
+14 -1
tools/testing/selftests/bpf/verifier/unpriv.c
···
 	.result = ACCEPT,
 },
 {
-	"unpriv: adding of fp",
+	"unpriv: adding of fp, reg",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_1, 0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_10),
+	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
+	BPF_EXIT_INSN(),
+	},
+	.errstr_unpriv = "R1 tried to add from different maps, paths, or prohibited types",
+	.result_unpriv = REJECT,
+	.result = ACCEPT,
+},
+{
+	"unpriv: adding of fp, imm",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, -8),
 	BPF_EXIT_INSN(),
 	},
+22 -1
tools/testing/selftests/bpf/verifier/value_ptr_arith.c
···
 	.fixup_map_array_48b = { 1 },
 	.result = ACCEPT,
 	.result_unpriv = REJECT,
-	.errstr_unpriv = "R2 tried to add from different maps or paths",
+	.errstr_unpriv = "R2 tried to add from different maps, paths, or prohibited types",
 	.retval = 0,
 },
 {
···
 	.fixup_map_array_48b = { 3 },
 	.result = ACCEPT,
 	.retval = 0xabcdef12,
+},
+{
+	"map access: value_ptr += N, value_ptr -= N known scalar",
+	.insns = {
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_MOV32_IMM(BPF_REG_1, 0x12345678),
+	BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
+	BPF_MOV64_IMM(BPF_REG_1, 2),
+	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_array_48b = { 3 },
+	.result = ACCEPT,
+	.retval = 0x12345678,
 },
 {
 	"map access: unknown scalar += value_ptr, 1",
+1 -1
tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
···
 	# In accordance with INET_ECN_decapsulate()
 	__test_ecn_decap 00 00 0x00
 	__test_ecn_decap 01 01 0x01
-	__test_ecn_decap 02 01 0x02
+	__test_ecn_decap 02 01 0x01
 	__test_ecn_decap 01 03 0x03
 	__test_ecn_decap 02 03 0x03
 	test_ecn_decap_error
+20 -10
tools/testing/selftests/net/mptcp/mptcp_join.sh
···
 timeout=30
 mptcp_connect=""
 capture=0
+do_all_tests=1
 
 TEST_COUNT=0
···
 		"$CBPF_MPTCP_SUBOPTION_ADD_ADDR" \
 		-j DROP
 }
-
-for arg in "$@"; do
-	if [ "$arg" = "-c" ]; then
-		capture=1
-	fi
-done
 
 ip -Version > /dev/null 2>&1
 if [ $? -ne 0 ];then
···
 	echo "  -4 v4mapped_tests"
 	echo "  -b backup_tests"
 	echo "  -p add_addr_ports_tests"
-	echo "  -c syncookies_tests"
+	echo "  -k syncookies_tests"
+	echo "  -c capture pcap files"
 	echo "  -h help"
 }
···
 make_file "$sin" "server" 1
 trap cleanup EXIT
 
-if [ -z $1 ]; then
+for arg in "$@"; do
+	# check for "capture" arg before launching tests
+	if [[ "${arg}" =~ ^"-"[0-9a-zA-Z]*"c"[0-9a-zA-Z]*$ ]]; then
+		capture=1
+	fi
+
+	# exception for the capture option, the rest means: a part of the tests
+	if [ "${arg}" != "-c" ]; then
+		do_all_tests=0
+	fi
+done
+
+if [ $do_all_tests -eq 1 ]; then
 	all_tests
 	exit $ret
 fi
 
-while getopts 'fsltra64bpch' opt; do
+while getopts 'fsltra64bpkch' opt; do
 	case $opt in
 	f)
 		subflows_tests
···
 	p)
 		add_addr_ports_tests
 		;;
-	c)
+	k)
 		syncookies_tests
+		;;
+	c)
 		;;
 	h | *)
 		usage
+16 -16
tools/testing/selftests/net/reuseaddr_ports_exhausted.c
···
 };
 
 struct reuse_opts unreusable_opts[12] = {
-	{0, 0, 0, 0},
-	{0, 0, 0, 1},
-	{0, 0, 1, 0},
-	{0, 0, 1, 1},
-	{0, 1, 0, 0},
-	{0, 1, 0, 1},
-	{0, 1, 1, 0},
-	{0, 1, 1, 1},
-	{1, 0, 0, 0},
-	{1, 0, 0, 1},
-	{1, 0, 1, 0},
-	{1, 0, 1, 1},
+	{{0, 0}, {0, 0}},
+	{{0, 0}, {0, 1}},
+	{{0, 0}, {1, 0}},
+	{{0, 0}, {1, 1}},
+	{{0, 1}, {0, 0}},
+	{{0, 1}, {0, 1}},
+	{{0, 1}, {1, 0}},
+	{{0, 1}, {1, 1}},
+	{{1, 0}, {0, 0}},
+	{{1, 0}, {0, 1}},
+	{{1, 0}, {1, 0}},
+	{{1, 0}, {1, 1}},
 };
 
 struct reuse_opts reusable_opts[4] = {
-	{1, 1, 0, 0},
-	{1, 1, 0, 1},
-	{1, 1, 1, 0},
-	{1, 1, 1, 1},
+	{{1, 1}, {0, 0}},
+	{{1, 1}, {0, 1}},
+	{{1, 1}, {1, 0}},
+	{{1, 1}, {1, 1}},
 };
 
 int bind_port(struct __test_metadata *_metadata, int reuseaddr, int reuseport)