Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.16-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from bpf, can and netfilter.

Current release - regressions:

- bpf, sockmap: re-evaluate proto ops when psock is removed from
sockmap

Current release - new code bugs:

- bpf: fix bpf_check_mod_kfunc_call for built-in modules

- ice: fixes for TC classifier offloads

- vrf: don't run conntrack on vrf with !dflt qdisc

Previous releases - regressions:

- bpf: fix the off-by-two error in range markings

- seg6: fix the iif in the IPv6 socket control block

- devlink: fix netns refcount leak in devlink_nl_cmd_reload()

- dsa: mv88e6xxx: fix "don't use PHY_DETECT on internal PHY's"

- dsa: mv88e6xxx: allow use of PHYs on CPU and DSA ports

Previous releases - always broken:

- ethtool: do not perform operations on net devices being
unregistered

- udp: use datalen to cap max gso segments

- ice: fix races in stats collection

- fec: only clear interrupt of handling queue in fec_enet_rx_queue()

- m_can: pci: fix incorrect reference clock rate

- m_can: disable and ignore ELO interrupt

- mvpp2: fix XDP rx queues registering

Misc:

- treewide: add missing includes masked by cgroup -> bpf.h
dependency"

* tag 'net-5.16-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (82 commits)
net: dsa: mv88e6xxx: allow use of PHYs on CPU and DSA ports
net: wwan: iosm: fixes unable to send AT command during mbim tx
net: wwan: iosm: fixes net interface nonfunctional after fw flash
net: wwan: iosm: fixes unnecessary doorbell send
net: dsa: felix: Fix memory leak in felix_setup_mmio_filtering
MAINTAINERS: s390/net: remove myself as maintainer
net/sched: fq_pie: prevent dismantle issue
net: mana: Fix memory leak in mana_hwc_create_wq
seg6: fix the iif in the IPv6 socket control block
nfp: Fix memory leak in nfp_cpp_area_cache_add()
nfc: fix potential NULL pointer deref in nfc_genl_dump_ses_done
nfc: fix segfault in nfc_genl_dump_devices_done
udp: using datalen to cap max gso segments
net: dsa: mv88e6xxx: error handling for serdes_power functions
can: kvaser_usb: get CAN clock frequency from device
can: kvaser_pciefd: kvaser_pciefd_rx_error_frame(): increase correct stats->{rx,tx}_errors counter
net: mvpp2: fix XDP rx queues registering
vmxnet3: fix minimum vectors alloc issue
net, neigh: clear whole pneigh_entry at alloc time
net: dsa: mv88e6xxx: fix "don't use PHY_DETECT on internal PHY's"
...

+1370 -383
+3 -6
Documentation/locking/locktypes.rst
···
   spin_lock(&p->lock);
   p->count += this_cpu_read(var2);

-  On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
-  which makes the above code fully equivalent. On a PREEMPT_RT kernel
   migrate_disable() ensures that the task is pinned on the current CPU which
   in turn guarantees that the per-CPU access to var1 and var2 are staying on
-  the same CPU.
+  the same CPU while the task remains preemptible.

   The migrate_disable() substitution is not valid for the following
   scenario::
···
   p = this_cpu_ptr(&var1);
   p->val = func2();

-  While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
-  here migrate_disable() does not protect against reentrancy from a
-  preempting task. A correct substitution for this case is::
+  This breaks because migrate_disable() does not protect against reentrancy from
+  a preempting task. A correct substitution for this case is::

   func()
   {
+1 -3
MAINTAINERS
···
 F:	include/linux/mlx5/mlx5_ifc_fpga.h

 MELLANOX ETHERNET SWITCH DRIVERS
-M:	Jiri Pirko <jiri@nvidia.com>
 M:	Ido Schimmel <idosch@nvidia.com>
+M:	Petr Machata <petrm@nvidia.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 W:	http://www.mellanox.com
···
 F:	drivers/iommu/s390-iommu.c

 S390 IUCV NETWORK LAYER
-M:	Julian Wiedmann <jwi@linux.ibm.com>
 M:	Alexandra Winter <wintera@linux.ibm.com>
 M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
···
 F:	net/iucv/

 S390 NETWORK DRIVERS
-M:	Julian Wiedmann <jwi@linux.ibm.com>
 M:	Alexandra Winter <wintera@linux.ibm.com>
 M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
+1 -1
arch/mips/net/bpf_jit_comp.h
···
 #define emit(...) __emit(__VA_ARGS__)

 /* Workaround for R10000 ll/sc errata */
-#ifdef CONFIG_WAR_R10000
+#ifdef CONFIG_WAR_R10000_LLSC
 #define LLSC_beqz	beqzl
 #else
 #define LLSC_beqz	beqz
+1
block/fops.c
···
 #include <linux/falloc.h>
 #include <linux/suspend.h>
 #include <linux/fs.h>
+#include <linux/module.h>
 #include "blk.h"

 static inline struct inode *bdev_file_inode(struct file *file)
+1
drivers/gpu/drm/drm_gem_shmem_helper.c
···
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
+#include <linux/module.h>

 #ifdef CONFIG_X86
 #include <asm/set_memory.h>
+1
drivers/gpu/drm/i915/gt/intel_gtt.c
···
 #include <linux/slab.h> /* fault-inject.h is not standalone! */

 #include <linux/fault-inject.h>
+#include <linux/sched/mm.h>

 #include "gem/i915_gem_lmem.h"
 #include "i915_trace.h"
+1
drivers/gpu/drm/i915/i915_request.c
···
 #include <linux/sched.h>
 #include <linux/sched/clock.h>
 #include <linux/sched/signal.h>
+#include <linux/sched/mm.h>

 #include "gem/i915_gem_context.h"
 #include "gt/intel_breadcrumbs.h"
+1
drivers/gpu/drm/lima/lima_device.c
···
 #include <linux/regulator/consumer.h>
 #include <linux/reset.h>
 #include <linux/clk.h>
+#include <linux/slab.h>
 #include <linux/dma-mapping.h>
 #include <linux/platform_device.h>

+1
drivers/gpu/drm/msm/msm_gem_shrinker.c
···
 */

 #include <linux/vmalloc.h>
+#include <linux/sched/mm.h>

 #include "msm_drv.h"
 #include "msm_gem.h"
+1
drivers/gpu/drm/ttm/ttm_tt.c
···
 #include <linux/sched.h>
 #include <linux/shmem_fs.h>
 #include <linux/file.h>
+#include <linux/module.h>
 #include <drm/drm_cache.h>
 #include <drm/ttm/ttm_bo_driver.h>

+8 -6
drivers/net/bonding/bond_alb.c
···
 	struct slave *slave;

 	if (!bond_has_slaves(bond)) {
-		bond_info->tx_rebalance_counter = 0;
+		atomic_set(&bond_info->tx_rebalance_counter, 0);
 		bond_info->lp_counter = 0;
 		goto re_arm;
 	}

 	rcu_read_lock();

-	bond_info->tx_rebalance_counter++;
+	atomic_inc(&bond_info->tx_rebalance_counter);
 	bond_info->lp_counter++;

 	/* send learning packets */
···
 	}

 	/* rebalance tx traffic */
-	if (bond_info->tx_rebalance_counter >= BOND_TLB_REBALANCE_TICKS) {
+	if (atomic_read(&bond_info->tx_rebalance_counter) >= BOND_TLB_REBALANCE_TICKS) {
 		bond_for_each_slave_rcu(bond, slave, iter) {
 			tlb_clear_slave(bond, slave, 1);
 			if (slave == rcu_access_pointer(bond->curr_active_slave)) {
···
 				bond_info->unbalanced_load = 0;
 			}
 		}
-		bond_info->tx_rebalance_counter = 0;
+		atomic_set(&bond_info->tx_rebalance_counter, 0);
 	}

 	if (bond_info->rlb_enabled) {
···
 	tlb_init_slave(slave);

 	/* order a rebalance ASAP */
-	bond->alb_info.tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS;
+	atomic_set(&bond->alb_info.tx_rebalance_counter,
+		   BOND_TLB_REBALANCE_TICKS);

 	if (bond->alb_info.rlb_enabled)
 		bond->alb_info.rlb_rebalance = 1;
···
 		rlb_clear_slave(bond, slave);
 	} else if (link == BOND_LINK_UP) {
 		/* order a rebalance ASAP */
-		bond_info->tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS;
+		atomic_set(&bond_info->tx_rebalance_counter,
+			   BOND_TLB_REBALANCE_TICKS);
 		if (bond->alb_info.rlb_enabled) {
 			bond->alb_info.rlb_rebalance = 1;
 			/* If the updelay module parameter is smaller than the
+7 -1
drivers/net/can/kvaser_pciefd.c
···
 #define KVASER_PCIEFD_SPACK_EWLR BIT(23)
 #define KVASER_PCIEFD_SPACK_EPLR BIT(24)

+/* Kvaser KCAN_EPACK second word */
+#define KVASER_PCIEFD_EPACK_DIR_TX BIT(0)
+
 struct kvaser_pciefd;

 struct kvaser_pciefd_can {
···

 	can->err_rep_cnt++;
 	can->can.can_stats.bus_error++;
-	stats->rx_errors++;
+	if (p->header[1] & KVASER_PCIEFD_EPACK_DIR_TX)
+		stats->tx_errors++;
+	else
+		stats->rx_errors++;

 	can->bec.txerr = bec.txerr;
 	can->bec.rxerr = bec.rxerr;
+27 -15
drivers/net/can/m_can/m_can.c
···

 /* Interrupts for version 3.0.x */
 #define IR_ERR_LEC_30X	(IR_STE | IR_FOE | IR_ACKE | IR_BE | IR_CRCE)
-#define IR_ERR_BUS_30X	(IR_ERR_LEC_30X | IR_WDI | IR_ELO | IR_BEU | \
-			 IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \
-			 IR_RF1L | IR_RF0L)
+#define IR_ERR_BUS_30X	(IR_ERR_LEC_30X | IR_WDI | IR_BEU | IR_BEC | \
+			 IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \
+			 IR_RF0L)
 #define IR_ERR_ALL_30X	(IR_ERR_STATE | IR_ERR_BUS_30X)

 /* Interrupts for version >= 3.1.x */
 #define IR_ERR_LEC_31X	(IR_PED | IR_PEA)
-#define IR_ERR_BUS_31X	(IR_ERR_LEC_31X | IR_WDI | IR_ELO | IR_BEU | \
-			 IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \
-			 IR_RF1L | IR_RF0L)
+#define IR_ERR_BUS_31X	(IR_ERR_LEC_31X | IR_WDI | IR_BEU | IR_BEC | \
+			 IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \
+			 IR_RF0L)
 #define IR_ERR_ALL_31X	(IR_ERR_STATE | IR_ERR_BUS_31X)

 /* Interrupt Line Select (ILS) */
···
 		err = m_can_fifo_read(cdev, fgi, M_CAN_FIFO_DATA,
 				      cf->data, DIV_ROUND_UP(cf->len, 4));
 		if (err)
-			goto out_fail;
+			goto out_free_skb;
 	}

 	/* acknowledge rx fifo 0 */
···

 	return 0;

+out_free_skb:
+	kfree_skb(skb);
 out_fail:
 	netdev_err(dev, "FIFO read returned %d\n", err);
 	return err;
···
 {
 	if (irqstatus & IR_WDI)
 		netdev_err(dev, "Message RAM Watchdog event due to missing READY\n");
-	if (irqstatus & IR_ELO)
-		netdev_err(dev, "Error Logging Overflow\n");
 	if (irqstatus & IR_BEU)
 		netdev_err(dev, "Bit Error Uncorrected\n");
 	if (irqstatus & IR_BEC)
···
 	case 30:
 		/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.0.x */
 		can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
-		cdev->can.bittiming_const = &m_can_bittiming_const_30X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_30X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_30X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_30X;
 		break;
 	case 31:
 		/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.1.x */
 		can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
-		cdev->can.bittiming_const = &m_can_bittiming_const_31X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_31X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_31X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_31X;
 		break;
 	case 32:
 	case 33:
 		/* Support both MCAN version v3.2.x and v3.3.0 */
-		cdev->can.bittiming_const = &m_can_bittiming_const_31X;
-		cdev->can.data_bittiming_const = &m_can_data_bittiming_const_31X;
+		cdev->can.bittiming_const = cdev->bit_timing ?
+			cdev->bit_timing : &m_can_bittiming_const_31X;
+
+		cdev->can.data_bittiming_const = cdev->data_timing ?
+			cdev->data_timing :
+			&m_can_data_bittiming_const_31X;

 		cdev->can.ctrlmode_supported |=
 			(m_can_niso_supported(cdev) ?
+3
drivers/net/can/m_can/m_can.h
···
 	struct sk_buff *tx_skb;
 	struct phy *transceiver;

+	const struct can_bittiming_const *bit_timing;
+	const struct can_bittiming_const *data_timing;
+
 	struct m_can_ops *ops;

 	int version;
+56 -6
drivers/net/can/m_can/m_can_pci.c
···

 #define M_CAN_PCI_MMIO_BAR		0

-#define M_CAN_CLOCK_FREQ_EHL		100000000
 #define CTL_CSR_INT_CTL_OFFSET		0x508
+
+struct m_can_pci_config {
+	const struct can_bittiming_const *bit_timing;
+	const struct can_bittiming_const *data_timing;
+	unsigned int clock_freq;
+};

 struct m_can_pci_priv {
 	struct m_can_classdev cdev;
···
 static int iomap_read_fifo(struct m_can_classdev *cdev, int offset, void *val, size_t val_count)
 {
 	struct m_can_pci_priv *priv = cdev_to_priv(cdev);
+	void __iomem *src = priv->base + offset;

-	ioread32_rep(priv->base + offset, val, val_count);
+	while (val_count--) {
+		*(unsigned int *)val = ioread32(src);
+		val += 4;
+		src += 4;
+	}

 	return 0;
 }
···
 			const void *val, size_t val_count)
 {
 	struct m_can_pci_priv *priv = cdev_to_priv(cdev);
+	void __iomem *dst = priv->base + offset;

-	iowrite32_rep(priv->base + offset, val, val_count);
+	while (val_count--) {
+		iowrite32(*(unsigned int *)val, dst);
+		val += 4;
+		dst += 4;
+	}

 	return 0;
 }
···
 	.read_fifo = iomap_read_fifo,
 };

+static const struct can_bittiming_const m_can_bittiming_const_ehl = {
+	.name = KBUILD_MODNAME,
+	.tseg1_min = 2,		/* Time segment 1 = prop_seg + phase_seg1 */
+	.tseg1_max = 64,
+	.tseg2_min = 1,		/* Time segment 2 = phase_seg2 */
+	.tseg2_max = 128,
+	.sjw_max = 128,
+	.brp_min = 1,
+	.brp_max = 512,
+	.brp_inc = 1,
+};
+
+static const struct can_bittiming_const m_can_data_bittiming_const_ehl = {
+	.name = KBUILD_MODNAME,
+	.tseg1_min = 2,		/* Time segment 1 = prop_seg + phase_seg1 */
+	.tseg1_max = 16,
+	.tseg2_min = 1,		/* Time segment 2 = phase_seg2 */
+	.tseg2_max = 8,
+	.sjw_max = 4,
+	.brp_min = 1,
+	.brp_max = 32,
+	.brp_inc = 1,
+};
+
+static const struct m_can_pci_config m_can_pci_ehl = {
+	.bit_timing = &m_can_bittiming_const_ehl,
+	.data_timing = &m_can_data_bittiming_const_ehl,
+	.clock_freq = 200000000,
+};
+
 static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
 {
 	struct device *dev = &pci->dev;
+	const struct m_can_pci_config *cfg;
 	struct m_can_classdev *mcan_class;
 	struct m_can_pci_priv *priv;
 	void __iomem *base;
···
 	if (!mcan_class)
 		return -ENOMEM;

+	cfg = (const struct m_can_pci_config *)id->driver_data;
+
 	priv = cdev_to_priv(mcan_class);

 	priv->base = base;
···
 	mcan_class->dev = &pci->dev;
 	mcan_class->net->irq = pci_irq_vector(pci, 0);
 	mcan_class->pm_clock_support = 1;
-	mcan_class->can.clock.freq = id->driver_data;
+	mcan_class->bit_timing = cfg->bit_timing;
+	mcan_class->data_timing = cfg->data_timing;
+	mcan_class->can.clock.freq = cfg->clock_freq;
 	mcan_class->ops = &m_can_pci_ops;

 	pci_set_drvdata(pci, mcan_class);
···
 			 m_can_pci_suspend, m_can_pci_resume);

 static const struct pci_device_id m_can_pci_id_table[] = {
-	{ PCI_VDEVICE(INTEL, 0x4bc1), M_CAN_CLOCK_FREQ_EHL, },
-	{ PCI_VDEVICE(INTEL, 0x4bc2), M_CAN_CLOCK_FREQ_EHL, },
+	{ PCI_VDEVICE(INTEL, 0x4bc1), (kernel_ulong_t)&m_can_pci_ehl, },
+	{ PCI_VDEVICE(INTEL, 0x4bc2), (kernel_ulong_t)&m_can_pci_ehl, },
 	{ }	/* Terminating Entry */
 };
 MODULE_DEVICE_TABLE(pci, m_can_pci_id_table);
+1 -1
drivers/net/can/pch_can.c
···
 		cf->data[i + 1] = data_reg >> 8;
 	}

-	netif_receive_skb(skb);
 	rcv_pkts++;
 	stats->rx_packets++;
 	quota--;
 	stats->rx_bytes += cf->len;
+	netif_receive_skb(skb);

 	pch_fifo_thresh(priv, obj_num);
 	obj_num++;
+6 -1
drivers/net/can/sja1000/ems_pcmcia.c
···
 			free_sja1000dev(dev);
 	}

-	err = request_irq(dev->irq, &ems_pcmcia_interrupt, IRQF_SHARED,
+	if (!card->channels) {
+		err = -ENODEV;
+		goto failure_cleanup;
+	}
+
+	err = request_irq(pdev->irq, &ems_pcmcia_interrupt, IRQF_SHARED,
 			  DRV_NAME, card);
 	if (!err)
 		return 0;
+73 -28
drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
···

 #include "kvaser_usb.h"

-/* Forward declaration */
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg;
-
-#define CAN_USB_CLOCK			8000000
 #define MAX_USBCAN_NET_DEVICES		2

 /* Command header size */
···
 #define CMD_FLUSH_QUEUE_REPLY		68

 #define CMD_LEAF_LOG_MESSAGE		106
+
+/* Leaf frequency options */
+#define KVASER_USB_LEAF_SWOPTION_FREQ_MASK 0x60
+#define KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK 0
+#define KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK BIT(5)
+#define KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK BIT(6)

 /* error factors */
 #define M16C_EF_ACKE		BIT(0)
···
 	};
 };

+static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
+	.name = "kvaser_usb",
+	.tseg1_min = KVASER_USB_TSEG1_MIN,
+	.tseg1_max = KVASER_USB_TSEG1_MAX,
+	.tseg2_min = KVASER_USB_TSEG2_MIN,
+	.tseg2_max = KVASER_USB_TSEG2_MAX,
+	.sjw_max = KVASER_USB_SJW_MAX,
+	.brp_min = KVASER_USB_BRP_MIN,
+	.brp_max = KVASER_USB_BRP_MAX,
+	.brp_inc = KVASER_USB_BRP_INC,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = {
+	.clock = {
+		.freq = 8000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = {
+	.clock = {
+		.freq = 16000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = {
+	.clock = {
+		.freq = 24000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
+static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = {
+	.clock = {
+		.freq = 32000000,
+	},
+	.timestamp_freq = 1,
+	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
+};
+
 static void *
 kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv,
 			     const struct sk_buff *skb, int *frame_len,
···
 	return rc;
 }

+static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev,
+						   const struct leaf_cmd_softinfo *softinfo)
+{
+	u32 sw_options = le32_to_cpu(softinfo->sw_options);
+
+	dev->fw_version = le32_to_cpu(softinfo->fw_version);
+	dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx);
+
+	switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) {
+	case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz;
+		break;
+	case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz;
+		break;
+	case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK:
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz;
+		break;
+	}
+}
+
 static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev)
 {
 	struct kvaser_cmd cmd;
···

 	switch (dev->card_data.leaf.family) {
 	case KVASER_LEAF:
-		dev->fw_version = le32_to_cpu(cmd.u.leaf.softinfo.fw_version);
-		dev->max_tx_urbs =
-			le16_to_cpu(cmd.u.leaf.softinfo.max_outstanding_tx);
+		kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo);
 		break;
 	case KVASER_USBCAN:
 		dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version);
 		dev->max_tx_urbs =
 			le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx);
+		dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz;
 		break;
 	}

···
 {
 	struct kvaser_usb_dev_card_data *card_data = &dev->card_data;

-	dev->cfg = &kvaser_usb_leaf_dev_cfg;
 	card_data->ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES;

 	return 0;
 }
-
-static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = {
-	.name = "kvaser_usb",
-	.tseg1_min = KVASER_USB_TSEG1_MIN,
-	.tseg1_max = KVASER_USB_TSEG1_MAX,
-	.tseg2_min = KVASER_USB_TSEG2_MIN,
-	.tseg2_max = KVASER_USB_TSEG2_MAX,
-	.sjw_max = KVASER_USB_SJW_MAX,
-	.brp_min = KVASER_USB_BRP_MIN,
-	.brp_max = KVASER_USB_BRP_MAX,
-	.brp_inc = KVASER_USB_BRP_INC,
-};

 static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev)
 {
···
 	.dev_flush_queue = kvaser_usb_leaf_flush_queue,
 	.dev_read_bulk_callback = kvaser_usb_leaf_read_bulk_callback,
 	.dev_frame_to_cmd = kvaser_usb_leaf_frame_to_cmd,
 };
-
-static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg = {
-	.clock = {
-		.freq = CAN_USB_CLOCK,
-	},
-	.timestamp_freq = 1,
-	.bittiming_const = &kvaser_usb_leaf_bittiming_const,
-};
+46 -37
drivers/net/dsa/mv88e6xxx/chip.c
···
 	u16 reg;
 	int err;

+	/* The 88e6250 family does not have the PHY detect bit. Instead,
+	 * report whether the port is internal.
+	 */
+	if (chip->info->family == MV88E6XXX_FAMILY_6250)
+		return port < chip->info->num_internal_phys;
+
 	err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg);
 	if (err) {
 		dev_err(chip->dev,
···
 {
 	struct mv88e6xxx_chip *chip = ds->priv;
 	struct mv88e6xxx_port *p;
-	int err;
+	int err = 0;

 	p = &chip->ports[port];

-	/* FIXME: is this the correct test? If we're in fixed mode on an
-	 * internal port, why should we process this any different from
-	 * PHY mode? On the other hand, the port may be automedia between
-	 * an internal PHY and the serdes...
-	 */
-	if ((mode == MLO_AN_PHY) && mv88e6xxx_phy_is_internal(ds, port))
-		return;
-
 	mv88e6xxx_reg_lock(chip);
-	/* In inband mode, the link may come up at any time while the link
-	 * is not forced down. Force the link down while we reconfigure the
-	 * interface mode.
-	 */
-	if (mode == MLO_AN_INBAND && p->interface != state->interface &&
-	    chip->info->ops->port_set_link)
-		chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN);

-	err = mv88e6xxx_port_config_interface(chip, port, state->interface);
-	if (err && err != -EOPNOTSUPP)
-		goto err_unlock;
+	if (mode != MLO_AN_PHY || !mv88e6xxx_phy_is_internal(ds, port)) {
+		/* In inband mode, the link may come up at any time while the
+		 * link is not forced down. Force the link down while we
+		 * reconfigure the interface mode.
+		 */
+		if (mode == MLO_AN_INBAND &&
+		    p->interface != state->interface &&
+		    chip->info->ops->port_set_link)
+			chip->info->ops->port_set_link(chip, port,
+						       LINK_FORCED_DOWN);

-	err = mv88e6xxx_serdes_pcs_config(chip, port, mode, state->interface,
-					  state->advertising);
-	/* FIXME: we should restart negotiation if something changed - which
-	 * is something we get if we convert to using phylinks PCS operations.
-	 */
-	if (err > 0)
-		err = 0;
+		err = mv88e6xxx_port_config_interface(chip, port,
+						      state->interface);
+		if (err && err != -EOPNOTSUPP)
+			goto err_unlock;
+
+		err = mv88e6xxx_serdes_pcs_config(chip, port, mode,
+						  state->interface,
+						  state->advertising);
+		/* FIXME: we should restart negotiation if something changed -
+		 * which is something we get if we convert to using phylinks
+		 * PCS operations.
+		 */
+		if (err > 0)
+			err = 0;
+	}

 	/* Undo the forced down state above after completing configuration
-	 * irrespective of its state on entry, which allows the link to come up.
+	 * irrespective of its state on entry, which allows the link to come
+	 * up in the in-band case where there is no separate SERDES. Also
+	 * ensure that the link can come up if the PPU is in use and we are
+	 * in PHY mode (we treat the PPU as an effective in-band mechanism.)
 	 */
-	if (mode == MLO_AN_INBAND && p->interface != state->interface &&
-	    chip->info->ops->port_set_link)
+	if (chip->info->ops->port_set_link &&
+	    ((mode == MLO_AN_INBAND && p->interface != state->interface) ||
+	     (mode == MLO_AN_PHY && mv88e6xxx_port_ppu_updates(chip, port))))
 		chip->info->ops->port_set_link(chip, port, LINK_UNFORCED);

 	p->interface = state->interface;
···
 	ops = chip->info->ops;

 	mv88e6xxx_reg_lock(chip);
-	/* Internal PHYs propagate their configuration directly to the MAC.
-	 * External PHYs depend on whether the PPU is enabled for this port.
+	/* Force the link down if we know the port may not be automatically
+	 * updated by the switch or if we are using fixed-link mode.
 	 */
-	if (((!mv88e6xxx_phy_is_internal(ds, port) &&
-	      !mv88e6xxx_port_ppu_updates(chip, port)) ||
+	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
 	     mode == MLO_AN_FIXED) && ops->port_sync_link)
 		err = ops->port_sync_link(chip, port, mode, false);
 	mv88e6xxx_reg_unlock(chip);
···
 	ops = chip->info->ops;

 	mv88e6xxx_reg_lock(chip);
-	/* Internal PHYs propagate their configuration directly to the MAC.
-	 * External PHYs depend on whether the PPU is enabled for this port.
+	/* Configure and force the link up if we know that the port may not
+	 * be automatically updated by the switch or if we are using
+	 * fixed-link mode.
 	 */
-	if ((!mv88e6xxx_phy_is_internal(ds, port) &&
-	     !mv88e6xxx_port_ppu_updates(chip, port)) ||
+	if (!mv88e6xxx_port_ppu_updates(chip, port) ||
 	    mode == MLO_AN_FIXED) {
 		/* FIXME: for an automedia port, should we force the link
 		 * down here - what if the link comes up due to "other" media
+7 -1
drivers/net/dsa/mv88e6xxx/serdes.c
···
 			      bool up)
 {
 	u8 cmode = chip->ports[port].cmode;
-	int err = 0;
+	int err;

 	switch (cmode) {
 	case MV88E6XXX_PORT_STS_CMODE_SGMII:
···
 	case MV88E6XXX_PORT_STS_CMODE_XAUI:
 	case MV88E6XXX_PORT_STS_CMODE_RXAUI:
 		err = mv88e6390_serdes_power_10g(chip, lane, up);
+		break;
+	default:
+		err = -EINVAL;
 		break;
 	}

···
 	case MV88E6393X_PORT_STS_CMODE_5GBASER:
 	case MV88E6393X_PORT_STS_CMODE_10GBASER:
 		err = mv88e6390_serdes_power_10g(chip, lane, on);
+		break;
+	default:
+		err = -EINVAL;
 		break;
 	}

+4 -1
drivers/net/dsa/ocelot/felix.c
···
 		}
 	}

-	if (cpu < 0)
+	if (cpu < 0) {
+		kfree(tagging_rule);
+		kfree(redirect_rule);
 		return -EINVAL;
+	}

 	tagging_rule->key_type = OCELOT_VCAP_KEY_ETYPE;
 	*(__be16 *)tagging_rule->key.etype.etype.value = htons(ETH_P_1588);
+6 -3
drivers/net/ethernet/altera/altera_tse_main.c
···
 		priv->rxdescmem_busaddr = dma_res->start;

 	} else {
+		ret = -ENODEV;
 		goto err_free_netdev;
 	}

-	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask)))
+	if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask))) {
 		dma_set_coherent_mask(priv->device,
 				      DMA_BIT_MASK(priv->dmaops->dmamask));
-	else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32)))
+	} else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32))) {
 		dma_set_coherent_mask(priv->device, DMA_BIT_MASK(32));
-	else
+	} else {
+		ret = -EIO;
 		goto err_free_netdev;
+	}

 	/* MAC address space */
 	ret = request_and_map(pdev, "control_port", &control_port,
+3 -1
drivers/net/ethernet/broadcom/bcm4908_enet.c
···

 	enet->irq_tx = platform_get_irq_byname(pdev, "tx");

-	dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+	err = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+	if (err)
+		return err;

 	err = bcm4908_enet_dma_alloc(enet);
 	if (err)
+3
drivers/net/ethernet/freescale/fec.h
···
 #define FEC_ENET_WAKEUP	((uint)0x00020000)	/* Wakeup request */
 #define FEC_ENET_TXF	(FEC_ENET_TXF_0 | FEC_ENET_TXF_1 | FEC_ENET_TXF_2)
 #define FEC_ENET_RXF	(FEC_ENET_RXF_0 | FEC_ENET_RXF_1 | FEC_ENET_RXF_2)
+#define FEC_ENET_RXF_GET(X)	(((X) == 0) ? FEC_ENET_RXF_0 :	\
+				(((X) == 1) ? FEC_ENET_RXF_1 :	\
+				FEC_ENET_RXF_2))
 #define FEC_ENET_TS_AVAIL	((uint)0x00010000)
 #define FEC_ENET_TS_TIMER	((uint)0x00008000)

+1 -1
drivers/net/ethernet/freescale/fec_main.c
···
 			break;
 		pkt_received++;

-		writel(FEC_ENET_RXF, fep->hwp + FEC_IEVENT);
+		writel(FEC_ENET_RXF_GET(queue_id), fep->hwp + FEC_IEVENT);

 		/* Check for errors. */
 		status ^= BD_ENET_RX_LAST;
+3
drivers/net/ethernet/google/gve/gve_utils.c
···
 		set_protocol = ctx->curr_frag_cnt == ctx->expected_frag_cnt - 1;
 	} else {
 		skb = napi_alloc_skb(napi, len);
+
+		if (unlikely(!skb))
+			return NULL;
 		set_protocol = true;
 	}
 	__skb_put(skb, len);
+1
drivers/net/ethernet/huawei/hinic/hinic_sriov.c
···
 #include <linux/interrupt.h>
 #include <linux/etherdevice.h>
 #include <linux/netdevice.h>
+#include <linux/module.h>

 #include "hinic_hw_dev.h"
 #include "hinic_dev.h"
+8
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
···
 		dev_info(&pf->pdev->dev, "vsi %d not found\n", vsi_seid);
 		return;
 	}
+	if (vsi->type != I40E_VSI_MAIN &&
+	    vsi->type != I40E_VSI_FDIR &&
+	    vsi->type != I40E_VSI_VMDQ2) {
+		dev_info(&pf->pdev->dev,
+			 "vsi %d type %d descriptor rings not available\n",
+			 vsi_seid, vsi->type);
+		return;
+	}
 	if (type == RING_TYPE_XDP && !i40e_enabled_xdp_vsi(vsi)) {
 		dev_info(&pf->pdev->dev, "XDP not enabled on VSI %d\n", vsi_seid);
 		return;
+48 -27
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
···
 }

 /**
+ * i40e_sync_vf_state
+ * @vf: pointer to the VF info
+ * @state: VF state
+ *
+ * Called from a VF message to synchronize the service with a potential
+ * VF reset state
+ **/
+static bool i40e_sync_vf_state(struct i40e_vf *vf, enum i40e_vf_states state)
+{
+	int i;
+
+	/* When handling some messages, it needs VF state to be set.
+	 * It is possible that this flag is cleared during VF reset,
+	 * so there is a need to wait until the end of the reset to
+	 * handle the request message correctly.
+	 */
+	for (i = 0; i < I40E_VF_STATE_WAIT_COUNT; i++) {
+		if (test_bit(state, &vf->vf_states))
+			return true;
+		usleep_range(10000, 20000);
+	}
+
+	return test_bit(state, &vf->vf_states);
+}
+
+/**
  * i40e_vc_get_version_msg
  * @vf: pointer to the VF info
  * @msg: pointer to the msg buffer
···
 	size_t len = 0;
 	int ret;

-	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	bool allmulti = false;
 	bool alluni = false;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err_out;
 	}
···
 	struct i40e_vsi *vsi;
 	u16 num_qps_all = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
···
 	i40e_status aq_ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
···
 	struct i40e_pf *pf = vf->pf;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
···
 	u8 cur_pairs = vf->num_queue_pairs;
 	struct i40e_pf *pf = vf->pf;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states))
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE))
 		return -EINVAL;

 	if (req_pairs > I40E_MAX_VF_QUEUES) {
···

 	memset(&stats, 0, sizeof(struct i40e_eth_stats));

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
···
 	i40e_status ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
 		ret = I40E_ERR_PARAM;
 		goto error_param;
···
 	i40e_status ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
 		ret = I40E_ERR_PARAM;
 		goto error_param;
···
 	i40e_status aq_ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vfl->vsi_id)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
···
 	struct i40e_vsi *vsi = NULL;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) ||
-	    (vrk->key_len != I40E_HKEY_ARRAY_SIZE)) {
+	    vrk->key_len != I40E_HKEY_ARRAY_SIZE) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	u16 i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vrl->vsi_id) ||
-	    (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) {
+	    vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	int len = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	struct i40e_hw *hw = &pf->hw;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	struct i40e_vsi *vsi;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	struct i40e_vsi *vsi;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	int i, ret;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
goto err; 3500 3474 } ··· 3625 3599 i40e_status aq_ret = 0; 3626 3600 int i, ret; 3627 3601 3628 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3602 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3629 3603 aq_ret = I40E_ERR_PARAM; 3630 3604 goto err_out; 3631 3605 } ··· 3734 3708 i40e_status aq_ret = 0; 3735 3709 u64 speed = 0; 3736 3710 3737 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3711 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3738 3712 aq_ret = I40E_ERR_PARAM; 3739 3713 goto err; 3740 3714 } ··· 3823 3797 3824 3798 /* set this flag only after making sure all inputs are sane */ 3825 3799 vf->adq_enabled = true; 3826 - /* num_req_queues is set when user changes number of queues via ethtool 3827 - * and this causes issue for default VSI(which depends on this variable) 3828 - * when ADq is enabled, hence reset it. 3829 - */ 3830 - vf->num_req_queues = 0; 3831 3800 3832 3801 /* reset the VF in order to allocate resources */ 3833 3802 i40e_vc_reset_vf(vf, true); ··· 3845 3824 struct i40e_pf *pf = vf->pf; 3846 3825 i40e_status aq_ret = 0; 3847 3826 3848 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3827 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3849 3828 aq_ret = I40E_ERR_PARAM; 3850 3829 goto err; 3851 3830 }
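The i40e change above replaces a one-shot test_bit() check with a bounded poll, so a VF message that races with a VF reset is retried until the state bit reappears instead of failing immediately. A minimal userspace sketch of that pattern (the count and the relax hook stand in for I40E_VF_STATE_WAIT_COUNT and usleep_range(); they are illustrative, not the driver's values):

```c
#include <stdbool.h>

#define WAIT_COUNT 20

/* Poll a state flag a bounded number of times before giving up, like
 * i40e_sync_vf_state() above. relax() models usleep_range(10000, 20000)
 * between retries and may be NULL. */
bool wait_for_state(const bool *state, void (*relax)(void))
{
	int i;

	for (i = 0; i < WAIT_COUNT; i++) {
		if (*state)
			return true;
		if (relax)
			relax();
	}

	/* one final check after the last wait */
	return *state;
}
```

The final re-check after the loop mirrors the driver: the bit may be set between the last iteration's test and the return.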
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h (+2)
···
 
 #define I40E_MAX_VF_PROMISC_FLAGS	3
 
+#define I40E_VF_STATE_WAIT_COUNT	20
+
 /* Various queue ctrls */
 enum i40e_queue_ctrl {
 	I40E_QUEUE_CTRL_UNKNOWN = 0,
drivers/net/ethernet/intel/iavf/iavf_ethtool.c (+32 -11)
···
 	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
 		return -EINVAL;
 
-	new_tx_count = clamp_t(u32, ring->tx_pending,
-			       IAVF_MIN_TXD,
-			       IAVF_MAX_TXD);
-	new_tx_count = ALIGN(new_tx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (ring->tx_pending > IAVF_MAX_TXD ||
+	    ring->tx_pending < IAVF_MIN_TXD ||
+	    ring->rx_pending > IAVF_MAX_RXD ||
+	    ring->rx_pending < IAVF_MIN_RXD) {
+		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
+			   ring->tx_pending, ring->rx_pending, IAVF_MIN_TXD,
+			   IAVF_MAX_RXD, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+		return -EINVAL;
+	}
 
-	new_rx_count = clamp_t(u32, ring->rx_pending,
-			       IAVF_MIN_RXD,
-			       IAVF_MAX_RXD);
-	new_rx_count = ALIGN(new_rx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	new_tx_count = ALIGN(ring->tx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (new_tx_count != ring->tx_pending)
+		netdev_info(netdev, "Requested Tx descriptor count rounded up to %d\n",
+			    new_tx_count);
+
+	new_rx_count = ALIGN(ring->rx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (new_rx_count != ring->rx_pending)
+		netdev_info(netdev, "Requested Rx descriptor count rounded up to %d\n",
+			    new_rx_count);
 
 	/* if nothing to do return success */
 	if ((new_tx_count == adapter->tx_desc_count) &&
-	    (new_rx_count == adapter->rx_desc_count))
+	    (new_rx_count == adapter->rx_desc_count)) {
+		netdev_dbg(netdev, "Nothing to change, descriptor count is same as requested\n");
 		return 0;
+	}
 
-	adapter->tx_desc_count = new_tx_count;
-	adapter->rx_desc_count = new_rx_count;
+	if (new_tx_count != adapter->tx_desc_count) {
+		netdev_dbg(netdev, "Changing Tx descriptor count from %d to %d\n",
+			   adapter->tx_desc_count, new_tx_count);
+		adapter->tx_desc_count = new_tx_count;
+	}
+
+	if (new_rx_count != adapter->rx_desc_count) {
+		netdev_dbg(netdev, "Changing Rx descriptor count from %d to %d\n",
+			   adapter->rx_desc_count, new_rx_count);
+		adapter->rx_desc_count = new_rx_count;
+	}
 
 	if (netif_running(netdev)) {
 		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
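The iavf rework above stops silently clamping out-of-range descriptor counts and instead rejects them, only rounding valid requests up to the descriptor multiple with ALIGN(). A standalone sketch of that check-then-round logic (the min/max/multiple values in the test are illustrative bounds, not necessarily the driver's):

```c
#include <stdint.h>

/* Round x up to the next multiple of the power-of-two a, like the
 * kernel's ALIGN() macro used with IAVF_REQ_DESCRIPTOR_MULTIPLE above. */
uint32_t align_up(uint32_t x, uint32_t a)
{
	return (x + a - 1) & ~(a - 1);
}

/* Range-check first, then round up: mirrors the reworked iavf flow.
 * Returns 0 and the rounded count on success, -1 (caller reports
 * -EINVAL) when the request is out of range. */
int checked_desc_count(uint32_t requested, uint32_t min, uint32_t max,
		       uint32_t mult, uint32_t *out)
{
	if (requested < min || requested > max)
		return -1;
	*out = align_up(requested, mult);
	return 0;
}
```

Rejecting instead of clamping means a user asking for 10 descriptors gets an error they can see, rather than a silently different ring size.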
drivers/net/ethernet/intel/iavf/iavf_main.c (+1)
···
 	}
 
 	pci_set_master(adapter->pdev);
+	pci_restore_msi_state(adapter->pdev);
 
 	if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) {
 		dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
drivers/net/ethernet/intel/ice/ice_dcb_nl.c (+12 -6)
···
 
 	new_cfg->etscfg.maxtcs = pf->hw.func_caps.common_cap.maxtc;
 
+	if (!bwcfg)
+		new_cfg->etscfg.tcbwtable[0] = 100;
+
 	if (!bwrec)
 		new_cfg->etsrec.tcbwtable[0] = 100;
 
···
 	if (mode == pf->dcbx_cap)
 		return ICE_DCB_NO_HW_CHG;
 
-	pf->dcbx_cap = mode;
 	qos_cfg = &pf->hw.port_info->qos_cfg;
-	if (mode & DCB_CAP_DCBX_VER_CEE) {
-		if (qos_cfg->local_dcbx_cfg.pfc_mode == ICE_QOS_MODE_DSCP)
-			return ICE_DCB_NO_HW_CHG;
+
+	/* DSCP configuration is not DCBx negotiated */
+	if (qos_cfg->local_dcbx_cfg.pfc_mode == ICE_QOS_MODE_DSCP)
+		return ICE_DCB_NO_HW_CHG;
+
+	pf->dcbx_cap = mode;
+
+	if (mode & DCB_CAP_DCBX_VER_CEE)
 		qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_CEE;
-	} else {
+	else
 		qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_IEEE;
-	}
 
 	dev_info(ice_pf_to_dev(pf), "DCBx mode = 0x%x\n", mode);
 	return ICE_DCB_HW_CHG_RST;
drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c (+2 -2)
···
 	bool is_tun = tun == ICE_FD_HW_SEG_TUN;
 	int err;
 
-	if (is_tun && !ice_get_open_tunnel_port(&pf->hw, &port_num))
+	if (is_tun && !ice_get_open_tunnel_port(&pf->hw, &port_num, TNL_ALL))
 		continue;
 	err = ice_fdir_write_fltr(pf, input, add, is_tun);
 	if (err)
···
 	}
 
 	/* return error if not an update and no available filters */
-	fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port) ? 2 : 1;
+	fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port, TNL_ALL) ? 2 : 1;
 	if (!ice_fdir_find_fltr_by_idx(hw, fsp->location) &&
 	    ice_fdir_num_avail_fltr(hw, pf->vsi[vsi->idx]) < fltrs_needed) {
 		dev_err(dev, "Failed to add filter. The maximum number of flow director filters has been reached.\n");
drivers/net/ethernet/intel/ice/ice_fdir.c (+1 -1)
···
 		memcpy(pkt, ice_fdir_pkt[idx].pkt, ice_fdir_pkt[idx].pkt_len);
 		loc = pkt;
 	} else {
-		if (!ice_get_open_tunnel_port(hw, &tnl_port))
+		if (!ice_get_open_tunnel_port(hw, &tnl_port, TNL_ALL))
 			return ICE_ERR_DOES_NOT_EXIST;
 		if (!ice_fdir_pkt[idx].tun_pkt)
 			return ICE_ERR_PARAM;
drivers/net/ethernet/intel/ice/ice_flex_pipe.c (+5 -2)
···
  * ice_get_open_tunnel_port - retrieve an open tunnel port
  * @hw: pointer to the HW structure
  * @port: returns open port
+ * @type: type of tunnel, can be TNL_LAST if it doesn't matter
  */
 bool
-ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port)
+ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port,
+			 enum ice_tunnel_type type)
 {
 	bool res = false;
 	u16 i;
···
 	mutex_lock(&hw->tnl_lock);
 
 	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
-		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].port) {
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].port &&
+		    (type == TNL_LAST || type == hw->tnl.tbl[i].type)) {
 			*port = hw->tnl.tbl[i].port;
 			res = true;
 			break;
drivers/net/ethernet/intel/ice/ice_flex_pipe.h (+2 -1)
···
 ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt,
		   unsigned long *bm, struct list_head *fv_list);
 bool
-ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port);
+ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port,
+			 enum ice_tunnel_type type);
 int ice_udp_tunnel_set_port(struct net_device *netdev, unsigned int table,
			    unsigned int idx, struct udp_tunnel_info *ti);
 int ice_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table,
drivers/net/ethernet/intel/ice/ice_main.c (+21 -11)
···
 		netif_carrier_on(vsi->netdev);
 	}
 
+	/* clear this now, and the first stats read will be used as baseline */
+	vsi->stat_offsets_loaded = false;
+
 	ice_service_task_schedule(pf);
 
 	return 0;
···
 /**
  * ice_update_vsi_tx_ring_stats - Update VSI Tx ring stats counters
  * @vsi: the VSI to be updated
+ * @vsi_stats: the stats struct to be updated
  * @rings: rings to work on
  * @count: number of rings
  */
 static void
-ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, struct ice_tx_ring **rings,
-			     u16 count)
+ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi,
+			     struct rtnl_link_stats64 *vsi_stats,
+			     struct ice_tx_ring **rings, u16 count)
 {
-	struct rtnl_link_stats64 *vsi_stats = &vsi->net_stats;
 	u16 i;
 
 	for (i = 0; i < count; i++) {
···
  */
 static void ice_update_vsi_ring_stats(struct ice_vsi *vsi)
 {
-	struct rtnl_link_stats64 *vsi_stats = &vsi->net_stats;
+	struct rtnl_link_stats64 *vsi_stats;
 	u64 pkts, bytes;
 	int i;
 
-	/* reset netdev stats */
-	vsi_stats->tx_packets = 0;
-	vsi_stats->tx_bytes = 0;
-	vsi_stats->rx_packets = 0;
-	vsi_stats->rx_bytes = 0;
+	vsi_stats = kzalloc(sizeof(*vsi_stats), GFP_ATOMIC);
+	if (!vsi_stats)
+		return;
 
 	/* reset non-netdev (extended) stats */
 	vsi->tx_restart = 0;
···
 	rcu_read_lock();
 
 	/* update Tx rings counters */
-	ice_update_vsi_tx_ring_stats(vsi, vsi->tx_rings, vsi->num_txq);
+	ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->tx_rings,
+				     vsi->num_txq);
 
 	/* update Rx rings counters */
 	ice_for_each_rxq(vsi, i) {
···
 
 	/* update XDP Tx rings counters */
 	if (ice_is_xdp_ena_vsi(vsi))
-		ice_update_vsi_tx_ring_stats(vsi, vsi->xdp_rings,
+		ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->xdp_rings,
					     vsi->num_xdp_txq);
 
 	rcu_read_unlock();
+
+	vsi->net_stats.tx_packets = vsi_stats->tx_packets;
+	vsi->net_stats.tx_bytes = vsi_stats->tx_bytes;
+	vsi->net_stats.rx_packets = vsi_stats->rx_packets;
+	vsi->net_stats.rx_bytes = vsi_stats->rx_bytes;
+
+	kfree(vsi_stats);
 }
 
 /**
drivers/net/ethernet/intel/ice/ice_switch.c (+14 -7)
···
  * ice_find_recp - find a recipe
  * @hw: pointer to the hardware structure
  * @lkup_exts: extension sequence to match
+ * @tun_type: type of recipe tunnel
  *
  * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found.
  */
-static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
+static u16
+ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
+	      enum ice_sw_tunnel_type tun_type)
 {
 	bool refresh_required = true;
 	struct ice_sw_recipe *recp;
···
 		}
 		/* If for "i"th recipe the found was never set to false
 		 * then it means we found our match
+		 * Also tun type of recipe needs to be checked
 		 */
-		if (found)
+		if (found && recp[i].tun_type == tun_type)
 			return i; /* Return the recipe ID */
 	}
 }
···
 	}
 
 	/* Look for a recipe which matches our requested fv / mask list */
-	*rid = ice_find_recp(hw, lkup_exts);
+	*rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type);
 	if (*rid < ICE_MAX_NUM_RECIPES)
 		/* Success if found a recipe that match the existing criteria */
 		goto err_unroll;
 
+	rm->tun_type = rinfo->tun_type;
 	/* Recipe we need does not exist, add a recipe */
 	status = ice_add_sw_recipe(hw, rm, profiles);
 	if (status)
···
 
 	switch (tun_type) {
 	case ICE_SW_TUN_VXLAN:
-	case ICE_SW_TUN_GENEVE:
-		if (!ice_get_open_tunnel_port(hw, &open_port))
+		if (!ice_get_open_tunnel_port(hw, &open_port, TNL_VXLAN))
 			return ICE_ERR_CFG;
 		break;
-
+	case ICE_SW_TUN_GENEVE:
+		if (!ice_get_open_tunnel_port(hw, &open_port, TNL_GENEVE))
+			return ICE_ERR_CFG;
+		break;
 	default:
 		/* Nothing needs to be done for this tunnel type */
 		return 0;
···
 	if (status)
 		return status;
 
-	rid = ice_find_recp(hw, &lkup_exts);
+	rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type);
 	/* If did not find a recipe that match the existing criteria */
 	if (rid == ICE_MAX_NUM_RECIPES)
 		return ICE_ERR_PARAM;
drivers/net/ethernet/intel/ice/ice_tc_lib.c (+12 -18)
···
 	return inner ? ICE_IPV6_IL : ICE_IPV6_OFOS;
 }
 
-static enum ice_protocol_type
-ice_proto_type_from_l4_port(bool inner, u16 ip_proto)
+static enum ice_protocol_type ice_proto_type_from_l4_port(u16 ip_proto)
 {
-	if (inner) {
-		switch (ip_proto) {
-		case IPPROTO_UDP:
-			return ICE_UDP_ILOS;
-		}
-	} else {
-		switch (ip_proto) {
-		case IPPROTO_TCP:
-			return ICE_TCP_IL;
-		case IPPROTO_UDP:
-			return ICE_UDP_OF;
-		}
+	switch (ip_proto) {
+	case IPPROTO_TCP:
+		return ICE_TCP_IL;
+	case IPPROTO_UDP:
+		return ICE_UDP_ILOS;
 	}
 
 	return 0;
···
 		i++;
 	}
 
-	if (flags & ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT) {
-		list[i].type = ice_proto_type_from_l4_port(false, hdr->l3_key.ip_proto);
+	if ((flags & ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT) &&
+	    hdr->l3_key.ip_proto == IPPROTO_UDP) {
+		list[i].type = ICE_UDP_OF;
 		list[i].h_u.l4_hdr.dst_port = hdr->l4_key.dst_port;
 		list[i].m_u.l4_hdr.dst_port = hdr->l4_mask.dst_port;
 		i++;
···
 	     ICE_TC_FLWR_FIELD_SRC_L4_PORT)) {
 		struct ice_tc_l4_hdr *l4_key, *l4_mask;
 
-		list[i].type = ice_proto_type_from_l4_port(inner, headers->l3_key.ip_proto);
+		list[i].type = ice_proto_type_from_l4_port(headers->l3_key.ip_proto);
 		l4_key = &headers->l4_key;
 		l4_mask = &headers->l4_mask;
 
···
 		headers->l3_mask.ttl = match.mask->ttl;
 	}
 
-	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) {
+	if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS) &&
+	    fltr->tunnel_type != TNL_VXLAN && fltr->tunnel_type != TNL_GENEVE) {
 		struct flow_match_ports match;
 
 		flow_rule_match_enc_ports(rule, &match);
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c (+6)
···
 	ice_vc_set_default_allowlist(vf);
 
 	ice_vf_fdir_exit(vf);
+	ice_vf_fdir_init(vf);
 	/* clean VF control VSI when resetting VFs since it should be
 	 * setup only when VF creates its first FDIR rule.
 	 */
···
 	}
 
 	ice_vf_fdir_exit(vf);
+	ice_vf_fdir_init(vf);
 	/* clean VF control VSI when resetting VF since it should be setup
 	 * only when VF creates its first FDIR rule.
 	 */
···
 	ret = ice_eswitch_configure(pf);
 	if (ret)
 		goto err_unroll_sriov;
+
+	/* rearm global interrupts */
+	if (test_and_clear_bit(ICE_OICR_INTR_DIS, pf->state))
+		ice_irq_dynamic_ena(hw, NULL, NULL);
 
 	return 0;
 
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c (+2 -2)
···
 	mvpp2_rxq_status_update(port, rxq->id, 0, rxq->size);
 
 	if (priv->percpu_pools) {
-		err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->id, 0);
+		err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->logic_rxq, 0);
 		if (err < 0)
 			goto err_free_dma;
 
-		err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->id, 0);
+		err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->logic_rxq, 0);
 		if (err < 0)
 			goto err_unregister_rxq_short;
 
drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c (+2)
···
  *
  */
 
+#include <linux/module.h>
+
 #include "otx2_common.h"
 #include "otx2_ptp.h"
 
drivers/net/ethernet/microsoft/mana/hw_channel.c (+5 -5)
···
 	if (err)
 		goto out;
 
-	err = mana_hwc_alloc_dma_buf(hwc, q_depth, max_msg_size,
-				     &hwc_wq->msg_buf);
-	if (err)
-		goto out;
-
 	hwc_wq->hwc = hwc;
 	hwc_wq->gdma_wq = queue;
 	hwc_wq->queue_depth = q_depth;
 	hwc_wq->hwc_cq = hwc_cq;
+
+	err = mana_hwc_alloc_dma_buf(hwc, q_depth, max_msg_size,
+				     &hwc_wq->msg_buf);
+	if (err)
+		goto out;
 
 	*hwc_wq_ptr = hwc_wq;
 	return 0;
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c (+3 -1)
···
 		return -ENOMEM;
 
 	cache = kzalloc(sizeof(*cache), GFP_KERNEL);
-	if (!cache)
+	if (!cache) {
+		nfp_cpp_area_free(area);
 		return -ENOMEM;
+	}
 
 	cache->id = 0;
 	cache->addr = 0;
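The nfp fix above is the classic two-step allocation leak: when the second allocation fails, the first must be released before returning. A userspace sketch of the pattern with an injectable allocator so the failure path is exercisable (plain malloc/free stand in for the nfp helpers; names here are illustrative):

```c
#include <stddef.h>
#include <stdlib.h>

/* Allocator hook so tests can force a failure; the real code uses
 * nfp_cpp_area_alloc()/kzalloc(). */
typedef void *(*alloc_fn)(size_t);

/* Test knob: when nonzero, the Nth call to test_alloc() returns NULL. */
int fail_after;

void *test_alloc(size_t n)
{
	if (fail_after && --fail_after == 0)
		return NULL;
	return malloc(n);
}

/* Mirrors nfp_cpp_area_cache_add(): allocate an area, then a cache
 * entry; if the second allocation fails, free the first (the fix). */
int cache_add(alloc_fn alloc, void **area_out, void **cache_out)
{
	void *area = alloc(64);
	if (!area)
		return -1;

	void *cache = alloc(128);
	if (!cache) {
		free(area);	/* the fix: don't leak the area */
		return -1;
	}

	*area_out = area;
	*cache_out = cache;
	return 0;
}
```

Before the fix, the early `return -ENOMEM` on the cache allocation left the already-allocated area unreachable, exactly the `area` leak the injected-failure path demonstrates.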
drivers/net/ethernet/qlogic/qede/qede_fp.c (+7)
···
 			data_split = true;
 		}
 	} else {
+		if (unlikely(skb->len > ETH_TX_MAX_NON_LSO_PKT_LEN)) {
+			DP_ERR(edev, "Unexpected non LSO skb length = 0x%x\n", skb->len);
+			qede_free_failed_tx_pkt(txq, first_bd, 0, false);
+			qede_update_tx_producer(txq);
+			return NETDEV_TX_OK;
+		}
+
 		val |= ((skb->len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK) <<
 			ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT);
 	}
drivers/net/ethernet/qlogic/qla3xxx.c (+9 -10)
···
 
 	spin_lock_irqsave(&qdev->hw_lock, hw_flags);
 
-	err = ql_wait_for_drvr_lock(qdev);
-	if (err) {
-		err = ql_adapter_initialize(qdev);
-		if (err) {
-			netdev_err(ndev, "Unable to initialize adapter\n");
-			goto err_init;
-		}
-		netdev_err(ndev, "Releasing driver lock\n");
-		ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);
-	} else {
+	if (!ql_wait_for_drvr_lock(qdev)) {
 		netdev_err(ndev, "Could not acquire driver lock\n");
+		err = -ENODEV;
 		goto err_lock;
 	}
+
+	err = ql_adapter_initialize(qdev);
+	if (err) {
+		netdev_err(ndev, "Unable to initialize adapter\n");
+		goto err_init;
+	}
+	ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);
 
 	spin_unlock_irqrestore(&qdev->hw_lock, hw_flags);
 
drivers/net/phy/phylink.c (+1)
···
  * @mac_wol: true if the MAC needs to receive packets for Wake-on-Lan
  *
  * Handle a network device suspend event. There are several cases:
+ *
  * - If Wake-on-Lan is not active, we can bring down the link between
  *   the MAC and PHY by calling phylink_stop().
  * - If Wake-on-Lan is active, and being handled only by the PHY, we
drivers/net/usb/cdc_ncm.c (+2)
···
 	min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);
 
 	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
+	if (max == 0)
+		max = CDC_NCM_NTB_MAX_SIZE_TX;	/* dwNtbOutMaxSize not set */
 
 	/* some devices set dwNtbOutMaxSize too low for the above default */
 	min = min(min, max);
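The cdc_ncm guard above handles devices that report dwNtbOutMaxSize as 0: without it, min_t() collapses the Tx buffer ceiling to 0 and the subsequent `min = min(min, max)` zeroes the usable size. A sketch of the clamp-with-zero-fallback (the MAX_SIZE_TX value here is illustrative, not necessarily the driver constant):

```c
#include <stdint.h>

#define MAX_SIZE_TX 32768u	/* stands in for CDC_NCM_NTB_MAX_SIZE_TX */

/* Clamp the device-reported max to the driver ceiling, falling back to
 * the ceiling when the device reports 0 (field not set). */
uint32_t tx_buf_max(uint32_t dev_reported)
{
	uint32_t max = dev_reported < MAX_SIZE_TX ? dev_reported : MAX_SIZE_TX;

	if (max == 0)
		max = MAX_SIZE_TX;	/* dwNtbOutMaxSize not set */
	return max;
}
```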
drivers/net/vmxnet3/vmxnet3_drv.c (+7 -6)
···
 
 #ifdef CONFIG_PCI_MSI
 	if (adapter->intr.type == VMXNET3_IT_MSIX) {
-		int i, nvec;
+		int i, nvec, nvec_allocated;
 
 		nvec = adapter->share_intr == VMXNET3_INTR_TXSHARE ?
 			1 : adapter->num_tx_queues;
···
 		for (i = 0; i < nvec; i++)
 			adapter->intr.msix_entries[i].entry = i;
 
-		nvec = vmxnet3_acquire_msix_vectors(adapter, nvec);
-		if (nvec < 0)
+		nvec_allocated = vmxnet3_acquire_msix_vectors(adapter, nvec);
+		if (nvec_allocated < 0)
 			goto msix_err;
 
 		/* If we cannot allocate one MSIx vector per queue
 		 * then limit the number of rx queues to 1
 		 */
-		if (nvec == VMXNET3_LINUX_MIN_MSIX_VECT) {
+		if (nvec_allocated == VMXNET3_LINUX_MIN_MSIX_VECT &&
+		    nvec != VMXNET3_LINUX_MIN_MSIX_VECT) {
 			if (adapter->share_intr != VMXNET3_INTR_BUDDYSHARE
 			    || adapter->num_rx_queues != 1) {
 				adapter->share_intr = VMXNET3_INTR_TXSHARE;
···
 			}
 		}
 
-		adapter->intr.num_intrs = nvec;
+		adapter->intr.num_intrs = nvec_allocated;
 		return;
 
 msix_err:
 		/* If we cannot allocate MSIx vectors use only one rx queue */
 		dev_info(&adapter->pdev->dev,
			 "Failed to enable MSI-X, error %d. "
-			 "Limiting #rx queues to 1, try MSI.\n", nvec);
+			 "Limiting #rx queues to 1, try MSI.\n", nvec_allocated);
 
 		adapter->intr.type = VMXNET3_IT_MSI;
 	}
drivers/net/vrf.c (+4 -4)
···
 
 	skb->dev = vrf_dev;
 
-	vrf_nf_set_untracked(skb);
-
 	err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk,
		      skb, NULL, vrf_dev, vrf_ip6_out_direct_finish);
 
···
 	/* don't divert link scope packets */
 	if (rt6_need_strict(&ipv6_hdr(skb)->daddr))
 		return skb;
+
+	vrf_nf_set_untracked(skb);
 
 	if (qdisc_tx_is_default(vrf_dev) ||
	    IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED)
···
 
 	skb->dev = vrf_dev;
 
-	vrf_nf_set_untracked(skb);
-
 	err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
		      skb, NULL, vrf_dev, vrf_ip_out_direct_finish);
 
···
 	if (ipv4_is_multicast(ip_hdr(skb)->daddr) ||
	    ipv4_is_lbcast(ip_hdr(skb)->daddr))
 		return skb;
+
+	vrf_nf_set_untracked(skb);
 
 	if (qdisc_tx_is_default(vrf_dev) ||
	    IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
drivers/net/wwan/iosm/iosm_ipc_imem.c (+17 -9)
···
 bool ipc_imem_ul_write_td(struct iosm_imem *ipc_imem)
 {
 	struct ipc_mem_channel *channel;
+	bool hpda_ctrl_pending = false;
 	struct sk_buff_head *ul_list;
 	bool hpda_pending = false;
-	bool forced_hpdu = false;
 	struct ipc_pipe *pipe;
 	int i;
 
···
 		ul_list = &channel->ul_list;
 
 		/* Fill the transfer descriptor with the uplink buffer info. */
-		hpda_pending |= ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
-							pipe, ul_list);
-
-		/* forced HP update needed for non data channels */
-		if (hpda_pending && !ipc_imem_check_wwan_ips(channel))
-			forced_hpdu = true;
+		if (!ipc_imem_check_wwan_ips(channel)) {
+			hpda_ctrl_pending |=
+				ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
+							pipe, ul_list);
+		} else {
+			hpda_pending |=
+				ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
+							pipe, ul_list);
+		}
 	}
 
-	if (forced_hpdu) {
+	/* forced HP update needed for non data channels */
+	if (hpda_ctrl_pending) {
 		hpda_pending = false;
 		ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
					      IPC_HP_UL_WRITE_TD);
···
			"Modem link down. Exit run state worker.");
 		return;
 	}
+
+	if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag))
+		ipc_devlink_deinit(ipc_imem->ipc_devlink);
 
 	if (!ipc_imem_setup_cp_mux_cap_init(ipc_imem, &mux_cfg))
 		ipc_imem->mux = ipc_mux_init(&mux_cfg, ipc_imem);
···
 		ipc_port_deinit(ipc_imem->ipc_port);
 	}
 
-	if (ipc_imem->ipc_devlink)
+	if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag))
 		ipc_devlink_deinit(ipc_imem->ipc_devlink);
 
 	ipc_imem_device_ipc_uninit(ipc_imem);
···
 
 	ipc_imem->pci_device_id = device_id;
 
-	ipc_imem->ev_cdev_write_pending = false;
 	ipc_imem->cp_version = 0;
 	ipc_imem->device_sleep = IPC_HOST_SLEEP_ENTER_SLEEP;
 
···
 
 		if (ipc_flash_link_establish(ipc_imem))
 			goto devlink_channel_fail;
+
+		set_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag);
 	}
 	return ipc_imem;
 devlink_channel_fail:
drivers/net/wwan/iosm/iosm_ipc_imem.h (+1 -3)
···
 #define IOSM_CHIP_INFO_SIZE_MAX 100
 
 #define FULLY_FUNCTIONAL 0
+#define IOSM_DEVLINK_INIT 1
 
 /* List of the supported UL/DL pipes. */
 enum ipc_mem_pipes {
···
  *				process the irq actions.
  * @flag:			Flag to monitor the state of driver
  * @td_update_timer_suspended:	if true then td update timer suspend
- * @ev_cdev_write_pending:	0 means inform the IPC tasklet to pass
- *				the accumulated uplink buffers to CP.
  * @ev_mux_net_transmit_pending:0 means inform the IPC tasklet to pass
  * @reset_det_n:		Reset detect flag
  * @pcie_wake_n:		Pcie wake flag
···
 	u8 ev_irq_pending[IPC_IRQ_VECTORS];
 	unsigned long flag;
 	u8 td_update_timer_suspended:1,
-	   ev_cdev_write_pending:1,
	   ev_mux_net_transmit_pending:1,
	   reset_det_n:1,
	   pcie_wake_n:1;
drivers/net/wwan/iosm/iosm_ipc_imem_ops.c (+1 -6)
···
 static int ipc_imem_tq_cdev_write(struct iosm_imem *ipc_imem, int arg,
				  void *msg, size_t size)
 {
-	ipc_imem->ev_cdev_write_pending = false;
 	ipc_imem_ul_send(ipc_imem);
 
 	return 0;
···
 /* Through tasklet to do sio write. */
 static int ipc_imem_call_cdev_write(struct iosm_imem *ipc_imem)
 {
-	if (ipc_imem->ev_cdev_write_pending)
-		return -1;
-
-	ipc_imem->ev_cdev_write_pending = true;
-
 	return ipc_task_queue_send_task(ipc_imem, ipc_imem_tq_cdev_write, 0,
					NULL, 0, false);
 }
···
 	/* Release the pipe resources */
 	ipc_imem_pipe_cleanup(ipc_imem, &channel->ul_pipe);
 	ipc_imem_pipe_cleanup(ipc_imem, &channel->dl_pipe);
+	ipc_imem->nr_of_channels--;
 }
 
 void ipc_imem_sys_devlink_notify_rx(struct iosm_devlink *ipc_devlink,
drivers/pci/controller/dwc/pci-exynos.c (+1)
···
 #include <linux/platform_device.h>
 #include <linux/phy/phy.h>
 #include <linux/regulator/consumer.h>
+#include <linux/module.h>
 
 #include "pcie-designware.h"
 
drivers/pci/controller/dwc/pcie-qcom-ep.c (+1)
···
 #include <linux/pm_domain.h>
 #include <linux/regmap.h>
 #include <linux/reset.h>
+#include <linux/module.h>
 
 #include "pcie-designware.h"
 
drivers/usb/cdns3/host.c (+1)
···
  */
 
 #include <linux/platform_device.h>
+#include <linux/slab.h>
 #include "core.h"
 #include "drd.h"
 #include "host-export.h"
include/linux/bpf.h (+3 -14)
···
 struct bpf_trampoline *bpf_trampoline_get(u64 key,
					  struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
+int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs);
 #define BPF_DISPATCHER_INIT(_name) {				\
	.mutex = __MUTEX_INITIALIZER(_name.mutex),		\
	.func = &_name##_func,					\
···
  * kprobes, tracepoints) to prevent deadlocks on map operations as any of
  * these events can happen inside a region which holds a map bucket lock
  * and can deadlock on it.
- *
- * Use the preemption safe inc/dec variants on RT because migrate disable
- * is preemptible on RT and preemption in the middle of the RMW operation
- * might lead to inconsistent state. Use the raw variants for non RT
- * kernels as migrate_disable() maps to preempt_disable() so the slightly
- * more expensive save operation can be avoided.
  */
 static inline void bpf_disable_instrumentation(void)
 {
 	migrate_disable();
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_inc(bpf_prog_active);
-	else
-		__this_cpu_inc(bpf_prog_active);
+	this_cpu_inc(bpf_prog_active);
 }
 
 static inline void bpf_enable_instrumentation(void)
 {
-	if (IS_ENABLED(CONFIG_PREEMPT_RT))
-		this_cpu_dec(bpf_prog_active);
-	else
-		__this_cpu_dec(bpf_prog_active);
+	this_cpu_dec(bpf_prog_active);
 	migrate_enable();
 }
 
+10 -4
include/linux/btf.h
··· 245 245 struct module *owner; 246 246 }; 247 247 248 - struct kfunc_btf_id_list; 248 + struct kfunc_btf_id_list { 249 + struct list_head list; 250 + struct mutex mutex; 251 + }; 249 252 250 253 #ifdef CONFIG_DEBUG_INFO_BTF_MODULES 251 254 void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l, ··· 257 254 struct kfunc_btf_id_set *s); 258 255 bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist, u32 kfunc_id, 259 256 struct module *owner); 257 + 258 + extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list; 259 + extern struct kfunc_btf_id_list prog_test_kfunc_list; 260 260 #else 261 261 static inline void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l, 262 262 struct kfunc_btf_id_set *s) ··· 274 268 { 275 269 return false; 276 270 } 271 + 272 + static struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list __maybe_unused; 273 + static struct kfunc_btf_id_list prog_test_kfunc_list __maybe_unused; 277 274 #endif 278 275 279 276 #define DEFINE_KFUNC_BTF_ID_SET(set, name) \ 280 277 struct kfunc_btf_id_set name = { LIST_HEAD_INIT(name.list), (set), \ 281 278 THIS_MODULE } 282 - 283 - extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list; 284 - extern struct kfunc_btf_id_list prog_test_kfunc_list; 285 279 286 280 #endif
-1
include/linux/cacheinfo.h
··· 3 3 #define _LINUX_CACHEINFO_H 4 4 5 5 #include <linux/bitops.h> 6 - #include <linux/cpu.h> 7 6 #include <linux/cpumask.h> 8 7 #include <linux/smp.h> 9 8
+1
include/linux/device/driver.h
··· 18 18 #include <linux/klist.h> 19 19 #include <linux/pm.h> 20 20 #include <linux/device/bus.h> 21 + #include <linux/module.h> 21 22 22 23 /** 23 24 * enum probe_type - device driver probe type to try
+1 -4
include/linux/filter.h
··· 6 6 #define __LINUX_FILTER_H__ 7 7 8 8 #include <linux/atomic.h> 9 + #include <linux/bpf.h> 9 10 #include <linux/refcount.h> 10 11 #include <linux/compat.h> 11 12 #include <linux/skbuff.h> ··· 27 26 28 27 #include <asm/byteorder.h> 29 28 #include <uapi/linux/filter.h> 30 - #include <uapi/linux/bpf.h> 31 29 32 30 struct sk_buff; 33 31 struct sock; ··· 640 640 * This uses migrate_disable/enable() explicitly to document that the 641 641 * invocation of a BPF program does not require reentrancy protection 642 642 * against a BPF program which is invoked from a preempting task. 643 - * 644 - * For non RT enabled kernels migrate_disable/enable() maps to 645 - * preempt_disable/enable(), i.e. it disables also preemption. 646 643 */ 647 644 static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog, 648 645 const void *ctx)
+6 -5
include/linux/phy.h
··· 538 538 * @mac_managed_pm: Set true if MAC driver takes of suspending/resuming PHY 539 539 * @state: State of the PHY for management purposes 540 540 * @dev_flags: Device-specific flags used by the PHY driver. 541 - * Bits [15:0] are free to use by the PHY driver to communicate 542 - * driver specific behavior. 543 - * Bits [23:16] are currently reserved for future use. 544 - * Bits [31:24] are reserved for defining generic 545 - * PHY driver behavior. 541 + * 542 + * - Bits [15:0] are free to use by the PHY driver to communicate 543 + * driver specific behavior. 544 + * - Bits [23:16] are currently reserved for future use. 545 + * - Bits [31:24] are reserved for defining generic 546 + * PHY driver behavior. 546 547 * @irq: IRQ number of the PHY's interrupt (-1 if none) 547 548 * @phy_timer: The timer for handling the state machine 548 549 * @phylink: Pointer to phylink instance for this PHY
+1 -1
include/net/bond_alb.h
··· 126 126 struct alb_bond_info { 127 127 struct tlb_client_info *tx_hashtbl; /* Dynamically allocated */ 128 128 u32 unbalanced_load; 129 - int tx_rebalance_counter; 129 + atomic_t tx_rebalance_counter; 130 130 int lp_counter; 131 131 /* -------- rlb parameters -------- */ 132 132 int rlb_enabled;
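The hunk above converts a plain `int` rebalance counter that is touched from several contexts into an `atomic_t`, so concurrent read-modify-write updates cannot be lost. A minimal userspace sketch of the same pattern with C11 atomics (the names `rebalance_tick`/`link_up_event` are illustrative, not bonding driver API):

```c
#include <stdatomic.h>

/* A rebalance counter decremented from a timer context and reset from
 * link events.  A plain int "if (c > 0) c--;" can lose updates when the
 * contexts race; an atomic compare-and-swap loop cannot. */
static atomic_int tx_rebalance_counter;

static void rebalance_tick(void)
{
	int old = atomic_load(&tx_rebalance_counter);

	/* atomic equivalent of "if (counter > 0) counter--;" */
	while (old > 0 &&
	       !atomic_compare_exchange_weak(&tx_rebalance_counter,
					     &old, old - 1))
		;	/* CAS failed: 'old' now holds the fresh value, retry */
}

static void link_up_event(int ticks)
{
	atomic_store(&tx_rebalance_counter, ticks);
}
```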
+13
include/net/busy_poll.h
··· 136 136 sk_rx_queue_update(sk, skb); 137 137 } 138 138 139 + /* Variant of sk_mark_napi_id() for passive flow setup, 140 + * as sk->sk_napi_id and sk->sk_rx_queue_mapping content 141 + * needs to be set. 142 + */ 143 + static inline void sk_mark_napi_id_set(struct sock *sk, 144 + const struct sk_buff *skb) 145 + { 146 + #ifdef CONFIG_NET_RX_BUSY_POLL 147 + WRITE_ONCE(sk->sk_napi_id, skb->napi_id); 148 + #endif 149 + sk_rx_queue_set(sk, skb); 150 + } 151 + 139 152 static inline void __sk_mark_napi_id_once(struct sock *sk, unsigned int napi_id) 140 153 { 141 154 #ifdef CONFIG_NET_RX_BUSY_POLL
+3 -3
include/net/netfilter/nf_conntrack.h
··· 276 276 /* jiffies until ct expires, 0 if already expired */ 277 277 static inline unsigned long nf_ct_expires(const struct nf_conn *ct) 278 278 { 279 - s32 timeout = ct->timeout - nfct_time_stamp; 279 + s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp; 280 280 281 281 return timeout > 0 ? timeout : 0; 282 282 } 283 283 284 284 static inline bool nf_ct_is_expired(const struct nf_conn *ct) 285 285 { 286 - return (__s32)(ct->timeout - nfct_time_stamp) <= 0; 286 + return (__s32)(READ_ONCE(ct->timeout) - nfct_time_stamp) <= 0; 287 287 } 288 288 289 289 /* use after obtaining a reference count */ ··· 302 302 static inline void nf_ct_offload_timeout(struct nf_conn *ct) 303 303 { 304 304 if (nf_ct_expires(ct) < NF_CT_DAY / 2) 305 - ct->timeout = nfct_time_stamp + NF_CT_DAY; 305 + WRITE_ONCE(ct->timeout, nfct_time_stamp + NF_CT_DAY); 306 306 } 307 307 308 308 struct kernel_param;
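The hunk above pairs every access to the concurrently-written `ct->timeout` with `READ_ONCE()`/`WRITE_ONCE()`. A minimal userspace sketch of that annotation pattern, assuming the usual volatile-access approximation of the kernel macros (the `struct conn` and helper are illustrative, not conntrack API):

```c
#include <stdint.h>

/* Userspace approximations of the kernel's READ_ONCE()/WRITE_ONCE():
 * the volatile access forces exactly one untorn load or store and stops
 * the compiler from re-reading or caching the field. */
#define READ_ONCE(x)     (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

struct conn {
	uint32_t timeout;	/* written concurrently, hence the annotations */
};

/* Mirrors the nf_ct_expires() shape: ticks until expiry, 0 if expired. */
static int32_t conn_expires(const struct conn *c, uint32_t now)
{
	int32_t timeout = (int32_t)(READ_ONCE(c->timeout) - now);

	return timeout > 0 ? timeout : 0;
}
```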
+2 -9
kernel/bpf/btf.c
··· 6346 6346 6347 6347 /* BTF ID set registration API for modules */ 6348 6348 6349 - struct kfunc_btf_id_list { 6350 - struct list_head list; 6351 - struct mutex mutex; 6352 - }; 6353 - 6354 6349 #ifdef CONFIG_DEBUG_INFO_BTF_MODULES 6355 6350 6356 6351 void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l, ··· 6371 6376 { 6372 6377 struct kfunc_btf_id_set *s; 6373 6378 6374 - if (!owner) 6375 - return false; 6376 6379 mutex_lock(&klist->mutex); 6377 6380 list_for_each_entry(s, &klist->list, list) { 6378 6381 if (s->owner == owner && btf_id_set_contains(s->set, kfunc_id)) { ··· 6382 6389 return false; 6383 6390 } 6384 6391 6385 - #endif 6386 - 6387 6392 #define DEFINE_KFUNC_BTF_ID_LIST(name) \ 6388 6393 struct kfunc_btf_id_list name = { LIST_HEAD_INIT(name.list), \ 6389 6394 __MUTEX_INITIALIZER(name.mutex) }; \ ··· 6389 6398 6390 6399 DEFINE_KFUNC_BTF_ID_LIST(bpf_tcp_ca_kfunc_list); 6391 6400 DEFINE_KFUNC_BTF_ID_LIST(prog_test_kfunc_list); 6401 + 6402 + #endif
+1 -1
kernel/bpf/verifier.c
··· 8422 8422 8423 8423 new_range = dst_reg->off; 8424 8424 if (range_right_open) 8425 - new_range--; 8425 + new_range++; 8426 8426 8427 8427 /* Examples for register markings: 8428 8428 *
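The one-character hunk above turns an off-by-two into the correct bound: after a branch where a right-open comparison like `pkt + off < pkt_end` is known to hold, the byte at `pkt + off` itself lies below `pkt_end`, so `off + 1` bytes are provably readable; a closed comparison (`<=`) proves only `off`. A small sketch of that reasoning checked against a concrete buffer length (helper names are illustrative, not the verifier's):

```c
#include <stdbool.h>

/* Bytes provably readable at p once "p + off < end" (right-open) or
 * "p + off <= end" (closed) is known to hold. */
static int provable_range(int off, bool right_open)
{
	return right_open ? off + 1 : off;
}

/* Validate the rule against a packet of 'len' bytes: whenever the guard
 * holds, the marked range must still fit inside the packet. */
static bool range_is_safe(int len, int off, bool right_open)
{
	bool guard_holds = right_open ? (off < len) : (off <= len);

	return !guard_holds || provable_range(off, right_open) <= len;
}
```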
+1
lib/Kconfig.debug
··· 316 316 bool "Generate BTF typeinfo" 317 317 depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED 318 318 depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST 319 + depends on BPF_SYSCALL 319 320 help 320 321 Generate deduplicated BTF type information from DWARF debug info. 321 322 Turning this on expects presence of pahole tool, which will convert
+1
mm/damon/vaddr.c
··· 13 13 #include <linux/mmu_notifier.h> 14 14 #include <linux/page_idle.h> 15 15 #include <linux/pagewalk.h> 16 + #include <linux/sched/mm.h> 16 17 17 18 #include "prmtv-common.h" 18 19
+1
mm/memory_hotplug.c
··· 35 35 #include <linux/memblock.h> 36 36 #include <linux/compaction.h> 37 37 #include <linux/rmap.h> 38 + #include <linux/module.h> 38 39 39 40 #include <asm/tlbflush.h> 40 41
+1
mm/swap_slots.c
··· 30 30 #include <linux/swap_slots.h> 31 31 #include <linux/cpu.h> 32 32 #include <linux/cpumask.h> 33 + #include <linux/slab.h> 33 34 #include <linux/vmalloc.h> 34 35 #include <linux/mutex.h> 35 36 #include <linux/mm.h>
+8 -8
net/core/devlink.c
··· 4110 4110 return err; 4111 4111 } 4112 4112 4113 - if (info->attrs[DEVLINK_ATTR_NETNS_PID] || 4114 - info->attrs[DEVLINK_ATTR_NETNS_FD] || 4115 - info->attrs[DEVLINK_ATTR_NETNS_ID]) { 4116 - dest_net = devlink_netns_get(skb, info); 4117 - if (IS_ERR(dest_net)) 4118 - return PTR_ERR(dest_net); 4119 - } 4120 - 4121 4113 if (info->attrs[DEVLINK_ATTR_RELOAD_ACTION]) 4122 4114 action = nla_get_u8(info->attrs[DEVLINK_ATTR_RELOAD_ACTION]); 4123 4115 else ··· 4152 4160 return -EINVAL; 4153 4161 } 4154 4162 } 4163 + if (info->attrs[DEVLINK_ATTR_NETNS_PID] || 4164 + info->attrs[DEVLINK_ATTR_NETNS_FD] || 4165 + info->attrs[DEVLINK_ATTR_NETNS_ID]) { 4166 + dest_net = devlink_netns_get(skb, info); 4167 + if (IS_ERR(dest_net)) 4168 + return PTR_ERR(dest_net); 4169 + } 4170 + 4155 4171 err = devlink_reload(devlink, dest_net, action, limit, &actions_performed, info->extack); 4156 4172 4157 4173 if (dest_net)
+1 -2
net/core/neighbour.c
··· 763 763 764 764 ASSERT_RTNL(); 765 765 766 - n = kmalloc(sizeof(*n) + key_len, GFP_KERNEL); 766 + n = kzalloc(sizeof(*n) + key_len, GFP_KERNEL); 767 767 if (!n) 768 768 goto out; 769 769 770 - n->protocol = 0; 771 770 write_pnet(&n->net, net); 772 771 memcpy(n->key, pkey, key_len); 773 772 n->dev = dev;
+5
net/core/skmsg.c
··· 1124 1124 1125 1125 void sk_psock_stop_strp(struct sock *sk, struct sk_psock *psock) 1126 1126 { 1127 + psock_set_prog(&psock->progs.stream_parser, NULL); 1128 + 1127 1129 if (!psock->saved_data_ready) 1128 1130 return; 1129 1131 ··· 1214 1212 1215 1213 void sk_psock_stop_verdict(struct sock *sk, struct sk_psock *psock) 1216 1214 { 1215 + psock_set_prog(&psock->progs.stream_verdict, NULL); 1216 + psock_set_prog(&psock->progs.skb_verdict, NULL); 1217 + 1217 1218 if (!psock->saved_data_ready) 1218 1219 return; 1219 1220
+10 -5
net/core/sock_map.c
··· 167 167 write_lock_bh(&sk->sk_callback_lock); 168 168 if (strp_stop) 169 169 sk_psock_stop_strp(sk, psock); 170 - else 170 + if (verdict_stop) 171 171 sk_psock_stop_verdict(sk, psock); 172 + 173 + if (psock->psock_update_sk_prot) 174 + psock->psock_update_sk_prot(sk, psock, false); 172 175 write_unlock_bh(&sk->sk_callback_lock); 173 176 } 174 177 } ··· 285 282 286 283 if (msg_parser) 287 284 psock_set_prog(&psock->progs.msg_parser, msg_parser); 285 + if (stream_parser) 286 + psock_set_prog(&psock->progs.stream_parser, stream_parser); 287 + if (stream_verdict) 288 + psock_set_prog(&psock->progs.stream_verdict, stream_verdict); 289 + if (skb_verdict) 290 + psock_set_prog(&psock->progs.skb_verdict, skb_verdict); 288 291 289 292 ret = sock_map_init_proto(sk, psock); 290 293 if (ret < 0) ··· 301 292 ret = sk_psock_init_strp(sk, psock); 302 293 if (ret) 303 294 goto out_unlock_drop; 304 - psock_set_prog(&psock->progs.stream_verdict, stream_verdict); 305 - psock_set_prog(&psock->progs.stream_parser, stream_parser); 306 295 sk_psock_start_strp(sk, psock); 307 296 } else if (!stream_parser && stream_verdict && !psock->saved_data_ready) { 308 - psock_set_prog(&psock->progs.stream_verdict, stream_verdict); 309 297 sk_psock_start_verdict(sk,psock); 310 298 } else if (!stream_verdict && skb_verdict && !psock->saved_data_ready) { 311 - psock_set_prog(&psock->progs.skb_verdict, skb_verdict); 312 299 sk_psock_start_verdict(sk, psock); 313 300 } 314 301 write_unlock_bh(&sk->sk_callback_lock);
+2 -1
net/ethtool/netlink.c
··· 40 40 if (dev->dev.parent) 41 41 pm_runtime_get_sync(dev->dev.parent); 42 42 43 - if (!netif_device_present(dev)) { 43 + if (!netif_device_present(dev) || 44 + dev->reg_state == NETREG_UNREGISTERING) { 44 45 ret = -ENODEV; 45 46 goto err; 46 47 }
+1 -1
net/ipv4/inet_connection_sock.c
··· 721 721 722 722 sk_node_init(&nreq_sk->sk_node); 723 723 nreq_sk->sk_tx_queue_mapping = req_sk->sk_tx_queue_mapping; 724 - #ifdef CONFIG_XPS 724 + #ifdef CONFIG_SOCK_RX_QUEUE_MAPPING 725 725 nreq_sk->sk_rx_queue_mapping = req_sk->sk_rx_queue_mapping; 726 726 #endif 727 727 nreq_sk->sk_incoming_cpu = req_sk->sk_incoming_cpu;
+2 -2
net/ipv4/tcp_minisocks.c
··· 829 829 int ret = 0; 830 830 int state = child->sk_state; 831 831 832 - /* record NAPI ID of child */ 833 - sk_mark_napi_id(child, skb); 832 + /* record sk_napi_id and sk_rx_queue_mapping of child. */ 833 + sk_mark_napi_id_set(child, skb); 834 834 835 835 tcp_segs_in(tcp_sk(child), skb); 836 836 if (!sock_owned_by_user(child)) {
+1 -1
net/ipv4/udp.c
··· 916 916 kfree_skb(skb); 917 917 return -EINVAL; 918 918 } 919 - if (skb->len > cork->gso_size * UDP_MAX_SEGMENTS) { 919 + if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) { 920 920 kfree_skb(skb); 921 921 return -EINVAL; 922 922 }
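The hunk above caps GSO sends on `datalen` (the payload being segmented) instead of `skb->len`, which also counts header bytes and so rejected payloads that exactly filled the segment budget. A sketch of the two checks, with illustrative constants (the real kernel derives `UDP_MAX_SEGMENTS` itself):

```c
#include <stdbool.h>

#define UDP_MAX_SEGMENTS 64	/* illustrative segment budget */
#define UDP_HDR_LEN       8

/* Fixed check: the payload alone must fit in UDP_MAX_SEGMENTS
 * segments of gso_size bytes each. */
static bool udp_gso_len_ok(long datalen, long gso_size)
{
	return datalen <= gso_size * UDP_MAX_SEGMENTS;
}

/* Pre-fix behaviour (sketch): comparing a length that also includes
 * header bytes spuriously rejects a payload of exactly the cap. */
static bool udp_gso_len_ok_buggy(long datalen, long gso_size)
{
	return datalen + UDP_HDR_LEN <= gso_size * UDP_MAX_SEGMENTS;
}
```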
+8
net/ipv6/seg6_iptunnel.c
··· 161 161 hdr->hop_limit = ip6_dst_hoplimit(skb_dst(skb)); 162 162 163 163 memset(IP6CB(skb), 0, sizeof(*IP6CB(skb))); 164 + 165 + /* the control block has been erased, so we have to set the 166 + * iif once again. 167 + * We read the receiving interface index directly from the 168 + * skb->skb_iif as it is done in the IPv4 receiving path (i.e.: 169 + * ip_rcv_core(...)). 170 + */ 171 + IP6CB(skb)->iif = skb->skb_iif; 164 172 } 165 173 166 174 hdr->nexthdr = NEXTHDR_ROUTING;
+3 -3
net/netfilter/nf_conntrack_core.c
··· 684 684 685 685 tstamp = nf_conn_tstamp_find(ct); 686 686 if (tstamp) { 687 - s32 timeout = ct->timeout - nfct_time_stamp; 687 + s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp; 688 688 689 689 tstamp->stop = ktime_get_real_ns(); 690 690 if (timeout < 0) ··· 1036 1036 } 1037 1037 1038 1038 /* We want the clashing entry to go away real soon: 1 second timeout. */ 1039 - loser_ct->timeout = nfct_time_stamp + HZ; 1039 + WRITE_ONCE(loser_ct->timeout, nfct_time_stamp + HZ); 1040 1040 1041 1041 /* IPS_NAT_CLASH removes the entry automatically on the first 1042 1042 * reply. Also prevents UDP tracker from moving the entry to ··· 1560 1560 /* save hash for reusing when confirming */ 1561 1561 *(unsigned long *)(&ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev) = hash; 1562 1562 ct->status = 0; 1563 - ct->timeout = 0; 1563 + WRITE_ONCE(ct->timeout, 0); 1564 1564 write_pnet(&ct->ct_net, net); 1565 1565 memset(&ct->__nfct_init_offset, 0, 1566 1566 offsetof(struct nf_conn, proto) -
+1 -1
net/netfilter/nf_conntrack_netlink.c
··· 1998 1998 1999 1999 if (timeout > INT_MAX) 2000 2000 timeout = INT_MAX; 2001 - ct->timeout = nfct_time_stamp + (u32)timeout; 2001 + WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout); 2002 2002 2003 2003 if (test_bit(IPS_DYING_BIT, &ct->status)) 2004 2004 return -ETIME;
+2 -2
net/netfilter/nf_flow_table_core.c
··· 201 201 if (timeout < 0) 202 202 timeout = 0; 203 203 204 - if (nf_flow_timeout_delta(ct->timeout) > (__s32)timeout) 205 - ct->timeout = nfct_time_stamp + timeout; 204 + if (nf_flow_timeout_delta(READ_ONCE(ct->timeout)) > (__s32)timeout) 205 + WRITE_ONCE(ct->timeout, nfct_time_stamp + timeout); 206 206 } 207 207 208 208 static void flow_offload_fixup_ct_state(struct nf_conn *ct)
+7 -4
net/netfilter/nft_exthdr.c
··· 236 236 237 237 tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff, &tcphdr_len); 238 238 if (!tcph) 239 - return; 239 + goto err; 240 240 241 241 opt = (u8 *)tcph; 242 242 for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) { ··· 251 251 continue; 252 252 253 253 if (i + optl > tcphdr_len || priv->len + priv->offset > optl) 254 - return; 254 + goto err; 255 255 256 256 if (skb_ensure_writable(pkt->skb, 257 257 nft_thoff(pkt) + i + priv->len)) 258 - return; 258 + goto err; 259 259 260 260 tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff, 261 261 &tcphdr_len); 262 262 if (!tcph) 263 - return; 263 + goto err; 264 264 265 265 offset = i + priv->offset; 266 266 ··· 303 303 304 304 return; 305 305 } 306 + return; 307 + err: 308 + regs->verdict.code = NFT_BREAK; 306 309 } 307 310 308 311 static void nft_exthdr_sctp_eval(const struct nft_expr *expr,
+1 -1
net/netfilter/nft_set_pipapo_avx2.c
··· 886 886 NFT_PIPAPO_AVX2_BUCKET_LOAD8(4, lt, 4, pkt[4], bsize); 887 887 888 888 NFT_PIPAPO_AVX2_AND(5, 0, 1); 889 - NFT_PIPAPO_AVX2_BUCKET_LOAD8(6, lt, 6, pkt[5], bsize); 889 + NFT_PIPAPO_AVX2_BUCKET_LOAD8(6, lt, 5, pkt[5], bsize); 890 890 NFT_PIPAPO_AVX2_AND(7, 2, 3); 891 891 892 892 /* Stall */
+8 -4
net/nfc/netlink.c
··· 636 636 { 637 637 struct class_dev_iter *iter = (struct class_dev_iter *) cb->args[0]; 638 638 639 - nfc_device_iter_exit(iter); 640 - kfree(iter); 639 + if (iter) { 640 + nfc_device_iter_exit(iter); 641 + kfree(iter); 642 + } 641 643 642 644 return 0; 643 645 } ··· 1394 1392 { 1395 1393 struct class_dev_iter *iter = (struct class_dev_iter *) cb->args[0]; 1396 1394 1397 - nfc_device_iter_exit(iter); 1398 - kfree(iter); 1395 + if (iter) { 1396 + nfc_device_iter_exit(iter); 1397 + kfree(iter); 1398 + } 1399 1399 1400 1400 return 0; 1401 1401 }
+1
net/sched/sch_fq_pie.c
··· 531 531 struct fq_pie_sched_data *q = qdisc_priv(sch); 532 532 533 533 tcf_block_put(q->block); 534 + q->p_params.tupdate = 0; 534 535 del_timer_sync(&q->adapt_timer); 535 536 kvfree(q->flows); 536 537 }
+5 -3
tools/bpf/resolve_btfids/main.c
··· 83 83 int cnt; 84 84 }; 85 85 int addr_cnt; 86 + bool is_set; 86 87 Elf64_Addr addr[ADDR_CNT]; 87 88 }; 88 89 ··· 452 451 * in symbol's size, together with 'cnt' field hence 453 452 * that - 1. 454 453 */ 455 - if (id) 454 + if (id) { 456 455 id->cnt = sym.st_size / sizeof(int) - 1; 456 + id->is_set = true; 457 + } 457 458 } else { 458 459 pr_err("FAILED unsupported prefix %s\n", prefix); 459 460 return -1; ··· 571 568 int *ptr = data->d_buf; 572 569 int i; 573 570 574 - if (!id->id) { 571 + if (!id->id && !id->is_set) 575 572 pr_err("WARN: resolve_btfids: unresolved symbol %s\n", id->name); 576 - } 577 573 578 574 for (i = 0; i < id->addr_cnt; i++) { 579 575 unsigned long addr = id->addr[i];
+600 -32
tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c
··· 35 35 .prog_type = BPF_PROG_TYPE_XDP, 36 36 }, 37 37 { 38 - "XDP pkt read, pkt_data' > pkt_end, good access", 38 + "XDP pkt read, pkt_data' > pkt_end, corner case, good access", 39 39 .insns = { 40 40 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 41 41 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, ··· 88 88 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 89 89 }, 90 90 { 91 + "XDP pkt read, pkt_data' > pkt_end, corner case +1, good access", 92 + .insns = { 93 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 94 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 95 + offsetof(struct xdp_md, data_end)), 96 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 97 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 98 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), 99 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 100 + BPF_MOV64_IMM(BPF_REG_0, 0), 101 + BPF_EXIT_INSN(), 102 + }, 103 + .result = ACCEPT, 104 + .prog_type = BPF_PROG_TYPE_XDP, 105 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 106 + }, 107 + { 108 + "XDP pkt read, pkt_data' > pkt_end, corner case -1, bad access", 109 + .insns = { 110 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 111 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 112 + offsetof(struct xdp_md, data_end)), 113 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 114 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 115 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), 116 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 117 + BPF_MOV64_IMM(BPF_REG_0, 0), 118 + BPF_EXIT_INSN(), 119 + }, 120 + .errstr = "R1 offset is outside of the packet", 121 + .result = REJECT, 122 + .prog_type = BPF_PROG_TYPE_XDP, 123 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 124 + }, 125 + { 91 126 "XDP pkt read, pkt_end > pkt_data', good access", 92 127 .insns = { 93 128 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), ··· 141 106 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 142 107 }, 143 108 { 144 - "XDP pkt read, pkt_end > pkt_data', bad access 1",
109 + "XDP pkt read, pkt_end > pkt_data', corner case -1, bad access", 145 110 .insns = { 146 111 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 147 112 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 148 113 offsetof(struct xdp_md, data_end)), 149 114 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 150 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 115 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 151 116 BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), 152 117 BPF_JMP_IMM(BPF_JA, 0, 0, 1), 153 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 118 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 154 119 BPF_MOV64_IMM(BPF_REG_0, 0), 155 120 BPF_EXIT_INSN(), 156 121 }, ··· 178 143 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 179 144 }, 180 145 { 146 + "XDP pkt read, pkt_end > pkt_data', corner case, good access", 147 + .insns = { 148 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 149 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 150 + offsetof(struct xdp_md, data_end)), 151 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 152 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 153 + BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), 154 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 155 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 156 + BPF_MOV64_IMM(BPF_REG_0, 0), 157 + BPF_EXIT_INSN(), 158 + }, 159 + .result = ACCEPT, 160 + .prog_type = BPF_PROG_TYPE_XDP, 161 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 162 + }, 163 + { 164 + "XDP pkt read, pkt_end > pkt_data', corner case +1, good access", 165 + .insns = { 166 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 167 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 168 + offsetof(struct xdp_md, data_end)), 169 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 170 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 171 + BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), 172 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 173 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 174 + BPF_MOV64_IMM(BPF_REG_0, 0), 175 + BPF_EXIT_INSN(),
176 + }, 177 + .result = ACCEPT, 178 + .prog_type = BPF_PROG_TYPE_XDP, 179 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 180 + }, 181 + { 181 182 "XDP pkt read, pkt_data' < pkt_end, good access", 182 183 .insns = { 183 184 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), ··· 232 161 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 233 162 }, 234 163 { 235 - "XDP pkt read, pkt_data' < pkt_end, bad access 1", 164 + "XDP pkt read, pkt_data' < pkt_end, corner case -1, bad access", 236 165 .insns = { 237 166 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 238 167 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 239 168 offsetof(struct xdp_md, data_end)), 240 169 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 241 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 170 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 242 171 BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 243 172 BPF_JMP_IMM(BPF_JA, 0, 0, 1), 244 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 173 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 245 174 BPF_MOV64_IMM(BPF_REG_0, 0), 246 175 BPF_EXIT_INSN(), 247 176 }, ··· 269 198 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 270 199 }, 271 200 { 272 - "XDP pkt read, pkt_end < pkt_data', good access", 201 + "XDP pkt read, pkt_data' < pkt_end, corner case, good access", 202 + .insns = { 203 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 204 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 205 + offsetof(struct xdp_md, data_end)), 206 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 207 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 208 + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 209 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 210 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 211 + BPF_MOV64_IMM(BPF_REG_0, 0), 212 + BPF_EXIT_INSN(), 213 + }, 214 + .result = ACCEPT, 215 + .prog_type = BPF_PROG_TYPE_XDP, 216 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 217 + }, 218 + { 219 + "XDP pkt read, pkt_data' < pkt_end, corner case +1, good access", 220 + .insns = {
221 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 222 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 223 + offsetof(struct xdp_md, data_end)), 224 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 225 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 226 + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 227 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 228 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 229 + BPF_MOV64_IMM(BPF_REG_0, 0), 230 + BPF_EXIT_INSN(), 231 + }, 232 + .result = ACCEPT, 233 + .prog_type = BPF_PROG_TYPE_XDP, 234 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 235 + }, 236 + { 237 + "XDP pkt read, pkt_end < pkt_data', corner case, good access", 273 238 .insns = { 274 239 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 275 240 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, ··· 358 251 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 359 252 }, 360 253 { 254 + "XDP pkt read, pkt_end < pkt_data', corner case +1, good access", 255 + .insns = { 256 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 257 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 258 + offsetof(struct xdp_md, data_end)), 259 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 260 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 261 + BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), 262 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 263 + BPF_MOV64_IMM(BPF_REG_0, 0), 264 + BPF_EXIT_INSN(), 265 + }, 266 + .result = ACCEPT, 267 + .prog_type = BPF_PROG_TYPE_XDP, 268 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 269 + }, 270 + { 271 + "XDP pkt read, pkt_end < pkt_data', corner case -1, bad access", 272 + .insns = { 273 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 274 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 275 + offsetof(struct xdp_md, data_end)), 276 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 277 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 278 + BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), 279 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 280 + BPF_MOV64_IMM(BPF_REG_0, 0),
281 + BPF_EXIT_INSN(), 282 + }, 283 + .errstr = "R1 offset is outside of the packet", 284 + .result = REJECT, 285 + .prog_type = BPF_PROG_TYPE_XDP, 286 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 287 + }, 288 + { 361 289 "XDP pkt read, pkt_data' >= pkt_end, good access", 362 290 .insns = { 363 291 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), ··· 410 268 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 411 269 }, 412 270 { 413 - "XDP pkt read, pkt_data' >= pkt_end, bad access 1", 271 + "XDP pkt read, pkt_data' >= pkt_end, corner case -1, bad access", 414 272 .insns = { 415 273 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 416 274 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 417 275 offsetof(struct xdp_md, data_end)), 418 276 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 419 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 277 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 420 278 BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 421 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 279 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 422 280 BPF_MOV64_IMM(BPF_REG_0, 0), 423 281 BPF_EXIT_INSN(), 424 282 }, ··· 446 304 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 447 305 }, 448 306 { 449 - "XDP pkt read, pkt_end >= pkt_data', good access", 307 + "XDP pkt read, pkt_data' >= pkt_end, corner case, good access", 308 + .insns = { 309 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 310 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 311 + offsetof(struct xdp_md, data_end)), 312 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 313 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 314 + BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 315 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 316 + BPF_MOV64_IMM(BPF_REG_0, 0), 317 + BPF_EXIT_INSN(), 318 + }, 319 + .result = ACCEPT, 320 + .prog_type = BPF_PROG_TYPE_XDP, 321 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 322 + }, 323 + { 324 + "XDP pkt read, pkt_data' >= pkt_end, corner case +1, good access",
325 + .insns = { 326 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 327 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 328 + offsetof(struct xdp_md, data_end)), 329 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 330 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 331 + BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 332 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 333 + BPF_MOV64_IMM(BPF_REG_0, 0), 334 + BPF_EXIT_INSN(), 335 + }, 336 + .result = ACCEPT, 337 + .prog_type = BPF_PROG_TYPE_XDP, 338 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 339 + }, 340 + { 341 + "XDP pkt read, pkt_end >= pkt_data', corner case, good access", 450 342 .insns = { 451 343 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 452 344 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, ··· 535 359 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 536 360 }, 537 361 { 538 - "XDP pkt read, pkt_data' <= pkt_end, good access", 362 + "XDP pkt read, pkt_end >= pkt_data', corner case +1, good access", 363 + .insns = { 364 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 365 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 366 + offsetof(struct xdp_md, data_end)), 367 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 368 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 369 + BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), 370 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 371 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 372 + BPF_MOV64_IMM(BPF_REG_0, 0), 373 + BPF_EXIT_INSN(), 374 + }, 375 + .result = ACCEPT, 376 + .prog_type = BPF_PROG_TYPE_XDP, 377 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 378 + }, 379 + { 380 + "XDP pkt read, pkt_end >= pkt_data', corner case -1, bad access", 381 + .insns = { 382 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 383 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 384 + offsetof(struct xdp_md, data_end)), 385 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 386 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 387 + BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
388 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 389 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 390 + BPF_MOV64_IMM(BPF_REG_0, 0), 391 + BPF_EXIT_INSN(), 392 + }, 393 + .errstr = "R1 offset is outside of the packet", 394 + .result = REJECT, 395 + .prog_type = BPF_PROG_TYPE_XDP, 396 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 397 + }, 398 + { 399 + "XDP pkt read, pkt_data' <= pkt_end, corner case, good access", 539 400 .insns = { 540 401 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 541 402 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, ··· 627 414 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 628 415 }, 629 416 { 417 + "XDP pkt read, pkt_data' <= pkt_end, corner case +1, good access", 418 + .insns = { 419 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 420 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 421 + offsetof(struct xdp_md, data_end)), 422 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 423 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 424 + BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), 425 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 426 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 427 + BPF_MOV64_IMM(BPF_REG_0, 0), 428 + BPF_EXIT_INSN(), 429 + }, 430 + .result = ACCEPT, 431 + .prog_type = BPF_PROG_TYPE_XDP, 432 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 433 + }, 434 + { 435 + "XDP pkt read, pkt_data' <= pkt_end, corner case -1, bad access", 436 + .insns = { 437 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 438 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 439 + offsetof(struct xdp_md, data_end)), 440 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 441 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 442 + BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), 443 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 444 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 445 + BPF_MOV64_IMM(BPF_REG_0, 0), 446 + BPF_EXIT_INSN(), 447 + }, 448 + .errstr = "R1 offset is outside of the packet", 449 + .result = REJECT,
450 + .prog_type = BPF_PROG_TYPE_XDP, 451 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 452 + }, 453 + { 630 454 "XDP pkt read, pkt_end <= pkt_data', good access", 631 455 .insns = { 632 456 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), ··· 681 431 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 682 432 }, 683 433 { 684 - "XDP pkt read, pkt_end <= pkt_data', bad access 1", 434 + "XDP pkt read, pkt_end <= pkt_data', corner case -1, bad access", 685 435 .insns = { 686 436 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 687 437 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 688 438 offsetof(struct xdp_md, data_end)), 689 439 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 690 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 440 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 691 441 BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), 692 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 442 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 693 443 BPF_MOV64_IMM(BPF_REG_0, 0), 694 444 BPF_EXIT_INSN(), 695 445 }, ··· 717 467 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 718 468 }, 719 469 { 720 - "XDP pkt read, pkt_meta' > pkt_data, good access", 470 + "XDP pkt read, pkt_end <= pkt_data', corner case, good access", 471 + .insns = { 472 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 473 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 474 + offsetof(struct xdp_md, data_end)), 475 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 476 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 477 + BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), 478 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 479 + BPF_MOV64_IMM(BPF_REG_0, 0), 480 + BPF_EXIT_INSN(), 481 + }, 482 + .result = ACCEPT, 483 + .prog_type = BPF_PROG_TYPE_XDP, 484 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 485 + }, 486 + { 487 + "XDP pkt read, pkt_end <= pkt_data', corner case +1, good access", 488 + .insns = { 489 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 490 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
491 + offsetof(struct xdp_md, data_end)), 492 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 493 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 494 + BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), 495 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 496 + BPF_MOV64_IMM(BPF_REG_0, 0), 497 + BPF_EXIT_INSN(), 498 + }, 499 + .result = ACCEPT, 500 + .prog_type = BPF_PROG_TYPE_XDP, 501 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 502 + }, 503 + { 504 + "XDP pkt read, pkt_meta' > pkt_data, corner case, good access", 721 505 .insns = { 722 506 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 723 507 offsetof(struct xdp_md, data_meta)), ··· 804 520 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 805 521 }, 806 522 { 523 + "XDP pkt read, pkt_meta' > pkt_data, corner case +1, good access", 524 + .insns = { 525 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 526 + offsetof(struct xdp_md, data_meta)), 527 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 528 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 529 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 530 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), 531 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 532 + BPF_MOV64_IMM(BPF_REG_0, 0), 533 + BPF_EXIT_INSN(), 534 + }, 535 + .result = ACCEPT, 536 + .prog_type = BPF_PROG_TYPE_XDP, 537 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 538 + }, 539 + { 540 + "XDP pkt read, pkt_meta' > pkt_data, corner case -1, bad access", 541 + .insns = { 542 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 543 + offsetof(struct xdp_md, data_meta)), 544 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 545 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 546 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 547 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), 548 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 549 + BPF_MOV64_IMM(BPF_REG_0, 0), 550 + BPF_EXIT_INSN(), 551 + }, 552 + .errstr = "R1 offset is outside of the packet", 553 + .result = REJECT, 554 + .prog_type = BPF_PROG_TYPE_XDP,
555 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 556 + }, 557 + { 807 558 "XDP pkt read, pkt_data > pkt_meta', good access", 808 559 .insns = { 809 560 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, ··· 857 538 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 858 539 }, 859 540 { 860 - "XDP pkt read, pkt_data > pkt_meta', bad access 1", 541 + "XDP pkt read, pkt_data > pkt_meta', corner case -1, bad access", 861 542 .insns = { 862 543 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 863 544 offsetof(struct xdp_md, data_meta)), 864 545 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 865 546 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 866 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 547 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 867 548 BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), 868 549 BPF_JMP_IMM(BPF_JA, 0, 0, 1), 869 550 BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 551 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 870 552 BPF_MOV64_IMM(BPF_REG_0, 0), 871 553 BPF_EXIT_INSN(), 872 554 }, ··· 894 575 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 895 576 }, 896 577 { 578 + "XDP pkt read, pkt_data > pkt_meta', corner case, good access", 579 + .insns = { ... 600 + BPF_LDX_MEM(BPF_W, BPF_REG_3,
BPF_REG_1, offsetof(struct xdp_md, data)), 601 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 602 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 603 + BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), 604 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 605 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 606 + BPF_MOV64_IMM(BPF_REG_0, 0), 607 + BPF_EXIT_INSN(), 608 + }, 609 + .result = ACCEPT, 610 + .prog_type = BPF_PROG_TYPE_XDP, 611 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 612 + }, 613 + { 897 614 "XDP pkt read, pkt_meta' < pkt_data, good access", 898 615 .insns = { 899 616 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, ··· 948 593 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 949 594 }, 950 595 { 951 - "XDP pkt read, pkt_meta' < pkt_data, bad access 1", 596 + "XDP pkt read, pkt_meta' < pkt_data, corner case -1, bad access", 952 597 .insns = { 953 598 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 954 599 offsetof(struct xdp_md, data_meta)), 955 600 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 956 601 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 957 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 602 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 958 603 BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 959 604 BPF_JMP_IMM(BPF_JA, 0, 0, 1), 960 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 605 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 961 606 BPF_MOV64_IMM(BPF_REG_0, 0), 962 607 BPF_EXIT_INSN(), 963 608 }, ··· 985 630 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 986 631 }, 987 632 { 988 - "XDP pkt read, pkt_data < pkt_meta', good access", 633 + "XDP pkt read, pkt_meta' < pkt_data, corner case, good access", 634 + .insns = { 635 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 636 + offsetof(struct xdp_md, data_meta)), 637 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 638 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 639 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 640 + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 641 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 642 + BPF_LDX_MEM(BPF_DW, 
BPF_REG_0, BPF_REG_1, -7), 643 + BPF_MOV64_IMM(BPF_REG_0, 0), 644 + BPF_EXIT_INSN(), 645 + }, 646 + .result = ACCEPT, 647 + .prog_type = BPF_PROG_TYPE_XDP, 648 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 649 + }, 650 + { 651 + "XDP pkt read, pkt_meta' < pkt_data, corner case +1, good access", 652 + .insns = { 653 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 654 + offsetof(struct xdp_md, data_meta)), 655 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 656 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 657 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 658 + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 659 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 660 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 661 + BPF_MOV64_IMM(BPF_REG_0, 0), 662 + BPF_EXIT_INSN(), 663 + }, 664 + .result = ACCEPT, 665 + .prog_type = BPF_PROG_TYPE_XDP, 666 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 667 + }, 668 + { 669 + "XDP pkt read, pkt_data < pkt_meta', corner case, good access", 989 670 .insns = { 990 671 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 991 672 offsetof(struct xdp_md, data_meta)), ··· 1074 683 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 1075 684 }, 1076 685 { 686 + "XDP pkt read, pkt_data < pkt_meta', corner case +1, good access", 687 + .insns = { 688 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 689 + offsetof(struct xdp_md, data_meta)), 690 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 691 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 692 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 693 + BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), 694 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 695 + BPF_MOV64_IMM(BPF_REG_0, 0), 696 + BPF_EXIT_INSN(), 697 + }, 698 + .result = ACCEPT, 699 + .prog_type = BPF_PROG_TYPE_XDP, 700 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 701 + }, 702 + { 703 + "XDP pkt read, pkt_data < pkt_meta', corner case -1, bad access", 704 + .insns = { 705 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 706 + offsetof(struct xdp_md, 
data_meta)), 707 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 708 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 709 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 710 + BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), 711 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 712 + BPF_MOV64_IMM(BPF_REG_0, 0), 713 + BPF_EXIT_INSN(), 714 + }, 715 + .errstr = "R1 offset is outside of the packet", 716 + .result = REJECT, 717 + .prog_type = BPF_PROG_TYPE_XDP, 718 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 719 + }, 720 + { 1077 721 "XDP pkt read, pkt_meta' >= pkt_data, good access", 1078 722 .insns = { 1079 723 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, ··· 1126 700 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 1127 701 }, 1128 702 { 1129 - "XDP pkt read, pkt_meta' >= pkt_data, bad access 1", 703 + "XDP pkt read, pkt_meta' >= pkt_data, corner case -1, bad access", 1130 704 .insns = { 1131 705 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 1132 706 offsetof(struct xdp_md, data_meta)), 1133 707 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 1134 708 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 1135 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 709 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 1136 710 BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 1137 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 711 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 1138 712 BPF_MOV64_IMM(BPF_REG_0, 0), 1139 713 BPF_EXIT_INSN(), 1140 714 }, ··· 1162 736 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 1163 737 }, 1164 738 { 1165 - "XDP pkt read, pkt_data >= pkt_meta', good access", 739 + "XDP pkt read, pkt_meta' >= pkt_data, corner case, good access", 740 + .insns = { 741 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 742 + offsetof(struct xdp_md, data_meta)), 743 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 744 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 745 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 746 + BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 747 + 
BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 748 + BPF_MOV64_IMM(BPF_REG_0, 0), 749 + BPF_EXIT_INSN(), 750 + }, 751 + .result = ACCEPT, 752 + .prog_type = BPF_PROG_TYPE_XDP, 753 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 754 + }, 755 + { 756 + "XDP pkt read, pkt_meta' >= pkt_data, corner case +1, good access", 757 + .insns = { 758 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 759 + offsetof(struct xdp_md, data_meta)), 760 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 761 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 762 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 763 + BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 764 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 765 + BPF_MOV64_IMM(BPF_REG_0, 0), 766 + BPF_EXIT_INSN(), 767 + }, 768 + .result = ACCEPT, 769 + .prog_type = BPF_PROG_TYPE_XDP, 770 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 771 + }, 772 + { 773 + "XDP pkt read, pkt_data >= pkt_meta', corner case, good access", 1166 774 .insns = { 1167 775 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 1168 776 offsetof(struct xdp_md, data_meta)), ··· 1251 791 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 1252 792 }, 1253 793 { 1254 - "XDP pkt read, pkt_meta' <= pkt_data, good access", 794 + "XDP pkt read, pkt_data >= pkt_meta', corner case +1, good access", 795 + .insns = { 796 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 797 + offsetof(struct xdp_md, data_meta)), 798 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 799 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 800 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 801 + BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), 802 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 803 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 804 + BPF_MOV64_IMM(BPF_REG_0, 0), 805 + BPF_EXIT_INSN(), 806 + }, 807 + .result = ACCEPT, 808 + .prog_type = BPF_PROG_TYPE_XDP, 809 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 810 + }, 811 + { 812 + "XDP pkt read, pkt_data >= pkt_meta', corner case -1, bad access", 813 + .insns = { 
814 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 815 + offsetof(struct xdp_md, data_meta)), 816 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 817 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 818 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 819 + BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), 820 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 821 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 822 + BPF_MOV64_IMM(BPF_REG_0, 0), 823 + BPF_EXIT_INSN(), 824 + }, 825 + .errstr = "R1 offset is outside of the packet", 826 + .result = REJECT, 827 + .prog_type = BPF_PROG_TYPE_XDP, 828 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 829 + }, 830 + { 831 + "XDP pkt read, pkt_meta' <= pkt_data, corner case, good access", 1255 832 .insns = { 1256 833 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 1257 834 offsetof(struct xdp_md, data_meta)), ··· 1343 846 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 1344 847 }, 1345 848 { 849 + "XDP pkt read, pkt_meta' <= pkt_data, corner case +1, good access", 850 + .insns = { 851 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 852 + offsetof(struct xdp_md, data_meta)), 853 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 854 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 855 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 856 + BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), 857 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 858 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 859 + BPF_MOV64_IMM(BPF_REG_0, 0), 860 + BPF_EXIT_INSN(), 861 + }, 862 + .result = ACCEPT, 863 + .prog_type = BPF_PROG_TYPE_XDP, 864 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 865 + }, 866 + { 867 + "XDP pkt read, pkt_meta' <= pkt_data, corner case -1, bad access", 868 + .insns = { 869 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 870 + offsetof(struct xdp_md, data_meta)), 871 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 872 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 873 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 874 + BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 
1), 875 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 876 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 877 + BPF_MOV64_IMM(BPF_REG_0, 0), 878 + BPF_EXIT_INSN(), 879 + }, 880 + .errstr = "R1 offset is outside of the packet", 881 + .result = REJECT, 882 + .prog_type = BPF_PROG_TYPE_XDP, 883 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 884 + }, 885 + { 1346 886 "XDP pkt read, pkt_data <= pkt_meta', good access", 1347 887 .insns = { 1348 888 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, ··· 1397 863 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 1398 864 }, 1399 865 { 1400 - "XDP pkt read, pkt_data <= pkt_meta', bad access 1", 866 + "XDP pkt read, pkt_data <= pkt_meta', corner case -1, bad access", 1401 867 .insns = { 1402 868 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 1403 869 offsetof(struct xdp_md, data_meta)), 1404 870 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 1405 871 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 1406 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 872 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 1407 873 BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), 1408 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 874 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 1409 875 BPF_MOV64_IMM(BPF_REG_0, 0), 1410 876 BPF_EXIT_INSN(), 1411 877 }, ··· 1429 895 }, 1430 896 .errstr = "R1 offset is outside of the packet", 1431 897 .result = REJECT, 898 + .prog_type = BPF_PROG_TYPE_XDP, 899 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 900 + }, 901 + { 902 + "XDP pkt read, pkt_data <= pkt_meta', corner case, good access", 903 + .insns = { 904 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 905 + offsetof(struct xdp_md, data_meta)), 906 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 907 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 908 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 909 + BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), 910 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 911 + BPF_MOV64_IMM(BPF_REG_0, 0), 912 + BPF_EXIT_INSN(), 913 + }, 914 + .result = 
ACCEPT, 915 + .prog_type = BPF_PROG_TYPE_XDP, 916 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 917 + }, 918 + { 919 + "XDP pkt read, pkt_data <= pkt_meta', corner case +1, good access", 920 + .insns = { 921 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 922 + offsetof(struct xdp_md, data_meta)), 923 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)), 924 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 925 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 926 + BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1), 927 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 928 + BPF_MOV64_IMM(BPF_REG_0, 0), 929 + BPF_EXIT_INSN(), 930 + }, 931 + .result = ACCEPT, 1432 932 .prog_type = BPF_PROG_TYPE_XDP, 1433 933 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 1434 934 },
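The "corner case" tests above all follow one pattern: compare `pkt + N` against `pkt_end` (or `data`/`data_meta`), then do an 8-byte load at offset `-N`. Whether the verifier must accept or reject depends on exactly how many bytes the comparison proves available — the off-by-two bug in the range markings got this wrong. The following is NOT the kernel implementation, just a simplified Python model of the bounds reasoning these tests probe; `safe_read` is a hypothetical helper name.

```python
def safe_read(cmp_off, load_off, width=8):
    """After 'pkt + cmp_off <= pkt_end' is known to hold, decide whether a
    `width`-byte load at pkt[cmp_off + load_off] is provably inside the
    packet. load_off is the (negative) displacement of the BPF_LDX_MEM."""
    start = cmp_off + load_off   # first byte the load touches
    end = start + width          # one past the last byte touched
    # The branch only proves cmp_off bytes from pkt are valid.
    return start >= 0 and end <= cmp_off

# corner case: compare at +8, DW load at -8 -> bytes [0, 8) proven -> accept
assert safe_read(8, -8)
# corner case +1: compare at +9, DW load at -9 -> bytes [0, 8) proven -> accept
assert safe_read(9, -9)
# corner case -1: compare at +7, DW load at -7 -> needs 8 bytes, only 7
# proven -> reject ("R1 offset is outside of the packet")
assert not safe_read(7, -7)
```

A verifier with the off-by-two error effectively credited two extra bytes to `cmp_off`, which would wrongly accept the `-1` cases these tests now pin down.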
+8
tools/testing/selftests/net/fcnal-test.sh
···
 
 	printf "\nTests passed: %3d\n" ${nsuccess}
 	printf "Tests failed: %3d\n" ${nfail}
+
+	if [ $nfail -ne 0 ]; then
+		exit 1 # KSFT_FAIL
+	elif [ $nsuccess -eq 0 ]; then
+		exit $ksft_skip
+	fi
+
+	exit 0 # KSFT_PASS
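The fcnal-test.sh hunk above makes the script's exit status follow the kselftest convention: 0 for pass, 1 for fail, and `$ksft_skip` (4) when nothing ran. A minimal sketch of that policy, with illustrative names:

```python
# kselftest exit-code convention: 0 = pass, 1 = fail, 4 = skip.
KSFT_PASS, KSFT_FAIL, KSFT_SKIP = 0, 1, 4

def ksft_exit_code(nsuccess, nfail):
    """Map pass/fail counters to a kselftest exit status."""
    if nfail != 0:
        return KSFT_FAIL
    if nsuccess == 0:
        return KSFT_SKIP   # no test actually ran
    return KSFT_PASS

assert ksft_exit_code(10, 0) == KSFT_PASS
assert ksft_exit_code(9, 1) == KSFT_FAIL
assert ksft_exit_code(0, 0) == KSFT_SKIP
```

This matters because CI harnesses key off the exit status, not the printed counters.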
+49 -10
tools/testing/selftests/net/fib_tests.sh
···
 	setup
 
 	set -e
+	ip netns add ns2
+	ip netns set ns2 auto
+
+	ip -netns ns2 link set dev lo up
+
+	$IP link add name veth1 type veth peer name veth2
+	$IP link set dev veth2 netns ns2
+	$IP address add 192.0.2.1/24 dev veth1
+	ip -netns ns2 address add 192.0.2.1/24 dev veth2
+	$IP link set dev veth1 up
+	ip -netns ns2 link set dev veth2 up
+
 	$IP link set dev lo address 52:54:00:6a:c7:5e
-	$IP link set dummy0 address 52:54:00:6a:c7:5e
-	$IP link add dummy1 type dummy
-	$IP link set dummy1 address 52:54:00:6a:c7:5e
-	$IP link set dev dummy1 up
+	$IP link set dev veth1 address 52:54:00:6a:c7:5e
+	ip -netns ns2 link set dev lo address 52:54:00:6a:c7:5e
+	ip -netns ns2 link set dev veth2 address 52:54:00:6a:c7:5e
+
+	# 1. (ns2) redirect lo's egress to veth2's egress
+	ip netns exec ns2 tc qdisc add dev lo parent root handle 1: fq_codel
+	ip netns exec ns2 tc filter add dev lo parent 1: protocol arp basic \
+		action mirred egress redirect dev veth2
+	ip netns exec ns2 tc filter add dev lo parent 1: protocol ip basic \
+		action mirred egress redirect dev veth2
+
+	# 2. (ns1) redirect veth1's ingress to lo's ingress
+	$NS_EXEC tc qdisc add dev veth1 ingress
+	$NS_EXEC tc filter add dev veth1 ingress protocol arp basic \
+		action mirred ingress redirect dev lo
+	$NS_EXEC tc filter add dev veth1 ingress protocol ip basic \
+		action mirred ingress redirect dev lo
+
+	# 3. (ns1) redirect lo's egress to veth1's egress
+	$NS_EXEC tc qdisc add dev lo parent root handle 1: fq_codel
+	$NS_EXEC tc filter add dev lo parent 1: protocol arp basic \
+		action mirred egress redirect dev veth1
+	$NS_EXEC tc filter add dev lo parent 1: protocol ip basic \
+		action mirred egress redirect dev veth1
+
+	# 4. (ns2) redirect veth2's ingress to lo's ingress
+	ip netns exec ns2 tc qdisc add dev veth2 ingress
+	ip netns exec ns2 tc filter add dev veth2 ingress protocol arp basic \
+		action mirred ingress redirect dev lo
+	ip netns exec ns2 tc filter add dev veth2 ingress protocol ip basic \
+		action mirred ingress redirect dev lo
+
 	$NS_EXEC sysctl -qw net.ipv4.conf.all.rp_filter=1
 	$NS_EXEC sysctl -qw net.ipv4.conf.all.accept_local=1
 	$NS_EXEC sysctl -qw net.ipv4.conf.all.route_localnet=1
-
-	$NS_EXEC tc qd add dev dummy1 parent root handle 1: fq_codel
-	$NS_EXEC tc filter add dev dummy1 parent 1: protocol arp basic action mirred egress redirect dev lo
-	$NS_EXEC tc filter add dev dummy1 parent 1: protocol ip basic action mirred egress redirect dev lo
+	ip netns exec ns2 sysctl -qw net.ipv4.conf.all.rp_filter=1
+	ip netns exec ns2 sysctl -qw net.ipv4.conf.all.accept_local=1
+	ip netns exec ns2 sysctl -qw net.ipv4.conf.all.route_localnet=1
 	set +e
 
-	run_cmd "ip netns exec ns1 ping -I dummy1 -w1 -c1 198.51.100.1"
+	run_cmd "ip netns exec ns2 ping -w1 -c1 192.0.2.1"
 	log_test $? 0 "rp_filter passes local packets"
 
-	run_cmd "ip netns exec ns1 ping -I dummy1 -w1 -c1 127.0.0.1"
+	run_cmd "ip netns exec ns2 ping -w1 -c1 127.0.0.1"
 	log_test $? 0 "rp_filter passes loopback packets"
 
 	cleanup
+36
tools/testing/selftests/net/tls.c
···
 	struct tls12_crypto_info_chacha20_poly1305 chacha20;
 	struct tls12_crypto_info_sm4_gcm sm4gcm;
 	struct tls12_crypto_info_sm4_ccm sm4ccm;
+	struct tls12_crypto_info_aes_ccm_128 aesccm128;
+	struct tls12_crypto_info_aes_gcm_256 aesgcm256;
 	};
 	size_t len;
 };
···
 		tls12->len = sizeof(struct tls12_crypto_info_sm4_ccm);
 		tls12->sm4ccm.info.version = tls_version;
 		tls12->sm4ccm.info.cipher_type = cipher_type;
+		break;
+	case TLS_CIPHER_AES_CCM_128:
+		tls12->len = sizeof(struct tls12_crypto_info_aes_ccm_128);
+		tls12->aesccm128.info.version = tls_version;
+		tls12->aesccm128.info.cipher_type = cipher_type;
+		break;
+	case TLS_CIPHER_AES_GCM_256:
+		tls12->len = sizeof(struct tls12_crypto_info_aes_gcm_256);
+		tls12->aesgcm256.info.version = tls_version;
+		tls12->aesgcm256.info.cipher_type = cipher_type;
 		break;
 	default:
 		break;
···
 {
 	.tls_version = TLS_1_3_VERSION,
 	.cipher_type = TLS_CIPHER_SM4_CCM,
+};
+
+FIXTURE_VARIANT_ADD(tls, 12_aes_ccm)
+{
+	.tls_version = TLS_1_2_VERSION,
+	.cipher_type = TLS_CIPHER_AES_CCM_128,
+};
+
+FIXTURE_VARIANT_ADD(tls, 13_aes_ccm)
+{
+	.tls_version = TLS_1_3_VERSION,
+	.cipher_type = TLS_CIPHER_AES_CCM_128,
+};
+
+FIXTURE_VARIANT_ADD(tls, 12_aes_gcm_256)
+{
+	.tls_version = TLS_1_2_VERSION,
+	.cipher_type = TLS_CIPHER_AES_GCM_256,
+};
+
+FIXTURE_VARIANT_ADD(tls, 13_aes_gcm_256)
+{
+	.tls_version = TLS_1_3_VERSION,
+	.cipher_type = TLS_CIPHER_AES_GCM_256,
 };
 
 FIXTURE_SETUP(tls)
+26 -4
tools/testing/selftests/netfilter/conntrack_vrf.sh
···
 # oifname is the vrf device.
 test_masquerade_vrf()
 {
+	local qdisc=$1
+
+	if [ "$qdisc" != "default" ]; then
+		tc -net $ns0 qdisc add dev tvrf root $qdisc
+	fi
+
 	ip netns exec $ns0 conntrack -F 2>/dev/null
 
 	ip netns exec $ns0 nft -f - <<EOF
 flush ruleset
 table ip nat {
+	chain rawout {
+		type filter hook output priority raw;
+
+		oif tvrf ct state untracked counter
+	}
+	chain postrouting2 {
+		type filter hook postrouting priority mangle;
+
+		oif tvrf ct state untracked counter
+	}
 	chain postrouting {
 		type nat hook postrouting priority 0;
 		# NB: masquerade should always be combined with 'oif(name) bla',
···
 	fi
 
 	# must also check that nat table was evaluated on second (lower device) iteration.
-	ip netns exec $ns0 nft list table ip nat |grep -q 'counter packets 2'
+	ip netns exec $ns0 nft list table ip nat |grep -q 'counter packets 2' &&
+	ip netns exec $ns0 nft list table ip nat |grep -q 'untracked counter packets [1-9]'
 	if [ $? -eq 0 ]; then
-		echo "PASS: iperf3 connect with masquerade + sport rewrite on vrf device"
+		echo "PASS: iperf3 connect with masquerade + sport rewrite on vrf device ($qdisc qdisc)"
 	else
-		echo "FAIL: vrf masq rule has unexpected counter value"
+		echo "FAIL: vrf rules have unexpected counter value"
 		ret=1
+	fi
+
+	if [ "$qdisc" != "default" ]; then
+		tc -net $ns0 qdisc del dev tvrf root
 	fi
 }
···
 }
 
 test_ct_zone_in
-test_masquerade_vrf
+test_masquerade_vrf "default"
+test_masquerade_vrf "pfifo"
 test_masquerade_veth
 
 exit $ret
+21 -3
tools/testing/selftests/netfilter/nft_concat_range.sh
···
 
 # Set types, defined by TYPE_ variables below
 TYPES="net_port port_net net6_port port_proto net6_port_mac net6_port_mac_proto
-       net_port_net net_mac net_mac_icmp net6_mac_icmp net6_port_net6_port
-       net_port_mac_proto_net"
+       net_port_net net_mac mac_net net_mac_icmp net6_mac_icmp
+       net6_port_net6_port net_port_mac_proto_net"
 
 # Reported bugs, also described by TYPE_ variables below
 BUGS="flush_remove_add"
···
 perf_src	 
 perf_entries	1000
 perf_proto	ipv4
+"
+
+TYPE_mac_net="
+display		mac,net
+type_spec	ether_addr . ipv4_addr
+chain_spec	ether saddr . ip saddr
+dst		 
+src		mac addr4
+start		1
+count		5
+src_delta	2000
+tools		sendip nc bash
+proto		udp
+
+race_repeat	0
+
+perf_duration	0
 "
 
 TYPE_net_mac_icmp="
···
 		fi
 	done
 	for f in ${src}; do
-		__expr="${__expr} . "
+		[ "${__expr}" != "{ " ] && __expr="${__expr} . "
+
 		__start="$(eval format_"${f}" "${srcstart}")"
 		__end="$(eval format_"${f}" "${srcend}")"
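The last hunk above fixes how the concatenated set expression is assembled: the `" . "` separator must only appear between fields, never before the first one (the new `mac_net` type, whose concatenation starts with a `src` field, exposed the stray leading separator). A Python sketch of the same idea, with `build_expr` as an illustrative name:

```python
def build_expr(fields):
    """Build an nft concatenation like '{ ether saddr . ip saddr }' from a
    list of field expressions; join() puts ' . ' only between elements."""
    return "{ " + " . ".join(fields) + " }"

assert build_expr(["ether saddr", "ip saddr"]) == "{ ether saddr . ip saddr }"
assert build_expr(["ip saddr"]) == "{ ip saddr }"  # no stray leading " . "
```

The shell fix expresses the same guard imperatively: skip the separator while the accumulator still holds only the opening `"{ "`.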
+13 -6
tools/testing/selftests/netfilter/nft_zones_many.sh
···
 	ip netns del $ns
 }
 
-ip netns add $ns
-if [ $? -ne 0 ];then
-	echo "SKIP: Could not create net namespace $gw"
-	exit $ksft_skip
-fi
+checktool (){
+	if ! $1 > /dev/null 2>&1; then
+		echo "SKIP: Could not $2"
+		exit $ksft_skip
+	fi
+}
+
+checktool "nft --version" "run test without nft tool"
+checktool "ip -Version" "run test without ip tool"
+checktool "socat -V" "run test without socat tool"
+checktool "ip netns add $ns" "create net namespace"
 
 trap cleanup EXIT
···
 		local start=$(date +%s%3N)
 		i=$((i + 10000))
 		j=$((j + 1))
-		dd if=/dev/zero of=/dev/stdout bs=8k count=10000 2>/dev/null | ip netns exec "$ns" nc -w 1 -q 1 -u -p 12345 127.0.0.1 12345 > /dev/null
+		# nft rule in output places each packet in a different zone.
+		dd if=/dev/zero of=/dev/stdout bs=8k count=10000 2>/dev/null | ip netns exec "$ns" socat STDIN UDP:127.0.0.1:12345,sourceport=12345
 		if [ $? -ne 0 ] ;then
 			ret=1
 			break
+2
tools/testing/selftests/tc-testing/config
···
 CONFIG_NET_SCH_FIFO=y
 CONFIG_NET_SCH_ETS=m
 CONFIG_NET_SCH_RED=m
+CONFIG_NET_SCH_FQ_PIE=m
+CONFIG_NETDEVSIM=m
 
 #
 ## Network testing
+5 -3
tools/testing/selftests/tc-testing/tdc.py
···
         list_test_cases(alltests)
         exit(0)
 
+    exit_code = 0 # KSFT_PASS
     if len(alltests):
         req_plugins = pm.get_required_plugins(alltests)
         try:
···
             print('The following plugins were not found:')
             print('{}'.format(pde.missing_pg))
         catresults = test_runner(pm, args, alltests)
+        if catresults.count_failures() != 0:
+            exit_code = 1 # KSFT_FAIL
         if args.format == 'none':
             print('Test results output suppression requested\n')
         else:
···
                      gid=int(os.getenv('SUDO_GID')))
     else:
         print('No tests found\n')
+        exit_code = 4 # KSFT_SKIP
+    exit(exit_code)
 
 def main():
     """
···
     print('args is {}'.format(args))
 
     set_operation_mode(pm, parser, args, remaining)
-
-    exit(0)
-
 
 if __name__ == "__main__":
     main()
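The tdc.py hunk above stops the runner from always exiting 0 and instead propagates results: fail if any test failed, skip if no tests were found. A compact sketch of that policy (`CatResults` here is a stand-in class, not tdc's real one):

```python
class CatResults:
    """Stand-in for tdc's results object; only count_failures() matters."""
    def __init__(self, failures):
        self._failures = failures
    def count_failures(self):
        return self._failures

def runner_exit_code(results, ntests):
    """Map a test run to a kselftest exit status."""
    if ntests == 0:
        return 4  # KSFT_SKIP: no tests found
    return 1 if results.count_failures() != 0 else 0  # KSFT_FAIL / KSFT_PASS

assert runner_exit_code(CatResults(0), 5) == 0
assert runner_exit_code(CatResults(2), 5) == 1
assert runner_exit_code(CatResults(0), 0) == 4
```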
+1
tools/testing/selftests/tc-testing/tdc.sh
···
 #!/bin/sh
 # SPDX-License-Identifier: GPL-2.0
 
+modprobe netdevsim
 ./tdc.py -c actions --nobuildebpf
 ./tdc.py -c qdisc