Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from netfilter, bpf.

Quite a handful of old regression fixes, but most of those are
pre-5.16.

Current release - regressions:

- fix memory leaks in the skb free deferral scheme if upper layer
protocols are used, i.e. in-kernel TCP readers like TLS

Current release - new code bugs:

- nf_tables: fix NULL check typo in _clone() functions

- change the default to y for Vertexcom vendor Kconfig

- a couple of fixes to incorrect uses of ref tracking

- two fixes for constifying netdev->dev_addr

Previous releases - regressions:

- bpf:
- various verifier fixes mainly around register offset handling
when passed to helper functions
- fix mount source displayed for bpffs (none -> bpffs)

- bonding:
- fix extraction of ports for connection hash calculation
- fix bond_xmit_broadcast return value when some devices are down

- phy: marvell: add Marvell specific PHY loopback

- sch_api: don't skip qdisc attach on ingress, prevent ref leak

- htb: restore minimal packet size handling in rate control

- sfp: fix high power modules without diagnostic monitoring

- mscc: ocelot:
- don't let phylink re-enable TX PAUSE on the NPI port
- don't dereference NULL pointers with shared tc filters

- smsc95xx: correct reset handling for LAN9514

- cpsw: avoid alignment faults by taking NET_IP_ALIGN into account

- phy: micrel: use kszphy_suspend/_resume for irq aware devices,
avoid races with the interrupt

Previous releases - always broken:

- xdp: check prog type before updating BPF link

- smc: resolve various races around abnormal connection termination

- sit: allow encapsulated IPv6 traffic to be delivered locally

- axienet: fix init/reset handling, add missing barriers, read the
right status words, stop queues correctly

- add missing dev_put() in sock_timestamping_bind_phc()

Misc:

- ipv4: prevent accidentally passing RTO_ONLINK to
ip_route_output_key_hash() by sanitizing flags

- ipv4: avoid quadratic behavior in netns dismantle

- stmmac: dwmac-oxnas: add support for OX810SE

- fsl: xgmac_mdio: add workaround for erratum A-009885"

* tag 'net-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (92 commits)
ipv4: add net_hash_mix() dispersion to fib_info_laddrhash keys
ipv4: avoid quadratic behavior in netns dismantle
net/fsl: xgmac_mdio: Fix incorrect iounmap when removing module
powerpc/fsl/dts: Enable WA for erratum A-009885 on fman3l MDIO buses
dt-bindings: net: Document fsl,erratum-a009885
net/fsl: xgmac_mdio: Add workaround for erratum A-009885
net: mscc: ocelot: fix using match before it is set
net: phy: micrel: use kszphy_suspend()/kszphy_resume for irq aware devices
net: cpsw: avoid alignment faults by taking NET_IP_ALIGN into account
nfc: llcp: fix NULL error pointer dereference on sendmsg() after failed bind()
net: axienet: increase default TX ring size to 128
net: axienet: fix for TX busy handling
net: axienet: fix number of TX ring slots for available check
net: axienet: Fix TX ring slot available check
net: axienet: limit minimum TX ring size
net: axienet: add missing memory barriers
net: axienet: reset core on initialization prior to MDIO access
net: axienet: Wait for PhyRstCmplt after core reset
net: axienet: increase reset timeout
bpf, selftests: Add ringbuf memory type confusion test
...

+1049 -422
+9
Documentation/devicetree/bindings/net/fsl-fman.txt
··· 410 410 The settings and programming routines for internal/external 411 411 MDIO are different. Must be included for internal MDIO. 412 412 413 + - fsl,erratum-a009885 414 + Usage: optional 415 + Value type: <boolean> 416 + Definition: Indicates the presence of the A009885 417 + erratum describing that the contents of MDIO_DATA may 418 + become corrupt unless it is read within 16 MDC cycles 419 + of MDIO_CFG[BSY] being cleared, when performing an 420 + MDIO read operation. 421 + 413 422 - fsl,erratum-a011043 414 423 Usage: optional 415 424 Value type: <boolean>
+3
Documentation/devicetree/bindings/net/oxnas-dwmac.txt
··· 9 9 - compatible: For the OX820 SoC, it should be : 10 10 - "oxsemi,ox820-dwmac" to select glue 11 11 - "snps,dwmac-3.512" to select IP version. 12 + For the OX810SE SoC, it should be : 13 + - "oxsemi,ox810se-dwmac" to select glue 14 + - "snps,dwmac-3.512" to select IP version. 12 15 13 16 - clocks: Should contain phandles to the following clocks 14 17 - clock-names: Should contain the following:
+2
arch/powerpc/boot/dts/fsl/qoriq-fman3l-0.dtsi
··· 79 79 #size-cells = <0>; 80 80 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 81 81 reg = <0xfc000 0x1000>; 82 + fsl,erratum-a009885; 82 83 }; 83 84 84 85 xmdio0: mdio@fd000 { ··· 87 86 #size-cells = <0>; 88 87 compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio"; 89 88 reg = <0xfd000 0x1000>; 89 + fsl,erratum-a009885; 90 90 }; 91 91 }; 92 92
+1 -3
drivers/atm/iphase.c
··· 178 178 179 179 static u16 get_desc (IADEV *dev, struct ia_vcc *iavcc) { 180 180 u_short desc_num, i; 181 - struct sk_buff *skb; 182 181 struct ia_vcc *iavcc_r = NULL; 183 182 unsigned long delta; 184 183 static unsigned long timer = 0; ··· 201 202 else 202 203 dev->ffL.tcq_rd -= 2; 203 204 *(u_short *)(dev->seg_ram + dev->ffL.tcq_rd) = i+1; 204 - if (!(skb = dev->desc_tbl[i].txskb) || 205 - !(iavcc_r = dev->desc_tbl[i].iavcc)) 205 + if (!dev->desc_tbl[i].txskb || !(iavcc_r = dev->desc_tbl[i].iavcc)) 206 206 printk("Fatal err, desc table vcc or skb is NULL\n"); 207 207 else 208 208 iavcc_r->vc_desc_cnt--;
+25 -11
drivers/net/bonding/bond_main.c
··· 3874 3874 skb->l4_hash) 3875 3875 return skb->hash; 3876 3876 3877 - return __bond_xmit_hash(bond, skb, skb->head, skb->protocol, 3878 - skb->mac_header, skb->network_header, 3877 + return __bond_xmit_hash(bond, skb, skb->data, skb->protocol, 3878 + skb_mac_offset(skb), skb_network_offset(skb), 3879 3879 skb_headlen(skb)); 3880 3880 } 3881 3881 ··· 4884 4884 struct bonding *bond = netdev_priv(bond_dev); 4885 4885 struct slave *slave = NULL; 4886 4886 struct list_head *iter; 4887 + bool xmit_suc = false; 4888 + bool skb_used = false; 4887 4889 4888 4890 bond_for_each_slave_rcu(bond, slave, iter) { 4889 - if (bond_is_last_slave(bond, slave)) 4890 - break; 4891 - if (bond_slave_is_up(slave) && slave->link == BOND_LINK_UP) { 4892 - struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC); 4891 + struct sk_buff *skb2; 4893 4892 4893 + if (!(bond_slave_is_up(slave) && slave->link == BOND_LINK_UP)) 4894 + continue; 4895 + 4896 + if (bond_is_last_slave(bond, slave)) { 4897 + skb2 = skb; 4898 + skb_used = true; 4899 + } else { 4900 + skb2 = skb_clone(skb, GFP_ATOMIC); 4894 4901 if (!skb2) { 4895 4902 net_err_ratelimited("%s: Error: %s: skb_clone() failed\n", 4896 4903 bond_dev->name, __func__); 4897 4904 continue; 4898 4905 } 4899 - bond_dev_queue_xmit(bond, skb2, slave->dev); 4900 4906 } 4901 - } 4902 - if (slave && bond_slave_is_up(slave) && slave->link == BOND_LINK_UP) 4903 - return bond_dev_queue_xmit(bond, skb, slave->dev); 4904 4907 4905 - return bond_tx_drop(bond_dev, skb); 4908 + if (bond_dev_queue_xmit(bond, skb2, slave->dev) == NETDEV_TX_OK) 4909 + xmit_suc = true; 4910 + } 4911 + 4912 + if (!skb_used) 4913 + dev_kfree_skb_any(skb); 4914 + 4915 + if (xmit_suc) 4916 + return NETDEV_TX_OK; 4917 + 4918 + atomic_long_inc(&bond_dev->tx_dropped); 4919 + return NET_XMIT_DROP; 4906 4920 } 4907 4921 4908 4922 /*------------------------- Device initialization ---------------------------*/
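The reworked bonding broadcast path above clones the skb for every active slave except the last one, which consumes the original — saving one clone per packet while still counting partial success. A toy userspace sketch of that "hand the original to the final consumer" pattern (all names here are illustrative, not the kernel API, and every consumer is assumed "up"):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for sk_buff: just a fixed-size buffer. */
struct buf { char data[32]; };

static int clones; /* how many copies we had to make */

static struct buf *buf_clone(const struct buf *b)
{
	struct buf *c = malloc(sizeof(*c));
	if (!c)
		return NULL;
	memcpy(c->data, b->data, sizeof(c->data));
	clones++;
	return c;
}

/* Deliver to n consumers: clone for all but the last, hand the
 * original to the final one so it is consumed exactly once.
 * Returns how many consumers actually got a buffer. */
static int broadcast(struct buf *orig, int n)
{
	int delivered = 0;
	int used_orig = 0;

	for (int i = 0; i < n; i++) {
		struct buf *b;

		if (i == n - 1) {
			b = orig;	/* last consumer takes ownership */
			used_orig = 1;
		} else {
			b = buf_clone(orig);
			if (!b)
				continue; /* mirror the kernel: skip, don't abort */
		}
		free(b);		/* "transmit" = consume the buffer */
		delivered++;
	}
	if (!used_orig)
		free(orig);		/* nobody took the original (n == 0) */
	return delivered;
}
```

As in the kernel fix, success is "at least one transmit worked", and the original buffer is freed exactly once on every path.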
+18 -13
drivers/net/ethernet/allwinner/sun4i-emac.c
··· 106 106 107 107 /* set EMAC SPEED, depend on PHY */ 108 108 reg_val = readl(db->membase + EMAC_MAC_SUPP_REG); 109 - reg_val &= ~(0x1 << 8); 109 + reg_val &= ~EMAC_MAC_SUPP_100M; 110 110 if (db->speed == SPEED_100) 111 - reg_val |= 1 << 8; 111 + reg_val |= EMAC_MAC_SUPP_100M; 112 112 writel(reg_val, db->membase + EMAC_MAC_SUPP_REG); 113 113 } 114 114 ··· 264 264 265 265 /* re enable interrupt */ 266 266 reg_val = readl(db->membase + EMAC_INT_CTL_REG); 267 - reg_val |= (0x01 << 8); 267 + reg_val |= EMAC_INT_CTL_RX_EN; 268 268 writel(reg_val, db->membase + EMAC_INT_CTL_REG); 269 269 270 270 db->emacrx_completed_flag = 1; ··· 429 429 /* initial EMAC */ 430 430 /* flush RX FIFO */ 431 431 reg_val = readl(db->membase + EMAC_RX_CTL_REG); 432 - reg_val |= 0x8; 432 + reg_val |= EMAC_RX_CTL_FLUSH_FIFO; 433 433 writel(reg_val, db->membase + EMAC_RX_CTL_REG); 434 434 udelay(1); 435 435 ··· 441 441 442 442 /* set MII clock */ 443 443 reg_val = readl(db->membase + EMAC_MAC_MCFG_REG); 444 - reg_val &= (~(0xf << 2)); 445 - reg_val |= (0xD << 2); 444 + reg_val &= ~EMAC_MAC_MCFG_MII_CLKD_MASK; 445 + reg_val |= EMAC_MAC_MCFG_MII_CLKD_72; 446 446 writel(reg_val, db->membase + EMAC_MAC_MCFG_REG); 447 447 448 448 /* clear RX counter */ ··· 506 506 507 507 /* enable RX/TX0/RX Hlevel interrup */ 508 508 reg_val = readl(db->membase + EMAC_INT_CTL_REG); 509 - reg_val |= (0xf << 0) | (0x01 << 8); 509 + reg_val |= (EMAC_INT_CTL_TX_EN | EMAC_INT_CTL_TX_ABRT_EN | EMAC_INT_CTL_RX_EN); 510 510 writel(reg_val, db->membase + EMAC_INT_CTL_REG); 511 511 512 512 spin_unlock_irqrestore(&db->lock, flags); ··· 637 637 if (!rxcount) { 638 638 db->emacrx_completed_flag = 1; 639 639 reg_val = readl(db->membase + EMAC_INT_CTL_REG); 640 - reg_val |= (0xf << 0) | (0x01 << 8); 640 + reg_val |= (EMAC_INT_CTL_TX_EN | 641 + EMAC_INT_CTL_TX_ABRT_EN | 642 + EMAC_INT_CTL_RX_EN); 641 643 writel(reg_val, db->membase + EMAC_INT_CTL_REG); 642 644 643 645 /* had one stuck? */
··· 671 669 writel(reg_val | EMAC_CTL_RX_EN, 672 670 db->membase + EMAC_CTL_REG); 673 671 reg_val = readl(db->membase + EMAC_INT_CTL_REG); 674 - reg_val |= (0xf << 0) | (0x01 << 8); 672 + reg_val |= (EMAC_INT_CTL_TX_EN | 673 + EMAC_INT_CTL_TX_ABRT_EN | 674 + EMAC_INT_CTL_RX_EN); 675 675 writel(reg_val, db->membase + EMAC_INT_CTL_REG); 676 676 677 677 db->emacrx_completed_flag = 1; ··· 787 783 } 788 784 789 785 /* Transmit Interrupt check */ 790 - if (int_status & (0x01 | 0x02)) 786 + if (int_status & EMAC_INT_STA_TX_COMPLETE) 791 787 emac_tx_done(dev, db, int_status); 792 788 793 - if (int_status & (0x04 | 0x08)) 789 + if (int_status & EMAC_INT_STA_TX_ABRT) 794 790 netdev_info(dev, " ab : %x\n", int_status); 795 791 796 792 /* Re-enable interrupt mask */ 797 793 if (db->emacrx_completed_flag == 1) { 798 794 reg_val = readl(db->membase + EMAC_INT_CTL_REG); 799 - reg_val |= (0xf << 0) | (0x01 << 8); 795 + reg_val |= (EMAC_INT_CTL_TX_EN | EMAC_INT_CTL_TX_ABRT_EN | EMAC_INT_CTL_RX_EN); 800 796 writel(reg_val, db->membase + EMAC_INT_CTL_REG); 801 797 } else { 802 798 reg_val = readl(db->membase + EMAC_INT_CTL_REG); 803 - reg_val |= (0xf << 0); 799 + reg_val |= (EMAC_INT_CTL_TX_EN | EMAC_INT_CTL_TX_ABRT_EN); 804 800 writel(reg_val, db->membase + EMAC_INT_CTL_REG); 805 801 } 806 802 ··· 1072 1068 clk_disable_unprepare(db->clk); 1073 1069 out_dispose_mapping: 1074 1070 irq_dispose_mapping(ndev->irq); 1071 + dma_release_channel(db->rx_chan); 1075 1072 out_iounmap: 1076 1073 iounmap(db->membase); 1077 1074 out:
+18
drivers/net/ethernet/allwinner/sun4i-emac.h
··· 38 38 #define EMAC_RX_CTL_REG (0x3c) 39 39 #define EMAC_RX_CTL_AUTO_DRQ_EN (1 << 1) 40 40 #define EMAC_RX_CTL_DMA_EN (1 << 2) 41 + #define EMAC_RX_CTL_FLUSH_FIFO (1 << 3) 41 42 #define EMAC_RX_CTL_PASS_ALL_EN (1 << 4) 42 43 #define EMAC_RX_CTL_PASS_CTL_EN (1 << 5) 43 44 #define EMAC_RX_CTL_PASS_CRC_ERR_EN (1 << 6) ··· 62 61 #define EMAC_RX_IO_DATA_STATUS_OK (1 << 7) 63 62 #define EMAC_RX_FBC_REG (0x50) 64 63 #define EMAC_INT_CTL_REG (0x54) 64 + #define EMAC_INT_CTL_RX_EN (1 << 8) 65 + #define EMAC_INT_CTL_TX0_EN (1) 66 + #define EMAC_INT_CTL_TX1_EN (1 << 1) 67 + #define EMAC_INT_CTL_TX_EN (EMAC_INT_CTL_TX0_EN | EMAC_INT_CTL_TX1_EN) 68 + #define EMAC_INT_CTL_TX0_ABRT_EN (0x1 << 2) 69 + #define EMAC_INT_CTL_TX1_ABRT_EN (0x1 << 3) 70 + #define EMAC_INT_CTL_TX_ABRT_EN (EMAC_INT_CTL_TX0_ABRT_EN | EMAC_INT_CTL_TX1_ABRT_EN) 65 71 #define EMAC_INT_STA_REG (0x58) 72 + #define EMAC_INT_STA_TX0_COMPLETE (0x1) 73 + #define EMAC_INT_STA_TX1_COMPLETE (0x1 << 1) 74 + #define EMAC_INT_STA_TX_COMPLETE (EMAC_INT_STA_TX0_COMPLETE | EMAC_INT_STA_TX1_COMPLETE) 75 + #define EMAC_INT_STA_TX0_ABRT (0x1 << 2) 76 + #define EMAC_INT_STA_TX1_ABRT (0x1 << 3) 77 + #define EMAC_INT_STA_TX_ABRT (EMAC_INT_STA_TX0_ABRT | EMAC_INT_STA_TX1_ABRT) 78 + #define EMAC_INT_STA_RX_COMPLETE (0x1 << 8) 66 79 #define EMAC_MAC_CTL0_REG (0x5c) 67 80 #define EMAC_MAC_CTL0_RX_FLOW_CTL_EN (1 << 2) 68 81 #define EMAC_MAC_CTL0_TX_FLOW_CTL_EN (1 << 3) ··· 102 87 #define EMAC_MAC_CLRT_RM (0x0f) 103 88 #define EMAC_MAC_MAXF_REG (0x70) 104 89 #define EMAC_MAC_SUPP_REG (0x74) 90 + #define EMAC_MAC_SUPP_100M (0x1 << 8) 105 91 #define EMAC_MAC_TEST_REG (0x78) 106 92 #define EMAC_MAC_MCFG_REG (0x7c) 93 + #define EMAC_MAC_MCFG_MII_CLKD_MASK (0xff << 2) 94 + #define EMAC_MAC_MCFG_MII_CLKD_72 (0x0d << 2) 107 95 #define EMAC_MAC_A0_REG (0x98) 108 96 #define EMAC_MAC_A1_REG (0x9c) 109 97 #define EMAC_MAC_A2_REG (0xa0)
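The sun4i-emac change above replaces magic constants like `(0xf << 0) | (0x01 << 8)` with named per-bit macros composed into group masks. A minimal sketch of the same pattern outside the kernel (bit values mirror the header; the register is simulated as a plain integer):

```c
#include <assert.h>
#include <stdint.h>

/* Named bits in the style of the sun4i-emac.h additions. */
#define INT_CTL_TX0_EN	(1u << 0)
#define INT_CTL_TX1_EN	(1u << 1)
#define INT_CTL_TX_EN	(INT_CTL_TX0_EN | INT_CTL_TX1_EN)
#define INT_CTL_RX_EN	(1u << 8)

/* Read-modify-write helpers: touch only the named bits. */
static uint32_t reg_set(uint32_t reg, uint32_t bits)
{
	return reg | bits;
}

static uint32_t reg_clear(uint32_t reg, uint32_t bits)
{
	return reg & ~bits;
}
```

The win is that call sites now say *which* interrupts they enable, and a composite like `INT_CTL_TX_EN` keeps the per-queue bits in one place.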
+4 -1
drivers/net/ethernet/apple/bmac.c
··· 1237 1237 struct bmac_data *bp; 1238 1238 const unsigned char *prop_addr; 1239 1239 unsigned char addr[6]; 1240 + u8 macaddr[6]; 1240 1241 struct net_device *dev; 1241 1242 int is_bmac_plus = ((int)match->data) != 0; 1242 1243 ··· 1285 1284 1286 1285 rev = addr[0] == 0 && addr[1] == 0xA0; 1287 1286 for (j = 0; j < 6; ++j) 1288 - dev->dev_addr[j] = rev ? bitrev8(addr[j]): addr[j]; 1287 + macaddr[j] = rev ? bitrev8(addr[j]): addr[j]; 1288 + 1289 + eth_hw_addr_set(dev, macaddr); 1289 1290 1290 1291 /* Enable chip without interrupts for now */ 1291 1292 bmac_enable_and_reset_chip(dev);
+11 -5
drivers/net/ethernet/apple/mace.c
··· 90 90 static void mace_tx_timeout(struct timer_list *t); 91 91 static inline void dbdma_reset(volatile struct dbdma_regs __iomem *dma); 92 92 static inline void mace_clean_rings(struct mace_data *mp); 93 - static void __mace_set_address(struct net_device *dev, void *addr); 93 + static void __mace_set_address(struct net_device *dev, const void *addr); 94 94 95 95 /* 96 96 * If we can't get a skbuff when we need it, we use this area for DMA. ··· 112 112 struct net_device *dev; 113 113 struct mace_data *mp; 114 114 const unsigned char *addr; 115 + u8 macaddr[ETH_ALEN]; 115 116 int j, rev, rc = -EBUSY; 116 117 117 118 if (macio_resource_count(mdev) != 3 || macio_irq_count(mdev) != 3) { ··· 168 167 169 168 rev = addr[0] == 0 && addr[1] == 0xA0; 170 169 for (j = 0; j < 6; ++j) { 171 - dev->dev_addr[j] = rev ? bitrev8(addr[j]): addr[j]; 170 + macaddr[j] = rev ? bitrev8(addr[j]): addr[j]; 172 171 } 172 + eth_hw_addr_set(dev, macaddr); 173 173 mp->chipid = (in_8(&mp->mace->chipid_hi) << 8) | 174 174 in_8(&mp->mace->chipid_lo); 175 175 ··· 371 369 out_8(&mb->plscc, PORTSEL_GPSI + ENPLSIO); 372 370 } 373 371 374 - static void __mace_set_address(struct net_device *dev, void *addr) 372 + static void __mace_set_address(struct net_device *dev, const void *addr) 375 373 { 376 374 struct mace_data *mp = netdev_priv(dev); 377 375 volatile struct mace __iomem *mb = mp->mace; 378 - unsigned char *p = addr; 376 + const unsigned char *p = addr; 377 + u8 macaddr[ETH_ALEN]; 379 378 int i; 380 379 381 380 /* load up the hardware address */ ··· 388 385 ; 389 386 } 390 387 for (i = 0; i < 6; ++i) 391 - out_8(&mb->padr, dev->dev_addr[i] = p[i]); 388 + out_8(&mb->padr, macaddr[i] = p[i]); 389 + 390 + eth_hw_addr_set(dev, macaddr); 391 + 392 392 if (mp->chipid != BROKEN_ADDRCHG_REV) 393 393 out_8(&mb->iac, 0); 394 394 }
+6 -4
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 4020 4020 4021 4021 /* Request the WOL interrupt and advertise suspend if available */ 4022 4022 priv->wol_irq_disabled = true; 4023 - err = devm_request_irq(&pdev->dev, priv->wol_irq, bcmgenet_wol_isr, 0, 4024 - dev->name, priv); 4025 - if (!err) 4026 - device_set_wakeup_capable(&pdev->dev, 1); 4023 + if (priv->wol_irq > 0) { 4024 + err = devm_request_irq(&pdev->dev, priv->wol_irq, 4025 + bcmgenet_wol_isr, 0, dev->name, priv); 4026 + if (!err) 4027 + device_set_wakeup_capable(&pdev->dev, 1); 4028 + } 4027 4029 4028 4030 /* Set the needed headroom to account for any possible 4029 4031 * features enabling/disabling at runtime
+2 -1
drivers/net/ethernet/chelsio/libcxgb/libcxgb_cm.c
··· 32 32 33 33 #include <linux/tcp.h> 34 34 #include <linux/ipv6.h> 35 + #include <net/inet_ecn.h> 35 36 #include <net/route.h> 36 37 #include <net/ip6_route.h> 37 38 ··· 100 99 101 100 rt = ip_route_output_ports(&init_net, &fl4, NULL, peer_ip, local_ip, 102 101 peer_port, local_port, IPPROTO_TCP, 103 - tos, 0); 102 + tos & ~INET_ECN_MASK, 0); 104 103 if (IS_ERR(rt)) 105 104 return NULL; 106 105 n = dst_neigh_lookup(&rt->dst, &peer_ip);
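This hunk (and the mlx5e tunnel one below) masks the ECN bits out of the TOS byte before a route lookup: the low two bits of TOS carry ECN state, not DSCP, so they must not influence routing. A one-liner sketch of the masking (`ECN_MASK` mirrors the kernel's `INET_ECN_MASK` value of 0x3; the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Low two bits of the IPv4 TOS byte are ECN, not DSCP. */
#define ECN_MASK 0x3

/* TOS value safe to feed to a route lookup. */
static uint8_t route_tos(uint8_t tos)
{
	return tos & ~ECN_MASK;
}
```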
+21 -7
drivers/net/ethernet/freescale/xgmac_mdio.c
··· 51 51 struct mdio_fsl_priv { 52 52 struct tgec_mdio_controller __iomem *mdio_base; 53 53 bool is_little_endian; 54 + bool has_a009885; 54 55 bool has_a011043; 55 56 }; 56 57 ··· 187 186 { 188 187 struct mdio_fsl_priv *priv = (struct mdio_fsl_priv *)bus->priv; 189 188 struct tgec_mdio_controller __iomem *regs = priv->mdio_base; 189 + unsigned long flags; 190 190 uint16_t dev_addr; 191 191 uint32_t mdio_stat; 192 192 uint32_t mdio_ctl; 193 - uint16_t value; 194 193 int ret; 195 194 bool endian = priv->is_little_endian; 196 195 ··· 222 221 return ret; 223 222 } 224 223 224 + if (priv->has_a009885) 225 + /* Once the operation completes, i.e. MDIO_STAT_BSY clears, we 226 + * must read back the data register within 16 MDC cycles. 227 + */ 228 + local_irq_save(flags); 229 + 225 230 /* Initiate the read */ 226 231 xgmac_write32(mdio_ctl | MDIO_CTL_READ, &regs->mdio_ctl, endian); 227 232 228 233 ret = xgmac_wait_until_done(&bus->dev, regs, endian); 229 234 if (ret) 230 - return ret; 235 + goto irq_restore; 231 236 232 237 /* Return all Fs if nothing was there */ 233 238 if ((xgmac_read32(&regs->mdio_stat, endian) & MDIO_STAT_RD_ER) && ··· 241 234 dev_dbg(&bus->dev, 242 235 "Error while reading PHY%d reg at %d.%hhu\n", 243 236 phy_id, dev_addr, regnum); 244 - return 0xffff; 237 + ret = 0xffff; 238 + } else { 239 + ret = xgmac_read32(&regs->mdio_data, endian) & 0xffff; 240 + dev_dbg(&bus->dev, "read %04x\n", ret); 245 241 } 246 242 247 - value = xgmac_read32(&regs->mdio_data, endian) & 0xffff; 248 - dev_dbg(&bus->dev, "read %04x\n", value); 243 + irq_restore: 244 + if (priv->has_a009885) 245 + local_irq_restore(flags); 249 246 250 - return value; 247 + return ret; 251 248 } 252 249 253 250 static int xgmac_mdio_probe(struct platform_device *pdev) ··· 298 287 priv->is_little_endian = device_property_read_bool(&pdev->dev, 299 288 "little-endian"); 300 289 290 + priv->has_a009885 = device_property_read_bool(&pdev->dev, 291 + "fsl,erratum-a009885"); 301 292 priv->has_a011043 = device_property_read_bool(&pdev->dev, 302 293 "fsl,erratum-a011043"); 303 294
··· 331 318 static int xgmac_mdio_remove(struct platform_device *pdev) 332 319 { 333 320 struct mii_bus *bus = platform_get_drvdata(pdev); 321 + struct mdio_fsl_priv *priv = bus->priv; 334 322 335 323 mdiobus_unregister(bus); 336 - iounmap(bus->priv); 324 + iounmap(priv->mdio_base); 337 325 mdiobus_free(bus); 338 326 339 327 return 0;
+2 -1
drivers/net/ethernet/i825xx/sni_82596.c
··· 117 117 netdevice->dev_addr[5] = readb(eth_addr + 0x06); 118 118 iounmap(eth_addr); 119 119 120 - if (!netdevice->irq) { 120 + if (netdevice->irq < 0) { 121 121 printk(KERN_ERR "%s: IRQ not found for i82596 at 0x%lx\n", 122 122 __FILE__, netdevice->base_addr); 123 + retval = netdevice->irq; 123 124 goto probe_failed; 124 125 } 125 126
-1
drivers/net/ethernet/marvell/prestera/prestera.h
··· 283 283 struct list_head rif_entry_list; 284 284 struct notifier_block inetaddr_nb; 285 285 struct notifier_block inetaddr_valid_nb; 286 - bool aborted; 287 286 }; 288 287 289 288 struct prestera_rxtx_params {
+2 -2
drivers/net/ethernet/marvell/prestera/prestera_hw.c
··· 1831 1831 int prestera_hw_rif_create(struct prestera_switch *sw, 1832 1832 struct prestera_iface *iif, u8 *mac, u16 *rif_id) 1833 1833 { 1834 - struct prestera_msg_rif_req req; 1835 1834 struct prestera_msg_rif_resp resp; 1835 + struct prestera_msg_rif_req req; 1836 1836 int err; 1837 1837 1838 1838 memcpy(req.mac, mac, ETH_ALEN); ··· 1868 1868 1869 1869 int prestera_hw_vr_create(struct prestera_switch *sw, u16 *vr_id) 1870 1870 { 1871 - int err; 1872 1871 struct prestera_msg_vr_resp resp; 1873 1872 struct prestera_msg_vr_req req; 1873 + int err; 1874 1874 1875 1875 err = prestera_cmd_ret(sw, PRESTERA_CMD_TYPE_ROUTER_VR_CREATE, 1876 1876 &req.cmd, sizeof(req), &resp.ret, sizeof(resp));
+1
drivers/net/ethernet/marvell/prestera/prestera_main.c
··· 982 982 prestera_event_handlers_unregister(sw); 983 983 prestera_rxtx_switch_fini(sw); 984 984 prestera_switchdev_fini(sw); 985 + prestera_router_fini(sw); 985 986 prestera_netdev_event_handler_unregister(sw); 986 987 prestera_hw_switch_fini(sw); 987 988 }
+13 -11
drivers/net/ethernet/marvell/prestera/prestera_router.c
··· 25 25 struct netlink_ext_ack *extack) 26 26 { 27 27 struct prestera_port *port = netdev_priv(port_dev); 28 - int err; 29 - struct prestera_rif_entry *re; 30 28 struct prestera_rif_entry_key re_key = {}; 29 + struct prestera_rif_entry *re; 31 30 u32 kern_tb_id; 31 + int err; 32 32 33 33 err = prestera_is_valid_mac_addr(port, port_dev->dev_addr); 34 34 if (err) { ··· 45 45 switch (event) { 46 46 case NETDEV_UP: 47 47 if (re) { 48 - NL_SET_ERR_MSG_MOD(extack, "rif_entry already exist"); 48 + NL_SET_ERR_MSG_MOD(extack, "RIF already exist"); 49 49 return -EEXIST; 50 50 } 51 51 re = prestera_rif_entry_create(port->sw, &re_key, 52 52 prestera_fix_tb_id(kern_tb_id), 53 53 port_dev->dev_addr); 54 54 if (!re) { 55 - NL_SET_ERR_MSG_MOD(extack, "Can't create rif_entry"); 55 + NL_SET_ERR_MSG_MOD(extack, "Can't create RIF"); 56 56 return -EINVAL; 57 57 } 58 58 dev_hold(port_dev); 59 59 break; 60 60 case NETDEV_DOWN: 61 61 if (!re) { 62 - NL_SET_ERR_MSG_MOD(extack, "rif_entry not exist"); 62 + NL_SET_ERR_MSG_MOD(extack, "Can't find RIF"); 63 63 return -EEXIST; 64 64 } 65 65 prestera_rif_entry_destroy(port->sw, re); ··· 75 75 unsigned long event, 76 76 struct netlink_ext_ack *extack) 77 77 { 78 - if (prestera_netdev_check(dev) && !netif_is_bridge_port(dev) && 79 - !netif_is_lag_port(dev) && !netif_is_ovs_port(dev)) 80 - return __prestera_inetaddr_port_event(dev, event, extack); 78 + if (!prestera_netdev_check(dev) || netif_is_bridge_port(dev) || 79 + netif_is_lag_port(dev) || netif_is_ovs_port(dev)) 80 + return 0; 81 81 82 - return 0; 82 + return __prestera_inetaddr_port_event(dev, event, extack); 83 83 } 84 84 85 85 static int __prestera_inetaddr_cb(struct notifier_block *nb, ··· 126 126 goto out; 127 127 128 128 if (ipv4_is_multicast(ivi->ivi_addr)) { 129 + NL_SET_ERR_MSG_MOD(ivi->extack, 130 + "Multicast addr on RIF is not supported"); 129 131 err = -EINVAL; 130 132 goto out; 131 133 } ··· 168 166 err_register_inetaddr_notifier: unregister_inetaddr_validator_notifier(&router->inetaddr_valid_nb); 170 168 err_register_inetaddr_validator_notifier: 171 - /* prestera_router_hw_fini */ 169 + prestera_router_hw_fini(sw); 172 170 err_router_lib_init: 173 171 kfree(sw->router); 174 172 return err;
··· 178 176 { 179 177 unregister_inetaddr_notifier(&sw->router->inetaddr_nb); 180 178 unregister_inetaddr_validator_notifier(&sw->router->inetaddr_valid_nb); 181 - /* router_hw_fini */ 179 + prestera_router_hw_fini(sw); 182 180 kfree(sw->router); 183 181 sw->router = NULL; 184 182 }
+23 -17
drivers/net/ethernet/marvell/prestera/prestera_router_hw.c
··· 29 29 return 0; 30 30 } 31 31 32 + void prestera_router_hw_fini(struct prestera_switch *sw) 33 + { 34 + WARN_ON(!list_empty(&sw->router->vr_list)); 35 + WARN_ON(!list_empty(&sw->router->rif_entry_list)); 36 + } 37 + 32 38 static struct prestera_vr *__prestera_vr_find(struct prestera_switch *sw, 33 39 u32 tb_id) 34 40 { ··· 53 47 struct netlink_ext_ack *extack) 54 48 { 55 49 struct prestera_vr *vr; 56 - u16 hw_vr_id; 57 50 int err; 58 - 59 - err = prestera_hw_vr_create(sw, &hw_vr_id); 60 - if (err) 61 - return ERR_PTR(-ENOMEM); 62 51 63 52 vr = kzalloc(sizeof(*vr), GFP_KERNEL); 64 53 if (!vr) { ··· 62 61 } 63 62 64 63 vr->tb_id = tb_id; 65 - vr->hw_vr_id = hw_vr_id; 64 + 65 + err = prestera_hw_vr_create(sw, &vr->hw_vr_id); 66 + if (err) 67 + goto err_hw_create; 66 68 67 69 list_add(&vr->router_node, &sw->router->vr_list); 68 70 69 71 return vr; 70 72 71 - err_alloc_vr: 72 - prestera_hw_vr_delete(sw, hw_vr_id); 73 + err_hw_create: 73 74 kfree(vr); 75 + err_alloc_vr: 74 76 return ERR_PTR(err); 75 77 } 76 78 77 79 static void __prestera_vr_destroy(struct prestera_switch *sw, 78 80 struct prestera_vr *vr) 79 81 { 80 - prestera_hw_vr_delete(sw, vr->hw_vr_id); 81 82 list_del(&vr->router_node); 83 + prestera_hw_vr_delete(sw, vr->hw_vr_id); 82 84 kfree(vr); 83 85 } 84 86 ··· 91 87 struct prestera_vr *vr; 92 88 93 89 vr = __prestera_vr_find(sw, tb_id); 94 - if (!vr) 90 + if (vr) { 91 + refcount_inc(&vr->refcount); 92 + } else { 95 93 vr = __prestera_vr_create(sw, tb_id, extack); 96 - if (IS_ERR(vr)) 97 - return ERR_CAST(vr); 94 + if (IS_ERR(vr)) 95 + return ERR_CAST(vr); 96 + 97 + refcount_set(&vr->refcount, 1); 98 + } 98 99 99 100 return vr; 100 101 } 101 102 102 103 static void prestera_vr_put(struct prestera_switch *sw, struct prestera_vr *vr) 103 104 { 104 - if (!vr->ref_cnt) 105 + if (refcount_dec_and_test(&vr->refcount)) 105 106 __prestera_vr_destroy(sw, vr); 106 107 } 107 108 ··· 129 120 out->iface.vlan_id = in->iface.vlan_id; 130 121 break; 131 122 default: 132 - pr_err("Unsupported iface type"); 123 + WARN(1, "Unsupported iface type"); 133 124 return -EINVAL; 134 125 } 135 126
··· 167 158 iface.vr_id = e->vr->hw_vr_id; 168 159 prestera_hw_rif_delete(sw, e->hw_id, &iface); 169 160 170 - e->vr->ref_cnt--; 171 161 prestera_vr_put(sw, e->vr); 172 162 kfree(e); 173 163 } ··· 191 183 if (IS_ERR(e->vr)) 192 184 goto err_vr_get; 193 185 194 - e->vr->ref_cnt++; 195 186 memcpy(&e->addr, addr, sizeof(e->addr)); 196 187 197 188 /* HW */ ··· 205 198 return e; 206 199 207 200 err_hw_create: 208 - e->vr->ref_cnt--; 209 201 prestera_vr_put(sw, e->vr); 210 202 err_vr_get: 211 203 err_key_copy:
+2 -1
drivers/net/ethernet/marvell/prestera/prestera_router_hw.h
··· 6 6 7 7 struct prestera_vr { 8 8 struct list_head router_node; 9 - unsigned int ref_cnt; 9 + refcount_t refcount; 10 10 u32 tb_id; /* key (kernel fib table id) */ 11 11 u16 hw_vr_id; /* virtual router ID */ 12 12 u8 __pad[2]; ··· 32 32 struct prestera_rif_entry_key *k, 33 33 u32 tb_id, const unsigned char *addr); 34 34 int prestera_router_hw_init(struct prestera_switch *sw); 35 + void prestera_router_hw_fini(struct prestera_switch *sw); 35 36 36 37 #endif /* _PRESTERA_ROUTER_HW_H_ */
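The prestera hunks above replace a hand-rolled `ref_cnt` with `refcount_t` and move all counting into the get/put pair, so callers can no longer forget to balance it. The get-or-create/put shape can be sketched in plain C (names and the plain `int` counter are illustrative; the kernel uses `refcount_inc()`/`refcount_dec_and_test()` for overflow safety):

```c
#include <assert.h>
#include <stdlib.h>

struct vr {
	int tb_id;	/* lookup key */
	int refcount;	/* stand-in for the kernel's refcount_t */
};

static struct vr *table[16];	/* toy lookup table keyed by tb_id */

/* Find an existing entry and take a reference, or create one
 * with an initial reference of 1. */
static struct vr *vr_get(int tb_id)
{
	struct vr *vr = table[tb_id];

	if (vr) {
		vr->refcount++;
	} else {
		vr = calloc(1, sizeof(*vr));
		if (!vr)
			return NULL;
		vr->tb_id = tb_id;
		vr->refcount = 1;	/* creator holds the first reference */
		table[tb_id] = vr;
	}
	return vr;
}

/* Drop a reference; the entry is destroyed when the last one goes. */
static void vr_put(struct vr *vr)
{
	if (--vr->refcount == 0) {	/* kernel: refcount_dec_and_test() */
		table[vr->tb_id] = NULL;
		free(vr);
	}
}
```

Centralizing the count in `vr_get()`/`vr_put()` is exactly what removed the scattered `e->vr->ref_cnt++/--` lines from the rif_entry code.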
+1 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 267 267 phylink_config); 268 268 struct mtk_eth *eth = mac->hw; 269 269 u32 mcr_cur, mcr_new, sid, i; 270 - int val, ge_mode, err; 270 + int val, ge_mode, err = 0; 271 271 272 272 /* MT76x8 has no hardware settings between for the MAC */ 273 273 if (!MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628) &&
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ 2 2 /* Copyright (c) 2018 Mellanox Technologies. */ 3 3 4 + #include <net/inet_ecn.h> 4 5 #include <net/vxlan.h> 5 6 #include <net/gre.h> 6 7 #include <net/geneve.h> ··· 236 235 int err; 237 236 238 237 /* add the IP fields */ 239 - attr.fl.fl4.flowi4_tos = tun_key->tos; 238 + attr.fl.fl4.flowi4_tos = tun_key->tos & ~INET_ECN_MASK; 240 239 attr.fl.fl4.daddr = tun_key->u.ipv4.dst; 241 240 attr.fl.fl4.saddr = tun_key->u.ipv4.src; 242 241 attr.ttl = tun_key->ttl; ··· 351 350 int err; 352 351 353 352 /* add the IP fields */ 354 - attr.fl.fl4.flowi4_tos = tun_key->tos; 353 + attr.fl.fl4.flowi4_tos = tun_key->tos & ~INET_ECN_MASK; 355 354 attr.fl.fl4.daddr = tun_key->u.ipv4.dst; 356 355 attr.fl.fl4.saddr = tun_key->u.ipv4.src; 357 356 attr.ttl = tun_key->ttl;
+4 -1
drivers/net/ethernet/mscc/ocelot.c
··· 771 771 772 772 ocelot_write_rix(ocelot, 0, ANA_POL_FLOWC, port); 773 773 774 - ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, tx_pause); 774 + /* Don't attempt to send PAUSE frames on the NPI port, it's broken */ 775 + if (port != ocelot->npi) 776 + ocelot_fields_write(ocelot, port, SYS_PAUSE_CFG_PAUSE_ENA, 777 + tx_pause); 775 778 776 779 /* Undo the effects of ocelot_phylink_mac_link_down: 777 780 * enable MAC module
+36 -8
drivers/net/ethernet/mscc/ocelot_flower.c
··· 559 559 return -EOPNOTSUPP; 560 560 } 561 561 562 - if (filter->block_id == VCAP_IS1 && 563 - !is_zero_ether_addr(match.mask->dst)) { 564 - NL_SET_ERR_MSG_MOD(extack, 565 - "Key type S1_NORMAL cannot match on destination MAC"); 566 - return -EOPNOTSUPP; 567 - } 568 - 569 562 /* The hw support mac matches only for MAC_ETYPE key, 570 563 * therefore if other matches(port, tcp flags, etc) are added 571 564 * then just bail out ··· 573 580 return -EOPNOTSUPP; 574 581 575 582 flow_rule_match_eth_addrs(rule, &match); 583 + 584 + if (filter->block_id == VCAP_IS1 && 585 + !is_zero_ether_addr(match.mask->dst)) { 586 + NL_SET_ERR_MSG_MOD(extack, 587 + "Key type S1_NORMAL cannot match on destination MAC"); 588 + return -EOPNOTSUPP; 589 + } 590 + 576 591 filter->key_type = OCELOT_VCAP_KEY_ETYPE; 577 592 ether_addr_copy(filter->key.etype.dmac.value, 578 593 match.key->dst); ··· 806 805 struct netlink_ext_ack *extack = f->common.extack; 807 806 struct ocelot_vcap_filter *filter; 808 807 int chain = f->common.chain_index; 809 - int ret; 808 + int block_id, ret; 810 809 811 810 if (chain && !ocelot_find_vcap_filter_that_points_at(ocelot, chain)) { 812 811 NL_SET_ERR_MSG_MOD(extack, "No default GOTO action points to this chain"); 813 812 return -EOPNOTSUPP; 814 813 } 815 814 816 + block_id = ocelot_chain_to_block(chain, ingress); 817 + if (block_id < 0) { 818 + NL_SET_ERR_MSG_MOD(extack, "Cannot offload to this chain"); 819 + return -EOPNOTSUPP; 820 + } 821 + 822 + filter = ocelot_vcap_block_find_filter_by_id(&ocelot->block[block_id], 823 + f->cookie, true); 824 + if (filter) { 825 + /* Filter already exists on other ports */ 826 + if (!ingress) { 827 + NL_SET_ERR_MSG_MOD(extack, "VCAP ES0 does not support shared filters"); 828 + return -EOPNOTSUPP; 829 + } 830 + 831 + filter->ingress_port_mask |= BIT(port); 832 + 833 + return ocelot_vcap_filter_replace(ocelot, filter); 834 + } 835 + 836 + /* Filter didn't exist, create it now */ 816 836 filter = ocelot_vcap_filter_create(ocelot, port, ingress, f); 817 837 if (!filter) 818 838 return -ENOMEM;
··· 895 873 896 874 if (filter->type == OCELOT_VCAP_FILTER_DUMMY) 897 875 return ocelot_vcap_dummy_filter_del(ocelot, filter); 876 + 877 + if (ingress) { 878 + filter->ingress_port_mask &= ~BIT(port); 879 + if (filter->ingress_port_mask) 880 + return ocelot_vcap_filter_replace(ocelot, filter); 881 + } 898 882 899 883 return ocelot_vcap_filter_del(ocelot, filter); 900 884 }
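The ocelot_flower change above makes tc filters shared across ports: a rule identified by its cookie gets one hardware entry whose `ingress_port_mask` accumulates a bit per port, and deletion only removes the entry once no port references it. A toy sketch of that reference-by-bitmask lifecycle (struct and helper names are illustrative, not the driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a shared ingress filter: one rule, many ports. */
struct filter {
	uint64_t cookie;		/* identifies the tc rule */
	uint32_t ingress_port_mask;	/* one bit per referencing port */
};

/* "Adding" the same rule on another port just sets its bit. */
static void filter_add_port(struct filter *f, int port)
{
	f->ingress_port_mask |= 1u << port;
}

/* Deleting from one port clears its bit; returns 1 only when no
 * port references the rule any more and the hardware entry can go. */
static int filter_del_port(struct filter *f, int port)
{
	f->ingress_port_mask &= ~(1u << port);
	return f->ingress_port_mask == 0;
}
```

This is why the fix matters for shared tc blocks: without the mask bookkeeping, deleting the filter on one port tore down the entry still used by the others.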
+3 -3
drivers/net/ethernet/mscc/ocelot_net.c
··· 1187 1187 ocelot_port_bridge_join(ocelot, port, bridge); 1188 1188 1189 1189 err = switchdev_bridge_port_offload(brport_dev, dev, priv, 1190 - &ocelot_netdevice_nb, 1190 + &ocelot_switchdev_nb, 1191 1191 &ocelot_switchdev_blocking_nb, 1192 1192 false, extack); 1193 1193 if (err) ··· 1201 1201 1202 1202 err_switchdev_sync: 1203 1203 switchdev_bridge_port_unoffload(brport_dev, priv, 1204 - &ocelot_netdevice_nb, 1204 + &ocelot_switchdev_nb, 1205 1205 &ocelot_switchdev_blocking_nb); 1206 1206 err_switchdev_offload: 1207 1207 ocelot_port_bridge_leave(ocelot, port, bridge); ··· 1214 1214 struct ocelot_port_private *priv = netdev_priv(dev); 1215 1215 1216 1216 switchdev_bridge_port_unoffload(brport_dev, priv, 1217 - &ocelot_netdevice_nb, 1217 + &ocelot_switchdev_nb, 1218 1218 &ocelot_switchdev_blocking_nb); 1219 1219 } 1220 1220
+86 -29
drivers/net/ethernet/stmicro/stmmac/dwmac-oxnas.c
··· 12 12 #include <linux/io.h> 13 13 #include <linux/module.h> 14 14 #include <linux/of.h> 15 + #include <linux/of_device.h> 15 16 #include <linux/platform_device.h> 16 17 #include <linux/regmap.h> 17 18 #include <linux/mfd/syscon.h> ··· 49 48 #define DWMAC_RX_VARDELAY(d) ((d) << DWMAC_RX_VARDELAY_SHIFT) 50 49 #define DWMAC_RXN_VARDELAY(d) ((d) << DWMAC_RXN_VARDELAY_SHIFT) 51 50 51 + struct oxnas_dwmac; 52 + 53 + struct oxnas_dwmac_data { 54 + int (*setup)(struct oxnas_dwmac *dwmac); 55 + }; 56 + 52 57 struct oxnas_dwmac { 53 58 struct device *dev; 54 59 struct clk *clk; 55 60 struct regmap *regmap; 61 + const struct oxnas_dwmac_data *data; 56 62 }; 63 + 64 + static int oxnas_dwmac_setup_ox810se(struct oxnas_dwmac *dwmac) 65 + { 66 + unsigned int value; 67 + int ret; 68 + 69 + ret = regmap_read(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, &value); 70 + if (ret < 0) 71 + return ret; 72 + 73 + /* Enable GMII_GTXCLK to follow GMII_REFCLK, required for gigabit PHY */ 74 + value |= BIT(DWMAC_CKEN_GTX) | 75 + /* Use simple mux for 25/125 Mhz clock switching */ 76 + BIT(DWMAC_SIMPLE_MUX); 77 + 78 + regmap_write(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, value); 79 + 80 + return 0; 81 + } 82 + 83 + static int oxnas_dwmac_setup_ox820(struct oxnas_dwmac *dwmac) 84 + { 85 + unsigned int value; 86 + int ret; 87 + 88 + ret = regmap_read(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, &value); 89 + if (ret < 0) 90 + return ret; 91 + 92 + /* Enable GMII_GTXCLK to follow GMII_REFCLK, required for gigabit PHY */ 93 + value |= BIT(DWMAC_CKEN_GTX) | 94 + /* Use simple mux for 25/125 Mhz clock switching */ 95 + BIT(DWMAC_SIMPLE_MUX) | 96 + /* set auto switch tx clock source */ 97 + BIT(DWMAC_AUTO_TX_SOURCE) | 98 + /* enable tx & rx vardelay */ 99 + BIT(DWMAC_CKEN_TX_OUT) | 100 + BIT(DWMAC_CKEN_TXN_OUT) | 101 + BIT(DWMAC_CKEN_TX_IN) | 102 + BIT(DWMAC_CKEN_RX_OUT) | 103 + BIT(DWMAC_CKEN_RXN_OUT) | 104 + BIT(DWMAC_CKEN_RX_IN); 105 + regmap_write(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, value); 106 + 107 + /* set tx & rx vardelay */ 108 + value = DWMAC_TX_VARDELAY(4) | 109 + DWMAC_TXN_VARDELAY(2) | 110 + DWMAC_RX_VARDELAY(10) | 111 + DWMAC_RXN_VARDELAY(8); 112 + regmap_write(dwmac->regmap, OXNAS_DWMAC_DELAY_REGOFFSET, value); 113 + 114 + return 0; 115 + } 57 116 58 117 static int oxnas_dwmac_init(struct platform_device *pdev, void *priv) 59 118 { 60 119 struct oxnas_dwmac *dwmac = priv; 61 - unsigned int value; 62 120 int ret; 63 121 64 122 /* Reset HW here before changing the glue configuration */
··· 129 69 if (ret) 130 70 return ret; 131 71 132 - ret = regmap_read(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, &value); 133 - if (ret < 0) { 72 + ret = dwmac->data->setup(dwmac); 73 + if (ret) 134 74 clk_disable_unprepare(dwmac->clk); 135 - return ret; 136 - } 75 138 - /* Enable GMII_GTXCLK to follow GMII_REFCLK, required for gigabit PHY */ 139 - value |= BIT(DWMAC_CKEN_GTX) | 140 - /* Use simple mux for 25/125 Mhz clock switching */ 141 - BIT(DWMAC_SIMPLE_MUX) | 142 - /* set auto switch tx clock source */ 143 - BIT(DWMAC_AUTO_TX_SOURCE) | 144 - /* enable tx & rx vardelay */ 145 - BIT(DWMAC_CKEN_TX_OUT) | 146 - BIT(DWMAC_CKEN_TXN_OUT) | 147 - BIT(DWMAC_CKEN_TX_IN) | 148 - BIT(DWMAC_CKEN_RX_OUT) | 149 - BIT(DWMAC_CKEN_RXN_OUT) | 150 - BIT(DWMAC_CKEN_RX_IN); 151 - regmap_write(dwmac->regmap, OXNAS_DWMAC_CTRL_REGOFFSET, value); 152 - 153 - /* set tx & rx vardelay */ 154 - value = DWMAC_TX_VARDELAY(4) | 155 - DWMAC_TXN_VARDELAY(2) | 156 - DWMAC_RX_VARDELAY(10) | 157 - DWMAC_RXN_VARDELAY(8); 158 - regmap_write(dwmac->regmap, OXNAS_DWMAC_DELAY_REGOFFSET, value); 159 - 160 - return 0; 76 + return ret; 161 77 } 162 78 163 79 static void oxnas_dwmac_exit(struct platform_device *pdev, void *priv) ··· 161 125 dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL); 162 126 if (!dwmac) { 163 127 ret = -ENOMEM; 128 + goto err_remove_config_dt; 129 + } 130 + 131 + dwmac->data = (const struct oxnas_dwmac_data *)of_device_get_match_data(&pdev->dev); 132 + if
(!dwmac->data) { 133 + ret = -EINVAL; 164 134 goto err_remove_config_dt; 165 135 } 166 136 ··· 208 166 return ret; 209 167 } 210 168 169 + static const struct oxnas_dwmac_data ox810se_dwmac_data = { 170 + .setup = oxnas_dwmac_setup_ox810se, 171 + }; 172 + 173 + static const struct oxnas_dwmac_data ox820_dwmac_data = { 174 + .setup = oxnas_dwmac_setup_ox820, 175 + }; 176 + 211 177 static const struct of_device_id oxnas_dwmac_match[] = { 212 - { .compatible = "oxsemi,ox820-dwmac" }, 178 + { 179 + .compatible = "oxsemi,ox810se-dwmac", 180 + .data = &ox810se_dwmac_data, 181 + }, 182 + { 183 + .compatible = "oxsemi,ox820-dwmac", 184 + .data = &ox820_dwmac_data, 185 + }, 213 186 { } 214 187 }; 215 188 MODULE_DEVICE_TABLE(of, oxnas_dwmac_match);
+2 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 7159 7159 7160 7160 pm_runtime_get_noresume(device); 7161 7161 pm_runtime_set_active(device); 7162 - pm_runtime_enable(device); 7162 + if (!pm_runtime_enabled(device)) 7163 + pm_runtime_enable(device); 7163 7164 7164 7165 if (priv->hw->pcs != STMMAC_PCS_TBI && 7165 7166 priv->hw->pcs != STMMAC_PCS_RTBI) {
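The stmmac change above guards `pm_runtime_enable()` behind `pm_runtime_enabled()` so a resume path cannot bump the enable depth twice. A minimal userspace sketch of the same idempotent-enable guard (the `fake_*` names are illustrative stand-ins, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a device's runtime-PM enable depth. */
struct fake_dev { int enable_count; };

static bool fake_pm_enabled(struct fake_dev *d) { return d->enable_count > 0; }
static void fake_pm_enable(struct fake_dev *d)  { d->enable_count++; }

/* Mirrors: if (!pm_runtime_enabled(dev)) pm_runtime_enable(dev);
 * Calling this twice must leave the enable depth balanced at 1. */
static void guarded_enable(struct fake_dev *d)
{
	if (!fake_pm_enabled(d))
		fake_pm_enable(d);
}
```

Without the guard, a second call would leave the count at 2 and a single disable would no longer bring the device back to a disabled state.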
+3 -3
drivers/net/ethernet/ti/cpsw.c
··· 349 349 struct cpsw_common *cpsw = ndev_to_cpsw(xmeta->ndev); 350 350 int pkt_size = cpsw->rx_packet_max; 351 351 int ret = 0, port, ch = xmeta->ch; 352 - int headroom = CPSW_HEADROOM; 352 + int headroom = CPSW_HEADROOM_NA; 353 353 struct net_device *ndev = xmeta->ndev; 354 354 struct cpsw_priv *priv; 355 355 struct page_pool *pool; ··· 392 392 } 393 393 394 394 if (priv->xdp_prog) { 395 - int headroom = CPSW_HEADROOM, size = len; 395 + int size = len; 396 396 397 397 xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]); 398 398 if (status & CPDMA_RX_VLAN_ENCAP) { ··· 442 442 xmeta->ndev = ndev; 443 443 xmeta->ch = ch; 444 444 445 - dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM; 445 + dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA; 446 446 ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma, 447 447 pkt_size, 0); 448 448 if (ret < 0) {
+3 -3
drivers/net/ethernet/ti/cpsw_new.c
··· 283 283 { 284 284 struct page *new_page, *page = token; 285 285 void *pa = page_address(page); 286 - int headroom = CPSW_HEADROOM; 286 + int headroom = CPSW_HEADROOM_NA; 287 287 struct cpsw_meta_xdp *xmeta; 288 288 struct cpsw_common *cpsw; 289 289 struct net_device *ndev; ··· 336 336 } 337 337 338 338 if (priv->xdp_prog) { 339 - int headroom = CPSW_HEADROOM, size = len; 339 + int size = len; 340 340 341 341 xdp_init_buff(&xdp, PAGE_SIZE, &priv->xdp_rxq[ch]); 342 342 if (status & CPDMA_RX_VLAN_ENCAP) { ··· 386 386 xmeta->ndev = ndev; 387 387 xmeta->ch = ch; 388 388 389 - dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM; 389 + dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM_NA; 390 390 ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma, 391 391 pkt_size, 0); 392 392 if (ret < 0) {
+1 -1
drivers/net/ethernet/ti/cpsw_priv.c
··· 1122 1122 xmeta->ndev = priv->ndev; 1123 1123 xmeta->ch = ch; 1124 1124 1125 - dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM; 1125 + dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM_NA; 1126 1126 ret = cpdma_chan_idle_submit_mapped(cpsw->rxv[ch].ch, 1127 1127 page, dma, 1128 1128 cpsw->rx_packet_max,
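The cpsw fixes switch every RX buffer offset from `CPSW_HEADROOM` to `CPSW_HEADROOM_NA`, a headroom that includes `NET_IP_ALIGN`. The point of the extra 2 bytes is that, after the 14-byte Ethernet header, the IP header lands on a 4-byte boundary, avoiding alignment faults on strict-alignment CPUs. A small sketch of the arithmetic (constants only; not the driver's actual macro definitions):

```c
#include <assert.h>

#define ETH_HLEN     14  /* Ethernet header length */
#define NET_IP_ALIGN 2   /* pad so the IP header lands on a 4-byte boundary */

/* Offset of the IP header when the frame starts 'headroom' bytes into the buffer. */
static unsigned int ip_header_offset(unsigned int headroom)
{
	return headroom + ETH_HLEN;
}
```

With a headroom that is a multiple of 4 plus `NET_IP_ALIGN`, the IP header offset is 4-byte aligned; with a plain multiple-of-4 headroom it sits at offset 2 mod 4.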
+1 -1
drivers/net/ethernet/vertexcom/Kconfig
··· 5 5 6 6 config NET_VENDOR_VERTEXCOM 7 7 bool "Vertexcom devices" 8 - default n 8 + default y 9 9 help 10 10 If you have a network (Ethernet) card belonging to this class, say Y. 11 11
+84 -51
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 41 41 #include "xilinx_axienet.h" 42 42 43 43 /* Descriptors defines for Tx and Rx DMA */ 44 - #define TX_BD_NUM_DEFAULT 64 44 + #define TX_BD_NUM_DEFAULT 128 45 45 #define RX_BD_NUM_DEFAULT 1024 46 + #define TX_BD_NUM_MIN (MAX_SKB_FRAGS + 1) 46 47 #define TX_BD_NUM_MAX 4096 47 48 #define RX_BD_NUM_MAX 4096 48 49 ··· 497 496 498 497 static int __axienet_device_reset(struct axienet_local *lp) 499 498 { 500 - u32 timeout; 499 + u32 value; 500 + int ret; 501 501 502 502 /* Reset Axi DMA. This would reset Axi Ethernet core as well. The reset 503 503 * process of Axi DMA takes a while to complete as all pending ··· 508 506 * they both reset the entire DMA core, so only one needs to be used. 509 507 */ 510 508 axienet_dma_out32(lp, XAXIDMA_TX_CR_OFFSET, XAXIDMA_CR_RESET_MASK); 511 - timeout = DELAY_OF_ONE_MILLISEC; 512 - while (axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET) & 513 - XAXIDMA_CR_RESET_MASK) { 514 - udelay(1); 515 - if (--timeout == 0) { 516 - netdev_err(lp->ndev, "%s: DMA reset timeout!\n", 517 - __func__); 518 - return -ETIMEDOUT; 519 - } 509 + ret = read_poll_timeout(axienet_dma_in32, value, 510 + !(value & XAXIDMA_CR_RESET_MASK), 511 + DELAY_OF_ONE_MILLISEC, 50000, false, lp, 512 + XAXIDMA_TX_CR_OFFSET); 513 + if (ret) { 514 + dev_err(lp->dev, "%s: DMA reset timeout!\n", __func__); 515 + return ret; 516 + } 517 + 518 + /* Wait for PhyRstCmplt bit to be set, indicating the PHY reset has finished */ 519 + ret = read_poll_timeout(axienet_ior, value, 520 + value & XAE_INT_PHYRSTCMPLT_MASK, 521 + DELAY_OF_ONE_MILLISEC, 50000, false, lp, 522 + XAE_IS_OFFSET); 523 + if (ret) { 524 + dev_err(lp->dev, "%s: timeout waiting for PhyRstCmplt\n", __func__); 525 + return ret; 520 526 } 521 527 522 528 return 0; ··· 633 623 if (nr_bds == -1 && !(status & XAXIDMA_BD_STS_COMPLETE_MASK)) 634 624 break; 635 625 626 + /* Ensure we see complete descriptor update */ 627 + dma_rmb(); 636 628 phys = desc_get_phys_addr(lp, cur_p); 637 629 dma_unmap_single(ndev->dev.parent, phys, 
638 630 (cur_p->cntrl & XAXIDMA_BD_CTRL_LENGTH_MASK), ··· 643 631 if (cur_p->skb && (status & XAXIDMA_BD_STS_COMPLETE_MASK)) 644 632 dev_consume_skb_irq(cur_p->skb); 645 633 646 - cur_p->cntrl = 0; 647 634 cur_p->app0 = 0; 648 635 cur_p->app1 = 0; 649 636 cur_p->app2 = 0; 650 637 cur_p->app4 = 0; 651 - cur_p->status = 0; 652 638 cur_p->skb = NULL; 639 + /* ensure our transmit path and device don't prematurely see status cleared */ 640 + wmb(); 641 + cur_p->cntrl = 0; 642 + cur_p->status = 0; 653 643 654 644 if (sizep) 655 645 *sizep += status & XAXIDMA_BD_STS_ACTUAL_LEN_MASK; 656 646 } 657 647 658 648 return i; 649 + } 650 + 651 + /** 652 + * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy 653 + * @lp: Pointer to the axienet_local structure 654 + * @num_frag: The number of BDs to check for 655 + * 656 + * Return: 0, on success 657 + * NETDEV_TX_BUSY, if any of the descriptors are not free 658 + * 659 + * This function is invoked before BDs are allocated and transmission starts. 660 + * This function returns 0 if a BD or group of BDs can be allocated for 661 + * transmission. If the BD or any of the BDs are not free the function 662 + * returns a busy status. This is invoked from axienet_start_xmit. 
663 + */ 664 + static inline int axienet_check_tx_bd_space(struct axienet_local *lp, 665 + int num_frag) 666 + { 667 + struct axidma_bd *cur_p; 668 + 669 + /* Ensure we see all descriptor updates from device or TX IRQ path */ 670 + rmb(); 671 + cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num]; 672 + if (cur_p->cntrl) 673 + return NETDEV_TX_BUSY; 674 + return 0; 659 675 } 660 676 661 677 /** ··· 715 675 /* Matches barrier in axienet_start_xmit */ 716 676 smp_mb(); 717 677 718 - netif_wake_queue(ndev); 719 - } 720 - 721 - /** 722 - * axienet_check_tx_bd_space - Checks if a BD/group of BDs are currently busy 723 - * @lp: Pointer to the axienet_local structure 724 - * @num_frag: The number of BDs to check for 725 - * 726 - * Return: 0, on success 727 - * NETDEV_TX_BUSY, if any of the descriptors are not free 728 - * 729 - * This function is invoked before BDs are allocated and transmission starts. 730 - * This function returns 0 if a BD or group of BDs can be allocated for 731 - * transmission. If the BD or any of the BDs are not free the function 732 - * returns a busy status. This is invoked from axienet_start_xmit. 
733 - */ 734 - static inline int axienet_check_tx_bd_space(struct axienet_local *lp, 735 - int num_frag) 736 - { 737 - struct axidma_bd *cur_p; 738 - cur_p = &lp->tx_bd_v[(lp->tx_bd_tail + num_frag) % lp->tx_bd_num]; 739 - if (cur_p->status & XAXIDMA_BD_STS_ALL_MASK) 740 - return NETDEV_TX_BUSY; 741 - return 0; 678 + if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) 679 + netif_wake_queue(ndev); 742 680 } 743 681 744 682 /** ··· 748 730 num_frag = skb_shinfo(skb)->nr_frags; 749 731 cur_p = &lp->tx_bd_v[lp->tx_bd_tail]; 750 732 751 - if (axienet_check_tx_bd_space(lp, num_frag)) { 752 - if (netif_queue_stopped(ndev)) 753 - return NETDEV_TX_BUSY; 754 - 733 + if (axienet_check_tx_bd_space(lp, num_frag + 1)) { 734 + /* Should not happen as last start_xmit call should have 735 + * checked for sufficient space and queue should only be 736 + * woken when sufficient space is available. 737 + */ 755 738 netif_stop_queue(ndev); 756 - 757 - /* Matches barrier in axienet_start_xmit_done */ 758 - smp_mb(); 759 - 760 - /* Space might have just been freed - check again */ 761 - if (axienet_check_tx_bd_space(lp, num_frag)) 762 - return NETDEV_TX_BUSY; 763 - 764 - netif_wake_queue(ndev); 739 + if (net_ratelimit()) 740 + netdev_warn(ndev, "TX ring unexpectedly full\n"); 741 + return NETDEV_TX_BUSY; 765 742 } 766 743 767 744 if (skb->ip_summed == CHECKSUM_PARTIAL) { ··· 817 804 if (++lp->tx_bd_tail >= lp->tx_bd_num) 818 805 lp->tx_bd_tail = 0; 819 806 807 + /* Stop queue if next transmit may not have space */ 808 + if (axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) { 809 + netif_stop_queue(ndev); 810 + 811 + /* Matches barrier in axienet_start_xmit_done */ 812 + smp_mb(); 813 + 814 + /* Space might have just been freed - check again */ 815 + if (!axienet_check_tx_bd_space(lp, MAX_SKB_FRAGS + 1)) 816 + netif_wake_queue(ndev); 817 + } 818 + 820 819 return NETDEV_TX_OK; 821 820 } 822 821 ··· 859 834 860 835 tail_p = lp->rx_bd_p + sizeof(*lp->rx_bd_v) * lp->rx_bd_ci; 861 836 837 + 
/* Ensure we see complete descriptor update */ 838 + dma_rmb(); 862 839 phys = desc_get_phys_addr(lp, cur_p); 863 840 dma_unmap_single(ndev->dev.parent, phys, lp->max_frm_size, 864 841 DMA_FROM_DEVICE); ··· 1379 1352 if (ering->rx_pending > RX_BD_NUM_MAX || 1380 1353 ering->rx_mini_pending || 1381 1354 ering->rx_jumbo_pending || 1382 - ering->rx_pending > TX_BD_NUM_MAX) 1355 + ering->tx_pending < TX_BD_NUM_MIN || 1356 + ering->tx_pending > TX_BD_NUM_MAX) 1383 1357 return -EINVAL; 1384 1358 1385 1359 if (netif_running(ndev)) ··· 2054 2026 2055 2027 lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD; 2056 2028 lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD; 2029 + 2030 + /* Reset core now that clocks are enabled, prior to accessing MDIO */ 2031 + ret = __axienet_device_reset(lp); 2032 + if (ret) 2033 + goto cleanup_clk; 2057 2034 2058 2035 lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0); 2059 2036 if (lp->phy_node) {
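The axienet rework above moves the ring-space check so the queue is stopped *after* queuing a packet whenever the next worst-case packet (`MAX_SKB_FRAGS + 1` descriptors) might not fit, rather than rejecting a packet mid-transmit. A simplified single-producer ring sketch of that stop-early policy (sizes and names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

#define RING_SIZE   128
#define MAX_FRAGS_1 18   /* worst-case descriptors per packet, akin to MAX_SKB_FRAGS + 1 */

struct fake_ring {
	unsigned int head, tail; /* head advances on xmit, tail on completion */
	bool stopped;
};

static unsigned int ring_free(const struct fake_ring *r)
{
	return RING_SIZE - (r->head - r->tail);
}

/* Queue one packet, then stop if the *next* worst-case packet may not fit. */
static bool ring_xmit_one(struct fake_ring *r, unsigned int ndesc)
{
	if (ring_free(r) < ndesc)
		return false;          /* should not happen if stop/wake is correct */
	r->head += ndesc;
	if (ring_free(r) < MAX_FRAGS_1)
		r->stopped = true;     /* stop early instead of failing later */
	return true;
}
```

This keeps `NETDEV_TX_BUSY` an "unexpected" path, which is why the real patch adds a rate-limited warning there.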
+20 -8
drivers/net/ipa/ipa_endpoint.c
··· 1080 1080 { 1081 1081 struct gsi *gsi; 1082 1082 u32 backlog; 1083 + int delta; 1083 1084 1084 - if (!endpoint->replenish_enabled) { 1085 + if (!test_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags)) { 1085 1086 if (add_one) 1086 1087 atomic_inc(&endpoint->replenish_saved); 1088 + return; 1089 + } 1090 + 1091 + /* If already active, just update the backlog */ 1092 + if (test_and_set_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags)) { 1093 + if (add_one) 1094 + atomic_inc(&endpoint->replenish_backlog); 1087 1095 return; 1088 1096 } 1089 1097 1090 1098 while (atomic_dec_not_zero(&endpoint->replenish_backlog)) 1091 1099 if (ipa_endpoint_replenish_one(endpoint)) 1092 1100 goto try_again_later; 1101 + 1102 + clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags); 1103 + 1093 1104 if (add_one) 1094 1105 atomic_inc(&endpoint->replenish_backlog); 1095 1106 1096 1107 return; 1097 1108 1098 1109 try_again_later: 1099 - /* The last one didn't succeed, so fix the backlog */ 1100 - backlog = atomic_inc_return(&endpoint->replenish_backlog); 1110 + clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags); 1101 1111 1102 - if (add_one) 1103 - atomic_inc(&endpoint->replenish_backlog); 1112 + /* The last one didn't succeed, so fix the backlog */ 1113 + delta = add_one ? 2 : 1; 1114 + backlog = atomic_add_return(delta, &endpoint->replenish_backlog); 1104 1115 1105 1116 /* Whenever a receive buffer transaction completes we'll try to 1106 1117 * replenish again. 
It's unlikely, but if we fail to supply even ··· 1131 1120 u32 max_backlog; 1132 1121 u32 saved; 1133 1122 1134 - endpoint->replenish_enabled = true; 1123 + set_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags); 1135 1124 while ((saved = atomic_xchg(&endpoint->replenish_saved, 0))) 1136 1125 atomic_add(saved, &endpoint->replenish_backlog); 1137 1126 ··· 1145 1134 { 1146 1135 u32 backlog; 1147 1136 1148 - endpoint->replenish_enabled = false; 1137 + clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags); 1149 1138 while ((backlog = atomic_xchg(&endpoint->replenish_backlog, 0))) 1150 1139 atomic_add(backlog, &endpoint->replenish_saved); 1151 1140 } ··· 1702 1691 /* RX transactions require a single TRE, so the maximum 1703 1692 * backlog is the same as the maximum outstanding TREs. 1704 1693 */ 1705 - endpoint->replenish_enabled = false; 1694 + clear_bit(IPA_REPLENISH_ENABLED, endpoint->replenish_flags); 1695 + clear_bit(IPA_REPLENISH_ACTIVE, endpoint->replenish_flags); 1706 1696 atomic_set(&endpoint->replenish_saved, 1707 1697 gsi_channel_tre_max(gsi, endpoint->channel_id)); 1708 1698 atomic_set(&endpoint->replenish_backlog, 0);
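The IPA change above uses an `IPA_REPLENISH_ACTIVE` bit so only one context runs the replenish loop at a time; concurrent callers just record their request in the backlog and return. A userspace sketch of that single-worker pattern using C11 atomics (the `fake_*` names are illustrative, and the kernel uses `test_and_set_bit()` rather than `atomic_exchange()`):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct fake_endpoint {
	atomic_bool active;
	atomic_int  backlog;
	int         filled;   /* buffers actually handed to "hardware" */
};

static void fake_replenish(struct fake_endpoint *ep, bool add_one)
{
	/* Claim the worker role; if someone already holds it, just queue work. */
	if (atomic_exchange(&ep->active, true)) {
		if (add_one)
			atomic_fetch_add(&ep->backlog, 1);
		return;
	}

	/* Drain the backlog, one buffer at a time (akin to atomic_dec_not_zero). */
	for (;;) {
		int n = atomic_load(&ep->backlog);
		if (n == 0)
			break;
		if (atomic_compare_exchange_strong(&ep->backlog, &n, n - 1))
			ep->filled++;
	}

	atomic_store(&ep->active, false);
	if (add_one)
		atomic_fetch_add(&ep->backlog, 1);
}
```

The claim/drain/release ordering is what closes the race: a request that arrives while the worker is active is never lost, it only lands in the backlog.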
+15 -2
drivers/net/ipa/ipa_endpoint.h
··· 41 41 #define IPA_ENDPOINT_MAX 32 /* Max supported by driver */ 42 42 43 43 /** 44 + * enum ipa_replenish_flag: RX buffer replenish flags 45 + * 46 + * @IPA_REPLENISH_ENABLED: Whether receive buffer replenishing is enabled 47 + * @IPA_REPLENISH_ACTIVE: Whether replenishing is underway 48 + * @IPA_REPLENISH_COUNT: Number of defined replenish flags 49 + */ 50 + enum ipa_replenish_flag { 51 + IPA_REPLENISH_ENABLED, 52 + IPA_REPLENISH_ACTIVE, 53 + IPA_REPLENISH_COUNT, /* Number of flags (must be last) */ 54 + }; 55 + 56 + /** 44 57 * struct ipa_endpoint - IPA endpoint information 45 58 * @ipa: IPA pointer 46 59 * @ee_id: Execution environmnent endpoint is associated with ··· 64 51 * @trans_tre_max: Maximum number of TRE descriptors per transaction 65 52 * @evt_ring_id: GSI event ring used by the endpoint 66 53 * @netdev: Network device pointer, if endpoint uses one 67 - * @replenish_enabled: Whether receive buffer replenishing is enabled 54 + * @replenish_flags: Replenishing state flags 68 55 * @replenish_ready: Number of replenish transactions without doorbell 69 56 * @replenish_saved: Replenish requests held while disabled 70 57 * @replenish_backlog: Number of buffers needed to fill hardware queue ··· 85 72 struct net_device *netdev; 86 73 87 74 /* Receive buffer replenishing for RX endpoints */ 88 - bool replenish_enabled; 75 + DECLARE_BITMAP(replenish_flags, IPA_REPLENISH_COUNT); 89 76 u32 replenish_ready; 90 77 atomic_t replenish_saved; 91 78 atomic_t replenish_backlog;
+1 -1
drivers/net/phy/at803x.c
··· 421 421 const u8 *mac; 422 422 int ret, irq_enabled; 423 423 unsigned int i; 424 - const unsigned int offsets[] = { 424 + static const unsigned int offsets[] = { 425 425 AT803X_LOC_MAC_ADDR_32_47_OFFSET, 426 426 AT803X_LOC_MAC_ADDR_16_31_OFFSET, 427 427 AT803X_LOC_MAC_ADDR_0_15_OFFSET,
+55 -1
drivers/net/phy/marvell.c
··· 189 189 #define MII_88E1510_GEN_CTRL_REG_1_MODE_RGMII_SGMII 0x4 190 190 #define MII_88E1510_GEN_CTRL_REG_1_RESET 0x8000 /* Soft reset */ 191 191 192 + #define MII_88E1510_MSCR_2 0x15 193 + 192 194 #define MII_VCT5_TX_RX_MDI0_COUPLING 0x10 193 195 #define MII_VCT5_TX_RX_MDI1_COUPLING 0x11 194 196 #define MII_VCT5_TX_RX_MDI2_COUPLING 0x12 ··· 1934 1932 data[i] = marvell_get_stat(phydev, i); 1935 1933 } 1936 1934 1935 + static int m88e1510_loopback(struct phy_device *phydev, bool enable) 1936 + { 1937 + int err; 1938 + 1939 + if (enable) { 1940 + u16 bmcr_ctl = 0, mscr2_ctl = 0; 1941 + 1942 + if (phydev->speed == SPEED_1000) 1943 + bmcr_ctl = BMCR_SPEED1000; 1944 + else if (phydev->speed == SPEED_100) 1945 + bmcr_ctl = BMCR_SPEED100; 1946 + 1947 + if (phydev->duplex == DUPLEX_FULL) 1948 + bmcr_ctl |= BMCR_FULLDPLX; 1949 + 1950 + err = phy_write(phydev, MII_BMCR, bmcr_ctl); 1951 + if (err < 0) 1952 + return err; 1953 + 1954 + if (phydev->speed == SPEED_1000) 1955 + mscr2_ctl = BMCR_SPEED1000; 1956 + else if (phydev->speed == SPEED_100) 1957 + mscr2_ctl = BMCR_SPEED100; 1958 + 1959 + err = phy_modify_paged(phydev, MII_MARVELL_MSCR_PAGE, 1960 + MII_88E1510_MSCR_2, BMCR_SPEED1000 | 1961 + BMCR_SPEED100, mscr2_ctl); 1962 + if (err < 0) 1963 + return err; 1964 + 1965 + /* Need soft reset to have speed configuration takes effect */ 1966 + err = genphy_soft_reset(phydev); 1967 + if (err < 0) 1968 + return err; 1969 + 1970 + /* FIXME: Based on trial and error test, it seem 1G need to have 1971 + * delay between soft reset and loopback enablement. 
1972 + */ 1973 + if (phydev->speed == SPEED_1000) 1974 + msleep(1000); 1975 + 1976 + return phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK, 1977 + BMCR_LOOPBACK); 1978 + } else { 1979 + err = phy_modify(phydev, MII_BMCR, BMCR_LOOPBACK, 0); 1980 + if (err < 0) 1981 + return err; 1982 + 1983 + return phy_config_aneg(phydev); 1984 + } 1985 + } 1986 + 1937 1987 static int marvell_vct5_wait_complete(struct phy_device *phydev) 1938 1988 { 1939 1989 int i; ··· 3132 3078 .get_sset_count = marvell_get_sset_count, 3133 3079 .get_strings = marvell_get_strings, 3134 3080 .get_stats = marvell_get_stats, 3135 - .set_loopback = genphy_loopback, 3081 + .set_loopback = m88e1510_loopback, 3136 3082 .get_tunable = m88e1011_get_tunable, 3137 3083 .set_tunable = m88e1011_set_tunable, 3138 3084 .cable_test_start = marvell_vct7_cable_test_start,
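The Marvell loopback path above forces a fixed speed/duplex into BMCR before enabling loopback. The bit layout it builds comes from the standard MII register definitions; a small sketch of the same composition (register values as in `<uapi/linux/mii.h>`):

```c
#include <assert.h>
#include <stdint.h>

#define BMCR_FULLDPLX  0x0100
#define BMCR_SPEED1000 0x0040
#define BMCR_SPEED100  0x2000
#define BMCR_LOOPBACK  0x4000

#define SPEED_100   100
#define SPEED_1000  1000
#define DUPLEX_FULL 1

/* Forced-speed BMCR value built the same way the m88e1510 loopback path does. */
static uint16_t forced_bmcr(int speed, int duplex)
{
	uint16_t ctl = 0;

	if (speed == SPEED_1000)
		ctl = BMCR_SPEED1000;
	else if (speed == SPEED_100)
		ctl = BMCR_SPEED100;
	if (duplex == DUPLEX_FULL)
		ctl |= BMCR_FULLDPLX;
	return ctl;
}
```

After the soft reset, the driver then OR-in `BMCR_LOOPBACK` via `phy_modify()` as a separate step.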
+18 -18
drivers/net/phy/micrel.c
··· 1726 1726 .config_init = kszphy_config_init, 1727 1727 .config_intr = kszphy_config_intr, 1728 1728 .handle_interrupt = kszphy_handle_interrupt, 1729 - .suspend = genphy_suspend, 1730 - .resume = genphy_resume, 1729 + .suspend = kszphy_suspend, 1730 + .resume = kszphy_resume, 1731 1731 }, { 1732 1732 .phy_id = PHY_ID_KSZ8021, 1733 1733 .phy_id_mask = 0x00ffffff, ··· 1741 1741 .get_sset_count = kszphy_get_sset_count, 1742 1742 .get_strings = kszphy_get_strings, 1743 1743 .get_stats = kszphy_get_stats, 1744 - .suspend = genphy_suspend, 1745 - .resume = genphy_resume, 1744 + .suspend = kszphy_suspend, 1745 + .resume = kszphy_resume, 1746 1746 }, { 1747 1747 .phy_id = PHY_ID_KSZ8031, 1748 1748 .phy_id_mask = 0x00ffffff, ··· 1756 1756 .get_sset_count = kszphy_get_sset_count, 1757 1757 .get_strings = kszphy_get_strings, 1758 1758 .get_stats = kszphy_get_stats, 1759 - .suspend = genphy_suspend, 1760 - .resume = genphy_resume, 1759 + .suspend = kszphy_suspend, 1760 + .resume = kszphy_resume, 1761 1761 }, { 1762 1762 .phy_id = PHY_ID_KSZ8041, 1763 1763 .phy_id_mask = MICREL_PHY_ID_MASK, ··· 1788 1788 .get_sset_count = kszphy_get_sset_count, 1789 1789 .get_strings = kszphy_get_strings, 1790 1790 .get_stats = kszphy_get_stats, 1791 - .suspend = genphy_suspend, 1792 - .resume = genphy_resume, 1791 + .suspend = kszphy_suspend, 1792 + .resume = kszphy_resume, 1793 1793 }, { 1794 1794 .name = "Micrel KSZ8051", 1795 1795 /* PHY_BASIC_FEATURES */ ··· 1802 1802 .get_strings = kszphy_get_strings, 1803 1803 .get_stats = kszphy_get_stats, 1804 1804 .match_phy_device = ksz8051_match_phy_device, 1805 - .suspend = genphy_suspend, 1806 - .resume = genphy_resume, 1805 + .suspend = kszphy_suspend, 1806 + .resume = kszphy_resume, 1807 1807 }, { 1808 1808 .phy_id = PHY_ID_KSZ8001, 1809 1809 .name = "Micrel KSZ8001 or KS8721", ··· 1817 1817 .get_sset_count = kszphy_get_sset_count, 1818 1818 .get_strings = kszphy_get_strings, 1819 1819 .get_stats = kszphy_get_stats, 1820 - .suspend = 
genphy_suspend, 1821 - .resume = genphy_resume, 1820 + .suspend = kszphy_suspend, 1821 + .resume = kszphy_resume, 1822 1822 }, { 1823 1823 .phy_id = PHY_ID_KSZ8081, 1824 1824 .name = "Micrel KSZ8081 or KSZ8091", ··· 1848 1848 .config_init = ksz8061_config_init, 1849 1849 .config_intr = kszphy_config_intr, 1850 1850 .handle_interrupt = kszphy_handle_interrupt, 1851 - .suspend = genphy_suspend, 1852 - .resume = genphy_resume, 1851 + .suspend = kszphy_suspend, 1852 + .resume = kszphy_resume, 1853 1853 }, { 1854 1854 .phy_id = PHY_ID_KSZ9021, 1855 1855 .phy_id_mask = 0x000ffffe, ··· 1864 1864 .get_sset_count = kszphy_get_sset_count, 1865 1865 .get_strings = kszphy_get_strings, 1866 1866 .get_stats = kszphy_get_stats, 1867 - .suspend = genphy_suspend, 1868 - .resume = genphy_resume, 1867 + .suspend = kszphy_suspend, 1868 + .resume = kszphy_resume, 1869 1869 .read_mmd = genphy_read_mmd_unsupported, 1870 1870 .write_mmd = genphy_write_mmd_unsupported, 1871 1871 }, { ··· 1883 1883 .get_sset_count = kszphy_get_sset_count, 1884 1884 .get_strings = kszphy_get_strings, 1885 1885 .get_stats = kszphy_get_stats, 1886 - .suspend = genphy_suspend, 1886 + .suspend = kszphy_suspend, 1887 1887 .resume = kszphy_resume, 1888 1888 }, { 1889 1889 .phy_id = PHY_ID_LAN8814, ··· 1928 1928 .get_sset_count = kszphy_get_sset_count, 1929 1929 .get_strings = kszphy_get_strings, 1930 1930 .get_stats = kszphy_get_stats, 1931 - .suspend = genphy_suspend, 1931 + .suspend = kszphy_suspend, 1932 1932 .resume = kszphy_resume, 1933 1933 }, { 1934 1934 .phy_id = PHY_ID_KSZ8873MLL,
+21 -4
drivers/net/phy/sfp.c
··· 1641 1641 static int sfp_module_parse_power(struct sfp *sfp) 1642 1642 { 1643 1643 u32 power_mW = 1000; 1644 + bool supports_a2; 1644 1645 1645 1646 if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_POWER_DECL)) 1646 1647 power_mW = 1500; 1647 1648 if (sfp->id.ext.options & cpu_to_be16(SFP_OPTIONS_HIGH_POWER_LEVEL)) 1648 1649 power_mW = 2000; 1649 1650 1651 + supports_a2 = sfp->id.ext.sff8472_compliance != 1652 + SFP_SFF8472_COMPLIANCE_NONE || 1653 + sfp->id.ext.diagmon & SFP_DIAGMON_DDM; 1654 + 1650 1655 if (power_mW > sfp->max_power_mW) { 1651 1656 /* Module power specification exceeds the allowed maximum. */ 1652 - if (sfp->id.ext.sff8472_compliance == 1653 - SFP_SFF8472_COMPLIANCE_NONE && 1654 - !(sfp->id.ext.diagmon & SFP_DIAGMON_DDM)) { 1657 + if (!supports_a2) { 1655 1658 /* The module appears not to implement bus address 1656 1659 * 0xa2, so assume that the module powers up in the 1657 1660 * indicated mode. ··· 1671 1668 } 1672 1669 } 1673 1670 1671 + if (power_mW <= 1000) { 1672 + /* Modules below 1W do not require a power change sequence */ 1673 + sfp->module_power_mW = power_mW; 1674 + return 0; 1675 + } 1676 + 1677 + if (!supports_a2) { 1678 + /* The module power level is below the host maximum and the 1679 + * module appears not to implement bus address 0xa2, so assume 1680 + * that the module powers up in the indicated mode. 1681 + */ 1682 + return 0; 1683 + } 1684 + 1674 1685 /* If the module requires a higher power mode, but also requires 1675 1686 * an address change sequence, warn the user that the module may 1676 1687 * not be functional. 1677 1688 */ 1678 - if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE && power_mW > 1000) { 1689 + if (sfp->id.ext.diagmon & SFP_DIAGMON_ADDRMODE) { 1679 1690 dev_warn(sfp->dev, 1680 1691 "Address Change Sequence not supported but module requires %u.%uW, module may not be functional\n", 1681 1692 power_mW / 1000, (power_mW / 100) % 10);
+2
drivers/net/usb/qmi_wwan.c
··· 1316 1316 {QMI_FIXED_INTF(0x19d2, 0x1426, 2)}, /* ZTE MF91 */ 1317 1317 {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ 1318 1318 {QMI_FIXED_INTF(0x19d2, 0x1432, 3)}, /* ZTE ME3620 */ 1319 + {QMI_FIXED_INTF(0x19d2, 0x1485, 5)}, /* ZTE MF286D */ 1319 1320 {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ 1320 1321 {QMI_FIXED_INTF(0x2001, 0x7e16, 3)}, /* D-Link DWM-221 */ 1321 1322 {QMI_FIXED_INTF(0x2001, 0x7e19, 4)}, /* D-Link DWM-221 B1 */ ··· 1402 1401 {QMI_FIXED_INTF(0x413c, 0x81e0, 0)}, /* Dell Wireless 5821e with eSIM support*/ 1403 1402 {QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)}, /* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */ 1404 1403 {QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)}, /* HP lt4120 Snapdragon X5 LTE */ 1404 + {QMI_QUIRK_SET_DTR(0x22de, 0x9051, 2)}, /* Hucom Wireless HM-211S/K */ 1405 1405 {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */ 1406 1406 {QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)}, /* SIMCom 7100E, 7230E, 7600E ++ */ 1407 1407 {QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */
+2 -1
drivers/net/usb/smsc95xx.c
··· 1962 1962 .bind = smsc95xx_bind, 1963 1963 .unbind = smsc95xx_unbind, 1964 1964 .link_reset = smsc95xx_link_reset, 1965 - .reset = smsc95xx_start_phy, 1965 + .reset = smsc95xx_reset, 1966 + .check_connect = smsc95xx_start_phy, 1966 1967 .stop = smsc95xx_stop, 1967 1968 .rx_fixup = smsc95xx_rx_fixup, 1968 1969 .tx_fixup = smsc95xx_tx_fixup,
+2 -2
drivers/net/wwan/mhi_wwan_mbim.c
··· 385 385 int err; 386 386 387 387 while (!mhi_queue_is_full(mdev, DMA_FROM_DEVICE)) { 388 - struct sk_buff *skb = alloc_skb(MHI_DEFAULT_MRU, GFP_KERNEL); 388 + struct sk_buff *skb = alloc_skb(mbim->mru, GFP_KERNEL); 389 389 390 390 if (unlikely(!skb)) 391 391 break; 392 392 393 393 err = mhi_queue_skb(mdev, DMA_FROM_DEVICE, skb, 394 - MHI_DEFAULT_MRU, MHI_EOT); 394 + mbim->mru, MHI_EOT); 395 395 if (unlikely(err)) { 396 396 kfree_skb(skb); 397 397 break;
+1 -1
drivers/nfc/pn544/i2c.c
··· 188 188 static void pn544_hci_i2c_platform_init(struct pn544_i2c_phy *phy) 189 189 { 190 190 int polarity, retry, ret; 191 - char rset_cmd[] = { 0x05, 0xF9, 0x04, 0x00, 0xC3, 0xE5 }; 191 + static const char rset_cmd[] = { 0x05, 0xF9, 0x04, 0x00, 0xC3, 0xE5 }; 192 192 int count = sizeof(rset_cmd); 193 193 194 194 nfc_info(&phy->i2c_dev->dev, "Detecting nfc_en polarity\n");
+10
drivers/nfc/st21nfca/se.c
··· 316 316 return -ENOMEM; 317 317 318 318 transaction->aid_len = skb->data[1]; 319 + 320 + /* Checking if the length of the AID is valid */ 321 + if (transaction->aid_len > sizeof(transaction->aid)) 322 + return -EINVAL; 323 + 319 324 memcpy(transaction->aid, &skb->data[2], 320 325 transaction->aid_len); 321 326 ··· 330 325 return -EPROTO; 331 326 332 327 transaction->params_len = skb->data[transaction->aid_len + 3]; 328 + 329 + /* Total size is allocated (skb->len - 2) minus fixed array members */ 330 + if (transaction->params_len > ((skb->len - 2) - sizeof(struct nfc_evt_transaction))) 331 + return -EINVAL; 332 + 333 333 memcpy(transaction->params, skb->data + 334 334 transaction->aid_len + 4, transaction->params_len); 335 335
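The st21nfca fix above validates attacker-controlled length fields (`aid_len`, `params_len`) against the destination buffer before the `memcpy()`. A minimal sketch of that validate-then-copy shape (the struct layout and sizes here are illustrative, not the real `nfc_evt_transaction`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified event: a fixed-size AID buffer filled from untrusted skb data. */
struct fake_transaction {
	unsigned char aid_len;
	unsigned char aid[16];
};

/* Returns false instead of overflowing when the claimed length exceeds
 * either the destination buffer or the available input. */
static bool parse_aid(struct fake_transaction *t,
		      const unsigned char *data, size_t len)
{
	if (len < 1)
		return false;
	t->aid_len = data[0];
	if (t->aid_len > sizeof(t->aid) || (size_t)t->aid_len + 1 > len)
		return false;
	memcpy(t->aid, data + 1, t->aid_len);
	return true;
}
```

The same discipline applies to the second, `params_len`-sized copy in the real patch: every length read from the wire is bounded before it is used.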
+7 -2
include/linux/bpf.h
··· 316 316 */ 317 317 MEM_RDONLY = BIT(1 + BPF_BASE_TYPE_BITS), 318 318 319 - __BPF_TYPE_LAST_FLAG = MEM_RDONLY, 319 + /* MEM was "allocated" from a different helper, and cannot be mixed 320 + * with regular non-MEM_ALLOC'ed MEM types. 321 + */ 322 + MEM_ALLOC = BIT(2 + BPF_BASE_TYPE_BITS), 323 + 324 + __BPF_TYPE_LAST_FLAG = MEM_ALLOC, 320 325 }; 321 326 322 327 /* Max number of base types. */ ··· 405 400 RET_PTR_TO_SOCKET_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCKET, 406 401 RET_PTR_TO_TCP_SOCK_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_TCP_SOCK, 407 402 RET_PTR_TO_SOCK_COMMON_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_SOCK_COMMON, 408 - RET_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_ALLOC_MEM, 403 + RET_PTR_TO_ALLOC_MEM_OR_NULL = PTR_MAYBE_NULL | MEM_ALLOC | RET_PTR_TO_ALLOC_MEM, 409 404 RET_PTR_TO_BTF_ID_OR_NULL = PTR_MAYBE_NULL | RET_PTR_TO_BTF_ID, 410 405 411 406 /* This must be the last entry. Its purpose is to ensure the enum is
+2 -2
include/linux/bpf_verifier.h
··· 519 519 void 520 520 bpf_prog_offload_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt); 521 521 522 - int check_ctx_reg(struct bpf_verifier_env *env, 523 - const struct bpf_reg_state *reg, int regno); 522 + int check_ptr_off_reg(struct bpf_verifier_env *env, 523 + const struct bpf_reg_state *reg, int regno); 524 524 int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, 525 525 u32 regno, u32 mem_size); 526 526
+9 -2
include/net/inet_frag.h
··· 117 117 118 118 static inline void fqdir_pre_exit(struct fqdir *fqdir) 119 119 { 120 - fqdir->high_thresh = 0; /* prevent creation of new frags */ 121 - fqdir->dead = true; 120 + /* Prevent creation of new frags. 121 + * Pairs with READ_ONCE() in inet_frag_find(). 122 + */ 123 + WRITE_ONCE(fqdir->high_thresh, 0); 124 + 125 + /* Pairs with READ_ONCE() in inet_frag_kill(), ip_expire() 126 + * and ip6frag_expire_frag_queue(). 127 + */ 128 + WRITE_ONCE(fqdir->dead, true); 122 129 } 123 130 void fqdir_exit(struct fqdir *fqdir); 124 131
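The fqdir change above annotates the lockless flag accesses with `WRITE_ONCE()`/`READ_ONCE()` so the compiler emits exactly one store and one load, pairing writer and readers. A simplified userspace stand-in for those macros (a GNU C sketch via `volatile`; the kernel's real macros handle more cases):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins: the volatile access forces a single load/store. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile const __typeof__(x) *)&(x))

struct fake_fqdir {
	long high_thresh;
	bool dead;
};

static void fake_fqdir_pre_exit(struct fake_fqdir *fqdir)
{
	WRITE_ONCE(fqdir->high_thresh, 0); /* prevent creation of new frags */
	WRITE_ONCE(fqdir->dead, true);     /* paired with READ_ONCE() in readers */
}
```

The annotations document that these fields are read without the writer's lock, and prevent the compiler from tearing, fusing, or re-reading the accesses.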
+2 -1
include/net/ipv6_frag.h
··· 67 67 struct sk_buff *head; 68 68 69 69 rcu_read_lock(); 70 - if (fq->q.fqdir->dead) 70 + /* Paired with the WRITE_ONCE() in fqdir_pre_exit(). */ 71 + if (READ_ONCE(fq->q.fqdir->dead)) 71 72 goto out_rcu_unlock; 72 73 spin_lock(&fq->q.lock); 73 74
+3 -1
include/net/pkt_cls.h
··· 218 218 #ifdef CONFIG_NET_CLS_ACT 219 219 exts->type = 0; 220 220 exts->nr_actions = 0; 221 + /* Note: we do not own yet a reference on net. 222 + * This reference might be taken later from tcf_exts_get_net(). 223 + */ 221 224 exts->net = net; 222 - netns_tracker_alloc(net, &exts->ns_tracker, GFP_KERNEL); 223 225 exts->actions = kcalloc(TCA_ACT_MAX_PRIO, sizeof(struct tc_action *), 224 226 GFP_KERNEL); 225 227 if (!exts->actions)
+5
include/net/sch_generic.h
··· 1244 1244 u64 rate_bytes_ps; /* bytes per second */ 1245 1245 u32 mult; 1246 1246 u16 overhead; 1247 + u16 mpu; 1247 1248 u8 linklayer; 1248 1249 u8 shift; 1249 1250 }; ··· 1253 1252 unsigned int len) 1254 1253 { 1255 1254 len += r->overhead; 1255 + 1256 + if (len < r->mpu) 1257 + len = r->mpu; 1256 1258 1257 1259 if (unlikely(r->linklayer == TC_LINKLAYER_ATM)) 1258 1260 return ((u64)(DIV_ROUND_UP(len,48)*53) * r->mult) >> r->shift; ··· 1279 1275 res->rate = min_t(u64, r->rate_bytes_ps, ~0U); 1280 1276 1281 1277 res->overhead = r->overhead; 1278 + res->mpu = r->mpu; 1282 1279 res->linklayer = (r->linklayer & TC_LINKLAYER_MASK); 1283 1280 } 1284 1281
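The sch_generic change above restores minimum-packet-size (`mpu`) handling in the rate table: every packet is billed at least `mpu` bytes after per-packet `overhead` is added. A minimal model of just that clamp (field names follow the diff; the real `psched_l2t_ns()` then converts the length to nanoseconds):

```c
#include <assert.h>

struct fake_ratespec {
	unsigned short overhead;
	unsigned short mpu;     /* minimum packet unit */
};

/* Length actually charged against the rate for one packet. */
static unsigned int billed_len(const struct fake_ratespec *r, unsigned int len)
{
	len += r->overhead;
	if (len < r->mpu)
		len = r->mpu;
	return len;
}
```

Without the clamp, floods of tiny packets were under-billed and htb rate control let them exceed the configured rate.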
+1 -1
kernel/bpf/btf.c
···
                                 i, btf_type_str(t));
                         return -EINVAL;
                 }
-                if (check_ctx_reg(env, reg, regno))
+                if (check_ptr_off_reg(env, reg, regno))
                         return -EINVAL;
         } else if (is_kfunc && (reg->type == PTR_TO_BTF_ID || reg2btf_ids[reg->type])) {
                 const struct btf_type *reg_ref_t;
+12 -2
kernel/bpf/inode.c
···
         int opt;
 
         opt = fs_parse(fc, bpf_fs_parameters, param, &result);
-        if (opt < 0)
+        if (opt < 0) {
                 /* We might like to report bad mount options here, but
                  * traditionally we've ignored all mount options, so we'd
                  * better continue to ignore non-existing options for bpf.
                  */
-                return opt == -ENOPARAM ? 0 : opt;
+                if (opt == -ENOPARAM) {
+                        opt = vfs_parse_fs_param_source(fc, param);
+                        if (opt != -ENOPARAM)
+                                return opt;
+
+                        return 0;
+                }
+
+                if (opt < 0)
+                        return opt;
+        }
 
         switch (opt) {
         case OPT_MODE:
+56 -25
kernel/bpf/verifier.c
···
 
         if (type & MEM_RDONLY)
                 strncpy(prefix, "rdonly_", 16);
+        if (type & MEM_ALLOC)
+                strncpy(prefix, "alloc_", 16);
 
         snprintf(env->type_str_buf, TYPE_STR_BUF_LEN, "%s%s%s",
                  prefix, str[base_type(type)], postfix);
···
 
 static void mark_stack_slot_scratched(struct bpf_verifier_env *env, u32 spi)
 {
-        env->scratched_stack_slots |= 1UL << spi;
+        env->scratched_stack_slots |= 1ULL << spi;
 }
 
 static bool reg_scratched(const struct bpf_verifier_env *env, u32 regno)
···
 static void mark_verifier_state_clean(struct bpf_verifier_env *env)
 {
         env->scratched_regs = 0U;
-        env->scratched_stack_slots = 0UL;
+        env->scratched_stack_slots = 0ULL;
 }
 
 /* Used for printing the entire verifier state. */
 static void mark_verifier_state_scratched(struct bpf_verifier_env *env)
 {
         env->scratched_regs = ~0U;
-        env->scratched_stack_slots = ~0UL;
+        env->scratched_stack_slots = ~0ULL;
 }
 
 /* The reg state of a pointer or a bounded scalar was saved when
···
 }
 #endif
 
-int check_ctx_reg(struct bpf_verifier_env *env,
-                  const struct bpf_reg_state *reg, int regno)
+static int __check_ptr_off_reg(struct bpf_verifier_env *env,
+                               const struct bpf_reg_state *reg, int regno,
+                               bool fixed_off_ok)
 {
-        /* Access to ctx or passing it to a helper is only allowed in
-         * its original, unmodified form.
+        /* Access to this pointer-typed register or passing it to a helper
+         * is only allowed in its original, unmodified form.
          */
 
-        if (reg->off) {
-                verbose(env, "dereference of modified ctx ptr R%d off=%d disallowed\n",
-                        regno, reg->off);
+        if (!fixed_off_ok && reg->off) {
+                verbose(env, "dereference of modified %s ptr R%d off=%d disallowed\n",
+                        reg_type_str(env, reg->type), regno, reg->off);
                 return -EACCES;
         }
 
···
                 char tn_buf[48];
 
                 tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
-                verbose(env, "variable ctx access var_off=%s disallowed\n", tn_buf);
+                verbose(env, "variable %s access var_off=%s disallowed\n",
+                        reg_type_str(env, reg->type), tn_buf);
                 return -EACCES;
         }
 
         return 0;
+}
+
+int check_ptr_off_reg(struct bpf_verifier_env *env,
+                      const struct bpf_reg_state *reg, int regno)
+{
+        return __check_ptr_off_reg(env, reg, regno, false);
 }
 
 static int __check_buffer_access(struct bpf_verifier_env *env,
···
                 return -EACCES;
         }
 
-        err = check_ctx_reg(env, reg, regno);
+        err = check_ptr_off_reg(env, reg, regno);
         if (err < 0)
                 return err;
 
···
                 PTR_TO_MAP_KEY,
                 PTR_TO_MAP_VALUE,
                 PTR_TO_MEM,
+                PTR_TO_MEM | MEM_ALLOC,
                 PTR_TO_BUF,
         },
 };
···
 static const struct bpf_reg_types fullsock_types = { .types = { PTR_TO_SOCKET } };
 static const struct bpf_reg_types scalar_types = { .types = { SCALAR_VALUE } };
 static const struct bpf_reg_types context_types = { .types = { PTR_TO_CTX } };
-static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM } };
+static const struct bpf_reg_types alloc_mem_types = { .types = { PTR_TO_MEM | MEM_ALLOC } };
 static const struct bpf_reg_types const_map_ptr_types = { .types = { CONST_PTR_TO_MAP } };
 static const struct bpf_reg_types btf_ptr_types = { .types = { PTR_TO_BTF_ID } };
 static const struct bpf_reg_types spin_lock_types = { .types = { PTR_TO_MAP_VALUE } };
···
                                 kernel_type_name(btf_vmlinux, *arg_btf_id));
                         return -EACCES;
                 }
-
-                if (!tnum_is_const(reg->var_off) || reg->var_off.value) {
-                        verbose(env, "R%d is a pointer to in-kernel struct with non-zero offset\n",
-                                regno);
-                        return -EACCES;
-                }
         }
 
         return 0;
···
         if (err)
                 return err;
 
-        if (type == PTR_TO_CTX) {
-                err = check_ctx_reg(env, reg, regno);
+        switch ((u32)type) {
+        case SCALAR_VALUE:
+        /* Pointer types where reg offset is explicitly allowed: */
+        case PTR_TO_PACKET:
+        case PTR_TO_PACKET_META:
+        case PTR_TO_MAP_KEY:
+        case PTR_TO_MAP_VALUE:
+        case PTR_TO_MEM:
+        case PTR_TO_MEM | MEM_RDONLY:
+        case PTR_TO_MEM | MEM_ALLOC:
+        case PTR_TO_BUF:
+        case PTR_TO_BUF | MEM_RDONLY:
+        case PTR_TO_STACK:
+                /* Some of the argument types nevertheless require a
+                 * zero register offset.
+                 */
+                if (arg_type == ARG_PTR_TO_ALLOC_MEM)
+                        goto force_off_check;
+                break;
+        /* All the rest must be rejected: */
+        default:
+force_off_check:
+                err = __check_ptr_off_reg(env, reg, regno,
+                                          type == PTR_TO_BTF_ID);
                 if (err < 0)
                         return err;
+                break;
         }
 
 skip_type_check:
···
                 return 0;
         }
 
-        if (insn->src_reg == BPF_PSEUDO_BTF_ID) {
-                mark_reg_known_zero(env, regs, insn->dst_reg);
+        /* All special src_reg cases are listed below. From this point onwards
+         * we either succeed and assign a corresponding dst_reg->type after
+         * zeroing the offset, or fail and reject the program.
+         */
+        mark_reg_known_zero(env, regs, insn->dst_reg);
 
+        if (insn->src_reg == BPF_PSEUDO_BTF_ID) {
                 dst_reg->type = aux->btf_var.reg_type;
                 switch (base_type(dst_reg->type)) {
                 case PTR_TO_MEM:
···
         }
 
         map = env->used_maps[aux->map_index];
-        mark_reg_known_zero(env, regs, insn->dst_reg);
         dst_reg->map_ptr = map;
 
         if (insn->src_reg == BPF_PSEUDO_MAP_VALUE ||
···
                 return err;
         }
 
-        err = check_ctx_reg(env, &regs[ctx_reg], ctx_reg);
+        err = check_ptr_off_reg(env, &regs[ctx_reg], ctx_reg);
         if (err < 0)
                 return err;
 
+4 -1
lib/ref_tracker.c
···
         unsigned long entries[REF_TRACKER_STACK_ENTRIES];
         struct ref_tracker *tracker;
         unsigned int nr_entries;
+        gfp_t gfp_mask = gfp;
         unsigned long flags;
 
-        *trackerp = tracker = kzalloc(sizeof(*tracker), gfp | __GFP_NOFAIL);
+        if (gfp & __GFP_DIRECT_RECLAIM)
+                gfp_mask |= __GFP_NOFAIL;
+        *trackerp = tracker = kzalloc(sizeof(*tracker), gfp_mask);
         if (unlikely(!tracker)) {
                 pr_err_once("memory allocation failure, unreliable refcount tracker.\n");
                 refcount_inc(&dir->untracked);
+2 -1
net/bridge/br_if.c
···
         err = dev_set_allmulti(dev, 1);
         if (err) {
                 br_multicast_del_port(p);
+                dev_put_track(dev, &p->dev_tracker);
                 kfree(p);       /* kobject not yet init'd, manually free */
                 goto err1;
         }
···
         sysfs_remove_link(br->ifobj, p->dev->name);
 err2:
         br_multicast_del_port(p);
+        dev_put_track(dev, &p->dev_tracker);
         kobject_put(&p->kobj);
         dev_set_allmulti(dev, -1);
 err1:
-        dev_put(dev);
         return err;
 }
 
+6
net/core/dev.c
···
                 goto out_unlock;
         }
         old_prog = link->prog;
+        if (old_prog->type != new_prog->type ||
+            old_prog->expected_attach_type != new_prog->expected_attach_type) {
+                err = -EINVAL;
+                goto out_unlock;
+        }
+
         if (old_prog == new_prog) {
                 /* no-op, don't disturb drivers */
                 bpf_prog_put(new_prog);
+3 -1
net/core/net_namespace.c
···
 {
         struct net *net;
         if (ops->exit) {
-                list_for_each_entry(net, net_exit_list, exit_list)
+                list_for_each_entry(net, net_exit_list, exit_list) {
                         ops->exit(net);
+                        cond_resched();
+                }
         }
         if (ops->exit_batch)
                 ops->exit_batch(net_exit_list);
+10 -21
net/core/of_net.c
···
 {
         struct platform_device *pdev = of_find_device_by_node(np);
         struct nvmem_cell *cell;
-        const void *buf;
+        const void *mac;
         size_t len;
         int ret;
 
···
         if (IS_ERR(cell))
                 return PTR_ERR(cell);
 
-        buf = nvmem_cell_read(cell, &len);
+        mac = nvmem_cell_read(cell, &len);
         nvmem_cell_put(cell);
 
-        if (IS_ERR(buf))
-                return PTR_ERR(buf);
+        if (IS_ERR(mac))
+                return PTR_ERR(mac);
 
-        ret = 0;
-        if (len == ETH_ALEN) {
-                if (is_valid_ether_addr(buf))
-                        memcpy(addr, buf, ETH_ALEN);
-                else
-                        ret = -EINVAL;
-        } else if (len == 3 * ETH_ALEN - 1) {
-                u8 mac[ETH_ALEN];
-
-                if (mac_pton(buf, mac))
-                        memcpy(addr, mac, ETH_ALEN);
-                else
-                        ret = -EINVAL;
-        } else {
-                ret = -EINVAL;
+        if (len != ETH_ALEN || !is_valid_ether_addr(mac)) {
+                kfree(mac);
+                return -EINVAL;
         }
 
-        kfree(buf);
+        memcpy(addr, mac, ETH_ALEN);
+        kfree(mac);
 
-        return ret;
+        return 0;
 }
 
 /**
+5
net/core/sock.c
···
         }
 
         num = ethtool_get_phc_vclocks(dev, &vclock_index);
+        dev_put(dev);
+
         for (i = 0; i < num; i++) {
                 if (*(vclock_index + i) == phc_index) {
                         match = true;
···
 void sk_destruct(struct sock *sk)
 {
         bool use_call_rcu = sock_flag(sk, SOCK_RCU_FREE);
+
+        WARN_ON_ONCE(!llist_empty(&sk->defer_list));
+        sk_defer_free_flush(sk);
 
         if (rcu_access_pointer(sk->sk_reuseport_cb)) {
                 reuseport_detach_sock(sk);
+40 -36
net/ipv4/fib_semantics.c
···
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/netlink.h>
+#include <linux/hash.h>
 
 #include <net/arp.h>
 #include <net/ip.h>
···
 static struct hlist_head *fib_info_hash;
 static struct hlist_head *fib_info_laddrhash;
 static unsigned int fib_info_hash_size;
+static unsigned int fib_info_hash_bits;
 static unsigned int fib_info_cnt;
 
 #define DEVINDEX_HASHBITS 8
···
                 pr_warn("Freeing alive fib_info %p\n", fi);
                 return;
         }
-        fib_info_cnt--;
 
         call_rcu(&fi->rcu, free_fib_info_rcu);
 }
···
         spin_lock_bh(&fib_info_lock);
         if (fi && refcount_dec_and_test(&fi->fib_treeref)) {
                 hlist_del(&fi->fib_hash);
+
+                /* Paired with READ_ONCE() in fib_create_info(). */
+                WRITE_ONCE(fib_info_cnt, fib_info_cnt - 1);
+
                 if (fi->fib_prefsrc)
                         hlist_del(&fi->fib_lhash);
                 if (fi->nh) {
···
 
 static inline unsigned int fib_devindex_hashfn(unsigned int val)
 {
-        unsigned int mask = DEVINDEX_HASHSIZE - 1;
+        return hash_32(val, DEVINDEX_HASHBITS);
+}
 
-        return (val ^
-                (val >> DEVINDEX_HASHBITS) ^
-                (val >> (DEVINDEX_HASHBITS * 2))) & mask;
+static struct hlist_head *
+fib_info_devhash_bucket(const struct net_device *dev)
+{
+        u32 val = net_hash_mix(dev_net(dev)) ^ dev->ifindex;
+
+        return &fib_info_devhash[fib_devindex_hashfn(val)];
 }
 
 static unsigned int fib_info_hashfn_1(int init_val, u8 protocol, u8 scope,
···
 {
         struct hlist_head *head;
         struct fib_nh *nh;
-        unsigned int hash;
 
         spin_lock(&fib_info_lock);
 
-        hash = fib_devindex_hashfn(dev->ifindex);
-        head = &fib_info_devhash[hash];
+        head = fib_info_devhash_bucket(dev);
+
         hlist_for_each_entry(nh, head, nh_hash) {
                 if (nh->fib_nh_dev == dev &&
                     nh->fib_nh_gw4 == gw &&
···
         return err;
 }
 
-static inline unsigned int fib_laddr_hashfn(__be32 val)
+static struct hlist_head *
+fib_info_laddrhash_bucket(const struct net *net, __be32 val)
 {
-        unsigned int mask = (fib_info_hash_size - 1);
+        u32 slot = hash_32(net_hash_mix(net) ^ (__force u32)val,
+                           fib_info_hash_bits);
 
-        return ((__force u32)val ^
-                ((__force u32)val >> 7) ^
-                ((__force u32)val >> 14)) & mask;
+        return &fib_info_laddrhash[slot];
 }
 
 static struct hlist_head *fib_info_hash_alloc(int bytes)
···
         old_info_hash = fib_info_hash;
         old_laddrhash = fib_info_laddrhash;
         fib_info_hash_size = new_size;
+        fib_info_hash_bits = ilog2(new_size);
 
         for (i = 0; i < old_size; i++) {
                 struct hlist_head *head = &fib_info_hash[i];
···
         }
         fib_info_hash = new_info_hash;
 
+        fib_info_laddrhash = new_laddrhash;
         for (i = 0; i < old_size; i++) {
-                struct hlist_head *lhead = &fib_info_laddrhash[i];
+                struct hlist_head *lhead = &old_laddrhash[i];
                 struct hlist_node *n;
                 struct fib_info *fi;
 
                 hlist_for_each_entry_safe(fi, n, lhead, fib_lhash) {
                         struct hlist_head *ldest;
-                        unsigned int new_hash;
 
-                        new_hash = fib_laddr_hashfn(fi->fib_prefsrc);
-                        ldest = &new_laddrhash[new_hash];
+                        ldest = fib_info_laddrhash_bucket(fi->fib_net,
+                                                          fi->fib_prefsrc);
                         hlist_add_head(&fi->fib_lhash, ldest);
                 }
         }
-        fib_info_laddrhash = new_laddrhash;
 
         spin_unlock_bh(&fib_info_lock);
 
···
 #endif
 
         err = -ENOBUFS;
-        if (fib_info_cnt >= fib_info_hash_size) {
+
+        /* Paired with WRITE_ONCE() in fib_release_info() */
+        if (READ_ONCE(fib_info_cnt) >= fib_info_hash_size) {
                 unsigned int new_size = fib_info_hash_size << 1;
                 struct hlist_head *new_info_hash;
                 struct hlist_head *new_laddrhash;
···
                 return ERR_PTR(err);
         }
 
-        fib_info_cnt++;
         fi->fib_net = net;
         fi->fib_protocol = cfg->fc_protocol;
         fi->fib_scope = cfg->fc_scope;
···
         refcount_set(&fi->fib_treeref, 1);
         refcount_set(&fi->fib_clntref, 1);
         spin_lock_bh(&fib_info_lock);
+        fib_info_cnt++;
         hlist_add_head(&fi->fib_hash,
                        &fib_info_hash[fib_info_hashfn(fi)]);
         if (fi->fib_prefsrc) {
                 struct hlist_head *head;
 
-                head = &fib_info_laddrhash[fib_laddr_hashfn(fi->fib_prefsrc)];
+                head = fib_info_laddrhash_bucket(net, fi->fib_prefsrc);
                 hlist_add_head(&fi->fib_lhash, head);
         }
         if (fi->nh) {
···
         } else {
                 change_nexthops(fi) {
                         struct hlist_head *head;
-                        unsigned int hash;
 
                         if (!nexthop_nh->fib_nh_dev)
                                 continue;
-                        hash = fib_devindex_hashfn(nexthop_nh->fib_nh_dev->ifindex);
-                        head = &fib_info_devhash[hash];
+                        head = fib_info_devhash_bucket(nexthop_nh->fib_nh_dev);
                         hlist_add_head(&nexthop_nh->nh_hash, head);
                 } endfor_nexthops(fi)
         }
···
  */
 int fib_sync_down_addr(struct net_device *dev, __be32 local)
 {
-        int ret = 0;
-        unsigned int hash = fib_laddr_hashfn(local);
-        struct hlist_head *head = &fib_info_laddrhash[hash];
         int tb_id = l3mdev_fib_table(dev) ? : RT_TABLE_MAIN;
         struct net *net = dev_net(dev);
+        struct hlist_head *head;
         struct fib_info *fi;
+        int ret = 0;
 
         if (!fib_info_laddrhash || local == 0)
                 return 0;
 
+        head = fib_info_laddrhash_bucket(net, local);
         hlist_for_each_entry(fi, head, fib_lhash) {
                 if (!net_eq(fi->fib_net, net) ||
                     fi->fib_tb_id != tb_id)
···
 
 void fib_sync_mtu(struct net_device *dev, u32 orig_mtu)
 {
-        unsigned int hash = fib_devindex_hashfn(dev->ifindex);
-        struct hlist_head *head = &fib_info_devhash[hash];
+        struct hlist_head *head = fib_info_devhash_bucket(dev);
         struct fib_nh *nh;
 
         hlist_for_each_entry(nh, head, nh_hash) {
···
  */
 int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force)
 {
-        int ret = 0;
-        int scope = RT_SCOPE_NOWHERE;
+        struct hlist_head *head = fib_info_devhash_bucket(dev);
         struct fib_info *prev_fi = NULL;
-        unsigned int hash = fib_devindex_hashfn(dev->ifindex);
-        struct hlist_head *head = &fib_info_devhash[hash];
+        int scope = RT_SCOPE_NOWHERE;
         struct fib_nh *nh;
+        int ret = 0;
 
         if (force)
                 scope = -1;
···
 int fib_sync_up(struct net_device *dev, unsigned char nh_flags)
 {
         struct fib_info *prev_fi;
-        unsigned int hash;
         struct hlist_head *head;
         struct fib_nh *nh;
         int ret;
···
         }
 
         prev_fi = NULL;
-        hash = fib_devindex_hashfn(dev->ifindex);
-        head = &fib_info_devhash[hash];
+        head = fib_info_devhash_bucket(dev);
         ret = 0;
 
         hlist_for_each_entry(nh, head, nh_hash) {
+5 -3
net/ipv4/inet_fragment.c
···
         /* The RCU read lock provides a memory barrier
          * guaranteeing that if fqdir->dead is false then
          * the hash table destruction will not start until
-         * after we unlock. Paired with inet_frags_exit_net().
+         * after we unlock. Paired with fqdir_pre_exit().
          */
-        if (!fqdir->dead) {
+        if (!READ_ONCE(fqdir->dead)) {
                 rhashtable_remove_fast(&fqdir->rhashtable, &fq->node,
                                        fqdir->f->rhash_params);
                 refcount_dec(&fq->refcnt);
···
 /* TODO : call from rcu_read_lock() and no longer use refcount_inc_not_zero() */
 struct inet_frag_queue *inet_frag_find(struct fqdir *fqdir, void *key)
 {
+        /* This pairs with WRITE_ONCE() in fqdir_pre_exit(). */
+        long high_thresh = READ_ONCE(fqdir->high_thresh);
         struct inet_frag_queue *fq = NULL, *prev;
 
-        if (!fqdir->high_thresh || frag_mem_limit(fqdir) > fqdir->high_thresh)
+        if (!high_thresh || frag_mem_limit(fqdir) > high_thresh)
                 return NULL;
 
         rcu_read_lock();
+2 -1
net/ipv4/ip_fragment.c
···
 
         rcu_read_lock();
 
-        if (qp->q.fqdir->dead)
+        /* Paired with WRITE_ONCE() in fqdir_pre_exit(). */
+        if (READ_ONCE(qp->q.fqdir->dead))
                 goto out_rcu_unlock;
 
         spin_lock(&qp->q.lock);
+3 -2
net/ipv4/ip_gre.c
···
 
         key = &info->key;
         ip_tunnel_init_flow(&fl4, IPPROTO_GRE, key->u.ipv4.dst, key->u.ipv4.src,
-                            tunnel_id_to_key32(key->tun_id), key->tos, 0,
-                            skb->mark, skb_get_hash(skb));
+                            tunnel_id_to_key32(key->tun_id),
+                            key->tos & ~INET_ECN_MASK, 0, skb->mark,
+                            skb_get_hash(skb));
         rt = ip_route_output_key(dev_net(dev), &fl4);
         if (IS_ERR(rt))
                 return PTR_ERR(rt);
+1 -1
net/ipv6/sit.c
···
                 dst_cache_set_ip4(&tunnel->dst_cache, &rt->dst, fl4.saddr);
         }
 
-        if (rt->rt_type != RTN_UNICAST) {
+        if (rt->rt_type != RTN_UNICAST && rt->rt_type != RTN_LOCAL) {
                 ip_rt_put(rt);
                 dev->stats.tx_carrier_errors++;
                 goto tx_error_icmp;
+1 -1
net/mctp/test/route-test.c
···
                                 struct mctp_test_route **rtp,
                                 struct socket **sockp)
 {
-        struct sockaddr_mctp addr;
+        struct sockaddr_mctp addr = {0};
         struct mctp_test_route *rt;
         struct mctp_test_dev *dev;
         struct socket *sock;
+1 -1
net/netfilter/nft_connlimit.c
···
         struct nft_connlimit *priv_src = nft_expr_priv(src);
 
         priv_dst->list = kmalloc(sizeof(*priv_dst->list), GFP_ATOMIC);
-        if (priv_dst->list)
+        if (!priv_dst->list)
                 return -ENOMEM;
 
         nf_conncount_list_init(priv_dst->list);
+1 -1
net/netfilter/nft_last.c
···
         struct nft_last_priv *priv_dst = nft_expr_priv(dst);
 
         priv_dst->last = kzalloc(sizeof(*priv_dst->last), GFP_ATOMIC);
-        if (priv_dst->last)
+        if (!priv_dst->last)
                 return -ENOMEM;
 
         return 0;
+1 -1
net/netfilter/nft_limit.c
···
         priv_dst->invert = priv_src->invert;
 
         priv_dst->limit = kmalloc(sizeof(*priv_dst->limit), GFP_ATOMIC);
-        if (priv_dst->limit)
+        if (!priv_dst->limit)
                 return -ENOMEM;
 
         spin_lock_init(&priv_dst->limit->lock);
+1 -1
net/netfilter/nft_quota.c
···
         struct nft_quota *priv_dst = nft_expr_priv(dst);
 
         priv_dst->consumed = kmalloc(sizeof(*priv_dst->consumed), GFP_ATOMIC);
-        if (priv_dst->consumed)
+        if (!priv_dst->consumed)
                 return -ENOMEM;
 
         atomic64_set(priv_dst->consumed, 0);
+5
net/nfc/llcp_sock.c
···
 
         lock_sock(sk);
 
+        if (!llcp_sock->local) {
+                release_sock(sk);
+                return -ENODEV;
+        }
+
         if (sk->sk_type == SOCK_DGRAM) {
                 DECLARE_SOCKADDR(struct sockaddr_nfc_llcp *, addr,
                                  msg->msg_name);
+1 -1
net/sched/sch_api.c
···
 
         qdisc_offload_graft_root(dev, new, old, extack);
 
-        if (new && new->ops->attach)
+        if (new && new->ops->attach && !ingress)
                 goto skip;
 
         for (i = 0; i < num_q; i++) {
+1
net/sched/sch_generic.c
···
 {
         memset(r, 0, sizeof(*r));
         r->overhead = conf->overhead;
+        r->mpu = conf->mpu;
         r->rate_bytes_ps = max_t(u64, conf->rate, rate64);
         r->linklayer = (conf->linklayer & TC_LINKLAYER_MASK);
         psched_ratecfg_precompute__(r->rate_bytes_ps, &r->mult, &r->shift);
+5 -1
net/smc/af_smc.c
···
 {
         struct smc_connection *conn = &smc->conn;
         struct smc_link_group *lgr = conn->lgr;
+        bool lgr_valid = false;
+
+        if (smc_conn_lgr_valid(conn))
+                lgr_valid = true;
 
         smc_conn_free(conn);
-        if (local_first)
+        if (local_first && lgr_valid)
                 smc_lgr_cleanup_early(lgr);
 }
 
+1
net/smc/smc.h
···
                                          */
         u64                     peer_token;     /* SMC-D token of peer */
         u8                      killed : 1;     /* abnormal termination */
+        u8                      freed : 1;      /* normal termiation */
         u8                      out_of_sync : 1; /* out of sync with peer */
 };
+2 -1
net/smc/smc_cdc.c
···
 {
         int rc;
 
-        if (!conn->lgr || (conn->lgr->is_smcd && conn->lgr->peer_shutdown))
+        if (!smc_conn_lgr_valid(conn) ||
+            (conn->lgr->is_smcd && conn->lgr->peer_shutdown))
                 return -EPIPE;
 
         if (conn->lgr->is_smcd) {
+1 -1
net/smc/smc_clc.c
···
         dclc.os_type = version == SMC_V1 ? 0 : SMC_CLC_OS_LINUX;
         dclc.hdr.typev2 = (peer_diag_info == SMC_CLC_DECL_SYNCERR) ?
                                                 SMC_FIRST_CONTACT_MASK : 0;
-        if ((!smc->conn.lgr || !smc->conn.lgr->is_smcd) &&
+        if ((!smc_conn_lgr_valid(&smc->conn) || !smc->conn.lgr->is_smcd) &&
             smc_ib_is_valid_local_systemid())
                 memcpy(dclc.id_for_peer, local_systemid,
                        sizeof(local_systemid));
+96 -43
net/smc/smc_core.c
···
 {
         struct smc_link_group *lgr = conn->lgr;
 
-        if (!lgr)
+        if (!smc_conn_lgr_valid(conn))
                 return;
         write_lock_bh(&lgr->conns_lock);
         if (conn->alert_token_local) {
                 __smc_lgr_unregister_conn(conn);
         }
         write_unlock_bh(&lgr->conns_lock);
-        conn->lgr = NULL;
 }
 
 int smc_nl_get_sys_info(struct sk_buff *skb, struct netlink_callback *cb)
···
         }
         get_device(&lnk->smcibdev->ibdev->dev);
         atomic_inc(&lnk->smcibdev->lnk_cnt);
+        refcount_set(&lnk->refcnt, 1); /* link refcnt is set to 1 */
+        lnk->clearing = 0;
         lnk->path_mtu = lnk->smcibdev->pattr[lnk->ibport - 1].active_mtu;
         lnk->link_id = smcr_next_link_id(lgr);
         lnk->lgr = lgr;
+        smc_lgr_hold(lgr); /* lgr_put in smcr_link_clear() */
         lnk->link_idx = link_idx;
         smc_ibdev_cnt_inc(lnk);
         smcr_copy_dev_info_to_link(lnk);
···
         lnk->state = SMC_LNK_UNUSED;
         if (!atomic_dec_return(&smcibdev->lnk_cnt))
                 wake_up(&smcibdev->lnks_deleted);
+        smc_lgr_put(lgr); /* lgr_hold above */
         return rc;
 }
 
···
         lgr->terminating = 0;
         lgr->freeing = 0;
         lgr->vlan_id = ini->vlan_id;
+        refcount_set(&lgr->refcnt, 1); /* set lgr refcnt to 1 */
         mutex_init(&lgr->sndbufs_lock);
         mutex_init(&lgr->rmbs_lock);
         rwlock_init(&lgr->conns_lock);
···
                                struct smc_link *to_lnk)
 {
         atomic_dec(&conn->lnk->conn_cnt);
+        /* link_hold in smc_conn_create() */
+        smcr_link_put(conn->lnk);
         conn->lnk = to_lnk;
         atomic_inc(&conn->lnk->conn_cnt);
+        /* link_put in smc_conn_free() */
+        smcr_link_hold(conn->lnk);
 }
 
 struct smc_link *smc_switch_conns(struct smc_link_group *lgr,
···
 {
         struct smc_link_group *lgr = conn->lgr;
 
-        if (!lgr)
+        if (!lgr || conn->freed)
+                /* Connection has never been registered in a
+                 * link group, or has already been freed.
+                 */
                 return;
+
+        conn->freed = 1;
+        if (!smc_conn_lgr_valid(conn))
+                /* Connection has already unregistered from
+                 * link group.
+                 */
+                goto lgr_put;
+
         if (lgr->is_smcd) {
                 if (!list_empty(&lgr->list))
                         smc_ism_unset_conn(conn);
···
 
         if (!lgr->conns_num)
                 smc_lgr_schedule_free_work(lgr);
+lgr_put:
+        if (!lgr->is_smcd)
+                smcr_link_put(conn->lnk); /* link_hold in smc_conn_create() */
+        smc_lgr_put(lgr); /* lgr_hold in smc_conn_create() */
 }
 
 /* unregister a link from a buf_desc */
···
         }
 }
 
-/* must be called under lgr->llc_conf_mutex lock */
-void smcr_link_clear(struct smc_link *lnk, bool log)
+static void __smcr_link_clear(struct smc_link *lnk)
 {
+        struct smc_link_group *lgr = lnk->lgr;
         struct smc_ib_device *smcibdev;
 
-        if (!lnk->lgr || lnk->state == SMC_LNK_UNUSED)
-                return;
-        lnk->peer_qpn = 0;
-        smc_llc_link_clear(lnk, log);
-        smcr_buf_unmap_lgr(lnk);
-        smcr_rtoken_clear_link(lnk);
-        smc_ib_modify_qp_error(lnk);
-        smc_wr_free_link(lnk);
-        smc_ib_destroy_queue_pair(lnk);
-        smc_ib_dealloc_protection_domain(lnk);
         smc_wr_free_link_mem(lnk);
         smc_ibdev_cnt_dec(lnk);
         put_device(&lnk->smcibdev->ibdev->dev);
···
         lnk->state = SMC_LNK_UNUSED;
         if (!atomic_dec_return(&smcibdev->lnk_cnt))
                 wake_up(&smcibdev->lnks_deleted);
+        smc_lgr_put(lgr); /* lgr_hold in smcr_link_init() */
+}
+
+/* must be called under lgr->llc_conf_mutex lock */
+void smcr_link_clear(struct smc_link *lnk, bool log)
+{
+        if (!lnk->lgr || lnk->clearing ||
+            lnk->state == SMC_LNK_UNUSED)
+                return;
+        lnk->clearing = 1;
+        lnk->peer_qpn = 0;
+        smc_llc_link_clear(lnk, log);
+        smcr_buf_unmap_lgr(lnk);
+        smcr_rtoken_clear_link(lnk);
+        smc_ib_modify_qp_error(lnk);
+        smc_wr_free_link(lnk);
+        smc_ib_destroy_queue_pair(lnk);
+        smc_ib_dealloc_protection_domain(lnk);
+        smcr_link_put(lnk); /* theoretically last link_put */
+}
+
+void smcr_link_hold(struct smc_link *lnk)
+{
+        refcount_inc(&lnk->refcnt);
+}
+
+void smcr_link_put(struct smc_link *lnk)
+{
+        if (refcount_dec_and_test(&lnk->refcnt))
+                __smcr_link_clear(lnk);
 }
 
 static void smcr_buf_free(struct smc_link_group *lgr, bool is_rmb,
···
         __smc_lgr_free_bufs(lgr, true);
 }
 
+/* won't be freed until no one accesses to lgr anymore */
+static void __smc_lgr_free(struct smc_link_group *lgr)
+{
+        smc_lgr_free_bufs(lgr);
+        if (lgr->is_smcd) {
+                if (!atomic_dec_return(&lgr->smcd->lgr_cnt))
+                        wake_up(&lgr->smcd->lgrs_deleted);
+        } else {
+                smc_wr_free_lgr_mem(lgr);
+                if (!atomic_dec_return(&lgr_cnt))
+                        wake_up(&lgrs_deleted);
+        }
+        kfree(lgr);
+}
+
 /* remove a link group */
 static void smc_lgr_free(struct smc_link_group *lgr)
 {
···
                 smc_llc_lgr_clear(lgr);
         }
 
-        smc_lgr_free_bufs(lgr);
         destroy_workqueue(lgr->tx_wq);
         if (lgr->is_smcd) {
                 smc_ism_put_vlan(lgr->smcd, lgr->vlan_id);
                 put_device(&lgr->smcd->dev);
-                if (!atomic_dec_return(&lgr->smcd->lgr_cnt))
-                        wake_up(&lgr->smcd->lgrs_deleted);
-        } else {
-                smc_wr_free_lgr_mem(lgr);
-                if (!atomic_dec_return(&lgr_cnt))
-                        wake_up(&lgrs_deleted);
         }
-        kfree(lgr);
+        smc_lgr_put(lgr); /* theoretically last lgr_put */
+}
+
+void smc_lgr_hold(struct smc_link_group *lgr)
+{
+        refcount_inc(&lgr->refcnt);
+}
+
+void smc_lgr_put(struct smc_link_group *lgr)
+{
+        if (refcount_dec_and_test(&lgr->refcnt))
+                __smc_lgr_free(lgr);
 }
 
 static void smc_sk_wake_ups(struct smc_sock *smc)
···
 /* Called when an SMCR device is removed or the smc module is unloaded.
  * If smcibdev is given, all SMCR link groups using this device are terminated.
  * If smcibdev is NULL, all SMCR link groups are terminated.
- *
- * We must wait here for QPs been destroyed before we destroy the CQs,
- * or we won't received any CQEs and cdc_pend_tx_wr cannot reach 0 thus
- * smc_sock cannot be released.
  */
 void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
 {
         struct smc_link_group *lgr, *lg;
         LIST_HEAD(lgr_free_list);
-        LIST_HEAD(lgr_linkdown_list);
         int i;
 
         spin_lock_bh(&smc_lgr_list.lock);
···
         list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {
                 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
                         if (lgr->lnk[i].smcibdev == smcibdev)
-                                list_move_tail(&lgr->list, &lgr_linkdown_list);
+                                smcr_link_down_cond_sched(&lgr->lnk[i]);
                 }
         }
 }
···
                 list_del_init(&lgr->list);
                 smc_llc_set_termination_rsn(lgr, SMC_LLC_DEL_OP_INIT_TERM);
                 __smc_lgr_terminate(lgr, false);
-        }
-
-        list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {
-                for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
-                        if (lgr->lnk[i].smcibdev == smcibdev) {
-                                mutex_lock(&lgr->llc_conf_mutex);
-                                smcr_link_down_cond(&lgr->lnk[i]);
-                                mutex_unlock(&lgr->llc_conf_mutex);
-                        }
-                }
         }
 
         if (smcibdev) {
···
                         goto out;
                 }
         }
+        smc_lgr_hold(conn->lgr); /* lgr_put in smc_conn_free() */
+        if (!conn->lgr->is_smcd)
+                smcr_link_hold(conn->lnk); /* link_put in smc_conn_free() */
+        conn->freed = 0;
         conn->local_tx_ctrl.common.type = SMC_CDC_MSG_TYPE;
         conn->local_tx_ctrl.len = SMC_WR_TX_SIZE;
         conn->urg_state = SMC_URG_READ;
···
 
 void smc_sndbuf_sync_sg_for_cpu(struct smc_connection *conn)
 {
-        if (!conn->lgr || conn->lgr->is_smcd || !smc_link_active(conn->lnk))
+        if (!smc_conn_lgr_valid(conn) || conn->lgr->is_smcd ||
+            !smc_link_active(conn->lnk))
                 return;
         smc_ib_sync_sg_for_cpu(conn->lnk, conn->sndbuf_desc, DMA_TO_DEVICE);
 }
 
 void smc_sndbuf_sync_sg_for_device(struct smc_connection *conn)
 {
-        if (!conn->lgr || conn->lgr->is_smcd || !smc_link_active(conn->lnk))
+        if (!smc_conn_lgr_valid(conn) || conn->lgr->is_smcd ||
+            !smc_link_active(conn->lnk))
                 return;
         smc_ib_sync_sg_for_device(conn->lnk, conn->sndbuf_desc, DMA_TO_DEVICE);
 }
···
 {
         int i;
 
-        if (!conn->lgr || conn->lgr->is_smcd)
+        if (!smc_conn_lgr_valid(conn) || conn->lgr->is_smcd)
                 return;
         for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
                 if (!smc_link_active(&conn->lgr->lnk[i]))
···
 {
         int i;
 
-        if (!conn->lgr || conn->lgr->is_smcd)
+        if (!smc_conn_lgr_valid(conn) || conn->lgr->is_smcd)
                 return;
         for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
                 if (!smc_link_active(&conn->lgr->lnk[i]))
+12
net/smc/smc_core.h
···
 	u8 peer_link_uid[SMC_LGR_ID_SIZE]; /* peer uid */
 	u8 link_idx; /* index in lgr link array */
 	u8 link_is_asym; /* is link asymmetric? */
+	u8 clearing : 1; /* link is being cleared */
+	refcount_t refcnt; /* link reference count */
 	struct smc_link_group *lgr; /* parent link group */
 	struct work_struct link_down_wrk; /* wrk to bring link down */
 	char ibname[IB_DEVICE_NAME_MAX]; /* ib device name */
···
 	u8 terminating : 1;	/* lgr is terminating */
 	u8 freeing : 1;		/* lgr is being freed */
 
+	refcount_t refcnt; /* lgr reference count */
 	bool is_smcd; /* SMC-R or SMC-D */
 	u8 smc_version;
 	u8 negotiated_eid[SMC_MAX_EID_LEN];
···
 	return res;
 }
 
+static inline bool smc_conn_lgr_valid(struct smc_connection *conn)
+{
+	return conn->lgr && conn->alert_token_local;
+}
+
 /*
  * Returns true if the specified link is usable.
  *
···
 
 void smc_lgr_cleanup_early(struct smc_link_group *lgr);
 void smc_lgr_terminate_sched(struct smc_link_group *lgr);
+void smc_lgr_hold(struct smc_link_group *lgr);
+void smc_lgr_put(struct smc_link_group *lgr);
 void smcr_port_add(struct smc_ib_device *smcibdev, u8 ibport);
 void smcr_port_err(struct smc_ib_device *smcibdev, u8 ibport);
 void smc_smcd_terminate(struct smcd_dev *dev, u64 peer_gid,
···
 int smcr_link_init(struct smc_link_group *lgr, struct smc_link *lnk,
 		   u8 link_idx, struct smc_init_info *ini);
 void smcr_link_clear(struct smc_link *lnk, bool log);
+void smcr_link_hold(struct smc_link *lnk);
+void smcr_link_put(struct smc_link *lnk);
 void smc_switch_link_and_count(struct smc_connection *conn,
 			       struct smc_link *to_lnk);
 int smcr_buf_map_lgr(struct smc_link *lnk);
+3 -3
net/smc/smc_diag.c
···
 	r->diag_state = sk->sk_state;
 	if (smc->use_fallback)
 		r->diag_mode = SMC_DIAG_MODE_FALLBACK_TCP;
-	else if (smc->conn.lgr && smc->conn.lgr->is_smcd)
+	else if (smc_conn_lgr_valid(&smc->conn) && smc->conn.lgr->is_smcd)
 		r->diag_mode = SMC_DIAG_MODE_SMCD;
 	else
 		r->diag_mode = SMC_DIAG_MODE_SMCR;
···
 		goto errout;
 	}
 
-	if (smc->conn.lgr && !smc->conn.lgr->is_smcd &&
+	if (smc_conn_lgr_valid(&smc->conn) && !smc->conn.lgr->is_smcd &&
 	    (req->diag_ext & (1 << (SMC_DIAG_LGRINFO - 1))) &&
 	    !list_empty(&smc->conn.lgr->list)) {
 		struct smc_link *link = smc->conn.lnk;
···
 		if (nla_put(skb, SMC_DIAG_LGRINFO, sizeof(linfo), &linfo) < 0)
 			goto errout;
 	}
-	if (smc->conn.lgr && smc->conn.lgr->is_smcd &&
+	if (smc_conn_lgr_valid(&smc->conn) && smc->conn.lgr->is_smcd &&
 	    (req->diag_ext & (1 << (SMC_DIAG_DMBINFO - 1))) &&
 	    !list_empty(&smc->conn.lgr->list)) {
 		struct smc_connection *conn = &smc->conn;
+2 -1
net/smc/smc_pnet.c
···
 	memcpy(new_pe->pnet_name, pnet_name, SMC_MAX_PNETID_LEN);
 	strncpy(new_pe->eth_name, eth_name, IFNAMSIZ);
 	new_pe->ndev = ndev;
-	netdev_tracker_alloc(ndev, &new_pe->dev_tracker, GFP_KERNEL);
+	if (ndev)
+		netdev_tracker_alloc(ndev, &new_pe->dev_tracker, GFP_KERNEL);
 	rc = -EEXIST;
 	new_netdev = true;
 	write_lock(&pnettable->lock);
-4
net/smc/smc_wr.h
···
 int smc_wr_tx_send_wait(struct smc_link *link, struct smc_wr_tx_pend_priv *priv,
 			unsigned long timeout);
 void smc_wr_tx_cq_handler(struct ib_cq *ib_cq, void *cq_context);
-void smc_wr_tx_dismiss_slots(struct smc_link *lnk, u8 wr_rx_hdr_type,
-			     smc_wr_tx_filter filter,
-			     smc_wr_tx_dismisser dismisser,
-			     unsigned long data);
 void smc_wr_tx_wait_no_pending_sends(struct smc_link *link);
 
 int smc_wr_rx_register_handler(struct smc_wr_rx_handler *handler);
+1
net/tls/tls_sw.c
···
 
 splice_read_end:
 	release_sock(sk);
+	sk_defer_free_flush(sk);
 	return copied ? : err;
 }
 
+11 -3
net/unix/garbage.c
···
 {
 	/* If number of inflight sockets is insane,
 	 * force a garbage collect right now.
+	 * Paired with the WRITE_ONCE() in unix_inflight(),
+	 * unix_notinflight() and gc_in_progress().
 	 */
-	if (unix_tot_inflight > UNIX_INFLIGHT_TRIGGER_GC && !gc_in_progress)
+	if (READ_ONCE(unix_tot_inflight) > UNIX_INFLIGHT_TRIGGER_GC &&
+	    !READ_ONCE(gc_in_progress))
 		unix_gc();
 	wait_event(unix_gc_wait, gc_in_progress == false);
 }
···
 	if (gc_in_progress)
 		goto out;
 
-	gc_in_progress = true;
+	/* Paired with READ_ONCE() in wait_for_unix_gc(). */
+	WRITE_ONCE(gc_in_progress, true);
+
 	/* First, select candidates for garbage collection. Only
 	 * in-flight sockets are considered, and from those only ones
 	 * which don't have any external reference.
···
 
 	/* All candidates should have been detached by now. */
 	BUG_ON(!list_empty(&gc_candidates));
-	gc_in_progress = false;
+
+	/* Paired with READ_ONCE() in wait_for_unix_gc(). */
+	WRITE_ONCE(gc_in_progress, false);
+
 	wake_up(&unix_gc_wait);
 
 out:
+4 -2
net/unix/scm.c
···
 	} else {
 		BUG_ON(list_empty(&u->link));
 	}
-	unix_tot_inflight++;
+	/* Paired with READ_ONCE() in wait_for_unix_gc() */
+	WRITE_ONCE(unix_tot_inflight, unix_tot_inflight + 1);
 	}
 	user->unix_inflight++;
 	spin_unlock(&unix_gc_lock);
···
 
 	if (atomic_long_dec_and_test(&u->inflight))
 		list_del_init(&u->link);
-	unix_tot_inflight--;
+	/* Paired with READ_ONCE() in wait_for_unix_gc() */
+	WRITE_ONCE(unix_tot_inflight, unix_tot_inflight - 1);
 	}
 	user->unix_inflight--;
 	spin_unlock(&unix_gc_lock);
+2 -1
net/xfrm/xfrm_policy.c
···
 #include <linux/if_tunnel.h>
 #include <net/dst.h>
 #include <net/flow.h>
+#include <net/inet_ecn.h>
 #include <net/xfrm.h>
 #include <net/ip.h>
 #include <net/gre.h>
···
 	fl4->flowi4_proto = iph->protocol;
 	fl4->daddr = reverse ? iph->saddr : iph->daddr;
 	fl4->saddr = reverse ? iph->daddr : iph->saddr;
-	fl4->flowi4_tos = iph->tos;
+	fl4->flowi4_tos = iph->tos & ~INET_ECN_MASK;
 
 	if (!ip_is_fragment(iph)) {
 		switch (iph->protocol) {
+14
tools/testing/selftests/bpf/prog_tests/d_path.c
···
 
 #include "test_d_path.skel.h"
 #include "test_d_path_check_rdonly_mem.skel.h"
+#include "test_d_path_check_types.skel.h"
 
 static int duration;
 
···
 	test_d_path_check_rdonly_mem__destroy(skel);
 }
 
+static void test_d_path_check_types(void)
+{
+	struct test_d_path_check_types *skel;
+
+	skel = test_d_path_check_types__open_and_load();
+	ASSERT_ERR_PTR(skel, "unexpected_load_passing_wrong_type");
+
+	test_d_path_check_types__destroy(skel);
+}
+
 void test_d_path(void)
 {
 	if (test__start_subtest("basic"))
···
 
 	if (test__start_subtest("check_rdonly_mem"))
 		test_d_path_check_rdonly_mem();
+
+	if (test__start_subtest("check_alloc_mem"))
+		test_d_path_check_types();
 }
+30 -31
tools/testing/selftests/bpf/prog_tests/xdp_link.c
···
 
 void serial_test_xdp_link(void)
 {
-	__u32 duration = 0, id1, id2, id0 = 0, prog_fd1, prog_fd2, err;
 	DECLARE_LIBBPF_OPTS(bpf_xdp_set_link_opts, opts, .old_fd = -1);
 	struct test_xdp_link *skel1 = NULL, *skel2 = NULL;
+	__u32 id1, id2, id0 = 0, prog_fd1, prog_fd2;
 	struct bpf_link_info link_info;
 	struct bpf_prog_info prog_info;
 	struct bpf_link *link;
+	int err;
 	__u32 link_info_len = sizeof(link_info);
 	__u32 prog_info_len = sizeof(prog_info);
 
 	skel1 = test_xdp_link__open_and_load();
-	if (CHECK(!skel1, "skel_load", "skeleton open and load failed\n"))
+	if (!ASSERT_OK_PTR(skel1, "skel_load"))
 		goto cleanup;
 	prog_fd1 = bpf_program__fd(skel1->progs.xdp_handler);
 
 	skel2 = test_xdp_link__open_and_load();
-	if (CHECK(!skel2, "skel_load", "skeleton open and load failed\n"))
+	if (!ASSERT_OK_PTR(skel2, "skel_load"))
 		goto cleanup;
 	prog_fd2 = bpf_program__fd(skel2->progs.xdp_handler);
 
 	memset(&prog_info, 0, sizeof(prog_info));
 	err = bpf_obj_get_info_by_fd(prog_fd1, &prog_info, &prog_info_len);
-	if (CHECK(err, "fd_info1", "failed %d\n", -errno))
+	if (!ASSERT_OK(err, "fd_info1"))
 		goto cleanup;
 	id1 = prog_info.id;
 
 	memset(&prog_info, 0, sizeof(prog_info));
 	err = bpf_obj_get_info_by_fd(prog_fd2, &prog_info, &prog_info_len);
-	if (CHECK(err, "fd_info2", "failed %d\n", -errno))
+	if (!ASSERT_OK(err, "fd_info2"))
 		goto cleanup;
 	id2 = prog_info.id;
 
 	/* set initial prog attachment */
 	err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, prog_fd1, XDP_FLAGS_REPLACE, &opts);
-	if (CHECK(err, "fd_attach", "initial prog attach failed: %d\n", err))
+	if (!ASSERT_OK(err, "fd_attach"))
 		goto cleanup;
 
 	/* validate prog ID */
 	err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
-	CHECK(err || id0 != id1, "id1_check",
-	      "loaded prog id %u != id1 %u, err %d", id0, id1, err);
+	if (!ASSERT_OK(err, "id1_check_err") || !ASSERT_EQ(id0, id1, "id1_check_val"))
+		goto cleanup;
 
 	/* BPF link is not allowed to replace prog attachment */
 	link = bpf_program__attach_xdp(skel1->progs.xdp_handler, IFINDEX_LO);
···
 	/* detach BPF program */
 	opts.old_fd = prog_fd1;
 	err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, -1, XDP_FLAGS_REPLACE, &opts);
-	if (CHECK(err, "prog_detach", "failed %d\n", err))
+	if (!ASSERT_OK(err, "prog_detach"))
 		goto cleanup;
 
 	/* now BPF link should attach successfully */
···
 
 	/* validate prog ID */
 	err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
-	if (CHECK(err || id0 != id1, "id1_check",
-		  "loaded prog id %u != id1 %u, err %d", id0, id1, err))
+	if (!ASSERT_OK(err, "id1_check_err") || !ASSERT_EQ(id0, id1, "id1_check_val"))
 		goto cleanup;
 
 	/* BPF prog attach is not allowed to replace BPF link */
 	opts.old_fd = prog_fd1;
 	err = bpf_set_link_xdp_fd_opts(IFINDEX_LO, prog_fd2, XDP_FLAGS_REPLACE, &opts);
-	if (CHECK(!err, "prog_attach_fail", "unexpected success\n"))
+	if (!ASSERT_ERR(err, "prog_attach_fail"))
 		goto cleanup;
 
 	/* Can't force-update when BPF link is active */
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, prog_fd2, 0);
-	if (CHECK(!err, "prog_update_fail", "unexpected success\n"))
+	if (!ASSERT_ERR(err, "prog_update_fail"))
 		goto cleanup;
 
 	/* Can't force-detach when BPF link is active */
 	err = bpf_set_link_xdp_fd(IFINDEX_LO, -1, 0);
-	if (CHECK(!err, "prog_detach_fail", "unexpected success\n"))
+	if (!ASSERT_ERR(err, "prog_detach_fail"))
 		goto cleanup;
 
 	/* BPF link is not allowed to replace another BPF link */
···
 	skel2->links.xdp_handler = link;
 
 	err = bpf_get_link_xdp_id(IFINDEX_LO, &id0, 0);
-	if (CHECK(err || id0 != id2, "id2_check",
-		  "loaded prog id %u != id2 %u, err %d", id0, id1, err))
+	if (!ASSERT_OK(err, "id2_check_err") || !ASSERT_EQ(id0, id2, "id2_check_val"))
 		goto cleanup;
 
 	/* updating program under active BPF link works as expected */
 	err = bpf_link__update_program(link, skel1->progs.xdp_handler);
-	if (CHECK(err, "link_upd", "failed: %d\n", err))
+	if (!ASSERT_OK(err, "link_upd"))
 		goto cleanup;
 
 	memset(&link_info, 0, sizeof(link_info));
 	err = bpf_obj_get_info_by_fd(bpf_link__fd(link), &link_info, &link_info_len);
-	if (CHECK(err, "link_info", "failed: %d\n", err))
+	if (!ASSERT_OK(err, "link_info"))
 		goto cleanup;
 
-	CHECK(link_info.type != BPF_LINK_TYPE_XDP, "link_type",
-	      "got %u != exp %u\n", link_info.type, BPF_LINK_TYPE_XDP);
-	CHECK(link_info.prog_id != id1, "link_prog_id",
-	      "got %u != exp %u\n", link_info.prog_id, id1);
-	CHECK(link_info.xdp.ifindex != IFINDEX_LO, "link_ifindex",
-	      "got %u != exp %u\n", link_info.xdp.ifindex, IFINDEX_LO);
+	ASSERT_EQ(link_info.type, BPF_LINK_TYPE_XDP, "link_type");
+	ASSERT_EQ(link_info.prog_id, id1, "link_prog_id");
+	ASSERT_EQ(link_info.xdp.ifindex, IFINDEX_LO, "link_ifindex");
+
+	/* updating program under active BPF link with different type fails */
+	err = bpf_link__update_program(link, skel1->progs.tc_handler);
+	if (!ASSERT_ERR(err, "link_upd_invalid"))
+		goto cleanup;
 
 	err = bpf_link__detach(link);
-	if (CHECK(err, "link_detach", "failed %d\n", err))
+	if (!ASSERT_OK(err, "link_detach"))
 		goto cleanup;
 
 	memset(&link_info, 0, sizeof(link_info));
 	err = bpf_obj_get_info_by_fd(bpf_link__fd(link), &link_info, &link_info_len);
-	if (CHECK(err, "link_info", "failed: %d\n", err))
-		goto cleanup;
-	CHECK(link_info.prog_id != id1, "link_prog_id",
-	      "got %u != exp %u\n", link_info.prog_id, id1);
+
+	ASSERT_OK(err, "link_info");
+	ASSERT_EQ(link_info.prog_id, id1, "link_prog_id");
 	/* ifindex should be zeroed out */
-	CHECK(link_info.xdp.ifindex != 0, "link_ifindex",
-	      "got %u != exp %u\n", link_info.xdp.ifindex, 0);
+	ASSERT_EQ(link_info.xdp.ifindex, 0, "link_ifindex");
 
 cleanup:
 	test_xdp_link__destroy(skel1);
+32
tools/testing/selftests/bpf/progs/test_d_path_check_types.c
···
+// SPDX-License-Identifier: GPL-2.0
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+extern const int bpf_prog_active __ksym;
+
+struct {
+	__uint(type, BPF_MAP_TYPE_RINGBUF);
+	__uint(max_entries, 1 << 12);
+} ringbuf SEC(".maps");
+
+SEC("fentry/security_inode_getattr")
+int BPF_PROG(d_path_check_rdonly_mem, struct path *path, struct kstat *stat,
+	     __u32 request_mask, unsigned int query_flags)
+{
+	void *active;
+	u32 cpu;
+
+	cpu = bpf_get_smp_processor_id();
+	active = (void *)bpf_per_cpu_ptr(&bpf_prog_active, cpu);
+	if (active) {
+		/* FAIL here! 'active' points to 'regular' memory. It
+		 * cannot be submitted to ring buffer.
+		 */
+		bpf_ringbuf_submit(active, 0);
+	}
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
+6
tools/testing/selftests/bpf/progs/test_xdp_link.c
···
 {
 	return 0;
 }
+
+SEC("tc")
+int tc_handler(struct __sk_buff *skb)
+{
+	return 0;
+}
+95
tools/testing/selftests/bpf/verifier/ringbuf.c
···
+{
+	"ringbuf: invalid reservation offset 1",
+	.insns = {
+	/* reserve 8 byte ringbuf memory */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	/* store a pointer to the reserved memory in R6 */
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
+	/* check whether the reservation was successful */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	/* spill R6(mem) into the stack */
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
+	/* fill it back in R7 */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8),
+	/* should be able to access *(R7) = 0 */
+	BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0),
+	/* submit the reserved ringbuf memory */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	/* add invalid offset to reserved ringbuf memory */
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xcafe),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 1 },
+	.result = REJECT,
+	.errstr = "dereference of modified alloc_mem ptr R1",
+},
+{
+	"ringbuf: invalid reservation offset 2",
+	.insns = {
+	/* reserve 8 byte ringbuf memory */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	/* store a pointer to the reserved memory in R6 */
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
+	/* check whether the reservation was successful */
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	/* spill R6(mem) into the stack */
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
+	/* fill it back in R7 */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_10, -8),
+	/* add invalid offset to reserved ringbuf memory */
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, 0xcafe),
+	/* should be able to access *(R7) = 0 */
+	BPF_ST_MEM(BPF_DW, BPF_REG_7, 0, 0),
+	/* submit the reserved ringbuf memory */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 1 },
+	.result = REJECT,
+	.errstr = "R7 min value is outside of the allowed memory range",
+},
+{
+	"ringbuf: check passing rb mem to helpers",
+	.insns = {
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	/* reserve 8 byte ringbuf memory */
+	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+	BPF_LD_MAP_FD(BPF_REG_1, 0),
+	BPF_MOV64_IMM(BPF_REG_2, 8),
+	BPF_MOV64_IMM(BPF_REG_3, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	/* check whether the reservation was successful */
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_EXIT_INSN(),
+	/* pass allocated ring buffer memory to fib lookup */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+	BPF_MOV64_IMM(BPF_REG_3, 8),
+	BPF_MOV64_IMM(BPF_REG_4, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_fib_lookup),
+	/* submit the ringbuf memory */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+	BPF_MOV64_IMM(BPF_REG_2, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.fixup_map_ringbuf = { 2 },
+	.prog_type = BPF_PROG_TYPE_XDP,
+	.result = ACCEPT,
+},
+1 -1
tools/testing/selftests/bpf/verifier/spill_fill.c
···
 	},
 	.fixup_map_ringbuf = { 1 },
 	.result = REJECT,
-	.errstr = "R0 pointer arithmetic on mem_or_null prohibited",
+	.errstr = "R0 pointer arithmetic on alloc_mem_or_null prohibited",
 },
 {
 	"check corrupted spill/fill",
+3
tools/testing/selftests/net/fcnal-test.sh
···
 	-p	Pause on fail
 	-P	Pause after each test
 	-v	Be verbose
+
+	Tests:
+	$TESTS_IPV4 $TESTS_IPV6 $TESTS_OTHER
 EOF
 }
 
+1 -1
tools/testing/selftests/net/settings
···
-timeout=300
+timeout=1500