
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) BPF verifier signed/unsigned value tracking fix, from Daniel
Borkmann, Edward Cree, and Josef Bacik.
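
   The hazard behind this class of verifier bug can be shown in plain C: a
   value that passes a signed upper-bound check may still be a huge unsigned
   quantity, so bounds tracking must account for both interpretations. A
   minimal illustration (hypothetical helper names, not the verifier code):

   ```c
   #include <stdbool.h>
   #include <stddef.h>

   /* A signed-only upper-bound check: idx = -1 passes, then becomes a
    * huge value when later used as an unsigned array index. */
   static bool signed_check_only(long long idx, size_t len)
   {
       return idx < (long long)len;
   }

   /* Correct: constrain both ends, i.e. also treat the value as unsigned. */
   static bool full_check(long long idx, size_t len)
   {
       return idx >= 0 && (unsigned long long)idx < len;
   }
   ```

   Here a negative index slips past the signed-only test but is rejected by
   the full check, which is the kind of derived min/max reasoning the fix
   restores.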

2) Fix memory allocation length when setting up calls to
->ndo_set_mac_address, from Cong Wang.
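
   The root cause: struct sockaddr only carries 14 bytes of sa_data, while
   some hardware addresses (e.g. 20-byte InfiniBand MACs) are longer, so the
   buffer handed to the driver must be sized for the device's address length.
   A hedged sketch of the sizing logic (struct and function names here are
   illustrative, not the kernel's):

   ```c
   #include <stddef.h>

   /* Simplified stand-in for struct sockaddr. */
   struct sockaddr_sketch {
       unsigned short sa_family;
       char sa_data[14];          /* too small for a 20-byte hardware address */
   };

   /* Allocation size safe for any hardware address length: never smaller
    * than the full struct, and always large enough for addr_len bytes. */
   static size_t mac_sockaddr_size(size_t addr_len)
   {
       size_t need = offsetof(struct sockaddr_sketch, sa_data) + addr_len;
       size_t min  = sizeof(struct sockaddr_sketch);

       return need > min ? need : min;
   }
   ```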

3) Add a new cxgb4 device ID, from Ganesh Goudar.

4) Fix FIB refcount handling: we have to set its initial value before
the configure callback (which can bump it). From David Ahern.
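
   The general rule illustrated by this fix: initialize an object's refcount
   before any callback that may take or drop references sees the object,
   otherwise the later initialization wipes out the callback's bump. A
   minimal sketch with hypothetical names (not the actual FIB code):

   ```c
   struct obj_sketch {
       int refcnt;
   };

   /* A configure hook that, like the FIB configure callback, may take
    * its own reference on the object. */
   static void configure(struct obj_sketch *o)
   {
       o->refcnt++;
   }

   /* Wrong order: configure()'s reference is lost, refcount ends at 1. */
   static int setup_buggy(void)
   {
       struct obj_sketch o = { 0 };

       configure(&o);
       o.refcnt = 1;      /* clobbers the bump taken above */
       return o.refcnt;
   }

   /* Fixed order: set the initial value first, then let configure() bump. */
   static int setup_fixed(void)
   {
       struct obj_sketch o = { 0 };

       o.refcnt = 1;
       configure(&o);
       return o.refcnt;
   }
   ```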

5) Fix double-free in qcom/emac driver, from Timur Tabi.

6) A bunch of gcc-7 string format overflow warning fixes from Arnd
Bergmann.

7) Fix link level headroom tests in ip_do_fragment(), from Vasily
Averin.

8) Fix chunk walking in SCTP when iterating over error and parameter
headers. From Alexander Potapenko.
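
   Walking TLV-style headers safely means validating each advertised length
   against the bytes actually remaining, which is what the SCTP fix does for
   error and parameter headers. A hedged, self-contained sketch (not the
   SCTP code itself; records are type(2) + length(2, big-endian, including
   the header), as in SCTP parameters):

   ```c
   #include <stddef.h>
   #include <stdint.h>

   /* Count TLV records in 'buf', rejecting any record whose advertised
    * length would run past the end of the buffer or fail to advance. */
   static int count_tlvs(const uint8_t *buf, size_t len)
   {
       size_t off = 0;
       int n = 0;

       while (len - off >= 4) {    /* need a full type+length header */
           uint16_t rec_len = (uint16_t)((buf[off + 2] << 8) | buf[off + 3]);

           if (rec_len < 4 || rec_len > len - off)
               return -1;          /* malformed: truncated or zero-advance */
           off += rec_len;
           n++;
       }
       return n;
   }
   ```

   Both failure modes matter: a length larger than the remaining bytes reads
   out of bounds, and a length smaller than the header makes the walk loop
   forever.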

9) TCP BBR congestion control fixes from Neal Cardwell.

10) Fix SKB fragment handling in bcmgenet driver, from Doug Berger.

11) BPF_CGROUP_RUN_PROG_SOCK_OPS needs to check for null __sk, from Cong
Wang.

12) xmit_recursion in ppp driver needs to be per-device not per-cpu,
from Gao Feng.
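
   A per-CPU recursion counter is the wrong scope when re-entry can happen
   on the same CPU through a *different* device (e.g. PPP stacked on PPP):
   legitimate stacking then trips the limit, while the counter really only
   needs to catch a loop through the same device. A minimal sketch with
   hypothetical names:

   ```c
   /* Per-device transmit recursion guard: the counter lives in the device,
    * so stacking dev A on dev B is fine, while a true loop through one
    * device hits the limit. */
   struct ppp_dev_sketch {
       int xmit_recursion;
   };

   #define XMIT_RECURSION_LIMIT 4

   /* Simulate 'hops' nested transmissions through the same device. */
   static int xmit_chain(struct ppp_dev_sketch *dev, int hops)
   {
       int ret = 0;

       if (dev->xmit_recursion >= XMIT_RECURSION_LIMIT)
           return -1;              /* would loop: drop instead of recursing */
       dev->xmit_recursion++;
       if (hops > 1)
           ret = xmit_chain(dev, hops - 1);
       dev->xmit_recursion--;
       return ret;
   }
   ```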

13) Cannot release skb->dst in UDP if IP options processing needs it.
From Paolo Abeni.

14) Some netdev ioctl ifr_name[] NULL termination fixes. From Alexander
Levin and myself.
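
   The pattern behind these fixes: a fixed-size name copied from userspace
   may arrive without a NUL terminator, so it must be terminated before any
   string function touches it. A hedged sketch (IFNAMSIZ_SKETCH and the
   struct stand in for the kernel's IFNAMSIZ and struct ifreq):

   ```c
   #include <string.h>

   #define IFNAMSIZ_SKETCH 16      /* stand-in for the kernel's IFNAMSIZ */

   struct ifreq_sketch {
       char ifr_name[IFNAMSIZ_SKETCH];
   };

   /* Force termination so later strlen()/strcmp() cannot run off the end. */
   static size_t safe_name_len(struct ifreq_sketch *ifr)
   {
       ifr->ifr_name[IFNAMSIZ_SKETCH - 1] = '\0';
       return strlen(ifr->ifr_name);
   }

   /* A worst-case "userspace" buffer: all 16 bytes used, no NUL anywhere. */
   static size_t len_of_unterminated(void)
   {
       struct ifreq_sketch ifr;

       memset(ifr.ifr_name, 'a', sizeof(ifr.ifr_name));
       return safe_name_len(&ifr);
   }
   ```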

15) Revert some rtnetlink notification changes that are causing
regressions, from David Ahern.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (83 commits)
net: bonding: Fix transmit load balancing in balance-alb mode
rds: Make sure updates to cp_send_gen can be observed
net: ethernet: ti: cpsw: Push the request_irq function to the end of probe
ipv4: initialize fib_trie prior to register_netdev_notifier call.
rtnetlink: allocate more memory for dev_set_mac_address()
net: dsa: b53: Add missing ARL entries for BCM53125
bpf: more tests for mixed signed and unsigned bounds checks
bpf: add test for mixed signed and unsigned bounds checks
bpf: fix up test cases with mixed signed/unsigned bounds
bpf: allow to specify log level and reduce it for test_verifier
bpf: fix mixed signed/unsigned derived min/max value bounds
ipv6: avoid overflow of offset in ip6_find_1stfragopt
net: tehuti: don't process data if it has not been copied from userspace
Revert "rtnetlink: Do not generate notifications for CHANGEADDR event"
net: dsa: mv88e6xxx: Enable CMODE config support for 6390X
dt-binding: ptp: Add SoC compatibility strings for dte ptp clock
NET: dwmac: Make dwmac reset unconditional
net: Zero terminate ifr_name in dev_ifname().
wireless: wext: terminate ifr name coming from userspace
netfilter: fix netfilter_net_init() return
...

Total: +1179 -584

Documentation/devicetree/bindings/net/brcm,amac.txt | +1

 - reg-names: Names of the registers.
        "amac_base":  Address and length of the GMAC registers
        "idm_base":   Address and length of the GMAC IDM registers
+                     (required for NSP and Northstar2)
        "nicpm_base": Address and length of the NIC Port Manager
                      registers (required for Northstar2)
 - interrupts: Interrupt number
Documentation/devicetree/bindings/net/brcm,bgmac-nsp.txt | -24 (file removed)

-Broadcom GMAC Ethernet Controller Device Tree Bindings
--------------------------------------------------------------
-
-Required properties:
-- compatible: "brcm,bgmac-nsp"
-- reg: Address and length of the GMAC registers,
-       Address and length of the GMAC IDM registers
-- reg-names: Names of the registers. Must have both "gmac_base" and
-             "idm_base"
-- interrupts: Interrupt number
-
-Optional properties:
-- mac-address: See ethernet.txt file in the same directory
-
-Examples:
-
-gmac0: ethernet@18022000 {
-        compatible = "brcm,bgmac-nsp";
-        reg = <0x18022000 0x1000>,
-              <0x18110000 0x1000>;
-        reg-names = "gmac_base", "idm_base";
-        interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>;
-        status = "disabled";
-};
Documentation/devicetree/bindings/ptp/brcm,ptp-dte.txt | +11 -4

-* Broadcom Digital Timing Engine(DTE) based PTP clock driver
+* Broadcom Digital Timing Engine(DTE) based PTP clock
 
 Required properties:
-- compatible: should be "brcm,ptp-dte"
+- compatible: should contain the core compatibility string
+              and the SoC compatibility string. The SoC
+              compatibility string is to handle SoC specific
+              hardware differences.
+              Core compatibility string:
+                 "brcm,ptp-dte"
+              SoC compatibility strings:
+                 "brcm,iproc-ptp-dte" - for iproc based SoC's
 - reg: address and length of the DTE block's NCO registers
 
 Example:
 
-ptp_dte: ptp_dte@180af650 {
-        compatible = "brcm,ptp-dte";
+ptp: ptp-dte@180af650 {
+        compatible = "brcm,iproc-ptp-dte", "brcm,ptp-dte";
         reg = <0x180af650 0x10>;
         status = "okay";
 };
drivers/atm/zatm.c | +1 -1

 
        ret = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
        if (ret < 0)
-               goto out_disable;
+               goto out_release;
 
        zatm_dev->pci_dev = pci_dev;
        dev->dev_data = zatm_dev;
drivers/isdn/divert/isdn_divert.c | +13 -12

                cs->deflect_dest[0] = '\0';
                retval = 4; /* only proceed */
        }
-       sprintf(cs->info, "%d 0x%lx %s %s %s %s 0x%x 0x%x %d %d %s\n",
-               cs->akt_state,
-               cs->divert_id,
-               divert_if.drv_to_name(cs->ics.driver),
-               (ic->command == ISDN_STAT_ICALLW) ? "1" : "0",
-               cs->ics.parm.setup.phone,
-               cs->ics.parm.setup.eazmsn,
-               cs->ics.parm.setup.si1,
-               cs->ics.parm.setup.si2,
-               cs->ics.parm.setup.screen,
-               dv->rule.waittime,
-               cs->deflect_dest);
+       snprintf(cs->info, sizeof(cs->info),
+                "%d 0x%lx %s %s %s %s 0x%x 0x%x %d %d %s\n",
+                cs->akt_state,
+                cs->divert_id,
+                divert_if.drv_to_name(cs->ics.driver),
+                (ic->command == ISDN_STAT_ICALLW) ? "1" : "0",
+                cs->ics.parm.setup.phone,
+                cs->ics.parm.setup.eazmsn,
+                cs->ics.parm.setup.si1,
+                cs->ics.parm.setup.si2,
+                cs->ics.parm.setup.screen,
+                dv->rule.waittime,
+                cs->deflect_dest);
        if ((dv->rule.action == DEFLECT_REPORT) ||
            (dv->rule.action == DEFLECT_REJECT)) {
                put_info_buffer(cs->info);
drivers/isdn/hardware/avm/c4.c | +1 -1

 
 static bool suppress_pollack;
 
-static struct pci_device_id c4_pci_tbl[] = {
+static const struct pci_device_id c4_pci_tbl[] = {
        { PCI_VENDOR_ID_DEC, PCI_DEVICE_ID_DEC_21285, PCI_VENDOR_ID_AVM, PCI_DEVICE_ID_AVM_C4, 0, 0, (unsigned long)4 },
        { PCI_VENDOR_ID_DEC, PCI_DEVICE_ID_DEC_21285, PCI_VENDOR_ID_AVM, PCI_DEVICE_ID_AVM_C2, 0, 0, (unsigned long)2 },
        { }                     /* Terminating entry */
drivers/isdn/hardware/eicon/divasmain.c | +1 -1

 /*
   This table should be sorted by PCI device ID
 */
-static struct pci_device_id divas_pci_tbl[] = {
+static const struct pci_device_id divas_pci_tbl[] = {
        /* Diva Server BRI-2M PCI 0xE010 */
        { PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRA),
          CARDTYPE_MAESTRA_PCI },
drivers/isdn/hardware/mISDN/avmfritz.c | +1 -1

        pr_info("%s: drvdata already removed\n", __func__);
 }
 
-static struct pci_device_id fcpci_ids[] = {
+static const struct pci_device_id fcpci_ids[] = {
        { PCI_VENDOR_ID_AVM, PCI_DEVICE_ID_AVM_A1, PCI_ANY_ID, PCI_ANY_ID,
          0, 0, (unsigned long) "Fritz!Card PCI"},
        { PCI_VENDOR_ID_AVM, PCI_DEVICE_ID_AVM_A1_V2, PCI_ANY_ID, PCI_ANY_ID,
drivers/isdn/hardware/mISDN/hfcmulti.c | +1 -1

 
 #undef H
 #define H(x)    ((unsigned long)&hfcm_map[x])
-static struct pci_device_id hfmultipci_ids[] = {
+static const struct pci_device_id hfmultipci_ids[] = {
 
        /* Cards with HFC-4S Chip */
        { PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_HFC4S, PCI_VENDOR_ID_CCD,
drivers/isdn/hardware/mISDN/hfcpci.c | +1 -1

        {},
 };
 
-static struct pci_device_id hfc_ids[] =
+static const struct pci_device_id hfc_ids[] =
 {
        { PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_2BD0),
          (unsigned long) &hfc_map[0] },
drivers/isdn/hardware/mISDN/netjet.c | +1 -1

 /* We cannot select cards with PCI_SUB... IDs, since here are cards with
  * SUB IDs set to PCI_ANY_ID, so we need to match all and reject
  * known other cards which not work with this driver - see probe function */
-static struct pci_device_id nj_pci_ids[] = {
+static const struct pci_device_id nj_pci_ids[] = {
        { PCI_VENDOR_ID_TIGERJET, PCI_DEVICE_ID_TIGERJET_300,
          PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
        { }
drivers/isdn/hardware/mISDN/w6692.c | +1 -1

        pr_notice("%s: drvdata already removed\n", __func__);
 }
 
-static struct pci_device_id w6692_ids[] = {
+static const struct pci_device_id w6692_ids[] = {
        { PCI_VENDOR_ID_DYNALINK, PCI_DEVICE_ID_DYNALINK_IS64PH,
          PCI_ANY_ID, PCI_ANY_ID, 0, 0, (ulong)&w6692_map[0]},
        { PCI_VENDOR_ID_WINBOND2, PCI_DEVICE_ID_WINBOND2_6692,
drivers/isdn/hisax/config.c | +1 -1

 #ifdef CONFIG_PCI
 #include <linux/pci.h>
 
-static struct pci_device_id hisax_pci_tbl[] __used = {
+static const struct pci_device_id hisax_pci_tbl[] __used = {
 #ifdef CONFIG_HISAX_FRITZPCI
        {PCI_VDEVICE(AVM,      PCI_DEVICE_ID_AVM_A1)   },
 #endif
drivers/isdn/hisax/hfc4s8s_l1.c | +1 -1

        char *device_name;
 } hfc4s8s_param;
 
-static struct pci_device_id hfc4s8s_ids[] = {
+static const struct pci_device_id hfc4s8s_ids[] = {
        {.vendor = PCI_VENDOR_ID_CCD,
         .device = PCI_DEVICE_ID_4S,
         .subvendor = 0x1397,
drivers/isdn/hisax/hisax_fcpcipnp.c | +1 -1

 MODULE_AUTHOR("Kai Germaschewski <kai.germaschewski@gmx.de>/Karsten Keil <kkeil@suse.de>");
 MODULE_DESCRIPTION("AVM Fritz!PCI/PnP ISDN driver");
 
-static struct pci_device_id fcpci_ids[] = {
+static const struct pci_device_id fcpci_ids[] = {
        { .vendor      = PCI_VENDOR_ID_AVM,
          .device      = PCI_DEVICE_ID_AVM_A1,
          .subvendor   = PCI_ANY_ID,
drivers/net/bonding/bond_main.c | +1 -1

        }
        ad_user_port_key = valptr->value;
 
-       if (bond_mode == BOND_MODE_TLB) {
+       if ((bond_mode == BOND_MODE_TLB) || (bond_mode == BOND_MODE_ALB)) {
                bond_opt_initstr(&newval, "default");
                valptr = bond_opt_parse(bond_opt_get(BOND_OPT_TLB_DYNAMIC_LB),
                                        &newval);
drivers/net/dsa/b53/b53_common.c | +1

                .dev_name = "BCM53125",
                .vlans = 4096,
                .enabled_ports = 0xff,
+               .arl_entries = 4,
                .cpu_port = B53_CPU_PORT,
                .vta_regs = B53_VTA_REGS,
                .duplex_reg = B53_DUPLEX_STAT_GE,
drivers/net/dsa/mv88e6xxx/chip.c | +1

        .port_set_jumbo_size = mv88e6165_port_set_jumbo_size,
        .port_egress_rate_limiting = mv88e6097_port_egress_rate_limiting,
        .port_pause_limit = mv88e6390_port_pause_limit,
+       .port_set_cmode = mv88e6390x_port_set_cmode,
        .port_disable_learn_limit = mv88e6xxx_port_disable_learn_limit,
        .port_disable_pri_override = mv88e6xxx_port_disable_pri_override,
        .stats_snapshot = mv88e6390_g1_stats_snapshot,
drivers/net/ethernet/apm/xgene/xgene_enet_main.c | +12 -10

 
        xgene_enet_gpiod_get(pdata);
 
-       pdata->clk = devm_clk_get(&pdev->dev, NULL);
-       if (IS_ERR(pdata->clk)) {
-               /* Abort if the clock is defined but couldn't be retrived.
-                * Always abort if the clock is missing on DT system as
-                * the driver can't cope with this case.
-                */
-               if (PTR_ERR(pdata->clk) != -ENOENT || dev->of_node)
-                       return PTR_ERR(pdata->clk);
-               /* Firmware may have set up the clock already. */
-               dev_info(dev, "clocks have been setup already\n");
+       if (pdata->phy_mode != PHY_INTERFACE_MODE_SGMII) {
+               pdata->clk = devm_clk_get(&pdev->dev, NULL);
+               if (IS_ERR(pdata->clk)) {
+                       /* Abort if the clock is defined but couldn't be
+                        * retrived. Always abort if the clock is missing on
+                        * DT system as the driver can't cope with this case.
+                        */
+                       if (PTR_ERR(pdata->clk) != -ENOENT || dev->of_node)
+                               return PTR_ERR(pdata->clk);
+                       /* Firmware may have set up the clock already. */
+                       dev_info(dev, "clocks have been setup already\n");
+               }
        }
 
        if (pdata->phy_mode != PHY_INTERFACE_MODE_XGMII)
drivers/net/ethernet/broadcom/bgmac-platform.c | +13 -8

 
 static void platform_bgmac_idm_write(struct bgmac *bgmac, u16 offset, u32 value)
 {
-       return writel(value, bgmac->plat.idm_base + offset);
+       writel(value, bgmac->plat.idm_base + offset);
 }
 
 static bool platform_bgmac_clk_enabled(struct bgmac *bgmac)
 {
+       if (!bgmac->plat.idm_base)
+               return true;
+
        if ((bgmac_idm_read(bgmac, BCMA_IOCTL) & BGMAC_CLK_EN) != BGMAC_CLK_EN)
                return false;
        if (bgmac_idm_read(bgmac, BCMA_RESET_CTL) & BCMA_RESET_CTL_RESET)
@@ ... @@
 static void platform_bgmac_clk_enable(struct bgmac *bgmac, u32 flags)
 {
        u32 val;
+
+       if (!bgmac->plat.idm_base)
+               return;
 
        /* The Reset Control register only contains a single bit to show if the
         * controller is currently in reset.  Do a sanity check here, just in
@@ ... @@
        bgmac->feature_flags |= BGMAC_FEAT_CMDCFG_SR_REV4;
        bgmac->feature_flags |= BGMAC_FEAT_TX_MASK_SETUP;
        bgmac->feature_flags |= BGMAC_FEAT_RX_MASK_SETUP;
+       bgmac->feature_flags |= BGMAC_FEAT_IDM_MASK;
 
        bgmac->dev = &pdev->dev;
        bgmac->dma_dev = &pdev->dev;
@@ ... @@
                return PTR_ERR(bgmac->plat.base);
 
        regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "idm_base");
-       if (!regs) {
-               dev_err(&pdev->dev, "Unable to obtain idm resource\n");
-               return -EINVAL;
+       if (regs) {
+               bgmac->plat.idm_base = devm_ioremap_resource(&pdev->dev, regs);
+               if (IS_ERR(bgmac->plat.idm_base))
+                       return PTR_ERR(bgmac->plat.idm_base);
+               bgmac->feature_flags &= ~BGMAC_FEAT_IDM_MASK;
        }
-
-       bgmac->plat.idm_base = devm_ioremap_resource(&pdev->dev, regs);
-       if (IS_ERR(bgmac->plat.idm_base))
-               return PTR_ERR(bgmac->plat.idm_base);
 
        regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "nicpm_base");
        if (regs) {
drivers/net/ethernet/broadcom/bgmac.c | +42 -28

        BUILD_BUG_ON(BGMAC_MAX_TX_RINGS > ARRAY_SIZE(ring_base));
        BUILD_BUG_ON(BGMAC_MAX_RX_RINGS > ARRAY_SIZE(ring_base));
 
-       if (!(bgmac_idm_read(bgmac, BCMA_IOST) & BCMA_IOST_DMA64)) {
-               dev_err(bgmac->dev, "Core does not report 64-bit DMA\n");
-               return -ENOTSUPP;
+       if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) {
+               if (!(bgmac_idm_read(bgmac, BCMA_IOST) & BCMA_IOST_DMA64)) {
+                       dev_err(bgmac->dev, "Core does not report 64-bit DMA\n");
+                       return -ENOTSUPP;
+               }
        }
 
        for (i = 0; i < BGMAC_MAX_TX_RINGS; i++) {
@@ ... @@
 static void bgmac_miiconfig(struct bgmac *bgmac)
 {
        if (bgmac->feature_flags & BGMAC_FEAT_FORCE_SPEED_2500) {
-               bgmac_idm_write(bgmac, BCMA_IOCTL,
-                               bgmac_idm_read(bgmac, BCMA_IOCTL) | 0x40 |
-                               BGMAC_BCMA_IOCTL_SW_CLKEN);
+               if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) {
+                       bgmac_idm_write(bgmac, BCMA_IOCTL,
+                                       bgmac_idm_read(bgmac, BCMA_IOCTL) |
+                                       0x40 | BGMAC_BCMA_IOCTL_SW_CLKEN);
+               }
                bgmac->mac_speed = SPEED_2500;
                bgmac->mac_duplex = DUPLEX_FULL;
                bgmac_mac_speed(bgmac);
@@ ... @@
        }
 }
 
+static void bgmac_chip_reset_idm_config(struct bgmac *bgmac)
+{
+       u32 iost;
+
+       iost = bgmac_idm_read(bgmac, BCMA_IOST);
+       if (bgmac->feature_flags & BGMAC_FEAT_IOST_ATTACHED)
+               iost &= ~BGMAC_BCMA_IOST_ATTACHED;
+
+       /* 3GMAC: for BCM4707 & BCM47094, only do core reset at bgmac_probe() */
+       if (!(bgmac->feature_flags & BGMAC_FEAT_NO_RESET)) {
+               u32 flags = 0;
+
+               if (iost & BGMAC_BCMA_IOST_ATTACHED) {
+                       flags = BGMAC_BCMA_IOCTL_SW_CLKEN;
+                       if (!bgmac->has_robosw)
+                               flags |= BGMAC_BCMA_IOCTL_SW_RESET;
+               }
+               bgmac_clk_enable(bgmac, flags);
+       }
+
+       if (iost & BGMAC_BCMA_IOST_ATTACHED && !bgmac->has_robosw)
+               bgmac_idm_write(bgmac, BCMA_IOCTL,
+                               bgmac_idm_read(bgmac, BCMA_IOCTL) &
+                               ~BGMAC_BCMA_IOCTL_SW_RESET);
+}
+
 /* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/chipreset */
 static void bgmac_chip_reset(struct bgmac *bgmac)
 {
        u32 cmdcfg_sr;
-       u32 iost;
        int i;
 
        if (bgmac_clk_enabled(bgmac)) {
@@ ... @@
                /* TODO: Clear software multicast filter list */
        }
 
-       iost = bgmac_idm_read(bgmac, BCMA_IOST);
-       if (bgmac->feature_flags & BGMAC_FEAT_IOST_ATTACHED)
-               iost &= ~BGMAC_BCMA_IOST_ATTACHED;
-
-       /* 3GMAC: for BCM4707 & BCM47094, only do core reset at bgmac_probe() */
-       if (!(bgmac->feature_flags & BGMAC_FEAT_NO_RESET)) {
-               u32 flags = 0;
-               if (iost & BGMAC_BCMA_IOST_ATTACHED) {
-                       flags = BGMAC_BCMA_IOCTL_SW_CLKEN;
-                       if (!bgmac->has_robosw)
-                               flags |= BGMAC_BCMA_IOCTL_SW_RESET;
-               }
-               bgmac_clk_enable(bgmac, flags);
-       }
+       if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK))
+               bgmac_chip_reset_idm_config(bgmac);
 
        /* Request Misc PLL for corerev > 2 */
        if (bgmac->feature_flags & BGMAC_FEAT_MISC_PLL_REQ) {
@@ ... @@
                bgmac_cco_ctl_maskset(bgmac, 7, ~BGMAC_CHIPCTL_7_IF_TYPE_MASK,
                                      BGMAC_CHIPCTL_7_IF_TYPE_RGMII);
        }
-
-       if (iost & BGMAC_BCMA_IOST_ATTACHED && !bgmac->has_robosw)
-               bgmac_idm_write(bgmac, BCMA_IOCTL,
-                               bgmac_idm_read(bgmac, BCMA_IOCTL) &
-                               ~BGMAC_BCMA_IOCTL_SW_RESET);
 
        /* http://bcm-v4.sipsolutions.net/mac-gbit/gmac/gmac_reset
         * Specs don't say about using BGMAC_CMDCFG_SR, but in this routine
@@ ... @@
        bgmac_clk_enable(bgmac, 0);
 
        /* This seems to be fixing IRQ by assigning OOB #6 to the core */
-       if (bgmac->feature_flags & BGMAC_FEAT_IRQ_ID_OOB_6)
-               bgmac_idm_write(bgmac, BCMA_OOB_SEL_OUT_A30, 0x86);
+       if (!(bgmac->feature_flags & BGMAC_FEAT_IDM_MASK)) {
+               if (bgmac->feature_flags & BGMAC_FEAT_IRQ_ID_OOB_6)
+                       bgmac_idm_write(bgmac, BCMA_OOB_SEL_OUT_A30, 0x86);
+       }
 
        bgmac_chip_reset(bgmac);
 
drivers/net/ethernet/broadcom/bgmac.h | +1

 #define BGMAC_FEAT_CC4_IF_SW_TYPE       BIT(17)
 #define BGMAC_FEAT_CC4_IF_SW_TYPE_RGMII BIT(18)
 #define BGMAC_FEAT_CC7_IF_TYPE_RGMII    BIT(19)
+#define BGMAC_FEAT_IDM_MASK             BIT(20)
 
 struct bgmac_slot_info {
        union {
drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c | +4 -3

 
 static int bnx2x_test_nvram(struct bnx2x *bp)
 {
-       const struct crc_pair nvram_tbl[] = {
+       static const struct crc_pair nvram_tbl[] = {
                {     0,  0x14 }, /* bootstrap */
                {  0x14,  0xec }, /* dir */
                { 0x100, 0x350 }, /* manuf_info */
@@ ... @@
                { 0x708,  0x70 }, /* manuf_key_info */
                {     0,     0 }
        };
-       const struct crc_pair nvram_tbl2[] = {
+       static const struct crc_pair nvram_tbl2[] = {
                { 0x7e8, 0x350 }, /* manuf_info2 */
                { 0xb38,  0xf0 }, /* feature_info */
                {     0,     0 }
@@ ... @@
        if (is_multi(bp)) {
                for_each_eth_queue(bp, i) {
                        memset(queue_name, 0, sizeof(queue_name));
-                       sprintf(queue_name, "%d", i);
+                       snprintf(queue_name, sizeof(queue_name),
+                                "%d", i);
                        for (j = 0; j < BNX2X_NUM_Q_STATS; j++)
                                snprintf(buf + (k + j)*ETH_GSTRING_LEN,
                                         ETH_GSTRING_LEN,
drivers/net/ethernet/broadcom/genet/bcmgenet.c | +153 -152

        return tx_cb_ptr;
 }
 
-/* Simple helper to free a control block's resources */
-static void bcmgenet_free_cb(struct enet_cb *cb)
+static struct enet_cb *bcmgenet_put_txcb(struct bcmgenet_priv *priv,
+                                        struct bcmgenet_tx_ring *ring)
 {
-       dev_kfree_skb_any(cb->skb);
-       cb->skb = NULL;
-       dma_unmap_addr_set(cb, dma_addr, 0);
+       struct enet_cb *tx_cb_ptr;
+
+       tx_cb_ptr = ring->cbs;
+       tx_cb_ptr += ring->write_ptr - ring->cb_ptr;
+
+       /* Rewinding local write pointer */
+       if (ring->write_ptr == ring->cb_ptr)
+               ring->write_ptr = ring->end_ptr;
+       else
+               ring->write_ptr--;
+
+       return tx_cb_ptr;
 }
 
 static inline void bcmgenet_rx_ring16_int_disable(struct bcmgenet_rx_ring *ring)
@@ ... @@
                                 INTRL2_CPU_MASK_SET);
 }
 
+/* Simple helper to free a transmit control block's resources
+ * Returns an skb when the last transmit control block associated with the
+ * skb is freed.  The skb should be freed by the caller if necessary.
+ */
+static struct sk_buff *bcmgenet_free_tx_cb(struct device *dev,
+                                          struct enet_cb *cb)
+{
+       struct sk_buff *skb;
+
+       skb = cb->skb;
+
+       if (skb) {
+               cb->skb = NULL;
+               if (cb == GENET_CB(skb)->first_cb)
+                       dma_unmap_single(dev, dma_unmap_addr(cb, dma_addr),
+                                        dma_unmap_len(cb, dma_len),
+                                        DMA_TO_DEVICE);
+               else
+                       dma_unmap_page(dev, dma_unmap_addr(cb, dma_addr),
+                                      dma_unmap_len(cb, dma_len),
+                                      DMA_TO_DEVICE);
+               dma_unmap_addr_set(cb, dma_addr, 0);
+
+               if (cb == GENET_CB(skb)->last_cb)
+                       return skb;
+
+       } else if (dma_unmap_addr(cb, dma_addr)) {
+               dma_unmap_page(dev,
+                              dma_unmap_addr(cb, dma_addr),
+                              dma_unmap_len(cb, dma_len),
+                              DMA_TO_DEVICE);
+               dma_unmap_addr_set(cb, dma_addr, 0);
+       }
+
+       return 0;
+}
+
+/* Simple helper to free a receive control block's resources */
+static struct sk_buff *bcmgenet_free_rx_cb(struct device *dev,
+                                          struct enet_cb *cb)
+{
+       struct sk_buff *skb;
+
+       skb = cb->skb;
+       cb->skb = NULL;
+
+       if (dma_unmap_addr(cb, dma_addr)) {
+               dma_unmap_single(dev, dma_unmap_addr(cb, dma_addr),
+                                dma_unmap_len(cb, dma_len), DMA_FROM_DEVICE);
+               dma_unmap_addr_set(cb, dma_addr, 0);
+       }
+
+       return skb;
+}
+
 /* Unlocked version of the reclaim routine */
 static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
                                          struct bcmgenet_tx_ring *ring)
 {
        struct bcmgenet_priv *priv = netdev_priv(dev);
-       struct device *kdev = &priv->pdev->dev;
-       struct enet_cb *tx_cb_ptr;
-       unsigned int pkts_compl = 0;
-       unsigned int bytes_compl = 0;
-       unsigned int c_index;
-       unsigned int txbds_ready;
        unsigned int txbds_processed = 0;
+       unsigned int bytes_compl = 0;
+       unsigned int pkts_compl = 0;
+       unsigned int txbds_ready;
+       unsigned int c_index;
+       struct sk_buff *skb;
 
        /* Clear status before servicing to reduce spurious interrupts */
        if (ring->index == DESC_INDEX)
@@ ... @@
 
        /* Reclaim transmitted buffers */
        while (txbds_processed < txbds_ready) {
-               tx_cb_ptr = &priv->tx_cbs[ring->clean_ptr];
-               if (tx_cb_ptr->skb) {
+               skb = bcmgenet_free_tx_cb(&priv->pdev->dev,
+                                         &priv->tx_cbs[ring->clean_ptr]);
+               if (skb) {
                        pkts_compl++;
-                       bytes_compl += GENET_CB(tx_cb_ptr->skb)->bytes_sent;
-                       dma_unmap_single(kdev,
-                                        dma_unmap_addr(tx_cb_ptr, dma_addr),
-                                        dma_unmap_len(tx_cb_ptr, dma_len),
-                                        DMA_TO_DEVICE);
-                       bcmgenet_free_cb(tx_cb_ptr);
-               } else if (dma_unmap_addr(tx_cb_ptr, dma_addr)) {
-                       dma_unmap_page(kdev,
-                                      dma_unmap_addr(tx_cb_ptr, dma_addr),
-                                      dma_unmap_len(tx_cb_ptr, dma_len),
-                                      DMA_TO_DEVICE);
-                       dma_unmap_addr_set(tx_cb_ptr, dma_addr, 0);
+                       bytes_compl += GENET_CB(skb)->bytes_sent;
+                       dev_kfree_skb_any(skb);
                }
 
                txbds_processed++;
@@ ... @@
                bcmgenet_tx_reclaim(dev, &priv->tx_rings[DESC_INDEX]);
 }
 
-/* Transmits a single SKB (either head of a fragment or a single SKB)
- * caller must hold priv->lock
- */
-static int bcmgenet_xmit_single(struct net_device *dev,
-                               struct sk_buff *skb,
-                               u16 dma_desc_flags,
-                               struct bcmgenet_tx_ring *ring)
-{
-       struct bcmgenet_priv *priv = netdev_priv(dev);
-       struct device *kdev = &priv->pdev->dev;
-       struct enet_cb *tx_cb_ptr;
-       unsigned int skb_len;
-       dma_addr_t mapping;
-       u32 length_status;
-       int ret;
-
-       tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
-
-       if (unlikely(!tx_cb_ptr))
-               BUG();
-
-       tx_cb_ptr->skb = skb;
-
-       skb_len = skb_headlen(skb);
-
-       mapping = dma_map_single(kdev, skb->data, skb_len, DMA_TO_DEVICE);
-       ret = dma_mapping_error(kdev, mapping);
-       if (ret) {
-               priv->mib.tx_dma_failed++;
-               netif_err(priv, tx_err, dev, "Tx DMA map failed\n");
-               dev_kfree_skb(skb);
-               return ret;
-       }
-
-       dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
-       dma_unmap_len_set(tx_cb_ptr, dma_len, skb_len);
-       length_status = (skb_len << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
-                       (priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT) |
-                       DMA_TX_APPEND_CRC;
-
-       if (skb->ip_summed == CHECKSUM_PARTIAL)
-               length_status |= DMA_TX_DO_CSUM;
-
-       dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping, length_status);
-
-       return 0;
-}
-
-/* Transmit a SKB fragment */
-static int bcmgenet_xmit_frag(struct net_device *dev,
-                             skb_frag_t *frag,
-                             u16 dma_desc_flags,
-                             struct bcmgenet_tx_ring *ring)
-{
-       struct bcmgenet_priv *priv = netdev_priv(dev);
-       struct device *kdev = &priv->pdev->dev;
-       struct enet_cb *tx_cb_ptr;
-       unsigned int frag_size;
-       dma_addr_t mapping;
-       int ret;
-
-       tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
-
-       if (unlikely(!tx_cb_ptr))
-               BUG();
-
-       tx_cb_ptr->skb = NULL;
-
-       frag_size = skb_frag_size(frag);
-
-       mapping = skb_frag_dma_map(kdev, frag, 0, frag_size, DMA_TO_DEVICE);
-       ret = dma_mapping_error(kdev, mapping);
-       if (ret) {
-               priv->mib.tx_dma_failed++;
-               netif_err(priv, tx_err, dev, "%s: Tx DMA map failed\n",
-                         __func__);
-               return ret;
-       }
-
-       dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
-       dma_unmap_len_set(tx_cb_ptr, dma_len, frag_size);
-
-       dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping,
-                   (frag_size << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
-                   (priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT));
-
-       return 0;
-}
-
 /* Reallocate the SKB to put enough headroom in front of it and insert
  * the transmit checksum offsets in the descriptors
  */
@@ ... @@
 static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
 {
        struct bcmgenet_priv *priv = netdev_priv(dev);
+       struct device *kdev = &priv->pdev->dev;
        struct bcmgenet_tx_ring *ring = NULL;
+       struct enet_cb *tx_cb_ptr;
        struct netdev_queue *txq;
        unsigned long flags = 0;
        int nr_frags, index;
-       u16 dma_desc_flags;
+       dma_addr_t mapping;
+       unsigned int size;
+       skb_frag_t *frag;
+       u32 len_stat;
        int ret;
        int i;
 
@@ ... @@
                }
        }
 
-       dma_desc_flags = DMA_SOP;
-       if (nr_frags == 0)
-               dma_desc_flags |= DMA_EOP;
+       for (i = 0; i <= nr_frags; i++) {
+               tx_cb_ptr = bcmgenet_get_txcb(priv, ring);
 
-       /* Transmit single SKB or head of fragment list */
-       ret = bcmgenet_xmit_single(dev, skb, dma_desc_flags, ring);
-       if (ret) {
-               ret = NETDEV_TX_OK;
-               goto out;
-       }
+               if (unlikely(!tx_cb_ptr))
+                       BUG();
 
-       /* xmit fragment */
-       for (i = 0; i < nr_frags; i++) {
-               ret = bcmgenet_xmit_frag(dev,
-                                        &skb_shinfo(skb)->frags[i],
-                                        (i == nr_frags - 1) ? DMA_EOP : 0,
-                                        ring);
-               if (ret) {
-                       ret = NETDEV_TX_OK;
-                       goto out;
+               if (!i) {
+                       /* Transmit single SKB or head of fragment list */
+                       GENET_CB(skb)->first_cb = tx_cb_ptr;
+                       size = skb_headlen(skb);
+                       mapping = dma_map_single(kdev, skb->data, size,
+                                                DMA_TO_DEVICE);
+               } else {
+                       /* xmit fragment */
+                       frag = &skb_shinfo(skb)->frags[i - 1];
+                       size = skb_frag_size(frag);
+                       mapping = skb_frag_dma_map(kdev, frag, 0, size,
+                                                  DMA_TO_DEVICE);
                }
+
+               ret = dma_mapping_error(kdev, mapping);
+               if (ret) {
+                       priv->mib.tx_dma_failed++;
+                       netif_err(priv, tx_err, dev, "Tx DMA map failed\n");
+                       ret = NETDEV_TX_OK;
+                       goto out_unmap_frags;
+               }
+               dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
+               dma_unmap_len_set(tx_cb_ptr, dma_len, size);
+
+               tx_cb_ptr->skb = skb;
+
+               len_stat = (size << DMA_BUFLENGTH_SHIFT) |
+                          (priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT);
+
+               if (!i) {
+                       len_stat |= DMA_TX_APPEND_CRC | DMA_SOP;
+                       if (skb->ip_summed == CHECKSUM_PARTIAL)
+                               len_stat |= DMA_TX_DO_CSUM;
+               }
+               if (i == nr_frags)
+                       len_stat |= DMA_EOP;
+
+               dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping, len_stat);
        }
 
+       GENET_CB(skb)->last_cb = tx_cb_ptr;
        skb_tx_timestamp(skb);
 
        /* Decrement total BD count and advance our write pointer */
@@ ... @@
        spin_unlock_irqrestore(&ring->lock, flags);
 
        return ret;
+
+out_unmap_frags:
+       /* Back up for failed control block mapping */
+       bcmgenet_put_txcb(priv, ring);
+
+       /* Unmap successfully mapped control blocks */
+       while (i-- > 0) {
+               tx_cb_ptr = bcmgenet_put_txcb(priv, ring);
+               bcmgenet_free_tx_cb(kdev, tx_cb_ptr);
+       }
+
+       dev_kfree_skb(skb);
+       goto out;
 }
 
 static struct sk_buff *bcmgenet_rx_refill(struct bcmgenet_priv *priv,
@@ ... @@
        }
 
        /* Grab the current Rx skb from the ring and DMA-unmap it */
-       rx_skb = cb->skb;
-       if (likely(rx_skb))
-               dma_unmap_single(kdev, dma_unmap_addr(cb, dma_addr),
-                                priv->rx_buf_len, DMA_FROM_DEVICE);
+       rx_skb = bcmgenet_free_rx_cb(kdev, cb);
 
        /* Put the new Rx skb on the ring */
        cb->skb = skb;
        dma_unmap_addr_set(cb, dma_addr, mapping);
+       dma_unmap_len_set(cb, dma_len, priv->rx_buf_len);
        dmadesc_set_addr(priv, cb->bd_addr, mapping);
 
        /* Return the current Rx skb to caller */
@@ ... @@
 
 static void bcmgenet_free_rx_buffers(struct bcmgenet_priv *priv)
 {
-       struct device *kdev = &priv->pdev->dev;
+       struct sk_buff *skb;
        struct enet_cb *cb;
        int i;
 
        for (i = 0; i < priv->num_rx_bds; i++) {
                cb = &priv->rx_cbs[i];
 
-               if (dma_unmap_addr(cb, dma_addr)) {
-                       dma_unmap_single(kdev,
-                                        dma_unmap_addr(cb, dma_addr),
-                                        priv->rx_buf_len, DMA_FROM_DEVICE);
-                       dma_unmap_addr_set(cb, dma_addr, 0);
-               }
-
-               if (cb->skb)
-                       bcmgenet_free_cb(cb);
+               skb = bcmgenet_free_rx_cb(&priv->pdev->dev, cb);
+               if (skb)
+                       dev_kfree_skb_any(skb);
        }
 }
@@ ... @@
 
 static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
 {
-       int i;
        struct netdev_queue *txq;
+       struct sk_buff *skb;
+       struct enet_cb *cb;
+       int i;
 
        bcmgenet_fini_rx_napi(priv);
        bcmgenet_fini_tx_napi(priv);
@@ ... @@
        bcmgenet_dma_teardown(priv);
 
        for (i = 0; i < priv->num_tx_bds; i++) {
-               if (priv->tx_cbs[i].skb != NULL) {
-                       dev_kfree_skb(priv->tx_cbs[i].skb);
-                       priv->tx_cbs[i].skb = NULL;
-               }
+               cb = priv->tx_cbs + i;
+               skb = bcmgenet_free_tx_cb(&priv->pdev->dev, cb);
+               if (skb)
+                       dev_kfree_skb(skb);
        }
 
        for (i = 0; i < priv->hw_params->tx_queues; i++) {
+2
drivers/net/ethernet/broadcom/genet/bcmgenet.h
···
 };
 
 struct bcmgenet_skb_cb {
+	struct enet_cb *first_cb;	/* First control block of SKB */
+	struct enet_cb *last_cb;	/* Last control block of SKB */
 	unsigned int bytes_sent;	/* bytes on the wire (no TSB) */
 };
 
+1 -1
drivers/net/ethernet/cavium/liquidio/lio_ethtool.c
···
 
 static int lio_get_eeprom_len(struct net_device *netdev)
 {
-	u8 buf[128];
+	u8 buf[192];
 	struct lio *lio = GET_LIO(netdev);
 	struct octeon_device *oct_dev = lio->oct_dev;
 	struct octeon_board_info *board_info;
+1 -1
drivers/net/ethernet/cavium/thunder/thunder_bgx.c
···
 {
 	struct device *dev = &bgx->pdev->dev;
 	struct lmac *lmac;
-	char str[20];
+	char str[27];
 
 	if (!bgx->is_dlm && lmacid)
 		return;
+2 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
···
 
 	adapter->ptp_clock = ptp_clock_register(&adapter->ptp_clock_info,
 						&adapter->pdev->dev);
-	if (!adapter->ptp_clock) {
+	if (IS_ERR_OR_NULL(adapter->ptp_clock)) {
+		adapter->ptp_clock = NULL;
 		dev_err(adapter->pdev_dev,
 			"PTP %s Clock registration has failed\n", __func__);
 		return;
+2
drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
···
 	CH_PCI_ID_TABLE_FENTRY(0x50a0), /* Custom T540-CR */
 	CH_PCI_ID_TABLE_FENTRY(0x50a1), /* Custom T540-CR */
 	CH_PCI_ID_TABLE_FENTRY(0x50a2), /* Custom T540-KR4 */
+	CH_PCI_ID_TABLE_FENTRY(0x50a3), /* Custom T580-KR4 */
+	CH_PCI_ID_TABLE_FENTRY(0x50a4), /* Custom 2x T540-CR */
 
 	/* T6 adapters:
 	 */
+2 -1
drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
···
 
 	assert(handle);
 	mac_cb = hns_get_mac_cb(handle);
-	if (!mac_cb->cpld_ctrl)
+	if (mac_cb->media_type != HNAE_MEDIA_TYPE_FIBER)
 		return;
+
 	hns_set_led_opt(mac_cb);
 }
 
+56 -2
drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
···
 	return ret;
 }
 
+static void hns_dsaf_acpi_ledctrl_by_port(struct hns_mac_cb *mac_cb, u8 op_type,
+					  u32 link, u32 port, u32 act)
+{
+	union acpi_object *obj;
+	union acpi_object obj_args[3], argv4;
+
+	obj_args[0].integer.type = ACPI_TYPE_INTEGER;
+	obj_args[0].integer.value = link;
+	obj_args[1].integer.type = ACPI_TYPE_INTEGER;
+	obj_args[1].integer.value = port;
+	obj_args[2].integer.type = ACPI_TYPE_INTEGER;
+	obj_args[2].integer.value = act;
+
+	argv4.type = ACPI_TYPE_PACKAGE;
+	argv4.package.count = 3;
+	argv4.package.elements = obj_args;
+
+	obj = acpi_evaluate_dsm(ACPI_HANDLE(mac_cb->dev),
+				&hns_dsaf_acpi_dsm_guid, 0, op_type, &argv4);
+	if (!obj) {
+		dev_warn(mac_cb->dev, "ledctrl fail, link:%d port:%d act:%d!\n",
+			 link, port, act);
+		return;
+	}
+
+	ACPI_FREE(obj);
+}
+
 static void hns_cpld_set_led(struct hns_mac_cb *mac_cb, int link_status,
 			     u16 speed, int data)
 {
···
 	}
 }
 
+static void hns_cpld_set_led_acpi(struct hns_mac_cb *mac_cb, int link_status,
+				  u16 speed, int data)
+{
+	if (!mac_cb) {
+		pr_err("cpld_led_set mac_cb is null!\n");
+		return;
+	}
+
+	hns_dsaf_acpi_ledctrl_by_port(mac_cb, HNS_OP_LED_SET_FUNC,
+				      link_status, mac_cb->mac_id, data);
+}
+
 static void cpld_led_reset(struct hns_mac_cb *mac_cb)
 {
 	if (!mac_cb || !mac_cb->cpld_ctrl)
···
 	dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
 			  CPLD_LED_DEFAULT_VALUE);
 	mac_cb->cpld_led_value = CPLD_LED_DEFAULT_VALUE;
+}
+
+static void cpld_led_reset_acpi(struct hns_mac_cb *mac_cb)
+{
+	if (!mac_cb) {
+		pr_err("cpld_led_reset mac_cb is null!\n");
+		return;
+	}
+
+	if (mac_cb->media_type != HNAE_MEDIA_TYPE_FIBER)
+		return;
+
+	hns_dsaf_acpi_ledctrl_by_port(mac_cb, HNS_OP_LED_SET_FUNC,
+				      0, mac_cb->mac_id, 0);
 }
 
 static int cpld_set_led_id(struct hns_mac_cb *mac_cb,
···
 
 		misc_op->cfg_serdes_loopback = hns_mac_config_sds_loopback;
 	} else if (is_acpi_node(dsaf_dev->dev->fwnode)) {
-		misc_op->cpld_set_led = hns_cpld_set_led;
-		misc_op->cpld_reset_led = cpld_led_reset;
+		misc_op->cpld_set_led = hns_cpld_set_led_acpi;
+		misc_op->cpld_reset_led = cpld_led_reset_acpi;
 		misc_op->cpld_set_led_id = cpld_set_led_id;
 
 		misc_op->dsaf_reset = hns_dsaf_rst_acpi;
+3 -5
drivers/net/ethernet/mellanox/mlx4/alloc.c
···
 }
 
 /* Should be called under a lock */
-static int __mlx4_zone_remove_one_entry(struct mlx4_zone_entry *entry)
+static void __mlx4_zone_remove_one_entry(struct mlx4_zone_entry *entry)
 {
 	struct mlx4_zone_allocator *zone_alloc = entry->allocator;
 
···
 		}
 		zone_alloc->mask = mask;
 	}
-
-	return 0;
 }
 
 void mlx4_zone_allocator_destroy(struct mlx4_zone_allocator *zone_alloc)
···
 int mlx4_zone_remove_one(struct mlx4_zone_allocator *zones, u32 uid)
 {
 	struct mlx4_zone_entry *zone;
-	int res;
+	int res = 0;
 
 	spin_lock(&zones->lock);
 
···
 		goto out;
 	}
 
-	res = __mlx4_zone_remove_one_entry(zone);
+	__mlx4_zone_remove_one_entry(zone);
 
 out:
 	spin_unlock(&zones->lock);
+6 -4
drivers/net/ethernet/qualcomm/emac/emac.c
···
 	struct emac_adapter *adpt = netdev_priv(netdev);
 	struct emac_sgmii *sgmii = &adpt->phy;
 
-	/* Closing the SGMII turns off its interrupts */
-	sgmii->close(adpt);
+	if (netdev->flags & IFF_UP) {
+		/* Closing the SGMII turns off its interrupts */
+		sgmii->close(adpt);
 
-	/* Resetting the MAC turns off all DMA and its interrupts */
-	emac_mac_reset(adpt);
+		/* Resetting the MAC turns off all DMA and its interrupts */
+		emac_mac_reset(adpt);
+	}
 }
 
 static struct platform_driver emac_platform_driver = {
+5 -9
drivers/net/ethernet/sgi/ioc3-eth.c
···
 	spinlock_t ioc3_lock;
 	struct mii_if_info mii;
 
+	struct net_device *dev;
 	struct pci_dev *pdev;
 
 	/* Members used by autonegotiation */
 	struct timer_list ioc3_timer;
 };
-
-static inline struct net_device *priv_netdev(struct ioc3_private *dev)
-{
-	return (void *)dev - ((sizeof(struct net_device) + 31) & ~31);
-}
 
 static int ioc3_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
 static void ioc3_set_multicast_list(struct net_device *dev);
···
 		nic[i] = nic_read_byte(ioc3);
 
 	for (i = 2; i < 8; i++)
-		priv_netdev(ip)->dev_addr[i - 2] = nic[i];
+		ip->dev->dev_addr[i - 2] = nic[i];
 }
 
 /*
···
 {
 	ioc3_get_eaddr_nic(ip);
 
-	printk("Ethernet address is %pM.\n", priv_netdev(ip)->dev_addr);
+	printk("Ethernet address is %pM.\n", ip->dev->dev_addr);
 }
 
 static void __ioc3_set_mac_address(struct net_device *dev)
···
  */
 static int ioc3_mii_init(struct ioc3_private *ip)
 {
-	struct net_device *dev = priv_netdev(ip);
 	int i, found = 0, res = 0;
 	int ioc3_phy_workaround = 1;
 	u16 word;
 
 	for (i = 0; i < 32; i++) {
-		word = ioc3_mdio_read(dev, i, MII_PHYSID1);
+		word = ioc3_mdio_read(ip->dev, i, MII_PHYSID1);
 
 		if (word != 0xffff && word != 0x0000) {
 			found = 1;
···
 	SET_NETDEV_DEV(dev, &pdev->dev);
 
 	ip = netdev_priv(dev);
+	ip->dev = dev;
 
 	dev->irq = pdev->irq;
 
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
···
 	void __iomem *ioaddr = hw->pcsr;
 	u32 value;
 
-	const struct stmmac_rx_routing route_possibilities[] = {
+	static const struct stmmac_rx_routing route_possibilities[] = {
 		{ GMAC_RXQCTRL_AVCPQ_MASK, GMAC_RXQCTRL_AVCPQ_SHIFT },
 		{ GMAC_RXQCTRL_PTPQ_MASK, GMAC_RXQCTRL_PTPQ_SHIFT },
 		{ GMAC_RXQCTRL_DCBCPQ_MASK, GMAC_RXQCTRL_DCBCPQ_SHIFT },
+8 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···
 	if ((phyaddr >= 0) && (phyaddr <= 31))
 		priv->plat->phy_addr = phyaddr;
 
-	if (priv->plat->stmmac_rst)
+	if (priv->plat->stmmac_rst) {
+		ret = reset_control_assert(priv->plat->stmmac_rst);
 		reset_control_deassert(priv->plat->stmmac_rst);
+		/* Some reset controllers have only reset callback instead of
+		 * assert + deassert callbacks pair.
+		 */
+		if (ret == -ENOTSUPP)
+			reset_control_reset(priv->plat->stmmac_rst);
+	}
 
 	/* Init MAC and get the capabilities */
 	ret = stmmac_hw_init(priv);
+2 -2
drivers/net/ethernet/sun/niu.c
···
 	p = niu_new_parent(np, id, ptype);
 
 	if (p) {
-		char port_name[6];
+		char port_name[8];
 		int err;
 
 		sprintf(port_name, "port%d", port);
···
 {
 	struct niu_parent *p = np->parent;
 	u8 port = np->port;
-	char port_name[6];
+	char port_name[8];
 
 	BUG_ON(!p || p->ports[port] != np);
 
+2
drivers/net/ethernet/tehuti/tehuti.c
···
 			RET(-EFAULT);
 		}
 		DBG("%d 0x%x 0x%x\n", data[0], data[1], data[2]);
+	} else {
+		return -EOPNOTSUPP;
 	}
 
 	if (!capable(CAP_SYS_RAWIO))
+25 -24
drivers/net/ethernet/ti/cpsw.c
···
 		cpsw->quirk_irq = true;
 	}
 
+	ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+
+	ndev->netdev_ops = &cpsw_netdev_ops;
+	ndev->ethtool_ops = &cpsw_ethtool_ops;
+	netif_napi_add(ndev, &cpsw->napi_rx, cpsw_rx_poll, CPSW_POLL_WEIGHT);
+	netif_tx_napi_add(ndev, &cpsw->napi_tx, cpsw_tx_poll, CPSW_POLL_WEIGHT);
+	cpsw_split_res(ndev);
+
+	/* register the network device */
+	SET_NETDEV_DEV(ndev, &pdev->dev);
+	ret = register_netdev(ndev);
+	if (ret) {
+		dev_err(priv->dev, "error registering net device\n");
+		ret = -ENODEV;
+		goto clean_ale_ret;
+	}
+
+	if (cpsw->data.dual_emac) {
+		ret = cpsw_probe_dual_emac(priv);
+		if (ret) {
+			cpsw_err(priv, probe, "error probe slave 2 emac interface\n");
+			goto clean_unregister_netdev_ret;
+		}
+	}
+
 	/* Grab RX and TX IRQs. Note that we also have RX_THRESHOLD and
 	 * MISC IRQs which are always kept disabled with this driver so
 	 * we will not request them.
···
 		goto clean_ale_ret;
 	}
 
-	ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
-
-	ndev->netdev_ops = &cpsw_netdev_ops;
-	ndev->ethtool_ops = &cpsw_ethtool_ops;
-	netif_napi_add(ndev, &cpsw->napi_rx, cpsw_rx_poll, CPSW_POLL_WEIGHT);
-	netif_tx_napi_add(ndev, &cpsw->napi_tx, cpsw_tx_poll, CPSW_POLL_WEIGHT);
-	cpsw_split_res(ndev);
-
-	/* register the network device */
-	SET_NETDEV_DEV(ndev, &pdev->dev);
-	ret = register_netdev(ndev);
-	if (ret) {
-		dev_err(priv->dev, "error registering net device\n");
-		ret = -ENODEV;
-		goto clean_ale_ret;
-	}
-
 	cpsw_notice(priv, probe,
 		    "initialized device (regs %pa, irq %d, pool size %d)\n",
 		    &ss_res->start, ndev->irq, dma_params.descs_pool_size);
-	if (cpsw->data.dual_emac) {
-		ret = cpsw_probe_dual_emac(priv);
-		if (ret) {
-			cpsw_err(priv, probe, "error probe slave 2 emac interface\n");
-			goto clean_unregister_netdev_ret;
-		}
-	}
 
 	pm_runtime_put(&pdev->dev);
 
+2 -2
drivers/net/phy/mdio-mux.c
···
 	for_each_available_child_of_node(dev->of_node, child_bus_node) {
 		int v;
 
-		v = of_mdio_parse_addr(dev, child_bus_node);
-		if (v < 0) {
+		r = of_property_read_u32(child_bus_node, "reg", &v);
+		if (r) {
 			dev_err(dev,
 				"Error: Failed to find reg for child %s\n",
 				of_node_full_name(child_bus_node));
+21 -9
drivers/net/ppp/ppp_generic.c
···
 	int n_channels;		/* how many channels are attached 54 */
 	spinlock_t rlock;	/* lock for receive side 58 */
 	spinlock_t wlock;	/* lock for transmit side 5c */
+	int *xmit_recursion __percpu; /* xmit recursion detect */
 	int mru;		/* max receive unit 60 */
 	unsigned int flags;	/* control bits 64 */
 	unsigned int xstate;	/* transmit state bits 68 */
···
 	struct ppp *ppp = netdev_priv(dev);
 	int indx;
 	int err;
+	int cpu;
 
 	ppp->dev = dev;
 	ppp->ppp_net = src_net;
···
 	INIT_LIST_HEAD(&ppp->channels);
 	spin_lock_init(&ppp->rlock);
 	spin_lock_init(&ppp->wlock);
+
+	ppp->xmit_recursion = alloc_percpu(int);
+	if (!ppp->xmit_recursion) {
+		err = -ENOMEM;
+		goto err1;
+	}
+	for_each_possible_cpu(cpu)
+		(*per_cpu_ptr(ppp->xmit_recursion, cpu)) = 0;
+
 #ifdef CONFIG_PPP_MULTILINK
 	ppp->minseq = -1;
 	skb_queue_head_init(&ppp->mrq);
···
 
 	err = ppp_unit_register(ppp, conf->unit, conf->ifname_is_set);
 	if (err < 0)
-		return err;
+		goto err2;
 
 	conf->file->private_data = &ppp->file;
 
 	return 0;
+err2:
+	free_percpu(ppp->xmit_recursion);
+err1:
+	return err;
 }
 
 static const struct nla_policy ppp_nl_policy[IFLA_PPP_MAX + 1] = {
···
 	ppp_xmit_unlock(ppp);
 }
 
-static DEFINE_PER_CPU(int, ppp_xmit_recursion);
-
 static void ppp_xmit_process(struct ppp *ppp)
 {
 	local_bh_disable();
 
-	if (unlikely(__this_cpu_read(ppp_xmit_recursion)))
+	if (unlikely(*this_cpu_ptr(ppp->xmit_recursion)))
 		goto err;
 
-	__this_cpu_inc(ppp_xmit_recursion);
+	(*this_cpu_ptr(ppp->xmit_recursion))++;
 	__ppp_xmit_process(ppp);
-	__this_cpu_dec(ppp_xmit_recursion);
+	(*this_cpu_ptr(ppp->xmit_recursion))--;
 
 	local_bh_enable();
 
···
 		read_lock(&pch->upl);
 		ppp = pch->ppp;
 		if (ppp)
-			__ppp_xmit_process(ppp);
+			ppp_xmit_process(ppp);
 		read_unlock(&pch->upl);
 	}
 }
···
 {
 	local_bh_disable();
 
-	__this_cpu_inc(ppp_xmit_recursion);
 	__ppp_channel_push(pch);
-	__this_cpu_dec(ppp_xmit_recursion);
 
 	local_bh_enable();
 }
···
 #endif /* CONFIG_PPP_FILTER */
 
 	kfree_skb(ppp->xmit_pending);
+	free_percpu(ppp->xmit_recursion);
 
 	free_netdev(ppp->dev);
 }
+28
drivers/net/usb/cdc_ncm.c
···
 	u8 *buf;
 	int len;
 	int temp;
+	int err;
 	u8 iface_no;
 	struct usb_cdc_parsed_header hdr;
+	u16 curr_ntb_format;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
 	if (!ctx)
···
 	if (temp) {
 		dev_dbg(&intf->dev, "set interface failed\n");
 		goto error2;
+	}
+
+	/*
+	 * Some Huawei devices have been observed to come out of reset in NDP32 mode.
+	 * Let's check if this is the case, and set the device to NDP16 mode again if
+	 * needed.
+	 */
+	if (ctx->drvflags & CDC_NCM_FLAG_RESET_NTB16) {
+		err = usbnet_read_cmd(dev, USB_CDC_GET_NTB_FORMAT,
+				      USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE,
+				      0, iface_no, &curr_ntb_format, 2);
+		if (err < 0) {
+			goto error2;
+		}
+
+		if (curr_ntb_format == USB_CDC_NCM_NTB32_FORMAT) {
+			dev_info(&intf->dev, "resetting NTB format to 16-bit");
+			err = usbnet_write_cmd(dev, USB_CDC_SET_NTB_FORMAT,
+					       USB_TYPE_CLASS | USB_DIR_OUT
+					       | USB_RECIP_INTERFACE,
+					       USB_CDC_NCM_NTB16_FORMAT,
+					       iface_no, NULL, 0);
+
+			if (err < 0)
+				goto error2;
+		}
 	}
 
 	cdc_ncm_find_endpoints(dev, ctx->data);
+6
drivers/net/usb/huawei_cdc_ncm.c
···
 	 * be at the end of the frame.
 	 */
 	drvflags |= CDC_NCM_FLAG_NDP_TO_END;
+
+	/* Additionally, it has been reported that some Huawei E3372H devices, with
+	 * firmware version 21.318.01.00.541, come out of reset in NTB32 format mode, hence
+	 * needing to be set to the NTB16 one again.
+	 */
+	drvflags |= CDC_NCM_FLAG_RESET_NTB16;
 	ret = cdc_ncm_bind_common(usbnet_dev, intf, 1, drvflags);
 	if (ret)
 		goto err;
+1
drivers/net/usb/smsc95xx.c
···
 	.set_wol	= smsc95xx_ethtool_set_wol,
 	.get_link_ksettings	= smsc95xx_get_link_ksettings,
 	.set_link_ksettings	= smsc95xx_set_link_ksettings,
+	.get_ts_info	= ethtool_op_get_ts_info,
 };
 
 static int smsc95xx_ioctl(struct net_device *netdev, struct ifreq *rq, int cmd)
+1 -1
drivers/net/vmxnet3/vmxnet3_int.h
···
 	u8 num_intrs;			/* # of intr vectors */
 	u8 event_intr_idx;		/* idx of the intr vector for event */
 	u8 mod_levels[VMXNET3_LINUX_MAX_MSIX_VECT]; /* moderation level */
-	char event_msi_vector_name[IFNAMSIZ+11];
+	char event_msi_vector_name[IFNAMSIZ+17];
 #ifdef CONFIG_PCI_MSI
 	struct msix_entry msix_entries[VMXNET3_LINUX_MAX_MSIX_VECT];
 #endif
+1 -1
drivers/net/wireless/ralink/rt2x00/rt2800lib.c
···
 
 static void rt2800_init_bbp_5592_glrt(struct rt2x00_dev *rt2x00dev)
 {
-	const u8 glrt_table[] = {
+	static const u8 glrt_table[] = {
 		0xE0, 0x1F, 0X38, 0x32, 0x08, 0x28, 0x19, 0x0A, 0xFF, 0x00, /* 128 ~ 137 */
 		0x16, 0x10, 0x10, 0x0B, 0x36, 0x2C, 0x26, 0x24, 0x42, 0x36, /* 138 ~ 147 */
 		0x30, 0x2D, 0x4C, 0x46, 0x3D, 0x40, 0x3E, 0x42, 0x3D, 0x40, /* 148 ~ 157 */
+1 -1
include/linux/bpf-cgroup.h
···
 	int __ret = 0;							       \
 	if (cgroup_bpf_enabled && (sock_ops)->sk) {			       \
 		typeof(sk) __sk = sk_to_full_sk((sock_ops)->sk);	       \
-		if (sk_fullsock(__sk))					       \
+		if (__sk && sk_fullsock(__sk))				       \
 			__ret = __cgroup_bpf_run_filter_sock_ops(__sk,	       \
 								 sock_ops,     \
 							 BPF_CGROUP_SOCK_OPS); \
+1
include/linux/bpf_verifier.h
···
 	u32 min_align;
 	u32 aux_off;
 	u32 aux_off_align;
+	bool value_from_signed;
 };
 
 enum bpf_stack_slot_type {
+14 -15
include/linux/jhash.h
···
 		k += 12;
 	}
 	/* Last block: affect all 32 bits of (c) */
-	/* All the case statements fall through */
 	switch (length) {
-	case 12: c += (u32)k[11]<<24;
-	case 11: c += (u32)k[10]<<16;
-	case 10: c += (u32)k[9]<<8;
-	case 9:  c += k[8];
-	case 8:  b += (u32)k[7]<<24;
-	case 7:  b += (u32)k[6]<<16;
-	case 6:  b += (u32)k[5]<<8;
-	case 5:  b += k[4];
-	case 4:  a += (u32)k[3]<<24;
-	case 3:  a += (u32)k[2]<<16;
-	case 2:  a += (u32)k[1]<<8;
+	case 12: c += (u32)k[11]<<24;	/* fall through */
+	case 11: c += (u32)k[10]<<16;	/* fall through */
+	case 10: c += (u32)k[9]<<8;	/* fall through */
+	case 9:  c += k[8];		/* fall through */
+	case 8:  b += (u32)k[7]<<24;	/* fall through */
+	case 7:  b += (u32)k[6]<<16;	/* fall through */
+	case 6:  b += (u32)k[5]<<8;	/* fall through */
+	case 5:  b += k[4];		/* fall through */
+	case 4:  a += (u32)k[3]<<24;	/* fall through */
+	case 3:  a += (u32)k[2]<<16;	/* fall through */
+	case 2:  a += (u32)k[1]<<8;	/* fall through */
	case 1:  a += k[0];
 		 __jhash_final(a, b, c);
 	case 0: /* Nothing left to add */
···
 		k += 3;
 	}
 
-	/* Handle the last 3 u32's: all the case statements fall through */
+	/* Handle the last 3 u32's */
 	switch (length) {
-	case 3: c += k[2];
-	case 2: b += k[1];
+	case 3: c += k[2];	/* fall through */
+	case 2: b += k[1];	/* fall through */
 	case 1: a += k[0];
 		__jhash_final(a, b, c);
 	case 0: /* Nothing left to add */
-9
include/linux/netfilter.h
···
 			       struct sk_buff *skb,
 			       const struct nf_hook_state *state);
 struct nf_hook_ops {
-	struct list_head list;
-
 	/* User fills in from here down. */
 	nf_hookfn *hook;
 	struct net_device *dev;
···
 			  unsigned int n);
 void nf_unregister_net_hooks(struct net *net, const struct nf_hook_ops *reg,
 			     unsigned int n);
-
-int nf_register_hook(struct nf_hook_ops *reg);
-void nf_unregister_hook(struct nf_hook_ops *reg);
-int nf_register_hooks(struct nf_hook_ops *reg, unsigned int n);
-void nf_unregister_hooks(struct nf_hook_ops *reg, unsigned int n);
-int _nf_register_hooks(struct nf_hook_ops *reg, unsigned int n);
-void _nf_unregister_hooks(struct nf_hook_ops *reg, unsigned int n);
 
 /* Functions to register get/setsockopt ranges (non-inclusive). You
    need to check permissions yourself! */
+1
include/linux/usb/cdc_ncm.h
···
 /* Driver flags */
 #define CDC_NCM_FLAG_NDP_TO_END			0x02	/* NDP is placed at end of frame */
 #define CDC_MBIM_FLAG_AVOID_ALTSETTING_TOGGLE	0x04	/* Avoid altsetting toggle during init */
+#define CDC_NCM_FLAG_RESET_NTB16		0x08	/* set NDP16 one more time after altsetting switch */
 
 #define cdc_ncm_comm_intf_is_mbim(x)  ((x)->desc.bInterfaceSubClass == USB_CDC_SUBCLASS_MBIM && \
 				       (x)->desc.bInterfaceProtocol == USB_CDC_PROTO_NONE)
+2 -2
include/net/netlink.h
···
  *   nla_put_u8(skb, type, value)		add u8 attribute to skb
  *   nla_put_u16(skb, type, value)		add u16 attribute to skb
  *   nla_put_u32(skb, type, value)		add u32 attribute to skb
- *   nla_put_u64_64bits(skb, type,
- *			value, padattr)		add u64 attribute to skb
+ *   nla_put_u64_64bit(skb, type,
+ *			value, padattr)		add u64 attribute to skb
  *   nla_put_s8(skb, type, value)		add s8 attribute to skb
  *   nla_put_s16(skb, type, value)		add s16 attribute to skb
  *   nla_put_s32(skb, type, value)		add s32 attribute to skb
+4
include/net/sctp/sctp.h
···
 
 #define _sctp_walk_params(pos, chunk, end, member)\
 for (pos.v = chunk->member;\
+     (pos.v + offsetof(struct sctp_paramhdr, length) + sizeof(pos.p->length) <\
+      (void *)chunk + end) &&\
      pos.v <= (void *)chunk + end - ntohs(pos.p->length) &&\
      ntohs(pos.p->length) >= sizeof(struct sctp_paramhdr);\
      pos.v += SCTP_PAD4(ntohs(pos.p->length)))
···
 #define _sctp_walk_errors(err, chunk_hdr, end)\
 for (err = (sctp_errhdr_t *)((void *)chunk_hdr + \
 	    sizeof(struct sctp_chunkhdr));\
+     ((void *)err + offsetof(sctp_errhdr_t, length) + sizeof(err->length) <\
+      (void *)chunk_hdr + end) &&\
      (void *)err <= (void *)chunk_hdr + end - ntohs(err->length) &&\
      ntohs(err->length) >= sizeof(sctp_errhdr_t); \
      err = (sctp_errhdr_t *)((void *)err + SCTP_PAD4(ntohs(err->length))))
+94 -14
kernel/bpf/verifier.c
···
 {
 	regs[regno].min_value = BPF_REGISTER_MIN_RANGE;
 	regs[regno].max_value = BPF_REGISTER_MAX_RANGE;
+	regs[regno].value_from_signed = false;
 	regs[regno].min_align = 0;
 }
 
···
 	return -EACCES;
 }
 
-static bool is_pointer_value(struct bpf_verifier_env *env, int regno)
+static bool __is_pointer_value(bool allow_ptr_leaks,
+			       const struct bpf_reg_state *reg)
 {
-	if (env->allow_ptr_leaks)
+	if (allow_ptr_leaks)
 		return false;
 
-	switch (env->cur_state.regs[regno].type) {
+	switch (reg->type) {
 	case UNKNOWN_VALUE:
 	case CONST_IMM:
 		return false;
 	default:
 		return true;
 	}
+}
+
+static bool is_pointer_value(struct bpf_verifier_env *env, int regno)
+{
+	return __is_pointer_value(env->allow_ptr_leaks, &env->cur_state.regs[regno]);
 }
 
 static int check_pkt_ptr_alignment(const struct bpf_reg_state *reg,
···
 	dst_align = dst_reg->min_align;
 
 	/* We don't know anything about what was done to this register, mark it
-	 * as unknown.
+	 * as unknown. Also, if both derived bounds came from signed/unsigned
+	 * mixed compares and one side is unbounded, we cannot really do anything
+	 * with them as boundaries cannot be trusted. Thus, arithmetic of two
+	 * regs of such kind will get invalidated bounds on the dst side.
 	 */
-	if (min_val == BPF_REGISTER_MIN_RANGE &&
-	    max_val == BPF_REGISTER_MAX_RANGE) {
+	if ((min_val == BPF_REGISTER_MIN_RANGE &&
+	     max_val == BPF_REGISTER_MAX_RANGE) ||
+	    (BPF_SRC(insn->code) == BPF_X &&
+	     ((min_val != BPF_REGISTER_MIN_RANGE &&
+	       max_val == BPF_REGISTER_MAX_RANGE) ||
+	      (min_val == BPF_REGISTER_MIN_RANGE &&
+	       max_val != BPF_REGISTER_MAX_RANGE) ||
+	      (dst_reg->min_value != BPF_REGISTER_MIN_RANGE &&
+	       dst_reg->max_value == BPF_REGISTER_MAX_RANGE) ||
+	      (dst_reg->min_value == BPF_REGISTER_MIN_RANGE &&
+	       dst_reg->max_value != BPF_REGISTER_MAX_RANGE)) &&
+	     regs[insn->dst_reg].value_from_signed !=
+	     regs[insn->src_reg].value_from_signed)) {
 		reset_reg_range_values(regs, insn->dst_reg);
 		return;
 	}
···
 			regs[insn->dst_reg].max_value = insn->imm;
 			regs[insn->dst_reg].min_value = insn->imm;
 			regs[insn->dst_reg].min_align = calc_align(insn->imm);
+			regs[insn->dst_reg].value_from_signed = false;
 		}
 
 	} else if (opcode > BPF_END) {
···
 			    struct bpf_reg_state *false_reg, u64 val,
 			    u8 opcode)
 {
+	bool value_from_signed = true;
+	bool is_range = true;
+
 	switch (opcode) {
 	case BPF_JEQ:
 		/* If this is false then we know nothing Jon Snow, but if it is
 		 * true then we know for sure.
 		 */
 		true_reg->max_value = true_reg->min_value = val;
+		is_range = false;
 		break;
 	case BPF_JNE:
 		/* If this is true we know nothing Jon Snow, but if it is false
 		 * we know the value for sure;
 		 */
 		false_reg->max_value = false_reg->min_value = val;
+		is_range = false;
 		break;
 	case BPF_JGT:
-		/* Unsigned comparison, the minimum value is 0. */
-		false_reg->min_value = 0;
+		value_from_signed = false;
 		/* fallthrough */
 	case BPF_JSGT:
+		if (true_reg->value_from_signed != value_from_signed)
+			reset_reg_range_values(true_reg, 0);
+		if (false_reg->value_from_signed != value_from_signed)
+			reset_reg_range_values(false_reg, 0);
+		if (opcode == BPF_JGT) {
+			/* Unsigned comparison, the minimum value is 0. */
+			false_reg->min_value = 0;
+		}
 		/* If this is false then we know the maximum val is val,
 		 * otherwise we know the min val is val+1.
 		 */
 		false_reg->max_value = val;
+		false_reg->value_from_signed = value_from_signed;
 		true_reg->min_value = val + 1;
+		true_reg->value_from_signed = value_from_signed;
 		break;
 	case BPF_JGE:
-		/* Unsigned comparison, the minimum value is 0. */
-		false_reg->min_value = 0;
+		value_from_signed = false;
 		/* fallthrough */
 	case BPF_JSGE:
+		if (true_reg->value_from_signed != value_from_signed)
+			reset_reg_range_values(true_reg, 0);
+		if (false_reg->value_from_signed != value_from_signed)
+			reset_reg_range_values(false_reg, 0);
+		if (opcode == BPF_JGE) {
+			/* Unsigned comparison, the minimum value is 0. */
+			false_reg->min_value = 0;
+		}
 		/* If this is false then we know the maximum value is val - 1,
 		 * otherwise we know the mimimum value is val.
 		 */
 		false_reg->max_value = val - 1;
+		false_reg->value_from_signed = value_from_signed;
 		true_reg->min_value = val;
+		true_reg->value_from_signed = value_from_signed;
 		break;
 	default:
 		break;
···
 
 	check_reg_overflow(false_reg);
 	check_reg_overflow(true_reg);
+	if (is_range) {
+		if (__is_pointer_value(false, false_reg))
+			reset_reg_range_values(false_reg, 0);
+		if (__is_pointer_value(false, true_reg))
+			reset_reg_range_values(true_reg, 0);
+	}
 }
 
 /* Same as above, but for the case that dst_reg is a CONST_IMM reg and src_reg
···
 				struct bpf_reg_state *false_reg, u64 val,
 				u8 opcode)
 {
+	bool value_from_signed = true;
+	bool is_range = true;
+
 	switch (opcode) {
 	case BPF_JEQ:
 		/* If this is false then we know nothing Jon Snow, but if it is
 		 * true then we know for sure.
 		 */
 		true_reg->max_value = true_reg->min_value = val;
+		is_range = false;
 		break;
 	case BPF_JNE:
 		/* If this is true we know nothing Jon Snow, but if it is false
 		 * we know the value for sure;
 		 */
 		false_reg->max_value = false_reg->min_value = val;
+		is_range = false;
 		break;
 	case BPF_JGT:
-		/* Unsigned comparison, the minimum value is 0. */
-		true_reg->min_value = 0;
+		value_from_signed = false;
 		/* fallthrough */
 	case BPF_JSGT:
+		if (true_reg->value_from_signed != value_from_signed)
+			reset_reg_range_values(true_reg, 0);
+		if (false_reg->value_from_signed != value_from_signed)
+			reset_reg_range_values(false_reg, 0);
+		if (opcode == BPF_JGT) {
+			/* Unsigned comparison, the minimum value is 0. */
+			true_reg->min_value = 0;
+		}
 		/*
 		 * If this is false, then the val is <= the register, if it is
 		 * true the register <= to the val.
 		 */
 		false_reg->min_value = val;
+		false_reg->value_from_signed = value_from_signed;
 		true_reg->max_value = val - 1;
+		true_reg->value_from_signed = value_from_signed;
 		break;
 	case BPF_JGE:
-		/* Unsigned comparison, the minimum value is 0. */
-		true_reg->min_value = 0;
+		value_from_signed = false;
 		/* fallthrough */
 	case BPF_JSGE:
+		if (true_reg->value_from_signed != value_from_signed)
+			reset_reg_range_values(true_reg, 0);
+		if (false_reg->value_from_signed != value_from_signed)
+			reset_reg_range_values(false_reg, 0);
+		if (opcode == BPF_JGE) {
+			/* Unsigned comparison, the minimum value is 0. */
+			true_reg->min_value = 0;
+		}
 		/* If this is false then constant < register, if it is true then
 		 * the register < constant.
 		 */
 		false_reg->min_value = val + 1;
+		false_reg->value_from_signed = value_from_signed;
 		true_reg->max_value = val;
+		true_reg->value_from_signed = value_from_signed;
 		break;
 	default:
 		break;
···
 
 	check_reg_overflow(false_reg);
 	check_reg_overflow(true_reg);
+	if (is_range) {
+		if (__is_pointer_value(false, false_reg))
+			reset_reg_range_values(false_reg, 0);
+		if (__is_pointer_value(false, true_reg))
+			reset_reg_range_values(true_reg, 0);
+	}
 }
 
 static void mark_map_reg(struct bpf_reg_state *regs, u32 regno, u32 id,
+2 -1
net/bridge/br_device.c
···
 netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct net_bridge *br = netdev_priv(dev);
-	const unsigned char *dest = skb->data;
 	struct net_bridge_fdb_entry *dst;
 	struct net_bridge_mdb_entry *mdst;
 	struct pcpu_sw_netstats *brstats = this_cpu_ptr(br->stats);
 	const struct nf_br_ops *nf_ops;
+	const unsigned char *dest;
 	u16 vid = 0;
 
 	rcu_read_lock();
···
 	if (!br_allowed_ingress(br, br_vlan_group_rcu(br), skb, &vid))
 		goto out;
 
+	dest = eth_hdr(skb)->h_dest;
 	if (is_broadcast_ether_addr(dest)) {
 		br_flood(br, skb, BR_PKT_BROADCAST, false, true);
 	} else if (is_multicast_ether_addr(dest)) {
+2 -1
net/bridge/br_input.c
··· 131 131 int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb) 132 132 { 133 133 struct net_bridge_port *p = br_port_get_rcu(skb->dev); 134 - const unsigned char *dest = eth_hdr(skb)->h_dest; 135 134 enum br_pkt_type pkt_type = BR_PKT_UNICAST; 136 135 struct net_bridge_fdb_entry *dst = NULL; 137 136 struct net_bridge_mdb_entry *mdst; 138 137 bool local_rcv, mcast_hit = false; 138 + const unsigned char *dest; 139 139 struct net_bridge *br; 140 140 u16 vid = 0; 141 141 ··· 153 153 br_fdb_update(br, p, eth_hdr(skb)->h_source, vid, false); 154 154 155 155 local_rcv = !!(br->dev->flags & IFF_PROMISC); 156 + dest = eth_hdr(skb)->h_dest; 156 157 if (is_multicast_ether_addr(dest)) { 157 158 /* by definition the broadcast is also a multicast address */ 158 159 if (is_broadcast_ether_addr(dest)) {
+3
net/core/dev_ioctl.c
··· 28 28 29 29 if (copy_from_user(&ifr, arg, sizeof(struct ifreq))) 30 30 return -EFAULT; 31 + ifr.ifr_name[IFNAMSIZ-1] = 0; 31 32 32 33 error = netdev_get_name(net, ifr.ifr_name, ifr.ifr_ifindex); 33 34 if (error) ··· 424 423 425 424 if (copy_from_user(&iwr, arg, sizeof(iwr))) 426 425 return -EFAULT; 426 + 427 + iwr.ifr_name[sizeof(iwr.ifr_name) - 1] = 0; 427 428 428 429 return wext_handle_ioctl(net, &iwr, cmd, arg); 429 430 }
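Both hunks above apply the same defensive pattern: a fixed-size name copied from userspace may arrive without a terminator, so the kernel forcibly NUL-terminates the last byte before any string helper touches it. A minimal userspace sketch of the pattern (the 16-byte size matches the kernel's IFNAMSIZ; the helper name is made up for illustration):

```c
#include <assert.h>
#include <string.h>

#define IFNAMSIZ 16  /* matches the kernel's fixed ifr_name size */

/* Force NUL termination on a name that may have been copied from an
 * untrusted source without a terminator (illustrative helper). */
static void terminate_ifname(char name[IFNAMSIZ])
{
    name[IFNAMSIZ - 1] = '\0';
}
```

After this, `strlen()` and friends are guaranteed to stop inside the buffer even for a fully packed, unterminated input.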
+1 -2
net/core/fib_rules.c
··· 400 400 err = -ENOMEM; 401 401 goto errout; 402 402 } 403 + refcount_set(&rule->refcnt, 1); 403 404 rule->fr_net = net; 404 405 405 406 rule->pref = tb[FRA_PRIORITY] ? nla_get_u32(tb[FRA_PRIORITY]) ··· 517 516 break; 518 517 last = r; 519 518 } 520 - 521 - refcount_set(&rule->refcnt, 1); 522 519 523 520 if (last) 524 521 list_add_rcu(&rule->list, &last->list);
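The hunk above moves the initial `refcount_set(&rule->refcnt, 1)` to before the rest of rule setup, because the configure path may itself take and drop references; if the counter were still zero at that point, the matching put would underflow it. A hypothetical userspace sketch of the ordering, using C11 atomics in place of the kernel's `refcount_t` (all names here are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

struct obj { atomic_int refcnt; };

static void obj_get(struct obj *o) { atomic_fetch_add(&o->refcnt, 1); }
static void obj_put(struct obj *o) { atomic_fetch_sub(&o->refcnt, 1); }

/* A callback that briefly takes its own reference, as a FIB rule's
 * configure hook may do. If refcnt were still 0 here, the matching
 * put would drop it below zero and the object could be freed early. */
static void configure(struct obj *o)
{
    obj_get(o);
    /* ... use o ... */
    obj_put(o);
}

static void setup(struct obj *o)
{
    atomic_init(&o->refcnt, 1); /* initialise BEFORE any callback runs */
    configure(o);
}
```

The invariant being preserved is simple: the counter holds the creator's reference before anyone else is allowed to get/put against it.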
+1 -1
net/core/filter.c
··· 2248 2248 bpf_skb_net_grow(skb, len_diff_abs); 2249 2249 2250 2250 bpf_compute_data_end(skb); 2251 - return 0; 2251 + return ret; 2252 2252 } 2253 2253 2254 2254 BPF_CALL_4(bpf_skb_adjust_room, struct sk_buff *, skb, s32, len_diff,
+1 -1
net/core/netpoll.c
··· 277 277 struct sk_buff *skb = clist; 278 278 clist = clist->next; 279 279 if (!skb_irq_freeable(skb)) { 280 - refcount_inc(&skb->users); 280 + refcount_set(&skb->users, 1); 281 281 dev_kfree_skb_any(skb); /* put this one back */ 282 282 } else { 283 283 __kfree_skb(skb);
+3 -1
net/core/rtnetlink.c
··· 2031 2031 struct sockaddr *sa; 2032 2032 int len; 2033 2033 2034 - len = sizeof(sa_family_t) + dev->addr_len; 2034 + len = sizeof(sa_family_t) + max_t(size_t, dev->addr_len, 2035 + sizeof(*sa)); 2035 2036 sa = kmalloc(len, GFP_KERNEL); 2036 2037 if (!sa) { 2037 2038 err = -ENOMEM; ··· 4242 4241 4243 4242 switch (event) { 4244 4243 case NETDEV_REBOOT: 4244 + case NETDEV_CHANGEADDR: 4245 4245 case NETDEV_CHANGENAME: 4246 4246 case NETDEV_FEAT_CHANGE: 4247 4247 case NETDEV_BONDING_FAILOVER:
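The first hunk above fixes the allocation length passed down to `->ndo_set_mac_address()`: drivers treat the buffer as a full `struct sockaddr` and may read all of `sa_data`, so allocating only `addr_len` payload bytes (6 for Ethernet) under-sizes the buffer. A sketch of the corrected sizing, with a stand-in struct so it is self-contained (field sizes mirror `struct sockaddr`, names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct sockaddr_ish {            /* stand-in for struct sockaddr */
    unsigned short sa_family;    /* sa_family_t is an unsigned short */
    char           sa_data[14];
};

/* Size the buffer for the larger of the device's address length and a
 * full struct sockaddr, so drivers that read all of sa_data stay in
 * bounds even when addr_len is shorter (e.g. 6 bytes for Ethernet). */
static size_t mac_addr_alloc_len(size_t dev_addr_len)
{
    size_t payload = dev_addr_len > sizeof(struct sockaddr_ish)
                         ? dev_addr_len : sizeof(struct sockaddr_ish);

    return sizeof(unsigned short) + payload;
}
```

Over-allocating by a few bytes for short hardware addresses is the cheap, safe side of this trade-off.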
+1 -1
net/dccp/input.c
··· 126 126 127 127 static u16 dccp_reset_code_convert(const u8 code) 128 128 { 129 - const u16 error_code[] = { 129 + static const u16 error_code[] = { 130 130 [DCCP_RESET_CODE_CLOSED] = 0, /* normal termination */ 131 131 [DCCP_RESET_CODE_UNSPECIFIED] = 0, /* nothing known */ 132 132 [DCCP_RESET_CODE_ABORTED] = ECONNRESET,
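The one-word hunk above matters more than it looks: without `static`, the designated-initializer table is a per-call automatic object that the compiler must rebuild on the stack on every invocation; `static const` places it in `.rodata` once. A generic sketch of the pattern (codes and values here are illustrative, not the real DCCP reset-code mapping):

```c
#include <assert.h>
#include <errno.h>

/* With plain "const u16 error_code[]" inside the function, the table is
 * a per-call stack object; "static const" emits it once, read-only. */
static unsigned short reset_code_convert(unsigned char code)
{
    static const unsigned short error_code[] = {
        [0] = 0,          /* normal termination */
        [2] = ECONNRESET, /* connection aborted */
    };

    return code < sizeof(error_code) / sizeof(error_code[0])
               ? error_code[code] : 0;
}
```

The bounds check is the other half of the idiom: a sparse designated-initializer table is only safe to index after clamping the input to the array length.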
+5 -4
net/ipv4/fib_frontend.c
··· 1334 1334 1335 1335 void __init ip_fib_init(void) 1336 1336 { 1337 - rtnl_register(PF_INET, RTM_NEWROUTE, inet_rtm_newroute, NULL, NULL); 1338 - rtnl_register(PF_INET, RTM_DELROUTE, inet_rtm_delroute, NULL, NULL); 1339 - rtnl_register(PF_INET, RTM_GETROUTE, NULL, inet_dump_fib, NULL); 1337 + fib_trie_init(); 1340 1338 1341 1339 register_pernet_subsys(&fib_net_ops); 1340 + 1342 1341 register_netdevice_notifier(&fib_netdev_notifier); 1343 1342 register_inetaddr_notifier(&fib_inetaddr_notifier); 1344 1343 1345 - fib_trie_init(); 1344 + rtnl_register(PF_INET, RTM_NEWROUTE, inet_rtm_newroute, NULL, NULL); 1345 + rtnl_register(PF_INET, RTM_DELROUTE, inet_rtm_delroute, NULL, NULL); 1346 + rtnl_register(PF_INET, RTM_GETROUTE, NULL, inet_dump_fib, NULL); 1346 1347 }
+4 -4
net/ipv4/ip_output.c
··· 599 599 hlen = iph->ihl * 4; 600 600 mtu = mtu - hlen; /* Size of data space */ 601 601 IPCB(skb)->flags |= IPSKB_FRAG_COMPLETE; 602 + ll_rs = LL_RESERVED_SPACE(rt->dst.dev); 602 603 603 604 /* When frag_list is given, use it. First, check its validity: 604 605 * some transformers could create wrong frag_list or break existing ··· 615 614 if (first_len - hlen > mtu || 616 615 ((first_len - hlen) & 7) || 617 616 ip_is_fragment(iph) || 618 - skb_cloned(skb)) 617 + skb_cloned(skb) || 618 + skb_headroom(skb) < ll_rs) 619 619 goto slow_path; 620 620 621 621 skb_walk_frags(skb, frag) { 622 622 /* Correct geometry. */ 623 623 if (frag->len > mtu || 624 624 ((frag->len & 7) && frag->next) || 625 - skb_headroom(frag) < hlen) 625 + skb_headroom(frag) < hlen + ll_rs) 626 626 goto slow_path_clean; 627 627 628 628 /* Partially cloned skb? */ ··· 712 710 713 711 left = skb->len - hlen; /* Space per frame */ 714 712 ptr = hlen; /* Where to start from */ 715 - 716 - ll_rs = LL_RESERVED_SPACE(rt->dst.dev); 717 713 718 714 /* 719 715 * Fragment the datagram.
+1 -2
net/ipv4/netfilter/nf_tables_arp.c
··· 72 72 .family = NFPROTO_ARP, 73 73 .owner = THIS_MODULE, 74 74 .hook_mask = (1 << NF_ARP_IN) | 75 - (1 << NF_ARP_OUT) | 76 - (1 << NF_ARP_FORWARD), 75 + (1 << NF_ARP_OUT), 77 76 }; 78 77 79 78 static int __init nf_tables_arp_init(void)
+1
net/ipv4/syncookies.c
··· 335 335 treq->rcv_isn = ntohl(th->seq) - 1; 336 336 treq->snt_isn = cookie; 337 337 treq->ts_off = 0; 338 + treq->txhash = net_tx_rndhash(); 338 339 req->mss = mss; 339 340 ireq->ir_num = ntohs(th->dest); 340 341 ireq->ir_rmt_port = th->source;
+38 -11
net/ipv4/tcp_bbr.c
··· 112 112 cwnd_gain:10, /* current gain for setting cwnd */ 113 113 full_bw_cnt:3, /* number of rounds without large bw gains */ 114 114 cycle_idx:3, /* current index in pacing_gain cycle array */ 115 - unused_b:6; 115 + has_seen_rtt:1, /* have we seen an RTT sample yet? */ 116 + unused_b:5; 116 117 u32 prior_cwnd; /* prior cwnd upon entering loss recovery */ 117 118 u32 full_bw; /* recent bw, to estimate if pipe is full */ 118 119 }; ··· 212 211 return rate >> BW_SCALE; 213 212 } 214 213 214 + /* Convert a BBR bw and gain factor to a pacing rate in bytes per second. */ 215 + static u32 bbr_bw_to_pacing_rate(struct sock *sk, u32 bw, int gain) 216 + { 217 + u64 rate = bw; 218 + 219 + rate = bbr_rate_bytes_per_sec(sk, rate, gain); 220 + rate = min_t(u64, rate, sk->sk_max_pacing_rate); 221 + return rate; 222 + } 223 + 224 + /* Initialize pacing rate to: high_gain * init_cwnd / RTT. */ 225 + static void bbr_init_pacing_rate_from_rtt(struct sock *sk) 226 + { 227 + struct tcp_sock *tp = tcp_sk(sk); 228 + struct bbr *bbr = inet_csk_ca(sk); 229 + u64 bw; 230 + u32 rtt_us; 231 + 232 + if (tp->srtt_us) { /* any RTT sample yet? */ 233 + rtt_us = max(tp->srtt_us >> 3, 1U); 234 + bbr->has_seen_rtt = 1; 235 + } else { /* no RTT sample yet */ 236 + rtt_us = USEC_PER_MSEC; /* use nominal default RTT */ 237 + } 238 + bw = (u64)tp->snd_cwnd * BW_UNIT; 239 + do_div(bw, rtt_us); 240 + sk->sk_pacing_rate = bbr_bw_to_pacing_rate(sk, bw, bbr_high_gain); 241 + } 242 + 215 243 /* Pace using current bw estimate and a gain factor. 
In order to help drive the 216 244 * network toward lower queues while maintaining high utilization and low 217 245 * latency, the average pacing rate aims to be slightly (~1%) lower than the ··· 250 220 */ 251 221 static void bbr_set_pacing_rate(struct sock *sk, u32 bw, int gain) 252 222 { 223 + struct tcp_sock *tp = tcp_sk(sk); 253 224 struct bbr *bbr = inet_csk_ca(sk); 254 - u64 rate = bw; 225 + u32 rate = bbr_bw_to_pacing_rate(sk, bw, gain); 255 226 256 - rate = bbr_rate_bytes_per_sec(sk, rate, gain); 257 - rate = min_t(u64, rate, sk->sk_max_pacing_rate); 258 - if (bbr->mode != BBR_STARTUP || rate > sk->sk_pacing_rate) 227 + if (unlikely(!bbr->has_seen_rtt && tp->srtt_us)) 228 + bbr_init_pacing_rate_from_rtt(sk); 229 + if (bbr_full_bw_reached(sk) || rate > sk->sk_pacing_rate) 259 230 sk->sk_pacing_rate = rate; 260 231 } 261 232 ··· 829 798 { 830 799 struct tcp_sock *tp = tcp_sk(sk); 831 800 struct bbr *bbr = inet_csk_ca(sk); 832 - u64 bw; 833 801 834 802 bbr->prior_cwnd = 0; 835 803 bbr->tso_segs_goal = 0; /* default segs per skb until first ACK */ ··· 844 814 845 815 minmax_reset(&bbr->bw, bbr->rtt_cnt, 0); /* init max bw to 0 */ 846 816 847 - /* Initialize pacing rate to: high_gain * init_cwnd / RTT. */ 848 - bw = (u64)tp->snd_cwnd * BW_UNIT; 849 - do_div(bw, (tp->srtt_us >> 3) ? : USEC_PER_MSEC); 850 - sk->sk_pacing_rate = 0; /* force an update of sk_pacing_rate */ 851 - bbr_set_pacing_rate(sk, bw, bbr_high_gain); 817 + bbr->has_seen_rtt = 0; 818 + bbr_init_pacing_rate_from_rtt(sk); 852 819 853 820 bbr->restore_cwnd = 0; 854 821 bbr->round_start = 0;
+11 -2
net/ipv4/udp.c
··· 1388 1388 unlock_sock_fast(sk, slow); 1389 1389 } 1390 1390 1391 + /* we cleared the head states previously only if the skb lacks any IP 1392 + * options, see __udp_queue_rcv_skb(). 1393 + */ 1394 + if (unlikely(IPCB(skb)->opt.optlen > 0)) 1395 + skb_release_head_state(skb); 1391 1396 consume_stateless_skb(skb); 1392 1397 } 1393 1398 EXPORT_SYMBOL_GPL(skb_consume_udp); ··· 1784 1779 sk_mark_napi_id_once(sk, skb); 1785 1780 } 1786 1781 1787 - /* clear all pending head states while they are hot in the cache */ 1788 - skb_release_head_state(skb); 1782 + /* At recvmsg() time we need skb->dst to process IP options-related 1783 + * cmsg, elsewhere can we clear all pending head states while they are 1784 + * hot in the cache 1785 + */ 1786 + if (likely(IPCB(skb)->opt.optlen == 0)) 1787 + skb_release_head_state(skb); 1789 1788 1790 1789 rc = __udp_enqueue_schedule_skb(sk, skb); 1791 1790 if (rc < 0) {
+6 -2
net/ipv6/output_core.c
··· 78 78 79 79 int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr) 80 80 { 81 - u16 offset = sizeof(struct ipv6hdr); 81 + unsigned int offset = sizeof(struct ipv6hdr); 82 82 unsigned int packet_len = skb_tail_pointer(skb) - 83 83 skb_network_header(skb); 84 84 int found_rhdr = 0; ··· 86 86 87 87 while (offset <= packet_len) { 88 88 struct ipv6_opt_hdr *exthdr; 89 + unsigned int len; 89 90 90 91 switch (**nexthdr) { 91 92 ··· 112 111 113 112 exthdr = (struct ipv6_opt_hdr *)(skb_network_header(skb) + 114 113 offset); 115 - offset += ipv6_optlen(exthdr); 114 + len = ipv6_optlen(exthdr); 115 + if (len + offset >= IPV6_MAXPLEN) 116 + return -EINVAL; 117 + offset += len; 116 118 *nexthdr = &exthdr->nexthdr; 117 119 } 118 120
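The hunk above widens `offset` from `u16` to `unsigned int` and rejects the walk once `len + offset` would pass the maximum IPv6 payload, because a crafted chain of extension headers could otherwise wrap the 16-bit offset and loop over the same bytes. A simplified, self-contained sketch of the bounded walk (the two-byte "type, length" pseudo-header format below is illustrative, not real IPv6 parsing):

```c
#include <assert.h>

#define MAXPLEN 65535  /* upper bound a 16-bit offset could not exceed */

/* Walk TLV-style headers with a wide offset and an explicit overflow
 * guard. pkt[offset] is a type byte (0 = stop), pkt[offset + 1] encodes
 * the header length in 8-byte units, as ipv6_optlen() does. */
static int find_payload_offset(const unsigned char *pkt, unsigned int pkt_len)
{
    unsigned int offset = 0;

    while (offset + 2 <= pkt_len) {
        unsigned int len = ((unsigned int)pkt[offset + 1] + 1) * 8;

        if (offset + len >= MAXPLEN)  /* would have wrapped a u16 */
            return -1;
        if (pkt[offset] == 0)         /* terminal header reached */
            return (int)offset;
        offset += len;
    }
    return -1;                        /* ran past the buffer: malformed */
}
```

The key point is that the accumulator is wider than anything an attacker controls, and the explicit comparison fails closed instead of wrapping.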
+1
net/ipv6/syncookies.c
··· 216 216 treq->rcv_isn = ntohl(th->seq) - 1; 217 217 treq->snt_isn = cookie; 218 218 treq->ts_off = 0; 219 + treq->txhash = net_tx_rndhash(); 219 220 220 221 /* 221 222 * We need to lookup the dst_entry to get the correct window size.
+2 -145
net/netfilter/core.c
··· 227 227 } 228 228 EXPORT_SYMBOL(nf_unregister_net_hooks); 229 229 230 - static LIST_HEAD(nf_hook_list); 231 - 232 - static int _nf_register_hook(struct nf_hook_ops *reg) 233 - { 234 - struct net *net, *last; 235 - int ret; 236 - 237 - for_each_net(net) { 238 - ret = nf_register_net_hook(net, reg); 239 - if (ret && ret != -ENOENT) 240 - goto rollback; 241 - } 242 - list_add_tail(&reg->list, &nf_hook_list); 243 - 244 - return 0; 245 - rollback: 246 - last = net; 247 - for_each_net(net) { 248 - if (net == last) 249 - break; 250 - nf_unregister_net_hook(net, reg); 251 - } 252 - return ret; 253 - } 254 - 255 - int nf_register_hook(struct nf_hook_ops *reg) 256 - { 257 - int ret; 258 - 259 - rtnl_lock(); 260 - ret = _nf_register_hook(reg); 261 - rtnl_unlock(); 262 - 263 - return ret; 264 - } 265 - EXPORT_SYMBOL(nf_register_hook); 266 - 267 - static void _nf_unregister_hook(struct nf_hook_ops *reg) 268 - { 269 - struct net *net; 270 - 271 - list_del(&reg->list); 272 - for_each_net(net) 273 - nf_unregister_net_hook(net, reg); 274 - } 275 - 276 - void nf_unregister_hook(struct nf_hook_ops *reg) 277 - { 278 - rtnl_lock(); 279 - _nf_unregister_hook(reg); 280 - rtnl_unlock(); 281 - } 282 - EXPORT_SYMBOL(nf_unregister_hook); 283 - 284 - int nf_register_hooks(struct nf_hook_ops *reg, unsigned int n) 285 - { 286 - unsigned int i; 287 - int err = 0; 288 - 289 - for (i = 0; i < n; i++) { 290 - err = nf_register_hook(&reg[i]); 291 - if (err) 292 - goto err; 293 - } 294 - return err; 295 - 296 - err: 297 - if (i > 0) 298 - nf_unregister_hooks(reg, i); 299 - return err; 300 - } 301 - EXPORT_SYMBOL(nf_register_hooks); 302 - 303 - /* Caller MUST take rtnl_lock() */ 304 - int _nf_register_hooks(struct nf_hook_ops *reg, unsigned int n) 305 - { 306 - unsigned int i; 307 - int err = 0; 308 - 309 - for (i = 0; i < n; i++) { 310 - err = _nf_register_hook(&reg[i]); 311 - if (err) 312 - goto err; 313 - } 314 - return err; 315 - 316 - err: 317 - if (i > 0) 318 - _nf_unregister_hooks(reg, i); 
319 - return err; 320 - } 321 - EXPORT_SYMBOL(_nf_register_hooks); 322 - 323 - void nf_unregister_hooks(struct nf_hook_ops *reg, unsigned int n) 324 - { 325 - while (n-- > 0) 326 - nf_unregister_hook(&reg[n]); 327 - } 328 - EXPORT_SYMBOL(nf_unregister_hooks); 329 - 330 - /* Caller MUST take rtnl_lock */ 331 - void _nf_unregister_hooks(struct nf_hook_ops *reg, unsigned int n) 332 - { 333 - while (n-- > 0) 334 - _nf_unregister_hook(&reg[n]); 335 - } 336 - EXPORT_SYMBOL(_nf_unregister_hooks); 337 - 338 230 /* Returns 1 if okfn() needs to be executed by the caller, 339 231 * -EPERM for NF_DROP, 0 otherwise. Caller must hold rcu_read_lock. */ 340 232 int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state, ··· 342 450 EXPORT_SYMBOL(nf_nat_decode_session_hook); 343 451 #endif 344 452 345 - static int nf_register_hook_list(struct net *net) 346 - { 347 - struct nf_hook_ops *elem; 348 - int ret; 349 - 350 - rtnl_lock(); 351 - list_for_each_entry(elem, &nf_hook_list, list) { 352 - ret = nf_register_net_hook(net, elem); 353 - if (ret && ret != -ENOENT) 354 - goto out_undo; 355 - } 356 - rtnl_unlock(); 357 - return 0; 358 - 359 - out_undo: 360 - list_for_each_entry_continue_reverse(elem, &nf_hook_list, list) 361 - nf_unregister_net_hook(net, elem); 362 - rtnl_unlock(); 363 - return ret; 364 - } 365 - 366 - static void nf_unregister_hook_list(struct net *net) 367 - { 368 - struct nf_hook_ops *elem; 369 - 370 - rtnl_lock(); 371 - list_for_each_entry(elem, &nf_hook_list, list) 372 - nf_unregister_net_hook(net, elem); 373 - rtnl_unlock(); 374 - } 375 - 376 453 static int __net_init netfilter_net_init(struct net *net) 377 454 { 378 - int i, h, ret; 455 + int i, h; 379 456 380 457 for (i = 0; i < ARRAY_SIZE(net->nf.hooks); i++) { 381 458 for (h = 0; h < NF_MAX_HOOKS; h++) ··· 361 500 return -ENOMEM; 362 501 } 363 502 #endif 364 - ret = nf_register_hook_list(net); 365 - if (ret) 366 - remove_proc_entry("netfilter", net->proc_net); 367 503 368 - return ret; 504 + return 0; 
369 505 } 370 506 371 507 static void __net_exit netfilter_net_exit(struct net *net) 372 508 { 373 - nf_unregister_hook_list(net); 374 509 remove_proc_entry("netfilter", net->proc_net); 375 510 } 376 511
+1 -1
net/netfilter/nf_conntrack_expect.c
··· 422 422 h = nf_ct_expect_dst_hash(net, &expect->tuple); 423 423 hlist_for_each_entry_safe(i, next, &nf_ct_expect_hash[h], hnode) { 424 424 if (expect_matches(i, expect)) { 425 - if (nf_ct_remove_expect(expect)) 425 + if (nf_ct_remove_expect(i)) 426 426 break; 427 427 } else if (expect_clash(i, expect)) { 428 428 ret = -EBUSY;
+9 -8
net/netfilter/nf_nat_core.c
··· 222 222 .tuple = tuple, 223 223 .zone = zone 224 224 }; 225 - struct rhlist_head *hl; 225 + struct rhlist_head *hl, *h; 226 226 227 227 hl = rhltable_lookup(&nf_nat_bysource_table, &key, 228 228 nf_nat_bysource_params); 229 - if (!hl) 230 - return 0; 231 229 232 - ct = container_of(hl, typeof(*ct), nat_bysource); 230 + rhl_for_each_entry_rcu(ct, h, hl, nat_bysource) { 231 + nf_ct_invert_tuplepr(result, 232 + &ct->tuplehash[IP_CT_DIR_REPLY].tuple); 233 + result->dst = tuple->dst; 233 234 234 - nf_ct_invert_tuplepr(result, 235 - &ct->tuplehash[IP_CT_DIR_REPLY].tuple); 236 - result->dst = tuple->dst; 235 + if (in_range(l3proto, l4proto, result, range)) 236 + return 1; 237 + } 237 238 238 - return in_range(l3proto, l4proto, result, range); 239 + return 0; 239 240 } 240 241 241 242 /* For [FUTURE] fragmentation handling, we want the least-used
+3 -3
net/netfilter/nfnetlink.c
··· 472 472 if (msglen > skb->len) 473 473 msglen = skb->len; 474 474 475 - if (nlh->nlmsg_len < NLMSG_HDRLEN || 476 - skb->len < NLMSG_HDRLEN + sizeof(struct nfgenmsg)) 475 + if (skb->len < NLMSG_HDRLEN + sizeof(struct nfgenmsg)) 477 476 return; 478 477 479 478 err = nla_parse(cda, NFNL_BATCH_MAX, attr, attrlen, nfnl_batch_policy, ··· 499 500 { 500 501 struct nlmsghdr *nlh = nlmsg_hdr(skb); 501 502 502 - if (nlh->nlmsg_len < NLMSG_HDRLEN || 503 + if (skb->len < NLMSG_HDRLEN || 504 + nlh->nlmsg_len < NLMSG_HDRLEN || 503 505 skb->len < nlh->nlmsg_len) 504 506 return; 505 507
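The two hunks above reorder the nfnetlink sanity checks so that `skb->len` is verified to cover a full header *before* `nlh->nlmsg_len` is dereferenced, and so that the self-reported length is checked against both the header size and the buffer. A minimal sketch of that validation order (the 16-byte constant matches netlink's aligned header size; the predicate name is made up):

```c
#include <assert.h>

#define NLMSG_HDRLEN_ISH 16u  /* aligned netlink header size */

/* Validate a netlink message before trusting its self-reported length:
 * the buffer must hold at least a full header, the claimed length must
 * cover the header itself, and it must fit inside the buffer. */
static int nlmsg_ok_ish(unsigned int buf_len, unsigned int nlmsg_len)
{
    return buf_len >= NLMSG_HDRLEN_ISH &&
           nlmsg_len >= NLMSG_HDRLEN_ISH &&
           nlmsg_len <= buf_len;
}
```

Checking the buffer length first is what makes reading `nlmsg_len` safe at all; the other two checks then bound every later access that is derived from it.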
+36 -15
net/openvswitch/conntrack.c
··· 629 629 return ct; 630 630 } 631 631 632 + static 633 + struct nf_conn *ovs_ct_executed(struct net *net, 634 + const struct sw_flow_key *key, 635 + const struct ovs_conntrack_info *info, 636 + struct sk_buff *skb, 637 + bool *ct_executed) 638 + { 639 + struct nf_conn *ct = NULL; 640 + 641 + /* If no ct, check if we have evidence that an existing conntrack entry 642 + * might be found for this skb. This happens when we lose a skb->_nfct 643 + * due to an upcall, or if the direction is being forced. If the 644 + * connection was not confirmed, it is not cached and needs to be run 645 + * through conntrack again. 646 + */ 647 + *ct_executed = (key->ct_state & OVS_CS_F_TRACKED) && 648 + !(key->ct_state & OVS_CS_F_INVALID) && 649 + (key->ct_zone == info->zone.id); 650 + 651 + if (*ct_executed || (!key->ct_state && info->force)) { 652 + ct = ovs_ct_find_existing(net, &info->zone, info->family, skb, 653 + !!(key->ct_state & 654 + OVS_CS_F_NAT_MASK)); 655 + } 656 + 657 + return ct; 658 + } 659 + 632 660 /* Determine whether skb->_nfct is equal to the result of conntrack lookup. */ 633 661 static bool skb_nfct_cached(struct net *net, 634 662 const struct sw_flow_key *key, ··· 665 637 { 666 638 enum ip_conntrack_info ctinfo; 667 639 struct nf_conn *ct; 640 + bool ct_executed = true; 668 641 669 642 ct = nf_ct_get(skb, &ctinfo); 670 - /* If no ct, check if we have evidence that an existing conntrack entry 671 - * might be found for this skb. This happens when we lose a skb->_nfct 672 - * due to an upcall. If the connection was not confirmed, it is not 673 - * cached and needs to be run through conntrack again. 
674 - */ 675 - if (!ct && key->ct_state & OVS_CS_F_TRACKED && 676 - !(key->ct_state & OVS_CS_F_INVALID) && 677 - key->ct_zone == info->zone.id) { 678 - ct = ovs_ct_find_existing(net, &info->zone, info->family, skb, 679 - !!(key->ct_state 680 - & OVS_CS_F_NAT_MASK)); 681 - if (ct) 682 - nf_ct_get(skb, &ctinfo); 683 - } 684 643 if (!ct) 644 + ct = ovs_ct_executed(net, key, info, skb, &ct_executed); 645 + 646 + if (ct) 647 + nf_ct_get(skb, &ctinfo); 648 + else 685 649 return false; 650 + 686 651 if (!net_eq(net, read_pnet(&ct->ct_net))) 687 652 return false; 688 653 if (!nf_ct_zone_equal_any(info->ct, nf_ct_zone(ct))) ··· 700 679 return false; 701 680 } 702 681 703 - return true; 682 + return ct_executed; 704 683 } 705 684 706 685 #ifdef CONFIG_NF_NAT_NEEDED
+2 -4
net/packet/af_packet.c
··· 214 214 static void prb_fill_vlan_info(struct tpacket_kbdq_core *, 215 215 struct tpacket3_hdr *); 216 216 static void packet_flush_mclist(struct sock *sk); 217 + static void packet_pick_tx_queue(struct net_device *dev, struct sk_buff *skb); 217 218 218 219 struct packet_skb_cb { 219 220 union { ··· 261 260 if (skb != orig_skb) 262 261 goto drop; 263 262 263 + packet_pick_tx_queue(dev, skb); 264 264 txq = skb_get_tx_queue(dev, skb); 265 265 266 266 local_bh_disable(); ··· 2749 2747 goto tpacket_error; 2750 2748 } 2751 2749 2752 - packet_pick_tx_queue(dev, skb); 2753 - 2754 2750 skb->destructor = tpacket_destruct_skb; 2755 2751 __packet_set_status(po, ph, TP_STATUS_SENDING); 2756 2752 packet_inc_pending(&po->tx_ring); ··· 2930 2930 skb->dev = dev; 2931 2931 skb->priority = sk->sk_priority; 2932 2932 skb->mark = sockc.mark; 2933 - 2934 - packet_pick_tx_queue(dev, skb); 2935 2933 2936 2934 if (po->has_vnet_hdr) { 2937 2935 err = virtio_net_hdr_to_skb(skb, &vnet_hdr, vio_le());
+3 -3
net/rds/send.c
··· 170 170 * The acquire_in_xmit() check above ensures that only one 171 171 * caller can increment c_send_gen at any time. 172 172 */ 173 - cp->cp_send_gen++; 174 - send_gen = cp->cp_send_gen; 173 + send_gen = READ_ONCE(cp->cp_send_gen) + 1; 174 + WRITE_ONCE(cp->cp_send_gen, send_gen); 175 175 176 176 /* 177 177 * rds_conn_shutdown() sets the conn state and then tests RDS_IN_XMIT, ··· 431 431 smp_mb(); 432 432 if ((test_bit(0, &conn->c_map_queued) || 433 433 !list_empty(&cp->cp_send_queue)) && 434 - send_gen == cp->cp_send_gen) { 434 + send_gen == READ_ONCE(cp->cp_send_gen)) { 435 435 rds_stats_inc(s_send_lock_queue_raced); 436 436 if (batch_count < send_batch_count) 437 437 goto restart;
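The hunk above replaces a plain `cp->cp_send_gen++` with a `READ_ONCE()`/`WRITE_ONCE()` pair so that the generation bump is a single, non-torn load and store that the racing reader at the bottom of the function can reliably observe. A sketch of the same idea using C11 atomics as the userspace analogue (single-writer counter; names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

/* Bump a generation counter via one explicit load and one explicit
 * store, the C11 analogue of the kernel's READ_ONCE()/WRITE_ONCE():
 * the compiler may not tear, fuse, or re-issue these accesses. */
static unsigned int bump_send_gen(_Atomic unsigned int *gen)
{
    unsigned int next =
        atomic_load_explicit(gen, memory_order_relaxed) + 1;

    atomic_store_explicit(gen, next, memory_order_relaxed);
    return next;
}
```

With a plain non-atomic increment the compiler is free to keep the value in a register or split the update, which is exactly the "updates cannot be observed" failure the RDS fix addresses.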
+2 -2
net/sched/act_api.c
··· 835 835 } 836 836 837 837 static int 838 - act_get_notify(struct net *net, u32 portid, struct nlmsghdr *n, 838 + tcf_get_notify(struct net *net, u32 portid, struct nlmsghdr *n, 839 839 struct list_head *actions, int event) 840 840 { 841 841 struct sk_buff *skb; ··· 1018 1018 } 1019 1019 1020 1020 if (event == RTM_GETACTION) 1021 - ret = act_get_notify(net, portid, n, &actions, event); 1021 + ret = tcf_get_notify(net, portid, n, &actions, event); 1022 1022 else { /* delete */ 1023 1023 ret = tcf_del_notify(net, n, &actions, portid); 1024 1024 if (ret)
+2 -2
net/sctp/sm_make_chunk.c
··· 228 228 sctp_adaptation_ind_param_t aiparam; 229 229 sctp_supported_ext_param_t ext_param; 230 230 int num_ext = 0; 231 - __u8 extensions[3]; 231 + __u8 extensions[4]; 232 232 struct sctp_paramhdr *auth_chunks = NULL, 233 233 *auth_hmacs = NULL; 234 234 ··· 396 396 sctp_adaptation_ind_param_t aiparam; 397 397 sctp_supported_ext_param_t ext_param; 398 398 int num_ext = 0; 399 - __u8 extensions[3]; 399 + __u8 extensions[4]; 400 400 struct sctp_paramhdr *auth_chunks = NULL, 401 401 *auth_hmacs = NULL, 402 402 *auth_random = NULL;
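The hunk above is a straightforward off-by-one: up to four extension chunk types can be appended while building the INIT/INIT-ACK parameters, but the scratch array only held three, so the fourth append wrote past the end. A generic sketch of the pattern, sizing the scratch buffer to the number of conditional appends (the codes below are illustrative, not real SCTP chunk types):

```c
#include <assert.h>

#define MAX_EXTENSIONS 4  /* one slot per conditional append below */

/* Append each optionally-supported extension code, asserting the count
 * never outgrows the scratch array it is written into. */
static int collect_extensions(unsigned char ext[MAX_EXTENSIONS],
                              int fwd_tsn, int auth, int asconf, int reconf)
{
    int n = 0;

    if (fwd_tsn) ext[n++] = 0xC0;
    if (auth)    ext[n++] = 0x0F;
    if (asconf)  ext[n++] = 0xC1;
    if (reconf)  ext[n++] = 0x82;
    assert(n <= MAX_EXTENSIONS);
    return n;
}
```

Tying the array length to a named constant next to the appends makes the next "add one more extension" change fail loudly instead of silently overflowing.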
+2 -2
tools/lib/bpf/bpf.c
··· 120 120 int bpf_verify_program(enum bpf_prog_type type, const struct bpf_insn *insns, 121 121 size_t insns_cnt, int strict_alignment, 122 122 const char *license, __u32 kern_version, 123 - char *log_buf, size_t log_buf_sz) 123 + char *log_buf, size_t log_buf_sz, int log_level) 124 124 { 125 125 union bpf_attr attr; 126 126 ··· 131 131 attr.license = ptr_to_u64(license); 132 132 attr.log_buf = ptr_to_u64(log_buf); 133 133 attr.log_size = log_buf_sz; 134 - attr.log_level = 2; 134 + attr.log_level = log_level; 135 135 log_buf[0] = 0; 136 136 attr.kern_version = kern_version; 137 137 attr.prog_flags = strict_alignment ? BPF_F_STRICT_ALIGNMENT : 0;
+1 -1
tools/lib/bpf/bpf.h
··· 38 38 int bpf_verify_program(enum bpf_prog_type type, const struct bpf_insn *insns, 39 39 size_t insns_cnt, int strict_alignment, 40 40 const char *license, __u32 kern_version, 41 - char *log_buf, size_t log_buf_sz); 41 + char *log_buf, size_t log_buf_sz, int log_level); 42 42 43 43 int bpf_map_update_elem(int fd, const void *key, const void *value, 44 44 __u64 flags);
+1 -1
tools/testing/selftests/bpf/test_align.c
··· 380 380 prog_len = probe_filter_length(prog); 381 381 fd_prog = bpf_verify_program(prog_type ? : BPF_PROG_TYPE_SOCKET_FILTER, 382 382 prog, prog_len, 1, "GPL", 0, 383 - bpf_vlog, sizeof(bpf_vlog)); 383 + bpf_vlog, sizeof(bpf_vlog), 2); 384 384 if (fd_prog < 0) { 385 385 printf("Failed to load program.\n"); 386 386 printf("%s", bpf_vlog);
+475 -5
tools/testing/selftests/bpf/test_verifier.c
··· 4969 4969 BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, 4970 4970 sizeof(struct test_val), 4), 4971 4971 BPF_MOV64_IMM(BPF_REG_4, 0), 4972 - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), 4972 + BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), 4973 4973 BPF_MOV64_IMM(BPF_REG_3, 0), 4974 4974 BPF_EMIT_CALL(BPF_FUNC_probe_read), 4975 4975 BPF_MOV64_IMM(BPF_REG_0, 0), ··· 4995 4995 BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, 4996 4996 sizeof(struct test_val) + 1, 4), 4997 4997 BPF_MOV64_IMM(BPF_REG_4, 0), 4998 - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), 4998 + BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), 4999 4999 BPF_MOV64_IMM(BPF_REG_3, 0), 5000 5000 BPF_EMIT_CALL(BPF_FUNC_probe_read), 5001 5001 BPF_MOV64_IMM(BPF_REG_0, 0), ··· 5023 5023 BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, 5024 5024 sizeof(struct test_val) - 20, 4), 5025 5025 BPF_MOV64_IMM(BPF_REG_4, 0), 5026 - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), 5026 + BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), 5027 5027 BPF_MOV64_IMM(BPF_REG_3, 0), 5028 5028 BPF_EMIT_CALL(BPF_FUNC_probe_read), 5029 5029 BPF_MOV64_IMM(BPF_REG_0, 0), ··· 5050 5050 BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, 5051 5051 sizeof(struct test_val) - 19, 4), 5052 5052 BPF_MOV64_IMM(BPF_REG_4, 0), 5053 - BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2), 5053 + BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2), 5054 5054 BPF_MOV64_IMM(BPF_REG_3, 0), 5055 5055 BPF_EMIT_CALL(BPF_FUNC_probe_read), 5056 5056 BPF_MOV64_IMM(BPF_REG_0, 0), ··· 5510 5510 .errstr = "invalid bpf_context access", 5511 5511 .prog_type = BPF_PROG_TYPE_LWT_IN, 5512 5512 }, 5513 + { 5514 + "bounds checks mixing signed and unsigned, positive bounds", 5515 + .insns = { 5516 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 5517 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 5518 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 5519 + BPF_LD_MAP_FD(BPF_REG_1, 0), 5520 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 5521 + BPF_FUNC_map_lookup_elem), 5522 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), 5523 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, 
-8), 5524 + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), 5525 + BPF_MOV64_IMM(BPF_REG_2, 2), 5526 + BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 3), 5527 + BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 4, 2), 5528 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), 5529 + BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), 5530 + BPF_MOV64_IMM(BPF_REG_0, 0), 5531 + BPF_EXIT_INSN(), 5532 + }, 5533 + .fixup_map1 = { 3 }, 5534 + .errstr_unpriv = "R0 pointer arithmetic prohibited", 5535 + .errstr = "R0 min value is negative", 5536 + .result = REJECT, 5537 + .result_unpriv = REJECT, 5538 + }, 5539 + { 5540 + "bounds checks mixing signed and unsigned", 5541 + .insns = { 5542 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 5543 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 5544 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 5545 + BPF_LD_MAP_FD(BPF_REG_1, 0), 5546 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 5547 + BPF_FUNC_map_lookup_elem), 5548 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), 5549 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8), 5550 + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), 5551 + BPF_MOV64_IMM(BPF_REG_2, -1), 5552 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3), 5553 + BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), 5554 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), 5555 + BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), 5556 + BPF_MOV64_IMM(BPF_REG_0, 0), 5557 + BPF_EXIT_INSN(), 5558 + }, 5559 + .fixup_map1 = { 3 }, 5560 + .errstr_unpriv = "R0 pointer arithmetic prohibited", 5561 + .errstr = "R0 min value is negative", 5562 + .result = REJECT, 5563 + .result_unpriv = REJECT, 5564 + }, 5565 + { 5566 + "bounds checks mixing signed and unsigned, variant 2", 5567 + .insns = { 5568 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 5569 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 5570 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 5571 + BPF_LD_MAP_FD(BPF_REG_1, 0), 5572 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 5573 + BPF_FUNC_map_lookup_elem), 5574 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), 5575 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8), 5576 + 
BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), 5577 + BPF_MOV64_IMM(BPF_REG_2, -1), 5578 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5), 5579 + BPF_MOV64_IMM(BPF_REG_8, 0), 5580 + BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_1), 5581 + BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 1, 2), 5582 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8), 5583 + BPF_ST_MEM(BPF_B, BPF_REG_8, 0, 0), 5584 + BPF_MOV64_IMM(BPF_REG_0, 0), 5585 + BPF_EXIT_INSN(), 5586 + }, 5587 + .fixup_map1 = { 3 }, 5588 + .errstr_unpriv = "R0 pointer arithmetic prohibited", 5589 + .errstr = "R8 invalid mem access 'inv'", 5590 + .result = REJECT, 5591 + .result_unpriv = REJECT, 5592 + }, 5593 + { 5594 + "bounds checks mixing signed and unsigned, variant 3", 5595 + .insns = { 5596 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 5597 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 5598 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 5599 + BPF_LD_MAP_FD(BPF_REG_1, 0), 5600 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 5601 + BPF_FUNC_map_lookup_elem), 5602 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8), 5603 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8), 5604 + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), 5605 + BPF_MOV64_IMM(BPF_REG_2, -1), 5606 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 4), 5607 + BPF_MOV64_REG(BPF_REG_8, BPF_REG_1), 5608 + BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 1, 2), 5609 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8), 5610 + BPF_ST_MEM(BPF_B, BPF_REG_8, 0, 0), 5611 + BPF_MOV64_IMM(BPF_REG_0, 0), 5612 + BPF_EXIT_INSN(), 5613 + }, 5614 + .fixup_map1 = { 3 }, 5615 + .errstr_unpriv = "R0 pointer arithmetic prohibited", 5616 + .errstr = "R8 invalid mem access 'inv'", 5617 + .result = REJECT, 5618 + .result_unpriv = REJECT, 5619 + }, 5620 + { 5621 + "bounds checks mixing signed and unsigned, variant 4", 5622 + .insns = { 5623 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 5624 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 5625 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 5626 + BPF_LD_MAP_FD(BPF_REG_1, 0), 5627 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 
0, 5628 + BPF_FUNC_map_lookup_elem), 5629 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7), 5630 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8), 5631 + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), 5632 + BPF_MOV64_IMM(BPF_REG_2, 1), 5633 + BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2), 5634 + BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2), 5635 + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), 5636 + BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), 5637 + BPF_MOV64_IMM(BPF_REG_0, 0), 5638 + BPF_EXIT_INSN(), 5639 + }, 5640 + .fixup_map1 = { 3 }, 5641 + .errstr_unpriv = "R0 pointer arithmetic prohibited", 5642 + .errstr = "R0 min value is negative", 5643 + .result = REJECT, 5644 + .result_unpriv = REJECT, 5645 + }, 5646 + { 5647 + "bounds checks mixing signed and unsigned, variant 5", 5648 + .insns = { 5649 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 5650 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 5651 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 5652 + BPF_LD_MAP_FD(BPF_REG_1, 0), 5653 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 5654 + BPF_FUNC_map_lookup_elem), 5655 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9), 5656 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8), 5657 + BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16), 5658 + BPF_MOV64_IMM(BPF_REG_2, -1), 5659 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5), 5660 + BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 4), 5661 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 4), 5662 + BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1), 5663 + BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0), 5664 + BPF_MOV64_IMM(BPF_REG_0, 0), 5665 + BPF_EXIT_INSN(), 5666 + }, 5667 + .fixup_map1 = { 3 }, 5668 + .errstr_unpriv = "R0 pointer arithmetic prohibited", 5669 + .errstr = "R0 invalid mem access", 5670 + .result = REJECT, 5671 + .result_unpriv = REJECT, 5672 + }, 5673 + { 5674 + "bounds checks mixing signed and unsigned, variant 6", 5675 + .insns = { 5676 + BPF_MOV64_IMM(BPF_REG_2, 0), 5677 + BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), 5678 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -512), 5679 + BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8), 
+			BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_6, -1),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_6, 5),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_4, 1, 4),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 1),
+			BPF_MOV64_IMM(BPF_REG_5, 0),
+			BPF_ST_MEM(BPF_H, BPF_REG_10, -512, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_skb_load_bytes),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.errstr_unpriv = "R4 min value is negative, either use unsigned",
+		.errstr = "R4 min value is negative, either use unsigned",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 7",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, 1024 * 1024 * 1024),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 8",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, 1024 * 1024 * 1024 + 1),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 9",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, -1),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 10",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_LD_IMM64(BPF_REG_2, -9223372036854775808ULL),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 11",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, 0),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 12",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, -1),
+			BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+			/* Dead branch. */
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 13",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, -6),
+			BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 14",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, 2),
+			BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+			BPF_MOV64_IMM(BPF_REG_7, 1),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, 0, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_1),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, 4, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_7),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 15",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_1,
+				    offsetof(struct __sk_buff, mark)),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, -1),
+			BPF_MOV64_IMM(BPF_REG_8, 2),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_9, 42, 6),
+			BPF_JMP_REG(BPF_JSGT, BPF_REG_8, BPF_REG_1, 3),
+			BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, -3),
+			BPF_JMP_IMM(BPF_JA, 0, 0, -7),
+		},
+		.fixup_map1 = { 4 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
+	{
+		"bounds checks mixing signed and unsigned, variant 16",
+		.insns = {
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
+			BPF_MOV64_IMM(BPF_REG_2, -6),
+			BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+			BPF_JMP_IMM(BPF_JGT, BPF_REG_0, 1, 2),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_map1 = { 3 },
+		.errstr_unpriv = "R0 pointer arithmetic prohibited",
+		.errstr = "R0 min value is negative",
+		.result = REJECT,
+		.result_unpriv = REJECT,
+	},
 };
 
 static int probe_filter_length(const struct bpf_insn *fp)
···
 
 	fd_prog = bpf_verify_program(prog_type ? : BPF_PROG_TYPE_SOCKET_FILTER,
 				     prog, prog_len, test->flags & F_LOAD_WITH_STRICT_ALIGNMENT,
-				     "GPL", 0, bpf_vlog, sizeof(bpf_vlog));
+				     "GPL", 0, bpf_vlog, sizeof(bpf_vlog), 1);
 
 	expected_ret = unpriv && test->result_unpriv != UNDEF ?
 		test->result_unpriv : test->result;