
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:
"It looks like a decent-sized set of fixes, but a lot of these are
one-liner off-by-one and similarly small changes:

1) Fix netlink header pointer to calculate bad attribute offset
reported to user. From Pablo Neira Ayuso.

2) Don't double clear PHY interrupts when ->did_interrupt is set,
from Heiner Kallweit.

3) Add missing validation of various (devlink, nl802154, fib, etc.)
attributes, from Jakub Kicinski.

4) Missing *pos increments in various netfilter seq_next ops, from
Vasily Averin.

5) Missing break in of_mdiobus_register() loop, from Dajun Jin.

6) Don't double bump tx_dropped in veth driver, from Jiang Lidong.

7) Work around FMAN erratum A050385, from Madalin Bucur.

8) Make sure ARP header is pulled early enough in bonding driver,
from Eric Dumazet.

9) Do a cond_resched() during multicast processing of ipvlan and
macvlan, from Mahesh Bandewar.

10) Don't attach cgroups to unrelated sockets when in interrupt
context, from Shakeel Butt.

11) Fix tpacket ring state management when encountering unknown GSO
types. From Willem de Bruijn.

12) Fix MDIO bus PHY resume by checking mdio_bus_phy_may_suspend()
only in the suspend context. From Heiner Kallweit"

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (112 commits)
net: systemport: fix index check to avoid an array out of bounds access
tc-testing: add ETS scheduler to tdc build configuration
net: phy: fix MDIO bus PM PHY resuming
net: hns3: clear port base VLAN when unload PF
net: hns3: fix RMW issue for VLAN filter switch
net: hns3: fix VF VLAN table entries inconsistent issue
net: hns3: fix "tc qdisc del" failed issue
taprio: Fix sending packets without dequeueing them
net: mvmdio: avoid error message for optional IRQ
net: dsa: mv88e6xxx: Add missing mask of ATU occupancy register
net: memcg: fix lockdep splat in inet_csk_accept()
s390/qeth: implement smarter resizing of the RX buffer pool
s390/qeth: refactor buffer pool code
s390/qeth: use page pointers to manage RX buffer pool
seg6: fix SRv6 L2 tunnels to use IANA-assigned protocol number
net: dsa: Don't instantiate phylink for CPU/DSA ports unless needed
net/packet: tpacket_rcv: do not increment ring index on drop
sxgbe: Fix off by one in samsung driver strncpy size arg
net: caif: Add lockdep expression to RCU traversal primitive
MAINTAINERS: remove Sathya Perla as Emulex NIC maintainer
...

+993 -370
+7
Documentation/devicetree/bindings/net/fsl-fman.txt
@@ ... @@
 		Usage: required
 		Definition: See soc/fsl/qman.txt and soc/fsl/bman.txt
 
+- fsl,erratum-a050385
+		Usage: optional
+		Value type: boolean
+		Definition: A boolean property. Indicates the presence of the
+		erratum A050385 which indicates that DMA transactions that are
+		split can result in a FMan lock.
+
 =============================================================================
 FMan MURAM Node
 
+3 -3
Documentation/networking/net_failover.rst
@@ ... @@
 ========
 
 The net_failover driver provides an automated failover mechanism via APIs
-to create and destroy a failover master netdev and mananges a primary and
+to create and destroy a failover master netdev and manages a primary and
 standby slave netdevs that get registered via the generic failover
-infrastructrure.
+infrastructure.
 
 The failover netdev acts a master device and controls 2 slave devices. The
 original paravirtual interface is registered as 'standby' slave netdev and
@@ ... @@
 =============================================
 
 net_failover enables hypervisor controlled accelerated datapath to virtio-net
-enabled VMs in a transparent manner with no/minimal guest userspace chanages.
+enabled VMs in a transparent manner with no/minimal guest userspace changes.
 
 To support this, the hypervisor needs to enable VIRTIO_NET_F_STANDBY
 feature on the virtio-net interface and assign the same MAC address to both
+1 -1
Documentation/networking/rds.txt
@@ ... @@
 	set SO_RDS_TRANSPORT on a socket for which the transport has
 	been previously attached explicitly (by SO_RDS_TRANSPORT) or
 	implicitly (via bind(2)) will return an error of EOPNOTSUPP.
-	An attempt to set SO_RDS_TRANSPPORT to RDS_TRANS_NONE will
+	An attempt to set SO_RDS_TRANSPORT to RDS_TRANS_NONE will
 	always return EINVAL.
 
 RDMA for RDS
+1 -3
MAINTAINERS
@@ ... @@
 CISCO VIC ETHERNET NIC DRIVER
 M:	Christian Benvenuti <benve@cisco.com>
 M:	Govindarajulu Varadarajan <_govind@gmx.com>
-M:	Parvi Kaustubhi <pkaustub@cisco.com>
 S:	Supported
 F:	drivers/net/ethernet/cisco/enic/
 
@@ ... @@
 F:	include/uapi/rdma/cxgb4-abi.h
 
 CXGB4VF ETHERNET DRIVER (CXGB4VF)
-M:	Casey Leedom <leedom@chelsio.com>
+M:	Vishal Kulkarni <vishal@gmail.com>
 L:	netdev@vger.kernel.org
 W:	http://www.chelsio.com
 S:	Supported
@@ ... @@
 F:	drivers/scsi/be2iscsi/
 
 Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER (be2net)
-M:	Sathya Perla <sathya.perla@broadcom.com>
 M:	Ajit Khaparde <ajit.khaparde@broadcom.com>
 M:	Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
 M:	Somnath Kotur <somnath.kotur@broadcom.com>
+2
arch/arm64/boot/dts/freescale/fsl-ls1043-post.dtsi
@@ ... @@
 };
 
 &fman0 {
+	fsl,erratum-a050385;
+
 	/* these aliases provide the FMan ports mapping */
 	enet0: ethernet@e0000 {
 	};
+1 -1
drivers/atm/nicstar.c
@@ ... @@
 #ifdef GENERAL_DEBUG
 #define PRINTK(args...) printk(args)
 #else
-#define PRINTK(args...)
+#define PRINTK(args...) do {} while (0)
 #endif /* GENERAL_DEBUG */
 
 #ifdef EXTRA_DEBUG
+10 -10
drivers/net/bonding/bond_alb.c
@@ ... @@
 };
 #pragma pack()
 
-static inline struct arp_pkt *arp_pkt(const struct sk_buff *skb)
-{
-	return (struct arp_pkt *)skb_network_header(skb);
-}
-
 /* Forward declaration */
 static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[],
 				      bool strict_match);
@@ ... @@
 	spin_unlock(&bond->mode_lock);
 }
 
-static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bond)
+static struct slave *rlb_choose_channel(struct sk_buff *skb,
+					struct bonding *bond,
+					const struct arp_pkt *arp)
 {
 	struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
-	struct arp_pkt *arp = arp_pkt(skb);
 	struct slave *assigned_slave, *curr_active_slave;
 	struct rlb_client_info *client_info;
 	u32 hash_index = 0;
@@ ... @@
  */
 static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
 {
-	struct arp_pkt *arp = arp_pkt(skb);
 	struct slave *tx_slave = NULL;
+	struct arp_pkt *arp;
+
+	if (!pskb_network_may_pull(skb, sizeof(*arp)))
+		return NULL;
+	arp = (struct arp_pkt *)skb_network_header(skb);
 
 	/* Don't modify or load balance ARPs that do not originate locally
 	 * (e.g.,arrive via a bridge).
@@ ... @@
 
 	if (arp->op_code == htons(ARPOP_REPLY)) {
 		/* the arp must be sent on the selected rx channel */
-		tx_slave = rlb_choose_channel(skb, bond);
+		tx_slave = rlb_choose_channel(skb, bond, arp);
 		if (tx_slave)
 			bond_hw_addr_copy(arp->mac_src, tx_slave->dev->dev_addr,
 					  tx_slave->dev->addr_len);
@@ ... @@
 		 * When the arp reply is received the entry will be updated
 		 * with the correct unicast address of the client.
 		 */
-		tx_slave = rlb_choose_channel(skb, bond);
+		tx_slave = rlb_choose_channel(skb, bond, arp);
 
 		/* The ARP reply packets must be delayed so that
 		 * they can cancel out the influence of the ARP request.
+1
drivers/net/can/dev.c
@@ ... @@
 		= { .len = sizeof(struct can_bittiming) },
 	[IFLA_CAN_DATA_BITTIMING_CONST]
 		= { .len = sizeof(struct can_bittiming_const) },
+	[IFLA_CAN_TERMINATION] = { .type = NLA_U16 },
 };
 
 static int can_validate(struct nlattr *tb[], struct nlattr *data[],
+2
drivers/net/dsa/mv88e6xxx/chip.c
@@ ... @@
 		goto unlock;
 	}
 
+	occupancy &= MV88E6XXX_G2_ATU_STATS_MASK;
+
 unlock:
 	mv88e6xxx_reg_unlock(chip);
 
+7 -1
drivers/net/dsa/mv88e6xxx/global2.c
@@ ... @@
 {
 	int err, irq, virq;
 
+	chip->g2_irq.masked = ~0;
+	mv88e6xxx_reg_lock(chip);
+	err = mv88e6xxx_g2_int_mask(chip, ~chip->g2_irq.masked);
+	mv88e6xxx_reg_unlock(chip);
+	if (err)
+		return err;
+
 	chip->g2_irq.domain = irq_domain_add_simple(
 		chip->dev->of_node, 16, 0, &mv88e6xxx_g2_irq_domain_ops, chip);
 	if (!chip->g2_irq.domain)
@@ ... @@
 		irq_create_mapping(chip->g2_irq.domain, irq);
 
 	chip->g2_irq.chip = mv88e6xxx_g2_irq_chip;
-	chip->g2_irq.masked = ~0;
 
 	chip->device_irq = irq_find_mapping(chip->g1_irq.domain,
 					    MV88E6XXX_G1_STS_IRQ_DEVICE);
+2 -1
drivers/net/dsa/sja1105/sja1105_main.c
@@ ... @@
 		if (!dsa_is_user_port(ds, port))
 			continue;
 
-		kthread_destroy_worker(sp->xmit_worker);
+		if (sp->xmit_worker)
+			kthread_destroy_worker(sp->xmit_worker);
 	}
 
 	sja1105_tas_teardown(ds);
+1 -1
drivers/net/ethernet/broadcom/bcmsysport.c
@@ ... @@
 		return -ENOSPC;
 
 	index = find_first_zero_bit(priv->filters, RXCHK_BRCM_TAG_MAX);
-	if (index > RXCHK_BRCM_TAG_MAX)
+	if (index >= RXCHK_BRCM_TAG_MAX)
 		return -ENOSPC;
 
 	/* Location is the classification ID, and index is the position
+2 -2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ ... @@
 	struct bnxt *bp = netdev_priv(dev);
 
 	if (netif_running(dev))
-		bnxt_close_nic(bp, false, false);
+		bnxt_close_nic(bp, true, false);
 
 	dev->mtu = new_mtu;
 	bnxt_set_ring_params(bp);
 
 	if (netif_running(dev))
-		return bnxt_open_nic(bp, false, false);
+		return bnxt_open_nic(bp, true, false);
 
 	return 0;
 }
+11 -13
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ ... @@
 	struct hwrm_nvm_install_update_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_nvm_install_update_input install = {0};
 	const struct firmware *fw;
-	int rc, hwrm_err = 0;
 	u32 item_len;
+	int rc = 0;
 	u16 index;
 
 	bnxt_hwrm_fw_set_time(bp);
@@ ... @@
 			memcpy(kmem, fw->data, fw->size);
 			modify.host_src_addr = cpu_to_le64(dma_handle);
 
-			hwrm_err = hwrm_send_message(bp, &modify,
-						     sizeof(modify),
-						     FLASH_PACKAGE_TIMEOUT);
+			rc = hwrm_send_message(bp, &modify, sizeof(modify),
+					       FLASH_PACKAGE_TIMEOUT);
 			dma_free_coherent(&bp->pdev->dev, fw->size, kmem,
 					  dma_handle);
 		}
 	}
 	release_firmware(fw);
-	if (rc || hwrm_err)
+	if (rc)
 		goto err_exit;
 
 	if ((install_type & 0xffff) == 0)
@@ ... @@
 	install.install_type = cpu_to_le32(install_type);
 
 	mutex_lock(&bp->hwrm_cmd_lock);
-	hwrm_err = _hwrm_send_message(bp, &install, sizeof(install),
-				      INSTALL_PACKAGE_TIMEOUT);
-	if (hwrm_err) {
+	rc = _hwrm_send_message(bp, &install, sizeof(install),
+				INSTALL_PACKAGE_TIMEOUT);
+	if (rc) {
 		u8 error_code = ((struct hwrm_err_output *)resp)->cmd_err;
 
 		if (resp->error_code && error_code ==
 		    NVM_INSTALL_UPDATE_CMD_ERR_CODE_FRAG_ERR) {
 			install.flags |= cpu_to_le16(
 				NVM_INSTALL_UPDATE_REQ_FLAGS_ALLOWED_TO_DEFRAG);
-			hwrm_err = _hwrm_send_message(bp, &install,
-						      sizeof(install),
-						      INSTALL_PACKAGE_TIMEOUT);
+			rc = _hwrm_send_message(bp, &install, sizeof(install),
+						INSTALL_PACKAGE_TIMEOUT);
 		}
-		if (hwrm_err)
+		if (rc)
 			goto flash_pkg_exit;
 	}
 
@@ ... @@
 flash_pkg_exit:
 	mutex_unlock(&bp->hwrm_cmd_lock);
 err_exit:
-	if (hwrm_err == -EACCES)
+	if (rc == -EACCES)
 		bnxt_print_admin_err(bp);
 	return rc;
 }
+27 -22
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ ... @@
 static int cfg_queues(struct adapter *adap)
 {
 	u32 avail_qsets, avail_eth_qsets, avail_uld_qsets;
+	u32 i, n10g = 0, qidx = 0, n1g = 0;
+	u32 ncpus = num_online_cpus();
 	u32 niqflint, neq, num_ulds;
 	struct sge *s = &adap->sge;
-	u32 i, n10g = 0, qidx = 0;
-#ifndef CONFIG_CHELSIO_T4_DCB
-	int q10g = 0;
-#endif
+	u32 q10g = 0, q1g;
 
 	/* Reduce memory usage in kdump environment, disable all offload. */
 	if (is_kdump_kernel() || (is_uld(adap) && t4_uld_mem_alloc(adap))) {
@@ ... @@
 		n10g += is_x_10g_port(&adap2pinfo(adap, i)->link_cfg);
 
 	avail_eth_qsets = min_t(u32, avail_qsets, MAX_ETH_QSETS);
+
+	/* We default to 1 queue per non-10G port and up to # of cores queues
+	 * per 10G port.
+	 */
+	if (n10g)
+		q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g;
+
+	n1g = adap->params.nports - n10g;
 #ifdef CONFIG_CHELSIO_T4_DCB
 	/* For Data Center Bridging support we need to be able to support up
 	 * to 8 Traffic Priorities; each of which will be assigned to its
 	 * own TX Queue in order to prevent Head-Of-Line Blocking.
 	 */
+	q1g = 8;
 	if (adap->params.nports * 8 > avail_eth_qsets) {
 		dev_err(adap->pdev_dev, "DCB avail_eth_qsets=%d < %d!\n",
 			avail_eth_qsets, adap->params.nports * 8);
 		return -ENOMEM;
 	}
 
-	for_each_port(adap, i) {
-		struct port_info *pi = adap2pinfo(adap, i);
+	if (adap->params.nports * ncpus < avail_eth_qsets)
+		q10g = max(8U, ncpus);
+	else
+		q10g = max(8U, q10g);
 
-		pi->first_qset = qidx;
-		pi->nqsets = is_kdump_kernel() ? 1 : 8;
-		qidx += pi->nqsets;
-	}
+	while ((q10g * n10g) > (avail_eth_qsets - n1g * q1g))
+		q10g--;
+
 #else /* !CONFIG_CHELSIO_T4_DCB */
-	/* We default to 1 queue per non-10G port and up to # of cores queues
-	 * per 10G port.
-	 */
-	if (n10g)
-		q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g;
-	if (q10g > netif_get_num_default_rss_queues())
-		q10g = netif_get_num_default_rss_queues();
-
-	if (is_kdump_kernel())
+	q1g = 1;
+	q10g = min(q10g, ncpus);
+#endif /* !CONFIG_CHELSIO_T4_DCB */
+	if (is_kdump_kernel()) {
 		q10g = 1;
+		q1g = 1;
+	}
 
 	for_each_port(adap, i) {
 		struct port_info *pi = adap2pinfo(adap, i);
 
 		pi->first_qset = qidx;
-		pi->nqsets = is_x_10g_port(&pi->link_cfg) ? q10g : 1;
+		pi->nqsets = is_x_10g_port(&pi->link_cfg) ? q10g : q1g;
 		qidx += pi->nqsets;
 	}
-#endif /* !CONFIG_CHELSIO_T4_DCB */
 
 	s->ethqsets = qidx;
 	s->max_ethqsets = qidx;	/* MSI-X may lower it later */
@@ ... @@
 	 * capped by the number of available cores.
 	 */
 	num_ulds = adap->num_uld + adap->num_ofld_uld;
-	i = min_t(u32, MAX_OFLD_QSETS, num_online_cpus());
+	i = min_t(u32, MAX_OFLD_QSETS, ncpus);
 	avail_uld_qsets = roundup(i, adap->params.nports);
 	if (avail_qsets < num_ulds * adap->params.nports) {
 		adap->params.offload = 0;
+108 -6
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ ... @@
 /* Copyright 2008 - 2016 Freescale Semiconductor Inc.
+ * Copyright 2020 NXP
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions are met:
@@ ... @@
 #define FSL_QMAN_MAX_OAL	127
 
 /* Default alignment for start of data in an Rx FD */
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+/* aligning data start to 64 avoids DMA transaction splits, unless the buffer
+ * is crossing a 4k page boundary
+ */
+#define DPAA_FD_DATA_ALIGNMENT  (fman_has_errata_a050385() ? 64 : 16)
+/* aligning to 256 avoids DMA transaction splits caused by 4k page boundary
+ * crossings; also, all SG fragments except the last must have a size multiple
+ * of 256 to avoid DMA transaction splits
+ */
+#define DPAA_A050385_ALIGN	256
+#define DPAA_FD_RX_DATA_ALIGNMENT (fman_has_errata_a050385() ? \
+				   DPAA_A050385_ALIGN : 16)
+#else
 #define DPAA_FD_DATA_ALIGNMENT  16
+#define DPAA_FD_RX_DATA_ALIGNMENT DPAA_FD_DATA_ALIGNMENT
+#endif
 
 /* The DPAA requires 256 bytes reserved and mapped for the SGT */
 #define DPAA_SGT_SIZE 256
@@ ... @@
 #define DPAA_PARSE_RESULTS_SIZE sizeof(struct fman_prs_result)
 #define DPAA_TIME_STAMP_SIZE 8
 #define DPAA_HASH_RESULTS_SIZE 8
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+#define DPAA_RX_PRIV_DATA_SIZE (DPAA_A050385_ALIGN - (DPAA_PARSE_RESULTS_SIZE\
+	+ DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE))
+#else
 #define DPAA_RX_PRIV_DATA_SIZE	(u16)(DPAA_TX_PRIV_DATA_SIZE + \
 					dpaa_rx_extra_headroom)
+#endif
 
 #define DPAA_ETH_PCD_RXQ_NUM	128
 
@@ ... @@
 #define DPAA_BP_RAW_SIZE 4096
 
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+#define dpaa_bp_size(raw_size) (SKB_WITH_OVERHEAD(raw_size) & \
+				~(DPAA_A050385_ALIGN - 1))
+#else
 #define dpaa_bp_size(raw_size) SKB_WITH_OVERHEAD(raw_size)
+#endif
 
 static int dpaa_max_frm;
 
@@ ... @@
 	buf_prefix_content.pass_prs_result = true;
 	buf_prefix_content.pass_hash_result = true;
 	buf_prefix_content.pass_time_stamp = true;
-	buf_prefix_content.data_align = DPAA_FD_DATA_ALIGNMENT;
+	buf_prefix_content.data_align = DPAA_FD_RX_DATA_ALIGNMENT;
 
 	rx_p = &params.specific_params.rx_params;
 	rx_p->err_fqid = errq->fqid;
@@ ... @@
 	return CHECKSUM_NONE;
 }
 
+#define PTR_IS_ALIGNED(x, a) (IS_ALIGNED((unsigned long)(x), (a)))
+
 /* Build a linear skb around the received buffer.
  * We are guaranteed there is enough room at the end of the data buffer to
  * accommodate the shared info area of the skb.
@@ ... @@
 		sg_addr = qm_sg_addr(&sgt[i]);
 		sg_vaddr = phys_to_virt(sg_addr);
-		WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr,
-				    SMP_CACHE_BYTES));
+		WARN_ON(!PTR_IS_ALIGNED(sg_vaddr, SMP_CACHE_BYTES));
 
 		dma_unmap_page(priv->rx_dma_dev, sg_addr,
 			       DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);
@@ ... @@
 	return 0;
 }
 
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+int dpaa_a050385_wa(struct net_device *net_dev, struct sk_buff **s)
+{
+	struct dpaa_priv *priv = netdev_priv(net_dev);
+	struct sk_buff *new_skb, *skb = *s;
+	unsigned char *start, i;
+
+	/* check linear buffer alignment */
+	if (!PTR_IS_ALIGNED(skb->data, DPAA_A050385_ALIGN))
+		goto workaround;
+
+	/* linear buffers just need to have an aligned start */
+	if (!skb_is_nonlinear(skb))
+		return 0;
+
+	/* linear data size for nonlinear skbs needs to be aligned */
+	if (!IS_ALIGNED(skb_headlen(skb), DPAA_A050385_ALIGN))
+		goto workaround;
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+		/* all fragments need to have aligned start addresses */
+		if (!IS_ALIGNED(skb_frag_off(frag), DPAA_A050385_ALIGN))
+			goto workaround;
+
+		/* all but last fragment need to have aligned sizes */
+		if (!IS_ALIGNED(skb_frag_size(frag), DPAA_A050385_ALIGN) &&
+		    (i < skb_shinfo(skb)->nr_frags - 1))
+			goto workaround;
+	}
+
+	return 0;
+
+workaround:
+	/* copy all the skb content into a new linear buffer */
+	new_skb = netdev_alloc_skb(net_dev, skb->len + DPAA_A050385_ALIGN - 1 +
+				   priv->tx_headroom);
+	if (!new_skb)
+		return -ENOMEM;
+
+	/* NET_SKB_PAD bytes already reserved, adding up to tx_headroom */
+	skb_reserve(new_skb, priv->tx_headroom - NET_SKB_PAD);
+
+	/* Workaround for DPAA_A050385 requires data start to be aligned */
+	start = PTR_ALIGN(new_skb->data, DPAA_A050385_ALIGN);
+	if (start - new_skb->data != 0)
+		skb_reserve(new_skb, start - new_skb->data);
+
+	skb_put(new_skb, skb->len);
+	skb_copy_bits(skb, 0, new_skb->data, skb->len);
+	skb_copy_header(new_skb, skb);
+	new_skb->dev = skb->dev;
+
+	/* We move the headroom when we align it so we have to reset the
+	 * network and transport header offsets relative to the new data
+	 * pointer. The checksum offload relies on these offsets.
+	 */
+	skb_set_network_header(new_skb, skb_network_offset(skb));
+	skb_set_transport_header(new_skb, skb_transport_offset(skb));
+
+	/* TODO: does timestamping need the result in the old skb? */
+	dev_kfree_skb(skb);
+	*s = new_skb;
+
+	return 0;
+}
+#endif
+
 static netdev_tx_t
 dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)
 {
@@ ... @@
 
 		nonlinear = skb_is_nonlinear(skb);
 	}
+
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+	if (unlikely(fman_has_errata_a050385())) {
+		if (dpaa_a050385_wa(net_dev, &skb))
+			goto enomem;
+		nonlinear = skb_is_nonlinear(skb);
+	}
+#endif
 
 	if (nonlinear) {
 		/* Just create a S/G fd based on the skb */
@@ ... @@
 	headroom = (u16)(bl->priv_data_size + DPAA_PARSE_RESULTS_SIZE +
 		DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE);
 
-	return DPAA_FD_DATA_ALIGNMENT ? ALIGN(headroom,
-					      DPAA_FD_DATA_ALIGNMENT) :
-					headroom;
+	return ALIGN(headroom, DPAA_FD_DATA_ALIGNMENT);
 }
 
 static int dpaa_eth_probe(struct platform_device *pdev)
+3 -3
drivers/net/ethernet/freescale/fec_main.c
@@ ... @@
 		return -EINVAL;
 	}
 
-	cycle = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr);
+	cycle = fec_enet_us_to_itr_clock(ndev, ec->rx_coalesce_usecs);
 	if (cycle > 0xFFFF) {
 		dev_err(dev, "Rx coalesced usec exceed hardware limitation\n");
 		return -EINVAL;
 	}
 
-	cycle = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr);
+	cycle = fec_enet_us_to_itr_clock(ndev, ec->tx_coalesce_usecs);
 	if (cycle > 0xFFFF) {
-		dev_err(dev, "Rx coalesced usec exceed hardware limitation\n");
+		dev_err(dev, "Tx coalesced usec exceed hardware limitation\n");
 		return -EINVAL;
 	}
 
+28
drivers/net/ethernet/freescale/fman/Kconfig
@@ ... @@
 	help
 		Freescale Data-Path Acceleration Architecture Frame Manager
 		(FMan) support
+
+config DPAA_ERRATUM_A050385
+	bool
+	depends on ARM64 && FSL_DPAA
+	default y
+	help
+		DPAA FMan erratum A050385 software workaround implementation:
+		align buffers, data start, SG fragment length to avoid FMan DMA
+		splits.
+		FMAN DMA read or writes under heavy traffic load may cause FMAN
+		internal resource leak thus stopping further packet processing.
+		The FMAN internal queue can overflow when FMAN splits single
+		read or write transactions into multiple smaller transactions
+		such that more than 17 AXI transactions are in flight from FMAN
+		to interconnect. When the FMAN internal queue overflows, it can
+		stall further packet processing. The issue can occur with any
+		one of the following three conditions:
+		1. FMAN AXI transaction crosses 4K address boundary (Errata
+		A010022)
+		2. FMAN DMA address for an AXI transaction is not 16 byte
+		aligned, i.e. the last 4 bits of an address are non-zero
+		3. Scatter Gather (SG) frames have more than one SG buffer in
+		the SG list and any one of the buffers, except the last
+		buffer in the SG list has data size that is not a multiple
+		of 16 bytes, i.e., other than 16, 32, 48, 64, etc.
+		With any one of the above three conditions present, there is
+		likelihood of stalled FMAN packet processing, especially under
+		stress with multiple ports injecting line-rate traffic.
+18
drivers/net/ethernet/freescale/fman/fman.c
@@ ... @@
 /*
  * Copyright 2008-2015 Freescale Semiconductor Inc.
+ * Copyright 2020 NXP
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions are met:
@@ ... @@
 	u32 total_num_of_tasks;
 	u32 qmi_def_tnums_thresh;
 };
+
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+static bool fman_has_err_a050385;
+#endif
 
 static irqreturn_t fman_exceptions(struct fman *fman,
 				   enum fman_exceptions exception)
@@ ... @@
 }
 EXPORT_SYMBOL(fman_bind);
 
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+bool fman_has_errata_a050385(void)
+{
+	return fman_has_err_a050385;
+}
+EXPORT_SYMBOL(fman_has_errata_a050385);
+#endif
+
 static irqreturn_t fman_err_irq(int irq, void *handle)
 {
 	struct fman *fman = (struct fman *)handle;
@@ ... @@
 			__func__);
 		goto fman_free;
 	}
+
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+	fman_has_err_a050385 =
+		of_property_read_bool(fm_node, "fsl,erratum-a050385");
+#endif
 
 	return fman;
 
+5
drivers/net/ethernet/freescale/fman/fman.h
@@ ... @@
 /*
  * Copyright 2008-2015 Freescale Semiconductor Inc.
+ * Copyright 2020 NXP
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions are met:
@@ ... @@
 u16 fman_get_max_frm(void);
 
 int fman_get_rx_extra_headroom(void);
+
+#ifdef CONFIG_DPAA_ERRATUM_A050385
+bool fman_has_errata_a050385(void);
+#endif
 
 struct fman *fman_bind(struct device *dev);
 
+1
drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
@@ ... @@
 	HCLGE_MBX_PUSH_VLAN_INFO,	/* (PF -> VF) push port base vlan */
 	HCLGE_MBX_GET_MEDIA_TYPE,	/* (VF -> PF) get media type */
 	HCLGE_MBX_PUSH_PROMISC_INFO,	/* (PF -> VF) push vf promisc info */
+	HCLGE_MBX_VF_UNINIT,		/* (VF -> PF) vf is unintializing */
 
 	HCLGE_MBX_GET_VF_FLR_STATUS = 200, /* (M7 -> PF) get vf flr status */
 	HCLGE_MBX_PUSH_LINK_STATUS,	/* (M7 -> PF) get port link status */
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ ... @@
 	netif_dbg(h, drv, netdev, "setup tc: num_tc=%u\n", tc);
 
 	return (kinfo->dcb_ops && kinfo->dcb_ops->setup_tc) ?
-		kinfo->dcb_ops->setup_tc(h, tc, prio_tc) : -EOPNOTSUPP;
+		kinfo->dcb_ops->setup_tc(h, tc ? tc : 1, prio_tc) : -EOPNOTSUPP;
 }
 
 static int hns3_nic_setup_tc(struct net_device *dev, enum tc_setup_type type,
+42 -5
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ ... @@
 
 int hclge_cfg_mac_speed_dup(struct hclge_dev *hdev, int speed, u8 duplex)
 {
+	struct hclge_mac *mac = &hdev->hw.mac;
 	int ret;
 
 	duplex = hclge_check_speed_dup(duplex, speed);
-	if (hdev->hw.mac.speed == speed && hdev->hw.mac.duplex == duplex)
+	if (!mac->support_autoneg && mac->speed == speed &&
+	    mac->duplex == duplex)
 		return 0;
 
 	ret = hclge_cfg_mac_speed_dup_hw(hdev, speed, duplex);
@@ ... @@
 	struct hclge_desc desc;
 	int ret;
 
-	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_VLAN_FILTER_CTRL, false);
-
+	/* read current vlan filter parameter */
+	hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_VLAN_FILTER_CTRL, true);
 	req = (struct hclge_vlan_filter_ctrl_cmd *)desc.data;
 	req->vlan_type = vlan_type;
-	req->vlan_fe = filter_en ? fe_type : 0;
 	req->vf_id = vf_id;
 
 	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
+	if (ret) {
+		dev_err(&hdev->pdev->dev,
+			"failed to get vlan filter config, ret = %d.\n", ret);
+		return ret;
+	}
+
+	/* modify and write new config parameter */
+	hclge_cmd_reuse_desc(&desc, false);
+	req->vlan_fe = filter_en ?
+		(req->vlan_fe | fe_type) : (req->vlan_fe & ~fe_type);
+
+	ret = hclge_cmd_send(&hdev->hw, &desc, 1);
 	if (ret)
-		dev_err(&hdev->pdev->dev, "set vlan filter fail, ret =%d.\n",
+		dev_err(&hdev->pdev->dev, "failed to set vlan filter, ret = %d.\n",
 			ret);
 
 	return ret;
@@ ... @@
 			kfree(vlan);
 		}
 	}
+	clear_bit(vport->vport_id, hdev->vf_vlan_full);
 }
 
 void hclge_uninit_vport_vlan_table(struct hclge_dev *hdev)
@@ ... @@
 						       vlan, qos,
 						       ntohs(proto));
 		return ret;
+	}
+}
+
+static void hclge_clear_vf_vlan(struct hclge_dev *hdev)
+{
+	struct hclge_vlan_info *vlan_info;
+	struct hclge_vport *vport;
+	int ret;
+	int vf;
+
+	/* clear port base vlan for all vf */
+	for (vf = HCLGE_VF_VPORT_START_NUM; vf < hdev->num_alloc_vport; vf++) {
+		vport = &hdev->vport[vf];
+		vlan_info = &vport->port_base_vlan_cfg.vlan_info;
+
+		ret = hclge_set_vlan_filter_hw(hdev, htons(ETH_P_8021Q),
+					       vport->vport_id,
+					       vlan_info->vlan_tag, true);
+		if (ret)
+			dev_err(&hdev->pdev->dev,
+				"failed to clear vf vlan for vf%d, ret = %d\n",
+				vf - HCLGE_VF_VPORT_START_NUM, ret);
 	}
 }
@@ ... @@
 	struct hclge_mac *mac = &hdev->hw.mac;
 
 	hclge_reset_vf_rate(hdev);
+	hclge_clear_vf_vlan(hdev);
 	hclge_misc_affinity_teardown(hdev);
 	hclge_state_uninit(hdev);
 
+1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
@@ ... @@
 		hclge_get_link_mode(vport, req);
 		break;
 	case HCLGE_MBX_GET_VF_FLR_STATUS:
+	case HCLGE_MBX_VF_UNINIT:
 		hclge_rm_vport_all_mac_table(vport, true,
 					     HCLGE_MAC_ADDR_UC);
 		hclge_rm_vport_all_mac_table(vport, true,
+3
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
@@ ... @@
 {
 	hclgevf_state_uninit(hdev);
 
+	hclgevf_send_mbx_msg(hdev, HCLGE_MBX_VF_UNINIT, 0, NULL, 0,
+			     false, NULL, 0);
+
 	if (test_bit(HCLGEVF_STATE_IRQ_INITED, &hdev->state)) {
 		hclgevf_misc_irq_uninit(hdev);
 		hclgevf_uninit_msi(hdev);
+22 -2
drivers/net/ethernet/ibm/ibmvnic.c
@@ ... @@
 {
 	struct ibmvnic_rwi *rwi;
 	struct ibmvnic_adapter *adapter;
+	bool saved_state = false;
+	unsigned long flags;
 	u32 reset_state;
 	int rc = 0;
 
@@ ... @@
 		return;
 	}
 
-	reset_state = adapter->state;
-
 	rwi = get_next_rwi(adapter);
 	while (rwi) {
+		spin_lock_irqsave(&adapter->state_lock, flags);
+
 		if (adapter->state == VNIC_REMOVING ||
 		    adapter->state == VNIC_REMOVED) {
+			spin_unlock_irqrestore(&adapter->state_lock, flags);
 			kfree(rwi);
 			rc = EBUSY;
 			break;
 		}
+
+		if (!saved_state) {
+			reset_state = adapter->state;
+			adapter->state = VNIC_RESETTING;
+			saved_state = true;
+		}
+		spin_unlock_irqrestore(&adapter->state_lock, flags);
 
 		if (rwi->reset_reason == VNIC_RESET_CHANGE_PARAM) {
 			/* CHANGE_PARAM requestor holds rtnl_lock */
@@ ... @@
 			  __ibmvnic_delayed_reset);
 	INIT_LIST_HEAD(&adapter->rwi_list);
 	spin_lock_init(&adapter->rwi_lock);
+	spin_lock_init(&adapter->state_lock);
 	mutex_init(&adapter->fw_lock);
 	init_completion(&adapter->init_done);
 	init_completion(&adapter->fw_done);
@@ ... @@
 {
 	struct net_device *netdev = dev_get_drvdata(&dev->dev);
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
+	unsigned long flags;
+
+	spin_lock_irqsave(&adapter->state_lock, flags);
+	if (adapter->state == VNIC_RESETTING) {
+		spin_unlock_irqrestore(&adapter->state_lock, flags);
+		return -EBUSY;
+	}
 
 	adapter->state = VNIC_REMOVING;
+	spin_unlock_irqrestore(&adapter->state_lock, flags);
+
 	rtnl_lock();
 	unregister_netdevice(netdev);
 
+5 -1
drivers/net/ethernet/ibm/ibmvnic.h
··· 941 941 VNIC_CLOSING, 942 942 VNIC_CLOSED, 943 943 VNIC_REMOVING, 944 - VNIC_REMOVED}; 944 + VNIC_REMOVED, 945 + VNIC_RESETTING}; 945 946 946 947 enum ibmvnic_reset_reason {VNIC_RESET_FAILOVER = 1, 947 948 VNIC_RESET_MOBILITY, ··· 1091 1090 1092 1091 struct ibmvnic_tunables desired; 1093 1092 struct ibmvnic_tunables fallback; 1093 + 1094 + /* Used for serializatin of state field */ 1095 + spinlock_t state_lock; 1094 1096 };
+3 -3
drivers/net/ethernet/marvell/mvmdio.c
··· 347 347 } 348 348 349 349 350 - dev->err_interrupt = platform_get_irq(pdev, 0); 350 + dev->err_interrupt = platform_get_irq_optional(pdev, 0); 351 351 if (dev->err_interrupt > 0 && 352 352 resource_size(r) < MVMDIO_ERR_INT_MASK + 4) { 353 353 dev_err(&pdev->dev, ··· 364 364 writel(MVMDIO_ERR_INT_SMI_DONE, 365 365 dev->regs + MVMDIO_ERR_INT_MASK); 366 366 367 - } else if (dev->err_interrupt == -EPROBE_DEFER) { 368 - ret = -EPROBE_DEFER; 367 + } else if (dev->err_interrupt < 0) { 368 + ret = dev->err_interrupt; 369 369 goto out_mdio; 370 370 } 371 371
+17 -11
drivers/net/ethernet/mscc/ocelot.c
··· 2176 2176 return 0; 2177 2177 } 2178 2178 2179 - static void ocelot_port_set_mtu(struct ocelot *ocelot, int port, size_t mtu) 2179 + /* Configure the maximum SDU (L2 payload) on RX to the value specified in @sdu. 2180 + * The length of VLAN tags is accounted for automatically via DEV_MAC_TAGS_CFG. 2181 + */ 2182 + static void ocelot_port_set_maxlen(struct ocelot *ocelot, int port, size_t sdu) 2180 2183 { 2181 2184 struct ocelot_port *ocelot_port = ocelot->ports[port]; 2185 + int maxlen = sdu + ETH_HLEN + ETH_FCS_LEN; 2182 2186 int atop_wm; 2183 2187 2184 - ocelot_port_writel(ocelot_port, mtu, DEV_MAC_MAXLEN_CFG); 2188 + ocelot_port_writel(ocelot_port, maxlen, DEV_MAC_MAXLEN_CFG); 2185 2189 2186 2190 /* Set Pause WM hysteresis 2187 - * 152 = 6 * mtu / OCELOT_BUFFER_CELL_SZ 2188 - * 101 = 4 * mtu / OCELOT_BUFFER_CELL_SZ 2191 + * 152 = 6 * maxlen / OCELOT_BUFFER_CELL_SZ 2192 + * 101 = 4 * maxlen / OCELOT_BUFFER_CELL_SZ 2189 2193 */ 2190 2194 ocelot_write_rix(ocelot, SYS_PAUSE_CFG_PAUSE_ENA | 2191 2195 SYS_PAUSE_CFG_PAUSE_STOP(101) | 2192 2196 SYS_PAUSE_CFG_PAUSE_START(152), SYS_PAUSE_CFG, port); 2193 2197 2194 2198 /* Tail dropping watermark */ 2195 - atop_wm = (ocelot->shared_queue_sz - 9 * mtu) / OCELOT_BUFFER_CELL_SZ; 2196 - ocelot_write_rix(ocelot, ocelot_wm_enc(9 * mtu), 2199 + atop_wm = (ocelot->shared_queue_sz - 9 * maxlen) / 2200 + OCELOT_BUFFER_CELL_SZ; 2201 + ocelot_write_rix(ocelot, ocelot_wm_enc(9 * maxlen), 2197 2202 SYS_ATOP, port); 2198 2203 ocelot_write(ocelot, ocelot_wm_enc(atop_wm), SYS_ATOP_TOT_CFG); 2199 2204 } ··· 2227 2222 DEV_MAC_HDX_CFG); 2228 2223 2229 2224 /* Set Max Length and maximum tags allowed */ 2230 - ocelot_port_set_mtu(ocelot, port, VLAN_ETH_FRAME_LEN); 2225 + ocelot_port_set_maxlen(ocelot, port, ETH_DATA_LEN); 2231 2226 ocelot_port_writel(ocelot_port, DEV_MAC_TAGS_CFG_TAG_ID(ETH_P_8021AD) | 2232 2227 DEV_MAC_TAGS_CFG_VLAN_AWR_ENA | 2228 + DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA | 2233 2229 DEV_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA, 2234 2230 DEV_MAC_TAGS_CFG); 2235 2231 ··· 2316 2310 * Only one port can be an NPI at the same time. 2317 2311 */ 2318 2312 if (cpu < ocelot->num_phys_ports) { 2319 - int mtu = VLAN_ETH_FRAME_LEN + OCELOT_TAG_LEN; 2313 + int sdu = ETH_DATA_LEN + OCELOT_TAG_LEN; 2320 2314 2321 2315 ocelot_write(ocelot, QSYS_EXT_CPU_CFG_EXT_CPUQ_MSK_M | 2322 2316 QSYS_EXT_CPU_CFG_EXT_CPU_PORT(cpu), 2323 2317 QSYS_EXT_CPU_CFG); 2324 2318 2325 2319 if (injection == OCELOT_TAG_PREFIX_SHORT) 2326 - mtu += OCELOT_SHORT_PREFIX_LEN; 2320 + sdu += OCELOT_SHORT_PREFIX_LEN; 2327 2321 else if (injection == OCELOT_TAG_PREFIX_LONG) 2328 - mtu += OCELOT_LONG_PREFIX_LEN; 2322 + sdu += OCELOT_LONG_PREFIX_LEN; 2329 2323 2330 - ocelot_port_set_mtu(ocelot, cpu, mtu); 2324 + ocelot_port_set_maxlen(ocelot, cpu, sdu); 2331 2325 } 2332 2326 2333 2327 /* CPU port Injection/Extraction configuration */
+4 -4
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 1688 1688 if (!(is_zero_ether_addr(mac) || is_valid_ether_addr(mac))) 1689 1689 return -EINVAL; 1690 1690 1691 - down_read(&ionic->vf_op_lock); 1691 + down_write(&ionic->vf_op_lock); 1692 1692 1693 1693 if (vf >= pci_num_vf(ionic->pdev) || !ionic->vfs) { 1694 1694 ret = -EINVAL; ··· 1698 1698 ether_addr_copy(ionic->vfs[vf].macaddr, mac); 1699 1699 } 1700 1700 1701 - up_read(&ionic->vf_op_lock); 1701 + up_write(&ionic->vf_op_lock); 1702 1702 return ret; 1703 1703 } 1704 1704 ··· 1719 1719 if (proto != htons(ETH_P_8021Q)) 1720 1720 return -EPROTONOSUPPORT; 1721 1721 1722 - down_read(&ionic->vf_op_lock); 1722 + down_write(&ionic->vf_op_lock); 1723 1723 1724 1724 if (vf >= pci_num_vf(ionic->pdev) || !ionic->vfs) { 1725 1725 ret = -EINVAL; ··· 1730 1730 ionic->vfs[vf].vlanid = vlan; 1731 1731 } 1732 1732 1733 - up_read(&ionic->vf_op_lock); 1733 + up_write(&ionic->vf_op_lock); 1734 1734 return ret; 1735 1735 } 1736 1736
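The ionic fix above switches the `ndo_set_vf_*` handlers from `down_read` to `down_write`, since they modify the `vfs[]` table: a read lock may be held by many callers at once, so two writers under it can race. A toy reader-writer lock state machine (not the kernel's rwsem implementation) illustrating the invariant the fix relies on:

```c
#include <assert.h>

/* Toy reader-writer lock state: read locks may be shared, while a
 * write lock excludes all readers and other writers. Mutating shared
 * data therefore requires the write side, which is what the patch
 * restores for the VF configuration paths. */
struct rwlock { int readers; int writers; };

static int try_read_lock(struct rwlock *l)
{
    if (l->writers)
        return 0;       /* a writer holds it exclusively */
    l->readers++;
    return 1;
}

static int try_write_lock(struct rwlock *l)
{
    if (l->readers || l->writers)
        return 0;       /* writers need full exclusivity */
    l->writers = 1;
    return 1;
}
```

With the old code, two `set_vf_mac` calls could both succeed in taking the read side concurrently and then both write `vfs[vf].macaddr`.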
+1 -1
drivers/net/ethernet/samsung/sxgbe/sxgbe_main.c
··· 2277 2277 if (!str || !*str) 2278 2278 return -EINVAL; 2279 2279 while ((opt = strsep(&str, ",")) != NULL) { 2280 - if (!strncmp(opt, "eee_timer:", 6)) { 2280 + if (!strncmp(opt, "eee_timer:", 10)) { 2281 2281 if (kstrtoint(opt + 10, 0, &eee_timer)) 2282 2282 goto err; 2283 2283 }
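The sxgbe fix above corrects an off-by-length `strncmp`: comparing only the first 6 characters of `"eee_timer:"` means any option beginning with `"eee_ti"` was treated as a match. A minimal userspace sketch of the before/after behavior (the helper name is hypothetical):

```c
#include <assert.h>
#include <string.h>

/* Return 1 if opt names the eee_timer parameter, comparing the first
 * n characters of the prefix. With n == 6 (the old bug) any option
 * sharing the prefix "eee_ti" matches; n == 10 compares the whole
 * "eee_timer:" prefix as intended. */
static int matches_eee_timer(const char *opt, size_t n)
{
    return strncmp(opt, "eee_timer:", n) == 0;
}
```

The general rule this illustrates: the length passed to `strncmp` for a prefix test must equal the prefix's full length.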
+17 -15
drivers/net/ethernet/sfc/ef10.c
··· 2853 2853 } 2854 2854 2855 2855 /* Transmit timestamps are only available for 8XXX series. They result 2856 - * in three events per packet. These occur in order, and are: 2857 - * - the normal completion event 2856 + * in up to three events per packet. These occur in order, and are: 2857 + * - the normal completion event (may be omitted) 2858 2858 * - the low part of the timestamp 2859 2859 * - the high part of the timestamp 2860 + * 2861 + * It's possible for multiple completion events to appear before the 2862 + * corresponding timestamps. So we can for example get: 2863 + * COMP N 2864 + * COMP N+1 2865 + * TS_LO N 2866 + * TS_HI N 2867 + * TS_LO N+1 2868 + * TS_HI N+1 2869 + * 2870 + * In addition it's also possible for the adjacent completions to be 2871 + * merged, so we may not see COMP N above. As such, the completion 2872 + * events are not very useful here. 2860 2873 * 2861 2874 * Each part of the timestamp is itself split across two 16 bit 2862 2875 * fields in the event. ··· 2878 2865 2879 2866 switch (tx_ev_type) { 2880 2867 case TX_TIMESTAMP_EVENT_TX_EV_COMPLETION: 2881 - /* In case of Queue flush or FLR, we might have received 2882 - * the previous TX completion event but not the Timestamp 2883 - * events. 2884 - */ 2885 - if (tx_queue->completed_desc_ptr != tx_queue->ptr_mask) 2886 - efx_xmit_done(tx_queue, tx_queue->completed_desc_ptr); 2887 - 2888 - tx_ev_desc_ptr = EFX_QWORD_FIELD(*event, 2889 - ESF_DZ_TX_DESCR_INDX); 2890 - tx_queue->completed_desc_ptr = 2891 - tx_ev_desc_ptr & tx_queue->ptr_mask; 2868 + /* Ignore this event - see above. */ 2892 2869 break; 2893 2870 2894 2871 case TX_TIMESTAMP_EVENT_TX_EV_TSTAMP_LO: ··· 2890 2887 ts_part = efx_ef10_extract_event_ts(event); 2891 2888 tx_queue->completed_timestamp_major = ts_part; 2892 2889 2893 - efx_xmit_done(tx_queue, tx_queue->completed_desc_ptr); 2894 - tx_queue->completed_desc_ptr = tx_queue->ptr_mask; 2890 + efx_xmit_done_single(tx_queue); 2895 2891 break; 2896 2892 2897 2893 default:
+1
drivers/net/ethernet/sfc/efx.h
··· 20 20 struct net_device *net_dev); 21 21 netdev_tx_t efx_enqueue_skb(struct efx_tx_queue *tx_queue, struct sk_buff *skb); 22 22 void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index); 23 + void efx_xmit_done_single(struct efx_tx_queue *tx_queue); 23 24 int efx_setup_tc(struct net_device *net_dev, enum tc_setup_type type, 24 25 void *type_data); 25 26 extern unsigned int efx_piobuf_size;
+1
drivers/net/ethernet/sfc/efx_channels.c
··· 583 583 if (tx_queue->channel) 584 584 tx_queue->channel = channel; 585 585 tx_queue->buffer = NULL; 586 + tx_queue->cb_page = NULL; 586 587 memset(&tx_queue->txd, 0, sizeof(tx_queue->txd)); 587 588 } 588 589
-3
drivers/net/ethernet/sfc/net_driver.h
··· 208 208 * avoid cache-line ping-pong between the xmit path and the 209 209 * completion path. 210 210 * @merge_events: Number of TX merged completion events 211 - * @completed_desc_ptr: Most recent completed pointer - only used with 212 - * timestamping. 213 211 * @completed_timestamp_major: Top part of the most recent tx timestamp. 214 212 * @completed_timestamp_minor: Low part of the most recent tx timestamp. 215 213 * @insert_count: Current insert pointer ··· 267 269 unsigned int merge_events; 268 270 unsigned int bytes_compl; 269 271 unsigned int pkts_compl; 270 - unsigned int completed_desc_ptr; 271 272 u32 completed_timestamp_major; 272 273 u32 completed_timestamp_minor; 273 274
+38
drivers/net/ethernet/sfc/tx.c
··· 535 535 return efx_enqueue_skb(tx_queue, skb); 536 536 } 537 537 538 + void efx_xmit_done_single(struct efx_tx_queue *tx_queue) 539 + { 540 + unsigned int pkts_compl = 0, bytes_compl = 0; 541 + unsigned int read_ptr; 542 + bool finished = false; 543 + 544 + read_ptr = tx_queue->read_count & tx_queue->ptr_mask; 545 + 546 + while (!finished) { 547 + struct efx_tx_buffer *buffer = &tx_queue->buffer[read_ptr]; 548 + 549 + if (!efx_tx_buffer_in_use(buffer)) { 550 + struct efx_nic *efx = tx_queue->efx; 551 + 552 + netif_err(efx, hw, efx->net_dev, 553 + "TX queue %d spurious single TX completion\n", 554 + tx_queue->queue); 555 + efx_schedule_reset(efx, RESET_TYPE_TX_SKIP); 556 + return; 557 + } 558 + 559 + /* Need to check the flag before dequeueing. */ 560 + if (buffer->flags & EFX_TX_BUF_SKB) 561 + finished = true; 562 + efx_dequeue_buffer(tx_queue, buffer, &pkts_compl, &bytes_compl); 563 + 564 + ++tx_queue->read_count; 565 + read_ptr = tx_queue->read_count & tx_queue->ptr_mask; 566 + } 567 + 568 + tx_queue->pkts_compl += pkts_compl; 569 + tx_queue->bytes_compl += bytes_compl; 570 + 571 + EFX_WARN_ON_PARANOID(pkts_compl != 1); 572 + 573 + efx_xmit_done_check_empty(tx_queue); 574 + } 575 + 538 576 void efx_init_tx_queue_core_txq(struct efx_tx_queue *tx_queue) 539 577 { 540 578 struct efx_nic *efx = tx_queue->efx;
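The new `efx_xmit_done_single` above walks the descriptor ring from `read_count`, freeing buffers until it releases the one flagged as carrying the skb, with indices wrapping via a power-of-two mask and a spurious-completion check on empty slots. A simplified userspace sketch of that walk (constants, struct, and return convention are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

#define RING_SIZE 8              /* must be a power of two */
#define PTR_MASK  (RING_SIZE - 1)
#define BUF_SKB   0x1            /* buffer completes a packet */

struct buf { unsigned int flags; int in_use; };

/* Dequeue buffers starting at *read_count until the end-of-packet
 * (skb) buffer has been released; return the number of buffers freed,
 * or -1 on a spurious completion (an empty slot). */
static int xmit_done_single(struct buf *ring, unsigned int *read_count)
{
    int freed = 0;
    bool finished = false;

    while (!finished) {
        struct buf *b = &ring[*read_count & PTR_MASK];

        if (!b->in_use)
            return -1;           /* spurious single TX completion */
        if (b->flags & BUF_SKB)  /* check the flag before dequeueing */
            finished = true;
        b->in_use = 0;
        ++*read_count;
        freed++;
    }
    return freed;
}
```

Keeping `read_count` as a free-running counter and masking only on array access, as here, is the same trick the driver uses so that `read_count - old_write_count` arithmetic works across wraparound.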
+16 -13
drivers/net/ethernet/sfc/tx_common.c
··· 80 80 tx_queue->xmit_more_available = false; 81 81 tx_queue->timestamping = (efx_ptp_use_mac_tx_timestamps(efx) && 82 82 tx_queue->channel == efx_ptp_channel(efx)); 83 - tx_queue->completed_desc_ptr = tx_queue->ptr_mask; 84 83 tx_queue->completed_timestamp_major = 0; 85 84 tx_queue->completed_timestamp_minor = 0; 86 85 ··· 209 210 while (read_ptr != stop_index) { 210 211 struct efx_tx_buffer *buffer = &tx_queue->buffer[read_ptr]; 211 212 212 - if (!(buffer->flags & EFX_TX_BUF_OPTION) && 213 - unlikely(buffer->len == 0)) { 213 + if (!efx_tx_buffer_in_use(buffer)) { 214 214 netif_err(efx, tx_err, efx->net_dev, 215 - "TX queue %d spurious TX completion id %x\n", 215 + "TX queue %d spurious TX completion id %d\n", 216 216 tx_queue->queue, read_ptr); 217 217 efx_schedule_reset(efx, RESET_TYPE_TX_SKIP); 218 218 return; ··· 221 223 222 224 ++tx_queue->read_count; 223 225 read_ptr = tx_queue->read_count & tx_queue->ptr_mask; 226 + } 227 + } 228 + 229 + void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue) 230 + { 231 + if ((int)(tx_queue->read_count - tx_queue->old_write_count) >= 0) { 232 + tx_queue->old_write_count = READ_ONCE(tx_queue->write_count); 233 + if (tx_queue->read_count == tx_queue->old_write_count) { 234 + /* Ensure that read_count is flushed. */ 235 + smp_mb(); 236 + tx_queue->empty_read_count = 237 + tx_queue->read_count | EFX_EMPTY_COUNT_VALID; 238 + } 224 239 } 225 240 } 226 241 ··· 267 256 netif_tx_wake_queue(tx_queue->core_txq); 268 257 } 269 258 270 - /* Check whether the hardware queue is now empty */ 271 - if ((int)(tx_queue->read_count - tx_queue->old_write_count) >= 0) { 272 - tx_queue->old_write_count = READ_ONCE(tx_queue->write_count); 273 - if (tx_queue->read_count == tx_queue->old_write_count) { 274 - smp_mb(); 275 - tx_queue->empty_read_count = 276 - tx_queue->read_count | EFX_EMPTY_COUNT_VALID; 277 - } 278 - } 259 + efx_xmit_done_check_empty(tx_queue); 280 261 } 281 262 /* Remove buffers put into a tx_queue for the current packet.
+6
drivers/net/ethernet/sfc/tx_common.h
··· 21 21 unsigned int *pkts_compl, 22 22 unsigned int *bytes_compl); 23 23 24 + static inline bool efx_tx_buffer_in_use(struct efx_tx_buffer *buffer) 25 + { 26 + return buffer->len || (buffer->flags & EFX_TX_BUF_OPTION); 27 + } 28 + 29 + void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue); 24 30 void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index); 25 31 26 32 void efx_enqueue_unwind(struct efx_tx_queue *tx_queue,
+2 -1
drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
··· 24 24 static void dwmac1000_core_init(struct mac_device_info *hw, 25 25 struct net_device *dev) 26 26 { 27 + struct stmmac_priv *priv = netdev_priv(dev); 27 28 void __iomem *ioaddr = hw->pcsr; 28 29 u32 value = readl(ioaddr + GMAC_CONTROL); 29 30 int mtu = dev->mtu; ··· 36 35 * Broadcom tags can look like invalid LLC/SNAP packets and cause the 37 36 * hardware to truncate packets on reception. 38 37 */ 39 - if (netdev_uses_dsa(dev)) 38 + if (netdev_uses_dsa(dev) || !priv->plat->enh_desc) 40 39 value &= ~GMAC_CONTROL_ACS; 41 40 42 41 if (mtu > 1500)
+11 -8
drivers/net/ipvlan/ipvlan_core.c
··· 293 293 } 294 294 if (dev) 295 295 dev_put(dev); 296 + cond_resched(); 296 297 } 297 298 } 298 299 ··· 499 498 struct ethhdr *ethh = eth_hdr(skb); 500 499 int ret = NET_XMIT_DROP; 501 500 502 - /* In this mode we dont care about multicast and broadcast traffic */ 503 - if (is_multicast_ether_addr(ethh->h_dest)) { 504 - pr_debug_ratelimited("Dropped {multi|broad}cast of type=[%x]\n", 505 - ntohs(skb->protocol)); 506 - kfree_skb(skb); 507 - goto out; 508 - } 509 - 510 501 /* The ipvlan is a pseudo-L2 device, so the packets that we receive 511 502 * will have L2; which need to discarded and processed further 512 503 * in the net-ns of the main-device. 513 504 */ 514 505 if (skb_mac_header_was_set(skb)) { 506 + /* In this mode we dont care about 507 + * multicast and broadcast traffic */ 508 + if (is_multicast_ether_addr(ethh->h_dest)) { 509 + pr_debug_ratelimited( 510 + "Dropped {multi|broad}cast of type=[%x]\n", 511 + ntohs(skb->protocol)); 512 + kfree_skb(skb); 513 + goto out; 514 + } 515 + 515 516 skb_pull(skb, sizeof(*ethh)); 516 517 skb->mac_header = (typeof(skb->mac_header))~0U; 517 518 skb_reset_network_header(skb);
+1 -4
drivers/net/ipvlan/ipvlan_main.c
··· 164 164 static int ipvlan_open(struct net_device *dev) 165 165 { 166 166 struct ipvl_dev *ipvlan = netdev_priv(dev); 167 - struct net_device *phy_dev = ipvlan->phy_dev; 168 167 struct ipvl_addr *addr; 169 168 170 169 if (ipvlan->port->mode == IPVLAN_MODE_L3 || ··· 177 178 ipvlan_ht_addr_add(ipvlan, addr); 178 179 rcu_read_unlock(); 179 180 180 - return dev_uc_add(phy_dev, phy_dev->dev_addr); 181 + return 0; 181 182 } 182 183 183 184 static int ipvlan_stop(struct net_device *dev) ··· 188 189 189 190 dev_uc_unsync(phy_dev, dev); 190 191 dev_mc_unsync(phy_dev, dev); 191 - 192 - dev_uc_del(phy_dev, phy_dev->dev_addr); 193 192 194 193 rcu_read_lock(); 195 194 list_for_each_entry_rcu(addr, &ipvlan->addrs, anode)
+20 -5
drivers/net/macsec.c
··· 424 424 return (struct macsec_eth_header *)skb_mac_header(skb); 425 425 } 426 426 427 + static sci_t dev_to_sci(struct net_device *dev, __be16 port) 428 + { 429 + return make_sci(dev->dev_addr, port); 430 + } 431 + 427 432 static void __macsec_pn_wrapped(struct macsec_secy *secy, 428 433 struct macsec_tx_sa *tx_sa) 429 434 { ··· 3273 3268 3274 3269 out: 3275 3270 ether_addr_copy(dev->dev_addr, addr->sa_data); 3271 + macsec->secy.sci = dev_to_sci(dev, MACSEC_PORT_ES); 3272 + 3273 + /* If h/w offloading is available, propagate to the device */ 3274 + if (macsec_is_offloaded(macsec)) { 3275 + const struct macsec_ops *ops; 3276 + struct macsec_context ctx; 3277 + 3278 + ops = macsec_get_ops(macsec, &ctx); 3279 + if (ops) { 3280 + ctx.secy = &macsec->secy; 3281 + macsec_offload(ops->mdo_upd_secy, &ctx); 3282 + } 3283 + } 3284 + 3276 3285 return 0; 3277 3286 } 3278 3287 ··· 3361 3342 3362 3343 static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = { 3363 3344 [IFLA_MACSEC_SCI] = { .type = NLA_U64 }, 3345 + [IFLA_MACSEC_PORT] = { .type = NLA_U16 }, 3364 3346 [IFLA_MACSEC_ICV_LEN] = { .type = NLA_U8 }, 3365 3347 [IFLA_MACSEC_CIPHER_SUITE] = { .type = NLA_U64 }, 3366 3348 [IFLA_MACSEC_WINDOW] = { .type = NLA_U32 }, ··· 3610 3590 } 3611 3591 3612 3592 return false; 3613 - } 3614 - 3615 - static sci_t dev_to_sci(struct net_device *dev, __be16 port) 3616 - { 3617 - return make_sci(dev->dev_addr, port); 3618 3593 } 3619 3594 3620 3595 static int macsec_add_dev(struct net_device *dev, sci_t sci, u8 icv_len)
+2
drivers/net/macvlan.c
··· 334 334 if (src) 335 335 dev_put(src->dev); 336 336 consume_skb(skb); 337 + 338 + cond_resched(); 337 339 } 338 340 } 339 341
+1
drivers/net/phy/bcm63xx.c
··· 73 73 /* same phy as above, with just a different OUI */ 74 74 .phy_id = 0x002bdc00, 75 75 .phy_id_mask = 0xfffffc00, 76 + .name = "Broadcom BCM63XX (2)", 76 77 /* PHY_BASIC_FEATURES */ 77 78 .flags = PHY_IS_INTERNAL, 78 79 .config_init = bcm63xx_config_init,
+2 -1
drivers/net/phy/phy.c
··· 727 727 phy_trigger_machine(phydev); 728 728 } 729 729 730 - if (phy_clear_interrupt(phydev)) 730 + /* did_interrupt() may have cleared the interrupt already */ 731 + if (!phydev->drv->did_interrupt && phy_clear_interrupt(phydev)) 731 732 goto phy_err; 732 733 return IRQ_HANDLED; 733 734
+5 -1
drivers/net/phy/phy_device.c
··· 286 286 if (!mdio_bus_phy_may_suspend(phydev)) 287 287 return 0; 288 288 289 + phydev->suspended_by_mdio_bus = 1; 290 + 289 291 return phy_suspend(phydev); 290 292 } 291 293 ··· 296 294 struct phy_device *phydev = to_phy_device(dev); 297 295 int ret; 298 296 299 - if (!mdio_bus_phy_may_suspend(phydev)) 297 + if (!phydev->suspended_by_mdio_bus) 300 298 goto no_resume; 299 + 300 + phydev->suspended_by_mdio_bus = 0; 301 301 302 302 ret = phy_resume(phydev); 303 303 if (ret < 0)
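The phy_device fix above records whether the MDIO bus PM code actually suspended the PHY (`suspended_by_mdio_bus`) and resumes only in that case, instead of re-evaluating `mdio_bus_phy_may_suspend()` on resume, whose answer can change between suspend and resume. A minimal sketch of that suspend/resume pairing (structure and names simplified, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

struct phy {
    bool may_suspend;       /* policy, can change at runtime */
    bool suspended;
    bool suspended_by_bus;  /* remembers who suspended it */
};

static void bus_suspend(struct phy *p)
{
    if (!p->may_suspend)
        return;
    p->suspended_by_bus = true;  /* record that we did the suspend */
    p->suspended = true;
}

static void bus_resume(struct phy *p)
{
    if (!p->suspended_by_bus)    /* only undo our own suspend */
        return;
    p->suspended_by_bus = false;
    p->suspended = false;
}
```

Without the flag, a policy change between the two callbacks would leave the PHY suspended forever, which is exactly the resume bug being fixed.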
+7 -1
drivers/net/phy/phylink.c
··· 761 761 config.interface = interface; 762 762 763 763 ret = phylink_validate(pl, supported, &config); 764 - if (ret) 764 + if (ret) { 765 + phylink_warn(pl, "validation of %s with support %*pb and advertisement %*pb failed: %d\n", 766 + phy_modes(config.interface), 767 + __ETHTOOL_LINK_MODE_MASK_NBITS, phy->supported, 768 + __ETHTOOL_LINK_MODE_MASK_NBITS, config.advertising, 769 + ret); 765 770 return ret; 771 + } 766 772 767 773 phy->phylink = pl; 768 774 phy->phy_link_change = phylink_phy_change;
+10 -4
drivers/net/slip/slhc.c
··· 232 232 struct cstate *cs = lcs->next; 233 233 unsigned long deltaS, deltaA; 234 234 short changes = 0; 235 - int hlen; 235 + int nlen, hlen; 236 236 unsigned char new_seq[16]; 237 237 unsigned char *cp = new_seq; 238 238 struct iphdr *ip; ··· 248 248 return isize; 249 249 250 250 ip = (struct iphdr *) icp; 251 + if (ip->version != 4 || ip->ihl < 5) 252 + return isize; 251 253 252 254 /* Bail if this packet isn't TCP, or is an IP fragment */ 253 255 if (ip->protocol != IPPROTO_TCP || (ntohs(ip->frag_off) & 0x3fff)) { ··· 260 258 comp->sls_o_tcp++; 261 259 return isize; 262 260 } 263 - /* Extract TCP header */ 261 + nlen = ip->ihl * 4; 262 + if (isize < nlen + sizeof(*th)) 263 + return isize; 264 264 265 - th = (struct tcphdr *)(((unsigned char *)ip) + ip->ihl*4); 266 - hlen = ip->ihl*4 + th->doff*4; 265 + th = (struct tcphdr *)(icp + nlen); 266 + if (th->doff < sizeof(struct tcphdr) / 4) 267 + return isize; 268 + hlen = nlen + th->doff * 4; 267 269 268 270 /* Bail if the TCP packet isn't `compressible' (i.e., ACK isn't set or 269 271 * some other control bit is set). Also uncompressible if
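The slhc fix above validates the IP version, IHL, packet length, and TCP data offset before using them to compute header lengths, so a malformed packet can no longer steer the parser past the buffer. A userspace sketch of the same bounds checks on a raw byte buffer (header layout reduced to the two fields involved; not the kernel structs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Return the total IP + TCP header length in bytes, or -1 if the
 * buffer is too short or carries malformed length fields, mirroring
 * the checks added to slhc_compress(). */
static int parse_hlen(const uint8_t *pkt, size_t len)
{
    size_t nlen, hlen;
    unsigned int ver, ihl, doff;

    if (len < 1)
        return -1;
    ver = pkt[0] >> 4;
    ihl = pkt[0] & 0x0f;
    if (ver != 4 || ihl < 5)     /* not IPv4, or bogus IHL */
        return -1;
    nlen = ihl * 4;
    if (len < nlen + 20)         /* no room for a minimal TCP header */
        return -1;
    doff = pkt[nlen + 12] >> 4;  /* TCP data-offset field */
    if (doff < 5)                /* TCP header below the minimum */
        return -1;
    hlen = nlen + doff * 4;
    return len < hlen ? -1 : (int)hlen;
}
```

Each length field is checked against the bytes actually present before it is trusted, which is the whole point of the patch.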
+2
drivers/net/team/team.c
··· 2240 2240 [TEAM_ATTR_OPTION_CHANGED] = { .type = NLA_FLAG }, 2241 2241 [TEAM_ATTR_OPTION_TYPE] = { .type = NLA_U8 }, 2242 2242 [TEAM_ATTR_OPTION_DATA] = { .type = NLA_BINARY }, 2243 + [TEAM_ATTR_OPTION_PORT_IFINDEX] = { .type = NLA_U32 }, 2244 + [TEAM_ATTR_OPTION_ARRAY_INDEX] = { .type = NLA_U32 }, 2243 2245 }; 2244 2246 2245 2247 static int team_nl_cmd_noop(struct sk_buff *skb, struct genl_info *info)
+8
drivers/net/usb/r8152.c
··· 3221 3221 } 3222 3222 3223 3223 msleep(20); 3224 + if (test_bit(RTL8152_UNPLUG, &tp->flags)) 3225 + break; 3224 3226 } 3225 3227 3226 3228 return data; ··· 5404 5402 if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_BOOT_CTRL) & 5405 5403 AUTOLOAD_DONE) 5406 5404 break; 5405 + 5407 5406 msleep(20); 5407 + if (test_bit(RTL8152_UNPLUG, &tp->flags)) 5408 + break; 5408 5409 } 5409 5410 5410 5411 data = r8153_phy_status(tp, 0); ··· 5544 5539 if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_BOOT_CTRL) & 5545 5540 AUTOLOAD_DONE) 5546 5541 break; 5542 + 5547 5543 msleep(20); 5544 + if (test_bit(RTL8152_UNPLUG, &tp->flags)) 5545 + break; 5548 5546 } 5549 5547 5550 5548 data = r8153_phy_status(tp, 0);
+1 -1
drivers/net/veth.c
··· 328 328 rcu_read_lock(); 329 329 peer = rcu_dereference(priv->peer); 330 330 if (peer) { 331 - tot->rx_dropped += veth_stats_tx(peer, &packets, &bytes); 331 + veth_stats_tx(peer, &packets, &bytes); 332 332 tot->rx_bytes += bytes; 333 333 tot->rx_packets += packets; 334 334
+2 -1
drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
··· 308 308 } 309 309 310 310 /* PHY_SKU section is mandatory in B0 */ 311 - if (!mvm->nvm_sections[NVM_SECTION_TYPE_PHY_SKU].data) { 311 + if (mvm->trans->cfg->nvm_type == IWL_NVM_EXT && 312 + !mvm->nvm_sections[NVM_SECTION_TYPE_PHY_SKU].data) { 312 313 IWL_ERR(mvm, 313 314 "Can't parse phy_sku in B0, empty sections\n"); 314 315 return NULL;
+6 -3
drivers/net/wireless/mediatek/mt76/dma.c
··· 447 447 struct page *page = virt_to_head_page(data); 448 448 int offset = data - page_address(page); 449 449 struct sk_buff *skb = q->rx_head; 450 + struct skb_shared_info *shinfo = skb_shinfo(skb); 450 451 451 - offset += q->buf_offset; 452 - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset, len, 453 - q->buf_size); 452 + if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) { 453 + offset += q->buf_offset; 454 + skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len, 455 + q->buf_size); 456 + } 454 457 455 458 if (more) 456 459 return;
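The mt76 fix above simply refuses to append an RX fragment once the frag array is full, dropping the data instead of writing past `shinfo->frags`. The guard reduces to a bounds check before the append; a minimal sketch (array size and names illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_FRAGS 4   /* stand-in for ARRAY_SIZE(shinfo->frags) */

struct frags { size_t nr; int lens[MAX_FRAGS]; };

/* Append a fragment only while there is room; silently drop it
 * otherwise, as the mt76 fix does. Returns the resulting count. */
static size_t add_rx_frag(struct frags *f, int len)
{
    if (f->nr < MAX_FRAGS)
        f->lens[f->nr++] = len;
    return f->nr;
}
```

Dropping the excess fragment loses part of one oversized frame but keeps the heap intact, which is the right trade-off for a malformed or hostile RX stream.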
+1
drivers/of/of_mdio.c
··· 306 306 rc = of_mdiobus_register_phy(mdio, child, addr); 307 307 if (rc && rc != -ENODEV) 308 308 goto unregister; 309 + break; 309 310 } 310 311 } 311 312 }
+2 -2
drivers/s390/net/qeth_core.h
··· 369 369 struct qeth_buffer_pool_entry { 370 370 struct list_head list; 371 371 struct list_head init_list; 372 - void *elements[QDIO_MAX_ELEMENTS_PER_BUFFER]; 372 + struct page *elements[QDIO_MAX_ELEMENTS_PER_BUFFER]; 373 373 }; 374 374 375 375 struct qeth_qdio_buffer_pool { ··· 983 983 extern const struct device_type qeth_generic_devtype; 984 984 985 985 const char *qeth_get_cardname_short(struct qeth_card *); 986 - int qeth_realloc_buffer_pool(struct qeth_card *, int); 986 + int qeth_resize_buffer_pool(struct qeth_card *card, unsigned int count); 987 987 int qeth_core_load_discipline(struct qeth_card *, enum qeth_discipline_id); 988 988 void qeth_core_free_discipline(struct qeth_card *); 989 989
+118 -58
drivers/s390/net/qeth_core_main.c
··· 65 65 static void qeth_issue_next_read_cb(struct qeth_card *card, 66 66 struct qeth_cmd_buffer *iob, 67 67 unsigned int data_length); 68 - static void qeth_free_buffer_pool(struct qeth_card *); 69 68 static int qeth_qdio_establish(struct qeth_card *); 70 69 static void qeth_free_qdio_queues(struct qeth_card *card); 71 70 static void qeth_notify_skbs(struct qeth_qdio_out_q *queue, ··· 211 212 } 212 213 EXPORT_SYMBOL_GPL(qeth_clear_working_pool_list); 213 214 215 + static void qeth_free_pool_entry(struct qeth_buffer_pool_entry *entry) 216 + { 217 + unsigned int i; 218 + 219 + for (i = 0; i < ARRAY_SIZE(entry->elements); i++) { 220 + if (entry->elements[i]) 221 + __free_page(entry->elements[i]); 222 + } 223 + 224 + kfree(entry); 225 + } 226 + 227 + static void qeth_free_buffer_pool(struct qeth_card *card) 228 + { 229 + struct qeth_buffer_pool_entry *entry, *tmp; 230 + 231 + list_for_each_entry_safe(entry, tmp, &card->qdio.init_pool.entry_list, 232 + init_list) { 233 + list_del(&entry->init_list); 234 + qeth_free_pool_entry(entry); 235 + } 236 + } 237 + 238 + static struct qeth_buffer_pool_entry *qeth_alloc_pool_entry(unsigned int pages) 239 + { 240 + struct qeth_buffer_pool_entry *entry; 241 + unsigned int i; 242 + 243 + entry = kzalloc(sizeof(*entry), GFP_KERNEL); 244 + if (!entry) 245 + return NULL; 246 + 247 + for (i = 0; i < pages; i++) { 248 + entry->elements[i] = alloc_page(GFP_KERNEL); 249 + 250 + if (!entry->elements[i]) { 251 + qeth_free_pool_entry(entry); 252 + return NULL; 253 + } 254 + } 255 + 256 + return entry; 257 + } 258 + 214 259 static int qeth_alloc_buffer_pool(struct qeth_card *card) 215 260 { 216 - struct qeth_buffer_pool_entry *pool_entry; 217 - void *ptr; 218 - int i, j; 261 + unsigned int buf_elements = QETH_MAX_BUFFER_ELEMENTS(card); 262 + unsigned int i; 219 263 220 264 QETH_CARD_TEXT(card, 5, "alocpool"); 221 265 for (i = 0; i < card->qdio.init_pool.buf_count; ++i) { 222 - pool_entry = kzalloc(sizeof(*pool_entry), GFP_KERNEL); 223 - if (!pool_entry) { 266 + struct qeth_buffer_pool_entry *entry; 267 + 268 + entry = qeth_alloc_pool_entry(buf_elements); 269 + if (!entry) { 224 270 qeth_free_buffer_pool(card); 225 271 return -ENOMEM; 226 272 } 227 - for (j = 0; j < QETH_MAX_BUFFER_ELEMENTS(card); ++j) { 228 - ptr = (void *) __get_free_page(GFP_KERNEL); 229 - if (!ptr) { 230 - while (j > 0) 231 - free_page((unsigned long) 232 - pool_entry->elements[--j]); 233 - kfree(pool_entry); 234 - qeth_free_buffer_pool(card); 235 - return -ENOMEM; 236 - } 237 - pool_entry->elements[j] = ptr; 238 - } 239 - list_add(&pool_entry->init_list, 240 - &card->qdio.init_pool.entry_list); 273 + 274 + list_add(&entry->init_list, &card->qdio.init_pool.entry_list); 241 275 } 242 276 return 0; 243 277 } 244 278 245 - int qeth_realloc_buffer_pool(struct qeth_card *card, int bufcnt) 279 + int qeth_resize_buffer_pool(struct qeth_card *card, unsigned int count) 246 280 { 281 + unsigned int buf_elements = QETH_MAX_BUFFER_ELEMENTS(card); 282 + struct qeth_qdio_buffer_pool *pool = &card->qdio.init_pool; 283 + struct qeth_buffer_pool_entry *entry, *tmp; 284 + int delta = count - pool->buf_count; 285 + LIST_HEAD(entries); 286 + 247 287 QETH_CARD_TEXT(card, 2, "realcbp"); 248 288 249 - /* TODO: steel/add buffers from/to a running card's buffer pool (?) */ 250 - qeth_clear_working_pool_list(card); 251 - qeth_free_buffer_pool(card); 252 - card->qdio.in_buf_pool.buf_count = bufcnt; 253 - card->qdio.init_pool.buf_count = bufcnt; 254 - return qeth_alloc_buffer_pool(card); 289 + /* Defer until queue is allocated: */ 290 + if (!card->qdio.in_q) 291 + goto out; 292 + 293 + /* Remove entries from the pool: */ 294 + while (delta < 0) { 295 + entry = list_first_entry(&pool->entry_list, 296 + struct qeth_buffer_pool_entry, 297 + init_list); 298 + list_del(&entry->init_list); 299 + qeth_free_pool_entry(entry); 300 + 301 + delta++; 302 + } 303 + 304 + /* Allocate additional entries: */ 305 + while (delta > 0) { 306 + entry = qeth_alloc_pool_entry(buf_elements); 307 + if (!entry) { 308 + list_for_each_entry_safe(entry, tmp, &entries, 309 + init_list) { 310 + list_del(&entry->init_list); 311 + qeth_free_pool_entry(entry); 312 + } 313 + 314 + return -ENOMEM; 315 + } 316 + 317 + list_add(&entry->init_list, &entries); 318 + 319 + delta--; 320 + } 321 + 322 + list_splice(&entries, &pool->entry_list); 323 + 324 + out: 325 + card->qdio.in_buf_pool.buf_count = count; 326 + pool->buf_count = count; 327 + return 0; 255 328 } 256 - EXPORT_SYMBOL_GPL(qeth_realloc_buffer_pool); 329 + EXPORT_SYMBOL_GPL(qeth_resize_buffer_pool); 257 330 258 331 static void qeth_free_qdio_queue(struct qeth_qdio_q *q) 259 332 { ··· 1241 1170 } 1242 1171 EXPORT_SYMBOL_GPL(qeth_drain_output_queues); 1243 1172 1244 - static void qeth_free_buffer_pool(struct qeth_card *card) 1245 - { 1246 - struct qeth_buffer_pool_entry *pool_entry, *tmp; 1247 - int i = 0; 1248 - list_for_each_entry_safe(pool_entry, tmp, 1249 - &card->qdio.init_pool.entry_list, init_list){ 1250 - for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) 1251 - free_page((unsigned long)pool_entry->elements[i]); 1252 - list_del(&pool_entry->init_list); 1253 - kfree(pool_entry); 1254 - } 1255 - } 1256 - 1257 1173 static int qeth_osa_set_output_queues(struct qeth_card *card, bool single) 1258 1174 { 1259 1175 unsigned int count = single ? 1 : card->dev->num_tx_queues; ··· 1262 1204 if (count == 1) 1263 1205 dev_info(&card->gdev->dev, "Priority Queueing not supported\n"); 1264 1206 1265 - card->qdio.default_out_queue = single ? 0 : QETH_DEFAULT_QUEUE; 1266 1207 card->qdio.no_out_queues = count; 1267 1208 return 0; 1268 1209 } ··· 2450 2393 return; 2451 2394 2452 2395 qeth_free_cq(card); 2453 - cancel_delayed_work_sync(&card->buffer_reclaim_work); 2454 2396 for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) { 2455 2397 if (card->qdio.in_q->bufs[j].rx_skb) 2456 2398 dev_kfree_skb_any(card->qdio.in_q->bufs[j].rx_skb); ··· 2631 2575 struct list_head *plh; 2632 2576 struct qeth_buffer_pool_entry *entry; 2633 2577 int i, free; 2634 - struct page *page; 2635 2578 2636 2579 if (list_empty(&card->qdio.in_buf_pool.entry_list)) 2637 2580 return NULL; ··· 2639 2584 entry = list_entry(plh, struct qeth_buffer_pool_entry, list); 2640 2585 free = 1; 2641 2586 for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { 2642 - if (page_count(virt_to_page(entry->elements[i])) > 1) { 2587 + if (page_count(entry->elements[i]) > 1) { 2643 2588 free = 0; 2644 2589 break; 2645 2590 } ··· 2654 2599 entry = list_entry(card->qdio.in_buf_pool.entry_list.next, 2655 2600 struct qeth_buffer_pool_entry, list); 2656 2601 for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { 2657 - if (page_count(virt_to_page(entry->elements[i])) > 1) { 2658 - page = alloc_page(GFP_ATOMIC); 2659 - if (!page) { 2602 + if (page_count(entry->elements[i]) > 1) { 2603 + struct page *page = alloc_page(GFP_ATOMIC); 2604 + 2605 + if (!page) 2660 2606 return NULL; 2661 - } else { 2662 - free_page((unsigned long)entry->elements[i]); 2663 - entry->elements[i] = page_address(page); 2664 - QETH_CARD_STAT_INC(card, rx_sg_alloc_page); 2665 - } 2607 + 2608 + __free_page(entry->elements[i]); 2609 + entry->elements[i] = page; 2610 + QETH_CARD_STAT_INC(card, rx_sg_alloc_page); 2666 2611 } 2667 2612 } 2668 2613 list_del_init(&entry->list); ··· 2680 2625 ETH_HLEN + 2681 2626 sizeof(struct ipv6hdr)); 2682 2627 if (!buf->rx_skb) 2683 - return 1; 2628 + return -ENOMEM; 2684 2629 2685 2630 pool_entry = qeth_find_free_buffer_pool_entry(card); 2686 2631 if (!pool_entry) 2687 2632 2688 - return 1; 2633 + return -ENOBUFS; 2689 2634 2690 2635 /* 2691 2636 * since the buffer is accessed only from the input_tasklet ··· 2698 2643 for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) { 2699 2644 buf->buffer->element[i].length = PAGE_SIZE; 2700 2645 buf->buffer->element[i].addr = 2701 - virt_to_phys(pool_entry->elements[i]); 2646 + page_to_phys(pool_entry->elements[i]); 2702 2647 if (i == QETH_MAX_BUFFER_ELEMENTS(card) - 1) 2703 2648 buf->buffer->element[i].eflags = SBAL_EFLAGS_LAST_ENTRY; 2704 2649 else ··· 2730 2675 /* inbound queue */ 2731 2676 qdio_reset_buffers(card->qdio.in_q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q); 2732 2677 memset(&card->rx, 0, sizeof(struct qeth_rx)); 2678 + 2733 2679 qeth_initialize_working_pool_list(card); 2734 2680 /*give only as many buffers to hardware as we have buffer pool entries*/ 2735 - for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; ++i) 2736 - qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]); 2681 + for (i = 0; i < card->qdio.in_buf_pool.buf_count - 1; i++) { 2682 + rc = qeth_init_input_buffer(card, &card->qdio.in_q->bufs[i]); 2683 + if (rc) 2684 + return rc; 2685 + } 2686 + 2737 2687 card->qdio.in_q->next_buf_to_init = 2738 2688 card->qdio.in_buf_pool.buf_count - 1; 2739 2689 rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0,
+4 -5
drivers/s390/net/qeth_core_sys.c
··· 247 247 struct device_attribute *attr, const char *buf, size_t count) 248 248 { 249 249 struct qeth_card *card = dev_get_drvdata(dev); 250 + unsigned int cnt; 250 251 char *tmp; 251 - int cnt, old_cnt; 252 252 int rc = 0; 253 253 254 254 mutex_lock(&card->conf_mutex); ··· 257 257 goto out; 258 258 } 259 259 260 - old_cnt = card->qdio.in_buf_pool.buf_count; 261 260 cnt = simple_strtoul(buf, &tmp, 10); 262 261 cnt = (cnt < QETH_IN_BUF_COUNT_MIN) ? QETH_IN_BUF_COUNT_MIN : 263 262 ((cnt > QETH_IN_BUF_COUNT_MAX) ? QETH_IN_BUF_COUNT_MAX : cnt); 264 - if (old_cnt != cnt) { 265 - rc = qeth_realloc_buffer_pool(card, cnt); 266 - } 263 + 264 + rc = qeth_resize_buffer_pool(card, cnt); 265 + 267 266 out: 268 267 mutex_unlock(&card->conf_mutex); 269 268 return rc ? rc : count;
+1
drivers/s390/net/qeth_l2_main.c
··· 284 284 if (card->state == CARD_STATE_SOFTSETUP) { 285 285 qeth_clear_ipacmd_list(card); 286 286 qeth_drain_output_queues(card); 287 + cancel_delayed_work_sync(&card->buffer_reclaim_work); 287 288 card->state = CARD_STATE_DOWN; 288 289 } 289 290
+1
drivers/s390/net/qeth_l3_main.c
··· 1178 1178 qeth_l3_clear_ip_htable(card, 1); 1179 1179 qeth_clear_ipacmd_list(card); 1180 1180 qeth_drain_output_queues(card); 1181 + cancel_delayed_work_sync(&card->buffer_reclaim_work); 1181 1182 card->state = CARD_STATE_DOWN; 1182 1183 } 1183 1184
+4 -5
drivers/s390/net/qeth_l3_sys.c
··· 206 206 qdio_get_ssqd_desc(CARD_DDEV(card), &card->ssqd); 207 207 if (card->ssqd.qdioac2 & CHSC_AC2_SNIFFER_AVAILABLE) { 208 208 card->options.sniffer = i; 209 - if (card->qdio.init_pool.buf_count != 210 - QETH_IN_BUF_COUNT_MAX) 211 - qeth_realloc_buffer_pool(card, 212 - QETH_IN_BUF_COUNT_MAX); 213 - } else 209 + qeth_resize_buffer_pool(card, QETH_IN_BUF_COUNT_MAX); 210 + } else { 214 211 rc = -EPERM; 212 + } 213 + 215 214 break; 216 215 default: 217 216 rc = -EINVAL;
+12 -6
include/linux/inet_diag.h
··· 2 2 #ifndef _INET_DIAG_H_ 3 3 #define _INET_DIAG_H_ 1 4 4 5 + #include <net/netlink.h> 5 6 #include <uapi/linux/inet_diag.h> 6 7 7 - struct net; 8 - struct sock; 9 8 struct inet_hashinfo; 10 - struct nlattr; 11 - struct nlmsghdr; 12 - struct sk_buff; 13 - struct netlink_callback; 14 9 15 10 struct inet_diag_handler { 16 11 void (*dump)(struct sk_buff *skb, ··· 57 62 58 63 void inet_diag_msg_common_fill(struct inet_diag_msg *r, struct sock *sk); 59 64 65 + static inline size_t inet_diag_msg_attrs_size(void) 66 + { 67 + return nla_total_size(1) /* INET_DIAG_SHUTDOWN */ 68 + + nla_total_size(1) /* INET_DIAG_TOS */ 69 + #if IS_ENABLED(CONFIG_IPV6) 70 + + nla_total_size(1) /* INET_DIAG_TCLASS */ 71 + + nla_total_size(1) /* INET_DIAG_SKV6ONLY */ 72 + #endif 73 + + nla_total_size(4) /* INET_DIAG_MARK */ 74 + + nla_total_size(4); /* INET_DIAG_CLASS_ID */ 75 + } 60 76 int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb, 61 77 struct inet_diag_msg *r, int ext, 62 78 struct user_namespace *user_ns, bool net_admin);
+3
include/linux/phy.h
··· 357 357 * is_gigabit_capable: Set to true if PHY supports 1000Mbps 358 358 * has_fixups: Set to true if this phy has fixups/quirks. 359 359 * suspended: Set to true if this phy has been suspended successfully. 360 + * suspended_by_mdio_bus: Set to true if this phy was suspended by MDIO bus. 360 361 * sysfs_links: Internal boolean tracking sysfs symbolic links setup/removal. 361 362 * loopback_enabled: Set true if this phy has been loopbacked successfully. 362 363 * state: state of the PHY for management purposes ··· 397 396 unsigned is_gigabit_capable:1; 398 397 unsigned has_fixups:1; 399 398 unsigned suspended:1; 399 + unsigned suspended_by_mdio_bus:1; 400 400 unsigned sysfs_links:1; 401 401 unsigned loopback_enabled:1; 402 402 ··· 559 557 /* 560 558 * Checks if the PHY generated an interrupt. 561 559 * For multi-PHY devices with shared PHY interrupt pin 560 + * Set interrupt bits have to be cleared. 562 561 */ 563 562 int (*did_interrupt)(struct phy_device *phydev); 564 563
+1 -1
include/linux/rhashtable.h
··· 972 972 /** 973 973 * rhashtable_lookup_get_insert_key - lookup and insert object into hash table 974 974 * @ht: hash table 975 + * @key: key 975 976 * @obj: pointer to hash head inside object 976 977 * @params: hash table parameters 977 - * @data: pointer to element data already in hashes 978 978 * 979 979 * Just like rhashtable_lookup_insert_key(), but this function returns the 980 980 * object if it exists, NULL if it does not and the insertion was successful,
+1
include/net/fib_rules.h
··· 108 108 [FRA_OIFNAME] = { .type = NLA_STRING, .len = IFNAMSIZ - 1 }, \ 109 109 [FRA_PRIORITY] = { .type = NLA_U32 }, \ 110 110 [FRA_FWMARK] = { .type = NLA_U32 }, \ 111 + [FRA_TUN_ID] = { .type = NLA_U64 }, \ 111 112 [FRA_FWMASK] = { .type = NLA_U32 }, \ 112 113 [FRA_TABLE] = { .type = NLA_U32 }, \ 113 114 [FRA_SUPPRESS_PREFIXLEN] = { .type = NLA_U32 }, \
+1 -1
include/soc/mscc/ocelot_dev.h
··· 74 74 #define DEV_MAC_TAGS_CFG_TAG_ID_M GENMASK(31, 16) 75 75 #define DEV_MAC_TAGS_CFG_TAG_ID_X(x) (((x) & GENMASK(31, 16)) >> 16) 76 76 #define DEV_MAC_TAGS_CFG_VLAN_LEN_AWR_ENA BIT(2) 77 - #define DEV_MAC_TAGS_CFG_PB_ENA BIT(1) 77 + #define DEV_MAC_TAGS_CFG_VLAN_DBL_AWR_ENA BIT(1) 78 78 #define DEV_MAC_TAGS_CFG_VLAN_AWR_ENA BIT(0) 79 79 80 80 #define DEV_MAC_ADV_CHK_CFG 0x2c
+2
include/uapi/linux/in.h
··· 74 74 #define IPPROTO_UDPLITE IPPROTO_UDPLITE 75 75 IPPROTO_MPLS = 137, /* MPLS in IP (RFC 4023) */ 76 76 #define IPPROTO_MPLS IPPROTO_MPLS 77 + IPPROTO_ETHERNET = 143, /* Ethernet-within-IPv6 Encapsulation */ 78 + #define IPPROTO_ETHERNET IPPROTO_ETHERNET 77 79 IPPROTO_RAW = 255, /* Raw IP packets */ 78 80 #define IPPROTO_RAW IPPROTO_RAW 79 81 IPPROTO_MPTCP = 262, /* Multipath TCP connection */
+4
kernel/cgroup/cgroup.c
··· 6271 6271 return; 6272 6272 } 6273 6273 6274 + /* Don't associate the sock with unrelated interrupted task's cgroup. */ 6275 + if (in_interrupt()) 6276 + return; 6277 + 6274 6278 rcu_read_lock(); 6275 6279 6276 6280 while (true) {
+2 -12
mm/memcontrol.c
··· 6682 6682 if (!mem_cgroup_sockets_enabled) 6683 6683 return; 6684 6684 6685 - /* 6686 - * Socket cloning can throw us here with sk_memcg already 6687 - * filled. It won't however, necessarily happen from 6688 - * process context. So the test for root memcg given 6689 - * the current task's memcg won't help us in this case. 6690 - * 6691 - * Respecting the original socket's memcg is a better 6692 - * decision in this case. 6693 - */ 6694 - if (sk->sk_memcg) { 6695 - css_get(&sk->sk_memcg->css); 6685 + /* Do not associate the sock with unrelated interrupted task's memcg. */ 6686 + if (in_interrupt()) 6696 6687 return; 6697 - } 6698 6688 6699 6689 rcu_read_lock(); 6700 6690 memcg = mem_cgroup_from_task(current);
+4
net/batman-adv/bat_iv_ogm.c
··· 789 789 790 790 lockdep_assert_held(&hard_iface->bat_iv.ogm_buff_mutex); 791 791 792 + /* interface already disabled by batadv_iv_ogm_iface_disable */ 793 + if (!*ogm_buff) 794 + return; 795 + 792 796 /* the interface gets activated here to avoid race conditions between 793 797 * the moment of activating the interface in 794 798 * hardif_activate_interface() where the originator mac is set and
+2 -1
net/caif/caif_dev.c
··· 112 112 caif_device_list(dev_net(dev)); 113 113 struct caif_device_entry *caifd; 114 114 115 - list_for_each_entry_rcu(caifd, &caifdevs->list, list) { 115 + list_for_each_entry_rcu(caifd, &caifdevs->list, list, 116 + lockdep_rtnl_is_held()) { 116 117 if (caifd->netdev == dev) 117 118 return caifd; 118 119 }
+21 -12
net/core/devlink.c
··· 3352 3352 struct genl_info *info, 3353 3353 union devlink_param_value *value) 3354 3354 { 3355 + struct nlattr *param_data; 3355 3356 int len; 3356 3357 3357 - if (param->type != DEVLINK_PARAM_TYPE_BOOL && 3358 - !info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]) 3358 + param_data = info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]; 3359 + 3360 + if (param->type != DEVLINK_PARAM_TYPE_BOOL && !param_data) 3359 3361 return -EINVAL; 3360 3362 3361 3363 switch (param->type) { 3362 3364 case DEVLINK_PARAM_TYPE_U8: 3363 - value->vu8 = nla_get_u8(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]); 3365 + if (nla_len(param_data) != sizeof(u8)) 3366 + return -EINVAL; 3367 + value->vu8 = nla_get_u8(param_data); 3364 3368 break; 3365 3369 case DEVLINK_PARAM_TYPE_U16: 3366 - value->vu16 = nla_get_u16(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]); 3370 + if (nla_len(param_data) != sizeof(u16)) 3371 + return -EINVAL; 3372 + value->vu16 = nla_get_u16(param_data); 3367 3373 break; 3368 3374 case DEVLINK_PARAM_TYPE_U32: 3369 - value->vu32 = nla_get_u32(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]); 3375 + if (nla_len(param_data) != sizeof(u32)) 3376 + return -EINVAL; 3377 + value->vu32 = nla_get_u32(param_data); 3370 3378 break; 3371 3379 case DEVLINK_PARAM_TYPE_STRING: 3372 - len = strnlen(nla_data(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]), 3373 - nla_len(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA])); 3374 - if (len == nla_len(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA]) || 3380 + len = strnlen(nla_data(param_data), nla_len(param_data)); 3381 + if (len == nla_len(param_data) || 3375 3382 len >= __DEVLINK_PARAM_MAX_STRING_VALUE) 3376 3383 return -EINVAL; 3377 - strcpy(value->vstr, 3378 - nla_data(info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA])); 3384 + strcpy(value->vstr, nla_data(param_data)); 3379 3385 break; 3380 3386 case DEVLINK_PARAM_TYPE_BOOL: 3381 - value->vbool = info->attrs[DEVLINK_ATTR_PARAM_VALUE_DATA] ? 
3382 - true : false; 3387 + if (param_data && nla_len(param_data)) 3388 + return -EINVAL; 3389 + value->vbool = nla_get_flag(param_data); 3383 3390 break; 3384 3391 } 3385 3392 return 0; ··· 5958 5951 [DEVLINK_ATTR_PARAM_VALUE_CMODE] = { .type = NLA_U8 }, 5959 5952 [DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING }, 5960 5953 [DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .type = NLA_U32 }, 5954 + [DEVLINK_ATTR_REGION_CHUNK_ADDR] = { .type = NLA_U64 }, 5955 + [DEVLINK_ATTR_REGION_CHUNK_LEN] = { .type = NLA_U64 }, 5961 5956 [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING }, 5962 5957 [DEVLINK_ATTR_HEALTH_REPORTER_GRACEFUL_PERIOD] = { .type = NLA_U64 }, 5963 5958 [DEVLINK_ATTR_HEALTH_REPORTER_AUTO_RECOVER] = { .type = NLA_U8 },
+37 -10
net/core/netclassid_cgroup.c
··· 53 53 kfree(css_cls_state(css)); 54 54 } 55 55 56 + /* 57 + * To avoid freezing of sockets creation for tasks with big number of threads 58 + * and opened sockets lets release file_lock every 1000 iterated descriptors. 59 + * New sockets will already have been created with new classid. 60 + */ 61 + 62 + struct update_classid_context { 63 + u32 classid; 64 + unsigned int batch; 65 + }; 66 + 67 + #define UPDATE_CLASSID_BATCH 1000 68 + 56 69 static int update_classid_sock(const void *v, struct file *file, unsigned n) 57 70 { 58 71 int err; 72 + struct update_classid_context *ctx = (void *)v; 59 73 struct socket *sock = sock_from_file(file, &err); 60 74 61 75 if (sock) { 62 76 spin_lock(&cgroup_sk_update_lock); 63 - sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, 64 - (unsigned long)v); 77 + sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid); 65 78 spin_unlock(&cgroup_sk_update_lock); 66 79 } 80 + if (--ctx->batch == 0) { 81 + ctx->batch = UPDATE_CLASSID_BATCH; 82 + return n + 1; 83 + } 67 84 return 0; 85 + } 86 + 87 + static void update_classid_task(struct task_struct *p, u32 classid) 88 + { 89 + struct update_classid_context ctx = { 90 + .classid = classid, 91 + .batch = UPDATE_CLASSID_BATCH 92 + }; 93 + unsigned int fd = 0; 94 + 95 + do { 96 + task_lock(p); 97 + fd = iterate_fd(p->files, fd, update_classid_sock, &ctx); 98 + task_unlock(p); 99 + cond_resched(); 100 + } while (fd); 68 101 } 69 102 70 103 static void cgrp_attach(struct cgroup_taskset *tset) ··· 106 73 struct task_struct *p; 107 74 108 75 cgroup_taskset_for_each(p, css, tset) { 109 - task_lock(p); 110 - iterate_fd(p->files, 0, update_classid_sock, 111 - (void *)(unsigned long)css_cls_state(css)->classid); 112 - task_unlock(p); 76 + update_classid_task(p, css_cls_state(css)->classid); 113 77 } 114 78 } 115 79 ··· 128 98 129 99 css_task_iter_start(css, 0, &it); 130 100 while ((p = css_task_iter_next(&it))) { 131 - task_lock(p); 132 - iterate_fd(p->files, 0, update_classid_sock, 133 - 
(void *)(unsigned long)cs->classid); 134 - task_unlock(p); 101 + update_classid_task(p, cs->classid); 135 102 cond_resched(); 136 103 } 137 104 css_task_iter_end(&it);
+4 -1
net/core/sock.c
··· 1830 1830 atomic_set(&newsk->sk_zckey, 0); 1831 1831 1832 1832 sock_reset_flag(newsk, SOCK_DONE); 1833 - mem_cgroup_sk_alloc(newsk); 1833 + 1834 + /* sk->sk_memcg will be populated at accept() time */ 1835 + newsk->sk_memcg = NULL; 1836 + 1834 1837 cgroup_sk_alloc(&newsk->sk_cgrp_data); 1835 1838 1836 1839 rcu_read_lock();
+2
net/dsa/dsa_priv.h
··· 117 117 /* port.c */ 118 118 int dsa_port_set_state(struct dsa_port *dp, u8 state, 119 119 struct switchdev_trans *trans); 120 + int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy); 120 121 int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy); 122 + void dsa_port_disable_rt(struct dsa_port *dp); 121 123 void dsa_port_disable(struct dsa_port *dp); 122 124 int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br); 123 125 void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br);
+35 -9
net/dsa/port.c
··· 63 63 pr_err("DSA: failed to set STP state %u (%d)\n", state, err); 64 64 } 65 65 66 - int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy) 66 + int dsa_port_enable_rt(struct dsa_port *dp, struct phy_device *phy) 67 67 { 68 68 struct dsa_switch *ds = dp->ds; 69 69 int port = dp->index; ··· 78 78 if (!dp->bridge_dev) 79 79 dsa_port_set_state_now(dp, BR_STATE_FORWARDING); 80 80 81 + if (dp->pl) 82 + phylink_start(dp->pl); 83 + 81 84 return 0; 82 85 } 83 86 84 - void dsa_port_disable(struct dsa_port *dp) 87 + int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy) 88 + { 89 + int err; 90 + 91 + rtnl_lock(); 92 + err = dsa_port_enable_rt(dp, phy); 93 + rtnl_unlock(); 94 + 95 + return err; 96 + } 97 + 98 + void dsa_port_disable_rt(struct dsa_port *dp) 85 99 { 86 100 struct dsa_switch *ds = dp->ds; 87 101 int port = dp->index; 102 + 103 + if (dp->pl) 104 + phylink_stop(dp->pl); 88 105 89 106 if (!dp->bridge_dev) 90 107 dsa_port_set_state_now(dp, BR_STATE_DISABLED); 91 108 92 109 if (ds->ops->port_disable) 93 110 ds->ops->port_disable(ds, port); 111 + } 112 + 113 + void dsa_port_disable(struct dsa_port *dp) 114 + { 115 + rtnl_lock(); 116 + dsa_port_disable_rt(dp); 117 + rtnl_unlock(); 94 118 } 95 119 96 120 int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br) ··· 638 614 goto err_phy_connect; 639 615 } 640 616 641 - rtnl_lock(); 642 - phylink_start(dp->pl); 643 - rtnl_unlock(); 644 - 645 617 return 0; 646 618 647 619 err_phy_connect: ··· 648 628 int dsa_port_link_register_of(struct dsa_port *dp) 649 629 { 650 630 struct dsa_switch *ds = dp->ds; 631 + struct device_node *phy_np; 651 632 652 - if (!ds->ops->adjust_link) 653 - return dsa_port_phylink_register(dp); 633 + if (!ds->ops->adjust_link) { 634 + phy_np = of_parse_phandle(dp->dn, "phy-handle", 0); 635 + if (of_phy_is_fixed_link(dp->dn) || phy_np) 636 + return dsa_port_phylink_register(dp); 637 + return 0; 638 + } 654 639 655 640 dev_warn(ds->dev, 656 641 "Using legacy PHYLIB 
callbacks. Please migrate to PHYLINK!\n"); ··· 670 645 { 671 646 struct dsa_switch *ds = dp->ds; 672 647 673 - if (!ds->ops->adjust_link) { 648 + if (!ds->ops->adjust_link && dp->pl) { 674 649 rtnl_lock(); 675 650 phylink_disconnect_phy(dp->pl); 676 651 rtnl_unlock(); 677 652 phylink_destroy(dp->pl); 653 + dp->pl = NULL; 678 654 return; 679 655 } 680 656
+2 -6
net/dsa/slave.c
··· 88 88 goto clear_allmulti; 89 89 } 90 90 91 - err = dsa_port_enable(dp, dev->phydev); 91 + err = dsa_port_enable_rt(dp, dev->phydev); 92 92 if (err) 93 93 goto clear_promisc; 94 - 95 - phylink_start(dp->pl); 96 94 97 95 return 0; 98 96 ··· 112 114 struct net_device *master = dsa_slave_to_master(dev); 113 115 struct dsa_port *dp = dsa_slave_to_port(dev); 114 116 115 - phylink_stop(dp->pl); 116 - 117 - dsa_port_disable(dp); 117 + dsa_port_disable_rt(dp); 118 118 119 119 dev_mc_unsync(master, dev); 120 120 dev_uc_unsync(master, dev);
+6
net/ieee802154/nl_policy.c
··· 21 21 [IEEE802154_ATTR_HW_ADDR] = { .type = NLA_HW_ADDR, }, 22 22 [IEEE802154_ATTR_PAN_ID] = { .type = NLA_U16, }, 23 23 [IEEE802154_ATTR_CHANNEL] = { .type = NLA_U8, }, 24 + [IEEE802154_ATTR_BCN_ORD] = { .type = NLA_U8, }, 25 + [IEEE802154_ATTR_SF_ORD] = { .type = NLA_U8, }, 26 + [IEEE802154_ATTR_PAN_COORD] = { .type = NLA_U8, }, 27 + [IEEE802154_ATTR_BAT_EXT] = { .type = NLA_U8, }, 28 + [IEEE802154_ATTR_COORD_REALIGN] = { .type = NLA_U8, }, 24 29 [IEEE802154_ATTR_PAGE] = { .type = NLA_U8, }, 30 + [IEEE802154_ATTR_DEV_TYPE] = { .type = NLA_U8, }, 25 31 [IEEE802154_ATTR_COORD_SHORT_ADDR] = { .type = NLA_U16, }, 26 32 [IEEE802154_ATTR_COORD_HW_ADDR] = { .type = NLA_HW_ADDR, }, 27 33 [IEEE802154_ATTR_COORD_PAN_ID] = { .type = NLA_U16, },
+10 -2
net/ipv4/gre_demux.c
··· 56 56 } 57 57 EXPORT_SYMBOL_GPL(gre_del_protocol); 58 58 59 - /* Fills in tpi and returns header length to be pulled. */ 59 + /* Fills in tpi and returns header length to be pulled. 60 + * Note that caller must use pskb_may_pull() before pulling GRE header. 61 + */ 60 62 int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi, 61 63 bool *csum_err, __be16 proto, int nhs) 62 64 { ··· 112 110 * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header 113 111 */ 114 112 if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) { 113 + u8 _val, *val; 114 + 115 + val = skb_header_pointer(skb, nhs + hdr_len, 116 + sizeof(_val), &_val); 117 + if (!val) 118 + return -EINVAL; 115 119 tpi->proto = proto; 116 - if ((*(u8 *)options & 0xF0) != 0x40) 120 + if ((*val & 0xF0) != 0x40) 117 121 hdr_len += 4; 118 122 } 119 123 tpi->hdr_len = hdr_len;
+20
net/ipv4/inet_connection_sock.c
··· 482 482 } 483 483 spin_unlock_bh(&queue->fastopenq.lock); 484 484 } 485 + 485 486 out: 486 487 release_sock(sk); 488 + if (newsk && mem_cgroup_sockets_enabled) { 489 + int amt; 490 + 491 + /* atomically get the memory usage, set and charge the 492 + * newsk->sk_memcg. 493 + */ 494 + lock_sock(newsk); 495 + 496 + /* The socket has not been accepted yet, no need to look at 497 + * newsk->sk_wmem_queued. 498 + */ 499 + amt = sk_mem_pages(newsk->sk_forward_alloc + 500 + atomic_read(&newsk->sk_rmem_alloc)); 501 + mem_cgroup_sk_alloc(newsk); 502 + if (newsk->sk_memcg && amt) 503 + mem_cgroup_charge_skmem(newsk->sk_memcg, amt); 504 + 505 + release_sock(newsk); 506 + } 487 507 if (req) 488 508 reqsk_put(req); 489 509 return newsk;
+20 -24
net/ipv4/inet_diag.c
··· 100 100 aux = handler->idiag_get_aux_size(sk, net_admin); 101 101 102 102 return nla_total_size(sizeof(struct tcp_info)) 103 - + nla_total_size(1) /* INET_DIAG_SHUTDOWN */ 104 - + nla_total_size(1) /* INET_DIAG_TOS */ 105 - + nla_total_size(1) /* INET_DIAG_TCLASS */ 106 - + nla_total_size(4) /* INET_DIAG_MARK */ 107 - + nla_total_size(4) /* INET_DIAG_CLASS_ID */ 108 - + nla_total_size(sizeof(struct inet_diag_meminfo)) 109 103 + nla_total_size(sizeof(struct inet_diag_msg)) 104 + + inet_diag_msg_attrs_size() 105 + + nla_total_size(sizeof(struct inet_diag_meminfo)) 110 106 + nla_total_size(SK_MEMINFO_VARS * sizeof(u32)) 111 107 + nla_total_size(TCP_CA_NAME_MAX) 112 108 + nla_total_size(sizeof(struct tcpvegas_info)) ··· 142 146 143 147 if (net_admin && nla_put_u32(skb, INET_DIAG_MARK, sk->sk_mark)) 144 148 goto errout; 149 + 150 + if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) || 151 + ext & (1 << (INET_DIAG_TCLASS - 1))) { 152 + u32 classid = 0; 153 + 154 + #ifdef CONFIG_SOCK_CGROUP_DATA 155 + classid = sock_cgroup_classid(&sk->sk_cgrp_data); 156 + #endif 157 + /* Fallback to socket priority if class id isn't set. 158 + * Classful qdiscs use it as direct reference to class. 159 + * For cgroup2 classid is always zero. 160 + */ 161 + if (!classid) 162 + classid = sk->sk_priority; 163 + 164 + if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid)) 165 + goto errout; 166 + } 145 167 146 168 r->idiag_uid = from_kuid_munged(user_ns, sock_i_uid(sk)); 147 169 r->idiag_inode = sock_i_ino(sk); ··· 295 281 sz = ca_ops->get_info(sk, ext, &attr, &info); 296 282 rcu_read_unlock(); 297 283 if (sz && nla_put(skb, attr, sz, &info) < 0) 298 - goto errout; 299 - } 300 - 301 - if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) || 302 - ext & (1 << (INET_DIAG_TCLASS - 1))) { 303 - u32 classid = 0; 304 - 305 - #ifdef CONFIG_SOCK_CGROUP_DATA 306 - classid = sock_cgroup_classid(&sk->sk_cgrp_data); 307 - #endif 308 - /* Fallback to socket priority if class id isn't set. 
309 - * Classful qdiscs use it as direct reference to class. 310 - * For cgroup2 classid is always zero. 311 - */ 312 - if (!classid) 313 - classid = sk->sk_priority; 314 - 315 - if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid)) 316 284 goto errout; 317 285 } 318 286
+3 -2
net/ipv4/raw_diag.c
··· 100 100 if (IS_ERR(sk)) 101 101 return PTR_ERR(sk); 102 102 103 - rep = nlmsg_new(sizeof(struct inet_diag_msg) + 104 - sizeof(struct inet_diag_meminfo) + 64, 103 + rep = nlmsg_new(nla_total_size(sizeof(struct inet_diag_msg)) + 104 + inet_diag_msg_attrs_size() + 105 + nla_total_size(sizeof(struct inet_diag_meminfo)) + 64, 105 106 GFP_KERNEL); 106 107 if (!rep) { 107 108 sock_put(sk);
+3 -2
net/ipv4/udp_diag.c
··· 64 64 goto out; 65 65 66 66 err = -ENOMEM; 67 - rep = nlmsg_new(sizeof(struct inet_diag_msg) + 68 - sizeof(struct inet_diag_meminfo) + 64, 67 + rep = nlmsg_new(nla_total_size(sizeof(struct inet_diag_msg)) + 68 + inet_diag_msg_attrs_size() + 69 + nla_total_size(sizeof(struct inet_diag_meminfo)) + 64, 69 70 GFP_KERNEL); 70 71 if (!rep) 71 72 goto out;
+40 -11
net/ipv6/addrconf.c
··· 1226 1226 } 1227 1227 1228 1228 static void 1229 - cleanup_prefix_route(struct inet6_ifaddr *ifp, unsigned long expires, bool del_rt) 1229 + cleanup_prefix_route(struct inet6_ifaddr *ifp, unsigned long expires, 1230 + bool del_rt, bool del_peer) 1230 1231 { 1231 1232 struct fib6_info *f6i; 1232 1233 1233 - f6i = addrconf_get_prefix_route(&ifp->addr, ifp->prefix_len, 1234 + f6i = addrconf_get_prefix_route(del_peer ? &ifp->peer_addr : &ifp->addr, 1235 + ifp->prefix_len, 1234 1236 ifp->idev->dev, 0, RTF_DEFAULT, true); 1235 1237 if (f6i) { 1236 1238 if (del_rt) ··· 1295 1293 1296 1294 if (action != CLEANUP_PREFIX_RT_NOP) { 1297 1295 cleanup_prefix_route(ifp, expires, 1298 - action == CLEANUP_PREFIX_RT_DEL); 1296 + action == CLEANUP_PREFIX_RT_DEL, false); 1299 1297 } 1300 1298 1301 1299 /* clean up prefsrc entries */ ··· 3347 3345 (dev->type != ARPHRD_NONE) && 3348 3346 (dev->type != ARPHRD_RAWIP)) { 3349 3347 /* Alas, we support only Ethernet autoconfiguration. */ 3348 + idev = __in6_dev_get(dev); 3349 + if (!IS_ERR_OR_NULL(idev) && dev->flags & IFF_UP && 3350 + dev->flags & IFF_MULTICAST) 3351 + ipv6_mc_up(idev); 3350 3352 return; 3351 3353 } 3352 3354 ··· 4592 4586 } 4593 4587 4594 4588 static int modify_prefix_route(struct inet6_ifaddr *ifp, 4595 - unsigned long expires, u32 flags) 4589 + unsigned long expires, u32 flags, 4590 + bool modify_peer) 4596 4591 { 4597 4592 struct fib6_info *f6i; 4598 4593 u32 prio; 4599 4594 4600 - f6i = addrconf_get_prefix_route(&ifp->addr, ifp->prefix_len, 4595 + f6i = addrconf_get_prefix_route(modify_peer ? &ifp->peer_addr : &ifp->addr, 4596 + ifp->prefix_len, 4601 4597 ifp->idev->dev, 0, RTF_DEFAULT, true); 4602 4598 if (!f6i) 4603 4599 return -ENOENT; ··· 4610 4602 ip6_del_rt(dev_net(ifp->idev->dev), f6i); 4611 4603 4612 4604 /* add new one */ 4613 - addrconf_prefix_route(&ifp->addr, ifp->prefix_len, 4605 + addrconf_prefix_route(modify_peer ? 
&ifp->peer_addr : &ifp->addr, 4606 + ifp->prefix_len, 4614 4607 ifp->rt_priority, ifp->idev->dev, 4615 4608 expires, flags, GFP_KERNEL); 4616 4609 } else { ··· 4633 4624 unsigned long timeout; 4634 4625 bool was_managetempaddr; 4635 4626 bool had_prefixroute; 4627 + bool new_peer = false; 4636 4628 4637 4629 ASSERT_RTNL(); 4638 4630 ··· 4665 4655 cfg->preferred_lft = timeout; 4666 4656 } 4667 4657 4658 + if (cfg->peer_pfx && 4659 + memcmp(&ifp->peer_addr, cfg->peer_pfx, sizeof(struct in6_addr))) { 4660 + if (!ipv6_addr_any(&ifp->peer_addr)) 4661 + cleanup_prefix_route(ifp, expires, true, true); 4662 + new_peer = true; 4663 + } 4664 + 4668 4665 spin_lock_bh(&ifp->lock); 4669 4666 was_managetempaddr = ifp->flags & IFA_F_MANAGETEMPADDR; 4670 4667 had_prefixroute = ifp->flags & IFA_F_PERMANENT && ··· 4687 4670 if (cfg->rt_priority && cfg->rt_priority != ifp->rt_priority) 4688 4671 ifp->rt_priority = cfg->rt_priority; 4689 4672 4673 + if (new_peer) 4674 + ifp->peer_addr = *cfg->peer_pfx; 4675 + 4690 4676 spin_unlock_bh(&ifp->lock); 4691 4677 if (!(ifp->flags&IFA_F_TENTATIVE)) 4692 4678 ipv6_ifa_notify(0, ifp); ··· 4698 4678 int rc = -ENOENT; 4699 4679 4700 4680 if (had_prefixroute) 4701 - rc = modify_prefix_route(ifp, expires, flags); 4681 + rc = modify_prefix_route(ifp, expires, flags, false); 4702 4682 4703 4683 /* prefix route could have been deleted; if so restore it */ 4704 4684 if (rc == -ENOENT) { 4705 4685 addrconf_prefix_route(&ifp->addr, ifp->prefix_len, 4686 + ifp->rt_priority, ifp->idev->dev, 4687 + expires, flags, GFP_KERNEL); 4688 + } 4689 + 4690 + if (had_prefixroute && !ipv6_addr_any(&ifp->peer_addr)) 4691 + rc = modify_prefix_route(ifp, expires, flags, true); 4692 + 4693 + if (rc == -ENOENT && !ipv6_addr_any(&ifp->peer_addr)) { 4694 + addrconf_prefix_route(&ifp->peer_addr, ifp->prefix_len, 4706 4695 ifp->rt_priority, ifp->idev->dev, 4707 4696 expires, flags, GFP_KERNEL); 4708 4697 } ··· 4725 4696 4726 4697 if (action != CLEANUP_PREFIX_RT_NOP) { 4727 
4698 cleanup_prefix_route(ifp, rt_expires, 4728 - action == CLEANUP_PREFIX_RT_DEL); 4699 + action == CLEANUP_PREFIX_RT_DEL, false); 4729 4700 } 4730 4701 } 4731 4702 ··· 6012 5983 if (ifp->idev->cnf.forwarding) 6013 5984 addrconf_join_anycast(ifp); 6014 5985 if (!ipv6_addr_any(&ifp->peer_addr)) 6015 - addrconf_prefix_route(&ifp->peer_addr, 128, 0, 6016 - ifp->idev->dev, 0, 0, 6017 - GFP_ATOMIC); 5986 + addrconf_prefix_route(&ifp->peer_addr, 128, 5987 + ifp->rt_priority, ifp->idev->dev, 5988 + 0, 0, GFP_ATOMIC); 6018 5989 break; 6019 5990 case RTM_DELADDR: 6020 5991 if (ifp->idev->cnf.forwarding)
+1 -1
net/ipv6/seg6_iptunnel.c
··· 268 268 skb_mac_header_rebuild(skb); 269 269 skb_push(skb, skb->mac_len); 270 270 271 - err = seg6_do_srh_encap(skb, tinfo->srh, NEXTHDR_NONE); 271 + err = seg6_do_srh_encap(skb, tinfo->srh, IPPROTO_ETHERNET); 272 272 if (err) 273 273 return err; 274 274
+1 -1
net/ipv6/seg6_local.c
··· 282 282 struct net_device *odev; 283 283 struct ethhdr *eth; 284 284 285 - if (!decap_and_validate(skb, NEXTHDR_NONE)) 285 + if (!decap_and_validate(skb, IPPROTO_ETHERNET)) 286 286 goto drop; 287 287 288 288 if (!pskb_may_pull(skb, ETH_HLEN))
+2 -1
net/mac80211/mesh_hwmp.c
··· 1152 1152 } 1153 1153 } 1154 1154 1155 - if (!(mpath->flags & MESH_PATH_RESOLVING)) 1155 + if (!(mpath->flags & MESH_PATH_RESOLVING) && 1156 + mesh_path_sel_is_hwmp(sdata)) 1156 1157 mesh_queue_preq(mpath, PREQ_Q_F_START); 1157 1158 1158 1159 if (skb_queue_len(&mpath->frame_queue) >= MESH_FRAME_QUEUE_LEN)
+17 -2
net/mptcp/options.c
··· 334 334 struct mptcp_sock *msk; 335 335 unsigned int ack_size; 336 336 bool ret = false; 337 + bool can_ack; 338 + u64 ack_seq; 337 339 u8 tcp_fin; 338 340 339 341 if (skb) { ··· 362 360 ret = true; 363 361 } 364 362 363 + /* passive sockets msk will set the 'can_ack' after accept(), even 364 + * if the first subflow may have the already the remote key handy 365 + */ 366 + can_ack = true; 365 367 opts->ext_copy.use_ack = 0; 366 368 msk = mptcp_sk(subflow->conn); 367 - if (!msk || !READ_ONCE(msk->can_ack)) { 369 + if (likely(msk && READ_ONCE(msk->can_ack))) { 370 + ack_seq = msk->ack_seq; 371 + } else if (subflow->can_ack) { 372 + mptcp_crypto_key_sha(subflow->remote_key, NULL, &ack_seq); 373 + ack_seq++; 374 + } else { 375 + can_ack = false; 376 + } 377 + 378 + if (unlikely(!can_ack)) { 368 379 *size = ALIGN(dss_size, 4); 369 380 return ret; 370 381 } ··· 390 375 391 376 dss_size += ack_size; 392 377 393 - opts->ext_copy.data_ack = msk->ack_seq; 378 + opts->ext_copy.data_ack = ack_seq; 394 379 opts->ext_copy.ack64 = 1; 395 380 opts->ext_copy.use_ack = 1; 396 381
+1 -1
net/netfilter/nf_conntrack_standalone.c
··· 411 411 *pos = cpu + 1; 412 412 return per_cpu_ptr(net->ct.stat, cpu); 413 413 } 414 - 414 + (*pos)++; 415 415 return NULL; 416 416 } 417 417
+1 -1
net/netfilter/nf_synproxy_core.c
··· 267 267 *pos = cpu + 1; 268 268 return per_cpu_ptr(snet->stats, cpu); 269 269 } 270 - 270 + (*pos)++; 271 271 return NULL; 272 272 } 273 273
+14 -8
net/netfilter/nf_tables_api.c
··· 1405 1405 lockdep_commit_lock_is_held(net)); 1406 1406 if (nft_dump_stats(skb, stats)) 1407 1407 goto nla_put_failure; 1408 + 1409 + if ((chain->flags & NFT_CHAIN_HW_OFFLOAD) && 1410 + nla_put_be32(skb, NFTA_CHAIN_FLAGS, 1411 + htonl(NFT_CHAIN_HW_OFFLOAD))) 1412 + goto nla_put_failure; 1408 1413 } 1409 1414 1410 1415 if (nla_put_be32(skb, NFTA_CHAIN_USE, htonl(chain->use))) ··· 6305 6300 goto err4; 6306 6301 6307 6302 err = nft_register_flowtable_net_hooks(ctx.net, table, flowtable); 6308 - if (err < 0) 6303 + if (err < 0) { 6304 + list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) { 6305 + list_del_rcu(&hook->list); 6306 + kfree_rcu(hook, rcu); 6307 + } 6309 6308 goto err4; 6309 + } 6310 6310 6311 6311 err = nft_trans_flowtable_add(&ctx, NFT_MSG_NEWFLOWTABLE, flowtable); 6312 6312 if (err < 0) ··· 7388 7378 list_splice_init(&net->nft.module_list, &module_list); 7389 7379 mutex_unlock(&net->nft.commit_mutex); 7390 7380 list_for_each_entry_safe(req, next, &module_list, list) { 7391 - if (req->done) { 7392 - list_del(&req->list); 7393 - kfree(req); 7394 - } else { 7395 - request_module("%s", req->module); 7396 - req->done = true; 7397 - } 7381 + request_module("%s", req->module); 7382 + req->done = true; 7398 7383 } 7399 7384 mutex_lock(&net->nft.commit_mutex); 7400 7385 list_splice(&module_list, &net->nft.module_list); ··· 8172 8167 __nft_release_tables(net); 8173 8168 mutex_unlock(&net->nft.commit_mutex); 8174 8169 WARN_ON_ONCE(!list_empty(&net->nft.tables)); 8170 + WARN_ON_ONCE(!list_empty(&net->nft.module_list)); 8175 8171 } 8176 8172 8177 8173 static struct pernet_operations nf_tables_net_ops = {
+1
net/netfilter/nft_chain_nat.c
··· 89 89 .name = "nat", 90 90 .type = NFT_CHAIN_T_NAT, 91 91 .family = NFPROTO_INET, 92 + .owner = THIS_MODULE, 92 93 .hook_mask = (1 << NF_INET_PRE_ROUTING) | 93 94 (1 << NF_INET_LOCAL_IN) | 94 95 (1 << NF_INET_LOCAL_OUT) |
+1
net/netfilter/nft_payload.c
··· 129 129 [NFTA_PAYLOAD_LEN] = { .type = NLA_U32 }, 130 130 [NFTA_PAYLOAD_CSUM_TYPE] = { .type = NLA_U32 }, 131 131 [NFTA_PAYLOAD_CSUM_OFFSET] = { .type = NLA_U32 }, 132 + [NFTA_PAYLOAD_CSUM_FLAGS] = { .type = NLA_U32 }, 132 133 }; 133 134 134 135 static int nft_payload_init(const struct nft_ctx *ctx,
+2
net/netfilter/nft_tunnel.c
··· 339 339 [NFTA_TUNNEL_KEY_FLAGS] = { .type = NLA_U32, }, 340 340 [NFTA_TUNNEL_KEY_TOS] = { .type = NLA_U8, }, 341 341 [NFTA_TUNNEL_KEY_TTL] = { .type = NLA_U8, }, 342 + [NFTA_TUNNEL_KEY_SPORT] = { .type = NLA_U16, }, 343 + [NFTA_TUNNEL_KEY_DPORT] = { .type = NLA_U16, }, 342 344 [NFTA_TUNNEL_KEY_OPTS] = { .type = NLA_NESTED, }, 343 345 }; 344 346
+3 -3
net/netfilter/x_tables.c
··· 1551 1551 uint8_t nfproto = (unsigned long)PDE_DATA(file_inode(seq->file)); 1552 1552 struct nf_mttg_trav *trav = seq->private; 1553 1553 1554 + if (ppos != NULL) 1555 + ++(*ppos); 1556 + 1554 1557 switch (trav->class) { 1555 1558 case MTTG_TRAV_INIT: 1556 1559 trav->class = MTTG_TRAV_NFP_UNSPEC; ··· 1579 1576 default: 1580 1577 return NULL; 1581 1578 } 1582 - 1583 - if (ppos != NULL) 1584 - ++*ppos; 1585 1579 return trav; 1586 1580 } 1587 1581
+1 -1
net/netfilter/xt_recent.c
··· 492 492 const struct recent_entry *e = v; 493 493 const struct list_head *head = e->list.next; 494 494 495 + (*pos)++; 495 496 while (head == &t->iphash[st->bucket]) { 496 497 if (++st->bucket >= ip_list_hash_size) 497 498 return NULL; 498 499 head = t->iphash[st->bucket].next; 499 500 } 500 - (*pos)++; 501 501 return list_entry(head, struct recent_entry, list); 502 502 } 503 503
+1 -1
net/netlink/af_netlink.c
···
 			       in_skb->len))
 			WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS,
 					    (u8 *)extack->bad_attr -
-					    in_skb->data));
+					    (u8 *)nlh));
 	} else {
 		if (extack->cookie_len)
 			WARN_ON(nla_put(skb, NLMSGERR_ATTR_COOKIE,
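This af_netlink hunk is the bad-attribute-offset fix from the pull summary (item 1): `NLMSGERR_ATTR_OFFS` must be measured from the offending message's own header (`nlh`), not from the start of the skb, since one datagram can carry several netlink messages back to back. A pointer-arithmetic sketch with an invented buffer layout:

```c
#include <assert.h>
#include <stdint.h>

/* Invented layout: a 64-byte "skb" whose second message starts at
 * offset 16. An offset computed from the buffer start (the old code)
 * points userspace at the wrong byte of the *message* it is reporting
 * an error for; an offset from the message header does not. */
static uint8_t skb_data[64];

static int toy_bad_attr_offset(const uint8_t *nlh, const uint8_t *bad_attr)
{
	return (int)(bad_attr - nlh);	/* the fixed computation */
}
```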
+16 -3
net/nfc/hci/core.c
···
 void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
 			  struct sk_buff *skb)
 {
-	u8 gate = hdev->pipes[pipe].gate;
 	u8 status = NFC_HCI_ANY_OK;
 	struct hci_create_pipe_resp *create_info;
 	struct hci_delete_pipe_noti *delete_info;
 	struct hci_all_pipe_cleared_noti *cleared_info;
+	u8 gate;
 
-	pr_debug("from gate %x pipe %x cmd %x\n", gate, pipe, cmd);
+	pr_debug("from pipe %x cmd %x\n", pipe, cmd);
+
+	if (pipe >= NFC_HCI_MAX_PIPES) {
+		status = NFC_HCI_ANY_E_NOK;
+		goto exit;
+	}
+
+	gate = hdev->pipes[pipe].gate;
 
 	switch (cmd) {
 	case NFC_HCI_ADM_NOTIFY_PIPE_CREATED:
···
 			     struct sk_buff *skb)
 {
 	int r = 0;
-	u8 gate = hdev->pipes[pipe].gate;
+	u8 gate;
 
+	if (pipe >= NFC_HCI_MAX_PIPES) {
+		pr_err("Discarded event %x to invalid pipe %x\n", event, pipe);
+		goto exit;
+	}
+
+	gate = hdev->pipes[pipe].gate;
 	if (gate == NFC_HCI_INVALID_GATE) {
 		pr_err("Discarded event %x to unopened pipe %x\n", event, pipe);
 		goto exit;
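The nfc/hci change adds the missing bounds check before `hdev->pipes[pipe]` is indexed: `pipe` comes off the wire, so an out-of-range value used to read past the array. The shape of the fix as a standalone sketch (array size and names invented, standing in for `NFC_HCI_MAX_PIPES`):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_MAX_PIPES 8		/* stand-in for NFC_HCI_MAX_PIPES */

/* Validate the externally supplied index *before* dereferencing, as
 * the fixed nfc_hci_cmd_received()/nfc_hci_event_received() now do. */
static int toy_gate_for_pipe(const uint8_t gates[TOY_MAX_PIPES],
			     unsigned int pipe)
{
	if (pipe >= TOY_MAX_PIPES)
		return -1;	/* reject instead of reading out of bounds */
	return gates[pipe];
}
```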
+4
net/nfc/netlink.c
···
 	[NFC_ATTR_DEVICE_NAME] = { .type = NLA_STRING,
 				   .len = NFC_DEVICE_NAME_MAXSIZE },
 	[NFC_ATTR_PROTOCOLS] = { .type = NLA_U32 },
+	[NFC_ATTR_TARGET_INDEX] = { .type = NLA_U32 },
 	[NFC_ATTR_COMM_MODE] = { .type = NLA_U8 },
 	[NFC_ATTR_RF_MODE] = { .type = NLA_U8 },
 	[NFC_ATTR_DEVICE_POWERED] = { .type = NLA_U8 },
···
 	[NFC_ATTR_LLC_SDP] = { .type = NLA_NESTED },
 	[NFC_ATTR_FIRMWARE_NAME] = { .type = NLA_STRING,
 				     .len = NFC_FIRMWARE_NAME_MAXSIZE },
+	[NFC_ATTR_SE_INDEX] = { .type = NLA_U32 },
 	[NFC_ATTR_SE_APDU] = { .type = NLA_BINARY },
+	[NFC_ATTR_VENDOR_ID] = { .type = NLA_U32 },
+	[NFC_ATTR_VENDOR_SUBCMD] = { .type = NLA_U32 },
 	[NFC_ATTR_VENDOR_DATA] = { .type = NLA_BINARY },
 
 };
+1
net/openvswitch/datapath.c
···
 	[OVS_PACKET_ATTR_ACTIONS] = { .type = NLA_NESTED },
 	[OVS_PACKET_ATTR_PROBE] = { .type = NLA_FLAG },
 	[OVS_PACKET_ATTR_MRU] = { .type = NLA_U16 },
+	[OVS_PACKET_ATTR_HASH] = { .type = NLA_U64 },
 };
 
 static const struct genl_ops dp_packet_genl_ops[] = {
+7 -6
net/packet/af_packet.c
···
 					TP_STATUS_KERNEL, (macoff+snaplen));
 	if (!h.raw)
 		goto drop_n_account;
+
+	if (do_vnet &&
+	    virtio_net_hdr_from_skb(skb, h.raw + macoff -
+				    sizeof(struct virtio_net_hdr),
+				    vio_le(), true, 0))
+		goto drop_n_account;
+
 	if (po->tp_version <= TPACKET_V2) {
 		packet_increment_rx_head(po, &po->rx_ring);
 		/*
···
 		if (atomic_read(&po->tp_drops))
 			status |= TP_STATUS_LOSING;
 	}
-
-	if (do_vnet &&
-	    virtio_net_hdr_from_skb(skb, h.raw + macoff -
-				    sizeof(struct virtio_net_hdr),
-				    vio_le(), true, 0))
-		goto drop_n_account;
 
 	po->stats.stats1.tp_packets++;
 	if (copy_skb) {
+1
net/sched/sch_fq.c
···
 	[TCA_FQ_FLOW_MAX_RATE]		= { .type = NLA_U32 },
 	[TCA_FQ_BUCKETS_LOG]		= { .type = NLA_U32 },
 	[TCA_FQ_FLOW_REFILL_DELAY]	= { .type = NLA_U32 },
+	[TCA_FQ_ORPHAN_MASK]		= { .type = NLA_U32 },
 	[TCA_FQ_LOW_RATE_THRESHOLD]	= { .type = NLA_U32 },
 	[TCA_FQ_CE_THRESHOLD]		= { .type = NLA_U32 },
 };
+10 -3
net/sched/sch_taprio.c
···
 		prio = skb->priority;
 		tc = netdev_get_prio_tc_map(dev, prio);
 
-		if (!(gate_mask & BIT(tc)))
+		if (!(gate_mask & BIT(tc))) {
+			skb = NULL;
 			continue;
+		}
 
 		len = qdisc_pkt_len(skb);
 		guard = ktime_add_ns(taprio_get_time(q),
···
 		 * guard band ...
 		 */
 		if (gate_mask != TAPRIO_ALL_GATES_OPEN &&
-		    ktime_after(guard, entry->close_time))
+		    ktime_after(guard, entry->close_time)) {
+			skb = NULL;
 			continue;
+		}
 
 		/* ... and no budget. */
 		if (gate_mask != TAPRIO_ALL_GATES_OPEN &&
-		    atomic_sub_return(len, &entry->budget) < 0)
+		    atomic_sub_return(len, &entry->budget) < 0) {
+			skb = NULL;
 			continue;
+		}
 
 		skb = child->ops->dequeue(child);
 		if (unlikely(!skb))
···
 	[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME]           = { .type = NLA_S64 },
 	[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME_EXTENSION] = { .type = NLA_S64 },
 	[TCA_TAPRIO_ATTR_FLAGS]                      = { .type = NLA_U32 },
+	[TCA_TAPRIO_ATTR_TXTIME_DELAY]               = { .type = NLA_U32 },
 };
 
 static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry,
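The taprio hunks fix "sending packets without dequeueing them": `skb` holds a packet *peeked* from a child qdisc, and each gate/guard/budget check skips that child with `continue`. Without resetting `skb` to NULL first, a pass where every child is skipped falls out of the loop still holding the last peeked packet, which the caller then transmits even though it was never dequeued. A toy reduction of the loop (names and data invented):

```c
#include <assert.h>
#include <stddef.h>

/* Toy scheduler loop, not the kernel API: "peek" each child, skip
 * children whose gate is closed, and return the first dequeuable
 * packet. Clearing the stale peek before `continue` (the fix) is what
 * makes the all-gates-closed case correctly return NULL. */
static int packets[2] = { 100, 200 };

static int *toy_dequeue(const int gate_open[2])
{
	int *skb = NULL;
	int i;

	for (i = 0; i < 2; i++) {
		skb = &packets[i];	/* peek child i */
		if (!gate_open[i]) {
			skb = NULL;	/* the fix: drop the stale peek */
			continue;
		}
		return skb;		/* gate open: really dequeue */
	}
	return skb;			/* NULL when every gate was closed */
}
```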
+2 -6
net/sctp/diag.c
···
 		addrcnt++;
 
 	return	  nla_total_size(sizeof(struct sctp_info))
-		+ nla_total_size(1) /* INET_DIAG_SHUTDOWN */
-		+ nla_total_size(1) /* INET_DIAG_TOS */
-		+ nla_total_size(1) /* INET_DIAG_TCLASS */
-		+ nla_total_size(4) /* INET_DIAG_MARK */
-		+ nla_total_size(4) /* INET_DIAG_CLASS_ID */
 		+ nla_total_size(addrlen * asoc->peer.transport_count)
 		+ nla_total_size(addrlen * addrcnt)
-		+ nla_total_size(sizeof(struct inet_diag_meminfo))
 		+ nla_total_size(sizeof(struct inet_diag_msg))
+		+ inet_diag_msg_attrs_size()
+		+ nla_total_size(sizeof(struct inet_diag_meminfo))
 		+ 64;
 }
 
+1
net/smc/smc_ib.c
···
 	smc_smcr_terminate_all(smcibdev);
 	smc_ib_cleanup_per_ibdev(smcibdev);
 	ib_unregister_event_handler(&smcibdev->event_handler);
+	cancel_work_sync(&smcibdev->port_event_work);
 	kfree(smcibdev);
 }
 
+1
net/tipc/netlink.c
···
 	[TIPC_NLA_PROP_PRIO]		= { .type = NLA_U32 },
 	[TIPC_NLA_PROP_TOL]		= { .type = NLA_U32 },
 	[TIPC_NLA_PROP_WIN]		= { .type = NLA_U32 },
+	[TIPC_NLA_PROP_MTU]		= { .type = NLA_U32 },
 	[TIPC_NLA_PROP_BROADCAST]	= { .type = NLA_U32 },
 	[TIPC_NLA_PROP_BROADCAST_RATIO]	= { .type = NLA_U32 }
 };
+5
net/wireless/nl80211.c
···
 	[NL80211_ATTR_WOWLAN_TRIGGERS] = { .type = NLA_NESTED },
 	[NL80211_ATTR_STA_PLINK_STATE] =
 		NLA_POLICY_MAX(NLA_U8, NUM_NL80211_PLINK_STATES - 1),
+	[NL80211_ATTR_MEASUREMENT_DURATION] = { .type = NLA_U16 },
+	[NL80211_ATTR_MEASUREMENT_DURATION_MANDATORY] = { .type = NLA_FLAG },
 	[NL80211_ATTR_MESH_PEER_AID] =
 		NLA_POLICY_RANGE(NLA_U16, 1, IEEE80211_MAX_AID),
 	[NL80211_ATTR_SCHED_SCAN_INTERVAL] = { .type = NLA_U32 },
···
 	[NL80211_ATTR_MDID] = { .type = NLA_U16 },
 	[NL80211_ATTR_IE_RIC] = { .type = NLA_BINARY,
 				  .len = IEEE80211_MAX_DATA_LEN },
+	[NL80211_ATTR_CRIT_PROT_ID] = { .type = NLA_U16 },
+	[NL80211_ATTR_MAX_CRIT_PROT_DURATION] = { .type = NLA_U16 },
 	[NL80211_ATTR_PEER_AID] =
 		NLA_POLICY_RANGE(NLA_U16, 1, IEEE80211_MAX_AID),
 	[NL80211_ATTR_CH_SWITCH_COUNT] = { .type = NLA_U32 },
···
 		NLA_POLICY_MAX(NLA_U8, IEEE80211_NUM_UPS - 1),
 	[NL80211_ATTR_ADMITTED_TIME] = { .type = NLA_U16 },
 	[NL80211_ATTR_SMPS_MODE] = { .type = NLA_U8 },
+	[NL80211_ATTR_OPER_CLASS] = { .type = NLA_U8 },
 	[NL80211_ATTR_MAC_MASK] = {
 		.type = NLA_EXACT_LEN_WARN,
 		.len = ETH_ALEN
+31 -3
tools/testing/selftests/net/fib_tests.sh
···
 	fi
 	log_test $rc 0 "Prefix route with metric on link up"
 
+	# verify peer metric added correctly
+	set -e
+	run_cmd "$IP -6 addr flush dev dummy2"
+	run_cmd "$IP -6 addr add dev dummy2 2001:db8:104::1 peer 2001:db8:104::2 metric 260"
+	set +e
+
+	check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 260"
+	log_test $? 0 "Set metric with peer route on local side"
+	log_test $? 0 "User specified metric on local address"
+	check_route6 "2001:db8:104::2 dev dummy2 proto kernel metric 260"
+	log_test $? 0 "Set metric with peer route on peer side"
+
+	set -e
+	run_cmd "$IP -6 addr change dev dummy2 2001:db8:104::1 peer 2001:db8:104::3 metric 261"
+	set +e
+
+	check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 261"
+	log_test $? 0 "Modify metric and peer address on local side"
+	check_route6 "2001:db8:104::3 dev dummy2 proto kernel metric 261"
+	log_test $? 0 "Modify metric and peer address on peer side"
+
 	$IP li del dummy1
 	$IP li del dummy2
 	cleanup
···
 
 	run_cmd "$IP addr flush dev dummy2"
 	run_cmd "$IP addr add dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 260"
-	run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 261"
 	rc=$?
 	if [ $rc -eq 0 ]; then
-		check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261"
+		check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 260"
 		rc=$?
 	fi
-	log_test $rc 0 "Modify metric of address with peer route"
+	log_test $rc 0 "Set metric of address with peer route"
+
+	run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.3 metric 261"
+	rc=$?
+	if [ $rc -eq 0 ]; then
+		check_route "172.16.104.3 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261"
+		rc=$?
+	fi
+	log_test $rc 0 "Modify metric and peer address for peer route"
 
 	$IP li del dummy1
 	$IP li del dummy2
+1
tools/testing/selftests/tc-testing/config
···
 CONFIG_NET_IFE_SKBPRIO=m
 CONFIG_NET_IFE_SKBTCINDEX=m
 CONFIG_NET_SCH_FIFO=y
+CONFIG_NET_SCH_ETS=m