
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Handle init flow failures properly in iwlwifi driver, from Shahar S
Matityahu.

2) mac80211 TXQs need to be unscheduled on powersave start, from Felix
Fietkau.

3) SKB memory accounting fix in A-MDSU aggregation, from Felix Fietkau.

4) Increase RCU lock hold time in mlx5 FPGA code, from Saeed Mahameed.

5) Avoid checksum complete with XDP in mlx5, also from Saeed.

6) Fix netdev feature clobbering in ibmvnic driver, from Thomas Falcon.

7) Partial sent TLS record leak fix from Jakub Kicinski.

8) Reject zero size iova range in vhost, from Jason Wang.

9) Allow pending work to complete before clcsock release, from Karsten
Graul.

10) Fix XDP handling max MTU in thunderx, from Matteo Croce.

11) A lot of protocols look at the sa_family field of a sockaddr before
validating that its length is large enough, from Tetsuo Handa.

12) Don't write to free'd pointer in qede ptp error path, from Colin Ian
King.

13) Have to recompile IP options in ipv4_link_failure because it can be
invoked from ARP, from Stephen Suryaputra.

14) Doorbell handling fixes in qed from Denis Bolotin.

15) Revert net-sysfs kobject register leak fix; it causes new problems.
From Wang Hai.

16) Spectre v1 fix in ATM code, from Gustavo A. R. Silva.

17) Fix put of BROPT_VLAN_STATS_PER_PORT in bridging code, from Nikolay
Aleksandrov.
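Several of the fixes above are instances of well-known hardening patterns. For item 16, the standard Spectre v1 mitigation clamps a user-controlled index without branching; below is a hedged, generic-C sketch of the idea behind the kernel's array_index_mask_nospec() (the real kernel version is per-architecture and differs in detail):

```c
/* Generic-C sketch of the branchless index clamp used against Spectre v1.
 * Returns an all-ones mask when 0 <= index < size, and 0 otherwise,
 * without a conditional branch the CPU could mispredict.
 */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
	/* When index >= size, (size - 1 - index) underflows and sets the
	 * top bit, so the complement's sign bit is 0 and the arithmetic
	 * shift produces 0; otherwise it produces all ones.
	 */
	return ~(long)(index | (size - 1UL - index)) >> (sizeof(long) * 8 - 1);
}

/* Clamp an out-of-range index to 0 so it cannot steer a speculative load. */
static unsigned long clamp_index_nospec(unsigned long index, unsigned long size)
{
	return index & index_mask_nospec(index, size);
}
```

In-range indices pass through unchanged; out-of-range ones collapse to 0 even under speculation, because no branch is involved.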

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (111 commits)
socket: fix compat SO_RCVTIMEO_NEW/SO_SNDTIMEO_NEW
tcp: tcp_grow_window() needs to respect tcp_space()
ocelot: Clean up stats update deferred work
ocelot: Don't sleep in atomic context (irqs_disabled())
net: bridge: fix netlink export of vlan_stats_per_port option
qed: fix spelling mistake "faspath" -> "fastpath"
tipc: set sysctl_tipc_rmem and named_timeout right range
tipc: fix link established but not in session
net: Fix missing meta data in skb with vlan packet
net: atm: Fix potential Spectre v1 vulnerabilities
net/core: work around section mismatch warning for ptp_classifier
net: bridge: fix per-port af_packet sockets
bnx2x: fix spelling mistake "dicline" -> "decline"
route: Avoid crash from dereferencing NULL rt->from
MAINTAINERS: normalize Woojung Huh's email address
bonding: fix event handling for stacked bonds
Revert "net-sysfs: Fix memory leak in netdev_register_kobject"
rtnetlink: fix rtnl_valid_stats_req() nlmsg_len check
qed: Fix the DORQ's attentions handling
qed: Fix missing DORQ attentions
...

+1124 -747
+9 -7
Documentation/networking/rxrpc.txt
···
     (*) Check call still alive.

-	u32 rxrpc_kernel_check_life(struct socket *sock,
-				    struct rxrpc_call *call);
+	bool rxrpc_kernel_check_life(struct socket *sock,
+				     struct rxrpc_call *call,
+				     u32 *_life);
 	void rxrpc_kernel_probe_life(struct socket *sock,
 				     struct rxrpc_call *call);

-    The first function returns a number that is updated when ACKs are received
-    from the peer (notably including PING RESPONSE ACKs which we can elicit by
-    sending PING ACKs to see if the call still exists on the server).  The
-    caller should compare the numbers of two calls to see if the call is still
-    alive after waiting for a suitable interval.
+    The first function passes back in *_life a number that is updated when
+    ACKs are received from the peer (notably including PING RESPONSE ACKs
+    which we can elicit by sending PING ACKs to see if the call still exists
+    on the server).  The caller should compare the numbers of two calls to see
+    if the call is still alive after waiting for a suitable interval.  It also
+    returns true as long as the call hasn't yet reached the completed state.

     This allows the caller to work out if the server is still contactable and
     if the call is still alive on the server while waiting for the server to
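The documentation change above alters the calling convention: the life counter now comes back through *_life and the return value reports whether the call has completed. A rough userspace sketch of the caller-side pattern, with a stub standing in for the real rxrpc_kernel_check_life() (all names below are illustrative, not the kernel API):

```c
#include <stdbool.h>
#include <stdint.h>

static uint32_t stub_acks_received;	/* stands in for peer ACK activity */

/* Illustrative stand-in for rxrpc_kernel_check_life(): passes the life
 * counter back through *life and returns "call not yet completed".
 */
static bool stub_check_life(uint32_t *life)
{
	*life = stub_acks_received;
	return true;
}

/* The documented usage: sample the life counter twice across an interval
 * and compare; a changed value means the call showed signs of life.
 */
static bool call_made_progress(void)
{
	uint32_t before, after;

	stub_check_life(&before);
	stub_acks_received++;		/* in reality: wait, then an ACK arrives */
	stub_check_life(&after);

	return after != before;
}
```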
+1 -1
MAINTAINERS
···
 F:	Documentation/devicetree/bindings/mfd/atmel-usart.txt

 MICROCHIP KSZ SERIES ETHERNET SWITCH DRIVER
-M:	Woojung Huh <Woojung.Huh@microchip.com>
+M:	Woojung Huh <woojung.huh@microchip.com>
 M:	Microchip Linux Driver Support <UNGLinuxDriver@microchip.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
+2 -2
drivers/isdn/mISDN/socket.c
···
 	struct sock *sk = sock->sk;
 	int err = 0;

-	if (!maddr || maddr->family != AF_ISDN)
+	if (addr_len < sizeof(struct sockaddr_mISDN))
 		return -EINVAL;

-	if (addr_len < sizeof(struct sockaddr_mISDN))
+	if (!maddr || maddr->family != AF_ISDN)
 		return -EINVAL;

 	lock_sock(sk);
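The hunk above reorders the checks so the address length is validated before maddr->family is read; with the old order, a short sockaddr could be read past its end (the class of bug item 11 describes). A minimal standalone C sketch of the safe ordering (the struct and constant below are invented for illustration):

```c
#include <stddef.h>

/* Miniature of the pattern fixed above: a 16-byte sockaddr-like struct
 * whose family field must only be read once the length is known good.
 */
struct mini_sockaddr {
	unsigned short family;
	char data[14];
};

#define MY_AF 42

/* Returns 0 on success, -1 (standing in for -EINVAL) on rejection. */
int mini_bind(const void *addr, size_t addr_len)
{
	const struct mini_sockaddr *sa = addr;

	/* Validate the length FIRST... */
	if (addr_len < sizeof(struct mini_sockaddr))
		return -1;

	/* ...and only then dereference any field of the address. */
	if (sa->family != MY_AF)
		return -1;

	return 0;
}
```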
+5 -1
drivers/net/bonding/bond_main.c
···
 		return NOTIFY_DONE;

 	if (event_dev->flags & IFF_MASTER) {
+		int ret;
+
 		netdev_dbg(event_dev, "IFF_MASTER\n");
-		return bond_master_netdev_event(event, event_dev);
+		ret = bond_master_netdev_event(event, event_dev);
+		if (ret != NOTIFY_DONE)
+			return ret;
 	}

 	if (event_dev->flags & IFF_SLAVE) {
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
···
 	bnx2x_sample_bulletin(bp);

 	if (bp->shadow_bulletin.content.valid_bitmap & 1 << VLAN_VALID) {
-		BNX2X_ERR("Hypervisor will dicline the request, avoiding\n");
+		BNX2X_ERR("Hypervisor will decline the request, avoiding\n");
 		rc = -EINVAL;
 		goto out;
 	}
+20 -2
drivers/net/ethernet/cavium/thunder/nicvf_main.c
···
 #define DRV_NAME	"nicvf"
 #define DRV_VERSION	"1.0"

+/* NOTE: Packets bigger than 1530 are split across multiple pages and XDP needs
+ * the buffer to be contiguous. Allow XDP to be set up only if we don't exceed
+ * this value, keeping headroom for the 14 byte Ethernet header and two
+ * VLAN tags (for QinQ)
+ */
+#define MAX_XDP_MTU	(1530 - ETH_HLEN - VLAN_HLEN * 2)
+
 /* Supported devices */
 static const struct pci_device_id nicvf_id_table[] = {
 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM,
···
 	struct nicvf *nic = netdev_priv(netdev);
 	int orig_mtu = netdev->mtu;

+	/* For now just support only the usual MTU sized frames,
+	 * plus some headroom for VLAN, QinQ.
+	 */
+	if (nic->xdp_prog && new_mtu > MAX_XDP_MTU) {
+		netdev_warn(netdev, "Jumbo frames not yet supported with XDP, current MTU %d.\n",
+			    netdev->mtu);
+		return -EINVAL;
+	}
+
 	netdev->mtu = new_mtu;

 	if (!netif_running(netdev))
···
 	bool bpf_attached = false;
 	int ret = 0;

-	/* For now just support only the usual MTU sized frames */
-	if (prog && (dev->mtu > 1500)) {
+	/* For now just support only the usual MTU sized frames,
+	 * plus some headroom for VLAN, QinQ.
+	 */
+	if (prog && dev->mtu > MAX_XDP_MTU) {
 		netdev_warn(dev, "Jumbo frames not yet supported with XDP, current MTU %d.\n",
 			    dev->mtu);
 		return -EOPNOTSUPP;
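The MAX_XDP_MTU arithmetic above can be checked in isolation; a tiny sketch with the Ethernet constants spelled out (the values mirror the kernel's if_ether.h/if_vlan.h definitions):

```c
/* Headroom arithmetic from the thunderx XDP MTU fix: 1530 bytes of
 * contiguous buffer, minus the 14-byte Ethernet header and two 4-byte
 * VLAN tags (QinQ), leaves the largest MTU XDP can accept.
 */
#define ETH_HLEN	14	/* Ethernet header */
#define VLAN_HLEN	4	/* one 802.1Q tag */
#define MAX_XDP_MTU	(1530 - ETH_HLEN - VLAN_HLEN * 2)

/* Mirrors the driver check: XDP setup is refused above MAX_XDP_MTU. */
static int xdp_mtu_ok(int mtu)
{
	return mtu <= MAX_XDP_MTU;
}
```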
+21 -9
drivers/net/ethernet/freescale/fec_main.c
···
 	int ret;

 	if (enable) {
-		ret = clk_prepare_enable(fep->clk_ahb);
-		if (ret)
-			return ret;
-
 		ret = clk_prepare_enable(fep->clk_enet_out);
 		if (ret)
-			goto failed_clk_enet_out;
+			return ret;

 		if (fep->clk_ptp) {
 			mutex_lock(&fep->ptp_clk_mutex);
···
 		phy_reset_after_clk_enable(ndev->phydev);
 	} else {
-		clk_disable_unprepare(fep->clk_ahb);
 		clk_disable_unprepare(fep->clk_enet_out);
 		if (fep->clk_ptp) {
 			mutex_lock(&fep->ptp_clk_mutex);
···
 failed_clk_ptp:
 	if (fep->clk_enet_out)
 		clk_disable_unprepare(fep->clk_enet_out);
-failed_clk_enet_out:
-	clk_disable_unprepare(fep->clk_ahb);

 	return ret;
 }
···
 	ret = clk_prepare_enable(fep->clk_ipg);
 	if (ret)
 		goto failed_clk_ipg;
+	ret = clk_prepare_enable(fep->clk_ahb);
+	if (ret)
+		goto failed_clk_ahb;

 	fep->reg_phy = devm_regulator_get_optional(&pdev->dev, "phy");
 	if (!IS_ERR(fep->reg_phy)) {
···
 	pm_runtime_put(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 failed_regulator:
+	clk_disable_unprepare(fep->clk_ahb);
+failed_clk_ahb:
+	clk_disable_unprepare(fep->clk_ipg);
 failed_clk_ipg:
 	fec_enet_clk_enable(ndev, false);
 failed_clk:
···
 	struct net_device *ndev = dev_get_drvdata(dev);
 	struct fec_enet_private *fep = netdev_priv(ndev);

+	clk_disable_unprepare(fep->clk_ahb);
 	clk_disable_unprepare(fep->clk_ipg);

 	return 0;
···
 {
 	struct net_device *ndev = dev_get_drvdata(dev);
 	struct fec_enet_private *fep = netdev_priv(ndev);
+	int ret;

-	return clk_prepare_enable(fep->clk_ipg);
+	ret = clk_prepare_enable(fep->clk_ahb);
+	if (ret)
+		return ret;
+	ret = clk_prepare_enable(fep->clk_ipg);
+	if (ret)
+		goto failed_clk_ipg;
+
+	return 0;
+
+failed_clk_ipg:
+	clk_disable_unprepare(fep->clk_ahb);
+	return ret;
 }

 static const struct dev_pm_ops fec_pm_ops = {
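The fec changes restore the usual enable-in-order / unwind-in-reverse goto pattern for clocks. A self-contained sketch of that error-unwinding pattern with stub clocks (the names and stubs below are illustrative, not the real clk API):

```c
#include <stdbool.h>

/* Stub "clocks" tracked by simple enable counters. */
static int ahb_on, ipg_on;
static bool ipg_should_fail;

static int enable_ahb(void) { ahb_on++; return 0; }
static void disable_ahb(void) { ahb_on--; }
static int enable_ipg(void)
{
	if (ipg_should_fail)
		return -1;	/* simulate clk_prepare_enable() failure */
	ipg_on++;
	return 0;
}

/* Mirrors the shape of the fixed runtime-resume path: enable ahb then
 * ipg, and unwind ahb in reverse order if ipg fails, so no clock is
 * left enabled on the error path.
 */
static int runtime_resume(void)
{
	int ret;

	ret = enable_ahb();
	if (ret)
		return ret;
	ret = enable_ipg();
	if (ret)
		goto failed_clk_ipg;

	return 0;

failed_clk_ipg:
	disable_ahb();
	return ret;
}
```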
+25 -7
drivers/net/ethernet/ibm/ibmvnic.c
···
 {
 	struct device *dev = &adapter->vdev->dev;
 	struct ibmvnic_query_ip_offload_buffer *buf = &adapter->ip_offload_buf;
+	netdev_features_t old_hw_features = 0;
 	union ibmvnic_crq crq;
 	int i;
···
 	adapter->ip_offload_ctrl.large_rx_ipv4 = 0;
 	adapter->ip_offload_ctrl.large_rx_ipv6 = 0;

-	adapter->netdev->features = NETIF_F_SG | NETIF_F_GSO;
+	if (adapter->state != VNIC_PROBING) {
+		old_hw_features = adapter->netdev->hw_features;
+		adapter->netdev->hw_features = 0;
+	}
+
+	adapter->netdev->hw_features = NETIF_F_SG | NETIF_F_GSO | NETIF_F_GRO;

 	if (buf->tcp_ipv4_chksum || buf->udp_ipv4_chksum)
-		adapter->netdev->features |= NETIF_F_IP_CSUM;
+		adapter->netdev->hw_features |= NETIF_F_IP_CSUM;

 	if (buf->tcp_ipv6_chksum || buf->udp_ipv6_chksum)
-		adapter->netdev->features |= NETIF_F_IPV6_CSUM;
+		adapter->netdev->hw_features |= NETIF_F_IPV6_CSUM;

 	if ((adapter->netdev->features &
 	    (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)))
-		adapter->netdev->features |= NETIF_F_RXCSUM;
+		adapter->netdev->hw_features |= NETIF_F_RXCSUM;

 	if (buf->large_tx_ipv4)
-		adapter->netdev->features |= NETIF_F_TSO;
+		adapter->netdev->hw_features |= NETIF_F_TSO;
 	if (buf->large_tx_ipv6)
-		adapter->netdev->features |= NETIF_F_TSO6;
+		adapter->netdev->hw_features |= NETIF_F_TSO6;

-	adapter->netdev->hw_features |= adapter->netdev->features;
+	if (adapter->state == VNIC_PROBING) {
+		adapter->netdev->features |= adapter->netdev->hw_features;
+	} else if (old_hw_features != adapter->netdev->hw_features) {
+		netdev_features_t tmp = 0;
+
+		/* disable features no longer supported */
+		adapter->netdev->features &= adapter->netdev->hw_features;
+		/* turn on features now supported if previously enabled */
+		tmp = (old_hw_features ^ adapter->netdev->hw_features) &
+		    adapter->netdev->hw_features;
+		adapter->netdev->features |=
+				tmp & adapter->netdev->wanted_features;
+	}

 	memset(&crq, 0, sizeof(crq));
 	crq.control_ip_offload.first = IBMVNIC_CRQ_CMD;
+1
drivers/net/ethernet/mellanox/mlx5/core/en.h
···
 			 * switching channels
 			 */
 typedef int (*mlx5e_fp_hw_modify)(struct mlx5e_priv *priv);
+int mlx5e_safe_reopen_channels(struct mlx5e_priv *priv);
 int mlx5e_safe_switch_channels(struct mlx5e_priv *priv,
 			       struct mlx5e_channels *new_chs,
 			       mlx5e_fp_hw_modify hw_modify);
+8 -3
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
···
 static int mlx5e_tx_reporter_recover_all(struct mlx5e_priv *priv)
 {
-	int err;
+	int err = 0;

 	rtnl_lock();
 	mutex_lock(&priv->state_lock);
-	mlx5e_close_locked(priv->netdev);
-	err = mlx5e_open_locked(priv->netdev);
+
+	if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+		goto out;
+
+	err = mlx5e_safe_reopen_channels(priv);
+
+out:
 	mutex_unlock(&priv->state_lock);
 	rtnl_unlock();
+4
drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
···
 		return -EOPNOTSUPP;
 	}

+	if (!(mlx5e_eswitch_rep(*out_dev) &&
+	      mlx5e_is_uplink_rep(netdev_priv(*out_dev))))
+		return -EOPNOTSUPP;
+
 	return 0;
 }
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
···
 	struct mlx5e_channel *c;
 	int i;

-	if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+	if (!test_bit(MLX5E_STATE_OPENED, &priv->state) ||
+	    priv->channels.params.xdp_prog)
 		return 0;

 	for (i = 0; i < channels->num; i++) {
+16 -5
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···
 	if (params->rx_dim_enabled)
 		__set_bit(MLX5E_RQ_STATE_AM, &c->rq.state);

-	if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_NO_CSUM_COMPLETE))
+	/* We disable csum_complete when XDP is enabled since
+	 * XDP programs might manipulate packets which will render
+	 * skb->checksum incorrect.
+	 */
+	if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_NO_CSUM_COMPLETE) || c->xdp)
 		__set_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &c->rq.state);

 	return 0;
···
 	return 0;
 }

+int mlx5e_safe_reopen_channels(struct mlx5e_priv *priv)
+{
+	struct mlx5e_channels new_channels = {};
+
+	new_channels.params = priv->channels.params;
+	return mlx5e_safe_switch_channels(priv, &new_channels, NULL);
+}
+
 void mlx5e_timestamp_init(struct mlx5e_priv *priv)
 {
 	priv->tstamp.tx_type = HWTSTAMP_TX_OFF;
···
 	if (!report_failed)
 		goto unlock;

-	mlx5e_close_locked(priv->netdev);
-	err = mlx5e_open_locked(priv->netdev);
+	err = mlx5e_safe_reopen_channels(priv);
 	if (err)
 		netdev_err(priv->netdev,
-			   "mlx5e_open_locked failed recovering from a tx_timeout, err(%d).\n",
+			   "mlx5e_safe_reopen_channels failed recovering from a tx_timeout, err(%d).\n",
 			   err);

 unlock:
···
 {
 	enum mlx5e_traffic_types tt;

-	rss_params->hfunc = ETH_RSS_HASH_XOR;
+	rss_params->hfunc = ETH_RSS_HASH_TOP;
 	netdev_rss_key_fill(rss_params->toeplitz_hash_key,
 			    sizeof(rss_params->toeplitz_hash_key));
 	mlx5e_build_default_indir_rqt(rss_params->indirection_rqt,
+75 -19
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
···
 {
 	*proto = ((struct ethhdr *)skb->data)->h_proto;
 	*proto = __vlan_get_protocol(skb, *proto, network_depth);
-	return (*proto == htons(ETH_P_IP) || *proto == htons(ETH_P_IPV6));
+
+	if (*proto == htons(ETH_P_IP))
+		return pskb_may_pull(skb, *network_depth + sizeof(struct iphdr));
+
+	if (*proto == htons(ETH_P_IPV6))
+		return pskb_may_pull(skb, *network_depth + sizeof(struct ipv6hdr));
+
+	return false;
 }

 static inline void mlx5e_enable_ecn(struct mlx5e_rq *rq, struct sk_buff *skb)
···
 	rq->stats->ecn_mark += !!rc;
 }

-static u32 mlx5e_get_fcs(const struct sk_buff *skb)
-{
-	const void *fcs_bytes;
-	u32 _fcs_bytes;
-
-	fcs_bytes = skb_header_pointer(skb, skb->len - ETH_FCS_LEN,
-				       ETH_FCS_LEN, &_fcs_bytes);
-
-	return __get_unaligned_cpu32(fcs_bytes);
-}
-
 static u8 get_ip_proto(struct sk_buff *skb, int network_depth, __be16 proto)
 {
 	void *ip_p = skb->data + network_depth;
···
 }

 #define short_frame(size) ((size) <= ETH_ZLEN + ETH_FCS_LEN)
+
+#define MAX_PADDING 8
+
+static void
+tail_padding_csum_slow(struct sk_buff *skb, int offset, int len,
+		       struct mlx5e_rq_stats *stats)
+{
+	stats->csum_complete_tail_slow++;
+	skb->csum = csum_block_add(skb->csum,
+				   skb_checksum(skb, offset, len, 0),
+				   offset);
+}
+
+static void
+tail_padding_csum(struct sk_buff *skb, int offset,
+		  struct mlx5e_rq_stats *stats)
+{
+	u8 tail_padding[MAX_PADDING];
+	int len = skb->len - offset;
+	void *tail;
+
+	if (unlikely(len > MAX_PADDING)) {
+		tail_padding_csum_slow(skb, offset, len, stats);
+		return;
+	}
+
+	tail = skb_header_pointer(skb, offset, len, tail_padding);
+	if (unlikely(!tail)) {
+		tail_padding_csum_slow(skb, offset, len, stats);
+		return;
+	}
+
+	stats->csum_complete_tail++;
+	skb->csum = csum_block_add(skb->csum, csum_partial(tail, len, 0), offset);
+}
+
+static void
+mlx5e_skb_padding_csum(struct sk_buff *skb, int network_depth, __be16 proto,
+		       struct mlx5e_rq_stats *stats)
+{
+	struct ipv6hdr *ip6;
+	struct iphdr *ip4;
+	int pkt_len;
+
+	switch (proto) {
+	case htons(ETH_P_IP):
+		ip4 = (struct iphdr *)(skb->data + network_depth);
+		pkt_len = network_depth + ntohs(ip4->tot_len);
+		break;
+	case htons(ETH_P_IPV6):
+		ip6 = (struct ipv6hdr *)(skb->data + network_depth);
+		pkt_len = network_depth + sizeof(*ip6) + ntohs(ip6->payload_len);
+		break;
+	default:
+		return;
+	}
+
+	if (likely(pkt_len >= skb->len))
+		return;
+
+	tail_padding_csum(skb, pkt_len, stats);
+}

 static inline void mlx5e_handle_csum(struct net_device *netdev,
 				     struct mlx5_cqe64 *cqe,
···
 		return;
 	}

-	if (unlikely(test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state)))
+	/* True when explicitly set via priv flag, or XDP prog is loaded */
+	if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state))
 		goto csum_unnecessary;

 	/* CQE csum doesn't cover padding octets in short ethernet
···
 		skb->csum = csum_partial(skb->data + ETH_HLEN,
 					 network_depth - ETH_HLEN,
 					 skb->csum);
-		if (unlikely(netdev->features & NETIF_F_RXFCS))
-			skb->csum = csum_block_add(skb->csum,
-						   (__force __wsum)mlx5e_get_fcs(skb),
-						   skb->len - ETH_FCS_LEN);
+
+		mlx5e_skb_padding_csum(skb, network_depth, proto, stats);
 		stats->csum_complete++;
 		return;
 	}

 csum_unnecessary:
 	if (likely((cqe->hds_ip_ext & CQE_L3_OK) &&
-		   ((cqe->hds_ip_ext & CQE_L4_OK) ||
-		    (get_cqe_l4_hdr_type(cqe) == CQE_L4_HDR_TYPE_NONE)))) {
+		   (cqe->hds_ip_ext & CQE_L4_OK))) {
 		skb->ip_summed = CHECKSUM_UNNECESSARY;
 		if (cqe_is_tunneled(cqe)) {
 			skb->csum_level = 1;
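The tail-padding hunks rely on the block property of the ones'-complement checksum: the sum over a frame can be extended one block at a time, which is what csum_partial()/csum_block_add() do when the padding past the IP datagram is folded in. A simplified userspace sketch of that arithmetic (even-offset blocks only; the kernel helpers also handle odd offsets by rotating the partial sum):

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal 16-bit ones'-complement accumulation over a byte range,
 * the arithmetic behind the kernel's csum_partial() (simplified:
 * assumes each block starts at an even offset within the frame).
 */
static uint32_t csum_add32(uint32_t sum, const uint8_t *data, size_t len)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)(data[i] << 8 | data[i + 1]);
	if (len & 1)
		sum += (uint32_t)data[len - 1] << 8;	/* pad odd byte */

	/* Fold carries back into the low 16 bits. */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return sum;
}
```

Because the sum composes block by block, checksumming the payload and then the padding tail separately gives the same result as checksumming the whole frame at once, which is exactly what lets the driver patch an already-computed skb->csum.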
+6
drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
···
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_unnecessary) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_none) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_complete) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_complete_tail) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_complete_tail_slow) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_unnecessary_inner) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_drop) },
 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_redirect) },
···
 		s->rx_removed_vlan_packets += rq_stats->removed_vlan_packets;
 		s->rx_csum_none	+= rq_stats->csum_none;
 		s->rx_csum_complete += rq_stats->csum_complete;
+		s->rx_csum_complete_tail += rq_stats->csum_complete_tail;
+		s->rx_csum_complete_tail_slow += rq_stats->csum_complete_tail_slow;
 		s->rx_csum_unnecessary += rq_stats->csum_unnecessary;
 		s->rx_csum_unnecessary_inner += rq_stats->csum_unnecessary_inner;
 		s->rx_xdp_drop += rq_stats->xdp_drop;
···
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, packets) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, bytes) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_complete) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_complete_tail) },
+	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_complete_tail_slow) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_unnecessary) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_unnecessary_inner) },
 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_none) },
+4
drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
···
 	u64 rx_csum_unnecessary;
 	u64 rx_csum_none;
 	u64 rx_csum_complete;
+	u64 rx_csum_complete_tail;
+	u64 rx_csum_complete_tail_slow;
 	u64 rx_csum_unnecessary_inner;
 	u64 rx_xdp_drop;
 	u64 rx_xdp_redirect;
···
 	u64 packets;
 	u64 bytes;
 	u64 csum_complete;
+	u64 csum_complete_tail;
+	u64 csum_complete_tail_slow;
 	u64 csum_unnecessary;
 	u64 csum_unnecessary_inner;
 	u64 csum_none;
+24 -37
drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
···
 	return ret;
 }

-static void mlx5_fpga_tls_release_swid(struct idr *idr,
-				       spinlock_t *idr_spinlock, u32 swid)
+static void *mlx5_fpga_tls_release_swid(struct idr *idr,
+					spinlock_t *idr_spinlock, u32 swid)
 {
 	unsigned long flags;
+	void *ptr;

 	spin_lock_irqsave(idr_spinlock, flags);
-	idr_remove(idr, swid);
+	ptr = idr_remove(idr, swid);
 	spin_unlock_irqrestore(idr_spinlock, flags);
+	return ptr;
 }

 static void mlx_tls_kfree_complete(struct mlx5_fpga_conn *conn,
···
 	kfree(buf);
 }

-struct mlx5_teardown_stream_context {
-	struct mlx5_fpga_tls_command_context cmd;
-	u32 swid;
-};
-
 static void
 mlx5_fpga_tls_teardown_completion(struct mlx5_fpga_conn *conn,
 				  struct mlx5_fpga_device *fdev,
 				  struct mlx5_fpga_tls_command_context *cmd,
 				  struct mlx5_fpga_dma_buf *resp)
 {
-	struct mlx5_teardown_stream_context *ctx =
-		container_of(cmd, struct mlx5_teardown_stream_context, cmd);
-
 	if (resp) {
 		u32 syndrome = MLX5_GET(tls_resp, resp->sg[0].data, syndrome);

···
 			mlx5_fpga_err(fdev,
 				      "Teardown stream failed with syndrome = %d",
 				      syndrome);
-		else if (MLX5_GET(tls_cmd, cmd->buf.sg[0].data, direction_sx))
-			mlx5_fpga_tls_release_swid(&fdev->tls->tx_idr,
-						   &fdev->tls->tx_idr_spinlock,
-						   ctx->swid);
-		else
-			mlx5_fpga_tls_release_swid(&fdev->tls->rx_idr,
-						   &fdev->tls->rx_idr_spinlock,
-						   ctx->swid);
 	}
 	mlx5_fpga_tls_put_command_ctx(cmd);
 }
···
 	void *cmd;
 	int ret;

-	rcu_read_lock();
-	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
-	rcu_read_unlock();
-
-	if (!flow) {
-		WARN_ONCE(1, "Received NULL pointer for handle\n");
-		return -EINVAL;
-	}
-
 	buf = kzalloc(size, GFP_ATOMIC);
 	if (!buf)
 		return -ENOMEM;

 	cmd = (buf + 1);

+	rcu_read_lock();
+	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
+	if (unlikely(!flow)) {
+		rcu_read_unlock();
+		WARN_ONCE(1, "Received NULL pointer for handle\n");
+		kfree(buf);
+		return -EINVAL;
+	}
 	mlx5_fpga_tls_flow_to_cmd(flow, cmd);
+	rcu_read_unlock();

 	MLX5_SET(tls_cmd, cmd, swid, ntohl(handle));
 	MLX5_SET64(tls_cmd, cmd, tls_rcd_sn, be64_to_cpu(rcd_sn));
···
 static void mlx5_fpga_tls_send_teardown_cmd(struct mlx5_core_dev *mdev,
 					    void *flow, u32 swid, gfp_t flags)
 {
-	struct mlx5_teardown_stream_context *ctx;
+	struct mlx5_fpga_tls_command_context *ctx;
 	struct mlx5_fpga_dma_buf *buf;
 	void *cmd;
···
 	if (!ctx)
 		return;

-	buf = &ctx->cmd.buf;
+	buf = &ctx->buf;
 	cmd = (ctx + 1);
 	MLX5_SET(tls_cmd, cmd, command_type, CMD_TEARDOWN_STREAM);
 	MLX5_SET(tls_cmd, cmd, swid, swid);
···
 	buf->sg[0].data = cmd;
 	buf->sg[0].size = MLX5_TLS_COMMAND_SIZE;

-	ctx->swid = swid;
-	mlx5_fpga_tls_cmd_send(mdev->fpga, &ctx->cmd,
+	mlx5_fpga_tls_cmd_send(mdev->fpga, ctx,
 			       mlx5_fpga_tls_teardown_completion);
 }
···
 	struct mlx5_fpga_tls *tls = mdev->fpga->tls;
 	void *flow;

-	rcu_read_lock();
 	if (direction_sx)
-		flow = idr_find(&tls->tx_idr, swid);
+		flow = mlx5_fpga_tls_release_swid(&tls->tx_idr,
+						  &tls->tx_idr_spinlock,
+						  swid);
 	else
-		flow = idr_find(&tls->rx_idr, swid);
-
-	rcu_read_unlock();
+		flow = mlx5_fpga_tls_release_swid(&tls->rx_idr,
+						  &tls->rx_idr_spinlock,
+						  swid);

 	if (!flow) {
 		mlx5_fpga_err(mdev->fpga, "No flow information for swid %u\n",
···
 		return;
 	}

+	synchronize_rcu(); /* before kfree(flow) */
 	mlx5_fpga_tls_send_teardown_cmd(mdev, flow, swid, flags);
 }
+3 -3
drivers/net/ethernet/mellanox/mlxsw/core.c
···
 	if (!(mlxsw_core->bus->features & MLXSW_BUS_F_TXRX))
 		return 0;

-	emad_wq = alloc_workqueue("mlxsw_core_emad", WQ_MEM_RECLAIM, 0);
+	emad_wq = alloc_workqueue("mlxsw_core_emad", 0, 0);
 	if (!emad_wq)
 		return -ENOMEM;
 	mlxsw_core->emad_wq = emad_wq;
···
 {
 	int err;

-	mlxsw_wq = alloc_workqueue(mlxsw_core_driver_name, WQ_MEM_RECLAIM, 0);
+	mlxsw_wq = alloc_workqueue(mlxsw_core_driver_name, 0, 0);
 	if (!mlxsw_wq)
 		return -ENOMEM;
-	mlxsw_owq = alloc_ordered_workqueue("%s_ordered", WQ_MEM_RECLAIM,
+	mlxsw_owq = alloc_ordered_workqueue("%s_ordered", 0,
 					    mlxsw_core_driver_name);
 	if (!mlxsw_owq) {
 		err = -ENOMEM;
+11 -8
drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
···
 	{MLXSW_REG_SBXX_DIR_EGRESS, 1},
 	{MLXSW_REG_SBXX_DIR_EGRESS, 2},
 	{MLXSW_REG_SBXX_DIR_EGRESS, 3},
+	{MLXSW_REG_SBXX_DIR_EGRESS, 15},
 };

 #define MLXSW_SP_SB_ING_TC_COUNT 8
···
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
 	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, 0),
+	MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_STATIC, MLXSW_SP_SB_INFI),
 };

 static int mlxsw_sp_sb_prs_init(struct mlxsw_sp *mlxsw_sp,
···
 	MLXSW_SP_SB_CM(0, 7, 4),
 	MLXSW_SP_SB_CM(0, 7, 4),
 	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
-	MLXSW_SP_SB_CM(0, 7, 4),
+	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
+	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
+	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
+	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
+	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
+	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
+	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
+	MLXSW_SP_SB_CM(0, MLXSW_SP_SB_INFI, 8),
 	MLXSW_SP_SB_CM(1, 0xff, 4),
 };
···
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, 0),
 	MLXSW_SP_SB_PM(0, 0),
+	MLXSW_SP_SB_PM(10000, 90000),
 };

 static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
+1 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
···
 	/* A RIF is not created for macvlan netdevs. Their MAC is used to
 	 * populate the FDB
 	 */
-	if (netif_is_macvlan(dev))
+	if (netif_is_macvlan(dev) || netif_is_l3_master(dev))
 		return 0;

 	for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS); i++) {
+1 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
···
 	u16 fid_index;
 	int err = 0;

-	if (switchdev_trans_ph_prepare(trans))
+	if (switchdev_trans_ph_commit(trans))
 		return 0;

 	bridge_port = mlxsw_sp_bridge_port_find(mlxsw_sp->bridge, orig_dev);
+15 -9
drivers/net/ethernet/mscc/ocelot.c
···
 				   struct netdev_hw_addr *hw_addr)
 {
 	struct ocelot *ocelot = port->ocelot;
-	struct netdev_hw_addr *ha = kzalloc(sizeof(*ha), GFP_KERNEL);
+	struct netdev_hw_addr *ha = kzalloc(sizeof(*ha), GFP_ATOMIC);

 	if (!ha)
 		return -ENOMEM;
···
 		       ETH_GSTRING_LEN);
 }

-static void ocelot_check_stats(struct work_struct *work)
+static void ocelot_update_stats(struct ocelot *ocelot)
 {
-	struct delayed_work *del_work = to_delayed_work(work);
-	struct ocelot *ocelot = container_of(del_work, struct ocelot, stats_work);
 	int i, j;

 	mutex_lock(&ocelot->stats_lock);
···
 		}
 	}

-	cancel_delayed_work(&ocelot->stats_work);
+	mutex_unlock(&ocelot->stats_lock);
+}
+
+static void ocelot_check_stats_work(struct work_struct *work)
+{
+	struct delayed_work *del_work = to_delayed_work(work);
+	struct ocelot *ocelot = container_of(del_work, struct ocelot,
+					     stats_work);
+
+	ocelot_update_stats(ocelot);
+
 	queue_delayed_work(ocelot->stats_queue, &ocelot->stats_work,
 			   OCELOT_STATS_CHECK_DELAY);
-
-	mutex_unlock(&ocelot->stats_lock);
 }

 static void ocelot_get_ethtool_stats(struct net_device *dev,
···
 	int i;

 	/* check and update now */
-	ocelot_check_stats(&ocelot->stats_work.work);
+	ocelot_update_stats(ocelot);

 	/* Copy all counters */
 	for (i = 0; i < ocelot->num_stats; i++)
···
 			 ANA_CPUQ_8021_CFG_CPUQ_BPDU_VAL(6),
 			 ANA_CPUQ_8021_CFG, i);

-	INIT_DELAYED_WORK(&ocelot->stats_work, ocelot_check_stats);
+	INIT_DELAYED_WORK(&ocelot->stats_work, ocelot_check_stats_work);
 	queue_delayed_work(ocelot->stats_queue, &ocelot->stats_work,
 			   OCELOT_STATS_CHECK_DELAY);
 	return 0;
+1
drivers/net/ethernet/neterion/vxge/vxge-config.c
···
 						dma_object->addr))) {
 			vxge_os_dma_free(devh->pdev, memblock,
 					 &dma_object->acc_handle);
+			memblock = NULL;
 			goto exit;
 		}

+5 -2
drivers/net/ethernet/qlogic/qed/qed.h
···
 	u8 num_pf_rls;
 };

+#define QED_OVERFLOW_BIT	1
+
 struct qed_db_recovery_info {
 	struct list_head list;

 	/* Lock to protect the doorbell recovery mechanism list */
 	spinlock_t lock;
+	bool dorq_attn;
 	u32 db_recovery_counter;
+	unsigned long overflow;
 };

 struct storm_stats {
···
 
 /* doorbell recovery mechanism */
 void qed_db_recovery_dp(struct qed_hwfn *p_hwfn);
-void qed_db_recovery_execute(struct qed_hwfn *p_hwfn,
-			     enum qed_db_rec_exec db_exec);
+void qed_db_recovery_execute(struct qed_hwfn *p_hwfn);
 bool qed_edpm_enabled(struct qed_hwfn *p_hwfn);

 /* Other Linux specific common definitions */
+33 -50
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 102 102 103 103 /* Doorbell address sanity (address within doorbell bar range) */ 104 104 static bool qed_db_rec_sanity(struct qed_dev *cdev, 105 - void __iomem *db_addr, void *db_data) 105 + void __iomem *db_addr, 106 + enum qed_db_rec_width db_width, 107 + void *db_data) 106 108 { 109 + u32 width = (db_width == DB_REC_WIDTH_32B) ? 32 : 64; 110 + 107 111 /* Make sure doorbell address is within the doorbell bar */ 108 112 if (db_addr < cdev->doorbells || 109 - (u8 __iomem *)db_addr > 113 + (u8 __iomem *)db_addr + width > 110 114 (u8 __iomem *)cdev->doorbells + cdev->db_size) { 111 115 WARN(true, 112 116 "Illegal doorbell address: %p. Legal range for doorbell addresses is [%p..%p]\n", ··· 163 159 } 164 160 165 161 /* Sanitize doorbell address */ 166 - if (!qed_db_rec_sanity(cdev, db_addr, db_data)) 162 + if (!qed_db_rec_sanity(cdev, db_addr, db_width, db_data)) 167 163 return -EINVAL; 168 164 169 165 /* Obtain hwfn from doorbell address */ ··· 208 204 QED_MSG_IOV, "db recovery - skipping VF doorbell\n"); 209 205 return 0; 210 206 } 211 - 212 - /* Sanitize doorbell address */ 213 - if (!qed_db_rec_sanity(cdev, db_addr, db_data)) 214 - return -EINVAL; 215 207 216 208 /* Obtain hwfn from doorbell address */ 217 209 p_hwfn = qed_db_rec_find_hwfn(cdev, db_addr); ··· 300 300 301 301 /* Ring the doorbell of a single doorbell recovery entry */ 302 302 static void qed_db_recovery_ring(struct qed_hwfn *p_hwfn, 303 - struct qed_db_recovery_entry *db_entry, 304 - enum qed_db_rec_exec db_exec) 303 + struct qed_db_recovery_entry *db_entry) 305 304 { 306 - if (db_exec != DB_REC_ONCE) { 307 - /* Print according to width */ 308 - if (db_entry->db_width == DB_REC_WIDTH_32B) { 309 - DP_VERBOSE(p_hwfn, QED_MSG_SPQ, 310 - "%s doorbell address %p data %x\n", 311 - db_exec == DB_REC_DRY_RUN ? 
312 - "would have rung" : "ringing", 313 - db_entry->db_addr, 314 - *(u32 *)db_entry->db_data); 315 - } else { 316 - DP_VERBOSE(p_hwfn, QED_MSG_SPQ, 317 - "%s doorbell address %p data %llx\n", 318 - db_exec == DB_REC_DRY_RUN ? 319 - "would have rung" : "ringing", 320 - db_entry->db_addr, 321 - *(u64 *)(db_entry->db_data)); 322 - } 305 + /* Print according to width */ 306 + if (db_entry->db_width == DB_REC_WIDTH_32B) { 307 + DP_VERBOSE(p_hwfn, QED_MSG_SPQ, 308 + "ringing doorbell address %p data %x\n", 309 + db_entry->db_addr, 310 + *(u32 *)db_entry->db_data); 311 + } else { 312 + DP_VERBOSE(p_hwfn, QED_MSG_SPQ, 313 + "ringing doorbell address %p data %llx\n", 314 + db_entry->db_addr, 315 + *(u64 *)(db_entry->db_data)); 323 316 } 324 317 325 318 /* Sanity */ 326 319 if (!qed_db_rec_sanity(p_hwfn->cdev, db_entry->db_addr, 327 - db_entry->db_data)) 320 + db_entry->db_width, db_entry->db_data)) 328 321 return; 329 322 330 323 /* Flush the write combined buffer. Since there are multiple doorbelling ··· 327 334 wmb(); 328 335 329 336 /* Ring the doorbell */ 330 - if (db_exec == DB_REC_REAL_DEAL || db_exec == DB_REC_ONCE) { 331 - if (db_entry->db_width == DB_REC_WIDTH_32B) 332 - DIRECT_REG_WR(db_entry->db_addr, 333 - *(u32 *)(db_entry->db_data)); 334 - else 335 - DIRECT_REG_WR64(db_entry->db_addr, 336 - *(u64 *)(db_entry->db_data)); 337 - } 337 + if (db_entry->db_width == DB_REC_WIDTH_32B) 338 + DIRECT_REG_WR(db_entry->db_addr, 339 + *(u32 *)(db_entry->db_data)); 340 + else 341 + DIRECT_REG_WR64(db_entry->db_addr, 342 + *(u64 *)(db_entry->db_data)); 338 343 339 344 /* Flush the write combined buffer. Next doorbell may come from a 340 345 * different entity to the same address... 
··· 341 350 } 342 351 343 352 /* Traverse the doorbell recovery entry list and ring all the doorbells */ 344 - void qed_db_recovery_execute(struct qed_hwfn *p_hwfn, 345 - enum qed_db_rec_exec db_exec) 353 + void qed_db_recovery_execute(struct qed_hwfn *p_hwfn) 346 354 { 347 355 struct qed_db_recovery_entry *db_entry = NULL; 348 356 349 - if (db_exec != DB_REC_ONCE) { 350 - DP_NOTICE(p_hwfn, 351 - "Executing doorbell recovery. Counter was %d\n", 352 - p_hwfn->db_recovery_info.db_recovery_counter); 357 + DP_NOTICE(p_hwfn, "Executing doorbell recovery. Counter was %d\n", 358 + p_hwfn->db_recovery_info.db_recovery_counter); 353 359 354 - /* Track amount of times recovery was executed */ 355 - p_hwfn->db_recovery_info.db_recovery_counter++; 356 - } 360 + /* Track amount of times recovery was executed */ 361 + p_hwfn->db_recovery_info.db_recovery_counter++; 357 362 358 363 /* Protect the list */ 359 364 spin_lock_bh(&p_hwfn->db_recovery_info.lock); 360 365 list_for_each_entry(db_entry, 361 - &p_hwfn->db_recovery_info.list, list_entry) { 362 - qed_db_recovery_ring(p_hwfn, db_entry, db_exec); 363 - if (db_exec == DB_REC_ONCE) 364 - break; 365 - } 366 - 366 + &p_hwfn->db_recovery_info.list, list_entry) 367 + qed_db_recovery_ring(p_hwfn, db_entry); 367 368 spin_unlock_bh(&p_hwfn->db_recovery_info.lock); 368 369 } 369 370
+64 -21
drivers/net/ethernet/qlogic/qed/qed_int.c
··· 378 378 u32 count = QED_DB_REC_COUNT; 379 379 u32 usage = 1; 380 380 381 + /* Flush any pending (e)dpms as they may never arrive */ 382 + qed_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1); 383 + 381 384 /* wait for usage to zero or count to run out. This is necessary since 382 385 * EDPM doorbell transactions can take multiple 64b cycles, and as such 383 386 * can "split" over the pci. Possibly, the doorbell drop can happen with ··· 409 406 410 407 int qed_db_rec_handler(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) 411 408 { 412 - u32 overflow; 409 + u32 attn_ovfl, cur_ovfl; 413 410 int rc; 414 411 415 - overflow = qed_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY); 416 - DP_NOTICE(p_hwfn, "PF Overflow sticky 0x%x\n", overflow); 417 - if (!overflow) { 418 - qed_db_recovery_execute(p_hwfn, DB_REC_ONCE); 412 + attn_ovfl = test_and_clear_bit(QED_OVERFLOW_BIT, 413 + &p_hwfn->db_recovery_info.overflow); 414 + cur_ovfl = qed_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY); 415 + if (!cur_ovfl && !attn_ovfl) 419 416 return 0; 420 - } 421 417 422 - if (qed_edpm_enabled(p_hwfn)) { 418 + DP_NOTICE(p_hwfn, "PF Overflow sticky: attn %u current %u\n", 419 + attn_ovfl, cur_ovfl); 420 + 421 + if (cur_ovfl && !p_hwfn->db_bar_no_edpm) { 423 422 rc = qed_db_rec_flush_queue(p_hwfn, p_ptt); 424 423 if (rc) 425 424 return rc; 426 425 } 427 426 428 - /* Flush any pending (e)dpm as they may never arrive */ 429 - qed_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1); 430 - 431 427 /* Release overflow sticky indication (stop silently dropping everything) */ 432 428 qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0); 433 429 434 430 /* Repeat all last doorbells (doorbell drop recovery) */ 435 - qed_db_recovery_execute(p_hwfn, DB_REC_REAL_DEAL); 431 + qed_db_recovery_execute(p_hwfn); 436 432 437 433 return 0; 438 434 } 439 435 440 - static int qed_dorq_attn_cb(struct qed_hwfn *p_hwfn) 436 + static void qed_dorq_attn_overflow(struct qed_hwfn *p_hwfn) 437 + { 438 + struct qed_ptt *p_ptt = 
p_hwfn->p_dpc_ptt; 439 + u32 overflow; 440 + int rc; 441 + 442 + overflow = qed_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY); 443 + if (!overflow) 444 + goto out; 445 + 446 + /* Run PF doorbell recovery in next periodic handler */ 447 + set_bit(QED_OVERFLOW_BIT, &p_hwfn->db_recovery_info.overflow); 448 + 449 + if (!p_hwfn->db_bar_no_edpm) { 450 + rc = qed_db_rec_flush_queue(p_hwfn, p_ptt); 451 + if (rc) 452 + goto out; 453 + } 454 + 455 + qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0); 456 + out: 457 + /* Schedule the handler even if overflow was not detected */ 458 + qed_periodic_db_rec_start(p_hwfn); 459 + } 460 + 461 + static int qed_dorq_attn_int_sts(struct qed_hwfn *p_hwfn) 441 462 { 442 463 u32 int_sts, first_drop_reason, details, address, all_drops_reason; 443 464 struct qed_ptt *p_ptt = p_hwfn->p_dpc_ptt; 444 - int rc; 445 - 446 - int_sts = qed_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS); 447 - DP_NOTICE(p_hwfn->cdev, "DORQ attention. int_sts was %x\n", int_sts); 448 465 449 466 /* int_sts may be zero since all PFs were interrupted for doorbell 450 467 * overflow but another one already handled it. Can abort here. If 451 468 * This PF also requires overflow recovery we will be interrupted again. 452 469 * The masked almost full indication may also be set. Ignoring. 453 470 */ 471 + int_sts = qed_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS); 454 472 if (!(int_sts & ~DORQ_REG_INT_STS_DORQ_FIFO_AFULL)) 455 473 return 0; 474 + 475 + DP_NOTICE(p_hwfn->cdev, "DORQ attention. 
int_sts was %x\n", int_sts); 456 476 457 477 /* check if db_drop or overflow happened */ 458 478 if (int_sts & (DORQ_REG_INT_STS_DB_DROP | ··· 503 477 GET_FIELD(details, QED_DORQ_ATTENTION_SIZE) * 4, 504 478 first_drop_reason, all_drops_reason); 505 479 506 - rc = qed_db_rec_handler(p_hwfn, p_ptt); 507 - qed_periodic_db_rec_start(p_hwfn); 508 - if (rc) 509 - return rc; 510 - 511 480 /* Clear the doorbell drop details and prepare for next drop */ 512 481 qed_wr(p_hwfn, p_ptt, DORQ_REG_DB_DROP_DETAILS_REL, 0); 513 482 ··· 526 505 DP_INFO(p_hwfn, "DORQ fatal attention\n"); 527 506 528 507 return -EINVAL; 508 + } 509 + 510 + static int qed_dorq_attn_cb(struct qed_hwfn *p_hwfn) 511 + { 512 + p_hwfn->db_recovery_info.dorq_attn = true; 513 + qed_dorq_attn_overflow(p_hwfn); 514 + 515 + return qed_dorq_attn_int_sts(p_hwfn); 516 + } 517 + 518 + static void qed_dorq_attn_handler(struct qed_hwfn *p_hwfn) 519 + { 520 + if (p_hwfn->db_recovery_info.dorq_attn) 521 + goto out; 522 + 523 + /* Call DORQ callback if the attention was missed */ 524 + qed_dorq_attn_cb(p_hwfn); 525 + out: 526 + p_hwfn->db_recovery_info.dorq_attn = false; 529 527 } 530 528 531 529 /* Instead of major changes to the data-structure, we have a some 'special' ··· 1119 1079 } 1120 1080 } 1121 1081 } 1082 + 1083 + /* Handle missed DORQ attention */ 1084 + qed_dorq_attn_handler(p_hwfn); 1122 1085 1123 1086 /* Clear IGU indication for the deasserted bits */ 1124 1087 DIRECT_REG_WR((u8 __iomem *)p_hwfn->regview +
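The qed_int.c changes above move the expensive doorbell recovery out of the attention handler: the attention context only records the overflow in a flag word, and the periodic handler consumes it atomically (`test_and_clear_bit` in the kernel). A small sketch of that handoff using C11 atomics — the names are invented for illustration:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_ulong overflow_flags;   /* zero-initialized at file scope */
#define OVERFLOW_BIT 1UL

/* Attention context: cheap, just records that an overflow happened. */
static void attn_saw_overflow(void)
{
	atomic_fetch_or(&overflow_flags, OVERFLOW_BIT);
}

/* Periodic recovery handler: atomically consume the flag, so the
 * event is acted on exactly once even if the handler runs again. */
static bool rec_consume_overflow(void)
{
	return atomic_fetch_and(&overflow_flags, ~OVERFLOW_BIT) & OVERFLOW_BIT;
}
```

Because the fetch-and both reads and clears in one atomic step, a second run of the handler sees the flag already gone, which is the property the patch relies on.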
+2 -2
drivers/net/ethernet/qlogic/qed/qed_int.h
··· 192 192 193 193 /** 194 194 * @brief - Doorbell Recovery handler. 195 - * Run DB_REAL_DEAL doorbell recovery in case of PF overflow 196 - * (and flush DORQ if needed), otherwise run DB_REC_ONCE. 195 + * Run doorbell recovery in case of PF overflow (and flush DORQ if 196 + * needed). 197 197 * 198 198 * @param p_hwfn 199 199 * @param p_ptt
+1 -1
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 970 970 } 971 971 } 972 972 973 - #define QED_PERIODIC_DB_REC_COUNT 100 973 + #define QED_PERIODIC_DB_REC_COUNT 10 974 974 #define QED_PERIODIC_DB_REC_INTERVAL_MS 100 975 975 #define QED_PERIODIC_DB_REC_INTERVAL \ 976 976 msecs_to_jiffies(QED_PERIODIC_DB_REC_INTERVAL_MS)
+1 -1
drivers/net/ethernet/qlogic/qed/qed_sriov.c
··· 1591 1591 p_vfdev->eth_fp_hsi_minor = ETH_HSI_VER_NO_PKT_LEN_TUNN; 1592 1592 } else { 1593 1593 DP_INFO(p_hwfn, 1594 - "VF[%d] needs fastpath HSI %02x.%02x, which is incompatible with loaded FW's faspath HSI %02x.%02x\n", 1594 + "VF[%d] needs fastpath HSI %02x.%02x, which is incompatible with loaded FW's fastpath HSI %02x.%02x\n", 1595 1595 vf->abs_vf_id, 1596 1596 req->vfdev_info.eth_fp_hsi_major, 1597 1597 req->vfdev_info.eth_fp_hsi_minor,
+3 -4
drivers/net/ethernet/qlogic/qede/qede_ptp.c
··· 490 490 491 491 ptp->clock = ptp_clock_register(&ptp->clock_info, &edev->pdev->dev); 492 492 if (IS_ERR(ptp->clock)) { 493 - rc = -EINVAL; 494 493 DP_ERR(edev, "PTP clock registration failed\n"); 494 + qede_ptp_disable(edev); 495 + rc = -EINVAL; 495 496 goto err2; 496 497 } 497 498 498 499 return 0; 499 500 500 - err2: 501 - qede_ptp_disable(edev); 502 - ptp->clock = NULL; 503 501 err1: 504 502 kfree(ptp); 503 + err2: 505 504 edev->ptp = NULL; 506 505 507 506 return rc;
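The qede_ptp.c hunk reorders the error path so the teardown helper runs while the context is still fully set up, before the allocation is released and the pointer cleared. A toy model of that goto-cleanup ordering (all names — `fake_dev`, `fake_ptp_enable`, the `fail_register` knob — are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

static int disabled, freed;           /* observable side effects */

struct fake_ptp { int registered; };
struct fake_dev { struct fake_ptp *ptp; };

static void fake_disable(struct fake_dev *d) { (void)d; disabled++; }

static int fake_ptp_enable(struct fake_dev *d, bool fail_register)
{
	struct fake_ptp *ptp = calloc(1, sizeof(*ptp));
	int rc;

	if (!ptp)
		return -1;
	d->ptp = ptp;

	if (fail_register) {          /* clock registration failed */
		fake_disable(d);      /* undo setup while d->ptp is valid */
		rc = -1;
		goto err;
	}
	ptp->registered = 1;
	return 0;

err:
	free(ptp);                    /* then release in reverse order */
	freed++;
	d->ptp = NULL;
	return rc;
}
```

The bug class being fixed is exactly the inverse ordering: freeing (or NULLing) the object first and then calling a teardown helper that still dereferences it.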
+26
drivers/net/team/team.c
··· 1246 1246 goto err_option_port_add; 1247 1247 } 1248 1248 1249 + /* set promiscuity level to new slave */ 1250 + if (dev->flags & IFF_PROMISC) { 1251 + err = dev_set_promiscuity(port_dev, 1); 1252 + if (err) 1253 + goto err_set_slave_promisc; 1254 + } 1255 + 1256 + /* set allmulti level to new slave */ 1257 + if (dev->flags & IFF_ALLMULTI) { 1258 + err = dev_set_allmulti(port_dev, 1); 1259 + if (err) { 1260 + if (dev->flags & IFF_PROMISC) 1261 + dev_set_promiscuity(port_dev, -1); 1262 + goto err_set_slave_promisc; 1263 + } 1264 + } 1265 + 1249 1266 netif_addr_lock_bh(dev); 1250 1267 dev_uc_sync_multiple(port_dev, dev); 1251 1268 dev_mc_sync_multiple(port_dev, dev); ··· 1278 1261 netdev_info(dev, "Port device %s added\n", portname); 1279 1262 1280 1263 return 0; 1264 + 1265 + err_set_slave_promisc: 1266 + __team_option_inst_del_port(team, port); 1281 1267 1282 1268 err_option_port_add: 1283 1269 team_upper_dev_unlink(team, port); ··· 1327 1307 1328 1308 team_port_disable(team, port); 1329 1309 list_del_rcu(&port->list); 1310 + 1311 + if (dev->flags & IFF_PROMISC) 1312 + dev_set_promiscuity(port_dev, -1); 1313 + if (dev->flags & IFF_ALLMULTI) 1314 + dev_set_allmulti(port_dev, -1); 1315 + 1330 1316 team_upper_dev_unlink(team, port); 1331 1317 netdev_rx_handler_unregister(port_dev); 1332 1318 team_port_disable_netpoll(port);
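The team.c hunk above makes enslave/unenslave symmetric: adding a port propagates the master's promiscuity and allmulti counts down to the slave (rolling back promiscuity if allmulti fails), and removing a port applies the exact inverse. A compact sketch of that refcount symmetry — the flag values and the `fail_allmulti` knob are invented for illustration:

```c
#include <assert.h>

#define IFF_PROMISC  0x1
#define IFF_ALLMULTI 0x2

struct fake_port { int promisc, allmulti; };

static void set_promisc(struct fake_port *p, int inc) { p->promisc += inc; }
static int set_allmulti(struct fake_port *p, int inc, int fail)
{
	if (fail)
		return -1;
	p->allmulti += inc;
	return 0;
}

static int port_add(struct fake_port *p, int master_flags, int fail_allmulti)
{
	if (master_flags & IFF_PROMISC)
		set_promisc(p, 1);
	if (master_flags & IFF_ALLMULTI) {
		if (set_allmulti(p, 1, fail_allmulti)) {
			/* roll back the promisc bump on failure */
			if (master_flags & IFF_PROMISC)
				set_promisc(p, -1);
			return -1;
		}
	}
	return 0;
}

static void port_del(struct fake_port *p, int master_flags)
{
	if (master_flags & IFF_PROMISC)
		set_promisc(p, -1);
	if (master_flags & IFF_ALLMULTI)
		set_allmulti(p, -1, 0);
}
```

The invariant to check is that every path — success, failure with rollback, and removal — leaves the slave's counts exactly where they started.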
+1 -1
drivers/net/wireless/ath/ath10k/htt_rx.c
··· 2728 2728 num_msdus++; 2729 2729 num_bytes += ret; 2730 2730 } 2731 - ieee80211_return_txq(hw, txq); 2731 + ieee80211_return_txq(hw, txq, false); 2732 2732 ieee80211_txq_schedule_end(hw, txq->ac); 2733 2733 2734 2734 record->num_msdus = cpu_to_le16(num_msdus);
+2 -2
drivers/net/wireless/ath/ath10k/mac.c
··· 4089 4089 if (ret < 0) 4090 4090 break; 4091 4091 } 4092 - ieee80211_return_txq(hw, txq); 4092 + ieee80211_return_txq(hw, txq, false); 4093 4093 ath10k_htt_tx_txq_update(hw, txq); 4094 4094 if (ret == -EBUSY) 4095 4095 break; ··· 4374 4374 if (ret < 0) 4375 4375 break; 4376 4376 } 4377 - ieee80211_return_txq(hw, txq); 4377 + ieee80211_return_txq(hw, txq, false); 4378 4378 ath10k_htt_tx_txq_update(hw, txq); 4379 4379 out: 4380 4380 ieee80211_txq_schedule_end(hw, ac);
+4 -1
drivers/net/wireless/ath/ath9k/xmit.c
··· 1938 1938 goto out; 1939 1939 1940 1940 while ((queue = ieee80211_next_txq(hw, txq->mac80211_qnum))) { 1941 + bool force; 1942 + 1941 1943 tid = (struct ath_atx_tid *)queue->drv_priv; 1942 1944 1943 1945 ret = ath_tx_sched_aggr(sc, txq, tid); 1944 1946 ath_dbg(common, QUEUE, "ath_tx_sched_aggr returned %d\n", ret); 1945 1947 1946 - ieee80211_return_txq(hw, queue); 1948 + force = !skb_queue_empty(&tid->retry_q); 1949 + ieee80211_return_txq(hw, queue, force); 1947 1950 } 1948 1951 1949 1952 out:
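The ath9k and ath10k hunks adapt to a mac80211 API change: `ieee80211_return_txq()` grew a `force` argument, and ath9k passes `force = true` when the tid still has frames on its retry queue so the queue stays scheduled. A toy model of that decision, under the assumption that `force` simply overrides the "queue looks empty" check (names invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_txq { int pending; bool scheduled; };

/* Toy stand-in for ieee80211_return_txq(hw, txq, force): keep the
 * queue on the schedule if it has pending frames OR the caller
 * forces it (e.g. ath9k's retry queue is non-empty). */
static void toy_return_txq(struct toy_txq *q, bool force)
{
	q->scheduled = force || q->pending > 0;
}
```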
+22 -8
drivers/net/wireless/intel/iwlwifi/cfg/22000.c
··· 82 82 #define IWL_22000_HR_A0_FW_PRE "iwlwifi-QuQnj-a0-hr-a0-" 83 83 #define IWL_22000_SU_Z0_FW_PRE "iwlwifi-su-z0-" 84 84 #define IWL_QU_B_JF_B_FW_PRE "iwlwifi-Qu-b0-jf-b0-" 85 + #define IWL_QUZ_A_HR_B_FW_PRE "iwlwifi-QuZ-a0-hr-b0-" 85 86 #define IWL_QNJ_B_JF_B_FW_PRE "iwlwifi-QuQnj-b0-jf-b0-" 86 87 #define IWL_CC_A_FW_PRE "iwlwifi-cc-a0-" 87 88 #define IWL_22000_SO_A_JF_B_FW_PRE "iwlwifi-so-a0-jf-b0-" ··· 106 105 IWL_22000_HR_A0_FW_PRE __stringify(api) ".ucode" 107 106 #define IWL_22000_SU_Z0_MODULE_FIRMWARE(api) \ 108 107 IWL_22000_SU_Z0_FW_PRE __stringify(api) ".ucode" 109 - #define IWL_QU_B_JF_B_MODULE_FIRMWARE(api) \ 110 - IWL_QU_B_JF_B_FW_PRE __stringify(api) ".ucode" 108 + #define IWL_QUZ_A_HR_B_MODULE_FIRMWARE(api) \ 109 + IWL_QUZ_A_HR_B_FW_PRE __stringify(api) ".ucode" 111 110 #define IWL_QU_B_JF_B_MODULE_FIRMWARE(api) \ 112 111 IWL_QU_B_JF_B_FW_PRE __stringify(api) ".ucode" 113 112 #define IWL_QNJ_B_JF_B_MODULE_FIRMWARE(api) \ ··· 236 235 .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT, 237 236 }; 238 237 239 - const struct iwl_cfg iwl22260_2ax_cfg = { 240 - .name = "Intel(R) Wireless-AX 22260", 238 + const struct iwl_cfg iwl_ax101_cfg_quz_hr = { 239 + .name = "Intel(R) Wi-Fi 6 AX101", 240 + .fw_name_pre = IWL_QUZ_A_HR_B_FW_PRE, 241 + IWL_DEVICE_22500, 242 + /* 243 + * This device doesn't support receiving BlockAck with a large bitmap 244 + * so we need to restrict the size of transmitted aggregation to the 245 + * HT size; mac80211 would otherwise pick the HE max (256) by default. 
246 + */ 247 + .max_tx_agg_size = IEEE80211_MAX_AMPDU_BUF_HT, 248 + }; 249 + 250 + const struct iwl_cfg iwl_ax200_cfg_cc = { 251 + .name = "Intel(R) Wi-Fi 6 AX200 160MHz", 241 252 .fw_name_pre = IWL_CC_A_FW_PRE, 242 253 IWL_DEVICE_22500, 243 254 /* ··· 262 249 }; 263 250 264 251 const struct iwl_cfg killer1650x_2ax_cfg = { 265 - .name = "Killer(R) Wireless-AX 1650x Wireless Network Adapter (200NGW)", 252 + .name = "Killer(R) Wi-Fi 6 AX1650x 160MHz Wireless Network Adapter (200NGW)", 266 253 .fw_name_pre = IWL_CC_A_FW_PRE, 267 254 IWL_DEVICE_22500, 268 255 /* ··· 275 262 }; 276 263 277 264 const struct iwl_cfg killer1650w_2ax_cfg = { 278 - .name = "Killer(R) Wireless-AX 1650w Wireless Network Adapter (200D2W)", 265 + .name = "Killer(R) Wi-Fi 6 AX1650w 160MHz Wireless Network Adapter (200D2W)", 279 266 .fw_name_pre = IWL_CC_A_FW_PRE, 280 267 IWL_DEVICE_22500, 281 268 /* ··· 341 328 }; 342 329 343 330 const struct iwl_cfg killer1650s_2ax_cfg_qu_b0_hr_b0 = { 344 - .name = "Killer(R) Wireless-AX 1650i Wireless Network Adapter (22560NGW)", 331 + .name = "Killer(R) Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter (201NGW)", 345 332 .fw_name_pre = IWL_22000_QU_B_HR_B_FW_PRE, 346 333 IWL_DEVICE_22500, 347 334 /* ··· 353 340 }; 354 341 355 342 const struct iwl_cfg killer1650i_2ax_cfg_qu_b0_hr_b0 = { 356 - .name = "Killer(R) Wireless-AX 1650s Wireless Network Adapter (22560D2W)", 343 + .name = "Killer(R) Wi-Fi 6 AX1650s 160MHz Wireless Network Adapter (201D2W)", 357 344 .fw_name_pre = IWL_22000_QU_B_HR_B_FW_PRE, 358 345 IWL_DEVICE_22500, 359 346 /* ··· 457 444 MODULE_FIRMWARE(IWL_22000_HR_A0_QNJ_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); 458 445 MODULE_FIRMWARE(IWL_22000_SU_Z0_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); 459 446 MODULE_FIRMWARE(IWL_QU_B_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); 447 + MODULE_FIRMWARE(IWL_QUZ_A_HR_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); 460 448 MODULE_FIRMWARE(IWL_QNJ_B_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); 461 449 
MODULE_FIRMWARE(IWL_CC_A_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX)); 462 450 MODULE_FIRMWARE(IWL_22000_SO_A_JF_B_MODULE_FIRMWARE(IWL_22000_UCODE_API_MAX));
+8 -26
drivers/net/wireless/intel/iwlwifi/fw/dbg.c
··· 1614 1614 if (!range) { 1615 1615 IWL_ERR(fwrt, "Failed to fill region header: id=%d, type=%d\n", 1616 1616 le32_to_cpu(reg->region_id), type); 1617 + memset(*data, 0, le32_to_cpu((*data)->len)); 1617 1618 return; 1618 1619 } 1619 1620 ··· 1624 1623 if (range_size < 0) { 1625 1624 IWL_ERR(fwrt, "Failed to dump region: id=%d, type=%d\n", 1626 1625 le32_to_cpu(reg->region_id), type); 1626 + memset(*data, 0, le32_to_cpu((*data)->len)); 1627 1627 return; 1628 1628 } 1629 1629 range = range + range_size; ··· 1809 1807 1810 1808 trigger = fwrt->dump.active_trigs[id].trig; 1811 1809 1812 - size = sizeof(*dump_file); 1813 - size += iwl_fw_ini_get_trigger_len(fwrt, trigger); 1814 - 1810 + size = iwl_fw_ini_get_trigger_len(fwrt, trigger); 1815 1811 if (!size) 1816 1812 return NULL; 1813 + 1814 + size += sizeof(*dump_file); 1817 1815 1818 1816 dump_file = vzalloc(size); 1819 1817 if (!dump_file) ··· 1944 1942 iwl_dump_error_desc->len = 0; 1945 1943 1946 1944 ret = iwl_fw_dbg_collect_desc(fwrt, iwl_dump_error_desc, false, 0); 1947 - if (ret) { 1945 + if (ret) 1948 1946 kfree(iwl_dump_error_desc); 1949 - } else { 1950 - set_bit(STATUS_FW_WAIT_DUMP, &fwrt->trans->status); 1951 - 1952 - /* trigger nmi to halt the fw */ 1953 - iwl_force_nmi(fwrt->trans); 1954 - } 1947 + else 1948 + iwl_trans_sync_nmi(fwrt->trans); 1955 1949 1956 1950 return ret; 1957 1951 } ··· 2487 2489 2488 2490 void iwl_fwrt_stop_device(struct iwl_fw_runtime *fwrt) 2489 2491 { 2490 - /* if the wait event timeout elapses instead of wake up then 2491 - * the driver did not receive NMI interrupt and can not assume the FW 2492 - * is halted 2493 - */ 2494 - int ret = wait_event_timeout(fwrt->trans->fw_halt_waitq, 2495 - !test_bit(STATUS_FW_WAIT_DUMP, 2496 - &fwrt->trans->status), 2497 - msecs_to_jiffies(2000)); 2498 - if (!ret) { 2499 - /* failed to receive NMI interrupt, assuming the FW is stuck */ 2500 - set_bit(STATUS_FW_ERROR, &fwrt->trans->status); 2501 - 2502 - clear_bit(STATUS_FW_WAIT_DUMP, 
&fwrt->trans->status); 2503 - } 2504 - 2505 - /* Assuming the op mode mutex is held at this point */ 2506 2492 iwl_fw_dbg_collect_sync(fwrt); 2507 2493 2508 2494 iwl_trans_stop_device(fwrt->trans);
-1
drivers/net/wireless/intel/iwlwifi/fw/init.c
··· 76 76 fwrt->ops_ctx = ops_ctx; 77 77 INIT_DELAYED_WORK(&fwrt->dump.wk, iwl_fw_error_dump_wk); 78 78 iwl_fwrt_dbgfs_register(fwrt, dbgfs_dir); 79 - init_waitqueue_head(&fwrt->trans->fw_halt_waitq); 80 79 } 81 80 IWL_EXPORT_SYMBOL(iwl_fw_runtime_init); 82 81
+2 -1
drivers/net/wireless/intel/iwlwifi/iwl-config.h
··· 549 549 extern const struct iwl_cfg iwl22000_2ac_cfg_hr_cdb; 550 550 extern const struct iwl_cfg iwl22000_2ac_cfg_jf; 551 551 extern const struct iwl_cfg iwl_ax101_cfg_qu_hr; 552 + extern const struct iwl_cfg iwl_ax101_cfg_quz_hr; 552 553 extern const struct iwl_cfg iwl22000_2ax_cfg_hr; 553 - extern const struct iwl_cfg iwl22260_2ax_cfg; 554 + extern const struct iwl_cfg iwl_ax200_cfg_cc; 554 555 extern const struct iwl_cfg killer1650s_2ax_cfg_qu_b0_hr_b0; 555 556 extern const struct iwl_cfg killer1650i_2ax_cfg_qu_b0_hr_b0; 556 557 extern const struct iwl_cfg killer1650x_2ax_cfg;
+1
drivers/net/wireless/intel/iwlwifi/iwl-csr.h
··· 327 327 #define CSR_HW_REV_TYPE_NONE (0x00001F0) 328 328 #define CSR_HW_REV_TYPE_QNJ (0x0000360) 329 329 #define CSR_HW_REV_TYPE_QNJ_B0 (0x0000364) 330 + #define CSR_HW_REV_TYPE_QUZ (0x0000354) 330 331 #define CSR_HW_REV_TYPE_HR_CDB (0x0000340) 331 332 #define CSR_HW_REV_TYPE_SO (0x0000370) 332 333 #define CSR_HW_REV_TYPE_TY (0x0000420)
+6 -6
drivers/net/wireless/intel/iwlwifi/iwl-trans.h
··· 338 338 * are sent 339 339 * @STATUS_TRANS_IDLE: the trans is idle - general commands are not to be sent 340 340 * @STATUS_TRANS_DEAD: trans is dead - avoid any read/write operation 341 - * @STATUS_FW_WAIT_DUMP: if set, wait until cleared before collecting dump 342 341 */ 343 342 enum iwl_trans_status { 344 343 STATUS_SYNC_HCMD_ACTIVE, ··· 350 351 STATUS_TRANS_GOING_IDLE, 351 352 STATUS_TRANS_IDLE, 352 353 STATUS_TRANS_DEAD, 353 - STATUS_FW_WAIT_DUMP, 354 354 }; 355 355 356 356 static inline int ··· 616 618 struct iwl_trans_dump_data *(*dump_data)(struct iwl_trans *trans, 617 619 u32 dump_mask); 618 620 void (*debugfs_cleanup)(struct iwl_trans *trans); 621 + void (*sync_nmi)(struct iwl_trans *trans); 619 622 }; 620 623 621 624 /** ··· 830 831 u32 lmac_error_event_table[2]; 831 832 u32 umac_error_event_table; 832 833 unsigned int error_event_table_tlv_status; 833 - wait_queue_head_t fw_halt_waitq; 834 834 835 835 /* pointer to trans specific struct */ 836 836 /*Ensure that this pointer will always be aligned to sizeof pointer */ ··· 1237 1239 /* prevent double restarts due to the same erroneous FW */ 1238 1240 if (!test_and_set_bit(STATUS_FW_ERROR, &trans->status)) 1239 1241 iwl_op_mode_nic_error(trans->op_mode); 1242 + } 1240 1243 1241 - if (test_and_clear_bit(STATUS_FW_WAIT_DUMP, &trans->status)) 1242 - wake_up(&trans->fw_halt_waitq); 1243 - 1244 + static inline void iwl_trans_sync_nmi(struct iwl_trans *trans) 1245 + { 1246 + if (trans->ops->sync_nmi) 1247 + trans->ops->sync_nmi(trans); 1244 1248 } 1245 1249 1246 1250 /*****************************************************
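The iwl-trans.h hunk replaces the waitqueue plus `STATUS_FW_WAIT_DUMP` bit with an optional `sync_nmi` hook in the transport ops, reached through a wrapper that tolerates the hook being absent. A minimal sketch of that optional-op dispatch (names invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

struct fake_trans_ops {
	void (*sync_nmi)(void *trans);   /* optional, may be NULL */
};

struct fake_trans {
	const struct fake_trans_ops *ops;
	int nmi_synced;
};

/* Wrapper: callers never check for the hook themselves. */
static void trans_sync_nmi(struct fake_trans *t)
{
	if (t->ops->sync_nmi)
		t->ops->sync_nmi(t);
}

/* A transport that implements the hook (PCIe, in the patch). */
static void pcie_sync_nmi(void *trans)
{
	((struct fake_trans *)trans)->nmi_synced++;
}
```

Transports without the hook just leave the pointer NULL, and the call becomes a no-op rather than a blocking wait.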
+22 -49
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 2714 2714 2715 2715 iwl_mvm_mac_ctxt_remove(mvm, vif); 2716 2716 2717 - kfree(mvmvif->ap_wep_key); 2718 - mvmvif->ap_wep_key = NULL; 2719 - 2720 2717 mutex_unlock(&mvm->mutex); 2721 2718 } 2722 2719 ··· 3180 3183 ret = iwl_mvm_update_sta(mvm, vif, sta); 3181 3184 } else if (old_state == IEEE80211_STA_ASSOC && 3182 3185 new_state == IEEE80211_STA_AUTHORIZED) { 3183 - /* if wep is used, need to set the key for the station now */ 3184 - if (vif->type == NL80211_IFTYPE_AP && mvmvif->ap_wep_key) { 3185 - mvm_sta->wep_key = 3186 - kmemdup(mvmvif->ap_wep_key, 3187 - sizeof(*mvmvif->ap_wep_key) + 3188 - mvmvif->ap_wep_key->keylen, 3189 - GFP_KERNEL); 3190 - if (!mvm_sta->wep_key) { 3191 - ret = -ENOMEM; 3192 - goto out_unlock; 3193 - } 3194 - 3195 - ret = iwl_mvm_set_sta_key(mvm, vif, sta, 3196 - mvm_sta->wep_key, 3197 - STA_KEY_IDX_INVALID); 3198 - } else { 3199 - ret = 0; 3200 - } 3186 + ret = 0; 3201 3187 3202 3188 /* we don't support TDLS during DCM */ 3203 3189 if (iwl_mvm_phy_ctx_count(mvm) > 1) ··· 3222 3242 NL80211_TDLS_DISABLE_LINK); 3223 3243 } 3224 3244 3225 - /* Remove STA key if this is an AP using WEP */ 3226 - if (vif->type == NL80211_IFTYPE_AP && mvmvif->ap_wep_key) { 3227 - int rm_ret = iwl_mvm_remove_sta_key(mvm, vif, sta, 3228 - mvm_sta->wep_key); 3229 - 3230 - if (!ret) 3231 - ret = rm_ret; 3232 - kfree(mvm_sta->wep_key); 3233 - mvm_sta->wep_key = NULL; 3234 - } 3235 - 3236 3245 if (unlikely(ret && 3237 3246 test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, 3238 3247 &mvm->status))) ··· 3258 3289 struct ieee80211_sta *sta, u32 changed) 3259 3290 { 3260 3291 struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); 3292 + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); 3293 + 3294 + if (changed & (IEEE80211_RC_BW_CHANGED | 3295 + IEEE80211_RC_SUPP_RATES_CHANGED | 3296 + IEEE80211_RC_NSS_CHANGED)) 3297 + iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band, 3298 + true); 3261 3299 3262 3300 if (vif->type == NL80211_IFTYPE_STATION && 3263 3301 
changed & IEEE80211_RC_NSS_CHANGED) ··· 3415 3439 break; 3416 3440 case WLAN_CIPHER_SUITE_WEP40: 3417 3441 case WLAN_CIPHER_SUITE_WEP104: 3418 - if (vif->type == NL80211_IFTYPE_AP) { 3419 - struct iwl_mvm_vif *mvmvif = 3420 - iwl_mvm_vif_from_mac80211(vif); 3421 - 3422 - mvmvif->ap_wep_key = kmemdup(key, 3423 - sizeof(*key) + key->keylen, 3424 - GFP_KERNEL); 3425 - if (!mvmvif->ap_wep_key) 3426 - return -ENOMEM; 3427 - } 3428 - 3429 - if (vif->type != NL80211_IFTYPE_STATION) 3430 - return 0; 3431 - break; 3442 + if (vif->type == NL80211_IFTYPE_STATION) 3443 + break; 3444 + if (iwl_mvm_has_new_tx_api(mvm)) 3445 + return -EOPNOTSUPP; 3446 + /* support HW crypto on TX */ 3447 + return 0; 3432 3448 default: 3433 3449 /* currently FW supports only one optional cipher scheme */ 3434 3450 if (hw->n_cipher_schemes && ··· 3508 3540 ret = iwl_mvm_set_sta_key(mvm, vif, sta, key, key_offset); 3509 3541 if (ret) { 3510 3542 IWL_WARN(mvm, "set key failed\n"); 3543 + key->hw_key_idx = STA_KEY_IDX_INVALID; 3511 3544 /* 3512 3545 * can't add key for RX, but we don't need it 3513 - * in the device for TX so still return 0 3546 + * in the device for TX so still return 0, 3547 + * unless we have new TX API where we cannot 3548 + * put key material into the TX_CMD 3514 3549 */ 3515 - key->hw_key_idx = STA_KEY_IDX_INVALID; 3516 - ret = 0; 3550 + if (iwl_mvm_has_new_tx_api(mvm)) 3551 + ret = -EOPNOTSUPP; 3552 + else 3553 + ret = 0; 3517 3554 } 3518 3555 3519 3556 break;
-1
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
··· 498 498 netdev_features_t features; 499 499 500 500 struct iwl_probe_resp_data __rcu *probe_resp_data; 501 - struct ieee80211_key_conf *ap_wep_key; 502 501 }; 503 502 504 503 static inline struct iwl_mvm_vif *
+4 -39
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
··· 8 8 * Copyright(c) 2012 - 2015 Intel Corporation. All rights reserved. 9 9 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 10 10 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 11 - * Copyright(c) 2018 Intel Corporation 11 + * Copyright(c) 2018 - 2019 Intel Corporation 12 12 * 13 13 * This program is free software; you can redistribute it and/or modify 14 14 * it under the terms of version 2 of the GNU General Public License as ··· 31 31 * Copyright(c) 2012 - 2015 Intel Corporation. All rights reserved. 32 32 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 33 33 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 34 - * Copyright(c) 2018 Intel Corporation 34 + * Copyright(c) 2018 - 2019 Intel Corporation 35 35 * All rights reserved. 36 36 * 37 37 * Redistribution and use in source and binary forms, with or without ··· 1399 1399 1400 1400 iwl_mvm_sta_alloc_queue(mvm, txq->sta, txq->ac, tid); 1401 1401 list_del_init(&mvmtxq->list); 1402 + local_bh_disable(); 1402 1403 iwl_mvm_mac_itxq_xmit(mvm->hw, txq); 1404 + local_bh_enable(); 1403 1405 } 1404 1406 1405 1407 mutex_unlock(&mvm->mutex); ··· 2335 2333 iwl_mvm_enable_txq(mvm, NULL, mvmvif->cab_queue, 0, &cfg, 2336 2334 timeout); 2337 2335 2338 - if (mvmvif->ap_wep_key) { 2339 - u8 key_offset = iwl_mvm_set_fw_key_idx(mvm); 2340 - 2341 - __set_bit(key_offset, mvm->fw_key_table); 2342 - 2343 - if (key_offset == STA_KEY_IDX_INVALID) 2344 - return -ENOSPC; 2345 - 2346 - ret = iwl_mvm_send_sta_key(mvm, mvmvif->mcast_sta.sta_id, 2347 - mvmvif->ap_wep_key, true, 0, NULL, 0, 2348 - key_offset, 0); 2349 - if (ret) 2350 - return ret; 2351 - } 2352 - 2353 2336 return 0; 2354 2337 } 2355 2338 ··· 2405 2418 iwl_mvm_flush_sta(mvm, &mvmvif->mcast_sta, true, 0); 2406 2419 2407 2420 iwl_mvm_disable_txq(mvm, NULL, mvmvif->cab_queue, 0, 0); 2408 - 2409 - if (mvmvif->ap_wep_key) { 2410 - int i; 2411 - 2412 - if (!__test_and_clear_bit(mvmvif->ap_wep_key->hw_key_idx, 2413 - mvm->fw_key_table)) { 2414 - 
IWL_ERR(mvm, "offset %d not used in fw key table.\n", 2415 - mvmvif->ap_wep_key->hw_key_idx); 2416 - return -ENOENT; 2417 - } 2418 - 2419 - /* track which key was deleted last */ 2420 - for (i = 0; i < STA_KEY_MAX_NUM; i++) { 2421 - if (mvm->fw_key_deleted[i] < U8_MAX) 2422 - mvm->fw_key_deleted[i]++; 2423 - } 2424 - mvm->fw_key_deleted[mvmvif->ap_wep_key->hw_key_idx] = 0; 2425 - ret = __iwl_mvm_remove_sta_key(mvm, mvmvif->mcast_sta.sta_id, 2426 - mvmvif->ap_wep_key, true); 2427 - if (ret) 2428 - return ret; 2429 - } 2430 2421 2431 2422 ret = iwl_mvm_rm_sta_common(mvm, mvmvif->mcast_sta.sta_id); 2432 2423 if (ret)
+2 -5
drivers/net/wireless/intel/iwlwifi/mvm/sta.h
··· 8 8 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 9 9 * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 10 10 * Copyright(c) 2015 - 2016 Intel Deutschland GmbH 11 - * Copyright(c) 2018 Intel Corporation 11 + * Copyright(c) 2018 - 2019 Intel Corporation 12 12 * 13 13 * This program is free software; you can redistribute it and/or modify 14 14 * it under the terms of version 2 of the GNU General Public License as ··· 31 31 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 32 32 * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 33 33 * Copyright(c) 2015 - 2016 Intel Deutschland GmbH 34 - * Copyright(c) 2018 Intel Corporation 34 + * Copyright(c) 2018 - 2019 Intel Corporation 35 35 * All rights reserved. 36 36 * 37 37 * Redistribution and use in source and binary forms, with or without ··· 394 394 * the BA window. To be used for UAPSD only. 395 395 * @ptk_pn: per-queue PTK PN data structures 396 396 * @dup_data: per queue duplicate packet detection data 397 - * @wep_key: used in AP mode. Is a duplicate of the WEP key. 398 397 * @deferred_traffic_tid_map: indication bitmap of deferred traffic per-TID 399 398 * @tx_ant: the index of the antenna to use for data tx to this station. Only 400 399 * used during connection establishment (e.g. for the 4 way handshake ··· 424 425 struct ieee80211_vif *vif; 425 426 struct iwl_mvm_key_pn __rcu *ptk_pn[4]; 426 427 struct iwl_mvm_rxq_dup_data *dup_data; 427 - 428 - struct ieee80211_key_conf *wep_key; 429 428 430 429 u8 reserved_queue; 431 430
+7 -6
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 953 953 {IWL_PCI_DEVICE(0xA0F0, 0x1652, killer1650i_2ax_cfg_qu_b0_hr_b0)}, 954 954 {IWL_PCI_DEVICE(0xA0F0, 0x4070, iwl_ax101_cfg_qu_hr)}, 955 955 956 - {IWL_PCI_DEVICE(0x2723, 0x0080, iwl22260_2ax_cfg)}, 957 - {IWL_PCI_DEVICE(0x2723, 0x0084, iwl22260_2ax_cfg)}, 958 - {IWL_PCI_DEVICE(0x2723, 0x0088, iwl22260_2ax_cfg)}, 959 - {IWL_PCI_DEVICE(0x2723, 0x008C, iwl22260_2ax_cfg)}, 956 + {IWL_PCI_DEVICE(0x2723, 0x0080, iwl_ax200_cfg_cc)}, 957 + {IWL_PCI_DEVICE(0x2723, 0x0084, iwl_ax200_cfg_cc)}, 958 + {IWL_PCI_DEVICE(0x2723, 0x0088, iwl_ax200_cfg_cc)}, 959 + {IWL_PCI_DEVICE(0x2723, 0x008C, iwl_ax200_cfg_cc)}, 960 960 {IWL_PCI_DEVICE(0x2723, 0x1653, killer1650w_2ax_cfg)}, 961 961 {IWL_PCI_DEVICE(0x2723, 0x1654, killer1650x_2ax_cfg)}, 962 - {IWL_PCI_DEVICE(0x2723, 0x4080, iwl22260_2ax_cfg)}, 963 - {IWL_PCI_DEVICE(0x2723, 0x4088, iwl22260_2ax_cfg)}, 962 + {IWL_PCI_DEVICE(0x2723, 0x2080, iwl_ax200_cfg_cc)}, 963 + {IWL_PCI_DEVICE(0x2723, 0x4080, iwl_ax200_cfg_cc)}, 964 + {IWL_PCI_DEVICE(0x2723, 0x4088, iwl_ax200_cfg_cc)}, 964 965 965 966 {IWL_PCI_DEVICE(0x1a56, 0x1653, killer1650w_2ax_cfg)}, 966 967 {IWL_PCI_DEVICE(0x1a56, 0x1654, killer1650x_2ax_cfg)},
+1 -1
drivers/net/wireless/intel/iwlwifi/pcie/internal.h
··· 1043 1043 1044 1044 void iwl_trans_pcie_rf_kill(struct iwl_trans *trans, bool state); 1045 1045 void iwl_trans_pcie_dump_regs(struct iwl_trans *trans); 1046 - void iwl_trans_sync_nmi(struct iwl_trans *trans); 1046 + void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans); 1047 1047 1048 1048 #ifdef CONFIG_IWLWIFI_DEBUGFS 1049 1049 int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans);
+8 -3
drivers/net/wireless/intel/iwlwifi/pcie/trans.c
··· 3318 3318 .unref = iwl_trans_pcie_unref, \ 3319 3319 .dump_data = iwl_trans_pcie_dump_data, \ 3320 3320 .d3_suspend = iwl_trans_pcie_d3_suspend, \ 3321 - .d3_resume = iwl_trans_pcie_d3_resume 3321 + .d3_resume = iwl_trans_pcie_d3_resume, \ 3322 + .sync_nmi = iwl_trans_pcie_sync_nmi 3322 3323 3323 3324 #ifdef CONFIG_PM_SLEEP 3324 3325 #define IWL_TRANS_PM_OPS \ ··· 3543 3542 } 3544 3543 } else if (cfg == &iwl_ax101_cfg_qu_hr) { 3545 3544 if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == 3545 + CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) && 3546 + trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0) { 3547 + trans->cfg = &iwl22000_2ax_cfg_qnj_hr_b0; 3548 + } else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == 3546 3549 CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR)) { 3547 3550 trans->cfg = &iwl_ax101_cfg_qu_hr; 3548 3551 } else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == ··· 3565 3560 } 3566 3561 } else if (CSR_HW_RF_ID_TYPE_CHIP_ID(trans->hw_rf_id) == 3567 3562 CSR_HW_RF_ID_TYPE_CHIP_ID(CSR_HW_RF_ID_TYPE_HR) && 3568 - (trans->cfg != &iwl22260_2ax_cfg || 3563 + (trans->cfg != &iwl_ax200_cfg_cc || 3569 3564 trans->hw_rev == CSR_HW_REV_TYPE_QNJ_B0)) { 3570 3565 u32 hw_status; 3571 3566 ··· 3642 3637 return ERR_PTR(ret); 3643 3638 } 3644 3639 3645 - void iwl_trans_sync_nmi(struct iwl_trans *trans) 3640 + void iwl_trans_pcie_sync_nmi(struct iwl_trans *trans) 3646 3641 { 3647 3642 unsigned long timeout = jiffies + IWL_TRANS_NMI_TIMEOUT; 3648 3643
+1 -1
drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
··· 965 965 cmd_str); 966 966 ret = -ETIMEDOUT; 967 967 968 - iwl_trans_sync_nmi(trans); 968 + iwl_trans_pcie_sync_nmi(trans); 969 969 goto cancel; 970 970 } 971 971
+1 -1
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 1960 1960 iwl_get_cmd_string(trans, cmd->id)); 1961 1961 ret = -ETIMEDOUT; 1962 1962 1963 - iwl_trans_sync_nmi(trans); 1963 + iwl_trans_pcie_sync_nmi(trans); 1964 1964 goto cancel; 1965 1965 } 1966 1966
+15 -4
drivers/net/wireless/mac80211_hwsim.c
··· 2644 2644 enum nl80211_band band; 2645 2645 const struct ieee80211_ops *ops = &mac80211_hwsim_ops; 2646 2646 struct net *net; 2647 - int idx; 2647 + int idx, i; 2648 2648 int n_limits = 0; 2649 2649 2650 2650 if (WARN_ON(param->channels > 1 && !param->use_chanctx)) ··· 2768 2768 goto failed_hw; 2769 2769 } 2770 2770 2771 + data->if_combination.max_interfaces = 0; 2772 + for (i = 0; i < n_limits; i++) 2773 + data->if_combination.max_interfaces += 2774 + data->if_limits[i].max; 2775 + 2771 2776 data->if_combination.n_limits = n_limits; 2772 - data->if_combination.max_interfaces = 2048; 2773 2777 data->if_combination.limits = data->if_limits; 2774 2778 2775 - hw->wiphy->iface_combinations = &data->if_combination; 2776 - hw->wiphy->n_iface_combinations = 1; 2779 + /* 2780 + * If we actually were asked to support combinations, 2781 + * advertise them - if there's only a single thing like 2782 + * only IBSS then don't advertise it as combinations. 2783 + */ 2784 + if (data->if_combination.max_interfaces > 1) { 2785 + hw->wiphy->iface_combinations = &data->if_combination; 2786 + hw->wiphy->n_iface_combinations = 1; 2787 + } 2777 2788 2778 2789 if (param->ciphers) { 2779 2790 memcpy(data->ciphers, param->ciphers,
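The hwsim change above replaces a hard-coded `max_interfaces = 2048` with the sum of the per-limit maxima, and only advertises combinations when more than one interface can actually coexist. A minimal user-space sketch of that logic (the `iface_limit` struct here is a hypothetical stand-in for `struct ieee80211_iface_limit`, not the kernel type):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct ieee80211_iface_limit: each entry
 * caps how many interfaces of some type may coexist. */
struct iface_limit {
	int max;
};

/* Sum the per-limit maxima, as the hwsim fix does, instead of
 * advertising an arbitrary large constant. */
static int combination_max_interfaces(const struct iface_limit *limits,
				      size_t n_limits)
{
	int total = 0;
	size_t i;

	for (i = 0; i < n_limits; i++)
		total += limits[i].max;
	return total;
}

/* A combination is only worth advertising when more than one
 * interface can actually be active at once. */
static int should_advertise(const struct iface_limit *limits, size_t n)
{
	return combination_max_interfaces(limits, n) > 1;
}
```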
+2
drivers/net/wireless/mediatek/mt76/mt7603/init.c
··· 510 510 bus_ops->rmw = mt7603_rmw; 511 511 dev->mt76.bus = bus_ops; 512 512 513 + spin_lock_init(&dev->ps_lock); 514 + 513 515 INIT_DELAYED_WORK(&dev->mac_work, mt7603_mac_work); 514 516 tasklet_init(&dev->pre_tbtt_tasklet, mt7603_pre_tbtt_tasklet, 515 517 (unsigned long)dev);
+14 -39
drivers/net/wireless/mediatek/mt76/mt7603/mac.c
··· 343 343 MT_BA_CONTROL_1_RESET)); 344 344 } 345 345 346 - void mt7603_mac_tx_ba_reset(struct mt7603_dev *dev, int wcid, int tid, int ssn, 346 + void mt7603_mac_tx_ba_reset(struct mt7603_dev *dev, int wcid, int tid, 347 347 int ba_size) 348 348 { 349 349 u32 addr = mt7603_wtbl2_addr(wcid); ··· 358 358 mt76_clear(dev, addr + (15 * 4), tid_mask); 359 359 return; 360 360 } 361 - mt76_poll(dev, MT_WTBL_UPDATE, MT_WTBL_UPDATE_BUSY, 0, 5000); 362 - 363 - mt7603_mac_stop(dev); 364 - switch (tid) { 365 - case 0: 366 - mt76_rmw_field(dev, addr + (2 * 4), MT_WTBL2_W2_TID0_SN, ssn); 367 - break; 368 - case 1: 369 - mt76_rmw_field(dev, addr + (2 * 4), MT_WTBL2_W2_TID1_SN, ssn); 370 - break; 371 - case 2: 372 - mt76_rmw_field(dev, addr + (2 * 4), MT_WTBL2_W2_TID2_SN_LO, 373 - ssn); 374 - mt76_rmw_field(dev, addr + (3 * 4), MT_WTBL2_W3_TID2_SN_HI, 375 - ssn >> 8); 376 - break; 377 - case 3: 378 - mt76_rmw_field(dev, addr + (3 * 4), MT_WTBL2_W3_TID3_SN, ssn); 379 - break; 380 - case 4: 381 - mt76_rmw_field(dev, addr + (3 * 4), MT_WTBL2_W3_TID4_SN, ssn); 382 - break; 383 - case 5: 384 - mt76_rmw_field(dev, addr + (3 * 4), MT_WTBL2_W3_TID5_SN_LO, 385 - ssn); 386 - mt76_rmw_field(dev, addr + (4 * 4), MT_WTBL2_W4_TID5_SN_HI, 387 - ssn >> 4); 388 - break; 389 - case 6: 390 - mt76_rmw_field(dev, addr + (4 * 4), MT_WTBL2_W4_TID6_SN, ssn); 391 - break; 392 - case 7: 393 - mt76_rmw_field(dev, addr + (4 * 4), MT_WTBL2_W4_TID7_SN, ssn); 394 - break; 395 - } 396 - mt7603_wtbl_update(dev, wcid, MT_WTBL_UPDATE_WTBL2); 397 - mt7603_mac_start(dev); 398 361 399 362 for (i = 7; i > 0; i--) { 400 363 if (ba_size >= MT_AGG_SIZE_LIMIT(i)) ··· 790 827 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 791 828 struct ieee80211_tx_rate *rate = &info->control.rates[0]; 792 829 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; 830 + struct ieee80211_bar *bar = (struct ieee80211_bar *)skb->data; 793 831 struct ieee80211_vif *vif = info->control.vif; 794 832 struct mt7603_vif *mvif; 795 
833 int wlan_idx; ··· 798 834 int tx_count = 8; 799 835 u8 frame_type, frame_subtype; 800 836 u16 fc = le16_to_cpu(hdr->frame_control); 837 + u16 seqno = 0; 801 838 u8 vif_idx = 0; 802 839 u32 val; 803 840 u8 bw; ··· 884 919 tx_count = 0x1f; 885 920 886 921 val = FIELD_PREP(MT_TXD3_REM_TX_COUNT, tx_count) | 887 - FIELD_PREP(MT_TXD3_SEQ, le16_to_cpu(hdr->seq_ctrl)); 922 + MT_TXD3_SN_VALID; 923 + 924 + if (ieee80211_is_data_qos(hdr->frame_control)) 925 + seqno = le16_to_cpu(hdr->seq_ctrl); 926 + else if (ieee80211_is_back_req(hdr->frame_control)) 927 + seqno = le16_to_cpu(bar->start_seq_num); 928 + else 929 + val &= ~MT_TXD3_SN_VALID; 930 + 931 + val |= FIELD_PREP(MT_TXD3_SEQ, seqno >> 4); 932 + 888 933 txwi[3] = cpu_to_le32(val); 889 934 890 935 if (key) {
+4 -4
drivers/net/wireless/mediatek/mt76/mt7603/main.c
··· 372 372 struct mt7603_sta *msta = (struct mt7603_sta *)sta->drv_priv; 373 373 struct sk_buff_head list; 374 374 375 - mt76_stop_tx_queues(&dev->mt76, sta, false); 375 + mt76_stop_tx_queues(&dev->mt76, sta, true); 376 376 mt7603_wtbl_set_ps(dev, msta, ps); 377 377 if (ps) 378 378 return; ··· 584 584 case IEEE80211_AMPDU_TX_OPERATIONAL: 585 585 mtxq->aggr = true; 586 586 mtxq->send_bar = false; 587 - mt7603_mac_tx_ba_reset(dev, msta->wcid.idx, tid, *ssn, ba_size); 587 + mt7603_mac_tx_ba_reset(dev, msta->wcid.idx, tid, ba_size); 588 588 break; 589 589 case IEEE80211_AMPDU_TX_STOP_FLUSH: 590 590 case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT: 591 591 mtxq->aggr = false; 592 592 ieee80211_send_bar(vif, sta->addr, tid, mtxq->agg_ssn); 593 - mt7603_mac_tx_ba_reset(dev, msta->wcid.idx, tid, *ssn, -1); 593 + mt7603_mac_tx_ba_reset(dev, msta->wcid.idx, tid, -1); 594 594 break; 595 595 case IEEE80211_AMPDU_TX_START: 596 596 mtxq->agg_ssn = *ssn << 4; ··· 598 598 break; 599 599 case IEEE80211_AMPDU_TX_STOP_CONT: 600 600 mtxq->aggr = false; 601 - mt7603_mac_tx_ba_reset(dev, msta->wcid.idx, tid, *ssn, -1); 601 + mt7603_mac_tx_ba_reset(dev, msta->wcid.idx, tid, -1); 602 602 ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid); 603 603 break; 604 604 }
+1 -1
drivers/net/wireless/mediatek/mt76/mt7603/mt7603.h
··· 200 200 int mt7603_mac_fill_rx(struct mt7603_dev *dev, struct sk_buff *skb); 201 201 void mt7603_mac_add_txs(struct mt7603_dev *dev, void *data); 202 202 void mt7603_mac_rx_ba_reset(struct mt7603_dev *dev, void *addr, u8 tid); 203 - void mt7603_mac_tx_ba_reset(struct mt7603_dev *dev, int wcid, int tid, int ssn, 203 + void mt7603_mac_tx_ba_reset(struct mt7603_dev *dev, int wcid, int tid, 204 204 int ba_size); 205 205 206 206 void mt7603_pse_client_reset(struct mt7603_dev *dev);
+8 -6
drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
··· 466 466 return; 467 467 468 468 rcu_read_lock(); 469 - mt76_tx_status_lock(mdev, &list); 470 469 471 470 if (stat->wcid < ARRAY_SIZE(dev->mt76.wcid)) 472 471 wcid = rcu_dereference(dev->mt76.wcid[stat->wcid]); ··· 477 478 status.sta = container_of(priv, struct ieee80211_sta, 478 479 drv_priv); 479 480 } 481 + 482 + mt76_tx_status_lock(mdev, &list); 480 483 481 484 if (wcid) { 482 485 if (stat->pktid >= MT_PACKET_ID_FIRST) ··· 499 498 if (*update == 0 && stat_val == stat_cache && 500 499 stat->wcid == msta->status.wcid && msta->n_frames < 32) { 501 500 msta->n_frames++; 502 - goto out; 501 + mt76_tx_status_unlock(mdev, &list); 502 + rcu_read_unlock(); 503 + return; 503 504 } 504 505 505 506 mt76x02_mac_fill_tx_status(dev, status.info, &msta->status, ··· 517 514 518 515 if (status.skb) 519 516 mt76_tx_status_skb_done(mdev, status.skb, &list); 520 - else 521 - ieee80211_tx_status_ext(mt76_hw(dev), &status); 522 - 523 - out: 524 517 mt76_tx_status_unlock(mdev, &list); 518 + 519 + if (!status.skb) 520 + ieee80211_tx_status_ext(mt76_hw(dev), &status); 525 521 rcu_read_unlock(); 526 522 } 527 523
-1
drivers/net/wireless/ralink/rt2x00/rt2x00.h
··· 673 673 CONFIG_CHANNEL_HT40, 674 674 CONFIG_POWERSAVING, 675 675 CONFIG_HT_DISABLED, 676 - CONFIG_QOS_DISABLED, 677 676 CONFIG_MONITORING, 678 677 679 678 /*
-10
drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
··· 642 642 rt2x00dev->intf_associated--; 643 643 644 644 rt2x00leds_led_assoc(rt2x00dev, !!rt2x00dev->intf_associated); 645 - 646 - clear_bit(CONFIG_QOS_DISABLED, &rt2x00dev->flags); 647 645 } 648 - 649 - /* 650 - * Check for access point which do not support 802.11e . We have to 651 - * generate data frames sequence number in S/W for such AP, because 652 - * of H/W bug. 653 - */ 654 - if (changes & BSS_CHANGED_QOS && !bss_conf->qos) 655 - set_bit(CONFIG_QOS_DISABLED, &rt2x00dev->flags); 656 646 657 647 /* 658 648 * When the erp information has changed, we should perform
+9 -6
drivers/net/wireless/ralink/rt2x00/rt2x00queue.c
··· 201 201 if (!rt2x00_has_cap_flag(rt2x00dev, REQUIRE_SW_SEQNO)) { 202 202 /* 203 203 * rt2800 has a H/W (or F/W) bug, device incorrectly increase 204 - * seqno on retransmited data (non-QOS) frames. To workaround 205 - * the problem let's generate seqno in software if QOS is 206 - * disabled. 204 + * seqno on retransmitted data (non-QOS) and management frames. 205 + * To workaround the problem let's generate seqno in software. 206 + * Except for beacons which are transmitted periodically by H/W 207 + * hence hardware has to assign seqno for them. 207 208 */ 208 - if (test_bit(CONFIG_QOS_DISABLED, &rt2x00dev->flags)) 209 - __clear_bit(ENTRY_TXD_GENERATE_SEQ, &txdesc->flags); 210 - else 209 + if (ieee80211_is_beacon(hdr->frame_control)) { 210 + __set_bit(ENTRY_TXD_GENERATE_SEQ, &txdesc->flags); 211 211 /* H/W will generate sequence number */ 212 212 return; 213 + } 214 + 215 + __clear_bit(ENTRY_TXD_GENERATE_SEQ, &txdesc->flags); 213 216 } 214 217 215 218 /*
+5 -1
drivers/vhost/vhost.c
··· 911 911 u64 start, u64 size, u64 end, 912 912 u64 userspace_addr, int perm) 913 913 { 914 - struct vhost_umem_node *tmp, *node = kmalloc(sizeof(*node), GFP_ATOMIC); 914 + struct vhost_umem_node *tmp, *node; 915 915 916 + if (!size) 917 + return -EFAULT; 918 + 919 + node = kmalloc(sizeof(*node), GFP_ATOMIC); 916 920 if (!node) 917 921 return -ENOMEM; 918 922
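The vhost fix above rejects a zero-size iova range before allocating the tree node, so a nonsensical `[start, start - 1]` interval can never be inserted. A minimal user-space sketch of the validate-before-allocate pattern, assuming hypothetical `umem_node`/`add_umem_range` names:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for the vhost umem interval node. */
struct umem_node {
	uint64_t start, size;
};

static int add_umem_range(struct umem_node **out, uint64_t start,
			  uint64_t size)
{
	struct umem_node *node;

	/* Reject the invalid zero-length range before touching the
	 * allocator, mirroring the ordering of the kernel fix. */
	if (!size)
		return -EFAULT;

	node = malloc(sizeof(*node));
	if (!node)
		return -ENOMEM;
	node->start = start;
	node->size = size;
	*out = node;
	return 0;
}
```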
+17 -7
fs/afs/rxrpc.c
··· 610 610 bool stalled = false; 611 611 u64 rtt; 612 612 u32 life, last_life; 613 + bool rxrpc_complete = false; 613 614 614 615 DECLARE_WAITQUEUE(myself, current); 615 616 ··· 622 621 rtt2 = 2; 623 622 624 623 timeout = rtt2; 625 - last_life = rxrpc_kernel_check_life(call->net->socket, call->rxcall); 624 + rxrpc_kernel_check_life(call->net->socket, call->rxcall, &last_life); 626 625 627 626 add_wait_queue(&call->waitq, &myself); 628 627 for (;;) { ··· 640 639 if (afs_check_call_state(call, AFS_CALL_COMPLETE)) 641 640 break; 642 641 643 - life = rxrpc_kernel_check_life(call->net->socket, call->rxcall); 642 + if (!rxrpc_kernel_check_life(call->net->socket, call->rxcall, &life)) { 643 + /* rxrpc terminated the call. */ 644 + rxrpc_complete = true; 645 + break; 646 + } 647 + 644 648 if (timeout == 0 && 645 649 life == last_life && signal_pending(current)) { 646 650 if (stalled) ··· 669 663 remove_wait_queue(&call->waitq, &myself); 670 664 __set_current_state(TASK_RUNNING); 671 665 672 - /* Kill off the call if it's still live. */ 673 666 if (!afs_check_call_state(call, AFS_CALL_COMPLETE)) { 674 - _debug("call interrupted"); 675 - if (rxrpc_kernel_abort_call(call->net->socket, call->rxcall, 676 - RX_USER_ABORT, -EINTR, "KWI")) 677 - afs_set_call_complete(call, -EINTR, 0); 667 + if (rxrpc_complete) { 668 + afs_set_call_complete(call, call->error, call->abort_code); 669 + } else { 670 + /* Kill off the call if it's still live. */ 671 + _debug("call interrupted"); 672 + if (rxrpc_kernel_abort_call(call->net->socket, call->rxcall, 673 + RX_USER_ABORT, -EINTR, "KWI")) 674 + afs_set_call_complete(call, -EINTR, 0); 675 + } 678 676 } 679 677 680 678 spin_lock_bh(&call->state_lock);
+3
include/linux/netdevice.h
··· 1500 1500 * @IFF_FAILOVER: device is a failover master device 1501 1501 * @IFF_FAILOVER_SLAVE: device is lower dev of a failover master device 1502 1502 * @IFF_L3MDEV_RX_HANDLER: only invoke the rx handler of L3 master device 1503 + * @IFF_LIVE_RENAME_OK: rename is allowed while device is up and running 1503 1504 */ 1504 1505 enum netdev_priv_flags { 1505 1506 IFF_802_1Q_VLAN = 1<<0, ··· 1533 1532 IFF_FAILOVER = 1<<27, 1534 1533 IFF_FAILOVER_SLAVE = 1<<28, 1535 1534 IFF_L3MDEV_RX_HANDLER = 1<<29, 1535 + IFF_LIVE_RENAME_OK = 1<<30, 1536 1536 }; 1537 1537 1538 1538 #define IFF_802_1Q_VLAN IFF_802_1Q_VLAN ··· 1565 1563 #define IFF_FAILOVER IFF_FAILOVER 1566 1564 #define IFF_FAILOVER_SLAVE IFF_FAILOVER_SLAVE 1567 1565 #define IFF_L3MDEV_RX_HANDLER IFF_L3MDEV_RX_HANDLER 1566 + #define IFF_LIVE_RENAME_OK IFF_LIVE_RENAME_OK 1568 1567 1569 1568 /** 1570 1569 * struct net_device - The DEVICE structure.
+3 -1
include/net/af_rxrpc.h
··· 61 61 rxrpc_user_attach_call_t, unsigned long, gfp_t, 62 62 unsigned int); 63 63 void rxrpc_kernel_set_tx_length(struct socket *, struct rxrpc_call *, s64); 64 - u32 rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *); 64 + bool rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *, 65 + u32 *); 65 66 void rxrpc_kernel_probe_life(struct socket *, struct rxrpc_call *); 66 67 u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *); 67 68 bool rxrpc_kernel_get_reply_time(struct socket *, struct rxrpc_call *, 68 69 ktime_t *); 70 + bool rxrpc_kernel_call_is_complete(struct rxrpc_call *); 69 71 70 72 #endif /* _NET_RXRPC_H */
+5
include/net/cfg80211.h
··· 7183 7183 #define wiphy_info(wiphy, format, args...) \ 7184 7184 dev_info(&(wiphy)->dev, format, ##args) 7185 7185 7186 + #define wiphy_err_ratelimited(wiphy, format, args...) \ 7187 + dev_err_ratelimited(&(wiphy)->dev, format, ##args) 7188 + #define wiphy_warn_ratelimited(wiphy, format, args...) \ 7189 + dev_warn_ratelimited(&(wiphy)->dev, format, ##args) 7190 + 7186 7191 #define wiphy_debug(wiphy, format, args...) \ 7187 7192 wiphy_printk(KERN_DEBUG, wiphy, format, ##args) 7188 7193
+38 -33
include/net/mac80211.h
··· 6231 6231 * @hw: pointer as obtained from ieee80211_alloc_hw() 6232 6232 * @ac: AC number to return packets from. 6233 6233 * 6234 - * Should only be called between calls to ieee80211_txq_schedule_start() 6235 - * and ieee80211_txq_schedule_end(). 6236 6234 * Returns the next txq if successful, %NULL if no queue is eligible. If a txq 6237 6235 * is returned, it should be returned with ieee80211_return_txq() after the 6238 6236 * driver has finished scheduling it. ··· 6238 6240 struct ieee80211_txq *ieee80211_next_txq(struct ieee80211_hw *hw, u8 ac); 6239 6241 6240 6242 /** 6241 - * ieee80211_return_txq - return a TXQ previously acquired by ieee80211_next_txq() 6242 - * 6243 - * @hw: pointer as obtained from ieee80211_alloc_hw() 6244 - * @txq: pointer obtained from station or virtual interface 6245 - * 6246 - * Should only be called between calls to ieee80211_txq_schedule_start() 6247 - * and ieee80211_txq_schedule_end(). 6248 - */ 6249 - void ieee80211_return_txq(struct ieee80211_hw *hw, struct ieee80211_txq *txq); 6250 - 6251 - /** 6252 - * ieee80211_txq_schedule_start - acquire locks for safe scheduling of an AC 6243 + * ieee80211_txq_schedule_start - start new scheduling round for TXQs 6253 6244 * 6254 6245 * @hw: pointer as obtained from ieee80211_alloc_hw() 6255 6246 * @ac: AC number to acquire locks for 6256 6247 * 6257 - * Acquire locks needed to schedule TXQs from the given AC. Should be called 6258 - * before ieee80211_next_txq() or ieee80211_return_txq(). 6248 + * Should be called before ieee80211_next_txq() or ieee80211_return_txq(). 6249 + * The driver must not call multiple TXQ scheduling rounds concurrently. 
6259 6250 */ 6260 - void ieee80211_txq_schedule_start(struct ieee80211_hw *hw, u8 ac) 6261 - __acquires(txq_lock); 6251 + void ieee80211_txq_schedule_start(struct ieee80211_hw *hw, u8 ac); 6262 6252 6263 - /** 6264 - * ieee80211_txq_schedule_end - release locks for safe scheduling of an AC 6265 - * 6266 - * @hw: pointer as obtained from ieee80211_alloc_hw() 6267 - * @ac: AC number to acquire locks for 6268 - * 6269 - * Release locks previously acquired by ieee80211_txq_schedule_end(). 6270 - */ 6271 - void ieee80211_txq_schedule_end(struct ieee80211_hw *hw, u8 ac) 6272 - __releases(txq_lock); 6253 + /* (deprecated) */ 6254 + static inline void ieee80211_txq_schedule_end(struct ieee80211_hw *hw, u8 ac) 6255 + { 6256 + } 6257 + 6258 + void __ieee80211_schedule_txq(struct ieee80211_hw *hw, 6259 + struct ieee80211_txq *txq, bool force); 6273 6260 6274 6261 /** 6275 6262 * ieee80211_schedule_txq - schedule a TXQ for transmission ··· 6262 6279 * @hw: pointer as obtained from ieee80211_alloc_hw() 6263 6280 * @txq: pointer obtained from station or virtual interface 6264 6281 * 6265 - * Schedules a TXQ for transmission if it is not already scheduled. Takes a 6266 - * lock, which means it must *not* be called between 6267 - * ieee80211_txq_schedule_start() and ieee80211_txq_schedule_end() 6282 + * Schedules a TXQ for transmission if it is not already scheduled, 6283 + * even if mac80211 does not have any packets buffered. 6284 + * 6285 + * The driver may call this function if it has buffered packets for 6286 + * this TXQ internally. 
6268 6287 */ 6269 - void ieee80211_schedule_txq(struct ieee80211_hw *hw, struct ieee80211_txq *txq) 6270 - __acquires(txq_lock) __releases(txq_lock); 6288 + static inline void 6289 + ieee80211_schedule_txq(struct ieee80211_hw *hw, struct ieee80211_txq *txq) 6290 + { 6291 + __ieee80211_schedule_txq(hw, txq, true); 6292 + } 6293 + 6294 + /** 6295 + * ieee80211_return_txq - return a TXQ previously acquired by ieee80211_next_txq() 6296 + * 6297 + * @hw: pointer as obtained from ieee80211_alloc_hw() 6298 + * @txq: pointer obtained from station or virtual interface 6299 + * @force: schedule txq even if mac80211 does not have any buffered packets. 6300 + * 6301 + * The driver may set force=true if it has buffered packets for this TXQ 6302 + * internally. 6303 + */ 6304 + static inline void 6305 + ieee80211_return_txq(struct ieee80211_hw *hw, struct ieee80211_txq *txq, 6306 + bool force) 6307 + { 6308 + __ieee80211_schedule_txq(hw, txq, force); 6309 + } 6271 6310 6272 6311 /** 6273 6312 * ieee80211_txq_may_transmit - check whether TXQ is allowed to transmit
+1 -1
include/net/netrom.h
··· 266 266 int nr_t1timer_running(struct sock *); 267 267 268 268 /* sysctl_net_netrom.c */ 269 - void nr_register_sysctl(void); 269 + int nr_register_sysctl(void); 270 270 void nr_unregister_sysctl(void); 271 271 272 272 #endif
-6
include/net/sock.h
··· 2084 2084 * @p: poll_table 2085 2085 * 2086 2086 * See the comments in the wq_has_sleeper function. 2087 - * 2088 - * Do not derive sock from filp->private_data here. An SMC socket establishes 2089 - * an internal TCP socket that is used in the fallback case. All socket 2090 - * operations on the SMC socket are then forwarded to the TCP socket. In case of 2091 - * poll, the filp->private_data pointer references the SMC socket because the 2092 - * TCP socket has no file assigned. 2093 2087 */ 2094 2088 static inline void sock_poll_wait(struct file *filp, struct socket *sock, 2095 2089 poll_table *p)
+3 -1
include/net/tls.h
··· 307 307 int tls_device_sendpage(struct sock *sk, struct page *page, 308 308 int offset, size_t size, int flags); 309 309 void tls_device_sk_destruct(struct sock *sk); 310 + void tls_device_free_resources_tx(struct sock *sk); 310 311 void tls_device_init(void); 311 312 void tls_device_cleanup(void); 312 313 int tls_tx_records(struct sock *sk, int flags); ··· 331 330 int flags); 332 331 int tls_push_partial_record(struct sock *sk, struct tls_context *ctx, 333 332 int flags); 333 + bool tls_free_partial_record(struct sock *sk, struct tls_context *ctx); 334 334 335 335 static inline struct tls_msg *tls_msg(struct sk_buff *skb) 336 336 { ··· 381 379 static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk) 382 380 { 383 381 #ifdef CONFIG_SOCK_VALIDATE_XMIT 384 - return sk_fullsock(sk) & 382 + return sk_fullsock(sk) && 385 383 (smp_load_acquire(&sk->sk_validate_xmit_skb) == 386 384 &tls_validate_xmit_skb); 387 385 #else
+5 -1
net/atm/lec.c
··· 710 710 711 711 static int lec_mcast_attach(struct atm_vcc *vcc, int arg) 712 712 { 713 - if (arg < 0 || arg >= MAX_LEC_ITF || !dev_lec[arg]) 713 + if (arg < 0 || arg >= MAX_LEC_ITF) 714 + return -EINVAL; 715 + arg = array_index_nospec(arg, MAX_LEC_ITF); 716 + if (!dev_lec[arg]) 714 717 return -EINVAL; 715 718 vcc->proto_data = dev_lec[arg]; 716 719 return lec_mcast_make(netdev_priv(dev_lec[arg]), vcc); ··· 731 728 i = arg; 732 729 if (arg >= MAX_LEC_ITF) 733 730 return -EINVAL; 731 + i = array_index_nospec(arg, MAX_LEC_ITF); 734 732 if (!dev_lec[i]) { 735 733 int size; 736 734
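The lec.c change is the classic Spectre-v1 hardening pattern: after the bounds check, the index is clamped with `array_index_nospec()` so it cannot be used speculatively out of range. A user-space sketch of the idea, with the kernel's architecture-tuned helper modeled as a simple branchless mask (names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Branchless model of array_index_nospec(): mask is all-ones when
 * idx < size, zero otherwise, so an out-of-bounds index collapses
 * to 0 even under misspeculation. */
static size_t index_nospec(size_t idx, size_t size)
{
	size_t mask = (size_t)0 - (size_t)(idx < size);

	return idx & mask;
}

/* Bounds check first, then clamp before the table dereference,
 * mirroring the structure of the lec_mcast_attach() fix. */
static void *lookup(void *table[], size_t n, long arg)
{
	if (arg < 0 || (size_t)arg >= n)
		return NULL;
	return table[index_nospec((size_t)arg, n)];
}
```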
+2 -2
net/bluetooth/sco.c
··· 523 523 struct sock *sk = sock->sk; 524 524 int err = 0; 525 525 526 - BT_DBG("sk %p %pMR", sk, &sa->sco_bdaddr); 527 - 528 526 if (!addr || addr_len < sizeof(struct sockaddr_sco) || 529 527 addr->sa_family != AF_BLUETOOTH) 530 528 return -EINVAL; 529 + 530 + BT_DBG("sk %p %pMR", sk, &sa->sco_bdaddr); 531 531 532 532 lock_sock(sk); 533 533
+14 -9
net/bridge/br_input.c
··· 197 197 /* note: already called with rcu_read_lock */ 198 198 static int br_handle_local_finish(struct net *net, struct sock *sk, struct sk_buff *skb) 199 199 { 200 - struct net_bridge_port *p = br_port_get_rcu(skb->dev); 201 - 202 200 __br_handle_local_finish(skb); 203 201 204 - BR_INPUT_SKB_CB(skb)->brdev = p->br->dev; 205 - br_pass_frame_up(skb); 206 - return 0; 202 + /* return 1 to signal the okfn() was called so it's ok to use the skb */ 203 + return 1; 207 204 } 208 205 209 206 /* ··· 277 280 goto forward; 278 281 } 279 282 280 - /* Deliver packet to local host only */ 281 - NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_IN, dev_net(skb->dev), 282 - NULL, skb, skb->dev, NULL, br_handle_local_finish); 283 - return RX_HANDLER_CONSUMED; 283 + /* The else clause should be hit when nf_hook(): 284 + * - returns < 0 (drop/error) 285 + * - returns = 0 (stolen/nf_queue) 286 + * Thus return 1 from the okfn() to signal the skb is ok to pass 287 + */ 288 + if (NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_IN, 289 + dev_net(skb->dev), NULL, skb, skb->dev, NULL, 290 + br_handle_local_finish) == 1) { 291 + return RX_HANDLER_PASS; 292 + } else { 293 + return RX_HANDLER_CONSUMED; 294 + } 284 295 } 285 296 286 297 forward:
+3 -1
net/bridge/br_multicast.c
··· 2031 2031 2032 2032 __br_multicast_open(br, query); 2033 2033 2034 - list_for_each_entry(port, &br->port_list, list) { 2034 + rcu_read_lock(); 2035 + list_for_each_entry_rcu(port, &br->port_list, list) { 2035 2036 if (port->state == BR_STATE_DISABLED || 2036 2037 port->state == BR_STATE_BLOCKING) 2037 2038 continue; ··· 2044 2043 br_multicast_enable(&port->ip6_own_query); 2045 2044 #endif 2046 2045 } 2046 + rcu_read_unlock(); 2047 2047 } 2048 2048 2049 2049 int br_multicast_toggle(struct net_bridge *br, unsigned long val)
+1 -1
net/bridge/br_netlink.c
··· 1441 1441 nla_put_u8(skb, IFLA_BR_VLAN_STATS_ENABLED, 1442 1442 br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) || 1443 1443 nla_put_u8(skb, IFLA_BR_VLAN_STATS_PER_PORT, 1444 - br_opt_get(br, IFLA_BR_VLAN_STATS_PER_PORT))) 1444 + br_opt_get(br, BROPT_VLAN_STATS_PER_PORT))) 1445 1445 return -EMSGSIZE; 1446 1446 #endif 1447 1447 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
+15 -1
net/core/dev.c
··· 1184 1184 BUG_ON(!dev_net(dev)); 1185 1185 1186 1186 net = dev_net(dev); 1187 - if (dev->flags & IFF_UP) 1187 + 1188 + /* Some auto-enslaved devices e.g. failover slaves are 1189 + * special, as userspace might rename the device after 1190 + * the interface had been brought up and running since 1191 + * the point kernel initiated auto-enslavement. Allow 1192 + * live name change even when these slave devices are 1193 + * up and running. 1194 + * 1195 + * Typically, users of these auto-enslaving devices 1196 + * don't actually care about slave name change, as 1197 + * they are supposed to operate on master interface 1198 + * directly. 1199 + */ 1200 + if (dev->flags & IFF_UP && 1201 + likely(!(dev->priv_flags & IFF_LIVE_RENAME_OK))) 1188 1202 return -EBUSY; 1189 1203 1190 1204 write_seqcount_begin(&devnet_rename_seq);
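The dev.c hunk above relaxes the "no rename while up" rule only for devices carrying the new `IFF_LIVE_RENAME_OK` flag (set on auto-enslaved failover slaves). A sketch of just that condition, using underscore-suffixed flag stand-ins rather than the real kernel bit values:

```c
#include <assert.h>

/* Stand-in flag bits; the real IFF_UP / IFF_LIVE_RENAME_OK values
 * live in the kernel headers. */
#define IFF_UP_             (1u << 0)
#define IFF_LIVE_RENAME_OK_ (1u << 1)

#define EBUSY_ 16

/* A running device may only be renamed when it opted in to live
 * renames, as dev_change_name() now checks. */
static int rename_allowed(unsigned int flags, unsigned int priv_flags)
{
	if ((flags & IFF_UP_) && !(priv_flags & IFF_LIVE_RENAME_OK_))
		return -EBUSY_;
	return 0;
}
```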
+3 -3
net/core/failover.c
··· 80 80 goto err_upper_link; 81 81 } 82 82 83 - slave_dev->priv_flags |= IFF_FAILOVER_SLAVE; 83 + slave_dev->priv_flags |= (IFF_FAILOVER_SLAVE | IFF_LIVE_RENAME_OK); 84 84 85 85 if (fops && fops->slave_register && 86 86 !fops->slave_register(slave_dev, failover_dev)) 87 87 return NOTIFY_OK; 88 88 89 89 netdev_upper_dev_unlink(slave_dev, failover_dev); 90 - slave_dev->priv_flags &= ~IFF_FAILOVER_SLAVE; 90 + slave_dev->priv_flags &= ~(IFF_FAILOVER_SLAVE | IFF_LIVE_RENAME_OK); 91 91 err_upper_link: 92 92 netdev_rx_handler_unregister(slave_dev); 93 93 done: ··· 121 121 122 122 netdev_rx_handler_unregister(slave_dev); 123 123 netdev_upper_dev_unlink(slave_dev, failover_dev); 124 - slave_dev->priv_flags &= ~IFF_FAILOVER_SLAVE; 124 + slave_dev->priv_flags &= ~(IFF_FAILOVER_SLAVE | IFF_LIVE_RENAME_OK); 125 125 126 126 if (fops && fops->slave_unregister && 127 127 !fops->slave_unregister(slave_dev, failover_dev))
+2
net/core/filter.c
··· 4383 4383 * Only binding to IP is supported. 4384 4384 */ 4385 4385 err = -EINVAL; 4386 + if (addr_len < offsetofend(struct sockaddr, sa_family)) 4387 + return err; 4386 4388 if (addr->sa_family == AF_INET) { 4387 4389 if (addr_len < sizeof(struct sockaddr_in)) 4388 4390 return err;
+5 -9
net/core/net-sysfs.c
··· 1747 1747 1748 1748 error = device_add(dev); 1749 1749 if (error) 1750 - goto error_put_device; 1750 + return error; 1751 1751 1752 1752 error = register_queue_kobjects(ndev); 1753 - if (error) 1754 - goto error_device_del; 1753 + if (error) { 1754 + device_del(dev); 1755 + return error; 1756 + } 1755 1757 1756 1758 pm_runtime_set_memalloc_noio(dev, true); 1757 1759 1758 - return 0; 1759 - 1760 - error_device_del: 1761 - device_del(dev); 1762 - error_put_device: 1763 - put_device(dev); 1764 1760 return error; 1765 1761 } 1766 1762
+4 -3
net/core/ptp_classifier.c
··· 185 185 { 0x16, 0, 0, 0x00000000 }, 186 186 { 0x06, 0, 0, 0x00000000 }, 187 187 }; 188 - struct sock_fprog_kern ptp_prog = { 189 - .len = ARRAY_SIZE(ptp_filter), .filter = ptp_filter, 190 - }; 188 + struct sock_fprog_kern ptp_prog; 189 + 190 + ptp_prog.len = ARRAY_SIZE(ptp_filter); 191 + ptp_prog.filter = ptp_filter; 191 192 192 193 BUG_ON(bpf_prog_create(&ptp_insns, &ptp_prog)); 193 194 }
+1 -1
net/core/rtnetlink.c
··· 4948 4948 { 4949 4949 struct if_stats_msg *ifsm; 4950 4950 4951 - if (nlh->nlmsg_len < sizeof(*ifsm)) { 4951 + if (nlh->nlmsg_len < nlmsg_msg_size(sizeof(*ifsm))) { 4952 4952 NL_SET_ERR_MSG(extack, "Invalid header for stats dump"); 4953 4953 return -EINVAL; 4954 4954 }
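The rtnetlink fix matters because `nlmsg_len` counts the netlink header plus the payload: checking it against `sizeof(*ifsm)` alone accepts messages whose payload is actually truncated. A sketch of the corrected minimum-length check, with an underscore-suffixed stand-in for `NLMSG_HDRLEN`:

```c
#include <assert.h>

/* Stand-in for NLMSG_HDRLEN: the aligned size of struct nlmsghdr. */
#define NLMSG_HDRLEN_ 16u

/* nlmsg_msg_size(): total message length for a given payload. */
static unsigned int nlmsg_msg_size_(unsigned int payload)
{
	return NLMSG_HDRLEN_ + payload;
}

/* The buggy check compared nlmsg_len directly against the payload
 * size, so a message 16 bytes too short still passed. */
static int stats_header_ok(unsigned int nlmsg_len, unsigned int payload)
{
	return nlmsg_len >= nlmsg_msg_size_(payload);
}
```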
+9 -1
net/core/skbuff.c
··· 5083 5083 5084 5084 static struct sk_buff *skb_reorder_vlan_header(struct sk_buff *skb) 5085 5085 { 5086 - int mac_len; 5086 + int mac_len, meta_len; 5087 + void *meta; 5087 5088 5088 5089 if (skb_cow(skb, skb_headroom(skb)) < 0) { 5089 5090 kfree_skb(skb); ··· 5096 5095 memmove(skb_mac_header(skb) + VLAN_HLEN, skb_mac_header(skb), 5097 5096 mac_len - VLAN_HLEN - ETH_TLEN); 5098 5097 } 5098 + 5099 + meta_len = skb_metadata_len(skb); 5100 + if (meta_len) { 5101 + meta = skb_metadata_end(skb) - meta_len; 5102 + memmove(meta + VLAN_HLEN, meta, meta_len); 5103 + } 5104 + 5099 5105 skb->mac_header += VLAN_HLEN; 5100 5106 return skb; 5101 5107 }
+2 -2
net/core/sock.c
··· 348 348 tv.tv_usec = ((timeo % HZ) * USEC_PER_SEC) / HZ; 349 349 } 350 350 351 - if (in_compat_syscall() && !COMPAT_USE_64BIT_TIME) { 351 + if (old_timeval && in_compat_syscall() && !COMPAT_USE_64BIT_TIME) { 352 352 struct old_timeval32 tv32 = { tv.tv_sec, tv.tv_usec }; 353 353 *(struct old_timeval32 *)optval = tv32; 354 354 return sizeof(tv32); ··· 372 372 { 373 373 struct __kernel_sock_timeval tv; 374 374 375 - if (in_compat_syscall() && !COMPAT_USE_64BIT_TIME) { 375 + if (old_timeval && in_compat_syscall() && !COMPAT_USE_64BIT_TIME) { 376 376 struct old_timeval32 tv32; 377 377 378 378 if (optlen < sizeof(tv32))
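The sock.c hunks fix compat handling of the new 64-bit timeout options: the 32-bit `old_timeval32` conversion must apply only to the old-style `SO_RCVTIMEO`/`SO_SNDTIMEO`, never to the `_NEW` variants, which always use `struct __kernel_sock_timeval`. A sketch of the size selection under that rule (function name and sizes are illustrative, assuming the usual 8-byte and 16-byte layouts):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Which structure is copied to/from userspace for a socket timeout
 * option, after the fix gates compat conversion on old_timeval. */
static size_t timeval_out_size(bool old_timeval, bool compat_32bit_time)
{
	if (old_timeval && compat_32bit_time)
		return 8;	/* struct old_timeval32: two 32-bit fields */
	return 16;		/* struct __kernel_sock_timeval: two 64-bit fields */
}
```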
+3 -1
net/ipv4/fou.c
··· 121 121 struct guehdr *guehdr; 122 122 void *data; 123 123 u16 doffset = 0; 124 + u8 proto_ctype; 124 125 125 126 if (!fou) 126 127 return 1; ··· 213 212 if (unlikely(guehdr->control)) 214 213 return gue_control_message(skb, guehdr); 215 214 215 + proto_ctype = guehdr->proto_ctype; 216 216 __skb_pull(skb, sizeof(struct udphdr) + hdrlen); 217 217 skb_reset_transport_header(skb); 218 218 219 219 if (iptunnel_pull_offloads(skb)) 220 220 goto drop; 221 221 222 - return -guehdr->proto_ctype; 222 + return -proto_ctype; 223 223 224 224 drop: 225 225 kfree_skb(skb);
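The fou fix is a save-before-invalidate pattern: `guehdr->proto_ctype` is copied to a local before `__skb_pull()`, because the pull can invalidate pointers into the old packet data. A contrived user-space sketch of the same ordering (the `pull()` helper is purely illustrative; in the kernel the invalidation comes from skb internals):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical minimal GUE header carrying just the field we need. */
struct gue_hdr {
	uint8_t proto_ctype;
};

/* Illustrative pull(): consumes n bytes from the front; treat any
 * pointer into the consumed region as dead afterwards. */
static void pull(uint8_t **data, size_t *len, size_t n)
{
	*data += n;
	*len -= n;
}

static int gue_receive(uint8_t *data, size_t len)
{
	struct gue_hdr *hdr = (struct gue_hdr *)data;
	/* Save the field first; after pull() the hdr pointer must not
	 * be dereferenced, which is exactly the bug the fix closes. */
	uint8_t proto_ctype = hdr->proto_ctype;

	pull(&data, &len, sizeof(*hdr));
	(void)data;
	(void)len;
	return -proto_ctype;
}
```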
+15 -1
net/ipv4/route.c
··· 1185 1185 1186 1186 static void ipv4_link_failure(struct sk_buff *skb) 1187 1187 { 1188 + struct ip_options opt; 1188 1189 struct rtable *rt; 1190 + int res; 1189 1191 1190 - icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0); 1192 + /* Recompile ip options since IPCB may not be valid anymore. 1193 + */ 1194 + memset(&opt, 0, sizeof(opt)); 1195 + opt.optlen = ip_hdr(skb)->ihl*4 - sizeof(struct iphdr); 1196 + 1197 + rcu_read_lock(); 1198 + res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL); 1199 + rcu_read_unlock(); 1200 + 1201 + if (res) 1202 + return; 1203 + 1204 + __icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0, &opt); 1191 1205 1192 1206 rt = skb_rtable(skb); 1193 1207 if (rt)
+19 -30
net/ipv4/tcp_dctcp.c
··· 49 49 #define DCTCP_MAX_ALPHA 1024U
50 50
51 51 struct dctcp {
52 - u32 acked_bytes_ecn;
53 - u32 acked_bytes_total;
54 - u32 prior_snd_una;
52 + u32 old_delivered;
53 + u32 old_delivered_ce;
55 54 u32 prior_rcv_nxt;
56 55 u32 dctcp_alpha;
57 56 u32 next_seq;
···
72 73 {
73 74 ca->next_seq = tp->snd_nxt;
74 75
75 - ca->acked_bytes_ecn = 0;
76 - ca->acked_bytes_total = 0;
76 + ca->old_delivered = tp->delivered;
77 + ca->old_delivered_ce = tp->delivered_ce;
77 78 }
78 79
79 80 static void dctcp_init(struct sock *sk)
···
85 86 sk->sk_state == TCP_CLOSE)) {
86 87 struct dctcp *ca = inet_csk_ca(sk);
87 88
88 - ca->prior_snd_una = tp->snd_una;
89 89 ca->prior_rcv_nxt = tp->rcv_nxt;
90 90
91 91 ca->dctcp_alpha = min(dctcp_alpha_on_init, DCTCP_MAX_ALPHA);
···
116 118 {
117 119 const struct tcp_sock *tp = tcp_sk(sk);
118 120 struct dctcp *ca = inet_csk_ca(sk);
119 - u32 acked_bytes = tp->snd_una - ca->prior_snd_una;
120 -
121 - /* If ack did not advance snd_una, count dupack as MSS size.
122 - * If ack did update window, do not count it at all.
123 - */
124 - if (acked_bytes == 0 && !(flags & CA_ACK_WIN_UPDATE))
125 - acked_bytes = inet_csk(sk)->icsk_ack.rcv_mss;
126 - if (acked_bytes) {
127 - ca->acked_bytes_total += acked_bytes;
128 - ca->prior_snd_una = tp->snd_una;
129 -
130 - if (flags & CA_ACK_ECE)
131 - ca->acked_bytes_ecn += acked_bytes;
132 - }
133 121
134 122 /* Expired RTT */
135 123 if (!before(tp->snd_una, ca->next_seq)) {
136 - u64 bytes_ecn = ca->acked_bytes_ecn;
124 + u32 delivered_ce = tp->delivered_ce - ca->old_delivered_ce;
137 125 u32 alpha = ca->dctcp_alpha;
138 126
139 127 /* alpha = (1 - g) * alpha + g * F */
140 128
141 129 alpha -= min_not_zero(alpha, alpha >> dctcp_shift_g);
142 - if (bytes_ecn) {
143 - /* If dctcp_shift_g == 1, a 32bit value would overflow
144 - * after 8 Mbytes.
145 - */
146 - bytes_ecn <<= (10 - dctcp_shift_g);
147 - do_div(bytes_ecn, max(1U, ca->acked_bytes_total));
130 + if (delivered_ce) {
131 + u32 delivered = tp->delivered - ca->old_delivered;
148 132
149 - alpha = min(alpha + (u32)bytes_ecn, DCTCP_MAX_ALPHA);
133 + /* If dctcp_shift_g == 1, a 32bit value would overflow
134 + * after 8 M packets.
135 + */
136 + delivered_ce <<= (10 - dctcp_shift_g);
137 + delivered_ce /= max(1U, delivered);
138 +
139 + alpha = min(alpha + delivered_ce, DCTCP_MAX_ALPHA);
150 140 }
151 141 /* dctcp_alpha can be read from dctcp_get_info() without
152 142 * synchro, so we ask compiler to not use dctcp_alpha
···
186 200 union tcp_cc_info *info)
187 201 {
188 202 const struct dctcp *ca = inet_csk_ca(sk);
203 + const struct tcp_sock *tp = tcp_sk(sk);
189 204
190 205 /* Fill it also in case of VEGASINFO due to req struct limits.
191 206 * We can still correctly retrieve it later.
···
198 211 info->dctcp.dctcp_enabled = 1;
199 212 info->dctcp.dctcp_ce_state = (u16) ca->ce_state;
200 213 info->dctcp.dctcp_alpha = ca->dctcp_alpha;
201 - info->dctcp.dctcp_ab_ecn = ca->acked_bytes_ecn;
202 - info->dctcp.dctcp_ab_tot = ca->acked_bytes_total;
214 + info->dctcp.dctcp_ab_ecn = tp->mss_cache *
215 + (tp->delivered_ce - ca->old_delivered_ce);
216 + info->dctcp.dctcp_ab_tot = tp->mss_cache *
217 + (tp->delivered - ca->old_delivered);
203 218
204 219 *attr = INET_DIAG_DCTCPINFO;
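The patched alpha update above works on delivered-packet counts (`tp->delivered`, `tp->delivered_ce`) instead of byte counters. The arithmetic can be modeled as a standalone userspace C function (a sketch: the function shape, `min_not_zero()` and the constants are reimplemented here, not taken from the kernel):

```c
#include <stdint.h>

#define DCTCP_MAX_ALPHA 1024U

/* min of the non-zero arguments, as the kernel's min_not_zero() behaves
 * for unsigned values; returns 0 only if both are 0. */
static uint32_t min_not_zero_u32(uint32_t a, uint32_t b)
{
    if (a == 0)
        return b;
    if (b == 0)
        return a;
    return a < b ? a : b;
}

/* One "expired RTT" alpha update: alpha = (1 - g) * alpha + g * F,
 * with g = 2^-shift_g and F = delivered_ce / delivered scaled to
 * DCTCP_MAX_ALPHA.  Counting packets instead of bytes means a 32-bit
 * intermediate only overflows after ~8M packets, not 8 Mbytes. */
static uint32_t dctcp_update_alpha(uint32_t alpha, uint32_t delivered,
                                   uint32_t delivered_ce, unsigned int shift_g)
{
    alpha -= min_not_zero_u32(alpha, alpha >> shift_g);
    if (delivered_ce) {
        uint32_t frac = delivered_ce << (10 - shift_g);

        frac /= (delivered ? delivered : 1);
        alpha += frac;
        if (alpha > DCTCP_MAX_ALPHA)
            alpha = DCTCP_MAX_ALPHA;
    }
    return alpha;
}
```

With the default gain `shift_g = 4`, an RTT with no CE marks decays alpha by 1/16, while an all-marked RTT pushes it back toward the ceiling.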
+5 -5
net/ipv4/tcp_input.c
··· 402 402 static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb) 403 403 { 404 404 struct tcp_sock *tp = tcp_sk(sk); 405 + int room; 406 + 407 + room = min_t(int, tp->window_clamp, tcp_space(sk)) - tp->rcv_ssthresh; 405 408 406 409 /* Check #1 */ 407 - if (tp->rcv_ssthresh < tp->window_clamp && 408 - (int)tp->rcv_ssthresh < tcp_space(sk) && 409 - !tcp_under_memory_pressure(sk)) { 410 + if (room > 0 && !tcp_under_memory_pressure(sk)) { 410 411 int incr; 411 412 412 413 /* Check #2. Increase window, if skb with such overhead ··· 420 419 421 420 if (incr) { 422 421 incr = max_t(int, incr, 2 * skb->len); 423 - tp->rcv_ssthresh = min(tp->rcv_ssthresh + incr, 424 - tp->window_clamp); 422 + tp->rcv_ssthresh += min(room, incr); 425 423 inet_csk(sk)->icsk_ack.quick |= 1; 426 424 } 427 425 }
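The patched `tcp_grow_window()` computes the remaining headroom ("room") up front and caps the increment by it, instead of clamping only against `window_clamp` after the fact. A minimal model of that logic, with plain ints standing in for the socket fields (the function and argument names are stand-ins, not kernel API):

```c
/* Model of the fixed check: room is the distance from rcv_ssthresh to
 * the smaller of window_clamp and tcp_space(), and any increment is
 * bounded by that room, so rcv_ssthresh can never grow past either
 * limit (the old code could exceed tcp_space()). */
static int grow_rcv_ssthresh(int rcv_ssthresh, int window_clamp,
                             int space, int incr)
{
    int room = (window_clamp < space ? window_clamp : space) - rcv_ssthresh;

    if (room <= 0)
        return rcv_ssthresh;    /* no headroom left: leave unchanged */

    return rcv_ssthresh + (incr < room ? incr : room);
}
```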
+4
net/ipv6/route.c
··· 2330 2330 2331 2331 rcu_read_lock(); 2332 2332 from = rcu_dereference(rt6->from); 2333 + if (!from) { 2334 + rcu_read_unlock(); 2335 + return; 2336 + } 2333 2337 nrt6 = ip6_rt_cache_alloc(from, daddr, saddr); 2334 2338 if (nrt6) { 2335 2339 rt6_do_update_pmtu(nrt6, mtu);
+2
net/ipv6/udp.c
··· 1047 1047 static int udpv6_pre_connect(struct sock *sk, struct sockaddr *uaddr, 1048 1048 int addr_len) 1049 1049 { 1050 + if (addr_len < offsetofend(struct sockaddr, sa_family)) 1051 + return -EINVAL; 1050 1052 /* The following checks are replicated from __ip6_datagram_connect() 1051 1053 * and intended to prevent BPF program called below from accessing 1052 1054 * bytes that are out of the bound specified by user in addr_len.
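This `addr_len` guard recurs throughout the series (udpv6, rds, sctp): `sa_family` must not be read unless the caller-supplied length actually covers it. A userspace sketch of the check, with `offsetofend()` spelled the way the kernel defines it:

```c
#include <stddef.h>
#include <sys/socket.h>

/* offsetofend(): first byte past a struct member, as in the kernel. */
#define offsetofend(type, member) \
    (offsetof(type, member) + sizeof(((type *)0)->member))

/* Returns nonzero only if addr_len is large enough that reading
 * sockaddr::sa_family stays within the user-supplied buffer. */
static int sockaddr_len_ok(size_t addr_len)
{
    return addr_len >= offsetofend(struct sockaddr, sa_family);
}
```

A protocol handler would return `-EINVAL` when this check fails, before dispatching on the (possibly uninitialized) family field.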
+1 -2
net/llc/af_llc.c
··· 320 320 struct llc_sap *sap; 321 321 int rc = -EINVAL; 322 322 323 - dprintk("%s: binding %02X\n", __func__, addr->sllc_sap); 324 - 325 323 lock_sock(sk); 326 324 if (unlikely(!sock_flag(sk, SOCK_ZAPPED) || addrlen != sizeof(*addr))) 327 325 goto out; 328 326 rc = -EAFNOSUPPORT; 329 327 if (unlikely(addr->sllc_family != AF_LLC)) 330 328 goto out; 329 + dprintk("%s: binding %02X\n", __func__, addr->sllc_sap); 331 330 rc = -ENODEV; 332 331 rcu_read_lock(); 333 332 if (sk->sk_bound_dev_if) {
+3
net/mac80211/driver-ops.h
··· 1195 1195 { 1196 1196 struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif); 1197 1197 1198 + if (local->in_reconfig) 1199 + return; 1200 + 1198 1201 if (!check_sdata_in_driver(sdata)) 1199 1202 return; 1200 1203
+4 -5
net/mac80211/key.c
··· 167 167 * The driver doesn't know anything about VLAN interfaces. 168 168 * Hence, don't send GTKs for VLAN interfaces to the driver. 169 169 */ 170 - if (!(key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE)) 170 + if (!(key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE)) { 171 + ret = 1; 171 172 goto out_unsupported; 173 + } 172 174 } 173 175 174 176 ret = drv_set_key(key->local, SET_KEY, sdata, ··· 215 213 /* all of these we can do in software - if driver can */ 216 214 if (ret == 1) 217 215 return 0; 218 - if (ieee80211_hw_check(&key->local->hw, SW_CRYPTO_CONTROL)) { 219 - if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) 220 - return 0; 216 + if (ieee80211_hw_check(&key->local->hw, SW_CRYPTO_CONTROL)) 221 217 return -EINVAL; 222 - } 223 218 return 0; 224 219 default: 225 220 return -EINVAL;
+1 -1
net/mac80211/mesh_pathtbl.c
··· 23 23 static u32 mesh_table_hash(const void *addr, u32 len, u32 seed) 24 24 { 25 25 /* Use last four bytes of hw addr as hash index */ 26 - return jhash_1word(*(u32 *)(addr+2), seed); 26 + return jhash_1word(__get_unaligned_cpu32((u8 *)addr + 2), seed); 27 27 } 28 28 29 29 static const struct rhashtable_params mesh_rht_params = {
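`addr + 2` points into a 6-byte MAC address, so the 32-bit load above is misaligned; `__get_unaligned_cpu32()` makes that safe on strict-alignment architectures. A portable userspace equivalent is a `memcpy` load (this round-trip helper is mine, for demonstration only):

```c
#include <stdint.h>
#include <string.h>

/* Portable stand-in for the kernel's __get_unaligned_cpu32(): memcpy
 * is safe at any alignment, and compilers lower it to a single load on
 * architectures that permit unaligned access. */
static uint32_t get_unaligned_u32(const void *p)
{
    uint32_t v;

    memcpy(&v, p, sizeof(v));
    return v;
}

/* Round-trip check: store a value at an odd offset, read it back. */
static int unaligned_roundtrip_ok(uint32_t x)
{
    uint8_t buf[8] = {0};

    memcpy(buf + 2, &x, sizeof(x));   /* misaligned destination */
    return get_unaligned_u32(buf + 2) == x;
}
```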
+9 -1
net/mac80211/rx.c
··· 1568 1568 return; 1569 1569 1570 1570 for (tid = 0; tid < IEEE80211_NUM_TIDS; tid++) { 1571 - if (txq_has_queue(sta->sta.txq[tid])) 1571 + struct ieee80211_txq *txq = sta->sta.txq[tid]; 1572 + struct txq_info *txqi = to_txq_info(txq); 1573 + 1574 + spin_lock(&local->active_txq_lock[txq->ac]); 1575 + if (!list_empty(&txqi->schedule_order)) 1576 + list_del_init(&txqi->schedule_order); 1577 + spin_unlock(&local->active_txq_lock[txq->ac]); 1578 + 1579 + if (txq_has_queue(txq)) 1572 1580 set_bit(tid, &sta->txq_buffered_tids); 1573 1581 else 1574 1582 clear_bit(tid, &sta->txq_buffered_tids);
+6 -1
net/mac80211/trace_msg.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Portions of this file 4 + * Copyright (C) 2019 Intel Corporation 5 + */ 6 + 2 7 #ifdef CONFIG_MAC80211_MESSAGE_TRACING 3 8 4 9 #if !defined(__MAC80211_MSG_DRIVER_TRACE) || defined(TRACE_HEADER_MULTI_READ) ··· 16 11 #undef TRACE_SYSTEM 17 12 #define TRACE_SYSTEM mac80211_msg 18 13 19 - #define MAX_MSG_LEN 100 14 + #define MAX_MSG_LEN 120 20 15 21 16 DECLARE_EVENT_CLASS(mac80211_msg_event, 22 17 TP_PROTO(struct va_format *vaf),
+23 -30
net/mac80211/tx.c
··· 3221 3221 u8 max_subframes = sta->sta.max_amsdu_subframes;
3222 3222 int max_frags = local->hw.max_tx_fragments;
3223 3223 int max_amsdu_len = sta->sta.max_amsdu_len;
3224 + int orig_truesize;
3224 3225 __be16 len;
3225 3226 void *data;
3226 3227 bool ret = false;
···
3262 3261 if (!head || skb_is_gso(head))
3263 3262 goto out;
3264 3263
3264 + orig_truesize = head->truesize;
3265 3265 orig_len = head->len;
3266 3266
3267 3267 if (skb->len + head->len > max_amsdu_len)
···
3320 3318 *frag_tail = skb;
3321 3319
3322 3320 out_recalc:
3321 + fq->memory_usage += head->truesize - orig_truesize;
3323 3322 if (head->len != orig_len) {
3324 3323 flow->backlog += head->len - orig_len;
3325 3324 tin->backlog_bytes += head->len - orig_len;
···
3649 3646 struct ieee80211_txq *ieee80211_next_txq(struct ieee80211_hw *hw, u8 ac)
3650 3647 {
3651 3648 struct ieee80211_local *local = hw_to_local(hw);
3649 + struct ieee80211_txq *ret = NULL;
3652 3650 struct txq_info *txqi = NULL;
3653 3651
3654 - lockdep_assert_held(&local->active_txq_lock[ac]);
3652 + spin_lock_bh(&local->active_txq_lock[ac]);
3655 3653
3656 3654 begin:
3657 3655 txqi = list_first_entry_or_null(&local->active_txqs[ac],
3658 3656 struct txq_info,
3659 3657 schedule_order);
3660 3658 if (!txqi)
3661 - return NULL;
3659 + goto out;
3662 3660
3663 3661 if (txqi->txq.sta) {
3664 3662 struct sta_info *sta = container_of(txqi->txq.sta,
···
3676 3672
3677 3673
3678 3674 if (txqi->schedule_round == local->schedule_round[ac])
3679 - return NULL;
3675 + goto out;
3680 3676
3681 3677 list_del_init(&txqi->schedule_order);
3682 3678 txqi->schedule_round = local->schedule_round[ac];
3683 - return &txqi->txq;
3679 + ret = &txqi->txq;
3680 +
3681 + out:
3682 + spin_unlock_bh(&local->active_txq_lock[ac]);
3683 + return ret;
3684 3684 }
3685 3685 EXPORT_SYMBOL(ieee80211_next_txq);
3686 3686
3687 - void ieee80211_return_txq(struct ieee80211_hw *hw,
3688 - struct ieee80211_txq *txq)
3687 + void __ieee80211_schedule_txq(struct ieee80211_hw *hw,
3688 + struct ieee80211_txq *txq,
3689 + bool force)
3689 3690 {
3690 3691 struct ieee80211_local *local = hw_to_local(hw);
3691 3692 struct txq_info *txqi = to_txq_info(txq);
3692 3693
3693 - lockdep_assert_held(&local->active_txq_lock[txq->ac]);
3694 + spin_lock_bh(&local->active_txq_lock[txq->ac]);
3694 3695
3695 3696 if (list_empty(&txqi->schedule_order) &&
3696 - (!skb_queue_empty(&txqi->frags) || txqi->tin.backlog_packets)) {
3697 + (force || !skb_queue_empty(&txqi->frags) ||
3698 + txqi->tin.backlog_packets)) {
3697 3699 /* If airtime accounting is active, always enqueue STAs at the
3698 3700 * head of the list to ensure that they only get moved to the
3699 3701 * back by the airtime DRR scheduler once they have a negative
···
3716 3706 list_add_tail(&txqi->schedule_order,
3717 3707 &local->active_txqs[txq->ac]);
3718 3708 }
3719 - }
3720 - EXPORT_SYMBOL(ieee80211_return_txq);
3721 3709
3722 - void ieee80211_schedule_txq(struct ieee80211_hw *hw,
3723 - struct ieee80211_txq *txq)
3724 - __acquires(txq_lock) __releases(txq_lock)
3725 - {
3726 - struct ieee80211_local *local = hw_to_local(hw);
3727 -
3728 - spin_lock_bh(&local->active_txq_lock[txq->ac]);
3729 - ieee80211_return_txq(hw, txq);
3730 3710 spin_unlock_bh(&local->active_txq_lock[txq->ac]);
3731 3711 }
3732 - EXPORT_SYMBOL(ieee80211_schedule_txq);
3712 + EXPORT_SYMBOL(__ieee80211_schedule_txq);
3733 3713
3734 3714 bool ieee80211_txq_may_transmit(struct ieee80211_hw *hw,
3735 3715 struct ieee80211_txq *txq)
···
3729 3729 struct sta_info *sta;
3730 3730 u8 ac = txq->ac;
3731 3731
3732 - lockdep_assert_held(&local->active_txq_lock[ac]);
3732 + spin_lock_bh(&local->active_txq_lock[ac]);
3733 3733
3734 3734 if (!txqi->txq.sta)
3735 3735 goto out;
···
3759 3759
3760 3760 sta->airtime[ac].deficit += sta->airtime_weight;
3761 3761 list_move_tail(&txqi->schedule_order, &local->active_txqs[ac]);
3762 + spin_unlock_bh(&local->active_txq_lock[ac]);
3762 3763
3763 3764 return false;
3764 3765
3765 3766 out:
3766 3767 if (!list_empty(&txqi->schedule_order))
3767 3768 list_del_init(&txqi->schedule_order);
3768 + spin_unlock_bh(&local->active_txq_lock[ac]);
3767 3769
3768 3770 return true;
3769 3771 }
3770 3772 EXPORT_SYMBOL(ieee80211_txq_may_transmit);
3771 3773
3772 3774 void ieee80211_txq_schedule_start(struct ieee80211_hw *hw, u8 ac)
3773 - __acquires(txq_lock)
3774 3775 {
3775 3776 struct ieee80211_local *local = hw_to_local(hw);
3776 3777
3777 3778 spin_lock_bh(&local->active_txq_lock[ac]);
3778 3779 local->schedule_round[ac]++;
3779 - }
3780 - EXPORT_SYMBOL(ieee80211_txq_schedule_start);
3781 -
3782 - void ieee80211_txq_schedule_end(struct ieee80211_hw *hw, u8 ac)
3783 - __releases(txq_lock)
3784 - {
3785 - struct ieee80211_local *local = hw_to_local(hw);
3786 -
3787 3780 spin_unlock_bh(&local->active_txq_lock[ac]);
3788 3781 }
3789 - EXPORT_SYMBOL(ieee80211_txq_schedule_end);
3782 + EXPORT_SYMBOL(ieee80211_txq_schedule_start);
3790 3783
3791 3784 void __ieee80211_subif_start_xmit(struct sk_buff *skb,
3792 3785 struct net_device *dev,
+2 -1
net/netlink/af_netlink.c
··· 988 988 struct netlink_sock *nlk = nlk_sk(sk); 989 989 struct sockaddr_nl *nladdr = (struct sockaddr_nl *)addr; 990 990 int err = 0; 991 - unsigned long groups = nladdr->nl_groups; 991 + unsigned long groups; 992 992 bool bound; 993 993 994 994 if (addr_len < sizeof(struct sockaddr_nl)) ··· 996 996 997 997 if (nladdr->nl_family != AF_NETLINK) 998 998 return -EINVAL; 999 + groups = nladdr->nl_groups; 999 1000 1000 1001 /* Only superuser is allowed to listen multicasts */ 1001 1002 if (groups) {
+54 -22
net/netrom/af_netrom.c
··· 1392 1392 int i;
1393 1393 int rc = proto_register(&nr_proto, 0);
1394 1394
1395 - if (rc != 0)
1396 - goto out;
1395 + if (rc)
1396 + return rc;
1397 1397
1398 1398 if (nr_ndevs > 0x7fffffff/sizeof(struct net_device *)) {
1399 - printk(KERN_ERR "NET/ROM: nr_proto_init - nr_ndevs parameter to large\n");
1400 - return -1;
1399 + pr_err("NET/ROM: %s - nr_ndevs parameter too large\n",
1400 + __func__);
1401 + rc = -EINVAL;
1402 + goto unregister_proto;
1401 1403 }
1402 1404
1403 1405 dev_nr = kcalloc(nr_ndevs, sizeof(struct net_device *), GFP_KERNEL);
1404 - if (dev_nr == NULL) {
1405 - printk(KERN_ERR "NET/ROM: nr_proto_init - unable to allocate device array\n");
1406 - return -1;
1406 + if (!dev_nr) {
1407 + pr_err("NET/ROM: %s - unable to allocate device array\n",
1408 + __func__);
1409 + rc = -ENOMEM;
1410 + goto unregister_proto;
1407 1411 }
1408 1412
1409 1413 for (i = 0; i < nr_ndevs; i++) {
···
1417 1413 sprintf(name, "nr%d", i);
1418 1414 dev = alloc_netdev(0, name, NET_NAME_UNKNOWN, nr_setup);
1419 1415 if (!dev) {
1420 - printk(KERN_ERR "NET/ROM: nr_proto_init - unable to allocate device structure\n");
1416 + rc = -ENOMEM;
1421 1417 goto fail;
1422 1418 }
1423 1419
1424 1420 dev->base_addr = i;
1425 - if (register_netdev(dev)) {
1426 - printk(KERN_ERR "NET/ROM: nr_proto_init - unable to register network device\n");
1421 + rc = register_netdev(dev);
1422 + if (rc) {
1427 1423 free_netdev(dev);
1428 1424 goto fail;
1429 1425 }
···
1431 1427 dev_nr[i] = dev;
1432 1428 }
1433 1429
1434 - if (sock_register(&nr_family_ops)) {
1435 - printk(KERN_ERR "NET/ROM: nr_proto_init - unable to register socket family\n");
1430 + rc = sock_register(&nr_family_ops);
1431 + if (rc)
1436 1432 goto fail;
1437 - }
1438 1433
1439 - register_netdevice_notifier(&nr_dev_notifier);
1434 + rc = register_netdevice_notifier(&nr_dev_notifier);
1435 + if (rc)
1436 + goto out_sock;
1440 1437
1441 1438 ax25_register_pid(&nr_pid);
1442 1439 ax25_linkfail_register(&nr_linkfail_notifier);
1443 1440
1444 1441 #ifdef CONFIG_SYSCTL
1445 - nr_register_sysctl();
1442 + rc = nr_register_sysctl();
1443 + if (rc)
1444 + goto out_sysctl;
1446 1445 #endif
1447 1446
1448 1447 nr_loopback_init();
1449 1448
1450 - proc_create_seq("nr", 0444, init_net.proc_net, &nr_info_seqops);
1451 - proc_create_seq("nr_neigh", 0444, init_net.proc_net, &nr_neigh_seqops);
1452 - proc_create_seq("nr_nodes", 0444, init_net.proc_net, &nr_node_seqops);
1453 - out:
1454 - return rc;
1449 + rc = -ENOMEM;
1450 + if (!proc_create_seq("nr", 0444, init_net.proc_net, &nr_info_seqops))
1451 + goto proc_remove1;
1452 + if (!proc_create_seq("nr_neigh", 0444, init_net.proc_net,
1453 + &nr_neigh_seqops))
1454 + goto proc_remove2;
1455 + if (!proc_create_seq("nr_nodes", 0444, init_net.proc_net,
1456 + &nr_node_seqops))
1457 + goto proc_remove3;
1458 +
1459 + return 0;
1460 +
1461 + proc_remove3:
1462 + remove_proc_entry("nr_neigh", init_net.proc_net);
1463 + proc_remove2:
1464 + remove_proc_entry("nr", init_net.proc_net);
1465 + proc_remove1:
1466 +
1467 + nr_loopback_clear();
1468 + nr_rt_free();
1469 +
1470 + #ifdef CONFIG_SYSCTL
1471 + nr_unregister_sysctl();
1472 + out_sysctl:
1473 + #endif
1474 + ax25_linkfail_release(&nr_linkfail_notifier);
1475 + ax25_protocol_release(AX25_P_NETROM);
1476 + unregister_netdevice_notifier(&nr_dev_notifier);
1477 + out_sock:
1478 + sock_unregister(PF_NETROM);
1455 1479 fail:
1456 1480 while (--i >= 0) {
1457 1481 unregister_netdev(dev_nr[i]);
1458 1482 free_netdev(dev_nr[i]);
1459 1483 }
1460 1484 kfree(dev_nr);
1485 + unregister_proto:
1461 1486 proto_unregister(&nr_proto);
1462 - rc = -1;
1463 - goto out;
1487 + return rc;
1464 1488 }
1465 1489
1466 1490 module_init(nr_proto_init);
+1 -1
net/netrom/nr_loopback.c
··· 70 70 } 71 71 } 72 72 73 - void __exit nr_loopback_clear(void) 73 + void nr_loopback_clear(void) 74 74 { 75 75 del_timer_sync(&loopback_timer); 76 76 skb_queue_purge(&loopback_queue);
+1 -1
net/netrom/nr_route.c
··· 953 953 /* 954 954 * Free all memory associated with the nodes and routes lists. 955 955 */ 956 - void __exit nr_rt_free(void) 956 + void nr_rt_free(void) 957 957 { 958 958 struct nr_neigh *s = NULL; 959 959 struct nr_node *t = NULL;
+4 -1
net/netrom/sysctl_net_netrom.c
··· 146 146 { } 147 147 }; 148 148 149 - void __init nr_register_sysctl(void) 149 + int __init nr_register_sysctl(void) 150 150 { 151 151 nr_table_header = register_net_sysctl(&init_net, "net/netrom", nr_table); 152 + if (!nr_table_header) 153 + return -ENOMEM; 154 + return 0; 152 155 } 153 156 154 157 void nr_unregister_sysctl(void)
+3
net/rds/af_rds.c
··· 543 543 struct rds_sock *rs = rds_sk_to_rs(sk); 544 544 int ret = 0; 545 545 546 + if (addr_len < offsetofend(struct sockaddr, sa_family)) 547 + return -EINVAL; 548 + 546 549 lock_sock(sk); 547 550 548 551 switch (uaddr->sa_family) {
+2
net/rds/bind.c
··· 173 173 /* We allow an RDS socket to be bound to either IPv4 or IPv6 174 174 * address. 175 175 */ 176 + if (addr_len < offsetofend(struct sockaddr, sa_family)) 177 + return -EINVAL; 176 178 if (uaddr->sa_family == AF_INET) { 177 179 struct sockaddr_in *sin = (struct sockaddr_in *)uaddr; 178 180
+11 -6
net/rxrpc/af_rxrpc.c
··· 135 135 struct sockaddr_rxrpc *srx = (struct sockaddr_rxrpc *)saddr; 136 136 struct rxrpc_local *local; 137 137 struct rxrpc_sock *rx = rxrpc_sk(sock->sk); 138 - u16 service_id = srx->srx_service; 138 + u16 service_id; 139 139 int ret; 140 140 141 141 _enter("%p,%p,%d", rx, saddr, len); ··· 143 143 ret = rxrpc_validate_address(rx, srx, len); 144 144 if (ret < 0) 145 145 goto error; 146 + service_id = srx->srx_service; 146 147 147 148 lock_sock(&rx->sk); 148 149 ··· 371 370 * rxrpc_kernel_check_life - Check to see whether a call is still alive 372 371 * @sock: The socket the call is on 373 372 * @call: The call to check 373 + * @_life: Where to store the life value 374 374 * 375 375 * Allow a kernel service to find out whether a call is still alive - ie. we're 376 - * getting ACKs from the server. Returns a number representing the life state 377 - * which can be compared to that returned by a previous call. 376 + * getting ACKs from the server. Passes back in *_life a number representing 377 + * the life state which can be compared to that returned by a previous call and 378 + * return true if the call is still alive. 378 379 * 379 380 * If the life state stalls, rxrpc_kernel_probe_life() should be called and 380 381 * then 2RTT waited. 381 382 */ 382 - u32 rxrpc_kernel_check_life(const struct socket *sock, 383 - const struct rxrpc_call *call) 383 + bool rxrpc_kernel_check_life(const struct socket *sock, 384 + const struct rxrpc_call *call, 385 + u32 *_life) 384 386 { 385 - return call->acks_latest; 387 + *_life = call->acks_latest; 388 + return call->state != RXRPC_CALL_COMPLETE; 386 389 } 387 390 EXPORT_SYMBOL(rxrpc_kernel_check_life); 388 391
+1
net/rxrpc/ar-internal.h
··· 654 654 u8 ackr_reason; /* reason to ACK */ 655 655 u16 ackr_skew; /* skew on packet being ACK'd */ 656 656 rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */ 657 + rxrpc_serial_t ackr_first_seq; /* first sequence number received */ 657 658 rxrpc_seq_t ackr_prev_seq; /* previous sequence number received */ 658 659 rxrpc_seq_t ackr_consumed; /* Highest packet shown consumed */ 659 660 rxrpc_seq_t ackr_seen; /* Highest packet shown seen */
+7 -4
net/rxrpc/conn_event.c
··· 153 153 * pass a connection-level abort onto all calls on that connection 154 154 */ 155 155 static void rxrpc_abort_calls(struct rxrpc_connection *conn, 156 - enum rxrpc_call_completion compl) 156 + enum rxrpc_call_completion compl, 157 + rxrpc_serial_t serial) 157 158 { 158 159 struct rxrpc_call *call; 159 160 int i; ··· 174 173 call->call_id, 0, 175 174 conn->abort_code, 176 175 conn->error); 176 + else 177 + trace_rxrpc_rx_abort(call, serial, 178 + conn->abort_code); 177 179 if (rxrpc_set_call_completion(call, compl, 178 180 conn->abort_code, 179 181 conn->error)) ··· 217 213 conn->state = RXRPC_CONN_LOCALLY_ABORTED; 218 214 spin_unlock_bh(&conn->state_lock); 219 215 220 - rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED); 221 - 222 216 msg.msg_name = &conn->params.peer->srx.transport; 223 217 msg.msg_namelen = conn->params.peer->srx.transport_len; 224 218 msg.msg_control = NULL; ··· 244 242 len = iov[0].iov_len + iov[1].iov_len; 245 243 246 244 serial = atomic_inc_return(&conn->serial); 245 + rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, serial); 247 246 whdr.serial = htonl(serial); 248 247 _proto("Tx CONN ABORT %%%u { %d }", serial, conn->abort_code); 249 248 ··· 324 321 conn->error = -ECONNABORTED; 325 322 conn->abort_code = abort_code; 326 323 conn->state = RXRPC_CONN_REMOTELY_ABORTED; 327 - rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED); 324 + rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial); 328 325 return -ECONNABORTED; 329 326 330 327 case RXRPC_PACKET_TYPE_CHALLENGE:
+12 -6
net/rxrpc/input.c
··· 837 837 u8 acks[RXRPC_MAXACKS]; 838 838 } buf; 839 839 rxrpc_serial_t acked_serial; 840 - rxrpc_seq_t first_soft_ack, hard_ack; 840 + rxrpc_seq_t first_soft_ack, hard_ack, prev_pkt; 841 841 int nr_acks, offset, ioffset; 842 842 843 843 _enter(""); ··· 851 851 852 852 acked_serial = ntohl(buf.ack.serial); 853 853 first_soft_ack = ntohl(buf.ack.firstPacket); 854 + prev_pkt = ntohl(buf.ack.previousPacket); 854 855 hard_ack = first_soft_ack - 1; 855 856 nr_acks = buf.ack.nAcks; 856 857 summary.ack_reason = (buf.ack.reason < RXRPC_ACK__INVALID ? 857 858 buf.ack.reason : RXRPC_ACK__INVALID); 858 859 859 860 trace_rxrpc_rx_ack(call, sp->hdr.serial, acked_serial, 860 - first_soft_ack, ntohl(buf.ack.previousPacket), 861 + first_soft_ack, prev_pkt, 861 862 summary.ack_reason, nr_acks); 862 863 863 864 if (buf.ack.reason == RXRPC_ACK_PING_RESPONSE) ··· 879 878 rxrpc_propose_ack_respond_to_ack); 880 879 } 881 880 882 - /* Discard any out-of-order or duplicate ACKs. */ 883 - if (before_eq(sp->hdr.serial, call->acks_latest)) 881 + /* Discard any out-of-order or duplicate ACKs (outside lock). */ 882 + if (before(first_soft_ack, call->ackr_first_seq) || 883 + before(prev_pkt, call->ackr_prev_seq)) 884 884 return; 885 885 886 886 buf.info.rxMTU = 0; ··· 892 890 893 891 spin_lock(&call->input_lock); 894 892 895 - /* Discard any out-of-order or duplicate ACKs. */ 896 - if (before_eq(sp->hdr.serial, call->acks_latest)) 893 + /* Discard any out-of-order or duplicate ACKs (inside lock). */ 894 + if (before(first_soft_ack, call->ackr_first_seq) || 895 + before(prev_pkt, call->ackr_prev_seq)) 897 896 goto out; 898 897 call->acks_latest_ts = skb->tstamp; 899 898 call->acks_latest = sp->hdr.serial; 899 + 900 + call->ackr_first_seq = first_soft_ack; 901 + call->ackr_prev_seq = prev_pkt; 900 902 901 903 /* Parse rwind and mtu sizes if provided. */ 902 904 if (buf.info.rxMTU)
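The `before()` comparisons added above are wraparound-safe serial-number tests, the same idiom TCP uses for sequence numbers: numbers live on a 32-bit circle, so ordering is a signed test on the difference rather than a plain `<`. A self-contained version of that helper:

```c
#include <stdint.h>

/* Returns nonzero if s1 is "before" s2 on the 32-bit sequence circle.
 * The cast to signed makes the comparison correct across wraparound,
 * where a plain s1 < s2 would give the wrong answer. */
static int seq_before(uint32_t s1, uint32_t s2)
{
    return (int32_t)(s1 - s2) < 0;
}
```

With this, a sequence just past wraparound (e.g. 5) correctly sorts after one just before it (e.g. 0xfffffff0), which is exactly the property the duplicate-ACK discard above relies on.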
+5
net/rxrpc/peer_event.c
··· 157 157 158 158 _enter("%p{%d}", sk, local->debug_id); 159 159 160 + /* Clear the outstanding error value on the socket so that it doesn't 161 + * cause kernel_sendmsg() to return it later. 162 + */ 163 + sock_error(sk); 164 + 160 165 skb = sock_dequeue_err_skb(sk); 161 166 if (!skb) { 162 167 _leave("UDP socket errqueue empty");
+12 -9
net/rxrpc/sendmsg.c
··· 152 152 } 153 153 154 154 /* 155 - * Queue a DATA packet for transmission, set the resend timeout and send the 156 - * packet immediately 155 + * Queue a DATA packet for transmission, set the resend timeout and send 156 + * the packet immediately. Returns the error from rxrpc_send_data_packet() 157 + * in case the caller wants to do something with it. 157 158 */ 158 - static void rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, 159 - struct sk_buff *skb, bool last, 160 - rxrpc_notify_end_tx_t notify_end_tx) 159 + static int rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call, 160 + struct sk_buff *skb, bool last, 161 + rxrpc_notify_end_tx_t notify_end_tx) 161 162 { 162 163 struct rxrpc_skb_priv *sp = rxrpc_skb(skb); 163 164 unsigned long now; ··· 251 250 252 251 out: 253 252 rxrpc_free_skb(skb, rxrpc_skb_tx_freed); 254 - _leave(""); 253 + _leave(" = %d", ret); 254 + return ret; 255 255 } 256 256 257 257 /* ··· 425 423 if (ret < 0) 426 424 goto out; 427 425 428 - rxrpc_queue_packet(rx, call, skb, 429 - !msg_data_left(msg) && !more, 430 - notify_end_tx); 426 + ret = rxrpc_queue_packet(rx, call, skb, 427 + !msg_data_left(msg) && !more, 428 + notify_end_tx); 429 + /* Should check for failure here */ 431 430 skb = NULL; 432 431 } 433 432 } while (msg_data_left(msg) > 0);
+2 -1
net/sctp/socket.c
··· 4847 4847 } 4848 4848 4849 4849 /* Validate addr_len before calling common connect/connectx routine. */ 4850 - af = sctp_get_af_specific(addr->sa_family); 4850 + af = addr_len < offsetofend(struct sockaddr, sa_family) ? NULL : 4851 + sctp_get_af_specific(addr->sa_family); 4851 4852 if (!af || addr_len < af->sockaddr_len) { 4852 4853 err = -EINVAL; 4853 4854 } else {
+39 -19
net/smc/af_smc.c
··· 167 167
168 168 if (sk->sk_state == SMC_CLOSED) {
169 169 if (smc->clcsock) {
170 - mutex_lock(&smc->clcsock_release_lock);
171 - sock_release(smc->clcsock);
172 - smc->clcsock = NULL;
173 - mutex_unlock(&smc->clcsock_release_lock);
170 + release_sock(sk);
171 + smc_clcsock_release(smc);
172 + lock_sock(sk);
174 173 }
175 174 if (!smc->use_fallback)
176 175 smc_conn_free(&smc->conn);
···
445 446 link->peer_mtu = clc->qp_mtu;
446 447 }
447 448
449 + static void smc_switch_to_fallback(struct smc_sock *smc)
450 + {
451 + smc->use_fallback = true;
452 + if (smc->sk.sk_socket && smc->sk.sk_socket->file) {
453 + smc->clcsock->file = smc->sk.sk_socket->file;
454 + smc->clcsock->file->private_data = smc->clcsock;
455 + }
456 + }
457 +
448 458 /* fall back during connect */
449 459 static int smc_connect_fallback(struct smc_sock *smc, int reason_code)
450 460 {
451 - smc->use_fallback = true;
461 + smc_switch_to_fallback(smc);
452 462 smc->fallback_rsn = reason_code;
453 463 smc_copy_sock_settings_to_clc(smc);
454 464 if (smc->sk.sk_state == SMC_INIT)
···
783 775 smc->sk.sk_err = -rc;
784 776
785 777 out:
786 - if (smc->sk.sk_err)
787 - smc->sk.sk_state_change(&smc->sk);
788 - else
789 - smc->sk.sk_write_space(&smc->sk);
778 + if (!sock_flag(&smc->sk, SOCK_DEAD)) {
779 + if (smc->sk.sk_err) {
780 + smc->sk.sk_state_change(&smc->sk);
781 + } else { /* allow polling before and after fallback decision */
782 + smc->clcsock->sk->sk_write_space(smc->clcsock->sk);
783 + smc->sk.sk_write_space(&smc->sk);
784 + }
785 + }
790 786 kfree(smc->connect_info);
791 787 smc->connect_info = NULL;
792 788 release_sock(&smc->sk);
···
884 872 if (rc < 0)
885 873 lsk->sk_err = -rc;
886 874 if (rc < 0 || lsk->sk_state == SMC_CLOSED) {
875 + new_sk->sk_prot->unhash(new_sk);
887 876 if (new_clcsock)
888 877 sock_release(new_clcsock);
889 878 new_sk->sk_state = SMC_CLOSED;
890 879 sock_set_flag(new_sk, SOCK_DEAD);
891 - new_sk->sk_prot->unhash(new_sk);
892 880 sock_put(new_sk); /* final */
893 881 *new_smc = NULL;
894 882 goto out;
···
939 927
940 928 smc_accept_unlink(new_sk);
941 929 if (new_sk->sk_state == SMC_CLOSED) {
930 + new_sk->sk_prot->unhash(new_sk);
942 931 if (isk->clcsock) {
943 932 sock_release(isk->clcsock);
944 933 isk->clcsock = NULL;
945 934 }
946 - new_sk->sk_prot->unhash(new_sk);
947 935 sock_put(new_sk); /* final */
948 936 continue;
949 937 }
950 - if (new_sock)
938 + if (new_sock) {
951 939 sock_graft(new_sk, new_sock);
940 + if (isk->use_fallback) {
941 + smc_sk(new_sk)->clcsock->file = new_sock->file;
942 + isk->clcsock->file->private_data = isk->clcsock;
943 + }
944 + }
952 945 return new_sk;
953 946 }
954 947 return NULL;
···
973 956 sock_set_flag(sk, SOCK_DEAD);
974 957 sk->sk_shutdown |= SHUTDOWN_MASK;
975 958 }
959 + sk->sk_prot->unhash(sk);
976 960 if (smc->clcsock) {
977 961 struct socket *tcp;
978 962
···
989 971 smc_conn_free(&smc->conn);
990 972 }
991 973 release_sock(sk);
992 - sk->sk_prot->unhash(sk);
993 974 sock_put(sk); /* final sock_put */
994 975 }
···
1054 1037 struct smc_sock *lsmc = new_smc->listen_smc;
1055 1038 struct sock *newsmcsk = &new_smc->sk;
1056 1039
1057 - lock_sock_nested(&lsmc->sk, SINGLE_DEPTH_NESTING);
1058 1040 if (lsmc->sk.sk_state == SMC_LISTEN) {
1041 + lock_sock_nested(&lsmc->sk, SINGLE_DEPTH_NESTING);
1059 1042 smc_accept_enqueue(&lsmc->sk, newsmcsk);
1043 + release_sock(&lsmc->sk);
1060 1044 } else { /* no longer listening */
1061 1045 smc_close_non_accepted(newsmcsk);
1062 1046 }
1063 - release_sock(&lsmc->sk);
1064 1047
1065 1048 /* Wake up accept */
1066 1049 lsmc->sk.sk_data_ready(&lsmc->sk);
···
1104 1087 return;
1105 1088 }
1106 1089 smc_conn_free(&new_smc->conn);
1107 - new_smc->use_fallback = true;
1090 + smc_switch_to_fallback(new_smc);
1108 1091 new_smc->fallback_rsn = reason_code;
1109 1092 if (reason_code && reason_code != SMC_CLC_DECL_PEERDECL) {
1110 1093 if (smc_clc_send_decline(new_smc, reason_code) < 0) {
···
1254 1237 int rc = 0;
1255 1238 u8 ibport;
1256 1239
1240 + if (new_smc->listen_smc->sk.sk_state != SMC_LISTEN)
1241 + return smc_listen_out_err(new_smc);
1242 +
1257 1243 if (new_smc->use_fallback) {
1258 1244 smc_listen_out_connected(new_smc);
1259 1245 return;
···
1264 1244
1265 1245 /* check if peer is smc capable */
1266 1246 if (!tcp_sk(newclcsock->sk)->syn_smc) {
1267 - new_smc->use_fallback = true;
1247 + smc_switch_to_fallback(new_smc);
1268 1248 new_smc->fallback_rsn = SMC_CLC_DECL_PEERNOSMC;
1269 1249 smc_listen_out_connected(new_smc);
1270 1250 return;
···
1521 1501
1522 1502 if (msg->msg_flags & MSG_FASTOPEN) {
1523 1503 if (sk->sk_state == SMC_INIT) {
1524 - smc->use_fallback = true;
1504 + smc_switch_to_fallback(smc);
1525 1505 smc->fallback_rsn = SMC_CLC_DECL_OPTUNSUPP;
1526 1506 } else {
1527 1507 rc = -EINVAL;
···
1723 1703 case TCP_FASTOPEN_NO_COOKIE:
1724 1704 /* option not supported by SMC */
1725 1705 if (sk->sk_state == SMC_INIT) {
1726 - smc->use_fallback = true;
1706 + smc_switch_to_fallback(smc);
1727 1707 smc->fallback_rsn = SMC_CLC_DECL_OPTUNSUPP;
1728 1708 } else {
1729 1709 if (!smc->use_fallback)
+21 -4
net/smc/smc_close.c
··· 21 21 22 22 #define SMC_CLOSE_WAIT_LISTEN_CLCSOCK_TIME (5 * HZ) 23 23 24 + /* release the clcsock that is assigned to the smc_sock */ 25 + void smc_clcsock_release(struct smc_sock *smc) 26 + { 27 + struct socket *tcp; 28 + 29 + if (smc->listen_smc && current_work() != &smc->smc_listen_work) 30 + cancel_work_sync(&smc->smc_listen_work); 31 + mutex_lock(&smc->clcsock_release_lock); 32 + if (smc->clcsock) { 33 + tcp = smc->clcsock; 34 + smc->clcsock = NULL; 35 + sock_release(tcp); 36 + } 37 + mutex_unlock(&smc->clcsock_release_lock); 38 + } 39 + 24 40 static void smc_close_cleanup_listen(struct sock *parent) 25 41 { 26 42 struct sock *sk; ··· 337 321 close_work); 338 322 struct smc_sock *smc = container_of(conn, struct smc_sock, conn); 339 323 struct smc_cdc_conn_state_flags *rxflags; 324 + bool release_clcsock = false; 340 325 struct sock *sk = &smc->sk; 341 326 int old_state; 342 327 ··· 417 400 if ((sk->sk_state == SMC_CLOSED) && 418 401 (sock_flag(sk, SOCK_DEAD) || !sk->sk_socket)) { 419 402 smc_conn_free(conn); 420 - if (smc->clcsock) { 421 - sock_release(smc->clcsock); 422 - smc->clcsock = NULL; 423 - } 403 + if (smc->clcsock) 404 + release_clcsock = true; 424 405 } 425 406 } 426 407 release_sock(sk); 408 + if (release_clcsock) 409 + smc_clcsock_release(smc); 427 410 sock_put(sk); /* sock_hold done by schedulers of close_work */ 428 411 } 429 412
+1
net/smc/smc_close.h
··· 23 23 int smc_close_active(struct smc_sock *smc); 24 24 int smc_close_shutdown_write(struct smc_sock *smc); 25 25 void smc_close_init(struct smc_sock *smc); 26 + void smc_clcsock_release(struct smc_sock *smc); 26 27 27 28 #endif /* SMC_CLOSE_H */
+5
net/smc/smc_ism.c
··· 289 289 INIT_LIST_HEAD(&smcd->vlan); 290 290 smcd->event_wq = alloc_ordered_workqueue("ism_evt_wq-%s)", 291 291 WQ_MEM_RECLAIM, name); 292 + if (!smcd->event_wq) { 293 + kfree(smcd->conn); 294 + kfree(smcd); 295 + return NULL; 296 + } 292 297 return smcd; 293 298 } 294 299 EXPORT_SYMBOL_GPL(smcd_alloc_dev);
+2 -1
net/smc/smc_pnet.c
··· 603 603 { 604 604 struct net *net = genl_info_net(info); 605 605 606 - return smc_pnet_remove_by_pnetid(net, NULL); 606 + smc_pnet_remove_by_pnetid(net, NULL); 607 + return 0; 607 608 } 608 609 609 610 /* SMC_PNETID generic netlink operation definition */
net/strparser/strparser.c (+5 -7)
···
 	/* We are going to append to the frags_list of head.
 	 * Need to unshare the frag_list.
 	 */
-	if (skb_has_frag_list(head)) {
-		err = skb_unclone(head, GFP_ATOMIC);
-		if (err) {
-			STRP_STATS_INCR(strp->stats.mem_fail);
-			desc->error = err;
-			return 0;
-		}
+	err = skb_unclone(head, GFP_ATOMIC);
+	if (err) {
+		STRP_STATS_INCR(strp->stats.mem_fail);
+		desc->error = err;
+		return 0;
 	}

 	if (unlikely(skb_shinfo(head)->frag_list)) {
net/tipc/link.c (+2)
···
 	__skb_queue_head_init(&list);

 	l->in_session = false;
+	/* Force re-synch of peer session number before establishing */
+	l->peer_session--;
 	l->session++;
 	l->mtu = l->advertised_mtu;
net/tipc/name_table.c (+2 -1)
···
 	for (; i < TIPC_NAMETBL_SIZE; i++) {
 		head = &tn->nametbl->services[i];

-		if (*last_type) {
+		if (*last_type ||
+		    (!i && *last_key && (*last_lower == *last_key))) {
 			service = tipc_service_find(net, *last_type);
 			if (!service)
 				return -EPIPE;
net/tipc/sysctl.c (+6 -2)
···
 #include <linux/sysctl.h>

+static int zero;
+static int one = 1;
 static struct ctl_table_header *tipc_ctl_hdr;

 static struct ctl_table tipc_table[] = {
···
 		.data = &sysctl_tipc_rmem,
 		.maxlen = sizeof(sysctl_tipc_rmem),
 		.mode = 0644,
-		.proc_handler = proc_dointvec,
+		.proc_handler = proc_dointvec_minmax,
+		.extra1 = &one,
 	},
 	{
 		.procname = "named_timeout",
 		.data = &sysctl_tipc_named_timeout,
 		.maxlen = sizeof(sysctl_tipc_named_timeout),
 		.mode = 0644,
-		.proc_handler = proc_dointvec,
+		.proc_handler = proc_dointvec_minmax,
+		.extra1 = &zero,
 	},
 	{
 		.procname = "sk_filter",
net/tls/tls_device.c (+11 -1)
···
 static void tls_device_free_ctx(struct tls_context *ctx)
 {
-	if (ctx->tx_conf == TLS_HW)
+	if (ctx->tx_conf == TLS_HW) {
 		kfree(tls_offload_ctx_tx(ctx));
+		kfree(ctx->tx.rec_seq);
+		kfree(ctx->tx.iv);
+	}

 	if (ctx->rx_conf == TLS_HW)
 		kfree(tls_offload_ctx_rx(ctx));
···
 	tls_device_queue_ctx_destruction(tls_ctx);
 }
 EXPORT_SYMBOL(tls_device_sk_destruct);
+
+void tls_device_free_resources_tx(struct sock *sk)
+{
+	struct tls_context *tls_ctx = tls_get_ctx(sk);
+
+	tls_free_partial_record(sk, tls_ctx);
+}

 static void tls_append_frag(struct tls_record_info *record,
 			    struct page_frag *pfrag,
net/tls/tls_main.c (+24)
···
 	return tls_push_sg(sk, ctx, sg, offset, flags);
 }

+bool tls_free_partial_record(struct sock *sk, struct tls_context *ctx)
+{
+	struct scatterlist *sg;
+
+	sg = ctx->partially_sent_record;
+	if (!sg)
+		return false;
+
+	while (1) {
+		put_page(sg_page(sg));
+		sk_mem_uncharge(sk, sg->length);
+
+		if (sg_is_last(sg))
+			break;
+		sg++;
+	}
+	ctx->partially_sent_record = NULL;
+	return true;
+}
+
 static void tls_write_space(struct sock *sk)
 {
 	struct tls_context *ctx = tls_get_ctx(sk);
···
 		kfree(ctx->tx.rec_seq);
 		kfree(ctx->tx.iv);
 		tls_sw_free_resources_tx(sk);
+#ifdef CONFIG_TLS_DEVICE
+	} else if (ctx->tx_conf == TLS_HW) {
+		tls_device_free_resources_tx(sk);
+#endif
 	}

 	if (ctx->rx_conf == TLS_SW) {
net/tls/tls_sw.c (+1 -14)
···
 	/* Free up un-sent records in tx_list. First, free
 	 * the partially sent record if any at head of tx_list.
 	 */
-	if (tls_ctx->partially_sent_record) {
-		struct scatterlist *sg = tls_ctx->partially_sent_record;
-
-		while (1) {
-			put_page(sg_page(sg));
-			sk_mem_uncharge(sk, sg->length);
-
-			if (sg_is_last(sg))
-				break;
-			sg++;
-		}
-
-		tls_ctx->partially_sent_record = NULL;
-
+	if (tls_free_partial_record(sk, tls_ctx)) {
 		rec = list_first_entry(&ctx->tx_list,
 				       struct tls_rec, list);
 		list_del(&rec->list);
net/wireless/nl80211.c (+12 -6)
···
 		.policy = nl80211_policy,
 		.flags = GENL_UNS_ADMIN_PERM,
 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
-				  NL80211_FLAG_NEED_RTNL,
+				  NL80211_FLAG_NEED_RTNL |
+				  NL80211_FLAG_CLEAR_SKB,
 	},
 	{
 		.cmd = NL80211_CMD_DEAUTHENTICATE,
···
 		.policy = nl80211_policy,
 		.flags = GENL_UNS_ADMIN_PERM,
 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
-				  NL80211_FLAG_NEED_RTNL,
+				  NL80211_FLAG_NEED_RTNL |
+				  NL80211_FLAG_CLEAR_SKB,
 	},
 	{
 		.cmd = NL80211_CMD_UPDATE_CONNECT_PARAMS,
···
 		.policy = nl80211_policy,
 		.flags = GENL_ADMIN_PERM,
 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
-				  NL80211_FLAG_NEED_RTNL,
+				  NL80211_FLAG_NEED_RTNL |
+				  NL80211_FLAG_CLEAR_SKB,
 	},
 	{
 		.cmd = NL80211_CMD_DISCONNECT,
···
 		.policy = nl80211_policy,
 		.flags = GENL_UNS_ADMIN_PERM,
 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
-				  NL80211_FLAG_NEED_RTNL,
+				  NL80211_FLAG_NEED_RTNL |
+				  NL80211_FLAG_CLEAR_SKB,
 	},
 	{
 		.cmd = NL80211_CMD_DEL_PMKSA,
···
 		.policy = nl80211_policy,
 		.flags = GENL_UNS_ADMIN_PERM,
 		.internal_flags = NL80211_FLAG_NEED_WIPHY |
-				  NL80211_FLAG_NEED_RTNL,
+				  NL80211_FLAG_NEED_RTNL |
+				  NL80211_FLAG_CLEAR_SKB,
 	},
 	{
 		.cmd = NL80211_CMD_SET_QOS_MAP,
···
 		.doit = nl80211_set_pmk,
 		.policy = nl80211_policy,
 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
-				  NL80211_FLAG_NEED_RTNL,
+				  NL80211_FLAG_NEED_RTNL |
+				  NL80211_FLAG_CLEAR_SKB,
 	},
 	{
 		.cmd = NL80211_CMD_DEL_PMK,
net/wireless/reg.c (+39)
···
 	return dfs_region1;
 }

+static void reg_wmm_rules_intersect(const struct ieee80211_wmm_ac *wmm_ac1,
+				    const struct ieee80211_wmm_ac *wmm_ac2,
+				    struct ieee80211_wmm_ac *intersect)
+{
+	intersect->cw_min = max_t(u16, wmm_ac1->cw_min, wmm_ac2->cw_min);
+	intersect->cw_max = max_t(u16, wmm_ac1->cw_max, wmm_ac2->cw_max);
+	intersect->cot = min_t(u16, wmm_ac1->cot, wmm_ac2->cot);
+	intersect->aifsn = max_t(u8, wmm_ac1->aifsn, wmm_ac2->aifsn);
+}
+
 /*
  * Helper for regdom_intersect(), this does the real
  * mathematical intersection fun
···
 	struct ieee80211_freq_range *freq_range;
 	const struct ieee80211_power_rule *power_rule1, *power_rule2;
 	struct ieee80211_power_rule *power_rule;
+	const struct ieee80211_wmm_rule *wmm_rule1, *wmm_rule2;
+	struct ieee80211_wmm_rule *wmm_rule;
 	u32 freq_diff, max_bandwidth1, max_bandwidth2;

 	freq_range1 = &rule1->freq_range;
···
 	power_rule1 = &rule1->power_rule;
 	power_rule2 = &rule2->power_rule;
 	power_rule = &intersected_rule->power_rule;
+
+	wmm_rule1 = &rule1->wmm_rule;
+	wmm_rule2 = &rule2->wmm_rule;
+	wmm_rule = &intersected_rule->wmm_rule;

 	freq_range->start_freq_khz = max(freq_range1->start_freq_khz,
 					 freq_range2->start_freq_khz);
···
 	intersected_rule->dfs_cac_ms = max(rule1->dfs_cac_ms,
 					   rule2->dfs_cac_ms);
+
+	if (rule1->has_wmm && rule2->has_wmm) {
+		u8 ac;
+
+		for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+			reg_wmm_rules_intersect(&wmm_rule1->client[ac],
+						&wmm_rule2->client[ac],
+						&wmm_rule->client[ac]);
+			reg_wmm_rules_intersect(&wmm_rule1->ap[ac],
+						&wmm_rule2->ap[ac],
+						&wmm_rule->ap[ac]);
+		}
+
+		intersected_rule->has_wmm = true;
+	} else if (rule1->has_wmm) {
+		*wmm_rule = *wmm_rule1;
+		intersected_rule->has_wmm = true;
+	} else if (rule2->has_wmm) {
+		*wmm_rule = *wmm_rule2;
+		intersected_rule->has_wmm = true;
+	} else {
+		intersected_rule->has_wmm = false;
+	}

 	if (!is_valid_reg_rule(intersected_rule))
 		return -EINVAL;
net/wireless/scan.c (+1 -2)
···
 	/* copy subelement as we need to change its content to
 	 * mark an ie after it is processed.
 	 */
-	sub_copy = kmalloc(subie_len, gfp);
+	sub_copy = kmemdup(subelement, subie_len, gfp);
 	if (!sub_copy)
 		return 0;
-	memcpy(sub_copy, subelement, subie_len);

 	pos = &new_ie[0];
net/wireless/util.c (+4 -2)
···
 	else if (rate->bw == RATE_INFO_BW_HE_RU &&
 		 rate->he_ru_alloc == NL80211_RATE_INFO_HE_RU_ALLOC_26)
 		result = rates_26[rate->he_gi];
-	else if (WARN(1, "invalid HE MCS: bw:%d, ru:%d\n",
-		      rate->bw, rate->he_ru_alloc))
+	else {
+		WARN(1, "invalid HE MCS: bw:%d, ru:%d\n",
+		     rate->bw, rate->he_ru_alloc);
 		return 0;
+	}

 	/* now scale to the appropriate MCS */
 	tmp = result;
tools/testing/selftests/drivers/net/mlxsw/rtnetlink.sh (+20)
···
 ALL_TESTS="
 	rif_set_addr_test
+	rif_vrf_set_addr_test
 	rif_inherit_bridge_addr_test
 	rif_non_inherit_bridge_addr_test
 	vlan_interface_deletion_test
···
 	ip link set dev $swp2 addr $swp2_mac
 	ip link set dev $swp1 addr $swp1_mac
 }

+rif_vrf_set_addr_test()
+{
+	# Test that it is possible to set an IP address on a VRF upper despite
+	# its random MAC address.
+	RET=0
+
+	ip link add name vrf-test type vrf table 10
+	ip link set dev $swp1 master vrf-test
+
+	ip -4 address add 192.0.2.1/24 dev vrf-test
+	check_err $? "failed to set IPv4 address on VRF"
+	ip -6 address add 2001:db8:1::1/64 dev vrf-test
+	check_err $? "failed to set IPv6 address on VRF"
+
+	log_test "RIF - setting IP address on VRF"
+
+	ip link del dev vrf-test
+}

 rif_inherit_bridge_addr_test()
tools/testing/selftests/net/fib_tests.sh (+40 -54)
···
 	return $rc
 }

+check_expected()
+{
+	local out="$1"
+	local expected="$2"
+	local rc=0
+
+	[ "${out}" = "${expected}" ] && return 0
+
+	if [ -z "${out}" ]; then
+		if [ "$VERBOSE" = "1" ]; then
+			printf "\nNo route entry found\n"
+			printf "Expected:\n"
+			printf "    ${expected}\n"
+		fi
+		return 1
+	fi
+
+	# tricky way to convert output to 1-line without ip's
+	# messy '\'; this drops all extra white space
+	out=$(echo ${out})
+	if [ "${out}" != "${expected}" ]; then
+		rc=1
+		if [ "${VERBOSE}" = "1" ]; then
+			printf "    Unexpected route entry.  Have:\n"
+			printf "        ${out}\n"
+			printf "    Expected:\n"
+			printf "        ${expected}\n\n"
+		fi
+	fi
+
+	return $rc
+}
+
 # add route for a prefix, flushing any existing routes first
 # expected to be the first step of a test
 add_route6()
···
 	pfx=$1

 	out=$($IP -6 ro ls match ${pfx} | sed -e 's/ pref medium//')
-	[ "${out}" = "${expected}" ] && return 0
-
-	if [ -z "${out}" ]; then
-		if [ "$VERBOSE" = "1" ]; then
-			printf "\nNo route entry found\n"
-			printf "Expected:\n"
-			printf "    ${expected}\n"
-		fi
-		return 1
-	fi
-
-	# tricky way to convert output to 1-line without ip's
-	# messy '\'; this drops all extra white space
-	out=$(echo ${out})
-	if [ "${out}" != "${expected}" ]; then
-		rc=1
-		if [ "${VERBOSE}" = "1" ]; then
-			printf "    Unexpected route entry.  Have:\n"
-			printf "        ${out}\n"
-			printf "    Expected:\n"
-			printf "        ${expected}\n\n"
-		fi
-	fi
-
-	return $rc
+	check_expected "${out}" "${expected}"
 }

 route_cleanup()
···
 	ip -netns ns2 addr add 172.16.103.2/24 dev veth4
 	ip -netns ns2 addr add 172.16.104.1/24 dev dummy1

-	set +ex
+	set +e
 }

 # assumption is that basic add of a single path route works
···
 	run_cmd "$IP li set dev dummy2 down"
 	rc=$?
 	if [ $rc -eq 0 ]; then
-		check_route6 ""
+		out=$($IP -6 ro ls match 2001:db8:104::/64)
+		check_expected "${out}" ""
 		rc=$?
 	fi
 	log_test $rc 0 "Prefix route removed on link down"
···
 	local pfx
 	local expected="$1"
 	local out
-	local rc=0

 	set -- $expected
 	pfx=$1
 	[ "${pfx}" = "unreachable" ] && pfx=$2

 	out=$($IP ro ls match ${pfx})
-	[ "${out}" = "${expected}" ] && return 0
-
-	if [ -z "${out}" ]; then
-		if [ "$VERBOSE" = "1" ]; then
-			printf "\nNo route entry found\n"
-			printf "Expected:\n"
-			printf "    ${expected}\n"
-		fi
-		return 1
-	fi
-
-	# tricky way to convert output to 1-line without ip's
-	# messy '\'; this drops all extra white space
-	out=$(echo ${out})
-	if [ "${out}" != "${expected}" ]; then
-		rc=1
-		if [ "${VERBOSE}" = "1" ]; then
-			printf "    Unexpected route entry.  Have:\n"
-			printf "        ${out}\n"
-			printf "    Expected:\n"
-			printf "        ${expected}\n\n"
-		fi
-	fi
-
-	return $rc
+	check_expected "${out}" "${expected}"
 }

 # assumption is that basic add of a single path route works
···
 	run_cmd "$IP li set dev dummy2 down"
 	rc=$?
 	if [ $rc -eq 0 ]; then
-		check_route ""
+		out=$($IP ro ls match 172.16.104.0/24)
+		check_expected "${out}" ""
 		rc=$?
 	fi
 	log_test $rc 0 "Prefix route removed on link down"