
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Unbalanced refcounting in TIPC, from Jon Maloy.

2) Only allow TCP_MD5SIG to be set on sockets in close or listen state.
Once the connection is established it makes no sense to change this.
From Eric Dumazet.

3) Missing attribute validation in neigh_dump_table(), also from Eric
Dumazet.

4) Fix address comparisons in SCTP, from Xin Long.

5) Neigh proxy table clearing can deadlock, from Wolfgang Bumiller.

6) Fix tunnel refcounting in l2tp, from Guillaume Nault.

7) Fix double list insert in team driver, from Paolo Abeni.

8) af_vsock.ko module was accidentally made unremovable, from Stefan
Hajnoczi.

9) Fix reference to freed llc_sap object in llc stack, from Cong Wang.

10) Don't assume netdevice struct is DMA'able memory in virtio_net
driver, from Michael S. Tsirkin.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (62 commits)
net/smc: fix shutdown in state SMC_LISTEN
bnxt_en: Fix memory fault in bnxt_ethtool_init()
virtio_net: sparse annotation fix
virtio_net: fix adding vids on big-endian
virtio_net: split out ctrl buffer
net: hns: Avoid action name truncation
docs: ip-sysctl.txt: fix name of some ipv6 variables
vmxnet3: fix incorrect dereference when rxvlan is disabled
llc: hold llc_sap before release_sock()
MAINTAINERS: Direct networking documentation changes to netdev
atm: iphase: fix spelling mistake: "Tansmit" -> "Transmit"
net: qmi_wwan: add Wistron Neweb D19Q1
net: caif: fix spelling mistake "UKNOWN" -> "UNKNOWN"
net: stmmac: Disable ACS Feature for GMAC >= 4
net: mvpp2: Fix DMA address mask size
net: change the comment of dev_mc_init
net: qualcomm: rmnet: Fix warning seen with fill_info
tun: fix vlan packet truncation
tipc: fix infinite loop when dumping link monitor summary
tipc: fix use-after-free in tipc_nametbl_stop
...

+793 -356
+13
Documentation/core-api/kernel-api.rst
···
 .. kernel-doc:: lib/list_sort.c
    :export:
 
+Text Searching
+--------------
+
+.. kernel-doc:: lib/textsearch.c
+   :doc: ts_intro
+
+.. kernel-doc:: lib/textsearch.c
+   :export:
+
+.. kernel-doc:: include/linux/textsearch.h
+   :functions: textsearch_find textsearch_next \
+               textsearch_get_pattern textsearch_get_pattern_len
+
 UUID/GUID
 ---------
 
+3 -3
Documentation/networking/filter.txt
···
 BPF engine and instruction set
 ------------------------------
 
-Under tools/net/ there's a small helper tool called bpf_asm which can
+Under tools/bpf/ there's a small helper tool called bpf_asm which can
 be used to write low-level filters for example scenarios mentioned in the
 previous section. Asm-like syntax mentioned here has been implemented in
 bpf_asm and will be used for further explanations (instead of dealing with
···
 In particular, as usage with xt_bpf or cls_bpf can result in more complex BPF
 filters that might not be obvious at first, it's good to test filters before
 attaching to a live system. For that purpose, there's a small tool called
-bpf_dbg under tools/net/ in the kernel source directory. This debugger allows
+bpf_dbg under tools/bpf/ in the kernel source directory. This debugger allows
 for testing BPF filters against given pcap files, single stepping through the
 BPF code on the pcap's packets and to do BPF machine register dumps.
···
 [ 3389.935851] JIT code: 00000030: 00 e8 28 94 ff e0 83 f8 01 75 07 b8 ff ff 00 00
 [ 3389.935852] JIT code: 00000040: eb 02 31 c0 c9 c3
 
-In the kernel source tree under tools/net/, there's bpf_jit_disasm for
+In the kernel source tree under tools/bpf/, there's bpf_jit_disasm for
 generating disassembly out of the kernel log's hexdump:
 
 # ./bpf_jit_disasm
+4 -4
Documentation/networking/ip-sysctl.txt
···
 	Default: 2 (as specified by RFC3810 9.1)
 	Minimum: 1 (as specified by RFC6636 4.5)
 
-max_dst_opts_cnt - INTEGER
+max_dst_opts_number - INTEGER
 	Maximum number of non-padding TLVs allowed in a Destination
 	options extension header. If this value is less than zero
 	then unknown options are disallowed and the number of known
 	TLVs allowed is the absolute value of this number.
 	Default: 8
 
-max_hbh_opts_cnt - INTEGER
+max_hbh_opts_number - INTEGER
 	Maximum number of non-padding TLVs allowed in a Hop-by-Hop
 	options extension header. If this value is less than zero
 	then unknown options are disallowed and the number of known
 	TLVs allowed is the absolute value of this number.
 	Default: 8
 
-max dst_opts_len - INTEGER
+max_dst_opts_length - INTEGER
 	Maximum length allowed for a Destination options extension
 	header.
 	Default: INT_MAX (unlimited)
 
-max hbh_opts_len - INTEGER
+max_hbh_length - INTEGER
 	Maximum length allowed for a Hop-by-Hop options extension
 	header.
 	Default: INT_MAX (unlimited)
+1
MAINTAINERS
···
 F:	tools/testing/selftests/net/
 F:	lib/net_utils.c
 F:	lib/random32.c
+F:	Documentation/networking/
 
 NETWORKING [IPSEC]
 M:	Steffen Klassert <steffen.klassert@secunet.com>
+2 -2
drivers/atm/iphase.c
···
 	if ((vcc->pop) && (skb1->len != 0))
 	{
 		vcc->pop(vcc, skb1);
-		IF_EVENT(printk("Tansmit Done - skb 0x%lx return\n",
+		IF_EVENT(printk("Transmit Done - skb 0x%lx return\n",
 			(long)skb1);)
 	}
 	else
···
 	status = readl(iadev->seg_reg+SEG_INTR_STATUS_REG);
 	if (status & TRANSMIT_DONE){
 
-		IF_EVENT(printk("Tansmit Done Intr logic run\n");)
+		IF_EVENT(printk("Transmit Done Intr logic run\n");)
 		spin_lock_irqsave(&iadev->tx_lock, flags);
 		ia_tx_poll(iadev);
 		spin_unlock_irqrestore(&iadev->tx_lock, flags);
+5 -3
drivers/isdn/mISDN/dsp_hwec.c
···
 		goto _do;
 
 	{
-		char _dup[len + 1];
 		char *dup, *tok, *name, *val;
 		int tmp;
 
-		strcpy(_dup, arg);
-		dup = _dup;
+		dup = kstrdup(arg, GFP_ATOMIC);
+		if (!dup)
+			return;
 
 		while ((tok = strsep(&dup, ","))) {
 			if (!strlen(tok))
···
 				deftaps = tmp;
 			}
 		}
+
+		kfree(dup);
 	}
 
 _do:
+11 -3
drivers/isdn/mISDN/l1oip_core.c
···
 		       u16 timebase, u8 *buf, int len)
 {
 	u8 *p;
-	u8 frame[len + 32];
+	u8 frame[MAX_DFRAME_LEN_L1 + 32];
 	struct socket *socket = NULL;
 
 	if (debug & DEBUG_L1OIP_MSG)
···
 	p = skb->data;
 	l = skb->len;
 	while (l) {
-		ll = (l < L1OIP_MAX_PERFRAME) ? l : L1OIP_MAX_PERFRAME;
+		/*
+		 * This is technically bounded by L1OIP_MAX_PERFRAME but
+		 * MAX_DFRAME_LEN_L1 < L1OIP_MAX_PERFRAME
+		 */
+		ll = (l < MAX_DFRAME_LEN_L1) ? l : MAX_DFRAME_LEN_L1;
 		l1oip_socket_send(hc, 0, dch->slot, 0,
 				  hc->chan[dch->slot].tx_counter++, p, ll);
 		p += ll;
···
 	p = skb->data;
 	l = skb->len;
 	while (l) {
-		ll = (l < L1OIP_MAX_PERFRAME) ? l : L1OIP_MAX_PERFRAME;
+		/*
+		 * This is technically bounded by L1OIP_MAX_PERFRAME but
+		 * MAX_DFRAME_LEN_L1 < L1OIP_MAX_PERFRAME
+		 */
+		ll = (l < MAX_DFRAME_LEN_L1) ? l : MAX_DFRAME_LEN_L1;
 		l1oip_socket_send(hc, hc->codec, bch->slot, 0,
 				  hc->chan[bch->slot].tx_counter, p, ll);
 		hc->chan[bch->slot].tx_counter += ll;
+10 -2
drivers/net/dsa/mv88e6xxx/hwtstamp.c
···
 				  struct sk_buff_head *rxq)
 {
 	u16 buf[4] = { 0 }, status, seq_id;
-	u64 ns, timelo, timehi;
 	struct skb_shared_hwtstamps *shwt;
+	struct sk_buff_head received;
+	u64 ns, timelo, timehi;
+	unsigned long flags;
 	int err;
+
+	/* The latched timestamp belongs to one of the received frames. */
+	__skb_queue_head_init(&received);
+	spin_lock_irqsave(&rxq->lock, flags);
+	skb_queue_splice_tail_init(rxq, &received);
+	spin_unlock_irqrestore(&rxq->lock, flags);
 
 	mutex_lock(&chip->reg_lock);
 	err = mv88e6xxx_port_ptp_read(chip, ps->port_id,
···
 	/* Since the device can only handle one time stamp at a time,
 	 * we purge any extra frames from the queue.
 	 */
-	for ( ; skb; skb = skb_dequeue(rxq)) {
+	for ( ; skb; skb = __skb_dequeue(&received)) {
 		if (mv88e6xxx_ts_valid(status) && seq_match(skb, seq_id)) {
 			ns = timehi << 16 | timelo;
 
+27 -22
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
···
 	return retval;
 }
 
-static char *bnxt_get_pkgver(struct net_device *dev, char *buf, size_t buflen)
+static void bnxt_get_pkgver(struct net_device *dev)
 {
+	struct bnxt *bp = netdev_priv(dev);
 	u16 index = 0;
-	u32 datalen;
+	char *pkgver;
+	u32 pkglen;
+	u8 *pkgbuf;
+	int len;
 
 	if (bnxt_find_nvram_item(dev, BNX_DIR_TYPE_PKG_LOG,
 				 BNX_DIR_ORDINAL_FIRST, BNX_DIR_EXT_NONE,
-				 &index, NULL, &datalen) != 0)
-		return NULL;
+				 &index, NULL, &pkglen) != 0)
+		return;
 
-	memset(buf, 0, buflen);
-	if (bnxt_get_nvram_item(dev, index, 0, datalen, buf) != 0)
-		return NULL;
+	pkgbuf = kzalloc(pkglen, GFP_KERNEL);
+	if (!pkgbuf) {
+		dev_err(&bp->pdev->dev, "Unable to allocate memory for pkg version, length = %u\n",
+			pkglen);
+		return;
+	}
 
-	return bnxt_parse_pkglog(BNX_PKG_LOG_FIELD_IDX_PKG_VERSION, buf,
-				 datalen);
+	if (bnxt_get_nvram_item(dev, index, 0, pkglen, pkgbuf))
+		goto err;
+
+	pkgver = bnxt_parse_pkglog(BNX_PKG_LOG_FIELD_IDX_PKG_VERSION, pkgbuf,
+				   pkglen);
+	if (pkgver && *pkgver != 0 && isdigit(*pkgver)) {
+		len = strlen(bp->fw_ver_str);
+		snprintf(bp->fw_ver_str + len, FW_VER_STR_LEN - len - 1,
+			 "/pkg %s", pkgver);
+	}
+err:
+	kfree(pkgbuf);
 }
 
 static int bnxt_get_eeprom(struct net_device *dev,
···
 	struct hwrm_selftest_qlist_input req = {0};
 	struct bnxt_test_info *test_info;
 	struct net_device *dev = bp->dev;
-	char *pkglog;
 	int i, rc;
 
-	pkglog = kzalloc(BNX_PKG_LOG_MAX_LENGTH, GFP_KERNEL);
-	if (pkglog) {
-		char *pkgver;
-		int len;
+	bnxt_get_pkgver(dev);
 
-		pkgver = bnxt_get_pkgver(dev, pkglog, BNX_PKG_LOG_MAX_LENGTH);
-		if (pkgver && *pkgver != 0 && isdigit(*pkgver)) {
-			len = strlen(bp->fw_ver_str);
-			snprintf(bp->fw_ver_str + len, FW_VER_STR_LEN - len - 1,
-				 "/pkg %s", pkgver);
-		}
-		kfree(pkglog);
-	}
 	if (bp->hwrm_spec_code < 0x10704 || !BNXT_SINGLE_PF(bp))
 		return;
 
-2
drivers/net/ethernet/broadcom/bnxt/bnxt_nvm_defs.h
···
 #define BNX_DIR_ATTR_NO_CHKSUM		(1 << 0)
 #define BNX_DIR_ATTR_PROP_STREAM	(1 << 1)
 
-#define BNX_PKG_LOG_MAX_LENGTH		4096
-
 enum bnxnvm_pkglog_field_index {
 	BNX_PKG_LOG_FIELD_IDX_INSTALLED_TIMESTAMP	= 0,
 	BNX_PKG_LOG_FIELD_IDX_PKG_DESCRIPTION		= 1,
+1 -1
drivers/net/ethernet/hisilicon/hns/hnae.h
···
 
 #define HNAE_AE_REGISTER 0x1
 
-#define RCB_RING_NAME_LEN 16
+#define RCB_RING_NAME_LEN (IFNAMSIZ + 4)
 
 #define HNAE_LOWEST_LATENCY_COAL_PARAM	30
 #define HNAE_LOW_LATENCY_COAL_PARAM	80
+54 -31
drivers/net/ethernet/ibm/ibmvnic.c
···
 {
 	struct ibmvnic_adapter *adapter = netdev_priv(netdev);
 	unsigned long timeout = msecs_to_jiffies(30000);
-	struct device *dev = &adapter->vdev->dev;
+	int retry_count = 0;
 	int rc;
 
 	do {
-		if (adapter->renegotiate) {
-			adapter->renegotiate = false;
+		if (retry_count > IBMVNIC_MAX_QUEUES) {
+			netdev_warn(netdev, "Login attempts exceeded\n");
+			return -1;
+		}
+
+		adapter->init_done_rc = 0;
+		reinit_completion(&adapter->init_done);
+		rc = send_login(adapter);
+		if (rc) {
+			netdev_warn(netdev, "Unable to login\n");
+			return rc;
+		}
+
+		if (!wait_for_completion_timeout(&adapter->init_done,
+						 timeout)) {
+			netdev_warn(netdev, "Login timed out\n");
+			return -1;
+		}
+
+		if (adapter->init_done_rc == PARTIALSUCCESS) {
+			retry_count++;
 			release_sub_crqs(adapter, 1);
 
+			adapter->init_done_rc = 0;
 			reinit_completion(&adapter->init_done);
 			send_cap_queries(adapter);
 			if (!wait_for_completion_timeout(&adapter->init_done,
 							 timeout)) {
-				dev_err(dev, "Capabilities query timeout\n");
+				netdev_warn(netdev,
+					    "Capabilities query timed out\n");
 				return -1;
 			}
+
 			rc = init_sub_crqs(adapter);
 			if (rc) {
-				dev_err(dev,
-					"Initialization of SCRQ's failed\n");
+				netdev_warn(netdev,
+					    "SCRQ initialization failed\n");
 				return -1;
 			}
+
 			rc = init_sub_crq_irqs(adapter);
 			if (rc) {
-				dev_err(dev,
-					"Initialization of SCRQ's irqs failed\n");
+				netdev_warn(netdev,
+					    "SCRQ irq initialization failed\n");
 				return -1;
 			}
-		}
-
-		reinit_completion(&adapter->init_done);
-		rc = send_login(adapter);
-		if (rc) {
-			dev_err(dev, "Unable to attempt device login\n");
-			return rc;
-		} else if (!wait_for_completion_timeout(&adapter->init_done,
-							timeout)) {
-			dev_err(dev, "Login timeout\n");
+		} else if (adapter->init_done_rc) {
+			netdev_warn(netdev, "Adapter login failed\n");
 			return -1;
 		}
-	} while (adapter->renegotiate);
+	} while (adapter->init_done_rc == PARTIALSUCCESS);
 
 	/* handle pending MAC address changes after successful login */
 	if (adapter->mac_change_pending) {
···
 		netdev_dbg(netdev, "Enabling rx_scrq[%d] irq\n", i);
 		if (prev_state == VNIC_CLOSED)
 			enable_irq(adapter->rx_scrq[i]->irq);
-		else
-			enable_scrq_irq(adapter, adapter->rx_scrq[i]);
+		enable_scrq_irq(adapter, adapter->rx_scrq[i]);
 	}
 
 	for (i = 0; i < adapter->req_tx_queues; i++) {
 		netdev_dbg(netdev, "Enabling tx_scrq[%d] irq\n", i);
 		if (prev_state == VNIC_CLOSED)
 			enable_irq(adapter->tx_scrq[i]->irq);
-		else
-			enable_scrq_irq(adapter, adapter->tx_scrq[i]);
+		enable_scrq_irq(adapter, adapter->tx_scrq[i]);
 	}
 
 	rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_UP);
···
 		if (adapter->tx_scrq[i]->irq) {
 			netdev_dbg(netdev,
 				   "Disabling tx_scrq[%d] irq\n", i);
+			disable_scrq_irq(adapter, adapter->tx_scrq[i]);
 			disable_irq(adapter->tx_scrq[i]->irq);
 		}
 	}
···
 		if (adapter->rx_scrq[i]->irq) {
 			netdev_dbg(netdev,
 				   "Disabling rx_scrq[%d] irq\n", i);
+			disable_scrq_irq(adapter, adapter->rx_scrq[i]);
 			disable_irq(adapter->rx_scrq[i]->irq);
 		}
 	}
···
 	for (i = 0; i < adapter->req_rx_queues; i++)
 		napi_schedule(&adapter->napi[i]);
 
-	if (adapter->reset_reason != VNIC_RESET_FAILOVER)
+	if (adapter->reset_reason != VNIC_RESET_FAILOVER &&
+	    adapter->reset_reason != VNIC_RESET_CHANGE_PARAM)
 		netdev_notify_peers(netdev);
 
 	netif_carrier_on(netdev);
···
 {
 	struct device *dev = &adapter->vdev->dev;
 	unsigned long rc;
+	u64 val;
 
 	if (scrq->hw_irq > 0x100000000ULL) {
 		dev_err(dev, "bad hw_irq = %lx\n", scrq->hw_irq);
 		return 1;
 	}
+
+	val = (0xff000000) | scrq->hw_irq;
+	rc = plpar_hcall_norets(H_EOI, val);
+	if (rc)
+		dev_err(dev, "H_EOI FAILED irq 0x%llx. rc=%ld\n",
+			val, rc);
 
 	rc = plpar_hcall_norets(H_VIOCTL, adapter->vdev->unit_address,
 				H_ENABLE_VIO_INTERRUPT, scrq->hw_irq, 0, 0);
···
 struct vnic_login_client_data {
 	u8	type;
 	__be16	len;
-	char	name;
+	char	name[];
 } __packed;
 
 static int vnic_client_data_len(struct ibmvnic_adapter *adapter)
···
 	vlcd->type = 1;
 	len = strlen(os_name) + 1;
 	vlcd->len = cpu_to_be16(len);
-	strncpy(&vlcd->name, os_name, len);
-	vlcd = (struct vnic_login_client_data *)((char *)&vlcd->name + len);
+	strncpy(vlcd->name, os_name, len);
+	vlcd = (struct vnic_login_client_data *)(vlcd->name + len);
 
 	/* Type 2 - LPAR name */
 	vlcd->type = 2;
 	len = strlen(utsname()->nodename) + 1;
 	vlcd->len = cpu_to_be16(len);
-	strncpy(&vlcd->name, utsname()->nodename, len);
-	vlcd = (struct vnic_login_client_data *)((char *)&vlcd->name + len);
+	strncpy(vlcd->name, utsname()->nodename, len);
+	vlcd = (struct vnic_login_client_data *)(vlcd->name + len);
 
 	/* Type 3 - device name */
 	vlcd->type = 3;
 	len = strlen(adapter->netdev->name) + 1;
 	vlcd->len = cpu_to_be16(len);
-	strncpy(&vlcd->name, adapter->netdev->name, len);
+	strncpy(vlcd->name, adapter->netdev->name, len);
 }
 
 static int send_login(struct ibmvnic_adapter *adapter)
···
 	 * to resend the login buffer with fewer queues requested.
 	 */
 	if (login_rsp_crq->generic.rc.code) {
-		adapter->renegotiate = true;
+		adapter->init_done_rc = login_rsp_crq->generic.rc.code;
 		complete(&adapter->init_done);
 		return 0;
 	}
-1
drivers/net/ethernet/ibm/ibmvnic.h
···
 
 	struct ibmvnic_sub_crq_queue **tx_scrq;
 	struct ibmvnic_sub_crq_queue **rx_scrq;
-	bool renegotiate;
 
 	/* rx structs */
 	struct napi_struct *napi;
+8 -6
drivers/net/ethernet/marvell/mvpp2.c
···
 #define MVPP2_PE_VID_FILT_RANGE_END	(MVPP2_PRS_TCAM_SRAM_SIZE - 31)
 #define MVPP2_PE_VID_FILT_RANGE_START	(MVPP2_PE_VID_FILT_RANGE_END - \
 					 MVPP2_PRS_VLAN_FILT_RANGE_SIZE + 1)
-#define MVPP2_PE_LAST_FREE_TID		(MVPP2_PE_VID_FILT_RANGE_START - 1)
+#define MVPP2_PE_LAST_FREE_TID		(MVPP2_PE_MAC_RANGE_START - 1)
 #define MVPP2_PE_IP6_EXT_PROTO_UN	(MVPP2_PRS_TCAM_SRAM_SIZE - 30)
 #define MVPP2_PE_IP6_ADDR_UN		(MVPP2_PRS_TCAM_SRAM_SIZE - 29)
 #define MVPP2_PE_IP4_ADDR_UN		(MVPP2_PRS_TCAM_SRAM_SIZE - 28)
···
 #define MVPP2_MIB_LATE_COLLISION	0x7c
 
 #define MVPP2_MIB_COUNTERS_STATS_DELAY	(1 * HZ)
+
+#define MVPP2_DESC_DMA_MASK	DMA_BIT_MASK(40)
 
 /* Definitions */
 
···
 	if (port->priv->hw_version == MVPP21)
 		return tx_desc->pp21.buf_dma_addr;
 	else
-		return tx_desc->pp22.buf_dma_addr_ptp & GENMASK_ULL(40, 0);
+		return tx_desc->pp22.buf_dma_addr_ptp & MVPP2_DESC_DMA_MASK;
 }
 
 static void mvpp2_txdesc_dma_addr_set(struct mvpp2_port *port,
···
 	} else {
 		u64 val = (u64)addr;
 
-		tx_desc->pp22.buf_dma_addr_ptp &= ~GENMASK_ULL(40, 0);
+		tx_desc->pp22.buf_dma_addr_ptp &= ~MVPP2_DESC_DMA_MASK;
 		tx_desc->pp22.buf_dma_addr_ptp |= val;
 		tx_desc->pp22.packet_offset = offset;
 	}
···
 	if (port->priv->hw_version == MVPP21)
 		return rx_desc->pp21.buf_dma_addr;
 	else
-		return rx_desc->pp22.buf_dma_addr_key_hash & GENMASK_ULL(40, 0);
+		return rx_desc->pp22.buf_dma_addr_key_hash & MVPP2_DESC_DMA_MASK;
 }
 
 static unsigned long mvpp2_rxdesc_cookie_get(struct mvpp2_port *port,
···
 	if (port->priv->hw_version == MVPP21)
 		return rx_desc->pp21.buf_cookie;
 	else
-		return rx_desc->pp22.buf_cookie_misc & GENMASK_ULL(40, 0);
+		return rx_desc->pp22.buf_cookie_misc & MVPP2_DESC_DMA_MASK;
 }
 
 static size_t mvpp2_rxdesc_size_get(struct mvpp2_port *port,
···
 	}
 
 	if (priv->hw_version == MVPP22) {
-		err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(40));
+		err = dma_set_mask(&pdev->dev, MVPP2_DESC_DMA_MASK);
 		if (err)
 			goto err_mg_clk;
 		/* Sadly, the BM pools all share the same register to
+37 -7
drivers/net/ethernet/netronome/nfp/flower/cmsg.c
···
 	case NFP_FLOWER_CMSG_TYPE_ACTIVE_TUNS:
 		nfp_tunnel_keep_alive(app, skb);
 		break;
-	case NFP_FLOWER_CMSG_TYPE_TUN_NEIGH:
-		/* Acks from the NFP that the route is added - ignore. */
-		break;
 	default:
 		nfp_flower_cmsg_warn(app, "Cannot handle invalid repr control type %u\n",
 				     type);
···
 
 void nfp_flower_cmsg_process_rx(struct work_struct *work)
 {
+	struct sk_buff_head cmsg_joined;
 	struct nfp_flower_priv *priv;
 	struct sk_buff *skb;
 
 	priv = container_of(work, struct nfp_flower_priv, cmsg_work);
+	skb_queue_head_init(&cmsg_joined);
 
-	while ((skb = skb_dequeue(&priv->cmsg_skbs)))
+	spin_lock_bh(&priv->cmsg_skbs_high.lock);
+	skb_queue_splice_tail_init(&priv->cmsg_skbs_high, &cmsg_joined);
+	spin_unlock_bh(&priv->cmsg_skbs_high.lock);
+
+	spin_lock_bh(&priv->cmsg_skbs_low.lock);
+	skb_queue_splice_tail_init(&priv->cmsg_skbs_low, &cmsg_joined);
+	spin_unlock_bh(&priv->cmsg_skbs_low.lock);
+
+	while ((skb = __skb_dequeue(&cmsg_joined)))
 		nfp_flower_cmsg_process_one_rx(priv->app, skb);
+}
+
+static void
+nfp_flower_queue_ctl_msg(struct nfp_app *app, struct sk_buff *skb, int type)
+{
+	struct nfp_flower_priv *priv = app->priv;
+	struct sk_buff_head *skb_head;
+
+	if (type == NFP_FLOWER_CMSG_TYPE_PORT_REIFY ||
+	    type == NFP_FLOWER_CMSG_TYPE_PORT_MOD)
+		skb_head = &priv->cmsg_skbs_high;
+	else
+		skb_head = &priv->cmsg_skbs_low;
+
+	if (skb_queue_len(skb_head) >= NFP_FLOWER_WORKQ_MAX_SKBS) {
+		nfp_flower_cmsg_warn(app, "Dropping queued control messages\n");
+		dev_kfree_skb_any(skb);
+		return;
+	}
+
+	skb_queue_tail(skb_head, skb);
+	schedule_work(&priv->cmsg_work);
 }
 
 void nfp_flower_cmsg_rx(struct nfp_app *app, struct sk_buff *skb)
 {
-	struct nfp_flower_priv *priv = app->priv;
 	struct nfp_flower_cmsg_hdr *cmsg_hdr;
 
 	cmsg_hdr = nfp_flower_cmsg_get_hdr(skb);
···
 		   nfp_flower_process_mtu_ack(app, skb)) {
 		/* Handle MTU acks outside wq to prevent RTNL conflict. */
 		dev_consume_skb_any(skb);
+	} else if (cmsg_hdr->type == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH) {
+		/* Acks from the NFP that the route is added - ignore. */
+		dev_consume_skb_any(skb);
 	} else {
-		skb_queue_tail(&priv->cmsg_skbs, skb);
-		schedule_work(&priv->cmsg_work);
+		nfp_flower_queue_ctl_msg(app, skb, cmsg_hdr->type);
 	}
 }
+2
drivers/net/ethernet/netronome/nfp/flower/cmsg.h
···
 #define NFP_FL_IPV4_TUNNEL_TYPE		GENMASK(7, 4)
 #define NFP_FL_IPV4_PRE_TUN_INDEX	GENMASK(2, 0)
 
+#define NFP_FLOWER_WORKQ_MAX_SKBS	30000
+
 #define nfp_flower_cmsg_warn(app, fmt, args...) \
 	do { \
 		if (net_ratelimit()) \
+4 -2
drivers/net/ethernet/netronome/nfp/flower/main.c
···
 
 	app->priv = app_priv;
 	app_priv->app = app;
-	skb_queue_head_init(&app_priv->cmsg_skbs);
+	skb_queue_head_init(&app_priv->cmsg_skbs_high);
+	skb_queue_head_init(&app_priv->cmsg_skbs_low);
 	INIT_WORK(&app_priv->cmsg_work, nfp_flower_cmsg_process_rx);
 	init_waitqueue_head(&app_priv->reify_wait_queue);
 
···
 {
 	struct nfp_flower_priv *app_priv = app->priv;
 
-	skb_queue_purge(&app_priv->cmsg_skbs);
+	skb_queue_purge(&app_priv->cmsg_skbs_high);
+	skb_queue_purge(&app_priv->cmsg_skbs_low);
 	flush_work(&app_priv->cmsg_work);
 
 	nfp_flower_metadata_cleanup(app);
+6 -2
drivers/net/ethernet/netronome/nfp/flower/main.h
···
  * @mask_table:		Hash table used to store masks
  * @flow_table:		Hash table used to store flower rules
  * @cmsg_work:		Workqueue for control messages processing
- * @cmsg_skbs:		List of skbs for control message processing
+ * @cmsg_skbs_high:	List of higher priority skbs for control message
+ *			processing
+ * @cmsg_skbs_low:	List of lower priority skbs for control message
+ *			processing
  * @nfp_mac_off_list:	List of MAC addresses to offload
  * @nfp_mac_index_list:	List of unique 8-bit indexes for non NFP netdevs
  * @nfp_ipv4_off_list:	List of IPv4 addresses to offload
···
 	DECLARE_HASHTABLE(mask_table, NFP_FLOWER_MASK_HASH_BITS);
 	DECLARE_HASHTABLE(flow_table, NFP_FLOWER_HASH_BITS);
 	struct work_struct cmsg_work;
-	struct sk_buff_head cmsg_skbs;
+	struct sk_buff_head cmsg_skbs_high;
+	struct sk_buff_head cmsg_skbs_low;
 	struct list_head nfp_mac_off_list;
 	struct list_head nfp_mac_index_list;
 	struct list_head nfp_ipv4_off_list;
+4 -1
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_mutex.c
···
 			break;
 
 		err = msleep_interruptible(timeout_ms);
-		if (err != 0)
+		if (err != 0) {
+			nfp_info(mutex->cpp,
+				 "interrupted waiting for NFP mutex\n");
 			return -ERESTARTSYS;
+		}
 
 		if (time_is_before_eq_jiffies(warn_at)) {
 			warn_at = jiffies + NFP_MUTEX_WAIT_NEXT_WARN * HZ;
+1 -2
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
···
 		if ((*reg & mask) == val)
 			return 0;
 
-		if (msleep_interruptible(25))
-			return -ERESTARTSYS;
+		msleep(25);
 
 		if (time_after(start_time, wait_until))
 			return -ETIMEDOUT;
+6 -5
drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
···
 
 	real_dev = priv->real_dev;
 
-	if (!rmnet_is_real_dev_registered(real_dev))
-		return -ENODEV;
-
 	if (nla_put_u16(skb, IFLA_RMNET_MUX_ID, priv->mux_id))
 		goto nla_put_failure;
 
-	port = rmnet_get_port_rtnl(real_dev);
+	if (rmnet_is_real_dev_registered(real_dev)) {
+		port = rmnet_get_port_rtnl(real_dev);
+		f.flags = port->data_format;
+	} else {
+		f.flags = 0;
+	}
 
-	f.flags = port->data_format;
 	f.mask  = ~0;
 
 	if (nla_put(skb, IFLA_RMNET_FLAGS, sizeof(f), &f))
+3 -4
drivers/net/ethernet/sfc/ef10.c
···
 		goto out_unlock;
 	}
 
-	if (!rps_may_expire_flow(efx->net_dev, spec->dmaq_id,
-				 flow_id, filter_idx)) {
+	if (!rps_may_expire_flow(efx->net_dev, spec->dmaq_id, flow_id, 0)) {
 		ret = false;
 		goto out_unlock;
 	}
···
 		ids = vlan->uc;
 	}
 
-	filter_flags = efx_rss_enabled(efx) ? EFX_FILTER_FLAG_RX_RSS : 0;
+	filter_flags = efx_rss_active(&efx->rss_context) ? EFX_FILTER_FLAG_RX_RSS : 0;
 
 	/* Insert/renew filters */
 	for (i = 0; i < addr_count; i++) {
···
 	int rc;
 	u16 *id;
 
-	filter_flags = efx_rss_enabled(efx) ? EFX_FILTER_FLAG_RX_RSS : 0;
+	filter_flags = efx_rss_active(&efx->rss_context) ? EFX_FILTER_FLAG_RX_RSS : 0;
 
 	efx_filter_init_rx(&spec, EFX_FILTER_PRI_AUTO, filter_flags, 0);
 
+1 -1
drivers/net/ethernet/sfc/farch.c
···
 	if (test_bit(index, table->used_bitmap) &&
 	    table->spec[index].priority == EFX_FILTER_PRI_HINT &&
 	    rps_may_expire_flow(efx->net_dev, table->spec[index].dmaq_id,
-				flow_id, index)) {
+				flow_id, 0)) {
 		efx_farch_filter_table_clear_entry(efx, table, index);
 		ret = true;
 	}
+25
drivers/net/ethernet/sfc/net_driver.h
···
 	u32 rx_indir_table[128];
 };
 
+#ifdef CONFIG_RFS_ACCEL
+/**
+ * struct efx_async_filter_insertion - Request to asynchronously insert a filter
+ * @net_dev: Reference to the netdevice
+ * @spec: The filter to insert
+ * @work: Workitem for this request
+ * @rxq_index: Identifies the channel for which this request was made
+ * @flow_id: Identifies the kernel-side flow for which this request was made
+ */
+struct efx_async_filter_insertion {
+	struct net_device *net_dev;
+	struct efx_filter_spec spec;
+	struct work_struct work;
+	u16 rxq_index;
+	u32 flow_id;
+};
+
+/* Maximum number of ARFS workitems that may be in flight on an efx_nic */
+#define EFX_RPS_MAX_IN_FLIGHT	8
+#endif /* CONFIG_RFS_ACCEL */
+
 /**
  * struct efx_nic - an Efx NIC
  * @name: Device name (net device name or bus id before net device registered)
···
  * @rps_expire_channel: Next channel to check for expiry
  * @rps_expire_index: Next index to check for expiry in
  *	@rps_expire_channel's @rps_flow_id
+ * @rps_slot_map: bitmap of in-flight entries in @rps_slot
+ * @rps_slot: array of ARFS insertion requests for efx_filter_rfs_work()
  * @active_queues: Count of RX and TX queues that haven't been flushed and drained.
  * @rxq_flush_pending: Count of number of receive queues that need to be flushed.
  *	Decremented when the efx_flush_rx_queue() is called.
···
 	struct mutex rps_mutex;
 	unsigned int rps_expire_channel;
 	unsigned int rps_expire_index;
+	unsigned long rps_slot_map;
+	struct efx_async_filter_insertion rps_slot[EFX_RPS_MAX_IN_FLIGHT];
 #endif
 
 	atomic_t active_queues;
+31 -29
drivers/net/ethernet/sfc/rx.c
···
 
 #ifdef CONFIG_RFS_ACCEL
 
-/**
- * struct efx_async_filter_insertion - Request to asynchronously insert a filter
- * @net_dev: Reference to the netdevice
- * @spec: The filter to insert
- * @work: Workitem for this request
- * @rxq_index: Identifies the channel for which this request was made
- * @flow_id: Identifies the kernel-side flow for which this request was made
- */
-struct efx_async_filter_insertion {
-	struct net_device *net_dev;
-	struct efx_filter_spec spec;
-	struct work_struct work;
-	u16 rxq_index;
-	u32 flow_id;
-};
-
 static void efx_filter_rfs_work(struct work_struct *data)
 {
 	struct efx_async_filter_insertion *req = container_of(data, struct efx_async_filter_insertion,
 							      work);
 	struct efx_nic *efx = netdev_priv(req->net_dev);
 	struct efx_channel *channel = efx_get_channel(efx, req->rxq_index);
+	int slot_idx = req - efx->rps_slot;
 	int rc;
 
-	rc = efx->type->filter_insert(efx, &req->spec, false);
+	rc = efx->type->filter_insert(efx, &req->spec, true);
 	if (rc >= 0) {
 		/* Remember this so we can check whether to expire the filter
 		 * later.
···
 	}
 
 	/* Release references */
+	clear_bit(slot_idx, &efx->rps_slot_map);
 	dev_put(req->net_dev);
-	kfree(req);
 }
 
 int efx_filter_rfs(struct net_device *net_dev, const struct sk_buff *skb,
···
 	struct efx_nic *efx = netdev_priv(net_dev);
 	struct efx_async_filter_insertion *req;
 	struct flow_keys fk;
+	int slot_idx;
+	int rc;
 
-	if (flow_id == RPS_FLOW_ID_INVALID)
-		return -EINVAL;
+	/* find a free slot */
+	for (slot_idx = 0; slot_idx < EFX_RPS_MAX_IN_FLIGHT; slot_idx++)
+		if (!test_and_set_bit(slot_idx, &efx->rps_slot_map))
+			break;
+	if (slot_idx >= EFX_RPS_MAX_IN_FLIGHT)
+		return -EBUSY;
 
-	if (!skb_flow_dissect_flow_keys(skb, &fk, 0))
-		return -EPROTONOSUPPORT;
+	if (flow_id == RPS_FLOW_ID_INVALID) {
+		rc = -EINVAL;
+		goto out_clear;
+	}
 
-	if (fk.basic.n_proto != htons(ETH_P_IP) && fk.basic.n_proto != htons(ETH_P_IPV6))
-		return -EPROTONOSUPPORT;
-	if (fk.control.flags & FLOW_DIS_IS_FRAGMENT)
-		return -EPROTONOSUPPORT;
+	if (!skb_flow_dissect_flow_keys(skb, &fk, 0)) {
+		rc = -EPROTONOSUPPORT;
+		goto out_clear;
+	}
 
-	req = kmalloc(sizeof(*req), GFP_ATOMIC);
-	if (!req)
-		return -ENOMEM;
+	if (fk.basic.n_proto != htons(ETH_P_IP) && fk.basic.n_proto != htons(ETH_P_IPV6)) {
+		rc = -EPROTONOSUPPORT;
+		goto out_clear;
+	}
+	if (fk.control.flags & FLOW_DIS_IS_FRAGMENT) {
+		rc = -EPROTONOSUPPORT;
+		goto out_clear;
+	}
 
+	req = efx->rps_slot + slot_idx;
 	efx_filter_init_rx(&req->spec, EFX_FILTER_PRI_HINT,
 			   efx->rx_scatter ? EFX_FILTER_FLAG_RX_SCATTER : 0,
 			   rxq_index);
···
 	req->flow_id = flow_id;
 	schedule_work(&req->work);
 	return 0;
+out_clear:
+	clear_bit(slot_idx, &efx->rps_slot_map);
+	return rc;
 }
 
 bool __efx_filter_rfs_expire(struct efx_nic *efx, unsigned int quota)
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac4.h
··· 347 347 #define MTL_RX_OVERFLOW_INT BIT(16) 348 348 349 349 /* Default operating mode of the MAC */ 350 - #define GMAC_CORE_INIT (GMAC_CONFIG_JD | GMAC_CONFIG_PS | GMAC_CONFIG_ACS | \ 350 + #define GMAC_CORE_INIT (GMAC_CONFIG_JD | GMAC_CONFIG_PS | \ 351 351 GMAC_CONFIG_BE | GMAC_CONFIG_DCRS) 352 352 353 353 /* To dump the core regs excluding the Address Registers */
-7
drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
··· 31 31 32 32 value |= GMAC_CORE_INIT; 33 33 34 - /* Clear ACS bit because Ethernet switch tagging formats such as 35 - * Broadcom tags can look like invalid LLC/SNAP packets and cause the 36 - * hardware to truncate packets on reception. 37 - */ 38 - if (netdev_uses_dsa(dev)) 39 - value &= ~GMAC_CONFIG_ACS; 40 - 41 34 if (mtu > 1500) 42 35 value |= GMAC_CONFIG_2K; 43 36 if (mtu > 2000)
+6 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 3495 3495 3496 3496 /* ACS is set; GMAC core strips PAD/FCS for IEEE 802.3 3497 3497 * Type frames (LLC/LLC-SNAP) 3498 + * 3499 + * llc_snap is never checked in GMAC >= 4, so this ACS 3500 + * feature is always disabled and packets need to be 3501 + * stripped manually. 3498 3502 */ 3499 - if (unlikely(status != llc_snap)) 3503 + if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00) || 3504 + unlikely(status != llc_snap)) 3500 3505 frame_len -= ETH_FCS_LEN; 3501 3506 3502 3507 if (netif_msg_rx_status(priv)) {
+2 -3
drivers/net/macsec.c
··· 3277 3277 3278 3278 err = netdev_upper_dev_link(real_dev, dev, extack); 3279 3279 if (err < 0) 3280 - goto put_dev; 3280 + goto unregister; 3281 3281 3282 3282 /* need to be already registered so that ->init has run and 3283 3283 * the MAC addr is set ··· 3316 3316 macsec_del_dev(macsec); 3317 3317 unlink: 3318 3318 netdev_upper_dev_unlink(real_dev, dev); 3319 - put_dev: 3320 - dev_put(real_dev); 3319 + unregister: 3321 3320 unregister_netdevice(dev); 3322 3321 return err; 3323 3322 }
+177 -1
drivers/net/phy/microchip.c
··· 20 20 #include <linux/ethtool.h> 21 21 #include <linux/phy.h> 22 22 #include <linux/microchipphy.h> 23 + #include <linux/delay.h> 23 24 24 25 #define DRIVER_AUTHOR "WOOJUNG HUH <woojung.huh@microchip.com>" 25 26 #define DRIVER_DESC "Microchip LAN88XX PHY driver" ··· 30 29 int chip_rev; 31 30 __u32 wolopts; 32 31 }; 32 + 33 + static int lan88xx_read_page(struct phy_device *phydev) 34 + { 35 + return __phy_read(phydev, LAN88XX_EXT_PAGE_ACCESS); 36 + } 37 + 38 + static int lan88xx_write_page(struct phy_device *phydev, int page) 39 + { 40 + return __phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, page); 41 + } 33 42 34 43 static int lan88xx_phy_config_intr(struct phy_device *phydev) 35 44 { ··· 75 64 genphy_suspend(phydev); 76 65 77 66 return 0; 67 + } 68 + 69 + static int lan88xx_TR_reg_set(struct phy_device *phydev, u16 regaddr, 70 + u32 data) 71 + { 72 + int val, save_page, ret = 0; 73 + u16 buf; 74 + 75 + /* Save current page */ 76 + save_page = phy_save_page(phydev); 77 + if (save_page < 0) { 78 + pr_warn("Failed to get current page\n"); 79 + goto err; 80 + } 81 + 82 + /* Switch to TR page */ 83 + lan88xx_write_page(phydev, LAN88XX_EXT_PAGE_ACCESS_TR); 84 + 85 + ret = __phy_write(phydev, LAN88XX_EXT_PAGE_TR_LOW_DATA, 86 + (data & 0xFFFF)); 87 + if (ret < 0) { 88 + pr_warn("Failed to write TR low data\n"); 89 + goto err; 90 + } 91 + 92 + ret = __phy_write(phydev, LAN88XX_EXT_PAGE_TR_HIGH_DATA, 93 + (data & 0x00FF0000) >> 16); 94 + if (ret < 0) { 95 + pr_warn("Failed to write TR high data\n"); 96 + goto err; 97 + } 98 + 99 + /* Config control bits [15:13] of register */ 100 + buf = (regaddr & ~(0x3 << 13));/* Clr [14:13] to write data in reg */ 101 + buf |= 0x8000; /* Set [15] to Packet transmit */ 102 + 103 + ret = __phy_write(phydev, LAN88XX_EXT_PAGE_TR_CR, buf); 104 + if (ret < 0) { 105 + pr_warn("Failed to write data in reg\n"); 106 + goto err; 107 + } 108 + 109 + usleep_range(1000, 2000);/* Wait for Data to be written */ 110 + val = __phy_read(phydev, 
LAN88XX_EXT_PAGE_TR_CR); 111 + if (!(val & 0x8000)) 112 + pr_warn("TR Register[0x%X] configuration failed\n", regaddr); 113 + err: 114 + return phy_restore_page(phydev, save_page, ret); 115 + } 116 + 117 + static void lan88xx_config_TR_regs(struct phy_device *phydev) 118 + { 119 + int err; 120 + 121 + /* Get access to Channel 0x1, Node 0xF , Register 0x01. 122 + * Write 24-bit value 0x12B00A to register. Setting MrvlTrFix1000Kf, 123 + * MrvlTrFix1000Kp, MasterEnableTR bits. 124 + */ 125 + err = lan88xx_TR_reg_set(phydev, 0x0F82, 0x12B00A); 126 + if (err < 0) 127 + pr_warn("Failed to Set Register[0x0F82]\n"); 128 + 129 + /* Get access to Channel b'10, Node b'1101, Register 0x06. 130 + * Write 24-bit value 0xD2C46F to register. Setting SSTrKf1000Slv, 131 + * SSTrKp1000Mas bits. 132 + */ 133 + err = lan88xx_TR_reg_set(phydev, 0x168C, 0xD2C46F); 134 + if (err < 0) 135 + pr_warn("Failed to Set Register[0x168C]\n"); 136 + 137 + /* Get access to Channel b'10, Node b'1111, Register 0x11. 138 + * Write 24-bit value 0x620 to register. Setting rem_upd_done_thresh 139 + * bits 140 + */ 141 + err = lan88xx_TR_reg_set(phydev, 0x17A2, 0x620); 142 + if (err < 0) 143 + pr_warn("Failed to Set Register[0x17A2]\n"); 144 + 145 + /* Get access to Channel b'10, Node b'1101, Register 0x10. 146 + * Write 24-bit value 0xEEFFDD to register. Setting 147 + * eee_TrKp1Long_1000, eee_TrKp2Long_1000, eee_TrKp3Long_1000, 148 + * eee_TrKp1Short_1000,eee_TrKp2Short_1000, eee_TrKp3Short_1000 bits. 149 + */ 150 + err = lan88xx_TR_reg_set(phydev, 0x16A0, 0xEEFFDD); 151 + if (err < 0) 152 + pr_warn("Failed to Set Register[0x16A0]\n"); 153 + 154 + /* Get access to Channel b'10, Node b'1101, Register 0x13. 155 + * Write 24-bit value 0x071448 to register. Setting 156 + * slv_lpi_tr_tmr_val1, slv_lpi_tr_tmr_val2 bits. 
157 + */ 158 + err = lan88xx_TR_reg_set(phydev, 0x16A6, 0x071448); 159 + if (err < 0) 160 + pr_warn("Failed to Set Register[0x16A6]\n"); 161 + 162 + /* Get access to Channel b'10, Node b'1101, Register 0x12. 163 + * Write 24-bit value 0x13132F to register. Setting 164 + * slv_sigdet_timer_val1, slv_sigdet_timer_val2 bits. 165 + */ 166 + err = lan88xx_TR_reg_set(phydev, 0x16A4, 0x13132F); 167 + if (err < 0) 168 + pr_warn("Failed to Set Register[0x16A4]\n"); 169 + 170 + /* Get access to Channel b'10, Node b'1101, Register 0x14. 171 + * Write 24-bit value 0x0 to register. Setting eee_3level_delay, 172 + * eee_TrKf_freeze_delay bits. 173 + */ 174 + err = lan88xx_TR_reg_set(phydev, 0x16A8, 0x0); 175 + if (err < 0) 176 + pr_warn("Failed to Set Register[0x16A8]\n"); 177 + 178 + /* Get access to Channel b'01, Node b'1111, Register 0x34. 179 + * Write 24-bit value 0x91B06C to register. Setting 180 + * FastMseSearchThreshLong1000, FastMseSearchThreshShort1000, 181 + * FastMseSearchUpdGain1000 bits. 182 + */ 183 + err = lan88xx_TR_reg_set(phydev, 0x0FE8, 0x91B06C); 184 + if (err < 0) 185 + pr_warn("Failed to Set Register[0x0FE8]\n"); 186 + 187 + /* Get access to Channel b'01, Node b'1111, Register 0x3E. 188 + * Write 24-bit value 0xC0A028 to register. Setting 189 + * FastMseKp2ThreshLong1000, FastMseKp2ThreshShort1000, 190 + * FastMseKp2UpdGain1000, FastMseKp2ExitEn1000 bits. 191 + */ 192 + err = lan88xx_TR_reg_set(phydev, 0x0FFC, 0xC0A028); 193 + if (err < 0) 194 + pr_warn("Failed to Set Register[0x0FFC]\n"); 195 + 196 + /* Get access to Channel b'01, Node b'1111, Register 0x35. 197 + * Write 24-bit value 0x041600 to register. Setting 198 + * FastMseSearchPhShNum1000, FastMseSearchClksPerPh1000, 199 + * FastMsePhChangeDelay1000 bits. 200 + */ 201 + err = lan88xx_TR_reg_set(phydev, 0x0FEA, 0x041600); 202 + if (err < 0) 203 + pr_warn("Failed to Set Register[0x0FEA]\n"); 204 + 205 + /* Get access to Channel b'10, Node b'1101, Register 0x03. 
206 + * Write 24-bit value 0x000004 to register. Setting TrFreeze bits. 207 + */ 208 + err = lan88xx_TR_reg_set(phydev, 0x1686, 0x000004); 209 + if (err < 0) 210 + pr_warn("Failed to Set Register[0x1686]\n"); 78 211 } 79 212 80 213 static int lan88xx_probe(struct phy_device *phydev) ··· 287 132 phy_write(phydev, LAN88XX_EXT_PAGE_ACCESS, LAN88XX_EXT_PAGE_SPACE_0); 288 133 } 289 134 135 + static int lan88xx_config_init(struct phy_device *phydev) 136 + { 137 + int val; 138 + 139 + genphy_config_init(phydev); 140 + /*Zerodetect delay enable */ 141 + val = phy_read_mmd(phydev, MDIO_MMD_PCS, 142 + PHY_ARDENNES_MMD_DEV_3_PHY_CFG); 143 + val |= PHY_ARDENNES_MMD_DEV_3_PHY_CFG_ZD_DLY_EN_; 144 + 145 + phy_write_mmd(phydev, MDIO_MMD_PCS, PHY_ARDENNES_MMD_DEV_3_PHY_CFG, 146 + val); 147 + 148 + /* Config DSP registers */ 149 + lan88xx_config_TR_regs(phydev); 150 + 151 + return 0; 152 + } 153 + 290 154 static int lan88xx_config_aneg(struct phy_device *phydev) 291 155 { 292 156 lan88xx_set_mdix(phydev); ··· 325 151 .probe = lan88xx_probe, 326 152 .remove = lan88xx_remove, 327 153 328 - .config_init = genphy_config_init, 154 + .config_init = lan88xx_config_init, 329 155 .config_aneg = lan88xx_config_aneg, 330 156 331 157 .ack_interrupt = lan88xx_phy_ack_interrupt, ··· 334 160 .suspend = lan88xx_suspend, 335 161 .resume = genphy_resume, 336 162 .set_wol = lan88xx_set_wol, 163 + .read_page = lan88xx_read_page, 164 + .write_page = lan88xx_write_page, 337 165 } }; 338 166 339 167 module_phy_driver(microchip_phy_driver);
+19
drivers/net/team/team.c
··· 261 261 } 262 262 } 263 263 264 + static bool __team_option_inst_tmp_find(const struct list_head *opts, 265 + const struct team_option_inst *needle) 266 + { 267 + struct team_option_inst *opt_inst; 268 + 269 + list_for_each_entry(opt_inst, opts, tmp_list) 270 + if (opt_inst == needle) 271 + return true; 272 + return false; 273 + } 274 + 264 275 static int __team_options_register(struct team *team, 265 276 const struct team_option *option, 266 277 size_t option_count) ··· 2579 2568 if (err) 2580 2569 goto team_put; 2581 2570 opt_inst->changed = true; 2571 + 2572 + /* dumb/evil user-space can send us duplicate opt, 2573 + * keep only the last one 2574 + */ 2575 + if (__team_option_inst_tmp_find(&opt_inst_list, 2576 + opt_inst)) 2577 + continue; 2578 + 2582 2579 list_add(&opt_inst->tmp_list, &opt_inst_list); 2583 2580 } 2584 2581 if (!opt_found) {
+1 -6
drivers/net/tun.c
··· 1102 1102 goto drop; 1103 1103 1104 1104 len = run_ebpf_filter(tun, skb, len); 1105 - 1106 - /* Trim extra bytes since we may insert vlan proto & TCI 1107 - * in tun_put_user(). 1108 - */ 1109 - len -= skb_vlan_tag_present(skb) ? sizeof(struct veth) : 0; 1110 - if (len <= 0 || pskb_trim(skb, len)) 1105 + if (len == 0 || pskb_trim(skb, len)) 1111 1106 goto drop; 1112 1107 1113 1108 if (unlikely(skb_orphan_frags_rx(skb, GFP_ATOMIC)))
+1
drivers/net/usb/qmi_wwan.c
··· 1107 1107 {QMI_FIXED_INTF(0x1435, 0xd181, 3)}, /* Wistron NeWeb D18Q1 */ 1108 1108 {QMI_FIXED_INTF(0x1435, 0xd181, 4)}, /* Wistron NeWeb D18Q1 */ 1109 1109 {QMI_FIXED_INTF(0x1435, 0xd181, 5)}, /* Wistron NeWeb D18Q1 */ 1110 + {QMI_FIXED_INTF(0x1435, 0xd191, 4)}, /* Wistron NeWeb D19Q1 */ 1110 1111 {QMI_FIXED_INTF(0x16d8, 0x6003, 0)}, /* CMOTech 6003 */ 1111 1112 {QMI_FIXED_INTF(0x16d8, 0x6007, 0)}, /* CMOTech CHE-628S */ 1112 1113 {QMI_FIXED_INTF(0x16d8, 0x6008, 0)}, /* CMOTech CMU-301 */
+48 -31
drivers/net/virtio_net.c
··· 147 147 struct xdp_rxq_info xdp_rxq; 148 148 }; 149 149 150 + /* Control VQ buffers: protected by the rtnl lock */ 151 + struct control_buf { 152 + struct virtio_net_ctrl_hdr hdr; 153 + virtio_net_ctrl_ack status; 154 + struct virtio_net_ctrl_mq mq; 155 + u8 promisc; 156 + u8 allmulti; 157 + __virtio16 vid; 158 + __virtio64 offloads; 159 + }; 160 + 150 161 struct virtnet_info { 151 162 struct virtio_device *vdev; 152 163 struct virtqueue *cvq; ··· 203 192 struct hlist_node node; 204 193 struct hlist_node node_dead; 205 194 206 - /* Control VQ buffers: protected by the rtnl lock */ 207 - struct virtio_net_ctrl_hdr ctrl_hdr; 208 - virtio_net_ctrl_ack ctrl_status; 209 - struct virtio_net_ctrl_mq ctrl_mq; 210 - u8 ctrl_promisc; 211 - u8 ctrl_allmulti; 212 - u16 ctrl_vid; 213 - u64 ctrl_offloads; 195 + struct control_buf *ctrl; 214 196 215 197 /* Ethtool settings */ 216 198 u8 duplex; ··· 1273 1269 { 1274 1270 struct receive_queue *rq = 1275 1271 container_of(napi, struct receive_queue, napi); 1276 - unsigned int received; 1272 + struct virtnet_info *vi = rq->vq->vdev->priv; 1273 + struct send_queue *sq; 1274 + unsigned int received, qp; 1277 1275 bool xdp_xmit = false; 1278 1276 1279 1277 virtnet_poll_cleantx(rq); ··· 1286 1280 if (received < budget) 1287 1281 virtqueue_napi_complete(napi, rq->vq, received); 1288 1282 1289 - if (xdp_xmit) 1283 + if (xdp_xmit) { 1284 + qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + 1285 + smp_processor_id(); 1286 + sq = &vi->sq[qp]; 1287 + virtqueue_kick(sq->vq); 1290 1288 xdp_do_flush_map(); 1289 + } 1291 1290 1292 1291 return received; 1293 1292 } ··· 1465 1454 /* Caller should know better */ 1466 1455 BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ)); 1467 1456 1468 - vi->ctrl_status = ~0; 1469 - vi->ctrl_hdr.class = class; 1470 - vi->ctrl_hdr.cmd = cmd; 1457 + vi->ctrl->status = ~0; 1458 + vi->ctrl->hdr.class = class; 1459 + vi->ctrl->hdr.cmd = cmd; 1471 1460 /* Add header */ 1472 - sg_init_one(&hdr, &vi->ctrl_hdr, 
sizeof(vi->ctrl_hdr)); 1461 + sg_init_one(&hdr, &vi->ctrl->hdr, sizeof(vi->ctrl->hdr)); 1473 1462 sgs[out_num++] = &hdr; 1474 1463 1475 1464 if (out) 1476 1465 sgs[out_num++] = out; 1477 1466 1478 1467 /* Add return status. */ 1479 - sg_init_one(&stat, &vi->ctrl_status, sizeof(vi->ctrl_status)); 1468 + sg_init_one(&stat, &vi->ctrl->status, sizeof(vi->ctrl->status)); 1480 1469 sgs[out_num] = &stat; 1481 1470 1482 1471 BUG_ON(out_num + 1 > ARRAY_SIZE(sgs)); 1483 1472 virtqueue_add_sgs(vi->cvq, sgs, out_num, 1, vi, GFP_ATOMIC); 1484 1473 1485 1474 if (unlikely(!virtqueue_kick(vi->cvq))) 1486 - return vi->ctrl_status == VIRTIO_NET_OK; 1475 + return vi->ctrl->status == VIRTIO_NET_OK; 1487 1476 1488 1477 /* Spin for a response, the kick causes an ioport write, trapping 1489 1478 * into the hypervisor, so the request should be handled immediately. ··· 1492 1481 !virtqueue_is_broken(vi->cvq)) 1493 1482 cpu_relax(); 1494 1483 1495 - return vi->ctrl_status == VIRTIO_NET_OK; 1484 + return vi->ctrl->status == VIRTIO_NET_OK; 1496 1485 } 1497 1486 1498 1487 static int virtnet_set_mac_address(struct net_device *dev, void *p) ··· 1604 1593 if (!vi->has_cvq || !virtio_has_feature(vi->vdev, VIRTIO_NET_F_MQ)) 1605 1594 return 0; 1606 1595 1607 - vi->ctrl_mq.virtqueue_pairs = cpu_to_virtio16(vi->vdev, queue_pairs); 1608 - sg_init_one(&sg, &vi->ctrl_mq, sizeof(vi->ctrl_mq)); 1596 + vi->ctrl->mq.virtqueue_pairs = cpu_to_virtio16(vi->vdev, queue_pairs); 1597 + sg_init_one(&sg, &vi->ctrl->mq, sizeof(vi->ctrl->mq)); 1609 1598 1610 1599 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_MQ, 1611 1600 VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &sg)) { ··· 1664 1653 if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_RX)) 1665 1654 return; 1666 1655 1667 - vi->ctrl_promisc = ((dev->flags & IFF_PROMISC) != 0); 1668 - vi->ctrl_allmulti = ((dev->flags & IFF_ALLMULTI) != 0); 1656 + vi->ctrl->promisc = ((dev->flags & IFF_PROMISC) != 0); 1657 + vi->ctrl->allmulti = ((dev->flags & IFF_ALLMULTI) != 0); 1669 1658 
1670 - sg_init_one(sg, &vi->ctrl_promisc, sizeof(vi->ctrl_promisc)); 1659 + sg_init_one(sg, &vi->ctrl->promisc, sizeof(vi->ctrl->promisc)); 1671 1660 1672 1661 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_RX, 1673 1662 VIRTIO_NET_CTRL_RX_PROMISC, sg)) 1674 1663 dev_warn(&dev->dev, "Failed to %sable promisc mode.\n", 1675 - vi->ctrl_promisc ? "en" : "dis"); 1664 + vi->ctrl->promisc ? "en" : "dis"); 1676 1665 1677 - sg_init_one(sg, &vi->ctrl_allmulti, sizeof(vi->ctrl_allmulti)); 1666 + sg_init_one(sg, &vi->ctrl->allmulti, sizeof(vi->ctrl->allmulti)); 1678 1667 1679 1668 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_RX, 1680 1669 VIRTIO_NET_CTRL_RX_ALLMULTI, sg)) 1681 1670 dev_warn(&dev->dev, "Failed to %sable allmulti mode.\n", 1682 - vi->ctrl_allmulti ? "en" : "dis"); 1671 + vi->ctrl->allmulti ? "en" : "dis"); 1683 1672 1684 1673 uc_count = netdev_uc_count(dev); 1685 1674 mc_count = netdev_mc_count(dev); ··· 1725 1714 struct virtnet_info *vi = netdev_priv(dev); 1726 1715 struct scatterlist sg; 1727 1716 1728 - vi->ctrl_vid = vid; 1729 - sg_init_one(&sg, &vi->ctrl_vid, sizeof(vi->ctrl_vid)); 1717 + vi->ctrl->vid = cpu_to_virtio16(vi->vdev, vid); 1718 + sg_init_one(&sg, &vi->ctrl->vid, sizeof(vi->ctrl->vid)); 1730 1719 1731 1720 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_VLAN, 1732 1721 VIRTIO_NET_CTRL_VLAN_ADD, &sg)) ··· 1740 1729 struct virtnet_info *vi = netdev_priv(dev); 1741 1730 struct scatterlist sg; 1742 1731 1743 - vi->ctrl_vid = vid; 1744 - sg_init_one(&sg, &vi->ctrl_vid, sizeof(vi->ctrl_vid)); 1732 + vi->ctrl->vid = cpu_to_virtio16(vi->vdev, vid); 1733 + sg_init_one(&sg, &vi->ctrl->vid, sizeof(vi->ctrl->vid)); 1745 1734 1746 1735 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_VLAN, 1747 1736 VIRTIO_NET_CTRL_VLAN_DEL, &sg)) ··· 2137 2126 static int virtnet_set_guest_offloads(struct virtnet_info *vi, u64 offloads) 2138 2127 { 2139 2128 struct scatterlist sg; 2140 - vi->ctrl_offloads = cpu_to_virtio64(vi->vdev, offloads); 2129 + vi->ctrl->offloads = 
cpu_to_virtio64(vi->vdev, offloads); 2141 2130 2142 - sg_init_one(&sg, &vi->ctrl_offloads, sizeof(vi->ctrl_offloads)); 2131 + sg_init_one(&sg, &vi->ctrl->offloads, sizeof(vi->ctrl->offloads)); 2143 2132 2144 2133 if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_GUEST_OFFLOADS, 2145 2134 VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET, &sg)) { ··· 2362 2351 2363 2352 kfree(vi->rq); 2364 2353 kfree(vi->sq); 2354 + kfree(vi->ctrl); 2365 2355 } 2366 2356 2367 2357 static void _free_receive_bufs(struct virtnet_info *vi) ··· 2555 2543 { 2556 2544 int i; 2557 2545 2546 + vi->ctrl = kzalloc(sizeof(*vi->ctrl), GFP_KERNEL); 2547 + if (!vi->ctrl) 2548 + goto err_ctrl; 2558 2549 vi->sq = kzalloc(sizeof(*vi->sq) * vi->max_queue_pairs, GFP_KERNEL); 2559 2550 if (!vi->sq) 2560 2551 goto err_sq; ··· 2586 2571 err_rq: 2587 2572 kfree(vi->sq); 2588 2573 err_sq: 2574 + kfree(vi->ctrl); 2575 + err_ctrl: 2589 2576 return -ENOMEM; 2590 2577 } 2591 2578
+13 -4
drivers/net/vmxnet3/vmxnet3_drv.c
··· 1218 1218 union { 1219 1219 void *ptr; 1220 1220 struct ethhdr *eth; 1221 + struct vlan_ethhdr *veth; 1221 1222 struct iphdr *ipv4; 1222 1223 struct ipv6hdr *ipv6; 1223 1224 struct tcphdr *tcp; ··· 1229 1228 if (unlikely(sizeof(struct iphdr) + sizeof(struct tcphdr) > maplen)) 1230 1229 return 0; 1231 1230 1231 + if (skb->protocol == cpu_to_be16(ETH_P_8021Q) || 1232 + skb->protocol == cpu_to_be16(ETH_P_8021AD)) 1233 + hlen = sizeof(struct vlan_ethhdr); 1234 + else 1235 + hlen = sizeof(struct ethhdr); 1236 + 1232 1237 hdr.eth = eth_hdr(skb); 1233 1238 if (gdesc->rcd.v4) { 1234 - BUG_ON(hdr.eth->h_proto != htons(ETH_P_IP)); 1235 - hdr.ptr += sizeof(struct ethhdr); 1239 + BUG_ON(hdr.eth->h_proto != htons(ETH_P_IP) && 1240 + hdr.veth->h_vlan_encapsulated_proto != htons(ETH_P_IP)); 1241 + hdr.ptr += hlen; 1236 1242 BUG_ON(hdr.ipv4->protocol != IPPROTO_TCP); 1237 1243 hlen = hdr.ipv4->ihl << 2; 1238 1244 hdr.ptr += hdr.ipv4->ihl << 2; 1239 1245 } else if (gdesc->rcd.v6) { 1240 - BUG_ON(hdr.eth->h_proto != htons(ETH_P_IPV6)); 1241 - hdr.ptr += sizeof(struct ethhdr); 1246 + BUG_ON(hdr.eth->h_proto != htons(ETH_P_IPV6) && 1247 + hdr.veth->h_vlan_encapsulated_proto != htons(ETH_P_IPV6)); 1248 + hdr.ptr += hlen; 1242 1249 /* Use an estimated value, since we also need to handle 1243 1250 * TSO case. 1244 1251 */
+2 -2
drivers/net/vmxnet3/vmxnet3_int.h
··· 69 69 /* 70 70 * Version numbers 71 71 */ 72 - #define VMXNET3_DRIVER_VERSION_STRING "1.4.13.0-k" 72 + #define VMXNET3_DRIVER_VERSION_STRING "1.4.14.0-k" 73 73 74 74 /* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */ 75 - #define VMXNET3_DRIVER_VERSION_NUM 0x01040d00 75 + #define VMXNET3_DRIVER_VERSION_NUM 0x01040e00 76 76 77 77 #if defined(CONFIG_PCI_MSI) 78 78 /* RSS only makes sense if MSI-X is supported. */
+5 -2
include/linux/if_vlan.h
··· 663 663 * Returns true if the skb is tagged with multiple vlan headers, regardless 664 664 * of whether it is hardware accelerated or not. 665 665 */ 666 - static inline bool skb_vlan_tagged_multi(const struct sk_buff *skb) 666 + static inline bool skb_vlan_tagged_multi(struct sk_buff *skb) 667 667 { 668 668 __be16 protocol = skb->protocol; 669 669 ··· 671 671 struct vlan_ethhdr *veh; 672 672 673 673 if (likely(!eth_type_vlan(protocol))) 674 + return false; 675 + 676 + if (unlikely(!pskb_may_pull(skb, VLAN_ETH_HLEN))) 674 677 return false; 675 678 676 679 veh = (struct vlan_ethhdr *)skb->data; ··· 693 690 * 694 691 * Returns features without unsafe ones if the skb has multiple tags. 695 692 */ 696 - static inline netdev_features_t vlan_features_check(const struct sk_buff *skb, 693 + static inline netdev_features_t vlan_features_check(struct sk_buff *skb, 697 694 netdev_features_t features) 698 695 { 699 696 if (skb_vlan_tagged_multi(skb)) {
+8
include/linux/microchipphy.h
··· 70 70 #define LAN88XX_MMD3_CHIP_ID (32877) 71 71 #define LAN88XX_MMD3_CHIP_REV (32878) 72 72 73 + /* DSP registers */ 74 + #define PHY_ARDENNES_MMD_DEV_3_PHY_CFG (0x806A) 75 + #define PHY_ARDENNES_MMD_DEV_3_PHY_CFG_ZD_DLY_EN_ (0x2000) 76 + #define LAN88XX_EXT_PAGE_ACCESS_TR (0x52B5) 77 + #define LAN88XX_EXT_PAGE_TR_CR 16 78 + #define LAN88XX_EXT_PAGE_TR_LOW_DATA 17 79 + #define LAN88XX_EXT_PAGE_TR_HIGH_DATA 18 80 + 73 81 #endif /* _MICROCHIPPHY_H */
+2 -2
include/linux/textsearch.h
··· 62 62 int flags; 63 63 64 64 /** 65 - * get_next_block - fetch next block of data 65 + * @get_next_block: fetch next block of data 66 66 * @consumed: number of bytes consumed by the caller 67 67 * @dst: destination buffer 68 68 * @conf: search configuration ··· 79 79 struct ts_state *state); 80 80 81 81 /** 82 - * finish - finalize/clean a series of get_next_block() calls 82 + * @finish: finalize/clean a series of get_next_block() calls 83 83 * @conf: search configuration 84 84 * @state: search state 85 85 *
+23 -17
lib/textsearch.c
··· 10 10 * Pablo Neira Ayuso <pablo@netfilter.org> 11 11 * 12 12 * ========================================================================== 13 - * 13 + */ 14 + 15 + /** 16 + * DOC: ts_intro 14 17 * INTRODUCTION 15 18 * 16 19 * The textsearch infrastructure provides text searching facilities for ··· 22 19 * 23 20 * ARCHITECTURE 24 21 * 25 - * User 22 + * .. code-block:: none 23 + * 24 + * User 26 25 * +----------------+ 27 26 * | finish()|<--------------(6)-----------------+ 28 27 * |get_next_block()|<--------------(5)---------------+ | ··· 38 33 * | (3)|----->| find()/next() |-----------+ | 39 34 * | (7)|----->| destroy() |----------------------+ 40 35 * +----------------+ +---------------+ 41 - * 42 - * (1) User configures a search by calling _prepare() specifying the 43 - * search parameters such as the pattern and algorithm name. 36 + * 37 + * (1) User configures a search by calling textsearch_prepare() specifying 38 + * the search parameters such as the pattern and algorithm name. 44 39 * (2) Core requests the algorithm to allocate and initialize a search 45 40 * configuration according to the specified parameters. 46 - * (3) User starts the search(es) by calling _find() or _next() to 47 - * fetch subsequent occurrences. A state variable is provided 48 - * to the algorithm to store persistent variables. 41 + * (3) User starts the search(es) by calling textsearch_find() or 42 + * textsearch_next() to fetch subsequent occurrences. A state variable 43 + * is provided to the algorithm to store persistent variables. 49 44 * (4) Core eventually resets the search offset and forwards the find() 50 45 * request to the algorithm. 51 46 * (5) Algorithm calls get_next_block() provided by the user continuously 52 47 * to fetch the data to be searched in block by block. 53 48 * (6) Algorithm invokes finish() after the last call to get_next_block 54 49 * to clean up any leftovers from get_next_block. 
(Optional) 55 - * (7) User destroys the configuration by calling _destroy(). 50 + * (7) User destroys the configuration by calling textsearch_destroy(). 56 51 * (8) Core notifies the algorithm to destroy algorithm specific 57 52 * allocations. (Optional) 58 53 * ··· 67 62 * amount of times and even in parallel as long as a separate struct 68 63 * ts_state variable is provided to every instance. 69 64 * 70 - * The actual search is performed by either calling textsearch_find_- 71 - * continuous() for linear data or by providing an own get_next_block() 72 - * implementation and calling textsearch_find(). Both functions return 65 + * The actual search is performed by either calling 66 + * textsearch_find_continuous() for linear data or by providing 67 + * an own get_next_block() implementation and 68 + * calling textsearch_find(). Both functions return 73 69 * the position of the first occurrence of the pattern or UINT_MAX if 74 70 * no match was found. Subsequent occurrences can be found by calling 75 71 * textsearch_next() regardless of the linearity of the data. ··· 78 72 * Once you're done using a configuration it must be given back via 79 73 * textsearch_destroy. 
80 74 * 81 - * EXAMPLE 75 + * EXAMPLE:: 82 76 * 83 77 * int pos; 84 78 * struct ts_config *conf; ··· 93 87 * goto errout; 94 88 * } 95 89 * 96 - * pos = textsearch_find_continuous(conf, &state, example, strlen(example)); 90 + * pos = textsearch_find_continuous(conf, \&state, example, strlen(example)); 97 91 * if (pos != UINT_MAX) 98 - * panic("Oh my god, dancing chickens at %d\n", pos); 92 + * panic("Oh my god, dancing chickens at \%d\n", pos); 99 93 * 100 94 * textsearch_destroy(conf); 101 - * ========================================================================== 102 95 */ 96 + /* ========================================================================== */ 103 97 104 98 #include <linux/module.h> 105 99 #include <linux/types.h> ··· 231 225 * 232 226 * Returns the position of first occurrence of the pattern or 233 227 * %UINT_MAX if no occurrence was found. 234 - */ 228 + */ 235 229 unsigned int textsearch_find_continuous(struct ts_config *conf, 236 230 struct ts_state *state, 237 231 const void *data, unsigned int len)
+1 -1
net/caif/chnl_net.c
··· 174 174 flow == CAIF_CTRLCMD_DEINIT_RSP ? "CLOSE/DEINIT" : 175 175 flow == CAIF_CTRLCMD_INIT_FAIL_RSP ? "OPEN_FAIL" : 176 176 flow == CAIF_CTRLCMD_REMOTE_SHUTDOWN_IND ? 177 - "REMOTE_SHUTDOWN" : "UKNOWN CTRL COMMAND"); 177 + "REMOTE_SHUTDOWN" : "UNKNOWN CTRL COMMAND"); 178 178 179 179 180 180
+1 -1
net/core/dev.c
··· 2969 2969 } 2970 2970 EXPORT_SYMBOL(passthru_features_check); 2971 2971 2972 - static netdev_features_t dflt_features_check(const struct sk_buff *skb, 2972 + static netdev_features_t dflt_features_check(struct sk_buff *skb, 2973 2973 struct net_device *dev, 2974 2974 netdev_features_t features) 2975 2975 {
+1 -1
net/core/dev_addr_lists.c
··· 839 839 EXPORT_SYMBOL(dev_mc_flush); 840 840 841 841 /** 842 - * dev_mc_flush - Init multicast address list 842 + * dev_mc_init - Init multicast address list 843 843 * @dev: device 844 844 * 845 845 * Init multicast address list.
+26 -14
net/core/neighbour.c
··· 55 55 static void __neigh_notify(struct neighbour *n, int type, int flags, 56 56 u32 pid); 57 57 static void neigh_update_notify(struct neighbour *neigh, u32 nlmsg_pid); 58 - static int pneigh_ifdown(struct neigh_table *tbl, struct net_device *dev); 58 + static int pneigh_ifdown_and_unlock(struct neigh_table *tbl, 59 + struct net_device *dev); 59 60 60 61 #ifdef CONFIG_PROC_FS 61 62 static const struct file_operations neigh_stat_seq_fops; ··· 292 291 { 293 292 write_lock_bh(&tbl->lock); 294 293 neigh_flush_dev(tbl, dev); 295 - pneigh_ifdown(tbl, dev); 296 - write_unlock_bh(&tbl->lock); 294 + pneigh_ifdown_and_unlock(tbl, dev); 297 295 298 296 del_timer_sync(&tbl->proxy_timer); 299 297 pneigh_queue_purge(&tbl->proxy_queue); ··· 681 681 return -ENOENT; 682 682 } 683 683 684 - static int pneigh_ifdown(struct neigh_table *tbl, struct net_device *dev) 684 + static int pneigh_ifdown_and_unlock(struct neigh_table *tbl, 685 + struct net_device *dev) 685 686 { 686 - struct pneigh_entry *n, **np; 687 + struct pneigh_entry *n, **np, *freelist = NULL; 687 688 u32 h; 688 689 689 690 for (h = 0; h <= PNEIGH_HASHMASK; h++) { ··· 692 691 while ((n = *np) != NULL) { 693 692 if (!dev || n->dev == dev) { 694 693 *np = n->next; 695 - if (tbl->pdestructor) 696 - tbl->pdestructor(n); 697 - if (n->dev) 698 - dev_put(n->dev); 699 - kfree(n); 694 + n->next = freelist; 695 + freelist = n; 700 696 continue; 701 697 } 702 698 np = &n->next; 703 699 } 700 + } 701 + write_unlock_bh(&tbl->lock); 702 + while ((n = freelist)) { 703 + freelist = n->next; 704 + n->next = NULL; 705 + if (tbl->pdestructor) 706 + tbl->pdestructor(n); 707 + if (n->dev) 708 + dev_put(n->dev); 709 + kfree(n); 704 710 } 705 711 return -ENOENT; 706 712 } ··· 2331 2323 2332 2324 err = nlmsg_parse(nlh, sizeof(struct ndmsg), tb, NDA_MAX, NULL, NULL); 2333 2325 if (!err) { 2334 - if (tb[NDA_IFINDEX]) 2326 + if (tb[NDA_IFINDEX]) { 2327 + if (nla_len(tb[NDA_IFINDEX]) != sizeof(u32)) 2328 + return -EINVAL; 2335 2329 filter_idx 
= nla_get_u32(tb[NDA_IFINDEX]); 2336 - 2337 - if (tb[NDA_MASTER]) 2330 + } 2331 + if (tb[NDA_MASTER]) { 2332 + if (nla_len(tb[NDA_MASTER]) != sizeof(u32)) 2333 + return -EINVAL; 2338 2334 filter_master_idx = nla_get_u32(tb[NDA_MASTER]); 2339 - 2335 + } 2340 2336 if (filter_idx || filter_master_idx) 2341 2337 flags |= NLM_F_DUMP_FILTERED; 2342 2338 }
+5 -7
net/dns_resolver/dns_key.c
··· 91 91 92 92 next_opt = memchr(opt, '#', end - opt) ?: end; 93 93 opt_len = next_opt - opt; 94 - if (!opt_len) { 95 - printk(KERN_WARNING 96 - "Empty option to dns_resolver key\n"); 94 + if (opt_len <= 0 || opt_len > 128) { 95 + pr_warn_ratelimited("Invalid option length (%d) for dns_resolver key\n", 96 + opt_len); 97 97 return -EINVAL; 98 98 } 99 99 ··· 127 127 } 128 128 129 129 bad_option_value: 130 - printk(KERN_WARNING 131 - "Option '%*.*s' to dns_resolver key:" 132 - " bad/missing value\n", 133 - opt_nlen, opt_nlen, opt); 130 + pr_warn_ratelimited("Option '%*.*s' to dns_resolver key: bad/missing value\n", 131 + opt_nlen, opt_nlen, opt); 134 132 return -EINVAL; 135 133 } while (opt = next_opt + 1, opt < end); 136 134 }
+5 -3
net/ipv4/ip_output.c
··· 1109 1109 struct ip_options_rcu *opt; 1110 1110 struct rtable *rt; 1111 1111 1112 + rt = *rtp; 1113 + if (unlikely(!rt)) 1114 + return -EFAULT; 1115 + 1112 1116 /* 1113 1117 * setup for corking. 1114 1118 */ ··· 1128 1124 cork->flags |= IPCORK_OPT; 1129 1125 cork->addr = ipc->addr; 1130 1126 } 1131 - rt = *rtp; 1132 - if (unlikely(!rt)) 1133 - return -EFAULT; 1127 + 1134 1128 /* 1135 1129 * We steal reference to this route, caller should not release it 1136 1130 */
+5 -3
net/ipv4/tcp.c
··· 2368 2368 INIT_LIST_HEAD(&tcp_sk(sk)->tsorted_sent_queue); 2369 2369 sk_mem_reclaim(sk); 2370 2370 tcp_clear_all_retrans_hints(tcp_sk(sk)); 2371 + tcp_sk(sk)->packets_out = 0; 2371 2372 } 2372 2373 2373 2374 int tcp_disconnect(struct sock *sk, int flags) ··· 2418 2417 icsk->icsk_backoff = 0; 2419 2418 tp->snd_cwnd = 2; 2420 2419 icsk->icsk_probes_out = 0; 2421 - tp->packets_out = 0; 2422 2420 tp->snd_ssthresh = TCP_INFINITE_SSTHRESH; 2423 2421 tp->snd_cwnd_cnt = 0; 2424 2422 tp->window_clamp = 0; ··· 2813 2813 #ifdef CONFIG_TCP_MD5SIG 2814 2814 case TCP_MD5SIG: 2815 2815 case TCP_MD5SIG_EXT: 2816 - /* Read the IP->Key mappings from userspace */ 2817 - err = tp->af_specific->md5_parse(sk, optname, optval, optlen); 2816 + if ((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) 2817 + err = tp->af_specific->md5_parse(sk, optname, optval, optlen); 2818 + else 2819 + err = -EINVAL; 2818 2820 break; 2819 2821 #endif 2820 2822 case TCP_USER_TIMEOUT:
+20 -20
net/l2tp/l2tp_core.c
··· 183 183 } 184 184 EXPORT_SYMBOL_GPL(l2tp_tunnel_get); 185 185 186 + struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth) 187 + { 188 + const struct l2tp_net *pn = l2tp_pernet(net); 189 + struct l2tp_tunnel *tunnel; 190 + int count = 0; 191 + 192 + rcu_read_lock_bh(); 193 + list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) { 194 + if (++count > nth) { 195 + l2tp_tunnel_inc_refcount(tunnel); 196 + rcu_read_unlock_bh(); 197 + return tunnel; 198 + } 199 + } 200 + rcu_read_unlock_bh(); 201 + 202 + return NULL; 203 + } 204 + EXPORT_SYMBOL_GPL(l2tp_tunnel_get_nth); 205 + 186 206 /* Lookup a session. A new reference is held on the returned session. */ 187 207 struct l2tp_session *l2tp_session_get(const struct net *net, 188 208 struct l2tp_tunnel *tunnel, ··· 354 334 return err; 355 335 } 356 336 EXPORT_SYMBOL_GPL(l2tp_session_register); 357 - 358 - struct l2tp_tunnel *l2tp_tunnel_find_nth(const struct net *net, int nth) 359 - { 360 - struct l2tp_net *pn = l2tp_pernet(net); 361 - struct l2tp_tunnel *tunnel; 362 - int count = 0; 363 - 364 - rcu_read_lock_bh(); 365 - list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) { 366 - if (++count > nth) { 367 - rcu_read_unlock_bh(); 368 - return tunnel; 369 - } 370 - } 371 - 372 - rcu_read_unlock_bh(); 373 - 374 - return NULL; 375 - } 376 - EXPORT_SYMBOL_GPL(l2tp_tunnel_find_nth); 377 337 378 338 /***************************************************************************** 379 339 * Receive data handling
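The new `l2tp_tunnel_get_nth()` above fixes the core bug of this series: the old `l2tp_tunnel_find_nth()` returned a pointer whose lifetime was only guaranteed inside the RCU read-side section, so the tunnel could be freed before the caller touched it. A userspace analogue of the corrected pattern, with a trivial lock stand-in for RCU and illustrative names throughout:

```c
#include <assert.h>
#include <stddef.h>

struct tunnel {
    int refcount;
    struct tunnel *next;
};

/* list_lock()/list_unlock() stand in for rcu_read_lock_bh() and
 * rcu_read_unlock_bh(); a real implementation would use RCU. */
static int list_locked;
static void list_lock(void)   { list_locked = 1; }
static void list_unlock(void) { list_locked = 0; }

static struct tunnel *tunnel_list;

/* Return the nth tunnel with a reference held, or NULL.  The key
 * point mirrored from the patch: the refcount is bumped *before*
 * the read side is dropped, while the node is still guaranteed
 * alive. */
static struct tunnel *tunnel_get_nth(int nth)
{
    struct tunnel *t;
    int count = 0;

    list_lock();
    for (t = tunnel_list; t; t = t->next) {
        if (++count > nth) {
            t->refcount++;      /* pin while still protected */
            list_unlock();
            return t;
        }
    }
    list_unlock();
    return NULL;
}
```

Callers must balance each successful lookup with a matching refcount drop, which is exactly what the l2tp_netlink and debugfs/ppp seq hunks below add.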
+2 -1
net/l2tp/l2tp_core.h
··· 212 212 } 213 213 214 214 struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id); 215 + struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth); 216 + 215 217 void l2tp_tunnel_free(struct l2tp_tunnel *tunnel); 216 218 217 219 struct l2tp_session *l2tp_session_get(const struct net *net, ··· 222 220 struct l2tp_session *l2tp_session_get_nth(struct l2tp_tunnel *tunnel, int nth); 223 221 struct l2tp_session *l2tp_session_get_by_ifname(const struct net *net, 224 222 const char *ifname); 225 - struct l2tp_tunnel *l2tp_tunnel_find_nth(const struct net *net, int nth); 226 223 227 224 int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, 228 225 u32 peer_tunnel_id, struct l2tp_tunnel_cfg *cfg,
+13 -2
net/l2tp/l2tp_debugfs.c
··· 47 47 48 48 static void l2tp_dfs_next_tunnel(struct l2tp_dfs_seq_data *pd) 49 49 { 50 - pd->tunnel = l2tp_tunnel_find_nth(pd->net, pd->tunnel_idx); 50 + /* Drop reference taken during previous invocation */ 51 + if (pd->tunnel) 52 + l2tp_tunnel_dec_refcount(pd->tunnel); 53 + 54 + pd->tunnel = l2tp_tunnel_get_nth(pd->net, pd->tunnel_idx); 51 55 pd->tunnel_idx++; 52 56 } 53 57 ··· 100 96 101 97 static void l2tp_dfs_seq_stop(struct seq_file *p, void *v) 102 98 { 103 - /* nothing to do */ 99 + struct l2tp_dfs_seq_data *pd = v; 100 + 101 + if (!pd || pd == SEQ_START_TOKEN) 102 + return; 103 + 104 + /* Drop reference taken by last invocation of l2tp_dfs_next_tunnel() */ 105 + if (pd->tunnel) 106 + l2tp_tunnel_dec_refcount(pd->tunnel); 104 107 } 105 108 106 109 static void l2tp_dfs_seq_tunnel_show(struct seq_file *m, void *v)
+8 -3
net/l2tp/l2tp_netlink.c
··· 487 487 struct net *net = sock_net(skb->sk); 488 488 489 489 for (;;) { 490 - tunnel = l2tp_tunnel_find_nth(net, ti); 490 + tunnel = l2tp_tunnel_get_nth(net, ti); 491 491 if (tunnel == NULL) 492 492 goto out; 493 493 494 494 if (l2tp_nl_tunnel_send(skb, NETLINK_CB(cb->skb).portid, 495 495 cb->nlh->nlmsg_seq, NLM_F_MULTI, 496 - tunnel, L2TP_CMD_TUNNEL_GET) < 0) 496 + tunnel, L2TP_CMD_TUNNEL_GET) < 0) { 497 + l2tp_tunnel_dec_refcount(tunnel); 497 498 goto out; 499 + } 500 + l2tp_tunnel_dec_refcount(tunnel); 498 501 499 502 ti++; 500 503 } ··· 851 848 852 849 for (;;) { 853 850 if (tunnel == NULL) { 854 - tunnel = l2tp_tunnel_find_nth(net, ti); 851 + tunnel = l2tp_tunnel_get_nth(net, ti); 855 852 if (tunnel == NULL) 856 853 goto out; 857 854 } ··· 859 856 session = l2tp_session_get_nth(tunnel, si); 860 857 if (session == NULL) { 861 858 ti++; 859 + l2tp_tunnel_dec_refcount(tunnel); 862 860 tunnel = NULL; 863 861 si = 0; 864 862 continue; ··· 869 865 cb->nlh->nlmsg_seq, NLM_F_MULTI, 870 866 session, L2TP_CMD_SESSION_GET) < 0) { 871 867 l2tp_session_dec_refcount(session); 868 + l2tp_tunnel_dec_refcount(tunnel); 872 869 break; 873 870 } 874 871 l2tp_session_dec_refcount(session);
+17 -7
net/l2tp/l2tp_ppp.c
··· 1551 1551 1552 1552 static void pppol2tp_next_tunnel(struct net *net, struct pppol2tp_seq_data *pd) 1553 1553 { 1554 + /* Drop reference taken during previous invocation */ 1555 + if (pd->tunnel) 1556 + l2tp_tunnel_dec_refcount(pd->tunnel); 1557 + 1554 1558 for (;;) { 1555 - pd->tunnel = l2tp_tunnel_find_nth(net, pd->tunnel_idx); 1559 + pd->tunnel = l2tp_tunnel_get_nth(net, pd->tunnel_idx); 1556 1560 pd->tunnel_idx++; 1557 1561 1558 - if (pd->tunnel == NULL) 1559 - break; 1562 + /* Only accept L2TPv2 tunnels */ 1563 + if (!pd->tunnel || pd->tunnel->version == 2) 1564 + return; 1560 1565 1561 - /* Ignore L2TPv3 tunnels */ 1562 - if (pd->tunnel->version < 3) 1563 - break; 1566 + l2tp_tunnel_dec_refcount(pd->tunnel); 1564 1567 } 1565 1568 } 1566 1569 ··· 1612 1609 1613 1610 static void pppol2tp_seq_stop(struct seq_file *p, void *v) 1614 1611 { 1615 - /* nothing to do */ 1612 + struct pppol2tp_seq_data *pd = v; 1613 + 1614 + if (!pd || pd == SEQ_START_TOKEN) 1615 + return; 1616 + 1617 + /* Drop reference taken by last invocation of pppol2tp_next_tunnel() */ 1618 + if (pd->tunnel) 1619 + l2tp_tunnel_dec_refcount(pd->tunnel); 1616 1620 } 1617 1621 1618 1622 static void pppol2tp_seq_tunnel_show(struct seq_file *m, void *v)
+7
net/llc/af_llc.c
··· 189 189 { 190 190 struct sock *sk = sock->sk; 191 191 struct llc_sock *llc; 192 + struct llc_sap *sap; 192 193 193 194 if (unlikely(sk == NULL)) 194 195 goto out; ··· 200 199 llc->laddr.lsap, llc->daddr.lsap); 201 200 if (!llc_send_disc(sk)) 202 201 llc_ui_wait_for_disc(sk, sk->sk_rcvtimeo); 202 + sap = llc->sap; 203 + /* Hold this for release_sock(), so that llc_backlog_rcv() could still 204 + * use it. 205 + */ 206 + llc_sap_hold(sap); 203 207 if (!sock_flag(sk, SOCK_ZAPPED)) 204 208 llc_sap_remove_socket(llc->sap, sk); 205 209 release_sock(sk); 210 + llc_sap_put(sap); 206 211 if (llc->dev) 207 212 dev_put(llc->dev); 208 213 sock_put(sk);
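The llc hunk above pins the SAP with an extra reference across `release_sock()`, because releasing the lock flushes backlog packets whose handler (`llc_backlog_rcv()`) still dereferences it. A deliberately simplified, single-threaded sketch of why the hold/put pair matters; every name here is a stand-in for the kernel object, and `freed` stands in for the memory actually being returned:

```c
#include <assert.h>

struct sap {
    int refcount;
    int freed;      /* stands in for kfree() having run */
};

static void sap_hold(struct sap *s) { s->refcount++; }

static void sap_put(struct sap *s)
{
    if (--s->refcount == 0)
        s->freed = 1;
}

/* Models the backlog handler that runs while the lock is released
 * and still dereferences the SAP; -1 models a use-after-free. */
static int backlog_rcv(struct sap *s)
{
    return s->freed ? -1 : 0;
}

static int release_socket(struct sap *s)
{
    int rc;

    sap_hold(s);         /* the fix: pin before the unlock window  */
    sap_put(s);          /* models llc_sap_remove_socket() dropping
                          * what may be the SAP's last reference    */
    rc = backlog_rcv(s); /* models release_sock() flushing backlog */
    sap_put(s);          /* matching put once the window closes    */
    return rc;
}
```

Without the initial `sap_hold()`, the remove step would drop the last reference and the backlog step would touch freed memory.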
+14 -9
net/packet/af_packet.c
··· 3008 3008 3009 3009 packet_flush_mclist(sk); 3010 3010 3011 + lock_sock(sk); 3011 3012 if (po->rx_ring.pg_vec) { 3012 3013 memset(&req_u, 0, sizeof(req_u)); 3013 3014 packet_set_ring(sk, &req_u, 1, 0); ··· 3018 3017 memset(&req_u, 0, sizeof(req_u)); 3019 3018 packet_set_ring(sk, &req_u, 1, 1); 3020 3019 } 3020 + release_sock(sk); 3021 3021 3022 3022 f = fanout_release(sk); 3023 3023 ··· 3645 3643 union tpacket_req_u req_u; 3646 3644 int len; 3647 3645 3646 + lock_sock(sk); 3648 3647 switch (po->tp_version) { 3649 3648 case TPACKET_V1: 3650 3649 case TPACKET_V2: ··· 3656 3653 len = sizeof(req_u.req3); 3657 3654 break; 3658 3655 } 3659 - if (optlen < len) 3660 - return -EINVAL; 3661 - if (copy_from_user(&req_u.req, optval, len)) 3662 - return -EFAULT; 3663 - return packet_set_ring(sk, &req_u, 0, 3664 - optname == PACKET_TX_RING); 3656 + if (optlen < len) { 3657 + ret = -EINVAL; 3658 + } else { 3659 + if (copy_from_user(&req_u.req, optval, len)) 3660 + ret = -EFAULT; 3661 + else 3662 + ret = packet_set_ring(sk, &req_u, 0, 3663 + optname == PACKET_TX_RING); 3664 + } 3665 + release_sock(sk); 3666 + return ret; 3665 3667 } 3666 3668 case PACKET_COPY_THRESH: 3667 3669 { ··· 4216 4208 /* Added to avoid minimal code churn */ 4217 4209 struct tpacket_req *req = &req_u->req; 4218 4210 4219 - lock_sock(sk); 4220 - 4221 4211 rb = tx_ring ? &po->tx_ring : &po->rx_ring; 4222 4212 rb_queue = tx_ring ? &sk->sk_write_queue : &sk->sk_receive_queue; 4223 4213 ··· 4353 4347 if (pg_vec) 4354 4348 free_pg_vec(pg_vec, order, req->tp_block_nr); 4355 4349 out: 4356 - release_sock(sk); 4357 4350 return err; 4358 4351 } 4359 4352
+1
net/qrtr/qrtr.c
··· 1135 1135 1136 1136 MODULE_DESCRIPTION("Qualcomm IPC-router driver"); 1137 1137 MODULE_LICENSE("GPL v2"); 1138 + MODULE_ALIAS_NETPROTO(PF_QIPCRTR);
+37 -37
net/sctp/ipv6.c
··· 556 556 addr->v6.sin6_scope_id = 0; 557 557 } 558 558 559 + static int __sctp_v6_cmp_addr(const union sctp_addr *addr1, 560 + const union sctp_addr *addr2) 561 + { 562 + if (addr1->sa.sa_family != addr2->sa.sa_family) { 563 + if (addr1->sa.sa_family == AF_INET && 564 + addr2->sa.sa_family == AF_INET6 && 565 + ipv6_addr_v4mapped(&addr2->v6.sin6_addr) && 566 + addr2->v6.sin6_addr.s6_addr32[3] == 567 + addr1->v4.sin_addr.s_addr) 568 + return 1; 569 + 570 + if (addr2->sa.sa_family == AF_INET && 571 + addr1->sa.sa_family == AF_INET6 && 572 + ipv6_addr_v4mapped(&addr1->v6.sin6_addr) && 573 + addr1->v6.sin6_addr.s6_addr32[3] == 574 + addr2->v4.sin_addr.s_addr) 575 + return 1; 576 + 577 + return 0; 578 + } 579 + 580 + if (!ipv6_addr_equal(&addr1->v6.sin6_addr, &addr2->v6.sin6_addr)) 581 + return 0; 582 + 583 + /* If this is a linklocal address, compare the scope_id. */ 584 + if ((ipv6_addr_type(&addr1->v6.sin6_addr) & IPV6_ADDR_LINKLOCAL) && 585 + addr1->v6.sin6_scope_id && addr2->v6.sin6_scope_id && 586 + addr1->v6.sin6_scope_id != addr2->v6.sin6_scope_id) 587 + return 0; 588 + 589 + return 1; 590 + } 591 + 559 592 /* Compare addresses exactly. 560 593 * v4-mapped-v6 is also in consideration. 
561 594 */ 562 595 static int sctp_v6_cmp_addr(const union sctp_addr *addr1, 563 596 const union sctp_addr *addr2) 564 597 { 565 - if (addr1->sa.sa_family != addr2->sa.sa_family) { 566 - if (addr1->sa.sa_family == AF_INET && 567 - addr2->sa.sa_family == AF_INET6 && 568 - ipv6_addr_v4mapped(&addr2->v6.sin6_addr)) { 569 - if (addr2->v6.sin6_port == addr1->v4.sin_port && 570 - addr2->v6.sin6_addr.s6_addr32[3] == 571 - addr1->v4.sin_addr.s_addr) 572 - return 1; 573 - } 574 - if (addr2->sa.sa_family == AF_INET && 575 - addr1->sa.sa_family == AF_INET6 && 576 - ipv6_addr_v4mapped(&addr1->v6.sin6_addr)) { 577 - if (addr1->v6.sin6_port == addr2->v4.sin_port && 578 - addr1->v6.sin6_addr.s6_addr32[3] == 579 - addr2->v4.sin_addr.s_addr) 580 - return 1; 581 - } 582 - return 0; 583 - } 584 - if (addr1->v6.sin6_port != addr2->v6.sin6_port) 585 - return 0; 586 - if (!ipv6_addr_equal(&addr1->v6.sin6_addr, &addr2->v6.sin6_addr)) 587 - return 0; 588 - /* If this is a linklocal address, compare the scope_id. */ 589 - if (ipv6_addr_type(&addr1->v6.sin6_addr) & IPV6_ADDR_LINKLOCAL) { 590 - if (addr1->v6.sin6_scope_id && addr2->v6.sin6_scope_id && 591 - (addr1->v6.sin6_scope_id != addr2->v6.sin6_scope_id)) { 592 - return 0; 593 - } 594 - } 595 - 596 - return 1; 598 + return __sctp_v6_cmp_addr(addr1, addr2) && 599 + addr1->v6.sin6_port == addr2->v6.sin6_port; 597 600 } 598 601 599 602 /* Initialize addr struct to INADDR_ANY. 
*/ ··· 878 875 const union sctp_addr *addr2, 879 876 struct sctp_sock *opt) 880 877 { 881 - struct sctp_af *af1, *af2; 882 878 struct sock *sk = sctp_opt2sk(opt); 879 + struct sctp_af *af1, *af2; 883 880 884 881 af1 = sctp_get_af_specific(addr1->sa.sa_family); 885 882 af2 = sctp_get_af_specific(addr2->sa.sa_family); ··· 895 892 if (sctp_is_any(sk, addr1) || sctp_is_any(sk, addr2)) 896 893 return 1; 897 894 898 - if (addr1->sa.sa_family != addr2->sa.sa_family) 899 - return 0; 900 - 901 - return af1->cmp_addr(addr1, addr2); 895 + return __sctp_v6_cmp_addr(addr1, addr2); 902 896 } 903 897 904 898 /* Verify that the provided sockaddr looks bindable. Common verification,
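The refactored `__sctp_v6_cmp_addr()` above drops the port comparison so `sctp_inet6_cmp_addr()` can reuse it: two transport addresses may match even when one is IPv4 and the other is its v4-mapped IPv6 form (`::ffff:a.b.c.d`). A userspace sketch of just that cross-family equality (`v4_equals_v4mapped` is an illustrative helper, not kernel code):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <string.h>

/* An IPv4 address equals a v4-mapped IPv6 address when the v6 side
 * carries the ::ffff:0:0/96 prefix and its low 32 bits match.  Port
 * comparison is intentionally left to the caller -- the point of
 * the patch is that the bind-address comparison must not do it. */
static int v4_equals_v4mapped(const struct in_addr *v4,
                              const struct in6_addr *v6)
{
    static const unsigned char mapped_prefix[12] =
        { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff };

    if (memcmp(v6->s6_addr, mapped_prefix, sizeof(mapped_prefix)))
        return 0;
    return memcmp(v6->s6_addr + 12, &v4->s_addr, 4) == 0;
}
```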
+4 -6
net/smc/af_smc.c
··· 1259 1259 rc = smc_close_shutdown_write(smc); 1260 1260 break; 1261 1261 case SHUT_RD: 1262 - if (sk->sk_state == SMC_LISTEN) 1263 - rc = smc_close_active(smc); 1264 - else 1265 - rc = 0; 1266 - /* nothing more to do because peer is not involved */ 1262 + rc = 0; 1263 + /* nothing more to do because peer is not involved */ 1267 1264 break; 1268 1265 } 1269 - rc1 = kernel_sock_shutdown(smc->clcsock, how); 1266 + if (smc->clcsock) 1267 + rc1 = kernel_sock_shutdown(smc->clcsock, how); 1270 1268 /* map sock_shutdown_cmd constants to sk_shutdown value range */ 1271 1269 sk->sk_shutdown |= how + 1; 1272 1270
+3 -4
net/strparser/strparser.c
··· 296 296 strp_start_timer(strp, timeo); 297 297 } 298 298 299 + stm->accum_len += cand_len; 299 300 strp->need_bytes = stm->strp.full_len - 300 301 stm->accum_len; 301 - stm->accum_len += cand_len; 302 302 stm->early_eaten = cand_len; 303 303 STRP_STATS_ADD(strp->stats.bytes, cand_len); 304 304 desc->count = 0; /* Stop reading socket */ ··· 321 321 /* Hurray, we have a new message! */ 322 322 cancel_delayed_work(&strp->msg_timer_work); 323 323 strp->skb_head = NULL; 324 + strp->need_bytes = 0; 324 325 STRP_STATS_INCR(strp->stats.msgs); 325 326 326 327 /* Give skb to upper layer */ ··· 411 410 return; 412 411 413 412 if (strp->need_bytes) { 414 - if (strp_peek_len(strp) >= strp->need_bytes) 415 - strp->need_bytes = 0; 416 - else 413 + if (strp_peek_len(strp) < strp->need_bytes) 417 414 return; 418 415 } 419 416
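The strparser hunk above is a pure statement-reordering fix: `need_bytes` was derived from `accum_len` *before* `cand_len` was added to it, so it overstated the missing data by `cand_len` bytes. A minimal sketch of the corrected ordering (struct and function names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct msg_state {
    size_t full_len;    /* total message length from the header   */
    size_t accum_len;   /* bytes gathered so far                  */
    size_t need_bytes;  /* bytes still required for the full msg  */
};

/* Account for a partial read: update the accumulated length first,
 * then derive how much is still needed from the updated value. */
static void account_partial_read(struct msg_state *m, size_t cand_len)
{
    m->accum_len += cand_len;                    /* update first */
    m->need_bytes = m->full_len - m->accum_len;  /* then derive  */
}
```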
+1 -1
net/tipc/monitor.c
··· 777 777 778 778 ret = tipc_bearer_get_name(net, bearer_name, bearer_id); 779 779 if (ret || !mon) 780 - return -EINVAL; 780 + return 0; 781 781 782 782 hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_family, 783 783 NLM_F_MULTI, TIPC_NL_MON_GET);
+21 -13
net/tipc/name_table.c
··· 241 241 static struct publication *tipc_service_remove_publ(struct net *net, 242 242 struct tipc_service *sc, 243 243 u32 lower, u32 upper, 244 - u32 node, u32 key) 244 + u32 node, u32 key, 245 + struct service_range **rng) 245 246 { 246 247 struct tipc_subscription *sub, *tmp; 247 248 struct service_range *sr; ··· 276 275 277 276 list_del(&p->all_publ); 278 277 list_del(&p->local_publ); 279 - 280 - /* Remove service range item if this was its last publication */ 281 - if (list_empty(&sr->all_publ)) { 278 + if (list_empty(&sr->all_publ)) 282 279 last = true; 283 - rb_erase(&sr->tree_node, &sc->ranges); 284 - kfree(sr); 285 - } 286 280 287 281 /* Notify any waiting subscriptions */ 288 282 list_for_each_entry_safe(sub, tmp, &sc->subscriptions, service_list) { 289 283 tipc_sub_report_overlap(sub, p->lower, p->upper, TIPC_WITHDRAWN, 290 284 p->port, p->node, p->scope, last); 291 285 } 286 + *rng = sr; 292 287 return p; 293 288 } 294 289 ··· 376 379 u32 node, u32 key) 377 380 { 378 381 struct tipc_service *sc = tipc_service_find(net, type); 382 + struct service_range *sr = NULL; 379 383 struct publication *p = NULL; 380 384 381 385 if (!sc) 382 386 return NULL; 383 387 384 388 spin_lock_bh(&sc->lock); 385 - p = tipc_service_remove_publ(net, sc, lower, upper, node, key); 389 + p = tipc_service_remove_publ(net, sc, lower, upper, node, key, &sr); 390 + 391 + /* Remove service range item if this was its last publication */ 392 + if (sr && list_empty(&sr->all_publ)) { 393 + rb_erase(&sr->tree_node, &sc->ranges); 394 + kfree(sr); 395 + } 386 396 387 397 /* Delete service item if this no more publications and subscriptions */ 388 398 if (RB_EMPTY_ROOT(&sc->ranges) && list_empty(&sc->subscriptions)) { ··· 669 665 /** 670 666 * tipc_nametbl_subscribe - add a subscription object to the name table 671 667 */ 672 - void tipc_nametbl_subscribe(struct tipc_subscription *sub) 668 + bool tipc_nametbl_subscribe(struct tipc_subscription *sub) 673 669 { 674 670 struct name_table *nt 
= tipc_name_table(sub->net); 675 671 struct tipc_net *tn = tipc_net(sub->net); 676 672 struct tipc_subscr *s = &sub->evt.s; 677 673 u32 type = tipc_sub_read(s, seq.type); 678 674 struct tipc_service *sc; 675 + bool res = true; 679 676 680 677 spin_lock_bh(&tn->nametbl_lock); 681 678 sc = tipc_service_find(sub->net, type); ··· 690 685 pr_warn("Failed to subscribe for {%u,%u,%u}\n", type, 691 686 tipc_sub_read(s, seq.lower), 692 687 tipc_sub_read(s, seq.upper)); 688 + res = false; 693 689 } 694 690 spin_unlock_bh(&tn->nametbl_lock); 691 + return res; 695 692 } 696 693 697 694 /** ··· 751 744 static void tipc_service_delete(struct net *net, struct tipc_service *sc) 752 745 { 753 746 struct service_range *sr, *tmpr; 754 - struct publication *p, *tmpb; 747 + struct publication *p, *tmp; 755 748 756 749 spin_lock_bh(&sc->lock); 757 750 rbtree_postorder_for_each_entry_safe(sr, tmpr, &sc->ranges, tree_node) { 758 - list_for_each_entry_safe(p, tmpb, 759 - &sr->all_publ, all_publ) { 751 + list_for_each_entry_safe(p, tmp, &sr->all_publ, all_publ) { 760 752 tipc_service_remove_publ(net, sc, p->lower, p->upper, 761 - p->node, p->key); 753 + p->node, p->key, &sr); 762 754 kfree_rcu(p, rcu); 763 755 } 756 + rb_erase(&sr->tree_node, &sc->ranges); 757 + kfree(sr); 764 758 } 765 759 hlist_del_init_rcu(&sc->service_list); 766 760 spin_unlock_bh(&sc->lock);
+1 -1
net/tipc/name_table.h
··· 126 126 struct publication *tipc_nametbl_remove_publ(struct net *net, u32 type, 127 127 u32 lower, u32 upper, 128 128 u32 node, u32 key); 129 - void tipc_nametbl_subscribe(struct tipc_subscription *s); 129 + bool tipc_nametbl_subscribe(struct tipc_subscription *s); 130 130 void tipc_nametbl_unsubscribe(struct tipc_subscription *s); 131 131 int tipc_nametbl_init(struct net *net); 132 132 void tipc_nametbl_stop(struct net *net);
+2
net/tipc/net.c
··· 252 252 u64 *w0 = (u64 *)&node_id[0]; 253 253 u64 *w1 = (u64 *)&node_id[8]; 254 254 255 + if (!attrs[TIPC_NLA_NET_NODEID_W1]) 256 + return -EINVAL; 255 257 *w0 = nla_get_u64(attrs[TIPC_NLA_NET_NODEID]); 256 258 *w1 = nla_get_u64(attrs[TIPC_NLA_NET_NODEID_W1]); 257 259 tipc_net_init(net, node_id, 0);
+4 -1
net/tipc/netlink.c
··· 79 79 80 80 const struct nla_policy tipc_nl_net_policy[TIPC_NLA_NET_MAX + 1] = { 81 81 [TIPC_NLA_NET_UNSPEC] = { .type = NLA_UNSPEC }, 82 - [TIPC_NLA_NET_ID] = { .type = NLA_U32 } 82 + [TIPC_NLA_NET_ID] = { .type = NLA_U32 }, 83 + [TIPC_NLA_NET_ADDR] = { .type = NLA_U32 }, 84 + [TIPC_NLA_NET_NODEID] = { .type = NLA_U64 }, 85 + [TIPC_NLA_NET_NODEID_W1] = { .type = NLA_U64 }, 83 86 }; 84 87 85 88 const struct nla_policy tipc_nl_link_policy[TIPC_NLA_LINK_MAX + 1] = {
+4 -7
net/tipc/node.c
··· 2232 2232 struct net *net = sock_net(skb->sk); 2233 2233 u32 prev_bearer = cb->args[0]; 2234 2234 struct tipc_nl_msg msg; 2235 + int bearer_id; 2235 2236 int err; 2236 - int i; 2237 2237 2238 2238 if (prev_bearer == MAX_BEARERS) 2239 2239 return 0; ··· 2243 2243 msg.seq = cb->nlh->nlmsg_seq; 2244 2244 2245 2245 rtnl_lock(); 2246 - for (i = prev_bearer; i < MAX_BEARERS; i++) { 2247 - prev_bearer = i; 2246 + for (bearer_id = prev_bearer; bearer_id < MAX_BEARERS; bearer_id++) { 2248 2247 err = __tipc_nl_add_monitor(net, &msg, prev_bearer); 2249 2248 if (err) 2250 - goto out; 2249 + break; 2251 2250 } 2252 - 2253 - out: 2254 2251 rtnl_unlock(); 2255 - cb->args[0] = prev_bearer; 2252 + cb->args[0] = bearer_id; 2256 2253 2257 2254 return skb->len; 2258 2255 }
+3 -1
net/tipc/socket.c
··· 1278 1278 struct tipc_msg *hdr = &tsk->phdr; 1279 1279 struct tipc_name_seq *seq; 1280 1280 struct sk_buff_head pkts; 1281 - u32 dnode, dport; 1281 + u32 dport, dnode = 0; 1282 1282 u32 type, inst; 1283 1283 int mtu, rc; 1284 1284 ··· 1348 1348 msg_set_destnode(hdr, dnode); 1349 1349 msg_set_destport(hdr, dest->addr.id.ref); 1350 1350 msg_set_hdr_sz(hdr, BASIC_H_SIZE); 1351 + } else { 1352 + return -EINVAL; 1351 1353 } 1352 1354 1353 1355 /* Block or return if destination link is congested */
+4 -1
net/tipc/subscr.c
··· 153 153 memcpy(&sub->evt.s, s, sizeof(*s)); 154 154 spin_lock_init(&sub->lock); 155 155 kref_init(&sub->kref); 156 - tipc_nametbl_subscribe(sub); 156 + if (!tipc_nametbl_subscribe(sub)) { 157 + kfree(sub); 158 + return NULL; 159 + } 157 160 timer_setup(&sub->timer, tipc_sub_timeout, 0); 158 161 timeout = tipc_sub_read(&sub->evt.s, timeout); 159 162 if (timeout != TIPC_WAIT_FOREVER)
+9 -1
net/tls/tls_sw.c
··· 41 41 #include <net/strparser.h> 42 42 #include <net/tls.h> 43 43 44 + #define MAX_IV_SIZE TLS_CIPHER_AES_GCM_128_IV_SIZE 45 + 44 46 static int tls_do_decryption(struct sock *sk, 45 47 struct scatterlist *sgin, 46 48 struct scatterlist *sgout, ··· 675 673 { 676 674 struct tls_context *tls_ctx = tls_get_ctx(sk); 677 675 struct tls_sw_context *ctx = tls_sw_ctx(tls_ctx); 678 - char iv[TLS_CIPHER_AES_GCM_128_SALT_SIZE + tls_ctx->rx.iv_size]; 676 + char iv[TLS_CIPHER_AES_GCM_128_SALT_SIZE + MAX_IV_SIZE]; 679 677 struct scatterlist sgin_arr[MAX_SKB_FRAGS + 2]; 680 678 struct scatterlist *sgin = &sgin_arr[0]; 681 679 struct strp_msg *rxm = strp_msg(skb); ··· 1092 1090 break; 1093 1091 } 1094 1092 default: 1093 + rc = -EINVAL; 1094 + goto free_priv; 1095 + } 1096 + 1097 + /* Sanity-check the IV size for stack allocations. */ 1098 + if (iv_size > MAX_IV_SIZE) { 1095 1099 rc = -EINVAL; 1096 1100 goto free_priv; 1097 1101 }
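The tls_sw.c hunk above removes a variable-length stack array: the IV buffer was sized by the runtime field `tls_ctx->rx.iv_size`, and is now sized by a compile-time `MAX_IV_SIZE` with a sanity check rejecting larger configurations at setup. A userspace sketch of that pattern (`build_iv` is an illustrative helper; the 4-byte salt and 8-byte IV match the AES-GCM-128 constants used in the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_IV_SIZE 8   /* TLS_CIPHER_AES_GCM_128_IV_SIZE   */
#define SALT_SIZE   4   /* TLS_CIPHER_AES_GCM_128_SALT_SIZE */

/* Assemble salt||iv into a fixed-size buffer.  Because the buffer
 * size is now a compile-time constant rather than a runtime field,
 * any iv_size above the maximum must be rejected explicitly --
 * this mirrors the sanity check the patch adds at cipher setup. */
static int build_iv(unsigned char out[SALT_SIZE + MAX_IV_SIZE],
                    const unsigned char *salt,
                    const unsigned char *iv, size_t iv_size)
{
    if (iv_size > MAX_IV_SIZE)
        return -1;                  /* oversized IV: reject */
    memcpy(out, salt, SALT_SIZE);
    memcpy(out + SALT_SIZE, iv, iv_size);
    return 0;
}
```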
+6
net/vmw_vsock/af_vsock.c
··· 2018 2018 } 2019 2019 EXPORT_SYMBOL_GPL(vsock_core_get_transport); 2020 2020 2021 + static void __exit vsock_exit(void) 2022 + { 2023 + /* Do nothing. This function makes this module removable. */ 2024 + } 2025 + 2021 2026 module_init(vsock_init_tables); 2027 + module_exit(vsock_exit); 2022 2028 2023 2029 MODULE_AUTHOR("VMware, Inc."); 2024 2030 MODULE_DESCRIPTION("VMware Virtual Socket Family");
+1 -1
tools/testing/selftests/net/Makefile
··· 5 5 CFLAGS += -I../../../../usr/include/ 6 6 7 7 TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh rtnetlink.sh 8 - TEST_PROGS += fib_tests.sh fib-onlink-tests.sh pmtu.sh 8 + TEST_PROGS += fib_tests.sh fib-onlink-tests.sh in_netns.sh pmtu.sh 9 9 TEST_GEN_FILES = socket 10 10 TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy 11 11 TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa