Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

1) Don't insert ESP trailer twice in IPSEC code, from Huy Nguyen.

2) The default crypto algorithm selection in Kconfig for IPSEC is out
of touch with modern reality, fix this up. From Eric Biggers.

3) bpftool is missing an entry for BPF_MAP_TYPE_RINGBUF, from Andrii
Nakryiko.

4) Missing init of ->frame_sz in xdp_convert_zc_to_xdp_frame(), from
Hangbin Liu.

5) Adjust packet alignment handling in ax88179_178a driver to match
what the hardware actually does. From Jeremy Kerr.

6) register_netdevice can leak in the case one of the notifiers fails,
from Yang Yingliang.

7) Use after free in ip_tunnel_lookup(), from Taehee Yoo.

8) VLAN checks in sja1105 DSA driver need adjustments, from Vladimir
Oltean.

9) tg3 driver can sleep forever when we get enough EEH errors, fix from
David Christensen.

10) Missing {READ,WRITE}_ONCE() annotations in various Intel ethernet
drivers, from Ciara Loftus.

11) Fix scanning loop break condition in of_mdiobus_register(), from
Florian Fainelli.

12) MTU limit is incorrect in ibmveth driver, from Thomas Falcon.

13) Endianness fix in mlxsw, from Ido Schimmel.

14) Use after free in smsc95xx usbnet driver, from Tuomas Tynkkynen.

15) Missing bridge mrp configuration validation, from Horatiu Vultur.

16) Fix circular netns references in wireguard, from Jason A. Donenfeld.

17) PTP initialization on recovery is not done properly in qed driver,
from Alexander Lobakin.

18) Endian conversion of L4 ports in filters of cxgb4 driver is wrong,
from Rahul Lakkireddy.

19) Don't clear bound device TX queue of socket prematurely, otherwise we
get problems with ktls hw offloading, from Tariq Toukan.

20) ipset can do atomics on unaligned memory, fix from Russell King.

21) Align ethernet addresses properly in bridging code, from Thomas
Martitz.

22) Don't advertise ipv4 addresses on SCTP sockets having ipv6only set,
from Marcelo Ricardo Leitner.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (149 commits)
rds: transport module should be auto loaded when transport is set
sch_cake: fix a few style nits
sch_cake: don't call diffserv parsing code when it is not needed
sch_cake: don't try to reallocate or unshare skb unconditionally
ethtool: fix error handling in linkstate_prepare_data()
wil6210: account for napi_gro_receive never returning GRO_DROP
hns: do not cast return value of napi_gro_receive to null
socionext: account for napi_gro_receive never returning GRO_DROP
wireguard: receive: account for napi_gro_receive never returning GRO_DROP
vxlan: fix last fdb index during dump of fdb with nhid
sctp: Don't advertise IPv4 addresses if ipv6only is set on the socket
tc-testing: avoid action cookies with odd length.
bpf: tcp: bpf_cubic: fix spurious HYSTART_DELAY exit upon drop in min RTT
tcp_cubic: fix spurious HYSTART_DELAY exit upon drop in min RTT
net: dsa: sja1105: fix tc-gate schedule with single element
net: dsa: sja1105: recalculate gating subschedule after deleting tc-gate rules
net: dsa: sja1105: unconditionally free old gating config
net: dsa: sja1105: move sja1105_compose_gating_subschedule at the top
net: macb: free resources on failure path of at91ether_open()
net: macb: call pm_runtime_put_sync on failure path
...

+1973 -1156
+14
Documentation/bpf/prog_cgroup_sockopt.rst
··· 86 86 *not* the original input ``setsockopt`` arguments. The potentially 87 87 modified values will be then passed down to the kernel. 88 88 89 + Large optval 90 + ============ 91 + When the ``optval`` is greater than the ``PAGE_SIZE``, the BPF program 92 + can access only the first ``PAGE_SIZE`` of that data. So it has two options: 93 + 94 + * Set ``optlen`` to zero, which indicates that the kernel should 95 + use the original buffer from the userspace. Any modifications 96 + done by the BPF program to the ``optval`` are ignored. 97 + * Set ``optlen`` to the value less than ``PAGE_SIZE``, which 98 + indicates that the kernel should use BPF's trimmed ``optval``. 99 + 100 + When the BPF program returns with the ``optlen`` greater than 101 + ``PAGE_SIZE``, the userspace will receive ``EFAULT`` errno. 102 + 89 103 Example 90 104 ======= 91 105
+2 -2
Documentation/networking/ieee802154.rst
··· 30 30 31 31 The address family, socket addresses etc. are defined in the 32 32 include/net/af_ieee802154.h header or in the special header 33 - in the userspace package (see either http://wpan.cakelab.org/ or the 34 - git tree at https://github.com/linux-wpan/wpan-tools). 33 + in the userspace package (see either https://linux-wpan.org/wpan-tools.html 34 + or the git tree at https://github.com/linux-wpan/wpan-tools). 35 35 36 36 6LoWPAN Linux implementation 37 37 ============================
+2 -2
MAINTAINERS
··· 8333 8333 M: Stefan Schmidt <stefan@datenfreihafen.org> 8334 8334 L: linux-wpan@vger.kernel.org 8335 8335 S: Maintained 8336 - W: http://wpan.cakelab.org/ 8336 + W: https://linux-wpan.org/ 8337 8337 T: git git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan.git 8338 8338 T: git git://git.kernel.org/pub/scm/linux/kernel/git/sschmidt/wpan-next.git 8339 8339 F: Documentation/networking/ieee802154.rst ··· 10808 10808 F: drivers/dma/mediatek/ 10809 10809 10810 10810 MEDIATEK ETHERNET DRIVER 10811 - M: Felix Fietkau <nbd@openwrt.org> 10811 + M: Felix Fietkau <nbd@nbd.name> 10812 10812 M: John Crispin <john@phrozen.org> 10813 10813 M: Sean Wang <sean.wang@mediatek.com> 10814 10814 M: Mark Lee <Mark-MC.Lee@mediatek.com>
+3
drivers/net/bareudp.c
··· 572 572 if (data[IFLA_BAREUDP_SRCPORT_MIN]) 573 573 conf->sport_min = nla_get_u16(data[IFLA_BAREUDP_SRCPORT_MIN]); 574 574 575 + if (data[IFLA_BAREUDP_MULTIPROTO_MODE]) 576 + conf->multi_proto_mode = true; 577 + 575 578 return 0; 576 579 } 577 580
+2
drivers/net/dsa/bcm_sf2.c
··· 1147 1147 set_bit(0, priv->cfp.used); 1148 1148 set_bit(0, priv->cfp.unique); 1149 1149 1150 + /* Balance of_node_put() done by of_find_node_by_name() */ 1151 + of_node_get(dn); 1150 1152 ports = of_find_node_by_name(dn, "ports"); 1151 1153 if (ports) { 1152 1154 bcm_sf2_identify_ports(priv, ports);
+173 -166
drivers/net/dsa/sja1105/sja1105_vl.c
··· 7 7 8 8 #define SJA1105_SIZE_VL_STATUS 8 9 9 10 + /* Insert into the global gate list, sorted by gate action time. */ 11 + static int sja1105_insert_gate_entry(struct sja1105_gating_config *gating_cfg, 12 + struct sja1105_rule *rule, 13 + u8 gate_state, s64 entry_time, 14 + struct netlink_ext_ack *extack) 15 + { 16 + struct sja1105_gate_entry *e; 17 + int rc; 18 + 19 + e = kzalloc(sizeof(*e), GFP_KERNEL); 20 + if (!e) 21 + return -ENOMEM; 22 + 23 + e->rule = rule; 24 + e->gate_state = gate_state; 25 + e->interval = entry_time; 26 + 27 + if (list_empty(&gating_cfg->entries)) { 28 + list_add(&e->list, &gating_cfg->entries); 29 + } else { 30 + struct sja1105_gate_entry *p; 31 + 32 + list_for_each_entry(p, &gating_cfg->entries, list) { 33 + if (p->interval == e->interval) { 34 + NL_SET_ERR_MSG_MOD(extack, 35 + "Gate conflict"); 36 + rc = -EBUSY; 37 + goto err; 38 + } 39 + 40 + if (e->interval < p->interval) 41 + break; 42 + } 43 + list_add(&e->list, p->list.prev); 44 + } 45 + 46 + gating_cfg->num_entries++; 47 + 48 + return 0; 49 + err: 50 + kfree(e); 51 + return rc; 52 + } 53 + 54 + /* The gate entries contain absolute times in their e->interval field. Convert 55 + * that to proper intervals (i.e. "0, 5, 10, 15" to "5, 5, 5, 5"). 
56 + */ 57 + static void 58 + sja1105_gating_cfg_time_to_interval(struct sja1105_gating_config *gating_cfg, 59 + u64 cycle_time) 60 + { 61 + struct sja1105_gate_entry *last_e; 62 + struct sja1105_gate_entry *e; 63 + struct list_head *prev; 64 + 65 + list_for_each_entry(e, &gating_cfg->entries, list) { 66 + struct sja1105_gate_entry *p; 67 + 68 + prev = e->list.prev; 69 + 70 + if (prev == &gating_cfg->entries) 71 + continue; 72 + 73 + p = list_entry(prev, struct sja1105_gate_entry, list); 74 + p->interval = e->interval - p->interval; 75 + } 76 + last_e = list_last_entry(&gating_cfg->entries, 77 + struct sja1105_gate_entry, list); 78 + last_e->interval = cycle_time - last_e->interval; 79 + } 80 + 81 + static void sja1105_free_gating_config(struct sja1105_gating_config *gating_cfg) 82 + { 83 + struct sja1105_gate_entry *e, *n; 84 + 85 + list_for_each_entry_safe(e, n, &gating_cfg->entries, list) { 86 + list_del(&e->list); 87 + kfree(e); 88 + } 89 + } 90 + 91 + static int sja1105_compose_gating_subschedule(struct sja1105_private *priv, 92 + struct netlink_ext_ack *extack) 93 + { 94 + struct sja1105_gating_config *gating_cfg = &priv->tas_data.gating_cfg; 95 + struct sja1105_rule *rule; 96 + s64 max_cycle_time = 0; 97 + s64 its_base_time = 0; 98 + int i, rc = 0; 99 + 100 + sja1105_free_gating_config(gating_cfg); 101 + 102 + list_for_each_entry(rule, &priv->flow_block.rules, list) { 103 + if (rule->type != SJA1105_RULE_VL) 104 + continue; 105 + if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED) 106 + continue; 107 + 108 + if (max_cycle_time < rule->vl.cycle_time) { 109 + max_cycle_time = rule->vl.cycle_time; 110 + its_base_time = rule->vl.base_time; 111 + } 112 + } 113 + 114 + if (!max_cycle_time) 115 + return 0; 116 + 117 + dev_dbg(priv->ds->dev, "max_cycle_time %lld its_base_time %lld\n", 118 + max_cycle_time, its_base_time); 119 + 120 + gating_cfg->base_time = its_base_time; 121 + gating_cfg->cycle_time = max_cycle_time; 122 + gating_cfg->num_entries = 0; 123 + 124 + 
list_for_each_entry(rule, &priv->flow_block.rules, list) { 125 + s64 time; 126 + s64 rbt; 127 + 128 + if (rule->type != SJA1105_RULE_VL) 129 + continue; 130 + if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED) 131 + continue; 132 + 133 + /* Calculate the difference between this gating schedule's 134 + * base time, and the base time of the gating schedule with the 135 + * longest cycle time. We call it the relative base time (rbt). 136 + */ 137 + rbt = future_base_time(rule->vl.base_time, rule->vl.cycle_time, 138 + its_base_time); 139 + rbt -= its_base_time; 140 + 141 + time = rbt; 142 + 143 + for (i = 0; i < rule->vl.num_entries; i++) { 144 + u8 gate_state = rule->vl.entries[i].gate_state; 145 + s64 entry_time = time; 146 + 147 + while (entry_time < max_cycle_time) { 148 + rc = sja1105_insert_gate_entry(gating_cfg, rule, 149 + gate_state, 150 + entry_time, 151 + extack); 152 + if (rc) 153 + goto err; 154 + 155 + entry_time += rule->vl.cycle_time; 156 + } 157 + time += rule->vl.entries[i].interval; 158 + } 159 + } 160 + 161 + sja1105_gating_cfg_time_to_interval(gating_cfg, max_cycle_time); 162 + 163 + return 0; 164 + err: 165 + sja1105_free_gating_config(gating_cfg); 166 + return rc; 167 + } 168 + 10 169 /* The switch flow classification core implements TTEthernet, which 'thinks' in 11 170 * terms of Virtual Links (VL), a concept borrowed from ARINC 664 part 7. 
12 171 * However it also has one other operating mode (VLLUPFORMAT=0) where it acts ··· 501 342 NL_SET_ERR_MSG_MOD(extack, 502 343 "Can only redirect based on DMAC"); 503 344 return -EOPNOTSUPP; 504 - } else if (key->type != SJA1105_KEY_VLAN_AWARE_VL) { 345 + } else if ((priv->vlan_state == SJA1105_VLAN_BEST_EFFORT || 346 + priv->vlan_state == SJA1105_VLAN_FILTERING_FULL) && 347 + key->type != SJA1105_KEY_VLAN_AWARE_VL) { 505 348 NL_SET_ERR_MSG_MOD(extack, 506 349 "Can only redirect based on {DMAC, VID, PCP}"); 507 350 return -EOPNOTSUPP; ··· 549 388 kfree(rule); 550 389 } 551 390 391 + rc = sja1105_compose_gating_subschedule(priv, extack); 392 + if (rc) 393 + return rc; 394 + 552 395 rc = sja1105_init_virtual_links(priv, extack); 553 396 if (rc) 554 397 return rc; 555 398 399 + rc = sja1105_init_scheduling(priv); 400 + if (rc < 0) 401 + return rc; 402 + 556 403 return sja1105_static_config_reload(priv, SJA1105_VIRTUAL_LINKS); 557 - } 558 - 559 - /* Insert into the global gate list, sorted by gate action time. 
*/ 560 - static int sja1105_insert_gate_entry(struct sja1105_gating_config *gating_cfg, 561 - struct sja1105_rule *rule, 562 - u8 gate_state, s64 entry_time, 563 - struct netlink_ext_ack *extack) 564 - { 565 - struct sja1105_gate_entry *e; 566 - int rc; 567 - 568 - e = kzalloc(sizeof(*e), GFP_KERNEL); 569 - if (!e) 570 - return -ENOMEM; 571 - 572 - e->rule = rule; 573 - e->gate_state = gate_state; 574 - e->interval = entry_time; 575 - 576 - if (list_empty(&gating_cfg->entries)) { 577 - list_add(&e->list, &gating_cfg->entries); 578 - } else { 579 - struct sja1105_gate_entry *p; 580 - 581 - list_for_each_entry(p, &gating_cfg->entries, list) { 582 - if (p->interval == e->interval) { 583 - NL_SET_ERR_MSG_MOD(extack, 584 - "Gate conflict"); 585 - rc = -EBUSY; 586 - goto err; 587 - } 588 - 589 - if (e->interval < p->interval) 590 - break; 591 - } 592 - list_add(&e->list, p->list.prev); 593 - } 594 - 595 - gating_cfg->num_entries++; 596 - 597 - return 0; 598 - err: 599 - kfree(e); 600 - return rc; 601 - } 602 - 603 - /* The gate entries contain absolute times in their e->interval field. Convert 604 - * that to proper intervals (i.e. "0, 5, 10, 15" to "5, 5, 5, 5"). 
605 - */ 606 - static void 607 - sja1105_gating_cfg_time_to_interval(struct sja1105_gating_config *gating_cfg, 608 - u64 cycle_time) 609 - { 610 - struct sja1105_gate_entry *last_e; 611 - struct sja1105_gate_entry *e; 612 - struct list_head *prev; 613 - 614 - list_for_each_entry(e, &gating_cfg->entries, list) { 615 - struct sja1105_gate_entry *p; 616 - 617 - prev = e->list.prev; 618 - 619 - if (prev == &gating_cfg->entries) 620 - continue; 621 - 622 - p = list_entry(prev, struct sja1105_gate_entry, list); 623 - p->interval = e->interval - p->interval; 624 - } 625 - last_e = list_last_entry(&gating_cfg->entries, 626 - struct sja1105_gate_entry, list); 627 - if (last_e->list.prev != &gating_cfg->entries) 628 - last_e->interval = cycle_time - last_e->interval; 629 - } 630 - 631 - static void sja1105_free_gating_config(struct sja1105_gating_config *gating_cfg) 632 - { 633 - struct sja1105_gate_entry *e, *n; 634 - 635 - list_for_each_entry_safe(e, n, &gating_cfg->entries, list) { 636 - list_del(&e->list); 637 - kfree(e); 638 - } 639 - } 640 - 641 - static int sja1105_compose_gating_subschedule(struct sja1105_private *priv, 642 - struct netlink_ext_ack *extack) 643 - { 644 - struct sja1105_gating_config *gating_cfg = &priv->tas_data.gating_cfg; 645 - struct sja1105_rule *rule; 646 - s64 max_cycle_time = 0; 647 - s64 its_base_time = 0; 648 - int i, rc = 0; 649 - 650 - list_for_each_entry(rule, &priv->flow_block.rules, list) { 651 - if (rule->type != SJA1105_RULE_VL) 652 - continue; 653 - if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED) 654 - continue; 655 - 656 - if (max_cycle_time < rule->vl.cycle_time) { 657 - max_cycle_time = rule->vl.cycle_time; 658 - its_base_time = rule->vl.base_time; 659 - } 660 - } 661 - 662 - if (!max_cycle_time) 663 - return 0; 664 - 665 - dev_dbg(priv->ds->dev, "max_cycle_time %lld its_base_time %lld\n", 666 - max_cycle_time, its_base_time); 667 - 668 - sja1105_free_gating_config(gating_cfg); 669 - 670 - gating_cfg->base_time = its_base_time; 
671 - gating_cfg->cycle_time = max_cycle_time; 672 - gating_cfg->num_entries = 0; 673 - 674 - list_for_each_entry(rule, &priv->flow_block.rules, list) { 675 - s64 time; 676 - s64 rbt; 677 - 678 - if (rule->type != SJA1105_RULE_VL) 679 - continue; 680 - if (rule->vl.type != SJA1105_VL_TIME_TRIGGERED) 681 - continue; 682 - 683 - /* Calculate the difference between this gating schedule's 684 - * base time, and the base time of the gating schedule with the 685 - * longest cycle time. We call it the relative base time (rbt). 686 - */ 687 - rbt = future_base_time(rule->vl.base_time, rule->vl.cycle_time, 688 - its_base_time); 689 - rbt -= its_base_time; 690 - 691 - time = rbt; 692 - 693 - for (i = 0; i < rule->vl.num_entries; i++) { 694 - u8 gate_state = rule->vl.entries[i].gate_state; 695 - s64 entry_time = time; 696 - 697 - while (entry_time < max_cycle_time) { 698 - rc = sja1105_insert_gate_entry(gating_cfg, rule, 699 - gate_state, 700 - entry_time, 701 - extack); 702 - if (rc) 703 - goto err; 704 - 705 - entry_time += rule->vl.cycle_time; 706 - } 707 - time += rule->vl.entries[i].interval; 708 - } 709 - } 710 - 711 - sja1105_gating_cfg_time_to_interval(gating_cfg, max_cycle_time); 712 - 713 - return 0; 714 - err: 715 - sja1105_free_gating_config(gating_cfg); 716 - return rc; 717 404 } 718 405 719 406 int sja1105_vl_gate(struct sja1105_private *priv, int port, ··· 597 588 598 589 if (priv->vlan_state == SJA1105_VLAN_UNAWARE && 599 590 key->type != SJA1105_KEY_VLAN_UNAWARE_VL) { 600 - dev_err(priv->ds->dev, "1: vlan state %d key type %d\n", 601 - priv->vlan_state, key->type); 602 591 NL_SET_ERR_MSG_MOD(extack, 603 592 "Can only gate based on DMAC"); 604 593 return -EOPNOTSUPP; 605 - } else if (key->type != SJA1105_KEY_VLAN_AWARE_VL) { 606 - dev_err(priv->ds->dev, "2: vlan state %d key type %d\n", 607 - priv->vlan_state, key->type); 594 + } else if ((priv->vlan_state == SJA1105_VLAN_BEST_EFFORT || 595 + priv->vlan_state == SJA1105_VLAN_FILTERING_FULL) && 596 + key->type 
!= SJA1105_KEY_VLAN_AWARE_VL) { 608 597 NL_SET_ERR_MSG_MOD(extack, 609 598 "Can only gate based on {DMAC, VID, PCP}"); 610 599 return -EOPNOTSUPP;
+29 -7
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 6292 6292 6293 6293 static void bnxt_hwrm_stat_ctx_free(struct bnxt *bp) 6294 6294 { 6295 + struct hwrm_stat_ctx_clr_stats_input req0 = {0}; 6295 6296 struct hwrm_stat_ctx_free_input req = {0}; 6296 6297 int i; 6297 6298 ··· 6302 6301 if (BNXT_CHIP_TYPE_NITRO_A0(bp)) 6303 6302 return; 6304 6303 6304 + bnxt_hwrm_cmd_hdr_init(bp, &req0, HWRM_STAT_CTX_CLR_STATS, -1, -1); 6305 6305 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_STAT_CTX_FREE, -1, -1); 6306 6306 6307 6307 mutex_lock(&bp->hwrm_cmd_lock); ··· 6312 6310 6313 6311 if (cpr->hw_stats_ctx_id != INVALID_STATS_CTX_ID) { 6314 6312 req.stat_ctx_id = cpu_to_le32(cpr->hw_stats_ctx_id); 6315 - 6313 + if (BNXT_FW_MAJ(bp) <= 20) { 6314 + req0.stat_ctx_id = req.stat_ctx_id; 6315 + _hwrm_send_message(bp, &req0, sizeof(req0), 6316 + HWRM_CMD_TIMEOUT); 6317 + } 6316 6318 _hwrm_send_message(bp, &req, sizeof(req), 6317 6319 HWRM_CMD_TIMEOUT); 6318 6320 ··· 6982 6976 bp->fw_cap |= BNXT_FW_CAP_ERR_RECOVER_RELOAD; 6983 6977 6984 6978 bp->tx_push_thresh = 0; 6985 - if (flags & FUNC_QCAPS_RESP_FLAGS_PUSH_MODE_SUPPORTED) 6979 + if ((flags & FUNC_QCAPS_RESP_FLAGS_PUSH_MODE_SUPPORTED) && 6980 + BNXT_FW_MAJ(bp) > 217) 6986 6981 bp->tx_push_thresh = BNXT_TX_PUSH_THRESH; 6987 6982 6988 6983 hw_resc->max_rsscos_ctxs = le16_to_cpu(resp->max_rsscos_ctx); ··· 7247 7240 static int bnxt_hwrm_ver_get(struct bnxt *bp) 7248 7241 { 7249 7242 struct hwrm_ver_get_output *resp = bp->hwrm_cmd_resp_addr; 7243 + u16 fw_maj, fw_min, fw_bld, fw_rsv; 7250 7244 u32 dev_caps_cfg, hwrm_ver; 7251 - int rc; 7245 + int rc, len; 7252 7246 7253 7247 bp->hwrm_max_req_len = HWRM_MAX_REQ_LEN; 7254 7248 mutex_lock(&bp->hwrm_cmd_lock); ··· 7281 7273 resp->hwrm_intf_maj_8b, resp->hwrm_intf_min_8b, 7282 7274 resp->hwrm_intf_upd_8b); 7283 7275 7284 - snprintf(bp->fw_ver_str, BC_HWRM_STR_LEN, "%d.%d.%d.%d", 7285 - resp->hwrm_fw_maj_8b, resp->hwrm_fw_min_8b, 7286 - resp->hwrm_fw_bld_8b, resp->hwrm_fw_rsvd_8b); 7276 + fw_maj = le16_to_cpu(resp->hwrm_fw_major); 7277 + if 
(bp->hwrm_spec_code > 0x10803 && fw_maj) { 7278 + fw_min = le16_to_cpu(resp->hwrm_fw_minor); 7279 + fw_bld = le16_to_cpu(resp->hwrm_fw_build); 7280 + fw_rsv = le16_to_cpu(resp->hwrm_fw_patch); 7281 + len = FW_VER_STR_LEN; 7282 + } else { 7283 + fw_maj = resp->hwrm_fw_maj_8b; 7284 + fw_min = resp->hwrm_fw_min_8b; 7285 + fw_bld = resp->hwrm_fw_bld_8b; 7286 + fw_rsv = resp->hwrm_fw_rsvd_8b; 7287 + len = BC_HWRM_STR_LEN; 7288 + } 7289 + bp->fw_ver_code = BNXT_FW_VER_CODE(fw_maj, fw_min, fw_bld, fw_rsv); 7290 + snprintf(bp->fw_ver_str, len, "%d.%d.%d.%d", fw_maj, fw_min, fw_bld, 7291 + fw_rsv); 7287 7292 7288 7293 if (strlen(resp->active_pkg_name)) { 7289 7294 int fw_ver_len = strlen(bp->fw_ver_str); ··· 11913 11892 dev->ethtool_ops = &bnxt_ethtool_ops; 11914 11893 pci_set_drvdata(pdev, dev); 11915 11894 11916 - bnxt_vpd_read_info(bp); 11895 + if (BNXT_PF(bp)) 11896 + bnxt_vpd_read_info(bp); 11917 11897 11918 11898 rc = bnxt_alloc_hwrm_resources(bp); 11919 11899 if (rc)
+5
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1746 1746 #define PHY_VER_STR_LEN (FW_VER_STR_LEN - BC_HWRM_STR_LEN) 1747 1747 char fw_ver_str[FW_VER_STR_LEN]; 1748 1748 char hwrm_ver_supp[FW_VER_STR_LEN]; 1749 + u64 fw_ver_code; 1750 + #define BNXT_FW_VER_CODE(maj, min, bld, rsv) \ 1751 + ((u64)(maj) << 48 | (u64)(min) << 32 | (u64)(bld) << 16 | (rsv)) 1752 + #define BNXT_FW_MAJ(bp) ((bp)->fw_ver_code >> 48) 1753 + 1749 1754 __be16 vxlan_port; 1750 1755 u8 vxlan_port_cnt; 1751 1756 __le16 vxlan_fw_dst_port_id;
+13 -8
drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
··· 1889 1889 } 1890 1890 1891 1891 static int bnxt_tc_setup_indr_block(struct net_device *netdev, struct bnxt *bp, 1892 - struct flow_block_offload *f) 1892 + struct flow_block_offload *f, void *data, 1893 + void (*cleanup)(struct flow_block_cb *block_cb)) 1893 1894 { 1894 1895 struct bnxt_flower_indr_block_cb_priv *cb_priv; 1895 1896 struct flow_block_cb *block_cb; ··· 1908 1907 cb_priv->bp = bp; 1909 1908 list_add(&cb_priv->list, &bp->tc_indr_block_list); 1910 1909 1911 - block_cb = flow_block_cb_alloc(bnxt_tc_setup_indr_block_cb, 1912 - cb_priv, cb_priv, 1913 - bnxt_tc_setup_indr_rel); 1910 + block_cb = flow_indr_block_cb_alloc(bnxt_tc_setup_indr_block_cb, 1911 + cb_priv, cb_priv, 1912 + bnxt_tc_setup_indr_rel, f, 1913 + netdev, data, bp, cleanup); 1914 1914 if (IS_ERR(block_cb)) { 1915 1915 list_del(&cb_priv->list); 1916 1916 kfree(cb_priv); ··· 1932 1930 if (!block_cb) 1933 1931 return -ENOENT; 1934 1932 1935 - flow_block_cb_remove(block_cb, f); 1933 + flow_indr_block_cb_remove(block_cb, f); 1936 1934 list_del(&block_cb->driver_list); 1937 1935 break; 1938 1936 default: ··· 1947 1945 } 1948 1946 1949 1947 static int bnxt_tc_setup_indr_cb(struct net_device *netdev, void *cb_priv, 1950 - enum tc_setup_type type, void *type_data) 1948 + enum tc_setup_type type, void *type_data, 1949 + void *data, 1950 + void (*cleanup)(struct flow_block_cb *block_cb)) 1951 1951 { 1952 1952 if (!bnxt_is_netdev_indr_offload(netdev)) 1953 1953 return -EOPNOTSUPP; 1954 1954 1955 1955 switch (type) { 1956 1956 case TC_SETUP_BLOCK: 1957 - return bnxt_tc_setup_indr_block(netdev, cb_priv, type_data); 1957 + return bnxt_tc_setup_indr_block(netdev, cb_priv, type_data, data, 1958 + cleanup); 1958 1959 default: 1959 1960 break; 1960 1961 } ··· 2079 2074 return; 2080 2075 2081 2076 flow_indr_dev_unregister(bnxt_tc_setup_indr_cb, bp, 2082 - bnxt_tc_setup_indr_block_cb); 2077 + bnxt_tc_setup_indr_rel); 2083 2078 rhashtable_destroy(&tc_info->flow_table); 2084 2079 
rhashtable_destroy(&tc_info->l2_table); 2085 2080 rhashtable_destroy(&tc_info->decap_l2_table);
+5 -83
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 459 459 genet_dma_ring_regs[r]); 460 460 } 461 461 462 - static bool bcmgenet_hfb_is_filter_enabled(struct bcmgenet_priv *priv, 463 - u32 f_index) 464 - { 465 - u32 offset; 466 - u32 reg; 467 - 468 - offset = HFB_FLT_ENABLE_V3PLUS + (f_index < 32) * sizeof(u32); 469 - reg = bcmgenet_hfb_reg_readl(priv, offset); 470 - return !!(reg & (1 << (f_index % 32))); 471 - } 472 - 473 462 static void bcmgenet_hfb_enable_filter(struct bcmgenet_priv *priv, u32 f_index) 474 463 { 475 464 u32 offset; ··· 520 531 reg &= ~(0xFF << (8 * (f_index % 4))); 521 532 reg |= ((f_length & 0xFF) << (8 * (f_index % 4))); 522 533 bcmgenet_hfb_reg_writel(priv, reg, offset); 523 - } 524 - 525 - static int bcmgenet_hfb_find_unused_filter(struct bcmgenet_priv *priv) 526 - { 527 - u32 f_index; 528 - 529 - /* First MAX_NUM_OF_FS_RULES are reserved for Rx NFC filters */ 530 - for (f_index = MAX_NUM_OF_FS_RULES; 531 - f_index < priv->hw_params->hfb_filter_cnt; f_index++) 532 - if (!bcmgenet_hfb_is_filter_enabled(priv, f_index)) 533 - return f_index; 534 - 535 - return -ENOMEM; 536 534 } 537 535 538 536 static int bcmgenet_hfb_validate_mask(void *mask, size_t size) ··· 610 634 { 611 635 struct ethtool_rx_flow_spec *fs = &rule->fs; 612 636 int err = 0, offset = 0, f_length = 0; 613 - u16 val_16, mask_16; 614 637 u8 val_8, mask_8; 638 + __be16 val_16; 639 + u16 mask_16; 615 640 size_t size; 616 641 u32 *f_data; 617 642 ··· 719 742 kfree(f_data); 720 743 721 744 return err; 722 - } 723 - 724 - /* bcmgenet_hfb_add_filter 725 - * 726 - * Add new filter to Hardware Filter Block to match and direct Rx traffic to 727 - * desired Rx queue. 
728 - * 729 - * f_data is an array of unsigned 32-bit integers where each 32-bit integer 730 - * provides filter data for 2 bytes (4 nibbles) of Rx frame: 731 - * 732 - * bits 31:20 - unused 733 - * bit 19 - nibble 0 match enable 734 - * bit 18 - nibble 1 match enable 735 - * bit 17 - nibble 2 match enable 736 - * bit 16 - nibble 3 match enable 737 - * bits 15:12 - nibble 0 data 738 - * bits 11:8 - nibble 1 data 739 - * bits 7:4 - nibble 2 data 740 - * bits 3:0 - nibble 3 data 741 - * 742 - * Example: 743 - * In order to match: 744 - * - Ethernet frame type = 0x0800 (IP) 745 - * - IP version field = 4 746 - * - IP protocol field = 0x11 (UDP) 747 - * 748 - * The following filter is needed: 749 - * u32 hfb_filter_ipv4_udp[] = { 750 - * Rx frame offset 0x00: 0x00000000, 0x00000000, 0x00000000, 0x00000000, 751 - * Rx frame offset 0x08: 0x00000000, 0x00000000, 0x000F0800, 0x00084000, 752 - * Rx frame offset 0x10: 0x00000000, 0x00000000, 0x00000000, 0x00030011, 753 - * }; 754 - * 755 - * To add the filter to HFB and direct the traffic to Rx queue 0, call: 756 - * bcmgenet_hfb_add_filter(priv, hfb_filter_ipv4_udp, 757 - * ARRAY_SIZE(hfb_filter_ipv4_udp), 0); 758 - */ 759 - int bcmgenet_hfb_add_filter(struct bcmgenet_priv *priv, u32 *f_data, 760 - u32 f_length, u32 rx_queue) 761 - { 762 - int f_index; 763 - 764 - f_index = bcmgenet_hfb_find_unused_filter(priv); 765 - if (f_index < 0) 766 - return -ENOMEM; 767 - 768 - if (f_length > priv->hw_params->hfb_filter_size) 769 - return -EINVAL; 770 - 771 - bcmgenet_hfb_set_filter(priv, f_data, f_length, rx_queue, f_index); 772 - bcmgenet_hfb_enable_filter(priv, f_index); 773 - 774 - return 0; 775 745 } 776 746 777 747 /* bcmgenet_hfb_clear ··· 2042 2118 goto out; 2043 2119 } 2044 2120 2045 - if (skb_padto(skb, ETH_ZLEN)) { 2046 - ret = NETDEV_TX_OK; 2047 - goto out; 2048 - } 2049 - 2050 2121 /* Retain how many bytes will be sent on the wire, without TSB inserted 2051 2122 * by transmit checksum offload 2052 2123 */ ··· 2088 2169 
len_stat = (size << DMA_BUFLENGTH_SHIFT) | 2089 2170 (priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT); 2090 2171 2172 + /* Note: if we ever change from DMA_TX_APPEND_CRC below we 2173 + * will need to restore software padding of "runt" packets 2174 + */ 2091 2175 if (!i) { 2092 2176 len_stat |= DMA_TX_APPEND_CRC | DMA_SOP; 2093 2177 if (skb->ip_summed == CHECKSUM_PARTIAL)
+2 -2
drivers/net/ethernet/broadcom/tg3.c
··· 18168 18168 18169 18169 rtnl_lock(); 18170 18170 18171 - /* We probably don't have netdev yet */ 18172 - if (!netdev || !netif_running(netdev)) 18171 + /* Could be second call or maybe we don't have netdev yet */ 18172 + if (!netdev || tp->pcierr_recovery || !netif_running(netdev)) 18173 18173 goto done; 18174 18174 18175 18175 /* We needn't recover from permanent error */
+82 -46
drivers/net/ethernet/cadence/macb_main.c
··· 2558 2558 2559 2559 err = macb_phylink_connect(bp); 2560 2560 if (err) 2561 - goto napi_exit; 2561 + goto reset_hw; 2562 2562 2563 2563 netif_tx_start_all_queues(dev); 2564 2564 ··· 2567 2567 2568 2568 return 0; 2569 2569 2570 - napi_exit: 2570 + reset_hw: 2571 + macb_reset_hw(bp); 2571 2572 for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) 2572 2573 napi_disable(&queue->napi); 2574 + macb_free_consistent(bp); 2573 2575 pm_exit: 2574 2576 pm_runtime_put_sync(&bp->pdev->dev); 2575 2577 return err; ··· 3762 3760 3763 3761 static struct sifive_fu540_macb_mgmt *mgmt; 3764 3762 3765 - /* Initialize and start the Receiver and Transmit subsystems */ 3766 - static int at91ether_start(struct net_device *dev) 3763 + static int at91ether_alloc_coherent(struct macb *lp) 3767 3764 { 3768 - struct macb *lp = netdev_priv(dev); 3769 3765 struct macb_queue *q = &lp->queues[0]; 3770 - struct macb_dma_desc *desc; 3771 - dma_addr_t addr; 3772 - u32 ctl; 3773 - int i; 3774 3766 3775 3767 q->rx_ring = dma_alloc_coherent(&lp->pdev->dev, 3776 3768 (AT91ETHER_MAX_RX_DESCR * ··· 3785 3789 q->rx_ring = NULL; 3786 3790 return -ENOMEM; 3787 3791 } 3792 + 3793 + return 0; 3794 + } 3795 + 3796 + static void at91ether_free_coherent(struct macb *lp) 3797 + { 3798 + struct macb_queue *q = &lp->queues[0]; 3799 + 3800 + if (q->rx_ring) { 3801 + dma_free_coherent(&lp->pdev->dev, 3802 + AT91ETHER_MAX_RX_DESCR * 3803 + macb_dma_desc_get_size(lp), 3804 + q->rx_ring, q->rx_ring_dma); 3805 + q->rx_ring = NULL; 3806 + } 3807 + 3808 + if (q->rx_buffers) { 3809 + dma_free_coherent(&lp->pdev->dev, 3810 + AT91ETHER_MAX_RX_DESCR * 3811 + AT91ETHER_MAX_RBUFF_SZ, 3812 + q->rx_buffers, q->rx_buffers_dma); 3813 + q->rx_buffers = NULL; 3814 + } 3815 + } 3816 + 3817 + /* Initialize and start the Receiver and Transmit subsystems */ 3818 + static int at91ether_start(struct macb *lp) 3819 + { 3820 + struct macb_queue *q = &lp->queues[0]; 3821 + struct macb_dma_desc *desc; 3822 + dma_addr_t addr; 3823 
+ u32 ctl; 3824 + int i, ret; 3825 + 3826 + ret = at91ether_alloc_coherent(lp); 3827 + if (ret) 3828 + return ret; 3788 3829 3789 3830 addr = q->rx_buffers_dma; 3790 3831 for (i = 0; i < AT91ETHER_MAX_RX_DESCR; i++) { ··· 3844 3811 ctl = macb_readl(lp, NCR); 3845 3812 macb_writel(lp, NCR, ctl | MACB_BIT(RE) | MACB_BIT(TE)); 3846 3813 3814 + /* Enable MAC interrupts */ 3815 + macb_writel(lp, IER, MACB_BIT(RCOMP) | 3816 + MACB_BIT(RXUBR) | 3817 + MACB_BIT(ISR_TUND) | 3818 + MACB_BIT(ISR_RLE) | 3819 + MACB_BIT(TCOMP) | 3820 + MACB_BIT(ISR_ROVR) | 3821 + MACB_BIT(HRESP)); 3822 + 3847 3823 return 0; 3824 + } 3825 + 3826 + static void at91ether_stop(struct macb *lp) 3827 + { 3828 + u32 ctl; 3829 + 3830 + /* Disable MAC interrupts */ 3831 + macb_writel(lp, IDR, MACB_BIT(RCOMP) | 3832 + MACB_BIT(RXUBR) | 3833 + MACB_BIT(ISR_TUND) | 3834 + MACB_BIT(ISR_RLE) | 3835 + MACB_BIT(TCOMP) | 3836 + MACB_BIT(ISR_ROVR) | 3837 + MACB_BIT(HRESP)); 3838 + 3839 + /* Disable Receiver and Transmitter */ 3840 + ctl = macb_readl(lp, NCR); 3841 + macb_writel(lp, NCR, ctl & ~(MACB_BIT(TE) | MACB_BIT(RE))); 3842 + 3843 + /* Free resources. 
*/ 3844 + at91ether_free_coherent(lp); 3848 3845 } 3849 3846 3850 3847 /* Open the ethernet interface */ ··· 3896 3833 3897 3834 macb_set_hwaddr(lp); 3898 3835 3899 - ret = at91ether_start(dev); 3836 + ret = at91ether_start(lp); 3900 3837 if (ret) 3901 - return ret; 3902 - 3903 - /* Enable MAC interrupts */ 3904 - macb_writel(lp, IER, MACB_BIT(RCOMP) | 3905 - MACB_BIT(RXUBR) | 3906 - MACB_BIT(ISR_TUND) | 3907 - MACB_BIT(ISR_RLE) | 3908 - MACB_BIT(TCOMP) | 3909 - MACB_BIT(ISR_ROVR) | 3910 - MACB_BIT(HRESP)); 3838 + goto pm_exit; 3911 3839 3912 3840 ret = macb_phylink_connect(lp); 3913 3841 if (ret) 3914 - return ret; 3842 + goto stop; 3915 3843 3916 3844 netif_start_queue(dev); 3917 3845 3918 3846 return 0; 3847 + 3848 + stop: 3849 + at91ether_stop(lp); 3850 + pm_exit: 3851 + pm_runtime_put_sync(&lp->pdev->dev); 3852 + return ret; 3919 3853 } 3920 3854 3921 3855 /* Close the interface */ 3922 3856 static int at91ether_close(struct net_device *dev) 3923 3857 { 3924 3858 struct macb *lp = netdev_priv(dev); 3925 - struct macb_queue *q = &lp->queues[0]; 3926 - u32 ctl; 3927 - 3928 - /* Disable Receiver and Transmitter */ 3929 - ctl = macb_readl(lp, NCR); 3930 - macb_writel(lp, NCR, ctl & ~(MACB_BIT(TE) | MACB_BIT(RE))); 3931 - 3932 - /* Disable MAC interrupts */ 3933 - macb_writel(lp, IDR, MACB_BIT(RCOMP) | 3934 - MACB_BIT(RXUBR) | 3935 - MACB_BIT(ISR_TUND) | 3936 - MACB_BIT(ISR_RLE) | 3937 - MACB_BIT(TCOMP) | 3938 - MACB_BIT(ISR_ROVR) | 3939 - MACB_BIT(HRESP)); 3940 3859 3941 3860 netif_stop_queue(dev); 3942 3861 3943 3862 phylink_stop(lp->phylink); 3944 3863 phylink_disconnect_phy(lp->phylink); 3945 3864 3946 - dma_free_coherent(&lp->pdev->dev, 3947 - AT91ETHER_MAX_RX_DESCR * 3948 - macb_dma_desc_get_size(lp), 3949 - q->rx_ring, q->rx_ring_dma); 3950 - q->rx_ring = NULL; 3951 - 3952 - dma_free_coherent(&lp->pdev->dev, 3953 - AT91ETHER_MAX_RX_DESCR * AT91ETHER_MAX_RBUFF_SZ, 3954 - q->rx_buffers, q->rx_buffers_dma); 3955 - q->rx_buffers = NULL; 3865 + 
at91ether_stop(lp); 3956 3866 3957 3867 return pm_runtime_put(&lp->pdev->dev); 3958 3868 }
+4 -2
drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
··· 1975 1975 u8 mem_type[CTXT_INGRESS + 1] = { 0 }; 1976 1976 struct cudbg_buffer temp_buff = { 0 }; 1977 1977 struct cudbg_ch_cntxt *buff; 1978 - u64 *dst_off, *src_off; 1979 1978 u8 *ctx_buf; 1980 1979 u8 i, k; 1981 1980 int rc; ··· 2043 2044 } 2044 2045 2045 2046 for (j = 0; j < max_ctx_qid; j++) { 2047 + __be64 *dst_off; 2048 + u64 *src_off; 2049 + 2046 2050 src_off = (u64 *)(ctx_buf + j * SGE_CTXT_SIZE); 2047 - dst_off = (u64 *)buff->data; 2051 + dst_off = (__be64 *)buff->data; 2048 2052 2049 2053 /* The data is stored in 64-bit cpu order. Convert it 2050 2054 * to big endian before parsing.
+3
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.h
··· 136 136 ((val & 0x02) << 5) | 137 137 ((val & 0x01) << 7); 138 138 } 139 + 140 + extern const char * const dcb_ver_array[]; 141 + 139 142 #define CXGB4_DCB_ENABLED true 140 143 141 144 #else /* !CONFIG_CHELSIO_T4_DCB */
-1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
··· 2379 2379 }; 2380 2380 2381 2381 #ifdef CONFIG_CHELSIO_T4_DCB 2382 - extern char *dcb_ver_array[]; 2383 2382 2384 2383 /* Data Center Briging information for each port. 2385 2384 */
+1 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
··· 588 588 /** 589 589 * lmm_to_fw_caps - translate ethtool Link Mode Mask to Firmware 590 590 * capabilities 591 - * @et_lmm: ethtool Link Mode Mask 591 + * @link_mode_mask: ethtool Link Mode Mask 592 592 * 593 593 * Translate ethtool Link Mode Mask into a Firmware Port capabilities 594 594 * value.
+16 -9
drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
··· 165 165 unsigned int tid, bool dip, bool sip, bool dp, 166 166 bool sp) 167 167 { 168 + u8 *nat_lp = (u8 *)&f->fs.nat_lport; 169 + u8 *nat_fp = (u8 *)&f->fs.nat_fport; 170 + 168 171 if (dip) { 169 172 if (f->fs.type) { 170 173 set_tcb_field(adap, f, tid, TCB_SND_UNA_RAW_W, ··· 239 236 } 240 237 241 238 set_tcb_field(adap, f, tid, TCB_PDU_HDR_LEN_W, WORD_MASK, 242 - (dp ? f->fs.nat_lport : 0) | 243 - (sp ? f->fs.nat_fport << 16 : 0), 1); 239 + (dp ? (nat_lp[1] | nat_lp[0] << 8) : 0) | 240 + (sp ? (nat_fp[1] << 16 | nat_fp[0] << 24) : 0), 241 + 1); 244 242 } 245 243 246 244 /* Validate filter spec against configuration done on the card. */ ··· 913 909 fwr->fpm = htons(f->fs.mask.fport); 914 910 915 911 if (adapter->params.filter2_wr_support) { 912 + u8 *nat_lp = (u8 *)&f->fs.nat_lport; 913 + u8 *nat_fp = (u8 *)&f->fs.nat_fport; 914 + 916 915 fwr->natmode_to_ulp_type = 917 916 FW_FILTER2_WR_ULP_TYPE_V(f->fs.nat_mode ? 918 917 ULP_MODE_TCPDDP : ··· 923 916 FW_FILTER2_WR_NATMODE_V(f->fs.nat_mode); 924 917 memcpy(fwr->newlip, f->fs.nat_lip, sizeof(fwr->newlip)); 925 918 memcpy(fwr->newfip, f->fs.nat_fip, sizeof(fwr->newfip)); 926 - fwr->newlport = htons(f->fs.nat_lport); 927 - fwr->newfport = htons(f->fs.nat_fport); 919 + fwr->newlport = htons(nat_lp[1] | nat_lp[0] << 8); 920 + fwr->newfport = htons(nat_fp[1] | nat_fp[0] << 8); 928 921 } 929 922 930 923 /* Mark the filter as "pending" and ship off the Filter Work Request. 
··· 1112 1105 struct in_addr *addr; 1113 1106 1114 1107 addr = (struct in_addr *)ipmask; 1115 - if (addr->s_addr == 0xffffffff) 1108 + if (ntohl(addr->s_addr) == 0xffffffff) 1116 1109 return true; 1117 1110 } else if (family == AF_INET6) { 1118 1111 struct in6_addr *addr6; 1119 1112 1120 1113 addr6 = (struct in6_addr *)ipmask; 1121 - if (addr6->s6_addr32[0] == 0xffffffff && 1122 - addr6->s6_addr32[1] == 0xffffffff && 1123 - addr6->s6_addr32[2] == 0xffffffff && 1124 - addr6->s6_addr32[3] == 0xffffffff) 1114 + if (ntohl(addr6->s6_addr32[0]) == 0xffffffff && 1115 + ntohl(addr6->s6_addr32[1]) == 0xffffffff && 1116 + ntohl(addr6->s6_addr32[2]) == 0xffffffff && 1117 + ntohl(addr6->s6_addr32[3]) == 0xffffffff) 1125 1118 return true; 1126 1119 } 1127 1120 return false;
+6 -5
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 449 449 * or -1 450 450 * @addr: the new MAC address value 451 451 * @persist: whether a new MAC allocation should be persistent 452 - * @add_smt: if true also add the address to the HW SMT 452 + * @smt_idx: the destination to store the new SMT index. 453 453 * 454 454 * Modifies an MPS filter and sets it to the new MAC address if 455 455 * @tcam_idx >= 0, or adds the MAC address to a new filter if ··· 1615 1615 * @stid: the server TID 1616 1616 * @sip: local IP address to bind server to 1617 1617 * @sport: the server's TCP port 1618 + * @vlan: the VLAN header information 1618 1619 * @queue: queue to direct messages from this server to 1619 1620 * 1620 1621 * Create an IP server for the given port and address. ··· 2610 2609 2611 2610 /* Clear out filter specifications */ 2612 2611 memset(&f->fs, 0, sizeof(struct ch_filter_specification)); 2613 - f->fs.val.lport = cpu_to_be16(sport); 2612 + f->fs.val.lport = be16_to_cpu(sport); 2614 2613 f->fs.mask.lport = ~0; 2615 2614 val = (u8 *)&sip; 2616 2615 if ((val[0] | val[1] | val[2] | val[3]) != 0) { ··· 5378 5377 static int cfg_queues(struct adapter *adap) 5379 5378 { 5380 5379 u32 avail_qsets, avail_eth_qsets, avail_uld_qsets; 5381 - u32 i, n10g = 0, qidx = 0, n1g = 0; 5382 5380 u32 ncpus = num_online_cpus(); 5383 5381 u32 niqflint, neq, num_ulds; 5384 5382 struct sge *s = &adap->sge; 5383 + u32 i, n10g = 0, qidx = 0; 5385 5384 u32 q10g = 0, q1g; 5386 5385 5387 5386 /* Reduce memory usage in kdump environment, disable all offload. 
*/ ··· 5427 5426 if (n10g) 5428 5427 q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g; 5429 5428 5430 - n1g = adap->params.nports - n10g; 5431 5429 #ifdef CONFIG_CHELSIO_T4_DCB 5432 5430 /* For Data Center Bridging support we need to be able to support up 5433 5431 * to 8 Traffic Priorities; each of which will be assigned to its ··· 5444 5444 else 5445 5445 q10g = max(8U, q10g); 5446 5446 5447 - while ((q10g * n10g) > (avail_eth_qsets - n1g * q1g)) 5447 + while ((q10g * n10g) > 5448 + (avail_eth_qsets - (adap->params.nports - n10g) * q1g)) 5448 5449 q10g--; 5449 5450 5450 5451 #else /* !CONFIG_CHELSIO_T4_DCB */
+2 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
··· 194 194 } 195 195 196 196 /** 197 + * cxgb4_ptp_adjfreq - Adjust frequency of PHC cycle counter 197 198 * @ptp: ptp clock structure 198 199 * @ppb: Desired frequency change in parts per billion 199 200 * ··· 230 229 231 230 /** 232 231 * cxgb4_ptp_fineadjtime - Shift the time of the hardware clock 233 - * @ptp: ptp clock structure 232 + * @adapter: board private structure 234 233 * @delta: Desired change in nanoseconds 235 234 * 236 235 * Adjust the timer by resetting the timecounter structure.
+10 -20
drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
··· 58 58 PEDIT_FIELDS(IP6_, DST_63_32, 4, nat_lip, 4), 59 59 PEDIT_FIELDS(IP6_, DST_95_64, 4, nat_lip, 8), 60 60 PEDIT_FIELDS(IP6_, DST_127_96, 4, nat_lip, 12), 61 - PEDIT_FIELDS(TCP_, SPORT, 2, nat_fport, 0), 62 - PEDIT_FIELDS(TCP_, DPORT, 2, nat_lport, 0), 63 - PEDIT_FIELDS(UDP_, SPORT, 2, nat_fport, 0), 64 - PEDIT_FIELDS(UDP_, DPORT, 2, nat_lport, 0), 65 61 }; 66 62 67 63 static struct ch_tc_flower_entry *allocate_flower_entry(void) ··· 152 156 struct flow_match_ports match; 153 157 154 158 flow_rule_match_ports(rule, &match); 155 - fs->val.lport = cpu_to_be16(match.key->dst); 156 - fs->mask.lport = cpu_to_be16(match.mask->dst); 157 - fs->val.fport = cpu_to_be16(match.key->src); 158 - fs->mask.fport = cpu_to_be16(match.mask->src); 159 + fs->val.lport = be16_to_cpu(match.key->dst); 160 + fs->mask.lport = be16_to_cpu(match.mask->dst); 161 + fs->val.fport = be16_to_cpu(match.key->src); 162 + fs->mask.fport = be16_to_cpu(match.mask->src); 159 163 160 164 /* also initialize nat_lport/fport to same values */ 161 - fs->nat_lport = cpu_to_be16(match.key->dst); 162 - fs->nat_fport = cpu_to_be16(match.key->src); 165 + fs->nat_lport = fs->val.lport; 166 + fs->nat_fport = fs->val.fport; 163 167 } 164 168 165 169 if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_IP)) { ··· 350 354 switch (offset) { 351 355 case PEDIT_TCP_SPORT_DPORT: 352 356 if (~mask & PEDIT_TCP_UDP_SPORT_MASK) 353 - offload_pedit(fs, cpu_to_be32(val) >> 16, 354 - cpu_to_be32(mask) >> 16, 355 - TCP_SPORT); 357 + fs->nat_fport = val; 356 358 else 357 - offload_pedit(fs, cpu_to_be32(val), 358 - cpu_to_be32(mask), TCP_DPORT); 359 + fs->nat_lport = val >> 16; 359 360 } 360 361 fs->nat_mode = NAT_MODE_ALL; 361 362 break; ··· 360 367 switch (offset) { 361 368 case PEDIT_UDP_SPORT_DPORT: 362 369 if (~mask & PEDIT_TCP_UDP_SPORT_MASK) 363 - offload_pedit(fs, cpu_to_be32(val) >> 16, 364 - cpu_to_be32(mask) >> 16, 365 - UDP_SPORT); 370 + fs->nat_fport = val; 366 371 else 367 - offload_pedit(fs, cpu_to_be32(val), 368 - 
cpu_to_be32(mask), UDP_DPORT); 372 + fs->nat_lport = val >> 16; 369 373 } 370 374 fs->nat_mode = NAT_MODE_ALL; 371 375 }
+9 -9
drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32.c
··· 48 48 bool next_header) 49 49 { 50 50 unsigned int i, j; 51 - u32 val, mask; 51 + __be32 val, mask; 52 52 int off, err; 53 53 bool found; 54 54 ··· 228 228 const struct cxgb4_next_header *next; 229 229 bool found = false; 230 230 unsigned int i, j; 231 - u32 val, mask; 231 + __be32 val, mask; 232 232 int off; 233 233 234 234 if (t->table[link_uhtid - 1].link_handle) { ··· 242 242 243 243 /* Try to find matches that allow jumps to next header. */ 244 244 for (i = 0; next[i].jump; i++) { 245 - if (next[i].offoff != cls->knode.sel->offoff || 246 - next[i].shift != cls->knode.sel->offshift || 247 - next[i].mask != cls->knode.sel->offmask || 248 - next[i].offset != cls->knode.sel->off) 245 + if (next[i].sel.offoff != cls->knode.sel->offoff || 246 + next[i].sel.offshift != cls->knode.sel->offshift || 247 + next[i].sel.offmask != cls->knode.sel->offmask || 248 + next[i].sel.off != cls->knode.sel->off) 249 249 continue; 250 250 251 251 /* Found a possible candidate. Find a key that ··· 257 257 val = cls->knode.sel->keys[j].val; 258 258 mask = cls->knode.sel->keys[j].mask; 259 259 260 - if (next[i].match_off == off && 261 - next[i].match_val == val && 262 - next[i].match_mask == mask) { 260 + if (next[i].key.off == off && 261 + next[i].key.val == val && 262 + next[i].key.mask == mask) { 263 263 found = true; 264 264 break; 265 265 }
+82 -40
drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h
··· 38 38 struct cxgb4_match_field { 39 39 int off; /* Offset from the beginning of the header to match */ 40 40 /* Fill the value/mask pair in the spec if matched */ 41 - int (*val)(struct ch_filter_specification *f, u32 val, u32 mask); 41 + int (*val)(struct ch_filter_specification *f, __be32 val, __be32 mask); 42 42 }; 43 43 44 44 /* IPv4 match fields */ 45 45 static inline int cxgb4_fill_ipv4_tos(struct ch_filter_specification *f, 46 - u32 val, u32 mask) 46 + __be32 val, __be32 mask) 47 47 { 48 48 f->val.tos = (ntohl(val) >> 16) & 0x000000FF; 49 49 f->mask.tos = (ntohl(mask) >> 16) & 0x000000FF; ··· 52 52 } 53 53 54 54 static inline int cxgb4_fill_ipv4_frag(struct ch_filter_specification *f, 55 - u32 val, u32 mask) 55 + __be32 val, __be32 mask) 56 56 { 57 57 u32 mask_val; 58 58 u8 frag_val; ··· 74 74 } 75 75 76 76 static inline int cxgb4_fill_ipv4_proto(struct ch_filter_specification *f, 77 - u32 val, u32 mask) 77 + __be32 val, __be32 mask) 78 78 { 79 79 f->val.proto = (ntohl(val) >> 16) & 0x000000FF; 80 80 f->mask.proto = (ntohl(mask) >> 16) & 0x000000FF; ··· 83 83 } 84 84 85 85 static inline int cxgb4_fill_ipv4_src_ip(struct ch_filter_specification *f, 86 - u32 val, u32 mask) 86 + __be32 val, __be32 mask) 87 87 { 88 88 memcpy(&f->val.fip[0], &val, sizeof(u32)); 89 89 memcpy(&f->mask.fip[0], &mask, sizeof(u32)); ··· 92 92 } 93 93 94 94 static inline int cxgb4_fill_ipv4_dst_ip(struct ch_filter_specification *f, 95 - u32 val, u32 mask) 95 + __be32 val, __be32 mask) 96 96 { 97 97 memcpy(&f->val.lip[0], &val, sizeof(u32)); 98 98 memcpy(&f->mask.lip[0], &mask, sizeof(u32)); ··· 111 111 112 112 /* IPv6 match fields */ 113 113 static inline int cxgb4_fill_ipv6_tos(struct ch_filter_specification *f, 114 - u32 val, u32 mask) 114 + __be32 val, __be32 mask) 115 115 { 116 116 f->val.tos = (ntohl(val) >> 20) & 0x000000FF; 117 117 f->mask.tos = (ntohl(mask) >> 20) & 0x000000FF; ··· 120 120 } 121 121 122 122 static inline int cxgb4_fill_ipv6_proto(struct 
ch_filter_specification *f, 123 - u32 val, u32 mask) 123 + __be32 val, __be32 mask) 124 124 { 125 125 f->val.proto = (ntohl(val) >> 8) & 0x000000FF; 126 126 f->mask.proto = (ntohl(mask) >> 8) & 0x000000FF; ··· 129 129 } 130 130 131 131 static inline int cxgb4_fill_ipv6_src_ip0(struct ch_filter_specification *f, 132 - u32 val, u32 mask) 132 + __be32 val, __be32 mask) 133 133 { 134 134 memcpy(&f->val.fip[0], &val, sizeof(u32)); 135 135 memcpy(&f->mask.fip[0], &mask, sizeof(u32)); ··· 138 138 } 139 139 140 140 static inline int cxgb4_fill_ipv6_src_ip1(struct ch_filter_specification *f, 141 - u32 val, u32 mask) 141 + __be32 val, __be32 mask) 142 142 { 143 143 memcpy(&f->val.fip[4], &val, sizeof(u32)); 144 144 memcpy(&f->mask.fip[4], &mask, sizeof(u32)); ··· 147 147 } 148 148 149 149 static inline int cxgb4_fill_ipv6_src_ip2(struct ch_filter_specification *f, 150 - u32 val, u32 mask) 150 + __be32 val, __be32 mask) 151 151 { 152 152 memcpy(&f->val.fip[8], &val, sizeof(u32)); 153 153 memcpy(&f->mask.fip[8], &mask, sizeof(u32)); ··· 156 156 } 157 157 158 158 static inline int cxgb4_fill_ipv6_src_ip3(struct ch_filter_specification *f, 159 - u32 val, u32 mask) 159 + __be32 val, __be32 mask) 160 160 { 161 161 memcpy(&f->val.fip[12], &val, sizeof(u32)); 162 162 memcpy(&f->mask.fip[12], &mask, sizeof(u32)); ··· 165 165 } 166 166 167 167 static inline int cxgb4_fill_ipv6_dst_ip0(struct ch_filter_specification *f, 168 - u32 val, u32 mask) 168 + __be32 val, __be32 mask) 169 169 { 170 170 memcpy(&f->val.lip[0], &val, sizeof(u32)); 171 171 memcpy(&f->mask.lip[0], &mask, sizeof(u32)); ··· 174 174 } 175 175 176 176 static inline int cxgb4_fill_ipv6_dst_ip1(struct ch_filter_specification *f, 177 - u32 val, u32 mask) 177 + __be32 val, __be32 mask) 178 178 { 179 179 memcpy(&f->val.lip[4], &val, sizeof(u32)); 180 180 memcpy(&f->mask.lip[4], &mask, sizeof(u32)); ··· 183 183 } 184 184 185 185 static inline int cxgb4_fill_ipv6_dst_ip2(struct ch_filter_specification *f, 186 - u32 val, u32 
mask) 186 + __be32 val, __be32 mask) 187 187 { 188 188 memcpy(&f->val.lip[8], &val, sizeof(u32)); 189 189 memcpy(&f->mask.lip[8], &mask, sizeof(u32)); ··· 192 192 } 193 193 194 194 static inline int cxgb4_fill_ipv6_dst_ip3(struct ch_filter_specification *f, 195 - u32 val, u32 mask) 195 + __be32 val, __be32 mask) 196 196 { 197 197 memcpy(&f->val.lip[12], &val, sizeof(u32)); 198 198 memcpy(&f->mask.lip[12], &mask, sizeof(u32)); ··· 216 216 217 217 /* TCP/UDP match */ 218 218 static inline int cxgb4_fill_l4_ports(struct ch_filter_specification *f, 219 - u32 val, u32 mask) 219 + __be32 val, __be32 mask) 220 220 { 221 221 f->val.fport = ntohl(val) >> 16; 222 222 f->mask.fport = ntohl(mask) >> 16; ··· 237 237 }; 238 238 239 239 struct cxgb4_next_header { 240 - unsigned int offset; /* Offset to next header */ 241 - /* offset, shift, and mask added to offset above 240 + /* Offset, shift, and mask added to beginning of the header 242 241 * to get to next header. Useful when using a header 243 242 * field's value to jump to next header such as IHL field 244 243 * in IPv4 header. 245 244 */ 246 - unsigned int offoff; 247 - u32 shift; 248 - u32 mask; 249 - /* match criteria to make this jump */ 250 - unsigned int match_off; 251 - u32 match_val; 252 - u32 match_mask; 245 + struct tc_u32_sel sel; 246 + struct tc_u32_key key; 253 247 /* location of jump to make */ 254 248 const struct cxgb4_match_field *jump; 255 249 }; ··· 252 258 * IPv4 header. 
253 259 */ 254 260 static const struct cxgb4_next_header cxgb4_ipv4_jumps[] = { 255 - { .offset = 0, .offoff = 0, .shift = 6, .mask = 0xF, 256 - .match_off = 8, .match_val = 0x600, .match_mask = 0xFF00, 257 - .jump = cxgb4_tcp_fields }, 258 - { .offset = 0, .offoff = 0, .shift = 6, .mask = 0xF, 259 - .match_off = 8, .match_val = 0x1100, .match_mask = 0xFF00, 260 - .jump = cxgb4_udp_fields }, 261 - { .jump = NULL } 261 + { 262 + /* TCP Jump */ 263 + .sel = { 264 + .off = 0, 265 + .offoff = 0, 266 + .offshift = 6, 267 + .offmask = cpu_to_be16(0x0f00), 268 + }, 269 + .key = { 270 + .off = 8, 271 + .val = cpu_to_be32(0x00060000), 272 + .mask = cpu_to_be32(0x00ff0000), 273 + }, 274 + .jump = cxgb4_tcp_fields, 275 + }, 276 + { 277 + /* UDP Jump */ 278 + .sel = { 279 + .off = 0, 280 + .offoff = 0, 281 + .offshift = 6, 282 + .offmask = cpu_to_be16(0x0f00), 283 + }, 284 + .key = { 285 + .off = 8, 286 + .val = cpu_to_be32(0x00110000), 287 + .mask = cpu_to_be32(0x00ff0000), 288 + }, 289 + .jump = cxgb4_udp_fields, 290 + }, 291 + { .jump = NULL }, 262 292 }; 263 293 264 294 /* Accept a rule with a jump directly past the 40 Bytes of IPv6 fixed header 265 295 * to get to transport layer header. 
266 296 */ 267 297 static const struct cxgb4_next_header cxgb4_ipv6_jumps[] = { 268 - { .offset = 0x28, .offoff = 0, .shift = 0, .mask = 0, 269 - .match_off = 4, .match_val = 0x60000, .match_mask = 0xFF0000, 270 - .jump = cxgb4_tcp_fields }, 271 - { .offset = 0x28, .offoff = 0, .shift = 0, .mask = 0, 272 - .match_off = 4, .match_val = 0x110000, .match_mask = 0xFF0000, 273 - .jump = cxgb4_udp_fields }, 274 - { .jump = NULL } 298 + { 299 + /* TCP Jump */ 300 + .sel = { 301 + .off = 40, 302 + .offoff = 0, 303 + .offshift = 0, 304 + .offmask = 0, 305 + }, 306 + .key = { 307 + .off = 4, 308 + .val = cpu_to_be32(0x00000600), 309 + .mask = cpu_to_be32(0x0000ff00), 310 + }, 311 + .jump = cxgb4_tcp_fields, 312 + }, 313 + { 314 + /* UDP Jump */ 315 + .sel = { 316 + .off = 40, 317 + .offoff = 0, 318 + .offshift = 0, 319 + .offmask = 0, 320 + }, 321 + .key = { 322 + .off = 4, 323 + .val = cpu_to_be32(0x00001100), 324 + .mask = cpu_to_be32(0x0000ff00), 325 + }, 326 + .jump = cxgb4_udp_fields, 327 + }, 328 + { .jump = NULL }, 275 329 }; 276 330 277 331 struct cxgb4_link {
+25 -28
drivers/net/ethernet/chelsio/cxgb4/l2t.c
··· 503 503 EXPORT_SYMBOL(cxgb4_select_ntuple); 504 504 505 505 /* 506 - * Called when address resolution fails for an L2T entry to handle packets 507 - * on the arpq head. If a packet specifies a failure handler it is invoked, 508 - * otherwise the packet is sent to the device. 509 - */ 510 - static void handle_failed_resolution(struct adapter *adap, struct l2t_entry *e) 511 - { 512 - struct sk_buff *skb; 513 - 514 - while ((skb = __skb_dequeue(&e->arpq)) != NULL) { 515 - const struct l2t_skb_cb *cb = L2T_SKB_CB(skb); 516 - 517 - spin_unlock(&e->lock); 518 - if (cb->arp_err_handler) 519 - cb->arp_err_handler(cb->handle, skb); 520 - else 521 - t4_ofld_send(adap, skb); 522 - spin_lock(&e->lock); 523 - } 524 - } 525 - 526 - /* 527 506 * Called when the host's neighbor layer makes a change to some entry that is 528 507 * loaded into the HW L2 table. 529 508 */ 530 509 void t4_l2t_update(struct adapter *adap, struct neighbour *neigh) 531 510 { 532 - struct l2t_entry *e; 533 - struct sk_buff_head *arpq = NULL; 534 - struct l2t_data *d = adap->l2t; 535 511 unsigned int addr_len = neigh->tbl->key_len; 536 512 u32 *addr = (u32 *) neigh->primary_key; 537 - int ifidx = neigh->dev->ifindex; 538 - int hash = addr_hash(d, addr, addr_len, ifidx); 513 + int hash, ifidx = neigh->dev->ifindex; 514 + struct sk_buff_head *arpq = NULL; 515 + struct l2t_data *d = adap->l2t; 516 + struct l2t_entry *e; 539 517 518 + hash = addr_hash(d, addr, addr_len, ifidx); 540 519 read_lock_bh(&d->lock); 541 520 for (e = d->l2tab[hash].first; e; e = e->next) 542 521 if (!addreq(e, addr) && e->ifindex == ifidx) { ··· 548 569 write_l2e(adap, e, 0); 549 570 } 550 571 551 - if (arpq) 552 - handle_failed_resolution(adap, e); 572 + if (arpq) { 573 + struct sk_buff *skb; 574 + 575 + /* Called when address resolution fails for an L2T 576 + * entry to handle packets on the arpq head. If a 577 + * packet specifies a failure handler it is invoked, 578 + * otherwise the packet is sent to the device. 
579 + */ 580 + while ((skb = __skb_dequeue(&e->arpq)) != NULL) { 581 + const struct l2t_skb_cb *cb = L2T_SKB_CB(skb); 582 + 583 + spin_unlock(&e->lock); 584 + if (cb->arp_err_handler) 585 + cb->arp_err_handler(cb->handle, skb); 586 + else 587 + t4_ofld_send(adap, skb); 588 + spin_lock(&e->lock); 589 + } 590 + } 553 591 spin_unlock_bh(&e->lock); 554 592 } 555 593 ··· 609 613 } 610 614 611 615 /** 616 + * cxgb4_l2t_alloc_switching - Allocates an L2T entry for switch filters 612 617 * @dev: net_device pointer 613 618 * @vlan: VLAN Id 614 619 * @port: Associated port
+1 -1
drivers/net/ethernet/chelsio/cxgb4/sched.c
··· 598 598 /** 599 599 * cxgb4_sched_class_free - free a scheduling class 600 600 * @dev: net_device pointer 601 - * @e: scheduling class 601 + * @classid: scheduling class id to free 602 602 * 603 603 * Frees a scheduling class if there are no users. 604 604 */
+24 -23
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 302 302 303 303 /** 304 304 * free_tx_desc - reclaims Tx descriptors and their buffers 305 - * @adapter: the adapter 305 + * @adap: the adapter 306 306 * @q: the Tx queue to reclaim descriptors from 307 307 * @n: the number of descriptors to reclaim 308 308 * @unmap: whether the buffers should be unmapped for DMA ··· 722 722 /** 723 723 * is_eth_imm - can an Ethernet packet be sent as immediate data? 724 724 * @skb: the packet 725 + * @chip_ver: chip version 725 726 * 726 727 * Returns whether an Ethernet packet is small enough to fit as 727 728 * immediate data. Return value corresponds to headroom required. ··· 750 749 /** 751 750 * calc_tx_flits - calculate the number of flits for a packet Tx WR 752 751 * @skb: the packet 752 + * @chip_ver: chip version 753 753 * 754 754 * Returns the number of flits needed for a Tx WR for the given Ethernet 755 755 * packet, including the needed WR and CPL headers. ··· 806 804 /** 807 805 * calc_tx_descs - calculate the number of Tx descriptors for a packet 808 806 * @skb: the packet 807 + * @chip_ver: chip version 809 808 * 810 809 * Returns the number of Tx descriptors needed for the given Ethernet 811 810 * packet, including the needed WR and CPL headers. 
··· 1428 1425 1429 1426 qidx = skb_get_queue_mapping(skb); 1430 1427 if (ptp_enabled) { 1431 - spin_lock(&adap->ptp_lock); 1432 1428 if (!(adap->ptp_tx_skb)) { 1433 1429 skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; 1434 1430 adap->ptp_tx_skb = skb_get(skb); 1435 1431 } else { 1436 - spin_unlock(&adap->ptp_lock); 1437 1432 goto out_free; 1438 1433 } 1439 1434 q = &adap->sge.ptptxq; ··· 1445 1444 1446 1445 #ifdef CONFIG_CHELSIO_T4_FCOE 1447 1446 ret = cxgb_fcoe_offload(skb, adap, pi, &cntrl); 1448 - if (unlikely(ret == -ENOTSUPP)) { 1449 - if (ptp_enabled) 1450 - spin_unlock(&adap->ptp_lock); 1447 + if (unlikely(ret == -EOPNOTSUPP)) 1451 1448 goto out_free; 1452 - } 1453 1449 #endif /* CONFIG_CHELSIO_T4_FCOE */ 1454 1450 1455 1451 chip_ver = CHELSIO_CHIP_VERSION(adap->params.chip); ··· 1459 1461 dev_err(adap->pdev_dev, 1460 1462 "%s: Tx ring %u full while queue awake!\n", 1461 1463 dev->name, qidx); 1462 - if (ptp_enabled) 1463 - spin_unlock(&adap->ptp_lock); 1464 1464 return NETDEV_TX_BUSY; 1465 1465 } 1466 1466 ··· 1477 1481 unlikely(cxgb4_map_skb(adap->pdev_dev, skb, sgl_sdesc->addr) < 0)) { 1478 1482 memset(sgl_sdesc->addr, 0, sizeof(sgl_sdesc->addr)); 1479 1483 q->mapping_err++; 1480 - if (ptp_enabled) 1481 - spin_unlock(&adap->ptp_lock); 1482 1484 goto out_free; 1483 1485 } 1484 1486 ··· 1527 1533 if (iph->version == 4) { 1528 1534 iph->check = 0; 1529 1535 iph->tot_len = 0; 1530 - iph->check = (u16)(~ip_fast_csum((u8 *)iph, 1531 - iph->ihl)); 1536 + iph->check = ~ip_fast_csum((u8 *)iph, iph->ihl); 1532 1537 } 1533 1538 if (skb->ip_summed == CHECKSUM_PARTIAL) 1534 1539 cntrl = hwcsum(adap->params.chip, skb); ··· 1623 1630 txq_advance(&q->q, ndesc); 1624 1631 1625 1632 cxgb4_ring_tx_db(adap, &q->q, ndesc); 1626 - if (ptp_enabled) 1627 - spin_unlock(&adap->ptp_lock); 1628 1633 return NETDEV_TX_OK; 1629 1634 1630 1635 out_free: ··· 2368 2377 if (unlikely(qid >= pi->nqsets)) 2369 2378 return cxgb4_ethofld_xmit(skb, dev); 2370 2379 2380 + if (is_ptp_enabled(skb, 
dev)) { 2381 + struct adapter *adap = netdev2adap(dev); 2382 + netdev_tx_t ret; 2383 + 2384 + spin_lock(&adap->ptp_lock); 2385 + ret = cxgb4_eth_xmit(skb, dev); 2386 + spin_unlock(&adap->ptp_lock); 2387 + return ret; 2388 + } 2389 + 2371 2390 return cxgb4_eth_xmit(skb, dev); 2372 2391 } 2373 2392 ··· 2411 2410 2412 2411 /** 2413 2412 * cxgb4_ethofld_send_flowc - Send ETHOFLD flowc request to bind eotid to tc. 2414 - * @dev - netdevice 2415 - * @eotid - ETHOFLD tid to bind/unbind 2416 - * @tc - traffic class. If set to FW_SCHED_CLS_NONE, then unbinds the @eotid 2413 + * @dev: netdevice 2414 + * @eotid: ETHOFLD tid to bind/unbind 2415 + * @tc: traffic class. If set to FW_SCHED_CLS_NONE, then unbinds the @eotid 2417 2416 * 2418 2417 * Send a FLOWC work request to bind an ETHOFLD TID to a traffic class. 2419 2418 * If @tc is set to FW_SCHED_CLS_NONE, then the @eotid is unbound from ··· 2692 2691 2693 2692 /** 2694 2693 * txq_stop_maperr - stop a Tx queue due to I/O MMU exhaustion 2695 - * @adap: the adapter 2696 2694 * @q: the queue to stop 2697 2695 * 2698 2696 * Mark a Tx queue stopped due to I/O MMU exhaustion and resulting ··· 3286 3286 3287 3287 /** 3288 3288 * t4_systim_to_hwstamp - read hardware time stamp 3289 - * @adap: the adapter 3289 + * @adapter: the adapter 3290 3290 * @skb: the packet 3291 3291 * 3292 3292 * Read Time Stamp from MPS packet and insert in skb which ··· 3313 3313 3314 3314 hwtstamps = skb_hwtstamps(skb); 3315 3315 memset(hwtstamps, 0, sizeof(*hwtstamps)); 3316 - hwtstamps->hwtstamp = ns_to_ktime(be64_to_cpu(*((u64 *)data))); 3316 + hwtstamps->hwtstamp = ns_to_ktime(get_unaligned_be64(data)); 3317 3317 3318 3318 return RX_PTP_PKT_SUC; 3319 3319 } 3320 3320 3321 3321 /** 3322 3322 * t4_rx_hststamp - Recv PTP Event Message 3323 - * @adap: the adapter 3323 + * @adapter: the adapter 3324 3324 * @rsp: the response queue descriptor holding the RX_PKT message 3325 + * @rxq: the response queue holding the RX_PKT message 3325 3326 * @skb: the packet 
3326 3327 * 3327 3328 * PTP enabled and MPS packet, read HW timestamp ··· 3346 3345 3347 3346 /** 3348 3347 * t4_tx_hststamp - Loopback PTP Transmit Event Message 3349 - * @adap: the adapter 3348 + * @adapter: the adapter 3350 3349 * @skb: the packet 3351 3350 * @dev: the ingress net device 3352 3351 *
+2
drivers/net/ethernet/chelsio/cxgb4/smt.c
··· 103 103 } 104 104 105 105 /** 106 + * cxgb4_smt_release - Release SMT entry 106 107 * @e: smt entry to release 107 108 * 108 109 * Releases ref count and frees up an smt entry from SMT table ··· 232 231 } 233 232 234 233 /** 234 + * cxgb4_smt_alloc_switching - Allocates an SMT entry for switch filters. 235 235 * @dev: net_device pointer 236 236 * @smac: MAC address to add to SMT 237 237 * Returns pointer to the SMT entry created
+18 -18
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 3163 3163 3164 3164 /** 3165 3165 * t4_get_exprom_version - return the Expansion ROM version (if any) 3166 - * @adapter: the adapter 3166 + * @adap: the adapter 3167 3167 * @vers: where to place the version 3168 3168 * 3169 3169 * Reads the Expansion ROM header from FLASH and returns the version ··· 5310 5310 * @cmd: TP fw ldst address space type 5311 5311 * @vals: where the indirect register values are stored/written 5312 5312 * @nregs: how many indirect registers to read/write 5313 - * @start_idx: index of first indirect register to read/write 5313 + * @start_index: index of first indirect register to read/write 5314 5314 * @rw: Read (1) or Write (0) 5315 5315 * @sleep_ok: if true we may sleep while awaiting command completion 5316 5316 * ··· 6115 6115 6116 6116 /** 6117 6117 * compute_mps_bg_map - compute the MPS Buffer Group Map for a Port 6118 - * @adap: the adapter 6118 + * @adapter: the adapter 6119 6119 * @pidx: the port index 6120 6120 * 6121 6121 * Computes and returns a bitmap indicating which MPS buffer groups are ··· 6252 6252 6253 6253 /** 6254 6254 * t4_get_tp_ch_map - return TP ingress channels associated with a port 6255 - * @adapter: the adapter 6255 + * @adap: the adapter 6256 6256 * @pidx: the port index 6257 6257 * 6258 6258 * Returns a bitmap indicating which TP Ingress Channels are associated ··· 6589 6589 * @phy_addr: the PHY address 6590 6590 * @mmd: the PHY MMD to access (0 for clause 22 PHYs) 6591 6591 * @reg: the register to write 6592 - * @valp: value to write 6592 + * @val: value to write 6593 6593 * 6594 6594 * Issues a FW command through the given mailbox to write a PHY register. 
6595 6595 */ ··· 6615 6615 6616 6616 /** 6617 6617 * t4_sge_decode_idma_state - decode the idma state 6618 - * @adap: the adapter 6618 + * @adapter: the adapter 6619 6619 * @state: the state idma is stuck in 6620 6620 */ 6621 6621 void t4_sge_decode_idma_state(struct adapter *adapter, int state) ··· 6782 6782 * t4_sge_ctxt_flush - flush the SGE context cache 6783 6783 * @adap: the adapter 6784 6784 * @mbox: mailbox to use for the FW command 6785 - * @ctx_type: Egress or Ingress 6785 + * @ctxt_type: Egress or Ingress 6786 6786 * 6787 6787 * Issues a FW command through the given mailbox to flush the 6788 6788 * SGE context cache. ··· 6809 6809 6810 6810 /** 6811 6811 * t4_read_sge_dbqtimers - read SGE Doorbell Queue Timer values 6812 - * @adap - the adapter 6812 + * @adap: the adapter 6813 6813 * @ndbqtimers: size of the provided SGE Doorbell Queue Timer table 6814 6814 * @dbqtimers: SGE Doorbell Queue Timer table 6815 6815 * ··· 7092 7092 /** 7093 7093 * t4_fw_restart - restart the firmware by taking the uP out of RESET 7094 7094 * @adap: the adapter 7095 + * @mbox: mailbox to use for the FW command 7095 7096 * @reset: if we want to do a RESET to restart things 7096 7097 * 7097 7098 * Restart firmware previously halted by t4_fw_halt(). On successful ··· 7631 7630 * @nmac: number of MAC addresses needed (1 to 5) 7632 7631 * @mac: the MAC addresses of the VI 7633 7632 * @rss_size: size of RSS table slice associated with this VI 7633 + * @vivld: the destination to store the VI Valid value. 7634 + * @vin: the destination to store the VIN value. 7634 7635 * 7635 7636 * Allocates a virtual interface for the given physical port. If @mac is 7636 7637 * not %NULL it contains the MAC addresses of the VI as assigned by FW. 
··· 7851 7848 * t4_alloc_encap_mac_filt - Adds a mac entry in mps tcam with VNI support 7852 7849 * @adap: the adapter 7853 7850 * @viid: the VI id 7854 - * @mac: the MAC address 7851 + * @addr: the MAC address 7855 7852 * @mask: the mask 7856 7853 * @vni: the VNI id for the tunnel protocol 7857 7854 * @vni_mask: mask for the VNI id ··· 7900 7897 * t4_alloc_raw_mac_filt - Adds a mac entry in mps tcam 7901 7898 * @adap: the adapter 7902 7899 * @viid: the VI id 7903 - * @mac: the MAC address 7900 + * @addr: the MAC address 7904 7901 * @mask: the mask 7905 7902 * @idx: index at which to add this entry 7906 - * @port_id: the port index 7907 7903 * @lookup_type: MAC address for inner (1) or outer (0) header 7904 + * @port_id: the port index 7908 7905 * @sleep_ok: call is allowed to sleep 7909 7906 * 7910 7907 * Adds the mac entry at the specified index using raw mac interface. ··· 8129 8126 * @idx: index of existing filter for old value of MAC address, or -1 8130 8127 * @addr: the new MAC address value 8131 8128 * @persist: whether a new MAC allocation should be persistent 8132 - * @add_smt: if true also add the address to the HW SMT 8129 + * @smt_idx: the destination to store the new SMT index. 8133 8130 * 8134 8131 * Modifies an exact-match filter and sets it to the new MAC address. 8135 8132 * Note that in general it is not possible to modify the value of a given ··· 8451 8448 8452 8449 /** 8453 8450 * t4_link_down_rc_str - return a string for a Link Down Reason Code 8454 - * @adap: the adapter 8455 8451 * @link_down_rc: Link Down Reason Code 8456 8452 * 8457 8453 * Returns a string representation of the Link Down Reason Code. ··· 8474 8472 return reason[link_down_rc]; 8475 8473 } 8476 8474 8477 - /** 8478 - * Return the highest speed set in the port capabilities, in Mb/s. 8479 - */ 8475 + /* Return the highest speed set in the port capabilities, in Mb/s. 
*/ 8480 8476 static unsigned int fwcap_to_speed(fw_port_cap32_t caps) 8481 8477 { 8482 8478 #define TEST_SPEED_RETURN(__caps_speed, __speed) \ ··· 9110 9110 /** 9111 9111 * t4_prep_adapter - prepare SW and HW for operation 9112 9112 * @adapter: the adapter 9113 - * @reset: if true perform a HW reset 9114 9113 * 9115 9114 * Initialize adapter SW state for the various HW modules, set initial 9116 9115 * values for some adapter tunables, take PHYs out of reset, and ··· 10394 10395 /** 10395 10396 * t4_i2c_rd - read I2C data from adapter 10396 10397 * @adap: the adapter 10398 + * @mbox: mailbox to use for the FW command 10397 10399 * @port: Port number if per-port device; <0 if not 10398 10400 * @devid: per-port device ID or absolute device ID 10399 10401 * @offset: byte offset into device I2C space ··· 10450 10450 10451 10451 /** 10452 10452 * t4_set_vlan_acl - Set a VLAN id for the specified VF 10453 - * @adapter: the adapter 10453 + * @adap: the adapter 10454 10454 * @mbox: mailbox to use for the FW command 10455 10455 * @vf: one of the VFs instantiated by the specified PF 10456 10456 * @vlan: The vlanid to be set
+1 -2
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
··· 260 260 * @tcam_idx: TCAM index of existing filter for old value of MAC address, 261 261 * or -1 262 262 * @addr: the new MAC address value 263 - * @persist: whether a new MAC allocation should be persistent 264 - * @add_smt: if true also add the address to the HW SMT 263 + * @persistent: whether a new MAC allocation should be persistent 265 264 * 266 265 * Modifies an MPS filter and sets it to the new MAC address if 267 266 * @tcam_idx >= 0, or adds the MAC address to a new filter if
+4 -3
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
··· 1692 1692 * restore_rx_bufs - put back a packet's RX buffers 1693 1693 * @gl: the packet gather list 1694 1694 * @fl: the SGE Free List 1695 - * @nfrags: how many fragments in @si 1695 + * @frags: how many fragments in @si 1696 1696 * 1697 1697 * Called when we find out that the current packet, @si, can't be 1698 1698 * processed right away for some reason. This is a very rare event and ··· 2054 2054 2055 2055 /** 2056 2056 * sge_rx_timer_cb - perform periodic maintenance of SGE RX queues 2057 - * @data: the adapter 2057 + * @t: Rx timer 2058 2058 * 2059 2059 * Runs periodically from a timer to perform maintenance of SGE RX queues. 2060 2060 * ··· 2113 2113 2114 2114 /** 2115 2115 * sge_tx_timer_cb - perform periodic maintenance of SGE Tx queues 2116 - * @data: the adapter 2116 + * @t: Tx timer 2117 2117 * 2118 2118 * Runs periodically from a timer to perform maintenance of SGE TX queues. 2119 2119 * ··· 2405 2405 * t4vf_sge_alloc_eth_txq - allocate an SGE Ethernet TX Queue 2406 2406 * @adapter: the adapter 2407 2407 * @txq: pointer to the new txq to be filled in 2408 + * @dev: the network device 2408 2409 * @devq: the network TX queue associated with the new txq 2409 2410 * @iqid: the relative ingress queue ID to which events relating to 2410 2411 * the new txq should be directed
+3 -6
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
··· 389 389 return cc_fec; 390 390 } 391 391 392 - /** 393 - * Return the highest speed set in the port capabilities, in Mb/s. 394 - */ 392 + /* Return the highest speed set in the port capabilities, in Mb/s. */ 395 393 static unsigned int fwcap_to_speed(fw_port_cap32_t caps) 396 394 { 397 395 #define TEST_SPEED_RETURN(__caps_speed, __speed) \ ··· 1465 1467 * @bcast: 1 to enable broadcast Rx, 0 to disable it, -1 no change 1466 1468 * @vlanex: 1 to enable hardware VLAN Tag extraction, 0 to disable it, 1467 1469 * -1 no change 1470 + * @sleep_ok: call is allowed to sleep 1468 1471 * 1469 1472 * Sets Rx properties of a virtual interface. 1470 1473 */ ··· 1905 1906 /** 1906 1907 * t4vf_handle_get_port_info - process a FW reply message 1907 1908 * @pi: the port info 1908 - * @rpl: start of the FW message 1909 + * @cmd: start of the FW message 1909 1910 * 1910 1911 * Processes a GET_PORT_INFO FW reply message. 1911 1912 */ ··· 2136 2137 return 0; 2137 2138 } 2138 2139 2139 - /** 2140 - */ 2141 2140 int t4vf_prep_adapter(struct adapter *adapter) 2142 2141 { 2143 2142 int err;
+26
drivers/net/ethernet/freescale/enetc/enetc.c
··· 1595 1595 return 0; 1596 1596 } 1597 1597 1598 + static void enetc_enable_rxvlan(struct net_device *ndev, bool en) 1599 + { 1600 + struct enetc_ndev_priv *priv = netdev_priv(ndev); 1601 + int i; 1602 + 1603 + for (i = 0; i < priv->num_rx_rings; i++) 1604 + enetc_bdr_enable_rxvlan(&priv->si->hw, i, en); 1605 + } 1606 + 1607 + static void enetc_enable_txvlan(struct net_device *ndev, bool en) 1608 + { 1609 + struct enetc_ndev_priv *priv = netdev_priv(ndev); 1610 + int i; 1611 + 1612 + for (i = 0; i < priv->num_tx_rings; i++) 1613 + enetc_bdr_enable_txvlan(&priv->si->hw, i, en); 1614 + } 1615 + 1598 1616 int enetc_set_features(struct net_device *ndev, 1599 1617 netdev_features_t features) 1600 1618 { ··· 1621 1603 1622 1604 if (changed & NETIF_F_RXHASH) 1623 1605 enetc_set_rss(ndev, !!(features & NETIF_F_RXHASH)); 1606 + 1607 + if (changed & NETIF_F_HW_VLAN_CTAG_RX) 1608 + enetc_enable_rxvlan(ndev, 1609 + !!(features & NETIF_F_HW_VLAN_CTAG_RX)); 1610 + 1611 + if (changed & NETIF_F_HW_VLAN_CTAG_TX) 1612 + enetc_enable_txvlan(ndev, 1613 + !!(features & NETIF_F_HW_VLAN_CTAG_TX)); 1624 1614 1625 1615 if (changed & NETIF_F_HW_TC) 1626 1616 err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
+8 -8
drivers/net/ethernet/freescale/enetc/enetc_hw.h
··· 531 531 532 532 /* Common H/W utility functions */ 533 533 534 - static inline void enetc_enable_rxvlan(struct enetc_hw *hw, int si_idx, 535 - bool en) 534 + static inline void enetc_bdr_enable_rxvlan(struct enetc_hw *hw, int idx, 535 + bool en) 536 536 { 537 - u32 val = enetc_rxbdr_rd(hw, si_idx, ENETC_RBMR); 537 + u32 val = enetc_rxbdr_rd(hw, idx, ENETC_RBMR); 538 538 539 539 val = (val & ~ENETC_RBMR_VTE) | (en ? ENETC_RBMR_VTE : 0); 540 - enetc_rxbdr_wr(hw, si_idx, ENETC_RBMR, val); 540 + enetc_rxbdr_wr(hw, idx, ENETC_RBMR, val); 541 541 } 542 542 543 - static inline void enetc_enable_txvlan(struct enetc_hw *hw, int si_idx, 544 - bool en) 543 + static inline void enetc_bdr_enable_txvlan(struct enetc_hw *hw, int idx, 544 + bool en) 545 545 { 546 - u32 val = enetc_txbdr_rd(hw, si_idx, ENETC_TBMR); 546 + u32 val = enetc_txbdr_rd(hw, idx, ENETC_TBMR); 547 547 548 548 val = (val & ~ENETC_TBMR_VIH) | (en ? ENETC_TBMR_VIH : 0); 549 - enetc_txbdr_wr(hw, si_idx, ENETC_TBMR, val); 549 + enetc_txbdr_wr(hw, idx, ENETC_TBMR, val); 550 550 } 551 551 552 552 static inline void enetc_set_bdr_prio(struct enetc_hw *hw, int bdr_idx,
-8
drivers/net/ethernet/freescale/enetc/enetc_pf.c
··· 649 649 netdev_features_t changed = ndev->features ^ features; 650 650 struct enetc_ndev_priv *priv = netdev_priv(ndev); 651 651 652 - if (changed & NETIF_F_HW_VLAN_CTAG_RX) 653 - enetc_enable_rxvlan(&priv->si->hw, 0, 654 - !!(features & NETIF_F_HW_VLAN_CTAG_RX)); 655 - 656 - if (changed & NETIF_F_HW_VLAN_CTAG_TX) 657 - enetc_enable_txvlan(&priv->si->hw, 0, 658 - !!(features & NETIF_F_HW_VLAN_CTAG_TX)); 659 - 660 652 if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) { 661 653 struct enetc_pf *pf = enetc_si_priv(priv->si); 662 654
+1 -1
drivers/net/ethernet/hisilicon/hns/hns_enet.c
··· 699 699 struct net_device *ndev = ring_data->napi.dev; 700 700 701 701 skb->protocol = eth_type_trans(skb, ndev); 702 - (void)napi_gro_receive(&ring_data->napi, skb); 702 + napi_gro_receive(&ring_data->napi, skb); 703 703 } 704 704 705 705 static int hns_desc_unused(struct hnae_ring *ring)
+1 -1
drivers/net/ethernet/ibm/ibmveth.c
··· 1715 1715 } 1716 1716 1717 1717 netdev->min_mtu = IBMVETH_MIN_MTU; 1718 - netdev->max_mtu = ETH_MAX_MTU; 1718 + netdev->max_mtu = ETH_MAX_MTU - IBMVETH_BUFF_OH; 1719 1719 1720 1720 memcpy(netdev->dev_addr, mac_addr_p, ETH_ALEN); 1721 1721
+7 -2
drivers/net/ethernet/ibm/ibmvnic.c
··· 1971 1971 release_sub_crqs(adapter, 1); 1972 1972 } else { 1973 1973 rc = ibmvnic_reset_crq(adapter); 1974 - if (!rc) 1974 + if (rc == H_CLOSED || rc == H_SUCCESS) { 1975 1975 rc = vio_enable_interrupts(adapter->vdev); 1976 + if (rc) 1977 + netdev_err(adapter->netdev, 1978 + "Reset failed to enable interrupts. rc=%d\n", 1979 + rc); 1980 + } 1976 1981 } 1977 1982 1978 1983 if (rc) { 1979 1984 netdev_err(adapter->netdev, 1980 - "Couldn't initialize crq. rc=%d\n", rc); 1985 + "Reset couldn't initialize crq. rc=%d\n", rc); 1981 1986 goto out; 1982 1987 } 1983 1988
+3
drivers/net/ethernet/intel/i40e/i40e_ethtool.c
··· 2072 2072 err = i40e_setup_rx_descriptors(&rx_rings[i]); 2073 2073 if (err) 2074 2074 goto rx_unwind; 2075 + err = i40e_alloc_rx_bi(&rx_rings[i]); 2076 + if (err) 2077 + goto rx_unwind; 2075 2078 2076 2079 /* now allocate the Rx buffers to make sure the OS 2077 2080 * has enough memory, any failure here means abort
+19 -10
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 439 439 i40e_get_netdev_stats_struct_tx(ring, stats); 440 440 441 441 if (i40e_enabled_xdp_vsi(vsi)) { 442 - ring++; 442 + ring = READ_ONCE(vsi->xdp_rings[i]); 443 + if (!ring) 444 + continue; 443 445 i40e_get_netdev_stats_struct_tx(ring, stats); 444 446 } 445 447 446 - ring++; 448 + ring = READ_ONCE(vsi->rx_rings[i]); 449 + if (!ring) 450 + continue; 447 451 do { 448 452 start = u64_stats_fetch_begin_irq(&ring->syncp); 449 453 packets = ring->stats.packets; ··· 791 787 for (q = 0; q < vsi->num_queue_pairs; q++) { 792 788 /* locate Tx ring */ 793 789 p = READ_ONCE(vsi->tx_rings[q]); 790 + if (!p) 791 + continue; 794 792 795 793 do { 796 794 start = u64_stats_fetch_begin_irq(&p->syncp); ··· 806 800 tx_linearize += p->tx_stats.tx_linearize; 807 801 tx_force_wb += p->tx_stats.tx_force_wb; 808 802 809 - /* Rx queue is part of the same block as Tx queue */ 810 - p = &p[1]; 803 + /* locate Rx ring */ 804 + p = READ_ONCE(vsi->rx_rings[q]); 805 + if (!p) 806 + continue; 807 + 811 808 do { 812 809 start = u64_stats_fetch_begin_irq(&p->syncp); 813 810 packets = p->stats.packets; ··· 10833 10824 if (vsi->tx_rings && vsi->tx_rings[0]) { 10834 10825 for (i = 0; i < vsi->alloc_queue_pairs; i++) { 10835 10826 kfree_rcu(vsi->tx_rings[i], rcu); 10836 - vsi->tx_rings[i] = NULL; 10837 - vsi->rx_rings[i] = NULL; 10827 + WRITE_ONCE(vsi->tx_rings[i], NULL); 10828 + WRITE_ONCE(vsi->rx_rings[i], NULL); 10838 10829 if (vsi->xdp_rings) 10839 - vsi->xdp_rings[i] = NULL; 10830 + WRITE_ONCE(vsi->xdp_rings[i], NULL); 10840 10831 } 10841 10832 } 10842 10833 } ··· 10870 10861 if (vsi->back->hw_features & I40E_HW_WB_ON_ITR_CAPABLE) 10871 10862 ring->flags = I40E_TXR_FLAGS_WB_ON_ITR; 10872 10863 ring->itr_setting = pf->tx_itr_default; 10873 - vsi->tx_rings[i] = ring++; 10864 + WRITE_ONCE(vsi->tx_rings[i], ring++); 10874 10865 10875 10866 if (!i40e_enabled_xdp_vsi(vsi)) 10876 10867 goto setup_rx; ··· 10888 10879 ring->flags = I40E_TXR_FLAGS_WB_ON_ITR; 10889 10880 set_ring_xdp(ring); 10890 10881 
ring->itr_setting = pf->tx_itr_default; 10891 - vsi->xdp_rings[i] = ring++; 10882 + WRITE_ONCE(vsi->xdp_rings[i], ring++); 10892 10883 10893 10884 setup_rx: 10894 10885 ring->queue_index = i; ··· 10901 10892 ring->size = 0; 10902 10893 ring->dcb_tc = 0; 10903 10894 ring->itr_setting = pf->rx_itr_default; 10904 - vsi->rx_rings[i] = ring; 10895 + WRITE_ONCE(vsi->rx_rings[i], ring); 10905 10896 } 10906 10897 10907 10898 return 0;
+4 -4
drivers/net/ethernet/intel/ice/ice_lib.c
··· 1194 1194 for (i = 0; i < vsi->alloc_txq; i++) { 1195 1195 if (vsi->tx_rings[i]) { 1196 1196 kfree_rcu(vsi->tx_rings[i], rcu); 1197 - vsi->tx_rings[i] = NULL; 1197 + WRITE_ONCE(vsi->tx_rings[i], NULL); 1198 1198 } 1199 1199 } 1200 1200 } ··· 1202 1202 for (i = 0; i < vsi->alloc_rxq; i++) { 1203 1203 if (vsi->rx_rings[i]) { 1204 1204 kfree_rcu(vsi->rx_rings[i], rcu); 1205 - vsi->rx_rings[i] = NULL; 1205 + WRITE_ONCE(vsi->rx_rings[i], NULL); 1206 1206 } 1207 1207 } 1208 1208 } ··· 1235 1235 ring->vsi = vsi; 1236 1236 ring->dev = dev; 1237 1237 ring->count = vsi->num_tx_desc; 1238 - vsi->tx_rings[i] = ring; 1238 + WRITE_ONCE(vsi->tx_rings[i], ring); 1239 1239 } 1240 1240 1241 1241 /* Allocate Rx rings */ ··· 1254 1254 ring->netdev = vsi->netdev; 1255 1255 ring->dev = dev; 1256 1256 ring->count = vsi->num_rx_desc; 1257 - vsi->rx_rings[i] = ring; 1257 + WRITE_ONCE(vsi->rx_rings[i], ring); 1258 1258 } 1259 1259 1260 1260 return 0;
+1 -1
drivers/net/ethernet/intel/ice/ice_main.c
··· 1702 1702 xdp_ring->netdev = NULL; 1703 1703 xdp_ring->dev = dev; 1704 1704 xdp_ring->count = vsi->num_tx_desc; 1705 - vsi->xdp_rings[i] = xdp_ring; 1705 + WRITE_ONCE(vsi->xdp_rings[i], xdp_ring); 1706 1706 if (ice_setup_tx_ring(xdp_ring)) 1707 1707 goto free_xdp_rings; 1708 1708 ice_set_ring_xdp(xdp_ring);
+6 -6
drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
··· 921 921 ring->queue_index = txr_idx; 922 922 923 923 /* assign ring to adapter */ 924 - adapter->tx_ring[txr_idx] = ring; 924 + WRITE_ONCE(adapter->tx_ring[txr_idx], ring); 925 925 926 926 /* update count and index */ 927 927 txr_count--; ··· 948 948 set_ring_xdp(ring); 949 949 950 950 /* assign ring to adapter */ 951 - adapter->xdp_ring[xdp_idx] = ring; 951 + WRITE_ONCE(adapter->xdp_ring[xdp_idx], ring); 952 952 953 953 /* update count and index */ 954 954 xdp_count--; ··· 991 991 ring->queue_index = rxr_idx; 992 992 993 993 /* assign ring to adapter */ 994 - adapter->rx_ring[rxr_idx] = ring; 994 + WRITE_ONCE(adapter->rx_ring[rxr_idx], ring); 995 995 996 996 /* update count and index */ 997 997 rxr_count--; ··· 1020 1020 1021 1021 ixgbe_for_each_ring(ring, q_vector->tx) { 1022 1022 if (ring_is_xdp(ring)) 1023 - adapter->xdp_ring[ring->queue_index] = NULL; 1023 + WRITE_ONCE(adapter->xdp_ring[ring->queue_index], NULL); 1024 1024 else 1025 - adapter->tx_ring[ring->queue_index] = NULL; 1025 + WRITE_ONCE(adapter->tx_ring[ring->queue_index], NULL); 1026 1026 } 1027 1027 1028 1028 ixgbe_for_each_ring(ring, q_vector->rx) 1029 - adapter->rx_ring[ring->queue_index] = NULL; 1029 + WRITE_ONCE(adapter->rx_ring[ring->queue_index], NULL); 1030 1030 1031 1031 adapter->q_vector[v_idx] = NULL; 1032 1032 napi_hash_del(&q_vector->napi);
+11 -3
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 7051 7051 } 7052 7052 7053 7053 for (i = 0; i < adapter->num_rx_queues; i++) { 7054 - struct ixgbe_ring *rx_ring = adapter->rx_ring[i]; 7054 + struct ixgbe_ring *rx_ring = READ_ONCE(adapter->rx_ring[i]); 7055 + 7056 + if (!rx_ring) 7057 + continue; 7055 7058 non_eop_descs += rx_ring->rx_stats.non_eop_descs; 7056 7059 alloc_rx_page += rx_ring->rx_stats.alloc_rx_page; 7057 7060 alloc_rx_page_failed += rx_ring->rx_stats.alloc_rx_page_failed; ··· 7075 7072 packets = 0; 7076 7073 /* gather some stats to the adapter struct that are per queue */ 7077 7074 for (i = 0; i < adapter->num_tx_queues; i++) { 7078 - struct ixgbe_ring *tx_ring = adapter->tx_ring[i]; 7075 + struct ixgbe_ring *tx_ring = READ_ONCE(adapter->tx_ring[i]); 7076 + 7077 + if (!tx_ring) 7078 + continue; 7079 7079 restart_queue += tx_ring->tx_stats.restart_queue; 7080 7080 tx_busy += tx_ring->tx_stats.tx_busy; 7081 7081 bytes += tx_ring->stats.bytes; 7082 7082 packets += tx_ring->stats.packets; 7083 7083 } 7084 7084 for (i = 0; i < adapter->num_xdp_queues; i++) { 7085 - struct ixgbe_ring *xdp_ring = adapter->xdp_ring[i]; 7085 + struct ixgbe_ring *xdp_ring = READ_ONCE(adapter->xdp_ring[i]); 7086 7086 7087 + if (!xdp_ring) 7088 + continue; 7087 7089 restart_queue += xdp_ring->tx_stats.restart_queue; 7088 7090 tx_busy += xdp_ring->tx_stats.tx_busy; 7089 7091 bytes += xdp_ring->stats.bytes;
+53 -23
drivers/net/ethernet/marvell/mvneta.c
··· 106 106 #define MVNETA_TX_IN_PRGRS BIT(1) 107 107 #define MVNETA_TX_FIFO_EMPTY BIT(8) 108 108 #define MVNETA_RX_MIN_FRAME_SIZE 0x247c 109 + /* Only exists on Armada XP and Armada 370 */ 109 110 #define MVNETA_SERDES_CFG 0x24A0 110 111 #define MVNETA_SGMII_SERDES_PROTO 0x0cc7 111 112 #define MVNETA_QSGMII_SERDES_PROTO 0x0667 113 + #define MVNETA_HSGMII_SERDES_PROTO 0x1107 112 114 #define MVNETA_TYPE_PRIO 0x24bc 113 115 #define MVNETA_FORCE_UNI BIT(21) 114 116 #define MVNETA_TXQ_CMD_1 0x24e4 ··· 3531 3529 return 0; 3532 3530 } 3533 3531 3534 - static int mvneta_comphy_init(struct mvneta_port *pp) 3532 + static int mvneta_comphy_init(struct mvneta_port *pp, phy_interface_t interface) 3535 3533 { 3536 3534 int ret; 3537 3535 3538 - if (!pp->comphy) 3539 - return 0; 3540 - 3541 - ret = phy_set_mode_ext(pp->comphy, PHY_MODE_ETHERNET, 3542 - pp->phy_interface); 3536 + ret = phy_set_mode_ext(pp->comphy, PHY_MODE_ETHERNET, interface); 3543 3537 if (ret) 3544 3538 return ret; 3545 3539 3546 3540 return phy_power_on(pp->comphy); 3547 3541 } 3548 3542 3543 + static int mvneta_config_interface(struct mvneta_port *pp, 3544 + phy_interface_t interface) 3545 + { 3546 + int ret = 0; 3547 + 3548 + if (pp->comphy) { 3549 + if (interface == PHY_INTERFACE_MODE_SGMII || 3550 + interface == PHY_INTERFACE_MODE_1000BASEX || 3551 + interface == PHY_INTERFACE_MODE_2500BASEX) { 3552 + ret = mvneta_comphy_init(pp, interface); 3553 + } 3554 + } else { 3555 + switch (interface) { 3556 + case PHY_INTERFACE_MODE_QSGMII: 3557 + mvreg_write(pp, MVNETA_SERDES_CFG, 3558 + MVNETA_QSGMII_SERDES_PROTO); 3559 + break; 3560 + 3561 + case PHY_INTERFACE_MODE_SGMII: 3562 + case PHY_INTERFACE_MODE_1000BASEX: 3563 + mvreg_write(pp, MVNETA_SERDES_CFG, 3564 + MVNETA_SGMII_SERDES_PROTO); 3565 + break; 3566 + 3567 + case PHY_INTERFACE_MODE_2500BASEX: 3568 + mvreg_write(pp, MVNETA_SERDES_CFG, 3569 + MVNETA_HSGMII_SERDES_PROTO); 3570 + break; 3571 + default: 3572 + break; 3573 + } 3574 + } 3575 + 3576 + 
pp->phy_interface = interface; 3577 + 3578 + return ret; 3579 + } 3580 + 3549 3581 static void mvneta_start_dev(struct mvneta_port *pp) 3550 3582 { 3551 3583 int cpu; 3552 3584 3553 - WARN_ON(mvneta_comphy_init(pp)); 3585 + WARN_ON(mvneta_config_interface(pp, pp->phy_interface)); 3554 3586 3555 3587 mvneta_max_rx_size_set(pp, pp->pkt_size); 3556 3588 mvneta_txq_max_tx_size_set(pp, pp->pkt_size); ··· 3962 3926 if (state->speed == SPEED_2500) 3963 3927 new_ctrl4 |= MVNETA_GMAC4_SHORT_PREAMBLE_ENABLE; 3964 3928 3965 - if (pp->comphy && pp->phy_interface != state->interface && 3966 - (state->interface == PHY_INTERFACE_MODE_SGMII || 3967 - state->interface == PHY_INTERFACE_MODE_1000BASEX || 3968 - state->interface == PHY_INTERFACE_MODE_2500BASEX)) { 3969 - pp->phy_interface = state->interface; 3970 - 3971 - WARN_ON(phy_power_off(pp->comphy)); 3972 - WARN_ON(mvneta_comphy_init(pp)); 3929 + if (pp->phy_interface != state->interface) { 3930 + if (pp->comphy) 3931 + WARN_ON(phy_power_off(pp->comphy)); 3932 + WARN_ON(mvneta_config_interface(pp, state->interface)); 3973 3933 } 3974 3934 3975 3935 if (new_ctrl0 != gmac_ctrl0) ··· 5014 4982 /* MAC Cause register should be cleared */ 5015 4983 mvreg_write(pp, MVNETA_UNIT_INTR_CAUSE, 0); 5016 4984 5017 - if (phy_mode == PHY_INTERFACE_MODE_QSGMII) 5018 - mvreg_write(pp, MVNETA_SERDES_CFG, MVNETA_QSGMII_SERDES_PROTO); 5019 - else if (phy_mode == PHY_INTERFACE_MODE_SGMII || 5020 - phy_interface_mode_is_8023z(phy_mode)) 5021 - mvreg_write(pp, MVNETA_SERDES_CFG, MVNETA_SGMII_SERDES_PROTO); 5022 - else if (!phy_interface_mode_is_rgmii(phy_mode)) 4985 + if (phy_mode != PHY_INTERFACE_MODE_QSGMII && 4986 + phy_mode != PHY_INTERFACE_MODE_SGMII && 4987 + !phy_interface_mode_is_8023z(phy_mode) && 4988 + !phy_interface_mode_is_rgmii(phy_mode)) 5023 4989 return -EINVAL; 5024 4990 5025 4991 return 0; ··· 5206 5176 if (err < 0) 5207 5177 goto err_netdev; 5208 5178 5209 - err = mvneta_port_power_up(pp, phy_mode); 5179 + err = 
mvneta_port_power_up(pp, pp->phy_interface); 5210 5180 if (err < 0) { 5211 5181 dev_err(&pdev->dev, "can't power up port\n"); 5212 - goto err_netdev; 5182 + return err; 5213 5183 } 5214 5184 5215 5185 /* Armada3700 network controller does not support per-cpu
+16 -8
drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
··· 407 407 mlx5e_rep_indr_setup_block(struct net_device *netdev, 408 408 struct mlx5e_rep_priv *rpriv, 409 409 struct flow_block_offload *f, 410 - flow_setup_cb_t *setup_cb) 410 + flow_setup_cb_t *setup_cb, 411 + void *data, 412 + void (*cleanup)(struct flow_block_cb *block_cb)) 411 413 { 412 414 struct mlx5e_priv *priv = netdev_priv(rpriv->netdev); 413 415 struct mlx5e_rep_indr_block_priv *indr_priv; ··· 440 438 list_add(&indr_priv->list, 441 439 &rpriv->uplink_priv.tc_indr_block_priv_list); 442 440 443 - block_cb = flow_block_cb_alloc(setup_cb, indr_priv, indr_priv, 444 - mlx5e_rep_indr_block_unbind); 441 + block_cb = flow_indr_block_cb_alloc(setup_cb, indr_priv, indr_priv, 442 + mlx5e_rep_indr_block_unbind, 443 + f, netdev, data, rpriv, 444 + cleanup); 445 445 if (IS_ERR(block_cb)) { 446 446 list_del(&indr_priv->list); 447 447 kfree(indr_priv); ··· 462 458 if (!block_cb) 463 459 return -ENOENT; 464 460 465 - flow_block_cb_remove(block_cb, f); 461 + flow_indr_block_cb_remove(block_cb, f); 466 462 list_del(&block_cb->driver_list); 467 463 return 0; 468 464 default: ··· 473 469 474 470 static 475 471 int mlx5e_rep_indr_setup_cb(struct net_device *netdev, void *cb_priv, 476 - enum tc_setup_type type, void *type_data) 472 + enum tc_setup_type type, void *type_data, 473 + void *data, 474 + void (*cleanup)(struct flow_block_cb *block_cb)) 477 475 { 478 476 switch (type) { 479 477 case TC_SETUP_BLOCK: 480 478 return mlx5e_rep_indr_setup_block(netdev, cb_priv, type_data, 481 - mlx5e_rep_indr_setup_tc_cb); 479 + mlx5e_rep_indr_setup_tc_cb, 480 + data, cleanup); 482 481 case TC_SETUP_FT: 483 482 return mlx5e_rep_indr_setup_block(netdev, cb_priv, type_data, 484 - mlx5e_rep_indr_setup_ft_cb); 483 + mlx5e_rep_indr_setup_ft_cb, 484 + data, cleanup); 485 485 default: 486 486 return -EOPNOTSUPP; 487 487 } ··· 504 496 void mlx5e_rep_tc_netdevice_event_unregister(struct mlx5e_rep_priv *rpriv) 505 497 { 506 498 flow_indr_dev_unregister(mlx5e_rep_indr_setup_cb, rpriv, 507 - 
mlx5e_rep_indr_setup_tc_cb); 499 + mlx5e_rep_indr_block_unbind); 508 500 } 509 501 510 502 #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
+2 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 978 978 979 979 lossy = !(pfc || pause_en); 980 980 thres_cells = mlxsw_sp_pg_buf_threshold_get(mlxsw_sp, mtu); 981 - mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &thres_cells); 981 + thres_cells = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, thres_cells); 982 982 delay_cells = mlxsw_sp_pg_buf_delay_get(mlxsw_sp, mtu, delay, 983 983 pfc, pause_en); 984 - mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &delay_cells); 984 + delay_cells = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, delay_cells); 985 985 total_cells = thres_cells + delay_cells; 986 986 987 987 taken_headroom_cells += total_cells;
+3 -5
drivers/net/ethernet/mellanox/mlxsw/spectrum.h
··· 374 374 return NULL; 375 375 } 376 376 377 - static inline void 377 + static inline u32 378 378 mlxsw_sp_port_headroom_8x_adjust(const struct mlxsw_sp_port *mlxsw_sp_port, 379 - u16 *p_size) 379 + u32 size_cells) 380 380 { 381 381 /* Ports with eight lanes use two headroom buffers between which the 382 382 * configured headroom size is split. Therefore, multiply the calculated 383 383 * headroom size by two. 384 384 */ 385 - if (mlxsw_sp_port->mapping.width != 8) 386 - return; 387 - *p_size *= 2; 385 + return mlxsw_sp_port->mapping.width == 8 ? 2 * size_cells : size_cells; 388 386 } 389 387 390 388 enum mlxsw_sp_flood_type {
+1 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
··· 312 312 313 313 if (i == MLXSW_SP_PB_UNUSED) 314 314 continue; 315 - mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, &size); 315 + size = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, size); 316 316 mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, i, size); 317 317 } 318 318 mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl,
+1 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
··· 782 782 speed = 0; 783 783 784 784 buffsize = mlxsw_sp_span_buffsize_get(mlxsw_sp, speed, mtu); 785 - mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, (u16 *) &buffsize); 785 + buffsize = mlxsw_sp_port_headroom_8x_adjust(mlxsw_sp_port, buffsize); 786 786 mlxsw_reg_sbib_pack(sbib_pl, mlxsw_sp_port->local_port, buffsize); 787 787 return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbib), sbib_pl); 788 788 }
+1 -1
drivers/net/ethernet/neterion/vxge/vxge-config.h
··· 297 297 * @greedy_return: If Set it forces the device to return absolutely all RxD 298 298 * that are consumed and still on board when a timer interrupt 299 299 * triggers. If Clear, then if the device has already returned 300 - * RxD before current timer interrupt trigerred and after the 300 + * RxD before current timer interrupt triggered and after the 301 301 * previous timer interrupt triggered, then the device is not 302 302 * forced to returned the rest of the consumed RxD that it has 303 303 * on board which account for a byte count less than the one
+1 -1
drivers/net/ethernet/netronome/nfp/flower/main.c
··· 861 861 flush_work(&app_priv->cmsg_work); 862 862 863 863 flow_indr_dev_unregister(nfp_flower_indr_setup_tc_cb, app, 864 - nfp_flower_setup_indr_block_cb); 864 + nfp_flower_setup_indr_tc_release); 865 865 866 866 if (app_priv->flower_ext_feats & NFP_FL_FEATS_VF_RLIM) 867 867 nfp_flower_qos_cleanup(app);
+4 -3
drivers/net/ethernet/netronome/nfp/flower/main.h
··· 459 459 struct tc_cls_matchall_offload *flow); 460 460 void nfp_flower_stats_rlim_reply(struct nfp_app *app, struct sk_buff *skb); 461 461 int nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv, 462 - enum tc_setup_type type, void *type_data); 463 - int nfp_flower_setup_indr_block_cb(enum tc_setup_type type, void *type_data, 464 - void *cb_priv); 462 + enum tc_setup_type type, void *type_data, 463 + void *data, 464 + void (*cleanup)(struct flow_block_cb *block_cb)); 465 + void nfp_flower_setup_indr_tc_release(void *cb_priv); 465 466 466 467 void 467 468 __nfp_flower_non_repr_priv_get(struct nfp_flower_non_repr_priv *non_repr_priv);
+14 -10
drivers/net/ethernet/netronome/nfp/flower/offload.c
··· 1619 1619 return NULL; 1620 1620 } 1621 1621 1622 - int nfp_flower_setup_indr_block_cb(enum tc_setup_type type, 1623 - void *type_data, void *cb_priv) 1622 + static int nfp_flower_setup_indr_block_cb(enum tc_setup_type type, 1623 + void *type_data, void *cb_priv) 1624 1624 { 1625 1625 struct nfp_flower_indr_block_cb_priv *priv = cb_priv; 1626 1626 struct flow_cls_offload *flower = type_data; ··· 1637 1637 } 1638 1638 } 1639 1639 1640 - static void nfp_flower_setup_indr_tc_release(void *cb_priv) 1640 + void nfp_flower_setup_indr_tc_release(void *cb_priv) 1641 1641 { 1642 1642 struct nfp_flower_indr_block_cb_priv *priv = cb_priv; 1643 1643 ··· 1647 1647 1648 1648 static int 1649 1649 nfp_flower_setup_indr_tc_block(struct net_device *netdev, struct nfp_app *app, 1650 - struct flow_block_offload *f) 1650 + struct flow_block_offload *f, void *data, 1651 + void (*cleanup)(struct flow_block_cb *block_cb)) 1651 1652 { 1652 1653 struct nfp_flower_indr_block_cb_priv *cb_priv; 1653 1654 struct nfp_flower_priv *priv = app->priv; ··· 1677 1676 cb_priv->app = app; 1678 1677 list_add(&cb_priv->list, &priv->indr_block_cb_priv); 1679 1678 1680 - block_cb = flow_block_cb_alloc(nfp_flower_setup_indr_block_cb, 1681 - cb_priv, cb_priv, 1682 - nfp_flower_setup_indr_tc_release); 1679 + block_cb = flow_indr_block_cb_alloc(nfp_flower_setup_indr_block_cb, 1680 + cb_priv, cb_priv, 1681 + nfp_flower_setup_indr_tc_release, 1682 + f, netdev, data, app, cleanup); 1683 1683 if (IS_ERR(block_cb)) { 1684 1684 list_del(&cb_priv->list); 1685 1685 kfree(cb_priv); ··· 1701 1699 if (!block_cb) 1702 1700 return -ENOENT; 1703 1701 1704 - flow_block_cb_remove(block_cb, f); 1702 + flow_indr_block_cb_remove(block_cb, f); 1705 1703 list_del(&block_cb->driver_list); 1706 1704 return 0; 1707 1705 default: ··· 1712 1710 1713 1711 int 1714 1712 nfp_flower_indr_setup_tc_cb(struct net_device *netdev, void *cb_priv, 1715 - enum tc_setup_type type, void *type_data) 1713 + enum tc_setup_type type, void *type_data, 
1714 + void *data, 1715 + void (*cleanup)(struct flow_block_cb *block_cb)) 1716 1716 { 1717 1717 if (!nfp_fl_is_netdev_to_offload(netdev)) 1718 1718 return -EOPNOTSUPP; ··· 1722 1718 switch (type) { 1723 1719 case TC_SETUP_BLOCK: 1724 1720 return nfp_flower_setup_indr_tc_block(netdev, cb_priv, 1725 - type_data); 1721 + type_data, data, cleanup); 1726 1722 default: 1727 1723 return -EOPNOTSUPP; 1728 1724 }
+1 -1
drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.h
··· 147 147 #define PCH_GBE_RH_ALM_FULL_8 0x00001000 /* 8 words */ 148 148 #define PCH_GBE_RH_ALM_FULL_16 0x00002000 /* 16 words */ 149 149 #define PCH_GBE_RH_ALM_FULL_32 0x00003000 /* 32 words */ 150 - /* RX FIFO Read Triger Threshold */ 150 + /* RX FIFO Read Trigger Threshold */ 151 151 #define PCH_GBE_RH_RD_TRG_4 0x00000000 /* 4 words */ 152 152 #define PCH_GBE_RH_RD_TRG_8 0x00000200 /* 8 words */ 153 153 #define PCH_GBE_RH_RD_TRG_16 0x00000400 /* 16 words */
+11 -8
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 96 96 u16 link_status; 97 97 bool link_up; 98 98 99 - if (!test_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state)) 99 + if (!test_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state) || 100 + test_bit(IONIC_LIF_F_QUEUE_RESET, lif->state)) 100 101 return; 101 102 102 103 link_status = le16_to_cpu(lif->info->status.link_status); ··· 1246 1245 1247 1246 netdev->hw_features |= netdev->hw_enc_features; 1248 1247 netdev->features |= netdev->hw_features; 1248 + netdev->vlan_features |= netdev->features & ~NETIF_F_VLAN_FEATURES; 1249 1249 1250 1250 netdev->priv_flags |= IFF_UNICAST_FLT | 1251 1251 IFF_LIVE_ADDR_CHANGE; ··· 1694 1692 if (!test_and_clear_bit(IONIC_LIF_F_UP, lif->state)) 1695 1693 return; 1696 1694 1697 - ionic_txrx_disable(lif); 1698 1695 netif_tx_disable(lif->netdev); 1696 + ionic_txrx_disable(lif); 1699 1697 } 1700 1698 1701 1699 int ionic_stop(struct net_device *netdev) 1702 1700 { 1703 1701 struct ionic_lif *lif = netdev_priv(netdev); 1704 1702 1705 - if (!netif_device_present(netdev)) 1703 + if (test_bit(IONIC_LIF_F_FW_RESET, lif->state)) 1706 1704 return 0; 1707 1705 1708 1706 ionic_stop_queues(lif); ··· 1985 1983 bool running; 1986 1984 int err = 0; 1987 1985 1988 - /* Put off the next watchdog timeout */ 1989 - netif_trans_update(lif->netdev); 1990 - 1991 1986 err = ionic_wait_for_bit(lif, IONIC_LIF_F_QUEUE_RESET); 1992 1987 if (err) 1993 1988 return err; 1994 1989 1995 1990 running = netif_running(lif->netdev); 1996 - if (running) 1991 + if (running) { 1992 + netif_device_detach(lif->netdev); 1997 1993 err = ionic_stop(lif->netdev); 1998 - if (!err && running) 1994 + } 1995 + if (!err && running) { 1999 1996 ionic_open(lif->netdev); 1997 + netif_device_attach(lif->netdev); 1998 + } 2000 1999 2001 2000 clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state); 2002 2001
+20 -1
drivers/net/ethernet/qlogic/qed/qed_cxt.c
··· 271 271 vf_tids += segs[NUM_TASK_PF_SEGMENTS].count; 272 272 } 273 273 274 - iids->vf_cids += vf_cids * p_mngr->vf_count; 274 + iids->vf_cids = vf_cids; 275 275 iids->tids += vf_tids * p_mngr->vf_count; 276 276 277 277 DP_VERBOSE(p_hwfn, QED_MSG_ILT, ··· 465 465 return p_blk; 466 466 } 467 467 468 + static void qed_cxt_ilt_blk_reset(struct qed_hwfn *p_hwfn) 469 + { 470 + struct qed_ilt_client_cfg *clients = p_hwfn->p_cxt_mngr->clients; 471 + u32 cli_idx, blk_idx; 472 + 473 + for (cli_idx = 0; cli_idx < MAX_ILT_CLIENTS; cli_idx++) { 474 + for (blk_idx = 0; blk_idx < ILT_CLI_PF_BLOCKS; blk_idx++) 475 + clients[cli_idx].pf_blks[blk_idx].total_size = 0; 476 + 477 + for (blk_idx = 0; blk_idx < ILT_CLI_VF_BLOCKS; blk_idx++) 478 + clients[cli_idx].vf_blks[blk_idx].total_size = 0; 479 + } 480 + } 481 + 468 482 int qed_cxt_cfg_ilt_compute(struct qed_hwfn *p_hwfn, u32 *line_count) 469 483 { 470 484 struct qed_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr; ··· 497 483 memset(&tm_iids, 0, sizeof(tm_iids)); 498 484 499 485 p_mngr->pf_start_line = RESC_START(p_hwfn, QED_ILT); 486 + 487 + /* Reset all ILT blocks at the beginning of ILT computing in order 488 + * to prevent memory allocation for irrelevant blocks afterwards. 489 + */ 490 + qed_cxt_ilt_blk_reset(p_hwfn); 500 491 501 492 DP_VERBOSE(p_hwfn, QED_MSG_ILT, 502 493 "hwfn [%d] - Set context manager starting line to be 0x%08x\n",
+2 -1
drivers/net/ethernet/qlogic/qed/qed_debug.c
··· 5568 5568 5569 5569 /* DBG_STATUS_INVALID_FILTER_TRIGGER_DWORDS */ 5570 5570 "The filter/trigger constraint dword offsets are not enabled for recording", 5571 - 5571 + /* DBG_STATUS_NO_MATCHING_FRAMING_MODE */ 5572 + "No matching framing mode", 5572 5573 5573 5574 /* DBG_STATUS_VFC_READ_ERROR */ 5574 5575 "Error reading from VFC",
+8 -3
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 980 980 struct qed_hwfn *p_hwfn = QED_LEADING_HWFN(cdev); 981 981 struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn); 982 982 union qed_llh_filter filter = {}; 983 - u8 filter_idx, abs_ppfid; 983 + u8 filter_idx, abs_ppfid = 0; 984 984 u32 high, low, ref_cnt; 985 985 int rc = 0; 986 986 ··· 1368 1368 1369 1369 void qed_resc_free(struct qed_dev *cdev) 1370 1370 { 1371 + struct qed_rdma_info *rdma_info; 1372 + struct qed_hwfn *p_hwfn; 1371 1373 int i; 1372 1374 1373 1375 if (IS_VF(cdev)) { ··· 1387 1385 qed_llh_free(cdev); 1388 1386 1389 1387 for_each_hwfn(cdev, i) { 1390 - struct qed_hwfn *p_hwfn = &cdev->hwfns[i]; 1388 + p_hwfn = cdev->hwfns + i; 1389 + rdma_info = p_hwfn->p_rdma_info; 1391 1390 1392 1391 qed_cxt_mngr_free(p_hwfn); 1393 1392 qed_qm_info_free(p_hwfn); ··· 1407 1404 qed_ooo_free(p_hwfn); 1408 1405 } 1409 1406 1410 - if (QED_IS_RDMA_PERSONALITY(p_hwfn)) 1407 + if (QED_IS_RDMA_PERSONALITY(p_hwfn) && rdma_info) { 1408 + qed_spq_unregister_async_cb(p_hwfn, rdma_info->proto); 1411 1409 qed_rdma_info_free(p_hwfn); 1410 + } 1412 1411 1413 1412 qed_iov_free(p_hwfn); 1414 1413 qed_l2_free(p_hwfn);
-2
drivers/net/ethernet/qlogic/qed/qed_iwarp.c
··· 2836 2836 if (rc) 2837 2837 return rc; 2838 2838 2839 - qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_IWARP); 2840 - 2841 2839 return qed_iwarp_ll2_stop(p_hwfn); 2842 2840 } 2843 2841
-1
drivers/net/ethernet/qlogic/qed/qed_roce.c
··· 113 113 break; 114 114 } 115 115 } 116 - qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_ROCE); 117 116 } 118 117 119 118 static void qed_rdma_copy_gids(struct qed_rdma_qp *qp, __le32 *src_gid,
+18 -5
drivers/net/ethernet/qlogic/qed/qed_vf.c
··· 81 81 mutex_unlock(&(p_hwfn->vf_iov_info->mutex)); 82 82 } 83 83 84 + #define QED_VF_CHANNEL_USLEEP_ITERATIONS 90 85 + #define QED_VF_CHANNEL_USLEEP_DELAY 100 86 + #define QED_VF_CHANNEL_MSLEEP_ITERATIONS 10 87 + #define QED_VF_CHANNEL_MSLEEP_DELAY 25 88 + 84 89 static int qed_send_msg2pf(struct qed_hwfn *p_hwfn, u8 *done, u32 resp_size) 85 90 { 86 91 union vfpf_tlvs *p_req = p_hwfn->vf_iov_info->vf2pf_request; 87 92 struct ustorm_trigger_vf_zone trigger; 88 93 struct ustorm_vf_zone *zone_data; 89 - int rc = 0, time = 100; 94 + int iter, rc = 0; 90 95 91 96 zone_data = (struct ustorm_vf_zone *)PXP_VF_BAR0_START_USDM_ZONE_B; 92 97 ··· 131 126 REG_WR(p_hwfn, (uintptr_t)&zone_data->trigger, *((u32 *)&trigger)); 132 127 133 128 /* When PF would be done with the response, it would write back to the 134 - * `done' address. Poll until then. 129 + * `done' address from a coherent DMA zone. Poll until then. 135 130 */ 136 - while ((!*done) && time) { 137 - msleep(25); 138 - time--; 131 + 132 + iter = QED_VF_CHANNEL_USLEEP_ITERATIONS; 133 + while (!*done && iter--) { 134 + udelay(QED_VF_CHANNEL_USLEEP_DELAY); 135 + dma_rmb(); 136 + } 137 + 138 + iter = QED_VF_CHANNEL_MSLEEP_ITERATIONS; 139 + while (!*done && iter--) { 140 + msleep(QED_VF_CHANNEL_MSLEEP_DELAY); 141 + dma_rmb(); 139 142 } 140 143 141 144 if (!*done) {
+2 -1
drivers/net/ethernet/qlogic/qede/qede_main.c
··· 1229 1229 1230 1230 /* PTP not supported on VFs */ 1231 1231 if (!is_vf) 1232 - qede_ptp_enable(edev, (mode == QEDE_PROBE_NORMAL)); 1232 + qede_ptp_enable(edev); 1233 1233 1234 1234 edev->ops->register_ops(cdev, &qede_ll_ops, edev); 1235 1235 ··· 1318 1318 if (system_state == SYSTEM_POWER_OFF) 1319 1319 return; 1320 1320 qed_ops->common->remove(cdev); 1321 + edev->cdev = NULL; 1321 1322 1322 1323 /* Since this can happen out-of-sync with other flows, 1323 1324 * don't release the netdevice until after slowpath stop
+12 -17
drivers/net/ethernet/qlogic/qede/qede_ptp.c
··· 412 412 if (ptp->tx_skb) { 413 413 dev_kfree_skb_any(ptp->tx_skb); 414 414 ptp->tx_skb = NULL; 415 + clear_bit_unlock(QEDE_FLAGS_PTP_TX_IN_PRORGESS, &edev->flags); 415 416 } 416 417 417 418 /* Disable PTP in HW */ ··· 424 423 edev->ptp = NULL; 425 424 } 426 425 427 - static int qede_ptp_init(struct qede_dev *edev, bool init_tc) 426 + static int qede_ptp_init(struct qede_dev *edev) 428 427 { 429 428 struct qede_ptp *ptp; 430 429 int rc; ··· 445 444 /* Init work queue for Tx timestamping */ 446 445 INIT_WORK(&ptp->work, qede_ptp_task); 447 446 448 - /* Init cyclecounter and timecounter. This is done only in the first 449 - * load. If done in every load, PTP application will fail when doing 450 - * unload / load (e.g. MTU change) while it is running. 451 - */ 452 - if (init_tc) { 453 - memset(&ptp->cc, 0, sizeof(ptp->cc)); 454 - ptp->cc.read = qede_ptp_read_cc; 455 - ptp->cc.mask = CYCLECOUNTER_MASK(64); 456 - ptp->cc.shift = 0; 457 - ptp->cc.mult = 1; 447 + /* Init cyclecounter and timecounter */ 448 + memset(&ptp->cc, 0, sizeof(ptp->cc)); 449 + ptp->cc.read = qede_ptp_read_cc; 450 + ptp->cc.mask = CYCLECOUNTER_MASK(64); 451 + ptp->cc.shift = 0; 452 + ptp->cc.mult = 1; 458 453 459 - timecounter_init(&ptp->tc, &ptp->cc, 460 - ktime_to_ns(ktime_get_real())); 461 - } 454 + timecounter_init(&ptp->tc, &ptp->cc, ktime_to_ns(ktime_get_real())); 462 455 463 - return rc; 456 + return 0; 464 457 } 465 458 466 - int qede_ptp_enable(struct qede_dev *edev, bool init_tc) 459 + int qede_ptp_enable(struct qede_dev *edev) 467 460 { 468 461 struct qede_ptp *ptp; 469 462 int rc; ··· 478 483 479 484 edev->ptp = ptp; 480 485 481 - rc = qede_ptp_init(edev, init_tc); 486 + rc = qede_ptp_init(edev); 482 487 if (rc) 483 488 goto err1; 484 489
+1 -1
drivers/net/ethernet/qlogic/qede/qede_ptp.h
··· 41 41 void qede_ptp_tx_ts(struct qede_dev *edev, struct sk_buff *skb); 42 42 int qede_ptp_hw_ts(struct qede_dev *edev, struct ifreq *req); 43 43 void qede_ptp_disable(struct qede_dev *edev); 44 - int qede_ptp_enable(struct qede_dev *edev, bool init_tc); 44 + int qede_ptp_enable(struct qede_dev *edev); 45 45 int qede_ptp_get_ts_info(struct qede_dev *edev, struct ethtool_ts_info *ts); 46 46 47 47 static inline void qede_ptp_record_rx_ts(struct qede_dev *edev,
+2 -1
drivers/net/ethernet/qlogic/qede/qede_rdma.c
··· 105 105 106 106 qede_rdma_cleanup_event(edev); 107 107 destroy_workqueue(edev->rdma_info.rdma_wq); 108 + edev->rdma_info.rdma_wq = NULL; 108 109 } 109 110 110 111 int qede_rdma_dev_add(struct qede_dev *edev, bool recovery) ··· 326 325 if (edev->rdma_info.exp_recovery) 327 326 return; 328 327 329 - if (!edev->rdma_info.qedr_dev) 328 + if (!edev->rdma_info.qedr_dev || !edev->rdma_info.rdma_wq) 330 329 return; 331 330 332 331 /* We don't want the cleanup flow to start while we're allocating and
+4 -1
drivers/net/ethernet/realtek/r8169_main.c
··· 2114 2114 void r8169_apply_firmware(struct rtl8169_private *tp) 2115 2115 { 2116 2116 /* TODO: release firmware if rtl_fw_write_firmware signals failure. */ 2117 - if (tp->rtl_fw) 2117 + if (tp->rtl_fw) { 2118 2118 rtl_fw_write_firmware(tp, tp->rtl_fw); 2119 + /* At least one firmware doesn't reset tp->ocp_base. */ 2120 + tp->ocp_base = OCP_STD_PHY_BASE; 2121 + } 2119 2122 } 2120 2123 2121 2124 static void rtl8168_config_eee_mac(struct rtl8169_private *tp)
+3 -2
drivers/net/ethernet/socionext/netsec.c
··· 1044 1044 skb->ip_summed = CHECKSUM_UNNECESSARY; 1045 1045 1046 1046 next: 1047 - if ((skb && napi_gro_receive(&priv->napi, skb) != GRO_DROP) || 1048 - xdp_result) { 1047 + if (skb) 1048 + napi_gro_receive(&priv->napi, skb); 1049 + if (skb || xdp_result) { 1049 1050 ndev->stats.rx_packets++; 1050 1051 ndev->stats.rx_bytes += xdp.data_end - xdp.data; 1051 1052 }
+1
drivers/net/geneve.c
··· 1649 1649 geneve->collect_md = metadata; 1650 1650 geneve->use_udp6_rx_checksums = use_udp6_rx_checksums; 1651 1651 geneve->ttl_inherit = ttl_inherit; 1652 + geneve->df = df; 1652 1653 geneve_unquiesce(geneve, gs4, gs6); 1653 1654 1654 1655 return 0;
+1 -2
drivers/net/phy/Kconfig
··· 480 480 config MICROSEMI_PHY 481 481 tristate "Microsemi PHYs" 482 482 depends on MACSEC || MACSEC=n 483 - select CRYPTO_AES 484 - select CRYPTO_ECB 483 + select CRYPTO_LIB_AES if MACSEC 485 484 help 486 485 Currently supports VSC8514, VSC8530, VSC8531, VSC8540 and VSC8541 PHYs 487 486
+9 -31
drivers/net/phy/mscc/mscc_macsec.c
··· 10 10 #include <linux/phy.h> 11 11 #include <dt-bindings/net/mscc-phy-vsc8531.h> 12 12 13 - #include <crypto/skcipher.h> 13 + #include <crypto/aes.h> 14 14 15 15 #include <net/macsec.h> 16 16 ··· 500 500 static int vsc8584_macsec_derive_key(const u8 key[MACSEC_KEYID_LEN], 501 501 u16 key_len, u8 hkey[16]) 502 502 { 503 - struct crypto_skcipher *tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0); 504 - struct skcipher_request *req = NULL; 505 - struct scatterlist src, dst; 506 - DECLARE_CRYPTO_WAIT(wait); 507 - u32 input[4] = {0}; 503 + const u8 input[AES_BLOCK_SIZE] = {0}; 504 + struct crypto_aes_ctx ctx; 508 505 int ret; 509 506 510 - if (IS_ERR(tfm)) 511 - return PTR_ERR(tfm); 507 + ret = aes_expandkey(&ctx, key, key_len); 508 + if (ret) 509 + return ret; 512 510 513 - req = skcipher_request_alloc(tfm, GFP_KERNEL); 514 - if (!req) { 515 - ret = -ENOMEM; 516 - goto out; 517 - } 518 - 519 - skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG | 520 - CRYPTO_TFM_REQ_MAY_SLEEP, crypto_req_done, 521 - &wait); 522 - ret = crypto_skcipher_setkey(tfm, key, key_len); 523 - if (ret < 0) 524 - goto out; 525 - 526 - sg_init_one(&src, input, 16); 527 - sg_init_one(&dst, hkey, 16); 528 - skcipher_request_set_crypt(req, &src, &dst, 16, NULL); 529 - 530 - ret = crypto_wait_req(crypto_skcipher_encrypt(req), &wait); 531 - 532 - out: 533 - skcipher_request_free(req); 534 - crypto_free_skcipher(tfm); 535 - return ret; 511 + aes_encrypt(&ctx, hkey, input); 512 + memzero_explicit(&ctx, sizeof(ctx)); 513 + return 0; 536 514 } 537 515 538 516 static int vsc8584_macsec_transformation(struct phy_device *phydev,
+1 -1
drivers/net/phy/phy.c
··· 840 840 * phy_disable_interrupts - Disable the PHY interrupts from the PHY side 841 841 * @phydev: target phy_device struct 842 842 */ 843 - static int phy_disable_interrupts(struct phy_device *phydev) 843 + int phy_disable_interrupts(struct phy_device *phydev) 844 844 { 845 845 int err; 846 846
+8 -2
drivers/net/phy/phy_device.c
··· 794 794 795 795 /* Grab the bits from PHYIR2, and put them in the lower half */ 796 796 phy_reg = mdiobus_read(bus, addr, MII_PHYSID2); 797 - if (phy_reg < 0) 798 - return -EIO; 797 + if (phy_reg < 0) { 798 + /* returning -ENODEV doesn't stop bus scanning */ 799 + return (phy_reg == -EIO || phy_reg == -ENODEV) ? -ENODEV : -EIO; 800 + } 799 801 800 802 *phy_id |= phy_reg; 801 803 ··· 1090 1088 1091 1089 ret = phy_scan_fixups(phydev); 1092 1090 if (ret < 0) 1091 + return ret; 1092 + 1093 + ret = phy_disable_interrupts(phydev); 1094 + if (ret) 1093 1095 return ret; 1094 1096 1095 1097 if (phydev->drv->config_init)
+32 -13
drivers/net/phy/phylink.c
··· 1463 1463 struct ethtool_pauseparam *pause) 1464 1464 { 1465 1465 struct phylink_link_state *config = &pl->link_config; 1466 + bool manual_changed; 1467 + int pause_state; 1466 1468 1467 1469 ASSERT_RTNL(); 1468 1470 ··· 1479 1477 !pause->autoneg && pause->rx_pause != pause->tx_pause) 1480 1478 return -EINVAL; 1481 1479 1482 - mutex_lock(&pl->state_mutex); 1483 - config->pause = 0; 1480 + pause_state = 0; 1484 1481 if (pause->autoneg) 1485 - config->pause |= MLO_PAUSE_AN; 1482 + pause_state |= MLO_PAUSE_AN; 1486 1483 if (pause->rx_pause) 1487 - config->pause |= MLO_PAUSE_RX; 1484 + pause_state |= MLO_PAUSE_RX; 1488 1485 if (pause->tx_pause) 1489 - config->pause |= MLO_PAUSE_TX; 1486 + pause_state |= MLO_PAUSE_TX; 1490 1487 1488 + mutex_lock(&pl->state_mutex); 1491 1489 /* 1492 1490 * See the comments for linkmode_set_pause(), wrt the deficiencies 1493 1491 * with the current implementation. A solution to this issue would ··· 1504 1502 linkmode_set_pause(config->advertising, pause->tx_pause, 1505 1503 pause->rx_pause); 1506 1504 1507 - /* If we have a PHY, phylib will call our link state function if the 1508 - * mode has changed, which will trigger a resolve and update the MAC 1509 - * configuration. 1505 + manual_changed = (config->pause ^ pause_state) & MLO_PAUSE_AN || 1506 + (!(pause_state & MLO_PAUSE_AN) && 1507 + (config->pause ^ pause_state) & MLO_PAUSE_TXRX_MASK); 1508 + 1509 + config->pause = pause_state; 1510 + 1511 + if (!pl->phydev && !test_bit(PHYLINK_DISABLE_STOPPED, 1512 + &pl->phylink_disable_state)) 1513 + phylink_pcs_config(pl, true, &pl->link_config); 1514 + 1515 + mutex_unlock(&pl->state_mutex); 1516 + 1517 + /* If we have a PHY, a change of the pause frame advertisement will 1518 + * cause phylib to renegotiate (if AN is enabled) which will in turn 1519 + * call our phylink_phy_change() and trigger a resolve. Note that 1520 + * we can't hold our state mutex while calling phy_set_asym_pause(). 1510 1521 */ 1511 - if (pl->phydev) { 1522 + if (pl->phydev) 1512 1523 phy_set_asym_pause(pl->phydev, pause->rx_pause, 1513 1524 pause->tx_pause); 1514 - } else if (!test_bit(PHYLINK_DISABLE_STOPPED, 1515 - &pl->phylink_disable_state)) { 1516 - phylink_pcs_config(pl, true, &pl->link_config); 1525 + 1526 + /* If the manual pause settings changed, make sure we trigger a 1527 + * resolve to update their state; we can not guarantee that the 1528 + * link will cycle. 1529 + */ 1530 + if (manual_changed) { 1531 + pl->mac_link_dropped = true; 1532 + phylink_run_resolve(pl); 1517 1533 } 1518 - mutex_unlock(&pl->state_mutex); 1519 1534 1520 1535 return 0; 1521 1536 }
+7 -4
drivers/net/phy/smsc.c
··· 122 122 if (rc < 0) 123 123 return rc; 124 124 125 - /* Wait max 640 ms to detect energy */ 126 - phy_read_poll_timeout(phydev, MII_LAN83C185_CTRL_STATUS, rc, 127 - rc & MII_LAN83C185_ENERGYON, 10000, 128 - 640000, true); 125 + /* Wait max 640 ms to detect energy and the timeout is not 126 + * an actual error. 127 + */ 128 + read_poll_timeout(phy_read, rc, 129 + rc & MII_LAN83C185_ENERGYON || rc < 0, 130 + 10000, 640000, true, phydev, 131 + MII_LAN83C185_CTRL_STATUS); 129 132 if (rc < 0) 130 133 return rc; 131 134
+6 -5
drivers/net/usb/ax88179_178a.c
··· 1491 1491 } 1492 1492 1493 1493 if (pkt_cnt == 0) { 1494 - /* Skip IP alignment psudo header */ 1495 - skb_pull(skb, 2); 1496 1494 skb->len = pkt_len; 1497 - skb_set_tail_pointer(skb, pkt_len); 1495 + /* Skip IP alignment pseudo header */ 1496 + skb_pull(skb, 2); 1497 + skb_set_tail_pointer(skb, skb->len); 1498 1498 skb->truesize = pkt_len + sizeof(struct sk_buff); 1499 1499 ax88179_rx_checksum(skb, pkt_hdr); 1500 1500 return 1; ··· 1503 1503 ax_skb = skb_clone(skb, GFP_ATOMIC); 1504 1504 if (ax_skb) { 1505 1505 ax_skb->len = pkt_len; 1506 - ax_skb->data = skb->data + 2; 1507 - skb_set_tail_pointer(ax_skb, pkt_len); 1506 + /* Skip IP alignment pseudo header */ 1507 + skb_pull(ax_skb, 2); 1508 + skb_set_tail_pointer(ax_skb, ax_skb->len); 1508 1509 ax_skb->truesize = pkt_len + sizeof(struct sk_buff); 1509 1510 ax88179_rx_checksum(ax_skb, pkt_hdr); 1510 1511 usbnet_skb_return(dev, ax_skb);
+1 -1
drivers/net/usb/smsc95xx.c
··· 1324 1324 struct smsc95xx_priv *pdata = (struct smsc95xx_priv *)(dev->data[0]); 1325 1325 1326 1326 if (pdata) { 1327 - cancel_delayed_work(&pdata->carrier_check); 1327 + cancel_delayed_work_sync(&pdata->carrier_check); 1328 1328 netif_dbg(dev, ifdown, dev->net, "free pdata\n"); 1329 1329 kfree(pdata); 1330 1330 pdata = NULL;
+4
drivers/net/vxlan.c
··· 1380 1380 struct vxlan_rdst *rd; 1381 1381 1382 1382 if (rcu_access_pointer(f->nh)) { 1383 + if (*idx < cb->args[2]) 1384 + goto skip_nh; 1383 1385 err = vxlan_fdb_info(skb, vxlan, f, 1384 1386 NETLINK_CB(cb->skb).portid, 1385 1387 cb->nlh->nlmsg_seq, ··· 1389 1387 NLM_F_MULTI, NULL); 1390 1388 if (err < 0) 1391 1389 goto out; 1390 + skip_nh: 1391 + *idx += 1; 1392 1392 continue; 1393 1393 } 1394 1394
+27 -31
drivers/net/wireguard/device.c
··· 45 45 if (dev_v6) 46 46 dev_v6->cnf.addr_gen_mode = IN6_ADDR_GEN_MODE_NONE; 47 47 48 + mutex_lock(&wg->device_update_lock); 48 49 ret = wg_socket_init(wg, wg->incoming_port); 49 50 if (ret < 0) 50 - return ret; 51 - mutex_lock(&wg->device_update_lock); 51 + goto out; 52 52 list_for_each_entry(peer, &wg->peer_list, peer_list) { 53 53 wg_packet_send_staged_packets(peer); 54 54 if (peer->persistent_keepalive_interval) 55 55 wg_packet_send_keepalive(peer); 56 56 } 57 + out: 57 58 mutex_unlock(&wg->device_update_lock); 58 - return 0; 59 + return ret; 59 60 } 60 61 61 62 #ifdef CONFIG_PM_SLEEP ··· 226 225 list_del(&wg->device_list); 227 226 rtnl_unlock(); 228 227 mutex_lock(&wg->device_update_lock); 228 + rcu_assign_pointer(wg->creating_net, NULL); 229 229 wg->incoming_port = 0; 230 230 wg_socket_reinit(wg, NULL, NULL); 231 231 /* The final references are cleared in the below calls to destroy_workqueue. */ ··· 242 240 skb_queue_purge(&wg->incoming_handshakes); 243 241 free_percpu(dev->tstats); 244 242 free_percpu(wg->incoming_handshakes_worker); 245 - if (wg->have_creating_net_ref) 246 - put_net(wg->creating_net); 247 243 kvfree(wg->index_hashtable); 248 244 kvfree(wg->peer_hashtable); 249 245 mutex_unlock(&wg->device_update_lock); 250 246 251 - pr_debug("%s: Interface deleted\n", dev->name); 247 + pr_debug("%s: Interface destroyed\n", dev->name); 252 248 free_netdev(dev); 253 249 } 254 250 ··· 292 292 struct wg_device *wg = netdev_priv(dev); 293 293 int ret = -ENOMEM; 294 294 295 - wg->creating_net = src_net; 295 + rcu_assign_pointer(wg->creating_net, src_net); 296 296 init_rwsem(&wg->static_identity.lock); 297 297 mutex_init(&wg->socket_update_lock); 298 298 mutex_init(&wg->device_update_lock); ··· 393 393 .newlink = wg_newlink, 394 394 }; 395 395 396 - static int wg_netdevice_notification(struct notifier_block *nb, 397 - unsigned long action, void *data) 396 + static void wg_netns_pre_exit(struct net *net) 398 397 { 399 - struct net_device *dev = ((struct netdev_notifier_info *)data)->dev; 400 - struct wg_device *wg = netdev_priv(dev); 398 + struct wg_device *wg; 401 399 402 400 ASSERT_RTNL(); 403 - 404 - if (action != NETDEV_REGISTER || dev->netdev_ops != &netdev_ops) 405 - return 0; 406 - 407 - if (dev_net(dev) == wg->creating_net && wg->have_creating_net_ref) { 408 - put_net(wg->creating_net); 409 - wg->have_creating_net_ref = false; 410 - } else if (dev_net(dev) != wg->creating_net && 411 - !wg->have_creating_net_ref) { 412 - wg->have_creating_net_ref = true; 413 - get_net(wg->creating_net); 400 + rtnl_lock(); 401 + list_for_each_entry(wg, &device_list, device_list) { 402 + if (rcu_access_pointer(wg->creating_net) == net) { 403 + pr_debug("%s: Creating namespace exiting\n", wg->dev->name); 404 + netif_carrier_off(wg->dev); 405 + mutex_lock(&wg->device_update_lock); 406 + rcu_assign_pointer(wg->creating_net, NULL); 407 + wg_socket_reinit(wg, NULL, NULL); 408 + mutex_unlock(&wg->device_update_lock); 409 + } 414 410 } 415 - return 0; 411 + rtnl_unlock(); 416 412 } 417 413 418 - static struct notifier_block netdevice_notifier = { 419 - .notifier_call = wg_netdevice_notification 414 + static struct pernet_operations pernet_ops = { 415 + .pre_exit = wg_netns_pre_exit 420 416 }; 421 417 422 418 int __init wg_device_init(void) ··· 425 429 return ret; 426 430 #endif 427 431 428 - ret = register_netdevice_notifier(&netdevice_notifier); 432 + ret = register_pernet_device(&pernet_ops); 429 433 if (ret) 430 434 goto error_pm; 431 435 432 436 ret = rtnl_link_register(&link_ops); 433 437 if (ret) 434 - goto error_netdevice; 438 + goto error_pernet; 435 439 436 440 return 0; 437 441 438 - error_netdevice: 439 - unregister_netdevice_notifier(&netdevice_notifier); 442 + error_pernet: 443 + unregister_pernet_device(&pernet_ops); 440 444 error_pm: 441 445 #ifdef CONFIG_PM_SLEEP 442 446 unregister_pm_notifier(&pm_notifier); ··· 447 451 void wg_device_uninit(void) 448 452 { 449 453 rtnl_link_unregister(&link_ops); 450 - unregister_netdevice_notifier(&netdevice_notifier); 454 + unregister_pernet_device(&pernet_ops); 451 455 #ifdef CONFIG_PM_SLEEP 452 456 unregister_pm_notifier(&pm_notifier); 453 457 #endif
+1 -2
drivers/net/wireguard/device.h
··· 40 40 struct net_device *dev; 41 41 struct crypt_queue encrypt_queue, decrypt_queue; 42 42 struct sock __rcu *sock4, *sock6; 43 - struct net *creating_net; 43 + struct net __rcu *creating_net; 44 44 struct noise_static_identity static_identity; 45 45 struct workqueue_struct *handshake_receive_wq, *handshake_send_wq; 46 46 struct workqueue_struct *packet_crypt_wq; ··· 56 56 unsigned int num_peers, device_update_gen; 57 57 u32 fwmark; 58 58 u16 incoming_port; 59 - bool have_creating_net_ref; 60 59 }; 61 60 62 61 int wg_device_init(void);
+9 -5
drivers/net/wireguard/netlink.c
··· 511 511 if (flags & ~__WGDEVICE_F_ALL) 512 512 goto out; 513 513 514 - ret = -EPERM; 515 - if ((info->attrs[WGDEVICE_A_LISTEN_PORT] || 516 - info->attrs[WGDEVICE_A_FWMARK]) && 517 - !ns_capable(wg->creating_net->user_ns, CAP_NET_ADMIN)) 518 - goto out; 514 + if (info->attrs[WGDEVICE_A_LISTEN_PORT] || info->attrs[WGDEVICE_A_FWMARK]) { 515 + struct net *net; 516 + rcu_read_lock(); 517 + net = rcu_dereference(wg->creating_net); 518 + ret = !net || !ns_capable(net->user_ns, CAP_NET_ADMIN) ? -EPERM : 0; 519 + rcu_read_unlock(); 520 + if (ret) 521 + goto out; 522 + } 519 523 520 524 ++wg->device_update_gen; 521 525
+2 -2
drivers/net/wireguard/noise.c
··· 617 617 memcpy(handshake->hash, hash, NOISE_HASH_LEN); 618 618 memcpy(handshake->chaining_key, chaining_key, NOISE_HASH_LEN); 619 619 handshake->remote_index = src->sender_index; 620 - if ((s64)(handshake->last_initiation_consumption - 621 - (initiation_consumption = ktime_get_coarse_boottime_ns())) < 0) 620 + initiation_consumption = ktime_get_coarse_boottime_ns(); 621 + if ((s64)(handshake->last_initiation_consumption - initiation_consumption) < 0) 622 622 handshake->last_initiation_consumption = initiation_consumption; 623 623 handshake->state = HANDSHAKE_CONSUMED_INITIATION; 624 624 up_write(&handshake->lock);
+2 -8
drivers/net/wireguard/receive.c
··· 414 414 if (unlikely(routed_peer != peer)) 415 415 goto dishonest_packet_peer; 416 416 417 - if (unlikely(napi_gro_receive(&peer->napi, skb) == GRO_DROP)) { 418 - ++dev->stats.rx_dropped; 419 - net_dbg_ratelimited("%s: Failed to give packet to userspace from peer %llu (%pISpfsc)\n", 420 - dev->name, peer->internal_id, 421 - &peer->endpoint.addr); 422 - } else { 423 - update_rx_stats(peer, message_data_len(len_before_trim)); 424 - } 417 + napi_gro_receive(&peer->napi, skb); 418 + update_rx_stats(peer, message_data_len(len_before_trim)); 425 419 return; 426 420 427 421 dishonest_packet_peer:
+18 -7
drivers/net/wireguard/socket.c
··· 347 347 348 348 int wg_socket_init(struct wg_device *wg, u16 port) 349 349 { 350 + struct net *net; 350 351 int ret; 351 352 struct udp_tunnel_sock_cfg cfg = { 352 353 .sk_user_data = wg, ··· 372 371 }; 373 372 #endif 374 373 374 + rcu_read_lock(); 375 + net = rcu_dereference(wg->creating_net); 376 + net = net ? maybe_get_net(net) : NULL; 377 + rcu_read_unlock(); 378 + if (unlikely(!net)) 379 + return -ENONET; 380 + 375 381 #if IS_ENABLED(CONFIG_IPV6) 376 382 retry: 377 383 #endif 378 384 379 - ret = udp_sock_create(wg->creating_net, &port4, &new4); 385 + ret = udp_sock_create(net, &port4, &new4); 380 386 if (ret < 0) { 381 387 pr_err("%s: Could not create IPv4 socket\n", wg->dev->name); 382 - return ret; 388 + goto out; 383 389 } 384 390 set_sock_opts(new4); 385 - setup_udp_tunnel_sock(wg->creating_net, new4, &cfg); 391 + setup_udp_tunnel_sock(net, new4, &cfg); 386 392 387 393 #if IS_ENABLED(CONFIG_IPV6) 388 394 if (ipv6_mod_enabled()) { 389 395 port6.local_udp_port = inet_sk(new4->sk)->inet_sport; 390 - ret = udp_sock_create(wg->creating_net, &port6, &new6); 396 + ret = udp_sock_create(net, &port6, &new6); 391 397 if (ret < 0) { 392 398 udp_tunnel_sock_release(new4); 393 399 if (ret == -EADDRINUSE && !port && retries++ < 100) 394 400 goto retry; 395 401 pr_err("%s: Could not create IPv6 socket\n", 396 402 wg->dev->name); 397 - return ret; 403 + goto out; 398 404 } 399 405 set_sock_opts(new6); 400 - setup_udp_tunnel_sock(wg->creating_net, new6, &cfg); 406 + setup_udp_tunnel_sock(net, new6, &cfg); 401 407 } 402 408 #endif 403 409 404 410 wg_socket_reinit(wg, new4->sk, new6 ? new6->sk : NULL); 405 - return 0; 411 + ret = 0; 412 + out: 413 + put_net(net); 414 + return ret; 406 415 } 407 416 408 417 void wg_socket_reinit(struct wg_device *wg, struct sock *new4,
+11 -28
drivers/net/wireless/ath/wil6210/txrx.c
··· 897 897 void wil_netif_rx(struct sk_buff *skb, struct net_device *ndev, int cid, 898 898 struct wil_net_stats *stats, bool gro) 899 899 { 900 - gro_result_t rc = GRO_NORMAL; 901 900 struct wil6210_vif *vif = ndev_to_vif(ndev); 902 901 struct wil6210_priv *wil = ndev_to_wil(ndev); 903 902 struct wireless_dev *wdev = vif_to_wdev(vif); ··· 907 908 */ 908 909 int mcast = is_multicast_ether_addr(da); 909 910 struct sk_buff *xmit_skb = NULL; 910 - static const char * const gro_res_str[] = { 911 - [GRO_MERGED] = "GRO_MERGED", 912 - [GRO_MERGED_FREE] = "GRO_MERGED_FREE", 913 - [GRO_HELD] = "GRO_HELD", 914 - [GRO_NORMAL] = "GRO_NORMAL", 915 - [GRO_DROP] = "GRO_DROP", 916 - [GRO_CONSUMED] = "GRO_CONSUMED", 917 - }; 918 911 919 912 if (wdev->iftype == NL80211_IFTYPE_STATION) { 920 913 sa = wil_skb_get_sa(skb); 921 914 if (mcast && ether_addr_equal(sa, ndev->dev_addr)) { 922 915 /* mcast packet looped back to us */ 923 - rc = GRO_DROP; 924 916 dev_kfree_skb(skb); 925 - goto stats; 917 + ndev->stats.rx_dropped++; 918 + stats->rx_dropped++; 919 + wil_dbg_txrx(wil, "Rx drop %d bytes\n", len); 920 + return; 926 921 } 927 922 } else if (wdev->iftype == NL80211_IFTYPE_AP && !vif->ap_isolate) { 928 923 if (mcast) { ··· 960 967 wil_rx_handle_eapol(vif, skb); 961 968 962 969 if (gro) 963 - rc = napi_gro_receive(&wil->napi_rx, skb); 970 + napi_gro_receive(&wil->napi_rx, skb); 964 971 else 965 972 netif_rx_ni(skb); 966 - wil_dbg_txrx(wil, "Rx complete %d bytes => %s\n", 967 - len, gro_res_str[rc]); 968 973 } 969 - stats: 970 - /* statistics. rc set to GRO_NORMAL for AP bridging */ 971 - if (unlikely(rc == GRO_DROP)) { 972 - ndev->stats.rx_dropped++; 973 - stats->rx_dropped++; 974 - wil_dbg_txrx(wil, "Rx drop %d bytes\n", len); 975 - } else { 976 - ndev->stats.rx_packets++; 977 - stats->rx_packets++; 978 - ndev->stats.rx_bytes += len; 979 - stats->rx_bytes += len; 980 - if (mcast) 981 - ndev->stats.multicast++; 982 - } 974 + ndev->stats.rx_packets++; 975 + stats->rx_packets++; 976 + ndev->stats.rx_bytes += len; 977 + stats->rx_bytes += len; 978 + if (mcast) 979 + ndev->stats.multicast++; 983 980 } 984 981 985 982 void wil_netif_rx_any(struct sk_buff *skb, struct net_device *ndev)
+7 -2
drivers/of/of_mdio.c
··· 314 314 child, addr); 315 315 316 316 if (of_mdiobus_child_is_phy(child)) { 317 + /* -ENODEV is the return code that PHYLIB has 318 + * standardized on to indicate that bus 319 + * scanning should continue. 320 + */ 317 321 rc = of_mdiobus_register_phy(mdio, child, addr); 318 - if (rc && rc != -ENODEV) 322 + if (!rc) 323 + break; 324 + if (rc != -ENODEV) 319 325 goto unregister; 320 - break; 321 326 } 322 327 } 323 328 }
+5 -6
drivers/s390/net/qeth_core_main.c
··· 4544 4544 int fallback = *(int *)reply->param; 4545 4545 4546 4546 QETH_CARD_TEXT(card, 4, "setaccb"); 4547 - if (cmd->hdr.return_code) 4548 - return -EIO; 4549 - qeth_setadpparms_inspect_rc(cmd); 4550 4547 4551 4548 access_ctrl_req = &cmd->data.setadapterparms.data.set_access_ctrl; 4552 4549 QETH_CARD_TEXT_(card, 2, "rc=%d", ··· 4553 4556 QETH_DBF_MESSAGE(3, "ERR:SET_ACCESS_CTRL(%#x) on device %x: %#x\n", 4554 4557 access_ctrl_req->subcmd_code, CARD_DEVID(card), 4555 4558 cmd->data.setadapterparms.hdr.return_code); 4556 - switch (cmd->data.setadapterparms.hdr.return_code) { 4559 + switch (qeth_setadpparms_inspect_rc(cmd)) { 4557 4560 case SET_ACCESS_CTRL_RC_SUCCESS: 4558 4561 if (card->options.isolation == ISOLATION_MODE_NONE) { 4559 4562 dev_info(&card->gdev->dev, ··· 6837 6840 struct net_device *dev, 6838 6841 netdev_features_t features) 6839 6842 { 6843 + struct qeth_card *card = dev->ml_priv; 6844 + 6840 6845 /* Traffic with local next-hop is not eligible for some offloads: */ 6841 - if (skb->ip_summed == CHECKSUM_PARTIAL) { 6842 - struct qeth_card *card = dev->ml_priv; 6846 + if (skb->ip_summed == CHECKSUM_PARTIAL && 6847 + card->options.isolation != ISOLATION_MODE_FWD) { 6843 6848 netdev_features_t restricted = 0; 6844 6849 6845 6850 if (skb_is_gso(skb) && !netif_needs_gso(skb, features))
+1 -1
include/linux/netdevice.h
··· 3157 3157 return this_cpu_read(softnet_data.xmit.recursion); 3158 3158 } 3159 3159 3160 - #define XMIT_RECURSION_LIMIT 10 3160 + #define XMIT_RECURSION_LIMIT 8 3161 3161 static inline bool dev_xmit_recursion(void) 3162 3162 { 3163 3163 return unlikely(__this_cpu_read(softnet_data.xmit.recursion) >
+6
include/linux/netfilter_ipv4/ip_tables.h
··· 25 25 int ipt_register_table(struct net *net, const struct xt_table *table, 26 26 const struct ipt_replace *repl, 27 27 const struct nf_hook_ops *ops, struct xt_table **res); 28 + 29 + void ipt_unregister_table_pre_exit(struct net *net, struct xt_table *table, 30 + const struct nf_hook_ops *ops); 31 + 32 + void ipt_unregister_table_exit(struct net *net, struct xt_table *table); 33 + 28 34 void ipt_unregister_table(struct net *net, struct xt_table *table, 29 35 const struct nf_hook_ops *ops); 30 36
+3
include/linux/netfilter_ipv6/ip6_tables.h
··· 29 29 const struct nf_hook_ops *ops, struct xt_table **res); 30 30 void ip6t_unregister_table(struct net *net, struct xt_table *table, 31 31 const struct nf_hook_ops *ops); 32 + void ip6t_unregister_table_pre_exit(struct net *net, struct xt_table *table, 33 + const struct nf_hook_ops *ops); 34 + void ip6t_unregister_table_exit(struct net *net, struct xt_table *table); 32 35 extern unsigned int ip6t_do_table(struct sk_buff *skb, 33 36 const struct nf_hook_state *state, 34 37 struct xt_table *table);
+1
include/linux/phy.h
··· 1416 1416 int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd); 1417 1417 int phy_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd); 1418 1418 int phy_do_ioctl_running(struct net_device *dev, struct ifreq *ifr, int cmd); 1419 + int phy_disable_interrupts(struct phy_device *phydev); 1419 1420 void phy_request_interrupt(struct phy_device *phydev); 1420 1421 void phy_free_interrupt(struct phy_device *phydev); 1421 1422 void phy_print_status(struct phy_device *phydev);
+16 -10
include/linux/qed/qed_chain.h
··· 207 207 208 208 static inline u16 qed_chain_get_elem_left(struct qed_chain *p_chain) 209 209 { 210 + u16 elem_per_page = p_chain->elem_per_page; 211 + u32 prod = p_chain->u.chain16.prod_idx; 212 + u32 cons = p_chain->u.chain16.cons_idx; 210 213 u16 used; 211 214 212 - used = (u16) (((u32)0x10000 + 213 - (u32)p_chain->u.chain16.prod_idx) - 214 - (u32)p_chain->u.chain16.cons_idx); 215 + if (prod < cons) 216 + prod += (u32)U16_MAX + 1; 217 + 218 + used = (u16)(prod - cons); 215 219 if (p_chain->mode == QED_CHAIN_MODE_NEXT_PTR) 216 - used -= p_chain->u.chain16.prod_idx / p_chain->elem_per_page - 217 - p_chain->u.chain16.cons_idx / p_chain->elem_per_page; 220 + used -= prod / elem_per_page - cons / elem_per_page; 218 221 219 222 return (u16)(p_chain->capacity - used); 220 223 } 221 224 222 225 static inline u32 qed_chain_get_elem_left_u32(struct qed_chain *p_chain) 223 226 { 227 + u16 elem_per_page = p_chain->elem_per_page; 228 + u64 prod = p_chain->u.chain32.prod_idx; 229 + u64 cons = p_chain->u.chain32.cons_idx; 224 230 u32 used; 225 231 226 - used = (u32) (((u64)0x100000000ULL + 227 - (u64)p_chain->u.chain32.prod_idx) - 228 - (u64)p_chain->u.chain32.cons_idx); 232 + if (prod < cons) 233 + prod += (u64)U32_MAX + 1; 234 + 235 + used = (u32)(prod - cons); 229 236 if (p_chain->mode == QED_CHAIN_MODE_NEXT_PTR) 230 - used -= p_chain->u.chain32.prod_idx / p_chain->elem_per_page - 231 - p_chain->u.chain32.cons_idx / p_chain->elem_per_page; 237 + used -= (u32)(prod / elem_per_page - cons / elem_per_page); 232 238 233 239 return p_chain->capacity - used; 234 240 }
+19 -2
include/net/flow_offload.h
··· 450 450 struct net_device *dev; 451 451 enum flow_block_binder_type binder_type; 452 452 void *data; 453 + void *cb_priv; 453 454 void (*cleanup)(struct flow_block_cb *block_cb); 454 455 }; 455 456 ··· 468 467 struct flow_block_cb *flow_block_cb_alloc(flow_setup_cb_t *cb, 469 468 void *cb_ident, void *cb_priv, 470 469 void (*release)(void *cb_priv)); 470 + struct flow_block_cb *flow_indr_block_cb_alloc(flow_setup_cb_t *cb, 471 + void *cb_ident, void *cb_priv, 472 + void (*release)(void *cb_priv), 473 + struct flow_block_offload *bo, 474 + struct net_device *dev, void *data, 475 + void *indr_cb_priv, 476 + void (*cleanup)(struct flow_block_cb *block_cb)); 471 477 void flow_block_cb_free(struct flow_block_cb *block_cb); 472 478 473 479 struct flow_block_cb *flow_block_cb_lookup(struct flow_block *block, ··· 493 485 static inline void flow_block_cb_remove(struct flow_block_cb *block_cb, 494 486 struct flow_block_offload *offload) 495 487 { 488 + list_move(&block_cb->list, &offload->cb_list); 489 + } 490 + 491 + static inline void flow_indr_block_cb_remove(struct flow_block_cb *block_cb, 492 + struct flow_block_offload *offload) 493 + { 494 + list_del(&block_cb->indr.list); 496 495 list_move(&block_cb->list, &offload->cb_list); 497 496 } 498 497 ··· 547 532 } 548 533 549 534 typedef int flow_indr_block_bind_cb_t(struct net_device *dev, void *cb_priv, 550 - enum tc_setup_type type, void *type_data); 535 + enum tc_setup_type type, void *type_data, 536 + void *data, 537 + void (*cleanup)(struct flow_block_cb *block_cb)); 551 538 552 539 int flow_indr_dev_register(flow_indr_block_bind_cb_t *cb, void *cb_priv); 553 540 void flow_indr_dev_unregister(flow_indr_block_bind_cb_t *cb, void *cb_priv, 554 - flow_setup_cb_t *setup_cb); 541 + void (*release)(void *cb_priv)); 555 542 int flow_indr_dev_setup_offload(struct net_device *dev, 556 543 enum tc_setup_type type, void *data, 557 544 struct flow_block_offload *bo,
+1 -1
include/net/gue.h
··· 21 21 * | | 22 22 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 23 23 * 24 - * C bit indicates contol message when set, data message when unset. 24 + * C bit indicates control message when set, data message when unset. 25 25 * For a control message, proto/ctype is interpreted as a type of 26 26 * control message. For data messages, proto/ctype is the IP protocol 27 27 * of the next header.
+5 -3
include/net/sctp/constants.h
··· 353 353 ipv4_is_anycast_6to4(a)) 354 354 355 355 /* Flags used for the bind address copy functions. */ 356 - #define SCTP_ADDR6_ALLOWED 0x00000001 /* IPv6 address is allowed by 356 + #define SCTP_ADDR4_ALLOWED 0x00000001 /* IPv4 address is allowed by 357 357 local sock family */ 358 - #define SCTP_ADDR4_PEERSUPP 0x00000002 /* IPv4 address is supported by 358 + #define SCTP_ADDR6_ALLOWED 0x00000002 /* IPv6 address is allowed by 359 + local sock family */ 360 + #define SCTP_ADDR4_PEERSUPP 0x00000004 /* IPv4 address is supported by 359 361 peer */ 360 - #define SCTP_ADDR6_PEERSUPP 0x00000004 /* IPv6 address is supported by 362 + #define SCTP_ADDR6_PEERSUPP 0x00000008 /* IPv6 address is supported by 361 363 peer */ 362 364 363 365 /* Reasons to retransmit. */
-1
include/net/sock.h
··· 1848 1848 1849 1849 static inline void sk_set_socket(struct sock *sk, struct socket *sock) 1850 1850 { 1851 - sk_tx_queue_clear(sk); 1852 1851 sk->sk_socket = sock; 1853 1852 } 1854 1853
+1
include/net/xfrm.h
··· 1008 1008 #define XFRM_GRO 32 1009 1009 #define XFRM_ESP_NO_TRAILER 64 1010 1010 #define XFRM_DEV_RESUME 128 1011 + #define XFRM_XMIT 256 1011 1012 1012 1013 __u32 status; 1013 1014 #define CRYPTO_SUCCESS 1
+1 -1
include/trace/events/rxrpc.h
··· 400 400 EM(rxrpc_cong_begin_retransmission, " Retrans") \ 401 401 EM(rxrpc_cong_cleared_nacks, " Cleared") \ 402 402 EM(rxrpc_cong_new_low_nack, " NewLowN") \ 403 - EM(rxrpc_cong_no_change, "") \ 403 + EM(rxrpc_cong_no_change, " -") \ 404 404 EM(rxrpc_cong_progress, " Progres") \ 405 405 EM(rxrpc_cong_retransmit_again, " ReTxAgn") \ 406 406 EM(rxrpc_cong_rtt_window_end, " RttWinE") \
+1 -1
include/uapi/linux/bpf.h
··· 3168 3168 * Return 3169 3169 * The id is returned or 0 in case the id could not be retrieved. 3170 3170 * 3171 - * void *bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags) 3171 + * int bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags) 3172 3172 * Description 3173 3173 * Copy *size* bytes from *data* into a ring buffer *ringbuf*. 3174 3174 * If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
-1
include/uapi/linux/mrp_bridge.h
··· 36 36 enum br_mrp_port_role_type { 37 37 BR_MRP_PORT_ROLE_PRIMARY, 38 38 BR_MRP_PORT_ROLE_SECONDARY, 39 - BR_MRP_PORT_ROLE_NONE, 40 39 }; 41 40 42 41 enum br_mrp_tlv_header_type {
+3 -1
include/uapi/linux/rds.h
··· 64 64 65 65 /* supported values for SO_RDS_TRANSPORT */ 66 66 #define RDS_TRANS_IB 0 67 - #define RDS_TRANS_IWARP 1 67 + #define RDS_TRANS_GAP 1 68 68 #define RDS_TRANS_TCP 2 69 69 #define RDS_TRANS_COUNT 3 70 70 #define RDS_TRANS_NONE (~0) 71 + /* don't use RDS_TRANS_IWARP - it is deprecated */ 72 + #define RDS_TRANS_IWARP RDS_TRANS_GAP 71 73 72 74 /* IOCTLS commands for SOL_RDS */ 73 75 #define SIOCRDSSETTOS (SIOCPROTOPRIVATE)
+33 -20
kernel/bpf/cgroup.c
··· 1276 1276 1277 1277 static int sockopt_alloc_buf(struct bpf_sockopt_kern *ctx, int max_optlen) 1278 1278 { 1279 - if (unlikely(max_optlen > PAGE_SIZE) || max_optlen < 0) 1279 + if (unlikely(max_optlen < 0)) 1280 1280 return -EINVAL; 1281 + 1282 + if (unlikely(max_optlen > PAGE_SIZE)) { 1283 + /* We don't expose optvals that are greater than PAGE_SIZE 1284 + * to the BPF program. 1285 + */ 1286 + max_optlen = PAGE_SIZE; 1287 + } 1281 1288 1282 1289 ctx->optval = kzalloc(max_optlen, GFP_USER); 1283 1290 if (!ctx->optval) ··· 1292 1285 1293 1286 ctx->optval_end = ctx->optval + max_optlen; 1294 1287 1295 - return 0; 1288 + return max_optlen; 1296 1289 } 1297 1290 1298 1291 static void sockopt_free_buf(struct bpf_sockopt_kern *ctx) ··· 1326 1319 */ 1327 1320 max_optlen = max_t(int, 16, *optlen); 1328 1321 1329 - ret = sockopt_alloc_buf(&ctx, max_optlen); 1330 - if (ret) 1331 - return ret; 1322 + max_optlen = sockopt_alloc_buf(&ctx, max_optlen); 1323 + if (max_optlen < 0) 1324 + return max_optlen; 1332 1325 1333 1326 ctx.optlen = *optlen; 1334 1327 1335 - if (copy_from_user(ctx.optval, optval, *optlen) != 0) { 1328 + if (copy_from_user(ctx.optval, optval, min(*optlen, max_optlen)) != 0) { 1336 1329 ret = -EFAULT; 1337 1330 goto out; 1338 1331 } ··· 1360 1353 /* export any potential modifications */ 1361 1354 *level = ctx.level; 1362 1355 *optname = ctx.optname; 1363 - *optlen = ctx.optlen; 1364 - *kernel_optval = ctx.optval; 1356 + 1357 + /* optlen == 0 from BPF indicates that we should 1358 + * use original userspace data. 1359 + */ 1360 + if (ctx.optlen != 0) { 1361 + *optlen = ctx.optlen; 1362 + *kernel_optval = ctx.optval; 1363 + } 1365 1364 1366 1365 1367 1366 out: ··· 1398 1385 __cgroup_bpf_prog_array_is_empty(cgrp, BPF_CGROUP_GETSOCKOPT)) 1399 1386 return retval; 1400 1387 1401 - ret = sockopt_alloc_buf(&ctx, max_optlen); 1402 - if (ret) 1403 - return ret; 1404 - 1405 1388 ctx.optlen = max_optlen; 1389 + 1390 + max_optlen = sockopt_alloc_buf(&ctx, max_optlen); 1391 + if (max_optlen < 0) 1392 + return max_optlen; 1406 1393 1407 1394 if (!retval) { 1408 1395 /* If kernel getsockopt finished successfully, ··· 1417 1404 goto out; 1418 1405 } 1419 1406 1420 - if (ctx.optlen > max_optlen) 1421 - ctx.optlen = max_optlen; 1422 - 1423 - if (copy_from_user(ctx.optval, optval, 1407 + if (copy_from_user(ctx.optval, optval, 1408 + min(ctx.optlen, max_optlen)) != 0) { 1424 1409 ret = -EFAULT; 1425 1410 goto out; 1426 1411 } ··· 1447 1436 goto out; 1448 1437 } 1449 1438 1450 - if (copy_to_user(optval, ctx.optval, ctx.optlen) || 1451 - put_user(ctx.optlen, optlen)) { 1452 - ret = -EFAULT; 1453 - goto out; 1439 + if (ctx.optlen != 0) { 1440 + if (copy_to_user(optval, ctx.optval, ctx.optlen) || 1441 + put_user(ctx.optlen, optlen)) { 1442 + ret = -EFAULT; 1443 + goto out; 1444 + } 1454 1445 } 1455 1446 1456 1447 ret = ctx.retval;
+6 -4
kernel/bpf/devmap.c
··· 86 86 static DEFINE_SPINLOCK(dev_map_lock); 87 87 static LIST_HEAD(dev_map_list); 88 88 89 - static struct hlist_head *dev_map_create_hash(unsigned int entries) 89 + static struct hlist_head *dev_map_create_hash(unsigned int entries, 90 + int numa_node) 90 91 { 91 92 int i; 92 93 struct hlist_head *hash; 93 94 94 - hash = kmalloc_array(entries, sizeof(*hash), GFP_KERNEL); 95 + hash = bpf_map_area_alloc(entries * sizeof(*hash), numa_node); 95 96 if (hash != NULL) 96 97 for (i = 0; i < entries; i++) 97 98 INIT_HLIST_HEAD(&hash[i]); ··· 146 145 return -EINVAL; 147 146 148 147 if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) { 149 - dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets); 148 + dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets, 149 + dtab->map.numa_node); 150 150 if (!dtab->dev_index_head) 151 151 goto free_charge; 152 152 ··· 234 232 } 235 233 } 236 234 237 - kfree(dtab->dev_index_head); 235 + bpf_map_area_free(dtab->dev_index_head); 238 236 } else { 239 237 for (i = 0; i < dtab->map.max_entries; i++) { 240 238 struct bpf_dtab_netdev *dev;
+1 -1
kernel/trace/bpf_trace.c
··· 241 241 if (unlikely(ret < 0)) 242 242 goto fail; 243 243 244 - return 0; 244 + return ret; 245 245 fail: 246 246 memset(dst, 0, size); 247 247 return ret;
+1
net/9p/mod.c
··· 189 189 MODULE_AUTHOR("Eric Van Hensbergen <ericvh@gmail.com>"); 190 190 MODULE_AUTHOR("Ron Minnich <rminnich@lanl.gov>"); 191 191 MODULE_LICENSE("GPL"); 192 + MODULE_DESCRIPTION("Plan 9 Resource Sharing Support (9P2000)");
+8 -2
net/bridge/br_mrp.c
··· 411 411 if (!mrp) 412 412 return -EINVAL; 413 413 414 - if (role == BR_MRP_PORT_ROLE_PRIMARY) 414 + switch (role) { 415 + case BR_MRP_PORT_ROLE_PRIMARY: 415 416 rcu_assign_pointer(mrp->p_port, p); 416 - else 417 + break; 418 + case BR_MRP_PORT_ROLE_SECONDARY: 417 419 rcu_assign_pointer(mrp->s_port, p); 420 + break; 421 + default: 422 + return -EINVAL; 423 + } 418 424 419 425 br_mrp_port_switchdev_set_role(p, role); 420 426
+1 -1
net/bridge/br_private.h
··· 217 217 struct rcu_head rcu; 218 218 struct timer_list timer; 219 219 struct br_ip addr; 220 + unsigned char eth_addr[ETH_ALEN] __aligned(2); 220 221 unsigned char flags; 221 - unsigned char eth_addr[ETH_ALEN]; 222 222 }; 223 223 224 224 struct net_bridge_mdb_entry {
+1
net/bridge/netfilter/nft_meta_bridge.c
··· 155 155 MODULE_LICENSE("GPL"); 156 156 MODULE_AUTHOR("wenxu <wenxu@ucloud.cn>"); 157 157 MODULE_ALIAS_NFT_AF_EXPR(AF_BRIDGE, "meta"); 158 + MODULE_DESCRIPTION("Support for bridge dedicated meta key");
+1
net/bridge/netfilter/nft_reject_bridge.c
··· 455 455 MODULE_LICENSE("GPL"); 456 456 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 457 457 MODULE_ALIAS_NFT_AF_EXPR(AF_BRIDGE, "reject"); 458 + MODULE_DESCRIPTION("Reject packets from bridge via nftables");
+9
net/core/dev.c
··· 4192 4192 4193 4193 local_bh_disable(); 4194 4194 4195 + dev_xmit_recursion_inc(); 4195 4196 HARD_TX_LOCK(dev, txq, smp_processor_id()); 4196 4197 if (!netif_xmit_frozen_or_drv_stopped(txq)) 4197 4198 ret = netdev_start_xmit(skb, dev, txq, false); 4198 4199 HARD_TX_UNLOCK(dev, txq); 4200 + dev_xmit_recursion_dec(); 4199 4201 4200 4202 local_bh_enable(); 4201 4203 ··· 9549 9547 rcu_barrier(); 9550 9548 9551 9549 dev->reg_state = NETREG_UNREGISTERED; 9550 + /* We should put the kobject that hold in 9551 + * netdev_unregister_kobject(), otherwise 9552 + * the net device cannot be freed when 9553 + * driver calls free_netdev(), because the 9554 + * kobject is being hold. 9555 + */ 9556 + kobject_put(&dev->dev.kobj); 9552 9557 } 9553 9558 /* 9554 9559 * Prevent userspace races by waiting until the network
+1
net/core/drop_monitor.c
··· 1721 1721 MODULE_LICENSE("GPL v2"); 1722 1722 MODULE_AUTHOR("Neil Horman <nhorman@tuxdriver.com>"); 1723 1723 MODULE_ALIAS_GENL_FAMILY("NET_DM"); 1724 + MODULE_DESCRIPTION("Monitoring code for network dropped packet alerts");
+26 -21
net/core/flow_offload.c
··· 372 372 } 373 373 EXPORT_SYMBOL(flow_indr_dev_register); 374 374 375 - static void __flow_block_indr_cleanup(flow_setup_cb_t *setup_cb, void *cb_priv, 375 + static void __flow_block_indr_cleanup(void (*release)(void *cb_priv), 376 + void *cb_priv, 376 377 struct list_head *cleanup_list) 377 378 { 378 379 struct flow_block_cb *this, *next; 379 380 380 381 list_for_each_entry_safe(this, next, &flow_block_indr_list, indr.list) { 381 - if (this->cb == setup_cb && 382 - this->cb_priv == cb_priv) { 382 + if (this->release == release && 383 + this->indr.cb_priv == cb_priv) { 383 384 list_move(&this->indr.list, cleanup_list); 384 385 return; 385 386 } ··· 398 397 } 399 398 400 399 void flow_indr_dev_unregister(flow_indr_block_bind_cb_t *cb, void *cb_priv, 401 - flow_setup_cb_t *setup_cb) 400 + void (*release)(void *cb_priv)) 402 401 { 403 402 struct flow_indr_dev *this, *next, *indr_dev = NULL; 404 403 LIST_HEAD(cleanup_list); ··· 419 418 return; 420 419 } 421 420 422 - __flow_block_indr_cleanup(setup_cb, cb_priv, &cleanup_list); 421 + __flow_block_indr_cleanup(release, cb_priv, &cleanup_list); 423 422 mutex_unlock(&flow_indr_block_lock); 424 423 425 424 flow_block_indr_notify(&cleanup_list); ··· 430 429 static void flow_block_indr_init(struct flow_block_cb *flow_block, 431 430 struct flow_block_offload *bo, 432 431 struct net_device *dev, void *data, 432 + void *cb_priv, 433 433 void (*cleanup)(struct flow_block_cb *block_cb)) 434 434 { 435 435 flow_block->indr.binder_type = bo->binder_type; 436 436 flow_block->indr.data = data; 437 + flow_block->indr.cb_priv = cb_priv; 437 438 flow_block->indr.dev = dev; 438 439 flow_block->indr.cleanup = cleanup; 439 440 } 440 441 441 - static void __flow_block_indr_binding(struct flow_block_offload *bo, 442 - struct net_device *dev, void *data, 443 - void (*cleanup)(struct flow_block_cb *block_cb)) 442 + struct flow_block_cb *flow_indr_block_cb_alloc(flow_setup_cb_t *cb, 443 + void *cb_ident, void *cb_priv, 444 + void (*release)(void *cb_priv), 445 + struct flow_block_offload *bo, 446 + struct net_device *dev, void *data, 447 + void *indr_cb_priv, 448 + void (*cleanup)(struct flow_block_cb *block_cb)) 444 449 { 445 450 struct flow_block_cb *block_cb; 446 451 447 - list_for_each_entry(block_cb, &bo->cb_list, list) { 448 - switch (bo->command) { 449 - case FLOW_BLOCK_BIND: 450 - flow_block_indr_init(block_cb, bo, dev, data, cleanup); 451 - list_add(&block_cb->indr.list, &flow_block_indr_list); 452 - break; 453 - case FLOW_BLOCK_UNBIND: 454 - list_del(&block_cb->indr.list); 455 - break; 456 - } 457 - } 452 + block_cb = flow_block_cb_alloc(cb, cb_ident, cb_priv, release); 453 + if (IS_ERR(block_cb)) 454 + goto out; 455 + 456 + flow_block_indr_init(block_cb, bo, dev, data, indr_cb_priv, cleanup); 457 + list_add(&block_cb->indr.list, &flow_block_indr_list); 458 + 459 + out: 460 + return block_cb; 458 461 } 462 + EXPORT_SYMBOL(flow_indr_block_cb_alloc); 459 463 460 464 int flow_indr_dev_setup_offload(struct net_device *dev, 461 465 enum tc_setup_type type, void *data, ··· 471 465 472 466 mutex_lock(&flow_indr_block_lock); 473 467 list_for_each_entry(this, &flow_block_indr_dev_list, list) 474 - this->cb(dev, this->cb_priv, type, bo); 468 + this->cb(dev, this->cb_priv, type, bo, data, cleanup); 475 469 476 - __flow_block_indr_binding(bo, dev, data, cleanup); 477 470 mutex_unlock(&flow_indr_block_lock); 478 471 479 472 return list_empty(&bo->cb_list) ? -EOPNOTSUPP : 0;
+3 -1
net/core/sock.c
··· 718 718 return inet6_sk(sk)->mc_loop; 719 719 #endif 720 720 } 721 - WARN_ON(1); 721 + WARN_ON_ONCE(1); 722 722 return true; 723 723 } 724 724 EXPORT_SYMBOL(sk_mc_loop); ··· 1767 1767 cgroup_sk_alloc(&sk->sk_cgrp_data); 1768 1768 sock_update_classid(&sk->sk_cgrp_data); 1769 1769 sock_update_netprioidx(&sk->sk_cgrp_data); 1770 + sk_tx_queue_clear(sk); 1770 1771 } 1771 1772 1772 1773 return sk; ··· 1991 1990 */ 1992 1991 sk_refcnt_debug_inc(newsk); 1993 1992 sk_set_socket(newsk, NULL); 1993 + sk_tx_queue_clear(newsk); 1994 1994 RCU_INIT_POINTER(newsk->sk_wq, NULL); 1995 1995 1996 1996 if (newsk->sk_prot->sockets_allocated)
+1
net/core/xdp.c
··· 462 462 xdpf->len = totsize - metasize; 463 463 xdpf->headroom = 0; 464 464 xdpf->metasize = metasize; 465 + xdpf->frame_sz = PAGE_SIZE; 465 466 xdpf->mem.type = MEM_TYPE_PAGE_ORDER0; 466 467 467 468 xsk_buff_free(xdp);
+34 -3
net/dsa/tag_edsa.c
··· 13 13 #define DSA_HLEN 4 14 14 #define EDSA_HLEN 8 15 15 16 + #define FRAME_TYPE_TO_CPU 0x00 17 + #define FRAME_TYPE_FORWARD 0x03 18 + 19 + #define TO_CPU_CODE_MGMT_TRAP 0x00 20 + #define TO_CPU_CODE_FRAME2REG 0x01 21 + #define TO_CPU_CODE_IGMP_MLD_TRAP 0x02 22 + #define TO_CPU_CODE_POLICY_TRAP 0x03 23 + #define TO_CPU_CODE_ARP_MIRROR 0x04 24 + #define TO_CPU_CODE_POLICY_MIRROR 0x05 25 + 16 26 static struct sk_buff *edsa_xmit(struct sk_buff *skb, struct net_device *dev) 17 27 { 18 28 struct dsa_port *dp = dsa_slave_to_port(dev); ··· 87 77 struct packet_type *pt) 88 78 { 89 79 u8 *edsa_header; 80 + int frame_type; 81 + int code; 90 82 int source_device; 91 83 int source_port; 92 84 ··· 103 91 /* 104 92 * Check that frame type is either TO_CPU or FORWARD. 105 93 */ 106 - if ((edsa_header[0] & 0xc0) != 0x00 && (edsa_header[0] & 0xc0) != 0xc0) 94 + frame_type = edsa_header[0] >> 6; 95 + 96 + switch (frame_type) { 97 + case FRAME_TYPE_TO_CPU: 98 + code = (edsa_header[1] & 0x6) | ((edsa_header[2] >> 4) & 1); 99 + 100 + /* 101 + * Mark the frame to never egress on any port of the same switch 102 + * unless it's a trapped IGMP/MLD packet, in which case the 103 + * bridge might want to forward it. 104 + */ 105 + if (code != TO_CPU_CODE_IGMP_MLD_TRAP) 106 + skb->offload_fwd_mark = 1; 107 + 108 + break; 109 + 110 + case FRAME_TYPE_FORWARD: 111 + skb->offload_fwd_mark = 1; 112 + break; 113 + 114 + default: 107 115 return NULL; 116 + } 108 117 109 118 /* 110 119 * Determine source device and port. ··· 188 155 skb->data - ETH_HLEN - EDSA_HLEN, 189 156 2 * ETH_ALEN); 190 157 } 191 - 192 - skb->offload_fwd_mark = 1; 193 158 194 159 return skb; 195 160 }
+9 -8
net/ethtool/cabletest.c
··· 234 234 struct nlattr *tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_MAX + 1]; 235 235 int ret; 236 236 237 + cfg->first = 100; 238 + cfg->step = 100; 239 + cfg->last = MAX_CABLE_LENGTH_CM; 240 + cfg->pair = PHY_PAIR_ALL; 241 + 242 + if (!nest) 243 + return 0; 244 + 237 245 ret = nla_parse_nested(tb, ETHTOOL_A_CABLE_TEST_TDR_CFG_MAX, nest, 238 246 cable_test_tdr_act_cfg_policy, info->extack); 239 247 if (ret < 0) ··· 250 242 if (tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_FIRST]) 251 243 cfg->first = nla_get_u32( 252 244 tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_FIRST]); 253 - else 254 - cfg->first = 100; 245 + 255 246 if (tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_LAST]) 256 247 cfg->last = nla_get_u32(tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_LAST]); 257 - else 258 - cfg->last = MAX_CABLE_LENGTH_CM; 259 248 260 249 if (tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_STEP]) 261 250 cfg->step = nla_get_u32(tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_STEP]); 262 - else 263 - cfg->step = 100; 264 251 265 252 if (tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_PAIR]) { 266 253 cfg->pair = nla_get_u8(tb[ETHTOOL_A_CABLE_TEST_TDR_CFG_PAIR]); ··· 266 263 "invalid pair parameter"); 267 264 return -EINVAL; 268 265 } 269 - } else { 270 - cfg->pair = PHY_PAIR_ALL; 271 266 } 272 267 273 268 if (cfg->first > MAX_CABLE_LENGTH_CM) {
+2
net/ethtool/common.c
··· 40 40 [NETIF_F_GSO_UDP_TUNNEL_BIT] = "tx-udp_tnl-segmentation", 41 41 [NETIF_F_GSO_UDP_TUNNEL_CSUM_BIT] = "tx-udp_tnl-csum-segmentation", 42 42 [NETIF_F_GSO_PARTIAL_BIT] = "tx-gso-partial", 43 + [NETIF_F_GSO_TUNNEL_REMCSUM_BIT] = "tx-tunnel-remcsum-segmentation", 43 44 [NETIF_F_GSO_SCTP_BIT] = "tx-sctp-segmentation", 44 45 [NETIF_F_GSO_ESP_BIT] = "tx-esp-segmentation", 45 46 [NETIF_F_GSO_UDP_L4_BIT] = "tx-udp-segmentation", 47 + [NETIF_F_GSO_FRAGLIST_BIT] = "tx-gso-list", 46 48 47 49 [NETIF_F_FCOE_CRC_BIT] = "tx-checksum-fcoe-crc", 48 50 [NETIF_F_SCTP_CRC_BIT] = "tx-checksum-sctp",
+1 -1
net/ethtool/ioctl.c
··· 2978 2978 sizeof(match->mask.ipv6.dst)); 2979 2979 } 2980 2980 if (memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr)) || 2981 - memcmp(v6_m_spec->ip6src, &zero_addr, sizeof(zero_addr))) { 2981 + memcmp(v6_m_spec->ip6dst, &zero_addr, sizeof(zero_addr))) { 2982 2982 match->dissector.used_keys |= 2983 2983 BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS); 2984 2984 match->dissector.offset[FLOW_DISSECTOR_KEY_IPV6_ADDRS] =
+5 -6
net/ethtool/linkstate.c
··· 78 78 79 79 ret = linkstate_get_sqi(dev); 80 80 if (ret < 0 && ret != -EOPNOTSUPP) 81 - return ret; 82 - 81 + goto out; 83 82 data->sqi = ret; 84 83 85 84 ret = linkstate_get_sqi_max(dev); 86 85 if (ret < 0 && ret != -EOPNOTSUPP) 87 - return ret; 88 - 86 + goto out; 89 87 data->sqi_max = ret; 90 88 89 + ret = 0; 90 + out: 91 91 ethnl_ops_complete(dev); 92 - 93 - return 0; 92 + return ret; 94 93 } 95 94 96 95 static int linkstate_reply_size(const struct ethnl_req_info *req_base,
+1 -20
net/hsr/hsr_device.c
··· 339 339 rcu_read_unlock(); 340 340 } 341 341 342 - static void hsr_del_ports(struct hsr_priv *hsr) 342 + void hsr_del_ports(struct hsr_priv *hsr) 343 343 { 344 344 struct hsr_port *port; 345 345 ··· 356 356 hsr_del_port(port); 357 357 } 358 358 359 - /* This has to be called after all the readers are gone. 360 - * Otherwise we would have to check the return value of 361 - * hsr_port_get_hsr(). 362 - */ 363 - static void hsr_dev_destroy(struct net_device *hsr_dev) 364 - { 365 - struct hsr_priv *hsr = netdev_priv(hsr_dev); 366 - 367 - hsr_debugfs_term(hsr); 368 - hsr_del_ports(hsr); 369 - 370 - del_timer_sync(&hsr->prune_timer); 371 - del_timer_sync(&hsr->announce_timer); 372 - 373 - hsr_del_self_node(hsr); 374 - hsr_del_nodes(&hsr->node_db); 375 - } 376 - 377 359 static const struct net_device_ops hsr_device_ops = { 378 360 .ndo_change_mtu = hsr_dev_change_mtu, 379 361 .ndo_open = hsr_dev_open, 380 362 .ndo_stop = hsr_dev_close, 381 363 .ndo_start_xmit = hsr_dev_xmit, 382 364 .ndo_fix_features = hsr_fix_features, 383 - .ndo_uninit = hsr_dev_destroy, 384 365 }; 385 366 386 367 static struct device_type hsr_type = {
+1 -1
net/hsr/hsr_device.h
··· 11 11 #include <linux/netdevice.h> 12 12 #include "hsr_main.h" 13 13 14 + void hsr_del_ports(struct hsr_priv *hsr); 14 15 void hsr_dev_setup(struct net_device *dev); 15 16 int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2], 16 17 unsigned char multicast_spec, u8 protocol_version, ··· 19 18 void hsr_check_carrier_and_operstate(struct hsr_priv *hsr); 20 19 bool is_hsr_master(struct net_device *dev); 21 20 int hsr_get_max_mtu(struct hsr_priv *hsr); 22 - 23 21 #endif /* __HSR_DEVICE_H */
+6 -3
net/hsr/hsr_main.c
··· 6 6 */ 7 7 8 8 #include <linux/netdevice.h> 9 + #include <net/rtnetlink.h> 9 10 #include <linux/rculist.h> 10 11 #include <linux/timer.h> 11 12 #include <linux/etherdevice.h> ··· 101 100 master = hsr_port_get_hsr(port->hsr, HSR_PT_MASTER); 102 101 hsr_del_port(port); 103 102 if (hsr_slave_empty(master->hsr)) { 104 - unregister_netdevice_queue(master->dev, 105 - &list_kill); 103 + const struct rtnl_link_ops *ops; 104 + 105 + ops = master->dev->rtnl_link_ops; 106 + ops->dellink(master->dev, &list_kill); 106 107 unregister_netdevice_many(&list_kill); 107 108 } 108 109 } ··· 147 144 148 145 static void __exit hsr_exit(void) 149 146 { 150 - unregister_netdevice_notifier(&hsr_nb); 151 147 hsr_netlink_exit(); 152 148 hsr_debugfs_remove_root(); 149 + unregister_netdevice_notifier(&hsr_nb); 153 150 } 154 151 155 152 module_init(hsr_init);
+17
net/hsr/hsr_netlink.c
··· 83 83 return hsr_dev_finalize(dev, link, multicast_spec, hsr_version, extack); 84 84 } 85 85 86 + static void hsr_dellink(struct net_device *dev, struct list_head *head) 87 + { 88 + struct hsr_priv *hsr = netdev_priv(dev); 89 + 90 + del_timer_sync(&hsr->prune_timer); 91 + del_timer_sync(&hsr->announce_timer); 92 + 93 + hsr_debugfs_term(hsr); 94 + hsr_del_ports(hsr); 95 + 96 + hsr_del_self_node(hsr); 97 + hsr_del_nodes(&hsr->node_db); 98 + 99 + unregister_netdevice_queue(dev, head); 100 + } 101 + 86 102 static int hsr_fill_info(struct sk_buff *skb, const struct net_device *dev) 87 103 { 88 104 struct hsr_priv *hsr = netdev_priv(dev); ··· 134 118 .priv_size = sizeof(struct hsr_priv), 135 119 .setup = hsr_dev_setup, 136 120 .newlink = hsr_newlink, 121 + .dellink = hsr_dellink, 137 122 .fill_info = hsr_fill_info, 138 123 }; 139 124
+18 -16
net/ipv4/Kconfig
··· 340 340 341 341 config INET_AH 342 342 tristate "IP: AH transformation" 343 - select XFRM_ALGO 344 - select CRYPTO 345 - select CRYPTO_HMAC 346 - select CRYPTO_MD5 347 - select CRYPTO_SHA1 343 + select XFRM_AH 348 344 help 349 - Support for IPsec AH. 345 + Support for IPsec AH (Authentication Header). 346 + 347 + AH can be used with various authentication algorithms. Besides 348 + enabling AH support itself, this option enables the generic 349 + implementations of the algorithms that RFC 8221 lists as MUST be 350 + implemented. If you need any other algorithms, you'll need to enable 351 + them in the crypto API. You should also enable accelerated 352 + implementations of any needed algorithms when available. 350 353 351 354 If unsure, say Y. 352 355 353 356 config INET_ESP 354 357 tristate "IP: ESP transformation" 355 - select XFRM_ALGO 356 - select CRYPTO 357 - select CRYPTO_AUTHENC 358 - select CRYPTO_HMAC 359 - select CRYPTO_MD5 360 - select CRYPTO_CBC 361 - select CRYPTO_SHA1 362 - select CRYPTO_DES 363 - select CRYPTO_ECHAINIV 358 + select XFRM_ESP 364 359 help 365 - Support for IPsec ESP. 360 + Support for IPsec ESP (Encapsulating Security Payload). 361 + 362 + ESP can be used with various encryption and authentication algorithms. 363 + Besides enabling ESP support itself, this option enables the generic 364 + implementations of the algorithms that RFC 8221 lists as MUST be 365 + implemented. If you need any other algorithms, you'll need to enable 366 + them in the crypto API. You should also enable accelerated 367 + implementations of any needed algorithms when available. 366 368 367 369 If unsure, say Y. 368 370
+1
net/ipv4/esp4_offload.c
··· 361 361 MODULE_LICENSE("GPL"); 362 362 MODULE_AUTHOR("Steffen Klassert <steffen.klassert@secunet.com>"); 363 363 MODULE_ALIAS_XFRM_OFFLOAD_TYPE(AF_INET, XFRM_PROTO_ESP); 364 + MODULE_DESCRIPTION("IPV4 GSO/GRO offload support");
+1 -1
net/ipv4/fib_semantics.c
··· 1109 1109 if (fl4.flowi4_scope < RT_SCOPE_LINK) 1110 1110 fl4.flowi4_scope = RT_SCOPE_LINK; 1111 1111 1112 - if (table) 1112 + if (table && table != RT_TABLE_MAIN) 1113 1113 tbl = fib_get_table(net, table); 1114 1114 1115 1115 if (tbl)
+1
net/ipv4/fou.c
··· 1304 1304 module_exit(fou_fini); 1305 1305 MODULE_AUTHOR("Tom Herbert <therbert@google.com>"); 1306 1306 MODULE_LICENSE("GPL"); 1307 + MODULE_DESCRIPTION("Foo over UDP");
+8 -6
net/ipv4/ip_tunnel.c
··· 85 85 __be32 remote, __be32 local, 86 86 __be32 key) 87 87 { 88 - unsigned int hash; 89 88 struct ip_tunnel *t, *cand = NULL; 90 89 struct hlist_head *head; 90 + struct net_device *ndev; 91 + unsigned int hash; 91 92 92 93 hash = ip_tunnel_hash(key, remote); 93 94 head = &itn->tunnels[hash]; ··· 163 162 if (t && t->dev->flags & IFF_UP) 164 163 return t; 165 164 166 - if (itn->fb_tunnel_dev && itn->fb_tunnel_dev->flags & IFF_UP) 167 - return netdev_priv(itn->fb_tunnel_dev); 165 + ndev = READ_ONCE(itn->fb_tunnel_dev); 166 + if (ndev && ndev->flags & IFF_UP) 167 + return netdev_priv(ndev); 168 168 169 169 return NULL; 170 170 } ··· 1261 1259 struct ip_tunnel_net *itn; 1262 1260 1263 1261 itn = net_generic(net, tunnel->ip_tnl_net_id); 1264 - /* fb_tunnel_dev will be unregisted in net-exit call. */ 1265 - if (itn->fb_tunnel_dev != dev) 1266 - ip_tunnel_del(itn, netdev_priv(dev)); 1262 + ip_tunnel_del(itn, netdev_priv(dev)); 1263 + if (itn->fb_tunnel_dev == dev) 1264 + WRITE_ONCE(itn->fb_tunnel_dev, NULL); 1267 1265 1268 1266 dst_cache_reset(&tunnel->dst_cache); 1269 1267 }
+14 -1
net/ipv4/netfilter/ip_tables.c
··· 1797 1797 return ret; 1798 1798 } 1799 1799 1800 + void ipt_unregister_table_pre_exit(struct net *net, struct xt_table *table, 1801 + const struct nf_hook_ops *ops) 1802 + { 1803 + nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks)); 1804 + } 1805 + 1806 + void ipt_unregister_table_exit(struct net *net, struct xt_table *table) 1807 + { 1808 + __ipt_unregister_table(net, table); 1809 + } 1810 + 1800 1811 void ipt_unregister_table(struct net *net, struct xt_table *table, 1801 1812 const struct nf_hook_ops *ops) 1802 1813 { 1803 1814 if (ops) 1804 - nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks)); 1815 + ipt_unregister_table_pre_exit(net, table, ops); 1805 1816 __ipt_unregister_table(net, table); 1806 1817 } 1807 1818 ··· 1969 1958 1970 1959 EXPORT_SYMBOL(ipt_register_table); 1971 1960 EXPORT_SYMBOL(ipt_unregister_table); 1961 + EXPORT_SYMBOL(ipt_unregister_table_pre_exit); 1962 + EXPORT_SYMBOL(ipt_unregister_table_exit); 1972 1963 EXPORT_SYMBOL(ipt_do_table); 1973 1964 module_init(ip_tables_init); 1974 1965 module_exit(ip_tables_fini);
+1
net/ipv4/netfilter/ipt_SYNPROXY.c
··· 118 118 119 119 MODULE_LICENSE("GPL"); 120 120 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 121 + MODULE_DESCRIPTION("Intercept TCP connections and establish them using syncookies");
+9 -1
net/ipv4/netfilter/iptable_filter.c
··· 72 72 return 0; 73 73 } 74 74 75 + static void __net_exit iptable_filter_net_pre_exit(struct net *net) 76 + { 77 + if (net->ipv4.iptable_filter) 78 + ipt_unregister_table_pre_exit(net, net->ipv4.iptable_filter, 79 + filter_ops); 80 + } 81 + 75 82 static void __net_exit iptable_filter_net_exit(struct net *net) 76 83 { 77 84 if (!net->ipv4.iptable_filter) 78 85 return; 79 - ipt_unregister_table(net, net->ipv4.iptable_filter, filter_ops); 86 + ipt_unregister_table_exit(net, net->ipv4.iptable_filter); 80 87 net->ipv4.iptable_filter = NULL; 81 88 } 82 89 83 90 static struct pernet_operations iptable_filter_net_ops = { 84 91 .init = iptable_filter_net_init, 92 + .pre_exit = iptable_filter_net_pre_exit, 85 93 .exit = iptable_filter_net_exit, 86 94 }; 87 95
+9 -1
net/ipv4/netfilter/iptable_mangle.c
··· 100 100 return ret; 101 101 } 102 102 103 + static void __net_exit iptable_mangle_net_pre_exit(struct net *net) 104 + { 105 + if (net->ipv4.iptable_mangle) 106 + ipt_unregister_table_pre_exit(net, net->ipv4.iptable_mangle, 107 + mangle_ops); 108 + } 109 + 103 110 static void __net_exit iptable_mangle_net_exit(struct net *net) 104 111 { 105 112 if (!net->ipv4.iptable_mangle) 106 113 return; 107 - ipt_unregister_table(net, net->ipv4.iptable_mangle, mangle_ops); 114 + ipt_unregister_table_exit(net, net->ipv4.iptable_mangle); 108 115 net->ipv4.iptable_mangle = NULL; 109 116 } 110 117 111 118 static struct pernet_operations iptable_mangle_net_ops = { 119 + .pre_exit = iptable_mangle_net_pre_exit, 112 120 .exit = iptable_mangle_net_exit, 113 121 }; 114 122
+8 -2
net/ipv4/netfilter/iptable_nat.c
··· 113 113 return ret; 114 114 } 115 115 116 + static void __net_exit iptable_nat_net_pre_exit(struct net *net) 117 + { 118 + if (net->ipv4.nat_table) 119 + ipt_nat_unregister_lookups(net); 120 + } 121 + 116 122 static void __net_exit iptable_nat_net_exit(struct net *net) 117 123 { 118 124 if (!net->ipv4.nat_table) 119 125 return; 120 - ipt_nat_unregister_lookups(net); 121 - ipt_unregister_table(net, net->ipv4.nat_table, NULL); 126 + ipt_unregister_table_exit(net, net->ipv4.nat_table); 122 127 net->ipv4.nat_table = NULL; 123 128 } 124 129 125 130 static struct pernet_operations iptable_nat_net_ops = { 131 + .pre_exit = iptable_nat_net_pre_exit, 126 132 .exit = iptable_nat_net_exit, 127 133 }; 128 134
+9 -1
net/ipv4/netfilter/iptable_raw.c
··· 67 67 return ret; 68 68 } 69 69 70 + static void __net_exit iptable_raw_net_pre_exit(struct net *net) 71 + { 72 + if (net->ipv4.iptable_raw) 73 + ipt_unregister_table_pre_exit(net, net->ipv4.iptable_raw, 74 + rawtable_ops); 75 + } 76 + 70 77 static void __net_exit iptable_raw_net_exit(struct net *net) 71 78 { 72 79 if (!net->ipv4.iptable_raw) 73 80 return; 74 - ipt_unregister_table(net, net->ipv4.iptable_raw, rawtable_ops); 81 + ipt_unregister_table_exit(net, net->ipv4.iptable_raw); 75 82 net->ipv4.iptable_raw = NULL; 76 83 } 77 84 78 85 static struct pernet_operations iptable_raw_net_ops = { 86 + .pre_exit = iptable_raw_net_pre_exit, 79 87 .exit = iptable_raw_net_exit, 80 88 }; 81 89
+9 -2
net/ipv4/netfilter/iptable_security.c
··· 62 62 return ret; 63 63 } 64 64 65 + static void __net_exit iptable_security_net_pre_exit(struct net *net) 66 + { 67 + if (net->ipv4.iptable_security) 68 + ipt_unregister_table_pre_exit(net, net->ipv4.iptable_security, 69 + sectbl_ops); 70 + } 71 + 65 72 static void __net_exit iptable_security_net_exit(struct net *net) 66 73 { 67 74 if (!net->ipv4.iptable_security) 68 75 return; 69 - 70 - ipt_unregister_table(net, net->ipv4.iptable_security, sectbl_ops); 76 + ipt_unregister_table_exit(net, net->ipv4.iptable_security); 71 77 net->ipv4.iptable_security = NULL; 72 78 } 73 79 74 80 static struct pernet_operations iptable_security_net_ops = { 81 + .pre_exit = iptable_security_net_pre_exit, 75 82 .exit = iptable_security_net_exit, 76 83 }; 77 84
+1
net/ipv4/netfilter/nf_flow_table_ipv4.c
··· 34 34 MODULE_LICENSE("GPL"); 35 35 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 36 36 MODULE_ALIAS_NF_FLOWTABLE(AF_INET); 37 + MODULE_DESCRIPTION("Netfilter flow table support");
+1
net/ipv4/netfilter/nft_dup_ipv4.c
··· 107 107 MODULE_LICENSE("GPL"); 108 108 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 109 109 MODULE_ALIAS_NFT_AF_EXPR(AF_INET, "dup"); 110 + MODULE_DESCRIPTION("IPv4 nftables packet duplication support");
+1
net/ipv4/netfilter/nft_fib_ipv4.c
··· 210 210 MODULE_LICENSE("GPL"); 211 211 MODULE_AUTHOR("Florian Westphal <fw@strlen.de>"); 212 212 MODULE_ALIAS_NFT_AF_EXPR(2, "fib"); 213 + MODULE_DESCRIPTION("nftables fib / ip route lookup support");
+1
net/ipv4/netfilter/nft_reject_ipv4.c
··· 71 71 MODULE_LICENSE("GPL"); 72 72 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 73 73 MODULE_ALIAS_NFT_AF_EXPR(AF_INET, "reject"); 74 + MODULE_DESCRIPTION("IPv4 packet rejection for nftables");
+2 -3
net/ipv4/tcp_cubic.c
··· 432 432 433 433 if (hystart_detect & HYSTART_DELAY) { 434 434 /* obtain the minimum delay of more than sampling packets */ 435 + if (ca->curr_rtt > delay) 436 + ca->curr_rtt = delay; 435 437 if (ca->sample_cnt < HYSTART_MIN_SAMPLES) { 436 - if (ca->curr_rtt > delay) 437 - ca->curr_rtt = delay; 438 - 439 438 ca->sample_cnt++; 440 439 } else { 441 440 if (ca->curr_rtt > ca->delay_min +
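The tcp_cubic hunk above hoists the minimum-delay update out of the sample-count branch, so HyStart keeps tracking the lowest RTT across the whole round instead of only across the first HYSTART_MIN_SAMPLES ACKs. A minimal userspace sketch of the patched logic (the struct and function names below are illustrative stand-ins, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define HYSTART_MIN_SAMPLES 8   /* same constant the kernel uses */

/* Illustrative stand-in for the two bictcp fields involved. */
struct hystart_state {
	uint32_t curr_rtt;   /* min delay observed in this round */
	uint8_t  sample_cnt; /* ACKs sampled so far in this round */
};

/* Patched shape: the min-tracking runs on every ACK; the sample
 * counter only decides when enough samples exist to run delay
 * detection. Before the fix the min-tracking sat inside the counter
 * branch, so a low delay seen after the 8th ACK never lowered
 * curr_rtt. */
static int hystart_delay_ready(struct hystart_state *ca, uint32_t delay)
{
	if (ca->curr_rtt > delay)
		ca->curr_rtt = delay;
	if (ca->sample_cnt < HYSTART_MIN_SAMPLES) {
		ca->sample_cnt++;
		return 0;   /* not enough samples yet */
	}
	return 1;           /* detection may compare against curr_rtt */
}
```

With the old placement, a round whose genuinely lowest RTT arrived on the ninth or later ACK would exit slow start against a stale minimum.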
+11 -3
net/ipv4/tcp_input.c
··· 261 261 * cwnd may be very low (even just 1 packet), so we should ACK 262 262 * immediately. 263 263 */ 264 - inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW; 264 + if (TCP_SKB_CB(skb)->seq != TCP_SKB_CB(skb)->end_seq) 265 + inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW; 265 266 } 266 267 } 267 268 ··· 3666 3665 tcp_in_ack_event(sk, ack_ev_flags); 3667 3666 } 3668 3667 3668 + /* This is a deviation from RFC3168 since it states that: 3669 + * "When the TCP data sender is ready to set the CWR bit after reducing 3670 + * the congestion window, it SHOULD set the CWR bit only on the first 3671 + * new data packet that it transmits." 3672 + * We accept CWR on pure ACKs to be more robust 3673 + * with widely-deployed TCP implementations that do this. 3674 + */ 3675 + tcp_ecn_accept_cwr(sk, skb); 3676 + 3669 3677 /* We passed data and got it acked, remove any soft error 3670 3678 * log. Something worked... 3671 3679 */ ··· 4809 4799 } 4810 4800 skb_dst_drop(skb); 4811 4801 __skb_pull(skb, tcp_hdr(skb)->doff * 4); 4812 - 4813 - tcp_ecn_accept_cwr(sk, skb); 4814 4802 4815 4803 tp->rx_opt.dsack = 0; 4816 4804
+18 -16
net/ipv6/Kconfig
··· 49 49 50 50 config INET6_AH 51 51 tristate "IPv6: AH transformation" 52 - select XFRM_ALGO 53 - select CRYPTO 54 - select CRYPTO_HMAC 55 - select CRYPTO_MD5 56 - select CRYPTO_SHA1 52 + select XFRM_AH 57 53 help 58 - Support for IPsec AH. 54 + Support for IPsec AH (Authentication Header). 55 + 56 + AH can be used with various authentication algorithms. Besides 57 + enabling AH support itself, this option enables the generic 58 + implementations of the algorithms that RFC 8221 lists as MUST be 59 + implemented. If you need any other algorithms, you'll need to enable 60 + them in the crypto API. You should also enable accelerated 61 + implementations of any needed algorithms when available. 59 62 60 63 If unsure, say Y. 61 64 62 65 config INET6_ESP 63 66 tristate "IPv6: ESP transformation" 64 - select XFRM_ALGO 65 - select CRYPTO 66 - select CRYPTO_AUTHENC 67 - select CRYPTO_HMAC 68 - select CRYPTO_MD5 69 - select CRYPTO_CBC 70 - select CRYPTO_SHA1 71 - select CRYPTO_DES 72 - select CRYPTO_ECHAINIV 67 + select XFRM_ESP 73 68 help 74 - Support for IPsec ESP. 69 + Support for IPsec ESP (Encapsulating Security Payload). 70 + 71 + ESP can be used with various encryption and authentication algorithms. 72 + Besides enabling ESP support itself, this option enables the generic 73 + implementations of the algorithms that RFC 8221 lists as MUST be 74 + implemented. If you need any other algorithms, you'll need to enable 75 + them in the crypto API. You should also enable accelerated 76 + implementations of any needed algorithms when available. 75 77 76 78 If unsure, say Y. 77 79
+1
net/ipv6/esp6_offload.c
··· 395 395 MODULE_LICENSE("GPL"); 396 396 MODULE_AUTHOR("Steffen Klassert <steffen.klassert@secunet.com>"); 397 397 MODULE_ALIAS_XFRM_OFFLOAD_TYPE(AF_INET6, XFRM_PROTO_ESP); 398 + MODULE_DESCRIPTION("IPV6 GSO/GRO offload support");
+1
net/ipv6/fou6.c
··· 224 224 module_exit(fou6_fini); 225 225 MODULE_AUTHOR("Tom Herbert <therbert@google.com>"); 226 226 MODULE_LICENSE("GPL"); 227 + MODULE_DESCRIPTION("Foo over UDP (IPv6)");
+1
net/ipv6/ila/ila_main.c
··· 120 120 module_exit(ila_fini); 121 121 MODULE_AUTHOR("Tom Herbert <tom@herbertland.com>"); 122 122 MODULE_LICENSE("GPL"); 123 + MODULE_DESCRIPTION("IPv6: Identifier Locator Addressing (ILA)");
+6 -3
net/ipv6/ip6_gre.c
··· 127 127 gre_proto == htons(ETH_P_ERSPAN2)) ? 128 128 ARPHRD_ETHER : ARPHRD_IP6GRE; 129 129 int score, cand_score = 4; 130 + struct net_device *ndev; 130 131 131 132 for_each_ip_tunnel_rcu(t, ign->tunnels_r_l[h0 ^ h1]) { 132 133 if (!ipv6_addr_equal(local, &t->parms.laddr) || ··· 239 238 if (t && t->dev->flags & IFF_UP) 240 239 return t; 241 240 242 - dev = ign->fb_tunnel_dev; 243 - if (dev && dev->flags & IFF_UP) 244 - return netdev_priv(dev); 241 + ndev = READ_ONCE(ign->fb_tunnel_dev); 242 + if (ndev && ndev->flags & IFF_UP) 243 + return netdev_priv(ndev); 245 244 246 245 return NULL; 247 246 } ··· 414 413 415 414 ip6gre_tunnel_unlink_md(ign, t); 416 415 ip6gre_tunnel_unlink(ign, t); 416 + if (ign->fb_tunnel_dev == dev) 417 + WRITE_ONCE(ign->fb_tunnel_dev, NULL); 417 418 dst_cache_reset(&t->dst_cache); 418 419 dev_put(dev); 419 420 }
+14 -1
net/ipv6/netfilter/ip6_tables.c
··· 1807 1807 return ret; 1808 1808 } 1809 1809 1810 + void ip6t_unregister_table_pre_exit(struct net *net, struct xt_table *table, 1811 + const struct nf_hook_ops *ops) 1812 + { 1813 + nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks)); 1814 + } 1815 + 1816 + void ip6t_unregister_table_exit(struct net *net, struct xt_table *table) 1817 + { 1818 + __ip6t_unregister_table(net, table); 1819 + } 1820 + 1810 1821 void ip6t_unregister_table(struct net *net, struct xt_table *table, 1811 1822 const struct nf_hook_ops *ops) 1812 1823 { 1813 1824 if (ops) 1814 - nf_unregister_net_hooks(net, ops, hweight32(table->valid_hooks)); 1825 + ip6t_unregister_table_pre_exit(net, table, ops); 1815 1826 __ip6t_unregister_table(net, table); 1816 1827 } 1817 1828 ··· 1980 1969 1981 1970 EXPORT_SYMBOL(ip6t_register_table); 1982 1971 EXPORT_SYMBOL(ip6t_unregister_table); 1972 + EXPORT_SYMBOL(ip6t_unregister_table_pre_exit); 1973 + EXPORT_SYMBOL(ip6t_unregister_table_exit); 1983 1974 EXPORT_SYMBOL(ip6t_do_table); 1984 1975 1985 1976 module_init(ip6_tables_init);
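The new ip6t_unregister_table_pre_exit()/ip6t_unregister_table_exit() pair (mirroring the IPv4 helpers earlier in this series) exists because pernet_operations runs every registered ->pre_exit callback before any ->exit callback. A toy userspace model of that ordering, with entirely made-up names rather than kernel API, shows why unhooking in pre_exit keeps a packet from entering a hook after its table is freed:

```c
#include <assert.h>
#include <stdbool.h>

/* Globals modelling the failure this split prevents. */
static bool table_freed;
static bool used_after_free;

static void hook_fire(void)
{
	if (table_freed)
		used_after_free = true;   /* the use-after-free case */
}

struct toy_pernet_ops {
	bool hooked;
	void (*pre_exit)(struct toy_pernet_ops *);
	void (*exit)(struct toy_pernet_ops *);
};

static void table_pre_exit(struct toy_pernet_ops *ops) { ops->hooked = false; }
static void table_exit(struct toy_pernet_ops *ops)     { table_freed = true; }

/* Mimics the pernet teardown ordering: all pre_exit callbacks run
 * before any exit callback, so every hook is unregistered before any
 * table memory is released. */
static void teardown(struct toy_pernet_ops *ops, int n)
{
	for (int i = 0; i < n; i++)
		ops[i].pre_exit(&ops[i]);
	for (int i = 0; i < n; i++) {
		if (ops[i].hooked)
			hook_fire();   /* unreachable after pre_exit */
		ops[i].exit(&ops[i]);
	}
}
```

Folding both phases into one ->exit callback, as the old ipt/ip6t_unregister_table() did, leaves a window where another net's exit path can free a table that still has live hooks.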
+1
net/ipv6/netfilter/ip6t_SYNPROXY.c
··· 121 121 122 122 MODULE_LICENSE("GPL"); 123 123 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 124 + MODULE_DESCRIPTION("Intercept IPv6 TCP connections and establish them using syncookies");
+9 -1
net/ipv6/netfilter/ip6table_filter.c
··· 73 73 return 0; 74 74 } 75 75 76 + static void __net_exit ip6table_filter_net_pre_exit(struct net *net) 77 + { 78 + if (net->ipv6.ip6table_filter) 79 + ip6t_unregister_table_pre_exit(net, net->ipv6.ip6table_filter, 80 + filter_ops); 81 + } 82 + 76 83 static void __net_exit ip6table_filter_net_exit(struct net *net) 77 84 { 78 85 if (!net->ipv6.ip6table_filter) 79 86 return; 80 - ip6t_unregister_table(net, net->ipv6.ip6table_filter, filter_ops); 87 + ip6t_unregister_table_exit(net, net->ipv6.ip6table_filter); 81 88 net->ipv6.ip6table_filter = NULL; 82 89 } 83 90 84 91 static struct pernet_operations ip6table_filter_net_ops = { 85 92 .init = ip6table_filter_net_init, 93 + .pre_exit = ip6table_filter_net_pre_exit, 86 94 .exit = ip6table_filter_net_exit, 87 95 }; 88 96
+9 -1
net/ipv6/netfilter/ip6table_mangle.c
··· 93 93 return ret; 94 94 } 95 95 96 + static void __net_exit ip6table_mangle_net_pre_exit(struct net *net) 97 + { 98 + if (net->ipv6.ip6table_mangle) 99 + ip6t_unregister_table_pre_exit(net, net->ipv6.ip6table_mangle, 100 + mangle_ops); 101 + } 102 + 96 103 static void __net_exit ip6table_mangle_net_exit(struct net *net) 97 104 { 98 105 if (!net->ipv6.ip6table_mangle) 99 106 return; 100 107 101 - ip6t_unregister_table(net, net->ipv6.ip6table_mangle, mangle_ops); 108 + ip6t_unregister_table_exit(net, net->ipv6.ip6table_mangle); 102 109 net->ipv6.ip6table_mangle = NULL; 103 110 } 104 111 105 112 static struct pernet_operations ip6table_mangle_net_ops = { 113 + .pre_exit = ip6table_mangle_net_pre_exit, 106 114 .exit = ip6table_mangle_net_exit, 107 115 }; 108 116
+8 -2
net/ipv6/netfilter/ip6table_nat.c
··· 114 114 return ret; 115 115 } 116 116 117 + static void __net_exit ip6table_nat_net_pre_exit(struct net *net) 118 + { 119 + if (net->ipv6.ip6table_nat) 120 + ip6t_nat_unregister_lookups(net); 121 + } 122 + 117 123 static void __net_exit ip6table_nat_net_exit(struct net *net) 118 124 { 119 125 if (!net->ipv6.ip6table_nat) 120 126 return; 121 - ip6t_nat_unregister_lookups(net); 122 - ip6t_unregister_table(net, net->ipv6.ip6table_nat, NULL); 127 + ip6t_unregister_table_exit(net, net->ipv6.ip6table_nat); 123 128 net->ipv6.ip6table_nat = NULL; 124 129 } 125 130 126 131 static struct pernet_operations ip6table_nat_net_ops = { 132 + .pre_exit = ip6table_nat_net_pre_exit, 127 133 .exit = ip6table_nat_net_exit, 128 134 }; 129 135
+9 -1
net/ipv6/netfilter/ip6table_raw.c
··· 66 66 return ret; 67 67 } 68 68 69 + static void __net_exit ip6table_raw_net_pre_exit(struct net *net) 70 + { 71 + if (net->ipv6.ip6table_raw) 72 + ip6t_unregister_table_pre_exit(net, net->ipv6.ip6table_raw, 73 + rawtable_ops); 74 + } 75 + 69 76 static void __net_exit ip6table_raw_net_exit(struct net *net) 70 77 { 71 78 if (!net->ipv6.ip6table_raw) 72 79 return; 73 - ip6t_unregister_table(net, net->ipv6.ip6table_raw, rawtable_ops); 80 + ip6t_unregister_table_exit(net, net->ipv6.ip6table_raw); 74 81 net->ipv6.ip6table_raw = NULL; 75 82 } 76 83 77 84 static struct pernet_operations ip6table_raw_net_ops = { 85 + .pre_exit = ip6table_raw_net_pre_exit, 78 86 .exit = ip6table_raw_net_exit, 79 87 }; 80 88
+9 -1
net/ipv6/netfilter/ip6table_security.c
··· 61 61 return ret; 62 62 } 63 63 64 + static void __net_exit ip6table_security_net_pre_exit(struct net *net) 65 + { 66 + if (net->ipv6.ip6table_security) 67 + ip6t_unregister_table_pre_exit(net, net->ipv6.ip6table_security, 68 + sectbl_ops); 69 + } 70 + 64 71 static void __net_exit ip6table_security_net_exit(struct net *net) 65 72 { 66 73 if (!net->ipv6.ip6table_security) 67 74 return; 68 - ip6t_unregister_table(net, net->ipv6.ip6table_security, sectbl_ops); 75 + ip6t_unregister_table_exit(net, net->ipv6.ip6table_security); 69 76 net->ipv6.ip6table_security = NULL; 70 77 } 71 78 72 79 static struct pernet_operations ip6table_security_net_ops = { 80 + .pre_exit = ip6table_security_net_pre_exit, 73 81 .exit = ip6table_security_net_exit, 74 82 }; 75 83
+1
net/ipv6/netfilter/nf_flow_table_ipv6.c
··· 35 35 MODULE_LICENSE("GPL"); 36 36 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 37 37 MODULE_ALIAS_NF_FLOWTABLE(AF_INET6); 38 + MODULE_DESCRIPTION("Netfilter flow table IPv6 module");
+1
net/ipv6/netfilter/nft_dup_ipv6.c
··· 105 105 MODULE_LICENSE("GPL"); 106 106 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 107 107 MODULE_ALIAS_NFT_AF_EXPR(AF_INET6, "dup"); 108 + MODULE_DESCRIPTION("IPv6 nftables packet duplication support");
+1
net/ipv6/netfilter/nft_fib_ipv6.c
··· 255 255 MODULE_LICENSE("GPL"); 256 256 MODULE_AUTHOR("Florian Westphal <fw@strlen.de>"); 257 257 MODULE_ALIAS_NFT_AF_EXPR(10, "fib"); 258 + MODULE_DESCRIPTION("nftables fib / ipv6 route lookup support");
+1
net/ipv6/netfilter/nft_reject_ipv6.c
··· 72 72 MODULE_LICENSE("GPL"); 73 73 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 74 74 MODULE_ALIAS_NFT_AF_EXPR(AF_INET6, "reject"); 75 + MODULE_DESCRIPTION("IPv6 packet rejection for nftables");
-2
net/mptcp/options.c
··· 336 336 */ 337 337 subflow->snd_isn = TCP_SKB_CB(skb)->end_seq; 338 338 if (subflow->request_mptcp) { 339 - pr_debug("local_key=%llu", subflow->local_key); 340 339 opts->suboptions = OPTION_MPTCP_MPC_SYN; 341 - opts->sndr_key = subflow->local_key; 342 340 *size = TCPOLEN_MPTCP_MPC_SYN; 343 341 return true; 344 342 } else if (subflow->request_join) {
+1
net/mptcp/protocol.h
··· 249 249 u64 thmac; 250 250 u32 local_nonce; 251 251 u32 remote_nonce; 252 + struct mptcp_sock *msk; 252 253 }; 253 254 254 255 static inline struct mptcp_subflow_request_sock *
+27 -30
net/mptcp/subflow.c
··· 69 69 70 70 pr_debug("subflow_req=%p", subflow_req); 71 71 72 + if (subflow_req->msk) 73 + sock_put((struct sock *)subflow_req->msk); 74 + 72 75 if (subflow_req->mp_capable) 73 76 mptcp_token_destroy_request(subflow_req->token); 74 77 tcp_request_sock_ops.destructor(req); ··· 89 86 } 90 87 91 88 /* validate received token and create truncated hmac and nonce for SYN-ACK */ 92 - static bool subflow_token_join_request(struct request_sock *req, 93 - const struct sk_buff *skb) 89 + static struct mptcp_sock *subflow_token_join_request(struct request_sock *req, 90 + const struct sk_buff *skb) 94 91 { 95 92 struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); 96 93 u8 hmac[SHA256_DIGEST_SIZE]; ··· 100 97 msk = mptcp_token_get_sock(subflow_req->token); 101 98 if (!msk) { 102 99 SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINNOTOKEN); 103 - return false; 100 + return NULL; 104 101 } 105 102 106 103 local_id = mptcp_pm_get_local_id(msk, (struct sock_common *)req); 107 104 if (local_id < 0) { 108 105 sock_put((struct sock *)msk); 109 - return false; 106 + return NULL; 110 107 } 111 108 subflow_req->local_id = local_id; 112 109 ··· 117 114 subflow_req->remote_nonce, hmac); 118 115 119 116 subflow_req->thmac = get_unaligned_be64(hmac); 120 - 121 - sock_put((struct sock *)msk); 122 - return true; 117 + return msk; 123 118 } 124 119 125 120 static void subflow_init_req(struct request_sock *req, ··· 134 133 135 134 subflow_req->mp_capable = 0; 136 135 subflow_req->mp_join = 0; 136 + subflow_req->msk = NULL; 137 137 138 138 #ifdef CONFIG_TCP_MD5SIG 139 139 /* no MPTCP if MD5SIG is enabled on this socket or we may run out of ··· 168 166 subflow_req->remote_id = mp_opt.join_id; 169 167 subflow_req->token = mp_opt.token; 170 168 subflow_req->remote_nonce = mp_opt.nonce; 171 - pr_debug("token=%u, remote_nonce=%u", subflow_req->token, 172 - subflow_req->remote_nonce); 173 - if (!subflow_token_join_request(req, skb)) { 174 - subflow_req->mp_join = 0; 175 - // @@ need to trigger RST 176 - }
169 + subflow_req->msk = subflow_token_join_request(req, skb); 170 + pr_debug("token=%u, remote_nonce=%u msk=%p", subflow_req->token, 171 + subflow_req->remote_nonce, subflow_req->msk); 177 172 } 178 173 } 179 174 ··· 353 354 const struct mptcp_subflow_request_sock *subflow_req; 354 355 u8 hmac[SHA256_DIGEST_SIZE]; 355 356 struct mptcp_sock *msk; 356 - bool ret; 357 357 358 358 subflow_req = mptcp_subflow_rsk(req); 359 - msk = mptcp_token_get_sock(subflow_req->token); 359 + msk = subflow_req->msk; 360 360 if (!msk) 361 361 return false; 362 362 ··· 363 365 subflow_req->remote_nonce, 364 366 subflow_req->local_nonce, hmac); 365 367 366 - ret = true; 367 - if (crypto_memneq(hmac, mp_opt->hmac, MPTCPOPT_HMAC_LEN)) 368 - ret = false; 369 - 370 - sock_put((struct sock *)msk); 371 - return ret; 368 + return !crypto_memneq(hmac, mp_opt->hmac, MPTCPOPT_HMAC_LEN); 372 369 } 373 370 374 371 static void mptcp_sock_destruct(struct sock *sk) ··· 431 438 struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk); 432 439 struct mptcp_subflow_request_sock *subflow_req; 433 440 struct mptcp_options_received mp_opt; 434 - bool fallback_is_fatal = false; 441 + bool fallback, fallback_is_fatal; 435 442 struct sock *new_msk = NULL; 436 - bool fallback = false; 437 443 struct sock *child; 438 444 439 445 pr_debug("listener=%p, req=%p, conn=%p", listener, req, listener->conn); 440 446 441 - /* we need later a valid 'mp_capable' value even when options are not 442 - * parsed 447 + /* After child creation we must look for 'mp_capable' even when options 448 + * are not parsed 443 449 */ 444 450 mp_opt.mp_capable = 0; 445 - if (tcp_rsk(req)->is_mptcp == 0) 451 + 452 + /* hopefully temporary handling for MP_JOIN+syncookie */ 453 + subflow_req = mptcp_subflow_rsk(req); 454 + fallback_is_fatal = subflow_req->mp_join; 455 + fallback = !tcp_rsk(req)->is_mptcp; 456 + if (fallback) 446 457 goto create_child; 447 458 448 459 /* if the sk is MP_CAPABLE, we try to fetch the client key */
449 - subflow_req = mptcp_subflow_rsk(req); 450 460 if (subflow_req->mp_capable) { 451 461 if (TCP_SKB_CB(skb)->seq != subflow_req->ssn_offset + 1) { 452 462 /* here we can receive and accept an in-window, ··· 470 474 if (!new_msk) 471 475 fallback = true; 472 476 } else if (subflow_req->mp_join) { 473 - fallback_is_fatal = true; 474 477 mptcp_get_options(skb, &mp_opt); 475 478 if (!mp_opt.mp_join || 476 479 !subflow_hmac_valid(req, &mp_opt)) { 477 480 SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC); 478 - return NULL; 481 + fallback = true; 479 482 } 480 483 } 481 484 ··· 517 522 } else if (ctx->mp_join) { 518 523 struct mptcp_sock *owner; 519 524 520 - owner = mptcp_token_get_sock(ctx->token); 525 + owner = subflow_req->msk; 521 526 if (!owner) 522 527 goto dispose_child; 523 528 529 + /* move the msk reference ownership to the subflow */ 530 + subflow_req->msk = NULL; 524 531 ctx->conn = (struct sock *)owner; 525 532 if (!mptcp_finish_join(child)) 526 533 goto dispose_child;
+2
net/netfilter/ipset/ip_set_core.c
··· 460 460 for (id = 0; id < IPSET_EXT_ID_MAX; id++) { 461 461 if (!add_extension(id, cadt_flags, tb)) 462 462 continue; 463 + if (align < ip_set_extensions[id].align) 464 + align = ip_set_extensions[id].align; 463 465 len = ALIGN(len, ip_set_extensions[id].align); 464 466 set->offset[id] = len; 465 467 set->extensions |= ip_set_extensions[id].type;
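The ipset hunk above makes the running layout remember the strictest extension alignment while offsets are assigned, so fields such as counters never land on unaligned memory (which breaks atomics on some architectures). A small sketch of that layout arithmetic, using hypothetical extension descriptors rather than the kernel's ip_set_extensions table:

```c
#include <assert.h>
#include <stddef.h>

#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

struct ext_desc {
	size_t len;
	size_t align;
};

/* Assign each extension an offset aligned for that extension, and
 * track the strictest alignment seen (the fix): the whole blob must
 * be padded and placed to that alignment or a member with bigger
 * alignment than its predecessors can still end up misaligned. */
static size_t layout(const struct ext_desc *ext, int n,
		     size_t *off, size_t *max_align)
{
	size_t len = 0, align = 1;

	for (int i = 0; i < n; i++) {
		if (align < ext[i].align)
			align = ext[i].align;   /* remember strictest */
		len = ALIGN_UP(len, ext[i].align);
		off[i] = len;
		len += ext[i].len;
	}
	*max_align = align;
	return ALIGN_UP(len, align);
}
```

The descriptors and helper here are made up for illustration; the point is only that per-member ALIGN() is not enough without also propagating the maximum alignment to the containing allocation.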
+1
net/netfilter/nf_dup_netdev.c
··· 73 73 74 74 MODULE_LICENSE("GPL"); 75 75 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 76 + MODULE_DESCRIPTION("Netfilter packet duplication support");
+1
net/netfilter/nf_flow_table_core.c
··· 594 594 595 595 MODULE_LICENSE("GPL"); 596 596 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 597 + MODULE_DESCRIPTION("Netfilter flow table module");
+1
net/netfilter/nf_flow_table_inet.c
··· 72 72 MODULE_LICENSE("GPL"); 73 73 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 74 74 MODULE_ALIAS_NF_FLOWTABLE(1); /* NFPROTO_INET */ 75 + MODULE_DESCRIPTION("Netfilter flow table mixed IPv4/IPv6 module");
+1
net/netfilter/nf_flow_table_offload.c
··· 950 950 nf_flow_table_gc_cleanup(flowtable, dev); 951 951 down_write(&flowtable->flow_block_lock); 952 952 list_del(&block_cb->list); 953 + list_del(&block_cb->driver_list); 953 954 flow_block_cb_free(block_cb); 954 955 up_write(&flowtable->flow_block_lock); 955 956 }
+1
net/netfilter/nf_synproxy_core.c
··· 1237 1237 1238 1238 MODULE_LICENSE("GPL"); 1239 1239 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 1240 + MODULE_DESCRIPTION("nftables SYNPROXY expression support");
+1
net/netfilter/nf_tables_offload.c
··· 296 296 nft_flow_block_offload_init(&bo, dev_net(dev), FLOW_BLOCK_UNBIND, 297 297 basechain, &extack); 298 298 mutex_lock(&net->nft.commit_mutex); 299 + list_del(&block_cb->driver_list); 299 300 list_move(&block_cb->list, &bo.cb_list); 300 301 nft_flow_offload_unbind(&bo, basechain); 301 302 mutex_unlock(&net->nft.commit_mutex);
+1
net/netfilter/nfnetlink.c
··· 33 33 MODULE_LICENSE("GPL"); 34 34 MODULE_AUTHOR("Harald Welte <laforge@netfilter.org>"); 35 35 MODULE_ALIAS_NET_PF_PROTO(PF_NETLINK, NETLINK_NETFILTER); 36 + MODULE_DESCRIPTION("Netfilter messages via netlink socket"); 36 37 37 38 #define nfnl_dereference_protected(id) \ 38 39 rcu_dereference_protected(table[(id)].subsys, \
+1
net/netfilter/nft_compat.c
··· 902 902 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 903 903 MODULE_ALIAS_NFT_EXPR("match"); 904 904 MODULE_ALIAS_NFT_EXPR("target"); 905 + MODULE_DESCRIPTION("x_tables over nftables support");
+1
net/netfilter/nft_connlimit.c
··· 280 280 MODULE_AUTHOR("Pablo Neira Ayuso"); 281 281 MODULE_ALIAS_NFT_EXPR("connlimit"); 282 282 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_CONNLIMIT); 283 + MODULE_DESCRIPTION("nftables connlimit rule support");
+1
net/netfilter/nft_counter.c
··· 303 303 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 304 304 MODULE_ALIAS_NFT_EXPR("counter"); 305 305 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_COUNTER); 306 + MODULE_DESCRIPTION("nftables counter rule support");
+1
net/netfilter/nft_ct.c
··· 1345 1345 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_CT_HELPER); 1346 1346 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_CT_TIMEOUT); 1347 1347 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_CT_EXPECT); 1348 + MODULE_DESCRIPTION("Netfilter nf_tables conntrack module");
+1
net/netfilter/nft_dup_netdev.c
··· 102 102 MODULE_LICENSE("GPL"); 103 103 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 104 104 MODULE_ALIAS_NFT_AF_EXPR(5, "dup"); 105 + MODULE_DESCRIPTION("nftables netdev packet duplication support");
+1
net/netfilter/nft_fib_inet.c
··· 76 76 MODULE_LICENSE("GPL"); 77 77 MODULE_AUTHOR("Florian Westphal <fw@strlen.de>"); 78 78 MODULE_ALIAS_NFT_AF_EXPR(1, "fib"); 79 + MODULE_DESCRIPTION("nftables fib inet support");
+1
net/netfilter/nft_fib_netdev.c
··· 85 85 MODULE_LICENSE("GPL"); 86 86 MODULE_AUTHOR("Pablo M. Bermudo Garay <pablombg@gmail.com>"); 87 87 MODULE_ALIAS_NFT_AF_EXPR(5, "fib"); 88 + MODULE_DESCRIPTION("nftables netdev fib lookups support");
+1
net/netfilter/nft_flow_offload.c
··· 286 286 MODULE_LICENSE("GPL"); 287 287 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 288 288 MODULE_ALIAS_NFT_EXPR("flow_offload"); 289 + MODULE_DESCRIPTION("nftables hardware flow offload module");
+1
net/netfilter/nft_hash.c
··· 248 248 MODULE_LICENSE("GPL"); 249 249 MODULE_AUTHOR("Laura Garcia <nevola@gmail.com>"); 250 250 MODULE_ALIAS_NFT_EXPR("hash"); 251 + MODULE_DESCRIPTION("Netfilter nftables hash module");
+1
net/netfilter/nft_limit.c
··· 372 372 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 373 373 MODULE_ALIAS_NFT_EXPR("limit"); 374 374 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_LIMIT); 375 + MODULE_DESCRIPTION("nftables limit expression support");
+1
net/netfilter/nft_log.c
··· 298 298 MODULE_LICENSE("GPL"); 299 299 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 300 300 MODULE_ALIAS_NFT_EXPR("log"); 301 + MODULE_DESCRIPTION("Netfilter nf_tables log module");
+1
net/netfilter/nft_masq.c
··· 305 305 MODULE_LICENSE("GPL"); 306 306 MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo@debian.org>"); 307 307 MODULE_ALIAS_NFT_EXPR("masq"); 308 + MODULE_DESCRIPTION("Netfilter nftables masquerade expression support");
+1
net/netfilter/nft_nat.c
··· 402 402 MODULE_LICENSE("GPL"); 403 403 MODULE_AUTHOR("Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>"); 404 404 MODULE_ALIAS_NFT_EXPR("nat"); 405 + MODULE_DESCRIPTION("Network Address Translation support");
+1
net/netfilter/nft_numgen.c
··· 217 217 MODULE_LICENSE("GPL"); 218 218 MODULE_AUTHOR("Laura Garcia <nevola@gmail.com>"); 219 219 MODULE_ALIAS_NFT_EXPR("numgen"); 220 + MODULE_DESCRIPTION("nftables number generator module");
+1
net/netfilter/nft_objref.c
··· 252 252 MODULE_LICENSE("GPL"); 253 253 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 254 254 MODULE_ALIAS_NFT_EXPR("objref"); 255 + MODULE_DESCRIPTION("nftables stateful object reference module");
+1
net/netfilter/nft_osf.c
··· 149 149 MODULE_LICENSE("GPL"); 150 150 MODULE_AUTHOR("Fernando Fernandez <ffmancera@riseup.net>"); 151 151 MODULE_ALIAS_NFT_EXPR("osf"); 152 + MODULE_DESCRIPTION("nftables passive OS fingerprint support");
+1
net/netfilter/nft_queue.c
··· 216 216 MODULE_LICENSE("GPL"); 217 217 MODULE_AUTHOR("Eric Leblond <eric@regit.org>"); 218 218 MODULE_ALIAS_NFT_EXPR("queue"); 219 + MODULE_DESCRIPTION("Netfilter nftables queue module");
+1
net/netfilter/nft_quota.c
··· 254 254 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 255 255 MODULE_ALIAS_NFT_EXPR("quota"); 256 256 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_QUOTA); 257 + MODULE_DESCRIPTION("Netfilter nftables quota module");
+1
net/netfilter/nft_redir.c
··· 292 292 MODULE_LICENSE("GPL"); 293 293 MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo@debian.org>"); 294 294 MODULE_ALIAS_NFT_EXPR("redir"); 295 + MODULE_DESCRIPTION("Netfilter nftables redirect support");
+1
net/netfilter/nft_reject.c
··· 119 119 120 120 MODULE_LICENSE("GPL"); 121 121 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 122 + MODULE_DESCRIPTION("Netfilter x_tables over nftables module");
+1
net/netfilter/nft_reject_inet.c
··· 149 149 MODULE_LICENSE("GPL"); 150 150 MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); 151 151 MODULE_ALIAS_NFT_AF_EXPR(1, "reject"); 152 + MODULE_DESCRIPTION("Netfilter nftables reject inet support");
+1
net/netfilter/nft_synproxy.c
··· 388 388 MODULE_AUTHOR("Fernando Fernandez <ffmancera@riseup.net>"); 389 389 MODULE_ALIAS_NFT_EXPR("synproxy"); 390 390 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_SYNPROXY); 391 + MODULE_DESCRIPTION("nftables SYNPROXY expression support");
+1
net/netfilter/nft_tunnel.c
··· 719 719 MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); 720 720 MODULE_ALIAS_NFT_EXPR("tunnel"); 721 721 MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_TUNNEL); 722 + MODULE_DESCRIPTION("nftables tunnel expression support");
+1
net/netfilter/xt_nat.c
··· 244 244 MODULE_ALIAS("ipt_DNAT"); 245 245 MODULE_ALIAS("ip6t_SNAT"); 246 246 MODULE_ALIAS("ip6t_DNAT"); 247 + MODULE_DESCRIPTION("SNAT and DNAT targets support");
+7 -2
net/openvswitch/actions.c
··· 1169 1169 struct sw_flow_key *key, 1170 1170 const struct nlattr *attr, bool last) 1171 1171 { 1172 + struct ovs_skb_cb *ovs_cb = OVS_CB(skb); 1172 1173 const struct nlattr *actions, *cpl_arg; 1174 + int len, max_len, rem = nla_len(attr); 1173 1175 const struct check_pkt_len_arg *arg; 1174 - int rem = nla_len(attr); 1175 1176 bool clone_flow_key; 1176 1177 1177 1178 /* The first netlink attribute in 'attr' is always ··· 1181 1180 cpl_arg = nla_data(attr); 1182 1181 arg = nla_data(cpl_arg); 1183 1182 1184 - if (skb->len <= arg->pkt_len) { 1183 + len = ovs_cb->mru ? ovs_cb->mru + skb->mac_len : skb->len; 1184 + max_len = arg->pkt_len; 1185 + 1186 + if ((skb_is_gso(skb) && skb_gso_validate_mac_len(skb, max_len)) || 1187 + len <= max_len) { 1185 1188 /* Second netlink attribute in 'attr' is always 1186 1189 * 'OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_LESS_EQUAL'. 1187 1190 */
+17 -9
net/rds/transport.c
··· 38 38 #include "rds.h" 39 39 #include "loop.h" 40 40 41 + static char * const rds_trans_modules[] = { 42 + [RDS_TRANS_IB] = "rds_rdma", 43 + [RDS_TRANS_GAP] = NULL, 44 + [RDS_TRANS_TCP] = "rds_tcp", 45 + }; 46 + 41 47 static struct rds_transport *transports[RDS_TRANS_COUNT]; 42 48 static DECLARE_RWSEM(rds_trans_sem); 43 49 ··· 116 110 { 117 111 struct rds_transport *ret = NULL; 118 112 struct rds_transport *trans; 119 - unsigned int i; 120 113 121 114 down_read(&rds_trans_sem); 122 - for (i = 0; i < RDS_TRANS_COUNT; i++) { 123 - trans = transports[i]; 124 - 125 - if (trans && trans->t_type == t_type && 126 - (!trans->t_owner || try_module_get(trans->t_owner))) { 127 - ret = trans; 128 - break; 129 - } 115 + trans = transports[t_type]; 116 + if (!trans) { 117 + up_read(&rds_trans_sem); 118 + if (rds_trans_modules[t_type]) 119 + request_module(rds_trans_modules[t_type]); 120 + down_read(&rds_trans_sem); 121 + trans = transports[t_type]; 130 122 } 123 + if (trans && trans->t_type == t_type && 124 + (!trans->t_owner || try_module_get(trans->t_owner))) 125 + ret = trans; 126 + 131 127 up_read(&rds_trans_sem); 132 128 133 129 return ret;
+7
net/rxrpc/call_accept.c
··· 22 22 #include <net/ip.h> 23 23 #include "ar-internal.h" 24 24 25 + static void rxrpc_dummy_notify(struct sock *sk, struct rxrpc_call *call, 26 + unsigned long user_call_ID) 27 + { 28 + } 29 + 25 30 /* 26 31 * Preallocate a single service call, connection and peer and, if possible, 27 32 * give them a user ID and attach the user's side of the ID to them. ··· 233 228 if (rx->discard_new_call) { 234 229 _debug("discard %lx", call->user_call_ID); 235 230 rx->discard_new_call(call, call->user_call_ID); 231 + if (call->notify_rx) 232 + call->notify_rx = rxrpc_dummy_notify; 236 233 rxrpc_put_call(call, rxrpc_call_put_kernel); 237 234 } 238 235 rxrpc_call_completed(call);
+1 -1
net/rxrpc/call_event.c
··· 253 253 * confuse things 254 254 */ 255 255 annotation &= ~RXRPC_TX_ANNO_MASK; 256 - annotation |= RXRPC_TX_ANNO_RESENT; 256 + annotation |= RXRPC_TX_ANNO_UNACK | RXRPC_TX_ANNO_RESENT; 257 257 call->rxtx_annotations[ix] = annotation; 258 258 259 259 skb = call->rxtx_buffer[ix];
+3 -4
net/rxrpc/input.c
··· 722 722 ntohl(ackinfo->rxMTU), ntohl(ackinfo->maxMTU), 723 723 rwind, ntohl(ackinfo->jumbo_max)); 724 724 725 + if (rwind > RXRPC_RXTX_BUFF_SIZE - 1) 726 + rwind = RXRPC_RXTX_BUFF_SIZE - 1; 725 727 if (call->tx_winsize != rwind) { 726 - if (rwind > RXRPC_RXTX_BUFF_SIZE - 1) 727 - rwind = RXRPC_RXTX_BUFF_SIZE - 1; 728 728 if (rwind > call->tx_winsize) 729 729 wake = true; 730 - trace_rxrpc_rx_rwind_change(call, sp->hdr.serial, 731 - ntohl(ackinfo->rwind), wake); 730 + trace_rxrpc_rx_rwind_change(call, sp->hdr.serial, rwind, wake); 732 731 call->tx_winsize = rwind; 733 732 } 734 733
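The rxrpc hunk above clamps the advertised receive window before comparing it with the cached tx_winsize, so an oversized rwind maps to the same clamped value on every ACK and the change path (including its tracepoint) fires once instead of on every packet. A hedged sketch of the patched comparison, with a made-up buffer size standing in for RXRPC_RXTX_BUFF_SIZE:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RXTX_BUFF_SIZE 64   /* illustrative only */

/* Clamp first, then compare: a peer repeatedly advertising a window
 * above the ring-buffer limit settles at the clamped value, instead
 * of being treated as a "change" on every oversized ACK. */
static bool update_tx_winsize(uint32_t *tx_winsize, uint32_t rwind)
{
	if (rwind > RXTX_BUFF_SIZE - 1)
		rwind = RXTX_BUFF_SIZE - 1;
	if (*tx_winsize == rwind)
		return false;   /* no change, nothing to trace */
	*tx_winsize = rwind;
	return true;
}
```

This also matches the hunk's second change: the tracepoint now logs the clamped value actually stored, not the raw on-wire rwind.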
+68 -58
net/sched/act_gate.c
··· 32 32 return KTIME_MAX; 33 33 } 34 34 35 - static int gate_get_start_time(struct tcf_gate *gact, ktime_t *start) 35 + static void gate_get_start_time(struct tcf_gate *gact, ktime_t *start) 36 36 { 37 37 struct tcf_gate_params *param = &gact->param; 38 38 ktime_t now, base, cycle; ··· 43 43 44 44 if (ktime_after(base, now)) { 45 45 *start = base; 46 - return 0; 46 + return; 47 47 } 48 48 49 49 cycle = param->tcfg_cycletime; 50 50 51 - /* cycle time should not be zero */ 52 - if (!cycle) 53 - return -EFAULT; 54 - 55 51 n = div64_u64(ktime_sub_ns(now, base), cycle); 56 52 *start = ktime_add_ns(base, (n + 1) * cycle); 57 - return 0; 58 53 } 59 54 60 55 static void gate_start_timer(struct tcf_gate *gact, ktime_t start) ··· 272 277 return err; 273 278 } 274 279 280 + static void gate_setup_timer(struct tcf_gate *gact, u64 basetime, 281 + enum tk_offsets tko, s32 clockid, 282 + bool do_init) 283 + { 284 + if (!do_init) { 285 + if (basetime == gact->param.tcfg_basetime && 286 + tko == gact->tk_offset && 287 + clockid == gact->param.tcfg_clockid) 288 + return; 289 + 290 + spin_unlock_bh(&gact->tcf_lock); 291 + hrtimer_cancel(&gact->hitimer); 292 + spin_lock_bh(&gact->tcf_lock); 293 + } 294 + gact->param.tcfg_basetime = basetime; 295 + gact->param.tcfg_clockid = clockid; 296 + gact->tk_offset = tko; 297 + hrtimer_init(&gact->hitimer, clockid, HRTIMER_MODE_ABS_SOFT); 298 + gact->hitimer.function = gate_timer_func; 299 + } 300 + 275 301 static int tcf_gate_init(struct net *net, struct nlattr *nla, 276 302 struct nlattr *est, struct tc_action **a, 277 303 int ovr, int bind, bool rtnl_held, ··· 303 287 enum tk_offsets tk_offset = TK_OFFS_TAI; 304 288 struct nlattr *tb[TCA_GATE_MAX + 1]; 305 289 struct tcf_chain *goto_ch = NULL; 290 + u64 cycletime = 0, basetime = 0; 306 291 struct tcf_gate_params *p; 307 292 s32 clockid = CLOCK_TAI; 308 293 struct tcf_gate *gact; 309 294 struct tc_gate *parm; 310 295 int ret = 0, err; 311 - u64 basetime = 0; 312 296 u32 gflags = 0; 313 297 s32 prio = -1; 314 298 ktime_t start;
··· 323 307 324 308 if (!tb[TCA_GATE_PARMS]) 325 309 return -EINVAL; 310 + 311 + if (tb[TCA_GATE_CLOCKID]) { 312 + clockid = nla_get_s32(tb[TCA_GATE_CLOCKID]); 313 + switch (clockid) { 314 + case CLOCK_REALTIME: 315 + tk_offset = TK_OFFS_REAL; 316 + break; 317 + case CLOCK_MONOTONIC: 318 + tk_offset = TK_OFFS_MAX; 319 + break; 320 + case CLOCK_BOOTTIME: 321 + tk_offset = TK_OFFS_BOOT; 322 + break; 323 + case CLOCK_TAI: 324 + tk_offset = TK_OFFS_TAI; 325 + break; 326 + default: 327 + NL_SET_ERR_MSG(extack, "Invalid 'clockid'"); 328 + return -EINVAL; 329 + } 330 + } 326 331 327 332 parm = nla_data(tb[TCA_GATE_PARMS]); 328 333 index = parm->index; ··· 368 331 tcf_idr_release(*a, bind); 369 332 return -EEXIST; 370 333 } 371 - if (ret == ACT_P_CREATED) { 372 - to_gate(*a)->param.tcfg_clockid = -1; 373 - INIT_LIST_HEAD(&(to_gate(*a)->param.entries)); 374 - } 375 334 376 335 if (tb[TCA_GATE_PRIORITY]) 377 336 prio = nla_get_s32(tb[TCA_GATE_PRIORITY]); ··· 378 345 if (tb[TCA_GATE_FLAGS]) 379 346 gflags = nla_get_u32(tb[TCA_GATE_FLAGS]); 380 347 381 - if (tb[TCA_GATE_CLOCKID]) { 382 - clockid = nla_get_s32(tb[TCA_GATE_CLOCKID]); 383 - switch (clockid) { 384 - case CLOCK_REALTIME: 385 - tk_offset = TK_OFFS_REAL; 386 - break; 387 - case CLOCK_MONOTONIC: 388 - tk_offset = TK_OFFS_MAX; 389 - break; 390 - case CLOCK_BOOTTIME: 391 - tk_offset = TK_OFFS_BOOT; 392 - break; 393 - case CLOCK_TAI: 394 - tk_offset = TK_OFFS_TAI; 395 - break; 396 - default: 397 - NL_SET_ERR_MSG(extack, "Invalid 'clockid'"); 398 - goto release_idr; 399 - } 400 - } 348 + gact = to_gate(*a); 349 + if (ret == ACT_P_CREATED) 350 + INIT_LIST_HEAD(&gact->param.entries); 401 351 402 352 err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack); 403 353 if (err < 0) 404 354 goto release_idr; 405 355 406 - gact = to_gate(*a); 407 - 408 356 spin_lock_bh(&gact->tcf_lock); 409 357 p = &gact->param; 410 358 411 - if (tb[TCA_GATE_CYCLE_TIME]) { 412 - p->tcfg_cycletime = nla_get_u64(tb[TCA_GATE_CYCLE_TIME]); 413 - if (!p->tcfg_cycletime_ext) 414 - goto chain_put; 415 - }
359 + if (tb[TCA_GATE_CYCLE_TIME]) 360 + cycletime = nla_get_u64(tb[TCA_GATE_CYCLE_TIME]); 416 361 417 362 if (tb[TCA_GATE_ENTRY_LIST]) { 418 363 err = parse_gate_list(tb[TCA_GATE_ENTRY_LIST], p, extack); ··· 398 387 goto chain_put; 399 388 } 400 389 401 - if (!p->tcfg_cycletime) { 390 + if (!cycletime) { 402 391 struct tcfg_gate_entry *entry; 403 392 ktime_t cycle = 0; 404 393 405 394 list_for_each_entry(entry, &p->entries, list) 406 395 cycle = ktime_add_ns(cycle, entry->interval); 407 - p->tcfg_cycletime = cycle; 396 + cycletime = cycle; 397 + if (!cycletime) { 398 + err = -EINVAL; 399 + goto chain_put; 400 + } 408 401 } 402 + p->tcfg_cycletime = cycletime; 409 403 410 404 if (tb[TCA_GATE_CYCLE_TIME_EXT]) 411 405 p->tcfg_cycletime_ext = 412 406 nla_get_u64(tb[TCA_GATE_CYCLE_TIME_EXT]); 413 407 408 + gate_setup_timer(gact, basetime, tk_offset, clockid, 409 + ret == ACT_P_CREATED); 414 410 p->tcfg_priority = prio; 415 - p->tcfg_basetime = basetime; 416 - p->tcfg_clockid = clockid; 417 411 p->tcfg_flags = gflags; 418 - 419 - gact->tk_offset = tk_offset; 420 - hrtimer_init(&gact->hitimer, clockid, HRTIMER_MODE_ABS_SOFT); 421 - gact->hitimer.function = gate_timer_func; 422 - 423 - err = gate_get_start_time(gact, &start); 424 - if (err < 0) { 425 - NL_SET_ERR_MSG(extack, 426 - "Internal error: failed get start time"); 427 - release_entry_list(&p->entries); 428 - goto chain_put; 429 - } 412 + gate_get_start_time(gact, &start); 430 413 431 414 gact->current_close_time = start; 432 415 gact->current_gate_status = GATE_ACT_GATE_OPEN | GATE_ACT_PENDING; ··· 448 443 if (goto_ch) 449 444 tcf_chain_put_by_act(goto_ch); 450 445 release_idr: 446 + /* action is not inserted in any list: it's safe to init hitimer 447 + * without taking tcf_lock. 
448 + */ 449 + if (ret == ACT_P_CREATED) 450 + gate_setup_timer(gact, gact->param.tcfg_basetime, 451 + gact->tk_offset, gact->param.tcfg_clockid, 452 + true); 451 453 tcf_idr_release(*a, bind); 452 454 return err; 453 455 } ··· 465 453 struct tcf_gate_params *p; 466 454 467 455 p = &gact->param; 468 - if (p->tcfg_clockid != -1) 469 - hrtimer_cancel(&gact->hitimer); 470 - 456 + hrtimer_cancel(&gact->hitimer); 471 457 release_entry_list(&p->entries); 472 458 } 473 459
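The act_gate fix above works by guaranteeing a nonzero cycletime before `gate_get_start_time()` runs, so the divide is safe and the function can become `void`. A minimal userspace sketch of that start-time computation (names and types are illustrative, not the kernel's):

```c
#include <stdint.h>

/* Next cycle start: if the base time is still in the future, start there;
 * otherwise skip past the cycles already completed. The caller has already
 * rejected cycle == 0, so the division cannot fault. */
static uint64_t next_cycle_start(uint64_t now, uint64_t base, uint64_t cycle)
{
    uint64_t n;

    if (base > now)
        return base;            /* first cycle has not begun yet */

    n = (now - base) / cycle;   /* completed cycles so far */
    return base + (n + 1) * cycle;
}
```

This mirrors the kernel's `div64_u64(ktime_sub_ns(now, base), cycle)` followed by `ktime_add_ns(base, (n + 1) * cycle)`.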
+16 -11
net/sched/cls_api.c
··· 652 652 &block->flow_block, tcf_block_shared(block), 653 653 &extack); 654 654 down_write(&block->cb_lock); 655 + list_del(&block_cb->driver_list); 655 656 list_move(&block_cb->list, &bo.cb_list); 656 657 up_write(&block->cb_lock); 657 658 rtnl_lock(); ··· 672 671 struct netlink_ext_ack *extack) 673 672 { 674 673 struct flow_block_offload bo = {}; 675 - int err; 676 674 677 675 tcf_block_offload_init(&bo, dev, command, ei->binder_type, 678 676 &block->flow_block, tcf_block_shared(block), 679 677 extack); 680 678 681 - if (dev->netdev_ops->ndo_setup_tc) 682 - err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo); 683 - else 684 - err = flow_indr_dev_setup_offload(dev, TC_SETUP_BLOCK, block, 685 - &bo, tc_block_indr_cleanup); 679 + if (dev->netdev_ops->ndo_setup_tc) { 680 + int err; 686 681 687 - if (err < 0) { 688 - if (err != -EOPNOTSUPP) 689 - NL_SET_ERR_MSG(extack, "Driver ndo_setup_tc failed"); 690 - return err; 682 + err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_BLOCK, &bo); 683 + if (err < 0) { 684 + if (err != -EOPNOTSUPP) 685 + NL_SET_ERR_MSG(extack, "Driver ndo_setup_tc failed"); 686 + return err; 687 + } 688 + 689 + return tcf_block_setup(block, &bo); 691 690 } 692 691 693 - return tcf_block_setup(block, &bo); 692 + flow_indr_dev_setup_offload(dev, TC_SETUP_BLOCK, block, &bo, 693 + tc_block_indr_cleanup); 694 + tcf_block_setup(block, &bo); 695 + 696 + return -EOPNOTSUPP; 694 697 } 695 698 696 699 static int tcf_block_offload_bind(struct tcf_block *block, struct Qdisc *q,
+41 -17
net/sched/sch_cake.c
··· 1551 1551 return idx + (tin << 16); 1552 1552 } 1553 1553 1554 - static u8 cake_handle_diffserv(struct sk_buff *skb, u16 wash) 1554 + static u8 cake_handle_diffserv(struct sk_buff *skb, bool wash) 1555 1555 { 1556 - int wlen = skb_network_offset(skb); 1556 + const int offset = skb_network_offset(skb); 1557 + u16 *buf, buf_; 1557 1558 u8 dscp; 1558 1559 1559 1560 switch (tc_skb_protocol(skb)) { 1560 1561 case htons(ETH_P_IP): 1561 - wlen += sizeof(struct iphdr); 1562 - if (!pskb_may_pull(skb, wlen) || 1563 - skb_try_make_writable(skb, wlen)) 1562 + buf = skb_header_pointer(skb, offset, sizeof(buf_), &buf_); 1563 + if (unlikely(!buf)) 1564 1564 return 0; 1565 1565 1566 - dscp = ipv4_get_dsfield(ip_hdr(skb)) >> 2; 1567 - if (wash && dscp) 1566 + /* ToS is in the second byte of iphdr */ 1567 + dscp = ipv4_get_dsfield((struct iphdr *)buf) >> 2; 1568 + 1569 + if (wash && dscp) { 1570 + const int wlen = offset + sizeof(struct iphdr); 1571 + 1572 + if (!pskb_may_pull(skb, wlen) || 1573 + skb_try_make_writable(skb, wlen)) 1574 + return 0; 1575 + 1568 1576 ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0); 1577 + } 1578 + 1569 1579 return dscp; 1570 1580 1571 1581 case htons(ETH_P_IPV6): 1572 - wlen += sizeof(struct ipv6hdr); 1573 - if (!pskb_may_pull(skb, wlen) || 1574 - skb_try_make_writable(skb, wlen)) 1582 + buf = skb_header_pointer(skb, offset, sizeof(buf_), &buf_); 1583 + if (unlikely(!buf)) 1575 1584 return 0; 1576 1585 1577 - dscp = ipv6_get_dsfield(ipv6_hdr(skb)) >> 2; 1578 - if (wash && dscp) 1586 + /* Traffic class is in the first and second bytes of ipv6hdr */ 1587 + dscp = ipv6_get_dsfield((struct ipv6hdr *)buf) >> 2; 1588 + 1589 + if (wash && dscp) { 1590 + const int wlen = offset + sizeof(struct ipv6hdr); 1591 + 1592 + if (!pskb_may_pull(skb, wlen) || 1593 + skb_try_make_writable(skb, wlen)) 1594 + return 0; 1595 + 1579 1596 ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0); 1597 + } 1598 + 1580 1599 return dscp; 1581 1600 1582 1601 case 
htons(ETH_P_ARP): ··· 1612 1593 { 1613 1594 struct cake_sched_data *q = qdisc_priv(sch); 1614 1595 u32 tin, mark; 1596 + bool wash; 1615 1597 u8 dscp; 1616 1598 1617 1599 /* Tin selection: Default to diffserv-based selection, allow overriding 1618 - * using firewall marks or skb->priority. 1600 + * using firewall marks or skb->priority. Call DSCP parsing early if 1601 + * wash is enabled, otherwise defer to below to skip unneeded parsing. 1619 1602 */ 1620 - dscp = cake_handle_diffserv(skb, 1621 - q->rate_flags & CAKE_FLAG_WASH); 1622 1603 mark = (skb->mark & q->fwmark_mask) >> q->fwmark_shft; 1604 + wash = !!(q->rate_flags & CAKE_FLAG_WASH); 1605 + if (wash) 1606 + dscp = cake_handle_diffserv(skb, wash); 1623 1607 1624 1608 if (q->tin_mode == CAKE_DIFFSERV_BESTEFFORT) 1625 1609 tin = 0; ··· 1636 1614 tin = q->tin_order[TC_H_MIN(skb->priority) - 1]; 1637 1615 1638 1616 else { 1617 + if (!wash) 1618 + dscp = cake_handle_diffserv(skb, wash); 1639 1619 tin = q->tin_index[dscp]; 1640 1620 1641 1621 if (unlikely(tin >= q->tin_cnt)) ··· 2715 2691 qdisc_watchdog_init(&q->watchdog, sch); 2716 2692 2717 2693 if (opt) { 2718 - int err = cake_change(sch, opt, extack); 2694 + err = cake_change(sch, opt, extack); 2719 2695 2720 2696 if (err) 2721 2697 return err; ··· 3032 3008 PUT_STAT_S32(BLUE_TIMER_US, 3033 3009 ktime_to_us( 3034 3010 ktime_sub(now, 3035 - flow->cvars.blue_timer))); 3011 + flow->cvars.blue_timer))); 3036 3012 } 3037 3013 if (flow->cvars.dropping) { 3038 3014 PUT_STAT_S32(DROP_NEXT_US,
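The sch_cake change reads the DSCP through a bounds-checked header pointer and only makes the packet writable when washing is actually requested. A hedged analogue of that read-only peek, with a plain byte buffer standing in for skb data (not the kernel API):

```c
#include <stddef.h>
#include <stdint.h>

/* Peek at the IPv4 DSCP without modifying the buffer. The ToS byte is the
 * second byte of the IPv4 header; return -1 if it is not available, which
 * the caller treats as DSCP 0. */
static int peek_dscp_ipv4(const uint8_t *buf, size_t len, size_t net_off)
{
    if (net_off + 2 > len)
        return -1;              /* header byte out of bounds */
    return buf[net_off + 1] >> 2;
}
```

Deferring the `pskb_may_pull()`/`skb_try_make_writable()` cost to the wash path is the point of the patch: the common case only needs this read.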
+1
net/sched/sch_fq.c
··· 1075 1075 module_exit(fq_module_exit) 1076 1076 MODULE_AUTHOR("Eric Dumazet"); 1077 1077 MODULE_LICENSE("GPL"); 1078 + MODULE_DESCRIPTION("Fair Queue Packet Scheduler");
+1
net/sched/sch_fq_codel.c
··· 721 721 module_exit(fq_codel_module_exit) 722 722 MODULE_AUTHOR("Eric Dumazet"); 723 723 MODULE_LICENSE("GPL"); 724 + MODULE_DESCRIPTION("Fair Queue CoDel discipline");
+1
net/sched/sch_hhf.c
··· 721 721 MODULE_AUTHOR("Terry Lam"); 722 722 MODULE_AUTHOR("Nandita Dukkipati"); 723 723 MODULE_LICENSE("GPL"); 724 + MODULE_DESCRIPTION("Heavy-Hitter Filter (HHF)");
+4 -1
net/sctp/associola.c
··· 1565 1565 int sctp_assoc_set_bind_addr_from_ep(struct sctp_association *asoc, 1566 1566 enum sctp_scope scope, gfp_t gfp) 1567 1567 { 1568 + struct sock *sk = asoc->base.sk; 1568 1569 int flags; 1569 1570 1570 1571 /* Use scoping rules to determine the subset of addresses from 1571 1572 * the endpoint. 1572 1573 */ 1573 - flags = (PF_INET6 == asoc->base.sk->sk_family) ? SCTP_ADDR6_ALLOWED : 0; 1574 + flags = (PF_INET6 == sk->sk_family) ? SCTP_ADDR6_ALLOWED : 0; 1575 + if (!inet_v6_ipv6only(sk)) 1576 + flags |= SCTP_ADDR4_ALLOWED; 1574 1577 if (asoc->peer.ipv4_address) 1575 1578 flags |= SCTP_ADDR4_PEERSUPP; 1576 1579 if (asoc->peer.ipv6_address)
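The SCTP fix above gates IPv4 address copying on the socket not being v6-only, via a new `SCTP_ADDR4_ALLOWED` flag. A small sketch of the flag computation (flag values here are illustrative stand-ins, not the kernel's):

```c
/* Hypothetical flag bits mirroring the SCTP scoping flags */
#define ADDR4_ALLOWED  0x1
#define ADDR6_ALLOWED  0x2
#define ADDR4_PEERSUPP 0x4

/* IPv4 addresses are only eligible when the socket is not v6-only;
 * peer support is tracked as a separate bit, and both must be set
 * before an IPv4 address is copied (as in sctp_copy_local_addr_list). */
static int scoping_flags(int is_ipv6_sock, int v6only, int peer_ipv4)
{
    int flags = is_ipv6_sock ? ADDR6_ALLOWED : 0;

    if (!v6only)
        flags |= ADDR4_ALLOWED;
    if (peer_ipv4)
        flags |= ADDR4_PEERSUPP;
    return flags;
}
```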
+1
net/sctp/bind_addr.c
··· 461 461 * well as the remote peer. 462 462 */ 463 463 if ((((AF_INET == addr->sa.sa_family) && 464 + (flags & SCTP_ADDR4_ALLOWED) && 464 465 (flags & SCTP_ADDR4_PEERSUPP))) || 465 466 (((AF_INET6 == addr->sa.sa_family) && 466 467 (flags & SCTP_ADDR6_ALLOWED) &&
+2 -1
net/sctp/protocol.c
··· 148 148 * sock as well as the remote peer. 149 149 */ 150 150 if (addr->a.sa.sa_family == AF_INET && 151 - !(copy_flags & SCTP_ADDR4_PEERSUPP)) 151 + (!(copy_flags & SCTP_ADDR4_ALLOWED) || 152 + !(copy_flags & SCTP_ADDR4_PEERSUPP))) 152 153 continue; 153 154 if (addr->a.sa.sa_family == AF_INET6 && 154 155 (!(copy_flags & SCTP_ADDR6_ALLOWED) ||
+24
net/xfrm/Kconfig
··· 67 67 68 68 If unsure, say N. 69 69 70 + # This option selects XFRM_ALGO along with the AH authentication algorithms that 71 + # RFC 8221 lists as MUST be implemented. 72 + config XFRM_AH 73 + tristate 74 + select XFRM_ALGO 75 + select CRYPTO 76 + select CRYPTO_HMAC 77 + select CRYPTO_SHA256 78 + 79 + # This option selects XFRM_ALGO along with the ESP encryption and authentication 80 + # algorithms that RFC 8221 lists as MUST be implemented. 81 + config XFRM_ESP 82 + tristate 83 + select XFRM_ALGO 84 + select CRYPTO 85 + select CRYPTO_AES 86 + select CRYPTO_AUTHENC 87 + select CRYPTO_CBC 88 + select CRYPTO_ECHAINIV 89 + select CRYPTO_GCM 90 + select CRYPTO_HMAC 91 + select CRYPTO_SEQIV 92 + select CRYPTO_SHA256 93 + 70 94 config XFRM_IPCOMP 71 95 tristate 72 96 select XFRM_ALGO
+3 -1
net/xfrm/xfrm_device.c
··· 108 108 struct xfrm_offload *xo = xfrm_offload(skb); 109 109 struct sec_path *sp; 110 110 111 - if (!xo) 111 + if (!xo || (xo->flags & XFRM_XMIT)) 112 112 return skb; 113 113 114 114 if (!(features & NETIF_F_HW_ESP)) ··· 128 128 *again = true; 129 129 return skb; 130 130 } 131 + 132 + xo->flags |= XFRM_XMIT; 131 133 132 134 if (skb_is_gso(skb)) { 133 135 struct net_device *dev = skb->dev;
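The xfrm_device change tags a packet with `XFRM_XMIT` the first time it enters the validate path, so re-entry becomes a no-op. A minimal sketch of that mark-and-skip pattern (the flag value is an illustrative stand-in):

```c
#include <stdint.h>

#define XMIT_DONE 0x1   /* stand-in for the XFRM_XMIT flag bit */

/* Returns 1 the first time a given flags word passes through, 0 on any
 * re-entry, by setting a sticky bit on first sight. */
static int should_process(uint32_t *flags)
{
    if (*flags & XMIT_DONE)
        return 0;       /* already handled once: skip */
    *flags |= XMIT_DONE;
    return 1;
}
```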
-4
net/xfrm/xfrm_output.c
··· 574 574 switch (x->outer_mode.family) { 575 575 case AF_INET: 576 576 memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 577 - #ifdef CONFIG_NETFILTER 578 577 IPCB(skb)->flags |= IPSKB_XFRM_TRANSFORMED; 579 - #endif 580 578 break; 581 579 case AF_INET6: 582 580 memset(IP6CB(skb), 0, sizeof(*IP6CB(skb))); 583 581 584 - #ifdef CONFIG_NETFILTER 585 582 IP6CB(skb)->flags |= IP6SKB_XFRM_TRANSFORMED; 586 - #endif 587 583 break; 588 584 } 589 585
+2 -6
samples/bpf/xdp_monitor_user.c
··· 509 509 { 510 510 unsigned int nr_cpus = bpf_num_possible_cpus(); 511 511 void *array; 512 - size_t size; 513 512 514 - size = record_size * nr_cpus; 515 - array = malloc(size); 516 - memset(array, 0, size); 513 + array = calloc(nr_cpus, record_size); 517 514 if (!array) { 518 515 fprintf(stderr, "Mem alloc error (nr_cpus:%u)\n", nr_cpus); 519 516 exit(EXIT_FAIL_MEM); ··· 525 528 int i; 526 529 527 530 /* Alloc main stats_record structure */ 528 - rec = malloc(sizeof(*rec)); 529 - memset(rec, 0, sizeof(*rec)); 531 + rec = calloc(1, sizeof(*rec)); 530 532 if (!rec) { 531 533 fprintf(stderr, "Mem alloc error\n"); 532 534 exit(EXIT_FAIL_MEM);
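The samples/bpf cleanups above fix a `memset()` that ran before the NULL check on the `malloc()` result; `calloc()` both zeroes the memory and checks the size multiplication for overflow, so the check can come first. A sketch of the corrected allocation helper (name is illustrative):

```c
#include <stdlib.h>

/* Allocate a zeroed per-CPU record array. calloc() returns NULL on
 * allocation failure or nmemb*size overflow, and the result is only
 * touched after that is checked. */
static void *alloc_record_array(size_t nmemb, size_t size)
{
    void *array = calloc(nmemb, size);

    if (!array)
        return NULL;    /* caller reports the failure and exits */
    return array;
}
```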
+2 -5
samples/bpf/xdp_redirect_cpu_user.c
··· 207 207 { 208 208 unsigned int nr_cpus = bpf_num_possible_cpus(); 209 209 struct datarec *array; 210 - size_t size; 211 210 212 - size = sizeof(struct datarec) * nr_cpus; 213 - array = malloc(size); 214 - memset(array, 0, size); 211 + array = calloc(nr_cpus, sizeof(struct datarec)); 215 212 if (!array) { 216 213 fprintf(stderr, "Mem alloc error (nr_cpus:%u)\n", nr_cpus); 217 214 exit(EXIT_FAIL_MEM); ··· 223 226 224 227 size = sizeof(*rec) + n_cpus * sizeof(struct record); 225 228 rec = malloc(size); 226 - memset(rec, 0, size); 227 229 if (!rec) { 228 230 fprintf(stderr, "Mem alloc error\n"); 229 231 exit(EXIT_FAIL_MEM); 230 232 } 233 + memset(rec, 0, size); 231 234 rec->rx_cnt.cpu = alloc_record_per_cpu(); 232 235 rec->redir_err.cpu = alloc_record_per_cpu(); 233 236 rec->kthread.cpu = alloc_record_per_cpu();
+3 -10
samples/bpf/xdp_rxq_info_user.c
··· 198 198 { 199 199 unsigned int nr_cpus = bpf_num_possible_cpus(); 200 200 struct datarec *array; 201 - size_t size; 202 201 203 - size = sizeof(struct datarec) * nr_cpus; 204 - array = malloc(size); 205 - memset(array, 0, size); 202 + array = calloc(nr_cpus, sizeof(struct datarec)); 206 203 if (!array) { 207 204 fprintf(stderr, "Mem alloc error (nr_cpus:%u)\n", nr_cpus); 208 205 exit(EXIT_FAIL_MEM); ··· 211 214 { 212 215 unsigned int nr_rxqs = bpf_map__def(rx_queue_index_map)->max_entries; 213 216 struct record *array; 214 - size_t size; 215 217 216 - size = sizeof(struct record) * nr_rxqs; 217 - array = malloc(size); 218 - memset(array, 0, size); 218 + array = calloc(nr_rxqs, sizeof(struct record)); 219 219 if (!array) { 220 220 fprintf(stderr, "Mem alloc error (nr_rxqs:%u)\n", nr_rxqs); 221 221 exit(EXIT_FAIL_MEM); ··· 226 232 struct stats_record *rec; 227 233 int i; 228 234 229 - rec = malloc(sizeof(*rec)); 230 - memset(rec, 0, sizeof(*rec)); 235 + rec = calloc(1, sizeof(struct stats_record)); 231 236 if (!rec) { 232 237 fprintf(stderr, "Mem alloc error\n"); 233 238 exit(EXIT_FAIL_MEM);
+1 -1
tools/bpf/bpftool/Documentation/bpftool-map.rst
··· 49 49 | | **lru_percpu_hash** | **lpm_trie** | **array_of_maps** | **hash_of_maps** 50 50 | | **devmap** | **devmap_hash** | **sockmap** | **cpumap** | **xskmap** | **sockhash** 51 51 | | **cgroup_storage** | **reuseport_sockarray** | **percpu_cgroup_storage** 52 - | | **queue** | **stack** | **sk_storage** | **struct_ops** } 52 + | | **queue** | **stack** | **sk_storage** | **struct_ops** | **ringbuf** } 53 53 54 54 DESCRIPTION 55 55 ===========
+2 -1
tools/bpf/bpftool/map.c
··· 49 49 [BPF_MAP_TYPE_STACK] = "stack", 50 50 [BPF_MAP_TYPE_SK_STORAGE] = "sk_storage", 51 51 [BPF_MAP_TYPE_STRUCT_OPS] = "struct_ops", 52 + [BPF_MAP_TYPE_RINGBUF] = "ringbuf", 52 53 }; 53 54 54 55 const size_t map_type_name_size = ARRAY_SIZE(map_type_name); ··· 1591 1590 " lru_percpu_hash | lpm_trie | array_of_maps | hash_of_maps |\n" 1592 1591 " devmap | devmap_hash | sockmap | cpumap | xskmap | sockhash |\n" 1593 1592 " cgroup_storage | reuseport_sockarray | percpu_cgroup_storage |\n" 1594 - " queue | stack | sk_storage | struct_ops }\n" 1593 + " queue | stack | sk_storage | struct_ops | ringbuf }\n" 1595 1594 " " HELP_SPEC_OPTIONS "\n" 1596 1595 "", 1597 1596 bin_name, argv[-2]);
+1 -1
tools/include/uapi/linux/bpf.h
··· 3168 3168 * Return 3169 3169 * The id is returned or 0 in case the id could not be retrieved. 3170 3170 * 3171 - * void *bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags) 3171 + * int bpf_ringbuf_output(void *ringbuf, void *data, u64 size, u64 flags) 3172 3172 * Description 3173 3173 * Copy *size* bytes from *data* into a ring buffer *ringbuf*. 3174 3174 * If BPF_RB_NO_WAKEUP is specified in *flags*, no notification of
+39 -7
tools/testing/selftests/bpf/prog_tests/sockopt_sk.c
··· 13 13 char cc[16]; /* TCP_CA_NAME_MAX */ 14 14 } buf = {}; 15 15 socklen_t optlen; 16 + char *big_buf = NULL; 16 17 17 18 fd = socket(AF_INET, SOCK_STREAM, 0); 18 19 if (fd < 0) { ··· 23 22 24 23 /* IP_TOS - BPF bypass */ 25 24 26 - buf.u8[0] = 0x08; 27 - err = setsockopt(fd, SOL_IP, IP_TOS, &buf, 1); 25 + optlen = getpagesize() * 2; 26 + big_buf = calloc(1, optlen); 27 + if (!big_buf) { 28 + log_err("Couldn't allocate two pages"); 29 + goto err; 30 + } 31 + 32 + *(int *)big_buf = 0x08; 33 + err = setsockopt(fd, SOL_IP, IP_TOS, big_buf, optlen); 28 34 if (err) { 29 35 log_err("Failed to call setsockopt(IP_TOS)"); 30 36 goto err; 31 37 } 32 38 33 - buf.u8[0] = 0x00; 39 + memset(big_buf, 0, optlen); 34 40 optlen = 1; 35 - err = getsockopt(fd, SOL_IP, IP_TOS, &buf, &optlen); 41 + err = getsockopt(fd, SOL_IP, IP_TOS, big_buf, &optlen); 36 42 if (err) { 37 43 log_err("Failed to call getsockopt(IP_TOS)"); 38 44 goto err; 39 45 } 40 46 41 - if (buf.u8[0] != 0x08) { 42 - log_err("Unexpected getsockopt(IP_TOS) buf[0] 0x%02x != 0x08", 43 - buf.u8[0]); 47 + if (*(int *)big_buf != 0x08) { 48 + log_err("Unexpected getsockopt(IP_TOS) optval 0x%x != 0x08", 49 + *(int *)big_buf); 44 50 goto err; 45 51 } 46 52 ··· 84 76 if (buf.u8[0] != 0x01) { 85 77 log_err("Unexpected buf[0] 0x%02x != 0x01", buf.u8[0]); 86 78 goto err; 79 + } 80 + 81 + /* IP_FREEBIND - BPF can't access optval past PAGE_SIZE */ 82 + 83 + optlen = getpagesize() * 2; 84 + memset(big_buf, 0, optlen); 85 + 86 + err = setsockopt(fd, SOL_IP, IP_FREEBIND, big_buf, optlen); 87 + if (err != 0) { 88 + log_err("Failed to call setsockopt, ret=%d", err); 89 + goto err; 90 + } 91 + 92 + err = getsockopt(fd, SOL_IP, IP_FREEBIND, big_buf, &optlen); 93 + if (err != 0) { 94 + log_err("Failed to call getsockopt, ret=%d", err); 95 + goto err; 96 + } 97 + 98 + if (optlen != 1 || *(__u8 *)big_buf != 0x55) { 99 + log_err("Unexpected IP_FREEBIND getsockopt, optlen=%d, optval=0x%x", 100 + optlen, *(__u8 *)big_buf); 87 101 } 88 102 89 
103 /* SO_SNDBUF is overwritten */ ··· 154 124 goto err; 155 125 } 156 126 127 + free(big_buf); 157 128 close(fd); 158 129 return 0; 159 130 err: 131 + free(big_buf); 160 132 close(fd); 161 133 return -1; 162 134 }
+2 -3
tools/testing/selftests/bpf/progs/bpf_cubic.c
··· 480 480 481 481 if (hystart_detect & HYSTART_DELAY) { 482 482 /* obtain the minimum delay of more than sampling packets */ 483 + if (ca->curr_rtt > delay) 484 + ca->curr_rtt = delay; 483 485 if (ca->sample_cnt < HYSTART_MIN_SAMPLES) { 484 - if (ca->curr_rtt > delay) 485 - ca->curr_rtt = delay; 486 - 487 486 ca->sample_cnt++; 488 487 } else { 489 488 if (ca->curr_rtt > ca->delay_min +
+52 -2
tools/testing/selftests/bpf/progs/sockopt_sk.c
··· 8 8 char _license[] SEC("license") = "GPL"; 9 9 __u32 _version SEC("version") = 1; 10 10 11 + #ifndef PAGE_SIZE 12 + #define PAGE_SIZE 4096 13 + #endif 14 + 11 15 #define SOL_CUSTOM 0xdeadbeef 12 16 13 17 struct sockopt_sk { ··· 32 28 __u8 *optval = ctx->optval; 33 29 struct sockopt_sk *storage; 34 30 35 - if (ctx->level == SOL_IP && ctx->optname == IP_TOS) 31 + if (ctx->level == SOL_IP && ctx->optname == IP_TOS) { 36 32 /* Not interested in SOL_IP:IP_TOS; 37 33 * let next BPF program in the cgroup chain or kernel 38 34 * handle it. 39 35 */ 36 + ctx->optlen = 0; /* bypass optval>PAGE_SIZE */ 40 37 return 1; 38 + } 41 39 42 40 if (ctx->level == SOL_SOCKET && ctx->optname == SO_SNDBUF) { 43 41 /* Not interested in SOL_SOCKET:SO_SNDBUF; ··· 54 48 * let next BPF program in the cgroup chain or kernel 55 49 * handle it. 56 50 */ 51 + return 1; 52 + } 53 + 54 + if (ctx->level == SOL_IP && ctx->optname == IP_FREEBIND) { 55 + if (optval + 1 > optval_end) 56 + return 0; /* EPERM, bounds check */ 57 + 58 + ctx->retval = 0; /* Reset system call return value to zero */ 59 + 60 + /* Always export 0x55 */ 61 + optval[0] = 0x55; 62 + ctx->optlen = 1; 63 + 64 + /* Userspace buffer is PAGE_SIZE * 2, but BPF 65 + * program can only see the first PAGE_SIZE 66 + * bytes of data. 67 + */ 68 + if (optval_end - optval != PAGE_SIZE) 69 + return 0; /* EPERM, unexpected data size */ 70 + 57 71 return 1; 58 72 } 59 73 ··· 107 81 __u8 *optval = ctx->optval; 108 82 struct sockopt_sk *storage; 109 83 110 - if (ctx->level == SOL_IP && ctx->optname == IP_TOS) 84 + if (ctx->level == SOL_IP && ctx->optname == IP_TOS) { 111 85 /* Not interested in SOL_IP:IP_TOS; 112 86 * let next BPF program in the cgroup chain or kernel 113 87 * handle it. 
114 88 */ 89 + ctx->optlen = 0; /* bypass optval>PAGE_SIZE */ 115 90 return 1; 91 + } 116 92 117 93 if (ctx->level == SOL_SOCKET && ctx->optname == SO_SNDBUF) { 118 94 /* Overwrite SO_SNDBUF value */ ··· 136 108 137 109 memcpy(optval, "cubic", 5); 138 110 ctx->optlen = 5; 111 + 112 + return 1; 113 + } 114 + 115 + if (ctx->level == SOL_IP && ctx->optname == IP_FREEBIND) { 116 + /* Original optlen is larger than PAGE_SIZE. */ 117 + if (ctx->optlen != PAGE_SIZE * 2) 118 + return 0; /* EPERM, unexpected data size */ 119 + 120 + if (optval + 1 > optval_end) 121 + return 0; /* EPERM, bounds check */ 122 + 123 + /* Make sure we can trim the buffer. */ 124 + optval[0] = 0; 125 + ctx->optlen = 1; 126 + 127 + /* Userspace buffer is PAGE_SIZE * 2, but BPF 128 + * program can only see the first PAGE_SIZE 129 + * bytes of data. 130 + */ 131 + if (optval_end - optval != PAGE_SIZE) 132 + return 0; /* EPERM, unexpected data size */ 139 133 140 134 return 1; 141 135 }
+26 -7
tools/testing/selftests/net/so_txtime.c
··· 15 15 #include <inttypes.h> 16 16 #include <linux/net_tstamp.h> 17 17 #include <linux/errqueue.h> 18 + #include <linux/if_ether.h> 18 19 #include <linux/ipv6.h> 19 - #include <linux/tcp.h> 20 + #include <linux/udp.h> 20 21 #include <stdbool.h> 21 22 #include <stdlib.h> 22 23 #include <stdio.h> ··· 141 140 { 142 141 char control[CMSG_SPACE(sizeof(struct sock_extended_err)) + 143 142 CMSG_SPACE(sizeof(struct sockaddr_in6))] = {0}; 144 - char data[sizeof(struct ipv6hdr) + 145 - sizeof(struct tcphdr) + 1]; 143 + char data[sizeof(struct ethhdr) + sizeof(struct ipv6hdr) + 144 + sizeof(struct udphdr) + 1]; 146 145 struct sock_extended_err *err; 147 146 struct msghdr msg = {0}; 148 147 struct iovec iov = {0}; ··· 160 159 msg.msg_controllen = sizeof(control); 161 160 162 161 while (1) { 162 + const char *reason; 163 + 163 164 ret = recvmsg(fdt, &msg, MSG_ERRQUEUE); 164 165 if (ret == -1 && errno == EAGAIN) 165 166 break; ··· 179 176 err = (struct sock_extended_err *)CMSG_DATA(cm); 180 177 if (err->ee_origin != SO_EE_ORIGIN_TXTIME) 181 178 error(1, 0, "errqueue: origin 0x%x\n", err->ee_origin); 182 - if (err->ee_code != ECANCELED) 183 - error(1, 0, "errqueue: code 0x%x\n", err->ee_code); 179 + 180 + switch (err->ee_errno) { 181 + case ECANCELED: 182 + if (err->ee_code != SO_EE_CODE_TXTIME_MISSED) 183 + error(1, 0, "errqueue: unknown ECANCELED %u\n", 184 + err->ee_code); 185 + reason = "missed txtime"; 186 + break; 187 + case EINVAL: 188 + if (err->ee_code != SO_EE_CODE_TXTIME_INVALID_PARAM) 189 + error(1, 0, "errqueue: unknown EINVAL %u\n", 190 + err->ee_code); 191 + reason = "invalid txtime"; 192 + break; 193 + default: 194 + error(1, 0, "errqueue: errno %u code %u\n", 195 + err->ee_errno, err->ee_code); 196 + }; 184 197 185 198 tstamp = ((int64_t) err->ee_data) << 32 | err->ee_info; 186 199 tstamp -= (int64_t) glob_tstart; 187 200 tstamp /= 1000 * 1000; 188 - fprintf(stderr, "send: pkt %c at %" PRId64 "ms dropped\n", 189 - data[ret - 1], tstamp); 201 + fprintf(stderr, 
"send: pkt %c at %" PRId64 "ms dropped: %s\n", 202 + data[ret - 1], tstamp, reason); 190 203 191 204 msg.msg_flags = 0; 192 205 msg.msg_controllen = sizeof(control);
+1 -1
tools/testing/selftests/netfilter/Makefile
··· 3 3 4 4 TEST_PROGS := nft_trans_stress.sh nft_nat.sh bridge_brouter.sh \ 5 5 conntrack_icmp_related.sh nft_flowtable.sh ipvs.sh \ 6 - nft_concat_range.sh \ 6 + nft_concat_range.sh nft_conntrack_helper.sh \ 7 7 nft_queue.sh 8 8 9 9 LDLIBS = -lmnl
+175
tools/testing/selftests/netfilter/nft_conntrack_helper.sh
··· 1 + #!/bin/bash 2 + # 3 + # This tests connection tracking helper assignment: 4 + # 1. can attach ftp helper to a connection from nft ruleset. 5 + # 2. auto-assign still works. 6 + # 7 + # Kselftest framework requirement - SKIP code is 4. 8 + ksft_skip=4 9 + ret=0 10 + 11 + sfx=$(mktemp -u "XXXXXXXX") 12 + ns1="ns1-$sfx" 13 + ns2="ns2-$sfx" 14 + testipv6=1 15 + 16 + cleanup() 17 + { 18 + ip netns del ${ns1} 19 + ip netns del ${ns2} 20 + } 21 + 22 + nft --version > /dev/null 2>&1 23 + if [ $? -ne 0 ];then 24 + echo "SKIP: Could not run test without nft tool" 25 + exit $ksft_skip 26 + fi 27 + 28 + ip -Version > /dev/null 2>&1 29 + if [ $? -ne 0 ];then 30 + echo "SKIP: Could not run test without ip tool" 31 + exit $ksft_skip 32 + fi 33 + 34 + conntrack -V > /dev/null 2>&1 35 + if [ $? -ne 0 ];then 36 + echo "SKIP: Could not run test without conntrack tool" 37 + exit $ksft_skip 38 + fi 39 + 40 + which nc >/dev/null 2>&1 41 + if [ $? -ne 0 ];then 42 + echo "SKIP: Could not run test without netcat tool" 43 + exit $ksft_skip 44 + fi 45 + 46 + trap cleanup EXIT 47 + 48 + ip netns add ${ns1} 49 + ip netns add ${ns2} 50 + 51 + ip link add veth0 netns ${ns1} type veth peer name veth0 netns ${ns2} > /dev/null 2>&1 52 + if [ $? 
-ne 0 ];then 53 + echo "SKIP: No virtual ethernet pair device support in kernel" 54 + exit $ksft_skip 55 + fi 56 + 57 + ip -net ${ns1} link set lo up 58 + ip -net ${ns1} link set veth0 up 59 + 60 + ip -net ${ns2} link set lo up 61 + ip -net ${ns2} link set veth0 up 62 + 63 + ip -net ${ns1} addr add 10.0.1.1/24 dev veth0 64 + ip -net ${ns1} addr add dead:1::1/64 dev veth0 65 + 66 + ip -net ${ns2} addr add 10.0.1.2/24 dev veth0 67 + ip -net ${ns2} addr add dead:1::2/64 dev veth0 68 + 69 + load_ruleset_family() { 70 + local family=$1 71 + local ns=$2 72 + 73 + ip netns exec ${ns} nft -f - <<EOF 74 + table $family raw { 75 + ct helper ftp { 76 + type "ftp" protocol tcp 77 + } 78 + chain pre { 79 + type filter hook prerouting priority 0; policy accept; 80 + tcp dport 2121 ct helper set "ftp" 81 + } 82 + chain output { 83 + type filter hook output priority 0; policy accept; 84 + tcp dport 2121 ct helper set "ftp" 85 + } 86 + } 87 + EOF 88 + return $? 89 + } 90 + 91 + check_for_helper() 92 + { 93 + local netns=$1 94 + local message=$2 95 + local port=$3 96 + 97 + ip netns exec ${netns} conntrack -L -p tcp --dport $port 2> /dev/null |grep -q 'helper=ftp' 98 + if [ $? 
-ne 0 ] ; then 99 + echo "FAIL: ${netns} did not show attached helper $message" 1>&2 100 + ret=1 101 + fi 102 + 103 + echo "PASS: ${netns} connection on port $port has ftp helper attached" 1>&2 104 + return 0 105 + } 106 + 107 + test_helper() 108 + { 109 + local port=$1 110 + local msg=$2 111 + 112 + sleep 3 | ip netns exec ${ns2} nc -w 2 -l -p $port > /dev/null & 113 + 114 + sleep 1 115 + sleep 1 | ip netns exec ${ns1} nc -w 2 10.0.1.2 $port > /dev/null & 116 + 117 + check_for_helper "$ns1" "ip $msg" $port 118 + check_for_helper "$ns2" "ip $msg" $port 119 + 120 + wait 121 + 122 + if [ $testipv6 -eq 0 ] ;then 123 + return 0 124 + fi 125 + 126 + ip netns exec ${ns1} conntrack -F 2> /dev/null 127 + ip netns exec ${ns2} conntrack -F 2> /dev/null 128 + 129 + sleep 3 | ip netns exec ${ns2} nc -w 2 -6 -l -p $port > /dev/null & 130 + 131 + sleep 1 132 + sleep 1 | ip netns exec ${ns1} nc -w 2 -6 dead:1::2 $port > /dev/null & 133 + 134 + check_for_helper "$ns1" "ipv6 $msg" $port 135 + check_for_helper "$ns2" "ipv6 $msg" $port 136 + 137 + wait 138 + } 139 + 140 + load_ruleset_family ip ${ns1} 141 + if [ $? -ne 0 ];then 142 + echo "FAIL: ${ns1} cannot load ip ruleset" 1>&2 143 + exit 1 144 + fi 145 + 146 + load_ruleset_family ip6 ${ns1} 147 + if [ $? -ne 0 ];then 148 + echo "SKIP: ${ns1} cannot load ip6 ruleset" 1>&2 149 + testipv6=0 150 + fi 151 + 152 + load_ruleset_family inet ${ns2} 153 + if [ $? -ne 0 ];then 154 + echo "SKIP: ${ns1} cannot load inet ruleset" 1>&2 155 + load_ruleset_family ip ${ns2} 156 + if [ $? -ne 0 ];then 157 + echo "FAIL: ${ns2} cannot load ip ruleset" 1>&2 158 + exit 1 159 + fi 160 + 161 + if [ $testipv6 -eq 1 ] ;then 162 + load_ruleset_family ip6 ${ns2} 163 + if [ $? 
-ne 0 ];then 164 + echo "FAIL: ${ns2} cannot load ip6 ruleset" 1>&2 165 + exit 1 166 + fi 167 + fi 168 + fi 169 + 170 + test_helper 2121 "set via ruleset" 171 + ip netns exec ${ns1} sysctl -q 'net.netfilter.nf_conntrack_helper=1' 172 + ip netns exec ${ns2} sysctl -q 'net.netfilter.nf_conntrack_helper=1' 173 + test_helper 21 "auto-assign" 174 + 175 + exit $ret
+2 -2
tools/testing/selftests/tc-testing/tc-tests/actions/bpf.json
··· 260 260 255 261 261 ] 262 262 ], 263 - "cmdUnderTest": "$TC action add action bpf bytecode '4,40 0 0 12,21 0 1 2054,6 0 0 262144,6 0 0 0' index 4294967296 cookie 12345", 263 + "cmdUnderTest": "$TC action add action bpf bytecode '4,40 0 0 12,21 0 1 2054,6 0 0 262144,6 0 0 0' index 4294967296 cookie 123456", 264 264 "expExitCode": "255", 265 265 "verifyCmd": "$TC action ls action bpf", 266 - "matchPattern": "action order [0-9]*: bpf bytecode '4,40 0 0 12,21 0 1 2048,6 0 0 262144,6 0 0 0' default-action pipe.*cookie 12345", 266 + "matchPattern": "action order [0-9]*: bpf bytecode '4,40 0 0 12,21 0 1 2048,6 0 0 262144,6 0 0 0' default-action pipe.*cookie 123456", 267 267 "matchCount": "0", 268 268 "teardown": [ 269 269 "$TC action flush action bpf"
+2 -2
tools/testing/selftests/tc-testing/tc-tests/actions/csum.json
··· 469 469 255 470 470 ] 471 471 ], 472 - "cmdUnderTest": "bash -c \"for i in \\`seq 1 32\\`; do cmd=\\\"action csum tcp continue index \\$i cookie aaabbbcccdddeee \\\"; args=\"\\$args\\$cmd\"; done && $TC actions add \\$args\"", 472 + "cmdUnderTest": "bash -c \"for i in \\`seq 1 32\\`; do cmd=\\\"action csum tcp continue index \\$i cookie 123456789abcde \\\"; args=\"\\$args\\$cmd\"; done && $TC actions add \\$args\"", 473 473 "expExitCode": "0", 474 474 "verifyCmd": "$TC actions ls action csum", 475 475 "matchPattern": "^[ \t]+index [0-9]* ref", ··· 492 492 1, 493 493 255 494 494 ], 495 - "bash -c \"for i in \\`seq 1 32\\`; do cmd=\\\"action csum tcp continue index \\$i cookie aaabbbcccdddeee \\\"; args=\"\\$args\\$cmd\"; done && $TC actions add \\$args\"" 495 + "bash -c \"for i in \\`seq 1 32\\`; do cmd=\\\"action csum tcp continue index \\$i cookie 123456789abcde \\\"; args=\"\\$args\\$cmd\"; done && $TC actions add \\$args\"" 496 496 ], 497 497 "cmdUnderTest": "bash -c \"for i in \\`seq 1 32\\`; do cmd=\\\"action csum index \\$i \\\"; args=\"\\$args\\$cmd\"; done && $TC actions del \\$args\"", 498 498 "expExitCode": "0",
+10 -10
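The replacement cookies in these tests are all even-length hex strings; a tc action cookie is parsed as a hex byte blob, so an odd number of digits (e.g. the 15-character "aaabbbcccdddeee") cannot be converted to bytes. A minimal sketch of that constraint, where `is_valid_tc_cookie` and the 16-byte size cap are illustrative assumptions rather than the kernel's actual parser:

```python
def is_valid_tc_cookie(cookie: str, max_bytes: int = 16) -> bool:
    """Check that a cookie is a non-empty, even-length hex string within the size cap."""
    if not cookie or len(cookie) % 2 != 0 or len(cookie) // 2 > max_bytes:
        return False
    try:
        bytes.fromhex(cookie)  # every digit pair must be valid hex
        return True
    except ValueError:
        return False

print(is_valid_tc_cookie("aaabbbcccdddeee"))  # 15 digits, odd length -> False
print(is_valid_tc_cookie("123456789abcde"))   # 14 digits, even length -> True
```

The same reasoning applies to the bpf.json change above ("12345", 5 digits, becomes "123456").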
tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json
@@ -629,7 +629,7 @@
         "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:00880022 index 1",
         "expExitCode": "0",
         "verifyCmd": "$TC actions get action tunnel_key index 1",
-        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:00880022.*index 1",
+        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt[s]? 0102:80:00880022.*index 1",
         "matchCount": "1",
         "teardown": [
             "$TC actions flush action tunnel_key"
@@ -653,7 +653,7 @@
         "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:00880022,0408:42:0040007611223344,0111:02:1020304011223344 index 1",
         "expExitCode": "0",
         "verifyCmd": "$TC actions get action tunnel_key index 1",
-        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:00880022,0408:42:0040007611223344,0111:02:1020304011223344.*index 1",
+        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt[s]? 0102:80:00880022,0408:42:0040007611223344,0111:02:1020304011223344.*index 1",
         "matchCount": "1",
         "teardown": [
             "$TC actions flush action tunnel_key"
@@ -677,7 +677,7 @@
         "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 824212:80:00880022 index 1",
         "expExitCode": "255",
         "verifyCmd": "$TC actions get action tunnel_key index 1",
-        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 824212:80:00880022.*index 1",
+        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt[s]? 824212:80:00880022.*index 1",
         "matchCount": "0",
         "teardown": [
             "$TC actions flush action tunnel_key"
@@ -701,7 +701,7 @@
         "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:4224:00880022 index 1",
         "expExitCode": "255",
         "verifyCmd": "$TC actions get action tunnel_key index 1",
-        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:4224:00880022.*index 1",
+        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt[s]? 0102:4224:00880022.*index 1",
         "matchCount": "0",
         "teardown": [
             "$TC actions flush action tunnel_key"
@@ -725,7 +725,7 @@
         "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:4288 index 1",
         "expExitCode": "255",
         "verifyCmd": "$TC actions get action tunnel_key index 1",
-        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:4288.*index 1",
+        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt[s]? 0102:80:4288.*index 1",
         "matchCount": "0",
         "teardown": [
             "$TC actions flush action tunnel_key"
@@ -749,7 +749,7 @@
         "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:4288428822 index 1",
         "expExitCode": "255",
         "verifyCmd": "$TC actions get action tunnel_key index 1",
-        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:4288428822.*index 1",
+        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt[s]? 0102:80:4288428822.*index 1",
         "matchCount": "0",
         "teardown": [
             "$TC actions flush action tunnel_key"
@@ -773,7 +773,7 @@
         "cmdUnderTest": "$TC actions add action tunnel_key set src_ip 1.1.1.1 dst_ip 2.2.2.2 id 42 dst_port 6081 geneve_opts 0102:80:00880022,0408:42: index 1",
         "expExitCode": "255",
         "verifyCmd": "$TC actions get action tunnel_key index 1",
-        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt 0102:80:00880022,0408:42:.*index 1",
+        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 1.1.1.1.*dst_ip 2.2.2.2.*key_id 42.*dst_port 6081.*geneve_opt[s]? 0102:80:00880022,0408:42:.*index 1",
         "matchCount": "0",
         "teardown": [
             "$TC actions flush action tunnel_key"
@@ -818,12 +818,12 @@
                 1,
                 255
             ],
-            "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 dst_port 3128 nocsum id 1 index 1 cookie aabbccddeeff112233445566778800a"
+            "$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 dst_port 3128 nocsum id 1 index 1 cookie 123456"
         ],
-        "cmdUnderTest": "$TC actions replace action tunnel_key set src_ip 11.11.11.1 dst_ip 21.21.21.2 dst_port 3129 id 11 csum reclassify index 1 cookie a1b1c1d1",
+        "cmdUnderTest": "$TC actions replace action tunnel_key set src_ip 11.11.11.1 dst_ip 21.21.21.2 dst_port 3129 id 11 csum reclassify index 1 cookie 123456",
         "expExitCode": "0",
         "verifyCmd": "$TC actions get action tunnel_key index 1",
-        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 11.11.11.1.*dst_ip 21.21.21.2.*key_id 11.*dst_port 3129.*csum reclassify.*index 1.*cookie a1b1c1d1",
+        "matchPattern": "action order [0-9]+: tunnel_key.*set.*src_ip 11.11.11.1.*dst_ip 21.21.21.2.*key_id 11.*dst_port 3129.*csum reclassify.*index 1.*cookie 123456",
         "matchCount": "1",
         "teardown": [
             "$TC actions flush action tunnel_key"
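The `geneve_opt[s]?` tweak in the match patterns above lets the tests accept either spelling in a tc action dump. A quick sketch with `grep -E` (the sample lines below are illustrative, not captured tc output):

```shell
pattern='geneve_opt[s]? 0102:80:00880022'
# Older singular spelling in the dump:
echo 'key_id 42 dst_port 6081 geneve_opt 0102:80:00880022 index 1' | grep -Eq "$pattern" && echo 'singular matches'
# Plural spelling, as typed on the command line:
echo 'key_id 42 dst_port 6081 geneve_opts 0102:80:00880022 index 1' | grep -Eq "$pattern" && echo 'plural matches'
```

Both invocations print, so the tests no longer depend on which form the installed tc binary emits.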
+12 -1
tools/testing/selftests/wireguard/netns.sh
@@ -587,9 +587,20 @@
     kill $ncat_pid
     ip0 link del wg0
 
+    # Ensure there aren't circular reference loops
+    ip1 link add wg1 type wireguard
+    ip2 link add wg2 type wireguard
+    ip1 link set wg1 netns $netns2
+    ip2 link set wg2 netns $netns1
+    pp ip netns delete $netns1
+    pp ip netns delete $netns2
+    pp ip netns add $netns1
+    pp ip netns add $netns2
+
+    sleep 2 # Wait for cleanup and grace periods
     declare -A objects
     while read -t 0.1 -r line 2>/dev/null || [[ $? -ne 142 ]]; do
-        [[ $line =~ .*(wg[0-9]+:\ [A-Z][a-z]+\ [0-9]+)\ .*(created|destroyed).* ]] || continue
+        [[ $line =~ .*(wg[0-9]+:\ [A-Z][a-z]+\ ?[0-9]*)\ .*(created|destroyed).* ]] || continue
         objects["${BASH_REMATCH[1]}"]+="${BASH_REMATCH[2]}"
     done < /dev/kmsg
     alldeleted=1
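The regex relaxation in the kmsg loop matters because peer events carry a numeric id while interface events do not, so the old mandatory `\ [0-9]+` silently skipped interface lines. A sketch of the difference (the sample messages are illustrative of wireguard's debug output format; the patterns are written with plain spaces since they live in variables here rather than inline in `[[ ]]`):

```shell
new_re='.*(wg[0-9]+: [A-Z][a-z]+ ?[0-9]*) .*(created|destroyed).*'
old_re='.*(wg[0-9]+: [A-Z][a-z]+ [0-9]+) .*(created|destroyed).*'

for line in 'wg0: Interface created' 'wg0: Peer 1 destroyed'; do
    # The relaxed pattern captures the object name with or without an id
    [[ $line =~ $new_re ]] && echo "new pattern matched (${BASH_REMATCH[2]})"
done

# The old pattern requires trailing digits, so interface events never match
[[ 'wg0: Interface created' =~ $old_re ]] || echo 'old pattern misses interface events'
```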