Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-5.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from bpf and netfilter.

Current release - regressions:

- sock: fix parameter order in sock_setsockopt()

Current release - new code bugs:

- netfilter: nft_last:
    - fix incorrect arithmetic when restoring last used
    - honor NFTA_LAST_SET on restoration

Previous releases - regressions:

- udp: properly flush normal packet at GRO time

- sfc: ensure correct number of XDP queues; don't allow enabling the
  feature if there aren't sufficient resources to Tx from any CPU

- dsa: sja1105: fix address learning getting disabled on the CPU port

- mptcp: addresses an rmem accounting issue that could keep packets in
  subflow receive buffers longer than necessary, delaying MPTCP-level
  ACKs

- ip_tunnel: fix mtu calculation for ETHER tunnel devices

- do not reuse skbs allocated from skbuff_fclone_cache in the napi
  skb cache, we'd try to return them to the wrong slab cache

- tcp: consistently disable header prediction for mptcp

Previous releases - always broken:

- bpf: fix subprog poke descriptor tracking use-after-free

- ipv6:
    - allocate enough headroom in ip6_finish_output2() in case
      iptables TEE is used
    - tcp: drop silly ICMPv6 packet too big messages to avoid
      expensive and pointless lookups (which may serve as a DDOS
      vector)
    - make sure fwmark is copied in SYNACK packets
    - fix 'disable_policy' for forwarded packets (align with IPv4)

- netfilter: conntrack:
    - do not renew entry stuck in tcp SYN_SENT state
    - do not mark RST in the reply direction coming after SYN packet
      for an out-of-sync entry

- mptcp: cleanly handle error conditions with MP_JOIN and syncookies

- mptcp: fix double free when rejecting a join due to port mismatch

- validate lwtstate->data before returning from skb_tunnel_info()

- tcp: call sk_wmem_schedule before sk_mem_charge in zerocopy path

- mt76: mt7921: continue to probe driver when fw already downloaded

- bonding: fix multiple issues with offloading IPsec to (thru?) bond

- stmmac: ptp: fix issues around Qbv support and setting time back

- bcmgenet: always clear wake-up based on energy detection

Misc:

- sctp: move 198 addresses from unusable to private scope

- ptp: support virtual clocks and timestamping

- openvswitch: optimize operation for key comparison"

* tag 'net-5.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (158 commits)
net: dsa: properly check for the bridge_leave methods in dsa_switch_bridge_leave()
sfc: add logs explaining XDP_TX/REDIRECT is not available
sfc: ensure correct number of XDP queues
sfc: fix lack of XDP TX queues - error XDP TX failed (-22)
net: fddi: fix UAF in fza_probe
net: dsa: sja1105: fix address learning getting disabled on the CPU port
net: ocelot: fix switchdev objects synced for wrong netdev with LAG offload
net: Use nlmsg_unicast() instead of netlink_unicast()
octeontx2-pf: Fix uninitialized boolean variable pps
ipv6: allocate enough headroom in ip6_finish_output2()
net: hdlc: rename 'mod_init' & 'mod_exit' functions to be module-specific
net: bridge: multicast: fix MRD advertisement router port marking race
net: bridge: multicast: fix PIM hello router port marking race
net: phy: marvell10g: fix differentiation of 88X3310 from 88X3340
dsa: fix for_each_child.cocci warnings
virtio_net: check virtqueue_add_sgs() return value
mptcp: properly account bulk freed memory
selftests: mptcp: fix case multiple subflows limited by server
mptcp: avoid processing packet if a subflow reset
mptcp: fix syncookie process if mptcp can not_accept new subflow
...

+3598 -2349
+20
Documentation/ABI/testing/sysfs-ptp
···
  33   33      frequency adjustment value (a positive integer) in
  34   34      parts per billion.
  35   35
       36  +   What:           /sys/class/ptp/ptpN/max_vclocks
       37  +   Date:           May 2021
       38  +   Contact:        Yangbo Lu <yangbo.lu@nxp.com>
       39  +   Description:
       40  +           This file contains the maximum number of ptp vclocks.
       41  +           Write integer to re-configure it.
       42  +
  36   43      What:           /sys/class/ptp/ptpN/n_alarms
  37   44      Date:           September 2010
  38   45      Contact:        Richard Cochran <richardcochran@gmail.com>
···
  67   60      Description:
  68   61              This file contains the number of programmable pins
  69   62              offered by the PTP hardware clock.
       63  +
       64  +   What:           /sys/class/ptp/ptpN/n_vclocks
       65  +   Date:           May 2021
       66  +   Contact:        Yangbo Lu <yangbo.lu@nxp.com>
       67  +   Description:
       68  +           This file contains the number of virtual PTP clocks in
       69  +           use. By default, the value is 0 meaning that only the
       70  +           physical clock is in use. Setting the value creates
       71  +           the corresponding number of virtual clocks and causes
       72  +           the physical clock to become free running. Setting the
       73  +           value back to 0 deletes the virtual clocks and
       74  +           switches the physical clock back to normal, adjustable
       75  +           operation.
  70   76
  71   77      What:           /sys/class/ptp/ptpN/pins
  72   78      Date:           March 2014
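The new sysfs knobs above can be exercised from a shell. A minimal sketch, assuming root privileges and a PHC exposed as /sys/class/ptp/ptp0 (the device name is illustrative):

```shell
# Raise the ceiling on how many virtual clocks ptp0 may carry.
echo 4 > /sys/class/ptp/ptp0/max_vclocks

# Create two virtual clocks; the physical clock becomes free running
# and new ptp class devices appear for the vclocks.
echo 2 > /sys/class/ptp/ptp0/n_vclocks

# Delete the virtual clocks again, restoring normal adjustable operation.
echo 0 > /sys/class/ptp/ptp0/n_vclocks
```

Writes fail with EINVAL if n_vclocks would exceed max_vclocks, so the ceiling must be raised first.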
+1 -1
Documentation/devicetree/bindings/net/gpmc-eth.txt
···
  13   13
  14   14   For the properties relevant to the ethernet controller connected to the GPMC
  15   15   refer to the binding documentation of the device. For example, the documentation
  16        -  for the SMSC 911x is Documentation/devicetree/bindings/net/smsc911x.txt
       16   +  for the SMSC 911x is Documentation/devicetree/bindings/net/smsc,lan9115.yaml
  17   17
  18   18   Child nodes need to specify the GPMC bus address width using the "bank-width"
  19   19   property but is possible that an ethernet controller also has a property to
+110
Documentation/devicetree/bindings/net/smsc,lan9115.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/net/smsc,lan9115.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Smart Mixed-Signal Connectivity (SMSC) LAN911x/912x Controller 8 + 9 + maintainers: 10 + - Shawn Guo <shawnguo@kernel.org> 11 + 12 + allOf: 13 + - $ref: ethernet-controller.yaml# 14 + 15 + properties: 16 + compatible: 17 + oneOf: 18 + - const: smsc,lan9115 19 + - items: 20 + - enum: 21 + - smsc,lan89218 22 + - smsc,lan9117 23 + - smsc,lan9118 24 + - smsc,lan9220 25 + - smsc,lan9221 26 + - const: smsc,lan9115 27 + 28 + reg: 29 + maxItems: 1 30 + 31 + reg-shift: true 32 + 33 + reg-io-width: 34 + enum: [ 2, 4 ] 35 + default: 2 36 + 37 + interrupts: 38 + minItems: 1 39 + items: 40 + - description: 41 + LAN interrupt line 42 + - description: 43 + Optional PME (power management event) interrupt that is able to wake 44 + up the host system with a 50ms pulse on network activity 45 + 46 + clocks: 47 + maxItems: 1 48 + 49 + phy-mode: true 50 + 51 + smsc,irq-active-high: 52 + type: boolean 53 + description: Indicates the IRQ polarity is active-high 54 + 55 + smsc,irq-push-pull: 56 + type: boolean 57 + description: Indicates the IRQ type is push-pull 58 + 59 + smsc,force-internal-phy: 60 + type: boolean 61 + description: Forces SMSC LAN controller to use internal PHY 62 + 63 + smsc,force-external-phy: 64 + type: boolean 65 + description: Forces SMSC LAN controller to use external PHY 66 + 67 + smsc,save-mac-address: 68 + type: boolean 69 + description: 70 + Indicates that MAC address needs to be saved before resetting the 71 + controller 72 + 73 + reset-gpios: 74 + maxItems: 1 75 + description: 76 + A GPIO line connected to the RESET (active low) signal of the device. 77 + On many systems this is wired high so the device goes out of reset at 78 + power-on, but if it is under program control, this optional GPIO can 79 + wake up in response to it. 
80 + 81 + vdd33a-supply: 82 + description: 3.3V analog power supply 83 + 84 + vddvario-supply: 85 + description: IO logic power supply 86 + 87 + required: 88 + - compatible 89 + - reg 90 + - interrupts 91 + 92 + # There are lots of bus-specific properties ("qcom,*", "samsung,*", "fsl,*", 93 + # "gpmc,*", ...) to be found, that actually depend on the compatible value of 94 + # the parent node. 95 + additionalProperties: true 96 + 97 + examples: 98 + - | 99 + #include <dt-bindings/gpio/gpio.h> 100 + 101 + ethernet@f4000000 { 102 + compatible = "smsc,lan9220", "smsc,lan9115"; 103 + reg = <0xf4000000 0x2000000>; 104 + phy-mode = "mii"; 105 + interrupt-parent = <&gpio1>; 106 + interrupts = <31>, <32>; 107 + reset-gpios = <&gpio1 30 GPIO_ACTIVE_LOW>; 108 + reg-io-width = <4>; 109 + smsc,irq-push-pull; 110 + };
-43
Documentation/devicetree/bindings/net/smsc911x.txt
··· 1 - * Smart Mixed-Signal Connectivity (SMSC) LAN911x/912x Controller 2 - 3 - Required properties: 4 - - compatible : Should be "smsc,lan<model>", "smsc,lan9115" 5 - - reg : Address and length of the io space for SMSC LAN 6 - - interrupts : one or two interrupt specifiers 7 - - The first interrupt is the SMSC LAN interrupt line 8 - - The second interrupt (if present) is the PME (power 9 - management event) interrupt that is able to wake up the host 10 - system with a 50ms pulse on network activity 11 - - phy-mode : See ethernet.txt file in the same directory 12 - 13 - Optional properties: 14 - - reg-shift : Specify the quantity to shift the register offsets by 15 - - reg-io-width : Specify the size (in bytes) of the IO accesses that 16 - should be performed on the device. Valid value for SMSC LAN is 17 - 2 or 4. If it's omitted or invalid, the size would be 2. 18 - - smsc,irq-active-high : Indicates the IRQ polarity is active-high 19 - - smsc,irq-push-pull : Indicates the IRQ type is push-pull 20 - - smsc,force-internal-phy : Forces SMSC LAN controller to use 21 - internal PHY 22 - - smsc,force-external-phy : Forces SMSC LAN controller to use 23 - external PHY 24 - - smsc,save-mac-address : Indicates that mac address needs to be saved 25 - before resetting the controller 26 - - reset-gpios : a GPIO line connected to the RESET (active low) signal 27 - of the device. On many systems this is wired high so the device goes 28 - out of reset at power-on, but if it is under program control, this 29 - optional GPIO can wake up in response to it. 30 - - vdd33a-supply, vddvario-supply : 3.3V analog and IO logic power supplies 31 - 32 - Examples: 33 - 34 - lan9220@f4000000 { 35 - compatible = "smsc,lan9220", "smsc,lan9115"; 36 - reg = <0xf4000000 0x2000000>; 37 - phy-mode = "mii"; 38 - interrupt-parent = <&gpio1>; 39 - interrupts = <31>, <32>; 40 - reset-gpios = <&gpio1 30 GPIO_ACTIVE_LOW>; 41 - reg-io-width = <4>; 42 - smsc,irq-push-pull; 43 - };
+22
Documentation/networking/ethtool-netlink.rst
···
  212   212    ``ETHTOOL_MSG_FEC_SET``               set FEC settings
  213   213    ``ETHTOOL_MSG_MODULE_EEPROM_GET``     read SFP module EEPROM
  214   214    ``ETHTOOL_MSG_STATS_GET``             get standard statistics
        215  + ``ETHTOOL_MSG_PHC_VCLOCKS_GET``       get PHC virtual clocks info
  215   216    ===================================== ================================
  216   217
  217   218    Kernel to userspace:
···
  251   250    ``ETHTOOL_MSG_FEC_NTF``                  FEC settings
  252   251    ``ETHTOOL_MSG_MODULE_EEPROM_GET_REPLY``  read SFP module EEPROM
  253   252    ``ETHTOOL_MSG_STATS_GET_REPLY``          standard statistics
        253  + ``ETHTOOL_MSG_PHC_VCLOCKS_GET_REPLY``    PHC virtual clocks info
  254   254    ======================================== =================================
  255   255
  256   256    ``GET`` requests are sent by userspace applications to retrieve device
···
  1479  1477   etherStatsPkts512to1023Octets 512  1023
  1480  1478   ============================= ==== ====
  1481  1479
        1480 + PHC_VCLOCKS_GET
        1481 + ===============
        1482 +
        1483 + Query device PHC virtual clocks information.
        1484 +
        1485 + Request contents:
        1486 +
        1487 +   ==================================== ====== ==========================
        1488 +   ``ETHTOOL_A_PHC_VCLOCKS_HEADER``     nested request header
        1489 +   ==================================== ====== ==========================
        1490 +
        1491 + Kernel response contents:
        1492 +
        1493 +   ==================================== ====== ==========================
        1494 +   ``ETHTOOL_A_PHC_VCLOCKS_HEADER``     nested reply header
        1495 +   ``ETHTOOL_A_PHC_VCLOCKS_NUM``        u32    PHC virtual clocks number
        1496 +   ``ETHTOOL_A_PHC_VCLOCKS_INDEX``      s32    PHC index array
        1497 +   ==================================== ====== ==========================
        1498 +
  1482  1499   Request translation
  1483  1500   ===================
···
  1596  1575   n/a                                 ``ETHTOOL_MSG_CABLE_TEST_ACT``
  1597  1576   n/a                                 ``ETHTOOL_MSG_CABLE_TEST_TDR_ACT``
  1598  1577   n/a                                 ``ETHTOOL_MSG_TUNNEL_INFO_GET``
        1578 + n/a                                 ``ETHTOOL_MSG_PHC_VCLOCKS_GET``
  1599  1579   =================================== =====================================
+6
Documentation/networking/nf_conntrack-sysctl.rst
···
  110   110    Be conservative in what you do, be liberal in what you accept from others.
  111   111    If it's non-zero, we mark only out of window RST segments as INVALID.
  112   112
        113  + nf_conntrack_tcp_ignore_invalid_rst - BOOLEAN
        114  +    - 0 - disabled (default)
        115  +    - 1 - enabled
        116  +
        117  +    If it's 1, we don't mark out of window RST segments as INVALID.
        118  +
  113   119    nf_conntrack_tcp_loose - BOOLEAN
  114   120    - 0 - disabled
  115   121    - not 0 - enabled (default)
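The new toggle sits alongside the existing conntrack TCP sysctls and can be flipped at runtime. A minimal sketch (requires root and a kernel carrying this patch):

```shell
# Read the current setting (0 = disabled, the default).
sysctl net.netfilter.nf_conntrack_tcp_ignore_invalid_rst

# Stop marking out-of-window RST segments as INVALID.
sysctl -w net.netfilter.nf_conntrack_tcp_ignore_invalid_rst=1
```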
+118 -3
Documentation/networking/tipc.rst
··· 4 4 Linux Kernel TIPC 5 5 ================= 6 6 7 - TIPC (Transparent Inter Process Communication) is a protocol that is 8 - specially designed for intra-cluster communication. 7 + Introduction 8 + ============ 9 9 10 - For more information about TIPC, see http://tipc.sourceforge.net. 10 + TIPC (Transparent Inter Process Communication) is a protocol that is specially 11 + designed for intra-cluster communication. It can be configured to transmit 12 + messages either on UDP or directly across Ethernet. Message delivery is 13 + sequence guaranteed, loss free and flow controlled. Latency times are shorter 14 + than with any other known protocol, while maximal throughput is comparable to 15 + that of TCP. 16 + 17 + TIPC Features 18 + ------------- 19 + 20 + - Cluster wide IPC service 21 + 22 + Have you ever wished you had the convenience of Unix Domain Sockets even when 23 + transmitting data between cluster nodes? Where you yourself determine the 24 + addresses you want to bind to and use? Where you don't have to perform DNS 25 + lookups and worry about IP addresses? Where you don't have to start timers 26 + to monitor the continuous existence of peer sockets? And yet without the 27 + downsides of that socket type, such as the risk of lingering inodes? 28 + 29 + Welcome to the Transparent Inter Process Communication service, TIPC in short, 30 + which gives you all of this, and a lot more. 31 + 32 + - Service Addressing 33 + 34 + A fundamental concept in TIPC is that of Service Addressing which makes it 35 + possible for a programmer to chose his own address, bind it to a server 36 + socket and let client programs use only that address for sending messages. 37 + 38 + - Service Tracking 39 + 40 + A client wanting to wait for the availability of a server, uses the Service 41 + Tracking mechanism to subscribe for binding and unbinding/close events for 42 + sockets with the associated service address. 
43 + 44 + The service tracking mechanism can also be used for Cluster Topology Tracking, 45 + i.e., subscribing for availability/non-availability of cluster nodes. 46 + 47 + Likewise, the service tracking mechanism can be used for Cluster Connectivity 48 + Tracking, i.e., subscribing for up/down events for individual links between 49 + cluster nodes. 50 + 51 + - Transmission Modes 52 + 53 + Using a service address, a client can send datagram messages to a server socket. 54 + 55 + Using the same address type, it can establish a connection towards an accepting 56 + server socket. 57 + 58 + It can also use a service address to create and join a Communication Group, 59 + which is the TIPC manifestation of a brokerless message bus. 60 + 61 + Multicast with very good performance and scalability is available both in 62 + datagram mode and in communication group mode. 63 + 64 + - Inter Node Links 65 + 66 + Communication between any two nodes in a cluster is maintained by one or two 67 + Inter Node Links, which both guarantee data traffic integrity and monitor 68 + the peer node's availability. 69 + 70 + - Cluster Scalability 71 + 72 + By applying the Overlapping Ring Monitoring algorithm on the inter node links 73 + it is possible to scale TIPC clusters up to 1000 nodes with a maintained 74 + neighbor failure discovery time of 1-2 seconds. For smaller clusters this 75 + time can be made much shorter. 76 + 77 + - Neighbor Discovery 78 + 79 + Neighbor Node Discovery in the cluster is done by Ethernet broadcast or UDP 80 + multicast, when any of those services are available. If not, configured peer 81 + IP addresses can be used. 82 + 83 + - Configuration 84 + 85 + When running TIPC in single node mode no configuration whatsoever is needed. 86 + When running in cluster mode TIPC must as a minimum be given a node address 87 + (before Linux 4.17) and told which interface to attach to. 
The "tipc" 88 + configuration tool makes is possible to add and maintain many more 89 + configuration parameters. 90 + 91 + - Performance 92 + 93 + TIPC message transfer latency times are better than in any other known protocol. 94 + Maximal byte throughput for inter-node connections is still somewhat lower than 95 + for TCP, while they are superior for intra-node and inter-container throughput 96 + on the same host. 97 + 98 + - Language Support 99 + 100 + The TIPC user API has support for C, Python, Perl, Ruby, D and Go. 101 + 102 + More Information 103 + ---------------- 104 + 105 + - How to set up TIPC: 106 + 107 + http://tipc.io/getting_started.html 108 + 109 + - How to program with TIPC: 110 + 111 + http://tipc.io/programming.html 112 + 113 + - How to contribute to TIPC: 114 + 115 + - http://tipc.io/contacts.html 116 + 117 + - More details about TIPC specification: 118 + 119 + http://tipc.io/protocol.html 120 + 121 + 122 + Implementation 123 + ============== 124 + 125 + TIPC is implemented as a kernel module in net/tipc/ directory. 11 126 12 127 TIPC Base Types 13 128 ---------------
+7
MAINTAINERS
···
  15009  15009   F: drivers/ptp/*
  15010  15010   F: include/linux/ptp_cl*
  15011  15011
         15012 + PTP VIRTUAL CLOCK SUPPORT
         15013 + M: Yangbo Lu <yangbo.lu@nxp.com>
         15014 + L: netdev@vger.kernel.org
         15015 + S: Maintained
         15016 + F: drivers/ptp/ptp_vclock.c
         15017 + F: net/ethtool/phc_vclocks.c
         15018 +
  15012  15019   PTRACE SUPPORT
  15013  15020   M: Oleg Nesterov <oleg@redhat.com>
  15014  15021   S: Maintained
+1 -3
arch/arm/boot/dts/qcom-apq8060-dragonboard.dts
···
  581   581       * EBI2. This has a 25MHz chrystal next to it, so no
  582   582       * clocking is needed.
  583   583       */
  584        -    ethernet-ebi2@2,0 {
        584  +    ethernet@2,0 {
  585   585          compatible = "smsc,lan9221", "smsc,lan9115";
  586   586          reg = <2 0x0 0x100>;
  587   587          /*
···
  598   598          phy-mode = "mii";
  599   599          reg-io-width = <2>;
  600   600          smsc,force-external-phy;
  601        -       /* IRQ on edge falling = active low */
  602        -       smsc,irq-active-low;
  603   601          smsc,irq-push-pull;
  604   602
  605   603          /*
+3
arch/x86/net/bpf_jit_comp.c
···
  570   570
  571   571      for (i = 0; i < prog->aux->size_poke_tab; i++) {
  572   572          poke = &prog->aux->poke_tab[i];
        573  +      if (poke->aux && poke->aux != prog->aux)
        574  +          continue;
        575  +
  573   576          WARN_ON_ONCE(READ_ONCE(poke->tailcall_target_stable));
  574   577
  575   578          if (poke->reason != BPF_POKE_REASON_TAIL_CALL)
+153 -28
drivers/net/bonding/bond_main.c
··· 401 401 static int bond_ipsec_add_sa(struct xfrm_state *xs) 402 402 { 403 403 struct net_device *bond_dev = xs->xso.dev; 404 + struct bond_ipsec *ipsec; 404 405 struct bonding *bond; 405 406 struct slave *slave; 407 + int err; 406 408 407 409 if (!bond_dev) 408 410 return -EINVAL; 409 411 412 + rcu_read_lock(); 410 413 bond = netdev_priv(bond_dev); 411 414 slave = rcu_dereference(bond->curr_active_slave); 412 - xs->xso.real_dev = slave->dev; 413 - bond->xs = xs; 415 + if (!slave) { 416 + rcu_read_unlock(); 417 + return -ENODEV; 418 + } 414 419 415 - if (!(slave->dev->xfrmdev_ops 416 - && slave->dev->xfrmdev_ops->xdo_dev_state_add)) { 420 + if (!slave->dev->xfrmdev_ops || 421 + !slave->dev->xfrmdev_ops->xdo_dev_state_add || 422 + netif_is_bond_master(slave->dev)) { 417 423 slave_warn(bond_dev, slave->dev, "Slave does not support ipsec offload\n"); 424 + rcu_read_unlock(); 418 425 return -EINVAL; 419 426 } 420 427 421 - return slave->dev->xfrmdev_ops->xdo_dev_state_add(xs); 428 + ipsec = kmalloc(sizeof(*ipsec), GFP_ATOMIC); 429 + if (!ipsec) { 430 + rcu_read_unlock(); 431 + return -ENOMEM; 432 + } 433 + xs->xso.real_dev = slave->dev; 434 + 435 + err = slave->dev->xfrmdev_ops->xdo_dev_state_add(xs); 436 + if (!err) { 437 + ipsec->xs = xs; 438 + INIT_LIST_HEAD(&ipsec->list); 439 + spin_lock_bh(&bond->ipsec_lock); 440 + list_add(&ipsec->list, &bond->ipsec_list); 441 + spin_unlock_bh(&bond->ipsec_lock); 442 + } else { 443 + kfree(ipsec); 444 + } 445 + rcu_read_unlock(); 446 + return err; 447 + } 448 + 449 + static void bond_ipsec_add_sa_all(struct bonding *bond) 450 + { 451 + struct net_device *bond_dev = bond->dev; 452 + struct bond_ipsec *ipsec; 453 + struct slave *slave; 454 + 455 + rcu_read_lock(); 456 + slave = rcu_dereference(bond->curr_active_slave); 457 + if (!slave) 458 + goto out; 459 + 460 + if (!slave->dev->xfrmdev_ops || 461 + !slave->dev->xfrmdev_ops->xdo_dev_state_add || 462 + netif_is_bond_master(slave->dev)) { 463 + spin_lock_bh(&bond->ipsec_lock); 
464 + if (!list_empty(&bond->ipsec_list)) 465 + slave_warn(bond_dev, slave->dev, 466 + "%s: no slave xdo_dev_state_add\n", 467 + __func__); 468 + spin_unlock_bh(&bond->ipsec_lock); 469 + goto out; 470 + } 471 + 472 + spin_lock_bh(&bond->ipsec_lock); 473 + list_for_each_entry(ipsec, &bond->ipsec_list, list) { 474 + ipsec->xs->xso.real_dev = slave->dev; 475 + if (slave->dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs)) { 476 + slave_warn(bond_dev, slave->dev, "%s: failed to add SA\n", __func__); 477 + ipsec->xs->xso.real_dev = NULL; 478 + } 479 + } 480 + spin_unlock_bh(&bond->ipsec_lock); 481 + out: 482 + rcu_read_unlock(); 422 483 } 423 484 424 485 /** ··· 489 428 static void bond_ipsec_del_sa(struct xfrm_state *xs) 490 429 { 491 430 struct net_device *bond_dev = xs->xso.dev; 431 + struct bond_ipsec *ipsec; 492 432 struct bonding *bond; 493 433 struct slave *slave; 494 434 495 435 if (!bond_dev) 496 436 return; 497 437 438 + rcu_read_lock(); 498 439 bond = netdev_priv(bond_dev); 499 440 slave = rcu_dereference(bond->curr_active_slave); 500 441 501 442 if (!slave) 502 - return; 443 + goto out; 503 444 504 - xs->xso.real_dev = slave->dev; 445 + if (!xs->xso.real_dev) 446 + goto out; 505 447 506 - if (!(slave->dev->xfrmdev_ops 507 - && slave->dev->xfrmdev_ops->xdo_dev_state_delete)) { 448 + WARN_ON(xs->xso.real_dev != slave->dev); 449 + 450 + if (!slave->dev->xfrmdev_ops || 451 + !slave->dev->xfrmdev_ops->xdo_dev_state_delete || 452 + netif_is_bond_master(slave->dev)) { 508 453 slave_warn(bond_dev, slave->dev, "%s: no slave xdo_dev_state_delete\n", __func__); 509 - return; 454 + goto out; 510 455 } 511 456 512 457 slave->dev->xfrmdev_ops->xdo_dev_state_delete(xs); 458 + out: 459 + spin_lock_bh(&bond->ipsec_lock); 460 + list_for_each_entry(ipsec, &bond->ipsec_list, list) { 461 + if (ipsec->xs == xs) { 462 + list_del(&ipsec->list); 463 + kfree(ipsec); 464 + break; 465 + } 466 + } 467 + spin_unlock_bh(&bond->ipsec_lock); 468 + rcu_read_unlock(); 469 + } 470 + 471 + static 
void bond_ipsec_del_sa_all(struct bonding *bond) 472 + { 473 + struct net_device *bond_dev = bond->dev; 474 + struct bond_ipsec *ipsec; 475 + struct slave *slave; 476 + 477 + rcu_read_lock(); 478 + slave = rcu_dereference(bond->curr_active_slave); 479 + if (!slave) { 480 + rcu_read_unlock(); 481 + return; 482 + } 483 + 484 + spin_lock_bh(&bond->ipsec_lock); 485 + list_for_each_entry(ipsec, &bond->ipsec_list, list) { 486 + if (!ipsec->xs->xso.real_dev) 487 + continue; 488 + 489 + if (!slave->dev->xfrmdev_ops || 490 + !slave->dev->xfrmdev_ops->xdo_dev_state_delete || 491 + netif_is_bond_master(slave->dev)) { 492 + slave_warn(bond_dev, slave->dev, 493 + "%s: no slave xdo_dev_state_delete\n", 494 + __func__); 495 + } else { 496 + slave->dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs); 497 + } 498 + ipsec->xs->xso.real_dev = NULL; 499 + } 500 + spin_unlock_bh(&bond->ipsec_lock); 501 + rcu_read_unlock(); 513 502 } 514 503 515 504 /** ··· 570 459 static bool bond_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs) 571 460 { 572 461 struct net_device *bond_dev = xs->xso.dev; 573 - struct bonding *bond = netdev_priv(bond_dev); 574 - struct slave *curr_active = rcu_dereference(bond->curr_active_slave); 575 - struct net_device *slave_dev = curr_active->dev; 462 + struct net_device *real_dev; 463 + struct slave *curr_active; 464 + struct bonding *bond; 465 + int err; 576 466 577 - if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) 578 - return true; 467 + bond = netdev_priv(bond_dev); 468 + rcu_read_lock(); 469 + curr_active = rcu_dereference(bond->curr_active_slave); 470 + real_dev = curr_active->dev; 579 471 580 - if (!(slave_dev->xfrmdev_ops 581 - && slave_dev->xfrmdev_ops->xdo_dev_offload_ok)) { 582 - slave_warn(bond_dev, slave_dev, "%s: no slave xdo_dev_offload_ok\n", __func__); 583 - return false; 472 + if (BOND_MODE(bond) != BOND_MODE_ACTIVEBACKUP) { 473 + err = false; 474 + goto out; 584 475 } 585 476 586 - xs->xso.real_dev = slave_dev; 587 - return 
slave_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs); 477 + if (!xs->xso.real_dev) { 478 + err = false; 479 + goto out; 480 + } 481 + 482 + if (!real_dev->xfrmdev_ops || 483 + !real_dev->xfrmdev_ops->xdo_dev_offload_ok || 484 + netif_is_bond_master(real_dev)) { 485 + err = false; 486 + goto out; 487 + } 488 + 489 + err = real_dev->xfrmdev_ops->xdo_dev_offload_ok(skb, xs); 490 + out: 491 + rcu_read_unlock(); 492 + return err; 588 493 } 589 494 590 495 static const struct xfrmdev_ops bond_xfrmdev_ops = { ··· 1117 990 return; 1118 991 1119 992 #ifdef CONFIG_XFRM_OFFLOAD 1120 - if (old_active && bond->xs) 1121 - bond_ipsec_del_sa(bond->xs); 993 + bond_ipsec_del_sa_all(bond); 1122 994 #endif /* CONFIG_XFRM_OFFLOAD */ 1123 995 1124 996 if (new_active) { ··· 1192 1066 } 1193 1067 1194 1068 #ifdef CONFIG_XFRM_OFFLOAD 1195 - if (new_active && bond->xs) { 1196 - xfrm_dev_state_flush(dev_net(bond->dev), bond->dev, true); 1197 - bond_ipsec_add_sa(bond->xs); 1198 - } 1069 + bond_ipsec_add_sa_all(bond); 1199 1070 #endif /* CONFIG_XFRM_OFFLOAD */ 1200 1071 1201 1072 /* resend IGMP joins since active slave has changed or ··· 3450 3327 return bond_event_changename(event_bond); 3451 3328 case NETDEV_UNREGISTER: 3452 3329 bond_remove_proc_entry(event_bond); 3330 + xfrm_dev_state_flush(dev_net(bond_dev), bond_dev, true); 3453 3331 break; 3454 3332 case NETDEV_REGISTER: 3455 3333 bond_create_proc_entry(event_bond); ··· 5018 4894 #ifdef CONFIG_XFRM_OFFLOAD 5019 4895 /* set up xfrm device ops (only supported in active-backup right now) */ 5020 4896 bond_dev->xfrmdev_ops = &bond_xfrmdev_ops; 5021 - bond->xs = NULL; 4897 + INIT_LIST_HEAD(&bond->ipsec_list); 4898 + spin_lock_init(&bond->ipsec_lock); 5022 4899 #endif /* CONFIG_XFRM_OFFLOAD */ 5023 4900 5024 4901 /* don't acquire bond device's netif_tx_lock when transmitting */
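The bonding changes above track offloaded IPsec states in a per-bond list so they can be torn down and re-added when the active slave changes. The configuration they target can be sketched as follows (interface names, addresses, SPI and key are placeholders; requires root and NICs with xfrm offload support):

```shell
# Active-backup bond over two offload-capable NICs (names illustrative).
ip link add bond0 type bond mode active-backup
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

# Offload an ESP state to the bond; the driver hands it to the current
# active slave and, with these fixes, re-adds it after a failover.
ip xfrm state add src 192.0.2.1 dst 192.0.2.2 proto esp spi 0x1000 reqid 1 \
    mode transport \
    aead 'rfc4106(gcm(aes))' 0x1111111111111111111111111111111111111111 128 \
    offload dev bond0 dir out
```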
-9
drivers/net/caif/Kconfig
···
  20   20      identified as N_CAIF. When this ldisc is opened from user space
  21   21      it will redirect the TTY's traffic into the CAIF stack.
  22   22
  23        -  config CAIF_HSI
  24        -      tristate "CAIF HSI transport driver"
  25        -      depends on CAIF
  26        -      default n
  27        -      help
  28        -      The CAIF low level driver for CAIF over HSI.
  29        -      Be aware that if you enable this then you also need to
  30        -      enable a low-level HSI driver.
  31        -
  32   23   config CAIF_VIRTIO
  33   24      tristate "CAIF virtio transport driver"
  34   25      depends on CAIF && HAS_DMA
-3
drivers/net/caif/Makefile
···
  4    4    # Serial interface
  5    5    obj-$(CONFIG_CAIF_TTY) += caif_serial.o
  6    6
  7         -  # HSI interface
  8         -  obj-$(CONFIG_CAIF_HSI) += caif_hsi.o
  9         -
  10   7    # Virtio interface
  11   8    obj-$(CONFIG_CAIF_VIRTIO) += caif_virtio.o
-1454
drivers/net/caif/caif_hsi.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Copyright (C) ST-Ericsson AB 2010 4 - * Author: Daniel Martensson 5 - * Dmitry.Tarnyagin / dmitry.tarnyagin@lockless.no 6 - */ 7 - 8 - #define pr_fmt(fmt) KBUILD_MODNAME fmt 9 - 10 - #include <linux/init.h> 11 - #include <linux/module.h> 12 - #include <linux/device.h> 13 - #include <linux/netdevice.h> 14 - #include <linux/string.h> 15 - #include <linux/list.h> 16 - #include <linux/interrupt.h> 17 - #include <linux/delay.h> 18 - #include <linux/sched.h> 19 - #include <linux/if_arp.h> 20 - #include <linux/timer.h> 21 - #include <net/rtnetlink.h> 22 - #include <linux/pkt_sched.h> 23 - #include <net/caif/caif_layer.h> 24 - #include <net/caif/caif_hsi.h> 25 - 26 - MODULE_LICENSE("GPL"); 27 - MODULE_AUTHOR("Daniel Martensson"); 28 - MODULE_DESCRIPTION("CAIF HSI driver"); 29 - 30 - /* Returns the number of padding bytes for alignment. */ 31 - #define PAD_POW2(x, pow) ((((x)&((pow)-1)) == 0) ? 0 :\ 32 - (((pow)-((x)&((pow)-1))))) 33 - 34 - static const struct cfhsi_config hsi_default_config = { 35 - 36 - /* Inactivity timeout on HSI, ms */ 37 - .inactivity_timeout = HZ, 38 - 39 - /* Aggregation timeout (ms) of zero means no aggregation is done*/ 40 - .aggregation_timeout = 1, 41 - 42 - /* 43 - * HSI link layer flow-control thresholds. 44 - * Threshold values for the HSI packet queue. Flow-control will be 45 - * asserted when the number of packets exceeds q_high_mark. It will 46 - * not be de-asserted before the number of packets drops below 47 - * q_low_mark. 48 - * Warning: A high threshold value might increase throughput but it 49 - * will at the same time prevent channel prioritization and increase 50 - * the risk of flooding the modem. The high threshold should be above 51 - * the low. 52 - */ 53 - .q_high_mark = 100, 54 - .q_low_mark = 50, 55 - 56 - /* 57 - * HSI padding options. 58 - * Warning: must be a base of 2 (& operation used) and can not be zero ! 
59 - */ 60 - .head_align = 4, 61 - .tail_align = 4, 62 - }; 63 - 64 - #define ON 1 65 - #define OFF 0 66 - 67 - static LIST_HEAD(cfhsi_list); 68 - 69 - static void cfhsi_inactivity_tout(struct timer_list *t) 70 - { 71 - struct cfhsi *cfhsi = from_timer(cfhsi, t, inactivity_timer); 72 - 73 - netdev_dbg(cfhsi->ndev, "%s.\n", 74 - __func__); 75 - 76 - /* Schedule power down work queue. */ 77 - if (!test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 78 - queue_work(cfhsi->wq, &cfhsi->wake_down_work); 79 - } 80 - 81 - static void cfhsi_update_aggregation_stats(struct cfhsi *cfhsi, 82 - const struct sk_buff *skb, 83 - int direction) 84 - { 85 - struct caif_payload_info *info; 86 - int hpad, tpad, len; 87 - 88 - info = (struct caif_payload_info *)&skb->cb; 89 - hpad = 1 + PAD_POW2((info->hdr_len + 1), cfhsi->cfg.head_align); 90 - tpad = PAD_POW2((skb->len + hpad), cfhsi->cfg.tail_align); 91 - len = skb->len + hpad + tpad; 92 - 93 - if (direction > 0) 94 - cfhsi->aggregation_len += len; 95 - else if (direction < 0) 96 - cfhsi->aggregation_len -= len; 97 - } 98 - 99 - static bool cfhsi_can_send_aggregate(struct cfhsi *cfhsi) 100 - { 101 - int i; 102 - 103 - if (cfhsi->cfg.aggregation_timeout == 0) 104 - return true; 105 - 106 - for (i = 0; i < CFHSI_PRIO_BEBK; ++i) { 107 - if (cfhsi->qhead[i].qlen) 108 - return true; 109 - } 110 - 111 - /* TODO: Use aggregation_len instead */ 112 - if (cfhsi->qhead[CFHSI_PRIO_BEBK].qlen >= CFHSI_MAX_PKTS) 113 - return true; 114 - 115 - return false; 116 - } 117 - 118 - static struct sk_buff *cfhsi_dequeue(struct cfhsi *cfhsi) 119 - { 120 - struct sk_buff *skb; 121 - int i; 122 - 123 - for (i = 0; i < CFHSI_PRIO_LAST; ++i) { 124 - skb = skb_dequeue(&cfhsi->qhead[i]); 125 - if (skb) 126 - break; 127 - } 128 - 129 - return skb; 130 - } 131 - 132 - static int cfhsi_tx_queue_len(struct cfhsi *cfhsi) 133 - { 134 - int i, len = 0; 135 - for (i = 0; i < CFHSI_PRIO_LAST; ++i) 136 - len += skb_queue_len(&cfhsi->qhead[i]); 137 - return len; 138 - } 139 - 140 - 
static void cfhsi_abort_tx(struct cfhsi *cfhsi) 141 - { 142 - struct sk_buff *skb; 143 - 144 - for (;;) { 145 - spin_lock_bh(&cfhsi->lock); 146 - skb = cfhsi_dequeue(cfhsi); 147 - if (!skb) 148 - break; 149 - 150 - cfhsi->ndev->stats.tx_errors++; 151 - cfhsi->ndev->stats.tx_dropped++; 152 - cfhsi_update_aggregation_stats(cfhsi, skb, -1); 153 - spin_unlock_bh(&cfhsi->lock); 154 - kfree_skb(skb); 155 - } 156 - cfhsi->tx_state = CFHSI_TX_STATE_IDLE; 157 - if (!test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 158 - mod_timer(&cfhsi->inactivity_timer, 159 - jiffies + cfhsi->cfg.inactivity_timeout); 160 - spin_unlock_bh(&cfhsi->lock); 161 - } 162 - 163 - static int cfhsi_flush_fifo(struct cfhsi *cfhsi) 164 - { 165 - char buffer[32]; /* Any reasonable value */ 166 - size_t fifo_occupancy; 167 - int ret; 168 - 169 - netdev_dbg(cfhsi->ndev, "%s.\n", 170 - __func__); 171 - 172 - do { 173 - ret = cfhsi->ops->cfhsi_fifo_occupancy(cfhsi->ops, 174 - &fifo_occupancy); 175 - if (ret) { 176 - netdev_warn(cfhsi->ndev, 177 - "%s: can't get FIFO occupancy: %d.\n", 178 - __func__, ret); 179 - break; 180 - } else if (!fifo_occupancy) 181 - /* No more data, exitting normally */ 182 - break; 183 - 184 - fifo_occupancy = min(sizeof(buffer), fifo_occupancy); 185 - set_bit(CFHSI_FLUSH_FIFO, &cfhsi->bits); 186 - ret = cfhsi->ops->cfhsi_rx(buffer, fifo_occupancy, 187 - cfhsi->ops); 188 - if (ret) { 189 - clear_bit(CFHSI_FLUSH_FIFO, &cfhsi->bits); 190 - netdev_warn(cfhsi->ndev, 191 - "%s: can't read data: %d.\n", 192 - __func__, ret); 193 - break; 194 - } 195 - 196 - ret = 5 * HZ; 197 - ret = wait_event_interruptible_timeout(cfhsi->flush_fifo_wait, 198 - !test_bit(CFHSI_FLUSH_FIFO, &cfhsi->bits), ret); 199 - 200 - if (ret < 0) { 201 - netdev_warn(cfhsi->ndev, 202 - "%s: can't wait for flush complete: %d.\n", 203 - __func__, ret); 204 - break; 205 - } else if (!ret) { 206 - ret = -ETIMEDOUT; 207 - netdev_warn(cfhsi->ndev, 208 - "%s: timeout waiting for flush complete.\n", 209 - __func__); 210 - break; 
211 - } 212 - } while (1); 213 - 214 - return ret; 215 - } 216 - 217 - static int cfhsi_tx_frm(struct cfhsi_desc *desc, struct cfhsi *cfhsi) 218 - { 219 - int nfrms = 0; 220 - int pld_len = 0; 221 - struct sk_buff *skb; 222 - u8 *pfrm = desc->emb_frm + CFHSI_MAX_EMB_FRM_SZ; 223 - 224 - skb = cfhsi_dequeue(cfhsi); 225 - if (!skb) 226 - return 0; 227 - 228 - /* Clear offset. */ 229 - desc->offset = 0; 230 - 231 - /* Check if we can embed a CAIF frame. */ 232 - if (skb->len < CFHSI_MAX_EMB_FRM_SZ) { 233 - struct caif_payload_info *info; 234 - int hpad; 235 - int tpad; 236 - 237 - /* Calculate needed head alignment and tail alignment. */ 238 - info = (struct caif_payload_info *)&skb->cb; 239 - 240 - hpad = 1 + PAD_POW2((info->hdr_len + 1), cfhsi->cfg.head_align); 241 - tpad = PAD_POW2((skb->len + hpad), cfhsi->cfg.tail_align); 242 - 243 - /* Check if frame still fits with added alignment. */ 244 - if ((skb->len + hpad + tpad) <= CFHSI_MAX_EMB_FRM_SZ) { 245 - u8 *pemb = desc->emb_frm; 246 - desc->offset = CFHSI_DESC_SHORT_SZ; 247 - *pemb = (u8)(hpad - 1); 248 - pemb += hpad; 249 - 250 - /* Update network statistics. */ 251 - spin_lock_bh(&cfhsi->lock); 252 - cfhsi->ndev->stats.tx_packets++; 253 - cfhsi->ndev->stats.tx_bytes += skb->len; 254 - cfhsi_update_aggregation_stats(cfhsi, skb, -1); 255 - spin_unlock_bh(&cfhsi->lock); 256 - 257 - /* Copy in embedded CAIF frame. */ 258 - skb_copy_bits(skb, 0, pemb, skb->len); 259 - 260 - /* Consume the SKB */ 261 - consume_skb(skb); 262 - skb = NULL; 263 - } 264 - } 265 - 266 - /* Create payload CAIF frames. */ 267 - while (nfrms < CFHSI_MAX_PKTS) { 268 - struct caif_payload_info *info; 269 - int hpad; 270 - int tpad; 271 - 272 - if (!skb) 273 - skb = cfhsi_dequeue(cfhsi); 274 - 275 - if (!skb) 276 - break; 277 - 278 - /* Calculate needed head alignment and tail alignment. 
*/ 279 - info = (struct caif_payload_info *)&skb->cb; 280 - 281 - hpad = 1 + PAD_POW2((info->hdr_len + 1), cfhsi->cfg.head_align); 282 - tpad = PAD_POW2((skb->len + hpad), cfhsi->cfg.tail_align); 283 - 284 - /* Fill in CAIF frame length in descriptor. */ 285 - desc->cffrm_len[nfrms] = hpad + skb->len + tpad; 286 - 287 - /* Fill head padding information. */ 288 - *pfrm = (u8)(hpad - 1); 289 - pfrm += hpad; 290 - 291 - /* Update network statistics. */ 292 - spin_lock_bh(&cfhsi->lock); 293 - cfhsi->ndev->stats.tx_packets++; 294 - cfhsi->ndev->stats.tx_bytes += skb->len; 295 - cfhsi_update_aggregation_stats(cfhsi, skb, -1); 296 - spin_unlock_bh(&cfhsi->lock); 297 - 298 - /* Copy in CAIF frame. */ 299 - skb_copy_bits(skb, 0, pfrm, skb->len); 300 - 301 - /* Update payload length. */ 302 - pld_len += desc->cffrm_len[nfrms]; 303 - 304 - /* Update frame pointer. */ 305 - pfrm += skb->len + tpad; 306 - 307 - /* Consume the SKB */ 308 - consume_skb(skb); 309 - skb = NULL; 310 - 311 - /* Update number of frames. */ 312 - nfrms++; 313 - } 314 - 315 - /* Unused length fields should be zero-filled (according to SPEC). */ 316 - while (nfrms < CFHSI_MAX_PKTS) { 317 - desc->cffrm_len[nfrms] = 0x0000; 318 - nfrms++; 319 - } 320 - 321 - /* Check if we can piggy-back another descriptor. */ 322 - if (cfhsi_can_send_aggregate(cfhsi)) 323 - desc->header |= CFHSI_PIGGY_DESC; 324 - else 325 - desc->header &= ~CFHSI_PIGGY_DESC; 326 - 327 - return CFHSI_DESC_SZ + pld_len; 328 - } 329 - 330 - static void cfhsi_start_tx(struct cfhsi *cfhsi) 331 - { 332 - struct cfhsi_desc *desc = (struct cfhsi_desc *)cfhsi->tx_buf; 333 - int len, res; 334 - 335 - netdev_dbg(cfhsi->ndev, "%s.\n", __func__); 336 - 337 - if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 338 - return; 339 - 340 - do { 341 - /* Create HSI frame. 
*/ 342 - len = cfhsi_tx_frm(desc, cfhsi); 343 - if (!len) { 344 - spin_lock_bh(&cfhsi->lock); 345 - if (unlikely(cfhsi_tx_queue_len(cfhsi))) { 346 - spin_unlock_bh(&cfhsi->lock); 347 - res = -EAGAIN; 348 - continue; 349 - } 350 - cfhsi->tx_state = CFHSI_TX_STATE_IDLE; 351 - /* Start inactivity timer. */ 352 - mod_timer(&cfhsi->inactivity_timer, 353 - jiffies + cfhsi->cfg.inactivity_timeout); 354 - spin_unlock_bh(&cfhsi->lock); 355 - break; 356 - } 357 - 358 - /* Set up new transfer. */ 359 - res = cfhsi->ops->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->ops); 360 - if (WARN_ON(res < 0)) 361 - netdev_err(cfhsi->ndev, "%s: TX error %d.\n", 362 - __func__, res); 363 - } while (res < 0); 364 - } 365 - 366 - static void cfhsi_tx_done(struct cfhsi *cfhsi) 367 - { 368 - netdev_dbg(cfhsi->ndev, "%s.\n", __func__); 369 - 370 - if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 371 - return; 372 - 373 - /* 374 - * Send flow on if flow off has been previously signalled 375 - * and number of packets is below low water mark. 
376 - */ 377 - spin_lock_bh(&cfhsi->lock); 378 - if (cfhsi->flow_off_sent && 379 - cfhsi_tx_queue_len(cfhsi) <= cfhsi->cfg.q_low_mark && 380 - cfhsi->cfdev.flowctrl) { 381 - 382 - cfhsi->flow_off_sent = 0; 383 - cfhsi->cfdev.flowctrl(cfhsi->ndev, ON); 384 - } 385 - 386 - if (cfhsi_can_send_aggregate(cfhsi)) { 387 - spin_unlock_bh(&cfhsi->lock); 388 - cfhsi_start_tx(cfhsi); 389 - } else { 390 - mod_timer(&cfhsi->aggregation_timer, 391 - jiffies + cfhsi->cfg.aggregation_timeout); 392 - spin_unlock_bh(&cfhsi->lock); 393 - } 394 - 395 - return; 396 - } 397 - 398 - static void cfhsi_tx_done_cb(struct cfhsi_cb_ops *cb_ops) 399 - { 400 - struct cfhsi *cfhsi; 401 - 402 - cfhsi = container_of(cb_ops, struct cfhsi, cb_ops); 403 - netdev_dbg(cfhsi->ndev, "%s.\n", 404 - __func__); 405 - 406 - if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 407 - return; 408 - cfhsi_tx_done(cfhsi); 409 - } 410 - 411 - static int cfhsi_rx_desc(struct cfhsi_desc *desc, struct cfhsi *cfhsi) 412 - { 413 - int xfer_sz = 0; 414 - int nfrms = 0; 415 - u16 *plen = NULL; 416 - u8 *pfrm = NULL; 417 - 418 - if ((desc->header & ~CFHSI_PIGGY_DESC) || 419 - (desc->offset > CFHSI_MAX_EMB_FRM_SZ)) { 420 - netdev_err(cfhsi->ndev, "%s: Invalid descriptor.\n", 421 - __func__); 422 - return -EPROTO; 423 - } 424 - 425 - /* Check for embedded CAIF frame. */ 426 - if (desc->offset) { 427 - struct sk_buff *skb; 428 - int len = 0; 429 - pfrm = ((u8 *)desc) + desc->offset; 430 - 431 - /* Remove offset padding. */ 432 - pfrm += *pfrm + 1; 433 - 434 - /* Read length of CAIF frame (little endian). */ 435 - len = *pfrm; 436 - len |= ((*(pfrm+1)) << 8) & 0xFF00; 437 - len += 2; /* Add FCS fields. */ 438 - 439 - /* Sanity check length of CAIF frame. */ 440 - if (unlikely(len > CFHSI_MAX_CAIF_FRAME_SZ)) { 441 - netdev_err(cfhsi->ndev, "%s: Invalid length.\n", 442 - __func__); 443 - return -EPROTO; 444 - } 445 - 446 - /* Allocate SKB (OK even in IRQ context). 
*/ 447 - skb = alloc_skb(len + 1, GFP_ATOMIC); 448 - if (!skb) { 449 - netdev_err(cfhsi->ndev, "%s: Out of memory !\n", 450 - __func__); 451 - return -ENOMEM; 452 - } 453 - caif_assert(skb != NULL); 454 - 455 - skb_put_data(skb, pfrm, len); 456 - 457 - skb->protocol = htons(ETH_P_CAIF); 458 - skb_reset_mac_header(skb); 459 - skb->dev = cfhsi->ndev; 460 - 461 - netif_rx_any_context(skb); 462 - 463 - /* Update network statistics. */ 464 - cfhsi->ndev->stats.rx_packets++; 465 - cfhsi->ndev->stats.rx_bytes += len; 466 - } 467 - 468 - /* Calculate transfer length. */ 469 - plen = desc->cffrm_len; 470 - while (nfrms < CFHSI_MAX_PKTS && *plen) { 471 - xfer_sz += *plen; 472 - plen++; 473 - nfrms++; 474 - } 475 - 476 - /* Check for piggy-backed descriptor. */ 477 - if (desc->header & CFHSI_PIGGY_DESC) 478 - xfer_sz += CFHSI_DESC_SZ; 479 - 480 - if ((xfer_sz % 4) || (xfer_sz > (CFHSI_BUF_SZ_RX - CFHSI_DESC_SZ))) { 481 - netdev_err(cfhsi->ndev, 482 - "%s: Invalid payload len: %d, ignored.\n", 483 - __func__, xfer_sz); 484 - return -EPROTO; 485 - } 486 - return xfer_sz; 487 - } 488 - 489 - static int cfhsi_rx_desc_len(struct cfhsi_desc *desc) 490 - { 491 - int xfer_sz = 0; 492 - int nfrms = 0; 493 - u16 *plen; 494 - 495 - if ((desc->header & ~CFHSI_PIGGY_DESC) || 496 - (desc->offset > CFHSI_MAX_EMB_FRM_SZ)) { 497 - 498 - pr_err("Invalid descriptor. %x %x\n", desc->header, 499 - desc->offset); 500 - return -EPROTO; 501 - } 502 - 503 - /* Calculate transfer length. */ 504 - plen = desc->cffrm_len; 505 - while (nfrms < CFHSI_MAX_PKTS && *plen) { 506 - xfer_sz += *plen; 507 - plen++; 508 - nfrms++; 509 - } 510 - 511 - if (xfer_sz % 4) { 512 - pr_err("Invalid payload len: %d, ignored.\n", xfer_sz); 513 - return -EPROTO; 514 - } 515 - return xfer_sz; 516 - } 517 - 518 - static int cfhsi_rx_pld(struct cfhsi_desc *desc, struct cfhsi *cfhsi) 519 - { 520 - int rx_sz = 0; 521 - int nfrms = 0; 522 - u16 *plen = NULL; 523 - u8 *pfrm = NULL; 524 - 525 - /* Sanity check header and offset. 
*/ 526 - if (WARN_ON((desc->header & ~CFHSI_PIGGY_DESC) || 527 - (desc->offset > CFHSI_MAX_EMB_FRM_SZ))) { 528 - netdev_err(cfhsi->ndev, "%s: Invalid descriptor.\n", 529 - __func__); 530 - return -EPROTO; 531 - } 532 - 533 - /* Set frame pointer to start of payload. */ 534 - pfrm = desc->emb_frm + CFHSI_MAX_EMB_FRM_SZ; 535 - plen = desc->cffrm_len; 536 - 537 - /* Skip already processed frames. */ 538 - while (nfrms < cfhsi->rx_state.nfrms) { 539 - pfrm += *plen; 540 - rx_sz += *plen; 541 - plen++; 542 - nfrms++; 543 - } 544 - 545 - /* Parse payload. */ 546 - while (nfrms < CFHSI_MAX_PKTS && *plen) { 547 - struct sk_buff *skb; 548 - u8 *pcffrm = NULL; 549 - int len; 550 - 551 - /* CAIF frame starts after head padding. */ 552 - pcffrm = pfrm + *pfrm + 1; 553 - 554 - /* Read length of CAIF frame (little endian). */ 555 - len = *pcffrm; 556 - len |= ((*(pcffrm + 1)) << 8) & 0xFF00; 557 - len += 2; /* Add FCS fields. */ 558 - 559 - /* Sanity check length of CAIF frames. */ 560 - if (unlikely(len > CFHSI_MAX_CAIF_FRAME_SZ)) { 561 - netdev_err(cfhsi->ndev, "%s: Invalid length.\n", 562 - __func__); 563 - return -EPROTO; 564 - } 565 - 566 - /* Allocate SKB (OK even in IRQ context). */ 567 - skb = alloc_skb(len + 1, GFP_ATOMIC); 568 - if (!skb) { 569 - netdev_err(cfhsi->ndev, "%s: Out of memory !\n", 570 - __func__); 571 - cfhsi->rx_state.nfrms = nfrms; 572 - return -ENOMEM; 573 - } 574 - caif_assert(skb != NULL); 575 - 576 - skb_put_data(skb, pcffrm, len); 577 - 578 - skb->protocol = htons(ETH_P_CAIF); 579 - skb_reset_mac_header(skb); 580 - skb->dev = cfhsi->ndev; 581 - 582 - netif_rx_any_context(skb); 583 - 584 - /* Update network statistics. 
*/ 585 - cfhsi->ndev->stats.rx_packets++; 586 - cfhsi->ndev->stats.rx_bytes += len; 587 - 588 - pfrm += *plen; 589 - rx_sz += *plen; 590 - plen++; 591 - nfrms++; 592 - } 593 - 594 - return rx_sz; 595 - } 596 - 597 - static void cfhsi_rx_done(struct cfhsi *cfhsi) 598 - { 599 - int res; 600 - int desc_pld_len = 0, rx_len, rx_state; 601 - struct cfhsi_desc *desc = NULL; 602 - u8 *rx_ptr, *rx_buf; 603 - struct cfhsi_desc *piggy_desc = NULL; 604 - 605 - desc = (struct cfhsi_desc *)cfhsi->rx_buf; 606 - 607 - netdev_dbg(cfhsi->ndev, "%s\n", __func__); 608 - 609 - if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 610 - return; 611 - 612 - /* Update inactivity timer if pending. */ 613 - spin_lock_bh(&cfhsi->lock); 614 - mod_timer_pending(&cfhsi->inactivity_timer, 615 - jiffies + cfhsi->cfg.inactivity_timeout); 616 - spin_unlock_bh(&cfhsi->lock); 617 - 618 - if (cfhsi->rx_state.state == CFHSI_RX_STATE_DESC) { 619 - desc_pld_len = cfhsi_rx_desc_len(desc); 620 - 621 - if (desc_pld_len < 0) 622 - goto out_of_sync; 623 - 624 - rx_buf = cfhsi->rx_buf; 625 - rx_len = desc_pld_len; 626 - if (desc_pld_len > 0 && (desc->header & CFHSI_PIGGY_DESC)) 627 - rx_len += CFHSI_DESC_SZ; 628 - if (desc_pld_len == 0) 629 - rx_buf = cfhsi->rx_flip_buf; 630 - } else { 631 - rx_buf = cfhsi->rx_flip_buf; 632 - 633 - rx_len = CFHSI_DESC_SZ; 634 - if (cfhsi->rx_state.pld_len > 0 && 635 - (desc->header & CFHSI_PIGGY_DESC)) { 636 - 637 - piggy_desc = (struct cfhsi_desc *) 638 - (desc->emb_frm + CFHSI_MAX_EMB_FRM_SZ + 639 - cfhsi->rx_state.pld_len); 640 - 641 - cfhsi->rx_state.piggy_desc = true; 642 - 643 - /* Extract payload len from piggy-backed descriptor. 
*/ 644 - desc_pld_len = cfhsi_rx_desc_len(piggy_desc); 645 - if (desc_pld_len < 0) 646 - goto out_of_sync; 647 - 648 - if (desc_pld_len > 0) { 649 - rx_len = desc_pld_len; 650 - if (piggy_desc->header & CFHSI_PIGGY_DESC) 651 - rx_len += CFHSI_DESC_SZ; 652 - } 653 - 654 - /* 655 - * Copy needed information from the piggy-backed 656 - * descriptor to the descriptor in the start. 657 - */ 658 - memcpy(rx_buf, (u8 *)piggy_desc, 659 - CFHSI_DESC_SHORT_SZ); 660 - } 661 - } 662 - 663 - if (desc_pld_len) { 664 - rx_state = CFHSI_RX_STATE_PAYLOAD; 665 - rx_ptr = rx_buf + CFHSI_DESC_SZ; 666 - } else { 667 - rx_state = CFHSI_RX_STATE_DESC; 668 - rx_ptr = rx_buf; 669 - rx_len = CFHSI_DESC_SZ; 670 - } 671 - 672 - /* Initiate next read */ 673 - if (test_bit(CFHSI_AWAKE, &cfhsi->bits)) { 674 - /* Set up new transfer. */ 675 - netdev_dbg(cfhsi->ndev, "%s: Start RX.\n", 676 - __func__); 677 - 678 - res = cfhsi->ops->cfhsi_rx(rx_ptr, rx_len, 679 - cfhsi->ops); 680 - if (WARN_ON(res < 0)) { 681 - netdev_err(cfhsi->ndev, "%s: RX error %d.\n", 682 - __func__, res); 683 - cfhsi->ndev->stats.rx_errors++; 684 - cfhsi->ndev->stats.rx_dropped++; 685 - } 686 - } 687 - 688 - if (cfhsi->rx_state.state == CFHSI_RX_STATE_DESC) { 689 - /* Extract payload from descriptor */ 690 - if (cfhsi_rx_desc(desc, cfhsi) < 0) 691 - goto out_of_sync; 692 - } else { 693 - /* Extract payload */ 694 - if (cfhsi_rx_pld(desc, cfhsi) < 0) 695 - goto out_of_sync; 696 - if (piggy_desc) { 697 - /* Extract any payload in piggyback descriptor. 
*/ 698 - if (cfhsi_rx_desc(piggy_desc, cfhsi) < 0) 699 - goto out_of_sync; 700 - /* Mark no embedded frame after extracting it */ 701 - piggy_desc->offset = 0; 702 - } 703 - } 704 - 705 - /* Update state info */ 706 - memset(&cfhsi->rx_state, 0, sizeof(cfhsi->rx_state)); 707 - cfhsi->rx_state.state = rx_state; 708 - cfhsi->rx_ptr = rx_ptr; 709 - cfhsi->rx_len = rx_len; 710 - cfhsi->rx_state.pld_len = desc_pld_len; 711 - cfhsi->rx_state.piggy_desc = desc->header & CFHSI_PIGGY_DESC; 712 - 713 - if (rx_buf != cfhsi->rx_buf) 714 - swap(cfhsi->rx_buf, cfhsi->rx_flip_buf); 715 - return; 716 - 717 - out_of_sync: 718 - netdev_err(cfhsi->ndev, "%s: Out of sync.\n", __func__); 719 - print_hex_dump_bytes("--> ", DUMP_PREFIX_NONE, 720 - cfhsi->rx_buf, CFHSI_DESC_SZ); 721 - schedule_work(&cfhsi->out_of_sync_work); 722 - } 723 - 724 - static void cfhsi_rx_slowpath(struct timer_list *t) 725 - { 726 - struct cfhsi *cfhsi = from_timer(cfhsi, t, rx_slowpath_timer); 727 - 728 - netdev_dbg(cfhsi->ndev, "%s.\n", 729 - __func__); 730 - 731 - cfhsi_rx_done(cfhsi); 732 - } 733 - 734 - static void cfhsi_rx_done_cb(struct cfhsi_cb_ops *cb_ops) 735 - { 736 - struct cfhsi *cfhsi; 737 - 738 - cfhsi = container_of(cb_ops, struct cfhsi, cb_ops); 739 - netdev_dbg(cfhsi->ndev, "%s.\n", 740 - __func__); 741 - 742 - if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 743 - return; 744 - 745 - if (test_and_clear_bit(CFHSI_FLUSH_FIFO, &cfhsi->bits)) 746 - wake_up_interruptible(&cfhsi->flush_fifo_wait); 747 - else 748 - cfhsi_rx_done(cfhsi); 749 - } 750 - 751 - static void cfhsi_wake_up(struct work_struct *work) 752 - { 753 - struct cfhsi *cfhsi = NULL; 754 - int res; 755 - int len; 756 - long ret; 757 - 758 - cfhsi = container_of(work, struct cfhsi, wake_up_work); 759 - 760 - if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 761 - return; 762 - 763 - if (unlikely(test_bit(CFHSI_AWAKE, &cfhsi->bits))) { 764 - /* It happenes when wakeup is requested by 765 - * both ends at the same time. 
*/ 766 - clear_bit(CFHSI_WAKE_UP, &cfhsi->bits); 767 - clear_bit(CFHSI_WAKE_UP_ACK, &cfhsi->bits); 768 - return; 769 - } 770 - 771 - /* Activate wake line. */ 772 - cfhsi->ops->cfhsi_wake_up(cfhsi->ops); 773 - 774 - netdev_dbg(cfhsi->ndev, "%s: Start waiting.\n", 775 - __func__); 776 - 777 - /* Wait for acknowledge. */ 778 - ret = CFHSI_WAKE_TOUT; 779 - ret = wait_event_interruptible_timeout(cfhsi->wake_up_wait, 780 - test_and_clear_bit(CFHSI_WAKE_UP_ACK, 781 - &cfhsi->bits), ret); 782 - if (unlikely(ret < 0)) { 783 - /* Interrupted by signal. */ 784 - netdev_err(cfhsi->ndev, "%s: Signalled: %ld.\n", 785 - __func__, ret); 786 - 787 - clear_bit(CFHSI_WAKE_UP, &cfhsi->bits); 788 - cfhsi->ops->cfhsi_wake_down(cfhsi->ops); 789 - return; 790 - } else if (!ret) { 791 - bool ca_wake = false; 792 - size_t fifo_occupancy = 0; 793 - 794 - /* Wakeup timeout */ 795 - netdev_dbg(cfhsi->ndev, "%s: Timeout.\n", 796 - __func__); 797 - 798 - /* Check FIFO to check if modem has sent something. */ 799 - WARN_ON(cfhsi->ops->cfhsi_fifo_occupancy(cfhsi->ops, 800 - &fifo_occupancy)); 801 - 802 - netdev_dbg(cfhsi->ndev, "%s: Bytes in FIFO: %u.\n", 803 - __func__, (unsigned) fifo_occupancy); 804 - 805 - /* Check if we misssed the interrupt. */ 806 - WARN_ON(cfhsi->ops->cfhsi_get_peer_wake(cfhsi->ops, 807 - &ca_wake)); 808 - 809 - if (ca_wake) { 810 - netdev_err(cfhsi->ndev, "%s: CA Wake missed !.\n", 811 - __func__); 812 - 813 - /* Clear the CFHSI_WAKE_UP_ACK bit to prevent race. */ 814 - clear_bit(CFHSI_WAKE_UP_ACK, &cfhsi->bits); 815 - 816 - /* Continue execution. */ 817 - goto wake_ack; 818 - } 819 - 820 - clear_bit(CFHSI_WAKE_UP, &cfhsi->bits); 821 - cfhsi->ops->cfhsi_wake_down(cfhsi->ops); 822 - return; 823 - } 824 - wake_ack: 825 - netdev_dbg(cfhsi->ndev, "%s: Woken.\n", 826 - __func__); 827 - 828 - /* Clear power up bit. */ 829 - set_bit(CFHSI_AWAKE, &cfhsi->bits); 830 - clear_bit(CFHSI_WAKE_UP, &cfhsi->bits); 831 - 832 - /* Resume read operation. 
*/ 833 - netdev_dbg(cfhsi->ndev, "%s: Start RX.\n", __func__); 834 - res = cfhsi->ops->cfhsi_rx(cfhsi->rx_ptr, cfhsi->rx_len, cfhsi->ops); 835 - 836 - if (WARN_ON(res < 0)) 837 - netdev_err(cfhsi->ndev, "%s: RX err %d.\n", __func__, res); 838 - 839 - /* Clear power up acknowledment. */ 840 - clear_bit(CFHSI_WAKE_UP_ACK, &cfhsi->bits); 841 - 842 - spin_lock_bh(&cfhsi->lock); 843 - 844 - /* Resume transmit if queues are not empty. */ 845 - if (!cfhsi_tx_queue_len(cfhsi)) { 846 - netdev_dbg(cfhsi->ndev, "%s: Peer wake, start timer.\n", 847 - __func__); 848 - /* Start inactivity timer. */ 849 - mod_timer(&cfhsi->inactivity_timer, 850 - jiffies + cfhsi->cfg.inactivity_timeout); 851 - spin_unlock_bh(&cfhsi->lock); 852 - return; 853 - } 854 - 855 - netdev_dbg(cfhsi->ndev, "%s: Host wake.\n", 856 - __func__); 857 - 858 - spin_unlock_bh(&cfhsi->lock); 859 - 860 - /* Create HSI frame. */ 861 - len = cfhsi_tx_frm((struct cfhsi_desc *)cfhsi->tx_buf, cfhsi); 862 - 863 - if (likely(len > 0)) { 864 - /* Set up new transfer. */ 865 - res = cfhsi->ops->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->ops); 866 - if (WARN_ON(res < 0)) { 867 - netdev_err(cfhsi->ndev, "%s: TX error %d.\n", 868 - __func__, res); 869 - cfhsi_abort_tx(cfhsi); 870 - } 871 - } else { 872 - netdev_err(cfhsi->ndev, 873 - "%s: Failed to create HSI frame: %d.\n", 874 - __func__, len); 875 - } 876 - } 877 - 878 - static void cfhsi_wake_down(struct work_struct *work) 879 - { 880 - long ret; 881 - struct cfhsi *cfhsi = NULL; 882 - size_t fifo_occupancy = 0; 883 - int retry = CFHSI_WAKE_TOUT; 884 - 885 - cfhsi = container_of(work, struct cfhsi, wake_down_work); 886 - netdev_dbg(cfhsi->ndev, "%s.\n", __func__); 887 - 888 - if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 889 - return; 890 - 891 - /* Deactivate wake line. */ 892 - cfhsi->ops->cfhsi_wake_down(cfhsi->ops); 893 - 894 - /* Wait for acknowledge. 
*/ 895 - ret = CFHSI_WAKE_TOUT; 896 - ret = wait_event_interruptible_timeout(cfhsi->wake_down_wait, 897 - test_and_clear_bit(CFHSI_WAKE_DOWN_ACK, 898 - &cfhsi->bits), ret); 899 - if (ret < 0) { 900 - /* Interrupted by signal. */ 901 - netdev_err(cfhsi->ndev, "%s: Signalled: %ld.\n", 902 - __func__, ret); 903 - return; 904 - } else if (!ret) { 905 - bool ca_wake = true; 906 - 907 - /* Timeout */ 908 - netdev_err(cfhsi->ndev, "%s: Timeout.\n", __func__); 909 - 910 - /* Check if we misssed the interrupt. */ 911 - WARN_ON(cfhsi->ops->cfhsi_get_peer_wake(cfhsi->ops, 912 - &ca_wake)); 913 - if (!ca_wake) 914 - netdev_err(cfhsi->ndev, "%s: CA Wake missed !.\n", 915 - __func__); 916 - } 917 - 918 - /* Check FIFO occupancy. */ 919 - while (retry) { 920 - WARN_ON(cfhsi->ops->cfhsi_fifo_occupancy(cfhsi->ops, 921 - &fifo_occupancy)); 922 - 923 - if (!fifo_occupancy) 924 - break; 925 - 926 - set_current_state(TASK_INTERRUPTIBLE); 927 - schedule_timeout(1); 928 - retry--; 929 - } 930 - 931 - if (!retry) 932 - netdev_err(cfhsi->ndev, "%s: FIFO Timeout.\n", __func__); 933 - 934 - /* Clear AWAKE condition. */ 935 - clear_bit(CFHSI_AWAKE, &cfhsi->bits); 936 - 937 - /* Cancel pending RX requests. */ 938 - cfhsi->ops->cfhsi_rx_cancel(cfhsi->ops); 939 - } 940 - 941 - static void cfhsi_out_of_sync(struct work_struct *work) 942 - { 943 - struct cfhsi *cfhsi = NULL; 944 - 945 - cfhsi = container_of(work, struct cfhsi, out_of_sync_work); 946 - 947 - rtnl_lock(); 948 - dev_close(cfhsi->ndev); 949 - rtnl_unlock(); 950 - } 951 - 952 - static void cfhsi_wake_up_cb(struct cfhsi_cb_ops *cb_ops) 953 - { 954 - struct cfhsi *cfhsi = NULL; 955 - 956 - cfhsi = container_of(cb_ops, struct cfhsi, cb_ops); 957 - netdev_dbg(cfhsi->ndev, "%s.\n", 958 - __func__); 959 - 960 - set_bit(CFHSI_WAKE_UP_ACK, &cfhsi->bits); 961 - wake_up_interruptible(&cfhsi->wake_up_wait); 962 - 963 - if (test_bit(CFHSI_SHUTDOWN, &cfhsi->bits)) 964 - return; 965 - 966 - /* Schedule wake up work queue if the peer initiates. 
*/ 967 - if (!test_and_set_bit(CFHSI_WAKE_UP, &cfhsi->bits)) 968 - queue_work(cfhsi->wq, &cfhsi->wake_up_work); 969 - } 970 - 971 - static void cfhsi_wake_down_cb(struct cfhsi_cb_ops *cb_ops) 972 - { 973 - struct cfhsi *cfhsi = NULL; 974 - 975 - cfhsi = container_of(cb_ops, struct cfhsi, cb_ops); 976 - netdev_dbg(cfhsi->ndev, "%s.\n", 977 - __func__); 978 - 979 - /* Initiating low power is only permitted by the host (us). */ 980 - set_bit(CFHSI_WAKE_DOWN_ACK, &cfhsi->bits); 981 - wake_up_interruptible(&cfhsi->wake_down_wait); 982 - } 983 - 984 - static void cfhsi_aggregation_tout(struct timer_list *t) 985 - { 986 - struct cfhsi *cfhsi = from_timer(cfhsi, t, aggregation_timer); 987 - 988 - netdev_dbg(cfhsi->ndev, "%s.\n", 989 - __func__); 990 - 991 - cfhsi_start_tx(cfhsi); 992 - } 993 - 994 - static netdev_tx_t cfhsi_xmit(struct sk_buff *skb, struct net_device *dev) 995 - { 996 - struct cfhsi *cfhsi = NULL; 997 - int start_xfer = 0; 998 - int timer_active; 999 - int prio; 1000 - 1001 - if (!dev) 1002 - return -EINVAL; 1003 - 1004 - cfhsi = netdev_priv(dev); 1005 - 1006 - switch (skb->priority) { 1007 - case TC_PRIO_BESTEFFORT: 1008 - case TC_PRIO_FILLER: 1009 - case TC_PRIO_BULK: 1010 - prio = CFHSI_PRIO_BEBK; 1011 - break; 1012 - case TC_PRIO_INTERACTIVE_BULK: 1013 - prio = CFHSI_PRIO_VI; 1014 - break; 1015 - case TC_PRIO_INTERACTIVE: 1016 - prio = CFHSI_PRIO_VO; 1017 - break; 1018 - case TC_PRIO_CONTROL: 1019 - default: 1020 - prio = CFHSI_PRIO_CTL; 1021 - break; 1022 - } 1023 - 1024 - spin_lock_bh(&cfhsi->lock); 1025 - 1026 - /* Update aggregation statistics */ 1027 - cfhsi_update_aggregation_stats(cfhsi, skb, 1); 1028 - 1029 - /* Queue the SKB */ 1030 - skb_queue_tail(&cfhsi->qhead[prio], skb); 1031 - 1032 - /* Sanity check; xmit should not be called after unregister_netdev */ 1033 - if (WARN_ON(test_bit(CFHSI_SHUTDOWN, &cfhsi->bits))) { 1034 - spin_unlock_bh(&cfhsi->lock); 1035 - cfhsi_abort_tx(cfhsi); 1036 - return -EINVAL; 1037 - } 1038 - 1039 - /* Send flow 
off if number of packets is above high water mark. */ 1040 - if (!cfhsi->flow_off_sent && 1041 - cfhsi_tx_queue_len(cfhsi) > cfhsi->cfg.q_high_mark && 1042 - cfhsi->cfdev.flowctrl) { 1043 - cfhsi->flow_off_sent = 1; 1044 - cfhsi->cfdev.flowctrl(cfhsi->ndev, OFF); 1045 - } 1046 - 1047 - if (cfhsi->tx_state == CFHSI_TX_STATE_IDLE) { 1048 - cfhsi->tx_state = CFHSI_TX_STATE_XFER; 1049 - start_xfer = 1; 1050 - } 1051 - 1052 - if (!start_xfer) { 1053 - /* Send aggregate if it is possible */ 1054 - bool aggregate_ready = 1055 - cfhsi_can_send_aggregate(cfhsi) && 1056 - del_timer(&cfhsi->aggregation_timer) > 0; 1057 - spin_unlock_bh(&cfhsi->lock); 1058 - if (aggregate_ready) 1059 - cfhsi_start_tx(cfhsi); 1060 - return NETDEV_TX_OK; 1061 - } 1062 - 1063 - /* Delete inactivity timer if started. */ 1064 - timer_active = del_timer_sync(&cfhsi->inactivity_timer); 1065 - 1066 - spin_unlock_bh(&cfhsi->lock); 1067 - 1068 - if (timer_active) { 1069 - struct cfhsi_desc *desc = (struct cfhsi_desc *)cfhsi->tx_buf; 1070 - int len; 1071 - int res; 1072 - 1073 - /* Create HSI frame. */ 1074 - len = cfhsi_tx_frm(desc, cfhsi); 1075 - WARN_ON(!len); 1076 - 1077 - /* Set up new transfer. */ 1078 - res = cfhsi->ops->cfhsi_tx(cfhsi->tx_buf, len, cfhsi->ops); 1079 - if (WARN_ON(res < 0)) { 1080 - netdev_err(cfhsi->ndev, "%s: TX error %d.\n", 1081 - __func__, res); 1082 - cfhsi_abort_tx(cfhsi); 1083 - } 1084 - } else { 1085 - /* Schedule wake up work queue if the we initiate. 
*/ 1086 - if (!test_and_set_bit(CFHSI_WAKE_UP, &cfhsi->bits)) 1087 - queue_work(cfhsi->wq, &cfhsi->wake_up_work); 1088 - } 1089 - 1090 - return NETDEV_TX_OK; 1091 - } 1092 - 1093 - static const struct net_device_ops cfhsi_netdevops; 1094 - 1095 - static void cfhsi_setup(struct net_device *dev) 1096 - { 1097 - int i; 1098 - struct cfhsi *cfhsi = netdev_priv(dev); 1099 - dev->features = 0; 1100 - dev->type = ARPHRD_CAIF; 1101 - dev->flags = IFF_POINTOPOINT | IFF_NOARP; 1102 - dev->mtu = CFHSI_MAX_CAIF_FRAME_SZ; 1103 - dev->priv_flags |= IFF_NO_QUEUE; 1104 - dev->needs_free_netdev = true; 1105 - dev->netdev_ops = &cfhsi_netdevops; 1106 - for (i = 0; i < CFHSI_PRIO_LAST; ++i) 1107 - skb_queue_head_init(&cfhsi->qhead[i]); 1108 - cfhsi->cfdev.link_select = CAIF_LINK_HIGH_BANDW; 1109 - cfhsi->cfdev.use_frag = false; 1110 - cfhsi->cfdev.use_stx = false; 1111 - cfhsi->cfdev.use_fcs = false; 1112 - cfhsi->ndev = dev; 1113 - cfhsi->cfg = hsi_default_config; 1114 - } 1115 - 1116 - static int cfhsi_open(struct net_device *ndev) 1117 - { 1118 - struct cfhsi *cfhsi = netdev_priv(ndev); 1119 - int res; 1120 - 1121 - clear_bit(CFHSI_SHUTDOWN, &cfhsi->bits); 1122 - 1123 - /* Initialize state vaiables. */ 1124 - cfhsi->tx_state = CFHSI_TX_STATE_IDLE; 1125 - cfhsi->rx_state.state = CFHSI_RX_STATE_DESC; 1126 - 1127 - /* Set flow info */ 1128 - cfhsi->flow_off_sent = 0; 1129 - 1130 - /* 1131 - * Allocate a TX buffer with the size of a HSI packet descriptors 1132 - * and the necessary room for CAIF payload frames. 1133 - */ 1134 - cfhsi->tx_buf = kzalloc(CFHSI_BUF_SZ_TX, GFP_KERNEL); 1135 - if (!cfhsi->tx_buf) { 1136 - res = -ENODEV; 1137 - goto err_alloc_tx; 1138 - } 1139 - 1140 - /* 1141 - * Allocate a RX buffer with the size of two HSI packet descriptors and 1142 - * the necessary room for CAIF payload frames. 
1143 - */ 1144 - cfhsi->rx_buf = kzalloc(CFHSI_BUF_SZ_RX, GFP_KERNEL); 1145 - if (!cfhsi->rx_buf) { 1146 - res = -ENODEV; 1147 - goto err_alloc_rx; 1148 - } 1149 - 1150 - cfhsi->rx_flip_buf = kzalloc(CFHSI_BUF_SZ_RX, GFP_KERNEL); 1151 - if (!cfhsi->rx_flip_buf) { 1152 - res = -ENODEV; 1153 - goto err_alloc_rx_flip; 1154 - } 1155 - 1156 - /* Initialize aggregation timeout */ 1157 - cfhsi->cfg.aggregation_timeout = hsi_default_config.aggregation_timeout; 1158 - 1159 - /* Initialize recieve vaiables. */ 1160 - cfhsi->rx_ptr = cfhsi->rx_buf; 1161 - cfhsi->rx_len = CFHSI_DESC_SZ; 1162 - 1163 - /* Initialize spin locks. */ 1164 - spin_lock_init(&cfhsi->lock); 1165 - 1166 - /* Set up the driver. */ 1167 - cfhsi->cb_ops.tx_done_cb = cfhsi_tx_done_cb; 1168 - cfhsi->cb_ops.rx_done_cb = cfhsi_rx_done_cb; 1169 - cfhsi->cb_ops.wake_up_cb = cfhsi_wake_up_cb; 1170 - cfhsi->cb_ops.wake_down_cb = cfhsi_wake_down_cb; 1171 - 1172 - /* Initialize the work queues. */ 1173 - INIT_WORK(&cfhsi->wake_up_work, cfhsi_wake_up); 1174 - INIT_WORK(&cfhsi->wake_down_work, cfhsi_wake_down); 1175 - INIT_WORK(&cfhsi->out_of_sync_work, cfhsi_out_of_sync); 1176 - 1177 - /* Clear all bit fields. */ 1178 - clear_bit(CFHSI_WAKE_UP_ACK, &cfhsi->bits); 1179 - clear_bit(CFHSI_WAKE_DOWN_ACK, &cfhsi->bits); 1180 - clear_bit(CFHSI_WAKE_UP, &cfhsi->bits); 1181 - clear_bit(CFHSI_AWAKE, &cfhsi->bits); 1182 - 1183 - /* Create work thread. */ 1184 - cfhsi->wq = alloc_ordered_workqueue(cfhsi->ndev->name, WQ_MEM_RECLAIM); 1185 - if (!cfhsi->wq) { 1186 - netdev_err(cfhsi->ndev, "%s: Failed to create work queue.\n", 1187 - __func__); 1188 - res = -ENODEV; 1189 - goto err_create_wq; 1190 - } 1191 - 1192 - /* Initialize wait queues. */ 1193 - init_waitqueue_head(&cfhsi->wake_up_wait); 1194 - init_waitqueue_head(&cfhsi->wake_down_wait); 1195 - init_waitqueue_head(&cfhsi->flush_fifo_wait); 1196 - 1197 - /* Setup the inactivity timer. 
*/ 1198 - timer_setup(&cfhsi->inactivity_timer, cfhsi_inactivity_tout, 0); 1199 - /* Setup the slowpath RX timer. */ 1200 - timer_setup(&cfhsi->rx_slowpath_timer, cfhsi_rx_slowpath, 0); 1201 - /* Setup the aggregation timer. */ 1202 - timer_setup(&cfhsi->aggregation_timer, cfhsi_aggregation_tout, 0); 1203 - 1204 - /* Activate HSI interface. */ 1205 - res = cfhsi->ops->cfhsi_up(cfhsi->ops); 1206 - if (res) { 1207 - netdev_err(cfhsi->ndev, 1208 - "%s: can't activate HSI interface: %d.\n", 1209 - __func__, res); 1210 - goto err_activate; 1211 - } 1212 - 1213 - /* Flush FIFO */ 1214 - res = cfhsi_flush_fifo(cfhsi); 1215 - if (res) { 1216 - netdev_err(cfhsi->ndev, "%s: Can't flush FIFO: %d.\n", 1217 - __func__, res); 1218 - goto err_net_reg; 1219 - } 1220 - return res; 1221 - 1222 - err_net_reg: 1223 - cfhsi->ops->cfhsi_down(cfhsi->ops); 1224 - err_activate: 1225 - destroy_workqueue(cfhsi->wq); 1226 - err_create_wq: 1227 - kfree(cfhsi->rx_flip_buf); 1228 - err_alloc_rx_flip: 1229 - kfree(cfhsi->rx_buf); 1230 - err_alloc_rx: 1231 - kfree(cfhsi->tx_buf); 1232 - err_alloc_tx: 1233 - return res; 1234 - } 1235 - 1236 - static int cfhsi_close(struct net_device *ndev) 1237 - { 1238 - struct cfhsi *cfhsi = netdev_priv(ndev); 1239 - u8 *tx_buf, *rx_buf, *flip_buf; 1240 - 1241 - /* going to shutdown driver */ 1242 - set_bit(CFHSI_SHUTDOWN, &cfhsi->bits); 1243 - 1244 - /* Delete timers if pending */ 1245 - del_timer_sync(&cfhsi->inactivity_timer); 1246 - del_timer_sync(&cfhsi->rx_slowpath_timer); 1247 - del_timer_sync(&cfhsi->aggregation_timer); 1248 - 1249 - /* Cancel pending RX request (if any) */ 1250 - cfhsi->ops->cfhsi_rx_cancel(cfhsi->ops); 1251 - 1252 - /* Destroy workqueue */ 1253 - destroy_workqueue(cfhsi->wq); 1254 - 1255 - /* Store bufferes: will be freed later. */ 1256 - tx_buf = cfhsi->tx_buf; 1257 - rx_buf = cfhsi->rx_buf; 1258 - flip_buf = cfhsi->rx_flip_buf; 1259 - /* Flush transmit queues. 
*/ 1260 - cfhsi_abort_tx(cfhsi); 1261 - 1262 - /* Deactivate interface */ 1263 - cfhsi->ops->cfhsi_down(cfhsi->ops); 1264 - 1265 - /* Free buffers. */ 1266 - kfree(tx_buf); 1267 - kfree(rx_buf); 1268 - kfree(flip_buf); 1269 - return 0; 1270 - } 1271 - 1272 - static void cfhsi_uninit(struct net_device *dev) 1273 - { 1274 - struct cfhsi *cfhsi = netdev_priv(dev); 1275 - ASSERT_RTNL(); 1276 - symbol_put(cfhsi_get_device); 1277 - list_del(&cfhsi->list); 1278 - } 1279 - 1280 - static const struct net_device_ops cfhsi_netdevops = { 1281 - .ndo_uninit = cfhsi_uninit, 1282 - .ndo_open = cfhsi_open, 1283 - .ndo_stop = cfhsi_close, 1284 - .ndo_start_xmit = cfhsi_xmit 1285 - }; 1286 - 1287 - static void cfhsi_netlink_parms(struct nlattr *data[], struct cfhsi *cfhsi) 1288 - { 1289 - int i; 1290 - 1291 - if (!data) { 1292 - pr_debug("no params data found\n"); 1293 - return; 1294 - } 1295 - 1296 - i = __IFLA_CAIF_HSI_INACTIVITY_TOUT; 1297 - /* 1298 - * Inactivity timeout in millisecs. Lowest possible value is 1, 1299 - * and highest possible is NEXT_TIMER_MAX_DELTA. 1300 - */ 1301 - if (data[i]) { 1302 - u32 inactivity_timeout = nla_get_u32(data[i]); 1303 - /* Pre-calculate inactivity timeout. 
*/ 1304 - cfhsi->cfg.inactivity_timeout = inactivity_timeout * HZ / 1000; 1305 - if (cfhsi->cfg.inactivity_timeout == 0) 1306 - cfhsi->cfg.inactivity_timeout = 1; 1307 - else if (cfhsi->cfg.inactivity_timeout > NEXT_TIMER_MAX_DELTA) 1308 - cfhsi->cfg.inactivity_timeout = NEXT_TIMER_MAX_DELTA; 1309 - } 1310 - 1311 - i = __IFLA_CAIF_HSI_AGGREGATION_TOUT; 1312 - if (data[i]) 1313 - cfhsi->cfg.aggregation_timeout = nla_get_u32(data[i]); 1314 - 1315 - i = __IFLA_CAIF_HSI_HEAD_ALIGN; 1316 - if (data[i]) 1317 - cfhsi->cfg.head_align = nla_get_u32(data[i]); 1318 - 1319 - i = __IFLA_CAIF_HSI_TAIL_ALIGN; 1320 - if (data[i]) 1321 - cfhsi->cfg.tail_align = nla_get_u32(data[i]); 1322 - 1323 - i = __IFLA_CAIF_HSI_QHIGH_WATERMARK; 1324 - if (data[i]) 1325 - cfhsi->cfg.q_high_mark = nla_get_u32(data[i]); 1326 - 1327 - i = __IFLA_CAIF_HSI_QLOW_WATERMARK; 1328 - if (data[i]) 1329 - cfhsi->cfg.q_low_mark = nla_get_u32(data[i]); 1330 - } 1331 - 1332 - static int caif_hsi_changelink(struct net_device *dev, struct nlattr *tb[], 1333 - struct nlattr *data[], 1334 - struct netlink_ext_ack *extack) 1335 - { 1336 - cfhsi_netlink_parms(data, netdev_priv(dev)); 1337 - netdev_state_change(dev); 1338 - return 0; 1339 - } 1340 - 1341 - static const struct nla_policy caif_hsi_policy[__IFLA_CAIF_HSI_MAX + 1] = { 1342 - [__IFLA_CAIF_HSI_INACTIVITY_TOUT] = { .type = NLA_U32, .len = 4 }, 1343 - [__IFLA_CAIF_HSI_AGGREGATION_TOUT] = { .type = NLA_U32, .len = 4 }, 1344 - [__IFLA_CAIF_HSI_HEAD_ALIGN] = { .type = NLA_U32, .len = 4 }, 1345 - [__IFLA_CAIF_HSI_TAIL_ALIGN] = { .type = NLA_U32, .len = 4 }, 1346 - [__IFLA_CAIF_HSI_QHIGH_WATERMARK] = { .type = NLA_U32, .len = 4 }, 1347 - [__IFLA_CAIF_HSI_QLOW_WATERMARK] = { .type = NLA_U32, .len = 4 }, 1348 - }; 1349 - 1350 - static size_t caif_hsi_get_size(const struct net_device *dev) 1351 - { 1352 - int i; 1353 - size_t s = 0; 1354 - for (i = __IFLA_CAIF_HSI_UNSPEC + 1; i < __IFLA_CAIF_HSI_MAX; i++) 1355 - s += nla_total_size(caif_hsi_policy[i].len); 1356 - 
return s; 1357 - } 1358 - 1359 - static int caif_hsi_fill_info(struct sk_buff *skb, const struct net_device *dev) 1360 - { 1361 - struct cfhsi *cfhsi = netdev_priv(dev); 1362 - 1363 - if (nla_put_u32(skb, __IFLA_CAIF_HSI_INACTIVITY_TOUT, 1364 - cfhsi->cfg.inactivity_timeout) || 1365 - nla_put_u32(skb, __IFLA_CAIF_HSI_AGGREGATION_TOUT, 1366 - cfhsi->cfg.aggregation_timeout) || 1367 - nla_put_u32(skb, __IFLA_CAIF_HSI_HEAD_ALIGN, 1368 - cfhsi->cfg.head_align) || 1369 - nla_put_u32(skb, __IFLA_CAIF_HSI_TAIL_ALIGN, 1370 - cfhsi->cfg.tail_align) || 1371 - nla_put_u32(skb, __IFLA_CAIF_HSI_QHIGH_WATERMARK, 1372 - cfhsi->cfg.q_high_mark) || 1373 - nla_put_u32(skb, __IFLA_CAIF_HSI_QLOW_WATERMARK, 1374 - cfhsi->cfg.q_low_mark)) 1375 - return -EMSGSIZE; 1376 - 1377 - return 0; 1378 - } 1379 - 1380 - static int caif_hsi_newlink(struct net *src_net, struct net_device *dev, 1381 - struct nlattr *tb[], struct nlattr *data[], 1382 - struct netlink_ext_ack *extack) 1383 - { 1384 - struct cfhsi *cfhsi = NULL; 1385 - struct cfhsi_ops *(*get_ops)(void); 1386 - 1387 - ASSERT_RTNL(); 1388 - 1389 - cfhsi = netdev_priv(dev); 1390 - cfhsi_netlink_parms(data, cfhsi); 1391 - 1392 - get_ops = symbol_get(cfhsi_get_ops); 1393 - if (!get_ops) { 1394 - pr_err("%s: failed to get the cfhsi_ops\n", __func__); 1395 - return -ENODEV; 1396 - } 1397 - 1398 - /* Assign the HSI device. */ 1399 - cfhsi->ops = (*get_ops)(); 1400 - if (!cfhsi->ops) { 1401 - pr_err("%s: failed to get the cfhsi_ops\n", __func__); 1402 - goto err; 1403 - } 1404 - 1405 - /* Assign the driver to this HSI device. */ 1406 - cfhsi->ops->cb_ops = &cfhsi->cb_ops; 1407 - if (register_netdevice(dev)) { 1408 - pr_warn("%s: caif_hsi device registration failed\n", __func__); 1409 - goto err; 1410 - } 1411 - /* Add CAIF HSI device to list. 
*/ 1412 - list_add_tail(&cfhsi->list, &cfhsi_list); 1413 - 1414 - return 0; 1415 - err: 1416 - symbol_put(cfhsi_get_ops); 1417 - return -ENODEV; 1418 - } 1419 - 1420 - static struct rtnl_link_ops caif_hsi_link_ops __read_mostly = { 1421 - .kind = "cfhsi", 1422 - .priv_size = sizeof(struct cfhsi), 1423 - .setup = cfhsi_setup, 1424 - .maxtype = __IFLA_CAIF_HSI_MAX, 1425 - .policy = caif_hsi_policy, 1426 - .newlink = caif_hsi_newlink, 1427 - .changelink = caif_hsi_changelink, 1428 - .get_size = caif_hsi_get_size, 1429 - .fill_info = caif_hsi_fill_info, 1430 - }; 1431 - 1432 - static void __exit cfhsi_exit_module(void) 1433 - { 1434 - struct list_head *list_node; 1435 - struct list_head *n; 1436 - struct cfhsi *cfhsi; 1437 - 1438 - rtnl_link_unregister(&caif_hsi_link_ops); 1439 - 1440 - rtnl_lock(); 1441 - list_for_each_safe(list_node, n, &cfhsi_list) { 1442 - cfhsi = list_entry(list_node, struct cfhsi, list); 1443 - unregister_netdevice(cfhsi->ndev); 1444 - } 1445 - rtnl_unlock(); 1446 - } 1447 - 1448 - static int __init cfhsi_init_module(void) 1449 - { 1450 - return rtnl_link_register(&caif_hsi_link_ops); 1451 - } 1452 - 1453 - module_init(cfhsi_init_module); 1454 - module_exit(cfhsi_exit_module);
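The removed caif_hsi code above pre-calculated the inactivity timeout by converting milliseconds to jiffies and clamping the result to [1, NEXT_TIMER_MAX_DELTA]. A minimal userspace sketch of that arithmetic, with an illustrative HZ value (the real kernel tick rate is config-dependent):

```c
#include <assert.h>

#define HZ 250                               /* illustrative tick rate */
#define NEXT_TIMER_MAX_DELTA ((1UL << 30) - 1)

/* Convert a millisecond timeout to jiffies, clamped to a sane range,
 * mirroring the pre-calculation in cfhsi_netlink_parms(). */
static unsigned long ms_to_clamped_jiffies(unsigned long ms)
{
	unsigned long j = ms * HZ / 1000;

	if (j == 0)
		j = 1;                       /* lowest possible value is 1 */
	else if (j > NEXT_TIMER_MAX_DELTA)
		j = NEXT_TIMER_MAX_DELTA;
	return j;
}
```

Clamping low rather than rejecting keeps a sub-tick request meaningful: the timer still fires at the earliest opportunity instead of never being armed.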
+3 -1
drivers/net/dsa/microchip/ksz_common.c
··· 419 419 if (of_property_read_u32(port, "reg", 420 420 &port_num)) 421 421 continue; 422 - if (!(dev->port_mask & BIT(port_num))) 422 + if (!(dev->port_mask & BIT(port_num))) { 423 + of_node_put(port); 423 424 return -EINVAL; 425 + } 424 426 of_get_phy_mode(port, 425 427 &dev->ports[port_num].interface); 426 428 }
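The ksz_common.c hunk adds the missing `of_node_put()` on the early `-EINVAL` return: device-tree child iteration holds a reference on the current node, and any path that leaves the loop early must drop it. A toy model of the pattern (the real `struct device_node` refcounting is a kernel API; `toy_node` and the `-22` errno value are stand-ins):

```c
#include <assert.h>

struct toy_node { int refcount; };

static void toy_node_get(struct toy_node *n) { n->refcount++; }
static void toy_node_put(struct toy_node *n) { n->refcount--; }

/* Iterate "child nodes", taking a reference per iteration. The early
 * return drops the reference it still holds -- the bug the hunk fixes. */
static int scan_ports(struct toy_node *ports, int n, int bad)
{
	for (int i = 0; i < n; i++) {
		toy_node_get(&ports[i]);
		if (i == bad) {
			toy_node_put(&ports[i]); /* the added fix */
			return -22;              /* -EINVAL */
		}
		toy_node_put(&ports[i]);         /* normal loop-end release */
	}
	return 0;
}

static int leaked_refs(struct toy_node *ports, int n)
{
	int total = 0;

	for (int i = 0; i < n; i++)
		total += ports[i].refcount;
	return total;
}

static int scan_demo(void)
{
	struct toy_node ports[3] = { {0}, {0}, {0} };

	return scan_ports(ports, 3, 1) == -22 && leaked_refs(ports, 3) == 0;
}
```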
+20 -2
drivers/net/dsa/mv88e6xxx/chip.c
··· 3583 3583 .port_set_speed_duplex = mv88e6341_port_set_speed_duplex, 3584 3584 .port_max_speed_mode = mv88e6341_port_max_speed_mode, 3585 3585 .port_tag_remap = mv88e6095_port_tag_remap, 3586 + .port_set_policy = mv88e6352_port_set_policy, 3586 3587 .port_set_frame_mode = mv88e6351_port_set_frame_mode, 3587 3588 .port_set_ucast_flood = mv88e6352_port_set_ucast_flood, 3588 3589 .port_set_mcast_flood = mv88e6352_port_set_mcast_flood, ··· 3597 3596 .port_set_cmode = mv88e6341_port_set_cmode, 3598 3597 .port_setup_message_port = mv88e6xxx_setup_message_port, 3599 3598 .stats_snapshot = mv88e6390_g1_stats_snapshot, 3600 - .stats_set_histogram = mv88e6095_g1_stats_set_histogram, 3599 + .stats_set_histogram = mv88e6390_g1_stats_set_histogram, 3601 3600 .stats_get_sset_count = mv88e6320_stats_get_sset_count, 3602 3601 .stats_get_strings = mv88e6320_stats_get_strings, 3603 3602 .stats_get_stats = mv88e6390_stats_get_stats, ··· 3607 3606 .mgmt_rsvd2cpu = mv88e6390_g1_mgmt_rsvd2cpu, 3608 3607 .pot_clear = mv88e6xxx_g2_pot_clear, 3609 3608 .reset = mv88e6352_g1_reset, 3609 + .rmu_disable = mv88e6390_g1_rmu_disable, 3610 + .atu_get_hash = mv88e6165_g1_atu_get_hash, 3611 + .atu_set_hash = mv88e6165_g1_atu_set_hash, 3610 3612 .vtu_getnext = mv88e6352_g1_vtu_getnext, 3611 3613 .vtu_loadpurge = mv88e6352_g1_vtu_loadpurge, 3612 3614 .serdes_power = mv88e6390_serdes_power, ··· 3623 3619 .serdes_irq_enable = mv88e6390_serdes_irq_enable, 3624 3620 .serdes_irq_status = mv88e6390_serdes_irq_status, 3625 3621 .gpio_ops = &mv88e6352_gpio_ops, 3622 + .serdes_get_sset_count = mv88e6390_serdes_get_sset_count, 3623 + .serdes_get_strings = mv88e6390_serdes_get_strings, 3624 + .serdes_get_stats = mv88e6390_serdes_get_stats, 3625 + .serdes_get_regs_len = mv88e6390_serdes_get_regs_len, 3626 + .serdes_get_regs = mv88e6390_serdes_get_regs, 3626 3627 .phylink_validate = mv88e6341_phylink_validate, 3627 3628 }; 3628 3629 ··· 4392 4383 .port_set_speed_duplex = mv88e6341_port_set_speed_duplex, 4393 
4384 .port_max_speed_mode = mv88e6341_port_max_speed_mode, 4394 4385 .port_tag_remap = mv88e6095_port_tag_remap, 4386 + .port_set_policy = mv88e6352_port_set_policy, 4395 4387 .port_set_frame_mode = mv88e6351_port_set_frame_mode, 4396 4388 .port_set_ucast_flood = mv88e6352_port_set_ucast_flood, 4397 4389 .port_set_mcast_flood = mv88e6352_port_set_mcast_flood, ··· 4406 4396 .port_set_cmode = mv88e6341_port_set_cmode, 4407 4397 .port_setup_message_port = mv88e6xxx_setup_message_port, 4408 4398 .stats_snapshot = mv88e6390_g1_stats_snapshot, 4409 - .stats_set_histogram = mv88e6095_g1_stats_set_histogram, 4399 + .stats_set_histogram = mv88e6390_g1_stats_set_histogram, 4410 4400 .stats_get_sset_count = mv88e6320_stats_get_sset_count, 4411 4401 .stats_get_strings = mv88e6320_stats_get_strings, 4412 4402 .stats_get_stats = mv88e6390_stats_get_stats, ··· 4416 4406 .mgmt_rsvd2cpu = mv88e6390_g1_mgmt_rsvd2cpu, 4417 4407 .pot_clear = mv88e6xxx_g2_pot_clear, 4418 4408 .reset = mv88e6352_g1_reset, 4409 + .rmu_disable = mv88e6390_g1_rmu_disable, 4410 + .atu_get_hash = mv88e6165_g1_atu_get_hash, 4411 + .atu_set_hash = mv88e6165_g1_atu_set_hash, 4419 4412 .vtu_getnext = mv88e6352_g1_vtu_getnext, 4420 4413 .vtu_loadpurge = mv88e6352_g1_vtu_loadpurge, 4421 4414 .serdes_power = mv88e6390_serdes_power, ··· 4434 4421 .gpio_ops = &mv88e6352_gpio_ops, 4435 4422 .avb_ops = &mv88e6390_avb_ops, 4436 4423 .ptp_ops = &mv88e6352_ptp_ops, 4424 + .serdes_get_sset_count = mv88e6390_serdes_get_sset_count, 4425 + .serdes_get_strings = mv88e6390_serdes_get_strings, 4426 + .serdes_get_stats = mv88e6390_serdes_get_stats, 4427 + .serdes_get_regs_len = mv88e6390_serdes_get_regs_len, 4428 + .serdes_get_regs = mv88e6390_serdes_get_regs, 4437 4429 .phylink_validate = mv88e6341_phylink_validate, 4438 4430 }; 4439 4431
+3 -3
drivers/net/dsa/mv88e6xxx/serdes.c
··· 722 722 723 723 int mv88e6390_serdes_get_sset_count(struct mv88e6xxx_chip *chip, int port) 724 724 { 725 - if (mv88e6390_serdes_get_lane(chip, port) < 0) 725 + if (mv88e6xxx_serdes_get_lane(chip, port) < 0) 726 726 return 0; 727 727 728 728 return ARRAY_SIZE(mv88e6390_serdes_hw_stats); ··· 734 734 struct mv88e6390_serdes_hw_stat *stat; 735 735 int i; 736 736 737 - if (mv88e6390_serdes_get_lane(chip, port) < 0) 737 + if (mv88e6xxx_serdes_get_lane(chip, port) < 0) 738 738 return 0; 739 739 740 740 for (i = 0; i < ARRAY_SIZE(mv88e6390_serdes_hw_stats); i++) { ··· 770 770 int lane; 771 771 int i; 772 772 773 - lane = mv88e6390_serdes_get_lane(chip, port); 773 + lane = mv88e6xxx_serdes_get_lane(chip, port); 774 774 if (lane < 0) 775 775 return 0; 776 776
+6 -8
drivers/net/dsa/sja1105/sja1105_main.c
··· 122 122 123 123 for (i = 0; i < ds->num_ports; i++) { 124 124 mac[i] = default_mac; 125 - if (i == dsa_upstream_port(priv->ds, i)) { 126 - /* STP doesn't get called for CPU port, so we need to 127 - * set the I/O parameters statically. 128 - */ 129 - mac[i].dyn_learn = true; 130 - mac[i].ingress = true; 131 - mac[i].egress = true; 132 - } 125 + 126 + /* Let sja1105_bridge_stp_state_set() keep address learning 127 + * enabled for the CPU port. 128 + */ 129 + if (dsa_is_cpu_port(ds, i)) 130 + priv->learn_ena |= BIT(i); 133 131 } 134 132 135 133 return 0;
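The sja1105 fix stops forcing static learning parameters on the CPU port and instead records CPU ports in a `learn_ena` bitmask at setup, so `sja1105_bridge_stp_state_set()` keeps address learning enabled there. A sketch of the bitmask construction (the port-classification array is a stand-in for `dsa_is_cpu_port()`):

```c
#include <assert.h>

/* Build a learn-enable bitmask with a bit set for each CPU port,
 * equivalent to priv->learn_ena |= BIT(i) in the hunk above. */
static unsigned long cpu_port_learn_mask(const int *is_cpu, int nports)
{
	unsigned long learn_ena = 0;

	for (int i = 0; i < nports; i++)
		if (is_cpu[i])
			learn_ena |= 1UL << i;   /* BIT(i) */
	return learn_ena;
}

static unsigned long learn_demo(void)
{
	int is_cpu[4] = { 0, 0, 1, 0 };          /* port 2 is the CPU port */

	return cpu_port_learn_mask(is_cpu, 4);
}
```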
+5
drivers/net/ethernet/atheros/atl1c/atl1c_hw.c
··· 594 594 int ret_val; 595 595 u16 mii_bmcr_data = BMCR_RESET; 596 596 597 + if (hw->nic_type == athr_mt) { 598 + hw->phy_configured = true; 599 + return 0; 600 + } 601 + 597 602 if ((atl1c_read_phy_reg(hw, MII_PHYSID1, &hw->phy_id1) != 0) || 598 603 (atl1c_read_phy_reg(hw, MII_PHYSID2, &hw->phy_id2) != 0)) { 599 604 dev_err(&pdev->dev, "Error get phy ID\n");
+8 -15
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 1640 1640 1641 1641 switch (mode) { 1642 1642 case GENET_POWER_PASSIVE: 1643 - reg &= ~(EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS); 1643 + reg &= ~(EXT_PWR_DOWN_DLL | EXT_PWR_DOWN_BIAS | 1644 + EXT_ENERGY_DET_MASK); 1644 1645 if (GENET_IS_V5(priv)) { 1645 1646 reg &= ~(EXT_PWR_DOWN_PHY_EN | 1646 1647 EXT_PWR_DOWN_PHY_RD | ··· 3238 3237 /* Returns a reusable dma control register value */ 3239 3238 static u32 bcmgenet_dma_disable(struct bcmgenet_priv *priv) 3240 3239 { 3240 + unsigned int i; 3241 3241 u32 reg; 3242 3242 u32 dma_ctrl; 3243 3243 3244 3244 /* disable DMA */ 3245 3245 dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN; 3246 + for (i = 0; i < priv->hw_params->tx_queues; i++) 3247 + dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT)); 3246 3248 reg = bcmgenet_tdma_readl(priv, DMA_CTRL); 3247 3249 reg &= ~dma_ctrl; 3248 3250 bcmgenet_tdma_writel(priv, reg, DMA_CTRL); 3249 3251 3252 + dma_ctrl = 1 << (DESC_INDEX + DMA_RING_BUF_EN_SHIFT) | DMA_EN; 3253 + for (i = 0; i < priv->hw_params->rx_queues; i++) 3254 + dma_ctrl |= (1 << (i + DMA_RING_BUF_EN_SHIFT)); 3250 3255 reg = bcmgenet_rdma_readl(priv, DMA_CTRL); 3251 3256 reg &= ~dma_ctrl; 3252 3257 bcmgenet_rdma_writel(priv, reg, DMA_CTRL); ··· 3299 3292 { 3300 3293 struct bcmgenet_priv *priv = netdev_priv(dev); 3301 3294 unsigned long dma_ctrl; 3302 - u32 reg; 3303 3295 int ret; 3304 3296 3305 3297 netif_dbg(priv, ifup, dev, "bcmgenet_open\n"); ··· 3323 3317 bcmgenet_set_features(dev, dev->features); 3324 3318 3325 3319 bcmgenet_set_hw_addr(priv, dev->dev_addr); 3326 - 3327 - if (priv->internal_phy) { 3328 - reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT); 3329 - reg |= EXT_ENERGY_DET_MASK; 3330 - bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT); 3331 - } 3332 3320 3333 3321 /* Disable RX/TX DMA and flush TX queues */ 3334 3322 dma_ctrl = bcmgenet_dma_disable(priv); ··· 4139 4139 struct bcmgenet_priv *priv = netdev_priv(dev); 4140 4140 struct bcmgenet_rxnfc_rule *rule; 4141 4141 unsigned long dma_ctrl; 
4142 - u32 reg; 4143 4142 int ret; 4144 4143 4145 4144 if (!netif_running(dev)) ··· 4174 4175 list_for_each_entry(rule, &priv->rxnfc_list, list) 4175 4176 if (rule->state != BCMGENET_RXNFC_STATE_UNUSED) 4176 4177 bcmgenet_hfb_create_rxnfc_filter(priv, rule); 4177 - 4178 - if (priv->internal_phy) { 4179 - reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT); 4180 - reg |= EXT_ENERGY_DET_MASK; 4181 - bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT); 4182 - } 4183 4178 4184 4179 /* Disable RX/TX DMA and flush TX queues */ 4185 4180 dma_ctrl = bcmgenet_dma_disable(priv);
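The bcmgenet `dma_disable` change builds the ring-disable mask over every Tx/Rx queue rather than only the default descriptor ring. A simplified sketch of that mask construction; the bit positions here (enable bit at bit 0, per-ring bits starting at the shift) are illustrative, not the GENET register layout:

```c
#include <assert.h>

/* Build a DMA-control mask covering the global enable bit, the default
 * descriptor ring, and every per-queue ring -- the shape of the loop the
 * fix adds to bcmgenet_dma_disable(). */
static unsigned int build_ring_mask(unsigned int desc_index,
				    unsigned int shift, unsigned int nrings)
{
	unsigned int mask = (1u << (desc_index + shift)) | 1u; /* DMA_EN */

	for (unsigned int i = 0; i < nrings; i++)
		mask |= 1u << (i + shift);
	return mask;
}
```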
-6
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 186 186 reg |= CMD_RX_EN; 187 187 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 188 188 189 - if (priv->hw_params->flags & GENET_HAS_EXT) { 190 - reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT); 191 - reg &= ~EXT_ENERGY_DET_MASK; 192 - bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT); 193 - } 194 - 195 189 reg = UMAC_IRQ_MPD_R; 196 190 if (hfb_enable) 197 191 reg |= UMAC_IRQ_HFB_SM | UMAC_IRQ_HFB_MM;
+10 -8
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 2643 2643 { 2644 2644 unsigned int i; 2645 2645 2646 + if (!is_uld(adap)) 2647 + return; 2648 + 2646 2649 mutex_lock(&uld_mutex); 2647 2650 list_del(&adap->list_node); 2648 2651 ··· 7144 7141 */ 7145 7142 destroy_workqueue(adapter->workq); 7146 7143 7147 - if (is_uld(adapter)) { 7148 - detach_ulds(adapter); 7149 - t4_uld_clean_up(adapter); 7150 - } 7144 + detach_ulds(adapter); 7145 + 7146 + for_each_port(adapter, i) 7147 + if (adapter->port[i]->reg_state == NETREG_REGISTERED) 7148 + unregister_netdev(adapter->port[i]); 7149 + 7150 + t4_uld_clean_up(adapter); 7151 7151 7152 7152 adap_free_hma_mem(adapter); 7153 7153 7154 7154 disable_interrupts(adapter); 7155 7155 7156 7156 cxgb4_free_mps_ref_entries(adapter); 7157 - 7158 - for_each_port(adapter, i) 7159 - if (adapter->port[i]->reg_state == NETREG_REGISTERED) 7160 - unregister_netdev(adapter->port[i]); 7161 7157 7162 7158 debugfs_remove_recursive(adapter->debugfs_root); 7163 7159
+3
drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
··· 581 581 { 582 582 unsigned int i; 583 583 584 + if (!is_uld(adap)) 585 + return; 586 + 584 587 mutex_lock(&uld_mutex); 585 588 for (i = 0; i < CXGB4_ULD_MAX; i++) { 586 589 if (!adap->uld[i].handle)
+8 -11
drivers/net/ethernet/google/gve/gve_main.c
··· 1469 1469 1470 1470 err = pci_enable_device(pdev); 1471 1471 if (err) 1472 - return -ENXIO; 1472 + return err; 1473 1473 1474 1474 err = pci_request_regions(pdev, "gvnic-cfg"); 1475 1475 if (err) ··· 1477 1477 1478 1478 pci_set_master(pdev); 1479 1479 1480 - err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64)); 1480 + err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 1481 1481 if (err) { 1482 1482 dev_err(&pdev->dev, "Failed to set dma mask: err=%d\n", err); 1483 - goto abort_with_pci_region; 1484 - } 1485 - 1486 - err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64)); 1487 - if (err) { 1488 - dev_err(&pdev->dev, 1489 - "Failed to set consistent dma mask: err=%d\n", err); 1490 1483 goto abort_with_pci_region; 1491 1484 } 1492 1485 ··· 1505 1512 dev = alloc_etherdev_mqs(sizeof(*priv), max_tx_queues, max_rx_queues); 1506 1513 if (!dev) { 1507 1514 dev_err(&pdev->dev, "could not allocate netdev\n"); 1515 + err = -ENOMEM; 1508 1516 goto abort_with_db_bar; 1509 1517 } 1510 1518 SET_NETDEV_DEV(dev, &pdev->dev); ··· 1559 1565 1560 1566 err = register_netdev(dev); 1561 1567 if (err) 1562 - goto abort_with_wq; 1568 + goto abort_with_gve_init; 1563 1569 1564 1570 dev_info(&pdev->dev, "GVE version %s\n", gve_version_str); 1565 1571 dev_info(&pdev->dev, "GVE queue format %d\n", (int)priv->queue_format); 1566 1572 gve_clear_probe_in_progress(priv); 1567 1573 queue_work(priv->gve_wq, &priv->service_task); 1568 1574 return 0; 1575 + 1576 + abort_with_gve_init: 1577 + gve_teardown_priv_resources(priv); 1569 1578 1570 1579 abort_with_wq: 1571 1580 destroy_workqueue(priv->gve_wq); ··· 1587 1590 1588 1591 abort_with_enabled: 1589 1592 pci_disable_device(pdev); 1590 - return -ENXIO; 1593 + return err; 1591 1594 } 1592 1595 1593 1596 static void gve_remove(struct pci_dev *pdev)
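The gve probe hunks make two related cleanups: propagate the real errno instead of a hard-coded `-ENXIO`, and add an `abort_with_gve_init` label so resources initialized before a late `register_netdev()` failure are torn down. A minimal model of the goto-unwind pattern (the fake function and `-19` errno are stand-ins, not driver code):

```c
#include <assert.h>

static int fake_register(int fail) { return fail ? -19 /* -ENODEV */ : 0; }

/* Probe-style function: acquire a resource, attempt registration, and on
 * failure unwind in reverse order while returning the real errno. */
static int probe(int fail_register, int *wq_alive)
{
	int err;

	*wq_alive = 1;                 /* stands in for create_workqueue() */

	err = fake_register(fail_register);
	if (err)
		goto abort_with_wq;    /* return err, not a fixed -ENXIO */

	return 0;

abort_with_wq:
	*wq_alive = 0;                 /* destroy_workqueue() equivalent */
	return err;
}

static int probe_demo(void)
{
	int wq;

	if (probe(1, &wq) != -19 || wq != 0)
		return 0;
	if (probe(0, &wq) != 0 || wq != 1)
		return 0;
	return 1;
}
```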
-7
drivers/net/ethernet/google/gve/gve_rx_dqo.c
··· 566 566 return 0; 567 567 } 568 568 569 - /* Prefetch the payload header. */ 570 - prefetch((char *)buf_state->addr + buf_state->page_info.page_offset); 571 - #if L1_CACHE_BYTES < 128 572 - prefetch((char *)buf_state->addr + buf_state->page_info.page_offset + 573 - L1_CACHE_BYTES); 574 - #endif 575 - 576 569 if (eop && buf_len <= priv->rx_copybreak) { 577 570 rx->skb_head = gve_rx_copy(priv->dev, napi, 578 571 &buf_state->page_info, buf_len, 0);
+19 -3
drivers/net/ethernet/ibm/ibmvnic.c
··· 2420 2420 2421 2421 static void __ibmvnic_reset(struct work_struct *work) 2422 2422 { 2423 - struct ibmvnic_rwi *rwi; 2424 2423 struct ibmvnic_adapter *adapter; 2425 2424 bool saved_state = false; 2425 + struct ibmvnic_rwi *tmprwi; 2426 + struct ibmvnic_rwi *rwi; 2426 2427 unsigned long flags; 2427 2428 u32 reset_state; 2428 2429 int rc = 0; ··· 2490 2489 } else { 2491 2490 rc = do_reset(adapter, rwi, reset_state); 2492 2491 } 2493 - kfree(rwi); 2492 + tmprwi = rwi; 2494 2493 adapter->last_reset_time = jiffies; 2495 2494 2496 2495 if (rc) ··· 2498 2497 2499 2498 rwi = get_next_rwi(adapter); 2500 2499 2500 + /* 2501 + * If there is another reset queued, free the previous rwi 2502 + * and process the new reset even if previous reset failed 2503 + * (the previous reset could have failed because of a fail 2504 + * over for instance, so process the fail over). 2505 + * 2506 + * If there are no resets queued and the previous reset failed, 2507 + * the adapter would be in an undefined state. So retry the 2508 + * previous reset as a hard reset. 2509 + */ 2510 + if (rwi) 2511 + kfree(tmprwi); 2512 + else if (rc) 2513 + rwi = tmprwi; 2514 + 2501 2515 if (rwi && (rwi->reset_reason == VNIC_RESET_FAILOVER || 2502 - rwi->reset_reason == VNIC_RESET_MOBILITY)) 2516 + rwi->reset_reason == VNIC_RESET_MOBILITY || rc)) 2503 2517 adapter->force_reset_recovery = true; 2504 2518 } 2505 2519
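The ibmvnic hunk defers `kfree(rwi)` so the just-processed work item can be reused: if another reset is queued, the old item is freed and the new one processed (even after a failure); if nothing is queued and the previous reset failed, that item is retried as a hard reset. The decision logic can be modeled in isolation (`pick_next` is a hypothetical name for illustration):

```c
#include <assert.h>
#include <stddef.h>

struct rwi { int reason; };

/* next: freshly dequeued work item (may be NULL); prev: the item just
 * processed; rc: its result. Returns the item to process next and sets
 * *free_prev when the previous item should be freed. */
static struct rwi *pick_next(struct rwi *next, struct rwi *prev, int rc,
			     int *free_prev)
{
	*free_prev = 0;
	if (next)
		*free_prev = 1;        /* a newer reset supersedes prev */
	else if (rc)
		next = prev;           /* retry the failed reset */
	return next;
}

static int reset_demo(void)
{
	struct rwi a, b;
	int free_prev;

	if (pick_next(&b, &a, 0, &free_prev) != &b || !free_prev)
		return 0;
	if (pick_next(NULL, &a, -1, &free_prev) != &a || free_prev)
		return 0;
	if (pick_next(NULL, &a, 0, &free_prev) != NULL || free_prev)
		return 0;
	return 1;
}
```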
+1
drivers/net/ethernet/intel/e1000e/netdev.c
··· 7664 7664 err_ioremap: 7665 7665 free_netdev(netdev); 7666 7666 err_alloc_etherdev: 7667 + pci_disable_pcie_error_reporting(pdev); 7667 7668 pci_release_mem_regions(pdev); 7668 7669 err_pci_reg: 7669 7670 err_dma:
+1
drivers/net/ethernet/intel/fm10k/fm10k_pci.c
··· 2227 2227 err_ioremap: 2228 2228 free_netdev(netdev); 2229 2229 err_alloc_netdev: 2230 + pci_disable_pcie_error_reporting(pdev); 2230 2231 pci_release_mem_regions(pdev); 2231 2232 err_pci_reg: 2232 2233 err_dma:
+1
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 3798 3798 err_ioremap: 3799 3799 free_netdev(netdev); 3800 3800 err_alloc_etherdev: 3801 + pci_disable_pcie_error_reporting(pdev); 3801 3802 pci_release_regions(pdev); 3802 3803 err_pci_reg: 3803 3804 err_dma:
+13 -2
drivers/net/ethernet/intel/igb/igb_main.c
··· 931 931 **/ 932 932 static int igb_request_msix(struct igb_adapter *adapter) 933 933 { 934 + unsigned int num_q_vectors = adapter->num_q_vectors; 934 935 struct net_device *netdev = adapter->netdev; 935 936 int i, err = 0, vector = 0, free_vector = 0; 936 937 ··· 940 939 if (err) 941 940 goto err_out; 942 941 943 - for (i = 0; i < adapter->num_q_vectors; i++) { 942 + if (num_q_vectors > MAX_Q_VECTORS) { 943 + num_q_vectors = MAX_Q_VECTORS; 944 + dev_warn(&adapter->pdev->dev, 945 + "The number of queue vectors (%d) is higher than max allowed (%d)\n", 946 + adapter->num_q_vectors, MAX_Q_VECTORS); 947 + } 948 + for (i = 0; i < num_q_vectors; i++) { 944 949 struct igb_q_vector *q_vector = adapter->q_vector[i]; 945 950 946 951 vector++; ··· 1685 1678 **/ 1686 1679 static void igb_config_tx_modes(struct igb_adapter *adapter, int queue) 1687 1680 { 1688 - struct igb_ring *ring = adapter->tx_ring[queue]; 1689 1681 struct net_device *netdev = adapter->netdev; 1690 1682 struct e1000_hw *hw = &adapter->hw; 1683 + struct igb_ring *ring; 1691 1684 u32 tqavcc, tqavctrl; 1692 1685 u16 value; 1693 1686 1694 1687 WARN_ON(hw->mac.type != e1000_i210); 1695 1688 WARN_ON(queue < 0 || queue > 1); 1689 + ring = adapter->tx_ring[queue]; 1696 1690 1697 1691 /* If any of the Qav features is enabled, configure queues as SR and 1698 1692 * with HIGH PRIO. If none is, then configure them with LOW PRIO and ··· 3623 3615 err_ioremap: 3624 3616 free_netdev(netdev); 3625 3617 err_alloc_etherdev: 3618 + pci_disable_pcie_error_reporting(pdev); 3626 3619 pci_release_mem_regions(pdev); 3627 3620 err_pci_reg: 3628 3621 err_dma: ··· 4843 4834 dma_unmap_len(tx_buffer, len), 4844 4835 DMA_TO_DEVICE); 4845 4836 } 4837 + 4838 + tx_buffer->next_to_watch = NULL; 4846 4839 4847 4840 /* move us one more past the eop_desc for start of next pkt */ 4848 4841 tx_buffer++;
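The igb_request_msix() fix caps the loop bound so an oversized `num_q_vectors` cannot index past `adapter->q_vector[]`, warning rather than trusting the stored count. The clamp itself is trivial but worth isolating (the `MAX_Q_VECTORS` value here is illustrative, not the driver's define):

```c
#include <assert.h>

#define MAX_Q_VECTORS 8   /* illustrative; the driver defines its own value */

/* Bound a queue-vector count to the size of the backing array. */
static unsigned int clamp_q_vectors(unsigned int requested)
{
	return requested > MAX_Q_VECTORS ? MAX_Q_VECTORS : requested;
}
```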
+1 -1
drivers/net/ethernet/intel/igc/igc.h
··· 578 578 if (hw->phy.ops.read_reg) 579 579 return hw->phy.ops.read_reg(hw, offset, data); 580 580 581 - return 0; 581 + return -EOPNOTSUPP; 582 582 } 583 583 584 584 void igc_reinit_locked(struct igc_adapter *);
+3
drivers/net/ethernet/intel/igc/igc_main.c
··· 232 232 igc_unmap_tx_buffer(tx_ring->dev, tx_buffer); 233 233 } 234 234 235 + tx_buffer->next_to_watch = NULL; 236 + 235 237 /* move us one more past the eop_desc for start of next pkt */ 236 238 tx_buffer++; 237 239 i++; ··· 6056 6054 err_ioremap: 6057 6055 free_netdev(netdev); 6058 6056 err_alloc_etherdev: 6057 + pci_disable_pcie_error_reporting(pdev); 6059 6058 pci_release_mem_regions(pdev); 6060 6059 err_pci_reg: 6061 6060 err_dma:
+1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 11067 11067 disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); 11068 11068 free_netdev(netdev); 11069 11069 err_alloc_etherdev: 11070 + pci_disable_pcie_error_reporting(pdev); 11070 11071 pci_release_mem_regions(pdev); 11071 11072 err_pci_reg: 11072 11073 err_dma:
+13 -7
drivers/net/ethernet/intel/ixgbevf/ipsec.c
··· 211 211 static int ixgbevf_ipsec_parse_proto_keys(struct xfrm_state *xs, 212 212 u32 *mykey, u32 *mysalt) 213 213 { 214 - struct net_device *dev = xs->xso.dev; 214 + struct net_device *dev = xs->xso.real_dev; 215 215 unsigned char *key_data; 216 216 char *alg_name = NULL; 217 217 int key_len; ··· 260 260 **/ 261 261 static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs) 262 262 { 263 - struct net_device *dev = xs->xso.dev; 264 - struct ixgbevf_adapter *adapter = netdev_priv(dev); 265 - struct ixgbevf_ipsec *ipsec = adapter->ipsec; 263 + struct net_device *dev = xs->xso.real_dev; 264 + struct ixgbevf_adapter *adapter; 265 + struct ixgbevf_ipsec *ipsec; 266 266 u16 sa_idx; 267 267 int ret; 268 + 269 + adapter = netdev_priv(dev); 270 + ipsec = adapter->ipsec; 268 271 269 272 if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) { 270 273 netdev_err(dev, "Unsupported protocol 0x%04x for IPsec offload\n", ··· 386 383 **/ 387 384 static void ixgbevf_ipsec_del_sa(struct xfrm_state *xs) 388 385 { 389 - struct net_device *dev = xs->xso.dev; 390 - struct ixgbevf_adapter *adapter = netdev_priv(dev); 391 - struct ixgbevf_ipsec *ipsec = adapter->ipsec; 386 + struct net_device *dev = xs->xso.real_dev; 387 + struct ixgbevf_adapter *adapter; 388 + struct ixgbevf_ipsec *ipsec; 392 389 u16 sa_idx; 390 + 391 + adapter = netdev_priv(dev); 392 + ipsec = adapter->ipsec; 393 393 394 394 if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) { 395 395 sa_idx = xs->xso.offload_handle - IXGBE_IPSEC_BASE_RX_INDEX;
+10 -10
drivers/net/ethernet/marvell/mvneta.c
··· 2299 2299 skb_frag_off_set(frag, pp->rx_offset_correction); 2300 2300 skb_frag_size_set(frag, data_len); 2301 2301 __skb_frag_set_page(frag, page); 2302 - 2303 - /* last fragment */ 2304 - if (len == *size) { 2305 - struct skb_shared_info *sinfo; 2306 - 2307 - sinfo = xdp_get_shared_info_from_buff(xdp); 2308 - sinfo->nr_frags = xdp_sinfo->nr_frags; 2309 - memcpy(sinfo->frags, xdp_sinfo->frags, 2310 - sinfo->nr_frags * sizeof(skb_frag_t)); 2311 - } 2312 2302 } else { 2313 2303 page_pool_put_full_page(rxq->page_pool, page, true); 2304 + } 2305 + 2306 + /* last fragment */ 2307 + if (len == *size) { 2308 + struct skb_shared_info *sinfo; 2309 + 2310 + sinfo = xdp_get_shared_info_from_buff(xdp); 2311 + sinfo->nr_frags = xdp_sinfo->nr_frags; 2312 + memcpy(sinfo->frags, xdp_sinfo->frags, 2313 + sinfo->nr_frags * sizeof(skb_frag_t)); 2314 2314 } 2315 2315 *size -= len; 2316 2316 }
+277 -15
drivers/net/ethernet/marvell/octeontx2/af/cgx.c
··· 86 86 return test_bit(lmac_id, &cgx->lmac_bmap); 87 87 } 88 88 89 + /* Helper function to get sequential index 90 + * given the enabled LMAC of a CGX 91 + */ 92 + static int get_sequence_id_of_lmac(struct cgx *cgx, int lmac_id) 93 + { 94 + int tmp, id = 0; 95 + 96 + for_each_set_bit(tmp, &cgx->lmac_bmap, MAX_LMAC_PER_CGX) { 97 + if (tmp == lmac_id) 98 + break; 99 + id++; 100 + } 101 + 102 + return id; 103 + } 104 + 89 105 struct mac_ops *get_mac_ops(void *cgxd) 90 106 { 91 107 if (!cgxd) ··· 227 211 return mac; 228 212 } 229 213 214 + static void cfg2mac(u64 cfg, u8 *mac_addr) 215 + { 216 + int i, index = 0; 217 + 218 + for (i = ETH_ALEN - 1; i >= 0; i--, index++) 219 + mac_addr[i] = (cfg >> (8 * index)) & 0xFF; 220 + } 221 + 230 222 int cgx_lmac_addr_set(u8 cgx_id, u8 lmac_id, u8 *mac_addr) 231 223 { 232 224 struct cgx *cgx_dev = cgx_get_pdata(cgx_id); 225 + struct lmac *lmac = lmac_pdata(lmac_id, cgx_dev); 233 226 struct mac_ops *mac_ops; 227 + int index, id; 234 228 u64 cfg; 235 229 230 + /* access mac_ops to know csr_offset */ 236 231 mac_ops = cgx_dev->mac_ops; 232 + 237 233 /* copy 6bytes from macaddr */ 238 234 /* memcpy(&cfg, mac_addr, 6); */ 239 235 240 236 cfg = mac2u64 (mac_addr); 241 237 242 - cgx_write(cgx_dev, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (lmac_id * 0x8)), 238 + id = get_sequence_id_of_lmac(cgx_dev, lmac_id); 239 + 240 + index = id * lmac->mac_to_index_bmap.max; 241 + 242 + cgx_write(cgx_dev, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (index * 0x8)), 243 243 cfg | CGX_DMAC_CAM_ADDR_ENABLE | ((u64)lmac_id << 49)); 244 244 245 245 cfg = cgx_read(cgx_dev, lmac_id, CGXX_CMRX_RX_DMAC_CTL0); 246 - cfg |= CGX_DMAC_CTL0_CAM_ENABLE; 246 + cfg |= (CGX_DMAC_CTL0_CAM_ENABLE | CGX_DMAC_BCAST_MODE | 247 + CGX_DMAC_MCAST_MODE); 247 248 cgx_write(cgx_dev, lmac_id, CGXX_CMRX_RX_DMAC_CTL0, cfg); 249 + 250 + return 0; 251 + } 252 + 253 + u64 cgx_read_dmac_ctrl(void *cgxd, int lmac_id) 254 + { 255 + struct mac_ops *mac_ops; 256 + struct cgx *cgx = cgxd; 257 + 258 + if (!cgxd || 
!is_lmac_valid(cgxd, lmac_id)) 259 + return 0; 260 + 261 + cgx = cgxd; 262 + /* Get mac_ops to know csr offset */ 263 + mac_ops = cgx->mac_ops; 264 + 265 + return cgx_read(cgxd, lmac_id, CGXX_CMRX_RX_DMAC_CTL0); 266 + } 267 + 268 + u64 cgx_read_dmac_entry(void *cgxd, int index) 269 + { 270 + struct mac_ops *mac_ops; 271 + struct cgx *cgx; 272 + 273 + if (!cgxd) 274 + return 0; 275 + 276 + cgx = cgxd; 277 + mac_ops = cgx->mac_ops; 278 + return cgx_read(cgx, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (index * 8))); 279 + } 280 + 281 + int cgx_lmac_addr_add(u8 cgx_id, u8 lmac_id, u8 *mac_addr) 282 + { 283 + struct cgx *cgx_dev = cgx_get_pdata(cgx_id); 284 + struct lmac *lmac = lmac_pdata(lmac_id, cgx_dev); 285 + struct mac_ops *mac_ops; 286 + int index, idx; 287 + u64 cfg = 0; 288 + int id; 289 + 290 + if (!lmac) 291 + return -ENODEV; 292 + 293 + mac_ops = cgx_dev->mac_ops; 294 + /* Get available index where entry is to be installed */ 295 + idx = rvu_alloc_rsrc(&lmac->mac_to_index_bmap); 296 + if (idx < 0) 297 + return idx; 298 + 299 + id = get_sequence_id_of_lmac(cgx_dev, lmac_id); 300 + 301 + index = id * lmac->mac_to_index_bmap.max + idx; 302 + 303 + cfg = mac2u64 (mac_addr); 304 + cfg |= CGX_DMAC_CAM_ADDR_ENABLE; 305 + cfg |= ((u64)lmac_id << 49); 306 + cgx_write(cgx_dev, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (index * 0x8)), cfg); 307 + 308 + cfg = cgx_read(cgx_dev, lmac_id, CGXX_CMRX_RX_DMAC_CTL0); 309 + cfg |= (CGX_DMAC_BCAST_MODE | CGX_DMAC_CAM_ACCEPT); 310 + 311 + if (is_multicast_ether_addr(mac_addr)) { 312 + cfg &= ~GENMASK_ULL(2, 1); 313 + cfg |= CGX_DMAC_MCAST_MODE_CAM; 314 + lmac->mcast_filters_count++; 315 + } else if (!lmac->mcast_filters_count) { 316 + cfg |= CGX_DMAC_MCAST_MODE; 317 + } 318 + 319 + cgx_write(cgx_dev, lmac_id, CGXX_CMRX_RX_DMAC_CTL0, cfg); 320 + 321 + return idx; 322 + } 323 + 324 + int cgx_lmac_addr_reset(u8 cgx_id, u8 lmac_id) 325 + { 326 + struct cgx *cgx_dev = cgx_get_pdata(cgx_id); 327 + struct lmac *lmac = lmac_pdata(lmac_id, cgx_dev); 328 + struct 
mac_ops *mac_ops; 329 + u8 index = 0, id; 330 + u64 cfg; 331 + 332 + if (!lmac) 333 + return -ENODEV; 334 + 335 + mac_ops = cgx_dev->mac_ops; 336 + /* Restore index 0 to its default init value as done during 337 + * cgx_lmac_init 338 + */ 339 + set_bit(0, lmac->mac_to_index_bmap.bmap); 340 + 341 + id = get_sequence_id_of_lmac(cgx_dev, lmac_id); 342 + 343 + index = id * lmac->mac_to_index_bmap.max + index; 344 + cgx_write(cgx_dev, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (index * 0x8)), 0); 345 + 346 + /* Reset CGXX_CMRX_RX_DMAC_CTL0 register to default state */ 347 + cfg = cgx_read(cgx_dev, lmac_id, CGXX_CMRX_RX_DMAC_CTL0); 348 + cfg &= ~CGX_DMAC_CAM_ACCEPT; 349 + cfg |= (CGX_DMAC_BCAST_MODE | CGX_DMAC_MCAST_MODE); 350 + cgx_write(cgx_dev, lmac_id, CGXX_CMRX_RX_DMAC_CTL0, cfg); 351 + 352 + return 0; 353 + } 354 + 355 + /* Allows caller to change macaddress associated with index 356 + * in dmac filter table including index 0 reserved for 357 + * interface mac address 358 + */ 359 + int cgx_lmac_addr_update(u8 cgx_id, u8 lmac_id, u8 *mac_addr, u8 index) 360 + { 361 + struct cgx *cgx_dev = cgx_get_pdata(cgx_id); 362 + struct mac_ops *mac_ops; 363 + struct lmac *lmac; 364 + u64 cfg; 365 + int id; 366 + 367 + lmac = lmac_pdata(lmac_id, cgx_dev); 368 + if (!lmac) 369 + return -ENODEV; 370 + 371 + mac_ops = cgx_dev->mac_ops; 372 + /* Validate the index */ 373 + if (index >= lmac->mac_to_index_bmap.max) 374 + return -EINVAL; 375 + 376 + /* ensure index is already set */ 377 + if (!test_bit(index, lmac->mac_to_index_bmap.bmap)) 378 + return -EINVAL; 379 + 380 + id = get_sequence_id_of_lmac(cgx_dev, lmac_id); 381 + 382 + index = id * lmac->mac_to_index_bmap.max + index; 383 + 384 + cfg = cgx_read(cgx_dev, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (index * 0x8))); 385 + cfg &= ~CGX_RX_DMAC_ADR_MASK; 386 + cfg |= mac2u64 (mac_addr); 387 + 388 + cgx_write(cgx_dev, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (index * 0x8)), cfg); 389 + return 0; 390 + } 391 + 392 + int cgx_lmac_addr_del(u8 cgx_id, u8 lmac_id, u8 
index) 393 + { 394 + struct cgx *cgx_dev = cgx_get_pdata(cgx_id); 395 + struct lmac *lmac = lmac_pdata(lmac_id, cgx_dev); 396 + struct mac_ops *mac_ops; 397 + u8 mac[ETH_ALEN]; 398 + u64 cfg; 399 + int id; 400 + 401 + if (!lmac) 402 + return -ENODEV; 403 + 404 + mac_ops = cgx_dev->mac_ops; 405 + /* Validate the index */ 406 + if (index >= lmac->mac_to_index_bmap.max) 407 + return -EINVAL; 408 + 409 + /* Skip deletion for reserved index i.e. index 0 */ 410 + if (index == 0) 411 + return 0; 412 + 413 + rvu_free_rsrc(&lmac->mac_to_index_bmap, index); 414 + 415 + id = get_sequence_id_of_lmac(cgx_dev, lmac_id); 416 + 417 + index = id * lmac->mac_to_index_bmap.max + index; 418 + 419 + /* Read MAC address to check whether it is ucast or mcast */ 420 + cfg = cgx_read(cgx_dev, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (index * 0x8))); 421 + 422 + cfg2mac(cfg, mac); 423 + if (is_multicast_ether_addr(mac)) 424 + lmac->mcast_filters_count--; 425 + 426 + if (!lmac->mcast_filters_count) { 427 + cfg = cgx_read(cgx_dev, lmac_id, CGXX_CMRX_RX_DMAC_CTL0); 428 + cfg &= ~GENMASK_ULL(2, 1); 429 + cfg |= CGX_DMAC_MCAST_MODE; 430 + cgx_write(cgx_dev, lmac_id, CGXX_CMRX_RX_DMAC_CTL0, cfg); 431 + } 432 + 433 + cgx_write(cgx_dev, 0, (CGXX_CMRX_RX_DMAC_CAM0 + (index * 0x8)), 0); 434 + 435 + return 0; 436 + } 437 + 438 + int cgx_lmac_addr_max_entries_get(u8 cgx_id, u8 lmac_id) 439 + { 440 + struct cgx *cgx_dev = cgx_get_pdata(cgx_id); 441 + struct lmac *lmac = lmac_pdata(lmac_id, cgx_dev); 442 + 443 + if (lmac) 444 + return lmac->mac_to_index_bmap.max; 248 445 249 446 return 0; 250 447 } ··· 465 236 u64 cgx_lmac_addr_get(u8 cgx_id, u8 lmac_id) 466 237 { 467 238 struct cgx *cgx_dev = cgx_get_pdata(cgx_id); 239 + struct lmac *lmac = lmac_pdata(lmac_id, cgx_dev); 468 240 struct mac_ops *mac_ops; 241 + int index; 469 242 u64 cfg; 243 + int id; 470 244 471 245 mac_ops = cgx_dev->mac_ops; 472 246 473 - cfg = cgx_read(cgx_dev, 0, CGXX_CMRX_RX_DMAC_CAM0 + lmac_id * 0x8); 247 + id = 
	get_sequence_id_of_lmac(cgx_dev, lmac_id);
+
+	index = id * lmac->mac_to_index_bmap.max;
+
+	cfg = cgx_read(cgx_dev, 0, CGXX_CMRX_RX_DMAC_CAM0 + index * 0x8);
 	return cfg & CGX_RX_DMAC_ADR_MASK;
 }
 
···
 void cgx_lmac_promisc_config(int cgx_id, int lmac_id, bool enable)
 {
 	struct cgx *cgx = cgx_get_pdata(cgx_id);
+	struct lmac *lmac = lmac_pdata(lmac_id, cgx);
+	u16 max_dmac = lmac->mac_to_index_bmap.max;
 	struct mac_ops *mac_ops;
+	int index, i;
 	u64 cfg = 0;
+	int id;
 
 	if (!cgx)
 		return;
+
+	id = get_sequence_id_of_lmac(cgx, lmac_id);
 
 	mac_ops = cgx->mac_ops;
 	if (enable) {
 		/* Enable promiscuous mode on LMAC */
 		cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_RX_DMAC_CTL0);
-		cfg &= ~(CGX_DMAC_CAM_ACCEPT | CGX_DMAC_MCAST_MODE);
-		cfg |= CGX_DMAC_BCAST_MODE;
+		cfg &= ~CGX_DMAC_CAM_ACCEPT;
+		cfg |= (CGX_DMAC_BCAST_MODE | CGX_DMAC_MCAST_MODE);
 		cgx_write(cgx, lmac_id, CGXX_CMRX_RX_DMAC_CTL0, cfg);
 
-		cfg = cgx_read(cgx, 0,
-			       (CGXX_CMRX_RX_DMAC_CAM0 + lmac_id * 0x8));
-		cfg &= ~CGX_DMAC_CAM_ADDR_ENABLE;
-		cgx_write(cgx, 0,
-			  (CGXX_CMRX_RX_DMAC_CAM0 + lmac_id * 0x8), cfg);
+		for (i = 0; i < max_dmac; i++) {
+			index = id * max_dmac + i;
+			cfg = cgx_read(cgx, 0,
+				       (CGXX_CMRX_RX_DMAC_CAM0 + index * 0x8));
+			cfg &= ~CGX_DMAC_CAM_ADDR_ENABLE;
+			cgx_write(cgx, 0,
+				  (CGXX_CMRX_RX_DMAC_CAM0 + index * 0x8), cfg);
+		}
 	} else {
 		/* Disable promiscuous mode */
 		cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_RX_DMAC_CTL0);
 		cfg |= CGX_DMAC_CAM_ACCEPT | CGX_DMAC_MCAST_MODE;
 		cgx_write(cgx, lmac_id, CGXX_CMRX_RX_DMAC_CTL0, cfg);
-		cfg = cgx_read(cgx, 0,
-			       (CGXX_CMRX_RX_DMAC_CAM0 + lmac_id * 0x8));
-		cfg |= CGX_DMAC_CAM_ADDR_ENABLE;
-		cgx_write(cgx, 0,
-			  (CGXX_CMRX_RX_DMAC_CAM0 + lmac_id * 0x8), cfg);
+		for (i = 0; i < max_dmac; i++) {
+			index = id * max_dmac + i;
+			cfg = cgx_read(cgx, 0,
+				       (CGXX_CMRX_RX_DMAC_CAM0 + index * 0x8));
+			if ((cfg & CGX_RX_DMAC_ADR_MASK) != 0) {
+				cfg |= CGX_DMAC_CAM_ADDR_ENABLE;
+				cgx_write(cgx, 0,
+					  (CGXX_CMRX_RX_DMAC_CAM0 +
+					   index * 0x8), cfg);
+			}
+		}
 	}
 }
 
···
 	}
 
 	lmac->cgx = cgx;
+	lmac->mac_to_index_bmap.max =
+			MAX_DMAC_ENTRIES_PER_CGX / cgx->lmac_count;
+	err = rvu_alloc_bitmap(&lmac->mac_to_index_bmap);
+	if (err)
+		return err;
+
+	/* Reserve first entry for default MAC address */
+	set_bit(0, lmac->mac_to_index_bmap.bmap);
+
 	init_waitqueue_head(&lmac->wq_cmd_cmplt);
 	mutex_init(&lmac->cmd_lock);
 	spin_lock_init(&lmac->event_cb_lock);
···
 			continue;
 		cgx->mac_ops->mac_pause_frm_config(cgx, lmac->lmac_id, false);
 		cgx_configure_interrupt(cgx, lmac, lmac->lmac_id, true);
+		kfree(lmac->mac_to_index_bmap.bmap);
 		kfree(lmac->name);
 		kfree(lmac);
 	}
+10
drivers/net/ethernet/marvell/octeontx2/af/cgx.h
···
 
 #define CGX_ID_MASK			0x7
 #define MAX_LMAC_PER_CGX		4
+#define MAX_DMAC_ENTRIES_PER_CGX	32
 #define CGX_FIFO_LEN			65536 /* 64K for both Rx & Tx */
 #define CGX_OFFSET(x)			((x) * MAX_LMAC_PER_CGX)
 
···
 #define CGXX_CMRX_RX_DMAC_CTL0		(0x1F8 + mac_ops->csr_offset)
 #define CGX_DMAC_CTL0_CAM_ENABLE	BIT_ULL(3)
 #define CGX_DMAC_CAM_ACCEPT		BIT_ULL(3)
+#define CGX_DMAC_MCAST_MODE_CAM		BIT_ULL(2)
 #define CGX_DMAC_MCAST_MODE		BIT_ULL(1)
 #define CGX_DMAC_BCAST_MODE		BIT_ULL(0)
 #define CGXX_CMRX_RX_DMAC_CAM0		(0x200 + mac_ops->csr_offset)
 #define CGX_DMAC_CAM_ADDR_ENABLE	BIT_ULL(48)
+#define CGX_DMAC_CAM_ENTRY_LMACID	GENMASK_ULL(50, 49)
 #define CGXX_CMRX_RX_DMAC_CAM1		0x400
 #define CGX_RX_DMAC_ADR_MASK		GENMASK_ULL(47, 0)
 #define CGXX_CMRX_TX_STAT0		0x700
···
 int cgx_lmac_rx_tx_enable(void *cgxd, int lmac_id, bool enable);
 int cgx_lmac_tx_enable(void *cgxd, int lmac_id, bool enable);
 int cgx_lmac_addr_set(u8 cgx_id, u8 lmac_id, u8 *mac_addr);
+int cgx_lmac_addr_reset(u8 cgx_id, u8 lmac_id);
 u64 cgx_lmac_addr_get(u8 cgx_id, u8 lmac_id);
+int cgx_lmac_addr_add(u8 cgx_id, u8 lmac_id, u8 *mac_addr);
+int cgx_lmac_addr_del(u8 cgx_id, u8 lmac_id, u8 index);
+int cgx_lmac_addr_max_entries_get(u8 cgx_id, u8 lmac_id);
 void cgx_lmac_promisc_config(int cgx_id, int lmac_id, bool enable);
 void cgx_lmac_enadis_rx_pause_fwding(void *cgxd, int lmac_id, bool enable);
 int cgx_lmac_internal_loopback(void *cgxd, int lmac_id, bool enable);
···
 unsigned long cgx_get_lmac_bmap(void *cgxd);
 void cgx_lmac_write(int cgx_id, int lmac_id, u64 offset, u64 val);
 u64 cgx_lmac_read(int cgx_id, int lmac_id, u64 offset);
+int cgx_lmac_addr_update(u8 cgx_id, u8 lmac_id, u8 *mac_addr, u8 index);
+u64 cgx_read_dmac_ctrl(void *cgxd, int lmac_id);
+u64 cgx_read_dmac_entry(void *cgxd, int index);
 
 #endif /* CGX_H */
+8 -4
drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
···
 #include "rvu.h"
 #include "cgx.h"
 /**
- * struct lmac
+ * struct lmac - per lmac locks and properties
  * @wq_cmd_cmplt:	waitq to keep the process blocked until cmd completion
  * @cmd_lock:		Lock to serialize the command interface
  * @resp:		command response
  * @link_info:		link related information
+ * @mac_to_index_bmap:	Mac address to CGX table index mapping
  * @event_cb:		callback for linkchange events
  * @event_cb_lock:	lock for serializing callback with unregister
+ * @cgx:		parent cgx port
+ * @mcast_filters_count: Number of multicast filters installed
+ * @lmac_id:		lmac port id
  * @cmd_pend:		flag set before new command is started
  *			flag cleared after command response is received
- * @cgx:		parent cgx port
- * @lmac_id:		lmac port id
  * @name:		lmac port name
  */
 struct lmac {
···
 	struct mutex cmd_lock;
 	u64 resp;
 	struct cgx_link_user_info link_info;
+	struct rsrc_bmap mac_to_index_bmap;
 	struct cgx_event_cb event_cb;
 	/* lock for serializing callback with unregister */
 	spinlock_t event_cb_lock;
-	bool cmd_pend;
 	struct cgx *cgx;
+	u8 mcast_filters_count;
 	u8 lmac_id;
+	bool cmd_pend;
 	char *name;
 };
+57 -1
drivers/net/ethernet/marvell/octeontx2/af/mbox.h
···
 M(VF_FLR,		0x006, vf_flr, msg_req, msg_rsp)		\
 M(PTP_OP,		0x007, ptp_op, ptp_req, ptp_rsp)		\
 M(GET_HW_CAP,		0x008, get_hw_cap, msg_req, get_hw_cap_rsp)	\
+M(LMTST_TBL_SETUP,	0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req,	\
+				msg_rsp)				\
 M(SET_VF_PERM,		0x00b, set_vf_perm, set_vf_perm, msg_rsp)	\
 /* CGX mbox IDs (range 0x200 - 0x3FF) */				\
 M(CGX_START_RXTX,	0x200, cgx_start_rxtx, msg_req, msg_rsp)	\
···
 M(CGX_FEATURES_GET,	0x215, cgx_features_get, msg_req,		\
			cgx_features_info_msg)				\
 M(RPM_STATS,		0x216, rpm_stats, msg_req, rpm_stats_rsp)	\
-/* NPA mbox IDs (range 0x400 - 0x5FF) */				\
+M(CGX_MAC_ADDR_ADD,	0x217, cgx_mac_addr_add, cgx_mac_addr_add_req,	\
+			       cgx_mac_addr_add_rsp)			\
+M(CGX_MAC_ADDR_DEL,	0x218, cgx_mac_addr_del, cgx_mac_addr_del_req,	\
+			       msg_rsp)					\
+M(CGX_MAC_MAX_ENTRIES_GET, 0x219, cgx_mac_max_entries_get, msg_req,	\
+				  cgx_max_dmac_entries_get_rsp)		\
+M(CGX_MAC_ADDR_RESET,	0x21A, cgx_mac_addr_reset, msg_req, msg_rsp)	\
+M(CGX_MAC_ADDR_UPDATE,	0x21B, cgx_mac_addr_update, cgx_mac_addr_update_req, \
+			       msg_rsp)					\
 /* NPA mbox IDs (range 0x400 - 0x5FF) */				\
 M(NPA_LF_ALLOC,		0x400, npa_lf_alloc,				\
				npa_lf_alloc_req, npa_lf_alloc_rsp)	\
···
	u8 mac_addr[ETH_ALEN];
 };
 
+/* Structure for requesting the operation to
+ * add DMAC filter entry into CGX interface
+ */
+struct cgx_mac_addr_add_req {
+	struct mbox_msghdr hdr;
+	u8 mac_addr[ETH_ALEN];
+};
+
+/* Structure for response against the operation to
+ * add DMAC filter entry into CGX interface
+ */
+struct cgx_mac_addr_add_rsp {
+	struct mbox_msghdr hdr;
+	u8 index;
+};
+
+/* Structure for requesting the operation to
+ * delete DMAC filter entry from CGX interface
+ */
+struct cgx_mac_addr_del_req {
+	struct mbox_msghdr hdr;
+	u8 index;
+};
+
+/* Structure for response against the operation to
+ * get maximum supported DMAC filter entries
+ */
+struct cgx_max_dmac_entries_get_rsp {
+	struct mbox_msghdr hdr;
+	u8 max_dmac_filters;
+};
+
 struct cgx_link_user_info {
	uint64_t link_up:1;
	uint64_t full_duplex:1;
···
 struct cgx_set_link_mode_rsp {
	struct mbox_msghdr hdr;
	int status;
+};
+
+struct cgx_mac_addr_update_req {
+	struct mbox_msghdr hdr;
+	u8 mac_addr[ETH_ALEN];
+	u8 index;
 };
 
 #define RVU_LMAC_FEAT_FC	BIT_ULL(0) /* pause frames */
···
 #define RESET_VF_PERM		BIT_ULL(0)
 #define VF_TRUSTED		BIT_ULL(1)
	u64 flags;
+};
+
+struct lmtst_tbl_setup_req {
+	struct mbox_msghdr hdr;
+	u16 base_pcifunc;
+	u8  use_local_lmt_region;
+	u64 lmt_iova;
+	u64 rsvd[4];
 };
 
 /* CPT mailbox error codes
+1
drivers/net/ethernet/marvell/octeontx2/af/rvu.c
···
	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_SSOW);
	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_SSO);
	rvu_blklf_teardown(rvu, pcifunc, BLKADDR_NPA);
+	rvu_reset_lmt_map_tbl(rvu, pcifunc);
	rvu_detach_rsrcs(rvu, NULL, pcifunc);
	mutex_unlock(&rvu->flr_lock);
 }
+7
drivers/net/ethernet/marvell/octeontx2/af/rvu.h
···
	u8	nix_blkaddr; /* BLKADDR_NIX0/1 assigned to this PF */
	u8	nix_rx_intf; /* NIX0_RX/NIX1_RX interface to NPC */
	u8	nix_tx_intf; /* NIX0_TX/NIX1_TX interface to NPC */
+	u64	lmt_base_addr; /* Preserving the pcifunc's lmtst base addr */
	unsigned long flags;
 };
 
···
 int rvu_cgx_start_stop_io(struct rvu *rvu, u16 pcifunc, bool start);
 int rvu_cgx_nix_cuml_stats(struct rvu *rvu, void *cgxd, int lmac_id, int index,
			   int rxtxflag, u64 *stat);
+void rvu_cgx_disable_dmac_entries(struct rvu *rvu, u16 pcifunc);
+
 /* NPA APIs */
 int rvu_npa_init(struct rvu *rvu);
 void rvu_npa_freemem(struct rvu *rvu);
···
 bool is_mac_feature_supported(struct rvu *rvu, int pf, int feature);
 u32  rvu_cgx_get_fifolen(struct rvu *rvu);
 void *rvu_first_cgx_pdata(struct rvu *rvu);
+int cgxlmac_to_pf(struct rvu *rvu, int cgx_id, int lmac_id);
 
 int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
			     int type);
···
 /* CN10K RVU */
 int rvu_set_channels_base(struct rvu *rvu);
 void rvu_program_channels(struct rvu *rvu);
+
+/* CN10K RVU - LMT */
+void rvu_reset_lmt_map_tbl(struct rvu *rvu, u16 pcifunc);
 
 #ifdef CONFIG_DEBUG_FS
 void rvu_dbg_init(struct rvu *rvu);
+110 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
···
	return rvu->cgxlmac2pf_map[CGX_OFFSET(cgx_id) + lmac_id];
 }
 
-static int cgxlmac_to_pf(struct rvu *rvu, int cgx_id, int lmac_id)
+int cgxlmac_to_pf(struct rvu *rvu, int cgx_id, int lmac_id)
 {
	unsigned long pfmap;
 
···
	return 0;
 }
 
+void rvu_cgx_disable_dmac_entries(struct rvu *rvu, u16 pcifunc)
+{
+	int pf = rvu_get_pf(pcifunc);
+	int i = 0, lmac_count = 0;
+	u8 max_dmac_filters;
+	u8 cgx_id, lmac_id;
+	void *cgx_dev;
+
+	if (!is_cgx_config_permitted(rvu, pcifunc))
+		return;
+
+	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+	cgx_dev = cgx_get_pdata(cgx_id);
+	lmac_count = cgx_get_lmac_cnt(cgx_dev);
+	max_dmac_filters = MAX_DMAC_ENTRIES_PER_CGX / lmac_count;
+
+	for (i = 0; i < max_dmac_filters; i++)
+		cgx_lmac_addr_del(cgx_id, lmac_id, i);
+
+	/* As cgx_lmac_addr_del does not clear entry for index 0
+	 * so it needs to be done explicitly
+	 */
+	cgx_lmac_addr_reset(cgx_id, lmac_id);
+}
+
 int rvu_mbox_handler_cgx_start_rxtx(struct rvu *rvu, struct msg_req *req,
				    struct msg_rsp *rsp)
 {
···
 
	cgx_lmac_addr_set(cgx_id, lmac_id, req->mac_addr);
 
+	return 0;
+}
+
+int rvu_mbox_handler_cgx_mac_addr_add(struct rvu *rvu,
+				      struct cgx_mac_addr_add_req *req,
+				      struct cgx_mac_addr_add_rsp *rsp)
+{
+	int pf = rvu_get_pf(req->hdr.pcifunc);
+	u8 cgx_id, lmac_id;
+	int rc = 0;
+
+	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+		return -EPERM;
+
+	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+	rc = cgx_lmac_addr_add(cgx_id, lmac_id, req->mac_addr);
+	if (rc >= 0) {
+		rsp->index = rc;
+		return 0;
+	}
+
+	return rc;
+}
+
+int rvu_mbox_handler_cgx_mac_addr_del(struct rvu *rvu,
+				      struct cgx_mac_addr_del_req *req,
+				      struct msg_rsp *rsp)
+{
+	int pf = rvu_get_pf(req->hdr.pcifunc);
+	u8 cgx_id, lmac_id;
+
+	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+		return -EPERM;
+
+	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+	return cgx_lmac_addr_del(cgx_id, lmac_id, req->index);
+}
+
+int rvu_mbox_handler_cgx_mac_max_entries_get(struct rvu *rvu,
+					     struct msg_req *req,
+					     struct cgx_max_dmac_entries_get_rsp
+					     *rsp)
+{
+	int pf = rvu_get_pf(req->hdr.pcifunc);
+	u8 cgx_id, lmac_id;
+
+	/* If msg is received from PFs(which are not mapped to CGX LMACs)
+	 * or VF then no entries are allocated for DMAC filters at CGX level.
+	 * So returning zero.
+	 */
+	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc)) {
+		rsp->max_dmac_filters = 0;
+		return 0;
+	}
+
+	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+	rsp->max_dmac_filters = cgx_lmac_addr_max_entries_get(cgx_id, lmac_id);
	return 0;
 }
 
···
	cgxd = rvu_cgx_pdata(cgx_idx, rvu);
	rsp->status = cgx_set_link_mode(cgxd, req->args, cgx_idx, lmac);
	return 0;
+}
+
+int rvu_mbox_handler_cgx_mac_addr_reset(struct rvu *rvu, struct msg_req *req,
+					struct msg_rsp *rsp)
+{
+	int pf = rvu_get_pf(req->hdr.pcifunc);
+	u8 cgx_id, lmac_id;
+
+	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+		return -EPERM;
+
+	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+	return cgx_lmac_addr_reset(cgx_id, lmac_id);
+}
+
+int rvu_mbox_handler_cgx_mac_addr_update(struct rvu *rvu,
+					 struct cgx_mac_addr_update_req *req,
+					 struct msg_rsp *rsp)
+{
+	int pf = rvu_get_pf(req->hdr.pcifunc);
+	u8 cgx_id, lmac_id;
+
+	if (!is_cgx_config_permitted(rvu, req->hdr.pcifunc))
+		return -EPERM;
+
+	rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
+	return cgx_lmac_addr_update(cgx_id, lmac_id, req->mac_addr, req->index);
 }
+200
drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
···
 #include "cgx.h"
 #include "rvu_reg.h"
 
+/* RVU LMTST */
+#define LMT_TBL_OP_READ		0
+#define LMT_TBL_OP_WRITE	1
+#define LMT_MAP_TABLE_SIZE	(128 * 1024)
+#define LMT_MAPTBL_ENTRY_SIZE	16
+
+/* Function to perform operations (read/write) on lmtst map table */
+static int lmtst_map_table_ops(struct rvu *rvu, u32 index, u64 *val,
+			       int lmt_tbl_op)
+{
+	void __iomem *lmt_map_base;
+	u64 tbl_base;
+
+	tbl_base = rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_MAP_BASE);
+
+	lmt_map_base = ioremap_wc(tbl_base, LMT_MAP_TABLE_SIZE);
+	if (!lmt_map_base) {
+		dev_err(rvu->dev, "Failed to setup lmt map table mapping!!\n");
+		return -ENOMEM;
+	}
+
+	if (lmt_tbl_op == LMT_TBL_OP_READ) {
+		*val = readq(lmt_map_base + index);
+	} else {
+		writeq((*val), (lmt_map_base + index));
+		/* Flushing the AP interceptor cache to make APR_LMT_MAP_ENTRY_S
+		 * changes effective. Write 1 for flush and read is being used
+		 * as a barrier and sets up a data dependency. Write to 0 after
+		 * a write to 1 to complete the flush.
+		 */
+		rvu_write64(rvu, BLKADDR_APR, APR_AF_LMT_CTL, BIT_ULL(0));
+		rvu_read64(rvu, BLKADDR_APR, APR_AF_LMT_CTL);
+		rvu_write64(rvu, BLKADDR_APR, APR_AF_LMT_CTL, 0x00);
+	}
+
+	iounmap(lmt_map_base);
+	return 0;
+}
+
+static u32 rvu_get_lmtst_tbl_index(struct rvu *rvu, u16 pcifunc)
+{
+	return ((rvu_get_pf(pcifunc) * rvu->hw->total_vfs) +
+		(pcifunc & RVU_PFVF_FUNC_MASK)) * LMT_MAPTBL_ENTRY_SIZE;
+}
+
+static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
+			   u64 iova, u64 *lmt_addr)
+{
+	u64 pa, val, pf;
+	int err;
+
+	if (!iova) {
+		dev_err(rvu->dev, "%s Requested Null address for translation\n", __func__);
+		return -EINVAL;
+	}
+
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_REQ, iova);
+	pf = rvu_get_pf(pcifunc) & 0x1F;
+	val = BIT_ULL(63) | BIT_ULL(14) | BIT_ULL(13) | pf << 8 |
+	      ((pcifunc & RVU_PFVF_FUNC_MASK) & 0xFF);
+	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TXN_REQ, val);
+
+	err = rvu_poll_reg(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS, BIT_ULL(0), false);
+	if (err) {
+		dev_err(rvu->dev, "%s LMTLINE iova translation failed\n", __func__);
+		return err;
+	}
+	val = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS);
+	if (val & ~0x1ULL) {
+		dev_err(rvu->dev, "%s LMTLINE iova translation failed err:%llx\n", __func__, val);
+		return -EIO;
+	}
+	/* PA[51:12] = RVU_AF_SMMU_TLN_FLIT1[60:21]
+	 * PA[11:0] = IOVA[11:0]
+	 */
+	pa = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TLN_FLIT1) >> 21;
+	pa &= GENMASK_ULL(39, 0);
+	*lmt_addr = (pa << 12) | (iova & 0xFFF);
+
+	return 0;
+}
+
+static int rvu_update_lmtaddr(struct rvu *rvu, u16 pcifunc, u64 lmt_addr)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	u32 tbl_idx;
+	int err = 0;
+	u64 val;
+
+	/* Read the current lmt addr of pcifunc */
+	tbl_idx = rvu_get_lmtst_tbl_index(rvu, pcifunc);
+	err = lmtst_map_table_ops(rvu, tbl_idx, &val, LMT_TBL_OP_READ);
+	if (err) {
+		dev_err(rvu->dev,
+			"Failed to read LMT map table: index 0x%x err %d\n",
+			tbl_idx, err);
+		return err;
+	}
+
+	/* Storing the secondary's lmt base address as this needs to be
+	 * reverted in FLR. Also making sure this default value doesn't
+	 * get overwritten on multiple calls to this mailbox.
+	 */
+	if (!pfvf->lmt_base_addr)
+		pfvf->lmt_base_addr = val;
+
+	/* Update the LMT table with new addr */
+	err = lmtst_map_table_ops(rvu, tbl_idx, &lmt_addr, LMT_TBL_OP_WRITE);
+	if (err) {
+		dev_err(rvu->dev,
+			"Failed to update LMT map table: index 0x%x err %d\n",
+			tbl_idx, err);
+		return err;
+	}
+	return 0;
+}
+
+int rvu_mbox_handler_lmtst_tbl_setup(struct rvu *rvu,
+				     struct lmtst_tbl_setup_req *req,
+				     struct msg_rsp *rsp)
+{
+	u64 lmt_addr, val;
+	u32 pri_tbl_idx;
+	int err = 0;
+
+	/* Check if PF_FUNC wants to use its own local memory as LMTLINE
+	 * region, if so, convert that IOVA to physical address and
+	 * populate LMT table with that address
+	 */
+	if (req->use_local_lmt_region) {
+		err = rvu_get_lmtaddr(rvu, req->hdr.pcifunc,
+				      req->lmt_iova, &lmt_addr);
+		if (err < 0)
+			return err;
+
+		/* Update the lmt addr for this PFFUNC in the LMT table */
+		err = rvu_update_lmtaddr(rvu, req->hdr.pcifunc, lmt_addr);
+		if (err)
+			return err;
+	}
+
+	/* Reconfiguring lmtst map table in lmt region shared mode i.e. make
+	 * multiple PF_FUNCs to share an LMTLINE region, so primary/base
+	 * pcifunc (which is passed as an argument to mailbox) is the one
+	 * whose lmt base address will be shared among other secondary
+	 * pcifunc (will be the one who is calling this mailbox).
+	 */
+	if (req->base_pcifunc) {
+		/* Calculating the LMT table index equivalent to primary
+		 * pcifunc.
+		 */
+		pri_tbl_idx = rvu_get_lmtst_tbl_index(rvu, req->base_pcifunc);
+
+		/* Read the base lmt addr of the primary pcifunc */
+		err = lmtst_map_table_ops(rvu, pri_tbl_idx, &val,
+					  LMT_TBL_OP_READ);
+		if (err) {
+			dev_err(rvu->dev,
+				"Failed to read LMT map table: index 0x%x err %d\n",
+				pri_tbl_idx, err);
+			return err;
+		}
+
+		/* Update the base lmt addr of secondary with primary's base
+		 * lmt addr.
+		 */
+		err = rvu_update_lmtaddr(rvu, req->hdr.pcifunc, val);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+/* Resetting the lmtst map table to original base addresses */
+void rvu_reset_lmt_map_tbl(struct rvu *rvu, u16 pcifunc)
+{
+	struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc);
+	u32 tbl_idx;
+	int err;
+
+	if (is_rvu_otx2(rvu))
+		return;
+
+	if (pfvf->lmt_base_addr) {
+		/* This corresponds to lmt map table index */
+		tbl_idx = rvu_get_lmtst_tbl_index(rvu, pcifunc);
+		/* Reverting back original lmt base addr for respective
+		 * pcifunc.
+		 */
+		err = lmtst_map_table_ops(rvu, tbl_idx, &pfvf->lmt_base_addr,
+					  LMT_TBL_OP_WRITE);
+		if (err)
+			dev_err(rvu->dev,
+				"Failed to update LMT map table: index 0x%x err %d\n",
+				tbl_idx, err);
+		pfvf->lmt_base_addr = 0;
+	}
+}
+
 int rvu_set_channels_base(struct rvu *rvu)
 {
	struct rvu_hwinfo *hw = rvu->hw;
+80 -8
drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
···
	return err;
 }
 
-static int rvu_dbg_cgx_stat_display(struct seq_file *filp, void *unused)
+static int rvu_dbg_derive_lmacid(struct seq_file *filp, int *lmac_id)
 {
	struct dentry *current_dir;
-	int err, lmac_id;
	char *buf;
 
	current_dir = filp->file->f_path.dentry->d_parent;
···
	if (!buf)
		return -EINVAL;
 
-	err = kstrtoint(buf + 1, 10, &lmac_id);
-	if (!err) {
-		err = cgx_print_stats(filp, lmac_id);
-		if (err)
-			return err;
-	}
+	return kstrtoint(buf + 1, 10, lmac_id);
+}
+
+static int rvu_dbg_cgx_stat_display(struct seq_file *filp, void *unused)
+{
+	int lmac_id, err;
+
+	err = rvu_dbg_derive_lmacid(filp, &lmac_id);
+	if (!err)
+		return cgx_print_stats(filp, lmac_id);
+
	return err;
 }
 
 RVU_DEBUG_SEQ_FOPS(cgx_stat, cgx_stat_display, NULL);
+
+static int cgx_print_dmac_flt(struct seq_file *s, int lmac_id)
+{
+	struct pci_dev *pdev = NULL;
+	void *cgxd = s->private;
+	char *bcast, *mcast;
+	u16 index, domain;
+	u8 dmac[ETH_ALEN];
+	struct rvu *rvu;
+	u64 cfg, mac;
+	int pf;
+
+	rvu = pci_get_drvdata(pci_get_device(PCI_VENDOR_ID_CAVIUM,
+					     PCI_DEVID_OCTEONTX2_RVU_AF, NULL));
+	if (!rvu)
+		return -ENODEV;
+
+	pf = cgxlmac_to_pf(rvu, cgx_get_cgxid(cgxd), lmac_id);
+	domain = 2;
+
+	pdev = pci_get_domain_bus_and_slot(domain, pf + 1, 0);
+	if (!pdev)
+		return 0;
+
+	cfg = cgx_read_dmac_ctrl(cgxd, lmac_id);
+	bcast = cfg & CGX_DMAC_BCAST_MODE ? "ACCEPT" : "REJECT";
+	mcast = cfg & CGX_DMAC_MCAST_MODE ? "ACCEPT" : "REJECT";
+
+	seq_puts(s,
+		 "PCI dev       RVUPF   BROADCAST  MULTICAST  FILTER-MODE\n");
+	seq_printf(s, "%s  PF%d  %9s  %9s",
+		   dev_name(&pdev->dev), pf, bcast, mcast);
+	if (cfg & CGX_DMAC_CAM_ACCEPT)
+		seq_printf(s, "%12s\n\n", "UNICAST");
+	else
+		seq_printf(s, "%16s\n\n", "PROMISCUOUS");
+
+	seq_puts(s, "\nDMAC-INDEX  ADDRESS\n");
+
+	for (index = 0 ; index < 32 ; index++) {
+		cfg = cgx_read_dmac_entry(cgxd, index);
+		/* Display enabled dmac entries associated with current lmac */
+		if (lmac_id == FIELD_GET(CGX_DMAC_CAM_ENTRY_LMACID, cfg) &&
+		    FIELD_GET(CGX_DMAC_CAM_ADDR_ENABLE, cfg)) {
+			mac = FIELD_GET(CGX_RX_DMAC_ADR_MASK, cfg);
+			u64_to_ether_addr(mac, dmac);
+			seq_printf(s, "%7d     %pM\n", index, dmac);
+		}
+	}
+
+	return 0;
+}
+
+static int rvu_dbg_cgx_dmac_flt_display(struct seq_file *filp, void *unused)
+{
+	int err, lmac_id;
+
+	err = rvu_dbg_derive_lmacid(filp, &lmac_id);
+	if (!err)
+		return cgx_print_dmac_flt(filp, lmac_id);
+
+	return err;
+}
+
+RVU_DEBUG_SEQ_FOPS(cgx_dmac_flt, cgx_dmac_flt_display, NULL);
 
 static void rvu_dbg_cgx_init(struct rvu *rvu)
 {
···
 
		debugfs_create_file("stats", 0600, rvu->rvu_dbg.lmac,
				    cgx, &rvu_dbg_cgx_stat_fops);
+		debugfs_create_file("mac_filter", 0600,
+				    rvu->rvu_dbg.lmac, cgx,
+				    &rvu_dbg_cgx_dmac_flt_fops);
	}
 }
+3
drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
···
 
	/* Free and disable any MCAM entries used by this NIX LF */
	rvu_npc_disable_mcam_entries(rvu, pcifunc, nixlf);
+
+	/* Disable DMAC filters used */
+	rvu_cgx_disable_dmac_entries(rvu, pcifunc);
 }
 
 int rvu_mbox_handler_nix_bp_disable(struct rvu *rvu,
+10
drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
···
 #define RVU_AF_PFX_VF_BAR4_ADDR		(0x5400 | (a) << 4)
 #define RVU_AF_PFX_VF_BAR4_CFG		(0x5600 | (a) << 4)
 #define RVU_AF_PFX_LMTLINE_ADDR		(0x5800 | (a) << 4)
+#define RVU_AF_SMMU_ADDR_REQ		(0x6000)
+#define RVU_AF_SMMU_TXN_REQ		(0x6008)
+#define RVU_AF_SMMU_ADDR_RSP_STS	(0x6010)
+#define RVU_AF_SMMU_ADDR_TLN		(0x6018)
+#define RVU_AF_SMMU_TLN_FLIT1		(0x6030)
 
 /* Admin function's privileged PF/VF registers */
 #define RVU_PRIV_CONST			(0x8000000)
···
 #define LBK_LINK_CFG_RANGE_MASK		GENMASK_ULL(19, 16)
 #define LBK_LINK_CFG_ID_MASK		GENMASK_ULL(11, 6)
 #define LBK_LINK_CFG_BASE_MASK		GENMASK_ULL(5, 0)
+
+/* APR */
+#define APR_AF_LMT_CFG			(0x000ull)
+#define APR_AF_LMT_MAP_BASE		(0x008ull)
+#define APR_AF_LMT_CTL			(0x010ull)
 
 #endif /* RVU_REG_H */
+2 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h
···
	BLKADDR_NDC_NPA0	= 0xeULL,
	BLKADDR_NDC_NIX1_RX	= 0x10ULL,
	BLKADDR_NDC_NIX1_TX	= 0x11ULL,
-	BLK_COUNT		= 0x12ULL,
+	BLKADDR_APR		= 0x16ULL,
+	BLK_COUNT		= 0x17ULL,
 };
 
 /* RVU Block Type Enumeration */
+1 -1
drivers/net/ethernet/marvell/octeontx2/nic/Makefile
···
 obj-$(CONFIG_OCTEONTX2_VF) += rvu_nicvf.o
 
 rvu_nicpf-y := otx2_pf.o otx2_common.o otx2_txrx.o otx2_ethtool.o \
-               otx2_ptp.o otx2_flows.o otx2_tc.o cn10k.o
+               otx2_ptp.o otx2_flows.o otx2_tc.o cn10k.o otx2_dmac_flt.o
 rvu_nicvf-y := otx2_vf.o
 
 ccflags-y += -I$(srctree)/drivers/net/ethernet/marvell/octeontx2/af
+38 -55
drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
···
	.refill_pool_ptrs = cn10k_refill_pool_ptrs,
 };
 
-int cn10k_pf_lmtst_init(struct otx2_nic *pf)
+int cn10k_lmtst_init(struct otx2_nic *pfvf)
 {
-	int size, num_lines;
-	u64 base;
+	struct lmtst_tbl_setup_req *req;
+	int qcount, err;
 
-	if (!test_bit(CN10K_LMTST, &pf->hw.cap_flag)) {
-		pf->hw_ops = &otx2_hw_ops;
+	if (!test_bit(CN10K_LMTST, &pfvf->hw.cap_flag)) {
+		pfvf->hw_ops = &otx2_hw_ops;
		return 0;
	}
 
-	pf->hw_ops = &cn10k_hw_ops;
-	base = pci_resource_start(pf->pdev, PCI_MBOX_BAR_NUM) +
-	       (MBOX_SIZE * (pf->total_vfs + 1));
-
-	size = pci_resource_len(pf->pdev, PCI_MBOX_BAR_NUM) -
-	       (MBOX_SIZE * (pf->total_vfs + 1));
-
-	pf->hw.lmt_base = ioremap(base, size);
-
-	if (!pf->hw.lmt_base) {
-		dev_err(pf->dev, "Unable to map PF LMTST region\n");
+	pfvf->hw_ops = &cn10k_hw_ops;
+	qcount = pfvf->hw.max_queues;
+	/* LMTST lines allocation
+	 * qcount = num_online_cpus();
+	 * NPA = TX + RX + XDP.
+	 * NIX = TX * 32 (For Burst SQE flush).
+	 */
+	pfvf->tot_lmt_lines = (qcount * 3) + (qcount * 32);
+	pfvf->npa_lmt_lines = qcount * 3;
+	pfvf->nix_lmt_size = LMT_BURST_SIZE * LMT_LINE_SIZE;
+
+	mutex_lock(&pfvf->mbox.lock);
+	req = otx2_mbox_alloc_msg_lmtst_tbl_setup(&pfvf->mbox);
+	if (!req) {
+		mutex_unlock(&pfvf->mbox.lock);
		return -ENOMEM;
	}
 
-	/* FIXME: Get the num of LMTST lines from LMT table */
-	pf->tot_lmt_lines = size / LMT_LINE_SIZE;
-	num_lines = (pf->tot_lmt_lines - NIX_LMTID_BASE) /
-			    pf->hw.tx_queues;
-	/* Number of LMT lines per SQ queues */
-	pf->nix_lmt_lines = num_lines > 32 ? 32 : num_lines;
-
-	pf->nix_lmt_size = pf->nix_lmt_lines * LMT_LINE_SIZE;
+	req->use_local_lmt_region = true;
+
+	err = qmem_alloc(pfvf->dev, &pfvf->dync_lmt, pfvf->tot_lmt_lines,
+			 LMT_LINE_SIZE);
+	if (err) {
+		mutex_unlock(&pfvf->mbox.lock);
+		return err;
+	}
+	pfvf->hw.lmt_base = (u64 *)pfvf->dync_lmt->base;
+	req->lmt_iova = (u64)pfvf->dync_lmt->iova;
+
+	err = otx2_sync_mbox_msg(&pfvf->mbox);
+	mutex_unlock(&pfvf->mbox.lock);
+
	return 0;
 }
-
-int cn10k_vf_lmtst_init(struct otx2_nic *vf)
-{
-	int size, num_lines;
-
-	if (!test_bit(CN10K_LMTST, &vf->hw.cap_flag)) {
-		vf->hw_ops = &otx2_hw_ops;
-		return 0;
-	}
-
-	vf->hw_ops = &cn10k_hw_ops;
-	size = pci_resource_len(vf->pdev, PCI_MBOX_BAR_NUM);
-	vf->hw.lmt_base = ioremap_wc(pci_resource_start(vf->pdev,
-							PCI_MBOX_BAR_NUM),
-				     size);
-	if (!vf->hw.lmt_base) {
-		dev_err(vf->dev, "Unable to map VF LMTST region\n");
-		return -ENOMEM;
-	}
-
-	vf->tot_lmt_lines = size / LMT_LINE_SIZE;
-	/* LMTST lines per SQ */
-	num_lines = (vf->tot_lmt_lines - NIX_LMTID_BASE) /
-			    vf->hw.tx_queues;
-	vf->nix_lmt_lines = num_lines > 32 ? 32 : num_lines;
-	vf->nix_lmt_size = vf->nix_lmt_lines * LMT_LINE_SIZE;
-	return 0;
-}
-EXPORT_SYMBOL(cn10k_vf_lmtst_init);
+EXPORT_SYMBOL(cn10k_lmtst_init);
 
 int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura)
 {
···
	struct otx2_snd_queue *sq;
 
	sq = &pfvf->qset.sq[qidx];
-	sq->lmt_addr = (__force u64 *)((u64)pfvf->hw.nix_lmt_base +
+	sq->lmt_addr = (u64 *)((u64)pfvf->hw.nix_lmt_base +
			       (qidx * pfvf->nix_lmt_size));
+
+	sq->lmt_id = pfvf->npa_lmt_lines + (qidx * LMT_BURST_SIZE);
 
	/* Get memory to put this msg */
	aq = otx2_mbox_alloc_msg_nix_cn10k_aq_enq(&pfvf->mbox);
···
 
 void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx)
 {
-	struct otx2_nic *pfvf = dev;
-	int lmt_id = NIX_LMTID_BASE + (qidx * pfvf->nix_lmt_lines);
	u64 val = 0, tar_addr = 0;
 
	/* FIXME: val[0:10] LMT_ID.
	 * [12:15] no of LMTST - 1 in the burst.
	 * [19:63] data size of each LMTST in the burst except first.
	 */
-	val = (lmt_id & 0x7FF);
+	val = (sq->lmt_id & 0x7FF);
	/* Target address for LMTST flush tells HW how many 128bit
	 * words are present.
	 * tar_addr[6:4] size of first LMTST - 1 in units of 128b.
+1 -2
drivers/net/ethernet/marvell/octeontx2/nic/cn10k.h
···
 void cn10k_refill_pool_ptrs(void *dev, struct otx2_cq_queue *cq);
 void cn10k_sqe_flush(void *dev, struct otx2_snd_queue *sq, int size, int qidx);
 int cn10k_sq_aq_init(void *dev, u16 qidx, u16 sqb_aura);
-int cn10k_pf_lmtst_init(struct otx2_nic *pf);
-int cn10k_vf_lmtst_init(struct otx2_nic *vf);
+int cn10k_lmtst_init(struct otx2_nic *pfvf);
 int cn10k_free_all_ipolicers(struct otx2_nic *pfvf);
 int cn10k_alloc_matchall_ipolicer(struct otx2_nic *pfvf);
 int cn10k_free_matchall_ipolicer(struct otx2_nic *pfvf);
+3
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
···
		/* update dmac field in vlan offload rule */
		if (pfvf->flags & OTX2_FLAG_RX_VLAN_SUPPORT)
			otx2_install_rxvlan_offload_flow(pfvf);
+		/* update dmac address in ntuple and DMAC filter list */
+		if (pfvf->flags & OTX2_FLAG_DMACFLTR_SUPPORT)
+			otx2_dmacflt_update_pfmac_flow(pfvf);
	} else {
		return -EPERM;
	}
+15 -3
drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
···
	unsigned long	cap_flag;
 
 #define LMT_LINE_SIZE		128
-#define NIX_LMTID_BASE		72 /* RX + TX + XDP */
-	void __iomem *lmt_base;
+#define LMT_BURST_SIZE		32 /* 32 LMTST lines for burst SQE flush */
+	u64			*lmt_base;
	u64			*npa_lmt_base;
	u64			*nix_lmt_base;
 };
···
	u16			tc_flower_offset;
	u16			ntuple_max_flows;
	u16			tc_max_flows;
+	u8			dmacflt_max_flows;
+	u8			*bmap_to_dmacindex;
+	unsigned long		dmacflt_bmap;
	struct list_head	flow_list;
 };
···
 #define OTX2_FLAG_TC_FLOWER_SUPPORT		BIT_ULL(11)
 #define OTX2_FLAG_TC_MATCHALL_EGRESS_ENABLED	BIT_ULL(12)
 #define OTX2_FLAG_TC_MATCHALL_INGRESS_ENABLED	BIT_ULL(13)
+#define OTX2_FLAG_DMACFLTR_SUPPORT		BIT_ULL(14)
	u64			flags;
 
	struct otx2_qset	qset;
···
	/* Block address of NIX either BLKADDR_NIX0 or BLKADDR_NIX1 */
	int			nix_blkaddr;
	/* LMTST Lines info */
+	struct qmem		*dync_lmt;
	u16			tot_lmt_lines;
-	u16			nix_lmt_lines;
+	u16			npa_lmt_lines;
	u32			nix_lmt_size;
 
	struct otx2_ptp		*ptp;
···
 void otx2_shutdown_tc(struct otx2_nic *nic);
 int otx2_setup_tc(struct net_device *netdev, enum tc_setup_type type,
		  void *type_data);
+/* CGX/RPM DMAC filters support */
+int otx2_dmacflt_get_max_cnt(struct otx2_nic *pf);
+int otx2_dmacflt_add(struct otx2_nic *pf, const u8 *mac, u8 bit_pos);
+int otx2_dmacflt_remove(struct otx2_nic *pf, const u8 *mac, u8 bit_pos);
+int otx2_dmacflt_update(struct otx2_nic *pf, u8 *mac, u8 bit_pos);
+void otx2_dmacflt_reinstall_flows(struct otx2_nic *pf);
+void otx2_dmacflt_update_pfmac_flow(struct otx2_nic *pfvf);
 #endif /* OTX2_COMMON_H */
+173
drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Marvell OcteonTx2 RVU Physcial Function ethernet driver 3 + * 4 + * Copyright (C) 2021 Marvell. 5 + */ 6 + 7 + #include "otx2_common.h" 8 + 9 + static int otx2_dmacflt_do_add(struct otx2_nic *pf, const u8 *mac, 10 + u8 *dmac_index) 11 + { 12 + struct cgx_mac_addr_add_req *req; 13 + struct cgx_mac_addr_add_rsp *rsp; 14 + int err; 15 + 16 + mutex_lock(&pf->mbox.lock); 17 + 18 + req = otx2_mbox_alloc_msg_cgx_mac_addr_add(&pf->mbox); 19 + if (!req) { 20 + mutex_unlock(&pf->mbox.lock); 21 + return -ENOMEM; 22 + } 23 + 24 + ether_addr_copy(req->mac_addr, mac); 25 + err = otx2_sync_mbox_msg(&pf->mbox); 26 + 27 + if (!err) { 28 + rsp = (struct cgx_mac_addr_add_rsp *) 29 + otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr); 30 + *dmac_index = rsp->index; 31 + } 32 + 33 + mutex_unlock(&pf->mbox.lock); 34 + return err; 35 + } 36 + 37 + static int otx2_dmacflt_add_pfmac(struct otx2_nic *pf) 38 + { 39 + struct cgx_mac_addr_set_or_get *req; 40 + int err; 41 + 42 + mutex_lock(&pf->mbox.lock); 43 + 44 + req = otx2_mbox_alloc_msg_cgx_mac_addr_set(&pf->mbox); 45 + if (!req) { 46 + mutex_unlock(&pf->mbox.lock); 47 + return -ENOMEM; 48 + } 49 + 50 + ether_addr_copy(req->mac_addr, pf->netdev->dev_addr); 51 + err = otx2_sync_mbox_msg(&pf->mbox); 52 + 53 + mutex_unlock(&pf->mbox.lock); 54 + return err; 55 + } 56 + 57 + int otx2_dmacflt_add(struct otx2_nic *pf, const u8 *mac, u8 bit_pos) 58 + { 59 + u8 *dmacindex; 60 + 61 + /* Store dmacindex returned by CGX/RPM driver which will 62 + * be used for macaddr update/remove 63 + */ 64 + dmacindex = &pf->flow_cfg->bmap_to_dmacindex[bit_pos]; 65 + 66 + if (ether_addr_equal(mac, pf->netdev->dev_addr)) 67 + return otx2_dmacflt_add_pfmac(pf); 68 + else 69 + return otx2_dmacflt_do_add(pf, mac, dmacindex); 70 + } 71 + 72 + static int otx2_dmacflt_do_remove(struct otx2_nic *pfvf, const u8 *mac, 73 + u8 dmac_index) 74 + { 75 + struct cgx_mac_addr_del_req *req; 76 + int err; 77 + 78 + 
mutex_lock(&pfvf->mbox.lock); 79 + req = otx2_mbox_alloc_msg_cgx_mac_addr_del(&pfvf->mbox); 80 + if (!req) { 81 + mutex_unlock(&pfvf->mbox.lock); 82 + return -ENOMEM; 83 + } 84 + 85 + req->index = dmac_index; 86 + 87 + err = otx2_sync_mbox_msg(&pfvf->mbox); 88 + mutex_unlock(&pfvf->mbox.lock); 89 + 90 + return err; 91 + } 92 + 93 + static int otx2_dmacflt_remove_pfmac(struct otx2_nic *pf) 94 + { 95 + struct msg_req *req; 96 + int err; 97 + 98 + mutex_lock(&pf->mbox.lock); 99 + req = otx2_mbox_alloc_msg_cgx_mac_addr_reset(&pf->mbox); 100 + if (!req) { 101 + mutex_unlock(&pf->mbox.lock); 102 + return -ENOMEM; 103 + } 104 + 105 + err = otx2_sync_mbox_msg(&pf->mbox); 106 + 107 + mutex_unlock(&pf->mbox.lock); 108 + return err; 109 + } 110 + 111 + int otx2_dmacflt_remove(struct otx2_nic *pf, const u8 *mac, 112 + u8 bit_pos) 113 + { 114 + u8 dmacindex = pf->flow_cfg->bmap_to_dmacindex[bit_pos]; 115 + 116 + if (ether_addr_equal(mac, pf->netdev->dev_addr)) 117 + return otx2_dmacflt_remove_pfmac(pf); 118 + else 119 + return otx2_dmacflt_do_remove(pf, mac, dmacindex); 120 + } 121 + 122 + /* CGX/RPM blocks support max unicast entries of 32. 
123 + * on typical configuration MAC block associated 124 + * with 4 lmacs, each lmac will have 8 dmac entries 125 + */ 126 + int otx2_dmacflt_get_max_cnt(struct otx2_nic *pf) 127 + { 128 + struct cgx_max_dmac_entries_get_rsp *rsp; 129 + struct msg_req *msg; 130 + int err; 131 + 132 + mutex_lock(&pf->mbox.lock); 133 + msg = otx2_mbox_alloc_msg_cgx_mac_max_entries_get(&pf->mbox); 134 + 135 + if (!msg) { 136 + mutex_unlock(&pf->mbox.lock); 137 + return -ENOMEM; 138 + } 139 + 140 + err = otx2_sync_mbox_msg(&pf->mbox); 141 + if (err) 142 + goto out; 143 + 144 + rsp = (struct cgx_max_dmac_entries_get_rsp *) 145 + otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &msg->hdr); 146 + pf->flow_cfg->dmacflt_max_flows = rsp->max_dmac_filters; 147 + 148 + out: 149 + mutex_unlock(&pf->mbox.lock); 150 + return err; 151 + } 152 + 153 + int otx2_dmacflt_update(struct otx2_nic *pf, u8 *mac, u8 bit_pos) 154 + { 155 + struct cgx_mac_addr_update_req *req; 156 + int rc; 157 + 158 + mutex_lock(&pf->mbox.lock); 159 + 160 + req = otx2_mbox_alloc_msg_cgx_mac_addr_update(&pf->mbox); 161 + 162 + if (!req) { 163 + mutex_unlock(&pf->mbox.lock); 164 + return -ENOMEM; 165 + } 166 + 167 + ether_addr_copy(req->mac_addr, mac); 168 + req->index = pf->flow_cfg->bmap_to_dmacindex[bit_pos]; 169 + rc = otx2_sync_mbox_msg(&pf->mbox); 170 + 171 + mutex_unlock(&pf->mbox.lock); 172 + return rc; 173 + }
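Every helper in the new otx2_dmac_flt.c follows the same shape: take the mbox mutex, allocate a request, unlock and bail if allocation fails, send synchronously, optionally parse the response, then unlock. A toy model of that control flow — the mutex is reduced to a depth counter so lock balance is observable, and the mailbox call is a stub; nothing here is the real mbox API:

```c
#include <assert.h>
#include <stddef.h>

/* Depth counter standing in for mutex_lock()/mutex_unlock() so that
 * lock balance can be asserted after the calls return. */
static int mbox_lock_depth;

static void mbox_lock(void)   { mbox_lock_depth++; }
static void mbox_unlock(void) { mbox_lock_depth--; }

static int do_mbox_request(int alloc_ok, int *index_out)
{
    mbox_lock();

    if (!alloc_ok) {          /* otx2_mbox_alloc_msg_*() returned NULL */
        mbox_unlock();        /* every early return must drop the lock */
        return -12;           /* -ENOMEM */
    }

    if (index_out)            /* parse the (stubbed) response */
        *index_out = 3;

    mbox_unlock();
    return 0;
}
```

The point of the shape is that no path — success, allocation failure, or send failure — leaves the mailbox lock held.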
+220 -9
drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
··· 18 18 bool is_vf; 19 19 u8 rss_ctx_id; 20 20 int vf; 21 + bool dmac_filter; 22 + }; 23 + 24 + enum dmac_req { 25 + DMAC_ADDR_UPDATE, 26 + DMAC_ADDR_DEL 21 27 }; 22 28 23 29 static void otx2_clear_ntuple_flow_info(struct otx2_nic *pfvf, struct otx2_flow_config *flow_cfg) ··· 225 219 if (!pf->mac_table) 226 220 return -ENOMEM; 227 221 222 + otx2_dmacflt_get_max_cnt(pf); 223 + 224 + /* DMAC filters are not allocated */ 225 + if (!pf->flow_cfg->dmacflt_max_flows) 226 + return 0; 227 + 228 + pf->flow_cfg->bmap_to_dmacindex = 229 + devm_kzalloc(pf->dev, sizeof(u8) * 230 + pf->flow_cfg->dmacflt_max_flows, 231 + GFP_KERNEL); 232 + 233 + if (!pf->flow_cfg->bmap_to_dmacindex) 234 + return -ENOMEM; 235 + 236 + pf->flags |= OTX2_FLAG_DMACFLTR_SUPPORT; 237 + 228 238 return 0; 229 239 } 230 240 ··· 301 279 int otx2_add_macfilter(struct net_device *netdev, const u8 *mac) 302 280 { 303 281 struct otx2_nic *pf = netdev_priv(netdev); 282 + 283 + if (bitmap_weight(&pf->flow_cfg->dmacflt_bmap, 284 + pf->flow_cfg->dmacflt_max_flows)) 285 + netdev_warn(netdev, 286 + "Add %pM to CGX/RPM DMAC filters list as well\n", 287 + mac); 304 288 305 289 return otx2_do_add_macfilter(pf, mac); 306 290 } ··· 379 351 list_add(&flow->list, head); 380 352 } 381 353 354 + static int otx2_get_maxflows(struct otx2_flow_config *flow_cfg) 355 + { 356 + if (flow_cfg->nr_flows == flow_cfg->ntuple_max_flows || 357 + bitmap_weight(&flow_cfg->dmacflt_bmap, 358 + flow_cfg->dmacflt_max_flows)) 359 + return flow_cfg->ntuple_max_flows + flow_cfg->dmacflt_max_flows; 360 + else 361 + return flow_cfg->ntuple_max_flows; 362 + } 363 + 382 364 int otx2_get_flow(struct otx2_nic *pfvf, struct ethtool_rxnfc *nfc, 383 365 u32 location) 384 366 { 385 367 struct otx2_flow *iter; 386 368 387 - if (location >= pfvf->flow_cfg->ntuple_max_flows) 369 + if (location >= otx2_get_maxflows(pfvf->flow_cfg)) 388 370 return -EINVAL; 389 371 390 372 list_for_each_entry(iter, &pfvf->flow_cfg->flow_list, list) { ··· 416 378 int idx = 0; 
417 379 int err = 0; 418 380 419 - nfc->data = pfvf->flow_cfg->ntuple_max_flows; 381 + nfc->data = otx2_get_maxflows(pfvf->flow_cfg); 420 382 while ((!err || err == -ENOENT) && idx < rule_cnt) { 421 383 err = otx2_get_flow(pfvf, nfc, location); 422 384 if (!err) ··· 798 760 return 0; 799 761 } 800 762 763 + static int otx2_is_flow_rule_dmacfilter(struct otx2_nic *pfvf, 764 + struct ethtool_rx_flow_spec *fsp) 765 + { 766 + struct ethhdr *eth_mask = &fsp->m_u.ether_spec; 767 + struct ethhdr *eth_hdr = &fsp->h_u.ether_spec; 768 + u64 ring_cookie = fsp->ring_cookie; 769 + u32 flow_type; 770 + 771 + if (!(pfvf->flags & OTX2_FLAG_DMACFLTR_SUPPORT)) 772 + return false; 773 + 774 + flow_type = fsp->flow_type & ~(FLOW_EXT | FLOW_MAC_EXT | FLOW_RSS); 775 + 776 + /* CGX/RPM block dmac filtering configured for white listing 777 + * check for action other than DROP 778 + */ 779 + if (flow_type == ETHER_FLOW && ring_cookie != RX_CLS_FLOW_DISC && 780 + !ethtool_get_flow_spec_ring_vf(ring_cookie)) { 781 + if (is_zero_ether_addr(eth_mask->h_dest) && 782 + is_valid_ether_addr(eth_hdr->h_dest)) 783 + return true; 784 + } 785 + 786 + return false; 787 + } 788 + 801 789 static int otx2_add_flow_msg(struct otx2_nic *pfvf, struct otx2_flow *flow) 802 790 { 803 791 u64 ring_cookie = flow->flow_spec.ring_cookie; ··· 882 818 return err; 883 819 } 884 820 821 + static int otx2_add_flow_with_pfmac(struct otx2_nic *pfvf, 822 + struct otx2_flow *flow) 823 + { 824 + struct otx2_flow *pf_mac; 825 + struct ethhdr *eth_hdr; 826 + 827 + pf_mac = kzalloc(sizeof(*pf_mac), GFP_KERNEL); 828 + if (!pf_mac) 829 + return -ENOMEM; 830 + 831 + pf_mac->entry = 0; 832 + pf_mac->dmac_filter = true; 833 + pf_mac->location = pfvf->flow_cfg->ntuple_max_flows; 834 + memcpy(&pf_mac->flow_spec, &flow->flow_spec, 835 + sizeof(struct ethtool_rx_flow_spec)); 836 + pf_mac->flow_spec.location = pf_mac->location; 837 + 838 + /* Copy PF mac address */ 839 + eth_hdr = &pf_mac->flow_spec.h_u.ether_spec; 840 + 
ether_addr_copy(eth_hdr->h_dest, pfvf->netdev->dev_addr); 841 + 842 + /* Install DMAC filter with PF mac address */ 843 + otx2_dmacflt_add(pfvf, eth_hdr->h_dest, 0); 844 + 845 + otx2_add_flow_to_list(pfvf, pf_mac); 846 + pfvf->flow_cfg->nr_flows++; 847 + set_bit(0, &pfvf->flow_cfg->dmacflt_bmap); 848 + 849 + return 0; 850 + } 851 + 885 852 int otx2_add_flow(struct otx2_nic *pfvf, struct ethtool_rxnfc *nfc) 886 853 { 887 854 struct otx2_flow_config *flow_cfg = pfvf->flow_cfg; 888 855 struct ethtool_rx_flow_spec *fsp = &nfc->fs; 889 856 struct otx2_flow *flow; 857 + struct ethhdr *eth_hdr; 890 858 bool new = false; 859 + int err = 0; 891 860 u32 ring; 892 - int err; 893 861 894 862 ring = ethtool_get_flow_spec_ring(fsp->ring_cookie); 895 863 if (!(pfvf->flags & OTX2_FLAG_NTUPLE_SUPPORT)) ··· 930 834 if (ring >= pfvf->hw.rx_queues && fsp->ring_cookie != RX_CLS_FLOW_DISC) 931 835 return -EINVAL; 932 836 933 - if (fsp->location >= flow_cfg->ntuple_max_flows) 837 + if (fsp->location >= otx2_get_maxflows(flow_cfg)) 934 838 return -EINVAL; 935 839 936 840 flow = otx2_find_flow(pfvf, fsp->location); 937 841 if (!flow) { 938 - flow = kzalloc(sizeof(*flow), GFP_ATOMIC); 842 + flow = kzalloc(sizeof(*flow), GFP_KERNEL); 939 843 if (!flow) 940 844 return -ENOMEM; 941 845 flow->location = fsp->location; 942 - flow->entry = flow_cfg->flow_ent[flow->location]; 943 846 new = true; 944 847 } 945 848 /* struct copy */ ··· 947 852 if (fsp->flow_type & FLOW_RSS) 948 853 flow->rss_ctx_id = nfc->rss_context; 949 854 950 - err = otx2_add_flow_msg(pfvf, flow); 855 + if (otx2_is_flow_rule_dmacfilter(pfvf, &flow->flow_spec)) { 856 + eth_hdr = &flow->flow_spec.h_u.ether_spec; 857 + 858 + /* Sync dmac filter table with updated fields */ 859 + if (flow->dmac_filter) 860 + return otx2_dmacflt_update(pfvf, eth_hdr->h_dest, 861 + flow->entry); 862 + 863 + if (bitmap_full(&flow_cfg->dmacflt_bmap, 864 + flow_cfg->dmacflt_max_flows)) { 865 + netdev_warn(pfvf->netdev, 866 + "Can't insert the rule %d as 
max allowed dmac filters are %d\n", 867 + flow->location + 868 + flow_cfg->dmacflt_max_flows, 869 + flow_cfg->dmacflt_max_flows); 870 + err = -EINVAL; 871 + if (new) 872 + kfree(flow); 873 + return err; 874 + } 875 + 876 + /* Install PF mac address to DMAC filter list */ 877 + if (!test_bit(0, &flow_cfg->dmacflt_bmap)) 878 + otx2_add_flow_with_pfmac(pfvf, flow); 879 + 880 + flow->dmac_filter = true; 881 + flow->entry = find_first_zero_bit(&flow_cfg->dmacflt_bmap, 882 + flow_cfg->dmacflt_max_flows); 883 + fsp->location = flow_cfg->ntuple_max_flows + flow->entry; 884 + flow->flow_spec.location = fsp->location; 885 + flow->location = fsp->location; 886 + 887 + set_bit(flow->entry, &flow_cfg->dmacflt_bmap); 888 + otx2_dmacflt_add(pfvf, eth_hdr->h_dest, flow->entry); 889 + 890 + } else { 891 + if (flow->location >= pfvf->flow_cfg->ntuple_max_flows) { 892 + netdev_warn(pfvf->netdev, 893 + "Can't insert non dmac ntuple rule at %d, allowed range %d-0\n", 894 + flow->location, 895 + flow_cfg->ntuple_max_flows - 1); 896 + err = -EINVAL; 897 + } else { 898 + flow->entry = flow_cfg->flow_ent[flow->location]; 899 + err = otx2_add_flow_msg(pfvf, flow); 900 + } 901 + } 902 + 951 903 if (err) { 952 904 if (new) 953 905 kfree(flow); ··· 1032 890 return err; 1033 891 } 1034 892 893 + static void otx2_update_rem_pfmac(struct otx2_nic *pfvf, int req) 894 + { 895 + struct otx2_flow *iter; 896 + struct ethhdr *eth_hdr; 897 + bool found = false; 898 + 899 + list_for_each_entry(iter, &pfvf->flow_cfg->flow_list, list) { 900 + if (iter->dmac_filter && iter->entry == 0) { 901 + eth_hdr = &iter->flow_spec.h_u.ether_spec; 902 + if (req == DMAC_ADDR_DEL) { 903 + otx2_dmacflt_remove(pfvf, eth_hdr->h_dest, 904 + 0); 905 + clear_bit(0, &pfvf->flow_cfg->dmacflt_bmap); 906 + found = true; 907 + } else { 908 + ether_addr_copy(eth_hdr->h_dest, 909 + pfvf->netdev->dev_addr); 910 + otx2_dmacflt_update(pfvf, eth_hdr->h_dest, 0); 911 + } 912 + break; 913 + } 914 + } 915 + 916 + if (found) { 917 + 
list_del(&iter->list); 918 + kfree(iter); 919 + pfvf->flow_cfg->nr_flows--; 920 + } 921 + } 922 + 1035 923 int otx2_remove_flow(struct otx2_nic *pfvf, u32 location) 1036 924 { 1037 925 struct otx2_flow_config *flow_cfg = pfvf->flow_cfg; 1038 926 struct otx2_flow *flow; 1039 927 int err; 1040 928 1041 - if (location >= flow_cfg->ntuple_max_flows) 929 + if (location >= otx2_get_maxflows(flow_cfg)) 1042 930 return -EINVAL; 1043 931 1044 932 flow = otx2_find_flow(pfvf, location); 1045 933 if (!flow) 1046 934 return -ENOENT; 1047 935 1048 - err = otx2_remove_flow_msg(pfvf, flow->entry, false); 936 + if (flow->dmac_filter) { 937 + struct ethhdr *eth_hdr = &flow->flow_spec.h_u.ether_spec; 938 + 939 + /* user not allowed to remove dmac filter with interface mac */ 940 + if (ether_addr_equal(pfvf->netdev->dev_addr, eth_hdr->h_dest)) 941 + return -EPERM; 942 + 943 + err = otx2_dmacflt_remove(pfvf, eth_hdr->h_dest, 944 + flow->entry); 945 + clear_bit(flow->entry, &flow_cfg->dmacflt_bmap); 946 + /* If all dmac filters are removed delete macfilter with 947 + * interface mac address and configure CGX/RPM block in 948 + * promiscuous mode 949 + */ 950 + if (bitmap_weight(&flow_cfg->dmacflt_bmap, 951 + flow_cfg->dmacflt_max_flows) == 1) 952 + otx2_update_rem_pfmac(pfvf, DMAC_ADDR_DEL); 953 + } else { 954 + err = otx2_remove_flow_msg(pfvf, flow->entry, false); 955 + } 956 + 1049 957 if (err) 1050 958 return err; 1051 959 ··· 1291 1099 1292 1100 mutex_unlock(&pf->mbox.lock); 1293 1101 return rsp_hdr->rc; 1102 + } 1103 + 1104 + void otx2_dmacflt_reinstall_flows(struct otx2_nic *pf) 1105 + { 1106 + struct otx2_flow *iter; 1107 + struct ethhdr *eth_hdr; 1108 + 1109 + list_for_each_entry(iter, &pf->flow_cfg->flow_list, list) { 1110 + if (iter->dmac_filter) { 1111 + eth_hdr = &iter->flow_spec.h_u.ether_spec; 1112 + otx2_dmacflt_add(pf, eth_hdr->h_dest, 1113 + iter->entry); 1114 + } 1115 + } 1116 + } 1117 + 1118 + void otx2_dmacflt_update_pfmac_flow(struct otx2_nic *pfvf) 1119 + { 1120 + 
otx2_update_rem_pfmac(pfvf, DMAC_ADDR_UPDATE); 1294 1121 }
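The `otx2_get_maxflows()` helper introduced above splits the ethtool location space in two: locations below `ntuple_max_flows` address MCAM ntuple rules, and the range above it addresses CGX/RPM DMAC filter slots — but the extended range is only advertised once the ntuple space is full or at least one DMAC filter is installed. A simplified model of that decision (field widths mirror the struct; everything else is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Reduced model of the location split behind otx2_get_maxflows(). */
struct flow_cfg {
    uint16_t ntuple_max_flows;
    uint8_t dmacflt_max_flows;
    unsigned long dmacflt_bmap; /* one bit per in-use DMAC filter slot */
    uint16_t nr_flows;
};

static int get_maxflows(const struct flow_cfg *cfg)
{
    /* Nonzero bitmap stands in for bitmap_weight(...) > 0: expose the
     * extended range only when ntuple space is exhausted or a DMAC
     * filter is already installed, as the driver does. */
    if (cfg->nr_flows == cfg->ntuple_max_flows || cfg->dmacflt_bmap)
        return cfg->ntuple_max_flows + cfg->dmacflt_max_flows;
    return cfg->ntuple_max_flows;
}
```

This is why `otx2_get_flow()`, `otx2_add_flow()` and `otx2_remove_flow()` all switch their bounds check from `ntuple_max_flows` to this helper.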
+17 -9
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
··· 1110 1110 struct msg_req *msg; 1111 1111 int err; 1112 1112 1113 + if (enable && bitmap_weight(&pf->flow_cfg->dmacflt_bmap, 1114 + pf->flow_cfg->dmacflt_max_flows)) 1115 + netdev_warn(pf->netdev, 1116 + "CGX/RPM internal loopback might not work as DMAC filters are active\n"); 1117 + 1113 1118 mutex_lock(&pf->mbox.lock); 1114 1119 if (enable) 1115 1120 msg = otx2_mbox_alloc_msg_cgx_intlbk_enable(&pf->mbox); ··· 1538 1533 1539 1534 if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) { 1540 1535 /* Reserve LMT lines for NPA AURA batch free */ 1541 - pf->hw.npa_lmt_base = (__force u64 *)pf->hw.lmt_base; 1536 + pf->hw.npa_lmt_base = pf->hw.lmt_base; 1542 1537 /* Reserve LMT lines for NIX TX */ 1543 - pf->hw.nix_lmt_base = (__force u64 *)((u64)pf->hw.npa_lmt_base + 1544 - (NIX_LMTID_BASE * LMT_LINE_SIZE)); 1538 + pf->hw.nix_lmt_base = (u64 *)((u64)pf->hw.npa_lmt_base + 1539 + (pf->npa_lmt_lines * LMT_LINE_SIZE)); 1545 1540 } 1546 1541 1547 1542 err = otx2_init_hw_resources(pf); ··· 1648 1643 1649 1644 /* Restore pause frame settings */ 1650 1645 otx2_config_pause_frm(pf); 1646 + 1647 + /* Install DMAC Filters */ 1648 + if (pf->flags & OTX2_FLAG_DMACFLTR_SUPPORT) 1649 + otx2_dmacflt_reinstall_flows(pf); 1651 1650 1652 1651 err = otx2_rxtx_enable(pf, true); 1653 1652 if (err) ··· 2535 2526 if (err) 2536 2527 goto err_detach_rsrc; 2537 2528 2538 - err = cn10k_pf_lmtst_init(pf); 2529 + err = cn10k_lmtst_init(pf); 2539 2530 if (err) 2540 2531 goto err_detach_rsrc; 2541 2532 ··· 2639 2630 err_ptp_destroy: 2640 2631 otx2_ptp_destroy(pf); 2641 2632 err_detach_rsrc: 2642 - if (hw->lmt_base) 2643 - iounmap(hw->lmt_base); 2633 + if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) 2634 + qmem_free(pf->dev, pf->dync_lmt); 2644 2635 otx2_detach_resources(&pf->mbox); 2645 2636 err_disable_mbox_intr: 2646 2637 otx2_disable_mbox_intr(pf); ··· 2781 2772 otx2_mcam_flow_del(pf); 2782 2773 otx2_shutdown_tc(pf); 2783 2774 otx2_detach_resources(&pf->mbox); 2784 - if (pf->hw.lmt_base) 2785 - iounmap(pf->hw.lmt_base); 2786 - 2775 + if (test_bit(CN10K_LMTST, &pf->hw.cap_flag)) 2776 + qmem_free(pf->dev, pf->dync_lmt); 2787 2777 otx2_disable_mbox_intr(pf); 2788 2778 otx2_pfaf_mbox_destroy(pf); 2789 2779 pci_free_irq_vectors(pf->pdev);
+1 -1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
··· 288 288 struct otx2_nic *priv; 289 289 u32 burst, mark = 0; 290 290 u8 nr_police = 0; 291 - bool pps; 291 + bool pps = false; 292 292 u64 rate; 293 293 int i; 294 294
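The one-line otx2_tc.c change initializes `pps`, which is otherwise only assigned inside the per-action loop; a rule carrying no packet-rate policer would read an indeterminate value. A toy reduction of the bug class — not the driver's actual policer parsing:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* The flag is only ever set inside the loop, so it must start as false
 * or a call with no matching entries reads uninitialized memory. */
static bool any_pkt_rate_policer(const long *pkt_rates, size_t n)
{
    bool pps = false; /* the fix: well-defined even when nothing matches */

    for (size_t i = 0; i < n; i++)
        if (pkt_rates[i] > 0) /* illustrative packets-per-second entry */
            pps = true;
    return pps;
}
```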
+1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.h
··· 83 83 u16 num_sqbs; 84 84 u16 sqe_thresh; 85 85 u8 sqe_per_sqb; 86 + u32 lmt_id; 86 87 u64 io_addr; 87 88 u64 *aura_fc_addr; 88 89 u64 *lmt_addr;
+5 -7
drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
··· 609 609 if (err) 610 610 goto err_detach_rsrc; 611 611 612 - err = cn10k_vf_lmtst_init(vf); 612 + err = cn10k_lmtst_init(vf); 613 613 if (err) 614 614 goto err_detach_rsrc; 615 615 ··· 667 667 err_unreg_netdev: 668 668 unregister_netdev(netdev); 669 669 err_detach_rsrc: 670 - if (hw->lmt_base) 671 - iounmap(hw->lmt_base); 670 + if (test_bit(CN10K_LMTST, &vf->hw.cap_flag)) 671 + qmem_free(vf->dev, vf->dync_lmt); 672 672 otx2_detach_resources(&vf->mbox); 673 673 err_disable_mbox_intr: 674 674 otx2vf_disable_mbox_intr(vf); ··· 700 700 destroy_workqueue(vf->otx2_wq); 701 701 otx2vf_disable_mbox_intr(vf); 702 702 otx2_detach_resources(&vf->mbox); 703 - 704 - if (vf->hw.lmt_base) 705 - iounmap(vf->hw.lmt_base); 706 - 703 + if (test_bit(CN10K_LMTST, &vf->hw.cap_flag)) 704 + qmem_free(vf->dev, vf->dync_lmt); 707 705 otx2vf_vfaf_mbox_destroy(vf); 708 706 pci_free_irq_vectors(vf->pdev); 709 707 pci_set_drvdata(pdev, NULL);
+1
drivers/net/ethernet/microchip/sparx5/Kconfig
··· 2 2 tristate "Sparx5 switch driver" 3 3 depends on NET_SWITCHDEV 4 4 depends on HAS_IOMEM 5 + depends on OF 5 6 select PHYLINK 6 7 select PHY_SPARX5_SERDES 7 8 select RESET_CONTROLLER
+1 -3
drivers/net/ethernet/moxa/moxart_ether.c
··· 540 540 SET_NETDEV_DEV(ndev, &pdev->dev); 541 541 542 542 ret = register_netdev(ndev); 543 - if (ret) { 544 - free_netdev(ndev); 543 + if (ret) 545 544 goto init_fail; 546 - } 547 545 548 546 netdev_dbg(ndev, "%s: IRQ=%d address=%pM\n", 549 547 __func__, ndev->irq, ndev->dev_addr);
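The moxart fix removes a double free: on `register_netdev()` failure the old code called `free_netdev()` inline and then jumped to `init_fail`, which frees the device again. A toy model of the corrected ownership rule, where error paths fall through to a single cleanup label (names and error values are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

static int free_count; /* observes how many times the device is freed */

struct fake_ndev { int dummy; };

static void fake_free_netdev(struct fake_ndev *nd)
{
    free_count++;
    free(nd);
}

/* Probe sketch: on register failure, do NOT free inline; fall through
 * to the one cleanup label that owns the allocation. */
static int fake_probe(int register_should_fail)
{
    struct fake_ndev *nd = malloc(sizeof(*nd));
    int ret;

    if (!nd)
        return -12; /* -ENOMEM */

    if (register_should_fail) {
        ret = -5; /* -EIO; cleanup happens below, exactly once */
        goto init_fail;
    }
    return 0; /* success: the caller now owns nd (leaked in this toy) */

init_fail:
    fake_free_netdev(nd);
    return ret;
}
```

The same single-owner discipline motivates the defza and tlan ordering fixes further down: everything that touches the allocation runs before the one `free_netdev()`.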
+5 -4
drivers/net/ethernet/mscc/ocelot_net.c
··· 1298 1298 } 1299 1299 1300 1300 static int ocelot_netdevice_changeupper(struct net_device *dev, 1301 + struct net_device *brport_dev, 1301 1302 struct netdev_notifier_changeupper_info *info) 1302 1303 { 1303 1304 struct netlink_ext_ack *extack; ··· 1308 1307 1309 1308 if (netif_is_bridge_master(info->upper_dev)) { 1310 1309 if (info->linking) 1311 - err = ocelot_netdevice_bridge_join(dev, dev, 1310 + err = ocelot_netdevice_bridge_join(dev, brport_dev, 1312 1311 info->upper_dev, 1313 1312 extack); 1314 1313 else 1315 - err = ocelot_netdevice_bridge_leave(dev, dev, 1314 + err = ocelot_netdevice_bridge_leave(dev, brport_dev, 1316 1315 info->upper_dev); 1317 1316 } 1318 1317 if (netif_is_lag_master(info->upper_dev)) { ··· 1347 1346 if (ocelot_port->bond != dev) 1348 1347 return NOTIFY_OK; 1349 1348 1350 - err = ocelot_netdevice_changeupper(lower, info); 1349 + err = ocelot_netdevice_changeupper(lower, dev, info); 1351 1350 if (err) 1352 1351 return notifier_from_errno(err); 1353 1352 } ··· 1386 1385 struct netdev_notifier_changeupper_info *info = ptr; 1387 1386 1388 1387 if (ocelot_netdevice_dev_check(dev)) 1389 - return ocelot_netdevice_changeupper(dev, info); 1388 + return ocelot_netdevice_changeupper(dev, dev, info); 1390 1389 1391 1390 if (netif_is_lag_master(dev)) 1392 1391 return ocelot_netdevice_lag_changeupper(dev, info);
-13
drivers/net/ethernet/netronome/nfp/flower/conntrack.c
··· 1141 1141 nfp_fl_ct_clean_flow_entry(ct_entry); 1142 1142 kfree(ct_map_ent); 1143 1143 1144 - /* If this is the last pre_ct_rule it means that it is 1145 - * very likely that the nft table will be cleaned up next, 1146 - * as this happens on the removal of the last act_ct flow. 1147 - * However we cannot deregister the callback on the removal 1148 - * of the last nft flow as this runs into a deadlock situation. 1149 - * So deregister the callback on removal of the last pre_ct flow 1150 - * and remove any remaining nft flow entries. We also cannot 1151 - * save this state and delete the callback later since the 1152 - * nft table would already have been freed at that time. 1153 - */ 1154 1144 if (!zt->pre_ct_count) { 1155 - nf_flow_table_offload_del_cb(zt->nft, 1156 - nfp_fl_ct_handle_nft_flow, 1157 - zt); 1158 1145 zt->nft = NULL; 1159 1146 nfp_fl_ct_clean_nft_entries(zt); 1160 1147 }
+2 -1
drivers/net/ethernet/qualcomm/emac/emac.c
··· 735 735 736 736 put_device(&adpt->phydev->mdio.dev); 737 737 mdiobus_unregister(adpt->mii_bus); 738 - free_netdev(netdev); 739 738 740 739 if (adpt->phy.digital) 741 740 iounmap(adpt->phy.digital); 742 741 iounmap(adpt->phy.base); 742 + 743 + free_netdev(netdev); 743 744 744 745 return 0; 745 746 }
+14 -8
drivers/net/ethernet/sfc/efx_channels.c
··· 152 152 * maximum size. 153 153 */ 154 154 tx_per_ev = EFX_MAX_EVQ_SIZE / EFX_TXQ_MAX_ENT(efx); 155 + tx_per_ev = min(tx_per_ev, EFX_MAX_TXQ_PER_CHANNEL); 155 156 n_xdp_tx = num_possible_cpus(); 156 157 n_xdp_ev = DIV_ROUND_UP(n_xdp_tx, tx_per_ev); 157 158 ··· 170 169 netif_err(efx, drv, efx->net_dev, 171 170 "Insufficient resources for %d XDP event queues (%d other channels, max %d)\n", 172 171 n_xdp_ev, n_channels, max_channels); 172 + netif_err(efx, drv, efx->net_dev, 173 + "XDP_TX and XDP_REDIRECT will not work on this interface"); 173 174 efx->n_xdp_channels = 0; 174 175 efx->xdp_tx_per_channel = 0; 175 176 efx->xdp_tx_queue_count = 0; ··· 179 176 netif_err(efx, drv, efx->net_dev, 180 177 "Insufficient resources for %d XDP TX queues (%d other channels, max VIs %d)\n", 181 178 n_xdp_tx, n_channels, efx->max_vis); 179 + netif_err(efx, drv, efx->net_dev, 180 + "XDP_TX and XDP_REDIRECT will not work on this interface"); 182 181 efx->n_xdp_channels = 0; 183 182 efx->xdp_tx_per_channel = 0; 184 183 efx->xdp_tx_queue_count = 0; 185 184 } else { 186 185 efx->n_xdp_channels = n_xdp_ev; 187 - efx->xdp_tx_per_channel = EFX_MAX_TXQ_PER_CHANNEL; 186 + efx->xdp_tx_per_channel = tx_per_ev; 188 187 efx->xdp_tx_queue_count = n_xdp_tx; 189 188 n_channels += n_xdp_ev; 190 189 netif_dbg(efx, drv, efx->net_dev, ··· 896 891 if (efx_channel_is_xdp_tx(channel)) { 897 892 efx_for_each_channel_tx_queue(tx_queue, channel) { 898 893 tx_queue->queue = next_queue++; 899 - netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n", 900 - channel->channel, tx_queue->label, 901 - xdp_queue_number, tx_queue->queue); 894 + 902 895 /* We may have a few left-over XDP TX 903 896 * queues owing to xdp_tx_queue_count 904 897 * not dividing evenly by EFX_MAX_TXQ_PER_CHANNEL. 905 898 * We still allocate and probe those 906 899 * TXQs, but never use them. 
907 900 */ 908 - if (xdp_queue_number < efx->xdp_tx_queue_count) 901 + if (xdp_queue_number < efx->xdp_tx_queue_count) { 902 + netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n", 903 + channel->channel, tx_queue->label, 904 + xdp_queue_number, tx_queue->queue); 909 905 efx->xdp_tx_queues[xdp_queue_number] = tx_queue; 910 - xdp_queue_number++; 906 + xdp_queue_number++; 907 + } 911 908 } 912 909 } else { 913 910 efx_for_each_channel_tx_queue(tx_queue, channel) { ··· 921 914 } 922 915 } 923 916 } 924 - if (xdp_queue_number) 925 - efx->xdp_tx_queue_count = xdp_queue_number; 917 + WARN_ON(xdp_queue_number != efx->xdp_tx_queue_count); 926 918 927 919 rc = netif_set_real_num_tx_queues(efx->net_dev, efx->n_tx_channels); 928 920 if (rc)
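The first sfc hunk clamps `tx_per_ev` before sizing the XDP event queues: without the clamp, the division could assume more TX queues per event queue than `EFX_MAX_TXQ_PER_CHANNEL` allows a channel to hold, under-counting the event queues needed to give every CPU a TX queue. A plain-integer sketch of the corrected sizing (all input values below are illustrative, not real hardware limits):

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* How many XDP event queues are needed so each CPU gets a TX queue,
 * given event-queue capacity and the per-channel TXQ ceiling. */
static int xdp_event_queues(int max_evq_size, int txq_max_ent,
                            int max_txq_per_channel, int n_cpus)
{
    int tx_per_ev = max_evq_size / txq_max_ent;

    /* The fix: never plan for more TX queues per event queue than a
     * channel can actually carry. */
    if (tx_per_ev > max_txq_per_channel)
        tx_per_ev = max_txq_per_channel;

    return DIV_ROUND_UP(n_cpus, tx_per_ev);
}
```

With the illustrative numbers below, the raw division yields 8 TXQs per EVQ, the clamp cuts it to 4, so 8 CPUs need 2 event queues instead of the 1 the unclamped math would plan for.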
+5 -4
drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
··· 49 49 { 50 50 struct plat_stmmacenet_data *plat; 51 51 struct stmmac_resources res; 52 - bool mdio = false; 53 - int ret, i; 54 52 struct device_node *np; 53 + int ret, i, phy_mode; 54 + bool mdio = false; 55 55 56 56 np = dev_of_node(&pdev->dev); 57 57 ··· 108 108 if (plat->bus_id < 0) 109 109 plat->bus_id = pci_dev_id(pdev); 110 110 111 - plat->phy_interface = device_get_phy_mode(&pdev->dev); 112 - if (plat->phy_interface < 0) 111 + phy_mode = device_get_phy_mode(&pdev->dev); 112 + if (phy_mode < 0) 113 113 dev_err(&pdev->dev, "phy_mode not found\n"); 114 114 115 + plat->phy_interface = phy_mode; 115 116 plat->interface = PHY_INTERFACE_MODE_GMII; 116 117 117 118 pci_set_master(pdev);
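This hunk (like the matching stmmac_platform.c change further down) fixes one bug class: `device_get_phy_mode()` returns a negative errno, but storing it straight into `plat->phy_interface` — an enum field in practice — can hide the sign before the error check runs. Reading into a signed `int` first keeps the check meaningful. A reduced model with a stubbed mode getter (the stub's name and values are invented for illustration):

```c
#include <assert.h>

/* phy_interface modeled as unsigned to show why a negative errno must
 * be caught in a signed temporary before the store. */
struct plat_data { unsigned int phy_interface; };

/* Hypothetical stand-in for device_get_phy_mode(): a mode >= 0 on
 * success, a negative errno on failure. */
static int get_phy_mode_stub(int fail)
{
    return fail ? -22 /* -EINVAL */ : 3;
}

static int setup_phy(struct plat_data *plat, int fail)
{
    int phy_mode = get_phy_mode_stub(fail);

    if (phy_mode < 0)      /* still negative here; the sign would be
                            * lost after an unsigned/enum store */
        return phy_mode;

    plat->phy_interface = (unsigned int)phy_mode;
    return 0;
}
```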
+3
drivers/net/ethernet/stmicro/stmmac/stmmac.h
··· 349 349 void stmmac_disable_tx_queue(struct stmmac_priv *priv, u32 queue); 350 350 void stmmac_enable_tx_queue(struct stmmac_priv *priv, u32 queue); 351 351 int stmmac_xsk_wakeup(struct net_device *dev, u32 queue, u32 flags); 352 + struct timespec64 stmmac_calc_tas_basetime(ktime_t old_base_time, 353 + ktime_t current_time, 354 + u64 cycle_time); 352 355 353 356 #if IS_ENABLED(CONFIG_STMMAC_SELFTESTS) 354 357 void stmmac_selftest_run(struct net_device *dev,
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 7171 7171 priv->plat->rx_queues_to_use, false); 7172 7172 7173 7173 stmmac_fpe_handshake(priv, false); 7174 + stmmac_fpe_stop_wq(priv); 7174 7175 } 7175 7176 7176 7177 priv->speed = SPEED_UNKNOWN;
+5 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 397 397 struct device_node *np = pdev->dev.of_node; 398 398 struct plat_stmmacenet_data *plat; 399 399 struct stmmac_dma_cfg *dma_cfg; 400 + int phy_mode; 400 401 void *ret; 401 402 int rc; 402 403 ··· 413 412 eth_zero_addr(mac); 414 413 } 415 414 416 - plat->phy_interface = device_get_phy_mode(&pdev->dev); 417 - if (plat->phy_interface < 0) 418 - return ERR_PTR(plat->phy_interface); 415 + phy_mode = device_get_phy_mode(&pdev->dev); 416 + if (phy_mode < 0) 417 + return ERR_PTR(phy_mode); 419 418 419 + plat->phy_interface = phy_mode; 420 420 plat->interface = stmmac_of_get_mac_mode(np); 421 421 if (plat->interface < 0) 422 422 plat->interface = plat->phy_interface;
+40 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
··· 62 62 u32 sec, nsec; 63 63 u32 quotient, reminder; 64 64 int neg_adj = 0; 65 - bool xmac; 65 + bool xmac, est_rst = false; 66 + int ret; 66 67 67 68 xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac; 68 69 ··· 76 75 sec = quotient; 77 76 nsec = reminder; 78 77 78 + /* If EST is enabled, disabled it before adjust ptp time. */ 79 + if (priv->plat->est && priv->plat->est->enable) { 80 + est_rst = true; 81 + mutex_lock(&priv->plat->est->lock); 82 + priv->plat->est->enable = false; 83 + stmmac_est_configure(priv, priv->ioaddr, priv->plat->est, 84 + priv->plat->clk_ptp_rate); 85 + mutex_unlock(&priv->plat->est->lock); 86 + } 87 + 79 88 spin_lock_irqsave(&priv->ptp_lock, flags); 80 89 stmmac_adjust_systime(priv, priv->ptpaddr, sec, nsec, neg_adj, xmac); 81 90 spin_unlock_irqrestore(&priv->ptp_lock, flags); 91 + 92 + /* Caculate new basetime and re-configured EST after PTP time adjust. */ 93 + if (est_rst) { 94 + struct timespec64 current_time, time; 95 + ktime_t current_time_ns, basetime; 96 + u64 cycle_time; 97 + 98 + mutex_lock(&priv->plat->est->lock); 99 + priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time); 100 + current_time_ns = timespec64_to_ktime(current_time); 101 + time.tv_nsec = priv->plat->est->btr_reserve[0]; 102 + time.tv_sec = priv->plat->est->btr_reserve[1]; 103 + basetime = timespec64_to_ktime(time); 104 + cycle_time = priv->plat->est->ctr[1] * NSEC_PER_SEC + 105 + priv->plat->est->ctr[0]; 106 + time = stmmac_calc_tas_basetime(basetime, 107 + current_time_ns, 108 + cycle_time); 109 + 110 + priv->plat->est->btr[0] = (u32)time.tv_nsec; 111 + priv->plat->est->btr[1] = (u32)time.tv_sec; 112 + priv->plat->est->enable = true; 113 + ret = stmmac_est_configure(priv, priv->ioaddr, priv->plat->est, 114 + priv->plat->clk_ptp_rate); 115 + mutex_unlock(&priv->plat->est->lock); 116 + if (ret) 117 + netdev_err(priv->dev, "failed to configure EST\n"); 118 + } 82 119 83 120 return 0; 84 121 }
+41 -15
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 711 711 return ret; 712 712 } 713 713 714 + struct timespec64 stmmac_calc_tas_basetime(ktime_t old_base_time, 715 + ktime_t current_time, 716 + u64 cycle_time) 717 + { 718 + struct timespec64 time; 719 + 720 + if (ktime_after(old_base_time, current_time)) { 721 + time = ktime_to_timespec64(old_base_time); 722 + } else { 723 + s64 n; 724 + ktime_t base_time; 725 + 726 + n = div64_s64(ktime_sub_ns(current_time, old_base_time), 727 + cycle_time); 728 + base_time = ktime_add_ns(old_base_time, 729 + (n + 1) * cycle_time); 730 + 731 + time = ktime_to_timespec64(base_time); 732 + } 733 + 734 + return time; 735 + } 736 + 714 737 static int tc_setup_taprio(struct stmmac_priv *priv, 715 738 struct tc_taprio_qopt_offload *qopt) 716 739 { 717 740 u32 size, wid = priv->dma_cap.estwid, dep = priv->dma_cap.estdep; 718 741 struct plat_stmmacenet_data *plat = priv->plat; 719 - struct timespec64 time, current_time; 742 + struct timespec64 time, current_time, qopt_time; 720 743 ktime_t current_time_ns; 721 744 bool fpe = false; 722 745 int i, ret = 0; ··· 796 773 GFP_KERNEL); 797 774 if (!plat->est) 798 775 return -ENOMEM; 776 + 777 + mutex_init(&priv->plat->est->lock); 799 778 } else { 800 779 memset(plat->est, 0, sizeof(*plat->est)); 801 780 } 802 781 803 782 size = qopt->num_entries; 804 783 784 + mutex_lock(&priv->plat->est->lock); 805 785 priv->plat->est->gcl_size = size; 806 786 priv->plat->est->enable = qopt->enable; 787 + mutex_unlock(&priv->plat->est->lock); 807 788 808 789 for (i = 0; i < size; i++) { 809 790 s64 delta_ns = qopt->entries[i].interval; ··· 838 811 priv->plat->est->gcl[i] = delta_ns | (gates << wid); 839 812 } 840 813 814 + mutex_lock(&priv->plat->est->lock); 841 815 /* Adjust for real system time */ 842 816 priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time); 843 817 current_time_ns = timespec64_to_ktime(current_time); 844 - if (ktime_after(qopt->base_time, current_time_ns)) { 845 - time = ktime_to_timespec64(qopt->base_time); 846 - } else 
{ 847 - ktime_t base_time; 848 - s64 n; 849 - 850 - n = div64_s64(ktime_sub_ns(current_time_ns, qopt->base_time), 851 - qopt->cycle_time); 852 - base_time = ktime_add_ns(qopt->base_time, 853 - (n + 1) * qopt->cycle_time); 854 - 855 - time = ktime_to_timespec64(base_time); 856 - } 818 + time = stmmac_calc_tas_basetime(qopt->base_time, current_time_ns, 819 + qopt->cycle_time); 857 820 858 821 priv->plat->est->btr[0] = (u32)time.tv_nsec; 859 822 priv->plat->est->btr[1] = (u32)time.tv_sec; 823 + 824 + qopt_time = ktime_to_timespec64(qopt->base_time); 825 + priv->plat->est->btr_reserve[0] = (u32)qopt_time.tv_nsec; 826 + priv->plat->est->btr_reserve[1] = (u32)qopt_time.tv_sec; 860 827 861 828 ctr = qopt->cycle_time; 862 829 priv->plat->est->ctr[0] = do_div(ctr, NSEC_PER_SEC); 863 830 priv->plat->est->ctr[1] = (u32)ctr; 864 831 865 - if (fpe && !priv->dma_cap.fpesel) 832 + if (fpe && !priv->dma_cap.fpesel) { 833 + mutex_unlock(&priv->plat->est->lock); 866 834 return -EOPNOTSUPP; 835 + } 867 836 868 837 /* Actual FPE register configuration will be done after FPE handshake 869 838 * is success. ··· 868 845 869 846 ret = stmmac_est_configure(priv, priv->ioaddr, priv->plat->est, 870 847 priv->plat->clk_ptp_rate); 848 + mutex_unlock(&priv->plat->est->lock); 871 849 if (ret) { 872 850 netdev_err(priv->dev, "failed to configure EST\n"); 873 851 goto disable; ··· 884 860 return 0; 885 861 886 862 disable: 863 + mutex_lock(&priv->plat->est->lock); 887 864 priv->plat->est->enable = false; 888 865 stmmac_est_configure(priv, priv->ioaddr, priv->plat->est, 889 866 priv->plat->clk_ptp_rate); 867 + mutex_unlock(&priv->plat->est->lock); 890 868 891 869 priv->plat->fpe_cfg->enable = false; 892 870 stmmac_fpe_configure(priv, priv->ioaddr,
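The factored-out `stmmac_calc_tas_basetime()` implements the taprio base-time rollover rule: a base time still in the future is used as-is, while a stale one is advanced by whole cycles to the first boundary after the current time. This is what lets the PTP adjtime path in stmmac_ptp.c recompute a valid `btr` after the clock jumps. The same arithmetic on plain nanosecond integers (a sketch without the kernel's ktime types):

```c
#include <assert.h>
#include <stdint.h>

/* All times in nanoseconds, standing in for ktime_t. */
static int64_t calc_tas_basetime(int64_t old_base, int64_t now, int64_t cycle)
{
    int64_t n;

    if (old_base > now)
        return old_base; /* still in the future: keep it */

    /* Advance the stale base by whole cycles so it lands on the first
     * cycle boundary strictly after 'now'. */
    n = (now - old_base) / cycle;
    return old_base + (n + 1) * cycle;
}
```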
+1 -2
drivers/net/ethernet/ti/tlan.c
··· 313 313 pci_release_regions(pdev); 314 314 #endif 315 315 316 - free_netdev(dev); 317 - 318 316 cancel_work_sync(&priv->tlan_tqueue); 317 + free_netdev(dev); 319 318 } 320 319 321 320 static void tlan_start(struct net_device *dev)
+1 -2
drivers/net/fddi/defza.c
··· 1504 1504 release_mem_region(start, len); 1505 1505 1506 1506 err_out_kfree: 1507 - free_netdev(dev); 1508 - 1509 1507 pr_err("%s: initialization failure, aborting!\n", fp->name); 1508 + free_netdev(dev); 1510 1509 return ret; 1511 1510 } 1512 1511
+4 -4
drivers/net/netdevsim/ipsec.c
··· 85 85 u32 *mykey, u32 *mysalt) 86 86 { 87 87 const char aes_gcm_name[] = "rfc4106(gcm(aes))"; 88 - struct net_device *dev = xs->xso.dev; 88 + struct net_device *dev = xs->xso.real_dev; 89 89 unsigned char *key_data; 90 90 char *alg_name = NULL; 91 91 int key_len; ··· 134 134 u16 sa_idx; 135 135 int ret; 136 136 137 - dev = xs->xso.dev; 137 + dev = xs->xso.real_dev; 138 138 ns = netdev_priv(dev); 139 139 ipsec = &ns->ipsec; 140 140 ··· 194 194 195 195 static void nsim_ipsec_del_sa(struct xfrm_state *xs) 196 196 { 197 - struct netdevsim *ns = netdev_priv(xs->xso.dev); 197 + struct netdevsim *ns = netdev_priv(xs->xso.real_dev); 198 198 struct nsim_ipsec *ipsec = &ns->ipsec; 199 199 u16 sa_idx; 200 200 ··· 211 211 212 212 static bool nsim_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *xs) 213 213 { 214 - struct netdevsim *ns = netdev_priv(xs->xso.dev); 214 + struct netdevsim *ns = netdev_priv(xs->xso.real_dev); 215 215 struct nsim_ipsec *ipsec = &ns->ipsec; 216 216 217 217 ipsec->ok++;
+35 -5
drivers/net/phy/marvell10g.c
··· 78 78 /* Temperature read register (88E2110 only) */ 79 79 MV_PCS_TEMP = 0x8042, 80 80 81 + /* Number of ports on the device */ 82 + MV_PCS_PORT_INFO = 0xd00d, 83 + MV_PCS_PORT_INFO_NPORTS_MASK = 0x0380, 84 + MV_PCS_PORT_INFO_NPORTS_SHIFT = 7, 85 + 81 86 /* These registers appear at 0x800X and 0xa00X - the 0xa00X control 82 87 * registers appear to set themselves to the 0x800X when AN is 83 88 * restarted, but status registers appear readable from either. ··· 971 966 #endif 972 967 }; 973 968 969 + static int mv3310_get_number_of_ports(struct phy_device *phydev) 970 + { 971 + int ret; 972 + 973 + ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MV_PCS_PORT_INFO); 974 + if (ret < 0) 975 + return ret; 976 + 977 + ret &= MV_PCS_PORT_INFO_NPORTS_MASK; 978 + ret >>= MV_PCS_PORT_INFO_NPORTS_SHIFT; 979 + 980 + return ret + 1; 981 + } 982 + 983 + static int mv3310_match_phy_device(struct phy_device *phydev) 984 + { 985 + return mv3310_get_number_of_ports(phydev) == 1; 986 + } 987 + 988 + static int mv3340_match_phy_device(struct phy_device *phydev) 989 + { 990 + return mv3310_get_number_of_ports(phydev) == 4; 991 + } 992 + 974 993 static int mv211x_match_phy_device(struct phy_device *phydev, bool has_5g) 975 994 { 976 995 int val; ··· 1023 994 static struct phy_driver mv3310_drivers[] = { 1024 995 { 1025 996 .phy_id = MARVELL_PHY_ID_88X3310, 1026 - .phy_id_mask = MARVELL_PHY_ID_88X33X0_MASK, 997 + .phy_id_mask = MARVELL_PHY_ID_MASK, 998 + .match_phy_device = mv3310_match_phy_device, 1027 999 .name = "mv88x3310", 1028 1000 .driver_data = &mv3310_type, 1029 1001 .get_features = mv3310_get_features, ··· 1041 1011 .set_loopback = genphy_c45_loopback, 1042 1012 }, 1043 1013 { 1044 - .phy_id = MARVELL_PHY_ID_88X3340, 1045 - .phy_id_mask = MARVELL_PHY_ID_88X33X0_MASK, 1014 + .phy_id = MARVELL_PHY_ID_88X3310, 1015 + .phy_id_mask = MARVELL_PHY_ID_MASK, 1016 + .match_phy_device = mv3340_match_phy_device, 1046 1017 .name = "mv88x3340", 1047 1018 .driver_data = &mv3340_type, 1048 1019 .get_features = mv3310_get_features, ··· 1100 1069 module_phy_driver(mv3310_drivers); 1101 1070 1102 1071 static struct mdio_device_id __maybe_unused mv3310_tbl[] = { 1103 - { MARVELL_PHY_ID_88X3310, MARVELL_PHY_ID_88X33X0_MASK }, 1104 - { MARVELL_PHY_ID_88X3340, MARVELL_PHY_ID_88X33X0_MASK }, 1072 + { MARVELL_PHY_ID_88X3310, MARVELL_PHY_ID_MASK }, 1105 1073 { MARVELL_PHY_ID_88E2110, MARVELL_PHY_ID_MASK }, 1106 1074 { }, 1107 1075 };
+1
drivers/net/usb/asix_devices.c
··· 701 701 return ret; 702 702 } 703 703 704 + phy_suspend(priv->phydev); 704 705 priv->phydev->mac_managed_pm = 1; 705 706 706 707 phy_attached_info(priv->phydev);
+7 -1
drivers/net/virtio_net.c
··· 1771 1771 { 1772 1772 struct scatterlist *sgs[4], hdr, stat; 1773 1773 unsigned out_num = 0, tmp; 1774 + int ret; 1774 1775 1775 1776 /* Caller should know better */ 1776 1777 BUG_ON(!virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VQ)); ··· 1791 1790 sgs[out_num] = &stat; 1792 1791 1793 1792 BUG_ON(out_num + 1 > ARRAY_SIZE(sgs)); 1794 - virtqueue_add_sgs(vi->cvq, sgs, out_num, 1, vi, GFP_ATOMIC); 1793 + ret = virtqueue_add_sgs(vi->cvq, sgs, out_num, 1, vi, GFP_ATOMIC); 1794 + if (ret < 0) { 1795 + dev_warn(&vi->vdev->dev, 1796 + "Failed to add sgs for command vq: %d\n.", ret); 1797 + return false; 1798 + } 1795 1799 1796 1800 if (unlikely(!virtqueue_kick(vi->cvq))) 1797 1801 return vi->ctrl->status == VIRTIO_NET_OK;
+20 -2
drivers/net/vmxnet3/vmxnet3_ethtool.c
··· 1 1 /* 2 2 * Linux driver for VMware's vmxnet3 ethernet NIC. 3 3 * 4 - * Copyright (C) 2008-2020, VMware, Inc. All Rights Reserved. 4 + * Copyright (C) 2008-2021, VMware, Inc. All Rights Reserved. 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify it 7 7 * under the terms of the GNU General Public License as published by the ··· 26 26 27 27 28 28 #include "vmxnet3_int.h" 29 + #include <net/vxlan.h> 30 + #include <net/geneve.h> 31 + 32 + #define VXLAN_UDP_PORT 8472 29 33 30 34 struct vmxnet3_stat_desc { 31 35 char desc[ETH_GSTRING_LEN]; ··· 266 262 if (VMXNET3_VERSION_GE_4(adapter) && 267 263 skb->encapsulation && skb->ip_summed == CHECKSUM_PARTIAL) { 268 264 u8 l4_proto = 0; 265 + u16 port; 266 + struct udphdr *udph; 269 267 270 268 switch (vlan_get_protocol(skb)) { 271 269 case htons(ETH_P_IP): ··· 280 274 return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); 281 275 } 282 276 283 - if (l4_proto != IPPROTO_UDP) 277 + switch (l4_proto) { 278 + case IPPROTO_UDP: 279 + udph = udp_hdr(skb); 280 + port = be16_to_cpu(udph->dest); 281 + /* Check if offloaded port is supported */ 282 + if (port != GENEVE_UDP_PORT && 283 + port != IANA_VXLAN_UDP_PORT && 284 + port != VXLAN_UDP_PORT) { 285 + return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); 286 + } 287 + break; 288 + default: 284 289 return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK); 290 + } 285 291 } 286 292 return features; 287 293 }
+4 -4
drivers/net/wan/hdlc_cisco.c
··· 364 364 return -EINVAL; 365 365 } 366 366 367 - static int __init mod_init(void) 367 + static int __init hdlc_cisco_init(void) 368 368 { 369 369 register_hdlc_protocol(&proto); 370 370 return 0; 371 371 } 372 372 373 - static void __exit mod_exit(void) 373 + static void __exit hdlc_cisco_exit(void) 374 374 { 375 375 unregister_hdlc_protocol(&proto); 376 376 } 377 377 378 - module_init(mod_init); 379 - module_exit(mod_exit); 378 + module_init(hdlc_cisco_init); 379 + module_exit(hdlc_cisco_exit); 380 380 381 381 MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>"); 382 382 MODULE_DESCRIPTION("Cisco HDLC protocol support for generic HDLC");
+4 -4
drivers/net/wan/hdlc_fr.c
··· 1279 1279 return -EINVAL; 1280 1280 } 1281 1281 1282 - static int __init mod_init(void) 1282 + static int __init hdlc_fr_init(void) 1283 1283 { 1284 1284 register_hdlc_protocol(&proto); 1285 1285 return 0; 1286 1286 } 1287 1287 1288 - static void __exit mod_exit(void) 1288 + static void __exit hdlc_fr_exit(void) 1289 1289 { 1290 1290 unregister_hdlc_protocol(&proto); 1291 1291 } 1292 1292 1293 - module_init(mod_init); 1294 - module_exit(mod_exit); 1293 + module_init(hdlc_fr_init); 1294 + module_exit(hdlc_fr_exit); 1295 1295 1296 1296 MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>"); 1297 1297 MODULE_DESCRIPTION("Frame-Relay protocol support for generic HDLC");
+4 -4
drivers/net/wan/hdlc_ppp.c
··· 705 705 return -EINVAL; 706 706 } 707 707 708 - static int __init mod_init(void) 708 + static int __init hdlc_ppp_init(void) 709 709 { 710 710 skb_queue_head_init(&tx_queue); 711 711 register_hdlc_protocol(&proto); 712 712 return 0; 713 713 } 714 714 715 - static void __exit mod_exit(void) 715 + static void __exit hdlc_ppp_exit(void) 716 716 { 717 717 unregister_hdlc_protocol(&proto); 718 718 } 719 719 720 - module_init(mod_init); 721 - module_exit(mod_exit); 720 + module_init(hdlc_ppp_init); 721 + module_exit(hdlc_ppp_exit); 722 722 723 723 MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>"); 724 724 MODULE_DESCRIPTION("PPP protocol support for generic HDLC");
+4 -4
drivers/net/wan/hdlc_raw.c
··· 90 90 } 91 91 92 92 93 - static int __init mod_init(void) 93 + static int __init hdlc_raw_init(void) 94 94 { 95 95 register_hdlc_protocol(&proto); 96 96 return 0; ··· 98 98 99 99 100 100 101 - static void __exit mod_exit(void) 101 + static void __exit hdlc_raw_exit(void) 102 102 { 103 103 unregister_hdlc_protocol(&proto); 104 104 } 105 105 106 106 107 - module_init(mod_init); 108 - module_exit(mod_exit); 107 + module_init(hdlc_raw_init); 108 + module_exit(hdlc_raw_exit); 109 109 110 110 MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>"); 111 111 MODULE_DESCRIPTION("Raw HDLC protocol support for generic HDLC");
+4 -4
drivers/net/wan/hdlc_raw_eth.c
··· 110 110 } 111 111 112 112 113 - static int __init mod_init(void) 113 + static int __init hdlc_eth_init(void) 114 114 { 115 115 register_hdlc_protocol(&proto); 116 116 return 0; ··· 118 118 119 119 120 120 121 - static void __exit mod_exit(void) 121 + static void __exit hdlc_eth_exit(void) 122 122 { 123 123 unregister_hdlc_protocol(&proto); 124 124 } 125 125 126 126 127 - module_init(mod_init); 128 - module_exit(mod_exit); 127 + module_init(hdlc_eth_init); 128 + module_exit(hdlc_eth_exit); 129 129 130 130 MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>"); 131 131 MODULE_DESCRIPTION("Ethernet encapsulation support for generic HDLC");
+4 -4
drivers/net/wan/hdlc_x25.c
··· 365 365 return -EINVAL; 366 366 } 367 367 368 - static int __init mod_init(void) 368 + static int __init hdlc_x25_init(void) 369 369 { 370 370 register_hdlc_protocol(&proto); 371 371 return 0; 372 372 } 373 373 374 - static void __exit mod_exit(void) 374 + static void __exit hdlc_x25_exit(void) 375 375 { 376 376 unregister_hdlc_protocol(&proto); 377 377 } 378 378 379 - module_init(mod_init); 380 - module_exit(mod_exit); 379 + module_init(hdlc_x25_init); 380 + module_exit(hdlc_x25_exit); 381 381 382 382 MODULE_AUTHOR("Krzysztof Halasa <khc@pm.waw.pl>"); 383 383 MODULE_DESCRIPTION("X.25 protocol support for generic HDLC");
+2 -1
drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
··· 931 931 ret = mt76_get_field(dev, MT_CONN_ON_MISC, MT_TOP_MISC2_FW_N9_RDY); 932 932 if (ret) { 933 933 dev_dbg(dev->mt76.dev, "Firmware is already download\n"); 934 - return -EIO; 934 + goto fw_loaded; 935 935 } 936 936 937 937 ret = mt7921_load_patch(dev); ··· 949 949 return -EIO; 950 950 } 951 951 952 + fw_loaded: 952 953 mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[MT_MCUQ_FWDL], false); 953 954 954 955 #ifdef CONFIG_PM
+4 -17
drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
··· 24 24 return -EIO; 25 25 } 26 26 27 - /* check for the interafce id 28 - * if if_id 1 to 8 then create IP MUX channel sessions. 29 - * To start MUX session from 0 as network interface id would start 30 - * from 1 so map it to if_id = if_id - 1 31 - */ 32 - if (if_id >= IP_MUX_SESSION_START && if_id <= IP_MUX_SESSION_END) 33 - return ipc_mux_open_session(ipc_imem->mux, if_id - 1); 34 - 35 - return -EINVAL; 27 + return ipc_mux_open_session(ipc_imem->mux, if_id); 36 28 } 37 29 38 30 /* Release a net link to CP. */ ··· 33 41 { 34 42 if (ipc_imem->mux && if_id >= IP_MUX_SESSION_START && 35 43 if_id <= IP_MUX_SESSION_END) 36 - ipc_mux_close_session(ipc_imem->mux, if_id - 1); 44 + ipc_mux_close_session(ipc_imem->mux, if_id); 37 45 } 38 46 39 47 /* Tasklet call to do uplink transfer. */ ··· 75 83 goto out; 76 84 } 77 85 78 - if (if_id >= IP_MUX_SESSION_START && if_id <= IP_MUX_SESSION_END) 79 - /* Route the UL packet through IP MUX Layer */ 80 - ret = ipc_mux_ul_trigger_encode(ipc_imem->mux, 81 - if_id - 1, skb); 82 - else 83 - dev_err(ipc_imem->dev, 84 - "invalid if_id %d: ", if_id); 86 + /* Route the UL packet through IP MUX Layer */ 87 + ret = ipc_mux_ul_trigger_encode(ipc_imem->mux, if_id, skb); 85 88 out: 86 89 return ret; 87 90 }
+3 -3
drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
··· 27 27 #define BOOT_CHECK_DEFAULT_TIMEOUT 400 28 28 29 29 /* IP MUX channel range */ 30 - #define IP_MUX_SESSION_START 1 31 - #define IP_MUX_SESSION_END 8 30 + #define IP_MUX_SESSION_START 0 31 + #define IP_MUX_SESSION_END 7 32 32 33 33 /* Default IP MUX channel */ 34 - #define IP_MUX_SESSION_DEFAULT 1 34 + #define IP_MUX_SESSION_DEFAULT 0 35 35 36 36 /** 37 37 * ipc_imem_sys_port_open - Open a port link to CP.
+1 -1
drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
··· 288 288 /* Pass the packet to the netif layer. */ 289 289 dest_skb->priority = service_class; 290 290 291 - return ipc_wwan_receive(wwan, dest_skb, false, if_id + 1); 291 + return ipc_wwan_receive(wwan, dest_skb, false, if_id); 292 292 } 293 293 294 294 /* Decode Flow Credit Table in the block */
+1 -1
drivers/net/wwan/iosm/iosm_ipc_uevent.c
··· 37 37 38 38 /* Store the device and event information */ 39 39 info->dev = dev; 40 - snprintf(info->uevent, MAX_UEVENT_LEN, "%s: %s", dev_name(dev), uevent); 40 + snprintf(info->uevent, MAX_UEVENT_LEN, "IOSM_EVENT=%s", uevent); 41 41 42 42 /* Schedule uevent in process context using work queue */ 43 43 schedule_work(&info->work);
+8 -3
drivers/net/wwan/iosm/iosm_ipc_wwan.c
··· 107 107 { 108 108 struct iosm_netdev_priv *priv = wwan_netdev_drvpriv(netdev); 109 109 struct iosm_wwan *ipc_wwan = priv->ipc_wwan; 110 + unsigned int len = skb->len; 110 111 int if_id = priv->if_id; 111 112 int ret; 112 113 ··· 124 123 125 124 /* Return code of zero is success */ 126 125 if (ret == 0) { 126 + netdev->stats.tx_packets++; 127 + netdev->stats.tx_bytes += len; 127 128 ret = NETDEV_TX_OK; 128 129 } else if (ret == -EBUSY) { 129 130 ret = NETDEV_TX_BUSY; ··· 143 140 ret); 144 141 145 142 dev_kfree_skb_any(skb); 146 - return ret; 143 + netdev->stats.tx_dropped++; 144 + return NETDEV_TX_OK; 147 145 } 148 146 149 147 /* Ops structure for wwan net link */ ··· 162 158 iosm_dev->priv_flags |= IFF_NO_QUEUE; 163 159 164 160 iosm_dev->type = ARPHRD_NONE; 161 + iosm_dev->mtu = ETH_DATA_LEN; 165 162 iosm_dev->min_mtu = ETH_MIN_MTU; 166 163 iosm_dev->max_mtu = ETH_MAX_MTU; 167 164 ··· 257 252 258 253 skb->pkt_type = PACKET_HOST; 259 254 260 - if (if_id < (IP_MUX_SESSION_START - 1) || 261 - if_id > (IP_MUX_SESSION_END - 1)) { 255 + if (if_id < IP_MUX_SESSION_START || 256 + if_id > IP_MUX_SESSION_END) { 262 257 ret = -EINVAL; 263 258 goto free; 264 259 }
+1 -1
drivers/ptp/Makefile
··· 3 3 # Makefile for PTP 1588 clock support. 4 4 # 5 5 6 - ptp-y := ptp_clock.o ptp_chardev.o ptp_sysfs.o 6 + ptp-y := ptp_clock.o ptp_chardev.o ptp_sysfs.o ptp_vclock.o 7 7 ptp_kvm-$(CONFIG_X86) := ptp_kvm_x86.o ptp_kvm_common.o 8 8 ptp_kvm-$(CONFIG_HAVE_ARM_SMCCC) := ptp_kvm_arm.o ptp_kvm_common.o 9 9 obj-$(CONFIG_PTP_1588_CLOCK) += ptp.o
+42 -2
drivers/ptp/ptp_clock.c
··· 24 24 #define PTP_PPS_EVENT PPS_CAPTUREASSERT 25 25 #define PTP_PPS_MODE (PTP_PPS_DEFAULTS | PPS_CANWAIT | PPS_TSFMT_TSPEC) 26 26 27 + struct class *ptp_class; 28 + 27 29 /* private globals */ 28 30 29 31 static dev_t ptp_devt; 30 - static struct class *ptp_class; 31 32 32 33 static DEFINE_IDA(ptp_clocks_map); 33 34 ··· 77 76 { 78 77 struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock); 79 78 79 + if (ptp_vclock_in_use(ptp)) { 80 + pr_err("ptp: virtual clock in use\n"); 81 + return -EBUSY; 82 + } 83 + 80 84 return ptp->info->settime64(ptp->info, tp); 81 85 } 82 86 ··· 102 96 struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock); 103 97 struct ptp_clock_info *ops; 104 98 int err = -EOPNOTSUPP; 99 + 100 + if (ptp_vclock_in_use(ptp)) { 101 + pr_err("ptp: virtual clock in use\n"); 102 + return -EBUSY; 103 + } 105 104 106 105 ops = ptp->info; 107 106 ··· 172 161 ptp_cleanup_pin_groups(ptp); 173 162 mutex_destroy(&ptp->tsevq_mux); 174 163 mutex_destroy(&ptp->pincfg_mux); 164 + mutex_destroy(&ptp->n_vclocks_mux); 175 165 ida_simple_remove(&ptp_clocks_map, ptp->index); 176 166 kfree(ptp); 177 167 } ··· 197 185 { 198 186 struct ptp_clock *ptp; 199 187 int err = 0, index, major = MAJOR(ptp_devt); 188 + size_t size; 200 189 201 190 if (info->n_alarm > PTP_MAX_ALARMS) 202 191 return ERR_PTR(-EINVAL); ··· 221 208 spin_lock_init(&ptp->tsevq.lock); 222 209 mutex_init(&ptp->tsevq_mux); 223 210 mutex_init(&ptp->pincfg_mux); 211 + mutex_init(&ptp->n_vclocks_mux); 224 212 init_waitqueue_head(&ptp->tsev_wq); 225 213 226 214 if (ptp->info->do_aux_work) { ··· 232 218 pr_err("failed to create ptp aux_worker %d\n", err); 233 219 goto kworker_err; 234 220 } 235 - ptp->pps_source->lookup_cookie = ptp; 221 + } 222 + 223 + /* PTP virtual clock is being registered under physical clock */ 224 + if (parent && parent->class && parent->class->name && 225 + strcmp(parent->class->name, "ptp") == 0) 226 + ptp->is_virtual_clock = true; 227 + 228 + if (!ptp->is_virtual_clock) { 229 + ptp->max_vclocks = PTP_DEFAULT_MAX_VCLOCKS; 230 + 231 + size = sizeof(int) * ptp->max_vclocks; 232 + ptp->vclock_index = kzalloc(size, GFP_KERNEL); 233 + if (!ptp->vclock_index) { 234 + err = -ENOMEM; 235 + goto no_mem_for_vclocks; 236 + } 236 237 } 237 238 238 239 err = ptp_populate_pin_groups(ptp); ··· 267 238 pr_err("failed to register pps source\n"); 268 239 goto no_pps; 269 240 } 241 + ptp->pps_source->lookup_cookie = ptp; 270 242 } 271 243 272 244 /* Initialize a new device of our class in our clock structure. */ ··· 295 265 no_pps: 296 266 ptp_cleanup_pin_groups(ptp); 297 267 no_pin_groups: 268 + kfree(ptp->vclock_index); 269 + no_mem_for_vclocks: 298 270 if (ptp->kworker) 299 271 kthread_destroy_worker(ptp->kworker); 300 272 kworker_err: 301 273 mutex_destroy(&ptp->tsevq_mux); 302 274 mutex_destroy(&ptp->pincfg_mux); 275 + mutex_destroy(&ptp->n_vclocks_mux); 303 276 ida_simple_remove(&ptp_clocks_map, index); 304 277 no_slot: 305 278 kfree(ptp); ··· 313 280 314 281 int ptp_clock_unregister(struct ptp_clock *ptp) 315 282 { 283 + if (ptp_vclock_in_use(ptp)) { 284 + pr_err("ptp: virtual clock in use\n"); 285 + return -EBUSY; 286 + } 287 + 316 288 ptp->defunct = 1; 317 289 wake_up_interruptible(&ptp->tsev_wq); 290 + 291 + kfree(ptp->vclock_index); 318 292 319 293 if (ptp->kworker) { 320 294 kthread_cancel_delayed_work_sync(&ptp->aux_work);
+39
drivers/ptp/ptp_private.h
··· 18 18 19 19 #define PTP_MAX_TIMESTAMPS 128 20 20 #define PTP_BUF_TIMESTAMPS 30 21 + #define PTP_DEFAULT_MAX_VCLOCKS 20 21 22 22 23 struct timestamp_event_queue { 23 24 struct ptp_extts_event buf[PTP_MAX_TIMESTAMPS]; ··· 47 46 const struct attribute_group *pin_attr_groups[2]; 48 47 struct kthread_worker *kworker; 49 48 struct kthread_delayed_work aux_work; 49 + unsigned int max_vclocks; 50 + unsigned int n_vclocks; 51 + int *vclock_index; 52 + struct mutex n_vclocks_mux; /* protect concurrent n_vclocks access */ 53 + bool is_virtual_clock; 54 + }; 55 + 56 + #define info_to_vclock(d) container_of((d), struct ptp_vclock, info) 57 + #define cc_to_vclock(d) container_of((d), struct ptp_vclock, cc) 58 + #define dw_to_vclock(d) container_of((d), struct ptp_vclock, refresh_work) 59 + 60 + struct ptp_vclock { 61 + struct ptp_clock *pclock; 62 + struct ptp_clock_info info; 63 + struct ptp_clock *clock; 64 + struct cyclecounter cc; 65 + struct timecounter tc; 66 + spinlock_t lock; /* protects tc/cc */ 50 67 }; 51 68 52 69 /* ··· 79 60 int cnt = q->tail - q->head; 80 61 return cnt < 0 ? PTP_MAX_TIMESTAMPS + cnt : cnt; 81 62 } 63 + 64 + /* Check if ptp virtual clock is in use */ 65 + static inline bool ptp_vclock_in_use(struct ptp_clock *ptp) 66 + { 67 + bool in_use = false; 68 + 69 + if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) 70 + return true; 71 + 72 + if (!ptp->is_virtual_clock && ptp->n_vclocks) 73 + in_use = true; 74 + 75 + mutex_unlock(&ptp->n_vclocks_mux); 76 + 77 + return in_use; 78 + } 79 + 80 + extern struct class *ptp_class; 82 81 83 82 /* 84 83 * see ptp_chardev.c ··· 126 89 int ptp_populate_pin_groups(struct ptp_clock *ptp); 127 90 void ptp_cleanup_pin_groups(struct ptp_clock *ptp); 128 91 92 + struct ptp_vclock *ptp_vclock_register(struct ptp_clock *pclock); 93 + void ptp_vclock_unregister(struct ptp_vclock *vclock); 129 94 #endif
+160
drivers/ptp/ptp_sysfs.c
··· 3 3 * PTP 1588 clock support - sysfs interface. 4 4 * 5 5 * Copyright (C) 2010 OMICRON electronics GmbH 6 + * Copyright 2021 NXP 6 7 */ 7 8 #include <linux/capability.h> 8 9 #include <linux/slab.h> ··· 149 148 } 150 149 static DEVICE_ATTR(pps_enable, 0220, NULL, pps_enable_store); 151 150 151 + static int unregister_vclock(struct device *dev, void *data) 152 + { 153 + struct ptp_clock *ptp = dev_get_drvdata(dev); 154 + struct ptp_clock_info *info = ptp->info; 155 + struct ptp_vclock *vclock; 156 + u8 *num = data; 157 + 158 + vclock = info_to_vclock(info); 159 + dev_info(dev->parent, "delete virtual clock ptp%d\n", 160 + vclock->clock->index); 161 + 162 + ptp_vclock_unregister(vclock); 163 + (*num)--; 164 + 165 + /* For break. Not error. */ 166 + if (*num == 0) 167 + return -EINVAL; 168 + 169 + return 0; 170 + } 171 + 172 + static ssize_t n_vclocks_show(struct device *dev, 173 + struct device_attribute *attr, char *page) 174 + { 175 + struct ptp_clock *ptp = dev_get_drvdata(dev); 176 + ssize_t size; 177 + 178 + if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) 179 + return -ERESTARTSYS; 180 + 181 + size = snprintf(page, PAGE_SIZE - 1, "%u\n", ptp->n_vclocks); 182 + 183 + mutex_unlock(&ptp->n_vclocks_mux); 184 + 185 + return size; 186 + } 187 + 188 + static ssize_t n_vclocks_store(struct device *dev, 189 + struct device_attribute *attr, 190 + const char *buf, size_t count) 191 + { 192 + struct ptp_clock *ptp = dev_get_drvdata(dev); 193 + struct ptp_vclock *vclock; 194 + int err = -EINVAL; 195 + u32 num, i; 196 + 197 + if (kstrtou32(buf, 0, &num)) 198 + return err; 199 + 200 + if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) 201 + return -ERESTARTSYS; 202 + 203 + if (num > ptp->max_vclocks) { 204 + dev_err(dev, "max value is %d\n", ptp->max_vclocks); 205 + goto out; 206 + } 207 + 208 + /* Need to create more vclocks */ 209 + if (num > ptp->n_vclocks) { 210 + for (i = 0; i < num - ptp->n_vclocks; i++) { 211 + vclock = ptp_vclock_register(ptp); 212 + if (!vclock) 
213 + goto out; 214 + 215 + *(ptp->vclock_index + ptp->n_vclocks + i) = 216 + vclock->clock->index; 217 + 218 + dev_info(dev, "new virtual clock ptp%d\n", 219 + vclock->clock->index); 220 + } 221 + } 222 + 223 + /* Need to delete vclocks */ 224 + if (num < ptp->n_vclocks) { 225 + i = ptp->n_vclocks - num; 226 + device_for_each_child_reverse(dev, &i, 227 + unregister_vclock); 228 + 229 + for (i = 1; i <= ptp->n_vclocks - num; i++) 230 + *(ptp->vclock_index + ptp->n_vclocks - i) = -1; 231 + } 232 + 233 + if (num == 0) 234 + dev_info(dev, "only physical clock in use now\n"); 235 + else 236 + dev_info(dev, "guarantee physical clock free running\n"); 237 + 238 + ptp->n_vclocks = num; 239 + mutex_unlock(&ptp->n_vclocks_mux); 240 + 241 + return count; 242 + out: 243 + mutex_unlock(&ptp->n_vclocks_mux); 244 + return err; 245 + } 246 + static DEVICE_ATTR_RW(n_vclocks); 247 + 248 + static ssize_t max_vclocks_show(struct device *dev, 249 + struct device_attribute *attr, char *page) 250 + { 251 + struct ptp_clock *ptp = dev_get_drvdata(dev); 252 + ssize_t size; 253 + 254 + size = snprintf(page, PAGE_SIZE - 1, "%u\n", ptp->max_vclocks); 255 + 256 + return size; 257 + } 258 + 259 + static ssize_t max_vclocks_store(struct device *dev, 260 + struct device_attribute *attr, 261 + const char *buf, size_t count) 262 + { 263 + struct ptp_clock *ptp = dev_get_drvdata(dev); 264 + unsigned int *vclock_index; 265 + int err = -EINVAL; 266 + size_t size; 267 + u32 max; 268 + 269 + if (kstrtou32(buf, 0, &max) || max == 0) 270 + return -EINVAL; 271 + 272 + if (max == ptp->max_vclocks) 273 + return count; 274 + 275 + if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) 276 + return -ERESTARTSYS; 277 + 278 + if (max < ptp->n_vclocks) 279 + goto out; 280 + 281 + size = sizeof(int) * max; 282 + vclock_index = kzalloc(size, GFP_KERNEL); 283 + if (!vclock_index) { 284 + err = -ENOMEM; 285 + goto out; 286 + } 287 + 288 + size = sizeof(int) * ptp->n_vclocks; 289 + memcpy(vclock_index, ptp->vclock_index, size); 290 + 291 + kfree(ptp->vclock_index); 292 + ptp->vclock_index = vclock_index; 293 + ptp->max_vclocks = max; 294 + 295 + mutex_unlock(&ptp->n_vclocks_mux); 296 + 297 + return count; 298 + out: 299 + mutex_unlock(&ptp->n_vclocks_mux); 300 + return err; 301 + } 302 + static DEVICE_ATTR_RW(max_vclocks); 303 + 152 304 static struct attribute *ptp_attrs[] = { 153 305 &dev_attr_clock_name.attr, 154 306 ··· 316 162 &dev_attr_fifo.attr, 317 163 &dev_attr_period.attr, 318 164 &dev_attr_pps_enable.attr, 165 + &dev_attr_n_vclocks.attr, 166 + &dev_attr_max_vclocks.attr, 319 167 NULL 320 168 }; 321 169 ··· 338 182 mode = 0; 339 183 } else if (attr == &dev_attr_pps_enable.attr) { 340 184 if (!info->pps) 185 + mode = 0; 186 + } else if (attr == &dev_attr_n_vclocks.attr || 187 + attr == &dev_attr_max_vclocks.attr) { 188 + if (ptp->is_virtual_clock) 341 189 mode = 0; 342 190 } 343 191
+219
drivers/ptp/ptp_vclock.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * PTP virtual clock driver 4 + * 5 + * Copyright 2021 NXP 6 + */ 7 + #include <linux/slab.h> 8 + #include "ptp_private.h" 9 + 10 + #define PTP_VCLOCK_CC_SHIFT 31 11 + #define PTP_VCLOCK_CC_MULT (1 << PTP_VCLOCK_CC_SHIFT) 12 + #define PTP_VCLOCK_FADJ_SHIFT 9 13 + #define PTP_VCLOCK_FADJ_DENOMINATOR 15625ULL 14 + #define PTP_VCLOCK_REFRESH_INTERVAL (HZ * 2) 15 + 16 + static int ptp_vclock_adjfine(struct ptp_clock_info *ptp, long scaled_ppm) 17 + { 18 + struct ptp_vclock *vclock = info_to_vclock(ptp); 19 + unsigned long flags; 20 + s64 adj; 21 + 22 + adj = (s64)scaled_ppm << PTP_VCLOCK_FADJ_SHIFT; 23 + adj = div_s64(adj, PTP_VCLOCK_FADJ_DENOMINATOR); 24 + 25 + spin_lock_irqsave(&vclock->lock, flags); 26 + timecounter_read(&vclock->tc); 27 + vclock->cc.mult = PTP_VCLOCK_CC_MULT + adj; 28 + spin_unlock_irqrestore(&vclock->lock, flags); 29 + 30 + return 0; 31 + } 32 + 33 + static int ptp_vclock_adjtime(struct ptp_clock_info *ptp, s64 delta) 34 + { 35 + struct ptp_vclock *vclock = info_to_vclock(ptp); 36 + unsigned long flags; 37 + 38 + spin_lock_irqsave(&vclock->lock, flags); 39 + timecounter_adjtime(&vclock->tc, delta); 40 + spin_unlock_irqrestore(&vclock->lock, flags); 41 + 42 + return 0; 43 + } 44 + 45 + static int ptp_vclock_gettime(struct ptp_clock_info *ptp, 46 + struct timespec64 *ts) 47 + { 48 + struct ptp_vclock *vclock = info_to_vclock(ptp); 49 + unsigned long flags; 50 + u64 ns; 51 + 52 + spin_lock_irqsave(&vclock->lock, flags); 53 + ns = timecounter_read(&vclock->tc); 54 + spin_unlock_irqrestore(&vclock->lock, flags); 55 + *ts = ns_to_timespec64(ns); 56 + 57 + return 0; 58 + } 59 + 60 + static int ptp_vclock_settime(struct ptp_clock_info *ptp, 61 + const struct timespec64 *ts) 62 + { 63 + struct ptp_vclock *vclock = info_to_vclock(ptp); 64 + u64 ns = timespec64_to_ns(ts); 65 + unsigned long flags; 66 + 67 + spin_lock_irqsave(&vclock->lock, flags); 68 + timecounter_init(&vclock->tc, &vclock->cc, ns); 69 + spin_unlock_irqrestore(&vclock->lock, flags); 70 + 71 + return 0; 72 + } 73 + 74 + static long ptp_vclock_refresh(struct ptp_clock_info *ptp) 75 + { 76 + struct ptp_vclock *vclock = info_to_vclock(ptp); 77 + struct timespec64 ts; 78 + 79 + ptp_vclock_gettime(&vclock->info, &ts); 80 + 81 + return PTP_VCLOCK_REFRESH_INTERVAL; 82 + } 83 + 84 + static const struct ptp_clock_info ptp_vclock_info = { 85 + .owner = THIS_MODULE, 86 + .name = "ptp virtual clock", 87 + /* The maximum ppb value that long scaled_ppm can support */ 88 + .max_adj = 32767999, 89 + .adjfine = ptp_vclock_adjfine, 90 + .adjtime = ptp_vclock_adjtime, 91 + .gettime64 = ptp_vclock_gettime, 92 + .settime64 = ptp_vclock_settime, 93 + .do_aux_work = ptp_vclock_refresh, 94 + }; 95 + 96 + static u64 ptp_vclock_read(const struct cyclecounter *cc) 97 + { 98 + struct ptp_vclock *vclock = cc_to_vclock(cc); 99 + struct ptp_clock *ptp = vclock->pclock; 100 + struct timespec64 ts = {}; 101 + 102 + if (ptp->info->gettimex64) 103 + ptp->info->gettimex64(ptp->info, &ts, NULL); 104 + else 105 + ptp->info->gettime64(ptp->info, &ts); 106 + 107 + return timespec64_to_ns(&ts); 108 + } 109 + 110 + static const struct cyclecounter ptp_vclock_cc = { 111 + .read = ptp_vclock_read, 112 + .mask = CYCLECOUNTER_MASK(32), 113 + .mult = PTP_VCLOCK_CC_MULT, 114 + .shift = PTP_VCLOCK_CC_SHIFT, 115 + }; 116 + 117 + struct ptp_vclock *ptp_vclock_register(struct ptp_clock *pclock) 118 + { 119 + struct ptp_vclock *vclock; 120 + 121 + vclock = kzalloc(sizeof(*vclock), GFP_KERNEL); 122 + if (!vclock) 123 + return NULL; 124 + 125 + vclock->pclock = pclock; 126 + vclock->info = ptp_vclock_info; 127 + vclock->cc = ptp_vclock_cc; 128 + 129 + snprintf(vclock->info.name, PTP_CLOCK_NAME_LEN, "ptp%d_virt", 130 + pclock->index); 131 + 132 + spin_lock_init(&vclock->lock); 133 + 134 + vclock->clock = ptp_clock_register(&vclock->info, &pclock->dev); 135 + if (IS_ERR_OR_NULL(vclock->clock)) { 136 + kfree(vclock); 137 + return NULL; 138 + }
139 + 140 + timecounter_init(&vclock->tc, &vclock->cc, 0); 141 + ptp_schedule_worker(vclock->clock, PTP_VCLOCK_REFRESH_INTERVAL); 142 + 143 + return vclock; 144 + } 145 + 146 + void ptp_vclock_unregister(struct ptp_vclock *vclock) 147 + { 148 + ptp_clock_unregister(vclock->clock); 149 + kfree(vclock); 150 + } 151 + 152 + int ptp_get_vclocks_index(int pclock_index, int **vclock_index) 153 + { 154 + char name[PTP_CLOCK_NAME_LEN] = ""; 155 + struct ptp_clock *ptp; 156 + struct device *dev; 157 + int num = 0; 158 + 159 + if (pclock_index < 0) 160 + return num; 161 + 162 + snprintf(name, PTP_CLOCK_NAME_LEN, "ptp%d", pclock_index); 163 + dev = class_find_device_by_name(ptp_class, name); 164 + if (!dev) 165 + return num; 166 + 167 + ptp = dev_get_drvdata(dev); 168 + 169 + if (mutex_lock_interruptible(&ptp->n_vclocks_mux)) { 170 + put_device(dev); 171 + return num; 172 + } 173 + 174 + *vclock_index = kzalloc(sizeof(int) * ptp->n_vclocks, GFP_KERNEL); 175 + if (!(*vclock_index)) 176 + goto out; 177 + 178 + memcpy(*vclock_index, ptp->vclock_index, sizeof(int) * ptp->n_vclocks); 179 + num = ptp->n_vclocks; 180 + out: 181 + mutex_unlock(&ptp->n_vclocks_mux); 182 + put_device(dev); 183 + return num; 184 + } 185 + EXPORT_SYMBOL(ptp_get_vclocks_index); 186 + 187 + void ptp_convert_timestamp(struct skb_shared_hwtstamps *hwtstamps, 188 + int vclock_index) 189 + { 190 + char name[PTP_CLOCK_NAME_LEN] = ""; 191 + struct ptp_vclock *vclock; 192 + struct ptp_clock *ptp; 193 + unsigned long flags; 194 + struct device *dev; 195 + u64 ns; 196 + 197 + snprintf(name, PTP_CLOCK_NAME_LEN, "ptp%d", vclock_index); 198 + dev = class_find_device_by_name(ptp_class, name); 199 + if (!dev) 200 + return; 201 + 202 + ptp = dev_get_drvdata(dev); 203 + if (!ptp->is_virtual_clock) { 204 + put_device(dev); 205 + return; 206 + } 207 + 208 + vclock = info_to_vclock(ptp->info); 209 + 210 + ns = ktime_to_ns(hwtstamps->hwtstamp); 211 + 212 + spin_lock_irqsave(&vclock->lock, flags); 213 + ns = timecounter_cyc2time(&vclock->tc, ns); 214 + spin_unlock_irqrestore(&vclock->lock, flags); 215 + 216 + put_device(dev); 217 + hwtstamps->hwtstamp = ns_to_ktime(ns); 218 + } 219 + EXPORT_SYMBOL(ptp_convert_timestamp);
+1
include/linux/bpf.h
··· 780 780 void *tailcall_target; 781 781 void *tailcall_bypass; 782 782 void *bypass_addr; 783 + void *aux; 783 784 union { 784 785 struct { 785 786 struct bpf_map *map;
+10
include/linux/ethtool.h
··· 758 758 enum ethtool_link_mode_bit_indices link_mode); 759 759 760 760 /** 761 + * ethtool_get_phc_vclocks - Derive phc vclocks information, and caller 762 + * is responsible to free memory of vclock_index 763 + * @dev: pointer to net_device structure 764 + * @vclock_index: pointer to pointer of vclock index 765 + * 766 + * Return number of phc vclocks 767 + */ 768 + int ethtool_get_phc_vclocks(struct net_device *dev, int **vclock_index); 769 + 770 + /** 761 771 * ethtool_sprintf - Write formatted string to ethtool string data 762 772 * @data: Pointer to start of string to update 763 773 * @fmt: Format of string to write
+1 -5
include/linux/marvell_phy.h
··· 22 22 #define MARVELL_PHY_ID_88E1545 0x01410ea0 23 23 #define MARVELL_PHY_ID_88E1548P 0x01410ec0 24 24 #define MARVELL_PHY_ID_88E3016 0x01410e60 25 + #define MARVELL_PHY_ID_88X3310 0x002b09a0 25 26 #define MARVELL_PHY_ID_88E2110 0x002b09b0 26 27 #define MARVELL_PHY_ID_88X2222 0x01410f10 27 - 28 - /* PHY IDs and mask for Alaska 10G PHYs */ 29 - #define MARVELL_PHY_ID_88X33X0_MASK 0xfffffff8 30 - #define MARVELL_PHY_ID_88X3310 0x002b09a0 31 - #define MARVELL_PHY_ID_88X3340 0x002b09a8 32 28 33 29 /* Marvel 88E1111 in Finisar SFP module with modified PHY ID */ 34 30 #define MARVELL_PHY_ID_88E1111_FINISAR 0x01ff0cc0
+30 -1
include/linux/ptp_clock_kernel.h
··· 11 11 #include <linux/device.h> 12 12 #include <linux/pps_kernel.h> 13 13 #include <linux/ptp_clock.h> 14 + #include <linux/timecounter.h> 15 + #include <linux/skbuff.h> 14 16 17 + #define PTP_CLOCK_NAME_LEN 32 15 18 /** 16 19 * struct ptp_clock_request - request PTP clock event 17 20 * ··· 137 134 138 135 struct ptp_clock_info { 139 136 struct module *owner; 140 - char name[16]; 137 + char name[PTP_CLOCK_NAME_LEN]; 141 138 s32 max_adj; 142 139 int n_alarm; 143 140 int n_ext_ts; ··· 307 304 */ 308 305 void ptp_cancel_worker_sync(struct ptp_clock *ptp); 309 306 307 + /** 308 + * ptp_get_vclocks_index() - get all vclocks index on pclock, and 309 + * caller is responsible to free memory 310 + * of vclock_index 311 + * 312 + * @pclock_index: phc index of ptp pclock. 313 + * @vclock_index: pointer to pointer of vclock index. 314 + * 315 + * return number of vclocks. 316 + */ 317 + int ptp_get_vclocks_index(int pclock_index, int **vclock_index); 318 + 319 + /** 320 + * ptp_convert_timestamp() - convert timestamp to a ptp vclock time 321 + * 322 + * @hwtstamps: skb_shared_hwtstamps structure pointer 323 + * @vclock_index: phc index of ptp vclock. 324 + */ 325 + void ptp_convert_timestamp(struct skb_shared_hwtstamps *hwtstamps, 326 + int vclock_index); 327 + 310 328 #else 311 329 static inline struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info, 312 330 struct device *parent) ··· 346 322 unsigned long delay) 347 323 { return -EOPNOTSUPP; } 348 324 static inline void ptp_cancel_worker_sync(struct ptp_clock *ptp) 325 + { } 326 + static inline int ptp_get_vclocks_index(int pclock_index, int **vclock_index) 327 + { return 0; } 328 + static inline void ptp_convert_timestamp(struct skb_shared_hwtstamps *hwtstamps, 329 + int vclock_index) 349 330 { } 350 331 351 332 #endif
+2
include/linux/stmmac.h
··· 115 115 116 116 #define EST_GCL 1024 117 117 struct stmmac_est { 118 + struct mutex lock; 118 119 int enable; 120 + u32 btr_reserve[2]; 119 121 u32 btr_offset[2]; 120 122 u32 btr[2]; 121 123 u32 ctr[2];
+8 -1
include/net/bonding.h
··· 201 201 */ 202 202 #define BOND_LINK_NOCHANGE -1 203 203 204 + struct bond_ipsec { 205 + struct list_head list; 206 + struct xfrm_state *xs; 207 + }; 208 + 204 209 /* 205 210 * Here are the locking policies for the two bonding locks: 206 211 * Get rcu_read_lock when reading or RTNL when writing slave list. ··· 254 249 #endif /* CONFIG_DEBUG_FS */ 255 250 struct rtnl_link_stats64 bond_stats; 256 251 #ifdef CONFIG_XFRM_OFFLOAD 257 - struct xfrm_state *xs; 252 + struct list_head ipsec_list; 253 + /* protecting ipsec_list */ 254 + spinlock_t ipsec_lock; 258 255 #endif /* CONFIG_XFRM_OFFLOAD */ 259 256 }; 260 257
+1 -1
include/net/busy_poll.h
··· 38 38 39 39 static inline bool sk_can_busy_loop(const struct sock *sk) 40 40 { 41 - return sk->sk_ll_usec && !signal_pending(current); 41 + return READ_ONCE(sk->sk_ll_usec) && !signal_pending(current); 42 42 } 43 43 44 44 bool sk_busy_loop_end(void *p, unsigned long start_time);
-200
include/net/caif/caif_hsi.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* 3 - * Copyright (C) ST-Ericsson AB 2010 4 - * Author: Daniel Martensson / daniel.martensson@stericsson.com 5 - * Dmitry.Tarnyagin / dmitry.tarnyagin@stericsson.com 6 - */ 7 - 8 - #ifndef CAIF_HSI_H_ 9 - #define CAIF_HSI_H_ 10 - 11 - #include <net/caif/caif_layer.h> 12 - #include <net/caif/caif_device.h> 13 - #include <linux/atomic.h> 14 - 15 - /* 16 - * Maximum number of CAIF frames that can reside in the same HSI frame. 17 - */ 18 - #define CFHSI_MAX_PKTS 15 19 - 20 - /* 21 - * Maximum number of bytes used for the frame that can be embedded in the 22 - * HSI descriptor. 23 - */ 24 - #define CFHSI_MAX_EMB_FRM_SZ 96 25 - 26 - /* 27 - * Decides if HSI buffers should be prefilled with 0xFF pattern for easier 28 - * debugging. Both TX and RX buffers will be filled before the transfer. 29 - */ 30 - #define CFHSI_DBG_PREFILL 0 31 - 32 - /* Structure describing a HSI packet descriptor. */ 33 - #pragma pack(1) /* Byte alignment. */ 34 - struct cfhsi_desc { 35 - u8 header; 36 - u8 offset; 37 - u16 cffrm_len[CFHSI_MAX_PKTS]; 38 - u8 emb_frm[CFHSI_MAX_EMB_FRM_SZ]; 39 - }; 40 - #pragma pack() /* Default alignment. */ 41 - 42 - /* Size of the complete HSI packet descriptor. */ 43 - #define CFHSI_DESC_SZ (sizeof(struct cfhsi_desc)) 44 - 45 - /* 46 - * Size of the complete HSI packet descriptor excluding the optional embedded 47 - * CAIF frame. 48 - */ 49 - #define CFHSI_DESC_SHORT_SZ (CFHSI_DESC_SZ - CFHSI_MAX_EMB_FRM_SZ) 50 - 51 - /* 52 - * Maximum bytes transferred in one transfer. 53 - */ 54 - #define CFHSI_MAX_CAIF_FRAME_SZ 4096 55 - 56 - #define CFHSI_MAX_PAYLOAD_SZ (CFHSI_MAX_PKTS * CFHSI_MAX_CAIF_FRAME_SZ) 57 - 58 - /* Size of the complete HSI TX buffer. */ 59 - #define CFHSI_BUF_SZ_TX (CFHSI_DESC_SZ + CFHSI_MAX_PAYLOAD_SZ) 60 - 61 - /* Size of the complete HSI RX buffer. */ 62 - #define CFHSI_BUF_SZ_RX ((2 * CFHSI_DESC_SZ) + CFHSI_MAX_PAYLOAD_SZ) 63 - 64 - /* Bitmasks for the HSI descriptor. */
65 - #define CFHSI_PIGGY_DESC (0x01 << 7) 66 - 67 - #define CFHSI_TX_STATE_IDLE 0 68 - #define CFHSI_TX_STATE_XFER 1 69 - 70 - #define CFHSI_RX_STATE_DESC 0 71 - #define CFHSI_RX_STATE_PAYLOAD 1 72 - 73 - /* Bitmasks for power management. */ 74 - #define CFHSI_WAKE_UP 0 75 - #define CFHSI_WAKE_UP_ACK 1 76 - #define CFHSI_WAKE_DOWN_ACK 2 77 - #define CFHSI_AWAKE 3 78 - #define CFHSI_WAKELOCK_HELD 4 79 - #define CFHSI_SHUTDOWN 5 80 - #define CFHSI_FLUSH_FIFO 6 81 - 82 - #ifndef CFHSI_INACTIVITY_TOUT 83 - #define CFHSI_INACTIVITY_TOUT (1 * HZ) 84 - #endif /* CFHSI_INACTIVITY_TOUT */ 85 - 86 - #ifndef CFHSI_WAKE_TOUT 87 - #define CFHSI_WAKE_TOUT (3 * HZ) 88 - #endif /* CFHSI_WAKE_TOUT */ 89 - 90 - #ifndef CFHSI_MAX_RX_RETRIES 91 - #define CFHSI_MAX_RX_RETRIES (10 * HZ) 92 - #endif 93 - 94 - /* Structure implemented by the CAIF HSI driver. */ 95 - struct cfhsi_cb_ops { 96 - void (*tx_done_cb) (struct cfhsi_cb_ops *drv); 97 - void (*rx_done_cb) (struct cfhsi_cb_ops *drv); 98 - void (*wake_up_cb) (struct cfhsi_cb_ops *drv); 99 - void (*wake_down_cb) (struct cfhsi_cb_ops *drv); 100 - }; 101 - 102 - /* Structure implemented by HSI device. */
103 - struct cfhsi_ops { 104 - int (*cfhsi_up) (struct cfhsi_ops *dev); 105 - int (*cfhsi_down) (struct cfhsi_ops *dev); 106 - int (*cfhsi_tx) (u8 *ptr, int len, struct cfhsi_ops *dev); 107 - int (*cfhsi_rx) (u8 *ptr, int len, struct cfhsi_ops *dev); 108 - int (*cfhsi_wake_up) (struct cfhsi_ops *dev); 109 - int (*cfhsi_wake_down) (struct cfhsi_ops *dev); 110 - int (*cfhsi_get_peer_wake) (struct cfhsi_ops *dev, bool *status); 111 - int (*cfhsi_fifo_occupancy) (struct cfhsi_ops *dev, size_t *occupancy); 112 - int (*cfhsi_rx_cancel)(struct cfhsi_ops *dev); 113 - struct cfhsi_cb_ops *cb_ops; 114 - }; 115 - 116 - /* Structure holds status of received CAIF frames processing */ 117 - struct cfhsi_rx_state { 118 - int state; 119 - int nfrms; 120 - int pld_len; 121 - int retries; 122 - bool piggy_desc; 123 - }; 124 - 125 - /* Priority mapping */ 126 - enum { 127 - CFHSI_PRIO_CTL = 0, 128 - CFHSI_PRIO_VI, 129 - CFHSI_PRIO_VO, 130 - CFHSI_PRIO_BEBK, 131 - CFHSI_PRIO_LAST, 132 - }; 133 - 134 - struct cfhsi_config { 135 - u32 inactivity_timeout; 136 - u32 aggregation_timeout; 137 - u32 head_align; 138 - u32 tail_align; 139 - u32 q_high_mark; 140 - u32 q_low_mark; 141 - }; 142 - 143 - /* Structure implemented by CAIF HSI drivers. */
144 - struct cfhsi { 145 - struct caif_dev_common cfdev; 146 - struct net_device *ndev; 147 - struct platform_device *pdev; 148 - struct sk_buff_head qhead[CFHSI_PRIO_LAST]; 149 - struct cfhsi_cb_ops cb_ops; 150 - struct cfhsi_ops *ops; 151 - int tx_state; 152 - struct cfhsi_rx_state rx_state; 153 - struct cfhsi_config cfg; 154 - int rx_len; 155 - u8 *rx_ptr; 156 - u8 *tx_buf; 157 - u8 *rx_buf; 158 - u8 *rx_flip_buf; 159 - spinlock_t lock; 160 - int flow_off_sent; 161 - struct list_head list; 162 - struct work_struct wake_up_work; 163 - struct work_struct wake_down_work; 164 - struct work_struct out_of_sync_work; 165 - struct workqueue_struct *wq; 166 - wait_queue_head_t wake_up_wait; 167 - wait_queue_head_t wake_down_wait; 168 - wait_queue_head_t flush_fifo_wait; 169 - struct timer_list inactivity_timer; 170 - struct timer_list rx_slowpath_timer; 171 - 172 - /* TX aggregation */ 173 - int aggregation_len; 174 - struct timer_list aggregation_timer; 175 - 176 - unsigned long bits; 177 - }; 178 - extern struct platform_driver cfhsi_driver; 179 - 180 - /** 181 - * enum ifla_caif_hsi - CAIF HSI NetlinkRT parameters. 182 - * @IFLA_CAIF_HSI_INACTIVITY_TOUT: Inactivity timeout before 183 - * taking the HSI wakeline down, in milliseconds. 184 - * When using RT Netlink to create, destroy or configure a CAIF HSI interface, 185 - * enum ifla_caif_hsi is used to specify the configuration attributes. 186 - */ 187 - enum ifla_caif_hsi { 188 - __IFLA_CAIF_HSI_UNSPEC, 189 - __IFLA_CAIF_HSI_INACTIVITY_TOUT, 190 - __IFLA_CAIF_HSI_AGGREGATION_TOUT, 191 - __IFLA_CAIF_HSI_HEAD_ALIGN, 192 - __IFLA_CAIF_HSI_TAIL_ALIGN, 193 - __IFLA_CAIF_HSI_QHIGH_WATERMARK, 194 - __IFLA_CAIF_HSI_QLOW_WATERMARK, 195 - __IFLA_CAIF_HSI_MAX 196 - }; 197 - 198 - struct cfhsi_ops *cfhsi_get_ops(void); 199 - 200 - #endif /* CAIF_HSI_H_ */
+3 -1
include/net/dst_metadata.h
··· 45 45 return &md_dst->u.tun_info; 46 46 47 47 dst = skb_dst(skb); 48 - if (dst && dst->lwtstate) 48 + if (dst && dst->lwtstate && 49 + (dst->lwtstate->type == LWTUNNEL_ENCAP_IP || 50 + dst->lwtstate->type == LWTUNNEL_ENCAP_IP6)) 49 51 return lwt_tun_info(dst->lwtstate); 50 52 51 53 return NULL;
+1 -1
include/net/ip6_route.h
··· 263 263 int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb, 264 264 int (*output)(struct net *, struct sock *, struct sk_buff *)); 265 265 266 - static inline int ip6_skb_dst_mtu(struct sk_buff *skb) 266 + static inline unsigned int ip6_skb_dst_mtu(struct sk_buff *skb) 267 267 { 268 268 int mtu; 269 269
+3 -2
include/net/mptcp.h
··· 105 105 bool mptcp_established_options(struct sock *sk, struct sk_buff *skb, 106 106 unsigned int *size, unsigned int remaining, 107 107 struct mptcp_out_options *opts); 108 - void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb); 108 + bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb); 109 109 110 110 void mptcp_write_options(__be32 *ptr, const struct tcp_sock *tp, 111 111 struct mptcp_out_options *opts); ··· 227 227 return false; 228 228 } 229 229 230 - static inline void mptcp_incoming_options(struct sock *sk, 230 + static inline bool mptcp_incoming_options(struct sock *sk, 231 231 struct sk_buff *skb) 232 232 { 233 + return true; 233 234 } 234 235 235 236 static inline void mptcp_skb_ext_move(struct sk_buff *to,
-1
include/net/netfilter/nf_conntrack_core.h
··· 30 30 void nf_conntrack_cleanup_net_list(struct list_head *net_exit_list); 31 31 32 32 void nf_conntrack_proto_pernet_init(struct net *net); 33 - void nf_conntrack_proto_pernet_fini(struct net *net); 34 33 35 34 int nf_conntrack_proto_init(void); 36 35 void nf_conntrack_proto_fini(void);
+1
include/net/netns/conntrack.h
··· 27 27 u8 tcp_loose; 28 28 u8 tcp_be_liberal; 29 29 u8 tcp_max_retrans; 30 + u8 tcp_ignore_invalid_rst; 30 31 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE) 31 32 unsigned int offload_timeout; 32 33 unsigned int offload_pickup;
+1 -3
include/net/sctp/constants.h
··· 360 360 #define SCTP_SCOPE_POLICY_MAX SCTP_SCOPE_POLICY_LINK 361 361 362 362 /* Based on IPv4 scoping <draft-stewart-tsvwg-sctp-ipv4-00.txt>, 363 - * SCTP IPv4 unusable addresses: 0.0.0.0/8, 224.0.0.0/4, 198.18.0.0/24, 364 - * 192.88.99.0/24. 363 + * SCTP IPv4 unusable addresses: 0.0.0.0/8, 224.0.0.0/4, 192.88.99.0/24. 365 364 * Also, RFC 8.4, non-unicast addresses are not considered valid SCTP 366 365 * addresses. 367 366 */ ··· 368 369 ((htonl(INADDR_BROADCAST) == a) || \ 369 370 ipv4_is_multicast(a) || \ 370 371 ipv4_is_zeronet(a) || \ 371 - ipv4_is_test_198(a) || \ 372 372 ipv4_is_anycast_6to4(a)) 373 373 374 374 /* Flags used for the bind address copy functions. */
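The SCTP hunk stops treating the RFC 2544 benchmarking range as unusable; `ipv4_is_test_198()` was the helper that matched it (despite the old comment saying /24, the range is 198.18.0.0/15). For reference, a standalone userspace copy of that predicate; addresses are in network byte order, as in the kernel:

```c
#include <arpa/inet.h>	/* htonl() */
#include <assert.h>
#include <stdint.h>

/* True for addresses in 198.18.0.0/15, the RFC 2544 benchmarking
 * range (198.18.0.0 = 0xc6120000, /15 mask = 0xfffe0000). This is
 * the check SCTP no longer applies to candidate addresses. */
static int ipv4_is_test_198(uint32_t addr)
{
	return (addr & htonl(0xfffe0000)) == htonl(0xc6120000);
}
```

Masking both operands with htonl() works because byte-swapping is a bytewise permutation, so `htonl(a) & htonl(m) == htonl(a & m)`.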
+6 -2
include/net/sock.h
··· 316 316 * @sk_timer: sock cleanup timer 317 317 * @sk_stamp: time stamp of last packet received 318 318 * @sk_stamp_seq: lock for accessing sk_stamp on 32 bit architectures only 319 - * @sk_tsflags: SO_TIMESTAMPING socket options 319 + * @sk_tsflags: SO_TIMESTAMPING flags 320 + * @sk_bind_phc: SO_TIMESTAMPING bind PHC index of PTP virtual clock 321 + * for timestamping 320 322 * @sk_tskey: counter to disambiguate concurrent tstamp requests 321 323 * @sk_zckey: counter to order MSG_ZEROCOPY notifications 322 324 * @sk_socket: Identd and reporting IO signals ··· 495 493 seqlock_t sk_stamp_seq; 496 494 #endif 497 495 u16 sk_tsflags; 496 + int sk_bind_phc; 498 497 u8 sk_shutdown; 499 498 u32 sk_tskey; 500 499 atomic_t sk_zckey; ··· 2758 2755 2759 2756 int sock_bindtoindex(struct sock *sk, int ifindex, bool lock_sk); 2760 2757 void sock_set_timestamp(struct sock *sk, int optname, bool valbool); 2761 - int sock_set_timestamping(struct sock *sk, int optname, int val); 2758 + int sock_set_timestamping(struct sock *sk, int optname, 2759 + struct so_timestamping timestamping); 2762 2760 2763 2761 void sock_enable_timestamps(struct sock *sk); 2764 2762 void sock_no_linger(struct sock *sk);
+4
include/net/tcp.h
··· 686 686 687 687 static inline void __tcp_fast_path_on(struct tcp_sock *tp, u32 snd_wnd) 688 688 { 689 + /* mptcp hooks are only on the slow path */ 690 + if (sk_is_mptcp((struct sock *)tp)) 691 + return; 692 + 689 693 tp->pred_flags = htonl((tp->tcp_header_len << 26) | 690 694 ntohl(TCP_FLAG_ACK) | 691 695 snd_wnd);
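The tcp.h change keeps header prediction permanently off for MPTCP subflows by bailing out of `__tcp_fast_path_on()` before `pred_flags` is set. The word it would otherwise build can be recomputed in userspace; `tcp_pred_flags()` and `TCP_FLAG_ACK_HOSTORDER` are local stand-ins for the kernel expression:

```c
#include <arpa/inet.h>	/* htonl(), ntohl() */
#include <assert.h>
#include <stdint.h>

#define TCP_FLAG_ACK_HOSTORDER 0x00100000u	/* ntohl(TCP_FLAG_ACK) */

/* Rebuild pred_flags the way __tcp_fast_path_on() does: the header
 * length in bytes shifted left 26 (equivalent to the 32-bit-word
 * count landing in the data-offset field), the ACK bit, and the
 * expected window value, all converted to network byte order. */
static uint32_t tcp_pred_flags(uint32_t tcp_header_len, uint32_t snd_wnd)
{
	return htonl((tcp_header_len << 26) |
		     TCP_FLAG_ACK_HOSTORDER | snd_wnd);
}
```

Incoming headers that match this word byte-for-byte can take the fast path; a zero `pred_flags`, as now forced for MPTCP, never matches.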
+15
include/uapi/linux/ethtool_netlink.h
··· 46 46 ETHTOOL_MSG_FEC_SET, 47 47 ETHTOOL_MSG_MODULE_EEPROM_GET, 48 48 ETHTOOL_MSG_STATS_GET, 49 + ETHTOOL_MSG_PHC_VCLOCKS_GET, 49 50 50 51 /* add new constants above here */ 51 52 __ETHTOOL_MSG_USER_CNT, ··· 89 88 ETHTOOL_MSG_FEC_NTF, 90 89 ETHTOOL_MSG_MODULE_EEPROM_GET_REPLY, 91 90 ETHTOOL_MSG_STATS_GET_REPLY, 91 + ETHTOOL_MSG_PHC_VCLOCKS_GET_REPLY, 92 92 93 93 /* add new constants above here */ 94 94 __ETHTOOL_MSG_KERNEL_CNT, ··· 440 438 /* add new constants above here */ 441 439 __ETHTOOL_A_TSINFO_CNT, 442 440 ETHTOOL_A_TSINFO_MAX = (__ETHTOOL_A_TSINFO_CNT - 1) 441 + }; 442 + 443 + /* PHC VCLOCKS */ 444 + 445 + enum { 446 + ETHTOOL_A_PHC_VCLOCKS_UNSPEC, 447 + ETHTOOL_A_PHC_VCLOCKS_HEADER, /* nest - _A_HEADER_* */ 448 + ETHTOOL_A_PHC_VCLOCKS_NUM, /* u32 */ 449 + ETHTOOL_A_PHC_VCLOCKS_INDEX, /* array, s32 */ 450 + 451 + /* add new constants above here */ 452 + __ETHTOOL_A_PHC_VCLOCKS_CNT, 453 + ETHTOOL_A_PHC_VCLOCKS_MAX = (__ETHTOOL_A_PHC_VCLOCKS_CNT - 1) 443 454 }; 444 455 445 456 /* CABLE TEST */
+15 -2
include/uapi/linux/net_tstamp.h
··· 13 13 #include <linux/types.h> 14 14 #include <linux/socket.h> /* for SO_TIMESTAMPING */ 15 15 16 - /* SO_TIMESTAMPING gets an integer bit field comprised of these values */ 16 + /* SO_TIMESTAMPING flags */ 17 17 enum { 18 18 SOF_TIMESTAMPING_TX_HARDWARE = (1<<0), 19 19 SOF_TIMESTAMPING_TX_SOFTWARE = (1<<1), ··· 30 30 SOF_TIMESTAMPING_OPT_STATS = (1<<12), 31 31 SOF_TIMESTAMPING_OPT_PKTINFO = (1<<13), 32 32 SOF_TIMESTAMPING_OPT_TX_SWHW = (1<<14), 33 + SOF_TIMESTAMPING_BIND_PHC = (1 << 15), 33 34 34 - SOF_TIMESTAMPING_LAST = SOF_TIMESTAMPING_OPT_TX_SWHW, 35 + SOF_TIMESTAMPING_LAST = SOF_TIMESTAMPING_BIND_PHC, 35 36 SOF_TIMESTAMPING_MASK = (SOF_TIMESTAMPING_LAST - 1) | 36 37 SOF_TIMESTAMPING_LAST 37 38 }; ··· 46 45 SOF_TIMESTAMPING_TX_SOFTWARE | \ 47 46 SOF_TIMESTAMPING_TX_SCHED | \ 48 47 SOF_TIMESTAMPING_TX_ACK) 48 + 49 + /** 50 + * struct so_timestamping - SO_TIMESTAMPING parameter 51 + * 52 + * @flags: SO_TIMESTAMPING flags 53 + * @bind_phc: Index of PTP virtual clock bound to sock. This is available 54 + * if flag SOF_TIMESTAMPING_BIND_PHC is set. 55 + */ 56 + struct so_timestamping { 57 + int flags; 58 + int bind_phc; 59 + }; 49 60 50 61 /** 51 62 * struct hwtstamp_config - %SIOCGHWTSTAMP and %SIOCSHWTSTAMP parameter
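With the new `struct so_timestamping`, userspace can pass the flags word and a PHC index in a single setsockopt() call, while the old bare-int form remains accepted (the sock.c hunk dispatches on optlen). A sketch of the caller side, assuming installed headers may predate the change, so the struct and constants are defined locally with their uapi values:

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Local mirror of the new uapi struct, in case the installed
 * linux/net_tstamp.h predates SOF_TIMESTAMPING_BIND_PHC. */
struct so_timestamping {
	int flags;
	int bind_phc;
};

#ifndef SO_TIMESTAMPING
#define SO_TIMESTAMPING 37	/* generic/x86 value */
#endif
#ifndef SOF_TIMESTAMPING_RX_SOFTWARE
#define SOF_TIMESTAMPING_RX_SOFTWARE (1 << 3)
#define SOF_TIMESTAMPING_SOFTWARE    (1 << 4)
#endif

/* Enable software RX timestamps. bind_phc stays 0 here because we do
 * not set SOF_TIMESTAMPING_BIND_PHC; a kernel that predates the
 * struct form reads only the leading flags word, so this call stays
 * backward compatible. */
static int enable_sw_timestamps(int fd)
{
	struct so_timestamping ts;

	memset(&ts, 0, sizeof(ts));
	ts.flags = SOF_TIMESTAMPING_RX_SOFTWARE | SOF_TIMESTAMPING_SOFTWARE;
	return setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &ts, sizeof(ts));
}
```

To actually bind a PHC, set `SOF_TIMESTAMPING_BIND_PHC` in `flags` and a vclock index in `bind_phc`; per the sock.c hunk, the socket must already be bound to a device and the index must be one of that device's vclocks.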
+7 -1
kernel/bpf/core.c
··· 2236 2236 #endif 2237 2237 if (aux->dst_trampoline) 2238 2238 bpf_trampoline_put(aux->dst_trampoline); 2239 - for (i = 0; i < aux->func_cnt; i++) 2239 + for (i = 0; i < aux->func_cnt; i++) { 2240 + /* We can just unlink the subprog poke descriptor table as 2241 + * it was originally linked to the main program and is also 2242 + * released along with it. 2243 + */ 2244 + aux->func[i]->aux->poke_tab = NULL; 2240 2245 bpf_jit_free(aux->func[i]); 2246 + } 2241 2247 if (aux->func_cnt) { 2242 2248 kfree(aux->func); 2243 2249 bpf_prog_unlock_free(aux->prog);
+4 -2
kernel/bpf/devmap.c
··· 558 558 559 559 if (map->map_type == BPF_MAP_TYPE_DEVMAP) { 560 560 for (i = 0; i < map->max_entries; i++) { 561 - dst = READ_ONCE(dtab->netdev_map[i]); 561 + dst = rcu_dereference_check(dtab->netdev_map[i], 562 + rcu_read_lock_bh_held()); 562 563 if (!is_valid_dst(dst, xdp, exclude_ifindex)) 563 564 continue; 564 565 ··· 655 654 656 655 if (map->map_type == BPF_MAP_TYPE_DEVMAP) { 657 656 for (i = 0; i < map->max_entries; i++) { 658 - dst = READ_ONCE(dtab->netdev_map[i]); 657 + dst = rcu_dereference_check(dtab->netdev_map[i], 658 + rcu_read_lock_bh_held()); 659 659 if (!dst || dst->dev->ifindex == exclude_ifindex) 660 660 continue; 661 661
+21 -39
kernel/bpf/verifier.c
··· 12121 12121 goto out_free; 12122 12122 func[i]->is_func = 1; 12123 12123 func[i]->aux->func_idx = i; 12124 - /* the btf and func_info will be freed only at prog->aux */ 12124 + /* Below members will be freed only at prog->aux */ 12125 12125 func[i]->aux->btf = prog->aux->btf; 12126 12126 func[i]->aux->func_info = prog->aux->func_info; 12127 + func[i]->aux->poke_tab = prog->aux->poke_tab; 12128 + func[i]->aux->size_poke_tab = prog->aux->size_poke_tab; 12127 12129 12128 12130 for (j = 0; j < prog->aux->size_poke_tab; j++) { 12129 - u32 insn_idx = prog->aux->poke_tab[j].insn_idx; 12130 - int ret; 12131 + struct bpf_jit_poke_descriptor *poke; 12131 12132 12132 - if (!(insn_idx >= subprog_start && 12133 - insn_idx <= subprog_end)) 12134 - continue; 12135 - 12136 - ret = bpf_jit_add_poke_descriptor(func[i], 12137 - &prog->aux->poke_tab[j]); 12138 - if (ret < 0) { 12139 - verbose(env, "adding tail call poke descriptor failed\n"); 12140 - goto out_free; 12141 - } 12142 - 12143 - func[i]->insnsi[insn_idx - subprog_start].imm = ret + 1; 12144 - 12145 - map_ptr = func[i]->aux->poke_tab[ret].tail_call.map; 12146 - ret = map_ptr->ops->map_poke_track(map_ptr, func[i]->aux); 12147 - if (ret < 0) { 12148 - verbose(env, "tracking tail call prog failed\n"); 12149 - goto out_free; 12150 - } 12133 + poke = &prog->aux->poke_tab[j]; 12134 + if (poke->insn_idx < subprog_end && 12135 + poke->insn_idx >= subprog_start) 12136 + poke->aux = func[i]->aux; 12151 12137 } 12152 12138 12153 12139 /* Use bpf_prog_F_tag to indicate functions in stack traces. 
··· 12162 12176 goto out_free; 12163 12177 } 12164 12178 cond_resched(); 12165 - } 12166 - 12167 - /* Untrack main program's aux structs so that during map_poke_run() 12168 - * we will not stumble upon the unfilled poke descriptors; each 12169 - * of the main program's poke descs got distributed across subprogs 12170 - * and got tracked onto map, so we are sure that none of them will 12171 - * be missed after the operation below 12172 - */ 12173 - for (i = 0; i < prog->aux->size_poke_tab; i++) { 12174 - map_ptr = prog->aux->poke_tab[i].tail_call.map; 12175 - 12176 - map_ptr->ops->map_poke_untrack(map_ptr, prog->aux); 12177 12179 } 12178 12180 12179 12181 /* at this point all bpf functions were successfully JITed ··· 12241 12267 bpf_prog_jit_attempt_done(prog); 12242 12268 return 0; 12243 12269 out_free: 12270 + /* We failed JIT'ing, so at this point we need to unregister poke 12271 + * descriptors from subprogs, so that kernel is not attempting to 12272 + * patch it anymore as we're freeing the subprog JIT memory. 12273 + */ 12274 + for (i = 0; i < prog->aux->size_poke_tab; i++) { 12275 + map_ptr = prog->aux->poke_tab[i].tail_call.map; 12276 + map_ptr->ops->map_poke_untrack(map_ptr, prog->aux); 12277 + } 12278 + /* At this point we're guaranteed that poke descriptors are not 12279 + * live anymore. We can just unlink its descriptor table as it's 12280 + * released with the main prog. 12281 + */ 12244 12282 for (i = 0; i < env->subprog_cnt; i++) { 12245 12283 if (!func[i]) 12246 12284 continue; 12247 - 12248 - for (j = 0; j < func[i]->aux->size_poke_tab; j++) { 12249 - map_ptr = func[i]->aux->poke_tab[j].tail_call.map; 12250 - map_ptr->ops->map_poke_untrack(map_ptr, func[i]->aux); 12251 - } 12285 + func[i]->aux->poke_tab = NULL; 12252 12286 bpf_jit_free(func[i]); 12253 12287 } 12254 12288 kfree(func);
+14
net/802/garp.c
··· 203 203 kfree(attr); 204 204 } 205 205 206 + static void garp_attr_destroy_all(struct garp_applicant *app) 207 + { 208 + struct rb_node *node, *next; 209 + struct garp_attr *attr; 210 + 211 + for (node = rb_first(&app->gid); 212 + next = node ? rb_next(node) : NULL, node != NULL; 213 + node = next) { 214 + attr = rb_entry(node, struct garp_attr, node); 215 + garp_attr_destroy(app, attr); 216 + } 217 + } 218 + 206 219 static int garp_pdu_init(struct garp_applicant *app) 207 220 { 208 221 struct sk_buff *skb; ··· 622 609 623 610 spin_lock_bh(&app->lock); 624 611 garp_gid_event(app, GARP_EVENT_TRANSMIT_PDU); 612 + garp_attr_destroy_all(app); 625 613 garp_pdu_queue(app); 626 614 spin_unlock_bh(&app->lock); 627 615
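`garp_attr_destroy_all()` above caches `rb_next(node)` before destroying `node`, because computing the successor after the free would touch freed memory; the mrp.c hunk that follows applies the identical pattern. The same discipline, sketched on a plain userspace list (`struct attr` and `destroy_all()` are hypothetical stand-ins):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical list node standing in for struct garp_attr. */
struct attr {
	struct attr *next;
};

/* Walk-and-free: fetch the successor *before* destroying the current
 * node, exactly as garp_attr_destroy_all() grabs rb_next(node) before
 * garp_attr_destroy() frees it. Returns the number of nodes freed. */
static int destroy_all(struct attr **head)
{
	struct attr *node, *next;
	int freed = 0;

	for (node = *head; node; node = next) {
		next = node->next;	/* grab successor first */
		free(node);
		freed++;
	}
	*head = NULL;
	return freed;
}
```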
+14
net/802/mrp.c
··· 292 292 kfree(attr); 293 293 } 294 294 295 + static void mrp_attr_destroy_all(struct mrp_applicant *app) 296 + { 297 + struct rb_node *node, *next; 298 + struct mrp_attr *attr; 299 + 300 + for (node = rb_first(&app->mad); 301 + next = node ? rb_next(node) : NULL, node != NULL; 302 + node = next) { 303 + attr = rb_entry(node, struct mrp_attr, node); 304 + mrp_attr_destroy(app, attr); 305 + } 306 + } 307 + 295 308 static int mrp_pdu_init(struct mrp_applicant *app) 296 309 { 297 310 struct sk_buff *skb; ··· 908 895 909 896 spin_lock_bh(&app->lock); 910 897 mrp_mad_event(app, MRP_EVENT_TX); 898 + mrp_attr_destroy_all(app); 911 899 mrp_pdu_queue(app); 912 900 spin_unlock_bh(&app->lock); 913 901
+16 -1
net/bridge/br_if.c
··· 562 562 struct net_bridge_port *p; 563 563 int err = 0; 564 564 unsigned br_hr, dev_hr; 565 - bool changed_addr; 565 + bool changed_addr, fdb_synced = false; 566 566 567 567 /* Don't allow bridging non-ethernet like devices. */ 568 568 if ((dev->flags & IFF_LOOPBACK) || ··· 652 652 list_add_rcu(&p->list, &br->port_list); 653 653 654 654 nbp_update_port_count(br); 655 + if (!br_promisc_port(p) && (p->dev->priv_flags & IFF_UNICAST_FLT)) { 656 + /* When updating the port count we also update all ports' 657 + * promiscuous mode. 658 + * A port leaving promiscuous mode normally gets the bridge's 659 + * fdb synced to the unicast filter (if supported), however, 660 + * `br_port_clear_promisc` does not distinguish between 661 + * non-promiscuous ports and *new* ports, so we need to 662 + * sync explicitly here. 663 + */ 664 + fdb_synced = br_fdb_sync_static(br, p) == 0; 665 + if (!fdb_synced) 666 + netdev_err(dev, "failed to sync bridge static fdb addresses to this port\n"); 667 + } 655 668 656 669 netdev_update_features(br->dev); 657 670 ··· 714 701 return 0; 715 702 716 703 err7: 704 + if (fdb_synced) 705 + br_fdb_unsync_static(br, p); 717 706 list_del_rcu(&p->list); 718 707 br_fdb_delete_by_port(br, p, 0, 1); 719 708 nbp_update_port_count(br);
+6
net/bridge/br_multicast.c
··· 3264 3264 pim_hdr_type(pimhdr) != PIM_TYPE_HELLO) 3265 3265 return; 3266 3266 3267 + spin_lock(&br->multicast_lock); 3267 3268 br_ip4_multicast_mark_router(br, port); 3269 + spin_unlock(&br->multicast_lock); 3268 3270 } 3269 3271 3270 3272 static int br_ip4_multicast_mrd_rcv(struct net_bridge *br, ··· 3277 3275 igmp_hdr(skb)->type != IGMP_MRDISC_ADV) 3278 3276 return -ENOMSG; 3279 3277 3278 + spin_lock(&br->multicast_lock); 3280 3279 br_ip4_multicast_mark_router(br, port); 3280 + spin_unlock(&br->multicast_lock); 3281 3281 3282 3282 return 0; 3283 3283 } ··· 3347 3343 if (icmp6_hdr(skb)->icmp6_type != ICMPV6_MRDISC_ADV) 3348 3344 return; 3349 3345 3346 + spin_lock(&br->multicast_lock); 3350 3347 br_ip6_multicast_mark_router(br, port); 3348 + spin_unlock(&br->multicast_lock); 3351 3349 } 3352 3350 3353 3351 static int br_multicast_ipv6_rcv(struct net_bridge *br,
+16
net/core/dev.c
··· 6008 6008 diffs = memcmp(skb_mac_header(p), 6009 6009 skb_mac_header(skb), 6010 6010 maclen); 6011 + 6012 + diffs |= skb_get_nfct(p) ^ skb_get_nfct(skb); 6013 + #if IS_ENABLED(CONFIG_SKB_EXTENSIONS) && IS_ENABLED(CONFIG_NET_TC_SKB_EXT) 6014 + if (!diffs) { 6015 + struct tc_skb_ext *skb_ext = skb_ext_find(skb, TC_SKB_EXT); 6016 + struct tc_skb_ext *p_ext = skb_ext_find(p, TC_SKB_EXT); 6017 + 6018 + diffs |= (!!p_ext) ^ (!!skb_ext); 6019 + if (!diffs && unlikely(skb_ext)) 6020 + diffs |= p_ext->chain ^ skb_ext->chain; 6021 + } 6022 + #endif 6023 + 6011 6024 NAPI_GRO_CB(p)->same_flow = !diffs; 6012 6025 } 6013 6026 } ··· 6234 6221 case GRO_MERGED_FREE: 6235 6222 if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD) 6236 6223 napi_skb_free_stolen_head(skb); 6224 + else if (skb->fclone != SKB_FCLONE_UNAVAILABLE) 6225 + __kfree_skb(skb); 6237 6226 else 6238 6227 __kfree_skb_defer(skb); 6239 6228 break; ··· 6285 6270 skb_shinfo(skb)->gso_type = 0; 6286 6271 skb->truesize = SKB_TRUESIZE(skb_end_offset(skb)); 6287 6272 skb_ext_reset(skb); 6273 + nf_reset_ct(skb); 6288 6274 6289 6275 napi->skb = skb; 6290 6276 }
+1
net/core/skbuff.c
··· 943 943 944 944 void napi_skb_free_stolen_head(struct sk_buff *skb) 945 945 { 946 + nf_reset_ct(skb); 946 947 skb_dst_drop(skb); 947 948 skb_ext_put(skb); 948 949 napi_skb_cache_put(skb);
+66 -5
net/core/sock.c
··· 139 139 #include <net/tcp.h> 140 140 #include <net/busy_poll.h> 141 + 142 + #include <linux/ethtool.h> 143 + 142 144 static DEFINE_MUTEX(proto_list_mutex); 143 145 static LIST_HEAD(proto_list); 144 146 ··· 812 810 } 813 811 } 814 812 815 - int sock_set_timestamping(struct sock *sk, int optname, int val) 813 + static int sock_timestamping_bind_phc(struct sock *sk, int phc_index) 816 814 { 815 + struct net *net = sock_net(sk); 816 + struct net_device *dev = NULL; 817 + bool match = false; 818 + int *vclock_index; 819 + int i, num; 820 + 821 + if (sk->sk_bound_dev_if) 822 + dev = dev_get_by_index(net, sk->sk_bound_dev_if); 823 + 824 + if (!dev) { 825 + pr_err("%s: sock not bind to device\n", __func__); 826 + return -EOPNOTSUPP; 827 + } 828 + 829 + num = ethtool_get_phc_vclocks(dev, &vclock_index); 830 + for (i = 0; i < num; i++) { 831 + if (*(vclock_index + i) == phc_index) { 832 + match = true; 833 + break; 834 + } 835 + } 836 + 837 + if (num > 0) 838 + kfree(vclock_index); 839 + 840 + if (!match) 841 + return -EINVAL; 842 + 843 + sk->sk_bind_phc = phc_index; 844 + 845 + return 0; 846 + } 847 + 848 + int sock_set_timestamping(struct sock *sk, int optname, 849 + struct so_timestamping timestamping) 850 + { 851 + int val = timestamping.flags; 852 + int ret; 853 + 817 854 if (val & ~SOF_TIMESTAMPING_MASK) 818 855 return -EINVAL; 819 856 ··· 872 831 if (val & SOF_TIMESTAMPING_OPT_STATS && 873 832 !(val & SOF_TIMESTAMPING_OPT_TSONLY)) 874 833 return -EINVAL; 834 + 835 + if (val & SOF_TIMESTAMPING_BIND_PHC) { 836 + ret = sock_timestamping_bind_phc(sk, timestamping.bind_phc); 837 + if (ret) 838 + return ret; 839 + } 875 840 876 841 sk->sk_tsflags = val; 877 842 sock_valbool_flag(sk, SOCK_TSTAMP_NEW, optname == SO_TIMESTAMPING_NEW); ··· 954 907 int sock_setsockopt(struct socket *sock, int level, int optname, 955 908 sockptr_t optval, unsigned int optlen) 956 909 { 910 + struct so_timestamping timestamping; 957 911 struct sock_txtime sk_txtime;
958 912 struct sock *sk = sock->sk; 959 913 int val; ··· 1116 1068 case SO_TIMESTAMP_NEW: 1117 1069 case SO_TIMESTAMPNS_OLD: 1118 1070 case SO_TIMESTAMPNS_NEW: 1119 - sock_set_timestamp(sk, valbool, optname); 1071 + sock_set_timestamp(sk, optname, valbool); 1120 1072 break; 1121 1073 1122 1074 case SO_TIMESTAMPING_NEW: 1123 1075 case SO_TIMESTAMPING_OLD: 1124 - ret = sock_set_timestamping(sk, optname, val); 1076 + if (optlen == sizeof(timestamping)) { 1077 + if (copy_from_sockptr(&timestamping, optval, 1078 + sizeof(timestamping))) { 1079 + ret = -EFAULT; 1080 + break; 1081 + } 1082 + } else { 1083 + memset(&timestamping, 0, sizeof(timestamping)); 1084 + timestamping.flags = val; 1085 + } 1086 + ret = sock_set_timestamping(sk, optname, timestamping); 1125 1087 break; 1126 1088 1127 1089 case SO_RCVLOWAT: ··· 1259 1201 if (val < 0) 1260 1202 ret = -EINVAL; 1261 1203 else 1262 - sk->sk_ll_usec = val; 1204 + WRITE_ONCE(sk->sk_ll_usec, val); 1263 1205 } 1264 1206 break; 1265 1207 case SO_PREFER_BUSY_POLL: ··· 1406 1348 struct __kernel_old_timeval tm; 1407 1349 struct __kernel_sock_timeval stm; 1408 1350 struct sock_txtime txtime; 1351 + struct so_timestamping timestamping; 1409 1352 } v; 1410 1353 1411 1354 int lv = sizeof(int); ··· 1510 1451 break; 1511 1452 1512 1453 case SO_TIMESTAMPING_OLD: 1513 - v.val = sk->sk_tsflags; 1454 + lv = sizeof(v.timestamping); 1455 + v.timestamping.flags = sk->sk_tsflags; 1456 + v.timestamping.bind_phc = sk->sk_bind_phc; 1514 1457 break; 1515 1458 1516 1459 case SO_RCVTIMEO_OLD:
+4 -4
net/dsa/switch.c
··· 113 113 int err, port; 114 114 115 115 if (dst->index == info->tree_index && ds->index == info->sw_index && 116 - ds->ops->port_bridge_join) 116 + ds->ops->port_bridge_leave) 117 117 ds->ops->port_bridge_leave(ds, info->port, info->br); 118 118 119 119 if ((dst->index != info->tree_index || ds->index != info->sw_index) && 120 - ds->ops->crosschip_bridge_join) 120 + ds->ops->crosschip_bridge_leave) 121 121 ds->ops->crosschip_bridge_leave(ds, info->tree_index, 122 122 info->sw_index, info->port, 123 123 info->br); ··· 427 427 info->port, info->lag, 428 428 info->info); 429 429 430 - return 0; 430 + return -EOPNOTSUPP; 431 431 } 432 432 433 433 static int dsa_switch_lag_leave(struct dsa_switch *ds, ··· 440 440 return ds->ops->crosschip_lag_leave(ds, info->sw_index, 441 441 info->port, info->lag); 442 442 443 - return 0; 443 + return -EOPNOTSUPP; 444 444 } 445 445 446 446 static int dsa_switch_mdb_add(struct dsa_switch *ds,
+1 -1
net/ethtool/Makefile
··· 7 7 ethtool_nl-y := netlink.o bitset.o strset.o linkinfo.o linkmodes.o \ 8 8 linkstate.o debug.o wol.o features.o privflags.o rings.o \ 9 9 channels.o coalesce.o pause.o eee.o tsinfo.o cabletest.o \ 10 - tunnels.o fec.o eeprom.o stats.o 10 + tunnels.o fec.o eeprom.o stats.o phc_vclocks.o
+14
net/ethtool/common.c
··· 4 4 #include <linux/net_tstamp.h> 5 5 #include <linux/phy.h> 6 6 #include <linux/rtnetlink.h> 7 + #include <linux/ptp_clock_kernel.h> 7 8 8 9 #include "common.h" 9 10 ··· 398 397 [const_ilog2(SOF_TIMESTAMPING_OPT_STATS)] = "option-stats", 399 398 [const_ilog2(SOF_TIMESTAMPING_OPT_PKTINFO)] = "option-pktinfo", 400 399 [const_ilog2(SOF_TIMESTAMPING_OPT_TX_SWHW)] = "option-tx-swhw", 400 + [const_ilog2(SOF_TIMESTAMPING_BIND_PHC)] = "bind-phc", 401 401 }; 402 402 static_assert(ARRAY_SIZE(sof_timestamping_names) == __SOF_TIMESTAMPING_CNT); 403 403 ··· 555 553 556 554 return 0; 557 555 } 556 + 557 + int ethtool_get_phc_vclocks(struct net_device *dev, int **vclock_index) 558 + { 559 + struct ethtool_ts_info info = { }; 560 + int num = 0; 561 + 562 + if (!__ethtool_get_ts_info(dev, &info)) 563 + num = ptp_get_vclocks_index(info.phc_index, vclock_index); 564 + 565 + return num; 566 + } 567 + EXPORT_SYMBOL(ethtool_get_phc_vclocks); 558 568 559 569 const struct ethtool_phy_ops *ethtool_phy_ops; 560 570
+10
net/ethtool/netlink.c
··· 248 248 [ETHTOOL_MSG_TSINFO_GET] = &ethnl_tsinfo_request_ops, 249 249 [ETHTOOL_MSG_MODULE_EEPROM_GET] = &ethnl_module_eeprom_request_ops, 250 250 [ETHTOOL_MSG_STATS_GET] = &ethnl_stats_request_ops, 251 + [ETHTOOL_MSG_PHC_VCLOCKS_GET] = &ethnl_phc_vclocks_request_ops, 251 252 }; 252 253 253 254 static struct ethnl_dump_ctx *ethnl_dump_context(struct netlink_callback *cb) ··· 958 957 .done = ethnl_default_done, 959 958 .policy = ethnl_stats_get_policy, 960 959 .maxattr = ARRAY_SIZE(ethnl_stats_get_policy) - 1, 960 + }, 961 + { 962 + .cmd = ETHTOOL_MSG_PHC_VCLOCKS_GET, 963 + .doit = ethnl_default_doit, 964 + .start = ethnl_default_start, 965 + .dumpit = ethnl_default_dumpit, 966 + .done = ethnl_default_done, 967 + .policy = ethnl_phc_vclocks_get_policy, 968 + .maxattr = ARRAY_SIZE(ethnl_phc_vclocks_get_policy) - 1, 961 969 }, 962 970 }; 963 971
+2
net/ethtool/netlink.h
··· 347 347 extern const struct ethnl_request_ops ethnl_fec_request_ops; 348 348 extern const struct ethnl_request_ops ethnl_module_eeprom_request_ops; 349 349 extern const struct ethnl_request_ops ethnl_stats_request_ops; 350 + extern const struct ethnl_request_ops ethnl_phc_vclocks_request_ops; 350 351 351 352 extern const struct nla_policy ethnl_header_policy[ETHTOOL_A_HEADER_FLAGS + 1]; 352 353 extern const struct nla_policy ethnl_header_policy_stats[ETHTOOL_A_HEADER_FLAGS + 1]; ··· 383 382 extern const struct nla_policy ethnl_fec_set_policy[ETHTOOL_A_FEC_AUTO + 1]; 384 383 extern const struct nla_policy ethnl_module_eeprom_get_policy[ETHTOOL_A_MODULE_EEPROM_I2C_ADDRESS + 1]; 385 384 extern const struct nla_policy ethnl_stats_get_policy[ETHTOOL_A_STATS_GROUPS + 1]; 385 + extern const struct nla_policy ethnl_phc_vclocks_get_policy[ETHTOOL_A_PHC_VCLOCKS_HEADER + 1]; 386 386 387 387 int ethnl_set_linkinfo(struct sk_buff *skb, struct genl_info *info); 388 388 int ethnl_set_linkmodes(struct sk_buff *skb, struct genl_info *info);
+94
net/ethtool/phc_vclocks.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright 2021 NXP 4 + */ 5 + #include "netlink.h" 6 + #include "common.h" 7 + 8 + struct phc_vclocks_req_info { 9 + struct ethnl_req_info base; 10 + }; 11 + 12 + struct phc_vclocks_reply_data { 13 + struct ethnl_reply_data base; 14 + int num; 15 + int *index; 16 + }; 17 + 18 + #define PHC_VCLOCKS_REPDATA(__reply_base) \ 19 + container_of(__reply_base, struct phc_vclocks_reply_data, base) 20 + 21 + const struct nla_policy ethnl_phc_vclocks_get_policy[] = { 22 + [ETHTOOL_A_PHC_VCLOCKS_HEADER] = NLA_POLICY_NESTED(ethnl_header_policy), 23 + }; 24 + 25 + static int phc_vclocks_prepare_data(const struct ethnl_req_info *req_base, 26 + struct ethnl_reply_data *reply_base, 27 + struct genl_info *info) 28 + { 29 + struct phc_vclocks_reply_data *data = PHC_VCLOCKS_REPDATA(reply_base); 30 + struct net_device *dev = reply_base->dev; 31 + int ret; 32 + 33 + ret = ethnl_ops_begin(dev); 34 + if (ret < 0) 35 + return ret; 36 + data->num = ethtool_get_phc_vclocks(dev, &data->index); 37 + ethnl_ops_complete(dev); 38 + 39 + return ret; 40 + } 41 + 42 + static int phc_vclocks_reply_size(const struct ethnl_req_info *req_base, 43 + const struct ethnl_reply_data *reply_base) 44 + { 45 + const struct phc_vclocks_reply_data *data = 46 + PHC_VCLOCKS_REPDATA(reply_base); 47 + int len = 0; 48 + 49 + if (data->num > 0) { 50 + len += nla_total_size(sizeof(u32)); 51 + len += nla_total_size(sizeof(s32) * data->num); 52 + } 53 + 54 + return len; 55 + } 56 + 57 + static int phc_vclocks_fill_reply(struct sk_buff *skb, 58 + const struct ethnl_req_info *req_base, 59 + const struct ethnl_reply_data *reply_base) 60 + { 61 + const struct phc_vclocks_reply_data *data = 62 + PHC_VCLOCKS_REPDATA(reply_base); 63 + 64 + if (data->num <= 0) 65 + return 0; 66 + 67 + if (nla_put_u32(skb, ETHTOOL_A_PHC_VCLOCKS_NUM, data->num) || 68 + nla_put(skb, ETHTOOL_A_PHC_VCLOCKS_INDEX, 69 + sizeof(s32) * data->num, data->index)) 70 + return -EMSGSIZE; 71 + 72 + return 0; 73 + } 74 + 75 + static void phc_vclocks_cleanup_data(struct ethnl_reply_data *reply_base) 76 + { 77 + const struct phc_vclocks_reply_data *data = 78 + PHC_VCLOCKS_REPDATA(reply_base); 79 + 80 + kfree(data->index); 81 + } 82 + 83 + const struct ethnl_request_ops ethnl_phc_vclocks_request_ops = { 84 + .request_cmd = ETHTOOL_MSG_PHC_VCLOCKS_GET, 85 + .reply_cmd = ETHTOOL_MSG_PHC_VCLOCKS_GET_REPLY, 86 + .hdr_attr = ETHTOOL_A_PHC_VCLOCKS_HEADER, 87 + .req_info_size = sizeof(struct phc_vclocks_req_info), 88 + .reply_data_size = sizeof(struct phc_vclocks_reply_data), 89 + 90 + .prepare_data = phc_vclocks_prepare_data, 91 + .reply_size = phc_vclocks_reply_size, 92 + .fill_reply = phc_vclocks_fill_reply, 93 + .cleanup_data = phc_vclocks_cleanup_data, 94 + };
+1 -1
net/ipv4/fib_frontend.c
··· 1376 1376 portid = NETLINK_CB(skb).portid; /* netlink portid */ 1377 1377 NETLINK_CB(skb).portid = 0; /* from kernel */ 1378 1378 NETLINK_CB(skb).dst_group = 0; /* unicast */ 1379 - netlink_unicast(net->ipv4.fibnl, skb, portid, MSG_DONTWAIT); 1379 + nlmsg_unicast(net->ipv4.fibnl, skb, portid); 1380 1380 } 1381 1381 1382 1382 static int __net_init nl_fib_lookup_init(struct net *net)
+1 -4
net/ipv4/inet_diag.c
··· 580 580 nlmsg_free(rep); 581 581 goto out; 582 582 } 583 - err = netlink_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid, 584 - MSG_DONTWAIT); 585 - if (err > 0) 586 - err = 0; 583 + err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid); 587 584 588 585 out: 589 586 if (sk)
+15 -3
net/ipv4/ip_tunnel.c
··· 317 317 } 318 318 319 319 dev->needed_headroom = t_hlen + hlen; 320 - mtu -= t_hlen; 320 + mtu -= t_hlen + (dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0); 321 321 322 322 if (mtu < IPV4_MIN_MTU) 323 323 mtu = IPV4_MIN_MTU; ··· 348 348 t_hlen = nt->hlen + sizeof(struct iphdr); 349 349 dev->min_mtu = ETH_MIN_MTU; 350 350 dev->max_mtu = IP_MAX_MTU - t_hlen; 351 + if (dev->type == ARPHRD_ETHER) 352 + dev->max_mtu -= dev->hard_header_len; 353 + 351 354 ip_tunnel_add(itn, nt); 352 355 return nt; 353 356 ··· 492 489 493 490 tunnel_hlen = md ? tunnel_hlen : tunnel->hlen; 494 491 pkt_size = skb->len - tunnel_hlen; 492 + pkt_size -= dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0; 495 493 496 - if (df) 494 + if (df) { 497 495 mtu = dst_mtu(&rt->dst) - (sizeof(struct iphdr) + tunnel_hlen); 498 - else 496 + mtu -= dev->type == ARPHRD_ETHER ? dev->hard_header_len : 0; 497 + } else { 499 498 mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu; 499 + } 500 500 501 501 if (skb_valid_dst(skb)) 502 502 skb_dst_update_pmtu_no_confirm(skb, mtu); ··· 978 972 int t_hlen = tunnel->hlen + sizeof(struct iphdr); 979 973 int max_mtu = IP_MAX_MTU - t_hlen; 980 974 975 + if (dev->type == ARPHRD_ETHER) 976 + max_mtu -= dev->hard_header_len; 977 + 981 978 if (new_mtu < ETH_MIN_MTU) 982 979 return -EINVAL; 983 980 ··· 1157 1148 mtu = ip_tunnel_bind_dev(dev); 1158 1149 if (tb[IFLA_MTU]) { 1159 1150 unsigned int max = IP_MAX_MTU - (nt->hlen + sizeof(struct iphdr)); 1151 + 1152 + if (dev->type == ARPHRD_ETHER) 1153 + max -= dev->hard_header_len; 1160 1154 1161 1155 mtu = clamp(dev->mtu, (unsigned int)ETH_MIN_MTU, max); 1162 1156 }
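Each ip_tunnel.c hunk above applies the same correction: on an ARPHRD_ETHER tunnel device the inner Ethernet header rides inside the tunnel payload, so it must be subtracted from every MTU bound alongside the tunnel header itself. A standalone sketch of that arithmetic (the constants mirror the kernel's values, but the helper name is illustrative, not the kernel's):

```c
#include <assert.h>

#define IP_MAX_MTU 0xFFFFU   /* 65535, as in the kernel uapi */
#define ETH_HLEN   14        /* hard_header_len of an Ethernet device */

/* Upper MTU bound for a tunnel netdev: subtract the tunnel header
 * (t_hlen); an ARPHRD_ETHER tunnel additionally carries an inner
 * Ethernet header inside the payload, so subtract that too. */
static unsigned int tunnel_max_mtu(unsigned int t_hlen, int is_ether)
{
    unsigned int max_mtu = IP_MAX_MTU - t_hlen;

    if (is_ether)
        max_mtu -= ETH_HLEN;
    return max_mtu;
}
```

With a 20-byte outer IPv4 header, an ETHER tunnel loses a further 14 bytes of budget compared to a plain tunnel.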
+1 -1
net/ipv4/ipmr.c
··· 2119 2119 raw_rcv(mroute_sk, skb); 2120 2120 return 0; 2121 2121 } 2122 - } 2122 + } 2123 2123 } 2124 2124 2125 2125 /* already under rcu_read_lock() */
+2 -5
net/ipv4/raw_diag.c
··· 119 119 return err; 120 120 } 121 121 122 - err = netlink_unicast(net->diag_nlsk, rep, 123 - NETLINK_CB(in_skb).portid, 124 - MSG_DONTWAIT); 125 - if (err > 0) 126 - err = 0; 122 + err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid); 123 + 127 124 return err; 128 125 } 129 126
+3
net/ipv4/tcp.c
··· 1375 1375 } 1376 1376 pfrag->offset += copy; 1377 1377 } else { 1378 + if (!sk_wmem_schedule(sk, copy)) 1379 + goto wait_for_space; 1380 + 1378 1381 err = skb_zerocopy_iter_stream(sk, skb, msg, copy, uarg); 1379 1382 if (err == -EMSGSIZE || err == -EEXIST) { 1380 1383 tcp_mark_push(tp, skb);
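The one-line guard added above enforces a reserve-before-charge ordering in the TCP zerocopy send path: sk_wmem_schedule() must succeed before sk_mem_charge() runs inside skb_zerocopy_iter_stream(). A toy model of that contract (the quantum-based top-up and the field names here are assumptions for illustration, not the kernel's exact accounting):

```c
#include <assert.h>

#define SK_MEM_QUANTUM 4096

struct toy_sk { int forward_alloc; int budget; };

/* Top up the socket's forward allocation in page-sized quanta, bounded
 * by a memory budget; returns 0 when the caller must wait for space. */
static int toy_wmem_schedule(struct toy_sk *sk, int size)
{
    while (sk->forward_alloc < size) {
        if (sk->budget < SK_MEM_QUANTUM)
            return 0;               /* over limit: goto wait_for_space */
        sk->budget -= SK_MEM_QUANTUM;
        sk->forward_alloc += SK_MEM_QUANTUM;
    }
    return 1;
}

/* Charge only after scheduling succeeded, so forward_alloc can never
 * be driven negative by the charge. */
static int toy_zerocopy_charge(struct toy_sk *sk, int copy)
{
    if (!toy_wmem_schedule(sk, copy))
        return -1;
    sk->forward_alloc -= copy;      /* sk_mem_charge() stand-in */
    return 0;
}
```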
+16 -5
net/ipv4/tcp_input.c
··· 4247 4247 { 4248 4248 trace_tcp_receive_reset(sk); 4249 4249 4250 + /* mptcp can't tell us to ignore reset pkts, 4251 + * so just ignore the return value of mptcp_incoming_options(). 4252 + */ 4250 4253 if (sk_is_mptcp(sk)) 4251 4254 mptcp_incoming_options(sk, skb); 4252 4255 ··· 4944 4941 bool fragstolen; 4945 4942 int eaten; 4946 4943 4947 - if (sk_is_mptcp(sk)) 4948 - mptcp_incoming_options(sk, skb); 4944 + /* If a subflow has been reset, the packet should not continue 4945 + * to be processed, drop the packet. 4946 + */ 4947 + if (sk_is_mptcp(sk) && !mptcp_incoming_options(sk, skb)) { 4948 + __kfree_skb(skb); 4949 + return; 4950 + } 4949 4951 4950 4952 if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) { 4951 4953 __kfree_skb(skb); ··· 5930 5922 tp->snd_cwnd = tcp_init_cwnd(tp, __sk_dst_get(sk)); 5931 5923 tp->snd_cwnd_stamp = tcp_jiffies32; 5932 5924 5933 - icsk->icsk_ca_initialized = 0; 5934 5925 bpf_skops_established(sk, bpf_op, skb); 5926 + /* Initialize congestion control unless BPF initialized it already: */ 5935 5927 if (!icsk->icsk_ca_initialized) 5936 5928 tcp_init_congestion_control(sk); 5937 5929 tcp_init_buffer_space(sk); ··· 6531 6523 case TCP_CLOSING: 6532 6524 case TCP_LAST_ACK: 6533 6525 if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) { 6534 - if (sk_is_mptcp(sk)) 6535 - mptcp_incoming_options(sk, skb); 6526 + /* If a subflow has been reset, the packet should not 6527 + * continue to be processed, drop the packet. 6528 + */ 6529 + if (sk_is_mptcp(sk) && !mptcp_incoming_options(sk, skb)) 6530 + goto discard; 6536 6531 break; 6537 6532 } 6538 6533 fallthrough;
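The tcp_input.c hunks above change the contract of mptcp_incoming_options(): it now returns a boolean, and a false return (subflow reset) makes the caller free the skb instead of processing it. A toy model of that caller-side pattern (all names here are illustrative stand-ins, not kernel symbols):

```c
#include <assert.h>
#include <stdbool.h>

struct toy_skb { int freed; };

/* Returns false when the subflow was reset while parsing options,
 * signalling the caller to drop the packet. */
static bool toy_incoming_options(struct toy_skb *skb, bool subflow_reset)
{
    (void)skb;               /* option parsing elided */
    return !subflow_reset;
}

static int toy_data_queue(struct toy_skb *skb, bool is_mptcp, bool subflow_reset)
{
    if (is_mptcp && !toy_incoming_options(skb, subflow_reset)) {
        skb->freed = 1;      /* stands in for __kfree_skb(skb) */
        return -1;           /* dropped, no further processing */
    }
    return 0;                /* normal receive path continues */
}
```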
+2 -2
net/ipv4/tcp_ipv4.c
··· 342 342 343 343 if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) 344 344 return; 345 - mtu = tcp_sk(sk)->mtu_info; 345 + mtu = READ_ONCE(tcp_sk(sk)->mtu_info); 346 346 dst = inet_csk_update_pmtu(sk, mtu); 347 347 if (!dst) 348 348 return; ··· 546 546 if (sk->sk_state == TCP_LISTEN) 547 547 goto out; 548 548 549 - tp->mtu_info = info; 549 + WRITE_ONCE(tp->mtu_info, info); 550 550 if (!sock_owned_by_user(sk)) { 551 551 tcp_v4_mtu_reduced(sk); 552 552 } else {
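The hunk above pairs WRITE_ONCE() in the ICMP handler with READ_ONCE() in tcp_v4_mtu_reduced(), because tp->mtu_info can be written and read concurrently without the socket lock. A minimal scalar-only model of those macros (the kernel's versions also handle aggregate sizes; this sketch covers just the u32 case used here):

```c
#include <assert.h>

/* Force a single untorn access through a volatile-qualified lvalue so
 * the compiler cannot re-read, cache, or split the load/store. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

struct toy_tcp_sock { unsigned int mtu_info; };

/* Writer side (ICMP error handler) and reader side (mtu_reduced). */
static unsigned int set_and_get_mtu(struct toy_tcp_sock *tp, unsigned int info)
{
    WRITE_ONCE(tp->mtu_info, info);
    return READ_ONCE(tp->mtu_info);
}
```

Note `__typeof__` is a GCC/Clang extension, as in the kernel itself.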
+1
net/ipv4/tcp_output.c
··· 1732 1732 return __tcp_mtu_to_mss(sk, pmtu) - 1733 1733 (tcp_sk(sk)->tcp_header_len - sizeof(struct tcphdr)); 1734 1734 } 1735 + EXPORT_SYMBOL(tcp_mtu_to_mss); 1735 1736 1736 1737 /* Inverse of above */ 1737 1738 int tcp_mss_to_mtu(struct sock *sk, int mss)
+3 -3
net/ipv4/udp.c
··· 1102 1102 } 1103 1103 1104 1104 ipcm_init_sk(&ipc, inet); 1105 - ipc.gso_size = up->gso_size; 1105 + ipc.gso_size = READ_ONCE(up->gso_size); 1106 1106 1107 1107 if (msg->msg_controllen) { 1108 1108 err = udp_cmsg_send(sk, msg, &ipc.gso_size); ··· 2695 2695 case UDP_SEGMENT: 2696 2696 if (val < 0 || val > USHRT_MAX) 2697 2697 return -EINVAL; 2698 - up->gso_size = val; 2698 + WRITE_ONCE(up->gso_size, val); 2699 2699 break; 2700 2700 2701 2701 case UDP_GRO: ··· 2790 2790 break; 2791 2791 2792 2792 case UDP_SEGMENT: 2793 - val = up->gso_size; 2793 + val = READ_ONCE(up->gso_size); 2794 2794 break; 2795 2795 2796 2796 case UDP_GRO:
+2 -4
net/ipv4/udp_diag.c
··· 77 77 kfree_skb(rep); 78 78 goto out; 79 79 } 80 - err = netlink_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid, 81 - MSG_DONTWAIT); 82 - if (err > 0) 83 - err = 0; 80 + err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid); 81 + 84 82 out: 85 83 if (sk) 86 84 sock_put(sk);
+4 -2
net/ipv4/udp_offload.c
··· 525 525 526 526 if ((!sk && (skb->dev->features & NETIF_F_GRO_UDP_FWD)) || 527 527 (sk && udp_sk(sk)->gro_enabled) || NAPI_GRO_CB(skb)->is_flist) 528 - pp = call_gro_receive(udp_gro_receive_segment, head, skb); 529 - return pp; 528 + return call_gro_receive(udp_gro_receive_segment, head, skb); 529 + 530 + /* no GRO, be sure flush the current packet */ 531 + goto out; 530 532 } 531 533 532 534 if (NAPI_GRO_CB(skb)->encap_mark ||
+31 -1
net/ipv6/ip6_output.c
··· 60 60 { 61 61 struct dst_entry *dst = skb_dst(skb); 62 62 struct net_device *dev = dst->dev; 63 + unsigned int hh_len = LL_RESERVED_SPACE(dev); 64 + int delta = hh_len - skb_headroom(skb); 63 65 const struct in6_addr *nexthop; 64 66 struct neighbour *neigh; 65 67 int ret; 68 + 69 + /* Be paranoid, rather than too clever. */ 70 + if (unlikely(delta > 0) && dev->header_ops) { 71 + /* pskb_expand_head() might crash, if skb is shared */ 72 + if (skb_shared(skb)) { 73 + struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC); 74 + 75 + if (likely(nskb)) { 76 + if (skb->sk) 77 + skb_set_owner_w(skb, skb->sk); 78 + consume_skb(skb); 79 + } else { 80 + kfree_skb(skb); 81 + } 82 + skb = nskb; 83 + } 84 + if (skb && 85 + pskb_expand_head(skb, SKB_DATA_ALIGN(delta), 0, GFP_ATOMIC)) { 86 + kfree_skb(skb); 87 + skb = NULL; 88 + } 89 + if (!skb) { 90 + IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTDISCARDS); 91 + return -ENOMEM; 92 + } 93 + } 66 94 67 95 if (ipv6_addr_is_multicast(&ipv6_hdr(skb)->daddr)) { 68 96 struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb)); ··· 507 479 if (skb_warn_if_lro(skb)) 508 480 goto drop; 509 481 510 - if (!xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) { 482 + if (!net->ipv6.devconf_all->disable_policy && 483 + !idev->cnf.disable_policy && 484 + !xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb)) { 511 485 __IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS); 512 486 goto drop; 513 487 }
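The ip6_finish_output2() hunk grows the skb head only when two conditions meet: the device actually writes a link-layer header (header_ops set, e.g. after iptables TEE rerouted the packet) and the existing headroom falls short of LL_RESERVED_SPACE(dev). A sketch of just that decision (helper name is illustrative):

```c
#include <assert.h>

/* Returns the number of bytes the head must grow by, or 0 when no
 * reallocation is needed; mirrors the delta = hh_len - skb_headroom(skb)
 * test in the hunk above. */
static int headroom_shortfall(int ll_reserved_space, int skb_headroom,
                              int has_header_ops)
{
    int delta = ll_reserved_space - skb_headroom;

    if (delta <= 0 || !has_header_ops)
        return 0;
    return delta;   /* fed to pskb_expand_head() after alignment */
}
```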
+18 -3
net/ipv6/tcp_ipv6.c
··· 348 348 static void tcp_v6_mtu_reduced(struct sock *sk) 349 349 { 350 350 struct dst_entry *dst; 351 + u32 mtu; 351 352 352 353 if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)) 353 354 return; 354 355 355 - dst = inet6_csk_update_pmtu(sk, tcp_sk(sk)->mtu_info); 356 + mtu = READ_ONCE(tcp_sk(sk)->mtu_info); 357 + 358 + /* Drop requests trying to increase our current mss. 359 + * Check done in __ip6_rt_update_pmtu() is too late. 360 + */ 361 + if (tcp_mtu_to_mss(sk, mtu) >= tcp_sk(sk)->mss_cache) 362 + return; 363 + 364 + dst = inet6_csk_update_pmtu(sk, mtu); 356 365 if (!dst) 357 366 return; 358 367 ··· 442 433 } 443 434 444 435 if (type == ICMPV6_PKT_TOOBIG) { 436 + u32 mtu = ntohl(info); 437 + 445 438 /* We are not interested in TCP_LISTEN and open_requests 446 439 * (SYN-ACKs send out by Linux are always <576bytes so 447 440 * they should go through unfragmented). ··· 454 443 if (!ip6_sk_accept_pmtu(sk)) 455 444 goto out; 456 445 457 - tp->mtu_info = ntohl(info); 446 + if (mtu < IPV6_MIN_MTU) 447 + goto out; 448 + 449 + WRITE_ONCE(tp->mtu_info, mtu); 450 + 458 451 if (!sock_owned_by_user(sk)) 459 452 tcp_v6_mtu_reduced(sk); 460 453 else if (!test_and_set_bit(TCP_MTU_REDUCED_DEFERRED, ··· 555 540 opt = ireq->ipv6_opt; 556 541 if (!opt) 557 542 opt = rcu_dereference(np->opt); 558 - err = ip6_xmit(sk, skb, fl6, sk->sk_mark, opt, 543 + err = ip6_xmit(sk, skb, fl6, skb->mark ? : sk->sk_mark, opt, 559 544 tclass, sk->sk_priority); 560 545 rcu_read_unlock(); 561 546 err = net_xmit_eval(err);
+1 -1
net/ipv6/udp.c
··· 1296 1296 int (*getfrag)(void *, char *, int, int, int, struct sk_buff *); 1297 1297 1298 1298 ipcm6_init(&ipc6); 1299 - ipc6.gso_size = up->gso_size; 1299 + ipc6.gso_size = READ_ONCE(up->gso_size); 1300 1300 ipc6.sockc.tsflags = sk->sk_tsflags; 1301 1301 ipc6.sockc.mark = sk->sk_mark; 1302 1302
+1 -1
net/ipv6/xfrm6_output.c
··· 49 49 { 50 50 struct dst_entry *dst = skb_dst(skb); 51 51 struct xfrm_state *x = dst->xfrm; 52 - int mtu; 52 + unsigned int mtu; 53 53 bool toobig; 54 54 55 55 #ifdef CONFIG_NETFILTER
+12 -10
net/iucv/iucv.c
··· 1635 1635 u8 iptype; 1636 1636 u32 ipmsgid; 1637 1637 u32 iptrgcls; 1638 - union { 1639 - u32 iprmmsg1_u32; 1640 - u8 iprmmsg1[4]; 1641 - } ln1msg1; 1642 - union { 1643 - u32 ipbfln1f; 1644 - u8 iprmmsg2[4]; 1645 - } ln1msg2; 1638 + struct { 1639 + union { 1640 + u32 iprmmsg1_u32; 1641 + u8 iprmmsg1[4]; 1642 + } ln1msg1; 1643 + union { 1644 + u32 ipbfln1f; 1645 + u8 iprmmsg2[4]; 1646 + } ln1msg2; 1647 + } rmmsg; 1646 1648 u32 res1[3]; 1647 1649 u32 ipbfln2f; 1648 1650 u8 ippollfg; ··· 1662 1660 msg.id = imp->ipmsgid; 1663 1661 msg.class = imp->iptrgcls; 1664 1662 if (imp->ipflags1 & IUCV_IPRMDATA) { 1665 - memcpy(msg.rmmsg, imp->ln1msg1.iprmmsg1, 8); 1663 + memcpy(msg.rmmsg, &imp->rmmsg, 8); 1666 1664 msg.length = 8; 1667 1665 } else 1668 - msg.length = imp->ln1msg2.ipbfln1f; 1666 + msg.length = imp->rmmsg.ln1msg2.ipbfln1f; 1669 1667 msg.reply_size = imp->ipbfln2f; 1670 1668 path->handler->message_pending(path, &msg); 1671 1669 }
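The iucv hunk wraps the two adjacent 4-byte unions in one struct so that the 8-byte memcpy covers a single contiguous object; the old code copied 8 bytes starting at the first union, running past its end. A layout check under that assumption (toy struct name, same members):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirror of the new rmmsg grouping: two 4-byte unions packed into one
 * 8-byte struct, so &rmmsg is a valid source for an 8-byte copy. */
struct toy_rmmsg {
    union {
        uint32_t iprmmsg1_u32;
        uint8_t  iprmmsg1[4];
    } ln1msg1;
    union {
        uint32_t ipbfln1f;
        uint8_t  iprmmsg2[4];
    } ln1msg2;
};
```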
+1
net/mptcp/mib.c
··· 44 44 SNMP_MIB_ITEM("RmSubflow", MPTCP_MIB_RMSUBFLOW), 45 45 SNMP_MIB_ITEM("MPPrioTx", MPTCP_MIB_MPPRIOTX), 46 46 SNMP_MIB_ITEM("MPPrioRx", MPTCP_MIB_MPPRIORX), 47 + SNMP_MIB_ITEM("RcvPruned", MPTCP_MIB_RCVPRUNED), 47 48 SNMP_MIB_SENTINEL 48 49 }; 49 50
+1
net/mptcp/mib.h
··· 37 37 MPTCP_MIB_RMSUBFLOW, /* Remove a subflow */ 38 38 MPTCP_MIB_MPPRIOTX, /* Transmit a MP_PRIO */ 39 39 MPTCP_MIB_MPPRIORX, /* Received a MP_PRIO */ 40 + MPTCP_MIB_RCVPRUNED, /* Incoming packet dropped due to memory limit */ 40 41 __MPTCP_MIB_MAX 41 42 }; 42 43
+2 -4
net/mptcp/mptcp_diag.c
··· 57 57 kfree_skb(rep); 58 58 goto out; 59 59 } 60 - err = netlink_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid, 61 - MSG_DONTWAIT); 62 - if (err > 0) 63 - err = 0; 60 + err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid); 61 + 64 62 out: 65 63 sock_put(sk); 66 64
+13 -6
net/mptcp/options.c
··· 1035 1035 return hmac == mp_opt->ahmac; 1036 1036 } 1037 1037 1038 - void mptcp_incoming_options(struct sock *sk, struct sk_buff *skb) 1038 + /* Return false if a subflow has been reset, else return true */ 1039 + bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb) 1039 1040 { 1040 1041 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); 1041 1042 struct mptcp_sock *msk = mptcp_sk(subflow->conn); ··· 1054 1053 __mptcp_check_push(subflow->conn, sk); 1055 1054 __mptcp_data_acked(subflow->conn); 1056 1055 mptcp_data_unlock(subflow->conn); 1057 - return; 1056 + return true; 1058 1057 } 1059 1058 1060 1059 mptcp_get_options(sk, skb, &mp_opt); 1060 + 1061 + /* The subflow can be in close state only if check_fully_established() 1062 + * just sent a reset. If so, tell the caller to ignore the current packet. 1063 + */ 1061 1064 if (!check_fully_established(msk, sk, subflow, skb, &mp_opt)) 1062 - return; 1065 + return sk->sk_state != TCP_CLOSE; 1063 1066 1064 1067 if (mp_opt.fastclose && 1065 1068 msk->local_key == mp_opt.rcvr_key) { ··· 1105 1100 } 1106 1101 1107 1102 if (!mp_opt.dss) 1108 - return; 1103 + return true; 1109 1104 1110 1105 /* we can't wait for recvmsg() to update the ack_seq, otherwise 1111 1106 * monodirectional flows will stuck ··· 1124 1119 schedule_work(&msk->work)) 1125 1120 sock_hold(subflow->conn); 1126 1121 1127 - return; 1122 + return true; 1128 1123 } 1129 1124 1130 1125 mpext = skb_ext_add(skb, SKB_EXT_MPTCP); 1131 1126 if (!mpext) 1132 - return; 1127 + return true; 1133 1128 1134 1129 memset(mpext, 0, sizeof(*mpext)); 1135 1130 ··· 1158 1153 if (mpext->csum_reqd) 1159 1154 mpext->csum = mp_opt.csum; 1160 1155 } 1156 + 1157 + return true; 1161 1158 } 1162 1159 1163 1160 static void mptcp_set_rwin(const struct tcp_sock *tp)
+7 -5
net/mptcp/protocol.c
··· 474 474 bool cleanup, rx_empty; 475 475 476 476 cleanup = (space > 0) && (space >= (old_space << 1)); 477 - rx_empty = !atomic_read(&sk->sk_rmem_alloc); 477 + rx_empty = !__mptcp_rmem(sk); 478 478 479 479 mptcp_for_each_subflow(msk, subflow) { 480 480 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); ··· 720 720 sk_rbuf = ssk_rbuf; 721 721 722 722 /* over limit? can't append more skbs to msk, Also, no need to wake-up*/ 723 - if (atomic_read(&sk->sk_rmem_alloc) > sk_rbuf) 723 + if (__mptcp_rmem(sk) > sk_rbuf) { 724 + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED); 724 725 return; 726 + } 725 727 726 728 /* Wake-up the reader only for in-sequence data */ 727 729 mptcp_data_lock(sk); ··· 1756 1754 if (!(flags & MSG_PEEK)) { 1757 1755 /* we will bulk release the skb memory later */ 1758 1756 skb->destructor = NULL; 1759 - msk->rmem_released += skb->truesize; 1757 + WRITE_ONCE(msk->rmem_released, msk->rmem_released + skb->truesize); 1760 1758 __skb_unlink(skb, &msk->receive_queue); 1761 1759 __kfree_skb(skb); 1762 1760 } ··· 1875 1873 1876 1874 atomic_sub(msk->rmem_released, &sk->sk_rmem_alloc); 1877 1875 sk_mem_uncharge(sk, msk->rmem_released); 1878 - msk->rmem_released = 0; 1876 + WRITE_ONCE(msk->rmem_released, 0); 1879 1877 } 1880 1878 1881 1879 static void __mptcp_splice_receive_queue(struct sock *sk) ··· 2382 2380 msk->out_of_order_queue = RB_ROOT; 2383 2381 msk->first_pending = NULL; 2384 2382 msk->wmem_reserved = 0; 2385 - msk->rmem_released = 0; 2383 + WRITE_ONCE(msk->rmem_released, 0); 2386 2384 msk->tx_pending_data = 0; 2387 2385 2388 2386 msk->first = NULL;
+9 -1
net/mptcp/protocol.h
··· 296 296 return (struct mptcp_sock *)sk; 297 297 } 298 298 299 + /* the msk socket don't use the backlog, also account for the bulk 300 + * free memory 301 + */ 302 + static inline int __mptcp_rmem(const struct sock *sk) 303 + { 304 + return atomic_read(&sk->sk_rmem_alloc) - READ_ONCE(mptcp_sk(sk)->rmem_released); 305 + } 306 + 299 307 static inline int __mptcp_space(const struct sock *sk) 300 308 { 301 - return tcp_space(sk) + READ_ONCE(mptcp_sk(sk)->rmem_released); 309 + return tcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf) - __mptcp_rmem(sk)); 302 310 } 303 311 304 312 static inline struct mptcp_data_frag *mptcp_send_head(const struct sock *sk)
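The protocol.h hunk above is the core of the rmem-accounting fix: memory queued for bulk release (rmem_released) no longer counts against the receive budget, so window and space computations work on sk_rmem_alloc minus rmem_released. A plain-integer model of the two helpers (the scaling inside tcp_win_from_space() is assumed to be neutral here for illustration):

```c
#include <assert.h>

/* Forward rx memory actually in use: allocated minus not-yet-released. */
static int toy_mptcp_rmem(int sk_rmem_alloc, int rmem_released)
{
    return sk_rmem_alloc - rmem_released;
}

/* Remaining receive space, with a 1:1 win-from-space factor assumed. */
static int toy_mptcp_space(int sk_rcvbuf, int sk_rmem_alloc, int rmem_released)
{
    return sk_rcvbuf - toy_mptcp_rmem(sk_rmem_alloc, rmem_released);
}
```

This is why the data_ready path above can compare `__mptcp_rmem(sk)` against sk_rbuf instead of the raw sk_rmem_alloc: skbs parked for bulk free no longer trigger the RCVPRUNED drop.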
+51 -17
net/mptcp/sockopt.c
··· 157 157 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 158 158 bool slow = lock_sock_fast(ssk); 159 159 160 - switch (optname) { 161 - case SO_TIMESTAMP_OLD: 162 - case SO_TIMESTAMP_NEW: 163 - case SO_TIMESTAMPNS_OLD: 164 - case SO_TIMESTAMPNS_NEW: 165 - sock_set_timestamp(sk, optname, !!val); 166 - break; 167 - case SO_TIMESTAMPING_NEW: 168 - case SO_TIMESTAMPING_OLD: 169 - sock_set_timestamping(sk, optname, val); 170 - break; 171 - } 172 - 160 + sock_set_timestamp(sk, optname, !!val); 173 161 unlock_sock_fast(ssk, slow); 174 162 } 175 163 ··· 166 178 } 167 179 168 180 static int mptcp_setsockopt_sol_socket_int(struct mptcp_sock *msk, int optname, 169 - sockptr_t optval, unsigned int optlen) 181 + sockptr_t optval, 182 + unsigned int optlen) 170 183 { 171 184 int val, ret; 172 185 ··· 194 205 case SO_TIMESTAMP_NEW: 195 206 case SO_TIMESTAMPNS_OLD: 196 207 case SO_TIMESTAMPNS_NEW: 197 - case SO_TIMESTAMPING_OLD: 198 - case SO_TIMESTAMPING_NEW: 199 208 return mptcp_setsockopt_sol_socket_tstamp(msk, optname, val); 200 209 } 201 210 202 211 return -ENOPROTOOPT; 212 + } 213 + 214 + static int mptcp_setsockopt_sol_socket_timestamping(struct mptcp_sock *msk, 215 + int optname, 216 + sockptr_t optval, 217 + unsigned int optlen) 218 + { 219 + struct mptcp_subflow_context *subflow; 220 + struct sock *sk = (struct sock *)msk; 221 + struct so_timestamping timestamping; 222 + int ret; 223 + 224 + if (optlen == sizeof(timestamping)) { 225 + if (copy_from_sockptr(&timestamping, optval, 226 + sizeof(timestamping))) 227 + return -EFAULT; 228 + } else if (optlen == sizeof(int)) { 229 + memset(&timestamping, 0, sizeof(timestamping)); 230 + 231 + if (copy_from_sockptr(&timestamping.flags, optval, sizeof(int))) 232 + return -EFAULT; 233 + } else { 234 + return -EINVAL; 235 + } 236 + 237 + ret = sock_setsockopt(sk->sk_socket, SOL_SOCKET, optname, 238 + KERNEL_SOCKPTR(&timestamping), 239 + sizeof(timestamping)); 240 + if (ret) 241 + return ret; 242 + 243 + lock_sock(sk); 244 + 
245 + mptcp_for_each_subflow(msk, subflow) { 246 + struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 247 + bool slow = lock_sock_fast(ssk); 248 + 249 + sock_set_timestamping(sk, optname, timestamping); 250 + unlock_sock_fast(ssk, slow); 251 + } 252 + 253 + release_sock(sk); 254 + 255 + return 0; 203 256 } 204 257 205 258 static int mptcp_setsockopt_sol_socket_linger(struct mptcp_sock *msk, sockptr_t optval, ··· 330 299 case SO_TIMESTAMP_NEW: 331 300 case SO_TIMESTAMPNS_OLD: 332 301 case SO_TIMESTAMPNS_NEW: 302 + return mptcp_setsockopt_sol_socket_int(msk, optname, optval, 303 + optlen); 333 304 case SO_TIMESTAMPING_OLD: 334 305 case SO_TIMESTAMPING_NEW: 335 - return mptcp_setsockopt_sol_socket_int(msk, optname, optval, optlen); 306 + return mptcp_setsockopt_sol_socket_timestamping(msk, optname, 307 + optval, optlen); 336 308 case SO_LINGER: 337 309 return mptcp_setsockopt_sol_socket_linger(msk, optval, optlen); 338 310 case SO_RCVLOWAT:
+3 -8
net/mptcp/subflow.c
··· 214 214 ntohs(inet_sk(sk_listener)->inet_sport), 215 215 ntohs(inet_sk((struct sock *)subflow_req->msk)->inet_sport)); 216 216 if (!mptcp_pm_sport_in_anno_list(subflow_req->msk, sk_listener)) { 217 - sock_put((struct sock *)subflow_req->msk); 218 - mptcp_token_destroy_request(req); 219 - tcp_request_sock_ops.destructor(req); 220 - subflow_req->msk = NULL; 221 - subflow_req->mp_join = 0; 222 217 SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MISMATCHPORTSYNRX); 223 218 return -EPERM; 224 219 } ··· 225 230 if (unlikely(req->syncookie)) { 226 231 if (mptcp_can_accept_new_subflow(subflow_req->msk)) 227 232 subflow_init_req_cookie_join_save(subflow_req, skb); 233 + else 234 + return -EPERM; 228 235 } 229 236 230 237 pr_debug("token=%u, remote_nonce=%u msk=%p", subflow_req->token, ··· 266 269 if (!mptcp_token_join_cookie_init_state(subflow_req, skb)) 267 270 return -EINVAL; 268 271 269 - if (mptcp_can_accept_new_subflow(subflow_req->msk)) 270 - subflow_req->mp_join = 1; 271 - 272 + subflow_req->mp_join = 1; 272 273 subflow_req->ssn_offset = TCP_SKB_CB(skb)->seq - 1; 273 274 } 274 275
+15 -1
net/mptcp/syncookies.c
··· 37 37 38 38 static u32 mptcp_join_entry_hash(struct sk_buff *skb, struct net *net) 39 39 { 40 - u32 i = skb_get_hash(skb) ^ net_hash_mix(net); 40 + static u32 mptcp_join_hash_secret __read_mostly; 41 + struct tcphdr *th = tcp_hdr(skb); 42 + u32 seq, i; 43 + 44 + net_get_random_once(&mptcp_join_hash_secret, 45 + sizeof(mptcp_join_hash_secret)); 46 + 47 + if (th->syn) 48 + seq = TCP_SKB_CB(skb)->seq; 49 + else 50 + seq = TCP_SKB_CB(skb)->seq - 1; 51 + 52 + i = jhash_3words(seq, net_hash_mix(net), 53 + (__force __u32)th->source << 16 | (__force __u32)th->dest, 54 + mptcp_join_hash_secret); 41 55 42 56 return i % ARRAY_SIZE(join_entries); 43 57 }
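The syncookies hunk replaces the predictable skb_get_hash() bucket choice with a keyed hash over the sequence number, netns mix, and ports, and it hashes `seq` for a SYN but `seq - 1` for the later ACK so both land in the same bucket. A sketch of that bucket computation, where `mix32()` is an illustrative stand-in for jhash_3words(), not the kernel function:

```c
#include <assert.h>
#include <stdint.h>

/* Deterministic keyed mixer; any good 3-word hash works here. */
static uint32_t mix32(uint32_t a, uint32_t b, uint32_t c, uint32_t secret)
{
    uint32_t h = secret ^ a;

    h = (h ^ b) * 0x9e3779b1u;
    h = (h ^ c) * 0x85ebca6bu;
    return h ^ (h >> 16);
}

/* A SYN hashes its own seq; a non-SYN (the cookie ACK, seq advanced by
 * one) hashes seq - 1, so both map to the same join-entry bucket. */
static uint32_t join_bucket(uint32_t skb_seq, int is_syn, uint32_t net_mix,
                            uint16_t sport, uint16_t dport, uint32_t secret)
{
    uint32_t seq = is_syn ? skb_seq : skb_seq - 1;
    uint32_t ports = ((uint32_t)sport << 16) | dport;

    return mix32(seq, net_mix, ports, secret) % 16; /* table size assumed 16 */
}
```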
+6
net/ncsi/Kconfig
··· 17 17 help 18 18 This allows to get MAC address from NCSI firmware and set them back to 19 19 controller. 20 + config NCSI_OEM_CMD_KEEP_PHY 21 + bool "Keep PHY Link up" 22 + depends on NET_NCSI 23 + help 24 + This allows to keep PHY link up and prevents any channel resets during 25 + the host load.
+5
net/ncsi/internal.h
··· 78 78 /* OEM Vendor Manufacture ID */ 79 79 #define NCSI_OEM_MFR_MLX_ID 0x8119 80 80 #define NCSI_OEM_MFR_BCM_ID 0x113d 81 + #define NCSI_OEM_MFR_INTEL_ID 0x157 82 + /* Intel specific OEM command */ 83 + #define NCSI_OEM_INTEL_CMD_KEEP_PHY 0x20 /* CMD ID for Keep PHY up */ 81 84 /* Broadcom specific OEM Command */ 82 85 #define NCSI_OEM_BCM_CMD_GMA 0x01 /* CMD ID for Get MAC */ 83 86 /* Mellanox specific OEM Command */ ··· 89 86 #define NCSI_OEM_MLX_CMD_SMAF 0x01 /* CMD ID for Set MC Affinity */ 90 87 #define NCSI_OEM_MLX_CMD_SMAF_PARAM 0x07 /* Parameter for SMAF */ 91 88 /* OEM Command payload lengths*/ 89 + #define NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN 7 92 90 #define NCSI_OEM_BCM_CMD_GMA_LEN 12 93 91 #define NCSI_OEM_MLX_CMD_GMA_LEN 8 94 92 #define NCSI_OEM_MLX_CMD_SMAF_LEN 60 ··· 275 271 ncsi_dev_state_probe_mlx_gma, 276 272 ncsi_dev_state_probe_mlx_smaf, 277 273 ncsi_dev_state_probe_cis, 274 + ncsi_dev_state_probe_keep_phy, 278 275 ncsi_dev_state_probe_gvi, 279 276 ncsi_dev_state_probe_gc, 280 277 ncsi_dev_state_probe_gls,
+48 -3
net/ncsi/ncsi-manage.c
··· 689 689 return 0; 690 690 } 691 691 692 + #if IS_ENABLED(CONFIG_NCSI_OEM_CMD_KEEP_PHY) 693 + 694 + static int ncsi_oem_keep_phy_intel(struct ncsi_cmd_arg *nca) 695 + { 696 + unsigned char data[NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN]; 697 + int ret = 0; 698 + 699 + nca->payload = NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN; 700 + 701 + memset(data, 0, NCSI_OEM_INTEL_CMD_KEEP_PHY_LEN); 702 + *(unsigned int *)data = ntohl((__force __be32)NCSI_OEM_MFR_INTEL_ID); 703 + 704 + data[4] = NCSI_OEM_INTEL_CMD_KEEP_PHY; 705 + 706 + /* PHY Link up attribute */ 707 + data[6] = 0x1; 708 + 709 + nca->data = data; 710 + 711 + ret = ncsi_xmit_cmd(nca); 712 + if (ret) 713 + netdev_err(nca->ndp->ndev.dev, 714 + "NCSI: Failed to transmit cmd 0x%x during configure\n", 715 + nca->type); 716 + return ret; 717 + } 718 + 719 + #endif 720 + 692 721 #if IS_ENABLED(CONFIG_NCSI_OEM_CMD_GET_MAC) 693 722 694 723 /* NCSI OEM Command APIs */ ··· 729 700 nca->payload = NCSI_OEM_BCM_CMD_GMA_LEN; 730 701 731 702 memset(data, 0, NCSI_OEM_BCM_CMD_GMA_LEN); 732 - *(unsigned int *)data = ntohl(NCSI_OEM_MFR_BCM_ID); 703 + *(unsigned int *)data = ntohl((__force __be32)NCSI_OEM_MFR_BCM_ID); 733 704 data[5] = NCSI_OEM_BCM_CMD_GMA; 734 705 735 706 nca->data = data; ··· 753 724 nca->payload = NCSI_OEM_MLX_CMD_GMA_LEN; 754 725 755 726 memset(&u, 0, sizeof(u)); 756 - u.data_u32[0] = ntohl(NCSI_OEM_MFR_MLX_ID); 727 + u.data_u32[0] = ntohl((__force __be32)NCSI_OEM_MFR_MLX_ID); 757 728 u.data_u8[5] = NCSI_OEM_MLX_CMD_GMA; 758 729 u.data_u8[6] = NCSI_OEM_MLX_CMD_GMA_PARAM; 759 730 ··· 776 747 int ret = 0; 777 748 778 749 memset(&u, 0, sizeof(u)); 779 - u.data_u32[0] = ntohl(NCSI_OEM_MFR_MLX_ID); 750 + u.data_u32[0] = ntohl((__force __be32)NCSI_OEM_MFR_MLX_ID); 780 751 u.data_u8[5] = NCSI_OEM_MLX_CMD_SMAF; 781 752 u.data_u8[6] = NCSI_OEM_MLX_CMD_SMAF_PARAM; 782 753 memcpy(&u.data_u8[MLX_SMAF_MAC_ADDR_OFFSET], ··· 1421 1392 } 1422 1393 1423 1394 nd->state = ncsi_dev_state_probe_gvi; 1395 + if (IS_ENABLED(CONFIG_NCSI_OEM_CMD_KEEP_PHY)) 1396 + nd->state = ncsi_dev_state_probe_keep_phy; 1424 1397 break; 1398 + #if IS_ENABLED(CONFIG_NCSI_OEM_CMD_KEEP_PHY) 1399 + case ncsi_dev_state_probe_keep_phy: 1400 + ndp->pending_req_num = 1; 1401 + 1402 + nca.type = NCSI_PKT_CMD_OEM; 1403 + nca.package = ndp->active_package->id; 1404 + nca.channel = 0; 1405 + ret = ncsi_oem_keep_phy_intel(&nca); 1406 + if (ret) 1407 + goto error; 1408 + 1409 + nd->state = ncsi_dev_state_probe_gvi; 1410 + break; 1411 + #endif /* CONFIG_NCSI_OEM_CMD_KEEP_PHY */ 1425 1412 case ncsi_dev_state_probe_gvi: 1426 1413 case ncsi_dev_state_probe_gc: 1427 1414 case ncsi_dev_state_probe_gls:
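The new ncsi_oem_keep_phy_intel() above lays the 7-byte Intel OEM payload out as: bytes 0-3 carry the manufacturer ID in network (big-endian) order, byte 4 the command ID, byte 6 the link-up attribute. A standalone builder using explicit shifts instead of ntohl casts (IDs copied from the hunks above; the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NCSI_OEM_MFR_INTEL_ID       0x157
#define NCSI_OEM_INTEL_CMD_KEEP_PHY 0x20
#define PAYLOAD_LEN                 7

/* Build the "keep PHY link up" OEM command payload byte by byte. */
static void build_keep_phy_payload(uint8_t data[PAYLOAD_LEN])
{
    memset(data, 0, PAYLOAD_LEN);
    data[0] = (NCSI_OEM_MFR_INTEL_ID >> 24) & 0xff; /* MFR ID, big-endian */
    data[1] = (NCSI_OEM_MFR_INTEL_ID >> 16) & 0xff;
    data[2] = (NCSI_OEM_MFR_INTEL_ID >> 8) & 0xff;
    data[3] = NCSI_OEM_MFR_INTEL_ID & 0xff;
    data[4] = NCSI_OEM_INTEL_CMD_KEEP_PHY;          /* command ID */
    data[6] = 0x1;                                  /* PHY link up attribute */
}
```

The explicit shifts make the byte order independent of host endianness, which is what the `ntohl((__force __be32)...)` casts in the patch achieve on the wire.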
+9 -2
net/ncsi/ncsi-rsp.c
··· 403 403 /* Update to VLAN mode */ 404 404 cmd = (struct ncsi_cmd_ev_pkt *)skb_network_header(nr->cmd); 405 405 ncm->enable = 1; 406 - ncm->data[0] = ntohl(cmd->mode); 406 + ncm->data[0] = ntohl((__force __be32)cmd->mode); 407 407 408 408 return 0; 409 409 } ··· 699 699 return 0; 700 700 } 701 701 702 + /* Response handler for Intel card */ 703 + static int ncsi_rsp_handler_oem_intel(struct ncsi_request *nr) 704 + { 705 + return 0; 706 + } 707 + 702 708 static struct ncsi_rsp_oem_handler { 703 709 unsigned int mfr_id; 704 710 int (*handler)(struct ncsi_request *nr); 705 711 } ncsi_rsp_oem_handlers[] = { 706 712 { NCSI_OEM_MFR_MLX_ID, ncsi_rsp_handler_oem_mlx }, 707 - { NCSI_OEM_MFR_BCM_ID, ncsi_rsp_handler_oem_bcm } 713 + { NCSI_OEM_MFR_BCM_ID, ncsi_rsp_handler_oem_bcm }, 714 + { NCSI_OEM_MFR_INTEL_ID, ncsi_rsp_handler_oem_intel } 708 715 }; 709 716 710 717 /* Response handler for OEM command */
+9 -2
net/netfilter/nf_conntrack_core.c
··· 149 149 150 150 spin_lock(&nf_conntrack_locks_all_lock); 151 151 152 - nf_conntrack_locks_all = true; 152 + /* For nf_conntrack_locks_all, only the latest time when another 153 + * CPU will see an update is controlled by the "release" of the 154 + * spin_lock below. 155 + * The earliest time is not controlled, and thus KCSAN could detect 156 + * a race when nf_conntrack_lock() reads the variable. 157 + * WRITE_ONCE() is used to ensure the compiler will not 158 + * optimize the write. 159 + */ 160 + WRITE_ONCE(nf_conntrack_locks_all, true); 153 161 154 162 for (i = 0; i < CONNTRACK_LOCKS; i++) { 155 163 spin_lock(&nf_conntrack_locks[i]); ··· 2465 2457 } 2466 2458 2467 2459 list_for_each_entry(net, net_exit_list, exit_list) { 2468 - nf_conntrack_proto_pernet_fini(net); 2469 2460 nf_conntrack_ecache_pernet_fini(net); 2470 2461 nf_conntrack_expect_pernet_fini(net); 2471 2462 free_percpu(net->ct.stat);
net/netfilter/nf_conntrack_netlink.c (+3)
···
 	if (!help)
 		return 0;
 
+	rcu_read_lock();
 	helper = rcu_dereference(help->helper);
 	if (!helper)
 		goto out;
···
 
 	nla_nest_end(skb, nest_helper);
 out:
+	rcu_read_unlock();
 	return 0;
 
 nla_put_failure:
+	rcu_read_unlock();
 	return -1;
 }
 
net/netfilter/nf_conntrack_proto.c (-7)
···
 #endif
 }
 
-void nf_conntrack_proto_pernet_fini(struct net *net)
-{
-#ifdef CONFIG_NF_CT_PROTO_GRE
-	nf_ct_gre_keymap_flush(net);
-#endif
-}
-
 module_param_call(hashsize, nf_conntrack_set_hashsize, param_get_uint,
 		  &nf_conntrack_htable_size, 0600);
 
net/netfilter/nf_conntrack_proto_gre.c (-13)
···
 	return &net->ct.nf_ct_proto.gre;
 }
 
-void nf_ct_gre_keymap_flush(struct net *net)
-{
-	struct nf_gre_net *net_gre = gre_pernet(net);
-	struct nf_ct_gre_keymap *km, *tmp;
-
-	spin_lock_bh(&keymap_lock);
-	list_for_each_entry_safe(km, tmp, &net_gre->keymap_list, list) {
-		list_del_rcu(&km->list);
-		kfree_rcu(km, rcu);
-	}
-	spin_unlock_bh(&keymap_lock);
-}
-
 static inline int gre_key_cmpfn(const struct nf_ct_gre_keymap *km,
 				const struct nf_conntrack_tuple *t)
 {
net/netfilter/nf_conntrack_proto_tcp.c (+51 -18)
···
 	return true;
 }
 
+static bool tcp_can_early_drop(const struct nf_conn *ct)
+{
+	switch (ct->proto.tcp.state) {
+	case TCP_CONNTRACK_FIN_WAIT:
+	case TCP_CONNTRACK_LAST_ACK:
+	case TCP_CONNTRACK_TIME_WAIT:
+	case TCP_CONNTRACK_CLOSE:
+	case TCP_CONNTRACK_CLOSE_WAIT:
+		return true;
+	default:
+		break;
+	}
+
+	return false;
+}
+
 /* Returns verdict for packet, or -1 for invalid. */
 int nf_conntrack_tcp_packet(struct nf_conn *ct,
 			    struct sk_buff *skb,
···
 		if (index != TCP_RST_SET)
 			break;
 
-		if (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET) {
+		/* If we are closing, tuple might have been re-used already.
+		 * last_index, last_ack, and all other ct fields used for
+		 * sequence/window validation are outdated in that case.
+		 *
+		 * As the conntrack can already be expired by GC under pressure,
+		 * just skip validation checks.
+		 */
+		if (tcp_can_early_drop(ct))
+			goto in_window;
+
+		/* td_maxack might be outdated if we let a SYN through earlier */
+		if ((ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET) &&
+		    ct->proto.tcp.last_index != TCP_SYN_SET) {
 			u32 seq = ntohl(th->seq);
 
-			if (before(seq, ct->proto.tcp.seen[!dir].td_maxack)) {
+			/* If we are not in established state and SEQ=0 this is most
+			 * likely an answer to a SYN we let go through above (last_index
+			 * can be updated due to out-of-order ACKs).
+			 */
+			if (seq == 0 && !nf_conntrack_tcp_established(ct))
+				break;
+
+			if (before(seq, ct->proto.tcp.seen[!dir].td_maxack) &&
+			    !tn->tcp_ignore_invalid_rst) {
 				/* Invalid RST */
 				spin_unlock_bh(&ct->lock);
 				nf_ct_l4proto_log_invalid(skb, ct, state, "invalid rst");
···
 		nf_ct_kill_acct(ct, ctinfo, skb);
 		return NF_ACCEPT;
 	}
+
+	if (index == TCP_SYN_SET && old_state == TCP_CONNTRACK_SYN_SENT) {
+		/* do not renew timeout on SYN retransmit.
+		 *
+		 * Else port reuse by client or NAT middlebox can keep
+		 * entry alive indefinitely (including nat info).
+		 */
+		return NF_ACCEPT;
+	}
+
 	/* ESTABLISHED without SEEN_REPLY, i.e. mid-connection
 	 * pickup with loose=1. Avoid large ESTABLISHED timeout.
 	 */
···
 	nf_ct_refresh_acct(ct, ctinfo, skb, timeout);
 
 	return NF_ACCEPT;
-}
-
-static bool tcp_can_early_drop(const struct nf_conn *ct)
-{
-	switch (ct->proto.tcp.state) {
-	case TCP_CONNTRACK_FIN_WAIT:
-	case TCP_CONNTRACK_LAST_ACK:
-	case TCP_CONNTRACK_TIME_WAIT:
-	case TCP_CONNTRACK_CLOSE:
-	case TCP_CONNTRACK_CLOSE_WAIT:
-		return true;
-	default:
-		break;
-	}
-
-	return false;
 }
 
 #if IS_ENABLED(CONFIG_NF_CT_NETLINK)
···
 	 * If it's non-zero, we mark only out of window RST segments as INVALID.
 	 */
 	tn->tcp_be_liberal = 0;
+
+	/* If it's non-zero, we turn off RST sequence number check */
+	tn->tcp_ignore_invalid_rst = 0;
 
 	/* Max number of the retransmitted packets without receiving an (acceptable)
 	 * ACK from the destination. If this number is reached, a shorter timer
net/netfilter/nf_conntrack_standalone.c (+10)
···
 #endif
 	NF_SYSCTL_CT_PROTO_TCP_LOOSE,
 	NF_SYSCTL_CT_PROTO_TCP_LIBERAL,
+	NF_SYSCTL_CT_PROTO_TCP_IGNORE_INVALID_RST,
 	NF_SYSCTL_CT_PROTO_TCP_MAX_RETRANS,
 	NF_SYSCTL_CT_PROTO_TIMEOUT_UDP,
 	NF_SYSCTL_CT_PROTO_TIMEOUT_UDP_STREAM,
···
 		.extra1 = SYSCTL_ZERO,
 		.extra2 = SYSCTL_ONE,
 	},
+	[NF_SYSCTL_CT_PROTO_TCP_IGNORE_INVALID_RST] = {
+		.procname = "nf_conntrack_tcp_ignore_invalid_rst",
+		.maxlen = sizeof(u8),
+		.mode = 0644,
+		.proc_handler = proc_dou8vec_minmax,
+		.extra1 = SYSCTL_ZERO,
+		.extra2 = SYSCTL_ONE,
+	},
 	[NF_SYSCTL_CT_PROTO_TCP_MAX_RETRANS] = {
 		.procname = "nf_conntrack_tcp_max_retrans",
 		.maxlen = sizeof(u8),
···
 	XASSIGN(LOOSE, &tn->tcp_loose);
 	XASSIGN(LIBERAL, &tn->tcp_be_liberal);
 	XASSIGN(MAX_RETRANS, &tn->tcp_max_retrans);
+	XASSIGN(IGNORE_INVALID_RST, &tn->tcp_ignore_invalid_rst);
 #undef XASSIGN
 
 #if IS_ENABLED(CONFIG_NF_FLOW_TABLE)
net/netfilter/nf_tables_api.c (+2 -1)
···
 	return 0;
 
 err_destroy_flow_rule:
-	nft_flow_rule_destroy(flow);
+	if (flow)
+		nft_flow_rule_destroy(flow);
 err_release_rule:
 	nf_tables_rule_release(&ctx, rule);
 err_release_expr:
net/netfilter/nft_last.c (+9 -3)
···
 {
 	struct nft_last_priv *priv = nft_expr_priv(expr);
 	u64 last_jiffies;
+	u32 last_set = 0;
 	int err;
 
-	if (tb[NFTA_LAST_MSECS]) {
+	if (tb[NFTA_LAST_SET]) {
+		last_set = ntohl(nla_get_be32(tb[NFTA_LAST_SET]));
+		if (last_set == 1)
+			priv->last_set = 1;
+	}
+
+	if (last_set && tb[NFTA_LAST_MSECS]) {
		err = nf_msecs_to_jiffies64(tb[NFTA_LAST_MSECS], &last_jiffies);
 		if (err < 0)
 			return err;
 
-		priv->last_jiffies = jiffies + (unsigned long)last_jiffies;
-		priv->last_set = 1;
+		priv->last_jiffies = jiffies - (unsigned long)last_jiffies;
 	}
 
 	return 0;
net/netlink/af_netlink.c (+1 -1)
···
 
 	nlmsg_end(skb, rep);
 
-	netlink_unicast(in_skb->sk, skb, NETLINK_CB(in_skb).portid, MSG_DONTWAIT);
+	nlmsg_unicast(in_skb->sk, skb, NETLINK_CB(in_skb).portid);
 }
 EXPORT_SYMBOL(netlink_ack);
 
net/openvswitch/flow_table.c (+3 -3)
···
 {
 	const long *cp1 = (const long *)((const u8 *)key1 + key_start);
 	const long *cp2 = (const long *)((const u8 *)key2 + key_start);
-	long diffs = 0;
 	int i;
 
 	for (i = key_start; i < key_end; i += sizeof(long))
-		diffs |= *cp1++ ^ *cp2++;
+		if (*cp1++ ^ *cp2++)
+			return false;
 
-	return diffs == 0;
+	return true;
 }
 
 static bool flow_cmp_masked_key(const struct sw_flow *flow,
net/sched/act_ct.c (+13 -1)
···
 
 static void tcf_ct_flow_table_cleanup_work(struct work_struct *work)
 {
+	struct flow_block_cb *block_cb, *tmp_cb;
 	struct tcf_ct_flow_table *ct_ft;
+	struct flow_block *block;
 
 	ct_ft = container_of(to_rcu_work(work), struct tcf_ct_flow_table,
 			     rwork);
 	nf_flow_table_free(&ct_ft->nf_ft);
+
+	/* Remove any remaining callbacks before cleanup */
+	block = &ct_ft->nf_ft.flow_block;
+	down_write(&ct_ft->nf_ft.flow_block_lock);
+	list_for_each_entry_safe(block_cb, tmp_cb, &block->cb_list, list) {
+		list_del(&block_cb->list);
+		flow_block_cb_free(block_cb);
+	}
+	up_write(&ct_ft->nf_ft.flow_block_lock);
 	kfree(ct_ft);
 
 	module_put(THIS_MODULE);
···
 		/* This will take care of sending queued events
 		 * even if the connection is already confirmed.
 		 */
-		nf_conntrack_confirm(skb);
+		if (nf_conntrack_confirm(skb) != NF_ACCEPT)
+			goto drop;
 	}
 
 	if (!skip_add)
net/sched/sch_taprio.c (+1 -1)
···
 	/* if there's no entry, it means that the schedule didn't
 	 * start yet, so force all gates to be open, this is in
 	 * accordance to IEEE 802.1Qbv-2015 Section 8.6.9.4.5
-	 * "AdminGateSates"
+	 * "AdminGateStates"
 	 */
 	gate_mask = entry ? entry->gate_mask : TAPRIO_ALL_GATES_OPEN;
 
net/sctp/diag.c (+2 -4)
···
 		goto out;
 	}
 
-	err = netlink_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid,
-			      MSG_DONTWAIT);
-	if (err > 0)
-		err = 0;
+	err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid);
+
 out:
 	return err;
 }
net/sctp/protocol.c (+2 -1)
···
 		retval = SCTP_SCOPE_LINK;
 	} else if (ipv4_is_private_10(addr->v4.sin_addr.s_addr) ||
 		   ipv4_is_private_172(addr->v4.sin_addr.s_addr) ||
-		   ipv4_is_private_192(addr->v4.sin_addr.s_addr)) {
+		   ipv4_is_private_192(addr->v4.sin_addr.s_addr) ||
+		   ipv4_is_test_198(addr->v4.sin_addr.s_addr)) {
 		retval = SCTP_SCOPE_PRIVATE;
 	} else {
 		retval = SCTP_SCOPE_GLOBAL;
net/sctp/sm_make_chunk.c (+1 -1)
···
 					  const struct sctp_transport *transport,
 					  __u32 probe_size)
 {
-	struct sctp_sender_hb_info hbinfo;
+	struct sctp_sender_hb_info hbinfo = {};
 	struct sctp_chunk *retval;
 
 	retval = sctp_make_control(asoc, SCTP_CID_HEARTBEAT, 0,
net/sctp/transport.c (+7 -4)
···
 			t->pathmtu = t->pl.pmtu + sctp_transport_pl_hlen(t);
 			sctp_assoc_sync_pmtu(t->asoc);
 		}
-	} else if (t->pl.state == SCTP_PL_COMPLETE && ++t->pl.raise_count == 30) {
-		/* Raise probe_size again after 30 * interval in Search Complete */
-		t->pl.state = SCTP_PL_SEARCH; /* Search Complete -> Search */
-		t->pl.probe_size += SCTP_PL_MIN_STEP;
+	} else if (t->pl.state == SCTP_PL_COMPLETE) {
+		t->pl.raise_count++;
+		if (t->pl.raise_count == 30) {
+			/* Raise probe_size again after 30 * interval in Search Complete */
+			t->pl.state = SCTP_PL_SEARCH; /* Search Complete -> Search */
+			t->pl.probe_size += SCTP_PL_MIN_STEP;
+		}
 	}
 }
 
net/socket.c (+13 -6)
···
 #include <linux/sockios.h>
 #include <net/busy_poll.h>
 #include <linux/errqueue.h>
+#include <linux/ptp_clock_kernel.h>
 
 #ifdef CONFIG_NET_RX_BUSY_POLL
 unsigned int sysctl_net_busy_read __read_mostly;
···
 		empty = 0;
 	if (shhwtstamps &&
 	    (sk->sk_tsflags & SOF_TIMESTAMPING_RAW_HARDWARE) &&
-	    !skb_is_swtx_tstamp(skb, false_tstamp) &&
-	    ktime_to_timespec64_cond(shhwtstamps->hwtstamp, tss.ts + 2)) {
-		empty = 0;
-		if ((sk->sk_tsflags & SOF_TIMESTAMPING_OPT_PKTINFO) &&
-		    !skb_is_err_queue(skb))
-			put_ts_pktinfo(msg, skb);
+	    !skb_is_swtx_tstamp(skb, false_tstamp)) {
+		if (sk->sk_tsflags & SOF_TIMESTAMPING_BIND_PHC)
+			ptp_convert_timestamp(shhwtstamps, sk->sk_bind_phc);
+
+		if (ktime_to_timespec64_cond(shhwtstamps->hwtstamp,
+					     tss.ts + 2)) {
+			empty = 0;
+
+			if ((sk->sk_tsflags & SOF_TIMESTAMPING_OPT_PKTINFO) &&
+			    !skb_is_err_queue(skb))
+				put_ts_pktinfo(msg, skb);
+		}
 	}
 	if (!empty) {
 		if (sock_flag(sk, SOCK_TSTAMP_NEW))
net/unix/diag.c (+2 -4)
···
 
 		goto again;
 	}
-	err = netlink_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid,
-			      MSG_DONTWAIT);
-	if (err > 0)
-		err = 0;
+	err = nlmsg_unicast(net->diag_nlsk, rep, NETLINK_CB(in_skb).portid);
+
 out:
 	if (sk)
 		sock_put(sk);
samples/bpf/Makefile (+1)
···
 	-Wno-gnu-variable-sized-type-not-at-end \
 	-Wno-address-of-packed-member -Wno-tautological-compare \
 	-Wno-unknown-warning-option $(CLANG_ARCH_ARGS) \
+	-fno-asynchronous-unwind-tables \
 	-I$(srctree)/samples/bpf/ -include asm_goto_workaround.h \
 	-O2 -emit-llvm -Xclang -disable-llvm-passes -c $< -o - | \
 	$(OPT) -O2 -mtriple=bpf-pc-linux | $(LLVM_DIS) | \
samples/bpf/xdpsock_user.c (+28)
···
 static int opt_timeout = 1000;
 static bool opt_need_wakeup = true;
 static u32 opt_num_xsks = 1;
+static u32 prog_id;
 static bool opt_busy_poll;
 static bool opt_reduced_cap;
 
···
 	return NULL;
 }
 
+static void remove_xdp_program(void)
+{
+	u32 curr_prog_id = 0;
+
+	if (bpf_get_link_xdp_id(opt_ifindex, &curr_prog_id, opt_xdp_flags)) {
+		printf("bpf_get_link_xdp_id failed\n");
+		exit(EXIT_FAILURE);
+	}
+
+	if (prog_id == curr_prog_id)
+		bpf_set_link_xdp_fd(opt_ifindex, -1, opt_xdp_flags);
+	else if (!curr_prog_id)
+		printf("couldn't find a prog id on a given interface\n");
+	else
+		printf("program on interface changed, not removing\n");
+}
+
 static void int_exit(int sig)
 {
 	benchmark_done = true;
···
 {
 	fprintf(stderr, "%s:%s:%i: errno: %d/\"%s\"\n", file, func,
 		line, error, strerror(error));
+
+	if (opt_num_xsks > 1)
+		remove_xdp_program();
 	exit(EXIT_FAILURE);
 }
 
···
 		if (write(sock, &cmd, sizeof(int)) < 0)
 			exit_with_error(errno);
 	}
+
+	if (opt_num_xsks > 1)
+		remove_xdp_program();
 }
 
 static void swap_mac_addresses(void *data)
···
 	txr = tx ? &xsk->tx : NULL;
 	ret = xsk_socket__create(&xsk->xsk, opt_if, opt_queue, umem->umem,
 				 rxr, txr, &cfg);
+	if (ret)
+		exit_with_error(-ret);
+
+	ret = bpf_get_link_xdp_id(opt_ifindex, &prog_id, opt_xdp_flags);
 	if (ret)
 		exit_with_error(-ret);
 
tools/bpf/Makefile (+2 -5)
···
 	$(Q)$(RM) -- $(OUTPUT)FEATURE-DUMP.bpf
 	$(Q)$(RM) -r -- $(OUTPUT)feature
 
-install: $(PROGS) bpftool_install runqslower_install
+install: $(PROGS) bpftool_install
 	$(call QUIET_INSTALL, bpf_jit_disasm)
 	$(Q)$(INSTALL) -m 0755 -d $(DESTDIR)$(prefix)/bin
 	$(Q)$(INSTALL) $(OUTPUT)bpf_jit_disasm $(DESTDIR)$(prefix)/bin/bpf_jit_disasm
···
 runqslower:
 	$(call descend,runqslower)
 
-runqslower_install:
-	$(call descend,runqslower,install)
-
 runqslower_clean:
 	$(call descend,runqslower,clean)
 
···
 	$(call descend,resolve_btfids,clean)
 
 .PHONY: all install clean bpftool bpftool_install bpftool_clean \
-	runqslower runqslower_install runqslower_clean \
+	runqslower runqslower_clean \
 	resolve_btfids resolve_btfids_clean
tools/bpf/bpftool/jit_disasm.c (+4 -2)
···
 {
 	va_list ap;
 	char *s;
+	int err;
 
 	va_start(ap, fmt);
-	if (vasprintf(&s, fmt, ap) < 0)
-		return -1;
+	err = vasprintf(&s, fmt, ap);
 	va_end(ap);
+	if (err < 0)
+		return -1;
 
 	if (!oper_count) {
 		int i;
tools/bpf/runqslower/runqslower.bpf.c (+1 -1)
···
 	u32 pid;
 
 	/* ivcsw: treat like an enqueue event and store timestamp */
-	if (prev->state == TASK_RUNNING)
+	if (prev->__state == TASK_RUNNING)
 		trace_enqueue(prev);
 
 	pid = next->pid;
tools/lib/bpf/libbpf.c (+2 -2)
···
 
 	err = unlink(link->pin_path);
 	if (err != 0)
-		return libbpf_err_errno(err);
+		return -errno;
 
 	pr_debug("link fd=%d: unpinned from %s\n", link->fd, link->pin_path);
 	zfree(&link->pin_path);
···
 
 	cnt = epoll_wait(pb->epoll_fd, pb->events, pb->cpu_cnt, timeout_ms);
 	if (cnt < 0)
-		return libbpf_err_errno(cnt);
+		return -errno;
 
 	for (i = 0; i < cnt; i++) {
 		struct perf_cpu_buf *cpu_buf = pb->events[i].data.ptr;
tools/testing/selftests/bpf/prog_tests/tailcalls.c (+26 -10)
···
 	bpf_object__close(obj);
 }
 
+#include "tailcall_bpf2bpf4.skel.h"
+
 /* test_tailcall_bpf2bpf_4 checks that tailcall counter is correctly preserved
  * across tailcalls combined with bpf2bpf calls. for making sure that tailcall
  * counter behaves correctly, bpf program will go through following flow:
···
  * the loop begins. At the end of the test make sure that the global counter is
  * equal to 31, because tailcall counter includes the first two tailcalls
  * whereas global counter is incremented only on loop presented on flow above.
+ *
+ * The noise parameter is used to insert bpf_map_update calls into the logic
+ * to force verifier to patch instructions. This allows us to ensure jump
+ * logic remains correct with instruction movement.
  */
-static void test_tailcall_bpf2bpf_4(void)
+static void test_tailcall_bpf2bpf_4(bool noise)
 {
-	int err, map_fd, prog_fd, main_fd, data_fd, i, val;
+	int err, map_fd, prog_fd, main_fd, data_fd, i;
+	struct tailcall_bpf2bpf4__bss val;
 	struct bpf_map *prog_array, *data_map;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
···
 		goto out;
 	}
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != sizeof(pkt_v4) * 3, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
-
 	data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
 	if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
 		return;
···
 		return;
 
 	i = 0;
+	val.noise = noise;
+	val.count = 0;
+	err = bpf_map_update_elem(data_fd, &i, &val, BPF_ANY);
+	if (CHECK_FAIL(err))
+		goto out;
+
+	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
+				&duration, &retval, NULL);
+	CHECK(err || retval != sizeof(pkt_v4) * 3, "tailcall", "err %d errno %d retval %d\n",
+	      err, errno, retval);
+
+	i = 0;
 	err = bpf_map_lookup_elem(data_fd, &i, &val);
-	CHECK(err || val != 31, "tailcall count", "err %d errno %d count %d\n",
-	      err, errno, val);
+	CHECK(err || val.count != 31, "tailcall count", "err %d errno %d count %d\n",
+	      err, errno, val.count);
 
 out:
 	bpf_object__close(obj);
···
 	if (test__start_subtest("tailcall_bpf2bpf_3"))
 		test_tailcall_bpf2bpf_3();
 	if (test__start_subtest("tailcall_bpf2bpf_4"))
-		test_tailcall_bpf2bpf_4();
+		test_tailcall_bpf2bpf_4(false);
+	if (test__start_subtest("tailcall_bpf2bpf_5"))
+		test_tailcall_bpf2bpf_4(true);
 }
tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c (+18)
···
 #include <bpf/bpf_helpers.h>
 
 struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} nop_table SEC(".maps");
+
+struct {
 	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
 	__uint(max_entries, 3);
 	__uint(key_size, sizeof(__u32));
···
 } jmp_table SEC(".maps");
 
 int count = 0;
+int noise = 0;
+
+__always_inline int subprog_noise(void)
+{
+	__u32 key = 0;
+
+	bpf_map_lookup_elem(&nop_table, &key);
+	return 0;
+}
 
 __noinline
 int subprog_tail_2(struct __sk_buff *skb)
 {
+	if (noise)
+		subprog_noise();
 	bpf_tail_call_static(skb, &jmp_table, 2);
 	return skb->len * 3;
 }
tools/testing/selftests/net/icmp_redirect.sh (+3 -2)
···
 	fi
 	log_test $? 0 "IPv4: ${desc}"
 
-	if [ "$with_redirect" = "yes" ]; then
+	# No PMTU info for test "redirect" and "mtu exception plus redirect"
+	if [ "$with_redirect" = "yes" ] && [ "$desc" != "redirect exception plus mtu" ]; then
 		ip -netns h1 -6 ro get ${H1_VRF_ARG} ${H2_N2_IP6} | \
-		grep -q "${H2_N2_IP6} from :: via ${R2_LLADDR} dev br0.*${mtu}"
+		grep -v "mtu" | grep -q "${H2_N2_IP6} .*via ${R2_LLADDR} dev br0"
 	elif [ -n "${mtu}" ]; then
 		ip -netns h1 -6 ro get ${H1_VRF_ARG} ${H2_N2_IP6} | \
 		grep -q "${mtu}"
tools/testing/selftests/net/mptcp/mptcp_join.sh (+1 -1)
···
 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.3.2 flags subflow
 	ip netns exec $ns2 ./pm_nl_ctl add 10.0.2.2 flags subflow
 	run_tests $ns1 $ns2 10.0.1.1
-	chk_join_nr "subflows limited by server w cookies" 2 2 1
+	chk_join_nr "subflows limited by server w cookies" 2 1 1
 
 	# test signal address with cookies
 	reset_with_cookies
tools/testing/selftests/net/timestamping.c (+35 -20)
···
 {
 	if (error)
 		printf("invalid option: %s\n", error);
-	printf("timestamping interface option*\n\n"
+	printf("timestamping <interface> [bind_phc_index] [option]*\n\n"
 	       "Options:\n"
 	       "  IP_MULTICAST_LOOP - looping outgoing multicasts\n"
 	       "  SO_TIMESTAMP - normal software time stamping, ms resolution\n"
···
 	       "  SOF_TIMESTAMPING_RX_SOFTWARE - software fallback for incoming packets\n"
 	       "  SOF_TIMESTAMPING_SOFTWARE - request reporting of software time stamps\n"
 	       "  SOF_TIMESTAMPING_RAW_HARDWARE - request reporting of raw HW time stamps\n"
+	       "  SOF_TIMESTAMPING_BIND_PHC - request to bind a PHC of PTP vclock\n"
 	       "  SIOCGSTAMP - check last socket time stamp\n"
 	       "  SIOCGSTAMPNS - more accurate socket time stamp\n"
 	       "  PTPV2 - use PTPv2 messages\n");
···
 
 int main(int argc, char **argv)
 {
-	int so_timestamping_flags = 0;
 	int so_timestamp = 0;
 	int so_timestampns = 0;
 	int siocgstamp = 0;
···
 	struct ifreq device;
 	struct ifreq hwtstamp;
 	struct hwtstamp_config hwconfig, hwconfig_requested;
+	struct so_timestamping so_timestamping_get = { 0, -1 };
+	struct so_timestamping so_timestamping = { 0, -1 };
 	struct sockaddr_in addr;
 	struct ip_mreq imr;
 	struct in_addr iaddr;
···
 		exit(1);
 	}
 
-	for (i = 2; i < argc; i++) {
+	if (argc >= 3 && sscanf(argv[2], "%d", &so_timestamping.bind_phc) == 1)
+		val = 3;
+	else
+		val = 2;
+
+	for (i = val; i < argc; i++) {
 		if (!strcasecmp(argv[i], "SO_TIMESTAMP"))
 			so_timestamp = 1;
 		else if (!strcasecmp(argv[i], "SO_TIMESTAMPNS"))
···
 		else if (!strcasecmp(argv[i], "PTPV2"))
 			ptpv2 = 1;
 		else if (!strcasecmp(argv[i], "SOF_TIMESTAMPING_TX_HARDWARE"))
-			so_timestamping_flags |= SOF_TIMESTAMPING_TX_HARDWARE;
+			so_timestamping.flags |= SOF_TIMESTAMPING_TX_HARDWARE;
 		else if (!strcasecmp(argv[i], "SOF_TIMESTAMPING_TX_SOFTWARE"))
-			so_timestamping_flags |= SOF_TIMESTAMPING_TX_SOFTWARE;
+			so_timestamping.flags |= SOF_TIMESTAMPING_TX_SOFTWARE;
 		else if (!strcasecmp(argv[i], "SOF_TIMESTAMPING_RX_HARDWARE"))
-			so_timestamping_flags |= SOF_TIMESTAMPING_RX_HARDWARE;
+			so_timestamping.flags |= SOF_TIMESTAMPING_RX_HARDWARE;
 		else if (!strcasecmp(argv[i], "SOF_TIMESTAMPING_RX_SOFTWARE"))
-			so_timestamping_flags |= SOF_TIMESTAMPING_RX_SOFTWARE;
+			so_timestamping.flags |= SOF_TIMESTAMPING_RX_SOFTWARE;
 		else if (!strcasecmp(argv[i], "SOF_TIMESTAMPING_SOFTWARE"))
-			so_timestamping_flags |= SOF_TIMESTAMPING_SOFTWARE;
+			so_timestamping.flags |= SOF_TIMESTAMPING_SOFTWARE;
 		else if (!strcasecmp(argv[i], "SOF_TIMESTAMPING_RAW_HARDWARE"))
-			so_timestamping_flags |= SOF_TIMESTAMPING_RAW_HARDWARE;
+			so_timestamping.flags |= SOF_TIMESTAMPING_RAW_HARDWARE;
+		else if (!strcasecmp(argv[i], "SOF_TIMESTAMPING_BIND_PHC"))
+			so_timestamping.flags |= SOF_TIMESTAMPING_BIND_PHC;
 		else
 			usage(argv[i]);
 	}
···
 	hwtstamp.ifr_data = (void *)&hwconfig;
 	memset(&hwconfig, 0, sizeof(hwconfig));
 	hwconfig.tx_type =
-		(so_timestamping_flags & SOF_TIMESTAMPING_TX_HARDWARE) ?
+		(so_timestamping.flags & SOF_TIMESTAMPING_TX_HARDWARE) ?
 		HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF;
 	hwconfig.rx_filter =
-		(so_timestamping_flags & SOF_TIMESTAMPING_RX_HARDWARE) ?
+		(so_timestamping.flags & SOF_TIMESTAMPING_RX_HARDWARE) ?
 		ptpv2 ? HWTSTAMP_FILTER_PTP_V2_L4_SYNC :
 		HWTSTAMP_FILTER_PTP_V1_L4_SYNC : HWTSTAMP_FILTER_NONE;
 	hwconfig_requested = hwconfig;
···
 		 (struct sockaddr *)&addr,
 		 sizeof(struct sockaddr_in)) < 0)
 		bail("bind");
+
+	if (setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE, interface, if_len))
+		bail("bind device");
 
 	/* set multicast group for outgoing packets */
 	inet_aton("224.0.1.130", &iaddr); /* alternate PTP domain 1 */
···
 		       &enabled, sizeof(enabled)) < 0)
 		bail("setsockopt SO_TIMESTAMPNS");
 
-	if (so_timestamping_flags &&
-	    setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING,
-		       &so_timestamping_flags,
-		       sizeof(so_timestamping_flags)) < 0)
+	if (so_timestamping.flags &&
+	    setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING, &so_timestamping,
+		       sizeof(so_timestamping)) < 0)
 		bail("setsockopt SO_TIMESTAMPING");
 
 	/* request IP_PKTINFO for debugging purposes */
···
 	else
 		printf("SO_TIMESTAMPNS %d\n", val);
 
-	if (getsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING, &val, &len) < 0) {
+	len = sizeof(so_timestamping_get);
+	if (getsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING, &so_timestamping_get,
+		       &len) < 0) {
 		printf("%s: %s\n", "getsockopt SO_TIMESTAMPING",
 		       strerror(errno));
 	} else {
-		printf("SO_TIMESTAMPING %d\n", val);
-		if (val != so_timestamping_flags)
-			printf("   not the expected value %d\n",
-			       so_timestamping_flags);
+		printf("SO_TIMESTAMPING flags %d, bind phc %d\n",
+		       so_timestamping_get.flags, so_timestamping_get.bind_phc);
+		if (so_timestamping_get.flags != so_timestamping.flags ||
+		    so_timestamping_get.bind_phc != so_timestamping.bind_phc)
+			printf("   not expected, flags %d, bind phc %d\n",
+			       so_timestamping.flags, so_timestamping.bind_phc);
 	}
 
 	/* send packets forever every five seconds */
tools/testing/selftests/netfilter/Makefile (+1 -1)
···
 	      conntrack_icmp_related.sh nft_flowtable.sh ipvs.sh \
 	      nft_concat_range.sh nft_conntrack_helper.sh \
 	      nft_queue.sh nft_meta.sh nf_nat_edemux.sh \
-	      ipip-conntrack-mtu.sh
+	      ipip-conntrack-mtu.sh conntrack_tcp_unreplied.sh
 
 LDLIBS = -lmnl
 TEST_GEN_FILES = nf-queue
tools/testing/selftests/netfilter/conntrack_tcp_unreplied.sh (new file, +167)
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Check that UNREPLIED tcp conntrack will eventually timeout.
+#
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+ret=0
+
+waittime=20
+sfx=$(mktemp -u "XXXXXXXX")
+ns1="ns1-$sfx"
+ns2="ns2-$sfx"
+
+nft --version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+	echo "SKIP: Could not run test without nft tool"
+	exit $ksft_skip
+fi
+
+ip -Version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+	echo "SKIP: Could not run test without ip tool"
+	exit $ksft_skip
+fi
+
+cleanup() {
+	ip netns pids $ns1 | xargs kill 2>/dev/null
+	ip netns pids $ns2 | xargs kill 2>/dev/null
+
+	ip netns del $ns1
+	ip netns del $ns2
+}
+
+ipv4() {
+	echo -n 192.168.$1.2
+}
+
+check_counter()
+{
+	ns=$1
+	name=$2
+	expect=$3
+	local lret=0
+
+	cnt=$(ip netns exec $ns2 nft list counter inet filter "$name" | grep -q "$expect")
+	if [ $? -ne 0 ]; then
+		echo "ERROR: counter $name in $ns2 has unexpected value (expected $expect)" 1>&2
+		ip netns exec $ns2 nft list counter inet filter "$name" 1>&2
+		lret=1
+	fi
+
+	return $lret
+}
+
+# Create test namespaces
+ip netns add $ns1 || exit 1
+
+trap cleanup EXIT
+
+ip netns add $ns2 || exit 1
+
+# Connect the namespace to the host using a veth pair
+ip -net $ns1 link add name veth1 type veth peer name veth2
+ip -net $ns1 link set netns $ns2 dev veth2
+
+ip -net $ns1 link set up dev lo
+ip -net $ns2 link set up dev lo
+ip -net $ns1 link set up dev veth1
+ip -net $ns2 link set up dev veth2
+
+ip -net $ns2 addr add 10.11.11.2/24 dev veth2
+ip -net $ns2 route add default via 10.11.11.1
+
+ip netns exec $ns2 sysctl -q net.ipv4.conf.veth2.forwarding=1
+
+# add a rule inside NS so we enable conntrack
+ip netns exec $ns1 iptables -A INPUT -m state --state established,related -j ACCEPT
+
+ip -net $ns1 addr add 10.11.11.1/24 dev veth1
+ip -net $ns1 route add 10.99.99.99 via 10.11.11.2
+
+# Check connectivity works
+ip netns exec $ns1 ping -q -c 2 10.11.11.2 >/dev/null || exit 1
+
+ip netns exec $ns2 nc -l -p 8080 < /dev/null &
+
+# however, conntrack entries are there
+
+ip netns exec $ns2 nft -f - <<EOF
+table inet filter {
+	counter connreq { }
+	counter redir { }
+	chain input {
+		type filter hook input priority 0; policy accept;
+		ct state new tcp flags syn ip daddr 10.99.99.99 tcp dport 80 counter name "connreq" accept
+		ct state new ct status dnat tcp dport 8080 counter name "redir" accept
+	}
+}
+EOF
+if [ $? -ne 0 ]; then
+	echo "ERROR: Could not load nft rules"
+	exit 1
+fi
+
+ip netns exec $ns2 sysctl -q net.netfilter.nf_conntrack_tcp_timeout_syn_sent=10
+
+echo "INFO: connect $ns1 -> $ns2 to the virtual ip"
+ip netns exec $ns1 bash -c 'while true ; do
+	nc -p 60000 10.99.99.99 80
+	sleep 1
+	done' &
+
+sleep 1
+
+ip netns exec $ns2 nft -f - <<EOF
+table inet nat {
+	chain prerouting {
+		type nat hook prerouting priority 0; policy accept;
+		ip daddr 10.99.99.99 tcp dport 80 redirect to :8080
+	}
+}
+EOF
+if [ $? -ne 0 ]; then
+	echo "ERROR: Could not load nat redirect"
+	exit 1
+fi
+
+count=$(ip netns exec $ns2 conntrack -L -p tcp --dport 80 2>/dev/null | wc -l)
+if [ $count -eq 0 ]; then
+	echo "ERROR: $ns2 did not pick up tcp connection from peer"
+	exit 1
+fi
+
+echo "INFO: NAT redirect added in ns $ns2, waiting for $waittime seconds for nat to take effect"
+for i in $(seq 1 $waittime); do
+	echo -n "."
+
+	sleep 1
+
+	count=$(ip netns exec $ns2 conntrack -L -p tcp --reply-port-src 8080 2>/dev/null | wc -l)
+	if [ $count -gt 0 ]; then
+		echo
+		echo "PASS: redirection took effect after $i seconds"
+		break
+	fi
+
+	m=$((i%20))
+	if [ $m -eq 0 ]; then
+		echo " waited for $i seconds"
+	fi
+done
+
+expect="packets 1 bytes 60"
+check_counter "$ns2" "redir" "$expect"
+if [ $? -ne 0 ]; then
+	ret=1
+fi
+
+if [ $ret -eq 0 ];then
+	echo "PASS: redirection counter has expected values"
+else
+	echo "ERROR: no tcp connection was redirected"
+fi
+
+exit $ret