···
   phy-mode:
     $ref: "#/properties/phy-connection-type"
 
+  pcs-handle:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description:
+      Specifies a reference to a node representing a PCS PHY device on a MDIO
+      bus to link with an external PHY (phy-handle) if exists.
+
   phy-handle:
     $ref: /schemas/types.yaml#/definitions/phandle
     description:
-17
Documentation/devicetree/bindings/net/micrel.txt
···
 
 In fiber mode, auto-negotiation is disabled and the PHY can only work in
 100base-fx (full and half duplex) modes.
-
-- lan8814,ignore-ts: If present the PHY will not support timestamping.
-
-	This option acts as check whether Timestamping is supported by
-	hardware or not. LAN8814 phy support hardware tmestamping.
-
-- lan8814,latency_rx_10: Configures Latency value of phy in ingress at 10 Mbps.
-
-- lan8814,latency_tx_10: Configures Latency value of phy in egress at 10 Mbps.
-
-- lan8814,latency_rx_100: Configures Latency value of phy in ingress at 100 Mbps.
-
-- lan8814,latency_tx_100: Configures Latency value of phy in egress at 100 Mbps.
-
-- lan8814,latency_rx_1000: Configures Latency value of phy in ingress at 1000 Mbps.
-
-- lan8814,latency_tx_1000: Configures Latency value of phy in egress at 1000 Mbps.
···
 		  specified, the TX/RX DMA interrupts should be on that node
 		  instead, and only the Ethernet core interrupt is optionally
 		  specified here.
-- phy-handle	: Should point to the external phy device.
+- phy-handle	: Should point to the external phy device if exists. Pointing
+		  this to the PCS/PMA PHY is deprecated and should be avoided.
 		  See ethernet.txt file in the same directory.
- xlnx,rxmem	: Set to allocated memory buffer for Rx/Tx in the hardware
···
 - mdio		: Child node for MDIO bus. Must be defined if PHY access is
 		  required through the core's MDIO interface (i.e. always,
 		  unless the PHY is accessed through a different bus).
+
+- pcs-handle: 	  Phandle to the internal PCS/PMA PHY in SGMII or 1000Base-X
+		  modes, where "pcs-handle" should be used to point
+		  to the PCS/PMA PHY, and "phy-handle" should point to an
+		  external PHY if exists.
 
Example:
	axi_ethernet_eth: ethernet@40c00000 {
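To make the split between the two handles concrete, here is a hypothetical board DTS fragment sketching how the new binding is meant to be used; the node labels (`phy0`, `pcs_phy`) and MDIO addresses are illustrative, not taken from the patch:

```dts
axi_ethernet_eth: ethernet@40c00000 {
	compatible = "xlnx,axi-ethernet-1.00.a";
	phy-mode = "sgmii";
	/* pcs-handle points at the internal PCS/PMA PHY ... */
	pcs-handle = <&pcs_phy>;
	/* ... while phy-handle points only at the external PHY. */
	phy-handle = <&phy0>;
	mdio {
		#address-cells = <1>;
		#size-cells = <0>;
		pcs_phy: ethernet-phy@1 {
			reg = <1>;
		};
		phy0: ethernet-phy@2 {
			reg = <2>;
		};
	};
};
```

Previously both roles were conflated in `phy-handle`, which the patch above now deprecates for the PCS/PMA case.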
+32-32
Documentation/networking/dsa/dsa.rst
···
 Design principles
 =================
 
-The Distributed Switch Architecture is a subsystem which was primarily designed
-to support Marvell Ethernet switches (MV88E6xxx, a.k.a Linkstreet product line)
-using Linux, but has since evolved to support other vendors as well.
+The Distributed Switch Architecture subsystem was primarily designed to
+support Marvell Ethernet switches (MV88E6xxx, a.k.a. Link Street product
+line) using Linux, but has since evolved to support other vendors as well.
 
 The original philosophy behind this design was to be able to use unmodified
 Linux tools such as bridge, iproute2, ifconfig to work transparently whether
 they configured/queried a switch port network device or a regular network
 device.
 
-An Ethernet switch is typically comprised of multiple front-panel ports, and one
-or more CPU or management port. The DSA subsystem currently relies on the
+An Ethernet switch typically comprises multiple front-panel ports and one
+or more CPU or management ports. The DSA subsystem currently relies on the
 presence of a management port connected to an Ethernet controller capable of
 receiving Ethernet frames from the switch. This is a very common setup for all
 kinds of Ethernet switches found in Small Home and Office products: routers,
-gateways, or even top-of-the rack switches. This host Ethernet controller will
+gateways, or even top-of-rack switches. This host Ethernet controller will
 be later referred to as "master" and "cpu" in DSA terminology and code.
 
 The D in DSA stands for Distributed, because the subsystem has been designed
···
 ports are referred to as "dsa" ports in DSA terminology and code. A collection
 of multiple switches connected to each other is called a "switch tree".
 
-For each front-panel port, DSA will create specialized network devices which are
+For each front-panel port, DSA creates specialized network devices which are
 used as controlling and data-flowing endpoints for use by the Linux networking
 stack. These specialized network interfaces are referred to as "slave" network
 interfaces in DSA terminology and code.
 
 The ideal case for using DSA is when an Ethernet switch supports a "switch tag"
 which is a hardware feature making the switch insert a specific tag for each
-Ethernet frames it received to/from specific ports to help the management
+Ethernet frame it receives to/from specific ports to help the management
 interface figure out:
 
 - what port is this frame coming from
···
 ports must decapsulate the packet.
 
 Note that in certain cases, it might be the case that the tagging format used
-by a leaf switch (not connected directly to the CPU) to not be the same as what
+by a leaf switch (not connected directly to the CPU) is not the same as what
 the network stack sees. This can be seen with Marvell switch trees, where the
 CPU port can be configured to use either the DSA or the Ethertype DSA (EDSA)
 format, but the DSA links are configured to use the shorter (without Ethertype)
···
   to/from specific switch ports
 - query the switch for ethtool operations: statistics, link state,
   Wake-on-LAN, register dumps...
-- external/internal PHY management: link, auto-negotiation etc.
+- manage external/internal PHY: link, auto-negotiation, etc.
 
 These slave network devices have custom net_device_ops and ethtool_ops function
 pointers which allow DSA to introduce a level of layering between the networking
-stack/ethtool, and the switch driver implementation.
+stack/ethtool and the switch driver implementation.
 
 Upon frame transmission from these slave network devices, DSA will look up which
-switch tagging protocol is currently registered with these network devices, and
+switch tagging protocol is currently registered with these network devices and
 invoke a specific transmit routine which takes care of adding the relevant
 switch tag in the Ethernet frames.
 
 These frames are then queued for transmission using the master network device
-``ndo_start_xmit()`` function, since they contain the appropriate switch tag, the
+``ndo_start_xmit()`` function. Since they contain the appropriate switch tag, the
 Ethernet switch will be able to process these incoming frames from the
-management interface and delivers these frames to the physical switch port.
+management interface and deliver them to the physical switch port.
 
 Graphical representation
 ------------------------
···
 switches, these functions would utilize direct or indirect PHY addressing mode
 to return standard MII registers from the switch builtin PHYs, allowing the PHY
 library and/or to return link status, link partner pages, auto-negotiation
-results etc..
+results, etc.
 
-For Ethernet switches which have both external and internal MDIO busses, the
+For Ethernet switches which have both external and internal MDIO buses, the
 slave MII bus can be utilized to mux/demux MDIO reads and writes towards either
 internal or external MDIO devices this switch might be connected to: internal
 PHYs, external PHYs, or even external switches.
···
   table indication (when cascading switches)
 
 - ``dsa_platform_data``: platform device configuration data which can reference
-  a collection of dsa_chip_data structure if multiples switches are cascaded,
+  a collection of dsa_chip_data structures if multiple switches are cascaded,
   the master network device this switch tree is attached to needs to be
   referenced
 
···
   "phy-handle" property, if found, this PHY device is created and registered
   using ``of_phy_connect()``
 
-- if Device Tree is used, and the PHY device is "fixed", that is, conforms to
+- if Device Tree is used and the PHY device is "fixed", that is, conforms to
   the definition of a non-MDIO managed PHY as defined in
   ``Documentation/devicetree/bindings/net/fixed-link.txt``, the PHY is registered
   and connected transparently using the special fixed MDIO bus driver
···
 DSA features a standardized binding which is documented in
 ``Documentation/devicetree/bindings/net/dsa/dsa.txt``. PHY/MDIO library helper
 functions such as ``of_get_phy_mode()``, ``of_phy_connect()`` are also used to query
-per-port PHY specific details: interface connection, MDIO bus location etc..
+per-port PHY specific details: interface connection, MDIO bus location, etc.
 
 Driver development
 ==================
···
 
 - ``setup``: setup function for the switch, this function is responsible for setting
   up the ``dsa_switch_ops`` private structure with all it needs: register maps,
-  interrupts, mutexes, locks etc.. This function is also expected to properly
+  interrupts, mutexes, locks, etc. This function is also expected to properly
   configure the switch to separate all network interfaces from each other, that
   is, they should be isolated by the switch hardware itself, typically by creating
   a Port-based VLAN ID for each port and allowing only the CPU port and the
···
 - ``get_phy_flags``: Some switches are interfaced to various kinds of Ethernet PHYs,
   if the PHY library PHY driver needs to know about information it cannot obtain
   on its own (e.g.: coming from switch memory mapped registers), this function
-  should return a 32-bits bitmask of "flags", that is private between the switch
+  should return a 32-bit bitmask of "flags" that is private between the switch
   driver and the Ethernet PHY driver in ``drivers/net/phy/\*``.
 
 - ``phy_read``: Function invoked by the DSA slave MDIO bus when attempting to read
   the switch port MDIO registers. If unavailable, return 0xffff for each read.
   For builtin switch Ethernet PHYs, this function should allow reading the link
-  status, auto-negotiation results, link partner pages etc..
+  status, auto-negotiation results, link partner pages, etc.
 
 - ``phy_write``: Function invoked by the DSA slave MDIO bus when attempting to write
   to the switch port MDIO registers. If unavailable return a negative error
···
 ------------------
 
 - ``get_strings``: ethtool function used to query the driver's strings, will
-  typically return statistics strings, private flags strings etc.
+  typically return statistics strings, private flags strings, etc.
 
 - ``get_ethtool_stats``: ethtool function used to query per-port statistics and
   return their values. DSA overlays slave network devices general statistics:
···
 - ``get_sset_count``: ethtool function used to query the number of statistics items
 
 - ``get_wol``: ethtool function used to obtain Wake-on-LAN settings per-port, this
-  function may, for certain implementations also query the master network device
+  function may for certain implementations also query the master network device
   Wake-on-LAN settings if this interface needs to participate in Wake-on-LAN
 
 - ``set_wol``: ethtool function used to configure Wake-on-LAN settings per-port,
···
   in a fully active state
 
 - ``port_enable``: function invoked by the DSA slave network device ndo_open
-  function when a port is administratively brought up, this function should be
-  fully enabling a given switch port. DSA takes care of marking the port with
+  function when a port is administratively brought up, this function should
+  fully enable a given switch port. DSA takes care of marking the port with
   ``BR_STATE_BLOCKING`` if the port is a bridge member, or ``BR_STATE_FORWARDING`` if it
   was not, and propagating these changes down to the hardware
 
 - ``port_disable``: function invoked by the DSA slave network device ndo_close
-  function when a port is administratively brought down, this function should be
-  fully disabling a given switch port. DSA takes care of marking the port with
+  function when a port is administratively brought down, this function should
+  fully disable a given switch port. DSA takes care of marking the port with
   ``BR_STATE_DISABLED`` and propagating changes to the hardware if this port is
   disabled while being a bridge member
 
···
 ------------
 
 - ``port_bridge_join``: bridge layer function invoked when a given switch port is
-  added to a bridge, this function should be doing the necessary at the switch
-  level to permit the joining port from being added to the relevant logical
+  added to a bridge, this function should do what's necessary at the switch
+  level to permit the joining port to be added to the relevant logical
   domain for it to ingress/egress traffic with other members of the bridge.
 
 - ``port_bridge_leave``: bridge layer function invoked when a given switch port is
-  removed from a bridge, this function should be doing the necessary at the
+  removed from a bridge, this function should do what's necessary at the
   switch level to deny the leaving port from ingress/egress traffic from the
   remaining bridge members. When the port leaves the bridge, it should be aged
   out at the switch hardware for the switch to (re) learn MAC addresses behind
···
   point for drivers that need to configure the hardware for enabling this
   feature.
 
-- ``port_bridge_tx_fwd_unoffload``: bridge layer function invoken when a driver
+- ``port_bridge_tx_fwd_unoffload``: bridge layer function invoked when a driver
   leaves a bridge port which had the TX forwarding offload feature enabled.
 
 Bridge VLAN filtering
···
 
 	  If unsure, say N.
 
-config SATA_LPM_POLICY
+config SATA_MOBILE_LPM_POLICY
 	int "Default SATA Link Power Management policy for low power chipsets"
 	range 0 4
 	default 0
 	depends on SATA_AHCI
 	help
 	  Select the Default SATA Link Power Management (LPM) policy to use
-	  for chipsets / "South Bridges" designated as supporting low power.
+	  for chipsets / "South Bridges" supporting low-power modes. Such
+	  chipsets are typically found on most laptops but desktops and
+	  servers now also widely use chipsets supporting low power modes.
 
 	  The value set has the following meanings:
 	  0 => Keep firmware settings
+1-1
drivers/ata/ahci.c
···
 static void ahci_update_initial_lpm_policy(struct ata_port *ap,
 					   struct ahci_host_priv *hpriv)
 {
-	int policy = CONFIG_SATA_LPM_POLICY;
+	int policy = CONFIG_SATA_MOBILE_LPM_POLICY;
 
 
 	/* Ignore processing for chipsets that don't use policy */
+1-1
drivers/ata/ahci.h
···
 	AHCI_HFLAG_NO_WRITE_TO_RO	= (1 << 24), /* don't write to read
 							only registers */
 	AHCI_HFLAG_USE_LPM_POLICY	= (1 << 25), /* chipset that should use
-							SATA_LPM_POLICY
+							SATA_MOBILE_LPM_POLICY
 							as default lpm_policy */
 	AHCI_HFLAG_SUSPEND_PHYS		= (1 << 26), /* handle PHYs during
 							suspend/resume */
···
 #endif
 };
 
-#define SATA_DWC_QCMD_MAX	32
+/*
+ * Allow one extra special slot for commands and DMA management
+ * to account for libata internal commands.
+ */
+#define SATA_DWC_QCMD_MAX	(ATA_MAX_QUEUE + 1)
 
 struct sata_dwc_device_port {
 	struct sata_dwc_device *hsdev;
+39-35
drivers/char/random.c
···
  * This shouldn't be set by functions like add_device_randomness(),
  * where we can't trust the buffer passed to it is guaranteed to be
  * unpredictable (so it might not have any entropy at all).
- *
- * Returns the number of bytes processed from input, which is bounded
- * by CRNG_INIT_CNT_THRESH if account is true.
  */
-static size_t crng_pre_init_inject(const void *input, size_t len, bool account)
+static void crng_pre_init_inject(const void *input, size_t len, bool account)
 {
 	static int crng_init_cnt = 0;
 	struct blake2s_state hash;
···
 	spin_lock_irqsave(&base_crng.lock, flags);
 	if (crng_init != 0) {
 		spin_unlock_irqrestore(&base_crng.lock, flags);
-		return 0;
+		return;
 	}
-
-	if (account)
-		len = min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
 
 	blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
 	blake2s_update(&hash, input, len);
 	blake2s_final(&hash, base_crng.key);
 
 	if (account) {
-		crng_init_cnt += len;
+		crng_init_cnt += min_t(size_t, len, CRNG_INIT_CNT_THRESH - crng_init_cnt);
 		if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
 			++base_crng.generation;
 			crng_init = 1;
···
 
 	if (crng_init == 1)
 		pr_notice("fast init done\n");
-
-	return len;
 }
 
 static void _get_random_bytes(void *buf, size_t nbytes)
···
 
 static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes)
 {
-	bool large_request = nbytes > 256;
 	ssize_t ret = 0;
 	size_t len;
 	u32 chacha_state[CHACHA_STATE_WORDS];
···
 	if (!nbytes)
 		return 0;
 
-	len = min_t(size_t, 32, nbytes);
-	crng_make_state(chacha_state, output, len);
+	/*
+	 * Immediately overwrite the ChaCha key at index 4 with random
+	 * bytes, in case userspace causes copy_to_user() below to sleep
+	 * forever, so that we still retain forward secrecy in that case.
+	 */
+	crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);
+	/*
+	 * However, if we're doing a read of len <= 32, we don't need to
+	 * use chacha_state after, so we can simply return those bytes to
+	 * the user directly.
+	 */
+	if (nbytes <= CHACHA_KEY_SIZE) {
+		ret = copy_to_user(buf, &chacha_state[4], nbytes) ? -EFAULT : nbytes;
+		goto out_zero_chacha;
+	}
 
-	if (copy_to_user(buf, output, len))
-		return -EFAULT;
-	nbytes -= len;
-	buf += len;
-	ret += len;
-
-	while (nbytes) {
-		if (large_request && need_resched()) {
-			if (signal_pending(current))
-				break;
-			schedule();
-		}
-
+	do {
 		chacha20_block(chacha_state, output);
 		if (unlikely(chacha_state[12] == 0))
 			++chacha_state[13];
···
 		nbytes -= len;
 		buf += len;
 		ret += len;
-	}
 
-	memzero_explicit(chacha_state, sizeof(chacha_state));
+		BUILD_BUG_ON(PAGE_SIZE % CHACHA_BLOCK_SIZE != 0);
+		if (!(ret % PAGE_SIZE) && nbytes) {
+			if (signal_pending(current))
+				break;
+			cond_resched();
+		}
+	} while (nbytes);
+
 	memzero_explicit(output, sizeof(output));
+out_zero_chacha:
+	memzero_explicit(chacha_state, sizeof(chacha_state));
 	return ret;
 }
 
···
 				  size_t entropy)
 {
 	if (unlikely(crng_init == 0 && entropy < POOL_MIN_BITS)) {
-		size_t ret = crng_pre_init_inject(buffer, count, true);
-		mix_pool_bytes(buffer, ret);
-		count -= ret;
-		buffer += ret;
-		if (!count || crng_init == 0)
-			return;
+		crng_pre_init_inject(buffer, count, true);
+		mix_pool_bytes(buffer, count);
+		return;
 	}
 
 	/*
···
 			    loff_t *ppos)
 {
 	static int maxwarn = 10;
+
+	/*
+	 * Opportunistically attempt to initialize the RNG on platforms that
+	 * have fast cycle counters, but don't (for now) require it to succeed.
+	 */
+	if (!crng_ready())
+		try_to_generate_entropy();
 
 	if (!crng_ready() && maxwarn > 0) {
 		maxwarn--;
+3-3
drivers/hv/channel_mgmt.c
···
 	 * execute:
 	 *
 	 * (a) In the "normal (i.e., not resuming from hibernation)" path,
-	 *     the full barrier in smp_store_mb() guarantees that the store
+	 *     the full barrier in virt_store_mb() guarantees that the store
 	 *     is propagated to all CPUs before the add_channel_work work
 	 *     is queued. In turn, add_channel_work is queued before the
 	 *     channel's ring buffer is allocated/initialized and the
···
 	 *     recv_int_page before retrieving the channel pointer from the
 	 *     array of channels.
 	 *
-	 * (b) In the "resuming from hibernation" path, the smp_store_mb()
+	 * (b) In the "resuming from hibernation" path, the virt_store_mb()
 	 *     guarantees that the store is propagated to all CPUs before
 	 *     the VMBus connection is marked as ready for the resume event
 	 *     (cf. check_ready_for_resume_event()). The interrupt handler
 	 *     of the VMBus driver and vmbus_chan_sched() can not run before
 	 *     vmbus_bus_resume() has completed execution (cf. resume_noirq).
 	 */
-	smp_store_mb(
+	virt_store_mb(
 		vmbus_connection.channels[channel->offermsg.child_relid],
 		channel);
 }
+44-5
drivers/hv/hv_balloon.c
···
 #include <linux/slab.h>
 #include <linux/kthread.h>
 #include <linux/completion.h>
+#include <linux/count_zeros.h>
 #include <linux/memory_hotplug.h>
 #include <linux/memory.h>
 #include <linux/notifier.h>
···
 	struct dm_status status;
 	unsigned long now = jiffies;
 	unsigned long last_post = last_post_time;
+	unsigned long num_pages_avail, num_pages_committed;
 
 	if (pressure_report_delay > 0) {
 		--pressure_report_delay;
···
 	 * num_pages_onlined) as committed to the host, otherwise it can try
 	 * asking us to balloon them out.
 	 */
-	status.num_avail = si_mem_available();
-	status.num_committed = vm_memory_committed() +
+	num_pages_avail = si_mem_available();
+	num_pages_committed = vm_memory_committed() +
 		dm->num_pages_ballooned +
 		(dm->num_pages_added > dm->num_pages_onlined ?
 		 dm->num_pages_added - dm->num_pages_onlined : 0) +
 		compute_balloon_floor();
 
-	trace_balloon_status(status.num_avail, status.num_committed,
+	trace_balloon_status(num_pages_avail, num_pages_committed,
 			     vm_memory_committed(), dm->num_pages_ballooned,
 			     dm->num_pages_added, dm->num_pages_onlined);
+
+	/* Convert numbers of pages into numbers of HV_HYP_PAGEs. */
+	status.num_avail = num_pages_avail * NR_HV_HYP_PAGES_IN_PAGE;
+	status.num_committed = num_pages_committed * NR_HV_HYP_PAGES_IN_PAGE;
+
 	/*
 	 * If our transaction ID is no longer current, just don't
 	 * send the status. This can happen if we were interrupted
···
 	}
 }
 
+static int ballooning_enabled(void)
+{
+	/*
+	 * Disable ballooning if the page size is not 4k (HV_HYP_PAGE_SIZE),
+	 * since currently it's unclear to us whether an unballoon request can
+	 * make sure all page ranges are guest page size aligned.
+	 */
+	if (PAGE_SIZE != HV_HYP_PAGE_SIZE) {
+		pr_info("Ballooning disabled because page size is not 4096 bytes\n");
+		return 0;
+	}
+
+	return 1;
+}
+
+static int hot_add_enabled(void)
+{
+	/*
+	 * Disable hot add on ARM64, because we currently rely on
+	 * memory_add_physaddr_to_nid() to get a node id of a hot add range,
+	 * however ARM64's memory_add_physaddr_to_nid() always return 0 and
+	 * DM_MEM_HOT_ADD_REQUEST doesn't have the NUMA node information for
+	 * add_memory().
+	 */
+	if (IS_ENABLED(CONFIG_ARM64)) {
+		pr_info("Memory hot add disabled on ARM64\n");
+		return 0;
+	}
+
+	return 1;
+}
+
 static int balloon_connect_vsp(struct hv_device *dev)
 {
 	struct dm_version_request version_req;
···
 	 * currently still requires the bits to be set, so we have to add code
 	 * to fail the host's hot-add and balloon up/down requests, if any.
 	 */
-	cap_msg.caps.cap_bits.balloon = 1;
-	cap_msg.caps.cap_bits.hot_add = 1;
+	cap_msg.caps.cap_bits.balloon = ballooning_enabled();
+	cap_msg.caps.cap_bits.hot_add = hot_add_enabled();
 
 	/*
 	 * Specify our alignment requirements as it relates
+11
drivers/hv/hv_common.c
···
 #include <linux/panic_notifier.h>
 #include <linux/ptrace.h>
 #include <linux/slab.h>
+#include <linux/dma-map-ops.h>
 #include <asm/hyperv-tlfs.h>
 #include <asm/mshyperv.h>
 
···
 	return hv_extended_cap & cap_query;
 }
 EXPORT_SYMBOL_GPL(hv_query_ext_cap);
+
+void hv_setup_dma_ops(struct device *dev, bool coherent)
+{
+	/*
+	 * Hyper-V does not offer a vIOMMU in the guest
+	 * VM, so pass 0/NULL for the IOMMU settings
+	 */
+	arch_setup_dma_ops(dev, 0, 0, NULL, coherent);
+}
+EXPORT_SYMBOL_GPL(hv_setup_dma_ops);
 
 bool hv_is_hibernation_supported(void)
 {
+10-1
drivers/hv/ring_buffer.c
···
 static u32 hv_pkt_iter_avail(const struct hv_ring_buffer_info *rbi)
 {
 	u32 priv_read_loc = rbi->priv_read_index;
-	u32 write_loc = READ_ONCE(rbi->ring_buffer->write_index);
+	u32 write_loc;
+
+	/*
+	 * The Hyper-V host writes the packet data, then uses
+	 * store_release() to update the write_index. Use load_acquire()
+	 * here to prevent loads of the packet data from being re-ordered
+	 * before the read of the write_index and potentially getting
+	 * stale data.
+	 */
+	write_loc = virt_load_acquire(&rbi->ring_buffer->write_index);
 
 	if (write_loc >= priv_read_loc)
 		return write_loc - priv_read_loc;
+54-11
drivers/hv/vmbus_drv.c
···
 
 	/*
 	 * Hyper-V should be notified only once about a panic. If we will be
-	 * doing hyperv_report_panic_msg() later with kmsg data, don't do
-	 * the notification here.
+	 * doing hv_kmsg_dump() with kmsg data later, don't do the notification
+	 * here.
 	 */
 	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE
 	    && hyperv_report_reg()) {
···
 
 	/*
 	 * Hyper-V should be notified only once about a panic. If we will be
-	 * doing hyperv_report_panic_msg() later with kmsg data, don't do
-	 * the notification here.
+	 * doing hv_kmsg_dump() with kmsg data later, don't do the notification
+	 * here.
 	 */
 	if (hyperv_report_reg())
 		hyperv_report_panic(regs, val, true);
···
 }
 
 /*
+ * vmbus_dma_configure -- Configure DMA coherence for VMbus device
+ */
+static int vmbus_dma_configure(struct device *child_device)
+{
+	/*
+	 * On ARM64, propagate the DMA coherence setting from the top level
+	 * VMbus ACPI device to the child VMbus device being added here.
+	 * On x86/x64 coherence is assumed and these calls have no effect.
+	 */
+	hv_setup_dma_ops(child_device,
+		device_get_dma_attr(&hv_acpi_dev->dev) == DEV_DMA_COHERENT);
+	return 0;
+}
+
+/*
  * vmbus_remove - Remove a vmbus device
  */
 static void vmbus_remove(struct device *child_device)
···
 	.remove =		vmbus_remove,
 	.probe =		vmbus_probe,
 	.uevent =		vmbus_uevent,
+	.dma_configure =	vmbus_dma_configure,
 	.dev_groups =		vmbus_dev_groups,
 	.drv_groups =		vmbus_drv_groups,
 	.bus_groups =		vmbus_bus_groups,
···
 	if (ret)
 		goto err_connect;
 
+	if (hv_is_isolation_supported())
+		sysctl_record_panic_msg = 0;
+
 	/*
 	 * Only register if the crash MSRs are available
 	 */
 	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
 		u64 hyperv_crash_ctl;
 		/*
-		 * Sysctl registration is not fatal, since by default
-		 * reporting is enabled.
+		 * Panic message recording (sysctl_record_panic_msg)
+		 * is enabled by default in non-isolated guests and
+		 * disabled by default in isolated guests; the panic
+		 * message recording won't be available in isolated
+		 * guests should the following registration fail.
 		 */
 		hv_ctl_table_hdr = register_sysctl_table(hv_root_table);
 		if (!hv_ctl_table_hdr)
···
 	child_device_obj->device.parent = &hv_acpi_dev->dev;
 	child_device_obj->device.release = vmbus_device_release;
 
+	child_device_obj->device.dma_parms = &child_device_obj->dma_parms;
+	child_device_obj->device.dma_mask = &child_device_obj->dma_mask;
+	dma_set_mask(&child_device_obj->device, DMA_BIT_MASK(64));
+
 	/*
 	 * Register with the LDM. This will kick off the driver/device
 	 * binding...which will eventually call vmbus_match() and vmbus_probe()
···
 	}
 	hv_debug_add_dev_dir(child_device_obj);
 
-	child_device_obj->device.dma_parms = &child_device_obj->dma_parms;
-	child_device_obj->device.dma_mask = &child_device_obj->dma_mask;
-	dma_set_mask(&child_device_obj->device, DMA_BIT_MASK(64));
 	return 0;
 
 err_kset_unregister:
···
 	struct acpi_device *ancestor;
 
 	hv_acpi_dev = device;
+
+	/*
+	 * Older versions of Hyper-V for ARM64 fail to include the _CCA
+	 * method on the top level VMbus device in the DSDT. But devices
+	 * are hardware coherent in all current Hyper-V use cases, so fix
+	 * up the ACPI device to behave as if _CCA is present and indicates
+	 * hardware coherence.
+	 */
+	ACPI_COMPANION_SET(&device->dev, device);
+	if (IS_ENABLED(CONFIG_ACPI_CCA_REQUIRED) &&
+	    device_get_dma_attr(&device->dev) == DEV_DMA_NOT_SUPPORTED) {
+		pr_info("No ACPI _CCA found; assuming coherent device I/O\n");
+		device->flags.cca_seen = true;
+		device->flags.coherent_dma = true;
+	}
 
 	result = acpi_walk_resources(device->handle, METHOD_NAME__CRS,
 				     vmbus_walk_resources, NULL);
···
 	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
 		kmsg_dump_unregister(&hv_kmsg_dumper);
 		unregister_die_notifier(&hyperv_die_block);
-		atomic_notifier_chain_unregister(&panic_notifier_list,
-						 &hyperv_panic_block);
 	}
+
+	/*
+	 * The panic notifier is always registered, hence we should
+	 * also unconditionally unregister it here as well.
+	 */
+	atomic_notifier_chain_unregister(&panic_notifier_list,
+					 &hyperv_panic_block);
 
 	free_page((unsigned long)hv_panic_page);
 	unregister_sysctl_table(hv_ctl_table_hdr);
+7
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 		}
 		qidx = bp->tc_to_qidx[j];
 		ring->queue_id = bp->q_info[qidx].queue_id;
+		spin_lock_init(&txr->xdp_tx_lock);
 		if (i < bp->tx_nr_rings_xdp)
 			continue;
 		if (i % bp->tx_nr_rings_per_tc == (bp->tx_nr_rings_per_tc - 1))
···
 	if (irq_re_init)
 		udp_tunnel_nic_reset_ntf(bp->dev);
 
+	if (bp->tx_nr_rings_xdp < num_possible_cpus()) {
+		if (!static_key_enabled(&bnxt_xdp_locking_key))
+			static_branch_enable(&bnxt_xdp_locking_key);
+	} else if (static_key_enabled(&bnxt_xdp_locking_key)) {
+		static_branch_disable(&bnxt_xdp_locking_key);
+	}
 	set_bit(BNXT_STATE_OPEN, &bp->state);
 	bnxt_enable_int(bp);
 	/* Enable TX queues */
drivers/net/ethernet/intel/ice/ice_main.c
···
 static bool ice_vsi_fltr_changed(struct ice_vsi *vsi)
 {
 	return test_bit(ICE_VSI_UMAC_FLTR_CHANGED, vsi->state) ||
-	       test_bit(ICE_VSI_MMAC_FLTR_CHANGED, vsi->state) ||
-	       test_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state);
+	       test_bit(ICE_VSI_MMAC_FLTR_CHANGED, vsi->state);
 }

 /**
···
 	if (vsi->type != ICE_VSI_PF)
 		return 0;

-	if (ice_vsi_has_non_zero_vlans(vsi))
-		status = ice_fltr_set_vlan_vsi_promisc(&vsi->back->hw, vsi, promisc_m);
-	else
-		status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, 0);
+	if (ice_vsi_has_non_zero_vlans(vsi)) {
+		promisc_m |= (ICE_PROMISC_VLAN_RX | ICE_PROMISC_VLAN_TX);
+		status = ice_fltr_set_vlan_vsi_promisc(&vsi->back->hw, vsi,
+						       promisc_m);
+	} else {
+		status = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx,
+						  promisc_m, 0);
+	}
+
 	return status;
 }
···
 	if (vsi->type != ICE_VSI_PF)
 		return 0;

-	if (ice_vsi_has_non_zero_vlans(vsi))
-		status = ice_fltr_clear_vlan_vsi_promisc(&vsi->back->hw, vsi, promisc_m);
-	else
-		status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx, promisc_m, 0);
+	if (ice_vsi_has_non_zero_vlans(vsi)) {
+		promisc_m |= (ICE_PROMISC_VLAN_RX | ICE_PROMISC_VLAN_TX);
+		status = ice_fltr_clear_vlan_vsi_promisc(&vsi->back->hw, vsi,
+							 promisc_m);
+	} else {
+		status = ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
+						    promisc_m, 0);
+	}
+
 	return status;
 }
···
 	struct ice_pf *pf = vsi->back;
 	struct ice_hw *hw = &pf->hw;
 	u32 changed_flags = 0;
-	u8 promisc_m;
 	int err;

 	if (!vsi->netdev)
···
 	if (ice_vsi_fltr_changed(vsi)) {
 		clear_bit(ICE_VSI_UMAC_FLTR_CHANGED, vsi->state);
 		clear_bit(ICE_VSI_MMAC_FLTR_CHANGED, vsi->state);
-		clear_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state);

 		/* grab the netdev's addr_list_lock */
 		netif_addr_lock_bh(netdev);
···
 	/* check for changes in promiscuous modes */
 	if (changed_flags & IFF_ALLMULTI) {
 		if (vsi->current_netdev_flags & IFF_ALLMULTI) {
-			if (ice_vsi_has_non_zero_vlans(vsi))
-				promisc_m = ICE_MCAST_VLAN_PROMISC_BITS;
-			else
-				promisc_m = ICE_MCAST_PROMISC_BITS;
-
-			err = ice_set_promisc(vsi, promisc_m);
+			err = ice_set_promisc(vsi, ICE_MCAST_PROMISC_BITS);
 			if (err) {
-				netdev_err(netdev, "Error setting Multicast promiscuous mode on VSI %i\n",
-					   vsi->vsi_num);
 				vsi->current_netdev_flags &= ~IFF_ALLMULTI;
 				goto out_promisc;
 			}
 		} else {
 			/* !(vsi->current_netdev_flags & IFF_ALLMULTI) */
-			if (ice_vsi_has_non_zero_vlans(vsi))
-				promisc_m = ICE_MCAST_VLAN_PROMISC_BITS;
-			else
-				promisc_m = ICE_MCAST_PROMISC_BITS;
-
-			err = ice_clear_promisc(vsi, promisc_m);
+			err = ice_clear_promisc(vsi, ICE_MCAST_PROMISC_BITS);
 			if (err) {
-				netdev_err(netdev, "Error clearing Multicast promiscuous mode on VSI %i\n",
-					   vsi->vsi_num);
 				vsi->current_netdev_flags |= IFF_ALLMULTI;
 				goto out_promisc;
 			}
···
 	spin_lock_init(&xdp_ring->tx_lock);
 	for (j = 0; j < xdp_ring->count; j++) {
 		tx_desc = ICE_TX_DESC(xdp_ring, j);
-		tx_desc->cmd_type_offset_bsz = cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE);
+		tx_desc->cmd_type_offset_bsz = 0;
 	}
 }
···

 	ice_for_each_xdp_txq(vsi, i)
 		if (vsi->xdp_rings[i]) {
-			if (vsi->xdp_rings[i]->desc)
+			if (vsi->xdp_rings[i]->desc) {
+				synchronize_rcu();
 				ice_free_tx_ring(vsi->xdp_rings[i]);
+			}
 			kfree_rcu(vsi->xdp_rings[i], rcu);
 			vsi->xdp_rings[i] = NULL;
 		}
···
 	if (!vid)
 		return 0;

+	while (test_and_set_bit(ICE_CFG_BUSY, vsi->state))
+		usleep_range(1000, 2000);
+
+	/* Add multicast promisc rule for the VLAN ID to be added if
+	 * all-multicast is currently enabled.
+	 */
+	if (vsi->current_netdev_flags & IFF_ALLMULTI) {
+		ret = ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx,
+					       ICE_MCAST_VLAN_PROMISC_BITS,
+					       vid);
+		if (ret)
+			goto finish;
+	}
+
 	vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);

 	/* Add a switch rule for this VLAN ID so its corresponding VLAN tagged
···
 	 */
 	vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0);
 	ret = vlan_ops->add_vlan(vsi, &vlan);
-	if (!ret)
-		set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state);
+	if (ret)
+		goto finish;
+
+	/* If all-multicast is currently enabled and this VLAN ID is only one
+	 * besides VLAN-0 we have to update look-up type of multicast promisc
+	 * rule for VLAN-0 from ICE_SW_LKUP_PROMISC to ICE_SW_LKUP_PROMISC_VLAN.
+	 */
+	if ((vsi->current_netdev_flags & IFF_ALLMULTI) &&
+	    ice_vsi_num_non_zero_vlans(vsi) == 1) {
+		ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
+					   ICE_MCAST_PROMISC_BITS, 0);
+		ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx,
+					 ICE_MCAST_VLAN_PROMISC_BITS, 0);
+	}
+
+finish:
+	clear_bit(ICE_CFG_BUSY, vsi->state);

 	return ret;
 }
···
 	if (!vid)
 		return 0;

+	while (test_and_set_bit(ICE_CFG_BUSY, vsi->state))
+		usleep_range(1000, 2000);
+
 	vlan_ops = ice_get_compat_vsi_vlan_ops(vsi);

 	/* Make sure VLAN delete is successful before updating VLAN
···
 	vlan = ICE_VLAN(be16_to_cpu(proto), vid, 0);
 	ret = vlan_ops->del_vlan(vsi, &vlan);
 	if (ret)
-		return ret;
+		goto finish;

-	set_bit(ICE_VSI_VLAN_FLTR_CHANGED, vsi->state);
-	return 0;
+	/* Remove multicast promisc rule for the removed VLAN ID if
+	 * all-multicast is enabled.
+	 */
+	if (vsi->current_netdev_flags & IFF_ALLMULTI)
+		ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
+					   ICE_MCAST_VLAN_PROMISC_BITS, vid);
+
+	if (!ice_vsi_has_non_zero_vlans(vsi)) {
+		/* Update look-up type of multicast promisc rule for VLAN 0
+		 * from ICE_SW_LKUP_PROMISC_VLAN to ICE_SW_LKUP_PROMISC when
+		 * all-multicast is enabled and VLAN 0 is the only VLAN rule.
+		 */
+		if (vsi->current_netdev_flags & IFF_ALLMULTI) {
+			ice_fltr_clear_vsi_promisc(&vsi->back->hw, vsi->idx,
+						   ICE_MCAST_VLAN_PROMISC_BITS,
+						   0);
+			ice_fltr_set_vsi_promisc(&vsi->back->hw, vsi->idx,
+						 ICE_MCAST_PROMISC_BITS, 0);
+		}
+	}
+
+finish:
+	clear_bit(ICE_CFG_BUSY, vsi->state);
+
+	return ret;
 }

 /**
···
 	/* Add filter for new MAC. If filter exists, return success */
 	err = ice_fltr_add_mac(vsi, mac, ICE_FWD_TO_VSI);
-	if (err == -EEXIST)
+	if (err == -EEXIST) {
 		/* Although this MAC filter is already present in hardware it's
 		 * possible in some cases (e.g. bonding) that dev_addr was
 		 * modified outside of the driver and needs to be restored back
 		 * to this value.
 		 */
 		netdev_dbg(netdev, "filter for MAC %pM already exists\n", mac);
-	else if (err)
+
+		return 0;
+	} else if (err) {
 		/* error if the new filter addition failed */
 		err = -EADDRNOTAVAIL;
+	}

 err_update_filters:
 	if (err) {
+2-2
drivers/net/ethernet/intel/ice/ice_virtchnl.c
···
 		goto error_param;
 	}

-	/* Skip queue if not enabled */
 	if (!test_bit(vf_q_id, vf->txq_ena))
-		continue;
+		dev_dbg(ice_pf_to_dev(vsi->back), "Queue %u on VSI %u is not enabled, but stopping it anyway\n",
+			vf_q_id, vsi->vsi_num);

 	ice_fill_txq_meta(vsi, ring, &txq_meta);

+4-2
drivers/net/ethernet/intel/ice/ice_xsk.c
···
 static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx)
 {
 	ice_clean_tx_ring(vsi->tx_rings[q_idx]);
-	if (ice_is_xdp_ena_vsi(vsi))
+	if (ice_is_xdp_ena_vsi(vsi)) {
+		synchronize_rcu();
 		ice_clean_tx_ring(vsi->xdp_rings[q_idx]);
+	}
 	ice_clean_rx_ring(vsi->rx_rings[q_idx]);
 }
···
 	struct ice_vsi *vsi = np->vsi;
 	struct ice_tx_ring *ring;

-	if (test_bit(ICE_DOWN, vsi->state))
+	if (test_bit(ICE_VSI_DOWN, vsi->state))
 		return -ENETDOWN;

 	if (!ice_is_xdp_ena_vsi(vsi))
+1-1
drivers/net/ethernet/marvell/mv643xx_eth.c
···
 	}

 	ret = of_get_mac_address(pnp, ppd.mac_addr);
-	if (ret)
+	if (ret == -EPROBE_DEFER)
 		return ret;

 	mv643xx_eth_property(pnp, "tx-queue-size", ppd.tx_queue_size);
+2
drivers/net/ethernet/micrel/Kconfig
···
 config KS8851
 	tristate "Micrel KS8851 SPI"
 	depends on SPI
+	depends on PTP_1588_CLOCK_OPTIONAL
 	select MII
 	select CRC32
 	select EEPROM_93CX6
···
 config KS8851_MLL
 	tristate "Micrel KS8851 MLL"
 	depends on HAS_IOMEM
+	depends on PTP_1588_CLOCK_OPTIONAL
 	select MII
 	select CRC32
 	select EEPROM_93CX6
drivers/net/ethernet/sfc/efx_channels.c
···
 	kfree(efx->xdp_tx_queues);
 }

+static int efx_set_xdp_tx_queue(struct efx_nic *efx, int xdp_queue_number,
+				struct efx_tx_queue *tx_queue)
+{
+	if (xdp_queue_number >= efx->xdp_tx_queue_count)
+		return -EINVAL;
+
+	netif_dbg(efx, drv, efx->net_dev,
+		  "Channel %u TXQ %u is XDP %u, HW %u\n",
+		  tx_queue->channel->channel, tx_queue->label,
+		  xdp_queue_number, tx_queue->queue);
+	efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
+	return 0;
+}
+
+static void efx_set_xdp_channels(struct efx_nic *efx)
+{
+	struct efx_tx_queue *tx_queue;
+	struct efx_channel *channel;
+	unsigned int next_queue = 0;
+	int xdp_queue_number = 0;
+	int rc;
+
+	/* We need to mark which channels really have RX and TX
+	 * queues, and adjust the TX queue numbers if we have separate
+	 * RX-only and TX-only channels.
+	 */
+	efx_for_each_channel(channel, efx) {
+		if (channel->channel < efx->tx_channel_offset)
+			continue;
+
+		if (efx_channel_is_xdp_tx(channel)) {
+			efx_for_each_channel_tx_queue(tx_queue, channel) {
+				tx_queue->queue = next_queue++;
+				rc = efx_set_xdp_tx_queue(efx, xdp_queue_number,
+							  tx_queue);
+				if (rc == 0)
+					xdp_queue_number++;
+			}
+		} else {
+			efx_for_each_channel_tx_queue(tx_queue, channel) {
+				tx_queue->queue = next_queue++;
+				netif_dbg(efx, drv, efx->net_dev,
+					  "Channel %u TXQ %u is HW %u\n",
+					  channel->channel, tx_queue->label,
+					  tx_queue->queue);
+			}
+
+			/* If XDP is borrowing queues from net stack, it must
+			 * use the queue with no csum offload, which is the
+			 * first one of the channel
+			 * (note: tx_queue_by_type is not initialized yet)
+			 */
+			if (efx->xdp_txq_queues_mode ==
+			    EFX_XDP_TX_QUEUES_BORROWED) {
+				tx_queue = &channel->tx_queue[0];
+				rc = efx_set_xdp_tx_queue(efx, xdp_queue_number,
+							  tx_queue);
+				if (rc == 0)
+					xdp_queue_number++;
+			}
+		}
+	}
+	WARN_ON(efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_DEDICATED &&
+		xdp_queue_number != efx->xdp_tx_queue_count);
+	WARN_ON(efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED &&
+		xdp_queue_number > efx->xdp_tx_queue_count);
+
+	/* If we have more CPUs than assigned XDP TX queues, assign the already
+	 * existing queues to the exceeding CPUs
+	 */
+	next_queue = 0;
+	while (xdp_queue_number < efx->xdp_tx_queue_count) {
+		tx_queue = efx->xdp_tx_queues[next_queue++];
+		rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
+		if (rc == 0)
+			xdp_queue_number++;
+	}
+}
+
 int efx_realloc_channels(struct efx_nic *efx, u32 rxq_entries, u32 txq_entries)
 {
 	struct efx_channel *other_channel[EFX_MAX_CHANNELS], *channel;
···
 		efx_init_napi_channel(efx->channel[i]);
 	}

+	efx_set_xdp_channels(efx);
 out:
 	/* Destroy unused channel structures */
 	for (i = 0; i < efx->n_channels; i++) {
···
 	goto out;
 }

-static inline int
-efx_set_xdp_tx_queue(struct efx_nic *efx, int xdp_queue_number,
-		     struct efx_tx_queue *tx_queue)
-{
-	if (xdp_queue_number >= efx->xdp_tx_queue_count)
-		return -EINVAL;
-
-	netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is XDP %u, HW %u\n",
-		  tx_queue->channel->channel, tx_queue->label,
-		  xdp_queue_number, tx_queue->queue);
-	efx->xdp_tx_queues[xdp_queue_number] = tx_queue;
-	return 0;
-}
-
 int efx_set_channels(struct efx_nic *efx)
 {
-	struct efx_tx_queue *tx_queue;
 	struct efx_channel *channel;
-	unsigned int next_queue = 0;
-	int xdp_queue_number;
 	int rc;

 	efx->tx_channel_offset =
···
 		return -ENOMEM;
 	}

-	/* We need to mark which channels really have RX and TX
-	 * queues, and adjust the TX queue numbers if we have separate
-	 * RX-only and TX-only channels.
-	 */
-	xdp_queue_number = 0;
 	efx_for_each_channel(channel, efx) {
 		if (channel->channel < efx->n_rx_channels)
 			channel->rx_queue.core_index = channel->channel;
 		else
 			channel->rx_queue.core_index = -1;
-
-		if (channel->channel >= efx->tx_channel_offset) {
-			if (efx_channel_is_xdp_tx(channel)) {
-				efx_for_each_channel_tx_queue(tx_queue, channel) {
-					tx_queue->queue = next_queue++;
-					rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
-					if (rc == 0)
-						xdp_queue_number++;
-				}
-			} else {
-				efx_for_each_channel_tx_queue(tx_queue, channel) {
-					tx_queue->queue = next_queue++;
-					netif_dbg(efx, drv, efx->net_dev, "Channel %u TXQ %u is HW %u\n",
-						  channel->channel, tx_queue->label,
-						  tx_queue->queue);
-				}
-
-				/* If XDP is borrowing queues from net stack, it must use the queue
-				 * with no csum offload, which is the first one of the channel
-				 * (note: channel->tx_queue_by_type is not initialized yet)
-				 */
-				if (efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_BORROWED) {
-					tx_queue = &channel->tx_queue[0];
-					rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
-					if (rc == 0)
-						xdp_queue_number++;
-				}
-			}
-		}
 	}
-	WARN_ON(efx->xdp_txq_queues_mode == EFX_XDP_TX_QUEUES_DEDICATED &&
-		xdp_queue_number != efx->xdp_tx_queue_count);
-	WARN_ON(efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED &&
-		xdp_queue_number > efx->xdp_tx_queue_count);

-	/* If we have more CPUs than assigned XDP TX queues, assign the already
-	 * existing queues to the exceeding CPUs
-	 */
-	next_queue = 0;
-	while (xdp_queue_number < efx->xdp_tx_queue_count) {
-		tx_queue = efx->xdp_tx_queues[next_queue++];
-		rc = efx_set_xdp_tx_queue(efx, xdp_queue_number, tx_queue);
-		if (rc == 0)
-			xdp_queue_number++;
-	}
+	efx_set_xdp_channels(efx);

 	rc = netif_set_real_num_tx_queues(efx->net_dev, efx->n_tx_channels);
 	if (rc)
···
 	struct efx_rx_queue *rx_queue;
 	struct efx_channel *channel;

-	efx_for_each_channel(channel, efx) {
+	efx_for_each_channel_rev(channel, efx) {
 		efx_for_each_channel_tx_queue(tx_queue, channel) {
 			efx_init_tx_queue(tx_queue);
 			atomic_inc(&efx->active_queues);
+3
drivers/net/ethernet/sfc/rx_common.c
···
 	struct efx_nic *efx = rx_queue->efx;
 	int i;

+	if (unlikely(!rx_queue->page_ring))
+		return;
+
 	/* Unmap and release the pages in the recycle ring. Remove the ring. */
 	for (i = 0; i <= rx_queue->page_ptr_mask; i++) {
 		struct page *page = rx_queue->page_ring[i];
+3
drivers/net/ethernet/sfc/tx.c
···
 	if (unlikely(!tx_queue))
 		return -EINVAL;

+	if (!tx_queue->initialised)
+		return -EINVAL;
+
 	if (efx->xdp_txq_queues_mode != EFX_XDP_TX_QUEUES_DEDICATED)
 		HARD_TX_LOCK(efx->net_dev, tx_queue->core_txq, cpu);

+2
drivers/net/ethernet/sfc/tx_common.c
···
 	netif_dbg(tx_queue->efx, drv, tx_queue->efx->net_dev,
 		  "shutting down TX queue %d\n", tx_queue->queue);

+	tx_queue->initialised = false;
+
 	if (!tx_queue->buffer)
 		return;

drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
···
 	plat->phylink_node = np;

 	/* Get max speed of operation from device tree */
-	if (of_property_read_u32(np, "max-speed", &plat->max_speed))
-		plat->max_speed = -1;
+	of_property_read_u32(np, "max-speed", &plat->max_speed);

 	plat->bus_id = of_alias_get_id(np, "ethernet");
 	if (plat->bus_id < 0)
drivers/net/mdio/mdio-mscc-miim.c
···
 	u32 val;
 	int ret;

+	if (regnum & MII_ADDR_C45)
+		return -EOPNOTSUPP;
+
 	ret = mscc_miim_wait_pending(bus);
 	if (ret)
 		goto out;
···
 {
 	struct mscc_miim_dev *miim = bus->priv;
 	int ret;
+
+	if (regnum & MII_ADDR_C45)
+		return -EOPNOTSUPP;

 	ret = mscc_miim_wait_pending(bus);
 	if (ret < 0)
drivers/net/slip/slip.c
···
 	spin_lock(&sl->lock);

 	if (netif_queue_stopped(dev)) {
-		if (!netif_running(dev))
+		if (!netif_running(dev) || !sl->tty)
 			goto out;

 		/* May be we must check transmitter timeout here ?
+7-2
drivers/net/usb/aqc111.c
···
 	if (start_of_descs != desc_offset)
 		goto err;

-	/* self check desc_offset from header*/
-	if (desc_offset >= skb_len)
+	/* self check desc_offset from header and make sure that the
+	 * bounds of the metadata array are inside the SKB
+	 */
+	if (pkt_count * 2 + desc_offset >= skb_len)
 		goto err;
+
+	/* Packets must not overlap the metadata array */
+	skb_trim(skb, desc_offset);

 	if (pkt_count == 0)
 		goto err;
+11-4
drivers/net/vrf.c
···
 	eth = (struct ethhdr *)skb->data;

 	skb_reset_mac_header(skb);
+	skb_reset_mac_len(skb);

 	/* we set the ethernet destination and the source addresses to the
 	 * address of the VRF device.
···
 */
 static int vrf_add_mac_header_if_unset(struct sk_buff *skb,
 				       struct net_device *vrf_dev,
-				       u16 proto)
+				       u16 proto, struct net_device *orig_dev)
 {
-	if (skb_mac_header_was_set(skb))
+	if (skb_mac_header_was_set(skb) && dev_has_header(orig_dev))
 		return 0;

 	return vrf_prepare_mac_header(skb, vrf_dev, proto);
···

 	/* if packet is NDISC then keep the ingress interface */
 	if (!is_ndisc) {
+		struct net_device *orig_dev = skb->dev;
+
 		vrf_rx_stats(vrf_dev, skb->len);
 		skb->dev = vrf_dev;
 		skb->skb_iif = vrf_dev->ifindex;
···
 			int err;

 			err = vrf_add_mac_header_if_unset(skb, vrf_dev,
-							  ETH_P_IPV6);
+							  ETH_P_IPV6,
+							  orig_dev);
 			if (likely(!err)) {
 				skb_push(skb, skb->mac_len);
 				dev_queue_xmit_nit(skb, vrf_dev);
···
 static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
 				  struct sk_buff *skb)
 {
+	struct net_device *orig_dev = skb->dev;
+
 	skb->dev = vrf_dev;
 	skb->skb_iif = vrf_dev->ifindex;
 	IPCB(skb)->flags |= IPSKB_L3SLAVE;
···
 	if (!list_empty(&vrf_dev->ptype_all)) {
 		int err;

-		err = vrf_add_mac_header_if_unset(skb, vrf_dev, ETH_P_IP);
+		err = vrf_add_mac_header_if_unset(skb, vrf_dev, ETH_P_IP,
+						  orig_dev);
 		if (likely(!err)) {
 			skb_push(skb, skb->mac_len);
 			dev_queue_xmit_nit(skb, vrf_dev);
+9
drivers/pci/controller/pci-hyperv.c
···
 	hbus->bridge->domain_nr = dom;
 #ifdef CONFIG_X86
 	hbus->sysdata.domain = dom;
+#elif defined(CONFIG_ARM64)
+	/*
+	 * Set the PCI bus parent to be the corresponding VMbus
+	 * device. Then the VMbus device will be assigned as the
+	 * ACPI companion in pcibios_root_bridge_prepare() and
+	 * pci_dma_configure() will propagate device coherence
+	 * information to devices created on the bus.
+	 */
+	hbus->sysdata.parent = hdev->device.parent;
 #endif

 	hbus->hdev = hdev;
+41-21
drivers/vdpa/mlx5/net/mlx5_vnet.c
···
 	u32 cur_num_vqs;
 	struct notifier_block nb;
 	struct vdpa_callback config_cb;
+	struct mlx5_vdpa_wq_ent cvq_ent;
 };

 static void free_resources(struct mlx5_vdpa_net *ndev);
···
 	mvdev = wqent->mvdev;
 	ndev = to_mlx5_vdpa_ndev(mvdev);
 	cvq = &mvdev->cvq;
+
+	mutex_lock(&ndev->reslock);
+
+	if (!(mvdev->status & VIRTIO_CONFIG_S_DRIVER_OK))
+		goto out;
+
 	if (!(ndev->mvdev.actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)))
 		goto out;
···

 		if (vringh_need_notify_iotlb(&cvq->vring))
 			vringh_notify(&cvq->vring);
+
+		queue_work(mvdev->wq, &wqent->work);
+		break;
 	}
+
 out:
-	kfree(wqent);
+	mutex_unlock(&ndev->reslock);
 }

 static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
···
 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
 	struct mlx5_vdpa_virtqueue *mvq;
-	struct mlx5_vdpa_wq_ent *wqent;

 	if (!is_index_valid(mvdev, idx))
 		return;
···
 		if (!mvdev->wq || !mvdev->cvq.ready)
 			return;

-		wqent = kzalloc(sizeof(*wqent), GFP_ATOMIC);
-		if (!wqent)
-			return;
-
-		wqent->mvdev = mvdev;
-		INIT_WORK(&wqent->work, mlx5_cvq_kick_handler);
-		queue_work(mvdev->wq, &wqent->work);
+		queue_work(mvdev->wq, &ndev->cvq_ent.work);
 		return;
 	}
···
 		goto err_mr;

 	if (!(mvdev->status & VIRTIO_CONFIG_S_DRIVER_OK))
-		return 0;
+		goto err_mr;

 	restore_channels_info(ndev);
 	err = setup_driver(mvdev);
···
 	return err;
 }

+/* reslock must be held for this function */
 static int setup_driver(struct mlx5_vdpa_dev *mvdev)
 {
 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
 	int err;

-	mutex_lock(&ndev->reslock);
+	WARN_ON(!mutex_is_locked(&ndev->reslock));
+
 	if (ndev->setup) {
 		mlx5_vdpa_warn(mvdev, "setup driver called for already setup driver\n");
 		err = 0;
···
 		goto err_fwd;
 	}
 	ndev->setup = true;
-	mutex_unlock(&ndev->reslock);

 	return 0;

···
 err_rqt:
 	teardown_virtqueues(ndev);
 out:
-	mutex_unlock(&ndev->reslock);
 	return err;
 }

+/* reslock must be held for this function */
 static void teardown_driver(struct mlx5_vdpa_net *ndev)
 {
-	mutex_lock(&ndev->reslock);
+
+	WARN_ON(!mutex_is_locked(&ndev->reslock));
+
 	if (!ndev->setup)
-		goto out;
+		return;

 	remove_fwd_to_tir(ndev);
 	destroy_tir(ndev);
 	destroy_rqt(ndev);
 	teardown_virtqueues(ndev);
 	ndev->setup = false;
-out:
-	mutex_unlock(&ndev->reslock);
 }

 static void clear_vqs_ready(struct mlx5_vdpa_net *ndev)
···

 	print_status(mvdev, status, true);

+	mutex_lock(&ndev->reslock);
+
 	if ((status ^ ndev->mvdev.status) & VIRTIO_CONFIG_S_DRIVER_OK) {
 		if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
 			err = setup_driver(mvdev);
···
 			}
 		} else {
 			mlx5_vdpa_warn(mvdev, "did not expect DRIVER_OK to be cleared\n");
-			return;
+			goto err_clear;
 		}
 	}

 	ndev->mvdev.status = status;
+	mutex_unlock(&ndev->reslock);
 	return;

 err_setup:
 	mlx5_vdpa_destroy_mr(&ndev->mvdev);
 	ndev->mvdev.status |= VIRTIO_CONFIG_S_FAILED;
+err_clear:
+	mutex_unlock(&ndev->reslock);
 }

 static int mlx5_vdpa_reset(struct vdpa_device *vdev)
···

 	print_status(mvdev, 0, true);
 	mlx5_vdpa_info(mvdev, "performing device reset\n");
+
+	mutex_lock(&ndev->reslock);
 	teardown_driver(ndev);
 	clear_vqs_ready(ndev);
 	mlx5_vdpa_destroy_mr(&ndev->mvdev);
···
 		if (mlx5_vdpa_create_mr(mvdev, NULL))
 			mlx5_vdpa_warn(mvdev, "create MR failed\n");
 	}
+	mutex_unlock(&ndev->reslock);

 	return 0;
 }
···
 static int mlx5_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb)
 {
 	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
 	bool change_map;
 	int err;
+
+	mutex_lock(&ndev->reslock);

 	err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map);
 	if (err) {
 		mlx5_vdpa_warn(mvdev, "set map failed(%d)\n", err);
-		return err;
+		goto err;
 	}

 	if (change_map)
-		return mlx5_vdpa_change_map(mvdev, iotlb);
+		err = mlx5_vdpa_change_map(mvdev, iotlb);

-	return 0;
+err:
+	mutex_unlock(&ndev->reslock);
+	return err;
 }

 static void mlx5_vdpa_free(struct vdpa_device *vdev)
···
 	if (err)
 		goto err_mr;

+	ndev->cvq_ent.mvdev = mvdev;
+	INIT_WORK(&ndev->cvq_ent.work, mlx5_cvq_kick_handler);
 	mvdev->wq = create_singlethread_workqueue("mlx5_vdpa_wq");
 	if (!mvdev->wq) {
 		err = -ENOMEM;
+2-3
drivers/virtio/virtio.c
···
 		goto err;
 	}

-	/* If restore didn't do it, mark device DRIVER_OK ourselves. */
-	if (!(dev->config->get_status(dev) & VIRTIO_CONFIG_S_DRIVER_OK))
-		virtio_device_ready(dev);
+	/* Finally, tell the device we're all set */
+	virtio_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);

 	virtio_config_enable(dev);

+1-1
fs/btrfs/extent_io.h
···
  */
 struct extent_changeset {
 	/* How many bytes are set/cleared in this operation */
-	unsigned int bytes_changed;
+	u64 bytes_changed;

 	/* Changed ranges */
 	struct ulist range_changed;
+11-2
fs/btrfs/file.c
···
 	return ret;
 }

-static int btrfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
+static int btrfs_punch_hole(struct file *file, loff_t offset, loff_t len)
 {
+	struct inode *inode = file_inode(file);
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
 	struct btrfs_root *root = BTRFS_I(inode)->root;
 	struct extent_state *cached_state = NULL;
···
 		ret = 0;
 		goto out_only_mutex;
 	}
+
+	ret = file_modified(file);
+	if (ret)
+		goto out_only_mutex;

 	lockstart = round_up(offset, btrfs_inode_sectorsize(BTRFS_I(inode)));
 	lockend = round_down(offset + len,
···
 		return -EOPNOTSUPP;

 	if (mode & FALLOC_FL_PUNCH_HOLE)
-		return btrfs_punch_hole(inode, offset, len);
+		return btrfs_punch_hole(file, offset, len);

 	/*
 	 * Only trigger disk allocation, don't trigger qgroup reserve
···
 		if (ret)
 			goto out;
 	}
+
+	ret = file_modified(file);
+	if (ret)
+		goto out;

 	/*
 	 * TODO: Move these two operations after we have checked
+22-1
fs/btrfs/inode.c
···
 	int ret = 0;

 	if (btrfs_is_free_space_inode(inode)) {
-		WARN_ON_ONCE(1);
 		ret = -EINVAL;
 		goto out_unlock;
 	}
···
 		btrfs_warn(fs_info,
 			   "attempt to delete subvolume %llu during send",
 			   dest->root_key.objectid);
+		return -EPERM;
+	}
+	if (atomic_read(&dest->nr_swapfiles)) {
+		spin_unlock(&dest->root_item_lock);
+		btrfs_warn(fs_info,
+			   "attempt to delete subvolume %llu with active swapfile",
+			   root->root_key.objectid);
 		return -EPERM;
 	}
 	root_flags = btrfs_root_flags(&dest->root_item);
···
 	 * set. We use this counter to prevent snapshots. We must increment it
 	 * before walking the extents because we don't want a concurrent
 	 * snapshot to run after we've already checked the extents.
+	 *
+	 * It is possible that subvolume is marked for deletion but still not
+	 * removed yet. To prevent this race, we check the root status before
+	 * activating the swapfile.
 	 */
+	spin_lock(&root->root_item_lock);
+	if (btrfs_root_dead(root)) {
+		spin_unlock(&root->root_item_lock);
+
+		btrfs_exclop_finish(fs_info);
+		btrfs_warn(fs_info,
+			   "cannot activate swapfile because subvolume %llu is being deleted",
+			   root->root_key.objectid);
+		return -EPERM;
+	}
 	atomic_inc(&root->nr_swapfiles);
+	spin_unlock(&root->root_item_lock);

 	isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize);

+14-6
fs/btrfs/ioctl.c
···
 }

 static bool defrag_check_next_extent(struct inode *inode, struct extent_map *em,
-				     bool locked)
+				     u32 extent_thresh, u64 newer_than, bool locked)
 {
 	struct extent_map *next;
 	bool ret = false;
···
 		return false;

 	/*
-	 * We want to check if the next extent can be merged with the current
-	 * one, which can be an extent created in a past generation, so we pass
-	 * a minimum generation of 0 to defrag_lookup_extent().
+	 * Here we need to pass @newer_than when checking the next extent, or
+	 * we will hit a case we mark current extent for defrag, but the next
+	 * one will not be a target.
+	 * This will just cause extra IO without really reducing the fragments.
 	 */
-	next = defrag_lookup_extent(inode, em->start + em->len, 0, locked);
+	next = defrag_lookup_extent(inode, em->start + em->len, newer_than, locked);
 	/* No more em or hole */
 	if (!next || next->block_start >= EXTENT_MAP_LAST_BYTE)
 		goto out;
···
 	 */
 	if (next->len >= get_extent_max_capacity(em))
 		goto out;
+	/* Skip older extent */
+	if (next->generation < newer_than)
+		goto out;
+	/* Also check extent size */
+	if (next->len >= extent_thresh)
+		goto out;
+
 	ret = true;
 out:
 	free_extent_map(next);
···
 			goto next;

 		next_mergeable = defrag_check_next_extent(&inode->vfs_inode, em,
-							  locked);
+							  extent_thresh, newer_than, locked);
 		if (!next_mergeable) {
 			struct defrag_target_range *last;

+28-37
fs/btrfs/volumes.c
···
         path_put(&path);
 }
 
-static int btrfs_rm_dev_item(struct btrfs_device *device)
+static int btrfs_rm_dev_item(struct btrfs_trans_handle *trans,
+                             struct btrfs_device *device)
 {
         struct btrfs_root *root = device->fs_info->chunk_root;
         int ret;
         struct btrfs_path *path;
         struct btrfs_key key;
-        struct btrfs_trans_handle *trans;
 
         path = btrfs_alloc_path();
         if (!path)
                 return -ENOMEM;
 
-        trans = btrfs_start_transaction(root, 0);
-        if (IS_ERR(trans)) {
-                btrfs_free_path(path);
-                return PTR_ERR(trans);
-        }
         key.objectid = BTRFS_DEV_ITEMS_OBJECTID;
         key.type = BTRFS_DEV_ITEM_KEY;
         key.offset = device->devid;
···
         if (ret) {
                 if (ret > 0)
                         ret = -ENOENT;
-                btrfs_abort_transaction(trans, ret);
-                btrfs_end_transaction(trans);
                 goto out;
         }
 
         ret = btrfs_del_item(trans, root, path);
-        if (ret) {
-                btrfs_abort_transaction(trans, ret);
-                btrfs_end_transaction(trans);
-        }
-
 out:
         btrfs_free_path(path);
-        if (!ret)
-                ret = btrfs_commit_transaction(trans);
         return ret;
 }
···
                       struct btrfs_dev_lookup_args *args,
                       struct block_device **bdev, fmode_t *mode)
 {
+        struct btrfs_trans_handle *trans;
         struct btrfs_device *device;
         struct btrfs_fs_devices *cur_devices;
         struct btrfs_fs_devices *fs_devices = fs_info->fs_devices;
···
         ret = btrfs_check_raid_min_devices(fs_info, num_devices - 1);
         if (ret)
-                goto out;
+                return ret;
 
         device = btrfs_find_device(fs_info->fs_devices, args);
         if (!device) {
···
                         ret = BTRFS_ERROR_DEV_MISSING_NOT_FOUND;
                 else
                         ret = -ENOENT;
-                goto out;
+                return ret;
         }
 
         if (btrfs_pinned_by_swapfile(fs_info, device)) {
                 btrfs_warn_in_rcu(fs_info,
         "cannot remove device %s (devid %llu) due to active swapfile",
                                   rcu_str_deref(device->name), device->devid);
-                ret = -ETXTBSY;
-                goto out;
+                return -ETXTBSY;
         }
 
-        if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state)) {
-                ret = BTRFS_ERROR_DEV_TGT_REPLACE;
-                goto out;
-        }
+        if (test_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state))
+                return BTRFS_ERROR_DEV_TGT_REPLACE;
 
         if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state) &&
-            fs_info->fs_devices->rw_devices == 1) {
-                ret = BTRFS_ERROR_DEV_ONLY_WRITABLE;
-                goto out;
-        }
+            fs_info->fs_devices->rw_devices == 1)
+                return BTRFS_ERROR_DEV_ONLY_WRITABLE;
 
         if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
                 mutex_lock(&fs_info->chunk_mutex);
···
         if (ret)
                 goto error_undo;
 
-        /*
-         * TODO: the superblock still includes this device in its num_devices
-         * counter although write_all_supers() is not locked out. This
-         * could give a filesystem state which requires a degraded mount.
-         */
-        ret = btrfs_rm_dev_item(device);
-        if (ret)
+        trans = btrfs_start_transaction(fs_info->chunk_root, 0);
+        if (IS_ERR(trans)) {
+                ret = PTR_ERR(trans);
                 goto error_undo;
+        }
+
+        ret = btrfs_rm_dev_item(trans, device);
+        if (ret) {
+                /* Any error in dev item removal is critical */
+                btrfs_crit(fs_info,
+                           "failed to remove device item for devid %llu: %d",
+                           device->devid, ret);
+                btrfs_abort_transaction(trans, ret);
+                btrfs_end_transaction(trans);
+                return ret;
+        }
 
         clear_bit(BTRFS_DEV_STATE_IN_FS_METADATA, &device->dev_state);
         btrfs_scrub_cancel_dev(device);
···
                 free_fs_devices(cur_devices);
         }
 
-out:
+        ret = btrfs_commit_transaction(trans);
+
         return ret;
 
 error_undo:
···
                 device->fs_devices->rw_devices++;
                 mutex_unlock(&fs_info->chunk_mutex);
         }
-        goto out;
+        return ret;
 }
 
 void btrfs_rm_dev_replace_remove_srcdev(struct btrfs_device *srcdev)
+5 -8
fs/btrfs/zoned.c
···
 
         map = em->map_lookup;
         /* We only support single profile for now */
-        ASSERT(map->num_stripes == 1);
         device = map->stripes[0].dev;
 
         free_extent_map(em);
···
 
 bool btrfs_can_activate_zone(struct btrfs_fs_devices *fs_devices, u64 flags)
 {
+        struct btrfs_fs_info *fs_info = fs_devices->fs_info;
         struct btrfs_device *device;
         bool ret = false;
 
-        if (!btrfs_is_zoned(fs_devices->fs_info))
+        if (!btrfs_is_zoned(fs_info))
                 return true;
 
-        /* Non-single profiles are not supported yet */
-        ASSERT((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0);
-
         /* Check if there is a device with active zones left */
-        mutex_lock(&fs_devices->device_list_mutex);
-        list_for_each_entry(device, &fs_devices->devices, dev_list) {
+        mutex_lock(&fs_info->chunk_mutex);
+        list_for_each_entry(device, &fs_devices->alloc_list, dev_alloc_list) {
                 struct btrfs_zoned_device_info *zinfo = device->zone_info;
 
                 if (!device->bdev)
···
                         break;
                 }
         }
-        mutex_unlock(&fs_devices->device_list_mutex);
+        mutex_unlock(&fs_info->chunk_mutex);
 
         return ret;
 }
include/linux/bpf_verifier.h
···
         return type & ~BPF_BASE_TYPE_MASK;
 }
 
+/* only use after check_attach_btf_id() */
 static inline enum bpf_prog_type resolve_prog_type(struct bpf_prog *prog)
 {
-        return prog->aux->dst_prog ? prog->aux->dst_prog->type : prog->type;
+        return prog->type == BPF_PROG_TYPE_EXT ?
+               prog->aux->dst_prog->type : prog->type;
 }
 
 #endif /* _LINUX_BPF_VERIFIER_H */
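The fix matters because tracing programs can also carry a non-NULL dst_prog after attachment, yet must keep their own type; only BPF_PROG_TYPE_EXT programs should inherit the target's type. A minimal userspace sketch, using a hypothetical `struct prog` in place of the kernel's `struct bpf_prog`, contrasting the two behaviors:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of the kernel structures involved. */
enum prog_type { PROG_TYPE_TRACING, PROG_TYPE_EXT, PROG_TYPE_XDP };

struct prog {
        enum prog_type type;
        struct prog *dst_prog;  /* attach target, may be NULL */
};

/* Old logic: any program with a dst_prog resolved to the target's type. */
static enum prog_type resolve_old(const struct prog *p)
{
        return p->dst_prog ? p->dst_prog->type : p->type;
}

/* New logic: only extension (EXT) programs take their target's type;
 * an EXT program always has dst_prog set once attachment is verified. */
static enum prog_type resolve_new(const struct prog *p)
{
        return p->type == PROG_TYPE_EXT ? p->dst_prog->type : p->type;
}
```

With the old logic, a tracing program attached to an XDP program would wrongly resolve to the XDP type; the new logic keeps it as tracing.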
-6
include/linux/virtio_config.h
···
  * any of @get/@set, @get_status/@set_status, or @get_features/
  * @finalize_features are NOT safe to be called from an atomic
  * context.
- * @enable_cbs: enable the callbacks
- *        vdev: the virtio_device
  * @get: read the value of a configuration field
  *        vdev: the virtio_device
  *        offset: the offset of the configuration field
···
  */
 typedef void vq_callback_t(struct virtqueue *);
 struct virtio_config_ops {
-        void (*enable_cbs)(struct virtio_device *vdev);
         void (*get)(struct virtio_device *vdev, unsigned offset,
                     void *buf, unsigned len);
         void (*set)(struct virtio_device *vdev, unsigned offset,
···
 void virtio_device_ready(struct virtio_device *dev)
 {
         unsigned status = dev->config->get_status(dev);
-
-        if (dev->config->enable_cbs)
-                dev->config->enable_cbs(dev);
 
         BUG_ON(status & VIRTIO_CONFIG_S_DRIVER_OK);
         dev->config->set_status(dev, status | VIRTIO_CONFIG_S_DRIVER_OK);
net/core/filter.c
···
         if (!th->ack || th->rst || th->syn)
                 return -ENOENT;
 
+        if (unlikely(iph_len < sizeof(struct iphdr)))
+                return -EINVAL;
+
         if (tcp_synq_no_recent_overflow(sk))
                 return -ENOENT;
 
         cookie = ntohl(th->ack_seq) - 1;
 
-        switch (sk->sk_family) {
-        case AF_INET:
-                if (unlikely(iph_len < sizeof(struct iphdr)))
+        /* Both struct iphdr and struct ipv6hdr have the version field at the
+         * same offset so we can cast to the shorter header (struct iphdr).
+         */
+        switch (((struct iphdr *)iph)->version) {
+        case 4:
+                if (sk->sk_family == AF_INET6 && ipv6_only_sock(sk))
                         return -EINVAL;
 
                 ret = __cookie_v4_check((struct iphdr *)iph, th, cookie);
                 break;
 
 #if IS_BUILTIN(CONFIG_IPV6)
-        case AF_INET6:
+        case 6:
                 if (unlikely(iph_len < sizeof(struct ipv6hdr)))
+                        return -EINVAL;
+
+                if (sk->sk_family != AF_INET6)
                         return -EINVAL;
 
                 ret = __cookie_v6_check((struct ipv6hdr *)iph, th, cookie);
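The switch can key on the header itself because IPv4 and IPv6 both put the 4-bit version number in the top half of the first header byte, so one read works for either family. A standalone sketch of that shared-layout read, using raw bytes instead of the kernel structs:

```c
#include <assert.h>

/* The IP version is the top 4 bits of the first byte in both IPv4
 * and IPv6 headers, which is why casting to the shorter struct iphdr
 * to read ->version is safe regardless of the actual family. */
static unsigned int ip_version(const unsigned char *hdr)
{
        return hdr[0] >> 4;
}
```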
+11 -4
net/core/skbuff.c
···
         if (skb_cloned(to))
                 return false;
 
-        /* The page pool signature of struct page will eventually figure out
-         * which pages can be recycled or not but for now let's prohibit slab
-         * allocated and page_pool allocated SKBs from being coalesced.
+        /* In general, avoid mixing slab allocated and page_pool allocated
+         * pages within the same SKB. However when @to is not pp_recycle and
+         * @from is cloned, we can transition frag pages from page_pool to
+         * reference counted.
+         *
+         * On the other hand, don't allow coalescing two pp_recycle SKBs if
+         * @from is cloned, in case the SKB is using page_pool fragment
+         * references (PP_FLAG_PAGE_FRAG). Since we only take full page
+         * references for cloned SKBs at the moment that would result in
+         * inconsistent reference counts.
          */
-        if (to->pp_recycle != from->pp_recycle)
+        if (to->pp_recycle != (from->pp_recycle && !skb_cloned(from)))
                 return false;
 
         if (len <= skb_tailroom(to)) {
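The new condition reads as a small predicate: coalescing is allowed when `to` and `from` agree on page_pool recycling, except that a cloned pp_recycle `from` is treated as non-recycling (its frag pages fall back to plain refcounting), which also forbids merging it into a pp_recycle `to`. A truth-table sketch with a hypothetical helper, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Models the skb_try_coalesce() pp_recycle gate: a cloned page_pool
 * skb is treated as if it were not page_pool-backed, since its frag
 * pages transition to ordinary page refcounting. */
static bool pp_coalesce_ok(bool to_pp, bool from_pp, bool from_cloned)
{
        return to_pp == (from_pp && !from_cloned);
}
```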
+24 -1
net/dsa/master.c
···
         .attrs        = dsa_slave_attrs,
 };
 
+static void dsa_master_reset_mtu(struct net_device *dev)
+{
+        int err;
+
+        err = dev_set_mtu(dev, ETH_DATA_LEN);
+        if (err)
+                netdev_dbg(dev,
+                           "Unable to reset MTU to exclude DSA overheads\n");
+}
+
 int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
 {
+        const struct dsa_device_ops *tag_ops = cpu_dp->tag_ops;
         struct dsa_switch *ds = cpu_dp->ds;
         struct device_link *consumer_link;
-        int ret;
+        int mtu, ret;
+
+        mtu = ETH_DATA_LEN + dsa_tag_protocol_overhead(tag_ops);
 
         /* The DSA master must use SET_NETDEV_DEV for this to work. */
         consumer_link = device_link_add(ds->dev, dev->dev.parent,
···
                 netdev_err(dev,
                            "Failed to create a device link to DSA switch %s\n",
                            dev_name(ds->dev));
+
+        /* The switch driver may not implement ->port_change_mtu(), case in
+         * which dsa_slave_change_mtu() will not update the master MTU either,
+         * so we need to do that here.
+         */
+        ret = dev_set_mtu(dev, mtu);
+        if (ret)
+                netdev_warn(dev, "error %d setting MTU to %d to include DSA overhead\n",
+                            ret, mtu);
 
         /* If we use a tagging format that doesn't have an ethertype
          * field, make sure that all packets from this point on get
···
         sysfs_remove_group(&dev->dev.kobj, &dsa_group);
         dsa_netdev_ops_set(dev, NULL);
         dsa_master_ethtool_teardown(dev);
+        dsa_master_reset_mtu(dev);
         dsa_master_set_promiscuity(dev, -1);
 
         dev->dsa_ptr = NULL;
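The computed master MTU is simply the standard Ethernet payload size plus the tagger's per-frame overhead, which is what `dsa_tag_protocol_overhead()` sums from the tagger's headroom and tailroom needs. A sketch with a hypothetical tagger descriptor standing in for `struct dsa_device_ops`:

```c
#include <assert.h>

#define ETH_DATA_LEN 1500  /* standard Ethernet payload MTU */

/* Hypothetical stand-in for struct dsa_device_ops; the real
 * dsa_tag_protocol_overhead() adds these two fields together. */
struct tagger {
        unsigned int needed_headroom;
        unsigned int needed_tailroom;
};

/* MTU the master interface must carry so a full 1500-byte user
 * frame still fits once the switch tag has been inserted. */
static unsigned int dsa_master_mtu(const struct tagger *t)
{
        return ETH_DATA_LEN + t->needed_headroom + t->needed_tailroom;
}
```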
+6 -1
net/ipv4/fib_semantics.c
···
         }
 
         if (cfg->fc_oif || cfg->fc_gw_family) {
-                struct fib_nh *nh = fib_info_nh(fi, 0);
+                struct fib_nh *nh;
 
+                /* cannot match on nexthop object attributes */
+                if (fi->nh)
+                        return 1;
+
+                nh = fib_info_nh(fi, 0);
                 if (cfg->fc_encap) {
                         if (fib_encap_match(net, cfg->fc_encap_type,
                                             cfg->fc_encap, nh, cfg, extack))
net/mctp/route.c
···
 
         if (cb->ifindex) {
                 /* direct route; use the hwaddr we stashed in sendmsg */
+                if (cb->halen != skb->dev->addr_len) {
+                        /* sanity check, sendmsg should have already caught this */
+                        kfree_skb(skb);
+                        return -EMSGSIZE;
+                }
                 daddr = cb->haddr;
         } else {
                 /* If lookup fails let the device handle daddr==NULL */
···
 
         rc = dev_hard_header(skb, skb->dev, ntohs(skb->protocol),
                              daddr, skb->dev->dev_addr, skb->len);
-        if (rc) {
+        if (rc < 0) {
                 kfree_skb(skb);
                 return -EHOSTUNREACH;
         }
···
 {
         const unsigned int hlen = sizeof(struct mctp_hdr);
         struct mctp_hdr *hdr, *hdr2;
-        unsigned int pos, size;
+        unsigned int pos, size, headroom;
         struct sk_buff *skb2;
         int rc;
         u8 seq;
···
                 return -EMSGSIZE;
         }
 
+        /* keep same headroom as the original skb */
+        headroom = skb_headroom(skb);
+
         /* we've got the header */
         skb_pull(skb, hlen);
 
···
                 /* size of message payload */
                 size = min(mtu - hlen, skb->len - pos);
 
-                skb2 = alloc_skb(MCTP_HEADER_MAXLEN + hlen + size, GFP_KERNEL);
+                skb2 = alloc_skb(headroom + hlen + size, GFP_KERNEL);
                 if (!skb2) {
                         rc = -ENOMEM;
                         break;
···
                 skb_set_owner_w(skb2, skb->sk);
 
                 /* establish packet */
-                skb_reserve(skb2, MCTP_HEADER_MAXLEN);
+                skb_reserve(skb2, headroom);
                 skb_reset_network_header(skb2);
                 skb_put(skb2, hlen + size);
                 skb2->transport_header = skb2->network_header + hlen;
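Each fragment carries at most `mtu - hlen` payload bytes, with the final fragment taking whatever remains; the fix only changes how much headroom each fragment reserves (the original skb's headroom instead of the fixed `MCTP_HEADER_MAXLEN`). A sketch of just the sizing arithmetic from that loop, with the skb plumbing omitted:

```c
#include <assert.h>

/* Count the fragments a payload of @len bytes needs when each
 * fragment carries at most (mtu - hlen) payload bytes, mirroring
 * the sizing loop in mctp_do_fragment_route(). */
static int mctp_frag_count(unsigned int len, unsigned int mtu,
                           unsigned int hlen)
{
        unsigned int pos = 0, size;
        int n = 0;

        while (pos < len) {
                /* size = min(mtu - hlen, remaining payload) */
                size = mtu - hlen;
                if (size > len - pos)
                        size = len - pos;
                pos += size;
                n++;
        }
        return n;
}
```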
+1 -1
net/netfilter/nf_tables_api.c
···
         int err, i, k;
 
         for (i = 0; i < set->num_exprs; i++) {
-                expr = kzalloc(set->exprs[i]->ops->size, GFP_KERNEL);
+                expr = kzalloc(set->exprs[i]->ops->size, GFP_KERNEL_ACCOUNT);
                 if (!expr)
                         goto err_expr;
 
net/netfilter/nft_last.c
···
         u64 last_jiffies;
         int err;
 
-        last = kzalloc(sizeof(*last), GFP_KERNEL);
+        last = kzalloc(sizeof(*last), GFP_KERNEL_ACCOUNT);
         if (!last)
                 return -ENOMEM;
 
net/openvswitch/actions.c
···
         int rem = nla_len(attr);
         bool dont_clone_flow_key;
 
-        /* The first action is always 'OVS_CLONE_ATTR_ARG'. */
+        /* The first action is always 'OVS_CLONE_ATTR_EXEC'. */
         clone_arg = nla_data(attr);
         dont_clone_flow_key = nla_get_u32(clone_arg);
         actions = nla_next(clone_arg, &rem);
+93 -6
net/openvswitch/flow_netlink.c
···
         return sfa;
 }
 
+static void ovs_nla_free_nested_actions(const struct nlattr *actions, int len);
+
+static void ovs_nla_free_check_pkt_len_action(const struct nlattr *action)
+{
+        const struct nlattr *a;
+        int rem;
+
+        nla_for_each_nested(a, action, rem) {
+                switch (nla_type(a)) {
+                case OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_LESS_EQUAL:
+                case OVS_CHECK_PKT_LEN_ATTR_ACTIONS_IF_GREATER:
+                        ovs_nla_free_nested_actions(nla_data(a), nla_len(a));
+                        break;
+                }
+        }
+}
+
+static void ovs_nla_free_clone_action(const struct nlattr *action)
+{
+        const struct nlattr *a = nla_data(action);
+        int rem = nla_len(action);
+
+        switch (nla_type(a)) {
+        case OVS_CLONE_ATTR_EXEC:
+                /* The real list of actions follows this attribute. */
+                a = nla_next(a, &rem);
+                ovs_nla_free_nested_actions(a, rem);
+                break;
+        }
+}
+
+static void ovs_nla_free_dec_ttl_action(const struct nlattr *action)
+{
+        const struct nlattr *a = nla_data(action);
+
+        switch (nla_type(a)) {
+        case OVS_DEC_TTL_ATTR_ACTION:
+                ovs_nla_free_nested_actions(nla_data(a), nla_len(a));
+                break;
+        }
+}
+
+static void ovs_nla_free_sample_action(const struct nlattr *action)
+{
+        const struct nlattr *a = nla_data(action);
+        int rem = nla_len(action);
+
+        switch (nla_type(a)) {
+        case OVS_SAMPLE_ATTR_ARG:
+                /* The real list of actions follows this attribute. */
+                a = nla_next(a, &rem);
+                ovs_nla_free_nested_actions(a, rem);
+                break;
+        }
+}
+
 static void ovs_nla_free_set_action(const struct nlattr *a)
 {
         const struct nlattr *ovs_key = nla_data(a);
···
         }
 }
 
-void ovs_nla_free_flow_actions(struct sw_flow_actions *sf_acts)
+static void ovs_nla_free_nested_actions(const struct nlattr *actions, int len)
 {
         const struct nlattr *a;
         int rem;
 
-        if (!sf_acts)
+        /* Whenever new actions are added, the need to update this
+         * function should be considered.
+         */
+        BUILD_BUG_ON(OVS_ACTION_ATTR_MAX != 23);
+
+        if (!actions)
                 return;
 
-        nla_for_each_attr(a, sf_acts->actions, sf_acts->actions_len, rem) {
+        nla_for_each_attr(a, actions, len, rem) {
                 switch (nla_type(a)) {
-                case OVS_ACTION_ATTR_SET:
-                        ovs_nla_free_set_action(a);
+                case OVS_ACTION_ATTR_CHECK_PKT_LEN:
+                        ovs_nla_free_check_pkt_len_action(a);
                         break;
+
+                case OVS_ACTION_ATTR_CLONE:
+                        ovs_nla_free_clone_action(a);
+                        break;
+
                 case OVS_ACTION_ATTR_CT:
                         ovs_ct_free_action(a);
                         break;
+
+                case OVS_ACTION_ATTR_DEC_TTL:
+                        ovs_nla_free_dec_ttl_action(a);
+                        break;
+
+                case OVS_ACTION_ATTR_SAMPLE:
+                        ovs_nla_free_sample_action(a);
+                        break;
+
+                case OVS_ACTION_ATTR_SET:
+                        ovs_nla_free_set_action(a);
+                        break;
                 }
         }
+}
 
+void ovs_nla_free_flow_actions(struct sw_flow_actions *sf_acts)
+{
+        if (!sf_acts)
+                return;
+
+        ovs_nla_free_nested_actions(sf_acts->actions, sf_acts->actions_len);
         kfree(sf_acts);
 }
 
···
         if (!start)
                 return -EMSGSIZE;
 
-        err = ovs_nla_put_actions(nla_data(attr), rem, skb);
+        /* Skipping the OVS_CLONE_ATTR_EXEC that is always the first attribute. */
+        attr = nla_next(nla_data(attr), &rem);
+        err = ovs_nla_put_actions(attr, rem, skb);
 
         if (err)
                 nla_nest_cancel(skb, start);
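The freeing walk now recurses into every action type that can embed a nested action list (sample, clone, check_pkt_len, dec_ttl), so per-action allocations such as conntrack state are released at any nesting depth instead of only at the top level. A toy sketch of that depth-first ownership walk, with a hypothetical `struct action` in place of netlink attributes:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of a nested action list: each action may own an
 * allocation and may carry a child list (clone/sample/etc.). */
struct action {
        struct action *next;      /* sibling in the same list */
        struct action *children;  /* nested sub-action list, or NULL */
        int *owned;               /* per-action allocation to release */
};

static int freed;  /* counts releases, for demonstration only */

/* Depth-first release of every action's owned data, mirroring the
 * shape of ovs_nla_free_nested_actions(): recurse first, then free. */
static void free_actions(struct action *a)
{
        for (; a; a = a->next) {
                if (a->children)
                        free_actions(a->children);
                if (a->owned) {
                        free(a->owned);
                        a->owned = NULL;
                        freed++;
                }
        }
}
```

Without the recursion into `children`, the nested action's allocation would simply leak, which is exactly the bug the patch fixes for clone, sample, dec_ttl and check_pkt_len.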