CREDITS
···
 E: mike.kravetz@oracle.com
 D: Maintenance and development of the hugetlb subsystem
 
+N: Seth Jennings
+E: sjenning@redhat.com
+D: Creation and maintenance of zswap
+
+N: Dan Streetman
+E: ddstreet@ieee.org
+D: Maintenance and development of zswap
+D: Creation and maintenance of the zpool API
+
+N: Vitaly Wool
+E: vitaly.wool@konsulko.com
+D: Maintenance and development of zswap
+
 N: Andreas S. Krebs
 E: akrebs@altavista.net
 D: CYPRESS CY82C693 chipset IDE, Digital's PC-Alpha 164SX boards
+11-11
Documentation/ABI/testing/sysfs-class-net-queues
···
-What:		/sys/class/<iface>/queues/rx-<queue>/rps_cpus
+What:		/sys/class/net/<iface>/queues/rx-<queue>/rps_cpus
 Date:		March 2010
 KernelVersion:	2.6.35
 Contact:	netdev@vger.kernel.org
···
 		network device queue. Possible values depend on the number
 		of available CPU(s) in the system.
 
-What:		/sys/class/<iface>/queues/rx-<queue>/rps_flow_cnt
+What:		/sys/class/net/<iface>/queues/rx-<queue>/rps_flow_cnt
 Date:		April 2010
 KernelVersion:	2.6.35
 Contact:	netdev@vger.kernel.org
···
 		Number of Receive Packet Steering flows being currently
 		processed by this particular network device receive queue.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/tx_timeout
+What:		/sys/class/net/<iface>/queues/tx-<queue>/tx_timeout
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
···
 		Indicates the number of transmit timeout events seen by this
 		network interface transmit queue.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/tx_maxrate
+What:		/sys/class/net/<iface>/queues/tx-<queue>/tx_maxrate
 Date:		March 2015
 KernelVersion:	4.1
 Contact:	netdev@vger.kernel.org
···
 		A Mbps max-rate set for the queue, a value of zero means disabled,
 		default is disabled.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/xps_cpus
+What:		/sys/class/net/<iface>/queues/tx-<queue>/xps_cpus
 Date:		November 2010
 KernelVersion:	2.6.38
 Contact:	netdev@vger.kernel.org
···
 		network device transmit queue. Possible values depend on the
 		number of available CPU(s) in the system.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/xps_rxqs
+What:		/sys/class/net/<iface>/queues/tx-<queue>/xps_rxqs
 Date:		June 2018
 KernelVersion:	4.18.0
 Contact:	netdev@vger.kernel.org
···
 		number of available receive queue(s) in the network device.
 		Default is disabled.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
···
 		of this particular network device transmit queue.
 		Default value is 1000.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
···
 		Indicates the number of bytes (objects) in flight on this
 		network device transmit queue.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
···
 		on this network device transmit queue. This value is clamped
 		to be within the bounds defined by limit_max and limit_min.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
···
 		queued on this network device transmit queue. See
 		include/linux/dynamic_queue_limits.h for the default value.
 
-What:		/sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
+What:		/sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
 Date:		November 2011
 KernelVersion:	3.3
 Contact:	netdev@vger.kernel.org
+1
Documentation/ABI/testing/sysfs-platform-silicom
···
 Date:		November 2023
 KernelVersion:	6.7
 Contact:	Henry Shi <henrys@silicom-usa.com>
+Description:
 		This file allow user to power cycle the platform.
 		Default value is 0; when set to 1, it powers down
 		the platform, waits 5 seconds, then powers on the
+2-2
Documentation/accel/introduction.rst
···
 email threads
 -------------
 
-* `Initial discussion on the New subsystem for acceleration devices <https://lkml.org/lkml/2022/7/31/83>`_ - Oded Gabbay (2022)
-* `patch-set to add the new subsystem <https://lkml.org/lkml/2022/10/22/544>`_ - Oded Gabbay (2022)
+* `Initial discussion on the New subsystem for acceleration devices <https://lore.kernel.org/lkml/CAFCwf11=9qpNAepL7NL+YAV_QO=Wv6pnWPhKHKAepK3fNn+2Dg@mail.gmail.com/>`_ - Oded Gabbay (2022)
+* `patch-set to add the new subsystem <https://lore.kernel.org/lkml/20221022214622.18042-1-ogabbay@kernel.org/>`_ - Oded Gabbay (2022)
 
 Conference talks
 ----------------
···
 3.	Do any of the following needed to avoid jitter that your
 	application cannot tolerate:
 
-	a.	Build your kernel with CONFIG_SLUB=y rather than
-		CONFIG_SLAB=y, thus avoiding the slab allocator's periodic
-		use of each CPU's workqueues to run its cache_reap()
-		function.
-	b.	Avoid using oprofile, thus avoiding OS jitter from
+	a.	Avoid using oprofile, thus avoiding OS jitter from
 		wq_sync_buffer().
-	c.	Limit your CPU frequency so that a CPU-frequency
+	b.	Limit your CPU frequency so that a CPU-frequency
 		governor is not required, possibly enlisting the aid of
 		special heatsinks or other cooling technologies.  If done
 		correctly, and if you CPU architecture permits, you should
···
 		WARNING:  Please check your CPU specifications to
 		make sure that this is safe on your particular system.
-	d.	As of v3.18, Christoph Lameter's on-demand vmstat workers
+	c.	As of v3.18, Christoph Lameter's on-demand vmstat workers
 		commit prevents OS jitter due to vmstat_update() on
 		CONFIG_SMP=y systems.  Before v3.18, is not possible
 		to entirely get rid of the OS jitter, but you can
···
 		(based on an earlier one from Gilad Ben-Yossef) that
 		reduces or even eliminates vmstat overhead for some
 		workloads at https://lore.kernel.org/r/00000140e9dfd6bd-40db3d4f-c1be-434f-8132-7820f81bb586-000000@email.amazonses.com.
-	e.	If running on high-end powerpc servers, build with
+	d.	If running on high-end powerpc servers, build with
 		CONFIG_PPC_RTAS_DAEMON=n.  This prevents the RTAS
 		daemon from running on each CPU every second or so.
 		(This will require editing Kconfig files and will defeat
···
 		due to the rtas_event_scan() function.
 		WARNING:  Please check your CPU specifications to
 		make sure that this is safe on your particular system.
-	f.	If running on Cell Processor, build your kernel with
+	e.	If running on Cell Processor, build your kernel with
 		CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
 		spu_gov_work().
 		WARNING:  Please check your CPU specifications to
 		make sure that this is safe on your particular system.
-	g.	If running on PowerMAC, build your kernel with
+	f.	If running on PowerMAC, build your kernel with
 		CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
 		avoiding OS jitter from rackmeter_do_timer().
+17-2
Documentation/dev-tools/kunit/usage.rst
···
 ------------------------
 
 If we do not want to expose functions or variables for testing, one option is to
-conditionally ``#include`` the test file at the end of your .c file. For
-example:
+conditionally export the used symbol. For example:
+
+.. code-block:: c
+
+	/* In my_file.c */
+
+	VISIBLE_IF_KUNIT int do_interesting_thing();
+	EXPORT_SYMBOL_IF_KUNIT(do_interesting_thing);
+
+	/* In my_file.h */
+
+	#if IS_ENABLED(CONFIG_KUNIT)
+	int do_interesting_thing(void);
+	#endif
+
+Alternatively, you could conditionally ``#include`` the test file at the end of
+your .c file. For example:
 
 .. code-block:: c
 
···
 <script type="text/javascript"> <!--
   var sbar = document.getElementsByClassName("sphinxsidebar")[0];
   let currents = document.getElementsByClassName("current")
-  sbar.scrollTop = currents[currents.length - 1].offsetTop;
+  if (currents.length) {
+    sbar.scrollTop = currents[currents.length - 1].offsetTop;
+  }
   --> </script>
+11-12
MAINTAINERS
···
 ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
 M:	Corentin Chary <corentin.chary@gmail.com>
-L:	acpi4asus-user@lists.sourceforge.net
+M:	Luke D. Jones <luke@ljones.dev>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
-W:	http://acpi4asus.sf.net
+W:	https://asus-linux.org/
 F:	drivers/platform/x86/asus*.c
 F:	drivers/platform/x86/eeepc*.c
···
 F:	drivers/platform/x86/dell/dell-wmi-descriptor.c
 
 DELL WMI HARDWARE PRIVACY SUPPORT
-M:	Perry Yuan <Perry.Yuan@dell.com>
 L:	Dell.Client.Kernel@dell.com
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
···
 F:	include/scsi/viosrp.h
 
 IBM Power Virtual SCSI Device Target Driver
-M:	Michael Cyr <mikecyr@linux.ibm.com>
+M:	Tyrel Datwyler <tyreld@linux.ibm.com>
 L:	linux-scsi@vger.kernel.org
 L:	target-devel@vger.kernel.org
 S:	Supported
···
 KERNEL UNIT TESTING FRAMEWORK (KUnit)
 M:	Brendan Higgins <brendanhiggins@google.com>
 M:	David Gow <davidgow@google.com>
+R:	Rae Moar <rmoar@google.com>
 L:	linux-kselftest@vger.kernel.org
 L:	kunit-dev@googlegroups.com
 S:	Maintained
···
 L:	linux-man@vger.kernel.org
 S:	Maintained
 W:	http://www.kernel.org/doc/man-pages
+T:	git git://git.kernel.org/pub/scm/docs/man-pages/man-pages.git
+T:	git git://www.alejandro-colomar.es/src/alx/linux/man-pages/man-pages.git
 
 MANAGEMENT COMPONENT TRANSPORT PROTOCOL (MCTP)
 M:	Jeremy Kerr <jk@codeconstruct.com.au>
···
 F:	drivers/connector/
 F:	drivers/net/
 F:	include/dt-bindings/net/
+F:	include/linux/cn_proc.h
 F:	include/linux/etherdevice.h
 F:	include/linux/fcdevice.h
 F:	include/linux/fddidevice.h
···
 F:	include/linux/if_*
 F:	include/linux/inetdevice.h
 F:	include/linux/netdevice.h
+F:	include/uapi/linux/cn_proc.h
 F:	include/uapi/linux/if_*
 F:	include/uapi/linux/netdevice.h
 X:	drivers/net/wireless/
···
 
 QUALCOMM ETHQOS ETHERNET DRIVER
 M:	Vinod Koul <vkoul@kernel.org>
-R:	Bhupesh Sharma <bhupesh.sharma@linaro.org>
 L:	netdev@vger.kernel.org
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
···
 
 SPARC + UltraSPARC (sparc/sparc64)
 M:	"David S. Miller" <davem@davemloft.net>
+M:	Andreas Larsson <andreas@gaisler.com>
 L:	sparclinux@vger.kernel.org
 S:	Maintained
 Q:	http://patchwork.ozlabs.org/project/sparclinux/list/
···
 F:	Documentation/filesystems/zonefs.rst
 F:	fs/zonefs/
 
-ZPOOL COMPRESSED PAGE STORAGE API
-M:	Dan Streetman <ddstreet@ieee.org>
-L:	linux-mm@kvack.org
-S:	Maintained
-F:	include/linux/zpool.h
-F:	mm/zpool.c
-
 ZR36067 VIDEO FOR LINUX DRIVER
 M:	Corentin Labbe <clabbe@baylibre.com>
 L:	mjpeg-users@lists.sourceforge.net
···
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	Documentation/admin-guide/mm/zswap.rst
+F:	include/linux/zpool.h
 F:	include/linux/zswap.h
+F:	mm/zpool.c
 F:	mm/zswap.c
 
 THE REST
+8-8
Makefile
···
 VERSION = 6
 PATCHLEVEL = 8
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Hurr durr I'ma ninja sloth
 
 # *DOCUMENTATION*
···
 single-build	:=
 
 ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),)
-	ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),)
+    ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),)
 		need-config :=
-	endif
+    endif
 endif
 
 ifneq ($(filter $(no-sync-config-targets), $(MAKECMDGOALS)),)
-	ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),)
+    ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),)
 		may-sync-config :=
-	endif
+    endif
 endif
 
 need-compiler := $(may-sync-config)
···
 # We cannot build single targets and the others at the same time
 ifneq ($(filter $(single-targets), $(MAKECMDGOALS)),)
 	single-build := 1
-	ifneq ($(filter-out $(single-targets), $(MAKECMDGOALS)),)
+    ifneq ($(filter-out $(single-targets), $(MAKECMDGOALS)),)
 		mixed-build := 1
-	endif
+    endif
 endif
 
 # For "make -j clean all", "make -j mrproper defconfig all", etc.
···
 	@echo  '		(sparse by default)'
 	@echo  '  make C=2 [targets] Force check of all c source with $$CHECK'
 	@echo  '  make RECORDMCOUNT_WARN=1 [targets] Warn about ignored mcount sections'
-	@echo  '  make W=n   [targets] Enable extra build checks, n=1,2,3 where'
+	@echo  '  make W=n   [targets] Enable extra build checks, n=1,2,3,c,e where'
 	@echo  '		1: warnings which may be relevant and do not occur too often'
 	@echo  '		2: warnings which occur quite often but may still be relevant'
 	@echo  '		3: more obscure warnings, can most likely be ignored'
+1
arch/Kconfig
···
 	bool "Shadow Call Stack"
 	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
 	depends on DYNAMIC_FTRACE_WITH_ARGS || DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
+	depends on MMU
 	help
 	  This option enables the compiler's Shadow Call Stack, which
 	  uses a shadow stack to protect function return addresses from
···
 	sync_counter();
 	cpu = raw_smp_processor_id();
 	set_my_cpu_offset(per_cpu_offset(cpu));
-	rcutree_report_cpu_starting(cpu);
 
 	cpu_probe();
 	constant_clockevent_init();
+2-2
arch/loongarch/kvm/mmu.c
···
  *
  * There are several ways to safely use this helper:
  *
- * - Check mmu_invalidate_retry_hva() after grabbing the mapping level, before
+ * - Check mmu_invalidate_retry_gfn() after grabbing the mapping level, before
  *   consuming it.  In this case, mmu_lock doesn't need to be held during the
  *   lookup, but it does need to be held while checking the MMU notifier.
  *
···
 
 	/* Check if an invalidation has taken place since we got pfn */
 	spin_lock(&kvm->mmu_lock);
-	if (mmu_invalidate_retry_hva(kvm, mmu_seq, hva)) {
+	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
 		/*
 		 * This can happen when mappings are changed asynchronously, but
 		 * also synchronously if a COW is triggered by
···
 
 EXPORT_SYMBOL(bcm63xx_timer_set);
 
-int bcm63xx_timer_init(void)
+static int bcm63xx_timer_init(void)
 {
 	int ret, irq;
 	u32 reg;
···
 
 #include <asm/cpu-features.h>
 #include <asm/cpu-info.h>
+#include <asm/fpu.h>
 
 #ifdef CONFIG_MIPS_FP_SUPPORT
···
 {
 	struct cpuinfo_mips *c = &boot_cpu_data;
 	struct task_struct *t = current;
+
+	/* Do this early so t->thread.fpu.fcr31 won't be clobbered in case
+	 * we are preempted before the lose_fpu(0) in start_thread.
+	 */
+	lose_fpu(0);
 
 	t->thread.fpu.fcr31 = c->fpu_csr31;
 	switch (state->nan_2008) {
+7-1
arch/mips/kernel/traps.c
···
 
 void reserve_exception_space(phys_addr_t addr, unsigned long size)
 {
-	memblock_reserve(addr, size);
+	/*
+	 * reserve exception space on CPUs other than CPU0
+	 * is too late, since memblock is unavailable when APs
+	 * up
+	 */
+	if (smp_processor_id() == 0)
+		memblock_reserve(addr, size);
 }
 
 void __init *set_except_vector(int n, void *addr)
+3-4
arch/mips/lantiq/prom.c
···
 	prom_init_cmdline();
 
 #if defined(CONFIG_MIPS_MT_SMP)
-	if (cpu_has_mipsmt) {
-		lantiq_smp_ops = vsmp_smp_ops;
+	lantiq_smp_ops = vsmp_smp_ops;
+	if (cpu_has_mipsmt)
 		lantiq_smp_ops.init_secondary = lantiq_init_secondary;
-		register_smp_ops(&lantiq_smp_ops);
-	}
+	register_smp_ops(&lantiq_smp_ops);
 #endif
 }
+3
arch/mips/loongson64/init.c
···
 	if (loongson_sysconf.vgabios_addr)
 		memblock_reserve(virt_to_phys((void *)loongson_sysconf.vgabios_addr),
 				SZ_256K);
+	/* set nid for reserved memory */
+	memblock_set_node((u64)node << 44, (u64)(node + 1) << 44,
+			&memblock.reserved, node);
 }
 
 #ifndef CONFIG_NUMA
+2
arch/mips/loongson64/numa.c
···
 
 		/* Reserve pfn range 0~node[0]->node_start_pfn */
 		memblock_reserve(0, PAGE_SIZE * start_pfn);
+		/* set nid for reserved memory on node 0 */
+		memblock_set_node(0, 1ULL << 44, &memblock.reserved, 0);
 	}
 }
 
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc.
- * Copyright (C) 2004 Christoph Hellwig.
- *
- * Support functions for the HUB ASIC - mostly PIO mapping related.
- */
-
-#include <linux/bitops.h>
-#include <linux/string.h>
-#include <linux/mmzone.h>
-#include <asm/sn/addrs.h>
-#include <asm/sn/arch.h>
-#include <asm/sn/agent.h>
-#include <asm/sn/io.h>
-#include <asm/xtalk/xtalk.h>
-
-
-static int force_fire_and_forget = 1;
-
-/**
- * hub_pio_map  -  establish a HUB PIO mapping
- *
- * @nasid:	nasid to perform PIO mapping on
- * @widget:	widget ID to perform PIO mapping for
- * @xtalk_addr:	xtalk_address that needs to be mapped
- * @size:	size of the PIO mapping
- *
- **/
-unsigned long hub_pio_map(nasid_t nasid, xwidgetnum_t widget,
-			  unsigned long xtalk_addr, size_t size)
-{
-	unsigned i;
-
-	/* use small-window mapping if possible */
-	if ((xtalk_addr % SWIN_SIZE) + size <= SWIN_SIZE)
-		return NODE_SWIN_BASE(nasid, widget) + (xtalk_addr % SWIN_SIZE);
-
-	if ((xtalk_addr % BWIN_SIZE) + size > BWIN_SIZE) {
-		printk(KERN_WARNING "PIO mapping at hub %d widget %d addr 0x%lx"
-				" too big (%ld)\n",
-				nasid, widget, xtalk_addr, size);
-		return 0;
-	}
-
-	xtalk_addr &= ~(BWIN_SIZE-1);
-	for (i = 0; i < HUB_NUM_BIG_WINDOW; i++) {
-		if (test_and_set_bit(i, hub_data(nasid)->h_bigwin_used))
-			continue;
-
-		/*
-		 * The code below does a PIO write to setup an ITTE entry.
-		 *
-		 * We need to prevent other CPUs from seeing our updated
-		 * memory shadow of the ITTE (in the piomap) until the ITTE
-		 * entry is actually set up; otherwise, another CPU might
-		 * attempt a PIO prematurely.
-		 *
-		 * Also, the only way we can know that an entry has been
-		 * received by the hub and can be used by future PIO reads/
-		 * writes is by reading back the ITTE entry after writing it.
-		 *
-		 * For these two reasons, we PIO read back the ITTE entry
-		 * after we write it.
-		 */
-		IIO_ITTE_PUT(nasid, i, HUB_PIO_MAP_TO_MEM, widget, xtalk_addr);
-		__raw_readq(IIO_ITTE_GET(nasid, i));
-
-		return NODE_BWIN_BASE(nasid, widget) + (xtalk_addr % BWIN_SIZE);
-	}
-
-	printk(KERN_WARNING "unable to establish PIO mapping for at"
-			" hub %d widget %d addr 0x%lx\n",
-			nasid, widget, xtalk_addr);
-	return 0;
-}
-
-
-/*
- * hub_setup_prb(nasid, prbnum, credits, conveyor)
- *
- *	Put a PRB into fire-and-forget mode if conveyor isn't set.  Otherwise,
- *	put it into conveyor belt mode with the specified number of credits.
- */
-static void hub_setup_prb(nasid_t nasid, int prbnum, int credits)
-{
-	union iprb_u prb;
-	int prb_offset;
-
-	/*
-	 * Get the current register value.
-	 */
-	prb_offset = IIO_IOPRB(prbnum);
-	prb.iprb_regval = REMOTE_HUB_L(nasid, prb_offset);
-
-	/*
-	 * Clear out some fields.
-	 */
-	prb.iprb_ovflow = 1;
-	prb.iprb_bnakctr = 0;
-	prb.iprb_anakctr = 0;
-
-	/*
-	 * Enable or disable fire-and-forget mode.
-	 */
-	prb.iprb_ff = force_fire_and_forget ? 1 : 0;
-
-	/*
-	 * Set the appropriate number of PIO credits for the widget.
-	 */
-	prb.iprb_xtalkctr = credits;
-
-	/*
-	 * Store the new value to the register.
-	 */
-	REMOTE_HUB_S(nasid, prb_offset, prb.iprb_regval);
-}
-
-/**
- * hub_set_piomode  -  set pio mode for a given hub
- *
- * @nasid:	physical node ID for the hub in question
- *
- * Put the hub into either "PIO conveyor belt" mode or "fire-and-forget" mode.
- * To do this, we have to make absolutely sure that no PIOs are in progress
- * so we turn off access to all widgets for the duration of the function.
- *
- * XXX - This code should really check what kind of widget we're talking
- * to.  Bridges can only handle three requests, but XG will do more.
- * How many can crossbow handle to widget 0?  We're assuming 1.
- *
- * XXX - There is a bug in the crossbow that link reset PIOs do not
- * return write responses.  The easiest solution to this problem is to
- * leave widget 0 (xbow) in fire-and-forget mode at all times.  This
- * only affects pio's to xbow registers, which should be rare.
- **/
-static void hub_set_piomode(nasid_t nasid)
-{
-	u64 ii_iowa;
-	union hubii_wcr_u ii_wcr;
-	unsigned i;
-
-	ii_iowa = REMOTE_HUB_L(nasid, IIO_OUTWIDGET_ACCESS);
-	REMOTE_HUB_S(nasid, IIO_OUTWIDGET_ACCESS, 0);
-
-	ii_wcr.wcr_reg_value = REMOTE_HUB_L(nasid, IIO_WCR);
-
-	if (ii_wcr.iwcr_dir_con) {
-		/*
-		 * Assume a bridge here.
-		 */
-		hub_setup_prb(nasid, 0, 3);
-	} else {
-		/*
-		 * Assume a crossbow here.
-		 */
-		hub_setup_prb(nasid, 0, 1);
-	}
-
-	/*
-	 * XXX - Here's where we should take the widget type into
-	 * when account assigning credits.
-	 */
-	for (i = HUB_WIDGET_ID_MIN; i <= HUB_WIDGET_ID_MAX; i++)
-		hub_setup_prb(nasid, i, 3);
-
-	REMOTE_HUB_S(nasid, IIO_OUTWIDGET_ACCESS, ii_iowa);
-}
-
-/*
- * hub_pio_init  -  PIO-related hub initialization
- *
- * @hub:	hubinfo structure for our hub
- */
-void hub_pio_init(nasid_t nasid)
-{
-	unsigned i;
-
-	/* initialize big window piomaps for this hub */
-	bitmap_zero(hub_data(nasid)->h_bigwin_used, HUB_NUM_BIG_WINDOW);
-	for (i = 0; i < HUB_NUM_BIG_WINDOW; i++)
-		IIO_ITTE_DISABLE(nasid, i);
-
-	hub_set_piomode(nasid);
-}
···
 #include <asm/sn/arch.h>
 #include <asm/sn/agent.h>
 
+#include "ip27-common.h"
+
 #if 0
 #define NODE_NUM_CPUS(n)	CNODE_NUM_CPUS(n)
 #else
···
 typedef unsigned long machreg_t;
 
 static arch_spinlock_t nmi_lock = __ARCH_SPIN_LOCK_UNLOCKED;
-
-/*
- * Let's see what else we need to do here. Set up sp, gp?
- */
-void nmi_dump(void)
-{
-	void cont_nmi_dump(void);
-
-	cont_nmi_dump();
-}
+static void nmi_dump(void);
 
 void install_cpu_nmi_handler(int slice)
 {
···
  * into the eframe format for the node under consideration.
  */
-void nmi_cpu_eframe_save(nasid_t nasid, int slice)
+static void nmi_cpu_eframe_save(nasid_t nasid, int slice)
 {
 	struct reg_struct *nr;
 	int		i;
···
 	pr_emerg("\n");
 }
 
-void nmi_dump_hub_irq(nasid_t nasid, int slice)
+static void nmi_dump_hub_irq(nasid_t nasid, int slice)
 {
 	u64 mask0, mask1, pend0, pend1;
···
 * Copy the cpu registers which have been saved in the IP27prom format
 * into the eframe format for the node under consideration.
 */
-void nmi_node_eframe_save(nasid_t nasid)
+static void nmi_node_eframe_save(nasid_t nasid)
 {
 	int	slice;
···
/*
 * Save the nmi cpu registers for all cpus in the system.
 */
-void
-nmi_eframes_save(void)
+static void nmi_eframes_save(void)
 {
 	nasid_t nasid;
···
 		nmi_node_eframe_save(nasid);
 }
 
-void
-cont_nmi_dump(void)
+static void nmi_dump(void)
 {
 #ifndef REAL_NMI_SIGNAL
 	static atomic_t nmied_cpus = ATOMIC_INIT(0);
···
 #include <asm/ip32/crime.h>
 #include <asm/ip32/mace.h>
 
+#include "ip32-common.h"
+
 struct sgi_crime __iomem *crime;
 struct sgi_mace __iomem *mace;
···
 	       id, rev, field, (unsigned long) CRIME_BASE);
 }
 
-irqreturn_t crime_memerr_intr(unsigned int irq, void *dev_id)
+irqreturn_t crime_memerr_intr(int irq, void *dev_id)
 {
 	unsigned long stat, addr;
 	int fatal = 0;
···
 	return IRQ_HANDLED;
 }
 
-irqreturn_t crime_cpuerr_intr(unsigned int irq, void *dev_id)
+irqreturn_t crime_cpuerr_intr(int irq, void *dev_id)
 {
 	unsigned long stat = crime->cpu_error_stat & CRIME_CPU_ERROR_MASK;
 	unsigned long addr = crime->cpu_error_addr & CRIME_CPU_ERROR_ADDR_MASK;
+2
arch/mips/sgi-ip32/ip32-berr.c
···
 #include <asm/ptrace.h>
 #include <asm/tlbdebug.h>
 
+#include "ip32-common.h"
+
 static int ip32_be_handler(struct pt_regs *regs, int is_fixup)
 {
 	int data = regs->cp0_cause & 4;
···
 #include <asm/ip32/mace.h>
 #include <asm/ip32/ip32_ints.h>
 
+#include "ip32-common.h"
+
 /* issue a PIO read to make sure no PIO writes are pending */
 static inline void flush_crime_bus(void)
 {
···
 * different IRQ map than IRIX uses, but that's OK as Linux irq handling
 * is quite different anyway.
 */
-
-/* Some initial interrupts to set up */
-extern irqreturn_t crime_memerr_intr(int irq, void *dev_id);
-extern irqreturn_t crime_cpuerr_intr(int irq, void *dev_id);
 
/*
 * This is for pure CRIME interrupts - ie not MACE.  The advantage?
···
 #include <asm/ip32/crime.h>
 #include <asm/ip32/ip32_ints.h>
 
+#include "ip32-common.h"
+
 #define POWERDOWN_TIMEOUT	120
 /*
  * Blink frequency during reboot grace period and when panicked.
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __PARISC_EXTABLE_H
+#define __PARISC_EXTABLE_H
+
+#include <asm/ptrace.h>
+#include <linux/compiler.h>
+
+/*
+ * The exception table consists of three addresses:
+ *
+ * - A relative address to the instruction that is allowed to fault.
+ * - A relative address at which the program should continue (fixup routine)
+ * - An asm statement which specifies which CPU register will
+ *   receive -EFAULT when an exception happens if the lowest bit in
+ *   the fixup address is set.
+ *
+ * Note: The register specified in the err_opcode instruction will be
+ * modified at runtime if a fault happens. Register %r0 will be ignored.
+ *
+ * Since relative addresses are used, 32bit values are sufficient even on
+ * 64bit kernel.
+ */
+
+struct pt_regs;
+int fixup_exception(struct pt_regs *regs);
+
+#define ARCH_HAS_RELATIVE_EXTABLE
+struct exception_table_entry {
+	int insn;	/* relative address of insn that is allowed to fault. */
+	int fixup;	/* relative address of fixup routine */
+	int err_opcode; /* sample opcode with register which holds error code */
+};
+
+#define ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr, opcode )\
+	".section __ex_table,\"aw\"\n"			   \
+	".align 4\n"					   \
+	".word (" #fault_addr " - .), (" #except_addr " - .)\n" \
+	opcode "\n"					   \
+	".previous\n"
+
+/*
+ * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() creates a special exception table entry
+ * (with lowest bit set) for which the fault handler in fixup_exception() will
+ * load -EFAULT on fault into the register specified by the err_opcode instruction,
+ * and zeroes the target register in case of a read fault in get_user().
+ */
+#define ASM_EXCEPTIONTABLE_VAR(__err_var)		\
+	int __err_var = 0
+#define ASM_EXCEPTIONTABLE_ENTRY_EFAULT( fault_addr, except_addr, register )\
+	ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr + 1, "or %%r0,%%r0," register)
+
+static inline void swap_ex_entry_fixup(struct exception_table_entry *a,
+				       struct exception_table_entry *b,
+				       struct exception_table_entry tmp,
+				       int delta)
+{
+	a->fixup = b->fixup + delta;
+	b->fixup = tmp.fixup - delta;
+	a->err_opcode = b->err_opcode;
+	b->err_opcode = tmp.err_opcode;
+}
+#define swap_ex_entry_fixup swap_ex_entry_fixup
+
+#endif
···
 		 * Fix up get_user() and put_user().
 		 * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() sets the least-significant
 		 * bit in the relative address of the fixup routine to indicate
-		 * that gr[ASM_EXCEPTIONTABLE_REG] should be loaded with
-		 * -EFAULT to report a userspace access error.
+		 * that the register encoded in the "or %r0,%r0,register"
+		 * opcode should be loaded with -EFAULT to report a userspace
+		 * access error.
 		 */
 		if (fix->fixup & 1) {
-			regs->gr[ASM_EXCEPTIONTABLE_REG] = -EFAULT;
+			int fault_error_reg = fix->err_opcode & 0x1f;
+			if (!WARN_ON(!fault_error_reg))
+				regs->gr[fault_error_reg] = -EFAULT;
+			pr_debug("Unalignment fixup of register %d at %pS\n",
+				fault_error_reg, (void*)regs->iaoq[0]);
 
 			/* zero target register for get_user() */
 			if (parisc_acctyp(0, regs->iir) == VM_READ) {
···
 {
 	unsigned long x = (unsigned long)addr;
 	unsigned long y = x - __START_KERNEL_map;
+	bool ret;
 
 	/* use the carry flag to determine if x was < __START_KERNEL_map */
 	if (unlikely(x > y)) {
···
 			return false;
 	}
 
-	return pfn_valid(x >> PAGE_SHIFT);
+	/*
+	 * pfn_valid() relies on RCU, and may call into the scheduler on exiting
+	 * the critical section. However, this would result in recursion with
+	 * KMSAN. Therefore, disable preemption here, and re-enable preemption
+	 * below while suppressing reschedules to avoid recursion.
+	 *
+	 * Note, this sacrifices occasionally breaking scheduling guarantees.
+	 * Although, a kernel compiled with KMSAN has already given up on any
+	 * performance guarantees due to being heavily instrumented.
+	 */
+	preempt_disable();
+	ret = pfn_valid(x >> PAGE_SHIFT);
+	preempt_enable_no_resched();
+
+	return ret;
 }
 
 #endif /* !MODULE */
···
 	return 0;
 }
 
-static int ivpu_hw_40xx_reset(struct ivpu_device *vdev)
+static int ivpu_hw_40xx_ip_reset(struct ivpu_device *vdev)
 {
 	int ret;
 	u32 val;
···
 	ret = REGB_POLL_FLD(VPU_40XX_BUTTRESS_IP_RESET, TRIGGER, 0, TIMEOUT_US);
 	if (ret)
 		ivpu_err(vdev, "Timed out waiting for RESET completion\n");
+
+	return ret;
+}
+
+static int ivpu_hw_40xx_reset(struct ivpu_device *vdev)
+{
+	int ret = 0;
+
+	if (ivpu_hw_40xx_ip_reset(vdev)) {
+		ivpu_err(vdev, "Failed to reset VPU IP\n");
+		ret = -EIO;
+	}
+
+	if (ivpu_pll_disable(vdev)) {
+		ivpu_err(vdev, "Failed to disable PLL\n");
+		ret = -EIO;
+	}
 
 	return ret;
 }
···
 
 	ivpu_hw_40xx_save_d0i3_entry_timestamp(vdev);
 
-	if (!ivpu_hw_40xx_is_idle(vdev) && ivpu_hw_40xx_reset(vdev))
+	if (!ivpu_hw_40xx_is_idle(vdev) && ivpu_hw_40xx_ip_reset(vdev))
 		ivpu_warn(vdev, "Failed to reset the VPU\n");
 
 	if (ivpu_pll_disable(vdev)) {
···
 static void ivpu_hw_40xx_irq_wdt_nce_handler(struct ivpu_device *vdev)
 {
 	/* TODO: For LNN hang consider engine reset instead of full recovery */
-	ivpu_pm_schedule_recovery(vdev);
+	ivpu_pm_trigger_recovery(vdev, "WDT NCE IRQ");
 }
 
 static void ivpu_hw_40xx_irq_wdt_mss_handler(struct ivpu_device *vdev)
 {
 	ivpu_hw_wdt_disable(vdev);
-	ivpu_pm_schedule_recovery(vdev);
+	ivpu_pm_trigger_recovery(vdev, "WDT MSS IRQ");
 }
 
 static void ivpu_hw_40xx_irq_noc_firewall_handler(struct ivpu_device *vdev)
 {
-	ivpu_pm_schedule_recovery(vdev);
+	ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ");
 }
 
 /* Handler for IRQs from VPU core (irqV) */
···
 	REGB_WR32(VPU_40XX_BUTTRESS_INTERRUPT_STAT, status);
 
 	if (schedule_recovery)
-		ivpu_pm_schedule_recovery(vdev);
+		ivpu_pm_trigger_recovery(vdev, "Buttress IRQ");
 
 	return true;
 }
···
 		return rc;
 }
 
-static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds)
+static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
 {
 	struct cxl_dev_state *cxlds = &mds->cxlds;
 	const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
···
 	INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work);
 
 	/* background command interrupts are optional */
-	if (!(cap & CXLDEV_MBOX_CAP_BG_CMD_IRQ))
+	if (!(cap & CXLDEV_MBOX_CAP_BG_CMD_IRQ) || !irq_avail)
 		return 0;
 
 	msgnum = FIELD_GET(CXLDEV_MBOX_CAP_IRQ_MSGNUM_MASK, cap);
···
 	return devm_add_action_or_reset(mds->cxlds.dev, free_event_buf, buf);
 }
 
-static int cxl_alloc_irq_vectors(struct pci_dev *pdev)
+static bool cxl_alloc_irq_vectors(struct pci_dev *pdev)
 {
 	int nvecs;
 
···
 			      PCI_IRQ_MSIX | PCI_IRQ_MSI);
 	if (nvecs < 1) {
 		dev_dbg(&pdev->dev, "Failed to alloc irq vectors: %d\n", nvecs);
-		return -ENXIO;
+		return false;
 	}
-	return 0;
+	return true;
 }
 
 static irqreturn_t cxl_event_thread(int irq, void *id)
···
 }
 
 static int cxl_event_config(struct pci_host_bridge *host_bridge,
-			    struct cxl_memdev_state *mds)
+			    struct cxl_memdev_state *mds, bool irq_avail)
 {
 	struct cxl_event_interrupt_policy policy;
 	int rc;
···
 	 */
 	if (!host_bridge->native_cxl_error)
 		return 0;
+
+	if (!irq_avail) {
+		dev_info(mds->cxlds.dev, "No interrupt support, disable event processing.\n");
+		return 0;
+	}
 
 	rc = cxl_mem_alloc_event_buf(mds);
 	if (rc)
···
 	struct cxl_register_map map;
 	struct cxl_memdev *cxlmd;
 	int i, rc, pmu_count;
+	bool irq_avail;
 
 	/*
 	 * Double check the anonymous union trickery in struct cxl_regs
···
 	else
 		dev_warn(&pdev->dev, "Media not active (%d)\n", rc);
 
-	rc = cxl_alloc_irq_vectors(pdev);
-	if (rc)
-		return rc;
+	irq_avail = cxl_alloc_irq_vectors(pdev);
 
-	rc = cxl_pci_setup_mailbox(mds);
+	rc = cxl_pci_setup_mailbox(mds, irq_avail);
 	if (rc)
 		return rc;
 
···
 		}
 	}
 
-	rc = cxl_event_config(host_bridge, mds);
+	rc = cxl_event_config(host_bridge, mds, irq_avail);
 	if (rc)
 		return rc;
 
+13-5
drivers/firewire/core-device.c
···
  * @buf:	where to put the string
  * @size:	size of @buf, in bytes
  *
- * The string is taken from a minimal ASCII text descriptor leaf after
- * the immediate entry with @key.  The string is zero-terminated.
- * An overlong string is silently truncated such that it and the
- * zero byte fit into @size.
+ * The string is taken from a minimal ASCII text descriptor leaf just after the entry with the
+ * @key. The string is zero-terminated. An overlong string is silently truncated such that it
+ * and the zero byte fit into @size.
 *
 * Returns strlen(buf) or a negative error code.
 */
···
	for (i = 0; i < ARRAY_SIZE(directories) && !!directories[i]; ++i) {
		int result = fw_csr_string(directories[i], attr->key, buf, bufsize);
		// Detected.
-		if (result >= 0)
+		if (result >= 0) {
			ret = result;
+		} else if (i == 0 && attr->key == CSR_VENDOR) {
+			// Sony DVMC-DA1 has configuration ROM such that the descriptor leaf entry
+			// in the root directory follows to the directory entry for vendor ID
+			// instead of the immediate value for vendor ID.
+			result = fw_csr_string(directories[i], CSR_DIRECTORY | attr->key, buf,
+					       bufsize);
+			if (result >= 0)
+				ret = result;
+		}
	}

	if (ret >= 0) {
+57-28
drivers/firmware/arm_ffa/driver.c
···
	struct work_struct notif_pcpu_work;
	struct work_struct irq_work;
	struct xarray partition_info;
-	unsigned int partition_count;
	DECLARE_HASHTABLE(notifier_hash, ilog2(FFA_MAX_NOTIFICATIONS));
	struct mutex notify_lock; /* lock to protect notifier hashtable  */
 };
 
 static struct ffa_drv_info *drv_info;
+static void ffa_partitions_cleanup(void);
 
 /*
  * The driver must be able to support all the versions from the earliest
···
	void *cb_data;

	partition = xa_load(&drv_info->partition_info, part_id);
+	if (!partition) {
+		pr_err("%s: Invalid partition ID 0x%x\n", __func__, part_id);
+		return;
+	}
+
	read_lock(&partition->rw_lock);
	callback = partition->callback;
	cb_data = partition->cb_data;
···
		return -EOPNOTSUPP;

	partition = xa_load(&drv_info->partition_info, part_id);
+	if (!partition) {
+		pr_err("%s: Invalid partition ID 0x%x\n", __func__, part_id);
+		return -EINVAL;
+	}
+
	write_lock(&partition->rw_lock);

	cb_valid = !!partition->callback;
···
	kfree(pbuf);
 }
 
-static void ffa_setup_partitions(void)
+static int ffa_setup_partitions(void)
 {
-	int count, idx;
+	int count, idx, ret;
	uuid_t uuid;
	struct ffa_device *ffa_dev;
	struct ffa_dev_part_info *info;
···
	count = ffa_partition_probe(&uuid_null, &pbuf);
	if (count <= 0) {
		pr_info("%s: No partitions found, error %d\n", __func__, count);
-		return;
+		return -EINVAL;
	}

	xa_init(&drv_info->partition_info);
···
			ffa_device_unregister(ffa_dev);
			continue;
		}
-		xa_store(&drv_info->partition_info, tpbuf->id, info, GFP_KERNEL);
+		rwlock_init(&info->rw_lock);
+		ret = xa_insert(&drv_info->partition_info, tpbuf->id,
+				info, GFP_KERNEL);
+		if (ret) {
+			pr_err("%s: failed to save partition ID 0x%x - ret:%d\n",
+			       __func__, tpbuf->id, ret);
+			ffa_device_unregister(ffa_dev);
+			kfree(info);
+		}
	}
-	drv_info->partition_count = count;

	kfree(pbuf);

	/* Allocate for the host */
	info = kzalloc(sizeof(*info), GFP_KERNEL);
-	if (!info)
-		return;
-	xa_store(&drv_info->partition_info, drv_info->vm_id, info, GFP_KERNEL);
-	drv_info->partition_count++;
+	if (!info) {
+		pr_err("%s: failed to alloc Host partition ID 0x%x. Abort.\n",
+		       __func__, drv_info->vm_id);
+		/* Already registered devices are freed on bus_exit */
+		ffa_partitions_cleanup();
+		return -ENOMEM;
+	}
+
+	rwlock_init(&info->rw_lock);
+	ret = xa_insert(&drv_info->partition_info, drv_info->vm_id,
+			info, GFP_KERNEL);
+	if (ret) {
+		pr_err("%s: failed to save Host partition ID 0x%x - ret:%d. Abort.\n",
+		       __func__, drv_info->vm_id, ret);
+		kfree(info);
+		/* Already registered devices are freed on bus_exit */
+		ffa_partitions_cleanup();
+	}
+
+	return ret;
 }
 
 static void ffa_partitions_cleanup(void)
 {
-	struct ffa_dev_part_info **info;
-	int idx, count = drv_info->partition_count;
+	struct ffa_dev_part_info *info;
+	unsigned long idx;
 
-	if (!count)
-		return;
+	xa_for_each(&drv_info->partition_info, idx, info) {
+		xa_erase(&drv_info->partition_info, idx);
+		kfree(info);
+	}
 
-	info = kcalloc(count, sizeof(*info), GFP_KERNEL);
-	if (!info)
-		return;
-
-	xa_extract(&drv_info->partition_info, (void **)info, 0, VM_ID_MASK,
-		   count, XA_PRESENT);
-
-	for (idx = 0; idx < count; idx++)
-		kfree(info[idx]);
-	kfree(info);
-
-	drv_info->partition_count = 0;
	xa_destroy(&drv_info->partition_info);
 }
···
 
	ffa_notifications_setup();

-	ffa_setup_partitions();
+	ret = ffa_setup_partitions();
+	if (ret) {
+		pr_err("failed to setup partitions\n");
+		goto cleanup_notifs;
+	}

	ret = ffa_sched_recv_cb_update(drv_info->vm_id, ffa_self_notif_handle,
				       drv_info, true);
···
		pr_info("Failed to register driver sched callback %d\n", ret);

	return 0;
+
+cleanup_notifs:
+	ffa_notifications_cleanup();
 free_pages:
	if (drv_info->tx_buffer)
		free_pages_exact(drv_info->tx_buffer, RXTX_BUFFER_SIZE);
···
	ffa_rxtx_unmap(drv_info->vm_id);
	free_pages_exact(drv_info->tx_buffer, RXTX_BUFFER_SIZE);
	free_pages_exact(drv_info->rx_buffer, RXTX_BUFFER_SIZE);
-	xa_destroy(&drv_info->partition_info);
	kfree(drv_info);
	arm_ffa_bus_exit();
 }
+2-3
drivers/firmware/arm_scmi/clock.c
···
 #include "notify.h"
 
 /* Updated only after ALL the mandatory features for that version are merged */
-#define SCMI_PROTOCOL_SUPPORTED_VERSION		0x20001
+#define SCMI_PROTOCOL_SUPPORTED_VERSION		0x20000
 
 enum scmi_clock_protocol_cmd {
	CLOCK_ATTRIBUTES = 0x3,
···
		scmi_clock_describe_rates_get(ph, clkid, clk);
	}

-	if (PROTOCOL_REV_MAJOR(version) >= 0x2 &&
-	    PROTOCOL_REV_MINOR(version) >= 0x1) {
+	if (PROTOCOL_REV_MAJOR(version) >= 0x3) {
		cinfo->clock_config_set = scmi_clock_config_set_v2;
		cinfo->clock_config_get = scmi_clock_config_get_v2;
	} else {
···
 {
	struct scmi_mailbox *smbox = client_to_scmi_mailbox(cl);

+	/*
+	 * An A2P IRQ is NOT valid when received while the platform still has
+	 * the ownership of the channel, because the platform at first releases
+	 * the SMT channel and then sends the completion interrupt.
+	 *
+	 * This addresses a possible race condition in which a spurious IRQ from
+	 * a previous timed-out reply which arrived late could be wrongly
+	 * associated with the next pending transaction.
+	 */
+	if (cl->knows_txdone && !shmem_channel_free(smbox->shmem)) {
+		dev_warn(smbox->cinfo->dev, "Ignoring spurious A2P IRQ !\n");
+		return;
+	}
+
	scmi_rx_callback(smbox->cinfo, shmem_read_header(smbox->shmem), NULL);
 }
 
+18-5
drivers/firmware/arm_scmi/perf.c
···
 }
 
 static inline void
-process_response_opp_v4(struct perf_dom_info *dom, struct scmi_opp *opp,
-			unsigned int loop_idx,
+process_response_opp_v4(struct device *dev, struct perf_dom_info *dom,
+			struct scmi_opp *opp, unsigned int loop_idx,
			const struct scmi_msg_resp_perf_describe_levels_v4 *r)
 {
	opp->perf = le32_to_cpu(r->opp[loop_idx].perf_val);
···
	/* Note that PERF v4 reports always five 32-bit words */
	opp->indicative_freq = le32_to_cpu(r->opp[loop_idx].indicative_freq);
	if (dom->level_indexing_mode) {
+		int ret;
+
		opp->level_index = le32_to_cpu(r->opp[loop_idx].level_index);

-		xa_store(&dom->opps_by_idx, opp->level_index, opp, GFP_KERNEL);
-		xa_store(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
+		ret = xa_insert(&dom->opps_by_idx, opp->level_index, opp,
+				GFP_KERNEL);
+		if (ret)
+			dev_warn(dev,
+				 "Failed to add opps_by_idx at %d - ret:%d\n",
+				 opp->level_index, ret);
+
+		ret = xa_insert(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL);
+		if (ret)
+			dev_warn(dev,
+				 "Failed to add opps_by_lvl at %d - ret:%d\n",
+				 opp->perf, ret);
+
		hash_add(dom->opps_by_freq, &opp->hash, opp->indicative_freq);
	}
 }
···
	if (PROTOCOL_REV_MAJOR(p->version) <= 0x3)
		process_response_opp(opp, st->loop_idx, response);
	else
-		process_response_opp_v4(p->perf_dom, opp, st->loop_idx,
+		process_response_opp_v4(ph->dev, p->perf_dom, opp, st->loop_idx,
					response);
	p->perf_dom->opp_count++;

+8-4
drivers/firmware/arm_scmi/raw_mode.c
···
	int i;

	for (i = 0; i < num_chans; i++) {
-		void *xret;
		struct scmi_raw_queue *q;

		q = scmi_raw_queue_init(raw);
···
			goto err_xa;
		}

-		xret = xa_store(&raw->chans_q, channels[i], q,
+		ret = xa_insert(&raw->chans_q, channels[i], q,
				GFP_KERNEL);
-		if (xa_err(xret)) {
+		if (ret) {
			dev_err(dev,
				"Fail to allocate Raw queue 0x%02X\n",
				channels[i]);
-			ret = xa_err(xret);
			goto err_xa;
		}
	}
···
	dev = raw->handle->dev;
	q = scmi_raw_queue_select(raw, idx,
				  SCMI_XFER_IS_CHAN_SET(xfer) ? chan_id : 0);
+	if (!q) {
+		dev_warn(dev,
+			 "RAW[%d] - NO queue for chan 0x%X. Dropping report.\n",
+			 idx, chan_id);
+		return;
+	}

	/*
	 * Grab the msg_q_lock upfront to avoid a possible race between
···
	struct amdgpu_bo_param bp;
	dma_addr_t dma_addr;
	struct page *p;
+	unsigned long x;
	int ret;

	if (adev->gart.bo != NULL)
···
	p = alloc_pages(gfp_flags, order);
	if (!p)
		return -ENOMEM;
+
+	/* assign pages to this device */
+	for (x = 0; x < (1UL << order); x++)
+		p[x].mapping = adev->mman.bdev.dev_mapping;

	/* If the hardware does not support UTCL2 snooping of the CPU caches
	 * then set_memory_wc() could be used as a workaround to mark the pages
···
	unsigned int order = get_order(adev->gart.table_size);
	struct sg_table *sg = adev->gart.bo->tbo.sg;
	struct page *p;
+	unsigned long x;
	int ret;

	ret = amdgpu_bo_reserve(adev->gart.bo, false);
···
	sg_free_table(sg);
	kfree(sg);
	p = virt_to_page(adev->gart.ptr);
+	for (x = 0; x < (1UL << order); x++)
+		p[x].mapping = NULL;
	__free_pages(p, order);

	adev->gart.ptr = NULL;
···
	bool EnableMinDispClkODM;
	bool enable_auto_dpm_test_logs;
	unsigned int disable_ips;
+	unsigned int disable_ips_in_vpb;
 };
 
 enum visual_confirm {
+5
drivers/gpu/drm/amd/display/dc/dc_types.h
···
	Replay_Msg_Not_Support = -1,
	Replay_Set_Timing_Sync_Supported,
	Replay_Set_Residency_Frameupdate_Timer,
+	Replay_Set_Pseudo_VTotal,
 };
 
 union replay_error_status {
···
	uint16_t coasting_vtotal_table[PR_COASTING_TYPE_NUM];
	/* Maximum link off frame count */
	enum replay_link_off_frame_count_level link_off_frame_count_level;
+	/* Replay pseudo vtotal for abm + ips on full screen video which can improve ips residency */
+	uint16_t abm_with_ips_on_full_screen_video_pseudo_vtotal;
+	/* Replay last pseudo vtotal set to DMUB */
+	uint16_t last_pseudo_vtotal;
 };
 
 /* To split out "global" and "per-panel" config settings.
···
 bool dcn35_apply_idle_power_optimizations(struct dc *dc, bool enable)
 {
	struct dc_link *edp_links[MAX_NUM_EDP];
-	int edp_num;
+	int i, edp_num;
	if (dc->debug.dmcub_emulation)
		return true;
···
		dc_get_edp_links(dc, edp_links, &edp_num);
		if (edp_num == 0 || edp_num > 1)
			return false;
+
+		for (i = 0; i < dc->current_state->stream_count; ++i) {
+			struct dc_stream_state *stream = dc->current_state->streams[i];
+
+			if (!stream->dpms_off && !dc_is_embedded_signal(stream->signal))
+				return false;
+		}
	}

	// TODO: review other cases when idle optimization is allowed
···
	uint32_t extended_size;
	/* size of the remaining partitioned address space */
	uint32_t size_left_to_read;
-	enum dc_status status;
+	enum dc_status status = DC_ERROR_UNEXPECTED;
	/* size of the next partition to be read from */
	uint32_t partition_size;
	uint32_t data_index = 0;
···
 {
	uint32_t partition_size;
	uint32_t data_index = 0;
-	enum dc_status status;
+	enum dc_status status = DC_ERROR_UNEXPECTED;

	while (size) {
		partition_size = dpcd_get_next_partition_size(address, size);
+47
drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
···
 #define REPLAY_RESIDENCY_MODE_MASK             (0x1 << REPLAY_RESIDENCY_MODE_SHIFT)
 # define REPLAY_RESIDENCY_MODE_PHY             (0x0 << REPLAY_RESIDENCY_MODE_SHIFT)
 # define REPLAY_RESIDENCY_MODE_ALPM            (0x1 << REPLAY_RESIDENCY_MODE_SHIFT)
+# define REPLAY_RESIDENCY_MODE_IPS             0x10
 
 #define REPLAY_RESIDENCY_ENABLE_MASK           (0x1 << REPLAY_RESIDENCY_ENABLE_SHIFT)
 # define REPLAY_RESIDENCY_DISABLE              (0x0 << REPLAY_RESIDENCY_ENABLE_SHIFT)
···
	 * Set Residency Frameupdate Timer.
	 */
	DMUB_CMD__REPLAY_SET_RESIDENCY_FRAMEUPDATE_TIMER = 6,
+	/**
+	 * Set pseudo vtotal
+	 */
+	DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL = 7,
 };
 
 /**
···
 };
 
 /**
+ * Data passed from driver to FW in a DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command.
+ */
+struct dmub_cmd_replay_set_pseudo_vtotal {
+	/**
+	 * Panel Instance.
+	 * Panel isntance to identify which replay_state to use
+	 * Currently the support is only for 0 or 1
+	 */
+	uint8_t panel_inst;
+	/**
+	 * Source Vtotal that Replay + IPS + ABM full screen video src vtotal
+	 */
+	uint16_t vtotal;
+	/**
+	 * Explicit padding to 4 byte boundary.
+	 */
+	uint8_t pad;
+};
+
+/**
  * Definition of a DMUB_CMD__SET_REPLAY_POWER_OPT command.
  */
 struct dmub_rb_cmd_replay_set_power_opt {
···
 };
 
 /**
+ * Definition of a DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command.
+ */
+struct dmub_rb_cmd_replay_set_pseudo_vtotal {
+	/**
+	 * Command header.
+	 */
+	struct dmub_cmd_header header;
+	/**
+	 * Definition of DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command.
+	 */
+	struct dmub_cmd_replay_set_pseudo_vtotal data;
+};
+
+/**
  * Data passed from driver to FW in  DMUB_CMD__REPLAY_SET_RESIDENCY_FRAMEUPDATE_TIMER command.
  */
 struct dmub_cmd_replay_frameupdate_timer_data {
···
	 * Definition of DMUB_CMD__REPLAY_SET_RESIDENCY_FRAMEUPDATE_TIMER command data.
	 */
	struct dmub_cmd_replay_frameupdate_timer_data timer_data;
+	/**
+	 * Definition of DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command data.
+	 */
+	struct dmub_cmd_replay_set_pseudo_vtotal pseudo_vtotal_data;
 };
 
 /**
···
	 * Definition of a DMUB_CMD__REPLAY_SET_RESIDENCY_FRAMEUPDATE_TIMER command.
	 */
	struct dmub_rb_cmd_replay_set_frameupdate_timer replay_set_frameupdate_timer;
+	/**
+	 * Definition of a DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command.
+	 */
+	struct dmub_rb_cmd_replay_set_pseudo_vtotal replay_set_pseudo_vtotal;
 };
 
 /**
···
 
 #include <linux/firmware.h>
 #include <linux/pci.h>
+#include <linux/power_supply.h>
 #include <linux/reboot.h>
 
 #include "amdgpu.h"
···
	 * handle the switch automatically. Driver involvement
	 * is unnecessary.
	 */
-	if (!smu->dc_controlled_by_gpio) {
-		ret = smu_set_power_source(smu,
-					   adev->pm.ac_power ? SMU_POWER_SOURCE_AC :
-					   SMU_POWER_SOURCE_DC);
-		if (ret) {
-			dev_err(adev->dev, "Failed to switch to %s mode!\n",
-				adev->pm.ac_power ? "AC" : "DC");
-			return ret;
-		}
-	}
+	adev->pm.ac_power = power_supply_is_system_supplied() > 0;
+	smu_set_ac_dc(smu);

	if ((amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 1)) ||
	    (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 3)))
···
	case SMU_PPT_LIMIT_CURRENT:
		switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
		case IP_VERSION(13, 0, 2):
+		case IP_VERSION(13, 0, 6):
		case IP_VERSION(11, 0, 7):
		case IP_VERSION(11, 0, 11):
		case IP_VERSION(11, 0, 12):
+2
drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
···
		case 0x3:
			dev_dbg(adev->dev, "Switched to AC mode!\n");
			schedule_work(&smu->interrupt_work);
+			adev->pm.ac_power = true;
			break;
		case 0x4:
			dev_dbg(adev->dev, "Switched to DC mode!\n");
			schedule_work(&smu->interrupt_work);
+			adev->pm.ac_power = false;
			break;
		case 0x7:
			/*
+2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
···
		case 0x3:
			dev_dbg(adev->dev, "Switched to AC mode!\n");
			smu_v13_0_ack_ac_dc_interrupt(smu);
+			adev->pm.ac_power = true;
			break;
		case 0x4:
			dev_dbg(adev->dev, "Switched to DC mode!\n");
			smu_v13_0_ack_ac_dc_interrupt(smu);
+			adev->pm.ac_power = false;
			break;
		case 0x7:
			/*
···
 * - 0 if the new state is valid
 * - %-ENOSPC, if the new state is invalid, because of BW limitation
 *   @failing_port is set to:
+ *
 *   - The non-root port where a BW limit check failed
 *     with all the ports downstream of @failing_port passing
 *     the BW limit check.
···
 *   - %NULL if the BW limit check failed at the root port
 *     with all the ports downstream of the root port passing
 *     the BW limit check.
+ *
 * - %-EINVAL, if the new state is invalid, because the root port has
 *   too many payloads.
 */
+2-2
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
···
 static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win,
				 struct drm_framebuffer *fb)
 {
-	struct exynos_drm_plane plane = ctx->planes[win];
+	struct exynos_drm_plane *plane = &ctx->planes[win];
	struct exynos_drm_plane_state *state =
-		to_exynos_plane_state(plane.base.state);
+		to_exynos_plane_state(plane->base.state);
	unsigned int alpha = state->base.alpha;
	unsigned int pixel_alpha;
	unsigned long val;
···
	for (i = 0; i < ctx->num_clocks; i++) {
		ret = clk_prepare_enable(ctx->clocks[i]);
		if (ret) {
-			while (--i > 0)
+			while (--i >= 0)
				clk_disable_unprepare(ctx->clocks[i]);
			return ret;
		}
-1
drivers/gpu/drm/i915/Makefile
···
 subdir-ccflags-y += $(call cc-option, -Wpacked-not-aligned)
 subdir-ccflags-y += $(call cc-option, -Wformat-overflow)
 subdir-ccflags-y += $(call cc-option, -Wformat-truncation)
-subdir-ccflags-y += $(call cc-option, -Wstringop-overflow)
 subdir-ccflags-y += $(call cc-option, -Wstringop-truncation)
 # The following turn off the warnings enabled by -Wextra
 ifeq ($(findstring 2, $(KBUILD_EXTRA_WARN)),)
···
	 * can rely on frontbuffer tracking.
	 */
	mask = EDP_PSR_DEBUG_MASK_MEMUP |
-	       EDP_PSR_DEBUG_MASK_HPD |
-	       EDP_PSR_DEBUG_MASK_LPSP;
+	       EDP_PSR_DEBUG_MASK_HPD;
+
+	/*
+	 * For some unknown reason on HSW non-ULT (or at least on
+	 * Dell Latitude E6540) external displays start to flicker
+	 * when PSR is enabled on the eDP. SR/PC6 residency is much
+	 * higher than should be possible with an external display.
+	 * As a workaround leave LPSP unmasked to prevent PSR entry
+	 * when external displays are active.
+	 */
+	if (DISPLAY_VER(dev_priv) >= 8 || IS_HASWELL_ULT(dev_priv))
+		mask |= EDP_PSR_DEBUG_MASK_LPSP;

	if (DISPLAY_VER(dev_priv) < 20)
		mask |= EDP_PSR_DEBUG_MASK_MAX_SLEEP;
+5-23
drivers/gpu/drm/nouveau/nouveau_fence.c
···
	if (test_bit(DMA_FENCE_FLAG_USER_BITS, &fence->base.flags)) {
		struct nouveau_fence_chan *fctx = nouveau_fctx(fence);

-		if (atomic_dec_and_test(&fctx->notify_ref))
+		if (!--fctx->notify_ref)
			drop = 1;
	}

···
 void
 nouveau_fence_context_del(struct nouveau_fence_chan *fctx)
 {
-	cancel_work_sync(&fctx->allow_block_work);
	nouveau_fence_context_kill(fctx, 0);
	nvif_event_dtor(&fctx->event);
	fctx->dead = 1;
···
	return ret;
 }
 
-static void
-nouveau_fence_work_allow_block(struct work_struct *work)
-{
-	struct nouveau_fence_chan *fctx = container_of(work, struct nouveau_fence_chan,
-						       allow_block_work);
-
-	if (atomic_read(&fctx->notify_ref) == 0)
-		nvif_event_block(&fctx->event);
-	else
-		nvif_event_allow(&fctx->event);
-}
-
 void
 nouveau_fence_context_new(struct nouveau_channel *chan, struct nouveau_fence_chan *fctx)
 {
···
	} args;
	int ret;

-	INIT_WORK(&fctx->allow_block_work, nouveau_fence_work_allow_block);
	INIT_LIST_HEAD(&fctx->flip);
	INIT_LIST_HEAD(&fctx->pending);
	spin_lock_init(&fctx->lock);
···
	struct nouveau_fence *fence = from_fence(f);
	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
	bool ret;
-	bool do_work;

-	if (atomic_inc_return(&fctx->notify_ref) == 0)
-		do_work = true;
+	if (!fctx->notify_ref++)
+		nvif_event_allow(&fctx->event);

	ret = nouveau_fence_no_signaling(f);
	if (ret)
		set_bit(DMA_FENCE_FLAG_USER_BITS, &fence->base.flags);
-	else if (atomic_dec_and_test(&fctx->notify_ref))
-		do_work = true;
-
-	if (do_work)
-		schedule_work(&fctx->allow_block_work);
+	else if (!--fctx->notify_ref)
+		nvif_event_block(&fctx->event);

	return ret;
 }
···
	depends on OF
	depends on DRM_MIPI_DSI
	depends on BACKLIGHT_CLASS_DEVICE
+	select DRM_DISPLAY_DP_HELPER
+	select DRM_DISPLAY_HELPER
	help
	  Say Y here if you want to enable support for Raydium RM692E5-based
	  display panels, such as the one found in the Fairphone 5 smartphone.
···
	struct drm_sched_entity *entity;
	struct dma_fence *fence;
	struct drm_sched_fence *s_fence;
-	struct drm_sched_job *sched_job;
+	struct drm_sched_job *sched_job = NULL;
	int r;

	if (READ_ONCE(sched->pause_submit))
		return;

-	entity = drm_sched_select_entity(sched);
-	if (!entity)
-		return;
-
-	sched_job = drm_sched_entity_pop_job(entity);
-	if (!sched_job) {
-		complete_all(&entity->entity_idle);
-		return;	/* No more work */
+	/* Find entity with a ready job */
+	while (!sched_job && (entity = drm_sched_select_entity(sched))) {
+		sched_job = drm_sched_entity_pop_job(entity);
+		if (!sched_job)
+			complete_all(&entity->entity_idle);
	}
+	if (!entity)
+		return;	/* No more work */

	s_fence = sched_job->s_fence;

+4-1
drivers/gpu/drm/tests/drm_mm_test.c
···
 
 static void drm_test_mm_debug(struct kunit *test)
 {
+	struct drm_printer p = drm_debug_printer(test->name);
	struct drm_mm mm;
	struct drm_mm_node nodes[2];

	/* Create a small drm_mm with a couple of nodes and a few holes, and
	 * check that the debug iterator doesn't explode over a trivial drm_mm.
	 */
-
	drm_mm_init(&mm, 0, 4096);

	memset(nodes, 0, sizeof(nodes));
···
	KUNIT_ASSERT_FALSE_MSG(test, drm_mm_reserve_node(&mm, &nodes[1]),
			       "failed to reserve node[0] {start=%lld, size=%lld)\n",
			       nodes[0].start, nodes[0].size);
+
+	drm_mm_print(&mm, &p);
+	KUNIT_SUCCEED(test);
 }
 
 static bool expect_insert(struct kunit *test, struct drm_mm *mm,
+9-3
drivers/gpu/drm/ttm/ttm_device.c
···
	ttm_pool_mgr_init(num_pages);
	ttm_tt_mgr_init(num_pages, num_dma32);

-	glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32);
+	glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32 |
+					   __GFP_NOWARN);

+	/* Retry without GFP_DMA32 for platforms DMA32 is not available */
	if (unlikely(glob->dummy_read_page == NULL)) {
-		ret = -ENOMEM;
-		goto out;
+		glob->dummy_read_page = alloc_page(__GFP_ZERO);
+		if (unlikely(glob->dummy_read_page == NULL)) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		pr_warn("Using GFP_DMA32 fallback for dummy_read_page\n");
	}

	INIT_LIST_HEAD(&glob->device_list);
+28-7
drivers/gpu/drm/v3d/v3d_submit.c
···
	return 0;
 }
 
+static void
+v3d_job_deallocate(void **container)
+{
+	kfree(*container);
+	*container = NULL;
+}
+
 static int
 v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv,
	     struct v3d_job *job, void (*free)(struct kref *ref),
···
 
	ret = v3d_job_init(v3d, file_priv, &(*job)->base,
			   v3d_job_free, args->in_sync, se, V3D_CSD);
-	if (ret)
+	if (ret) {
+		v3d_job_deallocate((void *)job);
		return ret;
+	}

	ret = v3d_job_allocate((void *)clean_job, sizeof(**clean_job));
	if (ret)
···
 
	ret = v3d_job_init(v3d, file_priv, *clean_job,
			   v3d_job_free, 0, NULL, V3D_CACHE_CLEAN);
-	if (ret)
+	if (ret) {
+		v3d_job_deallocate((void *)clean_job);
		return ret;
+	}

	(*job)->args = *args;

···
 
	ret = v3d_job_init(v3d, file_priv, &render->base,
			   v3d_render_job_free, args->in_sync_rcl, &se, V3D_RENDER);
-	if (ret)
+	if (ret) {
+		v3d_job_deallocate((void *)&render);
		goto fail;
+	}

	render->start = args->rcl_start;
	render->end = args->rcl_end;
···
 
		ret = v3d_job_init(v3d, file_priv, &bin->base,
				   v3d_job_free, args->in_sync_bcl, &se, V3D_BIN);
-		if (ret)
+		if (ret) {
+			v3d_job_deallocate((void *)&bin);
			goto fail;
+		}

		bin->start = args->bcl_start;
		bin->end = args->bcl_end;
···
 
		ret = v3d_job_init(v3d, file_priv, clean_job,
				   v3d_job_free, 0, NULL, V3D_CACHE_CLEAN);
-		if (ret)
+		if (ret) {
+			v3d_job_deallocate((void *)&clean_job);
			goto fail;
+		}

		last_job = clean_job;
	} else {
···
 
	ret = v3d_job_init(v3d, file_priv, &job->base,
			   v3d_job_free, args->in_sync, &se, V3D_TFU);
-	if (ret)
+	if (ret) {
+		v3d_job_deallocate((void *)&job);
		goto fail;
+	}

	job->base.bo = kcalloc(ARRAY_SIZE(args->bo_handles),
			       sizeof(*job->base.bo), GFP_KERNEL);
···
 
	ret = v3d_job_init(v3d, file_priv, &cpu_job->base,
			   v3d_job_free, 0, &se, V3D_CPU);
-	if (ret)
+	if (ret) {
+		v3d_job_deallocate((void *)&cpu_job);
		goto fail;
+	}

	clean_job = cpu_job->indirect_csd.clean_job;
	csd_job = cpu_job->indirect_csd.job;
···
 }
 EXPORT_SYMBOL_GPL(call_hid_bpf_rdesc_fixup);
 
+/* Disables missing prototype warnings */
+__bpf_kfunc_start_defs();
+
 /**
 * hid_bpf_get_data - Get the kernel memory pointer associated with the context @ctx
 *
···
 *
 * @returns %NULL on error, an %__u8 memory pointer on success
 */
-noinline __u8 *
+__bpf_kfunc __u8 *
 hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t rdwr_buf_size)
 {
	struct hid_bpf_ctx_kern *ctx_kern;
···
 
	return ctx_kern->data + offset;
 }
+__bpf_kfunc_end_defs();
 
 /*
  * The following set contains all functions we agree BPF programs
···
	return 0;
 }
 
-/**
- * hid_bpf_attach_prog - Attach the given @prog_fd to the given HID device
- *
- * @hid_id: the system unique identifier of the HID device
- * @prog_fd: an fd in the user process representing the program to attach
- * @flags: any logical OR combination of &enum hid_bpf_attach_flags
- *
- * @returns an fd of a bpf_link object on success (> %0), an error code otherwise.
- * Closing this fd will detach the program from the HID device (unless the bpf_link
- * is pinned to the BPF file system).
- */
-/* called from syscall */
-noinline int
-hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags)
+static int do_hid_bpf_attach_prog(struct hid_device *hdev, int prog_fd, struct bpf_prog *prog,
+				  __u32 flags)
 {
-	struct hid_device *hdev;
-	struct device *dev;
-	int fd, err, prog_type = hid_bpf_get_prog_attach_type(prog_fd);
+	int fd, err, prog_type;

-	if (!hid_bpf_ops)
-		return -EINVAL;
-
+	prog_type = hid_bpf_get_prog_attach_type(prog);
	if (prog_type < 0)
		return prog_type;

	if (prog_type >= HID_BPF_PROG_TYPE_MAX)
		return -EINVAL;
-
-	if ((flags & ~HID_BPF_FLAG_MASK))
-		return -EINVAL;
-
-	dev = bus_find_device(hid_bpf_ops->bus_type, NULL, &hid_id, device_match_id);
-	if (!dev)
-		return -EINVAL;
-
-	hdev = to_hid_device(dev);

	if (prog_type == HID_BPF_PROG_TYPE_DEVICE_EVENT) {
		err = hid_bpf_allocate_event_data(hdev);
···
			return err;
	}

-	fd = __hid_bpf_attach_prog(hdev, prog_type, prog_fd, flags);
+	fd = __hid_bpf_attach_prog(hdev, prog_type, prog_fd, prog, flags);
	if (fd < 0)
		return fd;

···
	return fd;
 }
 
+/* Disables missing prototype warnings */
+__bpf_kfunc_start_defs();
+
+/**
+ * hid_bpf_attach_prog - Attach the given @prog_fd to the given HID device
+ *
+ * @hid_id: the system unique identifier of the HID device
+ * @prog_fd: an fd in the user process representing the program to attach
+ * @flags: any logical OR combination of &enum hid_bpf_attach_flags
+ *
+ * @returns an fd of a bpf_link object on success (> %0), an error code otherwise.
+ * Closing this fd will detach the program from the HID device (unless the bpf_link
+ * is pinned to the BPF file system).
+ */
+/* called from syscall */
+__bpf_kfunc int
+hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags)
+{
+	struct hid_device *hdev;
+	struct bpf_prog *prog;
+	struct device *dev;
+	int err, fd;
+
+	if (!hid_bpf_ops)
+		return -EINVAL;
+
+	if ((flags & ~HID_BPF_FLAG_MASK))
+		return -EINVAL;
+
+	dev = bus_find_device(hid_bpf_ops->bus_type, NULL, &hid_id, device_match_id);
+	if (!dev)
+		return -EINVAL;
+
+	hdev = to_hid_device(dev);
+
+	/*
+	 * take a ref on the prog itself, it will be released
+	 * on errors or when it'll be detached
+	 */
+	prog = bpf_prog_get(prog_fd);
+	if (IS_ERR(prog)) {
+		err = PTR_ERR(prog);
goto out_dev_put;345345+ }346346+347347+ fd = do_hid_bpf_attach_prog(hdev, prog_fd, prog, flags);348348+ if (fd < 0) {349349+ err = fd;350350+ goto out_prog_put;351351+ }352352+353353+ return fd;354354+355355+ out_prog_put:356356+ bpf_prog_put(prog);357357+ out_dev_put:358358+ put_device(dev);359359+ return err;360360+}361361+281362/**282363 * hid_bpf_allocate_context - Allocate a context to the given HID device283364 *···345306 *346307 * @returns A pointer to &struct hid_bpf_ctx on success, %NULL on error.347308 */348348-noinline struct hid_bpf_ctx *309309+__bpf_kfunc struct hid_bpf_ctx *349310hid_bpf_allocate_context(unsigned int hid_id)350311{351312 struct hid_device *hdev;···362323 hdev = to_hid_device(dev);363324364325 ctx_kern = kzalloc(sizeof(*ctx_kern), GFP_KERNEL);365365- if (!ctx_kern)326326+ if (!ctx_kern) {327327+ put_device(dev);366328 return NULL;329329+ }367330368331 ctx_kern->ctx.hid = hdev;369332···378337 * @ctx: the HID-BPF context to release379338 *380339 */381381-noinline void340340+__bpf_kfunc void382341hid_bpf_release_context(struct hid_bpf_ctx *ctx)383342{384343 struct hid_bpf_ctx_kern *ctx_kern;344344+ struct hid_device *hid;385345386346 ctx_kern = container_of(ctx, struct hid_bpf_ctx_kern, ctx);347347+ hid = (struct hid_device *)ctx_kern->ctx.hid; /* ignore const */387348388349 kfree(ctx_kern);350350+351351+ /* get_device() is called by bus_find_device() */352352+ put_device(&hid->dev);389353}390354391355/**···404358 *405359 * @returns %0 on success, a negative error code otherwise.406360 */407407-noinline int361361+__bpf_kfunc int408362hid_bpf_hw_request(struct hid_bpf_ctx *ctx, __u8 *buf, size_t buf__sz,409363 enum hid_report_type rtype, enum hid_class_request reqtype)410364{···472426 kfree(dma_data);473427 return ret;474428}429429+__bpf_kfunc_end_defs();475430476431/* our HID-BPF entrypoints */477432BTF_SET8_START(hid_bpf_fmodret_ids)
···196196static void hid_bpf_release_progs(struct work_struct *work)197197{198198 int i, j, n, map_fd = -1;199199+ bool hdev_destroyed;199200200201 if (!jmp_table.map)201202 return;···221220 if (entry->hdev) {222221 hdev = entry->hdev;223222 type = entry->type;223223+ /*224224+ * hdev is still valid, even if we are called after hid_destroy_device():225225+ * when hid_bpf_attach() gets called, it takes a ref on the dev through226226+ * bus_find_device()227227+ */228228+ hdev_destroyed = hdev->bpf.destroyed;224229225230 hid_bpf_populate_hdev(hdev, type);226231···239232 if (test_bit(next->idx, jmp_table.enabled))240233 continue;241234242242- if (next->hdev == hdev && next->type == type)235235+ if (next->hdev == hdev && next->type == type) {236236+ /*237237+ * clear the hdev reference and decrement the device ref238238+ * that was taken during bus_find_device() while calling239239+ * hid_bpf_attach()240240+ */243241 next->hdev = NULL;242242+ put_device(&hdev->dev);243243+ }244244 }245245246246- /* if type was rdesc fixup, reconnect device */247247- if (type == HID_BPF_PROG_TYPE_RDESC_FIXUP)246246+ /* if type was rdesc fixup and the device is not gone, reconnect device */247247+ if (type == HID_BPF_PROG_TYPE_RDESC_FIXUP && !hdev_destroyed)248248 hid_bpf_reconnect(hdev);249249 }250250 }···347333 return err;348334}349335350350-int hid_bpf_get_prog_attach_type(int prog_fd)336336+int hid_bpf_get_prog_attach_type(struct bpf_prog *prog)351337{352352- struct bpf_prog *prog = NULL;353353- int i;354338 int prog_type = HID_BPF_PROG_TYPE_UNDEF;355355-356356- prog = bpf_prog_get(prog_fd);357357- if (IS_ERR(prog))358358- return PTR_ERR(prog);339339+ int i;359340360341 for (i = 0; i < HID_BPF_PROG_TYPE_MAX; i++) {361342 if (hid_bpf_btf_ids[i] == prog->aux->attach_btf_id) {···358349 break;359350 }360351 }361361-362362- bpf_prog_put(prog);363352364353 return prog_type;365354}···395388/* called from syscall */396389noinline int397390__hid_bpf_attach_prog(struct hid_device *hdev, enum 
hid_bpf_prog_type prog_type,398398- int prog_fd, __u32 flags)391391+ int prog_fd, struct bpf_prog *prog, __u32 flags)399392{400393 struct bpf_link_primer link_primer;401394 struct hid_bpf_link *link;402402- struct bpf_prog *prog = NULL;403395 struct hid_bpf_prog_entry *prog_entry;404396 int cnt, err = -EINVAL, prog_table_idx = -1;405405-406406- /* take a ref on the prog itself */407407- prog = bpf_prog_get(prog_fd);408408- if (IS_ERR(prog))409409- return PTR_ERR(prog);410397411398 mutex_lock(&hid_bpf_attach_lock);412399···468467 err_unlock:469468 mutex_unlock(&hid_bpf_attach_lock);470469471471- bpf_prog_put(prog);472470 kfree(link);473471474472 return err;
···46104610 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC088) },46114611 { /* Logitech G Pro X Superlight Gaming Mouse over USB */46124612 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC094) },46134613+ { /* Logitech G Pro X Superlight 2 Gaming Mouse over USB */46144614+ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC09b) },4613461546144616 { /* G935 Gaming Headset */46154617 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0x0a87),
···989989 bool no_previous_buffers = !q_num_bufs;990990 int ret = 0;991991992992- if (q->num_buffers == q->max_num_buffers) {992992+ if (q_num_bufs == q->max_num_buffers) {993993 dprintk(q, 1, "maximum number of buffers already allocated\n");994994 return -ENOBUFS;995995 }
+26-29
drivers/media/common/videobuf2/videobuf2-v4l2.c
···671671}672672EXPORT_SYMBOL(vb2_querybuf);673673674674-static void fill_buf_caps(struct vb2_queue *q, u32 *caps)674674+static void vb2_set_flags_and_caps(struct vb2_queue *q, u32 memory,675675+ u32 *flags, u32 *caps, u32 *max_num_bufs)675676{677677+ if (!q->allow_cache_hints || memory != V4L2_MEMORY_MMAP) {678678+ /*679679+ * This needs to clear V4L2_MEMORY_FLAG_NON_COHERENT only,680680+ * but in order to avoid bugs we zero out all bits.681681+ */682682+ *flags = 0;683683+ } else {684684+ /* Clear all unknown flags. */685685+ *flags &= V4L2_MEMORY_FLAG_NON_COHERENT;686686+ }687687+676688 *caps = V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS;677689 if (q->io_modes & VB2_MMAP)678690 *caps |= V4L2_BUF_CAP_SUPPORTS_MMAP;···698686 *caps |= V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS;699687 if (q->supports_requests)700688 *caps |= V4L2_BUF_CAP_SUPPORTS_REQUESTS;701701-}702702-703703-static void validate_memory_flags(struct vb2_queue *q,704704- int memory,705705- u32 *flags)706706-{707707- if (!q->allow_cache_hints || memory != V4L2_MEMORY_MMAP) {708708- /*709709- * This needs to clear V4L2_MEMORY_FLAG_NON_COHERENT only,710710- * but in order to avoid bugs we zero out all bits.711711- */712712- *flags = 0;713713- } else {714714- /* Clear all unknown flags. */715715- *flags &= V4L2_MEMORY_FLAG_NON_COHERENT;689689+ if (max_num_bufs) {690690+ *max_num_bufs = q->max_num_buffers;691691+ *caps |= V4L2_BUF_CAP_SUPPORTS_MAX_NUM_BUFFERS;716692 }717693}718694···709709 int ret = vb2_verify_memory_type(q, req->memory, req->type);710710 u32 flags = req->flags;711711712712- fill_buf_caps(q, &req->capabilities);713713- validate_memory_flags(q, req->memory, &flags);712712+ vb2_set_flags_and_caps(q, req->memory, &flags,713713+ &req->capabilities, NULL);714714 req->flags = flags;715715 return ret ? 
ret : vb2_core_reqbufs(q, req->memory,716716 req->flags, &req->count);···751751 int ret = vb2_verify_memory_type(q, create->memory, f->type);752752 unsigned i;753753754754- fill_buf_caps(q, &create->capabilities);755755- validate_memory_flags(q, create->memory, &create->flags);756754 create->index = vb2_get_num_buffers(q);757757- create->max_num_buffers = q->max_num_buffers;758758- create->capabilities |= V4L2_BUF_CAP_SUPPORTS_MAX_NUM_BUFFERS;755755+ vb2_set_flags_and_caps(q, create->memory, &create->flags,756756+ &create->capabilities, &create->max_num_buffers);759757 if (create->count == 0)760758 return ret != -EBUSY ? ret : 0;761759···10041006 int res = vb2_verify_memory_type(vdev->queue, p->memory, p->type);10051007 u32 flags = p->flags;1006100810071007- fill_buf_caps(vdev->queue, &p->capabilities);10081008- validate_memory_flags(vdev->queue, p->memory, &flags);10091009+ vb2_set_flags_and_caps(vdev->queue, p->memory, &flags,10101010+ &p->capabilities, NULL);10091011 p->flags = flags;10101012 if (res)10111013 return res;···10241026 struct v4l2_create_buffers *p)10251027{10261028 struct video_device *vdev = video_devdata(file);10271027- int res = vb2_verify_memory_type(vdev->queue, p->memory,10281028- p->format.type);10291029+ int res = vb2_verify_memory_type(vdev->queue, p->memory, p->format.type);1029103010301030- p->index = vdev->queue->num_buffers;10311031- fill_buf_caps(vdev->queue, &p->capabilities);10321032- validate_memory_flags(vdev->queue, p->memory, &p->flags);10311031+ p->index = vb2_get_num_buffers(vdev->queue);10321032+ vb2_set_flags_and_caps(vdev->queue, p->memory, &p->flags,10331033+ &p->capabilities, &p->max_num_buffers);10331034 /*10341035 * If count == 0, then just check if memory and type are valid.10351036 * Any -EBUSY result from vb2_verify_memory_type can be mapped to 0.
···28402840 /* MT753x MAC works in 1G full duplex mode for all up-clocked28412841 * variants.28422842 */28432843- if (interface == PHY_INTERFACE_MODE_INTERNAL ||28442844- interface == PHY_INTERFACE_MODE_TRGMII ||28432843+ if (interface == PHY_INTERFACE_MODE_TRGMII ||28452844 (phy_interface_mode_is_8023z(interface))) {28462845 speed = SPEED_1000;28472846 duplex = DUPLEX_FULL;
+1-1
drivers/net/dsa/mv88e6xxx/chip.c
···36593659 int err;3660366036613661 if (!chip->info->ops->phy_read_c45)36623662- return -EOPNOTSUPP;36623662+ return 0xffff;3663366336643664 mv88e6xxx_reg_lock(chip);36653665 err = chip->info->ops->phy_read_c45(chip, bus, phy, devad, reg, &val);
+1-2
drivers/net/dsa/qca/qca8k-8xxx.c
···20512051 priv->info = of_device_get_match_data(priv->dev);2052205220532053 priv->reset_gpio = devm_gpiod_get_optional(priv->dev, "reset",20542054- GPIOD_ASIS);20542054+ GPIOD_OUT_HIGH);20552055 if (IS_ERR(priv->reset_gpio))20562056 return PTR_ERR(priv->reset_gpio);2057205720582058 if (priv->reset_gpio) {20592059- gpiod_set_value_cansleep(priv->reset_gpio, 1);20602059 /* The active low duration must be greater than 10 ms20612060 * and checkpatch.pl wants 20 ms.20622061 */
+46-14
drivers/net/ethernet/amd/pds_core/adminq.c
···6363 return nq_work;6464}65656666+static bool pdsc_adminq_inc_if_up(struct pdsc *pdsc)6767+{6868+ if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER) ||6969+ pdsc->state & BIT_ULL(PDSC_S_FW_DEAD))7070+ return false;7171+7272+ return refcount_inc_not_zero(&pdsc->adminq_refcnt);7373+}7474+6675void pdsc_process_adminq(struct pdsc_qcq *qcq)6776{6877 union pds_core_adminq_comp *comp;···8475 int aq_work = 0;8576 int credits;86778787- /* Don't process AdminQ when shutting down */8888- if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) {8989- dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n",7878+ /* Don't process AdminQ when it's not up */7979+ if (!pdsc_adminq_inc_if_up(pdsc)) {8080+ dev_err(pdsc->dev, "%s: called while adminq is unavailable\n",9081 __func__);9182 return;9283 }···133124 pds_core_intr_credits(&pdsc->intr_ctrl[qcq->intx],134125 credits,135126 PDS_CORE_INTR_CRED_REARM);127127+ refcount_dec(&pdsc->adminq_refcnt);136128}137129138130void pdsc_work_thread(struct work_struct *work)···145135146136irqreturn_t pdsc_adminq_isr(int irq, void *data)147137{148148- struct pdsc_qcq *qcq = data;149149- struct pdsc *pdsc = qcq->pdsc;138138+ struct pdsc *pdsc = data;139139+ struct pdsc_qcq *qcq;150140151151- /* Don't process AdminQ when shutting down */152152- if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) {153153- dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n",141141+ /* Don't process AdminQ when it's not up */142142+ if (!pdsc_adminq_inc_if_up(pdsc)) {143143+ dev_err(pdsc->dev, "%s: called while adminq is unavailable\n",154144 __func__);155145 return IRQ_HANDLED;156146 }157147148148+ qcq = &pdsc->adminqcq;158149 queue_work(pdsc->wq, &qcq->work);159150 pds_core_intr_mask(&pdsc->intr_ctrl[qcq->intx], PDS_CORE_INTR_MASK_CLEAR);151151+ refcount_dec(&pdsc->adminq_refcnt);160152161153 return IRQ_HANDLED;162154}···191179192180 /* Check that the FW is running */193181 if (!pdsc_is_fw_running(pdsc)) {194194- u8 fw_status = 
ioread8(&pdsc->info_regs->fw_status);182182+ if (pdsc->info_regs) {183183+ u8 fw_status =184184+ ioread8(&pdsc->info_regs->fw_status);195185196196- dev_info(pdsc->dev, "%s: post failed - fw not running %#02x:\n",197197- __func__, fw_status);186186+ dev_info(pdsc->dev, "%s: post failed - fw not running %#02x:\n",187187+ __func__, fw_status);188188+ } else {189189+ dev_info(pdsc->dev, "%s: post failed - BARs not setup\n",190190+ __func__);191191+ }198192 ret = -ENXIO;199193200194 goto err_out_unlock;···248230 int err = 0;249231 int index;250232233233+ if (!pdsc_adminq_inc_if_up(pdsc)) {234234+ dev_dbg(pdsc->dev, "%s: preventing adminq cmd %u\n",235235+ __func__, cmd->opcode);236236+ return -ENXIO;237237+ }238238+251239 wc.qcq = &pdsc->adminqcq;252240 index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp, &wc);253241 if (index < 0) {···272248 break;273249274250 if (!pdsc_is_fw_running(pdsc)) {275275- u8 fw_status = ioread8(&pdsc->info_regs->fw_status);251251+ if (pdsc->info_regs) {252252+ u8 fw_status =253253+ ioread8(&pdsc->info_regs->fw_status);276254277277- dev_dbg(pdsc->dev, "%s: post wait failed - fw not running %#02x:\n",278278- __func__, fw_status);255255+ dev_dbg(pdsc->dev, "%s: post wait failed - fw not running %#02x:\n",256256+ __func__, fw_status);257257+ } else {258258+ dev_dbg(pdsc->dev, "%s: post wait failed - BARs not setup\n",259259+ __func__);260260+ }279261 err = -ENXIO;280262 break;281263 }···314284 if (err == -ENXIO || err == -ETIMEDOUT)315285 queue_work(pdsc->wq, &pdsc->health_work);316286 }287287+288288+ refcount_dec(&pdsc->adminq_refcnt);317289318290 return err;319291}
+36-10
drivers/net/ethernet/amd/pds_core/core.c
···125125126126 snprintf(name, sizeof(name), "%s-%d-%s",127127 PDS_CORE_DRV_NAME, pdsc->pdev->bus->number, qcq->q.name);128128- index = pdsc_intr_alloc(pdsc, name, pdsc_adminq_isr, qcq);128128+ index = pdsc_intr_alloc(pdsc, name, pdsc_adminq_isr, pdsc);129129 if (index < 0)130130 return index;131131 qcq->intx = index;···404404 int numdescs;405405 int err;406406407407- if (init)408408- err = pdsc_dev_init(pdsc);409409- else410410- err = pdsc_dev_reinit(pdsc);407407+ err = pdsc_dev_init(pdsc);411408 if (err)412409 return err;413410···447450 pdsc_debugfs_add_viftype(pdsc);448451 }449452453453+ refcount_set(&pdsc->adminq_refcnt, 1);450454 clear_bit(PDSC_S_FW_DEAD, &pdsc->state);451455 return 0;452456···462464463465 if (!pdsc->pdev->is_virtfn)464466 pdsc_devcmd_reset(pdsc);467467+ if (pdsc->adminqcq.work.func)468468+ cancel_work_sync(&pdsc->adminqcq.work);465469 pdsc_qcq_free(pdsc, &pdsc->notifyqcq);466470 pdsc_qcq_free(pdsc, &pdsc->adminqcq);467471···476476 for (i = 0; i < pdsc->nintrs; i++)477477 pdsc_intr_free(pdsc, i);478478479479- if (removing) {480480- kfree(pdsc->intr_info);481481- pdsc->intr_info = NULL;482482- }479479+ kfree(pdsc->intr_info);480480+ pdsc->intr_info = NULL;481481+ pdsc->nintrs = 0;483482 }484483485484 if (pdsc->kern_dbpage) {···486487 pdsc->kern_dbpage = NULL;487488 }488489490490+ pci_free_irq_vectors(pdsc->pdev);489491 set_bit(PDSC_S_FW_DEAD, &pdsc->state);490492}491493···512512 PDS_CORE_INTR_MASK_SET);513513}514514515515+static void pdsc_adminq_wait_and_dec_once_unused(struct pdsc *pdsc)516516+{517517+ /* The driver initializes the adminq_refcnt to 1 when the adminq is518518+ * allocated and ready for use. Other users/requesters will increment519519+ * the refcnt while in use. If the refcnt is down to 1 then the adminq520520+ * is not in use and the refcnt can be cleared and adminq freed. 
Before521521+ * calling this function the driver will set PDSC_S_FW_DEAD, which522522+ * causes subsequent attempts to use the adminq and increment the523523+ * refcnt to fail. This guarantees that this function will eventually524524+ * exit.525525+ */526526+ while (!refcount_dec_if_one(&pdsc->adminq_refcnt)) {527527+ dev_dbg_ratelimited(pdsc->dev, "%s: adminq in use\n",528528+ __func__);529529+ cpu_relax();530530+ }531531+}532532+515533void pdsc_fw_down(struct pdsc *pdsc)516534{517535 union pds_core_notifyq_comp reset_event = {···544526545527 if (pdsc->pdev->is_virtfn)546528 return;529529+530530+ pdsc_adminq_wait_and_dec_once_unused(pdsc);547531548532 /* Notify clients of fw_down */549533 if (pdsc->fw_reporter)···597577598578static void pdsc_check_pci_health(struct pdsc *pdsc)599579{600600- u8 fw_status = ioread8(&pdsc->info_regs->fw_status);580580+ u8 fw_status;581581+582582+ /* some sort of teardown already in progress */583583+ if (!pdsc->info_regs)584584+ return;585585+586586+ fw_status = ioread8(&pdsc->info_regs->fw_status);601587602588 /* is PCI broken? */603589 if (fw_status != PDS_RC_BAD_PCI)
···64646565void pdsc_debugfs_add_ident(struct pdsc *pdsc)6666{6767+ /* This file will already exist in the reset flow */6868+ if (debugfs_lookup("identity", pdsc->dentry))6969+ return;7070+6771 debugfs_create_file("identity", 0400, pdsc->dentry,6872 pdsc, &identity_fops);6973}
···360360 * As a result, a shift of INCVALUE_SHIFT_n is used to fit a value of361361 * INCVALUE_n into the TIMINCA register allowing 32+8+(24-INCVALUE_SHIFT_n)362362 * bits to count nanoseconds leaving the rest for fractional nanoseconds.363363+ *364364+ * Any given INCVALUE also has an associated maximum adjustment value. This365365+ * maximum adjustment value is the largest increase (or decrease) which can be366366+ * safely applied without overflowing the INCVALUE. Since INCVALUE has367367+ * a maximum range of 24 bits, its largest value is 0xFFFFFF.368368+ *369369+ * To understand where the maximum value comes from, consider the following370370+ * equation:371371+ *372372+ * new_incval = base_incval + (base_incval * adjustment) / 1billion373373+ *374374+ * To avoid overflow that means:375375+ * max_incval = base_incval + (base_incval * max_adj) / 1billion376376+ *377377+ * Re-arranging:378378+ * max_adj = floor(((max_incval - base_incval) * 1billion) / base_incval)363379 */364380#define INCVALUE_96MHZ 125365381#define INCVALUE_SHIFT_96MHZ 17366382#define INCPERIOD_SHIFT_96MHZ 2367383#define INCPERIOD_96MHZ (12 >> INCPERIOD_SHIFT_96MHZ)384384+#define MAX_PPB_96MHZ 23999900 /* 23,999,900 ppb */368385369386#define INCVALUE_25MHZ 40370387#define INCVALUE_SHIFT_25MHZ 18371388#define INCPERIOD_25MHZ 1389389+#define MAX_PPB_25MHZ 599999900 /* 599,999,900 ppb */372390373391#define INCVALUE_24MHZ 125374392#define INCVALUE_SHIFT_24MHZ 14375393#define INCPERIOD_24MHZ 3394394+#define MAX_PPB_24MHZ 999999999 /* 999,999,999 ppb */376395377396#define INCVALUE_38400KHZ 26378397#define INCVALUE_SHIFT_38400KHZ 19379398#define INCPERIOD_38400KHZ 1399399+#define MAX_PPB_38400KHZ 230769100 /* 230,769,100 ppb */380400381401/* Another drawback of scaling the incvalue by a large factor is the382402 * 64-bit SYSTIM register overflows more quickly. This is dealt with
+15-7
drivers/net/ethernet/intel/e1000e/ptp.c
···280280281281 switch (hw->mac.type) {282282 case e1000_pch2lan:283283+ adapter->ptp_clock_info.max_adj = MAX_PPB_96MHZ;284284+ break;283285 case e1000_pch_lpt:286286+ if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)287287+ adapter->ptp_clock_info.max_adj = MAX_PPB_96MHZ;288288+ else289289+ adapter->ptp_clock_info.max_adj = MAX_PPB_25MHZ;290290+ break;284291 case e1000_pch_spt:292292+ adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ;293293+ break;285294 case e1000_pch_cnp:286295 case e1000_pch_tgp:287296 case e1000_pch_adp:···298289 case e1000_pch_lnp:299290 case e1000_pch_ptp:300291 case e1000_pch_nvp:301301- if ((hw->mac.type < e1000_pch_lpt) ||302302- (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)) {303303- adapter->ptp_clock_info.max_adj = 24000000 - 1;304304- break;305305- }306306- fallthrough;292292+ if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)293293+ adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ;294294+ else295295+ adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ;296296+ break;307297 case e1000_82574:308298 case e1000_82583:309309- adapter->ptp_clock_info.max_adj = 600000000 - 1;299299+ adapter->ptp_clock_info.max_adj = MAX_PPB_25MHZ;310300 break;311301 default:312302 break;
···168168 lan966x_taprio_speed_set(port, config->speed);169169170170 /* Also the GIGA_MODE_ENA(1) needs to be set regardless of the171171- * port speed for QSGMII ports.171171+ * port speed for QSGMII or SGMII ports.172172 */173173- if (phy_interface_num_ports(config->portmode) == 4)173173+ if (phy_interface_num_ports(config->portmode) == 4 ||174174+ config->portmode == PHY_INTERFACE_MODE_SGMII)174175 mode = DEV_MAC_MODE_CFG_GIGA_MODE_ENA_SET(1);175176176177 lan_wr(config->duplex | mode,
···14241424 mangle_action->mangle.mask = (__force u32)cpu_to_be32(mangle_action->mangle.mask);14251425 return;1426142614271427+ /* Both struct tcphdr and struct udphdr start with14281428+ * __be16 source;14291429+ * __be16 dest;14301430+ * so we can use the same code for both.14311431+ */14271432 case FLOW_ACT_MANGLE_HDR_TYPE_TCP:14281433 case FLOW_ACT_MANGLE_HDR_TYPE_UDP:14291429- mangle_action->mangle.val = (__force u16)cpu_to_be16(mangle_action->mangle.val);14301430- mangle_action->mangle.mask = (__force u16)cpu_to_be16(mangle_action->mangle.mask);14341434+ if (mangle_action->mangle.offset == offsetof(struct tcphdr, source)) {14351435+ mangle_action->mangle.val =14361436+ (__force u32)cpu_to_be32(mangle_action->mangle.val << 16);14371437+ /* The mask of a mangle action is an inverse mask,14381438+ * so fill the dest tp port with 0xFFFF14391439+ * instead of using a rotate-left operation.14401440+ */14411441+ mangle_action->mangle.mask =14421442+ (__force u32)cpu_to_be32(mangle_action->mangle.mask << 16 | 0xFFFF);14431443+ }14441444+ if (mangle_action->mangle.offset == offsetof(struct tcphdr, dest)) {14451445+ mangle_action->mangle.offset = 0;14461446+ mangle_action->mangle.val =14471447+ (__force u32)cpu_to_be32(mangle_action->mangle.val);14481448+ mangle_action->mangle.mask =14491449+ (__force u32)cpu_to_be32(mangle_action->mangle.mask);14501450+ }14311451 return;1432145214331453 default:···18841864{18851865 struct flow_rule *rule = flow_cls_offload_flow_rule(flow);18861866 struct nfp_fl_ct_flow_entry *ct_entry;18671867+ struct flow_action_entry *ct_goto;18871868 struct nfp_fl_ct_zone_entry *zt;18691869+ struct flow_action_entry *act;18881870 bool wildcarded = false;18891871 struct flow_match_ct ct;18901890- struct flow_action_entry *ct_goto;18721872+ int i;18731873+18741874+ flow_action_for_each(i, act, &rule->action) {18751875+ switch (act->id) {18761876+ case FLOW_ACTION_REDIRECT:18771877+ case FLOW_ACTION_REDIRECT_INGRESS:18781878+ case FLOW_ACTION_MIRRED:18791879+ 
case FLOW_ACTION_MIRRED_INGRESS:18801880+ if (act->dev->rtnl_link_ops &&18811881+ !strcmp(act->dev->rtnl_link_ops->kind, "openvswitch")) {18821882+ NL_SET_ERR_MSG_MOD(extack,18831883+ "unsupported offload: out port is openvswitch internal port");18841884+ return -EOPNOTSUPP;18851885+ }18861886+ break;18871887+ default:18881888+ break;18891889+ }18901890+ }1891189118921892 flow_rule_match_ct(rule, &ct);18931893 if (!ct.mask->ct_zone) {
+4
drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c
···353353 if (data->flags & STMMAC_FLAG_HWTSTAMP_CORRECT_LATENCY)354354 plat_dat->flags |= STMMAC_FLAG_HWTSTAMP_CORRECT_LATENCY;355355356356+ /* Default TX Q0 to use TSO and rest TXQ for TBS */357357+ for (int i = 1; i < plat_dat->tx_queues_to_use; i++)358358+ plat_dat->tx_queues_cfg[i].tbs_en = 1;359359+356360 plat_dat->host_dma_width = dwmac->ops->addr_width;357361 plat_dat->init = imx_dwmac_init;358362 plat_dat->exit = imx_dwmac_exit;
+3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···39393939 priv->rx_copybreak = STMMAC_RX_COPYBREAK;3940394039413941 buf_sz = dma_conf->dma_buf_sz;39423942+ for (int i = 0; i < MTL_MAX_TX_QUEUES; i++)39433943+ if (priv->dma_conf.tx_queue[i].tbs & STMMAC_TBS_EN)39443944+ dma_conf->tx_queue[i].tbs = priv->dma_conf.tx_queue[i].tbs;39423945 memcpy(&priv->dma_conf, dma_conf, sizeof(*dma_conf));3943394639443947 stmmac_reset_queues_param(priv);
+4-1
drivers/net/hyperv/netvsc.c
···708708 /* Disable NAPI and disassociate its context from the device. */709709 for (i = 0; i < net_device->num_chn; i++) {710710 /* See also vmbus_reset_channel_cb(). */711711- napi_disable(&net_device->chan_table[i].napi);711711+ /* only disable NAPI on channels that were enabled */712712+ if (i < ndev->real_num_rx_queues)713713+ napi_disable(&net_device->chan_table[i].napi);714714+712715 netif_napi_del(&net_device->chan_table[i].napi);713716 }714717
···104104module_param(provides_xdp_headroom, bool, 0644);105105106106static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,107107- u8 status);107107+ s8 status);108108109109static void make_tx_response(struct xenvif_queue *queue,110110- struct xen_netif_tx_request *txp,110110+ const struct xen_netif_tx_request *txp,111111 unsigned int extra_count,112112- s8 st);113113-static void push_tx_responses(struct xenvif_queue *queue);112112+ s8 status);114113115114static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx);116115···207208 unsigned int extra_count, RING_IDX end)208209{209210 RING_IDX cons = queue->tx.req_cons;210210- unsigned long flags;211211212212 do {213213- spin_lock_irqsave(&queue->response_lock, flags);214213 make_tx_response(queue, txp, extra_count, XEN_NETIF_RSP_ERROR);215215- push_tx_responses(queue);216216- spin_unlock_irqrestore(&queue->response_lock, flags);217214 if (cons == end)218215 break;219216 RING_COPY_REQUEST(&queue->tx, cons++, txp);···460465 for (shinfo->nr_frags = 0; nr_slots > 0 && shinfo->nr_frags < MAX_SKB_FRAGS;461466 nr_slots--) {462467 if (unlikely(!txp->size)) {463463- unsigned long flags;464464-465465- spin_lock_irqsave(&queue->response_lock, flags);466468 make_tx_response(queue, txp, 0, XEN_NETIF_RSP_OKAY);467467- push_tx_responses(queue);468468- spin_unlock_irqrestore(&queue->response_lock, flags);469469 ++txp;470470 continue;471471 }···486496487497 for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots; ++txp) {488498 if (unlikely(!txp->size)) {489489- unsigned long flags;490490-491491- spin_lock_irqsave(&queue->response_lock, flags);492499 make_tx_response(queue, txp, 0,493500 XEN_NETIF_RSP_OKAY);494494- push_tx_responses(queue);495495- spin_unlock_irqrestore(&queue->response_lock,496496- flags);497501 continue;498502 }499503···979995 (ret == 0) ?980996 XEN_NETIF_RSP_OKAY :981997 XEN_NETIF_RSP_ERROR);982982- push_tx_responses(queue);983998 continue;984999 }9851000···99010079911008 
make_tx_response(queue, &txreq, extra_count,9921009 XEN_NETIF_RSP_OKAY);993993- push_tx_responses(queue);9941010 continue;9951011 }9961012···14151433 return work_done;14161434}1417143514181418-static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,14191419- u8 status)14201420-{14211421- struct pending_tx_info *pending_tx_info;14221422- pending_ring_idx_t index;14231423- unsigned long flags;14241424-14251425- pending_tx_info = &queue->pending_tx_info[pending_idx];14261426-14271427- spin_lock_irqsave(&queue->response_lock, flags);14281428-14291429- make_tx_response(queue, &pending_tx_info->req,14301430- pending_tx_info->extra_count, status);14311431-14321432- /* Release the pending index before pusing the Tx response so14331433- * its available before a new Tx request is pushed by the14341434- * frontend.14351435- */14361436- index = pending_index(queue->pending_prod++);14371437- queue->pending_ring[index] = pending_idx;14381438-14391439- push_tx_responses(queue);14401440-14411441- spin_unlock_irqrestore(&queue->response_lock, flags);14421442-}14431443-14441444-14451445-static void make_tx_response(struct xenvif_queue *queue,14461446- struct xen_netif_tx_request *txp,14361436+static void _make_tx_response(struct xenvif_queue *queue,14371437+ const struct xen_netif_tx_request *txp,14471438 unsigned int extra_count,14481448- s8 st)14391439+ s8 status)14491440{14501441 RING_IDX i = queue->tx.rsp_prod_pvt;14511442 struct xen_netif_tx_response *resp;1452144314531444 resp = RING_GET_RESPONSE(&queue->tx, i);14541445 resp->id = txp->id;14551455- resp->status = st;14461446+ resp->status = status;1456144714571448 while (extra_count-- != 0)14581449 RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;···14401485 RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);14411486 if (notify)14421487 notify_remote_via_irq(queue->tx_irq);14881488+}14891489+14901490+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,14911491+ s8 
status)14921492+{14931493+ struct pending_tx_info *pending_tx_info;14941494+ pending_ring_idx_t index;14951495+ unsigned long flags;14961496+14971497+ pending_tx_info = &queue->pending_tx_info[pending_idx];14981498+14991499+ spin_lock_irqsave(&queue->response_lock, flags);15001500+15011501+ _make_tx_response(queue, &pending_tx_info->req,15021502+ pending_tx_info->extra_count, status);15031503+15041504+ /* Release the pending index before pushing the Tx response so15051505+ * it's available before a new Tx request is pushed by the15061506+ * frontend.15071507+ */15081508+ index = pending_index(queue->pending_prod++);15091509+ queue->pending_ring[index] = pending_idx;15101510+15111511+ push_tx_responses(queue);15121512+15131513+ spin_unlock_irqrestore(&queue->response_lock, flags);15141514+}15151515+15161516+static void make_tx_response(struct xenvif_queue *queue,15171517+ const struct xen_netif_tx_request *txp,15181518+ unsigned int extra_count,15191519+ s8 status)15201520+{15211521+ unsigned long flags;15221522+15231523+ spin_lock_irqsave(&queue->response_lock, flags);15241524+15251525+ _make_tx_response(queue, txp, extra_count, status);15261526+ push_tx_responses(queue);15271527+15281528+ spin_unlock_irqrestore(&queue->response_lock, flags);14431529}1444153014451531static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
+2-2
drivers/platform/mellanox/mlxbf-pmc.c
···11701170 int ret;1171117111721172 addr = pmc->block[blk_num].mmio_base +11731173- (rounddown(cnt_num, 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);11731173+ ((cnt_num / 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);11741174 ret = mlxbf_pmc_readl(addr, &word);11751175 if (ret)11761176 return ret;···14131413 int ret;1414141414151415 addr = pmc->block[blk_num].mmio_base +14161416- (rounddown(cnt_num, 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);14161416+ ((cnt_num / 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ);14171417 ret = mlxbf_pmc_readl(addr, &word);14181418 if (ret)14191419 return ret;
+67
drivers/platform/mellanox/mlxbf-tmfifo.c
···
 /* Message with data needs at least two words (for header & data). */
 #define MLXBF_TMFIFO_DATA_MIN_WORDS	2
 
+/* Tx timeout in milliseconds. */
+#define TMFIFO_TX_TIMEOUT	2000
+
 /* ACPI UID for BlueField-3. */
 #define TMFIFO_BF3_UID	1
···
  * @drop_desc: dummy desc for packet dropping
  * @cur_len: processed length of the current descriptor
  * @rem_len: remaining length of the pending packet
+ * @rem_padding: remaining bytes to send as paddings
  * @pkt_len: total length of the pending packet
  * @next_avail: next avail descriptor id
  * @num: vring size (number of descriptors)
  * @align: vring alignment size
  * @index: vring index
  * @vdev_id: vring virtio id (VIRTIO_ID_xxx)
+ * @tx_timeout: expire time of last tx packet
  * @fifo: pointer to the tmfifo structure
  */
 struct mlxbf_tmfifo_vring {
···
 	struct vring_desc drop_desc;
 	int cur_len;
 	int rem_len;
+	int rem_padding;
 	u32 pkt_len;
 	u16 next_avail;
 	int num;
 	int align;
 	int index;
 	int vdev_id;
+	unsigned long tx_timeout;
 	struct mlxbf_tmfifo *fifo;
 };
···
 	return true;
 }
 
+static void mlxbf_tmfifo_check_tx_timeout(struct mlxbf_tmfifo_vring *vring)
+{
+	unsigned long flags;
+
+	/* Only handle Tx timeout for network vdev. */
+	if (vring->vdev_id != VIRTIO_ID_NET)
+		return;
+
+	/* Initialize the timeout or return if not expired. */
+	if (!vring->tx_timeout) {
+		/* Initialize the timeout. */
+		vring->tx_timeout = jiffies +
+			msecs_to_jiffies(TMFIFO_TX_TIMEOUT);
+		return;
+	} else if (time_before(jiffies, vring->tx_timeout)) {
+		/* Return if not timeout yet. */
+		return;
+	}
+
+	/*
+	 * Drop the packet after timeout. The outstanding packet is
+	 * released and the remaining bytes will be sent with padding byte 0x00
+	 * as a recovery. On the peer(host) side, the padding bytes 0x00 will be
+	 * either dropped directly, or appended into existing outstanding packet
+	 * thus dropped as corrupted network packet.
+	 */
+	vring->rem_padding = round_up(vring->rem_len, sizeof(u64));
+	mlxbf_tmfifo_release_pkt(vring);
+	vring->cur_len = 0;
+	vring->rem_len = 0;
+	vring->fifo->vring[0] = NULL;
+
+	/*
+	 * Make sure the load/store are in order before
+	 * returning back to virtio.
+	 */
+	virtio_mb(false);
+
+	/* Notify upper layer. */
+	spin_lock_irqsave(&vring->fifo->spin_lock[0], flags);
+	vring_interrupt(0, vring->vq);
+	spin_unlock_irqrestore(&vring->fifo->spin_lock[0], flags);
+}
+
 /* Rx & Tx processing of a queue. */
 static void mlxbf_tmfifo_rxtx(struct mlxbf_tmfifo_vring *vring, bool is_rx)
 {
···
 		return;
 
 	do {
+retry:
 		/* Get available FIFO space. */
 		if (avail == 0) {
 			if (is_rx)
···
 			avail = mlxbf_tmfifo_get_tx_avail(fifo, devid);
 			if (avail <= 0)
 				break;
 		}
+
+		/* Insert paddings for discarded Tx packet. */
+		if (!is_rx) {
+			vring->tx_timeout = 0;
+			while (vring->rem_padding >= sizeof(u64)) {
+				writeq(0, vring->fifo->tx.data);
+				vring->rem_padding -= sizeof(u64);
+				if (--avail == 0)
+					goto retry;
+			}
+		}
 
 		/* Console output always comes from the Tx buffer. */
···
 		/* Handle one descriptor. */
 		more = mlxbf_tmfifo_rxtx_one_desc(vring, is_rx, &avail);
 	} while (more);
+
+	/* Check Tx timeout. */
+	if (avail <= 0 && !is_rx)
+		mlxbf_tmfifo_check_tx_timeout(vring);
 }
 
 /* Handle Rx or Tx queues. */
+1
drivers/platform/x86/amd/pmf/Kconfig
···
 	depends on AMD_NB
 	select ACPI_PLATFORM_PROFILE
 	depends on TEE && AMDTEE
+	depends on AMD_SFH_HID
 	help
 	  This driver provides support for the AMD Platform Management Framework.
 	  The goal is to enhance end user experience by making AMD PCs smarter,
+36
drivers/platform/x86/amd/pmf/spc.c
···
  */
 
 #include <acpi/button.h>
+#include <linux/amd-pmf-io.h>
 #include <linux/power_supply.h>
 #include <linux/units.h>
 #include "pmf.h"
···
 	dev_dbg(dev->dev, "Max C0 Residency: %u\n", in->ev_info.max_c0residency);
 	dev_dbg(dev->dev, "GFX Busy: %u\n", in->ev_info.gfx_busy);
 	dev_dbg(dev->dev, "LID State: %s\n", in->ev_info.lid_state ? "close" : "open");
+	dev_dbg(dev->dev, "User Presence: %s\n", in->ev_info.user_present ? "Present" : "Away");
+	dev_dbg(dev->dev, "Ambient Light: %d\n", in->ev_info.ambient_light);
 	dev_dbg(dev->dev, "==== TA inputs END ====\n");
 }
 #else
···
 	return 0;
 }
 
+static int amd_pmf_get_sensor_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
+{
+	struct amd_sfh_info sfh_info;
+	int ret;
+
+	/* Get ALS data */
+	ret = amd_get_sfh_info(&sfh_info, MT_ALS);
+	if (!ret)
+		in->ev_info.ambient_light = sfh_info.ambient_light;
+	else
+		return ret;
+
+	/* get HPD data */
+	ret = amd_get_sfh_info(&sfh_info, MT_HPD);
+	if (ret)
+		return ret;
+
+	switch (sfh_info.user_present) {
+	case SFH_NOT_DETECTED:
+		in->ev_info.user_present = 0xff; /* assume no sensors connected */
+		break;
+	case SFH_USER_PRESENT:
+		in->ev_info.user_present = 1;
+		break;
+	case SFH_USER_AWAY:
+		in->ev_info.user_present = 0;
+		break;
+	}
+
+	return 0;
+}
+
 void amd_pmf_populate_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
 {
 	/* TA side lid open is 1 and close is 0, hence the ! here */
···
 	amd_pmf_get_smu_info(dev, in);
 	amd_pmf_get_battery_info(dev, in);
 	amd_pmf_get_slider_info(dev, in);
+	amd_pmf_get_sensor_info(dev, in);
 }
+3-1
drivers/platform/x86/amd/pmf/tee-if.c
···
 	if (!new_policy_buf)
 		return -ENOMEM;
 
-	if (copy_from_user(new_policy_buf, buf, length))
+	if (copy_from_user(new_policy_buf, buf, length)) {
+		kfree(new_policy_buf);
 		return -EFAULT;
+	}
 
 	kfree(dev->policy_buf);
 	dev->policy_buf = new_policy_buf;
+2-1
drivers/platform/x86/intel/ifs/load.c
···
 	if (fw->size != expected_size) {
 		dev_err(dev, "File size mismatch (expected %u, actual %zu). Corrupted IFS image.\n",
 			expected_size, fw->size);
-		return -EINVAL;
+		ret = -EINVAL;
+		goto release;
 	}
 
 	ret = image_sanity_check(dev, (struct microcode_header_intel *)fw->data);
···
  * @instance_id:	Unique instance id to append to directory name
  * @name:	Sysfs entry name for this instance
  * @uncore_attr_group:	Attribute group storage
- * @max_freq_khz_dev_attr: Storage for device attribute max_freq_khz
- * @mix_freq_khz_dev_attr: Storage for device attribute min_freq_khz
- * @initial_max_freq_khz_dev_attr: Storage for device attribute initial_max_freq_khz
- * @initial_min_freq_khz_dev_attr: Storage for device attribute initial_min_freq_khz
- * @current_freq_khz_dev_attr: Storage for device attribute current_freq_khz
- * @domain_id_dev_attr: Storage for device attribute domain_id
- * @fabric_cluster_id_dev_attr: Storage for device attribute fabric_cluster_id
- * @package_id_dev_attr: Storage for device attribute package_id
+ * @max_freq_khz_kobj_attr: Storage for kobject attribute max_freq_khz
+ * @mix_freq_khz_kobj_attr: Storage for kobject attribute min_freq_khz
+ * @initial_max_freq_khz_kobj_attr: Storage for kobject attribute initial_max_freq_khz
+ * @initial_min_freq_khz_kobj_attr: Storage for kobject attribute initial_min_freq_khz
+ * @current_freq_khz_kobj_attr: Storage for kobject attribute current_freq_khz
+ * @domain_id_kobj_attr: Storage for kobject attribute domain_id
+ * @fabric_cluster_id_kobj_attr: Storage for kobject attribute fabric_cluster_id
+ * @package_id_kobj_attr: Storage for kobject attribute package_id
  * @uncore_attrs:	Attribute storage for group creation
  *
  * This structure is used to encapsulate all data related to uncore sysfs
···
 	char name[32];
 
 	struct attribute_group uncore_attr_group;
-	struct device_attribute max_freq_khz_dev_attr;
-	struct device_attribute min_freq_khz_dev_attr;
-	struct device_attribute initial_max_freq_khz_dev_attr;
-	struct device_attribute initial_min_freq_khz_dev_attr;
-	struct device_attribute current_freq_khz_dev_attr;
-	struct device_attribute domain_id_dev_attr;
-	struct device_attribute fabric_cluster_id_dev_attr;
-	struct device_attribute package_id_dev_attr;
+	struct kobj_attribute max_freq_khz_kobj_attr;
+	struct kobj_attribute min_freq_khz_kobj_attr;
+	struct kobj_attribute initial_max_freq_khz_kobj_attr;
+	struct kobj_attribute initial_min_freq_khz_kobj_attr;
+	struct kobj_attribute current_freq_khz_kobj_attr;
+	struct kobj_attribute domain_id_kobj_attr;
+	struct kobj_attribute fabric_cluster_id_kobj_attr;
+	struct kobj_attribute package_id_kobj_attr;
 	struct attribute *uncore_attrs[9];
 };
 
+2-2
drivers/platform/x86/intel/wmi/sbl-fw-update.c
···
 		return -ENODEV;
 
 	if (obj->type != ACPI_TYPE_INTEGER) {
-		dev_warn(dev, "wmi_query_block returned invalid value\n");
+		dev_warn(dev, "wmidev_block_query returned invalid value\n");
 		kfree(obj);
 		return -EINVAL;
 	}
···
 
 	status = wmidev_block_set(to_wmi_device(dev), 0, &input);
 	if (ACPI_FAILURE(status)) {
-		dev_err(dev, "wmi_set_block failed\n");
+		dev_err(dev, "wmidev_block_set failed\n");
 		return -ENODEV;
 	}
 
+152-54
drivers/platform/x86/p2sb.c
···
 	{}
 };
 
+/*
+ * Cache BAR0 of P2SB device functions 0 to 7.
+ * TODO: The constant 8 is the number of functions that PCI specification
+ * defines. Same definitions exist tree-wide. Unify this definition and
+ * the other definitions then move to include/uapi/linux/pci.h.
+ */
+#define NR_P2SB_RES_CACHE 8
+
+struct p2sb_res_cache {
+	u32 bus_dev_id;
+	struct resource res;
+};
+
+static struct p2sb_res_cache p2sb_resources[NR_P2SB_RES_CACHE];
+
 static int p2sb_get_devfn(unsigned int *devfn)
 {
 	unsigned int fn = P2SB_DEVFN_DEFAULT;
···
 	return 0;
 }
 
-/* Copy resource from the first BAR of the device in question */
-static int p2sb_read_bar0(struct pci_dev *pdev, struct resource *mem)
+static bool p2sb_valid_resource(struct resource *res)
 {
-	struct resource *bar0 = &pdev->resource[0];
+	if (res->flags)
+		return true;
+
+	return false;
+}
+
+/* Copy resource from the first BAR of the device in question */
+static void p2sb_read_bar0(struct pci_dev *pdev, struct resource *mem)
+{
+	struct resource *bar0 = pci_resource_n(pdev, 0);
 
 	/* Make sure we have no dangling pointers in the output */
 	memset(mem, 0, sizeof(*mem));
···
 	mem->end = bar0->end;
 	mem->flags = bar0->flags;
 	mem->desc = bar0->desc;
+}
+
+static void p2sb_scan_and_cache_devfn(struct pci_bus *bus, unsigned int devfn)
+{
+	struct p2sb_res_cache *cache = &p2sb_resources[PCI_FUNC(devfn)];
+	struct pci_dev *pdev;
+
+	pdev = pci_scan_single_device(bus, devfn);
+	if (!pdev)
+		return;
+
+	p2sb_read_bar0(pdev, &cache->res);
+	cache->bus_dev_id = bus->dev.id;
+
+	pci_stop_and_remove_bus_device(pdev);
+}
+
+static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn)
+{
+	unsigned int slot, fn;
+
+	if (PCI_FUNC(devfn) == 0) {
+		/*
+		 * When function number of the P2SB device is zero, scan it and
+		 * other function numbers, and if devices are available, cache
+		 * their BAR0s.
+		 */
+		slot = PCI_SLOT(devfn);
+		for (fn = 0; fn < NR_P2SB_RES_CACHE; fn++)
+			p2sb_scan_and_cache_devfn(bus, PCI_DEVFN(slot, fn));
+	} else {
+		/* Scan the P2SB device and cache its BAR0 */
+		p2sb_scan_and_cache_devfn(bus, devfn);
+	}
+
+	if (!p2sb_valid_resource(&p2sb_resources[PCI_FUNC(devfn)].res))
+		return -ENOENT;
 
 	return 0;
 }
 
-static int p2sb_scan_and_read(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
+static struct pci_bus *p2sb_get_bus(struct pci_bus *bus)
 {
-	struct pci_dev *pdev;
+	static struct pci_bus *p2sb_bus;
+
+	bus = bus ?: p2sb_bus;
+	if (bus)
+		return bus;
+
+	/* Assume P2SB is on the bus 0 in domain 0 */
+	p2sb_bus = pci_find_bus(0, 0);
+	return p2sb_bus;
+}
+
+static int p2sb_cache_resources(void)
+{
+	unsigned int devfn_p2sb;
+	u32 value = P2SBC_HIDE;
+	struct pci_bus *bus;
+	u16 class;
 	int ret;
 
-	pdev = pci_scan_single_device(bus, devfn);
-	if (!pdev)
+	/* Get devfn for P2SB device itself */
+	ret = p2sb_get_devfn(&devfn_p2sb);
+	if (ret)
+		return ret;
+
+	bus = p2sb_get_bus(NULL);
+	if (!bus)
 		return -ENODEV;
 
-	ret = p2sb_read_bar0(pdev, mem);
+	/*
+	 * When a device with same devfn exists and its device class is not
+	 * PCI_CLASS_MEMORY_OTHER for P2SB, do not touch it.
+	 */
+	pci_bus_read_config_word(bus, devfn_p2sb, PCI_CLASS_DEVICE, &class);
+	if (!PCI_POSSIBLE_ERROR(class) && class != PCI_CLASS_MEMORY_OTHER)
+		return -ENODEV;
 
-	pci_stop_and_remove_bus_device(pdev);
+	/*
+	 * Prevent concurrent PCI bus scan from seeing the P2SB device and
+	 * removing via sysfs while it is temporarily exposed.
+	 */
+	pci_lock_rescan_remove();
+
+	/*
+	 * The BIOS prevents the P2SB device from being enumerated by the PCI
+	 * subsystem, so we need to unhide and hide it back to lookup the BAR.
+	 * Unhide the P2SB device here, if needed.
+	 */
+	pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value);
+	if (value & P2SBC_HIDE)
+		pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, 0);
+
+	ret = p2sb_scan_and_cache(bus, devfn_p2sb);
+
+	/* Hide the P2SB device, if it was hidden */
+	if (value & P2SBC_HIDE)
+		pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, P2SBC_HIDE);
+
+	pci_unlock_rescan_remove();
+
 	return ret;
 }
···
  * @devfn: PCI slot and function to communicate with
  * @mem: memory resource to be filled in
  *
- * The BIOS prevents the P2SB device from being enumerated by the PCI
- * subsystem, so we need to unhide and hide it back to lookup the BAR.
- *
- * if @bus is NULL, the bus 0 in domain 0 will be used.
+ * If @bus is NULL, the bus 0 in domain 0 will be used.
  * If @devfn is 0, it will be replaced by devfn of the P2SB device.
  *
  * Caller must provide a valid pointer to @mem.
- *
- * Locking is handled by pci_rescan_remove_lock mutex.
  *
  * Return:
  * 0 on success or appropriate errno value on error.
  */
 int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
 {
-	struct pci_dev *pdev_p2sb;
-	unsigned int devfn_p2sb;
-	u32 value = P2SBC_HIDE;
+	struct p2sb_res_cache *cache;
 	int ret;
 
-	/* Get devfn for P2SB device itself */
-	ret = p2sb_get_devfn(&devfn_p2sb);
-	if (ret)
-		return ret;
-
-	/* if @bus is NULL, use bus 0 in domain 0 */
-	bus = bus ?: pci_find_bus(0, 0);
-
-	/*
-	 * Prevent concurrent PCI bus scan from seeing the P2SB device and
-	 * removing via sysfs while it is temporarily exposed.
-	 */
-	pci_lock_rescan_remove();
-
-	/* Unhide the P2SB device, if needed */
-	pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value);
-	if (value & P2SBC_HIDE)
-		pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, 0);
-
-	pdev_p2sb = pci_scan_single_device(bus, devfn_p2sb);
-	if (devfn)
-		ret = p2sb_scan_and_read(bus, devfn, mem);
-	else
-		ret = p2sb_read_bar0(pdev_p2sb, mem);
-	pci_stop_and_remove_bus_device(pdev_p2sb);
-
-	/* Hide the P2SB device, if it was hidden */
-	if (value & P2SBC_HIDE)
-		pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, P2SBC_HIDE);
-
-	pci_unlock_rescan_remove();
-
-	if (ret)
-		return ret;
-
-	if (mem->flags == 0)
+	bus = p2sb_get_bus(bus);
+	if (!bus)
 		return -ENODEV;
 
+	if (!devfn) {
+		ret = p2sb_get_devfn(&devfn);
+		if (ret)
+			return ret;
+	}
+
+	cache = &p2sb_resources[PCI_FUNC(devfn)];
+	if (cache->bus_dev_id != bus->dev.id)
+		return -ENODEV;
+
+	if (!p2sb_valid_resource(&cache->res))
+		return -ENOENT;
+
+	memcpy(mem, &cache->res, sizeof(*mem));
 	return 0;
 }
 EXPORT_SYMBOL_GPL(p2sb_bar);
+
+static int __init p2sb_fs_init(void)
+{
+	p2sb_cache_resources();
+	return 0;
+}
+
+/*
+ * pci_rescan_remove_lock to avoid access to unhidden P2SB devices can
+ * not be locked in sysfs pci bus rescan path because of deadlock. To
+ * avoid the deadlock, access to P2SB devices with the lock at an early
+ * step in kernel initialization and cache required resources. This
+ * should happen after subsys_initcall which initializes PCI subsystem
+ * and before device_initcall which requires P2SB resources.
+ */
+fs_initcall(p2sb_fs_init);
···
  */
 
 static int storvsc_ringbuffer_size = (128 * 1024);
+static int aligned_ringbuffer_size;
 static u32 max_outstanding_req_per_channel;
 static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth);
···
 	new_sc->next_request_id_callback = storvsc_next_request_id;
 
 	ret = vmbus_open(new_sc,
-			 storvsc_ringbuffer_size,
-			 storvsc_ringbuffer_size,
+			 aligned_ringbuffer_size,
+			 aligned_ringbuffer_size,
 			 (void *)&props,
 			 sizeof(struct vmstorage_channel_properties),
 			 storvsc_on_channel_callback, new_sc);
···
 	dma_set_min_align_mask(&device->device, HV_HYP_PAGE_SIZE - 1);
 
 	stor_device->port_number = host->host_no;
-	ret = storvsc_connect_to_vsp(device, storvsc_ringbuffer_size, is_fc);
+	ret = storvsc_connect_to_vsp(device, aligned_ringbuffer_size, is_fc);
 	if (ret)
 		goto err_out1;
···
 {
 	int ret;
 
-	ret = storvsc_connect_to_vsp(hv_dev, storvsc_ringbuffer_size,
+	ret = storvsc_connect_to_vsp(hv_dev, aligned_ringbuffer_size,
 				     hv_dev_is_fc(hv_dev));
 	return ret;
 }
···
 	 * the ring buffer indices) by the max request size (which is
 	 * vmbus_channel_packet_multipage_buffer + struct vstor_packet + u64)
 	 */
+	aligned_ringbuffer_size = VMBUS_RING_SIZE(storvsc_ringbuffer_size);
 	max_outstanding_req_per_channel =
-		((storvsc_ringbuffer_size - PAGE_SIZE) /
+		((aligned_ringbuffer_size - PAGE_SIZE) /
 			ALIGN(MAX_MULTIPAGE_BUFFER_PACKET +
 			sizeof(struct vstor_packet) + sizeof(u64),
 			sizeof(u64)));
-2
drivers/scsi/virtio_scsi.c
···
 		while ((buf = virtqueue_get_buf(vq, &len)) != NULL)
 			fn(vscsi, buf);
 
-		if (unlikely(virtqueue_is_broken(vq)))
-			break;
 	} while (!virtqueue_enable_cb(vq));
 	spin_unlock_irqrestore(&virtscsi_vq->vq_lock, flags);
 }
+3-3
drivers/soc/apple/mailbox.c
···
 	of_node_put(args.np);
 
 	if (!pdev)
-		return ERR_PTR(EPROBE_DEFER);
+		return ERR_PTR(-EPROBE_DEFER);
 
 	mbox = platform_get_drvdata(pdev);
 	if (!mbox)
-		return ERR_PTR(EPROBE_DEFER);
+		return ERR_PTR(-EPROBE_DEFER);
 
 	if (!device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_CONSUMER))
-		return ERR_PTR(ENODEV);
+		return ERR_PTR(-ENODEV);
 
 	return mbox;
 }
+2-2
drivers/spi/spi-bcm-qspi.c
···
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/spi/spi.h>
-#include <linux/spi/spi-mem.h>
+#include <linux/mtd/spi-nor.h>
 #include <linux/sysfs.h>
 #include <linux/types.h>
 #include "spi-bcm-qspi.h"
···
 
 	/* non-aligned and very short transfers are handled by MSPI */
 	if (!IS_ALIGNED((uintptr_t)addr, 4) || !IS_ALIGNED((uintptr_t)buf, 4) ||
-	    len < 4)
+	    len < 4 || op->cmd.opcode == SPINOR_OP_RDSFDP)
 		mspi_read = true;
 
 	if (!has_bspi(qspi) || mspi_read)
+9-8
drivers/spi/spi-cadence.c
···
 	xspi->rx_bytes -= nrx;
 
 	while (ntx || nrx) {
+		if (nrx) {
+			u8 data = cdns_spi_read(xspi, CDNS_SPI_RXD);
+
+			if (xspi->rxbuf)
+				*xspi->rxbuf++ = data;
+
+			nrx--;
+		}
+
 		if (ntx) {
 			if (xspi->txbuf)
 				cdns_spi_write(xspi, CDNS_SPI_TXD, *xspi->txbuf++);
···
 			ntx--;
 		}
 
-		if (nrx) {
-			u8 data = cdns_spi_read(xspi, CDNS_SPI_RXD);
-
-			if (xspi->rxbuf)
-				*xspi->rxbuf++ = data;
-
-			nrx--;
-		}
 	}
 }
···
 
 /* SIFCTR */
 #define SIFCTR_TFWM_MASK	GENMASK(31, 29)	/* Transmit FIFO Watermark */
-#define SIFCTR_TFWM_64	(0 << 29)	/* Transfer Request when 64 empty stages */
-#define SIFCTR_TFWM_32	(1 << 29)	/* Transfer Request when 32 empty stages */
-#define SIFCTR_TFWM_24	(2 << 29)	/* Transfer Request when 24 empty stages */
-#define SIFCTR_TFWM_16	(3 << 29)	/* Transfer Request when 16 empty stages */
-#define SIFCTR_TFWM_12	(4 << 29)	/* Transfer Request when 12 empty stages */
-#define SIFCTR_TFWM_8	(5 << 29)	/* Transfer Request when 8 empty stages */
-#define SIFCTR_TFWM_4	(6 << 29)	/* Transfer Request when 4 empty stages */
-#define SIFCTR_TFWM_1	(7 << 29)	/* Transfer Request when 1 empty stage */
+#define SIFCTR_TFWM_64	(0UL << 29)	/* Transfer Request when 64 empty stages */
+#define SIFCTR_TFWM_32	(1UL << 29)	/* Transfer Request when 32 empty stages */
+#define SIFCTR_TFWM_24	(2UL << 29)	/* Transfer Request when 24 empty stages */
+#define SIFCTR_TFWM_16	(3UL << 29)	/* Transfer Request when 16 empty stages */
+#define SIFCTR_TFWM_12	(4UL << 29)	/* Transfer Request when 12 empty stages */
+#define SIFCTR_TFWM_8	(5UL << 29)	/* Transfer Request when 8 empty stages */
+#define SIFCTR_TFWM_4	(6UL << 29)	/* Transfer Request when 4 empty stages */
+#define SIFCTR_TFWM_1	(7UL << 29)	/* Transfer Request when 1 empty stage */
 #define SIFCTR_TFUA_MASK	GENMASK(26, 20)	/* Transmit FIFO Usable Area */
 #define SIFCTR_TFUA_SHIFT	20
 #define SIFCTR_TFUA(i)	((i) << SIFCTR_TFUA_SHIFT)
+4
drivers/spi/spi.c
···
 		pm_runtime_put_noidle(ctlr->dev.parent);
 		dev_err(&ctlr->dev, "Failed to power device: %d\n",
 			ret);
+
+		msg->status = ret;
+		spi_finalize_current_message(ctlr);
+
 		return ret;
 	}
 }
-32
drivers/thermal/intel/intel_powerclamp.c
···
  */
 #define DEFAULT_DURATION_JIFFIES (6)
 
-static unsigned int target_mwait;
 static struct dentry *debug_dir;
 static bool poll_pkg_cstate_enable;
···
 	"\tpowerclamp controls idle ratio within this window. larger\n"
 	"\twindow size results in slower response time but more smooth\n"
 	"\tclamping results. default to 2.");
-
-static void find_target_mwait(void)
-{
-	unsigned int eax, ebx, ecx, edx;
-	unsigned int highest_cstate = 0;
-	unsigned int highest_subcstate = 0;
-	int i;
-
-	if (boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF)
-		return;
-
-	cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx);
-
-	if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED) ||
-	    !(ecx & CPUID5_ECX_INTERRUPT_BREAK))
-		return;
-
-	edx >>= MWAIT_SUBSTATE_SIZE;
-	for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) {
-		if (edx & MWAIT_SUBSTATE_MASK) {
-			highest_cstate = i;
-			highest_subcstate = edx & MWAIT_SUBSTATE_MASK;
-		}
-	}
-	target_mwait = (highest_cstate << MWAIT_SUBSTATE_SIZE) |
-		(highest_subcstate - 1);
-
-}
 
 struct pkg_cstate_info {
 	bool skip;
···
 		pr_info("No package C-state available\n");
 		return -ENODEV;
 	}
-
-	/* find the deepest mwait value */
-	find_target_mwait();
 
 	return 0;
 }
+1-1
fs/bcachefs/alloc_background.c
···
 	 * This works without any other locks because this is the only
 	 * thread that removes items from the need_discard tree
 	 */
-	bch2_trans_unlock(trans);
+	bch2_trans_unlock_long(trans);
 	blkdev_issue_discard(ca->disk_sb.bdev,
 			     k.k->p.offset * ca->mi.bucket_size,
 			     ca->mi.bucket_size,
···
 struct z_erofs_decompress_req {
 	struct super_block *sb;
 	struct page **in, **out;
-
 	unsigned short pageofs_in, pageofs_out;
 	unsigned int inputsize, outputsize;
 
-	/* indicate the algorithm will be used for decompression */
-	unsigned int alg;
+	unsigned int alg;	/* the algorithm for decompression */
 	bool inplace_io, partial_decoding, fillgaps;
+	gfp_t gfp;	/* allocation flags for extra temporary buffers */
 };
 
 struct z_erofs_decompressor {
···
 	} else {
 		folio_unlock(folio);
 
-		if (!folio_test_has_hwpoisoned(folio))
+		if (!folio_test_hwpoison(folio))
 			want = nr;
 		else {
 			/*
+1-7
fs/jfs/jfs_dmap.c
···
  *	leafno	- the number of the leaf to be updated.
  *	newval	- the new value for the leaf.
  *
- * RETURN VALUES:
- *	0	- success
- *	-EIO	- i/o error
+ * RETURN VALUES: none
  */
 static int dbJoin(dmtree_t *tp, int leafno, int newval, bool is_ctl)
 {
···
 	 * get the buddy size (number of words covered) of
 	 * the new value.
 	 */
-
-	if ((newval - tp->dmt_budmin) > BUDMIN)
-		return -EIO;
-
 	budsz = BUDSIZE(newval, tp->dmt_budmin);
 
 	/* try to join.
+20-4
fs/smb/client/cached_dir.c
···
 	struct cached_fid *cfid;
 	struct cached_fids *cfids;
 	const char *npath;
+	int retries = 0, cur_sleep = 1;
 
 	if (tcon == NULL || tcon->cfids == NULL || tcon->nohandlecache ||
 	    is_smb1_server(tcon->ses->server) || (dir_cache_timeout == 0))
 		return -EOPNOTSUPP;
 
 	ses = tcon->ses;
-	server = cifs_pick_channel(ses);
 	cfids = tcon->cfids;
-
-	if (!server->ops->new_lease_key)
-		return -EIO;
 
 	if (cifs_sb->root == NULL)
 		return -ENOENT;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	oplock = SMB2_OPLOCK_LEVEL_II;
+	server = cifs_pick_channel(ses);
+
+	if (!server->ops->new_lease_key)
+		return -EIO;
 
 	utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
 	if (!utf16_path)
···
 	 */
 	cfid->has_lease = true;
 
+	if (retries) {
+		smb2_set_replay(server, &rqst[0]);
+		smb2_set_replay(server, &rqst[1]);
+	}
+
 	rc = compound_send_recv(xid, ses, server,
 				flags, 2, rqst,
 				resp_buftype, rsp_iov);
···
 		atomic_inc(&tcon->num_remote_opens);
 	}
 	kfree(utf16_path);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
 }
···
 	spin_lock_init(&cifs_inode->writers_lock);
 	cifs_inode->writers = 0;
 	cifs_inode->netfs.inode.i_blkbits = 14;  /* 2**14 = CIFS_MAX_MSGSIZE */
-	cifs_inode->server_eof = 0;
+	cifs_inode->netfs.remote_i_size = 0;
 	cifs_inode->uniqueid = 0;
 	cifs_inode->createtime = 0;
 	cifs_inode->epoch = 0;
···
 	struct inode *src_inode = file_inode(src_file);
 	struct inode *target_inode = file_inode(dst_file);
 	struct cifsInodeInfo *src_cifsi = CIFS_I(src_inode);
+	struct cifsInodeInfo *target_cifsi = CIFS_I(target_inode);
 	struct cifsFileInfo *smb_file_src;
 	struct cifsFileInfo *smb_file_target;
 	struct cifs_tcon *src_tcon;
···
 	 * Advance the EOF marker after the flush above to the end of the range
 	 * if it's short of that.
 	 */
-	if (src_cifsi->server_eof < off + len) {
+	if (src_cifsi->netfs.remote_i_size < off + len) {
 		rc = cifs_precopy_set_eof(src_inode, src_cifsi, src_tcon, xid, off + len);
 		if (rc < 0)
 			goto unlock;
···
 	/* Discard all the folios that overlap the destination region. */
 	truncate_inode_pages_range(&target_inode->i_data, fstart, fend);
 
+	fscache_invalidate(cifs_inode_cookie(target_inode), NULL,
+			   i_size_read(target_inode), 0);
+
 	rc = file_modified(dst_file);
 	if (!rc) {
 		rc = target_tcon->ses->server->ops->copychunk_range(xid,
 					smb_file_src, smb_file_target, off, len, destoff);
-		if (rc > 0 && destoff + rc > i_size_read(target_inode))
+		if (rc > 0 && destoff + rc > i_size_read(target_inode)) {
 			truncate_setsize(target_inode, destoff + rc);
+			netfs_resize_file(&target_cifsi->netfs,
+					  i_size_read(target_inode), true);
+			fscache_resize_cookie(cifs_inode_cookie(target_inode),
+					      i_size_read(target_inode));
+		}
+		if (rc > 0 && destoff + rc > target_cifsi->netfs.zero_point)
+			target_cifsi->netfs.zero_point = destoff + rc;
 	}
 
 	file_accessed(src_file);
+13-1
fs/smb/client/cifsglob.h
···
 #define CIFS_DEF_ACTIMEO (1 * HZ)
 
 /*
+ * max sleep time before retry to server
+ */
+#define CIFS_MAX_SLEEP 2000
+
+/*
  * max attribute cache timeout (jiffies) - 2^30
  */
 #define CIFS_MAX_ACTIMEO (1 << 30)
···
 	struct smbd_mr			*mr;
 #endif
 	struct cifs_credits		credits;
+	bool				replay;
 };
 
 /*
···
 	spinlock_t writers_lock;
 	unsigned int writers;		/* Number of writers on this inode */
 	unsigned long time;		/* jiffies of last update of inode */
-	u64 server_eof;		/* current file size on server -- protected by i_lock */
 	u64  uniqueid;			/* server inode number */
 	u64  createtime;		/* creation time on server */
 	__u8 lease_key[SMB2_LEASE_KEY_SIZE];	/* lease key for this inode */
···
 static inline bool is_retryable_error(int error)
 {
 	if (is_interrupt_error(error) || error == -EAGAIN)
+		return true;
+	return false;
+}
+
+static inline bool is_replayable_error(int error)
+{
+	if (error == -EAGAIN || error == -ECONNABORTED)
 		return true;
 	return false;
 }
+5-4
fs/smb/client/file.c
···
 {
 	loff_t end_of_write = offset + bytes_written;
 
-	if (end_of_write > cifsi->server_eof)
-		cifsi->server_eof = end_of_write;
+	if (end_of_write > cifsi->netfs.remote_i_size)
+		netfs_resize_file(&cifsi->netfs, end_of_write, true);
 }
 
 static ssize_t
···
 
 	spin_lock(&inode->i_lock);
 	cifs_update_eof(cifsi, wdata->offset, wdata->bytes);
-	if (cifsi->server_eof > inode->i_size)
-		i_size_write(inode, cifsi->server_eof);
+	if (cifsi->netfs.remote_i_size > inode->i_size)
+		i_size_write(inode, cifsi->netfs.remote_i_size);
 	spin_unlock(&inode->i_lock);
 
 	complete(&wdata->done);
···
 	if (wdata->cfile->invalidHandle)
 		rc = -EAGAIN;
 	else {
+		wdata->replay = true;
 #ifdef CONFIG_CIFS_SMB_DIRECT
 		if (wdata->mr) {
 			wdata->mr->need_invalidate = true;
+5-3
fs/smb/client/inode.c
···
 	fattr->cf_mtime = timestamp_truncate(fattr->cf_mtime, inode);
 	mtime = inode_get_mtime(inode);
 	if (timespec64_equal(&mtime, &fattr->cf_mtime) &&
-	    cifs_i->server_eof == fattr->cf_eof) {
+	    cifs_i->netfs.remote_i_size == fattr->cf_eof) {
 		cifs_dbg(FYI, "%s: inode %llu is unchanged\n",
 			 __func__, cifs_i->uniqueid);
 		return;
···
 	else
 		clear_bit(CIFS_INO_DELETE_PENDING, &cifs_i->flags);
 
-	cifs_i->server_eof = fattr->cf_eof;
+	cifs_i->netfs.remote_i_size = fattr->cf_eof;
 	/*
 	 * Can't safely change the file size here if the client is writing to
 	 * it due to potential races.
···
 
 set_size_out:
 	if (rc == 0) {
-		cifsInode->server_eof = attrs->ia_size;
+		netfs_resize_file(&cifsInode->netfs, attrs->ia_size, true);
 		cifs_setsize(inode, attrs->ia_size);
 		/*
 		 * i_blocks is not related to (i_size / i_blksize), but instead
···
 	if ((attrs->ia_valid & ATTR_SIZE) &&
 	    attrs->ia_size != i_size_read(inode)) {
 		truncate_setsize(inode, attrs->ia_size);
+		netfs_resize_file(&cifsInode->netfs, attrs->ia_size, true);
 		fscache_resize_cookie(cifs_inode_cookie(inode), attrs->ia_size);
 	}
···
 	if ((attrs->ia_valid & ATTR_SIZE) &&
 	    attrs->ia_size != i_size_read(inode)) {
 		truncate_setsize(inode, attrs->ia_size);
+		netfs_resize_file(&cifsInode->netfs, attrs->ia_size, true);
 		fscache_resize_cookie(cifs_inode_cookie(inode), attrs->ia_size);
 	}
···
 	unsigned int size[2];
 	void *data[2];
 	int len;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	oplock = SMB2_OPLOCK_LEVEL_NONE;
+	num_rqst = 0;
+	server = cifs_pick_channel(ses);
 
 	vars = kzalloc(sizeof(*vars), GFP_ATOMIC);
 	if (vars == NULL)
 		return -ENOMEM;
 	rqst = &vars->rqst[0];
 	rsp_iov = &vars->rsp_iov[0];
-
-	server = cifs_pick_channel(ses);
 
 	if (smb3_encryption_required(tcon))
 		flags |= CIFS_TRANSFORM_REQ;
···
 	num_rqst++;
 
 	if (cfile) {
+		if (retries)
+			for (i = 1; i < num_rqst - 2; i++)
+				smb2_set_replay(server, &rqst[i]);
+
 		rc = compound_send_recv(xid, ses, server,
					flags, num_rqst - 2,
					&rqst[1], &resp_buftype[1],
					&rsp_iov[1]);
-	} else
+	} else {
+		if (retries)
+			for (i = 0; i < num_rqst; i++)
+				smb2_set_replay(server, &rqst[i]);
+
 		rc = compound_send_recv(xid, ses, server,
					flags, num_rqst,
					rqst, resp_buftype,
					rsp_iov);
+	}
 
finished:
 	num_rqst = 0;
···
 	}
 	SMB2_close_free(&rqst[num_rqst]);
 
-	if (cfile)
-		cifsFileInfo_put(cfile);
-
 	num_cmds += 2;
 	if (out_iov && out_buftype) {
 		memcpy(out_iov, rsp_iov, num_cmds * sizeof(*out_iov));
···
 		for (i = 0; i < num_cmds; i++)
 			free_rsp_buf(resp_buftype[i], rsp_iov[i].iov_base);
 	}
+	num_cmds -= 2; /* correct num_cmds as there could be a retry */
 	kfree(vars);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
+	if (cfile)
+		cifsFileInfo_put(cfile);
+
 	return rc;
}
+127-14
fs/smb/client/smb2ops.c
···
{
 	struct smb2_compound_vars *vars;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	struct smb_rqst *rqst;
 	struct kvec *rsp_iov;
 	__le16 *utf16_path = NULL;
···
 	struct smb2_file_full_ea_info *ea = NULL;
 	struct smb2_query_info_rsp *rsp;
 	int rc, used_len = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = CIFS_CP_CREATE_CLOSE_OP;
+	oplock = SMB2_OPLOCK_LEVEL_NONE;
+	server = cifs_pick_channel(ses);
 
 	if (smb3_encryption_required(tcon))
 		flags |= CIFS_TRANSFORM_REQ;
···
 		goto sea_exit;
 	smb2_set_related(&rqst[2]);
 
+	if (retries) {
+		smb2_set_replay(server, &rqst[0]);
+		smb2_set_replay(server, &rqst[1]);
+		smb2_set_replay(server, &rqst[2]);
+	}
+
 	rc = compound_send_recv(xid, ses, server,
				flags, 3, rqst,
				resp_buftype, rsp_iov);
···
 	kfree(vars);
out_free_path:
 	kfree(utf16_path);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
#endif
···
 	struct smb_rqst *rqst;
 	struct kvec *rsp_iov;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	char __user *arg = (char __user *)p;
 	struct smb_query_info qi;
 	struct smb_query_info __user *pqi;
···
 	void *data[2];
 	int create_options = is_dir ? CREATE_NOT_FILE : CREATE_NOT_DIR;
 	void (*free_req1_func)(struct smb_rqst *r);
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = CIFS_CP_CREATE_CLOSE_OP;
+	oplock = SMB2_OPLOCK_LEVEL_NONE;
+	server = cifs_pick_channel(ses);
 
 	vars = kzalloc(sizeof(*vars), GFP_ATOMIC);
 	if (vars == NULL)
···
 		goto free_req_1;
 	smb2_set_related(&rqst[2]);
 
+	if (retries) {
+		smb2_set_replay(server, &rqst[0]);
+		smb2_set_replay(server, &rqst[1]);
+		smb2_set_replay(server, &rqst[2]);
+	}
+
 	rc = compound_send_recv(xid, ses, server,
				flags, 3, rqst,
				resp_buftype, rsp_iov);
···
 	kfree(buffer);
free_vars:
 	kfree(vars);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	struct cifs_open_parms oparms;
 	struct smb2_query_directory_rsp *qd_rsp = NULL;
 	struct smb2_create_rsp *op_rsp = NULL;
-	struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses);
-	int retry_count = 0;
+	struct TCP_Server_Info *server;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	oplock = SMB2_OPLOCK_LEVEL_NONE;
+	server = cifs_pick_channel(tcon->ses);
 
 	utf16_path = cifs_convert_path_to_utf16(path, cifs_sb);
 	if (!utf16_path)
···
 
 	smb2_set_related(&rqst[1]);
 
-again:
+	if (retries) {
+		smb2_set_replay(server, &rqst[0]);
+		smb2_set_replay(server, &rqst[1]);
+	}
+
 	rc = compound_send_recv(xid, tcon->ses, server,
				flags, 2, rqst,
				resp_buftype, rsp_iov);
-
-	if (rc == -EAGAIN && retry_count++ < 10)
-		goto again;
 
 	/* If the open failed there is nothing to do */
 	op_rsp = (struct smb2_create_rsp *)rsp_iov[0].iov_base;
···
 	SMB2_query_directory_free(&rqst[1]);
 	free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
 	free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
}
 
void
+smb2_set_replay(struct TCP_Server_Info *server, struct smb_rqst *rqst)
+{
+	struct smb2_hdr *shdr;
+
+	if (server->dialect < SMB30_PROT_ID)
+		return;
+
+	shdr = (struct smb2_hdr *)(rqst->rq_iov[0].iov_base);
+	if (shdr == NULL) {
+		cifs_dbg(FYI, "shdr NULL in smb2_set_related\n");
+		return;
+	}
+	shdr->Flags |= SMB2_FLAGS_REPLAY_OPERATION;
+}
+
+void
smb2_set_related(struct smb_rqst *rqst)
{
 	struct smb2_hdr *shdr;
···
}
 
/*
+ * helper function for exponential backoff and check if replayable
+ */
+bool smb2_should_replay(struct cifs_tcon *tcon,
+				int *pretries,
+				int *pcur_sleep)
+{
+	if (!pretries || !pcur_sleep)
+		return false;
+
+	if (tcon->retry || (*pretries)++ < tcon->ses->server->retrans) {
+		msleep(*pcur_sleep);
+		(*pcur_sleep) = ((*pcur_sleep) << 1);
+		if ((*pcur_sleep) > CIFS_MAX_SLEEP)
+			(*pcur_sleep) = CIFS_MAX_SLEEP;
+		return true;
+	}
+
+	return false;
+}
+
+/*
 * Passes the query info response back to the caller on success.
 * Caller need to free this with free_rsp_buf().
 */
···
{
 	struct smb2_compound_vars *vars;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	int flags = CIFS_CP_CREATE_CLOSE_OP;
 	struct smb_rqst *rqst;
 	int resp_buftype[3];
···
 	int rc;
 	__le16 *utf16_path;
 	struct cached_fid *cfid = NULL;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = CIFS_CP_CREATE_CLOSE_OP;
+	oplock = SMB2_OPLOCK_LEVEL_NONE;
+	server = cifs_pick_channel(ses);
 
 	if (!path)
 		path = "";
···
 		goto qic_exit;
 	smb2_set_related(&rqst[2]);
 
+	if (retries) {
+		if (!cfid) {
+			smb2_set_replay(server, &rqst[0]);
+			smb2_set_replay(server, &rqst[2]);
+		}
+		smb2_set_replay(server, &rqst[1]);
+	}
+
 	if (cfid) {
 		rc = compound_send_recv(xid, ses, server,
					flags, 1, &rqst[1],
···
 	kfree(vars);
out_free_path:
 	kfree(utf16_path);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
			cfile->fid.volatile_fid, cfile->pid, new_size);
 	if (rc >= 0) {
 		truncate_setsize(inode, new_size);
+		netfs_resize_file(&cifsi->netfs, new_size, true);
+		if (offset < cifsi->netfs.zero_point)
+			cifsi->netfs.zero_point = offset;
 		fscache_resize_cookie(cifs_inode_cookie(inode), new_size);
 	}
 }
···
 	rc = SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid,
			  cfile->fid.volatile_fid, cfile->pid, new_eof);
 	if (rc == 0) {
-		cifsi->server_eof = new_eof;
+		netfs_resize_file(&cifsi->netfs, new_eof, true);
 		cifs_setsize(inode, new_eof);
 		cifs_truncate_page(inode->i_mapping, inode->i_size);
 		truncate_setsize(inode, new_eof);
···
 	int rc;
 	unsigned int xid;
 	struct inode *inode = file_inode(file);
-	struct cifsFileInfo *cfile = file->private_data;
 	struct cifsInodeInfo *cifsi = CIFS_I(inode);
+	struct cifsFileInfo *cfile = file->private_data;
+	struct netfs_inode *ictx = &cifsi->netfs;
 	loff_t old_eof, new_eof;
 
 	xid = get_xid();
···
 		goto out_2;
 
 	truncate_pagecache_range(inode, off, old_eof);
+	ictx->zero_point = old_eof;
 
 	rc = smb2_copychunk_range(xid, cfile, cfile, off + len,
				  old_eof - off - len, off);
···
 
 	rc = 0;
 
-	cifsi->server_eof = i_size_read(inode) - len;
-	truncate_setsize(inode, cifsi->server_eof);
-	fscache_resize_cookie(cifs_inode_cookie(inode), cifsi->server_eof);
+	truncate_setsize(inode, new_eof);
+	netfs_resize_file(&cifsi->netfs, new_eof, true);
+	ictx->zero_point = new_eof;
+	fscache_resize_cookie(cifs_inode_cookie(inode), new_eof);
out_2:
 	filemap_invalidate_unlock(inode->i_mapping);
 out:
···
 	unsigned int xid;
 	struct cifsFileInfo *cfile = file->private_data;
 	struct inode *inode = file_inode(file);
+	struct cifsInodeInfo *cifsi = CIFS_I(inode);
 	__u64 count, old_eof, new_eof;
 
 	xid = get_xid();
···
 		goto out_2;
 
 	truncate_setsize(inode, new_eof);
+	netfs_resize_file(&cifsi->netfs, i_size_read(inode), true);
 	fscache_resize_cookie(cifs_inode_cookie(inode), i_size_read(inode));
 
 	rc = smb2_copychunk_range(xid, cfile, cfile, off, count, off + len);
+239-30
fs/smb/client/smb2pdu.c
···
 	pserver = server->primary_server;
 	cifs_signal_cifsd_for_reconnect(pserver, false);
skip_terminate:
-	mutex_unlock(&ses->session_mutex);
 	return -EHOSTDOWN;
 }
 
···
 	int flags = 0;
 	unsigned int total_len;
 	__le16 *utf16_path = NULL;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	n_iov = 2;
+	server = cifs_pick_channel(ses);
 
 	cifs_dbg(FYI, "mkdir\n");
 
···
 	/* no need to inc num_remote_opens because we close it just below */
 	trace_smb3_posix_mkdir_enter(xid, tcon->tid, ses->Suid, full_path, CREATE_NOT_FILE,
				    FILE_WRITE_ATTRIBUTES);
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
+
 	/* resource #4: response buffer */
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
···
 	cifs_small_buf_release(req);
err_free_path:
 	kfree(utf16_path);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	struct smb2_create_rsp *rsp = NULL;
 	struct cifs_tcon *tcon = oparms->tcon;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	struct kvec iov[SMB2_CREATE_IOV_SIZE];
 	struct kvec rsp_iov = {NULL, 0};
 	int resp_buftype = CIFS_NO_BUFFER;
 	int rc = 0;
 	int flags = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
 
 	cifs_dbg(FYI, "create/open\n");
 	if (!ses || !server)
···
 
 	trace_smb3_open_enter(xid, tcon->tid, tcon->ses->Suid, oparms->path,
		oparms->create_options, oparms->desired_access);
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags,
···
creat_exit:
 	SMB2_open_free(&rqst);
 	free_rsp_buf(resp_buftype, rsp);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	int resp_buftype = CIFS_NO_BUFFER;
 	int rc = 0;
 	int flags = 0;
+	int retries = 0, cur_sleep = 1;
+
+	if (!tcon)
+		return -EIO;
+
+	ses = tcon->ses;
+	if (!ses)
+		return -EIO;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
+
+	if (!server)
+		return -EIO;
 
 	cifs_dbg(FYI, "SMB2 IOCTL\n");
 
···
 	/* zero out returned data len, in case of error */
 	if (plen)
		*plen = 0;
-
-	if (!tcon)
-		return -EIO;
-
-	ses = tcon->ses;
-	if (!ses)
-		return -EIO;
-
-	server = cifs_pick_channel(ses);
-	if (!server)
-		return -EIO;
 
 	if (smb3_encryption_required(tcon))
 		flags |= CIFS_TRANSFORM_REQ;
···
		     in_data, indatalen, max_out_data_len);
 	if (rc)
 		goto ioctl_exit;
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags,
···
ioctl_exit:
 	SMB2_ioctl_free(&rqst);
 	free_rsp_buf(resp_buftype, rsp);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	struct smb_rqst rqst;
 	struct smb2_close_rsp *rsp = NULL;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	struct kvec iov[1];
 	struct kvec rsp_iov;
 	int resp_buftype = CIFS_NO_BUFFER;
 	int rc = 0;
 	int flags = 0;
 	bool query_attrs = false;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	query_attrs = false;
+	server = cifs_pick_channel(ses);
 
 	cifs_dbg(FYI, "Close\n");
···
		      query_attrs);
 	if (rc)
 		goto close_exit;
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
···
 		cifs_dbg(VFS, "handle cancelled close fid 0x%llx returned error %d\n",
			 persistent_fid, tmp_rc);
 	}
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	struct TCP_Server_Info *server;
 	int flags = 0;
 	bool allocated = false;
+	int retries = 0, cur_sleep = 1;
 
 	cifs_dbg(FYI, "Query Info\n");
 
 	if (!ses)
 		return -EIO;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	allocated = false;
 	server = cifs_pick_channel(ses);
+
 	if (!server)
 		return -EIO;
···
 
 	trace_smb3_query_info_enter(xid, persistent_fid, tcon->tid,
				    ses->Suid, info_class, (__u32)info_type);
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
···
qinf_exit:
 	SMB2_query_info_free(&rqst);
 	free_rsp_buf(resp_buftype, rsp);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
	 u32 *plen /* returned data len */)
{
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	struct smb_rqst rqst;
 	struct smb2_change_notify_rsp *smb_rsp;
 	struct kvec iov[1];
···
 	int resp_buftype = CIFS_NO_BUFFER;
 	int flags = 0;
 	int rc = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
 
 	cifs_dbg(FYI, "change notify\n");
 	if (!ses || !server)
···
 
 	trace_smb3_notify_enter(xid, persistent_fid, tcon->tid, ses->Suid,
				(u8)watch_tree, completion_filter);
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
+
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
 
···
 	if (rqst.rq_iov)
 		cifs_small_buf_release(rqst.rq_iov[0].iov_base); /* request */
 	free_rsp_buf(resp_buftype, rsp_iov.iov_base);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	struct smb_rqst rqst;
 	struct kvec iov[1];
 	struct kvec rsp_iov = {NULL, 0};
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	int resp_buftype = CIFS_NO_BUFFER;
 	int flags = 0;
 	int rc = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
 
 	cifs_dbg(FYI, "flush\n");
 	if (!ses || !(ses->server))
···
 		goto flush_exit;
 
 	trace_smb3_flush_enter(xid, persistent_fid, tcon->tid, ses->Suid);
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
+
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
 
···
 flush_exit:
 	SMB2_flush_free(&rqst);
 	free_rsp_buf(resp_buftype, rsp_iov.iov_base);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	struct cifs_io_parms *io_parms = NULL;
 	int credit_request;
 
-	if (!wdata->server)
+	if (!wdata->server || wdata->replay)
 		server = wdata->server = cifs_pick_channel(tcon->ses);
 
 	/*
···
 	rqst.rq_nvec = 1;
 	rqst.rq_iter = wdata->iter;
 	rqst.rq_iter_size = iov_iter_count(&rqst.rq_iter);
+	if (wdata->replay)
+		smb2_set_replay(server, &rqst);
#ifdef CONFIG_CIFS_SMB_DIRECT
 	if (wdata->mr)
 		iov[0].iov_len += sizeof(struct smbd_buffer_descriptor_v1);
···
 	int flags = 0;
 	unsigned int total_len;
 	struct TCP_Server_Info *server;
+	int retries = 0, cur_sleep = 1;
 
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
 	*nbytes = 0;
-
-	if (n_vec < 1)
-		return rc;
-
 	if (!io_parms->server)
 		io_parms->server = cifs_pick_channel(io_parms->tcon->ses);
 	server = io_parms->server;
 	if (server == NULL)
 		return -ECONNABORTED;
+
+	if (n_vec < 1)
+		return rc;
 
 	rc = smb2_plain_req_init(SMB2_WRITE, io_parms->tcon, server,
				 (void **) &req, &total_len);
···
 	rqst.rq_iov = iov;
 	rqst.rq_nvec = n_vec + 1;
 
+	if (retries)
+		smb2_set_replay(server, &rqst);
+
 	rc = cifs_send_recv(xid, io_parms->tcon->ses, server,
			    &rqst,
			    &resp_buftype, flags, &rsp_iov);
···
 
 	cifs_small_buf_release(req);
 	free_rsp_buf(resp_buftype, rsp);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(io_parms->tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	struct kvec rsp_iov;
 	int rc = 0;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	int flags = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
 
 	if (!ses || !(ses->server))
 		return -EIO;
···
		       srch_inf->info_level);
 	if (rc)
 		goto qdir_exit;
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
···
qdir_exit:
 	SMB2_query_directory_free(&rqst);
 	free_rsp_buf(resp_buftype, rsp_iov.iov_base);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	int rc = 0;
 	int resp_buftype;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	int flags = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
 
 	if (!ses || !server)
 		return -EIO;
···
 		return rc;
 	}
 
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags,
···
 
 	free_rsp_buf(resp_buftype, rsp);
 	kfree(iov);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	int rc;
 	struct smb2_oplock_break *req = NULL;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	int flags = CIFS_OBREAK_OP;
 	unsigned int total_len;
 	struct kvec iov[1];
 	struct kvec rsp_iov;
 	int resp_buf_type;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = CIFS_OBREAK_OP;
+	server = cifs_pick_channel(ses);
 
 	cifs_dbg(FYI, "SMB2_oplock_break\n");
 	rc = smb2_plain_req_init(SMB2_OPLOCK_BREAK, tcon, server,
···
 	rqst.rq_iov = iov;
 	rqst.rq_nvec = 1;
 
+	if (retries)
+		smb2_set_replay(server, &rqst);
+
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buf_type, flags, &rsp_iov);
 	cifs_small_buf_release(req);
-
 	if (rc) {
 		cifs_stats_fail_inc(tcon, SMB2_OPLOCK_BREAK_HE);
 		cifs_dbg(FYI, "Send error in Oplock Break = %d\n", rc);
 	}
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
 
 	return rc;
}
···
 	int rc = 0;
 	int resp_buftype;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	FILE_SYSTEM_POSIX_INFO *info = NULL;
 	int flags = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
 
 	rc = build_qfs_info_req(&iov, tcon, server,
				FS_POSIX_INFORMATION,
···
 	memset(&rqst, 0, sizeof(struct smb_rqst));
 	rqst.rq_iov = &iov;
 	rqst.rq_nvec = 1;
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
···
 
posix_qfsinf_exit:
 	free_rsp_buf(resp_buftype, rsp_iov.iov_base);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	int rc = 0;
 	int resp_buftype;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	struct smb2_fs_full_size_info *info = NULL;
 	int flags = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
 
 	rc = build_qfs_info_req(&iov, tcon, server,
				FS_FULL_SIZE_INFORMATION,
···
 	memset(&rqst, 0, sizeof(struct smb_rqst));
 	rqst.rq_iov = &iov;
 	rqst.rq_nvec = 1;
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
···
 
qfsinf_exit:
 	free_rsp_buf(resp_buftype, rsp_iov.iov_base);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	int rc = 0;
 	int resp_buftype, max_len, min_len;
 	struct cifs_ses *ses = tcon->ses;
-	struct TCP_Server_Info *server = cifs_pick_channel(ses);
+	struct TCP_Server_Info *server;
 	unsigned int rsp_len, offset;
 	int flags = 0;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = 0;
+	server = cifs_pick_channel(ses);
 
 	if (level == FS_DEVICE_INFORMATION) {
 		max_len = sizeof(FILE_SYSTEM_DEVICE_INFO);
···
 	memset(&rqst, 0, sizeof(struct smb_rqst));
 	rqst.rq_iov = &iov;
 	rqst.rq_nvec = 1;
+
+	if (retries)
+		smb2_set_replay(server, &rqst);
 
 	rc = cifs_send_recv(xid, ses, server,
			    &rqst, &resp_buftype, flags, &rsp_iov);
···
 
qfsattr_exit:
 	free_rsp_buf(resp_buftype, rsp_iov.iov_base);
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
+
 	return rc;
}
···
 	unsigned int count;
 	int flags = CIFS_NO_RSP_BUF;
 	unsigned int total_len;
-	struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses);
+	struct TCP_Server_Info *server;
+	int retries = 0, cur_sleep = 1;
+
+replay_again:
+	/* reinitialize for possible replay */
+	flags = CIFS_NO_RSP_BUF;
+	server = cifs_pick_channel(tcon->ses);
 
 	cifs_dbg(FYI, "smb2_lockv num lock %d\n", num_lock);
 
···
 	rqst.rq_iov = iov;
 	rqst.rq_nvec = 2;
 
+	if (retries)
+		smb2_set_replay(server, &rqst);
+
 	rc = cifs_send_recv(xid, tcon->ses, server,
			    &rqst, &resp_buf_type, flags,
			    &rsp_iov);
···
 		trace_smb3_lock_err(xid, persist_fid, tcon->tid,
				    tcon->ses->Suid, rc);
 	}
+
+	if (is_replayable_error(rc) &&
+	    smb2_should_replay(tcon, &retries, &cur_sleep))
+		goto replay_again;
 
 	return rc;
}
+5
fs/smb/client/smb2proto.h
···
extern void smb2_set_next_command(struct cifs_tcon *tcon,
				  struct smb_rqst *rqst);
extern void smb2_set_related(struct smb_rqst *rqst);
+extern void smb2_set_replay(struct TCP_Server_Info *server,
+			    struct smb_rqst *rqst);
+extern bool smb2_should_replay(struct cifs_tcon *tcon,
+			       int *pretries,
+			       int *pcur_sleep);
 
/*
 * SMB2 Worker functions - most of protocol specific implementation details
-7
fs/smb/client/smbencrypt.c
···
#include "cifsproto.h"
#include "../common/md4.h"
 
-#ifndef false
-#define false 0
-#endif
-#ifndef true
-#define true 1
-#endif
-
/* following came from the other byteorder.h to avoid include conflicts */
#define CVAL(buf,pos) (((unsigned char *)(buf))[pos])
#define SSVALX(buf,pos,val) (CVAL(buf,pos)=(val)&0xFF,CVAL(buf,pos+1)=(val)>>8)
+12-2
fs/smb/client/transport.c
···
			 server->conn_id, server->hostname);
 	}
smbd_done:
-	if (rc < 0 && rc != -EINTR)
+	/*
+	 * there's hardly any use for the layers above to know the
+	 * actual error code here. All they should do at this point is
+	 * to retry the connection and hope it goes away.
+	 */
+	if (rc < 0 && rc != -EINTR && rc != -EAGAIN) {
 		cifs_server_dbg(VFS, "Error %d sending data on socket to server\n",
			 rc);
-	else if (rc > 0)
+		rc = -ECONNABORTED;
+		cifs_signal_cifsd_for_reconnect(server, false);
+	} else if (rc > 0)
 		rc = 0;
out:
 	cifs_in_send_dec(server);
···
 	for (i = 0; i < ses->chan_count; i++) {
 		server = ses->chans[i].server;
 		if (!server || server->terminate)
+			continue;
+
+		if (CIFS_CHAN_NEEDS_RECONNECT(ses, i))
 			continue;
 
 		/*
···
 * @t:		TCP transport instance
 * @buf:	buffer to store read data from socket
 * @to_read:	number of bytes to read from socket
+ * @max_retries: number of retries if reading from socket fails
 *
 * Return:	on success return number of bytes read from socket,
 *		otherwise return error number
···
 
/**
 * create_socket - create socket for ksmbd/0
+ * @iface:      interface to bind the created socket to
 *
 * Return: 0 on success, error number otherwise
 */
-38
fs/tracefs/event_inode.c
···
 	inode->i_gid = attr->gid;
}
 
-static void update_gid(struct eventfs_inode *ei, kgid_t gid, int level)
-{
-	struct eventfs_inode *ei_child;
-
-	/* at most we have events/system/event */
-	if (WARN_ON_ONCE(level > 3))
-		return;
-
-	ei->attr.gid = gid;
-
-	if (ei->entry_attrs) {
-		for (int i = 0; i < ei->nr_entries; i++) {
-			ei->entry_attrs[i].gid = gid;
-		}
-	}
-
-	/*
-	 * Only eventfs_inode with dentries are updated, make sure
-	 * all eventfs_inodes are updated. If one of the children
-	 * do not have a dentry, this function must traverse it.
-	 */
-	list_for_each_entry_srcu(ei_child, &ei->children, list,
-				 srcu_read_lock_held(&eventfs_srcu)) {
-		if (!ei_child->dentry)
-			update_gid(ei_child, gid, level + 1);
-	}
-}
-
-void eventfs_update_gid(struct dentry *dentry, kgid_t gid)
-{
-	struct eventfs_inode *ei = dentry->d_fsdata;
-	int idx;
-
-	idx = srcu_read_lock(&eventfs_srcu);
-	update_gid(ei, gid, 0);
-	srcu_read_unlock(&eventfs_srcu, idx);
-}
-
/**
 * create_file - create a file in the tracefs filesystem
 * @name:   the name of the file to create.
···
 
 	mp->m_super = sb;
 
+	/*
+	 * Copy VFS mount flags from the context now that all parameter parsing
+	 * is guaranteed to have been completed by either the old mount API or
+	 * the newer fsopen/fsconfig API.
+	 */
+	if (fc->sb_flags & SB_RDONLY)
+		set_bit(XFS_OPSTATE_READONLY, &mp->m_opstate);
+	if (fc->sb_flags & SB_DIRSYNC)
+		mp->m_features |= XFS_FEAT_DIRSYNC;
+	if (fc->sb_flags & SB_SYNCHRONOUS)
+		mp->m_features |= XFS_FEAT_WSYNC;
+
 	error = xfs_fs_validate_params(mp);
 	if (error)
 		return error;
···
 	.free			= xfs_fs_free,
 };
 
+/*
+ * WARNING: do not initialise any parameters in this function that depend on
+ * mount option parsing having already been performed as this can be called from
+ * fsopen() before any parameters have been set.
+ */
 static int xfs_init_fs_context(
 	struct fs_context		*fc)
 {
···
 	mp->m_logbufs = -1;
 	mp->m_logbsize = -1;
 	mp->m_allocsize_log = 16; /* 64k */
-
-	/*
-	 * Copy binary VFS mount flags we are interested in.
-	 */
-	if (fc->sb_flags & SB_RDONLY)
-		set_bit(XFS_OPSTATE_READONLY, &mp->m_opstate);
-	if (fc->sb_flags & SB_DIRSYNC)
-		mp->m_features |= XFS_FEAT_DIRSYNC;
-	if (fc->sb_flags & SB_SYNCHRONOUS)
-		mp->m_features |= XFS_FEAT_WSYNC;
 
 	fc->s_fs_info = mp;
 	fc->ops = &xfs_context_ops;
-11
include/linux/hid_bpf.h
···
 int hid_bpf_device_event(struct hid_bpf_ctx *ctx);
 int hid_bpf_rdesc_fixup(struct hid_bpf_ctx *ctx);
 
-/* Following functions are kfunc that we export to BPF programs */
-/* available everywhere in HID-BPF */
-__u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t __sz);
-
-/* only available in syscall */
-int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags);
-int hid_bpf_hw_request(struct hid_bpf_ctx *ctx, __u8 *buf, size_t buf__sz,
-		       enum hid_report_type rtype, enum hid_class_request reqtype);
-struct hid_bpf_ctx *hid_bpf_allocate_context(unsigned int hid_id);
-void hid_bpf_release_context(struct hid_bpf_ctx *ctx);
-
 /*
  * Below is HID internal
  */
+1-1
include/linux/libata.h
···
 
 /*
  * Link power management policy: If you alter this, you also need to
- * alter libata-scsi.c (for the ascii descriptions)
+ * alter libata-sata.c (for the ascii descriptions)
 */
 enum ata_lpm_policy {
 	ATA_LPM_UNKNOWN,
···
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
 	ms = __pfn_to_section(pfn);
-	rcu_read_lock();
+	rcu_read_lock_sched();
 	if (!valid_section(ms)) {
-		rcu_read_unlock();
+		rcu_read_unlock_sched();
 		return 0;
 	}
 	/*
···
 	 * the entire section-sized span.
 	 */
 	ret = early_section(ms) || pfn_section_valid(ms, pfn);
-	rcu_read_unlock();
+	rcu_read_unlock_sched();
 
 	return ret;
 }
+4
include/linux/netfilter/ipset/ip_set.h
···
 	/* Return true if "b" set is the same as "a"
 	 * according to the create set parameters */
 	bool (*same_set)(const struct ip_set *a, const struct ip_set *b);
+	/* Cancel ongoing garbage collectors before destroying the set */
+	void (*cancel_gc)(struct ip_set *set);
 	/* Region-locking is used */
 	bool region_lock;
 };
···
 
 /* A generic IP set */
 struct ip_set {
+	/* For call_rcu in destroy */
+	struct rcu_head rcu;
 	/* The name of the set */
 	char name[IPSET_MAXNAMELEN];
 	/* Lock protecting the set data */
+1-1
include/linux/spi/spi.h
···
 #include <uapi/linux/spi/spi.h>
 
 /* Max no. of CS supported per spi device */
-#define SPI_CS_CNT_MAX 4
+#define SPI_CS_CNT_MAX 16
 
 struct dma_chan;
 struct software_node;
+1
include/linux/syscalls.h
···
 #define __TYPE_IS_LL(t) (__TYPE_AS(t, 0LL) || __TYPE_AS(t, 0ULL))
 #define __SC_LONG(t, a) __typeof(__builtin_choose_expr(__TYPE_IS_LL(t), 0LL, 0L)) a
 #define __SC_CAST(t, a)	(__force t) a
+#define __SC_TYPE(t, a)	t
 #define __SC_ARGS(t, a)	a
 #define __SC_TEST(t, a) (void)BUILD_BUG_ON_ZERO(!__TYPE_IS_LL(t) && sizeof(t) > sizeof(long))
 
+14-6
include/net/af_unix.h
···
 
 #define UNIXCB(skb)	(*(struct unix_skb_parms *)&((skb)->cb))
 
-#define unix_state_lock(s)	spin_lock(&unix_sk(s)->lock)
-#define unix_state_unlock(s)	spin_unlock(&unix_sk(s)->lock)
-#define unix_state_lock_nested(s) \
-				spin_lock_nested(&unix_sk(s)->lock, \
-				SINGLE_DEPTH_NESTING)
-
 /* The AF_UNIX socket */
 struct unix_sock {
 	/* WARNING: sk has to be the first member */
···
 
 #define unix_sk(ptr) container_of_const(ptr, struct unix_sock, sk)
 #define unix_peer(sk) (unix_sk(sk)->peer)
+
+#define unix_state_lock(s)	spin_lock(&unix_sk(s)->lock)
+#define unix_state_unlock(s)	spin_unlock(&unix_sk(s)->lock)
+enum unix_socket_lock_class {
+	U_LOCK_NORMAL,
+	U_LOCK_SECOND,	/* for double locking, see unix_state_double_lock(). */
+	U_LOCK_DIAG,	/* used while dumping icons, see sk_diag_dump_icons(). */
+};
+
+static inline void unix_state_lock_nested(struct sock *sk,
+					  enum unix_socket_lock_class subclass)
+{
+	spin_lock_nested(&unix_sk(sk)->lock, subclass);
+}
 
 #define peer_wait peer_wq.wait
 
···
 #define DRM_IVPU_PARAM_CORE_CLOCK_RATE	    3
 #define DRM_IVPU_PARAM_NUM_CONTEXTS	    4
 #define DRM_IVPU_PARAM_CONTEXT_BASE_ADDRESS 5
-#define DRM_IVPU_PARAM_CONTEXT_PRIORITY	    6
+#define DRM_IVPU_PARAM_CONTEXT_PRIORITY	    6 /* Deprecated */
 #define DRM_IVPU_PARAM_CONTEXT_ID	    7
 #define DRM_IVPU_PARAM_FW_API_VERSION	    8
 #define DRM_IVPU_PARAM_ENGINE_HEARTBEAT	    9
···
 
 #define DRM_IVPU_PLATFORM_TYPE_SILICON	    0
 
+/* Deprecated, use DRM_IVPU_JOB_PRIORITY */
 #define DRM_IVPU_CONTEXT_PRIORITY_IDLE	    0
 #define DRM_IVPU_CONTEXT_PRIORITY_NORMAL    1
 #define DRM_IVPU_CONTEXT_PRIORITY_FOCUS	    2
 #define DRM_IVPU_CONTEXT_PRIORITY_REALTIME  3
+
+#define DRM_IVPU_JOB_PRIORITY_DEFAULT  0
+#define DRM_IVPU_JOB_PRIORITY_IDLE     1
+#define DRM_IVPU_JOB_PRIORITY_NORMAL   2
+#define DRM_IVPU_JOB_PRIORITY_FOCUS    3
+#define DRM_IVPU_JOB_PRIORITY_REALTIME 4
 
 /**
  * DRM_IVPU_CAP_METRIC_STREAMER
···
  *
  * %DRM_IVPU_PARAM_CONTEXT_BASE_ADDRESS:
  * Lowest VPU virtual address available in the current context (read-only)
- *
- * %DRM_IVPU_PARAM_CONTEXT_PRIORITY:
- * Value of current context scheduling priority (read-write).
- * See DRM_IVPU_CONTEXT_PRIORITY_* for possible values.
  *
  * %DRM_IVPU_PARAM_CONTEXT_ID:
  * Current context ID, always greater than 0 (read-only)
···
 	 * to be executed. The offset has to be 8-byte aligned.
 	 */
 	__u32 commands_offset;
+
+	/**
+	 * @priority:
+	 *
+	 * Priority to be set for related job command queue, can be one of the following:
+	 * %DRM_IVPU_JOB_PRIORITY_DEFAULT
+	 * %DRM_IVPU_JOB_PRIORITY_IDLE
+	 * %DRM_IVPU_JOB_PRIORITY_NORMAL
+	 * %DRM_IVPU_JOB_PRIORITY_FOCUS
+	 * %DRM_IVPU_JOB_PRIORITY_REALTIME
+	 */
+	__u32 priority;
 };
 
 /* drm_ivpu_bo_wait job status codes */
···
 	if (flags & ~IORING_FIXED_FD_NO_CLOEXEC)
 		return -EINVAL;
 
+	/* ensure the task's creds are used when installing/receiving fds */
+	if (req->flags & REQ_F_CREDS)
+		return -EPERM;
+
 	/* default to O_CLOEXEC, disable if IORING_FIXED_FD_NO_CLOEXEC is set */
 	ifi = io_kiocb_to_cmd(req, struct io_fixed_install);
 	ifi->o_flags = O_CLOEXEC;
+1-1
kernel/events/uprobes.c
···
 		}
 	}
 
-	ret = __replace_page(vma, vaddr, old_page, new_page);
+	ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page);
 	if (new_page)
 		put_page(new_page);
 put_old:
+12-3
kernel/futex/core.c
···
 }
 
 /*
- * PI futexes can not be requeued and must remove themselves from the
- * hash bucket. The hash bucket lock (i.e. lock_ptr) is held.
+ * PI futexes can not be requeued and must remove themselves from the hash
+ * bucket. The hash bucket lock (i.e. lock_ptr) is held.
 */
 void futex_unqueue_pi(struct futex_q *q)
 {
-	__futex_unqueue(q);
+	/*
+	 * If the lock was not acquired (due to timeout or signal) then the
+	 * rt_waiter is removed before futex_q is. If this is observed by
+	 * an unlocker after dropping the rtmutex wait lock and before
+	 * acquiring the hash bucket lock, then the unlocker dequeues the
+	 * futex_q from the hash bucket list to guarantee consistent state
+	 * vs. userspace. Therefore the dequeue here must be conditional.
+	 */
+	if (!plist_node_empty(&q->list))
+		__futex_unqueue(q);
 
 	BUG_ON(!q->pi_state);
 	put_pi_state(q->pi_state);
+8-3
kernel/futex/pi.c
···
 
 	hb = futex_hash(&key);
 	spin_lock(&hb->lock);
+retry_hb:
 
 	/*
 	 * Check waiters first. We do not trust user space values at
···
 	/*
 	 * Futex vs rt_mutex waiter state -- if there are no rt_mutex
 	 * waiters even though futex thinks there are, then the waiter
-	 * is leaving and the uncontended path is safe to take.
+	 * is leaving. The entry needs to be removed from the list so a
+	 * new futex_lock_pi() is not using this stale PI-state while
+	 * the futex is available in user space again.
+	 * There can be more than one task on its way out so it needs
+	 * to retry.
 	 */
 	rt_waiter = rt_mutex_top_waiter(&pi_state->pi_mutex);
 	if (!rt_waiter) {
+		__futex_unqueue(top_waiter);
 		raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
-		goto do_uncontended;
+		goto retry_hb;
 	}
 
 	get_pi_state(pi_state);
···
 		return ret;
 	}
 
-do_uncontended:
 	/*
 	 * We have no kernel internal state, i.e. no waiters in the
 	 * kernel. Waiters which are about to queue themselves are stuck
···
 				  struct event_trigger_data *data,
 				  struct trace_event_file *file)
 {
-	if (tracing_alloc_snapshot_instance(file->tr) != 0)
-		return 0;
+	int ret = tracing_alloc_snapshot_instance(file->tr);
+
+	if (ret < 0)
+		return ret;
 
 	return register_trigger(glob, data, file);
 }
+2-2
lib/kunit/device.c
···
 	int error;
 
 	kunit_bus_device = root_device_register("kunit");
-	if (!kunit_bus_device)
-		return -ENOMEM;
+	if (IS_ERR(kunit_bus_device))
+		return PTR_ERR(kunit_bus_device);
 
 	error = bus_register(&kunit_bus_type);
 	if (error)
+4
lib/kunit/executor.c
···
 	kfree(suite_set.start);
 }
 
+/*
+ * Filter and reallocate test suites. Must return the filtered test suites set
+ * allocated at a valid virtual address or NULL in case of error.
+ */
 struct kunit_suite_set
 kunit_filter_suites(const struct kunit_suite_set *suite_set,
 		    const char *filter_glob,
+1-1
lib/kunit/kunit-test.c
···
 	long action_was_run = 0;
 
 	test_device = kunit_device_register(test, "my_device");
-	KUNIT_ASSERT_NOT_NULL(test, test_device);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_device);
 
 	/* Add an action to verify cleanup. */
 	devm_add_action(test_device, test_dev_action, &action_was_run);
+11-3
lib/kunit/test.c
···
 #include <linux/panic.h>
 #include <linux/sched/debug.h>
 #include <linux/sched.h>
+#include <linux/mm.h>
 
 #include "debugfs.h"
 #include "device-impl.h"
···
 	};
 	const char *action = kunit_action();
 
+	/*
+	 * Check if the start address is a valid virtual address to detect
+	 * if the module load sequence has failed and the suite set has not
+	 * been initialized and filtered.
+	 */
+	if (!suite_set.start || !virt_addr_valid(suite_set.start))
+		return;
+
 	if (!action)
 		__kunit_test_suites_exit(mod->kunit_suites,
 					 mod->num_kunit_suites);
 
-	if (suite_set.start)
-		kunit_free_suite_set(suite_set);
+	kunit_free_suite_set(suite_set);
 }
 
 static int kunit_module_notify(struct notifier_block *nb, unsigned long val,
···
 
 	switch (val) {
 	case MODULE_STATE_LIVE:
+		kunit_module_init(mod);
 		break;
 	case MODULE_STATE_GOING:
 		kunit_module_exit(mod);
 		break;
 	case MODULE_STATE_COMING:
-		kunit_module_init(mod);
 		break;
 	case MODULE_STATE_UNFORMED:
 		break;
+263-110
lib/stackdepot.c
···
 
 #define pr_fmt(fmt) "stackdepot: " fmt
 
+#include <linux/debugfs.h>
 #include <linux/gfp.h>
 #include <linux/jhash.h>
 #include <linux/kernel.h>
···
 #include <linux/list.h>
 #include <linux/mm.h>
 #include <linux/mutex.h>
-#include <linux/percpu.h>
 #include <linux/printk.h>
+#include <linux/rculist.h>
+#include <linux/rcupdate.h>
 #include <linux/refcount.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
···
 };
 
 struct stack_record {
-	struct list_head list;		/* Links in hash table or freelist */
+	struct list_head hash_list;	/* Links in the hash table */
 	u32 hash;			/* Hash in hash table */
 	u32 size;			/* Number of stored frames */
-	union handle_parts handle;
+	union handle_parts handle;	/* Constant after initialization */
 	refcount_t count;
-	unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES];	/* Frames */
+	union {
+		unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES];	/* Frames */
+		struct {
+			/*
+			 * An important invariant of the implementation is to
+			 * only place a stack record onto the freelist iff its
+			 * refcount is zero. Because stack records with a zero
+			 * refcount are never considered as valid, it is safe to
+			 * union @entries and freelist management state below.
+			 * Conversely, as soon as an entry is off the freelist
+			 * and its refcount becomes non-zero, the below must not
+			 * be accessed until being placed back on the freelist.
+			 */
+			struct list_head free_list;	/* Links in the freelist */
+			unsigned long rcu_state;	/* RCU cookie */
+		};
+	};
 };
 
 #define DEPOT_STACK_RECORD_SIZE \
···
  * yet allocated or if the limit on the number of pools is reached.
  */
 static bool new_pool_required = true;
-/* Lock that protects the variables above. */
-static DEFINE_RWLOCK(pool_rwlock);
+/* The lock must be held when performing pool or freelist modifications. */
+static DEFINE_RAW_SPINLOCK(pool_lock);
+
+/* Statistics counters for debugfs. */
+enum depot_counter_id {
+	DEPOT_COUNTER_ALLOCS,
+	DEPOT_COUNTER_FREES,
+	DEPOT_COUNTER_INUSE,
+	DEPOT_COUNTER_FREELIST_SIZE,
+	DEPOT_COUNTER_COUNT,
+};
+static long counters[DEPOT_COUNTER_COUNT];
+static const char *const counter_names[] = {
+	[DEPOT_COUNTER_ALLOCS]		= "allocations",
+	[DEPOT_COUNTER_FREES]		= "frees",
+	[DEPOT_COUNTER_INUSE]		= "in_use",
+	[DEPOT_COUNTER_FREELIST_SIZE]	= "freelist_size",
+};
+static_assert(ARRAY_SIZE(counter_names) == DEPOT_COUNTER_COUNT);
 
 static int __init disable_stack_depot(char *str)
 {
···
 }
 EXPORT_SYMBOL_GPL(stack_depot_init);
 
-/* Initializes a stack depol pool. */
+/*
+ * Initializes new stack depot @pool, release all its entries to the freelist,
+ * and update the list of pools.
+ */
 static void depot_init_pool(void *pool)
 {
 	int offset;
 
-	lockdep_assert_held_write(&pool_rwlock);
-
-	WARN_ON(!list_empty(&free_stacks));
+	lockdep_assert_held(&pool_lock);
 
 	/* Initialize handles and link stack records into the freelist. */
 	for (offset = 0; offset <= DEPOT_POOL_SIZE - DEPOT_STACK_RECORD_SIZE;
···
 		stack->handle.offset = offset >> DEPOT_STACK_ALIGN;
 		stack->handle.extra = 0;
 
-		list_add(&stack->list, &free_stacks);
+		/*
+		 * Stack traces of size 0 are never saved, and we can simply use
+		 * the size field as an indicator if this is a new unused stack
+		 * record in the freelist.
+		 */
+		stack->size = 0;
+
+		INIT_LIST_HEAD(&stack->hash_list);
+		/*
+		 * Add to the freelist front to prioritize never-used entries:
+		 * required in case there are entries in the freelist, but their
+		 * RCU cookie still belongs to the current RCU grace period
+		 * (there can still be concurrent readers).
+		 */
+		list_add(&stack->free_list, &free_stacks);
+		counters[DEPOT_COUNTER_FREELIST_SIZE]++;
 	}
 
 	/* Save reference to the pool to be used by depot_fetch_stack(). */
 	stack_pools[pools_num] = pool;
-	pools_num++;
+
+	/* Pairs with concurrent READ_ONCE() in depot_fetch_stack(). */
+	WRITE_ONCE(pools_num, pools_num + 1);
+	ASSERT_EXCLUSIVE_WRITER(pools_num);
 }
 
 /* Keeps the preallocated memory to be used for a new stack depot pool. */
 static void depot_keep_new_pool(void **prealloc)
 {
-	lockdep_assert_held_write(&pool_rwlock);
+	lockdep_assert_held(&pool_lock);
 
 	/*
 	 * If a new pool is already saved or the maximum number of
···
 	 * number of pools is reached. In either case, take note that
 	 * keeping another pool is not required.
 	 */
-	new_pool_required = false;
+	WRITE_ONCE(new_pool_required, false);
 }
 
-/* Updates references to the current and the next stack depot pools. */
-static bool depot_update_pools(void **prealloc)
+/*
+ * Try to initialize a new stack depot pool from either a previous or the
+ * current pre-allocation, and release all its entries to the freelist.
+ */
+static bool depot_try_init_pool(void **prealloc)
 {
-	lockdep_assert_held_write(&pool_rwlock);
-
-	/* Check if we still have objects in the freelist. */
-	if (!list_empty(&free_stacks))
-		goto out_keep_prealloc;
+	lockdep_assert_held(&pool_lock);
 
 	/* Check if we have a new pool saved and use it. */
 	if (new_pool) {
···
 
 		/* Take note that we might need a new new_pool. */
 		if (pools_num < DEPOT_MAX_POOLS)
-			new_pool_required = true;
+			WRITE_ONCE(new_pool_required, true);
 
-		/* Try keeping the preallocated memory for new_pool. */
-		goto out_keep_prealloc;
+		return true;
 	}
 
 	/* Bail out if we reached the pool limit. */
···
 	}
 
 	return false;
+}
 
-out_keep_prealloc:
-	/* Keep the preallocated memory for a new pool if required. */
-	if (*prealloc)
-		depot_keep_new_pool(prealloc);
-	return true;
+/* Try to find next free usable entry. */
+static struct stack_record *depot_pop_free(void)
+{
+	struct stack_record *stack;
+
+	lockdep_assert_held(&pool_lock);
+
+	if (list_empty(&free_stacks))
+		return NULL;
+
+	/*
+	 * We maintain the invariant that the elements in front are least
+	 * recently used, and are therefore more likely to be associated with an
+	 * RCU grace period in the past. Consequently it is sufficient to only
+	 * check the first entry.
+	 */
+	stack = list_first_entry(&free_stacks, struct stack_record, free_list);
+	if (stack->size && !poll_state_synchronize_rcu(stack->rcu_state))
+		return NULL;
+
+	list_del(&stack->free_list);
+	counters[DEPOT_COUNTER_FREELIST_SIZE]--;
+
+	return stack;
 }
 
 /* Allocates a new stack in a stack depot pool. */
···
 {
 	struct stack_record *stack;
 
-	lockdep_assert_held_write(&pool_rwlock);
+	lockdep_assert_held(&pool_lock);
 
-	/* Update current and new pools if required and possible. */
-	if (!depot_update_pools(prealloc))
+	/* This should already be checked by public API entry points. */
+	if (WARN_ON_ONCE(!size))
 		return NULL;
 
 	/* Check if we have a stack record to save the stack trace. */
-	if (list_empty(&free_stacks))
-		return NULL;
-
-	/* Get and unlink the first entry from the freelist. */
-	stack = list_first_entry(&free_stacks, struct stack_record, list);
-	list_del(&stack->list);
+	stack = depot_pop_free();
+	if (!stack) {
+		/* No usable entries on the freelist - try to refill the freelist. */
+		if (!depot_try_init_pool(prealloc))
+			return NULL;
+		stack = depot_pop_free();
+		if (WARN_ON(!stack))
+			return NULL;
+	}
 
 	/* Limit number of saved frames to CONFIG_STACKDEPOT_MAX_FRAMES. */
 	if (size > CONFIG_STACKDEPOT_MAX_FRAMES)
···
 	 */
 	kmsan_unpoison_memory(stack, DEPOT_STACK_RECORD_SIZE);
 
+	counters[DEPOT_COUNTER_ALLOCS]++;
+	counters[DEPOT_COUNTER_INUSE]++;
 	return stack;
 }
 
 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 {
+	const int pools_num_cached = READ_ONCE(pools_num);
 	union handle_parts parts = { .handle = handle };
 	void *pool;
 	size_t offset = parts.offset << DEPOT_STACK_ALIGN;
 	struct stack_record *stack;
 
-	lockdep_assert_held(&pool_rwlock);
+	lockdep_assert_not_held(&pool_lock);
 
-	if (parts.pool_index > pools_num) {
+	if (parts.pool_index > pools_num_cached) {
 		WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
-		     parts.pool_index, pools_num, handle);
+		     parts.pool_index, pools_num_cached, handle);
 		return NULL;
 	}
 
 	pool = stack_pools[parts.pool_index];
-	if (!pool)
+	if (WARN_ON(!pool))
 		return NULL;
 
 	stack = pool + offset;
+	if (WARN_ON(!refcount_read(&stack->count)))
+		return NULL;
+
 	return stack;
 }
 
 /* Links stack into the freelist. */
 static void depot_free_stack(struct stack_record *stack)
 {
-	lockdep_assert_held_write(&pool_rwlock);
+	unsigned long flags;
 
-	list_add(&stack->list, &free_stacks);
+	lockdep_assert_not_held(&pool_lock);
+
+	raw_spin_lock_irqsave(&pool_lock, flags);
+	printk_deferred_enter();
+
+	/*
+	 * Remove the entry from the hash list. Concurrent list traversal may
+	 * still observe the entry, but since the refcount is zero, this entry
+	 * will no longer be considered as valid.
+	 */
+	list_del_rcu(&stack->hash_list);
+
+	/*
+	 * Due to being used from constrained contexts such as the allocators,
+	 * NMI, or even RCU itself, stack depot cannot rely on primitives that
+	 * would sleep (such as synchronize_rcu()) or recursively call into
+	 * stack depot again (such as call_rcu()).
+	 *
+	 * Instead, get an RCU cookie, so that we can ensure this entry isn't
+	 * moved onto another list until the next grace period, and concurrent
+	 * RCU list traversal remains safe.
+	 */
+	stack->rcu_state = get_state_synchronize_rcu();
+
+	/*
+	 * Add the entry to the freelist tail, so that older entries are
+	 * considered first - their RCU cookie is more likely to no longer be
+	 * associated with the current grace period.
+	 */
+	list_add_tail(&stack->free_list, &free_stacks);
+
+	counters[DEPOT_COUNTER_FREELIST_SIZE]++;
+	counters[DEPOT_COUNTER_FREES]++;
+	counters[DEPOT_COUNTER_INUSE]--;
+
+	printk_deferred_exit();
+	raw_spin_unlock_irqrestore(&pool_lock, flags);
 }
 
 /* Calculates the hash for a stack. */
···
 
 /* Finds a stack in a bucket of the hash table. */
 static inline struct stack_record *find_stack(struct list_head *bucket,
-					      unsigned long *entries, int size,
-					      u32 hash)
+					      unsigned long *entries, int size,
+					      u32 hash, depot_flags_t flags)
 {
-	struct list_head *pos;
-	struct stack_record *found;
+	struct stack_record *stack, *ret = NULL;
 
-	lockdep_assert_held(&pool_rwlock);
+	/*
+	 * Stack depot may be used from instrumentation that instruments RCU or
+	 * tracing itself; use variant that does not call into RCU and cannot be
+	 * traced.
+	 *
+	 * Note: Such use cases must take care when using refcounting to evict
+	 * unused entries, because the stack record free-then-reuse code paths
+	 * do call into RCU.
+	 */
+	rcu_read_lock_sched_notrace();
 
-	list_for_each(pos, bucket) {
-		found = list_entry(pos, struct stack_record, list);
-		if (found->hash == hash &&
-		    found->size == size &&
-		    !stackdepot_memcmp(entries, found->entries, size))
-			return found;
+	list_for_each_entry_rcu(stack, bucket, hash_list) {
+		if (stack->hash != hash || stack->size != size)
+			continue;
+
+		/*
+		 * This may race with depot_free_stack() accessing the freelist
+		 * management state unioned with @entries. The refcount is zero
+		 * in that case and the below refcount_inc_not_zero() will fail.
+		 */
+		if (data_race(stackdepot_memcmp(entries, stack->entries, size)))
+			continue;
+
+		/*
+		 * Try to increment refcount. If this succeeds, the stack record
+		 * is valid and has not yet been freed.
+		 *
+		 * If STACK_DEPOT_FLAG_GET is not used, it is undefined behavior
+		 * to then call stack_depot_put() later, and we can assume that
+		 * a stack record is never placed back on the freelist.
+		 */
+		if ((flags & STACK_DEPOT_FLAG_GET) && !refcount_inc_not_zero(&stack->count))
+			continue;
+
+		ret = stack;
+		break;
 	}
-	return NULL;
+
+	rcu_read_unlock_sched_notrace();
+
+	return ret;
 }
 
 depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
···
 	struct page *page = NULL;
 	void *prealloc = NULL;
 	bool can_alloc = depot_flags & STACK_DEPOT_FLAG_CAN_ALLOC;
-	bool need_alloc = false;
 	unsigned long flags;
 	u32 hash;
 
···
 	hash = hash_stack(entries, nr_entries);
 	bucket = &stack_table[hash & stack_hash_mask];
 
-	read_lock_irqsave(&pool_rwlock, flags);
-	printk_deferred_enter();
-
-	/* Fast path: look the stack trace up without full locking. */
-	found = find_stack(bucket, entries, nr_entries, hash);
-	if (found) {
-		if (depot_flags & STACK_DEPOT_FLAG_GET)
-			refcount_inc(&found->count);
-		printk_deferred_exit();
-		read_unlock_irqrestore(&pool_rwlock, flags);
+	/* Fast path: look the stack trace up without locking. */
+	found = find_stack(bucket, entries, nr_entries, hash, depot_flags);
+	if (found)
 		goto exit;
-	}
-
-	/* Take note if another stack pool needs to be allocated. */
-	if (new_pool_required)
-		need_alloc = true;
-
-	printk_deferred_exit();
-	read_unlock_irqrestore(&pool_rwlock, flags);
 
 	/*
 	 * Allocate memory for a new pool if required now:
 	 * we won't be able to do that under the lock.
 	 */
-	if (unlikely(can_alloc && need_alloc)) {
+	if (unlikely(can_alloc && READ_ONCE(new_pool_required))) {
 		/*
 		 * Zero out zone modifiers, as we don't have specific zone
 		 * requirements. Keep the flags related to allocation in atomic
···
 		prealloc = page_address(page);
 	}
 
-	write_lock_irqsave(&pool_rwlock, flags);
+	raw_spin_lock_irqsave(&pool_lock, flags);
 	printk_deferred_enter();
 
-	found = find_stack(bucket, entries, nr_entries, hash);
+	/* Try to find again, to avoid concurrently inserting duplicates. */
+	found = find_stack(bucket, entries, nr_entries, hash, depot_flags);
 	if (!found) {
 		struct stack_record *new =
 			depot_alloc_stack(entries, nr_entries, hash, &prealloc);
 
 		if (new) {
-			list_add(&new->list, bucket);
+			/*
+			 * This releases the stack record into the bucket and
+			 * makes it visible to readers in find_stack().
+			 */
+			list_add_rcu(&new->hash_list, bucket);
 			found = new;
 		}
-	} else {
-		if (depot_flags & STACK_DEPOT_FLAG_GET)
-			refcount_inc(&found->count);
+	}
+
+	if (prealloc) {
 		/*
-		 * Stack depot already contains this stack trace, but let's
-		 * keep the preallocated memory for future.
+		 * Either stack depot already contains this stack trace, or
+		 * depot_alloc_stack() did not consume the preallocated memory.
+		 * Try to keep the preallocated memory for future.
 		 */
-		if (prealloc)
-			depot_keep_new_pool(&prealloc);
+		depot_keep_new_pool(&prealloc);
 	}
 
 	printk_deferred_exit();
-	write_unlock_irqrestore(&pool_rwlock, flags);
+	raw_spin_unlock_irqrestore(&pool_lock, flags);
 exit:
 	if (prealloc) {
 		/* Stack depot didn't use this memory, free it. */
···
 			       unsigned long **entries)
 {
 	struct stack_record *stack;
-	unsigned long flags;
 
 	*entries = NULL;
 	/*
···
 	if (!handle || stack_depot_disabled)
 		return 0;
 
-	read_lock_irqsave(&pool_rwlock, flags);
-	printk_deferred_enter();
-
 	stack = depot_fetch_stack(handle);
-
-	printk_deferred_exit();
-	read_unlock_irqrestore(&pool_rwlock, flags);
+	/*
+	 * Should never be NULL, otherwise this is a use-after-put (or just a
+	 * corrupt handle).
+	 */
+	if (WARN(!stack, "corrupt handle or use after stack_depot_put()"))
+		return 0;
 
 	*entries = stack->entries;
 	return stack->size;
···
 void stack_depot_put(depot_stack_handle_t handle)
 {
 	struct stack_record *stack;
-	unsigned long flags;
 
 	if (!handle || stack_depot_disabled)
 		return;
 
-	write_lock_irqsave(&pool_rwlock, flags);
-	printk_deferred_enter();
-
 	stack = depot_fetch_stack(handle);
-	if (WARN_ON(!stack))
-		goto out;
+	/*
+	 * Should always be able to find the stack record, otherwise this is an
+	 * unbalanced put attempt (or corrupt handle).
+	 */
+	if (WARN(!stack, "corrupt handle or unbalanced stack_depot_put()"))
+		return;
 
-	if (refcount_dec_and_test(&stack->count)) {
-		/* Unlink stack from the hash table. */
-		list_del(&stack->list);
-
-		/* Free stack. */
+	if (refcount_dec_and_test(&stack->count))
 		depot_free_stack(stack);
-	}
-
-out:
-	printk_deferred_exit();
-	write_unlock_irqrestore(&pool_rwlock, flags);
 }
 EXPORT_SYMBOL_GPL(stack_depot_put);
 
···
 	return parts.extra;
 }
 EXPORT_SYMBOL(stack_depot_get_extra_bits);
+
+static int stats_show(struct seq_file *seq, void *v)
+{
+	/*
+	 * data race ok: These are just statistics counters, and approximate
+	 * statistics are ok for debugging.
+	 */
+	seq_printf(seq, "pools: %d\n", data_race(pools_num));
+	for (int i = 0; i < DEPOT_COUNTER_COUNT; i++)
+		seq_printf(seq, "%s: %ld\n", counter_names[i], data_race(counters[i]));
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(stats);
+
+static int depot_debugfs_init(void)
+{
+	struct dentry *dir;
+
+	if (stack_depot_disabled)
+		return 0;
+
+	dir = debugfs_create_dir("stackdepot", NULL);
+	debugfs_create_file("stats", 0444, dir, NULL, &stats_fops);
+	return 0;
+}
+late_initcall(depot_debugfs_init);
+14-4
mm/huge_memory.c
···
 #include <linux/page_owner.h>
 #include <linux/sched/sysctl.h>
 #include <linux/memory-tiers.h>
+#include <linux/compat.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
···
 {
 	loff_t off_end = off + len;
 	loff_t off_align = round_up(off, size);
-	unsigned long len_pad, ret;
+	unsigned long len_pad, ret, off_sub;
+
+	if (IS_ENABLED(CONFIG_32BIT) || in_compat_syscall())
+		return 0;
 
 	if (off_end <= off_align || (off_end - off_align) < size)
 		return 0;
···
 	if (ret == addr)
 		return addr;
 
-	ret += (off - ret) & (size - 1);
+	off_sub = (off - ret) & (size - 1);
+
+	if (current->mm->get_unmapped_area == arch_get_unmapped_area_topdown &&
+	    !off_sub)
+		return ret + size;
+
+	ret += off_sub;
 	return ret;
 }
···
 	page = pmd_page(old_pmd);
 	folio = page_folio(page);
 	if (!folio_test_dirty(folio) && pmd_dirty(old_pmd))
-		folio_set_dirty(folio);
+		folio_mark_dirty(folio);
 	if (!folio_test_referenced(folio) && pmd_young(old_pmd))
 		folio_set_referenced(folio);
 	folio_remove_rmap_pmd(folio, page, vma);
···
 	}
 
 	if (pmd_dirty(pmdval))
-		folio_set_dirty(folio);
+		folio_mark_dirty(folio);
 	if (pmd_write(pmdval))
 		entry = make_writable_migration_entry(page_to_pfn(page));
 	else if (anon_exclusive)
+3
mm/memblock.c
···21762176 start = region->base;21772177 end = start + region->size;2178217821792179+ if (nid == NUMA_NO_NODE || nid >= MAX_NUMNODES)21802180+ nid = early_pfn_to_nid(PFN_DOWN(start));21812181+21792182 reserve_bootmem_region(start, end, nid);21802183 }21812184 }
+25-4
mm/memcontrol.c
···26232623}2624262426252625/*26262626- * Scheduled by try_charge() to be executed from the userland return path26272627- * and reclaims memory over the high limit.26262626+ * Reclaims memory over the high limit. Called directly from26272627+ * try_charge() (context permitting), as well as from the userland26282628+ * return path where reclaim is always able to block.26282629 */26292630void mem_cgroup_handle_over_high(gfp_t gfp_mask)26302631{···26442643 current->memcg_nr_pages_over_high = 0;2645264426462645retry_reclaim:26462646+ /*26472647+ * Bail if the task is already exiting. Unlike memory.max,26482648+ * memory.high enforcement isn't as strict, and there is no26492649+ * OOM killer involved, which means the excess could already26502650+ * be much bigger (and still growing) than it could for26512651+ * memory.max; the dying task could get stuck in fruitless26522652+ * reclaim for a long time, which isn't desirable.26532653+ */26542654+ if (task_is_dying())26552655+ goto out;26562656+26472657 /*26482658 * The allocating task should reclaim at least the batch size, but for26492659 * subsequent retries we only want to do what's necessary to prevent oom···27052693 }2706269427072695 /*26962696+ * Reclaim didn't manage to push usage below the limit, slow26972697+ * this allocating task down.26982698+ *27082699 * If we exit early, we're guaranteed to die (since27092700 * schedule_timeout_killable sets TASK_KILLABLE). This means we don't27102701 * need to account for any ill-begotten jiffies to pay them off later.···29022887 }29032888 } while ((memcg = parent_mem_cgroup(memcg)));2904288928902890+ /*28912891+ * Reclaim is set up above to be called from the userland28922892+ * return path. But also attempt synchronous reclaim to avoid28932893+ * excessive overrun while the task is still inside the28942894+ * kernel. 
If this is successful, the return path will see it28952895+ * when it rechecks the overage and simply bail out.28962896+ */29052897 if (current->memcg_nr_pages_over_high > MEMCG_CHARGE_BATCH &&29062898 !(current->flags & PF_MEMALLOC) &&29072907- gfpflags_allow_blocking(gfp_mask)) {28992899+ gfpflags_allow_blocking(gfp_mask))29082900 mem_cgroup_handle_over_high(gfp_mask);29092909- }29102901 return 0;29112902}29122903
+1-1
mm/memory-failure.c
···982982 int count = page_count(p) - 1;983983984984 if (extra_pins)985985- count -= 1;985985+ count -= folio_nr_pages(page_folio(p));986986987987 if (count > 0) {988988 pr_err("%#lx: %s still referenced by %d users\n",
+1-1
mm/memory.c
···14641464 delay_rmap = 0;14651465 if (!folio_test_anon(folio)) {14661466 if (pte_dirty(ptent)) {14671467- folio_set_dirty(folio);14671467+ folio_mark_dirty(folio);14681468 if (tlb_delay_rmap(tlb)) {14691469 delay_rmap = 1;14701470 force_flush = 1;
+4-2
mm/mmap.c
···18251825 /*18261826 * mmap_region() will call shmem_zero_setup() to create a file,18271827 * so use shmem's get_unmapped_area in case it can be huge.18281828- * do_mmap() will clear pgoff, so match alignment.18291828 */18301830- pgoff = 0;18311829 get_area = shmem_get_unmapped_area;18321830 } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {18331831 /* Ensures that larger anonymous mappings are THP aligned. */18341832 get_area = thp_get_unmapped_area;18351833 }18341834+18351835+ /* Always treat pgoff as zero for anonymous memory. */18361836+ if (!file)18371837+ pgoff = 0;1836183818371839 addr = get_area(file, addr, len, pgoff, flags);18381840 if (IS_ERR_VALUE(addr))
+1-1
mm/page-writeback.c
···16381638 */16391639 dtc->wb_thresh = __wb_calc_thresh(dtc);16401640 dtc->wb_bg_thresh = dtc->thresh ?16411641- div_u64((u64)dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0;16411641+ div64_u64(dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0;1642164216431643 /*16441644 * In order to avoid the stacked BDI deadlock we need
+2-2
mm/readahead.c
···469469470470 if (!folio)471471 return -ENOMEM;472472- mark = round_up(mark, 1UL << order);472472+ mark = round_down(mark, 1UL << order);473473 if (index == mark)474474 folio_set_readahead(folio);475475 err = filemap_add_folio(ractl->mapping, folio, index, gfp);···575575 * It's the expected callback index, assume sequential access.576576 * Ramp up sizes, and push forward the readahead window.577577 */578578- expected = round_up(ra->start + ra->size - ra->async_size,578578+ expected = round_down(ra->start + ra->size - ra->async_size,579579 1UL << order);580580 if (index == expected || index == (ra->start + ra->size)) {581581 ra->start += ra->size;
+13-2
mm/userfaultfd.c
···357357 unsigned long dst_start,358358 unsigned long src_start,359359 unsigned long len,360360+ atomic_t *mmap_changing,360361 uffd_flags_t flags)361362{362363 struct mm_struct *dst_mm = dst_vma->vm_mm;···473472 goto out;474473 }475474 mmap_read_lock(dst_mm);475475+ /*476476+ * If memory mappings are changing because of non-cooperative477477+ * operation (e.g. mremap) running in parallel, bail out and478478+ * request the user to retry later479479+ */480480+ if (mmap_changing && atomic_read(mmap_changing)) {481481+ err = -EAGAIN;482482+ break;483483+ }476484477485 dst_vma = NULL;478486 goto retry;···516506 unsigned long dst_start,517507 unsigned long src_start,518508 unsigned long len,509509+ atomic_t *mmap_changing,519510 uffd_flags_t flags);520511#endif /* CONFIG_HUGETLB_PAGE */521512···633622 * If this is a HUGETLB vma, pass off to appropriate routine634623 */635624 if (is_vm_hugetlb_page(dst_vma))636636- return mfill_atomic_hugetlb(dst_vma, dst_start,637637- src_start, len, flags);625625+ return mfill_atomic_hugetlb(dst_vma, dst_start, src_start,626626+ len, mmap_changing, flags);638627639628 if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))640629 goto out_unlock;
···674674 return -EOPNOTSUPP;675675 }676676 if (tb[DEVLINK_PORT_FN_ATTR_STATE] && !ops->port_fn_state_set) {677677- NL_SET_ERR_MSG_ATTR(extack, tb[DEVLINK_PORT_FUNCTION_ATTR_HW_ADDR],677677+ NL_SET_ERR_MSG_ATTR(extack, tb[DEVLINK_PORT_FN_ATTR_STATE],678678 "Function does not support state setting");679679 return -EOPNOTSUPP;680680 }
+2-2
net/hsr/hsr_device.c
···308308309309 skb = hsr_init_skb(master);310310 if (!skb) {311311- WARN_ONCE(1, "HSR: Could not send supervision frame\n");311311+ netdev_warn_once(master->dev, "HSR: Could not send supervision frame\n");312312 return;313313 }314314···355355356356 skb = hsr_init_skb(master);357357 if (!skb) {358358- WARN_ONCE(1, "PRP: Could not send supervision frame\n");358358+ netdev_warn_once(master->dev, "PRP: Could not send supervision frame\n");359359 return;360360 }361361
···23142314 if (__mptcp_check_fallback(msk))23152315 return false;2316231623172317- if (tcp_rtx_and_write_queues_empty(sk))23182318- return false;23192319-23202317 /* the closing socket has some data untransmitted and/or unacked:23212318 * some data in the mptcp rtx queue has not really xmitted yet.23222319 * keep it simple and re-inject the whole mptcp level rtx queue
···11821182 kfree(set);11831183}1184118411851185+static void11861186+ip_set_destroy_set_rcu(struct rcu_head *head)11871187+{11881188+ struct ip_set *set = container_of(head, struct ip_set, rcu);11891189+11901190+ ip_set_destroy_set(set);11911191+}11921192+11851193static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info,11861194 const struct nlattr * const attr[])11871195{···12011193 if (unlikely(protocol_min_failed(attr)))12021194 return -IPSET_ERR_PROTOCOL;1203119512041204- /* Must wait for flush to be really finished in list:set */12051205- rcu_barrier();1206119612071197 /* Commands are serialized and references are12081198 * protected by the ip_set_ref_lock.···12121206 * counter, so if it's already zero, we can proceed12131207 * without holding the lock.12141208 */12151215- read_lock_bh(&ip_set_ref_lock);12161209 if (!attr[IPSET_ATTR_SETNAME]) {12101210+ /* Must wait for flush to be really finished in list:set */12111211+ rcu_barrier();12121212+ read_lock_bh(&ip_set_ref_lock);12171213 for (i = 0; i < inst->ip_set_max; i++) {12181214 s = ip_set(inst, i);12191215 if (s && (s->ref || s->ref_netlink)) {···12291221 s = ip_set(inst, i);12301222 if (s) {12311223 ip_set(inst, i) = NULL;12241224+ /* Must cancel garbage collectors */12251225+ s->variant->cancel_gc(s);12321226 ip_set_destroy_set(s);12331227 }12341228 }···12381228 inst->is_destroyed = false;12391229 } else {12401230 u32 flags = flag_exist(info->nlh);12311231+ u16 features = 0;12321232+12331233+ read_lock_bh(&ip_set_ref_lock);12411234 s = find_set_and_id(inst, nla_data(attr[IPSET_ATTR_SETNAME]),12421235 &i);12431236 if (!s) {···12511238 ret = -IPSET_ERR_BUSY;12521239 goto out;12531240 }12411241+ features = s->type->features;12541242 ip_set(inst, i) = NULL;12551243 read_unlock_bh(&ip_set_ref_lock);12561256-12571257- ip_set_destroy_set(s);12441244+ if (features & IPSET_TYPE_NAME) {12451245+ /* Must wait for flush to be really finished */12461246+ rcu_barrier();12471247+ }12481248+ /* Must 
cancel garbage collectors */12491249+ s->variant->cancel_gc(s);12501250+ call_rcu(&s->rcu, ip_set_destroy_set_rcu);12581251 }12591252 return 0;12601253out:···14121393 ip_set(inst, from_id) = to;14131394 ip_set(inst, to_id) = from;14141395 write_unlock_bh(&ip_set_ref_lock);14151415-14161416- /* Make sure all readers of the old set pointers are completed. */14171417- synchronize_rcu();1418139614191397 return 0;14201398}···24252409{24262410 nf_unregister_sockopt(&so_set);24272411 nfnetlink_subsys_unregister(&ip_set_netlink_subsys);24282428-24292412 unregister_pernet_subsys(&ip_set_net_ops);24132413+24142414+ /* Wait for call_rcu() in destroy */24152415+ rcu_barrier();24162416+24302417 pr_debug("these are the famous last words\n");24312418}24322419
···283283 pr_debug("Setting vtag %x for secondary conntrack\n",284284 sh->vtag);285285 ct->proto.sctp.vtag[IP_CT_DIR_ORIGINAL] = sh->vtag;286286- } else {286286+ } else if (sch->type == SCTP_CID_SHUTDOWN_ACK) {287287 /* If it is a shutdown ack OOTB packet, we expect a return288288 shutdown complete, otherwise an ABORT Sec 8.4 (5) and (8) */289289 pr_debug("Setting vtag %x for new conn OOTB\n",
+6-4
net/netfilter/nf_conntrack_proto_tcp.c
···457457 const struct sk_buff *skb,458458 unsigned int dataoff,459459 const struct tcphdr *tcph,460460- u32 end, u32 win)460460+ u32 end, u32 win,461461+ enum ip_conntrack_dir dir)461462{462463 /* SYN-ACK in reply to a SYN463464 * or SYN from reply direction in simultaneous open.···472471 * Both sides must send the Window Scale option473472 * to enable window scaling in either direction.474473 */475475- if (!(sender->flags & IP_CT_TCP_FLAG_WINDOW_SCALE &&474474+ if (dir == IP_CT_DIR_REPLY &&475475+ !(sender->flags & IP_CT_TCP_FLAG_WINDOW_SCALE &&476476 receiver->flags & IP_CT_TCP_FLAG_WINDOW_SCALE)) {477477 sender->td_scale = 0;478478 receiver->td_scale = 0;···544542 if (tcph->syn) {545543 tcp_init_sender(sender, receiver,546544 skb, dataoff, tcph,547547- end, win);545545+ end, win, dir);548546 if (!tcph->ack)549547 /* Simultaneous open */550548 return NFCT_TCP_ACCEPT;···587585 */588586 tcp_init_sender(sender, receiver,589587 skb, dataoff, tcph,590590- end, win);588588+ end, win, dir);591589592590 if (dir == IP_CT_DIR_REPLY && !tcph->ack)593591 return NFCT_TCP_ACCEPT;
···12081208{12091209 nfc_free_device(ndev->nfc_dev);12101210 nci_hci_deallocate(ndev);12111211+12121212+ /* drop partial rx data packet if present */12131213+ if (ndev->rx_data_reassembly)12141214+ kfree_skb(ndev->rx_data_reassembly);12111215 kfree(ndev);12121216}12131217EXPORT_SYMBOL(nci_free_device);
+9-3
net/smc/smc_core.c
···18771877 struct smcd_dev *smcismdev,18781878 struct smcd_gid *peer_gid)18791879{18801880- return lgr->peer_gid.gid == peer_gid->gid && lgr->smcd == smcismdev &&18811881- smc_ism_is_virtual(smcismdev) ?18821882- (lgr->peer_gid.gid_ext == peer_gid->gid_ext) : 1;18801880+ if (lgr->peer_gid.gid != peer_gid->gid ||18811881+ lgr->smcd != smcismdev)18821882+ return false;18831883+18841884+ if (smc_ism_is_virtual(smcismdev) &&18851885+ lgr->peer_gid.gid_ext != peer_gid->gid_ext)18861886+ return false;18871887+18881888+ return true;18831889}1884189018851891/* create a new SMC connection (and a new link group if necessary) */
···4848 ip link add mv0 link "$name" up address "$ucaddr" type macvlan4949 # Used to test dev->mc handling5050 ip address add "$addr6" dev "$name"5151+5252+ # Check that addresses were added as expected5353+ (grep_bridge_fdb "$ucaddr" bridge fdb show dev dummy1 ||5454+ grep_bridge_fdb "$ucaddr" bridge fdb show dev dummy2) >/dev/null5555+ check_err $? "macvlan unicast address not found on a slave"5656+5757+ # mcaddr is added asynchronously by addrconf_dad_work(), use busywait5858+ (busywait 10000 grep_bridge_fdb "$mcaddr" bridge fdb show dev dummy1 ||5959+ grep_bridge_fdb "$mcaddr" bridge fdb show dev dummy2) >/dev/null6060+ check_err $? "IPv6 solicited-node multicast mac address not found on a slave"6161+5162 ip link set dev "$name" down5263 ip link del "$name"5364
···880880 does not overlap with other contacts. The value of `t` may be881881 incremented over time to move the point along a linear path.882882 """883883- x = 50 + 10 * contact_id + t884884- y = 100 + 100 * contact_id + t883883+ x = 50 + 10 * contact_id + t * 11884884+ y = 100 + 100 * contact_id + t * 11885885 return test_multitouch.Touch(contact_id, x, y)886886887887 def make_contacts(self, n, t=0):···902902 tracking_id = contact_ids.tracking_id903903 slot_num = contact_ids.slot_num904904905905- x = 50 + 10 * contact_id + t906906- y = 100 + 100 * contact_id + t905905+ x = 50 + 10 * contact_id + t * 11906906+ y = 100 + 100 * contact_id + t * 11907907908908 # If the data isn't supposed to be stored in any slots, there is909909 # nothing we can check for in the evdev stream.
+17-20
tools/testing/selftests/livepatch/functions.sh
···4242 exit 14343}44444545-# save existing dmesg so we can detect new content4646-function save_dmesg() {4747- SAVED_DMESG=$(mktemp --tmpdir -t klp-dmesg-XXXXXX)4848- dmesg > "$SAVED_DMESG"4949-}5050-5151-# cleanup temporary dmesg file from save_dmesg()5252-function cleanup_dmesg_file() {5353- rm -f "$SAVED_DMESG"5454-}5555-5645function push_config() {5746 DYNAMIC_DEBUG=$(grep '^kernel/livepatch' /sys/kernel/debug/dynamic_debug/control | \5847 awk -F'[: ]' '{print "file " $1 " line " $2 " " $4}')···889989100function cleanup() {90101 pop_config9191- cleanup_dmesg_file92102}9310394104# setup_config - save the current config and set a script exit trap that···268280function start_test {269281 local test="$1"270282271271- save_dmesg283283+ # Dump something unique into the dmesg log, then stash the entry284284+ # in LAST_DMESG. The check_result() function will use it to285285+ # find new kernel messages since the test started.286286+ local last_dmesg_msg="livepatch kselftest timestamp: $(date --rfc-3339=ns)"287287+ log "$last_dmesg_msg"288288+ loop_until 'dmesg | grep -q "$last_dmesg_msg"' ||289289+ die "buffer busy? can't find canary dmesg message: $last_dmesg_msg"290290+ LAST_DMESG=$(dmesg | grep "$last_dmesg_msg")291291+272292 echo -n "TEST: $test ... "273293 log "===== TEST: $test ====="274294}···287291 local expect="$*"288292 local result289293290290- # Note: when comparing dmesg output, the kernel log timestamps291291- # help differentiate repeated testing runs. 
Remove them with a292292- # post-comparison sed filter.293293-294294- result=$(dmesg | comm --nocheck-order -13 "$SAVED_DMESG" - | \294294+ # Test results include any new dmesg entry since LAST_DMESG, then:295295+ # - include lines matching keywords296296+ # - exclude lines matching keywords297297+ # - filter out dmesg timestamp prefixes298298+ result=$(dmesg | awk -v last_dmesg="$LAST_DMESG" 'p; $0 == last_dmesg { p=1 }' | \295299 grep -e 'livepatch:' -e 'test_klp' | \296300 grep -v '\(tainting\|taints\) kernel' | \297301 sed 's/^\[[ 0-9.]*\] //')298302299303 if [[ "$expect" == "$result" ]] ; then300304 echo "ok"305305+ elif [[ "$result" == "" ]] ; then306306+ echo -e "not ok\n\nbuffer overrun? can't find canary dmesg entry: $LAST_DMESG\n"307307+ die "livepatch kselftest(s) failed"301308 else302309 echo -e "not ok\n\n$(diff -upr --label expected --label result <(echo "$expect") <(echo "$result"))\n"303310 die "livepatch kselftest(s) failed"304311 fi305305-306306- cleanup_dmesg_file307312}308313309314# check_sysfs_rights(modname, rel_path, expected_rights) - check sysfs
···566566 if (map_ptr_orig == MAP_FAILED)567567 err(2, "initial mmap");568568569569- if (madvise(map_ptr, len + HPAGE_SIZE, MADV_HUGEPAGE))569569+ if (madvise(map_ptr, len, MADV_HUGEPAGE))570570 err(2, "MADV_HUGEPAGE");571571572572 pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
+7
tools/testing/selftests/mm/map_hugetlb.c
···1515#include <unistd.h>1616#include <sys/mman.h>1717#include <fcntl.h>1818+#include "vm_util.h"18191920#define LENGTH (256UL*1024*1024)2021#define PROTECTION (PROT_READ | PROT_WRITE)···5958{6059 void *addr;6160 int ret;6161+ size_t hugepage_size;6262 size_t length = LENGTH;6363 int flags = FLAGS;6464 int shift = 0;6565+6666+ hugepage_size = default_huge_page_size();6767+ /* munmap will fail if the length is not page aligned */6868+ if (hugepage_size > length)6969+ length = hugepage_size;65706671 if (argc > 1)6772 length = atol(argv[1]) << 20;
+14-13
tools/testing/selftests/mm/mremap_test.c
···360360 char pattern_seed)361361{362362 void *addr, *src_addr, *dest_addr, *dest_preamble_addr;363363- unsigned long long i;363363+ int d;364364+ unsigned long long t;364365 struct timespec t_start = {0, 0}, t_end = {0, 0};365366 long long start_ns, end_ns, align_mask, ret, offset;366367 unsigned long long threshold;···379378380379 /* Set byte pattern for source block. */381380 srand(pattern_seed);382382- for (i = 0; i < threshold; i++)383383- memset((char *) src_addr + i, (char) rand(), 1);381381+ for (t = 0; t < threshold; t++)382382+ memset((char *) src_addr + t, (char) rand(), 1);384383385384 /* Mask to zero out lower bits of address for alignment */386385 align_mask = ~(c.dest_alignment - 1);···421420422421 /* Set byte pattern for the dest preamble block. */423422 srand(pattern_seed);424424- for (i = 0; i < c.dest_preamble_size; i++)425425- memset((char *) dest_preamble_addr + i, (char) rand(), 1);423423+ for (d = 0; d < c.dest_preamble_size; d++)424424+ memset((char *) dest_preamble_addr + d, (char) rand(), 1);426425 }427426428427 clock_gettime(CLOCK_MONOTONIC, &t_start);···438437439438 /* Verify byte pattern after remapping */440439 srand(pattern_seed);441441- for (i = 0; i < threshold; i++) {440440+ for (t = 0; t < threshold; t++) {442441 char c = (char) rand();443442444444- if (((char *) dest_addr)[i] != c) {443443+ if (((char *) dest_addr)[t] != c) {445444 ksft_print_msg("Data after remap doesn't match at offset %llu\n",446446- i);445445+ t);447446 ksft_print_msg("Expected: %#x\t Got: %#x\n", c & 0xff,448448- ((char *) dest_addr)[i] & 0xff);447447+ ((char *) dest_addr)[t] & 0xff);449448 ret = -1;450449 goto clean_up_dest;451450 }···454453 /* Verify the dest preamble byte pattern after remapping */455454 if (c.dest_preamble_size) {456455 srand(pattern_seed);457457- for (i = 0; i < c.dest_preamble_size; i++) {456456+ for (d = 0; d < c.dest_preamble_size; d++) {458457 char c = (char) rand();459458460460- if (((char *) dest_preamble_addr)[i] != c) {459459+ 
if (((char *) dest_preamble_addr)[d] != c) {461460 ksft_print_msg("Preamble data after remap doesn't match at offset %d\n",462462- i);461461+ d);463462 ksft_print_msg("Expected: %#x\t Got: %#x\n", c & 0xff,464464- ((char *) dest_preamble_addr)[i] & 0xff);463463+ ((char *) dest_preamble_addr)[d] & 0xff);465464 ret = -1;466465 goto clean_up_dest;467466 }
+6
tools/testing/selftests/mm/va_high_addr_switch.sh
···2929 # See man 1 gzip under '-f'.3030 local pg_table_levels=$(gzip -dcfq "${config}" | grep PGTABLE_LEVELS | cut -d'=' -f 2)31313232+ local cpu_supports_pl5=$(awk '/^flags/ {if (/la57/) {print 0;}3333+ else {print 1}; exit}' /proc/cpuinfo 2>/dev/null)3434+3235 if [[ "${pg_table_levels}" -lt 5 ]]; then3336 echo "$0: PGTABLE_LEVELS=${pg_table_levels}, must be >= 5 to run this test"3737+ exit $ksft_skip3838+ elif [[ "${cpu_supports_pl5}" -ne 0 ]]; then3939+ echo "$0: CPU does not have the necessary la57 flag to support page table level 5"3440 exit $ksft_skip3541 fi3642}
···44##############################################################################55# Defines6677+WAIT_TIMEOUT=${WAIT_TIMEOUT:=20}88+BUSYWAIT_TIMEOUT=$((WAIT_TIMEOUT * 1000)) # ms99+710# Kselftest framework requirement - SKIP code is 4.811ksft_skip=4912# namespace list created by setup_ns···51485249 for ns in "$@"; do5350 ip netns delete "${ns}" &> /dev/null5454- if ! busywait 2 ip netns list \| grep -vq "^$ns$" &> /dev/null; then5151+ if ! busywait $BUSYWAIT_TIMEOUT ip netns list \| grep -vq "^$ns$" &> /dev/null; then5552 echo "Warn: Failed to remove namespace $ns"5653 ret=15754 fi
···1111 local -r ns_mac="$4"12121313 [[ -e /var/run/netns/"${ns_name}" ]] || ip netns add "${ns_name}"1414- echo 100000 > "/sys/class/net/${ns_dev}/gro_flush_timeout"1414+ echo 1000000 > "/sys/class/net/${ns_dev}/gro_flush_timeout"1515 ip link set dev "${ns_dev}" netns "${ns_name}" mtu 655351616 ip -netns "${ns_name}" link set dev "${ns_dev}" up1717
···11// SPDX-License-Identifier: GPL-2.022-/* Author: Dmitry Safonov <dima@arista.com> */22+/*33+ * The test checks that both active and passive reset have correct TCP-AO44+ * signature. An "active" reset (abort) here is procured from closing55+ * listen() socket with non-accepted connections in the queue:66+ * inet_csk_listen_stop() => inet_child_forget() =>77+ * => tcp_disconnect() => tcp_send_active_reset()88+ *99+ * The passive reset is quite hard to get on established TCP connections.1010+ * It could be procured from non-established states, but the synchronization1111+ * part from userspace in order to reliably get RST seems uneasy.1212+ * So, instead it's procured by corrupting SEQ number on TIMED-WAIT state.1313+ *1414+ * It's important to test both passive and active RST as they go through1515+ * different code-paths:1616+ * - tcp_send_active_reset() makes no-data skb, sends it with tcp_transmit_skb()1717+ * - tcp_v*_send_reset() create their reply skbs and send them with1818+ * ip_send_unicast_reply()1919+ *2020+ * In both cases TCP-AO signatures have to be correct, which is verified by2121+ * (1) checking that the TCP-AO connection was reset and (2) TCP-AO counters.2222+ *2323+ * Author: Dmitry Safonov <dima@arista.com>2424+ */325#include <inttypes.h>426#include "../../../../include/linux/kernel.h"527#include "aolib.h"628729const size_t quota = 1000;3030+const size_t packet_sz = 100;831/*932 * Backlog == 0 means 1 connection in queue, see:1033 * commit 64a146513f8f ("[NET]: Revert incorrect accept queue...")···8057 if (setsockopt(sk, SOL_SOCKET, SO_LINGER, &sl, sizeof(sl)))8158 test_error("setsockopt(SO_LINGER)");8259 close(sk);8383-}8484-8585-static int test_wait_for_exception(int sk, time_t sec)8686-{8787- struct timeval tv = { .tv_sec = sec };8888- struct timeval *ptv = NULL;8989- fd_set efds;9090- int ret;9191-9292- FD_ZERO(&efds);9393- FD_SET(sk, &efds);9494-9595- if (sec)9696- ptv = &tv;9797-9898- errno = 0;9999- ret = select(sk + 1, NULL, NULL, 
&efds, ptv);100100- if (ret < 0)101101- return -errno;102102- return ret ? sk : 0;10360}1046110562static void test_server_active_rst(unsigned int port)···158155 test_fail("server returned %zd", bytes);159156 }160157161161- synchronize_threads(); /* 3: chekpoint/restore the connection */158158+ synchronize_threads(); /* 3: checkpoint the client */159159+ synchronize_threads(); /* 4: close the server, creating twsk */162160 if (test_get_tcp_ao_counters(sk, &ao2))163161 test_error("test_get_tcp_ao_counters()");164164-165165- synchronize_threads(); /* 4: terminate server + send more on client */166166- bytes = test_server_run(sk, quota, TEST_RETRANSMIT_SEC);167162 close(sk);163163+164164+ synchronize_threads(); /* 5: restore the socket, send more data */168165 test_tcp_ao_counters_cmp("passive RST server", &ao1, &ao2, TEST_CNT_GOOD);169166170170- synchronize_threads(); /* 5: verified => closed */171171- close(sk);167167+ synchronize_threads(); /* 6: server exits */172168}173169174170static void *server_fn(void *arg)···286284 test_error("test_wait_fds(): %d", err);287285288286 synchronize_threads(); /* 3: close listen socket */289289- if (test_client_verify(sk[0], 100, quota / 100, TEST_TIMEOUT_SEC))287287+ if (test_client_verify(sk[0], packet_sz, quota / packet_sz, TEST_TIMEOUT_SEC))290288 test_fail("Failed to send data on connected socket");291289 else292290 test_ok("Verified established tcp connection");···325323 struct tcp_sock_state img;326324 sockaddr_af saddr;327325 int sk, err;328328- socklen_t slen = sizeof(err);329326330327 sk = socket(test_family, SOCK_STREAM, IPPROTO_TCP);331328 if (sk < 0)···338337 test_error("failed to connect()");339338340339 synchronize_threads(); /* 2: accepted => send data */341341- if (test_client_verify(sk, 100, quota / 100, TEST_TIMEOUT_SEC))340340+ if (test_client_verify(sk, packet_sz, quota / packet_sz, TEST_TIMEOUT_SEC))342341 test_fail("Failed to send data on connected socket");343342 else344343 test_ok("Verified established tcp 
connection");345344346346- synchronize_threads(); /* 3: chekpoint/restore the connection */345345+ synchronize_threads(); /* 3: checkpoint the client */347346 test_enable_repair(sk);348347 test_sock_checkpoint(sk, &img, &saddr);349348 test_ao_checkpoint(sk, &ao_img);350350- test_kill_sk(sk);349349+ test_disable_repair(sk);351350352352- img.out.seq += quota;351351+ synchronize_threads(); /* 4: close the server, creating twsk */352352+353353+ /*354354+ * The "corruption" in SEQ has to be small enough to fit into TCP355355+ * window, see tcp_timewait_state_process() for out-of-window356356+ * segments.357357+ */358358+ img.out.seq += 5; /* 5 is more noticeable in tcpdump than 1 */359359+360360+ /*361361+ * FIXME: This is kind-of ugly and dirty, but it works.362362+ *363363+ * At this moment, the server has close'ed(sk).364364+ * The passive RST that is being targeted here is new data after365365+ * half-duplex close, see tcp_timewait_state_process() => TCP_TW_RST366366+ *367367+ * What is needed here is:368368+ * (1) wait for FIN from the server369369+ * (2) make sure that the ACK from the client went out370370+ * (3) make sure that the ACK was received and processed by the server371371+ *372372+ * Otherwise, the data that will be sent from "repaired" socket373373+ * post SEQ corruption may get to the server before it's in374374+ * TCP_FIN_WAIT2.375375+ *376376+ * (1) is easy with select()/poll()377377+ * (2) is possible by polling tcpi_state from TCP_INFO378378+ * (3) is quite complex: as server's socket was already closed,379379+ * probably the way to do it would be tcp-diag.380380+ */381381+ sleep(TEST_RETRANSMIT_SEC);382382+383383+ synchronize_threads(); /* 5: restore the socket, send more data */384384+ test_kill_sk(sk);353385354386 sk = socket(test_family, SOCK_STREAM, IPPROTO_TCP);355387 if (sk < 0)···400366 test_disable_repair(sk);401367 test_sock_state_free(&img);402368403403- synchronize_threads(); /* 4: terminate server + send more on client */404404- if 
(test_client_verify(sk, 100, quota / 100, 2 * TEST_TIMEOUT_SEC))405405- test_ok("client connection broken post-seq-adjust");369369+ /*370370+ * This is how "passive reset" is acquired in this test from TCP_TW_RST:371371+ *372372+ * IP 10.0.254.1.7011 > 10.0.1.1.59772: Flags [P.], seq 901:1001, ack 1001, win 249,373373+ * options [tcp-ao keyid 100 rnextkeyid 100 mac 0x10217d6c36a22379086ef3b1], length 100374374+ * IP 10.0.254.1.7011 > 10.0.1.1.59772: Flags [F.], seq 1001, ack 1001, win 249,375375+ * options [tcp-ao keyid 100 rnextkeyid 100 mac 0x104ffc99b98c10a5298cc268], length 0376376+ * IP 10.0.1.1.59772 > 10.0.254.1.7011: Flags [.], ack 1002, win 251,377377+ * options [tcp-ao keyid 100 rnextkeyid 100 mac 0xe496dd4f7f5a8a66873c6f93,nop,nop,sack 1 {1001:1002}], length 0378378+ * IP 10.0.1.1.59772 > 10.0.254.1.7011: Flags [P.], seq 1006:1106, ack 1001, win 251,379379+ * options [tcp-ao keyid 100 rnextkeyid 100 mac 0x1b5f3330fb23fbcd0c77d0ca], length 100380380+ * IP 10.0.254.1.7011 > 10.0.1.1.59772: Flags [R], seq 3215596252, win 0,381381+ * options [tcp-ao keyid 100 rnextkeyid 100 mac 0x0bcfbbf497bce844312304b2], length 0382382+ */383383+ err = test_client_verify(sk, packet_sz, quota / packet_sz, 2 * TEST_TIMEOUT_SEC);384384+ /* Make sure that the connection was reset, not timed out */385385+ if (err && err == -ECONNRESET)386386+ test_ok("client sock was passively reset post-seq-adjust");387387+ else if (err)388388+ test_fail("client sock was not reset post-seq-adjust: %d", err);406389 else407407- test_fail("client connection still works post-seq-adjust");408408-409409- test_wait_for_exception(sk, TEST_TIMEOUT_SEC);410410-411411- if (getsockopt(sk, SOL_SOCKET, SO_ERROR, &err, &slen))412412- test_error("getsockopt()");413413- if (err != ECONNRESET && err != EPIPE)414414- test_fail("client connection was not reset: %d", err);415415- else416416- test_ok("client connection was reset");390390+ test_fail("client sock is still connected post-seq-adjust");417391418392 if 
(test_get_tcp_ao_counters(sk, &ao2))419393 test_error("test_get_tcp_ao_counters()");420394421421- synchronize_threads(); /* 5: verified => closed */395395+ synchronize_threads(); /* 6: server exits */422396 close(sk);423397 test_tcp_ao_counters_cmp("client passive RST", &ao1, &ao2, TEST_CNT_GOOD);424398}···452410453411int main(int argc, char *argv[])454412{455455- test_init(15, server_fn, client_fn);413413+ test_init(14, server_fn, client_fn);456414 return 0;457415}
···2424{2525 return rseq_mm_cid_available();2626}2727+static2828+bool rseq_use_cpu_index(void)2929+{3030+ return false; /* Use mm_cid */3131+}2732#else2833# define RSEQ_PERCPU RSEQ_PERCPU_CPU_ID2934static···4035bool rseq_validate_cpu_id(void)4136{4237 return rseq_current_cpu_raw() >= 0;3838+}3939+static4040+bool rseq_use_cpu_index(void)4141+{4242+ return true; /* Use cpu_id as index. */4343}4444#endif4545···284274 /* Generate list entries for every usable cpu. */285275 sched_getaffinity(0, sizeof(allowed_cpus), &allowed_cpus);286276 for (i = 0; i < CPU_SETSIZE; i++) {287287- if (!CPU_ISSET(i, &allowed_cpus))277277+ if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus))288278 continue;289279 for (j = 1; j <= 100; j++) {290280 struct percpu_list_node *node;···309299 for (i = 0; i < CPU_SETSIZE; i++) {310300 struct percpu_list_node *node;311301312312- if (!CPU_ISSET(i, &allowed_cpus))302302+ if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus))313303 continue;314304315305 while ((node = __percpu_list_pop(&list, i))) {
+16-6
tools/testing/selftests/rseq/param_test.c
···288288{289289 return rseq_mm_cid_available();290290}291291+static292292+bool rseq_use_cpu_index(void)293293+{294294+ return false; /* Use mm_cid */295295+}291296# ifdef TEST_MEMBARRIER292297/*293298 * Membarrier does not currently support targeting a mm_cid, so···316311bool rseq_validate_cpu_id(void)317312{318313 return rseq_current_cpu_raw() >= 0;314314+}315315+static316316+bool rseq_use_cpu_index(void)317317+{318318+ return true; /* Use cpu_id as index. */319319}320320# ifdef TEST_MEMBARRIER321321static···725715 /* Generate list entries for every usable cpu. */726716 sched_getaffinity(0, sizeof(allowed_cpus), &allowed_cpus);727717 for (i = 0; i < CPU_SETSIZE; i++) {728728- if (!CPU_ISSET(i, &allowed_cpus))718718+ if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus))729719 continue;730720 for (j = 1; j <= 100; j++) {731721 struct percpu_list_node *node;···762752 for (i = 0; i < CPU_SETSIZE; i++) {763753 struct percpu_list_node *node;764754765765- if (!CPU_ISSET(i, &allowed_cpus))755755+ if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus))766756 continue;767757768758 while ((node = __percpu_list_pop(&list, i))) {···912902 /* Generate list entries for every usable cpu. */913903 sched_getaffinity(0, sizeof(allowed_cpus), &allowed_cpus);914904 for (i = 0; i < CPU_SETSIZE; i++) {915915- if (!CPU_ISSET(i, &allowed_cpus))905905+ if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus))916906 continue;917907 /* Worse-case is every item in same CPU. */918908 buffer.c[i].array =···962952 for (i = 0; i < CPU_SETSIZE; i++) {963953 struct percpu_buffer_node *node;964954965965- if (!CPU_ISSET(i, &allowed_cpus))955955+ if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus))966956 continue;967957968958 while ((node = __percpu_buffer_pop(&buffer, i))) {···11231113 /* Generate list entries for every usable cpu. 
*/11241114 sched_getaffinity(0, sizeof(allowed_cpus), &allowed_cpus);11251115 for (i = 0; i < CPU_SETSIZE; i++) {11261126- if (!CPU_ISSET(i, &allowed_cpus))11161116+ if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus))11271117 continue;11281118 /* Worse-case is every item in same CPU. */11291119 buffer.c[i].array =···11701160 for (i = 0; i < CPU_SETSIZE; i++) {11711161 struct percpu_memcpy_buffer_node item;1172116211731173- if (!CPU_ISSET(i, &allowed_cpus))11631163+ if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus))11741164 continue;1175116511761166 while (__percpu_memcpy_buffer_pop(&buffer, &item, i)) {