···
  C  staging driver module
  E  unsigned module
 == =====================
+
+What:		/sys/module/grant_table/parameters/free_per_iteration
+Date:		July 2023
+KernelVersion:	6.5 but backported to all supported stable branches
+Contact:	Xen developer discussion <xen-devel@lists.xenproject.org>
+Description:	Read and write number of grant entries to attempt to free per iteration.
+
+		Note: Future versions of Xen and Linux may provide a better
+		interface for controlling the rate of deferred grant reclaim
+		or may not need it at all.
+Users:		Qubes OS (https://www.qubes-os.org)
+1-1
Documentation/admin-guide/devices.txt
···
  45 = /dev/ttyMM1	Marvell MPSC - port 1	(obsolete unused)
  46 = /dev/ttyCPM0	PPC CPM (SCC or SMC) - port 0
 ...
- 47 = /dev/ttyCPM5	PPC CPM (SCC or SMC) - port 5
+ 49 = /dev/ttyCPM5	PPC CPM (SCC or SMC) - port 3
  50 = /dev/ttyIOC0	Altix serial card
 ...
  81 = /dev/ttyIOC31	Altix serial card
+7-4
Documentation/admin-guide/hw-vuln/spectre.rst
···
 
 Systems which support enhanced IBRS (eIBRS) enable IBRS protection once at
 boot, by setting the IBRS bit, and they're automatically protected against
-Spectre v2 variant attacks, including cross-thread branch target injections
-on SMT systems (STIBP). In other words, eIBRS enables STIBP too.
+Spectre v2 variant attacks.
 
-Legacy IBRS systems clear the IBRS bit on exit to userspace and
-therefore explicitly enable STIBP for that
+On Intel's enhanced IBRS systems, this includes cross-thread branch target
+injections on SMT systems (STIBP). In other words, Intel eIBRS enables
+STIBP, too.
+
+AMD Automatic IBRS does not protect userspace, and Legacy IBRS systems clear
+the IBRS bit on exit to userspace, therefore both explicitly enable STIBP.
 
 The retpoline mitigation is turned on by default on vulnerable
 CPUs. It can be forced on or off by the administrator
···
  is half of the number of your physical RAM pages, or (on a
  machine with highmem) the number of lowmem RAM pages,
  whichever is the lower.
-noswap    Disables swap. Remounts must respect the original settings.
-          By default swap is enabled.
 ========= ============================================================
 
 These parameters accept a suffix k, m or g for kilo, mega and giga and
···
 use up all the memory on the machine; but enhances the scalability of
 that instance in a system with many CPUs making intensive use of it.
 
+tmpfs blocks may be swapped out, when there is a shortage of memory.
+tmpfs has a mount option to disable its use of swap:
+
+====== ===========================================================
+noswap Disables swap. Remounts must respect the original settings.
+       By default swap is enabled.
+====== ===========================================================
+
 tmpfs also supports Transparent Huge Pages which requires a kernel
 configured with CONFIG_TRANSPARENT_HUGEPAGE and with huge supported for
 your system (has_transparent_hugepage(), which is architecture specific).
 The mount options for this are:
 
-====== ============================================================
-huge=0 never: disables huge pages for the mount
-huge=1 always: enables huge pages for the mount
-huge=2 within_size: only allocate huge pages if the page will be
-       fully within i_size, also respect fadvise()/madvise() hints.
-huge=3 advise: only allocate huge pages if requested with
-       fadvise()/madvise()
-====== ============================================================
+================ ==============================================================
+huge=never       Do not allocate huge pages.  This is the default.
+huge=always      Attempt to allocate huge page every time a new page is needed.
+huge=within_size Only allocate huge page if it will be fully within i_size.
+                 Also respect madvise(2) hints.
+huge=advise      Only allocate huge page if requested with madvise(2).
+================ ==============================================================
 
-There is a sysfs file which you can also use to control system wide THP
-configuration for all tmpfs mounts, the file is:
-
-/sys/kernel/mm/transparent_hugepage/shmem_enabled
-
-This sysfs file is placed on top of THP sysfs directory and so is registered
-by THP code. It is however only used to control all tmpfs mounts with one
-single knob. Since it controls all tmpfs mounts it should only be used either
-for emergency or testing purposes. The values you can set for shmem_enabled are:
-
-== ============================================================
--1 deny: disables huge on shm_mnt and all mounts, for
-   emergency use
--2 force: enables huge on shm_mnt and all mounts, w/o needing
-   option, for testing
-== ============================================================
+See also Documentation/admin-guide/mm/transhuge.rst, which describes the
+sysfs file /sys/kernel/mm/transparent_hugepage/shmem_enabled: which can
+be used to deny huge pages on all tmpfs mounts in an emergency, or to
+force huge pages on all tmpfs mounts for testing.
 
 tmpfs has a mount option to set the NUMA memory allocation policy for
 all files in that instance (if CONFIG_NUMA is enabled) - which can be
+7-6
Documentation/networking/napi.rst
···
 packets but should only process up to ``budget`` number of
 Rx packets. Rx processing is usually much more expensive.
 
-In other words, it is recommended to ignore the budget argument when
-performing TX buffer reclamation to ensure that the reclamation is not
-arbitrarily bounded; however, it is required to honor the budget argument
-for RX processing.
+In other words for Rx processing the ``budget`` argument limits how many
+packets driver can process in a single poll. Rx specific APIs like page
+pool or XDP cannot be used at all when ``budget`` is 0.
+skb Tx processing should happen regardless of the ``budget``, but if
+the argument is 0 driver cannot call any XDP (or page pool) APIs.
 
 .. warning::
 
-   The ``budget`` argument may be 0 if core tries to only process Tx completions
-   and no Rx packets.
+   The ``budget`` argument may be 0 if core tries to only process
+   skb Tx completions and no Rx or XDP packets.
 
 The poll method returns the amount of work done. If the driver still
 has outstanding work to do (e.g. ``budget`` was exhausted)
···
   Samsung	Javier González <javier.gonz@samsung.com>
 
   Microsoft	James Morris <jamorris@linux.microsoft.com>
-  VMware
   Xen		Andrew Cooper <andrew.cooper3@citrix.com>
 
   Canonical	John Johansen <john.johansen@canonical.com>
···
   Red Hat	Josh Poimboeuf <jpoimboe@redhat.com>
   SUSE		Jiri Kosina <jkosina@suse.cz>
 
-  Amazon
   Google	Kees Cook <keescook@chromium.org>
 
-  GCC
   LLVM		Nick Desaulniers <ndesaulniers@google.com>
   ============= ========================================================
 
+17-20
Documentation/process/security-bugs.rst
···
 of the report are treated confidentially even after the embargo has been
 lifted, in perpetuity.
 
-Coordination
-------------
+Coordination with other groups
+------------------------------
 
-Fixes for sensitive bugs, such as those that might lead to privilege
-escalations, may need to be coordinated with the private
-<linux-distros@vs.openwall.org> mailing list so that distribution vendors
-are well prepared to issue a fixed kernel upon public disclosure of the
-upstream fix. Distros will need some time to test the proposed patch and
-will generally request at least a few days of embargo, and vendor update
-publication prefers to happen Tuesday through Thursday. When appropriate,
-the security team can assist with this coordination, or the reporter can
-include linux-distros from the start. In this case, remember to prefix
-the email Subject line with "[vs]" as described in the linux-distros wiki:
-<http://oss-security.openwall.org/wiki/mailing-lists/distros#how-to-use-the-lists>
+The kernel security team strongly recommends that reporters of potential
+security issues NEVER contact the "linux-distros" mailing list until
+AFTER discussing it with the kernel security team. Do not Cc: both
+lists at once. You may contact the linux-distros mailing list after a
+fix has been agreed on and you fully understand the requirements that
+doing so will impose on you and the kernel community.
+
+The different lists have different goals and the linux-distros rules do
+not contribute to actually fixing any potential security problems.
 
 CVE assignment
 --------------
 
-The security team does not normally assign CVEs, nor do we require them
-for reports or fixes, as this can needlessly complicate the process and
-may delay the bug handling.  If a reporter wishes to have a CVE identifier
-assigned ahead of public disclosure, they will need to contact the private
-linux-distros list, described above. When such a CVE identifier is known
-before a patch is provided, it is desirable to mention it in the commit
-message if the reporter agrees.
+The security team does not assign CVEs, nor do we require them for
+reports or fixes, as this can needlessly complicate the process and may
+delay the bug handling. If a reporter wishes to have a CVE identifier
+assigned, they should find one by themselves, for example by contacting
+MITRE directly. However under no circumstances will a patch inclusion
+be delayed to wait for a CVE identifier to arrive.
 
 Non-disclosure agreements
 -------------------------
+9-2
MAINTAINERS
···
 M:	Peter Chen <peter.chen@kernel.org>
 M:	Pawel Laszczak <pawell@cadence.com>
 R:	Roger Quadros <rogerq@kernel.org>
-R:	Aswath Govindraju <a-govindraju@ti.com>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git
···
 F:	include/linux/compiler_attributes.h
 
 COMPUTE EXPRESS LINK (CXL)
+M:	Davidlohr Bueso <dave@stgolabs.net>
+M:	Jonathan Cameron <jonathan.cameron@huawei.com>
+M:	Dave Jiang <dave.jiang@intel.com>
 M:	Alison Schofield <alison.schofield@intel.com>
 M:	Vishal Verma <vishal.l.verma@intel.com>
 M:	Ira Weiny <ira.weiny@intel.com>
-M:	Ben Widawsky <bwidawsk@kernel.org>
 M:	Dan Williams <dan.j.williams@intel.com>
 L:	linux-cxl@vger.kernel.org
 S:	Maintained
···
 TTY LAYER
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 M:	Jiri Slaby <jirislaby@kernel.org>
+L:	linux-kernel@vger.kernel.org
+L:	linux-serial@vger.kernel.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty.git
 F:	Documentation/driver-api/serial/
 F:	drivers/tty/
+F:	drivers/tty/serial/serial_base.h
+F:	drivers/tty/serial/serial_base_bus.c
 F:	drivers/tty/serial/serial_core.c
+F:	drivers/tty/serial/serial_ctrl.c
+F:	drivers/tty/serial/serial_port.c
 F:	include/linux/selection.h
 F:	include/linux/serial.h
 F:	include/linux/serial_core.h
···
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
 CONFIG_EXT4_FS=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_FUSE_FS=y
 CONFIG_CUSE=y
 CONFIG_FSCACHE=y
···
 CONFIG_QUOTA=y
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 # CONFIG_PRINT_QUOTA_WARNING is not set
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_FUSE_FS=y
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
···
 CONFIG_MEMORY=y
 # CONFIG_ARM_PMU is not set
 CONFIG_EXT4_FS=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
···
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_DNOTIFY is not set
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=y
 CONFIG_JOLIET=y
 CONFIG_MSDOS_FS=y
···
 	if (task == current)
 		put_cpu_fpsimd_context();
 
+	task_set_vl(task, type, vl);
+
 	/*
 	 * Free the changed states if they are not in use, SME will be
 	 * reallocated to the correct size on next use and we just
···
 
 	if (free_sme)
 		sme_free(task);
-
-	task_set_vl(task, type, vl);
 
 out:
 	update_tsk_thread_flag(task, vec_vl_inherit_flag(type),
···
 
 		fpsimd_flush_thread_vl(ARM64_VEC_SME);
 		current->thread.svcr = 0;
-		sme_smstop();
 	}
 
 	current->thread.fp_type = FP_STATE_FPSIMD;
···
 # CONFIG_PRINT_QUOTA_WARNING is not set
 CONFIG_QFMT_V1=m
 CONFIG_QFMT_V2=m
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_FUSE_FS=m
 CONFIG_OVERLAY_FS=y
 CONFIG_OVERLAY_FS_INDEX=y
+4-11
arch/loongarch/include/asm/fpu.h
···
 
 static inline void init_lsx_upper(void)
 {
-	/*
-	 * Check cpu_has_lsx only if it's a constant. This will allow the
-	 * compiler to optimise out code for CPUs without LSX without adding
-	 * an extra redundant check for CPUs with LSX.
-	 */
-	if (__builtin_constant_p(cpu_has_lsx) && !cpu_has_lsx)
-		return;
-
-	_init_lsx_upper();
+	if (cpu_has_lsx)
+		_init_lsx_upper();
 }
 
 static inline void restore_lsx_upper(struct task_struct *t)
···
 
 static inline int thread_lsx_context_live(void)
 {
-	if (__builtin_constant_p(cpu_has_lsx) && !cpu_has_lsx)
+	if (!cpu_has_lsx)
 		return 0;
 
 	return test_thread_flag(TIF_LSX_CTX_LIVE);
···
 
 static inline int thread_lasx_context_live(void)
 {
-	if (__builtin_constant_p(cpu_has_lasx) && !cpu_has_lasx)
+	if (!cpu_has_lasx)
 		return 0;
 
 	return test_thread_flag(TIF_LASX_CTX_LIVE);
+16
arch/loongarch/kernel/setup.c
···
 		strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
 
 		strlcat(boot_command_line, init_command_line, COMMAND_LINE_SIZE);
+		goto out;
 	}
 #endif
+
+	/*
+	 * Append built-in command line to the bootloader command line if
+	 * CONFIG_CMDLINE_EXTEND is enabled.
+	 */
+	if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) && CONFIG_CMDLINE[0]) {
+		strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
+		strlcat(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
+	}
+
+	/*
+	 * Use built-in command line if the bootloader command line is empty.
+	 */
+	if (IS_ENABLED(CONFIG_CMDLINE_BOOTLOADER) && !boot_command_line[0])
+		strscpy(boot_command_line, CONFIG_CMDLINE, COMMAND_LINE_SIZE);
 
 out:
 	*cmdline_p = boot_command_line;
···
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 # CONFIG_PRINT_QUOTA_WARNING is not set
 CONFIG_QFMT_V2=m
-CONFIG_AUTOFS4_FS=m
+CONFIG_AUTOFS_FS=m
 CONFIG_FUSE_FS=m
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
···
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 # CONFIG_PRINT_QUOTA_WARNING is not set
 CONFIG_QFMT_V2=m
-CONFIG_AUTOFS4_FS=m
+CONFIG_AUTOFS_FS=m
 CONFIG_FUSE_FS=m
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
···
 CONFIG_XFS_POSIX_ACL=y
 CONFIG_QUOTA=y
 # CONFIG_PRINT_QUOTA_WARNING is not set
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_FUSE_FS=m
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
+1-1
arch/mips/configs/loongson3_defconfig
···
 # CONFIG_PRINT_QUOTA_WARNING is not set
 CONFIG_QFMT_V1=m
 CONFIG_QFMT_V2=m
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_FUSE_FS=m
 CONFIG_VIRTIO_FS=m
 CONFIG_FSCACHE=m
···
 # CONFIG_USB_SUPPORT is not set
 CONFIG_EXT2_FS=y
 CONFIG_EXT4_FS=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_CRAMFS=y
···
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_PROC_KCORE=y
 CONFIG_TMPFS=y
 CONFIG_TMPFS_POSIX_ACL=y
+1-1
arch/sh/configs/sdk7780_defconfig
···
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
 CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
···
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
 CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=m
 CONFIG_JOLIET=y
 CONFIG_ZISOFS=y
+1-1
arch/sh/configs/sh7763rdp_defconfig
···
 CONFIG_EXT2_FS=y
 CONFIG_EXT3_FS=y
 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_PROC_KCORE=y
···
  * Copyright (C) 2002 - 2008 Jeff Dike (jdike@{addtoit,linux.intel}.com)
  */
 
-#include <linux/minmax.h>
 #include <unistd.h>
 #include <errno.h>
 #include <fcntl.h>
···
 
 static int write_sigio_thread(void *unused)
 {
-	struct pollfds *fds;
+	struct pollfds *fds, tmp;
 	struct pollfd *p;
 	int i, n, respond_fd;
 	char c;
···
 				  "write_sigio_thread : "
 				  "read on socket failed, "
 				  "err = %d\n", errno);
-			swap(current_poll, next_poll);
+			tmp = current_poll;
+			current_poll = next_poll;
+			next_poll = tmp;
 			respond_fd = sigio_private[1];
 		}
 		else {
+1-1
arch/x86/configs/i386_defconfig
···
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 # CONFIG_PRINT_QUOTA_WARNING is not set
 CONFIG_QFMT_V2=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=y
 CONFIG_JOLIET=y
 CONFIG_ZISOFS=y
+1-1
arch/x86/configs/x86_64_defconfig
···
 CONFIG_QUOTA_NETLINK_INTERFACE=y
 # CONFIG_PRINT_QUOTA_WARNING is not set
 CONFIG_QFMT_V2=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS_FS=y
 CONFIG_ISO9660_FS=y
 CONFIG_JOLIET=y
 CONFIG_ZISOFS=y
+15-1
arch/x86/entry/entry_64.S
···
  */
 .pushsection .text, "ax"
 SYM_CODE_START(ret_from_fork_asm)
-	UNWIND_HINT_REGS
+	/*
+	 * This is the start of the kernel stack; even though there's a
+	 * register set at the top, the regset isn't necessarily coherent
+	 * (consider kthreads) and one cannot unwind further.
+	 *
+	 * This ensures stack unwinds of kernel threads terminate in a known
+	 * good state.
+	 */
+	UNWIND_HINT_END_OF_STACK
 	ANNOTATE_NOENDBR // copy_thread
 	CALL_DEPTH_ACCOUNT
···
 	movq	%r12, %rcx		/* fn_arg */
 	call	ret_from_fork
 
+	/*
+	 * Set the stack state to what is expected for the target function
+	 * -- at this point the register set should be a valid user set
+	 * and unwind should work normally.
+	 */
+	UNWIND_HINT_REGS
 	jmp	swapgs_restore_regs_and_return_to_usermode
 SYM_CODE_END(ret_from_fork_asm)
 .popsection
···
 
 #include "cpu.h"
 
-static const int amd_erratum_383[];
-static const int amd_erratum_400[];
-static const int amd_erratum_1054[];
-static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum);
-
 /*
  * nodes_per_socket: Stores the number of nodes per socket.
  * Refer to Fam15h Models 00-0fh BKDG - CPUID Fn8000_001E_ECX
  * Node Identifiers[10:8]
  */
 static u32 nodes_per_socket = 1;
+
+/*
+ * AMD errata checking
+ *
+ * Errata are defined as arrays of ints using the AMD_LEGACY_ERRATUM() or
+ * AMD_OSVW_ERRATUM() macros. The latter is intended for newer errata that
+ * have an OSVW id assigned, which it takes as first argument. Both take a
+ * variable number of family-specific model-stepping ranges created by
+ * AMD_MODEL_RANGE().
+ *
+ * Example:
+ *
+ * const int amd_erratum_319[] =
+ *	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0x4, 0x2),
+ *			   AMD_MODEL_RANGE(0x10, 0x8, 0x0, 0x8, 0x0),
+ *			   AMD_MODEL_RANGE(0x10, 0x9, 0x0, 0x9, 0x0));
+ */
+
+#define AMD_LEGACY_ERRATUM(...)		{ -1, __VA_ARGS__, 0 }
+#define AMD_OSVW_ERRATUM(osvw_id, ...)	{ osvw_id, __VA_ARGS__, 0 }
+#define AMD_MODEL_RANGE(f, m_start, s_start, m_end, s_end) \
+	((f << 24) | (m_start << 16) | (s_start << 12) | (m_end << 4) | (s_end))
+#define AMD_MODEL_RANGE_FAMILY(range)	(((range) >> 24) & 0xff)
+#define AMD_MODEL_RANGE_START(range)	(((range) >> 12) & 0xfff)
+#define AMD_MODEL_RANGE_END(range)	((range) & 0xfff)
+
+static const int amd_erratum_400[] =
+	AMD_OSVW_ERRATUM(1, AMD_MODEL_RANGE(0xf, 0x41, 0x2, 0xff, 0xf),
+			    AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0xff, 0xf));
+
+static const int amd_erratum_383[] =
+	AMD_OSVW_ERRATUM(3, AMD_MODEL_RANGE(0x10, 0, 0, 0xff, 0xf));
+
+/* #1054: Instructions Retired Performance Counter May Be Inaccurate */
+static const int amd_erratum_1054[] =
+	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
+
+static const int amd_zenbleed[] =
+	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0x30, 0x0, 0x4f, 0xf),
+			   AMD_MODEL_RANGE(0x17, 0x60, 0x0, 0x7f, 0xf),
+			   AMD_MODEL_RANGE(0x17, 0xa0, 0x0, 0xaf, 0xf));
+
+static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
+{
+	int osvw_id = *erratum++;
+	u32 range;
+	u32 ms;
+
+	if (osvw_id >= 0 && osvw_id < 65536 &&
+	    cpu_has(cpu, X86_FEATURE_OSVW)) {
+		u64 osvw_len;
+
+		rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, osvw_len);
+		if (osvw_id < osvw_len) {
+			u64 osvw_bits;
+
+			rdmsrl(MSR_AMD64_OSVW_STATUS + (osvw_id >> 6),
+			       osvw_bits);
+			return osvw_bits & (1ULL << (osvw_id & 0x3f));
+		}
+	}
+
+	/* OSVW unavailable or ID unknown, match family-model-stepping range */
+	ms = (cpu->x86_model << 4) | cpu->x86_stepping;
+	while ((range = *erratum++))
+		if ((cpu->x86 == AMD_MODEL_RANGE_FAMILY(range)) &&
+		    (ms >= AMD_MODEL_RANGE_START(range)) &&
+		    (ms <= AMD_MODEL_RANGE_END(range)))
+			return true;
+
+	return false;
+}
 
 static inline int rdmsrl_amd_safe(unsigned msr, unsigned long long *p)
 {
···
 	}
 }
 
+static bool cpu_has_zenbleed_microcode(void)
+{
+	u32 good_rev = 0;
+
+	switch (boot_cpu_data.x86_model) {
+	case 0x30 ... 0x3f: good_rev = 0x0830107a; break;
+	case 0x60 ... 0x67: good_rev = 0x0860010b; break;
+	case 0x68 ... 0x6f: good_rev = 0x08608105; break;
+	case 0x70 ... 0x7f: good_rev = 0x08701032; break;
+	case 0xa0 ... 0xaf: good_rev = 0x08a00008; break;
+
+	default:
+		return false;
+		break;
+	}
+
+	if (boot_cpu_data.microcode < good_rev)
+		return false;
+
+	return true;
+}
+
+static void zenbleed_check(struct cpuinfo_x86 *c)
+{
+	if (!cpu_has_amd_erratum(c, amd_zenbleed))
+		return;
+
+	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
+		return;
+
+	if (!cpu_has(c, X86_FEATURE_AVX))
+		return;
+
+	if (!cpu_has_zenbleed_microcode()) {
+		pr_notice_once("Zenbleed: please update your microcode for the most optimal fix\n");
+		msr_set_bit(MSR_AMD64_DE_CFG, MSR_AMD64_DE_CFG_ZEN2_FP_BACKUP_FIX_BIT);
+	} else {
+		msr_clear_bit(MSR_AMD64_DE_CFG, MSR_AMD64_DE_CFG_ZEN2_FP_BACKUP_FIX_BIT);
+	}
+}
+
 static void init_amd(struct cpuinfo_x86 *c)
 {
 	early_init_amd(c);
···
 	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
 	    cpu_has(c, X86_FEATURE_AUTOIBRS))
 		WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS));
+
+	zenbleed_check(c);
 }
 
 #ifdef CONFIG_X86_32
···
 
 cpu_dev_register(amd_cpu_dev);
 
-/*
- * AMD errata checking
- *
- * Errata are defined as arrays of ints using the AMD_LEGACY_ERRATUM() or
- * AMD_OSVW_ERRATUM() macros. The latter is intended for newer errata that
- * have an OSVW id assigned, which it takes as first argument. Both take a
- * variable number of family-specific model-stepping ranges created by
- * AMD_MODEL_RANGE().
- *
- * Example:
- *
- * const int amd_erratum_319[] =
- *	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0x4, 0x2),
- *			   AMD_MODEL_RANGE(0x10, 0x8, 0x0, 0x8, 0x0),
- *			   AMD_MODEL_RANGE(0x10, 0x9, 0x0, 0x9, 0x0));
- */
-
-#define AMD_LEGACY_ERRATUM(...)		{ -1, __VA_ARGS__, 0 }
-#define AMD_OSVW_ERRATUM(osvw_id, ...)	{ osvw_id, __VA_ARGS__, 0 }
-#define AMD_MODEL_RANGE(f, m_start, s_start, m_end, s_end) \
-	((f << 24) | (m_start << 16) | (s_start << 12) | (m_end << 4) | (s_end))
-#define AMD_MODEL_RANGE_FAMILY(range)	(((range) >> 24) & 0xff)
-#define AMD_MODEL_RANGE_START(range)	(((range) >> 12) & 0xfff)
-#define AMD_MODEL_RANGE_END(range)	((range) & 0xfff)
-
-static const int amd_erratum_400[] =
-	AMD_OSVW_ERRATUM(1, AMD_MODEL_RANGE(0xf, 0x41, 0x2, 0xff, 0xf),
-			    AMD_MODEL_RANGE(0x10, 0x2, 0x1, 0xff, 0xf));
-
-static const int amd_erratum_383[] =
-	AMD_OSVW_ERRATUM(3, AMD_MODEL_RANGE(0x10, 0, 0, 0xff, 0xf));
-
-/* #1054: Instructions Retired Performance Counter May Be Inaccurate */
-static const int amd_erratum_1054[] =
-	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
-
-static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
-{
-	int osvw_id = *erratum++;
-	u32 range;
-	u32 ms;
-
-	if (osvw_id >= 0 && osvw_id < 65536 &&
-	    cpu_has(cpu, X86_FEATURE_OSVW)) {
-		u64 osvw_len;
-
-		rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, osvw_len);
-		if (osvw_id < osvw_len) {
-			u64 osvw_bits;
-
-			rdmsrl(MSR_AMD64_OSVW_STATUS + (osvw_id >> 6),
-			       osvw_bits);
-			return osvw_bits & (1ULL << (osvw_id & 0x3f));
-		}
-	}
-
-	/* OSVW unavailable or ID unknown, match family-model-stepping range */
-	ms = (cpu->x86_model << 4) | cpu->x86_stepping;
-	while ((range = *erratum++))
-		if ((cpu->x86 == AMD_MODEL_RANGE_FAMILY(range)) &&
-		    (ms >= AMD_MODEL_RANGE_START(range)) &&
-		    (ms <= AMD_MODEL_RANGE_END(range)))
-			return true;
-
-	return false;
-}
-
 static DEFINE_PER_CPU_READ_MOSTLY(unsigned long[4], amd_dr_addr_mask);
 
 static unsigned int amd_msr_dr_addr_masks[] = {
···
 	return 255;
 }
 EXPORT_SYMBOL_GPL(amd_get_highest_perf);
+
+static void zenbleed_check_cpu(void *unused)
+{
+	struct cpuinfo_x86 *c = &cpu_data(smp_processor_id());
+
+	zenbleed_check(c);
+}
+
+void amd_check_microcode(void)
+{
+	on_each_cpu(zenbleed_check_cpu, NULL, 1);
+}
+9-6
arch/x86/kernel/cpu/bugs.c
···
 	}
 
 	/*
-	 * If no STIBP, enhanced IBRS is enabled, or SMT impossible, STIBP
+	 * If no STIBP, Intel enhanced IBRS is enabled, or SMT impossible, STIBP
 	 * is not required.
 	 *
-	 * Enhanced IBRS also protects against cross-thread branch target
+	 * Intel's Enhanced IBRS also protects against cross-thread branch target
 	 * injection in user-mode as the IBRS bit remains always set which
 	 * implicitly enables cross-thread protections.  However, in legacy IBRS
 	 * mode, the IBRS bit is set only on kernel entry and cleared on return
-	 * to userspace. This disables the implicit cross-thread protection,
-	 * so allow for STIBP to be selected in that case.
+	 * to userspace. AMD Automatic IBRS also does not protect userspace.
+	 * These modes therefore disable the implicit cross-thread protection,
+	 * so allow for STIBP to be selected in those cases.
 	 */
 	if (!boot_cpu_has(X86_FEATURE_STIBP) ||
 	    !smt_possible ||
-	    spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+	    (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+	     !boot_cpu_has(X86_FEATURE_AUTOIBRS)))
 		return;
 
 	/*
···
 
 static char *stibp_state(void)
 {
-	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+	if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+	    !boot_cpu_has(X86_FEATURE_AUTOIBRS))
 		return "";
 
 	switch (spectre_v2_user_stibp) {
···
 	}
 }
 
+static bool svm_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+{
+	return true;
+}
+
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
···
 
 static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
-	struct vmcb_control_area *control = &to_svm(vcpu)->vmcb->control;
-
-	/*
-	 * Note, the next RIP must be provided as SRCU isn't held, i.e. KVM
-	 * can't read guest memory (dereference memslots) to decode the WRMSR.
-	 */
-	if (control->exit_code == SVM_EXIT_MSR && control->exit_info_1 &&
-	    nrips && control->next_rip)
+	if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
+	    to_svm(vcpu)->vmcb->control.exit_info_1)
 		return handle_fastpath_set_msr_irqoff(vcpu);
 
 	return EXIT_FASTPATH_NONE;
···
 	.set_segment = svm_set_segment,
 	.get_cpl = svm_get_cpl,
 	.get_cs_db_l_bits = svm_get_cs_db_l_bits,
+	.is_valid_cr0 = svm_is_valid_cr0,
 	.set_cr0 = svm_set_cr0,
 	.post_set_cr3 = sev_post_set_cr3,
 	.is_valid_cr4 = svm_is_valid_cr4,
+4-4
arch/x86/kvm/vmx/vmenter.S
···
 	VMX_DO_EVENT_IRQOFF call asm_exc_nmi_kvm_vmx
 SYM_FUNC_END(vmx_do_nmi_irqoff)
 
-
-.section .text, "ax"
-
 #ifndef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+
 /**
  * vmread_error_trampoline - Trampoline from inline asm to vmread_error()
  * @field:	VMCS field encoding that failed
···
 	mov 3*WORD_SIZE(%_ASM_BP), %_ASM_ARG2
 	mov 2*WORD_SIZE(%_ASM_BP), %_ASM_ARG1
 
-	call vmread_error
+	call vmread_error_trampoline2
 
 	/* Zero out @fault, which will be popped into the result register. */
 	_ASM_MOV $0, 3*WORD_SIZE(%_ASM_BP)
···
 	RET
 SYM_FUNC_END(vmread_error_trampoline)
 #endif
+
+.section .text, "ax"
 
 SYM_FUNC_START(vmx_do_interrupt_irqoff)
 	VMX_DO_EVENT_IRQOFF CALL_NOSPEC _ASM_ARG1
+47-17
arch/x86/kvm/vmx/vmx.c
···
         pr_warn_ratelimited(fmt); \
 } while (0)

-void vmread_error(unsigned long field, bool fault)
+noinline void vmread_error(unsigned long field)
 {
-        if (fault)
-                kvm_spurious_fault();
-        else
-                vmx_insn_failed("vmread failed: field=%lx\n", field);
+        vmx_insn_failed("vmread failed: field=%lx\n", field);
 }
+
+#ifndef CONFIG_CC_HAS_ASM_GOTO_OUTPUT
+noinstr void vmread_error_trampoline2(unsigned long field, bool fault)
+{
+        if (fault) {
+                kvm_spurious_fault();
+        } else {
+                instrumentation_begin();
+                vmread_error(field);
+                instrumentation_end();
+        }
+}
+#endif

 noinline void vmwrite_error(unsigned long field, unsigned long value)
 {
···
         struct vcpu_vmx *vmx = to_vmx(vcpu);
         unsigned long old_rflags;

+        /*
+         * Unlike CR0 and CR4, RFLAGS handling requires checking if the vCPU
+         * is an unrestricted guest in order to mark L2 as needing emulation
+         * if L1 runs L2 as a restricted guest.
+         */
         if (is_unrestricted_guest(vcpu)) {
                 kvm_register_mark_available(vcpu, VCPU_EXREG_RFLAGS);
                 vmx->rflags = rflags;
···
         struct vcpu_vmx *vmx = to_vmx(vcpu);
         struct kvm_vmx *kvm_vmx = to_kvm_vmx(vcpu->kvm);

+        /*
+         * KVM should never use VM86 to virtualize Real Mode when L2 is active,
+         * as using VM86 is unnecessary if unrestricted guest is enabled, and
+         * if unrestricted guest is disabled, VM-Enter (from L1) with CR0.PG=0
+         * should VM-Fail and KVM should reject userspace attempts to stuff
+         * CR0.PG=0 when L2 is active.
+         */
+        WARN_ON_ONCE(is_guest_mode(vcpu));
+
         vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_TR], VCPU_SREG_TR);
         vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_ES], VCPU_SREG_ES);
         vmx_get_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_DS], VCPU_SREG_DS);
···
 #define CR3_EXITING_BITS (CPU_BASED_CR3_LOAD_EXITING | \
                           CPU_BASED_CR3_STORE_EXITING)

+static bool vmx_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+{
+        if (is_guest_mode(vcpu))
+                return nested_guest_cr0_valid(vcpu, cr0);
+
+        if (to_vmx(vcpu)->nested.vmxon)
+                return nested_host_cr0_valid(vcpu, cr0);
+
+        return true;
+}
+
 void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
         struct vcpu_vmx *vmx = to_vmx(vcpu);
···
         old_cr0_pg = kvm_read_cr0_bits(vcpu, X86_CR0_PG);

         hw_cr0 = (cr0 & ~KVM_VM_CR0_ALWAYS_OFF);
-        if (is_unrestricted_guest(vcpu))
+        if (enable_unrestricted_guest)
                 hw_cr0 |= KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST;
         else {
                 hw_cr0 |= KVM_VM_CR0_ALWAYS_ON;
···
         }
 #endif

-        if (enable_ept && !is_unrestricted_guest(vcpu)) {
+        if (enable_ept && !enable_unrestricted_guest) {
                 /*
                  * Ensure KVM has an up-to-date snapshot of the guest's CR3.  If
                  * the below code _enables_ CR3 exiting, vmx_cache_reg() will
···
          * this bit, even if host CR4.MCE == 0.
          */
         hw_cr4 = (cr4_read_shadow() & X86_CR4_MCE) | (cr4 & ~X86_CR4_MCE);
-        if (is_unrestricted_guest(vcpu))
+        if (enable_unrestricted_guest)
                 hw_cr4 |= KVM_VM_CR4_ALWAYS_ON_UNRESTRICTED_GUEST;
         else if (vmx->rmode.vm86_active)
                 hw_cr4 |= KVM_RMODE_VM_CR4_ALWAYS_ON;
···
         vcpu->arch.cr4 = cr4;
         kvm_register_mark_available(vcpu, VCPU_EXREG_CR4);

-        if (!is_unrestricted_guest(vcpu)) {
+        if (!enable_unrestricted_guest) {
                 if (enable_ept) {
                         if (!is_paging(vcpu)) {
                                 hw_cr4 &= ~X86_CR4_PAE;
···
         if (kvm_vmx->pid_table)
                 return 0;

-        pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, vmx_get_pid_table_order(kvm));
+        pages = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO,
+                            vmx_get_pid_table_order(kvm));
         if (!pages)
                 return -ENOMEM;
···
                 val = (val & ~vmcs12->cr0_guest_host_mask) |
                         (vmcs12->guest_cr0 & vmcs12->cr0_guest_host_mask);

-                if (!nested_guest_cr0_valid(vcpu, val))
-                        return 1;
-
                 if (kvm_set_cr0(vcpu, val))
                         return 1;
                 vmcs_writel(CR0_READ_SHADOW, orig_val);
                 return 0;
         } else {
-                if (to_vmx(vcpu)->nested.vmxon &&
-                    !nested_host_cr0_valid(vcpu, val))
-                        return 1;
-
                 return kvm_set_cr0(vcpu, val);
         }
 }
···
         .set_segment = vmx_set_segment,
         .get_cpl = vmx_get_cpl,
         .get_cs_db_l_bits = vmx_get_cs_db_l_bits,
+        .is_valid_cr0 = vmx_is_valid_cr0,
         .set_cr0 = vmx_set_cr0,
         .is_valid_cr4 = vmx_is_valid_cr4,
         .set_cr4 = vmx_set_cr4,
+9-3
arch/x86/kvm/vmx/vmx_ops.h
···
 #include "vmcs.h"
 #include "../x86.h"

-void vmread_error(unsigned long field, bool fault);
+void vmread_error(unsigned long field);
 void vmwrite_error(unsigned long field, unsigned long value);
 void vmclear_error(struct vmcs *vmcs, u64 phys_addr);
 void vmptrld_error(struct vmcs *vmcs, u64 phys_addr);
···
  * void vmread_error_trampoline(unsigned long field, bool fault);
  */
 extern unsigned long vmread_error_trampoline;
+
+/*
+ * The second VMREAD error trampoline, called from the assembly trampoline,
+ * exists primarily to enable instrumentation for the VM-Fail path.
+ */
+void vmread_error_trampoline2(unsigned long field, bool fault);
+
 #endif

 static __always_inline void vmcs_check16(unsigned long field)
···

 do_fail:
         instrumentation_begin();
-        WARN_ONCE(1, KBUILD_MODNAME ": vmread failed: field=%lx\n", field);
-        pr_warn_ratelimited(KBUILD_MODNAME ": vmread failed: field=%lx\n", field);
+        vmread_error(field);
         instrumentation_end();
         return 0;
+34-16
arch/x86/kvm/x86.c
···
 }
 EXPORT_SYMBOL_GPL(load_pdptrs);

+static bool kvm_is_valid_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
+{
+#ifdef CONFIG_X86_64
+        if (cr0 & 0xffffffff00000000UL)
+                return false;
+#endif
+
+        if ((cr0 & X86_CR0_NW) && !(cr0 & X86_CR0_CD))
+                return false;
+
+        if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
+                return false;
+
+        return static_call(kvm_x86_is_valid_cr0)(vcpu, cr0);
+}
+
 void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0)
 {
         /*
···
 {
         unsigned long old_cr0 = kvm_read_cr0(vcpu);

+        if (!kvm_is_valid_cr0(vcpu, cr0))
+                return 1;
+
         cr0 |= X86_CR0_ET;

-#ifdef CONFIG_X86_64
-        if (cr0 & 0xffffffff00000000UL)
-                return 1;
-#endif
-
+        /* Write to CR0 reserved bits are ignored, even on Intel. */
         cr0 &= ~CR0_RESERVED_BITS;
-
-        if ((cr0 & X86_CR0_NW) && !(cr0 & X86_CR0_CD))
-                return 1;
-
-        if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
-                return 1;

 #ifdef CONFIG_X86_64
         if ((vcpu->arch.efer & EFER_LME) && !is_paging(vcpu) &&
···
         u64 data;
         fastpath_t ret = EXIT_FASTPATH_NONE;

+        kvm_vcpu_srcu_read_lock(vcpu);
+
         switch (msr) {
         case APIC_BASE_MSR + (APIC_ICR >> 4):
                 data = kvm_read_edx_eax(vcpu);
···

         if (ret != EXIT_FASTPATH_NONE)
                 trace_kvm_msr_write(msr, data);
+
+        kvm_vcpu_srcu_read_unlock(vcpu);

         return ret;
 }
···
         if (r < 0)
                 goto out;
         if (r) {
-                kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu), false);
-                static_call(kvm_x86_inject_irq)(vcpu, false);
-                WARN_ON(static_call(kvm_x86_interrupt_allowed)(vcpu, true) < 0);
+                int irq = kvm_cpu_get_interrupt(vcpu);
+
+                if (!WARN_ON_ONCE(irq == -1)) {
+                        kvm_queue_interrupt(vcpu, irq, false);
+                        static_call(kvm_x86_inject_irq)(vcpu, false);
+                        WARN_ON(static_call(kvm_x86_interrupt_allowed)(vcpu, true) < 0);
+                }
         }
         if (kvm_cpu_has_injectable_intr(vcpu))
                 static_call(kvm_x86_enable_irq_window)(vcpu);
···
                 return false;
         }

-        return kvm_is_valid_cr4(vcpu, sregs->cr4);
+        return kvm_is_valid_cr4(vcpu, sregs->cr4) &&
+               kvm_is_valid_cr0(vcpu, sregs->cr0);
 }

 static int __set_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs,
···

 bool kvm_arch_has_irq_bypass(void)
 {
-        return true;
+        return enable_apicv && irq_remapping_cap(IRQ_POSTING_CAP);
 }

 int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
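The CR0 rules that the patch above consolidates into kvm_is_valid_cr0() are architectural invariants: on a 64-bit host the upper 32 bits must be clear, NW requires CD, and PG requires PE. As a sanity check, here is a Python toy model of just that predicate; the constants mirror the x86 CR0 bit layout, and the function is an illustration, not kernel code:

```python
# Toy model of the checks hoisted into kvm_is_valid_cr0().
X86_CR0_PE = 1 << 0   # protection enable
X86_CR0_NW = 1 << 29  # not write-through
X86_CR0_CD = 1 << 30  # cache disable
X86_CR0_PG = 1 << 31  # paging enable

def is_valid_cr0(cr0: int, long_mode_host: bool = True) -> bool:
    """Reject reserved upper bits, NW without CD, and PG without PE."""
    if long_mode_host and cr0 & 0xFFFFFFFF00000000:
        return False
    if (cr0 & X86_CR0_NW) and not (cr0 & X86_CR0_CD):
        return False
    if (cr0 & X86_CR0_PG) and not (cr0 & X86_CR0_PE):
        return False
    return True
```

The point of the rework is that validation now happens before kvm_set_cr0() or __set_sregs() mutates any state, instead of part-way through.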
-3
drivers/acpi/arm64/iort.c
···
         for (i = 0; i < node->mapping_count; i++, map++) {
                 struct acpi_iort_node *parent;

-                if (!map->id_count)
-                        continue;
-
                 parent = ACPI_ADD_PTR(struct acpi_iort_node, iort_table,
                                       map->output_reference);
                 if (parent != iommu)
+2-2
drivers/ata/libata-core.c
···
         if (qc->result_tf.status & ATA_SENSE &&
             ((ata_is_ncq(qc->tf.protocol) &&
               dev->flags & ATA_DFLAG_CDL_ENABLED) ||
-             (!(ata_is_ncq(qc->tf.protocol) &&
-              ata_id_sense_reporting_enabled(dev->id))))) {
+             (!ata_is_ncq(qc->tf.protocol) &&
+              ata_id_sense_reporting_enabled(dev->id)))) {
                 /*
                  * Tell SCSI EH to not overwrite scmd->result even if
                  * this command is finished with result SAM_STAT_GOOD.
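The libata hunk above is a pure operator-precedence fix: the old code negated the whole conjunction (`!(ncq && sense)`) where the intent was "non-NCQ command with sense reporting enabled" (`!ncq && sense`). A Python truth-table sketch (illustrative only, names are not the kernel's) makes the difference visible:

```python
# Toy model of the libata condition before and after the fix.
def old_cond(ncq: bool, cdl: bool, sense: bool) -> bool:
    # Misplaced parenthesis: negates the whole conjunction.
    return (ncq and cdl) or (not (ncq and sense))

def new_cond(ncq: bool, cdl: bool, sense: bool) -> bool:
    # Intended logic: non-NCQ command with sense reporting enabled.
    return (ncq and cdl) or ((not ncq) and sense)
```

For an NCQ command without CDL and without sense reporting, the old form wrongly evaluated true; the fixed form does not.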
+2-1
drivers/ata/pata_arasan_cf.c
···
         /* dma_request_channel may sleep, so calling from process context */
         acdev->dma_chan = dma_request_chan(acdev->host->dev, "data");
         if (IS_ERR(acdev->dma_chan)) {
-                dev_err(acdev->host->dev, "Unable to get dma_chan\n");
+                dev_err_probe(acdev->host->dev, PTR_ERR(acdev->dma_chan),
+                              "Unable to get dma_chan\n");
                 acdev->dma_chan = NULL;
                 goto chan_request_fail;
         }
···
         return err;
 }

-
 /**
  * dev_pm_set_dedicated_wake_irq - Request a dedicated wake-up interrupt
  * @dev: Device entry
···
  * Sets up a threaded interrupt handler for a device that has
  * a dedicated wake-up interrupt in addition to the device IO
  * interrupt.
- *
- * The interrupt starts disabled, and needs to be managed for
- * the device by the bus code or the device driver using
- * dev_pm_enable_wake_irq*() and dev_pm_disable_wake_irq*()
- * functions.
  */
 int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
 {
···
  * the status of WAKE_IRQ_DEDICATED_REVERSE to tell rpm_suspend()
  * to enable dedicated wake-up interrupt after running the runtime suspend
  * callback for @dev.
- *
- * The interrupt starts disabled, and needs to be managed for
- * the device by the bus code or the device driver using
- * dev_pm_enable_wake_irq*() and dev_pm_disable_wake_irq*()
- * functions.
  */
 int dev_pm_set_dedicated_wake_irq_reverse(struct device *dev, int irq)
 {
         return __dev_pm_set_dedicated_wake_irq(dev, irq, WAKE_IRQ_DEDICATED_REVERSE);
 }
 EXPORT_SYMBOL_GPL(dev_pm_set_dedicated_wake_irq_reverse);
-
-/**
- * dev_pm_enable_wake_irq - Enable device wake-up interrupt
- * @dev: Device
- *
- * Optionally called from the bus code or the device driver for
- * runtime_resume() to override the PM runtime core managed wake-up
- * interrupt handling to enable the wake-up interrupt.
- *
- * Note that for runtime_suspend()) the wake-up interrupts
- * should be unconditionally enabled unlike for suspend()
- * that is conditional.
- */
-void dev_pm_enable_wake_irq(struct device *dev)
-{
-        struct wake_irq *wirq = dev->power.wakeirq;
-
-        if (wirq && (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED))
-                enable_irq(wirq->irq);
-}
-EXPORT_SYMBOL_GPL(dev_pm_enable_wake_irq);
-
-/**
- * dev_pm_disable_wake_irq - Disable device wake-up interrupt
- * @dev: Device
- *
- * Optionally called from the bus code or the device driver for
- * runtime_suspend() to override the PM runtime core managed wake-up
- * interrupt handling to disable the wake-up interrupt.
- */
-void dev_pm_disable_wake_irq(struct device *dev)
-{
-        struct wake_irq *wirq = dev->power.wakeirq;
-
-        if (wirq && (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED))
-                disable_irq_nosync(wirq->irq);
-}
-EXPORT_SYMBOL_GPL(dev_pm_disable_wake_irq);

 /**
  * dev_pm_enable_wake_irq_check - Checks and enables wake-up interrupt
···
                 return;

 enable:
-        if (!can_change_status || !(wirq->status & WAKE_IRQ_DEDICATED_REVERSE))
+        if (!can_change_status || !(wirq->status & WAKE_IRQ_DEDICATED_REVERSE)) {
                 enable_irq(wirq->irq);
+                wirq->status |= WAKE_IRQ_DEDICATED_ENABLED;
+        }
 }

 /**
···
         if (cond_disable && (wirq->status & WAKE_IRQ_DEDICATED_REVERSE))
                 return;

-        if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED)
+        if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED) {
+                wirq->status &= ~WAKE_IRQ_DEDICATED_ENABLED;
                 disable_irq_nosync(wirq->irq);
+        }
 }

 /**
···

         if (device_may_wakeup(wirq->dev)) {
                 if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED &&
-                    !pm_runtime_status_suspended(wirq->dev))
+                    !(wirq->status & WAKE_IRQ_DEDICATED_ENABLED))
                         enable_irq(wirq->irq);

                 enable_irq_wake(wirq->irq);
···
                 disable_irq_wake(wirq->irq);

                 if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED &&
-                    !pm_runtime_status_suspended(wirq->dev))
+                    !(wirq->status & WAKE_IRQ_DEDICATED_ENABLED))
                         disable_irq_nosync(wirq->irq);
         }
 }
+86-38
drivers/block/rbd.c
···
         list_splice_tail_init(&rbd_dev->acquiring_list, &rbd_dev->running_list);
 }

-static int get_lock_owner_info(struct rbd_device *rbd_dev,
-                               struct ceph_locker **lockers, u32 *num_lockers)
+static bool locker_equal(const struct ceph_locker *lhs,
+                         const struct ceph_locker *rhs)
+{
+        return lhs->id.name.type == rhs->id.name.type &&
+               lhs->id.name.num == rhs->id.name.num &&
+               !strcmp(lhs->id.cookie, rhs->id.cookie) &&
+               ceph_addr_equal_no_type(&lhs->info.addr, &rhs->info.addr);
+}
+
+static void free_locker(struct ceph_locker *locker)
+{
+        if (locker)
+                ceph_free_lockers(locker, 1);
+}
+
+static struct ceph_locker *get_lock_owner_info(struct rbd_device *rbd_dev)
 {
         struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
+        struct ceph_locker *lockers;
+        u32 num_lockers;
         u8 lock_type;
         char *lock_tag;
+        u64 handle;
         int ret;
-
-        dout("%s rbd_dev %p\n", __func__, rbd_dev);

         ret = ceph_cls_lock_info(osdc, &rbd_dev->header_oid,
                                  &rbd_dev->header_oloc, RBD_LOCK_NAME,
-                                 &lock_type, &lock_tag, lockers, num_lockers);
-        if (ret)
-                return ret;
+                                 &lock_type, &lock_tag, &lockers, &num_lockers);
+        if (ret) {
+                rbd_warn(rbd_dev, "failed to retrieve lockers: %d", ret);
+                return ERR_PTR(ret);
+        }

-        if (*num_lockers == 0) {
+        if (num_lockers == 0) {
                 dout("%s rbd_dev %p no lockers detected\n", __func__, rbd_dev);
+                lockers = NULL;
                 goto out;
         }

         if (strcmp(lock_tag, RBD_LOCK_TAG)) {
                 rbd_warn(rbd_dev, "locked by external mechanism, tag %s",
                          lock_tag);
-                ret = -EBUSY;
-                goto out;
+                goto err_busy;
         }

-        if (lock_type == CEPH_CLS_LOCK_SHARED) {
-                rbd_warn(rbd_dev, "shared lock type detected");
-                ret = -EBUSY;
-                goto out;
+        if (lock_type != CEPH_CLS_LOCK_EXCLUSIVE) {
+                rbd_warn(rbd_dev, "incompatible lock type detected");
+                goto err_busy;
         }

-        if (strncmp((*lockers)[0].id.cookie, RBD_LOCK_COOKIE_PREFIX,
-                    strlen(RBD_LOCK_COOKIE_PREFIX))) {
+        WARN_ON(num_lockers != 1);
+        ret = sscanf(lockers[0].id.cookie, RBD_LOCK_COOKIE_PREFIX " %llu",
+                     &handle);
+        if (ret != 1) {
                 rbd_warn(rbd_dev, "locked by external mechanism, cookie %s",
-                         (*lockers)[0].id.cookie);
-                ret = -EBUSY;
-                goto out;
+                         lockers[0].id.cookie);
+                goto err_busy;
         }
+        if (ceph_addr_is_blank(&lockers[0].info.addr)) {
+                rbd_warn(rbd_dev, "locker has a blank address");
+                goto err_busy;
+        }
+
+        dout("%s rbd_dev %p got locker %s%llu@%pISpc/%u handle %llu\n",
+             __func__, rbd_dev, ENTITY_NAME(lockers[0].id.name),
+             &lockers[0].info.addr.in_addr,
+             le32_to_cpu(lockers[0].info.addr.nonce), handle);

 out:
         kfree(lock_tag);
-        return ret;
+        return lockers;
+
+err_busy:
+        kfree(lock_tag);
+        ceph_free_lockers(lockers, num_lockers);
+        return ERR_PTR(-EBUSY);
 }

 static int find_watcher(struct rbd_device *rbd_dev,
···
 static int rbd_try_lock(struct rbd_device *rbd_dev)
 {
         struct ceph_client *client = rbd_dev->rbd_client->client;
-        struct ceph_locker *lockers;
-        u32 num_lockers;
+        struct ceph_locker *locker, *refreshed_locker;
         int ret;

         for (;;) {
+                locker = refreshed_locker = NULL;
+
                 ret = rbd_lock(rbd_dev);
                 if (ret != -EBUSY)
-                        return ret;
+                        goto out;

                 /* determine if the current lock holder is still alive */
-                ret = get_lock_owner_info(rbd_dev, &lockers, &num_lockers);
-                if (ret)
-                        return ret;
-
-                if (num_lockers == 0)
+                locker = get_lock_owner_info(rbd_dev);
+                if (IS_ERR(locker)) {
+                        ret = PTR_ERR(locker);
+                        locker = NULL;
+                        goto out;
+                }
+                if (!locker)
                         goto again;

-                ret = find_watcher(rbd_dev, lockers);
+                ret = find_watcher(rbd_dev, locker);
                 if (ret)
                         goto out; /* request lock or error */

+                refreshed_locker = get_lock_owner_info(rbd_dev);
+                if (IS_ERR(refreshed_locker)) {
+                        ret = PTR_ERR(refreshed_locker);
+                        refreshed_locker = NULL;
+                        goto out;
+                }
+                if (!refreshed_locker ||
+                    !locker_equal(locker, refreshed_locker))
+                        goto again;
+
                 rbd_warn(rbd_dev, "breaking header lock owned by %s%llu",
-                         ENTITY_NAME(lockers[0].id.name));
+                         ENTITY_NAME(locker->id.name));

                 ret = ceph_monc_blocklist_add(&client->monc,
-                                              &lockers[0].info.addr);
+                                              &locker->info.addr);
                 if (ret) {
-                        rbd_warn(rbd_dev, "blocklist of %s%llu failed: %d",
-                                 ENTITY_NAME(lockers[0].id.name), ret);
+                        rbd_warn(rbd_dev, "failed to blocklist %s%llu: %d",
+                                 ENTITY_NAME(locker->id.name), ret);
                         goto out;
                 }

                 ret = ceph_cls_break_lock(&client->osdc, &rbd_dev->header_oid,
                                           &rbd_dev->header_oloc, RBD_LOCK_NAME,
-                                          lockers[0].id.cookie,
-                                          &lockers[0].id.name);
-                if (ret && ret != -ENOENT)
+                                          locker->id.cookie, &locker->id.name);
+                if (ret && ret != -ENOENT) {
+                        rbd_warn(rbd_dev, "failed to break header lock: %d",
+                                 ret);
                         goto out;
+                }

 again:
-                ceph_free_lockers(lockers, num_lockers);
+                free_locker(refreshed_locker);
+                free_locker(locker);
         }

 out:
-        ceph_free_lockers(lockers, num_lockers);
+        free_locker(refreshed_locker);
+        free_locker(locker);
         return ret;
 }
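The rbd change above adds a "re-read and compare" step: the lock owner observed before the slow liveness check (find_watcher) may have changed by the time the client decides to break the lock, so the owner is fetched again and compared field-by-field (locker_equal) first. A Python toy model of that control flow (illustrative names, not the driver's API):

```python
# Sketch of the re-validation pattern rbd_try_lock() now follows.
from dataclasses import dataclass

@dataclass(frozen=True)
class Locker:
    name: str
    cookie: str
    addr: str

def should_break_lock(fetch_owner, is_dead) -> bool:
    """fetch_owner() returns the current Locker or None; is_dead(locker) is
    the slow check during which the owner may legitimately change."""
    locker = fetch_owner()
    if locker is None:
        return False          # lock already released; just retry acquiring
    if not is_dead(locker):
        return False          # owner is alive; wait for it instead
    refreshed = fetch_owner()
    # Break the lock only if the very same owner is still holding it.
    return refreshed is not None and refreshed == locker
```

Without the second fetch, a dead owner's identity could be used to blocklist and evict a new, healthy owner that took the lock in the meantime.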
+7-4
drivers/block/ublk_drv.c
···
         if (ublksrv_pid <= 0)
                 return -EINVAL;

-        wait_for_completion_interruptible(&ub->completion);
+        if (wait_for_completion_interruptible(&ub->completion) != 0)
+                return -EINTR;

         schedule_delayed_work(&ub->monitor_work, UBLK_DAEMON_MONITOR_PERIOD);
···
          * - the device number is freed already, we will not find this
          *   device via ublk_get_device_from_id()
          */
-        wait_event_interruptible(ublk_idr_wq, ublk_idr_freed(idx));
-
+        if (wait_event_interruptible(ublk_idr_wq, ublk_idr_freed(idx)))
+                return -EINTR;
         return 0;
 }
···
         pr_devel("%s: Waiting for new ubq_daemons(nr: %d) are ready, dev id %d...\n",
                  __func__, ub->dev_info.nr_hw_queues, header->dev_id);
         /* wait until new ubq_daemon sending all FETCH_REQ */
-        wait_for_completion_interruptible(&ub->completion);
+        if (wait_for_completion_interruptible(&ub->completion))
+                return -EINTR;
+
         pr_devel("%s: All new ubq_daemons(nr: %d) are ready, dev id %d\n",
                  __func__, ub->dev_info.nr_hw_queues, header->dev_id);
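All three ublk hunks share one fix: an interruptible wait can return early when a signal arrives, and the old code fell through as if the completion had fired. A minimal Python sketch of the corrected control flow (EINTR modeled as errno 4; names are illustrative):

```python
# Toy model of checking an interruptible wait's return value.
EINTR = 4

def start_dev(wait_interruptible) -> int:
    """wait_interruptible() returns 0 on completion and nonzero when
    interrupted, mirroring wait_for_completion_interruptible()."""
    if wait_interruptible() != 0:
        return -EINTR   # bail out; do not touch device state after a signal
    # ... proceed with startup only when the daemon really signalled ...
    return 0
```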
···
 menuconfig CXL_BUS
         tristate "CXL (Compute Express Link) Devices Support"
         depends on PCI
+        select FW_LOADER
+        select FW_UPLOAD
         select PCI_DOE
         help
           CXL is a bus that is electrically compatible with PCI Express, but
···
 config CXL_MEM
         tristate "CXL: Memory Expansion"
         depends on CXL_PCI
-        select FW_UPLOAD
         default CXL_BUS
         help
           The CXL.mem protocol allows a device to act as a provider of "System
+2-3
drivers/cxl/acpi.c
···
         else
                 rc = cxl_decoder_autoremove(dev, cxld);
         if (rc) {
-                dev_err(dev, "Failed to add decode range [%#llx - %#llx]\n",
-                        cxld->hpa_range.start, cxld->hpa_range.end);
-                return 0;
+                dev_err(dev, "Failed to add decode range: %pr", res);
+                return rc;
         }
         dev_dbg(dev, "add: %s node: %d range [%#llx - %#llx]\n",
                 dev_name(&cxld->dev),
+1-1
drivers/cxl/cxlmem.h
···

 /* FW state bits */
 #define CXL_FW_STATE_BITS        32
-#define CXL_FW_CANCEL        BIT(0)
+#define CXL_FW_CANCEL        0

 /**
  * struct cxl_fw_state - Firmware upload / activation state
···
 static int gfxhub_v1_2_xcc_gart_enable(struct amdgpu_device *adev,
                                        uint32_t xcc_mask)
 {
-        uint32_t tmp_mask;
         int i;

-        tmp_mask = xcc_mask;
         /*
          * MC_VM_FB_LOCATION_BASE/TOP is NULL for VF, because they are
          * VF copy registers so vbios post doesn't program them, for
          * SRIOV driver need to program them
          */
         if (amdgpu_sriov_vf(adev)) {
-                for_each_inst(i, tmp_mask) {
-                        i = ffs(tmp_mask) - 1;
+                for_each_inst(i, xcc_mask) {
                         WREG32_SOC15_RLC(GC, GET_INST(GC, i), regMC_VM_FB_LOCATION_BASE,
                                          adev->gmc.vram_start >> 24);
                         WREG32_SOC15_RLC(GC, GET_INST(GC, i), regMC_VM_FB_LOCATION_TOP,
+2-3
drivers/gpu/drm/amd/amdkfd/kfd_debug.c
···
         if (!q)
                 return 0;

-        if (KFD_GC_VERSION(q->device) < IP_VERSION(11, 0, 0) ||
-            KFD_GC_VERSION(q->device) >= IP_VERSION(12, 0, 0))
+        if (!kfd_dbg_has_cwsr_workaround(q->device))
                 return 0;

         if (enable && q->properties.is_user_cu_masked)
···
 {
         uint32_t spi_dbg_cntl = pdd->spi_dbg_override | pdd->spi_dbg_launch_mode;
         uint32_t flags = pdd->process->dbg_flags;
-        bool sq_trap_en = !!spi_dbg_cntl;
+        bool sq_trap_en = !!spi_dbg_cntl || !kfd_dbg_has_cwsr_workaround(pdd->dev);

         if (!kfd_dbg_is_per_vmid_supported(pdd->dev))
                 return 0;
···
                 hws->funcs.edp_backlight_control(edp_link_with_sink, false);
         }
         /*resume from S3, no vbios posting, no need to power down again*/
+        clk_mgr_exit_optimized_pwr_state(dc, dc->clk_mgr);
+
         power_down_all_hw_blocks(dc);
         disable_vga_and_power_gate_all_controllers(dc);
         if (edp_link_with_sink && !keep_edp_vdd_on)
                 dc->hwss.edp_power_control(edp_link_with_sink, false);
+        clk_mgr_optimize_pwr_state(dc, dc->clk_mgr);
         }
         bios_set_scratch_acc_mode_change(dc->ctx->dc_bios, 1);
 }
+2-1
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_dccg.c
···
         struct dcn_dccg *dccg_dcn,
         enum phyd32clk_clock_source src)
 {
-        if (dccg_dcn->base.ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) {
+        if (dccg_dcn->base.ctx->asic_id.chip_family == FAMILY_YELLOW_CARP &&
+            dccg_dcn->base.ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) {
                 if (src == PHYD32CLKC)
                         src = PHYD32CLKF;
                 if (src == PHYD32CLKD)
+4-1
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_dccg.c
···
         uint32_t dispclk_rdivider_value = 0;

         REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_RDIVIDER, &dispclk_rdivider_value);
-        REG_UPDATE(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_WDIVIDER, dispclk_rdivider_value);
+
+        /* Not valid for the WDIVIDER to be set to 0 */
+        if (dispclk_rdivider_value != 0)
+                REG_UPDATE(DENTIST_DISPCLK_CNTL, DENTIST_DISPCLK_WDIVIDER, dispclk_rdivider_value);
 }

 static void dccg32_get_pixel_rate_div(
···
          * times in succession a possibility by enlarging the permutation array.
          */
         order = i915_random_order(count * count, &prng);
-        if (!order)
-                return -ENOMEM;
+        if (!order) {
+                err = -ENOMEM;
+                goto out;
+        }

         max_page_size = rounddown_pow_of_two(obj->mm.page_sizes.sg);
         max = div_u64(max - size, max_page_size);
+1-1
drivers/gpu/drm/msm/adreno/a5xx_gpu.c
···
          * since we've already mapped it once in
          * submit_reloc()
          */
-        if (WARN_ON(!ptr))
+        if (WARN_ON(IS_ERR_OR_NULL(ptr)))
                 return;

         for (i = 0; i < dwords; i++) {
···

 static inline bool adreno_is_revn(const struct adreno_gpu *gpu, uint32_t revn)
 {
-        WARN_ON_ONCE(!gpu->revn);
+        /* revn can be zero, but if not is set at same time as info */
+        WARN_ON_ONCE(!gpu->info);

         return gpu->revn == revn;
 }
···

 static inline bool adreno_is_a2xx(const struct adreno_gpu *gpu)
 {
-        WARN_ON_ONCE(!gpu->revn);
+        /* revn can be zero, but if not is set at same time as info */
+        WARN_ON_ONCE(!gpu->info);

         return (gpu->revn < 300);
 }

 static inline bool adreno_is_a20x(const struct adreno_gpu *gpu)
 {
-        WARN_ON_ONCE(!gpu->revn);
+        /* revn can be zero, but if not is set at same time as info */
+        WARN_ON_ONCE(!gpu->info);

         return (gpu->revn < 210);
 }
···

 static inline int adreno_is_a690(const struct adreno_gpu *gpu)
 {
-        return adreno_is_revn(gpu, 690);
+        /* The order of args is important here to handle ANY_ID correctly */
+        return adreno_cmp_rev(ADRENO_REV(6, 9, 0, ANY_ID), gpu->rev);
 };

 /* check for a615, a616, a618, a619 or any derivatives */
-13
drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.h
···
 #define DPU_PERF_DEFAULT_MAX_CORE_CLK_RATE        412500000

 /**
- * enum dpu_core_perf_data_bus_id - data bus identifier
- * @DPU_CORE_PERF_DATA_BUS_ID_MNOC: DPU/MNOC data bus
- * @DPU_CORE_PERF_DATA_BUS_ID_LLCC: MNOC/LLCC data bus
- * @DPU_CORE_PERF_DATA_BUS_ID_EBI: LLCC/EBI data bus
- */
-enum dpu_core_perf_data_bus_id {
-        DPU_CORE_PERF_DATA_BUS_ID_MNOC,
-        DPU_CORE_PERF_DATA_BUS_ID_LLCC,
-        DPU_CORE_PERF_DATA_BUS_ID_EBI,
-        DPU_CORE_PERF_DATA_BUS_ID_MAX,
-};
-
-/**
  * struct dpu_core_perf_params - definition of performance parameters
  * @max_per_pipe_ib: maximum instantaneous bandwidth request
  * @bw_ctl: arbitrated bandwidth request
+7-1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_ctl.c
···

 static const u32 fetch_tbl[SSPP_MAX] = {CTL_INVALID_BIT, 16, 17, 18, 19,
         CTL_INVALID_BIT, CTL_INVALID_BIT, CTL_INVALID_BIT, CTL_INVALID_BIT, 0,
-        1, 2, 3, CTL_INVALID_BIT, CTL_INVALID_BIT};
+        1, 2, 3, 4, 5};

 static int _mixer_stages(const struct dpu_lm_cfg *mixer, int count,
                          enum dpu_lm lm)
···
                 break;
         case SSPP_DMA3:
                 ctx->pending_flush_mask |= BIT(25);
+                break;
+        case SSPP_DMA4:
+                ctx->pending_flush_mask |= BIT(13);
+                break;
+        case SSPP_DMA5:
+                ctx->pending_flush_mask |= BIT(14);
                 break;
         case SSPP_CURSOR0:
                 ctx->pending_flush_mask |= BIT(22);
···

         f->fctx = fctx;

+        /*
+         * Until this point, the fence was just some pre-allocated memory,
+         * no-one should have taken a reference to it yet.
+         */
+        WARN_ON(kref_read(&fence->refcount));
+
         dma_fence_init(&f->base, &msm_fence_ops, &fctx->spinlock,
                        fctx->context, ++fctx->last_fence);
 }
+14-2
drivers/gpu/drm/msm/msm_gem_submit.c
···
         }

         dma_fence_put(submit->user_fence);
-        dma_fence_put(submit->hw_fence);
+
+        /*
+         * If the submit is freed before msm_job_run(), then hw_fence is
+         * just some pre-allocated memory, not a reference counted fence.
+         * Once the job runs and the hw_fence is initialized, it will
+         * have a refcount of at least one, since the submit holds a ref
+         * to the hw_fence.
+         */
+        if (kref_read(&submit->hw_fence->refcount) == 0) {
+                kfree(submit->hw_fence);
+        } else {
+                dma_fence_put(submit->hw_fence);
+        }

         put_pid(submit->pid);
         msm_submitqueue_put(submit->queue);
···
          * after the job is armed
          */
         if ((args->flags & MSM_SUBMIT_FENCE_SN_IN) &&
-            idr_find(&queue->fence_idr, args->fence)) {
+            (!args->fence || idr_find(&queue->fence_idr, args->fence))) {
                 spin_unlock(&queue->idr_lock);
                 idr_preload_end();
                 ret = -EINVAL;
···
 #define ZEN_CUR_TEMP_RANGE_SEL_MASK        BIT(19)
 #define ZEN_CUR_TEMP_TJ_SEL_MASK           GENMASK(17, 16)

+/*
+ * AMD's Industrial processor 3255 supports temperature from -40 deg to 105 deg Celsius.
+ * Use the model name to identify 3255 CPUs and set a flag to display negative temperature.
+ * Do not round off to zero for negative Tctl or Tdie values if the flag is set
+ */
+#define AMD_I3255_STR                      "3255"
+
 struct k10temp_data {
         struct pci_dev *pdev;
         void (*read_htcreg)(struct pci_dev *pdev, u32 *regval);
···
         u32 show_temp;
         bool is_zen;
         u32 ccd_offset;
+        bool disp_negative;
 };

 #define TCTL_BIT        0
···
         switch (channel) {
         case 0:                /* Tctl */
                 *val = get_raw_temp(data);
-                if (*val < 0)
+                if (*val < 0 && !data->disp_negative)
                         *val = 0;
                 break;
         case 1:                /* Tdie */
                 *val = get_raw_temp(data) - data->temp_offset;
-                if (*val < 0)
+                if (*val < 0 && !data->disp_negative)
                         *val = 0;
                 break;
         case 2 ... 13:        /* Tccd{1-12} */
···

         data->pdev = pdev;
         data->show_temp |= BIT(TCTL_BIT);        /* Always show Tctl */
+
+        if (boot_cpu_data.x86 == 0x17 &&
+            strstr(boot_cpu_data.x86_model_id, AMD_I3255_STR)) {
+                data->disp_negative = true;
+        }

         if (boot_cpu_data.x86 == 0x15 &&
             ((boot_cpu_data.x86_model & 0xf0) == 0x60 ||
···
         int creb;
         int cred;

-        cre6 = sio_data->sio_inb(sio_data, 0xe0);
+        cre6 = sio_data->sio_inb(sio_data, 0xe6);

         sio_data->sio_select(sio_data, NCT6775_LD_12);
         cre0 = sio_data->sio_inb(sio_data, 0xe0);
+1
drivers/hwmon/nct6775.h
···
         u8 bank;                /* current register bank */
         u8 in_num;              /* number of in inputs we have */
         u8 in[15][3];           /* [0]=in, [1]=in_max, [2]=in_min */
+        const u16 *scale_in;    /* internal scaling factors */
         unsigned int rpm[NUM_FAN];
         u16 fan_min[NUM_FAN];
         u8 fan_pulses[NUM_FAN];
···
         if (valid_bit != cq_uk->polarity)
                 return -ENOENT;

+        /* Ensure CQE contents are read after valid bit is checked */
+        dma_rmb();
+
         if (cq->dev->hw_attrs.uk_attrs.hw_rev >= IRDMA_GEN_2)
                 ext_valid = (bool)FIELD_GET(IRDMA_CQ_EXTCQE, qword3);
···
                 polarity ^= 1;
         if (polarity != cq_uk->polarity)
                 return -ENOENT;
+
+        /* Ensure ext CQE contents are read after ext valid bit is checked */
+        dma_rmb();

         IRDMA_RING_MOVE_HEAD_NOCHECK(cq_uk->cq_ring);
         if (!IRDMA_RING_CURRENT_HEAD(cq_uk->cq_ring))
···117117}118118119119/*120120+ * Remove the given object id from the xarray if the only reference to the121121+ * object is held by the xarray. The caller must call ops destroy().122122+ */123123+static struct iommufd_object *iommufd_object_remove(struct iommufd_ctx *ictx,124124+ u32 id, bool extra_put)125125+{126126+ struct iommufd_object *obj;127127+ XA_STATE(xas, &ictx->objects, id);128128+129129+ xa_lock(&ictx->objects);130130+ obj = xas_load(&xas);131131+ if (xa_is_zero(obj) || !obj) {132132+ obj = ERR_PTR(-ENOENT);133133+ goto out_xa;134134+ }135135+136136+ /*137137+ * If the caller is holding a ref on obj we put it here under the138138+ * spinlock.139139+ */140140+ if (extra_put)141141+ refcount_dec(&obj->users);142142+143143+ if (!refcount_dec_if_one(&obj->users)) {144144+ obj = ERR_PTR(-EBUSY);145145+ goto out_xa;146146+ }147147+148148+ xas_store(&xas, NULL);149149+ if (ictx->vfio_ioas == container_of(obj, struct iommufd_ioas, obj))150150+ ictx->vfio_ioas = NULL;151151+152152+out_xa:153153+ xa_unlock(&ictx->objects);154154+155155+ /* The returned object reference count is zero */156156+ return obj;157157+}158158+159159+/*120160 * The caller holds a users refcount and wants to destroy the object. Returns121161 * true if the object was destroyed. 
In all cases the caller no longer has a122162 * reference on obj.123163 */124124-bool iommufd_object_destroy_user(struct iommufd_ctx *ictx,125125- struct iommufd_object *obj)164164+void __iommufd_object_destroy_user(struct iommufd_ctx *ictx,165165+ struct iommufd_object *obj, bool allow_fail)126166{167167+ struct iommufd_object *ret;168168+127169 /*128170 * The purpose of the destroy_rwsem is to ensure deterministic129171 * destruction of objects used by external drivers and destroyed by this···173131 * side of this, such as during ioctl execution.174132 */175133 down_write(&obj->destroy_rwsem);176176- xa_lock(&ictx->objects);177177- refcount_dec(&obj->users);178178- if (!refcount_dec_if_one(&obj->users)) {179179- xa_unlock(&ictx->objects);180180- up_write(&obj->destroy_rwsem);181181- return false;182182- }183183- __xa_erase(&ictx->objects, obj->id);184184- if (ictx->vfio_ioas && &ictx->vfio_ioas->obj == obj)185185- ictx->vfio_ioas = NULL;186186- xa_unlock(&ictx->objects);134134+ ret = iommufd_object_remove(ictx, obj->id, true);187135 up_write(&obj->destroy_rwsem);136136+137137+ if (allow_fail && IS_ERR(ret))138138+ return;139139+140140+ /*141141+ * If there is a bug and we couldn't destroy the object then we did put142142+ * back the caller's refcount and will eventually try to free it again143143+ * during close.144144+ */145145+ if (WARN_ON(IS_ERR(ret)))146146+ return;188147189148 iommufd_object_ops[obj->type].destroy(obj);190149 kfree(obj);191191- return true;192150}193151194152static int iommufd_destroy(struct iommufd_ucmd *ucmd)···196154 struct iommu_destroy *cmd = ucmd->cmd;197155 struct iommufd_object *obj;198156199199- obj = iommufd_get_object(ucmd->ictx, cmd->id, IOMMUFD_OBJ_ANY);157157+ obj = iommufd_object_remove(ucmd->ictx, cmd->id, false);200158 if (IS_ERR(obj))201159 return PTR_ERR(obj);202202- iommufd_ref_to_users(obj);203203- /* See iommufd_ref_to_users() */204204- if (!iommufd_object_destroy_user(ucmd->ictx, obj))205205- return -EBUSY;160160+ 
iommufd_object_ops[obj->type].destroy(obj);161161+ kfree(obj);206162 return 0;207163}208164
+9-11
drivers/md/dm-raid.c
···32513251 r = md_start(&rs->md);32523252 if (r) {32533253 ti->error = "Failed to start raid array";32543254- mddev_unlock(&rs->md);32553255- goto bad_md_start;32543254+ goto bad_unlock;32563255 }3257325632583257 /* If raid4/5/6 journal mode explicitly requested (only possible with journal dev) -> set it */···32593260 r = r5c_journal_mode_set(&rs->md, rs->journal_dev.mode);32603261 if (r) {32613262 ti->error = "Failed to set raid4/5/6 journal mode";32623262- mddev_unlock(&rs->md);32633263- goto bad_journal_mode_set;32633263+ goto bad_unlock;32643264 }32653265 }32663266···32703272 if (rs_is_raid456(rs)) {32713273 r = rs_set_raid456_stripe_cache(rs);32723274 if (r)32733273- goto bad_stripe_cache;32753275+ goto bad_unlock;32743276 }3275327732763278 /* Now do an early reshape check */32773279 if (test_bit(RT_FLAG_RESHAPE_RS, &rs->runtime_flags)) {32783280 r = rs_check_reshape(rs);32793281 if (r)32803280- goto bad_check_reshape;32823282+ goto bad_unlock;3281328332823284 /* Restore new, ctr requested layout to perform check */32833285 rs_config_restore(rs, &rs_layout);···32863288 r = rs->md.pers->check_reshape(&rs->md);32873289 if (r) {32883290 ti->error = "Reshape check failed";32893289- goto bad_check_reshape;32913291+ goto bad_unlock;32903292 }32913293 }32923294 }···32973299 mddev_unlock(&rs->md);32983300 return 0;3299330133003300-bad_md_start:33013301-bad_journal_mode_set:33023302-bad_stripe_cache:33033303-bad_check_reshape:33023302+bad_unlock:33043303 md_stop(&rs->md);33043304+ mddev_unlock(&rs->md);33053305bad:33063306 raid_set_free(rs);33073307···33103314{33113315 struct raid_set *rs = ti->private;3312331633173317+ mddev_lock_nointr(&rs->md);33133318 md_stop(&rs->md);33193319+ mddev_unlock(&rs->md);33143320 raid_set_free(rs);33153321}33163322
+2
drivers/md/md.c
···6247624762486248void md_stop(struct mddev *mddev)62496249{62506250+ lockdep_assert_held(&mddev->reconfig_mutex);62516251+62506252 /* stop the array and free an attached data structures.62516253 * This is called from dm-raid62526254 */
+5
drivers/net/bonding/bond_main.c
···1508150815091509 memcpy(bond_dev->broadcast, slave_dev->broadcast,15101510 slave_dev->addr_len);15111511+15121512+ if (slave_dev->flags & IFF_POINTOPOINT) {15131513+ bond_dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);15141514+ bond_dev->flags |= (IFF_POINTOPOINT | IFF_NOARP);15151515+ }15111516}1512151715131518/* On bonding slaves other than the currently active slave, suppress
+2
drivers/net/can/usb/gs_usb.c
···10301030 usb_kill_anchored_urbs(&dev->tx_submitted);10311031 atomic_set(&dev->active_tx_urbs, 0);1032103210331033+ dev->can.state = CAN_STATE_STOPPED;10341034+10331035 /* reset the device */10341036 rc = gs_cmd_reset(dev);10351037 if (rc < 0)
+5-2
drivers/net/dsa/qca/qca8k-8xxx.c
···576576 .rd_table = &qca8k_readable_table,577577 .disable_locking = true, /* Locking is handled by qca8k read/write */578578 .cache_type = REGCACHE_NONE, /* Explicitly disable CACHE */579579- .max_raw_read = 32, /* mgmt eth can read/write up to 8 registers at time */580580- .max_raw_write = 32,579579+ .max_raw_read = 32, /* mgmt eth can read up to 8 registers at time */580580+ /* ATU regs suffer from a bug where some data are not correctly581581+ * written. Disable bulk write to correctly write ATU entry.582582+ */583583+ .use_single_write = true,581584};582585583586static int
+16-3
drivers/net/dsa/qca/qca8k-common.c
···244244}245245246246static int qca8k_fdb_search_and_insert(struct qca8k_priv *priv, u8 port_mask,247247- const u8 *mac, u16 vid)247247+ const u8 *mac, u16 vid, u8 aging)248248{249249 struct qca8k_fdb fdb = { 0 };250250 int ret;···261261 goto exit;262262263263 /* Rule exist. Delete first */264264- if (!fdb.aging) {264264+ if (fdb.aging) {265265 ret = qca8k_fdb_access(priv, QCA8K_FDB_PURGE, -1);266266 if (ret)267267 goto exit;268268+ } else {269269+ fdb.aging = aging;268270 }269271270272 /* Add port to fdb portmask */···290288291289 qca8k_fdb_write(priv, vid, 0, mac, 0);292290 ret = qca8k_fdb_access(priv, QCA8K_FDB_SEARCH, -1);291291+ if (ret < 0)292292+ goto exit;293293+294294+ ret = qca8k_fdb_read(priv, &fdb);293295 if (ret < 0)294296 goto exit;295297···816810 const u8 *addr = mdb->addr;817811 u16 vid = mdb->vid;818812819819- return qca8k_fdb_search_and_insert(priv, BIT(port), addr, vid);813813+ if (!vid)814814+ vid = QCA8K_PORT_VID_DEF;815815+816816+ return qca8k_fdb_search_and_insert(priv, BIT(port), addr, vid,817817+ QCA8K_ATU_STATUS_STATIC);820818}821819822820int qca8k_port_mdb_del(struct dsa_switch *ds, int port,···830820 struct qca8k_priv *priv = ds->priv;831821 const u8 *addr = mdb->addr;832822 u16 vid = mdb->vid;823823+824824+ if (!vid)825825+ vid = QCA8K_PORT_VID_DEF;833826834827 return qca8k_fdb_search_and_del(priv, BIT(port), addr, vid);835828}
+5-2
drivers/net/ethernet/atheros/atl1c/atl1c_main.c
···20942094 real_len = (((unsigned char *)ip_hdr(skb) - skb->data)20952095 + ntohs(ip_hdr(skb)->tot_len));2096209620972097- if (real_len < skb->len)20982098- pskb_trim(skb, real_len);20972097+ if (real_len < skb->len) {20982098+ err = pskb_trim(skb, real_len);20992099+ if (err)21002100+ return err;21012101+ }2099210221002103 hdr_len = skb_tcp_all_headers(skb);21012104 if (unlikely(skb->len == hdr_len)) {
+5-2
drivers/net/ethernet/atheros/atl1e/atl1e_main.c
···16411641 real_len = (((unsigned char *)ip_hdr(skb) - skb->data)16421642 + ntohs(ip_hdr(skb)->tot_len));1643164316441644- if (real_len < skb->len)16451645- pskb_trim(skb, real_len);16441644+ if (real_len < skb->len) {16451645+ err = pskb_trim(skb, real_len);16461646+ if (err)16471647+ return err;16481648+ }1646164916471650 hdr_len = skb_tcp_all_headers(skb);16481651 if (unlikely(skb->len == hdr_len)) {
···11381138 (lancer_chip(adapter) || BE3_chip(adapter) ||11391139 skb_vlan_tag_present(skb)) && is_ipv4_pkt(skb)) {11401140 ip = (struct iphdr *)ip_hdr(skb);11411141- pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len));11411141+ if (unlikely(pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len))))11421142+ goto tx_drop;11421143 }1143114411441145 /* If vlan tag is already inlined in the packet, skip HW VLAN
+14-4
drivers/net/ethernet/freescale/fec_main.c
···13721372}1373137313741374static void13751375-fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)13751375+fec_enet_tx_queue(struct net_device *ndev, u16 queue_id, int budget)13761376{13771377 struct fec_enet_private *fep;13781378 struct xdp_frame *xdpf;···14161416 if (!skb)14171417 goto tx_buf_done;14181418 } else {14191419+ /* Tx processing cannot call any XDP (or page pool) APIs if14201420+ * the "budget" is 0. Because NAPI is called with budget of14211421+ * 0 (such as netpoll) indicates we may be in an IRQ context,14221422+ * however, we can't use the page pool from IRQ context.14231423+ */14241424+ if (unlikely(!budget))14251425+ break;14261426+14191427 xdpf = txq->tx_buf[index].xdp;14201428 if (bdp->cbd_bufaddr)14211429 dma_unmap_single(&fep->pdev->dev,···15161508 writel(0, txq->bd.reg_desc_active);15171509}1518151015191519-static void fec_enet_tx(struct net_device *ndev)15111511+static void fec_enet_tx(struct net_device *ndev, int budget)15201512{15211513 struct fec_enet_private *fep = netdev_priv(ndev);15221514 int i;1523151515241516 /* Make sure that AVB queues are processed first. */15251517 for (i = fep->num_tx_queues - 1; i >= 0; i--)15261526- fec_enet_tx_queue(ndev, i);15181518+ fec_enet_tx_queue(ndev, i, budget);15271519}1528152015291521static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,···1866185818671859 do {18681860 done += fec_enet_rx(ndev, budget - done);18691869- fec_enet_tx(ndev);18611861+ fec_enet_tx(ndev, budget);18701862 } while ((done < budget) && fec_enet_collect_events(fep));1871186318721864 if (done < budget) {···3924391639253917 __netif_tx_lock(nq, cpu);3926391839193919+ /* Avoid tx timeout as XDP shares the queue with kernel stack */39203920+ txq_trans_cond_update(nq);39273921 for (i = 0; i < num_frames; i++) {39283922 if (fec_enet_txq_xmit_frame(fep, txq, frames[i]) < 0)39293923 break;
+43-8
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_dcb.c
···52525353 for (i = 0; i < HNAE3_MAX_TC; i++) {5454 ets->prio_tc[i] = hdev->tm_info.prio_tc[i];5555- ets->tc_tx_bw[i] = hdev->tm_info.pg_info[0].tc_dwrr[i];5555+ if (i < hdev->tm_info.num_tc)5656+ ets->tc_tx_bw[i] = hdev->tm_info.pg_info[0].tc_dwrr[i];5757+ else5858+ ets->tc_tx_bw[i] = 0;56595760 if (hdev->tm_info.tc_info[i].tc_sch_mode ==5861 HCLGE_SCH_MODE_SP)···126123}127124128125static int hclge_ets_sch_mode_validate(struct hclge_dev *hdev,129129- struct ieee_ets *ets, bool *changed)126126+ struct ieee_ets *ets, bool *changed,127127+ u8 tc_num)130128{131129 bool has_ets_tc = false;132130 u32 total_ets_bw = 0;···141137 *changed = true;142138 break;143139 case IEEE_8021QAZ_TSA_ETS:140140+ if (i >= tc_num) {141141+ dev_err(&hdev->pdev->dev,142142+ "tc%u is disabled, cannot set ets bw\n",143143+ i);144144+ return -EINVAL;145145+ }146146+144147 /* The hardware will switch to sp mode if bandwidth is145148 * 0, so limit ets bandwidth must be greater than 0.146149 */···187176 if (ret)188177 return ret;189178190190- ret = hclge_ets_sch_mode_validate(hdev, ets, changed);179179+ ret = hclge_ets_sch_mode_validate(hdev, ets, changed, tc_num);191180 if (ret)192181 return ret;193182···227216 if (ret)228217 return ret;229218219219+ ret = hclge_tm_flush_cfg(hdev, true);220220+ if (ret)221221+ return ret;222222+230223 return hclge_notify_client(hdev, HNAE3_UNINIT_CLIENT);231224}232225···239224 int ret;240225241226 ret = hclge_notify_client(hdev, HNAE3_INIT_CLIENT);227227+ if (ret)228228+ return ret;229229+230230+ ret = hclge_tm_flush_cfg(hdev, false);242231 if (ret)243232 return ret;244233···332313 struct net_device *netdev = h->kinfo.netdev;333314 struct hclge_dev *hdev = vport->back;334315 u8 i, j, pfc_map, *prio_tc;316316+ int last_bad_ret = 0;335317 int ret;336318337319 if (!(hdev->dcbx_cap & DCB_CAP_DCBX_VER_IEEE))···370350 if (ret)371351 return ret;372352373373- ret = hclge_buffer_alloc(hdev);374374- if (ret) {375375- hclge_notify_client(hdev, HNAE3_UP_CLIENT);353353+ ret 
= hclge_tm_flush_cfg(hdev, true);354354+ if (ret)376355 return ret;377377- }378356379379- return hclge_notify_client(hdev, HNAE3_UP_CLIENT);357357+ /* No matter whether the following operations are performed358358+ * successfully or not, disabling the tm flush and notify359359+ * the network status to up are necessary.360360+ * Do not return immediately.361361+ */362362+ ret = hclge_buffer_alloc(hdev);363363+ if (ret)364364+ last_bad_ret = ret;365365+366366+ ret = hclge_tm_flush_cfg(hdev, false);367367+ if (ret)368368+ last_bad_ret = ret;369369+370370+ ret = hclge_notify_client(hdev, HNAE3_UP_CLIENT);371371+ if (ret)372372+ last_bad_ret = ret;373373+374374+ return last_bad_ret;380375}381376382377static int hclge_ieee_setapp(struct hnae3_handle *h, struct dcb_app *app)
+1-1
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
···18391839void i40e_dbg_init(void)18401840{18411841 i40e_dbg_root = debugfs_create_dir(i40e_driver_name, NULL);18421842- if (!i40e_dbg_root)18421842+ if (IS_ERR(i40e_dbg_root))18431843 pr_info("init of debugfs failed\n");18441844}18451845
+6-5
drivers/net/ethernet/intel/iavf/iavf_main.c
···32503250 u32 val, oldval;32513251 u16 pending;3252325232533253- if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)32543254- goto out;32553255-32563253 if (!mutex_trylock(&adapter->crit_lock)) {32573254 if (adapter->state == __IAVF_REMOVE)32583255 return;···32583261 goto out;32593262 }3260326332643264+ if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)32653265+ goto unlock;32663266+32613267 event.buf_len = IAVF_MAX_AQ_BUF_SIZE;32623268 event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);32633269 if (!event.msg_buf)32643264- goto out;32703270+ goto unlock;3265327132663272 do {32673273 ret = iavf_clean_arq_element(hw, &event, &pending);···32793279 if (pending != 0)32803280 memset(event.msg_buf, 0, IAVF_MAX_AQ_BUF_SIZE);32813281 } while (pending);32823282- mutex_unlock(&adapter->crit_lock);3283328232843283 if (iavf_is_reset_in_progress(adapter))32853284 goto freedom;···3322332333233324freedom:33243325 kfree(event.msg_buf);33263326+unlock:33273327+ mutex_unlock(&adapter->crit_lock);33253328out:33263329 /* re-enable Admin queue interrupt cause */33273330 iavf_misc_irq_enable(adapter);
+14-12
drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
···12811281 ICE_FLOW_FLD_OFF_INVAL);12821282 }1283128312841284- /* add filter for outer headers */12851284 fltr_idx = ice_ethtool_flow_to_fltr(fsp->flow_type & ~FLOW_EXT);12851285+12861286+ assign_bit(fltr_idx, hw->fdir_perfect_fltr, perfect_filter);12871287+12881288+ /* add filter for outer headers */12861289 ret = ice_fdir_set_hw_fltr_rule(pf, seg, fltr_idx,12871290 ICE_FD_HW_SEG_NON_TUN);12881288- if (ret == -EEXIST)12891289- /* Rule already exists, free memory and continue */12901290- devm_kfree(dev, seg);12911291- else if (ret)12911291+ if (ret == -EEXIST) {12921292+ /* Rule already exists, free memory and count as success */12931293+ ret = 0;12941294+ goto err_exit;12951295+ } else if (ret) {12921296 /* could not write filter, free memory */12931297 goto err_exit;12981298+ }1294129912951300 /* make tunneled filter HW entries if possible */12961301 memcpy(&tun_seg[1], seg, sizeof(*seg));···13101305 devm_kfree(dev, tun_seg);13111306 }1312130713131313- if (perfect_filter)13141314- set_bit(fltr_idx, hw->fdir_perfect_fltr);13151315- else13161316- clear_bit(fltr_idx, hw->fdir_perfect_fltr);13171317-13181308 return ret;1319130913201310err_exit:13211311 devm_kfree(dev, tun_seg);13221312 devm_kfree(dev, seg);1323131313241324- return -EOPNOTSUPP;13141314+ return ret;13251315}1326131613271317/**···19141914 input->comp_report = ICE_FXD_FLTR_QW0_COMP_REPORT_SW_FAIL;1915191519161916 /* input struct is added to the HW filter list */19171917- ice_fdir_update_list_entry(pf, input, fsp->location);19171917+ ret = ice_fdir_update_list_entry(pf, input, fsp->location);19181918+ if (ret)19191919+ goto release_lock;1918192019191921 ret = ice_fdir_write_all_fltr(pf, input, true);19201922 if (ret)
+28-12
drivers/net/ethernet/intel/igc/igc_main.c
···316316 igc_clean_tx_ring(adapter->tx_ring[i]);317317}318318319319+static void igc_disable_tx_ring_hw(struct igc_ring *ring)320320+{321321+ struct igc_hw *hw = &ring->q_vector->adapter->hw;322322+ u8 idx = ring->reg_idx;323323+ u32 txdctl;324324+325325+ txdctl = rd32(IGC_TXDCTL(idx));326326+ txdctl &= ~IGC_TXDCTL_QUEUE_ENABLE;327327+ txdctl |= IGC_TXDCTL_SWFLUSH;328328+ wr32(IGC_TXDCTL(idx), txdctl);329329+}330330+331331+/**332332+ * igc_disable_all_tx_rings_hw - Disable all transmit queue operation333333+ * @adapter: board private structure334334+ */335335+static void igc_disable_all_tx_rings_hw(struct igc_adapter *adapter)336336+{337337+ int i;338338+339339+ for (i = 0; i < adapter->num_tx_queues; i++) {340340+ struct igc_ring *tx_ring = adapter->tx_ring[i];341341+342342+ igc_disable_tx_ring_hw(tx_ring);343343+ }344344+}345345+319346/**320347 * igc_setup_tx_resources - allocate Tx resources (Descriptors)321348 * @tx_ring: tx descriptor ring (for a specific queue) to setup···50855058 /* clear VLAN promisc flag so VFTA will be updated if necessary */50865059 adapter->flags &= ~IGC_FLAG_VLAN_PROMISC;5087506050615061+ igc_disable_all_tx_rings_hw(adapter);50885062 igc_clean_all_tx_rings(adapter);50895063 igc_clean_all_rx_rings(adapter);50905064}···73167288 igc_alloc_rx_buffers_zc(ring, igc_desc_unused(ring));73177289 else73187290 igc_alloc_rx_buffers(ring, igc_desc_unused(ring));73197319-}73207320-73217321-static void igc_disable_tx_ring_hw(struct igc_ring *ring)73227322-{73237323- struct igc_hw *hw = &ring->q_vector->adapter->hw;73247324- u8 idx = ring->reg_idx;73257325- u32 txdctl;73267326-73277327- txdctl = rd32(IGC_TXDCTL(idx));73287328- txdctl &= ~IGC_TXDCTL_QUEUE_ENABLE;73297329- txdctl |= IGC_TXDCTL_SWFLUSH;73307330- wr32(IGC_TXDCTL(idx), txdctl);73317291}7332729273337293void igc_disable_tx_ring(struct igc_ring *ring)
+1-1
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
···84798479 struct ixgbe_adapter *adapter = q_vector->adapter;8480848084818481 if (unlikely(skb_tail_pointer(skb) < hdr.network +84828482- VXLAN_HEADROOM))84828482+ vxlan_headroom(0)))84838483 return;8484848484858485 /* verify the port is recognized as VXLAN */
+42-1
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_hash.c
···218218219219void npc_program_mkex_hash(struct rvu *rvu, int blkaddr)220220{221221+ struct npc_mcam_kex_hash *mh = rvu->kpu.mkex_hash;221222 struct hw_cap *hwcap = &rvu->hw->cap;223223+ u8 intf, ld, hdr_offset, byte_len;222224 struct rvu_hwinfo *hw = rvu->hw;223223- u8 intf;225225+ u64 cfg;224226227227+ /* Check if hardware supports hash extraction */225228 if (!hwcap->npc_hash_extract)226229 return;227230231231+ /* Check if IPv6 source/destination address232232+ * should be hash enabled.233233+ * Hashing reduces 128bit SIP/DIP fields to 32bit234234+ * so that 224 bit X2 key can be used for IPv6 based filters as well,235235+ * which in turn results in more number of MCAM entries available for236236+ * use.237237+ *238238+ * Hashing of IPV6 SIP/DIP is enabled in below scenarios239239+ * 1. If the silicon variant supports hashing feature240240+ * 2. If the number of bytes of IP addr being extracted is 4 bytes ie241241+ * 32bit. The assumption here is that if user wants 8bytes of LSB of242242+ * IP addr or full 16 bytes then his intention is not to use 32bit243243+ * hash.244244+ */245245+ for (intf = 0; intf < hw->npc_intfs; intf++) {246246+ for (ld = 0; ld < NPC_MAX_LD; ld++) {247247+ cfg = rvu_read64(rvu, blkaddr,248248+ NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf,249249+ NPC_LID_LC,250250+ NPC_LT_LC_IP6,251251+ ld));252252+ hdr_offset = FIELD_GET(NPC_HDR_OFFSET, cfg);253253+ byte_len = FIELD_GET(NPC_BYTESM, cfg);254254+ /* Hashing of IPv6 source/destination address should be255255+ * enabled if,256256+ * hdr_offset == 8 (offset of source IPv6 address) or257257+ * hdr_offset == 24 (offset of destination IPv6)258258+ * address) and the number of byte to be259259+ * extracted is 4. 
As per hardware configuration260260+ * byte_len should be == actual byte_len - 1.261261+ * Hence byte_len is checked against 3 but not 4.262262+ */263263+ if ((hdr_offset == 8 || hdr_offset == 24) && byte_len == 3)264264+ mh->lid_lt_ld_hash_en[intf][NPC_LID_LC][NPC_LT_LC_IP6][ld] = true;265265+ }266266+ }267267+268268+ /* Update hash configuration if the field is hash enabled */228269 for (intf = 0; intf < hw->npc_intfs; intf++) {229270 npc_program_mkex_hash_rx(rvu, blkaddr, intf);230271 npc_program_mkex_hash_tx(rvu, blkaddr, intf);
+7
drivers/net/phy/marvell10g.c
···328328 ret = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND2, MV_V2_PORT_CTRL,329329 MV_V2_PORT_CTRL_PWRDOWN);330330331331+ /* Sometimes, the power down bit doesn't clear immediately, and332332+ * a read of this register causes the bit not to clear. Delay333333+ * 100us to allow the PHY to come out of power down mode before334334+ * the next access.335335+ */336336+ udelay(100);337337+331338 if (phydev->drv->phy_id != MARVELL_PHY_ID_88X3310 ||332339 priv->firmware_ver < 0x00030000)333340 return ret;
+2-2
drivers/net/virtio_net.c
···42194219 if (vi->has_rss || vi->has_rss_hash_report)42204220 virtnet_init_default_rss(vi);4221422142224222+ _virtnet_set_queues(vi, vi->curr_queue_pairs);42234223+42224224 /* serialize netdev register + virtio_device_ready() with ndo_open() */42234225 rtnl_lock();42244226···42584256 pr_debug("virtio_net: registering cpu notifier failed\n");42594257 goto free_unregister_netdev;42604258 }42614261-42624262- virtnet_set_queues(vi, vi->curr_queue_pairs);4263425942644260 /* Assume link up if device can't report link status,42654261 otherwise get link status from config. */
+107-58
drivers/net/vxlan/vxlan_core.c
···623623 return 1;624624}625625626626+static bool vxlan_parse_gpe_proto(struct vxlanhdr *hdr, __be16 *protocol)627627+{628628+ struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)hdr;629629+630630+ /* Need to have Next Protocol set for interfaces in GPE mode. */631631+ if (!gpe->np_applied)632632+ return false;633633+ /* "The initial version is 0. If a receiver does not support the634634+ * version indicated it MUST drop the packet.635635+ */636636+ if (gpe->version != 0)637637+ return false;638638+ /* "When the O bit is set to 1, the packet is an OAM packet and OAM639639+ * processing MUST occur." However, we don't implement OAM640640+ * processing, thus drop the packet.641641+ */642642+ if (gpe->oam_flag)643643+ return false;644644+645645+ *protocol = tun_p_to_eth_p(gpe->next_protocol);646646+ if (!*protocol)647647+ return false;648648+649649+ return true;650650+}651651+626652static struct vxlanhdr *vxlan_gro_remcsum(struct sk_buff *skb,627653 unsigned int off,628654 struct vxlanhdr *vh, size_t hdrlen,···675649 return vh;676650}677651678678-static struct sk_buff *vxlan_gro_receive(struct sock *sk,679679- struct list_head *head,680680- struct sk_buff *skb)652652+static struct vxlanhdr *vxlan_gro_prepare_receive(struct sock *sk,653653+ struct list_head *head,654654+ struct sk_buff *skb,655655+ struct gro_remcsum *grc)681656{682682- struct sk_buff *pp = NULL;683657 struct sk_buff *p;684658 struct vxlanhdr *vh, *vh2;685659 unsigned int hlen, off_vx;686686- int flush = 1;687660 struct vxlan_sock *vs = rcu_dereference_sk_user_data(sk);688661 __be32 flags;689689- struct gro_remcsum grc;690662691691- skb_gro_remcsum_init(&grc);663663+ skb_gro_remcsum_init(grc);692664693665 off_vx = skb_gro_offset(skb);694666 hlen = off_vx + sizeof(*vh);695667 vh = skb_gro_header(skb, hlen, off_vx);696668 if (unlikely(!vh))697697- goto out;669669+ return NULL;698670699671 skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr));700672···700676701677 if ((flags & VXLAN_HF_RCO) && (vs->flags 
& VXLAN_F_REMCSUM_RX)) {702678 vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),703703- vh->vx_vni, &grc,679679+ vh->vx_vni, grc,704680 !!(vs->flags &705681 VXLAN_F_REMCSUM_NOPARTIAL));706682707683 if (!vh)708708- goto out;684684+ return NULL;709685 }710686711687 skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */···722698 }723699 }724700725725- pp = call_gro_receive(eth_gro_receive, head, skb);726726- flush = 0;701701+ return vh;702702+}727703704704+static struct sk_buff *vxlan_gro_receive(struct sock *sk,705705+ struct list_head *head,706706+ struct sk_buff *skb)707707+{708708+ struct sk_buff *pp = NULL;709709+ struct gro_remcsum grc;710710+ int flush = 1;711711+712712+ if (vxlan_gro_prepare_receive(sk, head, skb, &grc)) {713713+ pp = call_gro_receive(eth_gro_receive, head, skb);714714+ flush = 0;715715+ }716716+ skb_gro_flush_final_remcsum(skb, pp, flush, &grc);717717+ return pp;718718+}719719+720720+static struct sk_buff *vxlan_gpe_gro_receive(struct sock *sk,721721+ struct list_head *head,722722+ struct sk_buff *skb)723723+{724724+ const struct packet_offload *ptype;725725+ struct sk_buff *pp = NULL;726726+ struct gro_remcsum grc;727727+ struct vxlanhdr *vh;728728+ __be16 protocol;729729+ int flush = 1;730730+731731+ vh = vxlan_gro_prepare_receive(sk, head, skb, &grc);732732+ if (vh) {733733+ if (!vxlan_parse_gpe_proto(vh, &protocol))734734+ goto out;735735+ ptype = gro_find_receive_by_type(protocol);736736+ if (!ptype)737737+ goto out;738738+ pp = call_gro_receive(ptype->callbacks.gro_receive, head, skb);739739+ flush = 0;740740+ }728741out:729742 skb_gro_flush_final_remcsum(skb, pp, flush, &grc);730730-731743 return pp;732744}733745···773713 * 'skb->encapsulation' set.774714 */775715 return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr));716716+}717717+718718+static int vxlan_gpe_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)719719+{720720+ struct vxlanhdr *vh = (struct vxlanhdr *)(skb->data + 
nhoff);721721+ const struct packet_offload *ptype;722722+ int err = -ENOSYS;723723+ __be16 protocol;724724+725725+ if (!vxlan_parse_gpe_proto(vh, &protocol))726726+ return err;727727+ ptype = gro_find_complete_by_type(protocol);728728+ if (ptype)729729+ err = ptype->callbacks.gro_complete(skb, nhoff + sizeof(struct vxlanhdr));730730+ return err;776731}777732778733static struct vxlan_fdb *vxlan_fdb_alloc(struct vxlan_dev *vxlan, const u8 *mac,···16001525 unparsed->vx_flags &= ~VXLAN_GBP_USED_BITS;16011526}1602152716031603-static bool vxlan_parse_gpe_hdr(struct vxlanhdr *unparsed,16041604- __be16 *protocol,16051605- struct sk_buff *skb, u32 vxflags)16061606-{16071607- struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)unparsed;16081608-16091609- /* Need to have Next Protocol set for interfaces in GPE mode. */16101610- if (!gpe->np_applied)16111611- return false;16121612- /* "The initial version is 0. If a receiver does not support the16131613- * version indicated it MUST drop the packet.16141614- */16151615- if (gpe->version != 0)16161616- return false;16171617- /* "When the O bit is set to 1, the packet is an OAM packet and OAM16181618- * processing MUST occur." 
However, we don't implement OAM16191619- * processing, thus drop the packet.16201620- */16211621- if (gpe->oam_flag)16221622- return false;16231623-16241624- *protocol = tun_p_to_eth_p(gpe->next_protocol);16251625- if (!*protocol)16261626- return false;16271627-16281628- unparsed->vx_flags &= ~VXLAN_GPE_USED_BITS;16291629- return true;16301630-}16311631-16321528static bool vxlan_set_mac(struct vxlan_dev *vxlan,16331529 struct vxlan_sock *vs,16341530 struct sk_buff *skb, __be32 vni)···17011655 * used by VXLAN extensions if explicitly requested.17021656 */17031657 if (vs->flags & VXLAN_F_GPE) {17041704- if (!vxlan_parse_gpe_hdr(&unparsed, &protocol, skb, vs->flags))16581658+ if (!vxlan_parse_gpe_proto(&unparsed, &protocol))17051659 goto drop;16601660+ unparsed.vx_flags &= ~VXLAN_GPE_USED_BITS;17061661 raw_proto = true;17071662 }17081663···25632516 }2564251725652518 ndst = &rt->dst;25662566- err = skb_tunnel_check_pmtu(skb, ndst, VXLAN_HEADROOM,25192519+ err = skb_tunnel_check_pmtu(skb, ndst, vxlan_headroom(flags & VXLAN_F_GPE),25672520 netif_is_any_bridge_port(dev));25682521 if (err < 0) {25692522 goto tx_error;···26242577 goto out_unlock;26252578 }2626257926272627- err = skb_tunnel_check_pmtu(skb, ndst, VXLAN6_HEADROOM,25802580+ err = skb_tunnel_check_pmtu(skb, ndst,25812581+ vxlan_headroom((flags & VXLAN_F_GPE) | VXLAN_F_IPV6),26282582 netif_is_any_bridge_port(dev));26292583 if (err < 0) {26302584 goto tx_error;···30372989 struct vxlan_rdst *dst = &vxlan->default_dst;30382990 struct net_device *lowerdev = __dev_get_by_index(vxlan->net,30392991 dst->remote_ifindex);30403040- bool use_ipv6 = !!(vxlan->cfg.flags & VXLAN_F_IPV6);3041299230422993 /* This check is different than dev->max_mtu, because it looks at30432994 * the lowerdev->mtu, rather than the static dev->max_mtu30442995 */30452996 if (lowerdev) {30463046- int max_mtu = lowerdev->mtu -30473047- (use_ipv6 ? 
VXLAN6_HEADROOM : VXLAN_HEADROOM);29972997+ int max_mtu = lowerdev->mtu - vxlan_headroom(vxlan->cfg.flags);30482998 if (new_mtu > max_mtu)30492999 return -EINVAL;30503000 }···34253379 tunnel_cfg.encap_rcv = vxlan_rcv;34263380 tunnel_cfg.encap_err_lookup = vxlan_err_lookup;34273381 tunnel_cfg.encap_destroy = NULL;34283428- tunnel_cfg.gro_receive = vxlan_gro_receive;34293429- tunnel_cfg.gro_complete = vxlan_gro_complete;33823382+ if (vs->flags & VXLAN_F_GPE) {33833383+ tunnel_cfg.gro_receive = vxlan_gpe_gro_receive;33843384+ tunnel_cfg.gro_complete = vxlan_gpe_gro_complete;33853385+ } else {33863386+ tunnel_cfg.gro_receive = vxlan_gro_receive;33873387+ tunnel_cfg.gro_complete = vxlan_gro_complete;33883388+ }3430338934313390 setup_udp_tunnel_sock(net, sock, &tunnel_cfg);34323391···36953644 struct vxlan_dev *vxlan = netdev_priv(dev);36963645 struct vxlan_rdst *dst = &vxlan->default_dst;36973646 unsigned short needed_headroom = ETH_HLEN;36983698- bool use_ipv6 = !!(conf->flags & VXLAN_F_IPV6);36993647 int max_mtu = ETH_MAX_MTU;36483648+ u32 flags = conf->flags;3700364937013650 if (!changelink) {37023702- if (conf->flags & VXLAN_F_GPE)36513651+ if (flags & VXLAN_F_GPE)37033652 vxlan_raw_setup(dev);37043653 else37053654 vxlan_ether_setup(dev);···3724367337253674 dev->needed_tailroom = lowerdev->needed_tailroom;3726367537273727- max_mtu = lowerdev->mtu - (use_ipv6 ? 
VXLAN6_HEADROOM :37283728- VXLAN_HEADROOM);36763676+ max_mtu = lowerdev->mtu - vxlan_headroom(flags);37293677 if (max_mtu < ETH_MIN_MTU)37303678 max_mtu = ETH_MIN_MTU;37313679···37353685 if (dev->mtu > max_mtu)37363686 dev->mtu = max_mtu;3737368737383738- if (use_ipv6 || conf->flags & VXLAN_F_COLLECT_METADATA)37393739- needed_headroom += VXLAN6_HEADROOM;37403740- else37413741- needed_headroom += VXLAN_HEADROOM;36883688+ if (flags & VXLAN_F_COLLECT_METADATA)36893689+ flags |= VXLAN_F_IPV6;36903690+ needed_headroom += vxlan_headroom(flags);37423691 dev->needed_headroom = needed_headroom;3743369237443693 memcpy(&vxlan->cfg, conf, sizeof(*conf));
+1-1
drivers/phy/hisilicon/phy-hisi-inno-usb2.c
···184184 phy_set_drvdata(phy, &priv->ports[i]);185185 i++;186186187187- if (i > INNO_PHY_PORT_NUM) {187187+ if (i >= INNO_PHY_PORT_NUM) {188188 dev_warn(dev, "Support %d ports in maximum\n", i);189189 of_node_put(child);190190 break;
+1-1
drivers/phy/mediatek/phy-mtk-dp.c
···169169170170 regs = *(struct regmap **)dev->platform_data;171171 if (!regs)172172- return dev_err_probe(dev, EINVAL,172172+ return dev_err_probe(dev, -EINVAL,173173 "No data passed, requires struct regmap**\n");174174175175 dp_phy = devm_kzalloc(dev, sizeof(*dp_phy), GFP_KERNEL);
+1-1
drivers/phy/mediatek/phy-mtk-hdmi-mt8195.c
···253253 for (i = 0; i < ARRAY_SIZE(txpredivs); i++) {254254 ns_hdmipll_ck = 5 * tmds_clk * txposdiv * txpredivs[i];255255 if (ns_hdmipll_ck >= 5 * GIGA &&256256- ns_hdmipll_ck <= 1 * GIGA)256256+ ns_hdmipll_ck <= 12 * GIGA)257257 break;258258 }259259 if (i == (ARRAY_SIZE(txpredivs) - 1) &&
+50-28
drivers/phy/qualcomm/phy-qcom-snps-femto-v2.c
···110110/**111111 * struct qcom_snps_hsphy - snps hs phy attributes112112 *113113+ * @dev: device structure114114+ *113115 * @phy: generic phy114116 * @base: iomapped memory space for snps hs phy115117 *116116- * @cfg_ahb_clk: AHB2PHY interface clock117117- * @ref_clk: phy reference clock118118+ * @num_clks: number of clocks119119+ * @clks: array of clocks118120 * @phy_reset: phy reset control119121 * @vregs: regulator supplies bulk data120122 * @phy_initialized: if PHY has been initialized correctly···124122 * @update_seq_cfg: tuning parameters for phy init125123 */126124struct qcom_snps_hsphy {125125+ struct device *dev;126126+127127 struct phy *phy;128128 void __iomem *base;129129130130- struct clk *cfg_ahb_clk;131131- struct clk *ref_clk;130130+ int num_clks;131131+ struct clk_bulk_data *clks;132132 struct reset_control *phy_reset;133133 struct regulator_bulk_data vregs[SNPS_HS_NUM_VREGS];134134···138134 enum phy_mode mode;139135 struct phy_override_seq update_seq_cfg[NUM_HSPHY_TUNING_PARAMS];140136};137137+138138+static int qcom_snps_hsphy_clk_init(struct qcom_snps_hsphy *hsphy)139139+{140140+ struct device *dev = hsphy->dev;141141+142142+ hsphy->num_clks = 2;143143+ hsphy->clks = devm_kcalloc(dev, hsphy->num_clks, sizeof(*hsphy->clks), GFP_KERNEL);144144+ if (!hsphy->clks)145145+ return -ENOMEM;146146+147147+ /*148148+ * TODO: Currently no device tree instantiation of the PHY is using the clock.149149+ * This needs to be fixed in order for this code to be able to use devm_clk_bulk_get().150150+ */151151+ hsphy->clks[0].id = "cfg_ahb";152152+ hsphy->clks[0].clk = devm_clk_get_optional(dev, "cfg_ahb");153153+ if (IS_ERR(hsphy->clks[0].clk))154154+ return dev_err_probe(dev, PTR_ERR(hsphy->clks[0].clk),155155+ "failed to get cfg_ahb clk\n");156156+157157+ hsphy->clks[1].id = "ref";158158+ hsphy->clks[1].clk = devm_clk_get(dev, "ref");159159+ if (IS_ERR(hsphy->clks[1].clk))160160+ return dev_err_probe(dev, PTR_ERR(hsphy->clks[1].clk),161161+ "failed to get ref 
clk\n");162162+163163+ return 0;164164+}141165142166static inline void qcom_snps_hsphy_write_mask(void __iomem *base, u32 offset,143167 u32 mask, u32 val)···197165 0, USB2_AUTO_RESUME);198166 }199167200200- clk_disable_unprepare(hsphy->cfg_ahb_clk);201168 return 0;202169}203170204171static int qcom_snps_hsphy_resume(struct qcom_snps_hsphy *hsphy)205172{206206- int ret;207207-208173 dev_dbg(&hsphy->phy->dev, "Resume QCOM SNPS PHY, mode\n");209209-210210- ret = clk_prepare_enable(hsphy->cfg_ahb_clk);211211- if (ret) {212212- dev_err(&hsphy->phy->dev, "failed to enable cfg ahb clock\n");213213- return ret;214214- }215174216175 return 0;217176}···214191 if (!hsphy->phy_initialized)215192 return 0;216193217217- qcom_snps_hsphy_suspend(hsphy);218218- return 0;194194+ return qcom_snps_hsphy_suspend(hsphy);219195}220196221197static int __maybe_unused qcom_snps_hsphy_runtime_resume(struct device *dev)···224202 if (!hsphy->phy_initialized)225203 return 0;226204227227- qcom_snps_hsphy_resume(hsphy);228228- return 0;205205+ return qcom_snps_hsphy_resume(hsphy);229206}230207231208static int qcom_snps_hsphy_set_mode(struct phy *phy, enum phy_mode mode,···395374 if (ret)396375 return ret;397376398398- ret = clk_prepare_enable(hsphy->cfg_ahb_clk);377377+ ret = clk_bulk_prepare_enable(hsphy->num_clks, hsphy->clks);399378 if (ret) {400400- dev_err(&phy->dev, "failed to enable cfg ahb clock, %d\n", ret);379379+ dev_err(&phy->dev, "failed to enable clocks, %d\n", ret);401380 goto poweroff_phy;402381 }403382404383 ret = reset_control_assert(hsphy->phy_reset);405384 if (ret) {406385 dev_err(&phy->dev, "failed to assert phy_reset, %d\n", ret);407407- goto disable_ahb_clk;386386+ goto disable_clks;408387 }409388410389 usleep_range(100, 150);···412391 ret = reset_control_deassert(hsphy->phy_reset);413392 if (ret) {414393 dev_err(&phy->dev, "failed to de-assert phy_reset, %d\n", ret);415415- goto disable_ahb_clk;394394+ goto disable_clks;416395 }417396418397 
qcom_snps_hsphy_write_mask(hsphy->base, USB2_PHY_USB_PHY_CFG0,···469448470449 return 0;471450472472-disable_ahb_clk:473473- clk_disable_unprepare(hsphy->cfg_ahb_clk);451451+disable_clks:452452+ clk_bulk_disable_unprepare(hsphy->num_clks, hsphy->clks);474453poweroff_phy:475454 regulator_bulk_disable(ARRAY_SIZE(hsphy->vregs), hsphy->vregs);476455···482461 struct qcom_snps_hsphy *hsphy = phy_get_drvdata(phy);483462484463 reset_control_assert(hsphy->phy_reset);485485- clk_disable_unprepare(hsphy->cfg_ahb_clk);464464+ clk_bulk_disable_unprepare(hsphy->num_clks, hsphy->clks);486465 regulator_bulk_disable(ARRAY_SIZE(hsphy->vregs), hsphy->vregs);487466 hsphy->phy_initialized = false;488467···575554 if (!hsphy)576555 return -ENOMEM;577556557557+ hsphy->dev = dev;558558+578559 hsphy->base = devm_platform_ioremap_resource(pdev, 0);579560 if (IS_ERR(hsphy->base))580561 return PTR_ERR(hsphy->base);581562582582- hsphy->ref_clk = devm_clk_get(dev, "ref");583583- if (IS_ERR(hsphy->ref_clk))584584- return dev_err_probe(dev, PTR_ERR(hsphy->ref_clk),585585- "failed to get ref clk\n");563563+ ret = qcom_snps_hsphy_clk_init(hsphy);564564+ if (ret)565565+ return dev_err_probe(dev, ret, "failed to initialize clocks\n");586566587567 hsphy->phy_reset = devm_reset_control_get_exclusive(&pdev->dev, NULL);588568 if (IS_ERR(hsphy->phy_reset)) {
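The phy-qcom-snps-femto-v2 hunks above convert two individual clock handles into a `clk_bulk_data` array while keeping the usual kernel goto-unwind on the init path: each step that succeeded before a failure is undone in reverse order under the `disable_clks`/`poweroff_phy` labels. A minimal userspace sketch of that unwind pattern, with illustrative step names and a trace buffer standing in for the driver API:

```c
#include <assert.h>
#include <string.h>

/*
 * Sketch of the goto-unwind pattern used by qcom_snps_hsphy_init():
 * every step that succeeded before a failure is undone in reverse
 * order. Step names and the trace buffer are illustrative only.
 */
static char trace[128];

static int step(const char *name, int fail)
{
	if (fail)
		return -1;
	strcat(trace, name);
	strcat(trace, " ");
	return 0;
}

static void undo(const char *name)
{
	strcat(trace, name);
	strcat(trace, " ");
}

/* fail_at selects which step fails: 0, 1, 2, or -1 for full success */
static int phy_init(int fail_at)
{
	int ret;

	trace[0] = '\0';

	ret = step("regulator_on", fail_at == 0);
	if (ret)
		return ret;

	ret = step("clk_bulk_on", fail_at == 1);
	if (ret)
		goto poweroff_phy;

	ret = step("reset_cycle", fail_at == 2);
	if (ret)
		goto disable_clks;

	return 0;

disable_clks:
	undo("clk_bulk_off");
poweroff_phy:
	undo("regulator_off");
	return ret;
}
```

The bulk conversion pays off exactly here: one `clk_bulk_disable_unprepare()` call tears down the whole array, so the error labels do not grow as clocks are added.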
···150150 DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go"),151151 },152152 },153153+ {154154+ .matches = {155155+ DMI_MATCH(DMI_SYS_VENDOR, "HP"),156156+ DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite Dragonfly G2 Notebook PC"),157157+ },158158+ },153159 { }154160};155161···626620static int intel_hid_probe(struct platform_device *device)627621{628622 acpi_handle handle = ACPI_HANDLE(&device->dev);629629- unsigned long long mode;623623+ unsigned long long mode, dummy;630624 struct intel_hid_priv *priv;631625 acpi_status status;632626 int err;···698692 if (err)699693 goto err_remove_notify;700694701701- if (priv->array) {702702- unsigned long long dummy;695695+ intel_button_array_enable(&device->dev, true);703696704704- intel_button_array_enable(&device->dev, true);705705-706706- /* Call button load method to enable HID power button */707707- if (!intel_hid_evaluate_method(handle, INTEL_HID_DSM_BTNL_FN,708708- &dummy)) {709709- dev_warn(&device->dev,710710- "failed to enable HID power button\n");711711- }712712- }697697+ /*698698+ * Call button load method to enable HID power button699699+ * Always do this since it activates events on some devices without700700+ * a button array too.701701+ */702702+ if (!intel_hid_evaluate_method(handle, INTEL_HID_DSM_BTNL_FN, &dummy))703703+ dev_warn(&device->dev, "failed to enable HID power button\n");713704714705 device_init_wakeup(&device->dev, true);715706 /*
+4-4
drivers/platform/x86/msi-laptop.c
···208208 return -EINVAL;209209210210 if (quirks->ec_read_only)211211- return -EOPNOTSUPP;211211+ return 0;212212213213 /* read current device state */214214 result = ec_read(MSI_STANDARD_EC_COMMAND_ADDRESS, &rdata);···838838static void msi_init_rfkill(struct work_struct *ignored)839839{840840 if (rfk_wlan) {841841- rfkill_set_sw_state(rfk_wlan, !wlan_s);841841+ msi_rfkill_set_state(rfk_wlan, !wlan_s);842842 rfkill_wlan_set(NULL, !wlan_s);843843 }844844 if (rfk_bluetooth) {845845- rfkill_set_sw_state(rfk_bluetooth, !bluetooth_s);845845+ msi_rfkill_set_state(rfk_bluetooth, !bluetooth_s);846846 rfkill_bluetooth_set(NULL, !bluetooth_s);847847 }848848 if (rfk_threeg) {849849- rfkill_set_sw_state(rfk_threeg, !threeg_s);849849+ msi_rfkill_set_state(rfk_threeg, !threeg_s);850850 rfkill_threeg_set(NULL, !threeg_s);851851 }852852}
···661661 /* Disable VCN33_WIFI */662662 ret = regmap_update_bits(mt6397->regmap, MT6358_LDO_VCN33_CON0_1, BIT(0), 0);663663 if (ret) {664664- dev_err(dev, "Failed to disable VCN33_BT\n");664664+ dev_err(dev, "Failed to disable VCN33_WIFI\n");665665 return ret;666666 }667667···676676 const struct mt6358_regulator_info *mt6358_info;677677 int i, max_regulator, ret;678678679679- ret = mt6358_sync_vcn33_setting(&pdev->dev);680680- if (ret)681681- return ret;682682-683679 if (mt6397->chip_id == MT6366_CHIP_ID) {684680 max_regulator = MT6366_MAX_REGULATOR;685681 mt6358_info = mt6366_regulators;···683687 max_regulator = MT6358_MAX_REGULATOR;684688 mt6358_info = mt6358_regulators;685689 }690690+691691+ ret = mt6358_sync_vcn33_setting(&pdev->dev);692692+ if (ret)693693+ return ret;686694687695 for (i = 0; i < max_regulator; i++) {688696 config.dev = &pdev->dev;
+49-78
drivers/s390/block/dasd.c
···29432943 * Requeue a request back to the block request queue29442944 * only works for block requests29452945 */29462946-static int _dasd_requeue_request(struct dasd_ccw_req *cqr)29462946+static void _dasd_requeue_request(struct dasd_ccw_req *cqr)29472947{29482948- struct dasd_block *block = cqr->block;29492948 struct request *req;2950294929512951- if (!block)29522952- return -EINVAL;29532950 /*29542951 * If the request is an ERP request there is nothing to requeue.29552952 * This will be done with the remaining original request.29562953 */29572954 if (cqr->refers)29582958- return 0;29552955+ return;29592956 spin_lock_irq(&cqr->dq->lock);29602957 req = (struct request *) cqr->callback_data;29612958 blk_mq_requeue_request(req, true);29622959 spin_unlock_irq(&cqr->dq->lock);2963296029642964- return 0;29612961+ return;29652962}2966296329672967-/*29682968- * Go through all request on the dasd_block request queue, cancel them29692969- * on the respective dasd_device, and return them to the generic29702970- * block layer.29712971- */29722972-static int dasd_flush_block_queue(struct dasd_block *block)29642964+static int _dasd_requests_to_flushqueue(struct dasd_block *block,29652965+ struct list_head *flush_queue)29732966{29742967 struct dasd_ccw_req *cqr, *n;29752975- int rc, i;29762976- struct list_head flush_queue;29772968 unsigned long flags;29692969+ int rc, i;2978297029792979- INIT_LIST_HEAD(&flush_queue);29802980- spin_lock_bh(&block->queue_lock);29712971+ spin_lock_irqsave(&block->queue_lock, flags);29812972 rc = 0;29822973restart:29832974 list_for_each_entry_safe(cqr, n, &block->ccw_queue, blocklist) {···29832992 * is returned from the dasd_device layer.29842993 */29852994 cqr->callback = _dasd_wake_block_flush_cb;29862986- for (i = 0; cqr != NULL; cqr = cqr->refers, i++)29872987- list_move_tail(&cqr->blocklist, &flush_queue);29952995+ for (i = 0; cqr; cqr = cqr->refers, i++)29962996+ list_move_tail(&cqr->blocklist, flush_queue);29882997 if (i > 1)29892998 /* 
moved more than one request - need to restart */29902999 goto restart;29913000 }29922992- spin_unlock_bh(&block->queue_lock);30013001+ spin_unlock_irqrestore(&block->queue_lock, flags);30023002+30033003+ return rc;30043004+}30053005+30063006+/*30073007+ * Go through all request on the dasd_block request queue, cancel them30083008+ * on the respective dasd_device, and return them to the generic30093009+ * block layer.30103010+ */30113011+static int dasd_flush_block_queue(struct dasd_block *block)30123012+{30133013+ struct dasd_ccw_req *cqr, *n;30143014+ struct list_head flush_queue;30153015+ unsigned long flags;30163016+ int rc;30173017+30183018+ INIT_LIST_HEAD(&flush_queue);30193019+ rc = _dasd_requests_to_flushqueue(block, &flush_queue);30203020+29933021 /* Now call the callback function of flushed requests */29943022restart_cb:29953023 list_for_each_entry_safe(cqr, n, &flush_queue, blocklist) {···38913881 */38923882int dasd_generic_requeue_all_requests(struct dasd_device *device)38933883{38843884+ struct dasd_block *block = device->block;38943885 struct list_head requeue_queue;38953886 struct dasd_ccw_req *cqr, *n;38963896- struct dasd_ccw_req *refers;38973887 int rc;3898388838893889+ if (!block)38903890+ return 0;38913891+38993892 INIT_LIST_HEAD(&requeue_queue);39003900- spin_lock_irq(get_ccwdev_lock(device->cdev));39013901- rc = 0;39023902- list_for_each_entry_safe(cqr, n, &device->ccw_queue, devlist) {39033903- /* Check status and move request to flush_queue */39043904- if (cqr->status == DASD_CQR_IN_IO) {39053905- rc = device->discipline->term_IO(cqr);39063906- if (rc) {39073907- /* unable to terminate requeust */39083908- dev_err(&device->cdev->dev,39093909- "Unable to terminate request %p "39103910- "on suspend\n", cqr);39113911- spin_unlock_irq(get_ccwdev_lock(device->cdev));39123912- dasd_put_device(device);39133913- return rc;39143914- }38933893+ rc = _dasd_requests_to_flushqueue(block, &requeue_queue);38943894+38953895+ /* Now call the callback function 
of flushed requests */38963896+restart_cb:38973897+ list_for_each_entry_safe(cqr, n, &requeue_queue, blocklist) {38983898+ wait_event(dasd_flush_wq, (cqr->status < DASD_CQR_QUEUED));38993899+ /* Process finished ERP request. */39003900+ if (cqr->refers) {39013901+ spin_lock_bh(&block->queue_lock);39023902+ __dasd_process_erp(block->base, cqr);39033903+ spin_unlock_bh(&block->queue_lock);39043904+ /* restart list_for_xx loop since dasd_process_erp39053905+ * might remove multiple elements39063906+ */39073907+ goto restart_cb;39153908 }39163916- list_move_tail(&cqr->devlist, &requeue_queue);39173917- }39183918- spin_unlock_irq(get_ccwdev_lock(device->cdev));39193919-39203920- list_for_each_entry_safe(cqr, n, &requeue_queue, devlist) {39213921- wait_event(dasd_flush_wq,39223922- (cqr->status != DASD_CQR_CLEAR_PENDING));39233923-39243924- /*39253925- * requeue requests to blocklayer will only work39263926- * for block device requests39273927- */39283928- if (_dasd_requeue_request(cqr))39293929- continue;39303930-39313931- /* remove requests from device and block queue */39323932- list_del_init(&cqr->devlist);39333933- while (cqr->refers != NULL) {39343934- refers = cqr->refers;39353935- /* remove the request from the block queue */39363936- list_del(&cqr->blocklist);39373937- /* free the finished erp request */39383938- dasd_free_erp_request(cqr, cqr->memdev);39393939- cqr = refers;39403940- }39413941-39423942- /*39433943- * _dasd_requeue_request already checked for a valid39443944- * blockdevice, no need to check again39453945- * all erp requests (cqr->refers) have a cqr->block39463946- * pointer copy from the original cqr39473947- */39093909+ _dasd_requeue_request(cqr);39483910 list_del_init(&cqr->blocklist);39493911 cqr->block->base->discipline->free_cp(39503912 cqr, (struct request *) cqr->callback_data);39513951- }39523952-39533953- /*39543954- * if requests remain then they are internal request39553955- * and go back to the device queue39563956- */39573957- if 
(!list_empty(&requeue_queue)) {39583958- /* move freeze_queue to start of the ccw_queue */39593959- spin_lock_irq(get_ccwdev_lock(device->cdev));39603960- list_splice_tail(&requeue_queue, &device->ccw_queue);39613961- spin_unlock_irq(get_ccwdev_lock(device->cdev));39623913 }39633914 dasd_schedule_device_bh(device);39643915 return rc;
+2-2
drivers/s390/block/dasd_3990_erp.c
···10501050 dev_err(&device->cdev->dev, "An I/O request was rejected"10511051 " because writing is inhibited\n");10521052 erp = dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED);10531053- } else if (sense[7] & SNS7_INVALID_ON_SEC) {10531053+ } else if (sense[7] == SNS7_INVALID_ON_SEC) {10541054 dev_err(&device->cdev->dev, "An I/O request was rejected on a copy pair secondary device\n");10551055 /* suppress dump of sense data for this error */10561056 set_bit(DASD_CQR_SUPPRESS_CR, &erp->refers->flags);···24412441 erp->block = cqr->block;24422442 erp->magic = cqr->magic;24432443 erp->expires = cqr->expires;24442444- erp->retries = 256;24442444+ erp->retries = device->default_retries;24452445 erp->buildclk = get_tod_clock();24462446 erp->status = DASD_CQR_FILLED;24472447
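The dasd_3990_erp change swaps a bitwise AND for an equality test because `SNS7_INVALID_ON_SEC` is a full byte code in `sense[7]`, not a one-bit flag: an AND also matched unrelated sense codes that merely share bits with it. A small sketch (the numeric value mirrors the driver but is treated as illustrative here):

```c
#include <assert.h>

/*
 * SNS7_INVALID_ON_SEC is a full sense-byte code, not a flag bit, so it
 * must be compared with ==. The value is illustrative.
 */
#define SNS7_INVALID_ON_SEC 0x3e

static int old_check(unsigned char sense7)
{
	return (sense7 & SNS7_INVALID_ON_SEC) != 0;	/* buggy: matches too much */
}

static int new_check(unsigned char sense7)
{
	return sense7 == SNS7_INVALID_ON_SEC;		/* fixed: exact code */
}
```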
···14971497 int error;14981498 unsigned long iflags;1499149915001500- error = blk_get_queue(scsidp->request_queue);15011501- if (error)15021502- return error;15001500+ if (!blk_get_queue(scsidp->request_queue)) {15011501+ pr_warn("%s: get scsi_device queue failed\n", __func__);15021502+ return -ENODEV;15031503+ }1503150415041505 error = -ENOMEM;15051506 cdev = cdev_alloc();
+2-2
drivers/soundwire/amd_manager.c
···910910 return -ENOMEM;911911912912 amd_manager->acp_mmio = devm_ioremap(dev, res->start, resource_size(res));913913- if (IS_ERR(amd_manager->mmio)) {913913+ if (!amd_manager->acp_mmio) {914914 dev_err(dev, "mmio not found\n");915915- return PTR_ERR(amd_manager->mmio);915915+ return -ENOMEM;916916 }917917 amd_manager->instance = pdata->instance;918918 amd_manager->mmio = amd_manager->acp_mmio +
+4-4
drivers/soundwire/bus.c
···922922 "initializing enumeration and init completion for Slave %d\n",923923 slave->dev_num);924924925925- init_completion(&slave->enumeration_complete);926926- init_completion(&slave->initialization_complete);925925+ reinit_completion(&slave->enumeration_complete);926926+ reinit_completion(&slave->initialization_complete);927927928928 } else if ((status == SDW_SLAVE_ATTACHED) &&929929 (slave->status == SDW_SLAVE_UNATTACHED)) {···931931 "signaling enumeration completion for Slave %d\n",932932 slave->dev_num);933933934934- complete(&slave->enumeration_complete);934934+ complete_all(&slave->enumeration_complete);935935 }936936 slave->status = status;937937 mutex_unlock(&bus->bus_lock);···19511951 "signaling initialization completion for Slave %d\n",19521952 slave->dev_num);1953195319541954- complete(&slave->initialization_complete);19541954+ complete_all(&slave->initialization_complete);1955195519561956 /*19571957 * If the manager became pm_runtime active, the peripherals will be
+1-1
drivers/soundwire/qcom.c
···540540 status = (val >> (dev_num * SWRM_MCP_SLV_STATUS_SZ));541541542542 if ((status & SWRM_MCP_SLV_STATUS_MASK) == SDW_SLAVE_ALERT) {543543- ctrl->status[dev_num] = status;543543+ ctrl->status[dev_num] = status & SWRM_MCP_SLV_STATUS_MASK;544544 return dev_num;545545 }546546 }
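The soundwire/qcom fix masks the shifted status word before caching it: each device owns a 2-bit field in the packed status register, so after shifting the field down it must also be masked, or the higher devices' bits get stored along with it. A sketch of the extraction (the ALERT encoding used here is an assumption):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Each device owns a 2-bit field in the packed status register; the
 * field must be masked after shifting. SLAVE_ALERT is illustrative.
 */
#define SLV_STATUS_SZ	2
#define SLV_STATUS_MASK	0x3
#define SLAVE_ALERT	0x2

static unsigned int slave_status(uint32_t reg, int dev_num)
{
	return (reg >> (dev_num * SLV_STATUS_SZ)) & SLV_STATUS_MASK;
}
```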
+49-5
drivers/spi/spi-qcom-qspi.c
···6969 WR_FIFO_OVERRUN)7070#define QSPI_ALL_IRQS (QSPI_ERR_IRQS | RESP_FIFO_RDY | \7171 WR_FIFO_EMPTY | WR_FIFO_FULL | \7272- TRANSACTION_DONE)7272+ TRANSACTION_DONE | DMA_CHAIN_DONE)73737474#define PIO_XFER_CTRL 0x00147575#define REQUEST_COUNT_MSK 0xffff···308308 dma_addr_t dma_cmd_desc;309309310310 /* allocate for dma cmd descriptor */311311- virt_cmd_desc = dma_pool_alloc(ctrl->dma_cmd_pool, GFP_KERNEL | __GFP_ZERO, &dma_cmd_desc);312312- if (!virt_cmd_desc)313313- return -ENOMEM;311311+ virt_cmd_desc = dma_pool_alloc(ctrl->dma_cmd_pool, GFP_ATOMIC | __GFP_ZERO, &dma_cmd_desc);312312+ if (!virt_cmd_desc) {313313+ dev_warn_once(ctrl->dev, "Couldn't find memory for descriptor\n");314314+ return -EAGAIN;315315+ }314316315317 ctrl->virt_cmd_desc[ctrl->n_cmd_desc] = virt_cmd_desc;316318 ctrl->dma_cmd_desc[ctrl->n_cmd_desc] = dma_cmd_desc;···357355358356 for (i = 0; i < sgt->nents; i++) {359357 dma_ptr_sg = sg_dma_address(sgt->sgl + i);358358+ dma_len_sg = sg_dma_len(sgt->sgl + i);360359 if (!IS_ALIGNED(dma_ptr_sg, QSPI_ALIGN_REQ)) {361360 dev_warn_once(ctrl->dev, "dma_address not aligned to %d\n", QSPI_ALIGN_REQ);361361+ return -EAGAIN;362362+ }363363+ /*364364+ * When reading with DMA the controller writes to memory 1 word365365+ * at a time. 
If the length isn't a multiple of 4 bytes then366366+ * the controller can clobber the things later in memory.367367+ * Fallback to PIO to be safe.368368+ */369369+ if (ctrl->xfer.dir == QSPI_READ && (dma_len_sg & 0x03)) {370370+ dev_warn_once(ctrl->dev, "fallback to PIO for read of size %#010x\n",371371+ dma_len_sg);362372 return -EAGAIN;363373 }364374 }···455441456442 ret = qcom_qspi_setup_dma_desc(ctrl, xfer);457443 if (ret != -EAGAIN) {458458- if (!ret)444444+ if (!ret) {445445+ dma_wmb();459446 qcom_qspi_dma_xfer(ctrl);447447+ }460448 goto exit;461449 }462450 dev_warn_once(ctrl->dev, "DMA failure, falling back to PIO\n");···619603 int_status = readl(ctrl->base + MSTR_INT_STATUS);620604 writel(int_status, ctrl->base + MSTR_INT_STATUS);621605606606+ /* Ignore disabled interrupts */607607+ int_status &= readl(ctrl->base + MSTR_INT_EN);608608+622609 /* PIO mode handling */623610 if (ctrl->xfer.dir == QSPI_WRITE) {624611 if (int_status & WR_FIFO_EMPTY)···665646 spin_unlock(&ctrl->lock);666647 return ret;667648}649649+650650+static int qcom_qspi_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op)651651+{652652+ /*653653+ * If qcom_qspi_can_dma() is going to return false we don't need to654654+ * adjust anything.655655+ */656656+ if (op->data.nbytes <= QSPI_MAX_BYTES_FIFO)657657+ return 0;658658+659659+ /*660660+ * When reading, the transfer needs to be a multiple of 4 bytes so661661+ * shrink the transfer if that's not true. 
The caller will then do a662662+ * second transfer to finish things up.663663+ */664664+ if (op->data.dir == SPI_MEM_DATA_IN && (op->data.nbytes & 0x3))665665+ op->data.nbytes &= ~0x3;666666+667667+ return 0;668668+}669669+670670+static const struct spi_controller_mem_ops qcom_qspi_mem_ops = {671671+ .adjust_op_size = qcom_qspi_adjust_op_size,672672+};668673669674static int qcom_qspi_probe(struct platform_device *pdev)670675{···774731 if (of_property_read_bool(pdev->dev.of_node, "iommus"))775732 master->can_dma = qcom_qspi_can_dma;776733 master->auto_runtime_pm = true;734734+ master->mem_ops = &qcom_qspi_mem_ops;777735778736 ret = devm_pm_opp_set_clkname(&pdev->dev, "core");779737 if (ret)
···112112 for (i = 0; i < 8; i++) {113113 pxmitbuf->pxmit_urb[i] = usb_alloc_urb(0, GFP_KERNEL);114114 if (!pxmitbuf->pxmit_urb[i]) {115115+ int k;116116+117117+ for (k = i - 1; k >= 0; k--) {118118+ /* handle allocation errors part way through loop */119119+ usb_free_urb(pxmitbuf->pxmit_urb[k]);120120+ }115121 netdev_err(padapter->pnetdev, "pxmitbuf->pxmit_urb[i] == NULL\n");116122 return -ENOMEM;117123 }
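The rtl8712 hunk adds the classic partial-allocation cleanup: when the i-th allocation fails, the k = i-1 .. 0 allocations already made are released before returning, so nothing leaks. A userspace sketch with a live counter standing in for `usb_alloc_urb()`/`usb_free_urb()`:

```c
#include <assert.h>

/*
 * Partial-allocation cleanup: on the i-th failure, free allocations
 * i-1 .. 0 before returning. The counter stands in for real URBs.
 */
#define NR_URBS 8

static int live;	/* outstanding fake URBs */

static void *fake_alloc(int fail)
{
	if (fail)
		return 0;
	live++;
	return &live;
}

static void fake_free(void)
{
	live--;
}

static int alloc_urbs(int fail_at)
{
	void *urb[NR_URBS];
	int i;

	for (i = 0; i < NR_URBS; i++) {
		urb[i] = fake_alloc(i == fail_at);
		if (!urb[i]) {
			int k;

			/* unwind the allocations made so far */
			for (k = i - 1; k >= 0; k--)
				fake_free();
			return -1;
		}
	}
	return 0;
}
```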
+2-2
drivers/thermal/thermal_core.c
···12031203struct thermal_zone_device *12041204thermal_zone_device_register_with_trips(const char *type, struct thermal_trip *trips, int num_trips, int mask,12051205 void *devdata, struct thermal_zone_device_ops *ops,12061206- struct thermal_zone_params *tzp, int passive_delay,12061206+ const struct thermal_zone_params *tzp, int passive_delay,12071207 int polling_delay)12081208{12091209 struct thermal_zone_device *tz;···1371137113721372struct thermal_zone_device *thermal_zone_device_register(const char *type, int ntrips, int mask,13731373 void *devdata, struct thermal_zone_device_ops *ops,13741374- struct thermal_zone_params *tzp, int passive_delay,13741374+ const struct thermal_zone_params *tzp, int passive_delay,13751375 int polling_delay)13761376{13771377 return thermal_zone_device_register_with_trips(type, NULL, ntrips, mask,
···30703070 gsm->has_devices = false;30713071 }30723072 for (i = NUM_DLCI - 1; i >= 0; i--)30733073- if (gsm->dlci[i])30733073+ if (gsm->dlci[i]) {30743074 gsm_dlci_release(gsm->dlci[i]);30753075+ gsm->dlci[i] = NULL;30763076+ }30753077 mutex_unlock(&gsm->mutex);30763078 /* Now wipe the queues */30773079 tty_ldisc_flush(gsm->tty);
···16811681 if (ret)16821682 return ret;1683168316841684- /*16851685- * Set pm_runtime status as ACTIVE so that wakeup_irq gets16861686- * enabled/disabled from dev_pm_arm_wake_irq during system16871687- * suspend/resume respectively.16881688- */16891689- pm_runtime_set_active(&pdev->dev);16901690-16911684 if (port->wakeup_irq > 0) {16921685 device_init_wakeup(&pdev->dev, true);16931686 ret = dev_pm_set_dedicated_wake_irq(&pdev->dev,
+1-1
drivers/tty/serial/sh-sci.c
···590590 dma_submit_error(s->cookie_tx)) {591591 if (s->cfg->regtype == SCIx_RZ_SCIFA_REGTYPE)592592 /* Switch irq from SCIF to DMA */593593- disable_irq(s->irqs[SCIx_TXI_IRQ]);593593+ disable_irq_nosync(s->irqs[SCIx_TXI_IRQ]);594594595595 s->cookie_tx = 0;596596 schedule_work(&s->work_tx);
+1-1
drivers/tty/serial/sifive.c
···811811 local_irq_restore(flags);812812}813813814814-static int __init sifive_serial_console_setup(struct console *co, char *options)814814+static int sifive_serial_console_setup(struct console *co, char *options)815815{816816 struct sifive_serial_port *ssp;817817 int baud = SIFIVE_DEFAULT_BAUD_RATE;
+1-1
drivers/tty/serial/ucc_uart.c
···5959/* #define LOOPBACK */60606161/* The major and minor device numbers are defined in6262- * http://www.lanana.org/docs/device-list/devices-2.6+.txt. For the QE6262+ * Documentation/admin-guide/devices.txt. For the QE6363 * UART, we have major number 204 and minor numbers 46 - 49, which are the6464 * same as for the CPM2. This decision was made because no Freescale part6565 * has both a CPM and a QE.
+1-1
drivers/tty/tty_io.c
···22852285 char ch, mbz = 0;22862286 struct tty_ldisc *ld;2287228722882288- if (!tty_legacy_tiocsti)22882288+ if (!tty_legacy_tiocsti && !capable(CAP_SYS_ADMIN))22892289 return -EIO;2290229022912291 if ((current->signal->tty != tty) && !capable(CAP_SYS_ADMIN))
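The tty_io change softens the TIOCSTI lockdown: with the legacy sysctl off, the ioctl is no longer refused outright but remains available to a `CAP_SYS_ADMIN` caller. A sketch of the resulting policy, with plain ints standing in for the sysctl flag and `capable(CAP_SYS_ADMIN)`:

```c
#include <assert.h>

/*
 * TIOCSTI is refused only when the legacy sysctl is off AND the caller
 * lacks CAP_SYS_ADMIN. Inputs are illustrative stand-ins.
 */
static int tiocsti_permitted(int legacy_tiocsti, int cap_sys_admin)
{
	if (!legacy_tiocsti && !cap_sys_admin)
		return 0;	/* -EIO in the kernel */
	return 1;
}
```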
+3-1
drivers/usb/cdns3/cdns3-gadget.c
···30153015static int cdns3_gadget_check_config(struct usb_gadget *gadget)30163016{30173017 struct cdns3_device *priv_dev = gadget_to_cdns3_device(gadget);30183018+ struct cdns3_endpoint *priv_ep;30183019 struct usb_ep *ep;30193020 int n_in = 0;30203021 int total;3021302230223023 list_for_each_entry(ep, &gadget->ep_list, ep_list) {30233023- if (ep->claimed && (ep->address & USB_DIR_IN))30243024+ priv_ep = ep_to_cdns3_ep(ep);30253025+ if ((priv_ep->flags & EP_CLAIMED) && (ep->address & USB_DIR_IN))30243026 n_in++;30253027 }30263028
···277277 /*278278 * We're resetting only the device side because, if we're in host mode,279279 * XHCI driver will reset the host block. If dwc3 was configured for280280- * host-only mode, then we can return early.280280+ * host-only mode or current role is host, then we can return early.281281 */282282- if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)282282+ if (dwc->dr_mode == USB_DR_MODE_HOST || dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)283283 return 0;284284285285 reg = dwc3_readl(dwc->regs, DWC3_DCTL);···12071207 reg |= DWC3_GUCTL1_DEV_FORCE_20_CLK_FOR_30_CLK;1208120812091209 dwc3_writel(dwc->regs, DWC3_GUCTL1, reg);12101210- }12111211-12121212- if (dwc->dr_mode == USB_DR_MODE_HOST ||12131213- dwc->dr_mode == USB_DR_MODE_OTG) {12141214- reg = dwc3_readl(dwc->regs, DWC3_GUCTL);12151215-12161216- /*12171217- * Enable Auto retry Feature to make the controller operating in12181218- * Host mode on seeing transaction errors(CRC errors or internal12191219- * overrun scenerios) on IN transfers to reply to the device12201220- * with a non-terminating retry ACK (i.e, an ACK transcation12211221- * packet with Retry=1 & Nump != 0)12221222- */12231223- reg |= DWC3_GUCTL_HSTINAUTORETRY;12241224-12251225- dwc3_writel(dwc->regs, DWC3_GUCTL, reg);12261210 }1227121112281212 /*
-3
drivers/usb/dwc3/core.h
···256256#define DWC3_GCTL_GBLHIBERNATIONEN BIT(1)257257#define DWC3_GCTL_DSBLCLKGTNG BIT(0)258258259259-/* Global User Control Register */260260-#define DWC3_GUCTL_HSTINAUTORETRY BIT(14)261261-262259/* Global User Control 1 Register */263260#define DWC3_GUCTL1_DEV_DECOUPLE_L1L2_EVT BIT(31)264261#define DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS BIT(28)
+4-2
drivers/usb/dwc3/dwc3-pci.c
···233233234234 /*235235 * A lot of BYT devices lack ACPI resource entries for236236- * the GPIOs, add a fallback mapping to the reference236236+ * the GPIOs. If the ACPI entry for the GPIO controller237237+ * is present add a fallback mapping to the reference237238 * design GPIOs which all boards seem to use.238239 */239239- gpiod_add_lookup_table(&platform_bytcr_gpios);240240+ if (acpi_dev_present("INT33FC", NULL, -1))241241+ gpiod_add_lookup_table(&platform_bytcr_gpios);240242241243 /*242244 * These GPIOs will turn on the USB2 PHY. Note that we have to
+4
drivers/usb/gadget/composite.c
···11251125 goto done;1126112611271127 status = bind(config);11281128+11291129+ if (status == 0)11301130+ status = usb_gadget_check_config(cdev->gadget);11311131+11281132 if (status < 0) {11291133 while (!list_empty(&config->functions)) {11301134 struct usb_function *f;
+7-5
drivers/usb/gadget/legacy/raw_gadget.c
···310310 dev->eps_num = i;311311 spin_unlock_irqrestore(&dev->lock, flags);312312313313+ ret = raw_queue_event(dev, USB_RAW_EVENT_CONNECT, 0, NULL);314314+ if (ret < 0) {315315+ dev_err(&gadget->dev, "failed to queue event\n");316316+ set_gadget_data(gadget, NULL);317317+ return ret;318318+ }319319+313320 /* Matches kref_put() in gadget_unbind(). */314321 kref_get(&dev->count);315315-316316- ret = raw_queue_event(dev, USB_RAW_EVENT_CONNECT, 0, NULL);317317- if (ret < 0)318318- dev_err(&gadget->dev, "failed to queue event\n");319319-320322 return ret;321323}322324
-1
drivers/usb/gadget/udc/core.c
···878878 */879879 if (gadget->connected)880880 ret = usb_gadget_connect_locked(gadget);881881- mutex_unlock(&gadget->udc->connect_lock);882881883882unlock:884883 mutex_unlock(&gadget->udc->connect_lock);
+4-4
drivers/usb/gadget/udc/tegra-xudc.c
···37183718 int err;3719371937203720 xudc->genpd_dev_device = dev_pm_domain_attach_by_name(dev, "dev");37213721- if (IS_ERR_OR_NULL(xudc->genpd_dev_device)) {37223722- err = PTR_ERR(xudc->genpd_dev_device) ? : -ENODATA;37213721+ if (IS_ERR(xudc->genpd_dev_device)) {37223722+ err = PTR_ERR(xudc->genpd_dev_device);37233723 dev_err(dev, "failed to get device power domain: %d\n", err);37243724 return err;37253725 }3726372637273727 xudc->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "ss");37283728- if (IS_ERR_OR_NULL(xudc->genpd_dev_ss)) {37293729- err = PTR_ERR(xudc->genpd_dev_ss) ? : -ENODATA;37283728+ if (IS_ERR(xudc->genpd_dev_ss)) {37293729+ err = PTR_ERR(xudc->genpd_dev_ss);37303730 dev_err(dev, "failed to get SuperSpeed power domain: %d\n", err);37313731 return err;37323732 }
+7-1
drivers/usb/host/ohci-at91.c
···672672 else673673 at91_start_clock(ohci_at91);674674675675- ohci_resume(hcd, false);675675+ /*676676+ * According to the comment in ohci_hcd_at91_drv_suspend()677677+ * we need to do a reset if the 48Mhz clock was stopped,678678+ * that is, if ohci_at91->wakeup is clear. Tell ohci_resume()679679+ * to reset in this case by setting its "hibernated" flag.680680+ */681681+ ohci_resume(hcd, !ohci_at91->wakeup);676682677683 return 0;678684}
···626626 struct xhci_ring *ep_ring;627627 struct xhci_command *cmd;628628 struct xhci_segment *new_seg;629629- struct xhci_segment *halted_seg = NULL;630629 union xhci_trb *new_deq;631630 int new_cycle;632632- union xhci_trb *halted_trb;633633- int index = 0;634631 dma_addr_t addr;635632 u64 hw_dequeue;636633 bool cycle_found = false;···665668 hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id);666669 new_seg = ep_ring->deq_seg;667670 new_deq = ep_ring->dequeue;668668-669669- /*670670- * Quirk: xHC write-back of the DCS field in the hardware dequeue671671- * pointer is wrong - use the cycle state of the TRB pointed to by672672- * the dequeue pointer.673673- */674674- if (xhci->quirks & XHCI_EP_CTX_BROKEN_DCS &&675675- !(ep->ep_state & EP_HAS_STREAMS))676676- halted_seg = trb_in_td(xhci, td->start_seg,677677- td->first_trb, td->last_trb,678678- hw_dequeue & ~0xf, false);679679- if (halted_seg) {680680- index = ((dma_addr_t)(hw_dequeue & ~0xf) - halted_seg->dma) /681681- sizeof(*halted_trb);682682- halted_trb = &halted_seg->trbs[index];683683- new_cycle = halted_trb->generic.field[3] & 0x1;684684- xhci_dbg(xhci, "Endpoint DCS = %d TRB index = %d cycle = %d\n",685685- (u8)(hw_dequeue & 0x1), index, new_cycle);686686- } else {687687- new_cycle = hw_dequeue & 0x1;688688- }671671+ new_cycle = hw_dequeue & 0x1;689672690673 /*691674 * We want to find the pointer, segment and cycle state of the new trb
+4-4
drivers/usb/host/xhci-tegra.c
···11451145 int err;1146114611471147 tegra->genpd_dev_host = dev_pm_domain_attach_by_name(dev, "xusb_host");11481148- if (IS_ERR_OR_NULL(tegra->genpd_dev_host)) {11491149- err = PTR_ERR(tegra->genpd_dev_host) ? : -ENODATA;11481148+ if (IS_ERR(tegra->genpd_dev_host)) {11491149+ err = PTR_ERR(tegra->genpd_dev_host);11501150 dev_err(dev, "failed to get host pm-domain: %d\n", err);11511151 return err;11521152 }1153115311541154 tegra->genpd_dev_ss = dev_pm_domain_attach_by_name(dev, "xusb_ss");11551155- if (IS_ERR_OR_NULL(tegra->genpd_dev_ss)) {11561156- err = PTR_ERR(tegra->genpd_dev_ss) ? : -ENODATA;11551155+ if (IS_ERR(tegra->genpd_dev_ss)) {11561156+ err = PTR_ERR(tegra->genpd_dev_ss);11571157 dev_err(dev, "failed to get superspeed pm-domain: %d\n", err);11581158 return err;11591159 }
+4-4
drivers/usb/misc/ehset.c
···7777 switch (test_pid) {7878 case TEST_SE0_NAK_PID:7979 ret = ehset_prepare_port_for_testing(hub_udev, portnum);8080- if (!ret)8080+ if (ret < 0)8181 break;8282 ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,8383 USB_RT_PORT, USB_PORT_FEAT_TEST,···8686 break;8787 case TEST_J_PID:8888 ret = ehset_prepare_port_for_testing(hub_udev, portnum);8989- if (!ret)8989+ if (ret < 0)9090 break;9191 ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,9292 USB_RT_PORT, USB_PORT_FEAT_TEST,···9595 break;9696 case TEST_K_PID:9797 ret = ehset_prepare_port_for_testing(hub_udev, portnum);9898- if (!ret)9898+ if (ret < 0)9999 break;100100 ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,101101 USB_RT_PORT, USB_PORT_FEAT_TEST,···104104 break;105105 case TEST_PACKET_PID:106106 ret = ehset_prepare_port_for_testing(hub_udev, portnum);107107- if (!ret)107107+ if (ret < 0)108108 break;109109 ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,110110 USB_RT_PORT, USB_PORT_FEAT_TEST,
···811811812812static int __init xenbus_probe_initcall(void)813813{814814+ if (!xen_domain())815815+ return -ENODEV;816816+814817 /*815818 * Probe XenBus here in the XS_PV case, and also XS_HVM unless we816819 * need to wait for the platform PCI device to come up or
+3-3
fs/9p/fid.h
···4646 * NOTE: these are set after open so only reflect 9p client not4747 * underlying file system on server.4848 */4949-static inline void v9fs_fid_add_modes(struct p9_fid *fid, int s_flags,5050- int s_cache, unsigned int f_flags)4949+static inline void v9fs_fid_add_modes(struct p9_fid *fid, unsigned int s_flags,5050+ unsigned int s_cache, unsigned int f_flags)5151{5252 if (fid->qid.type != P9_QTFILE)5353 return;···5757 (s_flags & V9FS_DIRECT_IO) || (f_flags & O_DIRECT)) {5858 fid->mode |= P9L_DIRECT; /* no read or write cache */5959 } else if ((!(s_cache & CACHE_WRITEBACK)) ||6060- (f_flags & O_DSYNC) | (s_flags & V9FS_SYNC)) {6060+ (f_flags & O_DSYNC) || (s_flags & V9FS_SYNC)) {6161 fid->mode |= P9L_NOWRITECACHE;6262 }6363}
-2
fs/9p/v9fs.c
···545545 p9_client_begin_disconnect(v9ses->clnt);546546}547547548548-extern int v9fs_error_init(void);549549-550548static struct kobject *v9fs_kobj;551549552550#ifdef CONFIG_9P_FSCACHE
+1-1
fs/9p/v9fs.h
···108108109109struct v9fs_session_info {110110 /* options */111111- unsigned char flags;111111+ unsigned int flags;112112 unsigned char nodev;113113 unsigned short debug;114114 unsigned int afid;
···11# SPDX-License-Identifier: GPL-2.0-only22-config AUTOFS4_FS33- tristate "Old Kconfig name for Kernel automounter support"44- select AUTOFS_FS55- help66- This name exists for people to just automatically pick up the77- new name of the autofs Kconfig option. All it does is select88- the new option name.99-1010- It will go away in a release or two as people have1111- transitioned to just plain AUTOFS_FS.1212-132config AUTOFS_FS143 tristate "Kernel automounter support (supports v3, v4 and v5)"1515- default n164 help175 The automounter is a tool to automatically mount remote file systems186 on demand. This implementation is partially kernel-based to reduce
+34-17
fs/btrfs/block-group.c
···499499 * used yet since their free space will be released as soon as the transaction500500 * commits.501501 */502502-u64 add_new_free_space(struct btrfs_block_group *block_group, u64 start, u64 end)502502+int add_new_free_space(struct btrfs_block_group *block_group, u64 start, u64 end,503503+ u64 *total_added_ret)503504{504505 struct btrfs_fs_info *info = block_group->fs_info;505505- u64 extent_start, extent_end, size, total_added = 0;506506+ u64 extent_start, extent_end, size;506507 int ret;508508+509509+ if (total_added_ret)510510+ *total_added_ret = 0;507511508512 while (start < end) {509513 ret = find_first_extent_bit(&info->excluded_extents, start,···521517 start = extent_end + 1;522518 } else if (extent_start > start && extent_start < end) {523519 size = extent_start - start;524524- total_added += size;525520 ret = btrfs_add_free_space_async_trimmed(block_group,526521 start, size);527527- BUG_ON(ret); /* -ENOMEM or logic error */522522+ if (ret)523523+ return ret;524524+ if (total_added_ret)525525+ *total_added_ret += size;528526 start = extent_end + 1;529527 } else {530528 break;···535529536530 if (start < end) {537531 size = end - start;538538- total_added += size;539532 ret = btrfs_add_free_space_async_trimmed(block_group, start,540533 size);541541- BUG_ON(ret); /* -ENOMEM or logic error */534534+ if (ret)535535+ return ret;536536+ if (total_added_ret)537537+ *total_added_ret += size;542538 }543539544544- return total_added;540540+ return 0;545541}546542547543/*···787779788780 if (key.type == BTRFS_EXTENT_ITEM_KEY ||789781 key.type == BTRFS_METADATA_ITEM_KEY) {790790- total_found += add_new_free_space(block_group, last,791791- key.objectid);782782+ u64 space_added;783783+784784+ ret = add_new_free_space(block_group, last, key.objectid,785785+ &space_added);786786+ if (ret)787787+ goto out;788788+ total_found += space_added;792789 if (key.type == BTRFS_METADATA_ITEM_KEY)793790 last = key.objectid +794791 fs_info->nodesize;···808795 }809796 
path->slots[0]++;810797 }811811- ret = 0;812798813813- total_found += add_new_free_space(block_group, last,814814- block_group->start + block_group->length);815815-799799+ ret = add_new_free_space(block_group, last,800800+ block_group->start + block_group->length,801801+ NULL);816802out:817803 btrfs_free_path(path);818804 return ret;···23062294 btrfs_free_excluded_extents(cache);23072295 } else if (cache->used == 0) {23082296 cache->cached = BTRFS_CACHE_FINISHED;23092309- add_new_free_space(cache, cache->start,23102310- cache->start + cache->length);22972297+ ret = add_new_free_space(cache, cache->start,22982298+ cache->start + cache->length, NULL);23112299 btrfs_free_excluded_extents(cache);23002300+ if (ret)23012301+ goto error;23122302 }2313230323142304 ret = btrfs_add_block_group_cache(info, cache);···27542740 return ERR_PTR(ret);27552741 }2756274227572757- add_new_free_space(cache, chunk_offset, chunk_offset + size);27582758-27432743+ ret = add_new_free_space(cache, chunk_offset, chunk_offset + size, NULL);27592744 btrfs_free_excluded_extents(cache);27452745+ if (ret) {27462746+ btrfs_put_block_group(cache);27472747+ return ERR_PTR(ret);27482748+ }2760274927612750 /*27622751 * Ensure the corresponding space_info object is created and
···349349 }350350 read_unlock(&fs_info->global_root_lock);351351352352+ if (btrfs_fs_compat_ro(fs_info, BLOCK_GROUP_TREE)) {353353+ num_bytes += btrfs_root_used(&fs_info->block_group_root->root_item);354354+ min_items++;355355+ }356356+352357 /*353358 * But we also want to reserve enough space so we can do the fallback354359 * global reserve for an unlink, which is an additional
+6-1
fs/btrfs/disk-io.c
···34383438 * For devices supporting discard turn on discard=async automatically,34393439 * unless it's already set or disabled. This could be turned off by34403440 * nodiscard for the same mount.34413441+ *34423442+ * The zoned mode piggy backs on the discard functionality for34433443+ * resetting a zone. There is no reason to delay the zone reset as it is34443444+ * fast enough. So, do not enable async discard for zoned mode.34413445 */34423446 if (!(btrfs_test_opt(fs_info, DISCARD_SYNC) ||34433447 btrfs_test_opt(fs_info, DISCARD_ASYNC) ||34443448 btrfs_test_opt(fs_info, NODISCARD)) &&34453445- fs_info->fs_devices->discardable) {34493449+ fs_info->fs_devices->discardable &&34503450+ !btrfs_is_zoned(fs_info)) {34463451 btrfs_set_and_info(fs_info, DISCARD_ASYNC,34473452 "auto enabling async discard");34483453 }
+17-7
fs/btrfs/free-space-tree.c
···15151515 if (prev_bit == 0 && bit == 1) {15161516 extent_start = offset;15171517 } else if (prev_bit == 1 && bit == 0) {15181518- total_found += add_new_free_space(block_group,15191519- extent_start,15201520- offset);15181518+ u64 space_added;15191519+15201520+ ret = add_new_free_space(block_group, extent_start,15211521+ offset, &space_added);15221522+ if (ret)15231523+ goto out;15241524+ total_found += space_added;15211525 if (total_found > CACHING_CTL_WAKE_UP) {15221526 total_found = 0;15231527 wake_up(&caching_ctl->wait);···15331529 }15341530 }15351531 if (prev_bit == 1) {15361536- total_found += add_new_free_space(block_group, extent_start,15371537- end);15321532+ ret = add_new_free_space(block_group, extent_start, end, NULL);15331533+ if (ret)15341534+ goto out;15381535 extent_count++;15391536 }15401537···15741569 end = block_group->start + block_group->length;1575157015761571 while (1) {15721572+ u64 space_added;15731573+15771574 ret = btrfs_next_item(root, path);15781575 if (ret < 0)15791576 goto out;···15901583 ASSERT(key.type == BTRFS_FREE_SPACE_EXTENT_KEY);15911584 ASSERT(key.objectid < end && key.objectid + key.offset <= end);1592158515931593- total_found += add_new_free_space(block_group, key.objectid,15941594- key.objectid + key.offset);15861586+ ret = add_new_free_space(block_group, key.objectid,15871587+ key.objectid + key.offset, &space_added);15881588+ if (ret)15891589+ goto out;15901590+ total_found += space_added;15951591 if (total_found > CACHING_CTL_WAKE_UP) {15961592 total_found = 0;15971593 wake_up(&caching_ctl->wait);
+8-2
fs/btrfs/transaction.c
···826826827827 trans = start_transaction(root, 0, TRANS_ATTACH,828828 BTRFS_RESERVE_NO_FLUSH, true);829829- if (trans == ERR_PTR(-ENOENT))830830- btrfs_wait_for_commit(root->fs_info, 0);829829+ if (trans == ERR_PTR(-ENOENT)) {830830+ int ret;831831+832832+ ret = btrfs_wait_for_commit(root->fs_info, 0);833833+ if (ret)834834+ return ERR_PTR(ret);835835+ }831836832837 return trans;833838}···936931 }937932938933 wait_for_commit(cur_trans, TRANS_STATE_COMPLETED);934934+ ret = cur_trans->aborted;939935 btrfs_put_transaction(cur_trans);940936out:941937 return ret;
+3
fs/btrfs/zoned.c
···805805 return -EINVAL;806806 }807807808808+ btrfs_clear_and_info(info, DISCARD_ASYNC,809809+ "zoned: async discard ignored and disabled for zoned mode");810810+808811 return 0;809812}810813
+1-1
fs/ceph/metric.c
···216216 struct ceph_mds_client *mdsc =217217 container_of(m, struct ceph_mds_client, metric);218218219219- if (mdsc->stopping)219219+ if (mdsc->stopping || disable_send_metrics)220220 return;221221222222 if (!m->session || !check_session_state(m->session)) {
+2-4
fs/file.c
···10421042 struct file *file = (struct file *)(v & ~3);1043104310441044 if (file && (file->f_mode & FMODE_ATOMIC_POS)) {10451045- if (file_count(file) > 1) {10461046- v |= FDPUT_POS_UNLOCK;10471047- mutex_lock(&file->f_pos_lock);10481048- }10451045+ v |= FDPUT_POS_UNLOCK;10461046+ mutex_lock(&file->f_pos_lock);10491047 }10501048 return v;10511049}
-2
fs/nfsd/nfs4state.c
···63416341 if (ZERO_STATEID(stateid) || ONE_STATEID(stateid) ||63426342 CLOSE_STATEID(stateid))63436343 return status;63446344- if (!same_clid(&stateid->si_opaque.so_clid, &cl->cl_clientid))63456345- return status;63466344 spin_lock(&cl->cl_lock);63476345 s = find_stateid_locked(cl, stateid);63486346 if (!s)
···10131013}101410141015101510161016+/* See MS-NLMP 2.2.1.3 */10161017int build_ntlmssp_auth_blob(unsigned char **pbuffer,10171018 u16 *buflen,10181019 struct cifs_ses *ses,···1048104710491048 flags = ses->ntlmssp->server_flags | NTLMSSP_REQUEST_TARGET |10501049 NTLMSSP_NEGOTIATE_TARGET_INFO | NTLMSSP_NEGOTIATE_WORKSTATION_SUPPLIED;10511051-10501050+ /* we only send version information in ntlmssp negotiate, so do not set this flag */10511051+ flags = flags & ~NTLMSSP_NEGOTIATE_VERSION;10521052 tmp = *pbuffer + sizeof(AUTHENTICATE_MESSAGE);10531053 sec_blob->NegotiateFlags = cpu_to_le32(flags);10541054
···684684ftrace_set_early_filter(struct ftrace_ops *ops, char *buf, int enable);685685686686/* defined in arch */687687-extern int ftrace_ip_converted(unsigned long ip);688687extern int ftrace_dyn_arch_init(void);689688extern void ftrace_replace_code(int enable);690689extern int ftrace_update_ftrace_func(ftrace_func_t func);···857858 return -EINVAL;858859}859860#endif860860-861861-/* May be defined in arch */862862-extern int ftrace_arch_read_dyn_info(char *buf, int size);863861864862extern int skip_trace(unsigned long ip);865863extern void ftrace_module_init(struct module *mod);
+23-6
include/linux/mm.h
···641641 */642642static inline bool vma_start_read(struct vm_area_struct *vma)643643{644644- /* Check before locking. A race might cause false locked result. */645645- if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))644644+ /*645645+ * Check before locking. A race might cause false locked result.646646+ * We can use READ_ONCE() for the mm_lock_seq here, and don't need647647+ * ACQUIRE semantics, because this is just a lockless check whose result648648+ * we don't rely on for anything - the mm_lock_seq read against which we649649+ * need ordering is below.650650+ */651651+ if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))646652 return false;647653648654 if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))···659653 * False unlocked result is impossible because we modify and check660654 * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq661655 * modification invalidates all existing locks.656656+ *657657+ * We must use ACQUIRE semantics for the mm_lock_seq so that if we are658658+ * racing with vma_end_write_all(), we only start reading from the VMA659659+ * after it has been unlocked.660660+ * This pairs with RELEASE semantics in vma_end_write_all().662661 */663663- if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {662662+ if (unlikely(vma->vm_lock_seq == smp_load_acquire(&vma->vm_mm->mm_lock_seq))) {664663 up_read(&vma->vm_lock->lock);665664 return false;666665 }···687676 * current task is holding mmap_write_lock, both vma->vm_lock_seq and688677 * mm->mm_lock_seq can't be concurrently modified.689678 */690690- *mm_lock_seq = READ_ONCE(vma->vm_mm->mm_lock_seq);679679+ *mm_lock_seq = vma->vm_mm->mm_lock_seq;691680 return (vma->vm_lock_seq == *mm_lock_seq);692681}693682···699688 return;700689701690 down_write(&vma->vm_lock->lock);702702- vma->vm_lock_seq = mm_lock_seq;691691+ /*692692+ * We should use WRITE_ONCE() here because we can have concurrent reads693693+ * from the early lockless 
pessimistic check in vma_start_read().694694+ * We don't really care about the correctness of that early check, but695695+ * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.696696+ */697697+ WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);703698 up_write(&vma->vm_lock->lock);704699}705700···719702 if (!down_write_trylock(&vma->vm_lock->lock))720703 return false;721704722722- vma->vm_lock_seq = mm_lock_seq;705705+ WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);723706 up_write(&vma->vm_lock->lock);724707 return true;725708}
+28
include/linux/mm_types.h
···514514 };515515516516#ifdef CONFIG_PER_VMA_LOCK517517+ /*518518+ * Can only be written (using WRITE_ONCE()) while holding both:519519+ * - mmap_lock (in write mode)520520+ * - vm_lock->lock (in write mode)521521+ * Can be read reliably while holding one of:522522+ * - mmap_lock (in read or write mode)523523+ * - vm_lock->lock (in read or write mode)524524+ * Can be read unreliably (using READ_ONCE()) for pessimistic bailout525525+ * while holding nothing (except RCU to keep the VMA struct allocated).526526+ *527527+ * This sequence counter is explicitly allowed to overflow; sequence528528+ * counter reuse can only lead to occasional unnecessary use of the529529+ * slowpath.530530+ */517531 int vm_lock_seq;518532 struct vma_lock *vm_lock;519533···693679 * by mmlist_lock694680 */695681#ifdef CONFIG_PER_VMA_LOCK682682+ /*683683+ * This field has lock-like semantics, meaning it is sometimes684684+ * accessed with ACQUIRE/RELEASE semantics.685685+ * Roughly speaking, incrementing the sequence number is686686+ * equivalent to releasing locks on VMAs; reading the sequence687687+ * number can be part of taking a read lock on a VMA.688688+ *689689+ * Can be modified under write mmap_lock using RELEASE690690+ * semantics.691691+ * Can be read with no other protection when holding write692692+ * mmap_lock.693693+ * Can be read with ACQUIRE semantics if not holding write694694+ * mmap_lock.695695+ */696696 int mm_lock_seq;697697#endif698698
+8-2
include/linux/mmap_lock.h
···7676static inline void vma_end_write_all(struct mm_struct *mm)7777{7878 mmap_assert_write_locked(mm);7979- /* No races during update due to exclusive mmap_lock being held */8080- WRITE_ONCE(mm->mm_lock_seq, mm->mm_lock_seq + 1);7979+ /*8080+ * Nobody can concurrently modify mm->mm_lock_seq due to exclusive8181+ * mmap_lock being held.8282+ * We need RELEASE semantics here to ensure that preceding stores into8383+ * the VMA take effect before we unlock it with this store.8484+ * Pairs with ACQUIRE semantics in vma_start_read().8585+ */8686+ smp_store_release(&mm->mm_lock_seq, mm->mm_lock_seq + 1);8187}8288#else8389static inline void vma_end_write_all(struct mm_struct *mm) {}
···6969/*7070 * Allow extra references to event channels exposed to userspace by evtchn7171 */7272-int evtchn_make_refcounted(evtchn_port_t evtchn);7272+int evtchn_make_refcounted(evtchn_port_t evtchn, bool is_static);7373int evtchn_get(evtchn_port_t evtchn);7474void evtchn_put(evtchn_port_t evtchn);7575···140140void xen_init_IRQ(void);141141142142irqreturn_t xen_debug_interrupt(int irq, void *dev_id);143143+144144+static inline void xen_evtchn_close(evtchn_port_t port)145145+{146146+ struct evtchn_close close;147147+148148+ close.port = port;149149+ if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)150150+ BUG();151151+}143152144153#endif /* _XEN_EVENTS_H */
+17-6
io_uring/io_uring.c
···24932493 return 0;24942494}2495249524962496+static bool current_pending_io(void)24972497+{24982498+ struct io_uring_task *tctx = current->io_uring;24992499+25002500+ if (!tctx)25012501+ return false;25022502+ return percpu_counter_read_positive(&tctx->inflight);25032503+}25042504+24962505/* when returns >0, the caller should retry */24972506static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,24982507 struct io_wait_queue *iowq)24992508{25002500- int token, ret;25092509+ int io_wait, ret;2501251025022511 if (unlikely(READ_ONCE(ctx->check_cq)))25032512 return 1;···25202511 return 0;2521251225222513 /*25232523- * Use io_schedule_prepare/finish, so cpufreq can take into account25242524- * that the task is waiting for IO - turns out to be important for low25252525- * QD IO.25142514+ * Mark us as being in io_wait if we have pending requests, so cpufreq25152515+ * can take into account that the task is waiting for IO - turns out25162516+ * to be important for low QD IO.25262517 */25272527- token = io_schedule_prepare();25182518+ io_wait = current->in_iowait;25192519+ if (current_pending_io())25202520+ current->in_iowait = 1;25282521 ret = 0;25292522 if (iowq->timeout == KTIME_MAX)25302523 schedule();25312524 else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))25322525 ret = -ETIME;25332533- io_schedule_finish(token);25262526+ current->in_iowait = io_wait;25342527 return ret;25352528}25362529
···1717#include <linux/rtmutex.h>1818#include <linux/sched/wake_q.h>19192020+2121+/*2222+ * This is a helper for the struct rt_mutex_waiter below. A waiter goes in two2323+ * separate trees and they need their own copy of the sort keys because of2424+ * different locking requirements.2525+ *2626+ * @entry: rbtree node to enqueue into the waiters tree2727+ * @prio: Priority of the waiter2828+ * @deadline: Deadline of the waiter if applicable2929+ *3030+ * See rt_waiter_node_less() and waiter_*_prio().3131+ */3232+struct rt_waiter_node {3333+ struct rb_node entry;3434+ int prio;3535+ u64 deadline;3636+};3737+2038/*2139 * This is the control structure for tasks blocked on a rt_mutex,2240 * which is allocated on the kernel stack on of the blocked task.2341 *2424- * @tree_entry: pi node to enqueue into the mutex waiters tree2525- * @pi_tree_entry: pi node to enqueue into the mutex owner waiters tree4242+ * @tree: node to enqueue into the mutex waiters tree4343+ * @pi_tree: node to enqueue into the mutex owner waiters tree2644 * @task: task reference to the blocked task2745 * @lock: Pointer to the rt_mutex on which the waiter blocks2846 * @wake_state: Wakeup state to use (TASK_NORMAL or TASK_RTLOCK_WAIT)2929- * @prio: Priority of the waiter3030- * @deadline: Deadline of the waiter if applicable3147 * @ww_ctx: WW context pointer4848+ *4949+ * @tree is ordered by @lock->wait_lock5050+ * @pi_tree is ordered by rt_mutex_owner(@lock)->pi_lock3251 */3352struct rt_mutex_waiter {3434- struct rb_node tree_entry;3535- struct rb_node pi_tree_entry;5353+ struct rt_waiter_node tree;5454+ struct rt_waiter_node pi_tree;3655 struct task_struct *task;3756 struct rt_mutex_base *lock;3857 unsigned int wake_state;3939- int prio;4040- u64 deadline;4158 struct ww_acquire_ctx *ww_ctx;4259};4360···122105{123106 struct rb_node *leftmost = rb_first_cached(&lock->waiters);124107125125- return rb_entry(leftmost, struct rt_mutex_waiter, tree_entry) == waiter;108108+ return rb_entry(leftmost, struct 
rt_mutex_waiter, tree.entry) == waiter;126109}127110128111static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex_base *lock)···130113 struct rb_node *leftmost = rb_first_cached(&lock->waiters);131114 struct rt_mutex_waiter *w = NULL;132115116116+ lockdep_assert_held(&lock->wait_lock);117117+133118 if (leftmost) {134134- w = rb_entry(leftmost, struct rt_mutex_waiter, tree_entry);119119+ w = rb_entry(leftmost, struct rt_mutex_waiter, tree.entry);135120 BUG_ON(w->lock != lock);136121 }137122 return w;···146127147128static inline struct rt_mutex_waiter *task_top_pi_waiter(struct task_struct *p)148129{130130+ lockdep_assert_held(&p->pi_lock);131131+149132 return rb_entry(p->pi_waiters.rb_leftmost, struct rt_mutex_waiter,150150- pi_tree_entry);133133+ pi_tree.entry);151134}152135153136#define RT_MUTEX_HAS_WAITERS 1UL···211190static inline void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter)212191{213192 debug_rt_mutex_init_waiter(waiter);214214- RB_CLEAR_NODE(&waiter->pi_tree_entry);215215- RB_CLEAR_NODE(&waiter->tree_entry);193193+ RB_CLEAR_NODE(&waiter->pi_tree.entry);194194+ RB_CLEAR_NODE(&waiter->tree.entry);216195 waiter->wake_state = TASK_NORMAL;217196 waiter->task = NULL;218197}
···562562 if (handler != SIG_IGN && handler != SIG_DFL)563563 return false;564564565565+ /* If dying, we handle all new signals by ignoring them */566566+ if (fatal_signal_pending(tsk))567567+ return false;568568+565569 /* if ptraced, let the tracer determine */566570 return !tsk->ptrace;567571}
+13-12
kernel/trace/ring_buffer.c
···523523 rb_time_t before_stamp;524524 u64 event_stamp[MAX_NEST];525525 u64 read_stamp;526526+ /* pages removed since last reset */527527+ unsigned long pages_removed;526528 /* ring buffer pages to update, > 0 to add, < 0 to remove */527529 long nr_pages_to_update;528530 struct list_head new_pages; /* new pages to add */···561559 struct buffer_page *head_page;562560 struct buffer_page *cache_reader_page;563561 unsigned long cache_read;562562+ unsigned long cache_pages_removed;564563 u64 read_stamp;565564 u64 page_stamp;566565 struct ring_buffer_event *event;···950947/**951948 * ring_buffer_wake_waiters - wake up any waiters on this ring buffer952949 * @buffer: The ring buffer to wake waiters on950950+ * @cpu: The CPU buffer to wake waiters on953951 *954952 * In the case of a file that represents a ring buffer is closing,955953 * it is prudent to wake up any waiters that are on this.···19611957 to_remove = rb_list_head(to_remove)->next;19621958 head_bit |= (unsigned long)to_remove & RB_PAGE_HEAD;19631959 }19601960+ /* Read iterators need to reset themselves when some pages removed */19611961+ cpu_buffer->pages_removed += nr_removed;1964196219651963 next_page = rb_list_head(to_remove)->next;19661964···19831977 if (head_bit)19841978 cpu_buffer->head_page = list_entry(next_page,19851979 struct buffer_page, list);19861986-19871987- /*19881988- * change read pointer to make sure any read iterators reset19891989- * themselves19901990- */19911991- cpu_buffer->read = 0;1992198019931981 /* pages are removed, resume tracing and then free the pages */19941982 atomic_dec(&cpu_buffer->record_disabled);···33763376/**33773377 * ring_buffer_unlock_commit - commit a reserved33783378 * @buffer: The buffer to commit to33793379- * @event: The event pointer to commit.33803379 *33813380 * This commits the data to the ring buffer, and releases any locks held.33823381 *···4394439543954396 iter->cache_reader_page = iter->head_page;43964397 iter->cache_read = cpu_buffer->read;43984398+ 
iter->cache_pages_removed = cpu_buffer->pages_removed;4397439943984400 if (iter->head) {43994401 iter->read_stamp = cpu_buffer->read_stamp;···48494849 buffer = cpu_buffer->buffer;4850485048514851 /*48524852- * Check if someone performed a consuming read to48534853- * the buffer. A consuming read invalidates the iterator48544854- * and we need to reset the iterator in this case.48524852+ * Check if someone performed a consuming read to the buffer48534853+ * or removed some pages from the buffer. In these cases,48544854+ * iterator was invalidated and we need to reset it.48554855 */48564856 if (unlikely(iter->cache_read != cpu_buffer->read ||48574857- iter->cache_reader_page != cpu_buffer->reader_page))48574857+ iter->cache_reader_page != cpu_buffer->reader_page ||48584858+ iter->cache_pages_removed != cpu_buffer->pages_removed))48584859 rb_iter_reset(iter);4859486048604861 again:···52995298 cpu_buffer->last_overrun = 0;5300529953015300 rb_head_page_activate(cpu_buffer);53015301+ cpu_buffer->pages_removed = 0;53025302}5303530353045304/* Must have disabled the cpu buffer then done a synchronize_rcu */···53585356/**53595357 * ring_buffer_reset_online_cpus - reset a ring buffer per CPU buffer53605358 * @buffer: The ring buffer to reset a per cpu buffer of53615361- * @cpu: The CPU buffer to be reset53625359 */53635360void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)53645361{
+4-10
kernel/trace/trace_events.c
···611611{612612 struct trace_event_call *call = file->event_call;613613 struct trace_array *tr = file->tr;614614- unsigned long file_flags = file->flags;615614 int ret = 0;616615 int disable;617616···634635 break;635636 disable = file->flags & EVENT_FILE_FL_SOFT_DISABLED;636637 clear_bit(EVENT_FILE_FL_SOFT_MODE_BIT, &file->flags);638638+ /* Disable use of trace_buffered_event */639639+ trace_buffered_event_disable();637640 } else638641 disable = !(file->flags & EVENT_FILE_FL_SOFT_MODE);639642···674673 if (atomic_inc_return(&file->sm_ref) > 1)675674 break;676675 set_bit(EVENT_FILE_FL_SOFT_MODE_BIT, &file->flags);676676+ /* Enable use of trace_buffered_event */677677+ trace_buffered_event_enable();677678 }678679679680 if (!(file->flags & EVENT_FILE_FL_ENABLED)) {···713710 set_bit(EVENT_FILE_FL_WAS_ENABLED_BIT, &file->flags);714711 }715712 break;716716- }717717-718718- /* Enable or disable use of trace_buffered_event */719719- if ((file_flags & EVENT_FILE_FL_SOFT_DISABLED) !=720720- (file->flags & EVENT_FILE_FL_SOFT_DISABLED)) {721721- if (file->flags & EVENT_FILE_FL_SOFT_DISABLED)722722- trace_buffered_event_enable();723723- else724724- trace_buffered_event_disable();725713 }726714727715 return ret;
+1
kernel/trace/trace_events_synth.c
···12301230 * synth_event_gen_cmd_array_start - Start synthetic event command from an array12311231 * @cmd: A pointer to the dynevent_cmd struct representing the new event12321232 * @name: The name of the synthetic event12331233+ * @mod: The module creating the event, NULL if not created from a module12331234 * @fields: An array of type/name field descriptions12341235 * @n_fields: The number of field descriptions contained in the fields array12351236 *
+2
kernel/trace/trace_events_trigger.c
···3131/**3232 * event_triggers_call - Call triggers associated with a trace event3333 * @file: The trace_event_file associated with the event3434+ * @buffer: The ring buffer that the event is being written to3435 * @rec: The trace entry for the event, NULL for unconditional invocation3636+ * @event: The event meta data in the ring buffer3537 *3638 * For each trigger associated with an event, invoke the trigger3739 * function registered with the associated trigger command. If rec is
+4-4
kernel/trace/trace_probe.c
···386386387387 /* Get BTF_KIND_FUNC type */388388 t = btf_type_by_id(btf, id);389389- if (!btf_type_is_func(t))389389+ if (!t || !btf_type_is_func(t))390390 return ERR_PTR(-ENOENT);391391392392 /* The type of BTF_KIND_FUNC is BTF_KIND_FUNC_PROTO */393393 t = btf_type_by_id(btf, t->type);394394- if (!btf_type_is_func_proto(t))394394+ if (!t || !btf_type_is_func_proto(t))395395 return ERR_PTR(-ENOENT);396396397397 return t;···443443 if (!ctx->params) {444444 params = find_btf_func_param(ctx->funcname, &ctx->nr_params,445445 ctx->flags & TPARG_FL_TPOINT);446446- if (IS_ERR(params)) {446446+ if (IS_ERR_OR_NULL(params)) {447447 trace_probe_log_err(ctx->offset, NO_BTF_ENTRY);448448 return PTR_ERR(params);449449 }···1273127312741274 params = find_btf_func_param(ctx->funcname, &nr_params,12751275 ctx->flags & TPARG_FL_TPOINT);12761276- if (IS_ERR(params)) {12761276+ if (IS_ERR_OR_NULL(params)) {12771277 if (args_idx != -1) {12781278 /* $arg* requires BTF info */12791279 trace_probe_log_err(0, NOSUP_BTFARG);
+1
kernel/trace/trace_seq.c
···131131 * trace_seq_vprintf - sequence printing of trace information132132 * @s: trace sequence descriptor133133 * @fmt: printf format string134134+ * @args: Arguments for the format string134135 *135136 * The tracer may use either sequence operations or its own136137 * copy to user routines. To simplify formatting of a trace
+1-1
lib/genalloc.c
···895895896896 of_property_read_string(np_pool, "label", &name);897897 if (!name)898898- name = np_pool->name;898898+ name = of_node_full_name(np_pool);899899 }900900 if (pdev)901901 pool = gen_pool_get(&pdev->dev, name);
···24872487 goto unlock_mutex;24882488 }2489248924902490- if (!folio_test_hwpoison(folio)) {24902490+ if (!PageHWPoison(p)) {24912491 unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n",24922492 pfn, &unpoison_rs);24932493 goto unlock_mutex;
+16-12
mm/memory.c
···53935393 if (!vma_is_anonymous(vma) && !vma_is_tcp(vma))53945394 goto inval;5395539553965396- /* find_mergeable_anon_vma uses adjacent vmas which are not locked */53975397- if (!vma->anon_vma && !vma_is_tcp(vma))53985398- goto inval;53995399-54005396 if (!vma_start_read(vma))54015397 goto inval;53985398+53995399+ /*54005400+ * find_mergeable_anon_vma uses adjacent vmas which are not locked.54015401+ * This check must happen after vma_start_read(); otherwise, a54025402+ * concurrent mremap() with MREMAP_DONTUNMAP could dissociate the VMA54035403+ * from its anon_vma.54045404+ */54055405+ if (unlikely(!vma->anon_vma && !vma_is_tcp(vma)))54065406+ goto inval_end_read;5402540754035408 /*54045409 * Due to the possibility of userfault handler dropping mmap_lock, avoid54055410 * it for now and fall back to page fault handling under mmap_lock.54065411 */54075407- if (userfaultfd_armed(vma)) {54085408- vma_end_read(vma);54095409- goto inval;54105410- }54125412+ if (userfaultfd_armed(vma))54135413+ goto inval_end_read;5411541454125415 /* Check since vm_start/vm_end might change before we lock the VMA */54135413- if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {54145414- vma_end_read(vma);54155415- goto inval;54165416- }54165416+ if (unlikely(address < vma->vm_start || address >= vma->vm_end))54175417+ goto inval_end_read;5417541854185419 /* Check if the VMA got isolated after we found it */54195420 if (vma->detached) {···5426542554275426 rcu_read_unlock();54285427 return vma;54285428+54295429+inval_end_read:54305430+ vma_end_read(vma);54295431inval:54305432 rcu_read_unlock();54315433 count_vm_vma_lock_event(VMA_LOCK_ABORT);
+14-1
mm/mempolicy.c
···384384 VMA_ITERATOR(vmi, mm, 0);385385386386 mmap_write_lock(mm);387387- for_each_vma(vmi, vma)387387+ for_each_vma(vmi, vma) {388388+ vma_start_write(vma);388389 mpol_rebind_policy(vma->vm_policy, new);390390+ }389391 mmap_write_unlock(mm);390392}391393···769767 int err;770768 struct mempolicy *old;771769 struct mempolicy *new;770770+771771+ vma_assert_write_locked(vma);772772773773 pr_debug("vma %lx-%lx/%lx vm_ops %p vm_file %p set_policy %p\n",774774 vma->vm_start, vma->vm_end, vma->vm_pgoff,···13171313 if (err)13181314 goto mpol_out;1319131513161316+ /*13171317+ * Lock the VMAs before scanning for pages to migrate, to ensure we don't13181318+ * miss a concurrently inserted page.13191319+ */13201320+ vma_iter_init(&vmi, mm, start);13211321+ for_each_vma_range(vmi, vma, end)13221322+ vma_start_write(vma);13231323+13201324 ret = queue_pages_range(mm, start, end, nmask,13211325 flags | MPOL_MF_INVERT, &pagelist);13221326···15501538 break;15511539 }1552154015411541+ vma_start_write(vma);15531542 new->home_node = home_node;15541543 err = mbind_range(&vmi, vma, &prev, start, end, new);15551544 mpol_put(new);
···4848 if (walk->no_vma) {4949 /*5050 * pte_offset_map() might apply user-specific validation.5151+ * Indeed, on x86_64 the pmd entries set up by init_espfix_ap()5252+ * fit its pmd_bad() check (_PAGE_NX set and _PAGE_RW clear),5353+ * and CONFIG_EFI_PGT_DUMP efi_mm goes so far as to walk them.5154 */5252- if (walk->mm == &init_mm)5555+ if (walk->mm == &init_mm || addr >= TASK_SIZE)5356 pte = pte_offset_kernel(pmd, addr);5457 else5558 pte = pte_offset_map(pmd, addr);
+6-3
mm/shmem.c
···27962796 if (*ppos >= i_size_read(inode))27972797 break;2798279827992799- error = shmem_get_folio(inode, *ppos / PAGE_SIZE, &folio, SGP_READ);27992799+ error = shmem_get_folio(inode, *ppos / PAGE_SIZE, &folio,28002800+ SGP_READ);28002801 if (error) {28012802 if (error == -EINVAL)28022803 error = 0;···28062805 if (folio) {28072806 folio_unlock(folio);2808280728092809- if (folio_test_hwpoison(folio)) {28082808+ if (folio_test_hwpoison(folio) ||28092809+ (folio_test_large(folio) &&28102810+ folio_test_has_hwpoisoned(folio))) {28102811 error = -EIO;28112812 break;28122813 }···28442841 folio_put(folio);28452842 folio = NULL;28462843 } else {28472847- n = splice_zeropage_into_pipe(pipe, *ppos, len);28442844+ n = splice_zeropage_into_pipe(pipe, *ppos, part);28482845 }2849284628502847 if (!n)
···25612561 ipv6_ifa_notify(0, ift);25622562 }2563256325642564- if ((create || list_empty(&idev->tempaddr_list)) &&25652565- idev->cnf.use_tempaddr > 0) {25642564+ /* Also create a temporary address if it's enabled but no temporary25652565+ * address currently exists.25662566+ * However, we get called with valid_lft == 0, prefered_lft == 0, create == false25672567+ * as part of cleanup (ie. deleting the mngtmpaddr).25682568+ * We don't want that to result in creating a new temporary ip address.25692569+ */25702570+ if (list_empty(&idev->tempaddr_list) && (valid_lft || prefered_lft))25712571+ create = true;25722572+25732573+ if (create && idev->cnf.use_tempaddr > 0) {25662574 /* When a new public address is created as described25672575 * in [ADDRCONF], also create a new temporary address.25682568- * Also create a temporary address if it's enabled but25692569- * no temporary address currently exists.25702576 */25712577 read_unlock_bh(&idev->lock);25722578 ipv6_create_tempaddr(ifp, false);
···217217218218static int nft_rbtree_gc_elem(const struct nft_set *__set,219219 struct nft_rbtree *priv,220220- struct nft_rbtree_elem *rbe)220220+ struct nft_rbtree_elem *rbe,221221+ u8 genmask)221222{222223 struct nft_set *set = (struct nft_set *)__set;223224 struct rb_node *prev = rb_prev(&rbe->node);224224- struct nft_rbtree_elem *rbe_prev = NULL;225225+ struct nft_rbtree_elem *rbe_prev;225226 struct nft_set_gc_batch *gcb;226227227228 gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);228229 if (!gcb)229230 return -ENOMEM;230231231231- /* search for expired end interval coming before this element. */232232+ /* search for end interval coming before this element.233233+ * end intervals don't carry a timeout extension, they234234+ * are coupled with the interval start element.235235+ */232236 while (prev) {233237 rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);234234- if (nft_rbtree_interval_end(rbe_prev))238238+ if (nft_rbtree_interval_end(rbe_prev) &&239239+ nft_set_elem_active(&rbe_prev->ext, genmask))235240 break;236241237242 prev = rb_prev(prev);238243 }239244240240- if (rbe_prev) {245245+ if (prev) {246246+ rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);247247+241248 rb_erase(&rbe_prev->node, &priv->root);242249 atomic_dec(&set->nelems);250250+ nft_set_gc_batch_add(gcb, rbe_prev);243251 }244252245253 rb_erase(&rbe->node, &priv->root);···329321330322 /* perform garbage collection to avoid bogus overlap reports. */331323 if (nft_set_elem_expired(&rbe->ext)) {332332- err = nft_rbtree_gc_elem(set, priv, rbe);324324+ err = nft_rbtree_gc_elem(set, priv, rbe, genmask);333325 if (err < 0)334326 return err;335327
+1-1
net/packet/af_packet.c
···36013601 if (dev) {36023602 sll->sll_hatype = dev->type;36033603 sll->sll_halen = dev->addr_len;36043604- memcpy(sll->sll_addr, dev->dev_addr, dev->addr_len);36043604+ memcpy(sll->sll_addr_flex, dev->dev_addr, dev->addr_len);36053605 } else {36063606 sll->sll_hatype = 0; /* Bad: we have no ARPHRD_UNSPEC */36073607 sll->sll_halen = 0;
+14
net/sched/sch_mqprio.c
···
                                    "Attribute type expected to be TCA_MQPRIO_MIN_RATE64");
                        return -EINVAL;
                }
+
+               if (nla_len(attr) != sizeof(u64)) {
+                       NL_SET_ERR_MSG_ATTR(extack, attr,
+                                           "Attribute TCA_MQPRIO_MIN_RATE64 expected to have 8 bytes length");
+                       return -EINVAL;
+               }
+
                if (i >= qopt->num_tc)
                        break;
                priv->min_rate[i] = nla_get_u64(attr);
···
                                    "Attribute type expected to be TCA_MQPRIO_MAX_RATE64");
                        return -EINVAL;
                }
+
+               if (nla_len(attr) != sizeof(u64)) {
+                       NL_SET_ERR_MSG_ATTR(extack, attr,
+                                           "Attribute TCA_MQPRIO_MAX_RATE64 expected to have 8 bytes length");
+                       return -EINVAL;
+               }
+
                if (i >= qopt->num_tc)
                        break;
                priv->max_rate[i] = nla_get_u64(attr);
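Both hunks above add the same guard: a u64-valued rate attribute must carry exactly 8 bytes of payload before `nla_get_u64()` reads it. A userspace sketch of that contract (the function name and return convention are illustrative, not the kernel's netlink API):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical mini-version of the added check: a u64-valued attribute
 * must carry exactly 8 bytes of payload; anything shorter would make a
 * raw 8-byte read pull in out-of-bounds bytes. */
static int parse_rate64(const void *payload, int payload_len, uint64_t *out)
{
        if (payload_len != (int)sizeof(uint64_t))
                return -1;      /* the kernel hunks return -EINVAL here */
        memcpy(out, payload, sizeof(*out));
        return 0;
}
```

The length check must come before the value is consumed, which is why the patch inserts it ahead of the `nla_get_u64()` calls.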
···
                              n->capabilities, &n->bc_entry.inputq1,
                              &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) {
                pr_warn("Broadcast rcv link creation failed, no memory\n");
-               kfree(n);
+               tipc_node_put(n);
                n = NULL;
                goto exit;
        }
+16-7
net/unix/af_unix.c
···
        return 0;
 }
 
-static void unix_mkname_bsd(struct sockaddr_un *sunaddr, int addr_len)
+static int unix_mkname_bsd(struct sockaddr_un *sunaddr, int addr_len)
 {
+       struct sockaddr_storage *addr = (struct sockaddr_storage *)sunaddr;
+       short offset = offsetof(struct sockaddr_storage, __data);
+
+       BUILD_BUG_ON(offset != offsetof(struct sockaddr_un, sun_path));
+
        /* This may look like an off by one error but it is a bit more
         * subtle.  108 is the longest valid AF_UNIX path for a binding.
         * sun_path[108] doesn't as such exist.  However in kernel space
         * we are guaranteed that it is a valid memory location in our
         * kernel address buffer because syscall functions always pass
         * a pointer of struct sockaddr_storage which has a bigger buffer
-        * than 108.
+        * than 108.  Also, we must terminate sun_path for strlen() in
+        * getname_kernel().
         */
-       ((char *)sunaddr)[addr_len] = 0;
+       addr->__data[addr_len - offset] = 0;
+
+       /* Don't pass sunaddr->sun_path to strlen().  Otherwise, 108 will
+        * cause panic if CONFIG_FORTIFY_SOURCE=y.  Let __fortify_strlen()
+        * know the actual buffer.
+        */
+       return strlen(addr->__data) + offset + 1;
 }
 
 static void __unix_remove_socket(struct sock *sk)
···
        struct path parent;
        int err;
 
-       unix_mkname_bsd(sunaddr, addr_len);
-       addr_len = strlen(sunaddr->sun_path) +
-                  offsetof(struct sockaddr_un, sun_path) + 1;
-
+       addr_len = unix_mkname_bsd(sunaddr, addr_len);
        addr = unix_create_addr(sunaddr, addr_len);
        if (!addr)
                return -ENOMEM;
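The rewritten helper folds the old two-step `strlen()` + `offsetof()` computation into one place and terminates the path inside the larger `sockaddr_storage` buffer. A userspace sketch of the same length normalization, assuming only the standard `sockaddr_un`/`sockaddr_storage` layout (the function name is illustrative):

```c
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Illustrative equivalent of the new unix_mkname_bsd(): NUL-terminate the
 * path inside the larger sockaddr_storage buffer (safe even at index 108,
 * because storage is bigger than sockaddr_un), then return the normalized
 * address length: strlen(path) + offsetof(sun_path) + 1. */
static int mkname_bsd_sketch(struct sockaddr_storage *addr, int addr_len)
{
        const size_t offset = offsetof(struct sockaddr_un, sun_path);
        char *path = (char *)addr + offset;

        ((char *)addr)[addr_len] = 0;
        return (int)(strlen(path) + offset + 1);
}
```

The `+ 1` accounts for the terminating NUL, matching the length a later `unix_create_addr()` call expects.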
···
        ret = -EACCES;
        down_write(&key->sem);
 
-       if (!capable(CAP_SYS_ADMIN)) {
+       {
+               bool is_privileged_op = false;
+
                /* only the sysadmin can chown a key to some other UID */
                if (user != (uid_t) -1 && !uid_eq(key->uid, uid))
-                       goto error_put;
+                       is_privileged_op = true;
 
                /* only the sysadmin can set the key's GID to a group other
                 * than one of those that the current process subscribes to */
                if (group != (gid_t) -1 && !gid_eq(gid, key->gid) && !in_group_p(gid))
+                       is_privileged_op = true;
+
+               if (is_privileged_op && !capable(CAP_SYS_ADMIN))
                        goto error_put;
        }
···
        down_write(&key->sem);
 
        /* if we're not the sysadmin, we can only change a key that we own */
-       if (capable(CAP_SYS_ADMIN) || uid_eq(key->uid, current_fsuid())) {
+       if (uid_eq(key->uid, current_fsuid()) || capable(CAP_SYS_ADMIN)) {
                key->perm = perm;
                notify_key(key, NOTIFY_KEY_SETATTR, 0);
                ret = 0;
+1-2
sound/core/seq/seq_ump_client.c
···
        }
 
        list_for_each_entry(fb, &client->ump->block_list, list) {
-               if (fb->info.first_group < 0 ||
-                   fb->info.first_group + fb->info.num_groups > SNDRV_UMP_MAX_GROUPS)
+               if (fb->info.first_group + fb->info.num_groups > SNDRV_UMP_MAX_GROUPS)
                        break;
                group = &client->groups[fb->info.first_group];
                for (i = 0; i < fb->info.num_groups; i++, group++) {
···
        if (!rt5682->first_hw_init)
                return 0;
 
-       if (!slave->unattach_request)
+       if (!slave->unattach_request) {
+               if (rt5682->disable_irq == true) {
+                       mutex_lock(&rt5682->disable_irq_lock);
+                       sdw_write_no_pm(slave, SDW_SCP_INTMASK1, SDW_SCP_INT1_IMPL_DEF);
+                       rt5682->disable_irq = false;
+                       mutex_unlock(&rt5682->disable_irq_lock);
+               }
                goto regmap_sync;
+       }
 
        time = wait_for_completion_timeout(&slave->initialization_complete,
                                           msecs_to_jiffies(RT5682_PROBE_TIMEOUT));
+9-1
sound/soc/codecs/rt711-sdca-sdw.c
···
        if (!rt711->first_hw_init)
                return 0;
 
-       if (!slave->unattach_request)
+       if (!slave->unattach_request) {
+               if (rt711->disable_irq == true) {
+                       mutex_lock(&rt711->disable_irq_lock);
+                       sdw_write_no_pm(slave, SDW_SCP_SDCA_INTMASK1, SDW_SCP_SDCA_INTMASK_SDCA_0);
+                       sdw_write_no_pm(slave, SDW_SCP_SDCA_INTMASK2, SDW_SCP_SDCA_INTMASK_SDCA_8);
+                       rt711->disable_irq = false;
+                       mutex_unlock(&rt711->disable_irq_lock);
+               }
                goto regmap_sync;
+       }
 
        time = wait_for_completion_timeout(&slave->initialization_complete,
                                           msecs_to_jiffies(RT711_PROBE_TIMEOUT));
+8-1
sound/soc/codecs/rt711-sdw.c
···
        if (!rt711->first_hw_init)
                return 0;
 
-       if (!slave->unattach_request)
+       if (!slave->unattach_request) {
+               if (rt711->disable_irq == true) {
+                       mutex_lock(&rt711->disable_irq_lock);
+                       sdw_write_no_pm(slave, SDW_SCP_INTMASK1, SDW_SCP_INT1_IMPL_DEF);
+                       rt711->disable_irq = false;
+                       mutex_unlock(&rt711->disable_irq_lock);
+               }
                goto regmap_sync;
+       }
 
        time = wait_for_completion_timeout(&slave->initialization_complete,
                                           msecs_to_jiffies(RT711_PROBE_TIMEOUT));
+9-1
sound/soc/codecs/rt712-sdca-sdw.c
···
        if (!rt712->first_hw_init)
                return 0;
 
-       if (!slave->unattach_request)
+       if (!slave->unattach_request) {
+               if (rt712->disable_irq == true) {
+                       mutex_lock(&rt712->disable_irq_lock);
+                       sdw_write_no_pm(slave, SDW_SCP_SDCA_INTMASK1, SDW_SCP_SDCA_INTMASK_SDCA_0);
+                       sdw_write_no_pm(slave, SDW_SCP_SDCA_INTMASK2, SDW_SCP_SDCA_INTMASK_SDCA_8);
+                       rt712->disable_irq = false;
+                       mutex_unlock(&rt712->disable_irq_lock);
+               }
                goto regmap_sync;
+       }
 
        time = wait_for_completion_timeout(&slave->initialization_complete,
                                           msecs_to_jiffies(RT712_PROBE_TIMEOUT));
+9-1
sound/soc/codecs/rt722-sdca-sdw.c
···
        if (!rt722->first_hw_init)
                return 0;
 
-       if (!slave->unattach_request)
+       if (!slave->unattach_request) {
+               if (rt722->disable_irq == true) {
+                       mutex_lock(&rt722->disable_irq_lock);
+                       sdw_write_no_pm(slave, SDW_SCP_SDCA_INTMASK1, SDW_SCP_SDCA_INTMASK_SDCA_6);
+                       sdw_write_no_pm(slave, SDW_SCP_SDCA_INTMASK2, SDW_SCP_SDCA_INTMASK_SDCA_8);
+                       rt722->disable_irq = false;
+                       mutex_unlock(&rt722->disable_irq_lock);
+               }
                goto regmap_sync;
+       }
 
        time = wait_for_completion_timeout(&slave->initialization_complete,
                                           msecs_to_jiffies(RT722_PROBE_TIMEOUT));
+3
sound/soc/codecs/wm8904.c
···
        regmap_update_bits(wm8904->regmap, WM8904_BIAS_CONTROL_0,
                           WM8904_POBCTRL, 0);
 
+       /* Fill the cache for the ADC test register */
+       regmap_read(wm8904->regmap, WM8904_ADC_TEST_0, &val);
+
        /* Can leave the device powered off until we need it */
        regcache_cache_only(wm8904->regmap, true);
        regulator_bulk_disable(ARRAY_SIZE(wm8904->supplies), wm8904->supplies);
···
        id = malloc(header.name_size);
        TEST_ASSERT(id, "Allocate memory for id string");
 
-       ret = read(stats_fd, id, header.name_size);
-       TEST_ASSERT(ret == header.name_size, "Read id string");
+       ret = pread(stats_fd, id, header.name_size, sizeof(header));
+       TEST_ASSERT(ret == header.name_size,
+                   "Expected header size '%u', read '%lu' bytes",
+                   header.name_size, ret);
 
        /* Check id string, that should start with "kvm" */
        TEST_ASSERT(!strncmp(id, "kvm", 3) && strlen(id) < header.name_size,
···
        free(stats_data);
        free(stats_desc);
        free(id);
-}
 
-
-static void vm_stats_test(struct kvm_vm *vm)
-{
-       int stats_fd = vm_get_stats_fd(vm);
-
-       stats_test(stats_fd);
-       close(stats_fd);
-       TEST_ASSERT(fcntl(stats_fd, F_GETFD) == -1, "Stats fd not freed");
-}
-
-static void vcpu_stats_test(struct kvm_vcpu *vcpu)
-{
-       int stats_fd = vcpu_get_stats_fd(vcpu);
-
-       stats_test(stats_fd);
        close(stats_fd);
        TEST_ASSERT(fcntl(stats_fd, F_GETFD) == -1, "Stats fd not freed");
 }
···
 
 int main(int argc, char *argv[])
 {
+       int vm_stats_fds, *vcpu_stats_fds;
        int i, j;
        struct kvm_vcpu **vcpus;
        struct kvm_vm **vms;
···
        vcpus = malloc(sizeof(struct kvm_vcpu *) * max_vm * max_vcpu);
        TEST_ASSERT(vcpus, "Allocate memory for storing vCPU pointers");
 
+       /*
+        * Not per-VM as the array is populated, used, and invalidated within a
+        * single for-loop iteration.
+        */
+       vcpu_stats_fds = calloc(max_vm, sizeof(*vcpu_stats_fds));
+       TEST_ASSERT(vcpu_stats_fds, "Allocate memory for VM stats fds");
+
        for (i = 0; i < max_vm; ++i) {
                vms[i] = vm_create_barebones();
                for (j = 0; j < max_vcpu; ++j)
                        vcpus[i * max_vcpu + j] = __vm_vcpu_add(vms[i], j);
        }
 
-       /* Check stats read for every VM and VCPU */
+       /*
+        * Check stats read for every VM and vCPU, with a variety of flavors.
+        * Note, stats_test() closes the passed in stats fd.
+        */
        for (i = 0; i < max_vm; ++i) {
-               vm_stats_test(vms[i]);
+               /*
+                * Verify that creating multiple userspace references to a
+                * single stats file works and doesn't cause explosions.
+                */
+               vm_stats_fds = vm_get_stats_fd(vms[i]);
+               stats_test(dup(vm_stats_fds));
+
+               /* Verify userspace can instantiate multiple stats files. */
+               stats_test(vm_get_stats_fd(vms[i]));
+
+               for (j = 0; j < max_vcpu; ++j) {
+                       vcpu_stats_fds[j] = vcpu_get_stats_fd(vcpus[i * max_vcpu + j]);
+                       stats_test(dup(vcpu_stats_fds[j]));
+                       stats_test(vcpu_get_stats_fd(vcpus[i * max_vcpu + j]));
+               }
+
+               /*
+                * Close the VM fd and redo the stats tests.  KVM should gift a
+                * reference (to the VM) to each stats fd, i.e. stats should
+                * still be accessible even after userspace has put its last
+                * _direct_ reference to the VM.
+                */
+               kvm_vm_free(vms[i]);
+
+               stats_test(vm_stats_fds);
                for (j = 0; j < max_vcpu; ++j)
-                       vcpu_stats_test(vcpus[i * max_vcpu + j]);
+                       stats_test(vcpu_stats_fds[j]);
+
                ksft_test_result_pass("vm%i\n", i);
        }
 
-       for (i = 0; i < max_vm; ++i)
-               kvm_vm_free(vms[i]);
        free(vms);
+       free(vcpus);
+       free(vcpu_stats_fds);
 
        ksft_finished();        /* Print results and exit() accordingly */
 }
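The reworked loop leans on two file-descriptor facts: `dup()` creates an independent descriptor for the same open file description, and the underlying object stays alive until its last reference is gone (for KVM stats, even past the VM fd). The generic half of that can be checked from plain userspace, here using `/dev/null` as a stand-in for a stats fd:

```c
#include <fcntl.h>
#include <unistd.h>

/* Returns 1 if a dup'd descriptor remains usable after the original is
 * closed -- the same property the test relies on when it closes the VM fd
 * but keeps exercising the saved stats fds. */
static int dup_outlives_original(void)
{
        int fd = open("/dev/null", O_RDONLY);
        int dup_fd = dup(fd);
        int ok;

        if (fd < 0 || dup_fd < 0)
                return 0;

        close(fd);
        /* F_GETFD fails with EBADF on the closed fd, succeeds on the dup. */
        ok = (fcntl(dup_fd, F_GETFD) != -1) && (fcntl(fd, F_GETFD) == -1);
        close(dup_fd);
        return ok;
}
```

The KVM-specific half (the VM itself surviving `kvm_vm_free()` while stats fds are open) is exactly what the rewritten loop asserts.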
···
 #include "kvm_util.h"
 #include "processor.h"
 
-static void test_cr4_feature_bit(struct kvm_vcpu *vcpu, struct kvm_sregs *orig,
-                                uint64_t feature_bit)
-{
-       struct kvm_sregs sregs;
-       int rc;
-
-       /* Skip the sub-test, the feature is supported. */
-       if (orig->cr4 & feature_bit)
-               return;
-
-       memcpy(&sregs, orig, sizeof(sregs));
-       sregs.cr4 |= feature_bit;
-
-       rc = _vcpu_sregs_set(vcpu, &sregs);
-       TEST_ASSERT(rc, "KVM allowed unsupported CR4 bit (0x%lx)", feature_bit);
-
-       /* Sanity check that KVM didn't change anything. */
-       vcpu_sregs_get(vcpu, &sregs);
-       TEST_ASSERT(!memcmp(&sregs, orig, sizeof(sregs)), "KVM modified sregs");
-}
+#define TEST_INVALID_CR_BIT(vcpu, cr, orig, bit)                               \
+do {                                                                           \
+       struct kvm_sregs new;                                                   \
+       int rc;                                                                 \
+                                                                               \
+       /* Skip the sub-test, the feature/bit is supported. */                  \
+       if (orig.cr & bit)                                                      \
+               break;                                                          \
+                                                                               \
+       memcpy(&new, &orig, sizeof(sregs));                                     \
+       new.cr |= bit;                                                          \
+                                                                               \
+       rc = _vcpu_sregs_set(vcpu, &new);                                       \
+       TEST_ASSERT(rc, "KVM allowed invalid " #cr " bit (0x%lx)", bit);        \
+                                                                               \
+       /* Sanity check that KVM didn't change anything. */                     \
+       vcpu_sregs_get(vcpu, &new);                                             \
+       TEST_ASSERT(!memcmp(&new, &orig, sizeof(new)), "KVM modified sregs");   \
+} while (0)
 
 static uint64_t calc_supported_cr4_feature_bits(void)
 {
···
        struct kvm_vcpu *vcpu;
        struct kvm_vm *vm;
        uint64_t cr4;
-       int rc;
+       int rc, i;
 
        /*
         * Create a dummy VM, specifically to avoid doing KVM_SET_CPUID2, and
···
 
        vcpu_sregs_get(vcpu, &sregs);
 
+       sregs.cr0 = 0;
        sregs.cr4 |= calc_supported_cr4_feature_bits();
        cr4 = sregs.cr4;
 
···
                    sregs.cr4, cr4);
 
        /* Verify all unsupported features are rejected by KVM. */
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_UMIP);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_LA57);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_VMXE);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_SMXE);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_FSGSBASE);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_PCIDE);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_OSXSAVE);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_SMEP);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_SMAP);
-       test_cr4_feature_bit(vcpu, &sregs, X86_CR4_PKE);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_UMIP);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_LA57);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_VMXE);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_SMXE);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_FSGSBASE);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_PCIDE);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_OSXSAVE);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_SMEP);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_SMAP);
+       TEST_INVALID_CR_BIT(vcpu, cr4, sregs, X86_CR4_PKE);
+
+       for (i = 32; i < 64; i++)
+               TEST_INVALID_CR_BIT(vcpu, cr0, sregs, BIT(i));
+
+       /* NW without CD is illegal, as is PG without PE. */
+       TEST_INVALID_CR_BIT(vcpu, cr0, sregs, X86_CR0_NW);
+       TEST_INVALID_CR_BIT(vcpu, cr0, sregs, X86_CR0_PG);
+
        kvm_vm_free(vm);
 
        /* Create a "real" VM and verify APIC_BASE can be set. */
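The macro asserts a two-part contract: setting an unsupported control-register bit must fail, and the failed set must leave the register state untouched. A toy userspace model of that contract (`toy_sregs_set` and `SUPPORTED_CR4` are invented for illustration, not part of the KVM API):

```c
#include <stdint.h>

#define SUPPORTED_CR4 0x3ULL    /* pretend only bits 0 and 1 are supported */

struct toy_sregs {
        uint64_t cr4;
};

/* Model setter: reject any unsupported bit and, crucially, do so without
 * modifying the stored state -- the same invariant TEST_INVALID_CR_BIT
 * checks against a failed KVM_SET_SREGS. */
static int toy_sregs_set(struct toy_sregs *s, uint64_t cr4)
{
        if (cr4 & ~SUPPORTED_CR4)
                return -1;
        s->cr4 = cr4;
        return 0;
}
```

A setter that rejected the value but still wrote partial state would pass the first assertion and fail the `memcmp()` sanity check, which is why the macro re-reads the registers after every rejected set.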
+1-3
tools/testing/selftests/net/mptcp/mptcp_join.sh
···
        elif ! iptables -V &> /dev/null; then
                echo "SKIP: Could not run all tests without iptables tool"
                exit $ksft_skip
-       fi
-
-       if ! ip6tables -V &> /dev/null; then
+       elif ! ip6tables -V &> /dev/null; then
                echo "SKIP: Could not run all tests without ip6tables tool"
                exit $ksft_skip
        fi
+22-6
tools/testing/selftests/rseq/rseq.c
···
 #include "../kselftest.h"
 #include "rseq.h"
 
-static const ptrdiff_t *libc_rseq_offset_p;
-static const unsigned int *libc_rseq_size_p;
-static const unsigned int *libc_rseq_flags_p;
+/*
+ * Define weak versions to play nice with binaries that are statically linked
+ * against a libc that doesn't support registering its own rseq.
+ */
+__weak ptrdiff_t __rseq_offset;
+__weak unsigned int __rseq_size;
+__weak unsigned int __rseq_flags;
+
+static const ptrdiff_t *libc_rseq_offset_p = &__rseq_offset;
+static const unsigned int *libc_rseq_size_p = &__rseq_size;
+static const unsigned int *libc_rseq_flags_p = &__rseq_flags;
 
 /* Offset from the thread pointer to the rseq area. */
 ptrdiff_t rseq_offset;
···
 static __attribute__((constructor))
 void rseq_init(void)
 {
-       libc_rseq_offset_p = dlsym(RTLD_NEXT, "__rseq_offset");
-       libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size");
-       libc_rseq_flags_p = dlsym(RTLD_NEXT, "__rseq_flags");
+       /*
+        * If the libc's registered rseq size isn't already valid, it may be
+        * because the binary is dynamically linked and not necessarily due to
+        * libc not having registered a restartable sequence.  Try to find the
+        * symbols if that's the case.
+        */
+       if (!*libc_rseq_size_p) {
+               libc_rseq_offset_p = dlsym(RTLD_NEXT, "__rseq_offset");
+               libc_rseq_size_p = dlsym(RTLD_NEXT, "__rseq_size");
+               libc_rseq_flags_p = dlsym(RTLD_NEXT, "__rseq_flags");
+       }
        if (libc_rseq_size_p && libc_rseq_offset_p && libc_rseq_flags_p &&
            *libc_rseq_size_p != 0) {
                /* rseq registration owned by glibc */