.mailmap
···
 Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com>
 Domen Puncer <domen@coderock.org>
 Douglas Gilbert <dougg@torque.net>
+Drew Fustini <fustini@kernel.org> <drew@pdp7.com>
+<duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr>
 Ed L. Cashin <ecashin@coraid.com>
 Elliot Berman <quic_eberman@quicinc.com> <eberman@codeaurora.org>
 Enric Balletbo i Serra <eballetbo@kernel.org> <enric.balletbo@collabora.com>
···
 Yusuke Goda <goda.yusuke@renesas.com>
 Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com>
 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>
+Zijun Hu <zijun.hu@oss.qualcomm.com> <quic_zijuhu@quicinc.com>
+Zijun Hu <zijun.hu@oss.qualcomm.com> <zijuhu@codeaurora.org>
+Zijun Hu <zijun_hu@htc.com>
Documentation/ABI/testing/sysfs-edac-scrub | +16
···
 		(RO) Supported minimum scrub cycle duration in seconds
 		by the memory scrubber.
 
+		Device-based scrub: returns the minimum scrub cycle
+		supported by the memory device.
+
+		Region-based scrub: returns the max of minimum scrub cycles
+		supported by individual memory devices that back the region.
+
 What:		/sys/bus/edac/devices/<dev-name>/scrubX/max_cycle_duration
 Date:		March 2025
 KernelVersion:	6.15
···
 Description:
 		(RO) Supported maximum scrub cycle duration in seconds
 		by the memory scrubber.
+
+		Device-based scrub: returns the maximum scrub cycle supported
+		by the memory device.
+
+		Region-based scrub: returns the min of maximum scrub cycles
+		supported by individual memory devices that back the region.
+
+		If the memory device does not provide maximum scrub cycle
+		information, return the maximum supported value of the scrub
+		cycle field.
 
 What:		/sys/bus/edac/devices/<dev-name>/scrubX/current_cycle_duration
 Date:		March 2025
Documentation/devicetree/bindings/serial/altera_uart.txt (deleted)
-Altera UART
-
-Required properties:
-- compatible : should be "ALTR,uart-1.0" <DEPRECATED>
-- compatible : should be "altr,uart-1.0"
-
-Optional properties:
-- clock-frequency : frequency of the clock input to the UART
Documentation/devicetree/bindings/serial/altr,uart-1.0.yaml (new)
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/serial/altr,uart-1.0.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Altera UART
+
+maintainers:
+  - Dinh Nguyen <dinguyen@kernel.org>
+
+allOf:
+  - $ref: /schemas/serial/serial.yaml#
+
+properties:
+  compatible:
+    const: altr,uart-1.0
+
+  clock-frequency:
+    description: Frequency of the clock input to the UART.
+
+required:
+  - compatible
+
+unevaluatedProperties: false
Documentation/networking/tls.rst
···
 Creating a TLS connection
 -------------------------
 
-First create a new TCP socket and set the TLS ULP.
+First create a new TCP socket and once the connection is established set the
+TLS ULP.
 
 .. code-block:: c
 
   sock = socket(AF_INET, SOCK_STREAM, 0);
+  connect(sock, addr, addrlen);
   setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));
 
 Setting the TLS ULP allows us to set/get TLS socket options. Currently
Documentation/process/maintainer-netdev.rst | +1 -1
···
 (as of patchwork 2.2.2).
 
 Co-posting selftests
---------------------
+~~~~~~~~~~~~~~~~~~~~
 
 Selftests should be part of the same series as the code changes.
 Specifically for fixes both code change and related test should go into
MAINTAINERS | +32 -5
···
 MELLANOX ETHERNET DRIVER (mlx5e)
 M:	Saeed Mahameed <saeedm@nvidia.com>
 M:	Tariq Toukan <tariqt@nvidia.com>
+M:	Mark Bloch <mbloch@nvidia.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 W:	https://www.nvidia.com/networking/
···
 M:	Saeed Mahameed <saeedm@nvidia.com>
 M:	Leon Romanovsky <leonro@nvidia.com>
 M:	Tariq Toukan <tariqt@nvidia.com>
+M:	Mark Bloch <mbloch@nvidia.com>
 L:	netdev@vger.kernel.org
 L:	linux-rdma@vger.kernel.org
 S:	Maintained
···
 M:	Mike Rapoport <rppt@kernel.org>
 L:	linux-mm@kvack.org
 S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git for-next
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git fixes
 F:	Documentation/core-api/boot-time-mm.rst
 F:	Documentation/core-api/kho/bindings/memblock/*
 F:	include/linux/memblock.h
···
 F:	mm/numa_emulation.c
 F:	mm/numa_memblks.c
 
+MEMORY MANAGEMENT - OOM KILLER
+M:	Michal Hocko <mhocko@suse.com>
+R:	David Rientjes <rientjes@google.com>
+R:	Shakeel Butt <shakeel.butt@linux.dev>
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	include/linux/oom.h
+F:	include/trace/events/oom.h
+F:	include/uapi/linux/oom.h
+F:	mm/oom_kill.c
+
 MEMORY MANAGEMENT - PAGE ALLOCATOR
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Vlastimil Babka <vbabka@suse.cz>
···
 F:	include/linux/gfp.h
 F:	include/linux/page-isolation.h
 F:	mm/compaction.c
+F:	mm/debug_page_alloc.c
+F:	mm/fail_page_alloc.c
 F:	mm/page_alloc.c
+F:	mm/page_ext.c
+F:	mm/page_frag_cache.c
 F:	mm/page_isolation.c
+F:	mm/page_owner.c
+F:	mm/page_poison.c
+F:	mm/page_reporting.c
+F:	mm/show_mem.c
+F:	mm/shuffle.c
 
 MEMORY MANAGEMENT - RECLAIM
 M:	Andrew Morton <akpm@linux-foundation.org>
···
 MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE)
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@redhat.com>
+M:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Zi Yan <ziy@nvidia.com>
 R:	Baolin Wang <baolin.wang@linux.alibaba.com>
-R:	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
 R:	Liam R. Howlett <Liam.Howlett@oracle.com>
 R:	Nico Pache <npache@redhat.com>
 R:	Ryan Roberts <ryan.roberts@arm.com>
···
 L:	netdev@vger.kernel.org
 L:	linux-renesas-soc@vger.kernel.org
 S:	Maintained
-F:	Documentation/devicetree/bindings/net/renesas,r9a09g057-gbeth.yaml
+F:	Documentation/devicetree/bindings/net/renesas,rzv2h-gbeth.yaml
 F:	drivers/net/ethernet/stmicro/stmmac/dwmac-renesas-gbeth.c
 
 RENESAS RZ/V2H(P) USB2PHY PORT RESET DRIVER
···
 K:	spacemit
 
 RISC-V THEAD SoC SUPPORT
-M:	Drew Fustini <drew@pdp7.com>
+M:	Drew Fustini <fustini@kernel.org>
 M:	Guo Ren <guoren@kernel.org>
 M:	Fu Wei <wefu@redhat.com>
 L:	linux-riscv@lists.infradead.org
···
 F:	drivers/misc/sgi-xp/
 
 SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
+M:	D. Wythe <alibuda@linux.alibaba.com>
+M:	Dust Li <dust.li@linux.alibaba.com>
+M:	Sidraya Jayagond <sidraya@linux.ibm.com>
 M:	Wenjia Zhang <wenjia@linux.ibm.com>
-M:	Jan Karcher <jaka@linux.ibm.com>
-R:	D. Wythe <alibuda@linux.alibaba.com>
+R:	Mahanta Jambigi <mjambigi@linux.ibm.com>
 R:	Tony Lu <tonylu@linux.alibaba.com>
 R:	Wen Gu <guwen@linux.alibaba.com>
 L:	linux-rdma@vger.kernel.org
···
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	drivers/i2c/busses/i2c-designware-amdisp.c
+F:	include/linux/soc/amd/isp4_misc.h
 
 SYNOPSYS DESIGNWARE MMC/SD/SDIO DRIVER
 M:	Jaehoon Chung <jh80.chung@samsung.com>
···
 #define ORC_TYPE_REGS		3
 #define ORC_TYPE_REGS_PARTIAL	4
 
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
 /*
  * This struct is more or less a vastly simplified version of the DWARF Call
  * Frame Information standard. It contains only the necessary parts of DWARF
···
 	unsigned int type:3;
 	unsigned int signal:1;
 };
-#endif /* __ASSEMBLY__ */
+#endif /* __ASSEMBLER__ */
 
 #endif /* _ORC_TYPES_H */
···
 #ifndef __ASM_VDSO_VSYSCALL_H
 #define __ASM_VDSO_VSYSCALL_H
 
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
 
 #include <vdso/datapage.h>
 
 /* The asm-generic header needs to be included after the definitions above */
 #include <asm-generic/vdso/vsyscall.h>
 
-#endif /* !__ASSEMBLY__ */
+#endif /* !__ASSEMBLER__ */
 
 #endif /* __ASM_VDSO_VSYSCALL_H */
···
 	if (efi_memmap_init_early(&data) < 0)
 		panic("Unable to map EFI memory map.\n");
 
+	/*
+	 * Reserve the physical memory region occupied by the EFI
+	 * memory map table (header + descriptors). This is crucial
+	 * for kdump, as the kdump kernel relies on this original
+	 * memmap passed by the bootloader. Without reservation,
+	 * this region could be overwritten by the primary kernel.
+	 * Also, set the EFI_PRESERVE_BS_REGIONS flag to indicate that
+	 * critical boot services code/data regions like this are preserved.
+	 */
+	memblock_reserve((phys_addr_t)boot_memmap, sizeof(*tbl) + data.size);
+	set_bit(EFI_PRESERVE_BS_REGIONS, &efi.flags);
+
 	early_memunmap(tbl, sizeof(*tbl));
 }
···
 #endif
 ;
 unsigned long boot_cpu_hartid;
+EXPORT_SYMBOL_GPL(boot_cpu_hartid);
 
 /*
  * Place kernel memory regions on the resource tree so that
arch/riscv/kernel/traps_misaligned.c | +2 -2
···
 
 	val.data_u64 = 0;
 	if (user_mode(regs)) {
-		if (copy_from_user_nofault(&val, (u8 __user *)addr, len))
+		if (copy_from_user(&val, (u8 __user *)addr, len))
 			return -1;
 	} else {
 		memcpy(&val, (u8 *)addr, len);
···
 		return -EOPNOTSUPP;
 
 	if (user_mode(regs)) {
-		if (copy_to_user_nofault((u8 __user *)addr, &val, len))
+		if (copy_to_user((u8 __user *)addr, &val, len))
 			return -1;
 	} else {
 		memcpy((u8 *)addr, &val, len);
···
 #include <linux/types.h>
 
 /* All SiFive vendor extensions supported in Linux */
-const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = {
+static const struct riscv_isa_ext_data riscv_isa_vendor_ext_sifive[] = {
 	__RISCV_ISA_EXT_DATA(xsfvfnrclipxfqf, RISCV_ISA_VENDOR_EXT_XSFVFNRCLIPXFQF),
 	__RISCV_ISA_EXT_DATA(xsfvfwmaccqqq, RISCV_ISA_VENDOR_EXT_XSFVFWMACCQQQ),
 	__RISCV_ISA_EXT_DATA(xsfvqmaccdod, RISCV_ISA_VENDOR_EXT_XSFVQMACCDOD),
arch/s390/include/asm/ptrace.h | +1 -1
···
 	addr = kernel_stack_pointer(regs) + n * sizeof(long);
 	if (!regs_within_kernel_stack(regs, addr))
 		return 0;
-	return READ_ONCE_NOCHECK(addr);
+	return READ_ONCE_NOCHECK(*(unsigned long *)addr);
 }
 
 /**
arch/s390/pci/pci_event.c | +44 -15
···
 	case PCI_ERS_RESULT_CAN_RECOVER:
 	case PCI_ERS_RESULT_RECOVERED:
 	case PCI_ERS_RESULT_NEED_RESET:
+	case PCI_ERS_RESULT_NONE:
 		return false;
 	default:
 		return true;
···
 	if (!driver || !driver->err_handler)
 		return false;
 	if (!driver->err_handler->error_detected)
-		return false;
-	if (!driver->err_handler->slot_reset)
-		return false;
-	if (!driver->err_handler->resume)
 		return false;
 	return true;
 }
···
 	struct zpci_dev *zdev = to_zpci(pdev);
 	int rc;
 
+	/* The underlying device may have been disabled by the event */
+	if (!zdev_enabled(zdev))
+		return PCI_ERS_RESULT_NEED_RESET;
+
 	pr_info("%s: Unblocking device access for examination\n", pci_name(pdev));
 	rc = zpci_reset_load_store_blocked(zdev);
 	if (rc) {
···
 		return PCI_ERS_RESULT_NEED_RESET;
 	}
 
-	if (driver->err_handler->mmio_enabled) {
+	if (driver->err_handler->mmio_enabled)
 		ers_res = driver->err_handler->mmio_enabled(pdev);
-		if (ers_result_indicates_abort(ers_res)) {
-			pr_info("%s: Automatic recovery failed after MMIO re-enable\n",
-				pci_name(pdev));
-			return ers_res;
-		} else if (ers_res == PCI_ERS_RESULT_NEED_RESET) {
-			pr_debug("%s: Driver needs reset to recover\n", pci_name(pdev));
-			return ers_res;
-		}
+	else
+		ers_res = PCI_ERS_RESULT_NONE;
+
+	if (ers_result_indicates_abort(ers_res)) {
+		pr_info("%s: Automatic recovery failed after MMIO re-enable\n",
+			pci_name(pdev));
+		return ers_res;
+	} else if (ers_res == PCI_ERS_RESULT_NEED_RESET) {
+		pr_debug("%s: Driver needs reset to recover\n", pci_name(pdev));
+		return ers_res;
 	}
 
 	pr_debug("%s: Unblocking DMA\n", pci_name(pdev));
···
 		return ers_res;
 	}
 	pdev->error_state = pci_channel_io_normal;
-	ers_res = driver->err_handler->slot_reset(pdev);
+
+	if (driver->err_handler->slot_reset)
+		ers_res = driver->err_handler->slot_reset(pdev);
+	else
+		ers_res = PCI_ERS_RESULT_NONE;
+
 	if (ers_result_indicates_abort(ers_res)) {
 		pr_info("%s: Automatic recovery failed after slot reset\n", pci_name(pdev));
 		return ers_res;
···
 		goto out_unlock;
 	}
 
-	if (ers_res == PCI_ERS_RESULT_CAN_RECOVER) {
+	if (ers_res != PCI_ERS_RESULT_NEED_RESET) {
 		ers_res = zpci_event_do_error_state_clear(pdev, driver);
 		if (ers_result_indicates_abort(ers_res)) {
 			status_str = "failed (abort on MMIO enable)";
···
 
 	if (ers_res == PCI_ERS_RESULT_NEED_RESET)
 		ers_res = zpci_event_do_reset(pdev, driver);
+
+	/*
+	 * ers_res can be PCI_ERS_RESULT_NONE either because the driver
+	 * decided to return it, indicating that it abstains from voting
+	 * on how to recover, or because it didn't implement the callback.
+	 * Both cases assume, that if there is nothing else causing a
+	 * disconnect, we recovered successfully.
+	 */
+	if (ers_res == PCI_ERS_RESULT_NONE)
+		ers_res = PCI_ERS_RESULT_RECOVERED;
 
 	if (ers_res != PCI_ERS_RESULT_RECOVERED) {
 		pr_err("%s: Automatic recovery failed; operator intervention is required\n",
···
 	struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
 	struct pci_dev *pdev = NULL;
 	pci_ers_result_t ers_res;
+	u32 fh = 0;
+	int rc;
 
 	zpci_dbg(3, "err fid:%x, fh:%x, pec:%x\n",
 		 ccdf->fid, ccdf->fh, ccdf->pec);
···
 
 	if (zdev) {
 		mutex_lock(&zdev->state_lock);
+		rc = clp_refresh_fh(zdev->fid, &fh);
+		if (rc)
+			goto no_pdev;
+		if (!fh || ccdf->fh != fh) {
+			/* Ignore events with stale handles */
+			zpci_dbg(3, "err fid:%x, fh:%x (stale %x)\n",
+				 ccdf->fid, fh, ccdf->fh);
+			goto no_pdev;
+		}
 		zpci_update_fh(zdev, ccdf->fh);
 		if (zdev->zbus->bus)
 			pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
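The recovery-voting change above can be illustrated in isolation: `PCI_ERS_RESULT_NONE` now counts as a non-aborting "abstain" that, absent any other failure, is folded into a successful recovery. This sketch uses a local enum mirroring a subset of the kernel's `pci_ers_result_t`; the helper names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* Subset of the kernel's pci_ers_result_t values, for illustration only. */
enum ers_result {
	ERS_RESULT_NONE,
	ERS_RESULT_CAN_RECOVER,
	ERS_RESULT_NEED_RESET,
	ERS_RESULT_DISCONNECT,
	ERS_RESULT_RECOVERED,
};

/* Mirrors the patched logic: only results outside the known-good set abort. */
static bool result_indicates_abort(enum ers_result res)
{
	switch (res) {
	case ERS_RESULT_NONE:
	case ERS_RESULT_CAN_RECOVER:
	case ERS_RESULT_RECOVERED:
	case ERS_RESULT_NEED_RESET:
		return false;
	default:
		return true;
	}
}

/*
 * NONE means the driver abstained from voting (or lacks the callback);
 * with nothing else forcing a disconnect, treat that as recovered.
 */
static enum ers_result finalize_result(enum ers_result res)
{
	return res == ERS_RESULT_NONE ? ERS_RESULT_RECOVERED : res;
}
```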
arch/x86/include/asm/debugreg.h | +15 -4
···
 #include <asm/cpufeature.h>
 #include <asm/msr.h>
 
+/*
+ * Define bits that are always set to 1 in DR7, only bit 10 is
+ * architecturally reserved to '1'.
+ *
+ * This is also the init/reset value for DR7.
+ */
+#define DR7_FIXED_1	0x00000400
+
 DECLARE_PER_CPU(unsigned long, cpu_dr7);
 
 #ifndef CONFIG_PARAVIRT_XXL
···
 
 static inline void hw_breakpoint_disable(void)
 {
-	/* Zero the control register for HW Breakpoint */
-	set_debugreg(0UL, 7);
+	/* Reset the control register for HW Breakpoint */
+	set_debugreg(DR7_FIXED_1, 7);
 
 	/* Zero-out the individual HW breakpoint address registers */
 	set_debugreg(0UL, 0);
···
 		return 0;
 
 	get_debugreg(dr7, 7);
-	dr7 &= ~0x400; /* architecturally set bit */
+
+	/* Architecturally set bit */
+	dr7 &= ~DR7_FIXED_1;
 	if (dr7)
-		set_debugreg(0, 7);
+		set_debugreg(DR7_FIXED_1, 7);
+
 	/*
 	 * Ensure the compiler doesn't lower the above statements into
 	 * the critical section; disabling breakpoints late would not
arch/x86/include/uapi/asm/debugreg.h
···
    which debugging register was responsible for the trap. The other bits
    are either reserved or not of interest to us. */
 
-/* Define reserved bits in DR6 which are always set to 1 */
+/*
+ * Define bits in DR6 which are set to 1 by default.
+ *
+ * This is also the DR6 architectural value following Power-up, Reset or INIT.
+ *
+ * Note, with the introduction of Bus Lock Detection (BLD) and Restricted
+ * Transactional Memory (RTM), the DR6 register has been modified:
+ *
+ * 1) BLD flag (bit 11) is no longer reserved to 1 if the CPU supports
+ *    Bus Lock Detection. The assertion of a bus lock could clear it.
+ *
+ * 2) RTM flag (bit 16) is no longer reserved to 1 if the CPU supports
+ *    restricted transactional memory. #DB occurred inside an RTM region
+ *    could clear it.
+ *
+ * Apparently, DR6.BLD and DR6.RTM are active low bits.
+ *
+ * As a result, DR6_RESERVED is an incorrect name now, but it is kept for
+ * compatibility.
+ */
 #define DR6_RESERVED	(0xFFFF0FF0)
 
 #define DR_TRAP0	(0x1)		/* db0 */
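The active-low point made in the comment above can be checked in userspace: XOR-ing a raw DR6 value with `DR6_RESERVED` flips both the always-set bits and the active-low BLD/RTM bits into positive polarity, so every "event happened" bit reads as 1. The bit names below follow the architecture; the helper itself is an illustrative sketch:

```c
#include <assert.h>
#include <stdint.h>

#define DR6_RESERVED	0xFFFF0FF0u	/* architectural reset value of DR6 */
#define DR6_BP0		(1u << 0)	/* breakpoint 0 hit (active high) */
#define DR6_BLD		(1u << 11)	/* bus lock detected (active LOW) */
#define DR6_RTM		(1u << 16)	/* #DB inside RTM region (active LOW) */

/* Flip DR6 to positive polarity so every "event happened" bit reads as 1. */
static uint32_t dr6_positive(uint32_t raw)
{
	return raw ^ DR6_RESERVED;
}
```

After the flip, the handler can test BP0, BLD and RTM uniformly as set-when-triggered bits.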
arch/x86/kernel/cpu/common.c | +10 -14
···
 #endif
 #endif
 
-/*
- * Clear all 6 debug registers:
- */
-static void clear_all_debug_regs(void)
+static void initialize_debug_regs(void)
 {
-	int i;
-
-	for (i = 0; i < 8; i++) {
-		/* Ignore db4, db5 */
-		if ((i == 4) || (i == 5))
-			continue;
-
-		set_debugreg(0, i);
-	}
+	/* Control register first -- to make sure everything is disabled. */
+	set_debugreg(DR7_FIXED_1, 7);
+	set_debugreg(DR6_RESERVED, 6);
+	/* dr5 and dr4 don't exist */
+	set_debugreg(0, 3);
+	set_debugreg(0, 2);
+	set_debugreg(0, 1);
+	set_debugreg(0, 0);
 }
 
 #ifdef CONFIG_KGDB
···
 
 	load_mm_ldt(&init_mm);
 
-	clear_all_debug_regs();
+	initialize_debug_regs();
 	dbg_restore_debug_regs();
 
 	doublefault_init_cpu_tss();
arch/x86/kernel/kgdb.c | +1 -1
···
 	struct perf_event *bp;
 
 	/* Disable hardware debugging while we are in kgdb: */
-	set_debugreg(0UL, 7);
+	set_debugreg(DR7_FIXED_1, 7);
 	for (i = 0; i < HBP_NUM; i++) {
 		if (!breakinfo[i].enabled)
 			continue;
arch/x86/kernel/process_32.c | +1 -1
···
 
 	/* Only print out debug registers if they are in their non-default state. */
 	if ((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) &&
-	    (d6 == DR6_RESERVED) && (d7 == 0x400))
+	    (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1))
 		return;
 
 	printk("%sDR0: %08lx DR1: %08lx DR2: %08lx DR3: %08lx\n",
arch/x86/kernel/process_64.c | +1 -1
···
 
 	/* Only print out debug registers if they are in their non-default state. */
 	if (!((d0 == 0) && (d1 == 0) && (d2 == 0) && (d3 == 0) &&
-	      (d6 == DR6_RESERVED) && (d7 == 0x400))) {
+	      (d6 == DR6_RESERVED) && (d7 == DR7_FIXED_1))) {
 		printk("%sDR0: %016lx DR1: %016lx DR2: %016lx\n",
 		       log_lvl, d0, d1, d2);
 		printk("%sDR3: %016lx DR6: %016lx DR7: %016lx\n",
arch/x86/kernel/traps.c | +21 -13
···
 #endif
 }
 
-static __always_inline unsigned long debug_read_clear_dr6(void)
+static __always_inline unsigned long debug_read_reset_dr6(void)
 {
 	unsigned long dr6;
+
+	get_debugreg(dr6, 6);
+	dr6 ^= DR6_RESERVED;	/* Flip to positive polarity */
 
 	/*
 	 * The Intel SDM says:
 	 *
-	 *   Certain debug exceptions may clear bits 0-3. The remaining
-	 *   contents of the DR6 register are never cleared by the
-	 *   processor. To avoid confusion in identifying debug
-	 *   exceptions, debug handlers should clear the register before
-	 *   returning to the interrupted task.
+	 *   Certain debug exceptions may clear bits 0-3 of DR6.
 	 *
-	 * Keep it simple: clear DR6 immediately.
+	 *   BLD induced #DB clears DR6.BLD and any other debug
+	 *   exception doesn't modify DR6.BLD.
+	 *
+	 *   RTM induced #DB clears DR6.RTM and any other debug
+	 *   exception sets DR6.RTM.
+	 *
+	 *   To avoid confusion in identifying debug exceptions,
+	 *   debug handlers should set DR6.BLD and DR6.RTM, and
+	 *   clear other DR6 bits before returning.
+	 *
+	 * Keep it simple: write DR6 with its architectural reset
+	 * value 0xFFFF0FF0, defined as DR6_RESERVED, immediately.
 	 */
-	get_debugreg(dr6, 6);
 	set_debugreg(DR6_RESERVED, 6);
-	dr6 ^= DR6_RESERVED; /* Flip to positive polarity */
 
 	return dr6;
 }
···
 /* IST stack entry */
 DEFINE_IDTENTRY_DEBUG(exc_debug)
 {
-	exc_debug_kernel(regs, debug_read_clear_dr6());
+	exc_debug_kernel(regs, debug_read_reset_dr6());
 }
 
 /* User entry, runs on regular task stack */
 DEFINE_IDTENTRY_DEBUG_USER(exc_debug)
 {
-	exc_debug_user(regs, debug_read_clear_dr6());
+	exc_debug_user(regs, debug_read_reset_dr6());
 }
 
 #ifdef CONFIG_X86_FRED
···
 {
 	/*
 	 * FRED #DB stores DR6 on the stack in the format which
-	 * debug_read_clear_dr6() returns for the IDT entry points.
+	 * debug_read_reset_dr6() returns for the IDT entry points.
 	 */
 	unsigned long dr6 = fred_event_data(regs);
 
···
 /* 32 bit does not have separate entry points. */
 DEFINE_IDTENTRY_RAW(exc_debug)
 {
-	unsigned long dr6 = debug_read_clear_dr6();
+	unsigned long dr6 = debug_read_reset_dr6();
 
 	if (user_mode(regs))
 		exc_debug_user(regs, dr6);
block/genhd.c
···
 static void bdev_count_inflight_rw(struct block_device *part,
 		unsigned int inflight[2], bool mq_driver)
 {
+	int write = 0;
+	int read = 0;
 	int cpu;
 
 	if (mq_driver) {
 		blk_mq_in_driver_rw(part, inflight);
-	} else {
-		for_each_possible_cpu(cpu) {
-			inflight[READ] += part_stat_local_read_cpu(
-						part, in_flight[READ], cpu);
-			inflight[WRITE] += part_stat_local_read_cpu(
-						part, in_flight[WRITE], cpu);
-		}
+		return;
 	}
 
-	if (WARN_ON_ONCE((int)inflight[READ] < 0))
-		inflight[READ] = 0;
-	if (WARN_ON_ONCE((int)inflight[WRITE] < 0))
-		inflight[WRITE] = 0;
+	for_each_possible_cpu(cpu) {
+		read += part_stat_local_read_cpu(part, in_flight[READ], cpu);
+		write += part_stat_local_read_cpu(part, in_flight[WRITE], cpu);
+	}
+
+	/*
+	 * While iterating all CPUs, some IOs may be issued from a CPU already
+	 * traversed and complete on a CPU that has not yet been traversed,
+	 * causing the inflight number to be negative.
+	 */
+	inflight[READ] = read > 0 ? read : 0;
+	inflight[WRITE] = write > 0 ? write : 0;
 }
 
 /**
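The race described in that comment is easy to reproduce in miniature: per-CPU counters are incremented on the issuing CPU and decremented on the completing CPU, so a racy snapshot of their sum can go negative. The patch accumulates in a signed type and clamps rather than warning. A userspace sketch (the function name is illustrative):

```c
#include <assert.h>

/*
 * Sum per-CPU in-flight counters. Individual entries (and even the
 * total of a racy snapshot) can be negative when a completion is
 * observed on a not-yet-traversed CPU before the matching issue was
 * counted, so accumulate signed and clamp at zero.
 */
static unsigned int sum_inflight(const int *per_cpu, int ncpu)
{
	int total = 0;

	for (int i = 0; i < ncpu; i++)
		total += per_cpu[i];

	return total > 0 ? total : 0;
}
```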
···
 	{
 		.matches = {
 			DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
-			DMI_MATCH(DMI_PRODUCT_VERSION, "ASUSPRO D840MB_M840SA"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "ASUSPRO D840MB_M840SA"),
 		},
 		/* 320 is broken, there is no known good version. */
 	},
drivers/block/ublk_drv.c | +37 -12
···
 	blk_mq_end_request(req, res);
 }
 
-static void ublk_complete_io_cmd(struct ublk_io *io, struct request *req,
-				 int res, unsigned issue_flags)
+static struct io_uring_cmd *__ublk_prep_compl_io_cmd(struct ublk_io *io,
+						     struct request *req)
 {
 	/* read cmd first because req will overwrite it */
 	struct io_uring_cmd *cmd = io->cmd;
···
 	io->flags &= ~UBLK_IO_FLAG_ACTIVE;
 
 	io->req = req;
+	return cmd;
+}
+
+static void ublk_complete_io_cmd(struct ublk_io *io, struct request *req,
+				 int res, unsigned issue_flags)
+{
+	struct io_uring_cmd *cmd = __ublk_prep_compl_io_cmd(io, req);
 
 	/* tell ublksrv one io request is coming */
 	io_uring_cmd_done(cmd, res, 0, issue_flags);
···
 	return BLK_STS_OK;
 }
 
+static inline bool ublk_belong_to_same_batch(const struct ublk_io *io,
+					     const struct ublk_io *io2)
+{
+	return (io_uring_cmd_ctx_handle(io->cmd) ==
+		io_uring_cmd_ctx_handle(io2->cmd)) &&
+		(io->task == io2->task);
+}
+
 static void ublk_queue_rqs(struct rq_list *rqlist)
 {
 	struct rq_list requeue_list = { };
···
 		struct ublk_queue *this_q = req->mq_hctx->driver_data;
 		struct ublk_io *this_io = &this_q->ios[req->tag];
 
-		if (io && io->task != this_io->task && !rq_list_empty(&submit_list))
+		if (io && !ublk_belong_to_same_batch(io, this_io) &&
+		    !rq_list_empty(&submit_list))
 			ublk_queue_cmd_list(io, &submit_list);
 		io = this_io;
···
 	return 0;
 }
 
-static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io)
+static bool ublk_get_data(const struct ublk_queue *ubq, struct ublk_io *io,
+			  struct request *req)
 {
-	struct request *req = io->req;
-
 	/*
 	 * We have handled UBLK_IO_NEED_GET_DATA command,
 	 * so clear UBLK_IO_FLAG_NEED_GET_DATA now and just
···
 	u32 cmd_op = cmd->cmd_op;
 	unsigned tag = ub_cmd->tag;
 	int ret = -EINVAL;
+	struct request *req;
 
 	pr_devel("%s: received: cmd op %d queue %d tag %d result %d\n",
 			__func__, cmd->cmd_op, ub_cmd->q_id, tag,
···
 			goto out;
 		break;
 	case UBLK_IO_NEED_GET_DATA:
-		io->addr = ub_cmd->addr;
-		if (!ublk_get_data(ubq, io))
-			return -EIOCBQUEUED;
-
-		return UBLK_IO_RES_OK;
+		/*
+		 * ublk_get_data() may fail and fallback to requeue, so keep
+		 * uring_cmd active first and prepare for handling new requeued
+		 * request
+		 */
+		req = io->req;
+		ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
+		io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV;
+		if (likely(ublk_get_data(ubq, io, req))) {
+			__ublk_prep_compl_io_cmd(io, req);
+			return UBLK_IO_RES_OK;
+		}
+		break;
 	default:
 		goto out;
 	}
···
 	if (copy_from_user(&info, argp, sizeof(info)))
 		return -EFAULT;
 
-	if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || info.nr_hw_queues > UBLK_MAX_NR_QUEUES)
+	if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || !info.queue_depth ||
+	    info.nr_hw_queues > UBLK_MAX_NR_QUEUES || !info.nr_hw_queues)
 		return -EINVAL;
 
 	if (capable(CAP_SYS_ADMIN))
drivers/cxl/core/edac.c | +13 -5
···
 			u8 *cap, u16 *cycle, u8 *flags, u8 *min_cycle)
 {
 	struct cxl_mailbox *cxl_mbox;
-	u8 min_scrub_cycle = U8_MAX;
 	struct cxl_region_params *p;
 	struct cxl_memdev *cxlmd;
 	struct cxl_region *cxlr;
+	u8 min_scrub_cycle = 0;
 	int i, ret;
 
 	if (!cxl_ps_ctx->cxlr) {
···
 		if (ret)
 			return ret;
 
+		/*
+		 * The min_scrub_cycle of a region is the max of minimum scrub
+		 * cycles supported by memdevs that back the region.
+		 */
 		if (min_cycle)
-			min_scrub_cycle = min(*min_cycle, min_scrub_cycle);
+			min_scrub_cycle = max(*min_cycle, min_scrub_cycle);
 	}
 
 	if (min_cycle)
···
 		old_rec = xa_store(&array_rec->rec_gen_media,
 				   le64_to_cpu(rec->media_hdr.phys_addr), rec,
 				   GFP_KERNEL);
-		if (xa_is_err(old_rec))
+		if (xa_is_err(old_rec)) {
+			kfree(rec);
 			return xa_err(old_rec);
+		}
 
 		kfree(old_rec);
 
···
 		old_rec = xa_store(&array_rec->rec_dram,
 				   le64_to_cpu(rec->media_hdr.phys_addr), rec,
 				   GFP_KERNEL);
-		if (xa_is_err(old_rec))
+		if (xa_is_err(old_rec)) {
+			kfree(rec);
 			return xa_err(old_rec);
+		}
 
 		kfree(old_rec);
 
···
 		attrbs.bank = ctx->bank;
 		break;
 	case EDAC_REPAIR_RANK_SPARING:
-		attrbs.repair_type = CXL_BANK_SPARING;
+		attrbs.repair_type = CXL_RANK_SPARING;
 		break;
 	default:
 		return NULL;
drivers/cxl/core/features.c | +1 -1
···
 	u32 flags;
 
 	if (rpc_in->op_size < sizeof(uuid_t))
-		return ERR_PTR(-EINVAL);
+		return false;
 
 	feat = cxl_feature_info(cxlfs, &rpc_in->set_feat_in.uuid);
 	if (IS_ERR(feat))
drivers/edac/amd64_edac.c
···
 	if (csrow_enabled(2 * dimm + 1, ctrl, pvt))
 		cs_mode |= CS_ODD_PRIMARY;
 
-	/* Asymmetric dual-rank DIMM support. */
+	if (csrow_sec_enabled(2 * dimm, ctrl, pvt))
+		cs_mode |= CS_EVEN_SECONDARY;
+
 	if (csrow_sec_enabled(2 * dimm + 1, ctrl, pvt))
 		cs_mode |= CS_ODD_SECONDARY;
 
···
 	return cs_mode;
 }
 
-static int __addr_mask_to_cs_size(u32 addr_mask_orig, unsigned int cs_mode,
-				  int csrow_nr, int dimm)
+static int calculate_cs_size(u32 mask, unsigned int cs_mode)
 {
-	u32 msb, weight, num_zero_bits;
-	u32 addr_mask_deinterleaved;
-	int size = 0;
+	int msb, weight, num_zero_bits;
+	u32 deinterleaved_mask;
+
+	if (!mask)
+		return 0;
 
 	/*
 	 * The number of zero bits in the mask is equal to the number of bits
···
 	 * without swapping with the most significant bit. This can be handled
 	 * by keeping the MSB where it is and ignoring the single zero bit.
 	 */
-	msb = fls(addr_mask_orig) - 1;
-	weight = hweight_long(addr_mask_orig);
+	msb = fls(mask) - 1;
+	weight = hweight_long(mask);
 	num_zero_bits = msb - weight - !!(cs_mode & CS_3R_INTERLEAVE);
 
 	/* Take the number of zero bits off from the top of the mask. */
-	addr_mask_deinterleaved = GENMASK_ULL(msb - num_zero_bits, 1);
+	deinterleaved_mask = GENMASK(msb - num_zero_bits, 1);
+	edac_dbg(1, "  Deinterleaved AddrMask: 0x%x\n", deinterleaved_mask);
+
+	return (deinterleaved_mask >> 2) + 1;
+}
+
+static int __addr_mask_to_cs_size(u32 addr_mask, u32 addr_mask_sec,
+				  unsigned int cs_mode, int csrow_nr, int dimm)
+{
+	int size;
 
 	edac_dbg(1, "CS%d DIMM%d AddrMasks:\n", csrow_nr, dimm);
-	edac_dbg(1, "  Original AddrMask: 0x%x\n", addr_mask_orig);
-	edac_dbg(1, "  Deinterleaved AddrMask: 0x%x\n", addr_mask_deinterleaved);
+	edac_dbg(1, "  Primary AddrMask: 0x%x\n", addr_mask);
 
 	/* Register [31:1] = Address [39:9]. Size is in kBs here. */
-	size = (addr_mask_deinterleaved >> 2) + 1;
+	size = calculate_cs_size(addr_mask, cs_mode);
+
+	edac_dbg(1, "  Secondary AddrMask: 0x%x\n", addr_mask_sec);
+	size += calculate_cs_size(addr_mask_sec, cs_mode);
 
 	/* Return size in MBs. */
 	return size >> 10;
···
 static int umc_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc,
 				    unsigned int cs_mode, int csrow_nr)
 {
+	u32 addr_mask = 0, addr_mask_sec = 0;
 	int cs_mask_nr = csrow_nr;
-	u32 addr_mask_orig;
 	int dimm, size = 0;
 
 	/* No Chip Selects are enabled. */
···
 	if (!pvt->flags.zn_regs_v2)
 		cs_mask_nr >>= 1;
 
-	/* Asymmetric dual-rank DIMM support. */
-	if ((csrow_nr & 1) && (cs_mode & CS_ODD_SECONDARY))
-		addr_mask_orig = pvt->csels[umc].csmasks_sec[cs_mask_nr];
-	else
-		addr_mask_orig = pvt->csels[umc].csmasks[cs_mask_nr];
+	if (cs_mode & (CS_EVEN_PRIMARY | CS_ODD_PRIMARY))
+		addr_mask = pvt->csels[umc].csmasks[cs_mask_nr];
 
-	return __addr_mask_to_cs_size(addr_mask_orig, cs_mode, csrow_nr, dimm);
+	if (cs_mode & (CS_EVEN_SECONDARY | CS_ODD_SECONDARY))
+		addr_mask_sec = pvt->csels[umc].csmasks_sec[cs_mask_nr];
+
+	return __addr_mask_to_cs_size(addr_mask, addr_mask_sec, cs_mode, csrow_nr, dimm);
 }
 
 static void umc_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl)
···
 static int gpu_addr_mask_to_cs_size(struct amd64_pvt *pvt, u8 umc,
 				    unsigned int cs_mode, int csrow_nr)
 {
-	u32 addr_mask_orig = pvt->csels[umc].csmasks[csrow_nr];
+	u32 addr_mask = pvt->csels[umc].csmasks[csrow_nr];
+	u32 addr_mask_sec = pvt->csels[umc].csmasks_sec[csrow_nr];
 
-	return __addr_mask_to_cs_size(addr_mask_orig, cs_mode, csrow_nr, csrow_nr >> 1);
+	return __addr_mask_to_cs_size(addr_mask, addr_mask_sec, cs_mode, csrow_nr, csrow_nr >> 1);
 }
 
 static void gpu_debug_display_dimm_sizes(struct amd64_pvt *pvt, u8 ctrl)
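The deinterleave arithmetic in `calculate_cs_size()` above can be exercised in userspace. This sketch substitutes compiler builtins for the kernel's `fls()`, `hweight_long()` and `GENMASK()`; the mask values in the test are illustrative, not taken from real hardware:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's fls(), hweight_long() and GENMASK(). */
#define FLS(x)		(32 - __builtin_clz(x))
#define HWEIGHT(x)	__builtin_popcount(x)
#define GENMASK32(h, l)	(((~0u) >> (31 - (h))) & ~((1u << (l)) - 1))

/*
 * Mirror of calculate_cs_size(): zero bits inside the address mask
 * encode interleaving; drop them from the top of the mask, then derive
 * the chip-select size from what remains.
 */
static int cs_size(uint32_t mask, int three_rank_interleave)
{
	int msb, weight, num_zero_bits;
	uint32_t deinterleaved;

	if (!mask)
		return 0;

	msb = FLS(mask) - 1;
	weight = HWEIGHT(mask);
	num_zero_bits = msb - weight - !!three_rank_interleave;

	/* Take the number of zero bits off from the top of the mask. */
	deinterleaved = GENMASK32(msb - num_zero_bits, 1);
	return (int)(deinterleaved >> 2) + 1;
}
```

With a contiguous mask the full size is returned; clearing one interior bit (2-way interleave) halves it, which is the property the kernel code relies on.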
+13-15
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
···
 	const struct firmware *fw;
 	int r;
 
-	r = request_firmware(&fw, fw_name, adev->dev);
+	r = firmware_request_nowarn(&fw, fw_name, adev->dev);
 	if (r) {
-		dev_err(adev->dev, "can't load firmware \"%s\"\n",
-			fw_name);
+		if (amdgpu_discovery == 2)
+			dev_err(adev->dev, "can't load firmware \"%s\"\n", fw_name);
+		else
+			drm_info(&adev->ddev, "Optional firmware \"%s\" was not found\n", fw_name);
 		return r;
 	}
···
 	/* Read from file if it is the preferred option */
 	fw_name = amdgpu_discovery_get_fw_name(adev);
 	if (fw_name != NULL) {
-		dev_info(adev->dev, "use ip discovery information from file");
+		drm_dbg(&adev->ddev, "use ip discovery information from file");
 		r = amdgpu_discovery_read_binary_from_file(adev, adev->mman.discovery_bin, fw_name);
-
-		if (r) {
-			dev_err(adev->dev, "failed to read ip discovery binary from file\n");
-			r = -EINVAL;
+		if (r)
 			goto out;
-		}
-
 	} else {
+		drm_dbg(&adev->ddev, "use ip discovery information from memory");
 		r = amdgpu_discovery_read_binary_from_mem(
 			adev, adev->mman.discovery_bin);
 		if (r)
···
 	int r;
 
 	r = amdgpu_discovery_init(adev);
-	if (r) {
-		DRM_ERROR("amdgpu_discovery_init failed\n");
+	if (r)
 		return r;
-	}
 
 	wafl_ver = 0;
 	adev->gfx.xcc_mask = 0;
···
 		break;
 	default:
 		r = amdgpu_discovery_reg_base_init(adev);
-		if (r)
-			return -EINVAL;
+		if (r) {
+			drm_err(&adev->ddev, "discovery failed: %d\n", r);
+			return r;
+		}
 
 		amdgpu_discovery_harvest_ip(adev);
 		amdgpu_discovery_get_gfx_info(adev);
+19
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
···
 	}
 
 	switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
+	case IP_VERSION(9, 0, 1):
+	case IP_VERSION(9, 2, 1):
+	case IP_VERSION(9, 4, 0):
+	case IP_VERSION(9, 2, 2):
+	case IP_VERSION(9, 1, 0):
+	case IP_VERSION(9, 3, 0):
+		adev->gfx.cleaner_shader_ptr = gfx_9_4_2_cleaner_shader_hex;
+		adev->gfx.cleaner_shader_size = sizeof(gfx_9_4_2_cleaner_shader_hex);
+		if (adev->gfx.me_fw_version >= 167 &&
+		    adev->gfx.pfp_fw_version >= 196 &&
+		    adev->gfx.mec_fw_version >= 474) {
+			adev->gfx.enable_cleaner_shader = true;
+			r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size);
+			if (r) {
+				adev->gfx.enable_cleaner_shader = false;
+				dev_err(adev->dev, "Failed to initialize cleaner shader\n");
+			}
+		}
+		break;
 	case IP_VERSION(9, 4, 2):
 		adev->gfx.cleaner_shader_ptr = gfx_9_4_2_cleaner_shader_hex;
 		adev->gfx.cleaner_shader_size = sizeof(gfx_9_4_2_cleaner_shader_hex);
+6-4
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
···
 	if (r)
 		goto failure;
 
-	r = mes_v11_0_set_hw_resources_1(&adev->mes);
-	if (r) {
-		DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r);
-		goto failure;
+	if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x50) {
+		r = mes_v11_0_set_hw_resources_1(&adev->mes);
+		if (r) {
+			DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r);
+			goto failure;
+		}
 	}
 
 	r = mes_v11_0_query_sched_status(&adev->mes);
+2-1
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
···
 	if (r)
 		goto failure;
 
-	mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE);
+	if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x4b)
+		mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_SCHED_PIPE);
 
 	mes_v12_0_init_aggregated_doorbell(&adev->mes);
+16-3
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
···
 	else
 		DRM_ERROR("Failed to allocated memory for SDMA IP Dump\n");
 
-	/* add firmware version checks here */
-	if (0 && !adev->sdma.disable_uq)
-		adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+	switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) {
+	case IP_VERSION(6, 0, 0):
+		if ((adev->sdma.instance[0].fw_version >= 24) && !adev->sdma.disable_uq)
+			adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+		break;
+	case IP_VERSION(6, 0, 2):
+		if ((adev->sdma.instance[0].fw_version >= 21) && !adev->sdma.disable_uq)
+			adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+		break;
+	case IP_VERSION(6, 0, 3):
+		if ((adev->sdma.instance[0].fw_version >= 25) && !adev->sdma.disable_uq)
+			adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+		break;
+	default:
+		break;
+	}
 
 	r = amdgpu_sdma_sysfs_reset_mask_init(adev);
 	if (r)
+9-3
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
···
 	else
 		DRM_ERROR("Failed to allocated memory for SDMA IP Dump\n");
 
-	/* add firmware version checks here */
-	if (0 && !adev->sdma.disable_uq)
-		adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+	switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) {
+	case IP_VERSION(7, 0, 0):
+	case IP_VERSION(7, 0, 1):
+		if ((adev->sdma.instance[0].fw_version >= 7836028) && !adev->sdma.disable_uq)
+			adev->userq_funcs[AMDGPU_HW_IP_DMA] = &userq_mes_funcs;
+		break;
+	default:
+		break;
+	}
 
 	return r;
 }
+5-5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
 	return 1;
 }
 
-/* Rescale from [min..max] to [0..AMDGPU_MAX_BL_LEVEL] */
+/* Rescale from [min..max] to [0..MAX_BACKLIGHT_LEVEL] */
 static inline u32 scale_input_to_fw(int min, int max, u64 input)
 {
-	return DIV_ROUND_CLOSEST_ULL(input * AMDGPU_MAX_BL_LEVEL, max - min);
+	return DIV_ROUND_CLOSEST_ULL(input * MAX_BACKLIGHT_LEVEL, max - min);
 }
 
-/* Rescale from [0..AMDGPU_MAX_BL_LEVEL] to [min..max] */
+/* Rescale from [0..MAX_BACKLIGHT_LEVEL] to [min..max] */
 static inline u32 scale_fw_to_input(int min, int max, u64 input)
 {
-	return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), AMDGPU_MAX_BL_LEVEL);
+	return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), MAX_BACKLIGHT_LEVEL);
 }
 
 static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps,
···
 		drm_dbg(drm, "Backlight caps: min: %d, max: %d, ac %d, dc %d\n", min, max,
 			caps->ac_level, caps->dc_level);
 	} else
-		props.brightness = props.max_brightness = AMDGPU_MAX_BL_LEVEL;
+		props.brightness = props.max_brightness = MAX_BACKLIGHT_LEVEL;
 
 	if (caps->data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE))
 		drm_info(drm, "Using custom brightness curve\n");
···
 	 * 200 ms. We'll assume that the panel driver will have the hardcoded
 	 * delay in its prepare and always disable HPD.
 	 *
-	 * If HPD somehow makes sense on some future panel we'll have to
-	 * change this to be conditional on someone specifying that HPD should
-	 * be used.
+	 * For DisplayPort bridge type, we need HPD. So we use the bridge type
+	 * to conditionally disable HPD.
+	 * NOTE: The bridge type is set in ti_sn_bridge_probe() but enable_comms()
+	 * can be called before. So for DisplayPort, HPD will be enabled once
+	 * bridge type is set. We are using bridge type instead of "no-hpd"
+	 * property because it is not used properly in devicetree description
+	 * and hence is unreliable.
 	 */
-	regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE,
-			   HPD_DISABLE);
+
+	if (pdata->bridge.type != DRM_MODE_CONNECTOR_DisplayPort)
+		regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, HPD_DISABLE,
+				   HPD_DISABLE);
 
 	pdata->comms_enabled = true;
···
 	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
 	int val = 0;
 
-	pm_runtime_get_sync(pdata->dev);
+	/*
+	 * Runtime reference is grabbed in ti_sn_bridge_hpd_enable()
+	 * as the chip won't report HPD just after being powered on.
+	 * HPD_DEBOUNCED_STATE reflects correct state only after the
+	 * debounce time (~100-400 ms).
+	 */
+
 	regmap_read(pdata->regmap, SN_HPD_DISABLE_REG, &val);
-	pm_runtime_put_autosuspend(pdata->dev);
 
 	return val & HPD_DEBOUNCED_STATE ? connector_status_connected
 					 : connector_status_disconnected;
···
 	debugfs_create_file("status", 0600, debugfs, pdata, &status_fops);
 }
 
+static void ti_sn_bridge_hpd_enable(struct drm_bridge *bridge)
+{
+	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+
+	/*
+	 * Device needs to be powered on before reading the HPD state
+	 * for reliable hpd detection in ti_sn_bridge_detect() due to
+	 * the high debounce time.
+	 */
+
+	pm_runtime_get_sync(pdata->dev);
+}
+
+static void ti_sn_bridge_hpd_disable(struct drm_bridge *bridge)
+{
+	struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
+
+	pm_runtime_put_autosuspend(pdata->dev);
+}
+
 static const struct drm_bridge_funcs ti_sn_bridge_funcs = {
 	.attach = ti_sn_bridge_attach,
 	.detach = ti_sn_bridge_detach,
···
 	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
 	.debugfs_init = ti_sn65dsi86_debugfs_init,
+	.hpd_enable = ti_sn_bridge_hpd_enable,
+	.hpd_disable = ti_sn_bridge_hpd_disable,
 };
 
 static void ti_sn_bridge_parse_lanes(struct ti_sn65dsi86 *pdata,
···
 	pdata->bridge.type = pdata->next_bridge->type == DRM_MODE_CONNECTOR_DisplayPort
 			   ? DRM_MODE_CONNECTOR_DisplayPort : DRM_MODE_CONNECTOR_eDP;
 
-	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort)
-		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT;
+	if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort) {
+		pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT |
+				    DRM_BRIDGE_OP_HPD;
+		/*
+		 * If comms were already enabled they would have been enabled
+		 * with the wrong value of HPD_DISABLE. Update it now. Comms
+		 * could be enabled if anyone is holding a pm_runtime reference
+		 * (like if a GPIO is in use). Note that in most cases nobody
+		 * is doing AUX channel xfers before the bridge is added so
+		 * HPD doesn't _really_ matter then. The only exception is in
+		 * the eDP case where the panel wants to read the EDID before
+		 * the bridge is added. We always consistently have HPD disabled
+		 * for eDP.
+		 */
+		mutex_lock(&pdata->comms_mutex);
+		if (pdata->comms_enabled)
+			regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG,
+					   HPD_DISABLE, 0);
+		mutex_unlock(&pdata->comms_mutex);
+	}
 
 	drm_bridge_add(&pdata->bridge);
+5-2
drivers/gpu/drm/display/drm_bridge_connector.c
···
 	if (bridge_connector->bridge_hdmi_audio ||
 	    bridge_connector->bridge_dp_audio) {
 		struct device *dev;
+		struct drm_bridge *bridge;
 
 		if (bridge_connector->bridge_hdmi_audio)
-			dev = bridge_connector->bridge_hdmi_audio->hdmi_audio_dev;
+			bridge = bridge_connector->bridge_hdmi_audio;
 		else
-			dev = bridge_connector->bridge_dp_audio->hdmi_audio_dev;
+			bridge = bridge_connector->bridge_dp_audio;
+
+		dev = bridge->hdmi_audio_dev;
 
 		ret = drm_connector_hdmi_audio_init(connector, dev,
 						    &drm_bridge_connector_hdmi_audio_funcs,
+1-1
drivers/gpu/drm/display/drm_dp_helper.c
···
 	 * monitor doesn't power down exactly after the throw away read.
 	 */
 	if (!aux->is_remote) {
-		ret = drm_dp_dpcd_probe(aux, DP_DPCD_REV);
+		ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS);
 		if (ret < 0)
 			return ret;
 	}
+4-3
drivers/gpu/drm/drm_writeback.c
···
 /**
  * drm_writeback_connector_cleanup - Cleanup the writeback connector
  * @dev: DRM device
- * @wb_connector: Pointer to the writeback connector to clean up
+ * @data: Pointer to the writeback connector to clean up
  *
  * This will decrement the reference counter of blobs and destroy properties. It
  * will also clean the remaining jobs in this writeback connector. Caution: This helper will not
  * clean up the attached encoder and the drm_connector.
  */
 static void drm_writeback_connector_cleanup(struct drm_device *dev,
-					    struct drm_writeback_connector *wb_connector)
+					    void *data)
 {
 	unsigned long flags;
 	struct drm_writeback_job *pos, *n;
+	struct drm_writeback_connector *wb_connector = data;
 
 	delete_writeback_properties(dev);
 	drm_property_blob_put(wb_connector->pixel_formats_blob_ptr);
···
 	if (ret)
 		return ret;
 
-	ret = drmm_add_action_or_reset(dev, (void *)drm_writeback_connector_cleanup,
+	ret = drmm_add_action_or_reset(dev, drm_writeback_connector_cleanup,
 				       wb_connector);
 	if (ret)
 		return ret;
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB:
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		return lenovo_input_mapping_x1_tab_kbd(hdev, hi, field, usage, bit, max);
 	default:
···
 
 	/*
 	 * Tell the keyboard a driver understands it, and turn F7, F9, F11 into
-	 * regular keys
+	 * regular keys (Compact only)
 	 */
-	ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03);
-	if (ret)
-		hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret);
+	if (hdev->product == USB_DEVICE_ID_LENOVO_CUSBKBD ||
+	    hdev->product == USB_DEVICE_ID_LENOVO_CBTKBD) {
+		ret = lenovo_send_cmd_cptkbd(hdev, 0x01, 0x03);
+		if (ret)
+			hid_warn(hdev, "Failed to switch F7/9/11 mode: %d\n", ret);
+	}
 
 	/* Switch middle button to native mode */
 	ret = lenovo_send_cmd_cptkbd(hdev, 0x09, 0x01);
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		ret = lenovo_led_set_tp10ubkbd(hdev, TP10UBKBD_FN_LOCK_LED, value);
 		if (ret)
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		return lenovo_event_tp10ubkbd(hdev, field, usage, value);
 	default:
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		ret = lenovo_led_set_tp10ubkbd(hdev, tp10ubkbd_led[led_nr], value);
 		break;
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		ret = lenovo_probe_tp10ubkbd(hdev);
 		break;
···
 	case USB_DEVICE_ID_LENOVO_X12_TAB2:
 	case USB_DEVICE_ID_LENOVO_TP10UBKBD:
 	case USB_DEVICE_ID_LENOVO_X1_TAB:
+	case USB_DEVICE_ID_LENOVO_X1_TAB2:
 	case USB_DEVICE_ID_LENOVO_X1_TAB3:
 		lenovo_remove_tp10ubkbd(hdev);
 		break;
···
 	 */
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
 		     USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB) },
+	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
+		     USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB2) },
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
 		     USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_X1_TAB3) },
 	{ HID_DEVICE(BUS_USB, HID_GROUP_GENERIC,
···
 	JOYCON_CTLR_STATE_INIT,
 	JOYCON_CTLR_STATE_READ,
 	JOYCON_CTLR_STATE_REMOVED,
+	JOYCON_CTLR_STATE_SUSPENDED,
 };
 
 /* Controller type received as part of device info */
···
 
 static int nintendo_hid_resume(struct hid_device *hdev)
 {
-	int ret = joycon_init(hdev);
+	struct joycon_ctlr *ctlr = hid_get_drvdata(hdev);
+	int ret;
 
+	hid_dbg(hdev, "resume\n");
+	if (!joycon_using_usb(ctlr)) {
+		hid_dbg(hdev, "no-op resume for bt ctlr\n");
+		ctlr->ctlr_state = JOYCON_CTLR_STATE_READ;
+		return 0;
+	}
+
+	ret = joycon_init(hdev);
 	if (ret)
-		hid_err(hdev, "Failed to restore controller after resume");
+		hid_err(hdev,
+			"Failed to restore controller after resume: %d\n",
+			ret);
+	else
+		ctlr->ctlr_state = JOYCON_CTLR_STATE_READ;
 
 	return ret;
+}
+
+static int nintendo_hid_suspend(struct hid_device *hdev, pm_message_t message)
+{
+	struct joycon_ctlr *ctlr = hid_get_drvdata(hdev);
+
+	hid_dbg(hdev, "suspend: %d\n", message.event);
+	/*
+	 * Avoid any blocking loops in suspend/resume transitions.
+	 *
+	 * joycon_enforce_subcmd_rate() can result in repeated retries if for
+	 * whatever reason the controller stops providing input reports.
+	 *
+	 * This has been observed with bluetooth controllers which lose
+	 * connectivity prior to suspend (but not long enough to result in
+	 * complete disconnection).
+	 */
+	ctlr->ctlr_state = JOYCON_CTLR_STATE_SUSPENDED;
+	return 0;
 }
 
 #endif
···
 
 #ifdef CONFIG_PM
 	.resume = nintendo_hid_resume,
+	.suspend = nintendo_hid_suspend,
 #endif
 };
 static int __init nintendo_init(void)
···
 #include <linux/bitfield.h>
 #include <linux/hid.h>
 #include <linux/hid-over-i2c.h>
+#include <linux/unaligned.h>
 
 #include "intel-thc-dev.h"
 #include "intel-thc-dma.h"
···
 
 int quicki2c_reset(struct quicki2c_device *qcdev)
 {
+	u16 input_reg = le16_to_cpu(qcdev->dev_desc.input_reg);
+	size_t read_len = HIDI2C_LENGTH_LEN;
+	u32 prd_len = read_len;
 	int ret;
 
 	qcdev->reset_ack = false;
···
 
 	ret = wait_event_interruptible_timeout(qcdev->reset_ack_wq, qcdev->reset_ack,
 					       HIDI2C_RESET_TIMEOUT * HZ);
-	if (ret <= 0 || !qcdev->reset_ack) {
+	if (qcdev->reset_ack)
+		return 0;
+
+	/*
+	 * Manually read reset response if it wasn't received, in case reset interrupt
+	 * was missed by touch device or THC hardware.
+	 */
+	ret = thc_tic_pio_read(qcdev->thc_hw, input_reg, read_len, &prd_len,
+			       (u32 *)qcdev->input_buf);
+	if (ret) {
+		dev_err_once(qcdev->dev, "Read Reset Response failed, ret %d\n", ret);
+		return ret;
+	}
+
+	/*
+	 * Check response packet length, it's first 16 bits of packet.
+	 * If response packet length is zero, it's reset response, otherwise not.
+	 */
+	if (get_unaligned_le16(qcdev->input_buf)) {
 		dev_err_once(qcdev->dev,
 			     "Wait reset response timed out ret:%d timeout:%ds\n",
 			     ret, HIDI2C_RESET_TIMEOUT);
 		return -ETIMEDOUT;
 	}
+
+	qcdev->reset_ack = true;
 
 	return 0;
 }
+6-1
drivers/hid/wacom_sys.c
···
 
 	remote->remote_dir = kobject_create_and_add("wacom_remote",
 						    &wacom->hdev->dev.kobj);
-	if (!remote->remote_dir)
+	if (!remote->remote_dir) {
+		kfifo_free(&remote->remote_fifo);
 		return -ENOMEM;
+	}
 
 	error = sysfs_create_files(remote->remote_dir, remote_unpair_attrs);
 
 	if (error) {
 		hid_err(wacom->hdev,
 			"cannot create sysfs group err: %d\n", error);
+		kfifo_free(&remote->remote_fifo);
+		kobject_put(remote->remote_dir);
 		return error;
 	}
···
 	hid_hw_stop(hdev);
 
 	cancel_delayed_work_sync(&wacom->init_work);
+	cancel_delayed_work_sync(&wacom->aes_battery_work);
 	cancel_work_sync(&wacom->wireless_work);
 	cancel_work_sync(&wacom->battery_work);
 	cancel_work_sync(&wacom->remote_work);
+1-1
drivers/i2c/busses/Kconfig
···
 
 config SCx200_ACB
 	tristate "Geode ACCESS.bus support"
-	depends on X86_32 && PCI
+	depends on X86_32 && PCI && HAS_IOPORT
 	help
 	  Enable the use of the ACCESS.bus controllers on the Geode SCx200 and
 	  SC1100 processors and the CS5535 and CS5536 Geode companion devices.
···
 out_unlock:
 	mutex_unlock(&table->lock);
 	if (ret)
-		pr_warn("%s: unable to add gid %pI6 error=%d\n",
-			__func__, gid->raw, ret);
+		pr_warn_ratelimited("%s: unable to add gid %pI6 error=%d\n",
+				    __func__, gid->raw, ret);
 	return ret;
 }
+11
drivers/infiniband/core/umem_odp.c
···
 	end = ALIGN(end, page_size);
 	if (unlikely(end < page_size))
 		return -EOVERFLOW;
+	/*
+	 * The mmu notifier can be called within reclaim contexts and takes the
+	 * umem_mutex. This is rare to trigger in testing, teach lockdep about
+	 * it.
+	 */
+	if (IS_ENABLED(CONFIG_LOCKDEP)) {
+		fs_reclaim_acquire(GFP_KERNEL);
+		mutex_lock(&umem_odp->umem_mutex);
+		mutex_unlock(&umem_odp->umem_mutex);
+		fs_reclaim_release(GFP_KERNEL);
+	}
 
 	nr_entries = (end - start) >> PAGE_SHIFT;
 	if (!(nr_entries * PAGE_SIZE / page_size))
+2-2
drivers/infiniband/hw/mlx5/counters.c
···
 		return ret;
 
 	/* We don't expose device counters over Vports */
-	if (is_mdev_switchdev_mode(dev->mdev) && port_num != 0)
+	if (is_mdev_switchdev_mode(dev->mdev) && dev->is_rep && port_num != 0)
 		goto done;
 
 	if (MLX5_CAP_PCAM_FEATURE(dev->mdev, rx_icrc_encapsulated_counter)) {
···
 		 */
 		goto done;
 	}
-	ret = mlx5_lag_query_cong_counters(dev->mdev,
+	ret = mlx5_lag_query_cong_counters(mdev,
 					   stats->value +
 					   cnts->num_q_counters,
 					   cnts->num_cong_counters,
···
 	}
 }
 
-static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+static int mlx5_umr_revoke_mr_with_lock(struct mlx5_ib_mr *mr)
 {
-	struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
-	struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
-	bool is_odp = is_odp_mr(mr);
 	bool is_odp_dma_buf = is_dmabuf_mr(mr) &&
-			      !to_ib_umem_dmabuf(mr->umem)->pinned;
-	bool from_cache = !!ent;
-	int ret = 0;
+			      !to_ib_umem_dmabuf(mr->umem)->pinned;
+	bool is_odp = is_odp_mr(mr);
+	int ret;
 
 	if (is_odp)
 		mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
 
 	if (is_odp_dma_buf)
-		dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv, NULL);
+		dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv,
+			      NULL);
 
-	if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) {
+	ret = mlx5r_umr_revoke_mr(mr);
+
+	if (is_odp) {
+		if (!ret)
+			to_ib_umem_odp(mr->umem)->private = NULL;
+		mutex_unlock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+	}
+
+	if (is_odp_dma_buf) {
+		if (!ret)
+			to_ib_umem_dmabuf(mr->umem)->private = NULL;
+		dma_resv_unlock(
+			to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
+	}
+
+	return ret;
+}
+
+static int mlx5r_handle_mkey_cleanup(struct mlx5_ib_mr *mr)
+{
+	bool is_odp_dma_buf = is_dmabuf_mr(mr) &&
+			      !to_ib_umem_dmabuf(mr->umem)->pinned;
+	struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
+	struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
+	bool is_odp = is_odp_mr(mr);
+	bool from_cache = !!ent;
+	int ret;
+
+	if (mr->mmkey.cacheable && !mlx5_umr_revoke_mr_with_lock(mr) &&
+	    !cache_ent_find_and_store(dev, mr)) {
 		ent = mr->mmkey.cache_ent;
 		/* upon storing to a clean temp entry - schedule its cleanup */
 		spin_lock_irq(&ent->mkeys_queue.lock);
···
 			ent->tmp_cleanup_scheduled = true;
 		}
 		spin_unlock_irq(&ent->mkeys_queue.lock);
-		goto out;
+		return 0;
 	}
 
 	if (ent) {
···
 		mr->mmkey.cache_ent = NULL;
 		spin_unlock_irq(&ent->mkeys_queue.lock);
 	}
+
+	if (is_odp)
+		mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+
+	if (is_odp_dma_buf)
+		dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv,
+			      NULL);
 	ret = destroy_mkey(dev, mr);
-out:
 	if (is_odp) {
 		if (!ret)
 			to_ib_umem_odp(mr->umem)->private = NULL;
···
 	if (is_odp_dma_buf) {
 		if (!ret)
 			to_ib_umem_dmabuf(mr->umem)->private = NULL;
-		dma_resv_unlock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
+		dma_resv_unlock(
+			to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
 	}
-
 	return ret;
 }
···
 	}
 
 	/* Stop DMA */
-	rc = mlx5_revoke_mr(mr);
+	rc = mlx5r_handle_mkey_cleanup(mr);
 	if (rc)
 		return rc;
+4-4
drivers/infiniband/hw/mlx5/odp.c
···
 	}
 
 	if (MLX5_CAP_ODP(mr_to_mdev(mr)->mdev, mem_page_fault))
-		__xa_erase(&mr_to_mdev(mr)->odp_mkeys,
-			   mlx5_base_mkey(mr->mmkey.key));
+		xa_erase(&mr_to_mdev(mr)->odp_mkeys,
+			 mlx5_base_mkey(mr->mmkey.key));
 	xa_unlock(&imr->implicit_children);
 
 	/* Freeing a MR is a sleeping operation, so bounce to a work queue */
···
 	}
 
 	if (MLX5_CAP_ODP(dev->mdev, mem_page_fault)) {
-		ret = __xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key),
-				 &mr->mmkey, GFP_KERNEL);
+		ret = xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key),
+			       &mr->mmkey, GFP_KERNEL);
 		if (xa_is_err(ret)) {
 			ret = ERR_PTR(xa_err(ret));
 			__xa_erase(&imr->implicit_children, idx);
+1-2
drivers/mfd/88pm860x-core.c
···
 	unsigned long flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT;
 	int data, mask, ret = -EINVAL;
 	int nr_irqs, irq_base = -1;
-	struct device_node *node = i2c->dev.of_node;
 
 	mask = PM8607_B0_MISC1_INV_INT | PM8607_B0_MISC1_INT_CLEAR
 		| PM8607_B0_MISC1_INT_MASK;
···
 		ret = -EBUSY;
 		goto out;
 	}
-	irq_domain_create_legacy(of_fwnode_handle(node), nr_irqs, chip->irq_base, 0,
+	irq_domain_create_legacy(dev_fwnode(&i2c->dev), nr_irqs, chip->irq_base, 0,
 				 &pm860x_irq_domain_ops, chip);
 	chip->core_irq = i2c->irq;
 	if (!chip->core_irq)
+3-3
drivers/mfd/max8925-core.c
···
 {
 	unsigned long flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT;
 	int ret;
-	struct device_node *node = chip->dev->of_node;
 
 	/* clear all interrupts */
 	max8925_reg_read(chip->i2c, MAX8925_CHG_IRQ1);
···
 		return -EBUSY;
 	}
 
-	irq_domain_create_legacy(of_fwnode_handle(node), MAX8925_NR_IRQS, chip->irq_base, 0,
-				 &max8925_irq_domain_ops, chip);
+	irq_domain_create_legacy(dev_fwnode(chip->dev), MAX8925_NR_IRQS,
+				 chip->irq_base, 0, &max8925_irq_domain_ops,
+				 chip);
 
 	/* request irq handler for pmic main irq*/
 	chip->core_irq = irq;
+1-2
drivers/mfd/twl4030-irq.c
···
 	static struct irq_chip twl4030_irq_chip;
 	int status, i;
 	int irq_base, irq_end, nr_irqs;
-	struct device_node *node = dev->of_node;
 
 	/*
 	 * TWL core and pwr interrupts must be contiguous because
···
 		return irq_base;
 	}
 
-	irq_domain_create_legacy(of_fwnode_handle(node), nr_irqs, irq_base, 0,
+	irq_domain_create_legacy(dev_fwnode(dev), nr_irqs, irq_base, 0,
 				 &irq_domain_simple_ops, NULL);
 
 	irq_end = irq_base + TWL4030_CORE_NR_IRQS;
+6-6
drivers/mmc/core/quirks.h
···
 		   0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
 		   MMC_QUIRK_NO_UHS_DDR50_TUNING, EXT_CSD_REV_ANY),
 
+	/*
+	 * Some SD cards reports discard support while they don't
+	 */
+	MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
+		  MMC_QUIRK_BROKEN_SD_DISCARD),
+
 	END_FIXUP
 };
···
 	 */
 	MMC_FIXUP("M62704", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc,
 		  MMC_QUIRK_TRIM_BROKEN),
-
-	/*
-	 * Some SD cards reports discard support while they don't
-	 */
-	MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_SD, 0x5344, add_quirk_sd,
-		  MMC_QUIRK_BROKEN_SD_DISCARD),
 
 	END_FIXUP
 };
+2-2
drivers/mmc/core/sd_uhs2.c
···
 
 	err = host->ops->uhs2_control(host, UHS2_PHY_INIT);
 	if (err) {
-		pr_err("%s: failed to initial phy for UHS-II!\n",
-		       mmc_hostname(host));
+		pr_debug("%s: failed to initial phy for UHS-II!\n",
+			 mmc_hostname(host));
 	}
 
 	return err;
+19-2
drivers/mmc/host/mtk-sd.c
···
 static void msdc_prepare_data(struct msdc_host *host, struct mmc_data *data)
 {
 	if (!(data->host_cookie & MSDC_PREPARE_FLAG)) {
-		data->host_cookie |= MSDC_PREPARE_FLAG;
 		data->sg_count = dma_map_sg(host->dev, data->sg, data->sg_len,
 					    mmc_get_dma_dir(data));
+		if (data->sg_count)
+			data->host_cookie |= MSDC_PREPARE_FLAG;
 	}
+}
+
+static bool msdc_data_prepared(struct mmc_data *data)
+{
+	return data->host_cookie & MSDC_PREPARE_FLAG;
 }
 
 static void msdc_unprepare_data(struct msdc_host *host, struct mmc_data *data)
···
 	WARN_ON(!host->hsq_en && host->mrq);
 	host->mrq = mrq;
 
-	if (mrq->data)
+	if (mrq->data) {
 		msdc_prepare_data(host, mrq->data);
+		if (!msdc_data_prepared(mrq->data)) {
+			host->mrq = NULL;
+			/*
+			 * Failed to prepare DMA area, fail fast before
+			 * starting any commands.
+			 */
+			mrq->cmd->error = -ENOSPC;
+			mmc_request_done(mmc_from_priv(host), mrq);
+			return;
+		}
+	}
 
 	/* if SBC is required, we have HW option and SW option.
 	 * if HW option is enabled, and SBC does not have "special" flags,
+2-1
drivers/mmc/host/sdhci-of-k1.c
···
 
 	host->mmc->caps |= MMC_CAP_NEED_RSP_BUSY;
 
-	if (spacemit_sdhci_get_clocks(dev, pltfm_host))
+	ret = spacemit_sdhci_get_clocks(dev, pltfm_host);
+	if (ret)
 		goto err_pltfm;
 
 	ret = sdhci_add_host(host);
+10-10
drivers/mmc/host/sdhci-uhs2.c
···
 	/* hw clears the bit when it's done */
 	if (read_poll_timeout_atomic(sdhci_readw, val, !(val & mask), 10,
 				     UHS2_RESET_TIMEOUT_100MS, true, host, SDHCI_UHS2_SW_RESET)) {
-		pr_warn("%s: %s: Reset 0x%x never completed. %s: clean reset bit.\n", __func__,
-			mmc_hostname(host->mmc), (int)mask, mmc_hostname(host->mmc));
+		pr_debug("%s: %s: Reset 0x%x never completed. %s: clean reset bit.\n", __func__,
+			 mmc_hostname(host->mmc), (int)mask, mmc_hostname(host->mmc));
 		sdhci_writeb(host, 0, SDHCI_UHS2_SW_RESET);
 		return;
 	}
···
 	if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_IF_DETECT),
 			      100, UHS2_INTERFACE_DETECT_TIMEOUT_100MS, true,
 			      host, SDHCI_PRESENT_STATE)) {
-		pr_warn("%s: not detect UHS2 interface in 100ms.\n", mmc_hostname(host->mmc));
-		sdhci_dumpregs(host);
+		pr_debug("%s: not detect UHS2 interface in 100ms.\n", mmc_hostname(host->mmc));
+		sdhci_dbg_dumpregs(host, "UHS2 interface detect timeout in 100ms");
 		return -EIO;
 	}
···
 
 	if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_LANE_SYNC),
 			      100, UHS2_LANE_SYNC_TIMEOUT_150MS, true, host, SDHCI_PRESENT_STATE)) {
-		pr_warn("%s: UHS2 Lane sync fail in 150ms.\n", mmc_hostname(host->mmc));
-		sdhci_dumpregs(host);
+		pr_debug("%s: UHS2 Lane sync fail in 150ms.\n", mmc_hostname(host->mmc));
+		sdhci_dbg_dumpregs(host, "UHS2 Lane sync fail in 150ms");
 		return -EIO;
 	}
···
 	host->ops->uhs2_pre_detect_init(host);
 
 	if (sdhci_uhs2_interface_detect(host)) {
-		pr_warn("%s: cannot detect UHS2 interface.\n", mmc_hostname(host->mmc));
+		pr_debug("%s: cannot detect UHS2 interface.\n", mmc_hostname(host->mmc));
 		return -EIO;
 	}
 
 	if (sdhci_uhs2_init(host)) {
-		pr_warn("%s: UHS2 init fail.\n", mmc_hostname(host->mmc));
+		pr_debug("%s: UHS2 init fail.\n", mmc_hostname(host->mmc));
 		return -EIO;
 	}
···
 	if (read_poll_timeout(sdhci_readl, val, (val & SDHCI_UHS2_IN_DORMANT_STATE),
 			      100, UHS2_CHECK_DORMANT_TIMEOUT_100MS, true, host,
 			      SDHCI_PRESENT_STATE)) {
-		pr_warn("%s: UHS2 IN_DORMANT fail in 100ms.\n", mmc_hostname(host->mmc));
-		sdhci_dumpregs(host);
+		pr_debug("%s: UHS2 IN_DORMANT fail in 100ms.\n", mmc_hostname(host->mmc));
+		sdhci_dbg_dumpregs(host, "UHS2 IN_DORMANT fail in 100ms");
 		return -EIO;
 	}
 	return 0;
···266266 reg |= MDIO_VEND2_CTRL1_AN_RESTART;267267268268 XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_CTRL1, reg);269269+270270+ reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL);271271+ reg |= XGBE_VEND2_MAC_AUTO_SW;272272+ XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL, reg);269273}270274271275static void xgbe_an37_restart(struct xgbe_prv_data *pdata)···898894899895 netif_dbg(pdata, link, pdata->netdev, "CL37 AN (%s) initialized\n",900896 (pdata->an_mode == XGBE_AN_MODE_CL37) ? "BaseX" : "SGMII");897897+898898+ reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1);899899+ reg &= ~MDIO_AN_CTRL1_ENABLE;900900+ XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_CTRL1, reg);901901+901902}902903903904static void xgbe_an73_init(struct xgbe_prv_data *pdata)···1304129513051296 pdata->phy.link = pdata->phy_if.phy_impl.link_status(pdata,13061297 &an_restart);12981298+ /* bail out if the link status register read fails */12991299+ if (pdata->phy.link < 0)13001300+ return;13011301+13071302 if (an_restart) {13081303 xgbe_phy_config_aneg(pdata);13091304 goto adjust_link;
+15-9
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
···27462746static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart)27472747{27482748 struct xgbe_phy_data *phy_data = pdata->phy_data;27492749- unsigned int reg;27502750- int ret;27492749+ int reg, ret;2751275027522751 *an_restart = 0;27532752···27802781 return 0;27812782 }2782278327832783- /* Link status is latched low, so read once to clear27842784- * and then read again to get current state27842784+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);27852785+ if (reg < 0)27862786+ return reg;27872787+27882788+ /* Link status is latched low so that momentary link drops27892789+ * can be detected. If link was already down read again27902790+ * to get the latest state.27852791 */27862786- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);27872787- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);27922792+27932793+ if (!pdata->phy.link && !(reg & MDIO_STAT1_LSTATUS)) {27942794+ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);27952795+ if (reg < 0)27962796+ return reg;27972797+ }2788279827892799 if (pdata->en_rx_adap) {27902800 /* if the link is available and adaptation is done,···28122804 xgbe_phy_set_mode(pdata, phy_data->cur_mode);28132805 }2814280628152815- /* check again for the link and adaptation status */28162816- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1);28172817- if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done)28072807+ if (pdata->rx_adapt_done)28182808 return 1;28192809 } else if (reg & MDIO_STAT1_LSTATUS)28202810 return 1;
···18641864 if (enic_is_dynamic(enic) || enic_is_sriov_vf(enic))18651865 return -EOPNOTSUPP;1866186618671867- if (netdev->mtu > enic->port_mtu)18671867+ if (new_mtu > enic->port_mtu)18681868 netdev_warn(netdev,18691869 "interface MTU (%d) set higher than port MTU (%d)\n",18701870- netdev->mtu, enic->port_mtu);18701870+ new_mtu, enic->port_mtu);1871187118721872 return _enic_change_mtu(netdev, new_mtu);18731873}
+24-2
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
···39393939 MEM_TYPE_PAGE_ORDER0, NULL);39403940 if (err) {39413941 dev_err(dev, "xdp_rxq_info_reg_mem_model failed\n");39423942+ xdp_rxq_info_unreg(&fq->channel->xdp_rxq);39423943 return err;39433944 }39443945···44334432 return -EINVAL;44344433 }44354434 if (err)44364436- return err;44354435+ goto out;44374436 }4438443744394438 err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token,44404439 DPNI_QUEUE_TX, &priv->tx_qdid);44414440 if (err) {44424441 dev_err(dev, "dpni_get_qdid() failed\n");44434443- return err;44424442+ goto out;44444443 }4445444444464445 return 0;44464446+44474447+out:44484448+ while (i--) {44494449+ if (priv->fq[i].type == DPAA2_RX_FQ &&44504450+ xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq))44514451+ xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq);44524452+ }44534453+ return err;44474454}4448445544494456/* Allocate rings for storing incoming frame descriptors */···48344825 }48354826}4836482748284828+static void dpaa2_eth_free_rx_xdp_rxq(struct dpaa2_eth_priv *priv)48294829+{48304830+ int i;48314831+48324832+ for (i = 0; i < priv->num_fqs; i++) {48334833+ if (priv->fq[i].type == DPAA2_RX_FQ &&48344834+ xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq))48354835+ xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq);48364836+ }48374837+}48384838+48374839static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)48384840{48394841 struct device *dev;···50485028 free_percpu(priv->percpu_stats);50495029err_alloc_percpu_stats:50505030 dpaa2_eth_del_ch_napi(priv);50315031+ dpaa2_eth_free_rx_xdp_rxq(priv);50515032err_bind:50525033 dpaa2_eth_free_dpbps(priv);50535034err_dpbp_setup:···51015080 free_percpu(priv->percpu_extras);5102508151035082 dpaa2_eth_del_ch_napi(priv);50835083+ dpaa2_eth_free_rx_xdp_rxq(priv);51045084 dpaa2_eth_free_dpbps(priv);51055085 dpaa2_eth_free_dpio(priv);51065086 dpaa2_eth_free_dpni(priv);
+11-12
drivers/net/ethernet/intel/idpf/idpf_controlq.c
···9696 */9797static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)9898{9999- mutex_lock(&cq->cq_lock);9999+ spin_lock(&cq->cq_lock);100100101101 /* free ring buffers and the ring itself */102102 idpf_ctlq_dealloc_ring_res(hw, cq);···104104 /* Set ring_size to 0 to indicate uninitialized queue */105105 cq->ring_size = 0;106106107107- mutex_unlock(&cq->cq_lock);108108- mutex_destroy(&cq->cq_lock);107107+ spin_unlock(&cq->cq_lock);109108}110109111110/**···172173173174 idpf_ctlq_init_regs(hw, cq, is_rxq);174175175175- mutex_init(&cq->cq_lock);176176+ spin_lock_init(&cq->cq_lock);176177177178 list_add(&cq->cq_list, &hw->cq_list_head);178179···271272 int err = 0;272273 int i;273274274274- mutex_lock(&cq->cq_lock);275275+ spin_lock(&cq->cq_lock);275276276277 /* Ensure there are enough descriptors to send all messages */277278 num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);···331332 wr32(hw, cq->reg.tail, cq->next_to_use);332333333334err_unlock:334334- mutex_unlock(&cq->cq_lock);335335+ spin_unlock(&cq->cq_lock);335336336337 return err;337338}···363364 if (*clean_count > cq->ring_size)364365 return -EBADR;365366366366- mutex_lock(&cq->cq_lock);367367+ spin_lock(&cq->cq_lock);367368368369 ntc = cq->next_to_clean;369370···396397397398 cq->next_to_clean = ntc;398399399399- mutex_unlock(&cq->cq_lock);400400+ spin_unlock(&cq->cq_lock);400401401402 /* Return number of descriptors actually cleaned */402403 *clean_count = i;···434435 if (*buff_count > 0)435436 buffs_avail = true;436437437437- mutex_lock(&cq->cq_lock);438438+ spin_lock(&cq->cq_lock);438439439440 if (tbp >= cq->ring_size)440441 tbp = 0;···523524 wr32(hw, cq->reg.tail, cq->next_to_post);524525 }525526526526- mutex_unlock(&cq->cq_lock);527527+ spin_unlock(&cq->cq_lock);527528528529 /* return the number of buffers that were not posted */529530 *buff_count = *buff_count - i;···551552 u16 i;552553553554 /* take the lock before we start messing with the ring */554554- mutex_lock(&cq->cq_lock);555555+ 
spin_lock(&cq->cq_lock);555556556557 ntc = cq->next_to_clean;557558···613614614615 cq->next_to_clean = ntc;615616616616- mutex_unlock(&cq->cq_lock);617617+ spin_unlock(&cq->cq_lock);617618618619 *num_q_msg = i;619620 if (*num_q_msg == 0)
···23142314 struct idpf_adapter *adapter = hw->back;23152315 size_t sz = ALIGN(size, 4096);2316231623172317- mem->va = dma_alloc_coherent(&adapter->pdev->dev, sz,23182318- &mem->pa, GFP_KERNEL);23172317+ /* The control queue resources are freed under a spinlock; contiguous23182318+ * pages avoid IOMMU remapping and the use of vmap() (and vunmap() in23192319+ * the dma_free_*() path).23202320+ */23212321+ mem->va = dma_alloc_attrs(&adapter->pdev->dev, sz, &mem->pa,23222322+ GFP_KERNEL, DMA_ATTR_FORCE_CONTIGUOUS);23192323 mem->size = sz;2320232423212325 return mem->va;···23342330{23352331 struct idpf_adapter *adapter = hw->back;2336233223372337- dma_free_coherent(&adapter->pdev->dev, mem->size,23382338- mem->va, mem->pa);23332333+ dma_free_attrs(&adapter->pdev->dev, mem->size,23342334+ mem->va, mem->pa, DMA_ATTR_FORCE_CONTIGUOUS);23392335 mem->size = 0;23402336 mem->va = NULL;23412337 mem->pa = 0;
+10
drivers/net/ethernet/intel/igc/igc_main.c
···71447144 adapter->port_num = hw->bus.func;71457145 adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE);7146714671477147+ /* Disable ASPM L1.2 on I226 devices to avoid packet loss */71487148+ if (igc_is_device_id_i226(hw))71497149+ pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);71507150+71477151 err = pci_save_state(pdev);71487152 if (err)71497153 goto err_ioremap;···75337529 pci_enable_wake(pdev, PCI_D3hot, 0);75347530 pci_enable_wake(pdev, PCI_D3cold, 0);7535753175327532+ if (igc_is_device_id_i226(hw))75337533+ pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2);75347534+75367535 if (igc_init_interrupt_scheme(adapter, true)) {75377536 netdev_err(netdev, "Unable to allocate memory for queues\n");75387537 return -ENOMEM;···7660765376617654 pci_enable_wake(pdev, PCI_D3hot, 0);76627655 pci_enable_wake(pdev, PCI_D3cold, 0);76567656+76577657+ if (igc_is_device_id_i226(hw))76587658+ pci_disable_link_state_locked(pdev, PCIE_LINK_STATE_L1_2);7663765976647660 /* In case of PCI error, adapter loses its HW address76657661 * so we should re-assign it here.
···1705170517061706 clear_bit(WX_FLAG_FDIR_HASH, wx->flags);1707170717081708+ wx->ring_feature[RING_F_FDIR].indices = 1;17081709 /* Use Flow Director in addition to RSS to ensure the best17091710 * distribution of flows across cores, even when an FDIR flow17101711 * isn't matched.···17471746 */17481747static int wx_acquire_msix_vectors(struct wx *wx)17491748{17501750- struct irq_affinity affd = { .pre_vectors = 1 };17491749+ struct irq_affinity affd = { .post_vectors = 1 };17511750 int nvecs, i;1752175117531752 /* We start by asking for one vector per queue pair */···17841783 return nvecs;17851784 }1786178517871787- wx->msix_entry->entry = 0;17881788- wx->msix_entry->vector = pci_irq_vector(wx->pdev, 0);17891786 nvecs -= 1;17901787 for (i = 0; i < nvecs; i++) {17911788 wx->msix_q_entries[i].entry = i;17921792- wx->msix_q_entries[i].vector = pci_irq_vector(wx->pdev, i + 1);17891789+ wx->msix_q_entries[i].vector = pci_irq_vector(wx->pdev, i);17931790 }1794179117951792 wx->num_q_vectors = nvecs;17931793+17941794+ wx->msix_entry->entry = nvecs;17951795+ wx->msix_entry->vector = pci_irq_vector(wx->pdev, nvecs);17961796+17971797+ if (test_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags)) {17981798+ wx->msix_entry->entry = 0;17991799+ wx->msix_entry->vector = pci_irq_vector(wx->pdev, 0);18001800+ wx->msix_q_entries[0].entry = 0;18011801+ wx->msix_q_entries[0].vector = pci_irq_vector(wx->pdev, 1);18021802+ }1796180317971804 return 0;17981805}···2300229123012292 if (direction == -1) {23022293 /* other causes */22942294+ if (test_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags))22952295+ msix_vector = 0;23032296 msix_vector |= WX_PX_IVAR_ALLOC_VAL;23042297 index = 0;23052298 ivar = rd32(wx, WX_PX_MISC_IVAR);···23102299 wr32(wx, WX_PX_MISC_IVAR, ivar);23112300 } else {23122301 /* tx or rx causes */23132313- if (!(wx->mac.type == wx_mac_em && wx->num_vfs == 7))23142314- msix_vector += 1; /* offset for queue vectors */23152302 msix_vector |= WX_PX_IVAR_ALLOC_VAL;23162303 index = ((16 * 
(queue & 1)) + (8 * direction));23172304 ivar = rd32(wx, WX_PX_IVAR(queue >> 1));···2348233923492340 itr_reg |= WX_PX_ITR_CNT_WDIS;2350234123512351- wr32(wx, WX_PX_ITR(v_idx + 1), itr_reg);23422342+ wr32(wx, WX_PX_ITR(v_idx), itr_reg);23522343}2353234423542345/**···24012392 wx_write_eitr(q_vector);24022393 }2403239424042404- wx_set_ivar(wx, -1, 0, 0);23952395+ wx_set_ivar(wx, -1, 0, v_idx);24052396 if (pdev->msix_enabled)24062406- wr32(wx, WX_PX_ITR(0), 1950);23972397+ wr32(wx, WX_PX_ITR(v_idx), 1950);24072398}24082399EXPORT_SYMBOL(wx_configure_vectors);24092400
+4
drivers/net/ethernet/wangxun/libwx/wx_sriov.c
···6464 wr32m(wx, WX_PSR_VM_CTL, WX_PSR_VM_CTL_POOL_MASK, 0);6565 wx->ring_feature[RING_F_VMDQ].offset = 0;66666767+ clear_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags);6768 clear_bit(WX_FLAG_SRIOV_ENABLED, wx->flags);6869 /* Disable VMDq flag so device will be set in NM mode */6970 if (wx->ring_feature[RING_F_VMDQ].limit == 1)···78777978 set_bit(WX_FLAG_SRIOV_ENABLED, wx->flags);8079 dev_info(&wx->pdev->dev, "SR-IOV enabled with %d VFs\n", num_vfs);8080+8181+ if (num_vfs == 7 && wx->mac.type == wx_mac_em)8282+ set_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags);81838284 /* Enable VMDq flag so device will be set in VM mode */8385 set_bit(WX_FLAG_VMDQ_ENABLED, wx->flags);
···161161 if (queues)162162 wx_intr_enable(wx, NGBE_INTR_ALL);163163 else164164- wx_intr_enable(wx, NGBE_INTR_MISC);164164+ wx_intr_enable(wx, NGBE_INTR_MISC(wx));165165}166166167167/**···286286 * for queue. But when num_vfs == 7, vector[1] is assigned to vf6.287287 * Misc and queue should reuse interrupt vector[0].288288 */289289- if (wx->num_vfs == 7)289289+ if (test_bit(WX_FLAG_IRQ_VECTOR_SHARED, wx->flags))290290 err = request_irq(wx->msix_entry->vector,291291 ngbe_misc_and_queue, 0, netdev->name, wx);292292 else
···692692{693693 u8 irqstat;694694 u8 rtc_control;695695+ unsigned long flags;695696696696- spin_lock(&rtc_lock);697697+ /* We cannot use spin_lock() here, as cmos_interrupt() is also called698698+ * in a non-irq context.699699+ */700700+ spin_lock_irqsave(&rtc_lock, flags);697701698702 /* When the HPET interrupt handler calls us, the interrupt699703 * status is passed as arg1 instead of the irq number. But···731727 hpet_mask_rtc_irq_bit(RTC_AIE);732728 CMOS_READ(RTC_INTR_FLAGS);733729 }734734- spin_unlock(&rtc_lock);730730+ spin_unlock_irqrestore(&rtc_lock, flags);735731736732 if (is_intr(irqstat)) {737733 rtc_update_irq(p, 1, irqstat);···12991295 * ACK the rtc irq here13001296 */13011297 if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {13021302- local_irq_disable();13031298 cmos_interrupt(0, (void *)cmos->rtc);13041304- local_irq_enable();13051299 return;13061300 }13071301
···7272 dev->parent = parent_dev;7373 dev->bus = &serial_base_bus_type;7474 dev->release = release;7575+ device_set_of_node_from_dev(dev, parent_dev);75767677 if (!serial_base_initialized) {7778 dev_dbg(port->dev, "uart_add_one_port() called before arch_initcall()?\n");
+1-1
drivers/tty/vt/ucs.c
···206206207207/**208208 * ucs_get_fallback() - Get a substitution for the provided Unicode character209209- * @base: Base Unicode code point (UCS-4)209209+ * @cp: Unicode code point (UCS-4)210210 *211211 * Get a simpler fallback character for the provided Unicode character.212212 * This is used for terminal display when corresponding glyph is unavailable.
···621621 if (s)622622 s->ret = ret;623623624624- if (trans)624624+ if (trans &&625625+ !(flags & FSCK_ERR_NO_LOG) &&626626+ ret == -BCH_ERR_fsck_fix)625627 ret = bch2_trans_log_str(trans, bch2_sb_error_strs[err]) ?: ret;626628err_unlock:627629 mutex_unlock(&c->fsck_error_msgs_lock);
···135135136136bool __bch2_snapshot_is_ancestor(struct bch_fs *c, u32 id, u32 ancestor)137137{138138- bool ret;138138+#ifdef CONFIG_BCACHEFS_DEBUG139139+ u32 orig_id = id;140140+#endif139141140142 guard(rcu)();141143 struct snapshot_table *t = rcu_dereference(c->snapshots);···149147 while (id && id < ancestor - IS_ANCESTOR_BITMAP)150148 id = get_ancestor_below(t, id, ancestor);151149152152- ret = id && id < ancestor150150+ bool ret = id && id < ancestor153151 ? test_ancestor_bitmap(t, id, ancestor)154152 : id == ancestor;155153156156- EBUG_ON(ret != __bch2_snapshot_is_ancestor_early(t, id, ancestor));154154+ EBUG_ON(ret != __bch2_snapshot_is_ancestor_early(t, orig_id, ancestor));157155 return ret;158156}159157···871869872870 for_each_btree_key_norestart(trans, iter, BTREE_ID_snapshot_trees, POS_MIN,873871 0, k, ret) {874874- if (le32_to_cpu(bkey_s_c_to_snapshot_tree(k).v->root_snapshot) == id) {872872+ if (k.k->type == KEY_TYPE_snapshot_tree &&873873+ le32_to_cpu(bkey_s_c_to_snapshot_tree(k).v->root_snapshot) == id) {875874 tree_id = k.k->p.offset;876875 break;877876 }···900897901898 for_each_btree_key_norestart(trans, iter, BTREE_ID_subvolumes, POS_MIN,902899 0, k, ret) {903903- if (le32_to_cpu(bkey_s_c_to_subvolume(k).v->snapshot) == id) {900900+ if (k.k->type == KEY_TYPE_subvolume &&901901+ le32_to_cpu(bkey_s_c_to_subvolume(k).v->snapshot) == id) {904902 snapshot->v.subvol = cpu_to_le32(k.k->p.offset);905903 SET_BCH_SNAPSHOT_SUBVOL(&snapshot->v, true);906904 break;
+11-2
fs/bcachefs/super.c
···210210static int bch2_dev_sysfs_online(struct bch_fs *, struct bch_dev *);211211static void bch2_dev_io_ref_stop(struct bch_dev *, int);212212static void __bch2_dev_read_only(struct bch_fs *, struct bch_dev *);213213-static int bch2_fs_init_rw(struct bch_fs *);214213215214struct bch_fs *bch2_dev_to_fs(dev_t dev)216215{···793794 return ret;794795}795796796796-static int bch2_fs_init_rw(struct bch_fs *c)797797+int bch2_fs_init_rw(struct bch_fs *c)797798{798799 if (test_bit(BCH_FS_rw_init_done, &c->flags))799800 return 0;···10131014 bch2_fs_vfs_init(c);10141015 if (ret)10151016 goto err;10171017+10181018+ if (go_rw_in_recovery(c)) {10191019+ /*10201020+ * start workqueues/kworkers early - kthread creation checks for10211021+ * pending signals, which is _very_ annoying10221022+ */10231023+ ret = bch2_fs_init_rw(c);10241024+ if (ret)10251025+ goto err;10261026+ }1016102710171028#ifdef CONFIG_UNICODE10181029 /* Default encoding until we can potentially have more as an option. */
···99#include "fuse_i.h"1010#include "dev_uring_i.h"11111212+#include <linux/dax.h>1213#include <linux/pagemap.h>1314#include <linux/slab.h>1415#include <linux/file.h>···162161163162 /* Will write inode on close/munmap and in all other dirtiers */164163 WARN_ON(inode->i_state & I_DIRTY_INODE);164164+165165+ if (FUSE_IS_DAX(inode))166166+ dax_break_layout_final(inode);165167166168 truncate_inode_pages_final(&inode->i_data);167169 clear_inode(inode);
+84-34
fs/nfs/flexfilelayout/flexfilelayout.c
···11051105}1106110611071107static int ff_layout_async_handle_error_v4(struct rpc_task *task,11081108+ u32 op_status,11081109 struct nfs4_state *state,11091110 struct nfs_client *clp,11101111 struct pnfs_layout_segment *lseg,···11161115 struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);11171116 struct nfs4_slot_table *tbl = &clp->cl_session->fc_slot_table;1118111711191119- switch (task->tk_status) {11201120- case -NFS4ERR_BADSESSION:11211121- case -NFS4ERR_BADSLOT:11221122- case -NFS4ERR_BAD_HIGH_SLOT:11231123- case -NFS4ERR_DEADSESSION:11241124- case -NFS4ERR_CONN_NOT_BOUND_TO_SESSION:11251125- case -NFS4ERR_SEQ_FALSE_RETRY:11261126- case -NFS4ERR_SEQ_MISORDERED:11181118+ switch (op_status) {11191119+ case NFS4_OK:11201120+ case NFS4ERR_NXIO:11211121+ break;11221122+ case NFSERR_PERM:11231123+ if (!task->tk_xprt)11241124+ break;11251125+ xprt_force_disconnect(task->tk_xprt);11261126+ goto out_retry;11271127+ case NFS4ERR_BADSESSION:11281128+ case NFS4ERR_BADSLOT:11291129+ case NFS4ERR_BAD_HIGH_SLOT:11301130+ case NFS4ERR_DEADSESSION:11311131+ case NFS4ERR_CONN_NOT_BOUND_TO_SESSION:11321132+ case NFS4ERR_SEQ_FALSE_RETRY:11331133+ case NFS4ERR_SEQ_MISORDERED:11271134 dprintk("%s ERROR %d, Reset session. 
Exchangeid "11281135 "flags 0x%x\n", __func__, task->tk_status,11291136 clp->cl_exchange_flags);11301137 nfs4_schedule_session_recovery(clp->cl_session, task->tk_status);11311131- break;11321132- case -NFS4ERR_DELAY:11381138+ goto out_retry;11391139+ case NFS4ERR_DELAY:11331140 nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY);11341141 fallthrough;11351135- case -NFS4ERR_GRACE:11421142+ case NFS4ERR_GRACE:11361143 rpc_delay(task, FF_LAYOUT_POLL_RETRY_MAX);11371137- break;11381138- case -NFS4ERR_RETRY_UNCACHED_REP:11391139- break;11441144+ goto out_retry;11451145+ case NFS4ERR_RETRY_UNCACHED_REP:11461146+ goto out_retry;11401147 /* Invalidate Layout errors */11411141- case -NFS4ERR_PNFS_NO_LAYOUT:11421142- case -ESTALE: /* mapped NFS4ERR_STALE */11431143- case -EBADHANDLE: /* mapped NFS4ERR_BADHANDLE */11441144- case -EISDIR: /* mapped NFS4ERR_ISDIR */11451145- case -NFS4ERR_FHEXPIRED:11461146- case -NFS4ERR_WRONG_TYPE:11481148+ case NFS4ERR_PNFS_NO_LAYOUT:11491149+ case NFS4ERR_STALE:11501150+ case NFS4ERR_BADHANDLE:11511151+ case NFS4ERR_ISDIR:11521152+ case NFS4ERR_FHEXPIRED:11531153+ case NFS4ERR_WRONG_TYPE:11471154 dprintk("%s Invalid layout error %d\n", __func__,11481155 task->tk_status);11491156 /*···11641155 pnfs_destroy_layout(NFS_I(inode));11651156 rpc_wake_up(&tbl->slot_tbl_waitq);11661157 goto reset;11581158+ default:11591159+ break;11601160+ }11611161+11621162+ switch (task->tk_status) {11671163 /* RPC connection errors */11681164 case -ENETDOWN:11691165 case -ENETUNREACH:···11881174 nfs4_delete_deviceid(devid->ld, devid->nfs_client,11891175 &devid->deviceid);11901176 rpc_wake_up(&tbl->slot_tbl_waitq);11911191- fallthrough;11771177+ break;11921178 default:11931193- if (ff_layout_avoid_mds_available_ds(lseg))11941194- return -NFS4ERR_RESET_TO_PNFS;11951195-reset:11961196- dprintk("%s Retry through MDS. 
Error %d\n", __func__,11971197- task->tk_status);11981198- return -NFS4ERR_RESET_TO_MDS;11791179+ break;11991180 }11811181+11821182+ if (ff_layout_avoid_mds_available_ds(lseg))11831183+ return -NFS4ERR_RESET_TO_PNFS;11841184+reset:11851185+ dprintk("%s Retry through MDS. Error %d\n", __func__,11861186+ task->tk_status);11871187+ return -NFS4ERR_RESET_TO_MDS;11881188+11891189+out_retry:12001190 task->tk_status = 0;12011191 return -EAGAIN;12021192}1203119312041194/* Retry all errors through either pNFS or MDS except for -EJUKEBOX */12051195static int ff_layout_async_handle_error_v3(struct rpc_task *task,11961196+ u32 op_status,12061197 struct nfs_client *clp,12071198 struct pnfs_layout_segment *lseg,12081199 u32 idx)12091200{12101201 struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);12021202+12031203+ switch (op_status) {12041204+ case NFS_OK:12051205+ case NFSERR_NXIO:12061206+ break;12071207+ case NFSERR_PERM:12081208+ if (!task->tk_xprt)12091209+ break;12101210+ xprt_force_disconnect(task->tk_xprt);12111211+ goto out_retry;12121212+ case NFSERR_ACCES:12131213+ case NFSERR_BADHANDLE:12141214+ case NFSERR_FBIG:12151215+ case NFSERR_IO:12161216+ case NFSERR_NOSPC:12171217+ case NFSERR_ROFS:12181218+ case NFSERR_STALE:12191219+ goto out_reset_to_pnfs;12201220+ case NFSERR_JUKEBOX:12211221+ nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY);12221222+ goto out_retry;12231223+ default:12241224+ break;12251225+ }1211122612121227 switch (task->tk_status) {12131228 /* File access problems. Don't mark the device as unavailable */···12611218 nfs4_delete_deviceid(devid->ld, devid->nfs_client,12621219 &devid->deviceid);12631220 }12211221+out_reset_to_pnfs:12641222 /* FIXME: Need to prevent infinite looping here. 
*/12651223 return -NFS4ERR_RESET_TO_PNFS;12661224out_retry:···12721228}1273122912741230static int ff_layout_async_handle_error(struct rpc_task *task,12311231+ u32 op_status,12751232 struct nfs4_state *state,12761233 struct nfs_client *clp,12771234 struct pnfs_layout_segment *lseg,···1291124612921247 switch (vers) {12931248 case 3:12941294- return ff_layout_async_handle_error_v3(task, clp, lseg, idx);12951295- case 4:12961296- return ff_layout_async_handle_error_v4(task, state, clp,12491249+ return ff_layout_async_handle_error_v3(task, op_status, clp,12971250 lseg, idx);12511251+ case 4:12521252+ return ff_layout_async_handle_error_v4(task, op_status, state,12531253+ clp, lseg, idx);12981254 default:12991255 /* should never happen */13001256 WARN_ON_ONCE(1);···13481302 switch (status) {13491303 case NFS4ERR_DELAY:13501304 case NFS4ERR_GRACE:13051305+ case NFS4ERR_PERM:13511306 break;13521307 case NFS4ERR_NXIO:13531308 ff_layout_mark_ds_unreachable(lseg, idx);···13811334 trace_ff_layout_read_error(hdr, task->tk_status);13821335 }1383133613841384- err = ff_layout_async_handle_error(task, hdr->args.context->state,13371337+ err = ff_layout_async_handle_error(task, hdr->res.op_status,13381338+ hdr->args.context->state,13851339 hdr->ds_clp, hdr->lseg,13861340 hdr->pgio_mirror_idx);13871341···15551507 trace_ff_layout_write_error(hdr, task->tk_status);15561508 }1557150915581558- err = ff_layout_async_handle_error(task, hdr->args.context->state,15101510+ err = ff_layout_async_handle_error(task, hdr->res.op_status,15111511+ hdr->args.context->state,15591512 hdr->ds_clp, hdr->lseg,15601513 hdr->pgio_mirror_idx);15611514···16051556 trace_ff_layout_commit_error(data, task->tk_status);16061557 }1607155816081608- err = ff_layout_async_handle_error(task, NULL, data->ds_clp,16091609- data->lseg, data->ds_commit_index);15591559+ err = ff_layout_async_handle_error(task, data->res.op_status,15601560+ NULL, data->ds_clp, data->lseg,15611561+ data->ds_commit_index);1610156216111563 
trace_nfs4_pnfs_commit_ds(data, err);16121564 switch (err) {
+14-3
fs/nfs/inode.c
···25892589static int nfs_net_init(struct net *net)25902590{25912591 struct nfs_net *nn = net_generic(net, nfs_net_id);25922592+ int err;2592259325932594 nfs_clients_init(net);2594259525952596 if (!rpc_proc_register(net, &nn->rpcstats)) {25962596- nfs_clients_exit(net);25972597- return -ENOMEM;25972597+ err = -ENOMEM;25982598+ goto err_proc_rpc;25982599 }2599260026002600- return nfs_fs_proc_net_init(net);26012601+ err = nfs_fs_proc_net_init(net);26022602+ if (err)26032603+ goto err_proc_nfs;26042604+26052605+ return 0;26062606+26072607+err_proc_nfs:26082608+ rpc_proc_unregister(net, "nfs");26092609+err_proc_rpc:26102610+ nfs_clients_exit(net);26112611+ return err;26012612}2602261326032614static void nfs_net_exit(struct net *net)
···21822182 categories |= PAGE_IS_FILE;21832183 }2184218421852185- if (is_zero_pfn(pmd_pfn(pmd)))21852185+ if (is_huge_zero_pmd(pmd))21862186 categories |= PAGE_IS_PFNZERO;21872187 if (pmd_soft_dirty(pmd))21882188 categories |= PAGE_IS_SOFT_DIRTY;
+1
fs/smb/client/cifsglob.h
···709709struct TCP_Server_Info {710710 struct list_head tcp_ses_list;711711 struct list_head smb_ses_list;712712+ struct list_head rlist; /* reconnect list */712713 spinlock_t srv_lock; /* protect anything here that is not protected */713714 __u64 conn_id; /* connection identifier (useful for debugging) */714715 int srv_count; /* reference counter */
+36-22
fs/smb/client/connect.c
···124124 (SMB_INTERFACE_POLL_INTERVAL * HZ));125125}126126127127+#define set_need_reco(server) \128128+do { \129129+ spin_lock(&server->srv_lock); \130130+ if (server->tcpStatus != CifsExiting) \131131+ server->tcpStatus = CifsNeedReconnect; \132132+ spin_unlock(&server->srv_lock); \133133+} while (0)134134+127135/*128136 * Update the tcpStatus for the server.129137 * This is used to signal the cifsd thread to call cifs_reconnect···145137cifs_signal_cifsd_for_reconnect(struct TCP_Server_Info *server,146138 bool all_channels)147139{148148- struct TCP_Server_Info *pserver;140140+ struct TCP_Server_Info *nserver;149141 struct cifs_ses *ses;142142+ LIST_HEAD(reco);150143 int i;151151-152152- /* If server is a channel, select the primary channel */153153- pserver = SERVER_IS_CHAN(server) ? server->primary_server : server;154144155145 /* if we need to signal just this channel */156146 if (!all_channels) {157157- spin_lock(&server->srv_lock);158158- if (server->tcpStatus != CifsExiting)159159- server->tcpStatus = CifsNeedReconnect;160160- spin_unlock(&server->srv_lock);147147+ set_need_reco(server);161148 return;162149 }163150164164- spin_lock(&cifs_tcp_ses_lock);165165- list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) {166166- if (cifs_ses_exiting(ses))167167- continue;168168- spin_lock(&ses->chan_lock);169169- for (i = 0; i < ses->chan_count; i++) {170170- if (!ses->chans[i].server)151151+ if (SERVER_IS_CHAN(server))152152+ server = server->primary_server;153153+ scoped_guard(spinlock, &cifs_tcp_ses_lock) {154154+ set_need_reco(server);155155+ list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {156156+ spin_lock(&ses->ses_lock);157157+ if (ses->ses_status == SES_EXITING) {158158+ spin_unlock(&ses->ses_lock);171159 continue;172172-173173- spin_lock(&ses->chans[i].server->srv_lock);174174- if (ses->chans[i].server->tcpStatus != CifsExiting)175175- ses->chans[i].server->tcpStatus = CifsNeedReconnect;176176- 
spin_unlock(&ses->chans[i].server->srv_lock);160160+ }161161+ spin_lock(&ses->chan_lock);162162+ for (i = 1; i < ses->chan_count; i++) {163163+ nserver = ses->chans[i].server;164164+ if (!nserver)165165+ continue;166166+ nserver->srv_count++;167167+ list_add(&nserver->rlist, &reco);168168+ }169169+ spin_unlock(&ses->chan_lock);170170+ spin_unlock(&ses->ses_lock);177171 }178178- spin_unlock(&ses->chan_lock);179172 }180180- spin_unlock(&cifs_tcp_ses_lock);173173+174174+ list_for_each_entry_safe(server, nserver, &reco, rlist) {175175+ list_del_init(&server->rlist);176176+ set_need_reco(server);177177+ cifs_put_tcp_session(server, 0);178178+ }181179}182180183181/*
+4-16
fs/smb/client/reparse.c
···875875 abs_path += sizeof("\\DosDevices\\")-1;876876 else if (strstarts(abs_path, "\\GLOBAL??\\"))877877 abs_path += sizeof("\\GLOBAL??\\")-1;878878- else {879879- /* Unhandled absolute symlink, points outside of DOS/Win32 */880880- cifs_dbg(VFS,881881- "absolute symlink '%s' cannot be converted from NT format "882882- "because points to unknown target\n",883883- smb_target);884884- rc = -EIO;885885- goto out;886886- }878878+ else879879+ goto out_unhandled_target;887880888881 /* Sometimes path separator after \?? is double backslash */889882 if (abs_path[0] == '\\')···903910 abs_path++;904911 abs_path[0] = drive_letter;905912 } else {906906- /* Unhandled absolute symlink. Report an error. */907907- cifs_dbg(VFS,908908- "absolute symlink '%s' cannot be converted from NT format "909909- "because points to unknown target\n",910910- smb_target);911911- rc = -EIO;912912- goto out;913913+ goto out_unhandled_target;913914 }914915915916 abs_path_len = strlen(abs_path)+1;···953966 * These paths have same format as Linux symlinks, so no954967 * conversion is needed.955968 */969969+out_unhandled_target:956970 linux_target = smb_target;957971 smb_target = NULL;958972 }
+57-104
fs/smb/client/smbdirect.c
···907907 .local_dma_lkey = sc->ib.pd->local_dma_lkey,908908 .direction = DMA_TO_DEVICE,909909 };910910+ size_t payload_len = umin(*_remaining_data_length,911911+ sp->max_send_size - sizeof(*packet));910912911911- rc = smb_extract_iter_to_rdma(iter, *_remaining_data_length,913913+ rc = smb_extract_iter_to_rdma(iter, payload_len,912914 &extract);913915 if (rc < 0)914916 goto err_dma;···1013101110141012 info->count_send_empty++;10151013 return smbd_post_send_iter(info, NULL, &remaining_data_length);10141014+}10151015+10161016+static int smbd_post_send_full_iter(struct smbd_connection *info,10171017+ struct iov_iter *iter,10181018+ int *_remaining_data_length)10191019+{10201020+ int rc = 0;10211021+10221022+ /*10231023+ * smbd_post_send_iter() respects the10241024+ * negotiated max_send_size, so we need to10251025+ * loop until the full iter is posted10261026+ */10271027+10281028+ while (iov_iter_count(iter) > 0) {10291029+ rc = smbd_post_send_iter(info, iter, _remaining_data_length);10301030+ if (rc < 0)10311031+ break;10321032+ }10331033+10341034+ return rc;10161035}1017103610181037/*···14751452 char name[MAX_NAME_LEN];14761453 int rc;1477145414551455+ if (WARN_ON_ONCE(sp->max_recv_size < sizeof(struct smbdirect_data_transfer)))14561456+ return -ENOMEM;14571457+14781458 scnprintf(name, MAX_NAME_LEN, "smbd_request_%p", info);14791459 info->request_cache =14801460 kmem_cache_create(···14951469 goto out1;1496147014971471 scnprintf(name, MAX_NAME_LEN, "smbd_response_%p", info);14721472+14731473+ struct kmem_cache_args response_args = {14741474+ .align = __alignof__(struct smbd_response),14751475+ .useroffset = (offsetof(struct smbd_response, packet) +14761476+ sizeof(struct smbdirect_data_transfer)),14771477+ .usersize = sp->max_recv_size - sizeof(struct smbdirect_data_transfer),14781478+ };14981479 info->response_cache =14991499- kmem_cache_create(15001500- name,15011501- sizeof(struct smbd_response) +15021502- sp->max_recv_size,15031503- 0, SLAB_HWCACHE_ALIGN, 
NULL);14801480+ kmem_cache_create(name,14811481+ sizeof(struct smbd_response) + sp->max_recv_size,14821482+ &response_args, SLAB_HWCACHE_ALIGN);15041483 if (!info->response_cache)15051484 goto out2;15061485···17781747}1779174817801749/*17811781- * Receive data from receive reassembly queue17501750+ * Receive data from the transport's receive reassembly queue17821751 * All the incoming data packets are placed in reassembly queue17831783- * buf: the buffer to read data into17521752+ * iter: the buffer to read data into17841753 * size: the length of data to read17851754 * return value: actual data read17861786- * Note: this implementation copies the data from reassebmly queue to receive17551755+ *17561756+ * Note: this implementation copies the data from reassembly queue to receive17871757 * buffers used by upper layer. This is not the optimal code path. A better way17881758 * to do it is to not have upper layer allocate its receive buffers but rather17891759 * borrow the buffer from reassembly queue, and return it after data is17901760 * consumed. But this will require more changes to upper layer code, and also17911761 * need to consider packet boundaries while they still being reassembled.17921762 */17931793-static int smbd_recv_buf(struct smbd_connection *info, char *buf,17941794- unsigned int size)17631763+int smbd_recv(struct smbd_connection *info, struct msghdr *msg)17951764{17961765 struct smbdirect_socket *sc = &info->socket;17971766 struct smbd_response *response;17981767 struct smbdirect_data_transfer *data_transfer;17681768+ size_t size = iov_iter_count(&msg->msg_iter);17991769 int to_copy, to_read, data_read, offset;18001770 u32 data_length, remaining_data_length, data_offset;18011771 int rc;17721772+17731773+ if (WARN_ON_ONCE(iov_iter_rw(&msg->msg_iter) == WRITE))17741774+ return -EINVAL; /* It's a bug in upper layer to get there */1802177518031776again:18041777 /*···18101775 * the only one reading from the front of the queue. 
The transport18111776 * may add more entries to the back of the queue at the same time18121777 */18131813- log_read(INFO, "size=%d info->reassembly_data_length=%d\n", size,17781778+ log_read(INFO, "size=%zd info->reassembly_data_length=%d\n", size,18141779 info->reassembly_data_length);18151780 if (info->reassembly_data_length >= size) {18161781 int queue_length;···18481813 if (response->first_segment && size == 4) {18491814 unsigned int rfc1002_len =18501815 data_length + remaining_data_length;18511851- *((__be32 *)buf) = cpu_to_be32(rfc1002_len);18161816+ __be32 rfc1002_hdr = cpu_to_be32(rfc1002_len);18171817+ if (copy_to_iter(&rfc1002_hdr, sizeof(rfc1002_hdr),18181818+ &msg->msg_iter) != sizeof(rfc1002_hdr))18191819+ return -EFAULT;18521820 data_read = 4;18531821 response->first_segment = false;18541822 log_read(INFO, "returning rfc1002 length %d\n",···18601822 }1861182318621824 to_copy = min_t(int, data_length - offset, to_read);18631863- memcpy(18641864- buf + data_read,18651865- (char *)data_transfer + data_offset + offset,18661866- to_copy);18251825+ if (copy_to_iter((char *)data_transfer + data_offset + offset,18261826+ to_copy, &msg->msg_iter) != to_copy)18271827+ return -EFAULT;1867182818681829 /* move on to the next buffer? 
*/18691830 if (to_copy == data_length - offset) {···19281891}1929189219301893/*19311931- * Receive a page from receive reassembly queue19321932- * page: the page to read data into19331933- * to_read: the length of data to read19341934- * return value: actual data read19351935- */19361936-static int smbd_recv_page(struct smbd_connection *info,19371937- struct page *page, unsigned int page_offset,19381938- unsigned int to_read)19391939-{19401940- struct smbdirect_socket *sc = &info->socket;19411941- int ret;19421942- char *to_address;19431943- void *page_address;19441944-19451945- /* make sure we have the page ready for read */19461946- ret = wait_event_interruptible(19471947- info->wait_reassembly_queue,19481948- info->reassembly_data_length >= to_read ||19491949- sc->status != SMBDIRECT_SOCKET_CONNECTED);19501950- if (ret)19511951- return ret;19521952-19531953- /* now we can read from reassembly queue and not sleep */19541954- page_address = kmap_atomic(page);19551955- to_address = (char *) page_address + page_offset;19561956-19571957- log_read(INFO, "reading from page=%p address=%p to_read=%d\n",19581958- page, to_address, to_read);19591959-19601960- ret = smbd_recv_buf(info, to_address, to_read);19611961- kunmap_atomic(page_address);19621962-19631963- return ret;19641964-}19651965-19661966-/*19671967- * Receive data from transport19681968- * msg: a msghdr point to the buffer, can be ITER_KVEC or ITER_BVEC19691969- * return: total bytes read, or 0. 
SMB Direct will not do partial read.19701970- */19711971-int smbd_recv(struct smbd_connection *info, struct msghdr *msg)19721972-{19731973- char *buf;19741974- struct page *page;19751975- unsigned int to_read, page_offset;19761976- int rc;19771977-19781978- if (iov_iter_rw(&msg->msg_iter) == WRITE) {19791979- /* It's a bug in upper layer to get there */19801980- cifs_dbg(VFS, "Invalid msg iter dir %u\n",19811981- iov_iter_rw(&msg->msg_iter));19821982- rc = -EINVAL;19831983- goto out;19841984- }19851985-19861986- switch (iov_iter_type(&msg->msg_iter)) {19871987- case ITER_KVEC:19881988- buf = msg->msg_iter.kvec->iov_base;19891989- to_read = msg->msg_iter.kvec->iov_len;19901990- rc = smbd_recv_buf(info, buf, to_read);19911991- break;19921992-19931993- case ITER_BVEC:19941994- page = msg->msg_iter.bvec->bv_page;19951995- page_offset = msg->msg_iter.bvec->bv_offset;19961996- to_read = msg->msg_iter.bvec->bv_len;19971997- rc = smbd_recv_page(info, page, page_offset, to_read);19981998- break;19991999-20002000- default:20012001- /* It's a bug in upper layer to get there */20022002- cifs_dbg(VFS, "Invalid msg type %d\n",20032003- iov_iter_type(&msg->msg_iter));20042004- rc = -EINVAL;20052005- }20062006-20072007-out:20082008- /* SMBDirect will read it all or nothing */20092009- if (rc > 0)20102010- msg->msg_iter.count = 0;20112011- return rc;20122012-}20132013-20142014-/*20151894 * Send data to transport20161895 * Each rqst is transported as a SMBDirect payload20171896 * rqst: the data to write···19852032 klen += rqst->rq_iov[i].iov_len;19862033 iov_iter_kvec(&iter, ITER_SOURCE, rqst->rq_iov, rqst->rq_nvec, klen);1987203419881988- rc = smbd_post_send_iter(info, &iter, &remaining_data_length);20352035+ rc = smbd_post_send_full_iter(info, &iter, &remaining_data_length);19892036 if (rc < 0)19902037 break;1991203819922039 if (iov_iter_count(&rqst->rq_iter) > 0) {19932040 /* And then the data pages if there are any */19941994- rc = smbd_post_send_iter(info, 
&rqst->rq_iter,19951995- &remaining_data_length);20412041+ rc = smbd_post_send_full_iter(info, &rqst->rq_iter,20422042+ &remaining_data_length);19962043 if (rc < 0)19972044 break;19982045 }
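The new smbd_post_send_full_iter() wrapper exists because each individual post is capped at the negotiated max_send_size, so the caller must loop until the iterator is drained. A minimal userspace sketch of that shape (names and the fixed limit are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the negotiated transport limit. */
#define MAX_SEND_SIZE 8

/* "Post" at most max_send bytes from the buffer, advancing the cursor;
 * returns the number of bytes consumed by this one post. */
static size_t post_one(const char **data, size_t *remaining, size_t max_send)
{
	size_t n = *remaining < max_send ? *remaining : max_send;

	*data += n;
	*remaining -= n;
	return n;
}

/* Loop until the full buffer is posted, mirroring the shape of
 * smbd_post_send_full_iter(): each post respects the size cap, so the
 * caller iterates until nothing remains. Returns the number of posts. */
static int post_full(const char *data, size_t len)
{
	size_t remaining = len;
	int posts = 0;

	while (remaining > 0) {
		post_one(&data, &remaining, MAX_SEND_SIZE);
		posts++;
	}
	return posts;
}
```

A 20-byte payload with an 8-byte cap goes out as three posts (8, 8, 4), which is why a single capped post cannot replace the loop.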
···3444344434453445 set_bit(XFS_AGSTATE_AGF_INIT, &pag->pag_opstate);34463446 }34473447+34473448#ifdef DEBUG34483448- else if (!xfs_is_shutdown(mp)) {34493449- ASSERT(pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks));34503450- ASSERT(pag->pagf_btreeblks == be32_to_cpu(agf->agf_btreeblks));34513451- ASSERT(pag->pagf_flcount == be32_to_cpu(agf->agf_flcount));34523452- ASSERT(pag->pagf_longest == be32_to_cpu(agf->agf_longest));34533453- ASSERT(pag->pagf_bno_level == be32_to_cpu(agf->agf_bno_level));34543454- ASSERT(pag->pagf_cnt_level == be32_to_cpu(agf->agf_cnt_level));34493449+ /*34503450+ * It's possible for the AGF to be out of sync if the block device is34513451+ * silently dropping writes. This can happen in fstests with dmflakey34523452+ * enabled, which allows the buffer to be cleaned and reclaimed by34533453+ * memory pressure and then re-read from disk here. We will get a34543454+ * stale version of the AGF from disk, and nothing good can happen from34553455+ * here. Hence if we detect this situation, immediately shut down the34563456+ * filesystem.34573457+ *34583458+ * This can also happen if we are already in the middle of a forced34593459+ * shutdown, so don't bother checking if we are already shut down.34603460+ */34613461+ if (!xfs_is_shutdown(pag_mount(pag))) {34623462+ bool ok = true;34633463+34643464+ ok &= pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks);34663466+ ok &= pag->pagf_btreeblks == be32_to_cpu(agf->agf_btreeblks);34673467+ ok &= pag->pagf_flcount == be32_to_cpu(agf->agf_flcount);34683468+ ok &= pag->pagf_longest == be32_to_cpu(agf->agf_longest);34693469+ ok &= pag->pagf_bno_level == be32_to_cpu(agf->agf_bno_level);34703470+ ok &= pag->pagf_cnt_level == be32_to_cpu(agf->agf_cnt_level);34713471+34723472+ if (XFS_IS_CORRUPT(pag_mount(pag), !ok)) {34733473+ xfs_ag_mark_sick(pag, XFS_SICK_AG_AGF);34743474+ xfs_trans_brelse(tp, agfbp);34753475+ 
xfs_force_shutdown(pag_mount(pag),34763476+ SHUTDOWN_CORRUPT_ONDISK);34773477+ return -EFSCORRUPTED;34783478+ }34553479 }34563456-#endif34803480+#endif /* DEBUG */34813481+34573482 if (agfbpp)34583483 *agfbpp = agfbp;34593484 else
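The hunk above replaces a series of independent ASSERTs with comparisons accumulated into one `ok` flag, so every cached field is checked (no short-circuiting) and corruption is reported with a single decision. A minimal sketch of the pattern, with illustrative field names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the cached per-AG state and the on-disk
 * header fields it must mirror. */
struct cached { uint32_t freeblks, flcount, longest; };
struct ondisk { uint32_t freeblks, flcount, longest; };

/* Accumulate every comparison into one boolean so all fields are
 * evaluated unconditionally, and the caller makes exactly one
 * "is this corrupt?" decision from the combined result. */
static bool cache_matches(const struct cached *c, const struct ondisk *d)
{
	bool ok = true;

	ok &= c->freeblks == d->freeblks;
	ok &= c->flcount == d->flcount;
	ok &= c->longest == d->longest;
	return ok;
}
```

Any single mismatched field flips the verdict, which in the kernel hunk feeds one XFS_IS_CORRUPT() check instead of several assertion sites.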
+27-4
fs/xfs/libxfs/xfs_ialloc.c
···28012801 set_bit(XFS_AGSTATE_AGI_INIT, &pag->pag_opstate);28022802 }2803280328042804+#ifdef DEBUG28042805 /*28052805- * It's possible for these to be out of sync if28062806- * we are in the middle of a forced shutdown.28062806+ * It's possible for the AGI to be out of sync if the block device is28072807+ * silently dropping writes. This can happen in fstests with dmflakey28082808+ * enabled, which allows the buffer to be cleaned and reclaimed by28092809+ * memory pressure and then re-read from disk here. We will get a28102810+ * stale version of the AGI from disk, and nothing good can happen from28112811+ * here. Hence if we detect this situation, immediately shut down the28122812+ * filesystem.28132813+ *28142814+ * This can also happen if we are already in the middle of a forced28152815+ * shutdown, so don't bother checking if we are already shut down.28072816 */28082808- ASSERT(pag->pagi_freecount == be32_to_cpu(agi->agi_freecount) ||28092809- xfs_is_shutdown(pag_mount(pag)));28172817+ if (!xfs_is_shutdown(pag_mount(pag))) {28182818+ bool ok = true;28192819+28202820+ ok &= pag->pagi_freecount == be32_to_cpu(agi->agi_freecount);28212821+ ok &= pag->pagi_count == be32_to_cpu(agi->agi_count);28222822+28232823+ if (XFS_IS_CORRUPT(pag_mount(pag), !ok)) {28242824+ xfs_ag_mark_sick(pag, XFS_SICK_AG_AGI);28252825+ xfs_trans_brelse(tp, agibp);28262826+ xfs_force_shutdown(pag_mount(pag),28272827+ SHUTDOWN_CORRUPT_ONDISK);28282828+ return -EFSCORRUPTED;28292829+ }28302830+ }28312831+#endif /* DEBUG */28322832+28102833 if (agibpp)28112834 *agibpp = agibp;28122835 else
-38
fs/xfs/xfs_buf.c
···20822082 return error;20832083}2084208420852085-/*20862086- * Push a single buffer on a delwri queue.20872087- *20882088- * The purpose of this function is to submit a single buffer of a delwri queue20892089- * and return with the buffer still on the original queue.20902090- *20912091- * The buffer locking and queue management logic between _delwri_pushbuf() and20922092- * _delwri_queue() guarantee that the buffer cannot be queued to another list20932093- * before returning.20942094- */20952095-int20962096-xfs_buf_delwri_pushbuf(20972097- struct xfs_buf *bp,20982098- struct list_head *buffer_list)20992099-{21002100- int error;21012101-21022102- ASSERT(bp->b_flags & _XBF_DELWRI_Q);21032103-21042104- trace_xfs_buf_delwri_pushbuf(bp, _RET_IP_);21052105-21062106- xfs_buf_lock(bp);21072107- bp->b_flags &= ~(_XBF_DELWRI_Q | XBF_ASYNC);21082108- bp->b_flags |= XBF_WRITE;21092109- xfs_buf_submit(bp);21102110-21112111- /*21122112- * The buffer is now locked, under I/O but still on the original delwri21132113- * queue. Wait for I/O completion, restore the DELWRI_Q flag and21142114- * return with the buffer unlocked and still on the original queue.21152115- */21162116- error = xfs_buf_iowait(bp);21172117- bp->b_flags |= _XBF_DELWRI_Q;21182118- xfs_buf_unlock(bp);21192119-21202120- return error;21212121-}21222122-21232085void xfs_buf_set_ref(struct xfs_buf *bp, int lru_ref)21242086{21252087 /*
-1
fs/xfs/xfs_buf.h
···326326void xfs_buf_delwri_queue_here(struct xfs_buf *bp, struct list_head *bl);327327extern int xfs_buf_delwri_submit(struct list_head *);328328extern int xfs_buf_delwri_submit_nowait(struct list_head *);329329-extern int xfs_buf_delwri_pushbuf(struct xfs_buf *, struct list_head *);330329331330static inline xfs_daddr_t xfs_buf_daddr(struct xfs_buf *bp)332331{
+180-117
fs/xfs/xfs_buf_item.c
···3232 return container_of(lip, struct xfs_buf_log_item, bli_item);3333}34343535+static void3636+xfs_buf_item_get_format(3737+ struct xfs_buf_log_item *bip,3838+ int count)3939+{4040+ ASSERT(bip->bli_formats == NULL);4141+ bip->bli_format_count = count;4242+4343+ if (count == 1) {4444+ bip->bli_formats = &bip->__bli_format;4545+ return;4646+ }4747+4848+ bip->bli_formats = kzalloc(count * sizeof(struct xfs_buf_log_format),4949+ GFP_KERNEL | __GFP_NOFAIL);5050+}5151+5252+static void5353+xfs_buf_item_free_format(5454+ struct xfs_buf_log_item *bip)5555+{5656+ if (bip->bli_formats != &bip->__bli_format) {5757+ kfree(bip->bli_formats);5858+ bip->bli_formats = NULL;5959+ }6060+}6161+6262+static void6363+xfs_buf_item_free(6464+ struct xfs_buf_log_item *bip)6565+{6666+ xfs_buf_item_free_format(bip);6767+ kvfree(bip->bli_item.li_lv_shadow);6868+ kmem_cache_free(xfs_buf_item_cache, bip);6969+}7070+7171+/*7272+ * xfs_buf_item_relse() is called when the buf log item is no longer needed.7373+ */7474+static void7575+xfs_buf_item_relse(7676+ struct xfs_buf_log_item *bip)7777+{7878+ struct xfs_buf *bp = bip->bli_buf;7979+8080+ trace_xfs_buf_item_relse(bp, _RET_IP_);8181+8282+ ASSERT(!test_bit(XFS_LI_IN_AIL, &bip->bli_item.li_flags));8383+ ASSERT(atomic_read(&bip->bli_refcount) == 0);8484+8585+ bp->b_log_item = NULL;8686+ xfs_buf_rele(bp);8787+ xfs_buf_item_free(bip);8888+}8989+3590/* Is this log iovec plausibly large enough to contain the buffer log format? */3691bool3792xfs_buf_log_check_iovec(···445390}446391447392/*393393+ * For a stale BLI, process all the necessary completions that must be394394+ * performed when the final BLI reference goes away. 
The buffer will be395395+ * referenced and locked here - we return to the caller with the buffer still396396+ * referenced and locked for them to finalise processing of the buffer.397397+ */398398+static void399399+xfs_buf_item_finish_stale(400400+ struct xfs_buf_log_item *bip)401401+{402402+ struct xfs_buf *bp = bip->bli_buf;403403+ struct xfs_log_item *lip = &bip->bli_item;404404+405405+ ASSERT(bip->bli_flags & XFS_BLI_STALE);406406+ ASSERT(xfs_buf_islocked(bp));407407+ ASSERT(bp->b_flags & XBF_STALE);408408+ ASSERT(bip->__bli_format.blf_flags & XFS_BLF_CANCEL);409409+ ASSERT(list_empty(&lip->li_trans));410410+ ASSERT(!bp->b_transp);411411+412412+ if (bip->bli_flags & XFS_BLI_STALE_INODE) {413413+ xfs_buf_item_done(bp);414414+ xfs_buf_inode_iodone(bp);415415+ ASSERT(list_empty(&bp->b_li_list));416416+ return;417417+ }418418+419419+ /*420420+ * We may or may not be on the AIL here, xfs_trans_ail_delete() will do421421+ * the right thing regardless of the situation in which we are called.422422+ */423423+ xfs_trans_ail_delete(lip, SHUTDOWN_LOG_IO_ERROR);424424+ xfs_buf_item_relse(bip);425425+ ASSERT(bp->b_log_item == NULL);426426+}427427+428428+/*448429 * This is called to unpin the buffer associated with the buf log item which was449430 * previously pinned with a call to xfs_buf_item_pin(). We enter this function450431 * with a buffer pin count, a buffer reference and a BLI reference.···529438 }530439531440 if (stale) {532532- ASSERT(bip->bli_flags & XFS_BLI_STALE);533533- ASSERT(xfs_buf_islocked(bp));534534- ASSERT(bp->b_flags & XBF_STALE);535535- ASSERT(bip->__bli_format.blf_flags & XFS_BLF_CANCEL);536536- ASSERT(list_empty(&lip->li_trans));537537- ASSERT(!bp->b_transp);538538-539441 trace_xfs_buf_item_unpin_stale(bip);540442541443 /*···539455 * processing is complete.540456 */541457 xfs_buf_rele(bp);542542-543543- /*544544- * If we get called here because of an IO error, we may or may545545- * not have the item on the AIL. 
xfs_trans_ail_delete() will546546- * take care of that situation. xfs_trans_ail_delete() drops547547- * the AIL lock.548548- */549549- if (bip->bli_flags & XFS_BLI_STALE_INODE) {550550- xfs_buf_item_done(bp);551551- xfs_buf_inode_iodone(bp);552552- ASSERT(list_empty(&bp->b_li_list));553553- } else {554554- xfs_trans_ail_delete(lip, SHUTDOWN_LOG_IO_ERROR);555555- xfs_buf_item_relse(bp);556556- ASSERT(bp->b_log_item == NULL);557557- }458458+ xfs_buf_item_finish_stale(bip);558459 xfs_buf_relse(bp);559460 return;560461 }···612543 * Drop the buffer log item refcount and take appropriate action. This helper613544 * determines whether the bli must be freed or not, since a decrement to zero614545 * does not necessarily mean the bli is unused.615615- *616616- * Return true if the bli is freed, false otherwise.617546 */618618-bool547547+void619548xfs_buf_item_put(620549 struct xfs_buf_log_item *bip)621550{622622- struct xfs_log_item *lip = &bip->bli_item;623623- bool aborted;624624- bool dirty;551551+552552+ ASSERT(xfs_buf_islocked(bip->bli_buf));625553626554 /* drop the bli ref and return if it wasn't the last one */627555 if (!atomic_dec_and_test(&bip->bli_refcount))628628- return false;556556+ return;557557+558558+ /* If the BLI is in the AIL, then it is still dirty and in use */559559+ if (test_bit(XFS_LI_IN_AIL, &bip->bli_item.li_flags)) {560560+ ASSERT(bip->bli_flags & XFS_BLI_DIRTY);561561+ return;562562+ }629563630564 /*631631- * We dropped the last ref and must free the item if clean or aborted.632632- * If the bli is dirty and non-aborted, the buffer was clean in the633633- * transaction but still awaiting writeback from previous changes. In634634- * that case, the bli is freed on buffer writeback completion.565565+ * In shutdown conditions, we can be asked to free a dirty BLI that566566+ * isn't in the AIL. This can occur due to a checkpoint aborting a BLI567567+ * instead of inserting it into the AIL at checkpoint IO completion. 
If568568+ * there's another bli reference (e.g. a btree cursor holds a clean569569+ * reference) and it is released via xfs_trans_brelse(), we can get here570570+ * with that aborted, dirty BLI. In this case, it is safe to free the571571+ * dirty BLI immediately, as it is not in the AIL and there are no572572+ * other references to it.573573+ *574574+ * We should never get here with a stale BLI via that path as575575+ * xfs_trans_brelse() specifically holds onto stale buffers rather than576576+ * releasing them.635577 */636636- aborted = test_bit(XFS_LI_ABORTED, &lip->li_flags) ||637637- xlog_is_shutdown(lip->li_log);638638- dirty = bip->bli_flags & XFS_BLI_DIRTY;639639- if (dirty && !aborted)640640- return false;641641-642642- /*643643- * The bli is aborted or clean. An aborted item may be in the AIL644644- * regardless of dirty state. For example, consider an aborted645645- * transaction that invalidated a dirty bli and cleared the dirty646646- * state.647647- */648648- if (aborted)649649- xfs_trans_ail_delete(lip, 0);650650- xfs_buf_item_relse(bip->bli_buf);651651- return true;578578+ ASSERT(!(bip->bli_flags & XFS_BLI_DIRTY) ||579579+ test_bit(XFS_LI_ABORTED, &bip->bli_item.li_flags));580580+ ASSERT(!(bip->bli_flags & XFS_BLI_STALE));581581+ xfs_buf_item_relse(bip);652582}653583654584/*···668600 * if necessary but do not unlock the buffer. This is for support of669601 * xfs_trans_bhold(). Make sure the XFS_BLI_HOLD field is cleared if we don't670602 * free the item.603603+ *604604+ * If the XFS_BLI_STALE flag is set, the last reference to the BLI *must*605605+ * perform a completion abort of any objects attached to the buffer for IO606606+ * tracking purposes. This generally only happens in shutdown situations,607607+ * normally xfs_buf_item_unpin() will drop the last BLI reference and perform608608+ * completion processing. 
However, because transaction completion can race with609609+ * checkpoint completion during a shutdown, this release context may end up610610+ * being the last active reference to the BLI and so needs to perform this611611+ * cleanup.671612 */672613STATIC void673614xfs_buf_item_release(···684607{685608 struct xfs_buf_log_item *bip = BUF_ITEM(lip);686609 struct xfs_buf *bp = bip->bli_buf;687687- bool released;688610 bool hold = bip->bli_flags & XFS_BLI_HOLD;689611 bool stale = bip->bli_flags & XFS_BLI_STALE;690690-#if defined(DEBUG) || defined(XFS_WARN)691691- bool ordered = bip->bli_flags & XFS_BLI_ORDERED;692692- bool dirty = bip->bli_flags & XFS_BLI_DIRTY;693612 bool aborted = test_bit(XFS_LI_ABORTED,694613 &lip->li_flags);614614+ bool dirty = bip->bli_flags & XFS_BLI_DIRTY;615615+#if defined(DEBUG) || defined(XFS_WARN)616616+ bool ordered = bip->bli_flags & XFS_BLI_ORDERED;695617#endif696618697619 trace_xfs_buf_item_release(bip);620620+621621+ ASSERT(xfs_buf_islocked(bp));698622699623 /*700624 * The bli dirty state should match whether the blf has logged segments···712634 bp->b_transp = NULL;713635 bip->bli_flags &= ~(XFS_BLI_LOGGED | XFS_BLI_HOLD | XFS_BLI_ORDERED);714636637637+ /* If there are other references, then we have nothing to do. */638638+ if (!atomic_dec_and_test(&bip->bli_refcount))639639+ goto out_release;640640+715641 /*716716- * Unref the item and unlock the buffer unless held or stale. Stale717717- * buffers remain locked until final unpin unless the bli is freed by718718- * the unref call. The latter implies shutdown because buffer719719- * invalidation dirties the bli and transaction.642642+ * Stale buffer completion frees the BLI, unlocks and releases the643643+ * buffer. 
Neither the BLI or buffer are safe to reference after this644644+ * call, so there's nothing more we need to do here.645645+ *646646+ * If we get here with a stale buffer and references to the BLI remain,647647+ * we must not unlock the buffer as the last BLI reference owns lock648648+ * context, not us.720649 */721721- released = xfs_buf_item_put(bip);722722- if (hold || (stale && !released))650650+ if (stale) {651651+ xfs_buf_item_finish_stale(bip);652652+ xfs_buf_relse(bp);653653+ ASSERT(!hold);723654 return;724724- ASSERT(!stale || aborted);655655+ }656656+657657+ /*658658+ * Dirty or clean, aborted items are done and need to be removed from659659+ * the AIL and released. This frees the BLI, but leaves the buffer660660+ * locked and referenced.661661+ */662662+ if (aborted || xlog_is_shutdown(lip->li_log)) {663663+ ASSERT(list_empty(&bip->bli_buf->b_li_list));664664+ xfs_buf_item_done(bp);665665+ goto out_release;666666+ }667667+668668+ /*669669+ * Clean, unreferenced BLIs can be immediately freed, leaving the buffer670670+ * locked and referenced.671671+ *672672+ * Dirty, unreferenced BLIs *must* be in the AIL awaiting writeback.673673+ */674674+ if (!dirty)675675+ xfs_buf_item_relse(bip);676676+ else677677+ ASSERT(test_bit(XFS_LI_IN_AIL, &lip->li_flags));678678+679679+ /* Not safe to reference the BLI from here */680680+out_release:681681+ /*682682+ * If we get here with a stale buffer, we must not unlock the683683+ * buffer as the last BLI reference owns lock context, not us.684684+ */685685+ if (stale || hold)686686+ return;725687 xfs_buf_relse(bp);726688}727689···846728 .iop_committed = xfs_buf_item_committed,847729 .iop_push = xfs_buf_item_push,848730};849849-850850-STATIC void851851-xfs_buf_item_get_format(852852- struct xfs_buf_log_item *bip,853853- int count)854854-{855855- ASSERT(bip->bli_formats == NULL);856856- bip->bli_format_count = count;857857-858858- if (count == 1) {859859- bip->bli_formats = &bip->__bli_format;860860- return;861861- 
}862862-863863- bip->bli_formats = kzalloc(count * sizeof(struct xfs_buf_log_format),864864- GFP_KERNEL | __GFP_NOFAIL);865865-}866866-867867-STATIC void868868-xfs_buf_item_free_format(869869- struct xfs_buf_log_item *bip)870870-{871871- if (bip->bli_formats != &bip->__bli_format) {872872- kfree(bip->bli_formats);873873- bip->bli_formats = NULL;874874- }875875-}876731877732/*878733 * Allocate a new buf log item to go with the given buffer.···1067976 return false;1068977}106997810701070-STATIC void10711071-xfs_buf_item_free(10721072- struct xfs_buf_log_item *bip)10731073-{10741074- xfs_buf_item_free_format(bip);10751075- kvfree(bip->bli_item.li_lv_shadow);10761076- kmem_cache_free(xfs_buf_item_cache, bip);10771077-}10781078-10791079-/*10801080- * xfs_buf_item_relse() is called when the buf log item is no longer needed.10811081- */10821082-void10831083-xfs_buf_item_relse(10841084- struct xfs_buf *bp)10851085-{10861086- struct xfs_buf_log_item *bip = bp->b_log_item;10871087-10881088- trace_xfs_buf_item_relse(bp, _RET_IP_);10891089- ASSERT(!test_bit(XFS_LI_IN_AIL, &bip->bli_item.li_flags));10901090-10911091- if (atomic_read(&bip->bli_refcount))10921092- return;10931093- bp->b_log_item = NULL;10941094- xfs_buf_rele(bp);10951095- xfs_buf_item_free(bip);10961096-}10971097-1098979void1099980xfs_buf_item_done(1100981 struct xfs_buf *bp)···10861023 xfs_trans_ail_delete(&bp->b_log_item->bli_item,10871024 (bp->b_flags & _XBF_LOGRECOVERY) ? 0 :10881025 SHUTDOWN_CORRUPT_INCORE);10891089- xfs_buf_item_relse(bp);10261026+ xfs_buf_item_relse(bp->b_log_item);10901027}
···1398139813991399 ASSERT(XFS_DQ_IS_LOCKED(dqp));14001400 ASSERT(!completion_done(&dqp->q_flush));14011401+ ASSERT(atomic_read(&dqp->q_pincount) == 0);1401140214021403 trace_xfs_dqflush(dqp);14031403-14041404- xfs_qm_dqunpin_wait(dqp);14051405-14061404 fa = xfs_qm_dqflush_check(dqp);14071405 if (fa) {14081406 xfs_alert(mp, "corrupt dquot ID 0x%x in memory at %pS",
···979979 */980980 if (xlog_is_shutdown(ip->i_mount->m_log)) {981981 xfs_iunpin_wait(ip);982982+ /*983983+ * Avoid an ABBA deadlock on the inode cluster buffer vs984984+ * concurrent xfs_ifree_cluster() trying to mark the inode985985+ * stale. We don't need the inode locked to run the flush abort986986+ * code, but the flush abort needs to lock the cluster buffer.987987+ */988988+ xfs_iunlock(ip, XFS_ILOCK_EXCL);982989 xfs_iflush_shutdown_abort(ip);990990+ xfs_ilock(ip, XFS_ILOCK_EXCL);983991 goto reclaim;984992 }985993 if (xfs_ipincount(ip))
···758758 * completed and items removed from the AIL before the next push759759 * attempt.760760 */761761+ trace_xfs_inode_push_stale(ip, _RET_IP_);761762 return XFS_ITEM_PINNED;762763 }763764764764- if (xfs_ipincount(ip) > 0 || xfs_buf_ispinned(bp))765765+ if (xfs_ipincount(ip) > 0 || xfs_buf_ispinned(bp)) {766766+ trace_xfs_inode_push_pinned(ip, _RET_IP_);765767 return XFS_ITEM_PINNED;768768+ }766769767770 if (xfs_iflags_test(ip, XFS_IFLUSHING))768771 return XFS_ITEM_FLUSHING;
+3-1
fs/xfs/xfs_log_cil.c
···793793 struct xfs_log_item *lip = lv->lv_item;794794 xfs_lsn_t item_lsn;795795796796- if (aborted)796796+ if (aborted) {797797+ trace_xlog_ail_insert_abort(lip);797798 set_bit(XFS_LI_ABORTED, &lip->li_flags);799799+ }798800799801 if (lip->li_ops->flags & XFS_ITEM_RELEASE_WHEN_COMMITTED) {800802 lip->li_ops->iop_release(lip);
+4-15
fs/xfs/xfs_mru_cache.c
···320320 xfs_mru_cache_free_func_t free_func)321321{322322 struct xfs_mru_cache *mru = NULL;323323- int err = 0, grp;323323+ int grp;324324 unsigned int grp_time;325325326326 if (mrup)···341341 mru->lists = kzalloc(mru->grp_count * sizeof(*mru->lists),342342 GFP_KERNEL | __GFP_NOFAIL);343343 if (!mru->lists) {344344- err = -ENOMEM;345345- goto exit;344344+ kfree(mru);345345+ return -ENOMEM;346346 }347347348348 for (grp = 0; grp < mru->grp_count; grp++)···361361 mru->free_func = free_func;362362 mru->data = data;363363 *mrup = mru;364364-365365-exit:366366- if (err && mru && mru->lists)367367- kfree(mru->lists);368368- if (err && mru)369369- kfree(mru);370370-371371- return err;364364+ return 0;372365}373366374367/*···417424 struct xfs_mru_cache_elem *elem)418425{419426 int error = -EINVAL;420420-421421- ASSERT(mru && mru->lists);422422- if (!mru || !mru->lists)423423- goto out_free;424427425428 error = -ENOMEM;426429 if (radix_tree_preload(GFP_KERNEL))
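The xfs_mru_cache_create() hunk above replaces a shared `exit:` cleanup label (which had to re-test what was allocated) with direct unwinding at the only failure site. A sketch of that pattern with generic names, outside the kernel:

```c
#include <assert.h>
#include <stdlib.h>

struct cache {
	void **lists;
};

/* Two-step allocation: on the lone failure path, free exactly what was
 * allocated so far and return immediately. No shared cleanup label and
 * no "was this allocated?" re-checks are needed. */
static int cache_create(struct cache **out, size_t nlists)
{
	struct cache *c = calloc(1, sizeof(*c));

	if (!c)
		return -1;
	c->lists = calloc(nlists, sizeof(*c->lists));
	if (!c->lists) {
		free(c);	/* direct unwind: only the cache itself exists */
		return -1;
	}
	*out = c;
	return 0;
}
```

With a single failure point after the first allocation, the goto-based `if (err && mru && mru->lists)` dance the patch removes adds branches without adding safety.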
+19-67
fs/xfs/xfs_qm.c
···134134135135 dqp->q_flags |= XFS_DQFLAG_FREEING;136136137137+ xfs_qm_dqunpin_wait(dqp);137138 xfs_dqflock(dqp);138139139140 /*···466465 struct xfs_dquot *dqp = container_of(item,467466 struct xfs_dquot, q_lru);468467 struct xfs_qm_isolate *isol = arg;468468+ enum lru_status ret = LRU_SKIP;469469470470 if (!xfs_dqlock_nowait(dqp))471471 goto out_miss_busy;···478476 */479477 if (dqp->q_flags & XFS_DQFLAG_FREEING)480478 goto out_miss_unlock;479479+480480+ /*481481+ * If the dquot is pinned or dirty, rotate it to the end of the LRU to482482+ * give some time for it to be cleaned before we try to isolate it483483+ * again.484484+ */485485+ ret = LRU_ROTATE;486486+ if (XFS_DQ_IS_DIRTY(dqp) || atomic_read(&dqp->q_pincount) > 0) {487487+ goto out_miss_unlock;488488+ }481489482490 /*483491 * This dquot has acquired a reference in the meantime remove it from···504492 }505493506494 /*507507- * If the dquot is dirty, flush it. If it's already being flushed, just508508- * skip it so there is time for the IO to complete before we try to509509- * reclaim it again on the next LRU pass.495495+ * The dquot may still be under IO, in which case the flush lock will be496496+ * held. 
If we can't get the flush lock now, just skip over the dquot as497497+ * if it was dirty.510498 */511499 if (!xfs_dqflock_nowait(dqp))512500 goto out_miss_unlock;513501514514- if (XFS_DQ_IS_DIRTY(dqp)) {515515- struct xfs_buf *bp = NULL;516516- int error;517517-518518- trace_xfs_dqreclaim_dirty(dqp);519519-520520- /* we have to drop the LRU lock to flush the dquot */521521- spin_unlock(&lru->lock);522522-523523- error = xfs_dquot_use_attached_buf(dqp, &bp);524524- if (!bp || error == -EAGAIN) {525525- xfs_dqfunlock(dqp);526526- goto out_unlock_dirty;527527- }528528-529529- /*530530- * dqflush completes dqflock on error, and the delwri ioend531531- * does it on success.532532- */533533- error = xfs_qm_dqflush(dqp, bp);534534- if (error)535535- goto out_unlock_dirty;536536-537537- xfs_buf_delwri_queue(bp, &isol->buffers);538538- xfs_buf_relse(bp);539539- goto out_unlock_dirty;540540- }541541-502502+ ASSERT(!XFS_DQ_IS_DIRTY(dqp));542503 xfs_dquot_detach_buf(dqp);543504 xfs_dqfunlock(dqp);544505···533548out_miss_busy:534549 trace_xfs_dqreclaim_busy(dqp);535550 XFS_STATS_INC(dqp->q_mount, xs_qm_dqreclaim_misses);536536- return LRU_SKIP;537537-538538-out_unlock_dirty:539539- trace_xfs_dqreclaim_busy(dqp);540540- XFS_STATS_INC(dqp->q_mount, xs_qm_dqreclaim_misses);541541- xfs_dqunlock(dqp);542542- return LRU_RETRY;551551+ return ret;543552}544553545554static unsigned long···14651486 struct xfs_dquot *dqp,14661487 void *data)14671488{14681468- struct xfs_mount *mp = dqp->q_mount;14691489 struct list_head *buffer_list = data;14701490 struct xfs_buf *bp = NULL;14711491 int error = 0;···14751497 if (!XFS_DQ_IS_DIRTY(dqp))14761498 goto out_unlock;1477149914781478- /*14791479- * The only way the dquot is already flush locked by the time quotacheck14801480- * gets here is if reclaim flushed it before the dqadjust walk dirtied14811481- * it for the final time. 
Quotacheck collects all dquot bufs in the14821482- * local delwri queue before dquots are dirtied, so reclaim can't have14831483- * possibly queued it for I/O. The only way out is to push the buffer to14841484- * cycle the flush lock.14851485- */14861486- if (!xfs_dqflock_nowait(dqp)) {14871487- /* buf is pinned in-core by delwri list */14881488- error = xfs_buf_incore(mp->m_ddev_targp, dqp->q_blkno,14891489- mp->m_quotainfo->qi_dqchunklen, 0, &bp);14901490- if (error)14911491- goto out_unlock;14921492-14931493- if (!(bp->b_flags & _XBF_DELWRI_Q)) {14941494- error = -EAGAIN;14951495- xfs_buf_relse(bp);14961496- goto out_unlock;14971497- }14981498- xfs_buf_unlock(bp);14991499-15001500- xfs_buf_delwri_pushbuf(bp, buffer_list);15011501- xfs_buf_rele(bp);15021502-15031503- error = -EAGAIN;15041504- goto out_unlock;15051505- }15001500+ xfs_qm_dqunpin_wait(dqp);15011501+ xfs_dqflock(dqp);1506150215071503 error = xfs_dquot_use_attached_buf(dqp, &bp);15081504 if (error)
···135135#define UBLKSRV_IO_BUF_TOTAL_SIZE (1ULL << UBLKSRV_IO_BUF_TOTAL_BITS)136136137137/*138138- * zero copy requires 4k block size, and can remap ublk driver's io139139- * request into ublksrv's vm space138138+ * ublk server can register data buffers for incoming I/O requests with a sparse139139+ * io_uring buffer table. The request buffer can then be used as the data buffer140140+ * for io_uring operations via the fixed buffer index.141141+ * Note that the ublk server can never directly access the request data memory.142142+ *143143+ * To use this feature, the ublk server must first register a sparse buffer144144+ * table on an io_uring instance.145145+ * When an incoming ublk request is received, the ublk server submits a146146+ * UBLK_U_IO_REGISTER_IO_BUF command to that io_uring instance. The147147+ * ublksrv_io_cmd's q_id and tag specify the request whose buffer to register148148+ * and addr is the index in the io_uring's buffer table to install the buffer.149149+ * SQEs can now be submitted to the io_uring to read/write the request's buffer150150+ * by enabling fixed buffers (e.g. 
using IORING_OP_{READ,WRITE}_FIXED or151151+ * IORING_URING_CMD_FIXED) and passing the registered buffer index in buf_index.152152+ * Once the last io_uring operation using the request's buffer has completed,153153+ * the ublk server submits a UBLK_U_IO_UNREGISTER_IO_BUF command with q_id, tag,154154+ * and addr again specifying the request buffer to unregister.155155+ * The ublk request is completed when its buffer is unregistered from all156156+ * io_uring instances and the ublk server issues UBLK_U_IO_COMMIT_AND_FETCH_REQ.157157+ *158158+ * Not available for UBLK_F_UNPRIVILEGED_DEV, as a ublk server can leak159159+ * uninitialized kernel memory by not reading into the full request buffer.140160 */141161#define UBLK_F_SUPPORT_ZERO_COPY (1ULL << 0)142162···470450 __u64 sqe_addr)471451{472452 struct ublk_auto_buf_reg reg = {473473- .index = sqe_addr & 0xffff,474474- .flags = (sqe_addr >> 16) & 0xff,475475- .reserved0 = (sqe_addr >> 24) & 0xff,476476- .reserved1 = sqe_addr >> 32,453453+ .index = (__u16)sqe_addr,454454+ .flags = (__u8)(sqe_addr >> 16),455455+ .reserved0 = (__u8)(sqe_addr >> 24),456456+ .reserved1 = (__u32)(sqe_addr >> 32),477457 };478458479459 return reg;
+2-1
io_uring/io_uring.c
···1666166616671667io_req_flags_t io_file_get_flags(struct file *file)16681668{16691669+ struct inode *inode = file_inode(file);16691670 io_req_flags_t res = 0;1670167116711672 BUILD_BUG_ON(REQ_F_ISREG_BIT != REQ_F_SUPPORT_NOWAIT_BIT + 1);1672167316731673- if (S_ISREG(file_inode(file)->i_mode))16741674+ if (S_ISREG(inode->i_mode) && !(inode->i_flags & S_ANON_INODE))16741675 res |= REQ_F_ISREG;16751676 if ((file->f_flags & O_NONBLOCK) || (file->f_mode & FMODE_NOWAIT))16761677 res |= REQ_F_SUPPORT_NOWAIT;
+1
io_uring/kbuf.c
···271271 if (len > arg->max_len) {272272 len = arg->max_len;273273 if (!(bl->flags & IOBL_INC)) {274274+ arg->partial_map = 1;274275 if (iov != arg->iovs)275276 break;276277 buf->len = len;
+2-1
io_uring/kbuf.h
···5858 size_t max_len;5959 unsigned short nr_iovs;6060 unsigned short mode;6161- unsigned buf_group;6161+ unsigned short buf_group;6262+ unsigned short partial_map;6263};63646465void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+21-13
io_uring/net.c
···7575 u16 flags;7676 /* initialised and used only by !msg send variants */7777 u16 buf_group;7878- bool retry;7878+ unsigned short retry_flags;7979 void __user *msg_control;8080 /* used only for send zerocopy */8181 struct io_kiocb *notif;8282+};8383+8484+enum sr_retry_flags {8585+ IO_SR_MSG_RETRY = 1,8686+ IO_SR_MSG_PARTIAL_MAP = 2,8287};83888489/*···192187193188 req->flags &= ~REQ_F_BL_EMPTY;194189 sr->done_io = 0;195195- sr->retry = false;190190+ sr->retry_flags = 0;196191 sr->len = 0; /* get from the provided buffer */197192}198193···402397 struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);403398404399 sr->done_io = 0;405405- sr->retry = false;400400+ sr->retry_flags = 0;406401 sr->len = READ_ONCE(sqe->len);407402 sr->flags = READ_ONCE(sqe->ioprio);408403 if (sr->flags & ~SENDMSG_FLAGS)···756751 struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);757752758753 sr->done_io = 0;759759- sr->retry = false;754754+ sr->retry_flags = 0;760755761756 if (unlikely(sqe->file_index || sqe->addr2))762757 return -EINVAL;···828823829824 cflags |= io_put_kbufs(req, this_ret, io_bundle_nbufs(kmsg, this_ret),830825 issue_flags);831831- if (sr->retry)826826+ if (sr->retry_flags & IO_SR_MSG_RETRY)832827 cflags = req->cqe.flags | (cflags & CQE_F_MASK);833828 /* bundle with no more immediate buffers, we're done */834829 if (req->flags & REQ_F_BL_EMPTY)···837832 * If more is available AND it was a full transfer, retry and838833 * append to this one839834 */840840- if (!sr->retry && kmsg->msg.msg_inq > 1 && this_ret > 0 &&835835+ if (!sr->retry_flags && kmsg->msg.msg_inq > 1 && this_ret > 0 &&841836 !iov_iter_count(&kmsg->msg.msg_iter)) {842837 req->cqe.flags = cflags & ~CQE_F_MASK;843838 sr->len = kmsg->msg.msg_inq;844839 sr->done_io += this_ret;845845- sr->retry = true;840840+ sr->retry_flags |= IO_SR_MSG_RETRY;846841 return false;847842 }848843 } else {···10821077 if (unlikely(ret < 0))10831078 return ret;1084107910801080+ if (arg.iovs != &kmsg->fast_iov && 
arg.iovs != kmsg->vec.iovec) {10811081+ kmsg->vec.nr = ret;10821082+ kmsg->vec.iovec = arg.iovs;10831083+ req->flags |= REQ_F_NEED_CLEANUP;10841084+ }10851085+ if (arg.partial_map)10861086+ sr->retry_flags |= IO_SR_MSG_PARTIAL_MAP;10871087+10851088 /* special case 1 vec, can be a fast path */10861089 if (ret == 1) {10871090 sr->buf = arg.iovs[0].iov_base;···10981085 }10991086 iov_iter_init(&kmsg->msg.msg_iter, ITER_DEST, arg.iovs, ret,11001087 arg.out_len);11011101- if (arg.iovs != &kmsg->fast_iov && arg.iovs != kmsg->vec.iovec) {11021102- kmsg->vec.nr = ret;11031103- kmsg->vec.iovec = arg.iovs;11041104- req->flags |= REQ_F_NEED_CLEANUP;11051105- }11061088 } else {11071089 void __user *buf;11081090···12831275 int ret;1284127612851277 zc->done_io = 0;12861286- zc->retry = false;12781278+ zc->retry_flags = 0;1287127912881280 if (unlikely(READ_ONCE(sqe->__pad2[0]) || READ_ONCE(sqe->addr3)))12891281 return -EINVAL;
···112112 struct io_mapped_ubuf *imu = priv;113113 unsigned int i;114114115115- for (i = 0; i < imu->nr_bvecs; i++)116116- unpin_user_page(imu->bvec[i].bv_page);115115+ for (i = 0; i < imu->nr_bvecs; i++) {116116+ struct folio *folio = page_folio(imu->bvec[i].bv_page);117117+118118+ unpin_user_folio(folio, 1);119119+ }117120}118121119122static struct io_mapped_ubuf *io_alloc_imu(struct io_ring_ctx *ctx,···734731735732 data->nr_pages_mid = folio_nr_pages(folio);736733 data->folio_shift = folio_shift(folio);734734+ data->first_folio_page_idx = folio_page_idx(folio, page_array[0]);737735738736 /*739737 * Check if pages are contiguous inside a folio, and all folios have···828824 if (coalesced)829825 imu->folio_shift = data.folio_shift;830826 refcount_set(&imu->refs, 1);831831- off = (unsigned long) iov->iov_base & ((1UL << imu->folio_shift) - 1);827827+828828+ off = (unsigned long)iov->iov_base & ~PAGE_MASK;829829+ if (coalesced)830830+ off += data.first_folio_page_idx << PAGE_SHIFT;831831+832832 node->buf = imu;833833 ret = 0;834834···848840 if (ret) {849841 if (imu)850842 io_free_imu(ctx, imu);851851- if (pages)852852- unpin_user_pages(pages, nr_pages);843843+ if (pages) {844844+ for (i = 0; i < nr_pages; i++)845845+ unpin_user_folio(page_folio(pages[i]), 1);846846+ }853847 io_cache_free(&ctx->node_cache, node);854848 node = ERR_PTR(ret);855849 }···13391329{13401330 unsigned long folio_size = 1 << imu->folio_shift;13411331 unsigned long folio_mask = folio_size - 1;13421342- u64 folio_addr = imu->ubuf & ~folio_mask;13431332 struct bio_vec *res_bvec = vec->bvec;13441333 size_t total_len = 0;13451334 unsigned bvec_idx = 0;···13601351 if (unlikely(check_add_overflow(total_len, iov_len, &total_len)))13611352 return -EOVERFLOW;1362135313631363- /* by using folio address it also accounts for bvec offset */13641364- offset = buf_addr - folio_addr;13541354+ offset = buf_addr - imu->ubuf;13551355+ /*13561356+ * Only the first bvec can have non zero bv_offset, account 
it13571357+ * here and work with full folios below.13581358+ */13591359+ offset += imu->bvec[0].bv_offset;13601360+13651361 src_bvec = imu->bvec + (offset >> imu->folio_shift);13661362 offset &= folio_mask;13671363
+1
io_uring/rsrc.h
···4949 unsigned int nr_pages_mid;5050 unsigned int folio_shift;5151 unsigned int nr_folios;5252+ unsigned long first_folio_page_idx;5253};53545455bool io_rsrc_cache_init(struct io_ring_ctx *ctx);
+4-2
io_uring/zcrx.c
···106106 for_each_sgtable_dma_sg(mem->sgt, sg, i)107107 total_size += sg_dma_len(sg);108108109109- if (total_size < off + len)110110- return -EINVAL;109109+ if (total_size < off + len) {110110+ ret = -EINVAL;111111+ goto err;112112+ }111113112114 mem->dmabuf_offset = off;113115 mem->size = len;
+1
kernel/Kconfig.kexec
···134134 depends on KEXEC_FILE135135 depends on CRASH_DUMP136136 depends on DM_CRYPT137137+ depends on KEYS137138 help138139 With this option enabled, user space can interact with139140 /sys/kernel/config/crash_dm_crypt_keys to make the dm crypt keys
···441441 * store that will be enabled on successful return442442 */443443 if (!handle->size) { /* A, matches D */444444- event->pending_disable = smp_processor_id();444444+ perf_event_disable_inatomic(handle->event);445445 perf_output_wakeup(handle);446446 WRITE_ONCE(rb->aux_nest, 0);447447 goto err_put;···526526527527 if (wakeup) {528528 if (handle->aux_flags & PERF_AUX_FLAG_TRUNCATED)529529- handle->event->pending_disable = smp_processor_id();529529+ perf_event_disable_inatomic(handle->event);530530 perf_output_wakeup(handle);531531 }532532
···1010#include <linux/seq_buf.h>1111#include <linux/seq_file.h>1212#include <linux/vmalloc.h>1313+#include <linux/kmemleak.h>13141415#define ALLOCINFO_FILE_NAME "allocinfo"1516#define MODULE_ALLOC_TAG_VMAP_SIZE (100000UL * sizeof(struct alloc_tag))···633632 mod->name);634633 return -ENOMEM;635634 }636636- }637635636636+ /*637637+ * Avoid a kmemleak false positive. The pointer to the counters is stored638638+ * in the alloc_tag section of the module and cannot be directly accessed.639639+ */640640+ kmemleak_ignore_percpu(tag->counters);641641+ }638642 return 0;639643}640644
+8-1
lib/group_cpus.c
···352352 int ret = -ENOMEM;353353 struct cpumask *masks = NULL;354354355355+ if (numgrps == 0)356356+ return NULL;357357+355358 if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))356359 return NULL;357360···429426#else /* CONFIG_SMP */430427struct cpumask *group_cpus_evenly(unsigned int numgrps)431428{432432- struct cpumask *masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);429429+ struct cpumask *masks;433430431431+ if (numgrps == 0)432432+ return NULL;433433+434434+ masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);434435 if (!masks)435436 return NULL;436437
+28-20
lib/raid6/rvv.c
···2626static void raid6_rvv1_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs)2727{2828 u8 **dptr = (u8 **)ptrs;2929- unsigned long d;3030- int z, z0;3129 u8 *p, *q;3030+ unsigned long vl, d;3131+ int z, z0;32323333 z0 = disks - 3; /* Highest data disk */3434 p = dptr[z0 + 1]; /* XOR parity */···36363737 asm volatile (".option push\n"3838 ".option arch,+v\n"3939- "vsetvli t0, x0, e8, m1, ta, ma\n"3939+ "vsetvli %0, x0, e8, m1, ta, ma\n"4040 ".option pop\n"4141+ : "=&r" (vl)4142 );42434344 /* v0:wp0, v1:wq0, v2:wd0/w20, v3:w10 */···10099{101100 u8 **dptr = (u8 **)ptrs;102101 u8 *p, *q;103103- unsigned long d;102102+ unsigned long vl, d;104103 int z, z0;105104106105 z0 = stop; /* P/Q right side optimization */···109108110109 asm volatile (".option push\n"111110 ".option arch,+v\n"112112- "vsetvli t0, x0, e8, m1, ta, ma\n"111111+ "vsetvli %0, x0, e8, m1, ta, ma\n"113112 ".option pop\n"113113+ : "=&r" (vl)114114 );115115116116 /* v0:wp0, v1:wq0, v2:wd0/w20, v3:w10 */···197195static void raid6_rvv2_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs)198196{199197 u8 **dptr = (u8 **)ptrs;200200- unsigned long d;201201- int z, z0;202198 u8 *p, *q;199199+ unsigned long vl, d;200200+ int z, z0;203201204202 z0 = disks - 3; /* Highest data disk */205203 p = dptr[z0 + 1]; /* XOR parity */···207205208206 asm volatile (".option push\n"209207 ".option arch,+v\n"210210- "vsetvli t0, x0, e8, m1, ta, ma\n"208208+ "vsetvli %0, x0, e8, m1, ta, ma\n"211209 ".option pop\n"210210+ : "=&r" (vl)212211 );213212214213 /*···290287{291288 u8 **dptr = (u8 **)ptrs;292289 u8 *p, *q;293293- unsigned long d;290290+ unsigned long vl, d;294291 int z, z0;295292296293 z0 = stop; /* P/Q right side optimization */···299296300297 asm volatile (".option push\n"301298 ".option arch,+v\n"302302- "vsetvli t0, x0, e8, m1, ta, ma\n"299299+ "vsetvli %0, x0, e8, m1, ta, ma\n"303300 ".option pop\n"301301+ : "=&r" (vl)304302 );305303306304 /*···417413static void 
raid6_rvv4_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs)418414{419415 u8 **dptr = (u8 **)ptrs;420420- unsigned long d;421421- int z, z0;422416 u8 *p, *q;417417+ unsigned long vl, d;418418+ int z, z0;423419424420 z0 = disks - 3; /* Highest data disk */425421 p = dptr[z0 + 1]; /* XOR parity */···427423428424 asm volatile (".option push\n"429425 ".option arch,+v\n"430430- "vsetvli t0, x0, e8, m1, ta, ma\n"426426+ "vsetvli %0, x0, e8, m1, ta, ma\n"431427 ".option pop\n"428428+ : "=&r" (vl)432429 );433430434431 /*···544539{545540 u8 **dptr = (u8 **)ptrs;546541 u8 *p, *q;547547- unsigned long d;542542+ unsigned long vl, d;548543 int z, z0;549544550545 z0 = stop; /* P/Q right side optimization */···553548554549 asm volatile (".option push\n"555550 ".option arch,+v\n"556556- "vsetvli t0, x0, e8, m1, ta, ma\n"551551+ "vsetvli %0, x0, e8, m1, ta, ma\n"557552 ".option pop\n"553553+ : "=&r" (vl)558554 );559555560556 /*···727721static void raid6_rvv8_gen_syndrome_real(int disks, unsigned long bytes, void **ptrs)728722{729723 u8 **dptr = (u8 **)ptrs;730730- unsigned long d;731731- int z, z0;732724 u8 *p, *q;725725+ unsigned long vl, d;726726+ int z, z0;733727734728 z0 = disks - 3; /* Highest data disk */735729 p = dptr[z0 + 1]; /* XOR parity */···737731738732 asm volatile (".option push\n"739733 ".option arch,+v\n"740740- "vsetvli t0, x0, e8, m1, ta, ma\n"734734+ "vsetvli %0, x0, e8, m1, ta, ma\n"741735 ".option pop\n"736736+ : "=&r" (vl)742737 );743738744739 /*···922915{923916 u8 **dptr = (u8 **)ptrs;924917 u8 *p, *q;925925- unsigned long d;918918+ unsigned long vl, d;926919 int z, z0;927920928921 z0 = stop; /* P/Q right side optimization */···931924932925 asm volatile (".option push\n"933926 ".option arch,+v\n"934934- "vsetvli t0, x0, e8, m1, ta, ma\n"927927+ "vsetvli %0, x0, e8, m1, ta, ma\n"935928 ".option pop\n"929929+ : "=&r" (vl)936930 );937931938932 /*
+3-1
lib/test_objagg.c
···899899 int err;900900901901 stats = objagg_hints_stats_get(objagg_hints);902902- if (IS_ERR(stats))902902+ if (IS_ERR(stats)) {903903+ *errmsg = "objagg_hints_stats_get() failed.";903904 return PTR_ERR(stats);905905+ }904906 err = __check_expect_stats(stats, expect_stats, errmsg);905907 objagg_stats_put(stats);906908 return err;
···27872787/*27882788 * alloc_and_dissolve_hugetlb_folio - Allocate a new folio and dissolve27892789 * the old one27902790- * @h: struct hstate old page belongs to27912790 * @old_folio: Old folio to dissolve27922791 * @list: List to isolate the page in case we need to27932792 * Returns 0 on success, otherwise negated error.27942793 */27952795-static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,27962796- struct folio *old_folio, struct list_head *list)27942794+static int alloc_and_dissolve_hugetlb_folio(struct folio *old_folio,27952795+ struct list_head *list)27972796{27982798- gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;27972797+ gfp_t gfp_mask;27982798+ struct hstate *h;27992799 int nid = folio_nid(old_folio);28002800 struct folio *new_folio = NULL;28012801 int ret = 0;2802280228032803retry:28042804+ /*28052805+ * The old_folio might have been dissolved from under our feet, so make sure28062806+ * to carefully check the state under the lock.28072807+ */28042808 spin_lock_irq(&hugetlb_lock);28052809 if (!folio_test_hugetlb(old_folio)) {28062810 /*···28332829 cond_resched();28342830 goto retry;28352831 } else {28322832+ h = folio_hstate(old_folio);28362833 if (!new_folio) {28372834 spin_unlock_irq(&hugetlb_lock);28352835+ gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;28382836 new_folio = alloc_buddy_hugetlb_folio(h, gfp_mask, nid,28392837 NULL, NULL);28402838 if (!new_folio)···2880287428812875int isolate_or_dissolve_huge_folio(struct folio *folio, struct list_head *list)28822876{28832883- struct hstate *h;28842877 int ret = -EBUSY;2885287828862886- /*28872887- * The page might have been dissolved from under our feet, so make sure28882888- * to carefully check the state under the lock.28892889- * Return success when racing as if we dissolved the page ourselves.28902890- */28912891- spin_lock_irq(&hugetlb_lock);28922892- if (folio_test_hugetlb(folio)) {28932893- h = folio_hstate(folio);28942894- } else {28952895- 
spin_unlock_irq(&hugetlb_lock);28792879+ /* Not to disrupt normal path by vainly holding hugetlb_lock */28802880+ if (!folio_test_hugetlb(folio))28962881 return 0;28972897- }28982898- spin_unlock_irq(&hugetlb_lock);2899288229002883 /*29012884 * Fence off gigantic pages as there is a cyclic dependency between29022885 * alloc_contig_range and them. Return -ENOMEM as this has the effect29032886 * of bailing out right away without further retrying.29042887 */29052905- if (hstate_is_gigantic(h))28882888+ if (folio_order(folio) > MAX_PAGE_ORDER)29062889 return -ENOMEM;2907289029082891 if (folio_ref_count(folio) && folio_isolate_hugetlb(folio, list))29092892 ret = 0;29102893 else if (!folio_ref_count(folio))29112911- ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);28942894+ ret = alloc_and_dissolve_hugetlb_folio(folio, list);2912289529132896 return ret;29142897}···29112916 */29122917int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)29132918{29142914- struct hstate *h;29152919 struct folio *folio;29162920 int ret = 0;29172921···29192925 while (start_pfn < end_pfn) {29202926 folio = pfn_folio(start_pfn);2921292729222922- /*29232923- * The folio might have been dissolved from under our feet, so make sure29242924- * to carefully check the state under the lock.29252925- */29262926- spin_lock_irq(&hugetlb_lock);29272927- if (folio_test_hugetlb(folio)) {29282928- h = folio_hstate(folio);29292929- } else {29302930- spin_unlock_irq(&hugetlb_lock);29312931- start_pfn++;29322932- continue;29332933- }29342934- spin_unlock_irq(&hugetlb_lock);29352935-29362936- if (!folio_ref_count(folio)) {29372937- ret = alloc_and_dissolve_hugetlb_folio(h, folio,29382938- &isolate_list);29282928+ /* Not to disrupt normal path by vainly holding hugetlb_lock */29292929+ if (folio_test_hugetlb(folio) && !folio_ref_count(folio)) {29302930+ ret = alloc_and_dissolve_hugetlb_folio(folio, &isolate_list);29392931 if (ret)29402932 break;29412933
+14
mm/kmemleak.c
···12471247EXPORT_SYMBOL(kmemleak_transient_leak);1248124812491249/**12501250+ * kmemleak_ignore_percpu - similar to kmemleak_ignore but taking a percpu12511251+ * address argument12521252+ * @ptr: percpu address of the object12531253+ */12541254+void __ref kmemleak_ignore_percpu(const void __percpu *ptr)12551255+{12561256+ pr_debug("%s(0x%px)\n", __func__, ptr);12571257+12581258+ if (kmemleak_enabled && ptr && !IS_ERR_PCPU(ptr))12591259+ make_black_object((unsigned long)ptr, OBJECT_PERCPU);12601260+}12611261+EXPORT_SYMBOL_GPL(kmemleak_ignore_percpu);12621262+12631263+/**12501264 * kmemleak_ignore - ignore an allocated object12511265 * @ptr: pointer to beginning of the object12521266 *
-36
net/bluetooth/hci_event.c
···21502150 return rp->status;21512151}2152215221532153-static u8 hci_cc_set_ext_adv_param(struct hci_dev *hdev, void *data,21542154- struct sk_buff *skb)21552155-{21562156- struct hci_rp_le_set_ext_adv_params *rp = data;21572157- struct hci_cp_le_set_ext_adv_params *cp;21582158- struct adv_info *adv_instance;21592159-21602160- bt_dev_dbg(hdev, "status 0x%2.2x", rp->status);21612161-21622162- if (rp->status)21632163- return rp->status;21642164-21652165- cp = hci_sent_cmd_data(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS);21662166- if (!cp)21672167- return rp->status;21682168-21692169- hci_dev_lock(hdev);21702170- hdev->adv_addr_type = cp->own_addr_type;21712171- if (!cp->handle) {21722172- /* Store in hdev for instance 0 */21732173- hdev->adv_tx_power = rp->tx_power;21742174- } else {21752175- adv_instance = hci_find_adv_instance(hdev, cp->handle);21762176- if (adv_instance)21772177- adv_instance->tx_power = rp->tx_power;21782178- }21792179- /* Update adv data as tx power is known now */21802180- hci_update_adv_data(hdev, cp->handle);21812181-21822182- hci_dev_unlock(hdev);21832183-21842184- return rp->status;21852185-}21862186-21872153static u8 hci_cc_read_rssi(struct hci_dev *hdev, void *data,21882154 struct sk_buff *skb)21892155{···41304164 HCI_CC(HCI_OP_LE_READ_NUM_SUPPORTED_ADV_SETS,41314165 hci_cc_le_read_num_adv_sets,41324166 sizeof(struct hci_rp_le_read_num_supported_adv_sets)),41334133- HCI_CC(HCI_OP_LE_SET_EXT_ADV_PARAMS, hci_cc_set_ext_adv_param,41344134- sizeof(struct hci_rp_le_set_ext_adv_params)),41354167 HCI_CC_STATUS(HCI_OP_LE_SET_EXT_ADV_ENABLE,41364168 hci_cc_le_set_ext_adv_enable),41374169 HCI_CC_STATUS(HCI_OP_LE_SET_ADV_SET_RAND_ADDR,
+138-89
net/bluetooth/hci_sync.c
···12051205 sizeof(cp), &cp, HCI_CMD_TIMEOUT);12061206}1207120712081208+static int12091209+hci_set_ext_adv_params_sync(struct hci_dev *hdev, struct adv_info *adv,12101210+ const struct hci_cp_le_set_ext_adv_params *cp,12111211+ struct hci_rp_le_set_ext_adv_params *rp)12121212+{12131213+ struct sk_buff *skb;12141214+12151215+ skb = __hci_cmd_sync(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS, sizeof(*cp),12161216+ cp, HCI_CMD_TIMEOUT);12171217+12181218+ /* If command return a status event, skb will be set to -ENODATA */12191219+ if (skb == ERR_PTR(-ENODATA))12201220+ return 0;12211221+12221222+ if (IS_ERR(skb)) {12231223+ bt_dev_err(hdev, "Opcode 0x%4.4x failed: %ld",12241224+ HCI_OP_LE_SET_EXT_ADV_PARAMS, PTR_ERR(skb));12251225+ return PTR_ERR(skb);12261226+ }12271227+12281228+ if (skb->len != sizeof(*rp)) {12291229+ bt_dev_err(hdev, "Invalid response length for 0x%4.4x: %u",12301230+ HCI_OP_LE_SET_EXT_ADV_PARAMS, skb->len);12311231+ kfree_skb(skb);12321232+ return -EIO;12331233+ }12341234+12351235+ memcpy(rp, skb->data, sizeof(*rp));12361236+ kfree_skb(skb);12371237+12381238+ if (!rp->status) {12391239+ hdev->adv_addr_type = cp->own_addr_type;12401240+ if (!cp->handle) {12411241+ /* Store in hdev for instance 0 */12421242+ hdev->adv_tx_power = rp->tx_power;12431243+ } else if (adv) {12441244+ adv->tx_power = rp->tx_power;12451245+ }12461246+ }12471247+12481248+ return rp->status;12491249+}12501250+12511251+static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)12521252+{12531253+ DEFINE_FLEX(struct hci_cp_le_set_ext_adv_data, pdu, data, length,12541254+ HCI_MAX_EXT_AD_LENGTH);12551255+ u8 len;12561256+ struct adv_info *adv = NULL;12571257+ int err;12581258+12591259+ if (instance) {12601260+ adv = hci_find_adv_instance(hdev, instance);12611261+ if (!adv || !adv->adv_data_changed)12621262+ return 0;12631263+ }12641264+12651265+ len = eir_create_adv_data(hdev, instance, pdu->data,12661266+ HCI_MAX_EXT_AD_LENGTH);12671267+12681268+ pdu->length = len;12691269+ 
pdu->handle = adv ? adv->handle : instance;12701270+ pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE;12711271+ pdu->frag_pref = LE_SET_ADV_DATA_NO_FRAG;12721272+12731273+ err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_DATA,12741274+ struct_size(pdu, data, len), pdu,12751275+ HCI_CMD_TIMEOUT);12761276+ if (err)12771277+ return err;12781278+12791279+ /* Update data if the command succeed */12801280+ if (adv) {12811281+ adv->adv_data_changed = false;12821282+ } else {12831283+ memcpy(hdev->adv_data, pdu->data, len);12841284+ hdev->adv_data_len = len;12851285+ }12861286+12871287+ return 0;12881288+}12891289+12901290+static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance)12911291+{12921292+ struct hci_cp_le_set_adv_data cp;12931293+ u8 len;12941294+12951295+ memset(&cp, 0, sizeof(cp));12961296+12971297+ len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data));12981298+12991299+ /* There's nothing to do if the data hasn't changed */13001300+ if (hdev->adv_data_len == len &&13011301+ memcmp(cp.data, hdev->adv_data, len) == 0)13021302+ return 0;13031303+13041304+ memcpy(hdev->adv_data, cp.data, sizeof(cp.data));13051305+ hdev->adv_data_len = len;13061306+13071307+ cp.length = len;13081308+13091309+ return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_DATA,13101310+ sizeof(cp), &cp, HCI_CMD_TIMEOUT);13111311+}13121312+13131313+int hci_update_adv_data_sync(struct hci_dev *hdev, u8 instance)13141314+{13151315+ if (!hci_dev_test_flag(hdev, HCI_LE_ENABLED))13161316+ return 0;13171317+13181318+ if (ext_adv_capable(hdev))13191319+ return hci_set_ext_adv_data_sync(hdev, instance);13201320+13211321+ return hci_set_adv_data_sync(hdev, instance);13221322+}13231323+12081324int hci_setup_ext_adv_instance_sync(struct hci_dev *hdev, u8 instance)12091325{12101326 struct hci_cp_le_set_ext_adv_params cp;13271327+ struct hci_rp_le_set_ext_adv_params rp;12111328 bool connectable;12121329 u32 flags;12131330 bdaddr_t random_addr;···14331316 cp.secondary_phy = 
HCI_ADV_PHY_1M;14341317 }1435131814361436- err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS,14371437- sizeof(cp), &cp, HCI_CMD_TIMEOUT);13191319+ err = hci_set_ext_adv_params_sync(hdev, adv, &cp, &rp);13201320+ if (err)13211321+ return err;13221322+13231323+ /* Update adv data as tx power is known now */13241324+ err = hci_set_ext_adv_data_sync(hdev, cp.handle);14381325 if (err)14391326 return err;14401327···19431822 sizeof(cp), &cp, HCI_CMD_TIMEOUT);19441823}1945182419461946-static int hci_set_ext_adv_data_sync(struct hci_dev *hdev, u8 instance)19471947-{19481948- DEFINE_FLEX(struct hci_cp_le_set_ext_adv_data, pdu, data, length,19491949- HCI_MAX_EXT_AD_LENGTH);19501950- u8 len;19511951- struct adv_info *adv = NULL;19521952- int err;19531953-19541954- if (instance) {19551955- adv = hci_find_adv_instance(hdev, instance);19561956- if (!adv || !adv->adv_data_changed)19571957- return 0;19581958- }19591959-19601960- len = eir_create_adv_data(hdev, instance, pdu->data,19611961- HCI_MAX_EXT_AD_LENGTH);19621962-19631963- pdu->length = len;19641964- pdu->handle = adv ? 
adv->handle : instance;19651965- pdu->operation = LE_SET_ADV_DATA_OP_COMPLETE;19661966- pdu->frag_pref = LE_SET_ADV_DATA_NO_FRAG;19671967-19681968- err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_DATA,19691969- struct_size(pdu, data, len), pdu,19701970- HCI_CMD_TIMEOUT);19711971- if (err)19721972- return err;19731973-19741974- /* Update data if the command succeed */19751975- if (adv) {19761976- adv->adv_data_changed = false;19771977- } else {19781978- memcpy(hdev->adv_data, pdu->data, len);19791979- hdev->adv_data_len = len;19801980- }19811981-19821982- return 0;19831983-}19841984-19851985-static int hci_set_adv_data_sync(struct hci_dev *hdev, u8 instance)19861986-{19871987- struct hci_cp_le_set_adv_data cp;19881988- u8 len;19891989-19901990- memset(&cp, 0, sizeof(cp));19911991-19921992- len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data));19931993-19941994- /* There's nothing to do if the data hasn't changed */19951995- if (hdev->adv_data_len == len &&19961996- memcmp(cp.data, hdev->adv_data, len) == 0)19971997- return 0;19981998-19991999- memcpy(hdev->adv_data, cp.data, sizeof(cp.data));20002000- hdev->adv_data_len = len;20012001-20022002- cp.length = len;20032003-20042004- return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_DATA,20052005- sizeof(cp), &cp, HCI_CMD_TIMEOUT);20062006-}20072007-20082008-int hci_update_adv_data_sync(struct hci_dev *hdev, u8 instance)20092009-{20102010- if (!hci_dev_test_flag(hdev, HCI_LE_ENABLED))20112011- return 0;20122012-20132013- if (ext_adv_capable(hdev))20142014- return hci_set_ext_adv_data_sync(hdev, instance);20152015-20162016- return hci_set_adv_data_sync(hdev, instance);20172017-}20182018-20191825int hci_schedule_adv_instance_sync(struct hci_dev *hdev, u8 instance,20201826 bool force)20211827{···20181970static int hci_clear_adv_sync(struct hci_dev *hdev, struct sock *sk, bool force)20191971{20201972 struct adv_info *adv, *n;20212021- int err = 0;2022197320231974 if (ext_adv_capable(hdev))20241975 /* 
Remove all existing sets */20252025- err = hci_clear_adv_sets_sync(hdev, sk);20262026- if (ext_adv_capable(hdev))20272027- return err;19761976+ return hci_clear_adv_sets_sync(hdev, sk);2028197720291978 /* This is safe as long as there is no command send while the lock is20301979 * held.···20492004static int hci_remove_adv_sync(struct hci_dev *hdev, u8 instance,20502005 struct sock *sk)20512006{20522052- int err = 0;20072007+ int err;2053200820542009 /* If we use extended advertising, instance has to be removed first. */20552010 if (ext_adv_capable(hdev))20562056- err = hci_remove_ext_adv_instance_sync(hdev, instance, sk);20572057- if (ext_adv_capable(hdev))20582058- return err;20112011+ return hci_remove_ext_adv_instance_sync(hdev, instance, sk);2059201220602013 /* This is safe as long as there is no command send while the lock is20612014 * held.···21522109int hci_disable_advertising_sync(struct hci_dev *hdev)21532110{21542111 u8 enable = 0x00;21552155- int err = 0;2156211221572113 /* If controller is not advertising we are done. */21582114 if (!hci_dev_test_flag(hdev, HCI_LE_ADV))21592115 return 0;2160211621612117 if (ext_adv_capable(hdev))21622162- err = hci_disable_ext_adv_instance_sync(hdev, 0x00);21632163- if (ext_adv_capable(hdev))21642164- return err;21182118+ return hci_disable_ext_adv_instance_sync(hdev, 0x00);2165211921662120 return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_ADV_ENABLE,21672121 sizeof(enable), &enable, HCI_CMD_TIMEOUT);···25202480{25212481 int err;25222482 int old_state;24832483+24842484+ /* If controller is not advertising we are done. */24852485+ if (!hci_dev_test_flag(hdev, HCI_LE_ADV))24862486+ return 0;2523248725242488 /* If already been paused there is nothing to do. 
*/25252489 if (hdev->advertising_paused)···63216277 struct hci_conn *conn)63226278{63236279 struct hci_cp_le_set_ext_adv_params cp;62806280+ struct hci_rp_le_set_ext_adv_params rp;63246281 int err;63256282 bdaddr_t random_addr;63266283 u8 own_addr_type;···63636318 if (err)63646319 return err;6365632063666366- err = __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_EXT_ADV_PARAMS,63676367- sizeof(cp), &cp, HCI_CMD_TIMEOUT);63216321+ err = hci_set_ext_adv_params_sync(hdev, NULL, &cp, &rp);63226322+ if (err)63236323+ return err;63246324+63256325+ /* Update adv data as tx power is known now */63266326+ err = hci_set_ext_adv_data_sync(hdev, cp.handle);63686327 if (err)63696328 return err;63706329
+24-1
net/bluetooth/mgmt.c
···10801080 struct mgmt_mesh_tx *mesh_tx;1081108110821082 hci_dev_clear_flag(hdev, HCI_MESH_SENDING);10831083- hci_disable_advertising_sync(hdev);10831083+ if (list_empty(&hdev->adv_instances))10841084+ hci_disable_advertising_sync(hdev);10841085 mesh_tx = mgmt_mesh_next(hdev, NULL);1085108610861087 if (mesh_tx)···21542153 else21552154 hci_dev_clear_flag(hdev, HCI_MESH);2156215521562156+ hdev->le_scan_interval = __le16_to_cpu(cp->period);21572157+ hdev->le_scan_window = __le16_to_cpu(cp->window);21582158+21572159 len -= sizeof(*cp);2158216021592161 /* If filters don't fit, forward all adv pkts */···21712167{21722168 struct mgmt_cp_set_mesh *cp = data;21732169 struct mgmt_pending_cmd *cmd;21702170+ __u16 period, window;21742171 int err = 0;2175217221762173 bt_dev_dbg(hdev, "sock %p", sk);···21822177 MGMT_STATUS_NOT_SUPPORTED);2183217821842179 if (cp->enable != 0x00 && cp->enable != 0x01)21802180+ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,21812181+ MGMT_STATUS_INVALID_PARAMS);21822182+21832183+ /* Keep allowed ranges in sync with set_scan_params() */21842184+ period = __le16_to_cpu(cp->period);21852185+21862186+ if (period < 0x0004 || period > 0x4000)21872187+ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,21882188+ MGMT_STATUS_INVALID_PARAMS);21892189+21902190+ window = __le16_to_cpu(cp->window);21912191+21922192+ if (window < 0x0004 || window > 0x4000)21932193+ return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,21942194+ MGMT_STATUS_INVALID_PARAMS);21952195+21962196+ if (window > period)21852197 return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_MESH_RECEIVER,21862198 MGMT_STATUS_INVALID_PARAMS);21872199···64546432 return mgmt_cmd_status(sk, hdev->id, MGMT_OP_SET_SCAN_PARAMS,64556433 MGMT_STATUS_NOT_SUPPORTED);6456643464356435+ /* Keep allowed ranges in sync with set_mesh() */64576436 interval = __le16_to_cpu(cp->interval);6458643764596438 if (interval < 0x0004 || interval > 0x4000)
+4-3
net/ipv4/ip_input.c
···325325 const struct sk_buff *hint)326326{327327 const struct iphdr *iph = ip_hdr(skb);328328- int err, drop_reason;329328 struct rtable *rt;329329+ int drop_reason;330330331331 if (ip_can_use_hint(skb, iph, hint)) {332332 drop_reason = ip_route_use_hint(skb, iph->daddr, iph->saddr,···351351 break;352352 case IPPROTO_UDP:353353 if (READ_ONCE(net->ipv4.sysctl_udp_early_demux)) {354354- err = udp_v4_early_demux(skb);355355- if (unlikely(err))354354+ drop_reason = udp_v4_early_demux(skb);355355+ if (unlikely(drop_reason))356356 goto drop_error;357357+ drop_reason = SKB_DROP_REASON_NOT_SPECIFIED;357358358359 /* must reload iph, skb->head might have changed */359360 iph = ip_hdr(skb);
+4-11
net/rose/rose_route.c
···497497 t = rose_node;498498 rose_node = rose_node->next;499499500500- for (i = 0; i < t->count; i++) {500500+ for (i = t->count - 1; i >= 0; i--) {501501 if (t->neighbour[i] != s)502502 continue;503503504504 t->count--;505505506506- switch (i) {507507- case 0:508508- t->neighbour[0] = t->neighbour[1];509509- fallthrough;510510- case 1:511511- t->neighbour[1] = t->neighbour[2];512512- break;513513- case 2:514514- break;515515- }506506+ memmove(&t->neighbour[i], &t->neighbour[i + 1],507507+ sizeof(t->neighbour[0]) *508508+ (t->count - i));516509 }517510518511 if (t->count <= 0)
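The memmove() above replaces a hard-coded switch with the generic delete-from-array idiom; iterating backwards keeps the not-yet-visited indices valid after each removal. A standalone sketch of the same idiom (the int array and helper name are illustrative):

```c
#include <assert.h>
#include <string.h>

/* Generic sketch of the removal idiom used above: walk the array
 * backwards so earlier indices stay valid, and close each gap with
 * one memmove() over the (count - i) trailing elements. Returns the
 * new element count. */
static int remove_all(int *arr, int count, int victim)
{
	for (int i = count - 1; i >= 0; i--) {
		if (arr[i] != victim)
			continue;
		count--;
		memmove(&arr[i], &arr[i + 1],
			sizeof(arr[0]) * (count - i));
	}
	return count;
}
```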
+5-14
net/sched/sch_api.c
···780780781781void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)782782{783783- bool qdisc_is_offloaded = sch->flags & TCQ_F_OFFLOADED;784783 const struct Qdisc_class_ops *cops;785784 unsigned long cl;786785 u32 parentid;787786 bool notify;788787 int drops;789788790790- if (n == 0 && len == 0)791791- return;792789 drops = max_t(int, n, 0);793790 rcu_read_lock();794791 while ((parentid = sch->parent)) {···794797795798 if (sch->flags & TCQ_F_NOPARENT)796799 break;797797- /* Notify parent qdisc only if child qdisc becomes empty.798798- *799799- * If child was empty even before update then backlog800800- * counter is screwed and we skip notification because801801- * parent class is already passive.802802- *803803- * If the original child was offloaded then it is allowed804804- * to be seem as empty, so the parent is notified anyway.805805- */806806- notify = !sch->q.qlen && !WARN_ON_ONCE(!n &&807807- !qdisc_is_offloaded);800800+ /* Notify parent qdisc only if child qdisc becomes empty. */801801+ notify = !sch->q.qlen;808802 /* TODO: perform the search on a per txq basis */809803 sch = qdisc_lookup_rcu(qdisc_dev(sch), TC_H_MAJ(parentid));810804 if (sch == NULL) {···804816 }805817 cops = sch->ops->cl_ops;806818 if (notify && cops->qlen_notify) {819819+ /* Note that qlen_notify must be idempotent as it may get called820820+ * multiple times.821821+ */807822 cl = cops->find(sch, parentid);808823 cops->qlen_notify(sch, cl);809824 }
+1-1
net/sunrpc/auth_gss/auth_gss.c
···17241724 maj_stat = gss_validate_seqno_mic(ctx, task->tk_rqstp->rq_seqnos[0], seq, p, len);17251725 /* RFC 2203 5.3.3.1 - compute the checksum of each sequence number in the cache */17261726 while (unlikely(maj_stat == GSS_S_BAD_SIG && i < task->tk_rqstp->rq_seqno_count))17271727- maj_stat = gss_validate_seqno_mic(ctx, task->tk_rqstp->rq_seqnos[i], seq, p, len);17271727+ maj_stat = gss_validate_seqno_mic(ctx, task->tk_rqstp->rq_seqnos[i++], seq, p, len);17281728 if (maj_stat == GSS_S_CONTEXT_EXPIRED)17291729 clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags);17301730 if (maj_stat)
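The one-character fix above adds the post-increment that advances the loop through the cached sequence numbers; without it, i never changes and the loop keeps re-validating the same entry. A minimal sketch of that loop shape (the ok[] validator array is a hypothetical stand-in for gss_validate_seqno_mic()):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the retry-loop shape fixed above: validate entry 0 first,
 * then walk the remaining cached entries with a post-increment in the
 * loop body. Returns the index of the first valid entry, or -1 if
 * none validates. */
static int first_valid_index(const int *ok, size_t count)
{
	size_t i = 1;
	int valid = ok[0];

	while (!valid && i < count)
		valid = ok[i++];

	return valid ? (int)(i - 1) : -1;
}
```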
+2-2
net/vmw_vsock/vmci_transport.c
···119119 u16 proto,120120 struct vmci_handle handle)121121{122122+ memset(pkt, 0, sizeof(*pkt));123123+122124 /* We register the stream control handler as an any cid handle so we123125 * must always send from a source address of VMADDR_CID_ANY124126 */···133131 pkt->type = type;134132 pkt->src_port = src->svm_port;135133 pkt->dst_port = dst->svm_port;136136- memset(&pkt->proto, 0, sizeof(pkt->proto));137137- memset(&pkt->_reserved2, 0, sizeof(pkt->_reserved2));138134139135 switch (pkt->type) {140136 case VMCI_TRANSPORT_PACKET_TYPE_INVALID:
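Zeroing the whole packet with a single memset() before filling in the live fields covers compiler-inserted padding as well as every reserved member, where the old per-field memsets could leave padding bytes uninitialized. A standalone sketch of the pattern (the struct layout is illustrative, not the vmci_transport_packet layout):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative packet with a padding byte after 'type' and a reserved
 * field; neither would be cleared by per-member memsets alone. */
struct pkt {
	uint8_t type;		/* 1 padding byte follows on most ABIs */
	uint16_t src_port;
	uint16_t dst_port;
	uint16_t _reserved;
};

/* Sketch of the initialization pattern above: clear the whole struct
 * (padding and reserved fields included) with one memset, then assign
 * the live fields. */
static void pkt_init(struct pkt *p, uint8_t type,
		     uint16_t src, uint16_t dst)
{
	memset(p, 0, sizeof(*p));
	p->type = type;
	p->src_port = src;
	p->dst_port = dst;
}
```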
+1-1
scripts/gdb/linux/vfs.py
···2222 if parent == d or parent == 0:2323 return ""2424 p = dentry_name(d['d_parent']) + "/"2525- return p + d['d_iname'].string()2525+ return p + d['d_shortname']['string'].string()26262727class DentryName(gdb.Function):2828 """Return string of the full path of a dentry.
sound/usb/qcom/qc_audio_offload.c
···759759 subs = find_substream(pcm_card_num, info->pcm_dev_num,760760 info->direction);761761 if (!subs || !chip || atomic_read(&chip->shutdown)) {762762- dev_err(&subs->dev->dev,762762+ dev_err(&uadev[idx].udev->dev,763763 "no sub for c#%u dev#%u dir%u\n",764764 info->pcm_card_num,765765 info->pcm_dev_num,···1360136013611361 if (!uadev[card_num].ctrl_intf) {13621362 dev_err(&subs->dev->dev, "audio ctrl intf info not cached\n");13631363- ret = -ENODEV;13641364- goto err;13631363+ return -ENODEV;13651364 }1366136513671366 ret = uaudio_populate_uac_desc(subs, resp);13681367 if (ret < 0)13691369- goto err;13681368+ return ret;1370136913711370 resp->slot_id = subs->dev->slot_id;13721371 resp->slot_id_valid = 1;1373137213741373 data = snd_soc_usb_find_priv_data(uaudio_qdev->auxdev->dev.parent);13751375- if (!data)13761376- goto err;13741374+ if (!data) {13751375+ dev_err(&subs->dev->dev, "No private data found\n");13761376+ return -ENODEV;13771377+ }1377137813781379 uaudio_qdev->data = data;13791380···13831382 &resp->xhci_mem_info.tr_data,13841383 &resp->std_as_data_ep_desc);13851384 if (ret < 0)13861386- goto err;13851385+ return ret;1387138613881387 resp->std_as_data_ep_desc_valid = 1;13891388···15011500 xhci_sideband_remove_endpoint(uadev[card_num].sb,15021501 usb_pipe_endpoint(subs->dev, subs->data_endpoint->pipe));1503150215041504-err:15051503 return ret;15061504}15071505
+2
sound/usb/stream.c
···987987 * and request Cluster Descriptor988988 */989989 wLength = le16_to_cpu(hc_header.wLength);990990+ if (wLength < sizeof(cluster))991991+ return NULL;990992 cluster = kzalloc(wLength, GFP_KERNEL);991993 if (!cluster)992994 return ERR_PTR(-ENOMEM);
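The new check rejects a device-supplied wLength smaller than the structure parsed out of the allocation; without it, later field accesses read past the heap buffer. A userspace sketch of the same guard (struct and helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative header stand-in for the descriptor parsed out of the
 * allocation. */
struct cluster_hdr {
	uint16_t wLength;
	uint8_t bDescriptorType;
};

/* Sketch of the check added above: a device-controlled length must be
 * at least as large as the structure we are about to parse from the
 * buffer, otherwise reject it before allocating. */
static void *alloc_cluster(uint16_t wlength)
{
	if (wlength < sizeof(struct cluster_hdr))
		return NULL;
	return calloc(1, wlength);
}
```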
+2-2
tools/arch/loongarch/include/asm/orc_types.h
···3434#define ORC_TYPE_REGS 33535#define ORC_TYPE_REGS_PARTIAL 436363737-#ifndef __ASSEMBLY__3737+#ifndef __ASSEMBLER__3838/*3939 * This struct is more or less a vastly simplified version of the DWARF Call4040 * Frame Information standard. It contains only the necessary parts of DWARF···5353 unsigned int type:3;5454 unsigned int signal:1;5555};5656-#endif /* __ASSEMBLY__ */5656+#endif /* __ASSEMBLER__ */57575858#endif /* _ORC_TYPES_H */
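The switch works because __ASSEMBLER__ is predefined by GCC and Clang whenever they preprocess assembly, whereas __ASSEMBLY__ is only a convention that each build system must define by hand. A minimal illustration of the guard:

```c
#include <assert.h>

/* C-only declarations are hidden from assembly preprocessing by
 * testing the compiler-predefined __ASSEMBLER__ macro; when this file
 * is compiled as C the guard is open and the code below is visible. */
#ifndef __ASSEMBLER__
static int compiled_as_c(void)
{
	return 1;
}
#endif /* __ASSEMBLER__ */
```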
+29-11
tools/testing/selftests/iommu/iommufd.c
···5454
5555 mfd_buffer = memfd_mmap(BUFFER_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,5656 &mfd);5757+ assert(mfd_buffer != MAP_FAILED);5858+ assert(mfd > 0);5759}5860
5961FIXTURE(iommufd)···17481746 unsigned int end;17491747 uint8_t *buf;17501748 int prot = PROT_READ | PROT_WRITE;17511751- int mfd;17491749+ int mfd = -1;17521750
17531751 if (variant->file)17541752 buf = memfd_mmap(buf_size, prot, MAP_SHARED, &mfd);17551753 else17561754 buf = mmap(0, buf_size, prot, self->mmap_flags, -1, 0);17571755 ASSERT_NE(MAP_FAILED, buf);17561756+ if (variant->file)17571757+ ASSERT_GT(mfd, 0);17581758 check_refs(buf, buf_size, 0);17591759
17601760 /*···18021798 unsigned int end;18031799 uint8_t *buf;18041800 int prot = PROT_READ | PROT_WRITE;18051805- int mfd;18011801+ int mfd = -1;18061802
18071803 if (variant->file)18081804 buf = memfd_mmap(buf_size, prot, MAP_SHARED, &mfd);18091805 else18101806 buf = mmap(0, buf_size, prot, self->mmap_flags, -1, 0);18111807 ASSERT_NE(MAP_FAILED, buf);18081808+ if (variant->file)18091809+ ASSERT_GT(mfd, 0);18121810 check_refs(buf, buf_size, 0);18131811
18141812 /*···2014200820152009FIXTURE_SETUP(iommufd_dirty_tracking)20162010{20112011+ size_t mmap_buffer_size;20172012 unsigned long size;20182013 int mmap_flags;20192014 void *vrc;···20292022 self->fd = open("/dev/iommu", O_RDWR);20302023 ASSERT_NE(-1, self->fd);20312024
20322032- rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, variant->buffer_size);20332033- if (rc || !self->buffer) {20342034- SKIP(return, "Skipping buffer_size=%lu due to errno=%d",20352035- variant->buffer_size, rc);20362036- }20372037-20382025 mmap_flags = MAP_SHARED | MAP_ANONYMOUS | MAP_FIXED;20262026+ mmap_buffer_size = variant->buffer_size;20392027 if (variant->hugepages) {20402028 /*20412029 * MAP_POPULATE will cause the kernel to fail mmap if THPs are20422030 * not available.20432031 */20442032 mmap_flags |= MAP_HUGETLB | MAP_POPULATE;20332033+20342034+ /*20352035+ * Allocation must be aligned to the HUGEPAGE_SIZE, because the20362036+ * following mmap() will automatically align the length to be a20372037+ * multiple of the underlying huge page size. Failing to do the20382038+ * same at this allocation will result in a memory overwrite by20392039+ * the mmap().20402040+ */20412041+ if (mmap_buffer_size < HUGEPAGE_SIZE)20422042+ mmap_buffer_size = HUGEPAGE_SIZE;20432043+ }20442044+
20452045+ rc = posix_memalign(&self->buffer, HUGEPAGE_SIZE, mmap_buffer_size);20462046+ if (rc || !self->buffer) {20472047+ SKIP(return, "Skipping buffer_size=%lu due to errno=%d",20482048+ mmap_buffer_size, rc);20452049 }20462050 assert((uintptr_t)self->buffer % HUGEPAGE_SIZE == 0);20472047- vrc = mmap(self->buffer, variant->buffer_size, PROT_READ | PROT_WRITE,20512051+ vrc = mmap(self->buffer, mmap_buffer_size, PROT_READ | PROT_WRITE,20482052 mmap_flags, -1, 0);20492053 assert(vrc == self->buffer);20502054
2084206620852067FIXTURE_TEARDOWN(iommufd_dirty_tracking)20862068{20872087- munmap(self->buffer, variant->buffer_size);20882088- munmap(self->bitmap, DIV_ROUND_UP(self->bitmap_size, BITS_PER_BYTE));20692069+ free(self->buffer);20702070+ free(self->bitmap);20892071 teardown_iommufd(self->fd, _metadata);20902072}20912073
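The setup fix above pads the posix_memalign() size up to HUGEPAGE_SIZE before the MAP_FIXED mmap(), because mmap(MAP_HUGETLB) rounds the mapping length up to a huge-page multiple and would otherwise write past the end of a smaller allocation. The sizing rule in isolation (a 2 MiB huge page size is assumed for illustration; the selftest takes it from its own HUGEPAGE_SIZE definition):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative 2 MiB huge page size. */
#define HUGEPAGE_SIZE (2UL << 20)

/* Pad a requested buffer size so that an mmap(MAP_HUGETLB | MAP_FIXED)
 * over it, whose length the kernel rounds up to a huge-page multiple,
 * stays inside the backing allocation. */
static size_t hugepage_buffer_size(size_t requested)
{
	return requested < HUGEPAGE_SIZE ? HUGEPAGE_SIZE : requested;
}
```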