Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v6.14-rc7' into x86/core, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>

+5913 -2692
+2 -1
.mailmap
··· 88 88 Antonio Quartulli <antonio@mandelbit.com> <antonio.quartulli@open-mesh.com> 89 89 Antonio Quartulli <antonio@mandelbit.com> <ordex@autistici.org> 90 90 Antonio Quartulli <antonio@mandelbit.com> <ordex@ritirata.org> 91 - Antonio Quartulli <antonio@mandelbit.com> <antonio@openvpn.net> 92 91 Antonio Quartulli <antonio@mandelbit.com> <a@unstable.cc> 93 92 Anup Patel <anup@brainfault.org> <anup.patel@wdc.com> 94 93 Archit Taneja <archit@ti.com> ··· 281 282 Herbert Xu <herbert@gondor.apana.org.au> 282 283 Huacai Chen <chenhuacai@kernel.org> <chenhc@lemote.com> 283 284 Huacai Chen <chenhuacai@kernel.org> <chenhuacai@loongson.cn> 285 + Ike Panhc <ikepanhc@gmail.com> <ike.pan@canonical.com> 284 286 J. Bruce Fields <bfields@fieldses.org> <bfields@redhat.com> 285 287 J. Bruce Fields <bfields@fieldses.org> <bfields@citi.umich.edu> 286 288 Jacob Shin <Jacob.Shin@amd.com> ··· 692 692 Subhash Jadavani <subhashj@codeaurora.org> 693 693 Sudarshan Rajagopalan <quic_sudaraja@quicinc.com> <sudaraja@codeaurora.org> 694 694 Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> 695 + Sumit Garg <sumit.garg@kernel.org> <sumit.garg@linaro.org> 695 696 Sumit Semwal <sumit.semwal@ti.com> 696 697 Surabhi Vishnoi <quic_svishnoi@quicinc.com> <svishnoi@codeaurora.org> 697 698 Sven Eckelmann <sven@narfation.org> <seckelmann@datto.com>
+1 -1
Documentation/admin-guide/README.rst
··· 176 176 values without prompting. 177 177 178 178 "make defconfig" Create a ./.config file by using the default 179 - symbol values from either arch/$ARCH/defconfig 179 + symbol values from either arch/$ARCH/configs/defconfig 180 180 or arch/$ARCH/configs/${PLATFORM}_defconfig, 181 181 depending on the architecture. 182 182
+11
Documentation/admin-guide/sysctl/kernel.rst
··· 212 212 This value defaults to 0. 213 213 214 214 215 + core_sort_vma 216 + ============= 217 + 218 + The default coredump writes VMAs in address order. By setting 219 + ``core_sort_vma`` to 1, VMAs will be written from smallest size 220 + to largest size. This is known to break at least elfutils, but 221 + can be handy when dealing with very large (and truncated) 222 + coredumps where the more useful debugging details are included 223 + in the smaller VMAs. 224 + 225 + 215 226 core_uses_pid 216 227 ============= 217 228
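The hunk above documents the new ``core_sort_vma`` sysctl. A toy illustration of the ordering change it controls (the ``(address, size)`` pairs are hypothetical, not kernel data structures):

```python
# Toy model of coredump VMA ordering (hypothetical (address, size) pairs).
vmas = [(0x7f0000000000, 0x200000), (0x400000, 0x1000), (0x601000, 0x2000)]

# Default (core_sort_vma = 0): VMAs are written in address order.
by_address = sorted(vmas, key=lambda v: v[0])

# core_sort_vma = 1: smallest VMAs are written first, so a truncated dump
# still contains the small, debugging-relevant mappings.
by_size = sorted(vmas, key=lambda v: v[1])
```

With size ordering, the large 2 MiB mapping moves to the end of the dump, which is the part lost to truncation.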
+1
Documentation/devicetree/bindings/iio/adc/adi,ad7606.yaml
··· 146 146 maxItems: 2 147 147 148 148 pwm-names: 149 + minItems: 1 149 150 items: 150 151 - const: convst1 151 152 - const: convst2
+1
Documentation/devicetree/bindings/input/touchscreen/imagis,ist3038c.yaml
··· 19 19 - imagis,ist3038 20 20 - imagis,ist3038b 21 21 - imagis,ist3038c 22 + - imagis,ist3038h 22 23 23 24 reg: 24 25 maxItems: 1
+2 -2
Documentation/filesystems/idmappings.rst
··· 63 63 straightforward algorithm to use is to apply the inverse of the first idmapping, 64 64 mapping ``k11000`` up to ``u1000``. Afterwards, we can map ``u1000`` down using 65 65 either the second idmapping mapping or third idmapping mapping. The second 66 - idmapping would map ``u1000`` down to ``21000``. The third idmapping would map 67 - ``u1000`` down to ``u31000``. 66 + idmapping would map ``u1000`` down to ``k21000``. The third idmapping would map 67 + ``u1000`` down to ``k31000``. 68 68 69 69 If we were given the same task for the following three idmappings:: 70 70
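The corrected text above describes translating ``k11000`` up through the inverse of the first idmapping and then down through the second or third. A toy model of that composition (the ranges are hypothetical single-id mappings chosen to match the ``u1000``/``k11000`` example):

```python
# An idmapping here is (userspace_start, kernel_start, count); hypothetical
# single-id ranges matching the example in the text.
first  = (1000, 11000, 1)   # u1000 <-> k11000
second = (1000, 21000, 1)   # u1000 <-> k21000
third  = (1000, 31000, 1)   # u1000 <-> k31000

def map_up(m, k):
    """Apply the inverse of an idmapping: kernel id -> userspace id."""
    u0, k0, n = m
    assert k0 <= k < k0 + n
    return u0 + (k - k0)

def map_down(m, u):
    """Apply an idmapping: userspace id -> kernel id."""
    u0, k0, n = m
    assert u0 <= u < u0 + n
    return k0 + (u - u0)

# k11000 --(inverse of first)--> u1000 --(second)--> k21000
k = map_down(second, map_up(first, 11000))
```

Swapping ``second`` for ``third`` in the last line yields ``k31000``, matching the corrected prose.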
+1 -1
Documentation/rust/quick-start.rst
··· 145 145 **************************** 146 146 147 147 The Rust standard library source is required because the build system will 148 - cross-compile ``core`` and ``alloc``. 148 + cross-compile ``core``. 149 149 150 150 If ``rustup`` is being used, run:: 151 151
+1 -1
Documentation/rust/testing.rst
··· 97 97 98 98 /// ``` 99 99 /// # use kernel::{spawn_work_item, workqueue}; 100 - /// spawn_work_item!(workqueue::system(), || pr_info!("x"))?; 100 + /// spawn_work_item!(workqueue::system(), || pr_info!("x\n"))?; 101 101 /// # Ok::<(), Error>(()) 102 102 /// ``` 103 103
+3
Documentation/scheduler/sched-rt-group.rst
··· 102 102 * sched_rt_period_us takes values from 1 to INT_MAX. 103 103 * sched_rt_runtime_us takes values from -1 to sched_rt_period_us. 104 104 * A run time of -1 specifies runtime == period, ie. no limit. 105 + * sched_rt_runtime_us/sched_rt_period_us > 0.05 inorder to preserve 106 + bandwidth for fair dl_server. For accurate value check average of 107 + runtime/period in /sys/kernel/debug/sched/fair_server/cpuX/ 105 108 106 109 107 110 2.2 Default behaviour
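The constraint added in the hunk above can be sketched as a simple validity check (the helper name is illustrative, not a kernel function):

```python
# Sketch of the documented constraint: sched_rt_runtime_us over
# sched_rt_period_us must exceed 0.05 to preserve bandwidth for the
# fair dl_server.
def rt_settings_valid(runtime_us, period_us):
    if runtime_us == -1:           # -1 means runtime == period, i.e. no limit
        return True
    return runtime_us / period_us > 0.05
```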
+38 -20
MAINTAINERS
··· 124 124 F: include/net/iw_handler.h 125 125 F: include/net/wext.h 126 126 F: include/uapi/linux/nl80211.h 127 + N: include/uapi/linux/nl80211-.* 127 128 F: include/uapi/linux/wireless.h 128 129 F: net/wireless/ 129 130 ··· 515 514 ADM8211 WIRELESS DRIVER 516 515 L: linux-wireless@vger.kernel.org 517 516 S: Orphan 518 - F: drivers/net/wireless/admtek/adm8211.* 517 + F: drivers/net/wireless/admtek/ 519 518 520 519 ADP1050 HARDWARE MONITOR DRIVER 521 520 M: Radu Sabau <radu.sabau@analog.com> ··· 5776 5775 5777 5776 COMMON INTERNET FILE SYSTEM CLIENT (CIFS and SMB3) 5778 5777 M: Steve French <sfrench@samba.org> 5778 + M: Steve French <smfrench@gmail.com> 5779 5779 R: Paulo Alcantara <pc@manguebit.com> (DFS, global name space) 5780 5780 R: Ronnie Sahlberg <ronniesahlberg@gmail.com> (directory leases, sparse files) 5781 5781 R: Shyam Prasad N <sprasad@microsoft.com> (multichannel) ··· 6208 6206 6209 6207 CW1200 WLAN driver 6210 6208 S: Orphan 6211 - F: drivers/net/wireless/st/cw1200/ 6209 + F: drivers/net/wireless/st/ 6212 6210 F: include/linux/platform_data/net-cw1200.h 6213 6211 6214 6212 CX18 VIDEO4LINUX DRIVER ··· 9444 9442 F: include/uapi/linux/fscrypt.h 9445 9443 9446 9444 FSI SUBSYSTEM 9447 - M: Jeremy Kerr <jk@ozlabs.org> 9448 - M: Joel Stanley <joel@jms.id.au> 9449 - R: Alistar Popple <alistair@popple.id.au> 9450 - R: Eddie James <eajames@linux.ibm.com> 9445 + M: Eddie James <eajames@linux.ibm.com> 9446 + R: Ninad Palsule <ninad@linux.ibm.com> 9451 9447 L: linux-fsi@lists.ozlabs.org 9452 9448 S: Supported 9453 9449 Q: http://patchwork.ozlabs.org/project/linux-fsi/list/ 9454 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/joel/fsi.git 9455 9450 F: drivers/fsi/ 9456 9451 F: include/linux/fsi*.h 9457 9452 F: include/trace/events/fsi*.h ··· 9829 9830 F: drivers/media/usb/go7007/ 9830 9831 9831 9832 GOODIX TOUCHSCREEN 9832 - M: Bastien Nocera <hadess@hadess.net> 9833 9833 M: Hans de Goede <hdegoede@redhat.com> 9834 9834 L: linux-input@vger.kernel.org 9835 
9835 S: Maintained ··· 11140 11142 F: drivers/i2c/busses/i2c-icy.c 11141 11143 11142 11144 IDEAPAD LAPTOP EXTRAS DRIVER 11143 - M: Ike Panhc <ike.pan@canonical.com> 11145 + M: Ike Panhc <ikepanhc@gmail.com> 11144 11146 L: platform-driver-x86@vger.kernel.org 11145 11147 S: Maintained 11146 11148 W: http://launchpad.net/ideapad-laptop ··· 12653 12655 12654 12656 KERNEL SMB3 SERVER (KSMBD) 12655 12657 M: Namjae Jeon <linkinjeon@kernel.org> 12658 + M: Namjae Jeon <linkinjeon@samba.org> 12656 12659 M: Steve French <sfrench@samba.org> 12660 + M: Steve French <smfrench@gmail.com> 12657 12661 R: Sergey Senozhatsky <senozhatsky@chromium.org> 12658 12662 R: Tom Talpey <tom@talpey.com> 12659 12663 L: linux-cifs@vger.kernel.org ··· 12872 12872 F: security/keys/trusted-keys/trusted_dcp.c 12873 12873 12874 12874 KEYS-TRUSTED-TEE 12875 - M: Sumit Garg <sumit.garg@linaro.org> 12875 + M: Sumit Garg <sumit.garg@kernel.org> 12876 12876 L: linux-integrity@vger.kernel.org 12877 12877 L: keyrings@vger.kernel.org 12878 12878 S: Supported ··· 13994 13994 L: libertas-dev@lists.infradead.org 13995 13995 S: Orphan 13996 13996 F: drivers/net/wireless/marvell/libertas/ 13997 + F: drivers/net/wireless/marvell/libertas_tf/ 13997 13998 13998 13999 MARVELL MACCHIATOBIN SUPPORT 13999 14000 M: Russell King <linux@armlinux.org.uk> ··· 15664 15663 M: Claudiu Beznea <claudiu.beznea@tuxon.dev> 15665 15664 L: linux-wireless@vger.kernel.org 15666 15665 S: Supported 15667 - F: drivers/net/wireless/microchip/wilc1000/ 15666 + F: drivers/net/wireless/microchip/ 15668 15667 15669 15668 MICROSEMI MIPS SOCS 15670 15669 M: Alexandre Belloni <alexandre.belloni@bootlin.com> ··· 16450 16449 T: git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next.git 16451 16450 F: Documentation/devicetree/bindings/net/wireless/ 16452 16451 F: drivers/net/wireless/ 16452 + X: drivers/net/wireless/ath/ 16453 + X: drivers/net/wireless/broadcom/ 16454 + X: drivers/net/wireless/intel/ 16455 + X: 
drivers/net/wireless/intersil/ 16456 + X: drivers/net/wireless/marvell/ 16457 + X: drivers/net/wireless/mediatek/mt76/ 16458 + X: drivers/net/wireless/mediatek/mt7601u/ 16459 + X: drivers/net/wireless/microchip/ 16460 + X: drivers/net/wireless/purelifi/ 16461 + X: drivers/net/wireless/quantenna/ 16462 + X: drivers/net/wireless/ralink/ 16463 + X: drivers/net/wireless/realtek/ 16464 + X: drivers/net/wireless/rsi/ 16465 + X: drivers/net/wireless/silabs/ 16466 + X: drivers/net/wireless/st/ 16467 + X: drivers/net/wireless/ti/ 16468 + X: drivers/net/wireless/zydas/ 16453 16469 16454 16470 NETWORKING [DSA] 16455 16471 M: Andrew Lunn <andrew@lunn.ch> ··· 17690 17672 F: drivers/tee/optee/ 17691 17673 17692 17674 OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER 17693 - M: Sumit Garg <sumit.garg@linaro.org> 17675 + M: Sumit Garg <sumit.garg@kernel.org> 17694 17676 L: op-tee@lists.trustedfirmware.org 17695 17677 S: Maintained 17696 17678 F: drivers/char/hw_random/optee-rng.c ··· 17851 17833 L: linux-wireless@vger.kernel.org 17852 17834 S: Maintained 17853 17835 W: https://wireless.wiki.kernel.org/en/users/Drivers/p54 17854 - F: drivers/net/wireless/intersil/p54/ 17836 + F: drivers/net/wireless/intersil/ 17855 17837 17856 17838 PACKET SOCKETS 17857 17839 M: Willem de Bruijn <willemdebruijn.kernel@gmail.com> ··· 19128 19110 M: Srinivasan Raju <srini.raju@purelifi.com> 19129 19111 L: linux-wireless@vger.kernel.org 19130 19112 S: Supported 19131 - F: drivers/net/wireless/purelifi/plfxlc/ 19113 + F: drivers/net/wireless/purelifi/ 19132 19114 19133 19115 PVRUSB2 VIDEO4LINUX DRIVER 19134 19116 M: Mike Isely <isely@pobox.com> ··· 19679 19661 R: Sergey Matyukevich <geomatsi@gmail.com> 19680 19662 L: linux-wireless@vger.kernel.org 19681 19663 S: Maintained 19682 - F: drivers/net/wireless/quantenna 19664 + F: drivers/net/wireless/quantenna/ 19683 19665 19684 19666 RADEON and AMDGPU DRM DRIVERS 19685 19667 M: Alex Deucher <alexander.deucher@amd.com> ··· 19759 19741 M: Stanislaw Gruszka 
<stf_xl@wp.pl> 19760 19742 L: linux-wireless@vger.kernel.org 19761 19743 S: Maintained 19762 - F: drivers/net/wireless/ralink/rt2x00/ 19744 + F: drivers/net/wireless/ralink/ 19763 19745 19764 19746 RAMDISK RAM BLOCK DEVICE DRIVER 19765 19747 M: Jens Axboe <axboe@kernel.dk> ··· 21107 21089 21108 21090 SAMSUNG SPI DRIVERS 21109 21091 M: Andi Shyti <andi.shyti@kernel.org> 21092 + R: Tudor Ambarus <tudor.ambarus@linaro.org> 21110 21093 L: linux-spi@vger.kernel.org 21111 21094 L: linux-samsung-soc@vger.kernel.org 21112 21095 S: Maintained ··· 21518 21499 21519 21500 SFC NETWORK DRIVER 21520 21501 M: Edward Cree <ecree.xilinx@gmail.com> 21521 - M: Martin Habets <habetsm.xilinx@gmail.com> 21522 21502 L: netdev@vger.kernel.org 21523 21503 L: linux-net-drivers@amd.com 21524 21504 S: Maintained ··· 21726 21708 M: Jérôme Pouiller <jerome.pouiller@silabs.com> 21727 21709 S: Supported 21728 21710 F: Documentation/devicetree/bindings/net/wireless/silabs,wfx.yaml 21729 - F: drivers/net/wireless/silabs/wfx/ 21711 + F: drivers/net/wireless/silabs/ 21730 21712 21731 21713 SILICON MOTION SM712 FRAME BUFFER DRIVER 21732 21714 M: Sudip Mukherjee <sudipm.mukherjee@gmail.com> ··· 23303 23285 23304 23286 TEE SUBSYSTEM 23305 23287 M: Jens Wiklander <jens.wiklander@linaro.org> 23306 - R: Sumit Garg <sumit.garg@linaro.org> 23288 + R: Sumit Garg <sumit.garg@kernel.org> 23307 23289 L: op-tee@lists.trustedfirmware.org 23308 23290 S: Maintained 23309 23291 F: Documentation/ABI/testing/sysfs-class-tee ··· 26226 26208 ZD1211RW WIRELESS DRIVER 26227 26209 L: linux-wireless@vger.kernel.org 26228 26210 S: Orphan 26229 - F: drivers/net/wireless/zydas/zd1211rw/ 26211 + F: drivers/net/wireless/zydas/ 26230 26212 26231 26213 ZD1301 MEDIA DRIVER 26232 26214 L: linux-media@vger.kernel.org
+6 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 14 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc7 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION* ··· 1125 1125 # Align the bit size of userspace programs with the kernel 1126 1126 KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)) 1127 1127 KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)) 1128 + 1129 + # userspace programs are linked via the compiler, use the correct linker 1130 + ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_LD_IS_LLD),yy) 1131 + KBUILD_USERLDFLAGS += --ld-path=$(LD) 1132 + endif 1128 1133 1129 1134 # make the checker run with the right architecture 1130 1135 CHECKFLAGS += --arch=$(ARCH)
+25 -12
arch/arm/mm/fault-armv.c
··· 62 62 } 63 63 64 64 static int adjust_pte(struct vm_area_struct *vma, unsigned long address, 65 - unsigned long pfn, struct vm_fault *vmf) 65 + unsigned long pfn, bool need_lock) 66 66 { 67 67 spinlock_t *ptl; 68 68 pgd_t *pgd; ··· 99 99 if (!pte) 100 100 return 0; 101 101 102 - /* 103 - * If we are using split PTE locks, then we need to take the page 104 - * lock here. Otherwise we are using shared mm->page_table_lock 105 - * which is already locked, thus cannot take it. 106 - */ 107 - if (ptl != vmf->ptl) { 102 + if (need_lock) { 103 + /* 104 + * Use nested version here to indicate that we are already 105 + * holding one similar spinlock. 106 + */ 108 107 spin_lock_nested(ptl, SINGLE_DEPTH_NESTING); 109 108 if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) { 110 109 pte_unmap_unlock(pte, ptl); ··· 113 114 114 115 ret = do_adjust_pte(vma, address, pfn, pte); 115 116 116 - if (ptl != vmf->ptl) 117 + if (need_lock) 117 118 spin_unlock(ptl); 118 119 pte_unmap(pte); 119 120 ··· 122 123 123 124 static void 124 125 make_coherent(struct address_space *mapping, struct vm_area_struct *vma, 125 - unsigned long addr, pte_t *ptep, unsigned long pfn, 126 - struct vm_fault *vmf) 126 + unsigned long addr, pte_t *ptep, unsigned long pfn) 127 127 { 128 + const unsigned long pmd_start_addr = ALIGN_DOWN(addr, PMD_SIZE); 129 + const unsigned long pmd_end_addr = pmd_start_addr + PMD_SIZE; 128 130 struct mm_struct *mm = vma->vm_mm; 129 131 struct vm_area_struct *mpnt; 130 132 unsigned long offset; ··· 142 142 flush_dcache_mmap_lock(mapping); 143 143 vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) { 144 144 /* 145 + * If we are using split PTE locks, then we need to take the pte 146 + * lock. Otherwise we are using shared mm->page_table_lock which 147 + * is already locked, thus cannot take it. 
148 + */ 149 + bool need_lock = IS_ENABLED(CONFIG_SPLIT_PTE_PTLOCKS); 150 + unsigned long mpnt_addr; 151 + 152 + /* 145 153 * If this VMA is not in our MM, we can ignore it. 146 154 * Note that we intentionally mask out the VMA 147 155 * that we are fixing up. ··· 159 151 if (!(mpnt->vm_flags & VM_MAYSHARE)) 160 152 continue; 161 153 offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT; 162 - aliases += adjust_pte(mpnt, mpnt->vm_start + offset, pfn, vmf); 154 + mpnt_addr = mpnt->vm_start + offset; 155 + 156 + /* Avoid deadlocks by not grabbing the same PTE lock again. */ 157 + if (mpnt_addr >= pmd_start_addr && mpnt_addr < pmd_end_addr) 158 + need_lock = false; 159 + aliases += adjust_pte(mpnt, mpnt_addr, pfn, need_lock); 163 160 } 164 161 flush_dcache_mmap_unlock(mapping); 165 162 if (aliases) ··· 207 194 __flush_dcache_folio(mapping, folio); 208 195 if (mapping) { 209 196 if (cache_is_vivt()) 210 - make_coherent(mapping, vma, addr, ptep, pfn, vmf); 197 + make_coherent(mapping, vma, addr, ptep, pfn); 211 198 else if (vma->vm_flags & VM_EXEC) 212 199 __flush_icache_all(); 213 200 }
+26 -5
arch/arm64/include/asm/el2_setup.h
··· 16 16 #include <asm/sysreg.h> 17 17 #include <linux/irqchip/arm-gic-v3.h> 18 18 19 + .macro init_el2_hcr val 20 + mov_q x0, \val 21 + 22 + /* 23 + * Compliant CPUs advertise their VHE-onlyness with 24 + * ID_AA64MMFR4_EL1.E2H0 < 0. On such CPUs HCR_EL2.E2H is RES1, but it 25 + * can reset into an UNKNOWN state and might not read as 1 until it has 26 + * been initialized explicitly. 27 + * 28 + * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but 29 + * don't advertise it (they predate this relaxation). 30 + * 31 + * Initalize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H 32 + * indicating whether the CPU is running in E2H mode. 33 + */ 34 + mrs_s x1, SYS_ID_AA64MMFR4_EL1 35 + sbfx x1, x1, #ID_AA64MMFR4_EL1_E2H0_SHIFT, #ID_AA64MMFR4_EL1_E2H0_WIDTH 36 + cmp x1, #0 37 + b.ge .LnVHE_\@ 38 + 39 + orr x0, x0, #HCR_E2H 40 + .LnVHE_\@: 41 + msr hcr_el2, x0 42 + isb 43 + .endm 44 + 19 45 .macro __init_el2_sctlr 20 46 mov_q x0, INIT_SCTLR_EL2_MMU_OFF 21 47 msr sctlr_el2, x0 ··· 268 242 msr_s SYS_GCSCR_EL1, xzr 269 243 msr_s SYS_GCSCRE0_EL1, xzr 270 244 .Lskip_gcs_\@: 271 - .endm 272 - 273 - .macro __init_el2_nvhe_prepare_eret 274 - mov x0, #INIT_PSTATE_EL1 275 - msr spsr_el2, x0 276 245 .endm 277 246 278 247 .macro __init_el2_mpam
+12 -10
arch/arm64/include/asm/tlbflush.h
··· 396 396 #define __flush_tlb_range_op(op, start, pages, stride, \ 397 397 asid, tlb_level, tlbi_user, lpa2) \ 398 398 do { \ 399 + typeof(start) __flush_start = start; \ 400 + typeof(pages) __flush_pages = pages; \ 399 401 int num = 0; \ 400 402 int scale = 3; \ 401 403 int shift = lpa2 ? 16 : PAGE_SHIFT; \ 402 404 unsigned long addr; \ 403 405 \ 404 - while (pages > 0) { \ 406 + while (__flush_pages > 0) { \ 405 407 if (!system_supports_tlb_range() || \ 406 - pages == 1 || \ 407 - (lpa2 && start != ALIGN(start, SZ_64K))) { \ 408 - addr = __TLBI_VADDR(start, asid); \ 408 + __flush_pages == 1 || \ 409 + (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \ 410 + addr = __TLBI_VADDR(__flush_start, asid); \ 409 411 __tlbi_level(op, addr, tlb_level); \ 410 412 if (tlbi_user) \ 411 413 __tlbi_user_level(op, addr, tlb_level); \ 412 - start += stride; \ 413 - pages -= stride >> PAGE_SHIFT; \ 414 + __flush_start += stride; \ 415 + __flush_pages -= stride >> PAGE_SHIFT; \ 414 416 continue; \ 415 417 } \ 416 418 \ 417 - num = __TLBI_RANGE_NUM(pages, scale); \ 419 + num = __TLBI_RANGE_NUM(__flush_pages, scale); \ 418 420 if (num >= 0) { \ 419 - addr = __TLBI_VADDR_RANGE(start >> shift, asid, \ 421 + addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \ 420 422 scale, num, tlb_level); \ 421 423 __tlbi(r##op, addr); \ 422 424 if (tlbi_user) \ 423 425 __tlbi_user(r##op, addr); \ 424 - start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \ 425 - pages -= __TLBI_RANGE_PAGES(num, scale); \ 426 + __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \ 427 + __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\ 426 428 } \ 427 429 scale--; \ 428 430 } \
+3 -19
arch/arm64/kernel/head.S
··· 298 298 msr sctlr_el2, x0 299 299 isb 300 300 0: 301 - mov_q x0, HCR_HOST_NVHE_FLAGS 302 301 303 - /* 304 - * Compliant CPUs advertise their VHE-onlyness with 305 - * ID_AA64MMFR4_EL1.E2H0 < 0. HCR_EL2.E2H can be 306 - * RES1 in that case. Publish the E2H bit early so that 307 - * it can be picked up by the init_el2_state macro. 308 - * 309 - * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but 310 - * don't advertise it (they predate this relaxation). 311 - */ 312 - mrs_s x1, SYS_ID_AA64MMFR4_EL1 313 - tbz x1, #(ID_AA64MMFR4_EL1_E2H0_SHIFT + ID_AA64MMFR4_EL1_E2H0_WIDTH - 1), 1f 314 - 315 - orr x0, x0, #HCR_E2H 316 - 1: 317 - msr hcr_el2, x0 318 - isb 319 - 302 + init_el2_hcr HCR_HOST_NVHE_FLAGS 320 303 init_el2_state 321 304 322 305 /* Hypervisor stub */ ··· 322 339 msr sctlr_el1, x1 323 340 mov x2, xzr 324 341 3: 325 - __init_el2_nvhe_prepare_eret 342 + mov x0, #INIT_PSTATE_EL1 343 + msr spsr_el2, x0 326 344 327 345 mov w0, #BOOT_CPU_MODE_EL2 328 346 orr x0, x0, x2
+7 -3
arch/arm64/kvm/hyp/nvhe/hyp-init.S
··· 73 73 eret 74 74 SYM_CODE_END(__kvm_hyp_init) 75 75 76 + /* 77 + * Initialize EL2 CPU state to sane values. 78 + * 79 + * HCR_EL2.E2H must have been initialized already. 80 + */ 76 81 SYM_CODE_START_LOCAL(__kvm_init_el2_state) 77 - /* Initialize EL2 CPU state to sane values. */ 78 82 init_el2_state // Clobbers x0..x2 79 83 finalise_el2_state 80 84 ret ··· 210 206 211 207 2: msr SPsel, #1 // We want to use SP_EL{1,2} 212 208 213 - bl __kvm_init_el2_state 209 + init_el2_hcr 0 214 210 215 - __init_el2_nvhe_prepare_eret 211 + bl __kvm_init_el2_state 216 212 217 213 /* Enable MMU, set vectors and stack. */ 218 214 mov x0, x28
+3
arch/arm64/kvm/hyp/nvhe/psci-relay.c
··· 218 218 if (is_cpu_on) 219 219 release_boot_args(boot_args); 220 220 221 + write_sysreg_el1(INIT_SCTLR_EL1_MMU_OFF, SYS_SCTLR); 222 + write_sysreg(INIT_PSTATE_EL1, SPSR_EL2); 223 + 221 224 __host_enter(host_ctxt); 222 225 } 223 226
+4 -1
arch/arm64/mm/mmu.c
··· 1177 1177 struct vmem_altmap *altmap) 1178 1178 { 1179 1179 WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END)); 1180 + /* [start, end] should be within one section */ 1181 + WARN_ON_ONCE(end - start > PAGES_PER_SECTION * sizeof(struct page)); 1180 1182 1181 - if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES)) 1183 + if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) || 1184 + (end - start < PAGES_PER_SECTION * sizeof(struct page))) 1182 1185 return vmemmap_populate_basepages(start, end, node, altmap); 1183 1186 else 1184 1187 return vmemmap_populate_hugepages(start, end, node, altmap);
-12
arch/loongarch/kernel/acpi.c
··· 249 249 return acpi_map_pxm_to_node(pxm); 250 250 } 251 251 252 - /* 253 - * Callback for SLIT parsing. pxm_to_node() returns NUMA_NO_NODE for 254 - * I/O localities since SRAT does not list them. I/O localities are 255 - * not supported at this point. 256 - */ 257 - unsigned int numa_distance_cnt; 258 - 259 - static inline unsigned int get_numa_distances_cnt(struct acpi_table_slit *slit) 260 - { 261 - return slit->locality_count; 262 - } 263 - 264 252 void __init numa_set_distance(int from, int to, int distance) 265 253 { 266 254 if ((u8)distance != distance || (from == to && distance != LOCAL_DISTANCE)) {
+2 -2
arch/loongarch/kernel/machine_kexec.c
··· 126 126 /* All secondary cpus go to kexec_smp_wait */ 127 127 if (smp_processor_id() > 0) { 128 128 relocated_kexec_smp_wait(NULL); 129 - unreachable(); 129 + BUG(); 130 130 } 131 131 #endif 132 132 133 133 do_kexec = (void *)reboot_code_buffer; 134 134 do_kexec(efi_boot, cmdline_ptr, systable_ptr, start_addr, first_ind_entry); 135 135 136 - unreachable(); 136 + BUG(); 137 137 } 138 138 139 139
+3
arch/loongarch/kernel/setup.c
··· 387 387 */ 388 388 static void __init arch_mem_init(char **cmdline_p) 389 389 { 390 + /* Recalculate max_low_pfn for "mem=xxx" */ 391 + max_pfn = max_low_pfn = PHYS_PFN(memblock_end_of_DRAM()); 392 + 390 393 if (usermem) 391 394 pr_info("User-defined physical RAM map overwrite\n"); 392 395
+46 -1
arch/loongarch/kernel/smp.c
··· 19 19 #include <linux/smp.h> 20 20 #include <linux/threads.h> 21 21 #include <linux/export.h> 22 + #include <linux/suspend.h> 22 23 #include <linux/syscore_ops.h> 23 24 #include <linux/time.h> 24 25 #include <linux/tracepoint.h> ··· 424 423 mb(); 425 424 } 426 425 427 - void __noreturn arch_cpu_idle_dead(void) 426 + static void __noreturn idle_play_dead(void) 428 427 { 429 428 register uint64_t addr; 430 429 register void (*init_fn)(void); ··· 447 446 init_fn(); 448 447 BUG(); 449 448 } 449 + 450 + #ifdef CONFIG_HIBERNATION 451 + static void __noreturn poll_play_dead(void) 452 + { 453 + register uint64_t addr; 454 + register void (*init_fn)(void); 455 + 456 + idle_task_exit(); 457 + __this_cpu_write(cpu_state, CPU_DEAD); 458 + 459 + __smp_mb(); 460 + do { 461 + __asm__ __volatile__("nop\n\t"); 462 + addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0); 463 + } while (addr == 0); 464 + 465 + init_fn = (void *)TO_CACHE(addr); 466 + iocsr_write32(0xffffffff, LOONGARCH_IOCSR_IPI_CLEAR); 467 + 468 + init_fn(); 469 + BUG(); 470 + } 471 + #endif 472 + 473 + static void (*play_dead)(void) = idle_play_dead; 474 + 475 + void __noreturn arch_cpu_idle_dead(void) 476 + { 477 + play_dead(); 478 + BUG(); /* play_dead() doesn't return */ 479 + } 480 + 481 + #ifdef CONFIG_HIBERNATION 482 + int hibernate_resume_nonboot_cpu_disable(void) 483 + { 484 + int ret; 485 + 486 + play_dead = poll_play_dead; 487 + ret = suspend_disable_secondary_cpus(); 488 + play_dead = idle_play_dead; 489 + 490 + return ret; 491 + } 492 + #endif 450 493 451 494 #endif 452 495
+6
arch/loongarch/kvm/exit.c
··· 669 669 struct kvm_run *run = vcpu->run; 670 670 unsigned long badv = vcpu->arch.badv; 671 671 672 + /* Inject ADE exception if exceed max GPA size */ 673 + if (unlikely(badv >= vcpu->kvm->arch.gpa_size)) { 674 + kvm_queue_exception(vcpu, EXCCODE_ADE, EXSUBCODE_ADEM); 675 + return RESUME_GUEST; 676 + } 677 + 672 678 ret = kvm_handle_mm_fault(vcpu, badv, write); 673 679 if (ret) { 674 680 /* Treat as MMIO */
+7
arch/loongarch/kvm/main.c
··· 317 317 kvm_debug("GCFG:%lx GSTAT:%lx GINTC:%lx GTLBC:%lx", 318 318 read_csr_gcfg(), read_csr_gstat(), read_csr_gintc(), read_csr_gtlbc()); 319 319 320 + /* 321 + * HW Guest CSR registers are lost after CPU suspend and resume. 322 + * Clear last_vcpu so that Guest CSR registers forced to reload 323 + * from vCPU SW state. 324 + */ 325 + this_cpu_ptr(vmcs)->last_vcpu = NULL; 326 + 320 327 return 0; 321 328 } 322 329
+1 -1
arch/loongarch/kvm/vcpu.c
··· 311 311 { 312 312 int ret = RESUME_GUEST; 313 313 unsigned long estat = vcpu->arch.host_estat; 314 - u32 intr = estat & 0x1fff; /* Ignore NMI */ 314 + u32 intr = estat & CSR_ESTAT_IS; 315 315 u32 ecode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT; 316 316 317 317 vcpu->mode = OUTSIDE_GUEST_MODE;
+5 -1
arch/loongarch/kvm/vm.c
··· 48 48 if (kvm_pvtime_supported()) 49 49 kvm->arch.pv_features |= BIT(KVM_FEATURE_STEAL_TIME); 50 50 51 - kvm->arch.gpa_size = BIT(cpu_vabits - 1); 51 + /* 52 + * cpu_vabits means user address space only (a half of total). 53 + * GPA size of VM is the same with the size of user address space. 54 + */ 55 + kvm->arch.gpa_size = BIT(cpu_vabits); 52 56 kvm->arch.root_level = CONFIG_PGTABLE_LEVELS - 1; 53 57 kvm->arch.invalid_ptes[0] = 0; 54 58 kvm->arch.invalid_ptes[1] = (unsigned long)invalid_pte_table;
+5 -1
arch/loongarch/mm/mmap.c
··· 3 3 * Copyright (C) 2020-2022 Loongson Technology Corporation Limited 4 4 */ 5 5 #include <linux/export.h> 6 + #include <linux/hugetlb.h> 6 7 #include <linux/io.h> 7 8 #include <linux/kfence.h> 8 9 #include <linux/memblock.h> ··· 64 63 } 65 64 66 65 info.length = len; 67 - info.align_mask = do_color_align ? (PAGE_MASK & SHM_ALIGN_MASK) : 0; 68 66 info.align_offset = pgoff << PAGE_SHIFT; 67 + if (filp && is_file_hugepages(filp)) 68 + info.align_mask = huge_page_mask_align(filp); 69 + else 70 + info.align_mask = do_color_align ? (PAGE_MASK & SHM_ALIGN_MASK) : 0; 69 71 70 72 if (dir == DOWN) { 71 73 info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+4 -2
arch/m68k/include/asm/sun3_pgalloc.h
··· 44 44 pgd_t *new_pgd; 45 45 46 46 new_pgd = __pgd_alloc(mm, 0); 47 - memcpy(new_pgd, swapper_pg_dir, PAGE_SIZE); 48 - memset(new_pgd, 0, (PAGE_OFFSET >> PGDIR_SHIFT)); 47 + if (likely(new_pgd != NULL)) { 48 + memcpy(new_pgd, swapper_pg_dir, PAGE_SIZE); 49 + memset(new_pgd, 0, (PAGE_OFFSET >> PGDIR_SHIFT)); 50 + } 49 51 return new_pgd; 50 52 } 51 53
+1
arch/riscv/Kconfig.socs
··· 26 26 27 27 config ARCH_SPACEMIT 28 28 bool "SpacemiT SoCs" 29 + select PINCTRL 29 30 help 30 31 This enables support for SpacemiT SoC platform hardware. 31 32
+2 -1
arch/s390/kernel/ftrace.c
··· 266 266 struct ftrace_ops *op, struct ftrace_regs *fregs) 267 267 { 268 268 unsigned long *parent = &arch_ftrace_regs(fregs)->regs.gprs[14]; 269 + unsigned long sp = arch_ftrace_regs(fregs)->regs.gprs[15]; 269 270 270 271 if (unlikely(ftrace_graph_is_dead())) 271 272 return; 272 273 if (unlikely(atomic_read(&current->tracing_graph_pause))) 273 274 return; 274 - if (!function_graph_enter_regs(*parent, ip, 0, parent, fregs)) 275 + if (!function_graph_enter_regs(*parent, ip, 0, (unsigned long *)sp, fregs)) 275 276 *parent = (unsigned long)&return_to_handler; 276 277 } 277 278
+3 -3
arch/s390/kernel/traps.c
··· 285 285 return; 286 286 asm volatile( 287 287 " mc 0,0\n" 288 - "0: xgr %0,%0\n" 288 + "0: lhi %[val],0\n" 289 289 "1:\n" 290 - EX_TABLE(0b,1b) 291 - : "+d" (val)); 290 + EX_TABLE(0b, 1b) 291 + : [val] "+d" (val)); 292 292 if (!val) 293 293 panic("Monitor call doesn't work!\n"); 294 294 }
+1
arch/x86/Kconfig
··· 1307 1307 config MICROCODE 1308 1308 def_bool y 1309 1309 depends on CPU_SUP_AMD || CPU_SUP_INTEL 1310 + select CRYPTO_LIB_SHA256 if CPU_SUP_AMD 1310 1311 1311 1312 config MICROCODE_INITRD32 1312 1313 def_bool y
+2
arch/x86/boot/compressed/pgtable_64.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include "misc.h" 3 3 #include <asm/bootparam.h> 4 + #include <asm/bootparam_utils.h> 4 5 #include <asm/e820/types.h> 5 6 #include <asm/processor.h> 6 7 #include "pgtable.h" ··· 108 107 bool l5_required = false; 109 108 110 109 /* Initialize boot_params. Required for cmdline_find_option_bool(). */ 110 + sanitize_boot_params(bp); 111 111 boot_params_ptr = bp; 112 112 113 113 /*
+8 -15
arch/x86/coco/sev/core.c
··· 2853 2853 if (!mdesc->response) 2854 2854 goto e_free_request; 2855 2855 2856 - mdesc->certs_data = alloc_shared_pages(SEV_FW_BLOB_MAX_SIZE); 2857 - if (!mdesc->certs_data) 2858 - goto e_free_response; 2859 - 2860 - /* initial the input address for guest request */ 2861 - mdesc->input.req_gpa = __pa(mdesc->request); 2862 - mdesc->input.resp_gpa = __pa(mdesc->response); 2863 - mdesc->input.data_gpa = __pa(mdesc->certs_data); 2864 - 2865 2856 return mdesc; 2866 2857 2867 - e_free_response: 2868 - free_shared_pages(mdesc->response, sizeof(struct snp_guest_msg)); 2869 2858 e_free_request: 2870 2859 free_shared_pages(mdesc->request, sizeof(struct snp_guest_msg)); 2871 2860 e_unmap: ··· 2874 2885 kfree(mdesc->ctx); 2875 2886 free_shared_pages(mdesc->response, sizeof(struct snp_guest_msg)); 2876 2887 free_shared_pages(mdesc->request, sizeof(struct snp_guest_msg)); 2877 - free_shared_pages(mdesc->certs_data, SEV_FW_BLOB_MAX_SIZE); 2878 2888 iounmap((__force void __iomem *)mdesc->secrets); 2879 2889 2880 2890 memset(mdesc, 0, sizeof(*mdesc)); ··· 3042 3054 * sequence number must be incremented or the VMPCK must be deleted to 3043 3055 * prevent reuse of the IV. 3044 3056 */ 3045 - rc = snp_issue_guest_request(req, &mdesc->input, rio); 3057 + rc = snp_issue_guest_request(req, &req->input, rio); 3046 3058 switch (rc) { 3047 3059 case -ENOSPC: 3048 3060 /* ··· 3052 3064 * order to increment the sequence number and thus avoid 3053 3065 * IV reuse. 3054 3066 */ 3055 - override_npages = mdesc->input.data_npages; 3067 + override_npages = req->input.data_npages; 3056 3068 req->exit_code = SVM_VMGEXIT_GUEST_REQUEST; 3057 3069 3058 3070 /* ··· 3108 3120 } 3109 3121 3110 3122 if (override_npages) 3111 - mdesc->input.data_npages = override_npages; 3123 + req->input.data_npages = override_npages; 3112 3124 3113 3125 return rc; 3114 3126 } ··· 3145 3157 * request page. 
3146 3158 */ 3147 3159 memcpy(mdesc->request, &mdesc->secret_request, sizeof(mdesc->secret_request)); 3160 + 3161 + /* Initialize the input address for guest request */ 3162 + req->input.req_gpa = __pa(mdesc->request); 3163 + req->input.resp_gpa = __pa(mdesc->response); 3164 + req->input.data_gpa = req->certs_data ? __pa(req->certs_data) : 0; 3148 3165 3149 3166 rc = __handle_guest_request(mdesc, req, rio); 3150 3167 if (rc) {
+1
arch/x86/hyperv/hv_vtl.c
··· 30 30 x86_platform.realmode_init = x86_init_noop; 31 31 x86_init.irqs.pre_vector_init = x86_init_noop; 32 32 x86_init.timers.timer_init = x86_init_noop; 33 + x86_init.resources.probe_roms = x86_init_noop; 33 34 34 35 /* Avoid searching for BIOS MP tables */ 35 36 x86_init.mpparse.find_mptable = x86_init_noop;
+1 -2
arch/x86/hyperv/ivm.c
···
464 464 enum hv_mem_host_visibility visibility)
465 465 {
466 466 struct hv_gpa_range_for_visibility *input;
467 - u16 pages_processed;
468 467 u64 hv_status;
469 468 unsigned long flags;
470 469
···
492 493 memcpy((void *)input->gpa_page_list, pfn, count * sizeof(*pfn));
493 494 hv_status = hv_do_rep_hypercall(
494 495 HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY, count,
495 - 0, input, &pages_processed);
496 + 0, input, NULL);
496 497 local_irq_restore(flags);
497 498
498 499 if (hv_result_success(hv_status))
+1
arch/x86/include/asm/kvm_host.h
···
780 780 u32 pkru;
781 781 u32 hflags;
782 782 u64 efer;
783 + u64 host_debugctl;
783 784 u64 apic_base;
784 785 struct kvm_lapic *apic; /* kernel irqchip context */
785 786 bool load_eoi_exitmap_pending;
+4 -4
arch/x86/include/asm/pgtable-2level_types.h
···
23 23 #define ARCH_PAGE_TABLE_SYNC_MASK PGTBL_PMD_MODIFIED
24 24
25 25 /*
26 - * traditional i386 two-level paging structure:
26 + * Traditional i386 two-level paging structure:
27 27 */
28 28
29 29 #define PGDIR_SHIFT 22
30 30 #define PTRS_PER_PGD 1024
31 31
32 -
33 32 /*
34 - * the i386 is two-level, so we don't really have any
35 - * PMD directory physically.
33 + * The i386 is two-level, so we don't really have any
34 + * PMD directory physically:
36 35 */
36 + #define PTRS_PER_PMD 1
37 37
38 38 #define PTRS_PER_PTE 1024
39 39
+3 -3
arch/x86/include/asm/sev.h
···
203 203 unsigned int vmpck_id;
204 204 u8 msg_version;
205 205 u8 msg_type;
206 +
207 + struct snp_req_data input;
208 + void *certs_data;
206 209 };
207 210
208 211 /*
···
266 263 struct snp_guest_msg secret_request, secret_response;
267 264
268 265 struct snp_secrets_page *secrets;
269 - struct snp_req_data input;
270 -
271 - void *certs_data;
272 266
273 267 struct aesgcm_ctx *ctx;
274 268
+3 -6
arch/x86/kernel/amd_nb.c
···
143 143
144 144 struct resource *amd_get_mmconfig_range(struct resource *res)
145 145 {
146 - u32 address;
147 146 u64 base, msr;
148 147 unsigned int segn_busn_bits;
149 148
···
150 151 boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
151 152 return NULL;
152 153
153 - /* assume all cpus from fam10h have mmconfig */
154 - if (boot_cpu_data.x86 < 0x10)
154 + /* Assume CPUs from Fam10h have mmconfig, although not all VMs do */
155 + if (boot_cpu_data.x86 < 0x10 ||
156 + rdmsrl_safe(MSR_FAM10H_MMIO_CONF_BASE, &msr))
155 157 return NULL;
156 -
157 - address = MSR_FAM10H_MMIO_CONF_BASE;
158 - rdmsrl(address, msr);
159 158
160 159 /* mmconfig is not enabled */
161 160 if (!(msr & FAM10H_MMIO_CONF_ENABLE))
+189 -85
arch/x86/kernel/cpu/microcode/amd.c
··· 23 23 24 24 #include <linux/earlycpio.h> 25 25 #include <linux/firmware.h> 26 + #include <linux/bsearch.h> 26 27 #include <linux/uaccess.h> 27 28 #include <linux/vmalloc.h> 28 29 #include <linux/initrd.h> 29 30 #include <linux/kernel.h> 30 31 #include <linux/pci.h> 31 32 33 + #include <crypto/sha2.h> 34 + 32 35 #include <asm/microcode.h> 33 36 #include <asm/processor.h> 37 + #include <asm/cmdline.h> 34 38 #include <asm/setup.h> 35 39 #include <asm/cpu.h> 36 40 #include <asm/msr.h> ··· 149 145 */ 150 146 static u32 bsp_cpuid_1_eax __ro_after_init; 151 147 148 + static bool sha_check = true; 149 + 150 + struct patch_digest { 151 + u32 patch_id; 152 + u8 sha256[SHA256_DIGEST_SIZE]; 153 + }; 154 + 155 + #include "amd_shas.c" 156 + 157 + static int cmp_id(const void *key, const void *elem) 158 + { 159 + struct patch_digest *pd = (struct patch_digest *)elem; 160 + u32 patch_id = *(u32 *)key; 161 + 162 + if (patch_id == pd->patch_id) 163 + return 0; 164 + else if (patch_id < pd->patch_id) 165 + return -1; 166 + else 167 + return 1; 168 + } 169 + 170 + static bool need_sha_check(u32 cur_rev) 171 + { 172 + switch (cur_rev >> 8) { 173 + case 0x80012: return cur_rev <= 0x800126f; break; 174 + case 0x80082: return cur_rev <= 0x800820f; break; 175 + case 0x83010: return cur_rev <= 0x830107c; break; 176 + case 0x86001: return cur_rev <= 0x860010e; break; 177 + case 0x86081: return cur_rev <= 0x8608108; break; 178 + case 0x87010: return cur_rev <= 0x8701034; break; 179 + case 0x8a000: return cur_rev <= 0x8a0000a; break; 180 + case 0xa0010: return cur_rev <= 0xa00107a; break; 181 + case 0xa0011: return cur_rev <= 0xa0011da; break; 182 + case 0xa0012: return cur_rev <= 0xa001243; break; 183 + case 0xa0082: return cur_rev <= 0xa00820e; break; 184 + case 0xa1011: return cur_rev <= 0xa101153; break; 185 + case 0xa1012: return cur_rev <= 0xa10124e; break; 186 + case 0xa1081: return cur_rev <= 0xa108109; break; 187 + case 0xa2010: return cur_rev <= 0xa20102f; break; 188 + case 
0xa2012: return cur_rev <= 0xa201212; break; 189 + case 0xa4041: return cur_rev <= 0xa404109; break; 190 + case 0xa5000: return cur_rev <= 0xa500013; break; 191 + case 0xa6012: return cur_rev <= 0xa60120a; break; 192 + case 0xa7041: return cur_rev <= 0xa704109; break; 193 + case 0xa7052: return cur_rev <= 0xa705208; break; 194 + case 0xa7080: return cur_rev <= 0xa708009; break; 195 + case 0xa70c0: return cur_rev <= 0xa70C009; break; 196 + case 0xaa001: return cur_rev <= 0xaa00116; break; 197 + case 0xaa002: return cur_rev <= 0xaa00218; break; 198 + default: break; 199 + } 200 + 201 + pr_info("You should not be seeing this. Please send the following couple of lines to x86-<at>-kernel.org\n"); 202 + pr_info("CPUID(1).EAX: 0x%x, current revision: 0x%x\n", bsp_cpuid_1_eax, cur_rev); 203 + return true; 204 + } 205 + 206 + static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsigned int len) 207 + { 208 + struct patch_digest *pd = NULL; 209 + u8 digest[SHA256_DIGEST_SIZE]; 210 + struct sha256_state s; 211 + int i; 212 + 213 + if (x86_family(bsp_cpuid_1_eax) < 0x17 || 214 + x86_family(bsp_cpuid_1_eax) > 0x19) 215 + return true; 216 + 217 + if (!need_sha_check(cur_rev)) 218 + return true; 219 + 220 + if (!sha_check) 221 + return true; 222 + 223 + pd = bsearch(&patch_id, phashes, ARRAY_SIZE(phashes), sizeof(struct patch_digest), cmp_id); 224 + if (!pd) { 225 + pr_err("No sha256 digest for patch ID: 0x%x found\n", patch_id); 226 + return false; 227 + } 228 + 229 + sha256_init(&s); 230 + sha256_update(&s, data, len); 231 + sha256_final(&s, digest); 232 + 233 + if (memcmp(digest, pd->sha256, sizeof(digest))) { 234 + pr_err("Patch 0x%x SHA256 digest mismatch!\n", patch_id); 235 + 236 + for (i = 0; i < SHA256_DIGEST_SIZE; i++) 237 + pr_cont("0x%x ", digest[i]); 238 + pr_info("\n"); 239 + 240 + return false; 241 + } 242 + 243 + return true; 244 + } 245 + 246 + static u32 get_patch_level(void) 247 + { 248 + u32 rev, dummy __always_unused; 249 + 250 + 
native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); 251 + 252 + return rev; 253 + } 254 + 152 255 static union cpuid_1_eax ucode_rev_to_cpuid(unsigned int val) 153 256 { 154 257 union zen_patch_rev p; ··· 357 246 * On success, @sh_psize returns the patch size according to the section header, 358 247 * to the caller. 359 248 */ 360 - static bool 361 - __verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize) 249 + static bool __verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize) 362 250 { 363 251 u32 p_type, p_size; 364 252 const u32 *hdr; ··· 594 484 } 595 485 } 596 486 597 - static bool __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize) 487 + static bool __apply_microcode_amd(struct microcode_amd *mc, u32 *cur_rev, 488 + unsigned int psize) 598 489 { 599 490 unsigned long p_addr = (unsigned long)&mc->hdr.data_code; 600 - u32 rev, dummy; 491 + 492 + if (!verify_sha256_digest(mc->hdr.patch_id, *cur_rev, (const u8 *)p_addr, psize)) 493 + return -1; 601 494 602 495 native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr); 603 496 ··· 618 505 } 619 506 620 507 /* verify patch application was successful */ 621 - native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); 622 - 623 - if (rev != mc->hdr.patch_id) 508 + *cur_rev = get_patch_level(); 509 + if (*cur_rev != mc->hdr.patch_id) 624 510 return false; 625 511 626 512 return true; 627 - } 628 - 629 - /* 630 - * Early load occurs before we can vmalloc(). So we look for the microcode 631 - * patch container file in initrd, traverse equivalent cpu table, look for a 632 - * matching microcode patch, and update, all in initrd memory in place. 633 - * When vmalloc() is available for use later -- on 64-bit during first AP load, 634 - * and on 32-bit during save_microcode_in_initrd_amd() -- we can call 635 - * load_microcode_amd() to save equivalent cpu table and microcode patches in 636 - * kernel heap memory. 637 - * 638 - * Returns true if container found (sets @desc), false otherwise. 
639 - */ 640 - static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size) 641 - { 642 - struct cont_desc desc = { 0 }; 643 - struct microcode_amd *mc; 644 - 645 - scan_containers(ucode, size, &desc); 646 - 647 - mc = desc.mc; 648 - if (!mc) 649 - return false; 650 - 651 - /* 652 - * Allow application of the same revision to pick up SMT-specific 653 - * changes even if the revision of the other SMT thread is already 654 - * up-to-date. 655 - */ 656 - if (old_rev > mc->hdr.patch_id) 657 - return false; 658 - 659 - return __apply_microcode_amd(mc, desc.psize); 660 513 } 661 514 662 515 static bool get_builtin_microcode(struct cpio_data *cp) ··· 662 583 return found; 663 584 } 664 585 586 + /* 587 + * Early load occurs before we can vmalloc(). So we look for the microcode 588 + * patch container file in initrd, traverse equivalent cpu table, look for a 589 + * matching microcode patch, and update, all in initrd memory in place. 590 + * When vmalloc() is available for use later -- on 64-bit during first AP load, 591 + * and on 32-bit during save_microcode_in_initrd() -- we can call 592 + * load_microcode_amd() to save equivalent cpu table and microcode patches in 593 + * kernel heap memory. 
594 + */ 665 595 void __init load_ucode_amd_bsp(struct early_load_data *ed, unsigned int cpuid_1_eax) 666 596 { 597 + struct cont_desc desc = { }; 598 + struct microcode_amd *mc; 667 599 struct cpio_data cp = { }; 668 - u32 dummy; 600 + char buf[4]; 601 + u32 rev; 602 + 603 + if (cmdline_find_option(boot_command_line, "microcode.amd_sha_check", buf, 4)) { 604 + if (!strncmp(buf, "off", 3)) { 605 + sha_check = false; 606 + pr_warn_once("It is a very very bad idea to disable the blobs SHA check!\n"); 607 + add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); 608 + } 609 + } 669 610 670 611 bsp_cpuid_1_eax = cpuid_1_eax; 671 612 672 - native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->old_rev, dummy); 613 + rev = get_patch_level(); 614 + ed->old_rev = rev; 673 615 674 616 /* Needed in load_microcode_amd() */ 675 617 ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax; ··· 698 598 if (!find_blobs_in_containers(&cp)) 699 599 return; 700 600 701 - if (early_apply_microcode(ed->old_rev, cp.data, cp.size)) 702 - native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy); 703 - } 704 - 705 - static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size); 706 - 707 - static int __init save_microcode_in_initrd(void) 708 - { 709 - unsigned int cpuid_1_eax = native_cpuid_eax(1); 710 - struct cpuinfo_x86 *c = &boot_cpu_data; 711 - struct cont_desc desc = { 0 }; 712 - enum ucode_state ret; 713 - struct cpio_data cp; 714 - 715 - if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) 716 - return 0; 717 - 718 - if (!find_blobs_in_containers(&cp)) 719 - return -EINVAL; 720 - 721 601 scan_containers(cp.data, cp.size, &desc); 722 - if (!desc.mc) 723 - return -EINVAL; 724 602 725 - ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size); 726 - if (ret > UCODE_UPDATED) 727 - return -EINVAL; 603 + mc = desc.mc; 604 + if (!mc) 605 + return; 728 606 729 - return 0; 607 + /* 608 + * Allow application of the same revision to pick up SMT-specific 609 + * changes 
even if the revision of the other SMT thread is already 610 + * up-to-date. 611 + */ 612 + if (ed->old_rev > mc->hdr.patch_id) 613 + return; 614 + 615 + if (__apply_microcode_amd(mc, &rev, desc.psize)) 616 + ed->new_rev = rev; 730 617 } 731 - early_initcall(save_microcode_in_initrd); 732 618 733 619 static inline bool patch_cpus_equivalent(struct ucode_patch *p, 734 620 struct ucode_patch *n, ··· 815 729 static struct ucode_patch *find_patch(unsigned int cpu) 816 730 { 817 731 struct ucode_cpu_info *uci = ucode_cpu_info + cpu; 818 - u32 rev, dummy __always_unused; 819 732 u16 equiv_id = 0; 820 733 821 - /* fetch rev if not populated yet: */ 822 - if (!uci->cpu_sig.rev) { 823 - rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); 824 - uci->cpu_sig.rev = rev; 825 - } 734 + uci->cpu_sig.rev = get_patch_level(); 826 735 827 736 if (x86_family(bsp_cpuid_1_eax) < 0x17) { 828 737 equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig); ··· 840 759 841 760 mc = p->data; 842 761 843 - rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy); 844 - 762 + rev = get_patch_level(); 845 763 if (rev < mc->hdr.patch_id) { 846 - if (__apply_microcode_amd(mc, p->size)) 847 - pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id); 764 + if (__apply_microcode_amd(mc, &rev, p->size)) 765 + pr_info_once("reload revision: 0x%08x\n", rev); 848 766 } 849 767 } 850 768 851 769 static int collect_cpu_info_amd(int cpu, struct cpu_signature *csig) 852 770 { 853 - struct cpuinfo_x86 *c = &cpu_data(cpu); 854 771 struct ucode_cpu_info *uci = ucode_cpu_info + cpu; 855 772 struct ucode_patch *p; 856 773 857 774 csig->sig = cpuid_eax(0x00000001); 858 - csig->rev = c->microcode; 775 + csig->rev = get_patch_level(); 859 776 860 777 /* 861 778 * a patch could have been loaded early, set uci->mc so that ··· 894 815 goto out; 895 816 } 896 817 897 - if (!__apply_microcode_amd(mc_amd, p->size)) { 818 + if (!__apply_microcode_amd(mc_amd, &rev, p->size)) { 898 819 pr_err("CPU%d: update failed for patch_level=0x%08x\n", 899 820 
cpu, mc_amd->hdr.patch_id); 900 821 return UCODE_ERROR; ··· 1016 937 } 1017 938 1018 939 /* Scan the blob in @data and add microcode patches to the cache. */ 1019 - static enum ucode_state __load_microcode_amd(u8 family, const u8 *data, 1020 - size_t size) 940 + static enum ucode_state __load_microcode_amd(u8 family, const u8 *data, size_t size) 1021 941 { 1022 942 u8 *fw = (u8 *)data; 1023 943 size_t offset; ··· 1074 996 if (ret != UCODE_OK) 1075 997 return ret; 1076 998 1077 - for_each_node(nid) { 999 + for_each_node_with_cpus(nid) { 1078 1000 cpu = cpumask_first(cpumask_of_node(nid)); 1079 1001 c = &cpu_data(cpu); 1080 1002 ··· 1090 1012 1091 1013 return ret; 1092 1014 } 1015 + 1016 + static int __init save_microcode_in_initrd(void) 1017 + { 1018 + unsigned int cpuid_1_eax = native_cpuid_eax(1); 1019 + struct cpuinfo_x86 *c = &boot_cpu_data; 1020 + struct cont_desc desc = { 0 }; 1021 + enum ucode_state ret; 1022 + struct cpio_data cp; 1023 + 1024 + if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10) 1025 + return 0; 1026 + 1027 + if (!find_blobs_in_containers(&cp)) 1028 + return -EINVAL; 1029 + 1030 + scan_containers(cp.data, cp.size, &desc); 1031 + if (!desc.mc) 1032 + return -EINVAL; 1033 + 1034 + ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size); 1035 + if (ret > UCODE_UPDATED) 1036 + return -EINVAL; 1037 + 1038 + return 0; 1039 + } 1040 + early_initcall(save_microcode_in_initrd); 1093 1041 1094 1042 /* 1095 1043 * AMD microcode firmware naming convention, up to family 15h they are in
+444
arch/x86/kernel/cpu/microcode/amd_shas.c
··· 1 + /* Keep 'em sorted. */ 2 + static const struct patch_digest phashes[] = { 3 + { 0x8001227, { 4 + 0x99,0xc0,0x9b,0x2b,0xcc,0x9f,0x52,0x1b, 5 + 0x1a,0x5f,0x1d,0x83,0xa1,0x6c,0xc4,0x46, 6 + 0xe2,0x6c,0xda,0x73,0xfb,0x2d,0x23,0xa8, 7 + 0x77,0xdc,0x15,0x31,0x33,0x4a,0x46,0x18, 8 + } 9 + }, 10 + { 0x8001250, { 11 + 0xc0,0x0b,0x6b,0x19,0xfd,0x5c,0x39,0x60, 12 + 0xd5,0xc3,0x57,0x46,0x54,0xe4,0xd1,0xaa, 13 + 0xa8,0xf7,0x1f,0xa8,0x6a,0x60,0x3e,0xe3, 14 + 0x27,0x39,0x8e,0x53,0x30,0xf8,0x49,0x19, 15 + } 16 + }, 17 + { 0x800126e, { 18 + 0xf3,0x8b,0x2b,0xb6,0x34,0xe3,0xc8,0x2c, 19 + 0xef,0xec,0x63,0x6d,0xc8,0x76,0x77,0xb3, 20 + 0x25,0x5a,0xb7,0x52,0x8c,0x83,0x26,0xe6, 21 + 0x4c,0xbe,0xbf,0xe9,0x7d,0x22,0x6a,0x43, 22 + } 23 + }, 24 + { 0x800126f, { 25 + 0x2b,0x5a,0xf2,0x9c,0xdd,0xd2,0x7f,0xec, 26 + 0xec,0x96,0x09,0x57,0xb0,0x96,0x29,0x8b, 27 + 0x2e,0x26,0x91,0xf0,0x49,0x33,0x42,0x18, 28 + 0xdd,0x4b,0x65,0x5a,0xd4,0x15,0x3d,0x33, 29 + } 30 + }, 31 + { 0x800820d, { 32 + 0x68,0x98,0x83,0xcd,0x22,0x0d,0xdd,0x59, 33 + 0x73,0x2c,0x5b,0x37,0x1f,0x84,0x0e,0x67, 34 + 0x96,0x43,0x83,0x0c,0x46,0x44,0xab,0x7c, 35 + 0x7b,0x65,0x9e,0x57,0xb5,0x90,0x4b,0x0e, 36 + } 37 + }, 38 + { 0x8301025, { 39 + 0xe4,0x7d,0xdb,0x1e,0x14,0xb4,0x5e,0x36, 40 + 0x8f,0x3e,0x48,0x88,0x3c,0x6d,0x76,0xa1, 41 + 0x59,0xc6,0xc0,0x72,0x42,0xdf,0x6c,0x30, 42 + 0x6f,0x0b,0x28,0x16,0x61,0xfc,0x79,0x77, 43 + } 44 + }, 45 + { 0x8301055, { 46 + 0x81,0x7b,0x99,0x1b,0xae,0x2d,0x4f,0x9a, 47 + 0xef,0x13,0xce,0xb5,0x10,0xaf,0x6a,0xea, 48 + 0xe5,0xb0,0x64,0x98,0x10,0x68,0x34,0x3b, 49 + 0x9d,0x7a,0xd6,0x22,0x77,0x5f,0xb3,0x5b, 50 + } 51 + }, 52 + { 0x8301072, { 53 + 0xcf,0x76,0xa7,0x1a,0x49,0xdf,0x2a,0x5e, 54 + 0x9e,0x40,0x70,0xe5,0xdd,0x8a,0xa8,0x28, 55 + 0x20,0xdc,0x91,0xd8,0x2c,0xa6,0xa0,0xb1, 56 + 0x2d,0x22,0x26,0x94,0x4b,0x40,0x85,0x30, 57 + } 58 + }, 59 + { 0x830107a, { 60 + 0x2a,0x65,0x8c,0x1a,0x5e,0x07,0x21,0x72, 61 + 0xdf,0x90,0xa6,0x51,0x37,0xd3,0x4b,0x34, 62 + 0xc4,0xda,0x03,0xe1,0x8a,0x6c,0xfb,0x20, 63 + 
0x04,0xb2,0x81,0x05,0xd4,0x87,0xf4,0x0a, 64 + } 65 + }, 66 + { 0x830107b, { 67 + 0xb3,0x43,0x13,0x63,0x56,0xc1,0x39,0xad, 68 + 0x10,0xa6,0x2b,0xcc,0x02,0xe6,0x76,0x2a, 69 + 0x1e,0x39,0x58,0x3e,0x23,0x6e,0xa4,0x04, 70 + 0x95,0xea,0xf9,0x6d,0xc2,0x8a,0x13,0x19, 71 + } 72 + }, 73 + { 0x830107c, { 74 + 0x21,0x64,0xde,0xfb,0x9f,0x68,0x96,0x47, 75 + 0x70,0x5c,0xe2,0x8f,0x18,0x52,0x6a,0xac, 76 + 0xa4,0xd2,0x2e,0xe0,0xde,0x68,0x66,0xc3, 77 + 0xeb,0x1e,0xd3,0x3f,0xbc,0x51,0x1d,0x38, 78 + } 79 + }, 80 + { 0x860010d, { 81 + 0x86,0xb6,0x15,0x83,0xbc,0x3b,0x9c,0xe0, 82 + 0xb3,0xef,0x1d,0x99,0x84,0x35,0x15,0xf7, 83 + 0x7c,0x2a,0xc6,0x42,0xdb,0x73,0x07,0x5c, 84 + 0x7d,0xc3,0x02,0xb5,0x43,0x06,0x5e,0xf8, 85 + } 86 + }, 87 + { 0x8608108, { 88 + 0x14,0xfe,0x57,0x86,0x49,0xc8,0x68,0xe2, 89 + 0x11,0xa3,0xcb,0x6e,0xff,0x6e,0xd5,0x38, 90 + 0xfe,0x89,0x1a,0xe0,0x67,0xbf,0xc4,0xcc, 91 + 0x1b,0x9f,0x84,0x77,0x2b,0x9f,0xaa,0xbd, 92 + } 93 + }, 94 + { 0x8701034, { 95 + 0xc3,0x14,0x09,0xa8,0x9c,0x3f,0x8d,0x83, 96 + 0x9b,0x4c,0xa5,0xb7,0x64,0x8b,0x91,0x5d, 97 + 0x85,0x6a,0x39,0x26,0x1e,0x14,0x41,0xa8, 98 + 0x75,0xea,0xa6,0xf9,0xc9,0xd1,0xea,0x2b, 99 + } 100 + }, 101 + { 0x8a00008, { 102 + 0xd7,0x2a,0x93,0xdc,0x05,0x2f,0xa5,0x6e, 103 + 0x0c,0x61,0x2c,0x07,0x9f,0x38,0xe9,0x8e, 104 + 0xef,0x7d,0x2a,0x05,0x4d,0x56,0xaf,0x72, 105 + 0xe7,0x56,0x47,0x6e,0x60,0x27,0xd5,0x8c, 106 + } 107 + }, 108 + { 0x8a0000a, { 109 + 0x73,0x31,0x26,0x22,0xd4,0xf9,0xee,0x3c, 110 + 0x07,0x06,0xe7,0xb9,0xad,0xd8,0x72,0x44, 111 + 0x33,0x31,0xaa,0x7d,0xc3,0x67,0x0e,0xdb, 112 + 0x47,0xb5,0xaa,0xbc,0xf5,0xbb,0xd9,0x20, 113 + } 114 + }, 115 + { 0xa00104c, { 116 + 0x3c,0x8a,0xfe,0x04,0x62,0xd8,0x6d,0xbe, 117 + 0xa7,0x14,0x28,0x64,0x75,0xc0,0xa3,0x76, 118 + 0xb7,0x92,0x0b,0x97,0x0a,0x8e,0x9c,0x5b, 119 + 0x1b,0xc8,0x9d,0x3a,0x1e,0x81,0x3d,0x3b, 120 + } 121 + }, 122 + { 0xa00104e, { 123 + 0xc4,0x35,0x82,0x67,0xd2,0x86,0xe5,0xb2, 124 + 0xfd,0x69,0x12,0x38,0xc8,0x77,0xba,0xe0, 125 + 0x70,0xf9,0x77,0x89,0x10,0xa6,0x74,0x4e, 126 + 
0x56,0x58,0x13,0xf5,0x84,0x70,0x28,0x0b, 127 + } 128 + }, 129 + { 0xa001053, { 130 + 0x92,0x0e,0xf4,0x69,0x10,0x3b,0xf9,0x9d, 131 + 0x31,0x1b,0xa6,0x99,0x08,0x7d,0xd7,0x25, 132 + 0x7e,0x1e,0x89,0xba,0x35,0x8d,0xac,0xcb, 133 + 0x3a,0xb4,0xdf,0x58,0x12,0xcf,0xc0,0xc3, 134 + } 135 + }, 136 + { 0xa001058, { 137 + 0x33,0x7d,0xa9,0xb5,0x4e,0x62,0x13,0x36, 138 + 0xef,0x66,0xc9,0xbd,0x0a,0xa6,0x3b,0x19, 139 + 0xcb,0xf5,0xc2,0xc3,0x55,0x47,0x20,0xec, 140 + 0x1f,0x7b,0xa1,0x44,0x0e,0x8e,0xa4,0xb2, 141 + } 142 + }, 143 + { 0xa001075, { 144 + 0x39,0x02,0x82,0xd0,0x7c,0x26,0x43,0xe9, 145 + 0x26,0xa3,0xd9,0x96,0xf7,0x30,0x13,0x0a, 146 + 0x8a,0x0e,0xac,0xe7,0x1d,0xdc,0xe2,0x0f, 147 + 0xcb,0x9e,0x8d,0xbc,0xd2,0xa2,0x44,0xe0, 148 + } 149 + }, 150 + { 0xa001078, { 151 + 0x2d,0x67,0xc7,0x35,0xca,0xef,0x2f,0x25, 152 + 0x4c,0x45,0x93,0x3f,0x36,0x01,0x8c,0xce, 153 + 0xa8,0x5b,0x07,0xd3,0xc1,0x35,0x3c,0x04, 154 + 0x20,0xa2,0xfc,0xdc,0xe6,0xce,0x26,0x3e, 155 + } 156 + }, 157 + { 0xa001079, { 158 + 0x43,0xe2,0x05,0x9c,0xfd,0xb7,0x5b,0xeb, 159 + 0x5b,0xe9,0xeb,0x3b,0x96,0xf4,0xe4,0x93, 160 + 0x73,0x45,0x3e,0xac,0x8d,0x3b,0xe4,0xdb, 161 + 0x10,0x31,0xc1,0xe4,0xa2,0xd0,0x5a,0x8a, 162 + } 163 + }, 164 + { 0xa00107a, { 165 + 0x5f,0x92,0xca,0xff,0xc3,0x59,0x22,0x5f, 166 + 0x02,0xa0,0x91,0x3b,0x4a,0x45,0x10,0xfd, 167 + 0x19,0xe1,0x8a,0x6d,0x9a,0x92,0xc1,0x3f, 168 + 0x75,0x78,0xac,0x78,0x03,0x1d,0xdb,0x18, 169 + } 170 + }, 171 + { 0xa001143, { 172 + 0x56,0xca,0xf7,0x43,0x8a,0x4c,0x46,0x80, 173 + 0xec,0xde,0xe5,0x9c,0x50,0x84,0x9a,0x42, 174 + 0x27,0xe5,0x51,0x84,0x8f,0x19,0xc0,0x8d, 175 + 0x0c,0x25,0xb4,0xb0,0x8f,0x10,0xf3,0xf8, 176 + } 177 + }, 178 + { 0xa001144, { 179 + 0x42,0xd5,0x9b,0xa7,0xd6,0x15,0x29,0x41, 180 + 0x61,0xc4,0x72,0x3f,0xf3,0x06,0x78,0x4b, 181 + 0x65,0xf3,0x0e,0xfa,0x9c,0x87,0xde,0x25, 182 + 0xbd,0xb3,0x9a,0xf4,0x75,0x13,0x53,0xdc, 183 + } 184 + }, 185 + { 0xa00115d, { 186 + 0xd4,0xc4,0x49,0x36,0x89,0x0b,0x47,0xdd, 187 + 0xfb,0x2f,0x88,0x3b,0x5f,0xf2,0x8e,0x75, 188 + 
0xc6,0x6c,0x37,0x5a,0x90,0x25,0x94,0x3e, 189 + 0x36,0x9c,0xae,0x02,0x38,0x6c,0xf5,0x05, 190 + } 191 + }, 192 + { 0xa001173, { 193 + 0x28,0xbb,0x9b,0xd1,0xa0,0xa0,0x7e,0x3a, 194 + 0x59,0x20,0xc0,0xa9,0xb2,0x5c,0xc3,0x35, 195 + 0x53,0x89,0xe1,0x4c,0x93,0x2f,0x1d,0xc3, 196 + 0xe5,0xf7,0xf3,0xc8,0x9b,0x61,0xaa,0x9e, 197 + } 198 + }, 199 + { 0xa0011a8, { 200 + 0x97,0xc6,0x16,0x65,0x99,0xa4,0x85,0x3b, 201 + 0xf6,0xce,0xaa,0x49,0x4a,0x3a,0xc5,0xb6, 202 + 0x78,0x25,0xbc,0x53,0xaf,0x5d,0xcf,0xf4, 203 + 0x23,0x12,0xbb,0xb1,0xbc,0x8a,0x02,0x2e, 204 + } 205 + }, 206 + { 0xa0011ce, { 207 + 0xcf,0x1c,0x90,0xa3,0x85,0x0a,0xbf,0x71, 208 + 0x94,0x0e,0x80,0x86,0x85,0x4f,0xd7,0x86, 209 + 0xae,0x38,0x23,0x28,0x2b,0x35,0x9b,0x4e, 210 + 0xfe,0xb8,0xcd,0x3d,0x3d,0x39,0xc9,0x6a, 211 + } 212 + }, 213 + { 0xa0011d1, { 214 + 0xdf,0x0e,0xca,0xde,0xf6,0xce,0x5c,0x1e, 215 + 0x4c,0xec,0xd7,0x71,0x83,0xcc,0xa8,0x09, 216 + 0xc7,0xc5,0xfe,0xb2,0xf7,0x05,0xd2,0xc5, 217 + 0x12,0xdd,0xe4,0xf3,0x92,0x1c,0x3d,0xb8, 218 + } 219 + }, 220 + { 0xa0011d3, { 221 + 0x91,0xe6,0x10,0xd7,0x57,0xb0,0x95,0x0b, 222 + 0x9a,0x24,0xee,0xf7,0xcf,0x56,0xc1,0xa6, 223 + 0x4a,0x52,0x7d,0x5f,0x9f,0xdf,0xf6,0x00, 224 + 0x65,0xf7,0xea,0xe8,0x2a,0x88,0xe2,0x26, 225 + } 226 + }, 227 + { 0xa0011d5, { 228 + 0xed,0x69,0x89,0xf4,0xeb,0x64,0xc2,0x13, 229 + 0xe0,0x51,0x1f,0x03,0x26,0x52,0x7d,0xb7, 230 + 0x93,0x5d,0x65,0xca,0xb8,0x12,0x1d,0x62, 231 + 0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21, 232 + } 233 + }, 234 + { 0xa001223, { 235 + 0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8, 236 + 0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4, 237 + 0x83,0x75,0x94,0xdd,0xeb,0x7e,0xb7,0x15, 238 + 0x8e,0x3b,0x50,0x29,0x8a,0x9c,0xcc,0x45, 239 + } 240 + }, 241 + { 0xa001224, { 242 + 0x0e,0x0c,0xdf,0xb4,0x89,0xee,0x35,0x25, 243 + 0xdd,0x9e,0xdb,0xc0,0x69,0x83,0x0a,0xad, 244 + 0x26,0xa9,0xaa,0x9d,0xfc,0x3c,0xea,0xf9, 245 + 0x6c,0xdc,0xd5,0x6d,0x8b,0x6e,0x85,0x4a, 246 + } 247 + }, 248 + { 0xa001227, { 249 + 0xab,0xc6,0x00,0x69,0x4b,0x50,0x87,0xad, 250 + 
0x5f,0x0e,0x8b,0xea,0x57,0x38,0xce,0x1d, 251 + 0x0f,0x75,0x26,0x02,0xf6,0xd6,0x96,0xe9, 252 + 0x87,0xb9,0xd6,0x20,0x27,0x7c,0xd2,0xe0, 253 + } 254 + }, 255 + { 0xa001229, { 256 + 0x7f,0x49,0x49,0x48,0x46,0xa5,0x50,0xa6, 257 + 0x28,0x89,0x98,0xe2,0x9e,0xb4,0x7f,0x75, 258 + 0x33,0xa7,0x04,0x02,0xe4,0x82,0xbf,0xb4, 259 + 0xa5,0x3a,0xba,0x24,0x8d,0x31,0x10,0x1d, 260 + } 261 + }, 262 + { 0xa00122e, { 263 + 0x56,0x94,0xa9,0x5d,0x06,0x68,0xfe,0xaf, 264 + 0xdf,0x7a,0xff,0x2d,0xdf,0x74,0x0f,0x15, 265 + 0x66,0xfb,0x00,0xb5,0x51,0x97,0x9b,0xfa, 266 + 0xcb,0x79,0x85,0x46,0x25,0xb4,0xd2,0x10, 267 + } 268 + }, 269 + { 0xa001231, { 270 + 0x0b,0x46,0xa5,0xfc,0x18,0x15,0xa0,0x9e, 271 + 0xa6,0xdc,0xb7,0xff,0x17,0xf7,0x30,0x64, 272 + 0xd4,0xda,0x9e,0x1b,0xc3,0xfc,0x02,0x3b, 273 + 0xe2,0xc6,0x0e,0x41,0x54,0xb5,0x18,0xdd, 274 + } 275 + }, 276 + { 0xa001234, { 277 + 0x88,0x8d,0xed,0xab,0xb5,0xbd,0x4e,0xf7, 278 + 0x7f,0xd4,0x0e,0x95,0x34,0x91,0xff,0xcc, 279 + 0xfb,0x2a,0xcd,0xf7,0xd5,0xdb,0x4c,0x9b, 280 + 0xd6,0x2e,0x73,0x50,0x8f,0x83,0x79,0x1a, 281 + } 282 + }, 283 + { 0xa001236, { 284 + 0x3d,0x30,0x00,0xb9,0x71,0xba,0x87,0x78, 285 + 0xa8,0x43,0x55,0xc4,0x26,0x59,0xcf,0x9d, 286 + 0x93,0xce,0x64,0x0e,0x8b,0x72,0x11,0x8b, 287 + 0xa3,0x8f,0x51,0xe9,0xca,0x98,0xaa,0x25, 288 + } 289 + }, 290 + { 0xa001238, { 291 + 0x72,0xf7,0x4b,0x0c,0x7d,0x58,0x65,0xcc, 292 + 0x00,0xcc,0x57,0x16,0x68,0x16,0xf8,0x2a, 293 + 0x1b,0xb3,0x8b,0xe1,0xb6,0x83,0x8c,0x7e, 294 + 0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59, 295 + } 296 + }, 297 + { 0xa00820c, { 298 + 0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3, 299 + 0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63, 300 + 0xf1,0x8c,0x88,0x45,0xd7,0x82,0x80,0xd1, 301 + 0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2, 302 + } 303 + }, 304 + { 0xa10113e, { 305 + 0x05,0x3c,0x66,0xd7,0xa9,0x5a,0x33,0x10, 306 + 0x1b,0xf8,0x9c,0x8f,0xed,0xfc,0xa7,0xa0, 307 + 0x15,0xe3,0x3f,0x4b,0x1d,0x0d,0x0a,0xd5, 308 + 0xfa,0x90,0xc4,0xed,0x9d,0x90,0xaf,0x53, 309 + } 310 + }, 311 + { 0xa101144, { 312 + 
0xb3,0x0b,0x26,0x9a,0xf8,0x7c,0x02,0x26, 313 + 0x35,0x84,0x53,0xa4,0xd3,0x2c,0x7c,0x09, 314 + 0x68,0x7b,0x96,0xb6,0x93,0xef,0xde,0xbc, 315 + 0xfd,0x4b,0x15,0xd2,0x81,0xd3,0x51,0x47, 316 + } 317 + }, 318 + { 0xa101148, { 319 + 0x20,0xd5,0x6f,0x40,0x4a,0xf6,0x48,0x90, 320 + 0xc2,0x93,0x9a,0xc2,0xfd,0xac,0xef,0x4f, 321 + 0xfa,0xc0,0x3d,0x92,0x3c,0x6d,0x01,0x08, 322 + 0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4, 323 + } 324 + }, 325 + { 0xa10123e, { 326 + 0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18, 327 + 0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d, 328 + 0x1d,0x13,0x53,0x63,0xfe,0x42,0x6f,0xfc, 329 + 0x19,0x0f,0xf1,0xfc,0xa7,0xdd,0x89,0x1b, 330 + } 331 + }, 332 + { 0xa101244, { 333 + 0x71,0x56,0xb5,0x9f,0x21,0xbf,0xb3,0x3c, 334 + 0x8c,0xd7,0x36,0xd0,0x34,0x52,0x1b,0xb1, 335 + 0x46,0x2f,0x04,0xf0,0x37,0xd8,0x1e,0x72, 336 + 0x24,0xa2,0x80,0x84,0x83,0x65,0x84,0xc0, 337 + } 338 + }, 339 + { 0xa101248, { 340 + 0xed,0x3b,0x95,0xa6,0x68,0xa7,0x77,0x3e, 341 + 0xfc,0x17,0x26,0xe2,0x7b,0xd5,0x56,0x22, 342 + 0x2c,0x1d,0xef,0xeb,0x56,0xdd,0xba,0x6e, 343 + 0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75, 344 + } 345 + }, 346 + { 0xa108108, { 347 + 0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9, 348 + 0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6, 349 + 0xf5,0xd4,0x3f,0x7b,0x14,0xd5,0x60,0x2c, 350 + 0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16, 351 + } 352 + }, 353 + { 0xa20102d, { 354 + 0xf9,0x6e,0xf2,0x32,0xd3,0x0f,0x5f,0x11, 355 + 0x59,0xa1,0xfe,0xcc,0xcd,0x9b,0x42,0x89, 356 + 0x8b,0x89,0x2f,0xb5,0xbb,0x82,0xef,0x23, 357 + 0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4, 358 + } 359 + }, 360 + { 0xa201210, { 361 + 0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe, 362 + 0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9, 363 + 0x6d,0x3d,0x0e,0x6b,0xa7,0xac,0xe3,0x68, 364 + 0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41, 365 + } 366 + }, 367 + { 0xa404107, { 368 + 0xbb,0x04,0x4e,0x47,0xdd,0x5e,0x26,0x45, 369 + 0x1a,0xc9,0x56,0x24,0xa4,0x4c,0x82,0xb0, 370 + 0x8b,0x0d,0x9f,0xf9,0x3a,0xdf,0xc6,0x81, 371 + 0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99, 372 + } 
373 + }, 374 + { 0xa500011, { 375 + 0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4, 376 + 0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1, 377 + 0xd7,0x5b,0x65,0x3a,0x7d,0xab,0xdf,0xa2, 378 + 0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74, 379 + } 380 + }, 381 + { 0xa601209, { 382 + 0x66,0x48,0xd4,0x09,0x05,0xcb,0x29,0x32, 383 + 0x66,0xb7,0x9a,0x76,0xcd,0x11,0xf3,0x30, 384 + 0x15,0x86,0xcc,0x5d,0x97,0x0f,0xc0,0x46, 385 + 0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d, 386 + } 387 + }, 388 + { 0xa704107, { 389 + 0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6, 390 + 0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93, 391 + 0x2a,0xad,0x8e,0x6b,0xea,0x9b,0xb7,0xc2, 392 + 0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39, 393 + } 394 + }, 395 + { 0xa705206, { 396 + 0x8d,0xc0,0x76,0xbd,0x58,0x9f,0x8f,0xa4, 397 + 0x12,0x9d,0x21,0xfb,0x48,0x21,0xbc,0xe7, 398 + 0x67,0x6f,0x04,0x18,0xae,0x20,0x87,0x4b, 399 + 0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc, 400 + } 401 + }, 402 + { 0xa708007, { 403 + 0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3, 404 + 0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2, 405 + 0x07,0xaa,0x3a,0xe0,0x57,0x13,0x72,0x80, 406 + 0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93, 407 + } 408 + }, 409 + { 0xa70c005, { 410 + 0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b, 411 + 0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f, 412 + 0x1f,0x1f,0xf1,0x97,0xeb,0xfe,0x56,0x55, 413 + 0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13, 414 + } 415 + }, 416 + { 0xaa00116, { 417 + 0xe8,0x4c,0x2c,0x88,0xa1,0xac,0x24,0x63, 418 + 0x65,0xe5,0xaa,0x2d,0x16,0xa9,0xc3,0xf5, 419 + 0xfe,0x1d,0x5e,0x65,0xc7,0xaa,0x92,0x4d, 420 + 0x91,0xee,0x76,0xbb,0x4c,0x66,0x78,0xc9, 421 + } 422 + }, 423 + { 0xaa00212, { 424 + 0xbd,0x57,0x5d,0x0a,0x0a,0x30,0xc1,0x75, 425 + 0x95,0x58,0x5e,0x93,0x02,0x28,0x43,0x71, 426 + 0xed,0x42,0x29,0xc8,0xec,0x34,0x2b,0xb2, 427 + 0x1a,0x65,0x4b,0xfe,0x07,0x0f,0x34,0xa1, 428 + } 429 + }, 430 + { 0xaa00213, { 431 + 0xed,0x58,0xb7,0x76,0x81,0x7f,0xd9,0x3a, 432 + 0x1a,0xff,0x8b,0x34,0xb8,0x4a,0x99,0x0f, 433 + 0x28,0x49,0x6c,0x56,0x2b,0xdc,0xb7,0xed, 434 + 
0x96,0xd5,0x9d,0xc1,0x7a,0xd4,0x51,0x9b, 435 + } 436 + }, 437 + { 0xaa00215, { 438 + 0x55,0xd3,0x28,0xcb,0x87,0xa9,0x32,0xe9, 439 + 0x4e,0x85,0x4b,0x7c,0x6b,0xd5,0x7c,0xd4, 440 + 0x1b,0x51,0x71,0x3a,0x0e,0x0b,0xdc,0x9b, 441 + 0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef, 442 + } 443 + }, 444 + };
-2
arch/x86/kernel/cpu/microcode/internal.h
··· 100 100 #ifdef CONFIG_CPU_SUP_AMD 101 101 void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family); 102 102 void load_ucode_amd_ap(unsigned int family); 103 - int save_microcode_in_initrd_amd(unsigned int family); 104 103 void reload_ucode_amd(unsigned int cpu); 105 104 struct microcode_ops *init_amd_microcode(void); 106 105 void exit_amd_microcode(void); 107 106 #else /* CONFIG_CPU_SUP_AMD */ 108 107 static inline void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family) { } 109 108 static inline void load_ucode_amd_ap(unsigned int family) { } 110 - static inline int save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; } 111 109 static inline void reload_ucode_amd(unsigned int cpu) { } 112 110 static inline struct microcode_ops *init_amd_microcode(void) { return NULL; } 113 111 static inline void exit_amd_microcode(void) { }
+7 -3
arch/x86/kernel/cpu/sgx/driver.c
···
150 150 u64 xfrm_mask;
151 151 int ret;
152 152
153 - if (!cpu_feature_enabled(X86_FEATURE_SGX_LC))
153 + if (!cpu_feature_enabled(X86_FEATURE_SGX_LC)) {
154 + pr_info("SGX disabled: SGX launch control CPU feature is not available, /dev/sgx_enclave disabled.\n");
154 155 return -ENODEV;
156 + }
155 157
156 158 cpuid_count(SGX_CPUID, 0, &eax, &ebx, &ecx, &edx);
157 159
158 160 if (!(eax & 1)) {
159 - pr_err("SGX disabled: SGX1 instruction support not available.\n");
161 + pr_info("SGX disabled: SGX1 instruction support not available, /dev/sgx_enclave disabled.\n");
160 162 return -ENODEV;
161 163 }
162 164
···
175 173 }
176 174
177 175 ret = misc_register(&sgx_dev_enclave);
178 - if (ret)
176 + if (ret) {
177 + pr_info("SGX disabled: Unable to register the /dev/sgx_enclave driver (%d).\n", ret);
179 178 return ret;
179 + }
180 180
181 181 return 0;
182 182 }
+7
arch/x86/kernel/cpu/sgx/ioctl.c
···
64 64 struct file *backing;
65 65 long ret;
66 66
67 + /*
68 + * ECREATE would detect this too, but checking here also ensures
69 + * that the 'encl_size' calculations below can never overflow.
70 + */
71 + if (!is_power_of_2(secs->size))
72 + return -EINVAL;
73 +
67 74 va_page = sgx_encl_grow(encl, true);
68 75 if (IS_ERR(va_page))
69 76 return PTR_ERR(va_page);
+4
arch/x86/kernel/cpu/vmware.c
···
26 26 #include <linux/export.h>
27 27 #include <linux/clocksource.h>
28 28 #include <linux/cpu.h>
29 + #include <linux/efi.h>
29 30 #include <linux/reboot.h>
30 31 #include <linux/static_call.h>
31 32 #include <asm/div64.h>
···
429 428 } else {
430 429 pr_warn("Failed to get TSC freq from the hypervisor\n");
431 430 }
431 +
432 + if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !efi_enabled(EFI_BOOT))
433 + x86_init.mpparse.find_mptable = mpparse_find_mptable;
432 434
433 435 vmware_paravirt_ops_setup();
434 436
+1 -1
arch/x86/kvm/cpuid.c
··· 1763 1763 1764 1764 entry->ecx = entry->edx = 0; 1765 1765 if (!enable_pmu || !kvm_cpu_cap_has(X86_FEATURE_PERFMON_V2)) { 1766 - entry->eax = entry->ebx; 1766 + entry->eax = entry->ebx = 0; 1767 1767 break; 1768 1768 } 1769 1769
+17 -7
arch/x86/kvm/svm/sev.c
··· 4590 4590 4591 4591 void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa) 4592 4592 { 4593 + struct kvm *kvm = svm->vcpu.kvm; 4594 + 4593 4595 /* 4594 4596 * All host state for SEV-ES guests is categorized into three swap types 4595 4597 * based on how it is handled by hardware during a world switch: ··· 4615 4613 4616 4614 /* 4617 4615 * If DebugSwap is enabled, debug registers are loaded but NOT saved by 4618 - * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU both 4619 - * saves and loads debug registers (Type-A). 4616 + * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU does 4617 + * not save or load debug registers. Sadly, KVM can't prevent SNP 4618 + * guests from lying about DebugSwap on secondary vCPUs, i.e. the 4619 + * SEV_FEATURES provided at "AP Create" isn't guaranteed to match what 4620 + * the guest has actually enabled (or not!) in the VMSA. 4621 + * 4622 + * If DebugSwap is *possible*, save the masks so that they're restored 4623 + * if the guest enables DebugSwap. But for the DRs themselves, do NOT 4624 + * rely on the CPU to restore the host values; KVM will restore them as 4625 + * needed in common code, via hw_breakpoint_restore(). Note, KVM does 4626 + * NOT support virtualizing Breakpoint Extensions, i.e. the mask MSRs 4627 + * don't need to be restored per se, KVM just needs to ensure they are 4628 + * loaded with the correct values *if* the CPU writes the MSRs. 4620 4629 */ 4621 - if (sev_vcpu_has_debug_swap(svm)) { 4622 - hostsa->dr0 = native_get_debugreg(0); 4623 - hostsa->dr1 = native_get_debugreg(1); 4624 - hostsa->dr2 = native_get_debugreg(2); 4625 - hostsa->dr3 = native_get_debugreg(3); 4630 + if (sev_vcpu_has_debug_swap(svm) || 4631 + (sev_snp_guest(kvm) && cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP))) { 4626 4632 hostsa->dr0_addr_mask = amd_get_dr_addr_mask(0); 4627 4633 hostsa->dr1_addr_mask = amd_get_dr_addr_mask(1); 4628 4634 hostsa->dr2_addr_mask = amd_get_dr_addr_mask(2);
+49
arch/x86/kvm/svm/svm.c
··· 3165 3165 kvm_pr_unimpl_wrmsr(vcpu, ecx, data); 3166 3166 break; 3167 3167 } 3168 + 3169 + /* 3170 + * AMD changed the architectural behavior of bits 5:2. On CPUs 3171 + * without BusLockTrap, bits 5:2 control "external pins", but 3172 + * on CPUs that support BusLockDetect, bit 2 enables BusLockTrap 3173 + * and bits 5:3 are reserved-to-zero. Sadly, old KVM allowed 3174 + * the guest to set bits 5:2 despite not actually virtualizing 3175 + * Performance-Monitoring/Breakpoint external pins. Drop bits 3176 + * 5:2 for backwards compatibility. 3177 + */ 3178 + data &= ~GENMASK(5, 2); 3179 + 3180 + /* 3181 + * Suppress BTF as KVM doesn't virtualize BTF, but there's no 3182 + * way to communicate lack of support to the guest. 3183 + */ 3184 + if (data & DEBUGCTLMSR_BTF) { 3185 + kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data); 3186 + data &= ~DEBUGCTLMSR_BTF; 3187 + } 3188 + 3168 3189 if (data & DEBUGCTL_RESERVED_BITS) 3169 3190 return 1; 3170 3191 ··· 4210 4189 4211 4190 guest_state_enter_irqoff(); 4212 4191 4192 + /* 4193 + * Set RFLAGS.IF prior to VMRUN, as the host's RFLAGS.IF at the time of 4194 + * VMRUN controls whether or not physical IRQs are masked (KVM always 4195 + * runs with V_INTR_MASKING_MASK). Toggle RFLAGS.IF here to avoid the 4196 + * temptation to do STI+VMRUN+CLI, as AMD CPUs bleed the STI shadow 4197 + * into guest state if delivery of an event during VMRUN triggers a 4198 + * #VMEXIT, and the guest_state transitions already tell lockdep that 4199 + * IRQs are being enabled/disabled. Note! GIF=0 for the entirety of 4200 + * this path, so IRQs aren't actually unmasked while running host code. 4201 + */ 4202 + raw_local_irq_enable(); 4203 + 4213 4204 amd_clear_divider(); 4214 4205 4215 4206 if (sev_es_guest(vcpu->kvm)) ··· 4229 4196 sev_es_host_save_area(sd)); 4230 4197 else 4231 4198 __svm_vcpu_run(svm, spec_ctrl_intercepted); 4199 + 4200 + raw_local_irq_disable(); 4232 4201 4233 4202 guest_state_exit_irqoff(); 4234 4203 } ··· 4288 4253 clgi(); 4289 4254 kvm_load_guest_xsave_state(vcpu); 4290 4255 4256 + /* 4257 + * Hardware only context switches DEBUGCTL if LBR virtualization is 4258 + * enabled. Manually load DEBUGCTL if necessary (and restore it after 4259 + * VM-Exit), as running with the host's DEBUGCTL can negatively affect 4260 + * guest state and can even be fatal, e.g. due to Bus Lock Detect. 4261 + */ 4262 + if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) && 4263 + vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl) 4264 + update_debugctlmsr(svm->vmcb->save.dbgctl); 4265 + 4291 4266 kvm_wait_lapic_expire(vcpu); 4292 4267 4293 4268 /* ··· 4324 4279 4325 4280 if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI)) 4326 4281 kvm_before_interrupt(vcpu, KVM_HANDLING_NMI); 4282 + 4283 + if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) && 4284 + vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl) 4285 + update_debugctlmsr(vcpu->arch.host_debugctl); 4327 4286 4328 4287 kvm_load_host_xsave_state(vcpu); 4329 4288 stgi();
+1 -1
arch/x86/kvm/svm/svm.h
··· 584 584 /* svm.c */ 585 585 #define MSR_INVALID 0xffffffffU 586 586 587 - #define DEBUGCTL_RESERVED_BITS (~(0x3fULL)) 587 + #define DEBUGCTL_RESERVED_BITS (~DEBUGCTLMSR_LBR) 588 588 589 589 extern bool dump_invalid_vmcb; 590 590
+1 -9
arch/x86/kvm/svm/vmenter.S
··· 170 170 mov VCPU_RDI(%_ASM_DI), %_ASM_DI 171 171 172 172 /* Enter guest mode */ 173 - sti 174 - 175 173 3: vmrun %_ASM_AX 176 174 4: 177 - cli 178 - 179 175 /* Pop @svm to RAX while it's the only available register. */ 180 176 pop %_ASM_AX 181 177 ··· 336 340 mov KVM_VMCB_pa(%rax), %rax 337 341 338 342 /* Enter guest mode */ 339 - sti 340 - 341 343 1: vmrun %rax 342 - 343 - 2: cli 344 - 344 + 2: 345 345 /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ 346 346 FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT 347 347
+2 -6
arch/x86/kvm/vmx/vmx.c
··· 1514 1514 */ 1515 1515 void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu) 1516 1516 { 1517 - struct vcpu_vmx *vmx = to_vmx(vcpu); 1518 - 1519 1517 if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm)) 1520 1518 shrink_ple_window(vcpu); 1521 1519 1522 1520 vmx_vcpu_load_vmcs(vcpu, cpu, NULL); 1523 1521 1524 1522 vmx_vcpu_pi_load(vcpu, cpu); 1525 - 1526 - vmx->host_debugctlmsr = get_debugctlmsr(); 1527 1523 } 1528 1524 1529 1525 void vmx_vcpu_put(struct kvm_vcpu *vcpu) ··· 7454 7458 } 7455 7459 7456 7460 /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */ 7457 - if (vmx->host_debugctlmsr) 7458 - update_debugctlmsr(vmx->host_debugctlmsr); 7461 + if (vcpu->arch.host_debugctl) 7462 + update_debugctlmsr(vcpu->arch.host_debugctl); 7459 7463 7460 7464 #ifndef CONFIG_X86_64 7461 7465 /*
-2
arch/x86/kvm/vmx/vmx.h
··· 340 340 /* apic deadline value in host tsc */ 341 341 u64 hv_deadline_tsc; 342 342 343 - unsigned long host_debugctlmsr; 344 - 345 343 /* 346 344 * Only bits masked by msr_ia32_feature_control_valid_bits can be set in 347 345 * msr_ia32_feature_control. FEAT_CTL_LOCKED is always included
+2
arch/x86/kvm/x86.c
··· 10968 10968 set_debugreg(0, 7); 10969 10969 } 10970 10970 10971 + vcpu->arch.host_debugctl = get_debugctlmsr(); 10972 + 10971 10973 guest_timing_enter_irqoff(); 10972 10974 10973 10975 for (;;) {
+1 -1
block/partitions/efi.c
··· 682 682 out[size] = 0; 683 683 684 684 while (i < size) { 685 - u8 c = le16_to_cpu(in[i]) & 0xff; 685 + u8 c = le16_to_cpu(in[i]) & 0x7f; 686 686 687 687 if (c && !isprint(c)) 688 688 c = '!';
+73 -21
drivers/acpi/platform_profile.c
··· 21 21 struct device dev; 22 22 int minor; 23 23 unsigned long choices[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)]; 24 + unsigned long hidden_choices[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)]; 24 25 const struct platform_profile_ops *ops; 26 + }; 27 + 28 + struct aggregate_choices_data { 29 + unsigned long aggregate[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)]; 30 + int count; 25 31 }; 26 32 27 33 static const char * const profile_names[] = { ··· 79 73 80 74 lockdep_assert_held(&profile_lock); 81 75 handler = to_pprof_handler(dev); 82 - if (!test_bit(*bit, handler->choices)) 76 + if (!test_bit(*bit, handler->choices) && !test_bit(*bit, handler->hidden_choices)) 83 77 return -EOPNOTSUPP; 84 78 85 79 return handler->ops->profile_set(dev, *bit); ··· 245 239 /** 246 240 * _aggregate_choices - Aggregate the available profile choices 247 241 * @dev: The device 248 - * @data: The available profile choices 242 + * @arg: struct aggregate_choices_data 249 243 * 250 244 * Return: 0 on success, -errno on failure 251 245 */ 252 - static int _aggregate_choices(struct device *dev, void *data) 246 + static int _aggregate_choices(struct device *dev, void *arg) 253 247 { 248 + unsigned long tmp[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)]; 249 + struct aggregate_choices_data *data = arg; 254 250 struct platform_profile_handler *handler; 255 - unsigned long *aggregate = data; 256 251 257 252 lockdep_assert_held(&profile_lock); 258 253 handler = to_pprof_handler(dev); 259 - if (test_bit(PLATFORM_PROFILE_LAST, aggregate)) 260 - bitmap_copy(aggregate, handler->choices, PLATFORM_PROFILE_LAST); 254 + bitmap_or(tmp, handler->choices, handler->hidden_choices, PLATFORM_PROFILE_LAST); 255 + if (test_bit(PLATFORM_PROFILE_LAST, data->aggregate)) 256 + bitmap_copy(data->aggregate, tmp, PLATFORM_PROFILE_LAST); 261 257 else 262 - bitmap_and(aggregate, handler->choices, aggregate, PLATFORM_PROFILE_LAST); 258 + bitmap_and(data->aggregate, tmp, data->aggregate, PLATFORM_PROFILE_LAST); 259 + data->count++; 260 + 261 + return 0; 262 + } 263 + 264 + /** 265 + * _remove_hidden_choices - Remove hidden choices from aggregate data 266 + * @dev: The device 267 + * @arg: struct aggregate_choices_data 268 + * 269 + * Return: 0 on success, -errno on failure 270 + */ 271 + static int _remove_hidden_choices(struct device *dev, void *arg) 272 + { 273 + struct aggregate_choices_data *data = arg; 274 + struct platform_profile_handler *handler; 275 + 276 + lockdep_assert_held(&profile_lock); 277 + handler = to_pprof_handler(dev); 278 + bitmap_andnot(data->aggregate, handler->choices, 279 + handler->hidden_choices, PLATFORM_PROFILE_LAST); 263 280 264 281 return 0; 265 282 } ··· 299 270 struct device_attribute *attr, 300 271 char *buf) 301 272 { 302 - unsigned long aggregate[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)]; 273 + struct aggregate_choices_data data = { 274 + .aggregate = { [0 ... BITS_TO_LONGS(PLATFORM_PROFILE_LAST) - 1] = ~0UL }, 275 + .count = 0, 276 + }; 303 277 int err; 304 278 305 - set_bit(PLATFORM_PROFILE_LAST, aggregate); 279 + set_bit(PLATFORM_PROFILE_LAST, data.aggregate); 306 280 scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &profile_lock) { 307 281 err = class_for_each_device(&platform_profile_class, NULL, 308 - aggregate, _aggregate_choices); 282 + &data, _aggregate_choices); 309 283 if (err) 310 284 return err; 285 + if (data.count == 1) { 286 + err = class_for_each_device(&platform_profile_class, NULL, 287 + &data, _remove_hidden_choices); 288 + if (err) 289 + return err; 290 + } 311 291 } 312 292 313 293 /* no profile handler registered any more */ 314 - if (bitmap_empty(aggregate, PLATFORM_PROFILE_LAST)) 294 + if (bitmap_empty(data.aggregate, PLATFORM_PROFILE_LAST)) 315 295 return -EINVAL; 316 296 317 - return _commmon_choices_show(aggregate, buf); 297 + return _commmon_choices_show(data.aggregate, buf); 318 298 } 319 299 320 300 /** ··· 411 373 struct device_attribute *attr, 412 374 const char *buf, size_t count) 413 375 { 414 - unsigned long choices[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)]; 376 + struct aggregate_choices_data data = { 377 + .aggregate = { [0 ... BITS_TO_LONGS(PLATFORM_PROFILE_LAST) - 1] = ~0UL }, 378 + .count = 0, 379 + }; 415 380 int ret; 416 381 int i; 417 382 ··· 422 381 i = sysfs_match_string(profile_names, buf); 423 382 if (i < 0 || i == PLATFORM_PROFILE_CUSTOM) 424 383 return -EINVAL; 425 - set_bit(PLATFORM_PROFILE_LAST, choices); 384 + set_bit(PLATFORM_PROFILE_LAST, data.aggregate); 426 385 scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &profile_lock) { 427 386 ret = class_for_each_device(&platform_profile_class, NULL, 428 - choices, _aggregate_choices); 387 + &data, _aggregate_choices); 429 388 if (ret) 430 389 return ret; 431 - if (!test_bit(i, choices)) 390 + if (!test_bit(i, data.aggregate)) 432 391 return -EOPNOTSUPP; 433 392 434 393 ret = class_for_each_device(&platform_profile_class, NULL, &i, ··· 494 453 */ 495 454 int platform_profile_cycle(void) 496 455 { 456 + struct aggregate_choices_data data = { 457 + .aggregate = { [0 ... BITS_TO_LONGS(PLATFORM_PROFILE_LAST) - 1] = ~0UL }, 458 + .count = 0, 459 + }; 497 460 enum platform_profile_option next = PLATFORM_PROFILE_LAST; 498 461 enum platform_profile_option profile = PLATFORM_PROFILE_LAST; 499 - unsigned long choices[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)]; 500 462 int err; 501 463 502 - set_bit(PLATFORM_PROFILE_LAST, choices); 464 + set_bit(PLATFORM_PROFILE_LAST, data.aggregate); 503 465 scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &profile_lock) { 504 466 err = class_for_each_device(&platform_profile_class, NULL, 505 467 &profile, _aggregate_profiles); ··· 514 470 return -EINVAL; 515 471 516 472 err = class_for_each_device(&platform_profile_class, NULL, 517 - choices, _aggregate_choices); 473 + &data, _aggregate_choices); 518 474 if (err) 519 475 return err; 520 476 521 477 /* never iterate into a custom if all drivers supported it */ 522 - clear_bit(PLATFORM_PROFILE_CUSTOM, choices); 478 + clear_bit(PLATFORM_PROFILE_CUSTOM, data.aggregate); 523 479 524 - next = find_next_bit_wrap(choices, 480 + next = find_next_bit_wrap(data.aggregate, 525 481 PLATFORM_PROFILE_LAST, 526 482 profile + 1); ··· 574 530 if (bitmap_empty(pprof->choices, PLATFORM_PROFILE_LAST)) { 575 531 dev_err(dev, "Failed to register platform_profile class device with empty choices\n"); 576 532 return ERR_PTR(-EINVAL); 533 + } 534 + 535 + if (ops->hidden_choices) { 536 + err = ops->hidden_choices(drvdata, pprof->hidden_choices); 537 + if (err) { 538 + dev_err(dev, "platform_profile hidden_choices failed\n"); 539 + return ERR_PTR(err); 540 + } 577 541 } 578 542 579 543 guard(mutex)(&profile_lock);
+1
drivers/android/binderfs.c
··· 274 274 mutex_unlock(&binderfs_minors_mutex); 275 275 276 276 if (refcount_dec_and_test(&device->ref)) { 277 + hlist_del_init(&device->hlist); 277 278 kfree(device->context.name); 278 279 kfree(device); 279 280 }
+1
drivers/base/core.c
··· 2079 2079 out: 2080 2080 sup_handle->flags &= ~FWNODE_FLAG_VISITED; 2081 2081 put_device(sup_dev); 2082 + put_device(con_dev); 2082 2083 put_device(par_dev); 2083 2084 return ret; 2084 2085 }
+2 -2
drivers/block/null_blk/main.c
··· 1549 1549 cmd = blk_mq_rq_to_pdu(req); 1550 1550 cmd->error = null_process_cmd(cmd, req_op(req), blk_rq_pos(req), 1551 1551 blk_rq_sectors(req)); 1552 - if (!blk_mq_add_to_batch(req, iob, (__force int) cmd->error, 1553 - blk_mq_end_request_batch)) 1552 + if (!blk_mq_add_to_batch(req, iob, cmd->error != BLK_STS_OK, 1553 + blk_mq_end_request_batch)) 1554 1554 blk_mq_end_request(req, cmd->error); 1555 1555 nr++; 1556 1556 }
+5 -2
drivers/block/ublk_drv.c
··· 2715 2715 if (ph.len > sizeof(struct ublk_params)) 2716 2716 ph.len = sizeof(struct ublk_params); 2717 2717 2718 - /* parameters can only be changed when device isn't live */ 2719 2718 mutex_lock(&ub->mutex); 2720 - if (ub->dev_info.state == UBLK_S_DEV_LIVE) { 2719 + if (test_bit(UB_STATE_USED, &ub->state)) { 2720 + /* 2721 + * Parameters can only be changed when device hasn't 2722 + * been started yet 2723 + */ 2721 2724 ret = -EACCES; 2722 2725 } else if (copy_from_user(&ub->params, argp, ph.len)) { 2723 2726 ret = -EFAULT;
+3 -2
drivers/block/virtio_blk.c
··· 1207 1207 1208 1208 while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) { 1209 1209 struct request *req = blk_mq_rq_from_pdu(vbr); 1210 + u8 status = virtblk_vbr_status(vbr); 1210 1211 1211 1212 found++; 1212 1213 if (!blk_mq_complete_request_remote(req) && 1213 - !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr), 1214 - virtblk_complete_batch)) 1214 + !blk_mq_add_to_batch(req, iob, status != VIRTIO_BLK_S_OK, 1215 + virtblk_complete_batch)) 1215 1216 virtblk_request_done(req); 1216 1217 } 1217 1218
+12
drivers/bluetooth/Kconfig
··· 56 56 Say Y here to enable USB poll_sync for Bluetooth USB devices by 57 57 default. 58 58 59 + config BT_HCIBTUSB_AUTO_ISOC_ALT 60 + bool "Automatically adjust alternate setting for Isoc endpoints" 61 + depends on BT_HCIBTUSB 62 + default y if CHROME_PLATFORMS 63 + help 64 + Say Y here to automatically adjusting the alternate setting for 65 + HCI_USER_CHANNEL whenever a SCO link is established. 66 + 67 + When enabled, btusb intercepts the HCI_EV_SYNC_CONN_COMPLETE packets 68 + and configures isoc endpoint alternate setting automatically when 69 + HCI_USER_CHANNEL is in use. 70 + 59 71 config BT_HCIBTUSB_BCM 60 72 bool "Broadcom protocol support" 61 73 depends on BT_HCIBTUSB
+42
drivers/bluetooth/btusb.c
··· 34 34 static bool enable_autosuspend = IS_ENABLED(CONFIG_BT_HCIBTUSB_AUTOSUSPEND); 35 35 static bool enable_poll_sync = IS_ENABLED(CONFIG_BT_HCIBTUSB_POLL_SYNC); 36 36 static bool reset = true; 37 + static bool auto_isoc_alt = IS_ENABLED(CONFIG_BT_HCIBTUSB_AUTO_ISOC_ALT); 37 38 38 39 static struct usb_driver btusb_driver; 39 40 ··· 1086 1085 spin_unlock_irqrestore(&data->rxlock, flags); 1087 1086 } 1088 1087 1088 + static void btusb_sco_connected(struct btusb_data *data, struct sk_buff *skb) 1089 + { 1090 + struct hci_event_hdr *hdr = (void *) skb->data; 1091 + struct hci_ev_sync_conn_complete *ev = 1092 + (void *) skb->data + sizeof(*hdr); 1093 + struct hci_dev *hdev = data->hdev; 1094 + unsigned int notify_air_mode; 1095 + 1096 + if (hci_skb_pkt_type(skb) != HCI_EVENT_PKT) 1097 + return; 1098 + 1099 + if (skb->len < sizeof(*hdr) || hdr->evt != HCI_EV_SYNC_CONN_COMPLETE) 1100 + return; 1101 + 1102 + if (skb->len != sizeof(*hdr) + sizeof(*ev) || ev->status) 1103 + return; 1104 + 1105 + switch (ev->air_mode) { 1106 + case BT_CODEC_CVSD: 1107 + notify_air_mode = HCI_NOTIFY_ENABLE_SCO_CVSD; 1108 + break; 1109 + 1110 + case BT_CODEC_TRANSPARENT: 1111 + notify_air_mode = HCI_NOTIFY_ENABLE_SCO_TRANSP; 1112 + break; 1113 + 1114 + default: 1115 + return; 1116 + } 1117 + 1118 + bt_dev_info(hdev, "enabling SCO with air mode %u", ev->air_mode); 1119 + data->sco_num = 1; 1120 + data->air_mode = notify_air_mode; 1121 + schedule_work(&data->work); 1122 + } 1123 + 1089 1124 static int btusb_recv_event(struct btusb_data *data, struct sk_buff *skb) 1090 1125 { 1091 1126 if (data->intr_interval) { 1092 1127 /* Trigger dequeue immediately if an event is received */ 1093 1128 schedule_delayed_work(&data->rx_work, 0); 1094 1129 } 1130 + 1131 + /* Configure altsetting for HCI_USER_CHANNEL on SCO connected */ 1132 + if (auto_isoc_alt && hci_dev_test_flag(data->hdev, HCI_USER_CHANNEL)) 1133 + btusb_sco_connected(data, skb); 1095 1134 1096 1135 return data->recv_event(data->hdev, skb); 1097 1136 } ··· 3685 3644 } 3686 3645 3687 3646 static const struct file_operations force_poll_sync_fops = { 3647 + .owner = THIS_MODULE, 3688 3648 .open = simple_open, 3689 3649 .read = force_poll_sync_read, 3690 3650 .write = force_poll_sync_write,
+3 -2
drivers/bus/mhi/host/pci_generic.c
··· 1095 1095 err_unprepare: 1096 1096 mhi_unprepare_after_power_down(mhi_cntrl); 1097 1097 err_try_reset: 1098 - if (pci_reset_function(pdev)) 1099 - dev_err(&pdev->dev, "Recovery failed\n"); 1098 + err = pci_try_reset_function(pdev); 1099 + if (err) 1100 + dev_err(&pdev->dev, "Recovery failed: %d\n", err); 1100 1101 } 1101 1102 1102 1103 static void health_check(struct timer_list *t)
+21 -1
drivers/bus/simple-pm-bus.c
··· 109 109 return 0; 110 110 } 111 111 112 + static int simple_pm_bus_suspend(struct device *dev) 113 + { 114 + struct simple_pm_bus *bus = dev_get_drvdata(dev); 115 + 116 + if (!bus) 117 + return 0; 118 + 119 + return pm_runtime_force_suspend(dev); 120 + } 121 + 122 + static int simple_pm_bus_resume(struct device *dev) 123 + { 124 + struct simple_pm_bus *bus = dev_get_drvdata(dev); 125 + 126 + if (!bus) 127 + return 0; 128 + 129 + return pm_runtime_force_resume(dev); 130 + } 131 + 112 132 static const struct dev_pm_ops simple_pm_bus_pm_ops = { 113 133 RUNTIME_PM_OPS(simple_pm_bus_runtime_suspend, simple_pm_bus_runtime_resume, NULL) 114 - NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) 134 + NOIRQ_SYSTEM_SLEEP_PM_OPS(simple_pm_bus_suspend, simple_pm_bus_resume) 115 135 }; 116 136 117 137 #define ONLY_BUS ((void *) 1) /* Match if the device is only a bus. */
+5 -1
drivers/cdx/cdx.c
··· 473 473 struct device_attribute *attr, char *buf) 474 474 { 475 475 struct cdx_device *cdx_dev = to_cdx_device(dev); 476 + ssize_t len; 476 477 477 - return sysfs_emit(buf, "%s\n", cdx_dev->driver_override); 478 + device_lock(dev); 479 + len = sysfs_emit(buf, "%s\n", cdx_dev->driver_override); 480 + device_unlock(dev); 481 + return len; 478 482 } 479 483 static DEVICE_ATTR_RW(driver_override); 480 484
+1 -1
drivers/char/misc.c
··· 264 264 device_create_with_groups(&misc_class, misc->parent, dev, 265 265 misc, misc->groups, "%s", misc->name); 266 266 if (IS_ERR(misc->this_device)) { 267 + misc_minor_free(misc->minor); 267 268 if (is_dynamic) { 268 - misc_minor_free(misc->minor); 269 269 misc->minor = MISC_DYNAMIC_MINOR; 270 270 } 271 271 err = PTR_ERR(misc->this_device);
+2 -2
drivers/char/virtio_console.c
··· 923 923 924 924 pipe_lock(pipe); 925 925 ret = 0; 926 - if (pipe_empty(pipe->head, pipe->tail)) 926 + if (pipe_is_empty(pipe)) 927 927 goto error_out; 928 928 929 929 ret = wait_port_writable(port, filp->f_flags & O_NONBLOCK); 930 930 if (ret < 0) 931 931 goto error_out; 932 932 933 - occupancy = pipe_occupancy(pipe->head, pipe->tail); 933 + occupancy = pipe_buf_usage(pipe); 934 934 buf = alloc_buf(port->portdev->vdev, 0, occupancy); 935 935 936 936 if (!buf) {
-2
drivers/clk/qcom/dispcc-sm8750.c
··· 827 827 &disp_cc_mdss_byte0_clk_src.clkr.hw, 828 828 }, 829 829 .num_parents = 1, 830 - .flags = CLK_SET_RATE_PARENT, 831 830 .ops = &clk_regmap_div_ops, 832 831 }, 833 832 }; ··· 841 842 &disp_cc_mdss_byte1_clk_src.clkr.hw, 842 843 }, 843 844 .num_parents = 1, 844 - .flags = CLK_SET_RATE_PARENT, 845 845 .ops = &clk_regmap_div_ops, 846 846 }, 847 847 };
-8
drivers/clk/samsung/clk-gs101.c
··· 382 382 EARLY_WAKEUP_DPU_DEST, 383 383 EARLY_WAKEUP_CSIS_DEST, 384 384 EARLY_WAKEUP_SW_TRIG_APM, 385 - EARLY_WAKEUP_SW_TRIG_APM_SET, 386 - EARLY_WAKEUP_SW_TRIG_APM_CLEAR, 387 385 EARLY_WAKEUP_SW_TRIG_CLUSTER0, 388 - EARLY_WAKEUP_SW_TRIG_CLUSTER0_SET, 389 - EARLY_WAKEUP_SW_TRIG_CLUSTER0_CLEAR, 390 386 EARLY_WAKEUP_SW_TRIG_DPU, 391 - EARLY_WAKEUP_SW_TRIG_DPU_SET, 392 - EARLY_WAKEUP_SW_TRIG_DPU_CLEAR, 393 387 EARLY_WAKEUP_SW_TRIG_CSIS, 394 - EARLY_WAKEUP_SW_TRIG_CSIS_SET, 395 - EARLY_WAKEUP_SW_TRIG_CSIS_CLEAR, 396 388 CLK_CON_MUX_MUX_CLKCMU_BO_BUS, 397 389 CLK_CON_MUX_MUX_CLKCMU_BUS0_BUS, 398 390 CLK_CON_MUX_MUX_CLKCMU_BUS1_BUS,
+6 -1
drivers/clk/samsung/clk-pll.c
··· 206 206 */ 207 207 /* Maximum lock time can be 270 * PDIV cycles */ 208 208 #define PLL35XX_LOCK_FACTOR (270) 209 + #define PLL142XX_LOCK_FACTOR (150) 209 210 210 211 #define PLL35XX_MDIV_MASK (0x3FF) 211 212 #define PLL35XX_PDIV_MASK (0x3F) ··· 273 272 } 274 273 275 274 /* Set PLL lock time. */ 276 - writel_relaxed(rate->pdiv * PLL35XX_LOCK_FACTOR, 275 + if (pll->type == pll_142xx) 276 + writel_relaxed(rate->pdiv * PLL142XX_LOCK_FACTOR, 277 + pll->lock_reg); 278 + else 279 + writel_relaxed(rate->pdiv * PLL35XX_LOCK_FACTOR, 277 280 pll->lock_reg); 278 281 279 282 /* Change PLL PMS values */
+17 -3
drivers/gpio/gpio-aggregator.c
··· 119 119 struct platform_device *pdev; 120 120 int res, id; 121 121 122 + if (!try_module_get(THIS_MODULE)) 123 + return -ENOENT; 124 + 122 125 /* kernfs guarantees string termination, so count + 1 is safe */ 123 126 aggr = kzalloc(sizeof(*aggr) + count + 1, GFP_KERNEL); 124 - if (!aggr) 125 - return -ENOMEM; 127 + if (!aggr) { 128 + res = -ENOMEM; 129 + goto put_module; 130 + } 126 131 127 132 memcpy(aggr->args, buf, count + 1); 128 133 ··· 166 161 } 167 162 168 163 aggr->pdev = pdev; 164 + module_put(THIS_MODULE); 169 165 return count; 170 166 171 167 remove_table: ··· 181 175 kfree(aggr->lookups); 182 176 free_ga: 183 177 kfree(aggr); 178 + put_module: 179 + module_put(THIS_MODULE); 184 180 return res; 185 181 } 186 182 ··· 211 203 if (error) 212 204 return error; 213 205 206 + if (!try_module_get(THIS_MODULE)) 207 + return -ENOENT; 208 + 214 209 mutex_lock(&gpio_aggregator_lock); 215 210 aggr = idr_remove(&gpio_aggregator_idr, id); 216 211 mutex_unlock(&gpio_aggregator_lock); 217 - if (!aggr) 212 + if (!aggr) { 213 + module_put(THIS_MODULE); 218 214 return -ENOENT; 215 + } 219 216 220 217 gpio_aggregator_free(aggr); 218 + module_put(THIS_MODULE); 221 219 return count; 222 220 } 223 221 static DRIVER_ATTR_WO(delete_device);
+18 -13
drivers/gpio/gpio-rcar.c
··· 40 40 41 41 struct gpio_rcar_priv { 42 42 void __iomem *base; 43 - spinlock_t lock; 43 + raw_spinlock_t lock; 44 44 struct device *dev; 45 45 struct gpio_chip gpio_chip; 46 46 unsigned int irq_parent; ··· 123 123 * "Setting Level-Sensitive Interrupt Input Mode" 124 124 */ 125 125 126 - spin_lock_irqsave(&p->lock, flags); 126 + raw_spin_lock_irqsave(&p->lock, flags); 127 127 128 128 /* Configure positive or negative logic in POSNEG */ 129 129 gpio_rcar_modify_bit(p, POSNEG, hwirq, !active_high_rising_edge); ··· 142 142 if (!level_trigger) 143 143 gpio_rcar_write(p, INTCLR, BIT(hwirq)); 144 144 145 - spin_unlock_irqrestore(&p->lock, flags); 145 + raw_spin_unlock_irqrestore(&p->lock, flags); 146 146 } 147 147 148 148 static int gpio_rcar_irq_set_type(struct irq_data *d, unsigned int type) ··· 246 246 * "Setting General Input Mode" 247 247 */ 248 248 249 - spin_lock_irqsave(&p->lock, flags); 249 + raw_spin_lock_irqsave(&p->lock, flags); 250 250 251 251 /* Configure positive logic in POSNEG */ 252 252 gpio_rcar_modify_bit(p, POSNEG, gpio, false); ··· 261 261 if (p->info.has_outdtsel && output) 262 262 gpio_rcar_modify_bit(p, OUTDTSEL, gpio, false); 263 263 264 - spin_unlock_irqrestore(&p->lock, flags); 264 + raw_spin_unlock_irqrestore(&p->lock, flags); 265 265 } 266 266 267 267 static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset) ··· 347 347 return 0; 348 348 } 349 349 350 - spin_lock_irqsave(&p->lock, flags); 350 + raw_spin_lock_irqsave(&p->lock, flags); 351 351 outputs = gpio_rcar_read(p, INOUTSEL); 352 352 m = outputs & bankmask; 353 353 if (m) ··· 356 356 m = ~outputs & bankmask; 357 357 if (m) 358 358 val |= gpio_rcar_read(p, INDT) & m; 359 - spin_unlock_irqrestore(&p->lock, flags); 359 + raw_spin_unlock_irqrestore(&p->lock, flags); 360 360 361 361 bits[0] = val; 362 362 return 0; ··· 367 367 struct gpio_rcar_priv *p = gpiochip_get_data(chip); 368 368 unsigned long flags; 369 369 370 - spin_lock_irqsave(&p->lock, flags); 370 + raw_spin_lock_irqsave(&p->lock, flags); 371 371 gpio_rcar_modify_bit(p, OUTDT, offset, value); 372 - spin_unlock_irqrestore(&p->lock, flags); 372 + raw_spin_unlock_irqrestore(&p->lock, flags); 373 373 } 374 374 375 375 static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask, ··· 386 386 if (!bankmask) 387 387 return; 388 388 389 - spin_lock_irqsave(&p->lock, flags); 389 + raw_spin_lock_irqsave(&p->lock, flags); 390 390 val = gpio_rcar_read(p, OUTDT); 391 391 val &= ~bankmask; 392 392 val |= (bankmask & bits[0]); 393 393 gpio_rcar_write(p, OUTDT, val); 394 - spin_unlock_irqrestore(&p->lock, flags); 394 + raw_spin_unlock_irqrestore(&p->lock, flags); 395 395 } 396 396 397 397 static int gpio_rcar_direction_output(struct gpio_chip *chip, unsigned offset, ··· 468 468 p->info = *info; 469 469 470 470 ret = of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 0, &args); 471 - *npins = ret == 0 ? args.args[2] : RCAR_MAX_GPIO_PER_BANK; 471 + if (ret) { 472 + *npins = RCAR_MAX_GPIO_PER_BANK; 473 + } else { 474 + *npins = args.args[2]; 475 + of_node_put(args.np); 476 + } 472 477 473 478 if (*npins == 0 || *npins > RCAR_MAX_GPIO_PER_BANK) { 474 479 dev_warn(p->dev, "Invalid number of gpio lines %u, using %u\n", ··· 510 505 return -ENOMEM; 511 506 512 507 p->dev = dev; 513 - spin_lock_init(&p->lock); 508 + raw_spin_lock_init(&p->lock); 514 509 515 510 /* Get device configuration from DT node */ 516 511 ret = gpio_rcar_parse_dt(p, &npins);
+9 -6
drivers/gpio/gpiolib-cdev.c
··· 2729 2729 cdev->gdev = gpio_device_get(gdev); 2730 2730 2731 2731 cdev->lineinfo_changed_nb.notifier_call = lineinfo_changed_notify; 2732 - ret = atomic_notifier_chain_register(&gdev->line_state_notifier, 2733 - &cdev->lineinfo_changed_nb); 2732 + scoped_guard(write_lock_irqsave, &gdev->line_state_lock) 2733 + ret = raw_notifier_chain_register(&gdev->line_state_notifier, 2734 + &cdev->lineinfo_changed_nb); 2734 2735 if (ret) 2735 2736 goto out_free_bitmap; 2736 2737 ··· 2755 2754 blocking_notifier_chain_unregister(&gdev->device_notifier, 2756 2755 &cdev->device_unregistered_nb); 2757 2756 out_unregister_line_notifier: 2758 - atomic_notifier_chain_unregister(&gdev->line_state_notifier, 2759 - &cdev->lineinfo_changed_nb); 2757 + scoped_guard(write_lock_irqsave, &gdev->line_state_lock) 2758 + raw_notifier_chain_unregister(&gdev->line_state_notifier, 2759 + &cdev->lineinfo_changed_nb); 2760 2760 out_free_bitmap: 2761 2761 gpio_device_put(gdev); 2762 2762 bitmap_free(cdev->watched_lines); ··· 2781 2779 2782 2780 blocking_notifier_chain_unregister(&gdev->device_notifier, 2783 2781 &cdev->device_unregistered_nb); 2784 - atomic_notifier_chain_unregister(&gdev->line_state_notifier, 2785 - &cdev->lineinfo_changed_nb); 2782 + scoped_guard(write_lock_irqsave, &gdev->line_state_lock) 2783 + raw_notifier_chain_unregister(&gdev->line_state_notifier, 2784 + &cdev->lineinfo_changed_nb); 2786 2785 bitmap_free(cdev->watched_lines); 2787 2786 gpio_device_put(gdev); 2788 2787 kfree(cdev);
+16 -19
drivers/gpio/gpiolib.c
··· 1025 1025 } 1026 1026 } 1027 1027 1028 - ATOMIC_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1028 + rwlock_init(&gdev->line_state_lock); 1029 + RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier); 1029 1030 BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier); 1030 1031 1031 1032 ret = init_srcu_struct(&gdev->srcu); ··· 1057 1056 1058 1057 desc->gdev = gdev; 1059 1058 1060 - if (gc->get_direction && gpiochip_line_is_valid(gc, desc_index)) { 1061 - ret = gc->get_direction(gc, desc_index); 1062 - if (ret < 0) 1063 - /* 1064 - * FIXME: Bail-out here once all GPIO drivers 1065 - * are updated to not return errors in 1066 - * situations that can be considered normal 1067 - * operation. 1068 - */ 1069 - dev_warn(&gdev->dev, 1070 - "%s: get_direction failed: %d\n", 1071 - __func__, ret); 1072 - 1073 - assign_bit(FLAG_IS_OUT, &desc->flags, !ret); 1074 - } else { 1059 + /* 1060 + * We would typically want to check the return value of 1061 + * get_direction() here but we must not check the return value 1062 + * and bail-out as pin controllers can have pins configured to 1063 + * alternate functions and return -EINVAL. Also: there's no 1064 + * need to take the SRCU lock here. 1065 + */ 1066 + if (gc->get_direction && gpiochip_line_is_valid(gc, desc_index)) 1067 + assign_bit(FLAG_IS_OUT, &desc->flags, 1068 + !gc->get_direction(gc, desc_index)); 1069 + else 1075 1070 assign_bit(FLAG_IS_OUT, 1076 1071 &desc->flags, !gc->direction_input); 1077 - } 1078 1072 } 1079 1073 1080 1074 ret = of_gpiochip_add(gc); ··· 4189 4193 4190 4194 void gpiod_line_state_notify(struct gpio_desc *desc, unsigned long action) 4191 4195 { 4192 - atomic_notifier_call_chain(&desc->gdev->line_state_notifier, 4193 - action, desc); 4196 + guard(read_lock_irqsave)(&desc->gdev->line_state_lock); 4197 + 4198 + raw_notifier_call_chain(&desc->gdev->line_state_notifier, action, desc); 4194 4199 } 4195 4200 4196 4201 /**
+4 -1
drivers/gpio/gpiolib.h
··· 16 16 #include <linux/gpio/driver.h> 17 17 #include <linux/module.h> 18 18 #include <linux/notifier.h> 19 + #include <linux/spinlock.h> 19 20 #include <linux/srcu.h> 20 21 #include <linux/workqueue.h> 21 22 ··· 46 45 * @list: links gpio_device:s together for traversal 47 46 * @line_state_notifier: used to notify subscribers about lines being 48 47 * requested, released or reconfigured 48 + * @line_state_lock: RW-spinlock protecting the line state notifier 49 49 * @line_state_wq: used to emit line state events from a separate thread in 50 50 * process context 51 51 * @device_notifier: used to notify character device wait queues about the GPIO ··· 74 72 const char *label; 75 73 void *data; 76 74 struct list_head list; 77 - struct atomic_notifier_head line_state_notifier; 75 + struct raw_notifier_head line_state_notifier; 76 + rwlock_t line_state_lock; 78 77 struct workqueue_struct *line_state_wq; 79 78 struct blocking_notifier_head device_notifier; 80 79 struct srcu_struct srcu;
+9 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 2555 2555 int r; 2556 2556 2557 2557 r = amdgpu_device_suspend(drm_dev, true); 2558 - adev->in_s4 = false; 2559 2558 if (r) 2560 2559 return r; 2561 2560 ··· 2566 2567 static int amdgpu_pmops_thaw(struct device *dev) 2567 2568 { 2568 2569 struct drm_device *drm_dev = dev_get_drvdata(dev); 2570 + struct amdgpu_device *adev = drm_to_adev(drm_dev); 2571 + int r; 2569 2572 2570 - return amdgpu_device_resume(drm_dev, true); 2573 + r = amdgpu_device_resume(drm_dev, true); 2574 + adev->in_s4 = false; 2575 + 2576 + return r; 2571 2577 } 2572 2578 2573 2579 static int amdgpu_pmops_poweroff(struct device *dev) ··· 2585 2581 static int amdgpu_pmops_restore(struct device *dev) 2586 2582 { 2587 2583 struct drm_device *drm_dev = dev_get_drvdata(dev); 2584 + struct amdgpu_device *adev = drm_to_adev(drm_dev); 2585 + 2586 + adev->in_s4 = false; 2588 2587 2589 2588 return amdgpu_device_resume(drm_dev, true); 2590 2589 }
+3 -2
drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
··· 528 528 529 529 bo_adev = amdgpu_ttm_adev(bo->tbo.bdev); 530 530 coherent = bo->flags & AMDGPU_GEM_CREATE_COHERENT; 531 - is_system = (bo->tbo.resource->mem_type == TTM_PL_TT) || 532 - (bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT); 531 + is_system = bo->tbo.resource && 532 + (bo->tbo.resource->mem_type == TTM_PL_TT || 533 + bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT); 533 534 534 535 if (bo && bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC) 535 536 *flags |= AMDGPU_PTE_DCC;
+1 -1
drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
··· 284 284 return 0; 285 285 } 286 286 287 - ip_block = amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_VCN); 287 + ip_block = amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_VCE); 288 288 if (!ip_block) 289 289 return -EINVAL; 290 290
+5 -3
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
··· 1230 1230 decrement_queue_count(dqm, qpd, q); 1231 1231 1232 1232 if (dqm->dev->kfd->shared_resources.enable_mes) { 1233 - retval = remove_queue_mes(dqm, q, qpd); 1234 - if (retval) { 1233 + int err; 1234 + 1235 + err = remove_queue_mes(dqm, q, qpd); 1236 + if (err) { 1235 1237 dev_err(dev, "Failed to evict queue %d\n", 1236 1238 q->properties.queue_id); 1237 - goto out; 1239 + retval = err; 1238 1240 } 1239 1241 } 1240 1242 }
+2 -2
drivers/gpu/drm/amd/amdkfd/kfd_queue.c
··· 266 266 /* EOP buffer is not required for all ASICs */ 267 267 if (properties->eop_ring_buffer_address) { 268 268 if (properties->eop_ring_buffer_size != topo_dev->node_props.eop_buffer_size) { 269 - pr_debug("queue eop bo size 0x%lx not equal to node eop buf size 0x%x\n", 270 - properties->eop_buf_bo->tbo.base.size, 269 + pr_debug("queue eop bo size 0x%x not equal to node eop buf size 0x%x\n", 270 + properties->eop_ring_buffer_size, 271 271 topo_dev->node_props.eop_buffer_size); 272 272 err = -EINVAL; 273 273 goto out_err_unreserve;
+16 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 245 245 static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector); 246 246 static void handle_hpd_rx_irq(void *param); 247 247 248 + static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm, 249 + int bl_idx, 250 + u32 user_brightness); 251 + 248 252 static bool 249 253 is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state, 250 254 struct drm_crtc_state *new_crtc_state); ··· 3375 3371 3376 3372 mutex_unlock(&dm->dc_lock); 3377 3373 3374 + /* set the backlight after a reset */ 3375 + for (i = 0; i < dm->num_of_edps; i++) { 3376 + if (dm->backlight_dev[i]) 3377 + amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]); 3378 + } 3379 + 3378 3380 return 0; 3379 3381 } 3382 + 3383 + /* leave display off for S4 sequence */ 3384 + if (adev->in_s4) 3385 + return 0; 3386 + 3380 3387 /* Recreate dc_state - DC invalidates it when setting power state to S3. */ 3381 3388 dc_state_release(dm_state->context); 3382 3389 dm_state->context = dc_state_create(dm->dc, NULL); ··· 4921 4906 dm->backlight_dev[aconnector->bl_idx] = 4922 4907 backlight_device_register(bl_name, aconnector->base.kdev, dm, 4923 4908 &amdgpu_dm_backlight_ops, &props); 4909 + dm->brightness[aconnector->bl_idx] = props.brightness; 4924 4910 4925 4911 if (IS_ERR(dm->backlight_dev[aconnector->bl_idx])) { 4926 4912 DRM_ERROR("DM: Backlight registration failed!\n"); ··· 4989 4973 aconnector->bl_idx = bl_idx; 4990 4974 4991 4975 amdgpu_dm_update_backlight_caps(dm, bl_idx); 4992 - dm->brightness[bl_idx] = AMDGPU_MAX_BL_LEVEL; 4993 4976 dm->backlight_link[bl_idx] = link; 4994 4977 dm->num_of_edps++; 4995 4978
+1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
··· 455 455 for (i = 0; i < hdcp_work->max_link; i++) { 456 456 cancel_delayed_work_sync(&hdcp_work[i].callback_dwork); 457 457 cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork); 458 + cancel_delayed_work_sync(&hdcp_work[i].property_validate_dwork); 458 459 } 459 460 460 461 sysfs_remove_bin_file(kobj, &hdcp_work[0].attr);
+45 -19
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
··· 894 894 struct drm_device *dev = adev_to_drm(adev); 895 895 struct drm_connector *connector; 896 896 struct drm_connector_list_iter iter; 897 + int irq_type; 897 898 int i; 899 + 900 + /* First, clear all hpd and hpdrx interrupts */ 901 + for (i = DC_IRQ_SOURCE_HPD1; i <= DC_IRQ_SOURCE_HPD6RX; i++) { 902 + if (!dc_interrupt_set(adev->dm.dc, i, false)) 903 + drm_err(dev, "Failed to clear hpd(rx) source=%d on init\n", 904 + i); 905 + } 898 906 899 907 drm_connector_list_iter_begin(dev, &iter); 900 908 drm_for_each_connector_iter(connector, &iter) { ··· 916 908 917 909 dc_link = amdgpu_dm_connector->dc_link; 918 910 911 + /* 912 + * Get a base driver irq reference for hpd ints for the lifetime 913 + * of dm. Note that only hpd interrupt types are registered with 914 + * base driver; hpd_rx types aren't. IOW, amdgpu_irq_get/put on 915 + * hpd_rx isn't available. DM currently controls hpd_rx 916 + * explicitly with dc_interrupt_set() 917 + */ 919 918 if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) { 920 - dc_interrupt_set(adev->dm.dc, 921 - dc_link->irq_source_hpd, 922 - true); 919 + irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1; 920 + /* 921 + * TODO: There's a mismatch between mode_info.num_hpd 922 + * and what bios reports as the # of connectors with hpd 923 + * sources. Since the # of hpd source types registered 924 + * with base driver == mode_info.num_hpd, we have to 925 + * fallback to dc_interrupt_set for the remaining types. 
926 + */ 927 + if (irq_type < adev->mode_info.num_hpd) { 928 + if (amdgpu_irq_get(adev, &adev->hpd_irq, irq_type)) 929 + drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", 930 + dc_link->irq_source_hpd); 931 + } else { 932 + dc_interrupt_set(adev->dm.dc, 933 + dc_link->irq_source_hpd, 934 + true); 935 + } 923 936 } 924 937 925 938 if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) { ··· 950 921 } 951 922 } 952 923 drm_connector_list_iter_end(&iter); 953 - 954 - /* Update reference counts for HPDs */ 955 - for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) { 956 - if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1)) 957 - drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", i); 958 - } 959 - } 960 925 961 926 /** ··· 965 942 struct drm_device *dev = adev_to_drm(adev); 966 943 struct drm_connector *connector; 967 944 struct drm_connector_list_iter iter; 968 - int i; 945 + int irq_type; 969 946 970 947 drm_connector_list_iter_begin(dev, &iter); 971 948 drm_for_each_connector_iter(connector, &iter) { ··· 979 956 dc_link = amdgpu_dm_connector->dc_link; 980 957 981 958 if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) { 982 - dc_interrupt_set(adev->dm.dc, 983 - dc_link->irq_source_hpd, 984 - false); 959 + irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1; 960 + 961 + /* TODO: See same TODO in amdgpu_dm_hpd_init() */ 962 + if (irq_type < adev->mode_info.num_hpd) { 963 + if (amdgpu_irq_put(adev, &adev->hpd_irq, irq_type)) 964 + drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", 965 + dc_link->irq_source_hpd); 966 + } else { 967 + dc_interrupt_set(adev->dm.dc, 968 + dc_link->irq_source_hpd, 969 + false); 970 + } 985 971 } 986 972 987 973 if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) { ··· 1000 968 } 1001 969 } 1002 970 drm_connector_list_iter_end(&iter); 1003 - 1004 - /* Update reference counts for HPDs */ 1005 - for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
1006 - if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1)) 1007 - drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i); 1008 - } 1009 971 }
+5 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
··· 277 277 if (!dcc->enable) 278 278 return 0; 279 279 280 - if (format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN || 281 - !dc->cap_funcs.get_dcc_compression_cap) 280 + if (adev->family < AMDGPU_FAMILY_GC_12_0_0 && 281 + format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN) 282 + return -EINVAL; 283 + 284 + if (!dc->cap_funcs.get_dcc_compression_cap) 282 285 return -EINVAL; 283 286 284 287 input.format = format;
+7 -3
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 1455 1455 DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger); 1456 1456 1457 1457 /* Invalid input */ 1458 - if (!plane_state->dst_rect.width || 1458 + if (!plane_state || 1459 + !plane_state->dst_rect.width || 1459 1460 !plane_state->dst_rect.height || 1460 1461 !plane_state->src_rect.width || 1461 1462 !plane_state->src_rect.height) { ··· 3389 3388 break; 3390 3389 case COLOR_DEPTH_121212: 3391 3390 normalized_pix_clk = (pix_clk * 36) / 24; 3392 - break; 3391 + break; 3392 + case COLOR_DEPTH_141414: 3393 + normalized_pix_clk = (pix_clk * 42) / 24; 3394 + break; 3393 3395 case COLOR_DEPTH_161616: 3394 3396 normalized_pix_clk = (pix_clk * 48) / 24; 3395 - break; 3397 + break; 3396 3398 default: 3397 3399 ASSERT(0); 3398 3400 break;
+1
drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c
··· 239 239 dce60_timing_generator_enable_advanced_request, 240 240 .configure_crc = dce60_configure_crc, 241 241 .get_crc = dce110_get_crc, 242 + .is_two_pixels_per_container = dce110_is_two_pixels_per_container, 242 243 }; 243 244 244 245 void dce60_timing_generator_construct(
+1 -11
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
··· 1895 1895 NULL); 1896 1896 } 1897 1897 1898 - static int smu_v14_0_process_pending_interrupt(struct smu_context *smu) 1899 - { 1900 - int ret = 0; 1901 - 1902 - if (smu_cmn_feature_is_enabled(smu, SMU_FEATURE_ACDC_BIT)) 1903 - ret = smu_v14_0_allow_ih_interrupt(smu); 1904 - 1905 - return ret; 1906 - } 1907 - 1908 1898 int smu_v14_0_enable_thermal_alert(struct smu_context *smu) 1909 1899 { 1910 1900 int ret = 0; ··· 1906 1916 if (ret) 1907 1917 return ret; 1908 1918 1909 - return smu_v14_0_process_pending_interrupt(smu); 1919 + return smu_v14_0_allow_ih_interrupt(smu); 1910 1920 } 1911 1921 1912 1922 int smu_v14_0_disable_thermal_alert(struct smu_context *smu)
+24 -16
drivers/gpu/drm/display/drm_dp_mst_topology.c
4025 4025 return 0; 4026 4026 } 4027 4027 4028 + static bool primary_mstb_probing_is_done(struct drm_dp_mst_topology_mgr *mgr) 4029 + { 4030 + bool probing_done = false; 4031 + 4032 + mutex_lock(&mgr->lock); 4033 + 4034 + if (mgr->mst_primary && drm_dp_mst_topology_try_get_mstb(mgr->mst_primary)) { 4035 + probing_done = mgr->mst_primary->link_address_sent; 4036 + drm_dp_mst_topology_put_mstb(mgr->mst_primary); 4037 + } 4038 + 4039 + mutex_unlock(&mgr->lock); 4040 + 4041 + return probing_done; 4042 + } 4043 + 4028 4044 static inline bool 4029 4045 drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr, 4030 4046 struct drm_dp_pending_up_req *up_req) ··· 4071 4055 4072 4056 /* TODO: Add missing handler for DP_RESOURCE_STATUS_NOTIFY events */ 4073 4057 if (msg->req_type == DP_CONNECTION_STATUS_NOTIFY) { 4074 - dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); 4075 - hotplug = true; 4058 + if (!primary_mstb_probing_is_done(mgr)) {
4059 + drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it.\n"); 4060 + } else { 4061 + dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); 4062 + hotplug = true; 4063 + } 4076 4064 } 4077 4065 4078 4066 drm_dp_mst_topology_put_mstb(mstb); ··· 4158 4138 drm_dp_send_up_ack_reply(mgr, mst_primary, up_req->msg.req_type, 4159 4139 false); 4160 4140 4141 + drm_dp_mst_topology_put_mstb(mst_primary); 4142 + 4161 4143 if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) { 4162 4144 const struct drm_dp_connection_status_notify *conn_stat = 4163 4145 &up_req->msg.u.conn_stat; 4164 - bool handle_csn; 4165 4146 4166 4147 drm_dbg_kms(mgr->dev, "Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", 4167 4148 conn_stat->port_number, ··· 4171 4150 conn_stat->message_capability_status, 4172 4151 conn_stat->input_port, 4173 4152 conn_stat->peer_device_type); 4174 - 4175 - mutex_lock(&mgr->probe_lock); 4176 - handle_csn = mst_primary->link_address_sent; 4177 - mutex_unlock(&mgr->probe_lock); 4178 - 4179 - if (!handle_csn) { 4180 - drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it."); 4181 - kfree(up_req); 4182 - goto out_put_primary; 4183 - } 4184 4153 } else if (up_req->msg.req_type == DP_RESOURCE_STATUS_NOTIFY) { 4185 4154 const struct drm_dp_resource_status_notify *res_stat = 4186 4155 &up_req->msg.u.resource_stat; ··· 4185 4174 list_add_tail(&up_req->next, &mgr->up_req_list); 4186 4175 mutex_unlock(&mgr->up_req_lock); 4187 4176 queue_work(system_long_wq, &mgr->up_req_work); 4188 - 4189 - out_put_primary: 4190 - drm_dp_mst_topology_put_mstb(mst_primary); 4191 4177 out_clear_reply: 4192 4178 reset_msg_rx_state(&mgr->up_req_recv); 4193 4179 return ret;
+4
drivers/gpu/drm/drm_atomic_uapi.c
··· 956 956 957 957 if (mode != DRM_MODE_DPMS_ON) 958 958 mode = DRM_MODE_DPMS_OFF; 959 + 960 + if (connector->dpms == mode) 961 + goto out; 962 + 959 963 connector->dpms = mode; 960 964 961 965 crtc = connector->state->crtc;
+4
drivers/gpu/drm/drm_connector.c
··· 1427 1427 * callback. For atomic drivers the remapping to the "ACTIVE" property is 1428 1428 * implemented in the DRM core. 1429 1429 * 1430 + * On atomic drivers any DPMS setproperty ioctl where the value does not 1431 + * change is completely skipped, otherwise a full atomic commit will occur. 1432 + * On legacy drivers the exact behavior is driver specific. 1433 + * 1430 1434 * Note that this property cannot be set through the MODE_ATOMIC ioctl, 1431 1435 * userspace must use "ACTIVE" on the CRTC instead. 1432 1436 *
+8 -8
drivers/gpu/drm/drm_panic_qr.rs
··· 545 545 } 546 546 self.push(&mut offset, (MODE_STOP, 4)); 547 547 548 - let pad_offset = (offset + 7) / 8; 548 + let pad_offset = offset.div_ceil(8); 549 549 for i in pad_offset..self.version.max_data() { 550 550 self.data[i] = PADDING[(i & 1) ^ (pad_offset & 1)]; 551 551 } ··· 659 659 impl QrImage<'_> { 660 660 fn new<'a, 'b>(em: &'b EncodedMsg<'b>, qrdata: &'a mut [u8]) -> QrImage<'a> { 661 661 let width = em.version.width(); 662 - let stride = (width + 7) / 8; 662 + let stride = width.div_ceil(8); 663 663 let data = qrdata; 664 664 665 665 let mut qr_image = QrImage { ··· 911 911 /// 912 912 /// * `url`: The base URL of the QR code. It will be encoded as Binary segment. 913 913 /// * `data`: A pointer to the binary data, to be encoded. if URL is NULL, it 914 - /// will be encoded as binary segment, otherwise it will be encoded 915 - /// efficiently as a numeric segment, and appended to the URL. 914 + /// will be encoded as binary segment, otherwise it will be encoded 915 + /// efficiently as a numeric segment, and appended to the URL. 916 916 /// * `data_len`: Length of the data, that needs to be encoded, must be less 917 - /// than data_size. 917 + /// than data_size. 918 918 /// * `data_size`: Size of data buffer, it should be at least 4071 bytes to hold 919 - /// a V40 QR code. It will then be overwritten with the QR code image. 919 + /// a V40 QR code. It will then be overwritten with the QR code image. 920 920 /// * `tmp`: A temporary buffer that the QR code encoder will use, to write the 921 - /// segments and ECC. 921 + /// segments and ECC. 922 922 /// * `tmp_size`: Size of the temporary buffer, it must be at least 3706 bytes 923 - /// long for V40. 923 + /// long for V40. 924 924 /// 925 925 /// # Safety 926 926 ///
+5
drivers/gpu/drm/gma500/mid_bios.c
··· 279 279 0, PCI_DEVFN(2, 0)); 280 280 int ret = -1; 281 281 282 + if (pci_gfx_root == NULL) { 283 + WARN_ON(1); 284 + return; 285 + } 286 + 282 287 /* Get the address of the platform config vbt */ 283 288 pci_read_config_dword(pci_gfx_root, 0xFC, &addr); 284 289 pci_dev_put(pci_gfx_root);
+2
drivers/gpu/drm/hyperv/hyperv_drm_drv.c
··· 154 154 return 0; 155 155 156 156 err_free_mmio: 157 + iounmap(hv->vram); 157 158 vmbus_free_mmio(hv->mem->start, hv->fb_size); 158 159 err_vmbus_close: 159 160 vmbus_close(hdev->channel); ··· 173 172 vmbus_close(hdev->channel); 174 173 hv_set_drvdata(hdev, NULL); 175 174 175 + iounmap(hv->vram); 176 176 vmbus_free_mmio(hv->mem->start, hv->fb_size); 177 177 } 178 178
+2 -3
drivers/gpu/drm/i915/display/intel_display.c
··· 7830 7830 7831 7831 intel_program_dpkgc_latency(state); 7832 7832 7833 - if (state->modeset) 7834 - intel_set_cdclk_post_plane_update(state); 7835 - 7836 7833 intel_wait_for_vblank_workers(state); 7837 7834 7838 7835 /* FIXME: We should call drm_atomic_helper_commit_hw_done() here ··· 7903 7906 intel_verify_planes(state); 7904 7907 7905 7908 intel_sagv_post_plane_update(state); 7909 + if (state->modeset) 7910 + intel_set_cdclk_post_plane_update(state); 7906 7911 intel_pmdemand_post_plane_update(state); 7907 7912 7908 7913 drm_atomic_helper_commit_hw_done(&state->base);
+2 -1
drivers/gpu/drm/i915/display/intel_dp_mst.c
··· 1867 1867 /* create encoders */ 1868 1868 mst_stream_encoders_create(dig_port); 1869 1869 ret = drm_dp_mst_topology_mgr_init(&intel_dp->mst_mgr, display->drm, 1870 - &intel_dp->aux, 16, 3, conn_base_id); 1870 + &intel_dp->aux, 16, 1871 + INTEL_NUM_PIPES(display), conn_base_id); 1871 1872 if (ret) { 1872 1873 intel_dp->mst_mgr.cbs = NULL; 1873 1874 return ret;
+4 -1
drivers/gpu/drm/i915/gem/i915_gem_mman.c
··· 164 164 * 4 - Support multiple fault handlers per object depending on object's 165 165 * backing storage (a.k.a. MMAP_OFFSET). 166 166 * 167 + * 5 - Support multiple partial mmaps(mmap part of BO + unmap a offset, multiple 168 + * times with different size and offset). 169 + * 167 170 * Restrictions: 168 171 * 169 172 * * snoopable objects cannot be accessed via the GTT. It can cause machine ··· 194 191 */ 195 192 int i915_gem_mmap_gtt_version(void) 196 193 { 197 - return 4; 194 + return 5; 198 195 } 199 196 200 197 static inline struct i915_gtt_view
+4 -2
drivers/gpu/drm/imagination/pvr_fw_meta.c
··· 527 527 static void 528 528 pvr_meta_vm_unmap(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj) 529 529 { 530 - pvr_vm_unmap(pvr_dev->kernel_vm_ctx, fw_obj->fw_mm_node.start, 531 - fw_obj->fw_mm_node.size); 530 + struct pvr_gem_object *pvr_obj = fw_obj->gem; 531 + 532 + pvr_vm_unmap_obj(pvr_dev->kernel_vm_ctx, pvr_obj, 533 + fw_obj->fw_mm_node.start, fw_obj->fw_mm_node.size); 532 534 } 533 535 534 536 static bool
+2 -2
drivers/gpu/drm/imagination/pvr_fw_trace.c
··· 333 333 if (sf_id == ROGUE_FW_SF_LAST) 334 334 return -EINVAL; 335 335 336 - timestamp = read_fw_trace(trace_seq_data, 1) | 337 - ((u64)read_fw_trace(trace_seq_data, 2) << 32); 336 + timestamp = ((u64)read_fw_trace(trace_seq_data, 1) << 32) | 337 + read_fw_trace(trace_seq_data, 2); 338 338 timestamp = (timestamp & ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK) >> 339 339 ROGUE_FWT_TIMESTAMP_TIME_SHIFT; 340 340
+14 -4
drivers/gpu/drm/imagination/pvr_queue.c
··· 109 109 return PVR_DRIVER_NAME; 110 110 } 111 111 112 + static void pvr_queue_fence_release_work(struct work_struct *w) 113 + { 114 + struct pvr_queue_fence *fence = container_of(w, struct pvr_queue_fence, release_work); 115 + 116 + pvr_context_put(fence->queue->ctx); 117 + dma_fence_free(&fence->base); 118 + } 119 + 112 120 static void pvr_queue_fence_release(struct dma_fence *f) 113 121 { 114 122 struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base); 123 + struct pvr_device *pvr_dev = fence->queue->ctx->pvr_dev; 115 124 116 - pvr_context_put(fence->queue->ctx); 117 - dma_fence_free(f); 125 + queue_work(pvr_dev->sched_wq, &fence->release_work); 118 126 } 119 127 120 128 static const char * ··· 276 268 277 269 pvr_context_get(queue->ctx); 278 270 fence->queue = queue; 271 + INIT_WORK(&fence->release_work, pvr_queue_fence_release_work); 279 272 dma_fence_init(&fence->base, fence_ops, 280 273 &fence_ctx->lock, fence_ctx->id, 281 274 atomic_inc_return(&fence_ctx->seqno)); ··· 313 304 static void 314 305 pvr_queue_job_fence_init(struct dma_fence *fence, struct pvr_queue *queue) 315 306 { 316 - pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops, 317 - &queue->job_fence_ctx); 307 + if (!fence->ops) 308 + pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops, 309 + &queue->job_fence_ctx); 318 310 } 319 311 320 312 /**
+4
drivers/gpu/drm/imagination/pvr_queue.h
··· 5 5 #define PVR_QUEUE_H 6 6 7 7 #include <drm/gpu_scheduler.h> 8 + #include <linux/workqueue.h> 8 9 9 10 #include "pvr_cccb.h" 10 11 #include "pvr_device.h" ··· 64 63 65 64 /** @queue: Queue that created this fence. */ 66 65 struct pvr_queue *queue; 66 + 67 + /** @release_work: Fence release work structure. */ 68 + struct work_struct release_work; 67 69 }; 68 70 69 71 /**
+108 -26
drivers/gpu/drm/imagination/pvr_vm.c
··· 293 293 294 294 static int 295 295 pvr_vm_bind_op_unmap_init(struct pvr_vm_bind_op *bind_op, 296 - struct pvr_vm_context *vm_ctx, u64 device_addr, 297 - u64 size) 296 + struct pvr_vm_context *vm_ctx, 297 + struct pvr_gem_object *pvr_obj, 298 + u64 device_addr, u64 size) 298 299 { 299 300 int err; 300 301 ··· 319 318 goto err_bind_op_fini; 320 319 } 321 320 321 + bind_op->pvr_obj = pvr_obj; 322 322 bind_op->vm_ctx = vm_ctx; 323 323 bind_op->device_addr = device_addr; 324 324 bind_op->size = size; ··· 600 598 } 601 599 602 600 /** 603 - * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context. 604 - * @vm_ctx: Target VM context. 605 - * 606 - * This function ensures that no mappings are left dangling by unmapping them 607 - * all in order of ascending device-virtual address. 608 - */ 609 - void 610 - pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx) 611 - { 612 - WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start, 613 - vm_ctx->gpuvm_mgr.mm_range)); 614 - } 615 - 616 - /** 617 601 * pvr_vm_context_release() - Teardown a VM context. 618 602 * @ref_count: Pointer to reference counter of the VM context. 619 603 * ··· 691 703 struct pvr_vm_bind_op *bind_op = vm_exec->extra.priv; 692 704 struct pvr_gem_object *pvr_obj = bind_op->pvr_obj; 693 705 694 - /* Unmap operations don't have an object to lock. */ 695 - if (!pvr_obj) 696 - return 0; 697 - 698 - /* Acquire lock on the GEM being mapped. */ 706 + /* Acquire lock on the GEM object being mapped/unmapped. */ 699 707 return drm_exec_lock_obj(&vm_exec->exec, gem_from_pvr_gem(pvr_obj)); 700 708 } 701 709 ··· 756 772 } 757 773 758 774 /** 759 - * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory. 775 + * pvr_vm_unmap_obj_locked() - Unmap an already mapped section of device-virtual 776 + * memory. 760 777 * @vm_ctx: Target VM context. 778 + * @pvr_obj: Target PowerVR memory object. 761 779 * @device_addr: Virtual device address at the start of the target mapping. 
762 780 * @size: Size of the target mapping. 763 781 * ··· 770 784 * * Any error encountered while performing internal operations required to 771 785 * destroy the mapping (returned from pvr_vm_gpuva_unmap or 772 786 * pvr_vm_gpuva_remap). 787 + * 788 + * The vm_ctx->lock must be held when calling this function. 773 789 */ 774 - int 775 - pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size) 790 + static int 791 + pvr_vm_unmap_obj_locked(struct pvr_vm_context *vm_ctx, 792 + struct pvr_gem_object *pvr_obj, 793 + u64 device_addr, u64 size) 776 794 { 777 795 struct pvr_vm_bind_op bind_op = {0}; 778 796 struct drm_gpuvm_exec vm_exec = { ··· 789 799 }, 790 800 }; 791 801 792 - int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, device_addr, 793 - size); 802 + int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, pvr_obj, 803 + device_addr, size); 794 804 if (err) 795 805 return err; 806 + 807 + pvr_gem_object_get(pvr_obj); 796 808 797 809 err = drm_gpuvm_exec_lock(&vm_exec); 798 810 if (err) ··· 808 816 pvr_vm_bind_op_fini(&bind_op); 809 817 810 818 return err; 819 + } 820 + 821 + /** 822 + * pvr_vm_unmap_obj() - Unmap an already mapped section of device-virtual 823 + * memory. 824 + * @vm_ctx: Target VM context. 825 + * @pvr_obj: Target PowerVR memory object. 826 + * @device_addr: Virtual device address at the start of the target mapping. 827 + * @size: Size of the target mapping. 828 + * 829 + * Return: 830 + * * 0 on success, 831 + * * Any error encountered by pvr_vm_unmap_obj_locked. 832 + */ 833 + int 834 + pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj, 835 + u64 device_addr, u64 size) 836 + { 837 + int err; 838 + 839 + mutex_lock(&vm_ctx->lock); 840 + err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, device_addr, size); 841 + mutex_unlock(&vm_ctx->lock); 842 + 843 + return err; 844 + } 845 + 846 + /** 847 + * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory. 
848 + * @vm_ctx: Target VM context. 849 + * @device_addr: Virtual device address at the start of the target mapping. 850 + * @size: Size of the target mapping. 851 + * 852 + * Return: 853 + * * 0 on success, 854 + * * Any error encountered by drm_gpuva_find, 855 + * * Any error encountered by pvr_vm_unmap_obj_locked. 856 + */ 857 + int 858 + pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size) 859 + { 860 + struct pvr_gem_object *pvr_obj; 861 + struct drm_gpuva *va; 862 + int err; 863 + 864 + mutex_lock(&vm_ctx->lock); 865 + 866 + va = drm_gpuva_find(&vm_ctx->gpuvm_mgr, device_addr, size); 867 + if (va) { 868 + pvr_obj = gem_to_pvr_gem(va->gem.obj); 869 + err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, 870 + va->va.addr, va->va.range); 871 + } else { 872 + err = -ENOENT; 873 + } 874 + 875 + mutex_unlock(&vm_ctx->lock); 876 + 877 + return err; 878 + } 879 + 880 + /** 881 + * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context. 882 + * @vm_ctx: Target VM context. 883 + * 884 + * This function ensures that no mappings are left dangling by unmapping them 885 + * all in order of ascending device-virtual address. 886 + */ 887 + void 888 + pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx) 889 + { 890 + mutex_lock(&vm_ctx->lock); 891 + 892 + for (;;) { 893 + struct pvr_gem_object *pvr_obj; 894 + struct drm_gpuva *va; 895 + 896 + va = drm_gpuva_find_first(&vm_ctx->gpuvm_mgr, 897 + vm_ctx->gpuvm_mgr.mm_start, 898 + vm_ctx->gpuvm_mgr.mm_range); 899 + if (!va) 900 + break; 901 + 902 + pvr_obj = gem_to_pvr_gem(va->gem.obj); 903 + 904 + WARN_ON(pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, 905 + va->va.addr, va->va.range)); 906 + } 907 + 908 + mutex_unlock(&vm_ctx->lock); 811 909 } 812 910 813 911 /* Static data areas are determined by firmware. */
+3
drivers/gpu/drm/imagination/pvr_vm.h
··· 38 38 int pvr_vm_map(struct pvr_vm_context *vm_ctx, 39 39 struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset, 40 40 u64 device_addr, u64 size); 41 + int pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx, 42 + struct pvr_gem_object *pvr_obj, 43 + u64 device_addr, u64 size); 41 44 int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size); 42 45 void pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx); 43 46
+1
drivers/gpu/drm/nouveau/Kconfig
··· 4 4 depends on DRM && PCI && MMU 5 5 select IOMMU_API 6 6 select FW_LOADER 7 + select FW_CACHE if PM_SLEEP 7 8 select DRM_CLIENT_SELECTION 8 9 select DRM_DISPLAY_DP_HELPER 9 10 select DRM_DISPLAY_HDMI_HELPER
+2 -1
drivers/gpu/drm/radeon/r300.c
··· 359 359 return -1; 360 360 } 361 361 362 - static void r300_gpu_init(struct radeon_device *rdev) 362 + /* rs400_gpu_init also calls this! */ 363 + void r300_gpu_init(struct radeon_device *rdev) 363 364 { 364 365 uint32_t gb_tile_config, tmp; 365 366
+1
drivers/gpu/drm/radeon/radeon_asic.h
··· 165 165 */ 166 166 extern int r300_init(struct radeon_device *rdev); 167 167 extern void r300_fini(struct radeon_device *rdev); 168 + extern void r300_gpu_init(struct radeon_device *rdev); 168 169 extern int r300_suspend(struct radeon_device *rdev); 169 170 extern int r300_resume(struct radeon_device *rdev); 170 171 extern int r300_asic_reset(struct radeon_device *rdev, bool hard);
+16 -2
drivers/gpu/drm/radeon/rs400.c
··· 256 256 257 257 static void rs400_gpu_init(struct radeon_device *rdev) 258 258 { 259 - /* FIXME: is this correct ? */ 260 - r420_pipes_init(rdev); 259 + /* Earlier code was calling r420_pipes_init and then 260 + * rs400_mc_wait_for_idle(rdev). The problem is that 261 + * at least on my Mobility Radeon Xpress 200M RC410 card 262 + * that ends up in this code path ends up num_gb_pipes == 3 263 + * while the card seems to have only one pipe. With the 264 + * r420 pipe initialization method. 265 + * 266 + * Problems shown up as HyperZ glitches, see: 267 + * https://bugs.freedesktop.org/show_bug.cgi?id=110897 268 + * 269 + * Delegating initialization to r300 code seems to work 270 + * and results in proper pipe numbers. The rs400 cards 271 + * are said to be not r400, but r300 kind of cards. 272 + */ 273 + r300_gpu_init(rdev); 274 + 261 275 if (rs400_mc_wait_for_idle(rdev)) { 262 276 pr_warn("rs400: Failed to wait MC idle while programming pipes. Bad things might happen. %08x\n", 263 277 RREG32(RADEON_MC_STATUS));
+2 -2
drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
··· 21 21 * 22 22 */ 23 23 24 - #if !defined(_GPU_SCHED_TRACE_H) || defined(TRACE_HEADER_MULTI_READ) 24 + #if !defined(_GPU_SCHED_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ) 25 25 #define _GPU_SCHED_TRACE_H_ 26 26 27 27 #include <linux/stringify.h> ··· 106 106 __entry->seqno) 107 107 ); 108 108 109 - #endif 109 + #endif /* _GPU_SCHED_TRACE_H_ */ 110 110 111 111 /* This part must be outside protection */ 112 112 #undef TRACE_INCLUDE_PATH
+3 -2
drivers/gpu/drm/tiny/bochs.c
··· 335 335 bochs->xres, bochs->yres, bochs->bpp, 336 336 bochs->yres_virtual); 337 337 338 - bochs_hw_blank(bochs, false); 339 - 340 338 bochs_dispi_write(bochs, VBE_DISPI_INDEX_ENABLE, 0); 341 339 bochs_dispi_write(bochs, VBE_DISPI_INDEX_BPP, bochs->bpp); 342 340 bochs_dispi_write(bochs, VBE_DISPI_INDEX_XRES, bochs->xres); ··· 504 506 static void bochs_crtc_helper_atomic_enable(struct drm_crtc *crtc, 505 507 struct drm_atomic_state *state) 506 508 { 509 + struct bochs_device *bochs = to_bochs_device(crtc->dev); 510 + 511 + bochs_hw_blank(bochs, false); 507 512 } 508 513 509 514 static void bochs_crtc_helper_atomic_disable(struct drm_crtc *crtc,
-10
drivers/gpu/drm/xe/display/xe_plane_initial.c
··· 194 194 to_intel_plane(crtc->base.primary); 195 195 struct intel_plane_state *plane_state = 196 196 to_intel_plane_state(plane->base.state); 197 - struct intel_crtc_state *crtc_state = 198 - to_intel_crtc_state(crtc->base.state); 199 197 struct drm_framebuffer *fb; 200 198 struct i915_vma *vma; 201 199 ··· 239 241 atomic_or(plane->frontbuffer_bit, &to_intel_frontbuffer(fb)->bits); 240 242 241 243 plane_config->vma = vma; 242 - 243 - /* 244 - * Flip to the newly created mapping ASAP, so we can re-use the 245 - * first part of GGTT for WOPCM, prevent flickering, and prevent 246 - * the lookup of sysmem scratch pages. 247 - */ 248 - plane->check_plane(crtc_state, plane_state); 249 - plane->async_flip(NULL, plane, crtc_state, plane_state, true); 250 244 return; 251 245 252 246 nofb:
+2 -2
drivers/gpu/drm/xe/xe_gt.c
··· 380 380 if (err) 381 381 return err; 382 382 383 - xe_wa_process_gt(gt); 384 383 xe_wa_process_oob(gt); 385 - xe_tuning_process_gt(gt); 386 384 387 385 xe_force_wake_init_gt(gt, gt_to_fw(gt)); 388 386 spin_lock_init(&gt->global_invl_lock); ··· 472 474 } 473 475 474 476 xe_gt_mcr_set_implicit_defaults(gt); 477 + xe_wa_process_gt(gt); 478 + xe_tuning_process_gt(gt); 475 479 xe_reg_sr_apply_mmio(&gt->reg_sr, gt); 476 480 477 481 err = xe_gt_clock_init(gt);
+40 -13
drivers/gpu/drm/xe/xe_guc_pc.c
··· 6 6 #include "xe_guc_pc.h" 7 7 8 8 #include <linux/delay.h> 9 + #include <linux/ktime.h> 9 10 10 11 #include <drm/drm_managed.h> 11 12 #include <generated/xe_wa_oob.h> ··· 20 19 #include "xe_gt.h" 21 20 #include "xe_gt_idle.h" 22 21 #include "xe_gt_printk.h" 22 + #include "xe_gt_throttle.h" 23 23 #include "xe_gt_types.h" 24 24 #include "xe_guc.h" 25 25 #include "xe_guc_ct.h" ··· 50 48 51 49 #define LNL_MERT_FREQ_CAP 800 52 50 #define BMG_MERT_FREQ_CAP 2133 51 + 52 + #define SLPC_RESET_TIMEOUT_MS 5 /* roughly 5ms, but no need for precision */ 53 + #define SLPC_RESET_EXTENDED_TIMEOUT_MS 1000 /* To be used only at pc_start */ 53 54 54 55 /** 55 56 * DOC: GuC Power Conservation (PC) ··· 118 113 FIELD_PREP(HOST2GUC_PC_SLPC_REQUEST_MSG_1_EVENT_ARGC, count)) 119 114 120 115 static int wait_for_pc_state(struct xe_guc_pc *pc, 121 - enum slpc_global_state state) 116 + enum slpc_global_state state, 117 + int timeout_ms) 122 118 { 123 - int timeout_us = 5000; /* rought 5ms, but no need for precision */ 119 + int timeout_us = 1000 * timeout_ms; 124 120 int slept, wait = 10; 125 121 126 122 xe_device_assert_mem_access(pc_to_xe(pc)); ··· 170 164 }; 171 165 int ret; 172 166 173 - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) 167 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 168 + SLPC_RESET_TIMEOUT_MS)) 174 169 return -EAGAIN; 175 170 176 171 /* Blocking here to ensure the results are ready before reading them */ ··· 194 187 }; 195 188 int ret; 196 189 197 - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) 190 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 191 + SLPC_RESET_TIMEOUT_MS)) 198 192 return -EAGAIN; 199 193 200 194 ret = xe_guc_ct_send(ct, action, ARRAY_SIZE(action), 0, 0); ··· 216 208 struct xe_guc_ct *ct = &pc_to_guc(pc)->ct; 217 209 int ret; 218 210 219 - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) 211 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 212 + SLPC_RESET_TIMEOUT_MS)) 220 213 return -EAGAIN; 221 214 222 215 
ret = xe_guc_ct_send(ct, action, ARRAY_SIZE(action), 0, 0); ··· 449 440 return freq; 450 441 } 451 442 443 + static u32 get_cur_freq(struct xe_gt *gt) 444 + { 445 + u32 freq; 446 + 447 + freq = xe_mmio_read32(&gt->mmio, RPNSWREQ); 448 + freq = REG_FIELD_GET(REQ_RATIO_MASK, freq); 449 + return decode_freq(freq); 450 + } 451 + 452 452 /** 453 453 * xe_guc_pc_get_cur_freq - Get Current requested frequency 454 454 * @pc: The GuC PC ··· 481 463 return -ETIMEDOUT; 482 464 } 483 465 484 - *freq = xe_mmio_read32(&gt->mmio, RPNSWREQ); 485 - 486 - *freq = REG_FIELD_GET(REQ_RATIO_MASK, *freq); 487 - *freq = decode_freq(*freq); 466 + *freq = get_cur_freq(gt); 488 467 489 468 xe_force_wake_put(gt_to_fw(gt), fw_ref); 490 469 return 0; ··· 1017 1002 struct xe_gt *gt = pc_to_gt(pc); 1018 1003 u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data)); 1019 1004 unsigned int fw_ref; 1005 + ktime_t earlier; 1020 1006 int ret; 1021 1007 1022 1008 xe_gt_assert(gt, xe_device_uc_enabled(xe)); ··· 1042 1026 memset(pc->bo->vmap.vaddr, 0, size); 1043 1027 slpc_shared_data_write(pc, header.size, size); 1044 1028 1029 + earlier = ktime_get(); 1045 1030 ret = pc_action_reset(pc); 1046 1031 if (ret) 1047 1032 goto out; 1048 1033 1049 - if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING)) { 1050 - xe_gt_err(gt, "GuC PC Start failed\n"); 1051 - ret = -EIO; 1052 - goto out; 1034 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 1035 + SLPC_RESET_TIMEOUT_MS)) { 1036 + xe_gt_warn(gt, "GuC PC start taking longer than normal [freq = %dMHz (req = %dMHz), perf_limit_reasons = 0x%08X]\n", 1037 + xe_guc_pc_get_act_freq(pc), get_cur_freq(gt), 1038 + xe_gt_throttle_get_limit_reasons(gt)); 1039 + 1040 + if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING, 1041 + SLPC_RESET_EXTENDED_TIMEOUT_MS)) { 1042 + xe_gt_err(gt, "GuC PC Start failed: Dynamic GT frequency control and GT sleep states are now disabled.\n"); 1043 + goto out; 1044 + } 1045 + 1046 + xe_gt_warn(gt, "GuC PC excessive start time: %lldms", 
1047 + ktime_ms_delta(ktime_get(), earlier)); 1053 1048 } 1054 1049 1055 1050 ret = pc_init_freqs(pc);
+1 -1
drivers/gpu/drm/xe/xe_guc_submit.c
··· 1246 1246 xe_pm_runtime_get(guc_to_xe(guc)); 1247 1247 trace_xe_exec_queue_destroy(q); 1248 1248 1249 + release_guc_id(guc, q); 1249 1250 if (xe_exec_queue_is_lr(q)) 1250 1251 cancel_work_sync(&ge->lr_tdr); 1251 1252 /* Confirm no work left behind accessing device structures */ 1252 1253 cancel_delayed_work_sync(&ge->sched.base.work_tdr); 1253 - release_guc_id(guc, q); 1254 1254 xe_sched_entity_fini(&ge->entity); 1255 1255 xe_sched_fini(&ge->sched); 1256 1256
+147 -51
drivers/gpu/drm/xe/xe_hmm.c
··· 19 19 return (end - start) >> PAGE_SHIFT; 20 20 } 21 21 22 - /* 22 + /** 23 23 * xe_mark_range_accessed() - mark a range as accessed, so the core mm 24 24 * has such information for memory eviction or write back to 25 25 * hard disk 26 - * 27 26 * @range: the range to mark 28 27 * @write: if write to this range, we mark pages in this range 29 28 * as dirty ··· 42 43 } 43 44 } 44 45 45 - /* 46 + static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st, 47 + struct hmm_range *range, struct rw_semaphore *notifier_sem) 48 + { 49 + unsigned long i, npages, hmm_pfn; 50 + unsigned long num_chunks = 0; 51 + int ret; 52 + 53 + /* HMM docs say this is needed. */ 54 + ret = down_read_interruptible(notifier_sem); 55 + if (ret) 56 + return ret; 57 + 58 + if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) { 59 + up_read(notifier_sem); 60 + return -EAGAIN; 61 + } 62 + 63 + npages = xe_npages_in_range(range->start, range->end); 64 + for (i = 0; i < npages;) { 65 + unsigned long len; 66 + 67 + hmm_pfn = range->hmm_pfns[i]; 68 + xe_assert(xe, hmm_pfn & HMM_PFN_VALID); 69 + 70 + len = 1UL << hmm_pfn_to_map_order(hmm_pfn); 71 + 72 + /* If order > 0 the page may extend beyond range->start */ 73 + len -= (hmm_pfn & ~HMM_PFN_FLAGS) & (len - 1); 74 + i += len; 75 + num_chunks++; 76 + } 77 + up_read(notifier_sem); 78 + 79 + return sg_alloc_table(st, num_chunks, GFP_KERNEL); 80 + } 81 + 82 + /** 46 83 * xe_build_sg() - build a scatter gather table for all the physical pages/pfn 47 84 * in a hmm_range. dma-map pages if necessary. dma-address is saved in the sg table 48 85 * and will be used to program GPU page table later. 49 - * 50 86 * @xe: the xe device that will access the dma-address in the sg table 51 87 * @range: the hmm range that we build the sg table from. range->hmm_pfns[] 52 88 * has the pfn numbers of pages that back up this hmm address range. 53 89 * @st: pointer to the sg table. 90 + * @notifier_sem: The xe notifier lock. 
54 91 * @write: whether we write to this range. This decides dma map direction 55 92 * for system pages. If write, we map it bidirectional; otherwise 56 93 * DMA_TO_DEVICE ··· 113 78 * Returns 0 if successful; -ENOMEM if it fails to allocate memory 114 79 */ 115 80 static int xe_build_sg(struct xe_device *xe, struct hmm_range *range, 116 - struct sg_table *st, bool write) 81 + struct sg_table *st, 82 + struct rw_semaphore *notifier_sem, 83 + bool write) 117 84 { 85 + unsigned long npages = xe_npages_in_range(range->start, range->end); 118 86 struct device *dev = xe->drm.dev; 119 - struct page **pages; 120 - u64 i, npages; 121 - int ret; 87 + struct scatterlist *sgl; 88 + struct page *page; 89 + unsigned long i, j; 122 90 123 - npages = xe_npages_in_range(range->start, range->end); 124 - pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL); 125 - if (!pages) 126 - return -ENOMEM; 91 + lockdep_assert_held(notifier_sem); 127 92 128 - for (i = 0; i < npages; i++) { 129 - pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]); 130 - xe_assert(xe, !is_device_private_page(pages[i])); 93 + i = 0; 94 + for_each_sg(st->sgl, sgl, st->nents, j) { 95 + unsigned long hmm_pfn, size; 96 + 97 + hmm_pfn = range->hmm_pfns[i]; 98 + page = hmm_pfn_to_page(hmm_pfn); 99 + xe_assert(xe, !is_device_private_page(page)); 100 + 101 + size = 1UL << hmm_pfn_to_map_order(hmm_pfn); 102 + size -= page_to_pfn(page) & (size - 1); 103 + i += size; 104 + 105 + if (unlikely(j == st->nents - 1)) { 106 + xe_assert(xe, i >= npages); 107 + if (i > npages) 108 + size -= (i - npages); 109 + 110 + sg_mark_end(sgl); 111 + } else { 112 + xe_assert(xe, i < npages); 113 + } 114 + 115 + sg_set_page(sgl, page, size << PAGE_SHIFT, 0); 131 116 } 132 117 133 - ret = sg_alloc_table_from_pages_segment(st, pages, npages, 0, npages << PAGE_SHIFT, 134 - xe_sg_segment_size(dev), GFP_KERNEL); 135 - if (ret) 136 - goto free_pages; 137 - 138 - ret = dma_map_sgtable(dev, st, write ? 
DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 139 - DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING); 140 - if (ret) { 141 - sg_free_table(st); 142 - st = NULL; 143 - } 144 - 145 - free_pages: 146 - kvfree(pages); 147 - return ret; 118 + return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 119 + DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING); 148 120 } 149 121 150 - /* 122 + static void xe_hmm_userptr_set_mapped(struct xe_userptr_vma *uvma) 123 + { 124 + struct xe_userptr *userptr = &uvma->userptr; 125 + struct xe_vm *vm = xe_vma_vm(&uvma->vma); 126 + 127 + lockdep_assert_held_write(&vm->lock); 128 + lockdep_assert_held(&vm->userptr.notifier_lock); 129 + 130 + mutex_lock(&userptr->unmap_mutex); 131 + xe_assert(vm->xe, !userptr->mapped); 132 + userptr->mapped = true; 133 + mutex_unlock(&userptr->unmap_mutex); 134 + } 135 + 136 + void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma) 137 + { 138 + struct xe_userptr *userptr = &uvma->userptr; 139 + struct xe_vma *vma = &uvma->vma; 140 + bool write = !xe_vma_read_only(vma); 141 + struct xe_vm *vm = xe_vma_vm(vma); 142 + struct xe_device *xe = vm->xe; 143 + 144 + if (!lockdep_is_held_type(&vm->userptr.notifier_lock, 0) && 145 + !lockdep_is_held_type(&vm->lock, 0) && 146 + !(vma->gpuva.flags & XE_VMA_DESTROYED)) { 147 + /* Don't unmap in exec critical section. */ 148 + xe_vm_assert_held(vm); 149 + /* Don't unmap while mapping the sg. */ 150 + lockdep_assert_held(&vm->lock); 151 + } 152 + 153 + mutex_lock(&userptr->unmap_mutex); 154 + if (userptr->sg && userptr->mapped) 155 + dma_unmap_sgtable(xe->drm.dev, userptr->sg, 156 + write ? 
DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0); 157 + userptr->mapped = false; 158 + mutex_unlock(&userptr->unmap_mutex); 159 + } 160 + 161 + /** 151 162 * xe_hmm_userptr_free_sg() - Free the scatter gather table of userptr 152 - * 153 163 * @uvma: the userptr vma which hold the scatter gather table 154 164 * 155 165 * With function xe_userptr_populate_range, we allocate storage of ··· 204 124 void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma) 205 125 { 206 126 struct xe_userptr *userptr = &uvma->userptr; 207 - struct xe_vma *vma = &uvma->vma; 208 - bool write = !xe_vma_read_only(vma); 209 - struct xe_vm *vm = xe_vma_vm(vma); 210 - struct xe_device *xe = vm->xe; 211 - struct device *dev = xe->drm.dev; 212 127 213 - xe_assert(xe, userptr->sg); 214 - dma_unmap_sgtable(dev, userptr->sg, 215 - write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0); 216 - 128 + xe_assert(xe_vma_vm(&uvma->vma)->xe, userptr->sg); 129 + xe_hmm_userptr_unmap(uvma); 217 130 sg_free_table(userptr->sg); 218 131 userptr->sg = NULL; 219 132 } ··· 239 166 { 240 167 unsigned long timeout = 241 168 jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT); 242 - unsigned long *pfns, flags = HMM_PFN_REQ_FAULT; 169 + unsigned long *pfns; 243 170 struct xe_userptr *userptr; 244 171 struct xe_vma *vma = &uvma->vma; 245 172 u64 userptr_start = xe_vma_userptr(vma); 246 173 u64 userptr_end = userptr_start + xe_vma_size(vma); 247 174 struct xe_vm *vm = xe_vma_vm(vma); 248 - struct hmm_range hmm_range; 175 + struct hmm_range hmm_range = { 176 + .pfn_flags_mask = 0, /* ignore pfns */ 177 + .default_flags = HMM_PFN_REQ_FAULT, 178 + .start = userptr_start, 179 + .end = userptr_end, 180 + .notifier = &uvma->userptr.notifier, 181 + .dev_private_owner = vm->xe, 182 + }; 249 183 bool write = !xe_vma_read_only(vma); 250 184 unsigned long notifier_seq; 251 185 u64 npages; ··· 279 199 return -ENOMEM; 280 200 281 201 if (write) 282 - flags |= HMM_PFN_REQ_WRITE; 202 + hmm_range.default_flags |= HMM_PFN_REQ_WRITE; 283 203 284 204 if 
(!mmget_not_zero(userptr->notifier.mm)) { 285 205 ret = -EFAULT; 286 206 goto free_pfns; 287 207 } 288 208 289 - hmm_range.default_flags = flags; 290 209 hmm_range.hmm_pfns = pfns; 291 - hmm_range.notifier = &userptr->notifier; 292 - hmm_range.start = userptr_start; 293 - hmm_range.end = userptr_end; 294 - hmm_range.dev_private_owner = vm->xe; 295 210 296 211 while (true) { 297 212 hmm_range.notifier_seq = mmu_interval_read_begin(&userptr->notifier); ··· 313 238 if (ret) 314 239 goto free_pfns; 315 240 316 - ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt, write); 241 + ret = xe_alloc_sg(vm->xe, &userptr->sgt, &hmm_range, &vm->userptr.notifier_lock); 317 242 if (ret) 318 243 goto free_pfns; 319 244 245 + ret = down_read_interruptible(&vm->userptr.notifier_lock); 246 + if (ret) 247 + goto free_st; 248 + 249 + if (mmu_interval_read_retry(hmm_range.notifier, hmm_range.notifier_seq)) { 250 + ret = -EAGAIN; 251 + goto out_unlock; 252 + } 253 + 254 + ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt, 255 + &vm->userptr.notifier_lock, write); 256 + if (ret) 257 + goto out_unlock; 258 + 320 259 xe_mark_range_accessed(&hmm_range, write); 321 260 userptr->sg = &userptr->sgt; 261 + xe_hmm_userptr_set_mapped(uvma); 322 262 userptr->notifier_seq = hmm_range.notifier_seq; 263 + up_read(&vm->userptr.notifier_lock); 264 + kvfree(pfns); 265 + return 0; 323 266 267 + out_unlock: 268 + up_read(&vm->userptr.notifier_lock); 269 + free_st: 270 + sg_free_table(&userptr->sgt); 324 271 free_pfns: 325 272 kvfree(pfns); 326 273 return ret; 327 274 } 328 -
+7
drivers/gpu/drm/xe/xe_hmm.h
··· 3 3 * Copyright © 2024 Intel Corporation 4 4 */ 5 5 6 + #ifndef _XE_HMM_H_ 7 + #define _XE_HMM_H_ 8 + 6 9 #include <linux/types.h> 7 10 8 11 struct xe_userptr_vma; 9 12 10 13 int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma, bool is_mm_mmap_locked); 14 + 11 15 void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma); 16 + 17 + void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma); 18 + #endif
+12 -1
drivers/gpu/drm/xe/xe_pm.c
··· 267 267 } 268 268 ALLOW_ERROR_INJECTION(xe_pm_init_early, ERRNO); /* See xe_pci_probe() */ 269 269 270 + static u32 vram_threshold_value(struct xe_device *xe) 271 + { 272 + /* FIXME: D3Cold temporarily disabled by default on BMG */ 273 + if (xe->info.platform == XE_BATTLEMAGE) 274 + return 0; 275 + 276 + return DEFAULT_VRAM_THRESHOLD; 277 + } 278 + 270 279 /** 271 280 * xe_pm_init - Initialize Xe Power Management 272 281 * @xe: xe device instance ··· 286 277 */ 287 278 int xe_pm_init(struct xe_device *xe) 288 279 { 280 + u32 vram_threshold; 289 281 int err; 290 282 291 283 /* For now suspend/resume is only allowed with GuC */ ··· 300 290 if (err) 301 291 return err; 302 292 303 - err = xe_pm_set_vram_threshold(xe, DEFAULT_VRAM_THRESHOLD); 293 + vram_threshold = vram_threshold_value(xe); 294 + err = xe_pm_set_vram_threshold(xe, vram_threshold); 304 295 if (err) 305 296 return err; 306 297 }
+49 -47
drivers/gpu/drm/xe/xe_pt.c
··· 28 28 struct xe_pt pt; 29 29 /** @children: Array of page-table child nodes */ 30 30 struct xe_ptw *children[XE_PDES]; 31 + /** @staging: Array of page-table staging nodes */ 32 + struct xe_ptw *staging[XE_PDES]; 31 33 }; 32 34 33 35 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM) ··· 50 48 return container_of(pt, struct xe_pt_dir, pt); 51 49 } 52 50 53 - static struct xe_pt *xe_pt_entry(struct xe_pt_dir *pt_dir, unsigned int index) 51 + static struct xe_pt * 52 + xe_pt_entry_staging(struct xe_pt_dir *pt_dir, unsigned int index) 54 53 { 55 - return container_of(pt_dir->children[index], struct xe_pt, base); 54 + return container_of(pt_dir->staging[index], struct xe_pt, base); 56 55 } 57 56 58 57 static u64 __xe_pt_empty_pte(struct xe_tile *tile, struct xe_vm *vm, ··· 128 125 } 129 126 pt->bo = bo; 130 127 pt->base.children = level ? as_xe_pt_dir(pt)->children : NULL; 128 + pt->base.staging = level ? as_xe_pt_dir(pt)->staging : NULL; 131 129 132 130 if (vm->xef) 133 131 xe_drm_client_add_bo(vm->xef->client, pt->bo); ··· 210 206 struct xe_pt_dir *pt_dir = as_xe_pt_dir(pt); 211 207 212 208 for (i = 0; i < XE_PDES; i++) { 213 - if (xe_pt_entry(pt_dir, i)) 214 - xe_pt_destroy(xe_pt_entry(pt_dir, i), flags, 209 + if (xe_pt_entry_staging(pt_dir, i)) 210 + xe_pt_destroy(xe_pt_entry_staging(pt_dir, i), flags, 215 211 deferred); 216 212 } 217 213 } ··· 380 376 /* Continue building a non-connected subtree. 
*/ 381 377 struct iosys_map *map = &parent->bo->vmap; 382 378 383 - if (unlikely(xe_child)) 379 + if (unlikely(xe_child)) { 384 380 parent->base.children[offset] = &xe_child->base; 381 + parent->base.staging[offset] = &xe_child->base; 382 + } 385 383 386 384 xe_pt_write(xe_walk->vm->xe, map, offset, pte); 387 385 parent->num_live++; ··· 620 614 .ops = &xe_pt_stage_bind_ops, 621 615 .shifts = xe_normal_pt_shifts, 622 616 .max_level = XE_PT_HIGHEST_LEVEL, 617 + .staging = true, 623 618 }, 624 619 .vm = xe_vma_vm(vma), 625 620 .tile = tile, ··· 880 873 } 881 874 } 882 875 883 - static void xe_pt_commit_locks_assert(struct xe_vma *vma) 876 + static void xe_pt_commit_prepare_locks_assert(struct xe_vma *vma) 884 877 { 885 878 struct xe_vm *vm = xe_vma_vm(vma); 886 879 ··· 890 883 dma_resv_assert_held(xe_vma_bo(vma)->ttm.base.resv); 891 884 892 885 xe_vm_assert_held(vm); 886 + } 887 + 888 + static void xe_pt_commit_locks_assert(struct xe_vma *vma) 889 + { 890 + struct xe_vm *vm = xe_vma_vm(vma); 891 + 892 + xe_pt_commit_prepare_locks_assert(vma); 893 + 894 + if (xe_vma_is_userptr(vma)) 895 + lockdep_assert_held_read(&vm->userptr.notifier_lock); 893 896 } 894 897 895 898 static void xe_pt_commit(struct xe_vma *vma, ··· 912 895 913 896 for (i = 0; i < num_entries; i++) { 914 897 struct xe_pt *pt = entries[i].pt; 898 + struct xe_pt_dir *pt_dir; 915 899 916 900 if (!pt->level) 917 901 continue; 918 902 903 + pt_dir = as_xe_pt_dir(pt); 919 904 for (j = 0; j < entries[i].qwords; j++) { 920 905 struct xe_pt *oldpte = entries[i].pt_entries[j].pt; 906 + int j_ = j + entries[i].ofs; 921 907 908 + pt_dir->children[j_] = pt_dir->staging[j_]; 922 909 xe_pt_destroy(oldpte, xe_vma_vm(vma)->flags, deferred); 923 910 } 924 911 } ··· 934 913 { 935 914 int i, j; 936 915 937 - xe_pt_commit_locks_assert(vma); 916 + xe_pt_commit_prepare_locks_assert(vma); 938 917 939 918 for (i = num_entries - 1; i >= 0; --i) { 940 919 struct xe_pt *pt = entries[i].pt; ··· 949 928 pt_dir = as_xe_pt_dir(pt); 
950 929 for (j = 0; j < entries[i].qwords; j++) { 951 930 u32 j_ = j + entries[i].ofs; 952 - struct xe_pt *newpte = xe_pt_entry(pt_dir, j_); 931 + struct xe_pt *newpte = xe_pt_entry_staging(pt_dir, j_); 953 932 struct xe_pt *oldpte = entries[i].pt_entries[j].pt; 954 933 955 - pt_dir->children[j_] = oldpte ? &oldpte->base : 0; 934 + pt_dir->staging[j_] = oldpte ? &oldpte->base : 0; 956 935 xe_pt_destroy(newpte, xe_vma_vm(vma)->flags, NULL); 957 936 } 958 937 } ··· 964 943 { 965 944 u32 i, j; 966 945 967 - xe_pt_commit_locks_assert(vma); 946 + xe_pt_commit_prepare_locks_assert(vma); 968 947 969 948 for (i = 0; i < num_entries; i++) { 970 949 struct xe_pt *pt = entries[i].pt; ··· 982 961 struct xe_pt *newpte = entries[i].pt_entries[j].pt; 983 962 struct xe_pt *oldpte = NULL; 984 963 985 - if (xe_pt_entry(pt_dir, j_)) 986 - oldpte = xe_pt_entry(pt_dir, j_); 964 + if (xe_pt_entry_staging(pt_dir, j_)) 965 + oldpte = xe_pt_entry_staging(pt_dir, j_); 987 966 988 - pt_dir->children[j_] = &newpte->base; 967 + pt_dir->staging[j_] = &newpte->base; 989 968 entries[i].pt_entries[j].pt = oldpte; 990 969 } 991 970 } ··· 1234 1213 return 0; 1235 1214 1236 1215 uvma = to_userptr_vma(vma); 1216 + if (xe_pt_userptr_inject_eagain(uvma)) 1217 + xe_vma_userptr_force_invalidate(uvma); 1218 + 1237 1219 notifier_seq = uvma->userptr.notifier_seq; 1238 1220 1239 - if (uvma->userptr.initial_bind && !xe_vm_in_fault_mode(vm)) 1240 - return 0; 1241 - 1242 1221 if (!mmu_interval_read_retry(&uvma->userptr.notifier, 1243 - notifier_seq) && 1244 - !xe_pt_userptr_inject_eagain(uvma)) 1222 + notifier_seq)) 1245 1223 return 0; 1246 1224 1247 - if (xe_vm_in_fault_mode(vm)) { 1225 + if (xe_vm_in_fault_mode(vm)) 1248 1226 return -EAGAIN; 1249 - } else { 1250 - spin_lock(&vm->userptr.invalidated_lock); 1251 - list_move_tail(&uvma->userptr.invalidate_link, 1252 - &vm->userptr.invalidated); 1253 - spin_unlock(&vm->userptr.invalidated_lock); 1254 1227 1255 - if (xe_vm_in_preempt_fence_mode(vm)) { 1256 - struct 
dma_resv_iter cursor; 1257 - struct dma_fence *fence; 1258 - long err; 1259 - 1260 - dma_resv_iter_begin(&cursor, xe_vm_resv(vm), 1261 - DMA_RESV_USAGE_BOOKKEEP); 1262 - dma_resv_for_each_fence_unlocked(&cursor, fence) 1263 - dma_fence_enable_sw_signaling(fence); 1264 - dma_resv_iter_end(&cursor); 1265 - 1266 - err = dma_resv_wait_timeout(xe_vm_resv(vm), 1267 - DMA_RESV_USAGE_BOOKKEEP, 1268 - false, MAX_SCHEDULE_TIMEOUT); 1269 - XE_WARN_ON(err <= 0); 1270 - } 1271 - } 1272 - 1228 + /* 1229 + * Just continue the operation since exec or rebind worker 1230 + * will take care of rebinding. 1231 + */ 1273 1232 return 0; 1274 1233 } 1275 1234 ··· 1515 1514 .ops = &xe_pt_stage_unbind_ops, 1516 1515 .shifts = xe_normal_pt_shifts, 1517 1516 .max_level = XE_PT_HIGHEST_LEVEL, 1517 + .staging = true, 1518 1518 }, 1519 1519 .tile = tile, 1520 1520 .modified_start = xe_vma_start(vma), ··· 1557 1555 { 1558 1556 int i, j; 1559 1557 1560 - xe_pt_commit_locks_assert(vma); 1558 + xe_pt_commit_prepare_locks_assert(vma); 1561 1559 1562 1560 for (i = num_entries - 1; i >= 0; --i) { 1563 1561 struct xe_vm_pgtable_update *entry = &entries[i]; ··· 1570 1568 continue; 1571 1569 1572 1570 for (j = entry->ofs; j < entry->ofs + entry->qwords; j++) 1573 - pt_dir->children[j] = 1571 + pt_dir->staging[j] = 1574 1572 entries[i].pt_entries[j - entry->ofs].pt ? 
1575 1573 &entries[i].pt_entries[j - entry->ofs].pt->base : NULL; 1576 1574 } ··· 1583 1581 { 1584 1582 int i, j; 1585 1583 1586 - xe_pt_commit_locks_assert(vma); 1584 + xe_pt_commit_prepare_locks_assert(vma); 1587 1585 1588 1586 for (i = 0; i < num_entries; ++i) { 1589 1587 struct xe_vm_pgtable_update *entry = &entries[i]; ··· 1597 1595 pt_dir = as_xe_pt_dir(pt); 1598 1596 for (j = entry->ofs; j < entry->ofs + entry->qwords; j++) { 1599 1597 entry->pt_entries[j - entry->ofs].pt = 1600 - xe_pt_entry(pt_dir, j); 1601 - pt_dir->children[j] = NULL; 1598 + xe_pt_entry_staging(pt_dir, j); 1599 + pt_dir->staging[j] = NULL; 1602 1600 } 1603 1601 } 1604 1602 }
+2 -1
drivers/gpu/drm/xe/xe_pt_walk.c
··· 74 74 u64 addr, u64 end, struct xe_pt_walk *walk) 75 75 { 76 76 pgoff_t offset = xe_pt_offset(addr, level, walk); 77 - struct xe_ptw **entries = parent->children ? parent->children : NULL; 77 + struct xe_ptw **entries = walk->staging ? (parent->staging ?: NULL) : 78 + (parent->children ?: NULL); 78 79 const struct xe_pt_walk_ops *ops = walk->ops; 79 80 enum page_walk_action action; 80 81 struct xe_ptw *child;
+4
drivers/gpu/drm/xe/xe_pt_walk.h
··· 11 11 /** 12 12 * struct xe_ptw - base class for driver pagetable subclassing. 13 13 * @children: Pointer to an array of children if any. 14 + * @staging: Pointer to an array of staging entries, if any. 14 15 * 15 16 * Drivers could subclass this, and if it's a page-directory, typically 16 17 * embed an array of xe_ptw pointers. 17 18 */ 18 19 struct xe_ptw { 19 20 struct xe_ptw **children; 21 + struct xe_ptw **staging; 20 22 }; 21 23 22 24 /** ··· 43 41 * as shared pagetables. 44 42 */ 45 43 bool shared_pt_mode; 44 + /** @staging: Walk the staging PT structure */ 45 + bool staging; 46 46 }; 47 47 48 48 /**
+70 -33
drivers/gpu/drm/xe/xe_vm.c
··· 579 579 trace_xe_vm_rebind_worker_exit(vm); 580 580 } 581 581 582 - static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni, 583 - const struct mmu_notifier_range *range, 584 - unsigned long cur_seq) 582 + static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma) 585 583 { 586 - struct xe_userptr *userptr = container_of(mni, typeof(*userptr), notifier); 587 - struct xe_userptr_vma *uvma = container_of(userptr, typeof(*uvma), userptr); 584 + struct xe_userptr *userptr = &uvma->userptr; 588 585 struct xe_vma *vma = &uvma->vma; 589 - struct xe_vm *vm = xe_vma_vm(vma); 590 586 struct dma_resv_iter cursor; 591 587 struct dma_fence *fence; 592 588 long err; 593 - 594 - xe_assert(vm->xe, xe_vma_is_userptr(vma)); 595 - trace_xe_vma_userptr_invalidate(vma); 596 - 597 - if (!mmu_notifier_range_blockable(range)) 598 - return false; 599 - 600 - vm_dbg(&xe_vma_vm(vma)->xe->drm, 601 - "NOTIFIER: addr=0x%016llx, range=0x%016llx", 602 - xe_vma_start(vma), xe_vma_size(vma)); 603 - 604 - down_write(&vm->userptr.notifier_lock); 605 - mmu_interval_set_seq(mni, cur_seq); 606 - 607 - /* No need to stop gpu access if the userptr is not yet bound. */ 608 - if (!userptr->initial_bind) { 609 - up_write(&vm->userptr.notifier_lock); 610 - return true; 611 - } 612 589 613 590 /* 614 591 * Tell exec and rebind worker they need to repin and rebind this 615 592 * userptr. 616 593 */ 617 594 if (!xe_vm_in_fault_mode(vm) && 618 - !(vma->gpuva.flags & XE_VMA_DESTROYED) && vma->tile_present) { 595 + !(vma->gpuva.flags & XE_VMA_DESTROYED)) { 619 596 spin_lock(&vm->userptr.invalidated_lock); 620 597 list_move_tail(&userptr->invalidate_link, 621 598 &vm->userptr.invalidated); 622 599 spin_unlock(&vm->userptr.invalidated_lock); 623 600 } 624 - 625 - up_write(&vm->userptr.notifier_lock); 626 601 627 602 /* 628 603 * Preempt fences turn into schedule disables, pipeline these. 
··· 616 641 false, MAX_SCHEDULE_TIMEOUT); 617 642 XE_WARN_ON(err <= 0); 618 643 619 - if (xe_vm_in_fault_mode(vm)) { 644 + if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) { 620 645 err = xe_vm_invalidate_vma(vma); 621 646 XE_WARN_ON(err); 622 647 } 623 648 649 + xe_hmm_userptr_unmap(uvma); 650 + } 651 + 652 + static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni, 653 + const struct mmu_notifier_range *range, 654 + unsigned long cur_seq) 655 + { 656 + struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier); 657 + struct xe_vma *vma = &uvma->vma; 658 + struct xe_vm *vm = xe_vma_vm(vma); 659 + 660 + xe_assert(vm->xe, xe_vma_is_userptr(vma)); 661 + trace_xe_vma_userptr_invalidate(vma); 662 + 663 + if (!mmu_notifier_range_blockable(range)) 664 + return false; 665 + 666 + vm_dbg(&xe_vma_vm(vma)->xe->drm, 667 + "NOTIFIER: addr=0x%016llx, range=0x%016llx", 668 + xe_vma_start(vma), xe_vma_size(vma)); 669 + 670 + down_write(&vm->userptr.notifier_lock); 671 + mmu_interval_set_seq(mni, cur_seq); 672 + 673 + __vma_userptr_invalidate(vm, uvma); 674 + up_write(&vm->userptr.notifier_lock); 624 675 trace_xe_vma_userptr_invalidate_complete(vma); 625 676 626 677 return true; ··· 655 654 static const struct mmu_interval_notifier_ops vma_userptr_notifier_ops = { 656 655 .invalidate = vma_userptr_invalidate, 657 656 }; 657 + 658 + #if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT) 659 + /** 660 + * xe_vma_userptr_force_invalidate() - force invalidate a userptr 661 + * @uvma: The userptr vma to invalidate 662 + * 663 + * Perform a forced userptr invalidation for testing purposes. 
664 + */ 665 + void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma) 666 + { 667 + struct xe_vm *vm = xe_vma_vm(&uvma->vma); 668 + 669 + /* Protect against concurrent userptr pinning */ 670 + lockdep_assert_held(&vm->lock); 671 + /* Protect against concurrent notifiers */ 672 + lockdep_assert_held(&vm->userptr.notifier_lock); 673 + /* 674 + * Protect against concurrent instances of this function and 675 + * the critical exec sections 676 + */ 677 + xe_vm_assert_held(vm); 678 + 679 + if (!mmu_interval_read_retry(&uvma->userptr.notifier, 680 + uvma->userptr.notifier_seq)) 681 + uvma->userptr.notifier_seq -= 2; 682 + __vma_userptr_invalidate(vm, uvma); 683 + } 684 + #endif 658 685 659 686 int xe_vm_userptr_pin(struct xe_vm *vm) 660 687 { ··· 1041 1012 INIT_LIST_HEAD(&userptr->invalidate_link); 1042 1013 INIT_LIST_HEAD(&userptr->repin_link); 1043 1014 vma->gpuva.gem.offset = bo_offset_or_userptr; 1015 + mutex_init(&userptr->unmap_mutex); 1044 1016 1045 1017 err = mmu_interval_notifier_insert(&userptr->notifier, 1046 1018 current->mm, ··· 1083 1053 * them anymore 1084 1054 */ 1085 1055 mmu_interval_notifier_remove(&userptr->notifier); 1056 + mutex_destroy(&userptr->unmap_mutex); 1086 1057 xe_vm_put(vm); 1087 1058 } else if (xe_vma_is_null(vma)) { 1088 1059 xe_vm_put(vm); ··· 1809 1778 args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE)) 1810 1779 return -EINVAL; 1811 1780 1812 - if (XE_IOCTL_DBG(xe, args->extensions)) 1813 - return -EINVAL; 1814 - 1815 1781 if (args->flags & DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE) 1816 1782 flags |= XE_VM_FLAG_SCRATCH_PAGE; 1817 1783 if (args->flags & DRM_XE_VM_CREATE_FLAG_LR_MODE) ··· 2314 2286 break; 2315 2287 } 2316 2288 case DRM_GPUVA_OP_UNMAP: 2289 + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask); 2290 + break; 2317 2291 case DRM_GPUVA_OP_PREFETCH: 2318 - /* FIXME: Need to skip some prefetch ops */ 2292 + vma = gpuva_to_vma(op->base.prefetch.va); 2293 + 2294 + if (xe_vma_is_userptr(vma)) { 2295 + err = 
xe_vma_userptr_pin_pages(to_userptr_vma(vma)); 2296 + if (err) 2297 + return err; 2298 + } 2299 + 2319 2300 xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask); 2320 2301 break; 2321 2302 default:
+9 -1
drivers/gpu/drm/xe/xe_vm.h
··· 274 274 const char *format, ...) 275 275 { /* noop */ } 276 276 #endif 277 - #endif 278 277 279 278 struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm); 280 279 void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap); 281 280 void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p); 282 281 void xe_vm_snapshot_free(struct xe_vm_snapshot *snap); 282 + 283 + #if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT) 284 + void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma); 285 + #else 286 + static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma) 287 + { 288 + } 289 + #endif 290 + #endif
+6 -2
drivers/gpu/drm/xe/xe_vm_types.h
··· 59 59 struct sg_table *sg; 60 60 /** @notifier_seq: notifier sequence number */ 61 61 unsigned long notifier_seq; 62 + /** @unmap_mutex: Mutex protecting dma-unmapping */ 63 + struct mutex unmap_mutex; 62 64 /** 63 65 * @initial_bind: user pointer has been bound at least once. 64 66 * write: vm->userptr.notifier_lock in read mode and vm->resv held. 65 67 * read: vm->userptr.notifier_lock in write mode or vm->resv held. 66 68 */ 67 69 bool initial_bind; 70 + /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */ 71 + bool mapped; 68 72 #if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT) 69 73 u32 divisor; 70 74 #endif ··· 231 227 * up for revalidation. Protected from access with the 232 228 * @invalidated_lock. Removing items from the list 233 229 * additionally requires @lock in write mode, and adding 234 - * items to the list requires the @userptr.notifer_lock in 235 - * write mode. 230 + * items to the list requires either the @userptr.notifier_lock in 231 + * write mode, OR @lock in write mode. 236 232 struct list_head invalidated; 237 233 } userptr;
+7 -4
drivers/hid/hid-apple.c
··· 378 378 return false; 379 379 } 380 380 381 + static bool apple_is_omoton_kb066(struct hid_device *hdev) 382 + { 383 + return hdev->product == USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI && 384 + strcmp(hdev->name, "Bluetooth Keyboard") == 0; 385 + } 386 + 381 387 static inline void apple_setup_key_translation(struct input_dev *input, 382 388 const struct apple_key_translation *table) 383 389 { ··· 551 545 } 552 546 } 553 547 } 554 - 555 - if (usage->hid == 0xc0301) /* Omoton KB066 quirk */ 556 - code = KEY_F6; 557 548 558 549 if (usage->code != code) { 559 550 input_event_with_scancode(input, usage->type, code, usage->hid, value); ··· 731 728 { 732 729 struct apple_sc *asc = hid_get_drvdata(hdev); 733 730 734 - if ((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) { 731 + if (((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) || apple_is_omoton_kb066(hdev)) { 735 732 hid_info(hdev, "Fn key not found (Apple Wireless Keyboard clone?), disabling Fn key handling\n"); 736 733 asc->quirks &= ~APPLE_HAS_FN; 737 734 }
+1 -1
drivers/hid/hid-appleir.c
··· 188 188 static const u8 flatbattery[] = { 0x25, 0x87, 0xe0 }; 189 189 unsigned long flags; 190 190 191 - if (len != 5) 191 + if (len != 5 || !(hid->claimed & HID_CLAIMED_INPUT)) 192 192 goto out; 193 193 194 194 if (!memcmp(data, keydown, sizeof(keydown))) {
+43 -40
drivers/hid/hid-corsair-void.c
··· 71 71 72 72 #include <linux/bitfield.h> 73 73 #include <linux/bitops.h> 74 - #include <linux/cleanup.h> 75 74 #include <linux/device.h> 76 75 #include <linux/hid.h> 77 76 #include <linux/module.h> 78 - #include <linux/mutex.h> 79 77 #include <linux/power_supply.h> 80 78 #include <linux/usb.h> 81 79 #include <linux/workqueue.h> ··· 118 120 CORSAIR_VOID_BATTERY_CHARGING = 5, 119 121 }; 120 122 123 + enum { 124 + CORSAIR_VOID_ADD_BATTERY = 0, 125 + CORSAIR_VOID_REMOVE_BATTERY = 1, 126 + CORSAIR_VOID_UPDATE_BATTERY = 2, 127 + }; 128 + 121 129 static enum power_supply_property corsair_void_battery_props[] = { 122 130 POWER_SUPPLY_PROP_STATUS, 123 131 POWER_SUPPLY_PROP_PRESENT, ··· 159 155 160 156 struct power_supply *battery; 161 157 struct power_supply_desc battery_desc; 162 - struct mutex battery_mutex; 163 158 164 159 struct delayed_work delayed_status_work; 165 160 struct delayed_work delayed_firmware_work; 166 - struct work_struct battery_remove_work; 167 - struct work_struct battery_add_work; 161 + 162 + unsigned long battery_work_flags; 163 + struct work_struct battery_work; 168 164 }; 169 165 170 166 /* ··· 264 260 265 261 /* Inform power supply if battery values changed */ 266 262 if (memcmp(&orig_battery_data, battery_data, sizeof(*battery_data))) { 267 - scoped_guard(mutex, &drvdata->battery_mutex) { 268 - if (drvdata->battery) { 269 - power_supply_changed(drvdata->battery); 270 - } 271 - } 263 + set_bit(CORSAIR_VOID_UPDATE_BATTERY, 264 + &drvdata->battery_work_flags); 265 + schedule_work(&drvdata->battery_work); 272 266 } 273 267 } 274 268 ··· 538 536 539 537 } 540 538 541 - static void corsair_void_battery_remove_work_handler(struct work_struct *work) 539 + static void corsair_void_add_battery(struct corsair_void_drvdata *drvdata) 542 540 { 543 - struct corsair_void_drvdata *drvdata; 544 - 545 - drvdata = container_of(work, struct corsair_void_drvdata, 546 - battery_remove_work); 547 - scoped_guard(mutex, &drvdata->battery_mutex) { 548 - if (drvdata->battery) {
549 - power_supply_unregister(drvdata->battery); 550 - drvdata->battery = NULL; 551 - } 552 - } 553 - } 554 - 555 - static void corsair_void_battery_add_work_handler(struct work_struct *work) 556 - { 557 - struct corsair_void_drvdata *drvdata; 558 541 struct power_supply_config psy_cfg = {}; 559 542 struct power_supply *new_supply; 560 543 561 - drvdata = container_of(work, struct corsair_void_drvdata, 562 - battery_add_work); 563 - guard(mutex)(&drvdata->battery_mutex); 564 544 if (drvdata->battery) 565 545 return; 566 546 ··· 567 583 drvdata->battery = new_supply; 568 584 } 569 585 586 + static void corsair_void_battery_work_handler(struct work_struct *work) 587 + { 588 + struct corsair_void_drvdata *drvdata = container_of(work, 589 + struct corsair_void_drvdata, battery_work); 590 + 591 + bool add_battery = test_and_clear_bit(CORSAIR_VOID_ADD_BATTERY, 592 + &drvdata->battery_work_flags); 593 + bool remove_battery = test_and_clear_bit(CORSAIR_VOID_REMOVE_BATTERY, 594 + &drvdata->battery_work_flags); 595 + bool update_battery = test_and_clear_bit(CORSAIR_VOID_UPDATE_BATTERY, 596 + &drvdata->battery_work_flags); 597 + 598 + if (add_battery && !remove_battery) { 599 + corsair_void_add_battery(drvdata); 600 + } else if (remove_battery && !add_battery && drvdata->battery) { 601 + power_supply_unregister(drvdata->battery); 602 + drvdata->battery = NULL; 603 + } 604 + 605 + if (update_battery && drvdata->battery) 606 + power_supply_changed(drvdata->battery); 607 + 608 + } 609 + 570 610 static void corsair_void_headset_connected(struct corsair_void_drvdata *drvdata) 571 611 { 572 - schedule_work(&drvdata->battery_add_work); 612 + set_bit(CORSAIR_VOID_ADD_BATTERY, &drvdata->battery_work_flags); 613 + schedule_work(&drvdata->battery_work); 573 614 schedule_delayed_work(&drvdata->delayed_firmware_work, 574 615 msecs_to_jiffies(100)); 575 616 } 576 617 577 618 static void corsair_void_headset_disconnected(struct corsair_void_drvdata *drvdata) 578 619 {
579 - schedule_work(&drvdata->battery_remove_work); 620 + set_bit(CORSAIR_VOID_REMOVE_BATTERY, &drvdata->battery_work_flags); 621 + schedule_work(&drvdata->battery_work); 580 622 581 623 corsair_void_set_unknown_wireless_data(drvdata); 582 624 corsair_void_set_unknown_batt(drvdata); ··· 688 678 drvdata->battery_desc.get_property = corsair_void_battery_get_property; 689 679 690 680 drvdata->battery = NULL; 691 - INIT_WORK(&drvdata->battery_remove_work, 692 - corsair_void_battery_remove_work_handler); 693 - INIT_WORK(&drvdata->battery_add_work, 694 - corsair_void_battery_add_work_handler); 695 - ret = devm_mutex_init(drvdata->dev, &drvdata->battery_mutex); 696 - if (ret) 697 - return ret; 681 + INIT_WORK(&drvdata->battery_work, corsair_void_battery_work_handler); 698 682 699 683 ret = sysfs_create_group(&hid_dev->dev.kobj, &corsair_void_attr_group); 700 684 if (ret) ··· 725 721 struct corsair_void_drvdata *drvdata = hid_get_drvdata(hid_dev); 726 722 727 723 hid_hw_stop(hid_dev); 728 - cancel_work_sync(&drvdata->battery_remove_work); 729 - cancel_work_sync(&drvdata->battery_add_work); 724 + cancel_work_sync(&drvdata->battery_work); 730 725 if (drvdata->battery) 731 726 power_supply_unregister(drvdata->battery); 732 727
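The hid-corsair-void change above replaces two independent work items and a mutex with one work handler driven by atomic flag bits, so a queued add and a queued remove cancel out instead of racing. A minimal userspace sketch of that coalescing pattern, with C11 atomics standing in for the kernel's set_bit()/test_and_clear_bit() (names and flag values are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Flag bits mirroring CORSAIR_VOID_{ADD,REMOVE}_BATTERY (values illustrative). */
enum { ADD_BATTERY, REMOVE_BATTERY };

/* Userspace stand-ins for the kernel's set_bit()/test_and_clear_bit(). */
static void set_bit_ul(int nr, atomic_ulong *flags)
{
	atomic_fetch_or(flags, 1UL << nr);
}

static int test_and_clear_bit_ul(int nr, atomic_ulong *flags)
{
	return (atomic_fetch_and(flags, ~(1UL << nr)) >> nr) & 1;
}

/*
 * One handler drains every request queued since it last ran; a
 * simultaneous add+remove cancels out, which is the property the
 * driver fix relies on. Returns the new presence state.
 */
static int battery_work_handler(atomic_ulong *flags, int present)
{
	int add = test_and_clear_bit_ul(ADD_BATTERY, flags);
	int remove = test_and_clear_bit_ul(REMOVE_BATTERY, flags);

	if (add && !remove)
		return 1;
	if (remove && !add)
		return 0;
	return present;
}
```

Because the handler consumes all pending bits in one pass, only a single work item (and a single cancel_work_sync() at teardown) is needed.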
+1 -1
drivers/hid/hid-debug.c
··· 3450 3450 [KEY_MACRO_RECORD_START] = "MacroRecordStart", 3451 3451 [KEY_MACRO_RECORD_STOP] = "MacroRecordStop", 3452 3452 [KEY_MARK_WAYPOINT] = "MarkWayPoint", [KEY_MEDIA_REPEAT] = "MediaRepeat", 3453 - [KEY_MEDIA_TOP_MENU] = "MediaTopMenu", [KEY_MESSENGER] = "Messanger", 3453 + [KEY_MEDIA_TOP_MENU] = "MediaTopMenu", [KEY_MESSENGER] = "Messenger", 3454 3454 [KEY_NAV_CHART] = "NavChar", [KEY_NAV_INFO] = "NavInfo", 3455 3455 [KEY_NEWS] = "News", [KEY_NEXT_ELEMENT] = "NextElement", 3456 3456 [KEY_NEXT_FAVORITE] = "NextFavorite", [KEY_NOTIFICATION_CENTER] = "NotificationCenter",
+2
drivers/hid/hid-google-hammer.c
··· 268 268 mutex_unlock(&cbas_ec_reglock); 269 269 } 270 270 271 + #ifdef CONFIG_ACPI 271 272 static const struct acpi_device_id cbas_ec_acpi_ids[] = { 272 273 { "GOOG000B", 0 }, 273 274 { } 274 275 }; 275 276 MODULE_DEVICE_TABLE(acpi, cbas_ec_acpi_ids); 277 + #endif 276 278 277 279 #ifdef CONFIG_OF 278 280 static const struct of_device_id cbas_ec_of_match[] = {
+7 -7
drivers/hid/hid-nintendo.c
··· 457 457 }; 458 458 459 459 static const struct joycon_ctlr_button_mapping gencon_button_mappings[] = { 460 - { BTN_A, JC_BTN_A, }, 461 - { BTN_B, JC_BTN_B, }, 462 - { BTN_C, JC_BTN_R, }, 463 - { BTN_X, JC_BTN_X, }, /* MD/GEN 6B Only */ 464 - { BTN_Y, JC_BTN_Y, }, /* MD/GEN 6B Only */ 465 - { BTN_Z, JC_BTN_L, }, /* MD/GEN 6B Only */ 466 - { BTN_SELECT, JC_BTN_ZR, }, 460 + { BTN_WEST, JC_BTN_A, }, /* A */ 461 + { BTN_SOUTH, JC_BTN_B, }, /* B */ 462 + { BTN_EAST, JC_BTN_R, }, /* C */ 463 + { BTN_TL, JC_BTN_X, }, /* X MD/GEN 6B Only */ 464 + { BTN_NORTH, JC_BTN_Y, }, /* Y MD/GEN 6B Only */ 465 + { BTN_TR, JC_BTN_L, }, /* Z MD/GEN 6B Only */ 466 + { BTN_SELECT, JC_BTN_ZR, }, /* Mode */ 467 467 { BTN_START, JC_BTN_PLUS, }, 468 468 { BTN_MODE, JC_BTN_HOME, }, 469 469 { BTN_Z, JC_BTN_CAP, },
+1 -1
drivers/hid/hid-steam.c
··· 1327 1327 return; 1328 1328 } 1329 1329 1330 + hid_destroy_device(steam->client_hdev); 1330 1331 cancel_delayed_work_sync(&steam->mode_switch); 1331 1332 cancel_work_sync(&steam->work_connect); 1332 1333 cancel_work_sync(&steam->rumble_work); 1333 1334 cancel_work_sync(&steam->unregister_work); 1334 - hid_destroy_device(steam->client_hdev); 1335 1335 steam->client_hdev = NULL; 1336 1336 steam->client_opened = 0; 1337 1337 if (steam->quirks & STEAM_QUIRK_WIRELESS) {
+1 -1
drivers/hid/i2c-hid/i2c-hid-core.c
··· 290 290 ihid->rawbuf, recv_len + sizeof(__le16)); 291 291 if (error) { 292 292 dev_err(&ihid->client->dev, 293 - "failed to set a report to device: %d\n", error); 293 + "failed to get a report from device: %d\n", error); 294 294 return error; 295 295 } 296 296
+1 -1
drivers/hid/intel-ish-hid/ishtp-hid-client.c
··· 832 832 hid_ishtp_cl); 833 833 834 834 dev_dbg(ishtp_device(cl_device), "%s\n", __func__); 835 - hid_ishtp_cl_deinit(hid_ishtp_cl); 836 835 ishtp_put_device(cl_device); 837 836 ishtp_hid_remove(client_data); 837 + hid_ishtp_cl_deinit(hid_ishtp_cl); 838 838 839 839 hid_ishtp_cl = NULL; 840 840
+3 -1
drivers/hid/intel-ish-hid/ishtp-hid.c
··· 261 261 */ 262 262 void ishtp_hid_remove(struct ishtp_cl_data *client_data) 263 263 { 264 + void *data; 264 265 int i; 265 266 266 267 for (i = 0; i < client_data->num_hid_devices; ++i) { 267 268 if (client_data->hid_sensor_hubs[i]) { 268 - kfree(client_data->hid_sensor_hubs[i]->driver_data); 269 + data = client_data->hid_sensor_hubs[i]->driver_data; 269 270 hid_destroy_device(client_data->hid_sensor_hubs[i]); 271 + kfree(data); 270 272 client_data->hid_sensor_hubs[i] = NULL; 271 273 } 272 274 }
+2
drivers/hid/intel-thc-hid/intel-quickspi/pci-quickspi.c
··· 909 909 910 910 thc_change_ltr_mode(qsdev->thc_hw, THC_LTR_MODE_ACTIVE); 911 911 912 + qsdev->state = QUICKSPI_ENABLED; 913 + 912 914 return 0; 913 915 } 914 916
+1 -1
drivers/hid/intel-thc-hid/intel-quickspi/quickspi-protocol.c
··· 107 107 return 0; 108 108 } 109 109 110 - dev_err_once(qsdev->dev, "Unexpected intput report type: %d\n", input_rep_type); 110 + dev_err_once(qsdev->dev, "Unexpected input report type: %d\n", input_rep_type); 111 111 return -EINVAL; 112 112 } 113 113
+13
drivers/hv/vmbus_drv.c
··· 2262 2262 struct resource *iter; 2263 2263 2264 2264 mutex_lock(&hyperv_mmio_lock); 2265 + 2266 + /* 2267 + * If all bytes of the MMIO range to be released are within the 2268 + * special case fb_mmio shadow region, skip releasing the shadow 2269 + * region since no corresponding __request_region() was done 2270 + * in vmbus_allocate_mmio(). 2271 + */ 2272 + if (fb_mmio && start >= fb_mmio->start && 2273 + (start + size - 1 <= fb_mmio->end)) 2274 + goto skip_shadow_release; 2275 + 2265 2276 for (iter = hyperv_mmio; iter; iter = iter->sibling) { 2266 2277 if ((iter->start >= start + size) || (iter->end <= start)) 2267 2278 continue; 2268 2279 2269 2280 __release_region(iter, start, size); 2270 2281 } 2282 + 2283 + skip_shadow_release: 2271 2284 release_mem_region(start, size); 2272 2285 mutex_unlock(&hyperv_mmio_lock); 2273 2286
+10
drivers/hwmon/ad7314.c
··· 22 22 */ 23 23 #define AD7314_TEMP_MASK 0x7FE0 24 24 #define AD7314_TEMP_SHIFT 5 25 + #define AD7314_LEADING_ZEROS_MASK BIT(15) 25 26 26 27 /* 27 28 * ADT7301 and ADT7302 temperature masks 28 29 */ 29 30 #define ADT7301_TEMP_MASK 0x3FFF 31 + #define ADT7301_LEADING_ZEROS_MASK (BIT(15) | BIT(14)) 30 32 31 33 enum ad7314_variant { 32 34 adt7301, ··· 67 65 return ret; 68 66 switch (spi_get_device_id(chip->spi_dev)->driver_data) { 69 67 case ad7314: 68 + if (ret & AD7314_LEADING_ZEROS_MASK) { 69 + /* Invalid read-out, leading zero part is missing */ 70 + return -EIO; 71 + } 70 72 data = (ret & AD7314_TEMP_MASK) >> AD7314_TEMP_SHIFT; 71 73 data = sign_extend32(data, 9); 72 74 73 75 return sprintf(buf, "%d\n", 250 * data); 74 76 case adt7301: 75 77 case adt7302: 78 + if (ret & ADT7301_LEADING_ZEROS_MASK) { 79 + /* Invalid read-out, leading zero part is missing */ 80 + return -EIO; 81 + } 76 82 /* 77 83 * Documented as a 13 bit twos complement register 78 84 * with a sign bit - which is a 14 bit 2's complement
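The ad7314 hunk above rejects read-outs whose documented leading-zero bits are set before sign-extending the 10-bit temperature field. A self-contained sketch of that decode path, with a userspace equivalent of the kernel's sign_extend32() (the -1 error return stands in for -EIO):

```c
#include <assert.h>
#include <stdint.h>

#define AD7314_TEMP_MASK		0x7FE0
#define AD7314_TEMP_SHIFT		5
#define AD7314_LEADING_ZEROS_MASK	(1U << 15)

/* Userspace equivalent of the kernel's sign_extend32(): index is the
 * position of the sign bit. */
static int32_t sign_extend32(uint32_t value, int index)
{
	uint8_t shift = 31 - index;

	return (int32_t)(value << shift) >> shift;
}

/*
 * Decode a raw AD7314 sample into milli-degrees Celsius (0.25 degC per
 * LSB, hence 250 * data). Returns -1 when the leading-zero bit is set,
 * mirroring the new -EIO path in the driver.
 */
static int ad7314_decode(uint16_t raw, int32_t *mdeg)
{
	int32_t data;

	if (raw & AD7314_LEADING_ZEROS_MASK)
		return -1;	/* invalid read-out, leading zero missing */

	data = sign_extend32((raw & AD7314_TEMP_MASK) >> AD7314_TEMP_SHIFT, 9);
	*mdeg = 250 * data;
	return 0;
}
```

The validation matters because a floating or glitched SPI line tends to return all-ones, which previously decoded as a plausible-looking temperature instead of an error.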
+33 -33
drivers/hwmon/ntc_thermistor.c
··· 181 181 }; 182 182 183 183 static const struct ntc_compensation ncpXXxh103[] = { 184 - { .temp_c = -40, .ohm = 247565 }, 185 - { .temp_c = -35, .ohm = 181742 }, 186 - { .temp_c = -30, .ohm = 135128 }, 187 - { .temp_c = -25, .ohm = 101678 }, 188 - { .temp_c = -20, .ohm = 77373 }, 189 - { .temp_c = -15, .ohm = 59504 }, 190 - { .temp_c = -10, .ohm = 46222 }, 191 - { .temp_c = -5, .ohm = 36244 }, 192 - { .temp_c = 0, .ohm = 28674 }, 193 - { .temp_c = 5, .ohm = 22878 }, 194 - { .temp_c = 10, .ohm = 18399 }, 195 - { .temp_c = 15, .ohm = 14910 }, 196 - { .temp_c = 20, .ohm = 12169 }, 184 + { .temp_c = -40, .ohm = 195652 }, 185 + { .temp_c = -35, .ohm = 148171 }, 186 + { .temp_c = -30, .ohm = 113347 }, 187 + { .temp_c = -25, .ohm = 87559 }, 188 + { .temp_c = -20, .ohm = 68237 }, 189 + { .temp_c = -15, .ohm = 53650 }, 190 + { .temp_c = -10, .ohm = 42506 }, 191 + { .temp_c = -5, .ohm = 33892 }, 192 + { .temp_c = 0, .ohm = 27219 }, 193 + { .temp_c = 5, .ohm = 22021 }, 194 + { .temp_c = 10, .ohm = 17926 }, 195 + { .temp_c = 15, .ohm = 14674 }, 196 + { .temp_c = 20, .ohm = 12081 }, 197 197 { .temp_c = 25, .ohm = 10000 },
198 - { .temp_c = 30, .ohm = 8271 }, 199 - { .temp_c = 35, .ohm = 6883 }, 200 - { .temp_c = 40, .ohm = 5762 }, 201 - { .temp_c = 45, .ohm = 4851 }, 202 - { .temp_c = 50, .ohm = 4105 }, 203 - { .temp_c = 55, .ohm = 3492 }, 204 - { .temp_c = 60, .ohm = 2985 }, 205 - { .temp_c = 65, .ohm = 2563 }, 206 - { .temp_c = 70, .ohm = 2211 }, 207 - { .temp_c = 75, .ohm = 1915 }, 208 - { .temp_c = 80, .ohm = 1666 }, 209 - { .temp_c = 85, .ohm = 1454 }, 210 - { .temp_c = 90, .ohm = 1275 }, 211 - { .temp_c = 95, .ohm = 1121 }, 212 - { .temp_c = 100, .ohm = 990 }, 213 - { .temp_c = 105, .ohm = 876 }, 214 - { .temp_c = 110, .ohm = 779 }, 215 - { .temp_c = 115, .ohm = 694 }, 216 - { .temp_c = 120, .ohm = 620 }, 217 - { .temp_c = 125, .ohm = 556 }, 198 + { .temp_c = 30, .ohm = 8315 }, 199 + { .temp_c = 35, .ohm = 6948 }, 200 + { .temp_c = 40, .ohm = 5834 }, 201 + { .temp_c = 45, .ohm = 4917 }, 202 + { .temp_c = 50, .ohm = 4161 }, 203 + { .temp_c = 55, .ohm = 3535 }, 204 + { .temp_c = 60, .ohm = 3014 }, 205 + { .temp_c = 65, .ohm = 2586 }, 206 + { .temp_c = 70, .ohm = 2228 }, 207 + { .temp_c = 75, .ohm = 1925 }, 208 + { .temp_c = 80, .ohm = 1669 }, 209 + { .temp_c = 85, .ohm = 1452 }, 210 + { .temp_c = 90, .ohm = 1268 }, 211 + { .temp_c = 95, .ohm = 1110 }, 212 + { .temp_c = 100, .ohm = 974 }, 213 + { .temp_c = 105, .ohm = 858 }, 214 + { .temp_c = 110, .ohm = 758 }, 215 + { .temp_c = 115, .ohm = 672 }, 216 + { .temp_c = 120, .ohm = 596 }, 217 + { .temp_c = 125, .ohm = 531 }, 218 218 }; 219 219 220 220 /*
+4 -6
drivers/hwmon/peci/dimmtemp.c
··· 127 127 return 0; 128 128 129 129 ret = priv->gen_info->read_thresholds(priv, dimm_order, chan_rank, &data); 130 - if (ret == -ENODATA) /* Use default or previous value */ 131 - return 0; 132 130 if (ret) 133 131 return ret; 134 132 ··· 507 509 508 510 ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd4, &reg_val); 509 511 if (ret || !(reg_val & BIT(31))) 510 - return -ENODATA; /* Use default or previous value */ 512 + return -ENODATA; 511 513 512 514 ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd0, &reg_val); 513 515 if (ret) 514 - return -ENODATA; /* Use default or previous value */ 516 + return -ENODATA; 515 517 516 518 /* 517 519 * Device 26, Offset 224e0: IMC 0 channel 0 -> rank 0 ··· 544 546 545 547 ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd4, &reg_val); 546 548 if (ret || !(reg_val & BIT(31))) 547 - return -ENODATA; /* Use default or previous value */ 549 + return -ENODATA; 548 550 549 551 ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd0, &reg_val); 550 552 if (ret) 551 - return -ENODATA; /* Use default or previous value */ 553 + return -ENODATA; 552 554 553 555 /* 554 556 * Device 26, Offset 219a8: IMC 0 channel 0 -> rank 0
+2
drivers/hwmon/pmbus/pmbus.c
··· 103 103 if (pmbus_check_byte_register(client, 0, PMBUS_PAGE)) { 104 104 int page; 105 105 106 + info->pages = PMBUS_PAGES; 107 + 106 108 for (page = 1; page < PMBUS_PAGES; page++) { 107 109 if (pmbus_set_page(client, page, 0xff) < 0) 108 110 break;
+1 -1
drivers/hwmon/xgene-hwmon.c
··· 706 706 goto out; 707 707 } 708 708 709 - if (!ctx->pcc_comm_addr) { 709 + if (IS_ERR_OR_NULL(ctx->pcc_comm_addr)) { 710 710 dev_err(&pdev->dev, 711 711 "Failed to ioremap PCC comm region\n"); 712 712 rc = -ENOMEM;
+11 -2
drivers/hwtracing/intel_th/msu.c
··· 105 105 106 106 /** 107 107 * struct msc - MSC device representation 108 - * @reg_base: register window base address 108 + * @reg_base: register window base address for the entire MSU 109 + * @msu_base: register window base address for this MSC 109 110 * @thdev: intel_th_device pointer 110 111 * @mbuf: MSU buffer, if assigned 111 - * @mbuf_priv MSU buffer's private data, if @mbuf 112 + * @mbuf_priv: MSU buffer's private data, if @mbuf 113 + * @work: a work to stop the trace when the buffer is full 112 114 * @win_list: list of windows in multiblock mode 113 115 * @single_sgt: single mode buffer 114 116 * @cur_win: current window 117 + * @switch_on_unlock: window to switch to when it becomes available 115 118 * @nr_pages: total number of pages allocated for this buffer 116 119 * @single_sz: amount of data in single mode 117 120 * @single_wrap: single mode wrap occurred 118 121 * @base: buffer's base pointer 119 122 * @base_addr: buffer's base address 123 + * @orig_addr: MSC0 buffer's base address 124 + * @orig_sz: MSC0 buffer's size 120 125 * @user_count: number of users of the buffer 121 126 * @mmap_count: number of mappings 122 127 * @buf_mutex: mutex to serialize access to buffer-related bits 128 + * @iter_list: list of open file descriptor iterators 129 + * @stop_on_full: stop the trace if the current window is full 123 130 * @enabled: MSC is enabled 124 131 * @wrap: wrapping is enabled 132 + * @do_irq: IRQ resource is available, handle interrupts 133 + * @multi_is_broken: multiblock mode enabled (not disabled by PCI drvdata) 125 134 * @mode: MSC operating mode 126 135 * @burst_len: write burst length 127 136 * @index: number of this MSC in the MSU
+15
drivers/hwtracing/intel_th/pci.c
··· 335 335 .driver_data = (kernel_ulong_t)&intel_th_2x, 336 336 }, 337 337 { 338 + /* Arrow Lake */ 339 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7724), 340 + .driver_data = (kernel_ulong_t)&intel_th_2x, 341 + }, 342 + { 343 + /* Panther Lake-H */ 344 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xe324), 345 + .driver_data = (kernel_ulong_t)&intel_th_2x, 346 + }, 347 + { 348 + /* Panther Lake-P/U */ 349 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xe424), 350 + .driver_data = (kernel_ulong_t)&intel_th_2x, 351 + }, 352 + { 338 353 /* Alder Lake CPU */ 339 354 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f), 340 355 .driver_data = (kernel_ulong_t)&intel_th_2x,
+11 -1
drivers/i2c/busses/i2c-ali1535.c
··· 485 485 486 486 static int ali1535_probe(struct pci_dev *dev, const struct pci_device_id *id) 487 487 { 488 + int ret; 489 + 488 490 if (ali1535_setup(dev)) { 489 491 dev_warn(&dev->dev, 490 492 "ALI1535 not detected, module not inserted.\n"); ··· 498 496 499 497 snprintf(ali1535_adapter.name, sizeof(ali1535_adapter.name), 500 498 "SMBus ALI1535 adapter at %04x", ali1535_offset); 501 - return i2c_add_adapter(&ali1535_adapter); 499 + ret = i2c_add_adapter(&ali1535_adapter); 500 + if (ret) 501 + goto release_region; 502 + 503 + return 0; 504 + 505 + release_region: 506 + release_region(ali1535_smba, ALI1535_SMB_IOSIZE); 507 + return ret; 502 508 } 503 509 504 510 static void ali1535_remove(struct pci_dev *dev)
+11 -1
drivers/i2c/busses/i2c-ali15x3.c
··· 472 472 473 473 static int ali15x3_probe(struct pci_dev *dev, const struct pci_device_id *id) 474 474 { 475 + int ret; 476 + 475 477 if (ali15x3_setup(dev)) { 476 478 dev_err(&dev->dev, 477 479 "ALI15X3 not detected, module not inserted.\n"); ··· 485 483 486 484 snprintf(ali15x3_adapter.name, sizeof(ali15x3_adapter.name), 487 485 "SMBus ALI15X3 adapter at %04x", ali15x3_smba); 488 - return i2c_add_adapter(&ali15x3_adapter); 486 + ret = i2c_add_adapter(&ali15x3_adapter); 487 + if (ret) 488 + goto release_region; 489 + 490 + return 0; 491 + 492 + release_region: 493 + release_region(ali15x3_smba, ALI15X3_SMB_IOSIZE); 494 + return ret; 489 495 } 490 496 491 497 static void ali15x3_remove(struct pci_dev *dev)
+7 -19
drivers/i2c/busses/i2c-omap.c
··· 1048 1048 return 0; 1049 1049 } 1050 1050 1051 - static irqreturn_t 1052 - omap_i2c_isr(int irq, void *dev_id) 1053 - { 1054 - struct omap_i2c_dev *omap = dev_id; 1055 - irqreturn_t ret = IRQ_HANDLED; 1056 - u16 mask; 1057 - u16 stat; 1058 - 1059 - stat = omap_i2c_read_reg(omap, OMAP_I2C_STAT_REG); 1060 - mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG) & ~OMAP_I2C_STAT_NACK; 1061 - 1062 - if (stat & mask) 1063 - ret = IRQ_WAKE_THREAD; 1064 - 1065 - return ret; 1066 - } 1067 - 1068 1051 static int omap_i2c_xfer_data(struct omap_i2c_dev *omap) 1069 1052 { 1070 1053 u16 bits; ··· 1078 1095 } 1079 1096 1080 1097 if (stat & OMAP_I2C_STAT_NACK) { 1081 - err |= OMAP_I2C_STAT_NACK; 1098 + omap->cmd_err |= OMAP_I2C_STAT_NACK; 1082 1099 omap_i2c_ack_stat(omap, OMAP_I2C_STAT_NACK); 1100 + 1101 + if (!(stat & ~OMAP_I2C_STAT_NACK)) { 1102 + err = -EAGAIN; 1103 + break; 1104 + } 1083 1105 } 1084 1106 1085 1107 if (stat & OMAP_I2C_STAT_AL) { ··· 1460 1472 IRQF_NO_SUSPEND, pdev->name, omap); 1461 1473 else 1462 1474 r = devm_request_threaded_irq(&pdev->dev, omap->irq, 1463 - omap_i2c_isr, omap_i2c_isr_thread, 1475 + NULL, omap_i2c_isr_thread, 1464 1476 IRQF_NO_SUSPEND | IRQF_ONESHOT, 1465 1477 pdev->name, omap); 1466 1478
+11 -1
drivers/i2c/busses/i2c-sis630.c
··· 509 509 510 510 static int sis630_probe(struct pci_dev *dev, const struct pci_device_id *id) 511 511 { 512 + int ret; 513 + 512 514 if (sis630_setup(dev)) { 513 515 dev_err(&dev->dev, 514 516 "SIS630 compatible bus not detected, " ··· 524 522 snprintf(sis630_adapter.name, sizeof(sis630_adapter.name), 525 523 "SMBus SIS630 adapter at %04x", smbus_base + SMB_STS); 526 524 527 - return i2c_add_adapter(&sis630_adapter); 525 + ret = i2c_add_adapter(&sis630_adapter); 526 + if (ret) 527 + goto release_region; 528 + 529 + return 0; 530 + 531 + release_region: 532 + release_region(smbus_base + SMB_STS, SIS630_SMB_IOREGION); 533 + return ret; 528 534 } 529 535 530 536 static void sis630_remove(struct pci_dev *dev)
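The three i2c probe hunks above (i2c-ali1535, i2c-ali15x3, i2c-sis630) share one fix: when i2c_add_adapter() fails, the I/O region claimed during setup must be released before returning, instead of being leaked. A standalone sketch of that goto-unwind idiom (stub functions and the -19/-ENODEV code are illustrative, not the real kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

static bool region_claimed;

/* Stubs standing in for request_region()/release_region() and
 * i2c_add_adapter(). */
static int claim_region(void)
{
	region_claimed = true;
	return 0;
}

static void release_region_stub(void)
{
	region_claimed = false;
}

static int add_adapter(bool fail)
{
	return fail ? -19 : 0;	/* -ENODEV */
}

/* On failure of a later step, unwind the earlier step before returning. */
static int probe(bool adapter_fails)
{
	int ret;

	ret = claim_region();
	if (ret)
		return ret;

	ret = add_adapter(adapter_fails);
	if (ret)
		goto release_region;

	return 0;

release_region:
	release_region_stub();
	return ret;
}
```

Without the unwind label, a failed probe would keep the region claimed forever, blocking any later rebind of the driver.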
+1 -1
drivers/iio/adc/ad7192.c
··· 1084 1084 1085 1085 conf &= ~AD7192_CONF_CHAN_MASK; 1086 1086 for_each_set_bit(i, scan_mask, 8) 1087 - conf |= FIELD_PREP(AD7192_CONF_CHAN_MASK, i); 1087 + conf |= FIELD_PREP(AD7192_CONF_CHAN_MASK, BIT(i)); 1088 1088 1089 1089 ret = ad_sd_write_reg(&st->sd, AD7192_REG_CONF, 3, conf); 1090 1090 if (ret < 0)
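The one-line ad7192 change is easy to misread: the CONF channel field is a bitmap with one bit per enabled channel, so each channel must contribute BIT(i) to the value handed to FIELD_PREP(), not the bare index i. A simplified model of the difference (the field position, width, and FIELD_PREP_CHAN helper are stand-ins, not the real register layout or kernel macro):

```c
#include <assert.h>

#define BIT(n)			(1U << (n))
#define CHAN_SHIFT		8
#define CHAN_MASK		(0xFFU << CHAN_SHIFT)
/* Simplified FIELD_PREP(): shift a value into the field covered by
 * CHAN_MASK. The real kernel macro derives the shift from the mask. */
#define FIELD_PREP_CHAN(val)	(((val) << CHAN_SHIFT) & CHAN_MASK)

/* Buggy form: ORs channel *indices* into the field, so channels 1 and 3
 * collapse into the value 3 (field bits 0 and 1). */
static unsigned int conf_buggy(const int *chan, int n)
{
	unsigned int conf = 0;

	for (int i = 0; i < n; i++)
		conf |= FIELD_PREP_CHAN((unsigned int)chan[i]);
	return conf;
}

/* Fixed form: ORs one-hot BIT(i) values, so each channel gets its own
 * bit in the field. */
static unsigned int conf_fixed(const int *chan, int n)
{
	unsigned int conf = 0;

	for (int i = 0; i < n; i++)
		conf |= FIELD_PREP_CHAN(BIT(chan[i]));
	return conf;
}
```

The buggy variant happens to work for channel 1 alone (index 1 equals BIT(0) shifted wrongly but still nonzero), which is why the bug could survive light testing.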
+1 -1
drivers/iio/adc/ad7606.c
··· 1047 1047 1048 1048 cs = &st->chan_scales[ch]; 1049 1049 *vals = (int *)cs->scale_avail; 1050 - *length = cs->num_scales; 1050 + *length = cs->num_scales * 2; 1051 1051 *type = IIO_VAL_INT_PLUS_MICRO; 1052 1052 1053 1053 return IIO_AVAIL_LIST;
+40 -28
drivers/iio/adc/at91-sama5d2_adc.c
··· 329 329 #define AT91_HWFIFO_MAX_SIZE_STR "128" 330 330 #define AT91_HWFIFO_MAX_SIZE 128 331 331 332 - #define AT91_SAMA5D2_CHAN_SINGLE(index, num, addr) \ 332 + #define AT91_SAMA_CHAN_SINGLE(index, num, addr, rbits) \ 333 333 { \ 334 334 .type = IIO_VOLTAGE, \ 335 335 .channel = num, \ ··· 337 337 .scan_index = index, \ 338 338 .scan_type = { \ 339 339 .sign = 'u', \ 340 - .realbits = 14, \ 340 + .realbits = rbits, \ 341 341 .storagebits = 16, \ 342 342 }, \ 343 343 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ ··· 350 350 .indexed = 1, \ 351 351 } 352 352 353 - #define AT91_SAMA5D2_CHAN_DIFF(index, num, num2, addr) \ 353 + #define AT91_SAMA5D2_CHAN_SINGLE(index, num, addr) \ 354 + AT91_SAMA_CHAN_SINGLE(index, num, addr, 14) 355 + 356 + #define AT91_SAMA7G5_CHAN_SINGLE(index, num, addr) \ 357 + AT91_SAMA_CHAN_SINGLE(index, num, addr, 16) 358 + 359 + #define AT91_SAMA_CHAN_DIFF(index, num, num2, addr, rbits) \ 354 360 { \ 355 361 .type = IIO_VOLTAGE, \ 356 362 .differential = 1, \ ··· 366 360 .scan_index = index, \ 367 361 .scan_type = { \ 368 362 .sign = 's', \ 369 - .realbits = 14, \ 363 + .realbits = rbits, \ 370 364 .storagebits = 16, \ 371 365 }, \ 372 366 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \ ··· 378 372 .datasheet_name = "CH"#num"-CH"#num2, \ 379 373 .indexed = 1, \ 380 374 } 375 + 376 + #define AT91_SAMA5D2_CHAN_DIFF(index, num, num2, addr) \ 377 + AT91_SAMA_CHAN_DIFF(index, num, num2, addr, 14) 378 + 379 + #define AT91_SAMA7G5_CHAN_DIFF(index, num, num2, addr) \ 380 + AT91_SAMA_CHAN_DIFF(index, num, num2, addr, 16) 381 381 382 382 #define AT91_SAMA5D2_CHAN_TOUCH(num, name, mod) \ 383 383 { \ ··· 678 666 }; 679 667 680 668 static const struct iio_chan_spec at91_sama7g5_adc_channels[] = { 681 - AT91_SAMA5D2_CHAN_SINGLE(0, 0, 0x60), 682 - AT91_SAMA5D2_CHAN_SINGLE(1, 1, 0x64), 683 - AT91_SAMA5D2_CHAN_SINGLE(2, 2, 0x68), 684 - AT91_SAMA5D2_CHAN_SINGLE(3, 3, 0x6c), 685 - AT91_SAMA5D2_CHAN_SINGLE(4, 4, 0x70), 686 - AT91_SAMA5D2_CHAN_SINGLE(5, 5, 0x74),
687 - AT91_SAMA5D2_CHAN_SINGLE(6, 6, 0x78), 688 - AT91_SAMA5D2_CHAN_SINGLE(7, 7, 0x7c), 689 - AT91_SAMA5D2_CHAN_SINGLE(8, 8, 0x80), 690 - AT91_SAMA5D2_CHAN_SINGLE(9, 9, 0x84), 691 - AT91_SAMA5D2_CHAN_SINGLE(10, 10, 0x88), 692 - AT91_SAMA5D2_CHAN_SINGLE(11, 11, 0x8c), 693 - AT91_SAMA5D2_CHAN_SINGLE(12, 12, 0x90), 694 - AT91_SAMA5D2_CHAN_SINGLE(13, 13, 0x94), 695 - AT91_SAMA5D2_CHAN_SINGLE(14, 14, 0x98), 696 - AT91_SAMA5D2_CHAN_SINGLE(15, 15, 0x9c), 697 - AT91_SAMA5D2_CHAN_DIFF(16, 0, 1, 0x60), 698 - AT91_SAMA5D2_CHAN_DIFF(17, 2, 3, 0x68), 699 - AT91_SAMA5D2_CHAN_DIFF(18, 4, 5, 0x70), 700 - AT91_SAMA5D2_CHAN_DIFF(19, 6, 7, 0x78), 701 - AT91_SAMA5D2_CHAN_DIFF(20, 8, 9, 0x80), 702 - AT91_SAMA5D2_CHAN_DIFF(21, 10, 11, 0x88), 703 - AT91_SAMA5D2_CHAN_DIFF(22, 12, 13, 0x90), 704 - AT91_SAMA5D2_CHAN_DIFF(23, 14, 15, 0x98), 669 + AT91_SAMA7G5_CHAN_SINGLE(0, 0, 0x60), 670 + AT91_SAMA7G5_CHAN_SINGLE(1, 1, 0x64), 671 + AT91_SAMA7G5_CHAN_SINGLE(2, 2, 0x68), 672 + AT91_SAMA7G5_CHAN_SINGLE(3, 3, 0x6c), 673 + AT91_SAMA7G5_CHAN_SINGLE(4, 4, 0x70), 674 + AT91_SAMA7G5_CHAN_SINGLE(5, 5, 0x74), 675 + AT91_SAMA7G5_CHAN_SINGLE(6, 6, 0x78), 676 + AT91_SAMA7G5_CHAN_SINGLE(7, 7, 0x7c), 677 + AT91_SAMA7G5_CHAN_SINGLE(8, 8, 0x80), 678 + AT91_SAMA7G5_CHAN_SINGLE(9, 9, 0x84), 679 + AT91_SAMA7G5_CHAN_SINGLE(10, 10, 0x88), 680 + AT91_SAMA7G5_CHAN_SINGLE(11, 11, 0x8c), 681 + AT91_SAMA7G5_CHAN_SINGLE(12, 12, 0x90), 682 + AT91_SAMA7G5_CHAN_SINGLE(13, 13, 0x94), 683 + AT91_SAMA7G5_CHAN_SINGLE(14, 14, 0x98), 684 + AT91_SAMA7G5_CHAN_SINGLE(15, 15, 0x9c), 685 + AT91_SAMA7G5_CHAN_DIFF(16, 0, 1, 0x60), 686 + AT91_SAMA7G5_CHAN_DIFF(17, 2, 3, 0x68), 687 + AT91_SAMA7G5_CHAN_DIFF(18, 4, 5, 0x70), 688 + AT91_SAMA7G5_CHAN_DIFF(19, 6, 7, 0x78), 689 + AT91_SAMA7G5_CHAN_DIFF(20, 8, 9, 0x80), 690 + AT91_SAMA7G5_CHAN_DIFF(21, 10, 11, 0x88), 691 + AT91_SAMA7G5_CHAN_DIFF(22, 12, 13, 0x90), 692 + AT91_SAMA7G5_CHAN_DIFF(23, 14, 15, 0x98), 705 693 IIO_CHAN_SOFT_TIMESTAMP(24),
706 694 AT91_SAMA5D2_CHAN_TEMP(AT91_SAMA7G5_ADC_TEMP_CHANNEL, "temp", 0xdc), 707 695 };
+1 -1
drivers/iio/adc/pac1921.c
··· 1198 1198 1199 1199 label = devm_kstrdup(dev, status->package.elements[0].string.pointer, 1200 1200 GFP_KERNEL); 1201 + ACPI_FREE(status); 1201 1202 if (!label) 1202 1203 return -ENOMEM; 1203 1204 1204 1205 indio_dev->label = label; 1205 - ACPI_FREE(status); 1206 1206 1207 1207 return 0; 1208 1208 }
+6
drivers/iio/dac/ad3552r.c
··· 410 410 return ret; 411 411 } 412 412 413 + /* Clear reset error flag, see ad3552r manual, rev B table 38. */ 414 + ret = ad3552r_write_reg(dac, AD3552R_REG_ADDR_ERR_STATUS, 415 + AD3552R_MASK_RESET_STATUS); 416 + if (ret) 417 + return ret; 418 + 413 419 return ad3552r_update_reg_field(dac, 414 420 AD3552R_REG_ADDR_INTERFACE_CONFIG_A, 415 421 AD3552R_MASK_ADDR_ASCENSION,
+4 -10
drivers/iio/filter/admv8818.c
··· 574 574 struct spi_device *spi = st->spi; 575 575 unsigned int chip_id; 576 576 577 - ret = regmap_update_bits(st->regmap, ADMV8818_REG_SPI_CONFIG_A, 578 - ADMV8818_SOFTRESET_N_MSK | 579 - ADMV8818_SOFTRESET_MSK, 580 - FIELD_PREP(ADMV8818_SOFTRESET_N_MSK, 1) | 581 - FIELD_PREP(ADMV8818_SOFTRESET_MSK, 1)); 577 + ret = regmap_write(st->regmap, ADMV8818_REG_SPI_CONFIG_A, 578 + ADMV8818_SOFTRESET_N_MSK | ADMV8818_SOFTRESET_MSK); 582 579 if (ret) { 583 580 dev_err(&spi->dev, "ADMV8818 Soft Reset failed.\n"); 584 581 return ret; 585 582 } 586 583 587 - ret = regmap_update_bits(st->regmap, ADMV8818_REG_SPI_CONFIG_A, 588 - ADMV8818_SDOACTIVE_N_MSK | 589 - ADMV8818_SDOACTIVE_MSK, 590 - FIELD_PREP(ADMV8818_SDOACTIVE_N_MSK, 1) | 591 - FIELD_PREP(ADMV8818_SDOACTIVE_MSK, 1)); 584 + ret = regmap_write(st->regmap, ADMV8818_REG_SPI_CONFIG_A, 585 + ADMV8818_SDOACTIVE_N_MSK | ADMV8818_SDOACTIVE_MSK); 592 586 if (ret) { 593 587 dev_err(&spi->dev, "ADMV8818 SDO Enable failed.\n"); 594 588 return ret;
+2 -2
drivers/iio/light/apds9306.c
··· 108 108 { 109 109 .part_id = 0xB1, 110 110 .max_scale_int = 16, 111 - .max_scale_nano = 3264320, 111 + .max_scale_nano = 326432000, 112 112 }, { 113 113 .part_id = 0xB3, 114 114 .max_scale_int = 14, 115 - .max_scale_nano = 9712000, 115 + .max_scale_nano = 97120000, 116 116 }, 117 117 }; 118 118
+4 -3
drivers/iio/light/hid-sensor-prox.c
··· 49 49 #define PROX_CHANNEL(_is_proximity, _channel) \ 50 50 {\ 51 51 .type = _is_proximity ? IIO_PROXIMITY : IIO_ATTENTION,\ 52 - .info_mask_separate = _is_proximity ? BIT(IIO_CHAN_INFO_RAW) :\ 53 - BIT(IIO_CHAN_INFO_PROCESSED),\ 54 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) |\ 52 + .info_mask_separate = \ 53 + (_is_proximity ? BIT(IIO_CHAN_INFO_RAW) :\ 54 + BIT(IIO_CHAN_INFO_PROCESSED)) |\ 55 + BIT(IIO_CHAN_INFO_OFFSET) |\ 55 56 BIT(IIO_CHAN_INFO_SCALE) |\ 56 57 BIT(IIO_CHAN_INFO_SAMP_FREQ) |\ 57 58 BIT(IIO_CHAN_INFO_HYSTERESIS),\
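The parentheses added around the ternary in the PROX_CHANNEL() hunk above are load-bearing: `?:` binds more loosely than `|`, so without them the ORed info-mask bits would fold into the ternary's else-branch instead of applying to both cases. A reduced illustration of that precedence pitfall (flag names and values are arbitrary, not the IIO masks):

```c
#include <assert.h>

#define FLAG_RAW	(1U << 0)
#define FLAG_PROCESSED	(1U << 1)
#define FLAG_SCALE	(1U << 2)

/* Parses as: is_prox ? FLAG_RAW : (FLAG_PROCESSED | FLAG_SCALE),
 * silently dropping FLAG_SCALE whenever is_prox is true. */
static unsigned int mask_unparenthesized(int is_prox)
{
	return is_prox ? FLAG_RAW : FLAG_PROCESSED | FLAG_SCALE;
}

/* The fixed form: the ternary picks one flag, FLAG_SCALE applies to
 * both branches. */
static unsigned int mask_parenthesized(int is_prox)
{
	return (is_prox ? FLAG_RAW : FLAG_PROCESSED) | FLAG_SCALE;
}
```

The same reasoning applies to every flag ORed after the ternary in the macro, which is why the fix wraps the conditional rather than reordering the operands.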
+2 -1
drivers/iio/proximity/hx9023s.c
··· 1036 1036 return -ENOMEM; 1037 1037 1038 1038 memcpy(bin->data, fw->data, fw->size); 1039 - release_firmware(fw); 1040 1039 1041 1040 bin->fw_size = fw->size; 1042 1041 bin->fw_ver = bin->data[FW_VER_OFFSET]; 1043 1042 bin->reg_count = get_unaligned_le16(bin->data + FW_REG_CNT_OFFSET); 1043 + 1044 + release_firmware(fw); 1044 1045 1045 1046 return hx9023s_bin_load(data, bin); 1046 1047 }
+32 -7
drivers/input/joystick/xpad.c
··· 140 140 { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 141 141 { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX }, 142 142 { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX }, 143 + { 0x044f, 0xd01e, "ThrustMaster, Inc. ESWAP X 2 ELDEN RING EDITION", 0, XTYPE_XBOXONE }, 143 144 { 0x044f, 0x0f10, "Thrustmaster Modena GT Wheel", 0, XTYPE_XBOX }, 144 145 { 0x044f, 0xb326, "Thrustmaster Gamepad GP XID", 0, XTYPE_XBOX360 }, 145 146 { 0x045e, 0x0202, "Microsoft X-Box pad v1 (US)", 0, XTYPE_XBOX }, ··· 178 177 { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX }, 179 178 { 0x06a3, 0x0201, "Saitek Adrenalin", 0, XTYPE_XBOX }, 180 179 { 0x06a3, 0xf51a, "Saitek P3600", 0, XTYPE_XBOX360 }, 180 + { 0x0738, 0x4503, "Mad Catz Racing Wheel", 0, XTYPE_XBOXONE }, 181 181 { 0x0738, 0x4506, "Mad Catz 4506 Wireless Controller", 0, XTYPE_XBOX }, 182 182 { 0x0738, 0x4516, "Mad Catz Control Pad", 0, XTYPE_XBOX }, 183 183 { 0x0738, 0x4520, "Mad Catz Control Pad Pro", 0, XTYPE_XBOX }, ··· 240 238 { 0x0e6f, 0x0146, "Rock Candy Wired Controller for Xbox One", 0, XTYPE_XBOXONE }, 241 239 { 0x0e6f, 0x0147, "PDP Marvel Xbox One Controller", 0, XTYPE_XBOXONE }, 242 240 { 0x0e6f, 0x015c, "PDP Xbox One Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, 241 + { 0x0e6f, 0x015d, "PDP Mirror's Edge Official Wired Controller for Xbox One", XTYPE_XBOXONE }, 243 242 { 0x0e6f, 0x0161, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, 244 243 { 0x0e6f, 0x0162, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, 245 244 { 0x0e6f, 0x0163, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, ··· 279 276 { 0x0f0d, 0x0078, "Hori Real Arcade Pro V Kai Xbox One", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, 280 277 { 0x0f0d, 0x00c5, "Hori Fighting Commander ONE", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE }, 281 278 { 0x0f0d, 0x00dc, "HORIPAD FPS for Nintendo Switch", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 279 + { 0x0f0d, 0x0151, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE },
280 + { 0x0f0d, 0x0152, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE }, 282 281 { 0x0f30, 0x010b, "Philips Recoil", 0, XTYPE_XBOX }, 283 282 { 0x0f30, 0x0202, "Joytech Advanced Controller", 0, XTYPE_XBOX }, 284 283 { 0x0f30, 0x8888, "BigBen XBMiniPad Controller", 0, XTYPE_XBOX }, 285 284 { 0x102c, 0xff0c, "Joytech Wireless Advanced Controller", 0, XTYPE_XBOX }, 286 285 { 0x1038, 0x1430, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 }, 287 286 { 0x1038, 0x1431, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 }, 287 + { 0x10f5, 0x7005, "Turtle Beach Recon Controller", 0, XTYPE_XBOXONE }, 288 288 { 0x11c9, 0x55f0, "Nacon GC-100XF", 0, XTYPE_XBOX360 }, 289 289 { 0x11ff, 0x0511, "PXN V900", 0, XTYPE_XBOX360 }, 290 290 { 0x1209, 0x2882, "Ardwiino Controller", 0, XTYPE_XBOX360 }, ··· 312 306 { 0x1689, 0xfe00, "Razer Sabertooth", 0, XTYPE_XBOX360 }, 313 307 { 0x17ef, 0x6182, "Lenovo Legion Controller for Windows", 0, XTYPE_XBOX360 }, 314 308 { 0x1949, 0x041a, "Amazon Game Controller", 0, XTYPE_XBOX360 }, 315 - { 0x1a86, 0xe310, "QH Electronics Controller", 0, XTYPE_XBOX360 }, 309 + { 0x1a86, 0xe310, "Legion Go S", 0, XTYPE_XBOX360 }, 316 310 { 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 }, 317 311 { 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, 318 312 { 0x1bad, 0x0130, "Ion Drum Rocker", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 }, ··· 349 343 { 0x1bad, 0xfa01, "MadCatz GamePad", 0, XTYPE_XBOX360 }, 350 344 { 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 }, 351 345 { 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 }, 346 + { 0x1ee9, 0x1590, "ZOTAC Gaming Zone", 0, XTYPE_XBOX360 }, 352 347 { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE }, 353 348 { 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE }, 354 349 { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 },
··· 373 366 { 0x24c6, 0x5510, "Hori Fighting Commander ONE (Xbox 360/PC Mode)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, 374 367 { 0x24c6, 0x551a, "PowerA FUSION Pro Controller", 0, XTYPE_XBOXONE }, 375 368 { 0x24c6, 0x561a, "PowerA FUSION Controller", 0, XTYPE_XBOXONE }, 369 + { 0x24c6, 0x581a, "ThrustMaster XB1 Classic Controller", 0, XTYPE_XBOXONE }, 376 370 { 0x24c6, 0x5b00, "ThrustMaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 }, 377 371 { 0x24c6, 0x5b02, "Thrustmaster, Inc. GPX Controller", 0, XTYPE_XBOX360 }, 378 372 { 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 }, ··· 382 374 { 0x2563, 0x058d, "OneXPlayer Gamepad", 0, XTYPE_XBOX360 }, 383 375 { 0x294b, 0x3303, "Snakebyte GAMEPAD BASE X", 0, XTYPE_XBOXONE }, 384 376 { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE }, 377 + { 0x2993, 0x2001, "TECNO Pocket Go", 0, XTYPE_XBOX360 }, 385 378 { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE }, 386 379 { 0x2dc8, 0x3106, "8BitDo Ultimate Wireless / Pro 2 Wired Controller", 0, XTYPE_XBOX360 }, 380 + { 0x2dc8, 0x3109, "8BitDo Ultimate Wireless Bluetooth", 0, XTYPE_XBOX360 }, 387 381 { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 }, 382 + { 0x2dc8, 0x6001, "8BitDo SN30 Pro", 0, XTYPE_XBOX360 }, 388 383 { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE }, 384 + { 0x2e24, 0x1688, "Hyperkin X91 X-Box One pad", 0, XTYPE_XBOXONE }, 385 + { 0x2e95, 0x0504, "SCUF Gaming Controller", MAP_SELECT_BUTTON, XTYPE_XBOXONE }, 389 386 { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 }, 390 387 { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 }, 391 388 { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 }, ··· 398 385 { 0x31e3, 0x1230, "Wooting Two HE (ARM)", 0, XTYPE_XBOX360 }, 399 386 { 0x31e3, 0x1300, "Wooting 60HE (AVR)", 0, XTYPE_XBOX360 }, 400 387 { 0x31e3, 0x1310, "Wooting 60HE (ARM)", 0, XTYPE_XBOX360 }, 388 + { 0x3285, 0x0603, "Nacon Pro Compact controller for Xbox", 0, XTYPE_XBOXONE },
401 389 { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 }, 390 + { 0x3285, 0x0614, "Nacon Pro Compact", 0, XTYPE_XBOXONE }, 402 391 { 0x3285, 0x0646, "Nacon Pro Compact", 0, XTYPE_XBOXONE }, 392 + { 0x3285, 0x0662, "Nacon Revolution5 Pro", 0, XTYPE_XBOX360 }, 403 393 { 0x3285, 0x0663, "Nacon Evol-X", 0, XTYPE_XBOXONE }, 404 394 { 0x3537, 0x1004, "GameSir T4 Kaleid", 0, XTYPE_XBOX360 }, 395 + { 0x3537, 0x1010, "GameSir G7 SE", 0, XTYPE_XBOXONE }, 405 396 { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX }, 397 + { 0x413d, 0x2104, "Black Shark Green Ghost Gamepad", 0, XTYPE_XBOX360 }, 406 398 { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX }, 407 399 { 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN } 408 400 }; ··· 506 488 XPAD_XBOX360_VENDOR(0x03f0), /* HP HyperX Xbox 360 controllers */ 507 489 XPAD_XBOXONE_VENDOR(0x03f0), /* HP HyperX Xbox One controllers */ 508 490 XPAD_XBOX360_VENDOR(0x044f), /* Thrustmaster Xbox 360 controllers */ 491 + XPAD_XBOXONE_VENDOR(0x044f), /* Thrustmaster Xbox One controllers */ 509 492 XPAD_XBOX360_VENDOR(0x045e), /* Microsoft Xbox 360 controllers */ 510 493 XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft Xbox One controllers */ 511 494 XPAD_XBOX360_VENDOR(0x046d), /* Logitech Xbox 360-style controllers */ ··· 538 519 XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */ 539 520 XPAD_XBOX360_VENDOR(0x17ef), /* Lenovo */ 540 521 XPAD_XBOX360_VENDOR(0x1949), /* Amazon controllers */ 541 - XPAD_XBOX360_VENDOR(0x1a86), /* QH Electronics */ 522 + XPAD_XBOX360_VENDOR(0x1a86), /* Nanjing Qinheng Microelectronics (WCH) */ 542 523 XPAD_XBOX360_VENDOR(0x1bad), /* Harmonix Rock Band guitar and drums */ 524 + XPAD_XBOX360_VENDOR(0x1ee9), /* ZOTAC Technology Limited */ 543 525 XPAD_XBOX360_VENDOR(0x20d6), /* PowerA controllers */ 544 526 XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA controllers */ 545 527 XPAD_XBOX360_VENDOR(0x2345), /* Machenike Controllers */ ··· 548 528 XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA 
controllers */ 549 529 XPAD_XBOX360_VENDOR(0x2563), /* OneXPlayer Gamepad */ 550 530 XPAD_XBOX360_VENDOR(0x260d), /* Dareu H101 */ 551 - XPAD_XBOXONE_VENDOR(0x294b), /* Snakebyte */ 531 + XPAD_XBOXONE_VENDOR(0x294b), /* Snakebyte */ 532 + XPAD_XBOX360_VENDOR(0x2993), /* TECNO Mobile */ 552 533 XPAD_XBOX360_VENDOR(0x2c22), /* Qanba Controllers */ 553 - XPAD_XBOX360_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller */ 554 - XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller for Xbox */ 555 - XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke Xbox One pad */ 556 - XPAD_XBOX360_VENDOR(0x2f24), /* GameSir controllers */ 534 + XPAD_XBOX360_VENDOR(0x2dc8), /* 8BitDo Controllers */ 535 + XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Controllers */ 536 + XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Controllers */ 537 + XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */ 538 + XPAD_XBOXONE_VENDOR(0x2e95), /* SCUF Gaming Controller */ 557 539 XPAD_XBOX360_VENDOR(0x31e3), /* Wooting Keyboards */ 558 540 XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */ 559 541 XPAD_XBOXONE_VENDOR(0x3285), /* Nacon Evol-X */ 560 542 XPAD_XBOX360_VENDOR(0x3537), /* GameSir Controllers */ 561 543 XPAD_XBOXONE_VENDOR(0x3537), /* GameSir Controllers */ 544 + XPAD_XBOX360_VENDOR(0x413d), /* Black Shark Green Ghost Controller */ 562 545 { } 563 546 }; 564 547 ··· 714 691 XBOXONE_INIT_PKT(0x045e, 0x0b00, xboxone_s_init), 715 692 XBOXONE_INIT_PKT(0x045e, 0x0b00, extra_input_packet_init), 716 693 XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_led_on), 694 + XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_led_on), 717 695 XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_auth), 696 + XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_auth), 718 697 XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init), 719 698 XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init), 720 699 XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init),
+22 -28
drivers/input/misc/iqs7222.c
··· 100 100 101 101 enum iqs7222_reg_grp_id { 102 102 IQS7222_REG_GRP_STAT, 103 - IQS7222_REG_GRP_FILT, 104 103 IQS7222_REG_GRP_CYCLE, 105 104 IQS7222_REG_GRP_GLBL, 106 105 IQS7222_REG_GRP_BTN, 107 106 IQS7222_REG_GRP_CHAN, 107 + IQS7222_REG_GRP_FILT, 108 108 IQS7222_REG_GRP_SLDR, 109 109 IQS7222_REG_GRP_TPAD, 110 110 IQS7222_REG_GRP_GPIO, ··· 286 286 287 287 struct iqs7222_reg_grp_desc { 288 288 u16 base; 289 + u16 val_len; 289 290 int num_row; 290 291 int num_col; 291 292 }; ··· 343 342 }, 344 343 [IQS7222_REG_GRP_FILT] = { 345 344 .base = 0xAC00, 345 + .val_len = 3, 346 346 .num_row = 1, 347 347 .num_col = 2, 348 348 }, ··· 402 400 }, 403 401 [IQS7222_REG_GRP_FILT] = { 404 402 .base = 0xAC00, 403 + .val_len = 3, 405 404 .num_row = 1, 406 405 .num_col = 2, 407 406 }, ··· 457 454 }, 458 455 [IQS7222_REG_GRP_FILT] = { 459 456 .base = 0xC400, 457 + .val_len = 3, 460 458 .num_row = 1, 461 459 .num_col = 2, 462 460 }, ··· 500 496 }, 501 497 [IQS7222_REG_GRP_FILT] = { 502 498 .base = 0xC400, 499 + .val_len = 3, 503 500 .num_row = 1, 504 501 .num_col = 2, 505 502 }, ··· 548 543 }, 549 544 [IQS7222_REG_GRP_FILT] = { 550 545 .base = 0xAA00, 546 + .val_len = 3, 551 547 .num_row = 1, 552 548 .num_col = 2, 553 549 }, ··· 606 600 }, 607 601 [IQS7222_REG_GRP_FILT] = { 608 602 .base = 0xAA00, 603 + .val_len = 3, 609 604 .num_row = 1, 610 605 .num_col = 2, 611 606 }, ··· 663 656 }, 664 657 [IQS7222_REG_GRP_FILT] = { 665 658 .base = 0xAE00, 659 + .val_len = 3, 666 660 .num_row = 1, 667 661 .num_col = 2, 668 662 }, ··· 720 712 }, 721 713 [IQS7222_REG_GRP_FILT] = { 722 714 .base = 0xAE00, 715 + .val_len = 3, 723 716 .num_row = 1, 724 717 .num_col = 2, 725 718 }, ··· 777 768 }, 778 769 [IQS7222_REG_GRP_FILT] = { 779 770 .base = 0xAE00, 771 + .val_len = 3, 780 772 .num_row = 1, 781 773 .num_col = 2, 782 774 }, ··· 1614 1604 } 1615 1605 1616 1606 static int iqs7222_read_burst(struct iqs7222_private *iqs7222, 1617 - u16 reg, void *val, u16 num_val) 1607 + u16 reg, void *val, u16 
val_len) 1618 1608 { 1619 1609 u8 reg_buf[sizeof(__be16)]; 1620 1610 int ret, i; ··· 1629 1619 { 1630 1620 .addr = client->addr, 1631 1621 .flags = I2C_M_RD, 1632 - .len = num_val * sizeof(__le16), 1622 + .len = val_len, 1633 1623 .buf = (u8 *)val, 1634 1624 }, 1635 1625 }; ··· 1685 1675 __le16 val_buf; 1686 1676 int error; 1687 1677 1688 - error = iqs7222_read_burst(iqs7222, reg, &val_buf, 1); 1678 + error = iqs7222_read_burst(iqs7222, reg, &val_buf, sizeof(val_buf)); 1689 1679 if (error) 1690 1680 return error; 1691 1681 ··· 1695 1685 } 1696 1686 1697 1687 static int iqs7222_write_burst(struct iqs7222_private *iqs7222, 1698 - u16 reg, const void *val, u16 num_val) 1688 + u16 reg, const void *val, u16 val_len) 1699 1689 { 1700 1690 int reg_len = reg > U8_MAX ? sizeof(reg) : sizeof(u8); 1701 - int val_len = num_val * sizeof(__le16); 1702 1691 int msg_len = reg_len + val_len; 1703 1692 int ret, i; 1704 1693 struct i2c_client *client = iqs7222->client; ··· 1756 1747 { 1757 1748 __le16 val_buf = cpu_to_le16(val); 1758 1749 1759 - return iqs7222_write_burst(iqs7222, reg, &val_buf, 1); 1750 + return iqs7222_write_burst(iqs7222, reg, &val_buf, sizeof(val_buf)); 1760 1751 } 1761 1752 1762 1753 static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222) ··· 1840 1831 1841 1832 /* 1842 1833 * Acknowledge reset before writing any registers in case the device 1843 - * suffers a spurious reset during initialization. Because this step 1844 - * may change the reserved fields of the second filter beta register, 1845 - * its cache must be updated. 1846 - * 1847 - * Writing the second filter beta register, in turn, may clobber the 1848 - * system status register. As such, the filter beta register pair is 1849 - * written first to protect against this hazard. 1834 + * suffers a spurious reset during initialization. 
1850 1835 */ 1851 1836 if (dir == WRITE) { 1852 - u16 reg = dev_desc->reg_grps[IQS7222_REG_GRP_FILT].base + 1; 1853 - u16 filt_setup; 1854 - 1855 1837 error = iqs7222_write_word(iqs7222, IQS7222_SYS_SETUP, 1856 1838 iqs7222->sys_setup[0] | 1857 1839 IQS7222_SYS_SETUP_ACK_RESET); 1858 1840 if (error) 1859 1841 return error; 1860 - 1861 - error = iqs7222_read_word(iqs7222, reg, &filt_setup); 1862 - if (error) 1863 - return error; 1864 - 1865 - iqs7222->filt_setup[1] &= GENMASK(7, 0); 1866 - iqs7222->filt_setup[1] |= (filt_setup & ~GENMASK(7, 0)); 1867 1842 } 1868 1843 1869 1844 /* ··· 1876 1883 int num_col = dev_desc->reg_grps[i].num_col; 1877 1884 u16 reg = dev_desc->reg_grps[i].base; 1878 1885 __le16 *val_buf; 1886 + u16 val_len = dev_desc->reg_grps[i].val_len ? : num_col * sizeof(*val_buf); 1879 1887 u16 *val; 1880 1888 1881 1889 if (!num_col) ··· 1894 1900 switch (dir) { 1895 1901 case READ: 1896 1902 error = iqs7222_read_burst(iqs7222, reg, 1897 - val_buf, num_col); 1903 + val_buf, val_len); 1898 1904 for (k = 0; k < num_col; k++) 1899 1905 val[k] = le16_to_cpu(val_buf[k]); 1900 1906 break; ··· 1903 1909 for (k = 0; k < num_col; k++) 1904 1910 val_buf[k] = cpu_to_le16(val[k]); 1905 1911 error = iqs7222_write_burst(iqs7222, reg, 1906 - val_buf, num_col); 1912 + val_buf, val_len); 1907 1913 break; 1908 1914 1909 1915 default: ··· 1956 1962 int error, i; 1957 1963 1958 1964 error = iqs7222_read_burst(iqs7222, IQS7222_PROD_NUM, dev_id, 1959 - ARRAY_SIZE(dev_id)); 1965 + sizeof(dev_id)); 1960 1966 if (error) 1961 1967 return error; 1962 1968 ··· 2909 2915 __le16 status[IQS7222_MAX_COLS_STAT]; 2910 2916 2911 2917 error = iqs7222_read_burst(iqs7222, IQS7222_SYS_STATUS, status, 2912 - num_stat); 2918 + num_stat * sizeof(*status)); 2913 2919 if (error) 2914 2920 return error; 2915 2921
+55 -56
drivers/input/serio/i8042-acpipnpio.h
··· 1080 1080 DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"), 1081 1081 DMI_MATCH(DMI_BOARD_NAME, "AURA1501"), 1082 1082 }, 1083 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1084 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1083 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1085 1084 }, 1086 1085 { 1087 1086 .matches = { 1088 1087 DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"), 1089 1088 DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"), 1090 1089 }, 1091 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1092 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1090 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1093 1091 }, 1094 1092 { 1095 1093 /* Mivvy M310 */ ··· 1157 1159 }, 1158 1160 /* 1159 1161 * A lot of modern Clevo barebones have touchpad and/or keyboard issues 1160 - * after suspend fixable with nomux + reset + noloop + nopnp. Luckily, 1161 - * none of them have an external PS/2 port so this can safely be set for 1162 - * all of them. 1162 + * after suspend fixable with the forcenorestore quirk. 1163 1163 * Clevo barebones come with board_vendor and/or system_vendor set to 1164 1164 * either the very generic string "Notebook" and/or a different value 1165 1165 * for each individual reseller. 
The only somewhat universal way to ··· 1167 1171 .matches = { 1168 1172 DMI_MATCH(DMI_BOARD_NAME, "LAPQC71A"), 1169 1173 }, 1170 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1171 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1174 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1172 1175 }, 1173 1176 { 1174 1177 .matches = { 1175 1178 DMI_MATCH(DMI_BOARD_NAME, "LAPQC71B"), 1176 1179 }, 1177 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1178 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1180 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1179 1181 }, 1180 1182 { 1181 1183 .matches = { 1182 1184 DMI_MATCH(DMI_BOARD_NAME, "N140CU"), 1183 1185 }, 1184 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1185 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1186 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1186 1187 }, 1187 1188 { 1188 1189 .matches = { 1189 1190 DMI_MATCH(DMI_BOARD_NAME, "N141CU"), 1190 1191 }, 1191 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1192 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1192 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1193 1193 }, 1194 1194 { 1195 1195 .matches = { ··· 1197 1205 .matches = { 1198 1206 DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"), 1199 1207 }, 1200 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1201 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1208 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1202 1209 }, 1203 1210 { 1204 - /* 1205 - * Setting SERIO_QUIRK_NOMUX or SERIO_QUIRK_RESET_ALWAYS makes 1206 - * the keyboard very laggy for ~5 seconds after boot and 1207 - * sometimes also after resume. 1208 - * However both are required for the keyboard to not fail 1209 - * completely sometimes after boot or resume. 
1210 - */ 1211 1211 .matches = { 1212 1212 DMI_MATCH(DMI_BOARD_NAME, "NHxxRZQ"), 1213 1213 }, 1214 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1215 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1214 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1216 1215 }, 1217 1216 { 1218 1217 .matches = { 1219 1218 DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"), 1220 1219 }, 1221 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1222 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1220 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1223 1221 }, 1224 1222 /* 1225 1223 * At least one modern Clevo barebone has the touchpad connected both ··· 1225 1243 .matches = { 1226 1244 DMI_MATCH(DMI_BOARD_NAME, "NS50MU"), 1227 1245 }, 1228 - .driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX | 1229 - SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP | 1230 - SERIO_QUIRK_NOPNP) 1246 + .driver_data = (void *)(SERIO_QUIRK_NOAUX | 1247 + SERIO_QUIRK_FORCENORESTORE) 1231 1248 }, 1232 1249 { 1233 1250 .matches = { 1234 1251 DMI_MATCH(DMI_BOARD_NAME, "NS50_70MU"), 1235 1252 }, 1236 - .driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX | 1237 - SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP | 1238 - SERIO_QUIRK_NOPNP) 1253 + .driver_data = (void *)(SERIO_QUIRK_NOAUX | 1254 + SERIO_QUIRK_FORCENORESTORE) 1239 1255 }, 1240 1256 { 1241 1257 .matches = { ··· 1245 1265 .matches = { 1246 1266 DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"), 1247 1267 }, 1248 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1249 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1268 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1269 + }, 1270 + { 1271 + .matches = { 1272 + DMI_MATCH(DMI_BOARD_NAME, "P640RE"), 1273 + }, 1274 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1250 1275 }, 1251 1276 { 1252 1277 /* ··· 1262 1277 .matches = { 1263 1278 DMI_MATCH(DMI_PRODUCT_NAME, "P65xH"), 1264 1279 }, 1265 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | 
SERIO_QUIRK_RESET_ALWAYS | 1266 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1280 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1267 1281 }, 1268 1282 { 1269 1283 /* Clevo P650RS, 650RP6, Sager NP8152-S, and others */ 1270 1284 .matches = { 1271 1285 DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"), 1272 1286 }, 1273 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1274 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1287 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1275 1288 }, 1276 1289 { 1277 1290 /* ··· 1280 1297 .matches = { 1281 1298 DMI_MATCH(DMI_PRODUCT_NAME, "P65_P67H"), 1282 1299 }, 1283 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1284 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1300 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1285 1301 }, 1286 1302 { 1287 1303 /* ··· 1291 1309 .matches = { 1292 1310 DMI_MATCH(DMI_PRODUCT_NAME, "P65_67RP"), 1293 1311 }, 1294 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1295 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1312 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1296 1313 }, 1297 1314 { 1298 1315 /* ··· 1302 1321 .matches = { 1303 1322 DMI_MATCH(DMI_PRODUCT_NAME, "P65_67RS"), 1304 1323 }, 1305 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1306 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1324 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1307 1325 }, 1308 1326 { 1309 1327 /* ··· 1313 1333 .matches = { 1314 1334 DMI_MATCH(DMI_PRODUCT_NAME, "P67xRP"), 1315 1335 }, 1316 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1317 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1336 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1318 1337 }, 1319 1338 { 1320 1339 .matches = { 1321 1340 DMI_MATCH(DMI_BOARD_NAME, "PB50_70DFx,DDx"), 1322 1341 }, 1323 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1324 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1342 + 
.driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1343 + }, 1344 + { 1345 + .matches = { 1346 + DMI_MATCH(DMI_BOARD_NAME, "PB51RF"), 1347 + }, 1348 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1349 + }, 1350 + { 1351 + .matches = { 1352 + DMI_MATCH(DMI_BOARD_NAME, "PB71RD"), 1353 + }, 1354 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1355 + }, 1356 + { 1357 + .matches = { 1358 + DMI_MATCH(DMI_BOARD_NAME, "PC70DR"), 1359 + }, 1360 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1325 1361 }, 1326 1362 { 1327 1363 .matches = { 1328 1364 DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"), 1329 1365 }, 1330 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1331 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1366 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1367 + }, 1368 + { 1369 + .matches = { 1370 + DMI_MATCH(DMI_BOARD_NAME, "PCX0DX_GN20"), 1371 + }, 1372 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1332 1373 }, 1333 1374 /* See comment on TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU above */ 1334 1375 { ··· 1362 1361 .matches = { 1363 1362 DMI_MATCH(DMI_BOARD_NAME, "X170SM"), 1364 1363 }, 1365 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1366 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1364 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1367 1365 }, 1368 1366 { 1369 1367 .matches = { 1370 1368 DMI_MATCH(DMI_BOARD_NAME, "X170KM-G"), 1371 1369 }, 1372 - .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1373 - SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1370 + .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE) 1374 1371 }, 1375 1372 { 1376 1373 /*
+1 -1
drivers/input/touchscreen/ads7846.c
··· 1021 1021 if (pdata->get_pendown_state) { 1022 1022 ts->get_pendown_state = pdata->get_pendown_state; 1023 1023 } else { 1024 - ts->gpio_pendown = gpiod_get(&spi->dev, "pendown", GPIOD_IN); 1024 + ts->gpio_pendown = devm_gpiod_get(&spi->dev, "pendown", GPIOD_IN); 1025 1025 if (IS_ERR(ts->gpio_pendown)) { 1026 1026 dev_err(&spi->dev, "failed to request pendown GPIO\n"); 1027 1027 return PTR_ERR(ts->gpio_pendown);
+13 -13
drivers/input/touchscreen/goodix_berlin_core.c
··· 165 165 struct device *dev; 166 166 struct regmap *regmap; 167 167 struct regulator *avdd; 168 - struct regulator *iovdd; 168 + struct regulator *vddio; 169 169 struct gpio_desc *reset_gpio; 170 170 struct touchscreen_properties props; 171 171 struct goodix_berlin_fw_version fw_version; ··· 248 248 { 249 249 int error; 250 250 251 - error = regulator_enable(cd->iovdd); 251 + error = regulator_enable(cd->vddio); 252 252 if (error) { 253 - dev_err(cd->dev, "Failed to enable iovdd: %d\n", error); 253 + dev_err(cd->dev, "Failed to enable vddio: %d\n", error); 254 254 return error; 255 255 } 256 256 257 - /* Vendor waits 3ms for IOVDD to settle */ 257 + /* Vendor waits 3ms for VDDIO to settle */ 258 258 usleep_range(3000, 3100); 259 259 260 260 error = regulator_enable(cd->avdd); 261 261 if (error) { 262 262 dev_err(cd->dev, "Failed to enable avdd: %d\n", error); 263 - goto err_iovdd_disable; 263 + goto err_vddio_disable; 264 264 } 265 265 266 - /* Vendor waits 15ms for IOVDD to settle */ 266 + /* Vendor waits 15ms for AVDD to settle */ 267 267 usleep_range(15000, 15100); 268 268 269 269 gpiod_set_value_cansleep(cd->reset_gpio, 0); ··· 283 283 err_dev_reset: 284 284 gpiod_set_value_cansleep(cd->reset_gpio, 1); 285 285 regulator_disable(cd->avdd); 286 - err_iovdd_disable: 287 - regulator_disable(cd->iovdd); 286 + err_vddio_disable: 287 + regulator_disable(cd->vddio); 288 288 return error; 289 289 } 290 290 ··· 292 292 { 293 293 gpiod_set_value_cansleep(cd->reset_gpio, 1); 294 294 regulator_disable(cd->avdd); 295 - regulator_disable(cd->iovdd); 295 + regulator_disable(cd->vddio); 296 296 } 297 297 298 298 static int goodix_berlin_read_version(struct goodix_berlin_core *cd) ··· 744 744 return dev_err_probe(dev, PTR_ERR(cd->avdd), 745 745 "Failed to request avdd regulator\n"); 746 746 747 - cd->iovdd = devm_regulator_get(dev, "iovdd"); 748 - if (IS_ERR(cd->iovdd)) 749 - return dev_err_probe(dev, PTR_ERR(cd->iovdd), 750 - "Failed to request iovdd regulator\n"); 747 + 
cd->vddio = devm_regulator_get(dev, "vddio"); 748 + if (IS_ERR(cd->vddio)) 749 + return dev_err_probe(dev, PTR_ERR(cd->vddio), 750 + "Failed to request vddio regulator\n"); 751 751 752 752 error = goodix_berlin_power_on(cd); 753 753 if (error) {
+9
drivers/input/touchscreen/imagis.c
··· 22 22 23 23 #define IST3032C_WHOAMI 0x32c 24 24 #define IST3038C_WHOAMI 0x38c 25 + #define IST3038H_WHOAMI 0x38d 25 26 26 27 #define IST3038B_REG_CHIPID 0x30 27 28 #define IST3038B_WHOAMI 0x30380b ··· 429 428 .protocol_b = true, 430 429 }; 431 430 431 + static const struct imagis_properties imagis_3038h_data = { 432 + .interrupt_msg_cmd = IST3038C_REG_INTR_MESSAGE, 433 + .touch_coord_cmd = IST3038C_REG_TOUCH_COORD, 434 + .whoami_cmd = IST3038C_REG_CHIPID, 435 + .whoami_val = IST3038H_WHOAMI, 436 + }; 437 + 432 438 static const struct of_device_id imagis_of_match[] = { 433 439 { .compatible = "imagis,ist3032c", .data = &imagis_3032c_data }, 434 440 { .compatible = "imagis,ist3038", .data = &imagis_3038_data }, 435 441 { .compatible = "imagis,ist3038b", .data = &imagis_3038b_data }, 436 442 { .compatible = "imagis,ist3038c", .data = &imagis_3038c_data }, 443 + { .compatible = "imagis,ist3038h", .data = &imagis_3038h_data }, 437 444 { }, 438 445 }; 439 446 MODULE_DEVICE_TABLE(of, imagis_of_match);
+2
drivers/input/touchscreen/wdt87xx_i2c.c
··· 1153 1153 }; 1154 1154 MODULE_DEVICE_TABLE(i2c, wdt87xx_dev_id); 1155 1155 1156 + #ifdef CONFIG_ACPI 1156 1157 static const struct acpi_device_id wdt87xx_acpi_id[] = { 1157 1158 { "WDHT0001", 0 }, 1158 1159 { } 1159 1160 }; 1160 1161 MODULE_DEVICE_TABLE(acpi, wdt87xx_acpi_id); 1162 + #endif 1161 1163 1162 1164 static struct i2c_driver wdt87xx_driver = { 1163 1165 .probe = wdt87xx_ts_probe,
+10 -11
drivers/leds/leds-st1202.c
··· 261 261 int err, reg; 262 262 263 263 for_each_available_child_of_node_scoped(dev_of_node(dev), child) { 264 - struct led_init_data init_data = {}; 265 - 266 264 err = of_property_read_u32(child, "reg", &reg); 267 265 if (err) 268 266 return dev_err_probe(dev, err, "Invalid register\n"); ··· 274 276 led->led_cdev.pattern_set = st1202_led_pattern_set; 275 277 led->led_cdev.pattern_clear = st1202_led_pattern_clear; 276 278 led->led_cdev.default_trigger = "pattern"; 277 - 278 - init_data.fwnode = led->fwnode; 279 - init_data.devicename = "st1202"; 280 - init_data.default_label = ":"; 281 - 282 - err = devm_led_classdev_register_ext(dev, &led->led_cdev, &init_data); 283 - if (err < 0) 284 - return dev_err_probe(dev, err, "Failed to register LED class device\n"); 285 - 286 279 led->led_cdev.brightness_set = st1202_brightness_set; 287 280 led->led_cdev.brightness_get = st1202_brightness_get; 288 281 } ··· 357 368 return ret; 358 369 359 370 for (int i = 0; i < ST1202_MAX_LEDS; i++) { 371 + struct led_init_data init_data = {}; 360 372 led = &chip->leds[i]; 361 373 led->chip = chip; 362 374 led->led_num = i; ··· 374 384 if (ret < 0) 375 385 return dev_err_probe(&client->dev, ret, 376 386 "Failed to clear LED pattern\n"); 387 + 388 + init_data.fwnode = led->fwnode; 389 + init_data.devicename = "st1202"; 390 + init_data.default_label = ":"; 391 + 392 + ret = devm_led_classdev_register_ext(&client->dev, &led->led_cdev, &init_data); 393 + if (ret < 0) 394 + return dev_err_probe(&client->dev, ret, 395 + "Failed to register LED class device\n"); 377 396 } 378 397 379 398 return 0;
+1 -1
drivers/md/dm-flakey.c
··· 426 426 if (!clone) 427 427 return NULL; 428 428 429 - bio_init(clone, fc->dev->bdev, bio->bi_inline_vecs, nr_iovecs, bio->bi_opf); 429 + bio_init(clone, fc->dev->bdev, clone->bi_inline_vecs, nr_iovecs, bio->bi_opf); 430 430 431 431 clone->bi_iter.bi_sector = flakey_map_sector(ti, bio->bi_iter.bi_sector); 432 432 clone->bi_private = bio;
+1 -1
drivers/media/dvb-frontends/rtl2832_sdr.c
··· 1363 1363 dev->vb_queue.ops = &rtl2832_sdr_vb2_ops; 1364 1364 dev->vb_queue.mem_ops = &vb2_vmalloc_memops; 1365 1365 dev->vb_queue.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC; 1366 + dev->vb_queue.lock = &dev->vb_queue_lock; 1366 1367 ret = vb2_queue_init(&dev->vb_queue); 1367 1368 if (ret) { 1368 1369 dev_err(&pdev->dev, "Could not initialize vb2 queue\n"); ··· 1422 1421 /* Init video_device structure */ 1423 1422 dev->vdev = rtl2832_sdr_template; 1424 1423 dev->vdev.queue = &dev->vb_queue; 1425 - dev->vdev.queue->lock = &dev->vb_queue_lock; 1426 1424 video_set_drvdata(&dev->vdev, dev); 1427 1425 1428 1426 /* Register the v4l2_device structure */
-15
drivers/misc/cardreader/rtsx_usb.c
··· 286 286 int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status) 287 287 { 288 288 int ret; 289 - u8 interrupt_val = 0; 290 289 u16 *buf; 291 290 292 291 if (!status) ··· 307 308 } else { 308 309 ret = rtsx_usb_get_status_with_bulk(ucr, status); 309 310 } 310 - 311 - rtsx_usb_read_register(ucr, CARD_INT_PEND, &interrupt_val); 312 - /* Cross check presence with interrupts */ 313 - if (*status & XD_CD) 314 - if (!(interrupt_val & XD_INT)) 315 - *status &= ~XD_CD; 316 - 317 - if (*status & SD_CD) 318 - if (!(interrupt_val & SD_INT)) 319 - *status &= ~SD_CD; 320 - 321 - if (*status & MS_CD) 322 - if (!(interrupt_val & MS_INT)) 323 - *status &= ~MS_CD; 324 311 325 312 /* usb_control_msg may return positive when success */ 326 313 if (ret < 0)
+1 -1
drivers/misc/eeprom/digsy_mtc_eeprom.c
··· 50 50 }; 51 51 52 52 static struct gpiod_lookup_table eeprom_spi_gpiod_table = { 53 - .dev_id = "spi_gpio", 53 + .dev_id = "spi_gpio.1", 54 54 .table = { 55 55 GPIO_LOOKUP("gpio@b00", GPIO_EEPROM_CLK, 56 56 "sck", GPIO_ACTIVE_HIGH),
+2
drivers/misc/mei/hw-me-regs.h
··· 117 117 118 118 #define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */ 119 119 120 + #define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */ 121 + 120 122 /* 121 123 * MEI HW Section 122 124 */
+2
drivers/misc/mei/pci-me.c
··· 124 124 125 125 {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)}, 126 126 127 + {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)}, 128 + 127 129 /* required last entry */ 128 130 {0, } 129 131 };
+1 -1
drivers/misc/mei/vsc-tp.c
··· 502 502 if (ret) 503 503 return ret; 504 504 505 - tp->wakeuphost = devm_gpiod_get(dev, "wakeuphost", GPIOD_IN); 505 + tp->wakeuphost = devm_gpiod_get(dev, "wakeuphostint", GPIOD_IN); 506 506 if (IS_ERR(tp->wakeuphost)) 507 507 return PTR_ERR(tp->wakeuphost); 508 508
+4 -3
drivers/misc/ntsync.c
··· 873 873 { 874 874 int fds[NTSYNC_MAX_WAIT_COUNT + 1]; 875 875 const __u32 count = args->count; 876 + size_t size = array_size(count, sizeof(fds[0])); 876 877 struct ntsync_q *q; 877 878 __u32 total_count; 878 879 __u32 i, j; ··· 881 880 if (args->pad || (args->flags & ~NTSYNC_WAIT_REALTIME)) 882 881 return -EINVAL; 883 882 884 - if (args->count > NTSYNC_MAX_WAIT_COUNT) 883 + if (size >= sizeof(fds)) 885 884 return -EINVAL; 886 885 887 886 total_count = count; 888 887 if (args->alert) 889 888 total_count++; 890 889 891 - if (copy_from_user(fds, u64_to_user_ptr(args->objs), 892 - array_size(count, sizeof(*fds)))) 890 + if (copy_from_user(fds, u64_to_user_ptr(args->objs), size)) 893 891 return -EFAULT; 894 892 if (args->alert) 895 893 fds[count] = args->alert; ··· 1208 1208 .minor = MISC_DYNAMIC_MINOR, 1209 1209 .name = NTSYNC_NAME, 1210 1210 .fops = &ntsync_fops, 1211 + .mode = 0666, 1211 1212 }; 1212 1213 1213 1214 module_misc_device(ntsync_misc);
+47 -8
drivers/net/bonding/bond_options.c
··· 1242 1242 slave->dev->flags & IFF_MULTICAST; 1243 1243 } 1244 1244 1245 + /** 1246 + * slave_set_ns_maddrs - add/del all NS mac addresses for slave 1247 + * @bond: bond device 1248 + * @slave: slave device 1249 + * @add: add or remove all the NS mac addresses 1250 + * 1251 + * This function tries to add or delete all the NS mac addresses on the slave 1252 + * 1253 + * Note, the IPv6 NS target address is the unicast address in Neighbor 1254 + * Solicitation (NS) message. The dest address of NS message should be 1255 + * solicited-node multicast address of the target. The dest mac of NS message 1256 + * is converted from the solicited-node multicast address. 1257 + * 1258 + * This function is called when 1259 + * * arp_validate changes 1260 + * * enslaving, releasing new slaves 1261 + */ 1245 1262 static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool add) 1246 1263 { 1247 1264 struct in6_addr *targets = bond->params.ns_targets; 1248 1265 char slot_maddr[MAX_ADDR_LEN]; 1266 + struct in6_addr mcaddr; 1249 1267 int i; 1250 1268 1251 1269 if (!slave_can_set_ns_maddr(bond, slave))
··· 1273 1255 if (ipv6_addr_any(&targets[i])) 1274 1256 break; 1275 1257 1276 - if (!ndisc_mc_map(&targets[i], slot_maddr, slave->dev, 0)) { 1258 + addrconf_addr_solict_mult(&targets[i], &mcaddr); 1259 + if (!ndisc_mc_map(&mcaddr, slot_maddr, slave->dev, 0)) { 1277 1260 if (add) 1278 1261 dev_mc_add(slave->dev, slot_maddr); 1279 1262 else
··· 1297 1278 slave_set_ns_maddrs(bond, slave, false); 1298 1279 } 1299 1280 1281 + /** 1282 + * slave_set_ns_maddr - set new NS mac address for slave 1283 + * @bond: bond device 1284 + * @slave: slave device 1285 + * @target: the new IPv6 target 1286 + * @slot: the old IPv6 target in the slot 1287 + * 1288 + * This function tries to replace the old mac address to new one on the slave. 1289 + * 1290 + * Note, the target/slot IPv6 address is the unicast address in Neighbor 1291 + * Solicitation (NS) message. The dest address of NS message should be 1292 + * solicited-node multicast address of the target. The dest mac of NS message 1293 + * is converted from the solicited-node multicast address. 1294 + * 1295 + * This function is called when 1296 + * * An IPv6 NS target is added or removed. 1297 + */ 1300 1298 static void slave_set_ns_maddr(struct bonding *bond, struct slave *slave, 1301 1299 struct in6_addr *target, struct in6_addr *slot) 1302 1300 { 1303 - char target_maddr[MAX_ADDR_LEN], slot_maddr[MAX_ADDR_LEN]; 1301 + char mac_addr[MAX_ADDR_LEN]; 1302 + struct in6_addr mcast_addr; 1304 1303 1305 1304 if (!bond->params.arp_validate || !slave_can_set_ns_maddr(bond, slave)) 1306 1305 return; 1307 1306 1308 - /* remove the previous maddr from slave */ 1307 + /* remove the previous mac addr from slave */ 1308 + addrconf_addr_solict_mult(slot, &mcast_addr); 1309 1309 if (!ipv6_addr_any(slot) && 1310 - !ndisc_mc_map(slot, slot_maddr, slave->dev, 0)) 1311 - dev_mc_del(slave->dev, slot_maddr); 1310 + !ndisc_mc_map(&mcast_addr, mac_addr, slave->dev, 0)) 1311 + dev_mc_del(slave->dev, mac_addr); 1312 1312 1313 - /* add new maddr on slave if target is set */ 1313 + /* add new mac addr on slave if target is set */ 1314 + addrconf_addr_solict_mult(target, &mcast_addr); 1314 1315 if (!ipv6_addr_any(target) && 1315 - !ndisc_mc_map(target, target_maddr, slave->dev, 0) 1316 + !ndisc_mc_map(&mcast_addr, mac_addr, slave->dev, 0) 1316 - dev_mc_add(slave->dev, target_maddr); 1317 + dev_mc_add(slave->dev, mac_addr); 1317 1318 } 1318 1319 1319 1320 static void _bond_options_ns_ip6_target_set(struct bonding *bond, int slot,
+1 -1
drivers/net/caif/caif_virtio.c
··· 745 745 746 746 if (cfv->vr_rx) 747 747 vdev->vringh_config->del_vrhs(cfv->vdev); 748 - if (cfv->vdev) 748 + if (cfv->vq_tx) 749 749 vdev->config->del_vqs(cfv->vdev); 750 750 free_netdev(netdev); 751 751 return err;
+2 -6
drivers/net/dsa/mt7530.c
··· 2591 2591 if (ret < 0) 2592 2592 return ret; 2593 2593 2594 - return 0; 2594 + /* Setup VLAN ID 0 for VLAN-unaware bridges */ 2595 + return mt7530_setup_vlan0(priv); 2595 2596 } 2596 2597 2597 2598 static int ··· 2685 2684 } 2686 2685 2687 2686 ret = mt7531_setup_common(ds); 2688 - if (ret) 2689 - return ret; 2690 - 2691 - /* Setup VLAN ID 0 for VLAN-unaware bridges */ 2692 - ret = mt7530_setup_vlan0(priv); 2693 2687 if (ret) 2694 2688 return ret; 2695 2689
+48 -11
drivers/net/dsa/mv88e6xxx/chip.c
··· 2208 2208 return err; 2209 2209 } 2210 2210 2211 - static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, 2212 - const unsigned char *addr, u16 vid, 2213 - u8 state) 2211 + static int mv88e6xxx_port_db_get(struct mv88e6xxx_chip *chip, 2212 + const unsigned char *addr, u16 vid, 2213 + u16 *fid, struct mv88e6xxx_atu_entry *entry) 2214 2214 { 2215 - struct mv88e6xxx_atu_entry entry; 2216 2215 struct mv88e6xxx_vtu_entry vlan; 2217 - u16 fid; 2218 2216 int err; 2219 2217 2220 2218 /* Ports have two private address databases: one for when the port is
··· 2223 2225 * VLAN ID into the port's database used for VLAN-unaware bridging. 2224 2226 */ 2225 2227 if (vid == 0) { 2226 - fid = MV88E6XXX_FID_BRIDGED; 2228 + *fid = MV88E6XXX_FID_BRIDGED; 2227 2229 } else { 2228 2230 err = mv88e6xxx_vtu_get(chip, vid, &vlan); 2229 2231 if (err)
··· 2233 2235 if (!vlan.valid) 2234 2236 return -EOPNOTSUPP; 2235 2237 2236 - fid = vlan.fid; 2238 + *fid = vlan.fid; 2237 2239 } 2238 2240 2239 - entry.state = 0; 2240 - ether_addr_copy(entry.mac, addr); 2241 - eth_addr_dec(entry.mac); 2241 + entry->state = 0; 2242 + ether_addr_copy(entry->mac, addr); 2243 + eth_addr_dec(entry->mac); 2242 2244 2243 - err = mv88e6xxx_g1_atu_getnext(chip, fid, &entry); 2245 + return mv88e6xxx_g1_atu_getnext(chip, *fid, entry); 2246 + } 2247 + 2248 + static bool mv88e6xxx_port_db_find(struct mv88e6xxx_chip *chip, 2249 + const unsigned char *addr, u16 vid) 2250 + { 2251 + struct mv88e6xxx_atu_entry entry; 2252 + u16 fid; 2253 + int err; 2254 + 2255 + err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry); 2256 + if (err) 2257 + return false; 2258 + 2259 + return entry.state && ether_addr_equal(entry.mac, addr); 2260 + } 2261 + 2262 + static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port, 2263 + const unsigned char *addr, u16 vid, 2264 + u8 state) 2265 + { 2266 + struct mv88e6xxx_atu_entry entry; 2267 + u16 fid; 2268 + int err; 2269 + 2270 + err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry); 2244 2271 if (err) 2245 2272 return err; 2246 2273
··· 2869 2846 mv88e6xxx_reg_lock(chip); 2870 2847 err = mv88e6xxx_port_db_load_purge(chip, port, addr, vid, 2871 2848 MV88E6XXX_G1_ATU_DATA_STATE_UC_STATIC); 2849 + if (err) 2850 + goto out; 2851 + 2852 + if (!mv88e6xxx_port_db_find(chip, addr, vid)) 2853 + err = -ENOSPC; 2854 + 2855 + out: 2872 2856 mv88e6xxx_reg_unlock(chip); 2873 2857 2874 2858 return err;
··· 6644 6614 mv88e6xxx_reg_lock(chip); 6645 6615 err = mv88e6xxx_port_db_load_purge(chip, port, mdb->addr, mdb->vid, 6646 6616 MV88E6XXX_G1_ATU_DATA_STATE_MC_STATIC); 6617 + if (err) 6618 + goto out; 6619 + 6620 + if (!mv88e6xxx_port_db_find(chip, mdb->addr, mdb->vid)) 6621 + err = -ENOSPC; 6622 + 6623 + out: 6647 6624 mv88e6xxx_reg_unlock(chip); 6648 6625 6649 6626 return err;
+1 -1
drivers/net/dsa/realtek/Kconfig
··· 44 44 Select to enable support for Realtek RTL8366RB. 45 45 46 46 config NET_DSA_REALTEK_RTL8366RB_LEDS 47 - bool "Support RTL8366RB LED control" 47 + bool 48 48 depends on (LEDS_CLASS=y || LEDS_CLASS=NET_DSA_REALTEK_RTL8366RB) 49 49 depends on NET_DSA_REALTEK_RTL8366RB 50 50 default NET_DSA_REALTEK_RTL8366RB
+22 -3
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 2038 2038 struct rx_cmp_ext *rxcmp1; 2039 2039 u32 tmp_raw_cons = *raw_cons; 2040 2040 u16 cons, prod, cp_cons = RING_CMP(tmp_raw_cons); 2041 + struct skb_shared_info *sinfo; 2041 2042 struct bnxt_sw_rx_bd *rx_buf; 2042 2043 unsigned int len; 2043 2044 u8 *data_ptr, agg_bufs, cmp_type;
··· 2165 2164 false); 2166 2165 if (!frag_len) 2167 2166 goto oom_next_rx; 2167 2168 + 2168 2168 } 2169 2169 xdp_active = true; 2170 2170 }
··· 2174 2172 if (bnxt_rx_xdp(bp, rxr, cons, &xdp, data, &data_ptr, &len, event)) { 2175 2173 rc = 1; 2176 2174 goto next_rx; 2175 + } 2176 + if (xdp_buff_has_frags(&xdp)) { 2177 + sinfo = xdp_get_shared_info_from_buff(&xdp); 2178 + agg_bufs = sinfo->nr_frags; 2179 + } else { 2180 + agg_bufs = 0; 2177 2181 } 2178 2182 }
··· 2218 2210 if (!skb) 2219 2211 goto oom_next_rx; 2220 2212 } else { 2221 - skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, rxr->page_pool, &xdp, rxcmp1); 2213 + skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, 2214 + rxr->page_pool, &xdp); 2222 2215 if (!skb) { 2223 2216 /* we should be able to free the old skb here */ 2224 2217 bnxt_xdp_buff_frags_free(rxr, &xdp);
··· 15384 15375 struct bnxt_cp_ring_info *cpr; 15385 15376 u64 *sw; 15386 15377 15378 + if (!bp->bnapi) 15379 + return; 15380 + 15387 15381 cpr = &bp->bnapi[i]->cp_ring; 15388 15382 sw = cpr->stats.sw_stats;
··· 15409 15397 struct bnxt *bp = netdev_priv(dev); 15410 15398 struct bnxt_napi *bnapi; 15411 15399 u64 *sw; 15400 + 15401 + if (!bp->tx_ring) 15402 + return; 15412 15403 15413 15404 bnapi = bp->tx_ring[bp->tx_ring_map[i]].bnapi; 15414 15405 sw = bnapi->cp_ring.stats.sw_stats;
··· 15453 15438 struct bnxt *bp = netdev_priv(dev); 15454 15439 struct bnxt_ring_struct *ring; 15455 15440 int rc; 15441 + 15442 + if (!bp->rx_ring) 15443 + return -ENETDOWN; 15456 15444 15457 15445 rxr = &bp->rx_ring[idx]; 15458 15446 clone = qmem;
··· 15539 15521 struct bnxt_ring_struct *ring; 15540 15522 15541 15523 bnxt_free_one_rx_ring_skbs(bp, rxr); 15524 + bnxt_free_one_tpa_info(bp, rxr); 15542 15525 15543 15526 xdp_rxq_info_unreg(&rxr->xdp_rxq); 15544 15527
··· 15651 15632 cpr = &rxr->bnapi->cp_ring; 15652 15633 cpr->sw_stats->rx.rx_resets++; 15653 15634 15654 - for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) { 15635 + for (i = 0; i <= bp->nr_vnics; i++) { 15655 15636 vnic = &bp->vnic_info[i]; 15656 15637 15657 15638 rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
··· 15679 15660 struct bnxt_vnic_info *vnic; 15680 15661 int i; 15681 15662 15682 - for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) { 15663 + for (i = 0; i <= bp->nr_vnics; i++) { 15683 15664 vnic = &bp->vnic_info[i]; 15684 15665 vnic->mru = 0; 15685 15666 bnxt_hwrm_vnic_update(bp, vnic,
+3 -10
drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
··· 460 460 461 461 struct sk_buff * 462 462 bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags, 463 - struct page_pool *pool, struct xdp_buff *xdp, 464 - struct rx_cmp_ext *rxcmp1) 463 + struct page_pool *pool, struct xdp_buff *xdp) 465 464 { 466 465 struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); 467 466 468 467 if (!skb) 469 468 return NULL; 470 - skb_checksum_none_assert(skb); 471 - if (RX_CMP_L4_CS_OK(rxcmp1)) { 472 - if (bp->dev->features & NETIF_F_RXCSUM) { 473 - skb->ip_summed = CHECKSUM_UNNECESSARY; 474 - skb->csum_level = RX_CMP_ENCAP(rxcmp1); 475 - } 476 - } 469 + 477 470 xdp_update_skb_shared_info(skb, num_frags, 478 471 sinfo->xdp_frags_size, 479 - BNXT_RX_PAGE_SIZE * sinfo->nr_frags, 472 + BNXT_RX_PAGE_SIZE * num_frags, 480 473 xdp_buff_is_frag_pfmemalloc(xdp)); 481 474 return skb; 482 475 }
+1 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
··· 33 33 struct xdp_buff *xdp); 34 34 struct sk_buff *bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, 35 35 u8 num_frags, struct page_pool *pool, 36 - struct xdp_buff *xdp, 37 - struct rx_cmp_ext *rxcmp1); 36 + struct xdp_buff *xdp); 38 37 #endif
+1 -1
drivers/net/ethernet/emulex/benet/be.h
··· 562 562 struct be_dma_mem mbox_mem_alloced; 563 563 564 564 struct be_mcc_obj mcc_obj; 565 - struct mutex mcc_lock; /* For serializing mcc cmds to BE card */ 565 + spinlock_t mcc_lock; /* For serializing mcc cmds to BE card */ 566 566 spinlock_t mcc_cq_lock; 567 567 568 568 u16 cfg_num_rx_irqs; /* configured via set-channels */
+98 -99
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 575 575 /* Wait till no more pending mcc requests are present */ 576 576 static int be_mcc_wait_compl(struct be_adapter *adapter) 577 577 { 578 - #define mcc_timeout 12000 /* 12s timeout */ 578 + #define mcc_timeout 120000 /* 12s timeout */ 579 579 int i, status = 0; 580 580 struct be_mcc_obj *mcc_obj = &adapter->mcc_obj; 581 581
··· 589 589 590 590 if (atomic_read(&mcc_obj->q.used) == 0) 591 591 break; 592 - usleep_range(500, 1000); 592 + udelay(100); 593 593 } 594 594 if (i == mcc_timeout) { 595 595 dev_err(&adapter->pdev->dev, "FW not responding\n");
··· 866 866 static int be_cmd_lock(struct be_adapter *adapter) 867 867 { 868 868 if (use_mcc(adapter)) { 869 - mutex_lock(&adapter->mcc_lock); 869 + spin_lock_bh(&adapter->mcc_lock); 870 870 return 0; 871 871 } else { 872 872 return mutex_lock_interruptible(&adapter->mbox_lock);
··· 877 877 static void be_cmd_unlock(struct be_adapter *adapter) 878 878 { 879 879 if (use_mcc(adapter)) 880 - return mutex_unlock(&adapter->mcc_lock); 880 + return spin_unlock_bh(&adapter->mcc_lock); 881 881 else 882 882 return mutex_unlock(&adapter->mbox_lock); 883 883 }
··· 1047 1047 struct be_cmd_req_mac_query *req; 1048 1048 int status; 1049 1049 1050 - mutex_lock(&adapter->mcc_lock); 1050 + spin_lock_bh(&adapter->mcc_lock); 1051 1051 1052 1052 wrb = wrb_from_mccq(adapter); 1053 1053 if (!wrb) {
··· 1076 1076 } 1077 1077 1078 1078 err: 1079 - mutex_unlock(&adapter->mcc_lock); 1079 + spin_unlock_bh(&adapter->mcc_lock); 1080 1080 return status; 1081 1081 } 1082 1082
··· 1088 1088 struct be_cmd_req_pmac_add *req; 1089 1089 int status; 1090 1090 1091 - mutex_lock(&adapter->mcc_lock); 1091 + spin_lock_bh(&adapter->mcc_lock); 1092 1092 1093 1093 wrb = wrb_from_mccq(adapter); 1094 1094 if (!wrb) {
··· 1113 1113 } 1114 1114 1115 1115 err: 1116 - mutex_unlock(&adapter->mcc_lock); 1116 + spin_unlock_bh(&adapter->mcc_lock); 1117 1117 1118 1118 if (base_status(status) == MCC_STATUS_UNAUTHORIZED_REQUEST) 1119 1119 status = -EPERM;
··· 1131 1131 if (pmac_id == -1) 1132 1132 return 0; 1133 1133 1134 - mutex_lock(&adapter->mcc_lock); 1134 + spin_lock_bh(&adapter->mcc_lock); 1135 1135 1136 1136 wrb = wrb_from_mccq(adapter); 1137 1137 if (!wrb) {
··· 1151 1151 status = be_mcc_notify_wait(adapter); 1152 1152 1153 1153 err: 1154 - mutex_unlock(&adapter->mcc_lock); 1154 + spin_unlock_bh(&adapter->mcc_lock); 1155 1155 return status; 1156 1156 }
··· 1414 1414 struct be_dma_mem *q_mem = &rxq->dma_mem; 1415 1415 int status; 1416 1416 1417 - mutex_lock(&adapter->mcc_lock); 1417 + spin_lock_bh(&adapter->mcc_lock); 1418 1418 1419 1419 wrb = wrb_from_mccq(adapter); 1420 1420 if (!wrb) {
··· 1444 1444 } 1445 1445 1446 1446 err: 1447 - mutex_unlock(&adapter->mcc_lock); 1447 + spin_unlock_bh(&adapter->mcc_lock); 1448 1448 return status; 1449 1449 }
··· 1508 1508 struct be_cmd_req_q_destroy *req; 1509 1509 int status; 1510 1510 1511 - mutex_lock(&adapter->mcc_lock); 1511 + spin_lock_bh(&adapter->mcc_lock); 1512 1512 1513 1513 wrb = wrb_from_mccq(adapter); 1514 1514 if (!wrb) {
··· 1525 1525 q->created = false; 1526 1526 1527 1527 err: 1528 - mutex_unlock(&adapter->mcc_lock); 1528 + spin_unlock_bh(&adapter->mcc_lock); 1529 1529 return status; 1530 1530 }
··· 1593 1593 struct be_cmd_req_hdr *hdr; 1594 1594 int status = 0; 1595 1595 1596 - mutex_lock(&adapter->mcc_lock); 1596 + spin_lock_bh(&adapter->mcc_lock); 1597 1597 1598 1598 wrb = wrb_from_mccq(adapter); 1599 1599 if (!wrb) {
··· 1621 1621 adapter->stats_cmd_sent = true; 1622 1622 1623 1623 err: 1624 - mutex_unlock(&adapter->mcc_lock); 1624 + spin_unlock_bh(&adapter->mcc_lock); 1625 1625 return status; 1626 1626 }
··· 1637 1637 CMD_SUBSYSTEM_ETH)) 1638 1638 return -EPERM; 1639 1639 1640 - mutex_lock(&adapter->mcc_lock); 1640 + spin_lock_bh(&adapter->mcc_lock); 1641 1641 1642 1642 wrb = wrb_from_mccq(adapter); 1643 1643 if (!wrb) {
··· 1660 1660 adapter->stats_cmd_sent = true; 1661 1661 1662 1662 err: 1663 - mutex_unlock(&adapter->mcc_lock); 1663 + spin_unlock_bh(&adapter->mcc_lock); 1664 1664 return status; 1665 1665 }
··· 1697 1697 struct be_cmd_req_link_status *req; 1698 1698 int status; 1699 1699 1700 - mutex_lock(&adapter->mcc_lock); 1700 + spin_lock_bh(&adapter->mcc_lock); 1701 1701 1702 1702 if (link_status) 1703 1703 *link_status = LINK_DOWN;
··· 1736 1736 } 1737 1737 1738 1738 err: 1739 - mutex_unlock(&adapter->mcc_lock); 1739 + spin_unlock_bh(&adapter->mcc_lock); 1740 1740 return status; 1741 1741 }
··· 1747 1747 struct be_cmd_req_get_cntl_addnl_attribs *req; 1748 1748 int status = 0; 1749 1749 1750 - mutex_lock(&adapter->mcc_lock); 1750 + spin_lock_bh(&adapter->mcc_lock); 1751 1751 1752 1752 wrb = wrb_from_mccq(adapter); 1753 1753 if (!wrb) {
··· 1762 1762 1763 1763 status = be_mcc_notify(adapter); 1764 1764 err: 1765 - mutex_unlock(&adapter->mcc_lock); 1765 + spin_unlock_bh(&adapter->mcc_lock); 1766 1766 return status; 1767 1767 }
··· 1811 1811 if (!get_fat_cmd.va) 1812 1812 return -ENOMEM; 1813 1813 1814 - mutex_lock(&adapter->mcc_lock); 1814 + spin_lock_bh(&adapter->mcc_lock); 1815 1815 1816 1816 while (total_size) { 1817 1817 buf_size = min(total_size, (u32)60 * 1024);
··· 1849 1849 log_offset += buf_size; 1850 1850 } 1851 1851 err: 1852 + spin_unlock_bh(&adapter->mcc_lock); 1852 1853 dma_free_coherent(&adapter->pdev->dev, get_fat_cmd.size, 1853 1854 get_fat_cmd.va, get_fat_cmd.dma); 1854 - mutex_unlock(&adapter->mcc_lock); 1855 1855 return status; 1856 1856 }
··· 1862 1862 struct be_cmd_req_get_fw_version *req; 1863 1863 int status; 1864 1864 1865 - mutex_lock(&adapter->mcc_lock); 1865 + spin_lock_bh(&adapter->mcc_lock); 1866 1866 1867 1867 wrb = wrb_from_mccq(adapter); 1868 1868 if (!wrb) {
··· 1885 1885 sizeof(adapter->fw_on_flash)); 1886 1886 } 1887 1887 err: 1888 - mutex_unlock(&adapter->mcc_lock); 1888 + spin_unlock_bh(&adapter->mcc_lock); 1889 1889 return status; 1890 1890 }
··· 1899 1899 struct be_cmd_req_modify_eq_delay *req; 1900 1900 int status = 0, i; 1901 1901 1902 - mutex_lock(&adapter->mcc_lock); 1902 + spin_lock_bh(&adapter->mcc_lock); 1903 1903 1904 1904 wrb = wrb_from_mccq(adapter); 1905 1905 if (!wrb) {
··· 1922 1922 1923 1923 status = be_mcc_notify(adapter); 1924 1924 err: 1925 - mutex_unlock(&adapter->mcc_lock); 1925 + spin_unlock_bh(&adapter->mcc_lock); 1926 1926 return status; 1927 1927 }
··· 1949 1949 struct be_cmd_req_vlan_config *req; 1950 1950 int status; 1951 1951 1952 - mutex_lock(&adapter->mcc_lock); 1952 + spin_lock_bh(&adapter->mcc_lock); 1953 1953 1954 1954 wrb = wrb_from_mccq(adapter); 1955 1955 if (!wrb) {
··· 1971 1971 1972 1972 status = be_mcc_notify_wait(adapter); 1973 1973 err: 1974 - mutex_unlock(&adapter->mcc_lock); 1974 + spin_unlock_bh(&adapter->mcc_lock); 1975 1975 return status; 1976 1976 }
··· 1982 1982 struct be_cmd_req_rx_filter *req = mem->va; 1983 1983 int status; 1984 1984 1985 - mutex_lock(&adapter->mcc_lock); 1985 + spin_lock_bh(&adapter->mcc_lock); 1986 1986 1987 1987 wrb = wrb_from_mccq(adapter); 1988 1988 if (!wrb) {
··· 2015 2015 2016 2016 status = be_mcc_notify_wait(adapter); 2017 2017 err: 2018 - mutex_unlock(&adapter->mcc_lock); 2018 + spin_unlock_bh(&adapter->mcc_lock); 2019 2019 return status; 2020 2020 }
··· 2046 2046 CMD_SUBSYSTEM_COMMON)) 2047 2047 return -EPERM; 2048 2048 2049 - mutex_lock(&adapter->mcc_lock); 2049 + spin_lock_bh(&adapter->mcc_lock); 2050 2050 2051 2051 wrb = wrb_from_mccq(adapter); 2052 2052 if (!wrb) {
··· 2066 2066 status = be_mcc_notify_wait(adapter); 2067 2067 2068 2068 err: 2069 - mutex_unlock(&adapter->mcc_lock); 2069 + spin_unlock_bh(&adapter->mcc_lock); 2070 2070 2071 2071 if (base_status(status) == MCC_STATUS_FEATURE_NOT_SUPPORTED) 2072 2072 return -EOPNOTSUPP;
··· 2085 2085 CMD_SUBSYSTEM_COMMON)) 2086 2086 return -EPERM; 2087 2087 2088 - mutex_lock(&adapter->mcc_lock); 2088 + spin_lock_bh(&adapter->mcc_lock); 2089 2089 2090 2090 wrb = wrb_from_mccq(adapter); 2091 2091 if (!wrb) {
··· 2108 2108 } 2109 2109 2110 2110 err: 2111 - mutex_unlock(&adapter->mcc_lock); 2111 + spin_unlock_bh(&adapter->mcc_lock); 2112 2112 return status; 2113 2113 }
··· 2189 2189 if (!(be_if_cap_flags(adapter) & BE_IF_FLAGS_RSS)) 2190 2190 return 0; 2191 2191 2192 - mutex_lock(&adapter->mcc_lock); 2192 + spin_lock_bh(&adapter->mcc_lock); 2193 2193 2194 2194 wrb = wrb_from_mccq(adapter); 2195 2195 if (!wrb) {
··· 2214 2214 2215 2215 status = be_mcc_notify_wait(adapter); 2216 2216 err: 2217 - mutex_unlock(&adapter->mcc_lock); 2217 + spin_unlock_bh(&adapter->mcc_lock); 2218 2218 return status; 2219 2219 }
··· 2226 2226 struct be_cmd_req_enable_disable_beacon *req; 2227 2227 int status; 2228 2228 2229 - mutex_lock(&adapter->mcc_lock); 2229 + spin_lock_bh(&adapter->mcc_lock); 2230 2230 2231 2231 wrb = wrb_from_mccq(adapter); 2232 2232 if (!wrb) {
··· 2247 2247 status = be_mcc_notify_wait(adapter); 2248 2248 2249 2249 err: 2250 - mutex_unlock(&adapter->mcc_lock); 2250 + spin_unlock_bh(&adapter->mcc_lock); 2251 2251 return status; 2252 2252 }
··· 2258 2258 struct be_cmd_req_get_beacon_state *req; 2259 2259 int status; 2260 2260 2261 - mutex_lock(&adapter->mcc_lock); 2261 + spin_lock_bh(&adapter->mcc_lock); 2262 2262 2263 2263 wrb = wrb_from_mccq(adapter); 2264 2264 if (!wrb) {
··· 2282 2282 } 2283 2283 2284 2284 err: 2285 - mutex_unlock(&adapter->mcc_lock); 2285 + spin_unlock_bh(&adapter->mcc_lock); 2286 2286 return status; 2287 2287 }
··· 2306 2306 return -ENOMEM; 2307 2307 } 2308 2308 2309 - mutex_lock(&adapter->mcc_lock); 2309 + spin_lock_bh(&adapter->mcc_lock); 2310 2310 2311 2311 wrb = wrb_from_mccq(adapter); 2312 2312 if (!wrb) {
··· 2328 2328 memcpy(data, resp->page_data + off, len); 2329 2329 } 2330 2330 err: 2331 - mutex_unlock(&adapter->mcc_lock); 2331 + spin_unlock_bh(&adapter->mcc_lock); 2332 2332 dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma); 2333 2333 return status; 2334 2334 }
··· 2345 2345 void *ctxt = NULL; 2346 2346 int status; 2347 2347 2348 - mutex_lock(&adapter->mcc_lock); 2348 + spin_lock_bh(&adapter->mcc_lock); 2349 2349 adapter->flash_status = 0; 2350 2350 2351 2351 wrb = wrb_from_mccq(adapter);
··· 2387 2387 if (status) 2388 2388 goto err_unlock; 2389 2389 2390 - mutex_unlock(&adapter->mcc_lock); 2390 + spin_unlock_bh(&adapter->mcc_lock); 2391 2391 2392 2392 if (!wait_for_completion_timeout(&adapter->et_cmd_compl, 2393 2393 msecs_to_jiffies(60000)))
··· 2406 2406 return status; 2407 2407 2408 2408 err_unlock: 2409 - mutex_unlock(&adapter->mcc_lock); 2409 + spin_unlock_bh(&adapter->mcc_lock); 2410 2410 return status; 2411 2411 }
··· 2460 2460 struct be_mcc_wrb *wrb; 2461 2461 int status; 2462 2462 2463 - mutex_lock(&adapter->mcc_lock); 2463 + spin_lock_bh(&adapter->mcc_lock); 2464 2464 2465 2465 wrb = wrb_from_mccq(adapter); 2466 2466 if (!wrb) {
··· 2478 2478 2479 2479 status = be_mcc_notify_wait(adapter); 2480 2480 err: 2481 - mutex_unlock(&adapter->mcc_lock); 2481 + spin_unlock_bh(&adapter->mcc_lock); 2482 2482 return status; 2483 2483 }
··· 2491 2491 struct lancer_cmd_resp_read_object *resp; 2492 2492 int status; 2493 2493 2494 - mutex_lock(&adapter->mcc_lock); 2494 + spin_lock_bh(&adapter->mcc_lock); 2495 2495 2496 2496 wrb = wrb_from_mccq(adapter); 2497 2497 if (!wrb) {
··· 2525 2525 } 2526 2526 2527 2527 err_unlock: 2528 - mutex_unlock(&adapter->mcc_lock); 2528 + spin_unlock_bh(&adapter->mcc_lock); 2529 2529 return status; 2530 2530 }
··· 2537 2537 struct be_cmd_write_flashrom *req; 2538 2538 int status; 2539 2539 2540 - mutex_lock(&adapter->mcc_lock); 2540 + spin_lock_bh(&adapter->mcc_lock); 2541 2541 adapter->flash_status = 0; 2542 2542 2543 2543 wrb = wrb_from_mccq(adapter);
··· 2562 2562 if (status) 2563 2563 goto err_unlock; 2564 2564 2565 - mutex_unlock(&adapter->mcc_lock); 2565 + spin_unlock_bh(&adapter->mcc_lock); 2566 2566 2567 2567 if (!wait_for_completion_timeout(&adapter->et_cmd_compl, 2568 2568 msecs_to_jiffies(40000)))
··· 2573 2573 return status; 2574 2574 2575 2575 err_unlock: 2576 - mutex_unlock(&adapter->mcc_lock); 2576 + spin_unlock_bh(&adapter->mcc_lock); 2577 2577 return status; 2578 2578 }
··· 2584 2584 struct be_mcc_wrb *wrb; 2585 2585 int status; 2586 2586 2587 - mutex_lock(&adapter->mcc_lock); 2587 + spin_lock_bh(&adapter->mcc_lock); 2588 2588 2589 2589 wrb = wrb_from_mccq(adapter); 2590 2590 if (!wrb) {
··· 2611 2611 memcpy(flashed_crc, req->crc, 4); 2612 2612 2613 2613 err: 2614 - mutex_unlock(&adapter->mcc_lock); 2614 + spin_unlock_bh(&adapter->mcc_lock); 2615 2615 return status; 2616 2616 }
··· 3217 3217 struct be_cmd_req_acpi_wol_magic_config *req; 3218 3218 int status; 3219 3219 3220 - mutex_lock(&adapter->mcc_lock); 3220 + spin_lock_bh(&adapter->mcc_lock); 3221 3221 3222 3222 wrb = wrb_from_mccq(adapter); 3223 3223 if (!wrb) {
··· 3234 3234 status = be_mcc_notify_wait(adapter); 3235 3235 3236 3236 err: 3237 - mutex_unlock(&adapter->mcc_lock); 3237 + spin_unlock_bh(&adapter->mcc_lock); 3238 3238 return status; 3239 3239 }
··· 3249 3249 CMD_SUBSYSTEM_LOWLEVEL)) 3250 3250 return -EPERM; 3251 3251 3252 - mutex_lock(&adapter->mcc_lock); 3252 + spin_lock_bh(&adapter->mcc_lock); 3253 3253 3254 3254 wrb = wrb_from_mccq(adapter); 3255 3255 if (!wrb) {
··· 3272 3272 if (status) 3273 3273 goto err_unlock; 3274 3274 3275 - mutex_unlock(&adapter->mcc_lock); 3275 + spin_unlock_bh(&adapter->mcc_lock); 3276 3276 3277 3277 if (!wait_for_completion_timeout(&adapter->et_cmd_compl, 3278 3278 msecs_to_jiffies(SET_LB_MODE_TIMEOUT)))
··· 3281 3281 return status; 3282 3282 3283 3283 err_unlock: 3284 - mutex_unlock(&adapter->mcc_lock); 3284 + spin_unlock_bh(&adapter->mcc_lock); 3285 3285 return status; 3286 3286 }
··· 3298 3298 CMD_SUBSYSTEM_LOWLEVEL)) 3299 3299 return -EPERM; 3300 3300 3301 - mutex_lock(&adapter->mcc_lock); 3301 + spin_lock_bh(&adapter->mcc_lock); 3302 3302 3303 3303 wrb = wrb_from_mccq(adapter); 3304 3304 if (!wrb) {
··· 3324 3324 if (status) 3325 3325 goto err; 3326 3326 3327 - mutex_unlock(&adapter->mcc_lock); 3327 + spin_unlock_bh(&adapter->mcc_lock); 3328 3328 3329 3329 wait_for_completion(&adapter->et_cmd_compl); 3330 3330 resp = embedded_payload(wrb);
··· 3332 3332 3333 3333 return status; 3334 3334 err: 3335 - mutex_unlock(&adapter->mcc_lock); 3335 + spin_unlock_bh(&adapter->mcc_lock); 3336 3336 return status; 3337 3337 }
··· 3348 3348 CMD_SUBSYSTEM_LOWLEVEL)) 3349 3349 return -EPERM; 3350 3350 3351 - mutex_lock(&adapter->mcc_lock); 3351 + spin_lock_bh(&adapter->mcc_lock); 3352 3352 3353 3353 wrb = wrb_from_mccq(adapter); 3354 3354 if (!wrb) {
··· 3382 3382 } 3383 3383 3384 3384 err: 3385 - mutex_unlock(&adapter->mcc_lock); 3385 + spin_unlock_bh(&adapter->mcc_lock); 3386 3386 return status; 3387 3387 }
··· 3393 3393 struct be_cmd_req_seeprom_read *req; 3394 3394 int status; 3395 3395 3396 - mutex_lock(&adapter->mcc_lock); 3396 + spin_lock_bh(&adapter->mcc_lock); 3397 3397 3398 3398 wrb = wrb_from_mccq(adapter); 3399 3399 if (!wrb) {
··· 3409 3409 status = be_mcc_notify_wait(adapter); 3410 3410 3411 3411 err: 3412 - mutex_unlock(&adapter->mcc_lock); 3412 + spin_unlock_bh(&adapter->mcc_lock); 3413 3413 return status; 3414 3414 }
··· 3424 3424 CMD_SUBSYSTEM_COMMON)) 3425 3425 return -EPERM; 3426 3426 3427 - mutex_lock(&adapter->mcc_lock); 3427 + spin_lock_bh(&adapter->mcc_lock); 3428 3428 3429 3429 wrb = wrb_from_mccq(adapter); 3430 3430 if (!wrb) {
··· 3469 3469 } 3470 3470 dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma); 3471 3471 err: 3472 - mutex_unlock(&adapter->mcc_lock); 3472 + spin_unlock_bh(&adapter->mcc_lock); 3473 3473 return status; 3474 3474 }
··· 3479 3479 struct be_cmd_req_set_qos *req; 3480 3480 int status; 3481 3481 3482 - mutex_lock(&adapter->mcc_lock); 3482 + spin_lock_bh(&adapter->mcc_lock); 3483 3483 3484 3484 wrb = wrb_from_mccq(adapter); 3485 3485 if (!wrb) {
··· 3499 3499 status = be_mcc_notify_wait(adapter); 3500 3500 3501 3501 err: 3502 - mutex_unlock(&adapter->mcc_lock); 3502 + spin_unlock_bh(&adapter->mcc_lock); 3503 3503 return status; 3504 3504 }
··· 3611 3611 struct be_cmd_req_get_fn_privileges *req; 3612 3612 int status; 3613 3613 3614 - mutex_lock(&adapter->mcc_lock); 3614 + spin_lock_bh(&adapter->mcc_lock); 3615 3615 3616 3616 wrb = wrb_from_mccq(adapter); 3617 3617 if (!wrb) {
··· 3643 3643 } 3644 3644 3645 3645 err: 3646 - mutex_unlock(&adapter->mcc_lock); 3646 + spin_unlock_bh(&adapter->mcc_lock); 3647 3647 return status; 3648 3648 }
··· 3655 3655 struct be_cmd_req_set_fn_privileges *req; 3656 3656 int status; 3657 3657 3658 - mutex_lock(&adapter->mcc_lock); 3658 + spin_lock_bh(&adapter->mcc_lock); 3659 3659 3660 3660 wrb = wrb_from_mccq(adapter); 3661 3661 if (!wrb) {
··· 3675 3675 3676 3676 status = be_mcc_notify_wait(adapter); 3677 3677 err: 3678 - mutex_unlock(&adapter->mcc_lock); 3678 + spin_unlock_bh(&adapter->mcc_lock); 3679 3679 return status; 3680 3680 }
··· 3707 3707 return -ENOMEM; 3708 3708 } 3709 3709 3710 - mutex_lock(&adapter->mcc_lock); 3710 + spin_lock_bh(&adapter->mcc_lock); 3711 3711 3712 3712 wrb = wrb_from_mccq(adapter); 3713 3713 if (!wrb) {
··· 3771 3771 } 3772 3772 3773 3773 out: 3774 - mutex_unlock(&adapter->mcc_lock); 3774 + spin_unlock_bh(&adapter->mcc_lock); 3775 3775 dma_free_coherent(&adapter->pdev->dev, get_mac_list_cmd.size, 3776 3776 get_mac_list_cmd.va, get_mac_list_cmd.dma); 3777 3777 return status;
··· 3831 3831 if (!cmd.va) 3832 3832 return -ENOMEM; 3833 3833 3834 - mutex_lock(&adapter->mcc_lock); 3834 + spin_lock_bh(&adapter->mcc_lock); 3835 3835 3836 3836 wrb = wrb_from_mccq(adapter); 3837 3837 if (!wrb) {
··· 3853 3853 3854 3854 err: 3855 3855 dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma); 3856 - mutex_unlock(&adapter->mcc_lock); 3856 + spin_unlock_bh(&adapter->mcc_lock); 3857 3857 return status; 3858 3858 }
··· 3889 3889 CMD_SUBSYSTEM_COMMON)) 3890 3890 return -EPERM; 3891 3891 3892 - mutex_lock(&adapter->mcc_lock); 3892 + spin_lock_bh(&adapter->mcc_lock); 3893 3893 3894 3894 wrb = wrb_from_mccq(adapter); 3895 3895 if (!wrb) {
··· 3930 3930 status = be_mcc_notify_wait(adapter); 3931 3931 3932 3932 err: 3933 - mutex_unlock(&adapter->mcc_lock); 3933 + spin_unlock_bh(&adapter->mcc_lock); 3934 3934 return status; 3935 3935 }
··· 3944 3944 int status; 3945 3945 u16 vid; 3946 3946 3947 - mutex_lock(&adapter->mcc_lock); 3947 + spin_lock_bh(&adapter->mcc_lock); 3948 3948 3949 3949 wrb = wrb_from_mccq(adapter); 3950 3950 if (!wrb) {
··· 3991 3991 } 3992 3992 3993 3993 err: 3994 - mutex_unlock(&adapter->mcc_lock); 3994 + spin_unlock_bh(&adapter->mcc_lock); 3995 3995 return status; 3996 3996 }
··· 4190 4190 struct be_cmd_req_set_ext_fat_caps *req; 4191 4191 int status; 4192 4192 4193 - mutex_lock(&adapter->mcc_lock); 4193 + spin_lock_bh(&adapter->mcc_lock); 4194 4194 4195 4195 wrb = wrb_from_mccq(adapter); 4196 4196 if (!wrb) {
··· 4206 4206 4207 4207 status = be_mcc_notify_wait(adapter); 4208 4208 err: 4209 - mutex_unlock(&adapter->mcc_lock); 4209 + spin_unlock_bh(&adapter->mcc_lock); 4210 4210 return status; 4211 4211 }
··· 4684 4684 if (iface == 0xFFFFFFFF) 4685 4685 return -1; 4686 4686 4687 - mutex_lock(&adapter->mcc_lock); 4687 + spin_lock_bh(&adapter->mcc_lock); 4688 4688 4689 4689 wrb = wrb_from_mccq(adapter); 4690 4690 if (!wrb) {
··· 4701 4701 4702 4702 status = be_mcc_notify_wait(adapter); 4703 4703 err: 4704 - mutex_unlock(&adapter->mcc_lock); 4704 + spin_unlock_bh(&adapter->mcc_lock); 4705 4705 return status; 4706 4706 }
··· 4735 4735 struct be_cmd_resp_get_iface_list *resp; 4736 4736 int status; 4737 4737 4738 - mutex_lock(&adapter->mcc_lock); 4738 + spin_lock_bh(&adapter->mcc_lock); 4739 4739 4740 4740 wrb = wrb_from_mccq(adapter); 4741 4741 if (!wrb) {
··· 4756 4756 } 4757 4757 4758 4758 err: 4759 - mutex_unlock(&adapter->mcc_lock); 4759 + spin_unlock_bh(&adapter->mcc_lock); 4760 4760 return status; 4761 4761 }
··· 4850 4850 if (BEx_chip(adapter)) 4851 4851 return 0; 4852 4852 4853 - mutex_lock(&adapter->mcc_lock); 4853 + spin_lock_bh(&adapter->mcc_lock); 4854 4854 4855 4855 wrb = wrb_from_mccq(adapter); 4856 4856 if (!wrb) {
··· 4868 4868 req->enable = 1; 4869 4869 status = be_mcc_notify_wait(adapter); 4870 4870 err: 4871 - mutex_unlock(&adapter->mcc_lock); 4871 + spin_unlock_bh(&adapter->mcc_lock); 4872 4872 return status; 4873 4873 }
··· 4941 4941 u32 link_config = 0; 4942 4942 int status; 4943 4943 4944 - mutex_lock(&adapter->mcc_lock); 4944 + spin_lock_bh(&adapter->mcc_lock); 4945 4945 4946 4946 wrb = wrb_from_mccq(adapter); 4947 4947 if (!wrb) {
··· 4969 4969 4970 4970 status = be_mcc_notify_wait(adapter); 4971 4971 err: 4972 - mutex_unlock(&adapter->mcc_lock); 4972 + spin_unlock_bh(&adapter->mcc_lock); 4973 4973 return status; 4974 4974 }
··· 5000 5000 struct be_mcc_wrb *wrb; 5001 5001 int status; 5002 5002 5003 - if (mutex_lock_interruptible(&adapter->mcc_lock)) 5004 - return -1; 5003 + spin_lock_bh(&adapter->mcc_lock); 5005 5004 5006 5005 wrb = wrb_from_mccq(adapter); 5007 5006 if (!wrb) {
··· 5038 5039 dev_info(&adapter->pdev->dev, 5039 5040 "Adapter does not support HW error recovery\n"); 5040 5041 5041 - mutex_unlock(&adapter->mcc_lock); 5042 + spin_unlock_bh(&adapter->mcc_lock); 5042 5043 return status; 5043 5044 } 5044 5045
··· 5052 5053 struct be_cmd_resp_hdr *resp; 5053 5054 int status; 5054 5055 5055 - mutex_lock(&adapter->mcc_lock); 5056 + spin_lock_bh(&adapter->mcc_lock); 5056 5057 5057 5058 wrb = wrb_from_mccq(adapter); 5058 5059 if (!wrb) {
··· 5075 5076 memcpy(wrb_payload, resp, sizeof(*resp) + resp->response_length); 5076 5077 be_dws_le_to_cpu(wrb_payload, sizeof(*resp) + resp->response_length); 5077 5078 err: 5078 - mutex_unlock(&adapter->mcc_lock); 5079 + spin_unlock_bh(&adapter->mcc_lock); 5079 5080 return status; 5080 5081 } 5081 5082 EXPORT_SYMBOL(be_roce_mcc_cmd);
+1 -1
drivers/net/ethernet/emulex/benet/be_main.c
··· 5667 5667 } 5668 5668 5669 5669 mutex_init(&adapter->mbox_lock); 5670 - mutex_init(&adapter->mcc_lock); 5671 5670 mutex_init(&adapter->rx_filter_lock); 5671 + spin_lock_init(&adapter->mcc_lock); 5672 5672 spin_lock_init(&adapter->mcc_cq_lock); 5673 5673 init_completion(&adapter->et_cmd_compl); 5674 5674
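The be_cmds.c and be_main.c hunks above convert `adapter->mcc_lock` from a mutex to a spinlock taken with `spin_lock_bh()`, so the MCC command path no longer uses a sleeping lock. A minimal userspace sketch of the lock/unlock pairing the patch rewrites, using a toy test-and-set spinlock (`toy_adapter` and friends are illustrative names, not driver API; `spin_lock_bh()` additionally disables bottom halves, which has no userspace analogue):

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy analogue of the benet mcc_lock conversion: a counter guarded by
 * a busy-waiting spinlock instead of a sleeping mutex. */
struct toy_adapter {
    atomic_flag mcc_lock;
    int issued;
};

static void toy_lock(struct toy_adapter *a)
{
    while (atomic_flag_test_and_set(&a->mcc_lock))
        ; /* spin, never sleep: the property a spinlock guarantees */
}

static void toy_unlock(struct toy_adapter *a)
{
    atomic_flag_clear(&a->mcc_lock);
}

static int toy_issue_cmd(struct toy_adapter *a)
{
    int status;

    toy_lock(a);          /* was: mutex_lock(&adapter->mcc_lock) */
    status = ++a->issued; /* stands in for posting an MCC command */
    toy_unlock(a);        /* was: mutex_unlock(&adapter->mcc_lock) */
    return status;
}
```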
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
··· 483 483 484 484 ret = hclge_ptp_get_cycle(hdev); 485 485 if (ret) 486 - return ret; 486 + goto out; 487 487 } 488 488 489 489 ret = hclge_ptp_int_en(hdev, true);
+1 -1
drivers/net/ethernet/intel/ice/ice_arfs.c
··· 511 511 struct hlist_head *arfs_fltr_list; 512 512 unsigned int i; 513 513 514 - if (!vsi || vsi->type != ICE_VSI_PF) 514 + if (!vsi || vsi->type != ICE_VSI_PF || ice_is_arfs_active(vsi)) 515 515 return; 516 516 517 517 arfs_fltr_list = kcalloc(ICE_MAX_ARFS_LIST, sizeof(*arfs_fltr_list),
-6
drivers/net/ethernet/intel/ice/ice_eswitch.c
··· 49 49 if (vlan_ops->dis_rx_filtering(uplink_vsi)) 50 50 goto err_vlan_filtering; 51 51 52 - if (ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_set_allow_override)) 53 - goto err_override_uplink; 54 - 55 52 if (ice_vsi_update_local_lb(uplink_vsi, true)) 56 53 goto err_override_local_lb; 57 54 ··· 60 63 err_up: 61 64 ice_vsi_update_local_lb(uplink_vsi, false); 62 65 err_override_local_lb: 63 - ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override); 64 - err_override_uplink: 65 66 vlan_ops->ena_rx_filtering(uplink_vsi); 66 67 err_vlan_filtering: 67 68 ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, false, ··· 270 275 vlan_ops = ice_get_compat_vsi_vlan_ops(uplink_vsi); 271 276 272 277 ice_vsi_update_local_lb(uplink_vsi, false); 273 - ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override); 274 278 vlan_ops->ena_rx_filtering(uplink_vsi); 275 279 ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, false, 276 280 ICE_FLTR_TX);
+27
drivers/net/ethernet/intel/ice/ice_lag.c
··· 1001 1001 } 1002 1002 1003 1003 /** 1004 + * ice_lag_config_eswitch - configure eswitch to work with LAG 1005 + * @lag: lag info struct 1006 + * @netdev: active network interface device struct 1007 + * 1008 + * Updates all port representors in eswitch to use @netdev for Tx. 1009 + * 1010 + * Configures the netdev to keep dst metadata (also used in representor Tx). 1011 + * This is required for an uplink without switchdev mode configured. 1012 + */ 1013 + static void ice_lag_config_eswitch(struct ice_lag *lag, 1014 + struct net_device *netdev) 1015 + { 1016 + struct ice_repr *repr; 1017 + unsigned long id; 1018 + 1019 + xa_for_each(&lag->pf->eswitch.reprs, id, repr) 1020 + repr->dst->u.port_info.lower_dev = netdev; 1021 + 1022 + netif_keep_dst(netdev); 1023 + } 1024 + 1025 + /** 1004 1026 * ice_lag_unlink - handle unlink event 1005 1027 * @lag: LAG info struct 1006 1028 */ ··· 1043 1021 ice_lag_move_vf_nodes(lag, act_port, pri_port); 1044 1022 lag->primary = false; 1045 1023 lag->active_port = ICE_LAG_INVALID_PORT; 1024 + 1025 + /* Config primary's eswitch back to normal operation. */ 1026 + ice_lag_config_eswitch(lag, lag->netdev); 1046 1027 } else { 1047 1028 struct ice_lag *primary_lag; 1048 1029 ··· 1444 1419 ice_lag_move_vf_nodes(lag, prim_port, 1445 1420 event_port); 1446 1421 lag->active_port = event_port; 1422 + ice_lag_config_eswitch(lag, event_netdev); 1447 1423 return; 1448 1424 } 1449 1425 ··· 1454 1428 /* new active port */ 1455 1429 ice_lag_move_vf_nodes(lag, lag->active_port, event_port); 1456 1430 lag->active_port = event_port; 1431 + ice_lag_config_eswitch(lag, event_netdev); 1457 1432 } else { 1458 1433 /* port not set as currently active (e.g. new active port 1459 1434 * has already claimed the nodes and filters
-18
drivers/net/ethernet/intel/ice/ice_lib.c
··· 3937 3937 } 3938 3938 3939 3939 /** 3940 - * ice_vsi_ctx_set_allow_override - allow destination override on VSI 3941 - * @ctx: pointer to VSI ctx structure 3942 - */ 3943 - void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx) 3944 - { 3945 - ctx->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD; 3946 - } 3947 - 3948 - /** 3949 - * ice_vsi_ctx_clear_allow_override - turn off destination override on VSI 3950 - * @ctx: pointer to VSI ctx structure 3951 - */ 3952 - void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx) 3953 - { 3954 - ctx->info.sec_flags &= ~ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD; 3955 - } 3956 - 3957 - /** 3958 3940 * ice_vsi_update_local_lb - update sw block in VSI with local loopback bit 3959 3941 * @vsi: pointer to VSI structure 3960 3942 * @set: set or unset the bit
-4
drivers/net/ethernet/intel/ice/ice_lib.h
··· 105 105 void ice_vsi_ctx_set_antispoof(struct ice_vsi_ctx *ctx); 106 106 107 107 void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx); 108 - 109 - void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx); 110 - 111 - void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx); 112 108 int ice_vsi_update_local_lb(struct ice_vsi *vsi, bool set); 113 109 int ice_vsi_add_vlan_zero(struct ice_vsi *vsi); 114 110 int ice_vsi_del_vlan_zero(struct ice_vsi *vsi);
+2 -2
drivers/net/ethernet/intel/ice/ice_main.c
··· 5065 5065 return err; 5066 5066 5067 5067 ice_devlink_init_regions(pf); 5068 - ice_health_init(pf); 5069 5068 ice_devlink_register(pf); 5069 + ice_health_init(pf); 5070 5070 5071 5071 return 0; 5072 5072 } 5073 5073 5074 5074 static void ice_deinit_devlink(struct ice_pf *pf) 5075 5075 { 5076 - ice_devlink_unregister(pf); 5077 5076 ice_health_deinit(pf); 5077 + ice_devlink_unregister(pf); 5078 5078 ice_devlink_destroy_regions(pf); 5079 5079 ice_devlink_unregister_params(pf); 5080 5080 }
+3 -1
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 2424 2424 ICE_TXD_CTX_QW1_CMD_S); 2425 2425 2426 2426 ice_tstamp(tx_ring, skb, first, &offload); 2427 - if (ice_is_switchdev_running(vsi->back) && vsi->type != ICE_VSI_SF) 2427 + if ((ice_is_switchdev_running(vsi->back) || 2428 + ice_lag_is_switchdev_running(vsi->back)) && 2429 + vsi->type != ICE_VSI_SF) 2428 2430 ice_eswitch_set_target_vsi(skb, &offload); 2429 2431 2430 2432 if (offload.cd_qw1 & ICE_TX_DESC_DTYPE_CTX) {
+3
drivers/net/ethernet/mellanox/mlx5/core/devlink.c
··· 46 46 u32 running_fw, stored_fw; 47 47 int err; 48 48 49 + if (!mlx5_core_is_pf(dev)) 50 + return 0; 51 + 49 52 err = devlink_info_version_fixed_put(req, "fw.psid", dev->board_id); 50 53 if (err) 51 54 return err;
+5 -7
drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
··· 48 48 struct list_head *iter; 49 49 50 50 netdev_for_each_lower_dev(dev, lower, iter) { 51 - struct mlx5_core_dev *mdev; 52 - struct mlx5e_priv *priv; 53 - 54 51 if (!mlx5e_eswitch_rep(lower)) 55 52 continue; 56 53 57 - priv = netdev_priv(lower); 58 - mdev = priv->mdev; 59 - if (mlx5_lag_is_shared_fdb(mdev) && mlx5_esw_bridge_dev_same_esw(lower, esw)) 54 + if (mlx5_esw_bridge_dev_same_esw(lower, esw)) 60 55 return lower; 61 56 } 62 57 ··· 120 125 priv = netdev_priv(rep); 121 126 mdev = priv->mdev; 122 127 if (netif_is_lag_master(dev)) 123 - return mlx5_lag_is_shared_fdb(mdev) && mlx5_lag_is_master(mdev); 128 + return mlx5_lag_is_master(mdev); 124 129 return true; 125 130 } 126 131 ··· 448 453 449 454 rep = mlx5_esw_bridge_rep_vport_num_vhca_id_get(dev, esw, &vport_num, &esw_owner_vhca_id); 450 455 if (!rep) 456 + return NOTIFY_DONE; 457 + 458 + if (netif_is_lag_master(dev) && !mlx5_lag_is_shared_fdb(esw->dev)) 451 459 return NOTIFY_DONE; 452 460 453 461 switch (event) {
+2 -4
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 5132 5132 struct mlx5e_priv *priv = netdev_priv(dev); 5133 5133 struct mlx5_core_dev *mdev = priv->mdev; 5134 5134 u8 mode, setting; 5135 - int err; 5136 5135 5137 - err = mlx5_eswitch_get_vepa(mdev->priv.eswitch, &setting); 5138 - if (err) 5139 - return err; 5136 + if (mlx5_eswitch_get_vepa(mdev->priv.eswitch, &setting)) 5137 + return -EOPNOTSUPP; 5140 5138 mode = setting ? BRIDGE_MODE_VEPA : BRIDGE_MODE_VEB; 5141 5139 return ndo_dflt_bridge_getlink(skb, pid, seq, dev, 5142 5140 mode,
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 871 871 872 872 static int comp_irq_request_sf(struct mlx5_core_dev *dev, u16 vecidx) 873 873 { 874 + struct mlx5_irq_pool *pool = mlx5_irq_table_get_comp_irq_pool(dev); 874 875 struct mlx5_eq_table *table = dev->priv.eq_table; 875 - struct mlx5_irq_pool *pool = mlx5_irq_pool_get(dev); 876 876 struct irq_affinity_desc af_desc = {}; 877 877 struct mlx5_irq *irq; 878 878
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
··· 175 175 176 176 void mlx5_irq_affinity_irq_release(struct mlx5_core_dev *dev, struct mlx5_irq *irq) 177 177 { 178 - struct mlx5_irq_pool *pool = mlx5_irq_pool_get(dev); 178 + struct mlx5_irq_pool *pool = mlx5_irq_get_pool(irq); 179 179 int cpu; 180 180 181 181 cpu = cpumask_first(mlx5_irq_get_affinity_mask(irq));
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
··· 951 951 mlx5_eswitch_reload_ib_reps(ldev->pf[i].dev->priv.eswitch); 952 952 } 953 953 954 - static bool mlx5_shared_fdb_supported(struct mlx5_lag *ldev) 954 + bool mlx5_lag_shared_fdb_supported(struct mlx5_lag *ldev) 955 955 { 956 956 int idx = mlx5_lag_get_dev_index_by_seq(ldev, MLX5_LAG_P1); 957 957 struct mlx5_core_dev *dev; ··· 1038 1038 } 1039 1039 1040 1040 if (do_bond && !__mlx5_lag_is_active(ldev)) { 1041 - bool shared_fdb = mlx5_shared_fdb_supported(ldev); 1041 + bool shared_fdb = mlx5_lag_shared_fdb_supported(ldev); 1042 1042 1043 1043 roce_lag = mlx5_lag_is_roce_lag(ldev); 1044 1044
+1
drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
··· 92 92 return test_bit(MLX5_LAG_FLAG_NDEVS_READY, &ldev->state_flags); 93 93 } 94 94 95 + bool mlx5_lag_shared_fdb_supported(struct mlx5_lag *ldev); 95 96 bool mlx5_lag_check_prereq(struct mlx5_lag *ldev); 96 97 void mlx5_modify_lag(struct mlx5_lag *ldev, 97 98 struct lag_tracker *tracker);
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
··· 83 83 if (mlx5_eswitch_mode(dev0) != MLX5_ESWITCH_OFFLOADS || 84 84 !MLX5_CAP_PORT_SELECTION(dev0, port_select_flow_table) || 85 85 !MLX5_CAP_GEN(dev0, create_lag_when_not_master_up) || 86 - !mlx5_lag_check_prereq(ldev)) 86 + !mlx5_lag_check_prereq(ldev) || 87 + !mlx5_lag_shared_fdb_supported(ldev)) 87 88 return -EOPNOTSUPP; 88 89 89 90 err = mlx5_mpesw_metadata_set(ldev);
+5
drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
··· 196 196 ns = mlx5_get_flow_namespace(chains->dev, chains->ns); 197 197 } 198 198 199 + if (!ns) { 200 + mlx5_core_warn(chains->dev, "Failed to get flow namespace\n"); 201 + return ERR_PTR(-EOPNOTSUPP); 202 + } 203 + 199 204 ft_attr.autogroup.num_reserved_entries = 2; 200 205 ft_attr.autogroup.max_num_groups = chains->group_num; 201 206 ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
+3 -1
drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h
··· 10 10 11 11 struct mlx5_irq; 12 12 struct cpu_rmap; 13 + struct mlx5_irq_pool; 13 14 14 15 int mlx5_irq_table_init(struct mlx5_core_dev *dev); 15 16 void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev); 16 17 int mlx5_irq_table_create(struct mlx5_core_dev *dev); 17 18 void mlx5_irq_table_destroy(struct mlx5_core_dev *dev); 18 19 void mlx5_irq_table_free_irqs(struct mlx5_core_dev *dev); 20 + struct mlx5_irq_pool * 21 + mlx5_irq_table_get_comp_irq_pool(struct mlx5_core_dev *dev); 19 22 int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table); 20 23 int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table); 21 24 struct mlx5_irq_table *mlx5_irq_table_get(struct mlx5_core_dev *dev); ··· 41 38 int mlx5_irq_get_index(struct mlx5_irq *irq); 42 39 int mlx5_irq_get_irq(const struct mlx5_irq *irq); 43 40 44 - struct mlx5_irq_pool; 45 41 #ifdef CONFIG_MLX5_SF 46 42 struct mlx5_irq *mlx5_irq_affinity_irq_request_auto(struct mlx5_core_dev *dev, 47 43 struct cpumask *used_cpus, u16 vecidx);
+10 -3
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 378 378 return irq->map.index; 379 379 } 380 380 381 + struct mlx5_irq_pool *mlx5_irq_get_pool(struct mlx5_irq *irq) 382 + { 383 + return irq->pool; 384 + } 385 + 381 386 /* irq_pool API */ 382 387 383 388 /* requesting an irq from a given pool according to given index */ ··· 410 405 return irq_table->sf_ctrl_pool; 411 406 } 412 407 413 - static struct mlx5_irq_pool *sf_irq_pool_get(struct mlx5_irq_table *irq_table) 408 + static struct mlx5_irq_pool * 409 + sf_comp_irq_pool_get(struct mlx5_irq_table *irq_table) 414 410 { 415 411 return irq_table->sf_comp_pool; 416 412 } 417 413 418 - struct mlx5_irq_pool *mlx5_irq_pool_get(struct mlx5_core_dev *dev) 414 + struct mlx5_irq_pool * 415 + mlx5_irq_table_get_comp_irq_pool(struct mlx5_core_dev *dev) 419 416 { 420 417 struct mlx5_irq_table *irq_table = mlx5_irq_table_get(dev); 421 418 struct mlx5_irq_pool *pool = NULL; 422 419 423 420 if (mlx5_core_is_sf(dev)) 424 - pool = sf_irq_pool_get(irq_table); 421 + pool = sf_comp_irq_pool_get(irq_table); 425 422 426 423 /* In some configs, there won't be a pool of SFs IRQs. Hence, returning 427 424 * the PF IRQs pool in case the SF pool doesn't exist.
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h
··· 28 28 struct mlx5_core_dev *dev; 29 29 }; 30 30 31 - struct mlx5_irq_pool *mlx5_irq_pool_get(struct mlx5_core_dev *dev); 32 31 static inline bool mlx5_irq_pool_is_sf_pool(struct mlx5_irq_pool *pool) 33 32 { 34 33 return !strncmp("mlx5_sf", pool->name, strlen("mlx5_sf")); ··· 39 40 int mlx5_irq_get_locked(struct mlx5_irq *irq); 40 41 int mlx5_irq_read_locked(struct mlx5_irq *irq); 41 42 int mlx5_irq_put(struct mlx5_irq *irq); 43 + struct mlx5_irq_pool *mlx5_irq_get_pool(struct mlx5_irq *irq); 42 44 43 45 #endif /* __PCI_IRQ_H__ */
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc.h
··· 24 24 struct mlx5hws_matcher *matcher; 25 25 struct mlx5hws_match_template *mt; 26 26 struct mlx5hws_action_template *at[MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM]; 27 + u32 priority; 27 28 u8 num_of_at; 28 - u16 priority; 29 29 u8 size_log; 30 30 atomic_t num_of_rules; 31 31 struct list_head *rules;
+4
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste.h
··· 210 210 void (*set_encap_l3)(u8 *hw_ste_p, u8 *frst_s_action, 211 211 u8 *scnd_d_action, u32 reformat_id, 212 212 int size); 213 + void (*set_insert_hdr)(u8 *hw_ste_p, u8 *d_action, u32 reformat_id, 214 + u8 anchor, u8 offset, int size); 215 + void (*set_remove_hdr)(u8 *hw_ste_p, u8 *s_action, u8 anchor, 216 + u8 offset, int size); 213 217 /* Send */ 214 218 void (*prepare_for_postsend)(u8 *hw_ste_p, u32 ste_size); 215 219 };
+27 -25
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.c
··· 266 266 dr_ste_v1_set_reparse(hw_ste_p); 267 267 } 268 268 269 - static void dr_ste_v1_set_insert_hdr(u8 *hw_ste_p, u8 *d_action, 270 - u32 reformat_id, 271 - u8 anchor, u8 offset, 272 - int size) 269 + void dr_ste_v1_set_insert_hdr(u8 *hw_ste_p, u8 *d_action, 270 + u32 reformat_id, 271 + u8 anchor, u8 offset, 272 + int size) 273 273 { 274 274 MLX5_SET(ste_double_action_insert_with_ptr_v1, d_action, 275 275 action_id, DR_STE_V1_ACTION_ID_INSERT_POINTER); ··· 286 286 dr_ste_v1_set_reparse(hw_ste_p); 287 287 } 288 288 289 - static void dr_ste_v1_set_remove_hdr(u8 *hw_ste_p, u8 *s_action, 290 - u8 anchor, u8 offset, 291 - int size) 289 + void dr_ste_v1_set_remove_hdr(u8 *hw_ste_p, u8 *s_action, 290 + u8 anchor, u8 offset, 291 + int size) 292 292 { 293 293 MLX5_SET(ste_single_action_remove_header_size_v1, s_action, 294 294 action_id, DR_STE_V1_ACTION_ID_REMOVE_BY_SIZE); ··· 584 584 action = MLX5_ADDR_OF(ste_mask_and_match_v1, last_ste, action); 585 585 action_sz = DR_STE_ACTION_TRIPLE_SZ; 586 586 } 587 - dr_ste_v1_set_insert_hdr(last_ste, action, 588 - attr->reformat.id, 589 - attr->reformat.param_0, 590 - attr->reformat.param_1, 591 - attr->reformat.size); 587 + ste_ctx->set_insert_hdr(last_ste, action, 588 + attr->reformat.id, 589 + attr->reformat.param_0, 590 + attr->reformat.param_1, 591 + attr->reformat.size); 592 592 action_sz -= DR_STE_ACTION_DOUBLE_SZ; 593 593 action += DR_STE_ACTION_DOUBLE_SZ; 594 594 } else if (action_type_set[DR_ACTION_TYP_REMOVE_HDR]) { ··· 597 597 action = MLX5_ADDR_OF(ste_mask_and_match_v1, last_ste, action); 598 598 action_sz = DR_STE_ACTION_TRIPLE_SZ; 599 599 } 600 - dr_ste_v1_set_remove_hdr(last_ste, action, 601 - attr->reformat.param_0, 602 - attr->reformat.param_1, 603 - attr->reformat.size); 600 + ste_ctx->set_remove_hdr(last_ste, action, 601 + attr->reformat.param_0, 602 + attr->reformat.param_1, 603 + attr->reformat.size); 604 604 action_sz -= DR_STE_ACTION_SINGLE_SZ; 605 605 action += DR_STE_ACTION_SINGLE_SZ; 606 606 } ··· 
792 792 action = MLX5_ADDR_OF(ste_mask_and_match_v1, last_ste, action); 793 793 action_sz = DR_STE_ACTION_TRIPLE_SZ; 794 794 } 795 - dr_ste_v1_set_insert_hdr(last_ste, action, 796 - attr->reformat.id, 797 - attr->reformat.param_0, 798 - attr->reformat.param_1, 799 - attr->reformat.size); 795 + ste_ctx->set_insert_hdr(last_ste, action, 796 + attr->reformat.id, 797 + attr->reformat.param_0, 798 + attr->reformat.param_1, 799 + attr->reformat.size); 800 800 action_sz -= DR_STE_ACTION_DOUBLE_SZ; 801 801 action += DR_STE_ACTION_DOUBLE_SZ; 802 802 allow_modify_hdr = false; ··· 808 808 allow_modify_hdr = true; 809 809 allow_ctr = true; 810 810 } 811 - dr_ste_v1_set_remove_hdr(last_ste, action, 812 - attr->reformat.param_0, 813 - attr->reformat.param_1, 814 - attr->reformat.size); 811 + ste_ctx->set_remove_hdr(last_ste, action, 812 + attr->reformat.param_0, 813 + attr->reformat.param_1, 814 + attr->reformat.size); 815 815 action_sz -= DR_STE_ACTION_SINGLE_SZ; 816 816 action += DR_STE_ACTION_SINGLE_SZ; 817 817 } ··· 2200 2200 .set_pop_vlan = &dr_ste_v1_set_pop_vlan, 2201 2201 .set_rx_decap = &dr_ste_v1_set_rx_decap, 2202 2202 .set_encap_l3 = &dr_ste_v1_set_encap_l3, 2203 + .set_insert_hdr = &dr_ste_v1_set_insert_hdr, 2204 + .set_remove_hdr = &dr_ste_v1_set_remove_hdr, 2203 2205 /* Send */ 2204 2206 .prepare_for_postsend = &dr_ste_v1_prepare_for_postsend, 2205 2207 };
+4
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v1.h
··· 156 156 void dr_ste_v1_set_encap_l3(u8 *hw_ste_p, u8 *frst_s_action, u8 *scnd_d_action, 157 157 u32 reformat_id, int size); 158 158 void dr_ste_v1_set_rx_decap(u8 *hw_ste_p, u8 *s_action); 159 + void dr_ste_v1_set_insert_hdr(u8 *hw_ste_p, u8 *d_action, u32 reformat_id, 160 + u8 anchor, u8 offset, int size); 161 + void dr_ste_v1_set_remove_hdr(u8 *hw_ste_p, u8 *s_action, u8 anchor, 162 + u8 offset, int size); 159 163 void dr_ste_v1_set_actions_tx(struct mlx5dr_ste_ctx *ste_ctx, struct mlx5dr_domain *dmn, 160 164 u8 *action_type_set, u32 actions_caps, u8 *last_ste, 161 165 struct mlx5dr_ste_actions_attr *attr, u32 *added_stes);
+2
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v2.c
··· 69 69 .set_pop_vlan = &dr_ste_v1_set_pop_vlan, 70 70 .set_rx_decap = &dr_ste_v1_set_rx_decap, 71 71 .set_encap_l3 = &dr_ste_v1_set_encap_l3, 72 + .set_insert_hdr = &dr_ste_v1_set_insert_hdr, 73 + .set_remove_hdr = &dr_ste_v1_set_remove_hdr, 72 74 /* Send */ 73 75 .prepare_for_postsend = &dr_ste_v1_prepare_for_postsend, 74 76 };
+42
drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_ste_v3.c
··· 79 79 dr_ste_v1_set_reparse(hw_ste_p); 80 80 } 81 81 82 + static void dr_ste_v3_set_insert_hdr(u8 *hw_ste_p, u8 *d_action, 83 + u32 reformat_id, u8 anchor, 84 + u8 offset, int size) 85 + { 86 + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, 87 + action_id, DR_STE_V1_ACTION_ID_INSERT_POINTER); 88 + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, 89 + start_anchor, anchor); 90 + 91 + /* The hardware expects here size and offset in words (2 byte) */ 92 + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, 93 + size, size / 2); 94 + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, 95 + start_offset, offset / 2); 96 + 97 + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, 98 + pointer, reformat_id); 99 + MLX5_SET(ste_double_action_insert_with_ptr_v3, d_action, 100 + attributes, DR_STE_V1_ACTION_INSERT_PTR_ATTR_NONE); 101 + 102 + dr_ste_v1_set_reparse(hw_ste_p); 103 + } 104 + 105 + static void dr_ste_v3_set_remove_hdr(u8 *hw_ste_p, u8 *s_action, 106 + u8 anchor, u8 offset, int size) 107 + { 108 + MLX5_SET(ste_single_action_remove_header_size_v3, s_action, 109 + action_id, DR_STE_V1_ACTION_ID_REMOVE_BY_SIZE); 110 + MLX5_SET(ste_single_action_remove_header_size_v3, s_action, 111 + start_anchor, anchor); 112 + 113 + /* The hardware expects here size and offset in words (2 byte) */ 114 + MLX5_SET(ste_single_action_remove_header_size_v3, s_action, 115 + remove_size, size / 2); 116 + MLX5_SET(ste_single_action_remove_header_size_v3, s_action, 117 + start_offset, offset / 2); 118 + 119 + dr_ste_v1_set_reparse(hw_ste_p); 120 + } 121 + 82 122 static int 83 123 dr_ste_v3_set_action_decap_l3_list(void *data, u32 data_sz, 84 124 u8 *hw_action, u32 hw_action_sz, ··· 251 211 .set_pop_vlan = &dr_ste_v3_set_pop_vlan, 252 212 .set_rx_decap = &dr_ste_v3_set_rx_decap, 253 213 .set_encap_l3 = &dr_ste_v3_set_encap_l3, 214 + .set_insert_hdr = &dr_ste_v3_set_insert_hdr, 215 + .set_remove_hdr = &dr_ste_v3_set_remove_hdr, 254 216 /* Send */ 255 217 
.prepare_for_postsend = &dr_ste_v1_prepare_for_postsend, 256 218 };
+10 -1
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 1547 1547 * adapter-MTU file and apc->mana_pci_debugfs folder. 1548 1548 */ 1549 1549 debugfs_remove_recursive(gc->mana_pci_debugfs); 1550 + gc->mana_pci_debugfs = NULL; 1550 1551 pci_iounmap(pdev, bar0_va); 1551 1552 free_gc: 1552 1553 pci_set_drvdata(pdev, NULL); ··· 1569 1568 mana_gd_cleanup(pdev); 1570 1569 1571 1570 debugfs_remove_recursive(gc->mana_pci_debugfs); 1571 + 1572 + gc->mana_pci_debugfs = NULL; 1572 1573 1573 1574 pci_iounmap(pdev, gc->bar0_va); 1574 1575 ··· 1625 1622 1626 1623 debugfs_remove_recursive(gc->mana_pci_debugfs); 1627 1624 1625 + gc->mana_pci_debugfs = NULL; 1626 + 1628 1627 pci_disable_device(pdev); 1629 1628 } 1630 1629 ··· 1653 1648 mana_debugfs_root = debugfs_create_dir("mana", NULL); 1654 1649 1655 1650 err = pci_register_driver(&mana_driver); 1656 - if (err) 1651 + if (err) { 1657 1652 debugfs_remove(mana_debugfs_root); 1653 + mana_debugfs_root = NULL; 1654 + } 1658 1655 1659 1656 return err; 1660 1657 } ··· 1666 1659 pci_unregister_driver(&mana_driver); 1667 1660 1668 1661 debugfs_remove(mana_debugfs_root); 1662 + 1663 + mana_debugfs_root = NULL; 1669 1664 } 1670 1665 1671 1666 module_init(mana_driver_init);
+6 -4
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 738 738 static void mana_cleanup_port_context(struct mana_port_context *apc) 739 739 { 740 740 /* 741 - * at this point all dir/files under the vport directory 742 - * are already cleaned up. 743 - * We are sure the apc->mana_port_debugfs remove will not 744 - * cause any freed memory access issues 741 + * make sure subsequent cleanup attempts don't end up removing already 742 + * cleaned dentry pointer 745 743 */ 746 744 debugfs_remove(apc->mana_port_debugfs); 745 + apc->mana_port_debugfs = NULL; 747 746 kfree(apc->rxqs); 748 747 apc->rxqs = NULL; 749 748 } ··· 1253 1254 return; 1254 1255 1255 1256 debugfs_remove_recursive(ac->mana_eqs_debugfs); 1257 + ac->mana_eqs_debugfs = NULL; 1256 1258 1257 1259 for (i = 0; i < gc->max_num_queues; i++) { 1258 1260 eq = ac->eqs[i].eq; ··· 1914 1914 1915 1915 for (i = 0; i < apc->num_queues; i++) { 1916 1916 debugfs_remove_recursive(apc->tx_qp[i].mana_tx_debugfs); 1917 + apc->tx_qp[i].mana_tx_debugfs = NULL; 1917 1918 1918 1919 napi = &apc->tx_qp[i].tx_cq.napi; 1919 1920 if (apc->tx_qp[i].txq.napi_initialized) { ··· 2100 2099 return; 2101 2100 2102 2101 debugfs_remove_recursive(rxq->mana_rx_debugfs); 2102 + rxq->mana_rx_debugfs = NULL; 2103 2103 2104 2104 napi = &rxq->rx_cq.napi; 2105 2105
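The mana hunks above follow every `debugfs_remove_recursive()` with an assignment of the cached dentry pointer to NULL, so a repeated cleanup pass cannot act on a stale pointer. A small runnable sketch of the same pattern (`toy_ctx`/`toy_teardown` are illustrative names, with `free()` standing in for the debugfs removal):

```c
#include <assert.h>
#include <stdlib.h>

/* Reset the cached handle after teardown so calling teardown twice
 * is harmless instead of a use-after-free. */
struct toy_ctx {
    void *debugfs; /* stands in for a struct dentry * */
};

static void toy_teardown(struct toy_ctx *c)
{
    free(c->debugfs);  /* stands in for debugfs_remove_recursive() */
    c->debugfs = NULL; /* the fix: later cleanups become no-ops */
}
```

With the NULL assignment in place, a second teardown call just passes NULL to the release function, which is defined to do nothing.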
+6 -2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
··· 454 454 455 455 num_vlans = sriov->num_allowed_vlans; 456 456 sriov->allowed_vlans = kcalloc(num_vlans, sizeof(u16), GFP_KERNEL); 457 - if (!sriov->allowed_vlans) 457 + if (!sriov->allowed_vlans) { 458 + qlcnic_sriov_free_vlans(adapter); 458 459 return -ENOMEM; 460 + } 459 461 460 462 vlans = (u16 *)&cmd->rsp.arg[3]; 461 463 for (i = 0; i < num_vlans; i++) ··· 2169 2167 vf = &sriov->vf_info[i]; 2170 2168 vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans, 2171 2169 sizeof(*vf->sriov_vlans), GFP_KERNEL); 2172 - if (!vf->sriov_vlans) 2170 + if (!vf->sriov_vlans) { 2171 + qlcnic_sriov_free_vlans(adapter); 2173 2172 return -ENOMEM; 2173 + } 2174 2174 } 2175 2175 2176 2176 return 0;
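The qlcnic hunks above call `qlcnic_sriov_free_vlans()` when a `kcalloc()` fails partway through, so VLAN arrays already allocated for earlier VFs are not leaked. A toy version of that unwind-on-partial-failure pattern (`alloc_all`/`free_all` and the `fail_at` knob are illustrative, not driver API):

```c
#include <assert.h>
#include <stdlib.h>

/* Free every slot that was populated; NULL slots are skipped by
 * free() itself, so this is safe on a partially built array. */
static void free_all(int **slots, int n)
{
    int i;

    for (i = 0; i < n; i++) {
        free(slots[i]);
        slots[i] = NULL;
    }
}

static int alloc_all(int **slots, int n, int fail_at)
{
    int i;

    for (i = 0; i < n; i++) {
        /* fail_at simulates kcalloc() failing on that iteration */
        slots[i] = (i == fail_at) ? NULL : calloc(4, sizeof(int));
        if (!slots[i]) {
            free_all(slots, n); /* unwind partial allocations */
            return -1;
        }
    }
    return 0;
}
```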
+10
drivers/net/ethernet/realtek/rtase/rtase_main.c
··· 1501 1501 static void rtase_sw_reset(struct net_device *dev) 1502 1502 { 1503 1503 struct rtase_private *tp = netdev_priv(dev); 1504 + struct rtase_ring *ring, *tmp; 1505 + struct rtase_int_vector *ivec; 1504 1506 int ret; 1507 + u32 i; 1505 1508 1506 1509 netif_stop_queue(dev); 1507 1510 netif_carrier_off(dev); ··· 1514 1511 rtase_wait_for_quiescence(dev); 1515 1512 rtase_tx_clear(tp); 1516 1513 rtase_rx_clear(tp); 1514 + 1515 + for (i = 0; i < tp->int_nums; i++) { 1516 + ivec = &tp->int_vector[i]; 1517 + list_for_each_entry_safe(ring, tmp, &ivec->ring_list, 1518 + ring_entry) 1519 + list_del(&ring->ring_entry); 1520 + } 1517 1521 1518 1522 ret = rtase_init_ring(dev); 1519 1523 if (ret) {
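The rtase hunk above walks each vector's `ring_list` with `list_for_each_entry_safe()` and unlinks every stale entry before `rtase_init_ring()` repopulates the lists. A toy singly linked list shows why the "safe" two-pointer walk is needed when nodes are destroyed mid-iteration: the next pointer is saved before the current node is freed (`node`/`purge` are illustrative names):

```c
#include <assert.h>
#include <stdlib.h>

struct node {
    struct node *next;
};

/* Empty the list, freeing each node; returns how many were removed. */
static int purge(struct node **head)
{
    struct node *n = *head, *tmp;
    int removed = 0;

    while (n) {
        tmp = n->next; /* grab next before freeing, as _safe does */
        free(n);
        n = tmp;
        removed++;
    }
    *head = NULL;
    return removed;
}
```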
+4 -2
drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
··· 11 11 #include "dwmac_dma.h" 12 12 #include "dwmac1000.h" 13 13 14 + #define DRIVER_NAME "dwmac-loongson-pci" 15 + 14 16 /* Normal Loongson Tx Summary */ 15 17 #define DMA_INTR_ENA_NIE_TX_LOONGSON 0x00040000 16 18 /* Normal Loongson Rx Summary */ ··· 570 568 for (i = 0; i < PCI_STD_NUM_BARS; i++) { 571 569 if (pci_resource_len(pdev, i) == 0) 572 570 continue; 573 - ret = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev)); 571 + ret = pcim_iomap_regions(pdev, BIT(0), DRIVER_NAME); 574 572 if (ret) 575 573 goto err_disable_device; 576 574 break; ··· 689 687 MODULE_DEVICE_TABLE(pci, loongson_dwmac_id_table); 690 688 691 689 static struct pci_driver loongson_dwmac_driver = { 692 - .name = "dwmac-loongson-pci", 690 + .name = DRIVER_NAME, 693 691 .id_table = loongson_dwmac_id_table, 694 692 .probe = loongson_dwmac_probe, 695 693 .remove = loongson_dwmac_remove,
+9 -9
drivers/net/ipa/data/ipa_data-v4.7.c
··· 28 28 enum ipa_rsrc_group_id { 29 29 /* Source resource group identifiers */ 30 30 IPA_RSRC_GROUP_SRC_UL_DL = 0, 31 - IPA_RSRC_GROUP_SRC_UC_RX_Q, 32 31 IPA_RSRC_GROUP_SRC_COUNT, /* Last in set; not a source group */ 33 32 34 33 /* Destination resource group identifiers */ 35 - IPA_RSRC_GROUP_DST_UL_DL_DPL = 0, 36 - IPA_RSRC_GROUP_DST_UNUSED_1, 34 + IPA_RSRC_GROUP_DST_UL_DL = 0, 37 35 IPA_RSRC_GROUP_DST_COUNT, /* Last; not a destination group */ 38 36 }; 39 37 40 38 /* QSB configuration data for an SoC having IPA v4.7 */ 41 39 static const struct ipa_qsb_data ipa_qsb_data[] = { 42 40 [IPA_QSB_MASTER_DDR] = { 43 - .max_writes = 8, 44 - .max_reads = 0, /* no limit (hardware max) */ 41 + .max_writes = 12, 42 + .max_reads = 13, 45 43 .max_reads_beats = 120, 46 44 }, 47 45 }; ··· 79 81 }, 80 82 .endpoint = { 81 83 .config = { 82 - .resource_group = IPA_RSRC_GROUP_DST_UL_DL_DPL, 84 + .resource_group = IPA_RSRC_GROUP_DST_UL_DL, 83 85 .aggregation = true, 84 86 .status_enable = true, 85 87 .rx = { ··· 104 106 .filter_support = true, 105 107 .config = { 106 108 .resource_group = IPA_RSRC_GROUP_SRC_UL_DL, 109 + .checksum = true, 107 110 .qmap = true, 108 111 .status_enable = true, 109 112 .tx = { ··· 127 128 }, 128 129 .endpoint = { 129 130 .config = { 130 - .resource_group = IPA_RSRC_GROUP_DST_UL_DL_DPL, 131 + .resource_group = IPA_RSRC_GROUP_DST_UL_DL, 132 + .checksum = true, 131 133 .qmap = true, 132 134 .aggregation = true, 133 135 .rx = { ··· 197 197 /* Destination resource configuration data for an SoC having IPA v4.7 */ 198 198 static const struct ipa_resource ipa_resource_dst[] = { 199 199 [IPA_RESOURCE_TYPE_DST_DATA_SECTORS] = { 200 - .limits[IPA_RSRC_GROUP_DST_UL_DL_DPL] = { 200 + .limits[IPA_RSRC_GROUP_DST_UL_DL] = { 201 201 .min = 7, .max = 7, 202 202 }, 203 203 }, 204 204 [IPA_RESOURCE_TYPE_DST_DPS_DMARS] = { 205 - .limits[IPA_RSRC_GROUP_DST_UL_DL_DPL] = { 205 + .limits[IPA_RSRC_GROUP_DST_UL_DL] = { 206 206 .min = 2, .max = 2, 207 207 }, 208 208 },
+5
drivers/net/mctp/mctp-i2c.c
··· 583 583 struct mctp_i2c_hdr *hdr; 584 584 struct mctp_hdr *mhdr; 585 585 u8 lldst, llsrc; 586 + int rc; 586 587 587 588 if (len > MCTP_I2C_MAXMTU) 588 589 return -EMSGSIZE; ··· 593 592 594 593 lldst = *((u8 *)daddr); 595 594 llsrc = *((u8 *)saddr); 595 + 596 + rc = skb_cow_head(skb, sizeof(struct mctp_i2c_hdr)); 597 + if (rc) 598 + return rc; 596 599 597 600 skb_push(skb, sizeof(struct mctp_i2c_hdr)); 598 601 skb_reset_mac_header(skb);
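The mctp-i2c hunk above calls `skb_cow_head()` before `skb_push()`, guaranteeing writable headroom exists before the hardware header is prepended. A toy buffer analogue of the check (`struct buf`/`push` are illustrative; here insufficient headroom fails outright rather than reallocating, which is the part `skb_cow_head()` handles in the kernel):

```c
#include <assert.h>
#include <string.h>

struct buf {
    unsigned char storage[64];
    unsigned int off; /* current start of data within storage */
};

/* Prepend len bytes of header, failing instead of writing before
 * the start of the buffer. */
static int push(struct buf *b, const void *hdr, unsigned int len)
{
    if (b->off < len)
        return -1; /* not enough headroom */
    b->off -= len;
    memcpy(b->storage + b->off, hdr, len);
    return 0;
}
```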
+8
drivers/net/mctp/mctp-i3c.c
··· 506 506 const void *saddr, unsigned int len) 507 507 { 508 508 struct mctp_i3c_internal_hdr *ihdr; 509 + int rc; 510 + 511 + if (!daddr || !saddr) 512 + return -EINVAL; 513 + 514 + rc = skb_cow_head(skb, sizeof(struct mctp_i3c_internal_hdr)); 515 + if (rc) 516 + return rc; 509 517 510 518 skb_push(skb, sizeof(struct mctp_i3c_internal_hdr)); 511 519 skb_reset_mac_header(skb);
+68
drivers/net/phy/nxp-c45-tja11xx.c
··· 22 22 #define PHY_ID_TJA_1103 0x001BB010 23 23 #define PHY_ID_TJA_1120 0x001BB031 24 24 25 + #define VEND1_DEVICE_ID3 0x0004 26 + #define TJA1120_DEV_ID3_SILICON_VERSION GENMASK(15, 12) 27 + #define TJA1120_DEV_ID3_SAMPLE_TYPE GENMASK(11, 8) 28 + #define DEVICE_ID3_SAMPLE_TYPE_R 0x9 29 + 25 30 #define VEND1_DEVICE_CONTROL 0x0040 26 31 #define DEVICE_CONTROL_RESET BIT(15) 27 32 #define DEVICE_CONTROL_CONFIG_GLOBAL_EN BIT(14) ··· 113 108 #define MII_BASIC_CONFIG_RGMII 0x7 114 109 #define MII_BASIC_CONFIG_RMII 0x5 115 110 #define MII_BASIC_CONFIG_MII 0x4 111 + 112 + #define VEND1_SGMII_BASIC_CONTROL 0xB000 113 + #define SGMII_LPM BIT(11) 116 114 117 115 #define VEND1_SYMBOL_ERROR_CNT_XTD 0x8351 118 116 #define EXTENDED_CNT_EN BIT(15) ··· 1601 1593 return 0; 1602 1594 } 1603 1595 1596 + /* Errata: ES_TJA1120 and ES_TJA1121 Rev. 1.0 — 28 November 2024 Section 3.1 & 3.2 */ 1597 + static void nxp_c45_tja1120_errata(struct phy_device *phydev) 1598 + { 1599 + bool macsec_ability, sgmii_ability; 1600 + int silicon_version, sample_type; 1601 + int phy_abilities; 1602 + int ret = 0; 1603 + 1604 + ret = phy_read_mmd(phydev, MDIO_MMD_VEND1, VEND1_DEVICE_ID3); 1605 + if (ret < 0) 1606 + return; 1607 + 1608 + sample_type = FIELD_GET(TJA1120_DEV_ID3_SAMPLE_TYPE, ret); 1609 + if (sample_type != DEVICE_ID3_SAMPLE_TYPE_R) 1610 + return; 1611 + 1612 + silicon_version = FIELD_GET(TJA1120_DEV_ID3_SILICON_VERSION, ret); 1613 + 1614 + phy_abilities = phy_read_mmd(phydev, MDIO_MMD_VEND1, 1615 + VEND1_PORT_ABILITIES); 1616 + macsec_ability = !!(phy_abilities & MACSEC_ABILITY); 1617 + sgmii_ability = !!(phy_abilities & SGMII_ABILITY); 1618 + if ((!macsec_ability && silicon_version == 2) || 1619 + (macsec_ability && silicon_version == 1)) { 1620 + /* TJA1120/TJA1121 PHY configuration errata workaround. 1621 + * Apply PHY writes sequence before link up. 
1622 + */ 1623 + if (!macsec_ability) { 1624 + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x4b95); 1625 + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0xf3cd); 1626 + } else { 1627 + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x89c7); 1628 + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0x0893); 1629 + } 1630 + 1631 + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x0476, 0x58a0); 1632 + 1633 + phy_write_mmd(phydev, MDIO_MMD_PMAPMD, 0x8921, 0xa3a); 1634 + phy_write_mmd(phydev, MDIO_MMD_PMAPMD, 0x89F1, 0x16c1); 1635 + 1636 + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x0); 1637 + phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0x0); 1638 + 1639 + if (sgmii_ability) { 1640 + /* TJA1120B/TJA1121B SGMII PCS restart errata workaround. 1641 + * Put SGMII PCS into power down mode and back up. 1642 + */ 1643 + phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, 1644 + VEND1_SGMII_BASIC_CONTROL, 1645 + SGMII_LPM); 1646 + phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, 1647 + VEND1_SGMII_BASIC_CONTROL, 1648 + SGMII_LPM); 1649 + } 1650 + } 1651 + } 1652 + 1604 1653 static int nxp_c45_config_init(struct phy_device *phydev) 1605 1654 { 1606 1655 int ret; ··· 1673 1608 */ 1674 1609 phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 1); 1675 1610 phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 2); 1611 + 1612 + if (phy_id_compare(phydev->phy_id, PHY_ID_TJA_1120, GENMASK(31, 4))) 1613 + nxp_c45_tja1120_errata(phydev); 1676 1614 1677 1615 phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, VEND1_PHY_CONFIG, 1678 1616 PHY_CONFIG_AUTO);
+19 -9
drivers/net/ppp/ppp_generic.c
··· 72 72 #define PPP_PROTO_LEN 2 73 73 #define PPP_LCP_HDRLEN 4 74 74 75 + /* The filter instructions generated by libpcap are constructed 76 + * assuming a four-byte PPP header on each packet, where the last 77 + * 2 bytes are the protocol field defined in the RFC and the first 78 + * byte of the first 2 bytes indicates the direction. 79 + * The second byte is currently unused, but we still need to initialize 80 + * it to prevent crafted BPF programs from reading them which would 81 + * cause reading of uninitialized data. 82 + */ 83 + #define PPP_FILTER_OUTBOUND_TAG 0x0100 84 + #define PPP_FILTER_INBOUND_TAG 0x0000 85 + 75 86 /* 76 87 * An instance of /dev/ppp can be associated with either a ppp 77 88 * interface unit or a ppp channel. In both cases, file->private_data ··· 1773 1762 1774 1763 if (proto < 0x8000) { 1775 1764 #ifdef CONFIG_PPP_FILTER 1776 - /* check if we should pass this packet */ 1777 - /* the filter instructions are constructed assuming 1778 - a four-byte PPP header on each packet */ 1779 - *(u8 *)skb_push(skb, 2) = 1; 1765 + /* check if the packet passes the pass and active filters. 1766 + * See comment for PPP_FILTER_OUTBOUND_TAG above. 1767 + */ 1768 + *(__be16 *)skb_push(skb, 2) = htons(PPP_FILTER_OUTBOUND_TAG); 1780 1769 if (ppp->pass_filter && 1781 1770 bpf_prog_run(ppp->pass_filter, skb) == 0) { 1782 1771 if (ppp->debug & 1) ··· 2493 2482 /* network protocol frame - give it to the kernel */ 2494 2483 2495 2484 #ifdef CONFIG_PPP_FILTER 2496 - /* check if the packet passes the pass and active filters */ 2497 - /* the filter instructions are constructed assuming 2498 - a four-byte PPP header on each packet */ 2499 2485 if (ppp->pass_filter || ppp->active_filter) { 2500 2486 if (skb_unclone(skb, GFP_ATOMIC)) 2501 2487 goto err; 2502 - 2503 - *(u8 *)skb_push(skb, 2) = 0; 2488 + /* Check if the packet passes the pass and active filters. 2489 + * See comment for PPP_FILTER_INBOUND_TAG above. 
2490 + */ 2491 + *(__be16 *)skb_push(skb, 2) = htons(PPP_FILTER_INBOUND_TAG); 2504 2492 if (ppp->pass_filter && 2505 2493 bpf_prog_run(ppp->pass_filter, skb) == 0) { 2506 2494 if (ppp->debug & 1)
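The new tags exist because libpcap-generated BPF programs assume a 4-byte pseudo-header: a 2-byte direction tag followed by the 2-byte PPP protocol field. Writing the tag as a full 16-bit big-endian value (rather than a single byte) is what keeps the second header byte initialized. A sketch of that layout (`ppp_filter_word()` is illustrative, not a kernel helper; 0x0021 is the PPP protocol number for IPv4):

```c
#include <assert.h>
#include <stdint.h>

#define PPP_FILTER_OUTBOUND_TAG 0x0100
#define PPP_FILTER_INBOUND_TAG  0x0000

/* The 4-byte pseudo-header the filter inspects, as one big-endian
 * word: 16-bit direction tag in the high half, 16-bit PPP protocol
 * in the low half. Both bytes of the tag are always defined. */
static uint32_t ppp_filter_word(uint16_t tag, uint16_t proto)
{
	return ((uint32_t)tag << 16) | proto;
}
```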
+2 -2
drivers/net/usb/lan78xx.c
··· 627 627 628 628 kfree(buf); 629 629 630 - return ret; 630 + return ret < 0 ? ret : 0; 631 631 } 632 632 633 633 static int lan78xx_write_reg(struct lan78xx_net *dev, u32 index, u32 data) ··· 658 658 659 659 kfree(buf); 660 660 661 - return ret; 661 + return ret < 0 ? ret : 0; 662 662 } 663 663 664 664 static int lan78xx_update_reg(struct lan78xx_net *dev, u32 reg, u32 mask,
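The fix above addresses a common USB-driver pitfall: control-transfer helpers return the number of bytes transferred on success, while the register accessors' callers expect "0 or negative errno". The `ret < 0 ? ret : 0` idiom keeps the errno and discards the byte count; a trivial sketch of just that normalization (names illustrative):

```c
#include <assert.h>

/* Collapse a "bytes transferred, or negative errno" result into the
 * "0 on success, negative errno on failure" convention callers expect. */
static int normalize_usb_ret(int ret)
{
	return ret < 0 ? ret : 0;
}
```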
+13 -7
drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
··· 1172 1172 struct brcmf_bus *bus_if; 1173 1173 struct brcmf_sdio_dev *sdiodev; 1174 1174 mmc_pm_flag_t sdio_flags; 1175 + bool cap_power_off; 1175 1176 int ret = 0; 1176 1177 1177 1178 func = container_of(dev, struct sdio_func, dev); ··· 1180 1179 if (func->num != 1) 1181 1180 return 0; 1182 1181 1182 + cap_power_off = !!(func->card->host->caps & MMC_CAP_POWER_OFF_CARD); 1183 1183 1184 1184 bus_if = dev_get_drvdata(dev); 1185 1185 sdiodev = bus_if->bus_priv.sdio; 1186 1186 1187 - if (sdiodev->wowl_enabled) { 1187 + if (sdiodev->wowl_enabled || !cap_power_off) { 1188 1188 brcmf_sdiod_freezer_on(sdiodev); 1189 1189 brcmf_sdio_wd_timer(sdiodev->bus, 0); 1190 1190 1191 1191 sdio_flags = MMC_PM_KEEP_POWER; 1192 - if (sdiodev->settings->bus.sdio.oob_irq_supported) 1193 - enable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr); 1194 - else 1195 - sdio_flags |= MMC_PM_WAKE_SDIO_IRQ; 1192 + 1193 + if (sdiodev->wowl_enabled) { 1194 + if (sdiodev->settings->bus.sdio.oob_irq_supported) 1195 + enable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr); 1196 + else 1197 + sdio_flags |= MMC_PM_WAKE_SDIO_IRQ; 1198 + } 1196 1199 1197 1200 if (sdio_set_host_pm_flags(sdiodev->func1, sdio_flags)) 1198 1201 brcmf_err("Failed to set pm_flags %x\n", sdio_flags); ··· 1218 1213 struct brcmf_sdio_dev *sdiodev = bus_if->bus_priv.sdio; 1219 1214 struct sdio_func *func = container_of(dev, struct sdio_func, dev); 1220 1215 int ret = 0; 1216 + bool cap_power_off = !!(func->card->host->caps & MMC_CAP_POWER_OFF_CARD); 1221 1217 1222 1218 brcmf_dbg(SDIO, "Enter: F%d\n", func->num); 1223 1219 if (func->num != 2) 1224 1220 return 0; 1225 1221 1226 - if (!sdiodev->wowl_enabled) { 1222 + if (!sdiodev->wowl_enabled && cap_power_off) { 1227 1223 /* bus was powered off and device removed, probe again */ 1228 1224 ret = brcmf_sdiod_probe(sdiodev); 1229 1225 if (ret) 1230 1226 brcmf_err("Failed to probe device on resume\n"); 1231 1227 } else { 1232 - if (sdiodev->settings->bus.sdio.oob_irq_supported) 1228 + if (sdiodev->wowl_enabled && sdiodev->settings->bus.sdio.oob_irq_supported) 1233 1229 disable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr); 1234 1230 1235 1231 brcmf_sdiod_freezer_off(sdiodev);
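The suspend-path condition being changed reduces to a small truth table: the card must stay powered across suspend if wakeup (WOWL) is armed, or if the host controller cannot power the card off anyway (no MMC_CAP_POWER_OFF_CARD). A sketch of just that predicate (not driver code):

```c
#include <assert.h>
#include <stdbool.h>

/* Should the SDIO card stay powered across suspend? */
static bool keep_card_powered(bool wowl_enabled, bool cap_power_off)
{
	return wowl_enabled || !cap_power_off;
}
```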
+58 -28
drivers/net/wireless/intel/iwlwifi/fw/dbg.c
··· 558 558 } 559 559 560 560 /* 561 - * alloc_sgtable - allocates scallerlist table in the given size, 562 - * fills it with pages and returns it 561 + * alloc_sgtable - allocates (chained) scatterlist in the given size, 562 + * fills it with pages and returns it 563 563 * @size: the size (in bytes) of the table 564 - */ 565 - static struct scatterlist *alloc_sgtable(int size) 564 + */ 565 + static struct scatterlist *alloc_sgtable(ssize_t size) 566 566 { 567 - int alloc_size, nents, i; 568 - struct page *new_page; 569 - struct scatterlist *iter; 570 - struct scatterlist *table; 567 + struct scatterlist *result = NULL, *prev; 568 + int nents, i, n_prev; 571 569 572 570 nents = DIV_ROUND_UP(size, PAGE_SIZE); 573 - table = kcalloc(nents, sizeof(*table), GFP_KERNEL); 574 - if (!table) 575 - return NULL; 576 - sg_init_table(table, nents); 577 - iter = table; 578 - for_each_sg(table, iter, sg_nents(table), i) { 579 - new_page = alloc_page(GFP_KERNEL); 580 - if (!new_page) { 581 - /* release all previous allocated pages in the table */ 582 - iter = table; 583 - for_each_sg(table, iter, sg_nents(table), i) { 584 - new_page = sg_page(iter); 585 - if (new_page) 586 - __free_page(new_page); 587 - } 588 - kfree(table); 571 + 572 + #define N_ENTRIES_PER_PAGE (PAGE_SIZE / sizeof(*result)) 573 + /* 574 + * We need an additional entry for table chaining, 575 + * this ensures the loop can finish i.e. we can 576 + * fit at least two entries per page (obviously, 577 + * many more really fit.) 
578 + */ 579 + BUILD_BUG_ON(N_ENTRIES_PER_PAGE < 2); 580 + 581 + while (nents > 0) { 582 + struct scatterlist *new, *iter; 583 + int n_fill, n_alloc; 584 + 585 + if (nents <= N_ENTRIES_PER_PAGE) { 586 + /* last needed table */ 587 + n_fill = nents; 588 + n_alloc = nents; 589 + nents = 0; 590 + } else { 591 + /* fill a page with entries */ 592 + n_alloc = N_ENTRIES_PER_PAGE; 593 + /* reserve one for chaining */ 594 + n_fill = n_alloc - 1; 595 + nents -= n_fill; 596 + } 597 + 598 + new = kcalloc(n_alloc, sizeof(*new), GFP_KERNEL); 599 + if (!new) { 600 + if (result) 601 + _devcd_free_sgtable(result); 589 602 return NULL; 590 603 } 591 - alloc_size = min_t(int, size, PAGE_SIZE); 592 - size -= PAGE_SIZE; 593 - sg_set_page(iter, new_page, alloc_size, 0); 604 + sg_init_table(new, n_alloc); 605 + 606 + if (!result) 607 + result = new; 608 + else 609 + sg_chain(prev, n_prev, new); 610 + prev = new; 611 + n_prev = n_alloc; 612 + 613 + for_each_sg(new, iter, n_fill, i) { 614 + struct page *new_page = alloc_page(GFP_KERNEL); 615 + 616 + if (!new_page) { 617 + _devcd_free_sgtable(result); 618 + return NULL; 619 + } 620 + 621 + sg_set_page(iter, new_page, PAGE_SIZE, 0); 622 + } 594 623 } 595 - return table; 624 + 625 + return result; 596 626 } 597 627 598 628 static void iwl_fw_get_prph_len(struct iwl_fw_runtime *fwrt,
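The rewritten alloc_sgtable() caps each table allocation at one page and spends the last slot of every non-final table on a chain entry, so the number of tables follows from simple arithmetic. A user-space sketch of that count (here `per_page` stands in for PAGE_SIZE / sizeof(struct scatterlist); 128 in the tests is an arbitrary example value):

```c
#include <assert.h>

/* How many page-sized scatterlist tables `nents` entries need, when
 * each non-final table reserves one slot for chaining to the next. */
static int sg_tables_needed(int nents, int per_page)
{
	int tables = 0;

	while (nents > 0) {
		if (nents <= per_page)
			nents = 0;		/* last table: no chain slot needed */
		else
			nents -= per_page - 1;	/* one slot lost to the chain entry */
		tables++;
	}
	return tables;
}
```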
+3
drivers/net/wireless/intel/iwlwifi/fw/dump.c
··· 540 540 } err_info = {}; 541 541 int ret; 542 542 543 + if (err_id) 544 + *err_id = 0; 545 + 543 546 if (!base) 544 547 return false; 545 548
+1 -1
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 1181 1181 1182 1182 if (tlv_len != sizeof(*fseq_ver)) 1183 1183 goto invalid_tlv_len; 1184 - IWL_INFO(drv, "TLV_FW_FSEQ_VERSION: %s\n", 1184 + IWL_INFO(drv, "TLV_FW_FSEQ_VERSION: %.32s\n", 1185 1185 fseq_ver->version); 1186 1186 } 1187 1187 break;
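The `%.32s` change matters because the fseq version field is a fixed 32-byte buffer that is not guaranteed to be NUL-terminated; the precision caps how many bytes printf-style formatting may read. A user-space demonstration (assumes nothing about real TLV contents):

```c
#include <stdio.h>
#include <string.h>

/* Format a fixed 32-byte, deliberately unterminated version field and
 * return the number of characters produced: "%.32s" stops at 32 bytes
 * even without a trailing NUL. */
static int format_version_len(void)
{
	char ver[32];
	char out[64];

	memset(ver, 'A', sizeof(ver));	/* no terminating NUL */
	return snprintf(out, sizeof(out), "%.32s", ver);
}
```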
+2
drivers/net/wireless/intel/iwlwifi/iwl-trans.c
··· 403 403 404 404 iwl_trans_pcie_op_mode_leave(trans); 405 405 406 + cancel_work_sync(&trans->restart.wk); 407 + 406 408 trans->op_mode = NULL; 407 409 408 410 trans->state = IWL_TRANS_NO_FW;
+51 -26
drivers/net/wireless/intel/iwlwifi/mvm/d3.c
··· 3092 3092 ieee80211_resume_disconnect(vif); 3093 3093 } 3094 3094 3095 - static bool iwl_mvm_check_rt_status(struct iwl_mvm *mvm, 3096 - struct ieee80211_vif *vif) 3095 + enum rt_status { 3096 + FW_ALIVE, 3097 + FW_NEEDS_RESET, 3098 + FW_ERROR, 3099 + }; 3100 + 3101 + static enum rt_status iwl_mvm_check_rt_status(struct iwl_mvm *mvm, 3102 + struct ieee80211_vif *vif) 3097 3103 { 3098 3104 u32 err_id; 3099 3105 ··· 3107 3101 if (iwl_fwrt_read_err_table(mvm->trans, 3108 3102 mvm->trans->dbg.lmac_error_event_table[0], 3109 3103 &err_id)) { 3110 - if (err_id == RF_KILL_INDICATOR_FOR_WOWLAN && vif) { 3111 - struct cfg80211_wowlan_wakeup wakeup = { 3112 - .rfkill_release = true, 3113 - }; 3114 - ieee80211_report_wowlan_wakeup(vif, &wakeup, 3115 - GFP_KERNEL); 3104 + if (err_id == RF_KILL_INDICATOR_FOR_WOWLAN) { 3105 + IWL_WARN(mvm, "Rfkill was toggled during suspend\n"); 3106 + if (vif) { 3107 + struct cfg80211_wowlan_wakeup wakeup = { 3108 + .rfkill_release = true, 3109 + }; 3110 + 3111 + ieee80211_report_wowlan_wakeup(vif, &wakeup, 3112 + GFP_KERNEL); 3113 + } 3114 + 3115 + return FW_NEEDS_RESET; 3116 3116 } 3117 - return true; 3117 + return FW_ERROR; 3118 3118 } 3119 3119 3120 3120 /* check if we have lmac2 set and check for error */ 3121 3121 if (iwl_fwrt_read_err_table(mvm->trans, 3122 3122 mvm->trans->dbg.lmac_error_event_table[1], 3123 3123 NULL)) 3124 - return true; 3124 + return FW_ERROR; 3125 3125 3126 3126 /* check for umac error */ 3127 3127 if (iwl_fwrt_read_err_table(mvm->trans, 3128 3128 mvm->trans->dbg.umac_error_event_table, 3129 3129 NULL)) 3130 - return true; 3130 + return FW_ERROR; 3131 3131 3132 - return false; 3132 + return FW_ALIVE; 3133 3133 } 3134 3134 3135 3135 /* ··· 3504 3492 bool d0i3_first = fw_has_capa(&mvm->fw->ucode_capa, 3505 3493 IWL_UCODE_TLV_CAPA_D0I3_END_FIRST); 3506 3494 bool resume_notif_based = iwl_mvm_d3_resume_notif_based(mvm); 3495 + enum rt_status rt_status; 3507 3496 bool keep = false; 3508 3497 3509 3498 
mutex_lock(&mvm->mutex); ··· 3528 3515 3529 3516 iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt); 3530 3517 3531 - if (iwl_mvm_check_rt_status(mvm, vif)) { 3532 - IWL_ERR(mvm, "FW Error occurred during suspend. Restarting.\n"); 3518 + rt_status = iwl_mvm_check_rt_status(mvm, vif); 3519 + if (rt_status != FW_ALIVE) { 3533 3520 set_bit(STATUS_FW_ERROR, &mvm->trans->status); 3534 - iwl_mvm_dump_nic_error_log(mvm); 3535 - iwl_dbg_tlv_time_point(&mvm->fwrt, 3536 - IWL_FW_INI_TIME_POINT_FW_ASSERT, NULL); 3537 - iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert, 3538 - false, 0); 3521 + if (rt_status == FW_ERROR) { 3522 + IWL_ERR(mvm, "FW Error occurred during suspend. Restarting.\n"); 3523 + iwl_mvm_dump_nic_error_log(mvm); 3524 + iwl_dbg_tlv_time_point(&mvm->fwrt, 3525 + IWL_FW_INI_TIME_POINT_FW_ASSERT, 3526 + NULL); 3527 + iwl_fw_dbg_collect_desc(&mvm->fwrt, 3528 + &iwl_dump_desc_assert, 3529 + false, 0); 3530 + } 3539 3531 ret = 1; 3540 3532 goto err; 3541 3533 } ··· 3697 3679 .notif_expected = 3698 3680 IWL_D3_NOTIF_D3_END_NOTIF, 3699 3681 }; 3682 + enum rt_status rt_status; 3700 3683 int ret; 3701 3684 3702 3685 lockdep_assert_held(&mvm->mutex); ··· 3707 3688 mvm->last_reset_or_resume_time_jiffies = jiffies; 3708 3689 iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt); 3709 3690 3710 - if (iwl_mvm_check_rt_status(mvm, NULL)) { 3711 - IWL_ERR(mvm, "FW Error occurred during suspend. Restarting.\n"); 3691 + rt_status = iwl_mvm_check_rt_status(mvm, NULL); 3692 + if (rt_status != FW_ALIVE) { 3712 3693 set_bit(STATUS_FW_ERROR, &mvm->trans->status); 3713 - iwl_mvm_dump_nic_error_log(mvm); 3714 - iwl_dbg_tlv_time_point(&mvm->fwrt, 3715 - IWL_FW_INI_TIME_POINT_FW_ASSERT, NULL); 3716 - iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert, 3717 - false, 0); 3694 + if (rt_status == FW_ERROR) { 3695 + IWL_ERR(mvm, 3696 + "iwl_mvm_check_rt_status failed, device is gone during suspend\n"); 3697 + iwl_mvm_dump_nic_error_log(mvm); 3698 + iwl_dbg_tlv_time_point(&mvm->fwrt, 3699 + IWL_FW_INI_TIME_POINT_FW_ASSERT, 3700 + NULL); 3701 + iwl_fw_dbg_collect_desc(&mvm->fwrt, 3702 + &iwl_dump_desc_assert, 3703 + false, 0); 3704 + } 3718 3705 mvm->trans->state = IWL_TRANS_NO_FW; 3719 3706 ret = -ENODEV; 3720 3707
+7
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
··· 1479 1479 if (mvm->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_9000) 1480 1480 return -EOPNOTSUPP; 1481 1481 1482 + /* 1483 + * If the firmware is not running, silently succeed since there is 1484 + * no data to clear. 1485 + */ 1486 + if (!iwl_mvm_firmware_running(mvm)) 1487 + return count; 1488 + 1482 1489 mutex_lock(&mvm->mutex); 1483 1490 iwl_fw_dbg_clear_monitor_buf(&mvm->fwrt); 1484 1491 mutex_unlock(&mvm->mutex);
+3 -3
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2012-2014, 2018-2024 Intel Corporation 3 + * Copyright (C) 2012-2014, 2018-2025 Intel Corporation 4 4 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH 5 5 * Copyright (C) 2016-2017 Intel Deutschland GmbH 6 6 */ ··· 422 422 /* if reached this point, Alive notification was received */ 423 423 iwl_mei_alive_notif(true); 424 424 425 + iwl_trans_fw_alive(mvm->trans, alive_data.scd_base_addr); 426 + 425 427 ret = iwl_pnvm_load(mvm->trans, &mvm->notif_wait, 426 428 &mvm->fw->ucode_capa); 427 429 if (ret) { ··· 431 429 iwl_fw_set_current_image(&mvm->fwrt, old_type); 432 430 return ret; 433 431 } 434 - 435 - iwl_trans_fw_alive(mvm->trans, alive_data.scd_base_addr); 436 432 437 433 /* 438 434 * Note: all the queues are enabled as part of the interface
+4 -4
drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
··· 995 995 */ 996 996 u8 ru = le32_get_bits(phy_data->d1, IWL_RX_PHY_DATA1_HE_RU_ALLOC_MASK); 997 997 u32 rate_n_flags = phy_data->rate_n_flags; 998 - u32 he_type = rate_n_flags & RATE_MCS_HE_TYPE_MSK_V1; 998 + u32 he_type = rate_n_flags & RATE_MCS_HE_TYPE_MSK; 999 999 u8 offs = 0; 1000 1000 1001 1001 rx_status->bw = RATE_INFO_BW_HE_RU; ··· 1050 1050 1051 1051 if (he_mu) 1052 1052 he_mu->flags2 |= 1053 - le16_encode_bits(FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK_V1, 1053 + le16_encode_bits(FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK, 1054 1054 rate_n_flags), 1055 1055 IEEE80211_RADIOTAP_HE_MU_FLAGS2_BW_FROM_SIG_A_BW); 1056 - else if (he_type == RATE_MCS_HE_TYPE_TRIG_V1) 1056 + else if (he_type == RATE_MCS_HE_TYPE_TRIG) 1057 1057 he->data6 |= 1058 1058 cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA6_TB_PPDU_BW_KNOWN) | 1059 - le16_encode_bits(FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK_V1, 1059 + le16_encode_bits(FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK, 1060 1060 rate_n_flags), 1061 1061 IEEE80211_RADIOTAP_HE_DATA6_TB_PPDU_BW); 1062 1062 }
+2
drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
··· 1030 1030 /* End TE, notify mac80211 */ 1031 1031 mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID; 1032 1032 mvmvif->time_event_data.link_id = -1; 1033 + /* set the bit so the ROC cleanup will actually clean up */ 1034 + set_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status); 1033 1035 iwl_mvm_roc_finished(mvm); 1034 1036 ieee80211_remain_on_channel_expired(mvm->hw); 1035 1037 } else if (le32_to_cpu(notif->start)) {
+3 -2
drivers/net/wireless/intel/iwlwifi/pcie/internal.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */ 2 2 /* 3 - * Copyright (C) 2003-2015, 2018-2024 Intel Corporation 3 + * Copyright (C) 2003-2015, 2018-2025 Intel Corporation 4 4 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH 5 5 * Copyright (C) 2016-2017 Intel Deutschland GmbH 6 6 */ ··· 646 646 unsigned int len); 647 647 struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb, 648 648 struct iwl_cmd_meta *cmd_meta, 649 - u8 **hdr, unsigned int hdr_room); 649 + u8 **hdr, unsigned int hdr_room, 650 + unsigned int offset); 650 651 651 652 void iwl_pcie_free_tso_pages(struct iwl_trans *trans, struct sk_buff *skb, 652 653 struct iwl_cmd_meta *cmd_meta);
+4 -2
drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 3 * Copyright (C) 2017 Intel Deutschland GmbH 4 - * Copyright (C) 2018-2020, 2023-2024 Intel Corporation 4 + * Copyright (C) 2018-2020, 2023-2025 Intel Corporation 5 5 */ 6 6 #include <net/tso.h> 7 7 #include <linux/tcp.h> ··· 188 188 (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)); 189 189 190 190 /* Our device supports 9 segments at most, it will fit in 1 page */ 191 - sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room); 191 + sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room, 192 + snap_ip_tcp_hdrlen + hdr_len); 192 193 if (!sgt) 193 194 return -ENOMEM; 194 195 ··· 348 347 return tfd; 349 348 350 349 out_err: 350 + iwl_pcie_free_tso_pages(trans, skb, out_meta); 351 351 iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd); 352 352 return NULL; 353 353 }
+14 -9
drivers/net/wireless/intel/iwlwifi/pcie/tx.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause 2 2 /* 3 - * Copyright (C) 2003-2014, 2018-2021, 2023-2024 Intel Corporation 3 + * Copyright (C) 2003-2014, 2018-2021, 2023-2025 Intel Corporation 4 4 * Copyright (C) 2013-2015 Intel Mobile Communications GmbH 5 5 * Copyright (C) 2016-2017 Intel Deutschland GmbH 6 6 */ ··· 1855 1855 * @cmd_meta: command meta to store the scatter list information for unmapping 1856 1856 * @hdr: output argument for TSO headers 1857 1857 * @hdr_room: requested length for TSO headers 1858 + * @offset: offset into the data from which mapping should start 1858 1859 * 1859 1860 * Allocate space for a scatter gather list and TSO headers and map the SKB 1860 1861 * using the scatter gather list. The SKB is unmapped again when the page is ··· 1865 1864 */ 1866 1865 struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb, 1867 1866 struct iwl_cmd_meta *cmd_meta, 1868 - u8 **hdr, unsigned int hdr_room) 1867 + u8 **hdr, unsigned int hdr_room, 1868 + unsigned int offset) 1869 1869 { 1870 1870 struct sg_table *sgt; 1871 + unsigned int n_segments = skb_shinfo(skb)->nr_frags + 1; 1872 + int orig_nents; 1871 1873 1872 1874 if (WARN_ON_ONCE(skb_has_frag_list(skb))) 1873 1875 return NULL; ··· 1878 1874 *hdr = iwl_pcie_get_page_hdr(trans, 1879 1875 hdr_room + __alignof__(struct sg_table) + 1880 1876 sizeof(struct sg_table) + 1881 - (skb_shinfo(skb)->nr_frags + 1) * 1882 - sizeof(struct scatterlist), 1877 + n_segments * sizeof(struct scatterlist), 1883 1878 skb); 1884 1879 if (!*hdr) 1885 1880 return NULL; ··· 1886 1883 sgt = (void *)PTR_ALIGN(*hdr + hdr_room, __alignof__(struct sg_table)); 1887 1884 sgt->sgl = (void *)(sgt + 1); 1888 1885 1889 1886 sg_init_table(sgt->sgl, n_segments); 1890 1887 1891 1888 /* Only map the data, not the header (it is copied to the TSO page) */ 1892 - sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, skb_headlen(skb), 1893 - skb->data_len); 1894 - if (WARN_ON_ONCE(sgt->orig_nents <= 0)) 1889 + orig_nents = skb_to_sgvec(skb, sgt->sgl, offset, skb->len - offset); 1890 + if (WARN_ON_ONCE(orig_nents <= 0)) 1895 1891 return NULL; 1892 + 1893 + sgt->orig_nents = orig_nents; 1896 1894 1897 1895 /* And map the entire SKB */ 1898 1896 if (dma_map_sgtable(trans->dev, sgt, DMA_TO_DEVICE, 0) < 0) ··· 1943 1939 (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len; 1944 1940 1945 1941 /* Our device supports 9 segments at most, it will fit in 1 page */ 1946 - sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room); 1942 + sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room, 1943 + snap_ip_tcp_hdrlen + hdr_len + iv_len); 1947 1944
+2 -1
drivers/nvme/host/apple.c
··· 599 599 } 600 600 601 601 if (!nvme_try_complete_req(req, cqe->status, cqe->result) && 602 - !blk_mq_add_to_batch(req, iob, nvme_req(req)->status, 602 + !blk_mq_add_to_batch(req, iob, 603 + nvme_req(req)->status != NVME_SC_SUCCESS, 603 604 apple_nvme_complete_batch)) 604 605 apple_nvme_complete_rq(req); 605 606 }
+6 -6
drivers/nvme/host/core.c
··· 431 431 432 432 static inline void __nvme_end_req(struct request *req) 433 433 { 434 + if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) { 435 + if (blk_rq_is_passthrough(req)) 436 + nvme_log_err_passthru(req); 437 + else 438 + nvme_log_error(req); 439 + } 434 440 nvme_end_req_zoned(req); 435 441 nvme_trace_bio_complete(req); 436 442 if (req->cmd_flags & REQ_NVME_MPATH) ··· 447 441 { 448 442 blk_status_t status = nvme_error_status(nvme_req(req)->status); 449 443 450 - if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) { 451 - if (blk_rq_is_passthrough(req)) 452 - nvme_log_err_passthru(req); 453 - else 454 - nvme_log_error(req); 455 - } 456 444 __nvme_end_req(req); 457 445 blk_mq_end_request(req, status); 458 446 }
+8 -4
drivers/nvme/host/ioctl.c
··· 128 128 if (!nvme_ctrl_sgl_supported(ctrl)) 129 129 dev_warn_once(ctrl->device, "using unchecked data buffer\n"); 130 130 if (has_metadata) { 131 - if (!supports_metadata) 132 - return -EINVAL; 131 + if (!supports_metadata) { 132 + ret = -EINVAL; 133 + goto out; 134 + } 133 135 if (!nvme_ctrl_meta_sgl_supported(ctrl)) 134 136 dev_warn_once(ctrl->device, 135 137 "using unchecked metadata buffer\n"); ··· 141 139 struct iov_iter iter; 142 140 143 141 /* fixedbufs is only for non-vectored io */ 144 - if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) 145 - return -EINVAL; 142 + if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) { 143 + ret = -EINVAL; 144 + goto out; 145 + } 146 146 ret = io_uring_cmd_import_fixed(ubuffer, bufflen, 147 147 rq_data_dir(req), &iter, ioucmd); 148 148 if (ret < 0)
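Both early returns being fixed here leaked the already-allocated request; the fix routes them through the function's existing unwind label. The pattern, reduced to a sketch (the resource bookkeeping below is illustrative, not the nvme code; -22 stands for -EINVAL):

```c
#include <assert.h>
#include <stdbool.h>

static bool req_allocated;	/* stands in for the allocated blk-mq request */

static int leaked(void)
{
	return req_allocated;
}

/* Every failure after the allocation must exit through `out` so the
 * resource is released exactly once; a bare `return` would leak it. */
static int submit(bool bad_metadata)
{
	int ret = 0;

	req_allocated = true;		/* e.g. request allocation */
	if (bad_metadata) {
		ret = -22;		/* do NOT return directly */
		goto out;
	}
	/* ... happy path hands the request off here ... */
out:
	if (ret)
		req_allocated = false;	/* e.g. request free on error */
	return ret;
}
```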
+28 -11
drivers/nvme/host/pci.c
··· 1130 1130 1131 1131 trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail); 1132 1132 if (!nvme_try_complete_req(req, cqe->status, cqe->result) && 1133 - !blk_mq_add_to_batch(req, iob, nvme_req(req)->status, 1134 - nvme_pci_complete_batch)) 1133 + !blk_mq_add_to_batch(req, iob, 1134 + nvme_req(req)->status != NVME_SC_SUCCESS, 1135 + nvme_pci_complete_batch)) 1135 1136 nvme_pci_complete_rq(req); 1136 1137 } 1137 1138 ··· 1412 1411 struct nvme_dev *dev = nvmeq->dev; 1413 1412 struct request *abort_req; 1414 1413 struct nvme_command cmd = { }; 1414 + struct pci_dev *pdev = to_pci_dev(dev->dev); 1415 1415 u32 csts = readl(dev->bar + NVME_REG_CSTS); 1416 1416 u8 opcode; 1417 1417 1418 + /* 1419 + * Shutdown the device immediately if we see it is disconnected. This 1420 + * unblocks PCIe error handling if the nvme driver is waiting in 1421 + * error_resume for a device that has been removed. We can't unbind the 1422 + * driver while the driver's error callback is waiting to complete, so 1423 + * we're relying on a timeout to break that deadlock if a removal 1424 + * occurs while reset work is running. 1425 + */ 1426 + if (pci_dev_is_disconnected(pdev)) 1427 + nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING); 1418 1428 if (nvme_state_terminal(&dev->ctrl)) 1419 1429 goto disable; 1420 1430 ··· 1433 1421 * the recovery mechanism will surely fail. 1434 1422 */ 1435 1423 mb(); 1436 - if (pci_channel_offline(to_pci_dev(dev->dev))) 1424 + if (pci_channel_offline(pdev)) 1437 1425 return BLK_EH_RESET_TIMER; 1438 1426 1439 1427 /* ··· 1995 1983 return; 1996 1984 1997 1985 /* 1986 + * Controllers may support a CMB size larger than their BAR, for 1987 + * example, due to being behind a bridge. Reduce the CMB to the 1988 + * reported size of the BAR 1989 + */ 1990 + size = min(size, bar_size - offset); 1991 + 1992 + if (!IS_ALIGNED(size, memremap_compat_align()) || 1993 + !IS_ALIGNED(pci_resource_start(pdev, bar), 1994 + memremap_compat_align())) 1995 + return; 1996 + 1997 + /* 1998 1998 * Tell the controller about the host side address mapping the CMB, 1999 1999 * and enable CMB decoding for the NVMe 1.4+ scheme: 2000 2000 */ ··· 2016 1992 dev->bar + NVME_REG_CMBMSC); 2017 1993 } 2018 1994 2019 - /* 2020 - * Controllers may support a CMB size larger than their BAR, 2021 - * for example, due to being behind a bridge. Reduce the CMB to 2022 - * the reported size of the BAR 2023 - */ 2024 - if (size > bar_size - offset) 2025 - size = bar_size - offset; 2026 - 2027 1995 if (pci_p2pdma_add_resource(pdev, bar, size, offset)) { 2028 1996 dev_warn(dev->ctrl.device, 2029 1997 "failed to register the CMB\n"); 1998 + hi_lo_writeq(0, dev->bar + NVME_REG_CMBMSC); 2030 1999 return; 2031 2000 } 2032 2001
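The reordered CMB checks first clamp the region to what the BAR actually exposes, then require both the size and the BAR start to be aligned to memremap_compat_align() before registering it. The arithmetic, sketched in user space with a 2 MiB constant standing in for the compat alignment (the real value is architecture-dependent):

```c
#include <assert.h>
#include <stdint.h>

#define COMPAT_ALIGN (2ull << 20)	/* stand-in for memremap_compat_align() */

/* Clamp the advertised CMB size to what fits inside the BAR */
static uint64_t clamp_to_bar(uint64_t size, uint64_t bar_size, uint64_t offset)
{
	uint64_t avail = bar_size - offset;

	return size < avail ? size : avail;
}

/* Both the size and the BAR start must be compat-aligned */
static int cmb_usable(uint64_t size, uint64_t bar_start)
{
	return (size % COMPAT_ALIGN) == 0 && (bar_start % COMPAT_ALIGN) == 0;
}
```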
+36 -9
drivers/nvme/host/tcp.c
··· 217 217 return queue - queue->ctrl->queues; 218 218 } 219 219 220 + static inline bool nvme_tcp_recv_pdu_supported(enum nvme_tcp_pdu_type type) 221 + { 222 + switch (type) { 223 + case nvme_tcp_c2h_term: 224 + case nvme_tcp_c2h_data: 225 + case nvme_tcp_r2t: 226 + case nvme_tcp_rsp: 227 + return true; 228 + default: 229 + return false; 230 + } 231 + } 232 + 220 233 /* 221 234 * Check if the queue is TLS encrypted 222 235 */ ··· 788 775 [NVME_TCP_FES_PDU_SEQ_ERR] = "PDU Sequence Error", 789 776 [NVME_TCP_FES_HDR_DIGEST_ERR] = "Header Digest Error", 790 777 [NVME_TCP_FES_DATA_OUT_OF_RANGE] = "Data Transfer Out Of Range", 791 - [NVME_TCP_FES_R2T_LIMIT_EXCEEDED] = "R2T Limit Exceeded", 778 + [NVME_TCP_FES_DATA_LIMIT_EXCEEDED] = "Data Transfer Limit Exceeded", 792 779 [NVME_TCP_FES_UNSUPPORTED_PARAM] = "Unsupported Parameter", 793 780 }; 794 781 ··· 831 818 return 0; 832 819 833 820 hdr = queue->pdu; 821 + if (unlikely(hdr->hlen != sizeof(struct nvme_tcp_rsp_pdu))) { 822 + if (!nvme_tcp_recv_pdu_supported(hdr->type)) 823 + goto unsupported_pdu; 824 + 825 + dev_err(queue->ctrl->ctrl.device, 826 + "pdu type %d has unexpected header length (%d)\n", 827 + hdr->type, hdr->hlen); 828 + return -EPROTO; 829 + } 830 + 834 831 if (unlikely(hdr->type == nvme_tcp_c2h_term)) { 835 832 /* 836 833 * C2HTermReq never includes Header or Data digests. 
··· 873 850 nvme_tcp_init_recv_ctx(queue); 874 851 return nvme_tcp_handle_r2t(queue, (void *)queue->pdu); 875 852 default: 876 - dev_err(queue->ctrl->ctrl.device, 877 - "unsupported pdu type (%d)\n", hdr->type); 878 - return -EINVAL; 853 + goto unsupported_pdu; 879 854 } 855 + 856 + unsupported_pdu: 857 + dev_err(queue->ctrl->ctrl.device, 858 + "unsupported pdu type (%d)\n", hdr->type); 859 + return -EINVAL; 880 860 } 881 861 882 862 static inline void nvme_tcp_end_request(struct request *rq, u16 status) ··· 1521 1495 msg.msg_flags = MSG_WAITALL; 1522 1496 ret = kernel_recvmsg(queue->sock, &msg, &iov, 1, 1523 1497 iov.iov_len, msg.msg_flags); 1524 - if (ret < sizeof(*icresp)) { 1498 + if (ret >= 0 && ret < sizeof(*icresp)) 1499 + ret = -ECONNRESET; 1500 + if (ret < 0) { 1525 1501 pr_warn("queue %d: failed to receive icresp, error %d\n", 1526 1502 nvme_tcp_queue_id(queue), ret); 1527 - if (ret >= 0) 1528 - ret = -ECONNRESET; 1529 1503 goto free_icresp; 1530 1504 } 1531 1505 ret = -ENOTCONN; ··· 2725 2699 { 2726 2700 struct nvme_tcp_queue *queue = hctx->driver_data; 2727 2701 struct sock *sk = queue->sock->sk; 2702 + int ret; 2728 2703 2729 2704 if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags)) 2730 2705 return 0; ··· 2733 2706 set_bit(NVME_TCP_Q_POLLING, &queue->flags); 2734 2707 if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue)) 2735 2708 sk_busy_loop(sk, true); 2736 - nvme_tcp_try_recv(queue); 2709 + ret = nvme_tcp_try_recv(queue); 2737 2710 clear_bit(NVME_TCP_Q_POLLING, &queue->flags); 2738 - return queue->nr_cqe; 2711 + return ret < 0 ? ret : queue->nr_cqe; 2739 2712 } 2740 2713 2741 2714 static int nvme_tcp_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
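The header-length check is gated on an allowlist of controller-to-host PDU types, so an unknown type still yields the "unsupported pdu type" error instead of a misleading length complaint. The shape of that check (the enum values below are placeholders, not the on-wire NVMe/TCP opcodes):

```c
#include <assert.h>
#include <stdbool.h>

/* Placeholder values; the real codes come from the NVMe/TCP spec */
enum pdu_type { PDU_C2H_TERM, PDU_C2H_DATA, PDU_R2T, PDU_RSP, PDU_UNKNOWN };

/* Only these PDU types are valid for a host to receive */
static bool recv_pdu_supported(enum pdu_type t)
{
	switch (t) {
	case PDU_C2H_TERM:
	case PDU_C2H_DATA:
	case PDU_R2T:
	case PDU_RSP:
		return true;
	default:
		return false;
	}
}
```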
-1
drivers/nvme/target/nvmet.h
··· 647 647 struct nvmet_host *host); 648 648 void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type, 649 649 u8 event_info, u8 log_page); 650 - bool nvmet_subsys_nsid_exists(struct nvmet_subsys *subsys, u32 nsid); 651 650 652 651 #define NVMET_MIN_QUEUE_SIZE 16 653 652 #define NVMET_MAX_QUEUE_SIZE 1024
+14 -14
drivers/nvme/target/pci-epf.c
··· 1265 1265 struct nvmet_pci_epf_queue *cq = &ctrl->cq[cqid]; 1266 1266 u16 status; 1267 1267 1268 - if (test_and_set_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags)) 1268 + if (test_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags)) 1269 1269 return NVME_SC_QID_INVALID | NVME_STATUS_DNR; 1270 1270 1271 1271 if (!(flags & NVME_QUEUE_PHYS_CONTIG)) 1272 1272 return NVME_SC_INVALID_QUEUE | NVME_STATUS_DNR; 1273 - 1274 - if (flags & NVME_CQ_IRQ_ENABLED) 1275 - set_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags); 1276 1273 1277 1274 cq->pci_addr = pci_addr; 1278 1275 cq->qid = cqid; ··· 1287 1290 cq->qes = ctrl->io_cqes; 1288 1291 cq->pci_size = cq->qes * cq->depth; 1289 1292 1290 - cq->iv = nvmet_pci_epf_add_irq_vector(ctrl, vector); 1291 - if (!cq->iv) { 1292 - status = NVME_SC_INTERNAL | NVME_STATUS_DNR; 1293 - goto err; 1293 + if (flags & NVME_CQ_IRQ_ENABLED) { 1294 + cq->iv = nvmet_pci_epf_add_irq_vector(ctrl, vector); 1295 + if (!cq->iv) 1296 + return NVME_SC_INTERNAL | NVME_STATUS_DNR; 1297 + set_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags); 1294 1298 } 1295 1299 1296 1300 status = nvmet_cq_create(tctrl, &cq->nvme_cq, cqid, cq->depth); 1297 1301 if (status != NVME_SC_SUCCESS) 1298 1302 goto err; 1303 + 1304 + set_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags); 1299 1305 1300 1306 dev_dbg(ctrl->dev, "CQ[%u]: %u entries of %zu B, IRQ vector %u\n", 1301 1307 cqid, qsize, cq->qes, cq->vector); ··· 1306 1306 return NVME_SC_SUCCESS; 1307 1307 1308 1308 err: 1309 - clear_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags); 1310 - clear_bit(NVMET_PCI_EPF_Q_LIVE, &cq->flags); 1309 + if (test_and_clear_bit(NVMET_PCI_EPF_Q_IRQ_ENABLED, &cq->flags)) 1310 + nvmet_pci_epf_remove_irq_vector(ctrl, cq->vector); 1311 1311 return status; 1312 1312 } 1313 1313 ··· 1333 1333 struct nvmet_pci_epf_queue *sq = &ctrl->sq[sqid]; 1334 1334 u16 status; 1335 1335 1336 - if (test_and_set_bit(NVMET_PCI_EPF_Q_LIVE, &sq->flags)) 1336 + if (test_bit(NVMET_PCI_EPF_Q_LIVE, &sq->flags)) 1337 1337 return NVME_SC_QID_INVALID | NVME_STATUS_DNR; 1338 1338 1339 1339 if (!(flags & NVME_QUEUE_PHYS_CONTIG)) ··· 1355 1355 1356 1356 status = nvmet_sq_create(tctrl, &sq->nvme_sq, sqid, sq->depth); 1357 1357 if (status != NVME_SC_SUCCESS) 1358 - goto out_clear_bit; 1358 + return status; 1359 1359 1360 1360 sq->iod_wq = alloc_workqueue("sq%d_wq", WQ_UNBOUND, 1361 1361 min_t(int, sq->depth, WQ_MAX_ACTIVE), sqid); ··· 1365 1365 goto out_destroy_sq; 1366 1366 } 1367 1367 1368 + set_bit(NVMET_PCI_EPF_Q_LIVE, &sq->flags); 1369 + 1368 1370 dev_dbg(ctrl->dev, "SQ[%u]: %u entries of %zu B\n", 1369 1371 sqid, qsize, sq->qes); ··· 1374 1372 1375 1373 out_destroy_sq: 1376 1374 nvmet_sq_destroy(&sq->nvme_sq); 1377 - out_clear_bit: 1378 - clear_bit(NVMET_PCI_EPF_Q_LIVE, &sq->flags); 1379 1375 return status; 1380 1376 1381 1377
+11 -4
drivers/nvme/target/tcp.c
··· 571 571 struct nvmet_tcp_cmd *cmd = 572 572 container_of(req, struct nvmet_tcp_cmd, req); 573 573 struct nvmet_tcp_queue *queue = cmd->queue; 574 + enum nvmet_tcp_recv_state queue_state; 575 + struct nvmet_tcp_cmd *queue_cmd; 574 576 struct nvme_sgl_desc *sgl; 575 577 u32 len; 576 578 577 - if (unlikely(cmd == queue->cmd)) { 579 + /* Pairs with store_release in nvmet_prepare_receive_pdu() */ 580 + queue_state = smp_load_acquire(&queue->rcv_state); 581 + queue_cmd = READ_ONCE(queue->cmd); 582 + 583 + if (unlikely(cmd == queue_cmd)) { 578 584 sgl = &cmd->req.cmd->common.dptr.sgl; 579 585 len = le32_to_cpu(sgl->length); 580 586 ··· 589 583 * Avoid using helpers, this might happen before 590 584 * nvmet_req_init is completed. 591 585 */ 592 - if (queue->rcv_state == NVMET_TCP_RECV_PDU && 586 + if (queue_state == NVMET_TCP_RECV_PDU && 593 587 len && len <= cmd->req.port->inline_data_size && 594 588 nvme_is_write(cmd->req.cmd)) 595 589 return; ··· 853 847 { 854 848 queue->offset = 0; 855 849 queue->left = sizeof(struct nvme_tcp_hdr); 856 - queue->cmd = NULL; 857 - queue->rcv_state = NVMET_TCP_RECV_PDU; 850 + WRITE_ONCE(queue->cmd, NULL); 851 + /* Ensure rcv_state is visible only after queue->cmd is set */ 852 + smp_store_release(&queue->rcv_state, NVMET_TCP_RECV_PDU); 858 853 } 859 854 860 855 static void nvmet_tcp_free_crypto(struct nvmet_tcp_queue *queue)
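The smp_store_release()/smp_load_acquire() pairing introduced here guarantees that a reader which observes the new rcv_state also observes the queue->cmd written before it. The same publication idiom in portable C11 atomics (a sketch of the ordering pattern, not the nvmet code):

```c
#include <stdatomic.h>

static int published_cmd;	/* plain data, written before the flag */
static _Atomic int rcv_state;

/* Publisher: write the payload first, then release-store the state,
 * like WRITE_ONCE(queue->cmd, ...) followed by smp_store_release(). */
static void prepare_receive(int cmd)
{
	published_cmd = cmd;
	atomic_store_explicit(&rcv_state, 1, memory_order_release);
}

/* Reader: an acquire-load of the state makes the earlier payload
 * write visible; nothing is read before the state says it is ready. */
static int peek_cmd(void)
{
	if (atomic_load_explicit(&rcv_state, memory_order_acquire) != 1)
		return -1;	/* not published yet */
	return published_cmd;
}
```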
+2 -2
drivers/of/of_reserved_mem.c
··· 415 415
416 416     prop = of_get_flat_dt_prop(node, "alignment", &len);
417 417     if (prop) {
418 -         if (len != dt_root_size_cells * sizeof(__be32)) {
418 +         if (len != dt_root_addr_cells * sizeof(__be32)) {
419 419             pr_err("invalid alignment property in '%s' node.\n",
420 420                 uname);
421 421             return -EINVAL;
422 422         }
423 -         align = dt_mem_next_cell(dt_root_size_cells, &prop);
423 +         align = dt_mem_next_cell(dt_root_addr_cells, &prop);
424 424     }
425 425
426 426     nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
+1 -1
drivers/pinctrl/bcm/pinctrl-bcm281xx.c
··· 974 974     .reg_bits = 32,
975 975     .reg_stride = 4,
976 976     .val_bits = 32,
977 -     .max_register = BCM281XX_PIN_VC_CAM3_SDA,
977 +     .max_register = BCM281XX_PIN_VC_CAM3_SDA * 4,
978 978 };
979 979
980 980 static int bcm281xx_pinctrl_get_groups_count(struct pinctrl_dev *pctldev)
+3
drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
··· 2374 2374     pctrl->gpio_bank[id].gc.parent = dev;
2375 2375     pctrl->gpio_bank[id].gc.fwnode = child;
2376 2376     pctrl->gpio_bank[id].gc.label = devm_kasprintf(dev, GFP_KERNEL, "%pfw", child);
2377 +     if (pctrl->gpio_bank[id].gc.label == NULL)
2378 +         return -ENOMEM;
2379 +
2377 2380     pctrl->gpio_bank[id].gc.dbg_show = npcmgpio_dbg_show;
2378 2381     pctrl->gpio_bank[id].direction_input = pctrl->gpio_bank[id].gc.direction_input;
2379 2382     pctrl->gpio_bank[id].gc.direction_input = npcmgpio_direction_input;
+2 -1
drivers/pinctrl/spacemit/Kconfig
··· 4 4 #
5 5
6 6 config PINCTRL_SPACEMIT_K1
7 -     tristate "SpacemiT K1 SoC Pinctrl driver"
7 +     bool "SpacemiT K1 SoC Pinctrl driver"
8 8     depends on ARCH_SPACEMIT || COMPILE_TEST
9 9     depends on OF
10 +     default y
10 11     select GENERIC_PINCTRL_GROUPS
11 12     select GENERIC_PINMUX_FUNCTIONS
12 13     select GENERIC_PINCONF
+1 -1
drivers/pinctrl/spacemit/pinctrl-k1.c
··· 1044 1044         .of_match_table = k1_pinctrl_ids,
1045 1045     },
1046 1046 };
1047 - module_platform_driver(k1_pinctrl_driver);
1047 + builtin_platform_driver(k1_pinctrl_driver);
1048 1048
1049 1049 MODULE_AUTHOR("Yixun Lan <dlan@gentoo.org>");
1050 1050 MODULE_DESCRIPTION("Pinctrl driver for the SpacemiT K1 SoC");
+4 -1
drivers/platform/surface/surface_aggregator_registry.c
··· 371 371     NULL,
372 372 };
373 373
374 - /* Devices for Surface Pro 9 (Intel/x86) and 10 */
374 + /* Devices for Surface Pro 9, 10 and 11 (Intel/x86) */
375 375 static const struct software_node *ssam_node_group_sp9[] = {
376 376     &ssam_node_root,
377 377     &ssam_node_hub_kip,
··· 429 429
430 430     /* Surface Pro 10 */
431 431     { "MSHW0510", (unsigned long)ssam_node_group_sp9 },
432 +
433 +     /* Surface Pro 11 */
434 +     { "MSHW0583", (unsigned long)ssam_node_group_sp9 },
432 435
433 436     /* Surface Book 2 */
434 437     { "MSHW0107", (unsigned long)ssam_node_group_gen5 },
+2
drivers/platform/x86/amd/pmf/core.c
··· 452 452
453 453     mutex_init(&dev->lock);
454 454     mutex_init(&dev->update_mutex);
455 +     mutex_init(&dev->cb_mutex);
455 456
456 457     apmf_acpi_init(dev);
457 458     platform_set_drvdata(pdev, dev);
··· 478 477     amd_pmf_dbgfs_unregister(dev);
479 478     mutex_destroy(&dev->lock);
480 479     mutex_destroy(&dev->update_mutex);
480 +     mutex_destroy(&dev->cb_mutex);
481 481     kfree(dev->buf);
482 482 }
+4 -1
drivers/platform/x86/amd/pmf/pmf.h
··· 106 106 #define PMF_TA_IF_VERSION_MAJOR 1
107 107 #define TA_PMF_ACTION_MAX 32
108 108 #define TA_PMF_UNDO_MAX 8
109 - #define TA_OUTPUT_RESERVED_MEM 906
109 + #define TA_OUTPUT_RESERVED_MEM 922
110 110 #define MAX_OPERATION_PARAMS 4
111 +
112 + #define TA_ERROR_CRYPTO_INVALID_PARAM 0x20002
113 + #define TA_ERROR_CRYPTO_BIN_TOO_LARGE 0x2000d
111 114
112 115 #define PMF_IF_V1 1
113 116 #define PMF_IF_V2 2
+2
drivers/platform/x86/amd/pmf/spc.c
··· 219 219
220 220     switch (dev->current_profile) {
221 221     case PLATFORM_PROFILE_PERFORMANCE:
222 +     case PLATFORM_PROFILE_BALANCED_PERFORMANCE:
222 223         val = TA_BEST_PERFORMANCE;
223 224         break;
224 225     case PLATFORM_PROFILE_BALANCED:
225 226         val = TA_BETTER_PERFORMANCE;
226 227         break;
227 228     case PLATFORM_PROFILE_LOW_POWER:
229 +     case PLATFORM_PROFILE_QUIET:
228 230         val = TA_BEST_BATTERY;
229 231         break;
230 232     default:
+11
drivers/platform/x86/amd/pmf/sps.c
··· 297 297
298 298     switch (pmf->current_profile) {
299 299     case PLATFORM_PROFILE_PERFORMANCE:
300 +     case PLATFORM_PROFILE_BALANCED_PERFORMANCE:
300 301         mode = POWER_MODE_PERFORMANCE;
301 302         break;
302 303     case PLATFORM_PROFILE_BALANCED:
303 304         mode = POWER_MODE_BALANCED_POWER;
304 305         break;
305 306     case PLATFORM_PROFILE_LOW_POWER:
307 +     case PLATFORM_PROFILE_QUIET:
306 308         mode = POWER_MODE_POWER_SAVER;
307 309         break;
308 310     default:
··· 389 387     return 0;
390 388 }
391 389
390 + static int amd_pmf_hidden_choices(void *drvdata, unsigned long *choices)
391 + {
392 +     set_bit(PLATFORM_PROFILE_QUIET, choices);
393 +     set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, choices);
394 +
395 +     return 0;
396 + }
397 +
392 398 static int amd_pmf_profile_probe(void *drvdata, unsigned long *choices)
393 399 {
394 400     set_bit(PLATFORM_PROFILE_LOW_POWER, choices);
··· 408 398
409 399 static const struct platform_profile_ops amd_pmf_profile_ops = {
410 400     .probe = amd_pmf_profile_probe,
401 +     .hidden_choices = amd_pmf_hidden_choices,
411 402     .profile_get = amd_pmf_profile_get,
412 403     .profile_set = amd_pmf_profile_set,
413 404 };
+59 -23
drivers/platform/x86/amd/pmf/tee-if.c
··· 27 27 MODULE_PARM_DESC(pb_side_load, "Sideload policy binaries debug policy failures");
28 28 #endif
29 29
30 - static const uuid_t amd_pmf_ta_uuid = UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d,
31 -                         0xb1, 0x2d, 0xc5, 0x29, 0xb1, 0x3d, 0x85, 0x43);
30 + static const uuid_t amd_pmf_ta_uuid[] = { UUID_INIT(0xd9b39bf2, 0x66bd, 0x4154, 0xaf, 0xb8, 0x8a,
31 +                             0xcc, 0x2b, 0x2b, 0x60, 0xd6),
32 +                       UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d, 0xb1, 0x2d, 0xc5,
33 +                             0x29, 0xb1, 0x3d, 0x85, 0x43),
34 +                     };
32 35
33 36 static const char *amd_pmf_uevent_as_str(unsigned int state)
34 37 {
··· 324 321      */
325 322         schedule_delayed_work(&dev->pb_work, msecs_to_jiffies(pb_actions_ms * 3));
326 323     } else {
327 -         dev_err(dev->dev, "ta invoke cmd init failed err: %x\n", res);
324 +         dev_dbg(dev->dev, "ta invoke cmd init failed err: %x\n", res);
328 325         dev->smart_pc_enabled = false;
329 -         return -EIO;
326 +         return res;
330 327
331 328     return 0;
··· 393 390     return ver->impl_id == TEE_IMPL_ID_AMDTEE;
394 391 }
395 392
396 - static int amd_pmf_ta_open_session(struct tee_context *ctx, u32 *id)
393 + static int amd_pmf_ta_open_session(struct tee_context *ctx, u32 *id, const uuid_t *uuid)
397 394 {
398 395     struct tee_ioctl_open_session_arg sess_arg = {};
399 396     int rc;
400 397
401 -     export_uuid(sess_arg.uuid, &amd_pmf_ta_uuid);
398 +     export_uuid(sess_arg.uuid, uuid);
402 399     sess_arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
403 400     sess_arg.num_params = 0;
404 401
··· 437 434     return 0;
438 435 }
439 436
440 - static int amd_pmf_tee_init(struct amd_pmf_dev *dev)
437 + static int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid)
441 438 {
442 439     u32 size;
443 440     int ret;
··· 448 445         return PTR_ERR(dev->tee_ctx);
449 446     }
450 447
451 -     ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id);
448 +     ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id, uuid);
452 449     if (ret) {
453 450         dev_err(dev->dev, "Failed to open TA session (%d)\n", ret);
454 451         ret = -EINVAL;
··· 492 489
493 490 int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
494 491 {
495 -     int ret;
492 +     bool status;
493 +     int ret, i;
496 494
497 495     ret = apmf_check_smart_pc(dev);
498 496     if (ret) {
··· 506 502         return -ENODEV;
507 503     }
508 504
509 -     ret = amd_pmf_tee_init(dev);
510 -     if (ret)
511 -         return ret;
512 -
513 505     INIT_DELAYED_WORK(&dev->pb_work, amd_pmf_invoke_cmd);
514 506
515 507     ret = amd_pmf_set_dram_addr(dev, true);
516 508     if (ret)
517 -         goto error;
509 +         goto err_cancel_work;
518 510
519 511     dev->policy_base = devm_ioremap_resource(dev->dev, dev->res);
520 512     if (IS_ERR(dev->policy_base)) {
521 513         ret = PTR_ERR(dev->policy_base);
522 -         goto error;
514 +         goto err_free_dram_buf;
523 515     }
524 516
525 517     dev->policy_buf = kzalloc(dev->policy_sz, GFP_KERNEL);
526 518     if (!dev->policy_buf) {
527 519         ret = -ENOMEM;
528 -         goto error;
520 +         goto err_free_dram_buf;
529 521     }
530 522
531 523     memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz);
··· 531 531     dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL);
532 532     if (!dev->prev_data) {
533 533         ret = -ENOMEM;
534 -         goto error;
534 +         goto err_free_policy;
535 535     }
536 536
537 -     ret = amd_pmf_start_policy_engine(dev);
538 -     if (ret)
539 -         goto error;
537 +     for (i = 0; i < ARRAY_SIZE(amd_pmf_ta_uuid); i++) {
538 +         ret = amd_pmf_tee_init(dev, &amd_pmf_ta_uuid[i]);
539 +         if (ret)
540 +             goto err_free_prev_data;
541 +
542 +         ret = amd_pmf_start_policy_engine(dev);
543 +         switch (ret) {
544 +         case TA_PMF_TYPE_SUCCESS:
545 +             status = true;
546 +             break;
547 +         case TA_ERROR_CRYPTO_INVALID_PARAM:
548 +         case TA_ERROR_CRYPTO_BIN_TOO_LARGE:
549 +             amd_pmf_tee_deinit(dev);
550 +             status = false;
551 +             break;
552 +         default:
553 +             ret = -EINVAL;
554 +             amd_pmf_tee_deinit(dev);
555 +             goto err_free_prev_data;
556 +         }
557 +
558 +         if (status)
559 +             break;
560 +     }
561 +
562 +     if (!status && !pb_side_load) {
563 +         ret = -EINVAL;
564 +         goto err_free_prev_data;
565 +     }
540 566
541 567     if (pb_side_load)
542 568         amd_pmf_open_pb(dev, dev->dbgfs_dir);
543 569
544 570     ret = amd_pmf_register_input_device(dev);
545 571     if (ret)
546 -         goto error;
572 +         goto err_pmf_remove_pb;
547 573
548 574     return 0;
549 575
550 - error:
551 -     amd_pmf_deinit_smart_pc(dev);
576 + err_pmf_remove_pb:
577 +     if (pb_side_load && dev->esbin)
578 +         amd_pmf_remove_pb(dev);
579 +     amd_pmf_tee_deinit(dev);
580 + err_free_prev_data:
581 +     kfree(dev->prev_data);
582 + err_free_policy:
583 +     kfree(dev->policy_buf);
584 + err_free_dram_buf:
585 +     kfree(dev->buf);
586 + err_cancel_work:
587 +     cancel_delayed_work_sync(&dev->pb_work);
552 588
553 589     return ret;
554 590 }
+7
drivers/platform/x86/intel/hid.c
··· 139 139         DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
140 140         },
141 141     },
142 +     {
143 +         .ident = "Microsoft Surface Go 4",
144 +         .matches = {
145 +             DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
146 +             DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 4"),
147 +         },
148 +     },
142 149     { }
143 150 };
144 151
+7
drivers/platform/x86/intel/vsec.c
··· 404 404     .caps = VSEC_CAP_TELEMETRY | VSEC_CAP_SDSI | VSEC_CAP_TPMI,
405 405 };
406 406
407 + /* DMR OOBMSM info */
408 + static const struct intel_vsec_platform_info dmr_oobmsm_info = {
409 +     .caps = VSEC_CAP_TELEMETRY | VSEC_CAP_TPMI,
410 + };
411 +
407 412 /* TGL info */
408 413 static const struct intel_vsec_platform_info tgl_info = {
409 414     .caps = VSEC_CAP_TELEMETRY,
··· 425 420 #define PCI_DEVICE_ID_INTEL_VSEC_MTL_M 0x7d0d
426 421 #define PCI_DEVICE_ID_INTEL_VSEC_MTL_S 0xad0d
427 422 #define PCI_DEVICE_ID_INTEL_VSEC_OOBMSM 0x09a7
423 + #define PCI_DEVICE_ID_INTEL_VSEC_OOBMSM_DMR 0x09a1
428 424 #define PCI_DEVICE_ID_INTEL_VSEC_RPL 0xa77d
429 425 #define PCI_DEVICE_ID_INTEL_VSEC_TGL 0x9a0d
430 426 #define PCI_DEVICE_ID_INTEL_VSEC_LNL_M 0x647d
··· 436 430     { PCI_DEVICE_DATA(INTEL, VSEC_MTL_M, &mtl_info) },
437 431     { PCI_DEVICE_DATA(INTEL, VSEC_MTL_S, &mtl_info) },
438 432     { PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM, &oobmsm_info) },
433 +     { PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM_DMR, &dmr_oobmsm_info) },
439 434     { PCI_DEVICE_DATA(INTEL, VSEC_RPL, &tgl_info) },
440 435     { PCI_DEVICE_DATA(INTEL, VSEC_TGL, &tgl_info) },
441 436     { PCI_DEVICE_DATA(INTEL, VSEC_LNL_M, &lnl_info) },
+1
drivers/platform/x86/thinkpad_acpi.c
··· 9972 9972      * Individual addressing is broken on models that expose the
9973 9973      * primary battery as BAT1.
9974 9974      */
9975 +     TPACPI_Q_LNV('G', '8', true),  /* ThinkPad X131e */
9975 9976     TPACPI_Q_LNV('8', 'F', true),  /* Thinkpad X120e */
9976 9977     TPACPI_Q_LNV('J', '7', true),  /* B5400 */
9977 9978     TPACPI_Q_LNV('J', 'I', true),  /* Thinkpad 11e */
+2 -1
drivers/rapidio/devices/rio_mport_cdev.c
··· 1742 1742         err = rio_add_net(net);
1743 1743         if (err) {
1744 1744             rmcd_debug(RDEV, "failed to register net, err=%d", err);
1745 -             kfree(net);
1745 +             put_device(&net->dev);
1746 +             mport->net = NULL;
1746 1747             goto cleanup;
1747 1748         }
1748 1749     }
+4 -1
drivers/rapidio/rio-scan.c
··· 871 871         dev_set_name(&net->dev, "rnet_%d", net->id);
872 872         net->dev.parent = &mport->dev;
873 873         net->dev.release = rio_scan_release_dev;
874 -         rio_add_net(net);
874 +         if (rio_add_net(net)) {
875 +             put_device(&net->dev);
876 +             net = NULL;
877 +         }
875 878     }
876 879
877 880     return net;
+3 -2
drivers/slimbus/messaging.c
··· 148 148     }
149 149
150 150     ret = ctrl->xfer_msg(ctrl, txn);
151 -
152 -     if (!ret && need_tid && !txn->msg->comp) {
151 +     if (ret == -ETIMEDOUT) {
152 +         slim_free_txn_tid(ctrl, txn);
153 +     } else if (!ret && need_tid && !txn->msg->comp) {
153 154         unsigned long ms = txn->rl + HZ;
154 155
155 156         time_left = wait_for_completion_timeout(txn->comp,
+1 -4
drivers/spi/atmel-quadspi.c
··· 930 930
931 931     /* Release the chip-select. */
932 932     ret = atmel_qspi_reg_sync(aq);
933 -     if (ret) {
934 -         pm_runtime_mark_last_busy(&aq->pdev->dev);
935 -         pm_runtime_put_autosuspend(&aq->pdev->dev);
933 +     if (ret)
936 934         return ret;
937 -     }
938 935     atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);
939 936
940 937     return atmel_qspi_wait_for_completion(aq, QSPI_SR_CSRA);
+18 -23
drivers/spi/spi-microchip-core.c
··· 70 70 #define INT_RX_CHANNEL_OVERFLOW BIT(2)
71 71 #define INT_TX_CHANNEL_UNDERRUN BIT(3)
72 72
73 - #define INT_ENABLE_MASK (CONTROL_RX_DATA_INT | CONTROL_TX_DATA_INT | \
74 -             CONTROL_RX_OVER_INT | CONTROL_TX_UNDER_INT)
73 + #define INT_ENABLE_MASK (CONTROL_RX_OVER_INT | CONTROL_TX_UNDER_INT)
75 74
76 75 #define REG_CONTROL (0x00)
77 76 #define REG_FRAME_SIZE (0x04)
··· 132 133     mchp_corespi_write(spi, REG_CONTROL, control);
133 134 }
134 135
135 - static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi)
136 + static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi, int fifo_max)
136 137 {
137 -     while (spi->rx_len >= spi->n_bytes && !(mchp_corespi_read(spi, REG_STATUS) & STATUS_RXFIFO_EMPTY)) {
138 -         u32 data = mchp_corespi_read(spi, REG_RX_DATA);
138 +     for (int i = 0; i < fifo_max; i++) {
139 +         u32 data;
140 +
141 +         while (mchp_corespi_read(spi, REG_STATUS) & STATUS_RXFIFO_EMPTY)
142 +             ;
143 +
144 +         data = mchp_corespi_read(spi, REG_RX_DATA);
139 145
140 146         spi->rx_len -= spi->n_bytes;
141 147
··· 215 211     mchp_corespi_write(spi, REG_FRAMESUP, len);
216 212 }
217 213
218 - static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi)
214 + static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi, int fifo_max)
219 215 {
220 -     int fifo_max, i = 0;
216 +     int i = 0;
221 217
222 -     fifo_max = DIV_ROUND_UP(min(spi->tx_len, FIFO_DEPTH), spi->n_bytes);
223 218     mchp_corespi_set_xfer_size(spi, fifo_max);
224 219
225 220     while ((i < fifo_max) && !(mchp_corespi_read(spi, REG_STATUS) & STATUS_TXFIFO_FULL)) {
··· 416 413     if (intfield == 0)
417 414         return IRQ_NONE;
418 415
419 -     if (intfield & INT_TXDONE)
420 -         mchp_corespi_write(spi, REG_INT_CLEAR, INT_TXDONE);
421 -
422 -     if (intfield & INT_RXRDY) {
423 -         mchp_corespi_write(spi, REG_INT_CLEAR, INT_RXRDY);
424 -
425 -         if (spi->rx_len)
426 -             mchp_corespi_read_fifo(spi);
427 -     }
428 -
429 -     if (!spi->rx_len && !spi->tx_len)
430 -         finalise = true;
431 -
432 416     if (intfield & INT_RX_CHANNEL_OVERFLOW) {
433 417         mchp_corespi_write(spi, REG_INT_CLEAR, INT_RX_CHANNEL_OVERFLOW);
434 418         finalise = true;
··· 502 512
503 513     mchp_corespi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select);
504 514
505 -     while (spi->tx_len)
506 -         mchp_corespi_write_fifo(spi);
515 +     while (spi->tx_len) {
516 +         int fifo_max = DIV_ROUND_UP(min(spi->tx_len, FIFO_DEPTH), spi->n_bytes);
507 517
518 +         mchp_corespi_write_fifo(spi, fifo_max);
519 +         mchp_corespi_read_fifo(spi, fifo_max);
520 +     }
521 +
522 +     spi_finalize_current_transfer(host);
508 523     return 1;
509 524 }
+8 -3
drivers/thunderbolt/tunnel.c
··· 1009 1009      */
1010 1010     tb_tunnel_get(tunnel);
1011 1011
1012 +     tunnel->dprx_started = true;
1013 +
1012 1014     if (tunnel->callback) {
1013 1015         tunnel->dprx_timeout = dprx_timeout_to_ktime(dprx_timeout);
1014 1016         queue_delayed_work(tunnel->tb->wq, &tunnel->dprx_work, 0);
··· 1023 1021
1024 1022 static void tb_dp_dprx_stop(struct tb_tunnel *tunnel)
1025 1023 {
1026 -     tunnel->dprx_canceled = true;
1027 -     cancel_delayed_work(&tunnel->dprx_work);
1028 -     tb_tunnel_put(tunnel);
1024 +     if (tunnel->dprx_started) {
1025 +         tunnel->dprx_started = false;
1026 +         tunnel->dprx_canceled = true;
1027 +         cancel_delayed_work(&tunnel->dprx_work);
1028 +         tb_tunnel_put(tunnel);
1029 +     }
1029 1030 }
1030 1031
1031 1032 static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
+2
drivers/thunderbolt/tunnel.h
··· 63 63  * @allocated_down: Allocated downstream bandwidth (only for USB3)
64 64  * @bw_mode: DP bandwidth allocation mode registers can be used to
65 65  *       determine consumed and allocated bandwidth
66 +  * @dprx_started: DPRX negotiation was started (tb_dp_dprx_start() was called for it)
66 67  * @dprx_canceled: Was DPRX capabilities read poll canceled
67 68  * @dprx_timeout: If set DPRX capabilities read poll work will timeout after this passes
68 69  * @dprx_work: Worker that is scheduled to poll completion of DPRX capabilities read
··· 101 100     int allocated_up;
102 101     int allocated_down;
103 102     bool bw_mode;
103 +     bool dprx_started;
104 104     bool dprx_canceled;
105 105     ktime_t dprx_timeout;
106 106     struct delayed_work dprx_work;
+7 -6
drivers/usb/atm/cxacru.c
··· 1131 1131     struct cxacru_data *instance;
1132 1132     struct usb_device *usb_dev = interface_to_usbdev(intf);
1133 1133     struct usb_host_endpoint *cmd_ep = usb_dev->ep_in[CXACRU_EP_CMD];
1134 -     struct usb_endpoint_descriptor *in, *out;
1134 +     static const u8 ep_addrs[] = {
1135 +         CXACRU_EP_CMD + USB_DIR_IN,
1136 +         CXACRU_EP_CMD + USB_DIR_OUT,
1137 +         0};
1135 1138     int ret;
1136 1139
1137 1140     /* instance init */
··· 1182 1179     }
1183 1180
1184 1181     if (usb_endpoint_xfer_int(&cmd_ep->desc))
1185 -         ret = usb_find_common_endpoints(intf->cur_altsetting,
1186 -                         NULL, NULL, &in, &out);
1182 +         ret = usb_check_int_endpoints(intf, ep_addrs);
1187 1183     else
1188 -         ret = usb_find_common_endpoints(intf->cur_altsetting,
1189 -                         &in, &out, NULL, NULL);
1184 +         ret = usb_check_bulk_endpoints(intf, ep_addrs);
1190 1185
1191 -     if (ret) {
1186 +     if (!ret) {
1192 1187         usb_err(usbatm_instance, "cxacru_bind: interface has incorrect endpoints\n");
1193 1188         ret = -ENODEV;
1194 1189         goto fail;
+33
drivers/usb/core/hub.c
··· 6066 6066 } /* usb_hub_cleanup() */
6067 6067
6068 6068 /**
6069 +  * hub_hc_release_resources - clear resources used by host controller
6070 +  * @udev: pointer to device being released
6071 +  *
6072 +  * Context: task context, might sleep
6073 +  *
6074 +  * Function releases the host controller resources in correct order before
6075 +  * making any operation on resuming usb device. The host controller resources
6076 +  * allocated for devices in tree should be released starting from the last
6077 +  * usb device in tree toward the root hub. This function is used only during
6078 +  * resuming device when usb device require reinitialization – that is, when
6079 +  * flag udev->reset_resume is set.
6080 +  *
6081 +  * This call is synchronous, and may not be used in an interrupt context.
6082 +  */
6083 + static void hub_hc_release_resources(struct usb_device *udev)
6084 + {
6085 +     struct usb_hub *hub = usb_hub_to_struct_hub(udev);
6086 +     struct usb_hcd *hcd = bus_to_hcd(udev->bus);
6087 +     int i;
6088 +
6089 +     /* Release up resources for all children before this device */
6090 +     for (i = 0; i < udev->maxchild; i++)
6091 +         if (hub->ports[i]->child)
6092 +             hub_hc_release_resources(hub->ports[i]->child);
6093 +
6094 +     if (hcd->driver->reset_device)
6095 +         hcd->driver->reset_device(hcd, udev);
6096 + }
6097 +
6098 + /**
6069 6099  * usb_reset_and_verify_device - perform a USB port reset to reinitialize a device
6070 6100  * @udev: device to reset (not in SUSPENDED or NOTATTACHED state)
6071 6101  *
··· 6158 6128
6159 6129     bos = udev->bos;
6160 6130     udev->bos = NULL;
6131 +
6132 +     if (udev->reset_resume)
6133 +         hub_hc_release_resources(udev);
6161 6134
6162 6135     mutex_lock(hcd->address0_mutex);
6163 6136
+4
drivers/usb/core/quirks.c
··· 341 341     { USB_DEVICE(0x0638, 0x0a13), .driver_info =
342 342       USB_QUIRK_STRING_FETCH_255 },
343 343
344 +     /* Prolific Single-LUN Mass Storage Card Reader */
345 +     { USB_DEVICE(0x067b, 0x2731), .driver_info = USB_QUIRK_DELAY_INIT |
346 +       USB_QUIRK_NO_LPM },
347 +
344 348     /* Saitek Cyborg Gold Joystick */
345 349     { USB_DEVICE(0x06a3, 0x0006), .driver_info =
346 350       USB_QUIRK_CONFIG_INTF_STRINGS },
+48 -37
drivers/usb/dwc3/core.c
··· 131 131     }
132 132 }
133 133
134 - void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)
134 + void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode, bool ignore_susphy)
135 135 {
136 +     unsigned int hw_mode;
136 137     u32 reg;
137 138
138 139     reg = dwc3_readl(dwc->regs, DWC3_GCTL);
140 +
141 +     /*
142 +      * For DRD controllers, GUSB3PIPECTL.SUSPENDENABLE and
143 +      * GUSB2PHYCFG.SUSPHY should be cleared during mode switching,
144 +      * and they can be set after core initialization.
145 +      */
146 +     hw_mode = DWC3_GHWPARAMS0_MODE(dwc->hwparams.hwparams0);
147 +     if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD && !ignore_susphy) {
148 +         if (DWC3_GCTL_PRTCAP(reg) != mode)
149 +             dwc3_enable_susphy(dwc, false);
150 +     }
151 +
139 152     reg &= ~(DWC3_GCTL_PRTCAPDIR(DWC3_GCTL_PRTCAP_OTG));
140 153     reg |= DWC3_GCTL_PRTCAPDIR(mode);
141 154     dwc3_writel(dwc->regs, DWC3_GCTL, reg);
··· 229 216
230 217     spin_lock_irqsave(&dwc->lock, flags);
231 218
232 -     dwc3_set_prtcap(dwc, desired_dr_role);
219 +     dwc3_set_prtcap(dwc, desired_dr_role, false);
233 220
234 221     spin_unlock_irqrestore(&dwc->lock, flags);
235 222
··· 671 658      */
672 659     reg &= ~DWC3_GUSB3PIPECTL_UX_EXIT_PX;
673 660
674 -     /*
675 -      * Above DWC_usb3.0 1.94a, it is recommended to set
676 -      * DWC3_GUSB3PIPECTL_SUSPHY to '0' during coreConsultant configuration.
677 -      * So default value will be '0' when the core is reset. Application
678 -      * needs to set it to '1' after the core initialization is completed.
679 -      *
680 -      * Similarly for DRD controllers, GUSB3PIPECTL.SUSPENDENABLE must be
681 -      * cleared after power-on reset, and it can be set after core
682 -      * initialization.
683 -      */
661 +     /* Ensure the GUSB3PIPECTL.SUSPENDENABLE is cleared prior to phy init. */
684 662     reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;
685 663
686 664     if (dwc->u2ss_inp3_quirk)
··· 751 747         break;
752 748     }
753 749
754 -     /*
755 -      * Above DWC_usb3.0 1.94a, it is recommended to set
756 -      * DWC3_GUSB2PHYCFG_SUSPHY to '0' during coreConsultant configuration.
757 -      * So default value will be '0' when the core is reset. Application
758 -      * needs to set it to '1' after the core initialization is completed.
759 -      *
760 -      * Similarly for DRD controllers, GUSB2PHYCFG.SUSPHY must be cleared
761 -      * after power-on reset, and it can be set after core initialization.
762 -      */
750 +     /* Ensure the GUSB2PHYCFG.SUSPHY is cleared prior to phy init. */
763 751     reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
764 752
765 753     if (dwc->dis_enblslpm_quirk)
··· 825 829         if (ret < 0)
826 830             goto err_exit_usb3_phy;
827 831     }
832 +
833 +     /*
834 +      * Above DWC_usb3.0 1.94a, it is recommended to set
835 +      * DWC3_GUSB3PIPECTL_SUSPHY and DWC3_GUSB2PHYCFG_SUSPHY to '0' during
836 +      * coreConsultant configuration. So default value will be '0' when the
837 +      * core is reset. Application needs to set it to '1' after the core
838 +      * initialization is completed.
839 +      *
840 +      * Certain phy requires to be in P0 power state during initialization.
841 +      * Make sure GUSB3PIPECTL.SUSPENDENABLE and GUSB2PHYCFG.SUSPHY are clear
842 +      * prior to phy init to maintain in the P0 state.
843 +      *
844 +      * After phy initialization, some phy operations can only be executed
845 +      * while in lower P states. Ensure GUSB3PIPECTL.SUSPENDENABLE and
846 +      * GUSB2PHYCFG.SUSPHY are set soon after initialization to avoid
847 +      * blocking phy ops.
848 +      */
849 +     if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A))
850 +         dwc3_enable_susphy(dwc, true);
828 851
829 852     return 0;
830 853
··· 1603 1588
1604 1589     switch (dwc->dr_mode) {
1605 1590     case USB_DR_MODE_PERIPHERAL:
1606 -         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
1591 +         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, false);
1607 1592
1608 1593         if (dwc->usb2_phy)
1609 1594             otg_set_vbus(dwc->usb2_phy->otg, false);
··· 1615 1600             return dev_err_probe(dev, ret, "failed to initialize gadget\n");
1616 1601         break;
1617 1602     case USB_DR_MODE_HOST:
1618 -         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST);
1603 +         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST, false);
1619 1604
1620 1605         if (dwc->usb2_phy)
1621 1606             otg_set_vbus(dwc->usb2_phy->otg, true);
··· 1660 1645     }
1661 1646
1662 1647     /* de-assert DRVVBUS for HOST and OTG mode */
1663 -     dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
1648 +     dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, true);
1664 1649 }
1665 1650
1666 1651 static void dwc3_get_software_properties(struct dwc3 *dwc)
··· 1850 1835     dwc->tx_thr_num_pkt_prd = tx_thr_num_pkt_prd;
1851 1836     dwc->tx_max_burst_prd = tx_max_burst_prd;
1852 1837
1853 -     dwc->imod_interval = 0;
1854 -
1855 1838     dwc->tx_fifo_resize_max_num = tx_fifo_resize_max_num;
1856 1839 }
1857 1840
··· 1867 1854     unsigned int hwparam_gen =
1868 1855         DWC3_GHWPARAMS3_SSPHY_IFC(dwc->hwparams.hwparams3);
1869 1856
1870 -     /* Check for proper value of imod_interval */
1871 -     if (dwc->imod_interval && !dwc3_has_imod(dwc)) {
1872 -         dev_warn(dwc->dev, "Interrupt moderation not supported\n");
1873 -         dwc->imod_interval = 0;
1874 -     }
1875 -
1876 1857     /*
1858 +      * Enable IMOD for all supporting controllers.
1859 +      *
1860 +      * Particularly, DWC_usb3 v3.00a must enable this feature for
1861 +      * the following reason:
1862 +      *
1877 1863      * Workaround for STAR 9000961433 which affects only version
1878 1864      * 3.00a of the DWC_usb3 core. This prevents the controller
1879 1865      * interrupt from being masked while handling events. IMOD
1880 1866      * allows us to work around this issue. Enable it for the
1881 1867      * affected version.
1882 1868      */
1883 -     if (!dwc->imod_interval &&
1884 -         DWC3_VER_IS(DWC3, 300A))
1869 +     if (dwc3_has_imod((dwc)))
1885 1870         dwc->imod_interval = 1;
1886 1871
1887 1872     /* Check the maximum_speed parameter */
··· 2468 2457         if (ret)
2469 2458             return ret;
2470 2459
2471 -         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
2460 +         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, true);
2472 2461         dwc3_gadget_resume(dwc);
2473 2462         break;
2474 2463     case DWC3_GCTL_PRTCAP_HOST:
··· 2476 2465         ret = dwc3_core_init_for_resume(dwc);
2477 2466         if (ret)
2478 2467             return ret;
2479 -         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST);
2468 +         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST, true);
2480 2469         break;
2481 2470     }
2482 2471     /* Restore GUSB2PHYCFG bits that were modified in suspend */
··· 2505 2494     if (ret)
2506 2495         return ret;
2507 2496
2508 -     dwc3_set_prtcap(dwc, dwc->current_dr_role);
2497 +     dwc3_set_prtcap(dwc, dwc->current_dr_role, true);
2509 2498
2510 2499     dwc3_otg_init(dwc);
2511 2500     if (dwc->current_otg_role == DWC3_OTG_ROLE_HOST) {
+1 -1
drivers/usb/dwc3/core.h
··· 1558 1558 #define DWC3_HAS_OTG    BIT(3)
1559 1559
1560 1560 /* prototypes */
1561 - void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode);
1561 + void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode, bool ignore_susphy);
1562 1562 void dwc3_set_mode(struct dwc3 *dwc, u32 mode);
1563 1563 u32 dwc3_core_fifo_space(struct dwc3_ep *dep, u8 type);
1564 1564
+2 -2
drivers/usb/dwc3/drd.c
··· 173 173      * block "Initialize GCTL for OTG operation".
174 174      */
175 175     /* GCTL.PrtCapDir=2'b11 */
176 -     dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG);
176 +     dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG, true);
177 177     /* GUSB2PHYCFG0.SusPHY=0 */
178 178     reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
179 179     reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
··· 556 556
557 557         dwc3_drd_update(dwc);
558 558     } else {
559 -         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG);
559 +         dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG, true);
560 560
561 561         /* use OTG block to get ID event */
562 562         irq = dwc3_otg_get_irq(dwc);
+7 -3
drivers/usb/dwc3/gadget.c
··· 4501 4501     dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0),
4502 4502             DWC3_GEVNTSIZ_SIZE(evt->length));
4503 4503
4504 +     evt->flags &= ~DWC3_EVENT_PENDING;
4505 +     /*
4506 +      * Add an explicit write memory barrier to make sure that the update of
4507 +      * clearing DWC3_EVENT_PENDING is observed in dwc3_check_event_buf()
4508 +      */
4509 +     wmb();
4510 +
4504 4511     if (dwc->imod_interval) {
4505 4512         dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), DWC3_GEVNTCOUNT_EHB);
4506 4513         dwc3_writel(dwc->regs, DWC3_DEV_IMOD(0), dwc->imod_interval);
4507 4514     }
4508 4515
4509 -     /* Keep the clearing of DWC3_EVENT_PENDING at the end */
4510 -     evt->flags &= ~DWC3_EVENT_PENDING;
4511 4516
4512 4517     return ret;
4513 4518 }
+12 -5
drivers/usb/gadget/composite.c
··· 1050 1050     else
1051 1051         usb_gadget_set_remote_wakeup(gadget, 0);
1052 1052 done:
1053 -     if (power <= USB_SELF_POWER_VBUS_MAX_DRAW)
1054 -         usb_gadget_set_selfpowered(gadget);
1055 -     else
1053 +     if (power > USB_SELF_POWER_VBUS_MAX_DRAW ||
1054 +         (c && !(c->bmAttributes & USB_CONFIG_ATT_SELFPOWER)))
1056 1055         usb_gadget_clear_selfpowered(gadget);
1056 +     else
1057 +         usb_gadget_set_selfpowered(gadget);
1057 1058
1058 1059     usb_gadget_vbus_draw(gadget, power);
1059 1060     if (result >= 0 && cdev->delayed_status)
··· 2616 2615
2617 2616     cdev->suspended = 1;
2618 2617
2619 -     usb_gadget_set_selfpowered(gadget);
2618 +     if (cdev->config &&
2619 +         cdev->config->bmAttributes & USB_CONFIG_ATT_SELFPOWER)
2620 +         usb_gadget_set_selfpowered(gadget);
2621 +
2620 2622     usb_gadget_vbus_draw(gadget, 2);
2621 2623 }
··· 2653 2649     else
2654 2650         maxpower = min(maxpower, 900U);
2655 2651
2656 -     if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW)
2652 +     if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW ||
2653 +         !(cdev->config->bmAttributes & USB_CONFIG_ATT_SELFPOWER))
2657 2654         usb_gadget_clear_selfpowered(gadget);
2655 +     else
2656 +         usb_gadget_set_selfpowered(gadget);
2658 2657
2659 2658     usb_gadget_vbus_draw(gadget, maxpower);
2660 2659     } else {
+2 -2
drivers/usb/gadget/function/u_ether.c
··· 1052 1052      * There is a transfer in progress. So we trigger a remote
1053 1053      * wakeup to inform the host.
1054 1054      */
1055 -     ether_wakeup_host(dev->port_usb);
1056 -     return;
1055 +     if (!ether_wakeup_host(dev->port_usb))
1056 +         return;
1057 1057     }
1058 1058     spin_lock_irqsave(&dev->lock, flags);
1059 1059     link->is_suspend = true;
+8
drivers/usb/host/xhci-hub.c
··· 12 12 #include <linux/slab.h>
13 13 #include <linux/unaligned.h>
14 14 #include <linux/bitfield.h>
15 + #include <linux/pci.h>
15 16
16 17 #include "xhci.h"
17 18 #include "xhci-trace.h"
··· 771 770 enum usb_link_tunnel_mode xhci_port_is_tunneled(struct xhci_hcd *xhci,
772 771                         struct xhci_port *port)
773 772 {
773 +     struct usb_hcd *hcd;
774 774     void __iomem *base;
775 775     u32 offset;
776 +
777 +     /* Don't try and probe this capability for non-Intel hosts */
778 +     hcd = xhci_to_hcd(xhci);
779 +     if (!dev_is_pci(hcd->self.controller) ||
780 +         to_pci_dev(hcd->self.controller)->vendor != PCI_VENDOR_ID_INTEL)
781 +         return USB_LINK_UNKNOWN;
776 782
777 783     base = &xhci->cap_regs->hc_capbase;
778 784     offset = xhci_find_next_ext_cap(base, 0, XHCI_EXT_CAPS_INTEL_SPR_SHADOW);
+2 -1
drivers/usb/host/xhci-mem.c
··· 2437 2437      * and our use of dma addresses in the trb_address_map radix tree needs
2438 2438      * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
2439 2439      */
2440 -     if (xhci->quirks & XHCI_ZHAOXIN_TRB_FETCH)
2440 +     if (xhci->quirks & XHCI_TRB_OVERFETCH)
2441 +         /* Buggy HC prefetches beyond segment bounds - allocate dummy space at the end */
2441 2442         xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
2442 2443                 TRB_SEGMENT_SIZE * 2, TRB_SEGMENT_SIZE * 2, xhci->page_size * 2);
2443 2444     else
+7 -3
drivers/usb/host/xhci-pci.c
··· 38 38 #define PCI_DEVICE_ID_ETRON_EJ168 0x7023 39 39 #define PCI_DEVICE_ID_ETRON_EJ188 0x7052 40 40 41 + #define PCI_DEVICE_ID_VIA_VL805 0x3483 42 + 41 43 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 42 44 #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 43 45 #define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI 0x9cb1 ··· 420 418 pdev->device == 0x3432) 421 419 xhci->quirks |= XHCI_BROKEN_STREAMS; 422 420 423 - if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) 421 + if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == PCI_DEVICE_ID_VIA_VL805) { 424 422 xhci->quirks |= XHCI_LPM_SUPPORT; 423 + xhci->quirks |= XHCI_TRB_OVERFETCH; 424 + } 425 425 426 426 if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 427 427 pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) { ··· 471 467 472 468 if (pdev->device == 0x9202) { 473 469 xhci->quirks |= XHCI_RESET_ON_RESUME; 474 - xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH; 470 + xhci->quirks |= XHCI_TRB_OVERFETCH; 475 471 } 476 472 477 473 if (pdev->device == 0x9203) 478 - xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH; 474 + xhci->quirks |= XHCI_TRB_OVERFETCH; 479 475 } 480 476 481 477 if (pdev->vendor == PCI_VENDOR_ID_CDNS &&
+5 -1
drivers/usb/host/xhci.c
··· 780 780 struct xhci_segment *seg; 781 781 782 782 ring = xhci->cmd_ring; 783 - xhci_for_each_ring_seg(ring->first_seg, seg) 783 + xhci_for_each_ring_seg(ring->first_seg, seg) { 784 + /* erase all TRBs before the link */ 784 785 memset(seg->trbs, 0, sizeof(union xhci_trb) * (TRBS_PER_SEGMENT - 1)); 786 + /* clear link cycle bit */ 787 + seg->trbs[TRBS_PER_SEGMENT - 1].link.control &= cpu_to_le32(~TRB_CYCLE); 788 + } 785 789 786 790 xhci_initialize_ring_info(ring); 787 791 /*
+1 -1
drivers/usb/host/xhci.h
··· 1632 1632 #define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42) 1633 1633 #define XHCI_SUSPEND_RESUME_CLKS BIT_ULL(43) 1634 1634 #define XHCI_RESET_TO_DEFAULT BIT_ULL(44) 1635 - #define XHCI_ZHAOXIN_TRB_FETCH BIT_ULL(45) 1635 + #define XHCI_TRB_OVERFETCH BIT_ULL(45) 1636 1636 #define XHCI_ZHAOXIN_HOST BIT_ULL(46) 1637 1637 #define XHCI_WRITE_64_HI_LO BIT_ULL(47) 1638 1638 #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
+5 -1
drivers/usb/renesas_usbhs/common.c
··· 312 312 priv->clks[1] = of_clk_get(dev_of_node(dev), 1); 313 313 if (PTR_ERR(priv->clks[1]) == -ENOENT) 314 314 priv->clks[1] = NULL; 315 - else if (IS_ERR(priv->clks[1])) 315 + else if (IS_ERR(priv->clks[1])) { 316 + clk_put(priv->clks[0]); 316 317 return PTR_ERR(priv->clks[1]); 318 + } 317 319 318 320 return 0; 319 321 } ··· 780 778 struct usbhs_priv *priv = usbhs_pdev_to_priv(pdev); 781 779 782 780 dev_dbg(&pdev->dev, "usb remove\n"); 781 + 782 + flush_delayed_work(&priv->notify_hotplug_work); 783 783 784 784 /* power off */ 785 785 if (!usbhs_get_dparam(priv, runtime_pwctrl))
+1 -1
drivers/usb/renesas_usbhs/mod_gadget.c
··· 1094 1094 goto usbhs_mod_gadget_probe_err_gpriv; 1095 1095 } 1096 1096 1097 - gpriv->transceiver = usb_get_phy(USB_PHY_TYPE_UNDEFINED); 1097 + gpriv->transceiver = devm_usb_get_phy(dev, USB_PHY_TYPE_UNDEFINED); 1098 1098 dev_info(dev, "%stransceiver found\n", 1099 1099 !IS_ERR(gpriv->transceiver) ? "" : "no "); 1100 1100
+14
drivers/usb/serial/ftdi_sio.c
··· 1079 1079 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 1080 1080 /* GMC devices */ 1081 1081 { USB_DEVICE(GMC_VID, GMC_Z216C_PID) }, 1082 + /* Altera USB Blaster 3 */ 1083 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6022_PID, 1) }, 1084 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6025_PID, 2) }, 1085 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 2) }, 1086 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 3) }, 1087 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6029_PID, 2) }, 1088 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 2) }, 1089 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 3) }, 1090 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602C_PID, 1) }, 1091 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 1) }, 1092 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 2) }, 1093 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) }, 1094 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) }, 1095 + { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) }, 1082 1096 { } /* Terminating entry */ 1083 1097 }; 1084 1098
+13
drivers/usb/serial/ftdi_sio_ids.h
··· 1612 1612 */ 1613 1613 #define GMC_VID 0x1cd7 1614 1614 #define GMC_Z216C_PID 0x0217 /* GMC Z216C Adapter IR-USB */ 1615 + 1616 + /* 1617 + * Altera USB Blaster 3 (http://www.altera.com). 1618 + */ 1619 + #define ALTERA_VID 0x09fb 1620 + #define ALTERA_UB3_6022_PID 0x6022 1621 + #define ALTERA_UB3_6025_PID 0x6025 1622 + #define ALTERA_UB3_6026_PID 0x6026 1623 + #define ALTERA_UB3_6029_PID 0x6029 1624 + #define ALTERA_UB3_602A_PID 0x602a 1625 + #define ALTERA_UB3_602C_PID 0x602c 1626 + #define ALTERA_UB3_602D_PID 0x602d 1627 + #define ALTERA_UB3_602E_PID 0x602e
+32 -16
drivers/usb/serial/option.c
··· 1368 1368 .driver_info = NCTRL(0) | RSVD(1) }, 1369 1369 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */ 1370 1370 .driver_info = RSVD(0) }, 1371 - { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */ 1371 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990A (rmnet) */ 1372 1372 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1373 - { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990 (MBIM) */ 1373 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990A (MBIM) */ 1374 1374 .driver_info = NCTRL(0) | RSVD(1) }, 1375 - { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990 (RNDIS) */ 1375 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990A (RNDIS) */ 1376 1376 .driver_info = NCTRL(2) | RSVD(3) }, 1377 - { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */ 1377 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990A (ECM) */ 1378 1378 .driver_info = NCTRL(0) | RSVD(1) }, 1379 1379 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff), /* Telit FN20C04 (rmnet) */ 1380 1380 .driver_info = RSVD(0) | NCTRL(3) }, ··· 1388 1388 .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1389 1389 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */ 1390 1390 .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) }, 1391 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x30), /* Telit FE990B (rmnet) */ 1392 + .driver_info = NCTRL(5) }, 1393 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x40) }, 1394 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x60) }, 1395 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x30), /* Telit FE990B (MBIM) */ 1396 + .driver_info = NCTRL(6) }, 1397 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x40) }, 1398 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x60) }, 1399 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x30), /* Telit FE990B (RNDIS) */ 1400 + .driver_info = NCTRL(6) }, 1401 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x40) }, 1402 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x60) }, 1403 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x30), /* Telit FE990B (ECM) */ 1404 + .driver_info = NCTRL(6) }, 1405 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x40) }, 1406 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x60) }, 1391 1407 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */ 1392 1408 .driver_info = RSVD(0) | NCTRL(3) }, 1393 1409 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */ 1394 1410 .driver_info = RSVD(0) | NCTRL(3) }, 1395 1411 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */ 1396 1412 .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) }, 1397 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x60) }, /* Telit FN990B (rmnet) */ 1398 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x40) }, 1399 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x30), 1413 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x30), /* Telit FN990B (rmnet) */ 1400 1414 .driver_info = NCTRL(5) }, 1401 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x60) }, /* Telit FN990B (MBIM) */ 1402 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x40) }, 1403 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x30), 1415 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) }, 1416 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) }, 1417 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30), /* Telit FN990B (MBIM) */ 1404 1418 .driver_info = NCTRL(6) }, 1405 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x60) }, /* Telit FN990B (RNDIS) */ 1406 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x40) }, 1407 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x30), 1419 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) }, 1420 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x60) }, 1421 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x30), /* Telit FN990B (RNDIS) */ 1408 1422 .driver_info = NCTRL(6) }, 1423 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x40) }, 1424 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x60) }, 1409 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x60) }, /* Telit FN990B (ECM) */ 1410 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x40) }, 1411 - { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x30), 1425 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x30), /* Telit FN990B (ECM) */ 1412 1426 .driver_info = NCTRL(6) }, 1427 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x40) }, 1428 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x60) }, 1413 1429 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1414 1430 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, 1415 1431 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+11
drivers/usb/typec/tcpm/tcpci_rt1711h.c
··· 334 334 { 335 335 int ret; 336 336 struct rt1711h_chip *chip; 337 + const u16 alert_mask = TCPC_ALERT_TX_SUCCESS | TCPC_ALERT_TX_DISCARDED | 338 + TCPC_ALERT_TX_FAILED | TCPC_ALERT_RX_HARD_RST | 339 + TCPC_ALERT_RX_STATUS | TCPC_ALERT_POWER_STATUS | 340 + TCPC_ALERT_CC_STATUS | TCPC_ALERT_RX_BUF_OVF | 341 + TCPC_ALERT_FAULT; 337 342 338 343 chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL); 339 344 if (!chip) ··· 387 382 dev_name(chip->dev), chip); 388 383 if (ret < 0) 389 384 return ret; 385 + 386 + /* Enable alert interrupts */ 387 + ret = rt1711h_write16(chip, TCPC_ALERT_MASK, alert_mask); 388 + if (ret < 0) 389 + return ret; 390 + 390 391 enable_irq_wake(client->irq); 391 392 392 393 return 0;
+4 -4
drivers/usb/typec/tcpm/tcpm.c
··· 5117 5117 */ 5118 5118 if (port->vbus_never_low) { 5119 5119 port->vbus_never_low = false; 5120 - tcpm_set_state(port, SNK_SOFT_RESET, 5121 - port->timings.sink_wait_cap_time); 5120 + upcoming_state = SNK_SOFT_RESET; 5122 5121 } else { 5123 5122 if (!port->self_powered) 5124 5123 upcoming_state = SNK_WAIT_CAPABILITIES_TIMEOUT; 5125 5124 else 5126 5125 upcoming_state = hard_reset_state(port); 5127 - tcpm_set_state(port, SNK_WAIT_CAPABILITIES_TIMEOUT, 5128 - port->timings.sink_wait_cap_time); 5129 5126 } 5127 + 5128 + tcpm_set_state(port, upcoming_state, 5129 + port->timings.sink_wait_cap_time); 5130 5130 break; 5131 5131 case SNK_WAIT_CAPABILITIES_TIMEOUT: 5132 5132 /*
+13 -12
drivers/usb/typec/ucsi/ucsi.c
··· 25 25 * difficult to estimate the time it takes for the system to process the command 26 26 * before it is actually passed to the PPM. 27 27 */ 28 - #define UCSI_TIMEOUT_MS 5000 28 + #define UCSI_TIMEOUT_MS 10000 29 29 30 30 /* 31 31 * UCSI_SWAP_TIMEOUT_MS - Timeout for role swap requests ··· 1346 1346 1347 1347 mutex_lock(&ucsi->ppm_lock); 1348 1348 1349 - ret = ucsi->ops->read_cci(ucsi, &cci); 1349 + ret = ucsi->ops->poll_cci(ucsi, &cci); 1350 1350 if (ret < 0) 1351 1351 goto out; 1352 1352 ··· 1364 1364 1365 1365 tmo = jiffies + msecs_to_jiffies(UCSI_TIMEOUT_MS); 1366 1366 do { 1367 - ret = ucsi->ops->read_cci(ucsi, &cci); 1367 + ret = ucsi->ops->poll_cci(ucsi, &cci); 1368 1368 if (ret < 0) 1369 1369 goto out; 1370 1370 if (cci & UCSI_CCI_COMMAND_COMPLETE) ··· 1393 1393 /* Give the PPM time to process a reset before reading CCI */ 1394 1394 msleep(20); 1395 1395 1396 - ret = ucsi->ops->read_cci(ucsi, &cci); 1396 + ret = ucsi->ops->poll_cci(ucsi, &cci); 1397 1397 if (ret) 1398 1398 goto out; 1399 1399 ··· 1825 1825 1826 1826 err_unregister: 1827 1827 for (con = connector; con->port; con++) { 1828 + if (con->wq) 1829 + destroy_workqueue(con->wq); 1828 1830 ucsi_unregister_partner(con); 1829 1831 ucsi_unregister_altmodes(con, UCSI_RECIPIENT_CON); 1830 1832 ucsi_unregister_port_psy(con); 1831 - if (con->wq) 1832 - destroy_workqueue(con->wq); 1833 1833 1834 1834 usb_power_delivery_unregister_capabilities(con->port_sink_caps); 1835 1835 con->port_sink_caps = NULL; ··· 1929 1929 struct ucsi *ucsi; 1930 1930 1931 1931 if (!ops || 1932 - !ops->read_version || !ops->read_cci || !ops->read_message_in || 1933 - !ops->sync_control || !ops->async_control) 1932 + !ops->read_version || !ops->read_cci || !ops->poll_cci || 1933 + !ops->read_message_in || !ops->sync_control || !ops->async_control) 1934 1934 return ERR_PTR(-EINVAL); 1935 1935 1936 1936 ucsi = kzalloc(sizeof(*ucsi), GFP_KERNEL); ··· 2013 2013 2014 2014 for (i = 0; i < ucsi->cap.num_connectors; i++) { 2015 2015 cancel_work_sync(&ucsi->connector[i].work); 2016 - ucsi_unregister_partner(&ucsi->connector[i]); 2017 - ucsi_unregister_altmodes(&ucsi->connector[i], 2018 - UCSI_RECIPIENT_CON); 2019 - ucsi_unregister_port_psy(&ucsi->connector[i]); 2020 2016 2021 2017 if (ucsi->connector[i].wq) { 2022 2018 struct ucsi_work *uwork; ··· 2027 2031 mutex_unlock(&ucsi->connector[i].lock); 2028 2032 destroy_workqueue(ucsi->connector[i].wq); 2029 2033 } 2034 + 2035 + ucsi_unregister_partner(&ucsi->connector[i]); 2036 + ucsi_unregister_altmodes(&ucsi->connector[i], 2037 + UCSI_RECIPIENT_CON); 2038 + ucsi_unregister_port_psy(&ucsi->connector[i]); 2030 2039 2031 2040 usb_power_delivery_unregister_capabilities(ucsi->connector[i].port_sink_caps); 2032 2041 ucsi->connector[i].port_sink_caps = NULL;
+2
drivers/usb/typec/ucsi/ucsi.h
··· 62 62 * struct ucsi_operations - UCSI I/O operations 63 63 * @read_version: Read implemented UCSI version 64 64 * @read_cci: Read CCI register 65 + * @poll_cci: Read CCI register while polling with notifications disabled 65 66 * @read_message_in: Read message data from UCSI 66 67 * @sync_control: Blocking control operation 67 68 * @async_control: Non-blocking control operation ··· 77 76 struct ucsi_operations { 78 77 int (*read_version)(struct ucsi *ucsi, u16 *version); 79 78 int (*read_cci)(struct ucsi *ucsi, u32 *cci); 79 + int (*poll_cci)(struct ucsi *ucsi, u32 *cci); 80 80 int (*read_message_in)(struct ucsi *ucsi, void *val, size_t val_len); 81 81 int (*sync_control)(struct ucsi *ucsi, u64 command); 82 82 int (*async_control)(struct ucsi *ucsi, u64 command);
+14 -7
drivers/usb/typec/ucsi/ucsi_acpi.c
··· 59 59 static int ucsi_acpi_read_cci(struct ucsi *ucsi, u32 *cci) 60 60 { 61 61 struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); 62 - int ret; 63 - 64 - if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) { 65 - ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ); 66 - if (ret) 67 - return ret; 68 - } 69 62 70 63 memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci)); 71 64 72 65 return 0; 66 + } 67 + 68 + static int ucsi_acpi_poll_cci(struct ucsi *ucsi, u32 *cci) 69 + { 70 + struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi); 71 + int ret; 72 + 73 + ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ); 74 + if (ret) 75 + return ret; 76 + 77 + return ucsi_acpi_read_cci(ucsi, cci); 73 78 } 74 79 75 80 static int ucsi_acpi_read_message_in(struct ucsi *ucsi, void *val, size_t val_len) ··· 99 94 static const struct ucsi_operations ucsi_acpi_ops = { 100 95 .read_version = ucsi_acpi_read_version, 101 96 .read_cci = ucsi_acpi_read_cci, 97 + .poll_cci = ucsi_acpi_poll_cci, 102 98 .read_message_in = ucsi_acpi_read_message_in, 103 99 .sync_control = ucsi_sync_control_common, 104 100 .async_control = ucsi_acpi_async_control ··· 148 142 static const struct ucsi_operations ucsi_gram_ops = { 149 143 .read_version = ucsi_acpi_read_version, 150 144 .read_cci = ucsi_acpi_read_cci, 145 + .poll_cci = ucsi_acpi_poll_cci, 151 146 .read_message_in = ucsi_gram_read_message_in, 152 147 .sync_control = ucsi_gram_sync_control, 153 148 .async_control = ucsi_acpi_async_control
+1
drivers/usb/typec/ucsi/ucsi_ccg.c
··· 664 664 static const struct ucsi_operations ucsi_ccg_ops = { 665 665 .read_version = ucsi_ccg_read_version, 666 666 .read_cci = ucsi_ccg_read_cci, 667 + .poll_cci = ucsi_ccg_read_cci, 667 668 .read_message_in = ucsi_ccg_read_message_in, 668 669 .sync_control = ucsi_ccg_sync_control, 669 670 .async_control = ucsi_ccg_async_control,
+1
drivers/usb/typec/ucsi/ucsi_glink.c
··· 206 206 static const struct ucsi_operations pmic_glink_ucsi_ops = { 207 207 .read_version = pmic_glink_ucsi_read_version, 208 208 .read_cci = pmic_glink_ucsi_read_cci, 209 + .poll_cci = pmic_glink_ucsi_read_cci, 209 210 .read_message_in = pmic_glink_ucsi_read_message_in, 210 211 .sync_control = ucsi_sync_control_common, 211 212 .async_control = pmic_glink_ucsi_async_control,
+1
drivers/usb/typec/ucsi/ucsi_stm32g0.c
··· 424 424 static const struct ucsi_operations ucsi_stm32g0_ops = { 425 425 .read_version = ucsi_stm32g0_read_version, 426 426 .read_cci = ucsi_stm32g0_read_cci, 427 + .poll_cci = ucsi_stm32g0_read_cci, 427 428 .read_message_in = ucsi_stm32g0_read_message_in, 428 429 .sync_control = ucsi_sync_control_common, 429 430 .async_control = ucsi_stm32g0_async_control,
+1
drivers/usb/typec/ucsi/ucsi_yoga_c630.c
··· 74 74 static const struct ucsi_operations yoga_c630_ucsi_ops = { 75 75 .read_version = yoga_c630_ucsi_read_version, 76 76 .read_cci = yoga_c630_ucsi_read_cci, 77 + .poll_cci = yoga_c630_ucsi_read_cci, 77 78 .read_message_in = yoga_c630_ucsi_read_message_in, 78 79 .sync_control = ucsi_sync_control_common, 79 80 .async_control = yoga_c630_ucsi_async_control,
+34 -18
drivers/video/fbdev/hyperv_fb.c
··· 282 282 static uint screen_fb_size; 283 283 static uint dio_fb_size; /* FB size for deferred IO */ 284 284 285 + static void hvfb_putmem(struct fb_info *info); 286 + 285 287 /* Send message to Hyper-V host */ 286 288 static inline int synthvid_send(struct hv_device *hdev, 287 289 struct synthvid_msg *msg) ··· 865 863 } 866 864 867 865 /* 866 + * fb_ops.fb_destroy is called by the last put_fb_info() call at the end 867 + * of unregister_framebuffer() or fb_release(). Do any cleanup related to 868 + * framebuffer here. 869 + */ 870 + static void hvfb_destroy(struct fb_info *info) 871 + { 872 + hvfb_putmem(info); 873 + framebuffer_release(info); 874 + } 875 + 876 + /* 868 877 * TODO: GEN1 codepaths allocate from system or DMA-able memory. Fix the 869 878 * driver to use the _SYSMEM_ or _DMAMEM_ helpers in these cases. 870 879 */ ··· 890 877 .fb_set_par = hvfb_set_par, 891 878 .fb_setcolreg = hvfb_setcolreg, 892 879 .fb_blank = hvfb_blank, 880 + .fb_destroy = hvfb_destroy, 893 881 }; 894 882 895 883 /* Get options from kernel paramenter "video=" */ ··· 966 952 } 967 953 968 954 /* Release contiguous physical memory */ 969 - static void hvfb_release_phymem(struct hv_device *hdev, 955 + static void hvfb_release_phymem(struct device *device, 970 956 phys_addr_t paddr, unsigned int size) 971 957 { 972 958 unsigned int order = get_order(size); ··· 974 960 if (order <= MAX_PAGE_ORDER) 975 961 __free_pages(pfn_to_page(paddr >> PAGE_SHIFT), order); 976 962 else 977 - dma_free_coherent(&hdev->device, 963 + dma_free_coherent(device, 978 964 round_up(size, PAGE_SIZE), 979 965 phys_to_virt(paddr), 980 966 paddr); ··· 1003 989 1004 990 base = pci_resource_start(pdev, 0); 1005 991 size = pci_resource_len(pdev, 0); 992 + aperture_remove_conflicting_devices(base, size, KBUILD_MODNAME); 1006 993 1007 994 /* 1008 995 * For Gen 1 VM, we can directly use the contiguous memory ··· 1025 1010 goto getmem_done; 1026 1011 } 1027 1012 pr_info("Unable to allocate enough contiguous physical memory on Gen 1 VM. Using MMIO instead.\n"); 1013 + } else { 1014 + aperture_remove_all_conflicting_devices(KBUILD_MODNAME); 1028 1015 } 1029 1016 1030 1017 /* 1031 - * Cannot use the contiguous physical memory. 1032 - * Allocate mmio space for framebuffer. 1018 + * Cannot use contiguous physical memory, so allocate MMIO space for 1019 + * the framebuffer. At this point in the function, conflicting devices 1020 + * that might have claimed the framebuffer MMIO space based on 1021 + * screen_info.lfb_base must have already been removed so that 1022 + * vmbus_allocate_mmio() does not allocate different MMIO space. If the 1023 + * kdump image were to be loaded using kexec_file_load(), the 1024 + * framebuffer location in the kdump image would be set from 1025 + * screen_info.lfb_base at the time that kdump is enabled. If the 1026 + * framebuffer has moved elsewhere, this could be the wrong location, 1027 + * causing kdump to hang when efifb (for example) loads. 1033 1028 */ 1034 1029 dio_fb_size = 1035 1030 screen_width * screen_height * screen_depth / 8; ··· 1076 1051 info->screen_size = dio_fb_size; 1077 1052 1078 1053 getmem_done: 1079 - if (base && size) 1080 - aperture_remove_conflicting_devices(base, size, KBUILD_MODNAME); 1081 - else 1082 - aperture_remove_all_conflicting_devices(KBUILD_MODNAME); 1083 - 1084 1054 if (!gen2vm) 1085 1055 pci_dev_put(pdev); 1086 1056 ··· 1094 1074 } 1095 1075 1096 1076 /* Release the framebuffer */ 1097 - static void hvfb_putmem(struct hv_device *hdev, struct fb_info *info) 1077 + static void hvfb_putmem(struct fb_info *info) 1098 1078 { 1099 1079 struct hvfb_par *par = info->par; 1100 1080 1101 1081 if (par->need_docopy) { 1102 1082 vfree(par->dio_vp); 1103 - iounmap(info->screen_base); 1083 + iounmap(par->mmio_vp); 1104 1084 vmbus_free_mmio(par->mem->start, screen_fb_size); 1105 1085 } else { 1106 - hvfb_release_phymem(hdev, info->fix.smem_start, 1086 + hvfb_release_phymem(info->device, info->fix.smem_start, 1107 1087 screen_fb_size); 1108 1088 } 1109 1089 ··· 1192 1172 if (ret) 1193 1173 goto error; 1194 1174 1195 - ret = register_framebuffer(info); 1175 + ret = devm_register_framebuffer(&hdev->device, info); 1196 1176 if (ret) { 1197 1177 pr_err("Unable to register framebuffer\n"); 1198 1178 goto error; ··· 1217 1197 1218 1198 error: 1219 1199 fb_deferred_io_cleanup(info); 1220 - hvfb_putmem(hdev, info); 1200 + hvfb_putmem(info); 1221 1201 error2: 1222 1202 vmbus_close(hdev->channel); 1223 1203 error1: ··· 1240 1220 1241 1221 fb_deferred_io_cleanup(info); 1242 1222 1243 - unregister_framebuffer(info); 1244 1223 cancel_delayed_work_sync(&par->dwork); 1245 1224 1246 1225 vmbus_close(hdev->channel); 1247 1226 hv_set_drvdata(hdev, NULL); 1248 - 1249 - hvfb_putmem(hdev, info); 1250 - framebuffer_release(info); 1251 1227 } 1252 1228 1253 1229 static int hvfb_suspend(struct hv_device *hdev)
+3 -3
drivers/virt/acrn/hsm.c
··· 49 49 switch (cmd & PMCMD_TYPE_MASK) { 50 50 case ACRN_PMCMD_GET_PX_CNT: 51 51 case ACRN_PMCMD_GET_CX_CNT: 52 - pm_info = kmalloc(sizeof(u64), GFP_KERNEL); 52 + pm_info = kzalloc(sizeof(u64), GFP_KERNEL); 53 53 if (!pm_info) 54 54 return -ENOMEM; 55 55 ··· 64 64 kfree(pm_info); 65 65 break; 66 66 case ACRN_PMCMD_GET_PX_DATA: 67 - px_data = kmalloc(sizeof(*px_data), GFP_KERNEL); 67 + px_data = kzalloc(sizeof(*px_data), GFP_KERNEL); 68 68 if (!px_data) 69 69 return -ENOMEM; 70 70 ··· 79 79 kfree(px_data); 80 80 break; 81 81 case ACRN_PMCMD_GET_CX_DATA: 82 - cx_data = kmalloc(sizeof(*cx_data), GFP_KERNEL); 82 + cx_data = kzalloc(sizeof(*cx_data), GFP_KERNEL); 83 83 if (!cx_data) 84 84 return -ENOMEM; 85 85
+43 -15
drivers/virt/coco/sev-guest/sev-guest.c
··· 39 39 struct miscdevice misc; 40 40 41 41 struct snp_msg_desc *msg_desc; 42 - 43 - union { 44 - struct snp_report_req report; 45 - struct snp_derived_key_req derived_key; 46 - struct snp_ext_report_req ext_report; 47 - } req; 48 42 }; 49 43 50 44 /* ··· 66 72 67 73 static int get_report(struct snp_guest_dev *snp_dev, struct snp_guest_request_ioctl *arg) 68 74 { 69 - struct snp_report_req *report_req = &snp_dev->req.report; 75 + struct snp_report_req *report_req __free(kfree) = NULL; 70 76 struct snp_msg_desc *mdesc = snp_dev->msg_desc; 71 77 struct snp_report_resp *report_resp; 72 78 struct snp_guest_req req = {}; ··· 74 80 75 81 if (!arg->req_data || !arg->resp_data) 76 82 return -EINVAL; 83 + 84 + report_req = kzalloc(sizeof(*report_req), GFP_KERNEL_ACCOUNT); 85 + if (!report_req) 86 + return -ENOMEM; 77 87 78 88 if (copy_from_user(report_req, (void __user *)arg->req_data, sizeof(*report_req))) 79 89 return -EFAULT; ··· 115 117 116 118 static int get_derived_key(struct snp_guest_dev *snp_dev, struct snp_guest_request_ioctl *arg) 117 119 { 118 - struct snp_derived_key_req *derived_key_req = &snp_dev->req.derived_key; 120 + struct snp_derived_key_req *derived_key_req __free(kfree) = NULL; 119 121 struct snp_derived_key_resp derived_key_resp = {0}; 120 122 struct snp_msg_desc *mdesc = snp_dev->msg_desc; 121 123 struct snp_guest_req req = {}; ··· 133 135 */ 134 136 resp_len = sizeof(derived_key_resp.data) + mdesc->ctx->authsize; 135 137 if (sizeof(buf) < resp_len) 138 + return -ENOMEM; 139 + 140 + derived_key_req = kzalloc(sizeof(*derived_key_req), GFP_KERNEL_ACCOUNT); 141 + if (!derived_key_req) 136 142 return -ENOMEM; 137 143 138 144 if (copy_from_user(derived_key_req, (void __user *)arg->req_data, ··· 171 169 struct snp_req_resp *io) 172 170 173 171 { 174 - struct snp_ext_report_req *report_req = &snp_dev->req.ext_report; 172 + struct snp_ext_report_req *report_req __free(kfree) = NULL; 175 173 struct snp_msg_desc *mdesc = snp_dev->msg_desc; 176 174 struct snp_report_resp *report_resp; 177 175 struct snp_guest_req req = {}; 178 176 int ret, npages = 0, resp_len; 179 177 sockptr_t certs_address; 178 + struct page *page; 180 179 181 180 if (sockptr_is_null(io->req_data) || sockptr_is_null(io->resp_data)) 182 181 return -EINVAL; 182 + 183 + report_req = kzalloc(sizeof(*report_req), GFP_KERNEL_ACCOUNT); 184 + if (!report_req) 185 + return -ENOMEM; 183 186 184 187 if (copy_from_sockptr(report_req, io->req_data, sizeof(*report_req))) 185 188 return -EFAULT; ··· 211 204 * the host. If host does not supply any certs in it, then copy 212 205 * zeros to indicate that certificate data was not provided. 213 206 */ 214 - memset(mdesc->certs_data, 0, report_req->certs_len); 215 207 npages = report_req->certs_len >> PAGE_SHIFT; 208 + page = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 209 + get_order(report_req->certs_len)); 210 + if (!page) 211 + return -ENOMEM; 212 + 213 + req.certs_data = page_address(page); 214 + ret = set_memory_decrypted((unsigned long)req.certs_data, npages); 215 + if (ret) { 216 + pr_err("failed to mark page shared, ret=%d\n", ret); 217 + __free_pages(page, get_order(report_req->certs_len)); 218 + return -EFAULT; 219 + } 220 + 216 221 cmd: 217 222 /* 218 223 * The intermediate response buffer is used while decrypting the ··· 233 214 */ 234 215 resp_len = sizeof(report_resp->data) + mdesc->ctx->authsize; 235 216 report_resp = kzalloc(resp_len, GFP_KERNEL_ACCOUNT); 236 - if (!report_resp) 237 - return -ENOMEM; 217 + if (!report_resp) { 218 + ret = -ENOMEM; 219 + goto e_free_data; 220 + } 238 221 239 - mdesc->input.data_npages = npages; 222 + req.input.data_npages = npages; 240 223 241 224 req.msg_version = arg->msg_version; 242 225 req.msg_type = SNP_MSG_REPORT_REQ; ··· 253 232 254 233 /* If certs length is invalid then copy the returned length */ 255 234 if (arg->vmm_error == SNP_GUEST_VMM_ERR_INVALID_LEN) { 256 - report_req->certs_len = mdesc->input.data_npages << PAGE_SHIFT; 235 + report_req->certs_len = req.input.data_npages << PAGE_SHIFT; 257 236 258 237 if (copy_to_sockptr(io->req_data, report_req, sizeof(*report_req))) 259 238 ret = -EFAULT; ··· 262 241 if (ret) 263 242 goto e_free; 264 243 265 - if (npages && copy_to_sockptr(certs_address, mdesc->certs_data, report_req->certs_len)) { 244 + if (npages && copy_to_sockptr(certs_address, req.certs_data, report_req->certs_len)) { 266 245 ret = -EFAULT; 267 246 goto e_free; 268 247 } ··· 272 251 273 252 e_free: 274 253 kfree(report_resp); 254 + e_free_data: 255 + if (npages) { 256 + if (set_memory_encrypted((unsigned long)req.certs_data, npages)) 257 + WARN_ONCE(ret, "failed to restore encryption mask (leak it)\n"); 258 + else 259 + __free_pages(page, get_order(report_req->certs_len)); 260 + } 275 261 return ret; 276 262 } 277 263
+2 -1
drivers/virt/vboxguest/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config VBOXGUEST 3 3 tristate "Virtual Box Guest integration support" 4 - depends on (ARM64 || X86) && PCI && INPUT 4 + depends on (ARM64 || X86 || COMPILE_TEST) && PCI && INPUT 5 + depends on HAS_IOPORT 5 6 help 6 7 This is a driver for the Virtual Box Guest PCI device used in 7 8 Virtual Box virtual machines. Enabling this driver will add
+5 -4
fs/affs/file.c
··· 596 596 BUG_ON(tmp > bsize); 597 597 AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA); 598 598 AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino); 599 - AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx); 599 + AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx + 1); 600 600 AFFS_DATA_HEAD(bh)->size = cpu_to_be32(tmp); 601 601 affs_fix_checksum(sb, bh); 602 602 bh->b_state &= ~(1UL << BH_New); ··· 724 724 tmp = min(bsize - boff, to - from); 725 725 BUG_ON(boff + tmp > bsize || tmp > bsize); 726 726 memcpy(AFFS_DATA(bh) + boff, data + from, tmp); 727 - be32_add_cpu(&AFFS_DATA_HEAD(bh)->size, tmp); 727 + AFFS_DATA_HEAD(bh)->size = cpu_to_be32( 728 + max(boff + tmp, be32_to_cpu(AFFS_DATA_HEAD(bh)->size))); 728 729 affs_fix_checksum(sb, bh); 729 730 mark_buffer_dirty_inode(bh, inode); 730 731 written += tmp; ··· 747 746 if (buffer_new(bh)) { 748 747 AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA); 749 748 AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino); 750 - AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx); 749 + AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx + 1); 751 750 AFFS_DATA_HEAD(bh)->size = cpu_to_be32(bsize); 752 751 AFFS_DATA_HEAD(bh)->next = 0; 753 752 bh->b_state &= ~(1UL << BH_New); ··· 781 780 if (buffer_new(bh)) { 782 781 AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA); 783 782 AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino); 784 - AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx); 783 + AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx + 1); 785 784 AFFS_DATA_HEAD(bh)->size = cpu_to_be32(tmp); 786 785 AFFS_DATA_HEAD(bh)->next = 0; 787 786 bh->b_state &= ~(1UL << BH_New);
+6 -5
fs/afs/cell.c
··· 64 64 return ERR_PTR(-ENAMETOOLONG); 65 65 66 66 if (!name) { 67 - cell = net->ws_cell; 67 + cell = rcu_dereference_protected(net->ws_cell, 68 + lockdep_is_held(&net->cells_lock)); 68 69 if (!cell) 69 70 return ERR_PTR(-EDESTADDRREQ); 70 71 goto found; ··· 389 388 /* install the new cell */ 390 389 down_write(&net->cells_lock); 391 390 afs_see_cell(new_root, afs_cell_trace_see_ws); 392 - old_root = net->ws_cell; 393 - net->ws_cell = new_root; 391 + old_root = rcu_replace_pointer(net->ws_cell, new_root, 392 + lockdep_is_held(&net->cells_lock)); 394 393 up_write(&net->cells_lock); 395 394 396 395 afs_unuse_cell(net, old_root, afs_cell_trace_unuse_ws); ··· 946 945 _enter(""); 947 946 948 947 down_write(&net->cells_lock); 949 - ws = net->ws_cell; 950 - net->ws_cell = NULL; 948 + ws = rcu_replace_pointer(net->ws_cell, NULL, 949 + lockdep_is_held(&net->cells_lock)); 951 950 up_write(&net->cells_lock); 952 951 afs_unuse_cell(net, ws, afs_cell_trace_unuse_ws); 953 952
+13 -2
fs/afs/dynroot.c
··· 314 314 const char *name; 315 315 bool dotted = vnode->fid.vnode == 3; 316 316 317 - if (!net->ws_cell) 317 + if (!dentry) { 318 + /* We're in RCU-pathwalk. */ 319 + cell = rcu_dereference(net->ws_cell); 320 + if (dotted) 321 + name = cell->name - 1; 322 + else 323 + name = cell->name; 324 + /* Shouldn't need to set a delayed call. */ 325 + return name; 326 + } 327 + 328 + if (!rcu_access_pointer(net->ws_cell)) 318 329 return ERR_PTR(-ENOENT); 319 330 320 331 down_read(&net->cells_lock); 321 332 322 - cell = net->ws_cell; 333 + cell = rcu_dereference_protected(net->ws_cell, lockdep_is_held(&net->cells_lock)); 323 334 if (dotted) 324 335 name = cell->name - 1; 325 336 else
+1 -1
fs/afs/internal.h
··· 287 287 288 288 /* Cell database */ 289 289 struct rb_root cells; 290 - struct afs_cell *ws_cell; 290 + struct afs_cell __rcu *ws_cell; 291 291 struct work_struct cells_manager; 292 292 struct timer_list cells_timer; 293 293 atomic_t cells_outstanding;
+2 -2
fs/afs/proc.c
··· 206 206 207 207 net = afs_seq2net_single(m); 208 208 down_read(&net->cells_lock); 209 - cell = net->ws_cell; 209 + cell = rcu_dereference_protected(net->ws_cell, lockdep_is_held(&net->cells_lock)); 210 210 if (cell) 211 211 seq_printf(m, "%s\n", cell->name); 212 212 up_read(&net->cells_lock); ··· 242 242 243 243 ret = -EEXIST; 244 244 inode_lock(file_inode(file)); 245 - if (!net->ws_cell) 245 + if (!rcu_access_pointer(net->ws_cell)) 246 246 ret = afs_cell_init(net, buf); 247 247 else 248 248 printk("busy\n");
+1 -1
fs/bcachefs/btree_io.c
··· 1186 1186 le64_to_cpu(i->journal_seq), 1187 1187 b->written, b->written + sectors, ptr_written); 1188 1188 1189 - b->written += sectors; 1189 + b->written = min(b->written + sectors, btree_sectors(c)); 1190 1190 1191 1191 if (blacklisted && !first) 1192 1192 continue;
+8
fs/bcachefs/btree_update.h
··· 126 126 127 127 int bch2_btree_insert_clone_trans(struct btree_trans *, enum btree_id, struct bkey_i *); 128 128 129 + int bch2_btree_write_buffer_insert_err(struct btree_trans *, 130 + enum btree_id, struct bkey_i *); 131 + 129 132 static inline int __must_check bch2_trans_update_buffered(struct btree_trans *trans, 130 133 enum btree_id btree, 131 134 struct bkey_i *k) 132 135 { 136 + if (unlikely(!btree_type_uses_write_buffer(btree))) { 137 + int ret = bch2_btree_write_buffer_insert_err(trans, btree, k); 138 + dump_stack(); 139 + return ret; 140 + } 133 141 /* 134 142 * Most updates skip the btree write buffer until journal replay is 135 143 * finished because synchronization with journal replay relies on having
+20 -1
fs/bcachefs/btree_write_buffer.c
··· 264 264 BUG_ON(wb->sorted.size < wb->flushing.keys.nr); 265 265 } 266 266 267 + int bch2_btree_write_buffer_insert_err(struct btree_trans *trans, 268 + enum btree_id btree, struct bkey_i *k) 269 + { 270 + struct bch_fs *c = trans->c; 271 + struct printbuf buf = PRINTBUF; 272 + 273 + prt_printf(&buf, "attempting to do write buffer update on non wb btree="); 274 + bch2_btree_id_to_text(&buf, btree); 275 + prt_str(&buf, "\n"); 276 + bch2_bkey_val_to_text(&buf, c, bkey_i_to_s_c(k)); 277 + 278 + bch2_fs_inconsistent(c, "%s", buf.buf); 279 + printbuf_exit(&buf); 280 + return -EROFS; 281 + } 282 + 267 283 static int bch2_btree_write_buffer_flush_locked(struct btree_trans *trans) 268 284 { 269 285 struct bch_fs *c = trans->c; ··· 328 312 darray_for_each(wb->sorted, i) { 329 313 struct btree_write_buffered_key *k = &wb->flushing.keys.data[i->idx]; 330 314 331 - BUG_ON(!btree_type_uses_write_buffer(k->btree)); 315 + if (unlikely(!btree_type_uses_write_buffer(k->btree))) { 316 + ret = bch2_btree_write_buffer_insert_err(trans, k->btree, &k->k); 317 + goto err; 318 + } 332 319 333 320 for (struct wb_key_ref *n = i + 1; n < min(i + 4, &darray_top(wb->sorted)); n++) 334 321 prefetch(&wb->flushing.keys.data[n->idx]);
+1 -1
fs/bcachefs/extents.c
··· 99 99 100 100 /* Pick at random, biased in favor of the faster device: */ 101 101 102 - return bch2_rand_range(l1 + l2) > l1; 102 + return bch2_get_random_u64_below(l1 + l2) > l1; 103 103 } 104 104 105 105 if (bch2_force_reconstruct_read)
+1
fs/bcachefs/inode.c
··· 1198 1198 opts->_name##_from_inode = true; \ 1199 1199 } else { \ 1200 1200 opts->_name = c->opts._name; \ 1201 + opts->_name##_from_inode = false; \ 1201 1202 } 1202 1203 BCH_INODE_OPTS() 1203 1204 #undef x
+12 -7
fs/bcachefs/io_read.c
··· 59 59 } 60 60 rcu_read_unlock(); 61 61 62 - return bch2_rand_range(nr * CONGESTED_MAX) < total; 62 + return get_random_u32_below(nr * CONGESTED_MAX) < total; 63 63 } 64 64 65 65 #else ··· 951 951 goto retry_pick; 952 952 } 953 953 954 - /* 955 - * Unlock the iterator while the btree node's lock is still in 956 - * cache, before doing the IO: 957 - */ 958 - bch2_trans_unlock(trans); 959 - 960 954 if (flags & BCH_READ_NODECODE) { 961 955 /* 962 956 * can happen if we retry, and the extent we were going to read ··· 1107 1113 trace_and_count(c, read_split, &orig->bio); 1108 1114 } 1109 1115 1116 + /* 1117 + * Unlock the iterator while the btree node's lock is still in 1118 + * cache, before doing the IO: 1119 + */ 1120 + if (!(flags & BCH_READ_IN_RETRY)) 1121 + bch2_trans_unlock(trans); 1122 + else 1123 + bch2_trans_unlock_long(trans); 1124 + 1110 1125 if (!rbio->pick.idx) { 1111 1126 if (unlikely(!rbio->have_ioref)) { 1112 1127 struct printbuf buf = PRINTBUF; ··· 1163 1160 if (likely(!(flags & BCH_READ_IN_RETRY))) { 1164 1161 return 0; 1165 1162 } else { 1163 + bch2_trans_unlock(trans); 1164 + 1166 1165 int ret; 1167 1166 1168 1167 rbio->context = RBIO_CONTEXT_UNBOUND;
+32 -27
fs/bcachefs/journal.c
··· 1021 1021 1022 1022 /* allocate journal on a device: */ 1023 1023 1024 - static int __bch2_set_nr_journal_buckets(struct bch_dev *ca, unsigned nr, 1025 - bool new_fs, struct closure *cl) 1024 + static int bch2_set_nr_journal_buckets_iter(struct bch_dev *ca, unsigned nr, 1025 + bool new_fs, struct closure *cl) 1026 1026 { 1027 1027 struct bch_fs *c = ca->fs; 1028 1028 struct journal_device *ja = &ca->journal; ··· 1150 1150 return ret; 1151 1151 } 1152 1152 1153 - /* 1154 - * Allocate more journal space at runtime - not currently making use if it, but 1155 - * the code works: 1156 - */ 1157 - int bch2_set_nr_journal_buckets(struct bch_fs *c, struct bch_dev *ca, 1158 - unsigned nr) 1153 + static int bch2_set_nr_journal_buckets_loop(struct bch_fs *c, struct bch_dev *ca, 1154 + unsigned nr, bool new_fs) 1159 1155 { 1160 1156 struct journal_device *ja = &ca->journal; 1161 - struct closure cl; 1162 1157 int ret = 0; 1163 1158 1159 + struct closure cl; 1164 1160 closure_init_stack(&cl); 1165 - 1166 - down_write(&c->state_lock); 1167 1161 1168 1162 /* don't handle reducing nr of buckets yet: */ 1169 1163 if (nr < ja->nr) 1170 - goto unlock; 1164 + return 0; 1171 1165 1172 - while (ja->nr < nr) { 1166 + while (!ret && ja->nr < nr) { 1173 1167 struct disk_reservation disk_res = { 0, 0, 0 }; 1174 1168 1175 1169 /* ··· 1176 1182 * filesystem-wide allocation will succeed, this is a device 1177 1183 * specific allocation - we can hang here: 1178 1184 */ 1185 + if (!new_fs) { 1186 + ret = bch2_disk_reservation_get(c, &disk_res, 1187 + bucket_to_sector(ca, nr - ja->nr), 1, 0); 1188 + if (ret) 1189 + break; 1190 + } 1179 1191 1180 - ret = bch2_disk_reservation_get(c, &disk_res, 1181 - bucket_to_sector(ca, nr - ja->nr), 1, 0); 1182 - if (ret) 1183 - break; 1192 + ret = bch2_set_nr_journal_buckets_iter(ca, nr, new_fs, &cl); 1184 - 1193 + 1185 - ret = __bch2_set_nr_journal_buckets(ca, nr, false, &cl); 1194 + if (ret == -BCH_ERR_bucket_alloc_blocked || 1195 + ret == -BCH_ERR_open_buckets_empty) 1196 + ret = 0; /* wait and retry */ 1186 1197 1187 1198 bch2_disk_reservation_put(c, &disk_res); 1188 - 1189 1199 closure_sync(&cl); 1190 - 1191 - if (ret && 1192 - ret != -BCH_ERR_bucket_alloc_blocked && 1193 - ret != -BCH_ERR_open_buckets_empty) 1194 - break; 1195 1200 } 1196 1201 1197 - bch_err_fn(c, ret); 1198 - unlock: 1202 + return ret; 1203 + } 1204 + 1205 + /* 1206 + * Allocate more journal space at runtime - not currently making use if it, but 1207 + * the code works: 1208 + */ 1209 + int bch2_set_nr_journal_buckets(struct bch_fs *c, struct bch_dev *ca, 1210 + unsigned nr) 1211 + { 1212 + down_write(&c->state_lock); 1213 + int ret = bch2_set_nr_journal_buckets_loop(c, ca, nr, false); 1199 1214 up_write(&c->state_lock); 1215 + 1216 + bch_err_fn(c, ret); 1200 1217 return ret; 1201 1218 } 1202 1219 ··· 1233 1228 min(1 << 13, 1234 1229 (1 << 24) / ca->mi.bucket_size)); 1235 1230 1236 - ret = __bch2_set_nr_journal_buckets(ca, nr, new_fs, NULL); 1231 + ret = bch2_set_nr_journal_buckets_loop(ca->fs, ca, nr, new_fs); 1237 1232 err: 1238 1233 bch_err_fn(ca, ret); 1239 1234 return ret;
+12 -13
fs/bcachefs/movinggc.c
··· 74 74 struct move_bucket *b, u64 time) 75 75 { 76 76 struct bch_fs *c = trans->c; 77 - struct btree_iter iter; 78 - struct bkey_s_c k; 79 - struct bch_alloc_v4 _a; 80 - const struct bch_alloc_v4 *a; 81 - int ret; 82 77 83 - if (bch2_bucket_is_open(trans->c, 84 - b->k.bucket.inode, 85 - b->k.bucket.offset)) 78 + if (bch2_bucket_is_open(c, b->k.bucket.inode, b->k.bucket.offset)) 86 79 return 0; 87 80 88 - k = bch2_bkey_get_iter(trans, &iter, BTREE_ID_alloc, 89 - b->k.bucket, BTREE_ITER_cached); 90 - ret = bkey_err(k); 81 + struct btree_iter iter; 82 + struct bkey_s_c k = bch2_bkey_get_iter(trans, &iter, BTREE_ID_alloc, 83 + b->k.bucket, BTREE_ITER_cached); 84 + int ret = bkey_err(k); 91 85 if (ret) 92 86 return ret; 93 87 ··· 89 95 if (!ca) 90 96 goto out; 91 97 92 - a = bch2_alloc_to_v4(k, &_a); 98 + if (ca->mi.state != BCH_MEMBER_STATE_rw || 99 + !bch2_dev_is_online(ca)) 100 + goto out_put; 101 + 102 + struct bch_alloc_v4 _a; 103 + const struct bch_alloc_v4 *a = bch2_alloc_to_v4(k, &_a); 93 104 b->k.gen = a->gen; 94 105 b->sectors = bch2_bucket_sectors_dirty(*a); 95 106 u64 lru_idx = alloc_lru_idx_fragmentation(*a, ca); 96 107 97 108 ret = lru_idx && lru_idx <= time; 98 - 109 + out_put: 99 110 bch2_dev_put(ca); 100 111 out: 101 112 bch2_trans_iter_exit(trans, &iter);
+16 -8
fs/bcachefs/super-io.c
··· 69 69 return v; 70 70 } 71 71 72 - void bch2_set_version_incompat(struct bch_fs *c, enum bcachefs_metadata_version version) 72 + bool bch2_set_version_incompat(struct bch_fs *c, enum bcachefs_metadata_version version) 73 73 { 74 - mutex_lock(&c->sb_lock); 75 - SET_BCH_SB_VERSION_INCOMPAT(c->disk_sb.sb, 76 - max(BCH_SB_VERSION_INCOMPAT(c->disk_sb.sb), version)); 77 - c->disk_sb.sb->features[0] |= cpu_to_le64(BCH_FEATURE_incompat_version_field); 78 - bch2_write_super(c); 79 - mutex_unlock(&c->sb_lock); 74 + bool ret = (c->sb.features & BIT_ULL(BCH_FEATURE_incompat_version_field)) && 75 + version <= c->sb.version_incompat_allowed; 76 + 77 + if (ret) { 78 + mutex_lock(&c->sb_lock); 79 + SET_BCH_SB_VERSION_INCOMPAT(c->disk_sb.sb, 80 + max(BCH_SB_VERSION_INCOMPAT(c->disk_sb.sb), version)); 81 + bch2_write_super(c); 82 + mutex_unlock(&c->sb_lock); 83 + } 84 + 85 + return ret; 80 86 } 81 87 82 88 const char * const bch2_sb_fields[] = { ··· 1225 1219 c->disk_sb.sb->version = cpu_to_le16(new_version); 1226 1220 c->disk_sb.sb->features[0] |= cpu_to_le64(BCH_SB_FEATURES_ALL); 1227 1221 1228 - if (incompat) 1222 + if (incompat) { 1229 1223 SET_BCH_SB_VERSION_INCOMPAT_ALLOWED(c->disk_sb.sb, 1230 1224 max(BCH_SB_VERSION_INCOMPAT_ALLOWED(c->disk_sb.sb), new_version)); 1225 + c->disk_sb.sb->features[0] |= cpu_to_le64(BCH_FEATURE_incompat_version_field); 1226 + } 1231 1227 } 1232 1228 1233 1229 static int bch2_sb_ext_validate(struct bch_sb *sb, struct bch_sb_field *f,
+4 -7
fs/bcachefs/super-io.h
··· 21 21 void bch2_version_to_text(struct printbuf *, enum bcachefs_metadata_version); 22 22 enum bcachefs_metadata_version bch2_latest_compatible_version(enum bcachefs_metadata_version); 23 23 24 - void bch2_set_version_incompat(struct bch_fs *, enum bcachefs_metadata_version); 24 + bool bch2_set_version_incompat(struct bch_fs *, enum bcachefs_metadata_version); 25 25 26 26 static inline bool bch2_request_incompat_feature(struct bch_fs *c, 27 27 enum bcachefs_metadata_version version) 28 28 { 29 - if (unlikely(version > c->sb.version_incompat)) { 30 - if (version > c->sb.version_incompat_allowed) 31 - return false; 32 - bch2_set_version_incompat(c, version); 33 - } 34 - return true; 29 + return likely(version <= c->sb.version_incompat) 30 + ? true 31 + : bch2_set_version_incompat(c, version); 35 32 } 36 33 37 34 static inline size_t bch2_sb_field_bytes(struct bch_sb_field *f)
+6 -5
fs/bcachefs/super.c
··· 1811 1811 goto err_late; 1812 1812 1813 1813 up_write(&c->state_lock); 1814 - return 0; 1814 + out: 1815 + printbuf_exit(&label); 1816 + printbuf_exit(&errbuf); 1817 + bch_err_fn(c, ret); 1818 + return ret; 1815 1819 1816 1820 err_unlock: 1817 1821 mutex_unlock(&c->sb_lock); ··· 1824 1820 if (ca) 1825 1821 bch2_dev_free(ca); 1826 1822 bch2_free_super(&sb); 1827 - printbuf_exit(&label); 1828 - printbuf_exit(&errbuf); 1829 - bch_err_fn(c, ret); 1830 - return ret; 1823 + goto out; 1831 1824 err_late: 1832 1825 up_write(&c->state_lock); 1833 1826 ca = NULL;
+15 -9
fs/bcachefs/util.c
··· 653 653 return 0; 654 654 } 655 655 656 - size_t bch2_rand_range(size_t max) 656 + u64 bch2_get_random_u64_below(u64 ceil) 657 657 { 658 - size_t rand; 658 + if (ceil <= U32_MAX) 659 + return __get_random_u32_below(ceil); 659 660 660 - if (!max) 661 - return 0; 661 + /* this is the same (clever) algorithm as in __get_random_u32_below() */ 662 + u64 rand = get_random_u64(); 663 + u64 mult = ceil * rand; 662 664 663 - do { 664 - rand = get_random_long(); 665 - rand &= roundup_pow_of_two(max) - 1; 666 - } while (rand >= max); 665 + if (unlikely(mult < ceil)) { 666 + u64 bound; 667 + div64_u64_rem(-ceil, ceil, &bound); 668 + while (unlikely(mult < bound)) { 669 + rand = get_random_u64(); 670 + mult = ceil * rand; 671 + } 672 + } 667 673 668 - return rand; 674 + return mul_u64_u64_shr(ceil, rand, 64); 669 675 } 670 676 671 677 void memcpy_to_bio(struct bio *dst, struct bvec_iter dst_iter, const void *src)
+1 -1
fs/bcachefs/util.h
··· 401 401 _ret; \ 402 402 }) 403 403 404 - size_t bch2_rand_range(size_t); 404 + u64 bch2_get_random_u64_below(u64); 405 405 406 406 void memcpy_to_bio(struct bio *, struct bvec_iter, const void *); 407 407 void memcpy_from_bio(void *, struct bio *, struct bvec_iter);
+7 -2
fs/btrfs/inode.c
··· 1382 1382 continue; 1383 1383 } 1384 1384 if (done_offset) { 1385 - *done_offset = start - 1; 1386 - return 0; 1385 + /* 1386 + * Move @end to the end of the processed range, 1387 + * and exit the loop to unlock the processed extents. 1388 + */ 1389 + end = start - 1; 1390 + ret = 0; 1391 + break; 1387 1392 } 1388 1393 ret = -ENOSPC; 1389 1394 }
+2 -2
fs/btrfs/sysfs.c
··· 1330 1330 1331 1331 int btrfs_read_policy_to_enum(const char *str, s64 *value_ret) 1332 1332 { 1333 - char param[32] = { 0 }; 1333 + char param[32]; 1334 1334 char __maybe_unused *value_str; 1335 1335 1336 1336 if (!str || strlen(str) == 0) 1337 1337 return 0; 1338 1338 1339 - strncpy(param, str, sizeof(param) - 1); 1339 + strscpy(param, str); 1340 1340 1341 1341 #ifdef CONFIG_BTRFS_EXPERIMENTAL 1342 1342 /* Separate value from input in policy:value format. */
+1
fs/btrfs/volumes.c
··· 7155 7155 btrfs_err(fs_info, 7156 7156 "failed to add chunk map, start=%llu len=%llu: %d", 7157 7157 map->start, map->chunk_len, ret); 7158 + btrfs_free_chunk_map(map); 7158 7159 } 7159 7160 7160 7161 return ret;
+13 -2
fs/coredump.c
··· 63 63 64 64 static int core_uses_pid; 65 65 static unsigned int core_pipe_limit; 66 + static unsigned int core_sort_vma; 66 67 static char core_pattern[CORENAME_MAX_SIZE] = "core"; 67 68 static int core_name_size = CORENAME_MAX_SIZE; 68 69 unsigned int core_file_note_size_limit = CORE_FILE_NOTE_SIZE_DEFAULT; ··· 1027 1026 .extra1 = (unsigned int *)&core_file_note_size_min, 1028 1027 .extra2 = (unsigned int *)&core_file_note_size_max, 1029 1028 }, 1029 + { 1030 + .procname = "core_sort_vma", 1031 + .data = &core_sort_vma, 1032 + .maxlen = sizeof(int), 1033 + .mode = 0644, 1034 + .proc_handler = proc_douintvec_minmax, 1035 + .extra1 = SYSCTL_ZERO, 1036 + .extra2 = SYSCTL_ONE, 1037 + }, 1030 1038 }; 1031 1039 1032 1040 static int __init init_fs_coredump_sysctls(void) ··· 1266 1256 cprm->vma_data_size += m->dump_size; 1267 1257 } 1268 1258 1269 - sort(cprm->vma_meta, cprm->vma_count, sizeof(*cprm->vma_meta), 1270 - cmp_vma_size, NULL); 1259 + if (core_sort_vma) 1260 + sort(cprm->vma_meta, cprm->vma_count, sizeof(*cprm->vma_meta), 1261 + cmp_vma_size, NULL); 1271 1262 1272 1263 return true; 1273 1264 }
+8 -2
fs/exfat/balloc.c
··· 141 141 return 0; 142 142 } 143 143 144 - void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync) 144 + int exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync) 145 145 { 146 146 int i, b; 147 147 unsigned int ent_idx; ··· 150 150 struct exfat_mount_options *opts = &sbi->options; 151 151 152 152 if (!is_valid_cluster(sbi, clu)) 153 - return; 153 + return -EIO; 154 154 155 155 ent_idx = CLUSTER_TO_BITMAP_ENT(clu); 156 156 i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx); 157 157 b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx); 158 158 159 + if (!test_bit_le(b, sbi->vol_amap[i]->b_data)) 160 + return -EIO; 161 + 159 162 clear_bit_le(b, sbi->vol_amap[i]->b_data); 163 + 160 164 exfat_update_bh(sbi->vol_amap[i], sync); 161 165 162 166 if (opts->discard) { ··· 175 171 opts->discard = 0; 176 172 } 177 173 } 174 + 175 + return 0; 178 176 } 179 177 180 178 /*
+1 -1
fs/exfat/exfat_fs.h
··· 456 456 int exfat_load_bitmap(struct super_block *sb); 457 457 void exfat_free_bitmap(struct exfat_sb_info *sbi); 458 458 int exfat_set_bitmap(struct inode *inode, unsigned int clu, bool sync); 459 - void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync); 459 + int exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync); 460 460 unsigned int exfat_find_free_bitmap(struct super_block *sb, unsigned int clu); 461 461 int exfat_count_used_clusters(struct super_block *sb, unsigned int *ret_count); 462 462 int exfat_trim_fs(struct inode *inode, struct fstrim_range *range);
+7 -4
fs/exfat/fatent.c
··· 175 175 BITMAP_OFFSET_SECTOR_INDEX(sb, CLUSTER_TO_BITMAP_ENT(clu)); 176 176 177 177 if (p_chain->flags == ALLOC_NO_FAT_CHAIN) { 178 + int err; 178 179 unsigned int last_cluster = p_chain->dir + p_chain->size - 1; 179 180 do { 180 181 bool sync = false; ··· 190 189 cur_cmap_i = next_cmap_i; 191 190 } 192 191 193 - exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode))); 192 + err = exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode))); 193 + if (err) 194 + break; 194 195 clu++; 195 196 num_clusters++; 196 197 } while (num_clusters < p_chain->size); ··· 213 210 cur_cmap_i = next_cmap_i; 214 211 } 215 212 216 - exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode))); 213 + if (exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode)))) 214 + break; 217 215 clu = n_clu; 218 216 num_clusters++; 219 217 220 218 if (err) 221 - goto dec_used_clus; 219 + break; 222 220 223 221 if (num_clusters >= sbi->num_clusters - EXFAT_FIRST_CLUSTER) { 224 222 /* ··· 233 229 } while (clu != EXFAT_EOF_CLUSTER); 234 230 } 235 231 236 - dec_used_clus: 237 232 sbi->used_clusters -= num_clusters; 238 233 return 0; 239 234 }
+1 -1
fs/exfat/file.c
··· 587 587 valid_size = ei->valid_size; 588 588 589 589 ret = generic_write_checks(iocb, iter); 590 - if (ret < 0) 590 + if (ret <= 0) 591 591 goto unlock; 592 592 593 593 if (iocb->ki_flags & IOCB_DIRECT) {
+6 -1
fs/exfat/namei.c
··· 232 232 dentry = 0; 233 233 } 234 234 235 - while (dentry + num_entries < total_entries && 235 + while (dentry + num_entries <= total_entries && 236 236 clu.dir != EXFAT_EOF_CLUSTER) { 237 237 i = dentry & (dentries_per_clu - 1); 238 238 ··· 645 645 info->size = le64_to_cpu(ep2->dentry.stream.valid_size); 646 646 info->valid_size = le64_to_cpu(ep2->dentry.stream.valid_size); 647 647 info->size = le64_to_cpu(ep2->dentry.stream.size); 648 + 649 + if (unlikely(EXFAT_B_TO_CLU_ROUND_UP(info->size, sbi) > sbi->used_clusters)) { 650 + exfat_fs_error(sb, "data size is invalid(%lld)", info->size); 651 + return -EIO; 652 + } 648 653 649 654 info->start_clu = le32_to_cpu(ep2->dentry.stream.start_clu); 650 655 if (!is_valid_cluster(sbi, info->start_clu) && info->size) {
-3
fs/ext4/file.c
··· 756 756 return VM_FAULT_SIGBUS; 757 757 } 758 758 } else { 759 - result = filemap_fsnotify_fault(vmf); 760 - if (unlikely(result)) 761 - return result; 762 759 filemap_invalidate_lock_shared(mapping); 763 760 } 764 761 result = dax_iomap_fault(vmf, order, &pfn, &error, &ext4_iomap_ops);
+7 -8
fs/fuse/dev.c
··· 1457 1457 if (ret < 0) 1458 1458 goto out; 1459 1459 1460 - if (pipe_occupancy(pipe->head, pipe->tail) + cs.nr_segs > pipe->max_usage) { 1460 + if (pipe_buf_usage(pipe) + cs.nr_segs > pipe->max_usage) { 1461 1461 ret = -EIO; 1462 1462 goto out; 1463 1463 } ··· 2107 2107 struct file *out, loff_t *ppos, 2108 2108 size_t len, unsigned int flags) 2109 2109 { 2110 - unsigned int head, tail, mask, count; 2110 + unsigned int head, tail, count; 2111 2111 unsigned nbuf; 2112 2112 unsigned idx; 2113 2113 struct pipe_buffer *bufs; ··· 2124 2124 2125 2125 head = pipe->head; 2126 2126 tail = pipe->tail; 2127 - mask = pipe->ring_size - 1; 2128 - count = head - tail; 2127 + count = pipe_occupancy(head, tail); 2129 2128 2130 2129 bufs = kvmalloc_array(count, sizeof(struct pipe_buffer), GFP_KERNEL); 2131 2130 if (!bufs) { ··· 2134 2135 2135 2136 nbuf = 0; 2136 2137 rem = 0; 2137 - for (idx = tail; idx != head && rem < len; idx++) 2138 - rem += pipe->bufs[idx & mask].len; 2138 + for (idx = tail; !pipe_empty(head, idx) && rem < len; idx++) 2139 + rem += pipe_buf(pipe, idx)->len; 2139 2140 2140 2141 ret = -EINVAL; 2141 2142 if (rem < len) ··· 2146 2147 struct pipe_buffer *ibuf; 2147 2148 struct pipe_buffer *obuf; 2148 2149 2149 - if (WARN_ON(nbuf >= count || tail == head)) 2150 + if (WARN_ON(nbuf >= count || pipe_empty(head, tail))) 2150 2151 goto out_free; 2151 2152 2152 - ibuf = &pipe->bufs[tail & mask]; 2153 + ibuf = pipe_buf(pipe, tail); 2153 2154 obuf = &bufs[nbuf]; 2154 2155 2155 2156 if (rem >= ibuf->len) {
+2 -1
fs/nfs/file.c
··· 29 29 #include <linux/pagemap.h> 30 30 #include <linux/gfp.h> 31 31 #include <linux/swap.h> 32 + #include <linux/compaction.h> 32 33 33 34 #include <linux/uaccess.h> 34 35 #include <linux/filelock.h> ··· 458 457 /* If the private flag is set, then the folio is not freeable */ 459 458 if (folio_test_private(folio)) { 460 459 if ((current_gfp_context(gfp) & GFP_KERNEL) != GFP_KERNEL || 461 - current_is_kswapd()) 460 + current_is_kswapd() || current_is_kcompactd()) 462 461 return false; 463 462 if (nfs_wb_folio(folio->mapping->host, folio) < 0) 464 463 return false;
+14 -18
fs/pipe.c
··· 210 210 /* Done while waiting without holding the pipe lock - thus the READ_ONCE() */ 211 211 static inline bool pipe_readable(const struct pipe_inode_info *pipe) 212 212 { 213 - unsigned int head = READ_ONCE(pipe->head); 214 - unsigned int tail = READ_ONCE(pipe->tail); 213 + union pipe_index idx = { .head_tail = READ_ONCE(pipe->head_tail) }; 215 214 unsigned int writers = READ_ONCE(pipe->writers); 216 215 217 - return !pipe_empty(head, tail) || !writers; 216 + return !pipe_empty(idx.head, idx.tail) || !writers; 218 217 } 219 218 220 219 static inline unsigned int pipe_update_tail(struct pipe_inode_info *pipe, ··· 394 395 wake_next_reader = true; 395 396 mutex_lock(&pipe->mutex); 396 397 } 397 - if (pipe_empty(pipe->head, pipe->tail)) 398 + if (pipe_is_empty(pipe)) 398 399 wake_next_reader = false; 399 400 mutex_unlock(&pipe->mutex); 400 401 ··· 416 417 /* Done while waiting without holding the pipe lock - thus the READ_ONCE() */ 417 418 static inline bool pipe_writable(const struct pipe_inode_info *pipe) 418 419 { 419 - unsigned int head = READ_ONCE(pipe->head); 420 - unsigned int tail = READ_ONCE(pipe->tail); 420 + union pipe_index idx = { .head_tail = READ_ONCE(pipe->head_tail) }; 421 421 unsigned int max_usage = READ_ONCE(pipe->max_usage); 422 422 423 - return !pipe_full(head, tail, max_usage) || 423 + return !pipe_full(idx.head, idx.tail, max_usage) || 424 424 !READ_ONCE(pipe->readers); 425 425 } 426 426 ··· 577 579 kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN); 578 580 wait_event_interruptible_exclusive(pipe->wr_wait, pipe_writable(pipe)); 579 581 mutex_lock(&pipe->mutex); 580 - was_empty = pipe_empty(pipe->head, pipe->tail); 582 + was_empty = pipe_is_empty(pipe); 581 583 wake_next_writer = true; 582 584 } 583 585 out: 584 - if (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) 586 + if (pipe_is_full(pipe)) 585 587 wake_next_writer = false; 586 588 mutex_unlock(&pipe->mutex); 587 589 ··· 614 616 static long pipe_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 615 617 { 616 618 struct pipe_inode_info *pipe = filp->private_data; 617 - unsigned int count, head, tail, mask; 619 + unsigned int count, head, tail; 618 620 619 621 switch (cmd) { 620 622 case FIONREAD: ··· 622 624 count = 0; 623 625 head = pipe->head; 624 626 tail = pipe->tail; 625 - mask = pipe->ring_size - 1; 626 627 627 - while (tail != head) { 628 - count += pipe->bufs[tail & mask].len; 628 + while (!pipe_empty(head, tail)) { 629 + count += pipe_buf(pipe, tail)->len; 629 630 tail++; 630 631 } 631 632 mutex_unlock(&pipe->mutex); ··· 656 659 { 657 660 __poll_t mask; 658 661 struct pipe_inode_info *pipe = filp->private_data; 659 - unsigned int head, tail; 662 + union pipe_index idx; 660 663 661 664 /* Epoll has some historical nasty semantics, this enables them */ 662 665 WRITE_ONCE(pipe->poll_usage, true); ··· 677 680 * if something changes and you got it wrong, the poll 678 681 * table entry will wake you up and fix it. 679 682 */ 680 - head = READ_ONCE(pipe->head); 681 - tail = READ_ONCE(pipe->tail); 683 + idx.head_tail = READ_ONCE(pipe->head_tail); 682 684 683 685 mask = 0; 684 686 if (filp->f_mode & FMODE_READ) { 685 - if (!pipe_empty(head, tail)) 687 + if (!pipe_empty(idx.head, idx.tail)) 686 688 mask |= EPOLLIN | EPOLLRDNORM; 687 689 if (!pipe->writers && filp->f_pipe != pipe->w_counter) 688 690 mask |= EPOLLHUP; 689 691 } 690 692 691 693 if (filp->f_mode & FMODE_WRITE) { 692 - if (!pipe_full(head, tail, pipe->max_usage)) 694 + if (!pipe_full(idx.head, idx.tail, pipe->max_usage)) 693 695 mask |= EPOLLOUT | EPOLLWRNORM; 694 696 /* 695 697 * Most Unices do not set EPOLLERR for FIFOs but on Linux they
+19 -15
fs/smb/client/cifsacl.c
··· 763 763 struct cifs_fattr *fattr, bool mode_from_special_sid) 764 764 { 765 765 int i; 766 - int num_aces = 0; 766 + u16 num_aces = 0; 767 767 int acl_size; 768 768 char *acl_base; 769 769 struct smb_ace **ppace; ··· 778 778 } 779 779 780 780 /* validate that we do not go past end of acl */ 781 - if (end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size)) { 781 + if (end_of_acl < (char *)pdacl + sizeof(struct smb_acl) || 782 + end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size)) { 782 783 cifs_dbg(VFS, "ACL too small to parse DACL\n"); 783 784 return; 784 785 } 785 786 786 787 cifs_dbg(NOISY, "DACL revision %d size %d num aces %d\n", 787 788 le16_to_cpu(pdacl->revision), le16_to_cpu(pdacl->size), 788 - le32_to_cpu(pdacl->num_aces)); 789 + le16_to_cpu(pdacl->num_aces)); 789 790 790 791 /* reset rwx permissions for user/group/other. 791 792 Also, if num_aces is 0 i.e. DACL has no ACEs, ··· 796 795 acl_base = (char *)pdacl; 797 796 acl_size = sizeof(struct smb_acl); 798 797 799 - num_aces = le32_to_cpu(pdacl->num_aces); 798 + num_aces = le16_to_cpu(pdacl->num_aces); 800 799 if (num_aces > 0) { 801 800 umode_t denied_mode = 0; 802 801 803 - if (num_aces > ULONG_MAX / sizeof(struct smb_ace *)) 802 + if (num_aces > (le16_to_cpu(pdacl->size) - sizeof(struct smb_acl)) / 803 + (offsetof(struct smb_ace, sid) + 804 + offsetof(struct smb_sid, sub_auth) + sizeof(__le16))) 804 805 return; 806 + 805 807 ppace = kmalloc_array(num_aces, sizeof(struct smb_ace *), 806 808 GFP_KERNEL); 807 809 if (!ppace) ··· 941 937 static void populate_new_aces(char *nacl_base, 942 938 struct smb_sid *pownersid, 943 939 struct smb_sid *pgrpsid, 944 - __u64 *pnmode, u32 *pnum_aces, u16 *pnsize, 940 + __u64 *pnmode, u16 *pnum_aces, u16 *pnsize, 945 941 bool modefromsid, 946 942 bool posix) 947 943 { 948 944 __u64 nmode; 949 - u32 num_aces = 0; 945 + u16 num_aces = 0; 950 946 u16 nsize = 0; 951 947 __u64 user_mode; 952 948 __u64 group_mode; ··· 1054 1050 u16 size = 0; 1055 1051 struct smb_ace *pntace = NULL; 1056 1052 char *acl_base = NULL; 1057 - u32 src_num_aces = 0; 1053 + u16 src_num_aces = 0; 1058 1054 u16 nsize = 0; 1059 1055 struct smb_ace *pnntace = NULL; 1060 1056 char *nacl_base = NULL; ··· 1062 1058 1063 1059 acl_base = (char *)pdacl; 1064 1060 size = sizeof(struct smb_acl); 1065 - src_num_aces = le32_to_cpu(pdacl->num_aces); 1061 + src_num_aces = le16_to_cpu(pdacl->num_aces); 1066 1062 1067 1063 nacl_base = (char *)pndacl; 1068 1064 nsize = sizeof(struct smb_acl); ··· 1094 1090 u16 size = 0; 1095 1091 struct smb_ace *pntace = NULL; 1096 1092 char *acl_base = NULL; 1097 - u32 src_num_aces = 0; 1093 + u16 src_num_aces = 0; 1098 1094 u16 nsize = 0; 1099 1095 struct smb_ace *pnntace = NULL; 1100 1096 char *nacl_base = NULL; 1101 - u32 num_aces = 0; 1097 + u16 num_aces = 0; 1102 1098 bool new_aces_set = false; 1103 1099 1104 1100 /* Assuming that pndacl and pnmode are never NULL */ ··· 1116 1112 1117 1113 acl_base = (char *)pdacl; 1118 1114 size = sizeof(struct smb_acl); 1119 - src_num_aces = le32_to_cpu(pdacl->num_aces); 1115 + src_num_aces = le16_to_cpu(pdacl->num_aces); 1120 1116 1121 1117 /* Retain old ACEs which we can retain */ 1122 1118 for (i = 0; i < src_num_aces; ++i) { ··· 1162 1158 } 1163 1159 1164 1160 finalize_dacl: 1165 - pndacl->num_aces = cpu_to_le32(num_aces); 1161 + pndacl->num_aces = cpu_to_le16(num_aces); 1166 1162 pndacl->size = cpu_to_le16(nsize); 1167 1163 1168 1164 return 0; ··· 1297 1293 dacloffset ? dacl_ptr->revision : cpu_to_le16(ACL_REVISION); 1298 1294 1299 1295 ndacl_ptr->size = cpu_to_le16(0); 1300 - ndacl_ptr->num_aces = cpu_to_le32(0); 1296 + ndacl_ptr->num_aces = cpu_to_le16(0); 1301 1297 1302 1298 rc = set_chmod_dacl(dacl_ptr, ndacl_ptr, owner_sid_ptr, group_sid_ptr, 1303 1299 pnmode, mode_from_sid, posix); ··· 1657 1653 dacl_ptr = (struct smb_acl *)((char *)pntsd + dacloffset); 1658 1654 if (mode_from_sid) 1659 1655 nsecdesclen += 1660 - le32_to_cpu(dacl_ptr->num_aces) * sizeof(struct smb_ace); 1656 + le16_to_cpu(dacl_ptr->num_aces) * sizeof(struct smb_ace); 1661 1657 else /* cifsacl */ 1662 1658 nsecdesclen += le16_to_cpu(dacl_ptr->size); 1663 1659 }
+12 -4
fs/smb/client/connect.c
··· 1825 1825 struct smb3_fs_context *ctx, 1826 1826 bool match_super) 1827 1827 { 1828 - if (ctx->sectype != Unspecified && 1829 - ctx->sectype != ses->sectype) 1830 - return 0; 1828 + struct TCP_Server_Info *server = ses->server; 1829 + enum securityEnum ctx_sec, ses_sec; 1831 1830 1832 1831 if (!match_super && ctx->dfs_root_ses != ses->dfs_root_ses) 1833 1832 return 0; ··· 1838 1839 if (ses->chan_max < ctx->max_channels) 1839 1840 return 0; 1840 1841 1841 - switch (ses->sectype) { 1842 + ctx_sec = server->ops->select_sectype(server, ctx->sectype); 1843 + ses_sec = server->ops->select_sectype(server, ses->sectype); 1844 + 1845 + if (ctx_sec != ses_sec) 1846 + return 0; 1847 + 1848 + switch (ctx_sec) { 1849 + case IAKerb: 1842 1850 case Kerberos: 1843 1851 if (!uid_eq(ctx->cred_uid, ses->cred_uid)) 1844 1852 return 0; 1845 1853 break; 1854 + case NTLMv2: 1855 + case RawNTLMSSP: 1846 1856 default: 1847 1857 /* NULL username means anonymous session */ 1848 1858 if (ses->user_name == NULL) {
+11 -7
fs/smb/client/fs_context.c
··· 171 171 fsparam_string("username", Opt_user), 172 172 fsparam_string("pass", Opt_pass), 173 173 fsparam_string("password", Opt_pass), 174 + fsparam_string("pass2", Opt_pass2), 174 175 fsparam_string("password2", Opt_pass2), 175 176 fsparam_string("ip", Opt_ip), 176 177 fsparam_string("addr", Opt_ip), ··· 1132 1131 } else if (!strcmp("user", param->key) || !strcmp("username", param->key)) { 1133 1132 skip_parsing = true; 1134 1133 opt = Opt_user; 1134 + } else if (!strcmp("pass2", param->key) || !strcmp("password2", param->key)) { 1135 + skip_parsing = true; 1136 + opt = Opt_pass2; 1135 1137 } 1136 1138 } 1137 1139 ··· 1344 1340 } 1345 1341 break; 1346 1342 case Opt_acregmax: 1347 - ctx->acregmax = HZ * result.uint_32; 1348 - if (ctx->acregmax > CIFS_MAX_ACTIMEO) { 1343 + if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { 1349 1344 cifs_errorf(fc, "acregmax too large\n"); 1350 1345 goto cifs_parse_mount_err; 1351 1346 } 1347 + ctx->acregmax = HZ * result.uint_32; 1352 1348 break; 1353 1349 case Opt_acdirmax: 1354 - ctx->acdirmax = HZ * result.uint_32; 1355 - if (ctx->acdirmax > CIFS_MAX_ACTIMEO) { 1350 + if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { 1356 1351 cifs_errorf(fc, "acdirmax too large\n"); 1357 1352 goto cifs_parse_mount_err; 1358 1353 } 1354 + ctx->acdirmax = HZ * result.uint_32; 1359 1355 break; 1360 1356 case Opt_actimeo: 1361 - if (HZ * result.uint_32 > CIFS_MAX_ACTIMEO) { 1357 + if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) { 1362 1358 cifs_errorf(fc, "timeout too large\n"); 1363 1359 goto cifs_parse_mount_err; 1364 1360 } ··· 1370 1366 ctx->acdirmax = ctx->acregmax = HZ * result.uint_32; 1371 1367 break; 1372 1368 case Opt_closetimeo: 1373 - ctx->closetimeo = HZ * result.uint_32; 1374 - if (ctx->closetimeo > SMB3_MAX_DCLOSETIMEO) { 1369 + if (result.uint_32 > SMB3_MAX_DCLOSETIMEO / HZ) { 1375 1370 cifs_errorf(fc, "closetimeo too large\n"); 1376 1371 goto cifs_parse_mount_err; 1377 1372 } 1373 + ctx->closetimeo = HZ * result.uint_32; 1378 1374 break; 
1379 1375 case Opt_echo_interval: 1380 1376 ctx->echo_interval = result.uint_32;
+2 -1
fs/smb/common/smbacl.h
··· 107 107 struct smb_acl { 108 108 __le16 revision; /* revision level */ 109 109 __le16 size; 110 - __le32 num_aces; 110 + __le16 num_aces; 111 + __le16 reserved; 111 112 } __attribute__((packed)); 112 113 113 114 struct smb_ace {
+20
fs/smb/server/connection.c
··· 433 433 default_conn_ops.terminate_fn = ops->terminate_fn; 434 434 } 435 435 436 + void ksmbd_conn_r_count_inc(struct ksmbd_conn *conn) 437 + { 438 + atomic_inc(&conn->r_count); 439 + } 440 + 441 + void ksmbd_conn_r_count_dec(struct ksmbd_conn *conn) 442 + { 443 + /* 444 + * Checking waitqueue to dropping pending requests on 445 + * disconnection. waitqueue_active is safe because it 446 + * uses atomic operation for condition. 447 + */ 448 + atomic_inc(&conn->refcnt); 449 + if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 450 + wake_up(&conn->r_count_q); 451 + 452 + if (atomic_dec_and_test(&conn->refcnt)) 453 + kfree(conn); 454 + } 455 + 436 456 int ksmbd_conn_transport_init(void) 437 457 { 438 458 int ret;
+2
fs/smb/server/connection.h
··· 168 168 void ksmbd_conn_transport_destroy(void); 169 169 void ksmbd_conn_lock(struct ksmbd_conn *conn); 170 170 void ksmbd_conn_unlock(struct ksmbd_conn *conn); 171 + void ksmbd_conn_r_count_inc(struct ksmbd_conn *conn); 172 + void ksmbd_conn_r_count_dec(struct ksmbd_conn *conn); 171 173 172 174 /* 173 175 * WARNING
-3
fs/smb/server/ksmbd_work.c
··· 26 26 INIT_LIST_HEAD(&work->request_entry); 27 27 INIT_LIST_HEAD(&work->async_request_entry); 28 28 INIT_LIST_HEAD(&work->fp_entry); 29 - INIT_LIST_HEAD(&work->interim_entry); 30 29 INIT_LIST_HEAD(&work->aux_read_list); 31 30 work->iov_alloc_cnt = 4; 32 31 work->iov = kcalloc(work->iov_alloc_cnt, sizeof(struct kvec), ··· 55 56 kfree(work->tr_buf); 56 57 kvfree(work->request_buf); 57 58 kfree(work->iov); 58 - if (!list_empty(&work->interim_entry)) 59 - list_del(&work->interim_entry); 60 59 61 60 if (work->async_id) 62 61 ksmbd_release_id(&work->conn->async_ida, work->async_id);
-1
fs/smb/server/ksmbd_work.h
··· 89 89 /* List head at conn->async_requests */ 90 90 struct list_head async_request_entry; 91 91 struct list_head fp_entry; 92 - struct list_head interim_entry; 93 92 }; 94 93 95 94 /**
+21 -22
fs/smb/server/oplock.c
··· 46 46 opinfo->fid = id; 47 47 opinfo->Tid = Tid; 48 48 INIT_LIST_HEAD(&opinfo->op_entry); 49 - INIT_LIST_HEAD(&opinfo->interim_list); 50 49 init_waitqueue_head(&opinfo->oplock_q); 51 50 init_waitqueue_head(&opinfo->oplock_brk); 52 51 atomic_set(&opinfo->refcount, 1); ··· 634 635 { 635 636 struct smb2_oplock_break *rsp = NULL; 636 637 struct ksmbd_work *work = container_of(wk, struct ksmbd_work, work); 638 + struct ksmbd_conn *conn = work->conn; 637 639 struct oplock_break_info *br_info = work->request_buf; 638 640 struct smb2_hdr *rsp_hdr; 639 641 struct ksmbd_file *fp; ··· 690 690 691 691 out: 692 692 ksmbd_free_work_struct(work); 693 + ksmbd_conn_r_count_dec(conn); 693 694 } 694 695 695 696 /** ··· 725 724 work->sess = opinfo->sess; 726 725 727 726 if (opinfo->op_state == OPLOCK_ACK_WAIT) { 727 + ksmbd_conn_r_count_inc(conn); 728 728 INIT_WORK(&work->work, __smb2_oplock_break_noti); 729 729 ksmbd_queue_work(work); 730 730 ··· 747 745 { 748 746 struct smb2_lease_break *rsp = NULL; 749 747 struct ksmbd_work *work = container_of(wk, struct ksmbd_work, work); 748 + struct ksmbd_conn *conn = work->conn; 750 749 struct lease_break_info *br_info = work->request_buf; 751 750 struct smb2_hdr *rsp_hdr; 752 751 ··· 794 791 795 792 out: 796 793 ksmbd_free_work_struct(work); 794 + ksmbd_conn_r_count_dec(conn); 797 795 } 798 796 799 797 /** ··· 807 803 static int smb2_lease_break_noti(struct oplock_info *opinfo) 808 804 { 809 805 struct ksmbd_conn *conn = opinfo->conn; 810 - struct list_head *tmp, *t; 811 806 struct ksmbd_work *work; 812 807 struct lease_break_info *br_info; 813 808 struct lease *lease = opinfo->o_lease; ··· 834 831 work->sess = opinfo->sess; 835 832 836 833 if (opinfo->op_state == OPLOCK_ACK_WAIT) { 837 - list_for_each_safe(tmp, t, &opinfo->interim_list) { 838 - struct ksmbd_work *in_work; 839 - 840 - in_work = list_entry(tmp, struct ksmbd_work, 841 - interim_entry); 842 - setup_async_work(in_work, NULL, NULL); 843 - smb2_send_interim_resp(in_work, 
STATUS_PENDING); 844 - list_del_init(&in_work->interim_entry); 845 - release_async_work(in_work); 846 - } 834 + ksmbd_conn_r_count_inc(conn); 847 835 INIT_WORK(&work->work, __smb2_lease_break_noti); 848 836 ksmbd_queue_work(work); 849 837 wait_for_break_ack(opinfo); ··· 865 871 } 866 872 } 867 873 868 - static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level) 874 + static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level, 875 + struct ksmbd_work *in_work) 869 876 { 870 877 int err = 0; 871 878 ··· 909 914 } 910 915 911 916 if (lease->state & (SMB2_LEASE_WRITE_CACHING_LE | 912 - SMB2_LEASE_HANDLE_CACHING_LE)) 917 + SMB2_LEASE_HANDLE_CACHING_LE)) { 918 + if (in_work) { 919 + setup_async_work(in_work, NULL, NULL); 920 + smb2_send_interim_resp(in_work, STATUS_PENDING); 921 + release_async_work(in_work); 922 + } 923 + 913 924 brk_opinfo->op_state = OPLOCK_ACK_WAIT; 914 - else 925 + } else 915 926 atomic_dec(&brk_opinfo->breaking_cnt); 916 927 } else { 917 928 err = oplock_break_pending(brk_opinfo, req_op_level); ··· 1117 1116 if (ksmbd_conn_releasing(opinfo->conn)) 1118 1117 continue; 1119 1118 1120 - oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE); 1119 + oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); 1121 1120 opinfo_put(opinfo); 1122 1121 } 1123 1122 } ··· 1153 1152 1154 1153 if (ksmbd_conn_releasing(opinfo->conn)) 1155 1154 continue; 1156 - oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE); 1155 + oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL); 1157 1156 opinfo_put(opinfo); 1158 1157 } 1159 1158 } ··· 1253 1252 goto op_break_not_needed; 1254 1253 } 1255 1254 1256 - list_add(&work->interim_entry, &prev_opinfo->interim_list); 1257 - err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II); 1255 + err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II, work); 1258 1256 opinfo_put(prev_opinfo); 1259 1257 if (err == -ENOENT) 1260 1258 goto set_lev; ··· 1322 1322 } 1323 1323 1324 1324 brk_opinfo->open_trunc = is_trunc; 1325 - 
list_add(&work->interim_entry, &brk_opinfo->interim_list); 1326 - oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II); 1325 + oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II, work); 1327 1326 opinfo_put(brk_opinfo); 1328 1327 } 1329 1328 ··· 1385 1386 SMB2_LEASE_KEY_SIZE)) 1386 1387 goto next; 1387 1388 brk_op->open_trunc = is_trunc; 1388 - oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE); 1389 + oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE, NULL); 1389 1390 next: 1390 1391 opinfo_put(brk_op); 1391 1392 rcu_read_lock();
-1
fs/smb/server/oplock.h
··· 67 67 bool is_lease; 68 68 bool open_trunc; /* truncate on open */ 69 69 struct lease *o_lease; 70 - struct list_head interim_list; 71 70 struct list_head op_entry; 72 71 struct list_head lease_entry; 73 72 wait_queue_head_t oplock_q; /* Other server threads */
+2 -12
fs/smb/server/server.c
··· 270 270 271 271 ksmbd_conn_try_dequeue_request(work); 272 272 ksmbd_free_work_struct(work); 273 - /* 274 - * Checking waitqueue to dropping pending requests on 275 - * disconnection. waitqueue_active is safe because it 276 - * uses atomic operation for condition. 277 - */ 278 - atomic_inc(&conn->refcnt); 279 - if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q)) 280 - wake_up(&conn->r_count_q); 281 - 282 - if (atomic_dec_and_test(&conn->refcnt)) 283 - kfree(conn); 273 + ksmbd_conn_r_count_dec(conn); 284 274 } 285 275 286 276 /** ··· 300 310 conn->request_buf = NULL; 301 311 302 312 ksmbd_conn_enqueue_request(work); 303 - atomic_inc(&conn->r_count); 313 + ksmbd_conn_r_count_inc(conn); 304 314 /* update activity on connection */ 305 315 conn->last_active = jiffies; 306 316 INIT_WORK(&work->work, handle_ksmbd_work);
+4 -4
fs/smb/server/smb2pdu.c
··· 7458 7458 } 7459 7459 7460 7460 no_check_cl: 7461 + flock = smb_lock->fl; 7462 + list_del(&smb_lock->llist); 7463 + 7461 7464 if (smb_lock->zero_len) { 7462 7465 err = 0; 7463 7466 goto skip; 7464 7467 } 7465 - 7466 - flock = smb_lock->fl; 7467 - list_del(&smb_lock->llist); 7468 7468 retry: 7469 7469 rc = vfs_lock_file(filp, smb_lock->cmd, flock, NULL); 7470 7470 skip: 7471 - if (flags & SMB2_LOCKFLAG_UNLOCK) { 7471 + if (smb_lock->flags & SMB2_LOCKFLAG_UNLOCK) { 7472 7472 if (!rc) { 7473 7473 ksmbd_debug(SMB, "File unlocked\n"); 7474 7474 } else if (rc == -ENOENT) {
+36 -16
fs/smb/server/smbacl.c
··· 333 333 pace->e_perm = state->other.allow; 334 334 } 335 335 336 - int init_acl_state(struct posix_acl_state *state, int cnt) 336 + int init_acl_state(struct posix_acl_state *state, u16 cnt) 337 337 { 338 338 int alloc; 339 339 ··· 368 368 struct smb_fattr *fattr) 369 369 { 370 370 int i, ret; 371 - int num_aces = 0; 371 + u16 num_aces = 0; 372 372 unsigned int acl_size; 373 373 char *acl_base; 374 374 struct smb_ace **ppace; ··· 389 389 390 390 ksmbd_debug(SMB, "DACL revision %d size %d num aces %d\n", 391 391 le16_to_cpu(pdacl->revision), le16_to_cpu(pdacl->size), 392 - le32_to_cpu(pdacl->num_aces)); 392 + le16_to_cpu(pdacl->num_aces)); 393 393 394 394 acl_base = (char *)pdacl; 395 395 acl_size = sizeof(struct smb_acl); 396 396 397 - num_aces = le32_to_cpu(pdacl->num_aces); 397 + num_aces = le16_to_cpu(pdacl->num_aces); 398 398 if (num_aces <= 0) 399 399 return; 400 400 401 - if (num_aces > ULONG_MAX / sizeof(struct smb_ace *)) 401 + if (num_aces > (le16_to_cpu(pdacl->size) - sizeof(struct smb_acl)) / 402 + (offsetof(struct smb_ace, sid) + 403 + offsetof(struct smb_sid, sub_auth) + sizeof(__le16))) 402 404 return; 403 405 404 406 ret = init_acl_state(&acl_state, num_aces); ··· 434 432 offsetof(struct smb_sid, sub_auth); 435 433 436 434 if (end_of_acl - acl_base < acl_size || 435 + ppace[i]->sid.num_subauth == 0 || 437 436 ppace[i]->sid.num_subauth > SID_MAX_SUB_AUTHORITIES || 438 437 (end_of_acl - acl_base < 439 438 acl_size + sizeof(__le32) * ppace[i]->sid.num_subauth) || ··· 583 580 584 581 static void set_posix_acl_entries_dacl(struct mnt_idmap *idmap, 585 582 struct smb_ace *pndace, 586 - struct smb_fattr *fattr, u32 *num_aces, 583 + struct smb_fattr *fattr, u16 *num_aces, 587 584 u16 *size, u32 nt_aces_num) 588 585 { 589 586 struct posix_acl_entry *pace; ··· 704 701 struct smb_fattr *fattr) 705 702 { 706 703 struct smb_ace *ntace, *pndace; 707 - int nt_num_aces = le32_to_cpu(nt_dacl->num_aces), num_aces = 0; 704 + u16 nt_num_aces = 
le16_to_cpu(nt_dacl->num_aces), num_aces = 0; 708 705 unsigned short size = 0; 709 706 int i; 710 707 ··· 731 728 732 729 set_posix_acl_entries_dacl(idmap, pndace, fattr, 733 730 &num_aces, &size, nt_num_aces); 734 - pndacl->num_aces = cpu_to_le32(num_aces); 731 + pndacl->num_aces = cpu_to_le16(num_aces); 735 732 pndacl->size = cpu_to_le16(le16_to_cpu(pndacl->size) + size); 736 733 } 737 734 ··· 739 736 struct smb_acl *pndacl, struct smb_fattr *fattr) 740 737 { 741 738 struct smb_ace *pace, *pndace; 742 - u32 num_aces = 0; 739 + u16 num_aces = 0; 743 740 u16 size = 0, ace_size = 0; 744 741 uid_t uid; 745 742 const struct smb_sid *sid; ··· 795 792 fattr->cf_mode, 0007); 796 793 797 794 out: 798 - pndacl->num_aces = cpu_to_le32(num_aces); 795 + pndacl->num_aces = cpu_to_le16(num_aces); 799 796 pndacl->size = cpu_to_le16(le16_to_cpu(pndacl->size) + size); 800 797 } 801 798 ··· 809 806 pr_err("ACL too small to parse SID %p\n", psid); 810 807 return -EINVAL; 811 808 } 809 + 810 + if (!psid->num_subauth) 811 + return 0; 812 + 813 + if (psid->num_subauth > SID_MAX_SUB_AUTHORITIES || 814 + end_of_acl < (char *)psid + 8 + sizeof(__le32) * psid->num_subauth) 815 + return -EINVAL; 812 816 813 817 return 0; 814 818 } ··· 858 848 pntsd->type = cpu_to_le16(DACL_PRESENT); 859 849 860 850 if (pntsd->osidoffset) { 851 + if (le32_to_cpu(pntsd->osidoffset) < sizeof(struct smb_ntsd)) 852 + return -EINVAL; 853 + 861 854 rc = parse_sid(owner_sid_ptr, end_of_acl); 862 855 if (rc) { 863 856 pr_err("%s: Error %d parsing Owner SID\n", __func__, rc); ··· 876 863 } 877 864 878 865 if (pntsd->gsidoffset) { 866 + if (le32_to_cpu(pntsd->gsidoffset) < sizeof(struct smb_ntsd)) 867 + return -EINVAL; 868 + 879 869 rc = parse_sid(group_sid_ptr, end_of_acl); 880 870 if (rc) { 881 871 pr_err("%s: Error %d mapping Owner SID to gid\n", ··· 900 884 pntsd->type |= cpu_to_le16(DACL_PROTECTED); 901 885 902 886 if (dacloffset) { 887 + if (dacloffset < sizeof(struct smb_ntsd)) 888 + return -EINVAL; 889 + 903 
890 parse_dacl(idmap, dacl_ptr, end_of_acl, 904 891 owner_sid_ptr, group_sid_ptr, fattr); 905 892 } ··· 1025 1006 struct smb_sid owner_sid, group_sid; 1026 1007 struct dentry *parent = path->dentry->d_parent; 1027 1008 struct mnt_idmap *idmap = mnt_idmap(path->mnt); 1028 - int inherited_flags = 0, flags = 0, i, ace_cnt = 0, nt_size = 0, pdacl_size; 1029 - int rc = 0, num_aces, dacloffset, pntsd_type, pntsd_size, acl_len, aces_size; 1009 + int inherited_flags = 0, flags = 0, i, nt_size = 0, pdacl_size; 1010 + int rc = 0, dacloffset, pntsd_type, pntsd_size, acl_len, aces_size; 1011 + u16 num_aces, ace_cnt = 0; 1030 1012 char *aces_base; 1031 1013 bool is_dir = S_ISDIR(d_inode(path->dentry)->i_mode); 1032 1014 ··· 1043 1023 1044 1024 parent_pdacl = (struct smb_acl *)((char *)parent_pntsd + dacloffset); 1045 1025 acl_len = pntsd_size - dacloffset; 1046 - num_aces = le32_to_cpu(parent_pdacl->num_aces); 1026 + num_aces = le16_to_cpu(parent_pdacl->num_aces); 1047 1027 pntsd_type = le16_to_cpu(parent_pntsd->type); 1048 1028 pdacl_size = le16_to_cpu(parent_pdacl->size); 1049 1029 ··· 1203 1183 pdacl = (struct smb_acl *)((char *)pntsd + le32_to_cpu(pntsd->dacloffset)); 1204 1184 pdacl->revision = cpu_to_le16(2); 1205 1185 pdacl->size = cpu_to_le16(sizeof(struct smb_acl) + nt_size); 1206 - pdacl->num_aces = cpu_to_le32(ace_cnt); 1186 + pdacl->num_aces = cpu_to_le16(ace_cnt); 1207 1187 pace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl)); 1208 1188 memcpy(pace, aces_base, nt_size); 1209 1189 pntsd_size += sizeof(struct smb_acl) + nt_size; ··· 1284 1264 1285 1265 ace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl)); 1286 1266 aces_size = acl_size - sizeof(struct smb_acl); 1287 - for (i = 0; i < le32_to_cpu(pdacl->num_aces); i++) { 1267 + for (i = 0; i < le16_to_cpu(pdacl->num_aces); i++) { 1288 1268 if (offsetof(struct smb_ace, access_req) > aces_size) 1289 1269 break; 1290 1270 ace_size = le16_to_cpu(ace->size); ··· 1305 1285 1306 1286 ace = (struct 
smb_ace *)((char *)pdacl + sizeof(struct smb_acl)); 1307 1287 aces_size = acl_size - sizeof(struct smb_acl); 1308 - for (i = 0; i < le32_to_cpu(pdacl->num_aces); i++) { 1288 + for (i = 0; i < le16_to_cpu(pdacl->num_aces); i++) { 1309 1289 if (offsetof(struct smb_ace, access_req) > aces_size) 1310 1290 break; 1311 1291 ace_size = le16_to_cpu(ace->size);
+1 -1
fs/smb/server/smbacl.h
··· 86 86 int build_sec_desc(struct mnt_idmap *idmap, struct smb_ntsd *pntsd, 87 87 struct smb_ntsd *ppntsd, int ppntsd_size, int addition_info, 88 88 __u32 *secdesclen, struct smb_fattr *fattr); 89 - int init_acl_state(struct posix_acl_state *state, int cnt); 89 + int init_acl_state(struct posix_acl_state *state, u16 cnt); 90 90 void free_acl_state(struct posix_acl_state *state); 91 91 void posix_state_to_acl(struct posix_acl_state *state, 92 92 struct posix_acl_entry *pace);
+1
fs/smb/server/transport_ipc.c
··· 281 281 if (entry->type + 1 != type) { 282 282 pr_err("Waiting for IPC type %d, got %d. Ignore.\n", 283 283 entry->type + 1, type); 284 + continue; 284 285 } 285 286 286 287 entry->response = kvzalloc(sz, KSMBD_DEFAULT_GFP);
+10 -10
fs/splice.c
··· 331 331 int i; 332 332 333 333 /* Work out how much data we can actually add into the pipe */ 334 - used = pipe_occupancy(pipe->head, pipe->tail); 334 + used = pipe_buf_usage(pipe); 335 335 npages = max_t(ssize_t, pipe->max_usage - used, 0); 336 336 len = min_t(size_t, len, npages * PAGE_SIZE); 337 337 npages = DIV_ROUND_UP(len, PAGE_SIZE); ··· 527 527 return -ERESTARTSYS; 528 528 529 529 repeat: 530 - while (pipe_empty(pipe->head, pipe->tail)) { 530 + while (pipe_is_empty(pipe)) { 531 531 if (!pipe->writers) 532 532 return 0; 533 533 ··· 820 820 if (signal_pending(current)) 821 821 break; 822 822 823 - while (pipe_empty(pipe->head, pipe->tail)) { 823 + while (pipe_is_empty(pipe)) { 824 824 ret = 0; 825 825 if (!pipe->writers) 826 826 goto out; ··· 968 968 return 0; 969 969 970 970 /* Don't try to read more the pipe has space for. */ 971 - p_space = pipe->max_usage - pipe_occupancy(pipe->head, pipe->tail); 971 + p_space = pipe->max_usage - pipe_buf_usage(pipe); 972 972 len = min_t(size_t, len, p_space << PAGE_SHIFT); 973 973 974 974 if (unlikely(len > MAX_RW_COUNT)) ··· 1080 1080 more = sd->flags & SPLICE_F_MORE; 1081 1081 sd->flags |= SPLICE_F_MORE; 1082 1082 1083 - WARN_ON_ONCE(!pipe_empty(pipe->head, pipe->tail)); 1083 + WARN_ON_ONCE(!pipe_is_empty(pipe)); 1084 1084 1085 1085 while (len) { 1086 1086 size_t read_len; ··· 1268 1268 send_sig(SIGPIPE, current, 0); 1269 1269 return -EPIPE; 1270 1270 } 1271 - if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage)) 1271 + if (!pipe_is_full(pipe)) 1272 1272 return 0; 1273 1273 if (flags & SPLICE_F_NONBLOCK) 1274 1274 return -EAGAIN; ··· 1652 1652 * Check the pipe occupancy without the inode lock first. This function 1653 1653 * is speculative anyways, so missing one is ok. 
1654 1654 */ 1655 - if (!pipe_empty(pipe->head, pipe->tail)) 1655 + if (!pipe_is_empty(pipe)) 1656 1656 return 0; 1657 1657 1658 1658 ret = 0; 1659 1659 pipe_lock(pipe); 1660 1660 1661 - while (pipe_empty(pipe->head, pipe->tail)) { 1661 + while (pipe_is_empty(pipe)) { 1662 1662 if (signal_pending(current)) { 1663 1663 ret = -ERESTARTSYS; 1664 1664 break; ··· 1688 1688 * Check pipe occupancy without the inode lock first. This function 1689 1689 * is speculative anyways, so missing one is ok. 1690 1690 */ 1691 - if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage)) 1691 + if (!pipe_is_full(pipe)) 1692 1692 return 0; 1693 1693 1694 1694 ret = 0; 1695 1695 pipe_lock(pipe); 1696 1696 1697 - while (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) { 1697 + while (pipe_is_full(pipe)) { 1698 1698 if (!pipe->readers) { 1699 1699 send_sig(SIGPIPE, current, 0); 1700 1700 ret = -EPIPE;
+1 -2
fs/vboxsf/super.c
··· 21 21 22 22 #define VBOXSF_SUPER_MAGIC 0x786f4256 /* 'VBox' little endian */ 23 23 24 - static const unsigned char VBSF_MOUNT_SIGNATURE[4] = { '\000', '\377', '\376', 25 - '\375' }; 24 + static const unsigned char VBSF_MOUNT_SIGNATURE[4] __nonstring = "\000\377\376\375"; 26 25 27 26 static int follow_symlinks; 28 27 module_param(follow_symlinks, int, 0444);
+3 -5
fs/xfs/libxfs/xfs_alloc.c
··· 33 33 34 34 struct workqueue_struct *xfs_alloc_wq; 35 35 36 - #define XFS_ABSDIFF(a,b) (((a) <= (b)) ? ((b) - (a)) : ((a) - (b))) 37 - 38 36 #define XFSA_FIXUP_BNO_OK 1 39 37 #define XFSA_FIXUP_CNT_OK 2 40 38 ··· 408 410 if (newbno1 != NULLAGBLOCK && newbno2 != NULLAGBLOCK) { 409 411 if (newlen1 < newlen2 || 410 412 (newlen1 == newlen2 && 411 - XFS_ABSDIFF(newbno1, wantbno) > 412 - XFS_ABSDIFF(newbno2, wantbno))) 413 + abs_diff(newbno1, wantbno) > 414 + abs_diff(newbno2, wantbno))) 413 415 newbno1 = newbno2; 414 416 } else if (newbno2 != NULLAGBLOCK) 415 417 newbno1 = newbno2; ··· 425 427 } else 426 428 newbno1 = freeend - wantlen; 427 429 *newbnop = newbno1; 428 - return newbno1 == NULLAGBLOCK ? 0 : XFS_ABSDIFF(newbno1, wantbno); 430 + return newbno1 == NULLAGBLOCK ? 0 : abs_diff(newbno1, wantbno); 429 431 } 430 432 431 433 /*
+63 -119
fs/xfs/xfs_buf.c
··· 29 29 /* 30 30 * Locking orders 31 31 * 32 - * xfs_buf_ioacct_inc: 33 - * xfs_buf_ioacct_dec: 34 - * b_sema (caller holds) 35 - * b_lock 36 - * 37 32 * xfs_buf_stale: 38 33 * b_sema (caller holds) 39 34 * b_lock ··· 77 82 } 78 83 79 84 /* 80 - * Bump the I/O in flight count on the buftarg if we haven't yet done so for 81 - * this buffer. The count is incremented once per buffer (per hold cycle) 82 - * because the corresponding decrement is deferred to buffer release. Buffers 83 - * can undergo I/O multiple times in a hold-release cycle and per buffer I/O 84 - * tracking adds unnecessary overhead. This is used for sychronization purposes 85 - * with unmount (see xfs_buftarg_drain()), so all we really need is a count of 86 - * in-flight buffers. 87 - * 88 - * Buffers that are never released (e.g., superblock, iclog buffers) must set 89 - * the XBF_NO_IOACCT flag before I/O submission. Otherwise, the buftarg count 90 - * never reaches zero and unmount hangs indefinitely. 91 - */ 92 - static inline void 93 - xfs_buf_ioacct_inc( 94 - struct xfs_buf *bp) 95 - { 96 - if (bp->b_flags & XBF_NO_IOACCT) 97 - return; 98 - 99 - ASSERT(bp->b_flags & XBF_ASYNC); 100 - spin_lock(&bp->b_lock); 101 - if (!(bp->b_state & XFS_BSTATE_IN_FLIGHT)) { 102 - bp->b_state |= XFS_BSTATE_IN_FLIGHT; 103 - percpu_counter_inc(&bp->b_target->bt_io_count); 104 - } 105 - spin_unlock(&bp->b_lock); 106 - } 107 - 108 - /* 109 - * Clear the in-flight state on a buffer about to be released to the LRU or 110 - * freed and unaccount from the buftarg. 
111 - */ 112 - static inline void 113 - __xfs_buf_ioacct_dec( 114 - struct xfs_buf *bp) 115 - { 116 - lockdep_assert_held(&bp->b_lock); 117 - 118 - if (bp->b_state & XFS_BSTATE_IN_FLIGHT) { 119 - bp->b_state &= ~XFS_BSTATE_IN_FLIGHT; 120 - percpu_counter_dec(&bp->b_target->bt_io_count); 121 - } 122 - } 123 - 124 - /* 125 85 * When we mark a buffer stale, we remove the buffer from the LRU and clear the 126 86 * b_lru_ref count so that the buffer is freed immediately when the buffer 127 87 * reference count falls to zero. If the buffer is already on the LRU, we need ··· 99 149 */ 100 150 bp->b_flags &= ~_XBF_DELWRI_Q; 101 151 102 - /* 103 - * Once the buffer is marked stale and unlocked, a subsequent lookup 104 - * could reset b_flags. There is no guarantee that the buffer is 105 - * unaccounted (released to LRU) before that occurs. Drop in-flight 106 - * status now to preserve accounting consistency. 107 - */ 108 152 spin_lock(&bp->b_lock); 109 - __xfs_buf_ioacct_dec(bp); 110 - 111 153 atomic_set(&bp->b_lru_ref, 0); 112 154 if (!(bp->b_state & XFS_BSTATE_DISPOSE) && 113 155 (list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru))) ··· 736 794 737 795 int 738 796 _xfs_buf_read( 739 - struct xfs_buf *bp, 740 - xfs_buf_flags_t flags) 797 + struct xfs_buf *bp) 741 798 { 742 - ASSERT(!(flags & XBF_WRITE)); 743 799 ASSERT(bp->b_maps[0].bm_bn != XFS_BUF_DADDR_NULL); 744 800 745 801 bp->b_flags &= ~(XBF_WRITE | XBF_ASYNC | XBF_READ_AHEAD | XBF_DONE); 746 - bp->b_flags |= flags & (XBF_READ | XBF_ASYNC | XBF_READ_AHEAD); 747 - 802 + bp->b_flags |= XBF_READ; 748 803 xfs_buf_submit(bp); 749 - if (flags & XBF_ASYNC) 750 - return 0; 751 804 return xfs_buf_iowait(bp); 752 805 } 753 806 ··· 794 857 struct xfs_buf *bp; 795 858 int error; 796 859 860 + ASSERT(!(flags & (XBF_WRITE | XBF_ASYNC | XBF_READ_AHEAD))); 861 + 797 862 flags |= XBF_READ; 798 863 *bpp = NULL; 799 864 ··· 809 870 /* Initiate the buffer read and wait. 
*/ 810 871 XFS_STATS_INC(target->bt_mount, xb_get_read); 811 872 bp->b_ops = ops; 812 - error = _xfs_buf_read(bp, flags); 813 - 814 - /* Readahead iodone already dropped the buffer, so exit. */ 815 - if (flags & XBF_ASYNC) 816 - return 0; 873 + error = _xfs_buf_read(bp); 817 874 } else { 818 875 /* Buffer already read; all we need to do is check it. */ 819 876 error = xfs_buf_reverify(bp, ops); 820 - 821 - /* Readahead already finished; drop the buffer and exit. */ 822 - if (flags & XBF_ASYNC) { 823 - xfs_buf_relse(bp); 824 - return 0; 825 - } 826 877 827 878 /* We do not want read in the flags */ 828 879 bp->b_flags &= ~XBF_READ; ··· 865 936 int nmaps, 866 937 const struct xfs_buf_ops *ops) 867 938 { 939 + const xfs_buf_flags_t flags = XBF_READ | XBF_ASYNC | XBF_READ_AHEAD; 868 940 struct xfs_buf *bp; 869 941 870 942 /* ··· 875 945 if (xfs_buftarg_is_mem(target)) 876 946 return; 877 947 878 - xfs_buf_read_map(target, map, nmaps, 879 - XBF_TRYLOCK | XBF_ASYNC | XBF_READ_AHEAD, &bp, ops, 880 - __this_address); 948 + if (xfs_buf_get_map(target, map, nmaps, flags | XBF_TRYLOCK, &bp)) 949 + return; 950 + trace_xfs_buf_readahead(bp, 0, _RET_IP_); 951 + 952 + if (bp->b_flags & XBF_DONE) { 953 + xfs_buf_reverify(bp, ops); 954 + xfs_buf_relse(bp); 955 + return; 956 + } 957 + XFS_STATS_INC(target->bt_mount, xb_get_read); 958 + bp->b_ops = ops; 959 + bp->b_flags &= ~(XBF_WRITE | XBF_DONE); 960 + bp->b_flags |= flags; 961 + percpu_counter_inc(&target->bt_readahead_count); 962 + xfs_buf_submit(bp); 881 963 } 882 964 883 965 /* ··· 945 1003 struct xfs_buf *bp; 946 1004 DEFINE_SINGLE_BUF_MAP(map, XFS_BUF_DADDR_NULL, numblks); 947 1005 1006 + /* there are currently no valid flags for xfs_buf_get_uncached */ 1007 + ASSERT(flags == 0); 1008 + 948 1009 *bpp = NULL; 949 1010 950 - /* flags might contain irrelevant bits, pass only what we care about */ 951 - error = _xfs_buf_alloc(target, &map, 1, flags & XBF_NO_IOACCT, &bp); 1011 + error = _xfs_buf_alloc(target, &map, 1, flags, &bp); 
952 1012 if (error) 953 1013 return error; 954 1014 ··· 1004 1060 spin_unlock(&bp->b_lock); 1005 1061 return; 1006 1062 } 1007 - __xfs_buf_ioacct_dec(bp); 1008 1063 spin_unlock(&bp->b_lock); 1009 1064 xfs_buf_free(bp); 1010 1065 } ··· 1022 1079 spin_lock(&bp->b_lock); 1023 1080 ASSERT(bp->b_hold >= 1); 1024 1081 if (bp->b_hold > 1) { 1025 - /* 1026 - * Drop the in-flight state if the buffer is already on the LRU 1027 - * and it holds the only reference. This is racy because we 1028 - * haven't acquired the pag lock, but the use of _XBF_IN_FLIGHT 1029 - * ensures the decrement occurs only once per-buf. 1030 - */ 1031 - if (--bp->b_hold == 1 && !list_empty(&bp->b_lru)) 1032 - __xfs_buf_ioacct_dec(bp); 1082 + bp->b_hold--; 1033 1083 goto out_unlock; 1034 1084 } 1035 1085 1036 1086 /* we are asked to drop the last reference */ 1037 - __xfs_buf_ioacct_dec(bp); 1038 - if (!(bp->b_flags & XBF_STALE) && atomic_read(&bp->b_lru_ref)) { 1087 + if (atomic_read(&bp->b_lru_ref)) { 1039 1088 /* 1040 1089 * If the buffer is added to the LRU, keep the reference to the 1041 1090 * buffer for the LRU and clear the (now stale) dispose list ··· 1280 1345 resubmit: 1281 1346 xfs_buf_ioerror(bp, 0); 1282 1347 bp->b_flags |= (XBF_DONE | XBF_WRITE_FAIL); 1348 + reinit_completion(&bp->b_iowait); 1283 1349 xfs_buf_submit(bp); 1284 1350 return true; 1285 1351 out_stale: ··· 1291 1355 return false; 1292 1356 } 1293 1357 1294 - static void 1295 - xfs_buf_ioend( 1358 + /* returns false if the caller needs to resubmit the I/O, else true */ 1359 + static bool 1360 + __xfs_buf_ioend( 1296 1361 struct xfs_buf *bp) 1297 1362 { 1298 1363 trace_xfs_buf_iodone(bp, _RET_IP_); ··· 1306 1369 bp->b_ops->verify_read(bp); 1307 1370 if (!bp->b_error) 1308 1371 bp->b_flags |= XBF_DONE; 1372 + if (bp->b_flags & XBF_READ_AHEAD) 1373 + percpu_counter_dec(&bp->b_target->bt_readahead_count); 1309 1374 } else { 1310 1375 if (!bp->b_error) { 1311 1376 bp->b_flags &= ~XBF_WRITE_FAIL; ··· 1315 1376 } 1316 1377 1317 1378 
if (unlikely(bp->b_error) && xfs_buf_ioend_handle_error(bp)) 1318 - return; 1379 + return false; 1319 1380 1320 1381 /* clear the retry state */ 1321 1382 bp->b_last_error = 0; ··· 1336 1397 1337 1398 bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD | 1338 1399 _XBF_LOGRECOVERY); 1400 + return true; 1401 + } 1339 1402 1403 + static void 1404 + xfs_buf_ioend( 1405 + struct xfs_buf *bp) 1406 + { 1407 + if (!__xfs_buf_ioend(bp)) 1408 + return; 1340 1409 if (bp->b_flags & XBF_ASYNC) 1341 1410 xfs_buf_relse(bp); 1342 1411 else ··· 1358 1411 struct xfs_buf *bp = 1359 1412 container_of(work, struct xfs_buf, b_ioend_work); 1360 1413 1361 - xfs_buf_ioend(bp); 1362 - } 1363 - 1364 - static void 1365 - xfs_buf_ioend_async( 1366 - struct xfs_buf *bp) 1367 - { 1368 - INIT_WORK(&bp->b_ioend_work, xfs_buf_ioend_work); 1369 - queue_work(bp->b_mount->m_buf_workqueue, &bp->b_ioend_work); 1414 + if (__xfs_buf_ioend(bp)) 1415 + xfs_buf_relse(bp); 1370 1416 } 1371 1417 1372 1418 void ··· 1431 1491 XFS_TEST_ERROR(false, bp->b_mount, XFS_ERRTAG_BUF_IOERROR)) 1432 1492 xfs_buf_ioerror(bp, -EIO); 1433 1493 1434 - xfs_buf_ioend_async(bp); 1494 + if (bp->b_flags & XBF_ASYNC) { 1495 + INIT_WORK(&bp->b_ioend_work, xfs_buf_ioend_work); 1496 + queue_work(bp->b_mount->m_buf_workqueue, &bp->b_ioend_work); 1497 + } else { 1498 + complete(&bp->b_iowait); 1499 + } 1500 + 1435 1501 bio_put(bio); 1436 1502 } 1437 1503 ··· 1514 1568 { 1515 1569 ASSERT(!(bp->b_flags & XBF_ASYNC)); 1516 1570 1517 - trace_xfs_buf_iowait(bp, _RET_IP_); 1518 - wait_for_completion(&bp->b_iowait); 1519 - trace_xfs_buf_iowait_done(bp, _RET_IP_); 1571 + do { 1572 + trace_xfs_buf_iowait(bp, _RET_IP_); 1573 + wait_for_completion(&bp->b_iowait); 1574 + trace_xfs_buf_iowait_done(bp, _RET_IP_); 1575 + } while (!__xfs_buf_ioend(bp)); 1520 1576 1521 1577 return bp->b_error; 1522 1578 } ··· 1595 1647 * left over from previous use of the buffer (e.g. failed readahead). 
1596 1648 */ 1597 1649 bp->b_error = 0; 1598 - 1599 - if (bp->b_flags & XBF_ASYNC) 1600 - xfs_buf_ioacct_inc(bp); 1601 1650 1602 1651 if ((bp->b_flags & XBF_WRITE) && !xfs_buf_verify_write(bp)) { 1603 1652 xfs_force_shutdown(bp->b_mount, SHUTDOWN_CORRUPT_INCORE); ··· 1721 1776 struct xfs_buftarg *btp) 1722 1777 { 1723 1778 /* 1724 - * First wait on the buftarg I/O count for all in-flight buffers to be 1725 - * released. This is critical as new buffers do not make the LRU until 1726 - * they are released. 1779 + * First wait for all in-flight readahead buffers to be released. This is 1780 + * critical as new buffers do not make the LRU until they are released. 1727 1781 * 1728 1782 * Next, flush the buffer workqueue to ensure all completion processing 1729 1783 * has finished. Just waiting on buffer locks is not sufficient for ··· 1731 1787 * all reference counts have been dropped before we start walking the 1732 1788 * LRU list. 1733 1789 */ 1734 - while (percpu_counter_sum(&btp->bt_io_count)) 1790 + while (percpu_counter_sum(&btp->bt_readahead_count)) 1735 1791 delay(100); 1736 1792 flush_workqueue(btp->bt_mount->m_buf_workqueue); 1737 1793 } ··· 1848 1904 struct xfs_buftarg *btp) 1849 1905 { 1850 1906 shrinker_free(btp->bt_shrinker); 1851 - ASSERT(percpu_counter_sum(&btp->bt_io_count) == 0); 1852 - percpu_counter_destroy(&btp->bt_io_count); 1907 + ASSERT(percpu_counter_sum(&btp->bt_readahead_count) == 0); 1908 + percpu_counter_destroy(&btp->bt_readahead_count); 1853 1909 list_lru_destroy(&btp->bt_lru); 1854 1910 } 1855 1911 ··· 1903 1959 1904 1960 if (list_lru_init(&btp->bt_lru)) 1905 1961 return -ENOMEM; 1906 - if (percpu_counter_init(&btp->bt_io_count, 0, GFP_KERNEL)) 1962 + if (percpu_counter_init(&btp->bt_readahead_count, 0, GFP_KERNEL)) 1907 1963 goto out_destroy_lru; 1908 1964 1909 1965 btp->bt_shrinker = ··· 1917 1973 return 0; 1918 1974 1919 1975 out_destroy_io_count: 1920 - percpu_counter_destroy(&btp->bt_io_count); 1976 + 
percpu_counter_destroy(&btp->bt_readahead_count); 1921 1977 out_destroy_lru: 1922 1978 list_lru_destroy(&btp->bt_lru); 1923 1979 return -ENOMEM;
+2 -5
fs/xfs/xfs_buf.h
··· 27 27 #define XBF_READ (1u << 0) /* buffer intended for reading from device */ 28 28 #define XBF_WRITE (1u << 1) /* buffer intended for writing to device */ 29 29 #define XBF_READ_AHEAD (1u << 2) /* asynchronous read-ahead */ 30 - #define XBF_NO_IOACCT (1u << 3) /* bypass I/O accounting (non-LRU bufs) */ 31 30 #define XBF_ASYNC (1u << 4) /* initiator will not wait for completion */ 32 31 #define XBF_DONE (1u << 5) /* all pages in the buffer uptodate */ 33 32 #define XBF_STALE (1u << 6) /* buffer has been staled, do not find it */ ··· 57 58 { XBF_READ, "READ" }, \ 58 59 { XBF_WRITE, "WRITE" }, \ 59 60 { XBF_READ_AHEAD, "READ_AHEAD" }, \ 60 - { XBF_NO_IOACCT, "NO_IOACCT" }, \ 61 61 { XBF_ASYNC, "ASYNC" }, \ 62 62 { XBF_DONE, "DONE" }, \ 63 63 { XBF_STALE, "STALE" }, \ ··· 75 77 * Internal state flags. 76 78 */ 77 79 #define XFS_BSTATE_DISPOSE (1 << 0) /* buffer being discarded */ 78 - #define XFS_BSTATE_IN_FLIGHT (1 << 1) /* I/O in flight */ 79 80 80 81 struct xfs_buf_cache { 81 82 struct rhashtable bc_hash; ··· 113 116 struct shrinker *bt_shrinker; 114 117 struct list_lru bt_lru; 115 118 116 - struct percpu_counter bt_io_count; 119 + struct percpu_counter bt_readahead_count; 117 120 struct ratelimit_state bt_ioerror_rl; 118 121 119 122 /* Atomic write unit values */ ··· 288 291 int xfs_buf_read_uncached(struct xfs_buftarg *target, xfs_daddr_t daddr, 289 292 size_t numblks, xfs_buf_flags_t flags, struct xfs_buf **bpp, 290 293 const struct xfs_buf_ops *ops); 291 - int _xfs_buf_read(struct xfs_buf *bp, xfs_buf_flags_t flags); 294 + int _xfs_buf_read(struct xfs_buf *bp); 292 295 void xfs_buf_hold(struct xfs_buf *bp); 293 296 294 297 /* Releasing Buffers */
+1 -1
fs/xfs/xfs_buf_mem.c
··· 117 117 struct xfs_buftarg *btp) 118 118 { 119 119 ASSERT(xfs_buftarg_is_mem(btp)); 120 - ASSERT(percpu_counter_sum(&btp->bt_io_count) == 0); 120 + ASSERT(percpu_counter_sum(&btp->bt_readahead_count) == 0); 121 121 122 122 trace_xmbuf_free(btp); 123 123
-13
fs/xfs/xfs_file.c
··· 1451 1451 1452 1452 trace_xfs_read_fault(ip, order); 1453 1453 1454 - ret = filemap_fsnotify_fault(vmf); 1455 - if (unlikely(ret)) 1456 - return ret; 1457 1454 xfs_ilock(ip, XFS_MMAPLOCK_SHARED); 1458 1455 ret = xfs_dax_fault_locked(vmf, order, false); 1459 1456 xfs_iunlock(ip, XFS_MMAPLOCK_SHARED); ··· 1479 1482 vm_fault_t ret; 1480 1483 1481 1484 trace_xfs_write_fault(ip, order); 1482 - /* 1483 - * Usually we get here from ->page_mkwrite callback but in case of DAX 1484 - * we will get here also for ordinary write fault. Handle HSM 1485 - * notifications for that case. 1486 - */ 1487 - if (IS_DAX(inode)) { 1488 - ret = filemap_fsnotify_fault(vmf); 1489 - if (unlikely(ret)) 1490 - return ret; 1491 - } 1492 1485 1493 1486 sb_start_pagefault(inode->i_sb); 1494 1487 file_update_time(vmf->vma->vm_file);
+1 -1
fs/xfs/xfs_log_recover.c
··· 3380 3380 */ 3381 3381 xfs_buf_lock(bp); 3382 3382 xfs_buf_hold(bp); 3383 - error = _xfs_buf_read(bp, XBF_READ); 3383 + error = _xfs_buf_read(bp); 3384 3384 if (error) { 3385 3385 if (!xlog_is_shutdown(log)) { 3386 3386 xfs_buf_ioerror_alert(bp, __this_address);
+2 -5
fs/xfs/xfs_mount.c
··· 181 181 182 182 /* 183 183 * Allocate a (locked) buffer to hold the superblock. This will be kept 184 - * around at all times to optimize access to the superblock. Therefore, 185 - * set XBF_NO_IOACCT to make sure it doesn't hold the buftarg count 186 - * elevated. 184 + * around at all times to optimize access to the superblock. 187 185 */ 188 186 reread: 189 187 error = xfs_buf_read_uncached(mp->m_ddev_targp, XFS_SB_DADDR, 190 - BTOBB(sector_size), XBF_NO_IOACCT, &bp, 191 - buf_ops); 188 + BTOBB(sector_size), 0, &bp, buf_ops); 192 189 if (error) { 193 190 if (loud) 194 191 xfs_warn(mp, "SB validate failed with error %d.", error);
+1 -1
fs/xfs/xfs_rtalloc.c
··· 1407 1407 1408 1408 /* m_blkbb_log is not set up yet */ 1409 1409 error = xfs_buf_read_uncached(mp->m_rtdev_targp, XFS_RTSB_DADDR, 1410 - mp->m_sb.sb_blocksize >> BBSHIFT, XBF_NO_IOACCT, &bp, 1410 + mp->m_sb.sb_blocksize >> BBSHIFT, 0, &bp, 1411 1411 &xfs_rtsb_buf_ops); 1412 1412 if (error) { 1413 1413 xfs_warn(mp, "rt sb validate failed with error %d.", error);
+1
fs/xfs/xfs_trace.h
··· 593 593 DEFINE_BUF_FLAGS_EVENT(xfs_buf_find); 594 594 DEFINE_BUF_FLAGS_EVENT(xfs_buf_get); 595 595 DEFINE_BUF_FLAGS_EVENT(xfs_buf_read); 596 + DEFINE_BUF_FLAGS_EVENT(xfs_buf_readahead); 596 597 597 598 TRACE_EVENT(xfs_buf_ioerror, 598 599 TP_PROTO(struct xfs_buf *bp, int error, xfs_failaddr_t caller_ip),
+13 -5
include/linux/blk-mq.h
··· 28 28 typedef __u32 __bitwise req_flags_t; 29 29 30 30 /* Keep rqf_name[] in sync with the definitions below */ 31 - enum { 31 + enum rqf_flags { 32 32 /* drive already may have started this one */ 33 33 __RQF_STARTED, 34 34 /* request for flush sequence */ ··· 852 852 return rq->rq_flags & RQF_RESV; 853 853 } 854 854 855 - /* 855 + /** 856 + * blk_mq_add_to_batch() - add a request to the completion batch 857 + * @req: The request to add to the batch 858 + * @iob: The batch to add the request to 859 + * @is_error: Specify true if the request failed with an error 860 + * @complete: The completion handler for the request 861 + * 856 862 * Batched completions only work when there is no I/O error and no special 857 863 * ->end_io handler. 864 + * 865 + * Return: true when the request was added to the batch, otherwise false 858 866 */ 859 867 static inline bool blk_mq_add_to_batch(struct request *req, 860 - struct io_comp_batch *iob, int ioerror, 868 + struct io_comp_batch *iob, bool is_error, 861 869 void (*complete)(struct io_comp_batch *)) 862 870 { 863 871 /* ··· 873 865 * 1) No batch container 874 866 * 2) Has scheduler data attached 875 867 * 3) Not a passthrough request and end_io set 876 - * 4) Not a passthrough request and an ioerror 868 + * 4) Not a passthrough request and failed with an error 877 869 */ 878 870 if (!iob) 879 871 return false; ··· 882 874 if (!blk_rq_is_passthrough(req)) { 883 875 if (req->end_io) 884 876 return false; 885 - if (ioerror < 0) 877 + if (is_error) 886 878 return false; 887 879 } 888 880
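The `blk_mq_add_to_batch()` hunk above narrows the error argument from a raw `int ioerror` (where only negative values counted as failures) to an explicit `bool is_error`. A minimal userland sketch of the gating logic, with simplified stand-in types; the "has scheduler data attached" check is omitted here because it needs real block-layer state:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the kernel types; names and fields are illustrative only. */
struct io_comp_batch { int nr; };
struct request {
	bool passthrough;       /* stand-in for blk_rq_is_passthrough() */
	void (*end_io)(void);   /* private completion handler, if any */
};

/*
 * Mirrors the checks in the patched blk_mq_add_to_batch(): refuse to batch
 * without a batch container, and a non-passthrough request must have
 * neither a private ->end_io handler nor a failed status.
 */
static bool add_to_batch(struct request *req, struct io_comp_batch *iob,
			 bool is_error)
{
	if (!iob)
		return false;
	if (!req->passthrough) {
		if (req->end_io)
			return false;
		if (is_error)
			return false;
	}
	iob->nr++;              /* request joins the batch */
	return true;
}
```

The old `ioerror < 0` test hints at why the `bool` is safer: a caller passing a positive, non-errno status value would previously slip past the error check and be batched as if it had succeeded.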
+1 -1
include/linux/cleanup.h
··· 212 212 { return val; } 213 213 214 214 #define no_free_ptr(p) \ 215 - ((typeof(p)) __must_check_fn(__get_and_null(p, NULL))) 215 + ((typeof(p)) __must_check_fn((__force const volatile void *)__get_and_null(p, NULL))) 216 216 217 217 #define return_ptr(p) return no_free_ptr(p) 218 218
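For context on what `no_free_ptr()` is disarming: the kernel's scope-based cleanup machinery frees a pointer when the variable leaves scope, and `no_free_ptr()` "steals" the value (nulling the variable) so ownership can be returned to the caller; the added `(__force const volatile void *)` cast appears intended to let pointers carrying `const`, `volatile`, or sparse address-space qualifiers pass through `__must_check_fn()` without warnings. A userland analogue of the steal-and-disarm pattern, with illustrative names:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Userland analogue of the kernel's __free() cleanup + no_free_ptr():
 * a variable tagged with __attribute__((cleanup)) is freed automatically
 * when it goes out of scope, and stealing the pointer (returning it while
 * NULLing the variable) disarms that cleanup.
 */
static void free_charp(char **p)
{
	free(*p);               /* free(NULL) is a no-op, so stealing is safe */
}

#define AUTO_FREE __attribute__((cleanup(free_charp)))

/* Take ownership: return the pointer and clear the guarded variable. */
static char *steal_ptr(char **p)
{
	char *val = *p;
	*p = NULL;
	return val;
}

static char *make_greeting(void)
{
	AUTO_FREE char *buf = malloc(16);

	if (!buf)
		return NULL;
	strcpy(buf, "hello");
	/* Without steal_ptr(), the cleanup would free buf on return. */
	return steal_ptr(&buf);
}
```

The cleanup attribute is a GCC/Clang extension, which is what the kernel's `cleanup.h` builds on.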
+5
include/linux/compaction.h
··· 80 80 return 2UL << order; 81 81 } 82 82 83 + static inline int current_is_kcompactd(void) 84 + { 85 + return current->flags & PF_KCOMPACTD; 86 + } 87 + 83 88 #ifdef CONFIG_COMPACTION 84 89 85 90 extern unsigned int extfrag_for_order(struct zone *zone, unsigned int order);
+2 -8
include/linux/cred.h
··· 172 172 173 173 static inline const struct cred *override_creds(const struct cred *override_cred) 174 174 { 175 - const struct cred *old = current->cred; 176 - 177 - rcu_assign_pointer(current->cred, override_cred); 178 - return old; 175 + return rcu_replace_pointer(current->cred, override_cred, 1); 179 176 } 180 177 181 178 static inline const struct cred *revert_creds(const struct cred *revert_cred) 182 179 { 183 - const struct cred *override_cred = current->cred; 184 - 185 - rcu_assign_pointer(current->cred, revert_cred); 186 - return override_cred; 180 + return rcu_replace_pointer(current->cred, revert_cred, 1); 187 181 } 188 182 189 183 /**
+21
include/linux/fsnotify.h
··· 171 171 } 172 172 173 173 /* 174 + * fsnotify_mmap_perm - permission hook before mmap of file range 175 + */ 176 + static inline int fsnotify_mmap_perm(struct file *file, int prot, 177 + const loff_t off, size_t len) 178 + { 179 + /* 180 + * mmap() generates only pre-content events. 181 + */ 182 + if (!file || likely(!FMODE_FSNOTIFY_HSM(file->f_mode))) 183 + return 0; 184 + 185 + return fsnotify_pre_content(&file->f_path, &off, len); 186 + } 187 + 188 + /* 174 189 * fsnotify_truncate_perm - permission hook before file truncate 175 190 */ 176 191 static inline int fsnotify_truncate_perm(const struct path *path, loff_t length) ··· 234 219 235 220 static inline int fsnotify_file_area_perm(struct file *file, int perm_mask, 236 221 const loff_t *ppos, size_t count) 222 + { 223 + return 0; 224 + } 225 + 226 + static inline int fsnotify_mmap_perm(struct file *file, int prot, 227 + const loff_t off, size_t len) 237 228 { 238 229 return 0; 239 230 }
+5
include/linux/hugetlb.h
··· 682 682 683 683 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list); 684 684 int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn); 685 + void wait_for_freed_hugetlb_folios(void); 685 686 struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma, 686 687 unsigned long addr, bool cow_from_owner); 687 688 struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid, ··· 1067 1066 unsigned long end_pfn) 1068 1067 { 1069 1068 return 0; 1069 + } 1070 + 1071 + static inline void wait_for_freed_hugetlb_folios(void) 1072 + { 1070 1073 } 1071 1074 1072 1075 static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+1 -1
include/linux/log2.h
··· 41 41 * *not* considered a power of two. 42 42 * Return: true if @n is a power of 2, otherwise false. 43 43 */ 44 - static inline __attribute__((const)) 44 + static __always_inline __attribute__((const)) 45 45 bool is_power_of_2(unsigned long n) 46 46 { 47 47 return (n != 0 && ((n & (n - 1)) == 0));
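The `is_power_of_2()` change only strengthens `inline` to `__always_inline` (presumably so the result reliably constant-folds in callers that require a compile-time constant); the bit trick itself is unchanged and easy to verify in isolation:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Same bit trick as the kernel's is_power_of_2(): a power of two has
 * exactly one bit set, so clearing the lowest set bit (n & (n - 1))
 * yields zero. Zero is explicitly excluded, matching the kernel-doc.
 */
static inline bool is_power_of_2(unsigned long n)
{
	return (n != 0 && ((n & (n - 1)) == 0));
}
```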
-1
include/linux/mm.h
··· 3420 3420 extern vm_fault_t filemap_map_pages(struct vm_fault *vmf, 3421 3421 pgoff_t start_pgoff, pgoff_t end_pgoff); 3422 3422 extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf); 3423 - extern vm_fault_t filemap_fsnotify_fault(struct vm_fault *vmf); 3424 3423 3425 3424 extern unsigned long stack_guard_gap; 3426 3425 /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
+76 -22
include/linux/pipe_fs_i.h
··· 31 31 unsigned long private; 32 32 }; 33 33 34 + /* 35 + * Really only alpha needs 32-bit fields, but 36 + * might as well do it for 64-bit architectures 37 + * since that's what we've historically done, 38 + * and it makes 'head_tail' always be a simple 39 + * 'unsigned long'. 40 + */ 41 + #ifdef CONFIG_64BIT 42 + typedef unsigned int pipe_index_t; 43 + #else 44 + typedef unsigned short pipe_index_t; 45 + #endif 46 + 47 + /* 48 + * We have to declare this outside 'struct pipe_inode_info', 49 + * but then we can't use 'union pipe_index' for an anonymous 50 + * union, so we end up having to duplicate this declaration 51 + * below. Annoying. 52 + */ 53 + union pipe_index { 54 + unsigned long head_tail; 55 + struct { 56 + pipe_index_t head; 57 + pipe_index_t tail; 58 + }; 59 + }; 60 + 34 61 /** 35 62 * struct pipe_inode_info - a linux kernel pipe 36 63 * @mutex: mutex protecting the whole thing ··· 65 38 * @wr_wait: writer wait point in case of full pipe 66 39 * @head: The point of buffer production 67 40 * @tail: The point of buffer consumption 41 + * @head_tail: unsigned long union of @head and @tail 68 42 * @note_loss: The next read() should insert a data-lost message 69 43 * @max_usage: The maximum number of slots that may be used in the ring 70 44 * @ring_size: total number of buffers (should be a power of 2) ··· 86 58 struct pipe_inode_info { 87 59 struct mutex mutex; 88 60 wait_queue_head_t rd_wait, wr_wait; 89 - unsigned int head; 90 - unsigned int tail; 61 + 62 + /* This has to match the 'union pipe_index' above */ 63 + union { 64 + unsigned long head_tail; 65 + struct { 66 + pipe_index_t head; 67 + pipe_index_t tail; 68 + }; 69 + }; 70 + 91 71 unsigned int max_usage; 92 72 unsigned int ring_size; 93 73 unsigned int nr_accounted; ··· 177 141 } 178 142 179 143 /** 180 - * pipe_empty - Return true if the pipe is empty 181 - * @head: The pipe ring head pointer 182 - * @tail: The pipe ring tail pointer 183 - */ 184 - static inline bool pipe_empty(unsigned int 
head, unsigned int tail) 185 - { 186 - return head == tail; 187 - } 188 - 189 - /** 190 144 * pipe_occupancy - Return number of slots used in the pipe 191 145 * @head: The pipe ring head pointer 192 146 * @tail: The pipe ring tail pointer 193 147 */ 194 148 static inline unsigned int pipe_occupancy(unsigned int head, unsigned int tail) 195 149 { 196 - return head - tail; 150 + return (pipe_index_t)(head - tail); 151 + } 152 + 153 + /** 154 + * pipe_empty - Return true if the pipe is empty 155 + * @head: The pipe ring head pointer 156 + * @tail: The pipe ring tail pointer 157 + */ 158 + static inline bool pipe_empty(unsigned int head, unsigned int tail) 159 + { 160 + return !pipe_occupancy(head, tail); 197 161 } 198 162 199 163 /** ··· 206 170 unsigned int limit) 207 171 { 208 172 return pipe_occupancy(head, tail) >= limit; 173 + } 174 + 175 + /** 176 + * pipe_is_full - Return true if the pipe is full 177 + * @pipe: the pipe 178 + */ 179 + static inline bool pipe_is_full(const struct pipe_inode_info *pipe) 180 + { 181 + return pipe_full(pipe->head, pipe->tail, pipe->max_usage); 182 + } 183 + 184 + /** 185 + * pipe_is_empty - Return true if the pipe is empty 186 + * @pipe: the pipe 187 + */ 188 + static inline bool pipe_is_empty(const struct pipe_inode_info *pipe) 189 + { 190 + return pipe_empty(pipe->head, pipe->tail); 191 + } 192 + 193 + /** 194 + * pipe_buf_usage - Return how many pipe buffers are in use 195 + * @pipe: the pipe 196 + */ 197 + static inline unsigned int pipe_buf_usage(const struct pipe_inode_info *pipe) 198 + { 199 + return pipe_occupancy(pipe->head, pipe->tail); 209 200 } 210 201 211 202 /** ··· 306 243 if (!buf->ops->try_steal) 307 244 return false; 308 245 return buf->ops->try_steal(pipe, buf); 309 - } 310 - 311 - static inline void pipe_discard_from(struct pipe_inode_info *pipe, 312 - unsigned int old_head) 313 - { 314 - unsigned int mask = pipe->ring_size - 1; 315 - 316 - while (pipe->head > old_head) 317 - pipe_buf_release(pipe, 
&pipe->bufs[--pipe->head & mask]); 318 246 } 319 247 320 248 /* Differs from PIPE_BUF in that PIPE_SIZE is the length of the actual
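The key subtlety in the pipe rework above is that `pipe_occupancy()` now truncates the subtraction to `pipe_index_t`, so the head/tail difference stays correct even after the (16-bit on non-64-bit configs) indices wrap. A small standalone check of that wraparound arithmetic, using the 32-bit-config typedef:

```c
#include <assert.h>

/*
 * On non-64-bit configs the pipe head/tail indices are 16-bit
 * (pipe_index_t is unsigned short), packed into one unsigned long
 * head_tail word. Occupancy must be computed in the index type's own
 * width so the subtraction wraps correctly once head overflows while
 * tail has not yet caught up.
 */
typedef unsigned short pipe_index_t;

static inline unsigned int pipe_occupancy(unsigned int head, unsigned int tail)
{
	return (pipe_index_t)(head - tail);
}
```

Without the cast, a wrapped head (e.g. head 2, tail 65534, i.e. 4 slots in use) would produce a huge bogus occupancy from the full-width unsigned subtraction.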
+3
include/linux/platform_profile.h
··· 33 33 * @probe: Callback to setup choices available to the new class device. These 34 34 * choices will only be enforced when setting a new profile, not when 35 35 * getting the current one. 36 + * @hidden_choices: Callback to setup choices that are not visible to the user 37 + * but can be set by the driver. 36 38 * @profile_get: Callback that will be called when showing the current platform 37 39 * profile in sysfs. 38 40 * @profile_set: Callback that will be called when storing a new platform ··· 42 40 */ 43 41 struct platform_profile_ops { 44 42 int (*probe)(void *drvdata, unsigned long *choices); 43 + int (*hidden_choices)(void *drvdata, unsigned long *choices); 45 44 int (*profile_get)(struct device *dev, enum platform_profile_option *profile); 46 45 int (*profile_set)(struct device *dev, enum platform_profile_option profile); 47 46 };
+1 -1
include/linux/sched.h
··· 1701 1701 #define PF_USED_MATH 0x00002000 /* If unset the fpu must be initialized before use */ 1702 1702 #define PF_USER_WORKER 0x00004000 /* Kernel thread cloned from userspace thread */ 1703 1703 #define PF_NOFREEZE 0x00008000 /* This thread should not be frozen */ 1704 - #define PF__HOLE__00010000 0x00010000 1704 + #define PF_KCOMPACTD 0x00010000 /* I am kcompactd */ 1705 1705 #define PF_KSWAPD 0x00020000 /* I am kswapd */ 1706 1706 #define PF_MEMALLOC_NOFS 0x00040000 /* All allocations inherit GFP_NOFS. See memalloc_nfs_save() */ 1707 1707 #define PF_MEMALLOC_NOIO 0x00080000 /* All allocations inherit GFP_NOIO. See memalloc_noio_save() */
+38 -70
include/net/bluetooth/hci_core.h
··· 804 804 extern struct list_head hci_dev_list; 805 805 extern struct list_head hci_cb_list; 806 806 extern rwlock_t hci_dev_list_lock; 807 + extern struct mutex hci_cb_list_lock; 807 808 808 809 #define hci_dev_set_flag(hdev, nr) set_bit((nr), (hdev)->dev_flags) 809 810 #define hci_dev_clear_flag(hdev, nr) clear_bit((nr), (hdev)->dev_flags) ··· 2011 2010 2012 2011 char *name; 2013 2012 2014 - bool (*match) (struct hci_conn *conn); 2015 2013 void (*connect_cfm) (struct hci_conn *conn, __u8 status); 2016 2014 void (*disconn_cfm) (struct hci_conn *conn, __u8 status); 2017 2015 void (*security_cfm) (struct hci_conn *conn, __u8 status, 2018 - __u8 encrypt); 2016 + __u8 encrypt); 2019 2017 void (*key_change_cfm) (struct hci_conn *conn, __u8 status); 2020 2018 void (*role_switch_cfm) (struct hci_conn *conn, __u8 status, __u8 role); 2021 2019 }; 2022 2020 2023 - static inline void hci_cb_lookup(struct hci_conn *conn, struct list_head *list) 2024 - { 2025 - struct hci_cb *cb, *cpy; 2026 - 2027 - rcu_read_lock(); 2028 - list_for_each_entry_rcu(cb, &hci_cb_list, list) { 2029 - if (cb->match && cb->match(conn)) { 2030 - cpy = kmalloc(sizeof(*cpy), GFP_ATOMIC); 2031 - if (!cpy) 2032 - break; 2033 - 2034 - *cpy = *cb; 2035 - INIT_LIST_HEAD(&cpy->list); 2036 - list_add_rcu(&cpy->list, list); 2037 - } 2038 - } 2039 - rcu_read_unlock(); 2040 - } 2041 - 2042 2021 static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status) 2043 2022 { 2044 - struct list_head list; 2045 - struct hci_cb *cb, *tmp; 2023 + struct hci_cb *cb; 2046 2024 2047 - INIT_LIST_HEAD(&list); 2048 - hci_cb_lookup(conn, &list); 2049 - 2050 - list_for_each_entry_safe(cb, tmp, &list, list) { 2025 + mutex_lock(&hci_cb_list_lock); 2026 + list_for_each_entry(cb, &hci_cb_list, list) { 2051 2027 if (cb->connect_cfm) 2052 2028 cb->connect_cfm(conn, status); 2053 - kfree(cb); 2054 2029 } 2030 + mutex_unlock(&hci_cb_list_lock); 2055 2031 2056 2032 if (conn->connect_cfm_cb) 2057 2033 conn->connect_cfm_cb(conn, 
status); ··· 2036 2058 2037 2059 static inline void hci_disconn_cfm(struct hci_conn *conn, __u8 reason) 2038 2060 { 2039 - struct list_head list; 2040 - struct hci_cb *cb, *tmp; 2061 + struct hci_cb *cb; 2041 2062 2042 - INIT_LIST_HEAD(&list); 2043 - hci_cb_lookup(conn, &list); 2044 - 2045 - list_for_each_entry_safe(cb, tmp, &list, list) { 2063 + mutex_lock(&hci_cb_list_lock); 2064 + list_for_each_entry(cb, &hci_cb_list, list) { 2046 2065 if (cb->disconn_cfm) 2047 2066 cb->disconn_cfm(conn, reason); 2048 - kfree(cb); 2049 2067 } 2068 + mutex_unlock(&hci_cb_list_lock); 2050 2069 2051 2070 if (conn->disconn_cfm_cb) 2052 2071 conn->disconn_cfm_cb(conn, reason); 2053 2072 } 2054 2073 2055 - static inline void hci_security_cfm(struct hci_conn *conn, __u8 status, 2056 - __u8 encrypt) 2057 - { 2058 - struct list_head list; 2059 - struct hci_cb *cb, *tmp; 2060 - 2061 - INIT_LIST_HEAD(&list); 2062 - hci_cb_lookup(conn, &list); 2063 - 2064 - list_for_each_entry_safe(cb, tmp, &list, list) { 2065 - if (cb->security_cfm) 2066 - cb->security_cfm(conn, status, encrypt); 2067 - kfree(cb); 2068 - } 2069 - 2070 - if (conn->security_cfm_cb) 2071 - conn->security_cfm_cb(conn, status); 2072 - } 2073 - 2074 2074 static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status) 2075 2075 { 2076 + struct hci_cb *cb; 2076 2077 __u8 encrypt; 2077 2078 2078 2079 if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags)) ··· 2059 2102 2060 2103 encrypt = test_bit(HCI_CONN_ENCRYPT, &conn->flags) ? 
0x01 : 0x00; 2061 2104 2062 - hci_security_cfm(conn, status, encrypt); 2105 + mutex_lock(&hci_cb_list_lock); 2106 + list_for_each_entry(cb, &hci_cb_list, list) { 2107 + if (cb->security_cfm) 2108 + cb->security_cfm(conn, status, encrypt); 2109 + } 2110 + mutex_unlock(&hci_cb_list_lock); 2111 + 2112 + if (conn->security_cfm_cb) 2113 + conn->security_cfm_cb(conn, status); 2063 2114 } 2064 2115 2065 2116 static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status) 2066 2117 { 2118 + struct hci_cb *cb; 2067 2119 __u8 encrypt; 2068 2120 2069 2121 if (conn->state == BT_CONFIG) { ··· 2099 2133 conn->sec_level = conn->pending_sec_level; 2100 2134 } 2101 2135 2102 - hci_security_cfm(conn, status, encrypt); 2136 + mutex_lock(&hci_cb_list_lock); 2137 + list_for_each_entry(cb, &hci_cb_list, list) { 2138 + if (cb->security_cfm) 2139 + cb->security_cfm(conn, status, encrypt); 2140 + } 2141 + mutex_unlock(&hci_cb_list_lock); 2142 + 2143 + if (conn->security_cfm_cb) 2144 + conn->security_cfm_cb(conn, status); 2103 2145 } 2104 2146 2105 2147 static inline void hci_key_change_cfm(struct hci_conn *conn, __u8 status) 2106 2148 { 2107 - struct list_head list; 2108 - struct hci_cb *cb, *tmp; 2149 + struct hci_cb *cb; 2109 2150 2110 - INIT_LIST_HEAD(&list); 2111 - hci_cb_lookup(conn, &list); 2112 - 2113 - list_for_each_entry_safe(cb, tmp, &list, list) { 2151 + mutex_lock(&hci_cb_list_lock); 2152 + list_for_each_entry(cb, &hci_cb_list, list) { 2114 2153 if (cb->key_change_cfm) 2115 2154 cb->key_change_cfm(conn, status); 2116 - kfree(cb); 2117 2155 } 2156 + mutex_unlock(&hci_cb_list_lock); 2118 2157 } 2119 2158 2120 2159 static inline void hci_role_switch_cfm(struct hci_conn *conn, __u8 status, 2121 2160 __u8 role) 2122 2161 { 2123 - struct list_head list; 2124 - struct hci_cb *cb, *tmp; 2162 + struct hci_cb *cb; 2125 2163 2126 - INIT_LIST_HEAD(&list); 2127 - hci_cb_lookup(conn, &list); 2128 - 2129 - list_for_each_entry_safe(cb, tmp, &list, list) { 2164 + 
mutex_lock(&hci_cb_list_lock); 2165 + list_for_each_entry(cb, &hci_cb_list, list) { 2130 2166 if (cb->role_switch_cfm) 2131 2167 cb->role_switch_cfm(conn, status, role); 2132 - kfree(cb); 2133 2168 } 2169 + mutex_unlock(&hci_cb_list_lock); 2134 2170 } 2135 2171 2136 2172 static inline bool hci_bdaddr_is_rpa(bdaddr_t *bdaddr, u8 addr_type)
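The hci_core.h hunks replace the per-event RCU list copy (whose `GFP_ATOMIC` allocations could silently drop callbacks on failure) with a plain list walked under `hci_cb_list_lock`. A toy sketch of that pattern, using pthreads and illustrative names:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/*
 * Analogue of the mutex-protected callback list the patch restores:
 * registration and notification both take one global lock, so the
 * iteration needs no per-event copies and can never fail on allocation.
 */
struct cb { void (*fn)(int event); struct cb *next; };

static struct cb *cb_list;
static pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER;

static void register_cb(struct cb *cb)
{
	pthread_mutex_lock(&cb_lock);
	cb->next = cb_list;
	cb_list = cb;
	pthread_mutex_unlock(&cb_lock);
}

static void notify_all(int event)
{
	pthread_mutex_lock(&cb_lock);
	for (struct cb *c = cb_list; c; c = c->next)
		c->fn(event);
	pthread_mutex_unlock(&cb_lock);
}

/* Simple counting callback used to exercise the list. */
static int total;
static void count_cb(int event) { total += event; }
```

The trade-off versus the RCU scheme is that all events now serialize on one mutex; the benefit is that callback delivery can no longer fail silently when an atomic allocation would have failed.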
+3 -1
include/net/netfilter/nf_tables.h
··· 1891 1891 void __init nft_chain_route_init(void); 1892 1892 void nft_chain_route_fini(void); 1893 1893 1894 - void nf_tables_trans_destroy_flush_work(void); 1894 + void nf_tables_trans_destroy_flush_work(struct net *net); 1895 1895 1896 1896 int nf_msecs_to_jiffies64(const struct nlattr *nla, u64 *result); 1897 1897 __be64 nf_jiffies64_to_msecs(u64 input); ··· 1905 1905 struct nftables_pernet { 1906 1906 struct list_head tables; 1907 1907 struct list_head commit_list; 1908 + struct list_head destroy_list; 1908 1909 struct list_head commit_set_list; 1909 1910 struct list_head binding_list; 1910 1911 struct list_head module_list; ··· 1916 1915 unsigned int base_seq; 1917 1916 unsigned int gc_seq; 1918 1917 u8 validate_state; 1918 + struct work_struct destroy_work; 1919 1919 }; 1920 1920 1921 1921 extern unsigned int nf_tables_net_id;
+4 -1
include/sound/soc.h
··· 1261 1261 1262 1262 /* mixer control */ 1263 1263 struct soc_mixer_control { 1264 - int min, max, platform_max; 1264 + /* Minimum and maximum specified as written to the hardware */ 1265 + int min, max; 1266 + /* Limited maximum value specified as presented through the control */ 1267 + int platform_max; 1265 1268 int reg, rreg; 1266 1269 unsigned int shift, rshift; 1267 1270 unsigned int sign_bit;
+1 -1
init/Kconfig
··· 1968 1968 depends on !MODVERSIONS || GENDWARFKSYMS 1969 1969 depends on !GCC_PLUGIN_RANDSTRUCT 1970 1970 depends on !RANDSTRUCT 1971 - depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE 1971 + depends on !DEBUG_INFO_BTF || (PAHOLE_HAS_LANG_EXCLUDE && !LTO) 1972 1972 depends on !CFI_CLANG || HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC 1973 1973 select CFI_ICALL_NORMALIZE_INTEGERS if CFI_CLANG 1974 1974 depends on !CALL_PADDING || RUSTC_VERSION >= 108100
+3 -4
io_uring/rw.c
··· 560 560 if (kiocb->ki_flags & IOCB_WRITE) 561 561 io_req_end_write(req); 562 562 if (unlikely(res != req->cqe.res)) { 563 - if (res == -EAGAIN && io_rw_should_reissue(req)) { 563 + if (res == -EAGAIN && io_rw_should_reissue(req)) 564 564 req->flags |= REQ_F_REISSUE | REQ_F_BL_NO_RECYCLE; 565 - return; 566 - } 567 - req->cqe.res = res; 565 + else 566 + req->cqe.res = res; 568 567 } 569 568 570 569 /* order with io_iopoll_complete() checking ->iopoll_completed */
+28 -4
kernel/events/core.c
··· 11830 11830 static struct lock_class_key cpuctx_mutex; 11831 11831 static struct lock_class_key cpuctx_lock; 11832 11832 11833 + static bool idr_cmpxchg(struct idr *idr, unsigned long id, void *old, void *new) 11834 + { 11835 + void *tmp, *val = idr_find(idr, id); 11836 + 11837 + if (val != old) 11838 + return false; 11839 + 11840 + tmp = idr_replace(idr, new, id); 11841 + if (IS_ERR(tmp)) 11842 + return false; 11843 + 11844 + WARN_ON_ONCE(tmp != val); 11845 + return true; 11846 + } 11847 + 11833 11848 int perf_pmu_register(struct pmu *pmu, const char *name, int type) 11834 11849 { 11835 11850 int cpu, ret, max = PERF_TYPE_MAX; ··· 11871 11856 if (type >= 0) 11872 11857 max = type; 11873 11858 11874 - ret = idr_alloc(&pmu_idr, pmu, max, 0, GFP_KERNEL); 11859 + ret = idr_alloc(&pmu_idr, NULL, max, 0, GFP_KERNEL); 11875 11860 if (ret < 0) 11876 11861 goto free_pdc; 11877 11862 ··· 11879 11864 11880 11865 type = ret; 11881 11866 pmu->type = type; 11867 + atomic_set(&pmu->exclusive_cnt, 0); 11882 11868 11883 11869 if (pmu_bus_running && !pmu->dev) { 11884 11870 ret = pmu_dev_alloc(pmu); ··· 11928 11912 if (!pmu->event_idx) 11929 11913 pmu->event_idx = perf_event_idx_default; 11930 11914 11915 + /* 11916 + * Now that the PMU is complete, make it visible to perf_try_init_event(). 
11917 + */ 11918 + if (!idr_cmpxchg(&pmu_idr, pmu->type, NULL, pmu)) 11919 + goto free_context; 11931 11920 list_add_rcu(&pmu->entry, &pmus); 11932 - atomic_set(&pmu->exclusive_cnt, 0); 11921 + 11933 11922 ret = 0; 11934 11923 unlock: 11935 11924 mutex_unlock(&pmus_lock); 11936 11925 11937 11926 return ret; 11927 + 11928 + free_context: 11929 + free_percpu(pmu->cpu_pmu_context); 11938 11930 11939 11931 free_dev: 11940 11932 if (pmu->dev && pmu->dev != PMU_NULL_DEV) { ··· 11963 11939 { 11964 11940 mutex_lock(&pmus_lock); 11965 11941 list_del_rcu(&pmu->entry); 11942 + idr_remove(&pmu_idr, pmu->type); 11943 + mutex_unlock(&pmus_lock); 11966 11944 11967 11945 /* 11968 11946 * We dereference the pmu list under both SRCU and regular RCU, so ··· 11974 11948 synchronize_rcu(); 11975 11949 11976 11950 free_percpu(pmu->pmu_disable_count); 11977 - idr_remove(&pmu_idr, pmu->type); 11978 11951 if (pmu_bus_running && pmu->dev && pmu->dev != PMU_NULL_DEV) { 11979 11952 if (pmu->nr_addr_filters) 11980 11953 device_remove_file(pmu->dev, &dev_attr_nr_addr_filters); ··· 11981 11956 put_device(pmu->dev); 11982 11957 } 11983 11958 free_pmu_context(pmu); 11984 - mutex_unlock(&pmus_lock); 11985 11959 } 11986 11960 EXPORT_SYMBOL_GPL(perf_pmu_unregister); 11987 11961
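The new `idr_cmpxchg()` helper supports a publish-after-init protocol: `perf_pmu_register()` now allocates the type ID with a NULL placeholder and installs the pmu pointer only once setup is complete, so a concurrent lookup never observes a half-initialized pmu. A simplified userland sketch of that protocol; a plain array stands in for the kernel's IDR, and nothing here is atomic (the kernel code relies on `pmus_lock` for serialization):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_IDS 8
static void *registry[MAX_IDS];
static bool reserved[MAX_IDS];

/* Reserve an ID with a NULL placeholder: allocated but not yet published. */
static int reserve_id(void)
{
	for (int i = 0; i < MAX_IDS; i++) {
		if (!reserved[i]) {
			reserved[i] = true;
			registry[i] = NULL;
			return i;
		}
	}
	return -1;
}

/* Analogue of the patch's idr_cmpxchg(): replace only if still `old`. */
static bool registry_cmpxchg(int id, void *old, void *new)
{
	if (registry[id] != old)
		return false;
	registry[id] = new;
	return true;
}

/* Lookup sees either NULL (not ready) or a fully initialized object. */
static void *registry_find(int id)
{
	return reserved[id] ? registry[id] : NULL;
}
```

The same placeholder also explains the reordered unregister path in the hunk: the ID is removed from the registry before the object is torn down, closing the window where a lookup could return a dying pmu.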
+2 -2
kernel/locking/rtmutex_common.h
··· 59 59 }; 60 60 61 61 /** 62 - * rt_wake_q_head - Wrapper around regular wake_q_head to support 63 - * "sleeping" spinlocks on RT 62 + * struct rt_wake_q_head - Wrapper around regular wake_q_head to support 63 + * "sleeping" spinlocks on RT 64 64 * @head: The regular wake_q_head for sleeping lock variants 65 65 * @rtlock_task: Task pointer for RT lock (spin/rwlock) wakeups 66 66 */
+9 -4
kernel/locking/semaphore.c
··· 29 29 #include <linux/export.h> 30 30 #include <linux/sched.h> 31 31 #include <linux/sched/debug.h> 32 + #include <linux/sched/wake_q.h> 32 33 #include <linux/semaphore.h> 33 34 #include <linux/spinlock.h> 34 35 #include <linux/ftrace.h> ··· 39 38 static noinline int __down_interruptible(struct semaphore *sem); 40 39 static noinline int __down_killable(struct semaphore *sem); 41 40 static noinline int __down_timeout(struct semaphore *sem, long timeout); 42 - static noinline void __up(struct semaphore *sem); 41 + static noinline void __up(struct semaphore *sem, struct wake_q_head *wake_q); 43 42 44 43 /** 45 44 * down - acquire the semaphore ··· 184 183 void __sched up(struct semaphore *sem) 185 184 { 186 185 unsigned long flags; 186 + DEFINE_WAKE_Q(wake_q); 187 187 188 188 raw_spin_lock_irqsave(&sem->lock, flags); 189 189 if (likely(list_empty(&sem->wait_list))) 190 190 sem->count++; 191 191 else 192 - __up(sem); 192 + __up(sem, &wake_q); 193 193 raw_spin_unlock_irqrestore(&sem->lock, flags); 194 + if (!wake_q_empty(&wake_q)) 195 + wake_up_q(&wake_q); 194 196 } 195 197 EXPORT_SYMBOL(up); 196 198 ··· 273 269 return __down_common(sem, TASK_UNINTERRUPTIBLE, timeout); 274 270 } 275 271 276 - static noinline void __sched __up(struct semaphore *sem) 272 + static noinline void __sched __up(struct semaphore *sem, 273 + struct wake_q_head *wake_q) 277 274 { 278 275 struct semaphore_waiter *waiter = list_first_entry(&sem->wait_list, 279 276 struct semaphore_waiter, list); 280 277 list_del(&waiter->list); 281 278 waiter->up = true; 282 - wake_up_process(waiter->task); 279 + wake_q_add(wake_q, waiter->task); 283 280 }
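The semaphore change defers wakeups: `__up()` now only queues the waiter on a `wake_q_head`, and `up()` calls `wake_up_q()` after dropping `sem->lock`, so a woken task that runs immediately never contends on the raw spinlock its waker still holds. A toy single-threaded sketch of the collect-then-wake shape (illustrative types; no real scheduling here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Collect tasks to wake on a local queue while the lock is held, then
 * perform all wakeups after the lock is dropped. The `woken` flag stands
 * in for a real wake_up_process() call.
 */
struct task { int id; bool woken; struct task *next; };

struct wake_q { struct task *head; };

static void wake_q_add(struct wake_q *q, struct task *t)
{
	t->next = q->head;
	q->head = t;
}

static void wake_up_q(struct wake_q *q)
{
	for (struct task *t = q->head; t; t = t->next)
		t->woken = true;
	q->head = NULL;
}
```

In the patched `up()` this is exactly the shape used: `wake_q_add()` inside the `raw_spin_lock_irqsave()` region, `wake_up_q()` after `raw_spin_unlock_irqrestore()`.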
+1 -1
kernel/pid_namespace.c
··· 107 107 goto out_free_idr; 108 108 ns->ns.ops = &pidns_operations; 109 109 110 - ns->pid_max = parent_pid_ns->pid_max; 110 + ns->pid_max = PID_MAX_LIMIT; 111 111 err = register_pidns_sysctls(ns); 112 112 if (err) 113 113 goto out_free_inum;
+4 -4
kernel/sched/cputime.c
··· 9 9 10 10 #ifdef CONFIG_IRQ_TIME_ACCOUNTING 11 11 12 - DEFINE_STATIC_KEY_FALSE(sched_clock_irqtime); 13 - 14 12 /* 15 13 * There are no locks covering percpu hardirq/softirq time. 16 14 * They are only modified in vtime_account, on corresponding CPU ··· 22 24 */ 23 25 DEFINE_PER_CPU(struct irqtime, cpu_irqtime); 24 26 27 + int sched_clock_irqtime; 28 + 25 29 void enable_sched_clock_irqtime(void) 26 30 { 27 - static_branch_enable(&sched_clock_irqtime); 31 + sched_clock_irqtime = 1; 28 32 } 29 33 30 34 void disable_sched_clock_irqtime(void) 31 35 { 32 - static_branch_disable(&sched_clock_irqtime); 36 + sched_clock_irqtime = 0; 33 37 } 34 38 35 39 static void irqtime_account_delta(struct irqtime *irqtime, u64 delta,
+1 -1
kernel/sched/deadline.c
··· 3189 3189 * value smaller than the currently allocated bandwidth in 3190 3190 * any of the root_domains. 3191 3191 */ 3192 - for_each_possible_cpu(cpu) { 3192 + for_each_online_cpu(cpu) { 3193 3193 rcu_read_lock_sched(); 3194 3194 3195 3195 if (dl_bw_visited(cpu, gen))
+3
kernel/sched/ext.c
··· 6422 6422 __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, 6423 6423 u64 wake_flags, bool *is_idle) 6424 6424 { 6425 + if (!ops_cpu_valid(prev_cpu, NULL)) 6426 + goto prev_cpu; 6427 + 6425 6428 if (!check_builtin_idle_enabled()) 6426 6429 goto prev_cpu; 6427 6430
+4 -2
kernel/sched/fair.c
··· 4045 4045 { 4046 4046 struct cfs_rq *prev_cfs_rq; 4047 4047 struct list_head *prev; 4048 + struct rq *rq = rq_of(cfs_rq); 4048 4049 4049 4050 if (cfs_rq->on_list) { 4050 4051 prev = cfs_rq->leaf_cfs_rq_list.prev; 4051 4052 } else { 4052 - struct rq *rq = rq_of(cfs_rq); 4053 - 4054 4053 prev = rq->tmp_alone_branch; 4055 4054 } 4055 + 4056 + if (prev == &rq->leaf_cfs_rq_list) 4057 + return false; 4056 4058 4057 4059 prev_cfs_rq = container_of(prev, struct cfs_rq, leaf_cfs_rq_list); 4058 4060
+2 -2
kernel/sched/sched.h
··· 3259 3259 }; 3260 3260 3261 3261 DECLARE_PER_CPU(struct irqtime, cpu_irqtime); 3262 - DECLARE_STATIC_KEY_FALSE(sched_clock_irqtime); 3262 + extern int sched_clock_irqtime; 3263 3263 3264 3264 static inline int irqtime_enabled(void) 3265 3265 { 3266 - return static_branch_likely(&sched_clock_irqtime); 3266 + return sched_clock_irqtime; 3267 3267 } 3268 3268 3269 3269 /*
+18 -6
kernel/trace/trace_events_hist.c
··· 5689 5689 guard(mutex)(&event_mutex); 5690 5690 5691 5691 event_file = event_file_data(file); 5692 - if (!event_file) 5693 - return -ENODEV; 5692 + if (!event_file) { 5693 + ret = -ENODEV; 5694 + goto err; 5695 + } 5694 5696 5695 5697 hist_file = kzalloc(sizeof(*hist_file), GFP_KERNEL); 5696 - if (!hist_file) 5697 - return -ENOMEM; 5698 + if (!hist_file) { 5699 + ret = -ENOMEM; 5700 + goto err; 5701 + } 5698 5702 5699 5703 hist_file->file = file; 5700 5704 hist_file->last_act = get_hist_hit_count(event_file); ··· 5706 5702 /* Clear private_data to avoid warning in single_open() */ 5707 5703 file->private_data = NULL; 5708 5704 ret = single_open(file, hist_show, hist_file); 5709 - if (ret) 5705 + if (ret) { 5710 5706 kfree(hist_file); 5707 + goto err; 5708 + } 5711 5709 5710 + return 0; 5711 + err: 5712 + tracing_release_file_tr(inode, file); 5712 5713 return ret; 5713 5714 } 5714 5715 ··· 5988 5979 5989 5980 /* Clear private_data to avoid warning in single_open() */ 5990 5981 file->private_data = NULL; 5991 - return single_open(file, hist_debug_show, file); 5982 + ret = single_open(file, hist_debug_show, file); 5983 + if (ret) 5984 + tracing_release_file_tr(inode, file); 5985 + return ret; 5992 5986 } 5993 5987 5994 5988 const struct file_operations event_hist_debug_fops = {
+20
kernel/trace/trace_fprobe.c
··· 1049 1049 if (*is_return) 1050 1050 return 0; 1051 1051 1052 + if (is_tracepoint) { 1053 + tmp = *symbol; 1054 + while (*tmp && (isalnum(*tmp) || *tmp == '_')) 1055 + tmp++; 1056 + if (*tmp) { 1057 + /* found an invalid character. */ 1058 + trace_probe_log_err(tmp - *symbol, BAD_TP_NAME); 1059 + kfree(*symbol); 1060 + *symbol = NULL; 1061 + return -EINVAL; 1062 + } 1063 + } 1064 + 1052 1065 /* If there is $retval, this should be a return fprobe. */ 1053 1066 for (i = 2; i < argc; i++) { 1054 1067 tmp = strstr(argv[i], "$retval"); ··· 1069 1056 if (is_tracepoint) { 1070 1057 trace_probe_log_set_index(i); 1071 1058 trace_probe_log_err(tmp - argv[i], RETVAL_ON_PROBE); 1059 + kfree(*symbol); 1060 + *symbol = NULL; 1072 1061 return -EINVAL; 1073 1062 } 1074 1063 *is_return = true; ··· 1230 1215 if (is_return && tf->tp.entry_arg) { 1231 1216 tf->fp.entry_handler = trace_fprobe_entry_handler; 1232 1217 tf->fp.entry_data_size = traceprobe_get_entry_data_size(&tf->tp); 1218 + if (ALIGN(tf->fp.entry_data_size, sizeof(long)) > MAX_FPROBE_DATA_SIZE) { 1219 + trace_probe_log_set_index(2); 1220 + trace_probe_log_err(0, TOO_MANY_EARGS); 1221 + return -E2BIG; 1222 + } 1233 1223 } 1234 1224 1235 1225 ret = traceprobe_set_print_fmt(&tf->tp,
+3 -2
kernel/trace/trace_probe.h
··· 36 36 #define MAX_BTF_ARGS_LEN 128 37 37 #define MAX_DENTRY_ARGS_LEN 256 38 38 #define MAX_STRING_SIZE PATH_MAX 39 - #define MAX_ARG_BUF_LEN (MAX_TRACE_ARGS * MAX_ARG_NAME_LEN) 40 39 41 40 /* Reserved field names */ 42 41 #define FIELD_STRING_IP "__probe_ip" ··· 480 481 C(NON_UNIQ_SYMBOL, "The symbol is not unique"), \ 481 482 C(BAD_RETPROBE, "Retprobe address must be an function entry"), \ 482 483 C(NO_TRACEPOINT, "Tracepoint is not found"), \ 484 + C(BAD_TP_NAME, "Invalid character in tracepoint name"),\ 483 485 C(BAD_ADDR_SUFFIX, "Invalid probed address suffix"), \ 484 486 C(NO_GROUP_NAME, "Group name is not specified"), \ 485 487 C(GROUP_TOO_LONG, "Group name is too long"), \ ··· 544 544 C(NO_BTF_FIELD, "This field is not found."), \ 545 545 C(BAD_BTF_TID, "Failed to get BTF type info."),\ 546 546 C(BAD_TYPE4STR, "This type does not fit for string."),\ 547 - C(NEED_STRING_TYPE, "$comm and immediate-string only accepts string type"), 547 + C(NEED_STRING_TYPE, "$comm and immediate-string only accepts string type"),\ 548 + C(TOO_MANY_EARGS, "Too many entry arguments specified"), 548 549 549 550 #undef C 550 551 #define C(a, b) TP_ERR_##a
+1 -1
lib/Kconfig.debug
··· 2103 2103 reallocated, catching possible invalid pointers to the skb. 2104 2104 2105 2105 For more information, check 2106 - Documentation/dev-tools/fault-injection/fault-injection.rst 2106 + Documentation/fault-injection/fault-injection.rst 2107 2107 2108 2108 config FAULT_INJECTION_CONFIGFS 2109 2109 bool "Configfs interface for fault-injection capabilities"
+3
mm/compaction.c
··· 3181 3181 long default_timeout = msecs_to_jiffies(HPAGE_FRAG_CHECK_INTERVAL_MSEC); 3182 3182 long timeout = default_timeout; 3183 3183 3184 + current->flags |= PF_KCOMPACTD; 3184 3185 set_freezable(); 3185 3186 3186 3187 pgdat->kcompactd_max_order = 0; ··· 3237 3236 if (unlikely(pgdat->proactive_compact_trigger)) 3238 3237 pgdat->proactive_compact_trigger = false; 3239 3238 } 3239 + 3240 + current->flags &= ~PF_KCOMPACTD; 3240 3241 3241 3242 return 0; 3242 3243 }
+3 -90
mm/filemap.c
··· 47 47 #include <linux/splice.h> 48 48 #include <linux/rcupdate_wait.h> 49 49 #include <linux/sched/mm.h> 50 - #include <linux/fsnotify.h> 51 50 #include <asm/pgalloc.h> 52 51 #include <asm/tlbflush.h> 53 52 #include "internal.h" ··· 2896 2897 size = min(size, folio_size(folio) - offset); 2897 2898 offset %= PAGE_SIZE; 2898 2899 2899 - while (spliced < size && 2900 - !pipe_full(pipe->head, pipe->tail, pipe->max_usage)) { 2900 + while (spliced < size && !pipe_is_full(pipe)) { 2901 2901 struct pipe_buffer *buf = pipe_head_buf(pipe); 2902 2902 size_t part = min_t(size_t, PAGE_SIZE - offset, size - spliced); 2903 2903 ··· 2953 2955 iocb.ki_pos = *ppos; 2954 2956 2955 2957 /* Work out how much data we can actually add into the pipe */ 2956 - used = pipe_occupancy(pipe->head, pipe->tail); 2958 + used = pipe_buf_usage(pipe); 2957 2959 npages = max_t(ssize_t, pipe->max_usage - used, 0); 2958 2960 len = min_t(size_t, len, npages * PAGE_SIZE); 2959 2961 ··· 3013 3015 total_spliced += n; 3014 3016 *ppos += n; 3015 3017 in->f_ra.prev_pos = *ppos; 3016 - if (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) 3018 + if (pipe_is_full(pipe)) 3017 3019 goto out; 3018 3020 } ··· 3197 3199 unsigned long vm_flags = vmf->vma->vm_flags; 3198 3200 unsigned int mmap_miss; 3199 3201 3200 - /* 3201 - * If we have pre-content watches we need to disable readahead to make 3202 - * sure that we don't populate our mapping with 0 filled pages that we 3203 - * never emitted an event for. 3204 - */ 3205 - if (unlikely(FMODE_FSNOTIFY_HSM(file->f_mode))) 3206 - return fpin; 3207 - 3208 3202 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 3209 3203 /* Use the readahead code, even if readahead is disabled */ 3210 3204 if ((vm_flags & VM_HUGEPAGE) && HPAGE_PMD_ORDER <= MAX_PAGECACHE_ORDER) { ··· 3265 3275 struct file *fpin = NULL; 3266 3276 unsigned int mmap_miss; 3267 3277 3268 - /* See comment in do_sync_mmap_readahead. */ 3269 - if (unlikely(FMODE_FSNOTIFY_HSM(file->f_mode))) 3270 - return fpin; 3271 - 3272 3278 /* If we don't want any read-ahead, don't bother */ 3273 3279 if (vmf->vma->vm_flags & VM_RAND_READ || !ra->ra_pages) 3274 3280 return fpin; ··· 3322 3336 pte_unmap(ptep); 3323 3337 return ret; 3324 3338 } 3325 - 3326 - /** 3327 - * filemap_fsnotify_fault - maybe emit a pre-content event. 3328 - * @vmf: struct vm_fault containing details of the fault. 3329 - * 3330 - * If we have a pre-content watch on this file we will emit an event for this 3331 - * range. If we return anything the fault caller should return immediately, we 3332 - * will return VM_FAULT_RETRY if we had to emit an event, which will trigger the 3333 - * fault again and then the fault handler will run the second time through. 3334 - * 3335 - * Return: a bitwise-OR of %VM_FAULT_ codes, 0 if nothing happened. 3336 - */ 3337 - vm_fault_t filemap_fsnotify_fault(struct vm_fault *vmf) 3338 - { 3339 - struct file *fpin = NULL; 3340 - int mask = (vmf->flags & FAULT_FLAG_WRITE) ? MAY_WRITE : MAY_ACCESS; 3341 - loff_t pos = vmf->pgoff >> PAGE_SHIFT; 3342 - size_t count = PAGE_SIZE; 3343 - int err; 3344 - 3345 - /* 3346 - * We already did this and now we're retrying with everything locked, 3347 - * don't emit the event and continue. 3348 - */ 3349 - if (vmf->flags & FAULT_FLAG_TRIED) 3350 - return 0; 3351 - 3352 - /* No watches, we're done. */ 3353 - if (likely(!FMODE_FSNOTIFY_HSM(vmf->vma->vm_file->f_mode))) 3354 - return 0; 3355 - 3356 - fpin = maybe_unlock_mmap_for_io(vmf, fpin); 3357 - if (!fpin) 3358 - return VM_FAULT_SIGBUS; 3359 - 3360 - err = fsnotify_file_area_perm(fpin, mask, &pos, count); 3361 - fput(fpin); 3362 - if (err) 3363 - return VM_FAULT_SIGBUS; 3364 - return VM_FAULT_RETRY; 3365 - } 3366 - EXPORT_SYMBOL_GPL(filemap_fsnotify_fault); 3367 3339 3368 3340 /** 3369 3341 * filemap_fault - read in file data for page fault handling ··· 3426 3482 * or because readahead was otherwise unable to retrieve it.
3427 3483 */ 3428 3484 if (unlikely(!folio_test_uptodate(folio))) { 3429 - /* 3430 - * If this is a precontent file we have can now emit an event to 3431 - * try and populate the folio. 3432 - */ 3433 - if (!(vmf->flags & FAULT_FLAG_TRIED) && 3434 - unlikely(FMODE_FSNOTIFY_HSM(file->f_mode))) { 3435 - loff_t pos = folio_pos(folio); 3436 - size_t count = folio_size(folio); 3437 - 3438 - /* We're NOWAIT, we have to retry. */ 3439 - if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) { 3440 - folio_unlock(folio); 3441 - goto out_retry; 3442 - } 3443 - 3444 - if (mapping_locked) 3445 - filemap_invalidate_unlock_shared(mapping); 3446 - mapping_locked = false; 3447 - 3448 - folio_unlock(folio); 3449 - fpin = maybe_unlock_mmap_for_io(vmf, fpin); 3450 - if (!fpin) 3451 - goto out_retry; 3452 - 3453 - error = fsnotify_file_area_perm(fpin, MAY_ACCESS, &pos, 3454 - count); 3455 - if (error) 3456 - ret = VM_FAULT_SIGBUS; 3457 - goto out_retry; 3458 - } 3459 - 3460 3485 /* 3461 3486 * If the invalidate lock is not held, the folio was in cache 3462 3487 * and uptodate and now it is not. Strange but possible since we
+8
mm/hugetlb.c
··· 2943 2943 return ret; 2944 2944 } 2945 2945 2946 + void wait_for_freed_hugetlb_folios(void) 2947 + { 2948 + if (llist_empty(&hpage_freelist)) 2949 + return; 2950 + 2951 + flush_work(&free_hpage_work); 2952 + } 2953 + 2946 2954 typedef enum { 2947 2955 /* 2948 2956 * For either 0/1: we checked the per-vma resv map, and one resv
+3 -2
mm/internal.h
··· 1115 1115 * mm/memory-failure.c 1116 1116 */ 1117 1117 #ifdef CONFIG_MEMORY_FAILURE 1118 - void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu); 1118 + int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill); 1119 1119 void shake_folio(struct folio *folio); 1120 1120 extern int hwpoison_filter(struct page *p); 1121 1121 ··· 1138 1138 struct vm_area_struct *vma); 1139 1139 1140 1140 #else 1141 - static inline void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu) 1141 + static inline int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill) 1142 1142 { 1143 + return -EBUSY; 1143 1144 } 1144 1145 #endif 1145 1146
+1
mm/kmsan/hooks.c
··· 357 357 size -= to_go; 358 358 } 359 359 } 360 + EXPORT_SYMBOL_GPL(kmsan_handle_dma); 360 361 361 362 void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, 362 363 enum dma_data_direction dir)
+31 -32
mm/memory-failure.c
··· 1556 1556 return ret; 1557 1557 } 1558 1558 1559 - void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu) 1559 + int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill) 1560 1560 { 1561 - if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) { 1562 - struct address_space *mapping; 1561 + enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON; 1562 + struct address_space *mapping; 1563 1563 1564 + if (folio_test_swapcache(folio)) { 1565 + pr_err("%#lx: keeping poisoned page in swap cache\n", pfn); 1566 + ttu &= ~TTU_HWPOISON; 1567 + } 1568 + 1569 + /* 1570 + * Propagate the dirty bit from PTEs to struct page first, because we 1571 + * need this to decide if we should kill or just drop the page. 1572 + * XXX: the dirty test could be racy: set_page_dirty() may not always 1573 + * be called inside page lock (it's recommended but not enforced). 1574 + */ 1575 + mapping = folio_mapping(folio); 1576 + if (!must_kill && !folio_test_dirty(folio) && mapping && 1577 + mapping_can_writeback(mapping)) { 1578 + if (folio_mkclean(folio)) { 1579 + folio_set_dirty(folio); 1580 + } else { 1581 + ttu &= ~TTU_HWPOISON; 1582 + pr_info("%#lx: corrupted page was clean: dropped without side effects\n", 1583 + pfn); 1584 + } 1585 + } 1586 + 1587 + if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) { 1564 1588 /* 1565 1589 * For hugetlb folios in shared mappings, try_to_unmap 1566 1590 * could potentially call huge_pmd_unshare. Because of ··· 1596 1572 if (!mapping) { 1597 1573 pr_info("%#lx: could not lock mapping for mapped hugetlb folio\n", 1598 1574 folio_pfn(folio)); 1599 - return; 1575 + return -EBUSY; 1600 1576 } 1601 1577 1602 1578 try_to_unmap(folio, ttu|TTU_RMAP_LOCKED); ··· 1604 1580 } else { 1605 1581 try_to_unmap(folio, ttu); 1606 1582 } 1583 + 1584 + return folio_mapped(folio) ? -EBUSY : 0; 1607 1585 } 1608 1586 1609 1587 /* ··· 1615 1589 static bool hwpoison_user_mappings(struct folio *folio, struct page *p, 1616 1590 unsigned long pfn, int flags) 1617 1591 { 1618 - enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON; 1619 - struct address_space *mapping; 1620 1592 LIST_HEAD(tokill); 1621 1593 bool unmap_success; 1622 1594 int forcekill; ··· 1637 1613 if (!folio_mapped(folio)) 1638 1614 return true; 1639 1615 1640 - if (folio_test_swapcache(folio)) { 1641 - pr_err("%#lx: keeping poisoned page in swap cache\n", pfn); 1642 - ttu &= ~TTU_HWPOISON; 1643 - } 1644 - 1645 - /* 1646 - * Propagate the dirty bit from PTEs to struct page first, because we 1647 - * need this to decide if we should kill or just drop the page. 1648 - * XXX: the dirty test could be racy: set_page_dirty() may not always 1649 - * be called inside page lock (it's recommended but not enforced). 1650 - */ 1651 - mapping = folio_mapping(folio); 1652 - if (!(flags & MF_MUST_KILL) && !folio_test_dirty(folio) && mapping && 1653 - mapping_can_writeback(mapping)) { 1654 - if (folio_mkclean(folio)) { 1655 - folio_set_dirty(folio); 1656 - } else { 1657 - ttu &= ~TTU_HWPOISON; 1658 - pr_info("%#lx: corrupted page was clean: dropped without side effects\n", 1659 - pfn); 1660 - } 1661 - } 1662 - 1663 1616 /* 1664 1617 * First collect all the processes that have the page 1665 1618 * mapped in dirty form. This has to be done before try_to_unmap, ··· 1644 1643 */ 1645 1644 collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED); 1646 1645 1647 - unmap_poisoned_folio(folio, ttu); 1648 - 1649 - unmap_success = !folio_mapped(folio); 1646 + unmap_success = !unmap_poisoned_folio(folio, pfn, flags & MF_MUST_KILL); 1650 1648 if (!unmap_success) 1651 1649 pr_err("%#lx: failed to unmap page (folio mapcount=%d)\n", 1652 1650 pfn, folio_mapcount(folio));
+14 -26
mm/memory.c
··· 76 76 #include <linux/ptrace.h> 77 77 #include <linux/vmalloc.h> 78 78 #include <linux/sched/sysctl.h> 79 - #include <linux/fsnotify.h> 80 79 81 80 #include <trace/events/kmem.h> 82 81 ··· 3050 3051 next = pgd_addr_end(addr, end); 3051 3052 if (pgd_none(*pgd) && !create) 3052 3053 continue; 3053 - if (WARN_ON_ONCE(pgd_leaf(*pgd))) 3054 - return -EINVAL; 3054 + if (WARN_ON_ONCE(pgd_leaf(*pgd))) { 3055 + err = -EINVAL; 3056 + break; 3057 + } 3055 3058 if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) { 3056 3059 if (!create) 3057 3060 continue; ··· 5184 5183 bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) && 5185 5184 !(vma->vm_flags & VM_SHARED); 5186 5185 int type, nr_pages; 5187 - unsigned long addr = vmf->address; 5186 + unsigned long addr; 5187 + bool needs_fallback = false; 5188 + 5189 + fallback: 5190 + addr = vmf->address; 5188 5191 5189 5192 /* Did we COW the page? */ 5190 5193 if (is_cow) ··· 5227 5222 * approach also applies to non-anonymous-shmem faults to avoid 5228 5223 * inflating the RSS of the process. 
5229 5224 */ 5230 - if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) { 5225 + if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) || 5226 + unlikely(needs_fallback)) { 5231 5227 nr_pages = 1; 5232 5228 } else if (nr_pages > 1) { 5233 5229 pgoff_t idx = folio_page_idx(folio, page); ··· 5264 5258 ret = VM_FAULT_NOPAGE; 5265 5259 goto unlock; 5266 5260 } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) { 5267 - update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages); 5268 - ret = VM_FAULT_NOPAGE; 5269 - goto unlock; 5261 + needs_fallback = true; 5262 + pte_unmap_unlock(vmf->pte, vmf->ptl); 5263 + goto fallback; 5270 5264 } 5271 5265 5272 5266 folio_ref_add(folio, nr_pages - 1); ··· 5749 5743 static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf) 5750 5744 { 5751 5745 struct vm_area_struct *vma = vmf->vma; 5752 - 5753 5746 if (vma_is_anonymous(vma)) 5754 5747 return do_huge_pmd_anonymous_page(vmf); 5755 - /* 5756 - * Currently we just emit PAGE_SIZE for our fault events, so don't allow 5757 - * a huge fault if we have a pre content watch on this file. This would 5758 - * be trivial to support, but there would need to be tests to ensure 5759 - * this works properly and those don't exist currently. 5760 - */ 5761 - if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode))) 5762 - return VM_FAULT_FALLBACK; 5763 5748 if (vma->vm_ops->huge_fault) 5764 5749 return vma->vm_ops->huge_fault(vmf, PMD_ORDER); 5765 5750 return VM_FAULT_FALLBACK; ··· 5774 5777 } 5775 5778 5776 5779 if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { 5777 - /* See comment in create_huge_pmd. 
*/ 5778 - if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode))) 5779 - goto split; 5780 5780 if (vma->vm_ops->huge_fault) { 5781 5781 ret = vma->vm_ops->huge_fault(vmf, PMD_ORDER); 5782 5782 if (!(ret & VM_FAULT_FALLBACK)) ··· 5796 5802 /* No support for anonymous transparent PUD pages yet */ 5797 5803 if (vma_is_anonymous(vma)) 5798 5804 return VM_FAULT_FALLBACK; 5799 - /* See comment in create_huge_pmd. */ 5800 - if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode))) 5801 - return VM_FAULT_FALLBACK; 5802 5805 if (vma->vm_ops->huge_fault) 5803 5806 return vma->vm_ops->huge_fault(vmf, PUD_ORDER); 5804 5807 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ ··· 5813 5822 if (vma_is_anonymous(vma)) 5814 5823 goto split; 5815 5824 if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { 5816 - /* See comment in create_huge_pmd. */ 5817 - if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode))) 5818 - goto split; 5819 5825 if (vma->vm_ops->huge_fault) { 5820 5826 ret = vma->vm_ops->huge_fault(vmf, PUD_ORDER); 5821 5827 if (!(ret & VM_FAULT_FALLBACK))
+13 -15
mm/memory_hotplug.c
··· 1822 1822 if (folio_test_large(folio)) 1823 1823 pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1; 1824 1824 1825 - /* 1826 - * HWPoison pages have elevated reference counts so the migration would 1827 - * fail on them. It also doesn't make any sense to migrate them in the 1828 - * first place. Still try to unmap such a page in case it is still mapped 1829 - * (keep the unmap as the catch all safety net). 1830 - */ 1831 - if (folio_test_hwpoison(folio) || 1832 - (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) { 1833 - if (WARN_ON(folio_test_lru(folio))) 1834 - folio_isolate_lru(folio); 1835 - if (folio_mapped(folio)) 1836 - unmap_poisoned_folio(folio, TTU_IGNORE_MLOCK); 1837 - continue; 1838 - } 1839 - 1840 1825 if (!folio_try_get(folio)) 1841 1826 continue; 1842 1827 1843 1828 if (unlikely(page_folio(page) != folio)) 1844 1829 goto put_folio; 1830 + 1831 + if (folio_test_hwpoison(folio) || 1832 + (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) { 1833 + if (WARN_ON(folio_test_lru(folio))) 1834 + folio_isolate_lru(folio); 1835 + if (folio_mapped(folio)) { 1836 + folio_lock(folio); 1837 + unmap_poisoned_folio(folio, pfn, false); 1838 + folio_unlock(folio); 1839 + } 1840 + 1841 + goto put_folio; 1842 + } 1845 1843 1846 1844 if (!isolate_folio_to_list(folio, &source)) { 1847 1845 if (__ratelimit(&migrate_rs)) {
-7
mm/nommu.c
··· 1613 1613 } 1614 1614 EXPORT_SYMBOL(remap_vmalloc_range); 1615 1615 1616 - vm_fault_t filemap_fsnotify_fault(struct vm_fault *vmf) 1617 - { 1618 - BUG(); 1619 - return 0; 1620 - } 1621 - EXPORT_SYMBOL_GPL(filemap_fsnotify_fault); 1622 - 1623 1616 vm_fault_t filemap_fault(struct vm_fault *vmf) 1624 1617 { 1625 1618 BUG();
+2 -2
mm/page_alloc.c
··· 4243 4243 restart: 4244 4244 compaction_retries = 0; 4245 4245 no_progress_loops = 0; 4246 + compact_result = COMPACT_SKIPPED; 4246 4247 compact_priority = DEF_COMPACT_PRIORITY; 4247 4248 cpuset_mems_cookie = read_mems_allowed_begin(); 4248 4249 zonelist_iter_cookie = zonelist_iter_begin(); ··· 5850 5849 5851 5850 for (j = i + 1; j < MAX_NR_ZONES; j++) { 5852 5851 struct zone *upper_zone = &pgdat->node_zones[j]; 5853 - bool empty = !zone_managed_pages(upper_zone); 5854 5852 5855 5853 managed_pages += zone_managed_pages(upper_zone); 5856 5854 5857 - if (clear || empty) 5855 + if (clear) 5858 5856 zone->lowmem_reserve[j] = 0; 5859 5857 else 5860 5858 zone->lowmem_reserve[j] = managed_pages / ratio;
+10
mm/page_isolation.c
··· 608 608 int ret; 609 609 610 610 /* 611 + * Due to the deferred freeing of hugetlb folios, the hugepage folios may 612 + * not immediately release to the buddy system. This can cause PageBuddy() 613 + * to fail in __test_page_isolated_in_pageblock(). To ensure that the 614 + * hugetlb folios are properly released back to the buddy system, we 615 + * invoke the wait_for_freed_hugetlb_folios() function to wait for the 616 + * release to complete. 617 + */ 618 + wait_for_freed_hugetlb_folios(); 619 + 620 + /* 611 621 * Note: pageblock_nr_pages != MAX_PAGE_ORDER. Then, chunks of free 612 622 * pages are not aligned to pageblock_nr_pages. 613 623 * Then we just check migratetype first.
-14
mm/readahead.c
··· 128 128 #include <linux/blk-cgroup.h> 129 129 #include <linux/fadvise.h> 130 130 #include <linux/sched/mm.h> 131 - #include <linux/fsnotify.h> 132 131 133 132 #include "internal.h" 134 133 ··· 558 559 pgoff_t prev_index, miss; 559 560 560 561 /* 561 - * If we have pre-content watches we need to disable readahead to make 562 - * sure that we don't find 0 filled pages in cache that we never emitted 563 - * events for. Filesystems supporting HSM must make sure to not call 564 - * this function with ractl->file unset for files handled by HSM. 565 - */ 566 - if (ractl->file && unlikely(FMODE_FSNOTIFY_HSM(ractl->file->f_mode))) 567 - return; 568 - 569 - /* 570 562 * Even if readahead is disabled, issue this request as readahead 571 563 * as we'll need it to satisfy the requested range. The forced 572 564 * readahead will do the right thing and limit the read to just the ··· 633 643 634 644 /* no readahead */ 635 645 if (!ra->ra_pages) 636 - return; 637 - 638 - /* See the comment in page_cache_sync_ra. */ 639 - if (ractl->file && unlikely(FMODE_FSNOTIFY_HSM(ractl->file->f_mode))) 640 646 return; 641 647 642 648 /*
+31 -8
mm/shmem.c
··· 1548 1548 if (WARN_ON_ONCE(!wbc->for_reclaim)) 1549 1549 goto redirty; 1550 1550 1551 - if (WARN_ON_ONCE((info->flags & VM_LOCKED) || sbinfo->noswap)) 1551 + if ((info->flags & VM_LOCKED) || sbinfo->noswap) 1552 1552 goto redirty; 1553 1553 1554 1554 if (!total_swap_pages) ··· 2253 2253 struct folio *folio = NULL; 2254 2254 bool skip_swapcache = false; 2255 2255 swp_entry_t swap; 2256 - int error, nr_pages; 2256 + int error, nr_pages, order, split_order; 2257 2257 2258 2258 VM_BUG_ON(!*foliop || !xa_is_value(*foliop)); 2259 2259 swap = radix_to_swp_entry(*foliop); ··· 2272 2272 2273 2273 /* Look it up and read it in.. */ 2274 2274 folio = swap_cache_get_folio(swap, NULL, 0); 2275 + order = xa_get_order(&mapping->i_pages, index); 2275 2276 if (!folio) { 2276 - int order = xa_get_order(&mapping->i_pages, index); 2277 2277 bool fallback_order0 = false; 2278 - int split_order; 2279 2278 2280 2279 /* Or update major stats only when swapin succeeds?? */ 2281 2280 if (fault_type) { ··· 2338 2339 error = -ENOMEM; 2339 2340 goto failed; 2340 2341 } 2342 + } else if (order != folio_order(folio)) { 2343 + /* 2344 + * Swap readahead may swap in order 0 folios into swapcache 2345 + * asynchronously, while the shmem mapping can still stores 2346 + * large swap entries. In such cases, we should split the 2347 + * large swap entry to prevent possible data corruption. 2348 + */ 2349 + split_order = shmem_split_large_entry(inode, index, swap, gfp); 2350 + if (split_order < 0) { 2351 + error = split_order; 2352 + goto failed; 2353 + } 2354 + 2355 + /* 2356 + * If the large swap entry has already been split, it is 2357 + * necessary to recalculate the new swap entry based on 2358 + * the old order alignment. 
2359 + */ 2360 + if (split_order > 0) { 2361 + pgoff_t offset = index - round_down(index, 1 << split_order); 2362 + 2363 + swap = swp_entry(swp_type(swap), swp_offset(swap) + offset); 2364 + } 2341 2365 } 2342 2366 2343 2367 alloced: ··· 2368 2346 folio_lock(folio); 2369 2347 if ((!skip_swapcache && !folio_test_swapcache(folio)) || 2370 2348 folio->swap.val != swap.val || 2371 - !shmem_confirm_swap(mapping, index, swap)) { 2349 + !shmem_confirm_swap(mapping, index, swap) || 2350 + xa_get_order(&mapping->i_pages, index) != folio_order(folio)) { 2372 2351 error = -EEXIST; 2373 2352 goto unlock; 2374 2353 } ··· 3510 3487 3511 3488 size = min_t(size_t, size, PAGE_SIZE - offset); 3512 3489 3513 - if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage)) { 3490 + if (!pipe_is_full(pipe)) { 3514 3491 struct pipe_buffer *buf = pipe_head_buf(pipe); 3515 3492 3516 3493 *buf = (struct pipe_buffer) { ··· 3537 3514 int error = 0; 3538 3515 3539 3516 /* Work out how much data we can actually add into the pipe */ 3540 - used = pipe_occupancy(pipe->head, pipe->tail); 3517 + used = pipe_buf_usage(pipe); 3541 3518 npages = max_t(ssize_t, pipe->max_usage - used, 0); 3542 3519 len = min_t(size_t, len, npages * PAGE_SIZE); 3543 3520 ··· 3624 3601 total_spliced += n; 3625 3602 *ppos += n; 3626 3603 in->f_ra.prev_pos = *ppos; 3627 - if (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) 3604 + if (pipe_is_full(pipe)) 3628 3605 break; 3629 3606 3630 3607 cond_resched();
+10 -4
mm/slab_common.c
··· 1304 1304 static int rcu_delay_page_cache_fill_msec = 5000; 1305 1305 module_param(rcu_delay_page_cache_fill_msec, int, 0444); 1306 1306 1307 + static struct workqueue_struct *rcu_reclaim_wq; 1308 + 1307 1309 /* Maximum number of jiffies to wait before draining a batch. */ 1308 1310 #define KFREE_DRAIN_JIFFIES (5 * HZ) 1309 1311 #define KFREE_N_BATCHES 2 ··· 1634 1632 if (delayed_work_pending(&krcp->monitor_work)) { 1635 1633 delay_left = krcp->monitor_work.timer.expires - jiffies; 1636 1634 if (delay < delay_left) 1637 - mod_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); 1635 + mod_delayed_work(rcu_reclaim_wq, &krcp->monitor_work, delay); 1638 1636 return; 1639 1637 } 1640 - queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay); 1638 + queue_delayed_work(rcu_reclaim_wq, &krcp->monitor_work, delay); 1641 1639 } 1642 1640 1643 1641 static void ··· 1735 1733 // "free channels", the batch can handle. Break 1736 1734 // the loop since it is done with this CPU thus 1737 1735 // queuing an RCU work is _always_ success here. 1738 - queued = queue_rcu_work(system_unbound_wq, &krwp->rcu_work); 1736 + queued = queue_rcu_work(rcu_reclaim_wq, &krwp->rcu_work); 1739 1737 WARN_ON_ONCE(!queued); 1740 1738 break; 1741 1739 } ··· 1885 1883 if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING && 1886 1884 !atomic_xchg(&krcp->work_in_progress, 1)) { 1887 1885 if (atomic_read(&krcp->backoff_page_cache_fill)) { 1888 - queue_delayed_work(system_unbound_wq, 1886 + queue_delayed_work(rcu_reclaim_wq, 1889 1887 &krcp->page_cache_work, 1890 1888 msecs_to_jiffies(rcu_delay_page_cache_fill_msec)); 1891 1889 } else { ··· 2121 2119 int cpu; 2122 2120 int i, j; 2123 2121 struct shrinker *kfree_rcu_shrinker; 2122 + 2123 + rcu_reclaim_wq = alloc_workqueue("kvfree_rcu_reclaim", 2124 + WQ_UNBOUND | WQ_MEM_RECLAIM, 0); 2125 + WARN_ON(!rcu_reclaim_wq); 2124 2126 2125 2127 /* Clamp it to [0:100] seconds interval. */ 2126 2128 if (rcu_delay_page_cache_fill_msec < 0 ||
+10 -2
mm/swapfile.c
··· 653 653 return; 654 654 655 655 if (!ci->count) { 656 - free_cluster(si, ci); 656 + if (ci->flags != CLUSTER_FLAG_FREE) 657 + free_cluster(si, ci); 657 658 } else if (ci->count != SWAPFILE_CLUSTER) { 658 659 if (ci->flags != CLUSTER_FLAG_FRAG) 659 660 move_cluster(si, ci, &si->frag_clusters[ci->order], ··· 858 857 } 859 858 offset++; 860 859 } 860 + 861 + /* in case no swap cache is reclaimed */ 862 + if (ci->flags == CLUSTER_FLAG_NONE) 863 + relocate_cluster(si, ci); 861 864 862 865 unlock_cluster(ci); 863 866 if (to_scan <= 0) ··· 2646 2641 for (offset = 0; offset < end; offset += SWAPFILE_CLUSTER) { 2647 2642 ci = lock_cluster(si, offset); 2648 2643 unlock_cluster(ci); 2649 - offset += SWAPFILE_CLUSTER; 2650 2644 } 2651 2645 } 2652 2646 ··· 3546 3542 int err, i; 3547 3543 3548 3544 si = swp_swap_info(entry); 3545 + if (WARN_ON_ONCE(!si)) { 3546 + pr_err("%s%08lx\n", Bad_file, entry.val); 3547 + return -EINVAL; 3548 + } 3549 3549 3550 3550 offset = swp_offset(entry); 3551 3551 VM_WARN_ON(nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
+90 -17
mm/userfaultfd.c
··· 18 18 #include <asm/tlbflush.h> 19 19 #include <asm/tlb.h> 20 20 #include "internal.h" 21 + #include "swap.h" 21 22 22 23 static __always_inline 23 24 bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end) ··· 1077 1076 return err; 1078 1077 } 1079 1078 1080 - static int move_swap_pte(struct mm_struct *mm, 1079 + static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma, 1081 1080 unsigned long dst_addr, unsigned long src_addr, 1082 1081 pte_t *dst_pte, pte_t *src_pte, 1083 1082 pte_t orig_dst_pte, pte_t orig_src_pte, 1084 1083 pmd_t *dst_pmd, pmd_t dst_pmdval, 1085 - spinlock_t *dst_ptl, spinlock_t *src_ptl) 1084 + spinlock_t *dst_ptl, spinlock_t *src_ptl, 1085 + struct folio *src_folio) 1086 1086 { 1087 - if (!pte_swp_exclusive(orig_src_pte)) 1088 - return -EBUSY; 1089 - 1090 1087 double_pt_lock(dst_ptl, src_ptl); 1091 1088 1092 1089 if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte, 1093 1090 dst_pmd, dst_pmdval)) { 1094 1091 double_pt_unlock(dst_ptl, src_ptl); 1095 1092 return -EAGAIN; 1093 + } 1094 + 1095 + /* 1096 + * The src_folio resides in the swapcache, requiring an update to its 1097 + * index and mapping to align with the dst_vma, where a swap-in may 1098 + * occur and hit the swapcache after moving the PTE. 
1099 + */ 1100 + if (src_folio) { 1101 + folio_move_anon_rmap(src_folio, dst_vma); 1102 + src_folio->index = linear_page_index(dst_vma, dst_addr); 1096 1103 } 1097 1104 1098 1105 orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte); ··· 1150 1141 __u64 mode) 1151 1142 { 1152 1143 swp_entry_t entry; 1144 + struct swap_info_struct *si = NULL; 1153 1145 pte_t orig_src_pte, orig_dst_pte; 1154 1146 pte_t src_folio_pte; 1155 1147 spinlock_t *src_ptl, *dst_ptl; ··· 1250 1240 */ 1251 1241 if (!src_folio) { 1252 1242 struct folio *folio; 1243 + bool locked; 1253 1244 1254 1245 /* 1255 1246 * Pin the page while holding the lock to be sure the ··· 1270 1259 goto out; 1271 1260 } 1272 1261 1262 + locked = folio_trylock(folio); 1263 + /* 1264 + * We avoid waiting for folio lock with a raised 1265 + * refcount for large folios because extra refcounts 1266 + * will result in split_folio() failing later and 1267 + * retrying. If multiple tasks are trying to move a 1268 + * large folio we can end up livelocking. 
1269 + */ 1270 + if (!locked && folio_test_large(folio)) { 1271 + spin_unlock(src_ptl); 1272 + err = -EAGAIN; 1273 + goto out; 1274 + } 1275 + 1273 1276 folio_get(folio); 1274 1277 src_folio = folio; 1275 1278 src_folio_pte = orig_src_pte; 1276 1279 spin_unlock(src_ptl); 1277 1280 1278 - if (!folio_trylock(src_folio)) { 1279 - pte_unmap(&orig_src_pte); 1280 - pte_unmap(&orig_dst_pte); 1281 + if (!locked) { 1282 + pte_unmap(src_pte); 1283 + pte_unmap(dst_pte); 1281 1284 src_pte = dst_pte = NULL; 1282 1285 /* now we can block and wait */ 1283 1286 folio_lock(src_folio); ··· 1307 1282 /* at this point we have src_folio locked */ 1308 1283 if (folio_test_large(src_folio)) { 1309 1284 /* split_folio() can block */ 1310 - pte_unmap(&orig_src_pte); 1311 - pte_unmap(&orig_dst_pte); 1285 + pte_unmap(src_pte); 1286 + pte_unmap(dst_pte); 1312 1287 src_pte = dst_pte = NULL; 1313 1288 err = split_folio(src_folio); 1314 1289 if (err) ··· 1333 1308 goto out; 1334 1309 } 1335 1310 if (!anon_vma_trylock_write(src_anon_vma)) { 1336 - pte_unmap(&orig_src_pte); 1337 - pte_unmap(&orig_dst_pte); 1311 + pte_unmap(src_pte); 1312 + pte_unmap(dst_pte); 1338 1313 src_pte = dst_pte = NULL; 1339 1314 /* now we can block and wait */ 1340 1315 anon_vma_lock_write(src_anon_vma); ··· 1347 1322 orig_dst_pte, orig_src_pte, dst_pmd, 1348 1323 dst_pmdval, dst_ptl, src_ptl, src_folio); 1349 1324 } else { 1325 + struct folio *folio = NULL; 1326 + 1350 1327 entry = pte_to_swp_entry(orig_src_pte); 1351 1328 if (non_swap_entry(entry)) { 1352 1329 if (is_migration_entry(entry)) { 1353 - pte_unmap(&orig_src_pte); 1354 - pte_unmap(&orig_dst_pte); 1330 + pte_unmap(src_pte); 1331 + pte_unmap(dst_pte); 1355 1332 src_pte = dst_pte = NULL; 1356 1333 migration_entry_wait(mm, src_pmd, src_addr); 1357 1334 err = -EAGAIN; ··· 1362 1335 goto out; 1363 1336 } 1364 1337 1365 - err = move_swap_pte(mm, dst_addr, src_addr, dst_pte, src_pte, 1366 - orig_dst_pte, orig_src_pte, dst_pmd, 1367 - dst_pmdval, dst_ptl, src_ptl); 
1338 + if (!pte_swp_exclusive(orig_src_pte)) { 1339 + err = -EBUSY; 1340 + goto out; 1341 + } 1342 + 1343 + si = get_swap_device(entry); 1344 + if (unlikely(!si)) { 1345 + err = -EAGAIN; 1346 + goto out; 1347 + } 1348 + /* 1349 + * Verify the existence of the swapcache. If present, the folio's 1350 + * index and mapping must be updated even when the PTE is a swap 1351 + * entry. The anon_vma lock is not taken during this process since 1352 + * the folio has already been unmapped, and the swap entry is 1353 + * exclusive, preventing rmap walks. 1354 + * 1355 + * For large folios, return -EBUSY immediately, as split_folio() 1356 + * also returns -EBUSY when attempting to split unmapped large 1357 + * folios in the swapcache. This issue needs to be resolved 1358 + * separately to allow proper handling. 1359 + */ 1360 + if (!src_folio) 1361 + folio = filemap_get_folio(swap_address_space(entry), 1362 + swap_cache_index(entry)); 1363 + if (!IS_ERR_OR_NULL(folio)) { 1364 + if (folio_test_large(folio)) { 1365 + err = -EBUSY; 1366 + folio_put(folio); 1367 + goto out; 1368 + } 1369 + src_folio = folio; 1370 + src_folio_pte = orig_src_pte; 1371 + if (!folio_trylock(src_folio)) { 1372 + pte_unmap(src_pte); 1373 + pte_unmap(dst_pte); 1374 + src_pte = dst_pte = NULL; 1375 + put_swap_device(si); 1376 + si = NULL; 1377 + /* now we can block and wait */ 1378 + folio_lock(src_folio); 1379 + goto retry; 1380 + } 1381 + } 1382 + err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte, 1383 + orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval, 1384 + dst_ptl, src_ptl, src_folio); 1368 1385 } 1369 1386 1370 1387 out: ··· 1425 1354 if (src_pte) 1426 1355 pte_unmap(src_pte); 1427 1356 mmu_notifier_invalidate_range_end(&range); 1357 + if (si) 1358 + put_swap_device(si); 1428 1359 1429 1360 return err; 1430 1361 }
+3
mm/util.c
··· 23 23 #include <linux/processor.h> 24 24 #include <linux/sizes.h> 25 25 #include <linux/compat.h> 26 + #include <linux/fsnotify.h> 26 27 27 28 #include <linux/uaccess.h> 28 29 ··· 570 569 LIST_HEAD(uf); 571 570 572 571 ret = security_mmap_file(file, prot, flag); 572 + if (!ret) 573 + ret = fsnotify_mmap_perm(file, prot, pgoff >> PAGE_SHIFT, len); 573 574 if (!ret) { 574 575 if (mmap_write_lock_killable(mm)) 575 576 return -EINTR;
+8 -4
mm/vma.c
··· 1509 1509 static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg) 1510 1510 { 1511 1511 struct vm_area_struct *vma = vmg->vma; 1512 + unsigned long start = vmg->start; 1513 + unsigned long end = vmg->end; 1512 1514 struct vm_area_struct *merged; 1513 1515 1514 1516 /* First, try to merge. */ 1515 1517 merged = vma_merge_existing_range(vmg); 1516 1518 if (merged) 1517 1519 return merged; 1520 + if (vmg_nomem(vmg)) 1521 + return ERR_PTR(-ENOMEM); 1518 1522 1519 1523 /* Split any preceding portion of the VMA. */ 1520 - if (vma->vm_start < vmg->start) { 1521 - int err = split_vma(vmg->vmi, vma, vmg->start, 1); 1524 + if (vma->vm_start < start) { 1525 + int err = split_vma(vmg->vmi, vma, start, 1); 1522 1526 1523 1527 if (err) 1524 1528 return ERR_PTR(err); 1525 1529 } 1526 1530 1527 1531 /* Split any trailing portion of the VMA. */ 1528 - if (vma->vm_end > vmg->end) { 1529 - int err = split_vma(vmg->vmi, vma, vmg->end, 0); 1532 + if (vma->vm_end > end) { 1533 + int err = split_vma(vmg->vmi, vma, end, 0); 1530 1534 1531 1535 if (err) 1532 1536 return ERR_PTR(err);
+2 -2
mm/vmalloc.c
··· 586 586 mask |= PGTBL_PGD_MODIFIED; 587 587 err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask); 588 588 if (err) 589 - return err; 589 + break; 590 590 } while (pgd++, addr = next, addr != end); 591 591 592 592 if (mask & ARCH_PAGE_TABLE_SYNC_MASK) 593 593 arch_sync_kernel_mappings(start, end); 594 594 595 - return 0; 595 + return err; 596 596 } 597 597 598 598 /*
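The vmalloc.c fix turns an early `return err` inside the mapping loop into `break`, so that arch_sync_kernel_mappings() still runs over the partially-modified range before the error is propagated. The control-flow difference, sketched with hypothetical stand-ins (map_one(), synced) rather than real kernel calls:

```c
#include <assert.h>

static int synced;

/* Stand-in for one page-table mapping step; fails at i == 2. */
static int map_one(int i) { return (i == 2) ? -1 : 0; }

static int map_range(int n)
{
	int err = 0;

	for (int i = 0; i < n; i++) {
		err = map_one(i);
		if (err)
			break;	/* was: return err; which skipped the sync */
	}

	synced = 1;		/* stands in for arch_sync_kernel_mappings() */
	return err;
}
```

With the early return, a failure mid-loop would leave the post-loop synchronization step unexecuted; breaking instead guarantees it runs on both success and failure.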
+1 -1
mm/zswap.c
··· 43 43 * statistics 44 44 **********************************/ 45 45 /* The number of compressed pages currently stored in zswap */ 46 - atomic_long_t zswap_stored_pages = ATOMIC_INIT(0); 46 + atomic_long_t zswap_stored_pages = ATOMIC_LONG_INIT(0); 47 47 48 48 /* 49 49 * The statistics below are not protected from concurrent access for
+2 -1
net/8021q/vlan.c
··· 131 131 { 132 132 const char *name = real_dev->name; 133 133 134 - if (real_dev->features & NETIF_F_VLAN_CHALLENGED) { 134 + if (real_dev->features & NETIF_F_VLAN_CHALLENGED || 135 + real_dev->type != ARPHRD_ETHER) { 135 136 pr_info("VLANs not supported on %s\n", name); 136 137 NL_SET_ERR_MSG_MOD(extack, "VLANs not supported on device"); 137 138 return -EOPNOTSUPP;
+7 -3
net/bluetooth/hci_core.c
··· 57 57 58 58 /* HCI callback list */ 59 59 LIST_HEAD(hci_cb_list); 60 + DEFINE_MUTEX(hci_cb_list_lock); 60 61 61 62 /* HCI ID Numbering */ 62 63 static DEFINE_IDA(hci_index_ida); ··· 2973 2972 { 2974 2973 BT_DBG("%p name %s", cb, cb->name); 2975 2974 2976 - list_add_tail_rcu(&cb->list, &hci_cb_list); 2975 + mutex_lock(&hci_cb_list_lock); 2976 + list_add_tail(&cb->list, &hci_cb_list); 2977 + mutex_unlock(&hci_cb_list_lock); 2977 2978 2978 2979 return 0; 2979 2980 } ··· 2985 2982 { 2986 2983 BT_DBG("%p name %s", cb, cb->name); 2987 2984 2988 - list_del_rcu(&cb->list); 2989 - synchronize_rcu(); 2985 + mutex_lock(&hci_cb_list_lock); 2986 + list_del(&cb->list); 2987 + mutex_unlock(&hci_cb_list_lock); 2990 2988 2991 2989 return 0; 2992 2990 }
+22 -15
net/bluetooth/hci_event.c
··· 3391 3391 hci_update_scan(hdev); 3392 3392 } 3393 3393 3394 - params = hci_conn_params_lookup(hdev, &conn->dst, conn->dst_type); 3395 - if (params) { 3396 - switch (params->auto_connect) { 3397 - case HCI_AUTO_CONN_LINK_LOSS: 3398 - if (ev->reason != HCI_ERROR_CONNECTION_TIMEOUT) 3394 + /* Re-enable passive scanning if disconnected device is marked 3395 + * as auto-connectable. 3396 + */ 3397 + if (conn->type == LE_LINK) { 3398 + params = hci_conn_params_lookup(hdev, &conn->dst, 3399 + conn->dst_type); 3400 + if (params) { 3401 + switch (params->auto_connect) { 3402 + case HCI_AUTO_CONN_LINK_LOSS: 3403 + if (ev->reason != HCI_ERROR_CONNECTION_TIMEOUT) 3404 + break; 3405 + fallthrough; 3406 + 3407 + case HCI_AUTO_CONN_DIRECT: 3408 + case HCI_AUTO_CONN_ALWAYS: 3409 + hci_pend_le_list_del_init(params); 3410 + hci_pend_le_list_add(params, 3411 + &hdev->pend_le_conns); 3412 + hci_update_passive_scan(hdev); 3399 3413 break; 3400 - fallthrough; 3401 3414 3402 - case HCI_AUTO_CONN_DIRECT: 3403 - case HCI_AUTO_CONN_ALWAYS: 3404 - hci_pend_le_list_del_init(params); 3405 - hci_pend_le_list_add(params, &hdev->pend_le_conns); 3406 - hci_update_passive_scan(hdev); 3407 - break; 3408 - 3409 - default: 3410 - break; 3415 + default: 3416 + break; 3417 + } 3411 3418 } 3412 3419 } 3413 3420
-6
net/bluetooth/iso.c
··· 2187 2187 return HCI_LM_ACCEPT; 2188 2188 } 2189 2189 2190 - static bool iso_match(struct hci_conn *hcon) 2191 - { 2192 - return hcon->type == ISO_LINK || hcon->type == LE_LINK; 2193 - } 2194 - 2195 2190 static void iso_connect_cfm(struct hci_conn *hcon, __u8 status) 2196 2191 { 2197 2192 if (hcon->type != ISO_LINK) { ··· 2368 2373 2369 2374 static struct hci_cb iso_cb = { 2370 2375 .name = "ISO", 2371 - .match = iso_match, 2372 2376 .connect_cfm = iso_connect_cfm, 2373 2377 .disconn_cfm = iso_disconn_cfm, 2374 2378 };
+6 -6
net/bluetooth/l2cap_core.c
··· 7182 7182 return NULL; 7183 7183 } 7184 7184 7185 - static bool l2cap_match(struct hci_conn *hcon) 7186 - { 7187 - return hcon->type == ACL_LINK || hcon->type == LE_LINK; 7188 - } 7189 - 7190 7185 static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status) 7191 7186 { 7192 7187 struct hci_dev *hdev = hcon->hdev; 7193 7188 struct l2cap_conn *conn; 7194 7189 struct l2cap_chan *pchan; 7195 7190 u8 dst_type; 7191 + 7192 + if (hcon->type != ACL_LINK && hcon->type != LE_LINK) 7193 + return; 7196 7194 7197 7195 BT_DBG("hcon %p bdaddr %pMR status %d", hcon, &hcon->dst, status); 7198 7196 ··· 7256 7258 7257 7259 static void l2cap_disconn_cfm(struct hci_conn *hcon, u8 reason) 7258 7260 { 7261 + if (hcon->type != ACL_LINK && hcon->type != LE_LINK) 7262 + return; 7263 + 7259 7264 BT_DBG("hcon %p reason %d", hcon, reason); 7260 7265 7261 7266 l2cap_conn_del(hcon, bt_to_errno(reason)); ··· 7566 7565 7567 7566 static struct hci_cb l2cap_cb = { 7568 7567 .name = "L2CAP", 7569 - .match = l2cap_match, 7570 7568 .connect_cfm = l2cap_connect_cfm, 7571 7569 .disconn_cfm = l2cap_disconn_cfm, 7572 7570 .security_cfm = l2cap_security_cfm,
+5
net/bluetooth/mgmt.c
··· 9660 9660 sizeof(*ev) + (name ? eir_precalc_len(name_len) : 0) + 9661 9661 eir_precalc_len(sizeof(conn->dev_class))); 9662 9662 9663 + if (!skb) 9664 + return; 9665 + 9663 9666 ev = skb_put(skb, sizeof(*ev)); 9664 9667 bacpy(&ev->addr.bdaddr, &conn->dst); 9665 9668 ev->addr.type = link_to_bdaddr(conn->type, conn->dst_type); ··· 10416 10413 10417 10414 skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_FOUND, 10418 10415 sizeof(*ev) + (name ? eir_precalc_len(name_len) : 0)); 10416 + if (!skb) 10417 + return; 10419 10418 10420 10419 ev = skb_put(skb, sizeof(*ev)); 10421 10420 bacpy(&ev->addr.bdaddr, bdaddr);
-6
net/bluetooth/rfcomm/core.c
··· 2134 2134 return 0; 2135 2135 } 2136 2136 2137 - static bool rfcomm_match(struct hci_conn *hcon) 2138 - { 2139 - return hcon->type == ACL_LINK; 2140 - } 2141 - 2142 2137 static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt) 2143 2138 { 2144 2139 struct rfcomm_session *s; ··· 2180 2185 2181 2186 static struct hci_cb rfcomm_cb = { 2182 2187 .name = "RFCOMM", 2183 - .match = rfcomm_match, 2184 2188 .security_cfm = rfcomm_security_cfm 2185 2189 }; 2186 2190
+18 -7
net/bluetooth/sco.c
··· 107 107 kref_put(&conn->ref, sco_conn_free); 108 108 } 109 109 110 + static struct sco_conn *sco_conn_hold(struct sco_conn *conn) 111 + { 112 + BT_DBG("conn %p refcnt %u", conn, kref_read(&conn->ref)); 113 + 114 + kref_get(&conn->ref); 115 + return conn; 116 + } 117 + 110 118 static struct sco_conn *sco_conn_hold_unless_zero(struct sco_conn *conn) 111 119 { 112 120 if (!conn) ··· 1361 1353 bacpy(&sco_pi(sk)->src, &conn->hcon->src); 1362 1354 bacpy(&sco_pi(sk)->dst, &conn->hcon->dst); 1363 1355 1356 + sco_conn_hold(conn); 1364 1357 hci_conn_hold(conn->hcon); 1365 1358 __sco_chan_add(conn, sk, parent); 1366 1359 ··· 1407 1398 return lm; 1408 1399 } 1409 1400 1410 - static bool sco_match(struct hci_conn *hcon) 1411 - { 1412 - return hcon->type == SCO_LINK || hcon->type == ESCO_LINK; 1413 - } 1414 - 1415 1401 static void sco_connect_cfm(struct hci_conn *hcon, __u8 status) 1416 1402 { 1403 + if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK) 1404 + return; 1405 + 1417 1406 BT_DBG("hcon %p bdaddr %pMR status %u", hcon, &hcon->dst, status); 1418 1407 1419 1408 if (!status) { 1420 1409 struct sco_conn *conn; 1421 1410 1422 1411 conn = sco_conn_add(hcon); 1423 - if (conn) 1412 + if (conn) { 1424 1413 sco_conn_ready(conn); 1414 + sco_conn_put(conn); 1415 + } 1425 1416 } else 1426 1417 sco_conn_del(hcon, bt_to_errno(status)); 1427 1418 } 1428 1419 1429 1420 static void sco_disconn_cfm(struct hci_conn *hcon, __u8 reason) 1430 1421 { 1422 + if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK) 1423 + return; 1424 + 1431 1425 BT_DBG("hcon %p reason %d", hcon, reason); 1432 1426 1433 1427 sco_conn_del(hcon, bt_to_errno(reason)); ··· 1456 1444 1457 1445 static struct hci_cb sco_cb = { 1458 1446 .name = "SCO", 1459 - .match = sco_match, 1460 1447 .connect_cfm = sco_connect_cfm, 1461 1448 .disconn_cfm = sco_disconn_cfm, 1462 1449 };
+3
net/core/dev.c
··· 3872 3872 { 3873 3873 netdev_features_t features; 3874 3874 3875 + if (!skb_frags_readable(skb)) 3876 + goto out_kfree_skb; 3877 + 3875 3878 features = netif_skb_features(skb); 3876 3879 skb = validate_xmit_vlan(skb, features); 3877 3880 if (unlikely(!skb))
+3 -1
net/core/devmem.c
··· 109 109 struct netdev_rx_queue *rxq; 110 110 unsigned long xa_idx; 111 111 unsigned int rxq_idx; 112 + int err; 112 113 113 114 if (binding->list.next) 114 115 list_del(&binding->list); ··· 121 120 122 121 rxq_idx = get_netdev_rx_queue_index(rxq); 123 122 124 - WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx)); 123 + err = netdev_rx_queue_restart(binding->dev, rxq_idx); 124 + WARN_ON(err && err != -ENETDOWN); 125 125 } 126 126 127 127 xa_erase(&net_devmem_dmabuf_bindings, binding->id);
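The devmem.c change stops wrapping the restart call directly in WARN_ON() and tolerates -ENETDOWN, which is an expected outcome when the device is already down. A hedged sketch of the pattern, with rxq_restart() and a `warned` counter as hypothetical stand-ins:

```c
#include <assert.h>
#include <errno.h>

static int warned;

/* Stand-in for netdev_rx_queue_restart(): returns 0, -ENETDOWN, or
 * some other negative errno. */
static int rxq_restart(int outcome) { return outcome; }

static void unbind(int outcome)
{
	int err = rxq_restart(outcome);

	/* -ENETDOWN is expected when the device is down; only other
	 * failures are worth a warning. */
	if (err && err != -ENETDOWN)
		warned++;
}
```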
+7 -2
net/core/netpoll.c
··· 319 319 static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb) 320 320 { 321 321 netdev_tx_t status = NETDEV_TX_BUSY; 322 + netdev_tx_t ret = NET_XMIT_DROP; 322 323 struct net_device *dev; 323 324 unsigned long tries; 324 325 /* It is up to the caller to keep npinfo alive. */ ··· 328 327 lockdep_assert_irqs_disabled(); 329 328 330 329 dev = np->dev; 330 + rcu_read_lock(); 331 331 npinfo = rcu_dereference_bh(dev->npinfo); 332 332 333 333 if (!npinfo || !netif_running(dev) || !netif_device_present(dev)) { 334 334 dev_kfree_skb_irq(skb); 335 - return NET_XMIT_DROP; 335 + goto out; 336 336 } 337 337 338 338 /* don't get messages out of order, and no recursion */ ··· 372 370 skb_queue_tail(&npinfo->txq, skb); 373 371 schedule_delayed_work(&npinfo->tx_work,0); 374 372 } 375 - return NETDEV_TX_OK; 373 + ret = NETDEV_TX_OK; 374 + out: 375 + rcu_read_unlock(); 376 + return ret; 376 377 } 377 378 378 379 netdev_tx_t netpoll_send_skb(struct netpoll *np, struct sk_buff *skb)
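The netpoll.c hunk brackets the transmit path with rcu_read_lock()/rcu_read_unlock() and converts the early `return NET_XMIT_DROP` into `goto out` so the unlock always runs. The single-exit shape, sketched with stand-in lock()/unlock() functions and a depth counter instead of real RCU primitives:

```c
#include <assert.h>

static int lock_depth;

static void lock(void)   { lock_depth++; }	/* rcu_read_lock() stand-in */
static void unlock(void) { lock_depth--; }	/* rcu_read_unlock() stand-in */

static int send_skb(int dev_ok)
{
	int ret = -1;			/* NET_XMIT_DROP stand-in */

	lock();
	if (!dev_ok)
		goto out;		/* was: return -1; leaked the lock */

	ret = 0;			/* NETDEV_TX_OK stand-in */
out:
	unlock();
	return ret;
}
```

Centralizing the exit means every new early-error path added later also unlocks correctly, which is the usual motivation for the kernel's goto-out convention.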
+4 -4
net/ethtool/cabletest.c
··· 72 72 dev = req_info.dev; 73 73 74 74 rtnl_lock(); 75 - phydev = ethnl_req_get_phydev(&req_info, 76 - tb[ETHTOOL_A_CABLE_TEST_HEADER], 75 + phydev = ethnl_req_get_phydev(&req_info, tb, 76 + ETHTOOL_A_CABLE_TEST_HEADER, 77 77 info->extack); 78 78 if (IS_ERR_OR_NULL(phydev)) { 79 79 ret = -EOPNOTSUPP; ··· 339 339 goto out_dev_put; 340 340 341 341 rtnl_lock(); 342 - phydev = ethnl_req_get_phydev(&req_info, 343 - tb[ETHTOOL_A_CABLE_TEST_TDR_HEADER], 342 + phydev = ethnl_req_get_phydev(&req_info, tb, 343 + ETHTOOL_A_CABLE_TEST_TDR_HEADER, 344 344 info->extack); 345 345 if (IS_ERR_OR_NULL(phydev)) { 346 346 ret = -EOPNOTSUPP;
+1 -1
net/ethtool/linkstate.c
··· 103 103 struct phy_device *phydev; 104 104 int ret; 105 105 106 - phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_LINKSTATE_HEADER], 106 + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_LINKSTATE_HEADER, 107 107 info->extack); 108 108 if (IS_ERR(phydev)) { 109 109 ret = PTR_ERR(phydev);
+3 -3
net/ethtool/netlink.c
··· 211 211 } 212 212 213 213 struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info, 214 - const struct nlattr *header, 214 + struct nlattr **tb, unsigned int header, 215 215 struct netlink_ext_ack *extack) 216 216 { 217 217 struct phy_device *phydev; ··· 225 225 return req_info->dev->phydev; 226 226 227 227 phydev = phy_link_topo_get_phy(req_info->dev, req_info->phy_index); 228 - if (!phydev) { 229 - NL_SET_ERR_MSG_ATTR(extack, header, 228 + if (!phydev && tb) { 229 + NL_SET_ERR_MSG_ATTR(extack, tb[header], 230 230 "no phy matching phyindex"); 231 231 return ERR_PTR(-ENODEV); 232 232 }
+3 -2
net/ethtool/netlink.h
··· 275 275 * ethnl_req_get_phydev() - Gets the phy_device targeted by this request, 276 276 * if any. Must be called under rntl_lock(). 277 277 * @req_info: The ethnl request to get the phy from. 278 - * @header: The netlink header, used for error reporting. 278 + * @tb: The netlink attributes array, for error reporting. 279 + * @header: The netlink header index, used for error reporting. 279 280 * @extack: The netlink extended ACK, for error reporting. 280 281 * 281 282 * The caller must hold RTNL, until it's done interacting with the returned ··· 290 289 * is returned. 291 290 */ 292 291 struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info, 293 - const struct nlattr *header, 292 + struct nlattr **tb, unsigned int header, 294 293 struct netlink_ext_ack *extack); 295 294 296 295 /**
+1 -1
net/ethtool/phy.c
··· 125 125 struct phy_req_info *req_info = PHY_REQINFO(req_base); 126 126 struct phy_device *phydev; 127 127 128 - phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PHY_HEADER], 128 + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PHY_HEADER, 129 129 extack); 130 130 if (!phydev) 131 131 return 0;
+3 -3
net/ethtool/plca.c
··· 62 62 struct phy_device *phydev; 63 63 int ret; 64 64 65 - phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PLCA_HEADER], 65 + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PLCA_HEADER, 66 66 info->extack); 67 67 // check that the PHY device is available and connected 68 68 if (IS_ERR_OR_NULL(phydev)) { ··· 152 152 bool mod = false; 153 153 int ret; 154 154 155 - phydev = ethnl_req_get_phydev(req_info, tb[ETHTOOL_A_PLCA_HEADER], 155 + phydev = ethnl_req_get_phydev(req_info, tb, ETHTOOL_A_PLCA_HEADER, 156 156 info->extack); 157 157 // check that the PHY device is available and connected 158 158 if (IS_ERR_OR_NULL(phydev)) ··· 211 211 struct phy_device *phydev; 212 212 int ret; 213 213 214 - phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PLCA_HEADER], 214 + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PLCA_HEADER, 215 215 info->extack); 216 216 // check that the PHY device is available and connected 217 217 if (IS_ERR_OR_NULL(phydev)) {
+2 -2
net/ethtool/pse-pd.c
··· 64 64 if (ret < 0) 65 65 return ret; 66 66 67 - phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PSE_HEADER], 67 + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PSE_HEADER, 68 68 info->extack); 69 69 if (IS_ERR(phydev)) 70 70 return -ENODEV; ··· 261 261 struct phy_device *phydev; 262 262 int ret; 263 263 264 - phydev = ethnl_req_get_phydev(req_info, tb[ETHTOOL_A_PSE_HEADER], 264 + phydev = ethnl_req_get_phydev(req_info, tb, ETHTOOL_A_PSE_HEADER, 265 265 info->extack); 266 266 ret = ethnl_set_pse_validate(phydev, info); 267 267 if (ret)
+1 -1
net/ethtool/stats.c
··· 138 138 struct phy_device *phydev; 139 139 int ret; 140 140 141 - phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_STATS_HEADER], 141 + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_STATS_HEADER, 142 142 info->extack); 143 143 if (IS_ERR(phydev)) 144 144 return PTR_ERR(phydev);
+1 -1
net/ethtool/strset.c
··· 309 309 return 0; 310 310 } 311 311 312 - phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_HEADER_FLAGS], 312 + phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_HEADER_FLAGS, 313 313 info->extack); 314 314 315 315 /* phydev can be NULL, check for errors only */
+2 -1
net/ethtool/tsinfo.c
··· 290 290 reply_data = ctx->reply_data; 291 291 memset(reply_data, 0, sizeof(*reply_data)); 292 292 reply_data->base.dev = dev; 293 - memset(&reply_data->ts_info, 0, sizeof(reply_data->ts_info)); 293 + reply_data->ts_info.cmd = ETHTOOL_GET_TS_INFO; 294 + reply_data->ts_info.phc_index = -1; 294 295 295 296 return ehdr; 296 297 }
+7 -4
net/ipv4/tcp_offload.c
··· 13 13 #include <net/tcp.h> 14 14 #include <net/protocol.h> 15 15 16 - static void tcp_gso_tstamp(struct sk_buff *skb, unsigned int ts_seq, 16 + static void tcp_gso_tstamp(struct sk_buff *skb, struct sk_buff *gso_skb, 17 17 unsigned int seq, unsigned int mss) 18 18 { 19 + u32 flags = skb_shinfo(gso_skb)->tx_flags & SKBTX_ANY_TSTAMP; 20 + u32 ts_seq = skb_shinfo(gso_skb)->tskey; 21 + 19 22 while (skb) { 20 23 if (before(ts_seq, seq + mss)) { 21 - skb_shinfo(skb)->tx_flags |= SKBTX_SW_TSTAMP; 24 + skb_shinfo(skb)->tx_flags |= flags; 22 25 skb_shinfo(skb)->tskey = ts_seq; 23 26 return; 24 27 } ··· 196 193 th = tcp_hdr(skb); 197 194 seq = ntohl(th->seq); 198 195 199 - if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_SW_TSTAMP)) 200 - tcp_gso_tstamp(segs, skb_shinfo(gso_skb)->tskey, seq, mss); 196 + if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_ANY_TSTAMP)) 197 + tcp_gso_tstamp(segs, gso_skb, seq, mss); 201 198 202 199 newcheck = ~csum_fold(csum_add(csum_unfold(th->check), delta)); 203 200
+6 -2
net/ipv4/udp_offload.c
··· 321 321 322 322 /* clear destructor to avoid skb_segment assigning it to tail */ 323 323 copy_dtor = gso_skb->destructor == sock_wfree; 324 - if (copy_dtor) 324 + if (copy_dtor) { 325 325 gso_skb->destructor = NULL; 326 + gso_skb->sk = NULL; 327 + } 326 328 327 329 segs = skb_segment(gso_skb, features); 328 330 if (IS_ERR_OR_NULL(segs)) { 329 - if (copy_dtor) 331 + if (copy_dtor) { 330 332 gso_skb->destructor = sock_wfree; 333 + gso_skb->sk = sk; 334 + } 331 335 return segs; 332 336 } 333 337
+9 -6
net/ipv6/addrconf.c
··· 3209 3209 struct in6_addr addr; 3210 3210 struct net_device *dev; 3211 3211 struct net *net = dev_net(idev->dev); 3212 - int scope, plen, offset = 0; 3212 + int scope, plen; 3213 3213 u32 pflags = 0; 3214 3214 3215 3215 ASSERT_RTNL(); 3216 3216 3217 3217 memset(&addr, 0, sizeof(struct in6_addr)); 3218 - /* in case of IP6GRE the dev_addr is an IPv6 and therefore we use only the last 4 bytes */ 3219 - if (idev->dev->addr_len == sizeof(struct in6_addr)) 3220 - offset = sizeof(struct in6_addr) - 4; 3221 - memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4); 3218 + memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4); 3222 3219 3223 3220 if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) { 3224 3221 scope = IPV6_ADDR_COMPATv4; ··· 3526 3529 return; 3527 3530 } 3528 3531 3529 - if (dev->type == ARPHRD_ETHER) { 3532 + /* Generate the IPv6 link-local address using addrconf_addr_gen(), 3533 + * unless we have an IPv4 GRE device not bound to an IP address and 3534 + * which is in EUI64 mode (as __ipv6_isatap_ifid() would fail in this 3535 + * case). Such devices fall back to add_v4_addrs() instead. 3536 + */ 3537 + if (!(dev->type == ARPHRD_IPGRE && *(__be32 *)dev->dev_addr == 0 && 3538 + idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_EUI64)) { 3530 3539 addrconf_addr_gen(idev, true); 3531 3540 return; 3532 3541 }
+3 -1
net/ipv6/ila/ila_lwt.c
··· 88 88 goto drop; 89 89 } 90 90 91 - if (ilwt->connected) { 91 + /* cache only if we don't create a dst reference loop */ 92 + if (ilwt->connected && orig_dst->lwtstate != dst->lwtstate) { 92 93 local_bh_disable(); 93 94 dst_cache_set_ip6(&ilwt->dst_cache, dst, &fl6.saddr); 94 95 local_bh_enable(); 95 96 } 96 97 } 97 98 99 + skb_dst_drop(skb); 98 100 skb_dst_set(skb, dst); 99 101 return dst_output(net, sk, skb); 100 102
+27 -22
net/llc/llc_s_ac.c
··· 24 24 #include <net/llc_s_ac.h> 25 25 #include <net/llc_s_ev.h> 26 26 #include <net/llc_sap.h> 27 - 27 + #include <net/sock.h> 28 28 29 29 /** 30 30 * llc_sap_action_unitdata_ind - forward UI PDU to network layer ··· 40 40 return 0; 41 41 } 42 42 43 + static int llc_prepare_and_xmit(struct sk_buff *skb) 44 + { 45 + struct llc_sap_state_ev *ev = llc_sap_ev(skb); 46 + struct sk_buff *nskb; 47 + int rc; 48 + 49 + rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); 50 + if (rc) 51 + return rc; 52 + 53 + nskb = skb_clone(skb, GFP_ATOMIC); 54 + if (!nskb) 55 + return -ENOMEM; 56 + 57 + if (skb->sk) 58 + skb_set_owner_w(nskb, skb->sk); 59 + 60 + return dev_queue_xmit(nskb); 61 + } 62 + 43 63 /** 44 64 * llc_sap_action_send_ui - sends UI PDU resp to UNITDATA REQ to MAC layer 45 65 * @sap: SAP ··· 72 52 int llc_sap_action_send_ui(struct llc_sap *sap, struct sk_buff *skb) 73 53 { 74 54 struct llc_sap_state_ev *ev = llc_sap_ev(skb); 75 - int rc; 76 55 77 56 llc_pdu_header_init(skb, LLC_PDU_TYPE_U, ev->saddr.lsap, 78 57 ev->daddr.lsap, LLC_PDU_CMD); 79 58 llc_pdu_init_as_ui_cmd(skb); 80 - rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); 81 - if (likely(!rc)) { 82 - skb_get(skb); 83 - rc = dev_queue_xmit(skb); 84 - } 85 - return rc; 59 + 60 + return llc_prepare_and_xmit(skb); 86 61 } 87 62 88 63 /** ··· 92 77 int llc_sap_action_send_xid_c(struct llc_sap *sap, struct sk_buff *skb) 93 78 { 94 79 struct llc_sap_state_ev *ev = llc_sap_ev(skb); 95 - int rc; 96 80 97 81 llc_pdu_header_init(skb, LLC_PDU_TYPE_U_XID, ev->saddr.lsap, 98 82 ev->daddr.lsap, LLC_PDU_CMD); 99 83 llc_pdu_init_as_xid_cmd(skb, LLC_XID_NULL_CLASS_2, 0); 100 - rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); 101 - if (likely(!rc)) { 102 - skb_get(skb); 103 - rc = dev_queue_xmit(skb); 104 - } 105 - return rc; 84 + 85 + return llc_prepare_and_xmit(skb); 106 86 } 107 87 108 88 /** ··· 143 133 int llc_sap_action_send_test_c(struct llc_sap *sap, struct sk_buff *skb) 144 134 { 145 135 struct 
llc_sap_state_ev *ev = llc_sap_ev(skb); 146 - int rc; 147 136 148 137 llc_pdu_header_init(skb, LLC_PDU_TYPE_U, ev->saddr.lsap, 149 138 ev->daddr.lsap, LLC_PDU_CMD); 150 139 llc_pdu_init_as_test_cmd(skb); 151 - rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); 152 - if (likely(!rc)) { 153 - skb_get(skb); 154 - rc = dev_queue_xmit(skb); 155 - } 156 - return rc; 140 + 141 + return llc_prepare_and_xmit(skb); 157 142 } 158 143 159 144 int llc_sap_action_send_test_r(struct llc_sap *sap, struct sk_buff *skb)
+8 -2
net/mac80211/driver-ops.c
··· 116 116 117 117 sdata->flags &= ~IEEE80211_SDATA_IN_DRIVER; 118 118 119 - /* Remove driver debugfs entries */ 120 - ieee80211_debugfs_recreate_netdev(sdata, sdata->vif.valid_links); 119 + /* 120 + * Remove driver debugfs entries. 121 + * The virtual monitor interface doesn't get a debugfs 122 + * entry, so it's exempt here. 123 + */ 124 + if (sdata != rcu_access_pointer(local->monitor_sdata)) 125 + ieee80211_debugfs_recreate_netdev(sdata, 126 + sdata->vif.valid_links); 121 127 122 128 trace_drv_remove_interface(local, sdata); 123 129 local->ops->remove_interface(&local->hw, &sdata->vif);
+8 -1
net/mac80211/eht.c
··· 2 2 /* 3 3 * EHT handling 4 4 * 5 - * Copyright(c) 2021-2024 Intel Corporation 5 + * Copyright(c) 2021-2025 Intel Corporation 6 6 */ 7 7 8 8 #include "ieee80211_i.h" ··· 75 75 76 76 link_sta->cur_max_bandwidth = ieee80211_sta_cap_rx_bw(link_sta); 77 77 link_sta->pub->bandwidth = ieee80211_sta_cur_vht_bw(link_sta); 78 + 79 + /* 80 + * The MPDU length bits are reserved on all but 2.4 GHz and get set via 81 + * VHT (5 GHz) or HE (6 GHz) capabilities. 82 + */ 83 + if (sband->band != NL80211_BAND_2GHZ) 84 + return; 78 85 79 86 switch (u8_get_bits(eht_cap->eht_cap_elem.mac_cap_info[0], 80 87 IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_MASK)) {
+6 -5
net/mac80211/iface.c
··· 1206 1206 return; 1207 1207 } 1208 1208 1209 - RCU_INIT_POINTER(local->monitor_sdata, NULL); 1210 - mutex_unlock(&local->iflist_mtx); 1211 - 1212 - synchronize_net(); 1213 - 1209 + clear_bit(SDATA_STATE_RUNNING, &sdata->state); 1214 1210 ieee80211_link_release_channel(&sdata->deflink); 1215 1211 1216 1212 if (ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF)) 1217 1213 drv_remove_interface(local, sdata); 1214 + 1215 + RCU_INIT_POINTER(local->monitor_sdata, NULL); 1216 + mutex_unlock(&local->iflist_mtx); 1217 + 1218 + synchronize_net(); 1218 1219 1219 1220 kfree(sdata); 1220 1221 }
+1
net/mac80211/mlme.c
··· 4959 4959 parse_params.start = bss_ies->data; 4960 4960 parse_params.len = bss_ies->len; 4961 4961 parse_params.bss = cbss; 4962 + parse_params.link_id = -1; 4962 4963 bss_elems = ieee802_11_parse_elems_full(&parse_params); 4963 4964 if (!bss_elems) { 4964 4965 ret = false;
+90 -45
net/mac80211/parse.c
··· 47 47 /* The EPCS Multi-Link element in the original elements */ 48 48 const struct element *ml_epcs_elem; 49 49 50 + bool multi_link_inner; 51 + bool skip_vendor; 52 + 50 53 /* 51 54 * scratch buffer that can be used for various element parsing related 52 55 * tasks, e.g., element de-fragmentation etc. ··· 155 152 switch (le16_get_bits(mle->control, 156 153 IEEE80211_ML_CONTROL_TYPE)) { 157 154 case IEEE80211_ML_CONTROL_TYPE_BASIC: 158 - if (elems_parse->ml_basic_elem) { 155 + if (elems_parse->multi_link_inner) { 159 156 elems->parse_error |= 160 157 IEEE80211_PARSE_ERR_DUP_NEST_ML_BASIC; 161 158 break; 162 159 } 163 - elems_parse->ml_basic_elem = elem; 164 160 break; 165 161 case IEEE80211_ML_CONTROL_TYPE_RECONF: 166 162 elems_parse->ml_reconf_elem = elem; ··· 401 399 IEEE80211_PARSE_ERR_BAD_ELEM_SIZE; 402 400 break; 403 401 case WLAN_EID_VENDOR_SPECIFIC: 402 + if (elems_parse->skip_vendor) 403 + break; 404 + 404 405 if (elen >= 4 && pos[0] == 0x00 && pos[1] == 0x50 && 405 406 pos[2] == 0xf2) { 406 407 /* Microsoft OUI (00:50:F2) */ ··· 871 866 } 872 867 } 873 868 874 - static void ieee80211_mle_parse_link(struct ieee80211_elems_parse *elems_parse, 875 - struct ieee80211_elems_parse_params *params) 869 + static const struct element * 870 + ieee80211_prep_mle_link_parse(struct ieee80211_elems_parse *elems_parse, 871 + struct ieee80211_elems_parse_params *params, 872 + struct ieee80211_elems_parse_params *sub) 876 873 { 877 874 struct ieee802_11_elems *elems = &elems_parse->elems; 878 875 struct ieee80211_mle_per_sta_profile *prof; 879 - struct ieee80211_elems_parse_params sub = { 880 - .mode = params->mode, 881 - .action = params->action, 882 - .from_ap = params->from_ap, 883 - .link_id = -1, 884 - }; 885 - ssize_t ml_len = elems->ml_basic_len; 886 - const struct element *non_inherit = NULL; 876 + const struct element *tmp; 877 + ssize_t ml_len; 887 878 const u8 *end; 879 + 880 + if (params->mode < IEEE80211_CONN_MODE_EHT) 881 + return NULL; 882 + 883 + 
for_each_element_extid(tmp, WLAN_EID_EXT_EHT_MULTI_LINK, 884 + elems->ie_start, elems->total_len) { 885 + const struct ieee80211_multi_link_elem *mle = 886 + (void *)tmp->data + 1; 887 + 888 + if (!ieee80211_mle_size_ok(tmp->data + 1, tmp->datalen - 1)) 889 + continue; 890 + 891 + if (le16_get_bits(mle->control, IEEE80211_ML_CONTROL_TYPE) != 892 + IEEE80211_ML_CONTROL_TYPE_BASIC) 893 + continue; 894 + 895 + elems_parse->ml_basic_elem = tmp; 896 + break; 897 + } 888 898 889 899 ml_len = cfg80211_defragment_element(elems_parse->ml_basic_elem, 890 900 elems->ie_start, ··· 911 891 WLAN_EID_FRAGMENT); 912 892 913 893 if (ml_len < 0) 914 - return; 894 + return NULL; 915 895 916 896 elems->ml_basic = (const void *)elems_parse->scratch_pos; 917 897 elems->ml_basic_len = ml_len; 918 898 elems_parse->scratch_pos += ml_len; 919 899 920 900 if (params->link_id == -1) 921 - return; 901 + return NULL; 922 902 923 903 ieee80211_mle_get_sta_prof(elems_parse, params->link_id); 924 904 prof = elems->prof; 925 905 926 906 if (!prof) 927 - return; 907 + return NULL; 928 908 929 909 /* check if we have the 4 bytes for the fixed part in assoc response */ 930 910 if (elems->sta_prof_len < sizeof(*prof) + prof->sta_info_len - 1 + 4) { 931 911 elems->prof = NULL; 932 912 elems->sta_prof_len = 0; 933 - return; 913 + return NULL; 934 914 } 935 915 936 916 /* ··· 939 919 * the -1 is because the 'sta_info_len' is accounted to as part of the 940 920 * per-STA profile, but not part of the 'u8 variable[]' portion. 
941 921 */ 942 - sub.start = prof->variable + prof->sta_info_len - 1 + 4; 922 + sub->start = prof->variable + prof->sta_info_len - 1 + 4; 943 923 end = (const u8 *)prof + elems->sta_prof_len; 944 - sub.len = end - sub.start; 924 + sub->len = end - sub->start; 945 925 946 - non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, 947 - sub.start, sub.len); 948 - _ieee802_11_parse_elems_full(&sub, elems_parse, non_inherit); 926 + sub->mode = params->mode; 927 + sub->action = params->action; 928 + sub->from_ap = params->from_ap; 929 + sub->link_id = -1; 930 + 931 + return cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, 932 + sub->start, sub->len); 949 933 } 950 934 951 935 static void ··· 997 973 struct ieee802_11_elems * 998 974 ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params) 999 975 { 976 + struct ieee80211_elems_parse_params sub = {}; 1000 977 struct ieee80211_elems_parse *elems_parse; 1001 - struct ieee802_11_elems *elems; 1002 978 const struct element *non_inherit = NULL; 1003 - u8 *nontransmitted_profile; 1004 - int nontransmitted_profile_len = 0; 979 + struct ieee802_11_elems *elems; 1005 980 size_t scratch_len = 3 * params->len; 981 + bool multi_link_inner = false; 1006 982 1007 983 BUILD_BUG_ON(offsetof(typeof(*elems_parse), elems) != 0); 984 + 985 + /* cannot parse for both a specific link and non-transmitted BSS */ 986 + if (WARN_ON(params->link_id >= 0 && params->bss)) 987 + return NULL; 1008 988 1009 989 elems_parse = kzalloc(struct_size(elems_parse, scratch, scratch_len), 1010 990 GFP_ATOMIC); ··· 1026 998 ieee80211_clear_tpe(&elems->tpe); 1027 999 ieee80211_clear_tpe(&elems->csa_tpe); 1028 1000 1029 - nontransmitted_profile = elems_parse->scratch_pos; 1030 - nontransmitted_profile_len = 1031 - ieee802_11_find_bssid_profile(params->start, params->len, 1032 - elems, params->bss, 1033 - nontransmitted_profile); 1034 - elems_parse->scratch_pos += nontransmitted_profile_len; 1035 - non_inherit = 
cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, 1036 - nontransmitted_profile, 1037 - nontransmitted_profile_len); 1001 + /* 1002 + * If we're looking for a non-transmitted BSS then we cannot at 1003 + * the same time be looking for a second link as the two can only 1004 + * appear in the same frame carrying info for different BSSes. 1005 + * 1006 + * In any case, we only look for one at a time, as encoded by 1007 + * the WARN_ON above. 1008 + */ 1009 + if (params->bss) { 1010 + int nontx_len = 1011 + ieee802_11_find_bssid_profile(params->start, 1012 + params->len, 1013 + elems, params->bss, 1014 + elems_parse->scratch_pos); 1015 + sub.start = elems_parse->scratch_pos; 1016 + sub.mode = params->mode; 1017 + sub.len = nontx_len; 1018 + sub.action = params->action; 1019 + sub.link_id = params->link_id; 1038 1020 1021 + /* consume the space used for non-transmitted profile */ 1022 + elems_parse->scratch_pos += nontx_len; 1023 + 1024 + non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE, 1025 + sub.start, nontx_len); 1026 + } else { 1027 + /* must always parse to get elems_parse->ml_basic_elem */ 1028 + non_inherit = ieee80211_prep_mle_link_parse(elems_parse, params, 1029 + &sub); 1030 + multi_link_inner = true; 1031 + } 1032 + 1033 + elems_parse->skip_vendor = 1034 + cfg80211_find_elem(WLAN_EID_VENDOR_SPECIFIC, 1035 + sub.start, sub.len); 1039 1036 elems->crc = _ieee802_11_parse_elems_full(params, elems_parse, 1040 1037 non_inherit); 1041 1038 1042 - /* Override with nontransmitted profile, if found */ 1043 - if (nontransmitted_profile_len) { 1044 - struct ieee80211_elems_parse_params sub = { 1045 - .mode = params->mode, 1046 - .start = nontransmitted_profile, 1047 - .len = nontransmitted_profile_len, 1048 - .action = params->action, 1049 - .link_id = params->link_id, 1050 - }; 1051 - 1039 + /* Override with nontransmitted/per-STA profile if found */ 1040 + if (sub.len) { 1041 + elems_parse->multi_link_inner = multi_link_inner; 1042 + 
elems_parse->skip_vendor = false; 1052 1043 _ieee802_11_parse_elems_full(&sub, elems_parse, NULL); 1053 1044 } 1054 - 1055 - ieee80211_mle_parse_link(elems_parse, params); 1056 1045 1057 1046 ieee80211_mle_defrag_reconf(elems_parse); 1058 1047
+5 -5
net/mac80211/rx.c
··· 6 6 * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net> 7 7 * Copyright 2013-2014 Intel Mobile Communications GmbH 8 8 * Copyright(c) 2015 - 2017 Intel Deutschland GmbH 9 - * Copyright (C) 2018-2024 Intel Corporation 9 + * Copyright (C) 2018-2025 Intel Corporation 10 10 */ 11 11 12 12 #include <linux/jiffies.h> ··· 3329 3329 return; 3330 3330 } 3331 3331 3332 - if (!ether_addr_equal(mgmt->sa, sdata->deflink.u.mgd.bssid) || 3333 - !ether_addr_equal(mgmt->bssid, sdata->deflink.u.mgd.bssid)) { 3332 + if (!ether_addr_equal(mgmt->sa, sdata->vif.cfg.ap_addr) || 3333 + !ether_addr_equal(mgmt->bssid, sdata->vif.cfg.ap_addr)) { 3334 3334 /* Not from the current AP or not associated yet. */ 3335 3335 return; 3336 3336 } ··· 3346 3346 3347 3347 skb_reserve(skb, local->hw.extra_tx_headroom); 3348 3348 resp = skb_put_zero(skb, 24); 3349 - memcpy(resp->da, mgmt->sa, ETH_ALEN); 3349 + memcpy(resp->da, sdata->vif.cfg.ap_addr, ETH_ALEN); 3350 3350 memcpy(resp->sa, sdata->vif.addr, ETH_ALEN); 3351 - memcpy(resp->bssid, sdata->deflink.u.mgd.bssid, ETH_ALEN); 3351 + memcpy(resp->bssid, sdata->vif.cfg.ap_addr, ETH_ALEN); 3352 3352 resp->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT | 3353 3353 IEEE80211_STYPE_ACTION); 3354 3354 skb_put(skb, 1 + sizeof(resp->u.action.u.sa_query));
+17 -3
net/mac80211/sta_info.c
··· 4 4 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz> 5 5 * Copyright 2013-2014 Intel Mobile Communications GmbH 6 6 * Copyright (C) 2015 - 2017 Intel Deutschland GmbH 7 - * Copyright (C) 2018-2023 Intel Corporation 7 + * Copyright (C) 2018-2024 Intel Corporation 8 8 */ 9 9 10 10 #include <linux/module.h> ··· 1335 1335 sta->sta.addr, new_state); 1336 1336 1337 1337 /* notify the driver before the actual changes so it can 1338 - * fail the transition 1338 + * fail the transition if the state is increasing. 1339 + * The driver is required not to fail when the transition 1340 + * is decreasing the state, so first, do all the preparation 1341 + * work and only then, notify the driver. 1339 1342 */ 1340 - if (test_sta_flag(sta, WLAN_STA_INSERTED)) { 1343 + if (new_state > sta->sta_state && 1344 + test_sta_flag(sta, WLAN_STA_INSERTED)) { 1341 1345 int err = drv_sta_state(sta->local, sta->sdata, sta, 1342 1346 sta->sta_state, new_state); 1343 1347 if (err) ··· 1415 1411 break; 1416 1412 default: 1417 1413 break; 1414 + } 1415 + 1416 + if (new_state < sta->sta_state && 1417 + test_sta_flag(sta, WLAN_STA_INSERTED)) { 1418 + int err = drv_sta_state(sta->local, sta->sdata, sta, 1419 + sta->sta_state, new_state); 1420 + 1421 + WARN_ONCE(err, 1422 + "Driver is not allowed to fail if the sta_state is transitioning down the list: %d\n", 1423 + err); 1418 1424 } 1419 1425 1420 1426 sta->sta_state = new_state;
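The sta_info.c hunk above splits the driver notification around a station state change: for *upward* transitions the driver is consulted first and may veto, while for *downward* transitions mac80211 does its teardown work first and the late driver callback is not allowed to fail (a failure only triggers a warning). A minimal userspace sketch of that ordering, with hypothetical names standing in for `drv_sta_state()`:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical driver callback: may refuse an upward transition,
 * must never refuse a downward one (mirrors drv_sta_state()). */
static int drv_state_cb(int old_state, int new_state)
{
	if (new_state > old_state && new_state > 2)
		return -1;	/* driver vetoes going above state 2 */
	return 0;
}

/* Apply a transition using the ordering from the patch:
 * - increasing: ask the driver first, bail out if it refuses
 * - decreasing: do the preparation work, then notify; a failure
 *   there is a driver bug and is only reported, not honoured. */
static int set_state(int *cur, int new_state)
{
	if (new_state > *cur) {
		if (drv_state_cb(*cur, new_state))
			return -1;	/* state unchanged */
	}
	/* ... internal bookkeeping for the new state goes here ... */
	if (new_state < *cur) {
		if (drv_state_cb(*cur, new_state))
			fprintf(stderr, "driver may not fail a downward transition\n");
	}
	*cur = new_state;
	return 0;
}
```

This is only a sketch of the control flow, not the mac80211 API; the real code additionally gates both calls on `WLAN_STA_INSERTED`.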
+8 -5
net/mac80211/util.c
··· 6 6 * Copyright 2007 Johannes Berg <johannes@sipsolutions.net> 7 7 * Copyright 2013-2014 Intel Mobile Communications GmbH 8 8 * Copyright (C) 2015-2017 Intel Deutschland GmbH 9 - * Copyright (C) 2018-2024 Intel Corporation 9 + * Copyright (C) 2018-2025 Intel Corporation 10 10 * 11 11 * utilities for mac80211 12 12 */ ··· 687 687 struct ieee80211_sub_if_data *sdata, 688 688 unsigned int queues, bool drop) 689 689 { 690 - if (!local->ops->flush) 690 + if (!local->ops->flush && !drop) 691 691 return; 692 692 693 693 /* ··· 714 714 } 715 715 } 716 716 717 - drv_flush(local, sdata, queues, drop); 717 + if (local->ops->flush) 718 + drv_flush(local, sdata, queues, drop); 718 719 719 720 ieee80211_wake_queues_by_reason(&local->hw, queues, 720 721 IEEE80211_QUEUE_STOP_REASON_FLUSH, ··· 2193 2192 ieee80211_reconfig_roc(local); 2194 2193 2195 2194 /* Requeue all works */ 2196 - list_for_each_entry(sdata, &local->interfaces, list) 2197 - wiphy_work_queue(local->hw.wiphy, &sdata->work); 2195 + list_for_each_entry(sdata, &local->interfaces, list) { 2196 + if (ieee80211_sdata_running(sdata)) 2197 + wiphy_work_queue(local->hw.wiphy, &sdata->work); 2198 + } 2198 2199 } 2199 2200 2200 2201 ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
+8 -2
net/mctp/route.c
··· 332 332 & MCTP_HDR_SEQ_MASK; 333 333 334 334 if (!key->reasm_head) { 335 - key->reasm_head = skb; 336 - key->reasm_tailp = &(skb_shinfo(skb)->frag_list); 335 + /* Since we're manipulating the shared frag_list, ensure it isn't 336 + * shared with any other SKBs. 337 + */ 338 + key->reasm_head = skb_unshare(skb, GFP_ATOMIC); 339 + if (!key->reasm_head) 340 + return -ENOMEM; 341 + 342 + key->reasm_tailp = &(skb_shinfo(key->reasm_head)->frag_list); 337 343 key->last_seq = this_seq; 338 344 return 0; 339 345 }
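The MCTP fix above calls `skb_unshare()` before storing an skb as the reassembly head, so that appending fragments to its `frag_list` cannot corrupt a clone held elsewhere. The copy-on-shared idea can be sketched in plain C with a toy refcounted buffer (not the skb API; all names here are illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for a cloned skb: payload plus a share count. */
struct buf {
	int users;	/* how many owners see this payload */
	char data[32];
};

/* Return a buffer that is safe to mutate: reuse it if we are the
 * only user, otherwise hand back a private copy. This mirrors
 * skb_unshare()'s contract in miniature. */
static struct buf *buf_unshare(struct buf *b)
{
	struct buf *priv;

	if (b->users == 1)
		return b;
	priv = malloc(sizeof(*priv));
	if (!priv)
		return NULL;	/* the patch returns -ENOMEM here */
	memcpy(priv, b, sizeof(*priv));
	priv->users = 1;
	b->users--;	/* we no longer reference the shared copy */
	return priv;
}
```

The kunit test added in the next hunk exercises exactly this case by feeding fragments that are clones of one another into `mctp_route_input()`.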
+109
net/mctp/test/route-test.c
··· 921 921 __mctp_route_test_fini(test, dev, rt, sock); 922 922 } 923 923 924 + /* Input route to socket, using a fragmented message created from clones. 925 + */ 926 + static void mctp_test_route_input_cloned_frag(struct kunit *test) 927 + { 928 + /* 5 packet fragments, forming 2 complete messages */ 929 + const struct mctp_hdr hdrs[5] = { 930 + RX_FRAG(FL_S, 0), 931 + RX_FRAG(0, 1), 932 + RX_FRAG(FL_E, 2), 933 + RX_FRAG(FL_S, 0), 934 + RX_FRAG(FL_E, 1), 935 + }; 936 + struct mctp_test_route *rt; 937 + struct mctp_test_dev *dev; 938 + struct sk_buff *skb[5]; 939 + struct sk_buff *rx_skb; 940 + struct socket *sock; 941 + size_t data_len; 942 + u8 compare[100]; 943 + u8 flat[100]; 944 + size_t total; 945 + void *p; 946 + int rc; 947 + 948 + /* Arbitrary length */ 949 + data_len = 3; 950 + total = data_len + sizeof(struct mctp_hdr); 951 + 952 + __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY); 953 + 954 + /* Create a single skb initially with concatenated packets */ 955 + skb[0] = mctp_test_create_skb(&hdrs[0], 5 * total); 956 + mctp_test_skb_set_dev(skb[0], dev); 957 + memset(skb[0]->data, 0 * 0x11, skb[0]->len); 958 + memcpy(skb[0]->data, &hdrs[0], sizeof(struct mctp_hdr)); 959 + 960 + /* Extract and populate packets */ 961 + for (int i = 1; i < 5; i++) { 962 + skb[i] = skb_clone(skb[i - 1], GFP_ATOMIC); 963 + KUNIT_ASSERT_TRUE(test, skb[i]); 964 + p = skb_pull(skb[i], total); 965 + KUNIT_ASSERT_TRUE(test, p); 966 + skb_reset_network_header(skb[i]); 967 + memcpy(skb[i]->data, &hdrs[i], sizeof(struct mctp_hdr)); 968 + memset(&skb[i]->data[sizeof(struct mctp_hdr)], i * 0x11, data_len); 969 + } 970 + for (int i = 0; i < 5; i++) 971 + skb_trim(skb[i], total); 972 + 973 + /* SOM packets have a type byte to match the socket */ 974 + skb[0]->data[4] = 0; 975 + skb[3]->data[4] = 0; 976 + 977 + skb_dump("pkt1 ", skb[0], false); 978 + skb_dump("pkt2 ", skb[1], false); 979 + skb_dump("pkt3 ", skb[2], false); 980 + skb_dump("pkt4 ", skb[3], false); 981 + 
skb_dump("pkt5 ", skb[4], false); 982 + 983 + for (int i = 0; i < 5; i++) { 984 + KUNIT_EXPECT_EQ(test, refcount_read(&skb[i]->users), 1); 985 + /* Take a reference so we can check refcounts at the end */ 986 + skb_get(skb[i]); 987 + } 988 + 989 + /* Feed the fragments into MCTP core */ 990 + for (int i = 0; i < 5; i++) { 991 + rc = mctp_route_input(&rt->rt, skb[i]); 992 + KUNIT_EXPECT_EQ(test, rc, 0); 993 + } 994 + 995 + /* Receive first reassembled message */ 996 + rx_skb = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc); 997 + KUNIT_EXPECT_EQ(test, rc, 0); 998 + KUNIT_EXPECT_EQ(test, rx_skb->len, 3 * data_len); 999 + rc = skb_copy_bits(rx_skb, 0, flat, rx_skb->len); 1000 + for (int i = 0; i < rx_skb->len; i++) 1001 + compare[i] = (i / data_len) * 0x11; 1002 + /* Set type byte */ 1003 + compare[0] = 0; 1004 + 1005 + KUNIT_EXPECT_MEMEQ(test, flat, compare, rx_skb->len); 1006 + KUNIT_EXPECT_EQ(test, refcount_read(&rx_skb->users), 1); 1007 + kfree_skb(rx_skb); 1008 + 1009 + /* Receive second reassembled message */ 1010 + rx_skb = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc); 1011 + KUNIT_EXPECT_EQ(test, rc, 0); 1012 + KUNIT_EXPECT_EQ(test, rx_skb->len, 2 * data_len); 1013 + rc = skb_copy_bits(rx_skb, 0, flat, rx_skb->len); 1014 + for (int i = 0; i < rx_skb->len; i++) 1015 + compare[i] = (i / data_len + 3) * 0x11; 1016 + /* Set type byte */ 1017 + compare[0] = 0; 1018 + 1019 + KUNIT_EXPECT_MEMEQ(test, flat, compare, rx_skb->len); 1020 + KUNIT_EXPECT_EQ(test, refcount_read(&rx_skb->users), 1); 1021 + kfree_skb(rx_skb); 1022 + 1023 + /* Check input skb refcounts */ 1024 + for (int i = 0; i < 5; i++) { 1025 + KUNIT_EXPECT_EQ(test, refcount_read(&skb[i]->users), 1); 1026 + kfree_skb(skb[i]); 1027 + } 1028 + 1029 + __mctp_route_test_fini(test, dev, rt, sock); 1030 + } 1031 + 924 1032 #if IS_ENABLED(CONFIG_MCTP_FLOWS) 925 1033 926 1034 static void mctp_test_flow_init(struct kunit *test, ··· 1252 1144 KUNIT_CASE(mctp_test_packet_flow), 1253 1145 
KUNIT_CASE(mctp_test_fragment_flow), 1254 1146 KUNIT_CASE(mctp_test_route_output_key_create), 1147 + KUNIT_CASE(mctp_test_route_input_cloned_frag), 1255 1148 {} 1256 1149 }; 1257 1150
+15 -3
net/mptcp/pm_netlink.c
··· 977 977 978 978 static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet, 979 979 struct mptcp_pm_addr_entry *entry, 980 - bool needs_id) 980 + bool needs_id, bool replace) 981 981 { 982 982 struct mptcp_pm_addr_entry *cur, *del_entry = NULL; 983 983 unsigned int addr_max; ··· 1016 1016 } 1017 1017 if (entry->addr.id) 1018 1018 goto out; 1019 + 1020 + /* allow callers that only need to look up the local 1021 + * addr's id to skip replacement. This allows them to 1022 + * avoid calling synchronize_rcu in the packet recv 1023 + * path. 1024 + */ 1025 + if (!replace) { 1026 + kfree(entry); 1027 + ret = cur->addr.id; 1028 + goto out; 1029 + } 1019 1030 1020 1031 pernet->addrs--; 1021 1032 entry->addr.id = cur->addr.id; ··· 1176 1165 entry->ifindex = 0; 1177 1166 entry->flags = MPTCP_PM_ADDR_FLAG_IMPLICIT; 1178 1167 entry->lsk = NULL; 1179 - ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true); 1168 + ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true, false); 1180 1169 if (ret < 0) 1181 1170 kfree(entry); 1182 1171 ··· 1444 1433 } 1445 1434 } 1446 1435 ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, 1447 - !mptcp_pm_has_addr_attr_id(attr, info)); 1436 + !mptcp_pm_has_addr_attr_id(attr, info), 1437 + true); 1448 1438 if (ret < 0) { 1449 1439 GENL_SET_ERR_MSG_FMT(info, "too many addresses or duplicate one: %d", ret); 1450 1440 goto out_free;
+4 -4
net/netfilter/ipvs/ip_vs_ctl.c
··· 3091 3091 case IP_VS_SO_GET_SERVICES: 3092 3092 { 3093 3093 struct ip_vs_get_services *get; 3094 - int size; 3094 + size_t size; 3095 3095 3096 3096 get = (struct ip_vs_get_services *)arg; 3097 3097 size = struct_size(get, entrytable, get->num_services); 3098 3098 if (*len != size) { 3099 - pr_err("length: %u != %u\n", *len, size); 3099 + pr_err("length: %u != %zu\n", *len, size); 3100 3100 ret = -EINVAL; 3101 3101 goto out; 3102 3102 } ··· 3132 3132 case IP_VS_SO_GET_DESTS: 3133 3133 { 3134 3134 struct ip_vs_get_dests *get; 3135 - int size; 3135 + size_t size; 3136 3136 3137 3137 get = (struct ip_vs_get_dests *)arg; 3138 3138 size = struct_size(get, entrytable, get->num_dests); 3139 3139 if (*len != size) { 3140 - pr_err("length: %u != %u\n", *len, size); 3140 + pr_err("length: %u != %zu\n", *len, size); 3141 3141 ret = -EINVAL; 3142 3142 goto out; 3143 3143 }
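The ipvs change above widens the computed size from `int` to `size_t`: `struct_size()` on a user-controlled `num_services` can exceed `INT_MAX`, and an `int` result would wrap negative before the length check. A small demonstration of why the wider type matters (header size 8 and a 64-byte entry are arbitrary stand-ins; the kernel's `struct_size()` additionally saturates on overflow, which this sketch does not):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

struct entry { char payload[64]; };

/* struct_size()-style computation done entirely in size_t,
 * as in the patch. */
static size_t get_size(unsigned int num)
{
	return (size_t)8 + (size_t)num * sizeof(struct entry);
}
```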
+4 -2
net/netfilter/nf_conncount.c
··· 132 132 struct nf_conn *found_ct; 133 133 unsigned int collect = 0; 134 134 135 - if (time_is_after_eq_jiffies((unsigned long)list->last_gc)) 135 + if ((u32)jiffies == list->last_gc) 136 136 goto add_new_node; 137 137 138 138 /* check the saved connections */ ··· 234 234 bool ret = false; 235 235 236 236 /* don't bother if we just did GC */ 237 - if (time_is_after_eq_jiffies((unsigned long)READ_ONCE(list->last_gc))) 237 + if ((u32)jiffies == READ_ONCE(list->last_gc)) 238 238 return false; 239 239 240 240 /* don't bother if other cpu is already doing GC */ ··· 377 377 378 378 conn->tuple = *tuple; 379 379 conn->zone = *zone; 380 + conn->cpu = raw_smp_processor_id(); 381 + conn->jiffies32 = (u32)jiffies; 380 382 memcpy(rbconn->key, key, sizeof(u32) * data->keylen); 381 383 382 384 nf_conncount_list_init(&rbconn->list);
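The nf_conncount change replaces `time_is_after_eq_jiffies()` on a truncated 32-bit timestamp with a plain `(u32)jiffies == last_gc` test: the stored value is only 32 bits wide, so ordering comparisons against the full jiffies value misbehave around wraparound, and equality ("did GC already run in this jiffy?") is the only question the code actually needs answered. A sketch with a fake clock:

```c
#include <assert.h>
#include <stdint.h>

static uint64_t jiffies;	/* fake clock for the demo */

struct gc_list {
	uint32_t last_gc;	/* truncated timestamp, as in the kernel */
};

/* Returns 1 when garbage collection already ran during the
 * current jiffy and can therefore be skipped. */
static int gc_done_this_jiffy(const struct gc_list *l)
{
	return (uint32_t)jiffies == l->last_gc;
}

static void gc_run(struct gc_list *l)
{
	/* ... reap expired entries ... */
	l->last_gc = (uint32_t)jiffies;
}
```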
+14 -10
net/netfilter/nf_tables_api.c
··· 34 34 static LIST_HEAD(nf_tables_expressions); 35 35 static LIST_HEAD(nf_tables_objects); 36 36 static LIST_HEAD(nf_tables_flowtables); 37 - static LIST_HEAD(nf_tables_destroy_list); 38 37 static LIST_HEAD(nf_tables_gc_list); 39 38 static DEFINE_SPINLOCK(nf_tables_destroy_list_lock); 40 39 static DEFINE_SPINLOCK(nf_tables_gc_list_lock); ··· 124 125 table->validate_state = new_validate_state; 125 126 } 126 127 static void nf_tables_trans_destroy_work(struct work_struct *w); 127 - static DECLARE_WORK(trans_destroy_work, nf_tables_trans_destroy_work); 128 128 129 129 static void nft_trans_gc_work(struct work_struct *work); 130 130 static DECLARE_WORK(trans_gc_work, nft_trans_gc_work); ··· 10004 10006 10005 10007 static void nf_tables_trans_destroy_work(struct work_struct *w) 10006 10008 { 10009 + struct nftables_pernet *nft_net = container_of(w, struct nftables_pernet, destroy_work); 10007 10010 struct nft_trans *trans, *next; 10008 10011 LIST_HEAD(head); 10009 10012 10010 10013 spin_lock(&nf_tables_destroy_list_lock); 10011 - list_splice_init(&nf_tables_destroy_list, &head); 10014 + list_splice_init(&nft_net->destroy_list, &head); 10012 10015 spin_unlock(&nf_tables_destroy_list_lock); 10013 10016 10014 10017 if (list_empty(&head)) ··· 10023 10024 } 10024 10025 } 10025 10026 10026 - void nf_tables_trans_destroy_flush_work(void) 10027 + void nf_tables_trans_destroy_flush_work(struct net *net) 10027 10028 { 10028 - flush_work(&trans_destroy_work); 10029 + struct nftables_pernet *nft_net = nft_pernet(net); 10030 + 10031 + flush_work(&nft_net->destroy_work); 10029 10032 } 10030 10033 EXPORT_SYMBOL_GPL(nf_tables_trans_destroy_flush_work); 10031 10034 ··· 10485 10484 10486 10485 trans->put_net = true; 10487 10486 spin_lock(&nf_tables_destroy_list_lock); 10488 - list_splice_tail_init(&nft_net->commit_list, &nf_tables_destroy_list); 10487 + list_splice_tail_init(&nft_net->commit_list, &nft_net->destroy_list); 10489 10488 spin_unlock(&nf_tables_destroy_list_lock); 10490 
10489 10491 10490 nf_tables_module_autoload_cleanup(net); 10492 - schedule_work(&trans_destroy_work); 10491 + schedule_work(&nft_net->destroy_work); 10493 10492 10494 10493 mutex_unlock(&nft_net->commit_mutex); 10495 10494 } ··· 11854 11853 11855 11854 gc_seq = nft_gc_seq_begin(nft_net); 11856 11855 11857 - nf_tables_trans_destroy_flush_work(); 11856 + nf_tables_trans_destroy_flush_work(net); 11858 11857 again: 11859 11858 list_for_each_entry(table, &nft_net->tables, list) { 11860 11859 if (nft_table_has_owner(table) && ··· 11896 11895 11897 11896 INIT_LIST_HEAD(&nft_net->tables); 11898 11897 INIT_LIST_HEAD(&nft_net->commit_list); 11898 + INIT_LIST_HEAD(&nft_net->destroy_list); 11899 11899 INIT_LIST_HEAD(&nft_net->commit_set_list); 11900 11900 INIT_LIST_HEAD(&nft_net->binding_list); 11901 11901 INIT_LIST_HEAD(&nft_net->module_list); ··· 11905 11903 nft_net->base_seq = 1; 11906 11904 nft_net->gc_seq = 0; 11907 11905 nft_net->validate_state = NFT_VALIDATE_SKIP; 11906 + INIT_WORK(&nft_net->destroy_work, nf_tables_trans_destroy_work); 11908 11907 11909 11908 return 0; 11910 11909 } ··· 11934 11931 if (!list_empty(&nft_net->module_list)) 11935 11932 nf_tables_module_autoload_cleanup(net); 11936 11933 11934 + cancel_work_sync(&nft_net->destroy_work); 11937 11935 __nft_release_tables(net); 11938 11936 11939 11937 nft_gc_seq_end(nft_net, gc_seq); 11940 11938 11941 11939 mutex_unlock(&nft_net->commit_mutex); 11940 + 11942 11941 WARN_ON_ONCE(!list_empty(&nft_net->tables)); 11943 11942 WARN_ON_ONCE(!list_empty(&nft_net->module_list)); 11944 11943 WARN_ON_ONCE(!list_empty(&nft_net->notify_list)); 11944 + WARN_ON_ONCE(!list_empty(&nft_net->destroy_list)); 11945 11945 } 11946 11946 11947 11947 static void nf_tables_exit_batch(struct list_head *net_exit_list) ··· 12035 12029 unregister_netdevice_notifier(&nf_tables_flowtable_notifier); 12036 12030 nft_chain_filter_fini(); 12037 12031 nft_chain_route_fini(); 12038 - nf_tables_trans_destroy_flush_work(); 12039 12032 
unregister_pernet_subsys(&nf_tables_net_ops); 12040 12033 cancel_work_sync(&trans_gc_work); 12041 - cancel_work_sync(&trans_destroy_work); 12042 12034 rcu_barrier(); 12043 12035 rhltable_destroy(&nft_objname_ht); 12044 12036 nf_tables_core_module_exit();
+4 -4
net/netfilter/nft_compat.c
··· 228 228 return 0; 229 229 } 230 230 231 - static void nft_compat_wait_for_destructors(void) 231 + static void nft_compat_wait_for_destructors(struct net *net) 232 232 { 233 233 /* xtables matches or targets can have side effects, e.g. 234 234 * creation/destruction of /proc files. ··· 236 236 * work queue. If we have pending invocations we thus 237 237 * need to wait for those to finish. 238 238 */ 239 - nf_tables_trans_destroy_flush_work(); 239 + nf_tables_trans_destroy_flush_work(net); 240 240 } 241 241 242 242 static int ··· 262 262 263 263 nft_target_set_tgchk_param(&par, ctx, target, info, &e, proto, inv); 264 264 265 - nft_compat_wait_for_destructors(); 265 + nft_compat_wait_for_destructors(ctx->net); 266 266 267 267 ret = xt_check_target(&par, size, proto, inv); 268 268 if (ret < 0) { ··· 515 515 516 516 nft_match_set_mtchk_param(&par, ctx, match, info, &e, proto, inv); 517 517 518 - nft_compat_wait_for_destructors(); 518 + nft_compat_wait_for_destructors(ctx->net); 519 519 520 520 return xt_check_match(&par, size, proto, inv); 521 521 }
+4 -2
net/netfilter/nft_ct.c
··· 230 230 enum ip_conntrack_info ctinfo; 231 231 u16 value = nft_reg_load16(&regs->data[priv->sreg]); 232 232 struct nf_conn *ct; 233 + int oldcnt; 233 234 234 235 ct = nf_ct_get(skb, &ctinfo); 235 236 if (ct) /* already tracked */ ··· 251 250 252 251 ct = this_cpu_read(nft_ct_pcpu_template); 253 252 254 - if (likely(refcount_read(&ct->ct_general.use) == 1)) { 255 - refcount_inc(&ct->ct_general.use); 253 + __refcount_inc(&ct->ct_general.use, &oldcnt); 254 + if (likely(oldcnt == 1)) { 256 255 nf_ct_zone_add(ct, &zone); 257 256 } else { 257 + refcount_dec(&ct->ct_general.use); 258 258 /* previous skb got queued to userspace, allocate temporary 259 259 * one until percpu template can be reused. 260 260 */
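The nft_ct fix closes a check-then-increment race: reading the refcount as 1 and *then* incrementing leaves a window for a concurrent user. The patched code increments first with `__refcount_inc()`, inspects the returned old value, and backs the increment out when the template turns out to be shared. The same pattern in userspace C11 atomics (`atomic_fetch_add` returns the pre-increment value, like `__refcount_inc()`'s `oldp` argument):

```c
#include <assert.h>
#include <stdatomic.h>

/* Claim exclusive use of a preallocated template object.
 * Returns 1 when we were the only user, 0 when it is shared
 * (in which case our speculative reference is dropped again). */
static int claim_exclusive(atomic_int *use)
{
	int oldcnt = atomic_fetch_add(use, 1);	/* increment first */

	if (oldcnt == 1)
		return 1;		/* we now hold the only extra ref */
	atomic_fetch_sub(use, 1);	/* shared: undo our increment */
	return 0;
}
```

Unlike the racy original, the window between "observe count" and "take reference" is gone because the observation *is* the reference grab.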
+4 -6
net/netfilter/nft_exthdr.c
··· 85 85 unsigned char optbuf[sizeof(struct ip_options) + 40]; 86 86 struct ip_options *opt = (struct ip_options *)optbuf; 87 87 struct iphdr *iph, _iph; 88 - unsigned int start; 89 88 bool found = false; 90 89 __be32 info; 91 90 int optlen; ··· 92 93 iph = skb_header_pointer(skb, 0, sizeof(_iph), &_iph); 93 94 if (!iph) 94 95 return -EBADMSG; 95 - start = sizeof(struct iphdr); 96 96 97 97 optlen = iph->ihl * 4 - (int)sizeof(struct iphdr); 98 98 if (optlen <= 0) ··· 101 103 /* Copy the options since __ip_options_compile() modifies 102 104 * the options. 103 105 */ 104 - if (skb_copy_bits(skb, start, opt->__data, optlen)) 106 + if (skb_copy_bits(skb, sizeof(struct iphdr), opt->__data, optlen)) 105 107 return -EBADMSG; 106 108 opt->optlen = optlen; 107 109 ··· 116 118 found = target == IPOPT_SSRR ? opt->is_strictroute : 117 119 !opt->is_strictroute; 118 120 if (found) 119 - *offset = opt->srr + start; 121 + *offset = opt->srr; 120 122 break; 121 123 case IPOPT_RR: 122 124 if (!opt->rr) 123 125 break; 124 - *offset = opt->rr + start; 126 + *offset = opt->rr; 125 127 found = true; 126 128 break; 127 129 case IPOPT_RA: 128 130 if (!opt->router_alert) 129 131 break; 130 - *offset = opt->router_alert + start; 132 + *offset = opt->router_alert; 131 133 found = true; 132 134 break; 133 135 default:
+18 -12
net/openvswitch/conntrack.c
··· 1368 1368 attr == OVS_KEY_ATTR_CT_MARK) 1369 1369 return true; 1370 1370 if (IS_ENABLED(CONFIG_NF_CONNTRACK_LABELS) && 1371 - attr == OVS_KEY_ATTR_CT_LABELS) 1372 - return true; 1371 + attr == OVS_KEY_ATTR_CT_LABELS) { 1372 + struct ovs_net *ovs_net = net_generic(net, ovs_net_id); 1373 + 1374 + return ovs_net->xt_label; 1375 + } 1373 1376 1374 1377 return false; 1375 1378 } ··· 1381 1378 const struct sw_flow_key *key, 1382 1379 struct sw_flow_actions **sfa, bool log) 1383 1380 { 1384 - unsigned int n_bits = sizeof(struct ovs_key_ct_labels) * BITS_PER_BYTE; 1385 1381 struct ovs_conntrack_info ct_info; 1386 1382 const char *helper = NULL; 1387 1383 u16 family; ··· 1407 1405 if (!ct_info.ct) { 1408 1406 OVS_NLERR(log, "Failed to allocate conntrack template"); 1409 1407 return -ENOMEM; 1410 - } 1411 - 1412 - if (nf_connlabels_get(net, n_bits - 1)) { 1413 - nf_ct_tmpl_free(ct_info.ct); 1414 - OVS_NLERR(log, "Failed to set connlabel length"); 1415 - return -EOPNOTSUPP; 1416 1408 } 1417 1409 1418 1410 if (ct_info.timeout[0]) { ··· 1577 1581 if (ct_info->ct) { 1578 1582 if (ct_info->timeout[0]) 1579 1583 nf_ct_destroy_timeout(ct_info->ct); 1580 - nf_connlabels_put(nf_ct_net(ct_info->ct)); 1581 1584 nf_ct_tmpl_free(ct_info->ct); 1582 1585 } 1583 1586 } ··· 2001 2006 2002 2007 int ovs_ct_init(struct net *net) 2003 2008 { 2004 - #if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) 2009 + unsigned int n_bits = sizeof(struct ovs_key_ct_labels) * BITS_PER_BYTE; 2005 2010 struct ovs_net *ovs_net = net_generic(net, ovs_net_id); 2006 2011 2012 + if (nf_connlabels_get(net, n_bits - 1)) { 2013 + ovs_net->xt_label = false; 2014 + OVS_NLERR(true, "Failed to set connlabel length"); 2015 + } else { 2016 + ovs_net->xt_label = true; 2017 + } 2018 + 2019 + #if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) 2007 2020 return ovs_ct_limit_init(net, ovs_net); 2008 2021 #else 2009 2022 return 0; ··· 2020 2017 2021 2018 void ovs_ct_exit(struct net *net) 2022 2019 { 2023 - #if 
IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) 2024 2020 struct ovs_net *ovs_net = net_generic(net, ovs_net_id); 2025 2021 2022 + #if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) 2026 2023 ovs_ct_limit_exit(net, ovs_net); 2027 2024 #endif 2025 + 2026 + if (ovs_net->xt_label) 2027 + nf_connlabels_put(net); 2028 2028 }
+3
net/openvswitch/datapath.h
··· 160 160 #if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT) 161 161 struct ovs_ct_limit_info *ct_limit_info; 162 162 #endif 163 + 164 + /* Module reference for configuring conntrack. */ 165 + bool xt_label; 163 166 }; 164 167 165 168 /**
+1 -14
net/openvswitch/flow_netlink.c
··· 2317 2317 OVS_FLOW_ATTR_MASK, true, skb); 2318 2318 } 2319 2319 2320 - #define MAX_ACTIONS_BUFSIZE (32 * 1024) 2321 - 2322 2320 static struct sw_flow_actions *nla_alloc_flow_actions(int size) 2323 2321 { 2324 2322 struct sw_flow_actions *sfa; 2325 - 2326 - WARN_ON_ONCE(size > MAX_ACTIONS_BUFSIZE); 2327 2323 2328 2324 sfa = kmalloc(kmalloc_size_roundup(sizeof(*sfa) + size), GFP_KERNEL); 2329 2325 if (!sfa) ··· 2475 2479 goto out; 2476 2480 2477 2481 new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2); 2478 - 2479 - if (new_acts_size > MAX_ACTIONS_BUFSIZE) { 2480 - if ((next_offset + req_size) > MAX_ACTIONS_BUFSIZE) { 2481 - OVS_NLERR(log, "Flow action size exceeds max %u", 2482 - MAX_ACTIONS_BUFSIZE); 2483 - return ERR_PTR(-EMSGSIZE); 2484 - } 2485 - new_acts_size = MAX_ACTIONS_BUFSIZE; 2486 - } 2487 2482 2488 2483 acts = nla_alloc_flow_actions(new_acts_size); 2489 2484 if (IS_ERR(acts)) ··· 3532 3545 int err; 3533 3546 u32 mpls_label_count = 0; 3534 3547 3535 - *sfa = nla_alloc_flow_actions(min(nla_len(attr), MAX_ACTIONS_BUFSIZE)); 3548 + *sfa = nla_alloc_flow_actions(nla_len(attr)); 3536 3549 if (IS_ERR(*sfa)) 3537 3550 return PTR_ERR(*sfa); 3538 3551
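With `MAX_ACTIONS_BUFSIZE` gone, the openvswitch actions buffer above simply grows geometrically: the next size is the larger of "what the append needs" and "double the current allocation". That growth policy in isolation (with `ksize()` approximated by the current size, which is an assumption of this sketch):

```c
#include <assert.h>
#include <stddef.h>

/* Next allocation size for appending req_size bytes at next_offset,
 * mirroring "max(next_offset + req_size, ksize(*sfa) * 2)". */
static size_t next_acts_size(size_t cur_size, size_t next_offset,
			     size_t req_size)
{
	size_t need = next_offset + req_size;
	size_t doubled = cur_size * 2;

	return need > doubled ? need : doubled;
}
```

Doubling keeps the amortized cost of repeated appends linear; taking the max with `need` guarantees a single oversized append still fits.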
+6
net/sched/sch_api.c
··· 2254 2254 return -EOPNOTSUPP; 2255 2255 } 2256 2256 2257 + /* Prevent creation of traffic classes with classid TC_H_ROOT */ 2258 + if (clid == TC_H_ROOT) { 2259 + NL_SET_ERR_MSG(extack, "Cannot create traffic class with classid TC_H_ROOT"); 2260 + return -EINVAL; 2261 + } 2262 + 2257 2263 new_cl = cl; 2258 2264 err = -EOPNOTSUPP; 2259 2265 if (cops->change)
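The sch_api.c hunk rejects creating a class whose classid equals `TC_H_ROOT` (0xFFFFFFFF), because that value is a reserved sentinel meaning "the root qdisc" in requests and a real class with that id would be unaddressable. The check, together with the real major/minor classid encoding from `<linux/pkt_sched.h>`:

```c
#include <assert.h>
#include <stdint.h>

#define TC_H_ROOT	0xFFFFFFFFU
#define TC_H_MAJ(h)	((h) & 0xFFFF0000U)
#define TC_H_MIN(h)	((h) & 0x0000FFFFU)

/* Reject the reserved sentinel, as the patch does. */
static int classid_valid(uint32_t clid)
{
	return clid != TC_H_ROOT;
}
```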
+2 -1
net/sched/sch_gred.c
··· 913 913 for (i = 0; i < table->DPs; i++) 914 914 gred_destroy_vq(table->tab[i]); 915 915 916 - gred_offload(sch, TC_GRED_DESTROY); 916 + if (table->opt) 917 + gred_offload(sch, TC_GRED_DESTROY); 917 918 kfree(table->opt); 918 919 } 919 920
+18 -7
net/switchdev/switchdev.c
··· 472 472 EXPORT_SYMBOL_GPL(switchdev_port_obj_act_is_deferred); 473 473 474 474 static ATOMIC_NOTIFIER_HEAD(switchdev_notif_chain); 475 - static BLOCKING_NOTIFIER_HEAD(switchdev_blocking_notif_chain); 475 + static RAW_NOTIFIER_HEAD(switchdev_blocking_notif_chain); 476 476 477 477 /** 478 478 * register_switchdev_notifier - Register notifier ··· 518 518 519 519 int register_switchdev_blocking_notifier(struct notifier_block *nb) 520 520 { 521 - struct blocking_notifier_head *chain = &switchdev_blocking_notif_chain; 521 + struct raw_notifier_head *chain = &switchdev_blocking_notif_chain; 522 + int err; 522 523 523 - return blocking_notifier_chain_register(chain, nb); 524 + rtnl_lock(); 525 + err = raw_notifier_chain_register(chain, nb); 526 + rtnl_unlock(); 527 + 528 + return err; 524 529 } 525 530 EXPORT_SYMBOL_GPL(register_switchdev_blocking_notifier); 526 531 527 532 int unregister_switchdev_blocking_notifier(struct notifier_block *nb) 528 533 { 529 - struct blocking_notifier_head *chain = &switchdev_blocking_notif_chain; 534 + struct raw_notifier_head *chain = &switchdev_blocking_notif_chain; 535 + int err; 530 536 531 - return blocking_notifier_chain_unregister(chain, nb); 537 + rtnl_lock(); 538 + err = raw_notifier_chain_unregister(chain, nb); 539 + rtnl_unlock(); 540 + 541 + return err; 532 542 } 533 543 EXPORT_SYMBOL_GPL(unregister_switchdev_blocking_notifier); 534 544 ··· 546 536 struct switchdev_notifier_info *info, 547 537 struct netlink_ext_ack *extack) 548 538 { 539 + ASSERT_RTNL(); 549 540 info->dev = dev; 550 541 info->extack = extack; 551 - return blocking_notifier_call_chain(&switchdev_blocking_notif_chain, 552 - val, info); 542 + return raw_notifier_call_chain(&switchdev_blocking_notif_chain, 543 + val, info); 553 544 } 554 545 EXPORT_SYMBOL_GPL(call_switchdev_blocking_notifiers); 555 546
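The switchdev change converts the blocking notifier chain into a *raw* chain: the chain itself carries no locking, and correctness instead relies on every register/unregister/call site running under the RTNL (hence the `rtnl_lock()` wrappers and the `ASSERT_RTNL()`). A toy model of an externally-locked notifier chain, with a flag standing in for the RTNL (all names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

static int rtnl_held;	/* toy stand-in for the RTNL */
static void toy_rtnl_lock(void)   { assert(!rtnl_held); rtnl_held = 1; }
static void toy_rtnl_unlock(void) { assert(rtnl_held);  rtnl_held = 0; }

struct notifier {
	int (*cb)(int event);
	struct notifier *next;
};

static struct notifier *chain;	/* raw chain: no internal lock */

/* Register under the big lock, as the patch wraps
 * raw_notifier_chain_register() in rtnl_lock()/rtnl_unlock(). */
static void chain_register(struct notifier *n)
{
	toy_rtnl_lock();
	n->next = chain;
	chain = n;
	toy_rtnl_unlock();
}

/* Callers must already hold the lock (ASSERT_RTNL() in the patch). */
static int chain_call(int event)
{
	int calls = 0;

	assert(rtnl_held);
	for (struct notifier *n = chain; n; n = n->next) {
		n->cb(event);
		calls++;
	}
	return calls;
}

static int last_event;
static int demo_cb(int event) { last_event = event; return 0; }
```

The design choice: a raw chain is cheaper and avoids a second lock ordering problem, at the price of making the external locking rule an invariant the assertion documents and enforces.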
+7
net/wireless/core.c
··· 1191 1191 { 1192 1192 struct cfg80211_internal_bss *scan, *tmp; 1193 1193 struct cfg80211_beacon_registration *reg, *treg; 1194 + unsigned long flags; 1195 + 1196 + spin_lock_irqsave(&rdev->wiphy_work_lock, flags); 1197 + WARN_ON(!list_empty(&rdev->wiphy_work_list)); 1198 + spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); 1199 + cancel_work_sync(&rdev->wiphy_work); 1200 + 1194 1201 rfkill_destroy(rdev->wiphy.rfkill); 1195 1202 list_for_each_entry_safe(reg, treg, &rdev->beacon_registrations, list) { 1196 1203 list_del(&reg->list);
+14 -5
net/wireless/nl80211.c
··· 4220 4220 if (flags[flag]) 4221 4221 *mntrflags |= (1<<flag); 4222 4222 4223 + /* cooked monitor mode is incompatible with other modes */ 4224 + if (*mntrflags & MONITOR_FLAG_COOK_FRAMES && 4225 + *mntrflags != MONITOR_FLAG_COOK_FRAMES) 4226 + return -EOPNOTSUPP; 4227 + 4223 4228 *mntrflags |= MONITOR_FLAG_CHANGED; 4224 4229 4225 4230 return 0; ··· 11123 11118 11124 11119 static int nl80211_process_links(struct cfg80211_registered_device *rdev, 11125 11120 struct cfg80211_assoc_link *links, 11121 + int assoc_link_id, 11126 11122 const u8 *ssid, int ssid_len, 11127 11123 struct genl_info *info) 11128 11124 { ··· 11154 11148 } 11155 11149 links[link_id].bss = 11156 11150 nl80211_assoc_bss(rdev, ssid, ssid_len, attrs, 11157 - link_id, link_id); 11151 + assoc_link_id, link_id); 11158 11152 if (IS_ERR(links[link_id].bss)) { 11159 11153 err = PTR_ERR(links[link_id].bss); 11160 11154 links[link_id].bss = NULL; ··· 11351 11345 req.ap_mld_addr = nla_data(info->attrs[NL80211_ATTR_MLD_ADDR]); 11352 11346 ap_addr = req.ap_mld_addr; 11353 11347 11354 - err = nl80211_process_links(rdev, req.links, ssid, ssid_len, 11355 - info); 11348 + err = nl80211_process_links(rdev, req.links, req.link_id, 11349 + ssid, ssid_len, info); 11356 11350 if (err) 11357 11351 goto free; 11358 11352 ··· 16507 16501 16508 16502 add_links = 0; 16509 16503 if (info->attrs[NL80211_ATTR_MLO_LINKS]) { 16510 - err = nl80211_process_links(rdev, links, NULL, 0, info); 16504 + err = nl80211_process_links(rdev, links, 16505 + /* mark as MLO, but not assoc */ 16506 + IEEE80211_MLD_MAX_NUM_LINKS, 16507 + NULL, 0, info); 16511 16508 if (err) 16512 16509 return err; 16513 16510 ··· 16538 16529 goto out; 16539 16530 } 16540 16531 16541 - err = cfg80211_assoc_ml_reconf(rdev, dev, links, rem_links); 16532 + err = -EOPNOTSUPP; 16542 16533 16543 16534 out: 16544 16535 for (link_id = 0; link_id < ARRAY_SIZE(links); link_id++)
+2 -1
net/wireless/reg.c
··· 407 407 { 408 408 if (!alpha2) 409 409 return false; 410 - return isalpha(alpha2[0]) && isalpha(alpha2[1]); 410 + return isascii(alpha2[0]) && isalpha(alpha2[0]) && 411 + isascii(alpha2[1]) && isalpha(alpha2[1]); 411 412 } 412 413 413 414 static bool alpha2_equal(const char *alpha2_x, const char *alpha2_y)
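The reg.c tightening requires each country-code byte to be ASCII *before* `isalpha()` is consulted, so locale-style letter tables cannot let non-ASCII bytes through. The same guard in userspace C (using an explicit `<= 0x7f` test in place of the kernel's `isascii()`, since POSIX marks userspace `isascii()` obsolescent; the `unsigned char` casts also matter, because passing a negative `char` to the ctype macros is undefined behaviour in userspace):

```c
#include <assert.h>
#include <ctype.h>

/* Stand-in for the kernel's isascii(). */
static int is_ascii(unsigned char c) { return c <= 0x7f; }

/* Valid ISO 3166-1 alpha2? Both bytes must be ASCII letters. */
static int is_alpha2(const char *a)
{
	if (!a)
		return 0;
	return is_ascii((unsigned char)a[0]) && isalpha((unsigned char)a[0]) &&
	       is_ascii((unsigned char)a[1]) && isalpha((unsigned char)a[1]);
}
```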
+18
rust/kernel/alloc/allocator_test.rs
··· 62 62 )); 63 63 } 64 64 65 + // ISO C (ISO/IEC 9899:2011) defines `aligned_alloc`: 66 + // 67 + // > The value of alignment shall be a valid alignment supported by the implementation 68 + // [...]. 69 + // 70 + // As an example of the "supported by the implementation" requirement, POSIX.1-2001 (IEEE 71 + // 1003.1-2001) defines `posix_memalign`: 72 + // 73 + // > The value of alignment shall be a power of two multiple of sizeof (void *). 74 + // 75 + // and POSIX-based implementations of `aligned_alloc` inherit this requirement. At the time 76 + // of writing, this is known to be the case on macOS (but not in glibc). 77 + // 78 + // Satisfy the stricter requirement to avoid spurious test failures on some platforms. 79 + let min_align = core::mem::size_of::<*const crate::ffi::c_void>(); 80 + let layout = layout.align_to(min_align).map_err(|_| AllocError)?; 81 + let layout = layout.pad_to_align(); 82 + 65 83 // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or 66 84 // exceeds the given size and alignment requirements. 67 85 let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8;
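The Rust test-allocator hunk rounds the requested alignment up to `sizeof(void *)` because POSIX-derived `aligned_alloc()` implementations (macOS being the cited example) inherit `posix_memalign`'s "power-of-two multiple of sizeof(void *)" rule, and pads the size to a multiple of the alignment as ISO C 9899:2011 originally required. The same adjustment expressed directly in C (assuming `align` is already a power of two, as `Layout` guarantees in the Rust original):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Round align and size up so the pair is acceptable to the
 * strictest aligned_alloc() implementations:
 *  - align becomes at least sizeof(void *)  (POSIX rule)
 *  - size becomes a multiple of align       (C11 rule)   */
static void *strict_aligned_alloc(size_t align, size_t size)
{
	if (align < sizeof(void *))
		align = sizeof(void *);
	size = (size + align - 1) & ~(align - 1);	/* pad to align */
	return aligned_alloc(align, size);
}
```

Over-aligning and over-sizing are always permitted, so satisfying the strictest reading keeps the one allocator portable instead of special-casing platforms.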
+1 -1
rust/kernel/error.rs
··· 107 107 } else { 108 108 // TODO: Make it a `WARN_ONCE` once available. 109 109 crate::pr_warn!( 110 - "attempted to create `Error` with out of range `errno`: {}", 110 + "attempted to create `Error` with out of range `errno`: {}\n", 111 111 errno 112 112 ); 113 113 code::EINVAL
+10 -13
rust/kernel/init.rs
··· 259 259 /// }, 260 260 /// })); 261 261 /// let foo: Pin<&mut Foo> = foo; 262 - /// pr_info!("a: {}", &*foo.a.lock()); 262 + /// pr_info!("a: {}\n", &*foo.a.lock()); 263 263 /// ``` 264 264 /// 265 265 /// # Syntax ··· 319 319 /// }, GFP_KERNEL)?, 320 320 /// })); 321 321 /// let foo = foo.unwrap(); 322 - /// pr_info!("a: {}", &*foo.a.lock()); 322 + /// pr_info!("a: {}\n", &*foo.a.lock()); 323 323 /// ``` 324 324 /// 325 325 /// ```rust,ignore ··· 352 352 /// x: 64, 353 353 /// }, GFP_KERNEL)?, 354 354 /// })); 355 - /// pr_info!("a: {}", &*foo.a.lock()); 355 + /// pr_info!("a: {}\n", &*foo.a.lock()); 356 356 /// # Ok::<_, AllocError>(()) 357 357 /// ``` 358 358 /// ··· 882 882 /// 883 883 /// impl Foo { 884 884 /// fn setup(self: Pin<&mut Self>) { 885 - /// pr_info!("Setting up foo"); 885 + /// pr_info!("Setting up foo\n"); 886 886 /// } 887 887 /// } 888 888 /// ··· 986 986 /// 987 987 /// impl Foo { 988 988 /// fn setup(&mut self) { 989 - /// pr_info!("Setting up foo"); 989 + /// pr_info!("Setting up foo\n"); 990 990 /// } 991 991 /// } 992 992 /// ··· 1336 1336 /// #[pinned_drop] 1337 1337 /// impl PinnedDrop for Foo { 1338 1338 /// fn drop(self: Pin<&mut Self>) { 1339 - /// pr_info!("Foo is being dropped!"); 1339 + /// pr_info!("Foo is being dropped!\n"); 1340 1340 /// } 1341 1341 /// } 1342 1342 /// ``` ··· 1418 1418 // SAFETY: `T: Zeroable` and `UnsafeCell` is `repr(transparent)`. 1419 1419 {<T: ?Sized + Zeroable>} UnsafeCell<T>, 1420 1420 1421 - // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee). 1421 + // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee: 1422 + // https://doc.rust-lang.org/stable/std/option/index.html#representation). 
1422 1423 Option<NonZeroU8>, Option<NonZeroU16>, Option<NonZeroU32>, Option<NonZeroU64>, 1423 1424 Option<NonZeroU128>, Option<NonZeroUsize>, 1424 1425 Option<NonZeroI8>, Option<NonZeroI16>, Option<NonZeroI32>, Option<NonZeroI64>, 1425 1426 Option<NonZeroI128>, Option<NonZeroIsize>, 1426 - 1427 - // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee). 1428 - // 1429 - // In this case we are allowed to use `T: ?Sized`, since all zeros is the `None` variant. 1430 - {<T: ?Sized>} Option<NonNull<T>>, 1431 - {<T: ?Sized>} Option<KBox<T>>, 1427 + {<T>} Option<NonNull<T>>, 1428 + {<T>} Option<KBox<T>>, 1432 1429 1433 1430 // SAFETY: `null` pointer is valid. 1434 1431 //
+3 -3
rust/kernel/init/macros.rs
··· 45 45 //! #[pinned_drop] 46 46 //! impl PinnedDrop for Foo { 47 47 //! fn drop(self: Pin<&mut Self>) { 48 - //! pr_info!("{self:p} is getting dropped."); 48 + //! pr_info!("{self:p} is getting dropped.\n"); 49 49 //! } 50 50 //! } 51 51 //! ··· 412 412 //! #[pinned_drop] 413 413 //! impl PinnedDrop for Foo { 414 414 //! fn drop(self: Pin<&mut Self>) { 415 - //! pr_info!("{self:p} is getting dropped."); 415 + //! pr_info!("{self:p} is getting dropped.\n"); 416 416 //! } 417 417 //! } 418 418 //! ``` ··· 423 423 //! // `unsafe`, full path and the token parameter are added, everything else stays the same. 424 424 //! unsafe impl ::kernel::init::PinnedDrop for Foo { 425 425 //! fn drop(self: Pin<&mut Self>, _: ::kernel::init::__internal::OnlyCallFromDrop) { 426 - //! pr_info!("{self:p} is getting dropped."); 426 + //! pr_info!("{self:p} is getting dropped.\n"); 427 427 //! } 428 428 //! } 429 429 //! ```
+1 -1
rust/kernel/lib.rs
··· 6 6 //! usage by Rust code in the kernel and is shared by all of them. 7 7 //! 8 8 //! In other words, all the rest of the Rust code in the kernel (e.g. kernel 9 - //! modules written in Rust) depends on [`core`], [`alloc`] and this crate. 9 + //! modules written in Rust) depends on [`core`] and this crate. 10 10 //! 11 11 //! If you need a kernel C API that is not ported or wrapped yet here, then 12 12 //! do so first instead of bypassing this crate.
+4 -12
rust/kernel/sync.rs
··· 30 30 unsafe impl Sync for LockClassKey {} 31 31 32 32 impl LockClassKey { 33 - /// Creates a new lock class key. 34 - pub const fn new() -> Self { 35 - Self(Opaque::uninit()) 36 - } 37 - 38 33 pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key { 39 34 self.0.get() 40 - } 41 - } 42 - 43 - impl Default for LockClassKey { 44 - fn default() -> Self { 45 - Self::new() 46 35 } 47 36 } 48 37 ··· 40 51 #[macro_export] 41 52 macro_rules! static_lock_class { 42 53 () => {{ 43 - static CLASS: $crate::sync::LockClassKey = $crate::sync::LockClassKey::new(); 54 + static CLASS: $crate::sync::LockClassKey = 55 + // SAFETY: lockdep expects uninitialized memory when it's handed a statically allocated 56 + // lock_class_key 57 + unsafe { ::core::mem::MaybeUninit::uninit().assume_init() }; 44 58 &CLASS 45 59 }}; 46 60 }
+1 -1
rust/kernel/sync/locked_by.rs
··· 55 55 /// fn print_bytes_used(dir: &Directory, file: &File) { 56 56 /// let guard = dir.inner.lock(); 57 57 /// let inner_file = file.inner.access(&guard); 58 - /// pr_info!("{} {}", guard.bytes_used, inner_file.bytes_used); 58 + /// pr_info!("{} {}\n", guard.bytes_used, inner_file.bytes_used); 59 59 /// } 60 60 /// 61 61 /// /// Increments `bytes_used` for both the directory and file.
+1 -1
rust/kernel/task.rs
··· 320 320 321 321 /// Wakes up the task. 322 322 pub fn wake_up(&self) { 323 - // SAFETY: It's always safe to call `signal_pending` on a valid task, even if the task 323 + // SAFETY: It's always safe to call `wake_up_process` on a valid task, even if the task 324 324 // is running. 325 325 unsafe { bindings::wake_up_process(self.as_ptr()) }; 326 326 }
+3 -3
rust/kernel/workqueue.rs
··· 60 60 //! type Pointer = Arc<MyStruct>; 61 61 //! 62 62 //! fn run(this: Arc<MyStruct>) { 63 - //! pr_info!("The value is: {}", this.value); 63 + //! pr_info!("The value is: {}\n", this.value); 64 64 //! } 65 65 //! } 66 66 //! ··· 108 108 //! type Pointer = Arc<MyStruct>; 109 109 //! 110 110 //! fn run(this: Arc<MyStruct>) { 111 - //! pr_info!("The value is: {}", this.value_1); 111 + //! pr_info!("The value is: {}\n", this.value_1); 112 112 //! } 113 113 //! } 114 114 //! ··· 116 116 //! type Pointer = Arc<MyStruct>; 117 117 //! 118 118 //! fn run(this: Arc<MyStruct>) { 119 - //! pr_info!("The second value is: {}", this.value_2); 119 + //! pr_info!("The second value is: {}\n", this.value_2); 120 120 //! } 121 121 //! } 122 122 //!
+42 -29
scripts/generate_rust_analyzer.py
··· 57 57 crates_indexes[display_name] = len(crates) 58 58 crates.append(crate) 59 59 60 - # First, the ones in `rust/` since they are a bit special. 61 - append_crate( 62 - "core", 63 - sysroot_src / "core" / "src" / "lib.rs", 64 - [], 65 - cfg=crates_cfgs.get("core", []), 66 - is_workspace_member=False, 67 - ) 60 + def append_sysroot_crate( 61 + display_name, 62 + deps, 63 + cfg=[], 64 + ): 65 + append_crate( 66 + display_name, 67 + sysroot_src / display_name / "src" / "lib.rs", 68 + deps, 69 + cfg, 70 + is_workspace_member=False, 71 + ) 72 + 73 + # NB: sysroot crates reexport items from one another so setting up our transitive dependencies 74 + # here is important for ensuring that rust-analyzer can resolve symbols. The sources of truth 75 + # for this dependency graph are `(sysroot_src / crate / "Cargo.toml" for crate in crates)`. 76 + append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", [])) 77 + append_sysroot_crate("alloc", ["core"]) 78 + append_sysroot_crate("std", ["alloc", "core"]) 79 + append_sysroot_crate("proc_macro", ["core", "std"]) 68 80 69 81 append_crate( 70 82 "compiler_builtins", ··· 87 75 append_crate( 88 76 "macros", 89 77 srctree / "rust" / "macros" / "lib.rs", 90 - [], 78 + ["std", "proc_macro"], 91 79 is_proc_macro=True, 92 80 ) 93 81 ··· 97 85 ["core", "compiler_builtins"], 98 86 ) 99 87 100 - append_crate( 101 - "bindings", 102 - srctree / "rust"/ "bindings" / "lib.rs", 103 - ["core"], 104 - cfg=cfg, 105 - ) 106 - crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True)) 88 + def append_crate_with_generated( 89 + display_name, 90 + deps, 91 + ): 92 + append_crate( 93 + display_name, 94 + srctree / "rust"/ display_name / "lib.rs", 95 + deps, 96 + cfg=cfg, 97 + ) 98 + crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True)) 99 + crates[-1]["source"] = { 100 + "include_dirs": [ 101 + str(srctree / "rust" / display_name), 102 + str(objtree / "rust") 103 + ], 104 + "exclude_dirs": [], 105 + } 107 106 108 - append_crate( 109 - 
"kernel", 110 - srctree / "rust" / "kernel" / "lib.rs", 111 - ["core", "macros", "build_error", "bindings"], 112 - cfg=cfg, 113 - ) 114 - crates[-1]["source"] = { 115 - "include_dirs": [ 116 - str(srctree / "rust" / "kernel"), 117 - str(objtree / "rust") 118 - ], 119 - "exclude_dirs": [], 120 - } 107 + append_crate_with_generated("bindings", ["core"]) 108 + append_crate_with_generated("uapi", ["core"]) 109 + append_crate_with_generated("kernel", ["core", "macros", "build_error", "bindings", "uapi"]) 121 110 122 111 def is_root_crate(build_file, target): 123 112 try:
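The `generate_rust_analyzer.py` hunk above folds repeated `append_crate()` calls into small helpers so each sysroot or generated crate is registered in one line, with the transitive dependency edges spelled out. A minimal standalone model of that refactor (the variable names `sysroot_src` and `crates` mirror the script; the path below is a placeholder, not a real sysroot):

```python
from pathlib import Path

crates = []

def append_crate(display_name, root_module, deps, cfg=None, is_workspace_member=True):
    # Simplified stand-in for the script's append_crate(): record one
    # rust-project.json crate entry.
    crates.append({
        "display_name": display_name,
        "root_module": str(root_module),
        "deps": deps,
        "cfg": cfg or [],
        "is_workspace_member": is_workspace_member,
    })

sysroot_src = Path("/sysroot/lib/rustlib/src/rust/library")  # placeholder path

def append_sysroot_crate(display_name, deps, cfg=None):
    # Sysroot crates re-export items from one another, so the transitive
    # dependency graph matters for rust-analyzer symbol resolution.
    append_crate(display_name,
                 sysroot_src / display_name / "src" / "lib.rs",
                 deps, cfg=cfg, is_workspace_member=False)

append_sysroot_crate("core", [])
append_sysroot_crate("alloc", ["core"])
append_sysroot_crate("std", ["alloc", "core"])
append_sysroot_crate("proc_macro", ["core", "std"])
```

The helper keeps each registration to one line while still recording the full dependency list, which is the point of the diff.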
+1 -1
scripts/package/install-extmod-build
··· 63 63 # Clear VPATH and srcroot because the source files reside in the output 64 64 # directory. 65 65 # shellcheck disable=SC2016 # $(MAKE) and $(build) will be expanded by Make 66 - "${MAKE}" run-command KBUILD_RUN_COMMAND='+$(MAKE) HOSTCC='"${CC}"' VPATH= srcroot=. $(build)='"${destdir}"/scripts 66 + "${MAKE}" run-command KBUILD_RUN_COMMAND='+$(MAKE) HOSTCC='"${CC}"' VPATH= srcroot=. $(build)='"$(realpath --relative-base=. "${destdir}")"/scripts 67 67 68 68 rm -f "${destdir}/scripts/Kbuild" 69 69 fi
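The `install-extmod-build` change above passes `$(realpath --relative-base=. "${destdir}")` so that `$(build)=` receives a relative path when `destdir` lies below the current directory, and an absolute path otherwise. A rough Python model of that `--relative-base` behavior (illustrative only; the real tool is GNU coreutils `realpath`):

```python
import os.path

def relative_base(path, base):
    # Mimic `realpath --relative-base=BASE PATH`: relativize only when
    # PATH is at or below BASE, otherwise keep the absolute path.
    path = os.path.abspath(path)
    base = os.path.abspath(base)
    if path == base or path.startswith(base.rstrip("/") + "/"):
        return os.path.relpath(path, base)
    return path
```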
+2 -2
scripts/rustdoc_test_gen.rs
··· 15 15 //! - Test code should be able to define functions and call them, without having to carry 16 16 //! the context. 17 17 //! 18 - //! - Later on, we may want to be able to test non-kernel code (e.g. `core`, `alloc` or 19 - //! third-party crates) which likely use the standard library `assert*!` macros. 18 + //! - Later on, we may want to be able to test non-kernel code (e.g. `core` or third-party 19 + //! crates) which likely use the standard library `assert*!` macros. 20 20 //! 21 21 //! For this reason, instead of the passed context, `kunit_get_current_test()` is used instead 22 22 //! (i.e. `current->kunit_test`).
+30 -16
sound/core/seq/seq_clientmgr.c
··· 106 106 return clienttab[clientid]; 107 107 } 108 108 109 - struct snd_seq_client *snd_seq_client_use_ptr(int clientid) 109 + static struct snd_seq_client *client_use_ptr(int clientid, bool load_module) 110 110 { 111 111 unsigned long flags; 112 112 struct snd_seq_client *client; ··· 126 126 } 127 127 spin_unlock_irqrestore(&clients_lock, flags); 128 128 #ifdef CONFIG_MODULES 129 - if (!in_interrupt()) { 129 + if (load_module) { 130 130 static DECLARE_BITMAP(client_requested, SNDRV_SEQ_GLOBAL_CLIENTS); 131 131 static DECLARE_BITMAP(card_requested, SNDRV_CARDS); 132 132 ··· 168 168 return client; 169 169 } 170 170 171 + /* get snd_seq_client object for the given id quickly */ 172 + struct snd_seq_client *snd_seq_client_use_ptr(int clientid) 173 + { 174 + return client_use_ptr(clientid, false); 175 + } 176 + 177 + /* get snd_seq_client object for the given id; 178 + * if not found, retry after loading the modules 179 + */ 180 + static struct snd_seq_client *client_load_and_use_ptr(int clientid) 181 + { 182 + return client_use_ptr(clientid, IS_ENABLED(CONFIG_MODULES)); 183 + } 184 + 171 185 /* Take refcount and perform ioctl_mutex lock on the given client; 172 186 * used only for OSS sequencer 173 187 * Unlock via snd_seq_client_ioctl_unlock() below ··· 190 176 { 191 177 struct snd_seq_client *client; 192 178 193 - client = snd_seq_client_use_ptr(clientid); 179 + client = client_load_and_use_ptr(clientid); 194 180 if (!client) 195 181 return false; 196 182 mutex_lock(&client->ioctl_mutex); ··· 1209 1195 int err = 0; 1210 1196 1211 1197 /* requested client number */ 1212 - cptr = snd_seq_client_use_ptr(info->client); 1198 + cptr = client_load_and_use_ptr(info->client); 1213 1199 if (cptr == NULL) 1214 1200 return -ENOENT; /* don't change !!! 
*/ 1215 1201 ··· 1271 1257 struct snd_seq_client *cptr; 1272 1258 1273 1259 /* requested client number */ 1274 - cptr = snd_seq_client_use_ptr(client_info->client); 1260 + cptr = client_load_and_use_ptr(client_info->client); 1275 1261 if (cptr == NULL) 1276 1262 return -ENOENT; /* don't change !!! */ 1277 1263 ··· 1410 1396 struct snd_seq_client *cptr; 1411 1397 struct snd_seq_client_port *port; 1412 1398 1413 - cptr = snd_seq_client_use_ptr(info->addr.client); 1399 + cptr = client_load_and_use_ptr(info->addr.client); 1414 1400 if (cptr == NULL) 1415 1401 return -ENXIO; 1416 1402 ··· 1517 1503 struct snd_seq_client *receiver = NULL, *sender = NULL; 1518 1504 struct snd_seq_client_port *sport = NULL, *dport = NULL; 1519 1505 1520 - receiver = snd_seq_client_use_ptr(subs->dest.client); 1506 + receiver = client_load_and_use_ptr(subs->dest.client); 1521 1507 if (!receiver) 1522 1508 goto __end; 1523 - sender = snd_seq_client_use_ptr(subs->sender.client); 1509 + sender = client_load_and_use_ptr(subs->sender.client); 1524 1510 if (!sender) 1525 1511 goto __end; 1526 1512 sport = snd_seq_port_use_ptr(sender, subs->sender.port); ··· 1885 1871 struct snd_seq_client_pool *info = arg; 1886 1872 struct snd_seq_client *cptr; 1887 1873 1888 - cptr = snd_seq_client_use_ptr(info->client); 1874 + cptr = client_load_and_use_ptr(info->client); 1889 1875 if (cptr == NULL) 1890 1876 return -ENOENT; 1891 1877 memset(info, 0, sizeof(*info)); ··· 1989 1975 struct snd_seq_client_port *sport = NULL; 1990 1976 1991 1977 result = -EINVAL; 1992 - sender = snd_seq_client_use_ptr(subs->sender.client); 1978 + sender = client_load_and_use_ptr(subs->sender.client); 1993 1979 if (!sender) 1994 1980 goto __end; 1995 1981 sport = snd_seq_port_use_ptr(sender, subs->sender.port); ··· 2020 2006 struct list_head *p; 2021 2007 int i; 2022 2008 2023 - cptr = snd_seq_client_use_ptr(subs->root.client); 2009 + cptr = client_load_and_use_ptr(subs->root.client); 2024 2010 if (!cptr) 2025 2011 goto __end; 2026 
2012 port = snd_seq_port_use_ptr(cptr, subs->root.port); ··· 2087 2073 if (info->client < 0) 2088 2074 info->client = 0; 2089 2075 for (; info->client < SNDRV_SEQ_MAX_CLIENTS; info->client++) { 2090 - cptr = snd_seq_client_use_ptr(info->client); 2076 + cptr = client_load_and_use_ptr(info->client); 2091 2077 if (cptr) 2092 2078 break; /* found */ 2093 2079 } ··· 2110 2096 struct snd_seq_client *cptr; 2111 2097 struct snd_seq_client_port *port = NULL; 2112 2098 2113 - cptr = snd_seq_client_use_ptr(info->addr.client); 2099 + cptr = client_load_and_use_ptr(info->addr.client); 2114 2100 if (cptr == NULL) 2115 2101 return -ENXIO; 2116 2102 ··· 2207 2193 size = sizeof(struct snd_ump_endpoint_info); 2208 2194 else 2209 2195 size = sizeof(struct snd_ump_block_info); 2210 - cptr = snd_seq_client_use_ptr(client); 2196 + cptr = client_load_and_use_ptr(client); 2211 2197 if (!cptr) 2212 2198 return -ENOENT; 2213 2199 ··· 2489 2475 if (check_event_type_and_length(ev)) 2490 2476 return -EINVAL; 2491 2477 2492 - cptr = snd_seq_client_use_ptr(client); 2478 + cptr = client_load_and_use_ptr(client); 2493 2479 if (cptr == NULL) 2494 2480 return -EINVAL; 2495 2481 ··· 2721 2707 2722 2708 /* list the client table */ 2723 2709 for (c = 0; c < SNDRV_SEQ_MAX_CLIENTS; c++) { 2724 - client = snd_seq_client_use_ptr(c); 2710 + client = client_load_and_use_ptr(c); 2725 2711 if (client == NULL) 2726 2712 continue; 2727 2713 if (client->type == NO_CLIENT) {
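The `seq_clientmgr.c` refactor above replaces the `in_interrupt()` heuristic inside `snd_seq_client_use_ptr()` with an explicit `load_module` flag, so only call sites that may sleep (ioctls, /proc readers) request module autoloading. A toy Python model of that split (the client table and module loader below are stand-ins, not real ALSA code):

```python
clients = {}          # clientid -> client object (stand-in for clienttab[])
modules_loaded = []   # records request_module()-style calls

def load_client_module(clientid):
    # Pretend the loaded module registered the client.
    modules_loaded.append(clientid)
    clients[clientid] = f"client{clientid}"

def client_use_ptr(clientid, load_module):
    client = clients.get(clientid)
    if client is None and load_module:
        load_client_module(clientid)
        client = clients.get(clientid)
    return client

def snd_seq_client_use_ptr(clientid):
    # Fast path: never loads modules, so it stays safe in atomic context.
    return client_use_ptr(clientid, False)

def client_load_and_use_ptr(clientid):
    # Slow path: may sleep to load the backing module before retrying.
    return client_use_ptr(clientid, True)
```

Making the choice explicit at each call site is what lets the diff convert the ioctl and /proc paths to `client_load_and_use_ptr()` while event delivery keeps the non-loading fast path.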
+1
sound/pci/hda/Kconfig
··· 222 222 223 223 config SND_HDA_CODEC_REALTEK 224 224 tristate "Build Realtek HD-audio codec support" 225 + depends on INPUT 225 226 select SND_HDA_GENERIC 226 227 select SND_HDA_GENERIC_LEDS 227 228 select SND_HDA_SCODEC_COMPONENT
+2
sound/pci/hda/hda_intel.c
··· 2232 2232 SND_PCI_QUIRK(0x1631, 0xe017, "Packard Bell NEC IMEDIA 5204", 0), 2233 2233 /* KONTRON SinglePC may cause a stall at runtime resume */ 2234 2234 SND_PCI_QUIRK(0x1734, 0x1232, "KONTRON SinglePC", 0), 2235 + /* Dell ALC3271 */ 2236 + SND_PCI_QUIRK(0x1028, 0x0962, "Dell ALC3271", 0), 2235 2237 {} 2236 2238 }; 2237 2239
+136 -13
sound/pci/hda/patch_realtek.c
··· 3843 3843 } 3844 3844 } 3845 3845 3846 + static void alc222_init(struct hda_codec *codec) 3847 + { 3848 + struct alc_spec *spec = codec->spec; 3849 + hda_nid_t hp_pin = alc_get_hp_pin(spec); 3850 + bool hp1_pin_sense, hp2_pin_sense; 3851 + 3852 + if (!hp_pin) 3853 + return; 3854 + 3855 + msleep(30); 3856 + 3857 + hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin); 3858 + hp2_pin_sense = snd_hda_jack_detect(codec, 0x14); 3859 + 3860 + if (hp1_pin_sense || hp2_pin_sense) { 3861 + msleep(2); 3862 + 3863 + if (hp1_pin_sense) 3864 + snd_hda_codec_write(codec, hp_pin, 0, 3865 + AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); 3866 + if (hp2_pin_sense) 3867 + snd_hda_codec_write(codec, 0x14, 0, 3868 + AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT); 3869 + msleep(75); 3870 + 3871 + if (hp1_pin_sense) 3872 + snd_hda_codec_write(codec, hp_pin, 0, 3873 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE); 3874 + if (hp2_pin_sense) 3875 + snd_hda_codec_write(codec, 0x14, 0, 3876 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE); 3877 + 3878 + msleep(75); 3879 + } 3880 + } 3881 + 3882 + static void alc222_shutup(struct hda_codec *codec) 3883 + { 3884 + struct alc_spec *spec = codec->spec; 3885 + hda_nid_t hp_pin = alc_get_hp_pin(spec); 3886 + bool hp1_pin_sense, hp2_pin_sense; 3887 + 3888 + if (!hp_pin) 3889 + hp_pin = 0x21; 3890 + 3891 + hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin); 3892 + hp2_pin_sense = snd_hda_jack_detect(codec, 0x14); 3893 + 3894 + if (hp1_pin_sense || hp2_pin_sense) { 3895 + msleep(2); 3896 + 3897 + if (hp1_pin_sense) 3898 + snd_hda_codec_write(codec, hp_pin, 0, 3899 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); 3900 + if (hp2_pin_sense) 3901 + snd_hda_codec_write(codec, 0x14, 0, 3902 + AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE); 3903 + 3904 + msleep(75); 3905 + 3906 + if (hp1_pin_sense) 3907 + snd_hda_codec_write(codec, hp_pin, 0, 3908 + AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); 3909 + if (hp2_pin_sense) 3910 + snd_hda_codec_write(codec, 0x14, 0, 3911 + 
AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0); 3912 + 3913 + msleep(75); 3914 + } 3915 + alc_auto_setup_eapd(codec, false); 3916 + alc_shutup_pins(codec); 3917 + } 3918 + 3846 3919 static void alc_default_init(struct hda_codec *codec) 3847 3920 { 3848 3921 struct alc_spec *spec = codec->spec; ··· 4790 4717 } 4791 4718 } 4792 4719 4720 + static void alc295_fixup_hp_mute_led_coefbit11(struct hda_codec *codec, 4721 + const struct hda_fixup *fix, int action) 4722 + { 4723 + struct alc_spec *spec = codec->spec; 4724 + 4725 + if (action == HDA_FIXUP_ACT_PRE_PROBE) { 4726 + spec->mute_led_polarity = 0; 4727 + spec->mute_led_coef.idx = 0xb; 4728 + spec->mute_led_coef.mask = 3 << 3; 4729 + spec->mute_led_coef.on = 1 << 3; 4730 + spec->mute_led_coef.off = 1 << 4; 4731 + snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set); 4732 + } 4733 + } 4734 + 4793 4735 static void alc285_fixup_hp_mute_led(struct hda_codec *codec, 4794 4736 const struct hda_fixup *fix, int action) 4795 4737 { ··· 5015 4927 alc298_samsung_v2_init_amps(codec, 4); 5016 4928 } 5017 4929 5018 - #if IS_REACHABLE(CONFIG_INPUT) 5019 4930 static void gpio2_mic_hotkey_event(struct hda_codec *codec, 5020 4931 struct hda_jack_callback *event) 5021 4932 { ··· 5123 5036 spec->kb_dev = NULL; 5124 5037 } 5125 5038 } 5126 - #else /* INPUT */ 5127 - #define alc280_fixup_hp_gpio2_mic_hotkey NULL 5128 - #define alc233_fixup_lenovo_line2_mic_hotkey NULL 5129 - #endif /* INPUT */ 5130 5039 5131 5040 static void alc269_fixup_hp_line1_mic1_led(struct hda_codec *codec, 5132 5041 const struct hda_fixup *fix, int action) ··· 5134 5051 spec->cap_mute_led_nid = 0x18; 5135 5052 snd_hda_gen_add_micmute_led_cdev(codec, vref_micmute_led_set); 5136 5053 } 5054 + } 5055 + 5056 + static void alc233_fixup_lenovo_low_en_micmute_led(struct hda_codec *codec, 5057 + const struct hda_fixup *fix, int action) 5058 + { 5059 + struct alc_spec *spec = codec->spec; 5060 + 5061 + if (action == HDA_FIXUP_ACT_PRE_PROBE) 5062 + spec->micmute_led_polarity = 1; 
5063 + alc233_fixup_lenovo_line2_mic_hotkey(codec, fix, action); 5137 5064 } 5138 5065 5139 5066 static void alc_hp_mute_disable(struct hda_codec *codec, unsigned int delay) ··· 7671 7578 ALC290_FIXUP_MONO_SPEAKERS_HSJACK, 7672 7579 ALC290_FIXUP_SUBWOOFER, 7673 7580 ALC290_FIXUP_SUBWOOFER_HSJACK, 7581 + ALC295_FIXUP_HP_MUTE_LED_COEFBIT11, 7674 7582 ALC269_FIXUP_THINKPAD_ACPI, 7675 7583 ALC269_FIXUP_LENOVO_XPAD_ACPI, 7676 7584 ALC269_FIXUP_DMIC_THINKPAD_ACPI, ··· 7715 7621 ALC275_FIXUP_DELL_XPS, 7716 7622 ALC293_FIXUP_LENOVO_SPK_NOISE, 7717 7623 ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, 7624 + ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED, 7718 7625 ALC255_FIXUP_DELL_SPK_NOISE, 7719 7626 ALC225_FIXUP_DISABLE_MIC_VREF, 7720 7627 ALC225_FIXUP_DELL1_MIC_NO_PRESENCE, ··· 7785 7690 ALC285_FIXUP_THINKPAD_X1_GEN7, 7786 7691 ALC285_FIXUP_THINKPAD_HEADSET_JACK, 7787 7692 ALC294_FIXUP_ASUS_ALLY, 7788 - ALC294_FIXUP_ASUS_ALLY_X, 7789 7693 ALC294_FIXUP_ASUS_ALLY_PINS, 7790 7694 ALC294_FIXUP_ASUS_ALLY_VERBS, 7791 7695 ALC294_FIXUP_ASUS_ALLY_SPEAKER, ··· 8710 8616 .type = HDA_FIXUP_FUNC, 8711 8617 .v.func = alc233_fixup_lenovo_line2_mic_hotkey, 8712 8618 }, 8619 + [ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED] = { 8620 + .type = HDA_FIXUP_FUNC, 8621 + .v.func = alc233_fixup_lenovo_low_en_micmute_led, 8622 + }, 8713 8623 [ALC233_FIXUP_INTEL_NUC8_DMIC] = { 8714 8624 .type = HDA_FIXUP_FUNC, 8715 8625 .v.func = alc_fixup_inv_dmic, ··· 9236 9138 .chained = true, 9237 9139 .chain_id = ALC294_FIXUP_ASUS_ALLY_PINS 9238 9140 }, 9239 - [ALC294_FIXUP_ASUS_ALLY_X] = { 9240 - .type = HDA_FIXUP_FUNC, 9241 - .v.func = tas2781_fixup_i2c, 9242 - .chained = true, 9243 - .chain_id = ALC294_FIXUP_ASUS_ALLY_PINS 9244 - }, 9245 9141 [ALC294_FIXUP_ASUS_ALLY_PINS] = { 9246 9142 .type = HDA_FIXUP_PINS, 9247 9143 .v.pins = (const struct hda_pintbl[]) { ··· 9416 9324 .v.func = alc_fixup_inv_dmic, 9417 9325 .chained = true, 9418 9326 .chain_id = ALC283_FIXUP_INT_MIC, 9327 + }, 9328 + [ALC295_FIXUP_HP_MUTE_LED_COEFBIT11] = { 9329 + 
.type = HDA_FIXUP_FUNC, 9330 + .v.func = alc295_fixup_hp_mute_led_coefbit11, 9419 9331 }, 9420 9332 [ALC298_FIXUP_SAMSUNG_AMP] = { 9421 9333 .type = HDA_FIXUP_FUNC, ··· 10471 10375 SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 10472 10376 SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360), 10473 10377 SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10378 + SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11), 10474 10379 SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360), 10475 10380 SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), 10476 10381 SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT), ··· 10697 10600 SND_PCI_QUIRK(0x103c, 0x8e1a, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED), 10698 10601 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 10699 10602 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 10603 + SND_PCI_QUIRK(0x1043, 0x1054, "ASUS G614FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2), 10700 10604 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 10605 + SND_PCI_QUIRK(0x1043, 0x1074, "ASUS G614PH/PM/PP", ALC287_FIXUP_CS35L41_I2C_2), 10701 10606 SND_PCI_QUIRK(0x1043, 0x10a1, "ASUS UX391UA", ALC294_FIXUP_ASUS_SPK), 10702 10607 SND_PCI_QUIRK(0x1043, 0x10a4, "ASUS TP3407SA", ALC287_FIXUP_TAS2781_I2C), 10703 10608 SND_PCI_QUIRK(0x1043, 0x10c0, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), ··· 10707 10608 SND_PCI_QUIRK(0x1043, 0x10d3, "ASUS K6500ZC", ALC294_FIXUP_ASUS_SPK), 10708 10609 SND_PCI_QUIRK(0x1043, 0x1154, "ASUS TP3607SH", ALC287_FIXUP_TAS2781_I2C), 10709 10610 SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 10611 + SND_PCI_QUIRK(0x1043, 0x1194, "ASUS UM3406KA", ALC287_FIXUP_CS35L41_I2C_2), 
10710 10612 SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 10711 10613 SND_PCI_QUIRK(0x1043, 0x1204, "ASUS Strix G615JHR_JMR_JPR", ALC287_FIXUP_TAS2781_I2C), 10712 10614 SND_PCI_QUIRK(0x1043, 0x1214, "ASUS Strix G615LH_LM_LP", ALC287_FIXUP_TAS2781_I2C), 10713 10615 SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 10714 10616 SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE), 10715 10617 SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE), 10618 + SND_PCI_QUIRK(0x1043, 0x1294, "ASUS B3405CVA", ALC245_FIXUP_CS35L41_SPI_2), 10716 10619 SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE), 10717 10620 SND_PCI_QUIRK(0x1043, 0x12a3, "Asus N7691ZM", ALC269_FIXUP_ASUS_N7601ZM), 10718 10621 SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2), 10622 + SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC245_FIXUP_CS35L41_SPI_2), 10719 10623 SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC), 10720 10624 SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC), 10721 10625 SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE), ··· 10747 10645 SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS), 10748 10646 SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK), 10749 10647 SND_PCI_QUIRK(0x1043, 0x17f3, "ROG Ally NR2301L/X", ALC294_FIXUP_ASUS_ALLY), 10750 - SND_PCI_QUIRK(0x1043, 0x1eb3, "ROG Ally X RC72LA", ALC294_FIXUP_ASUS_ALLY_X), 10751 10648 SND_PCI_QUIRK(0x1043, 0x1863, "ASUS UX6404VI/VV", ALC245_FIXUP_CS35L41_SPI_2), 10752 10649 SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS), 10753 10650 SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC), ··· 10800 10699 SND_PCI_QUIRK(0x1043, 0x1f12, "ASUS UM5302", 
ALC287_FIXUP_CS35L41_I2C_2), 10801 10700 SND_PCI_QUIRK(0x1043, 0x1f1f, "ASUS H7604JI/JV/J3D", ALC245_FIXUP_CS35L41_SPI_2), 10802 10701 SND_PCI_QUIRK(0x1043, 0x1f62, "ASUS UX7602ZM", ALC245_FIXUP_CS35L41_SPI_2), 10702 + SND_PCI_QUIRK(0x1043, 0x1f63, "ASUS P5405CSA", ALC245_FIXUP_CS35L41_SPI_2), 10803 10703 SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401), 10704 + SND_PCI_QUIRK(0x1043, 0x1fb3, "ASUS ROG Flow Z13 GZ302EA", ALC287_FIXUP_CS35L41_I2C_2), 10705 + SND_PCI_QUIRK(0x1043, 0x3011, "ASUS B5605CVA", ALC245_FIXUP_CS35L41_SPI_2), 10804 10706 SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2), 10707 + SND_PCI_QUIRK(0x1043, 0x3061, "ASUS B3405CCA", ALC245_FIXUP_CS35L41_SPI_2), 10708 + SND_PCI_QUIRK(0x1043, 0x3071, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2), 10709 + SND_PCI_QUIRK(0x1043, 0x30c1, "ASUS B3605CCA / P3605CCA", ALC245_FIXUP_CS35L41_SPI_2), 10710 + SND_PCI_QUIRK(0x1043, 0x30d1, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2), 10711 + SND_PCI_QUIRK(0x1043, 0x30e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2), 10805 10712 SND_PCI_QUIRK(0x1043, 0x31d0, "ASUS Zen AIO 27 Z272SD_A272SD", ALC274_FIXUP_ASUS_ZEN_AIO_27), 10713 + SND_PCI_QUIRK(0x1043, 0x31e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2), 10714 + SND_PCI_QUIRK(0x1043, 0x31f1, "ASUS B3605CCA", ALC245_FIXUP_CS35L41_SPI_2), 10806 10715 SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), 10807 10716 SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), 10808 10717 SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), 10809 10718 SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), 10810 10719 SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), 10720 + SND_PCI_QUIRK(0x1043, 0x3d78, "ASUS GA603KH", ALC287_FIXUP_CS35L41_I2C_2), 10721 + SND_PCI_QUIRK(0x1043, 0x3d88, "ASUS GA603KM", 
ALC287_FIXUP_CS35L41_I2C_2), 10722 + SND_PCI_QUIRK(0x1043, 0x3e00, "ASUS G814FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2), 10723 + SND_PCI_QUIRK(0x1043, 0x3e20, "ASUS G814PH/PM/PP", ALC287_FIXUP_CS35L41_I2C_2), 10811 10724 SND_PCI_QUIRK(0x1043, 0x3e30, "ASUS TP3607SA", ALC287_FIXUP_TAS2781_I2C), 10812 10725 SND_PCI_QUIRK(0x1043, 0x3ee0, "ASUS Strix G815_JHR_JMR_JPR", ALC287_FIXUP_TAS2781_I2C), 10813 10726 SND_PCI_QUIRK(0x1043, 0x3ef0, "ASUS Strix G635LR_LW_LX", ALC287_FIXUP_TAS2781_I2C), ··· 10829 10714 SND_PCI_QUIRK(0x1043, 0x3f10, "ASUS Strix G835LR_LW_LX", ALC287_FIXUP_TAS2781_I2C), 10830 10715 SND_PCI_QUIRK(0x1043, 0x3f20, "ASUS Strix G615LR_LW", ALC287_FIXUP_TAS2781_I2C), 10831 10716 SND_PCI_QUIRK(0x1043, 0x3f30, "ASUS Strix G815LR_LW", ALC287_FIXUP_TAS2781_I2C), 10717 + SND_PCI_QUIRK(0x1043, 0x3fd0, "ASUS B3605CVA", ALC245_FIXUP_CS35L41_SPI_2), 10718 + SND_PCI_QUIRK(0x1043, 0x3ff0, "ASUS B5405CVA", ALC245_FIXUP_CS35L41_SPI_2), 10832 10719 SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC), 10833 10720 SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC), 10834 10721 SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC), ··· 11029 10912 SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), 11030 10913 SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340), 11031 10914 SND_PCI_QUIRK(0x17aa, 0x334b, "Lenovo ThinkCentre M70 Gen5", ALC283_FIXUP_HEADSET_MIC), 10915 + SND_PCI_QUIRK(0x17aa, 0x3384, "ThinkCentre M90a PRO", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED), 10916 + SND_PCI_QUIRK(0x17aa, 0x3386, "ThinkCentre M90a Gen6", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED), 10917 + SND_PCI_QUIRK(0x17aa, 0x3387, "ThinkCentre M70a Gen6", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED), 11032 10918 SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN), 11033 10919 HDA_CODEC_QUIRK(0x17aa, 0x3802, "DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 11034 
10920 SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8", ALC287_FIXUP_TAS2781_I2C), ··· 12020 11900 spec->codec_variant = ALC269_TYPE_ALC300; 12021 11901 spec->gen.mixer_nid = 0; /* no loopback on ALC300 */ 12022 11902 break; 11903 + case 0x10ec0222: 12023 11904 case 0x10ec0623: 12024 11905 spec->codec_variant = ALC269_TYPE_ALC623; 11906 + spec->shutup = alc222_shutup; 11907 + spec->init_hook = alc222_init; 12025 11908 break; 12026 11909 case 0x10ec0700: 12027 11910 case 0x10ec0701:
+7
sound/soc/amd/yc/acp6x-mach.c
··· 252 252 .driver_data = &acp6x_card, 253 253 .matches = { 254 254 DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 255 + DMI_MATCH(DMI_PRODUCT_NAME, "21M6"), 256 + } 257 + }, 258 + { 259 + .driver_data = &acp6x_card, 260 + .matches = { 261 + DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), 255 262 DMI_MATCH(DMI_PRODUCT_NAME, "21ME"), 256 263 } 257 264 },
+10 -3
sound/soc/codecs/cs42l43-jack.c
··· 167 167 autocontrol |= 0x3 << CS42L43_JACKDET_MODE_SHIFT; 168 168 169 169 ret = cs42l43_find_index(priv, "cirrus,tip-fall-db-ms", 500, 170 - NULL, cs42l43_accdet_db_ms, 170 + &priv->tip_fall_db_ms, cs42l43_accdet_db_ms, 171 171 ARRAY_SIZE(cs42l43_accdet_db_ms)); 172 172 if (ret < 0) 173 173 goto error; ··· 175 175 tip_deb |= ret << CS42L43_TIPSENSE_FALLING_DB_TIME_SHIFT; 176 176 177 177 ret = cs42l43_find_index(priv, "cirrus,tip-rise-db-ms", 500, 178 - NULL, cs42l43_accdet_db_ms, 178 + &priv->tip_rise_db_ms, cs42l43_accdet_db_ms, 179 179 ARRAY_SIZE(cs42l43_accdet_db_ms)); 180 180 if (ret < 0) 181 181 goto error; ··· 764 764 error: 765 765 mutex_unlock(&priv->jack_lock); 766 766 767 + priv->suspend_jack_debounce = false; 768 + 767 769 pm_runtime_mark_last_busy(priv->dev); 768 770 pm_runtime_put_autosuspend(priv->dev); 769 771 } ··· 773 771 irqreturn_t cs42l43_tip_sense(int irq, void *data) 774 772 { 775 773 struct cs42l43_codec *priv = data; 774 + unsigned int db_delay = priv->tip_debounce_ms; 776 775 777 776 cancel_delayed_work(&priv->bias_sense_timeout); 778 777 cancel_delayed_work(&priv->tip_sense_work); 779 778 cancel_delayed_work(&priv->button_press_work); 780 779 cancel_work(&priv->button_release_work); 781 780 781 + // Ensure delay after suspend is long enough to avoid false detection 782 + if (priv->suspend_jack_debounce) 783 + db_delay += priv->tip_fall_db_ms + priv->tip_rise_db_ms; 784 + 782 785 queue_delayed_work(system_long_wq, &priv->tip_sense_work, 783 - msecs_to_jiffies(priv->tip_debounce_ms)); 786 + msecs_to_jiffies(db_delay)); 784 787 785 788 return IRQ_HANDLED; 786 789 }
+15 -2
sound/soc/codecs/cs42l43.c
··· 1146 1146 1147 1147 SOC_DOUBLE_R_SX_TLV("ADC Volume", CS42L43_ADC_B_CTRL1, CS42L43_ADC_B_CTRL2, 1148 1148 CS42L43_ADC_PGA_GAIN_SHIFT, 1149 - 0xF, 5, cs42l43_adc_tlv), 1149 + 0xF, 4, cs42l43_adc_tlv), 1150 1150 1151 1151 SOC_DOUBLE("PDM1 Invert Switch", CS42L43_DMIC_PDM_CTRL, 1152 1152 CS42L43_PDM1L_INV_SHIFT, CS42L43_PDM1R_INV_SHIFT, 1, 0), ··· 2402 2402 return 0; 2403 2403 } 2404 2404 2405 + static int cs42l43_codec_runtime_force_suspend(struct device *dev) 2406 + { 2407 + struct cs42l43_codec *priv = dev_get_drvdata(dev); 2408 + 2409 + dev_dbg(priv->dev, "Runtime suspend\n"); 2410 + 2411 + priv->suspend_jack_debounce = true; 2412 + 2413 + pm_runtime_force_suspend(dev); 2414 + 2415 + return 0; 2416 + } 2417 + 2405 2418 static const struct dev_pm_ops cs42l43_codec_pm_ops = { 2406 2419 RUNTIME_PM_OPS(NULL, cs42l43_codec_runtime_resume, NULL) 2407 - SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) 2420 + SYSTEM_SLEEP_PM_OPS(cs42l43_codec_runtime_force_suspend, pm_runtime_force_resume) 2408 2421 }; 2409 2422 2410 2423 static const struct platform_device_id cs42l43_codec_id_table[] = {
+3
sound/soc/codecs/cs42l43.h
··· 78 78 79 79 bool use_ring_sense; 80 80 unsigned int tip_debounce_ms; 81 + unsigned int tip_fall_db_ms; 82 + unsigned int tip_rise_db_ms; 81 83 unsigned int bias_low; 82 84 unsigned int bias_sense_ua; 83 85 unsigned int bias_ramp_ms; ··· 97 95 bool button_detect_running; 98 96 bool jack_present; 99 97 int jack_override; 98 + bool suspend_jack_debounce; 100 99 101 100 struct work_struct hp_ilimit_work; 102 101 struct delayed_work hp_ilimit_clear_work;
+3
sound/soc/codecs/rt1320-sdw.c
··· 535 535 /* set the timeout values */ 536 536 prop->clk_stop_timeout = 64; 537 537 538 + /* BIOS may set wake_capable. Make sure it is 0 as wake events are disabled. */ 539 + prop->wake_capable = 0; 540 + 538 541 return 0; 539 542 } 540 543
+4
sound/soc/codecs/rt722-sdca-sdw.c
··· 86 86 case 0x6100067: 87 87 case 0x6100070 ... 0x610007c: 88 88 case 0x6100080: 89 + case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN, 90 + CH_01) ... 91 + SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN, 92 + CH_04): 89 93 case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME, 90 94 CH_01): 91 95 case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME,
+11 -2
sound/soc/codecs/wm0010.c
··· 920 920 if (ret) { 921 921 dev_err(wm0010->dev, "Failed to set IRQ %d as wake source: %d\n", 922 922 irq, ret); 923 - return ret; 923 + goto free_irq; 924 924 } 925 925 926 926 if (spi->max_speed_hz) ··· 932 932 &soc_component_dev_wm0010, wm0010_dai, 933 933 ARRAY_SIZE(wm0010_dai)); 934 934 if (ret < 0) 935 - return ret; 935 + goto disable_irq_wake; 936 936 937 937 return 0; 938 + 939 + disable_irq_wake: 940 + irq_set_irq_wake(wm0010->irq, 0); 941 + 942 + free_irq: 943 + if (wm0010->irq) 944 + free_irq(wm0010->irq, wm0010); 945 + 946 + return ret; 938 947 } 939 948 940 949 static void wm0010_spi_remove(struct spi_device *spi)
+2 -2
sound/soc/codecs/wsa884x.c
··· 1875 1875 * Reading temperature is possible only when Power Amplifier is 1876 1876 * off. Report last cached data. 1877 1877 */ 1878 - *temp = wsa884x->temperature; 1878 + *temp = wsa884x->temperature * 1000; 1879 1879 return 0; 1880 1880 } 1881 1881 ··· 1934 1934 if ((val > WSA884X_LOW_TEMP_THRESHOLD) && 1935 1935 (val < WSA884X_HIGH_TEMP_THRESHOLD)) { 1936 1936 wsa884x->temperature = val; 1937 - *temp = val; 1937 + *temp = val * 1000; 1938 1938 ret = 0; 1939 1939 } else { 1940 1940 ret = -EAGAIN;
+1 -1
sound/soc/intel/boards/sof_sdw.c
··· 954 954 955 955 /* generate DAI links by each sdw link */ 956 956 while (sof_dais->initialised) { 957 - int current_be_id; 957 + int current_be_id = 0; 958 958 959 959 ret = create_sdw_dailink(card, sof_dais, dai_links, 960 960 &current_be_id, codec_conf);
+7 -8
sound/soc/soc-ops.c
··· 337 337 if (ucontrol->value.integer.value[0] < 0) 338 338 return -EINVAL; 339 339 val = ucontrol->value.integer.value[0]; 340 - if (mc->platform_max && ((int)val + min) > mc->platform_max) 340 + if (mc->platform_max && val > mc->platform_max) 341 341 return -EINVAL; 342 342 if (val > max - min) 343 343 return -EINVAL; ··· 350 350 if (ucontrol->value.integer.value[1] < 0) 351 351 return -EINVAL; 352 352 val2 = ucontrol->value.integer.value[1]; 353 - if (mc->platform_max && ((int)val2 + min) > mc->platform_max) 353 + if (mc->platform_max && val2 > mc->platform_max) 354 354 return -EINVAL; 355 355 if (val2 > max - min) 356 356 return -EINVAL; ··· 503 503 { 504 504 struct soc_mixer_control *mc = 505 505 (struct soc_mixer_control *)kcontrol->private_value; 506 - int platform_max; 507 - int min = mc->min; 506 + int max; 508 507 509 - if (!mc->platform_max) 510 - mc->platform_max = mc->max; 511 - platform_max = mc->platform_max; 508 + max = mc->max - mc->min; 509 + if (mc->platform_max && mc->platform_max < max) 510 + max = mc->platform_max; 512 511 513 512 uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER; 514 513 uinfo->count = snd_soc_volsw_is_stereo(mc) ? 2 : 1; 515 514 uinfo->value.integer.min = 0; 516 - uinfo->value.integer.max = platform_max - min; 515 + uinfo->value.integer.max = max; 517 516 518 517 return 0; 519 518 }
+2 -2
sound/soc/tegra/tegra210_adx.c
··· 264 264 .rates = SNDRV_PCM_RATE_8000_192000, \ 265 265 .formats = SNDRV_PCM_FMTBIT_S8 | \ 266 266 SNDRV_PCM_FMTBIT_S16_LE | \ 267 - SNDRV_PCM_FMTBIT_S16_LE | \ 267 + SNDRV_PCM_FMTBIT_S24_LE | \ 268 268 SNDRV_PCM_FMTBIT_S32_LE, \ 269 269 }, \ 270 270 .capture = { \ ··· 274 274 .rates = SNDRV_PCM_RATE_8000_192000, \ 275 275 .formats = SNDRV_PCM_FMTBIT_S8 | \ 276 276 SNDRV_PCM_FMTBIT_S16_LE | \ 277 - SNDRV_PCM_FMTBIT_S16_LE | \ 277 + SNDRV_PCM_FMTBIT_S24_LE | \ 278 278 SNDRV_PCM_FMTBIT_S32_LE, \ 279 279 }, \ 280 280 .ops = &tegra210_adx_out_dai_ops, \
+11
sound/usb/usx2y/usbusx2y.c
··· 151 151 static void snd_usx2y_card_private_free(struct snd_card *card); 152 152 static void usx2y_unlinkseq(struct snd_usx2y_async_seq *s); 153 153 154 + #ifdef USX2Y_NRPACKS_VARIABLE 155 + int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */ 156 + module_param(nrpacks, int, 0444); 157 + MODULE_PARM_DESC(nrpacks, "Number of packets per URB."); 158 + #endif 159 + 154 160 /* 155 161 * pipe 4 is used for switching the lamps, setting samplerate, volumes .... 156 162 */ ··· 437 431 struct usb_device *device = interface_to_usbdev(intf); 438 432 struct snd_card *card; 439 433 int err; 434 + 435 + #ifdef USX2Y_NRPACKS_VARIABLE 436 + if (nrpacks < 0 || nrpacks > USX2Y_NRPACKS_MAX) 437 + return -EINVAL; 438 + #endif 440 439 441 440 if (le16_to_cpu(device->descriptor.idVendor) != 0x1604 || 442 441 (le16_to_cpu(device->descriptor.idProduct) != USB_ID_US122 &&
+26
sound/usb/usx2y/usbusx2y.h
··· 7 7 8 8 #define NRURBS 2 9 9 10 + /* Default value used for nr of packs per urb. 11 + * 1 to 4 have been tested ok on uhci. 12 + * To use 3 on ohci, you'd need a patch: 13 + * look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on 14 + * "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425" 15 + * 16 + * 1, 2 and 4 work out of the box on ohci, if I recall correctly. 17 + * Bigger is safer operation, smaller gives lower latencies. 18 + */ 19 + #define USX2Y_NRPACKS 4 20 + 21 + #define USX2Y_NRPACKS_MAX 1024 22 + 23 + /* If your system works ok with this module's parameter 24 + * nrpacks set to 1, you might as well comment 25 + * this define out, and thereby produce smaller, faster code. 26 + * You'd also set USX2Y_NRPACKS to 1 then. 27 + */ 28 + #define USX2Y_NRPACKS_VARIABLE 1 29 + 30 + #ifdef USX2Y_NRPACKS_VARIABLE 31 + extern int nrpacks; 32 + #define nr_of_packs() nrpacks 33 + #else 34 + #define nr_of_packs() USX2Y_NRPACKS 35 + #endif 10 36 11 37 #define URBS_ASYNC_SEQ 10 12 38 #define URB_DATA_LEN_ASYNC_SEQ 32
-27
sound/usb/usx2y/usbusx2yaudio.c
··· 28 28 #include "usx2y.h" 29 29 #include "usbusx2y.h" 30 30 31 - /* Default value used for nr of packs per urb. 32 - * 1 to 4 have been tested ok on uhci. 33 - * To use 3 on ohci, you'd need a patch: 34 - * look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on 35 - * "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425" 36 - * 37 - * 1, 2 and 4 work out of the box on ohci, if I recall correctly. 38 - * Bigger is safer operation, smaller gives lower latencies. 39 - */ 40 - #define USX2Y_NRPACKS 4 41 - 42 - /* If your system works ok with this module's parameter 43 - * nrpacks set to 1, you might as well comment 44 - * this define out, and thereby produce smaller, faster code. 45 - * You'd also set USX2Y_NRPACKS to 1 then. 46 - */ 47 - #define USX2Y_NRPACKS_VARIABLE 1 48 - 49 - #ifdef USX2Y_NRPACKS_VARIABLE 50 - static int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */ 51 - #define nr_of_packs() nrpacks 52 - module_param(nrpacks, int, 0444); 53 - MODULE_PARM_DESC(nrpacks, "Number of packets per URB."); 54 - #else 55 - #define nr_of_packs() USX2Y_NRPACKS 56 - #endif 57 - 58 31 static int usx2y_urb_capt_retire(struct snd_usx2y_substream *subs) 59 32 { 60 33 struct urb *urb = subs->completed_urb;
+2
tools/testing/selftests/damon/damon_nr_regions.py
··· 65 65 66 66 test_name = 'nr_regions test with %d/%d/%d real/min/max nr_regions' % ( 67 67 real_nr_regions, min_nr_regions, max_nr_regions) 68 + collected_nr_regions.sort() 68 69 if (collected_nr_regions[0] < min_nr_regions or 69 70 collected_nr_regions[-1] > max_nr_regions): 70 71 print('fail %s' % test_name) ··· 110 109 attrs = kdamonds.kdamonds[0].contexts[0].monitoring_attrs 111 110 attrs.min_nr_regions = 3 112 111 attrs.max_nr_regions = 7 112 + attrs.update_us = 100000 113 113 err = kdamonds.kdamonds[0].commit() 114 114 if err is not None: 115 115 proc.terminate()
+6 -3
tools/testing/selftests/damon/damos_quota.py
··· 51 51 nr_quota_exceeds = scheme.stats.qt_exceeds 52 52 53 53 wss_collected.sort() 54 + nr_expected_quota_exceeds = 0 54 55 for wss in wss_collected: 55 56 if wss > sz_quota: 56 57 print('quota is not kept: %s > %s' % (wss, sz_quota)) 57 58 print('collected samples are as below') 58 59 print('\n'.join(['%d' % wss for wss in wss_collected])) 59 60 exit(1) 61 + if wss == sz_quota: 62 + nr_expected_quota_exceeds += 1 60 63 61 - if nr_quota_exceeds < len(wss_collected): 62 - print('quota is not always exceeded: %d > %d' % 63 - (len(wss_collected), nr_quota_exceeds)) 64 + if nr_quota_exceeds < nr_expected_quota_exceeds: 65 + print('quota is exceeded less than expected: %d < %d' % 66 + (nr_quota_exceeds, nr_expected_quota_exceeds)) 64 67 exit(1) 65 68 66 69 if __name__ == '__main__':
+3
tools/testing/selftests/damon/damos_quota_goal.py
··· 63 63 if last_effective_bytes != 0 else -1.0)) 64 64 65 65 if last_effective_bytes == goal.effective_bytes: 66 + # effective quota was already minimum that cannot be more reduced 67 + if expect_increase is False and last_effective_bytes == 1: 68 + continue 66 69 print('efective bytes not changed: %d' % goal.effective_bytes) 67 70 exit(1) 68 71
+2 -2
tools/testing/selftests/drivers/net/bonding/bond_options.sh
··· 11 11 12 12 lib_dir=$(dirname "$0") 13 13 source ${lib_dir}/bond_topo_3d1c.sh 14 - c_maddr="33:33:00:00:00:10" 15 - g_maddr="33:33:00:00:02:54" 14 + c_maddr="33:33:ff:00:00:10" 15 + g_maddr="33:33:ff:00:02:54" 16 16 17 17 skip_prio() 18 18 {
+184 -14
tools/testing/selftests/drivers/net/ping.py
··· 1 1 #!/usr/bin/env python3 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 + import os 5 + import random, string, time 4 6 from lib.py import ksft_run, ksft_exit 5 - from lib.py import ksft_eq 6 - from lib.py import NetDrvEpEnv 7 + from lib.py import ksft_eq, KsftSkipEx, KsftFailEx 8 + from lib.py import EthtoolFamily, NetDrvEpEnv 7 9 from lib.py import bkg, cmd, wait_port_listen, rand_port 10 + from lib.py import ethtool, ip 8 11 12 + remote_ifname="" 13 + no_sleep=False 9 14 10 - def test_v4(cfg) -> None: 15 + def _test_v4(cfg) -> None: 11 16 cfg.require_v4() 12 17 13 18 cmd(f"ping -c 1 -W0.5 {cfg.remote_v4}") 14 19 cmd(f"ping -c 1 -W0.5 {cfg.v4}", host=cfg.remote) 20 + cmd(f"ping -s 65000 -c 1 -W0.5 {cfg.remote_v4}") 21 + cmd(f"ping -s 65000 -c 1 -W0.5 {cfg.v4}", host=cfg.remote) 15 22 16 - 17 - def test_v6(cfg) -> None: 23 + def _test_v6(cfg) -> None: 18 24 cfg.require_v6() 19 25 20 - cmd(f"ping -c 1 -W0.5 {cfg.remote_v6}") 21 - cmd(f"ping -c 1 -W0.5 {cfg.v6}", host=cfg.remote) 26 + cmd(f"ping -c 1 -W5 {cfg.remote_v6}") 27 + cmd(f"ping -c 1 -W5 {cfg.v6}", host=cfg.remote) 28 + cmd(f"ping -s 65000 -c 1 -W0.5 {cfg.remote_v6}") 29 + cmd(f"ping -s 65000 -c 1 -W0.5 {cfg.v6}", host=cfg.remote) 22 30 23 - 24 - def test_tcp(cfg) -> None: 31 + def _test_tcp(cfg) -> None: 25 32 cfg.require_cmd("socat", remote=True) 26 33 27 34 port = rand_port() 28 35 listen_cmd = f"socat -{cfg.addr_ipver} -t 2 -u TCP-LISTEN:{port},reuseport STDOUT" 29 36 37 + test_string = ''.join(random.choice(string.ascii_lowercase) for _ in range(65536)) 30 38 with bkg(listen_cmd, exit_wait=True) as nc: 31 39 wait_port_listen(port) 32 40 33 - cmd(f"echo ping | socat -t 2 -u STDIN TCP:{cfg.baddr}:{port}", 41 + cmd(f"echo {test_string} | socat -t 2 -u STDIN TCP:{cfg.baddr}:{port}", 34 42 shell=True, host=cfg.remote) 35 - ksft_eq(nc.stdout.strip(), "ping") 43 + ksft_eq(nc.stdout.strip(), test_string) 36 44 45 + test_string = ''.join(random.choice(string.ascii_lowercase) for _ in range(65536)) 37 46 
with bkg(listen_cmd, host=cfg.remote, exit_wait=True) as nc: 38 47 wait_port_listen(port, host=cfg.remote) 39 48 40 - cmd(f"echo ping | socat -t 2 -u STDIN TCP:{cfg.remote_baddr}:{port}", shell=True) 41 - ksft_eq(nc.stdout.strip(), "ping") 49 + cmd(f"echo {test_string} | socat -t 2 -u STDIN TCP:{cfg.remote_baddr}:{port}", shell=True) 50 + ksft_eq(nc.stdout.strip(), test_string) 42 51 52 + def _set_offload_checksum(cfg, netnl, on) -> None: 53 + try: 54 + ethtool(f" -K {cfg.ifname} rx {on} tx {on} ") 55 + except: 56 + return 57 + 58 + def _set_xdp_generic_sb_on(cfg) -> None: 59 + test_dir = os.path.dirname(os.path.realpath(__file__)) 60 + prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o" 61 + cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 62 + cmd(f"ip link set dev {cfg.ifname} mtu 1500 xdpgeneric obj {prog} sec xdp", shell=True) 63 + 64 + if no_sleep != True: 65 + time.sleep(10) 66 + 67 + def _set_xdp_generic_mb_on(cfg) -> None: 68 + test_dir = os.path.dirname(os.path.realpath(__file__)) 69 + prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o" 70 + cmd(f"ip link set dev {remote_ifname} mtu 9000", shell=True, host=cfg.remote) 71 + ip("link set dev %s mtu 9000 xdpgeneric obj %s sec xdp.frags" % (cfg.ifname, prog)) 72 + 73 + if no_sleep != True: 74 + time.sleep(10) 75 + 76 + def _set_xdp_native_sb_on(cfg) -> None: 77 + test_dir = os.path.dirname(os.path.realpath(__file__)) 78 + prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o" 79 + cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 80 + cmd(f"ip -j link set dev {cfg.ifname} mtu 1500 xdp obj {prog} sec xdp", shell=True) 81 + xdp_info = ip("-d link show %s" % (cfg.ifname), json=True)[0] 82 + if xdp_info['xdp']['mode'] != 1: 83 + """ 84 + If the interface doesn't support native-mode, it falls back to generic mode. 85 + The mode value 1 is native and 2 is generic. 86 + So it raises an exception if mode is not 1(native mode). 
87 + """ 88 + raise KsftSkipEx('device does not support native-XDP') 89 + 90 + if no_sleep != True: 91 + time.sleep(10) 92 + 93 + def _set_xdp_native_mb_on(cfg) -> None: 94 + test_dir = os.path.dirname(os.path.realpath(__file__)) 95 + prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o" 96 + cmd(f"ip link set dev {remote_ifname} mtu 9000", shell=True, host=cfg.remote) 97 + try: 98 + cmd(f"ip link set dev {cfg.ifname} mtu 9000 xdp obj {prog} sec xdp.frags", shell=True) 99 + except Exception as e: 100 + cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 101 + raise KsftSkipEx('device does not support native-multi-buffer XDP') 102 + 103 + if no_sleep != True: 104 + time.sleep(10) 105 + 106 + def _set_xdp_offload_on(cfg) -> None: 107 + test_dir = os.path.dirname(os.path.realpath(__file__)) 108 + prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o" 109 + cmd(f"ip link set dev {cfg.ifname} mtu 1500", shell=True) 110 + try: 111 + cmd(f"ip link set dev {cfg.ifname} xdpoffload obj {prog} sec xdp", shell=True) 112 + except Exception as e: 113 + raise KsftSkipEx('device does not support offloaded XDP') 114 + cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 115 + 116 + if no_sleep != True: 117 + time.sleep(10) 118 + 119 + def get_interface_info(cfg) -> None: 120 + global remote_ifname 121 + global no_sleep 122 + 123 + remote_info = cmd(f"ip -4 -o addr show to {cfg.remote_v4} | awk '{{print $2}}'", shell=True, host=cfg.remote).stdout 124 + remote_ifname = remote_info.rstrip('\n') 125 + if remote_ifname == "": 126 + raise KsftFailEx('Can not get remote interface') 127 + local_info = ip("-d link show %s" % (cfg.ifname), json=True)[0] 128 + if 'parentbus' in local_info and local_info['parentbus'] == "netdevsim": 129 + no_sleep=True 130 + if 'linkinfo' in local_info and local_info['linkinfo']['info_kind'] == "veth": 131 + no_sleep=True 132 + 133 + def set_interface_init(cfg) -> None: 134 + cmd(f"ip link set dev {cfg.ifname} mtu 
1500", shell=True) 135 + cmd(f"ip link set dev {cfg.ifname} xdp off ", shell=True) 136 + cmd(f"ip link set dev {cfg.ifname} xdpgeneric off ", shell=True) 137 + cmd(f"ip link set dev {cfg.ifname} xdpoffload off", shell=True) 138 + cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote) 139 + 140 + def test_default(cfg, netnl) -> None: 141 + _set_offload_checksum(cfg, netnl, "off") 142 + _test_v4(cfg) 143 + _test_v6(cfg) 144 + _test_tcp(cfg) 145 + _set_offload_checksum(cfg, netnl, "on") 146 + _test_v4(cfg) 147 + _test_v6(cfg) 148 + _test_tcp(cfg) 149 + 150 + def test_xdp_generic_sb(cfg, netnl) -> None: 151 + _set_xdp_generic_sb_on(cfg) 152 + _set_offload_checksum(cfg, netnl, "off") 153 + _test_v4(cfg) 154 + _test_v6(cfg) 155 + _test_tcp(cfg) 156 + _set_offload_checksum(cfg, netnl, "on") 157 + _test_v4(cfg) 158 + _test_v6(cfg) 159 + _test_tcp(cfg) 160 + ip("link set dev %s xdpgeneric off" % cfg.ifname) 161 + 162 + def test_xdp_generic_mb(cfg, netnl) -> None: 163 + _set_xdp_generic_mb_on(cfg) 164 + _set_offload_checksum(cfg, netnl, "off") 165 + _test_v4(cfg) 166 + _test_v6(cfg) 167 + _test_tcp(cfg) 168 + _set_offload_checksum(cfg, netnl, "on") 169 + _test_v4(cfg) 170 + _test_v6(cfg) 171 + _test_tcp(cfg) 172 + ip("link set dev %s xdpgeneric off" % cfg.ifname) 173 + 174 + def test_xdp_native_sb(cfg, netnl) -> None: 175 + _set_xdp_native_sb_on(cfg) 176 + _set_offload_checksum(cfg, netnl, "off") 177 + _test_v4(cfg) 178 + _test_v6(cfg) 179 + _test_tcp(cfg) 180 + _set_offload_checksum(cfg, netnl, "on") 181 + _test_v4(cfg) 182 + _test_v6(cfg) 183 + _test_tcp(cfg) 184 + ip("link set dev %s xdp off" % cfg.ifname) 185 + 186 + def test_xdp_native_mb(cfg, netnl) -> None: 187 + _set_xdp_native_mb_on(cfg) 188 + _set_offload_checksum(cfg, netnl, "off") 189 + _test_v4(cfg) 190 + _test_v6(cfg) 191 + _test_tcp(cfg) 192 + _set_offload_checksum(cfg, netnl, "on") 193 + _test_v4(cfg) 194 + _test_v6(cfg) 195 + _test_tcp(cfg) 196 + ip("link set dev %s xdp off" % 
cfg.ifname) 197 + 198 + def test_xdp_offload(cfg, netnl) -> None: 199 + _set_xdp_offload_on(cfg) 200 + _test_v4(cfg) 201 + _test_v6(cfg) 202 + _test_tcp(cfg) 203 + ip("link set dev %s xdpoffload off" % cfg.ifname) 43 204 44 205 def main() -> None: 45 206 with NetDrvEpEnv(__file__) as cfg: 46 - ksft_run(globs=globals(), case_pfx={"test_"}, args=(cfg, )) 207 + get_interface_info(cfg) 208 + set_interface_init(cfg) 209 + ksft_run([test_default, 210 + test_xdp_generic_sb, 211 + test_xdp_generic_mb, 212 + test_xdp_native_sb, 213 + test_xdp_native_mb, 214 + test_xdp_offload], 215 + args=(cfg, EthtoolFamily())) 216 + set_interface_init(cfg) 47 217 ksft_exit() 48 218 49 219
+13 -8
tools/testing/selftests/kvm/mmu_stress_test.c
··· 18 18 #include "ucall_common.h" 19 19 20 20 static bool mprotect_ro_done; 21 + static bool all_vcpus_hit_ro_fault; 21 22 22 23 static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride) 23 24 { ··· 37 36 38 37 /* 39 38 * Write to the region while mprotect(PROT_READ) is underway. Keep 40 - * looping until the memory is guaranteed to be read-only, otherwise 41 - * vCPUs may complete their writes and advance to the next stage 42 - * prematurely. 39 + * looping until the memory is guaranteed to be read-only and a fault 40 + * has occurred, otherwise vCPUs may complete their writes and advance 41 + * to the next stage prematurely. 43 42 * 44 43 * For architectures that support skipping the faulting instruction, 45 44 * generate the store via inline assembly to ensure the exact length ··· 57 56 #else 58 57 vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa); 59 58 #endif 60 - } while (!READ_ONCE(mprotect_ro_done)); 59 + } while (!READ_ONCE(mprotect_ro_done) || !READ_ONCE(all_vcpus_hit_ro_fault)); 61 60 62 61 /* 63 62 * Only architectures that write the entire range can explicitly sync, ··· 82 81 83 82 static int nr_vcpus; 84 83 static atomic_t rendezvous; 84 + static atomic_t nr_ro_faults; 85 85 86 86 static void rendezvous_with_boss(void) 87 87 { ··· 150 148 * be stuck on the faulting instruction for other architectures. 
Go to 151 149 * stage 3 without a rendezvous 152 150 */ 153 - do { 154 - r = _vcpu_run(vcpu); 155 - } while (!r); 151 + r = _vcpu_run(vcpu); 156 152 TEST_ASSERT(r == -1 && errno == EFAULT, 157 153 "Expected EFAULT on write to RO memory, got r = %d, errno = %d", r, errno); 154 + 155 + atomic_inc(&nr_ro_faults); 156 + if (atomic_read(&nr_ro_faults) == nr_vcpus) { 157 + WRITE_ONCE(all_vcpus_hit_ro_fault, true); 158 + sync_global_to_guest(vm, all_vcpus_hit_ro_fault); 159 + } 158 160 159 161 #if defined(__x86_64__) || defined(__aarch64__) 160 162 /* ··· 384 378 rendezvous_with_vcpus(&time_run2, "run 2"); 385 379 386 380 mprotect(mem, slot_size, PROT_READ); 387 - usleep(10); 388 381 mprotect_ro_done = true; 389 382 sync_global_to_guest(vm, mprotect_ro_done); 390 383
+2
tools/testing/selftests/kvm/x86/nested_exceptions_test.c
··· 85 85 86 86 GUEST_ASSERT_EQ(ctrl->exit_code, (SVM_EXIT_EXCP_BASE + vector)); 87 87 GUEST_ASSERT_EQ(ctrl->exit_info_1, error_code); 88 + GUEST_ASSERT(!ctrl->int_state); 88 89 } 89 90 90 91 static void l1_svm_code(struct svm_test_data *svm) ··· 123 122 GUEST_ASSERT_EQ(vmreadz(VM_EXIT_REASON), EXIT_REASON_EXCEPTION_NMI); 124 123 GUEST_ASSERT_EQ((vmreadz(VM_EXIT_INTR_INFO) & 0xff), vector); 125 124 GUEST_ASSERT_EQ(vmreadz(VM_EXIT_INTR_ERROR_CODE), error_code); 125 + GUEST_ASSERT(!vmreadz(GUEST_INTERRUPTIBILITY_INFO)); 126 126 } 127 127 128 128 static void l1_vmx_code(struct vmx_pages *vmx)
+2 -1
tools/testing/selftests/kvm/x86/sev_smoke_test.c
··· 52 52 bool bad = false; 53 53 for (i = 0; i < 4095; i++) { 54 54 if (from_host[i] != from_guest[i]) { 55 - printf("mismatch at %02hhx | %02hhx %02hhx\n", i, from_host[i], from_guest[i]); 55 + printf("mismatch at %u | %02hhx %02hhx\n", 56 + i, from_host[i], from_guest[i]); 56 57 bad = true; 57 58 } 58 59 }
+1 -1
tools/testing/selftests/mm/hugepage-mremap.c
··· 15 15 #define _GNU_SOURCE 16 16 #include <stdlib.h> 17 17 #include <stdio.h> 18 - #include <asm-generic/unistd.h> 18 + #include <unistd.h> 19 19 #include <sys/mman.h> 20 20 #include <errno.h> 21 21 #include <fcntl.h> /* Definition of O_* constants */
+7 -1
tools/testing/selftests/mm/ksm_functional_tests.c
··· 11 11 #include <string.h> 12 12 #include <stdbool.h> 13 13 #include <stdint.h> 14 - #include <asm-generic/unistd.h> 14 + #include <unistd.h> 15 15 #include <errno.h> 16 16 #include <fcntl.h> 17 17 #include <sys/mman.h> ··· 369 369 munmap(map, size); 370 370 } 371 371 372 + #ifdef __NR_userfaultfd 372 373 static void test_unmerge_uffd_wp(void) 373 374 { 374 375 struct uffdio_writeprotect uffd_writeprotect; ··· 430 429 unmap: 431 430 munmap(map, size); 432 431 } 432 + #endif 433 433 434 434 /* Verify that KSM can be enabled / queried with prctl. */ 435 435 static void test_prctl(void) ··· 686 684 exit(test_child_ksm()); 687 685 } 688 686 687 + #ifdef __NR_userfaultfd 689 688 tests++; 689 + #endif 690 690 691 691 ksft_print_header(); 692 692 ksft_set_plan(tests); ··· 700 696 test_unmerge(); 701 697 test_unmerge_zero_pages(); 702 698 test_unmerge_discarded(); 699 + #ifdef __NR_userfaultfd 703 700 test_unmerge_uffd_wp(); 701 + #endif 704 702 705 703 test_prot_none(); 706 704
+13 -1
tools/testing/selftests/mm/memfd_secret.c
··· 17 17 18 18 #include <stdlib.h> 19 19 #include <string.h> 20 - #include <asm-generic/unistd.h> 20 + #include <unistd.h> 21 21 #include <errno.h> 22 22 #include <stdio.h> 23 23 #include <fcntl.h> ··· 27 27 #define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__) 28 28 #define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__) 29 29 #define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__) 30 + 31 + #ifdef __NR_memfd_secret 30 32 31 33 #define PATTERN 0x55 32 34 ··· 334 332 335 333 ksft_finished(); 336 334 } 335 + 336 + #else /* __NR_memfd_secret */ 337 + 338 + int main(int argc, char *argv[]) 339 + { 340 + printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n"); 341 + return KSFT_SKIP; 342 + } 343 + 344 + #endif /* __NR_memfd_secret */
+7 -1
tools/testing/selftests/mm/mkdirty.c
··· 9 9 */ 10 10 #include <fcntl.h> 11 11 #include <signal.h> 12 - #include <asm-generic/unistd.h> 12 + #include <unistd.h> 13 13 #include <string.h> 14 14 #include <errno.h> 15 15 #include <stdlib.h> ··· 265 265 munmap(mmap_mem, mmap_size); 266 266 } 267 267 268 + #ifdef __NR_userfaultfd 268 269 static void test_uffdio_copy(void) 269 270 { 270 271 struct uffdio_register uffdio_register; ··· 323 322 munmap(dst, pagesize); 324 323 free(src); 325 324 } 325 + #endif /* __NR_userfaultfd */ 326 326 327 327 int main(void) 328 328 { ··· 336 334 thpsize / 1024); 337 335 tests += 3; 338 336 } 337 + #ifdef __NR_userfaultfd 339 338 tests += 1; 339 + #endif /* __NR_userfaultfd */ 340 340 341 341 ksft_print_header(); 342 342 ksft_set_plan(tests); ··· 368 364 if (thpsize) 369 365 test_pte_mapped_thp(); 370 366 /* Placing a fresh page via userfaultfd may set the PTE dirty. */ 367 + #ifdef __NR_userfaultfd 371 368 test_uffdio_copy(); 369 + #endif /* __NR_userfaultfd */ 372 370 373 371 err = ksft_get_fail_cnt(); 374 372 if (err)
-1
tools/testing/selftests/mm/mlock2.h
··· 3 3 #include <errno.h> 4 4 #include <stdio.h> 5 5 #include <stdlib.h> 6 - #include <asm-generic/unistd.h> 7 6 8 7 static int mlock2_(void *start, size_t len, int flags) 9 8 {
+1 -1
tools/testing/selftests/mm/protection_keys.c
··· 42 42 #include <sys/wait.h> 43 43 #include <sys/stat.h> 44 44 #include <fcntl.h> 45 - #include <asm-generic/unistd.h> 45 + #include <unistd.h> 46 46 #include <sys/ptrace.h> 47 47 #include <setjmp.h> 48 48
+4
tools/testing/selftests/mm/uffd-common.c
··· 673 673 674 674 int uffd_open_sys(unsigned int flags) 675 675 { 676 + #ifdef __NR_userfaultfd 676 677 return syscall(__NR_userfaultfd, flags); 678 + #else 679 + return -1; 680 + #endif 677 681 } 678 682 679 683 int uffd_open(unsigned int flags)
+14 -1
tools/testing/selftests/mm/uffd-stress.c
··· 33 33 * pthread_mutex_lock will also verify the atomicity of the memory 34 34 * transfer (UFFDIO_COPY). 35 35 */ 36 - #include <asm-generic/unistd.h> 36 + 37 37 #include "uffd-common.h" 38 38 39 39 uint64_t features; 40 + #ifdef __NR_userfaultfd 40 41 41 42 #define BOUNCE_RANDOM (1<<0) 42 43 #define BOUNCE_RACINGFAULTS (1<<1) ··· 472 471 nr_pages, nr_pages_per_cpu); 473 472 return userfaultfd_stress(); 474 473 } 474 + 475 + #else /* __NR_userfaultfd */ 476 + 477 + #warning "missing __NR_userfaultfd definition" 478 + 479 + int main(void) 480 + { 481 + printf("skip: Skipping userfaultfd test (missing __NR_userfaultfd)\n"); 482 + return KSFT_SKIP; 483 + } 484 + 485 + #endif /* __NR_userfaultfd */
+13 -1
tools/testing/selftests/mm/uffd-unit-tests.c
··· 5 5 * Copyright (C) 2015-2023 Red Hat, Inc. 6 6 */ 7 7 8 - #include <asm-generic/unistd.h> 9 8 #include "uffd-common.h" 10 9 11 10 #include "../../../../mm/gup_test.h" 11 + 12 + #ifdef __NR_userfaultfd 12 13 13 14 /* The unit test doesn't need a large or random size, make it 32MB for now */ 14 15 #define UFFD_TEST_MEM_SIZE (32UL << 20) ··· 1559 1558 return ksft_get_fail_cnt() ? KSFT_FAIL : KSFT_PASS; 1560 1559 } 1561 1560 1561 + #else /* __NR_userfaultfd */ 1562 + 1563 + #warning "missing __NR_userfaultfd definition" 1564 + 1565 + int main(void) 1566 + { 1567 + printf("Skipping %s (missing __NR_userfaultfd)\n", __file__); 1568 + return KSFT_SKIP; 1569 + } 1570 + 1571 + #endif /* __NR_userfaultfd */
+1
tools/testing/selftests/net/Makefile
··· 31 31 TEST_PROGS += ioam6.sh 32 32 TEST_PROGS += gro.sh 33 33 TEST_PROGS += gre_gso.sh 34 + TEST_PROGS += gre_ipv6_lladdr.sh 34 35 TEST_PROGS += cmsg_so_mark.sh 35 36 TEST_PROGS += cmsg_so_priority.sh 36 37 TEST_PROGS += cmsg_time.sh cmsg_ipv6.sh
+177
tools/testing/selftests/net/gre_ipv6_lladdr.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + source ./lib.sh 5 + 6 + PAUSE_ON_FAIL="no" 7 + 8 + # The trap function handler 9 + # 10 + exit_cleanup_all() 11 + { 12 + cleanup_all_ns 13 + 14 + exit "${EXIT_STATUS}" 15 + } 16 + 17 + # Add fake IPv4 and IPv6 networks on the loopback device, to be used as 18 + # underlay by future GRE devices. 19 + # 20 + setup_basenet() 21 + { 22 + ip -netns "${NS0}" link set dev lo up 23 + ip -netns "${NS0}" address add dev lo 192.0.2.10/24 24 + ip -netns "${NS0}" address add dev lo 2001:db8::10/64 nodad 25 + } 26 + 27 + # Check if network device has an IPv6 link-local address assigned. 28 + # 29 + # Parameters: 30 + # 31 + # * $1: The network device to test 32 + # * $2: An extra regular expression that should be matched (to verify the 33 + # presence of extra attributes) 34 + # * $3: The expected return code from grep (to allow checking the absence of 35 + # a link-local address) 36 + # * $4: The user visible name for the scenario being tested 37 + # 38 + check_ipv6_ll_addr() 39 + { 40 + local DEV="$1" 41 + local EXTRA_MATCH="$2" 42 + local XRET="$3" 43 + local MSG="$4" 44 + 45 + RET=0 46 + set +e 47 + ip -netns "${NS0}" -6 address show dev "${DEV}" scope link | grep "fe80::" | grep -q "${EXTRA_MATCH}" 48 + check_err_fail "${XRET}" $? "" 49 + log_test "${MSG}" 50 + set -e 51 + } 52 + 53 + # Create a GRE device and verify that it gets an IPv6 link-local address as 54 + # expected. 55 + # 56 + # Parameters: 57 + # 58 + # * $1: The device type (gre, ip6gre, gretap or ip6gretap) 59 + # * $2: The local underlay IP address (can be an IPv4, an IPv6 or "any") 60 + # * $3: The remote underlay IP address (can be an IPv4, an IPv6 or "any") 61 + # * $4: The IPv6 interface identifier generation mode to use for the GRE 62 + # device (eui64, none, stable-privacy or random). 
63 + # 64 + test_gre_device() 65 + { 66 + local GRE_TYPE="$1" 67 + local LOCAL_IP="$2" 68 + local REMOTE_IP="$3" 69 + local MODE="$4" 70 + local ADDR_GEN_MODE 71 + local MATCH_REGEXP 72 + local MSG 73 + 74 + ip link add netns "${NS0}" name gretest type "${GRE_TYPE}" local "${LOCAL_IP}" remote "${REMOTE_IP}" 75 + 76 + case "${MODE}" in 77 + "eui64") 78 + ADDR_GEN_MODE=0 79 + MATCH_REGEXP="" 80 + MSG="${GRE_TYPE}, mode: 0 (EUI64), ${LOCAL_IP} -> ${REMOTE_IP}" 81 + XRET=0 82 + ;; 83 + "none") 84 + ADDR_GEN_MODE=1 85 + MATCH_REGEXP="" 86 + MSG="${GRE_TYPE}, mode: 1 (none), ${LOCAL_IP} -> ${REMOTE_IP}" 87 + XRET=1 # No link-local address should be generated 88 + ;; 89 + "stable-privacy") 90 + ADDR_GEN_MODE=2 91 + MATCH_REGEXP="stable-privacy" 92 + MSG="${GRE_TYPE}, mode: 2 (stable privacy), ${LOCAL_IP} -> ${REMOTE_IP}" 93 + XRET=0 94 + # Initialise stable_secret (required for stable-privacy mode) 95 + ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.stable_secret="2001:db8::abcd" 96 + ;; 97 + "random") 98 + ADDR_GEN_MODE=3 99 + MATCH_REGEXP="stable-privacy" 100 + MSG="${GRE_TYPE}, mode: 3 (random), ${LOCAL_IP} -> ${REMOTE_IP}" 101 + XRET=0 102 + ;; 103 + esac 104 + 105 + # Check that IPv6 link-local address is generated when device goes up 106 + ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}" 107 + ip -netns "${NS0}" link set dev gretest up 108 + check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "config: ${MSG}" 109 + 110 + # Now disable link-local address generation 111 + ip -netns "${NS0}" link set dev gretest down 112 + ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode=1 113 + ip -netns "${NS0}" link set dev gretest up 114 + 115 + # Check that link-local address generation works when re-enabled while 116 + # the device is already up 117 + ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}" 118 + check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "update: 
${MSG}" 119 + 120 + ip -netns "${NS0}" link del dev gretest 121 + } 122 + 123 + test_gre4() 124 + { 125 + local GRE_TYPE 126 + local MODE 127 + 128 + for GRE_TYPE in "gre" "gretap"; do 129 + printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n" 130 + 131 + for MODE in "eui64" "none" "stable-privacy" "random"; do 132 + test_gre_device "${GRE_TYPE}" 192.0.2.10 192.0.2.11 "${MODE}" 133 + test_gre_device "${GRE_TYPE}" any 192.0.2.11 "${MODE}" 134 + test_gre_device "${GRE_TYPE}" 192.0.2.10 any "${MODE}" 135 + done 136 + done 137 + } 138 + 139 + test_gre6() 140 + { 141 + local GRE_TYPE 142 + local MODE 143 + 144 + for GRE_TYPE in "ip6gre" "ip6gretap"; do 145 + printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n" 146 + 147 + for MODE in "eui64" "none" "stable-privacy" "random"; do 148 + test_gre_device "${GRE_TYPE}" 2001:db8::10 2001:db8::11 "${MODE}" 149 + test_gre_device "${GRE_TYPE}" any 2001:db8::11 "${MODE}" 150 + test_gre_device "${GRE_TYPE}" 2001:db8::10 any "${MODE}" 151 + done 152 + done 153 + } 154 + 155 + usage() 156 + { 157 + echo "Usage: $0 [-p]" 158 + exit 1 159 + } 160 + 161 + while getopts :p o 162 + do 163 + case $o in 164 + p) PAUSE_ON_FAIL="yes";; 165 + *) usage;; 166 + esac 167 + done 168 + 169 + setup_ns NS0 170 + 171 + set -e 172 + trap exit_cleanup_all EXIT 173 + 174 + setup_basenet 175 + 176 + test_gre4 177 + test_gre6
+6
tools/testing/selftests/net/lib/xdp_dummy.bpf.c
··· 10 10 return XDP_PASS; 11 11 } 12 12 13 + SEC("xdp.frags") 14 + int xdp_dummy_prog_frags(struct xdp_md *ctx) 15 + { 16 + return XDP_PASS; 17 + } 18 + 13 19 char _license[] SEC("license") = "GPL";
+7
tools/testing/selftests/net/netfilter/br_netfilter.sh
··· 13 13 14 14 checktool "nft --version" "run test without nft tool" 15 15 16 + read t < /proc/sys/kernel/tainted 17 + if [ "$t" -ne 0 ];then 18 + echo SKIP: kernel is tainted 19 + exit $ksft_skip 20 + fi 21 + 16 22 cleanup() { 17 23 cleanup_all_ns 18 24 } ··· 171 165 echo PASS: kernel not tainted 172 166 else 173 167 echo ERROR: kernel is tainted 168 + dmesg 174 169 ret=1 175 170 fi 176 171
+7
tools/testing/selftests/net/netfilter/br_netfilter_queue.sh
··· 4 4 5 5 checktool "nft --version" "run test without nft tool" 6 6 7 + read t < /proc/sys/kernel/tainted 8 + if [ "$t" -ne 0 ];then 9 + echo SKIP: kernel is tainted 10 + exit $ksft_skip 11 + fi 12 + 7 13 cleanup() { 8 14 cleanup_all_ns 9 15 } ··· 78 72 echo PASS: kernel not tainted 79 73 else 80 74 echo ERROR: kernel is tainted 75 + dmesg 81 76 exit 1 82 77 fi 83 78
+1
tools/testing/selftests/net/netfilter/nft_queue.sh
··· 593 593 echo "PASS: queue program exiting while packets queued" 594 594 else 595 595 echo "TAINT: queue program exiting while packets queued" 596 + dmesg 596 597 ret=1 597 598 fi 598 599 }
+25
tools/testing/selftests/tc-testing/tc-tests/qdiscs/drr.json
··· 61 61 "teardown": [ 62 62 "$TC qdisc del dev $DUMMY handle 1: root" 63 63 ] 64 + }, 65 + { 66 + "id": "4009", 67 + "name": "Reject creation of DRR class with classid TC_H_ROOT", 68 + "category": [ 69 + "qdisc", 70 + "drr" 71 + ], 72 + "plugins": { 73 + "requires": "nsPlugin" 74 + }, 75 + "setup": [ 76 + "$TC qdisc add dev $DUMMY root handle ffff: drr", 77 + "$TC filter add dev $DUMMY parent ffff: basic classid ffff:1", 78 + "$TC class add dev $DUMMY parent ffff: classid ffff:1 drr", 79 + "$TC filter add dev $DUMMY parent ffff: prio 1 u32 match u16 0x0000 0xfe00 at 2 flowid ffff:ffff" 80 + ], 81 + "cmdUnderTest": "$TC class add dev $DUMMY parent ffff: classid ffff:ffff drr", 82 + "expExitCode": "2", 83 + "verifyCmd": "$TC class show dev $DUMMY", 84 + "matchPattern": "class drr ffff:ffff", 85 + "matchCount": "0", 86 + "teardown": [ 87 + "$TC qdisc del dev $DUMMY root" 88 + ] 64 89 } 65 90 ]
+5 -5
tools/testing/selftests/vDSO/parse_vdso.c
··· 53 53 /* Symbol table */ 54 54 ELF(Sym) *symtab; 55 55 const char *symstrings; 56 - ELF(Word) *gnu_hash; 56 + ELF(Word) *gnu_hash, *gnu_bucket; 57 57 ELF_HASH_ENTRY *bucket, *chain; 58 58 ELF_HASH_ENTRY nbucket, nchain; 59 59 ··· 185 185 /* The bucket array is located after the header (4 uint32) and the bloom 186 186 * filter (size_t array of gnu_hash[2] elements). 187 187 */ 188 - vdso_info.bucket = vdso_info.gnu_hash + 4 + 189 - sizeof(size_t) / 4 * vdso_info.gnu_hash[2]; 188 + vdso_info.gnu_bucket = vdso_info.gnu_hash + 4 + 189 + sizeof(size_t) / 4 * vdso_info.gnu_hash[2]; 190 190 } else { 191 191 vdso_info.nbucket = hash[0]; 192 192 vdso_info.nchain = hash[1]; ··· 268 268 if (vdso_info.gnu_hash) { 269 269 uint32_t h1 = gnu_hash(name), h2, *hashval; 270 270 271 - i = vdso_info.bucket[h1 % vdso_info.nbucket]; 271 + i = vdso_info.gnu_bucket[h1 % vdso_info.nbucket]; 272 272 if (i == 0) 273 273 return 0; 274 274 h1 |= 1; 275 - hashval = vdso_info.bucket + vdso_info.nbucket + 275 + hashval = vdso_info.gnu_bucket + vdso_info.nbucket + 276 276 (i - vdso_info.gnu_hash[1]); 277 277 for (;; i++) { 278 278 ELF(Sym) *sym = &vdso_info.symtab[i];
+1 -1
usr/include/Makefile
··· 10 10 11 11 # In theory, we do not care -m32 or -m64 for header compile tests. 12 12 # It is here just because CONFIG_CC_CAN_LINK is tested with -m32 or -m64. 13 - UAPI_CFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS)) 13 + UAPI_CFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)) 14 14 15 15 # USERCFLAGS might contain sysroot location for CC. 16 16 UAPI_CFLAGS += $(USERCFLAGS)