@@ -176 +176 @@
 			values without prompting.
 
 "make defconfig"	Create a ./.config file by using the default
-			symbol values from either arch/$ARCH/defconfig
+			symbol values from either arch/$ARCH/configs/defconfig
 			or arch/$ARCH/configs/${PLATFORM}_defconfig,
 			depending on the architecture.
 
Documentation/admin-guide/sysctl/kernel.rst (+11 -0)

@@ -212 +212 @@
 This value defaults to 0.
 
 
+core_sort_vma
+=============
+
+The default coredump writes VMAs in address order. By setting
+``core_sort_vma`` to 1, VMAs will be written from smallest size
+to largest size. This is known to break at least elfutils, but
+can be handy when dealing with very large (and truncated)
+coredumps where the more useful debugging details are included
+in the smaller VMAs.
+
+
 core_uses_pid
 =============
 
@@ -63 +63 @@
 straightforward algorithm to use is to apply the inverse of the first idmapping,
 mapping ``k11000`` up to ``u1000``. Afterwards, we can map ``u1000`` down using
 either the second idmapping mapping or third idmapping mapping. The second
-idmapping would map ``u1000`` down to ``21000``. The third idmapping would map
-``u1000`` down to ``u31000``.
+idmapping would map ``u1000`` down to ``k21000``. The third idmapping would map
+``u1000`` down to ``k31000``.
 
 If we were given the same task for the following three idmappings::
 
Documentation/rust/quick-start.rst (+1 -1)

@@ -145 +145 @@
 ****************************
 
 The Rust standard library source is required because the build system will
-cross-compile ``core`` and ``alloc``.
+cross-compile ``core``.
 
 If ``rustup`` is being used, run::
 
@@ -102 +102 @@
  * sched_rt_period_us takes values from 1 to INT_MAX.
  * sched_rt_runtime_us takes values from -1 to sched_rt_period_us.
  * A run time of -1 specifies runtime == period, ie. no limit.
+ * sched_rt_runtime_us/sched_rt_period_us > 0.05 inorder to preserve
+   bandwidth for fair dl_server. For accurate value check average of
+   runtime/period in /sys/kernel/debug/sched/fair_server/cpuX/
 
 
 2.2 Default behaviour
MAINTAINERS (+38 -20)

@@ -124 +124 @@
 F:	include/net/iw_handler.h
 F:	include/net/wext.h
 F:	include/uapi/linux/nl80211.h
+N:	include/uapi/linux/nl80211-.*
 F:	include/uapi/linux/wireless.h
 F:	net/wireless/
@@ -515 +514 @@
 ADM8211 WIRELESS DRIVER
 L:	linux-wireless@vger.kernel.org
 S:	Orphan
-F:	drivers/net/wireless/admtek/adm8211.*
+F:	drivers/net/wireless/admtek/
 
 ADP1050 HARDWARE MONITOR DRIVER
 M:	Radu Sabau <radu.sabau@analog.com>
@@ -5776 +5775 @@
 
 COMMON INTERNET FILE SYSTEM CLIENT (CIFS and SMB3)
 M:	Steve French <sfrench@samba.org>
+M:	Steve French <smfrench@gmail.com>
 R:	Paulo Alcantara <pc@manguebit.com> (DFS, global name space)
 R:	Ronnie Sahlberg <ronniesahlberg@gmail.com> (directory leases, sparse files)
 R:	Shyam Prasad N <sprasad@microsoft.com> (multichannel)
@@ -6208 +6206 @@
 
 CW1200 WLAN driver
 S:	Orphan
-F:	drivers/net/wireless/st/cw1200/
+F:	drivers/net/wireless/st/
 F:	include/linux/platform_data/net-cw1200.h
 
 CX18 VIDEO4LINUX DRIVER
@@ -9444 +9442 @@
 F:	include/uapi/linux/fscrypt.h
 
 FSI SUBSYSTEM
-M:	Jeremy Kerr <jk@ozlabs.org>
-M:	Joel Stanley <joel@jms.id.au>
-R:	Alistar Popple <alistair@popple.id.au>
-R:	Eddie James <eajames@linux.ibm.com>
+M:	Eddie James <eajames@linux.ibm.com>
+R:	Ninad Palsule <ninad@linux.ibm.com>
 L:	linux-fsi@lists.ozlabs.org
 S:	Supported
 Q:	http://patchwork.ozlabs.org/project/linux-fsi/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joel/fsi.git
 F:	drivers/fsi/
 F:	include/linux/fsi*.h
 F:	include/trace/events/fsi*.h
@@ -9829 +9830 @@
 F:	drivers/media/usb/go7007/
 
 GOODIX TOUCHSCREEN
-M:	Bastien Nocera <hadess@hadess.net>
 M:	Hans de Goede <hdegoede@redhat.com>
 L:	linux-input@vger.kernel.org
 S:	Maintained
@@ -11140 +11142 @@
 F:	drivers/i2c/busses/i2c-icy.c
 
 IDEAPAD LAPTOP EXTRAS DRIVER
-M:	Ike Panhc <ike.pan@canonical.com>
+M:	Ike Panhc <ikepanhc@gmail.com>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 W:	http://launchpad.net/ideapad-laptop
@@ -12653 +12655 @@
 
 KERNEL SMB3 SERVER (KSMBD)
 M:	Namjae Jeon <linkinjeon@kernel.org>
+M:	Namjae Jeon <linkinjeon@samba.org>
 M:	Steve French <sfrench@samba.org>
+M:	Steve French <smfrench@gmail.com>
 R:	Sergey Senozhatsky <senozhatsky@chromium.org>
 R:	Tom Talpey <tom@talpey.com>
 L:	linux-cifs@vger.kernel.org
@@ -12872 +12872 @@
 F:	security/keys/trusted-keys/trusted_dcp.c
 
 KEYS-TRUSTED-TEE
-M:	Sumit Garg <sumit.garg@linaro.org>
+M:	Sumit Garg <sumit.garg@kernel.org>
 L:	linux-integrity@vger.kernel.org
 L:	keyrings@vger.kernel.org
 S:	Supported
@@ -13994 +13994 @@
 L:	libertas-dev@lists.infradead.org
 S:	Orphan
 F:	drivers/net/wireless/marvell/libertas/
+F:	drivers/net/wireless/marvell/libertas_tf/
 
 MARVELL MACCHIATOBIN SUPPORT
 M:	Russell King <linux@armlinux.org.uk>
@@ -15664 +15663 @@
 M:	Claudiu Beznea <claudiu.beznea@tuxon.dev>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
-F:	drivers/net/wireless/microchip/wilc1000/
+F:	drivers/net/wireless/microchip/
 
 MICROSEMI MIPS SOCS
 M:	Alexandre Belloni <alexandre.belloni@bootlin.com>
@@ -16450 +16449 @@
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next.git
 F:	Documentation/devicetree/bindings/net/wireless/
 F:	drivers/net/wireless/
+X:	drivers/net/wireless/ath/
+X:	drivers/net/wireless/broadcom/
+X:	drivers/net/wireless/intel/
+X:	drivers/net/wireless/intersil/
+X:	drivers/net/wireless/marvell/
+X:	drivers/net/wireless/mediatek/mt76/
+X:	drivers/net/wireless/mediatek/mt7601u/
+X:	drivers/net/wireless/microchip/
+X:	drivers/net/wireless/purelifi/
+X:	drivers/net/wireless/quantenna/
+X:	drivers/net/wireless/ralink/
+X:	drivers/net/wireless/realtek/
+X:	drivers/net/wireless/rsi/
+X:	drivers/net/wireless/silabs/
+X:	drivers/net/wireless/st/
+X:	drivers/net/wireless/ti/
+X:	drivers/net/wireless/zydas/
 
 NETWORKING [DSA]
 M:	Andrew Lunn <andrew@lunn.ch>
@@ -17690 +17672 @@
 F:	drivers/tee/optee/
 
 OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER
-M:	Sumit Garg <sumit.garg@linaro.org>
+M:	Sumit Garg <sumit.garg@kernel.org>
 L:	op-tee@lists.trustedfirmware.org
 S:	Maintained
 F:	drivers/char/hw_random/optee-rng.c
@@ -17851 +17833 @@
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/p54
-F:	drivers/net/wireless/intersil/p54/
+F:	drivers/net/wireless/intersil/
 
 PACKET SOCKETS
 M:	Willem de Bruijn <willemdebruijn.kernel@gmail.com>
@@ -19128 +19110 @@
 M:	Srinivasan Raju <srini.raju@purelifi.com>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
-F:	drivers/net/wireless/purelifi/plfxlc/
+F:	drivers/net/wireless/purelifi/
 
 PVRUSB2 VIDEO4LINUX DRIVER
 M:	Mike Isely <isely@pobox.com>
@@ -19679 +19661 @@
 R:	Sergey Matyukevich <geomatsi@gmail.com>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
-F:	drivers/net/wireless/quantenna
+F:	drivers/net/wireless/quantenna/
 
 RADEON and AMDGPU DRM DRIVERS
 M:	Alex Deucher <alexander.deucher@amd.com>
@@ -19759 +19741 @@
 M:	Stanislaw Gruszka <stf_xl@wp.pl>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
-F:	drivers/net/wireless/ralink/rt2x00/
+F:	drivers/net/wireless/ralink/
 
 RAMDISK RAM BLOCK DEVICE DRIVER
 M:	Jens Axboe <axboe@kernel.dk>
@@ -21107 +21089 @@
 
 SAMSUNG SPI DRIVERS
 M:	Andi Shyti <andi.shyti@kernel.org>
+R:	Tudor Ambarus <tudor.ambarus@linaro.org>
 L:	linux-spi@vger.kernel.org
 L:	linux-samsung-soc@vger.kernel.org
 S:	Maintained
@@ -21518 +21499 @@
 
 SFC NETWORK DRIVER
 M:	Edward Cree <ecree.xilinx@gmail.com>
-M:	Martin Habets <habetsm.xilinx@gmail.com>
 L:	netdev@vger.kernel.org
 L:	linux-net-drivers@amd.com
 S:	Maintained
@@ -21726 +21708 @@
 M:	Jérôme Pouiller <jerome.pouiller@silabs.com>
 S:	Supported
 F:	Documentation/devicetree/bindings/net/wireless/silabs,wfx.yaml
-F:	drivers/net/wireless/silabs/wfx/
+F:	drivers/net/wireless/silabs/
 
 SILICON MOTION SM712 FRAME BUFFER DRIVER
 M:	Sudip Mukherjee <sudipm.mukherjee@gmail.com>
@@ -23303 +23285 @@
 
 TEE SUBSYSTEM
 M:	Jens Wiklander <jens.wiklander@linaro.org>
-R:	Sumit Garg <sumit.garg@linaro.org>
+R:	Sumit Garg <sumit.garg@kernel.org>
 L:	op-tee@lists.trustedfirmware.org
 S:	Maintained
 F:	Documentation/ABI/testing/sysfs-class-tee
@@ -26226 +26208 @@
 ZD1211RW WIRELESS DRIVER
 L:	linux-wireless@vger.kernel.org
 S:	Orphan
-F:	drivers/net/wireless/zydas/zd1211rw/
+F:	drivers/net/wireless/zydas/
 
 ZD1301 MEDIA DRIVER
 L:	linux-media@vger.kernel.org
Makefile (+6 -1)

@@ -2 +2 @@
 VERSION = 6
 PATCHLEVEL = 14
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc7
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
@@ -1125 +1125 @@
 # Align the bit size of userspace programs with the kernel
 KBUILD_USERCFLAGS  += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
 KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+
+# userspace programs are linked via the compiler, use the correct linker
+ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_LD_IS_LLD),yy)
+KBUILD_USERLDFLAGS += --ld-path=$(LD)
+endif
 
 # make the checker run with the right architecture
 CHECKFLAGS += --arch=$(ARCH)
arch/arm/mm/fault-armv.c (+25 -12)

@@ -62 +62 @@
 }
 
 static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
-		      unsigned long pfn, struct vm_fault *vmf)
+		      unsigned long pfn, bool need_lock)
 {
 	spinlock_t *ptl;
 	pgd_t *pgd;
@@ -99 +99 @@
 	if (!pte)
 		return 0;
 
-	/*
-	 * If we are using split PTE locks, then we need to take the page
-	 * lock here. Otherwise we are using shared mm->page_table_lock
-	 * which is already locked, thus cannot take it.
-	 */
-	if (ptl != vmf->ptl) {
+	if (need_lock) {
+		/*
+		 * Use nested version here to indicate that we are already
+		 * holding one similar spinlock.
+		 */
 		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
 		if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
 			pte_unmap_unlock(pte, ptl);
@@ -113 +114 @@
 
 	ret = do_adjust_pte(vma, address, pfn, pte);
 
-	if (ptl != vmf->ptl)
+	if (need_lock)
 		spin_unlock(ptl);
 	pte_unmap(pte);
 
@@ -122 +123 @@
 
 static void
 make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
-	      unsigned long addr, pte_t *ptep, unsigned long pfn,
-	      struct vm_fault *vmf)
+	      unsigned long addr, pte_t *ptep, unsigned long pfn)
 {
+	const unsigned long pmd_start_addr = ALIGN_DOWN(addr, PMD_SIZE);
+	const unsigned long pmd_end_addr = pmd_start_addr + PMD_SIZE;
 	struct mm_struct *mm = vma->vm_mm;
 	struct vm_area_struct *mpnt;
 	unsigned long offset;
@@ -142 +142 @@
 	flush_dcache_mmap_lock(mapping);
 	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
 		/*
+		 * If we are using split PTE locks, then we need to take the pte
+		 * lock. Otherwise we are using shared mm->page_table_lock which
+		 * is already locked, thus cannot take it.
+		 */
+		bool need_lock = IS_ENABLED(CONFIG_SPLIT_PTE_PTLOCKS);
+		unsigned long mpnt_addr;
+
+		/*
 		 * If this VMA is not in our MM, we can ignore it.
 		 * Note that we intentionally mask out the VMA
 		 * that we are fixing up.
@@ -159 +151 @@
 		if (!(mpnt->vm_flags & VM_MAYSHARE))
 			continue;
 		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
-		aliases += adjust_pte(mpnt, mpnt->vm_start + offset, pfn, vmf);
+		mpnt_addr = mpnt->vm_start + offset;
+
+		/* Avoid deadlocks by not grabbing the same PTE lock again. */
+		if (mpnt_addr >= pmd_start_addr && mpnt_addr < pmd_end_addr)
+			need_lock = false;
+		aliases += adjust_pte(mpnt, mpnt_addr, pfn, need_lock);
 	}
 	flush_dcache_mmap_unlock(mapping);
 	if (aliases)
@@ -207 +194 @@
 		__flush_dcache_folio(mapping, folio);
 	if (mapping) {
 		if (cache_is_vivt())
-			make_coherent(mapping, vma, addr, ptep, pfn, vmf);
+			make_coherent(mapping, vma, addr, ptep, pfn);
 		else if (vma->vm_flags & VM_EXEC)
 			__flush_icache_all();
 	}
arch/arm64/include/asm/el2_setup.h (+26 -5)

@@ -16 +16 @@
 #include <asm/sysreg.h>
 #include <linux/irqchip/arm-gic-v3.h>
 
+.macro init_el2_hcr	val
+	mov_q	x0, \val
+
+	/*
+	 * Compliant CPUs advertise their VHE-onlyness with
+	 * ID_AA64MMFR4_EL1.E2H0 < 0. On such CPUs HCR_EL2.E2H is RES1, but it
+	 * can reset into an UNKNOWN state and might not read as 1 until it has
+	 * been initialized explicitly.
+	 *
+	 * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but
+	 * don't advertise it (they predate this relaxation).
+	 *
+	 * Initalize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H
+	 * indicating whether the CPU is running in E2H mode.
+	 */
+	mrs_s	x1, SYS_ID_AA64MMFR4_EL1
+	sbfx	x1, x1, #ID_AA64MMFR4_EL1_E2H0_SHIFT, #ID_AA64MMFR4_EL1_E2H0_WIDTH
+	cmp	x1, #0
+	b.ge	.LnVHE_\@
+
+	orr	x0, x0, #HCR_E2H
+.LnVHE_\@:
+	msr	hcr_el2, x0
+	isb
+.endm
+
 .macro __init_el2_sctlr
 	mov_q	x0, INIT_SCTLR_EL2_MMU_OFF
 	msr	sctlr_el2, x0
@@ -268 +242 @@
 	msr_s	SYS_GCSCR_EL1, xzr
 	msr_s	SYS_GCSCRE0_EL1, xzr
 .Lskip_gcs_\@:
-.endm
-
-.macro __init_el2_nvhe_prepare_eret
-	mov	x0, #INIT_PSTATE_EL1
-	msr	spsr_el2, x0
 .endm
 
 .macro __init_el2_mpam

@@ -298 +298 @@
 	msr	sctlr_el2, x0
 	isb
 0:
-	mov_q	x0, HCR_HOST_NVHE_FLAGS
 
-	/*
-	 * Compliant CPUs advertise their VHE-onlyness with
-	 * ID_AA64MMFR4_EL1.E2H0 < 0. HCR_EL2.E2H can be
-	 * RES1 in that case. Publish the E2H bit early so that
-	 * it can be picked up by the init_el2_state macro.
-	 *
-	 * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but
-	 * don't advertise it (they predate this relaxation).
-	 */
-	mrs_s	x1, SYS_ID_AA64MMFR4_EL1
-	tbz	x1, #(ID_AA64MMFR4_EL1_E2H0_SHIFT + ID_AA64MMFR4_EL1_E2H0_WIDTH - 1), 1f
-
-	orr	x0, x0, #HCR_E2H
-1:
-	msr	hcr_el2, x0
-	isb
-
+	init_el2_hcr	HCR_HOST_NVHE_FLAGS
 	init_el2_state
 
 	/* Hypervisor stub */
@@ -322 +339 @@
 	msr	sctlr_el1, x1
 	mov	x2, xzr
 3:
-	__init_el2_nvhe_prepare_eret
+	mov	x0, #INIT_PSTATE_EL1
+	msr	spsr_el2, x0
 
 	mov	w0, #BOOT_CPU_MODE_EL2
 	orr	x0, x0, x2
arch/arm64/kvm/hyp/nvhe/hyp-init.S (+7 -3)

@@ -73 +73 @@
 	eret
 SYM_CODE_END(__kvm_hyp_init)
 
+/*
+ * Initialize EL2 CPU state to sane values.
+ *
+ * HCR_EL2.E2H must have been initialized already.
+ */
 SYM_CODE_START_LOCAL(__kvm_init_el2_state)
-	/* Initialize EL2 CPU state to sane values. */
 	init_el2_state	// Clobbers x0..x2
 	finalise_el2_state
 	ret
@@ -210 +206 @@
 
 2:	msr	SPsel, #1			// We want to use SP_EL{1,2}
 
-	bl	__kvm_init_el2_state
+	init_el2_hcr	0
 
-	__init_el2_nvhe_prepare_eret
+	bl	__kvm_init_el2_state
 
 	/* Enable MMU, set vectors and stack. */
 	mov	x0, x28
arch/arm64/kvm/hyp/nvhe/psci-relay.c (+3 -0)

@@ -218 +218 @@
 	if (is_cpu_on)
 		release_boot_args(boot_args);
 
+	write_sysreg_el1(INIT_SCTLR_EL1_MMU_OFF, SYS_SCTLR);
+	write_sysreg(INIT_PSTATE_EL1, SPSR_EL2);
+
 	__host_enter(host_ctxt);
 }
 
arch/arm64/mm/mmu.c (+4 -1)

@@ -1177 +1177 @@
 			       struct vmem_altmap *altmap)
 {
 	WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
+	/* [start, end] should be within one section */
+	WARN_ON_ONCE(end - start > PAGES_PER_SECTION * sizeof(struct page));
 
-	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
+	if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
+	    (end - start < PAGES_PER_SECTION * sizeof(struct page)))
 		return vmemmap_populate_basepages(start, end, node, altmap);
 	else
 		return vmemmap_populate_hugepages(start, end, node, altmap);
arch/loongarch/kernel/acpi.c (-12)

@@ -249 +249 @@
 	return acpi_map_pxm_to_node(pxm);
 }
 
-/*
- * Callback for SLIT parsing. pxm_to_node() returns NUMA_NO_NODE for
- * I/O localities since SRAT does not list them. I/O localities are
- * not supported at this point.
- */
-unsigned int numa_distance_cnt;
-
-static inline unsigned int get_numa_distances_cnt(struct acpi_table_slit *slit)
-{
-	return slit->locality_count;
-}
-
 void __init numa_set_distance(int from, int to, int distance)
 {
 	if ((u8)distance != distance || (from == to && distance != LOCAL_DISTANCE)) {
arch/loongarch/kernel/machine_kexec.c (+2 -2)

@@ -126 +126 @@
 	/* All secondary cpus go to kexec_smp_wait */
 	if (smp_processor_id() > 0) {
 		relocated_kexec_smp_wait(NULL);
-		unreachable();
+		BUG();
 	}
 #endif
 
 	do_kexec = (void *)reboot_code_buffer;
 	do_kexec(efi_boot, cmdline_ptr, systable_ptr, start_addr, first_ind_entry);
 
-	unreachable();
+	BUG();
 }
 
@@ -669 +669 @@
 	struct kvm_run *run = vcpu->run;
 	unsigned long badv = vcpu->arch.badv;
 
+	/* Inject ADE exception if exceed max GPA size */
+	if (unlikely(badv >= vcpu->kvm->arch.gpa_size)) {
+		kvm_queue_exception(vcpu, EXCCODE_ADE, EXSUBCODE_ADEM);
+		return RESUME_GUEST;
+	}
+
 	ret = kvm_handle_mm_fault(vcpu, badv, write);
 	if (ret) {
 		/* Treat as MMIO */
arch/loongarch/kvm/main.c (+7 -0)

@@ -317 +317 @@
 	kvm_debug("GCFG:%lx GSTAT:%lx GINTC:%lx GTLBC:%lx",
 		  read_csr_gcfg(), read_csr_gstat(), read_csr_gintc(), read_csr_gtlbc());
 
+	/*
+	 * HW Guest CSR registers are lost after CPU suspend and resume.
+	 * Clear last_vcpu so that Guest CSR registers forced to reload
+	 * from vCPU SW state.
+	 */
+	this_cpu_ptr(vmcs)->last_vcpu = NULL;
+
 	return 0;
 }
 
arch/loongarch/kvm/vcpu.c (+1 -1)

@@ -311 +311 @@
 {
 	int ret = RESUME_GUEST;
 	unsigned long estat = vcpu->arch.host_estat;
-	u32 intr = estat & 0x1fff; /* Ignore NMI */
+	u32 intr = estat & CSR_ESTAT_IS;
 	u32 ecode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
 
 	vcpu->mode = OUTSIDE_GUEST_MODE;
arch/loongarch/kvm/vm.c (+5 -1)

@@ -48 +48 @@
 	if (kvm_pvtime_supported())
 		kvm->arch.pv_features |= BIT(KVM_FEATURE_STEAL_TIME);
 
-	kvm->arch.gpa_size = BIT(cpu_vabits - 1);
+	/*
+	 * cpu_vabits means user address space only (a half of total).
+	 * GPA size of VM is the same with the size of user address space.
+	 */
+	kvm->arch.gpa_size = BIT(cpu_vabits);
 	kvm->arch.root_level = CONFIG_PGTABLE_LEVELS - 1;
 	kvm->arch.invalid_ptes[0] = 0;
 	kvm->arch.invalid_ptes[1] = (unsigned long)invalid_pte_table;
@@ -23 +23 @@
 #define ARCH_PAGE_TABLE_SYNC_MASK	PGTBL_PMD_MODIFIED
 
 /*
- * traditional i386 two-level paging structure:
+ * Traditional i386 two-level paging structure:
  */
 
 #define PGDIR_SHIFT	22
 #define PTRS_PER_PGD	1024
 
-
 /*
- * the i386 is two-level, so we don't really have any
- * PMD directory physically.
+ * The i386 is two-level, so we don't really have any
+ * PMD directory physically:
  */
+#define PTRS_PER_PMD	1
 
 #define PTRS_PER_PTE	1024
 
@@ -143 +143 @@
 
 struct resource *amd_get_mmconfig_range(struct resource *res)
 {
-	u32 address;
 	u64 base, msr;
 	unsigned int segn_busn_bits;
 
@@ -150 +151 @@
 	    boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
 		return NULL;
 
-	/* assume all cpus from fam10h have mmconfig */
-	if (boot_cpu_data.x86 < 0x10)
+	/* Assume CPUs from Fam10h have mmconfig, although not all VMs do */
+	if (boot_cpu_data.x86 < 0x10 ||
+	    rdmsrl_safe(MSR_FAM10H_MMIO_CONF_BASE, &msr))
 		return NULL;
-
-	address = MSR_FAM10H_MMIO_CONF_BASE;
-	rdmsrl(address, msr);
 
 	/* mmconfig is not enabled */
 	if (!(msr & FAM10H_MMIO_CONF_ENABLE))
arch/x86/kernel/cpu/microcode/amd.c (+189 -85)

@@ -23 +23 @@
 
 #include <linux/earlycpio.h>
 #include <linux/firmware.h>
+#include <linux/bsearch.h>
 #include <linux/uaccess.h>
 #include <linux/vmalloc.h>
 #include <linux/initrd.h>
 #include <linux/kernel.h>
 #include <linux/pci.h>
 
+#include <crypto/sha2.h>
+
 #include <asm/microcode.h>
 #include <asm/processor.h>
+#include <asm/cmdline.h>
 #include <asm/setup.h>
 #include <asm/cpu.h>
 #include <asm/msr.h>
@@ -149 +145 @@
  */
 static u32 bsp_cpuid_1_eax __ro_after_init;
 
+static bool sha_check = true;
+
+struct patch_digest {
+	u32 patch_id;
+	u8 sha256[SHA256_DIGEST_SIZE];
+};
+
+#include "amd_shas.c"
+
+static int cmp_id(const void *key, const void *elem)
+{
+	struct patch_digest *pd = (struct patch_digest *)elem;
+	u32 patch_id = *(u32 *)key;
+
+	if (patch_id == pd->patch_id)
+		return 0;
+	else if (patch_id < pd->patch_id)
+		return -1;
+	else
+		return 1;
+}
+
+static bool need_sha_check(u32 cur_rev)
+{
+	switch (cur_rev >> 8) {
+	case 0x80012: return cur_rev <= 0x800126f; break;
+	case 0x80082: return cur_rev <= 0x800820f; break;
+	case 0x83010: return cur_rev <= 0x830107c; break;
+	case 0x86001: return cur_rev <= 0x860010e; break;
+	case 0x86081: return cur_rev <= 0x8608108; break;
+	case 0x87010: return cur_rev <= 0x8701034; break;
+	case 0x8a000: return cur_rev <= 0x8a0000a; break;
+	case 0xa0010: return cur_rev <= 0xa00107a; break;
+	case 0xa0011: return cur_rev <= 0xa0011da; break;
+	case 0xa0012: return cur_rev <= 0xa001243; break;
+	case 0xa0082: return cur_rev <= 0xa00820e; break;
+	case 0xa1011: return cur_rev <= 0xa101153; break;
+	case 0xa1012: return cur_rev <= 0xa10124e; break;
+	case 0xa1081: return cur_rev <= 0xa108109; break;
+	case 0xa2010: return cur_rev <= 0xa20102f; break;
+	case 0xa2012: return cur_rev <= 0xa201212; break;
+	case 0xa4041: return cur_rev <= 0xa404109; break;
+	case 0xa5000: return cur_rev <= 0xa500013; break;
+	case 0xa6012: return cur_rev <= 0xa60120a; break;
+	case 0xa7041: return cur_rev <= 0xa704109; break;
+	case 0xa7052: return cur_rev <= 0xa705208; break;
+	case 0xa7080: return cur_rev <= 0xa708009; break;
+	case 0xa70c0: return cur_rev <= 0xa70C009; break;
+	case 0xaa001: return cur_rev <= 0xaa00116; break;
+	case 0xaa002: return cur_rev <= 0xaa00218; break;
+	default: break;
+	}
+
+	pr_info("You should not be seeing this. Please send the following couple of lines to x86-<at>-kernel.org\n");
+	pr_info("CPUID(1).EAX: 0x%x, current revision: 0x%x\n", bsp_cpuid_1_eax, cur_rev);
+	return true;
+}
+
+static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsigned int len)
+{
+	struct patch_digest *pd = NULL;
+	u8 digest[SHA256_DIGEST_SIZE];
+	struct sha256_state s;
+	int i;
+
+	if (x86_family(bsp_cpuid_1_eax) < 0x17 ||
+	    x86_family(bsp_cpuid_1_eax) > 0x19)
+		return true;
+
+	if (!need_sha_check(cur_rev))
+		return true;
+
+	if (!sha_check)
+		return true;
+
+	pd = bsearch(&patch_id, phashes, ARRAY_SIZE(phashes), sizeof(struct patch_digest), cmp_id);
+	if (!pd) {
+		pr_err("No sha256 digest for patch ID: 0x%x found\n", patch_id);
+		return false;
+	}
+
+	sha256_init(&s);
+	sha256_update(&s, data, len);
+	sha256_final(&s, digest);
+
+	if (memcmp(digest, pd->sha256, sizeof(digest))) {
+		pr_err("Patch 0x%x SHA256 digest mismatch!\n", patch_id);
+
+		for (i = 0; i < SHA256_DIGEST_SIZE; i++)
+			pr_cont("0x%x ", digest[i]);
+		pr_info("\n");
+
+		return false;
+	}
+
+	return true;
+}
+
+static u32 get_patch_level(void)
+{
+	u32 rev, dummy __always_unused;
+
+	native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+
+	return rev;
+}
+
 static union cpuid_1_eax ucode_rev_to_cpuid(unsigned int val)
 {
 	union zen_patch_rev p;
@@ -357 +246 @@
  * On success, @sh_psize returns the patch size according to the section header,
  * to the caller.
  */
-static bool
-__verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize)
+static bool __verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize)
 {
 	u32 p_type, p_size;
 	const u32 *hdr;
@@ -594 +484 @@
 	}
 }
 
-static bool __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize)
+static bool __apply_microcode_amd(struct microcode_amd *mc, u32 *cur_rev,
+				  unsigned int psize)
 {
 	unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
-	u32 rev, dummy;
+
+	if (!verify_sha256_digest(mc->hdr.patch_id, *cur_rev, (const u8 *)p_addr, psize))
+		return -1;
 
 	native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
 
@@ -618 +505 @@
 	}
 
 	/* verify patch application was successful */
-	native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
-
-	if (rev != mc->hdr.patch_id)
+	*cur_rev = get_patch_level();
+	if (*cur_rev != mc->hdr.patch_id)
 		return false;
 
 	return true;
-}
-
-/*
- * Early load occurs before we can vmalloc(). So we look for the microcode
- * patch container file in initrd, traverse equivalent cpu table, look for a
- * matching microcode patch, and update, all in initrd memory in place.
- * When vmalloc() is available for use later -- on 64-bit during first AP load,
- * and on 32-bit during save_microcode_in_initrd_amd() -- we can call
- * load_microcode_amd() to save equivalent cpu table and microcode patches in
- * kernel heap memory.
- *
- * Returns true if container found (sets @desc), false otherwise.
- */
-static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size)
-{
-	struct cont_desc desc = { 0 };
-	struct microcode_amd *mc;
-
-	scan_containers(ucode, size, &desc);
-
-	mc = desc.mc;
-	if (!mc)
-		return false;
-
-	/*
-	 * Allow application of the same revision to pick up SMT-specific
-	 * changes even if the revision of the other SMT thread is already
-	 * up-to-date.
-	 */
-	if (old_rev > mc->hdr.patch_id)
-		return false;
-
-	return __apply_microcode_amd(mc, desc.psize);
 }
 
 static bool get_builtin_microcode(struct cpio_data *cp)
@@ -662 +583 @@
 	return found;
 }
 
+/*
+ * Early load occurs before we can vmalloc(). So we look for the microcode
+ * patch container file in initrd, traverse equivalent cpu table, look for a
+ * matching microcode patch, and update, all in initrd memory in place.
+ * When vmalloc() is available for use later -- on 64-bit during first AP load,
+ * and on 32-bit during save_microcode_in_initrd() -- we can call
+ * load_microcode_amd() to save equivalent cpu table and microcode patches in
+ * kernel heap memory.
+ */
 void __init load_ucode_amd_bsp(struct early_load_data *ed, unsigned int cpuid_1_eax)
 {
+	struct cont_desc desc = { };
+	struct microcode_amd *mc;
 	struct cpio_data cp = { };
-	u32 dummy;
+	char buf[4];
+	u32 rev;
+
+	if (cmdline_find_option(boot_command_line, "microcode.amd_sha_check", buf, 4)) {
+		if (!strncmp(buf, "off", 3)) {
+			sha_check = false;
+			pr_warn_once("It is a very very bad idea to disable the blobs SHA check!\n");
+			add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
+		}
+	}
 
 	bsp_cpuid_1_eax = cpuid_1_eax;
 
-	native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->old_rev, dummy);
+	rev = get_patch_level();
+	ed->old_rev = rev;
 
 	/* Needed in load_microcode_amd() */
 	ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax;
@@ -698 +598 @@
 	if (!find_blobs_in_containers(&cp))
 		return;
 
-	if (early_apply_microcode(ed->old_rev, cp.data, cp.size))
-		native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy);
-}
-
-static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size);
-
-static int __init save_microcode_in_initrd(void)
-{
-	unsigned int cpuid_1_eax = native_cpuid_eax(1);
-	struct cpuinfo_x86 *c = &boot_cpu_data;
-	struct cont_desc desc = { 0 };
-	enum ucode_state ret;
-	struct cpio_data cp;
-
-	if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10)
-		return 0;
-
-	if (!find_blobs_in_containers(&cp))
-		return -EINVAL;
-
 	scan_containers(cp.data, cp.size, &desc);
-	if (!desc.mc)
-		return -EINVAL;
 
-	ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
-	if (ret > UCODE_UPDATED)
-		return -EINVAL;
+	mc = desc.mc;
+	if (!mc)
+		return;
 
-	return 0;
+	/*
+	 * Allow application of the same revision to pick up SMT-specific
+	 * changes even if the revision of the other SMT thread is already
+	 * up-to-date.
+	 */
+	if (ed->old_rev > mc->hdr.patch_id)
+		return;
+
+	if (__apply_microcode_amd(mc, &rev, desc.psize))
+		ed->new_rev = rev;
 }
-early_initcall(save_microcode_in_initrd);
 
 static inline bool patch_cpus_equivalent(struct ucode_patch *p,
 					 struct ucode_patch *n,
@@ -815 +729 @@
 static struct ucode_patch *find_patch(unsigned int cpu)
 {
 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
-	u32 rev, dummy __always_unused;
 	u16 equiv_id = 0;
 
-	/* fetch rev if not populated yet: */
-	if (!uci->cpu_sig.rev) {
-		rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
-		uci->cpu_sig.rev = rev;
-	}
+	uci->cpu_sig.rev = get_patch_level();
 
 	if (x86_family(bsp_cpuid_1_eax) < 0x17) {
 		equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig);
@@ -840 +759 @@
 
 	mc = p->data;
 
-	rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
-
+	rev = get_patch_level();
 	if (rev < mc->hdr.patch_id) {
-		if (__apply_microcode_amd(mc, p->size))
-			pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id);
+		if (__apply_microcode_amd(mc, &rev, p->size))
+			pr_info_once("reload revision: 0x%08x\n", rev);
 	}
 }
 
 static int collect_cpu_info_amd(int cpu, struct cpu_signature *csig)
 {
-	struct cpuinfo_x86 *c = &cpu_data(cpu);
 	struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
 	struct ucode_patch *p;
 
 	csig->sig = cpuid_eax(0x00000001);
-	csig->rev = c->microcode;
+	csig->rev = get_patch_level();
 
 	/*
 	 * a patch could have been loaded early, set uci->mc so that
@@ -894 +815 @@
 		goto out;
 	}
 
-	if (!__apply_microcode_amd(mc_amd, p->size)) {
+	if (!__apply_microcode_amd(mc_amd, &rev, p->size)) {
 		pr_err("CPU%d: update failed for patch_level=0x%08x\n",
 		       cpu, mc_amd->hdr.patch_id);
 		return UCODE_ERROR;
@@ -1016 +937 @@
 }
 
 /* Scan the blob in @data and add microcode patches to the cache. */
-static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
-					     size_t size)
+static enum ucode_state __load_microcode_amd(u8 family, const u8 *data, size_t size)
 {
 	u8 *fw = (u8 *)data;
 	size_t offset;
@@ -1074 +996 @@
 	if (ret != UCODE_OK)
 		return ret;
 
-	for_each_node(nid) {
+	for_each_node_with_cpus(nid) {
 		cpu = cpumask_first(cpumask_of_node(nid));
 		c = &cpu_data(cpu);
@@ -1090 +1012 @@
 
 	return ret;
 }
+
+static int __init save_microcode_in_initrd(void)
+{
+	unsigned int cpuid_1_eax = native_cpuid_eax(1);
+	struct cpuinfo_x86 *c = &boot_cpu_data;
+	struct cont_desc desc = { 0 };
+	enum ucode_state ret;
+	struct cpio_data cp;
+
+	if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10)
+		return 0;
+
+	if (!find_blobs_in_containers(&cp))
+		return -EINVAL;
+
+	scan_containers(cp.data, cp.size, &desc);
+	if (!desc.mc)
+		return -EINVAL;
+
+	ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
+	if (ret > UCODE_UPDATED)
+		return -EINVAL;
+
+	return 0;
+}
+early_initcall(save_microcode_in_initrd);
 
 /*
  * AMD microcode firmware naming convention, up to family 15h they are in
···
 #ifdef CONFIG_CPU_SUP_AMD
 void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family);
 void load_ucode_amd_ap(unsigned int family);
-int save_microcode_in_initrd_amd(unsigned int family);
 void reload_ucode_amd(unsigned int cpu);
 struct microcode_ops *init_amd_microcode(void);
 void exit_amd_microcode(void);
 #else /* CONFIG_CPU_SUP_AMD */
 static inline void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family) { }
 static inline void load_ucode_amd_ap(unsigned int family) { }
-static inline int save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
 static inline void reload_ucode_amd(unsigned int cpu) { }
 static inline struct microcode_ops *init_amd_microcode(void) { return NULL; }
 static inline void exit_amd_microcode(void) { }
+7-3
arch/x86/kernel/cpu/sgx/driver.c
···
 	u64 xfrm_mask;
 	int ret;
 
-	if (!cpu_feature_enabled(X86_FEATURE_SGX_LC))
+	if (!cpu_feature_enabled(X86_FEATURE_SGX_LC)) {
+		pr_info("SGX disabled: SGX launch control CPU feature is not available, /dev/sgx_enclave disabled.\n");
 		return -ENODEV;
+	}
 
 	cpuid_count(SGX_CPUID, 0, &eax, &ebx, &ecx, &edx);
 
 	if (!(eax & 1)) {
-		pr_err("SGX disabled: SGX1 instruction support not available.\n");
+		pr_info("SGX disabled: SGX1 instruction support not available, /dev/sgx_enclave disabled.\n");
 		return -ENODEV;
 	}
···
 	}
 
 	ret = misc_register(&sgx_dev_enclave);
-	if (ret)
+	if (ret) {
+		pr_info("SGX disabled: Unable to register the /dev/sgx_enclave driver (%d).\n", ret);
 		return ret;
+	}
 
 	return 0;
 }
+7
arch/x86/kernel/cpu/sgx/ioctl.c
···
 	struct file *backing;
 	long ret;
 
+	/*
+	 * ECREATE would detect this too, but checking here also ensures
+	 * that the 'encl_size' calculations below can never overflow.
+	 */
+	if (!is_power_of_2(secs->size))
+		return -EINVAL;
+
 	va_page = sgx_encl_grow(encl, true);
 	if (IS_ERR(va_page))
 		return PTR_ERR(va_page);
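The `is_power_of_2()` guard added above relies on the classic clear-the-lowest-set-bit trick. As a rough sketch (a simplified stand-in for the kernel's helper in `include/linux/log2.h`, not the kernel code itself), the check is:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * A value is a power of two iff it is non-zero and clearing its
 * lowest set bit (n & (n - 1)) leaves zero, i.e. exactly one bit
 * is set. Sketch of the idea behind is_power_of_2().
 */
static bool is_power_of_2_sketch(uint64_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}
```

A single set bit is also what lets the later `encl_size` arithmetic proceed without overflow checks.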
+4
arch/x86/kernel/cpu/vmware.c
···
 #include <linux/export.h>
 #include <linux/clocksource.h>
 #include <linux/cpu.h>
+#include <linux/efi.h>
 #include <linux/reboot.h>
 #include <linux/static_call.h>
 #include <asm/div64.h>
···
 	} else {
 		pr_warn("Failed to get TSC freq from the hypervisor\n");
 	}
+
+	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !efi_enabled(EFI_BOOT))
+		x86_init.mpparse.find_mptable = mpparse_find_mptable;
 
 	vmware_paravirt_ops_setup();
···
 
 void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
 {
+	struct kvm *kvm = svm->vcpu.kvm;
+
 	/*
 	 * All host state for SEV-ES guests is categorized into three swap types
 	 * based on how it is handled by hardware during a world switch:
···
 
 	/*
 	 * If DebugSwap is enabled, debug registers are loaded but NOT saved by
-	 * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU both
-	 * saves and loads debug registers (Type-A).
+	 * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU does
+	 * not save or load debug registers. Sadly, KVM can't prevent SNP
+	 * guests from lying about DebugSwap on secondary vCPUs, i.e. the
+	 * SEV_FEATURES provided at "AP Create" isn't guaranteed to match what
+	 * the guest has actually enabled (or not!) in the VMSA.
+	 *
+	 * If DebugSwap is *possible*, save the masks so that they're restored
+	 * if the guest enables DebugSwap. But for the DRs themselves, do NOT
+	 * rely on the CPU to restore the host values; KVM will restore them as
+	 * needed in common code, via hw_breakpoint_restore(). Note, KVM does
+	 * NOT support virtualizing Breakpoint Extensions, i.e. the mask MSRs
+	 * don't need to be restored per se, KVM just needs to ensure they are
+	 * loaded with the correct values *if* the CPU writes the MSRs.
 	 */
-	if (sev_vcpu_has_debug_swap(svm)) {
-		hostsa->dr0 = native_get_debugreg(0);
-		hostsa->dr1 = native_get_debugreg(1);
-		hostsa->dr2 = native_get_debugreg(2);
-		hostsa->dr3 = native_get_debugreg(3);
+	if (sev_vcpu_has_debug_swap(svm) ||
+	    (sev_snp_guest(kvm) && cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP))) {
 		hostsa->dr0_addr_mask = amd_get_dr_addr_mask(0);
 		hostsa->dr1_addr_mask = amd_get_dr_addr_mask(1);
 		hostsa->dr2_addr_mask = amd_get_dr_addr_mask(2);
+49
arch/x86/kvm/svm/svm.c
···
 		kvm_pr_unimpl_wrmsr(vcpu, ecx, data);
 		break;
 	}
+
+	/*
+	 * AMD changed the architectural behavior of bits 5:2. On CPUs
+	 * without BusLockTrap, bits 5:2 control "external pins", but
+	 * on CPUs that support BusLockDetect, bit 2 enables BusLockTrap
+	 * and bits 5:3 are reserved-to-zero. Sadly, old KVM allowed
+	 * the guest to set bits 5:2 despite not actually virtualizing
+	 * Performance-Monitoring/Breakpoint external pins. Drop bits
+	 * 5:2 for backwards compatibility.
+	 */
+	data &= ~GENMASK(5, 2);
+
+	/*
+	 * Suppress BTF as KVM doesn't virtualize BTF, but there's no
+	 * way to communicate lack of support to the guest.
+	 */
+	if (data & DEBUGCTLMSR_BTF) {
+		kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data);
+		data &= ~DEBUGCTLMSR_BTF;
+	}
+
 	if (data & DEBUGCTL_RESERVED_BITS)
 		return 1;
···
 
 	guest_state_enter_irqoff();
 
+	/*
+	 * Set RFLAGS.IF prior to VMRUN, as the host's RFLAGS.IF at the time of
+	 * VMRUN controls whether or not physical IRQs are masked (KVM always
+	 * runs with V_INTR_MASKING_MASK). Toggle RFLAGS.IF here to avoid the
+	 * temptation to do STI+VMRUN+CLI, as AMD CPUs bleed the STI shadow
+	 * into guest state if delivery of an event during VMRUN triggers a
+	 * #VMEXIT, and the guest_state transitions already tell lockdep that
+	 * IRQs are being enabled/disabled. Note! GIF=0 for the entirety of
+	 * this path, so IRQs aren't actually unmasked while running host code.
+	 */
+	raw_local_irq_enable();
+
 	amd_clear_divider();
 
 	if (sev_es_guest(vcpu->kvm))
···
 				       sev_es_host_save_area(sd));
 	else
 		__svm_vcpu_run(svm, spec_ctrl_intercepted);
+
+	raw_local_irq_disable();
 
 	guest_state_exit_irqoff();
 }
···
 	clgi();
 	kvm_load_guest_xsave_state(vcpu);
 
+	/*
+	 * Hardware only context switches DEBUGCTL if LBR virtualization is
+	 * enabled. Manually load DEBUGCTL if necessary (and restore it after
+	 * VM-Exit), as running with the host's DEBUGCTL can negatively affect
+	 * guest state and can even be fatal, e.g. due to Bus Lock Detect.
+	 */
+	if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) &&
+	    vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl)
+		update_debugctlmsr(svm->vmcb->save.dbgctl);
+
 	kvm_wait_lapic_expire(vcpu);
 
 	/*
···
 
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
+
+	if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) &&
+	    vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl)
+		update_debugctlmsr(vcpu->arch.host_debugctl);
 
 	kvm_load_host_xsave_state(vcpu);
 	stgi();
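The `data &= ~GENMASK(5, 2);` hunk above clears DEBUGCTL bits 5:2. As a hedged illustration of what that mask evaluates to, here is a simplified userspace re-derivation of `GENMASK()` (adapted from the spirit of `include/linux/bits.h` for a 64-bit `unsigned long`; not the kernel macro itself):

```c
#include <assert.h>

#define BITS_PER_LONG_SKETCH 64

/*
 * Contiguous mask with bits h..l (inclusive) set: shift all-ones up
 * to clear the low bits, then shift all-ones down to clear the high
 * bits. GENMASK_SKETCH(5, 2) == 0b111100 == 0x3c.
 */
#define GENMASK_SKETCH(h, l) \
	(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG_SKETCH - 1 - (h))))
```

So `data &= ~GENMASK(5, 2)` preserves every bit of `data` except bits 2 through 5.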
···
 	mov VCPU_RDI(%_ASM_DI), %_ASM_DI
 
 	/* Enter guest mode */
-	sti
-
 3:	vmrun %_ASM_AX
 4:
-	cli
-
 	/* Pop @svm to RAX while it's the only available register. */
 	pop %_ASM_AX
···
 	mov KVM_VMCB_pa(%rax), %rax
 
 	/* Enter guest mode */
-	sti
-
 1:	vmrun %rax
-
-2:	cli
-
+2:
 	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
 	FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
+2-6
arch/x86/kvm/vmx/vmx.c
···
  */
 void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-
 	if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
 		shrink_ple_window(vcpu);
 
 	vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
 
 	vmx_vcpu_pi_load(vcpu, cpu);
-
-	vmx->host_debugctlmsr = get_debugctlmsr();
 }
 
 void vmx_vcpu_put(struct kvm_vcpu *vcpu)
···
 	}
 
 	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
-	if (vmx->host_debugctlmsr)
-		update_debugctlmsr(vmx->host_debugctlmsr);
+	if (vcpu->arch.host_debugctl)
+		update_debugctlmsr(vcpu->arch.host_debugctl);
 
 #ifndef CONFIG_X86_64
 	/*
-2
arch/x86/kvm/vmx/vmx.h
···
 	/* apic deadline value in host tsc */
 	u64 hv_deadline_tsc;
 
-	unsigned long host_debugctlmsr;
-
 	/*
 	 * Only bits masked by msr_ia32_feature_control_valid_bits can be set in
 	 * msr_ia32_feature_control. FEAT_CTL_LOCKED is always included
···
 	out[size] = 0;
 
 	while (i < size) {
-		u8 c = le16_to_cpu(in[i]) & 0xff;
+		u8 c = le16_to_cpu(in[i]) & 0x7f;
 
 		if (c && !isprint(c))
 			c = '!';
+73-21
drivers/acpi/platform_profile.c
···
 	struct device dev;
 	int minor;
 	unsigned long choices[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)];
+	unsigned long hidden_choices[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)];
 	const struct platform_profile_ops *ops;
+};
+
+struct aggregate_choices_data {
+	unsigned long aggregate[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)];
+	int count;
 };
 
 static const char * const profile_names[] = {
···
 	lockdep_assert_held(&profile_lock);
 	handler = to_pprof_handler(dev);
-	if (!test_bit(*bit, handler->choices))
+	if (!test_bit(*bit, handler->choices) && !test_bit(*bit, handler->hidden_choices))
 		return -EOPNOTSUPP;
 
 	return handler->ops->profile_set(dev, *bit);
···
 /**
  * _aggregate_choices - Aggregate the available profile choices
  * @dev: The device
- * @data: The available profile choices
+ * @arg: struct aggregate_choices_data
  *
  * Return: 0 on success, -errno on failure
  */
-static int _aggregate_choices(struct device *dev, void *data)
+static int _aggregate_choices(struct device *dev, void *arg)
 {
+	unsigned long tmp[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)];
+	struct aggregate_choices_data *data = arg;
 	struct platform_profile_handler *handler;
-	unsigned long *aggregate = data;
 
 	lockdep_assert_held(&profile_lock);
 	handler = to_pprof_handler(dev);
-	if (test_bit(PLATFORM_PROFILE_LAST, aggregate))
-		bitmap_copy(aggregate, handler->choices, PLATFORM_PROFILE_LAST);
+	bitmap_or(tmp, handler->choices, handler->hidden_choices, PLATFORM_PROFILE_LAST);
+	if (test_bit(PLATFORM_PROFILE_LAST, data->aggregate))
+		bitmap_copy(data->aggregate, tmp, PLATFORM_PROFILE_LAST);
 	else
-		bitmap_and(aggregate, handler->choices, aggregate, PLATFORM_PROFILE_LAST);
+		bitmap_and(data->aggregate, tmp, data->aggregate, PLATFORM_PROFILE_LAST);
+	data->count++;
+
+	return 0;
+}
+
+/**
+ * _remove_hidden_choices - Remove hidden choices from aggregate data
+ * @dev: The device
+ * @arg: struct aggregate_choices_data
+ *
+ * Return: 0 on success, -errno on failure
+ */
+static int _remove_hidden_choices(struct device *dev, void *arg)
+{
+	struct aggregate_choices_data *data = arg;
+	struct platform_profile_handler *handler;
+
+	lockdep_assert_held(&profile_lock);
+	handler = to_pprof_handler(dev);
+	bitmap_andnot(data->aggregate, handler->choices,
+		      handler->hidden_choices, PLATFORM_PROFILE_LAST);
 
 	return 0;
 }
···
 				   struct device_attribute *attr,
 				   char *buf)
 {
-	unsigned long aggregate[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)];
+	struct aggregate_choices_data data = {
+		.aggregate = { [0 ... BITS_TO_LONGS(PLATFORM_PROFILE_LAST) - 1] = ~0UL },
+		.count = 0,
+	};
 	int err;
 
-	set_bit(PLATFORM_PROFILE_LAST, aggregate);
+	set_bit(PLATFORM_PROFILE_LAST, data.aggregate);
 	scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &profile_lock) {
 		err = class_for_each_device(&platform_profile_class, NULL,
-					    aggregate, _aggregate_choices);
+					    &data, _aggregate_choices);
 		if (err)
 			return err;
+		if (data.count == 1) {
+			err = class_for_each_device(&platform_profile_class, NULL,
+						    &data, _remove_hidden_choices);
+			if (err)
+				return err;
+		}
 	}
 
 	/* no profile handler registered any more */
-	if (bitmap_empty(aggregate, PLATFORM_PROFILE_LAST))
+	if (bitmap_empty(data.aggregate, PLATFORM_PROFILE_LAST))
 		return -EINVAL;
 
-	return _commmon_choices_show(aggregate, buf);
+	return _commmon_choices_show(data.aggregate, buf);
 }
 
 /**
···
 				     struct device_attribute *attr,
 				     const char *buf, size_t count)
 {
-	unsigned long choices[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)];
+	struct aggregate_choices_data data = {
+		.aggregate = { [0 ... BITS_TO_LONGS(PLATFORM_PROFILE_LAST) - 1] = ~0UL },
+		.count = 0,
+	};
 	int ret;
 	int i;
···
 	i = sysfs_match_string(profile_names, buf);
 	if (i < 0 || i == PLATFORM_PROFILE_CUSTOM)
 		return -EINVAL;
-	set_bit(PLATFORM_PROFILE_LAST, choices);
+	set_bit(PLATFORM_PROFILE_LAST, data.aggregate);
 	scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &profile_lock) {
 		ret = class_for_each_device(&platform_profile_class, NULL,
-					    choices, _aggregate_choices);
+					    &data, _aggregate_choices);
 		if (ret)
 			return ret;
-		if (!test_bit(i, choices))
+		if (!test_bit(i, data.aggregate))
 			return -EOPNOTSUPP;
 
 		ret = class_for_each_device(&platform_profile_class, NULL, &i,
···
  */
 int platform_profile_cycle(void)
 {
+	struct aggregate_choices_data data = {
+		.aggregate = { [0 ... BITS_TO_LONGS(PLATFORM_PROFILE_LAST) - 1] = ~0UL },
+		.count = 0,
+	};
 	enum platform_profile_option next = PLATFORM_PROFILE_LAST;
 	enum platform_profile_option profile = PLATFORM_PROFILE_LAST;
-	unsigned long choices[BITS_TO_LONGS(PLATFORM_PROFILE_LAST)];
 	int err;
 
-	set_bit(PLATFORM_PROFILE_LAST, choices);
+	set_bit(PLATFORM_PROFILE_LAST, data.aggregate);
 	scoped_cond_guard(mutex_intr, return -ERESTARTSYS, &profile_lock) {
 		err = class_for_each_device(&platform_profile_class, NULL,
 					    &profile, _aggregate_profiles);
···
 			return -EINVAL;
 
 		err = class_for_each_device(&platform_profile_class, NULL,
-					    choices, _aggregate_choices);
+					    &data, _aggregate_choices);
 		if (err)
 			return err;
 
 		/* never iterate into a custom if all drivers supported it */
-		clear_bit(PLATFORM_PROFILE_CUSTOM, choices);
+		clear_bit(PLATFORM_PROFILE_CUSTOM, data.aggregate);
 
-		next = find_next_bit_wrap(choices,
+		next = find_next_bit_wrap(data.aggregate,
 					  PLATFORM_PROFILE_LAST,
 					  profile + 1);
···
 	if (bitmap_empty(pprof->choices, PLATFORM_PROFILE_LAST)) {
 		dev_err(dev, "Failed to register platform_profile class device with empty choices\n");
 		return ERR_PTR(-EINVAL);
+	}
+
+	if (ops->hidden_choices) {
+		err = ops->hidden_choices(drvdata, pprof->hidden_choices);
+		if (err) {
+			dev_err(dev, "platform_profile hidden_choices failed\n");
+			return ERR_PTR(err);
+		}
 	}
 
 	guard(mutex)(&profile_lock);
+1
drivers/android/binderfs.c
···
 	mutex_unlock(&binderfs_minors_mutex);
 
 	if (refcount_dec_and_test(&device->ref)) {
+		hlist_del_init(&device->hlist);
 		kfree(device->context.name);
 		kfree(device);
 	}
···
 	if (ph.len > sizeof(struct ublk_params))
 		ph.len = sizeof(struct ublk_params);
 
-	/* parameters can only be changed when device isn't live */
 	mutex_lock(&ub->mutex);
-	if (ub->dev_info.state == UBLK_S_DEV_LIVE) {
+	if (test_bit(UB_STATE_USED, &ub->state)) {
+		/*
+		 * Parameters can only be changed when device hasn't
+		 * been started yet
+		 */
 		ret = -EACCES;
 	} else if (copy_from_user(&ub->params, argp, ph.len)) {
 		ret = -EFAULT;
+3-2
drivers/block/virtio_blk.c
···
 
 	while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
 		struct request *req = blk_mq_rq_from_pdu(vbr);
+		u8 status = virtblk_vbr_status(vbr);
 
 		found++;
 		if (!blk_mq_complete_request_remote(req) &&
-		    !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr),
-					 virtblk_complete_batch))
+		    !blk_mq_add_to_batch(req, iob, status != VIRTIO_BLK_S_OK,
+					 virtblk_complete_batch))
 			virtblk_request_done(req);
 	}
+12
drivers/bluetooth/Kconfig
···
 	  Say Y here to enable USB poll_sync for Bluetooth USB devices by
 	  default.
 
+config BT_HCIBTUSB_AUTO_ISOC_ALT
+	bool "Automatically adjust alternate setting for Isoc endpoints"
+	depends on BT_HCIBTUSB
+	default y if CHROME_PLATFORMS
+	help
+	  Say Y here to automatically adjust the alternate setting for
+	  HCI_USER_CHANNEL whenever a SCO link is established.
+
+	  When enabled, btusb intercepts the HCI_EV_SYNC_CONN_COMPLETE packets
+	  and configures the isoc endpoint alternate setting automatically when
+	  HCI_USER_CHANNEL is in use.
+
 config BT_HCIBTUSB_BCM
 	bool "Broadcom protocol support"
 	depends on BT_HCIBTUSB
+42
drivers/bluetooth/btusb.c
···
 static bool enable_autosuspend = IS_ENABLED(CONFIG_BT_HCIBTUSB_AUTOSUSPEND);
 static bool enable_poll_sync = IS_ENABLED(CONFIG_BT_HCIBTUSB_POLL_SYNC);
 static bool reset = true;
+static bool auto_isoc_alt = IS_ENABLED(CONFIG_BT_HCIBTUSB_AUTO_ISOC_ALT);
 
 static struct usb_driver btusb_driver;
···
 	spin_unlock_irqrestore(&data->rxlock, flags);
 }
 
+static void btusb_sco_connected(struct btusb_data *data, struct sk_buff *skb)
+{
+	struct hci_event_hdr *hdr = (void *) skb->data;
+	struct hci_ev_sync_conn_complete *ev =
+		(void *) skb->data + sizeof(*hdr);
+	struct hci_dev *hdev = data->hdev;
+	unsigned int notify_air_mode;
+
+	if (hci_skb_pkt_type(skb) != HCI_EVENT_PKT)
+		return;
+
+	if (skb->len < sizeof(*hdr) || hdr->evt != HCI_EV_SYNC_CONN_COMPLETE)
+		return;
+
+	if (skb->len != sizeof(*hdr) + sizeof(*ev) || ev->status)
+		return;
+
+	switch (ev->air_mode) {
+	case BT_CODEC_CVSD:
+		notify_air_mode = HCI_NOTIFY_ENABLE_SCO_CVSD;
+		break;
+
+	case BT_CODEC_TRANSPARENT:
+		notify_air_mode = HCI_NOTIFY_ENABLE_SCO_TRANSP;
+		break;
+
+	default:
+		return;
+	}
+
+	bt_dev_info(hdev, "enabling SCO with air mode %u", ev->air_mode);
+	data->sco_num = 1;
+	data->air_mode = notify_air_mode;
+	schedule_work(&data->work);
+}
+
 static int btusb_recv_event(struct btusb_data *data, struct sk_buff *skb)
 {
 	if (data->intr_interval) {
 		/* Trigger dequeue immediately if an event is received */
 		schedule_delayed_work(&data->rx_work, 0);
 	}
+
+	/* Configure altsetting for HCI_USER_CHANNEL on SCO connected */
+	if (auto_isoc_alt && hci_dev_test_flag(data->hdev, HCI_USER_CHANNEL))
+		btusb_sco_connected(data, skb);
 
 	return data->recv_event(data->hdev, skb);
 }
···
 }
 
 static const struct file_operations force_poll_sync_fops = {
+	.owner		= THIS_MODULE,
 	.open		= simple_open,
 	.read		= force_poll_sync_read,
 	.write		= force_poll_sync_write,
···
 		}
 	}
 
-	ATOMIC_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier);
+	rwlock_init(&gdev->line_state_lock);
+	RAW_INIT_NOTIFIER_HEAD(&gdev->line_state_notifier);
 	BLOCKING_INIT_NOTIFIER_HEAD(&gdev->device_notifier);
 
 	ret = init_srcu_struct(&gdev->srcu);
···
 
 		desc->gdev = gdev;
 
-		if (gc->get_direction && gpiochip_line_is_valid(gc, desc_index)) {
-			ret = gc->get_direction(gc, desc_index);
-			if (ret < 0)
-				/*
-				 * FIXME: Bail-out here once all GPIO drivers
-				 * are updated to not return errors in
-				 * situations that can be considered normal
-				 * operation.
-				 */
-				dev_warn(&gdev->dev,
-					 "%s: get_direction failed: %d\n",
-					 __func__, ret);
-
-			assign_bit(FLAG_IS_OUT, &desc->flags, !ret);
-		} else {
+		/*
+		 * We would typically want to check the return value of
+		 * get_direction() here but we must not check the return value
+		 * and bail-out as pin controllers can have pins configured to
+		 * alternate functions and return -EINVAL. Also: there's no
+		 * need to take the SRCU lock here.
+		 */
+		if (gc->get_direction && gpiochip_line_is_valid(gc, desc_index))
+			assign_bit(FLAG_IS_OUT, &desc->flags,
+				   !gc->get_direction(gc, desc_index));
+		else
 			assign_bit(FLAG_IS_OUT,
 				   &desc->flags, !gc->direction_input);
-		}
 	}
 
 	ret = of_gpiochip_add(gc);
···
 
 void gpiod_line_state_notify(struct gpio_desc *desc, unsigned long action)
 {
-	atomic_notifier_call_chain(&desc->gdev->line_state_notifier,
-				   action, desc);
+	guard(read_lock_irqsave)(&desc->gdev->line_state_lock);
+
+	raw_notifier_call_chain(&desc->gdev->line_state_notifier, action, desc);
 }
 
 /**
+4-1
drivers/gpio/gpiolib.h
···
 #include <linux/gpio/driver.h>
 #include <linux/module.h>
 #include <linux/notifier.h>
+#include <linux/spinlock.h>
 #include <linux/srcu.h>
 #include <linux/workqueue.h>
···
  * @list: links gpio_device:s together for traversal
  * @line_state_notifier: used to notify subscribers about lines being
  *                       requested, released or reconfigured
+ * @line_state_lock: RW-spinlock protecting the line state notifier
  * @line_state_wq: used to emit line state events from a separate thread in
  *                 process context
  * @device_notifier: used to notify character device wait queues about the GPIO
···
 	const char *label;
 	void *data;
 	struct list_head list;
-	struct atomic_notifier_head line_state_notifier;
+	struct raw_notifier_head line_state_notifier;
+	rwlock_t line_state_lock;
 	struct workqueue_struct *line_state_wq;
 	struct blocking_notifier_head device_notifier;
 	struct srcu_struct srcu;
+9-2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···
 	int r;
 
 	r = amdgpu_device_suspend(drm_dev, true);
-	adev->in_s4 = false;
 	if (r)
 		return r;
···
 static int amdgpu_pmops_thaw(struct device *dev)
 {
 	struct drm_device *drm_dev = dev_get_drvdata(dev);
+	struct amdgpu_device *adev = drm_to_adev(drm_dev);
+	int r;
 
-	return amdgpu_device_resume(drm_dev, true);
+	r = amdgpu_device_resume(drm_dev, true);
+	adev->in_s4 = false;
+
+	return r;
 }
 
 static int amdgpu_pmops_poweroff(struct device *dev)
···
 static int amdgpu_pmops_restore(struct device *dev)
 {
 	struct drm_device *drm_dev = dev_get_drvdata(dev);
+	struct amdgpu_device *adev = drm_to_adev(drm_dev);
+
+	adev->in_s4 = false;
 
 	return amdgpu_device_resume(drm_dev, true);
 }
···
 		decrement_queue_count(dqm, qpd, q);
 
 		if (dqm->dev->kfd->shared_resources.enable_mes) {
-			retval = remove_queue_mes(dqm, q, qpd);
-			if (retval) {
+			int err;
+
+			err = remove_queue_mes(dqm, q, qpd);
+			if (err) {
 				dev_err(dev, "Failed to evict queue %d\n",
 					q->properties.queue_id);
-				goto out;
+				retval = err;
 			}
 		}
 	}
+2-2
drivers/gpu/drm/amd/amdkfd/kfd_queue.c
···
 	/* EOP buffer is not required for all ASICs */
 	if (properties->eop_ring_buffer_address) {
 		if (properties->eop_ring_buffer_size != topo_dev->node_props.eop_buffer_size) {
-			pr_debug("queue eop bo size 0x%lx not equal to node eop buf size 0x%x\n",
-				 properties->eop_buf_bo->tbo.base.size,
+			pr_debug("queue eop bo size 0x%x not equal to node eop buf size 0x%x\n",
+				 properties->eop_ring_buffer_size,
 				 topo_dev->node_props.eop_buffer_size);
 			err = -EINVAL;
 			goto out_err_unreserve;
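The hunk above fixes a format/argument mismatch: `%lx` was being fed a different-width value than the `u32` the message actually cares about, so the fix prints `eop_ring_buffer_size` with `%x`. A small userspace illustration of matching the conversion specifier to a fixed-width argument (an illustrative helper, not kernel code; `PRIx32` is the portable `<inttypes.h>` spelling of `%x` for `uint32_t`):

```c
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Format a 32-bit size the way the fixed pr_debug() does. */
static void format_eop_size(char *buf, size_t len, uint32_t size)
{
	snprintf(buf, len, "queue eop bo size 0x%" PRIx32, size);
}
```

Passing a mismatched-width argument to a printf-style function is undefined behavior in C, which is why compilers flag it and why the kernel change swaps both the specifier and the argument together.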
+16-1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···
 static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector);
 static void handle_hpd_rx_irq(void *param);
 
+static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
+					  int bl_idx,
+					  u32 user_brightness);
+
 static bool
 is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
 				 struct drm_crtc_state *new_crtc_state);
···
 
 		mutex_unlock(&dm->dc_lock);
 
+		/* set the backlight after a reset */
+		for (i = 0; i < dm->num_of_edps; i++) {
+			if (dm->backlight_dev[i])
+				amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
+		}
+
 		return 0;
 	}
+
+	/* leave display off for S4 sequence */
+	if (adev->in_s4)
+		return 0;
+
 	/* Recreate dc_state - DC invalidates it when setting power state to S3. */
 	dc_state_release(dm_state->context);
 	dm_state->context = dc_state_create(dm->dc, NULL);
···
 	dm->backlight_dev[aconnector->bl_idx] =
 		backlight_device_register(bl_name, aconnector->base.kdev, dm,
 					  &amdgpu_dm_backlight_ops, &props);
+	dm->brightness[aconnector->bl_idx] = props.brightness;
 
 	if (IS_ERR(dm->backlight_dev[aconnector->bl_idx])) {
 		DRM_ERROR("DM: Backlight registration failed!\n");
···
 	aconnector->bl_idx = bl_idx;
 
 	amdgpu_dm_update_backlight_caps(dm, bl_idx);
-	dm->brightness[bl_idx] = AMDGPU_MAX_BL_LEVEL;
 	dm->backlight_link[bl_idx] = link;
 	dm->num_of_edps++;
···
 	struct drm_device *dev = adev_to_drm(adev);
 	struct drm_connector *connector;
 	struct drm_connector_list_iter iter;
+	int irq_type;
 	int i;
+
+	/* First, clear all hpd and hpdrx interrupts */
+	for (i = DC_IRQ_SOURCE_HPD1; i <= DC_IRQ_SOURCE_HPD6RX; i++) {
+		if (!dc_interrupt_set(adev->dm.dc, i, false))
+			drm_err(dev, "Failed to clear hpd(rx) source=%d on init\n",
+				i);
+	}
 
 	drm_connector_list_iter_begin(dev, &iter);
 	drm_for_each_connector_iter(connector, &iter) {
···
 
 		dc_link = amdgpu_dm_connector->dc_link;
 
+		/*
+		 * Get a base driver irq reference for hpd ints for the lifetime
+		 * of dm. Note that only hpd interrupt types are registered with
+		 * base driver; hpd_rx types aren't. IOW, amdgpu_irq_get/put on
+		 * hpd_rx isn't available. DM currently controls hpd_rx
+		 * explicitly with dc_interrupt_set()
+		 */
 		if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) {
-			dc_interrupt_set(adev->dm.dc,
-					 dc_link->irq_source_hpd,
-					 true);
+			irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1;
+			/*
+			 * TODO: There's a mismatch between mode_info.num_hpd
+			 * and what bios reports as the # of connectors with hpd
+			 * sources. Since the # of hpd source types registered
+			 * with base driver == mode_info.num_hpd, we have to
+			 * fallback to dc_interrupt_set for the remaining types.
+			 */
+			if (irq_type < adev->mode_info.num_hpd) {
+				if (amdgpu_irq_get(adev, &adev->hpd_irq, irq_type))
+					drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n",
+						dc_link->irq_source_hpd);
+			} else {
+				dc_interrupt_set(adev->dm.dc,
+						 dc_link->irq_source_hpd,
+						 true);
+			}
 		}
 
 		if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) {
···
 		}
 	}
 	drm_connector_list_iter_end(&iter);
-
-	/* Update reference counts for HPDs */
-	for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
-		if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
-			drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", i);
-	}
 }
 
 /**
···
 	struct drm_device *dev = adev_to_drm(adev);
 	struct drm_connector *connector;
 	struct drm_connector_list_iter iter;
-	int i;
+	int irq_type;
 
 	drm_connector_list_iter_begin(dev, &iter);
 	drm_for_each_connector_iter(connector, &iter) {
···
 
 		dc_link = amdgpu_dm_connector->dc_link;
 
 		if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) {
-			dc_interrupt_set(adev->dm.dc,
-					 dc_link->irq_source_hpd,
-					 false);
+			irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1;
+
+			/* TODO: See same TODO in amdgpu_dm_hpd_init() */
+			if (irq_type < adev->mode_info.num_hpd) {
+				if (amdgpu_irq_put(adev, &adev->hpd_irq, irq_type))
+					drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n",
+						dc_link->irq_source_hpd);
+			} else {
+				dc_interrupt_set(adev->dm.dc,
+						 dc_link->irq_source_hpd,
+						 false);
+			}
 		}
 
 		if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) {
···
 		}
 	}
 	drm_connector_list_iter_end(&iter);
-
-	/* Update reference counts for HPDs */
-	for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
-		if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
-			drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i);
-	}
 }
···
 				    NULL);
 }
 
-static int smu_v14_0_process_pending_interrupt(struct smu_context *smu)
-{
-	int ret = 0;
-
-	if (smu_cmn_feature_is_enabled(smu, SMU_FEATURE_ACDC_BIT))
-		ret = smu_v14_0_allow_ih_interrupt(smu);
-
-	return ret;
-}
-
 int smu_v14_0_enable_thermal_alert(struct smu_context *smu)
 {
 	int ret = 0;
···
 	if (ret)
 		return ret;
 
-	return smu_v14_0_process_pending_interrupt(smu);
+	return smu_v14_0_allow_ih_interrupt(smu);
 }
 
 int smu_v14_0_disable_thermal_alert(struct smu_context *smu)
···
 
 	if (mode != DRM_MODE_DPMS_ON)
 		mode = DRM_MODE_DPMS_OFF;
+
+	if (connector->dpms == mode)
+		goto out;
+
 	connector->dpms = mode;
 
 	crtc = connector->state->crtc;
+4
drivers/gpu/drm/drm_connector.c
···
  * callback. For atomic drivers the remapping to the "ACTIVE" property is
  * implemented in the DRM core.
  *
+ * On atomic drivers any DPMS setproperty ioctl where the value does not
+ * change is completely skipped, otherwise a full atomic commit will occur.
+ * On legacy drivers the exact behavior is driver specific.
+ *
  * Note that this property cannot be set through the MODE_ATOMIC ioctl,
  * userspace must use "ACTIVE" on the CRTC instead.
  *
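The documentation hunk above describes a "skip when unchanged" fast path: writing the current DPMS value no longer triggers a full atomic commit. A minimal sketch of that pattern, using a hypothetical `struct prop` setter (illustrative only, not DRM code):

```c
#include <assert.h>

/* Hypothetical property with a counter standing in for costly commits. */
struct prop {
	int value;
	int commits; /* how many times a "full commit" actually ran */
};

static void prop_set(struct prop *p, int value)
{
	if (p->value == value)
		return; /* no-change writes are skipped entirely */
	p->value = value;
	p->commits++; /* stands in for a full atomic commit */
}
```

The same shape appears in the drm_atomic_uapi.c hunk earlier, where an early `goto out` bypasses the commit path when `connector->dpms` already equals the requested mode.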
+8-8
drivers/gpu/drm/drm_panic_qr.rs
···545545 }546546 self.push(&mut offset, (MODE_STOP, 4));547547548548- let pad_offset = (offset + 7) / 8;548548+ let pad_offset = offset.div_ceil(8);549549 for i in pad_offset..self.version.max_data() {550550 self.data[i] = PADDING[(i & 1) ^ (pad_offset & 1)];551551 }···659659impl QrImage<'_> {660660 fn new<'a, 'b>(em: &'b EncodedMsg<'b>, qrdata: &'a mut [u8]) -> QrImage<'a> {661661 let width = em.version.width();662662- let stride = (width + 7) / 8;662662+ let stride = width.div_ceil(8);663663 let data = qrdata;664664665665 let mut qr_image = QrImage {···911911///912912/// * `url`: The base URL of the QR code. It will be encoded as Binary segment.913913/// * `data`: A pointer to the binary data, to be encoded. if URL is NULL, it914914-/// will be encoded as binary segment, otherwise it will be encoded915915-/// efficiently as a numeric segment, and appended to the URL.914914+/// will be encoded as binary segment, otherwise it will be encoded915915+/// efficiently as a numeric segment, and appended to the URL.916916/// * `data_len`: Length of the data, that needs to be encoded, must be less917917-/// than data_size.917917+/// than data_size.918918/// * `data_size`: Size of data buffer, it should be at least 4071 bytes to hold919919-/// a V40 QR code. It will then be overwritten with the QR code image.919919+/// a V40 QR code. It will then be overwritten with the QR code image.920920/// * `tmp`: A temporary buffer that the QR code encoder will use, to write the921921-/// segments and ECC.921921+/// segments and ECC.922922/// * `tmp_size`: Size of the temporary buffer, it must be at least 3706 bytes923923-/// long for V40.923923+/// long for V40.924924///925925/// # Safety926926///
+5
drivers/gpu/drm/gma500/mid_bios.c
···279279 0, PCI_DEVFN(2, 0));280280 int ret = -1;281281282282+ if (pci_gfx_root == NULL) {283283+ WARN_ON(1);284284+ return;285285+ }286286+282287 /* Get the address of the platform config vbt */283288 pci_read_config_dword(pci_gfx_root, 0xFC, &addr);284289 pci_dev_put(pci_gfx_root);
···7830783078317831 intel_program_dpkgc_latency(state);7832783278337833- if (state->modeset)78347834- intel_set_cdclk_post_plane_update(state);78357835-78367833 intel_wait_for_vblank_workers(state);7837783478387835 /* FIXME: We should call drm_atomic_helper_commit_hw_done() here···79037906 intel_verify_planes(state);7904790779057908 intel_sagv_post_plane_update(state);79097909+ if (state->modeset)79107910+ intel_set_cdclk_post_plane_update(state);79067911 intel_pmdemand_post_plane_update(state);7907791279087913 drm_atomic_helper_commit_hw_done(&state->base);
···164164 * 4 - Support multiple fault handlers per object depending on object's165165 * backing storage (a.k.a. MMAP_OFFSET).166166 *167167+ * 5 - Support multiple partial mmaps(mmap part of BO + unmap a offset, multiple168168+ * times with different size and offset).169169+ *167170 * Restrictions:168171 *169172 * * snoopable objects cannot be accessed via the GTT. It can cause machine···194191 */195192int i915_gem_mmap_gtt_version(void)196193{197197- return 4;194194+ return 5;198195}199196200197static inline struct i915_gtt_view
···55#define PVR_QUEUE_H6677#include <drm/gpu_scheduler.h>88+#include <linux/workqueue.h>89910#include "pvr_cccb.h"1011#include "pvr_device.h"···64636564 /** @queue: Queue that created this fence. */6665 struct pvr_queue *queue;6666+6767+ /** @release_work: Fence release work structure. */6868+ struct work_struct release_work;6769};68706971/**
+108-26
drivers/gpu/drm/imagination/pvr_vm.c
···293293294294static int295295pvr_vm_bind_op_unmap_init(struct pvr_vm_bind_op *bind_op,296296- struct pvr_vm_context *vm_ctx, u64 device_addr,297297- u64 size)296296+ struct pvr_vm_context *vm_ctx,297297+ struct pvr_gem_object *pvr_obj,298298+ u64 device_addr, u64 size)298299{299300 int err;300301···319318 goto err_bind_op_fini;320319 }321320321321+ bind_op->pvr_obj = pvr_obj;322322 bind_op->vm_ctx = vm_ctx;323323 bind_op->device_addr = device_addr;324324 bind_op->size = size;···600598}601599602600/**603603- * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context.604604- * @vm_ctx: Target VM context.605605- *606606- * This function ensures that no mappings are left dangling by unmapping them607607- * all in order of ascending device-virtual address.608608- */609609-void610610-pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx)611611-{612612- WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start,613613- vm_ctx->gpuvm_mgr.mm_range));614614-}615615-616616-/**617601 * pvr_vm_context_release() - Teardown a VM context.618602 * @ref_count: Pointer to reference counter of the VM context.619603 *···691703 struct pvr_vm_bind_op *bind_op = vm_exec->extra.priv;692704 struct pvr_gem_object *pvr_obj = bind_op->pvr_obj;693705694694- /* Unmap operations don't have an object to lock. */695695- if (!pvr_obj)696696- return 0;697697-698698- /* Acquire lock on the GEM being mapped. */706706+ /* Acquire lock on the GEM object being mapped/unmapped. 
*/699707 return drm_exec_lock_obj(&vm_exec->exec, gem_from_pvr_gem(pvr_obj));700708}701709···756772}757773758774/**759759- * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.775775+ * pvr_vm_unmap_obj_locked() - Unmap an already mapped section of device-virtual776776+ * memory.760777 * @vm_ctx: Target VM context.778778+ * @pvr_obj: Target PowerVR memory object.761779 * @device_addr: Virtual device address at the start of the target mapping.762780 * @size: Size of the target mapping.763781 *···770784 * * Any error encountered while performing internal operations required to771785 * destroy the mapping (returned from pvr_vm_gpuva_unmap or772786 * pvr_vm_gpuva_remap).787787+ *788788+ * The vm_ctx->lock must be held when calling this function.773789 */774774-int775775-pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)790790+static int791791+pvr_vm_unmap_obj_locked(struct pvr_vm_context *vm_ctx,792792+ struct pvr_gem_object *pvr_obj,793793+ u64 device_addr, u64 size)776794{777795 struct pvr_vm_bind_op bind_op = {0};778796 struct drm_gpuvm_exec vm_exec = {···789799 },790800 };791801792792- int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, device_addr,793793- size);802802+ int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, pvr_obj,803803+ device_addr, size);794804 if (err)795805 return err;806806+807807+ pvr_gem_object_get(pvr_obj);796808797809 err = drm_gpuvm_exec_lock(&vm_exec);798810 if (err)···808816 pvr_vm_bind_op_fini(&bind_op);809817810818 return err;819819+}820820+821821+/**822822+ * pvr_vm_unmap_obj() - Unmap an already mapped section of device-virtual823823+ * memory.824824+ * @vm_ctx: Target VM context.825825+ * @pvr_obj: Target PowerVR memory object.826826+ * @device_addr: Virtual device address at the start of the target mapping.827827+ * @size: Size of the target mapping.828828+ *829829+ * Return:830830+ * * 0 on success,831831+ * * Any error encountered by pvr_vm_unmap_obj_locked.832832+ 
*/833833+int834834+pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,835835+ u64 device_addr, u64 size)836836+{837837+ int err;838838+839839+ mutex_lock(&vm_ctx->lock);840840+ err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, device_addr, size);841841+ mutex_unlock(&vm_ctx->lock);842842+843843+ return err;844844+}845845+846846+/**847847+ * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.848848+ * @vm_ctx: Target VM context.849849+ * @device_addr: Virtual device address at the start of the target mapping.850850+ * @size: Size of the target mapping.851851+ *852852+ * Return:853853+ * * 0 on success,854854+ * * Any error encountered by drm_gpuva_find,855855+ * * Any error encountered by pvr_vm_unmap_obj_locked.856856+ */857857+int858858+pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)859859+{860860+ struct pvr_gem_object *pvr_obj;861861+ struct drm_gpuva *va;862862+ int err;863863+864864+ mutex_lock(&vm_ctx->lock);865865+866866+ va = drm_gpuva_find(&vm_ctx->gpuvm_mgr, device_addr, size);867867+ if (va) {868868+ pvr_obj = gem_to_pvr_gem(va->gem.obj);869869+ err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj,870870+ va->va.addr, va->va.range);871871+ } else {872872+ err = -ENOENT;873873+ }874874+875875+ mutex_unlock(&vm_ctx->lock);876876+877877+ return err;878878+}879879+880880+/**881881+ * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context.882882+ * @vm_ctx: Target VM context.883883+ *884884+ * This function ensures that no mappings are left dangling by unmapping them885885+ * all in order of ascending device-virtual address.886886+ */887887+void888888+pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx)889889+{890890+ mutex_lock(&vm_ctx->lock);891891+892892+ for (;;) {893893+ struct pvr_gem_object *pvr_obj;894894+ struct drm_gpuva *va;895895+896896+ va = drm_gpuva_find_first(&vm_ctx->gpuvm_mgr,897897+ vm_ctx->gpuvm_mgr.mm_start,898898+ vm_ctx->gpuvm_mgr.mm_range);899899+ if (!va)900900+ 
break;901901+902902+ pvr_obj = gem_to_pvr_gem(va->gem.obj);903903+904904+ WARN_ON(pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj,905905+ va->va.addr, va->va.range));906906+ }907907+908908+ mutex_unlock(&vm_ctx->lock);811909}812910813911/* Static data areas are determined by firmware. */
···165165 */166166extern int r300_init(struct radeon_device *rdev);167167extern void r300_fini(struct radeon_device *rdev);168168+extern void r300_gpu_init(struct radeon_device *rdev);168169extern int r300_suspend(struct radeon_device *rdev);169170extern int r300_resume(struct radeon_device *rdev);170171extern int r300_asic_reset(struct radeon_device *rdev, bool hard);
+16-2
drivers/gpu/drm/radeon/rs400.c
···256256257257static void rs400_gpu_init(struct radeon_device *rdev)258258{259259- /* FIXME: is this correct ? */260260- r420_pipes_init(rdev);259259+ /* Earlier code was calling r420_pipes_init and then260260+ * rs400_mc_wait_for_idle(rdev). The problem is that261261+ * at least on my Mobility Radeon Xpress 200M RC410 card262262+ * that ends up in this code path ends up num_gb_pipes == 3263263+ * while the card seems to have only one pipe. With the264264+ * r420 pipe initialization method.265265+ *266266+ * Problems shown up as HyperZ glitches, see:267267+ * https://bugs.freedesktop.org/show_bug.cgi?id=110897268268+ *269269+ * Delegating initialization to r300 code seems to work270270+ * and results in proper pipe numbers. The rs400 cards271271+ * are said to be not r400, but r300 kind of cards.272272+ */273273+ r300_gpu_init(rdev);274274+261275 if (rs400_mc_wait_for_idle(rdev)) {262276 pr_warn("rs400: Failed to wait MC idle while programming pipes. Bad things might happen. %08x\n",263277 RREG32(RADEON_MC_STATUS));
+2-2
drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
···2121 *2222 */23232424-#if !defined(_GPU_SCHED_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)2424+#if !defined(_GPU_SCHED_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)2525#define _GPU_SCHED_TRACE_H_26262727#include <linux/stringify.h>···106106 __entry->seqno)107107);108108109109-#endif109109+#endif /* _GPU_SCHED_TRACE_H_ */110110111111/* This part must be outside protection */112112#undef TRACE_INCLUDE_PATH
···194194 to_intel_plane(crtc->base.primary);195195 struct intel_plane_state *plane_state =196196 to_intel_plane_state(plane->base.state);197197- struct intel_crtc_state *crtc_state =198198- to_intel_crtc_state(crtc->base.state);199197 struct drm_framebuffer *fb;200198 struct i915_vma *vma;201199···239241 atomic_or(plane->frontbuffer_bit, &to_intel_frontbuffer(fb)->bits);240242241243 plane_config->vma = vma;242242-243243- /*244244- * Flip to the newly created mapping ASAP, so we can re-use the245245- * first part of GGTT for WOPCM, prevent flickering, and prevent246246- * the lookup of sysmem scratch pages.247247- */248248- plane->check_plane(crtc_state, plane_state);249249- plane->async_flip(NULL, plane, crtc_state, plane_state, true);250244 return;251245252246nofb:
···1111/**1212 * struct xe_ptw - base class for driver pagetable subclassing.1313 * @children: Pointer to an array of children if any.1414+ * @staging: Pointer to an array of staging if any.1415 *1516 * Drivers could subclass this, and if it's a page-directory, typically1617 * embed an array of xe_ptw pointers.1718 */1819struct xe_ptw {1920 struct xe_ptw **children;2121+ struct xe_ptw **staging;2022};21232224/**···4341 * as shared pagetables.4442 */4543 bool shared_pt_mode;4444+ /** @staging: Walk staging PT structure */4545+ bool staging;4646};47474848/**
+70-33
drivers/gpu/drm/xe/xe_vm.c
···579579 trace_xe_vm_rebind_worker_exit(vm);580580}581581582582-static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,583583- const struct mmu_notifier_range *range,584584- unsigned long cur_seq)582582+static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma)585583{586586- struct xe_userptr *userptr = container_of(mni, typeof(*userptr), notifier);587587- struct xe_userptr_vma *uvma = container_of(userptr, typeof(*uvma), userptr);584584+ struct xe_userptr *userptr = &uvma->userptr;588585 struct xe_vma *vma = &uvma->vma;589589- struct xe_vm *vm = xe_vma_vm(vma);590586 struct dma_resv_iter cursor;591587 struct dma_fence *fence;592588 long err;593593-594594- xe_assert(vm->xe, xe_vma_is_userptr(vma));595595- trace_xe_vma_userptr_invalidate(vma);596596-597597- if (!mmu_notifier_range_blockable(range))598598- return false;599599-600600- vm_dbg(&xe_vma_vm(vma)->xe->drm,601601- "NOTIFIER: addr=0x%016llx, range=0x%016llx",602602- xe_vma_start(vma), xe_vma_size(vma));603603-604604- down_write(&vm->userptr.notifier_lock);605605- mmu_interval_set_seq(mni, cur_seq);606606-607607- /* No need to stop gpu access if the userptr is not yet bound. 
*/608608- if (!userptr->initial_bind) {609609- up_write(&vm->userptr.notifier_lock);610610- return true;611611- }612589613590 /*614591 * Tell exec and rebind worker they need to repin and rebind this615592 * userptr.616593 */617594 if (!xe_vm_in_fault_mode(vm) &&618618- !(vma->gpuva.flags & XE_VMA_DESTROYED) && vma->tile_present) {595595+ !(vma->gpuva.flags & XE_VMA_DESTROYED)) {619596 spin_lock(&vm->userptr.invalidated_lock);620597 list_move_tail(&userptr->invalidate_link,621598 &vm->userptr.invalidated);622599 spin_unlock(&vm->userptr.invalidated_lock);623600 }624624-625625- up_write(&vm->userptr.notifier_lock);626601627602 /*628603 * Preempt fences turn into schedule disables, pipeline these.···616641 false, MAX_SCHEDULE_TIMEOUT);617642 XE_WARN_ON(err <= 0);618643619619- if (xe_vm_in_fault_mode(vm)) {644644+ if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) {620645 err = xe_vm_invalidate_vma(vma);621646 XE_WARN_ON(err);622647 }623648649649+ xe_hmm_userptr_unmap(uvma);650650+}651651+652652+static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,653653+ const struct mmu_notifier_range *range,654654+ unsigned long cur_seq)655655+{656656+ struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier);657657+ struct xe_vma *vma = &uvma->vma;658658+ struct xe_vm *vm = xe_vma_vm(vma);659659+660660+ xe_assert(vm->xe, xe_vma_is_userptr(vma));661661+ trace_xe_vma_userptr_invalidate(vma);662662+663663+ if (!mmu_notifier_range_blockable(range))664664+ return false;665665+666666+ vm_dbg(&xe_vma_vm(vma)->xe->drm,667667+ "NOTIFIER: addr=0x%016llx, range=0x%016llx",668668+ xe_vma_start(vma), xe_vma_size(vma));669669+670670+ down_write(&vm->userptr.notifier_lock);671671+ mmu_interval_set_seq(mni, cur_seq);672672+673673+ __vma_userptr_invalidate(vm, uvma);674674+ up_write(&vm->userptr.notifier_lock);624675 trace_xe_vma_userptr_invalidate_complete(vma);625676626677 return true;···655654static const struct mmu_interval_notifier_ops 
vma_userptr_notifier_ops = {656655 .invalidate = vma_userptr_invalidate,657656};657657+658658+#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)659659+/**660660+ * xe_vma_userptr_force_invalidate() - force invalidate a userptr661661+ * @uvma: The userptr vma to invalidate662662+ *663663+ * Perform a forced userptr invalidation for testing purposes.664664+ */665665+void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)666666+{667667+ struct xe_vm *vm = xe_vma_vm(&uvma->vma);668668+669669+ /* Protect against concurrent userptr pinning */670670+ lockdep_assert_held(&vm->lock);671671+ /* Protect against concurrent notifiers */672672+ lockdep_assert_held(&vm->userptr.notifier_lock);673673+ /*674674+ * Protect against concurrent instances of this function and675675+ * the critical exec sections676676+ */677677+ xe_vm_assert_held(vm);678678+679679+ if (!mmu_interval_read_retry(&uvma->userptr.notifier,680680+ uvma->userptr.notifier_seq))681681+ uvma->userptr.notifier_seq -= 2;682682+ __vma_userptr_invalidate(vm, uvma);683683+}684684+#endif658685659686int xe_vm_userptr_pin(struct xe_vm *vm)660687{···10411012 INIT_LIST_HEAD(&userptr->invalidate_link);10421013 INIT_LIST_HEAD(&userptr->repin_link);10431014 vma->gpuva.gem.offset = bo_offset_or_userptr;10151015+ mutex_init(&userptr->unmap_mutex);1044101610451017 err = mmu_interval_notifier_insert(&userptr->notifier,10461018 current->mm,···10831053 * them anymore10841054 */10851055 mmu_interval_notifier_remove(&userptr->notifier);10561056+ mutex_destroy(&userptr->unmap_mutex);10861057 xe_vm_put(vm);10871058 } else if (xe_vma_is_null(vma)) {10881059 xe_vm_put(vm);···18091778 args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE))18101779 return -EINVAL;1811178018121812- if (XE_IOCTL_DBG(xe, args->extensions))18131813- return -EINVAL;18141814-18151781 if (args->flags & DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE)18161782 flags |= XE_VM_FLAG_SCRATCH_PAGE;18171783 if (args->flags & DRM_XE_VM_CREATE_FLAG_LR_MODE)···23142286 break;23152287 
}23162288 case DRM_GPUVA_OP_UNMAP:22892289+ xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);22902290+ break;23172291 case DRM_GPUVA_OP_PREFETCH:23182318- /* FIXME: Need to skip some prefetch ops */22922292+ vma = gpuva_to_vma(op->base.prefetch.va);22932293+22942294+ if (xe_vma_is_userptr(vma)) {22952295+ err = xe_vma_userptr_pin_pages(to_userptr_vma(vma));22962296+ if (err)22972297+ return err;22982298+ }22992299+23192300 xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);23202301 break;23212302 default:
···5959 struct sg_table *sg;6060 /** @notifier_seq: notifier sequence number */6161 unsigned long notifier_seq;6262+ /** @unmap_mutex: Mutex protecting dma-unmapping */6363+ struct mutex unmap_mutex;6264 /**6365 * @initial_bind: user pointer has been bound at least once.6466 * write: vm->userptr.notifier_lock in read mode and vm->resv held.6567 * read: vm->userptr.notifier_lock in write mode or vm->resv held.6668 */6769 bool initial_bind;7070+ /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */7171+ bool mapped;6872#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)6973 u32 divisor;7074#endif···231227 * up for revalidation. Protected from access with the232228 * @invalidated_lock. Removing items from the list233229 * additionally requires @lock in write mode, and adding234234- * items to the list requires the @userptr.notifer_lock in235235- * write mode.230230+ * items to the list requires either the @userptr.notifer_lock in231231+ * write mode, OR @lock in write mode.236232 */237233 struct list_head invalidated;238234 } userptr;
···290290 ihid->rawbuf, recv_len + sizeof(__le16));291291 if (error) {292292 dev_err(&ihid->client->dev,293293- "failed to set a report to device: %d\n", error);293293+ "failed to get a report from device: %d\n", error);294294 return error;295295 }296296
···22622262 struct resource *iter;2263226322642264 mutex_lock(&hyperv_mmio_lock);22652265+22662266+ /*22672267+ * If all bytes of the MMIO range to be released are within the22682268+ * special case fb_mmio shadow region, skip releasing the shadow22692269+ * region since no corresponding __request_region() was done22702270+ * in vmbus_allocate_mmio().22712271+ */22722272+ if (fb_mmio && start >= fb_mmio->start &&22732273+ (start + size - 1 <= fb_mmio->end))22742274+ goto skip_shadow_release;22752275+22652276 for (iter = hyperv_mmio; iter; iter = iter->sibling) {22662277 if ((iter->start >= start + size) || (iter->end <= start))22672278 continue;2268227922692280 __release_region(iter, start, size);22702281 }22822282+22832283+skip_shadow_release:22712284 release_mem_region(start, size);22722285 mutex_unlock(&hyperv_mmio_lock);22732286
+10
drivers/hwmon/ad7314.c
···2222 */2323#define AD7314_TEMP_MASK 0x7FE02424#define AD7314_TEMP_SHIFT 52525+#define AD7314_LEADING_ZEROS_MASK BIT(15)25262627/*2728 * ADT7301 and ADT7302 temperature masks2829 */2930#define ADT7301_TEMP_MASK 0x3FFF3131+#define ADT7301_LEADING_ZEROS_MASK (BIT(15) | BIT(14))30323133enum ad7314_variant {3234 adt7301,···6765 return ret;6866 switch (spi_get_device_id(chip->spi_dev)->driver_data) {6967 case ad7314:6868+ if (ret & AD7314_LEADING_ZEROS_MASK) {6969+ /* Invalid read-out, leading zero part is missing */7070+ return -EIO;7171+ }7072 data = (ret & AD7314_TEMP_MASK) >> AD7314_TEMP_SHIFT;7173 data = sign_extend32(data, 9);72747375 return sprintf(buf, "%d\n", 250 * data);7476 case adt7301:7577 case adt7302:7878+ if (ret & ADT7301_LEADING_ZEROS_MASK) {7979+ /* Invalid read-out, leading zero part is missing */8080+ return -EIO;8181+ }7682 /*7783 * Documented as a 13 bit twos complement register7884 * with a sign bit - which is a 14 bit 2's complement
···127127 return 0;128128129129 ret = priv->gen_info->read_thresholds(priv, dimm_order, chan_rank, &data);130130- if (ret == -ENODATA) /* Use default or previous value */131131- return 0;132130 if (ret)133131 return ret;134132···507509508510 ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd4, ®_val);509511 if (ret || !(reg_val & BIT(31)))510510- return -ENODATA; /* Use default or previous value */512512+ return -ENODATA;511513512514 ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd0, ®_val);513515 if (ret)514514- return -ENODATA; /* Use default or previous value */516516+ return -ENODATA;515517516518 /*517519 * Device 26, Offset 224e0: IMC 0 channel 0 -> rank 0···544546545547 ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd4, ®_val);546548 if (ret || !(reg_val & BIT(31)))547547- return -ENODATA; /* Use default or previous value */549549+ return -ENODATA;548550549551 ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd0, ®_val);550552 if (ret)551551- return -ENODATA; /* Use default or previous value */553553+ return -ENODATA;552554553555 /*554556 * Device 26, Offset 219a8: IMC 0 channel 0 -> rank 0
+2
drivers/hwmon/pmbus/pmbus.c
···103103 if (pmbus_check_byte_register(client, 0, PMBUS_PAGE)) {104104 int page;105105106106+ info->pages = PMBUS_PAGES;107107+106108 for (page = 1; page < PMBUS_PAGES; page++) {107109 if (pmbus_set_page(client, page, 0xff) < 0)108110 break;
+1-1
drivers/hwmon/xgene-hwmon.c
···706706 goto out;707707 }708708709709- if (!ctx->pcc_comm_addr) {709709+ if (IS_ERR_OR_NULL(ctx->pcc_comm_addr)) {710710 dev_err(&pdev->dev,711711 "Failed to ioremap PCC comm region\n");712712 rc = -ENOMEM;
+11-2
drivers/hwtracing/intel_th/msu.c
···105105106106/**107107 * struct msc - MSC device representation108108- * @reg_base: register window base address108108+ * @reg_base: register window base address for the entire MSU109109+ * @msu_base: register window base address for this MSC109110 * @thdev: intel_th_device pointer110111 * @mbuf: MSU buffer, if assigned111111- * @mbuf_priv MSU buffer's private data, if @mbuf112112+ * @mbuf_priv: MSU buffer's private data, if @mbuf113113+ * @work: a work to stop the trace when the buffer is full112114 * @win_list: list of windows in multiblock mode113115 * @single_sgt: single mode buffer114116 * @cur_win: current window117117+ * @switch_on_unlock: window to switch to when it becomes available115118 * @nr_pages: total number of pages allocated for this buffer116119 * @single_sz: amount of data in single mode117120 * @single_wrap: single mode wrap occurred118121 * @base: buffer's base pointer119122 * @base_addr: buffer's base address123123+ * @orig_addr: MSC0 buffer's base address124124+ * @orig_sz: MSC0 buffer's size120125 * @user_count: number of users of the buffer121126 * @mmap_count: number of mappings122127 * @buf_mutex: mutex to serialize access to buffer-related bits128128+ * @iter_list: list of open file descriptor iterators129129+ * @stop_on_full: stop the trace if the current window is full123130 * @enabled: MSC is enabled124131 * @wrap: wrapping is enabled132132+ * @do_irq: IRQ resource is available, handle interrupts133133+ * @multi_is_broken: multiblock mode enabled (not disabled by PCI drvdata)125134 * @mode: MSC operating mode126135 * @burst_len: write burst length127136 * @index: number of this MSC in the MSU
···117117118118#define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */119119120120+#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */121121+120122/*121123 * MEI HW Section122124 */
···502502 if (ret)503503 return ret;504504505505- tp->wakeuphost = devm_gpiod_get(dev, "wakeuphost", GPIOD_IN);505505+ tp->wakeuphost = devm_gpiod_get(dev, "wakeuphostint", GPIOD_IN);506506 if (IS_ERR(tp->wakeuphost))507507 return PTR_ERR(tp->wakeuphost);508508
+4-3
drivers/misc/ntsync.c
···873873{874874 int fds[NTSYNC_MAX_WAIT_COUNT + 1];875875 const __u32 count = args->count;876876+ size_t size = array_size(count, sizeof(fds[0]));876877 struct ntsync_q *q;877878 __u32 total_count;878879 __u32 i, j;···881880 if (args->pad || (args->flags & ~NTSYNC_WAIT_REALTIME))882881 return -EINVAL;883882884884- if (args->count > NTSYNC_MAX_WAIT_COUNT)883883+ if (size >= sizeof(fds))885884 return -EINVAL;886885887886 total_count = count;888887 if (args->alert)889888 total_count++;890889891891- if (copy_from_user(fds, u64_to_user_ptr(args->objs),892892- array_size(count, sizeof(*fds))))890890+ if (copy_from_user(fds, u64_to_user_ptr(args->objs), size))893891 return -EFAULT;894892 if (args->alert)895893 fds[count] = args->alert;···12081208 .minor = MISC_DYNAMIC_MINOR,12091209 .name = NTSYNC_NAME,12101210 .fops = &ntsync_fops,12111211+ .mode = 0666,12111212};1212121312131214module_misc_device(ntsync_misc);
+47-8
drivers/net/bonding/bond_options.c
···12421242 slave->dev->flags & IFF_MULTICAST;12431243}1244124412451245+/**12461246+ * slave_set_ns_maddrs - add/del all NS mac addresses for slave12471247+ * @bond: bond device12481248+ * @slave: slave device12491249+ * @add: add or remove all the NS mac addresses12501250+ *12511251+ * This function tries to add or delete all the NS mac addresses on the slave12521252+ *12531253+ * Note, the IPv6 NS target address is the unicast address in Neighbor12541254+ * Solicitation (NS) message. The dest address of NS message should be12551255+ * solicited-node multicast address of the target. The dest mac of NS message12561256+ * is converted from the solicited-node multicast address.12571257+ *12581258+ * This function is called when12591259+ * * arp_validate changes12601260+ * * enslaving, releasing new slaves12611261+ */12451262static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool add)12461263{12471264 struct in6_addr *targets = bond->params.ns_targets;12481265 char slot_maddr[MAX_ADDR_LEN];12661266+ struct in6_addr mcaddr;12491267 int i;1250126812511269 if (!slave_can_set_ns_maddr(bond, slave))···12731255 if (ipv6_addr_any(&targets[i]))12741256 break;1275125712761276- if (!ndisc_mc_map(&targets[i], slot_maddr, slave->dev, 0)) {12581258+ addrconf_addr_solict_mult(&targets[i], &mcaddr);12591259+ if (!ndisc_mc_map(&mcaddr, slot_maddr, slave->dev, 0)) {12771260 if (add)12781261 dev_mc_add(slave->dev, slot_maddr);12791262 else···12971278 slave_set_ns_maddrs(bond, slave, false);12981279}1299128012811281+/**12821282+ * slave_set_ns_maddr - set new NS mac address for slave12831283+ * @bond: bond device12841284+ * @slave: slave device12851285+ * @target: the new IPv6 target12861286+ * @slot: the old IPv6 target in the slot12871287+ *12881288+ * This function tries to replace the old mac address to new one on the slave.12891289+ *12901290+ * Note, the target/slot IPv6 address is the unicast address in Neighbor12911291+ * Solicitation (NS) message. 
The dest address of NS message should be12921292+ * solicited-node multicast address of the target. The dest mac of NS message12931293+ * is converted from the solicited-node multicast address.12941294+ *12951295+ * This function is called when12961296+ * * An IPv6 NS target is added or removed.12971297+ */13001298static void slave_set_ns_maddr(struct bonding *bond, struct slave *slave,13011299 struct in6_addr *target, struct in6_addr *slot)13021300{13031303- char target_maddr[MAX_ADDR_LEN], slot_maddr[MAX_ADDR_LEN];13011301+ char mac_addr[MAX_ADDR_LEN];13021302+ struct in6_addr mcast_addr;1304130313051304 if (!bond->params.arp_validate || !slave_can_set_ns_maddr(bond, slave))13061305 return;1307130613081308- /* remove the previous maddr from slave */13071307+ /* remove the previous mac addr from slave */13081308+ addrconf_addr_solict_mult(slot, &mcast_addr);13091309 if (!ipv6_addr_any(slot) &&13101310- !ndisc_mc_map(slot, slot_maddr, slave->dev, 0))13111311- dev_mc_del(slave->dev, slot_maddr);13101310+ !ndisc_mc_map(&mcast_addr, mac_addr, slave->dev, 0))13111311+ dev_mc_del(slave->dev, mac_addr);1312131213131313- /* add new maddr on slave if target is set */13131313+ /* add new mac addr on slave if target is set */13141314+ addrconf_addr_solict_mult(target, &mcast_addr);13141315 if (!ipv6_addr_any(target) &&13151315- !ndisc_mc_map(target, target_maddr, slave->dev, 0))13161316- dev_mc_add(slave->dev, target_maddr);13161316+ !ndisc_mc_map(&mcast_addr, mac_addr, slave->dev, 0))13171317+ dev_mc_add(slave->dev, mac_addr);13171318}1318131913191320static void _bond_options_ns_ip6_target_set(struct bonding *bond, int slot,
+1-1
drivers/net/caif/caif_virtio.c
···745745746746 if (cfv->vr_rx)747747 vdev->vringh_config->del_vrhs(cfv->vdev);748748- if (cfv->vdev)748748+ if (cfv->vq_tx)749749 vdev->config->del_vqs(cfv->vdev);750750 free_netdev(netdev);751751 return err;
···
 		return err;
 }

-static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
-					const unsigned char *addr, u16 vid,
-					u8 state)
+static int mv88e6xxx_port_db_get(struct mv88e6xxx_chip *chip,
+				 const unsigned char *addr, u16 vid,
+				 u16 *fid, struct mv88e6xxx_atu_entry *entry)
 {
-	struct mv88e6xxx_atu_entry entry;
 	struct mv88e6xxx_vtu_entry vlan;
-	u16 fid;
 	int err;

 	/* Ports have two private address databases: one for when the port is
···
 	 * VLAN ID into the port's database used for VLAN-unaware bridging.
 	 */
 	if (vid == 0) {
-		fid = MV88E6XXX_FID_BRIDGED;
+		*fid = MV88E6XXX_FID_BRIDGED;
 	} else {
 		err = mv88e6xxx_vtu_get(chip, vid, &vlan);
 		if (err)
···
 		if (!vlan.valid)
 			return -EOPNOTSUPP;

-		fid = vlan.fid;
+		*fid = vlan.fid;
 	}

-	entry.state = 0;
-	ether_addr_copy(entry.mac, addr);
-	eth_addr_dec(entry.mac);
+	entry->state = 0;
+	ether_addr_copy(entry->mac, addr);
+	eth_addr_dec(entry->mac);

-	err = mv88e6xxx_g1_atu_getnext(chip, fid, &entry);
+	return mv88e6xxx_g1_atu_getnext(chip, *fid, entry);
+}
+
+static bool mv88e6xxx_port_db_find(struct mv88e6xxx_chip *chip,
+				   const unsigned char *addr, u16 vid)
+{
+	struct mv88e6xxx_atu_entry entry;
+	u16 fid;
+	int err;
+
+	err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry);
+	if (err)
+		return false;
+
+	return entry.state && ether_addr_equal(entry.mac, addr);
+}
+
+static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
+					const unsigned char *addr, u16 vid,
+					u8 state)
+{
+	struct mv88e6xxx_atu_entry entry;
+	u16 fid;
+	int err;
+
+	err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry);
 	if (err)
 		return err;
···
 	mv88e6xxx_reg_lock(chip);
 	err = mv88e6xxx_port_db_load_purge(chip, port, addr, vid,
 					   MV88E6XXX_G1_ATU_DATA_STATE_UC_STATIC);
+	if (err)
+		goto out;
+
+	if (!mv88e6xxx_port_db_find(chip, addr, vid))
+		err = -ENOSPC;
+
+out:
 	mv88e6xxx_reg_unlock(chip);

 	return err;
···
 	mv88e6xxx_reg_lock(chip);
 	err = mv88e6xxx_port_db_load_purge(chip, port, mdb->addr, mdb->vid,
 					   MV88E6XXX_G1_ATU_DATA_STATE_MC_STATIC);
+	if (err)
+		goto out;
+
+	if (!mv88e6xxx_port_db_find(chip, mdb->addr, mdb->vid))
+		err = -ENOSPC;
+
+out:
 	mv88e6xxx_reg_unlock(chip);

 	return err;
+1-1
drivers/net/dsa/realtek/Kconfig
···
 	  Select to enable support for Realtek RTL8366RB.

 config NET_DSA_REALTEK_RTL8366RB_LEDS
-	bool "Support RTL8366RB LED control"
+	bool
 	depends on (LEDS_CLASS=y || LEDS_CLASS=NET_DSA_REALTEK_RTL8366RB)
 	depends on NET_DSA_REALTEK_RTL8366RB
 	default NET_DSA_REALTEK_RTL8366RB
···
 }

 /**
+ * ice_lag_config_eswitch - configure eswitch to work with LAG
+ * @lag: lag info struct
+ * @netdev: active network interface device struct
+ *
+ * Updates all port representors in eswitch to use @netdev for Tx.
+ *
+ * Configures the netdev to keep dst metadata (also used in representor Tx).
+ * This is required for an uplink without switchdev mode configured.
+ */
+static void ice_lag_config_eswitch(struct ice_lag *lag,
+				   struct net_device *netdev)
+{
+	struct ice_repr *repr;
+	unsigned long id;
+
+	xa_for_each(&lag->pf->eswitch.reprs, id, repr)
+		repr->dst->u.port_info.lower_dev = netdev;
+
+	netif_keep_dst(netdev);
+}
+
+/**
  * ice_lag_unlink - handle unlink event
  * @lag: LAG info struct
  */
···
 			ice_lag_move_vf_nodes(lag, act_port, pri_port);
 		lag->primary = false;
 		lag->active_port = ICE_LAG_INVALID_PORT;
+
+		/* Config primary's eswitch back to normal operation. */
+		ice_lag_config_eswitch(lag, lag->netdev);
 	} else {
 		struct ice_lag *primary_lag;
···
 				ice_lag_move_vf_nodes(lag, prim_port,
 						      event_port);
 			lag->active_port = event_port;
+			ice_lag_config_eswitch(lag, event_netdev);
 			return;
 		}
···
 		/* new active port */
 		ice_lag_move_vf_nodes(lag, lag->active_port, event_port);
 		lag->active_port = event_port;
+		ice_lag_config_eswitch(lag, event_netdev);
 	} else {
 		/* port not set as currently active (e.g. new active port
 		 * has already claimed the nodes and filters
-18
drivers/net/ethernet/intel/ice/ice_lib.c
···
 }

 /**
- * ice_vsi_ctx_set_allow_override - allow destination override on VSI
- * @ctx: pointer to VSI ctx structure
- */
-void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx)
-{
-	ctx->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD;
-}
-
-/**
- * ice_vsi_ctx_clear_allow_override - turn off destination override on VSI
- * @ctx: pointer to VSI ctx structure
- */
-void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx)
-{
-	ctx->info.sec_flags &= ~ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD;
-}
-
-/**
  * ice_vsi_update_local_lb - update sw block in VSI with local loopback bit
  * @vsi: pointer to VSI structure
  * @set: set or unset the bit
···
 	return irq->map.index;
 }

+struct mlx5_irq_pool *mlx5_irq_get_pool(struct mlx5_irq *irq)
+{
+	return irq->pool;
+}
+
 /* irq_pool API */

 /* requesting an irq from a given pool according to given index */
···
 	return irq_table->sf_ctrl_pool;
 }

-static struct mlx5_irq_pool *sf_irq_pool_get(struct mlx5_irq_table *irq_table)
+static struct mlx5_irq_pool *
+sf_comp_irq_pool_get(struct mlx5_irq_table *irq_table)
 {
 	return irq_table->sf_comp_pool;
 }

-struct mlx5_irq_pool *mlx5_irq_pool_get(struct mlx5_core_dev *dev)
+struct mlx5_irq_pool *
+mlx5_irq_table_get_comp_irq_pool(struct mlx5_core_dev *dev)
 {
 	struct mlx5_irq_table *irq_table = mlx5_irq_table_get(dev);
 	struct mlx5_irq_pool *pool = NULL;

 	if (mlx5_core_is_sf(dev))
-		pool = sf_irq_pool_get(irq_table);
+		pool = sf_comp_irq_pool_get(irq_table);

 	/* In some configs, there won't be a pool of SFs IRQs. Hence, returning
 	 * the PF IRQs pool in case the SF pool doesn't exist.
···
 static void mana_cleanup_port_context(struct mana_port_context *apc)
 {
 	/*
-	 * at this point all dir/files under the vport directory
-	 * are already cleaned up.
-	 * We are sure the apc->mana_port_debugfs remove will not
-	 * cause any freed memory access issues
+	 * make sure subsequent cleanup attempts don't end up removing already
+	 * cleaned dentry pointer
 	 */
 	debugfs_remove(apc->mana_port_debugfs);
+	apc->mana_port_debugfs = NULL;
 	kfree(apc->rxqs);
 	apc->rxqs = NULL;
 }
···
 		return;

 	debugfs_remove_recursive(ac->mana_eqs_debugfs);
+	ac->mana_eqs_debugfs = NULL;

 	for (i = 0; i < gc->max_num_queues; i++) {
 		eq = ac->eqs[i].eq;
···

 	for (i = 0; i < apc->num_queues; i++) {
 		debugfs_remove_recursive(apc->tx_qp[i].mana_tx_debugfs);
+		apc->tx_qp[i].mana_tx_debugfs = NULL;

 		napi = &apc->tx_qp[i].tx_cq.napi;
 		if (apc->tx_qp[i].txq.napi_initialized) {
···
 		return;

 	debugfs_remove_recursive(rxq->mana_rx_debugfs);
+	rxq->mana_rx_debugfs = NULL;

 	napi = &rxq->rx_cq.napi;
···
 #define PPP_PROTO_LEN	2
 #define PPP_LCP_HDRLEN	4

+/* The filter instructions generated by libpcap are constructed
+ * assuming a four-byte PPP header on each packet, where the last
+ * 2 bytes are the protocol field defined in the RFC and the first
+ * byte of the first 2 bytes indicates the direction.
+ * The second byte is currently unused, but we still need to initialize
+ * it to prevent crafted BPF programs from reading them which would
+ * cause reading of uninitialized data.
+ */
+#define PPP_FILTER_OUTBOUND_TAG 0x0100
+#define PPP_FILTER_INBOUND_TAG  0x0000
+
 /*
  * An instance of /dev/ppp can be associated with either a ppp
  * interface unit or a ppp channel.  In both cases, file->private_data
···

 	if (proto < 0x8000) {
 #ifdef CONFIG_PPP_FILTER
-		/* check if we should pass this packet */
-		/* the filter instructions are constructed assuming
-		   a four-byte PPP header on each packet */
-		*(u8 *)skb_push(skb, 2) = 1;
+		/* check if the packet passes the pass and active filters.
+		 * See comment for PPP_FILTER_OUTBOUND_TAG above.
+		 */
+		*(__be16 *)skb_push(skb, 2) = htons(PPP_FILTER_OUTBOUND_TAG);
 		if (ppp->pass_filter &&
 		    bpf_prog_run(ppp->pass_filter, skb) == 0) {
 			if (ppp->debug & 1)
···
 	/* network protocol frame - give it to the kernel */

 #ifdef CONFIG_PPP_FILTER
-	/* check if the packet passes the pass and active filters */
-	/* the filter instructions are constructed assuming
-	   a four-byte PPP header on each packet */
 	if (ppp->pass_filter || ppp->active_filter) {
 		if (skb_unclone(skb, GFP_ATOMIC))
 			goto err;
-
-		*(u8 *)skb_push(skb, 2) = 0;
+		/* Check if the packet passes the pass and active filters.
+		 * See comment for PPP_FILTER_INBOUND_TAG above.
+		 */
+		*(__be16 *)skb_push(skb, 2) = htons(PPP_FILTER_INBOUND_TAG);
 		if (ppp->pass_filter &&
 		    bpf_prog_run(ppp->pass_filter, skb) == 0) {
 			if (ppp->debug & 1)
+2-2
drivers/net/usb/lan78xx.c
···

 	kfree(buf);

-	return ret;
+	return ret < 0 ? ret : 0;
 }

 static int lan78xx_write_reg(struct lan78xx_net *dev, u32 index, u32 data)
···

 	kfree(buf);

-	return ret;
+	return ret < 0 ? ret : 0;
 }

 static int lan78xx_update_reg(struct lan78xx_net *dev, u32 reg, u32 mask,
···
 	ieee80211_resume_disconnect(vif);
 }

-static bool iwl_mvm_check_rt_status(struct iwl_mvm *mvm,
-				    struct ieee80211_vif *vif)
+enum rt_status {
+	FW_ALIVE,
+	FW_NEEDS_RESET,
+	FW_ERROR,
+};
+
+static enum rt_status iwl_mvm_check_rt_status(struct iwl_mvm *mvm,
+					      struct ieee80211_vif *vif)
 {
 	u32 err_id;
···
 	if (iwl_fwrt_read_err_table(mvm->trans,
 				    mvm->trans->dbg.lmac_error_event_table[0],
 				    &err_id)) {
-		if (err_id == RF_KILL_INDICATOR_FOR_WOWLAN && vif) {
-			struct cfg80211_wowlan_wakeup wakeup = {
-				.rfkill_release = true,
-			};
-			ieee80211_report_wowlan_wakeup(vif, &wakeup,
-						       GFP_KERNEL);
+		if (err_id == RF_KILL_INDICATOR_FOR_WOWLAN) {
+			IWL_WARN(mvm, "Rfkill was toggled during suspend\n");
+			if (vif) {
+				struct cfg80211_wowlan_wakeup wakeup = {
+					.rfkill_release = true,
+				};
+
+				ieee80211_report_wowlan_wakeup(vif, &wakeup,
+							       GFP_KERNEL);
+			}
+
+			return FW_NEEDS_RESET;
 		}
-		return true;
+		return FW_ERROR;
 	}

 	/* check if we have lmac2 set and check for error */
 	if (iwl_fwrt_read_err_table(mvm->trans,
 				    mvm->trans->dbg.lmac_error_event_table[1],
 				    NULL))
-		return true;
+		return FW_ERROR;

 	/* check for umac error */
 	if (iwl_fwrt_read_err_table(mvm->trans,
 				    mvm->trans->dbg.umac_error_event_table,
 				    NULL))
-		return true;
+		return FW_ERROR;

-	return false;
+	return FW_ALIVE;
 }

 /*
···
 	bool d0i3_first = fw_has_capa(&mvm->fw->ucode_capa,
 				      IWL_UCODE_TLV_CAPA_D0I3_END_FIRST);
 	bool resume_notif_based = iwl_mvm_d3_resume_notif_based(mvm);
+	enum rt_status rt_status;
 	bool keep = false;

 	mutex_lock(&mvm->mutex);
···

 	iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt);

-	if (iwl_mvm_check_rt_status(mvm, vif)) {
-		IWL_ERR(mvm, "FW Error occurred during suspend. Restarting.\n");
+	rt_status = iwl_mvm_check_rt_status(mvm, vif);
+	if (rt_status != FW_ALIVE) {
 		set_bit(STATUS_FW_ERROR, &mvm->trans->status);
-		iwl_mvm_dump_nic_error_log(mvm);
-		iwl_dbg_tlv_time_point(&mvm->fwrt,
-				       IWL_FW_INI_TIME_POINT_FW_ASSERT, NULL);
-		iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert,
-					false, 0);
+		if (rt_status == FW_ERROR) {
+			IWL_ERR(mvm, "FW Error occurred during suspend. Restarting.\n");
+			iwl_mvm_dump_nic_error_log(mvm);
+			iwl_dbg_tlv_time_point(&mvm->fwrt,
+					       IWL_FW_INI_TIME_POINT_FW_ASSERT,
+					       NULL);
+			iwl_fw_dbg_collect_desc(&mvm->fwrt,
+						&iwl_dump_desc_assert,
+						false, 0);
+		}
 		ret = 1;
 		goto err;
 	}
···
 		.notif_expected =
 			IWL_D3_NOTIF_D3_END_NOTIF,
 	};
+	enum rt_status rt_status;
 	int ret;

 	lockdep_assert_held(&mvm->mutex);
···
 	mvm->last_reset_or_resume_time_jiffies = jiffies;
 	iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt);

-	if (iwl_mvm_check_rt_status(mvm, NULL)) {
-		IWL_ERR(mvm, "FW Error occurred during suspend. Restarting.\n");
+	rt_status = iwl_mvm_check_rt_status(mvm, NULL);
+	if (rt_status != FW_ALIVE) {
 		set_bit(STATUS_FW_ERROR, &mvm->trans->status);
-		iwl_mvm_dump_nic_error_log(mvm);
-		iwl_dbg_tlv_time_point(&mvm->fwrt,
-				       IWL_FW_INI_TIME_POINT_FW_ASSERT, NULL);
-		iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert,
-					false, 0);
+		if (rt_status == FW_ERROR) {
+			IWL_ERR(mvm,
+				"iwl_mvm_check_rt_status failed, device is gone during suspend\n");
+			iwl_mvm_dump_nic_error_log(mvm);
+			iwl_dbg_tlv_time_point(&mvm->fwrt,
+					       IWL_FW_INI_TIME_POINT_FW_ASSERT,
+					       NULL);
+			iwl_fw_dbg_collect_desc(&mvm->fwrt,
+						&iwl_dump_desc_assert,
+						false, 0);
+		}
 		mvm->trans->state = IWL_TRANS_NO_FW;
 		ret = -ENODEV;
+7
drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
···
 	if (mvm->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_9000)
 		return -EOPNOTSUPP;

+	/*
+	 * If the firmware is not running, silently succeed since there is
+	 * no data to clear.
+	 */
+	if (!iwl_mvm_firmware_running(mvm))
+		return count;
+
 	mutex_lock(&mvm->mutex);
 	iwl_fw_dbg_clear_monitor_buf(&mvm->fwrt);
 	mutex_unlock(&mvm->mutex);
+3-3
drivers/net/wireless/intel/iwlwifi/mvm/fw.c
···
 // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
 /*
- * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2025 Intel Corporation
  * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
  * Copyright (C) 2016-2017 Intel Deutschland GmbH
  */
···
 	/* if reached this point, Alive notification was received */
 	iwl_mei_alive_notif(true);

+	iwl_trans_fw_alive(mvm->trans, alive_data.scd_base_addr);
+
 	ret = iwl_pnvm_load(mvm->trans, &mvm->notif_wait,
 			    &mvm->fw->ucode_capa);
 	if (ret) {
···
 		iwl_fw_set_current_image(&mvm->fwrt, old_type);
 		return ret;
 	}
-
-	iwl_trans_fw_alive(mvm->trans, alive_data.scd_base_addr);

 	/*
 	 * Note: all the queues are enabled as part of the interface
···
 		/* End TE, notify mac80211 */
 		mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID;
 		mvmvif->time_event_data.link_id = -1;
+		/* set the bit so the ROC cleanup will actually clean up */
+		set_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status);
 		iwl_mvm_roc_finished(mvm);
 		ieee80211_remain_on_channel_expired(mvm->hw);
 	} else if (le32_to_cpu(notif->start)) {
···

 static inline void __nvme_end_req(struct request *req)
 {
+	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) {
+		if (blk_rq_is_passthrough(req))
+			nvme_log_err_passthru(req);
+		else
+			nvme_log_error(req);
+	}
 	nvme_end_req_zoned(req);
 	nvme_trace_bio_complete(req);
 	if (req->cmd_flags & REQ_NVME_MPATH)
···
 {
 	blk_status_t status = nvme_error_status(nvme_req(req)->status);

-	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) {
-		if (blk_rq_is_passthrough(req))
-			nvme_log_err_passthru(req);
-		else
-			nvme_log_error(req);
-	}
 	__nvme_end_req(req);
 	blk_mq_end_request(req, status);
 }
+8-4
drivers/nvme/host/ioctl.c
···
 	if (!nvme_ctrl_sgl_supported(ctrl))
 		dev_warn_once(ctrl->device, "using unchecked data buffer\n");
 	if (has_metadata) {
-		if (!supports_metadata)
-			return -EINVAL;
+		if (!supports_metadata) {
+			ret = -EINVAL;
+			goto out;
+		}
 		if (!nvme_ctrl_meta_sgl_supported(ctrl))
 			dev_warn_once(ctrl->device,
 				      "using unchecked metadata buffer\n");
···
 		struct iov_iter iter;

 		/* fixedbufs is only for non-vectored io */
-		if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC))
-			return -EINVAL;
+		if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) {
+			ret = -EINVAL;
+			goto out;
+		}
 		ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
 				rq_data_dir(req), &iter, ioucmd);
 		if (ret < 0)
+28-11
drivers/nvme/host/pci.c
···

 	trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail);
 	if (!nvme_try_complete_req(req, cqe->status, cqe->result) &&
-	    !blk_mq_add_to_batch(req, iob, nvme_req(req)->status,
-				 nvme_pci_complete_batch))
+	    !blk_mq_add_to_batch(req, iob,
+				 nvme_req(req)->status != NVME_SC_SUCCESS,
+				 nvme_pci_complete_batch))
 		nvme_pci_complete_rq(req);
 }
···
 	struct nvme_dev *dev = nvmeq->dev;
 	struct request *abort_req;
 	struct nvme_command cmd = { };
+	struct pci_dev *pdev = to_pci_dev(dev->dev);
 	u32 csts = readl(dev->bar + NVME_REG_CSTS);
 	u8 opcode;

+	/*
+	 * Shutdown the device immediately if we see it is disconnected. This
+	 * unblocks PCIe error handling if the nvme driver is waiting in
+	 * error_resume for a device that has been removed. We can't unbind the
+	 * driver while the driver's error callback is waiting to complete, so
+	 * we're relying on a timeout to break that deadlock if a removal
+	 * occurs while reset work is running.
+	 */
+	if (pci_dev_is_disconnected(pdev))
+		nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
 	if (nvme_state_terminal(&dev->ctrl))
 		goto disable;
···
 	 * the recovery mechanism will surely fail.
 	 */
 	mb();
-	if (pci_channel_offline(to_pci_dev(dev->dev)))
+	if (pci_channel_offline(pdev))
 		return BLK_EH_RESET_TIMER;

 	/*
···
 		return;

 	/*
+	 * Controllers may support a CMB size larger than their BAR, for
+	 * example, due to being behind a bridge. Reduce the CMB to the
+	 * reported size of the BAR
+	 */
+	size = min(size, bar_size - offset);
+
+	if (!IS_ALIGNED(size, memremap_compat_align()) ||
+	    !IS_ALIGNED(pci_resource_start(pdev, bar),
+			memremap_compat_align()))
+		return;
+
+	/*
 	 * Tell the controller about the host side address mapping the CMB,
 	 * and enable CMB decoding for the NVMe 1.4+ scheme:
···
 			     dev->bar + NVME_REG_CMBMSC);
 	}

-	/*
-	 * Controllers may support a CMB size larger than their BAR,
-	 * for example, due to being behind a bridge. Reduce the CMB to
-	 * the reported size of the BAR
-	 */
-	if (size > bar_size - offset)
-		size = bar_size - offset;
-
 	if (pci_p2pdma_add_resource(pdev, bar, size, offset)) {
 		dev_warn(dev->ctrl.device,
 			 "failed to register the CMB\n");
+		hi_lo_writeq(0, dev->bar + NVME_REG_CMBMSC);
 		return;
 	}
+36-9
drivers/nvme/host/tcp.c
···
 	return queue - queue->ctrl->queues;
 }

+static inline bool nvme_tcp_recv_pdu_supported(enum nvme_tcp_pdu_type type)
+{
+	switch (type) {
+	case nvme_tcp_c2h_term:
+	case nvme_tcp_c2h_data:
+	case nvme_tcp_r2t:
+	case nvme_tcp_rsp:
+		return true;
+	default:
+		return false;
+	}
+}
+
 /*
  * Check if the queue is TLS encrypted
  */
···
 	[NVME_TCP_FES_PDU_SEQ_ERR] = "PDU Sequence Error",
 	[NVME_TCP_FES_HDR_DIGEST_ERR] = "Header Digest Error",
 	[NVME_TCP_FES_DATA_OUT_OF_RANGE] = "Data Transfer Out Of Range",
-	[NVME_TCP_FES_R2T_LIMIT_EXCEEDED] = "R2T Limit Exceeded",
+	[NVME_TCP_FES_DATA_LIMIT_EXCEEDED] = "Data Transfer Limit Exceeded",
 	[NVME_TCP_FES_UNSUPPORTED_PARAM] = "Unsupported Parameter",
 };
···
 		return 0;

 	hdr = queue->pdu;
+	if (unlikely(hdr->hlen != sizeof(struct nvme_tcp_rsp_pdu))) {
+		if (!nvme_tcp_recv_pdu_supported(hdr->type))
+			goto unsupported_pdu;
+
+		dev_err(queue->ctrl->ctrl.device,
+			"pdu type %d has unexpected header length (%d)\n",
+			hdr->type, hdr->hlen);
+		return -EPROTO;
+	}
+
 	if (unlikely(hdr->type == nvme_tcp_c2h_term)) {
 		/*
 		 * C2HTermReq never includes Header or Data digests.
···
 		nvme_tcp_init_recv_ctx(queue);
 		return nvme_tcp_handle_r2t(queue, (void *)queue->pdu);
 	default:
-		dev_err(queue->ctrl->ctrl.device,
-			"unsupported pdu type (%d)\n", hdr->type);
-		return -EINVAL;
+		goto unsupported_pdu;
 	}
+
+unsupported_pdu:
+	dev_err(queue->ctrl->ctrl.device,
+		"unsupported pdu type (%d)\n", hdr->type);
+	return -EINVAL;
 }

 static inline void nvme_tcp_end_request(struct request *rq, u16 status)
···
 	msg.msg_flags = MSG_WAITALL;
 	ret = kernel_recvmsg(queue->sock, &msg, &iov, 1,
 			iov.iov_len, msg.msg_flags);
-	if (ret < sizeof(*icresp)) {
+	if (ret >= 0 && ret < sizeof(*icresp))
+		ret = -ECONNRESET;
+	if (ret < 0) {
 		pr_warn("queue %d: failed to receive icresp, error %d\n",
 			nvme_tcp_queue_id(queue), ret);
-		if (ret >= 0)
-			ret = -ECONNRESET;
 		goto free_icresp;
 	}
 	ret = -ENOTCONN;
···
 {
 	struct nvme_tcp_queue *queue = hctx->driver_data;
 	struct sock *sk = queue->sock->sk;
+	int ret;

 	if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))
 		return 0;
···
 	set_bit(NVME_TCP_Q_POLLING, &queue->flags);
 	if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue))
 		sk_busy_loop(sk, true);
-	nvme_tcp_try_recv(queue);
+	ret = nvme_tcp_try_recv(queue);
 	clear_bit(NVME_TCP_Q_POLLING, &queue->flags);
-	return queue->nr_cqe;
+	return ret < 0 ? ret : queue->nr_cqe;
 }

 static int nvme_tcp_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
···
 		.of_match_table = k1_pinctrl_ids,
 	},
 };
-module_platform_driver(k1_pinctrl_driver);
+builtin_platform_driver(k1_pinctrl_driver);

 MODULE_AUTHOR("Yixun Lan <dlan@gentoo.org>");
 MODULE_DESCRIPTION("Pinctrl driver for the SpacemiT K1 SoC");
···

 	switch (dev->current_profile) {
 	case PLATFORM_PROFILE_PERFORMANCE:
+	case PLATFORM_PROFILE_BALANCED_PERFORMANCE:
 		val = TA_BEST_PERFORMANCE;
 		break;
 	case PLATFORM_PROFILE_BALANCED:
 		val = TA_BETTER_PERFORMANCE;
 		break;
 	case PLATFORM_PROFILE_LOW_POWER:
+	case PLATFORM_PROFILE_QUIET:
 		val = TA_BEST_BATTERY;
 		break;
 	default:
+11
drivers/platform/x86/amd/pmf/sps.c
···

 	switch (pmf->current_profile) {
 	case PLATFORM_PROFILE_PERFORMANCE:
+	case PLATFORM_PROFILE_BALANCED_PERFORMANCE:
 		mode = POWER_MODE_PERFORMANCE;
 		break;
 	case PLATFORM_PROFILE_BALANCED:
 		mode = POWER_MODE_BALANCED_POWER;
 		break;
 	case PLATFORM_PROFILE_LOW_POWER:
+	case PLATFORM_PROFILE_QUIET:
 		mode = POWER_MODE_POWER_SAVER;
 		break;
 	default:
···
 	return 0;
 }

+static int amd_pmf_hidden_choices(void *drvdata, unsigned long *choices)
+{
+	set_bit(PLATFORM_PROFILE_QUIET, choices);
+	set_bit(PLATFORM_PROFILE_BALANCED_PERFORMANCE, choices);
+
+	return 0;
+}
+
 static int amd_pmf_profile_probe(void *drvdata, unsigned long *choices)
 {
 	set_bit(PLATFORM_PROFILE_LOW_POWER, choices);
···

 static const struct platform_profile_ops amd_pmf_profile_ops = {
 	.probe = amd_pmf_profile_probe,
+	.hidden_choices = amd_pmf_hidden_choices,
 	.profile_get = amd_pmf_profile_get,
 	.profile_set = amd_pmf_profile_set,
 };
+59-23
drivers/platform/x86/amd/pmf/tee-if.c
···
 MODULE_PARM_DESC(pb_side_load, "Sideload policy binaries debug policy failures");
 #endif

-static const uuid_t amd_pmf_ta_uuid = UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d,
-						0xb1, 0x2d, 0xc5, 0x29, 0xb1, 0x3d, 0x85, 0x43);
+static const uuid_t amd_pmf_ta_uuid[] = { UUID_INIT(0xd9b39bf2, 0x66bd, 0x4154, 0xaf, 0xb8, 0x8a,
+						    0xcc, 0x2b, 0x2b, 0x60, 0xd6),
+					  UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d, 0xb1, 0x2d, 0xc5,
+						    0x29, 0xb1, 0x3d, 0x85, 0x43),
+					};

 static const char *amd_pmf_uevent_as_str(unsigned int state)
 {
···
 		 */
 		schedule_delayed_work(&dev->pb_work, msecs_to_jiffies(pb_actions_ms * 3));
 	} else {
-		dev_err(dev->dev, "ta invoke cmd init failed err: %x\n", res);
+		dev_dbg(dev->dev, "ta invoke cmd init failed err: %x\n", res);
 		dev->smart_pc_enabled = false;
-		return -EIO;
+		return res;
 	}

 	return 0;
···
 	return ver->impl_id == TEE_IMPL_ID_AMDTEE;
 }

-static int amd_pmf_ta_open_session(struct tee_context *ctx, u32 *id)
+static int amd_pmf_ta_open_session(struct tee_context *ctx, u32 *id, const uuid_t *uuid)
 {
 	struct tee_ioctl_open_session_arg sess_arg = {};
 	int rc;

-	export_uuid(sess_arg.uuid, &amd_pmf_ta_uuid);
+	export_uuid(sess_arg.uuid, uuid);
 	sess_arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
 	sess_arg.num_params = 0;
···
 	return 0;
 }

-static int amd_pmf_tee_init(struct amd_pmf_dev *dev)
+static int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid)
 {
 	u32 size;
 	int ret;
···
 		return PTR_ERR(dev->tee_ctx);
 	}

-	ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id);
+	ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id, uuid);
 	if (ret) {
 		dev_err(dev->dev, "Failed to open TA session (%d)\n", ret);
 		ret = -EINVAL;
···

 int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
 {
-	int ret;
+	bool status;
+	int ret, i;

 	ret = apmf_check_smart_pc(dev);
 	if (ret) {
···
 		return -ENODEV;
 	}

-	ret = amd_pmf_tee_init(dev);
-	if (ret)
-		return ret;
-
 	INIT_DELAYED_WORK(&dev->pb_work, amd_pmf_invoke_cmd);

 	ret = amd_pmf_set_dram_addr(dev, true);
 	if (ret)
-		goto error;
+		goto err_cancel_work;

 	dev->policy_base = devm_ioremap_resource(dev->dev, dev->res);
 	if (IS_ERR(dev->policy_base)) {
 		ret = PTR_ERR(dev->policy_base);
-		goto error;
+		goto err_free_dram_buf;
 	}

 	dev->policy_buf = kzalloc(dev->policy_sz, GFP_KERNEL);
 	if (!dev->policy_buf) {
 		ret = -ENOMEM;
-		goto error;
+		goto err_free_dram_buf;
 	}

 	memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz);
···
 	dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL);
 	if (!dev->prev_data) {
 		ret = -ENOMEM;
-		goto error;
+		goto err_free_policy;
 	}

-	ret = amd_pmf_start_policy_engine(dev);
-	if (ret)
-		goto error;
+	for (i = 0; i < ARRAY_SIZE(amd_pmf_ta_uuid); i++) {
+		ret = amd_pmf_tee_init(dev, &amd_pmf_ta_uuid[i]);
+		if (ret)
+			goto err_free_prev_data;
+
+		ret = amd_pmf_start_policy_engine(dev);
+		switch (ret) {
+		case TA_PMF_TYPE_SUCCESS:
+			status = true;
+			break;
+		case TA_ERROR_CRYPTO_INVALID_PARAM:
+		case TA_ERROR_CRYPTO_BIN_TOO_LARGE:
+			amd_pmf_tee_deinit(dev);
+			status = false;
+			break;
+		default:
+			ret = -EINVAL;
+			amd_pmf_tee_deinit(dev);
+			goto err_free_prev_data;
+		}
+
+		if (status)
+			break;
+	}
+
+	if (!status && !pb_side_load) {
+		ret = -EINVAL;
+		goto err_free_prev_data;
+	}

 	if (pb_side_load)
 		amd_pmf_open_pb(dev, dev->dbgfs_dir);

 	ret = amd_pmf_register_input_device(dev);
 	if (ret)
-		goto error;
+		goto err_pmf_remove_pb;

 	return 0;

-error:
-	amd_pmf_deinit_smart_pc(dev);
+err_pmf_remove_pb:
+	if (pb_side_load && dev->esbin)
+		amd_pmf_remove_pb(dev);
+	amd_pmf_tee_deinit(dev);
+err_free_prev_data:
+	kfree(dev->prev_data);
+err_free_policy:
+	kfree(dev->policy_buf);
+err_free_dram_buf:
+	kfree(dev->buf);
+err_cancel_work:
+	cancel_delayed_work_sync(&dev->pb_work);

 	return ret;
 }
+7
drivers/platform/x86/intel/hid.c
···
 			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
 		},
 	},
+	{
+		.ident = "Microsoft Surface Go 4",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 4"),
+		},
+	},
 	{ }
 };
···
  * @allocated_down: Allocated downstream bandwidth (only for USB3)
  * @bw_mode: DP bandwidth allocation mode registers can be used to
  *	     determine consumed and allocated bandwidth
+ * @dprx_started: DPRX negotiation was started (tb_dp_dprx_start() was called for it)
  * @dprx_canceled: Was DPRX capabilities read poll canceled
  * @dprx_timeout: If set DPRX capabilities read poll work will timeout after this passes
  * @dprx_work: Worker that is scheduled to poll completion of DPRX capabilities read
···
 	int allocated_up;
 	int allocated_down;
 	bool bw_mode;
+	bool dprx_started;
 	bool dprx_canceled;
 	ktime_t dprx_timeout;
 	struct delayed_work dprx_work;
+7-6
drivers/usb/atm/cxacru.c
···
 	struct cxacru_data *instance;
 	struct usb_device *usb_dev = interface_to_usbdev(intf);
 	struct usb_host_endpoint *cmd_ep = usb_dev->ep_in[CXACRU_EP_CMD];
-	struct usb_endpoint_descriptor *in, *out;
+	static const u8 ep_addrs[] = {
+		CXACRU_EP_CMD + USB_DIR_IN,
+		CXACRU_EP_CMD + USB_DIR_OUT,
+		0};
 	int ret;

 	/* instance init */
···
 	}

 	if (usb_endpoint_xfer_int(&cmd_ep->desc))
-		ret = usb_find_common_endpoints(intf->cur_altsetting,
-						NULL, NULL, &in, &out);
+		ret = usb_check_int_endpoints(intf, ep_addrs);
 	else
-		ret = usb_find_common_endpoints(intf->cur_altsetting,
-						&in, &out, NULL, NULL);
+		ret = usb_check_bulk_endpoints(intf, ep_addrs);

-	if (ret) {
+	if (!ret) {
 		usb_err(usbatm_instance, "cxacru_bind: interface has incorrect endpoints\n");
 		ret = -ENODEV;
 		goto fail;
+33
drivers/usb/core/hub.c
···
 } /* usb_hub_cleanup() */

 /**
+ * hub_hc_release_resources - clear resources used by host controller
+ * @udev: pointer to device being released
+ *
+ * Context: task context, might sleep
+ *
+ * Function releases the host controller resources in correct order before
+ * making any operation on resuming usb device. The host controller resources
+ * allocated for devices in tree should be released starting from the last
+ * usb device in tree toward the root hub. This function is used only during
+ * resuming device when usb device require reinitialization, that is, when
+ * flag udev->reset_resume is set.
+ *
+ * This call is synchronous, and may not be used in an interrupt context.
+ */
+static void hub_hc_release_resources(struct usb_device *udev)
+{
+	struct usb_hub *hub = usb_hub_to_struct_hub(udev);
+	struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+	int i;
+
+	/* Release up resources for all children before this device */
+	for (i = 0; i < udev->maxchild; i++)
+		if (hub->ports[i]->child)
+			hub_hc_release_resources(hub->ports[i]->child);
+
+	if (hcd->driver->reset_device)
+		hcd->driver->reset_device(hcd, udev);
+}
+
+/**
  * usb_reset_and_verify_device - perform a USB port reset to reinitialize a device
  * @udev: device to reset (not in SUSPENDED or NOTATTACHED state)
  *
···

 	bos = udev->bos;
 	udev->bos = NULL;
+
+	if (udev->reset_resume)
+		hub_hc_release_resources(udev);

 	mutex_lock(hcd->address0_mutex);
···131131 }132132}133133134134-void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)134134+void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode, bool ignore_susphy)135135{136136+ unsigned int hw_mode;136137 u32 reg;137138138139 reg = dwc3_readl(dwc->regs, DWC3_GCTL);140140+141141+ /*142142+ * For DRD controllers, GUSB3PIPECTL.SUSPENDENABLE and143143+ * GUSB2PHYCFG.SUSPHY should be cleared during mode switching,144144+ * and they can be set after core initialization.145145+ */146146+ hw_mode = DWC3_GHWPARAMS0_MODE(dwc->hwparams.hwparams0);147147+ if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD && !ignore_susphy) {148148+ if (DWC3_GCTL_PRTCAP(reg) != mode)149149+ dwc3_enable_susphy(dwc, false);150150+ }151151+139152 reg &= ~(DWC3_GCTL_PRTCAPDIR(DWC3_GCTL_PRTCAP_OTG));140153 reg |= DWC3_GCTL_PRTCAPDIR(mode);141154 dwc3_writel(dwc->regs, DWC3_GCTL, reg);···229216230217 spin_lock_irqsave(&dwc->lock, flags);231218232232- dwc3_set_prtcap(dwc, desired_dr_role);219219+ dwc3_set_prtcap(dwc, desired_dr_role, false);233220234221 spin_unlock_irqrestore(&dwc->lock, flags);235222···671658 */672659 reg &= ~DWC3_GUSB3PIPECTL_UX_EXIT_PX;673660674674- /*675675- * Above DWC_usb3.0 1.94a, it is recommended to set676676- * DWC3_GUSB3PIPECTL_SUSPHY to '0' during coreConsultant configuration.677677- * So default value will be '0' when the core is reset. Application678678- * needs to set it to '1' after the core initialization is completed.679679- *680680- * Similarly for DRD controllers, GUSB3PIPECTL.SUSPENDENABLE must be681681- * cleared after power-on reset, and it can be set after core682682- * initialization.683683- */661661+ /* Ensure the GUSB3PIPECTL.SUSPENDENABLE is cleared prior to phy init. 
*/684662 reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;685663686664 if (dwc->u2ss_inp3_quirk)···751747 break;752748 }753749754754- /*755755- * Above DWC_usb3.0 1.94a, it is recommended to set756756- * DWC3_GUSB2PHYCFG_SUSPHY to '0' during coreConsultant configuration.757757- * So default value will be '0' when the core is reset. Application758758- * needs to set it to '1' after the core initialization is completed.759759- *760760- * Similarly for DRD controllers, GUSB2PHYCFG.SUSPHY must be cleared761761- * after power-on reset, and it can be set after core initialization.762762- */750750+ /* Ensure the GUSB2PHYCFG.SUSPHY is cleared prior to phy init. */763751 reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;764752765753 if (dwc->dis_enblslpm_quirk)···825829 if (ret < 0)826830 goto err_exit_usb3_phy;827831 }832832+833833+ /*834834+ * Above DWC_usb3.0 1.94a, it is recommended to set835835+ * DWC3_GUSB3PIPECTL_SUSPHY and DWC3_GUSB2PHYCFG_SUSPHY to '0' during836836+ * coreConsultant configuration. So default value will be '0' when the837837+ * core is reset. Application needs to set it to '1' after the core838838+ * initialization is completed.839839+ *840840+ * Certain PHYs must be in the P0 power state during initialization.841841+ * Make sure GUSB3PIPECTL.SUSPENDENABLE and GUSB2PHYCFG.SUSPHY are clear842842+ * prior to phy init so that the phy remains in the P0 state.843843+ *844844+ * After phy initialization, some phy operations can only be executed845845+ * while in lower P states. 
Ensure GUSB3PIPECTL.SUSPENDENABLE and846846+ * GUSB2PHYCFG.SUSPHY are set soon after initialization to avoid847847+ * blocking phy ops.848848+ */849849+ if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A))850850+ dwc3_enable_susphy(dwc, true);828851829852 return 0;830853···1603158816041589 switch (dwc->dr_mode) {16051590 case USB_DR_MODE_PERIPHERAL:16061606- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);15911591+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, false);1607159216081593 if (dwc->usb2_phy)16091594 otg_set_vbus(dwc->usb2_phy->otg, false);···16151600 return dev_err_probe(dev, ret, "failed to initialize gadget\n");16161601 break;16171602 case USB_DR_MODE_HOST:16181618- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST);16031603+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST, false);1619160416201605 if (dwc->usb2_phy)16211606 otg_set_vbus(dwc->usb2_phy->otg, true);···16601645 }1661164616621647 /* de-assert DRVVBUS for HOST and OTG mode */16631663- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);16481648+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, true);16641649}1665165016661651static void dwc3_get_software_properties(struct dwc3 *dwc)···18501835 dwc->tx_thr_num_pkt_prd = tx_thr_num_pkt_prd;18511836 dwc->tx_max_burst_prd = tx_max_burst_prd;1852183718531853- dwc->imod_interval = 0;18541854-18551838 dwc->tx_fifo_resize_max_num = tx_fifo_resize_max_num;18561839}18571840···18671854 unsigned int hwparam_gen =18681855 DWC3_GHWPARAMS3_SSPHY_IFC(dwc->hwparams.hwparams3);1869185618701870- /* Check for proper value of imod_interval */18711871- if (dwc->imod_interval && !dwc3_has_imod(dwc)) {18721872- dev_warn(dwc->dev, "Interrupt moderation not supported\n");18731873- dwc->imod_interval = 0;18741874- }18751875-18761857 /*18581858+ * Enable IMOD for all supporting controllers.18591859+ *18601860+ * Particularly, DWC_usb3 v3.00a must enable this feature for18611861+ * the following reason:18621862+ *18771863 * Workaround for STAR 9000961433 which affects only version18781864 * 3.00a of the 
DWC_usb3 core. This prevents the controller18791865 * interrupt from being masked while handling events. IMOD18801866 * allows us to work around this issue. Enable it for the18811867 * affected version.18821868 */18831883- if (!dwc->imod_interval &&18841884- DWC3_VER_IS(DWC3, 300A))18691869+ if (dwc3_has_imod((dwc)))18851870 dwc->imod_interval = 1;1886187118871872 /* Check the maximum_speed parameter */···24682457 if (ret)24692458 return ret;2470245924712471- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);24602460+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, true);24722461 dwc3_gadget_resume(dwc);24732462 break;24742463 case DWC3_GCTL_PRTCAP_HOST:···24762465 ret = dwc3_core_init_for_resume(dwc);24772466 if (ret)24782467 return ret;24792479- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST);24682468+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST, true);24802469 break;24812470 }24822471 /* Restore GUSB2PHYCFG bits that were modified in suspend */···25052494 if (ret)25062495 return ret;2507249625082508- dwc3_set_prtcap(dwc, dwc->current_dr_role);24972497+ dwc3_set_prtcap(dwc, dwc->current_dr_role, true);2509249825102499 dwc3_otg_init(dwc);25112500 if (dwc->current_otg_role == DWC3_OTG_ROLE_HOST) {
···173173 * block "Initialize GCTL for OTG operation".174174 */175175 /* GCTL.PrtCapDir=2'b11 */176176- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG);176176+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG, true);177177 /* GUSB2PHYCFG0.SusPHY=0 */178178 reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));179179 reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;···556556557557 dwc3_drd_update(dwc);558558 } else {559559- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG);559559+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG, true);560560561561 /* use OTG block to get ID event */562562 irq = dwc3_otg_get_irq(dwc);
+7-3
drivers/usb/dwc3/gadget.c
···45014501 dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0),45024502 DWC3_GEVNTSIZ_SIZE(evt->length));4503450345044504+ evt->flags &= ~DWC3_EVENT_PENDING;45054505+ /*45064506+ * Add an explicit write memory barrier to make sure that the update of45074507+ * clearing DWC3_EVENT_PENDING is observed in dwc3_check_event_buf()45084508+ */45094509+ wmb();45104510+45044511 if (dwc->imod_interval) {45054512 dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), DWC3_GEVNTCOUNT_EHB);45064513 dwc3_writel(dwc->regs, DWC3_DEV_IMOD(0), dwc->imod_interval);45074514 }45084508-45094509- /* Keep the clearing of DWC3_EVENT_PENDING at the end */45104510- evt->flags &= ~DWC3_EVENT_PENDING;4511451545124516 return ret;45134517}
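The gadget.c hunk above moves the clearing of DWC3_EVENT_PENDING ahead of re-arming the event buffer and adds an explicit wmb(). The ordering requirement it enforces can be sketched in portable C11 as a minimal userspace analogue (the struct and function names are illustrative, not the dwc3 API):

```c
#include <assert.h>
#include <stdatomic.h>

struct evt {
	int data;		/* stands in for the event buffer contents */
	atomic_int pending;	/* stands in for DWC3_EVENT_PENDING */
};

/* Publish the data, then clear the flag; the release fence (analogue of
 * the patch's wmb()) keeps the two stores in that order for observers. */
static void producer_publish(struct evt *e, int v)
{
	e->data = v;
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&e->pending, 0, memory_order_relaxed);
}

/* Wait for the flag to clear, then read; the acquire fence pairs with
 * the producer's release fence, so the data store is visible here. */
static int consumer_read(struct evt *e)
{
	while (atomic_load_explicit(&e->pending, memory_order_relaxed))
		;
	atomic_thread_fence(memory_order_acquire);
	return e->data;
}
```

Without the release fence, the store clearing `pending` could become visible before the data store, which is the reordering the added wmb() guards against for dwc3_check_event_buf().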
···10521052 * There is a transfer in progress. So we trigger a remote10531053 * wakeup to inform the host.10541054 */10551055- ether_wakeup_host(dev->port_usb);10561056- return;10551055+ if (!ether_wakeup_host(dev->port_usb))10561056+ return;10571057 }10581058 spin_lock_irqsave(&dev->lock, flags);10591059 link->is_suspend = true;
···24372437 * and our use of dma addresses in the trb_address_map radix tree needs24382438 * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.24392439 */24402440- if (xhci->quirks & XHCI_ZHAOXIN_TRB_FETCH)24402440+ if (xhci->quirks & XHCI_TRB_OVERFETCH)24412441+ /* Buggy HC prefetches beyond segment bounds - allocate dummy space at the end */24412442 xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,24422443 TRB_SEGMENT_SIZE * 2, TRB_SEGMENT_SIZE * 2, xhci->page_size * 2);24432444 else
···2525 * difficult to estimate the time it takes for the system to process the command2626 * before it is actually passed to the PPM.2727 */2828-#define UCSI_TIMEOUT_MS 50002828+#define UCSI_TIMEOUT_MS 1000029293030/*3131 * UCSI_SWAP_TIMEOUT_MS - Timeout for role swap requests···1346134613471347 mutex_lock(&ucsi->ppm_lock);1348134813491349- ret = ucsi->ops->read_cci(ucsi, &cci);13491349+ ret = ucsi->ops->poll_cci(ucsi, &cci);13501350 if (ret < 0)13511351 goto out;13521352···1364136413651365 tmo = jiffies + msecs_to_jiffies(UCSI_TIMEOUT_MS);13661366 do {13671367- ret = ucsi->ops->read_cci(ucsi, &cci);13671367+ ret = ucsi->ops->poll_cci(ucsi, &cci);13681368 if (ret < 0)13691369 goto out;13701370 if (cci & UCSI_CCI_COMMAND_COMPLETE)···13931393 /* Give the PPM time to process a reset before reading CCI */13941394 msleep(20);1395139513961396- ret = ucsi->ops->read_cci(ucsi, &cci);13961396+ ret = ucsi->ops->poll_cci(ucsi, &cci);13971397 if (ret)13981398 goto out;13991399···1825182518261826err_unregister:18271827 for (con = connector; con->port; con++) {18281828+ if (con->wq)18291829+ destroy_workqueue(con->wq);18281830 ucsi_unregister_partner(con);18291831 ucsi_unregister_altmodes(con, UCSI_RECIPIENT_CON);18301832 ucsi_unregister_port_psy(con);18311831- if (con->wq)18321832- destroy_workqueue(con->wq);1833183318341834 usb_power_delivery_unregister_capabilities(con->port_sink_caps);18351835 con->port_sink_caps = NULL;···19291929 struct ucsi *ucsi;1930193019311931 if (!ops ||19321932- !ops->read_version || !ops->read_cci || !ops->read_message_in ||19331933- !ops->sync_control || !ops->async_control)19321932+ !ops->read_version || !ops->read_cci || !ops->poll_cci ||19331933+ !ops->read_message_in || !ops->sync_control || !ops->async_control)19341934 return ERR_PTR(-EINVAL);1935193519361936 ucsi = kzalloc(sizeof(*ucsi), GFP_KERNEL);···2013201320142014 for (i = 0; i < ucsi->cap.num_connectors; i++) {20152015 cancel_work_sync(&ucsi->connector[i].work);20162016- 
ucsi_unregister_partner(&ucsi->connector[i]);20172017- ucsi_unregister_altmodes(&ucsi->connector[i],20182018- UCSI_RECIPIENT_CON);20192019- ucsi_unregister_port_psy(&ucsi->connector[i]);2020201620212017 if (ucsi->connector[i].wq) {20222018 struct ucsi_work *uwork;···20272031 mutex_unlock(&ucsi->connector[i].lock);20282032 destroy_workqueue(ucsi->connector[i].wq);20292033 }20342034+20352035+ ucsi_unregister_partner(&ucsi->connector[i]);20362036+ ucsi_unregister_altmodes(&ucsi->connector[i],20372037+ UCSI_RECIPIENT_CON);20382038+ ucsi_unregister_port_psy(&ucsi->connector[i]);2030203920312040 usb_power_delivery_unregister_capabilities(ucsi->connector[i].port_sink_caps);20322041 ucsi->connector[i].port_sink_caps = NULL;
+2
drivers/usb/typec/ucsi/ucsi.h
···6262 * struct ucsi_operations - UCSI I/O operations6363 * @read_version: Read implemented UCSI version6464 * @read_cci: Read CCI register6565+ * @poll_cci: Read CCI register while polling with notifications disabled6566 * @read_message_in: Read message data from UCSI6667 * @sync_control: Blocking control operation6768 * @async_control: Non-blocking control operation···7776struct ucsi_operations {7877 int (*read_version)(struct ucsi *ucsi, u16 *version);7978 int (*read_cci)(struct ucsi *ucsi, u32 *cci);7979+ int (*poll_cci)(struct ucsi *ucsi, u32 *cci);8080 int (*read_message_in)(struct ucsi *ucsi, void *val, size_t val_len);8181 int (*sync_control)(struct ucsi *ucsi, u64 command);8282 int (*async_control)(struct ucsi *ucsi, u64 command);
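Since ucsi_create() now rejects an ops table without poll_cci, a backend with no dedicated polled path would typically point poll_cci at its ordinary CCI read. A minimal userspace model of that validation follows; the demo_* names are hypothetical, not the kernel's struct ucsi_operations:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of the expanded ops table. */
struct demo_ucsi_ops {
	int (*read_cci)(uint32_t *cci);
	int (*poll_cci)(uint32_t *cci);
};

static int demo_read_cci(uint32_t *cci)
{
	*cci = 0;	/* pretend the register read succeeded */
	return 0;
}

/* Mirrors the mandatory-callback check ucsi_create() now performs. */
static int demo_ops_valid(const struct demo_ucsi_ops *ops)
{
	return ops && ops->read_cci && ops->poll_cci;
}
```

A backend whose hardware has no separate notifications-disabled path could simply set `.poll_cci` to the same function as `.read_cci`.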
···282282static uint screen_fb_size;283283static uint dio_fb_size; /* FB size for deferred IO */284284285285+static void hvfb_putmem(struct fb_info *info);286286+285287/* Send message to Hyper-V host */286288static inline int synthvid_send(struct hv_device *hdev,287289 struct synthvid_msg *msg)···865863}866864867865/*866866+ * fb_ops.fb_destroy is called by the last put_fb_info() call at the end867867+ * of unregister_framebuffer() or fb_release(). Do any cleanup related to868868+ * framebuffer here.869869+ */870870+static void hvfb_destroy(struct fb_info *info)871871+{872872+ hvfb_putmem(info);873873+ framebuffer_release(info);874874+}875875+876876+/*868877 * TODO: GEN1 codepaths allocate from system or DMA-able memory. Fix the869878 * driver to use the _SYSMEM_ or _DMAMEM_ helpers in these cases.870879 */···890877 .fb_set_par = hvfb_set_par,891878 .fb_setcolreg = hvfb_setcolreg,892879 .fb_blank = hvfb_blank,880880+ .fb_destroy = hvfb_destroy,893881};894882895883/* Get options from kernel paramenter "video=" */···966952}967953968954/* Release contiguous physical memory */969969-static void hvfb_release_phymem(struct hv_device *hdev,955955+static void hvfb_release_phymem(struct device *device,970956 phys_addr_t paddr, unsigned int size)971957{972958 unsigned int order = get_order(size);···974960 if (order <= MAX_PAGE_ORDER)975961 __free_pages(pfn_to_page(paddr >> PAGE_SHIFT), order);976962 else977977- dma_free_coherent(&hdev->device,963963+ dma_free_coherent(device,978964 round_up(size, PAGE_SIZE),979965 phys_to_virt(paddr),980966 paddr);···10039891004990 base = pci_resource_start(pdev, 0);1005991 size = pci_resource_len(pdev, 0);992992+ aperture_remove_conflicting_devices(base, size, KBUILD_MODNAME);10069931007994 /*1008995 * For Gen 1 VM, we can directly use the contiguous memory···10251010 goto getmem_done;10261011 }10271012 pr_info("Unable to allocate enough contiguous physical memory on Gen 1 VM. 
Using MMIO instead.\n");10131013+ } else {10141014+ aperture_remove_all_conflicting_devices(KBUILD_MODNAME);10281015 }1029101610301017 /*10311031- * Cannot use the contiguous physical memory.10321032- * Allocate mmio space for framebuffer.10181018+ * Cannot use contiguous physical memory, so allocate MMIO space for10191019+ * the framebuffer. At this point in the function, conflicting devices10201020+ * that might have claimed the framebuffer MMIO space based on10211021+ * screen_info.lfb_base must have already been removed so that10221022+ * vmbus_allocate_mmio() does not allocate different MMIO space. If the10231023+ * kdump image were to be loaded using kexec_file_load(), the10241024+ * framebuffer location in the kdump image would be set from10251025+ * screen_info.lfb_base at the time that kdump is enabled. If the10261026+ * framebuffer has moved elsewhere, this could be the wrong location,10271027+ * causing kdump to hang when efifb (for example) loads.10331028 */10341029 dio_fb_size =10351030 screen_width * screen_height * screen_depth / 8;···10761051 info->screen_size = dio_fb_size;1077105210781053getmem_done:10791079- if (base && size)10801080- aperture_remove_conflicting_devices(base, size, KBUILD_MODNAME);10811081- else10821082- aperture_remove_all_conflicting_devices(KBUILD_MODNAME);10831083-10841054 if (!gen2vm)10851055 pci_dev_put(pdev);10861056···10941074}1095107510961076/* Release the framebuffer */10971097-static void hvfb_putmem(struct hv_device *hdev, struct fb_info *info)10771077+static void hvfb_putmem(struct fb_info *info)10981078{10991079 struct hvfb_par *par = info->par;1100108011011081 if (par->need_docopy) {11021082 vfree(par->dio_vp);11031103- iounmap(info->screen_base);10831083+ iounmap(par->mmio_vp);11041084 vmbus_free_mmio(par->mem->start, screen_fb_size);11051085 } else {11061106- hvfb_release_phymem(hdev, info->fix.smem_start,10861086+ hvfb_release_phymem(info->device, info->fix.smem_start,11071087 screen_fb_size);11081088 
}11091089···11921172 if (ret)11931173 goto error;1194117411951195- ret = register_framebuffer(info);11751175+ ret = devm_register_framebuffer(&hdev->device, info);11961176 if (ret) {11971177 pr_err("Unable to register framebuffer\n");11981178 goto error;···1217119712181198error:12191199 fb_deferred_io_cleanup(info);12201220- hvfb_putmem(hdev, info);12001200+ hvfb_putmem(info);12211201error2:12221202 vmbus_close(hdev->channel);12231203error1:···1240122012411221 fb_deferred_io_cleanup(info);1242122212431243- unregister_framebuffer(info);12441223 cancel_delayed_work_sync(&par->dwork);1245122412461225 vmbus_close(hdev->channel);12471226 hv_set_drvdata(hdev, NULL);12481248-12491249- hvfb_putmem(hdev, info);12501250- framebuffer_release(info);12511227}1252122812531229static int hvfb_suspend(struct hv_device *hdev)
+3-3
drivers/virt/acrn/hsm.c
···4949 switch (cmd & PMCMD_TYPE_MASK) {5050 case ACRN_PMCMD_GET_PX_CNT:5151 case ACRN_PMCMD_GET_CX_CNT:5252- pm_info = kmalloc(sizeof(u64), GFP_KERNEL);5252+ pm_info = kzalloc(sizeof(u64), GFP_KERNEL);5353 if (!pm_info)5454 return -ENOMEM;5555···6464 kfree(pm_info);6565 break;6666 case ACRN_PMCMD_GET_PX_DATA:6767- px_data = kmalloc(sizeof(*px_data), GFP_KERNEL);6767+ px_data = kzalloc(sizeof(*px_data), GFP_KERNEL);6868 if (!px_data)6969 return -ENOMEM;7070···7979 kfree(px_data);8080 break;8181 case ACRN_PMCMD_GET_CX_DATA:8282- cx_data = kmalloc(sizeof(*cx_data), GFP_KERNEL);8282+ cx_data = kzalloc(sizeof(*cx_data), GFP_KERNEL);8383 if (!cx_data)8484 return -ENOMEM;8585
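The kmalloc-to-kzalloc switch above closes a potential information leak: if the hypercall fills only part of the buffer, the whole buffer is still copied out to userspace. A small userspace sketch of why the zeroed allocation matters (get_pm_info() is an illustrative stand-in, not the ACRN interface):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* The hypercall may fill fewer bytes than were allocated, yet the whole
 * buffer is later copied back to the caller.  With a zeroed allocation
 * (calloc here, kzalloc in the kernel) the unwritten tail is guaranteed
 * to be zeros rather than stale heap contents. */
static unsigned char *get_pm_info(size_t bufsz, size_t filled)
{
	unsigned char *pm_info = calloc(1, bufsz);	/* kzalloc analogue */

	if (!pm_info)
		return NULL;
	memset(pm_info, 0xab, filled);	/* simulate a partial hypercall fill */
	return pm_info;			/* caller copies out all bufsz bytes */
}
```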
+43-15
drivers/virt/coco/sev-guest/sev-guest.c
···3939 struct miscdevice misc;40404141 struct snp_msg_desc *msg_desc;4242-4343- union {4444- struct snp_report_req report;4545- struct snp_derived_key_req derived_key;4646- struct snp_ext_report_req ext_report;4747- } req;4842};49435044/*···66726773static int get_report(struct snp_guest_dev *snp_dev, struct snp_guest_request_ioctl *arg)6874{6969- struct snp_report_req *report_req = &snp_dev->req.report;7575+ struct snp_report_req *report_req __free(kfree) = NULL;7076 struct snp_msg_desc *mdesc = snp_dev->msg_desc;7177 struct snp_report_resp *report_resp;7278 struct snp_guest_req req = {};···74807581 if (!arg->req_data || !arg->resp_data)7682 return -EINVAL;8383+8484+ report_req = kzalloc(sizeof(*report_req), GFP_KERNEL_ACCOUNT);8585+ if (!report_req)8686+ return -ENOMEM;77877888 if (copy_from_user(report_req, (void __user *)arg->req_data, sizeof(*report_req)))7989 return -EFAULT;···115117116118static int get_derived_key(struct snp_guest_dev *snp_dev, struct snp_guest_request_ioctl *arg)117119{118118- struct snp_derived_key_req *derived_key_req = &snp_dev->req.derived_key;120120+ struct snp_derived_key_req *derived_key_req __free(kfree) = NULL;119121 struct snp_derived_key_resp derived_key_resp = {0};120122 struct snp_msg_desc *mdesc = snp_dev->msg_desc;121123 struct snp_guest_req req = {};···133135 */134136 resp_len = sizeof(derived_key_resp.data) + mdesc->ctx->authsize;135137 if (sizeof(buf) < resp_len)138138+ return -ENOMEM;139139+140140+ derived_key_req = kzalloc(sizeof(*derived_key_req), GFP_KERNEL_ACCOUNT);141141+ if (!derived_key_req)136142 return -ENOMEM;137143138144 if (copy_from_user(derived_key_req, (void __user *)arg->req_data,···171169 struct snp_req_resp *io)172170173171{174174- struct snp_ext_report_req *report_req = &snp_dev->req.ext_report;172172+ struct snp_ext_report_req *report_req __free(kfree) = NULL;175173 struct snp_msg_desc *mdesc = snp_dev->msg_desc;176174 struct snp_report_resp *report_resp;177175 struct snp_guest_req req = {};178176 int 
ret, npages = 0, resp_len;179177 sockptr_t certs_address;178178+ struct page *page;180179181180 if (sockptr_is_null(io->req_data) || sockptr_is_null(io->resp_data))182181 return -EINVAL;182182+183183+ report_req = kzalloc(sizeof(*report_req), GFP_KERNEL_ACCOUNT);184184+ if (!report_req)185185+ return -ENOMEM;183186184187 if (copy_from_sockptr(report_req, io->req_data, sizeof(*report_req)))185188 return -EFAULT;···211204 * the host. If host does not supply any certs in it, then copy212205 * zeros to indicate that certificate data was not provided.213206 */214214- memset(mdesc->certs_data, 0, report_req->certs_len);215207 npages = report_req->certs_len >> PAGE_SHIFT;208208+ page = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO,209209+ get_order(report_req->certs_len));210210+ if (!page)211211+ return -ENOMEM;212212+213213+ req.certs_data = page_address(page);214214+ ret = set_memory_decrypted((unsigned long)req.certs_data, npages);215215+ if (ret) {216216+ pr_err("failed to mark page shared, ret=%d\n", ret);217217+ __free_pages(page, get_order(report_req->certs_len));218218+ return -EFAULT;219219+ }220220+216221cmd:217222 /*218223 * The intermediate response buffer is used while decrypting the···233214 */234215 resp_len = sizeof(report_resp->data) + mdesc->ctx->authsize;235216 report_resp = kzalloc(resp_len, GFP_KERNEL_ACCOUNT);236236- if (!report_resp)237237- return -ENOMEM;217217+ if (!report_resp) {218218+ ret = -ENOMEM;219219+ goto e_free_data;220220+ }238221239239- mdesc->input.data_npages = npages;222222+ req.input.data_npages = npages;240223241224 req.msg_version = arg->msg_version;242225 req.msg_type = SNP_MSG_REPORT_REQ;···253232254233 /* If certs length is invalid then copy the returned length */255234 if (arg->vmm_error == SNP_GUEST_VMM_ERR_INVALID_LEN) {256256- report_req->certs_len = mdesc->input.data_npages << PAGE_SHIFT;235235+ report_req->certs_len = req.input.data_npages << PAGE_SHIFT;257236258237 if (copy_to_sockptr(io->req_data, report_req, 
sizeof(*report_req)))259238 ret = -EFAULT;···262241 if (ret)263242 goto e_free;264243265265- if (npages && copy_to_sockptr(certs_address, mdesc->certs_data, report_req->certs_len)) {244244+ if (npages && copy_to_sockptr(certs_address, req.certs_data, report_req->certs_len)) {266245 ret = -EFAULT;267246 goto e_free;268247 }···272251273252e_free:274253 kfree(report_resp);254254+e_free_data:255255+ if (npages) {256256+ if (set_memory_encrypted((unsigned long)req.certs_data, npages))257257+ WARN_ONCE(ret, "failed to restore encryption mask (leak it)\n");258258+ else259259+ __free_pages(page, get_order(report_req->certs_len));260260+ }275261 return ret;276262}277263
+2-1
drivers/virt/vboxguest/Kconfig
···11# SPDX-License-Identifier: GPL-2.0-only22config VBOXGUEST33 tristate "Virtual Box Guest integration support"44- depends on (ARM64 || X86) && PCI && INPUT44+ depends on (ARM64 || X86 || COMPILE_TEST) && PCI && INPUT55+ depends on HAS_IOPORT56 help67 This is a driver for the Virtual Box Guest PCI device used in78 Virtual Box virtual machines. Enabling this driver will add
···5959 }6060 rcu_read_unlock();61616262- return bch2_rand_range(nr * CONGESTED_MAX) < total;6262+ return get_random_u32_below(nr * CONGESTED_MAX) < total;6363}64646565#else···951951 goto retry_pick;952952 }953953954954- /*955955- * Unlock the iterator while the btree node's lock is still in956956- * cache, before doing the IO:957957- */958958- bch2_trans_unlock(trans);959959-960954 if (flags & BCH_READ_NODECODE) {961955 /*962956 * can happen if we retry, and the extent we were going to read···11071113 trace_and_count(c, read_split, &orig->bio);11081114 }1109111511161116+ /*11171117+ * Unlock the iterator while the btree node's lock is still in11181118+ * cache, before doing the IO:11191119+ */11201120+ if (!(flags & BCH_READ_IN_RETRY))11211121+ bch2_trans_unlock(trans);11221122+ else11231123+ bch2_trans_unlock_long(trans);11241124+11101125 if (!rbio->pick.idx) {11111126 if (unlikely(!rbio->have_ioref)) {11121127 struct printbuf buf = PRINTBUF;···11631160 if (likely(!(flags & BCH_READ_IN_RETRY))) {11641161 return 0;11651162 } else {11631163+ bch2_trans_unlock(trans);11641164+11661165 int ret;1167116611681167 rbio->context = RBIO_CONTEXT_UNBOUND;
+32-27
fs/bcachefs/journal.c
···1021102110221022/* allocate journal on a device: */1023102310241024-static int __bch2_set_nr_journal_buckets(struct bch_dev *ca, unsigned nr,10251025- bool new_fs, struct closure *cl)10241024+static int bch2_set_nr_journal_buckets_iter(struct bch_dev *ca, unsigned nr,10251025+ bool new_fs, struct closure *cl)10261026{10271027 struct bch_fs *c = ca->fs;10281028 struct journal_device *ja = &ca->journal;···11501150 return ret;11511151}1152115211531153-/*11541154- * Allocate more journal space at runtime - not currently making use if it, but11551155- * the code works:11561156- */11571157-int bch2_set_nr_journal_buckets(struct bch_fs *c, struct bch_dev *ca,11581158- unsigned nr)11531153+static int bch2_set_nr_journal_buckets_loop(struct bch_fs *c, struct bch_dev *ca,11541154+ unsigned nr, bool new_fs)11591155{11601156 struct journal_device *ja = &ca->journal;11611161- struct closure cl;11621157 int ret = 0;1163115811591159+ struct closure cl;11641160 closure_init_stack(&cl);11651165-11661166- down_write(&c->state_lock);1167116111681162 /* don't handle reducing nr of buckets yet: */11691163 if (nr < ja->nr)11701170- goto unlock;11641164+ return 0;1171116511721172- while (ja->nr < nr) {11661166+ while (!ret && ja->nr < nr) {11731167 struct disk_reservation disk_res = { 0, 0, 0 };1174116811751169 /*···11761182 * filesystem-wide allocation will succeed, this is a device11771183 * specific allocation - we can hang here:11781184 */11851185+ if (!new_fs) {11861186+ ret = bch2_disk_reservation_get(c, &disk_res,11871187+ bucket_to_sector(ca, nr - ja->nr), 1, 0);11881188+ if (ret)11891189+ break;11901190+ }1179119111801180- ret = bch2_disk_reservation_get(c, &disk_res,11811181- bucket_to_sector(ca, nr - ja->nr), 1, 0);11821182- if (ret)11831183- break;11921192+ ret = bch2_set_nr_journal_buckets_iter(ca, nr, new_fs, &cl);1184119311851185- ret = __bch2_set_nr_journal_buckets(ca, nr, false, &cl);11941194+ if (ret == -BCH_ERR_bucket_alloc_blocked ||11951195+ ret == 
-BCH_ERR_open_buckets_empty)11961196+ ret = 0; /* wait and retry */1186119711871198 bch2_disk_reservation_put(c, &disk_res);11881188-11891199 closure_sync(&cl);11901190-11911191- if (ret &&11921192- ret != -BCH_ERR_bucket_alloc_blocked &&11931193- ret != -BCH_ERR_open_buckets_empty)11941194- break;11951200 }1196120111971197- bch_err_fn(c, ret);11981198-unlock:12021202+ return ret;12031203+}12041204+12051205+/*12061206+ * Allocate more journal space at runtime - not currently making use of it, but12071207+ * the code works:12081208+ */12091209+int bch2_set_nr_journal_buckets(struct bch_fs *c, struct bch_dev *ca,12101210+ unsigned nr)12111211+{12121212+ down_write(&c->state_lock);12131213+ int ret = bch2_set_nr_journal_buckets_loop(c, ca, nr, false);11991214 up_write(&c->state_lock);12151215+12161216+ bch_err_fn(c, ret);12001217 return ret;12011218}12021219···12331228 min(1 << 13,12341229 (1 << 24) / ca->mi.bucket_size));1235123012361236- ret = __bch2_set_nr_journal_buckets(ca, nr, new_fs, NULL);12311231+ ret = bch2_set_nr_journal_buckets_loop(ca->fs, ca, nr, new_fs);12371232err:12381233 bch_err_fn(ca, ret);12391234 return ret;
+12-13
fs/bcachefs/movinggc.c
···7474 struct move_bucket *b, u64 time)7575{7676 struct bch_fs *c = trans->c;7777- struct btree_iter iter;7878- struct bkey_s_c k;7979- struct bch_alloc_v4 _a;8080- const struct bch_alloc_v4 *a;8181- int ret;82778383- if (bch2_bucket_is_open(trans->c,8484- b->k.bucket.inode,8585- b->k.bucket.offset))7878+ if (bch2_bucket_is_open(c, b->k.bucket.inode, b->k.bucket.offset))8679 return 0;87808888- k = bch2_bkey_get_iter(trans, &iter, BTREE_ID_alloc,8989- b->k.bucket, BTREE_ITER_cached);9090- ret = bkey_err(k);8181+ struct btree_iter iter;8282+ struct bkey_s_c k = bch2_bkey_get_iter(trans, &iter, BTREE_ID_alloc,8383+ b->k.bucket, BTREE_ITER_cached);8484+ int ret = bkey_err(k);9185 if (ret)9286 return ret;9387···8995 if (!ca)9096 goto out;91979292- a = bch2_alloc_to_v4(k, &_a);9898+ if (ca->mi.state != BCH_MEMBER_STATE_rw ||9999+ !bch2_dev_is_online(ca))100100+ goto out_put;101101+102102+ struct bch_alloc_v4 _a;103103+ const struct bch_alloc_v4 *a = bch2_alloc_to_v4(k, &_a);93104 b->k.gen = a->gen;94105 b->sectors = bch2_bucket_sectors_dirty(*a);95106 u64 lru_idx = alloc_lru_idx_fragmentation(*a, ca);9610797108 ret = lru_idx && lru_idx <= time;9898-109109+out_put:99110 bch2_dev_put(ca);100111out:101112 bch2_trans_iter_exit(trans, &iter);
···653653 return 0;654654}655655656656-size_t bch2_rand_range(size_t max)656656+u64 bch2_get_random_u64_below(u64 ceil)657657{658658- size_t rand;658658+ if (ceil <= U32_MAX)659659+ return __get_random_u32_below(ceil);659660660660- if (!max)661661- return 0;661661+ /* this is the same (clever) algorithm as in __get_random_u32_below() */662662+ u64 rand = get_random_u64();663663+ u64 mult = ceil * rand;662664663663- do {664664- rand = get_random_long();665665- rand &= roundup_pow_of_two(max) - 1;666666- } while (rand >= max);665665+ if (unlikely(mult < ceil)) {666666+ u64 bound;667667+ div64_u64_rem(-ceil, ceil, &bound);668668+ while (unlikely(mult < bound)) {669669+ rand = get_random_u64();670670+ mult = ceil * rand;671671+ }672672+ }667673668668- return rand;674674+ return mul_u64_u64_shr(ceil, rand, 64);669675}670676671677void memcpy_to_bio(struct bio *dst, struct bvec_iter dst_iter, const void *src)
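bch2_get_random_u64_below() above uses the widening-multiply technique from __get_random_u32_below() (Lemire's method): multiply a uniform 64-bit draw by the ceiling, keep the high half, and redraw only in the rare biased cases. A self-contained sketch, with a deterministic xorshift generator standing in for get_random_u64() and assuming a nonzero ceiling and compiler __int128 support:

```c
#include <assert.h>
#include <stdint.h>

/* Deterministic xorshift64 standing in for get_random_u64(). */
static uint64_t prng_state = 0x0123456789abcdefULL;

static uint64_t xorshift64(void)
{
	uint64_t x = prng_state;

	x ^= x << 13;
	x ^= x >> 7;
	x ^= x << 17;
	return prng_state = x;
}

/* Lemire's method: the high 64 bits of rand * ceil are uniform in
 * [0, ceil) once the rare biased low-half cases are rejected.
 * ceil must be nonzero. */
static uint64_t random_below(uint64_t ceil)
{
	unsigned __int128 mult = (unsigned __int128)xorshift64() * ceil;

	if ((uint64_t)mult < ceil) {
		uint64_t bound = -ceil % ceil;	/* 2^64 mod ceil */

		while ((uint64_t)mult < bound)
			mult = (unsigned __int128)xorshift64() * ceil;
	}
	return (uint64_t)(mult >> 64);
}
```

Unlike the mask-and-retry loop being removed, the multiply almost never needs a redraw, since the rejection region is at most ceil out of 2^64 values.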
+1-1
fs/bcachefs/util.h
···401401 _ret; \402402})403403404404-size_t bch2_rand_range(size_t);404404+u64 bch2_get_random_u64_below(u64);405405406406void memcpy_to_bio(struct bio *, struct bvec_iter, const void *);407407void memcpy_from_bio(void *, struct bio *, struct bvec_iter);
+7-2
fs/btrfs/inode.c
···13821382 continue;13831383 }13841384 if (done_offset) {13851385- *done_offset = start - 1;13861386- return 0;13851385+ /*13861386+ * Move @end to the end of the processed range,13871387+ * and exit the loop to unlock the processed extents.13881388+ */13891389+ end = start - 1;13901390+ ret = 0;13911391+ break;13871392 }13881393 ret = -ENOSPC;13891394 }
+2-2
fs/btrfs/sysfs.c
···1330133013311331int btrfs_read_policy_to_enum(const char *str, s64 *value_ret)13321332{13331333- char param[32] = { 0 };13331333+ char param[32];13341334 char __maybe_unused *value_str;1335133513361336 if (!str || strlen(str) == 0)13371337 return 0;1338133813391339- strncpy(param, str, sizeof(param) - 1);13391339+ strscpy(param, str);1340134013411341#ifdef CONFIG_BTRFS_EXPERIMENTAL13421342 /* Separate value from input in policy:value format. */
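The strncpy-to-strscpy change above matters because strncpy() leaves the destination unterminated when the source fills the buffer, while strscpy() always NUL-terminates and reports truncation. A simplified userspace model of that contract (demo_strscpy() only approximates the kernel helper; -7 merely stands in for -E2BIG):

```c
#include <assert.h>
#include <string.h>

/* Copy src into dst of the given size, always NUL-terminating.
 * Returns the copied length, or -7 ("-E2BIG") on truncation. */
static long demo_strscpy(char *dst, const char *src, size_t size)
{
	size_t len = strnlen(src, size);

	if (len == size) {		/* src would not fit: truncate */
		if (size) {
			memcpy(dst, src, size - 1);
			dst[size - 1] = '\0';
		}
		return -7;
	}
	memcpy(dst, src, len + 1);	/* includes the terminating NUL */
	return (long)len;
}
```

This also removes the need for the `= { 0 }` initializer the patch drops, since the destination is terminated on every path.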
···756756 return VM_FAULT_SIGBUS;757757 }758758 } else {759759- result = filemap_fsnotify_fault(vmf);760760- if (unlikely(result))761761- return result;762759 filemap_invalidate_lock_shared(mapping);763760 }764761 result = dax_iomap_fault(vmf, order, &pfn, &error, &ext4_iomap_ops);
+7-8
fs/fuse/dev.c
···14571457 if (ret < 0)14581458 goto out;1459145914601460- if (pipe_occupancy(pipe->head, pipe->tail) + cs.nr_segs > pipe->max_usage) {14601460+ if (pipe_buf_usage(pipe) + cs.nr_segs > pipe->max_usage) {14611461 ret = -EIO;14621462 goto out;14631463 }···21072107 struct file *out, loff_t *ppos,21082108 size_t len, unsigned int flags)21092109{21102110- unsigned int head, tail, mask, count;21102110+ unsigned int head, tail, count;21112111 unsigned nbuf;21122112 unsigned idx;21132113 struct pipe_buffer *bufs;···2124212421252125 head = pipe->head;21262126 tail = pipe->tail;21272127- mask = pipe->ring_size - 1;21282128- count = head - tail;21272127+ count = pipe_occupancy(head, tail);2129212821302129 bufs = kvmalloc_array(count, sizeof(struct pipe_buffer), GFP_KERNEL);21312130 if (!bufs) {···2134213521352136 nbuf = 0;21362137 rem = 0;21372137- for (idx = tail; idx != head && rem < len; idx++)21382138- rem += pipe->bufs[idx & mask].len;21382138+ for (idx = tail; !pipe_empty(head, idx) && rem < len; idx++)21392139+ rem += pipe_buf(pipe, idx)->len;2139214021402141 ret = -EINVAL;21412142 if (rem < len)···21462147 struct pipe_buffer *ibuf;21472148 struct pipe_buffer *obuf;2148214921492149- if (WARN_ON(nbuf >= count || tail == head))21502150+ if (WARN_ON(nbuf >= count || pipe_empty(head, tail)))21502151 goto out_free;2151215221522152- ibuf = &pipe->bufs[tail & mask];21532153+ ibuf = pipe_buf(pipe, tail);21532154 obuf = &bufs[nbuf];2154215521552156 if (rem >= ibuf->len) {
+2-1
fs/nfs/file.c
···2929#include <linux/pagemap.h>3030#include <linux/gfp.h>3131#include <linux/swap.h>3232+#include <linux/compaction.h>32333334#include <linux/uaccess.h>3435#include <linux/filelock.h>···458457 /* If the private flag is set, then the folio is not freeable */459458 if (folio_test_private(folio)) {460459 if ((current_gfp_context(gfp) & GFP_KERNEL) != GFP_KERNEL ||461461- current_is_kswapd())460460+ current_is_kswapd() || current_is_kcompactd())462461 return false;463462 if (nfs_wb_folio(folio->mapping->host, folio) < 0)464463 return false;
+14-18
fs/pipe.c
···
/* Done while waiting without holding the pipe lock - thus the READ_ONCE() */
static inline bool pipe_readable(const struct pipe_inode_info *pipe)
{
-	unsigned int head = READ_ONCE(pipe->head);
-	unsigned int tail = READ_ONCE(pipe->tail);
+	union pipe_index idx = { .head_tail = READ_ONCE(pipe->head_tail) };
	unsigned int writers = READ_ONCE(pipe->writers);

-	return !pipe_empty(head, tail) || !writers;
+	return !pipe_empty(idx.head, idx.tail) || !writers;
}

static inline unsigned int pipe_update_tail(struct pipe_inode_info *pipe,
···
		wake_next_reader = true;
		mutex_lock(&pipe->mutex);
	}
-	if (pipe_empty(pipe->head, pipe->tail))
+	if (pipe_is_empty(pipe))
		wake_next_reader = false;
	mutex_unlock(&pipe->mutex);
···
/* Done while waiting without holding the pipe lock - thus the READ_ONCE() */
static inline bool pipe_writable(const struct pipe_inode_info *pipe)
{
-	unsigned int head = READ_ONCE(pipe->head);
-	unsigned int tail = READ_ONCE(pipe->tail);
+	union pipe_index idx = { .head_tail = READ_ONCE(pipe->head_tail) };
	unsigned int max_usage = READ_ONCE(pipe->max_usage);

-	return !pipe_full(head, tail, max_usage) ||
+	return !pipe_full(idx.head, idx.tail, max_usage) ||
		!READ_ONCE(pipe->readers);
}
···
		kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
		wait_event_interruptible_exclusive(pipe->wr_wait, pipe_writable(pipe));
		mutex_lock(&pipe->mutex);
-		was_empty = pipe_empty(pipe->head, pipe->tail);
+		was_empty = pipe_is_empty(pipe);
		wake_next_writer = true;
	}
out:
-	if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))
+	if (pipe_is_full(pipe))
		wake_next_writer = false;
	mutex_unlock(&pipe->mutex);
···
static long pipe_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	struct pipe_inode_info *pipe = filp->private_data;
-	unsigned int count, head, tail, mask;
+	unsigned int count, head, tail;

	switch (cmd) {
	case FIONREAD:
···
		count = 0;
		head = pipe->head;
		tail = pipe->tail;
-		mask = pipe->ring_size - 1;

-		while (tail != head) {
-			count += pipe->bufs[tail & mask].len;
+		while (!pipe_empty(head, tail)) {
+			count += pipe_buf(pipe, tail)->len;
			tail++;
		}
		mutex_unlock(&pipe->mutex);
···
{
	__poll_t mask;
	struct pipe_inode_info *pipe = filp->private_data;
-	unsigned int head, tail;
+	union pipe_index idx;

	/* Epoll has some historical nasty semantics, this enables them */
	WRITE_ONCE(pipe->poll_usage, true);
···
	 * if something changes and you got it wrong, the poll
	 * table entry will wake you up and fix it.
	 */
-	head = READ_ONCE(pipe->head);
-	tail = READ_ONCE(pipe->tail);
+	idx.head_tail = READ_ONCE(pipe->head_tail);

	mask = 0;
	if (filp->f_mode & FMODE_READ) {
-		if (!pipe_empty(head, tail))
+		if (!pipe_empty(idx.head, idx.tail))
			mask |= EPOLLIN | EPOLLRDNORM;
		if (!pipe->writers && filp->f_pipe != pipe->w_counter)
			mask |= EPOLLHUP;
	}

	if (filp->f_mode & FMODE_WRITE) {
-		if (!pipe_full(head, tail, pipe->max_usage))
+		if (!pipe_full(idx.head, idx.tail, pipe->max_usage))
			mask |= EPOLLOUT | EPOLLWRNORM;
		/*
		 * Most Unices do not set EPOLLERR for FIFOs but on Linux they
···281281 if (entry->type + 1 != type) {282282 pr_err("Waiting for IPC type %d, got %d. Ignore.\n",283283 entry->type + 1, type);284284+ continue;284285 }285286286287 entry->response = kvzalloc(sz, KSMBD_DEFAULT_GFP);
+10-10
fs/splice.c
···
	int i;

	/* Work out how much data we can actually add into the pipe */
-	used = pipe_occupancy(pipe->head, pipe->tail);
+	used = pipe_buf_usage(pipe);
	npages = max_t(ssize_t, pipe->max_usage - used, 0);
	len = min_t(size_t, len, npages * PAGE_SIZE);
	npages = DIV_ROUND_UP(len, PAGE_SIZE);
···
		return -ERESTARTSYS;

repeat:
-	while (pipe_empty(pipe->head, pipe->tail)) {
+	while (pipe_is_empty(pipe)) {
		if (!pipe->writers)
			return 0;

···
		if (signal_pending(current))
			break;

-		while (pipe_empty(pipe->head, pipe->tail)) {
+		while (pipe_is_empty(pipe)) {
			ret = 0;
			if (!pipe->writers)
				goto out;
···
		return 0;

	/* Don't try to read more the pipe has space for. */
-	p_space = pipe->max_usage - pipe_occupancy(pipe->head, pipe->tail);
+	p_space = pipe->max_usage - pipe_buf_usage(pipe);
	len = min_t(size_t, len, p_space << PAGE_SHIFT);

	if (unlikely(len > MAX_RW_COUNT))
···
	more = sd->flags & SPLICE_F_MORE;
	sd->flags |= SPLICE_F_MORE;

-	WARN_ON_ONCE(!pipe_empty(pipe->head, pipe->tail));
+	WARN_ON_ONCE(!pipe_is_empty(pipe));

	while (len) {
		size_t read_len;
···
			send_sig(SIGPIPE, current, 0);
			return -EPIPE;
		}
-	if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage))
+	if (!pipe_is_full(pipe))
		return 0;
	if (flags & SPLICE_F_NONBLOCK)
		return -EAGAIN;
···
	 * Check the pipe occupancy without the inode lock first. This function
	 * is speculative anyways, so missing one is ok.
	 */
-	if (!pipe_empty(pipe->head, pipe->tail))
+	if (!pipe_is_empty(pipe))
		return 0;

	ret = 0;
	pipe_lock(pipe);

-	while (pipe_empty(pipe->head, pipe->tail)) {
+	while (pipe_is_empty(pipe)) {
		if (signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
···
	 * Check pipe occupancy without the inode lock first. This function
	 * is speculative anyways, so missing one is ok.
	 */
-	if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage))
+	if (!pipe_is_full(pipe))
		return 0;

	ret = 0;
	pipe_lock(pipe);

-	while (pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {
+	while (pipe_is_full(pipe)) {
		if (!pipe->readers) {
			send_sig(SIGPIPE, current, 0);
			ret = -EPIPE;
···
/*
 * Locking orders
 *
- * xfs_buf_ioacct_inc:
- * xfs_buf_ioacct_dec:
- *	b_sema (caller holds)
- *	  b_lock
- *
 * xfs_buf_stale:
 *	b_sema (caller holds)
 *	  b_lock
···
}

/*
- * Bump the I/O in flight count on the buftarg if we haven't yet done so for
- * this buffer. The count is incremented once per buffer (per hold cycle)
- * because the corresponding decrement is deferred to buffer release. Buffers
- * can undergo I/O multiple times in a hold-release cycle and per buffer I/O
- * tracking adds unnecessary overhead. This is used for sychronization purposes
- * with unmount (see xfs_buftarg_drain()), so all we really need is a count of
- * in-flight buffers.
- *
- * Buffers that are never released (e.g., superblock, iclog buffers) must set
- * the XBF_NO_IOACCT flag before I/O submission. Otherwise, the buftarg count
- * never reaches zero and unmount hangs indefinitely.
- */
-static inline void
-xfs_buf_ioacct_inc(
-	struct xfs_buf	*bp)
-{
-	if (bp->b_flags & XBF_NO_IOACCT)
-		return;
-
-	ASSERT(bp->b_flags & XBF_ASYNC);
-	spin_lock(&bp->b_lock);
-	if (!(bp->b_state & XFS_BSTATE_IN_FLIGHT)) {
-		bp->b_state |= XFS_BSTATE_IN_FLIGHT;
-		percpu_counter_inc(&bp->b_target->bt_io_count);
-	}
-	spin_unlock(&bp->b_lock);
-}
-
-/*
- * Clear the in-flight state on a buffer about to be released to the LRU or
- * freed and unaccount from the buftarg.
- */
-static inline void
-__xfs_buf_ioacct_dec(
-	struct xfs_buf	*bp)
-{
-	lockdep_assert_held(&bp->b_lock);
-
-	if (bp->b_state & XFS_BSTATE_IN_FLIGHT) {
-		bp->b_state &= ~XFS_BSTATE_IN_FLIGHT;
-		percpu_counter_dec(&bp->b_target->bt_io_count);
-	}
-}
-
-/*
 * When we mark a buffer stale, we remove the buffer from the LRU and clear the
 * b_lru_ref count so that the buffer is freed immediately when the buffer
 * reference count falls to zero. If the buffer is already on the LRU, we need
···
	 */
	bp->b_flags &= ~_XBF_DELWRI_Q;

-	/*
-	 * Once the buffer is marked stale and unlocked, a subsequent lookup
-	 * could reset b_flags. There is no guarantee that the buffer is
-	 * unaccounted (released to LRU) before that occurs. Drop in-flight
-	 * status now to preserve accounting consistency.
-	 */
	spin_lock(&bp->b_lock);
-	__xfs_buf_ioacct_dec(bp);
-
	atomic_set(&bp->b_lru_ref, 0);
	if (!(bp->b_state & XFS_BSTATE_DISPOSE) &&
	    (list_lru_del_obj(&bp->b_target->bt_lru, &bp->b_lru)))
···
int
_xfs_buf_read(
-	struct xfs_buf	*bp,
-	xfs_buf_flags_t	flags)
+	struct xfs_buf	*bp)
{
-	ASSERT(!(flags & XBF_WRITE));
	ASSERT(bp->b_maps[0].bm_bn != XFS_BUF_DADDR_NULL);

	bp->b_flags &= ~(XBF_WRITE | XBF_ASYNC | XBF_READ_AHEAD | XBF_DONE);
-	bp->b_flags |= flags & (XBF_READ | XBF_ASYNC | XBF_READ_AHEAD);
-
+	bp->b_flags |= XBF_READ;
	xfs_buf_submit(bp);
-	if (flags & XBF_ASYNC)
-		return 0;
	return xfs_buf_iowait(bp);
}
···
	struct xfs_buf	*bp;
	int		error;

+	ASSERT(!(flags & (XBF_WRITE | XBF_ASYNC | XBF_READ_AHEAD)));
+
	flags |= XBF_READ;
	*bpp = NULL;
···
	/* Initiate the buffer read and wait. */
	XFS_STATS_INC(target->bt_mount, xb_get_read);
	bp->b_ops = ops;
-	error = _xfs_buf_read(bp, flags);
-
-	/* Readahead iodone already dropped the buffer, so exit. */
-	if (flags & XBF_ASYNC)
-		return 0;
+	error = _xfs_buf_read(bp);
	} else {
		/* Buffer already read; all we need to do is check it. */
		error = xfs_buf_reverify(bp, ops);
-
-		/* Readahead already finished; drop the buffer and exit. */
-		if (flags & XBF_ASYNC) {
-			xfs_buf_relse(bp);
-			return 0;
-		}

		/* We do not want read in the flags */
		bp->b_flags &= ~XBF_READ;
···
	int			nmaps,
	const struct xfs_buf_ops *ops)
{
+	const xfs_buf_flags_t	flags = XBF_READ | XBF_ASYNC | XBF_READ_AHEAD;
	struct xfs_buf		*bp;

	/*
···
	if (xfs_buftarg_is_mem(target))
		return;

-	xfs_buf_read_map(target, map, nmaps,
-		     XBF_TRYLOCK | XBF_ASYNC | XBF_READ_AHEAD, &bp, ops,
-		     __this_address);
+	if (xfs_buf_get_map(target, map, nmaps, flags | XBF_TRYLOCK, &bp))
+		return;
+	trace_xfs_buf_readahead(bp, 0, _RET_IP_);
+
+	if (bp->b_flags & XBF_DONE) {
+		xfs_buf_reverify(bp, ops);
+		xfs_buf_relse(bp);
+		return;
+	}
+	XFS_STATS_INC(target->bt_mount, xb_get_read);
+	bp->b_ops = ops;
+	bp->b_flags &= ~(XBF_WRITE | XBF_DONE);
+	bp->b_flags |= flags;
+	percpu_counter_inc(&target->bt_readahead_count);
+	xfs_buf_submit(bp);
}

/*
···
	struct xfs_buf		*bp;
	DEFINE_SINGLE_BUF_MAP(map, XFS_BUF_DADDR_NULL, numblks);

+	/* there are currently no valid flags for xfs_buf_get_uncached */
+	ASSERT(flags == 0);
+
	*bpp = NULL;

-	/* flags might contain irrelevant bits, pass only what we care about */
-	error = _xfs_buf_alloc(target, &map, 1, flags & XBF_NO_IOACCT, &bp);
+	error = _xfs_buf_alloc(target, &map, 1, flags, &bp);
	if (error)
		return error;
···
		spin_unlock(&bp->b_lock);
		return;
	}
-	__xfs_buf_ioacct_dec(bp);
	spin_unlock(&bp->b_lock);
	xfs_buf_free(bp);
}
···
	spin_lock(&bp->b_lock);
	ASSERT(bp->b_hold >= 1);
	if (bp->b_hold > 1) {
-		/*
-		 * Drop the in-flight state if the buffer is already on the LRU
-		 * and it holds the only reference. This is racy because we
-		 * haven't acquired the pag lock, but the use of _XBF_IN_FLIGHT
-		 * ensures the decrement occurs only once per-buf.
-		 */
-		if (--bp->b_hold == 1 && !list_empty(&bp->b_lru))
-			__xfs_buf_ioacct_dec(bp);
+		bp->b_hold--;
		goto out_unlock;
	}

	/* we are asked to drop the last reference */
-	__xfs_buf_ioacct_dec(bp);
-	if (!(bp->b_flags & XBF_STALE) && atomic_read(&bp->b_lru_ref)) {
+	if (atomic_read(&bp->b_lru_ref)) {
		/*
		 * If the buffer is added to the LRU, keep the reference to the
		 * buffer for the LRU and clear the (now stale) dispose list
···
resubmit:
	xfs_buf_ioerror(bp, 0);
	bp->b_flags |= (XBF_DONE | XBF_WRITE_FAIL);
+	reinit_completion(&bp->b_iowait);
	xfs_buf_submit(bp);
	return true;
out_stale:
···
	return false;
}

-static void
-xfs_buf_ioend(
+/* returns false if the caller needs to resubmit the I/O, else true */
+static bool
+__xfs_buf_ioend(
	struct xfs_buf	*bp)
{
	trace_xfs_buf_iodone(bp, _RET_IP_);
···
		bp->b_ops->verify_read(bp);
		if (!bp->b_error)
			bp->b_flags |= XBF_DONE;
+		if (bp->b_flags & XBF_READ_AHEAD)
+			percpu_counter_dec(&bp->b_target->bt_readahead_count);
	} else {
		if (!bp->b_error) {
			bp->b_flags &= ~XBF_WRITE_FAIL;
···
	}

	if (unlikely(bp->b_error) && xfs_buf_ioend_handle_error(bp))
-		return;
+		return false;

	/* clear the retry state */
	bp->b_last_error = 0;
···

	bp->b_flags &= ~(XBF_READ | XBF_WRITE | XBF_READ_AHEAD |
			 _XBF_LOGRECOVERY);
+	return true;
+}

+static void
+xfs_buf_ioend(
+	struct xfs_buf	*bp)
+{
+	if (!__xfs_buf_ioend(bp))
+		return;
	if (bp->b_flags & XBF_ASYNC)
		xfs_buf_relse(bp);
	else
···
	struct xfs_buf		*bp =
		container_of(work, struct xfs_buf, b_ioend_work);

-	xfs_buf_ioend(bp);
-}
-
-static void
-xfs_buf_ioend_async(
-	struct xfs_buf	*bp)
-{
-	INIT_WORK(&bp->b_ioend_work, xfs_buf_ioend_work);
-	queue_work(bp->b_mount->m_buf_workqueue, &bp->b_ioend_work);
+	if (__xfs_buf_ioend(bp))
+		xfs_buf_relse(bp);
}

void
···
	    XFS_TEST_ERROR(false, bp->b_mount, XFS_ERRTAG_BUF_IOERROR))
		xfs_buf_ioerror(bp, -EIO);

-	xfs_buf_ioend_async(bp);
+	if (bp->b_flags & XBF_ASYNC) {
+		INIT_WORK(&bp->b_ioend_work, xfs_buf_ioend_work);
+		queue_work(bp->b_mount->m_buf_workqueue, &bp->b_ioend_work);
+	} else {
+		complete(&bp->b_iowait);
+	}
+
	bio_put(bio);
}
···
{
	ASSERT(!(bp->b_flags & XBF_ASYNC));

-	trace_xfs_buf_iowait(bp, _RET_IP_);
-	wait_for_completion(&bp->b_iowait);
-	trace_xfs_buf_iowait_done(bp, _RET_IP_);
+	do {
+		trace_xfs_buf_iowait(bp, _RET_IP_);
+		wait_for_completion(&bp->b_iowait);
+		trace_xfs_buf_iowait_done(bp, _RET_IP_);
+	} while (!__xfs_buf_ioend(bp));

	return bp->b_error;
}
···
	 * left over from previous use of the buffer (e.g. failed readahead).
	 */
	bp->b_error = 0;
-
-	if (bp->b_flags & XBF_ASYNC)
-		xfs_buf_ioacct_inc(bp);

	if ((bp->b_flags & XBF_WRITE) && !xfs_buf_verify_write(bp)) {
		xfs_force_shutdown(bp->b_mount, SHUTDOWN_CORRUPT_INCORE);
···
	struct xfs_buftarg	*btp)
{
	/*
-	 * First wait on the buftarg I/O count for all in-flight buffers to be
-	 * released. This is critical as new buffers do not make the LRU until
-	 * they are released.
+	 * First wait for all in-flight readahead buffers to be released. This is
+	 * critical as new buffers do not make the LRU until they are released.
	 *
	 * Next, flush the buffer workqueue to ensure all completion processing
	 * has finished. Just waiting on buffer locks is not sufficient for
···
	 * all reference counts have been dropped before we start walking the
	 * LRU list.
	 */
-	while (percpu_counter_sum(&btp->bt_io_count))
+	while (percpu_counter_sum(&btp->bt_readahead_count))
		delay(100);
	flush_workqueue(btp->bt_mount->m_buf_workqueue);
}
···
	struct xfs_buftarg	*btp)
{
	shrinker_free(btp->bt_shrinker);
-	ASSERT(percpu_counter_sum(&btp->bt_io_count) == 0);
-	percpu_counter_destroy(&btp->bt_io_count);
+	ASSERT(percpu_counter_sum(&btp->bt_readahead_count) == 0);
+	percpu_counter_destroy(&btp->bt_readahead_count);
	list_lru_destroy(&btp->bt_lru);
}
···

	if (list_lru_init(&btp->bt_lru))
		return -ENOMEM;
-	if (percpu_counter_init(&btp->bt_io_count, 0, GFP_KERNEL))
+	if (percpu_counter_init(&btp->bt_readahead_count, 0, GFP_KERNEL))
		goto out_destroy_lru;

	btp->bt_shrinker =
···
	return 0;

out_destroy_io_count:
-	percpu_counter_destroy(&btp->bt_io_count);
+	percpu_counter_destroy(&btp->bt_readahead_count);
out_destroy_lru:
	list_lru_destroy(&btp->bt_lru);
	return -ENOMEM;
···1451145114521452 trace_xfs_read_fault(ip, order);1453145314541454- ret = filemap_fsnotify_fault(vmf);14551455- if (unlikely(ret))14561456- return ret;14571454 xfs_ilock(ip, XFS_MMAPLOCK_SHARED);14581455 ret = xfs_dax_fault_locked(vmf, order, false);14591456 xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);···14791482 vm_fault_t ret;1480148314811484 trace_xfs_write_fault(ip, order);14821482- /*14831483- * Usually we get here from ->page_mkwrite callback but in case of DAX14841484- * we will get here also for ordinary write fault. Handle HSM14851485- * notifications for that case.14861486- */14871487- if (IS_DAX(inode)) {14881488- ret = filemap_fsnotify_fault(vmf);14891489- if (unlikely(ret))14901490- return ret;14911491- }1492148514931486 sb_start_pagefault(inode->i_sb);14941487 file_update_time(vmf->vma->vm_file);
+1-1
fs/xfs/xfs_log_recover.c
···33803380 */33813381 xfs_buf_lock(bp);33823382 xfs_buf_hold(bp);33833383- error = _xfs_buf_read(bp, XBF_READ);33833383+ error = _xfs_buf_read(bp);33843384 if (error) {33853385 if (!xlog_is_shutdown(log)) {33863386 xfs_buf_ioerror_alert(bp, __this_address);
+2-5
fs/xfs/xfs_mount.c
···181181182182 /*183183 * Allocate a (locked) buffer to hold the superblock. This will be kept184184- * around at all times to optimize access to the superblock. Therefore,185185- * set XBF_NO_IOACCT to make sure it doesn't hold the buftarg count186186- * elevated.184184+ * around at all times to optimize access to the superblock.187185 */188186reread:189187 error = xfs_buf_read_uncached(mp->m_ddev_targp, XFS_SB_DADDR,190190- BTOBB(sector_size), XBF_NO_IOACCT, &bp,191191- buf_ops);188188+ BTOBB(sector_size), 0, &bp, buf_ops);192189 if (error) {193190 if (loud)194191 xfs_warn(mp, "SB validate failed with error %d.", error);
+1-1
fs/xfs/xfs_rtalloc.c
···1407140714081408 /* m_blkbb_log is not set up yet */14091409 error = xfs_buf_read_uncached(mp->m_rtdev_targp, XFS_RTSB_DADDR,14101410- mp->m_sb.sb_blocksize >> BBSHIFT, XBF_NO_IOACCT, &bp,14101410+ mp->m_sb.sb_blocksize >> BBSHIFT, 0, &bp,14111411 &xfs_rtsb_buf_ops);14121412 if (error) {14131413 xfs_warn(mp, "rt sb validate failed with error %d.", error);
+1
fs/xfs/xfs_trace.h
···593593DEFINE_BUF_FLAGS_EVENT(xfs_buf_find);594594DEFINE_BUF_FLAGS_EVENT(xfs_buf_get);595595DEFINE_BUF_FLAGS_EVENT(xfs_buf_read);596596+DEFINE_BUF_FLAGS_EVENT(xfs_buf_readahead);596597597598TRACE_EVENT(xfs_buf_ioerror,598599 TP_PROTO(struct xfs_buf *bp, int error, xfs_failaddr_t caller_ip),
+13-5
include/linux/blk-mq.h
···
typedef __u32 __bitwise req_flags_t;

/* Keep rqf_name[] in sync with the definitions below */
-enum {
+enum rqf_flags {
	/* drive already may have started this one */
	__RQF_STARTED,
	/* request for flush sequence */
···
	return rq->rq_flags & RQF_RESV;
}

-/*
+/**
+ * blk_mq_add_to_batch() - add a request to the completion batch
+ * @req: The request to add to batch
+ * @iob: The batch to add the request to
+ * @is_error: Specify true if the request failed with an error
+ * @complete: The completion handler for the request
+ *
 * Batched completions only work when there is no I/O error and no special
 * ->end_io handler.
+ *
+ * Return: true when the request was added to the batch, otherwise false
 */
static inline bool blk_mq_add_to_batch(struct request *req,
-		struct io_comp_batch *iob, int ioerror,
+		struct io_comp_batch *iob, bool is_error,
		void (*complete)(struct io_comp_batch *))
{
	/*
···
	 * 1) No batch container
	 * 2) Has scheduler data attached
	 * 3) Not a passthrough request and end_io set
-	 * 4) Not a passthrough request and an ioerror
+	 * 4) Not a passthrough request and failed with an error
	 */
	if (!iob)
		return false;
···
	if (!blk_rq_is_passthrough(req)) {
		if (req->end_io)
			return false;
-		if (ioerror < 0)
+		if (is_error)
			return false;
	}

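The int→bool conversion above replaces the `ioerror < 0` test with an explicit `is_error` flag that callers compute. A simplified model of the gating order (the struct fields here are invented stand-ins for the real request state, not the kernel's layout):

```c
#include <stdbool.h>

struct req_demo {
	bool has_sched_data;	/* stands in for attached scheduler data */
	bool is_passthrough;	/* stands in for blk_rq_is_passthrough() */
	bool has_end_io;	/* stands in for req->end_io != NULL */
};

/*
 * Returns true when a request may join the completion batch.
 * No batch container or attached scheduler data forces the slow
 * path; for normal (non-passthrough) requests, a private end_io
 * handler or an error does too.
 */
static bool can_batch_demo(const struct req_demo *req, bool have_batch,
			   bool is_error)
{
	if (!have_batch)
		return false;
	if (req->has_sched_data)
		return false;
	if (!req->is_passthrough) {
		if (req->has_end_io)
			return false;
		if (is_error)
			return false;
	}
	return true;
}
```

Note that passthrough requests still batch even on error, matching the comment in the diff: the error check applies only to the non-passthrough branch.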
···171171}172172173173/*174174+ * fsnotify_mmap_perm - permission hook before mmap of file range175175+ */176176+static inline int fsnotify_mmap_perm(struct file *file, int prot,177177+ const loff_t off, size_t len)178178+{179179+ /*180180+ * mmap() generates only pre-content events.181181+ */182182+ if (!file || likely(!FMODE_FSNOTIFY_HSM(file->f_mode)))183183+ return 0;184184+185185+ return fsnotify_pre_content(&file->f_path, &off, len);186186+}187187+188188+/*174189 * fsnotify_truncate_perm - permission hook before file truncate175190 */176191static inline int fsnotify_truncate_perm(const struct path *path, loff_t length)···234219235220static inline int fsnotify_file_area_perm(struct file *file, int perm_mask,236221 const loff_t *ppos, size_t count)222222+{223223+ return 0;224224+}225225+226226+static inline int fsnotify_mmap_perm(struct file *file, int prot,227227+ const loff_t off, size_t len)237228{238229 return 0;239230}
+5
include/linux/hugetlb.h
···682682683683int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);684684int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);685685+void wait_for_freed_hugetlb_folios(void);685686struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,686687 unsigned long addr, bool cow_from_owner);687688struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,···10671066 unsigned long end_pfn)10681067{10691068 return 0;10691069+}10701070+10711071+static inline void wait_for_freed_hugetlb_folios(void)10721072+{10701073}1071107410721075static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+1-1
include/linux/log2.h
···4141 * *not* considered a power of two.4242 * Return: true if @n is a power of 2, otherwise false.4343 */4444-static inline __attribute__((const))4444+static __always_inline __attribute__((const))4545bool is_power_of_2(unsigned long n)4646{4747 return (n != 0 && ((n & (n - 1)) == 0));
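The `__always_inline` change above only affects inlining; the underlying predicate is unchanged. A power of two has exactly one bit set, so `n & (n - 1)` clears the lowest set bit and yields zero only when no other bits remain; the `n != 0` guard excludes zero. A standalone illustration of the same trick:

```c
#include <stdbool.h>

/*
 * Mirrors the predicate from include/linux/log2.h: true iff n has
 * exactly one bit set. Subtracting 1 flips the lowest set bit and
 * every bit below it, so the AND is zero exactly when n had a
 * single set bit.
 */
static bool is_power_of_2_demo(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}
```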
-1
include/linux/mm.h
···34203420extern vm_fault_t filemap_map_pages(struct vm_fault *vmf,34213421 pgoff_t start_pgoff, pgoff_t end_pgoff);34223422extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf);34233423-extern vm_fault_t filemap_fsnotify_fault(struct vm_fault *vmf);3424342334253424extern unsigned long stack_guard_gap;34263425/* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
+76-22
include/linux/pipe_fs_i.h
···
	unsigned long private;
};

+/*
+ * Really only alpha needs 32-bit fields, but
+ * might as well do it for 64-bit architectures
+ * since that's what we've historically done,
+ * and it makes 'head_tail' always be a simple
+ * 'unsigned long'.
+ */
+#ifdef CONFIG_64BIT
+typedef unsigned int pipe_index_t;
+#else
+typedef unsigned short pipe_index_t;
+#endif
+
+/*
+ * We have to declare this outside 'struct pipe_inode_info',
+ * but then we can't use 'union pipe_index' for an anonymous
+ * union, so we end up having to duplicate this declaration
+ * below. Annoying.
+ */
+union pipe_index {
+	unsigned long head_tail;
+	struct {
+		pipe_index_t head;
+		pipe_index_t tail;
+	};
+};
+
/**
 * struct pipe_inode_info - a linux kernel pipe
 * @mutex: mutex protecting the whole thing
···
 * @wr_wait: writer wait point in case of full pipe
 * @head: The point of buffer production
 * @tail: The point of buffer consumption
+ * @head_tail: unsigned long union of @head and @tail
 * @note_loss: The next read() should insert a data-lost message
 * @max_usage: The maximum number of slots that may be used in the ring
 * @ring_size: total number of buffers (should be a power of 2)
···
struct pipe_inode_info {
	struct mutex mutex;
	wait_queue_head_t rd_wait, wr_wait;
-	unsigned int head;
-	unsigned int tail;
+
+	/* This has to match the 'union pipe_index' above */
+	union {
+		unsigned long head_tail;
+		struct {
+			pipe_index_t head;
+			pipe_index_t tail;
+		};
+	};
+
	unsigned int max_usage;
	unsigned int ring_size;
	unsigned int nr_accounted;
···
}

/**
- * pipe_empty - Return true if the pipe is empty
- * @head: The pipe ring head pointer
- * @tail: The pipe ring tail pointer
- */
-static inline bool pipe_empty(unsigned int head, unsigned int tail)
-{
-	return head == tail;
-}
-
-/**
 * pipe_occupancy - Return number of slots used in the pipe
 * @head: The pipe ring head pointer
 * @tail: The pipe ring tail pointer
 */
static inline unsigned int pipe_occupancy(unsigned int head, unsigned int tail)
{
-	return head - tail;
+	return (pipe_index_t)(head - tail);
+}
+
+/**
+ * pipe_empty - Return true if the pipe is empty
+ * @head: The pipe ring head pointer
+ * @tail: The pipe ring tail pointer
+ */
+static inline bool pipe_empty(unsigned int head, unsigned int tail)
+{
+	return !pipe_occupancy(head, tail);
}

/**
···
			      unsigned int limit)
{
	return pipe_occupancy(head, tail) >= limit;
+}
+
+/**
+ * pipe_is_full - Return true if the pipe is full
+ * @pipe: the pipe
+ */
+static inline bool pipe_is_full(const struct pipe_inode_info *pipe)
+{
+	return pipe_full(pipe->head, pipe->tail, pipe->max_usage);
+}
+
+/**
+ * pipe_is_empty - Return true if the pipe is empty
+ * @pipe: the pipe
+ */
+static inline bool pipe_is_empty(const struct pipe_inode_info *pipe)
+{
+	return pipe_empty(pipe->head, pipe->tail);
+}
+
+/**
+ * pipe_buf_usage - Return how many pipe buffers are in use
+ * @pipe: the pipe
+ */
+static inline unsigned int pipe_buf_usage(const struct pipe_inode_info *pipe)
+{
+	return pipe_occupancy(pipe->head, pipe->tail);
}

/**
···
	if (!buf->ops->try_steal)
		return false;
	return buf->ops->try_steal(pipe, buf);
-}
-
-static inline void pipe_discard_from(struct pipe_inode_info *pipe,
-				     unsigned int old_head)
-{
-	unsigned int mask = pipe->ring_size - 1;
-
-	while (pipe->head > old_head)
-		pipe_buf_release(pipe, &pipe->bufs[--pipe->head & mask]);
}

/* Differs from PIPE_BUF in that PIPE_SIZE is the length of the actual
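The pipe_fs_i.h change above relies on two properties: head and tail share one machine word, so a lockless reader can snapshot both coherently with a single `READ_ONCE(pipe->head_tail)`, and `pipe_occupancy()` truncates the subtraction to `pipe_index_t` so it stays correct after both counters wrap. A userspace sketch of the 16-bit (32-bit architecture) case — all names here are illustrative, not the kernel's:

```c
#include <stdint.h>

typedef uint16_t pipe_index_demo_t;	/* the !CONFIG_64BIT case */

union pipe_index_demo {
	uint32_t head_tail;	/* read/written as one word */
	struct {
		pipe_index_demo_t head;
		pipe_index_demo_t tail;
	};
};

/* Pack head and tail into the single word a snapshot reader sees. */
static uint32_t snapshot_demo(pipe_index_demo_t head, pipe_index_demo_t tail)
{
	union pipe_index_demo idx;

	idx.head = head;
	idx.tail = tail;
	return idx.head_tail;
}

/* Recover head from a snapshot word, as the lockless readers do. */
static unsigned int head_from_snapshot(uint32_t head_tail)
{
	union pipe_index_demo idx = { .head_tail = head_tail };

	return idx.head;
}

/*
 * Occupancy in the narrow index type: head - tail stays correct even
 * after both counters wrap, because the subtraction is taken modulo
 * 2^16 by the cast.
 */
static unsigned int occupancy_demo(unsigned int head, unsigned int tail)
{
	return (pipe_index_demo_t)(head - tail);
}
```

The wraparound case is the interesting one: with head at 2 after wrapping and tail still at 65534, the truncated subtraction yields an occupancy of 4 rather than a huge negative-turned-positive value.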
+3
include/linux/platform_profile.h
···3333 * @probe: Callback to setup choices available to the new class device. These3434 * choices will only be enforced when setting a new profile, not when3535 * getting the current one.3636+ * @hidden_choices: Callback to setup choices that are not visible to the user3737+ * but can be set by the driver.3638 * @profile_get: Callback that will be called when showing the current platform3739 * profile in sysfs.3840 * @profile_set: Callback that will be called when storing a new platform···4240 */4341struct platform_profile_ops {4442 int (*probe)(void *drvdata, unsigned long *choices);4343+ int (*hidden_choices)(void *drvdata, unsigned long *choices);4544 int (*profile_get)(struct device *dev, enum platform_profile_option *profile);4645 int (*profile_set)(struct device *dev, enum platform_profile_option profile);4746};
+1-1
include/linux/sched.h
···17011701#define PF_USED_MATH 0x00002000 /* If unset the fpu must be initialized before use */17021702#define PF_USER_WORKER 0x00004000 /* Kernel thread cloned from userspace thread */17031703#define PF_NOFREEZE 0x00008000 /* This thread should not be frozen */17041704-#define PF__HOLE__00010000 0x0001000017041704+#define PF_KCOMPACTD 0x00010000 /* I am kcompactd */17051705#define PF_KSWAPD 0x00020000 /* I am kswapd */17061706#define PF_MEMALLOC_NOFS 0x00040000 /* All allocations inherit GFP_NOFS. See memalloc_nfs_save() */17071707#define PF_MEMALLOC_NOIO 0x00080000 /* All allocations inherit GFP_NOIO. See memalloc_noio_save() */
···1261126112621262/* mixer control */12631263struct soc_mixer_control {12641264- int min, max, platform_max;12641264+ /* Minimum and maximum specified as written to the hardware */12651265+ int min, max;12661266+ /* Limited maximum value specified as presented through the control */12671267+ int platform_max;12651268 int reg, rreg;12661269 unsigned int shift, rshift;12671270 unsigned int sign_bit;
+1-1
init/Kconfig
···19681968 depends on !MODVERSIONS || GENDWARFKSYMS19691969 depends on !GCC_PLUGIN_RANDSTRUCT19701970 depends on !RANDSTRUCT19711971- depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE19711971+ depends on !DEBUG_INFO_BTF || (PAHOLE_HAS_LANG_EXCLUDE && !LTO)19721972 depends on !CFI_CLANG || HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC19731973 select CFI_ICALL_NORMALIZE_INTEGERS if CFI_CLANG19741974 depends on !CALL_PADDING || RUSTC_VERSION >= 108100
+3-4
io_uring/rw.c
···560560 if (kiocb->ki_flags & IOCB_WRITE)561561 io_req_end_write(req);562562 if (unlikely(res != req->cqe.res)) {563563- if (res == -EAGAIN && io_rw_should_reissue(req)) {563563+ if (res == -EAGAIN && io_rw_should_reissue(req))564564 req->flags |= REQ_F_REISSUE | REQ_F_BL_NO_RECYCLE;565565- return;566566- }567567- req->cqe.res = res;565565+ else566566+ req->cqe.res = res;568567 }569568570569 /* order with io_iopoll_complete() checking ->iopoll_completed */
···991010#ifdef CONFIG_IRQ_TIME_ACCOUNTING11111212-DEFINE_STATIC_KEY_FALSE(sched_clock_irqtime);1313-1412/*1513 * There are no locks covering percpu hardirq/softirq time.1614 * They are only modified in vtime_account, on corresponding CPU···2224 */2325DEFINE_PER_CPU(struct irqtime, cpu_irqtime);24262727+int sched_clock_irqtime;2828+2529void enable_sched_clock_irqtime(void)2630{2727- static_branch_enable(&sched_clock_irqtime);3131+ sched_clock_irqtime = 1;2832}29333034void disable_sched_clock_irqtime(void)3135{3232- static_branch_disable(&sched_clock_irqtime);3636+ sched_clock_irqtime = 0;3337}34383539static void irqtime_account_delta(struct irqtime *irqtime, u64 delta,
+1-1
kernel/sched/deadline.c
···31893189 * value smaller than the currently allocated bandwidth in31903190 * any of the root_domains.31913191 */31923192- for_each_possible_cpu(cpu) {31923192+ for_each_online_cpu(cpu) {31933193 rcu_read_lock_sched();3194319431953195 if (dl_bw_visited(cpu, gen))
+3
kernel/sched/ext.c
···64226422__bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,64236423 u64 wake_flags, bool *is_idle)64246424{64256425+ if (!ops_cpu_valid(prev_cpu, NULL))64266426+ goto prev_cpu;64276427+64256428 if (!check_builtin_idle_enabled())64266429 goto prev_cpu;64276430
···32593259};3260326032613261DECLARE_PER_CPU(struct irqtime, cpu_irqtime);32623262-DECLARE_STATIC_KEY_FALSE(sched_clock_irqtime);32623262+extern int sched_clock_irqtime;3263326332643264static inline int irqtime_enabled(void)32653265{32663266- return static_branch_likely(&sched_clock_irqtime);32663266+ return sched_clock_irqtime;32673267}3268326832693269/*
+18-6
kernel/trace/trace_events_hist.c
···56895689 guard(mutex)(&event_mutex);5690569056915691 event_file = event_file_data(file);56925692- if (!event_file)56935693- return -ENODEV;56925692+ if (!event_file) {56935693+ ret = -ENODEV;56945694+ goto err;56955695+ }5694569656955697 hist_file = kzalloc(sizeof(*hist_file), GFP_KERNEL);56965696- if (!hist_file)56975697- return -ENOMEM;56985698+ if (!hist_file) {56995699+ ret = -ENOMEM;57005700+ goto err;57015701+ }5698570256995703 hist_file->file = file;57005704 hist_file->last_act = get_hist_hit_count(event_file);···57065702 /* Clear private_data to avoid warning in single_open() */57075703 file->private_data = NULL;57085704 ret = single_open(file, hist_show, hist_file);57095709- if (ret)57055705+ if (ret) {57105706 kfree(hist_file);57075707+ goto err;57085708+ }5711570957105710+ return 0;57115711+err:57125712+ tracing_release_file_tr(inode, file);57125713 return ret;57135714}57145715···5988597959895980 /* Clear private_data to avoid warning in single_open() */59905981 file->private_data = NULL;59915991- return single_open(file, hist_debug_show, file);59825982+ ret = single_open(file, hist_debug_show, file);59835983+ if (ret)59845984+ tracing_release_file_tr(inode, file);59855985+ return ret;59925986}5993598759945988const struct file_operations event_hist_debug_fops = {
+20
kernel/trace/trace_fprobe.c
···10491049 if (*is_return)10501050 return 0;1051105110521052+ if (is_tracepoint) {10531053+ tmp = *symbol;10541054+ while (*tmp && (isalnum(*tmp) || *tmp == '_'))10551055+ tmp++;10561056+ if (*tmp) {10571057+ /* find a wrong character. */10581058+ trace_probe_log_err(tmp - *symbol, BAD_TP_NAME);10591059+ kfree(*symbol);10601060+ *symbol = NULL;10611061+ return -EINVAL;10621062+ }10631063+ }10641064+10521065 /* If there is $retval, this should be a return fprobe. */10531066 for (i = 2; i < argc; i++) {10541067 tmp = strstr(argv[i], "$retval");···10691056 if (is_tracepoint) {10701057 trace_probe_log_set_index(i);10711058 trace_probe_log_err(tmp - argv[i], RETVAL_ON_PROBE);10591059+ kfree(*symbol);10601060+ *symbol = NULL;10721061 return -EINVAL;10731062 }10741063 *is_return = true;···12301215 if (is_return && tf->tp.entry_arg) {12311216 tf->fp.entry_handler = trace_fprobe_entry_handler;12321217 tf->fp.entry_data_size = traceprobe_get_entry_data_size(&tf->tp);12181218+ if (ALIGN(tf->fp.entry_data_size, sizeof(long)) > MAX_FPROBE_DATA_SIZE) {12191219+ trace_probe_log_set_index(2);12201220+ trace_probe_log_err(0, TOO_MANY_EARGS);12211221+ return -E2BIG;12221222+ }12331223 }1234122412351225 ret = traceprobe_set_print_fmt(&tf->tp,
+3-2
kernel/trace/trace_probe.h
···3636#define MAX_BTF_ARGS_LEN 1283737#define MAX_DENTRY_ARGS_LEN 2563838#define MAX_STRING_SIZE PATH_MAX3939-#define MAX_ARG_BUF_LEN (MAX_TRACE_ARGS * MAX_ARG_NAME_LEN)40394140/* Reserved field names */4241#define FIELD_STRING_IP "__probe_ip"···480481 C(NON_UNIQ_SYMBOL, "The symbol is not unique"), \481482 C(BAD_RETPROBE, "Retprobe address must be an function entry"), \482483 C(NO_TRACEPOINT, "Tracepoint is not found"), \484484+ C(BAD_TP_NAME, "Invalid character in tracepoint name"),\483485 C(BAD_ADDR_SUFFIX, "Invalid probed address suffix"), \484486 C(NO_GROUP_NAME, "Group name is not specified"), \485487 C(GROUP_TOO_LONG, "Group name is too long"), \···544544 C(NO_BTF_FIELD, "This field is not found."), \545545 C(BAD_BTF_TID, "Failed to get BTF type info."),\546546 C(BAD_TYPE4STR, "This type does not fit for string."),\547547- C(NEED_STRING_TYPE, "$comm and immediate-string only accepts string type"),547547+ C(NEED_STRING_TYPE, "$comm and immediate-string only accepts string type"),\548548+ C(TOO_MANY_EARGS, "Too many entry arguments specified"),548549549550#undef C550551#define C(a, b) TP_ERR_##a
+1-1
lib/Kconfig.debug
···21032103 reallocated, catching possible invalid pointers to the skb.2104210421052105 For more information, check21062106- Documentation/dev-tools/fault-injection/fault-injection.rst21062106+ Documentation/fault-injection/fault-injection.rst2107210721082108config FAULT_INJECTION_CONFIGFS21092109 bool "Configfs interface for fault-injection capabilities"
+3
mm/compaction.c
···31813181 long default_timeout = msecs_to_jiffies(HPAGE_FRAG_CHECK_INTERVAL_MSEC);31823182 long timeout = default_timeout;3183318331843184+ current->flags |= PF_KCOMPACTD;31843185 set_freezable();3185318631863187 pgdat->kcompactd_max_order = 0;···32373236 if (unlikely(pgdat->proactive_compact_trigger))32383237 pgdat->proactive_compact_trigger = false;32393238 }32393239+32403240+ current->flags &= ~PF_KCOMPACTD;3240324132413242 return 0;32423243}
+3-90
mm/filemap.c
···4747#include <linux/splice.h>4848#include <linux/rcupdate_wait.h>4949#include <linux/sched/mm.h>5050-#include <linux/fsnotify.h>5150#include <asm/pgalloc.h>5251#include <asm/tlbflush.h>5352#include "internal.h"···28962897 size = min(size, folio_size(folio) - offset);28972898 offset %= PAGE_SIZE;2898289928992899- while (spliced < size &&29002900- !pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {29002900+ while (spliced < size && !pipe_is_full(pipe)) {29012901 struct pipe_buffer *buf = pipe_head_buf(pipe);29022902 size_t part = min_t(size_t, PAGE_SIZE - offset, size - spliced);29032903···29532955 iocb.ki_pos = *ppos;2954295629552957 /* Work out how much data we can actually add into the pipe */29562956- used = pipe_occupancy(pipe->head, pipe->tail);29582958+ used = pipe_buf_usage(pipe);29572959 npages = max_t(ssize_t, pipe->max_usage - used, 0);29582960 len = min_t(size_t, len, npages * PAGE_SIZE);29592961···30133015 total_spliced += n;30143016 *ppos += n;30153017 in->f_ra.prev_pos = *ppos;30163016- if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))30183018+ if (pipe_is_full(pipe))30173019 goto out;30183020 }30193021···31973199 unsigned long vm_flags = vmf->vma->vm_flags;31983200 unsigned int mmap_miss;3199320132003200- /*32013201- * If we have pre-content watches we need to disable readahead to make32023202- * sure that we don't populate our mapping with 0 filled pages that we32033203- * never emitted an event for.32043204- */32053205- if (unlikely(FMODE_FSNOTIFY_HSM(file->f_mode)))32063206- return fpin;32073207-32083202#ifdef CONFIG_TRANSPARENT_HUGEPAGE32093203 /* Use the readahead code, even if readahead is disabled */32103204 if ((vm_flags & VM_HUGEPAGE) && HPAGE_PMD_ORDER <= MAX_PAGECACHE_ORDER) {···32653275 struct file *fpin = NULL;32663276 unsigned int mmap_miss;3267327732683268- /* See comment in do_sync_mmap_readahead. */32693269- if (unlikely(FMODE_FSNOTIFY_HSM(file->f_mode)))32703270- return fpin;32713271-32723278 /* If we don't want any read-ahead, don't bother */32733279 if (vmf->vma->vm_flags & VM_RAND_READ || !ra->ra_pages)32743280 return fpin;···33223336 pte_unmap(ptep);33233337 return ret;33243338}33253325-33263326-/**33273327- * filemap_fsnotify_fault - maybe emit a pre-content event.33283328- * @vmf: struct vm_fault containing details of the fault.33293329- *33303330- * If we have a pre-content watch on this file we will emit an event for this33313331- * range. If we return anything the fault caller should return immediately, we33323332- * will return VM_FAULT_RETRY if we had to emit an event, which will trigger the33333333- * fault again and then the fault handler will run the second time through.33343334- *33353335- * Return: a bitwise-OR of %VM_FAULT_ codes, 0 if nothing happened.33363336- */33373337-vm_fault_t filemap_fsnotify_fault(struct vm_fault *vmf)33383338-{33393339- struct file *fpin = NULL;33403340- int mask = (vmf->flags & FAULT_FLAG_WRITE) ? MAY_WRITE : MAY_ACCESS;33413341- loff_t pos = vmf->pgoff >> PAGE_SHIFT;33423342- size_t count = PAGE_SIZE;33433343- int err;33443344-33453345- /*33463346- * We already did this and now we're retrying with everything locked,33473347- * don't emit the event and continue.33483348- */33493349- if (vmf->flags & FAULT_FLAG_TRIED)33503350- return 0;33513351-33523352- /* No watches, we're done. */33533353- if (likely(!FMODE_FSNOTIFY_HSM(vmf->vma->vm_file->f_mode)))33543354- return 0;33553355-33563356- fpin = maybe_unlock_mmap_for_io(vmf, fpin);33573357- if (!fpin)33583358- return VM_FAULT_SIGBUS;33593359-33603360- err = fsnotify_file_area_perm(fpin, mask, &pos, count);33613361- fput(fpin);33623362- if (err)33633363- return VM_FAULT_SIGBUS;33643364- return VM_FAULT_RETRY;33653365-}33663366-EXPORT_SYMBOL_GPL(filemap_fsnotify_fault);3367333933683340/**33693341 * filemap_fault - read in file data for page fault handling···34263482 * or because readahead was otherwise unable to retrieve it.34273483 */34283484 if (unlikely(!folio_test_uptodate(folio))) {34293429- /*34303430- * If this is a precontent file we have can now emit an event to34313431- * try and populate the folio.34323432- */34333433- if (!(vmf->flags & FAULT_FLAG_TRIED) &&34343434- unlikely(FMODE_FSNOTIFY_HSM(file->f_mode))) {34353435- loff_t pos = folio_pos(folio);34363436- size_t count = folio_size(folio);34373437-34383438- /* We're NOWAIT, we have to retry. */34393439- if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) {34403440- folio_unlock(folio);34413441- goto out_retry;34423442- }34433443-34443444- if (mapping_locked)34453445- filemap_invalidate_unlock_shared(mapping);34463446- mapping_locked = false;34473447-34483448- folio_unlock(folio);34493449- fpin = maybe_unlock_mmap_for_io(vmf, fpin);34503450- if (!fpin)34513451- goto out_retry;34523452-34533453- error = fsnotify_file_area_perm(fpin, MAY_ACCESS, &pos,34543454- count);34553455- if (error)34563456- ret = VM_FAULT_SIGBUS;34573457- goto out_retry;34583458- }34593459-34603485 /*34613486 * If the invalidate lock is not held, the folio was in cache34623487 * and uptodate and now it is not. Strange but possible since we
+8
mm/hugetlb.c
···29432943 return ret;29442944}2945294529462946+void wait_for_freed_hugetlb_folios(void)29472947+{29482948+ if (llist_empty(&hpage_freelist))29492949+ return;29502950+29512951+ flush_work(&free_hpage_work);29522952+}29532953+29462954typedef enum {29472955 /*29482956 * For either 0/1: we checked the per-vma resv map, and one resv
···15561556 return ret;15571557}1558155815591559-void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)15591559+int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)15601560{15611561- if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {15621562- struct address_space *mapping;15611561+ enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;15621562+ struct address_space *mapping;1563156315641564+ if (folio_test_swapcache(folio)) {15651565+ pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);15661566+ ttu &= ~TTU_HWPOISON;15671567+ }15681568+15691569+ /*15701570+ * Propagate the dirty bit from PTEs to struct page first, because we15711571+ * need this to decide if we should kill or just drop the page.15721572+ * XXX: the dirty test could be racy: set_page_dirty() may not always15731573+ * be called inside page lock (it's recommended but not enforced).15741574+ */15751575+ mapping = folio_mapping(folio);15761576+ if (!must_kill && !folio_test_dirty(folio) && mapping &&15771577+ mapping_can_writeback(mapping)) {15781578+ if (folio_mkclean(folio)) {15791579+ folio_set_dirty(folio);15801580+ } else {15811581+ ttu &= ~TTU_HWPOISON;15821582+ pr_info("%#lx: corrupted page was clean: dropped without side effects\n",15831583+ pfn);15841584+ }15851585+ }15861586+15871587+ if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {15641588 /*15651589 * For hugetlb folios in shared mappings, try_to_unmap15661590 * could potentially call huge_pmd_unshare. Because of···15961572 if (!mapping) {15971573 pr_info("%#lx: could not lock mapping for mapped hugetlb folio\n",15981574 folio_pfn(folio));15991599- return;15751575+ return -EBUSY;16001576 }1601157716021578 try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);···16041580 } else {16051581 try_to_unmap(folio, ttu);16061582 }15831583+15841584+ return folio_mapped(folio) ? -EBUSY : 0;16071585}1608158616091587/*···16151589static bool hwpoison_user_mappings(struct folio *folio, struct page *p,16161590 unsigned long pfn, int flags)16171591{16181618- enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;16191619- struct address_space *mapping;16201592 LIST_HEAD(tokill);16211593 bool unmap_success;16221594 int forcekill;···16371613 if (!folio_mapped(folio))16381614 return true;1639161516401640- if (folio_test_swapcache(folio)) {16411641- pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);16421642- ttu &= ~TTU_HWPOISON;16431643- }16441644-16451645- /*16461646- * Propagate the dirty bit from PTEs to struct page first, because we16471647- * need this to decide if we should kill or just drop the page.16481648- * XXX: the dirty test could be racy: set_page_dirty() may not always16491649- * be called inside page lock (it's recommended but not enforced).16501650- */16511651- mapping = folio_mapping(folio);16521652- if (!(flags & MF_MUST_KILL) && !folio_test_dirty(folio) && mapping &&16531653- mapping_can_writeback(mapping)) {16541654- if (folio_mkclean(folio)) {16551655- folio_set_dirty(folio);16561656- } else {16571657- ttu &= ~TTU_HWPOISON;16581658- pr_info("%#lx: corrupted page was clean: dropped without side effects\n",16591659- pfn);16601660- }16611661- }16621662-16631616 /*16641617 * First collect all the processes that have the page16651618 * mapped in dirty form. This has to be done before try_to_unmap,···16441643 */16451644 collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);1646164516471647- unmap_poisoned_folio(folio, ttu);16481648-16491649- unmap_success = !folio_mapped(folio);16461646+ unmap_success = !unmap_poisoned_folio(folio, pfn, flags & MF_MUST_KILL);16501647 if (!unmap_success)16511648 pr_err("%#lx: failed to unmap page (folio mapcount=%d)\n",16521649 pfn, folio_mapcount(folio));
+14-26
mm/memory.c
···7676#include <linux/ptrace.h>7777#include <linux/vmalloc.h>7878#include <linux/sched/sysctl.h>7979-#include <linux/fsnotify.h>80798180#include <trace/events/kmem.h>8281···30503051 next = pgd_addr_end(addr, end);30513052 if (pgd_none(*pgd) && !create)30523053 continue;30533053- if (WARN_ON_ONCE(pgd_leaf(*pgd)))30543054- return -EINVAL;30543054+ if (WARN_ON_ONCE(pgd_leaf(*pgd))) {30553055+ err = -EINVAL;30563056+ break;30573057+ }30553058 if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {30563059 if (!create)30573060 continue;···51845183 bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&51855184 !(vma->vm_flags & VM_SHARED);51865185 int type, nr_pages;51875187- unsigned long addr = vmf->address;51865186+ unsigned long addr;51875187+ bool needs_fallback = false;51885188+51895189+fallback:51905190+ addr = vmf->address;5188519151895192 /* Did we COW the page? */51905193 if (is_cow)···52275222 * approach also applies to non-anonymous-shmem faults to avoid52285223 * inflating the RSS of the process.52295224 */52305230- if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {52255225+ if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||52265226+ unlikely(needs_fallback)) {52315227 nr_pages = 1;52325228 } else if (nr_pages > 1) {52335229 pgoff_t idx = folio_page_idx(folio, page);···52645258 ret = VM_FAULT_NOPAGE;52655259 goto unlock;52665260 } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {52675267- update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);52685268- ret = VM_FAULT_NOPAGE;52695269- goto unlock;52615261+ needs_fallback = true;52625262+ pte_unmap_unlock(vmf->pte, vmf->ptl);52635263+ goto fallback;52705264 }5271526552725266 folio_ref_add(folio, nr_pages - 1);···57495743static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)57505744{57515745 struct vm_area_struct *vma = vmf->vma;57525752-57535746 if (vma_is_anonymous(vma))57545747 return do_huge_pmd_anonymous_page(vmf);57555755- /*57565756- * Currently we just emit PAGE_SIZE for our fault events, so don't allow57575757- * a huge fault if we have a pre content watch on this file. This would57585758- * be trivial to support, but there would need to be tests to ensure57595759- * this works properly and those don't exist currently.57605760- */57615761- if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode)))57625762- return VM_FAULT_FALLBACK;57635748 if (vma->vm_ops->huge_fault)57645749 return vma->vm_ops->huge_fault(vmf, PMD_ORDER);57655750 return VM_FAULT_FALLBACK;···57745777 }5775577857765779 if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) {57775777- /* See comment in create_huge_pmd. */57785778- if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode)))57795779- goto split;57805780 if (vma->vm_ops->huge_fault) {57815781 ret = vma->vm_ops->huge_fault(vmf, PMD_ORDER);57825782 if (!(ret & VM_FAULT_FALLBACK))···57965802 /* No support for anonymous transparent PUD pages yet */57975803 if (vma_is_anonymous(vma))57985804 return VM_FAULT_FALLBACK;57995799- /* See comment in create_huge_pmd. */58005800- if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode)))58015801- return VM_FAULT_FALLBACK;58025805 if (vma->vm_ops->huge_fault)58035806 return vma->vm_ops->huge_fault(vmf, PUD_ORDER);58045807#endif /* CONFIG_TRANSPARENT_HUGEPAGE */···58135822 if (vma_is_anonymous(vma))58145823 goto split;58155824 if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) {58165816- /* See comment in create_huge_pmd. */58175817- if (unlikely(FMODE_FSNOTIFY_HSM(vma->vm_file->f_mode)))58185818- goto split;58195825 if (vma->vm_ops->huge_fault) {58205826 ret = vma->vm_ops->huge_fault(vmf, PUD_ORDER);58215827 if (!(ret & VM_FAULT_FALLBACK))
+13-15
mm/memory_hotplug.c
···18221822 if (folio_test_large(folio))18231823 pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;1824182418251825- /*18261826- * HWPoison pages have elevated reference counts so the migration would18271827- * fail on them. It also doesn't make any sense to migrate them in the18281828- * first place. Still try to unmap such a page in case it is still mapped18291829- * (keep the unmap as the catch all safety net).18301830- */18311831- if (folio_test_hwpoison(folio) ||18321832- (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {18331833- if (WARN_ON(folio_test_lru(folio)))18341834- folio_isolate_lru(folio);18351835- if (folio_mapped(folio))18361836- unmap_poisoned_folio(folio, TTU_IGNORE_MLOCK);18371837- continue;18381838- }18391839-18401825 if (!folio_try_get(folio))18411826 continue;1842182718431828 if (unlikely(page_folio(page) != folio))18441829 goto put_folio;18301830+18311831+ if (folio_test_hwpoison(folio) ||18321832+ (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {18331833+ if (WARN_ON(folio_test_lru(folio)))18341834+ folio_isolate_lru(folio);18351835+ if (folio_mapped(folio)) {18361836+ folio_lock(folio);18371837+ unmap_poisoned_folio(folio, pfn, false);18381838+ folio_unlock(folio);18391839+ }18401840+18411841+ goto put_folio;18421842+ }1845184318461844 if (!isolate_folio_to_list(folio, &source)) {18471845 if (__ratelimit(&migrate_rs)) {
···608608 int ret;609609610610 /*611611+ * Due to the deferred freeing of hugetlb folios, the hugepage folios may612612+ * not immediately release to the buddy system. This can cause PageBuddy()613613+ * to fail in __test_page_isolated_in_pageblock(). To ensure that the614614+ * hugetlb folios are properly released back to the buddy system, we615615+ * invoke the wait_for_freed_hugetlb_folios() function to wait for the616616+ * release to complete.617617+ */618618+ wait_for_freed_hugetlb_folios();619619+620620+ /*611621 * Note: pageblock_nr_pages != MAX_PAGE_ORDER. Then, chunks of free612622 * pages are not aligned to pageblock_nr_pages.613623 * Then we just check migratetype first.
-14
mm/readahead.c
···128128#include <linux/blk-cgroup.h>129129#include <linux/fadvise.h>130130#include <linux/sched/mm.h>131131-#include <linux/fsnotify.h>132131133132#include "internal.h"134133···558559 pgoff_t prev_index, miss;559560560561 /*561561- * If we have pre-content watches we need to disable readahead to make562562- * sure that we don't find 0 filled pages in cache that we never emitted563563- * events for. Filesystems supporting HSM must make sure to not call564564- * this function with ractl->file unset for files handled by HSM.565565- */566566- if (ractl->file && unlikely(FMODE_FSNOTIFY_HSM(ractl->file->f_mode)))567567- return;568568-569569- /*570562 * Even if readahead is disabled, issue this request as readahead571563 * as we'll need it to satisfy the requested range. The forced572564 * readahead will do the right thing and limit the read to just the···633643634644 /* no readahead */635645 if (!ra->ra_pages)636636- return;637637-638638- /* See the comment in page_cache_sync_ra. */639639- if (ractl->file && unlikely(FMODE_FSNOTIFY_HSM(ractl->file->f_mode)))640646 return;641647642648 /*
+31-8
mm/shmem.c
···15481548 if (WARN_ON_ONCE(!wbc->for_reclaim))15491549 goto redirty;1550155015511551- if (WARN_ON_ONCE((info->flags & VM_LOCKED) || sbinfo->noswap))15511551+ if ((info->flags & VM_LOCKED) || sbinfo->noswap)15521552 goto redirty;1553155315541554 if (!total_swap_pages)···22532253 struct folio *folio = NULL;22542254 bool skip_swapcache = false;22552255 swp_entry_t swap;22562256- int error, nr_pages;22562256+ int error, nr_pages, order, split_order;2257225722582258 VM_BUG_ON(!*foliop || !xa_is_value(*foliop));22592259 swap = radix_to_swp_entry(*foliop);···2272227222732273 /* Look it up and read it in.. */22742274 folio = swap_cache_get_folio(swap, NULL, 0);22752275+ order = xa_get_order(&mapping->i_pages, index);22752276 if (!folio) {22762276- int order = xa_get_order(&mapping->i_pages, index);22772277 bool fallback_order0 = false;22782278- int split_order;2279227822802279 /* Or update major stats only when swapin succeeds?? */22812280 if (fault_type) {···23382339 error = -ENOMEM;23392340 goto failed;23402341 }23422342+ } else if (order != folio_order(folio)) {23432343+ /*23442344+ * Swap readahead may swap in order 0 folios into swapcache23452345+ * asynchronously, while the shmem mapping can still store23462346+ * large swap entries. In such cases, we should split the23472347+ * large swap entry to prevent possible data corruption.23482348+ */23492349+ split_order = shmem_split_large_entry(inode, index, swap, gfp);23502350+ if (split_order < 0) {23512351+ error = split_order;23522352+ goto failed;23532353+ }23542354+23552355+ /*23562356+ * If the large swap entry has already been split, it is23572357+ * necessary to recalculate the new swap entry based on23582358+ * the old order alignment.23592359+ */23602360+ if (split_order > 0) {23612361+ pgoff_t offset = index - round_down(index, 1 << split_order);23622362+23632363+ swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);23642364+ }23412365 }2342236623432367alloced:···23682346 folio_lock(folio);23692347 if ((!skip_swapcache && !folio_test_swapcache(folio)) ||23702348 folio->swap.val != swap.val ||23712371- !shmem_confirm_swap(mapping, index, swap)) {23492349+ !shmem_confirm_swap(mapping, index, swap) ||23502350+ xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {23722351 error = -EEXIST;23732352 goto unlock;23742353 }···3510348735113488 size = min_t(size_t, size, PAGE_SIZE - offset);3512348935133513- if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage)) {34903490+ if (!pipe_is_full(pipe)) {35143491 struct pipe_buffer *buf = pipe_head_buf(pipe);3515349235163493 *buf = (struct pipe_buffer) {···35373514 int error = 0;3538351535393516 /* Work out how much data we can actually add into the pipe */35403540- used = pipe_occupancy(pipe->head, pipe->tail);35173517+ used = pipe_buf_usage(pipe);35413518 npages = max_t(ssize_t, pipe->max_usage - used, 0);35423519 len = min_t(size_t, len, npages * PAGE_SIZE);35433520···36243601 total_spliced += n;36253602 *ppos += n;36263603 in->f_ra.prev_pos = *ppos;36273627- if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))36043604+ if (pipe_is_full(pipe))36283605 break;3629360636303607 cond_resched();
+10-4
mm/slab_common.c
···13041304static int rcu_delay_page_cache_fill_msec = 5000;13051305module_param(rcu_delay_page_cache_fill_msec, int, 0444);1306130613071307+static struct workqueue_struct *rcu_reclaim_wq;13081308+13071309/* Maximum number of jiffies to wait before draining a batch. */13081310#define KFREE_DRAIN_JIFFIES (5 * HZ)13091311#define KFREE_N_BATCHES 2···16341632 if (delayed_work_pending(&krcp->monitor_work)) {16351633 delay_left = krcp->monitor_work.timer.expires - jiffies;16361634 if (delay < delay_left)16371637- mod_delayed_work(system_unbound_wq, &krcp->monitor_work, delay);16351635+ mod_delayed_work(rcu_reclaim_wq, &krcp->monitor_work, delay);16381636 return;16391637 }16401640- queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay);16381638+ queue_delayed_work(rcu_reclaim_wq, &krcp->monitor_work, delay);16411639}1642164016431641static void···17351733 // "free channels", the batch can handle. Break17361734 // the loop since it is done with this CPU thus17371735 // queuing an RCU work is _always_ success here.17381738- queued = queue_rcu_work(system_unbound_wq, &krwp->rcu_work);17361736+ queued = queue_rcu_work(rcu_reclaim_wq, &krwp->rcu_work);17391737 WARN_ON_ONCE(!queued);17401738 break;17411739 }···18851883 if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&18861884 !atomic_xchg(&krcp->work_in_progress, 1)) {18871885 if (atomic_read(&krcp->backoff_page_cache_fill)) {18881888- queue_delayed_work(system_unbound_wq,18861886+ queue_delayed_work(rcu_reclaim_wq,18891887 &krcp->page_cache_work,18901888 msecs_to_jiffies(rcu_delay_page_cache_fill_msec));18911889 } else {···21212119 int cpu;21222120 int i, j;21232121 struct shrinker *kfree_rcu_shrinker;21222122+21232123+ rcu_reclaim_wq = alloc_workqueue("kvfree_rcu_reclaim",21242124+ WQ_UNBOUND | WQ_MEM_RECLAIM, 0);21252125+ WARN_ON(!rcu_reclaim_wq);2124212621252127 /* Clamp it to [0:100] seconds interval. */21262128 if (rcu_delay_page_cache_fill_msec < 0 ||
+10-2
mm/swapfile.c
···653653 return;654654655655 if (!ci->count) {656656- free_cluster(si, ci);656656+ if (ci->flags != CLUSTER_FLAG_FREE)657657+ free_cluster(si, ci);657658 } else if (ci->count != SWAPFILE_CLUSTER) {658659 if (ci->flags != CLUSTER_FLAG_FRAG)659660 move_cluster(si, ci, &si->frag_clusters[ci->order],···858857 }859858 offset++;860859 }860860+861861+ /* in case no swap cache is reclaimed */862862+ if (ci->flags == CLUSTER_FLAG_NONE)863863+ relocate_cluster(si, ci);861864862865 unlock_cluster(ci);863866 if (to_scan <= 0)···26462641 for (offset = 0; offset < end; offset += SWAPFILE_CLUSTER) {26472642 ci = lock_cluster(si, offset);26482643 unlock_cluster(ci);26492649- offset += SWAPFILE_CLUSTER;26502644 }26512645}26522646···35463542 int err, i;3547354335483544 si = swp_swap_info(entry);35453545+ if (WARN_ON_ONCE(!si)) {35463546+ pr_err("%s%08lx\n", Bad_file, entry.val);35473547+ return -EINVAL;35483548+ }3549354935503550 offset = swp_offset(entry);35513551 VM_WARN_ON(nr > SWAPFILE_CLUSTER - offset % SWAPFILE_CLUSTER);
+90-17
mm/userfaultfd.c
···1818#include <asm/tlbflush.h>1919#include <asm/tlb.h>2020#include "internal.h"2121+#include "swap.h"21222223static __always_inline2324bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)···10771076 return err;10781077}1079107810801080-static int move_swap_pte(struct mm_struct *mm,10791079+static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,10811080 unsigned long dst_addr, unsigned long src_addr,10821081 pte_t *dst_pte, pte_t *src_pte,10831082 pte_t orig_dst_pte, pte_t orig_src_pte,10841083 pmd_t *dst_pmd, pmd_t dst_pmdval,10851085- spinlock_t *dst_ptl, spinlock_t *src_ptl)10841084+ spinlock_t *dst_ptl, spinlock_t *src_ptl,10851085+ struct folio *src_folio)10861086{10871087- if (!pte_swp_exclusive(orig_src_pte))10881088- return -EBUSY;10891089-10901087 double_pt_lock(dst_ptl, src_ptl);1091108810921089 if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,10931090 dst_pmd, dst_pmdval)) {10941091 double_pt_unlock(dst_ptl, src_ptl);10951092 return -EAGAIN;10931093+ }10941094+10951095+ /*10961096+ * The src_folio resides in the swapcache, requiring an update to its10971097+ * index and mapping to align with the dst_vma, where a swap-in may10981098+ * occur and hit the swapcache after moving the PTE.10991099+ */11001100+ if (src_folio) {11011101+ folio_move_anon_rmap(src_folio, dst_vma);11021102+ src_folio->index = linear_page_index(dst_vma, dst_addr);10961103 }1097110410981105 orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);···11501141 __u64 mode)11511142{11521143 swp_entry_t entry;11441144+ struct swap_info_struct *si = NULL;11531145 pte_t orig_src_pte, orig_dst_pte;11541146 pte_t src_folio_pte;11551147 spinlock_t *src_ptl, *dst_ptl;···12501240 */12511241 if (!src_folio) {12521242 struct folio *folio;12431243+ bool locked;1253124412541245 /*12551246 * Pin the page while holding the lock to be sure the···12701259 goto out;12711260 }1272126112621262+ locked = folio_trylock(folio);12631263+ /*12641264+ * We avoid waiting for folio lock with a raised12651265+ * refcount for large folios because extra refcounts12661266+ * will result in split_folio() failing later and12671267+ * retrying. If multiple tasks are trying to move a12681268+ * large folio we can end up livelocking.12691269+ */12701270+ if (!locked && folio_test_large(folio)) {12711271+ spin_unlock(src_ptl);12721272+ err = -EAGAIN;12731273+ goto out;12741274+ }12751275+12731276 folio_get(folio);12741277 src_folio = folio;12751278 src_folio_pte = orig_src_pte;12761279 spin_unlock(src_ptl);1277128012781278- if (!folio_trylock(src_folio)) {12791279- pte_unmap(&orig_src_pte);12801280- pte_unmap(&orig_dst_pte);12811281+ if (!locked) {12821282+ pte_unmap(src_pte);12831283+ pte_unmap(dst_pte);12811284 src_pte = dst_pte = NULL;12821285 /* now we can block and wait */12831286 folio_lock(src_folio);···13071282 /* at this point we have src_folio locked */13081283 if (folio_test_large(src_folio)) {13091284 /* split_folio() can block */13101310- pte_unmap(&orig_src_pte);13111311- pte_unmap(&orig_dst_pte);12851285+ pte_unmap(src_pte);12861286+ pte_unmap(dst_pte);13121287 src_pte = dst_pte = NULL;13131288 err = split_folio(src_folio);13141289 if (err)···13331308 goto out;13341309 }13351310 if (!anon_vma_trylock_write(src_anon_vma)) {13361336- pte_unmap(&orig_src_pte);13371337- pte_unmap(&orig_dst_pte);13111311+ pte_unmap(src_pte);13121312+ pte_unmap(dst_pte);13381313 src_pte = dst_pte = NULL;13391314 /* now we can block and wait */13401315 anon_vma_lock_write(src_anon_vma);···13471322 orig_dst_pte, orig_src_pte, dst_pmd,13481323 dst_pmdval, dst_ptl, src_ptl, src_folio);13491324 } else {13251325+ struct folio *folio = NULL;13261326+13501327 entry = pte_to_swp_entry(orig_src_pte);13511328 if (non_swap_entry(entry)) {13521329 if (is_migration_entry(entry)) {13531353- pte_unmap(&orig_src_pte);13541354- pte_unmap(&orig_dst_pte);13301330+ pte_unmap(src_pte);13311331+ pte_unmap(dst_pte);13551332 src_pte = dst_pte = NULL;13561333 migration_entry_wait(mm, src_pmd, src_addr);13571334 err = -EAGAIN;···13621335 goto out;13631336 }1364133713651365- err = move_swap_pte(mm, dst_addr, src_addr, dst_pte, src_pte,13661366- orig_dst_pte, orig_src_pte, dst_pmd,13671367- dst_pmdval, dst_ptl, src_ptl);13381338+ if (!pte_swp_exclusive(orig_src_pte)) {13391339+ err = -EBUSY;13401340+ goto out;13411341+ }13421342+13431343+ si = get_swap_device(entry);13441344+ if (unlikely(!si)) {13451345+ err = -EAGAIN;13461346+ goto out;13471347+ }13481348+ /*13491349+ * Verify the existence of the swapcache. If present, the folio's13501350+ * index and mapping must be updated even when the PTE is a swap13511351+ * entry. The anon_vma lock is not taken during this process since13521352+ * the folio has already been unmapped, and the swap entry is13531353+ * exclusive, preventing rmap walks.13541354+ *13551355+ * For large folios, return -EBUSY immediately, as split_folio()13561356+ * also returns -EBUSY when attempting to split unmapped large13571357+ * folios in the swapcache. This issue needs to be resolved13581358+ * separately to allow proper handling.13591359+ */13601360+ if (!src_folio)13611361+ folio = filemap_get_folio(swap_address_space(entry),13621362+ swap_cache_index(entry));13631363+ if (!IS_ERR_OR_NULL(folio)) {13641364+ if (folio_test_large(folio)) {13651365+ err = -EBUSY;13661366+ folio_put(folio);13671367+ goto out;13681368+ }13691369+ src_folio = folio;13701370+ src_folio_pte = orig_src_pte;13711371+ if (!folio_trylock(src_folio)) {13721372+ pte_unmap(src_pte);13731373+ pte_unmap(dst_pte);13741374+ src_pte = dst_pte = NULL;13751375+ put_swap_device(si);13761376+ si = NULL;13771377+ /* now we can block and wait */13781378+ folio_lock(src_folio);13791379+ goto retry;13801380+ }13811381+ }13821382+ err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,13831383+ orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval,13841384+ dst_ptl, src_ptl, src_folio);13681385 }1369138613701387out:···14251354 if (src_pte)14261355 pte_unmap(src_pte);14271356 mmu_notifier_invalidate_range_end(&range);13571357+ if (si)13581358+ put_swap_device(si);1428135914291360 return err;14301361}
+3
mm/util.c
···2323#include <linux/processor.h>2424#include <linux/sizes.h>2525#include <linux/compat.h>2626+#include <linux/fsnotify.h>26272728#include <linux/uaccess.h>2829···570569 LIST_HEAD(uf);571570572571 ret = security_mmap_file(file, prot, flag);572572+ if (!ret)573573+ ret = fsnotify_mmap_perm(file, prot, pgoff >> PAGE_SHIFT, len);573574 if (!ret) {574575 if (mmap_write_lock_killable(mm))575576 return -EINTR;
+8-4
mm/vma.c
···15091509static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)15101510{15111511 struct vm_area_struct *vma = vmg->vma;15121512+ unsigned long start = vmg->start;15131513+ unsigned long end = vmg->end;15121514 struct vm_area_struct *merged;1513151515141516 /* First, try to merge. */15151517 merged = vma_merge_existing_range(vmg);15161518 if (merged)15171519 return merged;15201520+ if (vmg_nomem(vmg))15211521+ return ERR_PTR(-ENOMEM);1518152215191523 /* Split any preceding portion of the VMA. */15201520- if (vma->vm_start < vmg->start) {15211521- int err = split_vma(vmg->vmi, vma, vmg->start, 1);15241524+ if (vma->vm_start < start) {15251525+ int err = split_vma(vmg->vmi, vma, start, 1);1522152615231527 if (err)15241528 return ERR_PTR(err);15251529 }1526153015271531 /* Split any trailing portion of the VMA. */15281528- if (vma->vm_end > vmg->end) {15291529- int err = split_vma(vmg->vmi, vma, vmg->end, 0);15321532+ if (vma->vm_end > end) {15331533+ int err = split_vma(vmg->vmi, vma, end, 0);1530153415311535 if (err)15321536 return ERR_PTR(err);
···4343* statistics4444**********************************/4545/* The number of compressed pages currently stored in zswap */4646-atomic_long_t zswap_stored_pages = ATOMIC_INIT(0);4646+atomic_long_t zswap_stored_pages = ATOMIC_LONG_INIT(0);47474848/*4949 * The statistics below are not protected from concurrent access for
+2-1
net/8021q/vlan.c
···131131{132132 const char *name = real_dev->name;133133134134- if (real_dev->features & NETIF_F_VLAN_CHALLENGED) {134134+ if (real_dev->features & NETIF_F_VLAN_CHALLENGED ||135135+ real_dev->type != ARPHRD_ETHER) {135136 pr_info("VLANs not supported on %s\n", name);136137 NL_SET_ERR_MSG_MOD(extack, "VLANs not supported on device");137138 return -EOPNOTSUPP;
+7-3
net/bluetooth/hci_core.c
···57575858/* HCI callback list */5959LIST_HEAD(hci_cb_list);6060+DEFINE_MUTEX(hci_cb_list_lock);60616162/* HCI ID Numbering */6263static DEFINE_IDA(hci_index_ida);···29732972{29742973 BT_DBG("%p name %s", cb, cb->name);2975297429762976- list_add_tail_rcu(&cb->list, &hci_cb_list);29752975+ mutex_lock(&hci_cb_list_lock);29762976+ list_add_tail(&cb->list, &hci_cb_list);29772977+ mutex_unlock(&hci_cb_list_lock);2977297829782979 return 0;29792980}···29852982{29862983 BT_DBG("%p name %s", cb, cb->name);2987298429882988- list_del_rcu(&cb->list);29892989- synchronize_rcu();29852985+ mutex_lock(&hci_cb_list_lock);29862986+ list_del(&cb->list);29872987+ mutex_unlock(&hci_cb_list_lock);2990298829912989 return 0;29922990}
+22-15
net/bluetooth/hci_event.c
···33913391 hci_update_scan(hdev);33923392 }3393339333943394- params = hci_conn_params_lookup(hdev, &conn->dst, conn->dst_type);33953395- if (params) {33963396- switch (params->auto_connect) {33973397- case HCI_AUTO_CONN_LINK_LOSS:33983398- if (ev->reason != HCI_ERROR_CONNECTION_TIMEOUT)33943394+ /* Re-enable passive scanning if disconnected device is marked33953395+ * as auto-connectable.33963396+ */33973397+ if (conn->type == LE_LINK) {33983398+ params = hci_conn_params_lookup(hdev, &conn->dst,33993399+ conn->dst_type);34003400+ if (params) {34013401+ switch (params->auto_connect) {34023402+ case HCI_AUTO_CONN_LINK_LOSS:34033403+ if (ev->reason != HCI_ERROR_CONNECTION_TIMEOUT)34043404+ break;34053405+ fallthrough;34063406+34073407+ case HCI_AUTO_CONN_DIRECT:34083408+ case HCI_AUTO_CONN_ALWAYS:34093409+ hci_pend_le_list_del_init(params);34103410+ hci_pend_le_list_add(params,34113411+ &hdev->pend_le_conns);34123412+ hci_update_passive_scan(hdev);33993413 break;34003400- fallthrough;3401341434023402- case HCI_AUTO_CONN_DIRECT:34033403- case HCI_AUTO_CONN_ALWAYS:34043404- hci_pend_le_list_del_init(params);34053405- hci_pend_le_list_add(params, &hdev->pend_le_conns);34063406- hci_update_passive_scan(hdev);34073407- break;34083408-34093409- default:34103410- break;34153415+ default:34163416+ break;34173417+ }34113418 }34123419 }34133420
···38723872{38733873 netdev_features_t features;3874387438753875+ if (!skb_frags_readable(skb))38763876+ goto out_kfree_skb;38773877+38753878 features = netif_skb_features(skb);38763879 skb = validate_xmit_vlan(skb, features);38773880 if (unlikely(!skb))
+3-1
net/core/devmem.c
···109109 struct netdev_rx_queue *rxq;110110 unsigned long xa_idx;111111 unsigned int rxq_idx;112112+ int err;112113113114 if (binding->list.next)114115 list_del(&binding->list);···121120122121 rxq_idx = get_netdev_rx_queue_index(rxq);123122124124- WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));123123+ err = netdev_rx_queue_restart(binding->dev, rxq_idx);124124+ WARN_ON(err && err != -ENETDOWN);125125 }126126127127 xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+7-2
net/core/netpoll.c
···319319static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb)320320{321321 netdev_tx_t status = NETDEV_TX_BUSY;322322+ netdev_tx_t ret = NET_XMIT_DROP;322323 struct net_device *dev;323324 unsigned long tries;324325 /* It is up to the caller to keep npinfo alive. */···328327 lockdep_assert_irqs_disabled();329328330329 dev = np->dev;330330+ rcu_read_lock();331331 npinfo = rcu_dereference_bh(dev->npinfo);332332333333 if (!npinfo || !netif_running(dev) || !netif_device_present(dev)) {334334 dev_kfree_skb_irq(skb);335335- return NET_XMIT_DROP;335335+ goto out;336336 }337337338338 /* don't get messages out of order, and no recursion */···372370 skb_queue_tail(&npinfo->txq, skb);373371 schedule_delayed_work(&npinfo->tx_work,0);374372 }375375- return NETDEV_TX_OK;373373+ ret = NETDEV_TX_OK;374374+out:375375+ rcu_read_unlock();376376+ return ret;376377}377378378379netdev_tx_t netpoll_send_skb(struct netpoll *np, struct sk_buff *skb)
+4-4
net/ethtool/cabletest.c
···7272 dev = req_info.dev;73737474 rtnl_lock();7575- phydev = ethnl_req_get_phydev(&req_info,7676- tb[ETHTOOL_A_CABLE_TEST_HEADER],7575+ phydev = ethnl_req_get_phydev(&req_info, tb,7676+ ETHTOOL_A_CABLE_TEST_HEADER,7777 info->extack);7878 if (IS_ERR_OR_NULL(phydev)) {7979 ret = -EOPNOTSUPP;···339339 goto out_dev_put;340340341341 rtnl_lock();342342- phydev = ethnl_req_get_phydev(&req_info,343343- tb[ETHTOOL_A_CABLE_TEST_TDR_HEADER],342342+ phydev = ethnl_req_get_phydev(&req_info, tb,343343+ ETHTOOL_A_CABLE_TEST_TDR_HEADER,344344 info->extack);345345 if (IS_ERR_OR_NULL(phydev)) {346346 ret = -EOPNOTSUPP;
+1-1
net/ethtool/linkstate.c
···103103 struct phy_device *phydev;104104 int ret;105105106106- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_LINKSTATE_HEADER],106106+ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_LINKSTATE_HEADER,107107 info->extack);108108 if (IS_ERR(phydev)) {109109 ret = PTR_ERR(phydev);
···275275 * ethnl_req_get_phydev() - Gets the phy_device targeted by this request,276276 * if any. Must be called under rntl_lock().277277 * @req_info: The ethnl request to get the phy from.278278- * @header: The netlink header, used for error reporting.278278+ * @tb: The netlink attributes array, for error reporting.279279+ * @header: The netlink header index, used for error reporting.279280 * @extack: The netlink extended ACK, for error reporting.280281 *281282 * The caller must hold RTNL, until it's done interacting with the returned···290289 * is returned.291290 */292291struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info,293293- const struct nlattr *header,292292+ struct nlattr **tb, unsigned int header,294293 struct netlink_ext_ack *extack);295294296295/**
···6262 struct phy_device *phydev;6363 int ret;64646565- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PLCA_HEADER],6565+ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PLCA_HEADER,6666 info->extack);6767 // check that the PHY device is available and connected6868 if (IS_ERR_OR_NULL(phydev)) {···152152 bool mod = false;153153 int ret;154154155155- phydev = ethnl_req_get_phydev(req_info, tb[ETHTOOL_A_PLCA_HEADER],155155+ phydev = ethnl_req_get_phydev(req_info, tb, ETHTOOL_A_PLCA_HEADER,156156 info->extack);157157 // check that the PHY device is available and connected158158 if (IS_ERR_OR_NULL(phydev))···211211 struct phy_device *phydev;212212 int ret;213213214214- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PLCA_HEADER],214214+ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PLCA_HEADER,215215 info->extack);216216 // check that the PHY device is available and connected217217 if (IS_ERR_OR_NULL(phydev)) {
+2-2
net/ethtool/pse-pd.c
···6464 if (ret < 0)6565 return ret;66666767- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PSE_HEADER],6767+ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PSE_HEADER,6868 info->extack);6969 if (IS_ERR(phydev))7070 return -ENODEV;···261261 struct phy_device *phydev;262262 int ret;263263264264- phydev = ethnl_req_get_phydev(req_info, tb[ETHTOOL_A_PSE_HEADER],264264+ phydev = ethnl_req_get_phydev(req_info, tb, ETHTOOL_A_PSE_HEADER,265265 info->extack);266266 ret = ethnl_set_pse_validate(phydev, info);267267 if (ret)
+1-1
net/ethtool/stats.c
···138138 struct phy_device *phydev;139139 int ret;140140141141- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_STATS_HEADER],141141+ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_STATS_HEADER,142142 info->extack);143143 if (IS_ERR(phydev))144144 return PTR_ERR(phydev);
+1-1
net/ethtool/strset.c
···309309 return 0;310310 }311311312312- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_HEADER_FLAGS],312312+ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_HEADER_FLAGS,313313 info->extack);314314315315 /* phydev can be NULL, check for errors only */
···321321322322 /* clear destructor to avoid skb_segment assigning it to tail */323323 copy_dtor = gso_skb->destructor == sock_wfree;324324- if (copy_dtor)324324+ if (copy_dtor) {325325 gso_skb->destructor = NULL;326326+ gso_skb->sk = NULL;327327+ }326328327329 segs = skb_segment(gso_skb, features);328330 if (IS_ERR_OR_NULL(segs)) {329329- if (copy_dtor)331331+ if (copy_dtor) {330332 gso_skb->destructor = sock_wfree;333333+ gso_skb->sk = sk;334334+ }331335 return segs;332336 }333337
+9-6
net/ipv6/addrconf.c
···32093209 struct in6_addr addr;32103210 struct net_device *dev;32113211 struct net *net = dev_net(idev->dev);32123212- int scope, plen, offset = 0;32123212+ int scope, plen;32133213 u32 pflags = 0;3214321432153215 ASSERT_RTNL();3216321632173217 memset(&addr, 0, sizeof(struct in6_addr));32183218- /* in case of IP6GRE the dev_addr is an IPv6 and therefore we use only the last 4 bytes */32193219- if (idev->dev->addr_len == sizeof(struct in6_addr))32203220- offset = sizeof(struct in6_addr) - 4;32213221- memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4);32183218+ memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4);3222321932233220 if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) {32243221 scope = IPV6_ADDR_COMPATv4;···35263529 return;35273530 }3528353135293529- if (dev->type == ARPHRD_ETHER) {35323532+ /* Generate the IPv6 link-local address using addrconf_addr_gen(),35333533+ * unless we have an IPv4 GRE device not bound to an IP address and35343534+ * which is in EUI64 mode (as __ipv6_isatap_ifid() would fail in this35353535+ * case). Such devices fall back to add_v4_addrs() instead.35363536+ */35373537+ if (!(dev->type == ARPHRD_IPGRE && *(__be32 *)dev->dev_addr == 0 &&35383538+ idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_EUI64)) {35303539 addrconf_addr_gen(idev, true);35313540 return;35323541 }
+3-1
net/ipv6/ila/ila_lwt.c
···8888 goto drop;8989 }90909191- if (ilwt->connected) {9191+ /* cache only if we don't create a dst reference loop */9292+ if (ilwt->connected && orig_dst->lwtstate != dst->lwtstate) {9293 local_bh_disable();9394 dst_cache_set_ip6(&ilwt->dst_cache, dst, &fl6.saddr);9495 local_bh_enable();9596 }9697 }97989999+ skb_dst_drop(skb);98100 skb_dst_set(skb, dst);99101 return dst_output(net, sk, skb);100102
···4747 /* The EPCS Multi-Link element in the original elements */4848 const struct element *ml_epcs_elem;49495050+ bool multi_link_inner;5151+ bool skip_vendor;5252+5053 /*5154 * scratch buffer that can be used for various element parsing related5255 * tasks, e.g., element de-fragmentation etc.···155152 switch (le16_get_bits(mle->control,156153 IEEE80211_ML_CONTROL_TYPE)) {157154 case IEEE80211_ML_CONTROL_TYPE_BASIC:158158- if (elems_parse->ml_basic_elem) {155155+ if (elems_parse->multi_link_inner) {159156 elems->parse_error |=160157 IEEE80211_PARSE_ERR_DUP_NEST_ML_BASIC;161158 break;162159 }163163- elems_parse->ml_basic_elem = elem;164160 break;165161 case IEEE80211_ML_CONTROL_TYPE_RECONF:166162 elems_parse->ml_reconf_elem = elem;···401399 IEEE80211_PARSE_ERR_BAD_ELEM_SIZE;402400 break;403401 case WLAN_EID_VENDOR_SPECIFIC:402402+ if (elems_parse->skip_vendor)403403+ break;404404+404405 if (elen >= 4 && pos[0] == 0x00 && pos[1] == 0x50 &&405406 pos[2] == 0xf2) {406407 /* Microsoft OUI (00:50:F2) */···871866 }872867}873868874874-static void ieee80211_mle_parse_link(struct ieee80211_elems_parse *elems_parse,875875- struct ieee80211_elems_parse_params *params)869869+static const struct element *870870+ieee80211_prep_mle_link_parse(struct ieee80211_elems_parse *elems_parse,871871+ struct ieee80211_elems_parse_params *params,872872+ struct ieee80211_elems_parse_params *sub)876873{877874 struct ieee802_11_elems *elems = &elems_parse->elems;878875 struct ieee80211_mle_per_sta_profile *prof;879879- struct ieee80211_elems_parse_params sub = {880880- .mode = params->mode,881881- .action = params->action,882882- .from_ap = params->from_ap,883883- .link_id = -1,884884- };885885- ssize_t ml_len = elems->ml_basic_len;886886- const struct element *non_inherit = NULL;876876+ const struct element *tmp;877877+ ssize_t ml_len;887878 const u8 *end;879879+880880+ if (params->mode < IEEE80211_CONN_MODE_EHT)881881+ return NULL;882882+883883+ for_each_element_extid(tmp, 
WLAN_EID_EXT_EHT_MULTI_LINK,884884+ elems->ie_start, elems->total_len) {885885+ const struct ieee80211_multi_link_elem *mle =886886+ (void *)tmp->data + 1;887887+888888+ if (!ieee80211_mle_size_ok(tmp->data + 1, tmp->datalen - 1))889889+ continue;890890+891891+ if (le16_get_bits(mle->control, IEEE80211_ML_CONTROL_TYPE) !=892892+ IEEE80211_ML_CONTROL_TYPE_BASIC)893893+ continue;894894+895895+ elems_parse->ml_basic_elem = tmp;896896+ break;897897+ }888898889899 ml_len = cfg80211_defragment_element(elems_parse->ml_basic_elem,890900 elems->ie_start,···911891 WLAN_EID_FRAGMENT);912892913893 if (ml_len < 0)914914- return;894894+ return NULL;915895916896 elems->ml_basic = (const void *)elems_parse->scratch_pos;917897 elems->ml_basic_len = ml_len;918898 elems_parse->scratch_pos += ml_len;919899920900 if (params->link_id == -1)921921- return;901901+ return NULL;922902923903 ieee80211_mle_get_sta_prof(elems_parse, params->link_id);924904 prof = elems->prof;925905926906 if (!prof)927927- return;907907+ return NULL;928908929909 /* check if we have the 4 bytes for the fixed part in assoc response */930910 if (elems->sta_prof_len < sizeof(*prof) + prof->sta_info_len - 1 + 4) {931911 elems->prof = NULL;932912 elems->sta_prof_len = 0;933933- return;913913+ return NULL;934914 }935915936916 /*···939919 * the -1 is because the 'sta_info_len' is accounted to as part of the940920 * per-STA profile, but not part of the 'u8 variable[]' portion.941921 */942942- sub.start = prof->variable + prof->sta_info_len - 1 + 4;922922+ sub->start = prof->variable + prof->sta_info_len - 1 + 4;943923 end = (const u8 *)prof + elems->sta_prof_len;944944- sub.len = end - sub.start;924924+ sub->len = end - sub->start;945925946946- non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,947947- sub.start, sub.len);948948- _ieee802_11_parse_elems_full(&sub, elems_parse, non_inherit);926926+ sub->mode = params->mode;927927+ sub->action = params->action;928928+ sub->from_ap = params->from_ap;929929+ 
sub->link_id = -1;930930+931931+ return cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,932932+ sub->start, sub->len);949933}950934951935static void···997973struct ieee802_11_elems *998974ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params)999975{976976+ struct ieee80211_elems_parse_params sub = {};1000977 struct ieee80211_elems_parse *elems_parse;10011001- struct ieee802_11_elems *elems;1002978 const struct element *non_inherit = NULL;10031003- u8 *nontransmitted_profile;10041004- int nontransmitted_profile_len = 0;979979+ struct ieee802_11_elems *elems;1005980 size_t scratch_len = 3 * params->len;981981+ bool multi_link_inner = false;10069821007983 BUILD_BUG_ON(offsetof(typeof(*elems_parse), elems) != 0);984984+985985+ /* cannot parse for both a specific link and non-transmitted BSS */986986+ if (WARN_ON(params->link_id >= 0 && params->bss))987987+ return NULL;10089881009989 elems_parse = kzalloc(struct_size(elems_parse, scratch, scratch_len),1010990 GFP_ATOMIC);···1026998 ieee80211_clear_tpe(&elems->tpe);1027999 ieee80211_clear_tpe(&elems->csa_tpe);1028100010291029- nontransmitted_profile = elems_parse->scratch_pos;10301030- nontransmitted_profile_len =10311031- ieee802_11_find_bssid_profile(params->start, params->len,10321032- elems, params->bss,10331033- nontransmitted_profile);10341034- elems_parse->scratch_pos += nontransmitted_profile_len;10351035- non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,10361036- nontransmitted_profile,10371037- nontransmitted_profile_len);10011001+ /*10021002+ * If we're looking for a non-transmitted BSS then we cannot at10031003+ * the same time be looking for a second link as the two can only10041004+ * appear in the same frame carrying info for different BSSes.10051005+ *10061006+ * In any case, we only look for one at a time, as encoded by10071007+ * the WARN_ON above.10081008+ */10091009+ if (params->bss) {10101010+ int nontx_len =10111011+ 
ieee802_11_find_bssid_profile(params->start,10121012+ params->len,10131013+ elems, params->bss,10141014+ elems_parse->scratch_pos);10151015+ sub.start = elems_parse->scratch_pos;10161016+ sub.mode = params->mode;10171017+ sub.len = nontx_len;10181018+ sub.action = params->action;10191019+ sub.link_id = params->link_id;1038102010211021+ /* consume the space used for non-transmitted profile */10221022+ elems_parse->scratch_pos += nontx_len;10231023+10241024+ non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,10251025+ sub.start, nontx_len);10261026+ } else {10271027+ /* must always parse to get elems_parse->ml_basic_elem */10281028+ non_inherit = ieee80211_prep_mle_link_parse(elems_parse, params,10291029+ &sub);10301030+ multi_link_inner = true;10311031+ }10321032+10331033+ elems_parse->skip_vendor =10341034+ cfg80211_find_elem(WLAN_EID_VENDOR_SPECIFIC,10351035+ sub.start, sub.len);10391036 elems->crc = _ieee802_11_parse_elems_full(params, elems_parse,10401037 non_inherit);1041103810421042- /* Override with nontransmitted profile, if found */10431043- if (nontransmitted_profile_len) {10441044- struct ieee80211_elems_parse_params sub = {10451045- .mode = params->mode,10461046- .start = nontransmitted_profile,10471047- .len = nontransmitted_profile_len,10481048- .action = params->action,10491049- .link_id = params->link_id,10501050- };10511051-10391039+ /* Override with nontransmitted/per-STA profile if found */10401040+ if (sub.len) {10411041+ elems_parse->multi_link_inner = multi_link_inner;10421042+ elems_parse->skip_vendor = false;10521043 _ieee802_11_parse_elems_full(&sub, elems_parse, NULL);10531044 }10541054-10551055- ieee80211_mle_parse_link(elems_parse, params);1056104510571046 ieee80211_mle_defrag_reconf(elems_parse);10581047
+5-5
net/mac80211/rx.c
···66 * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>77 * Copyright 2013-2014 Intel Mobile Communications GmbH88 * Copyright(c) 2015 - 2017 Intel Deutschland GmbH99- * Copyright (C) 2018-2024 Intel Corporation99+ * Copyright (C) 2018-2025 Intel Corporation1010 */11111212#include <linux/jiffies.h>···33293329 return;33303330 }3331333133323332- if (!ether_addr_equal(mgmt->sa, sdata->deflink.u.mgd.bssid) ||33333333- !ether_addr_equal(mgmt->bssid, sdata->deflink.u.mgd.bssid)) {33323332+ if (!ether_addr_equal(mgmt->sa, sdata->vif.cfg.ap_addr) ||33333333+ !ether_addr_equal(mgmt->bssid, sdata->vif.cfg.ap_addr)) {33343334 /* Not from the current AP or not associated yet. */33353335 return;33363336 }···3346334633473347 skb_reserve(skb, local->hw.extra_tx_headroom);33483348 resp = skb_put_zero(skb, 24);33493349- memcpy(resp->da, mgmt->sa, ETH_ALEN);33493349+ memcpy(resp->da, sdata->vif.cfg.ap_addr, ETH_ALEN);33503350 memcpy(resp->sa, sdata->vif.addr, ETH_ALEN);33513351- memcpy(resp->bssid, sdata->deflink.u.mgd.bssid, ETH_ALEN);33513351+ memcpy(resp->bssid, sdata->vif.cfg.ap_addr, ETH_ALEN);33523352 resp->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT |33533353 IEEE80211_STYPE_ACTION);33543354 skb_put(skb, 1 + sizeof(resp->u.action.u.sa_query));
+17-3
net/mac80211/sta_info.c
···44 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>55 * Copyright 2013-2014 Intel Mobile Communications GmbH66 * Copyright (C) 2015 - 2017 Intel Deutschland GmbH77- * Copyright (C) 2018-2023 Intel Corporation77+ * Copyright (C) 2018-2024 Intel Corporation88 */991010#include <linux/module.h>···13351335 sta->sta.addr, new_state);1336133613371337 /* notify the driver before the actual changes so it can13381338- * fail the transition13381338+ * fail the transition if the state is increasing.13391339+ * The driver is required not to fail when the transition13401340+ * is decreasing the state, so first, do all the preparation13411341+ * work and only then, notify the driver.13391342 */13401340- if (test_sta_flag(sta, WLAN_STA_INSERTED)) {13431343+ if (new_state > sta->sta_state &&13441344+ test_sta_flag(sta, WLAN_STA_INSERTED)) {13411345 int err = drv_sta_state(sta->local, sta->sdata, sta,13421346 sta->sta_state, new_state);13431347 if (err)···14151411 break;14161412 default:14171413 break;14141414+ }14151415+14161416+ if (new_state < sta->sta_state &&14171417+ test_sta_flag(sta, WLAN_STA_INSERTED)) {14181418+ int err = drv_sta_state(sta->local, sta->sdata, sta,14191419+ sta->sta_state, new_state);14201420+14211421+ WARN_ONCE(err,14221422+ "Driver is not allowed to fail if the sta_state is transitioning down the list: %d\n",14231423+ err);14181424 }1419142514201426 sta->sta_state = new_state;
+8-5
net/mac80211/util.c
···66 * Copyright 2007 Johannes Berg <johannes@sipsolutions.net>77 * Copyright 2013-2014 Intel Mobile Communications GmbH88 * Copyright (C) 2015-2017 Intel Deutschland GmbH99- * Copyright (C) 2018-2024 Intel Corporation99+ * Copyright (C) 2018-2025 Intel Corporation1010 *1111 * utilities for mac802111212 */···687687 struct ieee80211_sub_if_data *sdata,688688 unsigned int queues, bool drop)689689{690690- if (!local->ops->flush)690690+ if (!local->ops->flush && !drop)691691 return;692692693693 /*···714714 }715715 }716716717717- drv_flush(local, sdata, queues, drop);717717+ if (local->ops->flush)718718+ drv_flush(local, sdata, queues, drop);718719719720 ieee80211_wake_queues_by_reason(&local->hw, queues,720721 IEEE80211_QUEUE_STOP_REASON_FLUSH,···21932192 ieee80211_reconfig_roc(local);2194219321952194 /* Requeue all works */21962196- list_for_each_entry(sdata, &local->interfaces, list)21972197- wiphy_work_queue(local->hw.wiphy, &sdata->work);21952195+ list_for_each_entry(sdata, &local->interfaces, list) {21962196+ if (ieee80211_sdata_running(sdata))21972197+ wiphy_work_queue(local->hw.wiphy, &sdata->work);21982198+ }21982199 }2199220022002201 ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
+8-2
net/mctp/route.c
···332332 & MCTP_HDR_SEQ_MASK;333333334334 if (!key->reasm_head) {335335- key->reasm_head = skb;336336- key->reasm_tailp = &(skb_shinfo(skb)->frag_list);335335+ /* Since we're manipulating the shared frag_list, ensure it isn't336336+ * shared with any other SKBs.337337+ */338338+ key->reasm_head = skb_unshare(skb, GFP_ATOMIC);339339+ if (!key->reasm_head)340340+ return -ENOMEM;341341+342342+ key->reasm_tailp = &(skb_shinfo(key->reasm_head)->frag_list);337343 key->last_seq = this_seq;338344 return 0;339345 }
+109
net/mctp/test/route-test.c
···921921 __mctp_route_test_fini(test, dev, rt, sock);922922}923923924924+/* Input route to socket, using a fragmented message created from clones.925925+ */926926+static void mctp_test_route_input_cloned_frag(struct kunit *test)927927+{928928+ /* 5 packet fragments, forming 2 complete messages */929929+ const struct mctp_hdr hdrs[5] = {930930+ RX_FRAG(FL_S, 0),931931+ RX_FRAG(0, 1),932932+ RX_FRAG(FL_E, 2),933933+ RX_FRAG(FL_S, 0),934934+ RX_FRAG(FL_E, 1),935935+ };936936+ struct mctp_test_route *rt;937937+ struct mctp_test_dev *dev;938938+ struct sk_buff *skb[5];939939+ struct sk_buff *rx_skb;940940+ struct socket *sock;941941+ size_t data_len;942942+ u8 compare[100];943943+ u8 flat[100];944944+ size_t total;945945+ void *p;946946+ int rc;947947+948948+ /* Arbitrary length */949949+ data_len = 3;950950+ total = data_len + sizeof(struct mctp_hdr);951951+952952+ __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY);953953+954954+ /* Create a single skb initially with concatenated packets */955955+ skb[0] = mctp_test_create_skb(&hdrs[0], 5 * total);956956+ mctp_test_skb_set_dev(skb[0], dev);957957+ memset(skb[0]->data, 0 * 0x11, skb[0]->len);958958+ memcpy(skb[0]->data, &hdrs[0], sizeof(struct mctp_hdr));959959+960960+ /* Extract and populate packets */961961+ for (int i = 1; i < 5; i++) {962962+ skb[i] = skb_clone(skb[i - 1], GFP_ATOMIC);963963+ KUNIT_ASSERT_TRUE(test, skb[i]);964964+ p = skb_pull(skb[i], total);965965+ KUNIT_ASSERT_TRUE(test, p);966966+ skb_reset_network_header(skb[i]);967967+ memcpy(skb[i]->data, &hdrs[i], sizeof(struct mctp_hdr));968968+ memset(&skb[i]->data[sizeof(struct mctp_hdr)], i * 0x11, data_len);969969+ }970970+ for (int i = 0; i < 5; i++)971971+ skb_trim(skb[i], total);972972+973973+ /* SOM packets have a type byte to match the socket */974974+ skb[0]->data[4] = 0;975975+ skb[3]->data[4] = 0;976976+977977+ skb_dump("pkt1 ", skb[0], false);978978+ skb_dump("pkt2 ", skb[1], false);979979+ skb_dump("pkt3 ", skb[2], false);980980+ 
skb_dump("pkt4 ", skb[3], false);981981+ skb_dump("pkt5 ", skb[4], false);982982+983983+ for (int i = 0; i < 5; i++) {984984+ KUNIT_EXPECT_EQ(test, refcount_read(&skb[i]->users), 1);985985+ /* Take a reference so we can check refcounts at the end */986986+ skb_get(skb[i]);987987+ }988988+989989+ /* Feed the fragments into MCTP core */990990+ for (int i = 0; i < 5; i++) {991991+ rc = mctp_route_input(&rt->rt, skb[i]);992992+ KUNIT_EXPECT_EQ(test, rc, 0);993993+ }994994+995995+ /* Receive first reassembled message */996996+ rx_skb = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc);997997+ KUNIT_EXPECT_EQ(test, rc, 0);998998+ KUNIT_EXPECT_EQ(test, rx_skb->len, 3 * data_len);999999+ rc = skb_copy_bits(rx_skb, 0, flat, rx_skb->len);10001000+ for (int i = 0; i < rx_skb->len; i++)10011001+ compare[i] = (i / data_len) * 0x11;10021002+ /* Set type byte */10031003+ compare[0] = 0;10041004+10051005+ KUNIT_EXPECT_MEMEQ(test, flat, compare, rx_skb->len);10061006+ KUNIT_EXPECT_EQ(test, refcount_read(&rx_skb->users), 1);10071007+ kfree_skb(rx_skb);10081008+10091009+ /* Receive second reassembled message */10101010+ rx_skb = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc);10111011+ KUNIT_EXPECT_EQ(test, rc, 0);10121012+ KUNIT_EXPECT_EQ(test, rx_skb->len, 2 * data_len);10131013+ rc = skb_copy_bits(rx_skb, 0, flat, rx_skb->len);10141014+ for (int i = 0; i < rx_skb->len; i++)10151015+ compare[i] = (i / data_len + 3) * 0x11;10161016+ /* Set type byte */10171017+ compare[0] = 0;10181018+10191019+ KUNIT_EXPECT_MEMEQ(test, flat, compare, rx_skb->len);10201020+ KUNIT_EXPECT_EQ(test, refcount_read(&rx_skb->users), 1);10211021+ kfree_skb(rx_skb);10221022+10231023+ /* Check input skb refcounts */10241024+ for (int i = 0; i < 5; i++) {10251025+ KUNIT_EXPECT_EQ(test, refcount_read(&skb[i]->users), 1);10261026+ kfree_skb(skb[i]);10271027+ }10281028+10291029+ __mctp_route_test_fini(test, dev, rt, sock);10301030+}10311031+9241032#if IS_ENABLED(CONFIG_MCTP_FLOWS)92510339261034static void 
mctp_test_flow_init(struct kunit *test,···12521144 KUNIT_CASE(mctp_test_packet_flow),12531145 KUNIT_CASE(mctp_test_fragment_flow),12541146 KUNIT_CASE(mctp_test_route_output_key_create),11471147+ KUNIT_CASE(mctp_test_route_input_cloned_frag),12551148 {}12561149};12571150
+15-3
net/mptcp/pm_netlink.c
···977977978978static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet,979979 struct mptcp_pm_addr_entry *entry,980980- bool needs_id)980980+ bool needs_id, bool replace)981981{982982 struct mptcp_pm_addr_entry *cur, *del_entry = NULL;983983 unsigned int addr_max;···10161016 }10171017 if (entry->addr.id)10181018 goto out;10191019+10201020+ /* allow callers that only need to look up the local10211021+ * addr's id to skip replacement. This allows them to10221022+ * avoid calling synchronize_rcu in the packet recv10231023+ * path.10241024+ */10251025+ if (!replace) {10261026+ kfree(entry);10271027+ ret = cur->addr.id;10281028+ goto out;10291029+ }1019103010201031 pernet->addrs--;10211032 entry->addr.id = cur->addr.id;···11761165 entry->ifindex = 0;11771166 entry->flags = MPTCP_PM_ADDR_FLAG_IMPLICIT;11781167 entry->lsk = NULL;11791179- ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true);11681168+ ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true, false);11801169 if (ret < 0)11811170 kfree(entry);11821171···14441433 }14451434 }14461435 ret = mptcp_pm_nl_append_new_local_addr(pernet, entry,14471447- !mptcp_pm_has_addr_attr_id(attr, info));14361436+ !mptcp_pm_has_addr_attr_id(attr, info),14371437+ true);14481438 if (ret < 0) {14491439 GENL_SET_ERR_MSG_FMT(info, "too many addresses or duplicate one: %d", ret);14501440 goto out_free;
+4-4
net/netfilter/ipvs/ip_vs_ctl.c
···30913091 case IP_VS_SO_GET_SERVICES:30923092 {30933093 struct ip_vs_get_services *get;30943094- int size;30943094+ size_t size;3095309530963096 get = (struct ip_vs_get_services *)arg;30973097 size = struct_size(get, entrytable, get->num_services);30983098 if (*len != size) {30993099- pr_err("length: %u != %u\n", *len, size);30993099+ pr_err("length: %u != %zu\n", *len, size);31003100 ret = -EINVAL;31013101 goto out;31023102 }···31323132 case IP_VS_SO_GET_DESTS:31333133 {31343134 struct ip_vs_get_dests *get;31353135- int size;31353135+ size_t size;3136313631373137 get = (struct ip_vs_get_dests *)arg;31383138 size = struct_size(get, entrytable, get->num_dests);31393139 if (*len != size) {31403140- pr_err("length: %u != %u\n", *len, size);31403140+ pr_err("length: %u != %zu\n", *len, size);31413141 ret = -EINVAL;31423142 goto out;31433143 }
+4-2
net/netfilter/nf_conncount.c
···132132 struct nf_conn *found_ct;133133 unsigned int collect = 0;134134135135- if (time_is_after_eq_jiffies((unsigned long)list->last_gc))135135+ if ((u32)jiffies == list->last_gc)136136 goto add_new_node;137137138138 /* check the saved connections */···234234 bool ret = false;235235236236 /* don't bother if we just did GC */237237- if (time_is_after_eq_jiffies((unsigned long)READ_ONCE(list->last_gc)))237237+ if ((u32)jiffies == READ_ONCE(list->last_gc))238238 return false;239239240240 /* don't bother if other cpu is already doing GC */···377377378378 conn->tuple = *tuple;379379 conn->zone = *zone;380380+ conn->cpu = raw_smp_processor_id();381381+ conn->jiffies32 = (u32)jiffies;380382 memcpy(rbconn->key, key, sizeof(u32) * data->keylen);381383382384 nf_conncount_list_init(&rbconn->list);
···228228 return 0;229229}230230231231-static void nft_compat_wait_for_destructors(void)231231+static void nft_compat_wait_for_destructors(struct net *net)232232{233233 /* xtables matches or targets can have side effects, e.g.234234 * creation/destruction of /proc files.···236236 * work queue. If we have pending invocations we thus237237 * need to wait for those to finish.238238 */239239- nf_tables_trans_destroy_flush_work();239239+ nf_tables_trans_destroy_flush_work(net);240240}241241242242static int···262262263263 nft_target_set_tgchk_param(&par, ctx, target, info, &e, proto, inv);264264265265- nft_compat_wait_for_destructors();265265+ nft_compat_wait_for_destructors(ctx->net);266266267267 ret = xt_check_target(&par, size, proto, inv);268268 if (ret < 0) {···515515516516 nft_match_set_mtchk_param(&par, ctx, match, info, &e, proto, inv);517517518518- nft_compat_wait_for_destructors();518518+ nft_compat_wait_for_destructors(ctx->net);519519520520 return xt_check_match(&par, size, proto, inv);521521}
+4-2
net/netfilter/nft_ct.c
···230230 enum ip_conntrack_info ctinfo;231231 u16 value = nft_reg_load16(&regs->data[priv->sreg]);232232 struct nf_conn *ct;233233+ int oldcnt;233234234235 ct = nf_ct_get(skb, &ctinfo);235236 if (ct) /* already tracked */···251250252251 ct = this_cpu_read(nft_ct_pcpu_template);253252254254- if (likely(refcount_read(&ct->ct_general.use) == 1)) {255255- refcount_inc(&ct->ct_general.use);253253+ __refcount_inc(&ct->ct_general.use, &oldcnt);254254+ if (likely(oldcnt == 1)) {256255 nf_ct_zone_add(ct, &zone);257256 } else {257257+ refcount_dec(&ct->ct_general.use);258258+ /* previous skb got queued to userspace, allocate temporary259259+ * one until percpu template can be reused.260260+ */
+4-6
net/netfilter/nft_exthdr.c
···8585 unsigned char optbuf[sizeof(struct ip_options) + 40];8686 struct ip_options *opt = (struct ip_options *)optbuf;8787 struct iphdr *iph, _iph;8888- unsigned int start;8988 bool found = false;9089 __be32 info;9190 int optlen;···9293 iph = skb_header_pointer(skb, 0, sizeof(_iph), &_iph);9394 if (!iph)9495 return -EBADMSG;9595- start = sizeof(struct iphdr);96969797 optlen = iph->ihl * 4 - (int)sizeof(struct iphdr);9898 if (optlen <= 0)···101103 /* Copy the options since __ip_options_compile() modifies102104 * the options.103105 */104104- if (skb_copy_bits(skb, start, opt->__data, optlen))106106+ if (skb_copy_bits(skb, sizeof(struct iphdr), opt->__data, optlen))105107 return -EBADMSG;106108 opt->optlen = optlen;107109···116118 found = target == IPOPT_SSRR ? opt->is_strictroute :117119 !opt->is_strictroute;118120 if (found)119119- *offset = opt->srr + start;121121+ *offset = opt->srr;120122 break;121123 case IPOPT_RR:122124 if (!opt->rr)123125 break;124124- *offset = opt->rr + start;126126+ *offset = opt->rr;125127 found = true;126128 break;127129 case IPOPT_RA:128130 if (!opt->router_alert)129131 break;130130- *offset = opt->router_alert + start;132132+ *offset = opt->router_alert;131133 found = true;132134 break;133135 default:
+18-12
net/openvswitch/conntrack.c
···13681368 attr == OVS_KEY_ATTR_CT_MARK)13691369 return true;13701370 if (IS_ENABLED(CONFIG_NF_CONNTRACK_LABELS) &&13711371- attr == OVS_KEY_ATTR_CT_LABELS)13721372- return true;13711371+ attr == OVS_KEY_ATTR_CT_LABELS) {13721372+ struct ovs_net *ovs_net = net_generic(net, ovs_net_id);13731373+13741374+ return ovs_net->xt_label;13751375+ }1373137613741377 return false;13751378}···13811378 const struct sw_flow_key *key,13821379 struct sw_flow_actions **sfa, bool log)13831380{13841384- unsigned int n_bits = sizeof(struct ovs_key_ct_labels) * BITS_PER_BYTE;13851381 struct ovs_conntrack_info ct_info;13861382 const char *helper = NULL;13871383 u16 family;···14071405 if (!ct_info.ct) {14081406 OVS_NLERR(log, "Failed to allocate conntrack template");14091407 return -ENOMEM;14101410- }14111411-14121412- if (nf_connlabels_get(net, n_bits - 1)) {14131413- nf_ct_tmpl_free(ct_info.ct);14141414- OVS_NLERR(log, "Failed to set connlabel length");14151415- return -EOPNOTSUPP;14161408 }1417140914181410 if (ct_info.timeout[0]) {···15771581 if (ct_info->ct) {15781582 if (ct_info->timeout[0])15791583 nf_ct_destroy_timeout(ct_info->ct);15801580- nf_connlabels_put(nf_ct_net(ct_info->ct));15811584 nf_ct_tmpl_free(ct_info->ct);15821585 }15831586}···2001200620022007int ovs_ct_init(struct net *net)20032008{20042004-#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)20092009+ unsigned int n_bits = sizeof(struct ovs_key_ct_labels) * BITS_PER_BYTE;20052010 struct ovs_net *ovs_net = net_generic(net, ovs_net_id);2006201120122012+ if (nf_connlabels_get(net, n_bits - 1)) {20132013+ ovs_net->xt_label = false;20142014+ OVS_NLERR(true, "Failed to set connlabel length");20152015+ } else {20162016+ ovs_net->xt_label = true;20172017+ }20182018+20192019+#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)20072020 return ovs_ct_limit_init(net, ovs_net);20082021#else20092022 return 0;···2020201720212018void ovs_ct_exit(struct net *net)20222019{20232023-#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)20242020 struct ovs_net *ovs_net = net_generic(net, ovs_net_id);2025202120222022+#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)20262023 ovs_ct_limit_exit(net, ovs_net);20272024#endif20252025+20262026+ if (ovs_net->xt_label)20272027+ nf_connlabels_put(net);20282028}
···22542254 return -EOPNOTSUPP;22552255 }2256225622572257+ /* Prevent creation of traffic classes with classid TC_H_ROOT */22582258+ if (clid == TC_H_ROOT) {22592259+ NL_SET_ERR_MSG(extack, "Cannot create traffic class with classid TC_H_ROOT");22602260+ return -EINVAL;22612261+ }22622262+22572263 new_cl = cl;22582264 err = -EOPNOTSUPP;22592265 if (cops->change)
+2-1
net/sched/sch_gred.c
···913913 for (i = 0; i < table->DPs; i++)914914 gred_destroy_vq(table->tab[i]);915915916916- gred_offload(sch, TC_GRED_DESTROY);916916+ if (table->opt)917917+ gred_offload(sch, TC_GRED_DESTROY);917918 kfree(table->opt);918919}919920
···6262 ));6363 }64646565+ // ISO C (ISO/IEC 9899:2011) defines `aligned_alloc`:6666+ //6767+ // > The value of alignment shall be a valid alignment supported by the implementation6868+ // [...].6969+ //7070+ // As an example of the "supported by the implementation" requirement, POSIX.1-2001 (IEEE7171+ // 1003.1-2001) defines `posix_memalign`:7272+ //7373+ // > The value of alignment shall be a power of two multiple of sizeof (void *).7474+ //7575+ // and POSIX-based implementations of `aligned_alloc` inherit this requirement. At the time7676+ // of writing, this is known to be the case on macOS (but not in glibc).7777+ //7878+ // Satisfy the stricter requirement to avoid spurious test failures on some platforms.7979+ let min_align = core::mem::size_of::<*const crate::ffi::c_void>();8080+ let layout = layout.align_to(min_align).map_err(|_| AllocError)?;8181+ let layout = layout.pad_to_align();8282+6583 // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or6684 // exceeds the given size and alignment requirements.6785 let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8;
+1-1
rust/kernel/error.rs
···107107 } else {108108 // TODO: Make it a `WARN_ONCE` once available.109109 crate::pr_warn!(110110- "attempted to create `Error` with out of range `errno`: {}",110110+ "attempted to create `Error` with out of range `errno`: {}\n",111111 errno112112 );113113 code::EINVAL
+10-13
rust/kernel/init.rs
···259259/// },260260/// }));261261/// let foo: Pin<&mut Foo> = foo;262262-/// pr_info!("a: {}", &*foo.a.lock());262262+/// pr_info!("a: {}\n", &*foo.a.lock());263263/// ```264264///265265/// # Syntax···319319/// }, GFP_KERNEL)?,320320/// }));321321/// let foo = foo.unwrap();322322-/// pr_info!("a: {}", &*foo.a.lock());322322+/// pr_info!("a: {}\n", &*foo.a.lock());323323/// ```324324///325325/// ```rust,ignore···352352/// x: 64,353353/// }, GFP_KERNEL)?,354354/// }));355355-/// pr_info!("a: {}", &*foo.a.lock());355355+/// pr_info!("a: {}\n", &*foo.a.lock());356356/// # Ok::<_, AllocError>(())357357/// ```358358///···882882 ///883883 /// impl Foo {884884 /// fn setup(self: Pin<&mut Self>) {885885- /// pr_info!("Setting up foo");885885+ /// pr_info!("Setting up foo\n");886886 /// }887887 /// }888888 ///···986986 ///987987 /// impl Foo {988988 /// fn setup(&mut self) {989989- /// pr_info!("Setting up foo");989989+ /// pr_info!("Setting up foo\n");990990 /// }991991 /// }992992 ///···13361336/// #[pinned_drop]13371337/// impl PinnedDrop for Foo {13381338/// fn drop(self: Pin<&mut Self>) {13391339-/// pr_info!("Foo is being dropped!");13391339+/// pr_info!("Foo is being dropped!\n");13401340/// }13411341/// }13421342/// ```···14181418 // SAFETY: `T: Zeroable` and `UnsafeCell` is `repr(transparent)`.14191419 {<T: ?Sized + Zeroable>} UnsafeCell<T>,1420142014211421- // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee).14211421+ // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee:14221422+ // https://doc.rust-lang.org/stable/std/option/index.html#representation).14221423 Option<NonZeroU8>, Option<NonZeroU16>, Option<NonZeroU32>, Option<NonZeroU64>,14231424 Option<NonZeroU128>, Option<NonZeroUsize>,14241425 Option<NonZeroI8>, Option<NonZeroI16>, Option<NonZeroI32>, Option<NonZeroI64>,14251426 Option<NonZeroI128>, Option<NonZeroIsize>,14261426-14271427- // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee).14281428- //14291429- // In this case we are allowed to use `T: ?Sized`, since all zeros is the `None` variant.14301430- {<T: ?Sized>} Option<NonNull<T>>,14311431- {<T: ?Sized>} Option<KBox<T>>,14271427+ {<T>} Option<NonNull<T>>,14281428+ {<T>} Option<KBox<T>>,1432142914331430 // SAFETY: `null` pointer is valid.14341431 //
+3-3
rust/kernel/init/macros.rs
···4545//! #[pinned_drop]4646//! impl PinnedDrop for Foo {4747//! fn drop(self: Pin<&mut Self>) {4848-//! pr_info!("{self:p} is getting dropped.");4848+//! pr_info!("{self:p} is getting dropped.\n");4949//! }5050//! }5151//!···412412//! #[pinned_drop]413413//! impl PinnedDrop for Foo {414414//! fn drop(self: Pin<&mut Self>) {415415-//! pr_info!("{self:p} is getting dropped.");415415+//! pr_info!("{self:p} is getting dropped.\n");416416//! }417417//! }418418//! ```···423423//! // `unsafe`, full path and the token parameter are added, everything else stays the same.424424//! unsafe impl ::kernel::init::PinnedDrop for Foo {425425//! fn drop(self: Pin<&mut Self>, _: ::kernel::init::__internal::OnlyCallFromDrop) {426426-//! pr_info!("{self:p} is getting dropped.");426426+//! pr_info!("{self:p} is getting dropped.\n");427427//! }428428//! }429429//! ```
+1-1
rust/kernel/lib.rs
···66//! usage by Rust code in the kernel and is shared by all of them.77//!88//! In other words, all the rest of the Rust code in the kernel (e.g. kernel99-//! modules written in Rust) depends on [`core`], [`alloc`] and this crate.99+//! modules written in Rust) depends on [`core`] and this crate.1010//!1111//! If you need a kernel C API that is not ported or wrapped yet here, then1212//! do so first instead of bypassing this crate.
···5555/// fn print_bytes_used(dir: &Directory, file: &File) {5656/// let guard = dir.inner.lock();5757/// let inner_file = file.inner.access(&guard);5858-/// pr_info!("{} {}", guard.bytes_used, inner_file.bytes_used);5858+/// pr_info!("{} {}\n", guard.bytes_used, inner_file.bytes_used);5959/// }6060///6161/// /// Increments `bytes_used` for both the directory and file.
+1-1
rust/kernel/task.rs
···320320321321 /// Wakes up the task.322322 pub fn wake_up(&self) {323323- // SAFETY: It's always safe to call `signal_pending` on a valid task, even if the task323323+ // SAFETY: It's always safe to call `wake_up_process` on a valid task, even if the task324324 // running.325325 unsafe { bindings::wake_up_process(self.as_ptr()) };326326 }
+3-3
rust/kernel/workqueue.rs
···6060//! type Pointer = Arc<MyStruct>;6161//!6262//! fn run(this: Arc<MyStruct>) {6363-//! pr_info!("The value is: {}", this.value);6363+//! pr_info!("The value is: {}\n", this.value);6464//! }6565//! }6666//!···108108//! type Pointer = Arc<MyStruct>;109109//!110110//! fn run(this: Arc<MyStruct>) {111111-//! pr_info!("The value is: {}", this.value_1);111111+//! pr_info!("The value is: {}\n", this.value_1);112112//! }113113//! }114114//!···116116//! type Pointer = Arc<MyStruct>;117117//!118118//! fn run(this: Arc<MyStruct>) {119119-//! pr_info!("The second value is: {}", this.value_2);119119+//! pr_info!("The second value is: {}\n", this.value_2);120120//! }121121//! }122122//!
+42-29
scripts/generate_rust_analyzer.py
···5757 crates_indexes[display_name] = len(crates)5858 crates.append(crate)59596060- # First, the ones in `rust/` since they are a bit special.6161- append_crate(6262- "core",6363- sysroot_src / "core" / "src" / "lib.rs",6464- [],6565- cfg=crates_cfgs.get("core", []),6666- is_workspace_member=False,6767- )6060+ def append_sysroot_crate(6161+ display_name,6262+ deps,6363+ cfg=[],6464+ ):6565+ append_crate(6666+ display_name,6767+ sysroot_src / display_name / "src" / "lib.rs",6868+ deps,6969+ cfg,7070+ is_workspace_member=False,7171+ )7272+7373+ # NB: sysroot crates reexport items from one another so setting up our transitive dependencies7474+ # here is important for ensuring that rust-analyzer can resolve symbols. The sources of truth7575+ # for this dependency graph are `(sysroot_src / crate / "Cargo.toml" for crate in crates)`.7676+ append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", []))7777+ append_sysroot_crate("alloc", ["core"])7878+ append_sysroot_crate("std", ["alloc", "core"])7979+ append_sysroot_crate("proc_macro", ["core", "std"])68806981 append_crate(7082 "compiler_builtins",···8775 append_crate(8876 "macros",8977 srctree / "rust" / "macros" / "lib.rs",9090- [],7878+ ["std", "proc_macro"],9179 is_proc_macro=True,9280 )9381···9785 ["core", "compiler_builtins"],9886 )9987100100- append_crate(101101- "bindings",102102- srctree / "rust"/ "bindings" / "lib.rs",103103- ["core"],104104- cfg=cfg,105105- )106106- crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True))8888+ def append_crate_with_generated(8989+ display_name,9090+ deps,9191+ ):9292+ append_crate(9393+ display_name,9494+ srctree / "rust"/ display_name / "lib.rs",9595+ deps,9696+ cfg=cfg,9797+ )9898+ crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True))9999+ crates[-1]["source"] = {100100+ "include_dirs": [101101+ str(srctree / "rust" / display_name),102102+ str(objtree / "rust")103103+ ],104104+ "exclude_dirs": [],105105+ }107106108108- append_crate(109109- "kernel",110110- srctree / "rust" / "kernel" / "lib.rs",111111- ["core", "macros", "build_error", "bindings"],112112- cfg=cfg,113113- )114114- crates[-1]["source"] = {115115- "include_dirs": [116116- str(srctree / "rust" / "kernel"),117117- str(objtree / "rust")118118- ],119119- "exclude_dirs": [],120120- }107107+ append_crate_with_generated("bindings", ["core"])108108+ append_crate_with_generated("uapi", ["core"])109109+ append_crate_with_generated("kernel", ["core", "macros", "build_error", "bindings", "uapi"])121110122111 def is_root_crate(build_file, target):123112 try:
+1-1
scripts/package/install-extmod-build
···6363 # Clear VPATH and srcroot because the source files reside in the output6464 # directory.6565 # shellcheck disable=SC2016 # $(MAKE) and $(build) will be expanded by Make6666- "${MAKE}" run-command KBUILD_RUN_COMMAND='+$(MAKE) HOSTCC='"${CC}"' VPATH= srcroot=. $(build)='"${destdir}"/scripts6666+ "${MAKE}" run-command KBUILD_RUN_COMMAND='+$(MAKE) HOSTCC='"${CC}"' VPATH= srcroot=. $(build)='"$(realpath --relative-base=. "${destdir}")"/scripts67676868 rm -f "${destdir}/scripts/Kbuild"6969fi
+2-2
scripts/rustdoc_test_gen.rs
···1515//! - Test code should be able to define functions and call them, without having to carry1616//! the context.1717//!1818-//! - Later on, we may want to be able to test non-kernel code (e.g. `core`, `alloc` or1919-//! third-party crates) which likely use the standard library `assert*!` macros.1818+//! - Later on, we may want to be able to test non-kernel code (e.g. `core` or third-party1919+//! crates) which likely use the standard library `assert*!` macros.2020//!2121//! For this reason, instead of the passed context, `kunit_get_current_test()` is used instead2222//! (i.e. `current->kunit_test`).
+30-16
sound/core/seq/seq_clientmgr.c
···106106 return clienttab[clientid];107107}108108109109-struct snd_seq_client *snd_seq_client_use_ptr(int clientid)109109+static struct snd_seq_client *client_use_ptr(int clientid, bool load_module)110110{111111 unsigned long flags;112112 struct snd_seq_client *client;···126126 }127127 spin_unlock_irqrestore(&clients_lock, flags);128128#ifdef CONFIG_MODULES129129- if (!in_interrupt()) {129129+ if (load_module) {130130 static DECLARE_BITMAP(client_requested, SNDRV_SEQ_GLOBAL_CLIENTS);131131 static DECLARE_BITMAP(card_requested, SNDRV_CARDS);132132···168168 return client;169169}170170171171+/* get snd_seq_client object for the given id quickly */172172+struct snd_seq_client *snd_seq_client_use_ptr(int clientid)173173+{174174+ return client_use_ptr(clientid, false);175175+}176176+177177+/* get snd_seq_client object for the given id;178178+ * if not found, retry after loading the modules179179+ */180180+static struct snd_seq_client *client_load_and_use_ptr(int clientid)181181+{182182+ return client_use_ptr(clientid, IS_ENABLED(CONFIG_MODULES));183183+}184184+171185/* Take refcount and perform ioctl_mutex lock on the given client;172186 * used only for OSS sequencer173187 * Unlock via snd_seq_client_ioctl_unlock() below···190176{191177 struct snd_seq_client *client;192178193193- client = snd_seq_client_use_ptr(clientid);179179+ client = client_load_and_use_ptr(clientid);194180 if (!client)195181 return false;196182 mutex_lock(&client->ioctl_mutex);···12091195 int err = 0;1210119612111197 /* requested client number */12121212- cptr = snd_seq_client_use_ptr(info->client);11981198+ cptr = client_load_and_use_ptr(info->client);12131199 if (cptr == NULL)12141200 return -ENOENT; /* don't change !!! */12151201···12711257 struct snd_seq_client *cptr;1272125812731259 /* requested client number */12741274- cptr = snd_seq_client_use_ptr(client_info->client);12601260+ cptr = client_load_and_use_ptr(client_info->client);12751261 if (cptr == NULL)12761262 return -ENOENT; /* don't change !!! */12771263···14101396 struct snd_seq_client *cptr;14111397 struct snd_seq_client_port *port;1412139814131413- cptr = snd_seq_client_use_ptr(info->addr.client);13991399+ cptr = client_load_and_use_ptr(info->addr.client);14141400 if (cptr == NULL)14151401 return -ENXIO;14161402···15171503 struct snd_seq_client *receiver = NULL, *sender = NULL;15181504 struct snd_seq_client_port *sport = NULL, *dport = NULL;1519150515201520- receiver = snd_seq_client_use_ptr(subs->dest.client);15061506+ receiver = client_load_and_use_ptr(subs->dest.client);15211507 if (!receiver)15221508 goto __end;15231523- sender = snd_seq_client_use_ptr(subs->sender.client);15091509+ sender = client_load_and_use_ptr(subs->sender.client);15241510 if (!sender)15251511 goto __end;15261512 sport = snd_seq_port_use_ptr(sender, subs->sender.port);···18851871 struct snd_seq_client_pool *info = arg;18861872 struct snd_seq_client *cptr;1887187318881888- cptr = snd_seq_client_use_ptr(info->client);18741874+ cptr = client_load_and_use_ptr(info->client);18891875 if (cptr == NULL)18901876 return -ENOENT;18911877 memset(info, 0, sizeof(*info));···19891975 struct snd_seq_client_port *sport = NULL;1990197619911977 result = -EINVAL;19921992- sender = snd_seq_client_use_ptr(subs->sender.client);19781978+ sender = client_load_and_use_ptr(subs->sender.client);19931979 if (!sender)19941980 goto __end;19951981 sport = snd_seq_port_use_ptr(sender, subs->sender.port);···20202006 struct list_head *p;20212007 int i;2022200820232023- cptr = snd_seq_client_use_ptr(subs->root.client);20092009+ cptr = client_load_and_use_ptr(subs->root.client);20242010 if (!cptr)20252011 goto __end;20262012 port = snd_seq_port_use_ptr(cptr, subs->root.port);···20872073 if (info->client < 0)20882074 info->client = 0;20892075 for (; info->client < SNDRV_SEQ_MAX_CLIENTS; info->client++) {20902090- cptr = snd_seq_client_use_ptr(info->client);20762076+ cptr = client_load_and_use_ptr(info->client);20912077 if (cptr)20922078 break; /* found */20932079 }···21102096 struct snd_seq_client *cptr;21112097 struct snd_seq_client_port *port = NULL;2112209821132113- cptr = snd_seq_client_use_ptr(info->addr.client);20992099+ cptr = client_load_and_use_ptr(info->addr.client);21142100 if (cptr == NULL)21152101 return -ENXIO;21162102···22072193 size = sizeof(struct snd_ump_endpoint_info);22082194 else22092195 size = sizeof(struct snd_ump_block_info);22102210- cptr = snd_seq_client_use_ptr(client);21962196+ cptr = client_load_and_use_ptr(client);22112197 if (!cptr)22122198 return -ENOENT;22132199···24892475 if (check_event_type_and_length(ev))24902476 return -EINVAL;2491247724922492- cptr = snd_seq_client_use_ptr(client);24782478+ cptr = client_load_and_use_ptr(client);24932479 if (cptr == NULL)24942480 return -EINVAL;24952481···2721270727222708 /* list the client table */27232709 for (c = 0; c < SNDRV_SEQ_MAX_CLIENTS; c++) {27242724- client = snd_seq_client_use_ptr(c);27102710+ client = client_load_and_use_ptr(c);27252711 if (client == NULL)27262712 continue;27272713 if (client->type == NO_CLIENT) {
···78787979 bool use_ring_sense;8080 unsigned int tip_debounce_ms;8181+ unsigned int tip_fall_db_ms;8282+ unsigned int tip_rise_db_ms;8183 unsigned int bias_low;8284 unsigned int bias_sense_ua;8385 unsigned int bias_ramp_ms;···9795 bool button_detect_running;9896 bool jack_present;9997 int jack_override;9898+ bool suspend_jack_debounce;10099101100 struct work_struct hp_ilimit_work;102101 struct delayed_work hp_ilimit_clear_work;
+3
sound/soc/codecs/rt1320-sdw.c
···535535 /* set the timeout values */536536 prop->clk_stop_timeout = 64;537537538538+ /* BIOS may set wake_capable. Make sure it is 0 as wake events are disabled. */539539+ prop->wake_capable = 0;540540+538541 return 0;539542}540543
+4
sound/soc/codecs/rt722-sdca-sdw.c
···8686 case 0x6100067:8787 case 0x6100070 ... 0x610007c:8888 case 0x6100080:8989+ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN,9090+ CH_01) ...9191+ SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN,9292+ CH_04):8993 case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME,9094 CH_01):9195 case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME,
+11-2
sound/soc/codecs/wm0010.c
···920920 if (ret) {921921 dev_err(wm0010->dev, "Failed to set IRQ %d as wake source: %d\n",922922 irq, ret);923923- return ret;923923+ goto free_irq;924924 }925925926926 if (spi->max_speed_hz)···932932 &soc_component_dev_wm0010, wm0010_dai,933933 ARRAY_SIZE(wm0010_dai));934934 if (ret < 0)935935- return ret;935935+ goto disable_irq_wake;936936937937 return 0;938938+939939+disable_irq_wake:940940+ irq_set_irq_wake(wm0010->irq, 0);941941+942942+free_irq:943943+ if (wm0010->irq)944944+ free_irq(wm0010->irq, wm0010);945945+946946+ return ret;938947}939948940949static void wm0010_spi_remove(struct spi_device *spi)
+2-2
sound/soc/codecs/wsa884x.c
···18751875 * Reading temperature is possible only when Power Amplifier is18761876 * off. Report last cached data.18771877 */18781878- *temp = wsa884x->temperature;18781878+ *temp = wsa884x->temperature * 1000;18791879 return 0;18801880 }18811881···19341934 if ((val > WSA884X_LOW_TEMP_THRESHOLD) &&19351935 (val < WSA884X_HIGH_TEMP_THRESHOLD)) {19361936 wsa884x->temperature = val;19371937- *temp = val;19371937+ *temp = val * 1000;19381938 ret = 0;19391939 } else {19401940 ret = -EAGAIN;
+1-1
sound/soc/intel/boards/sof_sdw.c
···954954955955 /* generate DAI links by each sdw link */956956 while (sof_dais->initialised) {957957- int current_be_id;957957+ int current_be_id = 0;958958959959 ret = create_sdw_dailink(card, sof_dais, dai_links,960960 &current_be_id, codec_conf);
+7-8
sound/soc/soc-ops.c
···337337 if (ucontrol->value.integer.value[0] < 0)338338 return -EINVAL;339339 val = ucontrol->value.integer.value[0];340340- if (mc->platform_max && ((int)val + min) > mc->platform_max)340340+ if (mc->platform_max && val > mc->platform_max)341341 return -EINVAL;342342 if (val > max - min)343343 return -EINVAL;···350350 if (ucontrol->value.integer.value[1] < 0)351351 return -EINVAL;352352 val2 = ucontrol->value.integer.value[1];353353- if (mc->platform_max && ((int)val2 + min) > mc->platform_max)353353+ if (mc->platform_max && val2 > mc->platform_max)354354 return -EINVAL;355355 if (val2 > max - min)356356 return -EINVAL;···503503{504504 struct soc_mixer_control *mc =505505 (struct soc_mixer_control *)kcontrol->private_value;506506- int platform_max;507507- int min = mc->min;506506+ int max;508507509509- if (!mc->platform_max)510510- mc->platform_max = mc->max;511511- platform_max = mc->platform_max;508508+ max = mc->max - mc->min;509509+ if (mc->platform_max && mc->platform_max < max)510510+ max = mc->platform_max;512511513512 uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;514513 uinfo->count = snd_soc_volsw_is_stereo(mc) ? 2 : 1;515514 uinfo->value.integer.min = 0;516516- uinfo->value.integer.max = platform_max - min;515515+ uinfo->value.integer.max = max;517516518517 return 0;519518}
···151151static void snd_usx2y_card_private_free(struct snd_card *card);152152static void usx2y_unlinkseq(struct snd_usx2y_async_seq *s);153153154154+#ifdef USX2Y_NRPACKS_VARIABLE155155+int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */156156+module_param(nrpacks, int, 0444);157157+MODULE_PARM_DESC(nrpacks, "Number of packets per URB.");158158+#endif159159+154160/*155161 * pipe 4 is used for switching the lamps, setting samplerate, volumes ....156162 */···437431 struct usb_device *device = interface_to_usbdev(intf);438432 struct snd_card *card;439433 int err;434434+435435+#ifdef USX2Y_NRPACKS_VARIABLE436436+ if (nrpacks < 0 || nrpacks > USX2Y_NRPACKS_MAX)437437+ return -EINVAL;438438+#endif440439441440 if (le16_to_cpu(device->descriptor.idVendor) != 0x1604 ||442441 (le16_to_cpu(device->descriptor.idProduct) != USB_ID_US122 &&
+26
sound/usb/usx2y/usbusx2y.h
···7788#define NRURBS 2991010+/* Default value used for nr of packs per urb.1111+ * 1 to 4 have been tested ok on uhci.1212+ * To use 3 on ohci, you'd need a patch:1313+ * look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on1414+ * "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425"1515+ *1616+ * 1, 2 and 4 work out of the box on ohci, if I recall correctly.1717+ * Bigger is safer operation, smaller gives lower latencies.1818+ */1919+#define USX2Y_NRPACKS 42020+2121+#define USX2Y_NRPACKS_MAX 10242222+2323+/* If your system works ok with this module's parameter2424+ * nrpacks set to 1, you might as well comment2525+ * this define out, and thereby produce smaller, faster code.2626+ * You'd also set USX2Y_NRPACKS to 1 then.2727+ */2828+#define USX2Y_NRPACKS_VARIABLE 12929+3030+#ifdef USX2Y_NRPACKS_VARIABLE3131+extern int nrpacks;3232+#define nr_of_packs() nrpacks3333+#else3434+#define nr_of_packs() USX2Y_NRPACKS3535+#endif10361137#define URBS_ASYNC_SEQ 101238#define URB_DATA_LEN_ASYNC_SEQ 32
-27
sound/usb/usx2y/usbusx2yaudio.c
···2828#include "usx2y.h"2929#include "usbusx2y.h"30303131-/* Default value used for nr of packs per urb.3232- * 1 to 4 have been tested ok on uhci.3333- * To use 3 on ohci, you'd need a patch:3434- * look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on3535- * "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425"3636- *3737- * 1, 2 and 4 work out of the box on ohci, if I recall correctly.3838- * Bigger is safer operation, smaller gives lower latencies.3939- */4040-#define USX2Y_NRPACKS 44141-4242-/* If your system works ok with this module's parameter4343- * nrpacks set to 1, you might as well comment4444- * this define out, and thereby produce smaller, faster code.4545- * You'd also set USX2Y_NRPACKS to 1 then.4646- */4747-#define USX2Y_NRPACKS_VARIABLE 14848-4949-#ifdef USX2Y_NRPACKS_VARIABLE5050-static int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */5151-#define nr_of_packs() nrpacks5252-module_param(nrpacks, int, 0444);5353-MODULE_PARM_DESC(nrpacks, "Number of packets per URB.");5454-#else5555-#define nr_of_packs() USX2Y_NRPACKS5656-#endif5757-5831static int usx2y_urb_capt_retire(struct snd_usx2y_substream *subs)5932{6033 struct urb *urb = subs->completed_urb;
+2
tools/testing/selftests/damon/damon_nr_regions.py
···65656666 test_name = 'nr_regions test with %d/%d/%d real/min/max nr_regions' % (6767 real_nr_regions, min_nr_regions, max_nr_regions)6868+ collected_nr_regions.sort()6869 if (collected_nr_regions[0] < min_nr_regions or6970 collected_nr_regions[-1] > max_nr_regions):7071 print('fail %s' % test_name)···110109 attrs = kdamonds.kdamonds[0].contexts[0].monitoring_attrs111110 attrs.min_nr_regions = 3112111 attrs.max_nr_regions = 7112112+ attrs.update_us = 100000113113 err = kdamonds.kdamonds[0].commit()114114 if err is not None:115115 proc.terminate()
+6-3
tools/testing/selftests/damon/damos_quota.py
···5151 nr_quota_exceeds = scheme.stats.qt_exceeds52525353 wss_collected.sort()5454+ nr_expected_quota_exceeds = 05455 for wss in wss_collected:5556 if wss > sz_quota:5657 print('quota is not kept: %s > %s' % (wss, sz_quota))5758 print('collected samples are as below')5859 print('\n'.join(['%d' % wss for wss in wss_collected]))5960 exit(1)6161+ if wss == sz_quota:6262+ nr_expected_quota_exceeds += 160636161- if nr_quota_exceeds < len(wss_collected):6262- print('quota is not always exceeded: %d > %d' %6363- (len(wss_collected), nr_quota_exceeds))6464+ if nr_quota_exceeds < nr_expected_quota_exceeds:6565+ print('quota is exceeded less than expected: %d < %d' %6666+ (nr_quota_exceeds, nr_expected_quota_exceeds))6467 exit(1)65686669if __name__ == '__main__':
+3
tools/testing/selftests/damon/damos_quota_goal.py
···6363 if last_effective_bytes != 0 else -1.0))64646565 if last_effective_bytes == goal.effective_bytes:6666+ # effective quota was already minimum that cannot be more reduced6767+ if expect_increase is False and last_effective_bytes == 1:6868+ continue6669 print('efective bytes not changed: %d' % goal.effective_bytes)6770 exit(1)6871
···
 #!/usr/bin/env python3
 # SPDX-License-Identifier: GPL-2.0
 
+import os
+import random, string, time
 from lib.py import ksft_run, ksft_exit
-from lib.py import ksft_eq
-from lib.py import NetDrvEpEnv
+from lib.py import ksft_eq, KsftSkipEx, KsftFailEx
+from lib.py import EthtoolFamily, NetDrvEpEnv
 from lib.py import bkg, cmd, wait_port_listen, rand_port
+from lib.py import ethtool, ip
 
+remote_ifname=""
+no_sleep=False
 
-def test_v4(cfg) -> None:
+def _test_v4(cfg) -> None:
     cfg.require_v4()
 
     cmd(f"ping -c 1 -W0.5 {cfg.remote_v4}")
     cmd(f"ping -c 1 -W0.5 {cfg.v4}", host=cfg.remote)
+    cmd(f"ping -s 65000 -c 1 -W0.5 {cfg.remote_v4}")
+    cmd(f"ping -s 65000 -c 1 -W0.5 {cfg.v4}", host=cfg.remote)
 
-
-def test_v6(cfg) -> None:
+def _test_v6(cfg) -> None:
     cfg.require_v6()
 
-    cmd(f"ping -c 1 -W0.5 {cfg.remote_v6}")
-    cmd(f"ping -c 1 -W0.5 {cfg.v6}", host=cfg.remote)
+    cmd(f"ping -c 1 -W5 {cfg.remote_v6}")
+    cmd(f"ping -c 1 -W5 {cfg.v6}", host=cfg.remote)
+    cmd(f"ping -s 65000 -c 1 -W0.5 {cfg.remote_v6}")
+    cmd(f"ping -s 65000 -c 1 -W0.5 {cfg.v6}", host=cfg.remote)
 
-
-def test_tcp(cfg) -> None:
+def _test_tcp(cfg) -> None:
     cfg.require_cmd("socat", remote=True)
 
     port = rand_port()
     listen_cmd = f"socat -{cfg.addr_ipver} -t 2 -u TCP-LISTEN:{port},reuseport STDOUT"
 
+    test_string = ''.join(random.choice(string.ascii_lowercase) for _ in range(65536))
     with bkg(listen_cmd, exit_wait=True) as nc:
         wait_port_listen(port)
 
-        cmd(f"echo ping | socat -t 2 -u STDIN TCP:{cfg.baddr}:{port}",
+        cmd(f"echo {test_string} | socat -t 2 -u STDIN TCP:{cfg.baddr}:{port}",
             shell=True, host=cfg.remote)
-    ksft_eq(nc.stdout.strip(), "ping")
+    ksft_eq(nc.stdout.strip(), test_string)
 
+    test_string = ''.join(random.choice(string.ascii_lowercase) for _ in range(65536))
     with bkg(listen_cmd, host=cfg.remote, exit_wait=True) as nc:
         wait_port_listen(port, host=cfg.remote)
 
-        cmd(f"echo ping | socat -t 2 -u STDIN TCP:{cfg.remote_baddr}:{port}", shell=True)
-    ksft_eq(nc.stdout.strip(), "ping")
+        cmd(f"echo {test_string} | socat -t 2 -u STDIN TCP:{cfg.remote_baddr}:{port}", shell=True)
+    ksft_eq(nc.stdout.strip(), test_string)
 
+def _set_offload_checksum(cfg, netnl, on) -> None:
+    try:
+        ethtool(f" -K {cfg.ifname} rx {on} tx {on} ")
+    except:
+        return
+
+def _set_xdp_generic_sb_on(cfg) -> None:
+    test_dir = os.path.dirname(os.path.realpath(__file__))
+    prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
+    cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
+    cmd(f"ip link set dev {cfg.ifname} mtu 1500 xdpgeneric obj {prog} sec xdp", shell=True)
+
+    if no_sleep != True:
+        time.sleep(10)
+
+def _set_xdp_generic_mb_on(cfg) -> None:
+    test_dir = os.path.dirname(os.path.realpath(__file__))
+    prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
+    cmd(f"ip link set dev {remote_ifname} mtu 9000", shell=True, host=cfg.remote)
+    ip("link set dev %s mtu 9000 xdpgeneric obj %s sec xdp.frags" % (cfg.ifname, prog))
+
+    if no_sleep != True:
+        time.sleep(10)
+
+def _set_xdp_native_sb_on(cfg) -> None:
+    test_dir = os.path.dirname(os.path.realpath(__file__))
+    prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
+    cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
+    cmd(f"ip -j link set dev {cfg.ifname} mtu 1500 xdp obj {prog} sec xdp", shell=True)
+    xdp_info = ip("-d link show %s" % (cfg.ifname), json=True)[0]
+    if xdp_info['xdp']['mode'] != 1:
+        """
+        If the interface doesn't support native-mode, it falls back to generic mode.
+        The mode value 1 is native and 2 is generic.
+        So it raises an exception if mode is not 1(native mode).
+        """
+        raise KsftSkipEx('device does not support native-XDP')
+
+    if no_sleep != True:
+        time.sleep(10)
+
+def _set_xdp_native_mb_on(cfg) -> None:
+    test_dir = os.path.dirname(os.path.realpath(__file__))
+    prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
+    cmd(f"ip link set dev {remote_ifname} mtu 9000", shell=True, host=cfg.remote)
+    try:
+        cmd(f"ip link set dev {cfg.ifname} mtu 9000 xdp obj {prog} sec xdp.frags", shell=True)
+    except Exception as e:
+        cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
+        raise KsftSkipEx('device does not support native-multi-buffer XDP')
+
+    if no_sleep != True:
+        time.sleep(10)
+
+def _set_xdp_offload_on(cfg) -> None:
+    test_dir = os.path.dirname(os.path.realpath(__file__))
+    prog = test_dir + "/../../net/lib/xdp_dummy.bpf.o"
+    cmd(f"ip link set dev {cfg.ifname} mtu 1500", shell=True)
+    try:
+        cmd(f"ip link set dev {cfg.ifname} xdpoffload obj {prog} sec xdp", shell=True)
+    except Exception as e:
+        raise KsftSkipEx('device does not support offloaded XDP')
+    cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
+
+    if no_sleep != True:
+        time.sleep(10)
+
+def get_interface_info(cfg) -> None:
+    global remote_ifname
+    global no_sleep
+
+    remote_info = cmd(f"ip -4 -o addr show to {cfg.remote_v4} | awk '{{print $2}}'", shell=True, host=cfg.remote).stdout
+    remote_ifname = remote_info.rstrip('\n')
+    if remote_ifname == "":
+        raise KsftFailEx('Can not get remote interface')
+    local_info = ip("-d link show %s" % (cfg.ifname), json=True)[0]
+    if 'parentbus' in local_info and local_info['parentbus'] == "netdevsim":
+        no_sleep=True
+    if 'linkinfo' in local_info and local_info['linkinfo']['info_kind'] == "veth":
+        no_sleep=True
+
+def set_interface_init(cfg) -> None:
+    cmd(f"ip link set dev {cfg.ifname} mtu 1500", shell=True)
+    cmd(f"ip link set dev {cfg.ifname} xdp off ", shell=True)
+    cmd(f"ip link set dev {cfg.ifname} xdpgeneric off ", shell=True)
+    cmd(f"ip link set dev {cfg.ifname} xdpoffload off", shell=True)
+    cmd(f"ip link set dev {remote_ifname} mtu 1500", shell=True, host=cfg.remote)
+
+def test_default(cfg, netnl) -> None:
+    _set_offload_checksum(cfg, netnl, "off")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    _set_offload_checksum(cfg, netnl, "on")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+
+def test_xdp_generic_sb(cfg, netnl) -> None:
+    _set_xdp_generic_sb_on(cfg)
+    _set_offload_checksum(cfg, netnl, "off")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    _set_offload_checksum(cfg, netnl, "on")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    ip("link set dev %s xdpgeneric off" % cfg.ifname)
+
+def test_xdp_generic_mb(cfg, netnl) -> None:
+    _set_xdp_generic_mb_on(cfg)
+    _set_offload_checksum(cfg, netnl, "off")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    _set_offload_checksum(cfg, netnl, "on")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    ip("link set dev %s xdpgeneric off" % cfg.ifname)
+
+def test_xdp_native_sb(cfg, netnl) -> None:
+    _set_xdp_native_sb_on(cfg)
+    _set_offload_checksum(cfg, netnl, "off")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    _set_offload_checksum(cfg, netnl, "on")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    ip("link set dev %s xdp off" % cfg.ifname)
+
+def test_xdp_native_mb(cfg, netnl) -> None:
+    _set_xdp_native_mb_on(cfg)
+    _set_offload_checksum(cfg, netnl, "off")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    _set_offload_checksum(cfg, netnl, "on")
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    ip("link set dev %s xdp off" % cfg.ifname)
+
+def test_xdp_offload(cfg, netnl) -> None:
+    _set_xdp_offload_on(cfg)
+    _test_v4(cfg)
+    _test_v6(cfg)
+    _test_tcp(cfg)
+    ip("link set dev %s xdpoffload off" % cfg.ifname)
 
 def main() -> None:
     with NetDrvEpEnv(__file__) as cfg:
-        ksft_run(globs=globals(), case_pfx={"test_"}, args=(cfg, ))
+        get_interface_info(cfg)
+        set_interface_init(cfg)
+        ksft_run([test_default,
+                  test_xdp_generic_sb,
+                  test_xdp_generic_mb,
+                  test_xdp_native_sb,
+                  test_xdp_native_mb,
+                  test_xdp_offload],
+                 args=(cfg, EthtoolFamily()))
+        set_interface_init(cfg)
     ksft_exit()
 
 
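The reworked TCP test pushes a 64 KiB random payload through socat instead of the old 4-byte "ping", so it now exercises multi-segment delivery. The payload round-trip it verifies can be sketched locally with a socketpair; this is only an illustration of the check, not the selftest itself (no socat or remote host involved):

```python
import random
import socket
import string
import threading

# same style of payload as the selftest builds
test_string = ''.join(random.choice(string.ascii_lowercase) for _ in range(65536))

# local stand-in for the socat sender/listener pair
a, b = socket.socketpair()

def sender():
    a.sendall(test_string.encode())
    a.shutdown(socket.SHUT_WR)  # signal EOF to the receiver

t = threading.Thread(target=sender)
t.start()

received = bytearray()
while True:
    chunk = b.recv(65536)
    if not chunk:
        break
    received += chunk
t.join()

# mirrors ksft_eq(nc.stdout.strip(), test_string)
assert received.decode() == test_string
```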
+13-8
tools/testing/selftests/kvm/mmu_stress_test.c
···
 #include "ucall_common.h"
 
 static bool mprotect_ro_done;
+static bool all_vcpus_hit_ro_fault;
 
 static void guest_code(uint64_t start_gpa, uint64_t end_gpa, uint64_t stride)
 {
···
	/*
	 * Write to the region while mprotect(PROT_READ) is underway.  Keep
-	 * looping until the memory is guaranteed to be read-only, otherwise
-	 * vCPUs may complete their writes and advance to the next stage
-	 * prematurely.
+	 * looping until the memory is guaranteed to be read-only and a fault
+	 * has occurred, otherwise vCPUs may complete their writes and advance
+	 * to the next stage prematurely.
	 *
	 * For architectures that support skipping the faulting instruction,
	 * generate the store via inline assembly to ensure the exact length
···
 #else
		vcpu_arch_put_guest(*((volatile uint64_t *)gpa), gpa);
 #endif
-	} while (!READ_ONCE(mprotect_ro_done));
+	} while (!READ_ONCE(mprotect_ro_done) || !READ_ONCE(all_vcpus_hit_ro_fault));
 
	/*
	 * Only architectures that write the entire range can explicitly sync,
···
 static int nr_vcpus;
 static atomic_t rendezvous;
+static atomic_t nr_ro_faults;
 
 static void rendezvous_with_boss(void)
 {
···
	 * be stuck on the faulting instruction for other architectures.  Go to
	 * stage 3 without a rendezvous
	 */
-	do {
-		r = _vcpu_run(vcpu);
-	} while (!r);
+	r = _vcpu_run(vcpu);
	TEST_ASSERT(r == -1 && errno == EFAULT,
		    "Expected EFAULT on write to RO memory, got r = %d, errno = %d", r, errno);
+
+	atomic_inc(&nr_ro_faults);
+	if (atomic_read(&nr_ro_faults) == nr_vcpus) {
+		WRITE_ONCE(all_vcpus_hit_ro_fault, true);
+		sync_global_to_guest(vm, all_vcpus_hit_ro_fault);
+	}
 
 #if defined(__x86_64__) || defined(__aarch64__)
	/*
···
	rendezvous_with_vcpus(&time_run2, "run 2");
 
	mprotect(mem, slot_size, PROT_READ);
-	usleep(10);
	mprotect_ro_done = true;
	sync_global_to_guest(vm, mprotect_ro_done);
 
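The new `nr_ro_faults` counter replaces the `usleep(10)` heuristic: the last vCPU to take the EFAULT flips a global flag, and every guest keeps looping until that flag is visible. The same last-one-sets-the-flag rendezvous, sketched with host threads standing in for vCPUs (pure illustration, not the selftest's API):

```python
import threading

NR_VCPUS = 4

nr_ro_faults = 0
counter_lock = threading.Lock()
all_vcpus_hit_ro_fault = threading.Event()

def vcpu_worker():
    """Stand-in for one vCPU taking an EFAULT on the read-only write."""
    global nr_ro_faults
    with counter_lock:
        nr_ro_faults += 1          # atomic_inc(&nr_ro_faults) in the C test
        if nr_ro_faults == NR_VCPUS:
            # last faulting vCPU releases everyone, like
            # WRITE_ONCE(all_vcpus_hit_ro_fault, true) + sync_global_to_guest()
            all_vcpus_hit_ro_fault.set()
    # every vCPU spins until the flag is set, like the guest do/while loop
    all_vcpus_hit_ro_fault.wait()

threads = [threading.Thread(target=vcpu_worker) for _ in range(NR_VCPUS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the flag is only set once the counter reaches `NR_VCPUS`, no thread can pass the wait before every thread has "faulted", which is exactly the premature-advance bug the patch closes.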
···
	bool bad = false;
	for (i = 0; i < 4095; i++) {
		if (from_host[i] != from_guest[i]) {
-			printf("mismatch at %02hhx | %02hhx %02hhx\n", i, from_host[i], from_guest[i]);
+			printf("mismatch at %u | %02hhx %02hhx\n",
+			       i, from_host[i], from_guest[i]);
			bad = true;
		}
	}
···
 
 int uffd_open_sys(unsigned int flags)
 {
+#ifdef __NR_userfaultfd
	return syscall(__NR_userfaultfd, flags);
+#else
+	return -1;
+#endif
 }
 
 int uffd_open(unsigned int flags)
+14-1
tools/testing/selftests/mm/uffd-stress.c
···
  * pthread_mutex_lock will also verify the atomicity of the memory
  * transfer (UFFDIO_COPY).
  */
-#include <asm-generic/unistd.h>
+
 #include "uffd-common.h"
 
 uint64_t features;
+#ifdef __NR_userfaultfd
 
 #define BOUNCE_RANDOM		(1<<0)
 #define BOUNCE_RACINGFAULTS	(1<<1)
···
		       nr_pages, nr_pages_per_cpu);
	return userfaultfd_stress();
 }
+
+#else /* __NR_userfaultfd */
+
+#warning "missing __NR_userfaultfd definition"
+
+int main(void)
+{
+	printf("skip: Skipping userfaultfd test (missing __NR_userfaultfd)\n");
+	return KSFT_SKIP;
+}
+
+#endif /* __NR_userfaultfd */
+13-1
tools/testing/selftests/mm/uffd-unit-tests.c
···
  * Copyright (C) 2015-2023  Red Hat, Inc.
  */
 
-#include <asm-generic/unistd.h>
 #include "uffd-common.h"
 
 #include "../../../../mm/gup_test.h"
+
+#ifdef __NR_userfaultfd
 
 /* The unit test doesn't need a large or random size, make it 32MB for now */
 #define UFFD_TEST_MEM_SIZE (32UL << 20)
···
	return ksft_get_fail_cnt() ? KSFT_FAIL : KSFT_PASS;
 }
 
+#else /* __NR_userfaultfd */
+
+#warning "missing __NR_userfaultfd definition"
+
+int main(void)
+{
+	printf("Skipping %s (missing __NR_userfaultfd)\n", __FILE__);
+	return KSFT_SKIP;
+}
+
+#endif /* __NR_userfaultfd */
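Both uffd test programs now compile a stub `main()` that reports a skip and returns `KSFT_SKIP` (4) when the architecture headers do not define `__NR_userfaultfd`, instead of failing the build. The control flow of that guard can be sketched as follows; `have_nr_userfaultfd` stands in for the preprocessor check and `run_tests` for the real test body (both hypothetical names):

```python
KSFT_PASS = 0
KSFT_SKIP = 4  # kselftest exit code meaning "test was skipped"

def run_uffd_test(have_nr_userfaultfd, run_tests):
    """Mirror of the #ifdef __NR_userfaultfd / #else / #endif structure."""
    if not have_nr_userfaultfd:
        # the stub main() compiled in the #else branch
        print("skip: Skipping userfaultfd test (missing __NR_userfaultfd)")
        return KSFT_SKIP
    return run_tests()

# Syscall number available: the real tests run.
print(run_uffd_test(True, lambda: KSFT_PASS))   # 0
# No syscall number: report a skip rather than a build or test failure.
print(run_uffd_test(False, lambda: KSFT_PASS))  # 4
```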
···
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+source ./lib.sh
+
+PAUSE_ON_FAIL="no"
+
+# The trap function handler
+#
+exit_cleanup_all()
+{
+	cleanup_all_ns
+
+	exit "${EXIT_STATUS}"
+}
+
+# Add fake IPv4 and IPv6 networks on the loopback device, to be used as
+# underlay by future GRE devices.
+#
+setup_basenet()
+{
+	ip -netns "${NS0}" link set dev lo up
+	ip -netns "${NS0}" address add dev lo 192.0.2.10/24
+	ip -netns "${NS0}" address add dev lo 2001:db8::10/64 nodad
+}
+
+# Check if network device has an IPv6 link-local address assigned.
+#
+# Parameters:
+#
+#   * $1: The network device to test
+#   * $2: An extra regular expression that should be matched (to verify the
+#         presence of extra attributes)
+#   * $3: The expected return code from grep (to allow checking the absence of
+#         a link-local address)
+#   * $4: The user visible name for the scenario being tested
+#
+check_ipv6_ll_addr()
+{
+	local DEV="$1"
+	local EXTRA_MATCH="$2"
+	local XRET="$3"
+	local MSG="$4"
+
+	RET=0
+	set +e
+	ip -netns "${NS0}" -6 address show dev "${DEV}" scope link | grep "fe80::" | grep -q "${EXTRA_MATCH}"
+	check_err_fail "${XRET}" $? ""
+	log_test "${MSG}"
+	set -e
+}
+
+# Create a GRE device and verify that it gets an IPv6 link-local address as
+# expected.
+#
+# Parameters:
+#
+#   * $1: The device type (gre, ip6gre, gretap or ip6gretap)
+#   * $2: The local underlay IP address (can be an IPv4, an IPv6 or "any")
+#   * $3: The remote underlay IP address (can be an IPv4, an IPv6 or "any")
+#   * $4: The IPv6 interface identifier generation mode to use for the GRE
+#         device (eui64, none, stable-privacy or random).
+#
+test_gre_device()
+{
+	local GRE_TYPE="$1"
+	local LOCAL_IP="$2"
+	local REMOTE_IP="$3"
+	local MODE="$4"
+	local ADDR_GEN_MODE
+	local MATCH_REGEXP
+	local MSG
+
+	ip link add netns "${NS0}" name gretest type "${GRE_TYPE}" local "${LOCAL_IP}" remote "${REMOTE_IP}"
+
+	case "${MODE}" in
+	"eui64")
+		ADDR_GEN_MODE=0
+		MATCH_REGEXP=""
+		MSG="${GRE_TYPE}, mode: 0 (EUI64), ${LOCAL_IP} -> ${REMOTE_IP}"
+		XRET=0
+		;;
+	"none")
+		ADDR_GEN_MODE=1
+		MATCH_REGEXP=""
+		MSG="${GRE_TYPE}, mode: 1 (none), ${LOCAL_IP} -> ${REMOTE_IP}"
+		XRET=1 # No link-local address should be generated
+		;;
+	"stable-privacy")
+		ADDR_GEN_MODE=2
+		MATCH_REGEXP="stable-privacy"
+		MSG="${GRE_TYPE}, mode: 2 (stable privacy), ${LOCAL_IP} -> ${REMOTE_IP}"
+		XRET=0
+		# Initialise stable_secret (required for stable-privacy mode)
+		ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.stable_secret="2001:db8::abcd"
+		;;
+	"random")
+		ADDR_GEN_MODE=3
+		MATCH_REGEXP="stable-privacy"
+		MSG="${GRE_TYPE}, mode: 3 (random), ${LOCAL_IP} -> ${REMOTE_IP}"
+		XRET=0
+		;;
+	esac
+
+	# Check that IPv6 link-local address is generated when device goes up
+	ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}"
+	ip -netns "${NS0}" link set dev gretest up
+	check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "config: ${MSG}"
+
+	# Now disable link-local address generation
+	ip -netns "${NS0}" link set dev gretest down
+	ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode=1
+	ip -netns "${NS0}" link set dev gretest up
+
+	# Check that link-local address generation works when re-enabled while
+	# the device is already up
+	ip netns exec "${NS0}" sysctl -qw net.ipv6.conf.gretest.addr_gen_mode="${ADDR_GEN_MODE}"
+	check_ipv6_ll_addr gretest "${MATCH_REGEXP}" "${XRET}" "update: ${MSG}"
+
+	ip -netns "${NS0}" link del dev gretest
+}
+
+test_gre4()
+{
+	local GRE_TYPE
+	local MODE
+
+	for GRE_TYPE in "gre" "gretap"; do
+		printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n"
+
+		for MODE in "eui64" "none" "stable-privacy" "random"; do
+			test_gre_device "${GRE_TYPE}" 192.0.2.10 192.0.2.11 "${MODE}"
+			test_gre_device "${GRE_TYPE}" any 192.0.2.11 "${MODE}"
+			test_gre_device "${GRE_TYPE}" 192.0.2.10 any "${MODE}"
+		done
+	done
+}
+
+test_gre6()
+{
+	local GRE_TYPE
+	local MODE
+
+	for GRE_TYPE in "ip6gre" "ip6gretap"; do
+		printf "\n####\nTesting IPv6 link-local address generation on ${GRE_TYPE} devices\n####\n\n"
+
+		for MODE in "eui64" "none" "stable-privacy" "random"; do
+			test_gre_device "${GRE_TYPE}" 2001:db8::10 2001:db8::11 "${MODE}"
+			test_gre_device "${GRE_TYPE}" any 2001:db8::11 "${MODE}"
+			test_gre_device "${GRE_TYPE}" 2001:db8::10 any "${MODE}"
+		done
+	done
+}
+
+usage()
+{
+	echo "Usage: $0 [-p]"
+	exit 1
+}
+
+while getopts :p o
+do
+	case $o in
+	p) PAUSE_ON_FAIL="yes";;
+	*) usage;;
+	esac
+done
+
+setup_ns NS0
+
+set -e
+trap exit_cleanup_all EXIT
+
+setup_basenet
+
+test_gre4
+test_gre6
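Mode 0 in the script above is EUI-64 interface-identifier generation (RFC 4291, Appendix A). For an Ethernet-like device such as gretap, the fe80:: link-local address is derived from the 48-bit MAC by flipping the universal/local bit and inserting ff:fe in the middle; a small sketch of that derivation (illustrative only, the kernel does this internally):

```python
import ipaddress

def eui64_link_local(mac: str) -> str:
    """Derive the fe80::/64 link-local address from a 48-bit MAC (modified EUI-64)."""
    b = bytearray(int(octet, 16) for octet in mac.split(":"))
    b[0] ^= 0x02                                    # flip the universal/local bit
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert ff:fe in the middle
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + iid))

print(eui64_link_local("52:54:00:12:34:56"))  # -> fe80::5054:ff:fe12:3456
```

Plain `gre`/`ip6gre` devices have no MAC, which is why the script also exercises tunnels whose identifiers come from the underlay addresses and the other addr_gen_mode values.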
···
         "teardown": [
             "$TC qdisc del dev $DUMMY handle 1: root"
         ]
+    },
+    {
+        "id": "4009",
+        "name": "Reject creation of DRR class with classid TC_H_ROOT",
+        "category": [
+            "qdisc",
+            "drr"
+        ],
+        "plugins": {
+            "requires": "nsPlugin"
+        },
+        "setup": [
+            "$TC qdisc add dev $DUMMY root handle ffff: drr",
+            "$TC filter add dev $DUMMY parent ffff: basic classid ffff:1",
+            "$TC class add dev $DUMMY parent ffff: classid ffff:1 drr",
+            "$TC filter add dev $DUMMY parent ffff: prio 1 u32 match u16 0x0000 0xfe00 at 2 flowid ffff:ffff"
+        ],
+        "cmdUnderTest": "$TC class add dev $DUMMY parent ffff: classid ffff:ffff drr",
+        "expExitCode": "2",
+        "verifyCmd": "$TC class show dev $DUMMY",
+        "matchPattern": "class drr ffff:ffff",
+        "matchCount": "0",
+        "teardown": [
+            "$TC qdisc del dev $DUMMY root"
+        ]
     }
 ]
+5-5
tools/testing/selftests/vDSO/parse_vdso.c
···
	/* Symbol table */
	ELF(Sym) *symtab;
	const char *symstrings;
-	ELF(Word) *gnu_hash;
+	ELF(Word) *gnu_hash, *gnu_bucket;
	ELF_HASH_ENTRY *bucket, *chain;
	ELF_HASH_ENTRY nbucket, nchain;
 
···
		/* The bucket array is located after the header (4 uint32) and the bloom
		 * filter (size_t array of gnu_hash[2] elements).
		 */
-		vdso_info.bucket = vdso_info.gnu_hash + 4 +
-				   sizeof(size_t) / 4 * vdso_info.gnu_hash[2];
+		vdso_info.gnu_bucket = vdso_info.gnu_hash + 4 +
+				       sizeof(size_t) / 4 * vdso_info.gnu_hash[2];
	} else {
		vdso_info.nbucket = hash[0];
		vdso_info.nchain = hash[1];
···
	if (vdso_info.gnu_hash) {
		uint32_t h1 = gnu_hash(name), h2, *hashval;
 
-		i = vdso_info.bucket[h1 % vdso_info.nbucket];
+		i = vdso_info.gnu_bucket[h1 % vdso_info.nbucket];
		if (i == 0)
			return 0;
		h1 |= 1;
-		hashval = vdso_info.bucket + vdso_info.nbucket +
+		hashval = vdso_info.gnu_bucket + vdso_info.nbucket +
			  (i - vdso_info.gnu_hash[1]);
		for (;; i++) {
			ELF(Sym) *sym = &vdso_info.symtab[i];
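The renamed `gnu_bucket` pointer indexes the DT_GNU_HASH bucket array by `gnu_hash(name) % nbucket`. The hash itself is the DJB-style function (`h = h*33 + c` over the symbol name, truncated to 32 bits); a sketch with the well-known test vector, where `nbucket` is just an example value (the real count comes from the hash section header):

```python
def gnu_hash(name: bytes) -> int:
    """GNU symbol hash used by DT_GNU_HASH lookups: h = h*33 + c, 32-bit."""
    h = 5381
    for c in name:
        h = (h * 33 + c) & 0xFFFFFFFF
    return h

nbucket = 64  # example bucket count; parse_vdso.c reads it from gnu_hash[0]
print(hex(gnu_hash(b"printf")))             # -> 0x156b2bb8
print(gnu_hash(b"printf") % nbucket)        # bucket chosen via gnu_bucket[...]
```

The empty string hashes to the seed 5381, which is an easy sanity check when porting the function.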
+1-1
usr/include/Makefile
···
 
 # In theory, we do not care -m32 or -m64 for header compile tests.
 # It is here just because CONFIG_CC_CAN_LINK is tested with -m32 or -m64.
-UAPI_CFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
+UAPI_CFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
 
 # USERCFLAGS might contain sysroot location for CC.
 UAPI_CFLAGS += $(USERCFLAGS)
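GNU Make's `$(filter patterns, text)` keeps only the words of `text` that match one of the patterns, with `%` as the single wildcard; adding `$(KBUILD_CPPFLAGS)` to the filtered text lets a `--target=` option that lives in the preprocessor flags survive into `UAPI_CFLAGS`. A rough Python model of the filter semantics (mapping make's `%` to fnmatch's `*`; approximation only, not GNU Make's implementation):

```python
from fnmatch import fnmatchcase

def make_filter(patterns, text):
    """Rough model of GNU Make's $(filter patterns, text)."""
    words = text.split()
    globs = [p.replace("%", "*") for p in patterns]
    return [w for w in words if any(fnmatchcase(w, g) for g in globs)]

flags = "-Wall -m64 --target=aarch64-linux-gnu -O2"
print(make_filter(["-m32", "-m64", "--target=%"], flags))
# -> ['-m64', '--target=aarch64-linux-gnu']
```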