···594594 their sockets will only be able to connect within their own595595 namespace.596596597597+The first write to ``child_ns_mode`` locks its value. Subsequent writes of the598598+same value succeed, but writing a different value returns ``-EBUSY``.599599+597600Changing ``child_ns_mode`` only affects namespaces created after the change;598601it does not modify the current namespace or any existing children.599602
···13961396Memory for the region is taken starting at the address denoted by the13971397field userspace_addr, which must point at user addressable memory for13981398the entire memory slot size. Any object may back this memory, including13991399-anonymous memory, ordinary files, and hugetlbfs.13991399+anonymous memory, ordinary files, and hugetlbfs. Changes in the backing14001400+of the memory region are automatically reflected into the guest.14011401+For example, an mmap() that affects the region will be made visible14021402+immediately. Another example is madvise(MADV_DROP).1400140314011404On architectures that support a form of address tagging, userspace_addr must14021405be an untagged address.···14141411use it. The latter can be set, if KVM_CAP_READONLY_MEM capability allows it,14151412to make a new slot read-only. In this case, writes to this memory will be14161413posted to userspace as KVM_EXIT_MMIO exits.14171417-14181418-When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of14191419-the memory region are automatically reflected into the guest. For example, an14201420-mmap() that affects the region will be made visible immediately. Another14211421-example is madvise(MADV_DROP).1422141414231415For TDX guest, deleting/moving memory region loses guest memory contents.14241416Read only region isn't supported. Only as-id 0 is supported.
+22-23
MAINTAINERS
···12921292F: include/uapi/drm/amdxdna_accel.h1293129312941294AMD XGBE DRIVER12951295-M: "Shyam Sundar S K" <Shyam-sundar.S-k@amd.com>12961295M: Raju Rangoju <Raju.Rangoju@amd.com>12971296L: netdev@vger.kernel.org12981297S: Maintained···6212621362136214CISCO SCSI HBA DRIVER62146215M: Karan Tilak Kumar <kartilak@cisco.com>62166216+M: Narsimhulu Musini <nmusini@cisco.com>62156217M: Sesidhar Baddela <sebaddel@cisco.com>62166218L: linux-scsi@vger.kernel.org62176219S: Supported62186220F: drivers/scsi/snic/6219622162206222CISCO VIC ETHERNET NIC DRIVER62216221-M: Christian Benvenuti <benve@cisco.com>62226223M: Satish Kharat <satishkh@cisco.com>62236224S: Maintained62246225F: drivers/net/ethernet/cisco/enic/6225622662266227CISCO VIC LOW LATENCY NIC DRIVER62276227-M: Christian Benvenuti <benve@cisco.com>62286228M: Nelson Escobar <neescoba@cisco.com>62296229+M: Satish Kharat <satishkh@cisco.com>62296230S: Supported62306231F: drivers/infiniband/hw/usnic/62316232···62796280F: include/linux/clk.h6280628162816282CLOCKSOURCE, CLOCKEVENT DRIVERS62826282-M: Daniel Lezcano <daniel.lezcano@linaro.org>62836283+M: Daniel Lezcano <daniel.lezcano@kernel.org>62836284M: Thomas Gleixner <tglx@kernel.org>62846285L: linux-kernel@vger.kernel.org62856286S: Supported···6668666966696670CPU IDLE TIME MANAGEMENT FRAMEWORK66706671M: "Rafael J. Wysocki" <rafael@kernel.org>66716671-M: Daniel Lezcano <daniel.lezcano@linaro.org>66726672+M: Daniel Lezcano <daniel.lezcano@kernel.org>66726673R: Christian Loehle <christian.loehle@arm.com>66736674L: linux-pm@vger.kernel.org66746675S: Maintained···6698669966996700CPUIDLE DRIVER - ARM BIG LITTLE67006701M: Lorenzo Pieralisi <lpieralisi@kernel.org>67016701-M: Daniel Lezcano <daniel.lezcano@linaro.org>67026702+M: Daniel Lezcano <daniel.lezcano@kernel.org>67026703L: linux-pm@vger.kernel.org67036704L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)67046705S: Maintained···67066707F: drivers/cpuidle/cpuidle-big_little.c6707670867086709CPUIDLE DRIVER - ARM EXYNOS67096709-M: Daniel Lezcano <daniel.lezcano@linaro.org>67106710+M: Daniel Lezcano <daniel.lezcano@kernel.org>67106711M: Kukjin Kim <kgene@kernel.org>67116712R: Krzysztof Kozlowski <krzk@kernel.org>67126713L: linux-pm@vger.kernel.org···1441114412M: Herve Codina <herve.codina@bootlin.com>1441214413S: Maintained1441314414F: Documentation/devicetree/bindings/net/lantiq,pef2256.yaml1441414414-F: drivers/net/wan/framer/pef2256/1441514415+F: drivers/net/wan/framer/1441514416F: drivers/pinctrl/pinctrl-pef2256.c1441614416-F: include/linux/framer/pef2256.h1441714417+F: include/linux/framer/14417144181441814419LASI 53c700 driver for PARISC1441914420M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>···1665516656M: David Hildenbrand <david@kernel.org>1665616657R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>1665716658R: Liam R. Howlett <Liam.Howlett@oracle.com>1665816658-R: Vlastimil Babka <vbabka@suse.cz>1665916659+R: Vlastimil Babka <vbabka@kernel.org>1665916660R: Mike Rapoport <rppt@kernel.org>1666016661R: Suren Baghdasaryan <surenb@google.com>1666116662R: Michal Hocko <mhocko@suse.com>···1678516786M: David Hildenbrand <david@kernel.org>1678616787R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>1678716788R: Liam R. 
Howlett <Liam.Howlett@oracle.com>1678816788-R: Vlastimil Babka <vbabka@suse.cz>1678916789+R: Vlastimil Babka <vbabka@kernel.org>1678916790R: Mike Rapoport <rppt@kernel.org>1679016791R: Suren Baghdasaryan <surenb@google.com>1679116792R: Michal Hocko <mhocko@suse.com>···16840168411684116842MEMORY MANAGEMENT - PAGE ALLOCATOR1684216843M: Andrew Morton <akpm@linux-foundation.org>1684316843-M: Vlastimil Babka <vbabka@suse.cz>1684416844+M: Vlastimil Babka <vbabka@kernel.org>1684416845R: Suren Baghdasaryan <surenb@google.com>1684516846R: Michal Hocko <mhocko@suse.com>1684616847R: Brendan Jackman <jackmanb@google.com>···1688616887M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>1688716888R: Rik van Riel <riel@surriel.com>1688816889R: Liam R. Howlett <Liam.Howlett@oracle.com>1688916889-R: Vlastimil Babka <vbabka@suse.cz>1689016890+R: Vlastimil Babka <vbabka@kernel.org>1689016891R: Harry Yoo <harry.yoo@oracle.com>1689116892R: Jann Horn <jannh@google.com>1689216893L: linux-mm@kvack.org···1698516986M: Andrew Morton <akpm@linux-foundation.org>1698616987M: Liam R. Howlett <Liam.Howlett@oracle.com>1698716988M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>1698816988-R: Vlastimil Babka <vbabka@suse.cz>1698916989+R: Vlastimil Babka <vbabka@kernel.org>1698916990R: Jann Horn <jannh@google.com>1699016991R: Pedro Falcato <pfalcato@suse.de>1699116992L: linux-mm@kvack.org···1701517016M: Suren Baghdasaryan <surenb@google.com>1701617017M: Liam R. Howlett <Liam.Howlett@oracle.com>1701717018M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>1701817018-R: Vlastimil Babka <vbabka@suse.cz>1701917019+R: Vlastimil Babka <vbabka@kernel.org>1701917020R: Shakeel Butt <shakeel.butt@linux.dev>1702017021L: linux-mm@kvack.org1702117022S: Maintained···1703117032M: Liam R. Howlett <Liam.Howlett@oracle.com>1703217033M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>1703317034M: David Hildenbrand <david@kernel.org>1703417034-R: Vlastimil Babka <vbabka@suse.cz>1703517035+R: Vlastimil Babka <vbabka@kernel.org>1703517036R: Jann Horn <jannh@google.com>1703617037L: linux-mm@kvack.org1703717038S: Maintained···2050820509F: drivers/pci/controller/dwc/pcie-kirin.c20509205102051020511PCIE DRIVER FOR HISILICON STB2051120511-M: Shawn Guo <shawn.guo@linaro.org>2051220512+M: Shawn Guo <shawnguo@kernel.org>2051220513L: linux-pci@vger.kernel.org2051320514S: Maintained2051420515F: Documentation/devicetree/bindings/pci/hisilicon-histb-pcie.txt···2169421695F: drivers/net/ethernet/qualcomm/emac/21695216962169621697QUALCOMM ETHQOS ETHERNET DRIVER2169721697-M: Vinod Koul <vkoul@kernel.org>2169821698+M: Mohd Ayaan Anwar <mohd.anwar@oss.qualcomm.com>2169821699L: netdev@vger.kernel.org2169921700L: linux-arm-msm@vger.kernel.org2170021701S: Maintained···2317323174RUST [ALLOC]2317423175M: Danilo Krummrich <dakr@kernel.org>2317523176R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>2317623176-R: Vlastimil Babka <vbabka@suse.cz>2317723177+R: Vlastimil Babka <vbabka@kernel.org>2317723178R: Liam R. Howlett <Liam.Howlett@oracle.com>2317823179R: Uladzislau Rezki <urezki@gmail.com>2317923180L: rust-for-linux@vger.kernel.org···2434924350F: drivers/nvmem/layouts/sl28vpd.c24350243512435124352SLAB ALLOCATOR2435224352-M: Vlastimil Babka <vbabka@suse.cz>2435324353+M: Vlastimil Babka <vbabka@kernel.org>2435324354M: Andrew Morton <akpm@linux-foundation.org>2435424355R: Christoph Lameter <cl@gentwo.org>2435524356R: David Rientjes <rientjes@google.com>···26216262172621726218THERMAL2621826219M: Rafael J. 
Wysocki <rafael@kernel.org>2621926219-M: Daniel Lezcano <daniel.lezcano@linaro.org>2622026220+M: Daniel Lezcano <daniel.lezcano@kernel.org>2622026221R: Zhang Rui <rui.zhang@intel.com>2622126222R: Lukasz Luba <lukasz.luba@arm.com>2622226223L: linux-pm@vger.kernel.org···26246262472624726248THERMAL/CPU_COOLING2624826249M: Amit Daniel Kachhap <amit.kachhap@gmail.com>2624926249-M: Daniel Lezcano <daniel.lezcano@linaro.org>2625026250+M: Daniel Lezcano <daniel.lezcano@kernel.org>2625026251M: Viresh Kumar <viresh.kumar@linaro.org>2625126252R: Lukasz Luba <lukasz.luba@arm.com>2625226253L: linux-pm@vger.kernel.org···29185291862918629187ZSWAP COMPRESSED SWAP CACHING2918729188M: Johannes Weiner <hannes@cmpxchg.org>2918829188-M: Yosry Ahmed <yosry.ahmed@linux.dev>2918929189+M: Yosry Ahmed <yosry@kernel.org>2918929190M: Nhat Pham <nphamcs@gmail.com>2919029191R: Chengming Zhou <chengming.zhou@linux.dev>2919129192L: linux-mm@kvack.org
···3737 * We pick the reserved-ASID to minimise the impact.3838 */3939 __tlbi(aside1is, __TLBI_VADDR(0, 0));4040- dsb(ish);4040+ __tlbi_sync_s1ish();4141 }42424343 ret = caches_clean_inval_user_pou(start, start + chunk);
+15-6
arch/arm64/kernel/topology.c
···400400int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)401401{402402 /*403403- * Abort call on counterless CPU or when interrupts are404404- * disabled - can lead to deadlock in smp sync call.403403+ * Abort call on counterless CPU.405404 */406405 if (!cpu_has_amu_feat(cpu))407406 return -EOPNOTSUPP;408407409409- if (WARN_ON_ONCE(irqs_disabled()))410410- return -EPERM;411411-412412- smp_call_function_single(cpu, func, val, 1);408408+ if (irqs_disabled()) {409409+ /*410410+ * When IRQs are disabled (tick path: sched_tick ->411411+ * topology_scale_freq_tick or cppc_scale_freq_tick), only local412412+ * CPU counter reads are allowed. Remote CPU counter read would413413+ * require smp_call_function_single() which is unsafe with IRQs414414+ * disabled.415415+ */416416+ if (WARN_ON_ONCE(cpu != smp_processor_id()))417417+ return -EPERM;418418+ func(val);419419+ } else {420420+ smp_call_function_single(cpu, func, val, 1);421421+ }413422414423 return 0;415424}
···358358 break;359359 case KVM_CAP_IOEVENTFD:360360 case KVM_CAP_USER_MEMORY:361361- case KVM_CAP_SYNC_MMU:362361 case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:363362 case KVM_CAP_ONE_REG:364363 case KVM_CAP_ARM_PSCI:
···17541754 }1755175517561756 /*17571757- * Both the canonical IPA and fault IPA must be hugepage-aligned to17581758- * ensure we find the right PFN and lay down the mapping in the right17591759- * place.17571757+ * Both the canonical IPA and fault IPA must be aligned to the17581758+ * mapping size to ensure we find the right PFN and lay down the17591759+ * mapping in the right place.17601760 */17611761- if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) {17621762- fault_ipa &= ~(vma_pagesize - 1);17631763- ipa &= ~(vma_pagesize - 1);17641764- }17611761+ fault_ipa = ALIGN_DOWN(fault_ipa, vma_pagesize);17621762+ ipa = ALIGN_DOWN(ipa, vma_pagesize);1765176317661764 gfn = ipa >> PAGE_SHIFT;17671765 mte_allowed = kvm_vma_mte_allowed(vma);
···18161816 ID_AA64MMFR3_EL1_SCTLRX |18171817 ID_AA64MMFR3_EL1_S1POE |18181818 ID_AA64MMFR3_EL1_S1PIE;18191819+18201820+ if (!system_supports_poe())18211821+ val &= ~ID_AA64MMFR3_EL1_S1POE;18191822 break;18201823 case SYS_ID_MMFR4_EL1:18211824 val &= ~ID_MMFR4_EL1_CCIDX;
+5-1
arch/arm64/lib/delay.c
···3232 * Note that userspace cannot change the offset behind our back either,3333 * as the vcpu mutex is held as long as KVM_RUN is in progress.3434 */3535-#define __delay_cycles() __arch_counter_get_cntvct_stable()3535+static cycles_t notrace __delay_cycles(void)3636+{3737+ guard(preempt_notrace)();3838+ return __arch_counter_get_cntvct_stable();3939+}36403741void __delay(unsigned long cycles)3842{
+3-3
arch/arm64/mm/ioremap.c
···1414 return 0;1515}16161717-void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,1818- pgprot_t pgprot)1717+void __iomem *__ioremap_prot(phys_addr_t phys_addr, size_t size,1818+ pgprot_t pgprot)1919{2020 unsigned long last_addr = phys_addr + size - 1;2121···39394040 return generic_ioremap_prot(phys_addr, size, pgprot);4141}4242-EXPORT_SYMBOL(ioremap_prot);4242+EXPORT_SYMBOL(__ioremap_prot);43434444/*4545 * Must be called after early_fixmap_init
+10-2
arch/arm64/mm/mmap.c
···3434 [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_EXEC3535};36363737+static ptdesc_t gcs_page_prot __ro_after_init = _PAGE_GCS_RO;3838+3739/*3840 * You really shouldn't be using read() or write() on /dev/mem. This might go3941 * away in the future.···7573 protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;7674 }77757878- if (lpa2_is_enabled())7676+ if (lpa2_is_enabled()) {7977 for (int i = 0; i < ARRAY_SIZE(protection_map); i++)8078 pgprot_val(protection_map[i]) &= ~PTE_SHARED;7979+ gcs_page_prot &= ~PTE_SHARED;8080+ }81818282 return 0;8383}···91879288 /* Short circuit GCS to avoid bloating the table. */9389 if (system_supports_gcs() && (vm_flags & VM_SHADOW_STACK)) {9494- prot = _PAGE_GCS_RO;9090+ /* Honour mprotect(PROT_NONE) on shadow stack mappings */9191+ if (vm_flags & VM_ACCESS_FLAGS)9292+ prot = gcs_page_prot;9393+ else9494+ prot = pgprot_val(protection_map[VM_NONE]);9595 } else {9696 prot = pgprot_val(protection_map[vm_flags &9797 (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
···118118 case KVM_CAP_ONE_REG:119119 case KVM_CAP_ENABLE_CAP:120120 case KVM_CAP_READONLY_MEM:121121- case KVM_CAP_SYNC_MMU:122121 case KVM_CAP_IMMEDIATE_EXIT:123122 case KVM_CAP_IOEVENTFD:124123 case KVM_CAP_MP_STATE:
···3838config KVM_BOOK3S_PR_POSSIBLE3939 bool4040 select KVM_MMIO4141- select KVM_GENERIC_MMU_NOTIFIER42414342config KVM_BOOK3S_HV_POSSIBLE4443 bool···8081 tristate "KVM for POWER7 and later using hypervisor mode in host"8182 depends on KVM_BOOK3S_64 && PPC_POWERNV8283 select KVM_BOOK3S_HV_POSSIBLE8383- select KVM_GENERIC_MMU_NOTIFIER8484 select KVM_BOOK3S_HV_PMU8585 select CMA8686 help···201203 depends on !CONTEXT_TRACKING_USER202204 select KVM203205 select KVM_MMIO204204- select KVM_GENERIC_MMU_NOTIFIER205206 help206207 Support running unmodified E500 guest kernels in virtual machines on207208 E500v2 host processors.···217220 select KVM218221 select KVM_MMIO219222 select KVM_BOOKE_HV220220- select KVM_GENERIC_MMU_NOTIFIER221223 help222224 Support running unmodified E500MC/E5500/E6500 guest kernels in223225 virtual machines on E500MC/E5500/E6500 host processors.
-6
arch/powerpc/kvm/powerpc.c
···623623 r = !!(hv_enabled && kvmppc_hv_ops->enable_nested &&624624 !kvmppc_hv_ops->enable_nested(NULL));625625 break;626626-#endif627627- case KVM_CAP_SYNC_MMU:628628- BUILD_BUG_ON(!IS_ENABLED(CONFIG_KVM_GENERIC_MMU_NOTIFIER));629629- r = 1;630630- break;631631-#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE632626 case KVM_CAP_PPC_HTAB_FD:633627 r = hv_enabled;634628 break;
-1
arch/riscv/kvm/Kconfig
···3030 select KVM_GENERIC_HARDWARE_ENABLING3131 select KVM_MMIO3232 select VIRT_XFER_TO_GUEST_WORK3333- select KVM_GENERIC_MMU_NOTIFIER3433 select SCHED_INFO3534 select GUEST_PERF_EVENTS if PERF_EVENTS3635 help
-1
arch/riscv/kvm/vm.c
···181181 break;182182 case KVM_CAP_IOEVENTFD:183183 case KVM_CAP_USER_MEMORY:184184- case KVM_CAP_SYNC_MMU:185184 case KVM_CAP_DESTROY_MEMORY_REGION_WORKS:186185 case KVM_CAP_ONE_REG:187186 case KVM_CAP_READONLY_MEM:
···22#ifndef _S390_VTIME_H33#define _S390_VTIME_H4455+#include <asm/lowcore.h>66+#include <asm/cpu_mf.h>77+#include <asm/idle.h>88+99+DECLARE_PER_CPU(u64, mt_cycles[8]);1010+511static inline void update_timer_sys(void)612{713 struct lowcore *lc = get_lowcore();···2418 lc->system_timer += lc->last_update_timer - lc->exit_timer;2519 lc->user_timer += lc->exit_timer - lc->mcck_enter_timer;2620 lc->last_update_timer = lc->mcck_enter_timer;2121+}2222+2323+static inline void update_timer_idle(void)2424+{2525+ struct s390_idle_data *idle = this_cpu_ptr(&s390_idle);2626+ struct lowcore *lc = get_lowcore();2727+ u64 cycles_new[8];2828+ int i, mtid;2929+3030+ mtid = smp_cpu_mtid;3131+ if (mtid) {3232+ stcctm(MT_DIAG, mtid, cycles_new);3333+ for (i = 0; i < mtid; i++)3434+ __this_cpu_add(mt_cycles[i], cycles_new[i] - idle->mt_cycles_enter[i]);3535+ }3636+ /*3737+ * This is a bit subtle: Forward last_update_clock so it excludes idle3838+ * time. For correct steal time calculation in do_account_vtime() add3939+ * passed wall time before idle_enter to steal_timer:4040+ * During the passed wall time before idle_enter CPU time may have4141+ * been accounted to system, hardirq, softirq, etc. lowcore fields.4242+ * The accounted CPU times will be subtracted again from steal_timer4343+ * when accumulated steal time is calculated in do_account_vtime().4444+ */4545+ lc->steal_timer += idle->clock_idle_enter - lc->last_update_clock;4646+ lc->last_update_clock = lc->int_clock;4747+ lc->system_timer += lc->last_update_timer - idle->timer_idle_enter;4848+ lc->last_update_timer = lc->sys_enter_timer;2749}28502951#endif /* _S390_VTIME_H */
-2
arch/s390/kernel/entry.h
···5656long sys_s390_pci_mmio_read(unsigned long, void __user *, size_t);5757long sys_s390_sthyi(unsigned long function_code, void __user *buffer, u64 __user *return_code, unsigned long flags);58585959-DECLARE_PER_CPU(u64, mt_cycles[8]);6060-6159unsigned long stack_alloc(void);6260void stack_free(unsigned long stack);6361
···48484949static inline int virt_timer_forward(u64 elapsed)5050{5151- BUG_ON(!irqs_disabled());5252-5151+ lockdep_assert_irqs_disabled();5352 if (list_empty(&virt_timer_list))5453 return 0;5554 elapsed = atomic64_add_return(elapsed, &virt_timer_elapsed);···136137 lc->system_timer += timer;137138138139 /* Update MT utilization calculation */139139- if (smp_cpu_mtid &&140140- time_after64(jiffies_64, this_cpu_read(mt_scaling_jiffies)))140140+ if (smp_cpu_mtid && time_after64(jiffies_64, __this_cpu_read(mt_scaling_jiffies)))141141 update_mt_scaling();142142143143 /* Calculate cputime delta */144144- user = update_tsk_timer(&tsk->thread.user_timer,145145- READ_ONCE(lc->user_timer));146146- guest = update_tsk_timer(&tsk->thread.guest_timer,147147- READ_ONCE(lc->guest_timer));148148- system = update_tsk_timer(&tsk->thread.system_timer,149149- READ_ONCE(lc->system_timer));150150- hardirq = update_tsk_timer(&tsk->thread.hardirq_timer,151151- READ_ONCE(lc->hardirq_timer));152152- softirq = update_tsk_timer(&tsk->thread.softirq_timer,153153- READ_ONCE(lc->softirq_timer));154154- lc->steal_timer +=155155- clock - user - guest - system - hardirq - softirq;144144+ user = update_tsk_timer(&tsk->thread.user_timer, lc->user_timer);145145+ guest = update_tsk_timer(&tsk->thread.guest_timer, lc->guest_timer);146146+ system = update_tsk_timer(&tsk->thread.system_timer, lc->system_timer);147147+ hardirq = update_tsk_timer(&tsk->thread.hardirq_timer, lc->hardirq_timer);148148+ softirq = update_tsk_timer(&tsk->thread.softirq_timer, lc->softirq_timer);149149+ lc->steal_timer += clock - user - guest - system - hardirq - softirq;156150157151 /* Push account value */158152 if (user) {···217225 return timer - lc->last_update_timer;218226}219227220220-/*221221- * Update process times based on virtual cpu times stored by entry.S222222- * to the lowcore fields user_timer, system_timer & steal_clock.223223- */224228void vtime_account_kernel(struct task_struct *tsk)225229{226230 struct lowcore *lc = get_lowcore();···226238 lc->guest_timer += delta;227239 else228240 lc->system_timer += delta;229229-230230- virt_timer_forward(delta);231241}232242EXPORT_SYMBOL_GPL(vtime_account_kernel);233243234244void vtime_account_softirq(struct task_struct *tsk)235245{236236- u64 delta = vtime_delta();237237-238238- get_lowcore()->softirq_timer += delta;239239-240240- virt_timer_forward(delta);246246+ get_lowcore()->softirq_timer += vtime_delta();241247}242248243249void vtime_account_hardirq(struct task_struct *tsk)244250{245245- u64 delta = vtime_delta();246246-247247- get_lowcore()->hardirq_timer += delta;248248-249249- virt_timer_forward(delta);251251+ get_lowcore()->hardirq_timer += vtime_delta();250252}251253252254/*
-2
arch/s390/kvm/Kconfig
···2828 select HAVE_KVM_INVALID_WAKEUPS2929 select HAVE_KVM_NO_POLL3030 select KVM_VFIO3131- select MMU_NOTIFIER3231 select VIRT_XFER_TO_GUEST_WORK3333- select KVM_GENERIC_MMU_NOTIFIER3432 select KVM_MMU_LOCKLESS_AGING3533 help3634 Support hosting paravirtualized guest machines using the SIE
-1
arch/s390/kvm/kvm-s390.c
···601601 switch (ext) {602602 case KVM_CAP_S390_PSW:603603 case KVM_CAP_S390_GMAP:604604- case KVM_CAP_SYNC_MMU:605604#ifdef CONFIG_KVM_S390_UCONTROL606605 case KVM_CAP_S390_UCONTROL:607606#endif
···7777#endif78787979#ifndef CONFIG_PARAVIRT8080-#ifndef __ASSEMBLY__8080+#ifndef __ASSEMBLER__8181/*8282 * Used in the idle loop; sti takes one instruction cycle8383 * to complete:···9595{9696 native_halt();9797}9898-#endif /* __ASSEMBLY__ */9898+#endif /* __ASSEMBLER__ */9999#endif /* CONFIG_PARAVIRT */100100101101#ifdef CONFIG_PARAVIRT_XXL
+2-2
arch/x86/include/asm/linkage.h
···6868 * Depending on -fpatchable-function-entry=N,N usage (CONFIG_CALL_PADDING) the6969 * CFI symbol layout changes.7070 *7171- * Without CALL_THUNKS:7171+ * Without CALL_PADDING:7272 *7373 * .align FUNCTION_ALIGNMENT7474 * __cfi_##name:···7777 * .long __kcfi_typeid_##name7878 * name:7979 *8080- * With CALL_THUNKS:8080+ * With CALL_PADDING:8181 *8282 * .align FUNCTION_ALIGNMENT8383 * __cfi_##name:
···2525void handle_invalid_op(struct pt_regs *regs);2626#endif27272828+noinstr bool handle_bug(struct pt_regs *regs);2929+2830static inline int get_si_code(unsigned long condition)2931{3032 if (condition & DR_STEP)
+22-7
arch/x86/kernel/alternative.c
···1182118211831183 poison_endbr(addr);11841184 if (IS_ENABLED(CONFIG_FINEIBT))11851185- poison_cfi(addr - 16);11851185+ poison_cfi(addr - CFI_OFFSET);11861186 }11871187}11881188···13881388#define fineibt_preamble_bhi (fineibt_preamble_bhi - fineibt_preamble_start)13891389#define fineibt_preamble_ud 0x1313901390#define fineibt_preamble_hash 513911391+13921392+#define fineibt_prefix_size (fineibt_preamble_size - ENDBR_INSN_SIZE)1391139313921394/*13931395 * <fineibt_caller_start>:···16361634 * have determined there are no indirect calls to it and we16371635 * don't need no CFI either.16381636 */16391639- if (!is_endbr(addr + 16))16371637+ if (!is_endbr(addr + CFI_OFFSET))16401638 continue;1641163916421640 hash = decode_preamble_hash(addr, &arity);16431641 if (WARN(!hash, "no CFI hash found at: %pS %px %*ph\n",16441642 addr, addr, 5, addr))16451643 return -EINVAL;16441644+16451645+ /*16461646+ * FineIBT relies on being at func-16, so if the preamble is16471647+ * actually larger than that, place it the tail end.16481648+ *16491649+ * NOTE: this is possible with things like DEBUG_CALL_THUNKS16501650+ * and DEBUG_FORCE_FUNCTION_ALIGN_64B.16511651+ */16521652+ addr += CFI_OFFSET - fineibt_prefix_size;1646165316471654 text_poke_early(addr, fineibt_preamble_start, fineibt_preamble_size);16481655 WARN_ON(*(u32 *)(addr + fineibt_preamble_hash) != 0x12345678);···16751664 for (s = start; s < end; s++) {16761665 void *addr = (void *)s + *s;1677166616781678- if (!exact_endbr(addr + 16))16671667+ if (!exact_endbr(addr + CFI_OFFSET))16791668 continue;1680166916811681- poison_endbr(addr + 16);16701670+ poison_endbr(addr + CFI_OFFSET);16821671 }16831672}16841673···17831772 if (FINEIBT_WARN(fineibt_preamble_size, 20) ||17841773 FINEIBT_WARN(fineibt_preamble_bhi + fineibt_bhi1_size, 20) ||17851774 FINEIBT_WARN(fineibt_caller_size, 14) ||17861786- FINEIBT_WARN(fineibt_paranoid_size, 20))17751775+ FINEIBT_WARN(fineibt_paranoid_size, 20) ||17761776+ WARN_ON_ONCE(CFI_OFFSET < fineibt_prefix_size))17871777 return;1788177817891779 if (cfi_mode == CFI_AUTO) {···18981886 switch (cfi_mode) {18991887 case CFI_FINEIBT:19001888 /*18891889+ * FineIBT preamble is at func-16.18901890+ */18911891+ addr += CFI_OFFSET - fineibt_prefix_size;18921892+18931893+ /*19011894 * FineIBT prefix should start with an ENDBR.19021895 */19031896 if (!is_endbr(addr))···19391922 break;19401923 }19411924}19421942-19431943-#define fineibt_prefix_size (fineibt_preamble_size - ENDBR_INSN_SIZE)1944192519451926/*19461927 * When regs->ip points to a 0xD6 byte in the FineIBT preamble,
···48054805#endif48064806 case KVM_CAP_NOP_IO_DELAY:48074807 case KVM_CAP_MP_STATE:48084808- case KVM_CAP_SYNC_MMU:48094808 case KVM_CAP_USER_NMI:48104809 case KVM_CAP_IRQ_INJECT_STATUS:48114810 case KVM_CAP_IOEVENTFD:
+2-5
arch/x86/mm/extable.c
···411411 return;412412413413 if (trapnr == X86_TRAP_UD) {414414- if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {415415- /* Skip the ud2. */416416- regs->ip += LEN_UD2;414414+ if (handle_bug(regs))417415 return;418418- }419416420417 /*421421- * If this was a BUG and report_bug returns or if this418418+ * If this was a BUG and handle_bug returns or if this422419 * was just a normal #UD, we want to continue onward and423420 * crash.424421 */
···2323#include "amdxdna_pci_drv.h"2424#include "amdxdna_pm.h"25252626-static bool force_cmdlist;2626+static bool force_cmdlist = true;2727module_param(force_cmdlist, bool, 0600);2828-MODULE_PARM_DESC(force_cmdlist, "Force use command list (Default false)");2828+MODULE_PARM_DESC(force_cmdlist, "Force use command list (Default true)");29293030#define HWCTX_MAX_TIMEOUT 60000 /* milliseconds */3131···5353{5454 drm_sched_stop(&hwctx->priv->sched, bad_job);5555 aie2_destroy_context(xdna->dev_handle, hwctx);5656+ drm_sched_start(&hwctx->priv->sched, 0);5657}57585859static int aie2_hwctx_restart(struct amdxdna_dev *xdna, struct amdxdna_hwctx *hwctx)···8180 }82818382out:8484- drm_sched_start(&hwctx->priv->sched, 0);8583 XDNA_DBG(xdna, "%s restarted, ret %d", hwctx->name, ret);8684 return ret;8785}···297297 struct dma_fence *fence;298298 int ret;299299300300- if (!hwctx->priv->mbox_chann)300300+ ret = amdxdna_pm_resume_get(hwctx->client->xdna);301301+ if (ret)301302 return NULL;302303303303- if (!mmget_not_zero(job->mm))304304+ if (!hwctx->priv->mbox_chann) {305305+ amdxdna_pm_suspend_put(hwctx->client->xdna);306306+ return NULL;307307+ }308308+309309+ if (!mmget_not_zero(job->mm)) {310310+ amdxdna_pm_suspend_put(hwctx->client->xdna);304311 return ERR_PTR(-ESRCH);312312+ }305313306314 kref_get(&job->refcnt);307315 fence = dma_fence_get(job->fence);308308-309309- ret = amdxdna_pm_resume_get(hwctx->client->xdna);310310- if (ret)311311- goto out;312316313317 if (job->drv_cmd) {314318 switch (job->drv_cmd->opcode) {···501497502498 if (AIE2_FEATURE_ON(xdna->dev_handle, AIE2_TEMPORAL_ONLY)) {503499 ret = aie2_destroy_context(xdna->dev_handle, hwctx);504504- if (ret)500500+ if (ret && ret != -ENODEV)505501 XDNA_ERR(xdna, "Destroy temporal only context failed, ret %d", ret);506502 } else {507503 ret = xrs_release_resource(xdna->xrs_hdl, (uintptr_t)hwctx);···633629 goto free_entity;634630 }635631636636- ret = amdxdna_pm_resume_get(xdna);632632+ ret = amdxdna_pm_resume_get_locked(xdna);637633 if (ret)638634 goto free_col_list;639635···764760 if (!hwctx->cus)765761 return -ENOMEM;766762767767- ret = amdxdna_pm_resume_get(xdna);763763+ ret = amdxdna_pm_resume_get_locked(xdna);768764 if (ret)769765 goto free_cus;770766···1074107010751071 ret = dma_resv_wait_timeout(gobj->resv, DMA_RESV_USAGE_BOOKKEEP,10761072 true, MAX_SCHEDULE_TIMEOUT);10771077- if (!ret || ret == -ERESTARTSYS)10731073+ if (!ret)10781074 XDNA_ERR(xdna, "Failed to wait for bo, ret %ld", ret);10751075+ else if (ret == -ERESTARTSYS)10761076+ XDNA_DBG(xdna, "Wait for bo interrupted by signal");10791077}
···3232module_param(aie2_max_col, uint, 0600);3333MODULE_PARM_DESC(aie2_max_col, "Maximum column could be used");34343535+static char *npu_fw[] = {3636+ "npu_7.sbin",3737+ "npu.sbin"3838+};3939+3540/*3641 * The management mailbox channel is allocated by firmware.3742 * The related register and ring buffer information is on SRAM BAR.···328323 return;329324 }330325326326+ aie2_runtime_cfg(ndev, AIE2_RT_CFG_CLK_GATING, NULL);331327 aie2_mgmt_fw_fini(ndev);332328 xdna_mailbox_stop_channel(ndev->mgmt_chann);333329 xdna_mailbox_destroy_channel(ndev->mgmt_chann);···412406 goto stop_psp;413407 }414408415415- ret = aie2_pm_init(ndev);416416- if (ret) {417417- XDNA_ERR(xdna, "failed to init pm, ret %d", ret);418418- goto destroy_mgmt_chann;419419- }420420-421409 ret = aie2_mgmt_fw_init(ndev);422410 if (ret) {423411 XDNA_ERR(xdna, "initial mgmt firmware failed, ret %d", ret);412412+ goto destroy_mgmt_chann;413413+ }414414+415415+ ret = aie2_pm_init(ndev);416416+ if (ret) {417417+ XDNA_ERR(xdna, "failed to init pm, ret %d", ret);424418 goto destroy_mgmt_chann;425419 }426420···457451{458452 struct amdxdna_client *client;459453460460- guard(mutex)(&xdna->dev_lock);461454 list_for_each_entry(client, &xdna->client_list, node)462455 aie2_hwctx_suspend(client);463456···494489 struct psp_config psp_conf;495490 const struct firmware *fw;496491 unsigned long bars = 0;492492+ char *fw_full_path;497493 int i, nvec, ret;498494499495 if (!hypervisor_is_type(X86_HYPER_NATIVE)) {···509503 ndev->priv = xdna->dev_info->dev_priv;510504 ndev->xdna = xdna;511505512512- ret = request_firmware(&fw, ndev->priv->fw_path, &pdev->dev);506506+ for (i = 0; i < ARRAY_SIZE(npu_fw); i++) {507507+ fw_full_path = kasprintf(GFP_KERNEL, "%s%s", ndev->priv->fw_path, npu_fw[i]);508508+ if (!fw_full_path)509509+ return -ENOMEM;510510+511511+ ret = firmware_request_nowarn(&fw, fw_full_path, &pdev->dev);512512+ kfree(fw_full_path);513513+ if (!ret) {514514+ XDNA_INFO(xdna, "Load firmware %s%s", ndev->priv->fw_path, npu_fw[i]);515515+ break;516516+ }517517+ }518518+513519 if (ret) {514520 XDNA_ERR(xdna, "failed to request_firmware %s, ret %d",515521 ndev->priv->fw_path, ret);···969951 if (!drm_dev_enter(&xdna->ddev, &idx))970952 return -ENODEV;971953972972- ret = amdxdna_pm_resume_get(xdna);954954+ ret = amdxdna_pm_resume_get_locked(xdna);973955 if (ret)974956 goto dev_exit;975957···10621044 if (!drm_dev_enter(&xdna->ddev, &idx))10631045 return -ENODEV;1064104610651065- ret = amdxdna_pm_resume_get(xdna);10471047+ ret = amdxdna_pm_resume_get_locked(xdna);10661048 if (ret)10671049 goto dev_exit;10681050···11521134 if (!drm_dev_enter(&xdna->ddev, &idx))11531135 return -ENODEV;1154113611551155- ret = amdxdna_pm_resume_get(xdna);11371137+ ret = amdxdna_pm_resume_get_locked(xdna);11561138 if (ret)11571139 goto dev_exit;11581140
+1-1
drivers/accel/amdxdna/aie2_pm.c
···3131{3232 int ret;33333434- ret = amdxdna_pm_resume_get(ndev->xdna);3434+ ret = amdxdna_pm_resume_get_locked(ndev->xdna);3535 if (ret)3636 return ret;3737
+11-13
drivers/accel/amdxdna/amdxdna_ctx.c
···104104105105 if (size) {106106 count = FIELD_GET(AMDXDNA_CMD_COUNT, cmd->header);107107- if (unlikely(count <= num_masks)) {107107+ if (unlikely(count <= num_masks ||108108+ count * sizeof(u32) +109109+ offsetof(struct amdxdna_cmd, data[0]) >110110+ abo->mem.size)) {108111 *size = 0;109112 return NULL;110113 }···269266 struct amdxdna_drm_config_hwctx *args = data;270267 struct amdxdna_dev *xdna = to_xdna_dev(dev);271268 struct amdxdna_hwctx *hwctx;272272- int ret, idx;273269 u32 buf_size;274270 void *buf;271271+ int ret;275272 u64 val;276273277274 if (XDNA_MBZ_DBG(xdna, &args->pad, sizeof(args->pad)))···313310 return -EINVAL;314311 }315312316316- mutex_lock(&xdna->dev_lock);317317- idx = srcu_read_lock(&client->hwctx_srcu);313313+ guard(mutex)(&xdna->dev_lock);318314 hwctx = xa_load(&client->hwctx_xa, args->handle);319315 if (!hwctx) {320316 XDNA_DBG(xdna, "PID %d failed to get hwctx %d", client->pid, args->handle);321317 ret = -EINVAL;322322- goto unlock_srcu;318318+ goto free_buf;323319 }324320325321 ret = xdna->dev_info->ops->hwctx_config(hwctx, args->param_type, val, buf, buf_size);326322327327-unlock_srcu:328328- srcu_read_unlock(&client->hwctx_srcu, idx);329329- mutex_unlock(&xdna->dev_lock);323323+free_buf:330324 kfree(buf);331325 return ret;332326}···334334 struct amdxdna_hwctx *hwctx;335335 struct amdxdna_gem_obj *abo;336336 struct drm_gem_object *gobj;337337- int ret, idx;337337+ int ret;338338339339 if (!xdna->dev_info->ops->hwctx_sync_debug_bo)340340 return -EOPNOTSUPP;···345345346346 abo = to_xdna_obj(gobj);347347 guard(mutex)(&xdna->dev_lock);348348- idx = srcu_read_lock(&client->hwctx_srcu);349348 hwctx = xa_load(&client->hwctx_xa, abo->assigned_hwctx);350349 if (!hwctx) {351350 ret = -EINVAL;352352- goto unlock_srcu;351351+ goto put_obj;353352 }354353355354 ret = xdna->dev_info->ops->hwctx_sync_debug_bo(hwctx, debug_bo_hdl);356355357357-unlock_srcu:358358- srcu_read_unlock(&client->hwctx_srcu, idx);356356+put_obj:359357 drm_gem_object_put(gobj);360358 return ret;361359}
+19-19
drivers/accel/amdxdna/amdxdna_gem.c
···2121#include "amdxdna_pci_drv.h"2222#include "amdxdna_ubuf.h"23232424-#define XDNA_MAX_CMD_BO_SIZE SZ_32K2525-2624MODULE_IMPORT_NS("DMA_BUF");27252826static int···743745{744746 struct amdxdna_dev *xdna = to_xdna_dev(dev);745747 struct amdxdna_gem_obj *abo;746746- int ret;747747-748748- if (args->size > XDNA_MAX_CMD_BO_SIZE) {749749- XDNA_ERR(xdna, "Command bo size 0x%llx too large", args->size);750750- return ERR_PTR(-EINVAL);751751- }752748753749 if (args->size < sizeof(struct amdxdna_cmd)) {754750 XDNA_DBG(xdna, "Command BO size 0x%llx too small", args->size);···756764 abo->type = AMDXDNA_BO_CMD;757765 abo->client = filp->driver_priv;758766759759- ret = amdxdna_gem_obj_vmap(abo, &abo->mem.kva);760760- if (ret) {761761- XDNA_ERR(xdna, "Vmap cmd bo failed, ret %d", ret);762762- goto release_obj;763763- }764764-765767 return abo;766766-767767-release_obj:768768- drm_gem_object_put(to_gobj(abo));769769- return ERR_PTR(ret);770768}771769772770int amdxdna_drm_create_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)···853871 struct amdxdna_dev *xdna = client->xdna;854872 struct amdxdna_gem_obj *abo;855873 struct drm_gem_object *gobj;874874+ int ret;856875857876 gobj = drm_gem_object_lookup(client->filp, bo_hdl);858877 if (!gobj) {···862879 }863880864881 abo = to_xdna_obj(gobj);865865- if (bo_type == AMDXDNA_BO_INVALID || abo->type == bo_type)882882+ if (bo_type != AMDXDNA_BO_INVALID && abo->type != bo_type)883883+ goto put_obj;884884+885885+ if (bo_type != AMDXDNA_BO_CMD || abo->mem.kva)866886 return abo;867887888888+ if (abo->mem.size > SZ_32K) {889889+ XDNA_ERR(xdna, "Cmd bo is too big %ld", abo->mem.size);890890+ goto put_obj;891891+ }892892+893893+ ret = amdxdna_gem_obj_vmap(abo, &abo->mem.kva);894894+ if (ret) {895895+ XDNA_ERR(xdna, "Vmap cmd bo failed, ret %d", ret);896896+ goto put_obj;897897+ }898898+899899+ return abo;900900+901901+put_obj:868902 drm_gem_object_put(gobj);869903 return NULL;870904}
+3
drivers/accel/amdxdna/amdxdna_pci_drv.c
···2323MODULE_FIRMWARE("amdnpu/17f0_10/npu.sbin");2424MODULE_FIRMWARE("amdnpu/17f0_11/npu.sbin");2525MODULE_FIRMWARE("amdnpu/17f0_20/npu.sbin");2626+MODULE_FIRMWARE("amdnpu/1502_00/npu_7.sbin");2727+MODULE_FIRMWARE("amdnpu/17f0_10/npu_7.sbin");2828+MODULE_FIRMWARE("amdnpu/17f0_11/npu_7.sbin");26292730/*2831 * 0.0: Initial version
+2
drivers/accel/amdxdna/amdxdna_pm.c
···1616 struct amdxdna_dev *xdna = to_xdna_dev(dev_get_drvdata(dev));1717 int ret = -EOPNOTSUPP;18181919+ guard(mutex)(&xdna->dev_lock);1920 if (xdna->dev_info->ops->suspend)2021 ret = xdna->dev_info->ops->suspend(xdna);2122···2928 struct amdxdna_dev *xdna = to_xdna_dev(dev_get_drvdata(dev));3029 int ret = -EOPNOTSUPP;31303131+ guard(mutex)(&xdna->dev_lock);3232 if (xdna->dev_info->ops->resume)3333 ret = xdna->dev_info->ops->resume(xdna);3434
+11
drivers/accel/amdxdna/amdxdna_pm.h
···1515void amdxdna_pm_init(struct amdxdna_dev *xdna);1616void amdxdna_pm_fini(struct amdxdna_dev *xdna);17171818+static inline int amdxdna_pm_resume_get_locked(struct amdxdna_dev *xdna)1919+{2020+ int ret;2121+2222+ mutex_unlock(&xdna->dev_lock);2323+ ret = amdxdna_pm_resume_get(xdna);2424+ mutex_lock(&xdna->dev_lock);2525+2626+ return ret;2727+}2828+1829#endif /* _AMDXDNA_PM_H_ */
···390390 },391391392392 /*393393+ * The screen backlight turns off during udev device creation394394+ * when returning true for _OSI("Windows 2009")395395+ */396396+ {397397+ .callback = dmi_disable_osi_win7,398398+ .ident = "Acer Aspire One D255",399399+ .matches = {400400+ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),401401+ DMI_MATCH(DMI_PRODUCT_NAME, "AOD255"),402402+ },403403+ },404404+405405+ /*393406 * The wireless hotkey does not work on those machines when394407 * returning true for _OSI("Windows 2012")395408 */
+8
drivers/acpi/sleep.c
···386386 DMI_MATCH(DMI_PRODUCT_NAME, "80E1"),387387 },388388 },389389+ {390390+ .callback = init_nvs_save_s3,391391+ .ident = "Lenovo G70-35",392392+ .matches = {393393+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),394394+ DMI_MATCH(DMI_PRODUCT_NAME, "80Q5"),395395+ },396396+ },389397 /*390398 * ThinkPad X1 Tablet(2016) cannot do suspend-to-idle using391399 * the Low Power S0 Idle firmware interface (see
+3-5
drivers/ata/libata-core.c
···62696269 }62706270 }6271627162726272- /* Make sure the deferred qc work finished. */62736273- cancel_work_sync(&ap->deferred_qc_work);62746274- WARN_ON(ap->deferred_qc);62756275-62766272 /* Tell EH to disable all devices */62776273 ap->pflags |= ATA_PFLAG_UNLOADING;62786274 ata_port_schedule_eh(ap);···62796283 /* wait till EH commits suicide */62806284 ata_port_wait_eh(ap);6281628562826282- /* it better be dead now */62866286+ /* It better be dead now and not have any remaining deferred qc. */62836287 WARN_ON(!(ap->pflags & ATA_PFLAG_UNLOADED));62886288+ WARN_ON(ap->deferred_qc);6284628962906290+ cancel_work_sync(&ap->deferred_qc_work);62856291 cancel_delayed_work_sync(&ap->hotplug_task);62866292 cancel_delayed_work_sync(&ap->scsi_rescan_task);62876293
+19-3
drivers/ata/libata-eh.c
···640640 set_host_byte(scmd, DID_OK);641641642642 ata_qc_for_each_raw(ap, qc, i) {643643- if (qc->flags & ATA_QCFLAG_ACTIVE &&644644- qc->scsicmd == scmd)643643+ if (qc->scsicmd != scmd)644644+ continue;645645+ if ((qc->flags & ATA_QCFLAG_ACTIVE) ||646646+ qc == ap->deferred_qc)645647 break;646648 }647649648648- if (i < ATA_MAX_QUEUE) {650650+ if (qc == ap->deferred_qc) {651651+ /*652652+ * This is a deferred command that timed out while653653+ * waiting for the command queue to drain. Since the qc654654+ * is not active yet (deferred_qc is still set, so the655655+ * deferred qc work has not issued the command yet),656656+ * simply signal the timeout by finishing the SCSI657657+ * command and clear the deferred qc to prevent the658658+ * deferred qc work from issuing this qc.659659+ */660660+ WARN_ON_ONCE(qc->flags & ATA_QCFLAG_ACTIVE);661661+ ap->deferred_qc = NULL;662662+ set_host_byte(scmd, DID_TIME_OUT);663663+ scsi_eh_finish_cmd(scmd, &ap->eh_done_q);664664+ } else if (i < ATA_MAX_QUEUE) {649665 /* the scmd has an associated qc */650666 if (!(qc->flags & ATA_QCFLAG_EH)) {651667 /* which hasn't failed yet, timeout */
+13-14
drivers/base/property.c
···797797fwnode_get_next_child_node(const struct fwnode_handle *fwnode,798798 struct fwnode_handle *child)799799{800800- return fwnode_call_ptr_op(fwnode, get_next_child_node, child);800800+ struct fwnode_handle *next;801801+802802+ if (IS_ERR_OR_NULL(fwnode))803803+ return NULL;804804+805805+ /* Try to find a child in primary fwnode */806806+ next = fwnode_call_ptr_op(fwnode, get_next_child_node, child);807807+ if (next)808808+ return next;809809+810810+ /* When no more children in primary, continue with secondary */811811+ return fwnode_call_ptr_op(fwnode->secondary, get_next_child_node, child);801812}802813EXPORT_SYMBOL_GPL(fwnode_get_next_child_node);803814···852841struct fwnode_handle *device_get_next_child_node(const struct device *dev,853842 struct fwnode_handle *child)854843{855855- const struct fwnode_handle *fwnode = dev_fwnode(dev);856856- struct fwnode_handle *next;857857-858858- if (IS_ERR_OR_NULL(fwnode))859859- return NULL;860860-861861- /* Try to find a child in primary fwnode */862862- next = fwnode_get_next_child_node(fwnode, child);863863- if (next)864864- return next;865865-866866- /* When no more children in primary, continue with secondary */867867- return fwnode_get_next_child_node(fwnode->secondary, child);844844+ return fwnode_get_next_child_node(dev_fwnode(dev), child);868845}869846EXPORT_SYMBOL_GPL(device_get_next_child_node);870847
+23-30
drivers/block/drbd/drbd_actlog.c
···483483484484int drbd_al_begin_io_nonblock(struct drbd_device *device, struct drbd_interval *i)485485{486486- struct lru_cache *al = device->act_log;487486 /* for bios crossing activity log extent boundaries,488487 * we may need to activate two extents in one go */489488 unsigned first = i->sector >> (AL_EXTENT_SHIFT-9);490489 unsigned last = i->size == 0 ? first : (i->sector + (i->size >> 9) - 1) >> (AL_EXTENT_SHIFT-9);491491- unsigned nr_al_extents;492492- unsigned available_update_slots;493490 unsigned enr;494491495495- D_ASSERT(device, first <= last);496496-497497- nr_al_extents = 1 + last - first; /* worst case: all touched extends are cold. */498498- available_update_slots = min(al->nr_elements - al->used,499499- al->max_pending_changes - al->pending_changes);500500-501501- /* We want all necessary updates for a given request within the same transaction502502- * We could first check how many updates are *actually* needed,503503- * and use that instead of the worst-case nr_al_extents */504504- if (available_update_slots < nr_al_extents) {505505- /* Too many activity log extents are currently "hot".506506- *507507- * If we have accumulated pending changes already,508508- * we made progress.509509- *510510- * If we cannot get even a single pending change through,511511- * stop the fast path until we made some progress,512512- * or requests to "cold" extents could be starved. */513513- if (!al->pending_changes)514514- __set_bit(__LC_STARVING, &device->act_log->flags);515515- return -ENOBUFS;492492+ if (i->partially_in_al_next_enr) {493493+ D_ASSERT(device, first < i->partially_in_al_next_enr);494494+ D_ASSERT(device, last >= i->partially_in_al_next_enr);495495+ first = i->partially_in_al_next_enr;516496 }497497+498498+ D_ASSERT(device, first <= last);517499518500 /* Is resync active in this area? */519501 for (enr = first; enr <= last; enr++) {···511529 }512530 }513531514514- /* Checkout the refcounts.515515- * Given that we checked for available elements and update slots above,516516- * this has to be successful. */532532+ /* Try to checkout the refcounts. */517533 for (enr = first; enr <= last; enr++) {518534 struct lc_element *al_ext;519535 al_ext = lc_get_cumulative(device->act_log, enr);520520- if (!al_ext)521521- drbd_info(device, "LOGIC BUG for enr=%u\n", enr);536536+537537+ if (!al_ext) {538538+ /* Did not work. We may have exhausted the possible539539+ * changes per transaction. Or raced with someone540540+ * "locking" it against changes.541541+ * Remember where to continue from.542542+ */543543+ if (enr > first)544544+ i->partially_in_al_next_enr = enr;545545+ return -ENOBUFS;546546+ }522547 }523548 return 0;524549}···545556546557 for (enr = first; enr <= last; enr++) {547558 extent = lc_find(device->act_log, enr);548548- if (!extent) {559559+ /* Yes, this masks a bug elsewhere. However, during normal560560+ * operation this is harmless, so no need to crash the kernel561561+ * by the BUG_ON(refcount == 0) in lc_put().562562+ */563563+ if (!extent || extent->refcnt == 0) {549564 drbd_err(device, "al_complete_io() called on inactive extent %u\n", enr);550565 continue;551566 }
+4-1
drivers/block/drbd/drbd_interval.h
···88struct drbd_interval {99 struct rb_node rb;1010 sector_t sector; /* start sector of the interval */1111- unsigned int size; /* size in bytes */1211 sector_t end; /* highest interval end in subtree */1212+ unsigned int size; /* size in bytes */1313 unsigned int local:1 /* local or remote request? */;1414 unsigned int waiting:1; /* someone is waiting for completion */1515 unsigned int completed:1; /* this has been completed already;1616 * ignore for conflict detection */1717+1818+ /* to resume a partially successful drbd_al_begin_io_nonblock(); */1919+ unsigned int partially_in_al_next_enr;1720};18211922static inline void drbd_clear_interval(struct drbd_interval *i)
···602602static int __scan_channels(struct ipmi_smi *intf,603603 struct ipmi_device_id *id, bool rescan);604604605605+static void ipmi_lock_xmit_msgs(struct ipmi_smi *intf, int run_to_completion,606606+ unsigned long *flags)607607+{608608+ if (run_to_completion)609609+ return;610610+ spin_lock_irqsave(&intf->xmit_msgs_lock, *flags);611611+}612612+613613+static void ipmi_unlock_xmit_msgs(struct ipmi_smi *intf, int run_to_completion,614614+ unsigned long *flags)615615+{616616+ if (run_to_completion)617617+ return;618618+ spin_unlock_irqrestore(&intf->xmit_msgs_lock, *flags);619619+}620620+605621static void free_ipmi_user(struct kref *ref)606622{607623 struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);···18851869 return smi_msg;18861870}1887187118881888-static void smi_send(struct ipmi_smi *intf,18721872+static int smi_send(struct ipmi_smi *intf,18891873 const struct ipmi_smi_handlers *handlers,18901874 struct ipmi_smi_msg *smi_msg, int priority)18911875{18921876 int run_to_completion = READ_ONCE(intf->run_to_completion);18931877 unsigned long flags = 0;18781878+ int rv = 0;1894187918951895- if (!run_to_completion)18961896- spin_lock_irqsave(&intf->xmit_msgs_lock, flags);18801880+ ipmi_lock_xmit_msgs(intf, run_to_completion, &flags);18971881 smi_msg = smi_add_send_msg(intf, smi_msg, priority);18981898- if (!run_to_completion)18991899- spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);18821882+ ipmi_unlock_xmit_msgs(intf, run_to_completion, &flags);1900188319011901- if (smi_msg)19021902- handlers->sender(intf->send_info, smi_msg);18841884+ if (smi_msg) {18851885+ rv = handlers->sender(intf->send_info, smi_msg);18861886+ if (rv) {18871887+ ipmi_lock_xmit_msgs(intf, run_to_completion, &flags);18881888+ intf->curr_msg = NULL;18891889+ ipmi_unlock_xmit_msgs(intf, run_to_completion, &flags);18901890+ /*18911891+ * Something may have been added to the transmit18921892+ * queue, so schedule a check for that.18931893+ */18941894+ queue_work(system_wq, &intf->smi_work);18951895+ }18961896+ }18971897+ return rv;19031898}1904189919051900static bool is_maintenance_mode_cmd(struct kernel_ipmi_msg *msg)···23232296 struct ipmi_recv_msg *recv_msg;23242297 int run_to_completion = READ_ONCE(intf->run_to_completion);23252298 int rv = 0;22992299+ bool in_seq_table = false;2326230023272301 if (supplied_recv) {23282302 recv_msg = supplied_recv;···23772349 rv = i_ipmi_req_ipmb(intf, addr, msgid, msg, smi_msg, recv_msg,23782350 source_address, source_lun,23792351 retries, retry_time_ms);23522352+ in_seq_table = true;23802353 } else if (is_ipmb_direct_addr(addr)) {23812354 rv = i_ipmi_req_ipmb_direct(intf, addr, msgid, msg, smi_msg,23822355 recv_msg, source_lun);23832356 } else if (is_lan_addr(addr)) {23842357 rv = i_ipmi_req_lan(intf, addr, msgid, msg, smi_msg, recv_msg,23852358 source_lun, retries, retry_time_ms);23592359+ in_seq_table = true;23862360 } else {23872387- /* Unknown address type. */23612361+ /* Unknown address type. */23882362 ipmi_inc_stat(intf, sent_invalid_commands);23892363 rv = -EINVAL;23902364 }2391236523922392- if (rv) {23662366+ if (!rv) {23672367+ dev_dbg(intf->si_dev, "Send: %*ph\n",23682368+ smi_msg->data_size, smi_msg->data);23692369+23702370+ rv = smi_send(intf, intf->handlers, smi_msg, priority);23712371+ if (rv != IPMI_CC_NO_ERROR)23722372+ /* smi_send() returns an IPMI err, return a Linux one. 
*/23732373+ rv = -EIO;23742374+ if (rv && in_seq_table) {23752375+ /*23762376+ * If it's in the sequence table, it will be23772377+ * retried later, so ignore errors.23782378+ */23792379+ rv = 0;23802380+ /* But we need to fix the timeout. */23812381+ intf_start_seq_timer(intf, smi_msg->msgid);23822382+ ipmi_free_smi_msg(smi_msg);23832383+ smi_msg = NULL;23842384+ }23852385+ }23932386out_err:23872387+ if (!run_to_completion)23882388+ mutex_unlock(&intf->users_mutex);23892389+23902390+ if (rv) {23942391 if (!supplied_smi)23952392 ipmi_free_smi_msg(smi_msg);23962393 if (!supplied_recv)23972394 ipmi_free_recv_msg(recv_msg);23982398- } else {23992399- dev_dbg(intf->si_dev, "Send: %*ph\n",24002400- smi_msg->data_size, smi_msg->data);24012401-24022402- smi_send(intf, intf->handlers, smi_msg, priority);24032395 }24042404- if (!run_to_completion)24052405- mutex_unlock(&intf->users_mutex);24062406-24072396 return rv;24082397}24092398···39943949 dev_dbg(intf->si_dev, "Invalid command: %*ph\n",39953950 msg->data_size, msg->data);3996395139973997- smi_send(intf, intf->handlers, msg, 0);39983998- /*39993999- * We used the message, so return the value that40004000- * causes it to not be freed or queued.40014001- */40024002- rv = -1;39523952+ if (smi_send(intf, intf->handlers, msg, 0) == IPMI_CC_NO_ERROR)39533953+ /*39543954+ * We used the message, so return the value that39553955+ * causes it to not be freed or queued.39563956+ */39573957+ rv = -1;40033958 } else if (!IS_ERR(recv_msg)) {40043959 /* Extract the source address from the data. */40053960 ipmb_addr = (struct ipmi_ipmb_addr *) &recv_msg->addr;···40734028 msg->data[4] = IPMI_INVALID_CMD_COMPLETION_CODE;40744029 msg->data_size = 5;4075403040764076- smi_send(intf, intf->handlers, msg, 0);40774077- /*40784078- * We used the message, so return the value that40794079- * causes it to not be freed or queued.40804080- */40814081- rv = -1;40314031+ if (smi_send(intf, intf->handlers, msg, 0) == IPMI_CC_NO_ERROR)40324032+ /*40334033+ * We used the message, so return the value that40344034+ * causes it to not be freed or queued.40354035+ */40364036+ rv = -1;40824037 } else if (!IS_ERR(recv_msg)) {40834038 /* Extract the source address from the data. */40844039 daddr = (struct ipmi_ipmb_direct_addr *)&recv_msg->addr;···42184173 struct ipmi_smi_msg *msg)42194174{42204175 struct cmd_rcvr *rcvr;42214221- int rv = 0;41764176+ int rv = 0; /* Free by default */42224177 unsigned char netfn;42234178 unsigned char cmd;42244179 unsigned char chan;···42714226 dev_dbg(intf->si_dev, "Invalid command: %*ph\n",42724227 msg->data_size, msg->data);4273422842744274- smi_send(intf, intf->handlers, msg, 0);42754275- /*42764276- * We used the message, so return the value that42774277- * causes it to not be freed or queued.42784278- */42794279- rv = -1;42294229+ if (smi_send(intf, intf->handlers, msg, 0) == IPMI_CC_NO_ERROR)42304230+ /*42314231+ * We used the message, so return the value that42324232+ * causes it to not be freed or queued.42334233+ */42344234+ rv = -1;42804235 } else if (!IS_ERR(recv_msg)) {42814236 /* Extract the source address from the data. 
*/42824237 lan_addr = (struct ipmi_lan_addr *) &recv_msg->addr;···48694824 * message delivery.48704825 */48714826restart:48724872- if (!run_to_completion)48734873- spin_lock_irqsave(&intf->xmit_msgs_lock, flags);48274827+ ipmi_lock_xmit_msgs(intf, run_to_completion, &flags);48744828 if (intf->curr_msg == NULL && !intf->in_shutdown) {48754829 struct list_head *entry = NULL;48764830···48854841 intf->curr_msg = newmsg;48864842 }48874843 }48884888- if (!run_to_completion)48894889- spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);48444844+ ipmi_unlock_xmit_msgs(intf, run_to_completion, &flags);4890484548914846 if (newmsg) {48924847 cc = intf->handlers->sender(intf->send_info, newmsg);···48934850 if (newmsg->recv_msg)48944851 deliver_err_response(intf,48954852 newmsg->recv_msg, cc);48964896- else48974897- ipmi_free_smi_msg(newmsg);48534853+ ipmi_lock_xmit_msgs(intf, run_to_completion, &flags);48544854+ intf->curr_msg = NULL;48554855+ ipmi_unlock_xmit_msgs(intf, run_to_completion, &flags);48564856+ ipmi_free_smi_msg(newmsg);48574857+ newmsg = NULL;48984858 goto restart;48994859 }49004860 }···49654919 spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,49664920 flags);4967492149684968- if (!run_to_completion)49694969- spin_lock_irqsave(&intf->xmit_msgs_lock, flags);49224922+ ipmi_lock_xmit_msgs(intf, run_to_completion, &flags);49704923 /*49714924 * We can get an asynchronous event or receive message in addition49724925 * to commands we send.49734926 */49744927 if (msg == intf->curr_msg)49754928 intf->curr_msg = NULL;49764976- if (!run_to_completion)49774977- spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);49294929+ ipmi_unlock_xmit_msgs(intf, run_to_completion, &flags);4978493049794931 if (run_to_completion)49804932 smi_work(&intf->smi_work);···50855041 ipmi_inc_stat(intf,50865042 retransmitted_ipmb_commands);5087504350885088- smi_send(intf, intf->handlers, smi_msg, 0);50445044+ /* If this fails we'll retry later or timeout. */50455045+ if (smi_send(intf, intf->handlers, smi_msg, 0) != IPMI_CC_NO_ERROR) {50465046+ /* But fix the timeout. */50475047+ intf_start_seq_timer(intf, smi_msg->msgid);50485048+ ipmi_free_smi_msg(smi_msg);50495049+ }50895050 } else50905051 ipmi_free_smi_msg(smi_msg);50915052
+24-13
drivers/char/ipmi/ipmi_si_intf.c
···809809 */810810 return_hosed_msg(smi_info, IPMI_BUS_ERR);811811 }812812+ if (smi_info->waiting_msg != NULL) {813813+ /* Also handle if there was a message waiting. */814814+ smi_info->curr_msg = smi_info->waiting_msg;815815+ smi_info->waiting_msg = NULL;816816+ return_hosed_msg(smi_info, IPMI_BUS_ERR);817817+ }812818 smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_HOSED);813819 goto out;814820 }···924918{925919 struct smi_info *smi_info = send_info;926920 unsigned long flags;921921+ int rv = IPMI_CC_NO_ERROR;927922928923 debug_timestamp(smi_info, "Enqueue");929924925925+ /*926926+ * Check here for run to completion mode. A check under lock is927927+ * later.928928+ */930929 if (smi_info->si_state == SI_HOSED)931930 return IPMI_BUS_ERR;932931···945934 }946935947936 spin_lock_irqsave(&smi_info->si_lock, flags);948948- /*949949- * The following two lines don't need to be under the lock for950950- * the lock's sake, but they do need SMP memory barriers to951951- * avoid getting things out of order. We are already claiming952952- * the lock, anyway, so just do it under the lock to avoid the953953- * ordering problem.954954- */955955- BUG_ON(smi_info->waiting_msg);956956- smi_info->waiting_msg = msg;957957- check_start_timer_thread(smi_info);937937+ if (smi_info->si_state == SI_HOSED) {938938+ rv = IPMI_BUS_ERR;939939+ } else {940940+ BUG_ON(smi_info->waiting_msg);941941+ smi_info->waiting_msg = msg;942942+ check_start_timer_thread(smi_info);943943+ }958944 spin_unlock_irqrestore(&smi_info->si_lock, flags);959959- return IPMI_CC_NO_ERROR;945945+ return rv;960946}961947962948static void set_run_to_completion(void *send_info, bool i_run_to_completion)···11211113 * SI_USEC_PER_JIFFY);11221114 smi_result = smi_event_handler(smi_info, time_diff);1123111511241124- if ((smi_info->io.irq) && (!smi_info->interrupt_disabled)) {11161116+ if (smi_info->si_state == SI_HOSED) {11171117+ timeout = jiffies + SI_TIMEOUT_HOSED;11181118+ } else if ((smi_info->io.irq) && (!smi_info->interrupt_disabled)) {11251119 /* Running with interrupts, only do long timeouts. */11261120 timeout = jiffies + SI_TIMEOUT_JIFFIES;11271121 smi_inc_stat(smi_info, long_timeouts);···22362226 unsigned long jiffies_now;22372227 long time_diff;2238222822392239- while (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL)) {22292229+ while (smi_info->si_state != SI_HOSED &&22302230+ (smi_info->curr_msg || (smi_info->si_state != SI_NORMAL))) {22402231 jiffies_now = jiffies;22412232 time_diff = (((long)jiffies_now - (long)smi_info->last_timeout_jiffies)22422233 * SI_USEC_PER_JIFFY);
···14761476 refresh_frequency_limits(policy);14771477}1478147814791479-static bool intel_pstate_update_max_freq(struct cpudata *cpudata)14791479+static bool intel_pstate_update_max_freq(int cpu)14801480{14811481- struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpudata->cpu);14811481+ struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpu);14821482 if (!policy)14831483 return false;1484148414851485- __intel_pstate_update_max_freq(policy, cpudata);14851485+ __intel_pstate_update_max_freq(policy, all_cpu_data[cpu]);1486148614871487 return true;14881488}···15011501 int cpu;1502150215031503 for_each_possible_cpu(cpu)15041504- intel_pstate_update_max_freq(all_cpu_data[cpu]);15041504+ intel_pstate_update_max_freq(cpu);1505150515061506 mutex_lock(&hybrid_capacity_lock);15071507···16471647static void update_cpu_qos_request(int cpu, enum freq_qos_req_type type)16481648{16491649 struct cpudata *cpudata = all_cpu_data[cpu];16501650- unsigned int freq = cpudata->pstate.turbo_freq;16511650 struct freq_qos_request *req;16511651+ unsigned int freq;1652165216531653 struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpu);16541654 if (!policy)···1660166016611661 if (hwp_active)16621662 intel_pstate_get_hwp_cap(cpudata);16631663+16641664+ freq = cpudata->pstate.turbo_freq;1663166516641666 if (type == FREQ_QOS_MIN) {16651667 freq = DIV_ROUND_UP(freq * global.min_perf_pct, 100);···19101908 struct cpudata *cpudata =19111909 container_of(to_delayed_work(work), struct cpudata, hwp_notify_work);1912191019131913- if (intel_pstate_update_max_freq(cpudata)) {19111911+ if (intel_pstate_update_max_freq(cpudata->cpu)) {19141912 /*19151913 * The driver will not be unregistered while this function is19161914 * running, so update the capacity without acquiring the driver
+18
drivers/cxl/core/core.h
···152152int cxl_port_get_switch_dport_bandwidth(struct cxl_port *port,153153 struct access_coordinate *c);154154155155+static inline struct device *port_to_host(struct cxl_port *port)156156+{157157+ struct cxl_port *parent = is_cxl_root(port) ? NULL :158158+ to_cxl_port(port->dev.parent);159159+160160+ /*161161+ * The host of the CXL root port and of the first level of ports162162+ * is the platform firmware device; the host of all other ports163163+ * is their parent port.164164+ */165165+ if (!parent)166166+ return port->uport_dev;167167+ else if (is_cxl_root(parent))168168+ return parent->uport_dev;169169+ else170170+ return &parent->dev;171171+}172172+155173static inline struct device *dport_to_host(struct cxl_dport *dport)156174{157175 struct cxl_port *port = dport->port;
+1-1
drivers/cxl/core/hdm.c
···904904 if ((cxld->flags & CXL_DECODER_F_ENABLE) == 0)905905 return;906906907907- if (test_bit(CXL_DECODER_F_LOCK, &cxld->flags))907907+ if (cxld->flags & CXL_DECODER_F_LOCK)908908 return;909909910910 if (port->commit_end == id)
+9-2
drivers/cxl/core/mbox.c
···311311 * cxl_payload_from_user_allowed() - Check contents of in_payload.312312 * @opcode: The mailbox command opcode.313313 * @payload_in: Pointer to the input payload passed in from user space.314314+ * @in_size: Size of @payload_in in bytes.314315 *315316 * Return:316317 * * true - payload_in passes check for @opcode.···326325 *327326 * The specific checks are determined by the opcode.328327 */329329-static bool cxl_payload_from_user_allowed(u16 opcode, void *payload_in)328328+static bool cxl_payload_from_user_allowed(u16 opcode, void *payload_in,329329+ size_t in_size)330330{331331 switch (opcode) {332332 case CXL_MBOX_OP_SET_PARTITION_INFO: {333333 struct cxl_mbox_set_partition_info *pi = payload_in;334334335335+ if (in_size < sizeof(*pi))336336+ return false;335337 if (pi->flags & CXL_SET_PARTITION_IMMEDIATE_FLAG)336338 return false;337339 break;···342338 case CXL_MBOX_OP_CLEAR_LOG: {343339 const uuid_t *uuid = (uuid_t *)payload_in;344340341341+ if (in_size < sizeof(uuid_t))342342+ return false;345343 /*346344 * Restrict the ‘Clear log’ action to only apply to347345 * Vendor debug logs.···371365 if (IS_ERR(mbox_cmd->payload_in))372366 return PTR_ERR(mbox_cmd->payload_in);373367374374- if (!cxl_payload_from_user_allowed(opcode, mbox_cmd->payload_in)) {368368+ if (!cxl_payload_from_user_allowed(opcode, mbox_cmd->payload_in,369369+ in_size)) {375370 dev_dbg(cxl_mbox->host, "%s: input payload not allowed\n",376371 cxl_mem_opcode_to_name(opcode));377372 kvfree(mbox_cmd->payload_in);
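The mbox fix threads the user-supplied payload length into the per-opcode checks so that no typed field is read from a too-short buffer. A standalone sketch of the same validate-size-before-cast rule (the struct layout and flag below are invented for illustration):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct set_partition_info {              /* invented layout for the demo */
        uint64_t volatile_capacity;
        uint8_t flags;
};

#define IMMEDIATE_FLAG 0x1

static bool payload_allowed(const void *payload_in, size_t in_size)
{
        const struct set_partition_info *pi = payload_in;

        if (in_size < sizeof(*pi))
                return false;            /* too short: fields would be garbage */
        return !(pi->flags & IMMEDIATE_FLAG);
}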
+9-4
drivers/cxl/core/memdev.c
···10891089DEFINE_FREE(put_cxlmd, struct cxl_memdev *,10901090 if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))1091109110921092-static struct cxl_memdev *cxl_memdev_autoremove(struct cxl_memdev *cxlmd)10921092+static bool cxl_memdev_attach_failed(struct cxl_memdev *cxlmd)10931093{10941094- int rc;10951095-10961094 /*10971095 * If @attach is provided fail if the driver is not attached upon10981096 * return. Note that failure here could be the result of a race to···10981100 * succeeded and then cxl_mem unbound before the lock is acquired.10991101 */11001102 guard(device)(&cxlmd->dev);11011101- if (cxlmd->attach && !cxlmd->dev.driver) {11031103+ return (cxlmd->attach && !cxlmd->dev.driver);11041104+}11051105+11061106+static struct cxl_memdev *cxl_memdev_autoremove(struct cxl_memdev *cxlmd)11071107+{11081108+ int rc;11091109+11101110+ if (cxl_memdev_attach_failed(cxlmd)) {11021111 cxl_memdev_unregister(cxlmd);11031112 return ERR_PTR(-ENXIO);11041113 }
+32-10
drivers/cxl/core/pmem.c
···115115 device_unregister(&cxl_nvb->dev);116116}117117118118-/**119119- * devm_cxl_add_nvdimm_bridge() - add the root of a LIBNVDIMM topology120120- * @host: platform firmware root device121121- * @port: CXL port at the root of a CXL topology122122- *123123- * Return: bridge device that can host cxl_nvdimm objects124124- */125125-struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,126126- struct cxl_port *port)118118+static bool cxl_nvdimm_bridge_failed_attach(struct cxl_nvdimm_bridge *cxl_nvb)119119+{120120+ struct device *dev = &cxl_nvb->dev;121121+122122+ guard(device)(dev);123123+ /* If the device has no driver, then it failed to attach. */124124+ return dev->driver == NULL;125125+}126126+127127+struct cxl_nvdimm_bridge *__devm_cxl_add_nvdimm_bridge(struct device *host,128128+ struct cxl_port *port)127129{128130 struct cxl_nvdimm_bridge *cxl_nvb;129131 struct device *dev;···147145 if (rc)148146 goto err;149147148148+ if (cxl_nvdimm_bridge_failed_attach(cxl_nvb)) {149149+ unregister_nvb(cxl_nvb);150150+ return ERR_PTR(-ENODEV);151151+ }152152+150153 rc = devm_add_action_or_reset(host, unregister_nvb, cxl_nvb);151154 if (rc)152155 return ERR_PTR(rc);···162155 put_device(dev);163156 return ERR_PTR(rc);164157}165165-EXPORT_SYMBOL_NS_GPL(devm_cxl_add_nvdimm_bridge, "CXL");158158+EXPORT_SYMBOL_FOR_MODULES(__devm_cxl_add_nvdimm_bridge, "cxl_pmem");166159167160static void cxl_nvdimm_release(struct device *dev)168161{···261254 cxl_nvb = cxl_find_nvdimm_bridge(port);262255 if (!cxl_nvb)263256 return -ENODEV;257257+258258+ /*259259+ * Take the uport_dev lock to guard against races on the nvdimm_bus260260+ * object: cxl_acpi_probe() registers the nvdimm_bus under the root261261+ * port uport_dev lock.262262+ *263263+ * Take the cxl_nvb device lock to ensure that the cxl_nvb driver,264264+ * which registers the nvdimm_bus, is in a consistent state.265265+ */266266+ guard(device)(cxl_nvb->port->uport_dev);267267+ guard(device)(&cxl_nvb->dev);268268+ if (!cxl_nvb->nvdimm_bus) {269269+ rc = -ENODEV;270270+ goto err_alloc;271271+ }264272265273 cxl_nvd = cxl_nvdimm_alloc(cxl_nvb, cxlmd);266274 if (IS_ERR(cxl_nvd)) {
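The guard(device)(...) statements above come from the kernel's scope-based locking in <linux/cleanup.h>, which is built on the compiler's cleanup attribute. A rough userspace equivalent of the same idea with a pthread mutex (the macro and names are illustrative, not the kernel API):

#include <pthread.h>

static void mutex_unlocker(pthread_mutex_t **m)
{
        pthread_mutex_unlock(*m);
}

/* One guard per scope; the kernel's guard() macro is more general. */
#define guard_mutex(m) \
        pthread_mutex_t *guard_var __attribute__((cleanup(mutex_unlocker))) = (m); \
        pthread_mutex_lock(guard_var)

static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;
static int nvdimm_bus_ready;

static int bridge_ready(void)
{
        guard_mutex(&dev_lock);          /* dropped on every return path */
        return nvdimm_bus_ready;
}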
+18-34
drivers/cxl/core/port.c
···615615static void unregister_port(void *_port)616616{617617 struct cxl_port *port = _port;618618- struct cxl_port *parent = parent_port_of(port);619619- struct device *lock_dev;620618621621- /*622622- * CXL root port's and the first level of ports are unregistered623623- * under the platform firmware device lock, all other ports are624624- * unregistered while holding their parent port lock.625625- */626626- if (!parent)627627- lock_dev = port->uport_dev;628628- else if (is_cxl_root(parent))629629- lock_dev = parent->uport_dev;630630- else631631- lock_dev = &parent->dev;632632-633633- device_lock_assert(lock_dev);619619+ device_lock_assert(port_to_host(port));634620 port->dead = true;635621 device_unregister(&port->dev);636622}···14131427 return NULL;14141428}1415142914161416-static struct device *endpoint_host(struct cxl_port *endpoint)14171417-{14181418- struct cxl_port *port = to_cxl_port(endpoint->dev.parent);14191419-14201420- if (is_cxl_root(port))14211421- return port->uport_dev;14221422- return &port->dev;14231423-}14241424-14251430static void delete_endpoint(void *data)14261431{14271432 struct cxl_memdev *cxlmd = data;14281433 struct cxl_port *endpoint = cxlmd->endpoint;14291429- struct device *host = endpoint_host(endpoint);14341434+ struct device *host = port_to_host(endpoint);1430143514311436 scoped_guard(device, host) {14321437 if (host->driver && !endpoint->dead) {···1433145614341457int cxl_endpoint_autoremove(struct cxl_memdev *cxlmd, struct cxl_port *endpoint)14351458{14361436- struct device *host = endpoint_host(endpoint);14591459+ struct device *host = port_to_host(endpoint);14371460 struct device *dev = &cxlmd->dev;1438146114391462 get_device(host);···17671790{17681791 struct cxl_dport *dport;1769179217701770- device_lock_assert(&port->dev);17931793+ /*17941794+ * The port is already visible in CXL hierarchy, but it may still17951795+ * be in the process of binding to the CXL port driver at this point.17961796+ *17971797+ * port creation and driver binding are protected by the port's host17981798+ * lock, so acquire the host lock here to ensure the port has completed17991799+ * driver binding before proceeding with dport addition.18001800+ */18011801+ guard(device)(port_to_host(port));18021802+ guard(device)(&port->dev);17711803 dport = cxl_find_dport_by_dev(port, dport_dev);17721804 if (!dport) {17731805 dport = probe_dport(port, dport_dev);···18431857 * RP port enumerated by cxl_acpi without dport will18441858 * have the dport added here.18451859 */18461846- scoped_guard(device, &port->dev) {18471847- dport = find_or_add_dport(port, dport_dev);18481848- if (IS_ERR(dport)) {18491849- if (PTR_ERR(dport) == -EAGAIN)18501850- goto retry;18511851- return PTR_ERR(dport);18521852- }18601860+ dport = find_or_add_dport(port, dport_dev);18611861+ if (IS_ERR(dport)) {18621862+ if (PTR_ERR(dport) == -EAGAIN)18631863+ goto retry;18641864+ return PTR_ERR(dport);18531865 }1854186618551867 rc = cxl_add_ep(dport, &cxlmd->dev);
+2-2
drivers/cxl/core/region.c
···11001100static void cxl_region_setup_flags(struct cxl_region *cxlr,11011101 struct cxl_decoder *cxld)11021102{11031103- if (test_bit(CXL_DECODER_F_LOCK, &cxld->flags)) {11031103+ if (cxld->flags & CXL_DECODER_F_LOCK) {11041104 set_bit(CXL_REGION_F_LOCK, &cxlr->flags);11051105 clear_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);11061106 }1107110711081108- if (test_bit(CXL_DECODER_F_NORMALIZED_ADDRESSING, &cxld->flags))11081108+ if (cxld->flags & CXL_DECODER_F_NORMALIZED_ADDRESSING)11091109 set_bit(CXL_REGION_F_NORMALIZED_ADDRESSING, &cxlr->flags);11101110}11111111
···13131414static __read_mostly DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);15151616+/**1717+ * devm_cxl_add_nvdimm_bridge() - add the root of a LIBNVDIMM topology1818+ * @host: platform firmware root device1919+ * @port: CXL port at the root of a CXL topology2020+ *2121+ * Return: bridge device that can host cxl_nvdimm objects2222+ */2323+struct cxl_nvdimm_bridge *devm_cxl_add_nvdimm_bridge(struct device *host,2424+ struct cxl_port *port)2525+{2626+ return __devm_cxl_add_nvdimm_bridge(host, port);2727+}2828+EXPORT_SYMBOL_NS_GPL(devm_cxl_add_nvdimm_bridge, "CXL");2929+1630static void clear_exclusive(void *mds)1731{1832 clear_exclusive_cxl_commands(mds, exclusive_cmds);···142128 unsigned long flags = 0, cmd_mask = 0;143129 struct nvdimm *nvdimm;144130 int rc;131131+132132+ if (test_bit(CXL_NVD_F_INVALIDATED, &cxl_nvd->flags))133133+ return -EBUSY;145134146135 set_exclusive_cxl_commands(mds, exclusive_cmds);147136 rc = devm_add_action_or_reset(dev, clear_exclusive, mds);···326309 scoped_guard(device, dev) {327310 if (dev->driver) {328311 cxl_nvd = to_cxl_nvdimm(dev);329329- if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data)312312+ if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data) {330313 release = true;314314+ set_bit(CXL_NVD_F_INVALIDATED, &cxl_nvd->flags);315315+ }331316 }332317 }333318 if (release)···372353 .probe = cxl_nvdimm_bridge_probe,373354 .id = CXL_DEVICE_NVDIMM_BRIDGE,374355 .drv = {356356+ .probe_type = PROBE_FORCE_SYNCHRONOUS,375357 .suppress_bind_attrs = true,376358 },377359};
+2-5
drivers/dpll/zl3073x/core.c
···981981 }982982983983 /* Add devres action to release DPLL related resources */984984- rc = devm_add_action_or_reset(zldev->dev, zl3073x_dev_dpll_fini, zldev);985985- if (rc)986986- goto error;987987-988988- return 0;984984+ return devm_add_action_or_reset(zldev->dev, zl3073x_dev_dpll_fini, zldev);989985990986error:991987 zl3073x_dev_dpll_fini(zldev);···10221026 "Unknown or non-match chip ID: 0x%0x\n",10231027 id);10241028 }10291029+ zldev->chip_id = id;1025103010261031 /* Read revision, firmware version and custom config version */10271032 rc = zl3073x_read_u16(zldev, ZL_REG_REVISION, &revision);
+28
drivers/dpll/zl3073x/core.h
···3535 * @dev: pointer to device3636 * @regmap: regmap to access device registers3737 * @multiop_lock: to serialize multiple register operations3838+ * @chip_id: chip ID read from hardware3839 * @ref: array of input references' invariants3940 * @out: array of outs' invariants4041 * @synth: array of synths' invariants···4948 struct device *dev;5049 struct regmap *regmap;5150 struct mutex multiop_lock;5151+ u16 chip_id;52525353 /* Invariants */5454 struct zl3073x_ref ref[ZL3073X_NUM_REFS];···145143 *****************/146144147145int zl3073x_ref_phase_offsets_update(struct zl3073x_dev *zldev, int channel);146146+147147+/**148148+ * zl3073x_dev_is_ref_phase_comp_32bit - check ref phase comp register size149149+ * @zldev: pointer to zl3073x device150150+ *151151+ * Some chip IDs have a 32-bit wide ref_phase_offset_comp register instead152152+ * of the default 48-bit.153153+ *154154+ * Return: true if the register is 32-bit, false if 48-bit155155+ */156156+static inline bool157157+zl3073x_dev_is_ref_phase_comp_32bit(struct zl3073x_dev *zldev)158158+{159159+ switch (zldev->chip_id) {160160+ case 0x0E30:161161+ case 0x0E93:162162+ case 0x0E94:163163+ case 0x0E95:164164+ case 0x0E96:165165+ case 0x0E97:166166+ case 0x1F60:167167+ return true;168168+ default:169169+ return false;170170+ }171171+}148172149173static inline bool150174zl3073x_is_n_pin(u8 id)
+5-2
drivers/dpll/zl3073x/dpll.c
···475475 ref_id = zl3073x_input_pin_ref_get(pin->id);476476 ref = zl3073x_ref_state_get(zldev, ref_id);477477478478- /* Perform sign extension for 48bit signed value */479479- phase_comp = sign_extend64(ref->phase_comp, 47);478478+ /* Perform sign extension based on register width */479479+ if (zl3073x_dev_is_ref_phase_comp_32bit(zldev))480480+ phase_comp = sign_extend64(ref->phase_comp, 31);481481+ else482482+ phase_comp = sign_extend64(ref->phase_comp, 47);480483481484 /* Reverse two's complement negation applied during set and convert482485 * to 32bit signed int
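sign_extend64(value, index) treats bit index as the sign bit, which is why the 32-bit register variant above uses index 31 and the 48-bit one index 47. A self-contained demonstration of both widths (the helper mirrors the kernel's bitops version; the input value is invented):

#include <stdint.h>
#include <stdio.h>

/* Same shape as the kernel's sign_extend64(): bit `index` is the sign bit. */
static int64_t sign_extend64(uint64_t value, int index)
{
        int shift = 63 - index;

        return (int64_t)(value << shift) >> shift;
}

int main(void)
{
        uint64_t raw = 0x80000000;       /* top bit of a 32-bit register set */

        /* -2147483648 when the register is 32 bits wide ... */
        printf("%lld\n", (long long)sign_extend64(raw, 31));
        /* ... but +2147483648 when it is 48 bits wide */
        printf("%lld\n", (long long)sign_extend64(raw, 47));
        return 0;
}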
···3267326732683268 /* Make sure this is called after checking for gc->get(). */32693269 ret = gc->get(gc, offset);32703270- if (ret > 1)32713271- ret = -EBADE;32703270+ if (ret > 1) {32713271+ gpiochip_warn(gc,32723272+ "invalid return value from gc->get(): %d, consider fixing the driver\n",32733273+ ret);32743274+ ret = !!ret;32753275+ }3272327632733277 return ret;32743278}
···332332 if (!context || !context->initialized) {333333 dev_err(adev->dev, "TA is not initialized\n");334334 ret = -EINVAL;335335- goto err_free_shared_buf;335335+ goto free_shared_buf;336336 }337337338338 if (!psp->ta_funcs || !psp->ta_funcs->fn_ta_invoke) {339339 dev_err(adev->dev, "Unsupported function to invoke TA\n");340340 ret = -EOPNOTSUPP;341341- goto err_free_shared_buf;341341+ goto free_shared_buf;342342 }343343344344 context->session_id = ta_id;···346346 mutex_lock(&psp->ras_context.mutex);347347 ret = prep_ta_mem_context(&context->mem_context, shared_buf, shared_buf_len);348348 if (ret)349349- goto err_free_shared_buf;349349+ goto unlock;350350351351 ret = psp_fn_ta_invoke(psp, cmd_id);352352 if (ret || context->resp_status) {···354354 ret, context->resp_status);355355 if (!ret) {356356 ret = -EINVAL;357357- goto err_free_shared_buf;357357+ goto unlock;358358 }359359 }360360361361 if (copy_to_user((char *)&buf[copy_pos], context->mem_context.shared_buf, shared_buf_len))362362 ret = -EFAULT;363363364364-err_free_shared_buf:364364+unlock:365365 mutex_unlock(&psp->ras_context.mutex);366366+367367+free_shared_buf:366368 kfree(shared_buf);367369368370 return ret;
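The relabelling above restores the usual goto-unwind rule: one label per acquired resource, in reverse acquisition order, so a failure before the lock is taken never runs the unlock. A small standalone illustration of that shape (all names invented):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

static int do_work(void *buf, size_t len)
{
        return buf && len ? 0 : -1;      /* placeholder for the locked work */
}

static int invoke(size_t len)
{
        int ret;
        void *buf = malloc(len ? len : 1);

        if (!buf)
                return -1;

        if (len > 4096) {
                ret = -1;
                goto free_buf;           /* lock not taken yet: skip unlock */
        }

        pthread_mutex_lock(&ctx_lock);
        ret = do_work(buf, len);
        if (ret)
                goto unlock;
        /* ... more work under the lock ... */
unlock:
        pthread_mutex_unlock(&ctx_lock);
free_buf:
        free(buf);
        return ret;
}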
+18-4
drivers/gpu/drm/amd/amdgpu/amdgpu_userq_fence.c
···3535static const struct dma_fence_ops amdgpu_userq_fence_ops;3636static struct kmem_cache *amdgpu_userq_fence_slab;37373838+#define AMDGPU_USERQ_MAX_HANDLES (1U << 16)3939+3840int amdgpu_userq_fence_slab_init(void)3941{4042 amdgpu_userq_fence_slab = kmem_cache_create("amdgpu_userq_fence",···480478 if (!amdgpu_userq_enabled(dev))481479 return -ENOTSUPP;482480481481+ if (args->num_syncobj_handles > AMDGPU_USERQ_MAX_HANDLES ||482482+ args->num_bo_write_handles > AMDGPU_USERQ_MAX_HANDLES ||483483+ args->num_bo_read_handles > AMDGPU_USERQ_MAX_HANDLES)484484+ return -EINVAL;485485+483486 num_syncobj_handles = args->num_syncobj_handles;484487 syncobj_handles = memdup_user(u64_to_user_ptr(args->syncobj_handles),485488 size_mul(sizeof(u32), num_syncobj_handles));···671664 if (!amdgpu_userq_enabled(dev))672665 return -ENOTSUPP;673666667667+ if (wait_info->num_syncobj_handles > AMDGPU_USERQ_MAX_HANDLES ||668668+ wait_info->num_bo_write_handles > AMDGPU_USERQ_MAX_HANDLES ||669669+ wait_info->num_bo_read_handles > AMDGPU_USERQ_MAX_HANDLES)670670+ return -EINVAL;671671+674672 num_read_bo_handles = wait_info->num_bo_read_handles;675673 bo_handles_read = memdup_user(u64_to_user_ptr(wait_info->bo_read_handles),676674 size_mul(sizeof(u32), num_read_bo_handles));···845833846834 dma_resv_for_each_fence(&resv_cursor, gobj_read[i]->resv,847835 DMA_RESV_USAGE_READ, fence) {848848- if (WARN_ON_ONCE(num_fences >= wait_info->num_fences)) {836836+ if (num_fences >= wait_info->num_fences) {849837 r = -EINVAL;850838 goto free_fences;851839 }···862850863851 dma_resv_for_each_fence(&resv_cursor, gobj_write[i]->resv,864852 DMA_RESV_USAGE_WRITE, fence) {865865- if (WARN_ON_ONCE(num_fences >= wait_info->num_fences)) {853853+ if (num_fences >= wait_info->num_fences) {866854 r = -EINVAL;867855 goto free_fences;868856 }···886874 goto free_fences;887875888876 dma_fence_unwrap_for_each(f, &iter, fence) {889889- if (WARN_ON_ONCE(num_fences >= wait_info->num_fences)) {877877+ if (num_fences >= wait_info->num_fences) {890878 r = -EINVAL;879879+ dma_fence_put(fence);891880 goto free_fences;892881 }893882···911898 if (r)912899 goto free_fences;913900914914- if (WARN_ON_ONCE(num_fences >= wait_info->num_fences)) {901901+ if (num_fences >= wait_info->num_fences) {915902 r = -EINVAL;903903+ dma_fence_put(fence);916904 goto free_fences;917905 }918906
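The new AMDGPU_USERQ_MAX_HANDLES cap bounds user-controlled counts before they feed the size computations for memdup_user(). A standalone illustration of rejecting the count before sizing the copy (the constant and names are illustrative):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_HANDLES (1U << 16)

static uint32_t *dup_handles(const uint32_t *user, uint32_t n)
{
        uint32_t *buf;

        if (n > MAX_HANDLES) {           /* reject before sizing the copy */
                errno = EINVAL;
                return NULL;
        }
        buf = malloc((size_t)n * sizeof(*buf));
        if (buf && n)
                memcpy(buf, user, (size_t)n * sizeof(*buf));
        return buf;
}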
-5
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
···720720 mes_set_hw_res_pkt.enable_reg_active_poll = 1;721721 mes_set_hw_res_pkt.enable_level_process_quantum_check = 1;722722 mes_set_hw_res_pkt.oversubscription_timer = 50;723723- if ((mes->adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x7f)724724- mes_set_hw_res_pkt.enable_lr_compute_wa = 1;725725- else726726- dev_info_once(mes->adev->dev,727727- "MES FW version must be >= 0x7f to enable LR compute workaround.\n");728723729724 if (amdgpu_mes_log_enable) {730725 mes_set_hw_res_pkt.enable_mes_event_int_logging = 1;
-5
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
···779779 mes_set_hw_res_pkt.use_different_vmid_compute = 1;780780 mes_set_hw_res_pkt.enable_reg_active_poll = 1;781781 mes_set_hw_res_pkt.enable_level_process_quantum_check = 1;782782- if ((mes->adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x82)783783- mes_set_hw_res_pkt.enable_lr_compute_wa = 1;784784- else785785- dev_info_once(adev->dev,786786- "MES FW version must be >= 0x82 to enable LR compute workaround.\n");787782788783 /*789784 * Keep oversubscribe timer for sdma . When we have unmapped doorbell
···13381338EXPORT_SYMBOL_GPL(drm_gpusvm_range_pages_valid);1339133913401340/**13411341- * drm_gpusvm_range_pages_valid_unlocked() - GPU SVM range pages valid unlocked13411341+ * drm_gpusvm_pages_valid_unlocked() - GPU SVM pages valid unlocked13421342 * @gpusvm: Pointer to the GPU SVM structure13431343- * @range: Pointer to the GPU SVM range structure13431343+ * @svm_pages: Pointer to the GPU SVM pages structure13441344 *13451345- * This function determines if a GPU SVM range pages are valid. Expected be13461346- called without holding gpusvm->notifier_lock.13451345+ * This function determines if GPU SVM pages are valid. Expected to be called13461346+ without holding gpusvm->notifier_lock.13471347 *13481348- * Return: True if GPU SVM range has valid pages, False otherwise13481348+ * Return: True if GPU SVM pages are valid, False otherwise13491349 */13501350static bool drm_gpusvm_pages_valid_unlocked(struct drm_gpusvm *gpusvm,13511351 struct drm_gpusvm_pages *svm_pages)
···737737 if (!obj)738738 goto done;739739740740- if (WARN_ON(obj->type != ACPI_TYPE_BUFFER) ||741741- WARN_ON(obj->buffer.length != 4))740740+ if (obj->type != ACPI_TYPE_BUFFER ||741741+ obj->buffer.length != 4)742742 goto done;743743744744 caps->status = 0;···773773 if (!obj)774774 goto done;775775776776- if (WARN_ON(obj->type != ACPI_TYPE_BUFFER) ||777777- WARN_ON(obj->buffer.length != 4))776776+ if (obj->type != ACPI_TYPE_BUFFER ||777777+ obj->buffer.length != 4)778778 goto done;779779780780 jt->status = 0;···861861862862 _DOD = output.pointer;863863864864- if (WARN_ON(_DOD->type != ACPI_TYPE_PACKAGE) ||865865- WARN_ON(_DOD->package.count > ARRAY_SIZE(dod->acpiIdList)))864864+ if (_DOD->type != ACPI_TYPE_PACKAGE ||865865+ _DOD->package.count > ARRAY_SIZE(dod->acpiIdList))866866 return;867867868868 for (int i = 0; i < _DOD->package.count; i++) {
+2-2
drivers/gpu/drm/tiny/sharp-memory.c
···541541542542 smd = devm_drm_dev_alloc(dev, &sharp_memory_drm_driver,543543 struct sharp_memory_device, drm);544544- if (!smd)545545- return -ENOMEM;544544+ if (IS_ERR(smd))545545+ return PTR_ERR(smd);546546547547 spi_set_drvdata(spi, smd);548548
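The sharp-memory one-liner matters because devm_drm_dev_alloc() reports failure as an ERR_PTR(), never NULL, so the old !smd test could never fire. A standalone illustration of the ERR_PTR encoding (the helpers are re-declared here for the demo):

#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
        void *p = ERR_PTR(-12);          /* an allocator failing with -ENOMEM */

        /* !p is 0, so a NULL check never catches this failure */
        printf("!p=%d IS_ERR(p)=%d PTR_ERR(p)=%ld\n", !p, IS_ERR(p), PTR_ERR(p));
        return 0;
}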
+4
drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
···105105 * @handle: DMA address handle for the command buffer space if @using_mob is106106 * false. Immutable.107107 * @size: The size of the command buffer space. Immutable.108108+ * @id: Monotonically increasing ID of the last cmdbuf submitted.108109 * @num_contexts: Number of contexts actually enabled.109110 */110111struct vmw_cmdbuf_man {···133132 bool has_pool;134133 dma_addr_t handle;135134 size_t size;135135+ u64 id;136136 u32 num_contexts;137137};138138···304302{305303 struct vmw_cmdbuf_man *man = header->man;306304 u32 val;305305+306306+ header->cb_header->id = man->id++;307307308308 val = upper_32_bits(header->handle);309309 vmw_write(man->dev_priv, SVGA_REG_COMMAND_HIGH, val);
+2-2
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
···11431143 ret = vmw_user_bo_lookup(sw_context->filp, handle, &vmw_bo);11441144 if (ret != 0) {11451145 drm_dbg(&dev_priv->drm, "Could not find or use MOB buffer.\n");11461146- return PTR_ERR(vmw_bo);11461146+ return ret;11471147 }11481148 vmw_bo_placement_set(vmw_bo, VMW_BO_DOMAIN_MOB, VMW_BO_DOMAIN_MOB);11491149 ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo);···11991199 ret = vmw_user_bo_lookup(sw_context->filp, handle, &vmw_bo);12001200 if (ret != 0) {12011201 drm_dbg(&dev_priv->drm, "Could not find or use GMR region.\n");12021202- return PTR_ERR(vmw_bo);12021202+ return ret;12031203 }12041204 vmw_bo_placement_set(vmw_bo, VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM,12051205 VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM);
+8-1
drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
···260260 return ret;261261}262262263263+static void vmw_bo_dirty_free(struct kref *kref)264264+{265265+ struct vmw_bo_dirty *dirty = container_of(kref, struct vmw_bo_dirty, ref_count);266266+267267+ kvfree(dirty);268268+}269269+263270/**264271 * vmw_bo_dirty_release - Release a dirty-tracking user from a buffer object265272 * @vbo: The buffer object···281274{282275 struct vmw_bo_dirty *dirty = vbo->dirty;283276284284- if (dirty && kref_put(&dirty->ref_count, (void *)kvfree))277277+ if (dirty && kref_put(&dirty->ref_count, vmw_bo_dirty_free))285278 vbo->dirty = NULL;286279}287280
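vmw_bo_dirty_free() replaces a (void *)kvfree cast: calling a function through a mismatched pointer type is undefined behaviour and trips kCFI, so kref_put() needs a release callback with the exact expected signature. A minimal userspace analogue (a plain counter stands in for struct kref):

#include <stdlib.h>

struct dirty {
        int ref_count;                   /* stands in for struct kref */
        unsigned long bitmap[4];
};

static void dirty_free(struct dirty *d) /* the exact expected signature */
{
        free(d);
}

static void dirty_put(struct dirty *d)
{
        /* like kref_put(&d->ref_count, release), minus the atomics */
        if (--d->ref_count == 0)
                dirty_free(d);           /* no function-pointer cast needed */
}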
+6
drivers/gpu/drm/xe/regs/xe_engine_regs.h
···9696#define ENABLE_SEMAPHORE_POLL_BIT REG_BIT(13)97979898#define RING_CMD_CCTL(base) XE_REG((base) + 0xc4, XE_REG_OPTION_MASKED)9999+100100+#define CS_MMIO_GROUP_INSTANCE_SELECT(base) XE_REG((base) + 0xcc)101101+#define SELECTIVE_READ_ADDRESSING REG_BIT(30)102102+#define SELECTIVE_READ_GROUP REG_GENMASK(29, 23)103103+#define SELECTIVE_READ_INSTANCE REG_GENMASK(22, 16)104104+99105/*100106 * CMD_CCTL read/write fields take a MOCS value and _not_ a table index.101107 * The lsb of each can be considered a separate enabling bit for encryption.
+54-12
drivers/gpu/drm/xe/xe_gt.c
···210210 return ret;211211}212212213213+/* Dwords required to emit a RMW of a register */214214+#define EMIT_RMW_DW 20215215+213216static int emit_wa_job(struct xe_gt *gt, struct xe_exec_queue *q)214217{215215- struct xe_reg_sr *sr = &q->hwe->reg_lrc;218218+ struct xe_hw_engine *hwe = q->hwe;219219+ struct xe_reg_sr *sr = &hwe->reg_lrc;216220 struct xe_reg_sr_entry *entry;217217- int count_rmw = 0, count = 0, ret;221221+ int count_rmw = 0, count_rmw_mcr = 0, count = 0, ret;218222 unsigned long idx;219223 struct xe_bb *bb;220224 size_t bb_len = 0;···228224 xa_for_each(&sr->xa, idx, entry) {229225 if (entry->reg.masked || entry->clr_bits == ~0)230226 ++count;227227+ else if (entry->reg.mcr)228228+ ++count_rmw_mcr;231229 else232230 ++count_rmw;233231 }···237231 if (count)238232 bb_len += count * 2 + 1;239233240240- if (count_rmw)241241- bb_len += count_rmw * 20 + 7;234234+ /*235235+ * RMW of MCR registers is the same as a normal RMW, except an236236+ * additional LRI (3 dwords) is required per register to steer the read237237+ * to a non-terminated instance.238238+ *239239+ * We could probably shorten the batch slightly by eliding the240240+ * steering for consecutive MCR registers that have the same241241+ * group/instance target, but it's not worth the extra complexity to do242242+ * so.243243+ */244244+ bb_len += count_rmw * EMIT_RMW_DW;245245+ bb_len += count_rmw_mcr * (EMIT_RMW_DW + 3);242246243243- if (q->hwe->class == XE_ENGINE_CLASS_RENDER)247247+ /*248248+ * After doing all RMW, we need 7 trailing dwords to clean up,249249+ * plus an additional 3 dwords to reset steering if any of the250250+ * registers were MCR.251251+ */252252+ if (count_rmw || count_rmw_mcr)253253+ bb_len += 7 + (count_rmw_mcr ? 3 : 0);254254+255255+ if (hwe->class == XE_ENGINE_CLASS_RENDER)244256 /*245257 * Big enough to emit all of the context's 3DSTATE via246258 * xe_lrc_emit_hwe_state_instructions()247259 */248248- bb_len += xe_gt_lrc_size(gt, q->hwe->class) / sizeof(u32);260260+ bb_len += xe_gt_lrc_size(gt, hwe->class) / sizeof(u32);249261250250- xe_gt_dbg(gt, "LRC %s WA job: %zu dwords\n", q->hwe->name, bb_len);262262+ xe_gt_dbg(gt, "LRC %s WA job: %zu dwords\n", hwe->name, bb_len);251263252264 bb = xe_bb_new(gt, bb_len, false);253265 if (IS_ERR(bb))···300276 }301277 }302278303303- if (count_rmw) {304304- /* Emit MI_MATH for each RMW reg: 20dw per reg + 7 trailing dw */305305-279279+ if (count_rmw || count_rmw_mcr) {306280 xa_for_each(&sr->xa, idx, entry) {307281 if (entry->reg.masked || entry->clr_bits == ~0)308282 continue;283283+284284+ if (entry->reg.mcr) {285285+ struct xe_reg_mcr reg = { .__reg.raw = entry->reg.raw };286286+ u8 group, instance;287287+288288+ xe_gt_mcr_get_nonterminated_steering(gt, reg, &group, &instance);289289+ *cs++ = MI_LOAD_REGISTER_IMM | MI_LRI_NUM_REGS(1);290290+ *cs++ = CS_MMIO_GROUP_INSTANCE_SELECT(hwe->mmio_base).addr;291291+ *cs++ = SELECTIVE_READ_ADDRESSING |292292+ REG_FIELD_PREP(SELECTIVE_READ_GROUP, group) |293293+ REG_FIELD_PREP(SELECTIVE_READ_INSTANCE, instance);294294+ }309295310296 *cs++ = MI_LOAD_REGISTER_REG | MI_LRR_DST_CS_MMIO;311297 *cs++ = entry->reg.addr;···342308 *cs++ = CS_GPR_REG(0, 0).addr;343309 *cs++ = entry->reg.addr;344310345345- xe_gt_dbg(gt, "REG[%#x] = ~%#x|%#x\n",346346- entry->reg.addr, entry->clr_bits, entry->set_bits);311311+ xe_gt_dbg(gt, "REG[%#x] = ~%#x|%#x%s\n",312312+ entry->reg.addr, entry->clr_bits, entry->set_bits,313313+ entry->reg.mcr ? " (MCR)" : "");347314 }348315349316 /* reset used GPR */···356321 *cs++ = 0;357322 *cs++ = CS_GPR_REG(0, 2).addr;358323 *cs++ = 0;324324+325325+ /* reset steering */326326+ if (count_rmw_mcr) {327327+ *cs++ = MI_LOAD_REGISTER_IMM | MI_LRI_NUM_REGS(1);328328+ *cs++ = CS_MMIO_GROUP_INSTANCE_SELECT(q->hwe->mmio_base).addr;329329+ *cs++ = 0;330330+ }359331 }360332361333 cs = xe_lrc_emit_hwe_state_instructions(q, cs);
+21-9
drivers/gpu/drm/xe/xe_sync.c
···146146147147 if (!signal) {148148 sync->fence = drm_syncobj_fence_get(sync->syncobj);149149- if (XE_IOCTL_DBG(xe, !sync->fence))150150- return -EINVAL;149149+ if (XE_IOCTL_DBG(xe, !sync->fence)) {150150+ err = -EINVAL;151151+ goto free_sync;152152+ }151153 }152154 break;153155···169167170168 if (signal) {171169 sync->chain_fence = dma_fence_chain_alloc();172172- if (!sync->chain_fence)173173- return -ENOMEM;170170+ if (!sync->chain_fence) {171171+ err = -ENOMEM;172172+ goto free_sync;173173+ }174174 } else {175175 sync->fence = drm_syncobj_fence_get(sync->syncobj);176176- if (XE_IOCTL_DBG(xe, !sync->fence))177177- return -EINVAL;176176+ if (XE_IOCTL_DBG(xe, !sync->fence)) {177177+ err = -EINVAL;178178+ goto free_sync;179179+ }178180179181 err = dma_fence_chain_find_seqno(&sync->fence,180182 sync_in.timeline_value);181183 if (err)182182- return err;184184+ goto free_sync;183185 }184186 break;185187···206200 if (XE_IOCTL_DBG(xe, IS_ERR(sync->ufence)))207201 return PTR_ERR(sync->ufence);208202 sync->ufence_chain_fence = dma_fence_chain_alloc();209209- if (!sync->ufence_chain_fence)210210- return -ENOMEM;203203+ if (!sync->ufence_chain_fence) {204204+ err = -ENOMEM;205205+ goto free_sync;206206+ }211207 sync->ufence_syncobj = ufence_syncobj;212208 }213209···224216 sync->timeline_value = sync_in.timeline_value;225217226218 return 0;219219+220220+free_sync:221221+ xe_sync_entry_cleanup(sync);222222+ return err;227223}228224ALLOW_ERROR_INJECTION(xe_sync_entry_parse, ERRNO);229225
+1
drivers/infiniband/Kconfig
···66 depends on INET77 depends on m || IPV6 != m88 depends on !ALPHA99+ select DMA_SHARED_BUFFER910 select IRQ_POLL1011 select DIMLIB1112 help
+13
drivers/infiniband/core/cache.c
···926926 if (err)927927 return err;928928929929+ /*930930+ * Mark the device as ready for GID cache updates. This allows netdev931931+ * event handlers to update the GID cache even before the device is932932+ * fully registered.933933+ */934934+ ib_device_enable_gid_updates(ib_dev);935935+929936 rdma_roce_rescan_device(ib_dev);930937931938 return err;···1644163716451638void ib_cache_cleanup_one(struct ib_device *device)16461639{16401640+ /*16411641+ * Clear the GID updates mark first to prevent event handlers from16421642+ * accessing the device while it's being torn down.16431643+ */16441644+ ib_device_disable_gid_updates(device);16451645+16471646 /* The cleanup function waits for all in-progress workqueue16481647 * elements and cleans up the GID cache. This function should be16491648 * called after the device was removed from the devices list and
+5-1
drivers/infiniband/core/cma.c
···27292729 *to_destroy = NULL;27302730 if (cma_family(id_priv) == AF_IB && !rdma_cap_ib_cm(cma_dev->device, 1))27312731 return 0;27322732+ if (id_priv->restricted_node_type != RDMA_NODE_UNSPECIFIED &&27332733+ id_priv->restricted_node_type != cma_dev->device->node_type)27342734+ return 0;2732273527332736 dev_id_priv =27342737 __rdma_create_id(net, cma_listen_handler, id_priv,···27392736 if (IS_ERR(dev_id_priv))27402737 return PTR_ERR(dev_id_priv);2741273827392739+ dev_id_priv->restricted_node_type = id_priv->restricted_node_type;27422740 dev_id_priv->state = RDMA_CM_ADDR_BOUND;27432741 memcpy(cma_src_addr(dev_id_priv), cma_src_addr(id_priv),27442742 rdma_addr_size(cma_src_addr(id_priv)));···41984194 }4199419542004196 mutex_lock(&lock);42014201- if (id_priv->cma_dev)41974197+ if (READ_ONCE(id_priv->state) != RDMA_CM_IDLE)42024198 ret = -EALREADY;42034199 else42044200 id_priv->restricted_node_type = node_type;
···9393static DEFINE_XARRAY_FLAGS(devices, XA_FLAGS_ALLOC);9494static DECLARE_RWSEM(devices_rwsem);9595#define DEVICE_REGISTERED XA_MARK_19696+#define DEVICE_GID_UPDATES XA_MARK_296979798static u32 highest_client_id;9899#define CLIENT_REGISTERED XA_MARK_1···24132412 unsigned long index;2414241324152414 down_read(&devices_rwsem);24162416- xa_for_each_marked (&devices, index, dev, DEVICE_REGISTERED)24152415+ xa_for_each_marked(&devices, index, dev, DEVICE_GID_UPDATES)24172416 ib_enum_roce_netdev(dev, filter, filter_cookie, cb, cookie);24182417 up_read(&devices_rwsem);24182418+}24192419+24202420+/**24212421+ * ib_device_enable_gid_updates - Mark device as ready for GID cache updates24222422+ * @device: Device to mark24232423+ *24242424+ * Called after GID table is allocated and initialized. After this mark is set,24252425+ * netdevice event handlers can update the device's GID cache. This allows24262426+ * events that arrive during device registration to be processed, avoiding24272427+ * stale GID entries when netdev properties change during the device24282428+ * registration process.24292429+ */24302430+void ib_device_enable_gid_updates(struct ib_device *device)24312431+{24322432+ down_write(&devices_rwsem);24332433+ xa_set_mark(&devices, device->index, DEVICE_GID_UPDATES);24342434+ up_write(&devices_rwsem);24352435+}24362436+24372437+/**24382438+ * ib_device_disable_gid_updates - Clear the GID updates mark24392439+ * @device: Device to unmark24402440+ *24412441+ * Called before GID table cleanup to prevent event handlers from accessing24422442+ * the device while it's being torn down.24432443+ */24442444+void ib_device_disable_gid_updates(struct ib_device *device)24452445+{24462446+ down_write(&devices_rwsem);24472447+ xa_clear_mark(&devices, device->index, DEVICE_GID_UPDATES);24482448+ up_write(&devices_rwsem);24192449}2420245024212451/*
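DEVICE_GID_UPDATES is a second, independent XArray mark, so a device can start accepting GID-cache updates before DEVICE_REGISTERED is set and stop accepting them before teardown begins. A plain-flags userspace analogue of iterating only marked entries:

#include <stdio.h>

#define F_REGISTERED  (1u << 0)          /* like DEVICE_REGISTERED */
#define F_GID_UPDATES (1u << 1)          /* like DEVICE_GID_UPDATES */

struct dev { const char *name; unsigned int flags; };

int main(void)
{
        struct dev devs[] = {
                { "ibdev0", F_REGISTERED | F_GID_UPDATES },
                { "ibdev1", F_GID_UPDATES },  /* still registering */
                { "ibdev2", 0 },              /* being torn down */
        };

        /* like xa_for_each_marked(&devices, ..., DEVICE_GID_UPDATES) */
        for (unsigned int i = 0; i < sizeof(devs) / sizeof(devs[0]); i++)
                if (devs[i].flags & F_GID_UPDATES)
                        printf("update GID cache on %s\n", devs[i].name);
        return 0;
}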
···34743474 int lpi_base;34753475 int nr_lpis;34763476 int nr_ites;34773477+ int id_bits;34773478 int sz;3478347934793480 if (!its_alloc_device_table(its, dev_id))···34863485 /*34873486 * Even if the device wants a single LPI, the ITT must be34883487 * sized as a power of two (and you need at least one bit...).34883488+ * Also honor the ITS's own EID limit.34893489 */34903490+ id_bits = FIELD_GET(GITS_TYPER_IDBITS, its->typer) + 1;34913491+ nvecs = min_t(unsigned int, nvecs, BIT(id_bits));34903492 nr_ites = max(2, nvecs);34913493 sz = nr_ites * (FIELD_GET(GITS_TYPER_ITT_ENTRY_SIZE, its->typer) + 1);34923494 sz = max(sz, ITS_ITT_ALIGN);
···22782278 * change it through the dynamic interface later.22792279 */22802280 dsa_switch_for_each_available_port(dp, ds) {22812281+ /* May be called during unbind when we unoffload a VLAN-aware22822282+ * bridge from port 1 while port 0 was already torn down22832283+ */22842284+ if (!dp->pl)22852285+ continue;22862286+22812287 phylink_replay_link_begin(dp->pl);22822288 mac[dp->index].speed = priv->info->port_speed[SJA1105_SPEED_AUTO];22832289 }···23402334 }2341233523422336 dsa_switch_for_each_available_port(dp, ds)23432343- phylink_replay_link_end(dp->pl);23372337+ if (dp->pl)23382338+ phylink_replay_link_end(dp->pl);2344233923452340 rc = sja1105_reload_cbs(priv);23462341 if (rc < 0)
+7-6
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···62326232 int rc;6233623362346234 set_bit(BNXT_FLTR_FW_DELETED, &fltr->base.state);62356235+ if (!test_bit(BNXT_STATE_OPEN, &bp->state))62366236+ return 0;62376237+62356238 rc = hwrm_req_init(bp, req, HWRM_CFA_NTUPLE_FILTER_FREE);62366239 if (rc)62376240 return rc;···1088210879 struct bnxt_ntuple_filter *ntp_fltr;1088310880 int i;10884108811088510885- if (netif_running(bp->dev)) {1088610886- bnxt_hwrm_vnic_free_one(bp, &rss_ctx->vnic);1088710887- for (i = 0; i < BNXT_MAX_CTX_PER_VNIC; i++) {1088810888- if (vnic->fw_rss_cos_lb_ctx[i] != INVALID_HW_RING_ID)1088910889- bnxt_hwrm_vnic_ctx_free_one(bp, vnic, i);1089010890- }1088210882+ bnxt_hwrm_vnic_free_one(bp, &rss_ctx->vnic);1088310883+ for (i = 0; i < BNXT_MAX_CTX_PER_VNIC; i++) {1088410884+ if (vnic->fw_rss_cos_lb_ctx[i] != INVALID_HW_RING_ID)1088510885+ bnxt_hwrm_vnic_ctx_free_one(bp, vnic, i);1089110886 }1089210887 if (!all)1089310888 return;
···403403 int ret;404404 int ch;405405406406- if (!cpu_is_ixp46x())407407- return -EOPNOTSUPP;408408-409406 if (!netif_running(netdev))410407 return -EINVAL;411408412409 ret = ixp46x_ptp_find(&port->timesync_regs, &port->phc_index);413410 if (ret)414414- return ret;411411+ return -EOPNOTSUPP;415412416413 ch = PORT2CHANNEL(port);417414 regs = port->timesync_regs;
+3
drivers/net/ethernet/xscale/ptp_ixp46x.c
···232232233233int ixp46x_ptp_find(struct ixp46x_ts_regs *__iomem *regs, int *phc_index)234234{235235+ if (!cpu_is_ixp46x())236236+ return -ENODEV;237237+235238 *regs = ixp_clock.regs;236239 *phc_index = ptp_clock_index(ixp_clock.ptp_clock);237240
···7070 peer->tcp.sk_cb.sk_data_ready(sk);7171}72727373+static struct sk_buff *ovpn_tcp_skb_packet(const struct ovpn_peer *peer,7474+ struct sk_buff *orig_skb,7575+ const int pkt_len, const int pkt_off)7676+{7777+ struct sk_buff *ovpn_skb;7878+ int err;7979+8080+ /* create a new skb with only the content of the current packet */8181+ ovpn_skb = netdev_alloc_skb(peer->ovpn->dev, pkt_len);8282+ if (unlikely(!ovpn_skb))8383+ goto err;8484+8585+ skb_copy_header(ovpn_skb, orig_skb);8686+ err = skb_copy_bits(orig_skb, pkt_off, skb_put(ovpn_skb, pkt_len),8787+ pkt_len);8888+ if (unlikely(err)) {8989+ net_warn_ratelimited("%s: skb_copy_bits failed for peer %u\n",9090+ netdev_name(peer->ovpn->dev), peer->id);9191+ kfree_skb(ovpn_skb);9292+ goto err;9393+ }9494+9595+ consume_skb(orig_skb);9696+ return ovpn_skb;9797+err:9898+ kfree_skb(orig_skb);9999+ return NULL;100100+}101101+73102static void ovpn_tcp_rcv(struct strparser *strp, struct sk_buff *skb)74103{75104 struct ovpn_peer *peer = container_of(strp, struct ovpn_peer, tcp.strp);76105 struct strp_msg *msg = strp_msg(skb);7777- size_t pkt_len = msg->full_len - 2;7878- size_t off = msg->offset + 2;106106+ int pkt_len = msg->full_len - 2;79107 u8 opcode;801088181- /* ensure skb->data points to the beginning of the openvpn packet */8282- if (!pskb_pull(skb, off)) {8383- net_warn_ratelimited("%s: packet too small for peer %u\n",8484- netdev_name(peer->ovpn->dev), peer->id);8585- goto err;8686- }8787-8888- /* strparser does not trim the skb for us, therefore we do it now */8989- if (pskb_trim(skb, pkt_len) != 0) {9090- net_warn_ratelimited("%s: trimming skb failed for peer %u\n",9191- netdev_name(peer->ovpn->dev), peer->id);9292- goto err;9393- }9494-9595- /* we need the first 4 bytes of data to be accessible109109+ /* we need at least 4 bytes of data in the packet96110 * to extract the opcode and the key ID later on97111 */9898- if (!pskb_may_pull(skb, OVPN_OPCODE_SIZE)) {112112+ if (unlikely(pkt_len < OVPN_OPCODE_SIZE)) {99113 net_warn_ratelimited("%s: packet too small to fetch opcode for peer %u\n",100114 netdev_name(peer->ovpn->dev), peer->id);101115 goto err;102116 }117117+118118+ /* extract the packet into a new skb */119119+ skb = ovpn_tcp_skb_packet(peer, skb, pkt_len, msg->offset + 2);120120+ if (unlikely(!skb))121121+ goto err;103122104123 /* DATA_V2 packets are handled in kernel, the rest goes to user space */105124 opcode = ovpn_opcode_from_skb(skb, 0);···132113 /* The packet size header must be there when sending the packet133114 * to userspace, therefore we put it back134115 */135135- skb_push(skb, 2);116116+ *(__be16 *)__skb_push(skb, sizeof(u16)) = htons(pkt_len);136117 ovpn_tcp_to_userspace(peer, strp->sk, skb);137118 return;138119 }
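The last hunk pushes the 2-byte big-endian length prefix back onto packets forwarded to user space. A standalone illustration of writing that framing header (the buffer handling is simplified; nothing here is the ovpn API):

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Prepend the 2-byte big-endian length header used to frame the stream;
 * `out` must have room for len + 2 bytes. */
static size_t frame_packet(uint8_t *out, const uint8_t *pkt, uint16_t len)
{
        uint16_t be_len = htons(len);

        memcpy(out, &be_len, sizeof(be_len));
        memcpy(out + sizeof(be_len), pkt, len);
        return sizeof(be_len) + len;
}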
+17-8
drivers/net/phy/phy_device.c
···18661866 goto error;1867186718681868 phy_resume(phydev);18691869- if (!phydev->is_on_sfp_module)18701870- phy_led_triggers_register(phydev);1871186918721870 /**18731871 * If the external phy used by current mac interface is managed by···1979198119801982 phydev->phy_link_change = NULL;19811983 phydev->phylink = NULL;19821982-19831983- if (!phydev->is_on_sfp_module)19841984- phy_led_triggers_unregister(phydev);1985198419861985 if (phydev->mdio.dev.driver)19871986 module_put(phydev->mdio.dev.driver->owner);···37733778 /* Set the state to READY by default */37743779 phydev->state = PHY_READY;3775378037813781+ /* Register the PHY LED triggers */37823782+ if (!phydev->is_on_sfp_module)37833783+ phy_led_triggers_register(phydev);37843784+37763785 /* Get the LEDs from the device tree, and instantiate standard37773786 * LEDs for them.37783787 */37793779- if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev))37883788+ if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev)) {37803789 err = of_phy_leds(phydev);37903790+ if (err)37913791+ goto out;37923792+ }37933793+37943794+ return 0;3781379537823796out:37973797+ if (!phydev->is_on_sfp_module)37983798+ phy_led_triggers_unregister(phydev);37993799+37833800 /* Re-assert the reset signal on error */37843784- if (err)37853785- phy_device_reset(phydev, 1);38013801+ phy_device_reset(phydev, 1);3786380237873803 return err;37883804}···3806380038073801 if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev))38083802 phy_leds_unregister(phydev);38033803+38043804+ if (!phydev->is_on_sfp_module)38053805+ phy_led_triggers_unregister(phydev);3809380638103807 phydev->state = PHY_DOWN;38113808
+1-1
drivers/net/phy/qcom/qca807x.c
···375375 reg = QCA807X_MMD7_LED_FORCE_CTRL(offset);376376 val = phy_read_mmd(priv->phy, MDIO_MMD_AN, reg);377377378378- return FIELD_GET(QCA807X_GPIO_FORCE_MODE_MASK, val);378378+ return !!FIELD_GET(QCA807X_GPIO_FORCE_MODE_MASK, val);379379}380380381381static int qca807x_gpio_set(struct gpio_chip *gc, unsigned int offset, int value)
···951951 goto out;952952953953 /* try to attach to the target device */954954- sdiodev->bus = brcmf_sdio_probe(sdiodev);955955- if (IS_ERR(sdiodev->bus)) {956956- ret = PTR_ERR(sdiodev->bus);954954+ ret = brcmf_sdio_probe(sdiodev);955955+ if (ret)957956 goto out;958958- }957957+959958 brcmf_sdiod_host_fixup(sdiodev->func2->card->host);960959out:961960 if (ret)
···905905 * supported, so we avoid reprogramming the region on every MSI,906906 * specifically unmapping immediately after writel().907907 */908908+ if (ep->msi_iatu_mapped && (ep->msi_msg_addr != msg_addr ||909909+ ep->msi_map_size != map_size)) {910910+ /*911911+ * The host changed the MSI target address or the required912912+ * mapping size changed. Reprogramming the iATU when there are913913+ * operations in flight is unsafe on this controller. However,914914+ * there is no unified way to check if we have operations in915915+ * flight, thus we don't know if we should WARN() or not.916916+ */917917+ dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys);918918+ ep->msi_iatu_mapped = false;919919+ }920920+908921 if (!ep->msi_iatu_mapped) {909922 ret = dw_pcie_ep_map_addr(epc, func_no, 0,910923 ep->msi_mem_phys, msg_addr,···928915 ep->msi_iatu_mapped = true;929916 ep->msi_msg_addr = msg_addr;930917 ep->msi_map_size = map_size;931931- } else if (WARN_ON_ONCE(ep->msi_msg_addr != msg_addr ||932932- ep->msi_map_size != map_size)) {933933- /*934934- * The host changed the MSI target address or the required935935- * mapping size changed. Reprogramming the iATU at runtime is936936- * unsafe on this controller, so bail out instead of trying to937937- * update the existing region.938938- */939939- return -EINVAL;940918 }941919942920 writel(msg_data | (interrupt_num - 1), ep->msi_mem + offset);···10131009 return ret;1014101010151011 writel(msg_data, ep->msi_mem + offset);10121012+10131013+ /* flush posted write before unmap */10141014+ readl(ep->msi_mem + offset);1016101510171016 dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys);10181017
···508508 This driver supports the FP9931/JD9930 voltage regulator chip509509 which is used to provide power to Electronic Paper Displays510510 so it is found in E-Book readers.511511- If HWWON is enabled, it also provides temperature measurement.511511+ If HWMON is enabled, it also provides temperature measurement.512512513513config REGULATOR_LM363X514514 tristate "TI LM363X voltage regulators"
···332332 int i;333333334334 data = devm_kzalloc(&client->dev, sizeof(*data), GFP_KERNEL);335335+ if (!data)336336+ return -ENOMEM;337337+335338 data->regmap = devm_regmap_init_i2c(client, ®map_config);336339 if (IS_ERR(data->regmap))337340 return dev_err_probe(&client->dev, PTR_ERR(data->regmap),
+2
drivers/scsi/lpfc/lpfc_init.c
···1202512025 iounmap(phba->sli4_hba.conf_regs_memmap_p);1202612026 if (phba->sli4_hba.dpp_regs_memmap_p)1202712027 iounmap(phba->sli4_hba.dpp_regs_memmap_p);1202812028+ if (phba->sli4_hba.dpp_regs_memmap_wc_p)1202912029+ iounmap(phba->sli4_hba.dpp_regs_memmap_wc_p);1202812030 break;1202912031 case LPFC_SLI_INTF_IF_TYPE_1:1203012032 break;
+30-6
drivers/scsi/lpfc/lpfc_sli.c
···1597715977 return NULL;1597815978}15979159791598015980+static __maybe_unused void __iomem *1598115981+lpfc_dpp_wc_map(struct lpfc_hba *phba, uint8_t dpp_barset)1598215982+{1598315983+1598415984+ /* DPP region is supposed to cover 64-bit BAR2 */1598515985+ if (dpp_barset != WQ_PCI_BAR_4_AND_5) {1598615986+ lpfc_log_msg(phba, KERN_WARNING, LOG_INIT,1598715987+ "3273 dpp_barset x%x != WQ_PCI_BAR_4_AND_5\n",1598815988+ dpp_barset);1598915989+ return NULL;1599015990+ }1599115991+1599215992+ if (!phba->sli4_hba.dpp_regs_memmap_wc_p) {1599315993+ void __iomem *dpp_map;1599415994+1599515995+ dpp_map = ioremap_wc(phba->pci_bar2_map,1599615996+ pci_resource_len(phba->pcidev,1599715997+ PCI_64BIT_BAR4));1599815998+1599915999+ if (dpp_map)1600016000+ phba->sli4_hba.dpp_regs_memmap_wc_p = dpp_map;1600116001+ }1600216002+1600316003+ return phba->sli4_hba.dpp_regs_memmap_wc_p;1600416004+}1600516005+1598016006/**1598116007 * lpfc_modify_hba_eq_delay - Modify Delay Multiplier on EQs1598216008 * @phba: HBA structure that EQs are on.···1696616940 uint8_t dpp_barset;1696716941 uint32_t dpp_offset;1696816942 uint8_t wq_create_version;1696916969-#ifdef CONFIG_X861697016970- unsigned long pg_addr;1697116971-#endif16972169431697316944 /* sanity check on queue memory */1697416945 if (!wq || !cq)···17151171281715217129#ifdef CONFIG_X861715317130 /* Enable combined writes for DPP aperture */1715417154- pg_addr = (unsigned long)(wq->dpp_regaddr) & PAGE_MASK;1715517155- rc = set_memory_wc(pg_addr, 1);1715617156- if (rc) {1713117131+ bar_memmap_p = lpfc_dpp_wc_map(phba, dpp_barset);1713217132+ if (!bar_memmap_p) {1715717133 lpfc_printf_log(phba, KERN_ERR, LOG_INIT,1715817134 "3272 Cannot setup Combined "1715917135 "Write on WQ[%d] - disable DPP\n",1716017136 wq->queue_id);1716117137 phba->cfg_enable_dpp = 0;1713817138+ } else {1713917139+ wq->dpp_regaddr = bar_memmap_p + dpp_offset;1716217140 }1716317141#else1716417142 phba->cfg_enable_dpp = 0;
+3
drivers/scsi/lpfc/lpfc_sli4.h
···785785 void __iomem *dpp_regs_memmap_p; /* Kernel memory mapped address for786786 * dpp registers787787 */788788+ void __iomem *dpp_regs_memmap_wc_p;/* Kernel memory mapped address for789789+ * dpp registers with write combining790790+ */788791 union {789792 struct {790793 /* IF Type 0, BAR 0 PCI cfg space reg mem map */
···18561856 cmd_request->payload_sz = payload_sz;1857185718581858 /* Invokes the vsc to start an IO */18591859- ret = storvsc_do_io(dev, cmd_request, get_cpu());18601860- put_cpu();18591859+ migrate_disable();18601860+ ret = storvsc_do_io(dev, cmd_request, smp_processor_id());18611861+ migrate_enable();1861186218621863 if (ret)18631864 scsi_dma_unmap(scmnd);
···2424#include <linux/pm_opp.h>2525#include <linux/regulator/consumer.h>2626#include <linux/sched/clock.h>2727+#include <linux/sizes.h>2728#include <linux/iopoll.h>2829#include <scsi/scsi_cmnd.h>2930#include <scsi/scsi_dbg.h>···518517519518 if (hba->mcq_enabled) {520519 struct ufs_hw_queue *hwq = ufshcd_mcq_req_to_hwq(hba, rq);521521-522522- hwq_id = hwq->id;520520+ if (hwq)521521+ hwq_id = hwq->id;523522 } else {524523 doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);525524 }···43904389 spin_unlock_irqrestore(hba->host->host_lock, flags);43914390 mutex_unlock(&hba->uic_cmd_mutex);4392439143934393- /*43944394- * If the h8 exit fails during the runtime resume process, it becomes43954395- * stuck and cannot be recovered through the error handler. To fix43964396- * this, use link recovery instead of the error handler.43974397- */43984398- if (ret && hba->pm_op_in_progress)43994399- ret = ufshcd_link_recovery(hba);44004400-44014392 return ret;44024393}44034394···52425249 hba->dev_info.rpmb_region_size[1] = desc_buf[RPMB_UNIT_DESC_PARAM_REGION1_SIZE];52435250 hba->dev_info.rpmb_region_size[2] = desc_buf[RPMB_UNIT_DESC_PARAM_REGION2_SIZE];52445251 hba->dev_info.rpmb_region_size[3] = desc_buf[RPMB_UNIT_DESC_PARAM_REGION3_SIZE];52525252+52535253+ if (hba->dev_info.wspecversion <= 0x0220) {52545254+ /*52555255+ * These older spec chips have only one RPMB region,52565256+ * sized between 128 kB minimum and 16 MB maximum.52575257+ * No per region size fields are provided (respective52585258+ * REGIONX_SIZE fields always contain zeros), so get52595259+ * it from the logical block count and size fields for52605260+ * compatibility52615261+ *52625262+ * (See JESD220C-2_2 Section 14.1.4.652635263+ * RPMB Unit Descriptor, offset 13h, 4 bytes)52645264+ */52655265+ hba->dev_info.rpmb_region_size[0] =52665266+ (get_unaligned_be64(desc_buf52675267+ + RPMB_UNIT_DESC_PARAM_LOGICAL_BLK_COUNT)52685268+ << desc_buf[RPMB_UNIT_DESC_PARAM_LOGICAL_BLK_SIZE])52695269+ / SZ_128K;52705270+ }52455271 }5246527252475273···5975596359765964 hba->auto_bkops_enabled = false;59775965 trace_ufshcd_auto_bkops_state(hba, "Disabled");59665966+ hba->urgent_bkops_lvl = BKOPS_STATUS_PERF_IMPACT;59785967 hba->is_urgent_bkops_lvl_checked = false;59795968out:59805969 return err;···60796066 * impacted or critical. Handle these device by determining their urgent60806067 * bkops status at runtime.60816068 */60826082- if (curr_status < BKOPS_STATUS_PERF_IMPACT) {60696069+ if ((curr_status > BKOPS_STATUS_NO_OP) && (curr_status < BKOPS_STATUS_PERF_IMPACT)) {60836070 dev_err(hba->dev, "%s: device raised urgent BKOPS exception for bkops status %d\n",60846071 __func__, curr_status);60856072 /* update the current status as the urgent bkops level */···7110709771117098 ret = ufshcd_vops_get_outstanding_cqs(hba, &outstanding_cqs);71127099 if (ret)71137113- outstanding_cqs = (1U << hba->nr_hw_queues) - 1;71007100+ outstanding_cqs = (1ULL << hba->nr_hw_queues) - 1;7114710171157102 /* Exclude the poll queues */71167103 nr_queues = hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL];···1019210179 } else {1019310180 dev_err(hba->dev, "%s: hibern8 exit failed %d\n",1019410181 __func__, ret);1019510195- goto vendor_suspend;1018210182+ /*1018310183+ * If the h8 exit fails during the runtime resume1018410184+ * process, it becomes stuck and cannot be recovered1018510185+ * through the error handler. To fix this, use link1018610186+ * recovery instead of the error handler.1018710187+ */1018810188+ ret = ufshcd_link_recovery(hba);1018910189+ if (ret)1019010190+ goto vendor_suspend;1019610191 }1019710192 } else if (ufshcd_is_link_off(hba)) {1019810193 /*
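In the pre-2.2-spec RPMB fallback above, the descriptor supplies a logical block count plus a power-of-two block-size exponent, and the region size is reported in 128 KiB units. A worked standalone version of that arithmetic (the descriptor values are invented):

#include <stdint.h>
#include <stdio.h>

#define SZ_128K (128 * 1024)

int main(void)
{
        uint64_t blk_count = 1024;        /* logical block count field */
        unsigned int blk_size_shift = 12; /* 2^12 = 4096-byte blocks */

        /* total bytes = count << shift, reported in 128 KiB units */
        uint64_t units = (blk_count << blk_size_shift) / SZ_128K;

        printf("region = %llu units of 128 KiB (%llu MiB)\n",
               (unsigned long long)units,
               (unsigned long long)(units * SZ_128K >> 20));
        return 0;
}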
+2-1
fs/binfmt_elf.c
···4747#include <linux/dax.h>4848#include <linux/uaccess.h>4949#include <uapi/linux/rseq.h>5050+#include <linux/rseq.h>5051#include <asm/param.h>5152#include <asm/page.h>5253···287286 }288287#ifdef CONFIG_RSEQ289288 NEW_AUX_ENT(AT_RSEQ_FEATURE_SIZE, offsetof(struct rseq, end));290290- NEW_AUX_ENT(AT_RSEQ_ALIGN, __alignof__(struct rseq));289289+ NEW_AUX_ENT(AT_RSEQ_ALIGN, rseq_alloc_align());291290#endif292291#undef NEW_AUX_ENT293292 /* AT_NULL is zero; clear the rest too */
···20612061 * @ep: the &struct eventpoll to be currently checked.20622062 * @depth: Current depth of the path being checked.20632063 *20642064- * Return: depth of the subtree, or INT_MAX if we found a loop or went too deep.20642064+ * Return: depth of the subtree, or a value bigger than EP_MAX_NESTS if we found20652065+ * a loop or went too deep.20652066 */20662067static int ep_loop_check_proc(struct eventpoll *ep, int depth)20672068{···20812080 struct eventpoll *ep_tovisit;20822081 ep_tovisit = epi->ffd.file->private_data;20832082 if (ep_tovisit == inserting_into || depth > EP_MAX_NESTS)20842084- result = INT_MAX;20832083+ result = EP_MAX_NESTS + 1;20852084 else20862085 result = max(result, ep_loop_check_proc(ep_tovisit, depth + 1) + 1);20872086 if (result > EP_MAX_NESTS)
+1-1
fs/file_attr.c
···378378 struct path filepath __free(path_put) = {};379379 unsigned int lookup_flags = 0;380380 struct file_attr fattr;381381- struct file_kattr fa;381381+ struct file_kattr fa = { .flags_valid = true }; /* hint only */382382 int error;383383384384 BUILD_BUG_ON(sizeof(struct file_attr) < FILE_ATTR_SIZE_VER0);
+5-4
fs/fs-writeback.c
···198198199199static bool wb_wait_for_completion_cb(struct wb_completion *done)200200{201201+ unsigned long timeout = sysctl_hung_task_timeout_secs;201202 unsigned long waited_secs = (jiffies - done->wait_start) / HZ;202203203204 done->progress_stamp = jiffies;204204- if (waited_secs > sysctl_hung_task_timeout_secs)205205+ if (timeout && (waited_secs > timeout))205206 pr_info("INFO: The task %s:%d has been waiting for writeback "206207 "completion for more than %lu seconds.",207208 current->comm, current->pid, waited_secs);···19551954 .range_end = LLONG_MAX,19561955 };19571956 unsigned long start_time = jiffies;19571957+ unsigned long timeout = sysctl_hung_task_timeout_secs;19581958 long write_chunk;19591959 long total_wrote = 0; /* count both pages and inodes */19601960 unsigned long dirtied_before = jiffies;···20422040 __writeback_single_inode(inode, &wbc);2043204120442042 /* Report progress to inform the hung task detector of the progress. */20452045- if (work->done && work->done->progress_stamp &&20462046- (jiffies - work->done->progress_stamp) > HZ *20472047- sysctl_hung_task_timeout_secs / 2)20432043+ if (work->done && work->done->progress_stamp && timeout &&20442044+ (jiffies - work->done->progress_stamp) > HZ * timeout / 2)20482045 wake_up_all(work->done->waitq);2049204620502047 wbc_detach_inode(&wbc);
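Both writeback hunks snapshot sysctl_hung_task_timeout_secs into a local and treat 0 as "detector disabled"; re-reading a live tunable mid-check could observe two different values. A minimal standalone version of that shape (the tunable is a plain variable here):

#include <stdbool.h>

static volatile unsigned long hung_task_timeout_secs = 120; /* the tunable */

static bool waited_too_long(unsigned long waited_secs)
{
        unsigned long timeout = hung_task_timeout_secs; /* snapshot once */

        return timeout && waited_secs > timeout;        /* 0 == disabled */
}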
+1
fs/iomap/buffered-io.c
···624624 * iomap_readahead - Attempt to read pages from a file.625625 * @ops: The operations vector for the filesystem.626626 * @ctx: The ctx used for issuing readahead.627627+ * @private: Filesystem-specific information passed to iomap_iter.627628 *628629 * This function is for filesystems to call to implement their readahead629630 * address_space operation.
+46
fs/iomap/ioend.c
···6969 return folio_count;7070}71717272+static DEFINE_SPINLOCK(failed_ioend_lock);7373+static LIST_HEAD(failed_ioend_list);7474+7575+static void7676+iomap_fail_ioends(7777+ struct work_struct *work)7878+{7979+ struct iomap_ioend *ioend;8080+ struct list_head tmp;8181+ unsigned long flags;8282+8383+ spin_lock_irqsave(&failed_ioend_lock, flags);8484+ list_replace_init(&failed_ioend_list, &tmp);8585+ spin_unlock_irqrestore(&failed_ioend_lock, flags);8686+8787+ while ((ioend = list_first_entry_or_null(&tmp, struct iomap_ioend,8888+ io_list))) {8989+ list_del_init(&ioend->io_list);9090+ iomap_finish_ioend_buffered(ioend);9191+ cond_resched();9292+ }9393+}9494+9595+static DECLARE_WORK(failed_ioend_work, iomap_fail_ioends);9696+9797+static void iomap_fail_ioend_buffered(struct iomap_ioend *ioend)9898+{9999+ unsigned long flags;100100+101101+ /*102102+ * Bounce I/O errors to a workqueue to avoid nested i_lock acquisitions103103+ * in the fserror code. The caller no longer owns the ioend reference104104+ * after the spinlock drops.105105+ */106106+ spin_lock_irqsave(&failed_ioend_lock, flags);107107+ if (list_empty(&failed_ioend_list))108108+ WARN_ON_ONCE(!schedule_work(&failed_ioend_work));109109+ list_add_tail(&ioend->io_list, &failed_ioend_list);110110+ spin_unlock_irqrestore(&failed_ioend_lock, flags);111111+}112112+72113static void ioend_writeback_end_bio(struct bio *bio)73114{74115 struct iomap_ioend *ioend = iomap_ioend_from_bio(bio);7511676117 ioend->io_error = blk_status_to_errno(bio->bi_status);118118+ if (ioend->io_error) {119119+ iomap_fail_ioend_buffered(ioend);120120+ return;121121+ }122122+77123 iomap_finish_ioend_buffered(ioend);78124}79125
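The new failure path queues bad ioends on a global spinlocked list and lets a work item splice the whole list out before finishing entries without the lock held. A userspace analogue of that splice-and-drain pattern (a singly linked list stands in for list_head):

#include <pthread.h>
#include <stddef.h>

struct node { struct node *next; };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *failed;              /* stands in for failed_ioend_list */

static void fail_one(struct node *n)     /* completion side */
{
        pthread_mutex_lock(&lock);
        n->next = failed;
        failed = n;
        pthread_mutex_unlock(&lock);
}

static void drain(void)                  /* work-item side */
{
        struct node *tmp;

        pthread_mutex_lock(&lock);
        tmp = failed;                    /* splice the list out under the lock */
        failed = NULL;
        pthread_mutex_unlock(&lock);

        while (tmp) {                    /* finish entries without the lock */
                struct node *next = tmp->next;
                /* ... complete and free tmp here ... */
                tmp = next;
        }
}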
···15311531static void *m_start(struct seq_file *m, loff_t *pos)15321532{15331533 struct proc_mounts *p = m->private;15341534+ struct mount *mnt;1534153515351536 down_read(&namespace_sem);1536153715371537- return mnt_find_id_at(p->ns, *pos);15381538+ mnt = mnt_find_id_at(p->ns, *pos);15391539+ if (mnt)15401540+ *pos = mnt->mnt_id_unique;15411541+ return mnt;15381542}1539154315401544static void *m_next(struct seq_file *m, void *v, loff_t *pos)15411545{15421542- struct mount *next = NULL, *mnt = v;15461546+ struct mount *mnt = v;15431547 struct rb_node *node = rb_next(&mnt->mnt_node);1544154815451545- ++*pos;15461549 if (node) {15471547- next = node_to_mount(node);15501550+ struct mount *next = node_to_mount(node);15481551 *pos = next->mnt_id_unique;15521552+ return next;15491553 }15501550- return next;15541554+15551555+ /*15561556+ * No more mounts. Set pos past current mount's ID so that if15571557+ * iteration restarts, mnt_find_id_at() returns NULL.15581558+ */15591559+ *pos = mnt->mnt_id_unique + 1;15601560+ return NULL;15511561}1552156215531563static void m_stop(struct seq_file *m, void *v)···28012791}2802279228032793static void lock_mount_exact(const struct path *path,28042804- struct pinned_mountpoint *mp);27942794+ struct pinned_mountpoint *mp, bool copy_mount,27952795+ unsigned int copy_flags);2805279628062797#define LOCK_MOUNT_MAYBE_BENEATH(mp, path, beneath) \28072798 struct pinned_mountpoint mp __cleanup(unlock_mount) = {}; \···28102799#define LOCK_MOUNT(mp, path) LOCK_MOUNT_MAYBE_BENEATH(mp, (path), false)28112800#define LOCK_MOUNT_EXACT(mp, path) \28122801 struct pinned_mountpoint mp __cleanup(unlock_mount) = {}; \28132813- lock_mount_exact((path), &mp)28022802+ lock_mount_exact((path), &mp, false, 0)28032803+#define LOCK_MOUNT_EXACT_COPY(mp, path, copy_flags) \28042804+ struct pinned_mountpoint mp __cleanup(unlock_mount) = {}; \28052805+ lock_mount_exact((path), &mp, true, (copy_flags))2814280628152807static int graft_tree(struct mount *mnt, const struct pinned_mountpoint *mp)28162808{···30873073 return file;30883074}3089307530903090-DEFINE_FREE(put_empty_mnt_ns, struct mnt_namespace *,30913091- if (!IS_ERR_OR_NULL(_T)) free_mnt_ns(_T))30923092-30933076static struct mnt_namespace *create_new_namespace(struct path *path, unsigned int flags)30943077{30953095- struct mnt_namespace *new_ns __free(put_empty_mnt_ns) = NULL;30963096- struct path to_path __free(path_put) = {};30973078 struct mnt_namespace *ns = current->nsproxy->mnt_ns;30983079 struct user_namespace *user_ns = current_user_ns();30993099- struct mount *new_ns_root;30803080+ struct mnt_namespace *new_ns;30813081+ struct mount *new_ns_root, *old_ns_root;30823082+ struct path to_path;31003083 struct mount *mnt;31013084 unsigned int copy_flags = 0;31023085 bool locked = false;···31053094 if (IS_ERR(new_ns))31063095 return ERR_CAST(new_ns);3107309631083108- scoped_guard(namespace_excl) {31093109- new_ns_root = clone_mnt(ns->root, ns->root->mnt.mnt_root, copy_flags);31103110- if (IS_ERR(new_ns_root))31113111- return ERR_CAST(new_ns_root);30973097+ old_ns_root = ns->root;30983098+ to_path.mnt = &old_ns_root->mnt;30993099+ to_path.dentry = old_ns_root->mnt.mnt_root;3112310031133113- /*31143114- * If the real rootfs had a locked mount on top of it somewhere31153115- * in the stack, lock the new mount tree as well so it can't be31163116- * exposed.31173117- */31183118- mnt = ns->root;31193119- while (mnt->overmount) {31203120- mnt = mnt->overmount;31213121- if (mnt->mnt.mnt_flags & MNT_LOCKED)31223122- locked = true;31233123- 
}31013101+ VFS_WARN_ON_ONCE(old_ns_root->mnt.mnt_sb->s_type != &nullfs_fs_type);31023102+31033103+ LOCK_MOUNT_EXACT_COPY(mp, &to_path, copy_flags);31043104+ if (IS_ERR(mp.parent)) {31053105+ free_mnt_ns(new_ns);31063106+ return ERR_CAST(mp.parent);31073107+ }31083108+ new_ns_root = mp.parent;31093109+31103110+ /*31113111+ * If the real rootfs had a locked mount on top of it somewhere31123112+ * in the stack, lock the new mount tree as well so it can't be31133113+ * exposed.31143114+ */31153115+ mnt = old_ns_root;31163116+ while (mnt->overmount) {31173117+ mnt = mnt->overmount;31183118+ if (mnt->mnt.mnt_flags & MNT_LOCKED)31193119+ locked = true;31243120 }3125312131263122 /*31273127- * We dropped the namespace semaphore so we can actually lock31283128- * the copy for mounting. The copied mount isn't attached to any31293129- * mount namespace and it is thus excluded from any propagation.31303130- * So realistically we're isolated and the mount can't be31313131- * overmounted.31323132- */31333133-31343134- /* Borrow the reference from clone_mnt(). */31353135- to_path.mnt = &new_ns_root->mnt;31363136- to_path.dentry = dget(new_ns_root->mnt.mnt_root);31373137-31383138- /* Now lock for actual mounting. */31393139- LOCK_MOUNT_EXACT(mp, &to_path);31403140- if (unlikely(IS_ERR(mp.parent)))31413141- return ERR_CAST(mp.parent);31423142-31433143- /*31443144- * We don't emulate unshare()ing a mount namespace. We stick to the31453145- * restrictions of creating detached bind-mounts. It has a lot31463146- * saner and simpler semantics.31233123+ * We don't emulate unshare()ing a mount namespace. We stick31243124+ * to the restrictions of creating detached bind-mounts. It31253125+ * has a lot saner and simpler semantics.31473126 */31483127 mnt = __do_loopback(path, flags, copy_flags);31493149- if (IS_ERR(mnt))31503150- return ERR_CAST(mnt);31513151-31523128 scoped_guard(mount_writer) {31293129+ if (IS_ERR(mnt)) {31303130+ emptied_ns = new_ns;31313131+ umount_tree(new_ns_root, 0);31323132+ return ERR_CAST(mnt);31333133+ }31343134+31533135 if (locked)31543136 mnt->mnt.mnt_flags |= MNT_LOCKED;31553137 /*31563156- * Now mount the detached tree on top of the copy of the31573157- * real rootfs we created.31383138+ * now mount the detached tree on top of the copy31393139+ * of the real rootfs we created.31583140 */31593141 attach_mnt(mnt, new_ns_root, mp.mp);31603142 if (user_ns != ns->user_ns)31613143 lock_mnt_tree(new_ns_root);31623144 }3163314531643164- /* Add all mounts to the new namespace. */31653165- for (struct mount *p = new_ns_root; p; p = next_mnt(p, new_ns_root)) {31663166- mnt_add_to_ns(new_ns, p);31463146+ for (mnt = new_ns_root; mnt; mnt = next_mnt(mnt, new_ns_root)) {31473147+ mnt_add_to_ns(new_ns, mnt);31673148 new_ns->nr_mounts++;31683149 }3169315031703170- new_ns->root = real_mount(no_free_ptr(to_path.mnt));31513151+ new_ns->root = new_ns_root;31713152 ns_tree_add_raw(new_ns);31723172- return no_free_ptr(new_ns);31533153+ return new_ns;31733154}3174315531753156static struct file *open_new_namespace(struct path *path, unsigned int flags)···38433840}3844384138453842static void lock_mount_exact(const struct path *path,38463846- struct pinned_mountpoint *mp)38433843+ struct pinned_mountpoint *mp, bool copy_mount,38443844+ unsigned int copy_flags)38473845{38483846 struct dentry *dentry = path->dentry;38493847 int err;38483848+38493849+ /* Assert that inode_lock() locked the correct inode. 
*/38503850+ VFS_WARN_ON_ONCE(copy_mount && !path_mounted(path));3850385138513852 inode_lock(dentry->d_inode);38523853 namespace_lock();38533854 if (unlikely(cant_mount(dentry)))38543855 err = -ENOENT;38553855- else if (path_overmounted(path))38563856+ else if (!copy_mount && path_overmounted(path))38563857 err = -EBUSY;38573858 else38583859 err = get_mountpoint(dentry, mp);···38643857 namespace_unlock();38653858 inode_unlock(dentry->d_inode);38663859 mp->parent = ERR_PTR(err);38673867- } else {38683868- mp->parent = real_mount(path->mnt);38603860+ return;38693861 }38623862+38633863+ if (copy_mount)38643864+ mp->parent = clone_mnt(real_mount(path->mnt), dentry, copy_flags);38653865+ else38663866+ mp->parent = real_mount(path->mnt);38673867+ if (unlikely(IS_ERR(mp->parent)))38683868+ __unlock_mount(mp);38703869}3871387038723871int finish_automount(struct vfsmount *__m, const struct path *path)···5691567856925679 s->mnt = mnt_file->f_path.mnt;56935680 ns = real_mount(s->mnt)->mnt_ns;56815681+ if (IS_ERR(ns))56825682+ return PTR_ERR(ns);56945683 if (!ns)56955684 /*56965685 * We can't set mount point and mnt_ns_id since we don't have a
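The m_start()/m_next() changes at the top of this fs/namespace.c excerpt tighten the seq_file position contract: *pos is resynced to the unique ID of the mount actually returned, and when iteration runs off the end, *pos is bumped past the last ID so a restarted ->start() finds nothing rather than re-emitting the final entry. In generic form (struct entry, find_at_or_after() and entry_after() are hypothetical stand-ins for the mount-table lookups):

#include <linux/seq_file.h>

struct entry {
        loff_t key;                             /* stable, unique, ascending */
};

struct entry *find_at_or_after(loff_t key);     /* stand-in lookups */
struct entry *entry_after(struct entry *e);

static void *demo_start(struct seq_file *m, loff_t *pos)
{
        struct entry *e = find_at_or_after(*pos);

        if (e)
                *pos = e->key;  /* resync pos with what we return */
        return e;
}

static void *demo_next(struct seq_file *m, void *v, loff_t *pos)
{
        struct entry *cur = v, *next = entry_after(cur);

        if (next) {
                *pos = next->key;
                return next;
        }
        /* Past the end: make a restarted ->start() return NULL. */
        *pos = cur->key + 1;
        return NULL;
}

Without that final bump, a reader that consumes exactly up to the last entry and then re-enters ->start() with the stale position would see the final entry twice.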
+4-6
fs/pidfs.c
···608608 struct user_namespace *user_ns;609609610610 user_ns = task_cred_xxx(task, user_ns);611611- if (!ns_ref_get(user_ns))612612- break;613613- ns_common = to_ns_common(user_ns);611611+ if (ns_ref_get(user_ns))612612+ ns_common = to_ns_common(user_ns);614613 }615614#endif616615 break;···619620 struct pid_namespace *pid_ns;620621621622 pid_ns = task_active_pid_ns(task);622622- if (!ns_ref_get(pid_ns))623623- break;624624- ns_common = to_ns_common(pid_ns);623623+ if (ns_ref_get(pid_ns))624624+ ns_common = to_ns_common(pid_ns);625625 }626626#endif627627 break;
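The hunks that follow appear to come from fs/smb/client/connect.c; the per-file header seems to have been lost in extraction, so the attribution is inferred from the function names (cifs_set_cifscreds, cifs_setup_cifs_sb, cifs_mount_get_tcon). The first change retires the hand-maintained CIFSCREDS_DESC_SIZE macro in favour of a desc_sz computed at the allocation site, and turns the sprintf() calls into snprintf() bounded by that size, so a format can truncate but never overrun the buffer. In outline (domain_name stands in for ses->domainName):

        /* "cifs:a:" and "cifs:d:" have the same length; +1 for the NUL. */
        size_t desc_sz = strlen("cifs:a:") + CIFS_MAX_DOMAINNAME_LEN + 1;
        char *desc = kmalloc(desc_sz, GFP_KERNEL);

        if (!desc)
                return -ENOMEM;
        /* Bounded and always NUL-terminated, unlike the old sprintf(). */
        snprintf(desc, desc_sz, "cifs:d:%s", domain_name);

The same hunk also drops the cifs_dbg() line that printed the raw key payload, presumably to keep credential material out of debug logs.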
···2167216721682168#ifdef CONFIG_KEYS2169216921702170-/* strlen("cifs:a:") + CIFS_MAX_DOMAINNAME_LEN + 1 */21712171-#define CIFSCREDS_DESC_SIZE (7 + CIFS_MAX_DOMAINNAME_LEN + 1)21722172-21732170/* Populate username and pw fields from keyring if possible */21742171static int21752172cifs_set_cifscreds(struct smb3_fs_context *ctx, struct cifs_ses *ses)···21742177 int rc = 0;21752178 int is_domain = 0;21762179 const char *delim, *payload;21802180+ size_t desc_sz;21772181 char *desc;21782182 ssize_t len;21792183 struct key *key;···21832185 struct sockaddr_in6 *sa6;21842186 const struct user_key_payload *upayload;2185218721862186- desc = kmalloc(CIFSCREDS_DESC_SIZE, GFP_KERNEL);21882188+ /* "cifs:a:" and "cifs:d:" are the same length; +1 for NUL terminator */21892189+ desc_sz = strlen("cifs:a:") + CIFS_MAX_DOMAINNAME_LEN + 1;21902190+ desc = kmalloc(desc_sz, GFP_KERNEL);21872191 if (!desc)21882192 return -ENOMEM;21892193···21932193 switch (server->dstaddr.ss_family) {21942194 case AF_INET:21952195 sa = (struct sockaddr_in *)&server->dstaddr;21962196- sprintf(desc, "cifs:a:%pI4", &sa->sin_addr.s_addr);21962196+ snprintf(desc, desc_sz, "cifs:a:%pI4", &sa->sin_addr.s_addr);21972197 break;21982198 case AF_INET6:21992199 sa6 = (struct sockaddr_in6 *)&server->dstaddr;22002200- sprintf(desc, "cifs:a:%pI6c", &sa6->sin6_addr.s6_addr);22002200+ snprintf(desc, desc_sz, "cifs:a:%pI6c", &sa6->sin6_addr.s6_addr);22012201 break;22022202 default:22032203 cifs_dbg(FYI, "Bad ss_family (%hu)\n",···22162216 }2217221722182218 /* didn't work, try to find a domain key */22192219- sprintf(desc, "cifs:d:%s", ses->domainName);22192219+ snprintf(desc, desc_sz, "cifs:d:%s", ses->domainName);22202220 cifs_dbg(FYI, "%s: desc=%s\n", __func__, desc);22212221 key = request_key(&key_type_logon, desc, "");22222222 if (IS_ERR(key)) {···22362236 /* find first : in payload */22372237 payload = upayload->data;22382238 delim = strnchr(payload, upayload->datalen, ':');22392239- cifs_dbg(FYI, "payload=%s\n", payload);22402239 if (!delim) {22412240 cifs_dbg(FYI, "Unable to find ':' in payload (datalen=%d)\n",22422241 upayload->datalen);···29142915{29152916 struct cifs_sb_info *old = CIFS_SB(sb);29162917 struct cifs_sb_info *new = mnt_data->cifs_sb;29172917- unsigned int oldflags = old->mnt_cifs_flags & CIFS_MOUNT_MASK;29182918- unsigned int newflags = new->mnt_cifs_flags & CIFS_MOUNT_MASK;29182918+ unsigned int oldflags = cifs_sb_flags(old) & CIFS_MOUNT_MASK;29192919+ unsigned int newflags = cifs_sb_flags(new) & CIFS_MOUNT_MASK;2919292029202921 if ((sb->s_flags & CIFS_MS_MASK) != (mnt_data->flags & CIFS_MS_MASK))29212922 return 0;···29702971 struct smb3_fs_context *ctx = mnt_data->ctx;29712972 struct cifs_sb_info *old = CIFS_SB(sb);29722973 struct cifs_sb_info *new = mnt_data->cifs_sb;29732973- bool old_set = (old->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH) &&29742974+ bool old_set = (cifs_sb_flags(old) & CIFS_MOUNT_USE_PREFIX_PATH) &&29742975 old->prepath;29752975- bool new_set = (new->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH) &&29762976+ bool new_set = (cifs_sb_flags(new) & CIFS_MOUNT_USE_PREFIX_PATH) &&29762977 new->prepath;2977297829782979 if (tcon->origin_fullpath &&···30033004 cifs_sb = CIFS_SB(sb);3004300530053006 /* We do not want to use a superblock that has been shutdown */30063006- if (CIFS_MOUNT_SHUTDOWN & cifs_sb->mnt_cifs_flags) {30073007+ if (cifs_forced_shutdown(cifs_sb)) {30073008 spin_unlock(&cifs_tcp_ses_lock);30083009 return 0;30093010 }···34683469int cifs_setup_cifs_sb(struct cifs_sb_info 
*cifs_sb)34693470{34703471 struct smb3_fs_context *ctx = cifs_sb->ctx;34723472+ unsigned int sbflags;34733473+ int rc = 0;3471347434723475 INIT_DELAYED_WORK(&cifs_sb->prune_tlinks, cifs_prune_tlinks);34733476 INIT_LIST_HEAD(&cifs_sb->tcon_sb_link);···34943493 }34953494 ctx->local_nls = cifs_sb->local_nls;3496349534973497- smb3_update_mnt_flags(cifs_sb);34963496+ sbflags = smb3_update_mnt_flags(cifs_sb);3498349734993498 if (ctx->direct_io)35003499 cifs_dbg(FYI, "mounting share using direct i/o\n");35013500 if (ctx->cache_ro) {35023501 cifs_dbg(VFS, "mounting share with read only caching. Ensure that the share will not be modified while in use.\n");35033503- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_RO_CACHE;35023502+ sbflags |= CIFS_MOUNT_RO_CACHE;35043503 } else if (ctx->cache_rw) {35053504 cifs_dbg(VFS, "mounting share in single client RW caching mode. Ensure that no other systems will be accessing the share.\n");35063506- cifs_sb->mnt_cifs_flags |= (CIFS_MOUNT_RO_CACHE |35073507- CIFS_MOUNT_RW_CACHE);35053505+ sbflags |= CIFS_MOUNT_RO_CACHE | CIFS_MOUNT_RW_CACHE;35083506 }3509350735103508 if ((ctx->cifs_acl) && (ctx->dynperm))···35123512 if (ctx->prepath) {35133513 cifs_sb->prepath = kstrdup(ctx->prepath, GFP_KERNEL);35143514 if (cifs_sb->prepath == NULL)35153515- return -ENOMEM;35163516- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;35153515+ rc = -ENOMEM;35163516+ else35173517+ sbflags |= CIFS_MOUNT_USE_PREFIX_PATH;35173518 }3518351935193519- return 0;35203520+ atomic_set(&cifs_sb->mnt_cifs_flags, sbflags);35213521+ return rc;35203522}3521352335223524/* Release all succeed connections */35233525void cifs_mount_put_conns(struct cifs_mount_ctx *mnt_ctx)35243526{35273527+ struct cifs_sb_info *cifs_sb = mnt_ctx->cifs_sb;35253528 int rc = 0;3526352935273530 if (mnt_ctx->tcon)···35363533 mnt_ctx->ses = NULL;35373534 mnt_ctx->tcon = NULL;35383535 mnt_ctx->server = NULL;35393539- mnt_ctx->cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_POSIX_PATHS;35363536+ atomic_andnot(CIFS_MOUNT_POSIX_PATHS, &cifs_sb->mnt_cifs_flags);35403537 free_xid(mnt_ctx->xid);35413538}35423539···35903587int cifs_mount_get_tcon(struct cifs_mount_ctx *mnt_ctx)35913588{35923589 struct TCP_Server_Info *server;35903590+ struct cifs_tcon *tcon = NULL;35933591 struct cifs_sb_info *cifs_sb;35943592 struct smb3_fs_context *ctx;35953595- struct cifs_tcon *tcon = NULL;35933593+ unsigned int sbflags;35963594 int rc = 0;3597359535983598- if (WARN_ON_ONCE(!mnt_ctx || !mnt_ctx->server || !mnt_ctx->ses || !mnt_ctx->fs_ctx ||35993599- !mnt_ctx->cifs_sb)) {36003600- rc = -EINVAL;36013601- goto out;35963596+ if (WARN_ON_ONCE(!mnt_ctx))35973597+ return -EINVAL;35983598+ if (WARN_ON_ONCE(!mnt_ctx->server || !mnt_ctx->ses ||35993599+ !mnt_ctx->fs_ctx || !mnt_ctx->cifs_sb)) {36003600+ mnt_ctx->tcon = NULL;36013601+ return -EINVAL;36023602 }36033603 server = mnt_ctx->server;36043604 ctx = mnt_ctx->fs_ctx;36053605 cifs_sb = mnt_ctx->cifs_sb;36063606+ sbflags = cifs_sb_flags(cifs_sb);3606360736073608 /* search for existing tcon to this server share */36083609 tcon = cifs_get_tcon(mnt_ctx->ses, ctx);···36213614 * path (i.e., do not remap / and \ and do not map any special characters)36223615 */36233616 if (tcon->posix_extensions) {36243624- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_POSIX_PATHS;36253625- cifs_sb->mnt_cifs_flags &= ~(CIFS_MOUNT_MAP_SFM_CHR |36263626- CIFS_MOUNT_MAP_SPECIAL_CHR);36173617+ sbflags |= CIFS_MOUNT_POSIX_PATHS;36183618+ sbflags &= ~(CIFS_MOUNT_MAP_SFM_CHR |36193619+ CIFS_MOUNT_MAP_SPECIAL_CHR);36273620 }3628362136293622#ifdef 
CONFIG_CIFS_ALLOW_INSECURE_LEGACY···36503643 /* do not care if a following call succeed - informational */36513644 if (!tcon->pipe && server->ops->qfs_tcon) {36523645 server->ops->qfs_tcon(mnt_ctx->xid, tcon, cifs_sb);36533653- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RO_CACHE) {36463646+ if (sbflags & CIFS_MOUNT_RO_CACHE) {36543647 if (tcon->fsDevInfo.DeviceCharacteristics &36553648 cpu_to_le32(FILE_READ_ONLY_DEVICE))36563649 cifs_dbg(VFS, "mounted to read only share\n");36573657- else if ((cifs_sb->mnt_cifs_flags &36583658- CIFS_MOUNT_RW_CACHE) == 0)36503650+ else if (!(sbflags & CIFS_MOUNT_RW_CACHE))36593651 cifs_dbg(VFS, "read only mount of RW share\n");36603652 /* no need to log a RW mount of a typical RW share */36613653 }···36663660 * Inside cifs_fscache_get_super_cookie it checks36673661 * that we do not get super cookie twice.36683662 */36693669- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_FSCACHE)36633663+ if (sbflags & CIFS_MOUNT_FSCACHE)36703664 cifs_fscache_get_super_cookie(tcon);3671366536723666out:36733667 mnt_ctx->tcon = tcon;36683668+ atomic_set(&cifs_sb->mnt_cifs_flags, sbflags);36743669 return rc;36753670}36763671···37903783 cifs_sb, full_path, tcon->Flags & SMB_SHARE_IS_IN_DFS);37913784 if (rc != 0) {37923785 cifs_server_dbg(VFS, "cannot query dirs between root and final path, enabling CIFS_MOUNT_USE_PREFIX_PATH\n");37933793- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;37863786+ atomic_or(CIFS_MOUNT_USE_PREFIX_PATH,37873787+ &cifs_sb->mnt_cifs_flags);37943788 rc = 0;37953789 }37963790 }···38713863 * Force the use of prefix path to support failover on DFS paths that resolve to targets38723864 * that have different prefix paths.38733865 */38743874- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;38663866+ atomic_or(CIFS_MOUNT_USE_PREFIX_PATH, &cifs_sb->mnt_cifs_flags);38753867 kfree(cifs_sb->prepath);38763868 cifs_sb->prepath = ctx->prepath;38773869 ctx->prepath = NULL;···43654357 kuid_t fsuid = current_fsuid();43664358 int err;4367435943684368- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))43604360+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_MULTIUSER))43694361 return cifs_get_tlink(cifs_sb_master_tlink(cifs_sb));4370436243714363 spin_lock(&cifs_sb->tlink_tree_lock);
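A theme running through this and every remaining cifs hunk is the conversion of cifs_sb_info.mnt_cifs_flags from a plain unsigned int into an atomic_t: bare |= and &= ~ read-modify-write sequences, which can lose updates under concurrency, become atomic_or() and atomic_andnot(), and tests go through a cifs_sb_flags() snapshot helper. A minimal sketch of the pattern, assuming the obvious shape for the accessor (the real helper lives in the cifs headers and may differ):

void setup_multiuser(void);     /* stand-in for per-user tlink handling */

static inline unsigned int cifs_sb_flags(struct cifs_sb_info *cifs_sb)
{
        return atomic_read(&cifs_sb->mnt_cifs_flags);
}

static void example_usage(struct cifs_sb_info *cifs_sb)
{
        unsigned int sbflags;

        /* Set and clear bits without losing concurrent updates. */
        atomic_or(CIFS_MOUNT_USE_PREFIX_PATH, &cifs_sb->mnt_cifs_flags);
        atomic_andnot(CIFS_MOUNT_POSIX_PATHS, &cifs_sb->mnt_cifs_flags);

        /* Take one coherent snapshot and decide from it. */
        sbflags = cifs_sb_flags(cifs_sb);
        if (sbflags & CIFS_MOUNT_MULTIUSER)
                setup_multiuser();
}

This presumes struct cifs_sb_info now declares the field as atomic_t mnt_cifs_flags, which the atomic_set()/atomic_or() call sites in these hunks imply.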
+1-1
fs/smb/client/dfs_cache.c
···13331333 * Force the use of prefix path to support failover on DFS paths that resolve to targets13341334 * that have different prefix paths.13351335 */13361336- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;13361336+ atomic_or(CIFS_MOUNT_USE_PREFIX_PATH, &cifs_sb->mnt_cifs_flags);1337133713381338 refresh_tcon_referral(tcon, true);13391339 return 0;
+29-24
fs/smb/client/dir.c
···8282 const char *tree, int tree_len,8383 bool prefix)8484{8585- int dfsplen;8686- int pplen = 0;8787- struct cifs_sb_info *cifs_sb = CIFS_SB(direntry->d_sb);8585+ struct cifs_sb_info *cifs_sb = CIFS_SB(direntry);8686+ unsigned int sbflags = cifs_sb_flags(cifs_sb);8887 char dirsep = CIFS_DIR_SEP(cifs_sb);8888+ int pplen = 0;8989+ int dfsplen;8990 char *s;90919192 if (unlikely(!page))···9796 else9897 dfsplen = 0;9998100100- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH)9999+ if (sbflags & CIFS_MOUNT_USE_PREFIX_PATH)101100 pplen = cifs_sb->prepath ? strlen(cifs_sb->prepath) + 1 : 0;102101103102 s = dentry_path_raw(direntry, page, PATH_MAX);···124123 if (dfsplen) {125124 s -= dfsplen;126125 memcpy(s, tree, dfsplen);127127- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) {126126+ if (sbflags & CIFS_MOUNT_POSIX_PATHS) {128127 int i;129128 for (i = 0; i < dfsplen; i++) {130129 if (s[i] == '\\')···153152static int154153check_name(struct dentry *direntry, struct cifs_tcon *tcon)155154{156156- struct cifs_sb_info *cifs_sb = CIFS_SB(direntry->d_sb);155155+ struct cifs_sb_info *cifs_sb = CIFS_SB(direntry);157156 int i;158157159158 if (unlikely(tcon->fsAttrInfo.MaxPathNameComponentLength &&···161160 le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength)))162161 return -ENAMETOOLONG;163162164164- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS)) {163163+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_POSIX_PATHS)) {165164 for (i = 0; i < direntry->d_name.len; i++) {166165 if (direntry->d_name.name[i] == '\\') {167166 cifs_dbg(FYI, "Invalid file name\n");···182181 int rc = -ENOENT;183182 int create_options = CREATE_NOT_DIR;184183 int desired_access;185185- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);184184+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);186185 struct cifs_tcon *tcon = tlink_tcon(tlink);187186 const char *full_path;188187 void *page = alloc_dentry_path();189188 struct inode *newinode = NULL;189189+ unsigned int sbflags;190190 int disposition;191191 struct TCP_Server_Info *server = tcon->ses->server;192192 struct cifs_open_parms oparms;···367365 * If Open reported that we actually created a file then we now have to368366 * set the mode if possible.369367 */368368+ sbflags = cifs_sb_flags(cifs_sb);370369 if ((tcon->unix_ext) && (*oplock & CIFS_CREATE_ACTION)) {371370 struct cifs_unix_set_info_args args = {372371 .mode = mode,···377374 .device = 0,378375 };379376380380- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) {377377+ if (sbflags & CIFS_MOUNT_SET_UID) {381378 args.uid = current_fsuid();382379 if (inode->i_mode & S_ISGID)383380 args.gid = inode->i_gid;···414411 if (server->ops->set_lease_key)415412 server->ops->set_lease_key(newinode, fid);416413 if ((*oplock & CIFS_CREATE_ACTION) && S_ISREG(newinode->i_mode)) {417417- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM)414414+ if (sbflags & CIFS_MOUNT_DYNPERM)418415 newinode->i_mode = mode;419419- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) {416416+ if (sbflags & CIFS_MOUNT_SET_UID) {420417 newinode->i_uid = current_fsuid();421418 if (inode->i_mode & S_ISGID)422419 newinode->i_gid = inode->i_gid;···461458cifs_atomic_open(struct inode *inode, struct dentry *direntry,462459 struct file *file, unsigned int oflags, umode_t mode)463460{464464- int rc;465465- unsigned int xid;461461+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);462462+ struct cifs_open_info_data buf = {};463463+ struct TCP_Server_Info *server;464464+ struct cifsFileInfo *file_info;465465+ struct cifs_pending_open open;466466+ struct 
cifs_fid fid = {};466467 struct tcon_link *tlink;467468 struct cifs_tcon *tcon;468468- struct TCP_Server_Info *server;469469- struct cifs_fid fid = {};470470- struct cifs_pending_open open;469469+ unsigned int sbflags;470470+ unsigned int xid;471471 __u32 oplock;472472- struct cifsFileInfo *file_info;473473- struct cifs_open_info_data buf = {};472472+ int rc;474473475475- if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb))))474474+ if (unlikely(cifs_forced_shutdown(cifs_sb)))476475 return smb_EIO(smb_eio_trace_forced_shutdown);477476478477 /*···504499 cifs_dbg(FYI, "parent inode = 0x%p name is: %pd and dentry = 0x%p\n",505500 inode, direntry, direntry);506501507507- tlink = cifs_sb_tlink(CIFS_SB(inode->i_sb));502502+ tlink = cifs_sb_tlink(cifs_sb);508503 if (IS_ERR(tlink)) {509504 rc = PTR_ERR(tlink);510505 goto out_free_xid;···541536 goto out;542537 }543538544544- if (file->f_flags & O_DIRECT &&545545- CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) {546546- if (CIFS_SB(inode->i_sb)->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)539539+ sbflags = cifs_sb_flags(cifs_sb);540540+ if ((file->f_flags & O_DIRECT) && (sbflags & CIFS_MOUNT_STRICT_IO)) {541541+ if (sbflags & CIFS_MOUNT_NO_BRL)547542 file->f_op = &cifs_file_direct_nobrl_ops;548543 else549544 file->f_op = &cifs_file_direct_ops;550550- }545545+ }551546552547 file_info = cifs_new_fileinfo(&fid, file, tlink, oplock, buf.symlink_target);553548 if (file_info == NULL) {
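The dir.c hunk also shows CIFS_SB() now taking dentries and inodes directly (CIFS_SB(direntry), CIFS_SB(inode)) where callers previously spelled out ->d_sb or ->i_sb. The macro's new definition is not part of this excerpt; one plausible C11 reconstruction, resting only on the long-standing fact that cifs_sb_info hangs off sb->s_fs_info, is sketched below. Treat it as hypothetical: the real macro may dispatch differently, and it clearly covers more types than shown here (later hunks pass struct cifsInodeInfo * and struct file * as well).

static inline struct cifs_sb_info *__csb_from_sb(struct super_block *sb)
{
        return sb->s_fs_info;
}
static inline struct cifs_sb_info *__csb_from_inode(struct inode *inode)
{
        return __csb_from_sb(inode->i_sb);
}
static inline struct cifs_sb_info *__csb_from_dentry(struct dentry *dentry)
{
        return __csb_from_sb(dentry->d_sb);
}
static inline struct cifs_sb_info *__csb_from_file(struct file *file)
{
        return __csb_from_sb(file_inode(file)->i_sb);
}

#define CIFS_SB(x) _Generic((x),                        \
        struct super_block *:   __csb_from_sb,          \
        struct inode *:         __csb_from_inode,       \
        struct dentry *:        __csb_from_dentry,      \
        struct file *:          __csb_from_file)(x)

_Generic() selects the matching helper at compile time, so CIFS_SB(direntry) and CIFS_SB(inode) both type-check with no casts and no runtime cost.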
+44-46
fs/smb/client/file.c
···270270static int cifs_init_request(struct netfs_io_request *rreq, struct file *file)271271{272272 struct cifs_io_request *req = container_of(rreq, struct cifs_io_request, rreq);273273- struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);273273+ struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode);274274 struct cifsFileInfo *open_file = NULL;275275276276 rreq->rsize = cifs_sb->ctx->rsize;···281281 open_file = file->private_data;282282 rreq->netfs_priv = file->private_data;283283 req->cfile = cifsFileInfo_get(open_file);284284- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)284284+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_RWPIDFORWARD)285285 req->pid = req->cfile->pid;286286 } else if (rreq->origin != NETFS_WRITEBACK) {287287 WARN_ON_ONCE(1);···906906 * close because it may cause a error when we open this file907907 * again and get at least level II oplock.908908 */909909- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO)909909+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_STRICT_IO)910910 set_bit(CIFS_INO_INVALID_MAPPING, &cifsi->flags);911911 cifs_set_oplock_level(cifsi, 0);912912 }···955955int cifs_file_flush(const unsigned int xid, struct inode *inode,956956 struct cifsFileInfo *cfile)957957{958958- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);958958+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);959959 struct cifs_tcon *tcon;960960 int rc;961961962962- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)962962+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOSSYNC)963963 return 0;964964965965 if (cfile && (OPEN_FMODE(cfile->f_flags) & FMODE_WRITE)) {···10151015int cifs_open(struct inode *inode, struct file *file)1016101610171017{10181018+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);10191019+ struct cifs_open_info_data data = {};10201020+ struct cifsFileInfo *cfile = NULL;10211021+ struct TCP_Server_Info *server;10221022+ struct cifs_pending_open open;10231023+ bool posix_open_ok = false;10241024+ struct cifs_fid fid = {};10251025+ struct tcon_link *tlink;10261026+ struct cifs_tcon *tcon;10271027+ const char *full_path;10281028+ unsigned int sbflags;10181029 int rc = -EACCES;10191030 unsigned int xid;10201031 __u32 oplock;10211021- struct cifs_sb_info *cifs_sb;10221022- struct TCP_Server_Info *server;10231023- struct cifs_tcon *tcon;10241024- struct tcon_link *tlink;10251025- struct cifsFileInfo *cfile = NULL;10261032 void *page;10271027- const char *full_path;10281028- bool posix_open_ok = false;10291029- struct cifs_fid fid = {};10301030- struct cifs_pending_open open;10311031- struct cifs_open_info_data data = {};1032103310331034 xid = get_xid();1034103510351035- cifs_sb = CIFS_SB(inode->i_sb);10361036 if (unlikely(cifs_forced_shutdown(cifs_sb))) {10371037 free_xid(xid);10381038 return smb_EIO(smb_eio_trace_forced_shutdown);···10561056 cifs_dbg(FYI, "inode = 0x%p file flags are 0x%x for %s\n",10571057 inode, file->f_flags, full_path);1058105810591059- if (file->f_flags & O_DIRECT &&10601060- cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) {10611061- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)10591059+ sbflags = cifs_sb_flags(cifs_sb);10601060+ if ((file->f_flags & O_DIRECT) && (sbflags & CIFS_MOUNT_STRICT_IO)) {10611061+ if (sbflags & CIFS_MOUNT_NO_BRL)10621062 file->f_op = &cifs_file_direct_nobrl_ops;10631063 else10641064 file->f_op = &cifs_file_direct_ops;···12091209 struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);12101210 int rc = 0;12111211#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY12121212- struct cifs_sb_info *cifs_sb = CIFS_SB(cfile->dentry->d_sb);12121212+ 
struct cifs_sb_info *cifs_sb = CIFS_SB(cinode);12131213#endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */1214121412151215 down_read_nested(&cinode->lock_sem, SINGLE_DEPTH_NESTING);···12221222#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY12231223 if (cap_unix(tcon->ses) &&12241224 (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) &&12251225- ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0))12251225+ ((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOPOSIXBRL) == 0))12261226 rc = cifs_push_posix_locks(cfile);12271227 else12281228#endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */···20112011 struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);20122012 int rc = 0;20132013#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY20142014- struct cifs_sb_info *cifs_sb = CIFS_SB(cfile->dentry->d_sb);20142014+ struct cifs_sb_info *cifs_sb = CIFS_SB(cinode);20152015#endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */2016201620172017 /* we are going to update can_cache_brlcks here - need a write access */···20242024#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY20252025 if (cap_unix(tcon->ses) &&20262026 (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) &&20272027- ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0))20272027+ ((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOPOSIXBRL) == 0))20282028 rc = cifs_push_posix_locks(cfile);20292029 else20302030#endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */···2428242824292429 cifs_read_flock(fl, &type, &lock, &unlock, &wait_flag,24302430 tcon->ses->server);24312431- cifs_sb = CIFS_FILE_SB(file);24312431+ cifs_sb = CIFS_SB(file);2432243224332433 if (cap_unix(tcon->ses) &&24342434 (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) &&24352435- ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0))24352435+ ((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOPOSIXBRL) == 0))24362436 posix_lck = true;2437243724382438 if (!lock && !unlock) {···2455245524562456int cifs_lock(struct file *file, int cmd, struct file_lock *flock)24572457{24582458- int rc, xid;24582458+ struct cifs_sb_info *cifs_sb = CIFS_SB(file);24592459+ struct cifsFileInfo *cfile;24592460 int lock = 0, unlock = 0;24602461 bool wait_flag = false;24612462 bool posix_lck = false;24622462- struct cifs_sb_info *cifs_sb;24632463 struct cifs_tcon *tcon;24642464- struct cifsFileInfo *cfile;24652464 __u32 type;24652465+ int rc, xid;2466246624672467 rc = -EACCES;24682468 xid = get_xid();···2477247724782478 cifs_read_flock(flock, &type, &lock, &unlock, &wait_flag,24792479 tcon->ses->server);24802480- cifs_sb = CIFS_FILE_SB(file);24812480 set_bit(CIFS_INO_CLOSE_ON_LOCK, &CIFS_I(d_inode(cfile->dentry))->flags);2482248124832482 if (cap_unix(tcon->ses) &&24842483 (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) &&24852485- ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0))24842484+ ((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOPOSIXBRL) == 0))24862485 posix_lck = true;24872486 /*24882487 * BB add code here to normalize offset and length to account for···25312532struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,25322533 bool fsuid_only)25332534{25352535+ struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_inode);25342536 struct cifsFileInfo *open_file = NULL;25352535- struct cifs_sb_info *cifs_sb = CIFS_SB(cifs_inode->netfs.inode.i_sb);2536253725372538 /* only filter by fsuid on multiuser mounts */25382538- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))25392539+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_MULTIUSER))25392540 fsuid_only = false;2540254125412542 
spin_lock(&cifs_inode->open_file_lock);···25882589 return rc;25892590 }2590259125912591- cifs_sb = CIFS_SB(cifs_inode->netfs.inode.i_sb);25922592+ cifs_sb = CIFS_SB(cifs_inode);2592259325932594 /* only filter by fsuid on multiuser mounts */25942594- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER))25952595+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_MULTIUSER))25952596 fsuid_only = false;2596259725972598 spin_lock(&cifs_inode->open_file_lock);···27862787 struct TCP_Server_Info *server;27872788 struct cifsFileInfo *smbfile = file->private_data;27882789 struct inode *inode = file_inode(file);27892789- struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);27902790+ struct cifs_sb_info *cifs_sb = CIFS_SB(file);2790279127912792 rc = file_write_and_wait_range(file, start, end);27922793 if (rc) {···28002801 file, datasync);2801280228022803 tcon = tlink_tcon(smbfile->tlink);28032803- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) {28042804+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOSSYNC)) {28042805 server = tcon->ses->server;28052806 if (server->ops->flush == NULL) {28062807 rc = -ENOSYS;···28522853 struct inode *inode = file->f_mapping->host;28532854 struct cifsInodeInfo *cinode = CIFS_I(inode);28542855 struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server;28552855- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);28562856+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);28562857 ssize_t rc;2857285828582859 rc = netfs_start_io_write(inode);···28692870 if (rc <= 0)28702871 goto out;2871287228722872- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) &&28732873+ if ((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOPOSIXBRL) &&28732874 (cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(from),28742875 server->vals->exclusive_lock_type, 0,28752876 NULL, CIFS_WRITE_OP))) {···28922893{28932894 struct inode *inode = file_inode(iocb->ki_filp);28942895 struct cifsInodeInfo *cinode = CIFS_I(inode);28952895- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);28962896+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);28962897 struct cifsFileInfo *cfile = (struct cifsFileInfo *)28972898 iocb->ki_filp->private_data;28982899 struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);···29052906 if (CIFS_CACHE_WRITE(cinode)) {29062907 if (cap_unix(tcon->ses) &&29072908 (CIFS_UNIX_FCNTL_CAP & le64_to_cpu(tcon->fsUnixInfo.Capability)) &&29082908- ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0)) {29092909+ ((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOPOSIXBRL) == 0)) {29092910 written = netfs_file_write_iter(iocb, from);29102911 goto out;29112912 }···29932994{29942995 struct inode *inode = file_inode(iocb->ki_filp);29952996 struct cifsInodeInfo *cinode = CIFS_I(inode);29962996- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);29972997+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);29972998 struct cifsFileInfo *cfile = (struct cifsFileInfo *)29982999 iocb->ki_filp->private_data;29993000 struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);···30103011 if (!CIFS_CACHE_READ(cinode))30113012 return netfs_unbuffered_read_iter(iocb, to);3012301330133013- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOPOSIXBRL) == 0) {30143014+ if ((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NOPOSIXBRL) == 0) {30143015 if (iocb->ki_flags & IOCB_DIRECT)30153016 return netfs_unbuffered_read_iter(iocb, to);30163017 return netfs_buffered_read_iter(iocb, to);···31293130 if (is_inode_writable(cifsInode) ||31303131 ((cifsInode->oplock & CIFS_CACHE_RW_FLG) != 0 && from_readdir)) {31313132 /* This inode is open for write at least once 
*/31323132- struct cifs_sb_info *cifs_sb;31333133+ struct cifs_sb_info *cifs_sb = CIFS_SB(cifsInode);3133313431343134- cifs_sb = CIFS_SB(cifsInode->netfs.inode.i_sb);31353135- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DIRECT_IO) {31353135+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_DIRECT_IO) {31363136 /* since no page cache to corrupt on directio31373137 we can change size safely */31383138 return true;···31793181 server = tcon->ses->server;3180318231813183 scoped_guard(spinlock, &cinode->open_file_lock) {31823182- unsigned int sbflags = cifs_sb->mnt_cifs_flags;31843184+ unsigned int sbflags = cifs_sb_flags(cifs_sb);3183318531843186 server->ops->downgrade_oplock(server, cinode, cfile->oplock_level,31853187 cfile->oplock_epoch, &purge_cache);
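Beyond swapping in the atomic helpers, the file.c conversions consistently hoist a single cifs_sb_flags() read into a local sbflags near the top of each function instead of re-reading the flag word at every test. With an atomic_t each read is an independent load, so two tests against fresh loads can observe different words if another task flips a bit in between; one snapshot keeps a function's decisions internally consistent. Schematically (use_strict_ops()/use_nobrl_ops() are stand-ins for the f_op assignments):

#include <linux/fs.h>

void use_strict_ops(struct file *file);         /* stand-ins */
void use_nobrl_ops(struct file *file);

static void configure_ops(struct cifs_sb_info *cifs_sb, struct file *file)
{
        unsigned int sbflags = cifs_sb_flags(cifs_sb);  /* single load */

        /*
         * Both tests see the same word even if mnt_cifs_flags changes
         * concurrently; calling atomic_read() twice instead could mix
         * STRICT_IO from one epoch with NO_BRL from another.
         */
        if ((file->f_flags & O_DIRECT) && (sbflags & CIFS_MOUNT_STRICT_IO)) {
                if (sbflags & CIFS_MOUNT_NO_BRL)
                        use_nobrl_ops(file);
                else
                        use_strict_ops(file);
        }
}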
+74-75
fs/smb/client/fs_context.c
···20622062 kfree(ctx);20632063}2064206420652065-void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb)20652065+unsigned int smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb)20662066{20672067+ unsigned int sbflags = cifs_sb_flags(cifs_sb);20672068 struct smb3_fs_context *ctx = cifs_sb->ctx;2068206920692070 if (ctx->nodfs)20702070- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_NO_DFS;20712071+ sbflags |= CIFS_MOUNT_NO_DFS;20712072 else20722072- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_NO_DFS;20732073+ sbflags &= ~CIFS_MOUNT_NO_DFS;2073207420742075 if (ctx->noperm)20752075- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_NO_PERM;20762076+ sbflags |= CIFS_MOUNT_NO_PERM;20762077 else20772077- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_NO_PERM;20782078+ sbflags &= ~CIFS_MOUNT_NO_PERM;2078207920792080 if (ctx->setuids)20802080- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_SET_UID;20812081+ sbflags |= CIFS_MOUNT_SET_UID;20812082 else20822082- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SET_UID;20832083+ sbflags &= ~CIFS_MOUNT_SET_UID;2083208420842085 if (ctx->setuidfromacl)20852085- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_UID_FROM_ACL;20862086+ sbflags |= CIFS_MOUNT_UID_FROM_ACL;20862087 else20872087- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_UID_FROM_ACL;20882088+ sbflags &= ~CIFS_MOUNT_UID_FROM_ACL;2088208920892090 if (ctx->server_ino)20902090- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_SERVER_INUM;20912091+ sbflags |= CIFS_MOUNT_SERVER_INUM;20912092 else20922092- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;20932093+ sbflags &= ~CIFS_MOUNT_SERVER_INUM;2093209420942095 if (ctx->remap)20952095- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_MAP_SFM_CHR;20962096+ sbflags |= CIFS_MOUNT_MAP_SFM_CHR;20962097 else20972097- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_MAP_SFM_CHR;20982098+ sbflags &= ~CIFS_MOUNT_MAP_SFM_CHR;2098209920992100 if (ctx->sfu_remap)21002100- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_MAP_SPECIAL_CHR;21012101+ sbflags |= CIFS_MOUNT_MAP_SPECIAL_CHR;21012102 else21022102- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_MAP_SPECIAL_CHR;21032103+ sbflags &= ~CIFS_MOUNT_MAP_SPECIAL_CHR;2103210421042105 if (ctx->no_xattr)21052105- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_NO_XATTR;21062106+ sbflags |= CIFS_MOUNT_NO_XATTR;21062107 else21072107- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_NO_XATTR;21082108+ sbflags &= ~CIFS_MOUNT_NO_XATTR;2108210921092110 if (ctx->sfu_emul)21102110- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_UNX_EMUL;21112111+ sbflags |= CIFS_MOUNT_UNX_EMUL;21112112 else21122112- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_UNX_EMUL;21132113+ sbflags &= ~CIFS_MOUNT_UNX_EMUL;2113211421142115 if (ctx->nobrl)21152115- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_NO_BRL;21162116+ sbflags |= CIFS_MOUNT_NO_BRL;21162117 else21172117- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_NO_BRL;21182118+ sbflags &= ~CIFS_MOUNT_NO_BRL;2118211921192120 if (ctx->nohandlecache)21202120- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_NO_HANDLE_CACHE;21212121+ sbflags |= CIFS_MOUNT_NO_HANDLE_CACHE;21212122 else21222122- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_NO_HANDLE_CACHE;21232123+ sbflags &= ~CIFS_MOUNT_NO_HANDLE_CACHE;2123212421242125 if (ctx->nostrictsync)21252125- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_NOSSYNC;21262126+ sbflags |= CIFS_MOUNT_NOSSYNC;21262127 else21272127- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_NOSSYNC;21282128+ sbflags &= ~CIFS_MOUNT_NOSSYNC;2128212921292130 if (ctx->mand_lock)21302130- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_NOPOSIXBRL;21312131+ sbflags |= CIFS_MOUNT_NOPOSIXBRL;21312132 else21322132- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_NOPOSIXBRL;21332133+ sbflags &= 
~CIFS_MOUNT_NOPOSIXBRL;2133213421342135 if (ctx->rwpidforward)21352135- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_RWPIDFORWARD;21362136+ sbflags |= CIFS_MOUNT_RWPIDFORWARD;21362137 else21372137- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_RWPIDFORWARD;21382138+ sbflags &= ~CIFS_MOUNT_RWPIDFORWARD;2138213921392140 if (ctx->mode_ace)21402140- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_MODE_FROM_SID;21412141+ sbflags |= CIFS_MOUNT_MODE_FROM_SID;21412142 else21422142- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_MODE_FROM_SID;21432143+ sbflags &= ~CIFS_MOUNT_MODE_FROM_SID;2143214421442145 if (ctx->cifs_acl)21452145- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_ACL;21462146+ sbflags |= CIFS_MOUNT_CIFS_ACL;21462147 else21472147- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_CIFS_ACL;21482148+ sbflags &= ~CIFS_MOUNT_CIFS_ACL;2148214921492150 if (ctx->backupuid_specified)21502150- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_BACKUPUID;21512151+ sbflags |= CIFS_MOUNT_CIFS_BACKUPUID;21512152 else21522152- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_CIFS_BACKUPUID;21532153+ sbflags &= ~CIFS_MOUNT_CIFS_BACKUPUID;2153215421542155 if (ctx->backupgid_specified)21552155- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_CIFS_BACKUPGID;21562156+ sbflags |= CIFS_MOUNT_CIFS_BACKUPGID;21562157 else21572157- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_CIFS_BACKUPGID;21582158+ sbflags &= ~CIFS_MOUNT_CIFS_BACKUPGID;2158215921592160 if (ctx->override_uid)21602160- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_OVERR_UID;21612161+ sbflags |= CIFS_MOUNT_OVERR_UID;21612162 else21622162- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_OVERR_UID;21632163+ sbflags &= ~CIFS_MOUNT_OVERR_UID;2163216421642165 if (ctx->override_gid)21652165- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_OVERR_GID;21662166+ sbflags |= CIFS_MOUNT_OVERR_GID;21662167 else21672167- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_OVERR_GID;21682168+ sbflags &= ~CIFS_MOUNT_OVERR_GID;2168216921692170 if (ctx->dynperm)21702170- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_DYNPERM;21712171+ sbflags |= CIFS_MOUNT_DYNPERM;21712172 else21722172- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_DYNPERM;21732173+ sbflags &= ~CIFS_MOUNT_DYNPERM;2173217421742175 if (ctx->fsc)21752175- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_FSCACHE;21762176+ sbflags |= CIFS_MOUNT_FSCACHE;21762177 else21772177- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_FSCACHE;21782178+ sbflags &= ~CIFS_MOUNT_FSCACHE;2178217921792180 if (ctx->multiuser)21802180- cifs_sb->mnt_cifs_flags |= (CIFS_MOUNT_MULTIUSER |21812181- CIFS_MOUNT_NO_PERM);21812181+ sbflags |= CIFS_MOUNT_MULTIUSER | CIFS_MOUNT_NO_PERM;21822182 else21832183- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_MULTIUSER;21832183+ sbflags &= ~CIFS_MOUNT_MULTIUSER;218421842185218521862186 if (ctx->strict_io)21872187- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_STRICT_IO;21872187+ sbflags |= CIFS_MOUNT_STRICT_IO;21882188 else21892189- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_STRICT_IO;21892189+ sbflags &= ~CIFS_MOUNT_STRICT_IO;2190219021912191 if (ctx->direct_io)21922192- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_DIRECT_IO;21922192+ sbflags |= CIFS_MOUNT_DIRECT_IO;21932193 else21942194- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_DIRECT_IO;21942194+ sbflags &= ~CIFS_MOUNT_DIRECT_IO;2195219521962196 if (ctx->mfsymlinks)21972197- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_MF_SYMLINKS;21972197+ sbflags |= CIFS_MOUNT_MF_SYMLINKS;21982198 else21992199- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_MF_SYMLINKS;22002200- if (ctx->mfsymlinks) {22012201- if (ctx->sfu_emul) {22022202- /*22032203- * Our SFU ("Services for Unix") emulation allows now22042204- * creating new and reading 
existing SFU symlinks.22052205- * Older Linux kernel versions were not able to neither22062206- * read existing nor create new SFU symlinks. But22072207- * creating and reading SFU style mknod and FIFOs was22082208- * supported for long time. When "mfsymlinks" and22092209- * "sfu" are both enabled at the same time, it allows22102210- * reading both types of symlinks, but will only create22112211- * them with mfsymlinks format. This allows better22122212- * Apple compatibility, compatibility with older Linux22132213- * kernel clients (probably better for Samba too)22142214- * while still recognizing old Windows style symlinks.22152215- */22162216- cifs_dbg(VFS, "mount options mfsymlinks and sfu both enabled\n");22172217- }22182218- }22192219- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SHUTDOWN;21992199+ sbflags &= ~CIFS_MOUNT_MF_SYMLINKS;2220220022212221- return;22012201+ if (ctx->mfsymlinks && ctx->sfu_emul) {22022202+ /*22032203+ * Our SFU ("Services for Unix") emulation allows now22042204+ * creating new and reading existing SFU symlinks.22052205+ * Older Linux kernel versions were not able to neither22062206+ * read existing nor create new SFU symlinks. But22072207+ * creating and reading SFU style mknod and FIFOs was22082208+ * supported for long time. When "mfsymlinks" and22092209+ * "sfu" are both enabled at the same time, it allows22102210+ * reading both types of symlinks, but will only create22112211+ * them with mfsymlinks format. This allows better22122212+ * Apple compatibility, compatibility with older Linux22132213+ * kernel clients (probably better for Samba too)22142214+ * while still recognizing old Windows style symlinks.22152215+ */22162216+ cifs_dbg(VFS, "mount options mfsymlinks and sfu both enabled\n");22172217+ }22182218+ sbflags &= ~CIFS_MOUNT_SHUTDOWN;22192219+ atomic_set(&cifs_sb->mnt_cifs_flags, sbflags);22202220+ return sbflags;22222221}
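smb3_update_mnt_flags() applies the same discipline on the writer side: all the ctx-driven branches now edit a plain local copy, and the result is published exactly once with atomic_set() and also returned, so callers such as cifs_setup_cifs_sb() in the earlier connect.c hunk can keep refining the snapshot without re-reading. Reduced to a skeleton:

static unsigned int update_flags_skeleton(struct cifs_sb_info *cifs_sb,
                                          const struct smb3_fs_context *ctx)
{
        unsigned int sbflags = cifs_sb_flags(cifs_sb);

        /* ... dozens of conditional edits on the local copy ... */
        if (ctx->nodfs)
                sbflags |= CIFS_MOUNT_NO_DFS;
        else
                sbflags &= ~CIFS_MOUNT_NO_DFS;

        sbflags &= ~CIFS_MOUNT_SHUTDOWN;

        /* Publish once: readers never observe a half-edited word. */
        atomic_set(&cifs_sb->mnt_cifs_flags, sbflags);
        return sbflags;
}

The hunk also deduplicates the mfsymlinks handling: where the old code tested ctx->mfsymlinks twice in a row, the new code folds the warning into a single ctx->mfsymlinks && ctx->sfu_emul check.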
+1-1
fs/smb/client/fs_context.h
···374374 struct smb3_fs_context *ctx);375375int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb,376376 struct cifs_ses *ses);377377-void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);377377+unsigned int smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);378378379379/*380380 * max deferred close timeout (jiffies) - 2^30
+76-70
fs/smb/client/inode.c
···40404141static void cifs_set_ops(struct inode *inode)4242{4343- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);4343+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);4444+ struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);4445 struct netfs_inode *ictx = netfs_inode(inode);4646+ unsigned int sbflags = cifs_sb_flags(cifs_sb);45474648 switch (inode->i_mode & S_IFMT) {4749 case S_IFREG:4850 inode->i_op = &cifs_file_inode_ops;4949- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DIRECT_IO) {5151+ if (sbflags & CIFS_MOUNT_DIRECT_IO) {5052 set_bit(NETFS_ICTX_UNBUFFERED, &ictx->flags);5151- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)5353+ if (sbflags & CIFS_MOUNT_NO_BRL)5254 inode->i_fop = &cifs_file_direct_nobrl_ops;5355 else5456 inode->i_fop = &cifs_file_direct_ops;5555- } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_STRICT_IO) {5656- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)5757+ } else if (sbflags & CIFS_MOUNT_STRICT_IO) {5858+ if (sbflags & CIFS_MOUNT_NO_BRL)5759 inode->i_fop = &cifs_file_strict_nobrl_ops;5860 else5961 inode->i_fop = &cifs_file_strict_ops;6060- } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_BRL)6262+ } else if (sbflags & CIFS_MOUNT_NO_BRL)6163 inode->i_fop = &cifs_file_nobrl_ops;6264 else { /* not direct, send byte range locks */6365 inode->i_fop = &cifs_file_ops;6466 }65676668 /* check if server can support readahead */6767- if (cifs_sb_master_tcon(cifs_sb)->ses->server->max_read <6868- PAGE_SIZE + MAX_CIFS_HDR_SIZE)6969+ if (tcon->ses->server->max_read < PAGE_SIZE + MAX_CIFS_HDR_SIZE)6970 inode->i_data.a_ops = &cifs_addr_ops_smallbuf;7071 else7172 inode->i_data.a_ops = &cifs_addr_ops;···195194 inode->i_gid = fattr->cf_gid;196195197196 /* if dynperm is set, don't clobber existing mode */198198- if (inode_state_read(inode) & I_NEW ||199199- !(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM))197197+ if ((inode_state_read(inode) & I_NEW) ||198198+ !(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_DYNPERM))200199 inode->i_mode = fattr->cf_mode;201200202201 cifs_i->cifsAttrs = fattr->cf_cifsattrs;···249248{250249 struct cifs_sb_info *cifs_sb = CIFS_SB(sb);251250252252- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)253253- return;254254-255255- fattr->cf_uniqueid = iunique(sb, ROOT_I);251251+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_SERVER_INUM))252252+ fattr->cf_uniqueid = iunique(sb, ROOT_I);256253}257254258255/* Fill a cifs_fattr struct with info from FILE_UNIX_BASIC_INFO. 
*/···258259cifs_unix_basic_to_fattr(struct cifs_fattr *fattr, FILE_UNIX_BASIC_INFO *info,259260 struct cifs_sb_info *cifs_sb)260261{262262+ unsigned int sbflags;263263+261264 memset(fattr, 0, sizeof(*fattr));262265 fattr->cf_uniqueid = le64_to_cpu(info->UniqueId);263266 fattr->cf_bytes = le64_to_cpu(info->NumOfBytes);···318317 break;319318 }320319320320+ sbflags = cifs_sb_flags(cifs_sb);321321 fattr->cf_uid = cifs_sb->ctx->linux_uid;322322- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_UID)) {322322+ if (!(sbflags & CIFS_MOUNT_OVERR_UID)) {323323 u64 id = le64_to_cpu(info->Uid);324324 if (id < ((uid_t)-1)) {325325 kuid_t uid = make_kuid(&init_user_ns, id);···330328 }331329332330 fattr->cf_gid = cifs_sb->ctx->linux_gid;333333- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_GID)) {331331+ if (!(sbflags & CIFS_MOUNT_OVERR_GID)) {334332 u64 id = le64_to_cpu(info->Gid);335333 if (id < ((gid_t)-1)) {336334 kgid_t gid = make_kgid(&init_user_ns, id);···384382 *385383 * If file type or uniqueid is different, return error.386384 */387387- if (unlikely((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) &&385385+ if (unlikely((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_SERVER_INUM) &&388386 CIFS_I(*inode)->uniqueid != fattr->cf_uniqueid)) {389387 CIFS_I(*inode)->time = 0; /* force reval */390388 return -ESTALE;···470468 cifs_fill_uniqueid(sb, fattr);471469472470 /* check for Minshall+French symlinks */473473- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) {471471+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_MF_SYMLINKS) {474472 tmprc = check_mf_symlink(xid, tcon, cifs_sb, fattr, full_path);475473 cifs_dbg(FYI, "check_mf_symlink: %d\n", tmprc);476474 }···10831081 else if ((tcon->ses->capabilities &10841082 tcon->ses->server->vals->cap_nt_find) == 0)10851083 info.info_level = SMB_FIND_FILE_INFO_STANDARD;10861086- else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)10841084+ else if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_SERVER_INUM)10871085 info.info_level = SMB_FIND_FILE_ID_FULL_DIR_INFO;10881086 else /* no srvino useful for fallback to some netapp */10891087 info.info_level = SMB_FIND_FILE_DIRECTORY_INFO;···11111109 struct TCP_Server_Info *server = tcon->ses->server;11121110 int rc;1113111111141114- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {11121112+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_SERVER_INUM)) {11151113 if (*inode)11161114 fattr->cf_uniqueid = CIFS_I(*inode)->uniqueid;11171115 else···12651263 struct inode **inode,12661264 const char *full_path)12671265{12681268- struct cifs_open_info_data tmp_data = {};12691269- struct cifs_tcon *tcon;12701270- struct TCP_Server_Info *server;12711271- struct tcon_link *tlink;12721266 struct cifs_sb_info *cifs_sb = CIFS_SB(sb);12671267+ struct cifs_open_info_data tmp_data = {};12731268 void *smb1_backup_rsp_buf = NULL;12741274- int rc = 0;12691269+ struct TCP_Server_Info *server;12701270+ struct cifs_tcon *tcon;12711271+ struct tcon_link *tlink;12721272+ unsigned int sbflags;12751273 int tmprc = 0;12741274+ int rc = 0;1276127512771276 tlink = cifs_sb_tlink(cifs_sb);12781277 if (IS_ERR(tlink))···13731370#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY13741371handle_mnt_opt:13751372#endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */13731373+ sbflags = cifs_sb_flags(cifs_sb);13761374 /* query for SFU type info if supported and needed */13771375 if ((fattr->cf_cifsattrs & ATTR_SYSTEM) &&13781378- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL)) {13761376+ (sbflags & CIFS_MOUNT_UNX_EMUL)) {13791377 tmprc = cifs_sfu_type(fattr, full_path, cifs_sb, 
xid);13801378 if (tmprc)13811379 cifs_dbg(FYI, "cifs_sfu_type failed: %d\n", tmprc);13821380 }1383138113841382 /* fill in 0777 bits from ACL */13851385- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID) {13831383+ if (sbflags & CIFS_MOUNT_MODE_FROM_SID) {13861384 rc = cifs_acl_to_fattr(cifs_sb, fattr, *inode,13871385 true, full_path, fid);13881386 if (rc == -EREMOTE)···13931389 __func__, rc);13941390 goto out;13951391 }13961396- } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) {13921392+ } else if (sbflags & CIFS_MOUNT_CIFS_ACL) {13971393 rc = cifs_acl_to_fattr(cifs_sb, fattr, *inode,13981394 false, full_path, fid);13991395 if (rc == -EREMOTE)···14031399 __func__, rc);14041400 goto out;14051401 }14061406- } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL)14021402+ } else if (sbflags & CIFS_MOUNT_UNX_EMUL)14071403 /* fill in remaining high mode bits e.g. SUID, VTX */14081404 cifs_sfu_mode(fattr, full_path, cifs_sb, xid);14091405 else if (!(tcon->posix_extensions))···141314091414141014151411 /* check for Minshall+French symlinks */14161416- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) {14121412+ if (sbflags & CIFS_MOUNT_MF_SYMLINKS) {14171413 tmprc = check_mf_symlink(xid, tcon, cifs_sb, fattr, full_path);14181414 cifs_dbg(FYI, "check_mf_symlink: %d\n", tmprc);14191415 }···15131509 * 3. Tweak fattr based on mount options15141510 */15151511 /* check for Minshall+French symlinks */15161516- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) {15121512+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_MF_SYMLINKS) {15171513 tmprc = check_mf_symlink(xid, tcon, cifs_sb, fattr, full_path);15181514 cifs_dbg(FYI, "check_mf_symlink: %d\n", tmprc);15191515 }···16641660 int len;16651661 int rc;1666166216671667- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_USE_PREFIX_PATH)16631663+ if ((cifs_sb_flags(cifs_sb) & CIFS_MOUNT_USE_PREFIX_PATH)16681664 && cifs_sb->prepath) {16691665 len = strlen(cifs_sb->prepath);16701666 path = kzalloc(len + 2 /* leading sep + null */, GFP_KERNEL);···21022098 const char *full_path, struct cifs_sb_info *cifs_sb,21032099 struct cifs_tcon *tcon, const unsigned int xid)21042100{21052105- int rc = 0;21062101 struct inode *inode = NULL;21022102+ unsigned int sbflags;21032103+ int rc = 0;2107210421082105 if (tcon->posix_extensions) {21092106 rc = smb311_posix_get_inode_info(&inode, full_path,···21442139 if (parent->i_mode & S_ISGID)21452140 mode |= S_ISGID;2146214121422142+ sbflags = cifs_sb_flags(cifs_sb);21472143#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY21482144 if (tcon->unix_ext) {21492145 struct cifs_unix_set_info_args args = {···21542148 .mtime = NO_CHANGE_64,21552149 .device = 0,21562150 };21572157- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) {21512151+ if (sbflags & CIFS_MOUNT_SET_UID) {21582152 args.uid = current_fsuid();21592153 if (parent->i_mode & S_ISGID)21602154 args.gid = parent->i_gid;···21722166 {21732167#endif /* CONFIG_CIFS_ALLOW_INSECURE_LEGACY */21742168 struct TCP_Server_Info *server = tcon->ses->server;21752175- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) &&21692169+ if (!(sbflags & CIFS_MOUNT_CIFS_ACL) &&21762170 (mode & S_IWUGO) == 0 && server->ops->mkdir_setinfo)21772171 server->ops->mkdir_setinfo(inode, full_path, cifs_sb,21782172 tcon, xid);21792179- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM)21732173+ if (sbflags & CIFS_MOUNT_DYNPERM)21802174 inode->i_mode = (mode | S_IFDIR);2181217521822182- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) {21762176+ if (sbflags & CIFS_MOUNT_SET_UID) {21832177 inode->i_uid = 
current_fsuid();21842178 if (inode->i_mode & S_ISGID)21852179 inode->i_gid = parent->i_gid;···26922686{26932687 struct inode *inode = d_inode(dentry);26942688 struct cifsInodeInfo *cifs_i = CIFS_I(inode);26952695- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);26892689+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);26962690 struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);26972691 struct cached_fid *cfid = NULL;26982692···27332727 }2734272827352729 /* hardlinked files w/ noserverino get "special" treatment */27362736- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) &&27302730+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_SERVER_INUM) &&27372731 S_ISREG(inode->i_mode) && inode->i_nlink != 1)27382732 return true;27392733···27582752int27592753cifs_revalidate_mapping(struct inode *inode)27602754{27612761- int rc;27622755 struct cifsInodeInfo *cifs_inode = CIFS_I(inode);27562756+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);27632757 unsigned long *flags = &cifs_inode->flags;27642764- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);27582758+ int rc;2765275927662760 /* swapfiles are not supposed to be shared */27672761 if (IS_SWAPFILE(inode))···2774276827752769 if (test_and_clear_bit(CIFS_INO_INVALID_MAPPING, flags)) {27762770 /* for cache=singleclient, do not invalidate */27772777- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RW_CACHE)27712771+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_RW_CACHE)27782772 goto skip_invalidate;2779277327802774 cifs_inode->netfs.zero_point = cifs_inode->netfs.remote_i_size;···28982892int cifs_getattr(struct mnt_idmap *idmap, const struct path *path,28992893 struct kstat *stat, u32 request_mask, unsigned int flags)29002894{29012901- struct dentry *dentry = path->dentry;29022902- struct cifs_sb_info *cifs_sb = CIFS_SB(dentry->d_sb);28952895+ struct cifs_sb_info *cifs_sb = CIFS_SB(path->dentry);29032896 struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);28972897+ struct dentry *dentry = path->dentry;29042898 struct inode *inode = d_inode(dentry);28992899+ unsigned int sbflags;29052900 int rc;2906290129072902 if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb))))···29592952 * enabled, and the admin hasn't overridden them, set the ownership29602953 * to the fsuid/fsgid of the current process.29612954 */29622962- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MULTIUSER) &&29632963- !(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) &&29552955+ sbflags = cifs_sb_flags(cifs_sb);29562956+ if ((sbflags & CIFS_MOUNT_MULTIUSER) &&29572957+ !(sbflags & CIFS_MOUNT_CIFS_ACL) &&29642958 !tcon->unix_ext) {29652965- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_UID))29592959+ if (!(sbflags & CIFS_MOUNT_OVERR_UID))29662960 stat->uid = current_fsuid();29672967- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_GID))29612961+ if (!(sbflags & CIFS_MOUNT_OVERR_GID))29682962 stat->gid = current_fsgid();29692963 }29702964 return 0;···31103102 void *page = alloc_dentry_path();31113103 struct inode *inode = d_inode(direntry);31123104 struct cifsInodeInfo *cifsInode = CIFS_I(inode);31133113- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);31053105+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);31143106 struct tcon_link *tlink;31153107 struct cifs_tcon *pTcon;31163108 struct cifs_unix_set_info_args *args = NULL;···3121311331223114 xid = get_xid();3123311531243124- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_PERM)31163116+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NO_PERM)31253117 attrs->ia_valid |= ATTR_FORCE;3126311831273119 rc = setattr_prepare(&nop_mnt_idmap, direntry, 
attrs);···32743266static int32753267cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)32763268{32773277- unsigned int xid;32693269+ struct inode *inode = d_inode(direntry);32703270+ struct cifsInodeInfo *cifsInode = CIFS_I(inode);32713271+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);32723272+ unsigned int sbflags = cifs_sb_flags(cifs_sb);32733273+ struct cifsFileInfo *cfile = NULL;32743274+ void *page = alloc_dentry_path();32753275+ __u64 mode = NO_CHANGE_64;32783276 kuid_t uid = INVALID_UID;32793277 kgid_t gid = INVALID_GID;32803280- struct inode *inode = d_inode(direntry);32813281- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);32823282- struct cifsInodeInfo *cifsInode = CIFS_I(inode);32833283- struct cifsFileInfo *cfile = NULL;32843278 const char *full_path;32853285- void *page = alloc_dentry_path();32863286- int rc = -EACCES;32873279 __u32 dosattr = 0;32883288- __u64 mode = NO_CHANGE_64;32893289- bool posix = cifs_sb_master_tcon(cifs_sb)->posix_extensions;32803280+ int rc = -EACCES;32813281+ unsigned int xid;3290328232913283 xid = get_xid();3292328432933285 cifs_dbg(FYI, "setattr on file %pd attrs->ia_valid 0x%x\n",32943286 direntry, attrs->ia_valid);3295328732963296- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_PERM)32883288+ if (sbflags & CIFS_MOUNT_NO_PERM)32973289 attrs->ia_valid |= ATTR_FORCE;3298329032993291 rc = setattr_prepare(&nop_mnt_idmap, direntry, attrs);···33543346 if (attrs->ia_valid & ATTR_GID)33553347 gid = attrs->ia_gid;3356334833573357- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) ||33583358- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID)) {33493349+ if (sbflags & (CIFS_MOUNT_CIFS_ACL | CIFS_MOUNT_MODE_FROM_SID)) {33593350 if (uid_valid(uid) || gid_valid(gid)) {33603351 mode = NO_CHANGE_64;33613352 rc = id_mode_to_cifs_acl(inode, full_path, &mode,···33653358 goto cifs_setattr_exit;33663359 }33673360 }33683368- } else33693369- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID))33613361+ } else if (!(sbflags & CIFS_MOUNT_SET_UID)) {33703362 attrs->ia_valid &= ~(ATTR_UID | ATTR_GID);33633363+ }3371336433723365 /* skip mode change if it's just for clearing setuid/setgid */33733366 if (attrs->ia_valid & (ATTR_KILL_SUID|ATTR_KILL_SGID))···33763369 if (attrs->ia_valid & ATTR_MODE) {33773370 mode = attrs->ia_mode;33783371 rc = 0;33793379- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) ||33803380- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID) ||33813381- posix) {33723372+ if ((sbflags & (CIFS_MOUNT_CIFS_ACL | CIFS_MOUNT_MODE_FROM_SID)) ||33733373+ cifs_sb_master_tcon(cifs_sb)->posix_extensions) {33823374 rc = id_mode_to_cifs_acl(inode, full_path, &mode,33833375 INVALID_UID, INVALID_GID);33843376 if (rc) {···33993393 dosattr = cifsInode->cifsAttrs | ATTR_READONLY;3400339434013395 /* fix up mode if we're not using dynperm */34023402- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM) == 0)33963396+ if ((sbflags & CIFS_MOUNT_DYNPERM) == 0)34033397 attrs->ia_mode = inode->i_mode & ~S_IWUGO;34043398 } else if ((mode & S_IWUGO) &&34053399 (cifsInode->cifsAttrs & ATTR_READONLY)) {···34103404 dosattr |= ATTR_NORMAL;3411340534123406 /* reset local inode permissions to normal */34133413- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM)) {34073407+ if (!(sbflags & CIFS_MOUNT_DYNPERM)) {34143408 attrs->ia_mode &= ~(S_IALLUGO);34153409 if (S_ISDIR(inode->i_mode))34163410 attrs->ia_mode |=···34193413 attrs->ia_mode |=34203414 cifs_sb->ctx->file_mode;34213415 }34223422- } else if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DYNPERM)) {34163416+ } 
else if (!(sbflags & CIFS_MOUNT_DYNPERM)) {34233417 /* ignore mode change - ATTR_READONLY hasn't changed */34243418 attrs->ia_valid &= ~ATTR_MODE;34253419 }
+1-1
fs/smb/client/ioctl.c
···216216 */217217 case CIFS_GOING_FLAGS_LOGFLUSH:218218 case CIFS_GOING_FLAGS_NOLOGFLUSH:219219- sbi->mnt_cifs_flags |= CIFS_MOUNT_SHUTDOWN;219219+ atomic_or(CIFS_MOUNT_SHUTDOWN, &sbi->mnt_cifs_flags);220220 goto shutdown_good;221221 default:222222 rc = -EINVAL;
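The CIFS hunks above and below all follow the same conversion: mnt_cifs_flags becomes an atomic_t, readers take a single snapshot through a cifs_sb_flags() accessor instead of re-reading the field at every test, and writers switch from plain |= / &= to atomic RMW operations. The accessor itself is not shown in this digest; presumably it is little more than the following sketch (the helper body is an assumption inferred from the call sites):

static inline unsigned int cifs_sb_flags(struct cifs_sb_info *cifs_sb)
{
	/* one coherent snapshot of the mount flags */
	return (unsigned int)atomic_read(&cifs_sb->mnt_cifs_flags);
}

/* readers: snapshot once, test many times */
unsigned int sbflags = cifs_sb_flags(cifs_sb);

if (sbflags & (CIFS_MOUNT_CIFS_ACL | CIFS_MOUNT_MODE_FROM_SID))
	/* ... */;

/* writers: lock-free set/clear instead of a racy |= or &= ~ */
atomic_or(CIFS_MOUNT_SHUTDOWN, &cifs_sb->mnt_cifs_flags);
atomic_andnot(CIFS_MOUNT_SERVER_INUM, &cifs_sb->mnt_cifs_flags);

The snapshot also means a function makes one coherent set of decisions even if another thread (e.g. cifs_autodisable_serverino()) clears a bit halfway through.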
+8-6
fs/smb/client/link.c
···544544cifs_symlink(struct mnt_idmap *idmap, struct inode *inode,545545 struct dentry *direntry, const char *symname)546546{547547- int rc = -EOPNOTSUPP;548548- unsigned int xid;549549- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);547547+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);548548+ struct inode *newinode = NULL;550549 struct tcon_link *tlink;551550 struct cifs_tcon *pTcon;552551 const char *full_path;552552+ int rc = -EOPNOTSUPP;553553+ unsigned int sbflags;554554+ unsigned int xid;553555 void *page;554554- struct inode *newinode = NULL;555556556557 if (unlikely(cifs_forced_shutdown(cifs_sb)))557558 return smb_EIO(smb_eio_trace_forced_shutdown);···581580 cifs_dbg(FYI, "symname is %s\n", symname);582581583582 /* BB what if DFS and this volume is on different share? BB */583583+ sbflags = cifs_sb_flags(cifs_sb);584584 rc = -EOPNOTSUPP;585585 switch (cifs_symlink_type(cifs_sb)) {586586 case CIFS_SYMLINK_TYPE_UNIX:···596594 break;597595598596 case CIFS_SYMLINK_TYPE_MFSYMLINKS:599599- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) {597597+ if (sbflags & CIFS_MOUNT_MF_SYMLINKS) {600598 rc = create_mf_symlink(xid, pTcon, cifs_sb,601599 full_path, symname);602600 }603601 break;604602605603 case CIFS_SYMLINK_TYPE_SFU:606606- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL) {604604+ if (sbflags & CIFS_MOUNT_UNX_EMUL) {607605 rc = __cifs_sfu_make_node(xid, inode, direntry, pTcon,608606 full_path, S_IFLNK,609607 0, symname);
+10-6
fs/smb/client/misc.c
···275275void276276cifs_autodisable_serverino(struct cifs_sb_info *cifs_sb)277277{278278- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) {278278+ unsigned int sbflags = cifs_sb_flags(cifs_sb);279279+280280+ if (sbflags & CIFS_MOUNT_SERVER_INUM) {279281 struct cifs_tcon *tcon = NULL;280282281283 if (cifs_sb->master_tlink)282284 tcon = cifs_sb_master_tcon(cifs_sb);283285284284- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;286286+ atomic_andnot(CIFS_MOUNT_SERVER_INUM, &cifs_sb->mnt_cifs_flags);285287 cifs_sb->mnt_cifs_serverino_autodisabled = true;286288 cifs_dbg(VFS, "Autodisabling the use of server inode numbers on %s\n",287289 tcon ? tcon->tree_name : "new server");···384382bool385383backup_cred(struct cifs_sb_info *cifs_sb)386384{387387- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_BACKUPUID) {385385+ unsigned int sbflags = cifs_sb_flags(cifs_sb);386386+387387+ if (sbflags & CIFS_MOUNT_CIFS_BACKUPUID) {388388 if (uid_eq(cifs_sb->ctx->backupuid, current_fsuid()))389389 return true;390390 }391391- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_BACKUPGID) {391391+ if (sbflags & CIFS_MOUNT_CIFS_BACKUPGID) {392392 if (in_group_p(cifs_sb->ctx->backupgid))393393 return true;394394 }···959955 convert_delimiter(cifs_sb->prepath, CIFS_DIR_SEP(cifs_sb));960956 }961957962962- cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_USE_PREFIX_PATH;958958+ atomic_or(CIFS_MOUNT_USE_PREFIX_PATH, &cifs_sb->mnt_cifs_flags);963959 return 0;964960}965961···988984 * look up or tcon is not DFS.989985 */990986 if (strlen(full_path) < 2 || !cifs_sb ||991991- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS) ||987987+ (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NO_DFS) ||992988 !is_tcon_dfs(tcon))993989 return 0;994990
+21-18
fs/smb/client/readdir.c
···121121 * want to clobber the existing one with the one that122122 * the readdir code created.123123 */124124- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM))124124+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_SERVER_INUM))125125 fattr->cf_uniqueid = CIFS_I(inode)->uniqueid;126126127127 /*···177177 struct cifs_open_info_data data = {178178 .reparse = { .tag = fattr->cf_cifstag, },179179 };180180+ unsigned int sbflags;180181181182 fattr->cf_uid = cifs_sb->ctx->linux_uid;182183 fattr->cf_gid = cifs_sb->ctx->linux_gid;···216215 * may look wrong since the inodes may not have timed out by the time217216 * "ls" does a stat() call on them.218217 */219219- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) ||220220- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID))218218+ sbflags = cifs_sb_flags(cifs_sb);219219+ if (sbflags & (CIFS_MOUNT_CIFS_ACL | CIFS_MOUNT_MODE_FROM_SID))221220 fattr->cf_flags |= CIFS_FATTR_NEED_REVAL;222221223223- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL &&224224- fattr->cf_cifsattrs & ATTR_SYSTEM) {222222+ if ((sbflags & CIFS_MOUNT_UNX_EMUL) &&223223+ (fattr->cf_cifsattrs & ATTR_SYSTEM)) {225224 if (fattr->cf_eof == 0) {226225 fattr->cf_mode &= ~S_IFMT;227226 fattr->cf_mode |= S_IFIFO;···346345_initiate_cifs_search(const unsigned int xid, struct file *file,347346 const char *full_path)348347{348348+ struct cifs_sb_info *cifs_sb = CIFS_SB(file);349349+ struct tcon_link *tlink = NULL;350350+ struct TCP_Server_Info *server;351351+ struct cifsFileInfo *cifsFile;352352+ struct cifs_tcon *tcon;353353+ unsigned int sbflags;349354 __u16 search_flags;350355 int rc = 0;351351- struct cifsFileInfo *cifsFile;352352- struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);353353- struct tcon_link *tlink = NULL;354354- struct cifs_tcon *tcon;355355- struct TCP_Server_Info *server;356356357357 if (file->private_data == NULL) {358358 tlink = cifs_sb_tlink(cifs_sb);···387385 cifs_dbg(FYI, "Full path: %s start at: %lld\n", full_path, file->f_pos);388386389387ffirst_retry:388388+ sbflags = cifs_sb_flags(cifs_sb);390389 /* test for Unix extensions */391390 /* but now check for them on the share/mount not on the SMB session */392391 /* if (cap_unix(tcon->ses) { */···398395 else if ((tcon->ses->capabilities &399396 tcon->ses->server->vals->cap_nt_find) == 0) {400397 cifsFile->srch_inf.info_level = SMB_FIND_FILE_INFO_STANDARD;401401- } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) {398398+ } else if (sbflags & CIFS_MOUNT_SERVER_INUM) {402399 cifsFile->srch_inf.info_level = SMB_FIND_FILE_ID_FULL_DIR_INFO;403400 } else /* not srvinos - BB fixme add check for backlevel? 
*/ {404401 cifsFile->srch_inf.info_level = SMB_FIND_FILE_FULL_DIRECTORY_INFO;···414411415412 if (rc == 0) {416413 cifsFile->invalidHandle = false;417417- } else if ((rc == -EOPNOTSUPP) &&418418- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {414414+ } else if (rc == -EOPNOTSUPP && (sbflags & CIFS_MOUNT_SERVER_INUM)) {419415 cifs_autodisable_serverino(cifs_sb);420416 goto ffirst_retry;421417 }···692690 loff_t first_entry_in_buffer;693691 loff_t index_to_find = pos;694692 struct cifsFileInfo *cfile = file->private_data;695695- struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);693693+ struct cifs_sb_info *cifs_sb = CIFS_SB(file);696694 struct TCP_Server_Info *server = tcon->ses->server;697695 /* check if index in the buffer */698696···957955 struct cifs_sb_info *cifs_sb = CIFS_SB(sb);958956 struct cifs_dirent de = { NULL, };959957 struct cifs_fattr fattr;958958+ unsigned int sbflags;960959 struct qstr name;961960 int rc = 0;962961···10221019 break;10231020 }1024102110251025- if (de.ino && (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {10221022+ sbflags = cifs_sb_flags(cifs_sb);10231023+ if (de.ino && (sbflags & CIFS_MOUNT_SERVER_INUM)) {10261024 fattr.cf_uniqueid = de.ino;10271025 } else {10281026 fattr.cf_uniqueid = iunique(sb, ROOT_I);10291027 cifs_autodisable_serverino(cifs_sb);10301028 }1031102910321032- if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MF_SYMLINKS) &&10331033- couldbe_mf_symlink(&fattr))10301030+ if ((sbflags & CIFS_MOUNT_MF_SYMLINKS) && couldbe_mf_symlink(&fattr))10341031 /*10351032 * trying to get the type and mode can be slow,10361033 * so just call those regular files for now, and mark···10611058 const char *full_path;10621059 void *page = alloc_dentry_path();10631060 struct cached_fid *cfid = NULL;10641064- struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);10611061+ struct cifs_sb_info *cifs_sb = CIFS_SB(file);1065106210661063 xid = get_xid();10671064
+15-14
fs/smb/client/reparse.c
···5555 const char *full_path, const char *symname)5656{5757 struct reparse_symlink_data_buffer *buf = NULL;5858- struct cifs_open_info_data data = {};5959- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);5858+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);6059 const char *symroot = cifs_sb->ctx->symlinkroot;6161- struct inode *new;6262- struct kvec iov;6363- __le16 *path = NULL;6464- bool directory;6565- char *symlink_target = NULL;6666- char *sym = NULL;6060+ struct cifs_open_info_data data = {};6761 char sep = CIFS_DIR_SEP(cifs_sb);6262+ char *symlink_target = NULL;6863 u16 len, plen, poff, slen;6464+ unsigned int sbflags;6565+ __le16 *path = NULL;6666+ struct inode *new;6767+ char *sym = NULL;6868+ struct kvec iov;6969+ bool directory;6970 int rc = 0;70717172 if (strlen(symname) > REPARSE_SYM_PATH_MAX)···8483 .symlink_target = symlink_target,8584 };86858787- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) &&8888- symroot && symname[0] == '/') {8686+ sbflags = cifs_sb_flags(cifs_sb);8787+ if (!(sbflags & CIFS_MOUNT_POSIX_PATHS) && symroot && symname[0] == '/') {8988 /*9089 * This is a request to create an absolute symlink on the server9190 * which does not support POSIX paths, and expects symlink in···165164 * mask these characters in NT object prefix by '_' and then change166165 * them back.167166 */168168- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) && symname[0] == '/')167167+ if (!(sbflags & CIFS_MOUNT_POSIX_PATHS) && symname[0] == '/')169168 sym[0] = sym[1] = sym[2] = sym[5] = '_';170169171170 path = cifs_convert_path_to_utf16(sym, cifs_sb);···174173 goto out;175174 }176175177177- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) && symname[0] == '/') {176176+ if (!(sbflags & CIFS_MOUNT_POSIX_PATHS) && symname[0] == '/') {178177 sym[0] = '\\';179178 sym[1] = sym[2] = '?';180179 sym[5] = ':';···198197 slen = 2 * UniStrnlen((wchar_t *)path, REPARSE_SYM_PATH_MAX);199198 poff = 0;200199 plen = slen;201201- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) && symname[0] == '/') {200200+ if (!(sbflags & CIFS_MOUNT_POSIX_PATHS) && symname[0] == '/') {202201 /*203202 * For absolute NT symlinks skip leading "\\??\\" in PrintName as204203 * PrintName is user visible location in DOS/Win32 format (not in NT format).···825824 goto out;826825 }827826828828- if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS) &&827827+ if (!(cifs_sb_flags(cifs_sb) & CIFS_MOUNT_POSIX_PATHS) &&829828 symroot && !relative) {830829 /*831830 * This is an absolute symlink from the server which does not
+2-2
fs/smb/client/reparse.h
···3333{3434 u32 uid = le32_to_cpu(*(__le32 *)ptr);35353636- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_UID)3636+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_OVERR_UID)3737 return cifs_sb->ctx->linux_uid;3838 return make_kuid(current_user_ns(), uid);3939}···4343{4444 u32 gid = le32_to_cpu(*(__le32 *)ptr);45454646- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_OVERR_GID)4646+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_OVERR_GID)4747 return cifs_sb->ctx->linux_gid;4848 return make_kgid(current_user_ns(), gid);4949}
+14-8
fs/smb/client/smb1ops.c
···49495050 if (!CIFSSMBQFSUnixInfo(xid, tcon)) {5151 __u64 cap = le64_to_cpu(tcon->fsUnixInfo.Capability);5252+ unsigned int sbflags;52535354 cifs_dbg(FYI, "unix caps which server supports %lld\n", cap);5455 /*···7675 if (cap & CIFS_UNIX_TRANSPORT_ENCRYPTION_MANDATORY_CAP)7776 cifs_dbg(VFS, "per-share encryption not supported yet\n");78777878+ if (cifs_sb)7979+ sbflags = cifs_sb_flags(cifs_sb);8080+7981 cap &= CIFS_UNIX_CAP_MASK;8082 if (ctx && ctx->no_psx_acl)8183 cap &= ~CIFS_UNIX_POSIX_ACL_CAP;8284 else if (CIFS_UNIX_POSIX_ACL_CAP & cap) {8385 cifs_dbg(FYI, "negotiated posix acl support\n");8486 if (cifs_sb)8585- cifs_sb->mnt_cifs_flags |=8686- CIFS_MOUNT_POSIXACL;8787+ sbflags |= CIFS_MOUNT_POSIXACL;8788 }88898990 if (ctx && ctx->posix_paths == 0)···9390 else if (cap & CIFS_UNIX_POSIX_PATHNAMES_CAP) {9491 cifs_dbg(FYI, "negotiate posix pathnames\n");9592 if (cifs_sb)9696- cifs_sb->mnt_cifs_flags |=9797- CIFS_MOUNT_POSIX_PATHS;9393+ sbflags |= CIFS_MOUNT_POSIX_PATHS;9894 }9595+9696+ if (cifs_sb)9797+ atomic_set(&cifs_sb->mnt_cifs_flags, sbflags);999810099 cifs_dbg(FYI, "Negotiate caps 0x%x\n", (int)cap);101100#ifdef CONFIG_CIFS_DEBUG2···11521147 __u64 volatile_fid, __u16 net_fid,11531148 struct cifsInodeInfo *cinode, unsigned int oplock)11541149{11551155- unsigned int sbflags = CIFS_SB(cinode->netfs.inode.i_sb)->mnt_cifs_flags;11501150+ unsigned int sbflags = cifs_sb_flags(CIFS_SB(cinode));11561151 __u8 op;1157115211581153 op = !!((oplock & CIFS_CACHE_READ_FLG) || (sbflags & CIFS_MOUNT_RO_CACHE));···12871282 struct dentry *dentry, struct cifs_tcon *tcon,12881283 const char *full_path, umode_t mode, dev_t dev)12891284{12901290- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);12851285+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode);12861286+ unsigned int sbflags = cifs_sb_flags(cifs_sb);12911287 struct inode *newinode = NULL;12921288 int rc;12931289···13041298 .mtime = NO_CHANGE_64,13051299 .device = dev,13061300 };13071307- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID) {13011301+ if (sbflags & CIFS_MOUNT_SET_UID) {13081302 args.uid = current_fsuid();13091303 args.gid = current_fsgid();13101304 } else {···13231317 if (rc == 0)13241318 d_instantiate(dentry, newinode);13251319 return rc;13261326- } else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL) {13201320+ } else if (sbflags & CIFS_MOUNT_UNX_EMUL) {13271321 /*13281322 * Check if mounted with mount parm 'sfu' mount parm.13291323 * SFU emulation should work with all servers
+1-1
fs/smb/client/smb2file.c
···7272 * POSIX server does not distinguish between symlinks to file and7373 * symlink directory. So nothing is needed to fix on the client side.7474 */7575- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS)7575+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_POSIX_PATHS)7676 return 0;77777878 if (!*target)
+4-14
fs/smb/client/smb2misc.c
···455455__le16 *456456cifs_convert_path_to_utf16(const char *from, struct cifs_sb_info *cifs_sb)457457{458458- int len;459458 const char *start_of_path;460460- __le16 *to;461461- int map_type;462462-463463- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SFM_CHR)464464- map_type = SFM_MAP_UNI_RSVD;465465- else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MAP_SPECIAL_CHR)466466- map_type = SFU_MAP_UNI_RSVD;467467- else468468- map_type = NO_MAP_UNI_RSVD;459459+ int len;469460470461 /* Windows doesn't allow paths beginning with \ */471462 if (from[0] == '\\')···470479 } else471480 start_of_path = from;472481473473- to = cifs_strndup_to_utf16(start_of_path, PATH_MAX, &len,474474- cifs_sb->local_nls, map_type);475475- return to;482482+ return cifs_strndup_to_utf16(start_of_path, PATH_MAX, &len,483483+ cifs_sb->local_nls, cifs_remap(cifs_sb));476484}477485478486__le32 smb2_get_lease_state(struct cifsInodeInfo *cinode, unsigned int oplock)479487{480480- unsigned int sbflags = CIFS_SB(cinode->netfs.inode.i_sb)->mnt_cifs_flags;488488+ unsigned int sbflags = cifs_sb_flags(CIFS_SB(cinode));481489 __le32 lease = 0;482490483491 if ((oplock & CIFS_CACHE_WRITE_FLG) || (sbflags & CIFS_MOUNT_RW_CACHE))
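The map_type selection deleted here is what the existing cifs_remap() helper already computes from the mount flags, so the conversion can feed cifs_remap(cifs_sb) straight into cifs_strndup_to_utf16(). For reference, cifs_remap() is roughly the following (a sketch written against the atomic-flags accessor above; treat the exact body as an assumption):

static inline int cifs_remap(struct cifs_sb_info *cifs_sb)
{
	unsigned int sbflags = cifs_sb_flags(cifs_sb);

	if (sbflags & CIFS_MOUNT_MAP_SFM_CHR)
		return SFM_MAP_UNI_RSVD;	/* SFM-style mapping of reserved chars */
	if (sbflags & CIFS_MOUNT_MAP_SPECIAL_CHR)
		return SFU_MAP_UNI_RSVD;	/* legacy SFU-style mapping */
	return NO_MAP_UNI_RSVD;
}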
+4-4
fs/smb/client/smb2ops.c
···986986 rc = -EREMOTE;987987 }988988 if (rc == -EREMOTE && IS_ENABLED(CONFIG_CIFS_DFS_UPCALL) &&989989- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_DFS))989989+ (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NO_DFS))990990 rc = -EOPNOTSUPP;991991 goto out;992992 }···26912691 __u64 volatile_fid, __u16 net_fid,26922692 struct cifsInodeInfo *cinode, unsigned int oplock)26932693{26942694- unsigned int sbflags = CIFS_SB(cinode->netfs.inode.i_sb)->mnt_cifs_flags;26942694+ unsigned int sbflags = cifs_sb_flags(CIFS_SB(cinode));26952695 __u8 op;2696269626972697 if (tcon->ses->server->capabilities & SMB2_GLOBAL_CAP_LEASING)···53325332 struct dentry *dentry, struct cifs_tcon *tcon,53335333 const char *full_path, umode_t mode, dev_t dev)53345334{53355335- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);53355335+ unsigned int sbflags = cifs_sb_flags(CIFS_SB(inode));53365336 int rc = -EOPNOTSUPP;5337533753385338 /*···53415341 * supports block and char device, socket & fifo,53425342 * and was used by default in earlier versions of Windows53435343 */53445344- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL) {53445344+ if (sbflags & CIFS_MOUNT_UNX_EMUL) {53455345 rc = cifs_sfu_make_node(xid, inode, dentry, tcon,53465346 full_path, mode, dev);53475347 } else if (CIFS_REPARSE_SUPPORT(tcon)) {
···807807}808808809809/*810810- * Return a channel (master if none) of @ses that can be used to send811811- * regular requests.810810+ * cifs_pick_channel - pick an eligible channel for network operations812811 *813813- * If we are currently binding a new channel (negprot/sess.setup),814814- * return the new incomplete channel.812812+ * @ses: session reference813813+ *814814+ * Select an eligible channel (not terminating and not marked as needing815815+ * reconnect), preferring the least loaded one. If no eligible channel is816816+ * found, fall back to the primary channel (index 0).817817+ *818818+ * Return: TCP_Server_Info pointer for the chosen channel, or NULL if @ses is819819+ * NULL.815820 */816821struct TCP_Server_Info *cifs_pick_channel(struct cifs_ses *ses)817822{818823 uint index = 0;819819- unsigned int min_in_flight = UINT_MAX, max_in_flight = 0;824824+ unsigned int min_in_flight = UINT_MAX;820825 struct TCP_Server_Info *server = NULL;821826 int i, start, cur;822827···851846 min_in_flight = server->in_flight;852847 index = cur;853848 }854854- if (server->in_flight > max_in_flight)855855- max_in_flight = server->in_flight;856849 }857857-858858- /* if all channels are equally loaded, fall back to round-robin */859859- if (min_in_flight == max_in_flight)860860- index = (uint)start % ses->chan_count;861850862851 server = ses->chans[index].server;863852 spin_unlock(&ses->chan_lock);
+3-3
fs/smb/client/xattr.c
···149149 break;150150 }151151152152- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_XATTR)152152+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NO_XATTR)153153 goto out;154154155155 if (pTcon->ses->server->ops->set_EA) {···309309 break;310310 }311311312312- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_XATTR)312312+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NO_XATTR)313313 goto out;314314315315 if (pTcon->ses->server->ops->query_all_EAs)···398398 if (unlikely(cifs_forced_shutdown(cifs_sb)))399399 return smb_EIO(smb_eio_trace_forced_shutdown);400400401401- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NO_XATTR)401401+ if (cifs_sb_flags(cifs_sb) & CIFS_MOUNT_NO_XATTR)402402 return -EOPNOTSUPP;403403404404 tlink = cifs_sb_tlink(cifs_sb);
···13471347 * feature was introduced. This counter can go negative due to the way13481348 * we handle nearly-lockless reservations, so we must use the _positive13491349 * variant here to avoid writing out nonsense frextents.13501350+ *13511351+ * RT groups are only supported on v5 file systems, which always13521352+ * have lazy SB counters.13501353 */13511354 if (xfs_has_rtgroups(mp) && !xfs_has_zoned(mp)) {13521355 mp->m_sb.sb_frextents =
+1-1
fs/xfs/scrub/dir_repair.c
···177177 rd->dir_names = NULL;178178 if (rd->dir_entries)179179 xfarray_destroy(rd->dir_entries);180180- rd->dir_names = NULL;180180+ rd->dir_entries = NULL;181181}182182183183/* Set up for a directory repair. */
+6-1
fs/xfs/scrub/orphanage.c
···442442 return 0;443443444444 d_child = try_lookup_noperm(&qname, d_orphanage);445445+ if (IS_ERR(d_child)) {446446+ dput(d_orphanage);447447+ return PTR_ERR(d_child);448448+ }449449+445450 if (d_child) {446451 trace_xrep_adoption_check_child(sc->mp, d_child);447452···484479 return;485480486481 d_child = try_lookup_noperm(&qname, d_orphanage);487487- while (d_child != NULL) {482482+ while (!IS_ERR_OR_NULL(d_child)) {488483 trace_xrep_adoption_invalidate_child(sc->mp, d_child);489484490485 ASSERT(d_is_negative(d_child));
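The added IS_ERR() check matters because try_lookup_noperm() has a three-way return convention: an ERR_PTR() when the lookup itself fails, NULL when there is no matching dentry in the dcache, and otherwise a dentry reference the caller must dput(). A minimal sketch of handling all three outcomes:

struct dentry *d_child = try_lookup_noperm(&qname, d_orphanage);

if (IS_ERR(d_child))		/* the lookup itself failed */
	return PTR_ERR(d_child);
if (!d_child)			/* no cached dentry; nothing to do */
	return 0;
/* ... use d_child ... */
dput(d_child);

The second hunk's IS_ERR_OR_NULL() loop condition folds the same two terminating cases into one test.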
+2-15
fs/xfs/xfs_fsops.c
···9595 struct xfs_growfs_data *in) /* growfs data input struct */9696{9797 xfs_agnumber_t oagcount = mp->m_sb.sb_agcount;9898+ xfs_rfsblock_t nb = in->newblocks;9899 struct xfs_buf *bp;99100 int error;100101 xfs_agnumber_t nagcount;101102 xfs_agnumber_t nagimax = 0;102102- xfs_rfsblock_t nb, nb_div, nb_mod;103103 int64_t delta;104104 bool lastag_extended = false;105105 struct xfs_trans *tp;106106 struct aghdr_init_data id = {};107107 struct xfs_perag *last_pag;108108109109- nb = in->newblocks;110109 error = xfs_sb_validate_fsb_count(&mp->m_sb, nb);111110 if (error)112111 return error;···124125 mp->m_sb.sb_rextsize);125126 if (error)126127 return error;128128+ xfs_growfs_compute_deltas(mp, nb, &delta, &nagcount);127129128128- nb_div = nb;129129- nb_mod = do_div(nb_div, mp->m_sb.sb_agblocks);130130- if (nb_mod && nb_mod >= XFS_MIN_AG_BLOCKS)131131- nb_div++;132132- else if (nb_mod)133133- nb = nb_div * mp->m_sb.sb_agblocks;134134-135135- if (nb_div > XFS_MAX_AGNUMBER + 1) {136136- nb_div = XFS_MAX_AGNUMBER + 1;137137- nb = nb_div * mp->m_sb.sb_agblocks;138138- }139139- nagcount = nb_div;140140- delta = nb - mp->m_sb.sb_dblocks;141130 /*142131 * Reject filesystems with a single AG because they are not143132 * supported, and reject a shrink operation that would cause a
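The open-coded arithmetic removed above rounds the new size up to a whole AG when the tail is at least XFS_MIN_AG_BLOCKS, truncates it otherwise, and clamps the AG count to XFS_MAX_AGNUMBER + 1; it now lives in xfs_growfs_compute_deltas(). That helper is not shown in this digest, but judging from the call site and the deleted lines it presumably looks like this (a reconstruction, not the actual implementation):

static void
xfs_growfs_compute_deltas(
	struct xfs_mount	*mp,
	xfs_rfsblock_t		nb,
	int64_t			*delta,
	xfs_agnumber_t		*nagcount)
{
	xfs_rfsblock_t		nb_div = nb;
	xfs_rfsblock_t		nb_mod;

	nb_mod = do_div(nb_div, mp->m_sb.sb_agblocks);
	if (nb_mod && nb_mod >= XFS_MIN_AG_BLOCKS)
		nb_div++;		/* a runt AG at the end is still worth growing */
	else if (nb_mod)
		nb = nb_div * mp->m_sb.sb_agblocks;	/* tail too small: drop it */

	if (nb_div > XFS_MAX_AGNUMBER + 1) {
		nb_div = XFS_MAX_AGNUMBER + 1;
		nb = nb_div * mp->m_sb.sb_agblocks;
	}
	*nagcount = nb_div;
	*delta = nb - mp->m_sb.sb_dblocks;
}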
+18-2
fs/xfs/xfs_health.c
···314314 xfs_rtgroup_put(rtg);315315}316316317317+static inline void xfs_inode_report_fserror(struct xfs_inode *ip)318318+{319319+ /*320320+ * Do not report inodes being constructed or freed, or metadata inodes,321321+ * to fsnotify.322322+ */323323+ if (xfs_iflags_test(ip, XFS_INEW | XFS_IRECLAIM) ||324324+ xfs_is_internal_inode(ip)) {325325+ fserror_report_metadata(ip->i_mount->m_super, -EFSCORRUPTED,326326+ GFP_NOFS);327327+ return;328328+ }329329+330330+ fserror_report_file_metadata(VFS_I(ip), -EFSCORRUPTED, GFP_NOFS);331331+}332332+317333/* Mark the unhealthy parts of an inode. */318334void319335xfs_inode_mark_sick(···355339 inode_state_clear(VFS_I(ip), I_DONTCACHE);356340 spin_unlock(&VFS_I(ip)->i_lock);357341358358- fserror_report_file_metadata(VFS_I(ip), -EFSCORRUPTED, GFP_NOFS);342342+ xfs_inode_report_fserror(ip);359343 if (mask)360344 xfs_healthmon_report_inode(ip, XFS_HEALTHMON_SICK, old_mask,361345 mask);···387371 inode_state_clear(VFS_I(ip), I_DONTCACHE);388372 spin_unlock(&VFS_I(ip)->i_lock);389373390390- fserror_report_file_metadata(VFS_I(ip), -EFSCORRUPTED, GFP_NOFS);374374+ xfs_inode_report_fserror(ip);391375 if (mask)392376 xfs_healthmon_report_inode(ip, XFS_HEALTHMON_CORRUPT, old_mask,393377 mask);
···106106 mapping_set_folio_min_order(VFS_I(ip)->i_mapping,107107 M_IGEO(mp)->min_folio_order);108108109109- XFS_STATS_INC(mp, vn_active);109109+ XFS_STATS_INC(mp, xs_inodes_active);110110 ASSERT(atomic_read(&ip->i_pincount) == 0);111111 ASSERT(ip->i_ino == 0);112112···172172 /* asserts to verify all state is correct here */173173 ASSERT(atomic_read(&ip->i_pincount) == 0);174174 ASSERT(!ip->i_itemp || list_empty(&ip->i_itemp->ili_item.li_bio_list));175175- XFS_STATS_DEC(ip->i_mount, vn_active);175175+ if (xfs_is_metadir_inode(ip))176176+ XFS_STATS_DEC(ip->i_mount, xs_inodes_meta);177177+ else178178+ XFS_STATS_DEC(ip->i_mount, xs_inodes_active);176179177180 call_rcu(&VFS_I(ip)->i_rcu, xfs_inode_free_callback);178181}···639636 if (!ip)640637 return -ENOMEM;641638639639+ /*640640+ * Set XFS_INEW as early as possible so that the health code won't pass641641+ * the inode to the fserror code if the ondisk inode cannot be loaded.642642+ * We're going to free the xfs_inode immediately if that happens, which643643+ * would lead to UAF problems.644644+ */645645+ xfs_iflags_set(ip, XFS_INEW);646646+642647 error = xfs_imap(pag, tp, ip->i_ino, &ip->i_imap, flags);643648 if (error)644649 goto out_destroy;···724713 ip->i_udquot = NULL;725714 ip->i_gdquot = NULL;726715 ip->i_pdquot = NULL;727727- xfs_iflags_set(ip, XFS_INEW);728716729717 /* insert the new inode */730718 spin_lock(&pag->pag_ici_lock);···22442234 struct xfs_mount *mp = ip->i_mount;22452235 bool need_inactive;2246223622472247- XFS_STATS_INC(mp, vn_reclaim);22372237+ XFS_STATS_INC(mp, xs_inode_mark_reclaimable);2248223822492239 /*22502240 * We should never get here with any of the reclaim flags already set.
+1-1
fs/xfs/xfs_mount.h
···345345 struct xfs_hooks m_dir_update_hooks;346346347347 /* Private data referring to a health monitor object. */348348- struct xfs_healthmon *m_healthmon;348348+ struct xfs_healthmon __rcu *m_healthmon;349349} xfs_mount_t;350350351351#define M_IGEO(mp) (&(mp)->m_ino_geo)
···235235236236#ifdef XFS_WARN237237238238+/*239239+ * Note that this ASSERT does not kill the kernel by itself; it will if the240240+ * kernel has panic_on_warn set.241241+ */238242#define ASSERT(expr) \239243 (likely(expr) ? (void)0 : asswarn(NULL, #expr, __FILE__, __LINE__))240244···249245#endif /* XFS_WARN */250246#endif /* DEBUG */251247248248+/*249249+ * Use this to catch metadata corruption that is not caught by block or250250+ * structure verifiers, which only check for corruption within the scope of251251+ * the object being verified.252252+ */252253#define XFS_IS_CORRUPT(mp, expr) \253254 (unlikely(expr) ? xfs_corruption_error(#expr, XFS_ERRLEVEL_LOW, (mp), \254255 NULL, 0, __FILE__, __LINE__, \
+37-7
fs/xfs/xfs_rtalloc.c
···112112 error = xfs_rtget_summary(oargs, log, bbno, &sum);113113 if (error)114114 goto out;115115+ if (XFS_IS_CORRUPT(oargs->mp, sum < 0)) {116116+ error = -EFSCORRUPTED;117117+ goto out;118118+ }115119 if (sum == 0)116120 continue;117121 error = xfs_rtmodify_summary(oargs, log, bbno, -sum);···124120 error = xfs_rtmodify_summary(nargs, log, bbno, sum);125121 if (error)126122 goto out;127127- ASSERT(sum > 0);128123 }129124 }130125 error = 0;···10501047 */10511048 xfs_trans_resv_calc(mp, &mp->m_resv);1052104910501050+ /*10511051+ * Sync the sb counters now to reflect the updated values. Lazy counters10521052+ * are not always written back, so sync them here to avoid inconsistencies10531053+ * between frextents and rtextents.10541054+ */10551055+10561056+ if (xfs_has_lazysbcount(mp))10571057+ xfs_log_sb(args.tp);10581058+10531059 error = xfs_trans_commit(args.tp);10541060 if (error)10551061 goto out_free;···10911079}1092108010931081/*10941094- * Calculate the last rbmblock currently used.10821082+ * This returns the bitmap block number (0-based) that will be10831083+ * extended/modified. There are two cases here:10841084+ * 1. The size of the rtg is a multiple of10851085+ * xfs_rtbitmap_rtx_per_rbmblock(), i.e. an integral number of bitmap blocks10861086+ * are completely filled up. In this case, we should return10871087+ * 1 + (the last used bitmap block number).10881088+ * 2. The size of the rtg is not a multiple of xfs_rtbitmap_rtx_per_rbmblock().10891089+ * Here we return the number of the last used bitmap block. In this10901090+ * case, we will modify that last used bitmap block to extend the size of the10911091+ * rtgroup.10951092 *10961093 * This also deals with the case where there were no rtextents before.10971094 */10981095static xfs_fileoff_t10991099-xfs_last_rt_bmblock(10961096+xfs_last_rt_bmblock_to_extend(11001097 struct xfs_rtgroup *rtg)11011098{11021099 struct xfs_mount *mp = rtg_mount(rtg);11031100 xfs_rgnumber_t rgno = rtg_rgno(rtg);11041101 xfs_fileoff_t bmbno = 0;11021102+ unsigned int mod = 0;1105110311061104 ASSERT(!mp->m_sb.sb_rgcount || rgno >= mp->m_sb.sb_rgcount - 1);11071105···11191097 xfs_rtxnum_t nrext = xfs_last_rtgroup_extents(mp);1120109811211099 /* Also fill up the previous block if not entirely full. */11221122- bmbno = xfs_rtbitmap_blockcount_len(mp, nrext);11231123- if (xfs_rtx_to_rbmword(mp, nrext) != 0)11241124- bmbno--;11001100+ /* Subtract 1 to convert to a 0-based index */11011101+ bmbno = xfs_rtbitmap_blockcount_len(mp, nrext) - 1;11021102+ div_u64_rem(nrext, xfs_rtbitmap_rtx_per_rbmblock(mp), &mod);11031103+ /*11041104+ * mod = 0 means that all the current blocks are full. So11051105+ * return the next block number to be used for the rtgroup11061106+ * growth.11071107+ */11081108+ if (mod == 0)11091109+ bmbno++;11251110 }1126111111271112 return bmbno;···12331204 goto out_rele;12341205 }1235120612361236- for (bmbno = xfs_last_rt_bmblock(rtg); bmbno < bmblocks; bmbno++) {12071207+ for (bmbno = xfs_last_rt_bmblock_to_extend(rtg); bmbno < bmblocks;12081208+ bmbno++) {12371209 error = xfs_growfs_rt_bmblock(rtg, nrblocks, rextsize, bmbno);12381210 if (error)12391211 goto out_error;
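To put numbers on the two cases in the comment, suppose xfs_rtbitmap_rtx_per_rbmblock() is 1024 (an illustrative value only): with nrext = 3072 the existing extents exactly fill blocks 0-2, div_u64_rem() gives mod = 0, and bmbno is bumped to 3, the first untouched block; with nrext = 3100, xfs_rtbitmap_blockcount_len() is 4, so bmbno = 3 and mod = 28 != 0 leaves it there, because block 3 is only partially filled and must itself be rewritten to grow the rtgroup.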
+11-6
fs/xfs/xfs_stats.c
···4242 { "xstrat", xfsstats_offset(xs_write_calls) },4343 { "rw", xfsstats_offset(xs_attr_get) },4444 { "attr", xfsstats_offset(xs_iflush_count)},4545- { "icluster", xfsstats_offset(vn_active) },4545+ { "icluster", xfsstats_offset(xs_inodes_active) },4646 { "vnodes", xfsstats_offset(xb_get) },4747 { "buf", xfsstats_offset(xs_abtb_2) },4848 { "abtb2", xfsstats_offset(xs_abtc_2) },···5959 { "rtrefcntbt", xfsstats_offset(xs_qm_dqreclaims)},6060 /* we print both series of quota information together */6161 { "qm", xfsstats_offset(xs_gc_read_calls)},6262- { "zoned", xfsstats_offset(__pad1)},6262+ { "zoned", xfsstats_offset(xs_inodes_meta)},6363+ { "metafile", xfsstats_offset(xs_xstrat_bytes)},6364 };64656566 /* Loop over all stats groups */···10099101100void xfs_stats_clearall(struct xfsstats __percpu *stats)102101{102102+ uint32_t xs_inodes_active, xs_inodes_meta;103103 int c;104104- uint32_t vn_active;105104106105 xfs_notice(NULL, "Clearing xfsstats");107106 for_each_possible_cpu(c) {108107 preempt_disable();109109- /* save vn_active, it's a universal truth! */110110- vn_active = per_cpu_ptr(stats, c)->s.vn_active;108108+ /*109109+ * Save the active / meta inode counters, as they are stateful.110110+ */111111+ xs_inodes_active = per_cpu_ptr(stats, c)->s.xs_inodes_active;112112+ xs_inodes_meta = per_cpu_ptr(stats, c)->s.xs_inodes_meta;111113 memset(per_cpu_ptr(stats, c), 0, sizeof(*stats));112112- per_cpu_ptr(stats, c)->s.vn_active = vn_active;114114+ per_cpu_ptr(stats, c)->s.xs_inodes_active = xs_inodes_active;115115+ per_cpu_ptr(stats, c)->s.xs_inodes_meta = xs_inodes_meta;113116 preempt_enable();114117 }115118}
+10-9
fs/xfs/xfs_stats.h
···100100 uint32_t xs_iflush_count;101101 uint32_t xs_icluster_flushcnt;102102 uint32_t xs_icluster_flushinode;103103- uint32_t vn_active; /* # vnodes not on free lists */104104- uint32_t vn_alloc; /* # times vn_alloc called */105105- uint32_t vn_get; /* # times vn_get called */106106- uint32_t vn_hold; /* # times vn_hold called */107107- uint32_t vn_rele; /* # times vn_rele called */108108- uint32_t vn_reclaim; /* # times vn_reclaim called */109109- uint32_t vn_remove; /* # times vn_remove called */110110- uint32_t vn_free; /* # times vn_free called */103103+ uint32_t xs_inodes_active;104104+ uint32_t __unused_vn_alloc;105105+ uint32_t __unused_vn_get;106106+ uint32_t __unused_vn_hold;107107+ uint32_t xs_inode_destroy;108108+ uint32_t xs_inode_destroy2; /* same as xs_inode_destroy */109109+ uint32_t xs_inode_mark_reclaimable;110110+ uint32_t __unused_vn_free;111111 uint32_t xb_get;112112 uint32_t xb_create;113113 uint32_t xb_get_locked;···142142 uint32_t xs_gc_read_calls;143143 uint32_t xs_gc_write_calls;144144 uint32_t xs_gc_zone_reset_calls;145145- uint32_t __pad1;145145+/* Metafile counters */146146+ uint32_t xs_inodes_meta;146147/* Extra precision counters */147148 uint64_t xs_xstrat_bytes;148149 uint64_t xs_write_bytes;
···1414struct mempolicy;15151616/* Helper macro to avoid gfp flags if they are the default one */1717-#define __default_gfp(a,...) a1818-#define default_gfp(...) __default_gfp(__VA_ARGS__ __VA_OPT__(,) GFP_KERNEL)1717+#define __default_gfp(a,b,...) b1818+#define default_gfp(...) __default_gfp(,##__VA_ARGS__,GFP_KERNEL)19192020/* Convert GFP flags to their corresponding migrate type */2121#define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)···339339{340340 return folio_alloc_noprof(gfp, order);341341}342342-#define vma_alloc_folio_noprof(gfp, order, vma, addr) \343343- folio_alloc_noprof(gfp, order)342342+static inline struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,343343+ struct vm_area_struct *vma, unsigned long addr)344344+{345345+ return folio_alloc_noprof(gfp, order);346346+}344347#endif345348346349#define alloc_pages(...) alloc_hooks(alloc_pages_noprof(__VA_ARGS__))
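The rewritten default_gfp() swaps __VA_OPT__ for the older GNU ', ## __VA_ARGS__' comma-deletion extension: the paste removes the leading comma when the argument list is empty, and __default_gfp(a,b,...) always selects its second argument, so the caller's flags (if given) occupy slot b ahead of the GFP_KERNEL fallback. Tracing both expansions:

default_gfp()           /* -> __default_gfp(,GFP_KERNEL)            -> GFP_KERNEL */
default_gfp(GFP_ATOMIC) /* -> __default_gfp(,GFP_ATOMIC,GFP_KERNEL) -> GFP_ATOMIC */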
+2
include/linux/gfp_types.h
···139139 * %__GFP_ACCOUNT causes the allocation to be accounted to kmemcg.140140 *141141 * %__GFP_NO_OBJ_EXT causes slab allocation to have no object extension.142142+ * mark_obj_codetag_empty() should be called upon freeing for objects allocated143143+ * with this flag to indicate that their NULL tags are expected and normal.142144 */143145#define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE)144146#define __GFP_WRITE ((__force gfp_t)___GFP_WRITE)
···434434/*435435 * Convert various time units to each other:436436 */437437-extern unsigned int jiffies_to_msecs(const unsigned long j);438438-extern unsigned int jiffies_to_usecs(const unsigned long j);437437+438438+#if HZ <= MSEC_PER_SEC && !(MSEC_PER_SEC % HZ)439439+/**440440+ * jiffies_to_msecs - Convert jiffies to milliseconds441441+ * @j: jiffies value442442+ *443443+ * This inline version handles HZ values that divide MSEC_PER_SEC (e.g. 100, 250, 1000).444444+ *445445+ * Return: milliseconds value446446+ */447447+static inline unsigned int jiffies_to_msecs(const unsigned long j)448448+{449449+ return (MSEC_PER_SEC / HZ) * j;450450+}451451+#else452452+unsigned int jiffies_to_msecs(const unsigned long j);453453+#endif454454+455455+#if !(USEC_PER_SEC % HZ)456456+/**457457+ * jiffies_to_usecs - Convert jiffies to microseconds458458+ * @j: jiffies value459459+ *460460+ * Return: microseconds value461461+ */462462+static inline unsigned int jiffies_to_usecs(const unsigned long j)463463+{464464+ /*465465+ * HZ usually doesn't go much beyond MSEC_PER_SEC.466466+ * jiffies_to_usecs() and usecs_to_jiffies() depend on that.467467+ */468468+ BUILD_BUG_ON(HZ > USEC_PER_SEC);469469+470470+ return (USEC_PER_SEC / HZ) * j;471471+}472472+#else473473+unsigned int jiffies_to_usecs(const unsigned long j);474474+#endif439475440476/**441477 * jiffies_to_nsecs - Convert jiffies to nanoseconds
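Since the quotient is a compile-time constant whenever the #if holds, each inline collapses to one foldable multiplication: jiffies_to_msecs(j) is 10 * j at HZ=100, 4 * j at HZ=250 and just j at HZ=1000, while jiffies_to_usecs(j) is 10000 * j, 4000 * j and 1000 * j respectively, with no out-of-line call left in either case.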
···2323/**2424 * struct liveupdate_file_op_args - Arguments for file operation callbacks.2525 * @handler: The file handler being called.2626- * @retrieved: The retrieve status for the 'can_finish / finish'2727- * operation.2626+ * @retrieve_status: The retrieve status for the 'can_finish / finish'2727+ * operation. A value of 0 means the retrieve has not been2828+ * attempted, a positive value means the retrieve was2929+ * successful, and a negative value means the retrieve failed,3030+ * and the value is the error code of the call.2831 * @file: The file object. For retrieve: [OUT] The callback sets2932 * this to the new file. For other ops: [IN] The caller sets3033 * this to the file being operated on.···4340 */4441struct liveupdate_file_op_args {4542 struct liveupdate_file_handler *handler;4646- bool retrieved;4343+ int retrieve_status;4744 struct file *file;4845 u64 serialized_data;4946 void *private_data;
+5-4
include/linux/mmc/host.h
···486486487487 struct mmc_ios ios; /* current io bus settings */488488489489+ bool claimed; /* host exclusively claimed */490490+489491 /* group bitfields together to minimize padding */490492 unsigned int use_spi_crc:1;491491- unsigned int claimed:1; /* host exclusively claimed */492493 unsigned int doing_init_tune:1; /* initial tuning in progress */493493- unsigned int can_retune:1; /* re-tuning can be used */494494 unsigned int doing_retune:1; /* re-tuning in progress */495495- unsigned int retune_now:1; /* do re-tuning at next req */496496- unsigned int retune_paused:1; /* re-tuning is temporarily disabled */497495 unsigned int retune_crc_disable:1; /* don't trigger retune upon crc */498496 unsigned int can_dma_map_merge:1; /* merging can be used */499497 unsigned int vqmmc_enabled:1; /* vqmmc regulator is enabled */···506508 int rescan_disable; /* disable card detection */507509 int rescan_entered; /* used with nonremovable devices */508510511511+ bool can_retune; /* re-tuning can be used */512512+ bool retune_now; /* do re-tuning at next req */513513+ bool retune_paused; /* re-tuning is temporarily disabled */509514 int need_retune; /* re-tuning is needed */510515 int hold_retune; /* hold off re-tuning */511516 unsigned int retune_period; /* re-tuning period in secs */
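Moving claimed, can_retune, retune_now and retune_paused out of the bitfield is presumably motivated by concurrency rather than size: adjacent bitfields share one memory location, so each write is a read-modify-write of the whole word that can silently undo a concurrent update of a neighbouring bit, whereas a bool is independently addressable. A minimal illustration of the hazard (illustrative struct, not the mmc code):

struct f {
	unsigned int a:1;	/* written from task context */
	unsigned int b:1;	/* written from IRQ context  */
};

/*
 * "f.a = 1" compiles to: load the word, set bit 0, store the word.
 * An interrupt that sets f.b between the load and the store is
 * lost when the stale word is written back.
 */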
+4-4
include/linux/overflow.h
···4242 * both the type-agnostic benefits of the macros while also being able to4343 * enforce that the return value is, in fact, checked.4444 */4545-static inline bool __must_check __must_check_overflow(bool overflow)4545+static __always_inline bool __must_check __must_check_overflow(bool overflow)4646{4747 return unlikely(overflow);4848}···327327 * with any overflow causing the return value to be SIZE_MAX. The328328 * lvalue must be size_t to avoid implicit type conversion.329329 */330330-static inline size_t __must_check size_mul(size_t factor1, size_t factor2)330330+static __always_inline size_t __must_check size_mul(size_t factor1, size_t factor2)331331{332332 size_t bytes;333333···346346 * with any overflow causing the return value to be SIZE_MAX. The347347 * lvalue must be size_t to avoid implicit type conversion.348348 */349349-static inline size_t __must_check size_add(size_t addend1, size_t addend2)349349+static __always_inline size_t __must_check size_add(size_t addend1, size_t addend2)350350{351351 size_t bytes;352352···367367 * argument may be SIZE_MAX (or the result with be forced to SIZE_MAX).368368 * The lvalue must be size_t to avoid implicit type conversion.369369 */370370-static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)370370+static __always_inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)371371{372372 size_t bytes;373373
+2-14
include/linux/pm_runtime.h
···545545 *546546 * Decrement the runtime PM usage counter of @dev and if it turns out to be547547 * equal to 0, queue up a work item for @dev like in pm_request_idle().548548- *549549- * Return:550550- * * 1: Success. Usage counter dropped to zero, but device was already suspended.551551- * * 0: Success.552552- * * -EINVAL: Runtime PM error.553553- * * -EACCES: Runtime PM disabled.554554- * * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status555555- * change ongoing.556556- * * -EBUSY: Runtime PM child_count non-zero.557557- * * -EPERM: Device PM QoS resume latency 0.558558- * * -EINPROGRESS: Suspend already in progress.559559- * * -ENOSYS: CONFIG_PM not enabled.560548 */561561-static inline int pm_runtime_put(struct device *dev)549549+static inline void pm_runtime_put(struct device *dev)562550{563563- return __pm_runtime_idle(dev, RPM_GET_PUT | RPM_ASYNC);551551+ __pm_runtime_idle(dev, RPM_GET_PUT | RPM_ASYNC);564552}565553566554/**
+12
include/linux/rseq.h
···146146 t->rseq = current->rseq;147147}148148149149+/*150150+ * Value returned by getauxval(AT_RSEQ_ALIGN) and expected by rseq151151+ * registration. This is the active rseq area size rounded up to the next152152+ * power of 2, which guarantees that the rseq structure will always be153153+ * aligned to the smallest power of two large enough to contain it, even154154+ * as it grows.155155+ */156156+static inline unsigned int rseq_alloc_align(void)157157+{158158+ return 1U << get_count_order(offsetof(struct rseq, end));159159+}160160+149161#else /* CONFIG_RSEQ */150162static inline void rseq_handle_slowpath(struct pt_regs *regs) { }151163static inline void rseq_signal_deliver(struct ksignal *ksig, struct pt_regs *regs) { }
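Worked through with the uapi change later in this digest, which adds a reserved byte to bump the rseq feature size from 32 to 33: offsetof(struct rseq, end) becomes 33, get_count_order(33) is 6, and rseq_alloc_align() therefore returns 64, i.e. a 33-byte rseq area must be allocated with 64-byte alignment, while the legacy 32-byte/32-aligned registration continues to work.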
···517517DEFINE_FREE(kfree, void *, if (!IS_ERR_OR_NULL(_T)) kfree(_T))518518DEFINE_FREE(kfree_sensitive, void *, if (_T) kfree_sensitive(_T))519519520520-/**521521- * ksize - Report actual allocation size of associated object522522- *523523- * @objp: Pointer returned from a prior kmalloc()-family allocation.524524- *525525- * This should not be used for writing beyond the originally requested526526- * allocation size. Either use krealloc() or round up the allocation size527527- * with kmalloc_size_roundup() prior to allocation. If this is used to528528- * access beyond the originally requested allocation size, UBSAN_BOUNDS529529- * and/or FORTIFY_SOURCE may trip, since they only know about the530530- * originally allocated size via the __alloc_size attribute.531531- */532520size_t ksize(const void *objp);533521534522#ifdef CONFIG_PRINTK
+3
include/linux/tnum.h
···131131 return !(tnum_subreg(a)).mask;132132}133133134134+/* Returns the smallest member of t larger than z */135135+u64 tnum_step(struct tnum t, u64 z);136136+134137#endif /* _LINUX_TNUM_H */
+11-2
include/net/af_vsock.h
···276276 return vsock_net_mode(sock_net(sk_vsock(vsk))) == VSOCK_NET_MODE_GLOBAL;277277}278278279279-static inline void vsock_net_set_child_mode(struct net *net,279279+static inline bool vsock_net_set_child_mode(struct net *net,280280 enum vsock_net_mode mode)281281{282282- WRITE_ONCE(net->vsock.child_ns_mode, mode);282282+ int new_locked = mode + 1;283283+ int old_locked = 0; /* unlocked */284284+285285+ if (try_cmpxchg(&net->vsock.child_ns_mode_locked,286286+ &old_locked, new_locked)) {287287+ WRITE_ONCE(net->vsock.child_ns_mode, mode);288288+ return true;289289+ }290290+291291+ return old_locked == new_locked;283292}284293285294static inline enum vsock_net_mode vsock_net_child_mode(struct net *net)
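Encoding the mode as mode + 1 lets 0 stand for 'never written': the first try_cmpxchg() locks the mode in, a repeated write of the same mode falls through to old_locked == new_locked and still returns true, and only a conflicting mode returns false. A caller, such as the sysctl write handler (a hypothetical fragment; the handler is not part of this hunk), would map that to -EBUSY:

/* hypothetical write-handler fragment */
if (!vsock_net_set_child_mode(net, mode))
	return -EBUSY;	/* already locked to a different mode */
return 0;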
···181181 *182182 * It needs to be called before the RDMA identifier is bound183183 * to an device, which mean it should be called before184184- * rdma_bind_addr(), rdma_bind_addr() and rdma_listen().184184+ * rdma_bind_addr(), rdma_resolve_addr() and rdma_listen().185185 */186186int rdma_restrict_node_type(struct rdma_cm_id *id, u8 node_type);187187
···440440441441 TP_fast_assign(442442 __entry->mm_id = mm_ptr_to_hash(mm);443443- __entry->curr = !!(current->mm == mm);443443+ /*444444+ * curr is true if the mm matches the current task's mm_struct.445445+ * Since kthreads (PF_KTHREAD) have no mm_struct of their own446446+ * but can borrow one via kthread_use_mm(), we must filter them447447+ * out to avoid incorrectly attributing the RSS update to them.448448+ */449449+ __entry->curr = current->mm == mm && !(current->flags & PF_KTHREAD);444450 __entry->member = member;445451 __entry->size = (percpu_counter_sum_positive(&mm->rss_stat[member])446452 << PAGE_SHIFT);
+6-6
include/uapi/drm/drm_fourcc.h
···401401 * implementation can multiply the values by 2^6=64. For that reason the padding402402 * must only contain zeros.403403 * index 0 = Y plane, [15:0] z:Y [6:10] little endian404404- * index 1 = Cr plane, [15:0] z:Cr [6:10] little endian405405- * index 2 = Cb plane, [15:0] z:Cb [6:10] little endian404404+ * index 1 = Cb plane, [15:0] z:Cb [6:10] little endian405405+ * index 2 = Cr plane, [15:0] z:Cr [6:10] little endian406406 */407407#define DRM_FORMAT_S010 fourcc_code('S', '0', '1', '0') /* 2x2 subsampled Cb (1) and Cr (2) planes 10 bits per channel */408408#define DRM_FORMAT_S210 fourcc_code('S', '2', '1', '0') /* 2x1 subsampled Cb (1) and Cr (2) planes 10 bits per channel */···414414 * implementation can multiply the values by 2^4=16. For that reason the padding415415 * must only contain zeros.416416 * index 0 = Y plane, [15:0] z:Y [4:12] little endian417417- * index 1 = Cr plane, [15:0] z:Cr [4:12] little endian418418- * index 2 = Cb plane, [15:0] z:Cb [4:12] little endian417417+ * index 1 = Cb plane, [15:0] z:Cb [4:12] little endian418418+ * index 2 = Cr plane, [15:0] z:Cr [4:12] little endian419419 */420420#define DRM_FORMAT_S012 fourcc_code('S', '0', '1', '2') /* 2x2 subsampled Cb (1) and Cr (2) planes 12 bits per channel */421421#define DRM_FORMAT_S212 fourcc_code('S', '2', '1', '2') /* 2x1 subsampled Cb (1) and Cr (2) planes 12 bits per channel */···424424/*425425 * 3 plane YCbCr426426 * index 0 = Y plane, [15:0] Y little endian427427- * index 1 = Cr plane, [15:0] Cr little endian428428- * index 2 = Cb plane, [15:0] Cb little endian427427+ * index 1 = Cb plane, [15:0] Cb little endian428428+ * index 2 = Cr plane, [15:0] Cr little endian429429 */430430#define DRM_FORMAT_S016 fourcc_code('S', '0', '1', '6') /* 2x2 subsampled Cb (1) and Cr (2) planes 16 bits per channel */431431#define DRM_FORMAT_S216 fourcc_code('S', '2', '1', '6') /* 2x1 subsampled Cb (1) and Cr (2) planes 16 bits per channel */
+1-1
include/uapi/linux/pci_regs.h
···712712#define PCI_EXP_LNKCTL2_HASD 0x0020 /* HW Autonomous Speed Disable */713713#define PCI_EXP_LNKSTA2 0x32 /* Link Status 2 */714714#define PCI_EXP_LNKSTA2_FLIT 0x0400 /* Flit Mode Status */715715-#define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2 0x32 /* end of v2 EPs w/ link */715715+#define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2 0x34 /* end of v2 EPs w/ link */716716#define PCI_EXP_SLTCAP2 0x34 /* Slot Capabilities 2 */717717#define PCI_EXP_SLTCAP2_IBPD 0x00000001 /* In-band PD Disable Supported */718718#define PCI_EXP_SLTCTL2 0x38 /* Slot Control 2 */
+22-4
include/uapi/linux/rseq.h
···8787};88888989/*9090- * struct rseq is aligned on 4 * 8 bytes to ensure it is always9191- * contained within a single cache-line.9090+ * The original size and alignment of the allocation for struct rseq are9191+ * both 32 bytes.9292 *9393- * A single struct rseq per thread is allowed.9393+ * The allocation size needs to be greater than or equal to9494+ * max(getauxval(AT_RSEQ_FEATURE_SIZE), 32), and the allocation needs to9595+ * be aligned on max(getauxval(AT_RSEQ_ALIGN), 32).9696+ *9797+ * As an alternative, userspace is allowed to use both the original size9898+ * and alignment of 32 bytes for backward compatibility.9999+ *100100+ * A single active struct rseq registration per thread is allowed.94101 */95102struct rseq {96103 /*···188181 struct rseq_slice_ctrl slice_ctrl;189182190183 /*184184+ * Before rseq became extensible, its original size was 32 bytes even185185+ * though the active rseq area was only 20 bytes.186186+ * Exposing a 32-byte feature size would make life needlessly painful187187+ * for userspace. Therefore, add a reserved byte after byte 32188188+ * to bump the rseq feature size from 32 to 33.189189+ * The next field to be added to the rseq area will be larger190190+ * than one byte, and will replace this reserved byte.191191+ */192192+ __u8 __reserved;193193+194194+ /*191195 * Flexible array member at end of structure, after last feature field.192196 */193197 char end[];194194-} __attribute__((aligned(4 * sizeof(__u64))));198198+} __attribute__((aligned(32)));195199196200#endif /* _UAPI_LINUX_RSEQ_H */
+1-1
init/Kconfig
···153153config CC_HAS_BROKEN_COUNTED_BY_REF154154 bool155155 # https://github.com/llvm/llvm-project/issues/182575156156- default y if CC_IS_CLANG && CLANG_VERSION < 220000156156+ default y if CC_IS_CLANG && CLANG_VERSION < 220100157157158158config CC_HAS_MULTIDIMENSIONAL_NONSTRING159159 def_bool $(success,echo 'char tag[][4] __attribute__((__nonstring__)) = { };' | $(CC) $(CLANG_FLAGS) -x c - -c -o /dev/null -Werror)
···269269{270270 return TNUM(swab64(a.value), swab64(a.mask));271271}272272+273273+/* Given tnum t, and a number z such that tmin <= z < tmax, where tmin274274+ * is the smallest member of t (= t.value) and tmax is the largest275275+ * member of t (= t.value | t.mask), returns the smallest member of t276276+ * larger than z.277277+ *278278+ * For example,279279+ * t = x11100x0280280+ * z = 11110001 (241)281281+ * result = 11110010 (242)282282+ *283283+ * Note: if this function is called with z >= tmax, it just returns284284+ * early with tmax; if this function is called with z < tmin, the285285+ * algorithm already returns tmin.286286+ */287287+u64 tnum_step(struct tnum t, u64 z)288288+{289289+ u64 tmax, j, p, q, r, s, v, u, w, res;290290+ u8 k;291291+292292+ tmax = t.value | t.mask;293293+294294+ /* if z >= largest member of t, return largest member of t */295295+ if (z >= tmax)296296+ return tmax;297297+298298+ /* if z < smallest member of t, return smallest member of t */299299+ if (z < t.value)300300+ return t.value;301301+302302+ /* keep t's known bits, and match all unknown bits to z */303303+ j = t.value | (z & t.mask);304304+305305+ if (j > z) {306306+ p = ~z & t.value & ~t.mask;307307+ k = fls64(p); /* k is the most-significant 0-to-1 flip */308308+ q = U64_MAX << k;309309+ r = q & z; /* positions > k matched to z */310310+ s = ~q & t.value; /* positions <= k matched to t.value */311311+ v = r | s;312312+ res = v;313313+ } else {314314+ p = z & ~t.value & ~t.mask;315315+ k = fls64(p); /* k is the most-significant 1-to-0 flip */316316+ q = U64_MAX << k;317317+ r = q & t.mask & z; /* unknown positions > k, matched to z */318318+ s = q & ~t.mask; /* known positions > k, set to 1 */319319+ v = r | s;320320+ /* add 1 to unknown positions > k to make value greater than z */321321+ u = v + (1ULL << k);322322+ /* extract bits in unknown positions > k from u, rest from t.value */323323+ w = (u & t.mask) | t.value;324324+ res = w;325325+ }326326+ return res;327327+}
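Working the comment's own example through the else branch: t = x11100x0 is value = 0x70, mask = 0x82, and z = 241 = 0b11110001. j = 0x70 | (z & 0x82) = 240 < z, so the result must grow past z: p = z & ~value & ~mask = 1 makes bit 1 the most-significant 1-to-0 flip (k = 1); in the low byte, v = 0b11111100 copies z's unknown bit above k and forces every known bit above k to 1, so that u = v + 2 = 0b11111110 carries cleanly across them; finally w = (u & mask) | value = 130 | 112 = 0b11110010 = 242, matching the comment.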
+30
kernel/bpf/verifier.c
···2379237923802380static void __update_reg64_bounds(struct bpf_reg_state *reg)23812381{23822382+ u64 tnum_next, tmax;23832383+ bool umin_in_tnum;23842384+23822385 /* min signed is max(sign bit) | min(other bits) */23832386 reg->smin_value = max_t(s64, reg->smin_value,23842387 reg->var_off.value | (reg->var_off.mask & S64_MIN));···23912388 reg->umin_value = max(reg->umin_value, reg->var_off.value);23922389 reg->umax_value = min(reg->umax_value,23932390 reg->var_off.value | reg->var_off.mask);23912391+23922392+ /* Check if u64 and tnum overlap in a single value */23932393+ tnum_next = tnum_step(reg->var_off, reg->umin_value);23942394+ umin_in_tnum = (reg->umin_value & ~reg->var_off.mask) == reg->var_off.value;23952395+ tmax = reg->var_off.value | reg->var_off.mask;23962396+ if (umin_in_tnum && tnum_next > reg->umax_value) {23972397+ /* The u64 range and the tnum only overlap in umin.23982398+ * u64: ---[xxxxxx]-----23992399+ * tnum: --xx----------x-24002400+ */24012401+ ___mark_reg_known(reg, reg->umin_value);24022402+ } else if (!umin_in_tnum && tnum_next == tmax) {24032403+ /* The u64 range and the tnum only overlap in the maximum value24042404+ * represented by the tnum, called tmax.24052405+ * u64: ---[xxxxxx]-----24062406+ * tnum: xx-----x--------24072407+ */24082408+ ___mark_reg_known(reg, tmax);24092409+ } else if (!umin_in_tnum && tnum_next <= reg->umax_value &&24102410+ tnum_step(reg->var_off, tnum_next) > reg->umax_value) {24112411+ /* The u64 range and the tnum only overlap in between umin24122412+ * (excluded) and umax.24132413+ * u64: ---[xxxxxx]-----24142414+ * tnum: xx----x-------x-24152415+ */24162416+ ___mark_reg_known(reg, tnum_next);24172417+ }23942418}2395241923962420static void __update_reg_bounds(struct bpf_reg_state *reg)
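A concrete run of the first new branch, with assumed register state: let var_off have value 0b0100 and mask 0b1000 (members {4, 12}) while the u64 range is [4, 9]. umin = 4 matches the tnum's known bits, so umin_in_tnum holds, and tnum_step() from 4 returns the next member, 12, which is above umax = 9; the only value satisfying both constraints is 4, and the register collapses to a known constant. The second branch is the mirror image at the top end (the same tnum against range [5, 12] collapses to 12), and the third catches a single overlap strictly inside the range.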
-1
kernel/configs/debug.config
···2929# CONFIG_UBSAN_ALIGNMENT is not set3030# CONFIG_UBSAN_DIV_ZERO is not set3131# CONFIG_UBSAN_TRAP is not set3232-# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set3332CONFIG_DEBUG_FS=y3433CONFIG_DEBUG_FS_ALLOW_ALL=y3534CONFIG_DEBUG_IRQFLAGS=y
+1-1
kernel/dma/direct.h
···85858686 if (is_swiotlb_force_bounce(dev)) {8787 if (attrs & DMA_ATTR_MMIO)8888- goto err_overflow;8888+ return DMA_MAPPING_ERROR;89899090 return swiotlb_map(dev, phys, size, dir, attrs);9191 }
+64-23
kernel/events/core.c
···41384138 if (*perf_event_fasync(event))41394139 event->pending_kill = POLL_ERR;4140414041414141- perf_event_wakeup(event);41414141+ event->pending_wakeup = 1;41424142+ irq_work_queue(&event->pending_irq);41424143 } else {41434144 struct perf_cpu_pmu_context *cpc = this_cpc(event->pmu_ctx->pmu);41444145···74657464 ret = perf_mmap_aux(vma, event, nr_pages);74667465 if (ret)74677466 return ret;74677467+74687468+ /*74697469+ * Since pinned accounting is per vm we cannot allow fork() to copy our74707470+ * vma.74717471+ */74727472+ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);74737473+ vma->vm_ops = &perf_mmap_vmops;74747474+74757475+ mapped = get_mapped(event, event_mapped);74767476+ if (mapped)74777477+ mapped(event, vma->vm_mm);74787478+74797479+ /*74807480+ * Try to map it into the page table. On fail, invoke74817481+ * perf_mmap_close() to undo the above, as the callsite expects74827482+ * full cleanup in this case and therefore does not invoke74837483+ * vmops::close().74847484+ */74857485+ ret = map_range(event->rb, vma);74867486+ if (ret)74877487+ perf_mmap_close(vma);74687488 }74697469-74707470- /*74717471- * Since pinned accounting is per vm we cannot allow fork() to copy our74727472- * vma.74737473- */74747474- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);74757475- vma->vm_ops = &perf_mmap_vmops;74767476-74777477- mapped = get_mapped(event, event_mapped);74787478- if (mapped)74797479- mapped(event, vma->vm_mm);74807480-74817481- /*74827482- * Try to map it into the page table. On fail, invoke74837483- * perf_mmap_close() to undo the above, as the callsite expects74847484- * full cleanup in this case and therefore does not invoke74857485- * vmops::close().74867486- */74877487- ret = map_range(event->rb, vma);74887488- if (ret)74897489- perf_mmap_close(vma);7490748974917490 return ret;74927491}···1077710776 struct perf_sample_data *data,1077810777 struct pt_regs *regs)1077910778{1077910779+ /*1078010780+ * Entry point from hardware PMI, interrupts should be disabled here.1078110781+ * This serializes us against perf_event_remove_from_context() in1078210782+ * things like perf_event_release_kernel().1078310783+ */1078410784+ lockdep_assert_irqs_disabled();1078510785+1078010786 return __perf_event_overflow(event, 1, data, regs);1078110787}1078210788···1086010852{1086110853 struct hw_perf_event *hwc = &event->hw;10862108541085510855+ /*1085610856+ * This is:1085710857+ * - software preempt1085810858+ * - tracepoint preempt1085910859+ * - tp_target_task irq (ctx->lock)1086010860+ * - uprobes preempt/irq1086110861+ * - kprobes preempt/irq1086210862+ * - hw_breakpoint irq1086310863+ *1086410864+ * Any of these are sufficient to hold off RCU and thus ensure @event1086510865+ * exists.1086610866+ */1086710867+ lockdep_assert_preemption_disabled();1086310868 local64_add(nr, &event->count);10864108691086510870 if (!regs)1086610871 return;10867108721086810873 if (!is_sampling_event(event))1087410874+ return;1087510875+1087610876+ /*1087710877+ * Serialize against event_function_call() IPIs like normal overflow1087810878+ * event handling. 
Specifically, must not allow1087910879+ * perf_event_release_kernel() -> perf_remove_from_context() to make1088010880+ * progress and 'release' the event from under us.1088110881+ */1088210882+ guard(irqsave)();1088310883+ if (event->state != PERF_EVENT_STATE_ACTIVE)1086910884 return;10870108851087110886 if ((event->attr.sample_type & PERF_SAMPLE_PERIOD) && !event->attr.freq) {···1138911358 struct perf_sample_data data;1139011359 struct perf_event *event;11391113601136111361+ /*1136211362+ * Per being a tracepoint, this runs with preemption disabled.1136311363+ */1136411364+ lockdep_assert_preemption_disabled();1136511365+1139211366 struct perf_raw_record raw = {1139311367 .frag = {1139411368 .size = entry_size,···1172511689{1172611690 struct perf_sample_data sample;1172711691 struct pt_regs *regs = data;1169211692+1169311693+ /*1169411694+ * Exception context, will have interrupts disabled.1169511695+ */1169611696+ lockdep_assert_irqs_disabled();11728116971172911698 perf_sample_data_init(&sample, bp->attr.bp_addr, 0);1173011699···12195121541219612155 if (regs && !perf_exclude_event(event, regs)) {1219712156 if (!(event->attr.exclude_idle && is_idle_task(current)))1219812198- if (__perf_event_overflow(event, 1, &data, regs))1215712157+ if (perf_event_overflow(event, &data, regs))1219912158 ret = HRTIMER_NORESTART;1220012159 }1220112160
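guard(irqsave)() is the <linux/cleanup.h> scope-guard form of local_irq_save()/local_irq_restore() (an irqsave guard is defined via DEFINE_LOCK_GUARD_0), so the ACTIVE-state check and everything after it run with interrupts disabled, and the restore is emitted automatically on every exit path. The open-coded equivalent of the tail of the function would be roughly:

unsigned long flags;

local_irq_save(flags);
if (event->state != PERF_EVENT_STATE_ACTIVE) {
	local_irq_restore(flags);
	return;
}
/* ... the rest of the overflow handling ... */
local_irq_restore(flags);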
+1-1
kernel/fork.c
···30853085 return 0;3086308630873087 /* don't need lock here; in the worst case we'll do useless copy */30883088- if (fs->users == 1)30883088+ if (!(unshare_flags & CLONE_NEWNS) && fs->users == 1)30893089 return 0;3090309030913091 *new_fsp = copy_fs_struct(fs);
+1-1
kernel/kcsan/kcsan_test.c
···168168 if (!report_available())169169 return false;170170171171- expect = kmalloc_obj(observed.lines);171171+ expect = (typeof(expect))kmalloc_obj(observed.lines);172172 if (WARN_ON(!expect))173173 return false;174174
+25-16
kernel/liveupdate/luo_file.c
···134134 * state that is not preserved. Set by the handler's .preserve()135135 * callback, and must be freed in the handler's .unpreserve()136136 * callback.137137- * @retrieved: A flag indicating whether a user/kernel in the new kernel has137137+ * @retrieve_status: Status code indicating whether a user/kernel in the new kernel has138138 * successfully called retrieve() on this file. This prevents139139- * multiple retrieval attempts.139139+ * multiple retrieval attempts. A value of 0 means a retrieve()140140+ * has not been attempted, a positive value means the retrieve()141141+ * was successful, and a negative value means the retrieve()142142+ * failed, and the value is the error code of the call.140143 * @mutex: A mutex that protects the fields of this specific instance141144 * (e.g., @retrieved, @file), ensuring that operations like142145 * retrieving or finishing a file are atomic.···164161 struct file *file;165162 u64 serialized_data;166163 void *private_data;167167- bool retrieved;164164+ int retrieve_status;168165 struct mutex mutex;169166 struct list_head list;170167 u64 token;···301298 luo_file->file = file;302299 luo_file->fh = fh;303300 luo_file->token = token;304304- luo_file->retrieved = false;305301 mutex_init(&luo_file->mutex);306302307303 args.handler = fh;···579577 return -ENOENT;580578581579 guard(mutex)(&luo_file->mutex);582582- if (luo_file->retrieved) {580580+ if (luo_file->retrieve_status < 0) {581581+ /* Retrieve was attempted and it failed. Return the error code. */582582+ return luo_file->retrieve_status;583583+ }584584+585585+ if (luo_file->retrieve_status > 0) {583586 /*584587 * Someone is asking for this file again, so get a reference585588 * for them.···597590 args.handler = luo_file->fh;598591 args.serialized_data = luo_file->serialized_data;599592 err = luo_file->fh->ops->retrieve(&args);600600- if (!err) {601601- luo_file->file = args.file;602602-603603- /* Get reference so we can keep this file in LUO until finish */604604- get_file(luo_file->file);605605- *filep = luo_file->file;606606- luo_file->retrieved = true;593593+ if (err) {594594+ /* Keep the error code for later use. */595595+ luo_file->retrieve_status = err;596596+ return err;607597 }608598609609- return err;599599+ luo_file->file = args.file;600600+ /* Get reference so we can keep this file in LUO until finish */601601+ get_file(luo_file->file);602602+ *filep = luo_file->file;603603+ luo_file->retrieve_status = 1;604604+605605+ return 0;610606}611607612608static int luo_file_can_finish_one(struct luo_file_set *file_set,···625615 args.handler = luo_file->fh;626616 args.file = luo_file->file;627617 args.serialized_data = luo_file->serialized_data;628628- args.retrieved = luo_file->retrieved;618618+ args.retrieve_status = luo_file->retrieve_status;629619 can_finish = luo_file->fh->ops->can_finish(&args);630620 }631621···642632 args.handler = luo_file->fh;643633 args.file = luo_file->file;644634 args.serialized_data = luo_file->serialized_data;645645- args.retrieved = luo_file->retrieved;635635+ args.retrieve_status = luo_file->retrieve_status;646636647637 luo_file->fh->ops->finish(&args);648638 luo_flb_file_finish(luo_file->fh);···798788 luo_file->file = NULL;799789 luo_file->serialized_data = file_ser[i].data;800790 luo_file->token = file_ser[i].token;801801- luo_file->retrieved = false;802791 mutex_init(&luo_file->mutex);803792 list_add_tail(&luo_file->list, &file_set->files_list);804793 }
+5-3
kernel/rseq.c
···8080#include <linux/syscalls.h>8181#include <linux/uaccess.h>8282#include <linux/types.h>8383+#include <linux/rseq.h>8384#include <asm/ptrace.h>84858586#define CREATE_TRACE_POINTS···450449 * auxiliary vector AT_RSEQ_ALIGN. If rseq_len is the original rseq451450 * size, the required alignment is the original struct rseq alignment.452451 *453453- * In order to be valid, rseq_len is either the original rseq size, or454454- * large enough to contain all supported fields, as communicated to452452+ * The rseq_len is required to be greater or equal to the original rseq453453+ * size. In order to be valid, rseq_len is either the original rseq size,454454+ * or large enough to contain all supported fields, as communicated to455455 * user-space through the ELF auxiliary vector AT_RSEQ_FEATURE_SIZE.456456 */457457 if (rseq_len < ORIG_RSEQ_SIZE ||458458 (rseq_len == ORIG_RSEQ_SIZE && !IS_ALIGNED((unsigned long)rseq, ORIG_RSEQ_SIZE)) ||459459- (rseq_len != ORIG_RSEQ_SIZE && (!IS_ALIGNED((unsigned long)rseq, __alignof__(*rseq)) ||459459+ (rseq_len != ORIG_RSEQ_SIZE && (!IS_ALIGNED((unsigned long)rseq, rseq_alloc_align()) ||460460 rseq_len < offsetof(struct rseq, end))))461461 return -EINVAL;462462 if (!access_ok(rseq, rseq_len))
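The reworded comment plus the switch to rseq_alloc_align() amount to a three-way rule on rseq_len. A compilable sketch of just that predicate, where orig_size, alloc_align and end_off stand in for ORIG_RSEQ_SIZE, rseq_alloc_align() and offsetof(struct rseq, end):

#include <stdbool.h>
#include <stdint.h>

static bool rseq_register_len_ok(uintptr_t addr, uint32_t len,
				 uint32_t orig_size, uint32_t alloc_align,
				 uint32_t end_off)
{
	if (len < orig_size)
		return false;			/* always too small */
	if (len == orig_size)			/* original ABI */
		return (addr % orig_size) == 0;
	/* extended ABI: allocation alignment + room for all fields */
	return (addr % alloc_align) == 0 && len >= end_off;
}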
+1
kernel/sched/core.c
···68306830 /* SCX must consult the BPF scheduler to tell if rq is empty */68316831 if (!rq->nr_running && !scx_enabled()) {68326832 next = prev;68336833+ rq->next_class = &idle_sched_class;68336834 goto picked;68346835 }68356836 } else if (!preempt && prev_state) {
+2-2
kernel/sched/ext.c
···24602460 /* see kick_cpus_irq_workfn() */24612461 smp_store_release(&rq->scx.kick_sync, rq->scx.kick_sync + 1);2462246224632463- rq->next_class = &ext_sched_class;24632463+ rq_modified_begin(rq, &ext_sched_class);2464246424652465 rq_unpin_lock(rq, rf);24662466 balance_one(rq, prev);···24752475 * If @force_scx is true, always try to pick a SCHED_EXT task,24762476 * regardless of any higher-priority sched classes activity.24772477 */24782478- if (!force_scx && sched_class_above(rq->next_class, &ext_sched_class))24782478+ if (!force_scx && rq_modified_above(rq, &ext_sched_class))24792479 return RETRY_TASK;2480248024812481 keep_prev = rq->scx.flags & SCX_RQ_BAL_KEEP;
+112-38
kernel/sched/fair.c
···589589 return vruntime_cmp(a->deadline, "<", b->deadline);590590}591591592592+/*593593+ * Per avg_vruntime() below, cfs_rq::zero_vruntime is only slightly stale594594+ * and this value should be no more than two lag bounds. Which puts it in the595595+ * general order of:596596+ *597597+ * (slice + TICK_NSEC) << NICE_0_LOAD_SHIFT598598+ *599599+ * which is around 44 bits in size (on 64bit); that is 20 for600600+ * NICE_0_LOAD_SHIFT, another 20 for NSEC_PER_MSEC and then a handful for601601+ * however many msec the actual slice+tick ends up begin.602602+ *603603+ * (disregarding the actual divide-by-weight part makes for the worst case604604+ * weight of 2, which nicely cancels vs the fuzz in zero_vruntime not actually605605+ * being the zero-lag point).606606+ */592607static inline s64 entity_key(struct cfs_rq *cfs_rq, struct sched_entity *se)593608{594609 return vruntime_op(se->vruntime, "-", cfs_rq->zero_vruntime);···691676}692677693678static inline694694-void sum_w_vruntime_update(struct cfs_rq *cfs_rq, s64 delta)679679+void update_zero_vruntime(struct cfs_rq *cfs_rq, s64 delta)695680{696681 /*697697- * v' = v + d ==> sum_w_vruntime' = sum_runtime - d*sum_weight682682+ * v' = v + d ==> sum_w_vruntime' = sum_w_vruntime - d*sum_weight698683 */699684 cfs_rq->sum_w_vruntime -= cfs_rq->sum_weight * delta;685685+ cfs_rq->zero_vruntime += delta;700686}701687702688/*703703- * Specifically: avg_runtime() + 0 must result in entity_eligible() := true689689+ * Specifically: avg_vruntime() + 0 must result in entity_eligible() := true704690 * For this to be so, the result of this function must have a left bias.691691+ *692692+ * Called in:693693+ * - place_entity() -- before enqueue694694+ * - update_entity_lag() -- before dequeue695695+ * - entity_tick()696696+ *697697+ * This means it is one entry 'behind' but that puts it close enough to where698698+ * the bound on entity_key() is at most two lag bounds.705699 */706700u64 avg_vruntime(struct cfs_rq *cfs_rq)707701{708702 struct sched_entity *curr = cfs_rq->curr;709709- s64 avg = cfs_rq->sum_w_vruntime;710710- long load = cfs_rq->sum_weight;703703+ long weight = cfs_rq->sum_weight;704704+ s64 delta = 0;711705712712- if (curr && curr->on_rq) {713713- unsigned long weight = scale_load_down(curr->load.weight);706706+ if (curr && !curr->on_rq)707707+ curr = NULL;714708715715- avg += entity_key(cfs_rq, curr) * weight;716716- load += weight;717717- }709709+ if (weight) {710710+ s64 runtime = cfs_rq->sum_w_vruntime;718711719719- if (load) {712712+ if (curr) {713713+ unsigned long w = scale_load_down(curr->load.weight);714714+715715+ runtime += entity_key(cfs_rq, curr) * w;716716+ weight += w;717717+ }718718+720719 /* sign flips effective floor / ceiling */721721- if (avg < 0)722722- avg -= (load - 1);723723- avg = div_s64(avg, load);720720+ if (runtime < 0)721721+ runtime -= (weight - 1);722722+723723+ delta = div_s64(runtime, weight);724724+ } else if (curr) {725725+ /*726726+ * When there is but one element, it is the average.727727+ */728728+ delta = curr->vruntime - cfs_rq->zero_vruntime;724729 }725730726726- return cfs_rq->zero_vruntime + avg;731731+ update_zero_vruntime(cfs_rq, delta);732732+733733+ return cfs_rq->zero_vruntime;727734}735735+736736+static inline u64 cfs_rq_max_slice(struct cfs_rq *cfs_rq);728737729738/*730739 * lag_i = S - s_i = w_i * (V - v_i)···763724 * EEVDF gives the following limit for a steady state system:764725 *765726 * -r_max < lag < max(r_max, q)766766- *767767- * XXX could add max_slice to the augmented data to track 
this.768727 */769728static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se)770729{730730+ u64 max_slice = cfs_rq_max_slice(cfs_rq) + TICK_NSEC;771731 s64 vlag, limit;772732773733 WARN_ON_ONCE(!se->on_rq);774734775735 vlag = avg_vruntime(cfs_rq) - se->vruntime;776776- limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se);736736+ limit = calc_delta_fair(max_slice, se);777737778738 se->vlag = clamp(vlag, -limit, limit);779739}···815777 return vruntime_eligible(cfs_rq, se->vruntime);816778}817779818818-static void update_zero_vruntime(struct cfs_rq *cfs_rq)819819-{820820- u64 vruntime = avg_vruntime(cfs_rq);821821- s64 delta = vruntime_op(vruntime, "-", cfs_rq->zero_vruntime);822822-823823- sum_w_vruntime_update(cfs_rq, delta);824824-825825- cfs_rq->zero_vruntime = vruntime;826826-}827827-828780static inline u64 cfs_rq_min_slice(struct cfs_rq *cfs_rq)829781{830782 struct sched_entity *root = __pick_root_entity(cfs_rq);···828800 min_slice = min(min_slice, root->min_slice);829801830802 return min_slice;803803+}804804+805805+static inline u64 cfs_rq_max_slice(struct cfs_rq *cfs_rq)806806+{807807+ struct sched_entity *root = __pick_root_entity(cfs_rq);808808+ struct sched_entity *curr = cfs_rq->curr;809809+ u64 max_slice = 0ULL;810810+811811+ if (curr && curr->on_rq)812812+ max_slice = curr->slice;813813+814814+ if (root)815815+ max_slice = max(max_slice, root->max_slice);816816+817817+ return max_slice;831818}832819833820static inline bool __entity_less(struct rb_node *a, const struct rb_node *b)···869826 }870827}871828829829+static inline void __max_slice_update(struct sched_entity *se, struct rb_node *node)830830+{831831+ if (node) {832832+ struct sched_entity *rse = __node_2_se(node);833833+ if (rse->max_slice > se->max_slice)834834+ se->max_slice = rse->max_slice;835835+ }836836+}837837+872838/*873839 * se->min_vruntime = min(se->vruntime, {left,right}->min_vruntime)874840 */···885833{886834 u64 old_min_vruntime = se->min_vruntime;887835 u64 old_min_slice = se->min_slice;836836+ u64 old_max_slice = se->max_slice;888837 struct rb_node *node = &se->run_node;889838890839 se->min_vruntime = se->vruntime;···896843 __min_slice_update(se, node->rb_right);897844 __min_slice_update(se, node->rb_left);898845846846+ se->max_slice = se->slice;847847+ __max_slice_update(se, node->rb_right);848848+ __max_slice_update(se, node->rb_left);849849+899850 return se->min_vruntime == old_min_vruntime &&900900- se->min_slice == old_min_slice;851851+ se->min_slice == old_min_slice &&852852+ se->max_slice == old_max_slice;901853}902854903855RB_DECLARE_CALLBACKS(static, min_vruntime_cb, struct sched_entity,···914856static void __enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)915857{916858 sum_w_vruntime_add(cfs_rq, se);917917- update_zero_vruntime(cfs_rq);918859 se->min_vruntime = se->vruntime;919860 se->min_slice = se->slice;920861 rb_add_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,···925868 rb_erase_augmented_cached(&se->run_node, &cfs_rq->tasks_timeline,926869 &min_vruntime_cb);927870 sum_w_vruntime_sub(cfs_rq, se);928928- update_zero_vruntime(cfs_rq);929871}930872931873struct sched_entity *__pick_root_entity(struct cfs_rq *cfs_rq)···38463790 unsigned long weight)38473791{38483792 bool curr = cfs_rq->curr == se;37933793+ bool rel_vprot = false;37943794+ u64 vprot;3849379538503796 if (se->on_rq) {38513797 /* commit outstanding execution time */···38553797 update_entity_lag(cfs_rq, se);38563798 se->deadline -= se->vruntime;38573799 se->rel_deadline = 
1;38003800+ if (curr && protect_slice(se)) {38013801+ vprot = se->vprot - se->vruntime;38023802+ rel_vprot = true;38033803+ }38043804+38583805 cfs_rq->nr_queued--;38593806 if (!curr)38603807 __dequeue_entity(cfs_rq, se);···38753812 if (se->rel_deadline)38763813 se->deadline = div_s64(se->deadline * se->load.weight, weight);3877381438153815+ if (rel_vprot)38163816+ vprot = div_s64(vprot * se->load.weight, weight);38173817+38783818 update_load_set(&se->load, weight);3879381938803820 do {···38893823 enqueue_load_avg(cfs_rq, se);38903824 if (se->on_rq) {38913825 place_entity(cfs_rq, se, 0);38263826+ if (rel_vprot)38273827+ se->vprot = se->vruntime + vprot;38923828 update_load_add(&cfs_rq->load, se->load.weight);38933829 if (!curr)38943830 __enqueue_entity(cfs_rq, se);···54885420}5489542154905422static void54915491-set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)54235423+set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, bool first)54925424{54935425 clear_buddies(cfs_rq, se);54945426···55035435 __dequeue_entity(cfs_rq, se);55045436 update_load_avg(cfs_rq, se, UPDATE_TG);5505543755065506- set_protect_slice(cfs_rq, se);54385438+ if (first)54395439+ set_protect_slice(cfs_rq, se);55075440 }5508544155095442 update_stats_curr_start(cfs_rq, se);···55925523 */55935524 update_load_avg(cfs_rq, curr, UPDATE_TG);55945525 update_cfs_group(curr);55265526+55275527+ /*55285528+ * Pulls along cfs_rq::zero_vruntime.55295529+ */55305530+ avg_vruntime(cfs_rq);5595553155965532#ifdef CONFIG_SCHED_HRTICK55975533 /*···90228948 pse = parent_entity(pse);90238949 }90248950 if (se_depth >= pse_depth) {90259025- set_next_entity(cfs_rq_of(se), se);89518951+ set_next_entity(cfs_rq_of(se), se, true);90268952 se = parent_entity(se);90278953 }90288954 }9029895590308956 put_prev_entity(cfs_rq, pse);90319031- set_next_entity(cfs_rq, se);89578957+ set_next_entity(cfs_rq, se, true);9032895890338959 __set_next_task_fair(rq, p, true);90348960 }···1298212908 t0 = sched_clock_cpu(this_cpu);1298312909 __sched_balance_update_blocked_averages(this_rq);12984129101298512985- this_rq->next_class = &fair_sched_class;1291112911+ rq_modified_begin(this_rq, &fair_sched_class);1298612912 raw_spin_rq_unlock(this_rq);12987129131298812914 for_each_domain(this_cpu, sd) {···1304912975 pulled_task = 1;13050129761305112977 /* If a higher prio class was modified, restart the pick */1305213052- if (sched_class_above(this_rq->next_class, &fair_sched_class))1297812978+ if (rq_modified_above(this_rq, &fair_sched_class))1305312979 pulled_task = -1;13054129801305512981out:···1364213568 for_each_sched_entity(se) {1364313569 struct cfs_rq *cfs_rq = cfs_rq_of(se);13644135701364513645- set_next_entity(cfs_rq, se);1357113571+ set_next_entity(cfs_rq, se, first);1364613572 /* ensure bandwidth has been allocated on our new cfs_rq */1364713573 account_cfs_rq_runtime(cfs_rq, 0);1364813574 }
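Two pieces of arithmetic carry the fair.c hunk: the left-biased division in avg_vruntime() and the sum adjustment when zero_vruntime is pulled along. A plain-C sketch of both, using ordinary integers rather than the kernel's fixed-point types:

struct zv_sketch { long long sum_w_vruntime, sum_weight, zero_vruntime; };

/* Floor (not truncate) division, so "average + 0" stays eligible.
 * Caller guarantees sum_weight > 0, mirroring the check in avg_vruntime(). */
static long long avg_key(long long sum_w_vruntime, long long sum_weight)
{
	if (sum_w_vruntime < 0)
		sum_w_vruntime -= sum_weight - 1;
	return sum_w_vruntime / sum_weight;
}

/* v' = v + d  =>  sum_w_vruntime' = sum_w_vruntime - d * sum_weight,
 * since every per-entity key (v_i - v) shrinks by d. */
static void move_zero(struct zv_sketch *q, long long delta)
{
	q->sum_w_vruntime -= q->sum_weight * delta;
	q->zero_vruntime += delta;
}

Folding the update into avg_vruntime() itself is what keeps zero_vruntime within the two-lag-bound fuzz that the new entity_key() comment relies on.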
···630630631631config WARN_CONTEXT_ANALYSIS632632 bool "Compiler context-analysis warnings"633633- depends on CC_IS_CLANG && CLANG_VERSION >= 220000633633+ depends on CC_IS_CLANG && CLANG_VERSION >= 220100634634 # Branch profiling re-defines "if", which messes with the compiler's635635 # ability to analyze __cond_acquires(..), resulting in false positives.636636 depends on !TRACE_BRANCH_PROFILING···641641 and releasing user-definable "context locks".642642643643 Clang's name of the feature is "Thread Safety Analysis". Requires644644- Clang 22 or later.644644+ Clang 22.1.0 or later.645645646646 Produces warnings by default. Select CONFIG_WERROR if you wish to647647 turn these warnings into errors.···760760761761config DEBUG_OBJECTS762762 bool "Debug object operations"763763+ depends on PREEMPT_COUNT || !DEFERRED_STRUCT_PAGE_INIT763764 depends on DEBUG_KERNEL764765 help765766 If you say Y here, additional code will be inserted into the···17661765 every process, showing its current stack trace.17671766 It is also used by various kernel debugging features that require17681767 stack trace generation.17691769-17701770-config WARN_ALL_UNSEEDED_RANDOM17711771- bool "Warn for all uses of unseeded randomness"17721772- default n17731773- help17741774- Some parts of the kernel contain bugs relating to their use of17751775- cryptographically secure random numbers before it's actually possible17761776- to generate those numbers securely. This setting ensures that these17771777- flaws don't go unnoticed, by enabling a message, should this ever17781778- occur. This will allow people with obscure setups to know when things17791779- are going wrong, so that they might contact developers about fixing17801780- it.17811781-17821782- Unfortunately, on some models of some architectures getting17831783- a fully seeded CRNG is extremely difficult, and so this can17841784- result in dmesg getting spammed for a surprisingly long17851785- time. This is really bad from a security perspective, and17861786- so architecture maintainers really need to do what they can17871787- to get the CRNG seeded sooner after the system is booted.17881788- However, since users cannot do anything actionable to17891789- address this, by default this option is disabled.17901790-17911791- Say Y here if you want to receive warnings for all uses of17921792- unseeded randomness. This will be of use primarily for17931793- those developers interested in improving the security of17941794- Linux kernels running on their architecture (or17951795- subarchitecture).1796176817971769config DEBUG_KOBJECT17981770 bool "kobject debugging"
+18-1
lib/debugobjects.c
···398398399399 atomic_inc(&cpus_allocating);400400 while (pool_should_refill(&pool_global)) {401401+ gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;401402 HLIST_HEAD(head);402403403403- if (!kmem_alloc_batch(&head, obj_cache, __GFP_HIGH | __GFP_NOWARN))404404+ /*405405+ * Allow reclaim only in preemptible context and during406406+ * early boot. If not preemptible, the caller might hold407407+ * locks causing a deadlock in the allocator.408408+ *409409+ * If the reclaim flag is not set during early boot then410410+ * allocations, which happen before deferred page411411+ * initialization has completed, will fail.412412+ *413413+ * In preemptible context the flag is harmless and not a414414+ * performance issue as that's usually invoked from slow415415+ * path initialization context.416416+ */417417+ if (preemptible() || system_state < SYSTEM_SCHEDULING)418418+ gfp |= __GFP_KSWAPD_RECLAIM;419419+420420+ if (!kmem_alloc_batch(&head, obj_cache, gfp))404421 break;405422406423 guard(raw_spinlock_irqsave)(&pool_lock);
···1313#include <linux/hash.h>1414#include <linux/irq_work.h>1515#include <linux/jhash.h>1616+#include <linux/kasan-enabled.h>1617#include <linux/kcsan-checks.h>1718#include <linux/kfence.h>1819#include <linux/kmemleak.h>···918917 return;919918920919 /*920920+ * If KASAN hardware tags are enabled, disable KFENCE, because it921921+ * does not support MTE yet.922922+ */923923+ if (kasan_hw_tags_enabled()) {924924+ pr_info("disabled as KASAN HW tags are enabled\n");925925+ if (__kfence_pool) {926926+ memblock_free(__kfence_pool, KFENCE_POOL_SIZE);927927+ __kfence_pool = NULL;928928+ }929929+ kfence_sample_interval = 0;930930+ return;931931+ }932932+933933+ /*921934 * If the pool has already been initialized by arch, there is no need to922935 * re-allocate the memory pool.923936 */···1004989#ifdef CONFIG_CONTIG_ALLOC1005990 struct page *pages;100699110071007- pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL, first_online_node,10081008- NULL);992992+ pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL | __GFP_SKIP_KASAN,993993+ first_online_node, NULL);1009994 if (!pages)1010995 return -ENOMEM;10119961012997 __kfence_pool = page_to_virt(pages);10131013- pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node,10141014- NULL);998998+ pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL | __GFP_SKIP_KASAN,999999+ first_online_node, NULL);10151000 if (pages)10161001 kfence_metadata_init = page_to_virt(pages);10171002#else···10211006 return -EINVAL;10221007 }1023100810241024- __kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);10091009+ __kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE,10101010+ GFP_KERNEL | __GFP_SKIP_KASAN);10251011 if (!__kfence_pool)10261012 return -ENOMEM;1027101310281028- kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL);10141014+ kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE,10151015+ GFP_KERNEL | __GFP_SKIP_KASAN);10291016#endif1030101710311018 if (!kfence_metadata_init)
+6-1
mm/memfd_luo.c
···326326 struct memfd_luo_folio_ser *folios_ser;327327 struct memfd_luo_ser *ser;328328329329- if (args->retrieved)329329+ /*330330+ * If retrieve was successful, nothing to do. If it failed, retrieve()331331+ * already cleaned up everything it could. So nothing to do there332332+ * either. Only need to clean up when retrieve was not called.333333+ */334334+ if (args->retrieve_status)330335 return;331336332337 ser = phys_to_virt(args->serialized_data);
+5-1
mm/mm_init.c
···18961896 for_each_node(nid) {18971897 pg_data_t *pgdat;1898189818991899- if (!node_online(nid))18991899+ /*19001900+ * If an architecture has not allocated node data for19011901+ * this node, presume the node is memoryless or offline.19021902+ */19031903+ if (!NODE_DATA(nid))19001904 alloc_offline_node_data(nid);1901190519021906 pgdat = NODE_DATA(nid);
···49164916 goto response_unlock;49174917 }4918491849194919+ /* Check if Key Size is sufficient for the security level */49204920+ if (!l2cap_check_enc_key_size(conn->hcon, pchan)) {49214921+ result = L2CAP_CR_LE_BAD_KEY_SIZE;49224922+ chan = NULL;49234923+ goto response_unlock;49244924+ }49254925+49194926 /* Check for valid dynamic CID range */49204927 if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) {49214928 result = L2CAP_CR_LE_INVALID_SCID;···50585051 struct l2cap_chan *chan, *pchan;50595052 u16 mtu, mps;50605053 __le16 psm;50615061- u8 result, len = 0;50545054+ u8 result, rsp_len = 0;50625055 int i, num_scid;50635056 bool defer = false;5064505750655058 if (!enable_ecred)50665059 return -EINVAL;50605060+50615061+ memset(pdu, 0, sizeof(*pdu));5067506250685063 if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) {50695064 result = L2CAP_CR_LE_INVALID_PARAMS;···5074506550755066 cmd_len -= sizeof(*req);50765067 num_scid = cmd_len / sizeof(u16);50685068+50695069+ /* Always respond with the same number of scids as in the request */50705070+ rsp_len = cmd_len;5077507150785072 if (num_scid > L2CAP_ECRED_MAX_CID) {50795073 result = L2CAP_CR_LE_INVALID_PARAMS;···50875075 mps = __le16_to_cpu(req->mps);5088507650895077 if (mtu < L2CAP_ECRED_MIN_MTU || mps < L2CAP_ECRED_MIN_MPS) {50905090- result = L2CAP_CR_LE_UNACCEPT_PARAMS;50785078+ result = L2CAP_CR_LE_INVALID_PARAMS;50915079 goto response;50925080 }50935081···5107509551085096 BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);5109509751105110- memset(pdu, 0, sizeof(*pdu));51115111-51125098 /* Check if we have socket listening on psm */51135099 pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,51145100 &conn->hcon->dst, LE_LINK);···5119510951205110 if (!smp_sufficient_security(conn->hcon, pchan->sec_level,51215111 SMP_ALLOW_STK)) {51225122- result = L2CAP_CR_LE_AUTHENTICATION;51125112+ result = pchan->sec_level == BT_SECURITY_MEDIUM ?51135113+ L2CAP_CR_LE_ENCRYPTION : L2CAP_CR_LE_AUTHENTICATION;51145114+ goto unlock;51155115+ }51165116+51175117+ /* Check if the listening channel has set an output MTU then the51185118+ * requested MTU shall be less than or equal to that value.51195119+ */51205120+ if (pchan->omtu && mtu < pchan->omtu) {51215121+ result = L2CAP_CR_LE_UNACCEPT_PARAMS;51235122 goto unlock;51245123 }51255124···51405121 BT_DBG("scid[%d] 0x%4.4x", i, scid);5141512251425123 pdu->dcid[i] = 0x0000;51435143- len += sizeof(*pdu->dcid);5144512451455125 /* Check for valid dynamic CID range */51465126 if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) {···52065188 return 0;5207518952085190 l2cap_send_cmd(conn, cmd->ident, L2CAP_ECRED_CONN_RSP,52095209- sizeof(*pdu) + len, pdu);51915191+ sizeof(*pdu) + rsp_len, pdu);5210519252115193 return 0;52125194}···53285310 struct l2cap_ecred_reconf_req *req = (void *) data;53295311 struct l2cap_ecred_reconf_rsp rsp;53305312 u16 mtu, mps, result;53315331- struct l2cap_chan *chan;53135313+ struct l2cap_chan *chan[L2CAP_ECRED_MAX_CID] = {};53325314 int i, num_scid;5333531553345316 if (!enable_ecred)53355317 return -EINVAL;5336531853375337- if (cmd_len < sizeof(*req) || cmd_len - sizeof(*req) % sizeof(u16)) {53385338- result = L2CAP_CR_LE_INVALID_PARAMS;53195319+ if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) {53205320+ result = L2CAP_RECONF_INVALID_CID;53395321 goto respond;53405322 }53415323···53455327 BT_DBG("mtu %u mps %u", mtu, mps);5346532853475329 if (mtu < L2CAP_ECRED_MIN_MTU) {53485348- result = 
L2CAP_RECONF_INVALID_MTU;53305330+ result = L2CAP_RECONF_INVALID_PARAMS;53495331 goto respond;53505332 }5351533353525334 if (mps < L2CAP_ECRED_MIN_MPS) {53535353- result = L2CAP_RECONF_INVALID_MPS;53355335+ result = L2CAP_RECONF_INVALID_PARAMS;53545336 goto respond;53555337 }5356533853575339 cmd_len -= sizeof(*req);53585340 num_scid = cmd_len / sizeof(u16);53415341+53425342+ if (num_scid > L2CAP_ECRED_MAX_CID) {53435343+ result = L2CAP_RECONF_INVALID_PARAMS;53445344+ goto respond;53455345+ }53465346+53595347 result = L2CAP_RECONF_SUCCESS;5360534853495349+ /* Check if each SCID, MTU and MPS are valid */53615350 for (i = 0; i < num_scid; i++) {53625351 u16 scid;5363535253645353 scid = __le16_to_cpu(req->scid[i]);53655365- if (!scid)53665366- return -EPROTO;53675367-53685368- chan = __l2cap_get_chan_by_dcid(conn, scid);53695369- if (!chan)53705370- continue;53715371-53725372- /* If the MTU value is decreased for any of the included53735373- * channels, then the receiver shall disconnect all53745374- * included channels.53755375- */53765376- if (chan->omtu > mtu) {53775377- BT_ERR("chan %p decreased MTU %u -> %u", chan,53785378- chan->omtu, mtu);53795379- result = L2CAP_RECONF_INVALID_MTU;53545354+ if (!scid) {53555355+ result = L2CAP_RECONF_INVALID_CID;53565356+ goto respond;53805357 }5381535853825382- chan->omtu = mtu;53835383- chan->remote_mps = mps;53595359+ chan[i] = __l2cap_get_chan_by_dcid(conn, scid);53605360+ if (!chan[i]) {53615361+ result = L2CAP_RECONF_INVALID_CID;53625362+ goto respond;53635363+ }53645364+53655365+ /* The MTU field shall be greater than or equal to the greatest53665366+ * current MTU size of these channels.53675367+ */53685368+ if (chan[i]->omtu > mtu) {53695369+ BT_ERR("chan %p decreased MTU %u -> %u", chan[i],53705370+ chan[i]->omtu, mtu);53715371+ result = L2CAP_RECONF_INVALID_MTU;53725372+ goto respond;53735373+ }53745374+53755375+ /* If more than one channel is being configured, the MPS field53765376+ * shall be greater than or equal to the current MPS size of53775377+ * each of these channels. If only one channel is being53785378+ * configured, the MPS field may be less than the current MPS53795379+ * of that channel.53805380+ */53815381+ if (chan[i]->remote_mps >= mps && i) {53825382+ BT_ERR("chan %p decreased MPS %u -> %u", chan[i],53835383+ chan[i]->remote_mps, mps);53845384+ result = L2CAP_RECONF_INVALID_MPS;53855385+ goto respond;53865386+ }53875387+ }53885388+53895389+ /* Commit the new MTU and MPS values after checking they are valid */53905390+ for (i = 0; i < num_scid; i++) {53915391+ chan[i]->omtu = mtu;53925392+ chan[i]->remote_mps = mps;53845393 }5385539453865395respond:
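The reconfigure-request rewrite above is a classic validate-then-commit conversion: all SCIDs are resolved and checked against the new MTU/MPS before anything is written back, so a failing channel can no longer leave earlier ones half-reconfigured. A sketch of the pattern with illustrative types (the MPS test mirrors the hunk: only a single-channel request may lower MPS):

#include <errno.h>

struct ecred_chan { unsigned int omtu, remote_mps; };

static int ecred_reconf(struct ecred_chan **ch, int n,
			unsigned int mtu, unsigned int mps)
{
	int i;

	for (i = 0; i < n; i++) {
		if (!ch[i])
			return -ENOENT;		/* unknown SCID */
		if (ch[i]->omtu > mtu)
			return -EINVAL;		/* MTU must never shrink */
		if (i && ch[i]->remote_mps >= mps)
			return -EINVAL;		/* MPS shrink only if n == 1 */
	}
	for (i = 0; i < n; i++) {		/* all checks passed: commit */
		ch[i]->omtu = mtu;
		ch[i]->remote_mps = mps;
	}
	return 0;
}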
+12-4
net/bluetooth/l2cap_sock.c
···10291029 break;10301030 }1031103110321032- /* Setting is not supported as it's the remote side that10331033- * decides this.10341034- */10351035- err = -EPERM;10321032+ /* Only allow setting output MTU when not connected */10331033+ if (sk->sk_state == BT_CONNECTED) {10341034+ err = -EISCONN;10351035+ break;10361036+ }10371037+10381038+ err = copy_safe_from_sockptr(&mtu, sizeof(mtu), optval, optlen);10391039+ if (err)10401040+ break;10411041+10421042+ chan->omtu = mtu;10361043 break;1037104410381045 case BT_RCVMTU:···1823181618241817 skb_queue_purge(&sk->sk_receive_queue);18251818 skb_queue_purge(&sk->sk_write_queue);18191819+ skb_queue_purge(&sk->sk_error_queue);18261820}1827182118281822static void l2cap_skb_msg_name(struct sk_buff *skb, void *msg_name,
···48224822 * to -1 or to their cpu id, but not to our id.48234823 */48244824 if (READ_ONCE(txq->xmit_lock_owner) != cpu) {48254825+ bool is_list = false;48264826+48254827 if (dev_xmit_recursion())48264828 goto recursion_alert;48274829···48344832 HARD_TX_LOCK(dev, txq, cpu);4835483348364834 if (!netif_xmit_stopped(txq)) {48354835+ is_list = !!skb->next;48364836+48374837 dev_xmit_recursion_inc();48384838 skb = dev_hard_start_xmit(skb, dev, txq, &rc);48394839 dev_xmit_recursion_dec();48404840- if (dev_xmit_complete(rc)) {48414841- HARD_TX_UNLOCK(dev, txq);48424842- goto out;48434843- }48404840+48414841+ /* GSO segments a single SKB into48424842+ * a list of frames. TCP expects error48434843+ * to mean none of the data was sent.48444844+ */48454845+ if (is_list)48464846+ rc = NETDEV_TX_OK;48444847 }48454848 HARD_TX_UNLOCK(dev, txq);48494849+ if (!skb) /* xmit completed */48504850+ goto out;48514851+48464852 net_crit_ratelimited("Virtual device %s asks to queue packet!\n",48474853 dev->name);48544854+ /* NETDEV_TX_BUSY or queue was stopped */48554855+ if (!is_list)48564856+ rc = -ENETDOWN;48484857 } else {48494858 /* Recursion is detected! It is possible,48504859 * unfortunately···48634850recursion_alert:48644851 net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n",48654852 dev->name);48534853+ rc = -ENETDOWN;48664854 }48674855 }4868485648694869- rc = -ENETDOWN;48704857 rcu_read_unlock_bh();4871485848724859 dev_core_stats_tx_dropped_inc(dev);···5005499250064993static struct rps_dev_flow *50074994set_rps_cpu(struct net_device *dev, struct sk_buff *skb,50085008- struct rps_dev_flow *rflow, u16 next_cpu, u32 hash,50095009- u32 flow_id)49954995+ struct rps_dev_flow *rflow, u16 next_cpu, u32 hash)50104996{50114997 if (next_cpu < nr_cpu_ids) {50124998 u32 head;···50165004 struct rps_dev_flow *tmp_rflow;50175005 unsigned int tmp_cpu;50185006 u16 rxq_index;50075007+ u32 flow_id;50195008 int rc;5020500950215010 /* Should we steer this flow to a different hardware queue? */···50325019 if (!flow_table)50335020 goto out;5034502150225022+ flow_id = rfs_slot(hash, flow_table);50355023 tmp_rflow = &flow_table->flows[flow_id];50365024 tmp_cpu = READ_ONCE(tmp_rflow->cpu);50375025···50805066 struct rps_dev_flow_table *flow_table;50815067 struct rps_map *map;50825068 int cpu = -1;50835083- u32 flow_id;50845069 u32 tcpu;50855070 u32 hash;50865071···51265113 /* OK, now we know there is a match,51275114 * we can look at the local (per receive queue) flow table51285115 */51295129- flow_id = rfs_slot(hash, flow_table);51305130- rflow = &flow_table->flows[flow_id];51165116+ rflow = &flow_table->flows[rfs_slot(hash, flow_table)];51315117 tcpu = rflow->cpu;5132511851335119 /*···51455133 ((int)(READ_ONCE(per_cpu(softnet_data, tcpu).input_queue_head) -51465134 rflow->last_qtail)) >= 0)) {51475135 tcpu = next_cpu;51485148- rflow = set_rps_cpu(dev, skb, rflow, next_cpu, hash,51495149- flow_id);51365136+ rflow = set_rps_cpu(dev, skb, rflow, next_cpu, hash);51505137 }5151513851525139 if (tcpu < nr_cpu_ids && cpu_online(tcpu)) {
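The first dev.c hunk changes what error a direct transmit reports when the driver did not consume everything. The rationale in the new comment boils down to one rule, sketched here with hypothetical helper names: TCP treats an error as "none of the data was sent", so once part of a GSO segment list may already be on the wire, success is the only safe answer.

#include <errno.h>

static int direct_xmit_status(int drv_rc, bool consumed, bool was_list)
{
	if (consumed)
		return drv_rc;			/* driver took the whole skb/list */
	/* partial list: lie towards success; lone skb: honest -ENETDOWN */
	return was_list ? 0 /* NETDEV_TX_OK */ : -ENETDOWN;
}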
+18-5
net/core/skbuff.c
···5590559055915591static bool skb_may_tx_timestamp(struct sock *sk, bool tsonly)55925592{55935593- bool ret;55935593+ struct socket *sock;55945594+ struct file *file;55955595+ bool ret = false;5594559655955597 if (likely(tsonly || READ_ONCE(sock_net(sk)->core.sysctl_tstamp_allow_data)))55965598 return true;5597559955985598- read_lock_bh(&sk->sk_callback_lock);55995599- ret = sk->sk_socket && sk->sk_socket->file &&56005600- file_ns_capable(sk->sk_socket->file, &init_user_ns, CAP_NET_RAW);56015601- read_unlock_bh(&sk->sk_callback_lock);56005600+ /* The sk pointer remains valid as long as the skb is. The sk_socket and56015601+ * file pointer may become NULL if the socket is closed. Both structures56025602+ * (including file->cred) are RCU freed which means they can be accessed56035603+ * within a RCU read section.56045604+ */56055605+ rcu_read_lock();56065606+ sock = READ_ONCE(sk->sk_socket);56075607+ if (!sock)56085608+ goto out;56095609+ file = READ_ONCE(sock->file);56105610+ if (!file)56115611+ goto out;56125612+ ret = file_ns_capable(file, &init_user_ns, CAP_NET_RAW);56135613+out:56145614+ rcu_read_unlock();56025615 return ret;56035616}56045617
···48584858 */4859485948604860static enum skb_drop_reason tcp_sequence(const struct sock *sk,48614861- u32 seq, u32 end_seq)48614861+ u32 seq, u32 end_seq,48624862+ const struct tcphdr *th)48624863{48634864 const struct tcp_sock *tp = tcp_sk(sk);48654865+ u32 seq_limit;4864486648654867 if (before(end_seq, tp->rcv_wup))48664868 return SKB_DROP_REASON_TCP_OLD_SEQUENCE;4867486948684868- if (after(end_seq, tp->rcv_nxt + tcp_receive_window(tp))) {48694869- if (after(seq, tp->rcv_nxt + tcp_receive_window(tp)))48704870+ seq_limit = tp->rcv_nxt + tcp_receive_window(tp);48714871+ if (unlikely(after(end_seq, seq_limit))) {48724872+ /* Some stacks are known to handle FIN incorrectly; allow the48734873+ * FIN to extend beyond the window and check it in detail later.48744874+ */48754875+ if (!after(end_seq - th->fin, seq_limit))48764876+ return SKB_NOT_DROPPED_YET;48774877+48784878+ if (after(seq, seq_limit))48704879 return SKB_DROP_REASON_TCP_INVALID_SEQUENCE;4871488048724881 /* Only accept this packet if receive queue is empty. */···6388637963896380step1:63906381 /* Step 1: check sequence number */63916391- reason = tcp_sequence(sk, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);63826382+ reason = tcp_sequence(sk, TCP_SKB_CB(skb)->seq,63836383+ TCP_SKB_CB(skb)->end_seq, th);63926384 if (reason) {63936385 /* RFC793, page 37: "In all states except SYN-SENT, all reset63946386 * (RST) segments are validated by checking their SEQ-fields."
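The tcp_sequence() change hinges on one subtraction: a FIN consumes exactly one sequence number, so discounting th->fin from end_seq tolerates stacks that place the FIN one past the advertised window. A self-contained sketch of the check, including the wrap-safe comparison it depends on:

#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe "a is after b" on 32-bit sequence numbers. */
static bool seq_after(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}

static bool fits_receive_window(uint32_t end_seq, uint32_t seq_limit, bool fin)
{
	/* fin is 0 or 1; the data alone must fit the window even when
	 * the FIN octet itself lands one past it. */
	return !seq_after(end_seq - fin, seq_limit);
}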
···925925 * socket is created, wait for troubles.926926 */927927 child = inet_csk(sk)->icsk_af_ops->syn_recv_sock(sk, skb, req, NULL,928928- req, &own_req);928928+ req, &own_req, NULL);929929 if (!child)930930 goto listen_overflow;931931
+1-2
net/ipv4/udplite.c
···2020/* Designate sk as UDP-Lite socket */2121static int udplite_sk_init(struct sock *sk)2222{2323- udp_init_sock(sk);2423 pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, "2524 "please contact the netdev mailing list\n");2626- return 0;2525+ return udp_init_sock(sk);2726}28272928static int udplite_rcv(struct sk_buff *skb)
+42-56
net/ipv6/tcp_ipv6.c
···13121312 sizeof(struct inet6_skb_parm));13131313}1314131413151315+/* Called from tcp_v4_syn_recv_sock() for v6_mapped children. */13161316+static void tcp_v6_mapped_child_init(struct sock *newsk, const struct sock *sk)13171317+{13181318+ struct inet_sock *newinet = inet_sk(newsk);13191319+ struct ipv6_pinfo *newnp;13201320+13211321+ newinet->pinet6 = newnp = tcp_inet6_sk(newsk);13221322+ newinet->ipv6_fl_list = NULL;13231323+13241324+ memcpy(newnp, tcp_inet6_sk(sk), sizeof(struct ipv6_pinfo));13251325+13261326+ newnp->saddr = newsk->sk_v6_rcv_saddr;13271327+13281328+ inet_csk(newsk)->icsk_af_ops = &ipv6_mapped;13291329+ if (sk_is_mptcp(newsk))13301330+ mptcpv6_handle_mapped(newsk, true);13311331+ newsk->sk_backlog_rcv = tcp_v4_do_rcv;13321332+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)13331333+ tcp_sk(newsk)->af_specific = &tcp_sock_ipv6_mapped_specific;13341334+#endif13351335+13361336+ newnp->ipv6_mc_list = NULL;13371337+ newnp->ipv6_ac_list = NULL;13381338+ newnp->pktoptions = NULL;13391339+ newnp->opt = NULL;13401340+13411341+ /* tcp_v4_syn_recv_sock() has initialized newinet->mc_{index,ttl} */13421342+ newnp->mcast_oif = newinet->mc_index;13431343+ newnp->mcast_hops = newinet->mc_ttl;13441344+13451345+ newnp->rcv_flowinfo = 0;13461346+ if (inet6_test_bit(REPFLOW, sk))13471347+ newnp->flow_label = 0;13481348+}13491349+13151350static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,13161351 struct request_sock *req,13171352 struct dst_entry *dst,13181353 struct request_sock *req_unhash,13191319- bool *own_req)13541354+ bool *own_req,13551355+ void (*opt_child_init)(struct sock *newsk,13561356+ const struct sock *sk))13201357{13211358 const struct ipv6_pinfo *np = tcp_inet6_sk(sk);13221359 struct inet_request_sock *ireq;···13691332#endif13701333 struct flowi6 fl6;1371133413721372- if (skb->protocol == htons(ETH_P_IP)) {13731373- /*13741374- * v6 mapped13751375- */13761376-13771377- newsk = tcp_v4_syn_recv_sock(sk, skb, req, dst,13781378- req_unhash, own_req);13791379-13801380- if (!newsk)13811381- return NULL;13821382-13831383- newinet = inet_sk(newsk);13841384- newinet->pinet6 = tcp_inet6_sk(newsk);13851385- newinet->ipv6_fl_list = NULL;13861386-13871387- newnp = tcp_inet6_sk(newsk);13881388- newtp = tcp_sk(newsk);13891389-13901390- memcpy(newnp, np, sizeof(struct ipv6_pinfo));13911391-13921392- newnp->saddr = newsk->sk_v6_rcv_saddr;13931393-13941394- inet_csk(newsk)->icsk_af_ops = &ipv6_mapped;13951395- if (sk_is_mptcp(newsk))13961396- mptcpv6_handle_mapped(newsk, true);13971397- newsk->sk_backlog_rcv = tcp_v4_do_rcv;13981398-#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)13991399- newtp->af_specific = &tcp_sock_ipv6_mapped_specific;14001400-#endif14011401-14021402- newnp->ipv6_mc_list = NULL;14031403- newnp->ipv6_ac_list = NULL;14041404- newnp->pktoptions = NULL;14051405- newnp->opt = NULL;14061406- newnp->mcast_oif = inet_iif(skb);14071407- newnp->mcast_hops = ip_hdr(skb)->ttl;14081408- newnp->rcv_flowinfo = 0;14091409- if (inet6_test_bit(REPFLOW, sk))14101410- newnp->flow_label = 0;14111411-14121412- /*14131413- * No need to charge this sock to the relevant IPv6 refcnt debug socks count14141414- * here, tcp_create_openreq_child now does this for us, see the comment in14151415- * that function for the gory details. -acme14161416- */14171417-14181418- /* It is tricky place. 
Until this moment IPv4 tcp14191419- worked with IPv6 icsk.icsk_af_ops.14201420- Sync it now.14211421- */14221422- tcp_sync_mss(newsk, inet_csk(newsk)->icsk_pmtu_cookie);14231423-14241424- return newsk;14251425- }14261426-13351335+ if (skb->protocol == htons(ETH_P_IP))13361336+ return tcp_v4_syn_recv_sock(sk, skb, req, dst,13371337+ req_unhash, own_req,13381338+ tcp_v6_mapped_child_init);14271339 ireq = inet_rsk(req);1428134014291341 if (sk_acceptq_is_full(sk))
+1-2
net/ipv6/udplite.c
···16161717static int udplitev6_sk_init(struct sock *sk)1818{1919- udpv6_init_sock(sk);2019 pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, "2120 "please contact the netdev mailing list\n");2222- return 0;2121+ return udpv6_init_sock(sk);2322}24232524static int udplitev6_rcv(struct sk_buff *skb)
···628628 skb = txm->frag_skb;629629 }630630631631- if (WARN_ON(!skb_shinfo(skb)->nr_frags) ||631631+ if (WARN_ON_ONCE(!skb_shinfo(skb)->nr_frags) ||632632 WARN_ON_ONCE(!skb_frag_page(&skb_shinfo(skb)->frags[0]))) {633633 ret = -EINVAL;634634 goto out;···749749{750750 struct sock *sk = sock->sk;751751 struct kcm_sock *kcm = kcm_sk(sk);752752- struct sk_buff *skb = NULL, *head = NULL;752752+ struct sk_buff *skb = NULL, *head = NULL, *frag_prev = NULL;753753 size_t copy, copied = 0;754754 long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);755755 int eor = (sock->type == SOCK_DGRAM) ?···824824 else825825 skb->next = tskb;826826827827+ frag_prev = skb;827828 skb = tskb;828829 skb->ip_summed = CHECKSUM_UNNECESSARY;829830 continue;···933932934933out_error:935934 kcm_push(kcm);935935+936936+ /* When MAX_SKB_FRAGS was reached, a new skb was allocated and937937+ * linked into the frag_list before data copy. If the copy938938+ * subsequently failed, this skb has zero frags. Remove it from939939+ * the frag_list to prevent kcm_write_msgs from later hitting940940+ * WARN_ON(!skb_shinfo(skb)->nr_frags).941941+ */942942+ if (frag_prev && !skb_shinfo(skb)->nr_frags) {943943+ if (head == frag_prev)944944+ skb_shinfo(head)->frag_list = NULL;945945+ else946946+ frag_prev->next = NULL;947947+ kfree_skb(skb);948948+ /* Update skb as it may be saved in partial_message via goto */949949+ skb = frag_prev;950950+ }936951937952 if (sock->type == SOCK_SEQPACKET) {938953 /* Wrote some bytes before encountering an
+2
net/mac80211/link.c
···281281 struct ieee80211_bss_conf *old[IEEE80211_MLD_MAX_NUM_LINKS];282282 struct ieee80211_link_data *old_data[IEEE80211_MLD_MAX_NUM_LINKS];283283 bool use_deflink = old_links == 0; /* set for error case */284284+ bool non_sta = sdata->vif.type != NL80211_IFTYPE_STATION;284285285286 lockdep_assert_wiphy(sdata->local->hw.wiphy);286287···338337 link = links[link_id];339338 ieee80211_link_init(sdata, link_id, &link->data, &link->conf);340339 ieee80211_link_setup(&link->data);340340+ ieee80211_set_wmm_default(&link->data, true, non_sta);341341 }342342343343 if (new_links == 0)
+3
net/mac80211/mesh.c
···16351635 if (!mesh_matches_local(sdata, elems))16361636 goto free;1637163716381638+ if (!elems->mesh_chansw_params_ie)16391639+ goto free;16401640+16381641 ifmsh->chsw_ttl = elems->mesh_chansw_params_ie->mesh_ttl;16391642 if (!--ifmsh->chsw_ttl)16401643 fwd_csa = false;
+3
net/mac80211/mlme.c
···70857085 control = le16_to_cpu(prof->control);70867086 link_id = control & IEEE80211_MLE_STA_RECONF_CONTROL_LINK_ID;7087708770887088+ if (link_id >= IEEE80211_MLD_MAX_NUM_LINKS)70897089+ continue;70907090+70887091 removed_links |= BIT(link_id);7089709270907093 /* the MAC address should not be included, but handle it */
···796796797797 if (ext || (son->attr & OPEN)) {798798 BYTE_ALIGN(bs);799799- if (nf_h323_error_boundary(bs, len, 0))799799+ if (nf_h323_error_boundary(bs, 2, 0))800800 return H323_ERROR_BOUND;801801 len = get_len(bs);802802 if (nf_h323_error_boundary(bs, len, 0))
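The one-line H.323 fix reorders the boundary checks: at that point `len` still holds a stale value, so the 2 bytes of the length field must be validated before get_len() decodes it, and only then is the decoded length safe to check. A sketch of the corrected ordering with a hypothetical byte-granular API (the real get_len() follows PER encoding rules and nf_h323_error_boundary() also accounts for a bit offset):

static int decode_len_checked(const unsigned char *p, const unsigned char *end,
			      unsigned int *lenp)
{
	if (end - p < 2)
		return -1;			/* length field out of bounds */
	*lenp = (p[0] << 8) | p[1];		/* stand-in for get_len() */
	if ((unsigned int)(end - (p + 2)) < *lenp)
		return -1;			/* declared payload overruns buffer */
	return 0;
}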
+38-1
net/psp/psp_main.c
···166166{167167 struct udphdr *uh = udp_hdr(skb);168168 struct psphdr *psph = (struct psphdr *)(uh + 1);169169+ const struct sock *sk = skb->sk;169170170171 uh->dest = htons(PSP_DEFAULT_UDP_PORT);171171- uh->source = udp_flow_src_port(net, skb, 0, 0, false);172172+173173+ /* A bit of theory: Selection of the source port.174174+ *175175+ * We need some entropy, so that multiple flows use different176176+ * source ports for better RSS spreading at the receiver.177177+ *178178+ * We also need that all packets belonging to one TCP flow179179+ * use the same source port through their duration,180180+ * so that all these packets land in the same receive queue.181181+ *182182+ * udp_flow_src_port() is using sk_txhash, inherited from183183+ * skb_set_hash_from_sk() call in __tcp_transmit_skb().184184+ * This field is subject to reshuffling, thanks to185185+ * sk_rethink_txhash() calls in various TCP functions.186186+ *187187+ * Instead, use sk->sk_hash which is constant through188188+ * the whole flow duration.189189+ */190190+ if (likely(sk)) {191191+ u32 hash = sk->sk_hash;192192+ int min, max;193193+194194+ /* These operations are cheap, no need to cache the result195195+ * in another socket field.196196+ */197197+ inet_get_local_port_range(net, &min, &max);198198+ /* Since this is being sent on the wire obfuscate hash a bit199199+ * to minimize possibility that any useful information to an200200+ * attacker is leaked. Only upper 16 bits are relevant in the201201+ * computation for 16 bit port value because we use a202202+ * reciprocal divide.203203+ */204204+ hash ^= hash << 16;205205+ uh->source = htons((((u64)hash * (max - min)) >> 32) + min);206206+ } else {207207+ uh->source = udp_flow_src_port(net, skb, 0, 0, false);208208+ }172209 uh->check = 0;173210 uh->len = htons(udp_len);174211
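The port derivation added above is worth seeing in isolation: fold the low hash bits into the high bits (only the top 16 matter for a reciprocal divide), then scale into [min, max). A standalone version of the same arithmetic:

#include <stdint.h>

static uint16_t psp_src_port_sketch(uint32_t hash, uint32_t min, uint32_t max)
{
	hash ^= hash << 16;			/* obfuscate, keep entropy on top */
	return (uint16_t)((((uint64_t)hash * (max - min)) >> 32) + min);
}

Because sk->sk_hash is constant for the life of the flow, every segment of one TCP connection maps to the same source port, which is exactly the receive-queue stability property the comment asks for.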
+3
net/rds/connection.c
···455455 rcu_read_unlock();456456 }457457458458+ /* we do not hold the socket lock here but it is safe because459459+ * fan-out is disabled when calling conn_slots_available()460460+ */458461 if (conn->c_trans->conn_slots_available)459462 conn->c_trans->conn_slots_available(conn, false);460463}
+4-22
net/rds/tcp_listen.c
···5959static int6060rds_tcp_get_peer_sport(struct socket *sock)6161{6262- union {6363- struct sockaddr_storage storage;6464- struct sockaddr addr;6565- struct sockaddr_in sin;6666- struct sockaddr_in6 sin6;6767- } saddr;6868- int sport;6262+ struct sock *sk = sock->sk;69637070- if (kernel_getpeername(sock, &saddr.addr) >= 0) {7171- switch (saddr.addr.sa_family) {7272- case AF_INET:7373- sport = ntohs(saddr.sin.sin_port);7474- break;7575- case AF_INET6:7676- sport = ntohs(saddr.sin6.sin6_port);7777- break;7878- default:7979- sport = -1;8080- }8181- } else {8282- sport = -1;8383- }6464+ if (!sk)6565+ return -1;84668585- return sport;6767+ return ntohs(READ_ONCE(inet_sk(sk)->inet_dport));8668}87698870/* rds_tcp_accept_one_path(): if accepting on cp_index > 0, make sure the
+4-2
net/smc/af_smc.c
···124124 struct request_sock *req,125125 struct dst_entry *dst,126126 struct request_sock *req_unhash,127127- bool *own_req)127127+ bool *own_req,128128+ void (*opt_child_init)(struct sock *newsk,129129+ const struct sock *sk))128130{129131 struct smc_sock *smc;130132 struct sock *child;···144142145143 /* passthrough to original syn recv sock fct */146144 child = smc->ori_af_ops->syn_recv_sock(sk, skb, req, dst, req_unhash,147147- own_req);145145+ own_req, opt_child_init);148146 /* child must not inherit smc or its ops */149147 if (child) {150148 rcu_assign_sk_user_data(child, NULL);
···9090 *9191 * - /proc/sys/net/vsock/ns_mode (read-only) reports the current namespace's9292 * mode, which is set at namespace creation and immutable thereafter.9393- * - /proc/sys/net/vsock/child_ns_mode (writable) controls what mode future9393+ * - /proc/sys/net/vsock/child_ns_mode (write-once) controls what mode future9494 * child namespaces will inherit when created. The initial value matches9595 * the namespace's own ns_mode.9696 *9797 * Changing child_ns_mode only affects newly created namespaces, not the9898 * current namespace or existing children. A "local" namespace cannot set9999- * child_ns_mode to "global". At namespace creation, ns_mode is inherited100100- * from the parent's child_ns_mode.9999+ * child_ns_mode to "global". child_ns_mode is write-once, so that it may be100100+ * configured and locked down by a namespace manager. Writing a different101101+ * value after the first write returns -EBUSY. At namespace creation, ns_mode102102+ * is inherited from the parent's child_ns_mode.101103 *102102- * The init_net mode is "global" and cannot be modified.104104+ * The init_net mode is "global" and cannot be modified. The init_net105105+ * child_ns_mode is also write-once, so an init process (e.g. systemd) can106106+ * set it to "local" to ensure all new namespaces inherit local mode.103107 *104108 * The modes affect the allocation and accessibility of CIDs as follows:105109 *···28292825 if (write)28302826 return -EPERM;2831282728322832- net = current->nsproxy->net_ns;28282828+ net = container_of(table->data, struct net, vsock.mode);2833282928342830 return __vsock_net_mode_string(table, write, buffer, lenp, ppos,28352831 vsock_net_mode(net), NULL);···28422838 struct net *net;28432839 int ret;2844284028452845- net = current->nsproxy->net_ns;28412841+ net = container_of(table->data, struct net, vsock.child_ns_mode);2846284228472843 ret = __vsock_net_mode_string(table, write, buffer, lenp, ppos,28482844 vsock_net_child_mode(net), &new_mode);···28572853 new_mode == VSOCK_NET_MODE_GLOBAL)28582854 return -EPERM;2859285528602860- vsock_net_set_child_mode(net, new_mode);28562856+ if (!vsock_net_set_child_mode(net, new_mode))28572857+ return -EBUSY;28612858 }2862285928632860 return 0;
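The write-once semantics described in the vsock comment reduce to a small latch. A sketch of the rule as the hunk states it, with an illustrative struct (the real vsock_net_set_child_mode() returns false on conflict, which the sysctl handler maps to -EBUSY):

#include <stdbool.h>

struct child_mode_sketch {
	int  mode;	/* initialized from the parent's ns_mode */
	bool locked;	/* set by the first successful write */
};

static bool set_child_mode_once(struct child_mode_sketch *c, int new_mode)
{
	if (!c->locked) {		/* first write latches and locks */
		c->mode = new_mode;
		c->locked = true;
		return true;
	}
	return c->mode == new_mode;	/* same value: no-op; else -EBUSY */
}

Locking on first write rather than on first change is what lets a namespace manager (or init_net's init process) pin the mode before handing control to untrusted children.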
+1
net/wireless/core.c
···12111211 /* this has nothing to do now but make sure it's gone */12121212 cancel_work_sync(&rdev->wiphy_work);1213121312141214+ cancel_work_sync(&rdev->rfkill_block);12141215 cancel_work_sync(&rdev->conn_work);12151216 flush_work(&rdev->event_work);12161217 cancel_delayed_work_sync(&rdev->dfs_update_channels_wk);
+2-2
net/wireless/radiotap.c
···239239 default:240240 if (!iterator->current_namespace ||241241 iterator->_arg_index >= iterator->current_namespace->n_bits) {242242- if (iterator->current_namespace == &radiotap_ns)243243- return -ENOENT;244242 align = 0;245243 } else {246244 align = iterator->current_namespace->align_size[iterator->_arg_index].align;247245 size = iterator->current_namespace->align_size[iterator->_arg_index].size;248246 }249247 if (!align) {248248+ if (iterator->current_namespace == &radiotap_ns)249249+ return -ENOENT;250250 /* skip all subsequent data */251251 iterator->_arg = iterator->_next_ns_data;252252 /* give up on this namespace */
+1-1
net/wireless/wext-compat.c
···683683684684 idx = erq->flags & IW_ENCODE_INDEX;685685 if (cipher == WLAN_CIPHER_SUITE_AES_CMAC) {686686- if (idx < 4 || idx > 5) {686686+ if (idx < 5 || idx > 6) {687687 idx = wdev->wext.default_mgmt_key;688688 if (idx < 0)689689 return -EINVAL;
···544544 return NOTIFY_DONE;545545}546546547547+static int xfrm_dev_unregister(struct net_device *dev)548548+{549549+ xfrm_dev_state_flush(dev_net(dev), dev, true);550550+ xfrm_dev_policy_flush(dev_net(dev), dev, true);551551+552552+ return NOTIFY_DONE;553553+}554554+547555static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void *ptr)548556{549557 struct net_device *dev = netdev_notifier_info_to_dev(ptr);···564556 return xfrm_api_check(dev);565557566558 case NETDEV_DOWN:567567- case NETDEV_UNREGISTER:568559 return xfrm_dev_down(dev);560560+561561+ case NETDEV_UNREGISTER:562562+ return xfrm_dev_unregister(dev);569563 }570564 return NOTIFY_DONE;571565}
+9-2
net/xfrm/xfrm_policy.c
···38013801 struct xfrm_tmpl *tp[XFRM_MAX_DEPTH];38023802 struct xfrm_tmpl *stp[XFRM_MAX_DEPTH];38033803 struct xfrm_tmpl **tpp = tp;38043804+ int i, k = 0;38043805 int ti = 0;38053805- int i, k;3806380638073807 sp = skb_sec_path(skb);38083808 if (!sp)···38283828 tpp = stp;38293829 }3830383038313831+ if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET && sp == &dummy)38323832+ /* This policy template was already checked by HW38333833+ * and secpath was removed in __xfrm_policy_check2.38343834+ */38353835+ goto out;38363836+38313837 /* For each tunnel xfrm, find the first matching tmpl.38323838 * For each tmpl before that, find corresponding xfrm.38333839 * Order is _important_. Later we will implement···38433837 * verified to allow them to be skipped in future policy38443838 * checks (e.g. nested tunnels).38453839 */38463846- for (i = xfrm_nr-1, k = 0; i >= 0; i--) {38403840+ for (i = xfrm_nr - 1; i >= 0; i--) {38473841 k = xfrm_policy_ok(tpp[i], sp, k, family, if_id);38483842 if (k < 0) {38493843 if (k < -1)···38593853 goto reject;38603854 }3861385538563856+out:38623857 xfrm_pols_put(pols, npols);38633858 sp->verified_cnt = k;38643859
+89-42
rust/kernel/io.rs
···139139140140/// Internal helper macros used to invoke C MMIO read functions.141141///142142-/// This macro is intended to be used by higher-level MMIO access macros (define_read) and provides143143-/// a unified expansion for infallible vs. fallible read semantics. It emits a direct call into the144144-/// corresponding C helper and performs the required cast to the Rust return type.142142+/// This macro is intended to be used by higher-level MMIO access macros (io_define_read) and143143+/// provides a unified expansion for infallible vs. fallible read semantics. It emits a direct call144144+/// into the corresponding C helper and performs the required cast to the Rust return type.145145///146146/// # Parameters147147///···166166167167/// Internal helper macros used to invoke C MMIO write functions.168168///169169-/// This macro is intended to be used by higher-level MMIO access macros (define_write) and provides170170-/// a unified expansion for infallible vs. fallible write semantics. It emits a direct call into the171171-/// corresponding C helper and performs the required cast to the Rust return type.169169+/// This macro is intended to be used by higher-level MMIO access macros (io_define_write) and170170+/// provides a unified expansion for infallible vs. fallible write semantics. It emits a direct call171171+/// into the corresponding C helper and performs the required cast to the Rust return type.172172///173173/// # Parameters174174///···193193 }};194194}195195196196-macro_rules! define_read {196196+/// Generates an accessor method for reading from an I/O backend.197197+///198198+/// This macro reduces boilerplate by automatically generating either compile-time bounds-checked199199+/// (infallible) or runtime bounds-checked (fallible) read methods. It abstracts the address200200+/// calculation and bounds checking, and delegates the actual I/O read operation to a specified201201+/// helper macro, making it generic over different I/O backends.202202+///203203+/// # Parameters204204+///205205+/// * `infallible` / `fallible` - Determines the bounds-checking strategy. `infallible` relies on206206+/// `IoKnownSize` for compile-time checks and returns the value directly. `fallible` performs207207+/// runtime checks against `maxsize()` and returns a `Result<T>`.208208+/// * `$(#[$attr:meta])*` - Optional attributes to apply to the generated method (e.g.,209209+/// `#[cfg(CONFIG_64BIT)]` or inline directives).210210+/// * `$vis:vis` - The visibility of the generated method (e.g., `pub`).211211+/// * `$name:ident` / `$try_name:ident` - The name of the generated method (e.g., `read32`,212212+/// `try_read8`).213213+/// * `$call_macro:ident` - The backend-specific helper macro used to emit the actual I/O call214214+/// (e.g., `call_mmio_read`).215215+/// * `$c_fn:ident` - The backend-specific C function or identifier to be passed into the216216+/// `$call_macro`.217217+/// * `$type_name:ty` - The Rust type of the value being read (e.g., `u8`, `u32`).218218+#[macro_export]219219+macro_rules! io_define_read {197220 (infallible, $(#[$attr:meta])* $vis:vis $name:ident, $call_macro:ident($c_fn:ident) ->198221 $type_name:ty) => {199222 /// Read IO data from a given offset known at compile time.···249226 }250227 };251228}252252-pub(crate) use define_read;229229+pub use io_define_read;253230254254-macro_rules! 
define_write {231231+/// Generates an accessor method for writing to an I/O backend.232232+///233233+/// This macro reduces boilerplate by automatically generating either compile-time bounds-checked234234+/// (infallible) or runtime bounds-checked (fallible) write methods. It abstracts the address235235+/// calculation and bounds checking, and delegates the actual I/O write operation to a specified236236+/// helper macro, making it generic over different I/O backends.237237+///238238+/// # Parameters239239+///240240+/// * `infallible` / `fallible` - Determines the bounds-checking strategy. `infallible` relies on241241+/// `IoKnownSize` for compile-time checks and returns `()`. `fallible` performs runtime checks242242+/// against `maxsize()` and returns a `Result`.243243+/// * `$(#[$attr:meta])*` - Optional attributes to apply to the generated method (e.g.,244244+/// `#[cfg(CONFIG_64BIT)]` or inline directives).245245+/// * `$vis:vis` - The visibility of the generated method (e.g., `pub`).246246+/// * `$name:ident` / `$try_name:ident` - The name of the generated method (e.g., `write32`,247247+/// `try_write8`).248248+/// * `$call_macro:ident` - The backend-specific helper macro used to emit the actual I/O call249249+/// (e.g., `call_mmio_write`).250250+/// * `$c_fn:ident` - The backend-specific C function or identifier to be passed into the251251+/// `$call_macro`.252252+/// * `$type_name:ty` - The Rust type of the value being written (e.g., `u8`, `u32`). Note the use253253+/// of `<-` before the type to denote a write operation.254254+#[macro_export]255255+macro_rules! io_define_write {255256 (infallible, $(#[$attr:meta])* $vis:vis $name:ident, $call_macro:ident($c_fn:ident) <-256257 $type_name:ty) => {257258 /// Write IO data from a given offset known at compile time.···306259 }307260 };308261}309309-pub(crate) use define_write;262262+pub use io_define_write;310263311264/// Checks whether an access of type `U` at the given `offset`312265/// is valid within this region.···556509 self.0.maxsize()557510 }558511559559- define_read!(fallible, try_read8, call_mmio_read(readb) -> u8);560560- define_read!(fallible, try_read16, call_mmio_read(readw) -> u16);561561- define_read!(fallible, try_read32, call_mmio_read(readl) -> u32);562562- define_read!(512512+ io_define_read!(fallible, try_read8, call_mmio_read(readb) -> u8);513513+ io_define_read!(fallible, try_read16, call_mmio_read(readw) -> u16);514514+ io_define_read!(fallible, try_read32, call_mmio_read(readl) -> u32);515515+ io_define_read!(563516 fallible,564517 #[cfg(CONFIG_64BIT)]565518 try_read64,566519 call_mmio_read(readq) -> u64567520 );568521569569- define_write!(fallible, try_write8, call_mmio_write(writeb) <- u8);570570- define_write!(fallible, try_write16, call_mmio_write(writew) <- u16);571571- define_write!(fallible, try_write32, call_mmio_write(writel) <- u32);572572- define_write!(522522+ io_define_write!(fallible, try_write8, call_mmio_write(writeb) <- u8);523523+ io_define_write!(fallible, try_write16, call_mmio_write(writew) <- u16);524524+ io_define_write!(fallible, try_write32, call_mmio_write(writel) <- u32);525525+ io_define_write!(573526 fallible,574527 #[cfg(CONFIG_64BIT)]575528 try_write64,576529 call_mmio_write(writeq) <- u64577530 );578531579579- define_read!(infallible, read8, call_mmio_read(readb) -> u8);580580- define_read!(infallible, read16, call_mmio_read(readw) -> u16);581581- define_read!(infallible, read32, call_mmio_read(readl) -> u32);582582- define_read!(532532+ io_define_read!(infallible, read8, 
call_mmio_read(readb) -> u8);533533+ io_define_read!(infallible, read16, call_mmio_read(readw) -> u16);534534+ io_define_read!(infallible, read32, call_mmio_read(readl) -> u32);535535+ io_define_read!(583536 infallible,584537 #[cfg(CONFIG_64BIT)]585538 read64,586539 call_mmio_read(readq) -> u64587540 );588541589589- define_write!(infallible, write8, call_mmio_write(writeb) <- u8);590590- define_write!(infallible, write16, call_mmio_write(writew) <- u16);591591- define_write!(infallible, write32, call_mmio_write(writel) <- u32);592592- define_write!(542542+ io_define_write!(infallible, write8, call_mmio_write(writeb) <- u8);543543+ io_define_write!(infallible, write16, call_mmio_write(writew) <- u16);544544+ io_define_write!(infallible, write32, call_mmio_write(writel) <- u32);545545+ io_define_write!(593546 infallible,594547 #[cfg(CONFIG_64BIT)]595548 write64,···613566 unsafe { &*core::ptr::from_ref(raw).cast() }614567 }615568616616- define_read!(infallible, pub read8_relaxed, call_mmio_read(readb_relaxed) -> u8);617617- define_read!(infallible, pub read16_relaxed, call_mmio_read(readw_relaxed) -> u16);618618- define_read!(infallible, pub read32_relaxed, call_mmio_read(readl_relaxed) -> u32);619619- define_read!(569569+ io_define_read!(infallible, pub read8_relaxed, call_mmio_read(readb_relaxed) -> u8);570570+ io_define_read!(infallible, pub read16_relaxed, call_mmio_read(readw_relaxed) -> u16);571571+ io_define_read!(infallible, pub read32_relaxed, call_mmio_read(readl_relaxed) -> u32);572572+ io_define_read!(620573 infallible,621574 #[cfg(CONFIG_64BIT)]622575 pub read64_relaxed,623576 call_mmio_read(readq_relaxed) -> u64624577 );625578626626- define_read!(fallible, pub try_read8_relaxed, call_mmio_read(readb_relaxed) -> u8);627627- define_read!(fallible, pub try_read16_relaxed, call_mmio_read(readw_relaxed) -> u16);628628- define_read!(fallible, pub try_read32_relaxed, call_mmio_read(readl_relaxed) -> u32);629629- define_read!(579579+ io_define_read!(fallible, pub try_read8_relaxed, call_mmio_read(readb_relaxed) -> u8);580580+ io_define_read!(fallible, pub try_read16_relaxed, call_mmio_read(readw_relaxed) -> u16);581581+ io_define_read!(fallible, pub try_read32_relaxed, call_mmio_read(readl_relaxed) -> u32);582582+ io_define_read!(630583 fallible,631584 #[cfg(CONFIG_64BIT)]632585 pub try_read64_relaxed,633586 call_mmio_read(readq_relaxed) -> u64634587 );635588636636- define_write!(infallible, pub write8_relaxed, call_mmio_write(writeb_relaxed) <- u8);637637- define_write!(infallible, pub write16_relaxed, call_mmio_write(writew_relaxed) <- u16);638638- define_write!(infallible, pub write32_relaxed, call_mmio_write(writel_relaxed) <- u32);639639- define_write!(589589+ io_define_write!(infallible, pub write8_relaxed, call_mmio_write(writeb_relaxed) <- u8);590590+ io_define_write!(infallible, pub write16_relaxed, call_mmio_write(writew_relaxed) <- u16);591591+ io_define_write!(infallible, pub write32_relaxed, call_mmio_write(writel_relaxed) <- u32);592592+ io_define_write!(640593 infallible,641594 #[cfg(CONFIG_64BIT)]642595 pub write64_relaxed,643596 call_mmio_write(writeq_relaxed) <- u64644597 );645598646646- define_write!(fallible, pub try_write8_relaxed, call_mmio_write(writeb_relaxed) <- u8);647647- define_write!(fallible, pub try_write16_relaxed, call_mmio_write(writew_relaxed) <- u16);648648- define_write!(fallible, pub try_write32_relaxed, call_mmio_write(writel_relaxed) <- u32);649649- define_write!(599599+ io_define_write!(fallible, pub try_write8_relaxed, 
+    io_define_write!(fallible, pub try_write16_relaxed, call_mmio_write(writew_relaxed) <- u16);
+    io_define_write!(fallible, pub try_write32_relaxed, call_mmio_write(writel_relaxed) <- u32);
+    io_define_write!(
         fallible,
         #[cfg(CONFIG_64BIT)]
         pub try_write64_relaxed,
         call_mmio_write(writeq_relaxed) <- u64
     );
+12-12
rust/kernel/pci/io.rs
···
     device,
     devres::Devres,
     io::{
-        define_read,
-        define_write,
+        io_define_read,
+        io_define_write,
         Io,
         IoCapable,
         IoKnownSize,
···
 /// Internal helper macro used to invoke C PCI configuration space read functions.
 ///
 /// This macro is intended to be used by higher-level PCI configuration space access macros
-/// (define_read) and provides a unified expansion for infallible vs. fallible read semantics. It
+/// (io_define_read) and provides a unified expansion for infallible vs. fallible read semantics. It
 /// emits a direct call into the corresponding C helper and performs the required cast to the Rust
 /// return type.
 ///
···
 /// Internal helper macro used to invoke C PCI configuration space write functions.
 ///
 /// This macro is intended to be used by higher-level PCI configuration space access macros
-/// (define_write) and provides a unified expansion for infallible vs. fallible read semantics. It
-/// emits a direct call into the corresponding C helper and performs the required cast to the Rust
-/// return type.
+/// (io_define_write) and provides a unified expansion for infallible vs. fallible write semantics.
+/// It emits a direct call into the corresponding C helper and performs the required cast to the
+/// Rust return type.
 ///
 /// # Parameters
 ///
···
     // PCI configuration space does not support fallible operations.
     // The default implementations from the Io trait are not used.
 
-    define_read!(infallible, read8, call_config_read(pci_read_config_byte) -> u8);
-    define_read!(infallible, read16, call_config_read(pci_read_config_word) -> u16);
-    define_read!(infallible, read32, call_config_read(pci_read_config_dword) -> u32);
+    io_define_read!(infallible, read8, call_config_read(pci_read_config_byte) -> u8);
+    io_define_read!(infallible, read16, call_config_read(pci_read_config_word) -> u16);
+    io_define_read!(infallible, read32, call_config_read(pci_read_config_dword) -> u32);
 
-    define_write!(infallible, write8, call_config_write(pci_write_config_byte) <- u8);
-    define_write!(infallible, write16, call_config_write(pci_write_config_word) <- u16);
-    define_write!(infallible, write32, call_config_write(pci_write_config_dword) <- u32);
+    io_define_write!(infallible, write8, call_config_write(pci_write_config_byte) <- u8);
+    io_define_write!(infallible, write16, call_config_write(pci_write_config_word) <- u16);
+    io_define_write!(infallible, write32, call_config_write(pci_write_config_dword) <- u32);
 }
 
 impl<'a, S: ConfigSpaceKind> IoKnownSize for ConfigSpace<'a, S> {
···
 static void attach_ksyms_all(struct bpf_program *empty, bool kretprobe)
 {
 	LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
-	char **syms = NULL;
-	size_t cnt = 0;
+	struct bpf_link *link = NULL;
+	struct ksyms *ksyms = NULL;
 
 	/* Some recursive functions will be skipped in
 	 * bpf_get_ksyms -> skip_entry, as they can introduce sufficient
···
 	 * So, don't run the kprobe-multi-all and kretprobe-multi-all on
 	 * a debug kernel.
 	 */
-	if (bpf_get_ksyms(&syms, &cnt, true)) {
+	if (bpf_get_ksyms(&ksyms, true)) {
 		fprintf(stderr, "failed to get ksyms\n");
 		exit(1);
 	}
 
-	opts.syms = (const char **) syms;
-	opts.cnt = cnt;
+	opts.syms = (const char **)ksyms->filtered_syms;
+	opts.cnt = ksyms->filtered_cnt;
 	opts.retprobe = kretprobe;
 	/* attach empty to all the kernel functions except bpf_get_numa_node_id. */
-	if (!bpf_program__attach_kprobe_multi_opts(empty, NULL, &opts)) {
+	link = bpf_program__attach_kprobe_multi_opts(empty, NULL, &opts);
+	free_kallsyms_local(ksyms);
+	if (!link) {
 		fprintf(stderr, "failed to attach bpf_program__attach_kprobe_multi_opts to all\n");
 		exit(1);
 	}
+32-13
tools/testing/selftests/bpf/bpf_util.h
···
 #include <errno.h>
 #include <syscall.h>
 #include <bpf/libbpf.h> /* libbpf_num_possible_cpus */
+#include <linux/args.h>
 
 static inline unsigned int bpf_num_possible_cpus(void)
 {
···
 	return possible_cpus;
 }
 
-/* Copy up to sz - 1 bytes from zero-terminated src string and ensure that dst
- * is zero-terminated string no matter what (unless sz == 0, in which case
- * it's a no-op). It's conceptually close to FreeBSD's strlcpy(), but differs
- * in what is returned. Given this is internal helper, it's trivial to extend
- * this, when necessary. Use this instead of strncpy inside libbpf source code.
+/*
+ * Simplified strscpy() implementation. The kernel one is in lib/string.c
  */
-static inline void bpf_strlcpy(char *dst, const char *src, size_t sz)
+static inline ssize_t sized_strscpy(char *dest, const char *src, size_t count)
 {
-	size_t i;
+	long res = 0;
 
-	if (sz == 0)
-		return;
+	if (count == 0)
+		return -E2BIG;
 
-	sz--;
-	for (i = 0; i < sz && src[i]; i++)
-		dst[i] = src[i];
-	dst[i] = '\0';
+	while (count > 1) {
+		char c;
+
+		c = src[res];
+		dest[res] = c;
+		if (!c)
+			return res;
+		res++;
+		count--;
+	}
+
+	/* Force NUL-termination. */
+	dest[res] = '\0';
+
+	/* Return -E2BIG if the source didn't stop. */
+	return src[res] ? -E2BIG : res;
 }
+
+#define __strscpy0(dst, src, ...) \
+	sized_strscpy(dst, src, sizeof(dst))
+#define __strscpy1(dst, src, size) \
+	sized_strscpy(dst, src, size)
+
+#undef strscpy /* Redefine the placeholder from tools/include/linux/string.h */
+#define strscpy(dst, src, ...) \
+	CONCATENATE(__strscpy, COUNT_ARGS(__VA_ARGS__))(dst, src, __VA_ARGS__)
 
 #define __bpf_percpu_val_align __attribute__((__aligned__(8)))
 
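The two-argument form of strscpy() above infers the buffer size with sizeof(dst), so it is only valid when dst is a true array in scope; COUNT_ARGS() selects __strscpy0 for zero extra arguments and __strscpy1 for an explicit size. A minimal userspace sketch of the resulting semantics (hypothetical demo.c, not part of the patch; the function body mirrors sized_strscpy() above):

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>

/* Mirrors sized_strscpy() above: copies at most count - 1 bytes,
 * always NUL-terminates, and returns the copied length, or -E2BIG
 * on truncation (or when count is 0).
 */
static ssize_t sized_strscpy(char *dest, const char *src, size_t count)
{
	long res = 0;

	if (count == 0)
		return -E2BIG;

	while (count > 1) {
		char c = src[res];

		dest[res] = c;
		if (!c)
			return res;
		res++;
		count--;
	}

	dest[res] = '\0';
	return src[res] ? -E2BIG : res;
}

int main(void)
{
	char buf[8];

	/* Fits: returns the string length, 5. */
	printf("%zd\n", sized_strscpy(buf, "hello", sizeof(buf)));

	/* Too long: buf ends up as "truncat" (still NUL-terminated),
	 * and the call returns -E2BIG.
	 */
	printf("%zd %s\n", sized_strscpy(buf, "truncated", sizeof(buf)), buf);

	return 0;
}

With the macros above, strscpy(buf, "hello") would expand to sized_strscpy(buf, "hello", sizeof(buf)), matching the kernel's calling convention.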
+19-6
tools/testing/selftests/bpf/bpftool_helpers.c
···
 // SPDX-License-Identifier: GPL-2.0-only
-#include "bpftool_helpers.h"
 #include <unistd.h>
 #include <string.h>
 #include <stdbool.h>
+
+#include "bpf_util.h"
+#include "bpftool_helpers.h"
 
 #define BPFTOOL_PATH_MAX_LEN 64
 #define BPFTOOL_FULL_CMD_MAX_LEN 512
 
 #define BPFTOOL_DEFAULT_PATH "tools/sbin/bpftool"
 
-static int detect_bpftool_path(char *buffer)
+static int detect_bpftool_path(char *buffer, size_t size)
 {
 	char tmp[BPFTOOL_PATH_MAX_LEN];
+	const char *env_path;
+
+	/* First, check if BPFTOOL environment variable is set */
+	env_path = getenv("BPFTOOL");
+	if (env_path && access(env_path, X_OK) == 0) {
+		strscpy(buffer, env_path, size);
+		return 0;
+	} else if (env_path) {
+		fprintf(stderr, "bpftool '%s' doesn't exist or is not executable\n", env_path);
+		return 1;
+	}
 
 	/* Check default bpftool location (will work if we are running the
 	 * default flavor of test_progs)
 	 */
 	snprintf(tmp, BPFTOOL_PATH_MAX_LEN, "./%s", BPFTOOL_DEFAULT_PATH);
 	if (access(tmp, X_OK) == 0) {
-		strncpy(buffer, tmp, BPFTOOL_PATH_MAX_LEN);
+		strscpy(buffer, tmp, size);
 		return 0;
 	}
 
···
 	 */
 	snprintf(tmp, BPFTOOL_PATH_MAX_LEN, "../%s", BPFTOOL_DEFAULT_PATH);
 	if (access(tmp, X_OK) == 0) {
-		strncpy(buffer, tmp, BPFTOOL_PATH_MAX_LEN);
+		strscpy(buffer, tmp, size);
 		return 0;
 	}
 
-	/* Failed to find bpftool binary */
+	fprintf(stderr, "Failed to detect bpftool path, use BPFTOOL env var to override\n");
 	return 1;
 }
 
···
 	int ret;
 
 	/* Detect and cache bpftool binary location */
-	if (bpftool_path[0] == 0 && detect_bpftool_path(bpftool_path))
+	if (bpftool_path[0] == 0 && detect_bpftool_path(bpftool_path, sizeof(bpftool_path)))
 		return 1;
 
 	ret = snprintf(command, BPFTOOL_FULL_CMD_MAX_LEN, "%s %s%s",
···
 		pc += cnt;
 	}
 	qsort(labels.pcs, labels.cnt, sizeof(*labels.pcs), cmp_u32);
-	for (i = 0; i < labels.cnt; ++i)
-		/* gcc is unable to infer upper bound for labels.cnt and assumes
-		 * it to be U32_MAX. U32_MAX takes 10 decimal digits.
-		 * snprintf below prints into labels.names[*],
-		 * which has space only for two digits and a letter.
-		 * To avoid truncation warning use (i % MAX_LOCAL_LABELS),
-		 * which informs gcc about printed value upper bound.
-		 */
-		snprintf(labels.names[i], sizeof(labels.names[i]), "L%d", i % MAX_LOCAL_LABELS);
+	/* gcc is unable to infer upper bound for labels.cnt and
+	 * assumes it to be U32_MAX. U32_MAX takes 10 decimal digits.
+	 * snprintf below prints into labels.names[*], which has space
+	 * only for two digits and a letter. To avoid truncation
+	 * warning use (i < MAX_LOCAL_LABELS), which informs gcc about
+	 * printed value upper bound.
+	 */
+	for (i = 0; i < labels.cnt && i < MAX_LOCAL_LABELS; ++i)
+		snprintf(labels.names[i], sizeof(labels.names[i]), "L%d", i);
 
 	/* now print with labels */
 	labels.print_phase = true;
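The comment in this hunk describes a compiler-visibility trick, so a distilled reproducer may help. This is a standalone sketch under assumed names and an assumed MAX_LOCAL_LABELS value, not code from the patch:

#include <stdio.h>

#define MAX_LOCAL_LABELS 32	/* assumed; "two digits and a letter" fit in 4 bytes */

static char names[MAX_LOCAL_LABELS][4];

static void name_labels(unsigned int cnt)
{
	unsigned int i;

	/* Unbounded variant: with gcc -O2 -Wformat-truncation, the
	 * compiler cannot prove i < 100, assumes up to 10 digits, and
	 * warns that "L%u" may not fit into 4 bytes:
	 *
	 * for (i = 0; i < cnt; ++i)
	 *	snprintf(names[i], sizeof(names[i]), "L%u", i);
	 */

	/* Bounded variant: the extra i < MAX_LOCAL_LABELS condition
	 * gives the optimizer an upper bound of 31, so "L31" provably
	 * fits and the warning disappears.
	 */
	for (i = 0; i < cnt && i < MAX_LOCAL_LABELS; ++i)
		snprintf(names[i], sizeof(names[i]), "L%u", i);
}

int main(void)
{
	name_labels(40);
	printf("%s %s\n", names[0], names[31]);
	return 0;
}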
···
 		return -1;
 	}
 
-	strncpy(type_str, type, type_sz);
-	strncpy(field_str, field, field_sz);
+	memcpy(type_str, type, type_sz);
+	type_str[type_sz] = '\0';
+	memcpy(field_str, field, field_sz);
+	field_str[field_sz] = '\0';
 	btf_id = btf__find_by_name(btf, type_str);
 	if (btf_id < 0) {
 		PRINT_FAIL("No BTF info for type %s\n", type_str);
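For context on why strncpy() was the wrong tool here: type and field point into a larger "type.field" specification string, so the sources are not NUL-terminated at type_sz/field_sz bytes, and strncpy() adds no terminator when it copies its full count. A standalone sketch of the corrected pattern (hypothetical input, not the test's actual data):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *spec = "task_struct.comm";	/* "type.field" input */
	const char *dot = strchr(spec, '.');
	size_t type_sz = dot - spec;
	char type_str[64];

	/* strncpy(type_str, spec, type_sz) would copy exactly type_sz
	 * bytes and leave type_str unterminated; memcpy() plus an
	 * explicit NUL makes the termination guaranteed and obvious.
	 */
	memcpy(type_str, spec, type_sz);
	type_str[type_sz] = '\0';

	printf("%s\n", type_str);	/* prints: task_struct */
	return 0;
}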
+4-1
tools/testing/selftests/bpf/prog_tests/dynptr.c
···
 	);
 
 	link = bpf_program__attach(prog);
-	if (!ASSERT_OK_PTR(link, "bpf_program__attach"))
+	if (!ASSERT_OK_PTR(link, "bpf_program__attach")) {
+		bpf_object__close(obj);
 		goto cleanup;
+	}
 
 	err = bpf_prog_test_run_opts(aux_prog_fd, &topts);
 	bpf_link__destroy(link);
+	bpf_object__close(obj);
 
 	if (!ASSERT_OK(err, "test_run"))
 		goto cleanup;
+2-2
tools/testing/selftests/bpf/prog_tests/fd_array.c
···
 	ASSERT_EQ(prog_fd, -E2BIG, "prog should have been rejected with -E2BIG");
 
 cleanup_fds:
-	while (i > 0)
-		Close(extra_fds[--i]);
+	while (i-- > 0)
+		Close(extra_fds[i]);
 }
 
 void test_fd_array_cnt(void)
···
 
 static void close_xsk(struct xsk *xsk)
 {
-	if (xsk->umem)
-		xsk_umem__delete(xsk->umem);
 	if (xsk->socket)
 		xsk_socket__delete(xsk->socket);
+	if (xsk->umem)
+		xsk_umem__delete(xsk->umem);
 	munmap(xsk->umem_area, UMEM_SIZE);
 }
 
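The reordering matters because the socket holds a reference on its umem: in the libbpf/libxdp implementations of this API that I am aware of, xsk_umem__delete() refuses to free a umem that still has sockets attached (returning -EBUSY), so the old order could leak the umem. A hedged sketch of the dependency, assuming the selftests' local xsk.h helpers:

#include <stddef.h>
#include <sys/mman.h>
#include "xsk.h"	/* assumed: selftests' local AF_XDP helper header */

static void close_xsk_ordered(struct xsk_socket *socket, struct xsk_umem *umem,
			      void *umem_area, size_t umem_size)
{
	/* Socket first: this drops the socket's reference on the umem. */
	if (socket)
		xsk_socket__delete(socket);
	/* Now the umem has no remaining users and can actually be freed;
	 * in the old order, xsk_umem__delete() would bail out early.
	 */
	if (umem)
		xsk_umem__delete(umem);
	munmap(umem_area, umem_size);
}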
+1-1
tools/testing/selftests/bpf/progs/dmabuf_iter.c
···
 
 	/* Buffers are not required to be named */
 	if (pname) {
-		if (bpf_probe_read_kernel(name, sizeof(name), pname))
+		if (bpf_probe_read_kernel_str(name, sizeof(name), pname) < 0)
 			return 1;
 
 		/* Name strings can be provided by userspace */
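The helper swap changes the failure mode: bpf_probe_read_kernel() fails unless all sizeof(name) bytes are readable, even when the string itself is shorter, while the _str variant stops at the NUL, always terminates the buffer, and returns the length including the NUL or a negative error (hence the new < 0 check). A self-contained sketch of the contract (hypothetical program, separate from the dmabuf iterator):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("tp_btf/task_newtask")
int BPF_PROG(copy_comm, struct task_struct *task, u64 clone_flags)
{
	char name[16];
	long len;

	/* Returns strlen + 1 on success (NUL included), negative on a
	 * fault. bpf_probe_read_kernel() would instead demand that the
	 * full 16 bytes be readable, which can fail for short strings
	 * allocated near the end of a mapping.
	 */
	len = bpf_probe_read_kernel_str(name, sizeof(name), task->comm);
	if (len < 0)
		return 0;

	bpf_printk("comm=%s len=%ld", name, len);
	return 0;
}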
+197
tools/testing/selftests/bpf/progs/map_kptr_race.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "../test_kmods/bpf_testmod_kfunc.h"
+
+struct map_value {
+	struct prog_test_ref_kfunc __kptr *ref_ptr;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, struct map_value);
+	__uint(max_entries, 1);
+} race_hash_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, struct map_value);
+	__uint(max_entries, 1);
+} race_percpu_hash_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
+	__uint(map_flags, BPF_F_NO_PREALLOC);
+	__type(key, int);
+	__type(value, struct map_value);
+} race_sk_ls_map SEC(".maps");
+
+int num_of_refs;
+int sk_ls_leak_done;
+int target_map_id;
+int map_freed;
+const volatile int nr_cpus;
+
+SEC("tc")
+int test_htab_leak(struct __sk_buff *skb)
+{
+	struct prog_test_ref_kfunc *p, *old;
+	struct map_value val = {};
+	struct map_value *v;
+	int key = 0;
+
+	if (bpf_map_update_elem(&race_hash_map, &key, &val, BPF_ANY))
+		return 1;
+
+	v = bpf_map_lookup_elem(&race_hash_map, &key);
+	if (!v)
+		return 2;
+
+	p = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+	if (!p)
+		return 3;
+	old = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (old)
+		bpf_kfunc_call_test_release(old);
+
+	bpf_map_delete_elem(&race_hash_map, &key);
+
+	p = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+	if (!p)
+		return 4;
+	old = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (old)
+		bpf_kfunc_call_test_release(old);
+
+	return 0;
+}
+
+static int fill_percpu_kptr(struct map_value *v)
+{
+	struct prog_test_ref_kfunc *p, *old;
+
+	p = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+	if (!p)
+		return 1;
+	old = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (old)
+		bpf_kfunc_call_test_release(old);
+	return 0;
+}
+
+SEC("tc")
+int test_percpu_htab_leak(struct __sk_buff *skb)
+{
+	struct map_value *v, *arr[16] = {};
+	struct map_value val = {};
+	int key = 0;
+	int err = 0;
+
+	if (bpf_map_update_elem(&race_percpu_hash_map, &key, &val, BPF_ANY))
+		return 1;
+
+	for (int i = 0; i < nr_cpus; i++) {
+		v = bpf_map_lookup_percpu_elem(&race_percpu_hash_map, &key, i);
+		if (!v)
+			return 2;
+		arr[i] = v;
+	}
+
+	bpf_map_delete_elem(&race_percpu_hash_map, &key);
+
+	for (int i = 0; i < nr_cpus; i++) {
+		v = arr[i];
+		err = fill_percpu_kptr(v);
+		if (err)
+			return 3;
+	}
+
+	return 0;
+}
+
+SEC("tp_btf/inet_sock_set_state")
+int BPF_PROG(test_sk_ls_leak, struct sock *sk, int oldstate, int newstate)
+{
+	struct prog_test_ref_kfunc *p, *old;
+	struct map_value *v;
+
+	if (newstate != BPF_TCP_SYN_SENT)
+		return 0;
+
+	if (sk_ls_leak_done)
+		return 0;
+
+	v = bpf_sk_storage_get(&race_sk_ls_map, sk, NULL,
+			       BPF_SK_STORAGE_GET_F_CREATE);
+	if (!v)
+		return 0;
+
+	p = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+	if (!p)
+		return 0;
+	old = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (old)
+		bpf_kfunc_call_test_release(old);
+
+	bpf_sk_storage_delete(&race_sk_ls_map, sk);
+
+	p = bpf_kfunc_call_test_acquire(&(unsigned long){0});
+	if (!p)
+		return 0;
+	old = bpf_kptr_xchg(&v->ref_ptr, p);
+	if (old)
+		bpf_kfunc_call_test_release(old);
+
+	sk_ls_leak_done = 1;
+	return 0;
+}
+
+long target_map_ptr;
+
+SEC("fentry/bpf_map_put")
+int BPF_PROG(map_put, struct bpf_map *map)
+{
+	if (target_map_id && map->id == (u32)target_map_id)
+		target_map_ptr = (long)map;
+	return 0;
+}
+
+SEC("fexit/htab_map_free")
+int BPF_PROG(htab_map_free, struct bpf_map *map)
+{
+	if (target_map_ptr && (long)map == target_map_ptr)
+		map_freed = 1;
+	return 0;
+}
+
+SEC("fexit/bpf_sk_storage_map_free")
+int BPF_PROG(sk_map_free, struct bpf_map *map)
+{
+	if (target_map_ptr && (long)map == target_map_ptr)
+		map_freed = 1;
+	return 0;
+}
+
+SEC("syscall")
+int count_ref(void *ctx)
+{
+	struct prog_test_ref_kfunc *p;
+	unsigned long arg = 0;
+
+	p = bpf_kfunc_call_test_acquire(&arg);
+	if (!p)
+		return 1;
+
+	num_of_refs = p->cnt.refs.counter;
+
+	bpf_kfunc_call_test_release(p);
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
···
 	: __clobber_all);
 }
 
+/* This test covers the bounds deduction when the u64 range and the tnum
+ * overlap only at umax. After instruction 3, the ranges look as follows:
+ *
+ * 0          umin=0xe1      umax=0xf0                     U64_MAX
+ * |              [xxxxxxxxxxxxxx]                               |
+ * |----------------------------|------------------------------|
+ * |             x              x                                | tnum values
+ *
+ * The verifier can therefore deduce that R0=0xf0=240.
+ */
+SEC("socket")
+__description("bounds refinement with single-value tnum on umax")
+__msg("3: (15) if r0 == 0xe0 {{.*}} R0=240")
+__success __log_level(2)
+__flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void bounds_refinement_tnum_umax(void *ctx)
+{
+	asm volatile ("			\
+	call %[bpf_get_prandom_u32];	\
+	r0 |= 0xe0;			\
+	r0 &= 0xf0;			\
+	if r0 == 0xe0 goto +2;		\
+	if r0 == 0xf0 goto +1;		\
+	r10 = 0;			\
+	exit;				\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+/* This test covers the bounds deduction when the u64 range and the tnum
+ * overlap only at umin. After instruction 3, the ranges look as follows:
+ *
+ * 0          umin=0xe0      umax=0xef                     U64_MAX
+ * |              [xxxxxxxxxxxxxx]                               |
+ * |----------------------------|------------------------------|
+ * |              x               x                              | tnum values
+ *
+ * The verifier can therefore deduce that R0=0xe0=224.
+ */
+SEC("socket")
+__description("bounds refinement with single-value tnum on umin")
+__msg("3: (15) if r0 == 0xf0 {{.*}} R0=224")
+__success __log_level(2)
+__flag(BPF_F_TEST_REG_INVARIANTS)
+__naked void bounds_refinement_tnum_umin(void *ctx)
+{
+	asm volatile ("			\
+	call %[bpf_get_prandom_u32];	\
+	r0 |= 0xe0;			\
+	r0 &= 0xf0;			\
+	if r0 == 0xf0 goto +2;		\
+	if r0 == 0xe0 goto +1;		\
+	r10 = 0;			\
+	exit;				\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+/* This test covers the bounds deduction when the only possible tnum value is
+ * in the middle of the u64 range. After instruction 3, the ranges look as
+ * follows:
+ *
+ * 0          umin=0x7cf    umax=0x7df                     U64_MAX
+ * |              [xxxxxxxxxxxx]                                 |
+ * |----------------------------|------------------------------|
+ * |      x      x      x      x      x                          | tnum values
+ * |                            +--- 0x7e0
+ *                       +--- 0x7d0
+ *
+ * Since the lower four bits are zero, the tnum and the u64 range only overlap
+ * in R0=0x7d0=2000. Instruction 5 is therefore dead code.
+ */
+SEC("socket")
+__description("bounds refinement with single-value tnum in middle of range")
+__msg("3: (a5) if r0 < 0x7cf {{.*}} R0=2000")
+__success __log_level(2)
+__naked void bounds_refinement_tnum_middle(void *ctx)
+{
+	asm volatile ("			\
+	call %[bpf_get_prandom_u32];	\
+	if r0 & 0x0f goto +4;		\
+	if r0 > 0x7df goto +3;		\
+	if r0 < 0x7cf goto +2;		\
+	if r0 == 0x7d0 goto +1;		\
+	r10 = 0;			\
+	exit;				\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+/* This test covers the negative case for the tnum/u64 overlap. Since
+ * they contain the same two values (i.e., {0, 1}), we can't deduce
+ * anything more.
+ */
+SEC("socket")
+__description("bounds refinement: several overlaps between tnum and u64")
+__msg("2: (25) if r0 > 0x1 {{.*}} R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=1,var_off=(0x0; 0x1))")
+__failure __log_level(2)
+__naked void bounds_refinement_several_overlaps(void *ctx)
+{
+	asm volatile ("			\
+	call %[bpf_get_prandom_u32];	\
+	if r0 < 0 goto +3;		\
+	if r0 > 1 goto +2;		\
+	if r0 == 1 goto +1;		\
+	r10 = 0;			\
+	exit;				\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
+/* This test covers the negative case for the tnum/u64 overlap. Since
+ * they overlap in the two values contained by the u64 range (i.e.,
+ * {0xf, 0x10}), we can't deduce anything more.
+ */
+SEC("socket")
+__description("bounds refinement: multiple overlaps between tnum and u64")
+__msg("2: (25) if r0 > 0x10 {{.*}} R0=scalar(smin=umin=smin32=umin32=15,smax=umax=smax32=umax32=16,var_off=(0x0; 0x1f))")
+__failure __log_level(2)
+__naked void bounds_refinement_multiple_overlaps(void *ctx)
+{
+	asm volatile ("			\
+	call %[bpf_get_prandom_u32];	\
+	if r0 < 0xf goto +3;		\
+	if r0 > 0x10 goto +2;		\
+	if r0 == 0x10 goto +1;		\
+	r10 = 0;			\
+	exit;				\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
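The single-overlap deduction these tests exercise can be sanity-checked outside the verifier. The following is an illustrative userspace brute-force model of the reasoning in the comments above, not the kernel's actual (analytical) bounds-refinement code:

#include <stdint.h>
#include <stdio.h>

/* Count how many concrete values of a tnum fall in [umin, umax], and
 * remember the last one seen. A tnum (value; mask) encodes known bits
 * in value; bits set in mask are unknown. Brute-forcing subsets of the
 * mask is only viable for small masks, which is all this demo needs.
 */
static int overlap(uint64_t value, uint64_t mask,
		   uint64_t umin, uint64_t umax, uint64_t *hit)
{
	uint64_t sub = 0;
	int n = 0;

	do {
		uint64_t v = value | sub;

		if (v >= umin && v <= umax) {
			n++;
			*hit = v;
		}
		sub = (sub - mask) & mask;	/* next subset of the mask bits */
	} while (sub);

	return n;
}

int main(void)
{
	uint64_t v;

	/* "umax" test above: tnum (0xe0; 0x10) = {0xe0, 0xf0} against
	 * [0xe1, 0xf0]; exactly one candidate survives, so R0 == 240.
	 */
	if (overlap(0xe0, 0x10, 0xe1, 0xf0, &v) == 1)
		printf("single overlap -> R0=%llu\n", (unsigned long long)v);

	/* Negative test: tnum (0x0; 0x1) = {0, 1} against [0, 1]; two
	 * candidates remain, so no refinement is possible.
	 */
	printf("overlaps=%d\n", overlap(0x0, 0x1, 0x0, 0x1, &v));
	return 0;
}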
···
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# shellcheck disable=SC2154
+
+lib_dir=$(dirname "$0")
+source "$lib_dir"/../../../net/lib.sh
+
+trap cleanup_all_ns EXIT
+
+# Test that there is no reference count leak and that dummy1 can be deleted.
+# https://lore.kernel.org/netdev/4d69abe1-ca8d-4f0b-bcf8-13899b211e57@I-love.SAKURA.ne.jp/
+setup_ns ns1 ns2
+ip -n "$ns1" link add name team1 type team
+ip -n "$ns1" link add name dummy1 mtu 1499 type dummy
+ip -n "$ns1" link set dev dummy1 master team1
+ip -n "$ns1" link set dev dummy1 netns "$ns2"
+ip -n "$ns2" link del dev dummy1