···
 illegal Windows/NTFS/SMB characters to a remap range (this mount parameter
 is the default for SMB3). This remap (``mapposix``) range is also
 compatible with Mac (and "Services for Mac" on some older Windows).
+When POSIX Extensions for SMB 3.1.1 are negotiated, remapping is automatically
+disabled.

 CIFS VFS Mount Options
 ======================
+77
Documentation/block/ublk.rst
···
 parameter of `struct ublk_param_segment` with backend for avoiding
 unnecessary IO split, which usually hurts io_uring performance.

+Auto Buffer Registration
+------------------------
+
+The ``UBLK_F_AUTO_BUF_REG`` feature automatically handles buffer registration
+and unregistration for I/O requests, which simplifies buffer management
+and reduces overhead in the ublk server implementation.
+
+This is another feature flag for using zero copy, and it is compatible with
+``UBLK_F_SUPPORT_ZERO_COPY``.
+
+Feature Overview
+~~~~~~~~~~~~~~~~
+
+This feature automatically registers request buffers to the io_uring context
+before delivering I/O commands to the ublk server and unregisters them when
+completing I/O commands. This eliminates the need for manual buffer
+registration/unregistration via the ``UBLK_IO_REGISTER_IO_BUF`` and
+``UBLK_IO_UNREGISTER_IO_BUF`` commands, so I/O handling in the ublk server
+can avoid depending on those two uring_cmd operations.
+
+I/Os can't be issued concurrently to io_uring if there is any dependency
+among them. So this approach not only simplifies the ublk server
+implementation, but also makes concurrent I/O handling possible by removing
+the dependency on the buffer registration & unregistration commands.
+
+Usage Requirements
+~~~~~~~~~~~~~~~~~~
+
+1. The ublk server must create a sparse buffer table on the same ``io_ring_ctx``
+   used for ``UBLK_IO_FETCH_REQ`` and ``UBLK_IO_COMMIT_AND_FETCH_REQ``. If
+   uring_cmd is issued on a different ``io_ring_ctx``, manual buffer
+   unregistration is required.
+
+2. Buffer registration data must be passed via uring_cmd's ``sqe->addr`` with
+   the following structure::
+
+    struct ublk_auto_buf_reg {
+        __u16 index;     /* buffer index for registration */
+        __u8  flags;     /* registration flags */
+        __u8  reserved0; /* reserved for future use */
+        __u32 reserved1; /* reserved for future use */
+    };
+
+   ublk_auto_buf_reg_to_sqe_addr() is used for converting the above structure
+   into ``sqe->addr``.
+
+3. All reserved fields in ``ublk_auto_buf_reg`` must be zeroed.
+
+4. Optional flags can be passed via ``ublk_auto_buf_reg.flags``.
+
+Fallback Behavior
+~~~~~~~~~~~~~~~~~
+
+If auto buffer registration fails:
+
+1. When ``UBLK_AUTO_BUF_REG_FALLBACK`` is enabled:
+
+   - The uring_cmd is completed
+   - ``UBLK_IO_F_NEED_REG_BUF`` is set in ``ublksrv_io_desc.op_flags``
+   - The ublk server must deal with the failure manually, for example by
+     registering the buffer itself, or by using the user-copy feature to
+     retrieve the data for handling the ublk I/O
+
+2. If fallback is not enabled:
+
+   - The ublk I/O request fails silently
+   - The uring_cmd won't be completed
+
+Limitations
+~~~~~~~~~~~
+
+- Requires the same ``io_ring_ctx`` for all operations
+- May require manual buffer management in fallback cases
+- The io_ring_ctx buffer table has a maximum size of 16K, which may not be
+  enough when too many ublk devices are handled by a single io_ring_ctx and
+  each one has a very large queue depth

 References
 ==========
···
   Some peripherals such as PWM have their I/O go through the 4 "GPIOs".

 maintainers:
-  - Jianlong Huang <jianlong.huang@starfivetech.com>
+  - Hal Feng <hal.feng@starfivetech.com>

 properties:
   compatible:
···
   any GPIO can be set up to be controlled by any of the peripherals.

 maintainers:
-  - Jianlong Huang <jianlong.huang@starfivetech.com>
+  - Hal Feng <hal.feng@starfivetech.com>

 properties:
   compatible:
+3-1
Documentation/filesystems/proc.rst
···
     ms  may share
     gd  stack segment growns down
     pf  pure PFN range
-    dw  disabled write to the mapped file
     lo  pages are locked in memory
     io  memory mapped I/O area
     sr  sequential read advise provided
···
     mt  arm64 MTE allocation tags are enabled
     um  userfaultfd missing tracking
     uw  userfaultfd wr-protect tracking
+    ui  userfaultfd minor fault
     ss  shadow/guarded control stack page
     sl  sealed
+    lf  lock on fault pages
+    dp  always lazily freeable mapping
     ==  =======================================

 Note that there is no guarantee that every flag and associated mnemonic will
+8-4
MAINTAINERS
···
 M:	Martin KaFai Lau <martin.lau@linux.dev>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 R:	John Fastabend <john.fastabend@gmail.com>
+R:	Stanislav Fomichev <sdf@fomichev.me>
 L:	bpf@vger.kernel.org
 L:	netdev@vger.kernel.org
 S:	Maintained
···
 F:	include/linux/smpboot.h
 F:	kernel/cpu.c
 F:	kernel/smpboot.*
+F:	rust/helper/cpu.c
 F:	rust/kernel/cpu.rs

 CPU IDLE TIME MANAGEMENT FRAMEWORK
···
 R:	Nico Pache <npache@redhat.com>
 R:	Ryan Roberts <ryan.roberts@arm.com>
 R:	Dev Jain <dev.jain@arm.com>
+R:	Barry Song <baohua@kernel.org>
 L:	linux-mm@kvack.org
 S:	Maintained
 W:	http://www.linux-mm.org
···
 NETWORKING [TCP]
 M:	Eric Dumazet <edumazet@google.com>
 M:	Neal Cardwell <ncardwell@google.com>
-R:	Kuniyuki Iwashima <kuniyu@amazon.com>
+R:	Kuniyuki Iwashima <kuniyu@google.com>
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	Documentation/networking/net_cachelines/tcp_sock.rst
···

 NETWORKING [SOCKETS]
 M:	Eric Dumazet <edumazet@google.com>
-M:	Kuniyuki Iwashima <kuniyu@amazon.com>
+M:	Kuniyuki Iwashima <kuniyu@google.com>
 M:	Paolo Abeni <pabeni@redhat.com>
 M:	Willem de Bruijn <willemb@google.com>
 S:	Maintained
···
 F:	net/socket.c

 NETWORKING [UNIX SOCKETS]
-M:	Kuniyuki Iwashima <kuniyu@amazon.com>
+M:	Kuniyuki Iwashima <kuniyu@google.com>
 S:	Maintained
 F:	include/net/af_unix.h
 F:	include/net/netns/unix.h
···

 STARFIVE JH71X0 PINCTRL DRIVERS
 M:	Emil Renner Berthing <kernel@esmil.dk>
-M:	Jianlong Huang <jianlong.huang@starfivetech.com>
 M:	Hal Feng <hal.feng@starfivetech.com>
 L:	linux-gpio@vger.kernel.org
 S:	Maintained
···
 M:	Jakub Kicinski <kuba@kernel.org>
 M:	Jesper Dangaard Brouer <hawk@kernel.org>
 M:	John Fastabend <john.fastabend@gmail.com>
+R:	Stanislav Fomichev <sdf@fomichev.me>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
···
 M:	Magnus Karlsson <magnus.karlsson@intel.com>
 M:	Maciej Fijalkowski <maciej.fijalkowski@intel.com>
 R:	Jonathan Lemon <jonathan.lemon@gmail.com>
+R:	Stanislav Fomichev <sdf@fomichev.me>
 L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
···
 #error only <linux/bitops.h> can be included directly
 #endif

-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__

 #include <linux/types.h>
 #include <linux/compiler.h>
···
 #include <asm-generic/bitops/le.h>
 #include <asm-generic/bitops/ext2-atomic-setbit.h>

-#endif /* !__ASSEMBLY__ */
+#endif /* !__ASSEMBLER__ */

 #endif
···
 #ifndef __ASM_ARC_DSP_H
 #define __ASM_ARC_DSP_H

-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__

 /*
  * DSP-related saved registers - need to be saved only when you are
···
 #endif
 };

-#endif /* !__ASSEMBLY__ */
+#endif /* !__ASSEMBLER__ */

 #endif /* __ASM_ARC_DSP_H */
···

 static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
 {
-	__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+	__vcpu_assign_sys_reg(vcpu, ZCR_EL1, read_sysreg_el1(SYS_ZCR));
 	/*
 	 * On saving/restoring guest sve state, always use the maximum VL for
 	 * the guest. The layout of the data when saving the sve state depends
···

 	has_fpmr = kvm_has_fpmr(kern_hyp_va(vcpu->kvm));
 	if (has_fpmr)
-		__vcpu_sys_reg(vcpu, FPMR) = read_sysreg_s(SYS_FPMR);
+		__vcpu_assign_sys_reg(vcpu, FPMR, read_sysreg_s(SYS_FPMR));

 	if (system_supports_sve())
 		__hyp_sve_restore_host();
+2-2
arch/arm64/kvm/hyp/vhe/switch.c
···
 	 */
 	val = read_sysreg_el0(SYS_CNTP_CVAL);
 	if (map.direct_ptimer == vcpu_ptimer(vcpu))
-		__vcpu_sys_reg(vcpu, CNTP_CVAL_EL0) = val;
+		__vcpu_assign_sys_reg(vcpu, CNTP_CVAL_EL0, val);
 	if (map.direct_ptimer == vcpu_hptimer(vcpu))
-		__vcpu_sys_reg(vcpu, CNTHP_CVAL_EL2) = val;
+		__vcpu_assign_sys_reg(vcpu, CNTHP_CVAL_EL2, val);

 	offset = read_sysreg_s(SYS_CNTPOFF_EL2);
···
 		return -EINVAL;
 	}

+	/*
+	 * Map complete page to the paste address. So the user
+	 * space should pass 0ULL to the offset parameter.
+	 */
+	if (vma->vm_pgoff) {
+		pr_debug("Page offset unsupported to map paste address\n");
+		return -EINVAL;
+	}
+
 	/* Ensure instance has an open send window */
 	if (!txwin) {
 		pr_err("No send window open?\n");
+6-2
arch/powerpc/platforms/powernv/memtrace.c
···
 static int memtrace_mmap(struct file *filp, struct vm_area_struct *vma)
 {
 	struct memtrace_entry *ent = filp->private_data;
+	unsigned long ent_nrpages = ent->size >> PAGE_SHIFT;
+	unsigned long vma_nrpages = vma_pages(vma);

-	if (ent->size < vma->vm_end - vma->vm_start)
+	/* The requested page offset should be within object's page count */
+	if (vma->vm_pgoff >= ent_nrpages)
 		return -EINVAL;

-	if (vma->vm_pgoff << PAGE_SHIFT >= ent->size)
+	/* The requested mapping range should remain within the bounds */
+	if (vma_nrpages > ent_nrpages - vma->vm_pgoff)
 		return -EINVAL;

 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
···
 /* In both cases, we borrow bit 6 to store the exclusive marker in swap PTEs. */
 #define _PAGE_SWP_EXCLUSIVE	_PAGE_USER

-static inline int pte_swp_exclusive(pte_t pte)
+static inline bool pte_swp_exclusive(pte_t pte)
 {
 	return pte.pte_low & _PAGE_SWP_EXCLUSIVE;
 }
···
 	.send_call_func_single_ipi = native_send_call_func_single_ipi,
 };
 EXPORT_SYMBOL_GPL(smp_ops);
+
+int arch_cpu_rescan_dead_smt_siblings(void)
+{
+	enum cpuhp_smt_control old = cpu_smt_control;
+	int ret;
+
+	/*
+	 * If SMT has been disabled and SMT siblings are in HLT, bring them back
+	 * online and offline them again so that they end up in MWAIT proper.
+	 *
+	 * Called with hotplug enabled.
+	 */
+	if (old != CPU_SMT_DISABLED && old != CPU_SMT_FORCE_DISABLED)
+		return 0;
+
+	ret = cpuhp_smt_enable();
+	if (ret)
+		return ret;
+
+	ret = cpuhp_smt_disable(old);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(arch_cpu_rescan_dead_smt_siblings);
+7-47
arch/x86/kernel/smpboot.c
···
 	local_irq_disable();
 }

+/*
+ * We need to flush the caches before going to sleep, lest we have
+ * dirty data in our caches when we come back up.
+ */
 void __noreturn mwait_play_dead(unsigned int eax_hint)
 {
 	struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);
···
 		native_halt();
 	}
 }
-}
-
-/*
- * We need to flush the caches before going to sleep, lest we have
- * dirty data in our caches when we come back up.
- */
-static inline void mwait_play_dead_cpuid_hint(void)
-{
-	unsigned int eax, ebx, ecx, edx;
-	unsigned int highest_cstate = 0;
-	unsigned int highest_subcstate = 0;
-	int i;
-
-	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
-	    boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
-		return;
-	if (!this_cpu_has(X86_FEATURE_MWAIT))
-		return;
-	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
-		return;
-
-	eax = CPUID_LEAF_MWAIT;
-	ecx = 0;
-	native_cpuid(&eax, &ebx, &ecx, &edx);
-
-	/*
-	 * eax will be 0 if EDX enumeration is not valid.
-	 * Initialized below to cstate, sub_cstate value when EDX is valid.
-	 */
-	if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED)) {
-		eax = 0;
-	} else {
-		edx >>= MWAIT_SUBSTATE_SIZE;
-		for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) {
-			if (edx & MWAIT_SUBSTATE_MASK) {
-				highest_cstate = i;
-				highest_subcstate = edx & MWAIT_SUBSTATE_MASK;
-			}
-		}
-		eax = (highest_cstate << MWAIT_SUBSTATE_SIZE) |
-		      (highest_subcstate - 1);
-	}
-
-	mwait_play_dead(eax);
 }

 /*
···
 	play_dead_common();
 	tboot_shutdown(TB_SHUTDOWN_WFS);

-	mwait_play_dead_cpuid_hint();
-	if (cpuidle_play_dead())
-		hlt_play_dead();
+	/* Below returns only on error. */
+	cpuidle_play_dead();
+	hlt_play_dead();
 }

 #else /* ... !CONFIG_HOTPLUG_CPU */
+8-1
arch/x86/kvm/mmu/mmu.c
···
 {
 	u64 error_code = PFERR_GUEST_FINAL_MASK;
 	u8 level = PG_LEVEL_4K;
+	u64 direct_bits;
 	u64 end;
 	int r;

 	if (!vcpu->kvm->arch.pre_fault_allowed)
 		return -EOPNOTSUPP;
+
+	if (kvm_is_gfn_alias(vcpu->kvm, gpa_to_gfn(range->gpa)))
+		return -EINVAL;

 	/*
 	 * reload is efficient when called repeatedly, so we can do it on
···
 	if (r)
 		return r;

+	direct_bits = 0;
 	if (kvm_arch_has_private_mem(vcpu->kvm) &&
 	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
 		error_code |= PFERR_PRIVATE_ACCESS;
+	else
+		direct_bits = gfn_to_gpa(kvm_gfn_direct_bits(vcpu->kvm));

 	/*
 	 * Shadow paging uses GVA for kvm page fault, so restrict to
 	 * two-dimensional paging.
 	 */
-	r = kvm_tdp_map_page(vcpu, range->gpa, error_code, &level);
+	r = kvm_tdp_map_page(vcpu, range->gpa | direct_bits, error_code, &level);
 	if (r < 0)
 		return r;
+35-9
arch/x86/kvm/svm/sev.c
···
 	}
 }

+static bool is_sev_snp_initialized(void)
+{
+	struct sev_user_data_snp_status *status;
+	struct sev_data_snp_addr buf;
+	bool initialized = false;
+	int ret, error = 0;
+
+	status = snp_alloc_firmware_page(GFP_KERNEL | __GFP_ZERO);
+	if (!status)
+		return false;
+
+	buf.address = __psp_pa(status);
+	ret = sev_do_cmd(SEV_CMD_SNP_PLATFORM_STATUS, &buf, &error);
+	if (ret) {
+		pr_err("SEV: SNP_PLATFORM_STATUS failed ret=%d, fw_error=%d (%#x)\n",
+		       ret, error, error);
+		goto out;
+	}
+
+	initialized = !!status->state;
+
+out:
+	snp_free_firmware_page(status);
+
+	return initialized;
+}
+
 void __init sev_hardware_setup(void)
 {
 	unsigned int eax, ebx, ecx, edx, sev_asid_count, sev_es_asid_count;
···
 	sev_snp_supported = sev_snp_enabled && cc_platform_has(CC_ATTR_HOST_SEV_SNP);

 out:
+	if (sev_enabled) {
+		init_args.probe = true;
+		if (sev_platform_init(&init_args))
+			sev_supported = sev_es_supported = sev_snp_supported = false;
+		else if (sev_snp_supported)
+			sev_snp_supported = is_sev_snp_initialized();
+	}
+
 	if (boot_cpu_has(X86_FEATURE_SEV))
 		pr_info("SEV %s (ASIDs %u - %u)\n",
 			sev_supported ? min_sev_asid <= max_sev_asid ? "enabled" :
···
 	sev_supported_vmsa_features = 0;
 	if (sev_es_debug_swap_enabled)
 		sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
-
-	if (!sev_enabled)
-		return;
-
-	/*
-	 * Do both SNP and SEV initialization at KVM module load.
-	 */
-	init_args.probe = true;
-	sev_platform_init(&init_args);
 }

 void sev_hardware_unsetup(void)
+5-12
arch/x86/power/hibernate.c
···

 int arch_resume_nosmt(void)
 {
-	int ret = 0;
+	int ret;
+
 	/*
 	 * We reached this while coming out of hibernation. This means
 	 * that SMT siblings are sleeping in hlt, as mwait is not safe
···
 	 * Called with hotplug disabled.
 	 */
 	cpu_hotplug_enable();
-	if (cpu_smt_control == CPU_SMT_DISABLED ||
-	    cpu_smt_control == CPU_SMT_FORCE_DISABLED) {
-		enum cpuhp_smt_control old = cpu_smt_control;

-		ret = cpuhp_smt_enable();
-		if (ret)
-			goto out;
-		ret = cpuhp_smt_disable(old);
-		if (ret)
-			goto out;
-	}
-out:
+	ret = arch_cpu_rescan_dead_smt_siblings();
+
 	cpu_hotplug_disable();
+
 	return ret;
 }
···
 	if (!plug || rq_list_empty(&plug->mq_list))
 		return false;

-	rq_list_for_each(&plug->mq_list, rq) {
-		if (rq->q == q) {
-			if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
-			    BIO_MERGE_OK)
-				return true;
-			break;
-		}
+	rq = plug->mq_list.tail;
+	if (rq->q == q)
+		return blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
+		       BIO_MERGE_OK;
+	else if (!plug->multiple_queues)
+		return false;

-		/*
-		 * Only keep iterating plug list for merges if we have multiple
-		 * queues
-		 */
-		if (!plug->multiple_queues)
-			break;
+	rq_list_for_each(&plug->mq_list, rq) {
+		if (rq->q != q)
+			continue;
+		if (blk_attempt_bio_merge(q, rq, bio, nr_segs, false) ==
+		    BIO_MERGE_OK)
+			return true;
+		break;
 	}
 	return false;
 }
+6-2
block/blk-zoned.c
···
 	if (bio_flagged(bio, BIO_EMULATES_ZONE_APPEND)) {
 		bio->bi_opf &= ~REQ_OP_MASK;
 		bio->bi_opf |= REQ_OP_ZONE_APPEND;
+		bio_clear_flag(bio, BIO_EMULATES_ZONE_APPEND);
 	}

 	/*
···
 	spin_unlock_irqrestore(&zwplug->lock, flags);

 	bdev = bio->bi_bdev;
-	submit_bio_noacct_nocheck(bio);

 	/*
 	 * blk-mq devices will reuse the extra reference on the request queue
···
 	 * path for BIO-based devices will not do that. So drop this extra
 	 * reference here.
 	 */
-	if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO))
+	if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) {
+		bdev->bd_disk->fops->submit_bio(bio);
 		blk_queue_exit(bdev->bd_disk->queue);
+	} else {
+		blk_mq_submit_bio(bio);
+	}

 put_zwplug:
 	/* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */
···
 static DEFINE_MUTEX(isolated_cpus_lock);
 static DEFINE_MUTEX(round_robin_lock);

-static unsigned long power_saving_mwait_eax;
+static unsigned int power_saving_mwait_eax;

 static unsigned char tsc_detected_unstable;
 static unsigned char tsc_marked_unstable;
+3-6
drivers/acpi/apei/einj-core.c
···
 	}

 	einj_dev = faux_device_create("acpi-einj", NULL, &einj_device_ops);
-	if (!einj_dev)
-		return -ENODEV;

-	einj_initialized = true;
+	if (einj_dev)
+		einj_initialized = true;

 	return 0;
 }

 static void __exit einj_exit(void)
 {
-	if (einj_initialized)
-		faux_device_destroy(einj_dev);
-
+	faux_device_destroy(einj_dev);
 }

 module_init(einj_init);
+1-1
drivers/acpi/cppc_acpi.c
···
 	struct cpc_desc *cpc_ptr;
 	int cpu;

-	for_each_possible_cpu(cpu) {
+	for_each_present_cpu(cpu) {
 		cpc_ptr = per_cpu(cpc_desc_ptr, cpu);
 		desired_reg = &cpc_ptr->cpc_regs[DESIRED_PERF];
 		if (!CPC_IN_SYSTEM_MEMORY(desired_reg) &&
+17
drivers/acpi/ec.c
···
 #include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/list.h>
+#include <linux/printk.h>
 #include <linux/spinlock.h>
 #include <linux/slab.h>
+#include <linux/string.h>
 #include <linux/suspend.h>
 #include <linux/acpi.h>
 #include <linux/dmi.h>
···
 	 * Asus X50GL:
 	 * https://bugzilla.kernel.org/show_bug.cgi?id=11880
 	 */
+		goto out;
+	}
+
+	if (!strstarts(ecdt_ptr->id, "\\")) {
+		/*
+		 * The ECDT table on some MSI notebooks contains invalid data, together
+		 * with an empty ID string ("").
+		 *
+		 * Section 5.2.15 of the ACPI specification requires the ID string to be
+		 * a "fully qualified reference to the (...) embedded controller device",
+		 * so this string always has to start with a backslash.
+		 *
+		 * By verifying this we can avoid such faulty ECDT tables in a safe way.
+		 */
+		pr_err(FW_BUG "Ignoring ECDT due to invalid ID string \"%s\"\n", ecdt_ptr->id);
 		goto out;
 	}
···
 	 * after acpi_cppc_processor_probe() has been called for all online CPUs
 	 */
 	acpi_processor_init_invariance_cppc();
+
+	acpi_idle_rescan_dead_smt_siblings();
+
 	return 0;
 err:
 	driver_unregister(&acpi_processor_driver);
+8
drivers/acpi/processor_idle.c
···
 #include <acpi/processor.h>
 #include <linux/context_tracking.h>

+#include "internal.h"
+
 /*
  * Include the apic definitions for x86 to have the APIC timer related defines
  * available also for UP (on SMP it gets magically included via linux/smp.h).
···
 };

 #ifdef CONFIG_ACPI_PROCESSOR_CSTATE
+void acpi_idle_rescan_dead_smt_siblings(void)
+{
+	if (cpuidle_get_driver() == &acpi_idle_driver)
+		arch_cpu_rescan_dead_smt_siblings();
+}
+
 static
 DEFINE_PER_CPU(struct acpi_processor_cx * [CPUIDLE_STATE_MAX], acpi_cstate);
+7
drivers/acpi/resource.c
···
 		},
 	},
 	{
+		/* MACHENIKE L16P/L16P */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "MACHENIKE"),
+			DMI_MATCH(DMI_BOARD_NAME, "L16P"),
+		},
+	},
+	{
 		/*
 		 * TongFang GM5HG0A in case of the SKIKK Vanaheim relabel the
 		 * board-name is changed, so check OEM strings instead. Note
+2-1
drivers/base/faux.c
···
 	.name = "faux_driver",
 	.bus = &faux_bus_type,
 	.probe_type = PROBE_FORCE_SYNCHRONOUS,
+	.suppress_bind_attrs = true,
 };

 static void faux_device_release(struct device *dev)
···
 	 * successful is almost impossible to determine by the caller.
 	 */
 	if (!dev->driver) {
-		dev_err(dev, "probe did not succeed, tearing down the device\n");
+		dev_dbg(dev, "probe did not succeed, tearing down the device\n");
 		faux_device_destroy(faux_dev);
 		faux_dev = NULL;
 	}
+5-6
drivers/block/loop.c
···
 	lo->lo_flags &= ~LOOP_SET_STATUS_CLEARABLE_FLAGS;
 	lo->lo_flags |= (info->lo_flags & LOOP_SET_STATUS_SETTABLE_FLAGS);

-	if (size_changed) {
-		loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
-					   lo->lo_backing_file);
-		loop_set_size(lo, new_size);
-	}
-
 	/* update the direct I/O flag if lo_offset changed */
 	loop_update_dio(lo);
···
 	blk_mq_unfreeze_queue(lo->lo_queue, memflags);
 	if (partscan)
 		clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
+	if (!err && size_changed) {
+		loff_t new_size = get_size(lo->lo_offset, lo->lo_sizelimit,
+					   lo->lo_backing_file);
+		loop_set_size(lo, new_size);
+	}
 out_unlock:
 	mutex_unlock(&lo->lo_mutex);
 	if (partscan)
+18-13
drivers/bluetooth/btintel_pcie.c
···
 static int btintel_pcie_start_rx(struct btintel_pcie_data *data)
 {
 	int i, ret;
+	struct rxq *rxq = &data->rxq;

-	for (i = 0; i < BTINTEL_PCIE_RX_MAX_QUEUE; i++) {
+	/* Post (BTINTEL_PCIE_RX_DESCS_COUNT - 3) buffers to overcome the
+	 * hardware issues leading to race condition at the firmware.
+	 */
+
+	for (i = 0; i < rxq->count - 3; i++) {
 		ret = btintel_pcie_submit_rx(data);
 		if (ret)
 			return ret;
···
 	 * + size of index * Number of queues(2) * type of index array(4)
 	 * + size of context information
 	 */
-	total = (sizeof(struct tfd) + sizeof(struct urbd0) + sizeof(struct frbd)
-		+ sizeof(struct urbd1)) * BTINTEL_DESCS_COUNT;
+	total = (sizeof(struct tfd) + sizeof(struct urbd0)) * BTINTEL_PCIE_TX_DESCS_COUNT;
+	total += (sizeof(struct frbd) + sizeof(struct urbd1)) * BTINTEL_PCIE_RX_DESCS_COUNT;

 	/* Add the sum of size of index array and size of ci struct */
 	total += (sizeof(u16) * BTINTEL_PCIE_NUM_QUEUES * 4) + sizeof(struct ctx_info);
···
 	data->dma_v_addr = v_addr;

 	/* Setup descriptor count */
-	data->txq.count = BTINTEL_DESCS_COUNT;
-	data->rxq.count = BTINTEL_DESCS_COUNT;
+	data->txq.count = BTINTEL_PCIE_TX_DESCS_COUNT;
+	data->rxq.count = BTINTEL_PCIE_RX_DESCS_COUNT;

 	/* Setup tfds */
 	data->txq.tfds_p_addr = p_addr;
 	data->txq.tfds = v_addr;

-	p_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
-	v_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
+	p_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
+	v_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);

 	/* Setup urbd0 */
 	data->txq.urbd0s_p_addr = p_addr;
 	data->txq.urbd0s = v_addr;

-	p_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
-	v_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
+	p_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
+	v_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);

 	/* Setup FRBD*/
 	data->rxq.frbds_p_addr = p_addr;
 	data->rxq.frbds = v_addr;

-	p_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
-	v_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
+	p_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
+	v_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);

 	/* Setup urbd1 */
 	data->rxq.urbd1s_p_addr = p_addr;
 	data->rxq.urbd1s = v_addr;

-	p_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
-	v_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
+	p_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
+	v_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);

 	/* Setup data buffers for txq */
 	err = btintel_pcie_setup_txq_bufs(data, &data->txq);
+5-5
drivers/bluetooth/btintel_pcie.h
···
 /* Default interrupt timeout in msec */
 #define BTINTEL_DEFAULT_INTR_TIMEOUT_MS	3000

-/* The number of descriptors in TX/RX queues */
-#define BTINTEL_DESCS_COUNT	16
+/* The number of descriptors in TX queues */
+#define BTINTEL_PCIE_TX_DESCS_COUNT	32
+
+/* The number of descriptors in RX queues */
+#define BTINTEL_PCIE_RX_DESCS_COUNT	64

 /* Number of Queue for TX and RX
  * It indicates the index of the IA(Index Array)
···

 /* Doorbell vector for TFD */
 #define BTINTEL_PCIE_TX_DB_VEC	0
-
-/* Number of pending RX requests for downlink */
-#define BTINTEL_PCIE_RX_MAX_QUEUE	6

 /* Doorbell vector for FRBD */
 #define BTINTEL_PCIE_RX_DB_VEC	5
+2-2
drivers/cpufreq/rcpufreq_dt.rs
···
 }

 /// Finds supply name for the CPU from DT.
-fn find_supply_names(dev: &Device, cpu: u32) -> Option<KVec<CString>> {
+fn find_supply_names(dev: &Device, cpu: cpu::CpuId) -> Option<KVec<CString>> {
     // Try "cpu0" for older DTs, fallback to "cpu".
-    let name = (cpu == 0)
+    let name = (cpu.as_u32() == 0)
         .then(|| find_supply_name_exact(dev, "cpu0"))
         .flatten()
         .or_else(|| find_supply_name_exact(dev, "cpu"))?;
+1-1
drivers/dma-buf/dma-buf.c
···
 		 * Catch exporters making buffers inaccessible even when
 		 * attachments preventing that exist.
 		 */
-		WARN_ON_ONCE(ret == EBUSY);
+		WARN_ON_ONCE(ret == -EBUSY);
 		if (ret)
 			return ERR_PTR(ret);
 	}
···
 #define HDMI_PLL_LOCK		BIT(31)
 #define HDMI_PLL_LOCK_G12A	(3 << 30)

-#define PIXEL_FREQ_1000_1001(_freq)	\
-	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)
-#define PHY_FREQ_1000_1001(_freq)	\
-	(PIXEL_FREQ_1000_1001(DIV_ROUND_DOWN_ULL(_freq, 10ULL)) * 10)
+#define FREQ_1000_1001(_freq)	DIV_ROUND_CLOSEST_ULL((_freq) * 1000ULL, 1001ULL)

 /* VID PLL Dividers */
 enum {
···
 			  pll_freq);
 }

+static bool meson_vclk_freqs_are_matching_param(unsigned int idx,
+						unsigned long long phy_freq,
+						unsigned long long vclk_freq)
+{
+	DRM_DEBUG_DRIVER("i = %d vclk_freq = %lluHz alt = %lluHz\n",
+			 idx, params[idx].vclk_freq,
+			 FREQ_1000_1001(params[idx].vclk_freq));
+	DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
+			 idx, params[idx].phy_freq,
+			 FREQ_1000_1001(params[idx].phy_freq));
+
+	/* Match strict frequency */
+	if (phy_freq == params[idx].phy_freq &&
+	    vclk_freq == params[idx].vclk_freq)
+		return true;
+
+	/* Match 1000/1001 variant: vclk deviation has to be less than 1kHz
+	 * (drm EDID is defined in 1kHz steps, so everything smaller must be
+	 * rounding error) and the PHY freq deviation has to be less than
+	 * 10kHz (as the TMDS clock is 10 times the pixel clock, so anything
+	 * smaller must be rounding error as well).
+	 */
+	if (abs(vclk_freq - FREQ_1000_1001(params[idx].vclk_freq)) < 1000 &&
+	    abs(phy_freq - FREQ_1000_1001(params[idx].phy_freq)) < 10000)
+		return true;

+	/* no match */
+	return false;
+}
+
 enum drm_mode_status
 meson_vclk_vic_supported_freq(struct meson_drm *priv,
 			      unsigned long long phy_freq,
···
 	}

 	for (i = 0 ; params[i].pixel_freq ; ++i) {
-		DRM_DEBUG_DRIVER("i = %d pixel_freq = %lluHz alt = %lluHz\n",
-				 i, params[i].pixel_freq,
-				 PIXEL_FREQ_1000_1001(params[i].pixel_freq));
-		DRM_DEBUG_DRIVER("i = %d phy_freq = %lluHz alt = %lluHz\n",
-				 i, params[i].phy_freq,
-				 PHY_FREQ_1000_1001(params[i].phy_freq));
-		/* Match strict frequency */
-		if (phy_freq == params[i].phy_freq &&
-		    vclk_freq == params[i].vclk_freq)
-			return MODE_OK;
-		/* Match 1000/1001 variant */
-		if (phy_freq == PHY_FREQ_1000_1001(params[i].phy_freq) &&
-		    vclk_freq == PIXEL_FREQ_1000_1001(params[i].vclk_freq))
+		if (meson_vclk_freqs_are_matching_param(i, phy_freq, vclk_freq))
 			return MODE_OK;
 	}
···
 	}

 	for (freq = 0 ; params[freq].pixel_freq ; ++freq) {
-		if ((phy_freq == params[freq].phy_freq ||
-		     phy_freq == PHY_FREQ_1000_1001(params[freq].phy_freq)) &&
-		    (vclk_freq == params[freq].vclk_freq ||
-		     vclk_freq == PIXEL_FREQ_1000_1001(params[freq].vclk_freq))) {
+		if (meson_vclk_freqs_are_matching_param(freq, phy_freq,
+							vclk_freq)) {
 			if (vclk_freq != params[freq].vclk_freq)
 				vic_alternate_clock = true;
 			else
+1
drivers/gpu/drm/sitronix/Kconfig
···55 select DRM_GEM_SHMEM_HELPER66 select DRM_KMS_HELPER77 select REGMAP_I2C88+ select VIDEOMODE_HELPERS89 help910 DRM driver for Sitronix ST7571 panels controlled over I2C.1011
+6-6
drivers/gpu/drm/vc4/vc4_hdmi.c
···560560 if (ret)561561 return ret;562562563563- ret = drm_connector_hdmi_audio_init(connector, dev->dev,564564- &vc4_hdmi_audio_funcs,565565- 8, false, -1);566566- if (ret)567567- return ret;568568-569563 drm_connector_helper_add(connector, &vc4_hdmi_connector_helper_funcs);570564571565 /*···22842290 dev_err(dev, "Could not register CPU DAI: %d\n", ret);22852291 return ret;22862292 }22932293+22942294+ ret = drm_connector_hdmi_audio_init(&vc4_hdmi->connector, dev,22952295+ &vc4_hdmi_audio_funcs, 8, false,22962296+ -1);22972297+ if (ret)22982298+ return ret;2287229922882300 dai_link->cpus = &vc4_hdmi->audio.cpu;22892301 dai_link->codecs = &vc4_hdmi->audio.codec;
···764764 return false;765765 }766766767767- if (range_size <= SZ_64K && !supports_4K_migration(vm->xe)) {767767+ if (range_size < SZ_64K && !supports_4K_migration(vm->xe)) {768768 drm_dbg(&vm->xe->drm, "Platform doesn't support SZ_4K range migration\n");769769 return false;770770 }
+7-5
drivers/idle/intel_idle.c
···152152 int index, bool irqoff)153153{154154 struct cpuidle_state *state = &drv->states[index];155155- unsigned long eax = flg2MWAIT(state->flags);156156- unsigned long ecx = 1*irqoff; /* break on interrupt flag */155155+ unsigned int eax = flg2MWAIT(state->flags);156156+ unsigned int ecx = 1*irqoff; /* break on interrupt flag */157157158158 mwait_idle_with_hints(eax, ecx);159159···226226static __cpuidle int intel_idle_s2idle(struct cpuidle_device *dev,227227 struct cpuidle_driver *drv, int index)228228{229229- unsigned long ecx = 1; /* break on interrupt flag */230229 struct cpuidle_state *state = &drv->states[index];231231- unsigned long eax = flg2MWAIT(state->flags);230230+ unsigned int eax = flg2MWAIT(state->flags);231231+ unsigned int ecx = 1; /* break on interrupt flag */232232233233 if (state->flags & CPUIDLE_FLAG_INIT_XSTATE)234234 fpu_idle_fpregs();···25072507 pr_debug("Local APIC timer is reliable in %s\n",25082508 boot_cpu_has(X86_FEATURE_ARAT) ? "all C-states" : "C1");2509250925102510+ arch_cpu_rescan_dead_smt_siblings();25112511+25102512 return 0;2511251325122514hp_setup_fail:···25202518 return retval;2521251925222520}25232523-device_initcall(intel_idle_init);25212521+subsys_initcall_sync(intel_idle_init);2524252225252523/*25262524 * We are not really modular, but we used to support that. Meaning we also
+2-2
drivers/iommu/tegra-smmu.c
···559559{560560 unsigned int pd_index = iova_pd_index(iova);561561 struct tegra_smmu *smmu = as->smmu;562562- struct tegra_pd *pd = as->pd;562562+ u32 *pd = &as->pd->val[pd_index];563563 unsigned long offset = pd_index * sizeof(*pd);564564565565 /* Set the page directory entry first */566566- pd->val[pd_index] = value;566566+ *pd = value;567567568568 /* The flush the page directory entry from caches */569569 dma_sync_single_range_for_device(smmu->dev, as->pd_dma, offset,
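The tegra-smmu fix retypes ``pd`` from ``struct tegra_pd *`` to a ``u32 *`` entry pointer, so the unchanged expression ``pd_index * sizeof(*pd)`` now scales by the 4-byte entry size instead of the whole table size, yielding the correct cache-flush offset. A minimal sketch of the ``sizeof(*ptr)`` pitfall (hypothetical table layout; sizes chosen for illustration):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical page-directory layout: 1024 32-bit entries. */
struct pd_table {
	uint32_t val[1024];
};

static struct pd_table table;

/* Offset computed through a pointer to the whole table: sizeof(*pd)
 * is the table size (4096), so every nonzero index is wrong. */
static size_t offset_via_table_ptr(unsigned int i)
{
	struct pd_table *pd = &table;

	return i * sizeof(*pd);		/* i * 4096: the bug being fixed */
}

/* Offset computed through a pointer to one entry: sizeof(*pd) is the
 * entry size (4), the byte offset actually needed for a cache flush. */
static size_t offset_via_entry_ptr(unsigned int i)
{
	uint32_t *pd = &table.val[i];

	return i * sizeof(*pd);		/* i * 4: correct */
}
```

The two expressions are textually identical; only the pointee type differs, which is exactly why the bug is easy to miss in review.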
···11# SPDX-License-Identifier: GPL-2.022config FSL_ENETC_CORE33 tristate44+ select NXP_NETC_LIB if NXP_NTMP45 help56 This module supports common functionality between the PF and VF67 drivers for the NXP ENETC controller.···2221 This module provides common functionalities for both ENETC and NETC2322 Switch, such as NETC Table Management Protocol (NTMP) 2.0, common tc2423 flower and debugfs interfaces and so on.2424+2525+config NXP_NTMP2626+ bool25272628config FSL_ENETC2729 tristate "ENETC PF driver"···4945 select FSL_ENETC_CORE5046 select FSL_ENETC_MDIO5147 select NXP_ENETC_PF_COMMON5252- select NXP_NETC_LIB4848+ select NXP_NTMP5349 select PHYLINK5450 select DIMLIB5551 help
+4-4
drivers/net/ethernet/intel/e1000/e1000_main.c
···477477478478 cancel_delayed_work_sync(&adapter->phy_info_task);479479 cancel_delayed_work_sync(&adapter->fifo_stall_task);480480-481481- /* Only kill reset task if adapter is not resetting */482482- if (!test_bit(__E1000_RESETTING, &adapter->flags))483483- cancel_work_sync(&adapter->reset_task);484480}485481486482void e1000_down(struct e1000_adapter *adapter)···12611265 e1000_release_manageability(adapter);1262126612631267 unregister_netdev(netdev);12681268+12691269+ /* Only kill reset task if adapter is not resetting */12701270+ if (!test_bit(__E1000_RESETTING, &adapter->flags))12711271+ cancel_work_sync(&adapter->reset_task);1264127212651273 e1000_phy_hw_reset(hw);12661274
···15461546 * @vf: pointer to the VF structure15471547 * @flr: VFLR was issued or not15481548 *15491549- * Returns true if the VF is in reset, resets successfully, or resets15501550- * are disabled and false otherwise.15491549+ * Return: True if reset was performed successfully or if resets are disabled.15501550+ * False if reset is already in progress.15511551 **/15521552bool i40e_reset_vf(struct i40e_vf *vf, bool flr)15531553{···1566156615671567 /* If VF is being reset already we don't need to continue. */15681568 if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))15691569- return true;15691569+ return false;1570157015711571 i40e_trigger_vf_reset(vf, flr);15721572···43284328 reg = rd32(hw, I40E_GLGEN_VFLRSTAT(reg_idx));43294329 if (reg & BIT(bit_idx))43304330 /* i40e_reset_vf will clear the bit in GLGEN_VFLRSTAT */43314331- i40e_reset_vf(vf, true);43314331+ if (!i40e_reset_vf(vf, true)) {43324332+ /* At least one VF did not finish resetting, retry next time */43334333+ set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);43344334+ }43324335 }4333433643344337 return 0;
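The i40e change tightens the ``i40e_reset_vf()`` contract: return ``false`` when a reset is already in flight, and have the VFLR handler mark the event pending so it is retried later instead of being silently dropped. A sketch of that pattern in plain C (``vf_state``, ``vflr_event_pending`` and both function names are invented stand-ins; the kernel uses ``test_and_set_bit()`` for the atomic check):

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-ins for the per-VF state and the PF's pending flag. */
struct vf_state { bool resetting; };
static bool vflr_event_pending;

/* Returns true if the reset ran; false if one was already in progress,
 * mirroring the patched i40e_reset_vf() contract. */
static bool reset_vf(struct vf_state *vf)
{
	if (vf->resetting)	/* kernel: test_and_set_bit() */
		return false;
	vf->resetting = true;
	/* ... perform the reset ... */
	vf->resetting = false;
	return true;
}

/* Caller notes the miss and retries on the next pass instead of
 * assuming the reset completed. */
static void handle_vflr(struct vf_state *vf)
{
	if (!reset_vf(vf))
		vflr_event_pending = true;
}
```

The key design point matches the patch: the boolean result distinguishes "done (or disabled)" from "busy, come back later", which lets the event loop re-arm itself.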
+11
drivers/net/ethernet/intel/iavf/iavf_main.c
···32093209 }3210321032113211continue_reset:32123212+ /* If we are still early in the state machine, just restart. */32133213+ if (adapter->state <= __IAVF_INIT_FAILED) {32143214+ iavf_shutdown_adminq(hw);32153215+ iavf_change_state(adapter, __IAVF_STARTUP);32163216+ iavf_startup(adapter);32173217+ queue_delayed_work(adapter->wq, &adapter->watchdog_task,32183218+ msecs_to_jiffies(30));32193219+ netdev_unlock(netdev);32203220+ return;32213221+ }32223222+32123223 /* We don't use netif_running() because it may be true prior to32133224 * ndo_open() returning, so we can't assume it means all our open32143225 * tasks have finished, since we're not holding the rtnl_lock here.
+17
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
···7979 return iavf_status_to_errno(status);8080 received_op =8181 (enum virtchnl_ops)le32_to_cpu(event->desc.cookie_high);8282+8383+ if (received_op == VIRTCHNL_OP_EVENT) {8484+ struct iavf_adapter *adapter = hw->back;8585+ struct virtchnl_pf_event *vpe =8686+ (struct virtchnl_pf_event *)event->msg_buf;8787+8888+ if (vpe->event != VIRTCHNL_EVENT_RESET_IMPENDING)8989+ continue;9090+9191+ dev_info(&adapter->pdev->dev, "Reset indication received from the PF\n");9292+ if (!(adapter->flags & IAVF_FLAG_RESET_PENDING))9393+ iavf_schedule_reset(adapter,9494+ IAVF_FLAG_RESET_PENDING);9595+9696+ return -EIO;9797+ }9898+8299 if (op_to_poll == received_op)83100 break;84101 }
···13701370 struct mlx5hws_cmd_set_fte_attr fte_attr = {0};13711371 struct mlx5hws_cmd_forward_tbl *fw_island;13721372 struct mlx5hws_action *action;13731373- u32 i /*, packet_reformat_id*/;13741374- int ret;13731373+ int ret, last_dest_idx = -1;13741374+ u32 i;1375137513761376 if (num_dest <= 1) {13771377 mlx5hws_err(ctx, "Action must have multiple dests\n");···14011401 dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id;14021402 fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;14031403 fte_attr.ignore_flow_level = ignore_flow_level;14041404- /* ToDo: In SW steering we have a handling of 'go to WIRE'14051405- * destination here by upper layer setting 'is_wire_ft' flag14061406- * if the destination is wire.14071407- * This is because uplink should be last dest in the list.14081408- */14041404+ if (dests[i].is_wire_ft)14051405+ last_dest_idx = i;14091406 break;14101407 case MLX5HWS_ACTION_TYP_VPORT:14111408 dest_list[i].destination_type = MLX5_FLOW_DESTINATION_TYPE_VPORT;···14251428 goto free_dest_list;14261429 }14271430 }14311431+14321432+ if (last_dest_idx != -1)14331433+ swap(dest_list[last_dest_idx], dest_list[num_dest - 1]);1428143414291435 fte_attr.dests_num = num_dest;14301436 fte_attr.dests = dest_list;
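The mlx5hws change replaces the old ToDo with real handling: remember the index of the wire ("go to uplink") destination while building the list, then swap it to the tail, since the uplink must be the last destination. The reordering itself can be sketched as (``struct dest`` and ``put_wire_last`` are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Invented minimal destination entry. */
struct dest { int id; bool is_wire; };

#define SWAP(a, b) do { struct dest t = (a); (a) = (b); (b) = t; } while (0)

/* Move the (at most one) wire destination to the tail, as the patch does
 * with swap(dest_list[last_dest_idx], dest_list[num_dest - 1]). */
static void put_wire_last(struct dest *d, int n)
{
	int last_wire = -1;

	for (int i = 0; i < n; i++)
		if (d[i].is_wire)
			last_wire = i;

	if (last_wire != -1)
		SWAP(d[last_wire], d[n - 1]);
}
```

A single swap (rather than a stable rotate) is enough here because only the wire entry's position is constrained; the relative order of the other destinations does not matter.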
···12521252 */12531253static int prepare_extradata(struct netconsole_target *nt)12541254{12551255- u32 fields = SYSDATA_CPU_NR | SYSDATA_TASKNAME;12561255 int extradata_len;1257125612581257 /* userdata was appended when configfs write helper was called···12591260 */12601261 extradata_len = nt->userdata_length;1261126212621262- if (!(nt->sysdata_fields & fields))12631263+ if (!nt->sysdata_fields)12631264 goto out;1264126512651266 if (nt->sysdata_fields & SYSDATA_CPU_NR)
+2-1
drivers/net/netdevsim/netdev.c
···371371 int done;372372373373 done = nsim_rcv(rq, budget);374374- napi_complete(napi);374374+ if (done < budget)375375+ napi_complete_done(napi, done);375376376377 return done;377378}
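The netdevsim fix follows the NAPI contract: signal completion via ``napi_complete_done()`` only when fewer packets than the budget were processed; consuming the full budget tells the core to poll again. A toy userspace simulation of that loop (all names invented; ``completed`` stands in for the NAPI-scheduled state):

```c
#include <assert.h>

/* Toy RX queue: 'pending' packets waiting to be received. */
struct rq { int pending; int completed; };

/* Process up to 'budget' packets; mark completion only when the poll
 * did not exhaust the budget, mirroring the napi_complete_done() rule. */
static int poll(struct rq *rq, int budget)
{
	int done = rq->pending < budget ? rq->pending : budget;

	rq->pending -= done;
	if (done < budget)
		rq->completed = 1;	/* napi_complete_done(napi, done) */
	return done;
}

/* The core keeps re-polling while the budget keeps being exhausted. */
static int drain(struct rq *rq, int budget)
{
	int polls = 0;

	while (!rq->completed) {
		poll(rq, budget);
		polls++;
	}
	return polls;
}
```

With 10 pending packets and a budget of 4, the queue drains in three polls (4, 4, then 2); unconditionally completing after the first poll, as the old code effectively did, would break this re-poll handshake.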
+12
drivers/net/phy/mdio_bus.c
···445445446446 lockdep_assert_held_once(&bus->mdio_lock);447447448448+ if (addr >= PHY_MAX_ADDR)449449+ return -ENXIO;450450+448451 if (bus->read)449452 retval = bus->read(bus, addr, regnum);450453 else···476473 int err;477474478475 lockdep_assert_held_once(&bus->mdio_lock);476476+477477+ if (addr >= PHY_MAX_ADDR)478478+ return -ENXIO;479479480480 if (bus->write)481481 err = bus->write(bus, addr, regnum, val);···541535542536 lockdep_assert_held_once(&bus->mdio_lock);543537538538+ if (addr >= PHY_MAX_ADDR)539539+ return -ENXIO;540540+544541 if (bus->read_c45)545542 retval = bus->read_c45(bus, addr, devad, regnum);546543 else···574565 int err;575566576567 lockdep_assert_held_once(&bus->mdio_lock);568568+569569+ if (addr >= PHY_MAX_ADDR)570570+ return -ENXIO;577571578572 if (bus->write_c45)579573 err = bus->write_c45(bus, addr, devad, regnum, val);
+12-6
drivers/net/phy/phy_caps.c
···188188 * When @exact is not set, we return either an exact match, or matching capabilities189189 * at lower speed, or the lowest matching speed, or NULL.190190 *191191+ * Non-exact matches will try to return an exact speed and duplex match, but may192192+ * return matching capabilities with same speed but a different duplex.193193+ *191194 * Returns: a matched link_capabilities according to the above process, NULL192195 * otherwise.193196 */···198195phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported,199196 bool exact)200197{201201- const struct link_capabilities *lcap, *last = NULL;198198+ const struct link_capabilities *lcap, *match = NULL, *last = NULL;202199203200 for_each_link_caps_desc_speed(lcap) {204201 if (linkmode_intersects(lcap->linkmodes, supported)) {···207204 if (lcap->speed == speed && lcap->duplex == duplex) {208205 return lcap;209206 } else if (!exact) {210210- if (lcap->speed <= speed)211211- return lcap;207207+ if (!match && lcap->speed <= speed)208208+ match = lcap;209209+210210+ if (lcap->speed < speed)211211+ break;212212 }213213 }214214 }215215216216- if (!exact)217217- return last;216216+ if (!match && !exact)217217+ match = last;218218219219- return NULL;219219+ return match;220220}221221EXPORT_SYMBOL_GPL(phy_caps_lookup);222222
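The phy_caps fix changes non-exact lookups to prefer a same-speed entry with a different duplex over immediately settling for a lower speed, falling back to the lowest supported speed only when nothing else matches. A simplified sketch of the fixed search over a descending-speed table (the ``supported``-linkmode filtering is omitted; the table and names are invented):

```c
#include <assert.h>
#include <stddef.h>

enum duplex { HALF, FULL };

/* Invented capability table, ordered highest to lowest speed like the
 * kernel's link_capabilities descriptors. */
struct cap { int speed; enum duplex duplex; };

static const struct cap caps[] = {
	{ 1000, FULL }, { 1000, HALF }, { 100, FULL }, { 100, HALF }, { 10, FULL },
};

static const struct cap *caps_lookup(int speed, enum duplex duplex, int exact)
{
	const struct cap *match = NULL, *last = NULL;

	for (size_t i = 0; i < sizeof(caps) / sizeof(caps[0]); i++) {
		const struct cap *c = &caps[i];

		last = c;
		if (c->speed == speed && c->duplex == duplex)
			return c;		/* exact speed + duplex */
		if (!exact) {
			if (!match && c->speed <= speed)
				match = c;	/* same speed, other duplex */
			if (c->speed < speed)
				break;		/* past the requested speed */
		}
	}
	if (!match && !exact)
		match = last;			/* lowest speed as last resort */
	return match;
}
```

The ``break`` after the first lower-speed candidate is what keeps the previously-recorded same-speed/other-duplex entry from being shadowed, which was the behavior the patch restores.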
···909909910910 /* NAPI functions as RCU section */911911 peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held());912912- peer_txq = netdev_get_tx_queue(peer_dev, queue_idx);912912+ peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL;913913914914 for (i = 0; i < budget; i++) {915915 void *ptr = __ptr_ring_consume(&rq->xdp_ring);···959959 rq->stats.vs.xdp_packets += done;960960 u64_stats_update_end(&rq->stats.syncp);961961962962- if (unlikely(netif_tx_queue_stopped(peer_txq)))962962+ if (peer_txq && unlikely(netif_tx_queue_stopped(peer_txq)))963963 netif_tx_wake_queue(peer_txq);964964965965 return done;
+25-8
drivers/net/wireless/ath/ath10k/mac.c
···44 * Copyright (c) 2011-2017 Qualcomm Atheros, Inc.55 * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.66 * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.77+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.78 */89910#include "mac.h"···10211020 return -ETIMEDOUT;1022102110231022 return ar->last_wmi_vdev_start_status;10231023+}10241024+10251025+static inline int ath10k_vdev_delete_sync(struct ath10k *ar)10261026+{10271027+ unsigned long time_left;10281028+10291029+ lockdep_assert_held(&ar->conf_mutex);10301030+10311031+ if (!test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map))10321032+ return 0;10331033+10341034+ if (test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags))10351035+ return -ESHUTDOWN;10361036+10371037+ time_left = wait_for_completion_timeout(&ar->vdev_delete_done,10381038+ ATH10K_VDEV_DELETE_TIMEOUT_HZ);10391039+ if (time_left == 0)10401040+ return -ETIMEDOUT;10411041+10421042+ return 0;10241043}1025104410261045static int ath10k_monitor_vdev_start(struct ath10k *ar, int vdev_id)···59215900 struct ath10k *ar = hw->priv;59225901 struct ath10k_vif *arvif = (void *)vif->drv_priv;59235902 struct ath10k_peer *peer;59245924- unsigned long time_left;59255903 int ret;59265904 int i;59275905···59605940 ath10k_warn(ar, "failed to delete WMI vdev %i: %d\n",59615941 arvif->vdev_id, ret);5962594259635963- if (test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map)) {59645964- time_left = wait_for_completion_timeout(&ar->vdev_delete_done,59655965- ATH10K_VDEV_DELETE_TIMEOUT_HZ);59665966- if (time_left == 0) {59675967- ath10k_warn(ar, "Timeout in receiving vdev delete response\n");59685968- goto out;59695969- }59435943+ ret = ath10k_vdev_delete_sync(ar);59445944+ if (ret) {59455945+ ath10k_warn(ar, "Error in receiving vdev delete response: %d\n", ret);59465946+ goto out;59705947 }5971594859725949 /* Some firmware revisions don't notify host about self-peer removal
+3-1
drivers/net/wireless/ath/ath10k/snoc.c
···938938939939 dev_set_threaded(ar->napi_dev, true);940940 ath10k_core_napi_enable(ar);941941- ath10k_snoc_irq_enable(ar);941941+ /* IRQs are left enabled when we restart due to a firmware crash */942942+ if (!test_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags))943943+ ath10k_snoc_irq_enable(ar);942944 ath10k_snoc_rx_post(ar);943945944946 clear_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags);
+15-14
drivers/net/wireless/ath/ath11k/core.c
···990990 INIT_LIST_HEAD(&ar->fw_stats.bcn);991991992992 init_completion(&ar->fw_stats_complete);993993+ init_completion(&ar->fw_stats_done);993994}994995995996void ath11k_fw_stats_free(struct ath11k_fw_stats *stats)···21352134{21362135 int ret;2137213621372137+ switch (ath11k_crypto_mode) {21382138+ case ATH11K_CRYPT_MODE_SW:21392139+ set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);21402140+ set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);21412141+ break;21422142+ case ATH11K_CRYPT_MODE_HW:21432143+ clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);21442144+ clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);21452145+ break;21462146+ default:21472147+ ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);21482148+ return -EINVAL;21492149+ }21502150+21382151 ret = ath11k_core_start_firmware(ab, ab->fw_mode);21392152 if (ret) {21402153 ath11k_err(ab, "failed to start firmware: %d\n", ret);···21652150 if (ret) {21662151 ath11k_err(ab, "failed to init DP: %d\n", ret);21672152 goto err_firmware_stop;21682168- }21692169-21702170- switch (ath11k_crypto_mode) {21712171- case ATH11K_CRYPT_MODE_SW:21722172- set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);21732173- set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);21742174- break;21752175- case ATH11K_CRYPT_MODE_HW:21762176- clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);21772177- clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);21782178- break;21792179- default:21802180- ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);21812181- return -EINVAL;21822153 }2183215421842155 if (ath11k_frame_mode == ATH11K_HW_TXRX_RAW)
···11// SPDX-License-Identifier: BSD-3-Clause-Clear22/*33 * Copyright (c) 2018-2020 The Linux Foundation. All rights reserved.44- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.44+ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.55 */6677#include <linux/vmalloc.h>···9393 spin_unlock_bh(&dbr_data->lock);9494}95959696-static void ath11k_debugfs_fw_stats_reset(struct ath11k *ar)9797-{9898- spin_lock_bh(&ar->data_lock);9999- ar->fw_stats_done = false;100100- ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);101101- ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);102102- spin_unlock_bh(&ar->data_lock);103103-}104104-10596void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats)10697{10798 struct ath11k_base *ab = ar->ab;108108- struct ath11k_pdev *pdev;109109- bool is_end;110110- static unsigned int num_vdev, num_bcn;111111- size_t total_vdevs_started = 0;112112- int i;9999+ bool is_end = true;113100114114- /* WMI_REQUEST_PDEV_STAT request has been already processed */115115-116116- if (stats->stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {117117- ar->fw_stats_done = true;118118- return;119119- }120120-121121- if (stats->stats_id == WMI_REQUEST_VDEV_STAT) {122122- if (list_empty(&stats->vdevs)) {123123- ath11k_warn(ab, "empty vdev stats");124124- return;125125- }126126- /* FW sends all the active VDEV stats irrespective of PDEV,127127- * hence limit until the count of all VDEVs started128128- */129129- for (i = 0; i < ab->num_radios; i++) {130130- pdev = rcu_dereference(ab->pdevs_active[i]);131131- if (pdev && pdev->ar)132132- total_vdevs_started += ar->num_started_vdevs;133133- }134134-135135- is_end = ((++num_vdev) == total_vdevs_started);136136-137137- list_splice_tail_init(&stats->vdevs,138138- &ar->fw_stats.vdevs);139139-140140- if (is_end) {141141- ar->fw_stats_done = true;142142- num_vdev = 0;143143- }144144- return;145145- }146146-101101+ /* WMI_REQUEST_PDEV_STAT, 
WMI_REQUEST_RSSI_PER_CHAIN_STAT and102102+ * WMI_REQUEST_VDEV_STAT requests have been already processed.103103+ */147104 if (stats->stats_id == WMI_REQUEST_BCN_STAT) {148105 if (list_empty(&stats->bcn)) {149106 ath11k_warn(ab, "empty bcn stats");···109152 /* Mark end until we reached the count of all started VDEVs110153 * within the PDEV111154 */112112- is_end = ((++num_bcn) == ar->num_started_vdevs);155155+ if (ar->num_started_vdevs)156156+ is_end = ((++ar->fw_stats.num_bcn_recvd) ==157157+ ar->num_started_vdevs);113158114159 list_splice_tail_init(&stats->bcn,115160 &ar->fw_stats.bcn);116161117117- if (is_end) {118118- ar->fw_stats_done = true;119119- num_bcn = 0;120120- }162162+ if (is_end)163163+ complete(&ar->fw_stats_done);121164 }122122-}123123-124124-static int ath11k_debugfs_fw_stats_request(struct ath11k *ar,125125- struct stats_request_params *req_param)126126-{127127- struct ath11k_base *ab = ar->ab;128128- unsigned long timeout, time_left;129129- int ret;130130-131131- lockdep_assert_held(&ar->conf_mutex);132132-133133- /* FW stats can get split when exceeding the stats data buffer limit.134134- * In that case, since there is no end marking for the back-to-back135135- * received 'update stats' event, we keep a 3 seconds timeout in case,136136- * fw_stats_done is not marked yet137137- */138138- timeout = jiffies + secs_to_jiffies(3);139139-140140- ath11k_debugfs_fw_stats_reset(ar);141141-142142- reinit_completion(&ar->fw_stats_complete);143143-144144- ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);145145-146146- if (ret) {147147- ath11k_warn(ab, "could not request fw stats (%d)\n",148148- ret);149149- return ret;150150- }151151-152152- time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);153153-154154- if (!time_left)155155- return -ETIMEDOUT;156156-157157- for (;;) {158158- if (time_after(jiffies, timeout))159159- break;160160-161161- spin_lock_bh(&ar->data_lock);162162- if (ar->fw_stats_done) {163163- 
spin_unlock_bh(&ar->data_lock);164164- break;165165- }166166- spin_unlock_bh(&ar->data_lock);167167- }168168- return 0;169169-}170170-171171-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,172172- u32 vdev_id, u32 stats_id)173173-{174174- struct ath11k_base *ab = ar->ab;175175- struct stats_request_params req_param;176176- int ret;177177-178178- mutex_lock(&ar->conf_mutex);179179-180180- if (ar->state != ATH11K_STATE_ON) {181181- ret = -ENETDOWN;182182- goto err_unlock;183183- }184184-185185- req_param.pdev_id = pdev_id;186186- req_param.vdev_id = vdev_id;187187- req_param.stats_id = stats_id;188188-189189- ret = ath11k_debugfs_fw_stats_request(ar, &req_param);190190- if (ret)191191- ath11k_warn(ab, "failed to request fw stats: %d\n", ret);192192-193193- ath11k_dbg(ab, ATH11K_DBG_WMI,194194- "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",195195- pdev_id, vdev_id, stats_id);196196-197197-err_unlock:198198- mutex_unlock(&ar->conf_mutex);199199-200200- return ret;201165}202166203167static int ath11k_open_pdev_stats(struct inode *inode, struct file *file)···146268 req_param.vdev_id = 0;147269 req_param.stats_id = WMI_REQUEST_PDEV_STAT;148270149149- ret = ath11k_debugfs_fw_stats_request(ar, &req_param);271271+ ret = ath11k_mac_fw_stats_request(ar, &req_param);150272 if (ret) {151273 ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret);152274 goto err_free;···217339 req_param.vdev_id = 0;218340 req_param.stats_id = WMI_REQUEST_VDEV_STAT;219341220220- ret = ath11k_debugfs_fw_stats_request(ar, &req_param);342342+ ret = ath11k_mac_fw_stats_request(ar, &req_param);221343 if (ret) {222344 ath11k_warn(ar->ab, "failed to request fw vdev stats: %d\n", ret);223345 goto err_free;···293415 continue;294416295417 req_param.vdev_id = arvif->vdev_id;296296- ret = ath11k_debugfs_fw_stats_request(ar, &req_param);418418+ ret = ath11k_mac_fw_stats_request(ar, &req_param);297419 if (ret) {298420 ath11k_warn(ar->ab, "failed to request fw bcn stats: %d\n", 
ret);299421 goto err_free;
+1-9
drivers/net/wireless/ath/ath11k/debugfs.h
···11/* SPDX-License-Identifier: BSD-3-Clause-Clear */22/*33 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.44- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.44+ * Copyright (c) 2021-2022, 2025 Qualcomm Innovation Center, Inc. All rights reserved.55 */6677#ifndef _ATH11K_DEBUGFS_H_···273273void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats);274274275275void ath11k_debugfs_fw_stats_init(struct ath11k *ar);276276-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,277277- u32 vdev_id, u32 stats_id);278276279277static inline bool ath11k_debugfs_is_pktlog_lite_mode_enabled(struct ath11k *ar)280278{···375377}376378377379static inline int ath11k_debugfs_rx_filter(struct ath11k *ar)378378-{379379- return 0;380380-}381381-382382-static inline int ath11k_debugfs_get_fw_stats(struct ath11k *ar,383383- u32 pdev_id, u32 vdev_id, u32 stats_id)384380{385381 return 0;386382}
+83-44
drivers/net/wireless/ath/ath11k/mac.c
···89978997 }89988998}8999899990009000+static void ath11k_mac_fw_stats_reset(struct ath11k *ar)90019001+{90029002+ spin_lock_bh(&ar->data_lock);90039003+ ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);90049004+ ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);90059005+ ar->fw_stats.num_vdev_recvd = 0;90069006+ ar->fw_stats.num_bcn_recvd = 0;90079007+ spin_unlock_bh(&ar->data_lock);90089008+}90099009+90109010+int ath11k_mac_fw_stats_request(struct ath11k *ar,90119011+ struct stats_request_params *req_param)90129012+{90139013+ struct ath11k_base *ab = ar->ab;90149014+ unsigned long time_left;90159015+ int ret;90169016+90179017+ lockdep_assert_held(&ar->conf_mutex);90189018+90199019+ ath11k_mac_fw_stats_reset(ar);90209020+90219021+ reinit_completion(&ar->fw_stats_complete);90229022+ reinit_completion(&ar->fw_stats_done);90239023+90249024+ ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);90259025+90269026+ if (ret) {90279027+ ath11k_warn(ab, "could not request fw stats (%d)\n",90289028+ ret);90299029+ return ret;90309030+ }90319031+90329032+ time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);90339033+ if (!time_left)90349034+ return -ETIMEDOUT;90359035+90369036+ /* FW stats can get split when exceeding the stats data buffer limit.90379037+ * In that case, since there is no end marking for the back-to-back90389038+ * received 'update stats' event, we keep a 3 seconds timeout in case,90399039+ * fw_stats_done is not marked yet90409040+ */90419041+ time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ);90429042+ if (!time_left)90439043+ return -ETIMEDOUT;90449044+90459045+ return 0;90469046+}90479047+90489048+static int ath11k_mac_get_fw_stats(struct ath11k *ar, u32 pdev_id,90499049+ u32 vdev_id, u32 stats_id)90509050+{90519051+ struct ath11k_base *ab = ar->ab;90529052+ struct stats_request_params req_param;90539053+ int ret;90549054+90559055+ lockdep_assert_held(&ar->conf_mutex);90569056+90579057+ if (ar->state != 
ATH11K_STATE_ON)90589058+ return -ENETDOWN;90599059+90609060+ req_param.pdev_id = pdev_id;90619061+ req_param.vdev_id = vdev_id;90629062+ req_param.stats_id = stats_id;90639063+90649064+ ret = ath11k_mac_fw_stats_request(ar, &req_param);90659065+ if (ret)90669066+ ath11k_warn(ab, "failed to request fw stats: %d\n", ret);90679067+90689068+ ath11k_dbg(ab, ATH11K_DBG_WMI,90699069+ "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",90709070+ pdev_id, vdev_id, stats_id);90719071+90729072+ return ret;90739073+}90749074+90009075static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,90019076 struct ieee80211_vif *vif,90029077 struct ieee80211_sta *sta,···9106903191079032 ath11k_mac_put_chain_rssi(sinfo, arsta, "ppdu", false);9108903390349034+ mutex_lock(&ar->conf_mutex);91099035 if (!(sinfo->filled & BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL)) &&91109036 arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&91119037 ar->ab->hw_params.supports_rssi_stats &&91129112- !ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,91139113- WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {90389038+ !ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,90399039+ WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {91149040 ath11k_mac_put_chain_rssi(sinfo, arsta, "fw stats", true);91159041 }91169042···91199043 if (!signal &&91209044 arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&91219045 ar->ab->hw_params.supports_rssi_stats &&91229122- !(ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,91239123- WMI_REQUEST_VDEV_STAT)))90469046+ !(ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,90479047+ WMI_REQUEST_VDEV_STAT)))91249048 signal = arsta->rssi_beacon;90499049+ mutex_unlock(&ar->conf_mutex);9125905091269051 ath11k_dbg(ar->ab, ATH11K_DBG_MAC,91279052 "sta statistics db2dbm %u rssi comb %d rssi beacon %d\n",···94579380 return ret;94589381}9459938294609460-static int ath11k_fw_stats_request(struct ath11k *ar,94619461- struct stats_request_params *req_param)94629462-{94639463- struct ath11k_base *ab = ar->ab;94649464- 
unsigned long time_left;94659465- int ret;94669466-94679467- lockdep_assert_held(&ar->conf_mutex);94689468-94699469- spin_lock_bh(&ar->data_lock);94709470- ar->fw_stats_done = false;94719471- ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);94729472- spin_unlock_bh(&ar->data_lock);94739473-94749474- reinit_completion(&ar->fw_stats_complete);94759475-94769476- ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);94779477- if (ret) {94789478- ath11k_warn(ab, "could not request fw stats (%d)\n",94799479- ret);94809480- return ret;94819481- }94829482-94839483- time_left = wait_for_completion_timeout(&ar->fw_stats_complete,94849484- 1 * HZ);94859485-94869486- if (!time_left)94879487- return -ETIMEDOUT;94889488-94899489- return 0;94909490-}94919491-94929383static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw,94939384 struct ieee80211_vif *vif,94949385 unsigned int link_id,···94649419{94659420 struct ath11k *ar = hw->priv;94669421 struct ath11k_base *ab = ar->ab;94679467- struct stats_request_params req_param = {0};94689422 struct ath11k_fw_stats_pdev *pdev;94699423 int ret;94709424···94759431 */94769432 mutex_lock(&ar->conf_mutex);9477943394789478- if (ar->state != ATH11K_STATE_ON)94799479- goto err_fallback;94809480-94819434 /* Firmware doesn't provide Tx power during CAC hence no need to fetch94829435 * the stats.94839436 */···94839442 return -EAGAIN;94849443 }9485944494869486- req_param.pdev_id = ar->pdev->pdev_id;94879487- req_param.stats_id = WMI_REQUEST_PDEV_STAT;94889488-94899489- ret = ath11k_fw_stats_request(ar, &req_param);94459445+ ret = ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,94469446+ WMI_REQUEST_PDEV_STAT);94909447 if (ret) {94919448 ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret);94929449 goto err_fallback;
+3-1
drivers/net/wireless/ath/ath11k/mac.h
···11/* SPDX-License-Identifier: BSD-3-Clause-Clear */22/*33 * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.44- * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.44+ * Copyright (c) 2021-2023, 2025 Qualcomm Innovation Center, Inc. All rights reserved.55 */6677#ifndef ATH11K_MAC_H···179179void ath11k_mac_fill_reg_tpc_info(struct ath11k *ar,180180 struct ieee80211_vif *vif,181181 struct ieee80211_chanctx_conf *ctx);182182+int ath11k_mac_fw_stats_request(struct ath11k *ar,183183+ struct stats_request_params *req_param);182184#endif
+43-6
drivers/net/wireless/ath/ath11k/wmi.c
···81588158static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *skb)81598159{81608160 struct ath11k_fw_stats stats = {};81618161+ size_t total_vdevs_started = 0;81628162+ struct ath11k_pdev *pdev;81638163+ bool is_end = true;81648164+ int i;81658165+81618166 struct ath11k *ar;81628167 int ret;81638168···8189818481908185 spin_lock_bh(&ar->data_lock);8191818681928192- /* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via81878187+ /* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_VDEV_STAT and81888188+ * WMI_REQUEST_RSSI_PER_CHAIN_STAT can be requested via mac ops or via81938189 * debugfs fw stats. Therefore, processing it separately.81948190 */81958191 if (stats.stats_id == WMI_REQUEST_PDEV_STAT) {81968192 list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs);81978197- ar->fw_stats_done = true;81938193+ complete(&ar->fw_stats_done);81988194 goto complete;81998195 }8200819682018201- /* WMI_REQUEST_VDEV_STAT, WMI_REQUEST_BCN_STAT and WMI_REQUEST_RSSI_PER_CHAIN_STAT82028202- * are currently requested only via debugfs fw stats. 
Hence, processing these82038203- * in debugfs context81978197+ if (stats.stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {81988198+ complete(&ar->fw_stats_done);81998199+ goto complete;82008200+ }82018201+82028202+ if (stats.stats_id == WMI_REQUEST_VDEV_STAT) {82038203+ if (list_empty(&stats.vdevs)) {82048204+ ath11k_warn(ab, "empty vdev stats");82058205+ goto complete;82068206+ }82078207+ /* FW sends all the active VDEV stats irrespective of PDEV,82088208+ * hence limit until the count of all VDEVs started82098209+ */82108210+ for (i = 0; i < ab->num_radios; i++) {82118211+ pdev = rcu_dereference(ab->pdevs_active[i]);82128212+ if (pdev && pdev->ar)82138213+ total_vdevs_started += ar->num_started_vdevs;82148214+ }82158215+82168216+ if (total_vdevs_started)82178217+ is_end = ((++ar->fw_stats.num_vdev_recvd) ==82188218+ total_vdevs_started);82198219+82208220+ list_splice_tail_init(&stats.vdevs,82218221+ &ar->fw_stats.vdevs);82228222+82238223+ if (is_end)82248224+ complete(&ar->fw_stats_done);82258225+82268226+ goto complete;82278227+ }82288228+82298229+ /* WMI_REQUEST_BCN_STAT is currently requested only via debugfs fw stats.82308230+ * Hence, processing it in debugfs context82048231 */82058232 ath11k_debugfs_fw_stats_process(ar, &stats);8206823382078234complete:82088235 complete(&ar->fw_stats_complete);82098209- rcu_read_unlock();82108236 spin_unlock_bh(&ar->data_lock);82378237+ rcu_read_unlock();8211823882128239 /* Since the stats's pdev, vdev and beacon list are spliced and reinitialised82138240 * at this point, no need to free the individual list.
···
  *	or cache was blocked
  * @HAL_REO_CMD_FAILED: Command execution failed, could be due to
  *	invalid queue desc
- * @HAL_REO_CMD_RESOURCE_BLOCKED:
+ * @HAL_REO_CMD_RESOURCE_BLOCKED: Command could not be executed because
+ *	one or more descriptors were blocked
  * @HAL_REO_CMD_DRAIN:
  */
 enum hal_reo_cmd_status {
···
 	 * Scratch value was altered, this means the device was powered off, we
 	 * need to reset it completely.
 	 * Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan,
-	 * so assume that any bits there mean that the device is usable.
+	 * but not bits [15:8]. So if we have bits set in lower word, assume
+	 * the device is alive.
+	 * For older devices, just try silently to grab the NIC.
 	 */
-	if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ &&
-	    !iwl_read32(trans, CSR_FUNC_SCRATCH))
-		device_was_powered_off = true;
+	if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) {
+		if (!(iwl_read32(trans, CSR_FUNC_SCRATCH) &
+		      CSR_FUNC_SCRATCH_POWER_OFF_MASK))
+			device_was_powered_off = true;
+	} else {
+		/*
+		 * bh are re-enabled by iwl_trans_pcie_release_nic_access,
+		 * so re-enable them if _iwl_trans_pcie_grab_nic_access fails.
+		 */
+		local_bh_disable();
+		if (_iwl_trans_pcie_grab_nic_access(trans, true)) {
+			iwl_trans_pcie_release_nic_access(trans);
+		} else {
+			device_was_powered_off = true;
+			local_bh_enable();
+		}
+	}
 
 	if (restore || device_was_powered_off) {
 		trans->state = IWL_TRANS_NO_FW;
···
 	pdu->result = le64_to_cpu(nvme_req(req)->result.u64);
 
 	/*
-	 * For iopoll, complete it directly. Note that using the uring_cmd
-	 * helper for this is safe only because we check blk_rq_is_poll().
-	 * As that returns false if we're NOT on a polled queue, then it's
-	 * safe to use the polled completion helper.
-	 *
-	 * Otherwise, move the completion to task work.
+	 * IOPOLL could potentially complete this request directly, but
+	 * if multiple rings are polling on the same queue, then it's possible
+	 * for one ring to find completions for another ring. Punting the
+	 * completion via task_work will always direct it to the right
+	 * location, rather than potentially complete requests for ringA
+	 * under iopoll invocations from ringB.
 	 */
-	if (blk_rq_is_poll(req)) {
-		if (pdu->bio)
-			blk_rq_unmap_user(pdu->bio);
-		io_uring_cmd_iopoll_done(ioucmd, pdu->result, pdu->status);
-	} else {
-		io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
-	}
-
+	io_uring_cmd_do_in_task_lazy(ioucmd, nvme_uring_task_cb);
 	return RQ_END_IO_FREE;
 }
-5
drivers/pinctrl/pinctrl-st.c
···
 }
 
 /* Low level functions.. */
-static inline int st_gpio_bank(int gpio)
-{
-	return gpio/ST_GPIO_PINS_PER_BANK;
-}
-
 static inline int st_gpio_pin(int gpio)
 {
 	return gpio%ST_GPIO_PINS_PER_BANK;
···
  */
 static int prepare_function_table(struct device *dev, struct device_node *pnode,
 				  struct sunxi_desc_pin *pins, int npins,
-				  const u8 *irq_bank_muxes)
+				  unsigned pin_base, const u8 *irq_bank_muxes)
 {
 	struct device_node *node;
 	struct property *prop;
···
 	 */
 	for (i = 0; i < npins; i++) {
 		struct sunxi_desc_pin *pin = &pins[i];
-		int bank = pin->pin.number / PINS_PER_BANK;
+		int bank = (pin->pin.number - pin_base) / PINS_PER_BANK;
 
 		if (irq_bank_muxes[bank]) {
 			pin->variant++;
···
 	last_bank = 0;
 	for (i = 0; i < npins; i++) {
 		struct sunxi_desc_pin *pin = &pins[i];
-		int bank = pin->pin.number / PINS_PER_BANK;
+		int bank = (pin->pin.number - pin_base) / PINS_PER_BANK;
 		int lastfunc = pin->variant + 1;
 		int irq_mux = irq_bank_muxes[bank];
···
 		return PTR_ERR(pins);
 
 	ret = prepare_function_table(&pdev->dev, pnode, pins, desc->npins,
-				     irq_bank_muxes);
+				     desc->pin_base, irq_bank_muxes);
 	if (ret)
 		return ret;
+1-11
drivers/ptp/ptp_private.h
···
 /* Check if ptp virtual clock is in use */
 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp)
 {
-	bool in_use = false;
-
-	if (mutex_lock_interruptible(&ptp->n_vclocks_mux))
-		return true;
-
-	if (!ptp->is_virtual_clock && ptp->n_vclocks)
-		in_use = true;
-
-	mutex_unlock(&ptp->n_vclocks_mux);
-
-	return in_use;
+	return !ptp->is_virtual_clock;
 }
 
 /* Check if ptp clock shall be free running */
+3
drivers/rapidio/rio_cm.c
···
 	if (buf == NULL || ch_id == 0 || len == 0 || len > RIO_MAX_MSG_SIZE)
 		return -EINVAL;
 
+	if (len < sizeof(struct rio_ch_chan_hdr))
+		return -EINVAL; /* insufficient data from user */
+
 	ch = riocm_get_channel(ch_id);
 	if (!ch) {
 		riocm_error("%s(%d) ch_%d not found", current->comm,
+3-3
drivers/regulator/max20086-regulator.c
···
 // Copyright (C) 2022 Laurent Pinchart <laurent.pinchart@idesonboard.com>
 // Copyright (C) 2018 Avnet, Inc.
 
+#include <linux/cleanup.h>
 #include <linux/err.h>
 #include <linux/gpio/consumer.h>
 #include <linux/i2c.h>
···
 static int max20086_parse_regulators_dt(struct max20086 *chip, bool *boot_on)
 {
 	struct of_regulator_match *matches;
-	struct device_node *node;
 	unsigned int i;
 	int ret;
 
-	node = of_get_child_by_name(chip->dev->of_node, "regulators");
+	struct device_node *node __free(device_node) =
+		of_get_child_by_name(chip->dev->of_node, "regulators");
 	if (!node) {
 		dev_err(chip->dev, "regulators node not found\n");
 		return -ENODEV;
···
 
 	ret = of_regulator_match(chip->dev, node, matches,
 				 chip->info->num_outputs);
-	of_node_put(node);
 	if (ret < 0) {
 		dev_err(chip->dev, "Failed to match regulators\n");
 		return -EINVAL;
+2
drivers/s390/scsi/zfcp_sysfs.c
···
 	if (kstrtoull(buf, 0, (unsigned long long *) &fcp_lun))
 		return -EINVAL;
 
+	flush_work(&port->rport_work);
+
 	retval = zfcp_unit_add(port, fcp_lun);
 	if (retval)
 		return retval;
···
 	 * if the device is in the process of becoming ready, we
 	 * should retry.
 	 */
-	if ((sshdr.asc == 0x04) && (sshdr.ascq == 0x01))
+	if ((sshdr.asc == 0x04) &&
+	    (sshdr.ascq == 0x01 || sshdr.ascq == 0x0a))
 		return NEEDS_RETRY;
 	/*
 	 * if the device is not started, we need to wake
+5-6
drivers/scsi/scsi_transport_iscsi.c
···
 		pr_err("%s could not find host no %u\n",
 		       __func__, ev->u.new_flashnode.host_no);
 		err = -ENODEV;
-		goto put_host;
+		goto exit_new_fnode;
 	}
 
 	index = transport->new_flashnode(shost, data, len);
···
 	else
 		err = -EIO;
 
-put_host:
 	scsi_host_put(shost);
 
 exit_new_fnode:
···
 		pr_err("%s could not find host no %u\n",
 		       __func__, ev->u.del_flashnode.host_no);
 		err = -ENODEV;
-		goto put_host;
+		goto exit_del_fnode;
 	}
 
 	idx = ev->u.del_flashnode.flashnode_idx;
···
 		pr_err("%s could not find host no %u\n",
 		       __func__, ev->u.login_flashnode.host_no);
 		err = -ENODEV;
-		goto put_host;
+		goto exit_login_fnode;
 	}
 
 	idx = ev->u.login_flashnode.flashnode_idx;
···
 		pr_err("%s could not find host no %u\n",
 		       __func__, ev->u.logout_flashnode.host_no);
 		err = -ENODEV;
-		goto put_host;
+		goto exit_logout_fnode;
 	}
 
 	idx = ev->u.logout_flashnode.flashnode_idx;
···
 		pr_err("%s could not find host no %u\n",
 		       __func__, ev->u.logout_flashnode.host_no);
 		err = -ENODEV;
-		goto put_host;
+		goto exit_logout_sid;
 	}
 
 	session = iscsi_session_lookup(ev->u.logout_flashnode_sid.sid);
+6-4
drivers/scsi/storvsc_drv.c
···
 /*
  * Timeout in seconds for all devices managed by this driver.
  */
-static int storvsc_timeout = 180;
+static const int storvsc_timeout = 180;
 
 #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
 static struct scsi_transport_template *fc_transport_template;
···
 		return;
 	}
 
-	t = wait_for_completion_timeout(&request->wait_event, 10*HZ);
+	t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
 	if (t == 0) {
 		dev_err(dev, "Failed to create sub-channel: timed out\n");
 		return;
···
 	if (ret != 0)
 		return ret;
 
-	t = wait_for_completion_timeout(&request->wait_event, 5*HZ);
+	t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
 	if (t == 0)
 		return -ETIMEDOUT;
···
 		return ret;
 
 	ret = storvsc_channel_init(device, is_fc);
+	if (ret)
+		vmbus_close(device->channel);
 
 	return ret;
 }
···
 	if (ret != 0)
 		return FAILED;
 
-	t = wait_for_completion_timeout(&request->wait_event, 5*HZ);
+	t = wait_for_completion_timeout(&request->wait_event, storvsc_timeout * HZ);
 	if (t == 0)
 		return TIMEOUT_ERROR;
···
 	if (trigger->ops->enable) {
 		ret = trigger->ops->enable(trigger, config);
 		if (ret) {
-			if (offload->ops->trigger_disable)
+			if (offload->ops && offload->ops->trigger_disable)
 				offload->ops->trigger_disable(offload);
 			return ret;
 		}
+18-12
drivers/spi/spi-omap2-mcspi.c
···
 	size_t max_xfer_len;
 	u32 ref_clk_hz;
 	bool use_multi_mode;
+	bool last_msg_kept_cs;
 };
 
 struct omap2_mcspi_cs {
···
 	 * multi-mode is applicable.
 	 */
 	mcspi->use_multi_mode = true;
+
+	if (mcspi->last_msg_kept_cs)
+		mcspi->use_multi_mode = false;
+
 	list_for_each_entry(tr, &msg->transfers, transfer_list) {
 		if (!tr->bits_per_word)
 			bits_per_word = msg->spi->bits_per_word;
···
 			mcspi->use_multi_mode = false;
 		}
 
-		/* Check if transfer asks to change the CS status after the transfer */
-		if (!tr->cs_change)
-			mcspi->use_multi_mode = false;
-
-		/*
-		 * If at least one message is not compatible, switch back to single mode
-		 *
-		 * The bits_per_word of certain transfer can be different, but it will have no
-		 * impact on the signal itself.
-		 */
-		if (!mcspi->use_multi_mode)
-			break;
+		if (list_is_last(&tr->transfer_list, &msg->transfers)) {
+			/* Check if transfer asks to keep the CS status after the whole message */
+			if (tr->cs_change) {
+				mcspi->use_multi_mode = false;
+				mcspi->last_msg_kept_cs = true;
+			} else {
+				mcspi->last_msg_kept_cs = false;
+			}
+		} else {
+			/* Check if transfer asks to change the CS status after the transfer */
+			if (!tr->cs_change)
+				mcspi->use_multi_mode = false;
+		}
 	}
 
 	omap2_mcspi_set_mode(ctlr);
···
 	return ret;
 }
 
-static long bch2_ioctl_fs_usage(struct bch_fs *c,
+static noinline_for_stack long bch2_ioctl_fs_usage(struct bch_fs *c,
 				struct bch_ioctl_fs_usage __user *user_arg)
 {
 	struct bch_ioctl_fs_usage arg = {};
···
 }
 
 /* obsolete, didn't allow for new data types: */
-static long bch2_ioctl_dev_usage(struct bch_fs *c,
+static noinline_for_stack long bch2_ioctl_dev_usage(struct bch_fs *c,
 				 struct bch_ioctl_dev_usage __user *user_arg)
 {
 	struct bch_ioctl_dev_usage arg;
+3-1
fs/bcachefs/disk_accounting.c
···
 	for (unsigned j = 0; j < nr; j++)
 		src_v[j] -= dst_v[j];
 
-	if (fsck_err(trans, accounting_mismatch, "%s", buf.buf)) {
+	bch2_trans_unlock_long(trans);
+
+	if (fsck_err(c, accounting_mismatch, "%s", buf.buf)) {
 		percpu_up_write(&c->mark_lock);
 		ret = commit_do(trans, NULL, NULL, 0,
 				bch2_disk_accounting_mod(trans, &acc_k, src_v, nr, false));
+4-1
fs/bcachefs/error.c
···
 	if (trans)
 		bch2_trans_updates_to_text(&buf, trans);
 	bool ret = __bch2_inconsistent_error(c, &buf);
-	bch2_print_str_nonblocking(c, KERN_ERR, buf.buf);
+	bch2_print_str(c, KERN_ERR, buf.buf);
 
 	printbuf_exit(&buf);
 	return ret;
···
 
 	if (s)
 		s->ret = ret;
+
+	if (trans)
+		ret = bch2_trans_log_str(trans, bch2_sb_error_strs[err]) ?: ret;
 err_unlock:
 	mutex_unlock(&c->fsck_error_msgs_lock);
 err:
+8
fs/bcachefs/fs.c
···
 	if (ret)
 		goto err_stop_fs;
 
+	/*
+	 * We might be doing a RO mount because other options required it, or we
+	 * have no alloc info and it's a small image with no room to regenerate
+	 * it
+	 */
+	if (c->opts.read_only)
+		fc->sb_flags |= SB_RDONLY;
+
 	sb = sget(fc->fs_type, NULL, bch2_set_super, fc->sb_flags|SB_NOSEC, c);
 	ret = PTR_ERR_OR_ZERO(sb);
 	if (ret)
···
 		new_inode->bi_dir_offset	= dir_offset;
 	}
 
+	if (S_ISDIR(mode)) {
+		ret = bch2_maybe_propagate_has_case_insensitive(trans,
+				(subvol_inum) {
+					new_inode->bi_subvol ?: dir.subvol,
+					new_inode->bi_inum },
+				new_inode);
+		if (ret)
+			goto err;
+	}
+
 	if (S_ISDIR(mode) &&
 	    !new_inode->bi_subvol)
 		new_inode->bi_depth = dir_u->bi_depth + 1;
+11-11
fs/bcachefs/rcu_pending.c
···
 	while (nr--)
 		kfree(*p);
 }
-
-#define local_irq_save(flags)		\
-do {					\
-	flags = 0;			\
-} while (0)
 #endif
 
 static noinline void __process_finished_items(struct rcu_pending *pending,
···
 
 	BUG_ON((ptr != NULL) != (pending->process == RCU_PENDING_KVFREE_FN));
 
-	local_irq_save(flags);
-	p = this_cpu_ptr(pending->p);
-	spin_lock(&p->lock);
+	/* We could technically be scheduled before taking the lock and end up
+	 * using a different cpu's rcu_pending_pcpu: that's ok, it needs a lock
+	 * anyways
+	 *
+	 * And we have to do it this way to avoid breaking PREEMPT_RT, which
+	 * redefines how spinlocks work:
+	 */
+	p = raw_cpu_ptr(pending->p);
+	spin_lock_irqsave(&p->lock, flags);
 	rcu_gp_poll_state_t seq = __get_state_synchronize_rcu(pending->srcu);
 restart:
 	if (may_sleep &&
···
 		goto free_node;
 	}
 
-	local_irq_save(flags);
-	p = this_cpu_ptr(pending->p);
-	spin_lock(&p->lock);
+	p = raw_cpu_ptr(pending->p);
+	spin_lock_irqsave(&p->lock, flags);
 	goto restart;
 }
+21-6
fs/bcachefs/recovery.c
···
 		goto out;
 	case BTREE_ID_snapshots:
 		ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_reconstruct_snapshots, 0) ?: ret;
+		ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_check_topology, 0) ?: ret;
 		ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_scan_for_btree_nodes, 0) ?: ret;
 		goto out;
 	default:
+		ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_check_topology, 0) ?: ret;
 		ret = __bch2_run_explicit_recovery_pass(c, msg, BCH_RECOVERY_PASS_scan_for_btree_nodes, 0) ?: ret;
 		goto out;
 	}
···
 		goto out;
 
 	struct btree_path *path = btree_iter_path(trans, &iter);
-	if (unlikely(!btree_path_node(path, k->level))) {
+	if (unlikely(!btree_path_node(path, k->level) &&
+		     !k->allocated)) {
+		struct bch_fs *c = trans->c;
+
+		if (!(c->recovery.passes_complete & (BIT_ULL(BCH_RECOVERY_PASS_scan_for_btree_nodes)|
+						     BIT_ULL(BCH_RECOVERY_PASS_check_topology)))) {
+			bch_err(c, "have key in journal replay for btree depth that does not exist, confused");
+			ret = -EINVAL;
+		}
+#if 0
 		bch2_trans_iter_exit(trans, &iter);
 		bch2_trans_node_iter_init(trans, &iter, k->btree_id, k->k->k.p,
 					  BTREE_MAX_DEPTH, 0, iter_flags);
 		ret =   bch2_btree_iter_traverse(trans, &iter) ?:
 			bch2_btree_increase_depth(trans, iter.path, 0) ?:
 			-BCH_ERR_transaction_restart_nested;
+#endif
+		k->overwritten = true;
 		goto out;
 	}
···
 			? min(c->opts.recovery_pass_last, BCH_RECOVERY_PASS_snapshots_read)
 			: BCH_RECOVERY_PASS_snapshots_read;
 		c->opts.nochanges = true;
-		c->opts.read_only = true;
 	}
+
+	if (c->opts.nochanges)
+		c->opts.read_only = true;
 
 	mutex_lock(&c->sb_lock);
 	struct bch_sb_field_ext *ext = bch2_sb_field_get(c->disk_sb.sb, ext);
···
 out:
 	bch2_flush_fsck_errs(c);
 
-	if (!IS_ERR(clean))
-		kfree(clean);
-
 	if (!ret &&
 	    test_bit(BCH_FS_need_delete_dead_snapshots, &c->flags) &&
 	    !c->opts.nochanges) {
···
 	}
 
 	bch_err_fn(c, ret);
+final_out:
+	if (!IS_ERR(clean))
+		kfree(clean);
 	return ret;
 err:
 fsck_err:
···
 		bch2_print_str(c, KERN_ERR, buf.buf);
 		printbuf_exit(&buf);
 	}
-	return ret;
+	goto final_out;
 }
 
 int bch2_fs_initialize(struct bch_fs *c)
+11-3
fs/bcachefs/recovery_passes.c
···
 			      enum bch_run_recovery_pass_flags *flags)
 {
 	struct bch_fs_recovery *r = &c->recovery;
-	bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags);
-	bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent);
+
+	/*
+	 * Never run scan_for_btree_nodes persistently: check_topology will run
+	 * it if required
+	 */
+	if (pass == BCH_RECOVERY_PASS_scan_for_btree_nodes)
+		*flags |= RUN_RECOVERY_PASS_nopersistent;
 
 	if ((*flags & RUN_RECOVERY_PASS_ratelimit) &&
 	    !bch2_recovery_pass_want_ratelimit(c, pass))
···
 	 * Otherwise, we run run_explicit_recovery_pass when we find damage, so
 	 * it should run again even if it's already run:
 	 */
+	bool in_recovery = test_bit(BCH_FS_in_recovery, &c->flags);
+	bool persistent = !in_recovery || !(*flags & RUN_RECOVERY_PASS_nopersistent);
 
 	if (persistent
 	    ? !(c->sb.recovery_passes_required & BIT_ULL(pass))
···
 {
 	struct bch_fs_recovery *r = &c->recovery;
 	int ret = 0;
 
 	lockdep_assert_held(&c->sb_lock);
···
 
 int bch2_run_print_explicit_recovery_pass(struct bch_fs *c, enum bch_recovery_pass pass)
 {
-	enum bch_run_recovery_pass_flags flags = RUN_RECOVERY_PASS_nopersistent;
+	enum bch_run_recovery_pass_flags flags = 0;
 
 	if (!recovery_pass_needs_set(c, pass, &flags))
 		return 0;
+4-1
fs/bcachefs/sb-downgrade.c
···
 
 static int downgrade_table_extra(struct bch_fs *c, darray_char *table)
 {
+	unsigned dst_offset = table->nr;
 	struct bch_sb_field_downgrade_entry *dst = (void *) &darray_top(*table);
 	unsigned bytes = sizeof(*dst) + sizeof(dst->errors[0]) * le16_to_cpu(dst->nr_errors);
 	int ret = 0;
···
 	if (ret)
 		return ret;
 
+	dst = (void *) &table->data[dst_offset];
+	dst->nr_errors = cpu_to_le16(nr_errors + 1);
+
 	/* open coded __set_bit_le64, as dst is packed and
 	 * dst->recovery_passes is misaligned */
 	unsigned b = BCH_RECOVERY_PASS_STABLE_check_allocations;
···
 		break;
 	}
 
-	dst->nr_errors = cpu_to_le16(nr_errors);
 	return ret;
 }
···
 struct cached_dirents {
 	bool is_valid:1;
 	bool is_failed:1;
-	struct dir_context *ctx; /*
-				  * Only used to make sure we only take entries
-				  * from a single context. Never dereferenced.
-				  */
+	struct file *file; /*
+			    * Used to associate the cache with a single
+			    * open file instance.
+			    */
 	struct mutex de_mutex;
 	int pos; /* Expected ctx->pos */
 	struct list_head entries;
+8-2
fs/smb/client/connect.c
···
 		goto out;
 	}
 
-	/* if new SMB3.11 POSIX extensions are supported do not remap / and \ */
-	if (tcon->posix_extensions)
+	/*
+	 * if new SMB3.11 POSIX extensions are supported, do not change anything in the
+	 * path (i.e., do not remap / and \ and do not map any special characters)
+	 */
+	if (tcon->posix_extensions) {
 		cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_POSIX_PATHS;
+		cifs_sb->mnt_cifs_flags &= ~(CIFS_MOUNT_MAP_SFM_CHR |
+					     CIFS_MOUNT_MAP_SPECIAL_CHR);
+	}
 
 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
 	/* tell server which Unix caps we support */
+6-3
fs/smb/client/file.c
···
 		rc = cifs_get_readable_path(tcon, full_path, &cfile);
 	}
 	if (rc == 0) {
-		if (file->f_flags == cfile->f_flags) {
+		unsigned int oflags = file->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
+		unsigned int cflags = cfile->f_flags & ~(O_CREAT|O_EXCL|O_TRUNC);
+
+		if (cifs_convert_flags(oflags, 0) == cifs_convert_flags(cflags, 0) &&
+		    (oflags & (O_SYNC|O_DIRECT)) == (cflags & (O_SYNC|O_DIRECT))) {
 			file->private_data = cfile;
 			spin_lock(&CIFS_I(inode)->deferred_lock);
 			cifs_del_deferred_close(cfile);
 			spin_unlock(&CIFS_I(inode)->deferred_lock);
 			goto use_cache;
-		} else {
-			_cifsFileInfo_put(cfile, true, false);
 		}
+		_cifsFileInfo_put(cfile, true, false);
 	} else {
 		/* hard link on the defeered close file */
 		rc = cifs_get_hardlink_path(tcon, inode, file);
···
  * @offset:	offset into the folio
  */
 static inline void bvec_set_folio(struct bio_vec *bv, struct folio *folio,
-		unsigned int len, unsigned int offset)
+		size_t len, size_t offset)
 {
-	bvec_set_page(bv, &folio->page, len, offset);
+	unsigned long nr = offset / PAGE_SIZE;
+
+	WARN_ON_ONCE(len > UINT_MAX);
+	bvec_set_page(bv, folio_page(folio, nr), len, offset % PAGE_SIZE);
 }
 
 /**
+3
include/linux/cpu.h
···
 extern void cpu_maps_update_done(void);
 int bringup_hibernate_cpu(unsigned int sleep_cpu);
 void bringup_nonboot_cpus(unsigned int max_cpus);
+int arch_cpu_rescan_dead_smt_siblings(void);
 
 #else /* CONFIG_SMP */
 #define cpuhp_tasks_frozen	0
···
 }
 
 static inline int add_cpu(unsigned int cpu) { return 0;}
+
+static inline int arch_cpu_rescan_dead_smt_siblings(void) { return 0; }
 
 #endif /* CONFIG_SMP */
 extern const struct bus_type cpu_subsys;
+4-2
include/linux/fs.h
···
 	return true;
 }
 
+int compat_vma_mmap_prepare(struct file *file, struct vm_area_struct *vma);
+
 static inline int call_mmap(struct file *file, struct vm_area_struct *vma)
 {
-	if (WARN_ON_ONCE(file->f_op->mmap_prepare))
-		return -EINVAL;
+	if (file->f_op->mmap_prepare)
+		return compat_vma_mmap_prepare(file, vma);
 
 	return file->f_op->mmap(file, vma);
 }
+1-1
include/linux/key.h
···
 #define KEY_FLAG_ROOT_CAN_INVAL	7	/* set if key can be invalidated by root without permission */
 #define KEY_FLAG_KEEP		8	/* set if key should not be removed */
 #define KEY_FLAG_UID_KEYRING	9	/* set if key is a user or user session keyring */
-#define KEY_FLAG_FINAL_PUT	10	/* set if final put has happened on key */
+#define KEY_FLAG_USER_ALIVE	10	/* set if final put has not happened on key yet */
 
 /* the key type and key description string
  * - the desc is used to match a key against search criteria
+2-2
include/linux/scatterlist.h
···
  * @sg:		 The current sg entry
  *
  * Description:
- *   Usually the next entry will be @sg@ + 1, but if this sg element is part
+ *   Usually the next entry will be @sg + 1, but if this sg element is part
  *   of a chained scatterlist, it could jump to the start of a new
  *   scatterlist array.
  *
···
  * @sgl:	Second scatterlist
  *
  * Description:
- *   Links @prv@ and @sgl@ together, to form a longer scatterlist.
+ *   Links @prv and @sgl together, to form a longer scatterlist.
  *
  **/
 static inline void sg_chain(struct scatterlist *prv, unsigned int prv_nents,
···
 		/* truncate end piece, if needed, for non partial buffers */
 		if (len > arg->max_len) {
 			len = arg->max_len;
-			if (!(bl->flags & IOBL_INC))
+			if (!(bl->flags & IOBL_INC)) {
+				if (iov != arg->iovs)
+					break;
 				buf->len = len;
+			}
 		}
 
 		iov->iov_base = u64_to_user_ptr(buf->addr);
+5-2
io_uring/register.c
···
 	if (ctx->flags & IORING_SETUP_SQPOLL) {
 		sqd = ctx->sq_data;
 		if (sqd) {
+			struct task_struct *tsk;
+
 			/*
 			 * Observe the correct sqd->lock -> ctx->uring_lock
 			 * ordering. Fine to drop uring_lock here, we hold
···
 			mutex_unlock(&ctx->uring_lock);
 			mutex_lock(&sqd->lock);
 			mutex_lock(&ctx->uring_lock);
-			if (sqd->thread)
-				tctx = sqd->thread->io_uring;
+			tsk = sqpoll_task_locked(sqd);
+			if (tsk)
+				tctx = tsk->io_uring;
 		}
 	} else {
 		tctx = current->io_uring;
+28-15
io_uring/sqpoll.c
···
 void io_sq_thread_unpark(struct io_sq_data *sqd)
 	__releases(&sqd->lock)
 {
-	WARN_ON_ONCE(sqd->thread == current);
+	WARN_ON_ONCE(sqpoll_task_locked(sqd) == current);
 
 	/*
 	 * Do the dance but not conditional clear_bit() because it'd race with
···
 void io_sq_thread_park(struct io_sq_data *sqd)
 	__acquires(&sqd->lock)
 {
-	WARN_ON_ONCE(data_race(sqd->thread) == current);
+	struct task_struct *tsk;
 
 	atomic_inc(&sqd->park_pending);
 	set_bit(IO_SQ_THREAD_SHOULD_PARK, &sqd->state);
 	mutex_lock(&sqd->lock);
-	if (sqd->thread)
-		wake_up_process(sqd->thread);
+
+	tsk = sqpoll_task_locked(sqd);
+	if (tsk) {
+		WARN_ON_ONCE(tsk == current);
+		wake_up_process(tsk);
+	}
 }
 
 void io_sq_thread_stop(struct io_sq_data *sqd)
 {
-	WARN_ON_ONCE(sqd->thread == current);
+	struct task_struct *tsk;
+
 	WARN_ON_ONCE(test_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state));
 
 	set_bit(IO_SQ_THREAD_SHOULD_STOP, &sqd->state);
 	mutex_lock(&sqd->lock);
-	if (sqd->thread)
-		wake_up_process(sqd->thread);
+	tsk = sqpoll_task_locked(sqd);
+	if (tsk) {
+		WARN_ON_ONCE(tsk == current);
+		wake_up_process(tsk);
+	}
 	mutex_unlock(&sqd->lock);
 	wait_for_completion(&sqd->exited);
 }
···
 	/* offload context creation failed, just exit */
 	if (!current->io_uring) {
 		mutex_lock(&sqd->lock);
-		sqd->thread = NULL;
+		rcu_assign_pointer(sqd->thread, NULL);
+		put_task_struct(current);
 		mutex_unlock(&sqd->lock);
 		goto err_out;
 	}
···
 		io_sq_tw(&retry_list, UINT_MAX);
 
 	io_uring_cancel_generic(true, sqd);
-	sqd->thread = NULL;
+	rcu_assign_pointer(sqd->thread, NULL);
+	put_task_struct(current);
 	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
 		atomic_or(IORING_SQ_NEED_WAKEUP, &ctx->rings->sq_flags);
 	io_run_task_work();
···
 		goto err_sqpoll;
 	}
 
-	sqd->thread = tsk;
+	mutex_lock(&sqd->lock);
+	rcu_assign_pointer(sqd->thread, tsk);
+	mutex_unlock(&sqd->lock);
+
 	task_to_put = get_task_struct(tsk);
 	ret = io_uring_alloc_task_context(tsk, ctx);
 	wake_up_new_task(tsk);
···
 		ret = -EINVAL;
 		goto err;
 	}
-
-	if (task_to_put)
-		put_task_struct(task_to_put);
 	return 0;
 err_sqpoll:
 	complete(&ctx->sq_data->exited);
···
 	int ret = -EINVAL;
 
 	if (sqd) {
+		struct task_struct *tsk;
+
 		io_sq_thread_park(sqd);
 		/* Don't set affinity for a dying thread */
-		if (sqd->thread)
-			ret = io_wq_cpu_affinity(sqd->thread->io_uring, mask);
+		tsk = sqpoll_task_locked(sqd);
+		if (tsk)
+			ret = io_wq_cpu_affinity(tsk->io_uring, mask);
 		io_sq_thread_unpark(sqd);
 	}
···
 	lockdep_assert_irqs_disabled();
 
 	/*
+	 * Ensure that release_task(tsk) can't happen while
+	 * handle_posix_cpu_timers() is running. Otherwise, a concurrent
+	 * posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and
+	 * miss timer->it.cpu.firing != 0.
+	 */
+	if (tsk->exit_state)
+		return;
+
+	/*
 	 * If the actual expiry is deferred to task work context and the
 	 * work is already scheduled there is no point to do anything here.
 	 */
···
  *   Should only be used casually, it (currently) scans the entire list
  *   to get the last entry.
  *
- *   Note that the @sgl@ pointer passed in need not be the first one,
- *   the important bit is that @nents@ denotes the number of entries that
- *   exist from @sgl@.
+ *   Note that the @sgl pointer passed in need not be the first one,
+ *   the important bit is that @nents denotes the number of entries that
+ *   exist from @sgl.
  *
  **/
 struct scatterlist *sg_last(struct scatterlist *sgl, unsigned int nents)
···
  * @gfp_mask:	GFP allocation mask
  *
  * Description:
- *   Allocate and initialize an sg table. If @nents@ is larger than
+ *   Allocate and initialize an sg table. If @nents is larger than
  *   SG_MAX_SINGLE_ALLOC a chained sg table will be setup.
  *
  **/
-1
mm/damon/Kconfig
···
 
 config DAMON
 	bool "DAMON: Data Access Monitoring Framework"
-	default y
 	help
 	  This builds a framework that allows kernel subsystems to monitor
 	  access frequency of each memory region. The information can be useful
+2
mm/madvise.c
···
 				pte_offset_map_lock(mm, pmd, addr, &ptl);
 			if (!start_pte)
 				break;
+			flush_tlb_batched_pending(mm);
 			arch_enter_lazy_mmu_mode();
 			if (!err)
 				nr = 0;
···
 			start_pte = pte;
 			if (!start_pte)
 				break;
+			flush_tlb_batched_pending(mm);
 			arch_enter_lazy_mmu_mode();
 			if (!err)
 				nr = 0;
+40
mm/util.c
···
 }
 EXPORT_SYMBOL(flush_dcache_folio);
 #endif
+
+/**
+ * compat_vma_mmap_prepare() - Apply the file's .mmap_prepare() hook to an
+ * existing VMA
+ * @file: The file which possesss an f_op->mmap_prepare() hook
+ * @vma: The VMA to apply the .mmap_prepare() hook to.
+ *
+ * Ordinarily, .mmap_prepare() is invoked directly upon mmap(). However, certain
+ * 'wrapper' file systems invoke a nested mmap hook of an underlying file.
+ *
+ * Until all filesystems are converted to use .mmap_prepare(), we must be
+ * conservative and continue to invoke these 'wrapper' filesystems using the
+ * deprecated .mmap() hook.
+ *
+ * However we have a problem if the underlying file system possesses an
+ * .mmap_prepare() hook, as we are in a different context when we invoke the
+ * .mmap() hook, already having a VMA to deal with.
+ *
+ * compat_vma_mmap_prepare() is a compatibility function that takes VMA state,
+ * establishes a struct vm_area_desc descriptor, passes to the underlying
+ * .mmap_prepare() hook and applies any changes performed by it.
+ *
+ * Once the conversion of filesystems is complete this function will no longer
+ * be required and will be removed.
+ *
+ * Returns: 0 on success or error.
+ */
+int compat_vma_mmap_prepare(struct file *file, struct vm_area_struct *vma)
+{
+	struct vm_area_desc desc;
+	int err;
+
+	err = file->f_op->mmap_prepare(vma_to_desc(vma, &desc));
+	if (err)
+		return err;
+	set_vma_from_desc(vma, &desc);
+
+	return 0;
+}
+EXPORT_SYMBOL(compat_vma_mmap_prepare);
+4-19
mm/vma.c
···
 		err = dup_anon_vma(next, middle, &anon_dup);
 	}

-	if (err)
+	if (err || commit_merge(vmg))
 		goto abort;
-
-	err = commit_merge(vmg);
-	if (err) {
-		VM_WARN_ON(err != -ENOMEM);
-
-		if (anon_dup)
-			unlink_anon_vmas(anon_dup);
-
-		/*
-		 * We've cleaned up any cloned anon_vma's, no VMAs have been
-		 * modified, no harm no foul if the user requests that we not
-		 * report this and just give up, leaving the VMAs unmerged.
-		 */
-		if (!vmg->give_up_on_oom)
-			vmg->state = VMA_MERGE_ERROR_NOMEM;
-		return NULL;
-	}

 	khugepaged_enter_vma(vmg->target, vmg->flags);
 	vmg->state = VMA_MERGE_SUCCESS;
···
 abort:
 	vma_iter_set(vmg->vmi, start);
 	vma_iter_load(vmg->vmi);
+
+	if (anon_dup)
+		unlink_anon_vmas(anon_dup);

 	/*
 	 * This means we have failed to clone anon_vma's correctly, but no
···
 	userfaultfd_unmap_complete(mm, &uf);
 	return ret;
 }
-

 /* Insert vm structure into process list sorted by address
  * and into the inode's i_mmap tree. If vm_file is non-NULL
+47
mm/vma.h
···
 	return 0;
 }

+
+/*
+ * Temporary helper functions for file systems which wrap an invocation of
+ * f_op->mmap() but which might have an underlying file system which implements
+ * f_op->mmap_prepare().
+ */
+
+static inline struct vm_area_desc *vma_to_desc(struct vm_area_struct *vma,
+		struct vm_area_desc *desc)
+{
+	desc->mm = vma->vm_mm;
+	desc->start = vma->vm_start;
+	desc->end = vma->vm_end;
+
+	desc->pgoff = vma->vm_pgoff;
+	desc->file = vma->vm_file;
+	desc->vm_flags = vma->vm_flags;
+	desc->page_prot = vma->vm_page_prot;
+
+	desc->vm_ops = NULL;
+	desc->private_data = NULL;
+
+	return desc;
+}
+
+static inline void set_vma_from_desc(struct vm_area_struct *vma,
+		struct vm_area_desc *desc)
+{
+	/*
+	 * Since we're invoking .mmap_prepare() despite having a partially
+	 * established VMA, we must take care to handle setting fields
+	 * correctly.
+	 */
+
+	/* Mutable fields. Populated with initial state. */
+	vma->vm_pgoff = desc->pgoff;
+	if (vma->vm_file != desc->file)
+		vma_set_file(vma, desc->file);
+	if (vma->vm_flags != desc->vm_flags)
+		vm_flags_set(vma, desc->vm_flags);
+	vma->vm_page_prot = desc->page_prot;
+
+	/* User-defined fields. */
+	vma->vm_ops = desc->vm_ops;
+	vma->vm_private_data = desc->private_data;
+}
+
 int
 do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		struct mm_struct *mm, unsigned long start,
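The vma_to_desc()/set_vma_from_desc() pair above implements a snapshot-edit-apply round trip: mutable VMA state is copied into a descriptor, a hook edits only the descriptor, and the result is written back on success. A minimal userspace sketch of that pattern (all names here are illustrative, not the kernel's):

```rust
// Sketch of the descriptor round-trip pattern used by compat_vma_mmap_prepare():
// snapshot mutable state, let a hook edit the snapshot, apply it back.
#[derive(Clone, Copy)]
struct Desc {
    pgoff: u64,
    private_data: usize,
}

struct Vma {
    pgoff: u64,
    private_data: usize,
}

// Mirrors vma_to_desc(): mutable fields are copied, user-defined
// fields start out cleared for the hook to fill in.
fn to_desc(vma: &Vma) -> Desc {
    Desc { pgoff: vma.pgoff, private_data: 0 }
}

// Mirrors set_vma_from_desc(): apply the (possibly edited) snapshot.
fn apply_desc(vma: &mut Vma, desc: &Desc) {
    vma.pgoff = desc.pgoff;
    vma.private_data = desc.private_data;
}

// A stand-in "mmap_prepare" hook; it only ever sees the descriptor.
fn mmap_prepare(desc: &mut Desc) -> Result<(), ()> {
    desc.pgoff += 1;
    desc.private_data = 42;
    Ok(())
}

fn compat_mmap_prepare(vma: &mut Vma) -> Result<(), ()> {
    let mut desc = to_desc(vma);
    // On error the VMA is left untouched, as in the kernel helper.
    mmap_prepare(&mut desc)?;
    apply_desc(vma, &desc);
    Ok(())
}

fn main() {
    let mut vma = Vma { pgoff: 7, private_data: 0 };
    assert!(compat_mmap_prepare(&mut vma).is_ok());
    assert_eq!(vma.pgoff, 8);
    assert_eq!(vma.private_data, 42);
}
```

The point of the indirection is that a failing hook never leaves the VMA half-updated: changes land only after the hook returns success.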
+10-7
net/bluetooth/eir.c
···
 	return ad_len;
 }

-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size)
 {
 	struct adv_info *adv = NULL;
 	u8 ad_len = 0, flags = 0;
···
 	/* If flags would still be empty, then there is no need to
 	 * include the "Flags" AD field".
 	 */
-	if (flags) {
+	if (flags && (ad_len + eir_precalc_len(1) <= size)) {
 		ptr[0] = 0x02;
 		ptr[1] = EIR_FLAGS;
 		ptr[2] = flags;
···
 	}

 	/* Provide Tx Power only if we can provide a valid value for it */
-	if (adv_tx_power != HCI_TX_POWER_INVALID) {
+	if (adv_tx_power != HCI_TX_POWER_INVALID &&
+	    (ad_len + eir_precalc_len(1) <= size)) {
 		ptr[0] = 0x02;
 		ptr[1] = EIR_TX_POWER;
 		ptr[2] = (u8)adv_tx_power;
···

 void *eir_get_service_data(u8 *eir, size_t eir_len, u16 uuid, size_t *len)
 {
-	while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, len))) {
+	size_t dlen;
+
+	while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, &dlen))) {
 		u16 value = get_unaligned_le16(eir);

 		if (uuid == value) {
 			if (len)
-				*len -= 2;
+				*len = dlen - 2;
 			return &eir[2];
 		}

-		eir += *len;
-		eir_len -= *len;
+		eir += dlen;
+		eir_len -= dlen;
 	}

 	return NULL;
···
 	.arg1_type	= ARG_PTR_TO_CTX,
 };

+static void bpf_skb_change_protocol(struct sk_buff *skb, u16 proto)
+{
+	skb->protocol = htons(proto);
+	if (skb_valid_dst(skb))
+		skb_dst_drop(skb);
+}
+
 static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len)
 {
 	/* Caller already did skb_cow() with len as headroom,
···
 		}
 	}

-	skb->protocol = htons(ETH_P_IPV6);
+	bpf_skb_change_protocol(skb, ETH_P_IPV6);
 	skb_clear_hash(skb);

 	return 0;
···
 		}
 	}

-	skb->protocol = htons(ETH_P_IP);
+	bpf_skb_change_protocol(skb, ETH_P_IP);
 	skb_clear_hash(skb);

 	return 0;
···
 		/* Match skb->protocol to new outer l3 protocol */
 		if (skb->protocol == htons(ETH_P_IP) &&
 		    flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV6)
-			skb->protocol = htons(ETH_P_IPV6);
+			bpf_skb_change_protocol(skb, ETH_P_IPV6);
 		else if (skb->protocol == htons(ETH_P_IPV6) &&
 			 flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV4)
-			skb->protocol = htons(ETH_P_IP);
+			bpf_skb_change_protocol(skb, ETH_P_IP);
 	}

 	if (skb_is_gso(skb)) {
···
 	/* Match skb->protocol to new outer l3 protocol */
 	if (skb->protocol == htons(ETH_P_IP) &&
 	    flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV6)
-		skb->protocol = htons(ETH_P_IPV6);
+		bpf_skb_change_protocol(skb, ETH_P_IPV6);
 	else if (skb->protocol == htons(ETH_P_IPV6) &&
 		 flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV4)
-		skb->protocol = htons(ETH_P_IP);
+		bpf_skb_change_protocol(skb, ETH_P_IP);

 	if (skb_is_gso(skb)) {
 		struct skb_shared_info *shinfo = skb_shinfo(skb);
+2-1
net/ethtool/ioctl.c
···
 	    ethtool_get_flow_spec_ring(info.fs.ring_cookie))
 		return -EINVAL;

-	if (!xa_load(&dev->ethtool->rss_ctx, info.rss_context))
+	if (info.rss_context &&
+	    !xa_load(&dev->ethtool->rss_ctx, info.rss_context))
 		return -EINVAL;
 	}
+55-55
net/ipv6/route.c
···
 	}
 }

+static int fib6_config_validate(struct fib6_config *cfg,
+				struct netlink_ext_ack *extack)
+{
+	/* RTF_PCPU is an internal flag; can not be set by userspace */
+	if (cfg->fc_flags & RTF_PCPU) {
+		NL_SET_ERR_MSG(extack, "Userspace can not set RTF_PCPU");
+		goto errout;
+	}
+
+	/* RTF_CACHE is an internal flag; can not be set by userspace */
+	if (cfg->fc_flags & RTF_CACHE) {
+		NL_SET_ERR_MSG(extack, "Userspace can not set RTF_CACHE");
+		goto errout;
+	}
+
+	if (cfg->fc_type > RTN_MAX) {
+		NL_SET_ERR_MSG(extack, "Invalid route type");
+		goto errout;
+	}
+
+	if (cfg->fc_dst_len > 128) {
+		NL_SET_ERR_MSG(extack, "Invalid prefix length");
+		goto errout;
+	}
+
+#ifdef CONFIG_IPV6_SUBTREES
+	if (cfg->fc_src_len > 128) {
+		NL_SET_ERR_MSG(extack, "Invalid source address length");
+		goto errout;
+	}
+
+	if (cfg->fc_nh_id && cfg->fc_src_len) {
+		NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
+		goto errout;
+	}
+#else
+	if (cfg->fc_src_len) {
+		NL_SET_ERR_MSG(extack,
+			       "Specifying source address requires IPV6_SUBTREES to be enabled");
+		goto errout;
+	}
+#endif
+	return 0;
+errout:
+	return -EINVAL;
+}
+
 static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
 					       gfp_t gfp_flags,
 					       struct netlink_ext_ack *extack)
···
 {
 	struct fib6_info *rt;
 	int err;
+
+	err = fib6_config_validate(cfg, extack);
+	if (err)
+		return err;

 	rt = ip6_route_info_create(cfg, gfp_flags, extack);
 	if (IS_ERR(rt))
···
 	rcu_read_unlock();
 }

-static int fib6_config_validate(struct fib6_config *cfg,
-				struct netlink_ext_ack *extack)
-{
-	/* RTF_PCPU is an internal flag; can not be set by userspace */
-	if (cfg->fc_flags & RTF_PCPU) {
-		NL_SET_ERR_MSG(extack, "Userspace can not set RTF_PCPU");
-		goto errout;
-	}
-
-	/* RTF_CACHE is an internal flag; can not be set by userspace */
-	if (cfg->fc_flags & RTF_CACHE) {
-		NL_SET_ERR_MSG(extack, "Userspace can not set RTF_CACHE");
-		goto errout;
-	}
-
-	if (cfg->fc_type > RTN_MAX) {
-		NL_SET_ERR_MSG(extack, "Invalid route type");
-		goto errout;
-	}
-
-	if (cfg->fc_dst_len > 128) {
-		NL_SET_ERR_MSG(extack, "Invalid prefix length");
-		goto errout;
-	}
-
-#ifdef CONFIG_IPV6_SUBTREES
-	if (cfg->fc_src_len > 128) {
-		NL_SET_ERR_MSG(extack, "Invalid source address length");
-		goto errout;
-	}
-
-	if (cfg->fc_nh_id && cfg->fc_src_len) {
-		NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
-		goto errout;
-	}
-#else
-	if (cfg->fc_src_len) {
-		NL_SET_ERR_MSG(extack,
-			       "Specifying source address requires IPV6_SUBTREES to be enabled");
-		goto errout;
-	}
-#endif
-	return 0;
-errout:
-	return -EINVAL;
-}
-
 static void rtmsg_to_fib6_config(struct net *net,
 				 struct in6_rtmsg *rtmsg,
 				 struct fib6_config *cfg)
···

 	switch (cmd) {
 	case SIOCADDRT:
-		err = fib6_config_validate(&cfg, NULL);
-		if (err)
-			break;
-
 		/* Only do the default setting of fc_metric in route adding */
 		if (cfg.fc_metric == 0)
 			cfg.fc_metric = IP6_RT_PRIO_USER;
···
 	int nhn = 0;
 	int err;

+	err = fib6_config_validate(cfg, extack);
+	if (err)
+		return err;
+
 	replace = (cfg->fc_nlinfo.nlh &&
 		   (cfg->fc_nlinfo.nlh->nlmsg_flags & NLM_F_REPLACE));
···

 	err = rtm_to_fib6_config(skb, nlh, &cfg, extack);
 	if (err < 0)
-		return err;
-
-	err = fib6_config_validate(&cfg, extack);
-	if (err)
 		return err;

 	if (cfg.fc_metric == 0)
+1-1
net/sched/sch_ets.c
···
 	for (i = q->nbands; i < oldbands; i++) {
 		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
 			list_del_init(&q->classes[i].alist);
-		qdisc_tree_flush_backlog(q->classes[i].qdisc);
+		qdisc_purge_queue(q->classes[i].qdisc);
 	}
 	WRITE_ONCE(q->nstrict, nstrict);
 	memcpy(q->prio2band, priomap, sizeof(priomap));
+1-1
net/sched/sch_prio.c
···
 	memcpy(q->prio2band, qopt->priomap, TC_PRIO_MAX+1);

 	for (i = q->bands; i < oldbands; i++)
-		qdisc_tree_flush_backlog(q->queues[i]);
+		qdisc_purge_queue(q->queues[i]);

 	for (i = oldbands; i < q->bands; i++) {
 		q->queues[i] = queues[i];
···
 use crate::{bindings, device::Device, error::Result, prelude::ENODEV};

+/// Returns the maximum number of possible CPUs in the current system configuration.
+#[inline]
+pub fn nr_cpu_ids() -> u32 {
+    #[cfg(any(NR_CPUS_1, CONFIG_FORCE_NR_CPUS))]
+    {
+        bindings::NR_CPUS
+    }
+
+    #[cfg(not(any(NR_CPUS_1, CONFIG_FORCE_NR_CPUS)))]
+    // SAFETY: `nr_cpu_ids` is a valid global provided by the kernel.
+    unsafe {
+        bindings::nr_cpu_ids
+    }
+}
+
+/// The CPU ID.
+///
+/// Represents a CPU identifier as a wrapper around an [`u32`].
+///
+/// # Invariants
+///
+/// The CPU ID lies within the range `[0, nr_cpu_ids())`.
+///
+/// # Examples
+///
+/// ```
+/// use kernel::cpu::CpuId;
+///
+/// let cpu = 0;
+///
+/// // SAFETY: 0 is always a valid CPU number.
+/// let id = unsafe { CpuId::from_u32_unchecked(cpu) };
+///
+/// assert_eq!(id.as_u32(), cpu);
+/// assert!(CpuId::from_i32(0).is_some());
+/// assert!(CpuId::from_i32(-1).is_none());
+/// ```
+#[derive(Copy, Clone, PartialEq, Eq, Debug)]
+pub struct CpuId(u32);
+
+impl CpuId {
+    /// Creates a new [`CpuId`] from the given `id` without checking bounds.
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`).
+    #[inline]
+    pub unsafe fn from_i32_unchecked(id: i32) -> Self {
+        debug_assert!(id >= 0);
+        debug_assert!((id as u32) < nr_cpu_ids());
+
+        // INVARIANT: The function safety guarantees `id` is a valid CPU id.
+        Self(id as u32)
+    }
+
+    /// Creates a new [`CpuId`] from the given `id`, checking that it is valid.
+    pub fn from_i32(id: i32) -> Option<Self> {
+        if id < 0 || id as u32 >= nr_cpu_ids() {
+            None
+        } else {
+            // INVARIANT: `id` has just been checked as a valid CPU ID.
+            Some(Self(id as u32))
+        }
+    }
+
+    /// Creates a new [`CpuId`] from the given `id` without checking bounds.
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`).
+    #[inline]
+    pub unsafe fn from_u32_unchecked(id: u32) -> Self {
+        debug_assert!(id < nr_cpu_ids());
+
+        // Ensure the `id` fits in an [`i32`] as it's also representable that way.
+        debug_assert!(id <= i32::MAX as u32);
+
+        // INVARIANT: The function safety guarantees `id` is a valid CPU id.
+        Self(id)
+    }
+
+    /// Creates a new [`CpuId`] from the given `id`, checking that it is valid.
+    pub fn from_u32(id: u32) -> Option<Self> {
+        if id >= nr_cpu_ids() {
+            None
+        } else {
+            // INVARIANT: `id` has just been checked as a valid CPU ID.
+            Some(Self(id))
+        }
+    }
+
+    /// Returns CPU number.
+    #[inline]
+    pub fn as_u32(&self) -> u32 {
+        self.0
+    }
+
+    /// Returns the ID of the CPU the code is currently running on.
+    ///
+    /// The returned value is considered unstable because it may change
+    /// unexpectedly due to preemption or CPU migration. It should only be
+    /// used when the context ensures that the task remains on the same CPU
+    /// or the users could use a stale (yet valid) CPU ID.
+    pub fn current() -> Self {
+        // SAFETY: raw_smp_processor_id() always returns a valid CPU ID.
+        unsafe { Self::from_u32_unchecked(bindings::raw_smp_processor_id()) }
+    }
+}
+
+impl From<CpuId> for u32 {
+    fn from(id: CpuId) -> Self {
+        id.as_u32()
+    }
+}
+
+impl From<CpuId> for i32 {
+    fn from(id: CpuId) -> Self {
+        id.as_u32() as i32
+    }
+}
+
 /// Creates a new instance of CPU's device.
 ///
 /// # Safety
···
 /// Callers must ensure that the CPU device is not used after it has been unregistered.
 /// This can be achieved, for example, by registering a CPU hotplug notifier and removing
 /// any references to the CPU device within the notifier's callback.
-pub unsafe fn from_cpu(cpu: u32) -> Result<&'static Device> {
+pub unsafe fn from_cpu(cpu: CpuId) -> Result<&'static Device> {
     // SAFETY: It is safe to call `get_cpu_device()` for any CPU.
-    let ptr = unsafe { bindings::get_cpu_device(cpu) };
+    let ptr = unsafe { bindings::get_cpu_device(u32::from(cpu)) };
     if ptr.is_null() {
         return Err(ENODEV);
     }
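The `CpuId` diff above centralizes bounds checking in two checked constructors (`from_u32`/`from_i32`) and pushes the responsibility to callers only through explicitly `unsafe` variants. A minimal userspace sketch of that checked-constructor logic, with `NR_CPU_IDS` standing in for the kernel's `nr_cpu_ids()` (the value 4 is an assumption for illustration only):

```rust
// Userspace model of the CpuId validation pattern. NR_CPU_IDS is a
// stand-in for the kernel's runtime `nr_cpu_ids()` value.
const NR_CPU_IDS: u32 = 4;

#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct CpuId(u32);

impl CpuId {
    // Checked constructor: mirrors `CpuId::from_u32`.
    fn from_u32(id: u32) -> Option<Self> {
        if id >= NR_CPU_IDS {
            None
        } else {
            Some(Self(id))
        }
    }

    // Checked constructor from a C-style (possibly negative) id,
    // mirroring `CpuId::from_i32`.
    fn from_i32(id: i32) -> Option<Self> {
        if id < 0 {
            None
        } else {
            Self::from_u32(id as u32)
        }
    }

    fn as_u32(&self) -> u32 {
        self.0
    }
}

fn main() {
    // In-range ids produce a CpuId; out-of-range and negative ids do not.
    assert!(CpuId::from_u32(0).is_some());
    assert!(CpuId::from_u32(NR_CPU_IDS).is_none());
    assert!(CpuId::from_i32(-1).is_none());
    assert_eq!(CpuId::from_i32(3).map(|c| c.as_u32()), Some(3));
}
```

Once a `CpuId` exists, every consumer (such as `from_cpu()` above) can rely on the invariant instead of re-validating a raw integer.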
+128-45
rust/kernel/cpufreq.rs
···

 use crate::{
     clk::Hertz,
+    cpu::CpuId,
     cpumask,
     device::{Bound, Device},
     devres::Devres,
···

     /// Returns the primary CPU for the [`Policy`].
     #[inline]
-    pub fn cpu(&self) -> u32 {
-        self.as_ref().cpu
+    pub fn cpu(&self) -> CpuId {
+        // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number.
+        unsafe { CpuId::from_u32_unchecked(self.as_ref().cpu) }
     }

     /// Returns the minimum frequency for the [`Policy`].
···
     #[inline]
     pub fn generic_get(&self) -> Result<u32> {
         // SAFETY: By the type invariant, the pointer stored in `self` is valid.
-        Ok(unsafe { bindings::cpufreq_generic_get(self.cpu()) })
+        Ok(unsafe { bindings::cpufreq_generic_get(u32::from(self.cpu())) })
     }

     /// Provides a wrapper to the register with energy model using the OPP core.
···
 struct PolicyCpu<'a>(&'a mut Policy);

 impl<'a> PolicyCpu<'a> {
-    fn from_cpu(cpu: u32) -> Result<Self> {
+    fn from_cpu(cpu: CpuId) -> Result<Self> {
         // SAFETY: It is safe to call `cpufreq_cpu_get` for any valid CPU.
-        let ptr = from_err_ptr(unsafe { bindings::cpufreq_cpu_get(cpu) })?;
+        let ptr = from_err_ptr(unsafe { bindings::cpufreq_cpu_get(u32::from(cpu)) })?;

         Ok(Self(
             // SAFETY: The `ptr` is guaranteed to be valid and remains valid for the lifetime of
···
 impl<T: Driver> Registration<T> {
     /// Driver's `init` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn init_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn init_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···

     /// Driver's `exit` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn exit_callback(ptr: *mut bindings::cpufreq_policy) {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn exit_callback(ptr: *mut bindings::cpufreq_policy) {
         // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
         // lifetime of `policy`.
         let policy = unsafe { Policy::from_raw_mut(ptr) };
···

     /// Driver's `online` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn online_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn online_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···

     /// Driver's `offline` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn offline_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn offline_callback(
+        ptr: *mut bindings::cpufreq_policy,
+    ) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···

     /// Driver's `suspend` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn suspend_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn suspend_callback(
+        ptr: *mut bindings::cpufreq_policy,
+    ) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···

     /// Driver's `resume` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn resume_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn resume_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···

     /// Driver's `ready` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn ready_callback(ptr: *mut bindings::cpufreq_policy) {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn ready_callback(ptr: *mut bindings::cpufreq_policy) {
         // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
         // lifetime of `policy`.
         let policy = unsafe { Policy::from_raw_mut(ptr) };
···

     /// Driver's `verify` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn verify_callback(ptr: *mut bindings::cpufreq_policy_data) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn verify_callback(
+        ptr: *mut bindings::cpufreq_policy_data,
+    ) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···

     /// Driver's `setpolicy` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn setpolicy_callback(ptr: *mut bindings::cpufreq_policy) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn setpolicy_callback(
+        ptr: *mut bindings::cpufreq_policy,
+    ) -> kernel::ffi::c_int {
         from_result(|| {
             // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
             // lifetime of `policy`.
···

     /// Driver's `target` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn target_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn target_callback(
         ptr: *mut bindings::cpufreq_policy,
         target_freq: u32,
         relation: u32,
···

     /// Driver's `target_index` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn target_index_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn target_index_callback(
         ptr: *mut bindings::cpufreq_policy,
         index: u32,
     ) -> kernel::ffi::c_int {
···

     /// Driver's `fast_switch` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn fast_switch_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn fast_switch_callback(
         ptr: *mut bindings::cpufreq_policy,
         target_freq: u32,
     ) -> kernel::ffi::c_uint {
···
     }

     /// Driver's `adjust_perf` callback.
-    extern "C" fn adjust_perf_callback(
+    ///
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    unsafe extern "C" fn adjust_perf_callback(
         cpu: u32,
         min_perf: usize,
         target_perf: usize,
         capacity: usize,
     ) {
-        if let Ok(mut policy) = PolicyCpu::from_cpu(cpu) {
+        // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number.
+        let cpu_id = unsafe { CpuId::from_u32_unchecked(cpu) };
+
+        if let Ok(mut policy) = PolicyCpu::from_cpu(cpu_id) {
             T::adjust_perf(&mut policy, min_perf, target_perf, capacity);
         }
     }

     /// Driver's `get_intermediate` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn get_intermediate_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn get_intermediate_callback(
         ptr: *mut bindings::cpufreq_policy,
         index: u32,
     ) -> kernel::ffi::c_uint {
···

     /// Driver's `target_intermediate` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn target_intermediate_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn target_intermediate_callback(
         ptr: *mut bindings::cpufreq_policy,
         index: u32,
     ) -> kernel::ffi::c_int {
···
     }

     /// Driver's `get` callback.
-    extern "C" fn get_callback(cpu: u32) -> kernel::ffi::c_uint {
-        PolicyCpu::from_cpu(cpu).map_or(0, |mut policy| T::get(&mut policy).map_or(0, |f| f))
+    ///
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    unsafe extern "C" fn get_callback(cpu: u32) -> kernel::ffi::c_uint {
+        // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number.
+        let cpu_id = unsafe { CpuId::from_u32_unchecked(cpu) };
+
+        PolicyCpu::from_cpu(cpu_id).map_or(0, |mut policy| T::get(&mut policy).map_or(0, |f| f))
     }

     /// Driver's `update_limit` callback.
-    extern "C" fn update_limits_callback(ptr: *mut bindings::cpufreq_policy) {
+    ///
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn update_limits_callback(ptr: *mut bindings::cpufreq_policy) {
         // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
         // lifetime of `policy`.
         let policy = unsafe { Policy::from_raw_mut(ptr) };
···

     /// Driver's `bios_limit` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn bios_limit_callback(cpu: i32, limit: *mut u32) -> kernel::ffi::c_int {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn bios_limit_callback(cpu: i32, limit: *mut u32) -> kernel::ffi::c_int {
+        // SAFETY: The C API guarantees that `cpu` refers to a valid CPU number.
+        let cpu_id = unsafe { CpuId::from_i32_unchecked(cpu) };
+
         from_result(|| {
-            let mut policy = PolicyCpu::from_cpu(cpu as u32)?;
+            let mut policy = PolicyCpu::from_cpu(cpu_id)?;

             // SAFETY: `limit` is guaranteed by the C code to be valid.
             T::bios_limit(&mut policy, &mut (unsafe { *limit })).map(|()| 0)
···

     /// Driver's `set_boost` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn set_boost_callback(
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn set_boost_callback(
         ptr: *mut bindings::cpufreq_policy,
         state: i32,
     ) -> kernel::ffi::c_int {
···

     /// Driver's `register_em` callback.
     ///
-    /// SAFETY: Called from C. Inputs must be valid pointers.
-    extern "C" fn register_em_callback(ptr: *mut bindings::cpufreq_policy) {
+    /// # Safety
+    ///
+    /// - This function may only be called from the cpufreq C infrastructure.
+    /// - The pointer arguments must be valid pointers.
+    unsafe extern "C" fn register_em_callback(ptr: *mut bindings::cpufreq_policy) {
         // SAFETY: The `ptr` is guaranteed to be valid by the contract with the C code for the
         // lifetime of `policy`.
         let policy = unsafe { Policy::from_raw_mut(ptr) };
+36-15
rust/kernel/cpumask.rs
···
 use crate::{
     alloc::{AllocError, Flags},
+    cpu::CpuId,
     prelude::*,
     types::Opaque,
 };
···
 ///
 /// ```
 /// use kernel::bindings;
+/// use kernel::cpu::CpuId;
 /// use kernel::cpumask::Cpumask;
 ///
-/// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: u32, clear_cpu: i32) {
+/// fn set_clear_cpu(ptr: *mut bindings::cpumask, set_cpu: CpuId, clear_cpu: CpuId) {
 ///     // SAFETY: The `ptr` is valid for writing and remains valid for the lifetime of the
 ///     // returned reference.
 ///     let mask = unsafe { Cpumask::as_mut_ref(ptr) };
···
     /// This mismatches kernel naming convention and corresponds to the C
     /// function `__cpumask_set_cpu()`.
     #[inline]
-    pub fn set(&mut self, cpu: u32) {
+    pub fn set(&mut self, cpu: CpuId) {
         // SAFETY: By the type invariant, `self.as_raw` is a valid argument to `__cpumask_set_cpu`.
-        unsafe { bindings::__cpumask_set_cpu(cpu, self.as_raw()) };
+        unsafe { bindings::__cpumask_set_cpu(u32::from(cpu), self.as_raw()) };
     }

     /// Clear `cpu` in the cpumask.
···
     /// This mismatches kernel naming convention and corresponds to the C
     /// function `__cpumask_clear_cpu()`.
     #[inline]
-    pub fn clear(&mut self, cpu: i32) {
+    pub fn clear(&mut self, cpu: CpuId) {
         // SAFETY: By the type invariant, `self.as_raw` is a valid argument to
         // `__cpumask_clear_cpu`.
-        unsafe { bindings::__cpumask_clear_cpu(cpu, self.as_raw()) };
+        unsafe { bindings::__cpumask_clear_cpu(i32::from(cpu), self.as_raw()) };
     }

     /// Test `cpu` in the cpumask.
     ///
     /// Equivalent to the kernel's `cpumask_test_cpu` API.
     #[inline]
-    pub fn test(&self, cpu: i32) -> bool {
+    pub fn test(&self, cpu: CpuId) -> bool {
         // SAFETY: By the type invariant, `self.as_raw` is a valid argument to `cpumask_test_cpu`.
-        unsafe { bindings::cpumask_test_cpu(cpu, self.as_raw()) }
+        unsafe { bindings::cpumask_test_cpu(i32::from(cpu), self.as_raw()) }
     }

     /// Set all CPUs in the cpumask.
···
 /// The following example demonstrates how to create and update a [`CpumaskVar`].
 ///
 /// ```
+/// use kernel::cpu::CpuId;
 /// use kernel::cpumask::CpumaskVar;
 ///
 /// let mut mask = CpumaskVar::new_zero(GFP_KERNEL).unwrap();
 ///
 /// assert!(mask.empty());
-/// mask.set(2);
-/// assert!(mask.test(2));
-/// mask.set(3);
-/// assert!(mask.test(3));
-/// assert_eq!(mask.weight(), 2);
+/// let mut count = 0;
+///
+/// let cpu2 = CpuId::from_u32(2);
+/// if let Some(cpu) = cpu2 {
+///     mask.set(cpu);
+///     assert!(mask.test(cpu));
+///     count += 1;
+/// }
+///
+/// let cpu3 = CpuId::from_u32(3);
+/// if let Some(cpu) = cpu3 {
+///     mask.set(cpu);
+///     assert!(mask.test(cpu));
+///     count += 1;
+/// }
+///
+/// assert_eq!(mask.weight(), count);
 ///
 /// let mask2 = CpumaskVar::try_clone(&mask).unwrap();
-/// assert!(mask2.test(2));
-/// assert!(mask2.test(3));
-/// assert_eq!(mask2.weight(), 2);
+///
+/// if let Some(cpu) = cpu2 {
+///     assert!(mask2.test(cpu));
+/// }
+///
+/// if let Some(cpu) = cpu3 {
+///     assert!(mask2.test(cpu));
+/// }
+/// assert_eq!(mask2.weight(), count);
 /// ```
 pub struct CpumaskVar {
     #[cfg(CONFIG_CPUMASK_OFFSTACK)]
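The point of the change above is that a `CpuId` can only be constructed from a validated CPU number, so `set`/`clear`/`test` never see an out-of-range id. A minimal standalone sketch of that pattern (not the kernel API; `NR_CPUS`, the `u64`-backed mask, and all names here are illustrative assumptions):

```rust
// Illustrative sketch only: a checked CpuId newtype plus a toy bitmask,
// mimicking the kernel's CpuId/Cpumask shape. NR_CPUS is an assumed limit.
const NR_CPUS: u32 = 8;

#[derive(Clone, Copy)]
struct CpuId(u32);

impl CpuId {
    // Validation happens once, at construction; invalid ids yield `None`.
    fn from_u32(id: u32) -> Option<Self> {
        (id < NR_CPUS).then(|| CpuId(id))
    }
}

impl From<CpuId> for u32 {
    fn from(cpu: CpuId) -> u32 {
        cpu.0
    }
}

// Toy cpumask: one bit per CPU in a u64.
struct Cpumask(u64);

impl Cpumask {
    fn set(&mut self, cpu: CpuId) {
        self.0 |= 1 << u32::from(cpu);
    }
    fn test(&self, cpu: CpuId) -> bool {
        self.0 & (1 << u32::from(cpu)) != 0
    }
    fn weight(&self) -> u32 {
        self.0.count_ones()
    }
}

fn main() {
    let mut mask = Cpumask(0);
    if let Some(cpu) = CpuId::from_u32(2) {
        mask.set(cpu);
        assert!(mask.test(cpu));
    }
    // An out-of-range id is rejected at construction, not deep in FFI.
    assert!(CpuId::from_u32(100).is_none());
    assert_eq!(mask.weight(), 1);
}
```

The design choice mirrors the diff: the fallible step (`from_u32`) is pushed to the caller, which is why the doc examples above switch to `if let Some(cpu) = ...` blocks.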
+1-1
rust/kernel/time/hrtimer.rs
···
         ) -> *mut Self {
             // SAFETY: As per the safety requirement of this function, `ptr`
             // is pointing inside a `$timer_type`.
-            unsafe { ::kernel::container_of!(ptr, $timer_type, $field).cast_mut() }
+            unsafe { ::kernel::container_of!(ptr, $timer_type, $field) }
         }
     }
 }
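The `.cast_mut()` drop above reflects a `container_of!` that now yields a `*mut` pointer directly. A self-contained sketch of the `container_of` idea (struct and field names here are made up for illustration; this is not the kernel macro):

```rust
// Illustrative sketch: recover a pointer to a containing struct from a
// pointer to one of its fields, by subtracting the field's byte offset.
use std::mem::offset_of;

#[repr(C)]
struct HrTimer {
    expires: u64,
}

#[repr(C)]
struct MyTimer {
    data: u32,
    timer: HrTimer, // embedded field we get a pointer to
}

// Returns `*mut` directly, so callers need no `.cast_mut()`.
unsafe fn container_of(ptr: *mut HrTimer) -> *mut MyTimer {
    ptr.cast::<u8>().sub(offset_of!(MyTimer, timer)).cast::<MyTimer>()
}

fn main() {
    let mut outer = MyTimer {
        data: 7,
        timer: HrTimer { expires: 0 },
    };
    let field: *mut HrTimer = &mut outer.timer;
    // SAFETY: `field` really does point inside a `MyTimer`.
    let recovered = unsafe { container_of(field) };
    assert!(std::ptr::eq(recovered, &mut outer));
    assert_eq!(unsafe { (*recovered).data }, 7);
}
```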
···
               'noise' : (0,) })


+def test_rss_default_context_rule(cfg):
+    """
+    Allocate a port, direct this port to context 0, then create a new RSS
+    context and steer all TCP traffic to it (context 1). Verify that:
+      * Traffic to the specific port continues to use queues of the main
+        context (0/1).
+      * Traffic to any other TCP port is redirected to the new context
+        (queues 2/3).
+    """
+
+    require_ntuple(cfg)
+
+    queue_cnt = len(_get_rx_cnts(cfg))
+    if queue_cnt < 4:
+        try:
+            ksft_pr(f"Increasing queue count {queue_cnt} -> 4")
+            ethtool(f"-L {cfg.ifname} combined 4")
+            defer(ethtool, f"-L {cfg.ifname} combined {queue_cnt}")
+        except Exception as exc:
+            raise KsftSkipEx("Not enough queues for the test") from exc
+
+    # Use queues 0 and 1 for the main context
+    ethtool(f"-X {cfg.ifname} equal 2")
+    defer(ethtool, f"-X {cfg.ifname} default")
+
+    # Create a new RSS context that uses queues 2 and 3
+    ctx_id = ethtool_create(cfg, "-X", "context new start 2 equal 2")
+    defer(ethtool, f"-X {cfg.ifname} context {ctx_id} delete")
+
+    # Generic low-priority rule: redirect all TCP traffic to the new context.
+    # Give it an explicit higher location number (lower priority).
+    flow_generic = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} context {ctx_id} loc 1"
+    ethtool(f"-N {cfg.ifname} {flow_generic}")
+    defer(ethtool, f"-N {cfg.ifname} delete 1")
+
+    # Specific high-priority rule for a random port that should stay on context 0.
+    # Assign loc 0 so it is evaluated before the generic rule.
+    port_main = rand_port()
+    flow_main = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} dst-port {port_main} context 0 loc 0"
+    ethtool(f"-N {cfg.ifname} {flow_main}")
+    defer(ethtool, f"-N {cfg.ifname} delete 0")
+
+    _ntuple_rule_check(cfg, 1, ctx_id)
+
+    # Verify that traffic matching the specific rule still goes to queues 0/1
+    _send_traffic_check(cfg, port_main, "context 0",
+                        { 'target': (0, 1),
+                          'empty' : (2, 3) })
+
+    # And that traffic for any other port is steered to the new context
+    port_other = rand_port()
+    _send_traffic_check(cfg, port_other, f"context {ctx_id}",
+                        { 'target': (2, 3),
+                          'noise' : (0, 1) })
+
+
 def main() -> None:
     with NetDrvEpEnv(__file__, nsim_test=False) as cfg:
         cfg.context_cnt = None
···
                   test_rss_context_overlap, test_rss_context_overlap2,
                   test_rss_context_out_of_order, test_rss_context4_create_with_cfg,
                   test_flow_add_context_missing,
-                  test_delete_rss_context_busy, test_rss_ntuple_addition],
+                  test_delete_rss_context_busy, test_rss_ntuple_addition,
+                  test_rss_default_context_rule],
                  args=(cfg, ))
     ksft_exit()
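The test above hinges on ntuple rule priority: rules are evaluated in `loc` order, so the port-specific rule at `loc 0` shadows the catch-all TCP rule at `loc 1`. A toy model of that first-match semantics (field names and values are illustrative, not the ethtool data model):

```rust
// Illustrative sketch of first-match flow-rule classification ordered by
// location: lower `loc` wins, a rule with no dst_port matches any port.
#[derive(Clone)]
struct Rule {
    loc: u32,              // evaluation order, lower = higher priority
    dst_port: Option<u16>, // None = wildcard port match
    context: u32,          // RSS context the packet is steered to
}

fn classify(rules: &[Rule], port: u16) -> Option<u32> {
    let mut by_loc: Vec<&Rule> = rules.iter().collect();
    by_loc.sort_by_key(|r| r.loc);
    by_loc
        .into_iter()
        .find(|r| r.dst_port.map_or(true, |p| p == port))
        .map(|r| r.context)
}

fn main() {
    let rules = vec![
        // Generic rule at loc 1: all TCP traffic -> new context 1.
        Rule { loc: 1, dst_port: None, context: 1 },
        // Specific rule at loc 0: one port stays on the main context 0.
        Rule { loc: 0, dst_port: Some(8080), context: 0 },
    ];
    assert_eq!(classify(&rules, 8080), Some(0)); // specific rule wins
    assert_eq!(classify(&rules, 4242), Some(1)); // everything else -> ctx 1
}
```

This is why the test expects the chosen port on queues 0/1 and all other ports on queues 2/3.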
···
 #include "gic.h"
 #include "vgic.h"

-static const uint64_t CVAL_MAX = ~0ULL;
+/* Depends on counter width. */
+static uint64_t CVAL_MAX;
 /* tval is a signed 32-bit int. */
 static const int32_t TVAL_MAX = INT32_MAX;
 static const int32_t TVAL_MIN = INT32_MIN;
···
 /* After how much time we say there is no IRQ. */
 static const uint32_t TIMEOUT_NO_IRQ_US = 50000;

-/* A nice counter value to use as the starting one for most tests. */
-static const uint64_t DEF_CNT = (CVAL_MAX / 2);
+/* Counter value to use as the starting one for most tests. Set to CVAL_MAX/2 */
+static uint64_t DEF_CNT;

 /* Number of runs. */
 static const uint32_t NR_TEST_ITERS_DEF = 5;
···
 {
         atomic_set(&shared_data.handled, 0);
         atomic_set(&shared_data.spurious, 0);
-        timer_set_ctl(timer, ctl);
         timer_set_tval(timer, tval_cycles);
+        timer_set_ctl(timer, ctl);
 }

 static void set_xval_irq(enum arch_timer timer, uint64_t xval, uint32_t ctl,
···
                 test_set_cnt_after_tval(timer, 0, tval, (uint64_t) tval + 1,
                                         wm);
         }
-
-        for (i = 0; i < ARRAY_SIZE(sleep_method); i++) {
-                sleep_method_t sm = sleep_method[i];
-
-                test_set_cnt_after_cval_no_irq(timer, 0, DEF_CNT, CVAL_MAX, sm);
-        }
 }

 /*
···
         GUEST_DONE();
 }

+static cpu_set_t default_cpuset;
+
 static uint32_t next_pcpu(void)
 {
         uint32_t max = get_nprocs();
         uint32_t cur = sched_getcpu();
         uint32_t next = cur;
-        cpu_set_t cpuset;
+        cpu_set_t cpuset = default_cpuset;

         TEST_ASSERT(max > 1, "Need at least two physical cpus");
-
-        sched_getaffinity(0, sizeof(cpuset), &cpuset);

         do {
                 next = (next + 1) % CPU_SETSIZE;
···
         test_init_timer_irq(*vm, *vcpu);
         vgic_v3_setup(*vm, 1, 64);
         sync_global_to_guest(*vm, test_args);
+        sync_global_to_guest(*vm, CVAL_MAX);
+        sync_global_to_guest(*vm, DEF_CNT);
 }

 static void test_print_help(char *name)
···
         pr_info("\t-b: Test both physical and virtual timers (default: true)\n");
         pr_info("\t-l: Delta (in ms) used for long wait time test (default: %u)\n",
                 LONG_WAIT_TEST_MS);
-        pr_info("\t-l: Delta (in ms) used for wait times (default: %u)\n",
+        pr_info("\t-w: Delta (in ms) used for wait times (default: %u)\n",
                 WAIT_TEST_MS);
         pr_info("\t-p: Test physical timer (default: true)\n");
         pr_info("\t-v: Test virtual timer (default: true)\n");
···
         return false;
 }

+static void set_counter_defaults(void)
+{
+        const uint64_t MIN_ROLLOVER_SECS = 40ULL * 365 * 24 * 3600;
+        uint64_t freq = read_sysreg(CNTFRQ_EL0);
+        uint64_t width = ilog2(MIN_ROLLOVER_SECS * freq);
+
+        width = clamp(width, 56, 64);
+        CVAL_MAX = GENMASK_ULL(width - 1, 0);
+        DEF_CNT = CVAL_MAX / 2;
+}
+
 int main(int argc, char *argv[])
 {
         struct kvm_vcpu *vcpu;
···
         if (!parse_args(argc, argv))
                 exit(KSFT_SKIP);
+
+        sched_getaffinity(0, sizeof(default_cpuset), &default_cpuset);
+        set_counter_defaults();

         if (test_args.test_virtual) {
                 test_vm_create(&vm, &vcpu, VIRTUAL);
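The `set_counter_defaults()` logic above sizes `CVAL_MAX` from the counter frequency: it picks the smallest bit width whose rollover period is at least ~40 years, clamped to 56..64 bits. A standalone sketch of that arithmetic (the 25 MHz frequency in the example is an assumed typical value, not from the patch):

```rust
// Illustrative sketch of set_counter_defaults(): derive CVAL_MAX and DEF_CNT
// from a counter frequency, mirroring ilog2 + clamp + GENMASK_ULL.
fn counter_defaults(freq_hz: u64) -> (u64, u64) {
    const MIN_ROLLOVER_SECS: u64 = 40 * 365 * 24 * 3600;

    // ilog2(): index of the highest set bit (input is always nonzero here).
    let width = 63 - u64::from((MIN_ROLLOVER_SECS * freq_hz).leading_zeros());
    let width = width.clamp(56, 64);

    // GENMASK_ULL(width - 1, 0): the low `width` bits set.
    let cval_max = if width == 64 {
        u64::MAX
    } else {
        (1u64 << width) - 1
    };
    (cval_max, cval_max / 2) // (CVAL_MAX, DEF_CNT)
}

fn main() {
    // Assumed 25 MHz system counter: 40 years of ticks needs ~55 bits,
    // which the clamp raises to the architectural minimum of 56.
    let (cval_max, def_cnt) = counter_defaults(25_000_000);
    assert_eq!(cval_max, (1u64 << 56) - 1);
    assert_eq!(def_cnt, cval_max / 2);
}
```

This also explains why `CVAL_MAX` and `DEF_CNT` became runtime globals that must be synced to the guest with `sync_global_to_guest()`: they now depend on `CNTFRQ_EL0`, which is only known at run time.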