···
-APM X-Gene SLIMpro Mailbox I2C Driver
-
-An I2C controller accessed over the "SLIMpro" mailbox.
-
-Required properties :
-
- - compatible : should be "apm,xgene-slimpro-i2c"
- - mboxes : use the label reference for the mailbox as the first parameter.
-            The second parameter is the channel number.
-
-Example :
-	i2cslimpro {
-		compatible = "apm,xgene-slimpro-i2c";
-		mboxes = <&mailbox 0>;
-	};
···
 
 There is a mailing list for discussing Linux amateur radio matters
 called linux-hams@vger.kernel.org. To subscribe to it, send a message to
-majordomo@vger.kernel.org with the words "subscribe linux-hams" in the body
-of the message, the subject field is ignored. You don't need to be
-subscribed to post but of course that means you might miss an answer.
+linux-hams+subscribe@vger.kernel.org or use the web interface at
+https://vger.kernel.org. The subject and body of the message are
+ignored. You don't need to be subscribed to post but of course that
+means you might miss an answer.
···
 
 Checksum offload header fields are in big endian format.
 
+Packet format::
+
 Bit             0 - 6          7               8-15           16-31
 Function        Header Type    Next Header    Checksum Valid    Reserved
 
 Header Type is to indicate the type of header, this usually is set to CHECKSUM
 
 Header types
-= ==========================
+
+= ===============
 0  Reserved
 1  Reserved
 2  checksum header
+= ===============
 
 Checksum Valid is to indicate whether the header checksum is valid. Value of 1
 implies that checksum is calculated on this packet and is valid, value of 0
···
 packets and either ACK the MAP command or deliver the IP packet to the
 network stack as needed
 
-MAP header|IP Packet|Optional padding|MAP header|IP Packet|Optional padding....
+Packet format::
 
-MAP header|IP Packet|Optional padding|MAP header|Command Packet|Optional pad...
+  MAP header|IP Packet|Optional padding|MAP header|IP Packet|Optional padding....
+
+  MAP header|IP Packet|Optional padding|MAP header|Command Packet|Optional pad...
 
 3. Userspace configuration
 ==========================
Documentation/networking/net_failover.rst | +2 -4
···
 received only on the 'failover' device.
 
 Below is the patch snippet used with 'cloud-ifupdown-helper' script found on
-Debian cloud images:
+Debian cloud images::
 
-::
 @@ -27,6 +27,8 @@ do_setup() {
      local working="$cfgdir/.$INTERFACE"
      local final="$cfgdir/$INTERFACE"
···
 
 The following script is executed on the destination hypervisor once migration
 completes, and it reattaches the VF to the VM and brings down the virtio-net
-interface.
+interface::
 
-::
  # reattach-vf.sh
  #!/bin/bash
 
Documentation/rust/coding-guidelines.rst | +75
···
 individual files, and does not require a kernel configuration. Sometimes it may
 even work with broken code.
 
+Imports
+~~~~~~~
+
+``rustfmt``, by default, formats imports in a way that is prone to conflicts
+while merging and rebasing, since in some cases it condenses several items into
+the same line. For instance:
+
+.. code-block:: rust
+
+	// Do not use this style.
+	use crate::{
+	    example1,
+	    example2::{example3, example4, example5},
+	    example6, example7,
+	    example8::example9,
+	};
+
+Instead, the kernel uses a vertical layout that looks like this:
+
+.. code-block:: rust
+
+	use crate::{
+	    example1,
+	    example2::{
+	        example3,
+	        example4,
+	        example5, //
+	    },
+	    example6,
+	    example7,
+	    example8::example9, //
+	};
+
+That is, each item goes into its own line, and braces are used as soon as there
+is more than one item in a list.
+
+The trailing empty comment allows to preserve this formatting. Not only that,
+``rustfmt`` will actually reformat imports vertically when the empty comment is
+added. That is, it is possible to easily reformat the original example into the
+expected style by running ``rustfmt`` on an input like:
+
+.. code-block:: rust
+
+	// Do not use this style.
+	use crate::{
+	    example1,
+	    example2::{example3, example4, example5, //
+	    },
+	    example6, example7,
+	    example8::example9, //
+	};
+
+The trailing empty comment works for nested imports, as shown above, as well as
+for single item imports -- this can be useful to minimize diffs within patch
+series:
+
+.. code-block:: rust
+
+	use crate::{
+	    example1, //
+	};
+
+The trailing empty comment works in any of the lines within the braces, but it
+is preferred to keep it in the last item, since it is reminiscent of the
+trailing comma in other formatters. Sometimes it may be simpler to avoid moving
+the comment several times within a patch series due to changes in the list.
+
+There may be cases where exceptions may need to be made, i.e. none of this is
+a hard rule. There is also code that is not migrated to this style yet, but
+please do not introduce code in other styles.
+
+Eventually, the goal is to get ``rustfmt`` to support this formatting style (or
+a similar one) automatically in a stable release without requiring the trailing
+empty comment. Thus, at some point, the goal is to remove those comments.
+
 Comments
 --------
Documentation/virt/kvm/api.rst | +17 -3
···
 KVM_SET_VCPU_EVENTS or otherwise) because such an exception is always delivered
 directly to the virtual CPU).
 
+Calling this ioctl on a vCPU that hasn't been initialized will return
+-ENOEXEC.
+
 ::
 
   struct kvm_vcpu_events {
···
 
 See KVM_GET_VCPU_EVENTS for the data structure.
 
+Calling this ioctl on a vCPU that hasn't been initialized will return
+-ENOEXEC.
+
 4.33 KVM_GET_DEBUGREGS
 ----------------------
···
 guest_memfd range is not allowed (any number of memory regions can be bound to
 a single guest_memfd file, but the bound ranges must not overlap).
 
-When the capability KVM_CAP_GUEST_MEMFD_MMAP is supported, the 'flags' field
-supports GUEST_MEMFD_FLAG_MMAP. Setting this flag on guest_memfd creation
-enables mmap() and faulting of guest_memfd memory to host userspace.
+The capability KVM_CAP_GUEST_MEMFD_FLAGS enumerates the `flags` that can be
+specified via KVM_CREATE_GUEST_MEMFD. Currently defined flags:
+
+  ============================ ================================================
+  GUEST_MEMFD_FLAG_MMAP        Enable using mmap() on the guest_memfd file
+                               descriptor.
+  GUEST_MEMFD_FLAG_INIT_SHARED Make all memory in the file shared during
+                               KVM_CREATE_GUEST_MEMFD (memory files created
+                               without INIT_SHARED will be marked private).
+                               Shared memory can be faulted into host userspace
+                               page tables. Private memory cannot.
+  ============================ ================================================
 
 When the KVM MMU performs a PFN lookup to service a guest fault and the backing
 guest_memfd has the GUEST_MEMFD_FLAG_MMAP set, then the fault will always be
Documentation/virt/kvm/devices/arm-vgic-v3.rst | +2 -1
···
 to inject interrupts to the VGIC instead of directly to CPUs. It is not
 possible to create both a GICv3 and GICv2 on the same VM.
 
-Creating a guest GICv3 device requires a host GICv3 as well.
+Creating a guest GICv3 device requires a GICv3 host, or a GICv5 host with
+support for FEAT_GCIE_LEGACY.
 
 
 Groups:
MAINTAINERS | +3 -2
···
 ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
 M:	Corentin Chary <corentin.chary@gmail.com>
 M:	Luke D. Jones <luke@ljones.dev>
+M:	Denis Benato <benato.denis96@gmail.com>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 W:	https://asus-linux.org/
···
 F:	drivers/vfio/cdx/*
 
 VFIO DRIVER
-M:	Alex Williamson <alex.williamson@redhat.com>
+M:	Alex Williamson <alex@shazbot.org>
 L:	kvm@vger.kernel.org
 S:	Maintained
 T:	git https://github.com/awilliam/linux-vfio.git
···
 F:	drivers/media/test-drivers/vimc/*
 
 VIRT LIB
-M:	Alex Williamson <alex.williamson@redhat.com>
+M:	Alex Williamson <alex@shazbot.org>
 M:	Paolo Bonzini <pbonzini@redhat.com>
 L:	kvm@vger.kernel.org
 S:	Supported
···
 	def_bool y
 	depends on HAVE_CFI_ICALL_NORMALIZE_INTEGERS
 	depends on RUSTC_VERSION >= 107900
+	depends on ARM64 || X86_64
 	# With GCOV/KASAN we need this fix: https://github.com/rust-lang/rust/pull/129373
 	depends on (RUSTC_LLVM_VERSION >= 190103 && RUSTC_VERSION >= 108200) || \
 		   (!GCOV_KERNEL && !KASAN_GENERIC && !KASAN_SW_TAGS)
arch/arm64/include/asm/el2_setup.h | +32 -6
···
  * ID_AA64MMFR4_EL1.E2H0 < 0. On such CPUs HCR_EL2.E2H is RES1, but it
  * can reset into an UNKNOWN state and might not read as 1 until it has
  * been initialized explicitly.
- *
- * Fruity CPUs seem to have HCR_EL2.E2H set to RAO/WI, but
- * don't advertise it (they predate this relaxation).
- *
  * Initalize HCR_EL2.E2H so that later code can rely upon HCR_EL2.E2H
  * indicating whether the CPU is running in E2H mode.
  */
 	mrs_s	x1, SYS_ID_AA64MMFR4_EL1
 	sbfx	x1, x1, #ID_AA64MMFR4_EL1_E2H0_SHIFT, #ID_AA64MMFR4_EL1_E2H0_WIDTH
 	cmp	x1, #0
-	b.ge	.LnVHE_\@
+	b.lt	.LnE2H0_\@
 
+	/*
+	 * Unfortunately, HCR_EL2.E2H can be RES1 even if not advertised
+	 * as such via ID_AA64MMFR4_EL1.E2H0:
+	 *
+	 * - Fruity CPUs predate the !FEAT_E2H0 relaxation, and seem to
+	 *   have HCR_EL2.E2H implemented as RAO/WI.
+	 *
+	 * - On CPUs that lack FEAT_FGT, a hypervisor can't trap guest
+	 *   reads of ID_AA64MMFR4_EL1 to advertise !FEAT_E2H0. NV
+	 *   guests on these hosts can write to HCR_EL2.E2H without
+	 *   trapping to the hypervisor, but these writes have no
+	 *   functional effect.
+	 *
+	 * Handle both cases by checking for an essential VHE property
+	 * (system register remapping) to decide whether we're
+	 * effectively VHE-only or not.
+	 */
+	msr_hcr_el2 x0			// Setup HCR_EL2 as nVHE
+	isb
+	mov	x1, #1			// Write something to FAR_EL1
+	msr	far_el1, x1
+	isb
+	mov	x1, #2			// Try to overwrite it via FAR_EL2
+	msr	far_el2, x1
+	isb
+	mrs	x1, far_el1		// If we see the latest write in FAR_EL1,
+	cmp	x1, #2			// we can safely assume we are VHE only.
+	b.ne	.LnVHE_\@		// Otherwise, we know that nVHE works.
+
+.LnE2H0_\@:
 	orr	x0, x0, #HCR_E2H
-.LnVHE_\@:
 	msr_hcr_el2 x0
 	isb
+.LnVHE_\@:
 .endm
 
 .macro __init_el2_sctlr
arch/arm64/include/asm/kvm_host.h | +50
···
 	u64 hcrx_el2;
 	u64 mdcr_el2;
 
+	struct {
+		u64	r;
+		u64	w;
+	} fgt[__NR_FGT_GROUP_IDS__];
+
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
···
 void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt);
 void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1);
 void check_feature_map(void);
+void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu);
 
+static __always_inline enum fgt_group_id __fgt_reg_to_group_id(enum vcpu_sysreg reg)
+{
+	switch (reg) {
+	case HFGRTR_EL2:
+	case HFGWTR_EL2:
+		return HFGRTR_GROUP;
+	case HFGITR_EL2:
+		return HFGITR_GROUP;
+	case HDFGRTR_EL2:
+	case HDFGWTR_EL2:
+		return HDFGRTR_GROUP;
+	case HAFGRTR_EL2:
+		return HAFGRTR_GROUP;
+	case HFGRTR2_EL2:
+	case HFGWTR2_EL2:
+		return HFGRTR2_GROUP;
+	case HFGITR2_EL2:
+		return HFGITR2_GROUP;
+	case HDFGRTR2_EL2:
+	case HDFGWTR2_EL2:
+		return HDFGRTR2_GROUP;
+	default:
+		BUILD_BUG_ON(1);
+	}
+}
+
+#define vcpu_fgt(vcpu, reg)						\
+	({								\
+		enum fgt_group_id id = __fgt_reg_to_group_id(reg);	\
+		u64 *p;							\
+		switch (reg) {						\
+		case HFGWTR_EL2:					\
+		case HDFGWTR_EL2:					\
+		case HFGWTR2_EL2:					\
+		case HDFGWTR2_EL2:					\
+			p = &(vcpu)->arch.fgt[id].w;			\
+			break;						\
+		default:						\
+			p = &(vcpu)->arch.fgt[id].r;			\
+			break;						\
+		}							\
+									\
+		p;							\
+	})
 
 #endif /* __ARM64_KVM_HOST_H__ */
arch/arm64/include/asm/sysreg.h | +10 -1
···
 	__val;								\
 })
 
+/*
+ * The "Z" constraint combined with the "%x0" template should be enough
+ * to force XZR generation if (v) is a constant 0 value but LLVM does not
+ * yet understand that modifier/constraint combo so a conditional is required
+ * to nudge the compiler into using XZR as a source for a 0 constant value.
+ */
 #define write_sysreg_s(v, r) do {					\
 	u64 __val = (u64)(v);						\
 	u32 __maybe_unused __check_r = (u32)(r);			\
-	asm volatile(__msr_s(r, "%x0") : : "rZ" (__val));		\
+	if (__builtin_constant_p(__val) && __val == 0)			\
+		asm volatile(__msr_s(r, "xzr"));			\
+	else								\
+		asm volatile(__msr_s(r, "%x0") : : "r" (__val));	\
 } while (0)
 
 /*
arch/arm64/kernel/entry-common.c | +5 -3
···
 
 static void noinstr el0_softstp(struct pt_regs *regs, unsigned long esr)
 {
+	bool step_done;
+
 	if (!is_ttbr0_addr(regs->pc))
 		arm64_apply_bp_hardening();
 
···
 	 * If we are stepping a suspended breakpoint there's nothing more to do:
 	 * the single-step is complete.
 	 */
-	if (!try_step_suspended_breakpoints(regs)) {
-		local_daif_restore(DAIF_PROCCTX);
+	step_done = try_step_suspended_breakpoints(regs);
+	local_daif_restore(DAIF_PROCCTX);
+	if (!step_done)
 		do_el0_softstep(esr, regs);
-	}
 	arm64_exit_to_user_mode(regs);
 }
 
···
  */
 
 #include <linux/kvm_host.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_nested.h>
 #include <asm/sysreg.h>
 
 /*
···
 		*res0 = *res1 = 0;
 		break;
 	}
+}
+
+static __always_inline struct fgt_masks *__fgt_reg_to_masks(enum vcpu_sysreg reg)
+{
+	switch (reg) {
+	case HFGRTR_EL2:
+		return &hfgrtr_masks;
+	case HFGWTR_EL2:
+		return &hfgwtr_masks;
+	case HFGITR_EL2:
+		return &hfgitr_masks;
+	case HDFGRTR_EL2:
+		return &hdfgrtr_masks;
+	case HDFGWTR_EL2:
+		return &hdfgwtr_masks;
+	case HAFGRTR_EL2:
+		return &hafgrtr_masks;
+	case HFGRTR2_EL2:
+		return &hfgrtr2_masks;
+	case HFGWTR2_EL2:
+		return &hfgwtr2_masks;
+	case HFGITR2_EL2:
+		return &hfgitr2_masks;
+	case HDFGRTR2_EL2:
+		return &hdfgrtr2_masks;
+	case HDFGWTR2_EL2:
+		return &hdfgwtr2_masks;
+	default:
+		BUILD_BUG_ON(1);
+	}
+}
+
+static __always_inline void __compute_fgt(struct kvm_vcpu *vcpu, enum vcpu_sysreg reg)
+{
+	u64 fgu = vcpu->kvm->arch.fgu[__fgt_reg_to_group_id(reg)];
+	struct fgt_masks *m = __fgt_reg_to_masks(reg);
+	u64 clear = 0, set = 0, val = m->nmask;
+
+	set |= fgu & m->mask;
+	clear |= fgu & m->nmask;
+
+	if (is_nested_ctxt(vcpu)) {
+		u64 nested = __vcpu_sys_reg(vcpu, reg);
+		set |= nested & m->mask;
+		clear |= ~nested & m->nmask;
+	}
+
+	val |= set;
+	val &= ~clear;
+	*vcpu_fgt(vcpu, reg) = val;
+}
+
+static void __compute_hfgwtr(struct kvm_vcpu *vcpu)
+{
+	__compute_fgt(vcpu, HFGWTR_EL2);
+
+	if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
+		*vcpu_fgt(vcpu, HFGWTR_EL2) |= HFGWTR_EL2_TCR_EL1;
+}
+
+static void __compute_hdfgwtr(struct kvm_vcpu *vcpu)
+{
+	__compute_fgt(vcpu, HDFGWTR_EL2);
+
+	if (is_hyp_ctxt(vcpu))
+		*vcpu_fgt(vcpu, HDFGWTR_EL2) |= HDFGWTR_EL2_MDSCR_EL1;
+}
+
+void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu)
+{
+	if (!cpus_have_final_cap(ARM64_HAS_FGT))
+		return;
+
+	__compute_fgt(vcpu, HFGRTR_EL2);
+	__compute_hfgwtr(vcpu);
+	__compute_fgt(vcpu, HFGITR_EL2);
+	__compute_fgt(vcpu, HDFGRTR_EL2);
+	__compute_hdfgwtr(vcpu);
+	__compute_fgt(vcpu, HAFGRTR_EL2);
+
+	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
+		return;
+
+	__compute_fgt(vcpu, HFGRTR2_EL2);
+	__compute_fgt(vcpu, HFGWTR2_EL2);
+	__compute_fgt(vcpu, HFGITR2_EL2);
+	__compute_fgt(vcpu, HDFGRTR2_EL2);
+	__compute_fgt(vcpu, HDFGWTR2_EL2);
 }
arch/arm64/kvm/debug.c | +10 -5
···
 #include <asm/kvm_arm.h>
 #include <asm/kvm_emulate.h>
 
+static int cpu_has_spe(u64 dfr0)
+{
+	return cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) &&
+	       !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P);
+}
+
 /**
  * kvm_arm_setup_mdcr_el2 - configure vcpu mdcr_el2 value
  *
···
 	*host_data_ptr(debug_brps) = SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr0);
 	*host_data_ptr(debug_wrps) = SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr0);
 
+	if (cpu_has_spe(dfr0))
+		host_data_set_flag(HAS_SPE);
+
 	if (has_vhe())
 		return;
-
-	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_PMSVer_SHIFT) &&
-	    !(read_sysreg_s(SYS_PMBIDR_EL1) & PMBIDR_EL1_P))
-		host_data_set_flag(HAS_SPE);
 
 	/* Check if we have BRBE implemented and available at the host */
 	if (cpuid_feature_extract_unsigned_field(dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT))
···
 void kvm_debug_init_vhe(void)
 {
 	/* Clear PMSCR_EL1.E{0,1}SPE which reset to UNKNOWN values. */
-	if (SYS_FIELD_GET(ID_AA64DFR0_EL1, PMSVer, read_sysreg(id_aa64dfr0_el1)))
+	if (host_data_test_flag(HAS_SPE))
 		write_sysreg_el1(0, SYS_PMSCR);
 }
 
arch/arm64/kvm/guest.c | -70
···
 	return copy_core_reg_indices(vcpu, NULL);
 }
 
-static const u64 timer_reg_list[] = {
-	KVM_REG_ARM_TIMER_CTL,
-	KVM_REG_ARM_TIMER_CNT,
-	KVM_REG_ARM_TIMER_CVAL,
-	KVM_REG_ARM_PTIMER_CTL,
-	KVM_REG_ARM_PTIMER_CNT,
-	KVM_REG_ARM_PTIMER_CVAL,
-};
-
-#define NUM_TIMER_REGS ARRAY_SIZE(timer_reg_list)
-
-static bool is_timer_reg(u64 index)
-{
-	switch (index) {
-	case KVM_REG_ARM_TIMER_CTL:
-	case KVM_REG_ARM_TIMER_CNT:
-	case KVM_REG_ARM_TIMER_CVAL:
-	case KVM_REG_ARM_PTIMER_CTL:
-	case KVM_REG_ARM_PTIMER_CNT:
-	case KVM_REG_ARM_PTIMER_CVAL:
-		return true;
-	}
-	return false;
-}
-
-static int copy_timer_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
-{
-	for (int i = 0; i < NUM_TIMER_REGS; i++) {
-		if (put_user(timer_reg_list[i], uindices))
-			return -EFAULT;
-		uindices++;
-	}
-
-	return 0;
-}
-
-static int set_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
-{
-	void __user *uaddr = (void __user *)(long)reg->addr;
-	u64 val;
-	int ret;
-
-	ret = copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id));
-	if (ret != 0)
-		return -EFAULT;
-
-	return kvm_arm_timer_set_reg(vcpu, reg->id, val);
-}
-
-static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
-{
-	void __user *uaddr = (void __user *)(long)reg->addr;
-	u64 val;
-
-	val = kvm_arm_timer_get_reg(vcpu, reg->id);
-	return copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)) ? -EFAULT : 0;
-}
-
 static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 {
 	const unsigned int slices = vcpu_sve_slices(vcpu);
···
 	res += num_sve_regs(vcpu);
 	res += kvm_arm_num_sys_reg_descs(vcpu);
 	res += kvm_arm_get_fw_num_regs(vcpu);
-	res += NUM_TIMER_REGS;
 
 	return res;
 }
···
 		return ret;
 	uindices += kvm_arm_get_fw_num_regs(vcpu);
 
-	ret = copy_timer_indices(vcpu, uindices);
-	if (ret < 0)
-		return ret;
-	uindices += NUM_TIMER_REGS;
-
 	return kvm_arm_copy_sys_reg_indices(vcpu, uindices);
 }
···
 	case KVM_REG_ARM64_SVE:	return get_sve_reg(vcpu, reg);
 	}
 
-	if (is_timer_reg(reg->id))
-		return get_timer_reg(vcpu, reg);
-
 	return kvm_arm_sys_reg_get_reg(vcpu, reg);
 }
···
 		return kvm_arm_set_fw_reg(vcpu, reg);
 	case KVM_REG_ARM64_SVE:	return set_sve_reg(vcpu, reg);
 	}
-
-	if (is_timer_reg(reg->id))
-		return set_timer_reg(vcpu, reg);
 
 	return kvm_arm_sys_reg_set_reg(vcpu, reg);
 }
arch/arm64/kvm/handle_exit.c | +6 -1
···
 	if (esr & ESR_ELx_WFx_ISS_RV) {
 		u64 val, now;
 
-		now = kvm_arm_timer_get_reg(vcpu, KVM_REG_ARM_TIMER_CNT);
+		now = kvm_phys_timer_read();
+		if (is_hyp_ctxt(vcpu) && vcpu_el2_e2h_is_set(vcpu))
+			now -= timer_get_offset(vcpu_hvtimer(vcpu));
+		else
+			now -= timer_get_offset(vcpu_vtimer(vcpu));
+
 		val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));
 
 		if (now >= val)
arch/arm64/kvm/hyp/include/hyp/switch.h | +17 -131
···
 	__deactivate_cptr_traps_nvhe(vcpu);
 }
 
-#define reg_to_fgt_masks(reg)						\
-	({								\
-		struct fgt_masks *m;					\
-		switch(reg) {						\
-		case HFGRTR_EL2:					\
-			m = &hfgrtr_masks;				\
-			break;						\
-		case HFGWTR_EL2:					\
-			m = &hfgwtr_masks;				\
-			break;						\
-		case HFGITR_EL2:					\
-			m = &hfgitr_masks;				\
-			break;						\
-		case HDFGRTR_EL2:					\
-			m = &hdfgrtr_masks;				\
-			break;						\
-		case HDFGWTR_EL2:					\
-			m = &hdfgwtr_masks;				\
-			break;						\
-		case HAFGRTR_EL2:					\
-			m = &hafgrtr_masks;				\
-			break;						\
-		case HFGRTR2_EL2:					\
-			m = &hfgrtr2_masks;				\
-			break;						\
-		case HFGWTR2_EL2:					\
-			m = &hfgwtr2_masks;				\
-			break;						\
-		case HFGITR2_EL2:					\
-			m = &hfgitr2_masks;				\
-			break;						\
-		case HDFGRTR2_EL2:					\
-			m = &hdfgrtr2_masks;				\
-			break;						\
-		case HDFGWTR2_EL2:					\
-			m = &hdfgwtr2_masks;				\
-			break;						\
-		default:						\
-			BUILD_BUG_ON(1);				\
-		}							\
-									\
-		m;							\
-	})
-
-#define compute_clr_set(vcpu, reg, clr, set)				\
-	do {								\
-		u64 hfg = __vcpu_sys_reg(vcpu, reg);			\
-		struct fgt_masks *m = reg_to_fgt_masks(reg);		\
-		set |= hfg & m->mask;					\
-		clr |= ~hfg & m->nmask;					\
-	} while(0)
-
-#define reg_to_fgt_group_id(reg)					\
-	({								\
-		enum fgt_group_id id;					\
-		switch(reg) {						\
-		case HFGRTR_EL2:					\
-		case HFGWTR_EL2:					\
-			id = HFGRTR_GROUP;				\
-			break;						\
-		case HFGITR_EL2:					\
-			id = HFGITR_GROUP;				\
-			break;						\
-		case HDFGRTR_EL2:					\
-		case HDFGWTR_EL2:					\
-			id = HDFGRTR_GROUP;				\
-			break;						\
-		case HAFGRTR_EL2:					\
-			id = HAFGRTR_GROUP;				\
-			break;						\
-		case HFGRTR2_EL2:					\
-		case HFGWTR2_EL2:					\
-			id = HFGRTR2_GROUP;				\
-			break;						\
-		case HFGITR2_EL2:					\
-			id = HFGITR2_GROUP;				\
-			break;						\
-		case HDFGRTR2_EL2:					\
-		case HDFGWTR2_EL2:					\
-			id = HDFGRTR2_GROUP;				\
-			break;						\
-		default:						\
-			BUILD_BUG_ON(1);				\
-		}							\
-									\
-		id;							\
-	})
-
-#define compute_undef_clr_set(vcpu, kvm, reg, clr, set)			\
-	do {								\
-		u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)];	\
-		struct fgt_masks *m = reg_to_fgt_masks(reg);		\
-		set |= hfg & m->mask;					\
-		clr |= hfg & m->nmask;					\
-	} while(0)
-
-#define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set)		\
-	do {								\
-		struct fgt_masks *m = reg_to_fgt_masks(reg);		\
-		u64 c = clr, s = set;					\
-		u64 val;						\
-									\
-		ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg);	\
-		if (is_nested_ctxt(vcpu))				\
-			compute_clr_set(vcpu, reg, c, s);		\
-									\
-		compute_undef_clr_set(vcpu, kvm, reg, c, s);		\
-									\
-		val = m->nmask;						\
-		val |= s;						\
-		val &= ~c;						\
-		write_sysreg_s(val, SYS_ ## reg);			\
-	} while(0)
-
-#define update_fgt_traps(hctxt, vcpu, kvm, reg)				\
-	update_fgt_traps_cs(hctxt, vcpu, kvm, reg, 0, 0)
-
 static inline bool cpu_has_amu(void)
 {
 	u64 pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1);
···
 			ID_AA64PFR0_EL1_AMU_SHIFT);
 }
 
+#define __activate_fgt(hctxt, vcpu, reg)				\
+	do {								\
+		ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg);	\
+		write_sysreg_s(*vcpu_fgt(vcpu, reg), SYS_ ## reg);	\
+	} while (0)
+
 static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
-	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
 
 	if (!cpus_have_final_cap(ARM64_HAS_FGT))
 		return;
 
-	update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2);
-	update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0,
-			    cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ?
-			    HFGWTR_EL2_TCR_EL1_MASK : 0);
-	update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
+	__activate_fgt(hctxt, vcpu, HFGRTR_EL2);
+	__activate_fgt(hctxt, vcpu, HFGWTR_EL2);
+	__activate_fgt(hctxt, vcpu, HFGITR_EL2);
+	__activate_fgt(hctxt, vcpu, HDFGRTR_EL2);
+	__activate_fgt(hctxt, vcpu, HDFGWTR_EL2);
 
 	if (cpu_has_amu())
-		update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
+		__activate_fgt(hctxt, vcpu, HAFGRTR_EL2);
 
 	if (!cpus_have_final_cap(ARM64_HAS_FGT2))
 		return;
 
-	update_fgt_traps(hctxt, vcpu, kvm, HFGRTR2_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HFGWTR2_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HFGITR2_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR2_EL2);
-	update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR2_EL2);
+	__activate_fgt(hctxt, vcpu, HFGRTR2_EL2);
+	__activate_fgt(hctxt, vcpu, HFGWTR2_EL2);
+	__activate_fgt(hctxt, vcpu, HFGITR2_EL2);
+	__activate_fgt(hctxt, vcpu, HDFGRTR2_EL2);
+	__activate_fgt(hctxt, vcpu, HDFGWTR2_EL2);
 }
 
 #define __deactivate_fgt(htcxt, vcpu, reg)				\
arch/arm64/kvm/hyp/nvhe/pkvm.c | +1
···
 
 	/* Trust the host for non-protected vcpu features. */
 	vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2;
+	memcpy(vcpu->arch.fgt, host_vcpu->arch.fgt, sizeof(vcpu->arch.fgt));
 	return 0;
 }
 
arch/arm64/kvm/nested.c | +6 -3
···
 {
 	u64 guest_mdcr = __vcpu_sys_reg(vcpu, MDCR_EL2);
 
+	if (is_nested_ctxt(vcpu))
+		vcpu->arch.mdcr_el2 |= (guest_mdcr & NV_MDCR_GUEST_INCLUDE);
 	/*
 	 * In yet another example where FEAT_NV2 is fscking broken, accesses
 	 * to MDSCR_EL1 are redirected to the VNCR despite having an effect
 	 * at EL2. Use a big hammer to apply sanity.
+	 *
+	 * Unless of course we have FEAT_FGT, in which case we can precisely
+	 * trap MDSCR_EL1.
 	 */
-	if (is_hyp_ctxt(vcpu))
+	else if (!cpus_have_final_cap(ARM64_HAS_FGT))
 		vcpu->arch.mdcr_el2 |= MDCR_EL2_TDA;
-	else
-		vcpu->arch.mdcr_el2 |= (guest_mdcr & NV_MDCR_GUEST_INCLUDE);
 }
···
 #ifndef __ASM_KGDB_H_
 #define __ASM_KGDB_H_
 
+#include <linux/build_bug.h>
+
 #ifdef __KERNEL__
 
 #define GDB_SIZEOF_REG sizeof(unsigned long)
 
-#define DBG_MAX_REG_NUM (36)
-#define NUMREGBYTES ((DBG_MAX_REG_NUM) * GDB_SIZEOF_REG)
+#define DBG_MAX_REG_NUM 36
+#define NUMREGBYTES (DBG_MAX_REG_NUM * GDB_SIZEOF_REG)
 #define CACHE_FLUSH_IS_SAFE 1
 #define BUFMAX 2048
+static_assert(BUFMAX > NUMREGBYTES,
+	      "As per KGDB documentation, BUFMAX must be larger than NUMREGBYTES");
 #ifdef CONFIG_RISCV_ISA_C
 #define BREAK_INSTR_SIZE 2
 #else
···
 #define DBG_REG_STATUS_OFF 33
 #define DBG_REG_BADADDR_OFF 34
 #define DBG_REG_CAUSE_OFF 35
+/* NOTE: increase DBG_MAX_REG_NUM if you add more values here. */
 
 extern const char riscv_gdb_stub_feature[64];
 
arch/riscv/kernel/cpu-hotplug.c | +1
···
 
 	pr_notice("CPU%u: off\n", cpu);
 
+	clear_tasks_mm_cpumask(cpu);
 	/* Verify from the firmware if the cpu is really stopped*/
 	if (cpu_ops->cpu_is_stopped)
 		ret = cpu_ops->cpu_is_stopped(cpu);
···
 	post_kprobe_handler(p, kcb, regs);
 }
 
-static bool __kprobes arch_check_kprobe(struct kprobe *p)
+static bool __kprobes arch_check_kprobe(unsigned long addr)
 {
-	unsigned long tmp  = (unsigned long)p->addr - p->offset;
-	unsigned long addr = (unsigned long)p->addr;
+	unsigned long tmp, offset;
+
+	/* start iterating at the closest preceding symbol */
+	if (!kallsyms_lookup_size_offset(addr, NULL, &offset))
+		return false;
+
+	tmp = addr - offset;
 
 	while (tmp <= addr) {
 		if (tmp == addr)
···
 	if ((unsigned long)insn & 0x1)
 		return -EILSEQ;
 
-	if (!arch_check_kprobe(p))
+	if (!arch_check_kprobe((unsigned long)p->addr))
 		return -EILSEQ;
 
 	/* copy instruction */
arch/riscv/kernel/setup.c | +5 -2
···
 	/* Parse the ACPI tables for possible boot-time configuration */
 	acpi_boot_table_init();
 
+	if (acpi_disabled) {
 #if IS_ENABLED(CONFIG_BUILTIN_DTB)
-	unflatten_and_copy_device_tree();
+		unflatten_and_copy_device_tree();
 #else
-	unflatten_device_tree();
+		unflatten_device_tree();
 #endif
+	}
+
 	misc_mem_init();
 
 	init_resources();
arch/riscv/kernel/tests/kprobes/test-kprobes.h | +2 -2
···
 #define KPROBE_TEST_MAGIC_LOWER 0x0000babe
 #define KPROBE_TEST_MAGIC_UPPER 0xcafe0000
 
-#ifndef __ASSEMBLY__
+#ifndef __ASSEMBLER__
 
 /* array of addresses to install kprobes */
 extern void *test_kprobes_addresses[];
···
 /* array of functions that return KPROBE_TEST_MAGIC */
 extern long (*test_kprobes_functions[])(void);
 
-#endif /* __ASSEMBLY__ */
+#endif /* __ASSEMBLER__ */
 
 #endif /* TEST_KPROBES_H */
arch/x86/kernel/cpu/amd.c | +14 -2
···
 		return 0;
 
 	value = ioread32(addr);
-	iounmap(addr);
 
 	/* Value with "all bits set" is an error response and should be ignored. */
-	if (value == U32_MAX)
+	if (value == U32_MAX) {
+		iounmap(addr);
 		return 0;
+	}
+
+	/*
+	 * Clear all reason bits so they won't be retained if the next reset
+	 * does not update the register. Besides, some bits are never cleared by
+	 * hardware so it's software's responsibility to clear them.
+	 *
+	 * Writing the value back effectively clears all reason bits as they are
+	 * write-1-to-clear.
+	 */
+	iowrite32(value, addr);
+	iounmap(addr);
 
 	for (i = 0; i < ARRAY_SIZE(s5_reset_reason_txt); i++) {
 		if (!(value & BIT(i)))
arch/x86/kernel/cpu/resctrl/monitor.c | +10 -4
···
 			   u32 unused, u32 rmid, enum resctrl_event_id eventid,
 			   u64 *val, void *ignored)
 {
+	struct rdt_hw_mon_domain *hw_dom = resctrl_to_arch_mon_dom(d);
 	int cpu = cpumask_any(&d->hdr.cpu_mask);
+	struct arch_mbm_state *am;
 	u64 msr_val;
 	u32 prmid;
 	int ret;
···
 
 	prmid = logical_rmid_to_physical_rmid(cpu, rmid);
 	ret = __rmid_read_phys(prmid, eventid, &msr_val);
-	if (ret)
-		return ret;
 
-	*val = get_corrected_val(r, d, rmid, eventid, msr_val);
+	if (!ret) {
+		*val = get_corrected_val(r, d, rmid, eventid, msr_val);
+	} else if (ret == -EINVAL) {
+		am = get_arch_mbm_state(hw_dom, rmid, eventid);
+		if (am)
+			am->prev_msr = 0;
+	}
 
-	return 0;
+	return ret;
 }
 
 static int __cntr_id_read(u32 cntr_id, u64 *val)
+5-3
arch/x86/kvm/pmu.c
···108108 bool is_intel = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;109109 int min_nr_gp_ctrs = pmu_ops->MIN_NR_GP_COUNTERS;110110111111- perf_get_x86_pmu_capability(&kvm_host_pmu);112112-113111 /*114112 * Hybrid PMUs don't play nice with virtualization without careful115113 * configuration by userspace, and KVM's APIs for reporting supported116114 * vPMU features do not account for hybrid PMUs. Disable vPMU support117115 * for hybrid PMUs until KVM gains a way to let userspace opt-in.118116 */119119- if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))117117+ if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {120118 enable_pmu = false;119119+ memset(&kvm_host_pmu, 0, sizeof(kvm_host_pmu));120120+ } else {121121+ perf_get_x86_pmu_capability(&kvm_host_pmu);122122+ }121123122124 if (enable_pmu) {123125 /*
+4-3
arch/x86/kvm/x86.c
···13941139411394213942#ifdef CONFIG_KVM_GUEST_MEMFD1394313943/*1394413944- * KVM doesn't yet support mmap() on guest_memfd for VMs with private memory1394513945- * (the private vs. shared tracking needs to be moved into guest_memfd).1394413944+ * KVM doesn't yet support initializing guest_memfd memory as shared for VMs1394513945+ * with private memory (the private vs. shared tracking needs to be moved into1394613946+ * guest_memfd).1394613947 */1394713947-bool kvm_arch_supports_gmem_mmap(struct kvm *kvm)1394813948+bool kvm_arch_supports_gmem_init_shared(struct kvm *kvm)1394813949{1394913950 return !kvm_arch_has_private_mem(kvm);1395013951}
+1-1
arch/x86/mm/pat/set_memory.c
···446446 }447447448448 start = fix_addr(__cpa_addr(cpa, 0));449449- end = fix_addr(__cpa_addr(cpa, cpa->numpages));449449+ end = start + cpa->numpages * PAGE_SIZE;450450 if (cpa->force_flush_all)451451 end = TLB_FLUSH_ALL;452452
+22-2
arch/x86/mm/tlb.c
···911911 * CR3 and cpu_tlbstate.loaded_mm are not all in sync.912912 */913913 this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);914914- barrier();915914916916- /* Start receiving IPIs and then read tlb_gen (and LAM below) */915915+ /*916916+ * Make sure this CPU is set in mm_cpumask() such that we'll917917+ * receive invalidation IPIs.918918+ *919919+ * Rely on the smp_mb() implied by cpumask_set_cpu()'s atomic920920+ * operation, or explicitly provide one, such that:921921+ *922922+ * switch_mm_irqs_off() flush_tlb_mm_range()923923+ * smp_store_release(loaded_mm, SWITCHING); atomic64_inc_return(tlb_gen)924924+ * smp_mb(); // here // smp_mb() implied925925+ * atomic64_read(tlb_gen); this_cpu_read(loaded_mm);926926+ *927927+ * we properly order against flush_tlb_mm_range(), where the928928+ * loaded_mm load can happen in native_flush_tlb_multi() ->929929+ * should_flush_tlb().930930+ *931931+ * This way switch_mm() must see the new tlb_gen or932932+ * flush_tlb_mm_range() must see the new loaded_mm, or both.933933+ */917934 if (next != &init_mm && !cpumask_test_cpu(cpu, mm_cpumask(next)))918935 cpumask_set_cpu(cpu, mm_cpumask(next));936936+ else937937+ smp_mb();938938+919939 next_tlb_gen = atomic64_read(&next->context.tlb_gen);920940921941 ns = choose_new_asid(next, next_tlb_gen);
+4-9
block/blk-cgroup.c
···812812}813813/*814814 * Similar to blkg_conf_open_bdev, but additionally freezes the queue,815815- * acquires q->elevator_lock, and ensures the correct locking order816816- * between q->elevator_lock and q->rq_qos_mutex.815815+ * ensures the correct locking order between freeze queue and q->rq_qos_mutex.817816 *818817 * This function returns negative error on failure. On success it returns819818 * memflags which must be saved and later passed to blkg_conf_exit_frozen···833834 * At this point, we haven’t started protecting anything related to QoS,834835 * so we release q->rq_qos_mutex here, which was first acquired in blkg_835836 * conf_open_bdev. Later, we re-acquire q->rq_qos_mutex after freezing836836- * the queue and acquiring q->elevator_lock to maintain the correct837837- * locking order.837837+ * the queue to maintain the correct locking order.838838 */839839 mutex_unlock(&ctx->bdev->bd_queue->rq_qos_mutex);840840841841 memflags = blk_mq_freeze_queue(ctx->bdev->bd_queue);842842- mutex_lock(&ctx->bdev->bd_queue->elevator_lock);843842 mutex_lock(&ctx->bdev->bd_queue->rq_qos_mutex);844843845844 return memflags;···992995EXPORT_SYMBOL_GPL(blkg_conf_exit);993996994997/*995995- * Similar to blkg_conf_exit, but also unfreezes the queue and releases996996- * q->elevator_lock. Should be used when blkg_conf_open_bdev_frozen997997- * is used to open the bdev.998998+ * Similar to blkg_conf_exit, but also unfreezes the queue. Should be used999999+ * when blkg_conf_open_bdev_frozen is used to open the bdev.9981000 */9991001void blkg_conf_exit_frozen(struct blkg_conf_ctx *ctx, unsigned long memflags)10001002{···10011005 struct request_queue *q = ctx->bdev->bd_queue;1002100610031007 blkg_conf_exit(ctx);10041004- mutex_unlock(&q->elevator_lock);10051008 blk_mq_unfreeze_queue(q, memflags);10061009 }10071010}
+1-1
block/blk-mq-sched.c
···557557 if (blk_mq_is_shared_tags(flags)) {558558 /* Shared tags are stored at index 0 in @et->tags. */559559 q->sched_shared_tags = et->tags[0];560560- blk_mq_tag_update_sched_shared_tags(q);560560+ blk_mq_tag_update_sched_shared_tags(q, et->nr_requests);561561 }562562563563 queue_for_each_hw_ctx(q, hctx, i) {
···49414941 * tags can't grow, see blk_mq_alloc_sched_tags().49424942 */49434943 if (q->elevator)49444944- blk_mq_tag_update_sched_shared_tags(q);49444944+ blk_mq_tag_update_sched_shared_tags(q, nr);49454945 else49464946 blk_mq_tag_resize_shared_tags(set, nr);49474947 } else if (!q->elevator) {
+2-1
block/blk-mq.h
···186186void blk_mq_put_tags(struct blk_mq_tags *tags, int *tag_array, int nr_tags);187187void blk_mq_tag_resize_shared_tags(struct blk_mq_tag_set *set,188188 unsigned int size);189189-void blk_mq_tag_update_sched_shared_tags(struct request_queue *q);189189+void blk_mq_tag_update_sched_shared_tags(struct request_queue *q,190190+ unsigned int nr);190191191192void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);192193void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_tag_iter_fn *fn,
+2
drivers/accel/qaic/qaic.h
···9797 * response queue's head and tail pointer of this DBC.9898 */9999 void __iomem *dbc_base;100100+ /* Synchronizes access to Request queue's head and tail pointer */101101+ struct mutex req_lock;100102 /* Head of list where each node is a memory handle queued in request queue */101103 struct list_head xfer_list;102104 /* Synchronizes DBC readers during cleanup */
···133133{134134 return !(start_method == ACPI_TPM2_START_METHOD ||135135 start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD ||136136- start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC ||137137- start_method == ACPI_TPM2_CRB_WITH_ARM_FFA);136136+ start_method == ACPI_TPM2_COMMAND_BUFFER_WITH_ARM_SMC);138137}139138140139static bool crb_wait_for_reg_32(u32 __iomem *reg, u32 mask, u32 value,···190191 *191192 * Return: 0 always192193 */193193-static int __crb_go_idle(struct device *dev, struct crb_priv *priv)194194+static int __crb_go_idle(struct device *dev, struct crb_priv *priv, int loc)194195{195196 int rc;196197···198199 return 0;199200200201 iowrite32(CRB_CTRL_REQ_GO_IDLE, &priv->regs_t->ctrl_req);202202+203203+ if (priv->sm == ACPI_TPM2_CRB_WITH_ARM_FFA) {204204+ rc = tpm_crb_ffa_start(CRB_FFA_START_TYPE_COMMAND, loc);205205+ if (rc)206206+ return rc;207207+ }201208202209 rc = crb_try_pluton_doorbell(priv, true);203210 if (rc)···225220 struct device *dev = &chip->dev;226221 struct crb_priv *priv = dev_get_drvdata(dev);227222228228- return __crb_go_idle(dev, priv);223223+ return __crb_go_idle(dev, priv, chip->locality);229224}230225231226/**···243238 *244239 * Return: 0 on success -ETIME on timeout;245240 */246246-static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv)241241+static int __crb_cmd_ready(struct device *dev, struct crb_priv *priv, int loc)247242{248243 int rc;249244···251246 return 0;252247253248 iowrite32(CRB_CTRL_REQ_CMD_READY, &priv->regs_t->ctrl_req);249249+250250+ if (priv->sm == ACPI_TPM2_CRB_WITH_ARM_FFA) {251251+ rc = tpm_crb_ffa_start(CRB_FFA_START_TYPE_COMMAND, loc);252252+ if (rc)253253+ return rc;254254+ }254255255256 rc = crb_try_pluton_doorbell(priv, true);256257 if (rc)···278267 struct device *dev = &chip->dev;279268 struct crb_priv *priv = dev_get_drvdata(dev);280269281281- return __crb_cmd_ready(dev, priv);270270+ return __crb_cmd_ready(dev, priv, chip->locality);282271}283272284273static int 
__crb_request_locality(struct device *dev,···455444456445 /* Seems to be necessary for every command */457446 if (priv->sm == ACPI_TPM2_COMMAND_BUFFER_WITH_PLUTON)458458- __crb_cmd_ready(&chip->dev, priv);447447+ __crb_cmd_ready(&chip->dev, priv, chip->locality);459448460449 memcpy_toio(priv->cmd, buf, len);461450···683672 * PTT HW bug w/a: wake up the device to access684673 * possibly not retained registers.685674 */686686- ret = __crb_cmd_ready(dev, priv);675675+ ret = __crb_cmd_ready(dev, priv, 0);687676 if (ret)688677 goto out_relinquish_locality;689678···755744 if (!ret)756745 priv->cmd_size = cmd_size;757746758758- __crb_go_idle(dev, priv);747747+ __crb_go_idle(dev, priv, 0);759748760749out_relinquish_locality:761750
+5-1
drivers/cpufreq/amd-pstate.c
···16141614 * min_perf value across kexec reboots. If this CPU is just onlined normally after this, the16151615 * limits, epp and desired perf will get reset to the cached values in cpudata struct16161616 */16171617- return amd_pstate_update_perf(policy, perf.bios_min_perf, 0U, 0U, 0U, false);16171617+ return amd_pstate_update_perf(policy, perf.bios_min_perf,16181618+ FIELD_GET(AMD_CPPC_DES_PERF_MASK, cpudata->cppc_req_cached),16191619+ FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),16201620+ FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached),16211621+ false);16181622}1619162316201624static int amd_pstate_suspend(struct cpufreq_policy *policy)
+9-12
drivers/cpuidle/governors/menu.c
···188188 *189189 * This can deal with workloads that have long pauses interspersed190190 * with sporadic activity with a bunch of short pauses.191191+ *192192+ * However, if the number of remaining samples is too small to exclude193193+ * any more outliers, allow the deepest available idle state to be194194+ * selected because there are systems where the time spent by CPUs in195195+ * deep idle states is correlated to the maximum frequency the CPUs196196+ * can get to. On those systems, shallow idle states should be avoided197197+ * unless there is a clear indication that the given CPU is most likely198198+ * going to be woken up shortly.191199 */192192- if (divisor * 4 <= INTERVALS * 3) {193193- /*194194- * If there are sufficiently many data points still under195195- * consideration after the outliers have been eliminated,196196- * returning without a prediction would be a mistake because it197197- * is likely that the next interval will not exceed the current198198- * maximum, so return the latter in that case.199199- */200200- if (divisor >= INTERVALS / 2)201201- return max;202202-200200+ if (divisor * 4 <= INTERVALS * 3)203201 return UINT_MAX;204204- }205202206203 /* Update the thresholds for the next round. */207204 if (avg - min > max - avg)
+1-1
drivers/cxl/acpi.c
···348348 struct resource res;349349 int nid, rc;350350351351- res = DEFINE_RES(start, size, 0);351351+ res = DEFINE_RES_MEM(start, size);352352 nid = phys_to_target_node(start);353353354354 rc = hmat_get_extended_linear_cache_size(&res, nid, &cache_size);
+3
drivers/cxl/core/features.c
···371371{372372 struct cxl_feat_entry *feat;373373374374+ if (!cxlfs || !cxlfs->entries)375375+ return ERR_PTR(-EOPNOTSUPP);376376+374377 for (int i = 0; i < cxlfs->entries->num_features; i++) {375378 feat = &cxlfs->entries->ent[i];376379 if (uuid_equal(uuid, &feat->uuid))
+14-12
drivers/cxl/core/port.c
···11821182 if (rc)11831183 return ERR_PTR(rc);1184118411851185+ /*11861186+ * Setup port register if this is the first dport showed up. Having11871187+ * a dport also means that there is at least 1 active link.11881188+ */11891189+ if (port->nr_dports == 1 &&11901190+ port->component_reg_phys != CXL_RESOURCE_NONE) {11911191+ rc = cxl_port_setup_regs(port, port->component_reg_phys);11921192+ if (rc) {11931193+ xa_erase(&port->dports, (unsigned long)dport->dport_dev);11941194+ return ERR_PTR(rc);11951195+ }11961196+ port->component_reg_phys = CXL_RESOURCE_NONE;11971197+ }11981198+11851199 get_device(dport_dev);11861200 rc = devm_add_action_or_reset(host, cxl_dport_remove, dport);11871201 if (rc)···12131199 dport->link_latency = cxl_pci_get_latency(to_pci_dev(dport_dev));1214120012151201 cxl_debugfs_create_dport_dir(dport);12161216-12171217- /*12181218- * Setup port register if this is the first dport showed up. Having12191219- * a dport also means that there is at least 1 active link.12201220- */12211221- if (port->nr_dports == 1 &&12221222- port->component_reg_phys != CXL_RESOURCE_NONE) {12231223- rc = cxl_port_setup_regs(port, port->component_reg_phys);12241224- if (rc)12251225- return ERR_PTR(rc);12261226- port->component_reg_phys = CXL_RESOURCE_NONE;12271227- }1228120212291203 return dport;12301204}
+4-7
drivers/cxl/core/region.c
···839839}840840841841static bool region_res_match_cxl_range(const struct cxl_region_params *p,842842- struct range *range)842842+ const struct range *range)843843{844844 if (!p->res)845845 return false;···33983398 p = &cxlr->params;3399339934003400 guard(rwsem_read)(&cxl_rwsem.region);34013401- if (p->res && p->res->start == r->start && p->res->end == r->end)34023402- return 1;34033403-34043404- return 0;34013401+ return region_res_match_cxl_range(p, r);34053402}3406340334073404static int cxl_extended_linear_cache_resize(struct cxl_region *cxlr,···3663366636643667 if (offset < p->cache_size) {36653668 dev_err(&cxlr->dev,36663666- "Offset %#llx is within extended linear cache %pr\n",36693669+ "Offset %#llx is within extended linear cache %pa\n",36673670 offset, &p->cache_size);36683671 return -EINVAL;36693672 }3670367336713674 region_size = resource_size(p->res);36723675 if (offset >= region_size) {36733673- dev_err(&cxlr->dev, "Offset %#llx exceeds region size %pr\n",36763676+ dev_err(&cxlr->dev, "Offset %#llx exceeds region size %pa\n",36743677 offset, ®ion_size);36753678 return -EINVAL;36763679 }
···364364 if (p->uf_bo && ring->funcs->no_user_fence)365365 return -EINVAL;366366367367+ if (!p->adev->debug_enable_ce_cs &&368368+ chunk_ib->flags & AMDGPU_IB_FLAG_CE) {369369+ dev_err_ratelimited(p->adev->dev, "CE CS is blocked, use debug=0x400 to override\n");370370+ return -EINVAL;371371+ }372372+367373 if (chunk_ib->ip_type == AMDGPU_HW_IP_GFX &&368374 chunk_ib->flags & AMDGPU_IB_FLAG_PREEMPT) {369375 if (chunk_ib->flags & AMDGPU_IB_FLAG_CE)···708702 */709703 const s64 us_upper_bound = 200000;710704711711- if (!adev->mm_stats.log2_max_MBps) {705705+ if ((!adev->mm_stats.log2_max_MBps) || !ttm_resource_manager_used(&adev->mman.vram_mgr.manager)) {712706 *max_bytes = 0;713707 *max_vis_bytes = 0;714708 return;
+7
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···1882188218831883static bool amdgpu_device_aspm_support_quirk(struct amdgpu_device *adev)18841884{18851885+ /* Enabling ASPM causes random hangs on Tahiti and Oland on Zen4.18861886+ * It's unclear if this is a platform-specific or GPU-specific issue.18871887+ * Disable ASPM on SI for the time being.18881888+ */18891889+ if (adev->family == AMDGPU_FAMILY_SI)18901890+ return true;18911891+18851892#if IS_ENABLED(CONFIG_X86)18861893 struct cpuinfo_x86 *c = &cpu_data(0);18871894
···409409 return -EINVAL;410410411411 /* Clear the doorbell array before detection */412412- memset(adev->mes.hung_queue_db_array_cpu_addr, 0,412412+ memset(adev->mes.hung_queue_db_array_cpu_addr, AMDGPU_MES_INVALID_DB_OFFSET,413413 adev->mes.hung_queue_db_array_size * sizeof(u32));414414 input.queue_type = queue_type;415415 input.detect_only = detect_only;···420420 dev_err(adev->dev, "failed to detect and reset\n");421421 } else {422422 *hung_db_num = 0;423423- for (i = 0; i < adev->mes.hung_queue_db_array_size; i++) {423423+ for (i = 0; i < adev->mes.hung_queue_hqd_info_offset; i++) {424424 if (db_array[i] != AMDGPU_MES_INVALID_DB_OFFSET) {425425 hung_db_array[i] = db_array[i];426426 *hung_db_num += 1;427427 }428428 }429429+430430+ /*431431+ * TODO: return HQD info for MES scheduled user compute queue reset cases432432+ * stored in hung_db_array hqd info offset to full array size433433+ */429434 }430435431436 return r;···691686bool amdgpu_mes_suspend_resume_all_supported(struct amdgpu_device *adev)692687{693688 uint32_t mes_rev = adev->mes.sched_version & AMDGPU_MES_VERSION_MASK;694694- bool is_supported = false;695689696696- if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&697697- amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(12, 0, 0) &&698698- mes_rev >= 0x63)699699- is_supported = true;700700-701701- return is_supported;690690+ return ((amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&691691+ amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(12, 0, 0) &&692692+ mes_rev >= 0x63) ||693693+ amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(12, 0, 0));702694}703695704696/* Fix me -- node_id is used to identify the correct MES instances in the future */
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
···149149 void *resource_1_addr[AMDGPU_MAX_MES_PIPES];150150151151 int hung_queue_db_array_size;152152+ int hung_queue_hqd_info_offset;152153 struct amdgpu_bo *hung_queue_db_array_gpu_obj;153154 uint64_t hung_queue_db_array_gpu_addr;154155 void *hung_queue_db_array_cpu_addr;
+1-1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
···811811 if (r)812812 return r;813813814814- /* signal the fence of the bad job */814814+ /* signal the guilty fence and set an error on all fences from the context */815815 if (guilty_fence)816816 amdgpu_fence_driver_guilty_force_completion(guilty_fence);817817 /* Re-emit the non-guilty commands */
···12091209 pr_debug_ratelimited("Evicting process pid %d queues\n",12101210 pdd->process->lead_thread->pid);1211121112121212+ if (dqm->dev->kfd->shared_resources.enable_mes) {12131213+ pdd->last_evict_timestamp = get_jiffies_64();12141214+ retval = suspend_all_queues_mes(dqm);12151215+ if (retval) {12161216+ dev_err(dev, "Suspending all queues failed");12171217+ goto out;12181218+ }12191219+ }12201220+12121221 /* Mark all queues as evicted. Deactivate all active queues on12131222 * the qpd.12141223 */···12301221 decrement_queue_count(dqm, qpd, q);1231122212321223 if (dqm->dev->kfd->shared_resources.enable_mes) {12331233- int err;12341234-12351235- err = remove_queue_mes(dqm, q, qpd);12361236- if (err) {12241224+ retval = remove_queue_mes(dqm, q, qpd);12251225+ if (retval) {12371226 dev_err(dev, "Failed to evict queue %d\n",12381227 q->properties.queue_id);12391239- retval = err;12281228+ goto out;12401229 }12411230 }12421231 }12431243- pdd->last_evict_timestamp = get_jiffies_64();12441244- if (!dqm->dev->kfd->shared_resources.enable_mes)12321232+12331233+ if (!dqm->dev->kfd->shared_resources.enable_mes) {12341234+ pdd->last_evict_timestamp = get_jiffies_64();12451235 retval = execute_queues_cpsch(dqm,12461236 qpd->is_debug ?12471237 KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES :12481238 KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0,12491239 USE_DEFAULT_GRACE_PERIOD);12401240+ } else {12411241+ retval = resume_all_queues_mes(dqm);12421242+ if (retval)12431243+ dev_err(dev, "Resuming all queues failed");12441244+ }1250124512511246out:12521247 dqm_unlock(dqm);···31113098 return ret;31123099}3113310031143114-static int kfd_dqm_evict_pasid_mes(struct device_queue_manager *dqm,31153115- struct qcm_process_device *qpd)31163116-{31173117- struct device *dev = dqm->dev->adev->dev;31183118- int ret = 0;31193119-31203120- /* Check if process is already evicted */31213121- dqm_lock(dqm);31223122- if (qpd->evicted) {31233123- /* Increment the evicted count to make sure the31243124- * 
process stays evicted before its terminated.31253125- */31263126- qpd->evicted++;31273127- dqm_unlock(dqm);31283128- goto out;31293129- }31303130- dqm_unlock(dqm);31313131-31323132- ret = suspend_all_queues_mes(dqm);31333133- if (ret) {31343134- dev_err(dev, "Suspending all queues failed");31353135- goto out;31363136- }31373137-31383138- ret = dqm->ops.evict_process_queues(dqm, qpd);31393139- if (ret) {31403140- dev_err(dev, "Evicting process queues failed");31413141- goto out;31423142- }31433143-31443144- ret = resume_all_queues_mes(dqm);31453145- if (ret)31463146- dev_err(dev, "Resuming all queues failed");31473147-31483148-out:31493149- return ret;31503150-}31513151-31523101int kfd_evict_process_device(struct kfd_process_device *pdd)31533102{31543103 struct device_queue_manager *dqm;31553104 struct kfd_process *p;31563156- int ret = 0;3157310531583106 p = pdd->process;31593107 dqm = pdd->dev->dqm;3160310831613109 WARN(debug_evictions, "Evicting pid %d", p->lead_thread->pid);3162311031633163- if (dqm->dev->kfd->shared_resources.enable_mes)31643164- ret = kfd_dqm_evict_pasid_mes(dqm, &pdd->qpd);31653165- else31663166- ret = dqm->ops.evict_process_queues(dqm, &pdd->qpd);31673167-31683168- return ret;31113111+ return dqm->ops.evict_process_queues(dqm, &pdd->qpd);31693112}3170311331713114int reserve_debug_trap_vmid(struct device_queue_manager *dqm,
+4-8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
···2085208520862086 dc_hardware_init(adev->dm.dc);2087208720882088- adev->dm.restore_backlight = true;20892089-20902088 adev->dm.hpd_rx_offload_wq = hpd_rx_irq_create_workqueue(adev);20912089 if (!adev->dm.hpd_rx_offload_wq) {20922090 drm_err(adev_to_drm(adev), "failed to create hpd rx offload workqueue.\n");···34403442 dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);3441344334423444 dc_resume(dm->dc);34433443- adev->dm.restore_backlight = true;3444344534453446 amdgpu_dm_irq_resume_early(adev);34463447···99669969 bool mode_set_reset_required = false;99679970 u32 i;99689971 struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count};99729972+ bool set_backlight_level = false;9969997399709974 /* Disable writeback */99719975 for_each_old_connector_in_state(state, connector, old_con_state, i) {···1008610088 acrtc->hw_mode = new_crtc_state->mode;1008710089 crtc->hwmode = new_crtc_state->mode;1008810090 mode_set_reset_required = true;1009110091+ set_backlight_level = true;1008910092 } else if (modereset_required(new_crtc_state)) {1009010093 drm_dbg_atomic(dev,1009110094 "Atomic commit: RESET. crtc id %d:[%p]\n",···1014310144 * to fix a flicker issue.1014410145 * It will cause the dm->actual_brightness is not the current panel brightness1014510146 * level. (the dm->brightness is the correct panel level)1014610146- * So we set the backlight level with dm->brightness value after initial1014710147- * set mode. Use restore_backlight flag to avoid setting backlight level1014810148- * for every subsequent mode set.1014710147+ * So we set the backlight level with dm->brightness value after set mode1014910148 */1015010150- if (dm->restore_backlight) {1014910149+ if (set_backlight_level) {1015110150 for (i = 0; i < dm->num_of_edps; i++) {1015210151 if (dm->backlight_dev[i])1015310152 amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);1015410153 }1015510155- dm->restore_backlight = false;1015610154 }1015710155}1015810156
-7
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
···631631 u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];632632633633 /**634634- * @restore_backlight:635635- *636636- * Flag to indicate whether to restore backlight after modeset.637637- */638638- bool restore_backlight;639639-640640- /**641634 * @aux_hpd_discon_quirk:642635 *643636 * quirk for hpd discon while aux is on-going.
+5
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
···35003500 * for these GPUs to calculate bandwidth requirements.35013501 */35023502 if (high_pixelclock_count) {35033503+ /* Work around flickering lines at the bottom edge35043504+ * of the screen when using a single 4K 60Hz monitor.35053505+ */35063506+ disable_mclk_switching = true;35073507+35033508 /* On Oland, we observe some flickering when two 4K 60Hz35043509 * displays are connected, possibly because voltage is too low.35053510 * Raise the voltage by requiring a higher SCLK.
···121121 }122122123123 /* Test for known Chip ID. */124124- if (chipid[0] != REG_CHIPID0_VALUE || chipid[1] != REG_CHIPID1_VALUE ||125125- chipid[2] != REG_CHIPID2_VALUE) {124124+ if (chipid[0] != REG_CHIPID0_VALUE || chipid[1] != REG_CHIPID1_VALUE) {126125 dev_err(ctx->dev, "Unknown Chip ID: 0x%02x 0x%02x 0x%02x\n",127126 chipid[0], chipid[1], chipid[2]);128127 return -EINVAL;
+1-1
drivers/gpu/drm/drm_draw.c
···127127128128void drm_draw_fill24(struct iosys_map *dmap, unsigned int dpitch,129129 unsigned int height, unsigned int width,130130- u16 color)130130+ u32 color)131131{132132 unsigned int y, x;133133
+1-1
drivers/gpu/drm/drm_draw_internal.h
···47474848void drm_draw_fill24(struct iosys_map *dmap, unsigned int dpitch,4949 unsigned int height, unsigned int width,5050- u16 color);5050+ u32 color);51515252void drm_draw_fill32(struct iosys_map *dmap, unsigned int dpitch,5353 unsigned int height, unsigned int width,
+20-18
drivers/gpu/drm/i915/display/intel_fb.c
···21132113 if (intel_fb_uses_dpt(fb))21142114 intel_dpt_destroy(intel_fb->dpt_vm);2115211521162116- intel_frontbuffer_put(intel_fb->frontbuffer);21172117-21182116 intel_fb_bo_framebuffer_fini(intel_fb_bo(fb));21172117+21182118+ intel_frontbuffer_put(intel_fb->frontbuffer);2119211921202120 kfree(intel_fb);21212121}···22182218 int ret = -EINVAL;22192219 int i;2220222022212221+ /*22222222+ * intel_frontbuffer_get() must be done before22232223+ * intel_fb_bo_framebuffer_init() to avoid set_tiling vs. addfb race.22242224+ */22252225+ intel_fb->frontbuffer = intel_frontbuffer_get(obj);22262226+ if (!intel_fb->frontbuffer)22272227+ return -ENOMEM;22282228+22212229 ret = intel_fb_bo_framebuffer_init(fb, obj, mode_cmd);22222230 if (ret)22232223- return ret;22242224-22252225- intel_fb->frontbuffer = intel_frontbuffer_get(obj);22262226- if (!intel_fb->frontbuffer) {22272227- ret = -ENOMEM;22282228- goto err;22292229- }22312231+ goto err_frontbuffer_put;2230223222312233 ret = -EINVAL;22322234 if (!drm_any_plane_has_format(display->drm,···22372235 drm_dbg_kms(display->drm,22382236 "unsupported pixel format %p4cc / modifier 0x%llx\n",22392237 &mode_cmd->pixel_format, mode_cmd->modifier[0]);22402240- goto err_frontbuffer_put;22382238+ goto err_bo_framebuffer_fini;22412239 }2242224022432241 max_stride = intel_fb_max_stride(display, mode_cmd->pixel_format,···22482246 mode_cmd->modifier[0] != DRM_FORMAT_MOD_LINEAR ?22492247 "tiled" : "linear",22502248 mode_cmd->pitches[0], max_stride);22512251- goto err_frontbuffer_put;22492249+ goto err_bo_framebuffer_fini;22522250 }2253225122542252 /* FIXME need to adjust LINOFF/TILEOFF accordingly. 
*/···22562254 drm_dbg_kms(display->drm,22572255 "plane 0 offset (0x%08x) must be 0\n",22582256 mode_cmd->offsets[0]);22592259- goto err_frontbuffer_put;22572257+ goto err_bo_framebuffer_fini;22602258 }2261225922622260 drm_helper_mode_fill_fb_struct(display->drm, fb, info, mode_cmd);···2266226422672265 if (mode_cmd->handles[i] != mode_cmd->handles[0]) {22682266 drm_dbg_kms(display->drm, "bad plane %d handle\n", i);22692269- goto err_frontbuffer_put;22672267+ goto err_bo_framebuffer_fini;22702268 }2271226922722270 stride_alignment = intel_fb_stride_alignment(fb, i);···22742272 drm_dbg_kms(display->drm,22752273 "plane %d pitch (%d) must be at least %u byte aligned\n",22762274 i, fb->pitches[i], stride_alignment);22772277- goto err_frontbuffer_put;22752275+ goto err_bo_framebuffer_fini;22782276 }2279227722802278 if (intel_fb_is_gen12_ccs_aux_plane(fb, i)) {···22842282 drm_dbg_kms(display->drm,22852283 "ccs aux plane %d pitch (%d) must be %d\n",22862284 i, fb->pitches[i], ccs_aux_stride);22872287- goto err_frontbuffer_put;22852285+ goto err_bo_framebuffer_fini;22882286 }22892287 }22902288···2293229122942292 ret = intel_fill_fb_info(display, intel_fb);22952293 if (ret)22962296- goto err_frontbuffer_put;22942294+ goto err_bo_framebuffer_fini;2297229522982296 if (intel_fb_uses_dpt(fb)) {22992297 struct i915_address_space *vm;···23192317err_free_dpt:23202318 if (intel_fb_uses_dpt(fb))23212319 intel_dpt_destroy(intel_fb->dpt_vm);23202320+err_bo_framebuffer_fini:23212321+ intel_fb_bo_framebuffer_fini(obj);23222322err_frontbuffer_put:23232323 intel_frontbuffer_put(intel_fb->frontbuffer);23242324-err:23252325- intel_fb_bo_framebuffer_fini(obj);23262324 return ret;23272325}23282326
+9-1
drivers/gpu/drm/i915/display/intel_frontbuffer.c
···270270 spin_unlock(&display->fb_tracking.lock);271271272272 i915_active_fini(&front->write);273273+274274+ drm_gem_object_put(obj);273275 kfree_rcu(front, rcu);274276}275277···289287 if (!front)290288 return NULL;291289290290+ drm_gem_object_get(obj);291291+292292 front->obj = obj;293293 kref_init(&front->ref);294294 atomic_set(&front->bits, 0);···303299 spin_lock(&display->fb_tracking.lock);304300 cur = intel_bo_set_frontbuffer(obj, front);305301 spin_unlock(&display->fb_tracking.lock);306306- if (cur != front)302302+303303+ if (cur != front) {304304+ drm_gem_object_put(obj);307305 kfree(front);306306+ }307307+308308 return cur;309309}310310
+10-2
drivers/gpu/drm/i915/display/intel_psr.c
···34023402 struct intel_display *display = to_intel_display(intel_dp);3403340334043404 if (DISPLAY_VER(display) < 20 && intel_dp->psr.psr2_sel_fetch_enabled) {34053405+ /* Selective fetch prior LNL */34053406 if (intel_dp->psr.psr2_sel_fetch_cff_enabled) {34063407 /* can we turn CFF off? */34073408 if (intel_dp->psr.busy_frontbuffer_bits == 0)···34213420 intel_psr_configure_full_frame_update(intel_dp);3422342134233422 intel_psr_force_update(intel_dp);34233423+ } else if (!intel_dp->psr.psr2_sel_fetch_enabled) {34243424+ /*34253425+ * PSR1 on all platforms34263426+ * PSR2 HW tracking34273427+ * Panel Replay Full frame update34283428+ */34293429+ intel_psr_force_update(intel_dp);34243430 } else {34313431+ /* Selective update LNL onwards */34253432 intel_psr_exit(intel_dp);34263433 }3427343434283428- if ((!intel_dp->psr.psr2_sel_fetch_enabled || DISPLAY_VER(display) >= 20) &&34293429- !intel_dp->psr.busy_frontbuffer_bits)34353435+ if (!intel_dp->psr.active && !intel_dp->psr.busy_frontbuffer_bits)34303436 queue_work(display->wq.unordered, &intel_dp->psr.work);34313437}34323438
···89899090 if (!front) {9191 RCU_INIT_POINTER(obj->frontbuffer, NULL);9292- drm_gem_object_put(intel_bo_to_drm_bo(obj));9392 } else if (rcu_access_pointer(obj->frontbuffer)) {9493 cur = rcu_dereference_protected(obj->frontbuffer, true);9594 kref_get(&cur->ref);9695 } else {9797- drm_gem_object_get(intel_bo_to_drm_bo(obj));9896 rcu_assign_pointer(obj->frontbuffer, front);9997 }10098
+8-1
drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
···1325132513261326static void ct_try_receive_message(struct intel_guc_ct *ct)13271327{13281328+ struct intel_guc *guc = ct_to_guc(ct);13281329 int ret;1329133013301330- if (GEM_WARN_ON(!ct->enabled))13311331+ if (!ct->enabled) {13321332+ GEM_WARN_ON(!guc_to_gt(guc)->uc.reset_in_progress);13331333+ return;13341334+ }13351335+13361336+ /* When interrupts are disabled, message handling is not expected */13371337+ if (!guc->interrupts.enabled)13311338 return;1332133913331340 ret = ct_receive(ct);
···965965 dma_resv_assert_held(resv);966966967967 dma_resv_for_each_fence(&cursor, resv, usage, fence) {968968- /* Make sure to grab an additional ref on the added fence */969969- dma_fence_get(fence);970970- ret = drm_sched_job_add_dependency(job, fence);971971- if (ret) {972972- dma_fence_put(fence);968968+ /*969969+ * As drm_sched_job_add_dependency always consumes the fence970970+ * reference (even when it fails), and dma_resv_for_each_fence971971+ * is not obtaining one, we need to grab one before calling.972972+ */973973+ ret = drm_sched_job_add_dependency(job, dma_fence_get(fence));974974+ if (ret)973975 return ret;974974- }975976 }976977 return 0;977978}
···
 
 /**
  * xe_pci_fake_data_gen_params - Generate struct xe_pci_fake_data parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_graphics_ip_gen_param - Generate graphics struct xe_ip parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_media_ip_gen_param - Generate media struct xe_ip parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_id_gen_param - Generate struct pci_device_id parameters
+ * @test: test context object
  * @prev: the pointer to the previous parameter to iterate from or NULL
  * @desc: output buffer with minimum size of KUNIT_PARAM_DESC_SIZE
  *
···
 
 /**
  * xe_pci_live_device_gen_param - Helper to iterate Xe devices as KUnit parameters
+ * @test: test context object
  * @prev: the previously returned value, or NULL for the first iteration
  * @desc: the buffer for a parameter name
  *
drivers/gpu/drm/xe/xe_bo_evict.c | -8
···
 
 static int xe_bo_restore_and_map_ggtt(struct xe_bo *bo)
 {
-	struct xe_device *xe = xe_bo_device(bo);
 	int ret;
 
 	ret = xe_bo_restore_pinned(bo);
···
 			xe_ggtt_map_bo_unlocked(tile->mem.ggtt, bo);
 		}
 	}
-
-	/*
-	 * We expect validate to trigger a move VRAM and our move code
-	 * should setup the iosys map.
-	 */
-	xe_assert(xe, !(bo->flags & XE_BO_FLAG_PINNED_LATE_RESTORE) ||
-		  !iosys_map_is_null(&bo->vmap));
 
 	return 0;
 }
···
 	if (xe_gt_is_main_type(gt))
 		gtidle->powergate_enable |= RENDER_POWERGATE_ENABLE;
 
+	if (MEDIA_VERx100(xe) >= 1100 && MEDIA_VERx100(xe) < 1255)
+		gtidle->powergate_enable |= MEDIA_SAMPLERS_POWERGATE_ENABLE;
+
 	if (xe->info.platform != XE_DG1) {
 		for (i = XE_HW_ENGINE_VCS0, j = 0; i <= XE_HW_ENGINE_VCS7; ++i, ++j) {
 			if ((gt->info.engine_mask & BIT(i)))
···
 		drm_printf(p, "Media Slice%d Power Gate Status: %s\n", n,
 			   str_up_down(pg_status & media_slices[n].status_bit));
 	}
+
+	if (MEDIA_VERx100(xe) >= 1100 && MEDIA_VERx100(xe) < 1255)
+		drm_printf(p, "Media Samplers Power Gating Enabled: %s\n",
+			   str_yes_no(pg_enabled & MEDIA_SAMPLERS_POWERGATE_ENABLE));
+
 	return 0;
 }
drivers/gpu/drm/xe/xe_guc_submit.c | +12 -1
···
 #include "xe_ring_ops_types.h"
 #include "xe_sched_job.h"
 #include "xe_trace.h"
+#include "xe_uc_fw.h"
 #include "xe_vm.h"
 
 static struct xe_guc *
···
 	xe_gt_assert(guc_to_gt(guc), !(q->flags & EXEC_QUEUE_FLAG_PERMANENT));
 	trace_xe_exec_queue_cleanup_entity(q);
 
-	if (exec_queue_registered(q))
+	/*
+	 * Expected state transitions for cleanup:
+	 * - If the exec queue is registered and GuC firmware is running, we must first
+	 *   disable scheduling and deregister the queue to ensure proper teardown and
+	 *   resource release in the GuC, then destroy the exec queue on driver side.
+	 * - If the GuC is already stopped (e.g., during driver unload or GPU reset),
+	 *   we cannot expect a response for the deregister request. In this case,
+	 *   it is safe to directly destroy the exec queue on driver side, as the GuC
+	 *   will not process further requests and all resources must be cleaned up locally.
+	 */
+	if (exec_queue_registered(q) && xe_uc_fw_is_running(&guc->fw))
 		disable_scheduling_deregister(guc, q);
 	else
 		__guc_exec_queue_destroy(guc, q);
drivers/gpu/drm/xe/xe_migrate.c | +4 -2
···
 
 	err = xe_migrate_lock_prepare_vm(tile, m, vm);
 	if (err)
-		return err;
+		goto err_out;
 
 	if (xe->info.has_usm) {
 		struct xe_hw_engine *hwe = xe_gt_hw_engine(primary_gt,
···
 		if (current_bytes & ~PAGE_MASK) {
 			int pitch = 4;
 
-			current_bytes = min_t(int, current_bytes, S16_MAX * pitch);
+			current_bytes = min_t(int, current_bytes,
+					      round_down(S16_MAX * pitch,
+							 XE_CACHELINE_BYTES));
 		}
 
 		__fence = xe_migrate_vram(m, current_bytes,
drivers/gpu/drm/xe/xe_pci.c | +2
···
 	if (err)
 		return err;
 
+	xe_vram_resize_bar(xe);
+
 	err = xe_device_probe_early(xe);
 	/*
 	 * In Boot Survivability mode, no drm card is exposed and driver
drivers/gpu/drm/xe/xe_svm.c | +15 -2
···
 	if (err)
 		return err;
 
+	dpagemap = xe_vma_resolve_pagemap(vma, tile);
+	if (!dpagemap && !ctx.devmem_only)
+		ctx.device_private_page_owner = NULL;
 	range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
 
 	if (IS_ERR(range))
···
 
 	range_debug(range, "PAGE FAULT");
 
-	dpagemap = xe_vma_resolve_pagemap(vma, tile);
 	if (--migrate_try_count >= 0 &&
 	    xe_svm_range_needs_migrate_to_vram(range, vma, !!dpagemap || ctx.devmem_only)) {
 		ktime_t migrate_start = xe_svm_stats_ktime_get();
···
 			drm_dbg(&vm->xe->drm,
 				"VRAM allocation failed, falling back to retrying fault, asid=%u, errno=%pe\n",
 				vm->usm.asid, ERR_PTR(err));
-			goto retry;
+
+			/*
+			 * In the devmem-only case, mixed mappings may
+			 * be found. The get_pages function will fix
+			 * these up to a single location, allowing the
+			 * page fault handler to make forward progress.
+			 */
+			if (ctx.devmem_only)
+				goto get_pages;
+			else
+				goto retry;
 		} else {
 			drm_err(&vm->xe->drm,
 				"VRAM allocation failed, retry count exceeded, asid=%u, errno=%pe\n",
···
 		}
 	}
 
+get_pages:
 	get_pages_start = xe_svm_stats_ktime_get();
 
 	range_debug(range, "GET PAGES");
drivers/gpu/drm/xe/xe_vm.c | +23 -9
···
 }
 
 static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
-				 bool validate)
+				 bool res_evict, bool validate)
 {
 	struct xe_bo *bo = xe_vma_bo(vma);
 	struct xe_vm *vm = xe_vma_vm(vma);
···
 		err = drm_exec_lock_obj(exec, &bo->ttm.base);
 		if (!err && validate)
 			err = xe_bo_validate(bo, vm,
-					     !xe_vm_in_preempt_fence_mode(vm), exec);
+					     !xe_vm_in_preempt_fence_mode(vm) &&
+					     res_evict, exec);
 	}
 
 	return err;
···
 }
 
 static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
-			    struct xe_vma_op *op)
+			    struct xe_vma_ops *vops, struct xe_vma_op *op)
 {
 	int err = 0;
+	bool res_evict;
+
+	/*
+	 * We only allow evicting a BO within the VM if it is not part of an
+	 * array of binds, as an array of binds can evict another BO within the
+	 * bind.
+	 */
+	res_evict = !(vops->flags & XE_VMA_OPS_ARRAY_OF_BINDS);
 
 	switch (op->base.op) {
 	case DRM_GPUVA_OP_MAP:
 		if (!op->map.invalidate_on_bind)
 			err = vma_lock_and_validate(exec, op->map.vma,
+						    res_evict,
 						    !xe_vm_in_fault_mode(vm) ||
 						    op->map.immediate);
 		break;
···
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.remap.unmap->va),
-					    false);
+					    res_evict, false);
 		if (!err && op->remap.prev)
-			err = vma_lock_and_validate(exec, op->remap.prev, true);
+			err = vma_lock_and_validate(exec, op->remap.prev,
+						    res_evict, true);
 		if (!err && op->remap.next)
-			err = vma_lock_and_validate(exec, op->remap.next, true);
+			err = vma_lock_and_validate(exec, op->remap.next,
+						    res_evict, true);
 		break;
 	case DRM_GPUVA_OP_UNMAP:
 		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
···
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.unmap.va),
-					    false);
+					    res_evict, false);
 		break;
 	case DRM_GPUVA_OP_PREFETCH:
 	{
···
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.prefetch.va),
-					    false);
+					    res_evict, false);
 		if (!err && !xe_vma_has_no_bo(vma))
 			err = xe_bo_migrate(xe_vma_bo(vma),
 					    region_to_mem_type[region],
···
 		return err;
 
 	list_for_each_entry(op, &vops->list, link) {
-		err = op_lock_and_prep(exec, vm, op);
+		err = op_lock_and_prep(exec, vm, vops, op);
 		if (err)
 			return err;
 	}
···
 	}
 
 	xe_vma_ops_init(&vops, vm, q, syncs, num_syncs);
+	if (args->num_binds > 1)
+		vops.flags |= XE_VMA_OPS_ARRAY_OF_BINDS;
 	for (i = 0; i < args->num_binds; ++i) {
 		u64 range = bind_ops[i].range;
 		u64 addr = bind_ops[i].addr;
drivers/gpu/drm/xe/xe_vm_types.h | +1
···
 	/** @flag: signify the properties within xe_vma_ops*/
 #define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH BIT(0)
 #define XE_VMA_OPS_FLAG_MADVISE          BIT(1)
+#define XE_VMA_OPS_ARRAY_OF_BINDS        BIT(2)
 	u32 flags;
 #ifdef TEST_VM_OPS_ERROR
 	/** @inject_error: inject error to test error handling */
drivers/gpu/drm/xe/xe_vram.c | +26 -8
···
 
 #define BAR_SIZE_SHIFT 20
 
-static void
-_resize_bar(struct xe_device *xe, int resno, resource_size_t size)
+/*
+ * Release all the BARs that could influence/block LMEMBAR resizing, i.e.
+ * assigned IORESOURCE_MEM_64 BARs
+ */
+static void release_bars(struct pci_dev *pdev)
+{
+	struct resource *res;
+	int i;
+
+	pci_dev_for_each_resource(pdev, res, i) {
+		/* Resource already un-assigned, do not reset it */
+		if (!res->parent)
+			continue;
+
+		/* No need to release unrelated BARs */
+		if (!(res->flags & IORESOURCE_MEM_64))
+			continue;
+
+		pci_release_resource(pdev, i);
+	}
+}
+
+static void resize_bar(struct xe_device *xe, int resno, resource_size_t size)
 {
 	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
 	int bar_size = pci_rebar_bytes_to_size(size);
 	int ret;
 
-	if (pci_resource_len(pdev, resno))
-		pci_release_resource(pdev, resno);
+	release_bars(pdev);
 
 	ret = pci_resize_resource(pdev, resno, bar_size);
 	if (ret) {
···
  * if force_vram_bar_size is set, attempt to set to the requested size
  * else set to maximum possible size
  */
-static void resize_vram_bar(struct xe_device *xe)
+void xe_vram_resize_bar(struct xe_device *xe)
 {
 	int force_vram_bar_size = xe_modparam.force_vram_bar_size;
 	struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
···
 	pci_read_config_dword(pdev, PCI_COMMAND, &pci_cmd);
 	pci_write_config_dword(pdev, PCI_COMMAND, pci_cmd & ~PCI_COMMAND_MEMORY);
 
-	_resize_bar(xe, LMEM_BAR, rebar_size);
+	resize_bar(xe, LMEM_BAR, rebar_size);
 
 	pci_assign_unassigned_bus_resources(pdev->bus);
 	pci_write_config_dword(pdev, PCI_COMMAND, pci_cmd);
···
 		drm_err(&xe->drm, "pci resource is not valid\n");
 		return -ENXIO;
 	}
-
-	resize_vram_bar(xe);
 
 	lmem_bar->io_start = pci_resource_start(pdev, LMEM_BAR);
 	lmem_bar->io_size = pci_resource_len(pdev, LMEM_BAR);
···
 	  If unsure, say Y.
 
 config HID_HAPTIC
-	tristate "Haptic touchpad support"
+	bool "Haptic touchpad support"
 	default n
 	help
 	  Support for touchpads with force sensors and haptic actuators instead of a
drivers/hid/hid-cp2112.c | +24 -3
···
 		count = cp2112_write_read_req(buf, addr, read_length,
 					      command, NULL, 0);
 	} else {
-		count = cp2112_write_req(buf, addr, command,
+		/* Copy starts from data->block[1] so the length can
+		 * be at max I2C_SMBUS_BLOCK_MAX + 1
+		 */
+		if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1)
+			count = -EINVAL;
+		else
+			count = cp2112_write_req(buf, addr, command,
 					 data->block + 1,
 					 data->block[0]);
 	}
···
 					      I2C_SMBUS_BLOCK_MAX,
 					      command, NULL, 0);
 	} else {
-		count = cp2112_write_req(buf, addr, command,
+		/* data_length here is data->block[0] + 1
+		 * so make sure that the data->block[0] is
+		 * less than or equal to I2C_SMBUS_BLOCK_MAX + 1
+		 */
+		if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1)
+			count = -EINVAL;
+		else
+			count = cp2112_write_req(buf, addr, command,
 					 data->block,
 					 data->block[0] + 1);
 	}
···
 		size = I2C_SMBUS_BLOCK_DATA;
 		read_write = I2C_SMBUS_READ;
 
-		count = cp2112_write_read_req(buf, addr, I2C_SMBUS_BLOCK_MAX,
+		/* data_length is data->block[0] + 1, so
+		 * data->block[0] should be less than or
+		 * equal to I2C_SMBUS_BLOCK_MAX + 1
+		 */
+		if (data->block[0] > I2C_SMBUS_BLOCK_MAX + 1)
+			count = -EINVAL;
+		else
+			count = cp2112_write_read_req(buf, addr, I2C_SMBUS_BLOCK_MAX,
 					      command, data->block,
 					      data->block[0] + 1);
 		break;
···
 {
 	unsigned long status, flags;
 	struct vmballoon *b;
-	int ret;
+	int ret = 0;
 
 	b = container_of(b_dev_info, struct vmballoon, b_dev_info);
 
···
 		 * A failure happened. While we can deflate the page we just
		 * inflated, this deflation can also encounter an error. Instead
		 * we will decrease the size of the balloon to reflect the
-		 * change and report failure.
+		 * change.
 		 */
 		atomic64_dec(&b->size);
-		ret = -EBUSY;
 	} else {
 		/*
 		 * Success. Take a reference for the page, and we will add it to
 		 * the list after acquiring the lock.
 		 */
 		get_page(newpage);
-		ret = 0;
 	}
 
 	/* Update the balloon list under the @pages_lock */
···
 	 * If we succeed just insert it to the list and update the statistics
 	 * under the lock.
 	 */
-	if (!ret) {
+	if (status == VMW_BALLOON_SUCCESS) {
 		balloon_page_insert(&b->b_dev_info, newpage);
 		__count_vm_event(BALLOON_MIGRATE);
 	}
drivers/mmc/core/block.c | -42
···
 #define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)
 #define MMC_EXTRACT_VALUE_FROM_ARG(x) ((x & 0x0000FF00) >> 8)
 
-/**
- * struct rpmb_frame - rpmb frame as defined by eMMC 5.1 (JESD84-B51)
- *
- * @stuff        : stuff bytes
- * @key_mac      : The authentication key or the message authentication
- *                 code (MAC) depending on the request/response type.
- *                 The MAC will be delivered in the last (or the only)
- *                 block of data.
- * @data         : Data to be written or read by signed access.
- * @nonce        : Random number generated by the host for the requests
- *                 and copied to the response by the RPMB engine.
- * @write_counter: Counter value for the total amount of the successful
- *                 authenticated data write requests made by the host.
- * @addr         : Address of the data to be programmed to or read
- *                 from the RPMB. Address is the serial number of
- *                 the accessed block (half sector 256B).
- * @block_count  : Number of blocks (half sectors, 256B) requested to be
- *                 read/programmed.
- * @result       : Includes information about the status of the write counter
- *                 (valid, expired) and result of the access made to the RPMB.
- * @req_resp     : Defines the type of request and response to/from the memory.
- *
- * The stuff bytes and big-endian properties are modeled to fit to the spec.
- */
-struct rpmb_frame {
-	u8 stuff[196];
-	u8 key_mac[32];
-	u8 data[256];
-	u8 nonce[16];
-	__be32 write_counter;
-	__be16 addr;
-	__be16 block_count;
-	__be16 result;
-	__be16 req_resp;
-} __packed;
-
-#define RPMB_PROGRAM_KEY       0x1	/* Program RPMB Authentication Key */
-#define RPMB_GET_WRITE_COUNTER 0x2	/* Read RPMB write counter */
-#define RPMB_WRITE_DATA        0x3	/* Write data to RPMB partition */
-#define RPMB_READ_DATA         0x4	/* Read data from RPMB partition */
-#define RPMB_RESULT_READ       0x5	/* Read result request (Internal) */
-
 #define RPMB_FRAME_SIZE sizeof(struct rpmb_frame)
 #define CHECK_SIZE_NEQ(val) ((val) != sizeof(struct rpmb_frame))
 #define CHECK_SIZE_ALIGNED(val) IS_ALIGNED((val), sizeof(struct rpmb_frame))
drivers/net/bonding/bond_main.c | +23 -24
···
 		unblock_netpoll_tx();
 	}
 
-	if (bond_mode_can_use_xmit_hash(bond))
+	/* broadcast mode uses the all_slaves to loop through slaves. */
+	if (bond_mode_can_use_xmit_hash(bond) ||
+	    BOND_MODE(bond) == BOND_MODE_BROADCAST)
 		bond_update_slave_arr(bond, NULL);
 
 	if (!slave_dev->netdev_ops->ndo_bpf ||
···
 
 	bond_upper_dev_unlink(bond, slave);
 
-	if (bond_mode_can_use_xmit_hash(bond))
+	if (bond_mode_can_use_xmit_hash(bond) ||
+	    BOND_MODE(bond) == BOND_MODE_BROADCAST)
 		bond_update_slave_arr(bond, slave);
 
 	slave_info(bond_dev, slave_dev, "Releasing %s interface\n",
···
 {
 	struct bonding *bond = container_of(work, struct bonding,
 					    mii_work.work);
-	bool should_notify_peers = false;
+	bool should_notify_peers;
 	bool commit;
 	unsigned long delay;
 	struct slave *slave;
···
 		goto re_arm;
 
 	rcu_read_lock();
+
 	should_notify_peers = bond_should_notify_peers(bond);
 	commit = !!bond_miimon_inspect(bond);
-	if (bond->send_peer_notif) {
-		rcu_read_unlock();
-		if (rtnl_trylock()) {
-			bond->send_peer_notif--;
-			rtnl_unlock();
-		}
-	} else {
-		rcu_read_unlock();
-	}
 
-	if (commit) {
+	rcu_read_unlock();
+
+	if (commit || bond->send_peer_notif) {
 		/* Race avoidance with bond_close cancel of workqueue */
 		if (!rtnl_trylock()) {
 			delay = 1;
-			should_notify_peers = false;
 			goto re_arm;
 		}
 
-		bond_for_each_slave(bond, slave, iter) {
-			bond_commit_link_state(slave, BOND_SLAVE_NOTIFY_LATER);
+		if (commit) {
+			bond_for_each_slave(bond, slave, iter) {
+				bond_commit_link_state(slave,
+						       BOND_SLAVE_NOTIFY_LATER);
+			}
+			bond_miimon_commit(bond);
 		}
-		bond_miimon_commit(bond);
+
+		if (bond->send_peer_notif) {
+			bond->send_peer_notif--;
+			if (should_notify_peers)
+				call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
+							 bond->dev);
+		}
 
 		rtnl_unlock();	/* might sleep, hold no other locks */
 	}
···
 re_arm:
 	if (bond->params.miimon)
 		queue_delayed_work(bond->wq, &bond->mii_work, delay);
-
-	if (should_notify_peers) {
-		if (!rtnl_trylock())
-			return;
-		call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, bond->dev);
-		rtnl_unlock();
-	}
 }
 
 static int bond_upper_dev_walk(struct net_device *upper,
drivers/net/can/bxcan.c | +1 -1
···
 	u32 id;
 	int i, j;
 
-	if (can_dropped_invalid_skb(ndev, skb))
+	if (can_dev_dropped_skb(ndev, skb))
 		return NETDEV_TX_OK;
 
 	if (bxcan_tx_busy(priv))
drivers/net/can/dev/netlink.c | +4 -2
···
 	}
 
 	if (data[IFLA_CAN_RESTART_MS]) {
-		if (!priv->do_set_mode) {
+		unsigned int restart_ms = nla_get_u32(data[IFLA_CAN_RESTART_MS]);
+
+		if (restart_ms != 0 && !priv->do_set_mode) {
 			NL_SET_ERR_MSG(extack,
 				       "Device doesn't support restart from Bus Off");
 			return -EOPNOTSUPP;
···
 		/* Do not allow changing restart delay while running */
 		if (dev->flags & IFF_UP)
 			return -EBUSY;
-		priv->restart_ms = nla_get_u32(data[IFLA_CAN_RESTART_MS]);
+		priv->restart_ms = restart_ms;
 	}
 
 	if (data[IFLA_CAN_RESTART]) {
drivers/net/can/esd/esdacc.c | +1 -1
···
 	u32 acc_id;
 	u32 acc_dlc;
 
-	if (can_dropped_invalid_skb(netdev, skb))
+	if (can_dev_dropped_skb(netdev, skb))
 		return NETDEV_TX_OK;
 
 	/* Access core->tx_fifo_tail only once because it may be changed
drivers/net/can/rockchip/rockchip_canfd-tx.c | +1 -1
···
 	int err;
 	u8 i;
 
-	if (can_dropped_invalid_skb(ndev, skb))
+	if (can_dev_dropped_skb(ndev, skb))
 		return NETDEV_TX_OK;
 
 	if (!netif_subqueue_maybe_stop(priv->ndev, 0,
drivers/net/ethernet/dlink/dl2k.c | +1 -1
···
 	u64 tfc_vlan_tag = 0;
 
 	if (np->link_status == 0) {	/* Link Down */
-		dev_kfree_skb(skb);
+		dev_kfree_skb_any(skb);
 		return NETDEV_TX_OK;
 	}
 	entry = np->cur_tx % TX_RING_SIZE;
···
 	int err;
 
 	dev->priv.devc = mlx5_devcom_register_device(dev);
-	if (IS_ERR(dev->priv.devc))
-		mlx5_core_warn(dev, "failed to register devcom device %pe\n",
-			       dev->priv.devc);
+	if (!dev->priv.devc)
+		mlx5_core_warn(dev, "failed to register devcom device\n");
 
 	err = mlx5_query_board_id(dev);
 	if (err) {
drivers/net/ethernet/renesas/ravb_main.c | +22 -2
···
 
 		skb_tx_timestamp(skb);
 	}
-	/* Descriptor type must be set after all the above writes */
-	dma_wmb();
+
 	if (num_tx_desc > 1) {
 		desc->die_dt = DT_FEND;
 		desc--;
+		/* When using multi-descriptors, DT_FEND needs to get written
+		 * before DT_FSTART, but the compiler may reorder the memory
+		 * writes in an attempt to optimize the code.
+		 * Use a dma_wmb() barrier to make sure DT_FEND and DT_FSTART
+		 * are written exactly in the order shown in the code.
+		 * This is particularly important for cases where the DMA engine
+		 * is already running when we are running this code. If the DMA
+		 * sees DT_FSTART without the corresponding DT_FEND it will enter
+		 * an error condition.
+		 */
+		dma_wmb();
 		desc->die_dt = DT_FSTART;
 	} else {
+		/* Descriptor type must be set after all the above writes */
+		dma_wmb();
 		desc->die_dt = DT_FSINGLE;
 	}
+
+	/* Before ringing the doorbell we need to make sure that the latest
+	 * writes have been committed to memory, otherwise it could delay
+	 * things until the doorbell is rung again.
+	 * This is in replacement of the read operation mentioned in the HW
+	 * manuals.
+	 */
+	dma_wmb();
 	ravb_modify(ndev, TCCR, TCCR_TSRQ0 << q, TCCR_TSRQ0 << q);
 
 	priv->cur_tx[q] += num_tx_desc;
···
 static __poll_t ovpn_tcp_poll(struct file *file, struct socket *sock,
 			      poll_table *wait)
 {
-	__poll_t mask = datagram_poll(file, sock, wait);
+	struct sk_buff_head *queue = &sock->sk->sk_receive_queue;
 	struct ovpn_socket *ovpn_sock;
+	struct ovpn_peer *peer = NULL;
+	__poll_t mask;
 
 	rcu_read_lock();
 	ovpn_sock = rcu_dereference_sk_user_data(sock->sk);
-	if (ovpn_sock && ovpn_sock->peer &&
-	    !skb_queue_empty(&ovpn_sock->peer->tcp.user_queue))
-		mask |= EPOLLIN | EPOLLRDNORM;
+	/* if we landed in this callback, we expect to have a
+	 * meaningful state. The ovpn_socket lifecycle would
+	 * prevent it otherwise.
+	 */
+	if (WARN(!ovpn_sock || !ovpn_sock->peer,
+		 "ovpn: null state in ovpn_tcp_poll!")) {
+		rcu_read_unlock();
+		return 0;
+	}
+
+	if (ovpn_peer_hold(ovpn_sock->peer)) {
+		peer = ovpn_sock->peer;
+		queue = &peer->tcp.user_queue;
+	}
 	rcu_read_unlock();
+
+	mask = datagram_poll_queue(file, sock, wait, queue);
+
+	if (peer)
+		ovpn_peer_put(peer);
 
 	return mask;
 }
drivers/net/phy/micrel.c | +2 -2
···
 {
 	struct lan8814_shared_priv *shared = phy_package_get_priv(phydev);
 
+	shared->phydev = phydev;
+
 	/* Initialise shared lock for clock*/
 	mutex_init(&shared->shared_lock);
···
 		return 0;
 
 	phydev_dbg(phydev, "successfully registered ptp clock\n");
-
-	shared->phydev = phydev;
 
 	/* The EP.4 is shared between all the PHYs in the package and also it
 	 * can be accessed by any of the PHYs
···
 	queue = sk->sk_user_data;
 	if (likely(queue && sk_stream_is_writeable(sk))) {
 		clear_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
+		/* Ensure pending TLS partial records are retried */
+		if (nvme_tcp_queue_tls(queue))
+			queue->write_space(sk);
 		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
 	}
 	read_unlock_bh(&sk->sk_callback_lock);
drivers/pci/Kconfig | +1
···
 	bool "VGA Arbitration" if EXPERT
 	default y
 	depends on (PCI && !S390)
+	select SCREEN_INFO if X86
 	help
 	  Some "legacy" VGA devices implemented on PCI typically have the same
 	  hard-decoded addresses as they did on ISA. When multiple PCI devices
drivers/pci/controller/cadence/pcie-cadence-ep.c | +1 -1
···
 	u16 flags, mme;
 	u8 cap;
 
-	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX);
+	cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI);
 	fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn);
 
 	/* Validate that the MSI feature is actually enabled. */
···
 	if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE) {
 		/* Program crspace counters to count clock cycles using "count_clock" sysfs */
 		attr = &pmc->block[blk_num].attr_count_clock;
+		sysfs_attr_init(&attr->dev_attr.attr);
 		attr->dev_attr.attr.mode = 0644;
 		attr->dev_attr.show = mlxbf_pmc_count_clock_show;
 		attr->dev_attr.store = mlxbf_pmc_count_clock_store;
drivers/platform/x86/dell/alienware-wmi-wmax.c | +10 -2
···
 		.driver_data = &g_series_quirks,
 	},
 	{
+		.ident = "Dell Inc. G15 5530",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "Dell G15 5530"),
+		},
+		.driver_data = &g_series_quirks,
+	},
+	{
 		.ident = "Dell Inc. G16 7630",
 		.matches = {
 			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
···
 
 static int wmax_wmi_suspend(struct device *dev)
 {
-	if (awcc->hwmon)
+	if (awcc && awcc->hwmon)
 		awcc_hwmon_suspend(dev);
 
 	return 0;
···
 
 static int wmax_wmi_resume(struct device *dev)
 {
-	if (awcc->hwmon)
+	if (awcc && awcc->hwmon)
 		awcc_hwmon_resume(dev);
 
 	return 0;
drivers/ptp/ptp_ocp.c | +1 -1
···
 	for (i = 0; i < OCP_SMA_NUM; i++) {
 		bp->sma[i].fixed_fcn = true;
 		bp->sma[i].fixed_dir = true;
-		bp->sma[1].dpll_prop.capabilities &=
+		bp->sma[i].dpll_prop.capabilities &=
 			~DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE;
 	}
 	return;
drivers/scsi/libfc/fc_fcp.c | +1 -1
···
 		host_bcode = FC_ERROR;
 		goto err;
 	}
-	if (offset + len > fsp->data_len) {
+	if (size_add(offset, len) > fsp->data_len) {
 		/* this should never happen */
 		if ((fr_flags(fp) & FCPHF_CRC_UNCHECKED) &&
 		    fc_frame_crc_check(fp))
drivers/scsi/qla4xxx/ql4_os.c | +4 -4
···
  * The mid-level driver tries to ensure that queuecommand never gets
  * invoked concurrently with itself or the interrupt handler (although
  * the interrupt handler may call this routine as part of request-
- * completion handling).   Unfortunely, it sometimes calls the scheduler
+ * completion handling).   Unfortunately, it sometimes calls the scheduler
  * in interrupt context which is a big NO! NO!.
  **/
 static int qla4xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
···
 	cmd = scsi_host_find_tag(ha->host, index);
 	/*
 	 * We cannot just check if the index is valid,
-	 * becase if we are run from the scsi eh, then
+	 * because if we are run from the scsi eh, then
 	 * the scsi/block layer is going to prevent
 	 * the tag from being released.
 	 */
···
 	/* Upon successful firmware/chip reset, re-initialize the adapter */
 	if (status == QLA_SUCCESS) {
 		/* For ISP-4xxx, force function 1 to always initialize
-		 * before function 3 to prevent both funcions from
+		 * before function 3 to prevent both functions from
 		 * stepping on top of the other */
 		if (is_qla40XX(ha) && (ha->mac_index == 3))
 			ssleep(6);
···
 	struct ddb_entry *ddb_entry = NULL;
 
 	/* Create session object, with INVALID_ENTRY,
-	 * the targer_id would get set when we issue the login
+	 * the target_id would get set when we issue the login
 	 */
 	cls_sess = iscsi_session_setup(&qla4xxx_iscsi_transport, ha->host,
 				       cmds_max, sizeof(struct ddb_entry),
drivers/scsi/storvsc_drv.c | +46 -52
···
 	}

 	/*
-	 * Our channel array is sparsley populated and we
+	 * Our channel array could be sparsley populated and we
 	 * initiated I/O on a processor/hw-q that does not
 	 * currently have a designated channel. Fix this.
 	 * The strategy is simple:
-	 * I. Ensure NUMA locality
-	 * II. Distribute evenly (best effort)
+	 * I. Prefer the channel associated with the current CPU
+	 * II. Ensure NUMA locality
+	 * III. Distribute evenly (best effort)
 	 */
+
+	/* Prefer the channel on the I/O issuing processor/hw-q */
+	if (cpumask_test_cpu(q_num, &stor_device->alloced_cpus))
+		return stor_device->stor_chns[q_num];

 	node_mask = cpumask_of_node(cpu_to_node(q_num));
···
 	/* See storvsc_change_target_cpu(). */
 	outgoing_channel = READ_ONCE(stor_device->stor_chns[q_num]);
 	if (outgoing_channel != NULL) {
-		if (outgoing_channel->target_cpu == q_num) {
-			/*
-			 * Ideally, we want to pick a different channel if
-			 * available on the same NUMA node.
-			 */
-			node_mask = cpumask_of_node(cpu_to_node(q_num));
-			for_each_cpu_wrap(tgt_cpu,
-				 &stor_device->alloced_cpus, q_num + 1) {
-				if (!cpumask_test_cpu(tgt_cpu, node_mask))
-					continue;
-				if (tgt_cpu == q_num)
-					continue;
-				channel = READ_ONCE(
-					stor_device->stor_chns[tgt_cpu]);
-				if (channel == NULL)
-					continue;
-				if (hv_get_avail_to_write_percent(
-							&channel->outbound)
-						> ring_avail_percent_lowater) {
-					outgoing_channel = channel;
-					goto found_channel;
-				}
-			}
-
-			/*
-			 * All the other channels on the same NUMA node are
-			 * busy. Try to use the channel on the current CPU
-			 */
-			if (hv_get_avail_to_write_percent(
-						&outgoing_channel->outbound)
-					> ring_avail_percent_lowater)
-				goto found_channel;
-
-			/*
-			 * If we reach here, all the channels on the current
-			 * NUMA node are busy. Try to find a channel in
-			 * other NUMA nodes
-			 */
-			for_each_cpu(tgt_cpu, &stor_device->alloced_cpus) {
-				if (cpumask_test_cpu(tgt_cpu, node_mask))
-					continue;
-				channel = READ_ONCE(
-					stor_device->stor_chns[tgt_cpu]);
-				if (channel == NULL)
-					continue;
-				if (hv_get_avail_to_write_percent(
-						&channel->outbound)
-					> ring_avail_percent_lowater) {
-					outgoing_channel = channel;
-					goto found_channel;
-				}
-			}
-		}
+		if (hv_get_avail_to_write_percent(&outgoing_channel->outbound)
+		    > ring_avail_percent_lowater)
+			goto found_channel;
+
+		/*
+		 * Channel is busy, try to find a channel on the same NUMA node
+		 */
+		node_mask = cpumask_of_node(cpu_to_node(q_num));
+		for_each_cpu_wrap(tgt_cpu, &stor_device->alloced_cpus,
+				  q_num + 1) {
+			if (!cpumask_test_cpu(tgt_cpu, node_mask))
+				continue;
+			channel = READ_ONCE(stor_device->stor_chns[tgt_cpu]);
+			if (!channel)
+				continue;
+			if (hv_get_avail_to_write_percent(&channel->outbound)
+			    > ring_avail_percent_lowater) {
+				outgoing_channel = channel;
+				goto found_channel;
+			}
+		}
+
+		/*
+		 * If we reach here, all the channels on the current
+		 * NUMA node are busy. Try to find a channel in
+		 * all NUMA nodes
+		 */
+		for_each_cpu_wrap(tgt_cpu, &stor_device->alloced_cpus,
+				  q_num + 1) {
+			channel = READ_ONCE(stor_device->stor_chns[tgt_cpu]);
+			if (!channel)
+				continue;
+			if (hv_get_avail_to_write_percent(&channel->outbound)
+			    > ring_avail_percent_lowater) {
+				outgoing_channel = channel;
+				goto found_channel;
+			}
+		}
+		/*
+		 * If we reach here, all the channels are busy. Use the
+		 * original channel found.
+		 */
 	} else {
 		spin_lock_irqsave(&stor_device->lock, flags);
 		outgoing_channel = stor_device->stor_chns[q_num];
···
 	 * Don't update inode if the file type is different
 	 */
 	umode = p9mode2unixmode(v9ses, st, &rdev);
-	if (inode_wrong_type(inode, umode)) {
-		/*
-		 * Do this as a way of letting the caller know the inode should not
-		 * be reused
-		 */
-		v9fs_invalidate_inode_attr(inode);
+	if (inode_wrong_type(inode, umode))
 		goto out;
-	}

 	/*
 	 * We don't want to refresh inode->i_size,
fs/9p/vfs_inode_dotl.c (+1 -7)
···
 	/*
 	 * Don't update inode if the file type is different
 	 */
-	if (inode_wrong_type(inode, st->st_mode)) {
-		/*
-		 * Do this as a way of letting the caller know the inode should not
-		 * be reused
-		 */
-		v9fs_invalidate_inode_attr(inode);
+	if (inode_wrong_type(inode, st->st_mode))
 		goto out;
-	}

 	/*
 	 * We don't want to refresh inode->i_size,
fs/btrfs/delayed-inode.c (+1 -1)
···
 	for (int i = 0; i < count; i++) {
 		__btrfs_kill_delayed_node(delayed_nodes[i]);
+		btrfs_delayed_node_ref_tracker_dir_print(delayed_nodes[i]);
 		btrfs_release_delayed_node(delayed_nodes[i],
 					   &delayed_node_trackers[i]);
-		btrfs_delayed_node_ref_tracker_dir_print(delayed_nodes[i]);
 	}
 }
}
fs/btrfs/delayed-inode.h (+7)
···
 	if (!btrfs_test_opt(node->root->fs_info, REF_TRACKER))
 		return;

+	/*
+	 * Only print if there are leaked references. The caller is
+	 * holding one reference, so if refs == 1 there is no leak.
+	 */
+	if (refcount_read(&node->refs) == 1)
+		return;
+
 	ref_tracker_dir_print(&node->ref_dir.dir,
 			      BTRFS_DELAYED_NODE_REF_TRACKER_DISPLAY_LIMIT);
 }
fs/btrfs/extent_io.c (+1 -1)
···
 {
 	const u64 ra_pos = readahead_pos(ractl);
 	const u64 ra_end = ra_pos + readahead_length(ractl);
-	const u64 em_end = em->start + em->ram_bytes;
+	const u64 em_end = em->start + em->len;

 	/* No expansion for holes and inline extents. */
 	if (em->disk_bytenr > EXTENT_MAP_LAST_BYTE)
fs/btrfs/free-space-tree.c (+8 -7)
···
 	 * If ret is 1 (no key found), it means this is an empty block group,
 	 * without any extents allocated from it and there's no block group
 	 * item (key BTRFS_BLOCK_GROUP_ITEM_KEY) located in the extent tree
-	 * because we are using the block group tree feature, so block group
-	 * items are stored in the block group tree. It also means there are no
-	 * extents allocated for block groups with a start offset beyond this
-	 * block group's end offset (this is the last, highest, block group).
+	 * because we are using the block group tree feature (so block group
+	 * items are stored in the block group tree) or this is a new block
+	 * group created in the current transaction and its block group item
+	 * was not yet inserted in the extent tree (that happens in
+	 * btrfs_create_pending_block_groups() -> insert_block_group_item()).
+	 * It also means there are no extents allocated for block groups with a
+	 * start offset beyond this block group's end offset (this is the last,
+	 * highest, block group).
 	 */
-	if (!btrfs_fs_compat_ro(trans->fs_info, BLOCK_GROUP_TREE))
-		ASSERT(ret == 0);
-
 	start = block_group->start;
 	end = block_group->start + block_group->length;
 	while (ret == 0) {
···
 	extent_root = btrfs_extent_root(fs_info, 0);
 	/* If the extent tree is damaged we cannot ignore it (IGNOREBADROOTS). */
-	if (IS_ERR(extent_root)) {
+	if (!extent_root) {
 		btrfs_warn(fs_info, "ref-verify: extent tree not available, disabling");
 		btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);
 		return 0;
fs/btrfs/relocation.c (+7 -6)
···
 /*
  * Mark start of chunk relocation that is cancellable. Check if the cancellation
  * has been requested meanwhile and don't start in that case.
+ * NOTE: if this returns an error, reloc_chunk_end() must not be called.
  *
  * Return:
  * 0 success
···
 	if (atomic_read(&fs_info->reloc_cancel_req) > 0) {
 		btrfs_info(fs_info, "chunk relocation canceled on start");
-		/*
-		 * On cancel, clear all requests but let the caller mark
-		 * the end after cleanup operations.
-		 */
+		/* On cancel, clear all requests. */
+		clear_and_wake_up_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags);
 		atomic_set(&fs_info->reloc_cancel_req, 0);
 		return -ECANCELED;
 	}
···
 /*
  * Mark end of chunk relocation that is cancellable and wake any waiters.
+ * NOTE: call only if a previous call to reloc_chunk_start() succeeded.
  */
 static void reloc_chunk_end(struct btrfs_fs_info *fs_info)
 {
+	ASSERT(test_bit(BTRFS_FS_RELOC_RUNNING, &fs_info->flags));
 	/* Requested after start, clear bit first so any waiters can continue */
 	if (atomic_read(&fs_info->reloc_cancel_req) > 0)
 		btrfs_info(fs_info, "chunk relocation canceled during operation");
···
 	if (err && rw)
 		btrfs_dec_block_group_ro(rc->block_group);
 	iput(rc->data_inode);
+	reloc_chunk_end(fs_info);
 out_put_bg:
 	btrfs_put_block_group(bg);
-	reloc_chunk_end(fs_info);
 	free_reloc_control(rc);
 	return err;
 }
···
 		ret = ret2;
 out_unset:
 	unset_reloc_control(rc);
-out_end:
 	reloc_chunk_end(fs_info);
+out_end:
 	free_reloc_control(rc);
 out:
 	free_reloc_roots(&reloc_roots);
fs/btrfs/scrub.c (+2 -2)
···
 	/* stripe->folios[] is allocated by us and no highmem is allowed. */
 	ASSERT(folio);
-	ASSERT(!folio_test_partial_kmap(folio));
+	ASSERT(!folio_test_highmem(folio));
 	return folio_address(folio) + offset_in_folio(folio, offset);
 }
···
 	/* stripe->folios[] is allocated by us and no highmem is allowed. */
 	ASSERT(folio);
-	ASSERT(!folio_test_partial_kmap(folio));
+	ASSERT(!folio_test_highmem(folio));
 	/* And the range must be contained inside the folio. */
 	ASSERT(offset_in_folio(folio, offset) + fs_info->sectorsize <= folio_size(folio));
 	return page_to_phys(folio_page(folio, 0)) + offset_in_folio(folio, offset);
fs/btrfs/send.c (+51 -9)
···
 	u64 cur_inode_rdev;
 	u64 cur_inode_last_extent;
 	u64 cur_inode_next_write_offset;
-	struct fs_path cur_inode_path;
 	bool cur_inode_new;
 	bool cur_inode_new_gen;
 	bool cur_inode_deleted;
···
 	struct btrfs_lru_cache dir_created_cache;
 	struct btrfs_lru_cache dir_utimes_cache;
+
+	/* Must be last as it ends in a flexible-array member. */
+	struct fs_path cur_inode_path;
 };

 struct pending_dir_move {
···
 	return ret;
 }

+static int rbtree_check_dir_ref_comp(const void *k, const struct rb_node *node)
+{
+	const struct recorded_ref *data = k;
+	const struct recorded_ref *ref = rb_entry(node, struct recorded_ref, node);
+
+	if (data->dir > ref->dir)
+		return 1;
+	if (data->dir < ref->dir)
+		return -1;
+	if (data->dir_gen > ref->dir_gen)
+		return 1;
+	if (data->dir_gen < ref->dir_gen)
+		return -1;
+	return 0;
+}
+
+static bool rbtree_check_dir_ref_less(struct rb_node *node, const struct rb_node *parent)
+{
+	const struct recorded_ref *entry = rb_entry(node, struct recorded_ref, node);
+
+	return rbtree_check_dir_ref_comp(entry, parent) < 0;
+}
+
+static int record_check_dir_ref_in_tree(struct rb_root *root,
+					struct recorded_ref *ref, struct list_head *list)
+{
+	struct recorded_ref *tmp_ref;
+	int ret;
+
+	if (rb_find(ref, root, rbtree_check_dir_ref_comp))
+		return 0;
+
+	ret = dup_ref(ref, list);
+	if (ret < 0)
+		return ret;
+
+	tmp_ref = list_last_entry(list, struct recorded_ref, list);
+	rb_add(&tmp_ref->node, root, rbtree_check_dir_ref_less);
+	tmp_ref->root = root;
+	return 0;
+}
+
 static int rename_current_inode(struct send_ctx *sctx,
 				struct fs_path *current_path,
 				struct fs_path *new_path)
···
 	struct recorded_ref *cur;
 	struct recorded_ref *cur2;
 	LIST_HEAD(check_dirs);
+	struct rb_root rbtree_check_dirs = RB_ROOT;
 	struct fs_path *valid_path = NULL;
 	u64 ow_inode = 0;
 	u64 ow_gen;
 	u64 ow_mode;
-	u64 last_dir_ino_rm = 0;
 	bool did_overwrite = false;
 	bool is_orphan = false;
 	bool can_rename = true;
···
 				goto out;
 			}
 		}
-		ret = dup_ref(cur, &check_dirs);
+		ret = record_check_dir_ref_in_tree(&rbtree_check_dirs, cur, &check_dirs);
 		if (ret < 0)
 			goto out;
 	}
···
 	}

 	list_for_each_entry(cur, &sctx->deleted_refs, list) {
-		ret = dup_ref(cur, &check_dirs);
+		ret = record_check_dir_ref_in_tree(&rbtree_check_dirs, cur, &check_dirs);
 		if (ret < 0)
 			goto out;
 	}
···
 		 * We have a moved dir. Add the old parent to check_dirs
 		 */
 		cur = list_first_entry(&sctx->deleted_refs, struct recorded_ref, list);
-		ret = dup_ref(cur, &check_dirs);
+		ret = record_check_dir_ref_in_tree(&rbtree_check_dirs, cur, &check_dirs);
 		if (ret < 0)
 			goto out;
 	} else if (!S_ISDIR(sctx->cur_inode_mode)) {
···
 			if (is_current_inode_path(sctx, cur->full_path))
 				fs_path_reset(&sctx->cur_inode_path);
 		}
-		ret = dup_ref(cur, &check_dirs);
+		ret = record_check_dir_ref_in_tree(&rbtree_check_dirs, cur, &check_dirs);
 		if (ret < 0)
 			goto out;
 	}
···
 			ret = cache_dir_utimes(sctx, cur->dir, cur->dir_gen);
 			if (ret < 0)
 				goto out;
-		} else if (ret == inode_state_did_delete &&
-			   cur->dir != last_dir_ino_rm) {
+		} else if (ret == inode_state_did_delete) {
 			ret = can_rmdir(sctx, cur->dir, cur->dir_gen);
 			if (ret < 0)
 				goto out;
···
 				ret = send_rmdir(sctx, valid_path);
 				if (ret < 0)
 					goto out;
-				last_dir_ino_rm = cur->dir;
 			}
 		}
 	}
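The send.c patch above replaces unconditional `dup_ref()` calls with a helper that first looks the `(dir, dir_gen)` key up in an rbtree, so each directory lands on `check_dirs` at most once. The kernel uses `rb_find()`/`rb_add()`; this userspace sketch shows the same dedup-before-append idea with a plain array and the same three-way comparator. The names `dir_key`, `key_list`, and `record_once` are invented for the example.

```c
#include <assert.h>
#include <stddef.h>

/* Invented key type mirroring the (dir, dir_gen) pair from the patch. */
struct dir_key {
	unsigned long long dir;
	unsigned long long dir_gen;
};

struct key_list {
	struct dir_key keys[64];
	size_t nr;
};

/* Same ordering as rbtree_check_dir_ref_comp(): dir first, then dir_gen. */
static int dir_key_cmp(const struct dir_key *a, const struct dir_key *b)
{
	if (a->dir != b->dir)
		return a->dir > b->dir ? 1 : -1;
	if (a->dir_gen != b->dir_gen)
		return a->dir_gen > b->dir_gen ? 1 : -1;
	return 0;
}

/* Returns 1 if the key was appended, 0 if it was already queued. */
static int record_once(struct key_list *list, struct dir_key key)
{
	size_t i;

	/* Linear lookup stands in for the kernel's rb_find(). */
	for (i = 0; i < list->nr; i++)
		if (dir_key_cmp(&list->keys[i], &key) == 0)
			return 0;
	list->keys[list->nr++] = key;
	return 1;
}
```

The rbtree matters in the kernel because an inode can carry many references to the same parent directory; a keyed lookup makes the duplicate check O(log n) per ref instead of rescanning the list.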
fs/btrfs/super.c (+8 -3)
···
 		return PTR_ERR(sb);
 	}

-	set_device_specific_options(fs_info);
-
 	if (sb->s_root) {
 		/*
 		 * Not the first mount of the fs thus got an existing super block.
···
 			deactivate_locked_super(sb);
 			return -EACCES;
 		}
+		set_device_specific_options(fs_info);
 		bdev = fs_devices->latest_dev->bdev;
 		snprintf(sb->s_id, sizeof(sb->s_id), "%pg", bdev);
 		shrinker_debugfs_rename(sb->s_shrink, "sb-btrfs:%s", sb->s_id);
···
 	fs_info->super_copy = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);
 	fs_info->super_for_commit = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);
 	if (!fs_info->super_copy || !fs_info->super_for_commit) {
-		btrfs_free_fs_info(fs_info);
+		/*
+		 * Dont call btrfs_free_fs_info() to free it as it's still
+		 * initialized partially.
+		 */
+		kfree(fs_info->super_copy);
+		kfree(fs_info->super_for_commit);
+		kvfree(fs_info);
 		return -ENOMEM;
 	}
 	btrfs_init_fs_info(fs_info);
···
 		if (!hugetlb_vma_trylock_write(vma))
 			continue;

-		/*
-		 * Skip VMAs without shareable locks. Per the design in commit
-		 * 40549ba8f8e0, these will be handled by remove_inode_hugepages()
-		 * called after this function with proper locking.
-		 */
-		if (!__vma_shareable_lock(vma))
-			goto skip;
-
 		v_start = vma_offset_start(vma, start);
 		v_end = vma_offset_end(vma, end);
···
 		 * vmas. Therefore, lock is not held when calling
 		 * unmap_hugepage_range for private vmas.
 		 */
-skip:
 		hugetlb_vma_unlock_write(vma);
 	}
 }
···
 	/* Deal with the suid/sgid bit corner case */
 	if (nfs_should_remove_suid(inode)) {
 		spin_lock(&inode->i_lock);
-		nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE);
+		nfs_set_cache_invalid(inode, NFS_INO_INVALID_MODE
+				| NFS_INO_REVAL_FORCED);
 		spin_unlock(&inode->i_lock);
 	}
 	return 0;
···
 sid_to_id(struct cifs_sb_info *cifs_sb, struct smb_sid *psid,
 		struct cifs_fattr *fattr, uint sidtype)
 {
-	int rc = 0;
 	struct key *sidkey;
 	char *sidstr;
 	const struct cred *saved_cred;
···
 	 * fails then we just fall back to using the ctx->linux_uid/linux_gid.
 	 */
 got_valid_id:
-	rc = 0;
 	if (sidtype == SIDOWNER)
 		fattr->cf_uid = fuid;
 	else
 		fattr->cf_gid = fgid;
-	return rc;
+
+	return 0;
 }

 int
fs/smb/client/cifsencrypt.c (+74 -127)
···
 #include <linux/iov_iter.h>
 #include <crypto/aead.h>
 #include <crypto/arc4.h>
+#include <crypto/md5.h>
+#include <crypto/sha2.h>

-static size_t cifs_shash_step(void *iter_base, size_t progress, size_t len,
-			      void *priv, void *priv2)
+static int cifs_sig_update(struct cifs_calc_sig_ctx *ctx,
+			   const u8 *data, size_t len)
 {
-	struct shash_desc *shash = priv;
+	if (ctx->md5) {
+		md5_update(ctx->md5, data, len);
+		return 0;
+	}
+	if (ctx->hmac) {
+		hmac_sha256_update(ctx->hmac, data, len);
+		return 0;
+	}
+	return crypto_shash_update(ctx->shash, data, len);
+}
+
+static int cifs_sig_final(struct cifs_calc_sig_ctx *ctx, u8 *out)
+{
+	if (ctx->md5) {
+		md5_final(ctx->md5, out);
+		return 0;
+	}
+	if (ctx->hmac) {
+		hmac_sha256_final(ctx->hmac, out);
+		return 0;
+	}
+	return crypto_shash_final(ctx->shash, out);
+}
+
+static size_t cifs_sig_step(void *iter_base, size_t progress, size_t len,
+			    void *priv, void *priv2)
+{
+	struct cifs_calc_sig_ctx *ctx = priv;
 	int ret, *pret = priv2;

-	ret = crypto_shash_update(shash, iter_base, len);
+	ret = cifs_sig_update(ctx, iter_base, len);
 	if (ret < 0) {
 		*pret = ret;
 		return len;
···
 /*
  * Pass the data from an iterator into a hash.
  */
-static int cifs_shash_iter(const struct iov_iter *iter, size_t maxsize,
-			   struct shash_desc *shash)
+static int cifs_sig_iter(const struct iov_iter *iter, size_t maxsize,
+			 struct cifs_calc_sig_ctx *ctx)
 {
 	struct iov_iter tmp_iter = *iter;
 	int err = -EIO;

-	if (iterate_and_advance_kernel(&tmp_iter, maxsize, shash, &err,
-				       cifs_shash_step) != maxsize)
+	if (iterate_and_advance_kernel(&tmp_iter, maxsize, ctx, &err,
+				       cifs_sig_step) != maxsize)
 		return err;
 	return 0;
 }

-int __cifs_calc_signature(struct smb_rqst *rqst,
-			  struct TCP_Server_Info *server, char *signature,
-			  struct shash_desc *shash)
+int __cifs_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+			  char *signature, struct cifs_calc_sig_ctx *ctx)
 {
 	int i;
 	ssize_t rc;
···
 		return -EIO;
 	}

-		rc = crypto_shash_update(shash,
-					 iov[i].iov_base, iov[i].iov_len);
+		rc = cifs_sig_update(ctx, iov[i].iov_base, iov[i].iov_len);
 		if (rc) {
 			cifs_dbg(VFS, "%s: Could not update with payload\n",
 				 __func__);
···
 		}
 	}

-	rc = cifs_shash_iter(&rqst->rq_iter, iov_iter_count(&rqst->rq_iter), shash);
+	rc = cifs_sig_iter(&rqst->rq_iter, iov_iter_count(&rqst->rq_iter), ctx);
 	if (rc < 0)
 		return rc;

-	rc = crypto_shash_final(shash, signature);
+	rc = cifs_sig_final(ctx, signature);
 	if (rc)
 		cifs_dbg(VFS, "%s: Could not generate hash\n", __func__);
···
 static int cifs_calc_signature(struct smb_rqst *rqst,
 			struct TCP_Server_Info *server, char *signature)
 {
-	int rc;
+	struct md5_ctx ctx;

 	if (!rqst->rq_iov || !signature || !server)
 		return -EINVAL;
-
-	rc = cifs_alloc_hash("md5", &server->secmech.md5);
-	if (rc)
-		return -1;
-
-	rc = crypto_shash_init(server->secmech.md5);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not init md5\n", __func__);
-		return rc;
+	if (fips_enabled) {
+		cifs_dbg(VFS,
+			 "MD5 signature support is disabled due to FIPS\n");
+		return -EOPNOTSUPP;
 	}

-	rc = crypto_shash_update(server->secmech.md5,
-		server->session_key.response, server->session_key.len);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not update with response\n", __func__);
-		return rc;
-	}
+	md5_init(&ctx);
+	md5_update(&ctx, server->session_key.response, server->session_key.len);

-	return __cifs_calc_signature(rqst, server, signature, server->secmech.md5);
+	return __cifs_calc_signature(
+		rqst, server, signature,
+		&(struct cifs_calc_sig_ctx){ .md5 = &ctx });
 }

 /* must be called with server->srv_mutex held */
···
 static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
-			    const struct nls_table *nls_cp, struct shash_desc *hmacmd5)
+			    const struct nls_table *nls_cp)
 {
-	int rc = 0;
 	int len;
 	char nt_hash[CIFS_NTHASH_SIZE];
+	struct hmac_md5_ctx hmac_ctx;
 	__le16 *user;
 	wchar_t *domain;
 	wchar_t *server;
···
 	/* calculate md4 hash of password */
 	E_md4hash(ses->password, nt_hash, nls_cp);

-	rc = crypto_shash_setkey(hmacmd5->tfm, nt_hash, CIFS_NTHASH_SIZE);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not set NT hash as a key, rc=%d\n", __func__, rc);
-		return rc;
-	}
-
-	rc = crypto_shash_init(hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc);
-		return rc;
-	}
+	hmac_md5_init_usingrawkey(&hmac_ctx, nt_hash, CIFS_NTHASH_SIZE);

 	/* convert ses->user_name to unicode */
 	len = ses->user_name ? strlen(ses->user_name) : 0;
···
 		*(u16 *)user = 0;
 	}

-	rc = crypto_shash_update(hmacmd5, (char *)user, 2 * len);
+	hmac_md5_update(&hmac_ctx, (const u8 *)user, 2 * len);
 	kfree(user);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not update with user, rc=%d\n", __func__, rc);
-		return rc;
-	}

 	/* convert ses->domainName to unicode and uppercase */
 	if (ses->domainName) {
···
 		len = cifs_strtoUTF16((__le16 *)domain, ses->domainName, len,
 				      nls_cp);
-		rc = crypto_shash_update(hmacmd5, (char *)domain, 2 * len);
+		hmac_md5_update(&hmac_ctx, (const u8 *)domain, 2 * len);
 		kfree(domain);
-		if (rc) {
-			cifs_dbg(VFS, "%s: Could not update with domain, rc=%d\n", __func__, rc);
-			return rc;
-		}
 	} else {
 		/* We use ses->ip_addr if no domain name available */
 		len = strlen(ses->ip_addr);
···
 			return -ENOMEM;

 		len = cifs_strtoUTF16((__le16 *)server, ses->ip_addr, len, nls_cp);
-		rc = crypto_shash_update(hmacmd5, (char *)server, 2 * len);
+		hmac_md5_update(&hmac_ctx, (const u8 *)server, 2 * len);
 		kfree(server);
-		if (rc) {
-			cifs_dbg(VFS, "%s: Could not update with server, rc=%d\n", __func__, rc);
-			return rc;
-		}
 	}

-	rc = crypto_shash_final(hmacmd5, ntlmv2_hash);
-	if (rc)
-		cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc);
-
-	return rc;
+	hmac_md5_final(&hmac_ctx, ntlmv2_hash);
+	return 0;
 }

-static int
-CalcNTLMv2_response(const struct cifs_ses *ses, char *ntlmv2_hash, struct shash_desc *hmacmd5)
+static void CalcNTLMv2_response(const struct cifs_ses *ses, char *ntlmv2_hash)
 {
-	int rc;
 	struct ntlmv2_resp *ntlmv2 = (struct ntlmv2_resp *)
 	    (ses->auth_key.response + CIFS_SESS_KEY_SIZE);
 	unsigned int hash_len;
···
 	hash_len = ses->auth_key.len - (CIFS_SESS_KEY_SIZE +
 		offsetof(struct ntlmv2_resp, challenge.key[0]));

-	rc = crypto_shash_setkey(hmacmd5->tfm, ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not set NTLMv2 hash as a key, rc=%d\n", __func__, rc);
-		return rc;
-	}
-
-	rc = crypto_shash_init(hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc);
-		return rc;
-	}
-
 	if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED)
 		memcpy(ntlmv2->challenge.key, ses->ntlmssp->cryptkey, CIFS_SERVER_CHALLENGE_SIZE);
 	else
 		memcpy(ntlmv2->challenge.key, ses->server->cryptkey, CIFS_SERVER_CHALLENGE_SIZE);

-	rc = crypto_shash_update(hmacmd5, ntlmv2->challenge.key, hash_len);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not update with response, rc=%d\n", __func__, rc);
-		return rc;
-	}
-
-	/* Note that the MD5 digest over writes anon.challenge_key.key */
-	rc = crypto_shash_final(hmacmd5, ntlmv2->ntlmv2_hash);
-	if (rc)
-		cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc);
-
-	return rc;
+	/* Note that the HMAC-MD5 value overwrites ntlmv2->challenge.key */
+	hmac_md5_usingrawkey(ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE,
+			     ntlmv2->challenge.key, hash_len,
+			     ntlmv2->ntlmv2_hash);
 }

 /*
···
 int
 setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
 {
-	struct shash_desc *hmacmd5 = NULL;
 	unsigned char *tiblob = NULL; /* target info blob */
 	struct ntlmv2_resp *ntlmv2;
 	char ntlmv2_hash[16];
···
 	ntlmv2->client_chal = cc;
 	ntlmv2->reserved2 = 0;

-	rc = cifs_alloc_hash("hmac(md5)", &hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "Could not allocate HMAC-MD5, rc=%d\n", rc);
+	if (fips_enabled) {
+		cifs_dbg(VFS, "NTLMv2 support is disabled due to FIPS\n");
+		rc = -EOPNOTSUPP;
 		goto unlock;
 	}

 	/* calculate ntlmv2_hash */
-	rc = calc_ntlmv2_hash(ses, ntlmv2_hash, nls_cp, hmacmd5);
+	rc = calc_ntlmv2_hash(ses, ntlmv2_hash, nls_cp);
 	if (rc) {
 		cifs_dbg(VFS, "Could not get NTLMv2 hash, rc=%d\n", rc);
 		goto unlock;
 	}

 	/* calculate first part of the client response (CR1) */
-	rc = CalcNTLMv2_response(ses, ntlmv2_hash, hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "Could not calculate CR1, rc=%d\n", rc);
-		goto unlock;
-	}
+	CalcNTLMv2_response(ses, ntlmv2_hash);

 	/* now calculate the session key for NTLMv2 */
-	rc = crypto_shash_setkey(hmacmd5->tfm, ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not set NTLMv2 hash as a key, rc=%d\n", __func__, rc);
-		goto unlock;
-	}
-
-	rc = crypto_shash_init(hmacmd5);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not init HMAC-MD5, rc=%d\n", __func__, rc);
-		goto unlock;
-	}
-
-	rc = crypto_shash_update(hmacmd5, ntlmv2->ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE);
-	if (rc) {
-		cifs_dbg(VFS, "%s: Could not update with response, rc=%d\n", __func__, rc);
-		goto unlock;
-	}
-
-	rc = crypto_shash_final(hmacmd5, ses->auth_key.response);
-	if (rc)
-		cifs_dbg(VFS, "%s: Could not generate MD5 hash, rc=%d\n", __func__, rc);
+	hmac_md5_usingrawkey(ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE,
+			     ntlmv2->ntlmv2_hash, CIFS_HMAC_MD5_HASH_SIZE,
+			     ses->auth_key.response);
+	rc = 0;
 unlock:
 	cifs_server_unlock(ses->server);
-	cifs_free_hash(&hmacmd5);
 setup_ntlmv2_rsp_ret:
 	kfree_sensitive(tiblob);
···
 cifs_crypto_secmech_release(struct TCP_Server_Info *server)
 {
 	cifs_free_hash(&server->secmech.aes_cmac);
-	cifs_free_hash(&server->secmech.hmacsha256);
-	cifs_free_hash(&server->secmech.md5);
-	cifs_free_hash(&server->secmech.sha512);

 	if (server->secmech.enc) {
 		crypto_free_aead(server->secmech.enc);
fs/smb/client/cifsfs.c (-4)
···
 	"also older servers complying with the SNIA CIFS Specification)");
 MODULE_VERSION(CIFS_VERSION);
 MODULE_SOFTDEP("ecb");
-MODULE_SOFTDEP("hmac");
-MODULE_SOFTDEP("md5");
 MODULE_SOFTDEP("nls");
 MODULE_SOFTDEP("aes");
 MODULE_SOFTDEP("cmac");
-MODULE_SOFTDEP("sha256");
-MODULE_SOFTDEP("sha512");
 MODULE_SOFTDEP("aead2");
 MODULE_SOFTDEP("ccm");
 MODULE_SOFTDEP("gcm");
fs/smb/client/cifsglob.h (+1 -21)
···
 #include "cifsacl.h"
 #include <crypto/internal/hash.h>
 #include <uapi/linux/cifs/cifs_mount.h>
+#include "../common/cifsglob.h"
 #include "../common/smb2pdu.h"
 #include "smb2pdu.h"
 #include <linux/filelock.h>
···
 /* crypto hashing related structure/fields, not specific to a sec mech */
 struct cifs_secmech {
-	struct shash_desc *md5; /* md5 hash function, for CIFS/SMB1 signatures */
-	struct shash_desc *hmacsha256; /* hmac-sha256 hash function, for SMB2 signatures */
-	struct shash_desc *sha512; /* sha512 hash function, for SMB3.1.1 preauth hash */
 	struct shash_desc *aes_cmac; /* block-cipher based MAC function, for SMB3 signatures */

 	struct crypto_aead *enc; /* smb3 encryption AEAD TFM (AES-CCM and AES-GCM) */
···
 	return be32_to_cpu(*((__be32 *)buf)) & 0xffffff;
 }

-static inline void
-inc_rfc1001_len(void *buf, int count)
-{
-	be32_add_cpu((__be32 *)buf, count);
-}
-
 struct TCP_Server_Info {
 	struct list_head tcp_ses_list;
 	struct list_head smb_ses_list;
···
  */
 #define CIFS_MAX_RFC1002_WSIZE ((1<<17) - 1 - sizeof(WRITE_REQ) + 4)
 #define CIFS_MAX_RFC1002_RSIZE ((1<<17) - 1 - sizeof(READ_RSP) + 4)
-
-#define CIFS_DEFAULT_IOSIZE (1024 * 1024)

 /*
  * Windows only supports a max of 60kb reads and 65535 byte writes. Default to
···
 extern mempool_t cifs_io_subrequest_pool;

 /* Operations for different SMB versions */
-#define SMB1_VERSION_STRING	"1.0"
-#define SMB20_VERSION_STRING	"2.0"
 #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
 extern struct smb_version_operations smb1_operations;
 extern struct smb_version_values smb1_values;
 extern struct smb_version_operations smb20_operations;
 extern struct smb_version_values smb20_values;
 #endif /* CIFS_ALLOW_INSECURE_LEGACY */
-#define SMB21_VERSION_STRING	"2.1"
 extern struct smb_version_operations smb21_operations;
 extern struct smb_version_values smb21_values;
-#define SMBDEFAULT_VERSION_STRING "default"
 extern struct smb_version_values smbdefault_values;
-#define SMB3ANY_VERSION_STRING "3"
 extern struct smb_version_values smb3any_values;
-#define SMB30_VERSION_STRING	"3.0"
 extern struct smb_version_operations smb30_operations;
 extern struct smb_version_values smb30_values;
-#define SMB302_VERSION_STRING	"3.02"
-#define ALT_SMB302_VERSION_STRING "3.0.2"
 /*extern struct smb_version_operations smb302_operations;*/ /* not needed yet */
 extern struct smb_version_values smb302_values;
-#define SMB311_VERSION_STRING	"3.1.1"
-#define ALT_SMB311_VERSION_STRING "3.11"
 extern struct smb_version_operations smb311_operations;
 extern struct smb_version_values smb311_values;
···
 	SMBDIRECT_MR_READY,
 	SMBDIRECT_MR_REGISTERED,
 	SMBDIRECT_MR_INVALIDATED,
-	SMBDIRECT_MR_ERROR
+	SMBDIRECT_MR_ERROR,
+	SMBDIRECT_MR_DISABLED
 };

 struct smbdirect_mr_io {
 	struct smbdirect_socket *socket;
 	struct ib_cqe cqe;
+
+	/*
+	 * We can have up to two references:
+	 * 1. by the connection
+	 * 2. by the registration
+	 */
+	struct kref kref;
+	struct mutex mutex;

 	struct list_head list;
···
 	int nr_frozen_tasks;

 	/* Freeze time data consistency protection */
-	seqcount_t freeze_seq;
+	seqcount_spinlock_t freeze_seq;

 	/*
 	 * Most recent time the cgroup was requested to freeze.
include/linux/exportfs.h (+4 -3)
···
 static inline bool exportfs_can_encode_fh(const struct export_operations *nop,
 					  int fh_flags)
 {
-	if (!nop)
-		return false;
-
 	/*
 	 * If a non-decodeable file handle was requested, we only need to make
 	 * sure that filesystem did not opt-out of encoding fid.
 	 */
 	if (fh_flags & EXPORT_FH_FID)
 		return exportfs_can_encode_fid(nop);
+
+	/* Normal file handles cannot be created without export ops */
+	if (!nop)
+		return false;

 	/*
 	 * If a connectable file handle was requested, we need to make sure that
···
  * always zero. So we can use these bits to encode the specific blocking
  * type.
  *
+ * Note that on architectures where this is not guaranteed, or for any
+ * unaligned lock, this tracking mechanism is silently skipped for that
+ * lock.
+ *
  * Type encoding:
  * 00 - Blocked on mutex (BLOCKER_TYPE_MUTEX)
  * 01 - Blocked on semaphore (BLOCKER_TYPE_SEM)
···
 	 * If the lock pointer matches the BLOCKER_TYPE_MASK, return
 	 * without writing anything.
 	 */
-	if (WARN_ON_ONCE(lock_ptr & BLOCKER_TYPE_MASK))
+	if (lock_ptr & BLOCKER_TYPE_MASK)
 		return;

 	WRITE_ONCE(current->blocker, lock_ptr | type);
···
 static inline void hung_task_clear_blocker(void)
 {
-	WARN_ON_ONCE(!READ_ONCE(current->blocker));
-
 	WRITE_ONCE(current->blocker, 0UL);
 }
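The hung_task hunk above relies on a common kernel trick: lock objects are normally at least 4-byte aligned, so the two low bits of their address are free to carry a small tag (here, the blocker type). A minimal userspace sketch of the encode/decode pair follows; the helper names `blocker_encode`/`blocker_decode` are invented for the example, and, as in the patch, an unaligned pointer simply opts out of tracking rather than warning.

```c
#include <assert.h>
#include <stdint.h>

#define BLOCKER_TYPE_MUTEX	0x0UL
#define BLOCKER_TYPE_SEM	0x1UL
#define BLOCKER_TYPE_MASK	0x3UL

/*
 * Pack a blocker type into the (assumed-zero) low bits of the lock
 * address. Returns 0, meaning "no tracking", if the address already has
 * low bits set, mirroring the silent skip in the patched kernel code.
 */
static uintptr_t blocker_encode(const void *lock, uintptr_t type)
{
	uintptr_t p = (uintptr_t)lock;

	if (p & BLOCKER_TYPE_MASK)
		return 0;
	return p | type;
}

/* Split a packed value back into the lock address and the type tag. */
static const void *blocker_decode(uintptr_t blocker, uintptr_t *type)
{
	*type = blocker & BLOCKER_TYPE_MASK;
	return (const void *)(blocker & ~BLOCKER_TYPE_MASK);
}
```

Dropping the `WARN_ON_ONCE` in the kernel version is what makes this safe on architectures (or for packed structures) where the alignment guarantee does not hold: the tag is only ever applied when the low bits are verifiably zero.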
···
 	void *netfs;
 #endif

+	unsigned short		retrans;
 	int			pnfs_error;
 	int			error;		/* merge with pnfs_error */
 	unsigned int		good_bytes;	/* boundary of good data */
···

 #define to_rpmb_dev(x)		container_of((x), struct rpmb_dev, dev)

+/**
+ * struct rpmb_frame - RPMB frame structure for authenticated access
+ *
+ * @stuff        : stuff bytes, a padding/reserved area of 196 bytes at the
+ *                 beginning of the RPMB frame. They don't carry meaningful
+ *                 data but are required to make the frame exactly 512 bytes.
+ * @key_mac      : The authentication key or the message authentication
+ *                 code (MAC) depending on the request/response type.
+ *                 The MAC will be delivered in the last (or the only)
+ *                 block of data.
+ * @data         : Data to be written or read by signed access.
+ * @nonce        : Random number generated by the host for the requests
+ *                 and copied to the response by the RPMB engine.
+ * @write_counter: Counter value for the total amount of the successful
+ *                 authenticated data write requests made by the host.
+ * @addr         : Address of the data to be programmed to or read
+ *                 from the RPMB. Address is the serial number of
+ *                 the accessed block (half sector 256B).
+ * @block_count  : Number of blocks (half sectors, 256B) requested to be
+ *                 read/programmed.
+ * @result       : Includes information about the status of the write counter
+ *                 (valid, expired) and result of the access made to the RPMB.
+ * @req_resp     : Defines the type of request and response to/from the memory.
+ *
+ * The stuff bytes and big-endian properties are modeled to fit to the spec.
+ */
+struct rpmb_frame {
+	u8 stuff[196];
+	u8 key_mac[32];
+	u8 data[256];
+	u8 nonce[16];
+	__be32 write_counter;
+	__be16 addr;
+	__be16 block_count;
+	__be16 result;
+	__be16 req_resp;
+};
+
+#define RPMB_PROGRAM_KEY	0x1	/* Program RPMB Authentication Key */
+#define RPMB_GET_WRITE_COUNTER	0x2	/* Read RPMB write counter */
+#define RPMB_WRITE_DATA		0x3	/* Write data to RPMB partition */
+#define RPMB_READ_DATA		0x4	/* Read data from RPMB partition */
+#define RPMB_RESULT_READ	0x5	/* Read result request (Internal) */
+
 #if IS_ENABLED(CONFIG_RPMB)
 struct rpmb_dev *rpmb_dev_get(struct rpmb_dev *rdev);
 void rpmb_dev_put(struct rpmb_dev *rdev);
include/linux/skbuff.h (+3)
@@ -4204,6 +4204,9 @@
 				  struct sk_buff_head *sk_queue,
 				  unsigned int flags, int *off, int *err);
 struct sk_buff *skb_recv_datagram(struct sock *sk, unsigned int flags, int *err);
+__poll_t datagram_poll_queue(struct file *file, struct socket *sock,
+			     struct poll_table_struct *wait,
+			     struct sk_buff_head *rcv_queue);
 __poll_t datagram_poll(struct file *file, struct socket *sock,
 		       struct poll_table_struct *wait);
 int skb_copy_datagram_iter(const struct sk_buff *from, int offset,
include/linux/virtio_net.h (+4)
@@ -401,6 +401,10 @@
 	if (!tnl_hdr_negotiated)
 		return -EINVAL;
 
+	vhdr->hash_hdr.hash_value = 0;
+	vhdr->hash_hdr.hash_report = 0;
+	vhdr->hash_hdr.padding = 0;
+
 	/* Let the basic parsing deal with plain GSO features. */
 	skb_shinfo(skb)->gso_type &= ~tnl_gso_type;
 	ret = virtio_net_hdr_from_skb(skb, hdr, true, false, vlan_hlen);
@@ -1555,27 +1555,6 @@
 	__u32 userq_num_slots;
 };
 
-/* GFX metadata BO sizes and alignment info (in bytes) */
-struct drm_amdgpu_info_uq_fw_areas_gfx {
-	/* shadow area size */
-	__u32 shadow_size;
-	/* shadow area base virtual mem alignment */
-	__u32 shadow_alignment;
-	/* context save area size */
-	__u32 csa_size;
-	/* context save area base virtual mem alignment */
-	__u32 csa_alignment;
-};
-
-/* IP specific fw related information used in the
- * subquery AMDGPU_INFO_UQ_FW_AREAS
- */
-struct drm_amdgpu_info_uq_fw_areas {
-	union {
-		struct drm_amdgpu_info_uq_fw_areas_gfx gfx;
-	};
-};
-
 struct drm_amdgpu_info_num_handles {
 	/** Max handles as supported by firmware for UVD */
 	__u32 uvd_max_handles;
@@ -421,13 +421,6 @@
 	if (unlikely(ret))
 		return ret;
 
-	/* nothing to do, but copy params back */
-	if (p.sq_entries == ctx->sq_entries && p.cq_entries == ctx->cq_entries) {
-		if (copy_to_user(arg, &p, sizeof(p)))
-			return -EFAULT;
-		return 0;
-	}
-
 	size = rings_size(p.flags, p.sq_entries, p.cq_entries,
 			  &sq_array_offset);
 	if (size == SIZE_MAX)
@@ -606,6 +613,7 @@
 	if (ret)
 		return ret;
 	if (copy_to_user(rd_uptr, &rd, sizeof(rd))) {
+		guard(mutex)(&ctx->mmap_lock);
 		io_free_region(ctx, &ctx->param_region);
 		return -EFAULT;
 	}
io_uring/rw.c (+6 -2)
@@ -542,7 +542,7 @@
 {
 	if (res == req->cqe.res)
 		return;
-	if (res == -EAGAIN && io_rw_should_reissue(req)) {
+	if ((res == -EOPNOTSUPP || res == -EAGAIN) && io_rw_should_reissue(req)) {
 		req->flags |= REQ_F_REISSUE | REQ_F_BL_NO_RECYCLE;
 	} else {
 		req_set_fail(req);
@@ -655,13 +655,17 @@
 	if (ret >= 0 && req->flags & REQ_F_CUR_POS)
 		req->file->f_pos = rw->kiocb.ki_pos;
 	if (ret >= 0 && !(req->ctx->flags & IORING_SETUP_IOPOLL)) {
+		u32 cflags = 0;
+
 		__io_complete_rw_common(req, ret);
 		/*
 		 * Safe to call io_end from here as we're inline
 		 * from the submission path.
 		 */
 		io_req_io_end(req);
-		io_req_set_res(req, final_ret, io_put_kbuf(req, ret, sel->buf_list));
+		if (sel)
+			cflags = io_put_kbuf(req, ret, sel->buf_list);
+		io_req_set_res(req, final_ret, cflags);
 		io_req_rw_cleanup(req, issue_flags);
 		return IOU_COMPLETE;
 	} else {
kernel/bpf/helpers.c (+14 -11)
@@ -1215,13 +1215,20 @@
 	rcu_read_unlock_trace();
 }
 
+static void bpf_async_cb_rcu_free(struct rcu_head *rcu)
+{
+	struct bpf_async_cb *cb = container_of(rcu, struct bpf_async_cb, rcu);
+
+	kfree_nolock(cb);
+}
+
 static void bpf_wq_delete_work(struct work_struct *work)
 {
 	struct bpf_work *w = container_of(work, struct bpf_work, delete_work);
 
 	cancel_work_sync(&w->work);
 
-	kfree_rcu(w, cb.rcu);
+	call_rcu(&w->cb.rcu, bpf_async_cb_rcu_free);
 }
 
 static void bpf_timer_delete_work(struct work_struct *work)
@@ -1237,13 +1230,13 @@
 
 	/* Cancel the timer and wait for callback to complete if it was running.
 	 * If hrtimer_cancel() can be safely called it's safe to call
-	 * kfree_rcu(t) right after for both preallocated and non-preallocated
+	 * call_rcu() right after for both preallocated and non-preallocated
 	 * maps. The async->cb = NULL was already done and no code path can see
 	 * address 't' anymore. Timer if armed for existing bpf_hrtimer before
 	 * bpf_timer_cancel_and_free will have been cancelled.
 	 */
 	hrtimer_cancel(&t->timer);
-	kfree_rcu(t, cb.rcu);
+	call_rcu(&t->cb.rcu, bpf_async_cb_rcu_free);
 }
 
 static int __bpf_async_init(struct bpf_async_kern *async, struct bpf_map *map, u64 flags,
@@ -1277,11 +1270,7 @@
 		goto out;
 	}
 
-	/* Allocate via bpf_map_kmalloc_node() for memcg accounting. Until
-	 * kmalloc_nolock() is available, avoid locking issues by using
-	 * __GFP_HIGH (GFP_ATOMIC & ~__GFP_RECLAIM).
-	 */
-	cb = bpf_map_kmalloc_node(map, size, __GFP_HIGH, map->numa_node);
+	cb = bpf_map_kmalloc_nolock(map, size, 0, map->numa_node);
 	if (!cb) {
 		ret = -ENOMEM;
 		goto out;
@@ -1318,7 +1315,7 @@
 	 * or pinned in bpffs.
 	 */
 		WRITE_ONCE(async->cb, NULL);
-		kfree(cb);
+		kfree_nolock(cb);
 		ret = -EPERM;
 	}
 out:
@@ -1583,7 +1580,7 @@
 	 * timer _before_ calling us, such that failing to cancel it here will
 	 * cause it to possibly use struct hrtimer after freeing bpf_hrtimer.
 	 * Therefore, we _need_ to cancel any outstanding timers before we do
-	 * kfree_rcu, even though no more timers can be armed.
+	 * call_rcu, even though no more timers can be armed.
 	 *
 	 * Moreover, we need to schedule work even if timer does not belong to
 	 * the calling callback_fn, as on two different CPUs, we can end up in a
@@ -1610,7 +1607,7 @@
 		 * completion.
 		 */
 		if (hrtimer_try_to_cancel(&t->timer) >= 0)
-			kfree_rcu(t, cb.rcu);
+			call_rcu(&t->cb.rcu, bpf_async_cb_rcu_free);
 		else
 			queue_work(system_dfl_wq, &t->cb.delete_work);
 	} else {
@@ -5892,7 +5892,7 @@
 	 * if the parent has to be frozen, the child has too.
 	 */
 	cgrp->freezer.e_freeze = parent->freezer.e_freeze;
-	seqcount_init(&cgrp->freezer.freeze_seq);
+	seqcount_spinlock_init(&cgrp->freezer.freeze_seq, &css_set_lock);
 	if (cgrp->freezer.e_freeze) {
 		/*
 		 * Set the CGRP_FREEZE flag, so when a process will be
@@ -9403,7 +9403,7 @@
 		flags |= MAP_HUGETLB;
 
 	if (file) {
-		struct inode *inode;
+		const struct inode *inode;
 		dev_t dev;
 
 		buf = kmalloc(PATH_MAX, GFP_KERNEL);
@@ -9416,12 +9416,12 @@
 		 * need to add enough zero bytes after the string to handle
 		 * the 64bit alignment we do later.
 		 */
-		name = file_path(file, buf, PATH_MAX - sizeof(u64));
+		name = d_path(file_user_path(file), buf, PATH_MAX - sizeof(u64));
 		if (IS_ERR(name)) {
 			name = "//toolong";
 			goto cpy_name;
 		}
-		inode = file_inode(vma->vm_file);
+		inode = file_user_inode(vma->vm_file);
 		dev = inode->i_sb->s_dev;
 		ino = inode->i_ino;
 		gen = inode->i_generation;
@@ -9492,7 +9492,7 @@
 	if (!filter->path.dentry)
 		return false;
 
-	if (d_inode(filter->path.dentry) != file_inode(file))
+	if (d_inode(filter->path.dentry) != file_user_inode(file))
 		return false;
 
 	if (filter->offset > offset + size)
kernel/events/uprobes.c (+3 -3)
@@ -2765,15 +2765,15 @@
 
 	handler_chain(uprobe, regs);
 
+	/* Try to optimize after first hit. */
+	arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
+
 	/*
 	 * If user decided to take execution elsewhere, it makes little sense
 	 * to execute the original instruction, so let's skip it.
 	 */
 	if (instruction_pointer(regs) != bp_vaddr)
 		goto out;
-
-	/* Try to optimize after first hit. */
-	arch_uprobe_optimize(&uprobe->arch, bp_vaddr);
 
 	if (arch_uprobe_skip_sstep(&uprobe->arch, regs))
 		goto out;
kernel/sched/core.c (+2)
@@ -8571,10 +8571,12 @@
 	sched_tick_stop(cpu);
 
 	rq_lock_irqsave(rq, &rf);
+	update_rq_clock(rq);
 	if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
 		WARN(true, "Dying CPU not properly vacated!");
 		dump_rq_tasks(rq, KERN_WARNING);
 	}
+	dl_server_stop(&rq->fair_server);
 	rq_unlock_irqrestore(rq, &rf);
 
 	calc_load_migrate(rq);
kernel/sched/deadline.c (+3)
@@ -1582,6 +1582,9 @@
 	if (!dl_server(dl_se) || dl_se->dl_server_active)
 		return;
 
+	if (WARN_ON_ONCE(!cpu_online(cpu_of(rq))))
+		return;
+
 	dl_se->dl_server_active = 1;
 	enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
 	if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
kernel/sched/fair.c (+13 -13)
@@ -8920,21 +8920,21 @@
 	return p;
 
 idle:
-	if (!rf)
-		return NULL;
+	if (rf) {
+		new_tasks = sched_balance_newidle(rq, rf);
 
-	new_tasks = sched_balance_newidle(rq, rf);
+		/*
+		 * Because sched_balance_newidle() releases (and re-acquires)
+		 * rq->lock, it is possible for any higher priority task to
+		 * appear. In that case we must re-start the pick_next_entity()
+		 * loop.
+		 */
+		if (new_tasks < 0)
+			return RETRY_TASK;
 
-	/*
-	 * Because sched_balance_newidle() releases (and re-acquires) rq->lock, it is
-	 * possible for any higher priority task to appear. In that case we
-	 * must re-start the pick_next_entity() loop.
-	 */
-	if (new_tasks < 0)
-		return RETRY_TASK;
-
-	if (new_tasks > 0)
-		goto again;
+		if (new_tasks > 0)
+			goto again;
+	}
 
 	/*
 	 * rq is about to be idle, check if we need to update the
@@ -1473,13 +1473,14 @@
 	if (IS_ERR(param_ctx))
 		return PTR_ERR(param_ctx);
 	test_ctx = damon_new_ctx();
+	if (!test_ctx)
+		return -ENOMEM;
 	err = damon_commit_ctx(test_ctx, param_ctx);
-	if (err) {
-		damon_destroy_ctx(test_ctx);
+	if (err)
 		goto out;
-	}
 	err = damon_commit_ctx(kdamond->damon_ctx, param_ctx);
 out:
+	damon_destroy_ctx(test_ctx);
 	damon_destroy_ctx(param_ctx);
 	return err;
 }
mm/huge_memory.c (+3)
@@ -4109,6 +4109,9 @@
 	if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
 		return false;
 
+	if (folio_contain_hwpoisoned_page(folio))
+		return false;
+
 	for (i = 0; i < folio_nr_pages(folio); i++) {
 		if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
 			if (++num_zero_pages > khugepaged_max_ptes_none)
mm/hugetlb.c (+2 -3)
@@ -7614,13 +7614,12 @@
 	p4d_t *p4d = p4d_offset(pgd, addr);
 	pud_t *pud = pud_offset(p4d, addr);
 
-	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
-	hugetlb_vma_assert_locked(vma);
 	if (sz != PMD_SIZE)
 		return 0;
 	if (!ptdesc_pmd_is_shared(virt_to_ptdesc(ptep)))
 		return 0;
-
+	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+	hugetlb_vma_assert_locked(vma);
 	pud_clear(pud);
 	/*
 	 * Once our caller drops the rmap lock, some other process might be
@@ -1237,10 +1237,10 @@
 }
 
 /*
- * Perform final tasks for MADV_DONTUNMAP operation, clearing mlock() and
- * account flags on remaining VMA by convention (it cannot be mlock()'d any
- * longer, as pages in range are no longer mapped), and removing anon_vma_chain
- * links from it (if the entire VMA was copied over).
+ * Perform final tasks for MADV_DONTUNMAP operation, clearing mlock() flag on
+ * remaining VMA by convention (it cannot be mlock()'d any longer, as pages in
+ * range are no longer mapped), and removing anon_vma_chain links from it if the
+ * entire VMA was copied over.
 */
static void dontunmap_complete(struct vma_remap_struct *vrm,
			       struct vm_area_struct *new_vma)
@@ -1250,11 +1250,8 @@
 	unsigned long old_start = vrm->vma->vm_start;
 	unsigned long old_end = vrm->vma->vm_end;
 
-	/*
-	 * We always clear VM_LOCKED[ONFAULT] | VM_ACCOUNT on the old
-	 * vma.
-	 */
-	vm_flags_clear(vrm->vma, VM_LOCKED_MASK | VM_ACCOUNT);
+	/* We always clear VM_LOCKED[ONFAULT] on the old VMA. */
+	vm_flags_clear(vrm->vma, VM_LOCKED_MASK);
 
 	/*
 	 * anon_vma links of the old vma is no longer needed after its page
mm/page_owner.c (+3)
@@ -168,6 +168,9 @@
 	unsigned long flags;
 	struct stack *stack;
 
+	if (!gfpflags_allow_spinning(gfp_mask))
+		return;
+
 	set_current_in_page_owner();
 	stack = kmalloc(sizeof(*stack), gfp_nested_mask(gfp_mask));
 	if (!stack) {
mm/slub.c (+12 -4)
@@ -2170,8 +2170,15 @@
 	struct slabobj_ext *obj_exts;
 
 	obj_exts = slab_obj_exts(slab);
-	if (!obj_exts)
+	if (!obj_exts) {
+		/*
+		 * If obj_exts allocation failed, slab->obj_exts is set to
+		 * OBJEXTS_ALLOC_FAIL. In this case, we end up here and should
+		 * clear the flag.
+		 */
+		slab->obj_exts = 0;
 		return;
+	}
 
 	/*
 	 * obj_exts was created with __GFP_NO_OBJ_EXT flag, therefore its
@@ -6450,15 +6443,16 @@
 		slab = virt_to_slab(x);
 		s = slab->slab_cache;
 
+		/* Point 'x' back to the beginning of allocated object */
+		x -= s->offset;
+
 		/*
 		 * We used freepointer in 'x' to link 'x' into df->objects.
 		 * Clear it to NULL to avoid false positive detection
 		 * of "Freepointer corruption".
 		 */
-		*(void **)x = NULL;
+		set_freepointer(s, x, NULL);
 
-		/* Point 'x' back to the beginning of allocated object */
-		x -= s->offset;
 		__slab_free(s, slab, x, x, 1, _THIS_IP_);
 	}
 
net/bpf/test_run.c (+7 -18)
@@ -29,7 +29,6 @@
 #include <trace/events/bpf_test_run.h>
 
 struct bpf_test_timer {
-	enum { NO_PREEMPT, NO_MIGRATE } mode;
 	u32 i;
 	u64 time_start, time_spent;
 };
@@ -36,12 +37,7 @@
 static void bpf_test_timer_enter(struct bpf_test_timer *t)
 	__acquires(rcu)
 {
-	rcu_read_lock();
-	if (t->mode == NO_PREEMPT)
-		preempt_disable();
-	else
-		migrate_disable();
-
+	rcu_read_lock_dont_migrate();
 	t->time_start = ktime_get_ns();
 }
 
@@ -44,12 +50,7 @@
 	__releases(rcu)
 {
 	t->time_start = 0;
-
-	if (t->mode == NO_PREEMPT)
-		preempt_enable();
-	else
-		migrate_enable();
-	rcu_read_unlock();
+	rcu_read_unlock_migrate();
 }
 
 static bool bpf_test_timer_continue(struct bpf_test_timer *t, int iterations,
@@ -364,6 +375,6 @@
 {
 	struct xdp_test_data xdp = { .batch_size = batch_size };
-	struct bpf_test_timer t = { .mode = NO_MIGRATE };
+	struct bpf_test_timer t = {};
 	int ret;
 
 	if (!repeat)
@@ -393,7 +404,7 @@
 	struct bpf_prog_array_item item = {.prog = prog};
 	struct bpf_run_ctx *old_ctx;
 	struct bpf_cg_run_ctx run_ctx;
-	struct bpf_test_timer t = { NO_MIGRATE };
+	struct bpf_test_timer t = {};
 	enum bpf_cgroup_storage_type stype;
 	int ret;
 
@@ -1258,7 +1269,7 @@
 		goto free_ctx;
 
 	if (kattr->test.data_size_in - meta_sz < ETH_HLEN)
-		return -EINVAL;
+		goto free_ctx;
 
 	data = bpf_test_init(kattr, linear_sz, max_linear_sz, headroom, tailroom);
 	if (IS_ERR(data)) {
@@ -1366,7 +1377,7 @@
 			      const union bpf_attr *kattr,
 			      union bpf_attr __user *uattr)
 {
-	struct bpf_test_timer t = { NO_PREEMPT };
+	struct bpf_test_timer t = {};
 	u32 size = kattr->test.data_size_in;
 	struct bpf_flow_dissector ctx = {};
 	u32 repeat = kattr->test.repeat;
@@ -1434,7 +1445,7 @@
 int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
 				union bpf_attr __user *uattr)
 {
-	struct bpf_test_timer t = { NO_PREEMPT };
+	struct bpf_test_timer t = {};
 	struct bpf_prog_array *progs = NULL;
 	struct bpf_sk_lookup_kern ctx = {};
 	u32 repeat = kattr->test.repeat;
net/core/datagram.c (+34 -10)
@@ -920,21 +920,22 @@
 EXPORT_SYMBOL(skb_copy_and_csum_datagram_msg);
 
 /**
- *	datagram_poll - generic datagram poll
+ *	datagram_poll_queue - same as datagram_poll, but on a specific receive
+ *	                      queue
 *	@file: file struct
 *	@sock: socket
 *	@wait: poll table
+ *	@rcv_queue: receive queue to poll
 *
- *	Datagram poll: Again totally generic. This also handles
- *	sequenced packet sockets providing the socket receive queue
- *	is only ever holding data ready to receive.
+ *	Performs polling on the given receive queue, handling shutdown, error,
+ *	and connection state. This is useful for protocols that deliver
+ *	userspace-bound packets through a custom queue instead of
+ *	sk->sk_receive_queue.
 *
- *	Note: when you *don't* use this routine for this protocol,
- *	and you use a different write policy from sock_writeable()
- *	then please supply your own write_space callback.
+ *	Return: poll bitmask indicating the socket's current state
 */
-__poll_t datagram_poll(struct file *file, struct socket *sock,
-		       poll_table *wait)
+__poll_t datagram_poll_queue(struct file *file, struct socket *sock,
+			     poll_table *wait, struct sk_buff_head *rcv_queue)
{
 	struct sock *sk = sock->sk;
 	__poll_t mask;
@@ -957,7 +956,7 @@
 		mask |= EPOLLHUP;
 
 	/* readable? */
-	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+	if (!skb_queue_empty_lockless(rcv_queue))
 		mask |= EPOLLIN | EPOLLRDNORM;
 
 	/* Connection-based need to check for termination and startup */
@@ -978,5 +977,28 @@
 		sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
 
 	return mask;
+}
+EXPORT_SYMBOL(datagram_poll_queue);
+
+/**
+ *	datagram_poll - generic datagram poll
+ *	@file: file struct
+ *	@sock: socket
+ *	@wait: poll table
+ *
+ *	Datagram poll: Again totally generic. This also handles
+ *	sequenced packet sockets providing the socket receive queue
+ *	is only ever holding data ready to receive.
+ *
+ *	Note: when you *don't* use this routine for this protocol,
+ *	and you use a different write policy from sock_writeable()
+ *	then please supply your own write_space callback.
+ *
+ *	Return: poll bitmask indicating the socket's current state
+ */
+__poll_t datagram_poll(struct file *file, struct socket *sock, poll_table *wait)
+{
+	return datagram_poll_queue(file, sock, wait,
+				   &sock->sk->sk_receive_queue);
}
EXPORT_SYMBOL(datagram_poll);
@@ -43,12 +43,11 @@
 	if (skb_queue_len(&cell->napi_skbs) == 1)
 		napi_schedule(&cell->napi);
 
-	if (have_bh_lock)
-		local_unlock_nested_bh(&gcells->cells->bh_lock);
-
 	res = NET_RX_SUCCESS;
 
unlock:
+	if (have_bh_lock)
+		local_unlock_nested_bh(&gcells->cells->bh_lock);
 	rcu_read_unlock();
 	return res;
}
net/core/rtnetlink.c (-3)
@@ -4715,9 +4715,6 @@
 	int err;
 	u16 vid;
 
-	if (!netlink_capable(skb, CAP_NET_ADMIN))
-		return -EPERM;
-
 	if (!del_bulk) {
 		err = nlmsg_parse_deprecated(nlh, sizeof(*ndm), tb, NDA_MAX,
 					     NULL, extack);
net/hsr/hsr_netlink.c (+7 -1)
@@ -34,12 +34,18 @@
 		       struct netlink_ext_ack *extack)
{
 	struct net *link_net = rtnl_newlink_link_net(params);
+	struct net_device *link[2], *interlink = NULL;
 	struct nlattr **data = params->data;
 	enum hsr_version proto_version;
 	unsigned char multicast_spec;
 	u8 proto = HSR_PROTOCOL_HSR;
 
-	struct net_device *link[2], *interlink = NULL;
+	if (!net_eq(link_net, dev_net(dev))) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "HSR slaves/interlink must be on the same net namespace than HSR link");
+		return -EINVAL;
+	}
+
 	if (!data) {
 		NL_SET_ERR_MSG_MOD(extack, "No slave devices specified");
 		return -EINVAL;
net/mptcp/pm_kernel.c (+6)
@@ -370,6 +370,10 @@
 	}
 
subflow:
+	/* No need to try establishing subflows to remote id0 if not allowed */
+	if (mptcp_pm_add_addr_c_flag_case(msk))
+		goto exit;
+
 	/* check if should create a new subflow */
 	while (msk->pm.local_addr_used < endp_subflow_max &&
 	       msk->pm.extra_subflows < limit_extra_subflows) {
@@ -405,6 +401,8 @@
 		__mptcp_subflow_connect(sk, &local, &addrs[i]);
 		spin_lock_bh(&msk->pm.lock);
 	}
+
+exit:
 	mptcp_pm_nl_check_work_pending(msk);
}
 
net/sctp/inqueue.c (+7 -6)
@@ -169,13 +169,14 @@
 		chunk->head_skb = chunk->skb;
 
 		/* skbs with "cover letter" */
-		if (chunk->head_skb && chunk->skb->data_len == chunk->skb->len)
+		if (chunk->head_skb && chunk->skb->data_len == chunk->skb->len) {
+			if (WARN_ON(!skb_shinfo(chunk->skb)->frag_list)) {
+				__SCTP_INC_STATS(dev_net(chunk->skb->dev),
+						 SCTP_MIB_IN_PKT_DISCARDS);
+				sctp_chunk_free(chunk);
+				goto next_chunk;
+			}
 			chunk->skb = skb_shinfo(chunk->skb)->frag_list;
-
-		if (WARN_ON(!chunk->skb)) {
-			__SCTP_INC_STATS(dev_net(chunk->skb->dev), SCTP_MIB_IN_PKT_DISCARDS);
-			sctp_chunk_free(chunk);
-			goto next_chunk;
 		}
 	}
 
net/smc/smc_inet.c (-13)
@@ -56,7 +56,6 @@
 	.protocol = IPPROTO_SMC,
 	.prot = &smc_inet_prot,
 	.ops = &smc_inet_stream_ops,
-	.flags = INET_PROTOSW_ICSK,
};
 
#if IS_ENABLED(CONFIG_IPV6)
@@ -103,16 +104,7 @@
 	.protocol = IPPROTO_SMC,
 	.prot = &smc_inet6_prot,
 	.ops = &smc_inet6_stream_ops,
-	.flags = INET_PROTOSW_ICSK,
};
#endif /* CONFIG_IPV6 */
-
-static unsigned int smc_sync_mss(struct sock *sk, u32 pmtu)
-{
-	/* No need pass it through to clcsock, mss can always be set by
-	 * sock_create_kern or smc_setsockopt.
-	 */
-	return 0;
-}
 
static int smc_inet_init_sock(struct sock *sk)
@@ -122,9 +112,6 @@
 
 	/* init common smc sock */
 	smc_sk_init(net, sk, IPPROTO_SMC);
-
-	inet_csk(sk)->icsk_sync_mss = smc_sync_mss;
-
 	/* create clcsock */
 	return smc_create_clcsk(net, sk, sk->sk_family);
}
net/vmw_vsock/af_vsock.c (+19 -19)
@@ -487,12 +487,26 @@
 		goto err;
 	}
 
-	if (vsk->transport) {
-		if (vsk->transport == new_transport) {
-			ret = 0;
-			goto err;
-		}
+	if (vsk->transport && vsk->transport == new_transport) {
+		ret = 0;
+		goto err;
+	}
 
+	/* We increase the module refcnt to prevent the transport unloading
+	 * while there are open sockets assigned to it.
+	 */
+	if (!new_transport || !try_module_get(new_transport->module)) {
+		ret = -ENODEV;
+		goto err;
+	}
+
+	/* It's safe to release the mutex after a successful try_module_get().
+	 * Whichever transport `new_transport` points at, it won't go away until
+	 * the last module_put() below or in vsock_deassign_transport().
+	 */
+	mutex_unlock(&vsock_register_mutex);
+
+	if (vsk->transport) {
 		/* transport->release() must be called with sock lock acquired.
 		 * This path can only be taken during vsock_connect(), where we
 		 * have already held the sock lock. In the other cases, this
@@ -525,20 +511,6 @@
 		sk->sk_state = TCP_CLOSE;
 		vsk->peer_shutdown = 0;
 	}
-
-	/* We increase the module refcnt to prevent the transport unloading
-	 * while there are open sockets assigned to it.
-	 */
-	if (!new_transport || !try_module_get(new_transport->module)) {
-		ret = -ENODEV;
-		goto err;
-	}
-
-	/* It's safe to release the mutex after a successful try_module_get().
-	 * Whichever transport `new_transport` points at, it won't go away until
-	 * the last module_put() below or in vsock_deassign_transport().
-	 */
-	mutex_unlock(&vsock_register_mutex);
 
 	if (sk->sk_type == SOCK_SEQPACKET) {
 		if (!new_transport->seqpacket_allow ||
@@ -167,7 +167,9 @@
        let ptr = if self.nbits <= BITS_PER_LONG {
            // SAFETY: Bitmap is represented inline.
            #[allow(unused_unsafe, reason = "Safe since Rust 1.92.0")]
-            unsafe { core::ptr::addr_of!(self.repr.bitmap) }
+            unsafe {
+                core::ptr::addr_of!(self.repr.bitmap)
+            }
        } else {
            // SAFETY: Bitmap is represented as array of `unsigned long`.
            unsafe { self.repr.ptr.as_ptr() }
@@ -186,7 +184,9 @@
        let ptr = if self.nbits <= BITS_PER_LONG {
            // SAFETY: Bitmap is represented inline.
            #[allow(unused_unsafe, reason = "Safe since Rust 1.92.0")]
-            unsafe { core::ptr::addr_of_mut!(self.repr.bitmap) }
+            unsafe {
+                core::ptr::addr_of_mut!(self.repr.bitmap)
+            }
        } else {
            // SAFETY: Bitmap is represented as array of `unsigned long`.
            unsafe { self.repr.ptr.as_ptr() }
rust/kernel/cpufreq.rs (+1 -2)
@@ -38,8 +38,7 @@
const CPUFREQ_NAME_LEN: usize = bindings::CPUFREQ_NAME_LEN as usize;
 
/// Default transition latency value in nanoseconds.
-pub const DEFAULT_TRANSITION_LATENCY_NS: u32 =
-    bindings::CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
+pub const DEFAULT_TRANSITION_LATENCY_NS: u32 = bindings::CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
 
/// CPU frequency driver flags.
pub mod flags {
sound/firewire/amdtp-stream.h (+1 -1)
@@ -32,7 +32,7 @@
 *	allows 5 times as large as IEC 61883-6 defines.
 * @CIP_HEADER_WITHOUT_EOH: Only for in-stream. CIP Header doesn't include
 *	valid EOH.
- * @CIP_NO_HEADERS: a lack of headers in packets
+ * @CIP_NO_HEADER: a lack of headers in packets
 * @CIP_UNALIGHED_DBC: Only for in-stream. The value of dbc is not alighed to
 *	the value of current SYT_INTERVAL; e.g. initial value is not zero.
 * @CIP_UNAWARE_SYT: For outgoing packet, the value in SYT field of CIP is 0xffff.
@@ -25,6 +25,26 @@
 	return labs(a - b) <= (a + b) / 100 * err;
}
 
+/*
+ * Checks if two given values differ by less than err% of their sum and assert
+ * with detailed debug info if not.
+ */
+static inline int values_close_report(long a, long b, int err)
+{
+	long diff = labs(a - b);
+	long limit = (a + b) / 100 * err;
+	double actual_err = (a + b) ? (100.0 * diff / (a + b)) : 0.0;
+	int close = diff <= limit;
+
+	if (!close)
+		fprintf(stderr,
+			"[FAIL] actual=%ld expected=%ld | diff=%ld | limit=%ld | "
+			"tolerance=%d%% | actual_error=%.2f%%\n",
+			a, b, diff, limit, err, actual_err);
+
+	return close;
+}
+
extern ssize_t read_text(const char *path, char *buf, size_t max_len);
extern ssize_t write_text(const char *path, char *buf, ssize_t len);
 
tools/testing/selftests/cgroup/test_cpu.c (+9 -9)
@@ -219,7 +219,7 @@
 	if (user_usec <= 0)
 		goto cleanup;
 
-	if (!values_close(usage_usec, expected_usage_usec, 1))
+	if (!values_close_report(usage_usec, expected_usage_usec, 1))
 		goto cleanup;
 
 	ret = KSFT_PASS;
@@ -291,7 +291,7 @@
 
 	user_usec = cg_read_key_long(cpucg, "cpu.stat", "user_usec");
 	nice_usec = cg_read_key_long(cpucg, "cpu.stat", "nice_usec");
-	if (!values_close(nice_usec, expected_nice_usec, 1))
+	if (!values_close_report(nice_usec, expected_nice_usec, 1))
 		goto cleanup;
 
 	ret = KSFT_PASS;
@@ -404,7 +404,7 @@
 			goto cleanup;
 
 		delta = children[i + 1].usage - children[i].usage;
-		if (!values_close(delta, children[0].usage, 35))
+		if (!values_close_report(delta, children[0].usage, 35))
 			goto cleanup;
 	}
 
@@ -444,7 +444,7 @@
 	int ret = KSFT_FAIL, i;
 
 	for (i = 0; i < num_children - 1; i++) {
-		if (!values_close(children[i + 1].usage, children[0].usage, 15))
+		if (!values_close_report(children[i + 1].usage, children[0].usage, 15))
 			goto cleanup;
 	}
 
@@ -573,16 +573,16 @@
 
 	nested_leaf_usage = leaf[1].usage + leaf[2].usage;
 	if (overprovisioned) {
-		if (!values_close(leaf[0].usage, nested_leaf_usage, 15))
+		if (!values_close_report(leaf[0].usage, nested_leaf_usage, 15))
 			goto cleanup;
-	} else if (!values_close(leaf[0].usage * 2, nested_leaf_usage, 15))
+	} else if (!values_close_report(leaf[0].usage * 2, nested_leaf_usage, 15))
 		goto cleanup;
 
 
 	child_usage = cg_read_key_long(child, "cpu.stat", "usage_usec");
 	if (child_usage <= 0)
 		goto cleanup;
-	if (!values_close(child_usage, nested_leaf_usage, 1))
+	if (!values_close_report(child_usage, nested_leaf_usage, 1))
 		goto cleanup;
 
 	ret = KSFT_PASS;
@@ -691,7 +691,7 @@
 	expected_usage_usec
 		= n_periods * quota_usec + MIN(remainder_usec, quota_usec);
 
-	if (!values_close(usage_usec, expected_usage_usec, 10))
+	if (!values_close_report(usage_usec, expected_usage_usec, 10))
 		goto cleanup;
 
 	ret = KSFT_PASS;
@@ -762,7 +762,7 @@
 	expected_usage_usec
 		= n_periods * quota_usec + MIN(remainder_usec, quota_usec);
 
-	if (!values_close(usage_usec, expected_usage_usec, 10))
+	if (!values_close_report(usage_usec, expected_usage_usec, 10))
 		goto cleanup;
 
 	ret = KSFT_PASS;
@@ -18,6 +18,13 @@
 
#include "test_util.h"
 
+sigjmp_buf expect_sigbus_jmpbuf;
+
+void __attribute__((used)) expect_sigbus_handler(int signum)
+{
+	siglongjmp(expect_sigbus_jmpbuf, 1);
+}
+
/*
 * Random number generator that is usable from guest code. This is the
 * Park-Miller LCG using standard constants.