Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.16-rc8).

Conflicts:

drivers/net/ethernet/microsoft/mana/gdma_main.c
9669ddda18fb ("net: mana: Fix warnings for missing export.h header inclusion")
755391121038 ("net: mana: Allocate MSI-X vectors dynamically")
https://lore.kernel.org/20250711130752.23023d98@canb.auug.org.au

Adjacent changes:

drivers/net/ethernet/ti/icssg/icssg_prueth.h
6e86fb73de0f ("net: ti: icssg-prueth: Fix buffer allocation for ICSSG")
ffe8a4909176 ("net: ti: icssg-prueth: Read firmware-names from device tree")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2497 -1273
-6
Documentation/devicetree/bindings/iio/adc/adi,ad7606.yaml
··· 223 223 - required: 224 224 - pwms 225 225 226 - - oneOf: 227 - - required: 228 - - interrupts 229 - - required: 230 - - io-backends 231 - 232 226 - if: 233 227 properties: 234 228 compatible:
+1 -1
Documentation/devicetree/bindings/iio/gyroscope/invensense,mpu3050.yaml
··· 21 21 vlogic-supply: true 22 22 23 23 interrupts: 24 - minItems: 1 24 + maxItems: 1 25 25 description: 26 26 Interrupt mapping for the trigger interrupt from the internal oscillator. 27 27
+1 -1
Documentation/hwmon/ina238.rst
··· 65 65 ------------------------------------ 66 66 67 67 ======================= ======================================================= 68 - energy1_input Energy measurement (mJ) 68 + energy1_input Energy measurement (uJ) 69 69 70 70 power1_input_highest Peak Power (uW) 71 71 ======================= =======================================================
+9 -2
Documentation/virt/kvm/api.rst
··· 2008 2008 be used as a vm ioctl to set the initial tsc frequency of subsequently 2009 2009 created vCPUs. 2010 2010 2011 + For TSC protected Confidential Computing (CoCo) VMs where TSC frequency 2012 + is configured once at VM scope and remains unchanged during VM's 2013 + lifetime, the vm ioctl should be used to configure the TSC frequency 2014 + and the vcpu ioctl is not supported. 2015 + 2016 + Example of such CoCo VMs: TDX guests. 2017 + 2011 2018 4.56 KVM_GET_TSC_KHZ 2012 2019 -------------------- 2013 2020 ··· 7237 7230 placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info`` 7238 7231 field of the union. 7239 7232 7240 - * ``TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT``: the guest has requested to 7241 - set up a notification interrupt for vector ``vector``. 7233 + * ``TDVMCALL_SETUP_EVENT_NOTIFY_INTERRUPT``: the guest has requested to 7234 + set up a notification interrupt for vector ``vector``. 7242 7235 7243 7236 KVM may add support for more values in the future that may cause a userspace 7244 7237 exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case,
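A minimal userspace sketch of the VM-scope usage described above, assuming vm_fd is a VM file descriptor obtained from KVM_CREATE_VM; error handling is elided and the helper name is illustrative:

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* For TSC-protected CoCo VMs (e.g. TDX guests) the frequency is
     * configured once at VM scope; the per-vCPU KVM_SET_TSC_KHZ ioctl
     * is not supported for such guests. */
    static int set_vm_tsc_khz(int vm_fd, unsigned long tsc_khz)
    {
            return ioctl(vm_fd, KVM_SET_TSC_KHZ, tsc_khz);
    }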
+88 -7
Documentation/virt/kvm/review-checklist.rst
··· 7 7 1. The patch must follow Documentation/process/coding-style.rst and 8 8 Documentation/process/submitting-patches.rst. 9 9 10 - 2. Patches should be against kvm.git master branch. 10 + 2. Patches should be against kvm.git master or next branches. 11 11 12 12 3. If the patch introduces or modifies a new userspace API: 13 13 - the API must be documented in Documentation/virt/kvm/api.rst ··· 18 18 5. New features must default to off (userspace should explicitly request them). 19 19 Performance improvements can and should default to on. 20 20 21 - 6. New cpu features should be exposed via KVM_GET_SUPPORTED_CPUID2 21 + 6. New cpu features should be exposed via KVM_GET_SUPPORTED_CPUID2, 22 + or its equivalent for non-x86 architectures 22 23 23 - 7. Emulator changes should be accompanied by unit tests for qemu-kvm.git 24 - kvm/test directory. 24 + 7. The feature should be testable (see below). 25 25 26 26 8. Changes should be vendor neutral when possible. Changes to common code 27 27 are better than duplicating changes to vendor code. ··· 36 36 11. New guest visible features must either be documented in a hardware manual 37 37 or be accompanied by documentation. 38 38 39 - 12. Features must be robust against reset and kexec - for example, shared 40 - host/guest memory must be unshared to prevent the host from writing to 41 - guest memory that the guest has not reserved for this purpose. 39 + Testing of KVM code 40 + ------------------- 41 + 42 + All features contributed to KVM, and in many cases bugfixes too, should be 43 + accompanied by some kind of tests and/or enablement in open source guests 44 + and VMMs. KVM is covered by multiple test suites: 45 + 46 + *Selftests* 47 + These are low level tests that allow granular testing of kernel APIs. 48 + This includes API failure scenarios, invoking APIs after specific 49 + guest instructions, and testing multiple calls to ``KVM_CREATE_VM`` 50 + within a single test. They are included in the kernel tree at 51 + ``tools/testing/selftests/kvm``. 52 + 53 + ``kvm-unit-tests`` 54 + A collection of small guests that test CPU and emulated device features 55 + from a guest's perspective. They run under QEMU or ``kvmtool``, and 56 + are generally not KVM-specific: they can be run with any accelerator 57 + that QEMU supports or even on bare metal, making it possible to compare 58 + behavior across hypervisors and processor families. 59 + 60 + Functional test suites 61 + Various sets of functional tests exist, such as QEMU's ``tests/functional`` 62 + suite and `avocado-vt <https://avocado-vt.readthedocs.io/en/latest/>`__. 63 + These typically involve running a full operating system in a virtual 64 + machine. 65 + 66 + The best testing approach depends on the feature's complexity and 67 + operation. Here are some examples and guidelines: 68 + 69 + New instructions (no new registers or APIs) 70 + The corresponding CPU features (if applicable) should be made available 71 + in QEMU. If the instructions require emulation support or other code in 72 + KVM, it is worth adding coverage to ``kvm-unit-tests`` or selftests; 73 + the latter can be a better choice if the instructions relate to an API 74 + that already has good selftest coverage. 75 + 76 + New hardware features (new registers, no new APIs) 77 + These should be tested via ``kvm-unit-tests``; this more or less implies 78 + supporting them in QEMU and/or ``kvmtool``. In some cases selftests 79 + can be used instead, similar to the previous case, or specifically to 80 + test corner cases in guest state save/restore. 81 + 82 + Bug fixes and performance improvements 83 + These usually do not introduce new APIs, but it's worth sharing 84 + any benchmarks and tests that will validate your contribution, 85 + ideally in the form of regression tests. Tests and benchmarks 86 + can be included in either ``kvm-unit-tests`` or selftests, depending 87 + on the specifics of your change. Selftests are especially useful for 88 + regression tests because they are included directly in Linux's tree. 89 + 90 + Large scale internal changes 91 + While it's difficult to provide a single policy, you should ensure that 92 + the changed code is covered by either ``kvm-unit-tests`` or selftests. 93 + In some cases the affected code is run for any guest and functional 94 + tests suffice. Explain your testing process in the cover letter, 95 + as that can help identify gaps in existing test suites. 96 + 97 + New APIs 98 + It is important to demonstrate your use case. This can be as simple as 99 + explaining that the feature is already in use on bare metal, or it can be 100 + a proof-of-concept implementation in userspace. The latter need not be 101 + open source, though that is of course preferable for easier testing. 102 + Selftests should test corner cases of the APIs, and should also cover 103 + basic host and guest operation if no open source VMM uses the feature. 104 + 105 + Bigger features, usually spanning host and guest 106 + These should be supported by Linux guests, with limited exceptions for 107 + Hyper-V features that are testable on Windows guests. It is strongly 108 + suggested that the feature be usable with an open source host VMM, such 109 + as at least one of QEMU or crosvm, and guest firmware. Selftests should 110 + test at least API error cases. Guest operation can be covered by 111 + either selftests or ``kvm-unit-tests`` (this is especially important for 112 + paravirtualized and Windows-only features). Strong selftest coverage 113 + can also be a replacement for implementation in an open source VMM, 114 + but this is generally not recommended. 115 + 116 + Following the above suggestions for testing in selftests and 117 + ``kvm-unit-tests`` will make it easier for the maintainers to review 118 + and accept your code. In fact, even before you contribute your changes 119 + upstream it will make it easier for you to develop for KVM. 120 + 121 + Of course, the KVM maintainers reserve the right to require more tests, 122 + though they may also waive the requirement from time to time.
+6 -4
MAINTAINERS
··· 5581 5581 F: drivers/char/ 5582 5582 F: drivers/misc/ 5583 5583 F: include/linux/miscdevice.h 5584 + F: rust/kernel/miscdevice.rs 5584 5585 F: samples/rust/rust_misc_device.rs 5585 5586 X: drivers/char/agp/ 5586 5587 X: drivers/char/hw_random/ ··· 12201 12200 F: include/uapi/linux/idxd.h 12202 12201 12203 12202 INTEL IN FIELD SCAN (IFS) DEVICE 12204 - M: Jithu Joseph <jithu.joseph@intel.com> 12203 + M: Tony Luck <tony.luck@intel.com> 12205 12204 R: Ashok Raj <ashok.raj.linux@gmail.com> 12206 - R: Tony Luck <tony.luck@intel.com> 12207 12205 S: Maintained 12208 12206 F: drivers/platform/x86/intel/ifs 12209 12207 F: include/trace/events/intel_ifs.h ··· 12542 12542 F: drivers/net/wireless/intel/iwlwifi/ 12543 12543 12544 12544 INTEL WMI SLIM BOOTLOADER (SBL) FIRMWARE UPDATE DRIVER 12545 - M: Jithu Joseph <jithu.joseph@intel.com> 12546 - S: Maintained 12545 + S: Orphan 12547 12546 W: https://slimbootloader.github.io/security/firmware-update.html 12548 12547 F: drivers/platform/x86/intel/wmi/sbl-fw-update.c 12549 12548 ··· 17404 17405 F: include/linux/framer/framer-provider.h 17405 17406 F: include/linux/framer/framer.h 17406 17407 F: include/linux/in.h 17408 + F: include/linux/in6.h 17407 17409 F: include/linux/indirect_call_wrapper.h 17408 17410 F: include/linux/inet.h 17409 17411 F: include/linux/inet_diag.h ··· 25923 25923 25924 25924 USERSPACE COPYIN/COPYOUT (UIOVEC) 25925 25925 M: Alexander Viro <viro@zeniv.linux.org.uk> 25926 + L: linux-block@vger.kernel.org 25927 + L: linux-fsdevel@vger.kernel.org 25926 25928 S: Maintained 25927 25929 F: include/linux/uio.h 25928 25930 F: lib/iov_iter.c
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 16 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc7 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+1 -1
arch/arm64/kvm/sys_regs.c
··· 2624 2624 */ 2625 2625 if (hpmn > vcpu->kvm->arch.nr_pmu_counters) { 2626 2626 hpmn = vcpu->kvm->arch.nr_pmu_counters; 2627 - u64_replace_bits(val, hpmn, MDCR_EL2_HPMN); 2627 + u64p_replace_bits(&val, hpmn, MDCR_EL2_HPMN); 2628 2628 } 2629 2629 2630 2630 __vcpu_assign_sys_reg(vcpu, MDCR_EL2, val);
+2 -1
arch/riscv/Kconfig
··· 98 98 select CLONE_BACKWARDS 99 99 select COMMON_CLK 100 100 select CPU_PM if CPU_IDLE || HIBERNATION || SUSPEND 101 + select DYNAMIC_FTRACE if FUNCTION_TRACER 101 102 select EDAC_SUPPORT 102 103 select FRAME_POINTER if PERF_EVENTS || (FUNCTION_TRACER && !DYNAMIC_FTRACE) 103 104 select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY if DYNAMIC_FTRACE ··· 163 162 select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL 164 163 select HAVE_FUNCTION_GRAPH_TRACER if HAVE_DYNAMIC_FTRACE_WITH_ARGS 165 164 select HAVE_FUNCTION_GRAPH_FREGS 166 - select HAVE_FUNCTION_TRACER if !XIP_KERNEL 165 + select HAVE_FUNCTION_TRACER if !XIP_KERNEL && HAVE_DYNAMIC_FTRACE 167 166 select HAVE_EBPF_JIT if MMU 168 167 select HAVE_GUP_FAST if MMU 169 168 select HAVE_FUNCTION_ARG_ACCESS_API
+3 -1
arch/riscv/include/asm/kvm_aia.h
··· 87 87 88 88 extern struct kvm_device_ops kvm_riscv_aia_device_ops; 89 89 90 + bool kvm_riscv_vcpu_aia_imsic_has_interrupt(struct kvm_vcpu *vcpu); 91 + void kvm_riscv_vcpu_aia_imsic_load(struct kvm_vcpu *vcpu, int cpu); 92 + void kvm_riscv_vcpu_aia_imsic_put(struct kvm_vcpu *vcpu); 90 93 void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu); 91 94 int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu); 92 95 ··· 164 161 int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner, 165 162 void __iomem **hgei_va, phys_addr_t *hgei_pa); 166 163 void kvm_riscv_aia_free_hgei(int cpu, int hgei); 167 - void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable); 168 164 169 165 void kvm_riscv_aia_enable(void); 170 166 void kvm_riscv_aia_disable(void);
+3
arch/riscv/include/asm/kvm_host.h
··· 306 306 return IS_ENABLED(CONFIG_GUEST_PERF_EVENTS) && !!vcpu; 307 307 } 308 308 309 + static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {} 310 + static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {} 311 + 309 312 #define KVM_RISCV_GSTAGE_TLB_MIN_ORDER 12 310 313 311 314 void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
+2 -2
arch/riscv/include/asm/uaccess.h
··· 311 311 do { \ 312 312 if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && \ 313 313 !IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) { \ 314 - __inttype(x) val = (__inttype(x))x; \ 315 - if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(val), sizeof(*__gu_ptr))) \ 314 + __inttype(x) ___val = (__inttype(x))x; \ 315 + if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(___val), sizeof(*__gu_ptr))) \ 316 316 goto label; \ 317 317 break; \ 318 318 } \
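The ___val rename guards against macro-argument capture: with a local named val, a caller passing a variable that is itself named val would have it shadowed, and the macro-local would be initialized from itself. A standalone sketch of that hazard (macro names are illustrative, not the kernel's):

    #include <stdio.h>

    /* BAD: the macro-local "val" shadows the caller's "val", so the
     * initializer reads the new, indeterminate local. */
    #define STORE_BAD(x, out) do { long val = (long)(x); *(out) = val; } while (0)
    /* OK: ___val stays out of the caller's namespace. */
    #define STORE_OK(x, out)  do { long ___val = (long)(x); *(out) = ___val; } while (0)

    int main(void)
    {
            long val = 42, a = 0, b = 0;

            STORE_BAD(val, &a);     /* a is garbage: self-initialization */
            STORE_OK(val, &b);      /* b == 42 */
            printf("a=%ld b=%ld\n", a, b);
            return 0;
    }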
+14 -4
arch/riscv/kernel/ftrace.c
··· 14 14 #include <asm/text-patching.h> 15 15 16 16 #ifdef CONFIG_DYNAMIC_FTRACE 17 + void ftrace_arch_code_modify_prepare(void) 18 + __acquires(&text_mutex) 19 + { 20 + mutex_lock(&text_mutex); 21 + } 22 + 23 + void ftrace_arch_code_modify_post_process(void) 24 + __releases(&text_mutex) 25 + { 26 + mutex_unlock(&text_mutex); 27 + } 28 + 17 29 unsigned long ftrace_call_adjust(unsigned long addr) 18 30 { 19 31 if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS)) ··· 41 29 42 30 void arch_ftrace_update_code(int command) 43 31 { 44 - mutex_lock(&text_mutex); 45 32 command |= FTRACE_MAY_SLEEP; 46 33 ftrace_modify_all_code(command); 47 - mutex_unlock(&text_mutex); 48 34 flush_icache_all(); 49 35 } 50 36 ··· 159 149 unsigned int nops[2], offset; 160 150 int ret; 161 151 152 + guard(mutex)(&text_mutex); 153 + 162 154 ret = ftrace_rec_set_nop_ops(rec); 163 155 if (ret) 164 156 return ret; ··· 169 157 nops[0] = to_auipc_t0(offset); 170 158 nops[1] = RISCV_INSN_NOP4; 171 159 172 - mutex_lock(&text_mutex); 173 160 ret = patch_insn_write((void *)pc, nops, 2 * MCOUNT_INSN_SIZE); 174 - mutex_unlock(&text_mutex); 175 161 176 162 return ret; 177 163 }
+6 -4
arch/riscv/kernel/traps.c
··· 6 6 #include <linux/cpu.h> 7 7 #include <linux/kernel.h> 8 8 #include <linux/init.h> 9 + #include <linux/irqflags.h> 9 10 #include <linux/randomize_kstack.h> 10 11 #include <linux/sched.h> 11 12 #include <linux/sched/debug.h> ··· 152 151 { \ 153 152 if (user_mode(regs)) { \ 154 153 irqentry_enter_from_user_mode(regs); \ 154 + local_irq_enable(); \ 155 155 do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \ 156 + local_irq_disable(); \ 156 157 irqentry_exit_to_user_mode(regs); \ 157 158 } else { \ 158 159 irqentry_state_t state = irqentry_nmi_enter(regs); \ ··· 176 173 177 174 if (user_mode(regs)) { 178 175 irqentry_enter_from_user_mode(regs); 179 - 180 176 local_irq_enable(); 181 177 182 178 handled = riscv_v_first_use_handler(regs); 183 - 184 - local_irq_disable(); 185 - 186 179 if (!handled) 187 180 do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->epc, 188 181 "Oops - illegal instruction"); 189 182 183 + local_irq_disable(); 190 184 irqentry_exit_to_user_mode(regs); 191 185 } else { 192 186 irqentry_state_t state = irqentry_nmi_enter(regs); ··· 308 308 { 309 309 if (user_mode(regs)) { 310 310 irqentry_enter_from_user_mode(regs); 311 + local_irq_enable(); 311 312 312 313 handle_break(regs); 313 314 315 + local_irq_disable(); 314 316 irqentry_exit_to_user_mode(regs); 315 317 } else { 316 318 irqentry_state_t state = irqentry_nmi_enter(regs);
+1 -1
arch/riscv/kernel/traps_misaligned.c
··· 461 461 } 462 462 463 463 if (!fp) 464 - SET_RD(insn, regs, val.data_ulong << shift >> shift); 464 + SET_RD(insn, regs, (long)(val.data_ulong << shift) >> shift); 465 465 else if (len == 8) 466 466 set_f64_rd(insn, regs, val.data_u64); 467 467 else
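The added cast matters because >> on an unsigned type is a logical shift; the shift-up/shift-down idiom only sign-extends a narrow load when the right shift is arithmetic, i.e. performed on a signed value. A worked example of the difference (relying on the usual arithmetic right shift for signed types, as the kernel does):

    #include <stdio.h>

    int main(void)
    {
            unsigned long v = 0x80;  /* one-byte load with the sign bit set */
            int shift = 8 * (sizeof(unsigned long) - 1);

            long zext = v << shift >> shift;          /* logical >>: 128 */
            long sext = (long)(v << shift) >> shift;  /* arithmetic >>: -128 */

            printf("%ld %ld\n", zext, sext);
            return 0;
    }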
+7 -44
arch/riscv/kvm/aia.c
··· 30 30 unsigned int kvm_riscv_aia_max_ids; 31 31 DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available); 32 32 33 - static int aia_find_hgei(struct kvm_vcpu *owner) 34 - { 35 - int i, hgei; 36 - unsigned long flags; 37 - struct aia_hgei_control *hgctrl = get_cpu_ptr(&aia_hgei); 38 - 39 - raw_spin_lock_irqsave(&hgctrl->lock, flags); 40 - 41 - hgei = -1; 42 - for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) { 43 - if (hgctrl->owners[i] == owner) { 44 - hgei = i; 45 - break; 46 - } 47 - } 48 - 49 - raw_spin_unlock_irqrestore(&hgctrl->lock, flags); 50 - 51 - put_cpu_ptr(&aia_hgei); 52 - return hgei; 53 - } 54 - 55 33 static inline unsigned long aia_hvictl_value(bool ext_irq_pending) 56 34 { 57 35 unsigned long hvictl; ··· 73 95 74 96 bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask) 75 97 { 76 - int hgei; 77 98 unsigned long seip; 78 99 79 100 if (!kvm_riscv_aia_available()) ··· 91 114 if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip) 92 115 return false; 93 116 94 - hgei = aia_find_hgei(vcpu); 95 - if (hgei > 0) 96 - return !!(ncsr_read(CSR_HGEIP) & BIT(hgei)); 97 - 98 - return false; 117 + return kvm_riscv_vcpu_aia_imsic_has_interrupt(vcpu); 99 118 } 100 119 101 120 void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu) ··· 137 164 csr_write(CSR_HVIPRIO2H, csr->hviprio2h); 138 165 #endif 139 166 } 167 + 168 + if (kvm_riscv_aia_initialized(vcpu->kvm)) 169 + kvm_riscv_vcpu_aia_imsic_load(vcpu, cpu); 140 170 } 141 171 142 172 void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu) ··· 149 173 150 174 if (!kvm_riscv_aia_available()) 151 175 return; 176 + 177 + if (kvm_riscv_aia_initialized(vcpu->kvm)) 178 + kvm_riscv_vcpu_aia_imsic_put(vcpu); 152 179 153 180 if (kvm_riscv_nacl_available()) { 154 181 nsh = nacl_shmem(); ··· 449 470 } 450 471 451 472 raw_spin_unlock_irqrestore(&hgctrl->lock, flags); 452 - } 453 - 454 - void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable) 455 - { 456 - int hgei; 457 - 458 - if (!kvm_riscv_aia_available()) 459 - return; 460 - 461 - hgei = aia_find_hgei(owner); 462 - if (hgei > 0) { 463 - if (enable) 464 - csr_set(CSR_HGEIE, BIT(hgei)); 465 - else 466 - csr_clear(CSR_HGEIE, BIT(hgei)); 467 - } 468 473 } 469 474 470 475 static irqreturn_t hgei_interrupt(int irq, void *dev_id)
+45
arch/riscv/kvm/aia_imsic.c
··· 676 676 imsic_swfile_extirq_update(vcpu); 677 677 } 678 678 679 + bool kvm_riscv_vcpu_aia_imsic_has_interrupt(struct kvm_vcpu *vcpu) 680 + { 681 + struct imsic *imsic = vcpu->arch.aia_context.imsic_state; 682 + unsigned long flags; 683 + bool ret = false; 684 + 685 + /* 686 + * The IMSIC SW-file directly injects interrupt via hvip so 687 + * only check for interrupt when IMSIC VS-file is being used. 688 + */ 689 + 690 + read_lock_irqsave(&imsic->vsfile_lock, flags); 691 + if (imsic->vsfile_cpu > -1) 692 + ret = !!(csr_read(CSR_HGEIP) & BIT(imsic->vsfile_hgei)); 693 + read_unlock_irqrestore(&imsic->vsfile_lock, flags); 694 + 695 + return ret; 696 + } 697 + 698 + void kvm_riscv_vcpu_aia_imsic_load(struct kvm_vcpu *vcpu, int cpu) 699 + { 700 + /* 701 + * No need to explicitly clear HGEIE CSR bits because the 702 + * hgei interrupt handler (aka hgei_interrupt()) will always 703 + * clear it for us. 704 + */ 705 + } 706 + 707 + void kvm_riscv_vcpu_aia_imsic_put(struct kvm_vcpu *vcpu) 708 + { 709 + struct imsic *imsic = vcpu->arch.aia_context.imsic_state; 710 + unsigned long flags; 711 + 712 + if (!kvm_vcpu_is_blocking(vcpu)) 713 + return; 714 + 715 + read_lock_irqsave(&imsic->vsfile_lock, flags); 716 + if (imsic->vsfile_cpu > -1) 717 + csr_set(CSR_HGEIE, BIT(imsic->vsfile_hgei)); 718 + read_unlock_irqrestore(&imsic->vsfile_lock, flags); 719 + } 720 + 679 721 void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu) 680 722 { 681 723 unsigned long flags; ··· 822 780 * to the old IMSIC VS-file so we first move all interrupt 823 781 * producers to the new IMSIC VS-file. 824 782 */ 783 + 784 + /* Ensure HGEIE CSR bit is zero before using the new IMSIC VS-file */ 785 + csr_clear(CSR_HGEIE, BIT(new_vsfile_hgei)); 825 786 826 787 /* Zero-out new IMSIC VS-file */ 827 788 imsic_vsfile_local_clear(new_vsfile_hgei, imsic->nr_hw_eix);
-10
arch/riscv/kvm/vcpu.c
··· 207 207 return kvm_riscv_vcpu_timer_pending(vcpu); 208 208 } 209 209 210 - void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) 211 - { 212 - kvm_riscv_aia_wakeon_hgei(vcpu, true); 213 - } 214 - 215 - void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) 216 - { 217 - kvm_riscv_aia_wakeon_hgei(vcpu, false); 218 - } 219 - 220 210 int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) 221 211 { 222 212 return (kvm_riscv_vcpu_has_interrupts(vcpu, -1UL) &&
+16
arch/riscv/kvm/vcpu_timer.c
··· 345 345 /* 346 346 * The vstimecmp CSRs are saved by kvm_riscv_vcpu_timer_sync() 347 347 * upon every VM exit so no need to save here. 348 + * 349 + * If the VS-timer expires when no VCPU is running on a host CPU, 350 + * a WFI executed by that host CPU will be an effective NOP, 351 + * resulting in no power savings. This is because, as per the 352 + * RISC-V Privileged specification: "WFI is also required to resume 353 + * execution for locally enabled interrupts pending at any privilege 354 + * level, regardless of the global interrupt enable at each privilege 355 + * level." 356 + * 357 + * To address the above issue, the vstimecmp CSR must be set to -1UL 358 + * here when the VCPU is scheduled out or exits to user space. 348 359 */ 360 + 361 + csr_write(CSR_VSTIMECMP, -1UL); 362 + #if defined(CONFIG_32BIT) 363 + csr_write(CSR_VSTIMECMPH, -1UL); 364 + #endif 349 365 350 366 /* timer should be enabled for the remaining operations */ 351 367 if (unlikely(!t->init_done))
+3 -1
arch/riscv/tools/relocs_check.sh
··· 14 14 ${srctree}/scripts/relocs_check.sh "$@" | 15 15 # These relocations are okay 16 16 # R_RISCV_RELATIVE 17 - grep -F -w -v 'R_RISCV_RELATIVE' 17 + # R_RISCV_NONE 18 + grep -F -w -v 'R_RISCV_RELATIVE 19 + R_RISCV_NONE' 18 20 ) 19 21 20 22 if [ -z "$bad_relocs" ]; then
+9 -1
arch/s390/net/bpf_jit_comp.c
··· 566 566 { 567 567 memcpy(plt, &bpf_plt, sizeof(*plt)); 568 568 plt->ret = ret; 569 - plt->target = target; 569 + /* 570 + * (target == NULL) implies that the branch to this PLT entry was 571 + * patched and became a no-op. However, some CPU could have jumped 572 + * to this PLT entry before patching and may be still executing it. 573 + * 574 + * Since the intention in this case is to make the PLT entry a no-op, 575 + * make the target point to the return label instead of NULL. 576 + */ 577 + plt->target = target ?: ret; 570 578 } 571 579 572 580 /*
+2 -1
arch/x86/coco/sev/Makefile
··· 5 5 # Clang 14 and older may fail to respect __no_sanitize_undefined when inlining 6 6 UBSAN_SANITIZE_sev-nmi.o := n 7 7 8 - # GCC may fail to respect __no_sanitize_address when inlining 8 + # GCC may fail to respect __no_sanitize_address or __no_kcsan when inlining 9 9 KASAN_SANITIZE_sev-nmi.o := n 10 + KCSAN_SANITIZE_sev-nmi.o := n
+1
arch/x86/hyperv/hv_init.c
··· 34 34 #include <linux/syscore_ops.h> 35 35 #include <clocksource/hyperv_timer.h> 36 36 #include <linux/highmem.h> 37 + #include <linux/export.h> 37 38 38 39 void *hv_hypercall_pg; 39 40 EXPORT_SYMBOL_GPL(hv_hypercall_pg);
+41 -28
arch/x86/hyperv/irqdomain.c
··· 10 10 11 11 #include <linux/pci.h> 12 12 #include <linux/irq.h> 13 + #include <linux/export.h> 13 14 #include <asm/mshyperv.h> 14 15 15 16 static int hv_map_interrupt(union hv_device_id device_id, bool level, ··· 47 46 if (nr_bank < 0) { 48 47 local_irq_restore(flags); 49 48 pr_err("%s: unable to generate VP set\n", __func__); 50 - return EINVAL; 49 + return -EINVAL; 51 50 } 52 51 intr_desc->target.flags = HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET; 53 52 ··· 67 66 if (!hv_result_success(status)) 68 67 hv_status_err(status, "\n"); 69 68 70 - return hv_result(status); 69 + return hv_result_to_errno(status); 71 70 } 72 71 73 72 static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *old_entry) ··· 89 88 status = hv_do_hypercall(HVCALL_UNMAP_DEVICE_INTERRUPT, input, NULL); 90 89 local_irq_restore(flags); 91 90 92 - return hv_result(status); 91 + if (!hv_result_success(status)) 92 + hv_status_err(status, "\n"); 93 + 94 + return hv_result_to_errno(status); 93 95 } 94 96 95 97 #ifdef CONFIG_PCI_MSI ··· 173 169 return dev_id; 174 170 } 175 171 176 - static int hv_map_msi_interrupt(struct pci_dev *dev, int cpu, int vector, 177 - struct hv_interrupt_entry *entry) 172 + /** 173 + * hv_map_msi_interrupt() - "Map" the MSI IRQ in the hypervisor. 174 + * @data: Describes the IRQ 175 + * @out_entry: Hypervisor (MSI) interrupt entry (can be NULL) 176 + * 177 + * Map the IRQ in the hypervisor by issuing a MAP_DEVICE_INTERRUPT hypercall. 178 + * 179 + * Return: 0 on success, -errno on failure 180 + */ 181 + int hv_map_msi_interrupt(struct irq_data *data, 182 + struct hv_interrupt_entry *out_entry) 178 183 { 179 - union hv_device_id device_id = hv_build_pci_dev_id(dev); 184 + struct irq_cfg *cfg = irqd_cfg(data); 185 + struct hv_interrupt_entry dummy; 186 + union hv_device_id device_id; 187 + struct msi_desc *msidesc; 188 + struct pci_dev *dev; 189 + int cpu; 180 190 181 - return hv_map_interrupt(device_id, false, cpu, vector, entry); 191 + msidesc = irq_data_get_msi_desc(data); 192 + dev = msi_desc_to_pci_dev(msidesc); 193 + device_id = hv_build_pci_dev_id(dev); 194 + cpu = cpumask_first(irq_data_get_effective_affinity_mask(data)); 195 + 196 + return hv_map_interrupt(device_id, false, cpu, cfg->vector, 197 + out_entry ? out_entry : &dummy); 182 198 } 199 + EXPORT_SYMBOL_GPL(hv_map_msi_interrupt); 183 200 184 201 static inline void entry_to_msi_msg(struct hv_interrupt_entry *entry, struct msi_msg *msg) 185 202 { ··· 213 188 static int hv_unmap_msi_interrupt(struct pci_dev *dev, struct hv_interrupt_entry *old_entry); 214 189 static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 215 190 { 191 + struct hv_interrupt_entry *stored_entry; 192 + struct irq_cfg *cfg = irqd_cfg(data); 216 193 struct msi_desc *msidesc; 217 194 struct pci_dev *dev; 218 - struct hv_interrupt_entry out_entry, *stored_entry; 219 - struct irq_cfg *cfg = irqd_cfg(data); 220 - const cpumask_t *affinity; 221 - int cpu; 222 - u64 status; 195 + int ret; 223 196 224 197 msidesc = irq_data_get_msi_desc(data); 225 198 dev = msi_desc_to_pci_dev(msidesc); ··· 226 203 pr_debug("%s: cfg is NULL", __func__); 227 204 return; 228 205 } 229 - 230 - affinity = irq_data_get_effective_affinity_mask(data); 231 - cpu = cpumask_first_and(affinity, cpu_online_mask); 232 206 233 207 if (data->chip_data) { 234 208 /* ··· 239 219 stored_entry = data->chip_data; 240 220 data->chip_data = NULL; 241 221 242 - status = hv_unmap_msi_interrupt(dev, stored_entry); 222 + ret = hv_unmap_msi_interrupt(dev, stored_entry); 243 223 244 224 kfree(stored_entry); 245 225 246 - if (status != HV_STATUS_SUCCESS) { 247 - hv_status_debug(status, "failed to unmap\n"); 226 + if (ret) 248 227 return; 249 - } 250 228 } 251 229 252 230 stored_entry = kzalloc(sizeof(*stored_entry), GFP_ATOMIC); ··· 253 235 return; 254 236 } 255 237 256 - status = hv_map_msi_interrupt(dev, cpu, cfg->vector, &out_entry); 257 - if (status != HV_STATUS_SUCCESS) { 238 + ret = hv_map_msi_interrupt(data, stored_entry); 239 + if (ret) { 258 240 kfree(stored_entry); 259 241 return; 260 242 } 261 243 262 - *stored_entry = out_entry; 263 244 data->chip_data = stored_entry; 264 - entry_to_msi_msg(&out_entry, msg); 245 + entry_to_msi_msg(data->chip_data, msg); 265 246 266 247 return; 267 248 } ··· 274 257 { 275 258 struct hv_interrupt_entry old_entry; 276 259 struct msi_msg msg; 277 - u64 status; 278 260 279 261 if (!irqd->chip_data) { 280 262 pr_debug("%s: no chip data\n!", __func__); ··· 286 270 kfree(irqd->chip_data); 287 271 irqd->chip_data = NULL; 288 272 289 - status = hv_unmap_msi_interrupt(dev, &old_entry); 290 - 291 - if (status != HV_STATUS_SUCCESS) 292 - hv_status_err(status, "\n"); 273 + (void)hv_unmap_msi_interrupt(dev, &old_entry); 293 274 } 294 275 295 276 static void hv_msi_free_irq(struct irq_domain *domain,
+1
arch/x86/hyperv/ivm.c
··· 10 10 #include <linux/types.h> 11 11 #include <linux/slab.h> 12 12 #include <linux/cpu.h> 13 + #include <linux/export.h> 13 14 #include <asm/svm.h> 14 15 #include <asm/sev.h> 15 16 #include <asm/io.h>
+1
arch/x86/hyperv/nested.c
··· 11 11 12 12 13 13 #include <linux/types.h> 14 + #include <linux/export.h> 14 15 #include <hyperv/hvhdk.h> 15 16 #include <asm/mshyperv.h> 16 17 #include <asm/tlbflush.h>
+2 -20
arch/x86/include/asm/mshyperv.h
··· 112 112 return hv_status; 113 113 } 114 114 115 - /* Hypercall to the L0 hypervisor */ 116 - static inline u64 hv_do_nested_hypercall(u64 control, void *input, void *output) 117 - { 118 - return hv_do_hypercall(control | HV_HYPERCALL_NESTED, input, output); 119 - } 120 - 121 115 /* Fast hypercall with 8 bytes of input and no output */ 122 116 static inline u64 _hv_do_fast_hypercall8(u64 control, u64 input1) 123 117 { ··· 155 161 static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1) 156 162 { 157 163 u64 control = (u64)code | HV_HYPERCALL_FAST_BIT; 158 - 159 - return _hv_do_fast_hypercall8(control, input1); 160 - } 161 - 162 - static inline u64 hv_do_fast_nested_hypercall8(u16 code, u64 input1) 163 - { 164 - u64 control = (u64)code | HV_HYPERCALL_FAST_BIT | HV_HYPERCALL_NESTED; 165 164 166 165 return _hv_do_fast_hypercall8(control, input1); 167 166 } ··· 210 223 return _hv_do_fast_hypercall16(control, input1, input2); 211 224 } 212 225 213 - static inline u64 hv_do_fast_nested_hypercall16(u16 code, u64 input1, u64 input2) 214 - { 215 - u64 control = (u64)code | HV_HYPERCALL_FAST_BIT | HV_HYPERCALL_NESTED; 216 - 217 - return _hv_do_fast_hypercall16(control, input1, input2); 218 - } 219 - 220 226 extern struct hv_vp_assist_page **hv_vp_assist_page; 221 227 222 228 static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu) ··· 242 262 243 263 struct irq_domain *hv_create_pci_msi_domain(void); 244 264 265 + int hv_map_msi_interrupt(struct irq_data *data, 266 + struct hv_interrupt_entry *out_entry); 245 267 int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector, 246 268 struct hv_interrupt_entry *entry); 247 269 int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry);
+4 -5
arch/x86/kvm/vmx/tdx.c
··· 173 173 tdx_clear_unsupported_cpuid(entry); 174 174 } 175 175 176 - #define TDVMCALLINFO_GET_QUOTE BIT(0) 177 176 #define TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT BIT(1) 178 177 179 178 static int init_kvm_tdx_caps(const struct tdx_sys_info_td_conf *td_conf, ··· 191 192 caps->cpuid.nent = td_conf->num_cpuid_config; 192 193 193 194 caps->user_tdvmcallinfo_1_r11 = 194 - TDVMCALLINFO_GET_QUOTE | 195 195 TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT; 196 196 197 197 for (i = 0; i < td_conf->num_cpuid_config; i++) ··· 2269 2271 const struct tdx_sys_info_td_conf *td_conf = &tdx_sysinfo->td_conf; 2270 2272 struct kvm_tdx_capabilities __user *user_caps; 2271 2273 struct kvm_tdx_capabilities *caps = NULL; 2274 + u32 nr_user_entries; 2272 2275 int ret = 0; 2273 2276 2274 2277 /* flags is reserved for future use */ 2275 2278 if (cmd->flags) 2276 2279 return -EINVAL; 2277 2280 2278 - caps = kmalloc(sizeof(*caps) + 2281 + caps = kzalloc(sizeof(*caps) + 2279 2282 sizeof(struct kvm_cpuid_entry2) * td_conf->num_cpuid_config, 2280 2283 GFP_KERNEL); 2281 2284 if (!caps) 2282 2285 return -ENOMEM; 2283 2286 2284 2287 user_caps = u64_to_user_ptr(cmd->data); 2285 - if (copy_from_user(caps, user_caps, sizeof(*caps))) { 2288 + if (get_user(nr_user_entries, &user_caps->cpuid.nent)) { 2286 2289 ret = -EFAULT; 2287 2290 goto out; 2288 2291 } 2289 2292 2290 - if (caps->cpuid.nent < td_conf->num_cpuid_config) { 2293 + if (nr_user_entries < td_conf->num_cpuid_config) { 2291 2294 ret = -E2BIG; 2292 2295 goto out; 2293 2296 }
+4
arch/x86/kvm/x86.c
··· 6188 6188 u32 user_tsc_khz; 6189 6189 6190 6190 r = -EINVAL; 6191 + 6192 + if (vcpu->arch.guest_tsc_protected) 6193 + goto out; 6194 + 6191 6195 user_tsc_khz = (u32)arg; 6192 6196 6193 6197 if (kvm_caps.has_tsc_control &&
+1 -1
arch/x86/kvm/xen.c
··· 1526 1526 if (kvm_read_guest_virt(vcpu, (gva_t)sched_poll.ports, ports, 1527 1527 sched_poll.nr_ports * sizeof(*ports), &e)) { 1528 1528 *r = -EFAULT; 1529 - return true; 1529 + goto out; 1530 1530 } 1531 1531 1532 1532 for (i = 0; i < sched_poll.nr_ports; i++) {
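The fix restores the usual unwind idiom: once ports has been allocated, every error exit must funnel through the common cleanup label rather than returning directly. A generic sketch of the pattern, with copy_fails() as a hypothetical stand-in for the failing guest copy:

    static int sched_poll_example(void)
    {
            int *ports;
            int ret = 0;

            ports = kmalloc_array(8, sizeof(*ports), GFP_KERNEL);
            if (!ports)
                    return -ENOMEM;

            if (copy_fails()) {       /* hypothetical failure */
                    ret = -EFAULT;
                    goto out;         /* not "return": ports must be freed */
            }
            /* ... use ports ... */
    out:
            kfree(ports);
            return ret;
    }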
+1
block/blk-sysfs.c
··· 960 960 elevator_set_none(q); 961 961 962 962 blk_debugfs_remove(disk); 963 + kobject_put(&disk->queue_kobj); 963 964 }
-2
drivers/acpi/riscv/cppc.c
··· 37 37 { 38 38 if (sbi_spec_version >= sbi_mk_version(2, 0) && 39 39 sbi_probe_extension(SBI_EXT_CPPC) > 0) { 40 - pr_info("SBI CPPC extension detected\n"); 41 40 cppc_ext_present = true; 42 41 } else { 43 - pr_info("SBI CPPC extension NOT detected!!\n"); 44 42 cppc_ext_present = false; 45 43 } 46 44
+2
drivers/base/regmap/regmap.c
··· 1173 1173 err_map: 1174 1174 kfree(map); 1175 1175 err: 1176 + if (bus && bus->free_on_exit) 1177 + kfree(bus); 1176 1178 return ERR_PTR(ret); 1177 1179 } 1178 1180 EXPORT_SYMBOL_GPL(__regmap_init);
+2 -3
drivers/block/loop.c
··· 308 308 static void lo_rw_aio_do_completion(struct loop_cmd *cmd) 309 309 { 310 310 struct request *rq = blk_mq_rq_from_pdu(cmd); 311 - struct loop_device *lo = rq->q->queuedata; 312 311 313 312 if (!atomic_dec_and_test(&cmd->ref)) 314 313 return; 315 314 kfree(cmd->bvec); 316 315 cmd->bvec = NULL; 317 316 if (req_op(rq) == REQ_OP_WRITE) 318 - file_end_write(lo->lo_backing_file); 317 + kiocb_end_write(&cmd->iocb); 319 318 if (likely(!blk_should_fake_timeout(rq->q))) 320 319 blk_mq_complete_request(rq); 321 320 } ··· 390 391 } 391 392 392 393 if (rw == ITER_SOURCE) { 393 - file_start_write(lo->lo_backing_file); 394 + kiocb_start_write(&cmd->iocb); 394 395 ret = file->f_op->write_iter(&cmd->iocb, &iter); 395 396 } else 396 397 ret = file->f_op->read_iter(&cmd->iocb, &iter);
+9 -10
drivers/bus/fsl-mc/fsl-mc-bus.c
··· 943 943 struct fsl_mc_obj_desc endpoint_desc = {{ 0 }}; 944 944 struct dprc_endpoint endpoint1 = {{ 0 }}; 945 945 struct dprc_endpoint endpoint2 = {{ 0 }}; 946 + struct fsl_mc_bus *mc_bus; 946 947 int state, err; 947 948 948 949 mc_bus_dev = to_fsl_mc_device(mc_dev->dev.parent); ··· 967 966 strcpy(endpoint_desc.type, endpoint2.type); 968 967 endpoint_desc.id = endpoint2.id; 969 968 endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev); 969 + if (endpoint) 970 + return endpoint; 970 971 971 972 /* 972 973 * We know that the device has an endpoint because we verified by ··· 976 973 * yet discovered by the fsl-mc bus, thus the lookup returned NULL. 977 974 * Force a rescan of the devices in this container and retry the lookup. 978 975 */ 979 - if (!endpoint) { 980 - struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev); 981 - 982 - if (mutex_trylock(&mc_bus->scan_mutex)) { 983 - err = dprc_scan_objects(mc_bus_dev, true); 984 - mutex_unlock(&mc_bus->scan_mutex); 985 - } 986 - 987 - if (err < 0) 988 - return ERR_PTR(err); 976 + mc_bus = to_fsl_mc_bus(mc_bus_dev); 977 + if (mutex_trylock(&mc_bus->scan_mutex)) { 978 + err = dprc_scan_objects(mc_bus_dev, true); 979 + mutex_unlock(&mc_bus->scan_mutex); 989 980 } 981 + if (err < 0) 982 + return ERR_PTR(err); 990 983 991 984 endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev); 992 985 /*
+1
drivers/clocksource/hyperv_timer.c
··· 22 22 #include <linux/irq.h> 23 23 #include <linux/acpi.h> 24 24 #include <linux/hyperv.h> 25 + #include <linux/export.h> 25 26 #include <clocksource/hyperv_timer.h> 26 27 #include <hyperv/hvhdk.h> 27 28 #include <asm/mshyperv.h>
+28 -2
drivers/comedi/comedi_fops.c
··· 1556 1556 } 1557 1557 1558 1558 for (i = 0; i < n_insns; ++i) { 1559 + unsigned int n = insns[i].n; 1560 + 1559 1561 if (insns[i].insn & INSN_MASK_WRITE) { 1560 1562 if (copy_from_user(data, insns[i].data, 1561 - insns[i].n * sizeof(unsigned int))) { 1563 + n * sizeof(unsigned int))) { 1562 1564 dev_dbg(dev->class_dev, 1563 1565 "copy_from_user failed\n"); 1564 1566 ret = -EFAULT; 1565 1567 goto error; 1568 + } 1569 + if (n < MIN_SAMPLES) { 1570 + memset(&data[n], 0, (MIN_SAMPLES - n) * 1571 + sizeof(unsigned int)); 1566 1572 } 1567 1573 } 1568 1574 ret = parse_insn(dev, insns + i, data, file); ··· 1576 1570 goto error; 1577 1571 if (insns[i].insn & INSN_MASK_READ) { 1578 1572 if (copy_to_user(insns[i].data, data, 1579 - insns[i].n * sizeof(unsigned int))) { 1573 + n * sizeof(unsigned int))) { 1580 1574 dev_dbg(dev->class_dev, 1581 1575 "copy_to_user failed\n"); 1582 1576 ret = -EFAULT; ··· 1593 1587 if (ret < 0) 1594 1588 return ret; 1595 1589 return i; 1590 + } 1591 + 1592 + #define MAX_INSNS MAX_SAMPLES 1593 + static int check_insnlist_len(struct comedi_device *dev, unsigned int n_insns) 1594 + { 1595 + if (n_insns > MAX_INSNS) { 1596 + dev_dbg(dev->class_dev, "insnlist length too large\n"); 1597 + return -EINVAL; 1598 + } 1599 + return 0; 1596 1600 } 1597 1601 1598 1602 /* ··· 1648 1632 insn->n * sizeof(unsigned int))) { 1649 1633 ret = -EFAULT; 1650 1634 goto error; 1635 + } 1636 + if (insn->n < MIN_SAMPLES) { 1637 + memset(&data[insn->n], 0, 1638 + (MIN_SAMPLES - insn->n) * sizeof(unsigned int)); 1651 1639 } 1652 1640 } 1653 1641 ret = parse_insn(dev, insn, data, file); ··· 2259 2239 rc = -EFAULT; 2260 2240 break; 2261 2241 } 2242 + rc = check_insnlist_len(dev, insnlist.n_insns); 2243 + if (rc) 2244 + break; 2262 2245 insns = kcalloc(insnlist.n_insns, sizeof(*insns), GFP_KERNEL); 2263 2246 if (!insns) { 2264 2247 rc = -ENOMEM; ··· 3165 3142 if (copy_from_user(&insnlist32, compat_ptr(arg), sizeof(insnlist32))) 3166 3143 return -EFAULT; 3167 3144 3145 + rc = check_insnlist_len(dev, insnlist32.n_insns); 3146 + if (rc) 3147 + return rc; 3168 3148 insns = kcalloc(insnlist32.n_insns, sizeof(*insns), GFP_KERNEL); 3169 3149 if (!insns) 3170 3150 return -ENOMEM;
+10 -7
drivers/comedi/drivers.c
··· 339 339 unsigned int *data, 340 340 unsigned int mask) 341 341 { 342 - unsigned int chan_mask = 1 << CR_CHAN(insn->chanspec); 342 + unsigned int chan = CR_CHAN(insn->chanspec); 343 343 344 - if (!mask) 345 - mask = chan_mask; 344 + if (!mask && chan < 32) 345 + mask = 1U << chan; 346 346 347 347 switch (data[0]) { 348 348 case INSN_CONFIG_DIO_INPUT: ··· 382 382 unsigned int comedi_dio_update_state(struct comedi_subdevice *s, 383 383 unsigned int *data) 384 384 { 385 - unsigned int chanmask = (s->n_chan < 32) ? ((1 << s->n_chan) - 1) 385 + unsigned int chanmask = (s->n_chan < 32) ? ((1U << s->n_chan) - 1) 386 386 : 0xffffffff; 387 387 unsigned int mask = data[0] & chanmask; 388 388 unsigned int bits = data[1]; ··· 615 615 unsigned int _data[2]; 616 616 int ret; 617 617 618 + if (insn->n == 0) 619 + return 0; 620 + 618 621 memset(_data, 0, sizeof(_data)); 619 622 memset(&_insn, 0, sizeof(_insn)); 620 623 _insn.insn = INSN_BITS; ··· 628 625 if (insn->insn == INSN_WRITE) { 629 626 if (!(s->subdev_flags & SDF_WRITABLE)) 630 627 return -EINVAL; 631 - _data[0] = 1 << (chan - base_chan); /* mask */ 632 - _data[1] = data[0] ? (1 << (chan - base_chan)) : 0; /* bits */ 628 + _data[0] = 1U << (chan - base_chan); /* mask */ 629 + _data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */ 633 630 } 634 631 635 632 ret = s->insn_bits(dev, s, &_insn, _data); ··· 712 709 713 710 if (s->type == COMEDI_SUBD_DO) { 714 711 if (s->n_chan < 32) 715 - s->io_bits = (1 << s->n_chan) - 1; 712 + s->io_bits = (1U << s->n_chan) - 1; 716 713 else 717 714 s->io_bits = 0xffffffff; 718 715 }
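The added chan < 32 check here, like the it->options[1] range checks in the comedi driver hunks below, closes two C shift hazards: shifting a 32-bit value by 32 or more (or by a negative count) is undefined behavior, and 1 << 31 overflows a signed int. A distillation of the safe pattern:

    /* Return a single-bit mask for a channel, or 0 if out of range. */
    static unsigned int chan_mask(unsigned int chan)
    {
            if (chan >= 32)         /* shift count must stay below the type width */
                    return 0;
            return 1U << chan;      /* 1U keeps bit 31 clear of signed overflow */
    }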
+2 -1
drivers/comedi/drivers/aio_iiro_16.c
··· 177 177 * Digital input change of state interrupts are optionally supported 178 178 * using IRQ 2-7, 10-12, 14, or 15. 179 179 */ 180 - if ((1 << it->options[1]) & 0xdcfc) { 180 + if (it->options[1] > 0 && it->options[1] < 16 && 181 + (1 << it->options[1]) & 0xdcfc) { 181 182 ret = request_irq(it->options[1], aio_iiro_16_cos, 0, 182 183 dev->board_name, dev); 183 184 if (ret == 0)
+1 -1
drivers/comedi/drivers/comedi_test.c
··· 792 792 { 793 793 struct waveform_private *devpriv = dev->private; 794 794 795 - if (devpriv) { 795 + if (devpriv && dev->n_subdevices) { 796 796 timer_delete_sync(&devpriv->ai_timer); 797 797 timer_delete_sync(&devpriv->ao_timer); 798 798 }
+2 -1
drivers/comedi/drivers/das16m1.c
··· 522 522 devpriv->extra_iobase = dev->iobase + DAS16M1_8255_IOBASE; 523 523 524 524 /* only irqs 2, 3, 4, 5, 6, 7, 10, 11, 12, 14, and 15 are valid */ 525 - if ((1 << it->options[1]) & 0xdcfc) { 525 + if (it->options[1] >= 2 && it->options[1] <= 15 && 526 + (1 << it->options[1]) & 0xdcfc) { 526 527 ret = request_irq(it->options[1], das16m1_interrupt, 0, 527 528 dev->board_name, dev); 528 529 if (ret == 0)
+2 -1
drivers/comedi/drivers/das6402.c
··· 567 567 das6402_reset(dev); 568 568 569 569 /* IRQs 2,3,5,6,7, 10,11,15 are valid for "enhanced" mode */ 570 - if ((1 << it->options[1]) & 0x8cec) { 570 + if (it->options[1] > 0 && it->options[1] < 16 && 571 + (1 << it->options[1]) & 0x8cec) { 571 572 ret = request_irq(it->options[1], das6402_interrupt, 0, 572 573 dev->board_name, dev); 573 574 if (ret == 0) {
+2 -1
drivers/comedi/drivers/pcl812.c
··· 1149 1149 if (IS_ERR(dev->pacer)) 1150 1150 return PTR_ERR(dev->pacer); 1151 1151 1152 - if ((1 << it->options[1]) & board->irq_bits) { 1152 + if (it->options[1] > 0 && it->options[1] < 16 && 1153 + (1 << it->options[1]) & board->irq_bits) { 1153 1154 ret = request_irq(it->options[1], pcl812_interrupt, 0, 1154 1155 dev->board_name, dev); 1155 1156 if (ret == 0)
+12 -11
drivers/cpuidle/cpuidle-psci.c
··· 45 45 static DEFINE_PER_CPU_READ_MOSTLY(struct psci_cpuidle_data, psci_cpuidle_data); 46 46 static DEFINE_PER_CPU(struct psci_cpuidle_domain_state, psci_domain_state); 47 47 static bool psci_cpuidle_use_syscore; 48 - static bool psci_cpuidle_use_cpuhp; 49 48 50 49 void psci_set_domain_state(struct generic_pm_domain *pd, unsigned int state_idx, 51 50 u32 state) ··· 123 124 { 124 125 struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev); 125 126 126 - if (pd_dev) 127 - pm_runtime_get_sync(pd_dev); 127 + if (pd_dev) { 128 + if (!IS_ENABLED(CONFIG_PREEMPT_RT)) 129 + pm_runtime_get_sync(pd_dev); 130 + else 131 + dev_pm_genpd_resume(pd_dev); 132 + } 128 133 129 134 return 0; 130 135 } ··· 138 135 struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev); 139 136 140 137 if (pd_dev) { 141 - pm_runtime_put_sync(pd_dev); 138 + if (!IS_ENABLED(CONFIG_PREEMPT_RT)) 139 + pm_runtime_put_sync(pd_dev); 140 + else 141 + dev_pm_genpd_suspend(pd_dev); 142 + 142 143 /* Clear domain state to start fresh at next online. */ 143 144 psci_clear_domain_state(); 144 145 } ··· 202 195 static void psci_idle_init_cpuhp(void) 203 196 { 204 197 int err; 205 - 206 - if (!psci_cpuidle_use_cpuhp) 207 - return; 208 198 209 199 err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING, 210 200 "cpuidle/psci:online", ··· 263 259 * s2ram and s2idle. 264 260 */ 265 261 drv->states[state_count - 1].enter_s2idle = psci_enter_s2idle_domain_idle_state; 266 - if (!IS_ENABLED(CONFIG_PREEMPT_RT)) { 262 + if (!IS_ENABLED(CONFIG_PREEMPT_RT)) 267 263 drv->states[state_count - 1].enter = psci_enter_domain_idle_state; 268 - psci_cpuidle_use_cpuhp = true; 269 - } 270 264 271 265 return 0; 272 266 } ··· 341 339 342 340 dt_idle_detach_cpu(data->dev); 343 341 psci_cpuidle_use_syscore = false; 344 - psci_cpuidle_use_cpuhp = false; 345 342 } 346 343 347 344 static int psci_idle_init_cpu(struct device *dev, int cpu)
+5 -5
drivers/crypto/chelsio/chcr_algo.c
··· 314 314 if (digest_size == SHA1_DIGEST_SIZE) { 315 315 error = crypto_shash_init(desc) ?: 316 316 crypto_shash_update(desc, iopad, SHA1_BLOCK_SIZE) ?: 317 - crypto_shash_export(desc, (void *)&sha1_st); 317 + crypto_shash_export_core(desc, &sha1_st); 318 318 memcpy(result_hash, sha1_st.state, SHA1_DIGEST_SIZE); 319 319 } else if (digest_size == SHA224_DIGEST_SIZE) { 320 320 error = crypto_shash_init(desc) ?: 321 321 crypto_shash_update(desc, iopad, SHA256_BLOCK_SIZE) ?: 322 - crypto_shash_export(desc, (void *)&sha256_st); 322 + crypto_shash_export_core(desc, &sha256_st); 323 323 memcpy(result_hash, sha256_st.state, SHA256_DIGEST_SIZE); 324 324 325 325 } else if (digest_size == SHA256_DIGEST_SIZE) { 326 326 error = crypto_shash_init(desc) ?: 327 327 crypto_shash_update(desc, iopad, SHA256_BLOCK_SIZE) ?: 328 - crypto_shash_export(desc, (void *)&sha256_st); 328 + crypto_shash_export_core(desc, &sha256_st); 329 329 memcpy(result_hash, sha256_st.state, SHA256_DIGEST_SIZE); 330 330 331 331 } else if (digest_size == SHA384_DIGEST_SIZE) { 332 332 error = crypto_shash_init(desc) ?: 333 333 crypto_shash_update(desc, iopad, SHA512_BLOCK_SIZE) ?: 334 - crypto_shash_export(desc, (void *)&sha512_st); 334 + crypto_shash_export_core(desc, &sha512_st); 335 335 memcpy(result_hash, sha512_st.state, SHA512_DIGEST_SIZE); 336 336 337 337 } else if (digest_size == SHA512_DIGEST_SIZE) { 338 338 error = crypto_shash_init(desc) ?: 339 339 crypto_shash_update(desc, iopad, SHA512_BLOCK_SIZE) ?: 340 - crypto_shash_export(desc, (void *)&sha512_st); 340 + crypto_shash_export_core(desc, &sha512_st); 341 341 memcpy(result_hash, sha512_st.state, SHA512_DIGEST_SIZE); 342 342 } else { 343 343 error = -EINVAL;
+7 -7
drivers/crypto/intel/qat/qat_common/qat_algs.c
··· 5 5 #include <linux/crypto.h> 6 6 #include <crypto/internal/aead.h> 7 7 #include <crypto/internal/cipher.h> 8 + #include <crypto/internal/hash.h> 8 9 #include <crypto/internal/skcipher.h> 9 10 #include <crypto/aes.h> 10 11 #include <crypto/sha1.h> 11 12 #include <crypto/sha2.h> 12 - #include <crypto/hash.h> 13 13 #include <crypto/hmac.h> 14 14 #include <crypto/algapi.h> 15 15 #include <crypto/authenc.h> ··· 154 154 155 155 switch (ctx->qat_hash_alg) { 156 156 case ICP_QAT_HW_AUTH_ALGO_SHA1: 157 - if (crypto_shash_export(shash, &ctx->sha1)) 157 + if (crypto_shash_export_core(shash, &ctx->sha1)) 158 158 return -EFAULT; 159 159 for (i = 0; i < digest_size >> 2; i++, hash_state_out++) 160 160 *hash_state_out = cpu_to_be32(ctx->sha1.state[i]); 161 161 break; 162 162 case ICP_QAT_HW_AUTH_ALGO_SHA256: 163 - if (crypto_shash_export(shash, &ctx->sha256)) 163 + if (crypto_shash_export_core(shash, &ctx->sha256)) 164 164 return -EFAULT; 165 165 for (i = 0; i < digest_size >> 2; i++, hash_state_out++) 166 166 *hash_state_out = cpu_to_be32(ctx->sha256.state[i]); 167 167 break; 168 168 case ICP_QAT_HW_AUTH_ALGO_SHA512: 169 - if (crypto_shash_export(shash, &ctx->sha512)) 169 + if (crypto_shash_export_core(shash, &ctx->sha512)) 170 170 return -EFAULT; 171 171 for (i = 0; i < digest_size >> 3; i++, hash512_state_out++) 172 172 *hash512_state_out = cpu_to_be64(ctx->sha512.state[i]); ··· 190 190 191 191 switch (ctx->qat_hash_alg) { 192 192 case ICP_QAT_HW_AUTH_ALGO_SHA1: 193 - if (crypto_shash_export(shash, &ctx->sha1)) 193 + if (crypto_shash_export_core(shash, &ctx->sha1)) 194 194 return -EFAULT; 195 195 for (i = 0; i < digest_size >> 2; i++, hash_state_out++) 196 196 *hash_state_out = cpu_to_be32(ctx->sha1.state[i]); 197 197 break; 198 198 case ICP_QAT_HW_AUTH_ALGO_SHA256: 199 - if (crypto_shash_export(shash, &ctx->sha256)) 199 + if (crypto_shash_export_core(shash, &ctx->sha256)) 200 200 return -EFAULT; 201 201 for (i = 0; i < digest_size >> 2; i++, hash_state_out++) 202 202 *hash_state_out = cpu_to_be32(ctx->sha256.state[i]); 203 203 break; 204 204 case ICP_QAT_HW_AUTH_ALGO_SHA512: 205 - if (crypto_shash_export(shash, &ctx->sha512)) 205 + if (crypto_shash_export_core(shash, &ctx->sha512)) 206 206 return -EFAULT; 207 207 for (i = 0; i < digest_size >> 3; i++, hash512_state_out++) 208 208 *hash512_state_out = cpu_to_be64(ctx->sha512.state[i]);
+32 -28
drivers/dma/dw-edma/dw-edma-pcie.c
··· 161 161 const struct pci_device_id *pid) 162 162 { 163 163 struct dw_edma_pcie_data *pdata = (void *)pid->driver_data; 164 - struct dw_edma_pcie_data vsec_data; 164 + struct dw_edma_pcie_data *vsec_data __free(kfree) = NULL; 165 165 struct device *dev = &pdev->dev; 166 166 struct dw_edma_chip *chip; 167 167 int err, nr_irqs; 168 168 int i, mask; 169 + 170 + vsec_data = kmalloc(sizeof(*vsec_data), GFP_KERNEL); 171 + if (!vsec_data) 172 + return -ENOMEM; 169 173 170 174 /* Enable PCI device */ 171 175 err = pcim_enable_device(pdev); ··· 178 174 return err; 179 175 } 180 176 181 - memcpy(&vsec_data, pdata, sizeof(struct dw_edma_pcie_data)); 177 + memcpy(vsec_data, pdata, sizeof(struct dw_edma_pcie_data)); 182 178 183 179 /* 184 180 * Tries to find if exists a PCIe Vendor-Specific Extended Capability 185 181 * for the DMA, if one exists, then reconfigures it. 186 182 */ 187 - dw_edma_pcie_get_vsec_dma_data(pdev, &vsec_data); 183 + dw_edma_pcie_get_vsec_dma_data(pdev, vsec_data); 188 184 189 185 /* Mapping PCI BAR regions */ 190 - mask = BIT(vsec_data.rg.bar); 191 - for (i = 0; i < vsec_data.wr_ch_cnt; i++) { 192 - mask |= BIT(vsec_data.ll_wr[i].bar); 193 - mask |= BIT(vsec_data.dt_wr[i].bar); 186 + mask = BIT(vsec_data->rg.bar); 187 + for (i = 0; i < vsec_data->wr_ch_cnt; i++) { 188 + mask |= BIT(vsec_data->ll_wr[i].bar); 189 + mask |= BIT(vsec_data->dt_wr[i].bar); 194 190 } 195 - for (i = 0; i < vsec_data.rd_ch_cnt; i++) { 196 - mask |= BIT(vsec_data.ll_rd[i].bar); 197 - mask |= BIT(vsec_data.dt_rd[i].bar); 191 + for (i = 0; i < vsec_data->rd_ch_cnt; i++) { 192 + mask |= BIT(vsec_data->ll_rd[i].bar); 193 + mask |= BIT(vsec_data->dt_rd[i].bar); 198 194 } 199 195 err = pcim_iomap_regions(pdev, mask, pci_name(pdev)); 200 196 if (err) { ··· 217 213 return -ENOMEM; 218 214 219 215 /* IRQs allocation */ 220 - nr_irqs = pci_alloc_irq_vectors(pdev, 1, vsec_data.irqs, 216 + nr_irqs = pci_alloc_irq_vectors(pdev, 1, vsec_data->irqs, 221 217 PCI_IRQ_MSI | PCI_IRQ_MSIX); 222 218 if (nr_irqs < 1) { 223 219 pci_err(pdev, "fail to alloc IRQ vector (number of IRQs=%u)\n", ··· 228 224 /* Data structure initialization */ 229 225 chip->dev = dev; 230 226 231 - chip->mf = vsec_data.mf; 227 + chip->mf = vsec_data->mf; 232 228 chip->nr_irqs = nr_irqs; 233 229 chip->ops = &dw_edma_pcie_plat_ops; 234 230 235 - chip->ll_wr_cnt = vsec_data.wr_ch_cnt; 236 - chip->ll_rd_cnt = vsec_data.rd_ch_cnt; 231 + chip->ll_wr_cnt = vsec_data->wr_ch_cnt; 232 + chip->ll_rd_cnt = vsec_data->rd_ch_cnt; 237 233 238 - chip->reg_base = pcim_iomap_table(pdev)[vsec_data.rg.bar]; 234 + chip->reg_base = pcim_iomap_table(pdev)[vsec_data->rg.bar]; 239 235 if (!chip->reg_base) 240 236 return -ENOMEM; 241 237 242 238 for (i = 0; i < chip->ll_wr_cnt; i++) { 243 239 struct dw_edma_region *ll_region = &chip->ll_region_wr[i]; 244 240 struct dw_edma_region *dt_region = &chip->dt_region_wr[i]; 245 - struct dw_edma_block *ll_block = &vsec_data.ll_wr[i]; 246 - struct dw_edma_block *dt_block = &vsec_data.dt_wr[i]; 241 + struct dw_edma_block *ll_block = &vsec_data->ll_wr[i]; 242 + struct dw_edma_block *dt_block = &vsec_data->dt_wr[i]; 247 243 248 244 ll_region->vaddr.io = pcim_iomap_table(pdev)[ll_block->bar]; 249 245 if (!ll_region->vaddr.io) ··· 267 263 for (i = 0; i < chip->ll_rd_cnt; i++) { 268 264 struct dw_edma_region *ll_region = &chip->ll_region_rd[i]; 269 265 struct dw_edma_region *dt_region = &chip->dt_region_rd[i]; 270 - struct dw_edma_block *ll_block = &vsec_data.ll_rd[i]; 271 - struct dw_edma_block *dt_block = &vsec_data.dt_rd[i]; 266 + struct dw_edma_block *ll_block = &vsec_data->ll_rd[i]; 267 + struct dw_edma_block *dt_block = &vsec_data->dt_rd[i]; 272 268 273 269 ll_region->vaddr.io = pcim_iomap_table(pdev)[ll_block->bar]; 274 270 if (!ll_region->vaddr.io) ··· 302 298 pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", chip->mf); 303 299 304 300 pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p)\n", 305 - vsec_data.rg.bar, vsec_data.rg.off, vsec_data.rg.sz, 301 + vsec_data->rg.bar, vsec_data->rg.off, vsec_data->rg.sz, 306 302 chip->reg_base); 307 303 308 304 309 305 for (i = 0; i < chip->ll_wr_cnt; i++) { 310 306 pci_dbg(pdev, "L. List:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", 311 - i, vsec_data.ll_wr[i].bar, 312 - vsec_data.ll_wr[i].off, chip->ll_region_wr[i].sz, 307 + i, vsec_data->ll_wr[i].bar, 308 + vsec_data->ll_wr[i].off, chip->ll_region_wr[i].sz, 313 309 chip->ll_region_wr[i].vaddr.io, &chip->ll_region_wr[i].paddr); 314 310 315 311 pci_dbg(pdev, "Data:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", 316 - i, vsec_data.dt_wr[i].bar, 317 - vsec_data.dt_wr[i].off, chip->dt_region_wr[i].sz, 312 + i, vsec_data->dt_wr[i].bar, 313 + vsec_data->dt_wr[i].off, chip->dt_region_wr[i].sz, 318 314 chip->dt_region_wr[i].vaddr.io, &chip->dt_region_wr[i].paddr); 319 315 } 320 316 321 317 for (i = 0; i < chip->ll_rd_cnt; i++) { 322 318 pci_dbg(pdev, "L. List:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", 323 - i, vsec_data.ll_rd[i].bar, 324 - vsec_data.ll_rd[i].off, chip->ll_region_rd[i].sz, 319 + i, vsec_data->ll_rd[i].bar, 320 + vsec_data->ll_rd[i].off, chip->ll_region_rd[i].sz, 325 321 chip->ll_region_rd[i].vaddr.io, &chip->ll_region_rd[i].paddr); 326 322 327 323 pci_dbg(pdev, "Data:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n", 328 - i, vsec_data.dt_rd[i].bar, 329 - vsec_data.dt_rd[i].off, chip->dt_region_rd[i].sz, 324 + i, vsec_data->dt_rd[i].bar, 325 + vsec_data->dt_rd[i].off, chip->dt_region_rd[i].sz, 330 326 chip->dt_region_rd[i].vaddr.io, &chip->dt_region_rd[i].paddr); 331 327 } 332 328
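The conversion from an on-stack struct to a heap allocation leans on the kernel's scoped-cleanup helper: a pointer declared with __free(kfree) is passed to kfree() automatically when it goes out of scope, so every early return in the probe path above frees vsec_data with no explicit unwind code. A minimal sketch of the pattern:

    #include <linux/cleanup.h>
    #include <linux/slab.h>

    static int probe_example(void)
    {
            u8 *buf __free(kfree) = kzalloc(256, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM;

            /* any "return err;" from here on frees buf automatically */
            return 0;
    }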
+2 -2
drivers/dma/mediatek/mtk-cqdma.c
··· 449 449 return ret; 450 450 451 451 spin_lock_irqsave(&cvc->pc->lock, flags); 452 - spin_lock_irqsave(&cvc->vc.lock, flags); 452 + spin_lock(&cvc->vc.lock); 453 453 vd = mtk_cqdma_find_active_desc(c, cookie); 454 - spin_unlock_irqrestore(&cvc->vc.lock, flags); 454 + spin_unlock(&cvc->vc.lock); 455 455 spin_unlock_irqrestore(&cvc->pc->lock, flags); 456 456 457 457 if (vd) {
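The fix keeps IRQ-flag handling on the outermost lock only: calling spin_lock_irqsave() twice with the same flags variable overwrites the saved state, so the outer unlock can restore the wrong flags. A sketch of the corrected nesting, using the driver's pc/vc lock names:

    unsigned long flags;

    spin_lock_irqsave(&cvc->pc->lock, flags); /* outer: saves state, disables IRQs */
    spin_lock(&cvc->vc.lock);                 /* inner: IRQs already off */
    /* ... critical section ... */
    spin_unlock(&cvc->vc.lock);
    spin_unlock_irqrestore(&cvc->pc->lock, flags);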
+5 -6
drivers/dma/nbpfaxi.c
··· 1351 1351 if (irqs == 1) { 1352 1352 eirq = irqbuf[0]; 1353 1353 1354 - for (i = 0; i <= num_channels; i++) 1354 + for (i = 0; i < num_channels; i++) 1355 1355 nbpf->chan[i].irq = irqbuf[0]; 1356 1356 } else { 1357 1357 eirq = platform_get_irq_byname(pdev, "error"); ··· 1361 1361 if (irqs == num_channels + 1) { 1362 1362 struct nbpf_channel *chan; 1363 1363 1364 - for (i = 0, chan = nbpf->chan; i <= num_channels; 1364 + for (i = 0, chan = nbpf->chan; i < num_channels; 1365 1365 i++, chan++) { 1366 1366 /* Skip the error IRQ */ 1367 1367 if (irqbuf[i] == eirq) 1368 1368 i++; 1369 + if (i >= ARRAY_SIZE(irqbuf)) 1370 + return -EINVAL; 1369 1371 chan->irq = irqbuf[i]; 1370 1372 } 1371 - 1372 - if (chan != nbpf->chan + num_channels) 1373 - return -EINVAL; 1374 1373 } else { 1375 1374 /* 2 IRQs and more than one channel */ 1376 1375 if (irqbuf[0] == eirq) ··· 1377 1378 else 1378 1379 irq = irqbuf[0]; 1379 1380 1380 - for (i = 0; i <= num_channels; i++) 1381 + for (i = 0; i < num_channels; i++) 1381 1382 nbpf->chan[i].irq = irq; 1382 1383 } 1383 1384 }
+13
drivers/gpio/gpiolib-acpi-quirks.c
··· 331 331 .ignore_interrupt = "AMDI0030:00@11", 332 332 }, 333 333 }, 334 + { 335 + /* 336 + * Wakeup only works when keyboard backlight is turned off 337 + * https://gitlab.freedesktop.org/drm/amd/-/issues/4169 338 + */ 339 + .matches = { 340 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 341 + DMI_MATCH(DMI_PRODUCT_FAMILY, "Acer Nitro V 15"), 342 + }, 343 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 344 + .ignore_interrupt = "AMDI0030:00@8", 345 + }, 346 + }, 334 347 {} /* Terminating entry */ 335 348 }; 336 349
+1 -1
drivers/gpio/gpiolib-devres.c
··· 319 319 */ 320 320 void devm_gpiod_put_array(struct device *dev, struct gpio_descs *descs) 321 321 { 322 - devm_remove_action(dev, devm_gpiod_release_array, descs); 322 + devm_release_action(dev, devm_gpiod_release_array, descs); 323 323 } 324 324 EXPORT_SYMBOL_GPL(devm_gpiod_put_array); 325 325
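The one-word change matters because the two devres helpers differ in whether the registered callback runs; release_cb and cookie below are illustrative names:

    devm_remove_action(dev, release_cb, cookie);  /* unregisters only; release_cb never runs */
    devm_release_action(dev, release_cb, cookie); /* invokes release_cb(cookie), then unregisters */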
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 5193 5193 dev->dev->power.disable_depth--; 5194 5194 #endif 5195 5195 } 5196 + 5197 + amdgpu_vram_mgr_clear_reset_blocks(adev); 5196 5198 adev->in_suspend = false; 5197 5199 5198 5200 if (amdgpu_acpi_smart_shift_update(dev, AMDGPU_SS_DEV_D0))
+7 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
··· 427 427 { 428 428 unsigned long flags; 429 429 ktime_t deadline; 430 + bool ret; 430 431 431 432 if (unlikely(ring->adev->debug_disable_soft_recovery)) 432 433 return false; ··· 442 441 dma_fence_set_error(fence, -ENODATA); 443 442 spin_unlock_irqrestore(fence->lock, flags); 444 443 445 - atomic_inc(&ring->adev->gpu_reset_counter); 446 444 while (!dma_fence_is_signaled(fence) && 447 445 ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0) 448 446 ring->funcs->soft_recovery(ring, vmid); 449 447 450 - return dma_fence_is_signaled(fence); 448 + ret = dma_fence_is_signaled(fence); 449 + /* increment the counter only if soft reset worked */ 450 + if (ret) 451 + atomic_inc(&ring->adev->gpu_reset_counter); 452 + 453 + return ret; 451 454 } 452 455 453 456 /*
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
··· 154 154 uint64_t start, uint64_t size); 155 155 int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr, 156 156 uint64_t start); 157 + void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev); 157 158 158 159 bool amdgpu_res_cpu_visible(struct amdgpu_device *adev, 159 160 struct ttm_resource *res);
+17
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
··· 783 783 } 784 784 785 785 /** 786 + * amdgpu_vram_mgr_clear_reset_blocks - reset clear blocks 787 + * 788 + * @adev: amdgpu device pointer 789 + * 790 + * Reset the cleared drm buddy blocks. 791 + */ 792 + void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev) 793 + { 794 + struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr; 795 + struct drm_buddy *mm = &mgr->mm; 796 + 797 + mutex_lock(&mgr->lock); 798 + drm_buddy_reset_clear(mm, false); 799 + mutex_unlock(&mgr->lock); 800 + } 801 + 802 + /** 786 803 * amdgpu_vram_mgr_intersects - test each drm buddy block for intersection 787 804 * 788 805 * @man: TTM memory type manager
+1
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
··· 4640 4640 memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(struct vi_mqd_allocation)); 4641 4641 /* reset ring buffer */ 4642 4642 ring->wptr = 0; 4643 + atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0); 4643 4644 amdgpu_ring_clear_ring(ring); 4644 4645 } 4645 4646 return 0;
+10 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 728 728 * support programmable degamma anywhere. 729 729 */ 730 730 is_dcn = dm->adev->dm.dc->caps.color.dpp.dcn_arch; 731 - drm_crtc_enable_color_mgmt(&acrtc->base, is_dcn ? MAX_COLOR_LUT_ENTRIES : 0, 731 + /* Don't enable DRM CRTC degamma property for DCN401 since the 732 + * pre-blending degamma LUT doesn't apply to cursor, and therefore 733 + * can't work similar to a post-blending degamma LUT as in other hw 734 + * versions. 735 + * TODO: revisit it once KMS plane color API is merged. 736 + */ 737 + drm_crtc_enable_color_mgmt(&acrtc->base, 738 + (is_dcn && 739 + dm->adev->dm.dc->ctx->dce_version != DCN_VERSION_4_01) ? 740 + MAX_COLOR_LUT_ENTRIES : 0, 732 741 true, MAX_COLOR_LUT_ENTRIES); 733 742 734 743 drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES);
+2 -1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
··· 1565 1565 clk_mgr->base.bw_params = kzalloc(sizeof(*clk_mgr->base.bw_params), GFP_KERNEL); 1566 1566 if (!clk_mgr->base.bw_params) { 1567 1567 BREAK_TO_DEBUGGER(); 1568 - kfree(clk_mgr); 1568 + kfree(clk_mgr401); 1569 1569 return NULL; 1570 1570 } 1571 1571 ··· 1576 1576 if (!clk_mgr->wm_range_table) { 1577 1577 BREAK_TO_DEBUGGER(); 1578 1578 kfree(clk_mgr->base.bw_params); 1579 + kfree(clk_mgr401); 1579 1580 return NULL; 1580 1581 } 1581 1582
+1 -1
drivers/gpu/drm/bridge/ti-sn65dsi86.c
··· 1373 1373 regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG, 1374 1374 HPD_DISABLE, 0); 1375 1375 mutex_unlock(&pdata->comms_mutex); 1376 - }; 1376 + } 1377 1377 1378 1378 drm_bridge_add(&pdata->bridge); 1379 1379
+1 -1
drivers/gpu/drm/display/drm_dp_helper.c
··· 725 725 * monitor doesn't power down exactly after the throw away read. 726 726 */ 727 727 if (!aux->is_remote) { 728 - ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS); 728 + ret = drm_dp_dpcd_probe(aux, DP_TRAINING_PATTERN_SET); 729 729 if (ret < 0) 730 730 return ret; 731 731 }
+43
drivers/gpu/drm/drm_buddy.c
··· 405 405 EXPORT_SYMBOL(drm_get_buddy); 406 406 407 407 /** 408 + * drm_buddy_reset_clear - reset blocks clear state 409 + * 410 + * @mm: DRM buddy manager 411 + * @is_clear: blocks clear state 412 + * 413 + * Reset the clear state based on @is_clear value for each block 414 + * in the freelist. 415 + */ 416 + void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear) 417 + { 418 + u64 root_size, size, start; 419 + unsigned int order; 420 + int i; 421 + 422 + size = mm->size; 423 + for (i = 0; i < mm->n_roots; ++i) { 424 + order = ilog2(size) - ilog2(mm->chunk_size); 425 + start = drm_buddy_block_offset(mm->roots[i]); 426 + __force_merge(mm, start, start + size, order); 427 + 428 + root_size = mm->chunk_size << order; 429 + size -= root_size; 430 + } 431 + 432 + for (i = 0; i <= mm->max_order; ++i) { 433 + struct drm_buddy_block *block; 434 + 435 + list_for_each_entry_reverse(block, &mm->free_list[i], link) { 436 + if (is_clear != drm_buddy_block_is_clear(block)) { 437 + if (is_clear) { 438 + mark_cleared(block); 439 + mm->clear_avail += drm_buddy_block_size(mm, block); 440 + } else { 441 + clear_reset(block); 442 + mm->clear_avail -= drm_buddy_block_size(mm, block); 443 + } 444 + } 445 + } 446 + } 447 + } 448 + EXPORT_SYMBOL(drm_buddy_reset_clear); 449 + 450 + /** 408 451 * drm_buddy_free_block - free a block 409 452 * 410 453 * @mm: DRM buddy manager
+1 -1
drivers/gpu/drm/drm_gem_dma_helper.c
··· 230 230 231 231 if (drm_gem_is_imported(gem_obj)) { 232 232 if (dma_obj->vaddr) 233 - dma_buf_vunmap_unlocked(gem_obj->dma_buf, &map); 233 + dma_buf_vunmap_unlocked(gem_obj->import_attach->dmabuf, &map); 234 234 drm_prime_gem_destroy(gem_obj, dma_obj->sgt); 235 235 } else if (dma_obj->vaddr) { 236 236 if (dma_obj->map_noncoherent)
+6 -2
drivers/gpu/drm/drm_gem_framebuffer_helper.c
··· 419 419 static void __drm_gem_fb_end_cpu_access(struct drm_framebuffer *fb, enum dma_data_direction dir, 420 420 unsigned int num_planes) 421 421 { 422 + struct dma_buf_attachment *import_attach; 422 423 struct drm_gem_object *obj; 423 424 int ret; 424 425 ··· 428 427 obj = drm_gem_fb_get_obj(fb, num_planes); 429 428 if (!obj) 430 429 continue; 430 + import_attach = obj->import_attach; 431 431 if (!drm_gem_is_imported(obj)) 432 432 continue; 433 - ret = dma_buf_end_cpu_access(obj->dma_buf, dir); 433 + ret = dma_buf_end_cpu_access(import_attach->dmabuf, dir); 434 434 if (ret) 435 435 drm_err(fb->dev, "dma_buf_end_cpu_access(%u, %d) failed: %d\n", 436 436 ret, num_planes, dir); ··· 454 452 */ 455 453 int drm_gem_fb_begin_cpu_access(struct drm_framebuffer *fb, enum dma_data_direction dir) 456 454 { 455 + struct dma_buf_attachment *import_attach; 457 456 struct drm_gem_object *obj; 458 457 unsigned int i; 459 458 int ret; ··· 465 462 ret = -EINVAL; 466 463 goto err___drm_gem_fb_end_cpu_access; 467 464 } 465 + import_attach = obj->import_attach; 468 466 if (!drm_gem_is_imported(obj)) 469 467 continue; 470 - ret = dma_buf_begin_cpu_access(obj->dma_buf, dir); 468 + ret = dma_buf_begin_cpu_access(import_attach->dmabuf, dir); 471 469 if (ret) 472 470 goto err___drm_gem_fb_end_cpu_access; 473 471 }
+2 -2
drivers/gpu/drm/drm_gem_shmem_helper.c
··· 349 349 int ret = 0; 350 350 351 351 if (drm_gem_is_imported(obj)) { 352 - ret = dma_buf_vmap(obj->dma_buf, map); 352 + ret = dma_buf_vmap(obj->import_attach->dmabuf, map); 353 353 } else { 354 354 pgprot_t prot = PAGE_KERNEL; 355 355 ··· 409 409 struct drm_gem_object *obj = &shmem->base; 410 410 411 411 if (drm_gem_is_imported(obj)) { 412 - dma_buf_vunmap(obj->dma_buf, map); 412 + dma_buf_vunmap(obj->import_attach->dmabuf, map); 413 413 } else { 414 414 dma_resv_assert_held(shmem->base.resv); 415 415
+7 -1
drivers/gpu/drm/drm_prime.c
··· 453 453 } 454 454 455 455 mutex_lock(&dev->object_name_lock); 456 - /* re-export the original imported/exported object */ 456 + /* re-export the original imported object */ 457 + if (obj->import_attach) { 458 + dmabuf = obj->import_attach->dmabuf; 459 + get_dma_buf(dmabuf); 460 + goto out_have_obj; 461 + } 462 + 457 463 if (obj->dma_buf) { 458 464 get_dma_buf(obj->dma_buf); 459 465 dmabuf = obj->dma_buf;
+2 -2
drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
··· 65 65 struct iosys_map map = IOSYS_MAP_INIT_VADDR(etnaviv_obj->vaddr); 66 66 67 67 if (etnaviv_obj->vaddr) 68 - dma_buf_vunmap_unlocked(etnaviv_obj->base.dma_buf, &map); 68 + dma_buf_vunmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map); 69 69 70 70 /* Don't drop the pages for imported dmabuf, as they are not 71 71 * ours, just free the array we allocated: ··· 82 82 83 83 lockdep_assert_held(&etnaviv_obj->lock); 84 84 85 - ret = dma_buf_vmap(etnaviv_obj->base.dma_buf, &map); 85 + ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map); 86 86 if (ret) 87 87 return NULL; 88 88 return map.vaddr;
+35 -1
drivers/gpu/drm/mediatek/mtk_crtc.c
··· 719 719 return 0; 720 720 } 721 721 722 + void mtk_crtc_plane_disable(struct drm_crtc *crtc, struct drm_plane *plane) 723 + { 724 + #if IS_REACHABLE(CONFIG_MTK_CMDQ) 725 + struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc); 726 + struct mtk_plane_state *plane_state = to_mtk_plane_state(plane->state); 727 + int i; 728 + 729 + /* no need to wait for disabling the plane by CPU */ 730 + if (!mtk_crtc->cmdq_client.chan) 731 + return; 732 + 733 + if (!mtk_crtc->enabled) 734 + return; 735 + 736 + /* set pending plane state to disabled */ 737 + for (i = 0; i < mtk_crtc->layer_nr; i++) { 738 + struct drm_plane *mtk_plane = &mtk_crtc->planes[i]; 739 + struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(mtk_plane->state); 740 + 741 + if (mtk_plane->index == plane->index) { 742 + memcpy(mtk_plane_state, plane_state, sizeof(*plane_state)); 743 + break; 744 + } 745 + } 746 + mtk_crtc_update_config(mtk_crtc, false); 747 + 748 + /* wait for planes to be disabled by CMDQ */ 749 + wait_event_timeout(mtk_crtc->cb_blocking_queue, 750 + mtk_crtc->cmdq_vblank_cnt == 0, 751 + msecs_to_jiffies(500)); 752 + #endif 753 + } 754 + 722 755 void mtk_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane, 723 756 struct drm_atomic_state *state) 724 757 { ··· 963 930 mtk_ddp_comp_supported_rotations(comp), 964 931 mtk_ddp_comp_get_blend_modes(comp), 965 932 mtk_ddp_comp_get_formats(comp), 966 - mtk_ddp_comp_get_num_formats(comp), i); 933 + mtk_ddp_comp_get_num_formats(comp), 934 + mtk_ddp_comp_is_afbc_supported(comp), i); 967 935 if (ret) 968 936 return ret; 969 937
+1
drivers/gpu/drm/mediatek/mtk_crtc.h
··· 21 21 unsigned int num_conn_routes); 22 22 int mtk_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane, 23 23 struct mtk_plane_state *state); 24 + void mtk_crtc_plane_disable(struct drm_crtc *crtc, struct drm_plane *plane); 24 25 void mtk_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane, 25 26 struct drm_atomic_state *plane_state); 26 27 struct device *mtk_crtc_dma_dev_get(struct drm_crtc *crtc);
+1
drivers/gpu/drm/mediatek/mtk_ddp_comp.c
··· 366 366 .get_blend_modes = mtk_ovl_get_blend_modes, 367 367 .get_formats = mtk_ovl_get_formats, 368 368 .get_num_formats = mtk_ovl_get_num_formats, 369 + .is_afbc_supported = mtk_ovl_is_afbc_supported, 369 370 }; 370 371 371 372 static const struct mtk_ddp_comp_funcs ddp_postmask = {
+9
drivers/gpu/drm/mediatek/mtk_ddp_comp.h
··· 83 83 u32 (*get_blend_modes)(struct device *dev); 84 84 const u32 *(*get_formats)(struct device *dev); 85 85 size_t (*get_num_formats)(struct device *dev); 86 + bool (*is_afbc_supported)(struct device *dev); 86 87 void (*connect)(struct device *dev, struct device *mmsys_dev, unsigned int next); 87 88 void (*disconnect)(struct device *dev, struct device *mmsys_dev, unsigned int next); 88 89 void (*add)(struct device *dev, struct mtk_mutex *mutex); ··· 293 292 return comp->funcs->get_num_formats(comp->dev); 294 293 295 294 return 0; 295 + } 296 + 297 + static inline bool mtk_ddp_comp_is_afbc_supported(struct mtk_ddp_comp *comp) 298 + { 299 + if (comp->funcs && comp->funcs->is_afbc_supported) 300 + return comp->funcs->is_afbc_supported(comp->dev); 301 + 302 + return false; 296 303 } 297 304 298 305 static inline bool mtk_ddp_comp_add(struct mtk_ddp_comp *comp, struct mtk_mutex *mutex)
+1
drivers/gpu/drm/mediatek/mtk_disp_drv.h
··· 106 106 u32 mtk_ovl_get_blend_modes(struct device *dev); 107 107 const u32 *mtk_ovl_get_formats(struct device *dev); 108 108 size_t mtk_ovl_get_num_formats(struct device *dev); 109 + bool mtk_ovl_is_afbc_supported(struct device *dev); 109 110 110 111 void mtk_ovl_adaptor_add_comp(struct device *dev, struct mtk_mutex *mutex); 111 112 void mtk_ovl_adaptor_remove_comp(struct device *dev, struct mtk_mutex *mutex);
+7
drivers/gpu/drm/mediatek/mtk_disp_ovl.c
··· 236 236 return ovl->data->num_formats; 237 237 } 238 238 239 + bool mtk_ovl_is_afbc_supported(struct device *dev) 240 + { 241 + struct mtk_disp_ovl *ovl = dev_get_drvdata(dev); 242 + 243 + return ovl->data->supports_afbc; 244 + } 245 + 239 246 int mtk_ovl_clk_enable(struct device *dev) 240 247 { 241 248 struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+2 -2
drivers/gpu/drm/mediatek/mtk_dpi.c
··· 1095 1095 }; 1096 1096 1097 1097 static const u32 mt8195_dpi_output_fmts[] = { 1098 - MEDIA_BUS_FMT_BGR888_1X24, 1099 1098 MEDIA_BUS_FMT_RGB888_1X24, 1100 1099 MEDIA_BUS_FMT_RGB888_2X12_LE, 1101 1100 MEDIA_BUS_FMT_RGB888_2X12_BE, ··· 1102 1103 MEDIA_BUS_FMT_YUYV8_1X16, 1103 1104 MEDIA_BUS_FMT_YUYV10_1X20, 1104 1105 MEDIA_BUS_FMT_YUYV12_1X24, 1106 + MEDIA_BUS_FMT_BGR888_1X24, 1105 1107 MEDIA_BUS_FMT_YUV8_1X24, 1106 1108 MEDIA_BUS_FMT_YUV10_1X30, 1107 1109 }; 1108 1110 1109 1111 static const u32 mt8195_dp_intf_output_fmts[] = { 1110 - MEDIA_BUS_FMT_BGR888_1X24, 1111 1112 MEDIA_BUS_FMT_RGB888_1X24, 1112 1113 MEDIA_BUS_FMT_RGB888_2X12_LE, 1113 1114 MEDIA_BUS_FMT_RGB888_2X12_BE, 1114 1115 MEDIA_BUS_FMT_RGB101010_1X30, 1115 1116 MEDIA_BUS_FMT_YUYV8_1X16, 1116 1117 MEDIA_BUS_FMT_YUYV10_1X20, 1118 + MEDIA_BUS_FMT_BGR888_1X24, 1117 1119 MEDIA_BUS_FMT_YUV8_1X24, 1118 1120 MEDIA_BUS_FMT_YUV10_1X30, 1119 1121 };
+10 -2
drivers/gpu/drm/mediatek/mtk_plane.c
··· 285 285 struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state, 286 286 plane); 287 287 struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(new_state); 288 + struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state, 289 + plane); 290 + 288 291 mtk_plane_state->pending.enable = false; 289 292 wmb(); /* Make sure the above parameter is set before update */ 290 293 mtk_plane_state->pending.dirty = true; 294 + 295 + mtk_crtc_plane_disable(old_state->crtc, plane); 291 296 } 292 297 293 298 static void mtk_plane_atomic_update(struct drm_plane *plane, ··· 326 321 int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane, 327 322 unsigned long possible_crtcs, enum drm_plane_type type, 328 323 unsigned int supported_rotations, const u32 blend_modes, 329 - const u32 *formats, size_t num_formats, unsigned int plane_idx) 324 + const u32 *formats, size_t num_formats, 325 + bool supports_afbc, unsigned int plane_idx) 330 326 { 331 327 int err; 332 328 ··· 338 332 339 333 err = drm_universal_plane_init(dev, plane, possible_crtcs, 340 334 &mtk_plane_funcs, formats, 341 - num_formats, modifiers, type, NULL); 335 + num_formats, 336 + supports_afbc ? modifiers : NULL, 337 + type, NULL); 342 338 if (err) { 343 339 DRM_ERROR("failed to initialize plane\n"); 344 340 return err;
+2 -1
drivers/gpu/drm/mediatek/mtk_plane.h
··· 49 49 int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane, 50 50 unsigned long possible_crtcs, enum drm_plane_type type, 51 51 unsigned int supported_rotations, const u32 blend_modes, 52 - const u32 *formats, size_t num_formats, unsigned int plane_idx); 52 + const u32 *formats, size_t num_formats, 53 + bool supports_afbc, unsigned int plane_idx); 53 54 #endif
+3
drivers/gpu/drm/nouveau/nvif/chan.c
··· 39 39 const u32 pbptr = (chan->push.cur - map) + chan->func->gpfifo.post_size; 40 40 const u32 gpptr = (chan->gpfifo.cur + 1) & chan->gpfifo.max; 41 41 42 + if (!chan->func->gpfifo.post) 43 + return 0; 44 + 42 45 return chan->func->gpfifo.post(chan, gpptr, pbptr); 43 46 } 44 47
+1 -1
drivers/gpu/drm/panfrost/panfrost_job.c
··· 841 841 .num_rqs = DRM_SCHED_PRIORITY_COUNT, 842 842 .credit_limit = 2, 843 843 .timeout = msecs_to_jiffies(JOB_TIMEOUT_MS), 844 - .timeout_wq = pfdev->reset.wq, 845 844 .name = "pan_js", 846 845 .dev = pfdev->dev, 847 846 }; ··· 878 879 pfdev->reset.wq = alloc_ordered_workqueue("panfrost-reset", 0); 879 880 if (!pfdev->reset.wq) 880 881 return -ENOMEM; 882 + args.timeout_wq = pfdev->reset.wq; 881 883 882 884 for (j = 0; j < NUM_JOB_SLOTS; j++) { 883 885 js->queue[j].fence_context = dma_fence_context_alloc(1);
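
The panfrost change is an initialization-order fix: args.timeout_wq was filled in from pfdev->reset.wq before that workqueue was allocated, so the scheduler was handed a NULL pointer. A struct initializer copies the value at that moment; later updates to the source variable do not propagate. A small userspace demonstration of the same pitfall:

    #include <stdio.h>
    #include <stdlib.h>

    struct sched_args { void *timeout_wq; };

    int main(void)
    {
            void *wq = NULL;
            struct sched_args args = { .timeout_wq = wq }; /* snapshots NULL */

            wq = malloc(64);        /* too late: args was populated above */
            printf("before fix: %p\n", args.timeout_wq);   /* (nil) */

            args.timeout_wq = wq;   /* the fix: assign after allocation */
            printf("after fix:  %p\n", args.timeout_wq);

            free(wq);
            return 0;
    }
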
+6 -17
drivers/gpu/drm/radeon/radeon_device.c
··· 26 26 * Jerome Glisse 27 27 */ 28 28 29 - #include <linux/console.h> 30 29 #include <linux/efi.h> 31 30 #include <linux/pci.h> 32 31 #include <linux/pm_runtime.h> ··· 1634 1635 pci_set_power_state(pdev, PCI_D3hot); 1635 1636 } 1636 1637 1637 - if (notify_clients) { 1638 - console_lock(); 1639 - drm_client_dev_suspend(dev, true); 1640 - console_unlock(); 1641 - } 1638 + if (notify_clients) 1639 + drm_client_dev_suspend(dev, false); 1640 + 1642 1641 return 0; 1643 1642 } 1644 1643 ··· 1658 1661 if (dev->switch_power_state == DRM_SWITCH_POWER_OFF) 1659 1662 return 0; 1660 1663 1661 - if (notify_clients) { 1662 - console_lock(); 1663 - } 1664 1664 if (resume) { 1665 1665 pci_set_power_state(pdev, PCI_D0); 1666 1666 pci_restore_state(pdev); 1667 - if (pci_enable_device(pdev)) { 1668 - if (notify_clients) 1669 - console_unlock(); 1667 + if (pci_enable_device(pdev)) 1670 1668 return -1; 1671 - } 1672 1669 } 1673 1670 /* resume AGP if in use */ 1674 1671 radeon_agp_resume(rdev); ··· 1738 1747 if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) 1739 1748 radeon_pm_compute_clocks(rdev); 1740 1749 1741 - if (notify_clients) { 1742 - drm_client_dev_resume(dev, true); 1743 - console_unlock(); 1744 - } 1750 + if (notify_clients) 1751 + drm_client_dev_resume(dev, false); 1745 1752 1746 1753 return 0; 1747 1754 }
+2 -19
drivers/gpu/drm/scheduler/sched_entity.c
··· 355 355 } 356 356 EXPORT_SYMBOL(drm_sched_entity_destroy); 357 357 358 - /* drm_sched_entity_clear_dep - callback to clear the entities dependency */ 359 - static void drm_sched_entity_clear_dep(struct dma_fence *f, 360 - struct dma_fence_cb *cb) 361 - { 362 - struct drm_sched_entity *entity = 363 - container_of(cb, struct drm_sched_entity, cb); 364 - 365 - entity->dependency = NULL; 366 - dma_fence_put(f); 367 - } 368 - 369 358 /* 370 359 * drm_sched_entity_wakeup - callback to clear the entity's dependency and 371 360 * wake up the scheduler ··· 365 376 struct drm_sched_entity *entity = 366 377 container_of(cb, struct drm_sched_entity, cb); 367 378 368 - drm_sched_entity_clear_dep(f, cb); 379 + entity->dependency = NULL; 380 + dma_fence_put(f); 369 381 drm_sched_wakeup(entity->rq->sched); 370 382 } 371 383 ··· 419 429 fence = dma_fence_get(&s_fence->scheduled); 420 430 dma_fence_put(entity->dependency); 421 431 entity->dependency = fence; 422 - if (!dma_fence_add_callback(fence, &entity->cb, 423 - drm_sched_entity_clear_dep)) 424 - return true; 425 - 426 - /* Ignore it when it is already scheduled */ 427 - dma_fence_put(fence); 428 - return false; 429 432 } 430 433 431 434 if (!dma_fence_add_callback(entity->dependency, &entity->cb,
+3 -2
drivers/gpu/drm/virtio/virtgpu_prime.c
··· 204 204 { 205 205 struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj); 206 206 struct virtio_gpu_device *vgdev = obj->dev->dev_private; 207 + struct dma_buf_attachment *attach = obj->import_attach; 207 208 208 209 if (drm_gem_is_imported(obj)) { 209 - struct dma_buf *dmabuf = obj->dma_buf; 210 + struct dma_buf *dmabuf = attach->dmabuf; 210 211 211 212 dma_resv_lock(dmabuf->resv, NULL); 212 213 virtgpu_dma_buf_unmap(bo); 213 214 dma_resv_unlock(dmabuf->resv); 214 215 215 - dma_buf_detach(dmabuf, obj->import_attach); 216 + dma_buf_detach(dmabuf, attach); 216 217 dma_buf_put(dmabuf); 217 218 } 218 219
+3 -3
drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
··· 85 85 int ret; 86 86 87 87 if (drm_gem_is_imported(obj)) { 88 - ret = dma_buf_vmap(obj->dma_buf, map); 88 + ret = dma_buf_vmap(obj->import_attach->dmabuf, map); 89 89 if (!ret) { 90 90 if (drm_WARN_ON(obj->dev, map->is_iomem)) { 91 - dma_buf_vunmap(obj->dma_buf, map); 91 + dma_buf_vunmap(obj->import_attach->dmabuf, map); 92 92 return -EIO; 93 93 } 94 94 } ··· 102 102 static void vmw_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) 103 103 { 104 104 if (drm_gem_is_imported(obj)) 105 - dma_buf_vunmap(obj->dma_buf, map); 105 + dma_buf_vunmap(obj->import_attach->dmabuf, map); 106 106 else 107 107 drm_gem_ttm_vunmap(obj, map); 108 108 }
+9 -6
drivers/gpu/drm/xe/xe_gt.c
··· 417 417 if (err) 418 418 return err; 419 419 420 + xe_mocs_init_early(gt); 421 + 420 422 return 0; 421 423 } 422 424 ··· 632 630 if (err) 633 631 return err; 634 632 635 - err = xe_gt_pagefault_init(gt); 636 - if (err) 637 - return err; 638 - 639 - xe_mocs_init_early(gt); 640 - 641 633 err = xe_gt_sysfs_init(gt); 642 634 if (err) 643 635 return err; 644 636 645 637 err = gt_fw_domain_init(gt); 638 + if (err) 639 + return err; 640 + 641 + err = xe_gt_pagefault_init(gt); 646 642 if (err) 647 643 return err; 648 644 ··· 838 838 err = -ETIMEDOUT; 839 839 goto err_out; 840 840 } 841 + 842 + if (IS_SRIOV_PF(gt_to_xe(gt))) 843 + xe_gt_sriov_pf_stop_prepare(gt); 841 844 842 845 xe_uc_gucrc_disable(&gt->uc); 843 846 xe_uc_stop_prepare(&gt->uc);
+19
drivers/gpu/drm/xe/xe_gt_sriov_pf.c
··· 172 172 pf_clear_vf_scratch_regs(gt, vfid); 173 173 } 174 174 175 + static void pf_cancel_restart(struct xe_gt *gt) 176 + { 177 + xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt))); 178 + 179 + if (cancel_work_sync(&gt->sriov.pf.workers.restart)) 180 + xe_gt_sriov_dbg_verbose(gt, "pending restart canceled!\n"); 181 + } 182 + 183 + /** 184 + * xe_gt_sriov_pf_stop_prepare() - Prepare to stop SR-IOV support. 185 + * @gt: the &xe_gt 186 + * 187 + * This function can only be called on the PF. 188 + */ 189 + void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt) 190 + { 191 + pf_cancel_restart(gt); 192 + } 193 + 175 194 static void pf_restart(struct xe_gt *gt) 176 195 { 177 196 struct xe_device *xe = gt_to_xe(gt);
+5
drivers/gpu/drm/xe/xe_gt_sriov_pf.h
··· 13 13 int xe_gt_sriov_pf_init(struct xe_gt *gt); 14 14 void xe_gt_sriov_pf_init_hw(struct xe_gt *gt); 15 15 void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid); 16 + void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt); 16 17 void xe_gt_sriov_pf_restart(struct xe_gt *gt); 17 18 #else 18 19 static inline int xe_gt_sriov_pf_init_early(struct xe_gt *gt) ··· 27 26 } 28 27 29 28 static inline void xe_gt_sriov_pf_init_hw(struct xe_gt *gt) 29 + { 30 + } 31 + 32 + static inline void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt) 30 33 { 31 34 } 32 35
+27
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
··· 2364 2364 return err; 2365 2365 } 2366 2366 2367 + static int pf_push_self_config(struct xe_gt *gt) 2368 + { 2369 + int err; 2370 + 2371 + err = pf_push_full_vf_config(gt, PFID); 2372 + if (err) { 2373 + xe_gt_sriov_err(gt, "Failed to push self configuration (%pe)\n", 2374 + ERR_PTR(err)); 2375 + return err; 2376 + } 2377 + 2378 + xe_gt_sriov_dbg_verbose(gt, "self configuration completed\n"); 2379 + return 0; 2380 + } 2381 + 2367 2382 static void fini_config(void *arg) 2368 2383 { 2369 2384 struct xe_gt *gt = arg; ··· 2402 2387 int xe_gt_sriov_pf_config_init(struct xe_gt *gt) 2403 2388 { 2404 2389 struct xe_device *xe = gt_to_xe(gt); 2390 + int err; 2405 2391 2406 2392 xe_gt_assert(gt, IS_SRIOV_PF(xe)); 2393 + 2394 + mutex_lock(xe_gt_sriov_pf_master_mutex(gt)); 2395 + err = pf_push_self_config(gt); 2396 + mutex_unlock(xe_gt_sriov_pf_master_mutex(gt)); 2397 + 2398 + if (err) 2399 + return err; 2407 2400 2408 2401 return devm_add_action_or_reset(xe->drm.dev, fini_config, gt); 2409 2402 } ··· 2429 2406 { 2430 2407 unsigned int n, total_vfs = xe_sriov_pf_get_totalvfs(gt_to_xe(gt)); 2431 2408 unsigned int fail = 0, skip = 0; 2409 + 2410 + mutex_lock(xe_gt_sriov_pf_master_mutex(gt)); 2411 + pf_push_self_config(gt); 2412 + mutex_unlock(xe_gt_sriov_pf_master_mutex(gt)); 2432 2413 2433 2414 for (n = 1; n <= total_vfs; n++) { 2434 2415 if (xe_gt_sriov_pf_config_is_empty(gt, n))
+3 -3
drivers/gpu/drm/xe/xe_migrate.c
··· 1817 1817 xe_bo_assert_held(bo); 1818 1818 1819 1819 /* Use bounce buffer for small access and unaligned access */ 1820 - if (len & XE_CACHELINE_MASK || 1821 - ((uintptr_t)buf | offset) & XE_CACHELINE_MASK) { 1820 + if (!IS_ALIGNED(len, XE_CACHELINE_BYTES) || 1821 + !IS_ALIGNED((unsigned long)buf + offset, XE_CACHELINE_BYTES)) { 1822 1822 int buf_offset = 0; 1823 1823 1824 1824 /* ··· 1848 1848 err = xe_migrate_access_memory(m, bo, 1849 1849 offset & ~XE_CACHELINE_MASK, 1850 1850 (void *)ptr, 1851 - sizeof(bounce), 0); 1851 + sizeof(bounce), write); 1852 1852 if (err) 1853 1853 return err; 1854 1854 } else {
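
Two things change in the xe_migrate alignment test: the decision now depends on buf + offset rather than (buf | offset), and the bounce-path access passes the caller's write flag instead of a hardcoded 0. OR-ing a pointer and an offset flags the pair as misaligned whenever either component is, even when their sum is cacheline-aligned. A standalone check, with arbitrary addresses chosen for illustration:

    #include <stdio.h>
    #include <stdint.h>

    #define CACHELINE 64
    #define MASK (CACHELINE - 1)

    int main(void)
    {
            uintptr_t buf = 0x1020;  /* 32-byte aligned only */
            uintptr_t off = 0x20;    /* 32 bytes */

            /* OR-based test: low bits set in either operand -> "unaligned" */
            printf("old check unaligned? %d\n", !!((buf | off) & MASK)); /* 1 */
            /* sum-based test: the actual address is cacheline-aligned */
            printf("new check unaligned? %d\n", !!((buf + off) & MASK)); /* 0 */
            return 0;
    }
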
+10 -12
drivers/gpu/drm/xe/xe_ring_ops.c
··· 110 110 return i; 111 111 } 112 112 113 - static int emit_flush_invalidate(u32 *dw, int i) 113 + static int emit_flush_invalidate(u32 addr, u32 val, u32 *dw, int i) 114 114 { 115 115 dw[i++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW | 116 - MI_FLUSH_IMM_DW | MI_FLUSH_DW_STORE_INDEX; 117 - dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR; 116 + MI_FLUSH_IMM_DW; 117 + 118 + dw[i++] = addr | MI_FLUSH_DW_USE_GTT; 118 119 dw[i++] = 0; 119 - dw[i++] = 0; 120 + dw[i++] = val; 120 121 121 122 return i; 122 123 } ··· 398 397 static void emit_migration_job_gen12(struct xe_sched_job *job, 399 398 struct xe_lrc *lrc, u32 seqno) 400 399 { 400 + u32 saddr = xe_lrc_start_seqno_ggtt_addr(lrc); 401 401 u32 dw[MAX_JOB_SIZE_DW], i = 0; 402 402 403 403 i = emit_copy_timestamp(lrc, dw, i); 404 404 405 - i = emit_store_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc), 406 - seqno, dw, i); 405 + i = emit_store_imm_ggtt(saddr, seqno, dw, i); 407 406 408 407 dw[i++] = MI_ARB_ON_OFF | MI_ARB_DISABLE; /* Enabled again below */ 409 408 410 409 i = emit_bb_start(job->ptrs[0].batch_addr, BIT(8), dw, i); 411 410 412 - if (!IS_SRIOV_VF(gt_to_xe(job->q->gt))) { 413 - /* XXX: Do we need this? Leaving for now. */ 414 - dw[i++] = preparser_disable(true); 415 - i = emit_flush_invalidate(dw, i); 416 - dw[i++] = preparser_disable(false); 417 - } 411 + dw[i++] = preparser_disable(true); 412 + i = emit_flush_invalidate(saddr, seqno, dw, i); 413 + dw[i++] = preparser_disable(false); 418 414 419 415 i = emit_bb_start(job->ptrs[1].batch_addr, BIT(8), dw, i); 420 416
+1 -1
drivers/hv/Kconfig
··· 9 9 select PARAVIRT 10 10 select X86_HV_CALLBACK_VECTOR if X86 11 11 select OF_EARLY_FLATTREE if OF 12 - select SYSFB if !HYPERV_VTL_MODE 12 + select SYSFB if EFI && !HYPERV_VTL_MODE 13 13 help 14 14 Select this option to run Linux as a Hyper-V client operating 15 15 system.
+1
drivers/hv/channel.c
··· 18 18 #include <linux/uio.h> 19 19 #include <linux/interrupt.h> 20 20 #include <linux/set_memory.h> 21 + #include <linux/export.h> 21 22 #include <asm/page.h> 22 23 #include <asm/mshyperv.h> 23 24
+1
drivers/hv/channel_mgmt.c
··· 20 20 #include <linux/delay.h> 21 21 #include <linux/cpu.h> 22 22 #include <linux/hyperv.h> 23 + #include <linux/export.h> 23 24 #include <asm/mshyperv.h> 24 25 #include <linux/sched/isolation.h> 25 26
+4 -1
drivers/hv/connection.c
··· 519 519 else 520 520 WARN_ON_ONCE(1); 521 521 } else { 522 - hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event); 522 + u64 control = HVCALL_SIGNAL_EVENT; 523 + 524 + control |= hv_nested ? HV_HYPERCALL_NESTED : 0; 525 + hv_do_fast_hypercall8(control, channel->sig_event); 523 526 } 524 527 } 525 528 EXPORT_SYMBOL_GPL(vmbus_set_event);
+4 -2
drivers/hv/hv.c
··· 85 85 else 86 86 status = HV_STATUS_INVALID_PARAMETER; 87 87 } else { 88 - status = hv_do_hypercall(HVCALL_POST_MESSAGE, 89 - aligned_msg, NULL); 88 + u64 control = HVCALL_POST_MESSAGE; 89 + 90 + control |= hv_nested ? HV_HYPERCALL_NESTED : 0; 91 + status = hv_do_hypercall(control, aligned_msg, NULL); 90 92 } 91 93 92 94 local_irq_restore(flags);
+1
drivers/hv/hv_proc.c
··· 6 6 #include <linux/slab.h> 7 7 #include <linux/cpuhotplug.h> 8 8 #include <linux/minmax.h> 9 + #include <linux/export.h> 9 10 #include <asm/mshyperv.h> 10 11 11 12 /*
+1
drivers/hv/mshv_common.c
··· 13 13 #include <linux/mm.h> 14 14 #include <asm/mshyperv.h> 15 15 #include <linux/resume_user_mode.h> 16 + #include <linux/export.h> 16 17 17 18 #include "mshv.h" 18 19
+1
drivers/hv/mshv_root_hv_call.c
··· 9 9 10 10 #include <linux/kernel.h> 11 11 #include <linux/mm.h> 12 + #include <linux/export.h> 12 13 #include <asm/mshyperv.h> 13 14 14 15 #include "mshv_root.h"
+1
drivers/hv/ring_buffer.c
··· 18 18 #include <linux/slab.h> 19 19 #include <linux/prefetch.h> 20 20 #include <linux/io.h> 21 + #include <linux/export.h> 21 22 #include <asm/mshyperv.h> 22 23 23 24 #include "hyperv_vmbus.h"
+5 -4
drivers/hv/vmbus_drv.c
··· 2509 2509 return 0; 2510 2510 } 2511 2511 #endif 2512 - 2512 + #ifndef HYPERVISOR_CALLBACK_VECTOR 2513 2513 static int vmbus_set_irq(struct platform_device *pdev) 2514 2514 { 2515 2515 struct irq_data *data; ··· 2534 2534 2535 2535 return 0; 2536 2536 } 2537 + #endif 2537 2538 2538 2539 static int vmbus_device_add(struct platform_device *pdev) 2539 2540 { ··· 2550 2549 if (ret) 2551 2550 return ret; 2552 2551 2553 - if (!__is_defined(HYPERVISOR_CALLBACK_VECTOR)) 2554 - ret = vmbus_set_irq(pdev); 2552 + #ifndef HYPERVISOR_CALLBACK_VECTOR 2553 + ret = vmbus_set_irq(pdev); 2555 2554 if (ret) 2556 2555 return ret; 2557 - 2556 + #endif 2558 2557 for_each_of_range(&parser, &range) { 2559 2558 struct resource *res; 2560 2559
+5
drivers/hwmon/corsair-cpro.c
··· 89 89 struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */ 90 90 u8 *cmd_buffer; 91 91 u8 *buffer; 92 + int buffer_recv_size; /* number of received bytes in buffer */ 92 93 int target[6]; 93 94 DECLARE_BITMAP(temp_cnct, NUM_TEMP_SENSORS); 94 95 DECLARE_BITMAP(fan_cnct, NUM_FANS); ··· 147 146 if (!t) 148 147 return -ETIMEDOUT; 149 148 149 + if (ccp->buffer_recv_size != IN_BUFFER_SIZE) 150 + return -EPROTO; 151 + 150 152 return ccp_get_errno(ccp); 151 153 } 152 154 ··· 161 157 spin_lock(&ccp->wait_input_report_lock); 162 158 if (!completion_done(&ccp->wait_input_report)) { 163 159 memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size)); 160 + ccp->buffer_recv_size = size; 164 161 complete_all(&ccp->wait_input_report); 165 162 } 166 163 spin_unlock(&ccp->wait_input_report_lock);
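
The corsair-cpro fix records how many bytes the raw-event handler actually delivered and refuses to parse anything shorter than a full report, returning -EPROTO rather than interpreting stale buffer contents. A minimal userspace sketch of the validate-before-parse pattern, with a hypothetical report layout:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    #define IN_BUFFER_SIZE 16

    static unsigned char buffer[IN_BUFFER_SIZE];
    static int buffer_recv_size;    /* number of received bytes in buffer */

    static void receive_report(const unsigned char *data, int size)
    {
            memcpy(buffer, data, size < IN_BUFFER_SIZE ? size : IN_BUFFER_SIZE);
            buffer_recv_size = size;
    }

    static int parse_report(void)
    {
            if (buffer_recv_size != IN_BUFFER_SIZE)
                    return -EPROTO; /* short report: don't trust stale bytes */
            return buffer[0];       /* hypothetical status byte */
    }

    int main(void)
    {
            unsigned char short_report[4] = { 0 };

            receive_report(short_report, sizeof(short_report));
            printf("parse -> %d\n", parse_report());        /* -EPROTO */
            return 0;
    }
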
+4 -4
drivers/hwmon/ina238.c
··· 97 97 * Power (mW) = 0.2 * register value * 20000 / rshunt / 4 * gain 98 98 * (Specific for SQ52206) 99 99 * Power (mW) = 0.24 * register value * 20000 / rshunt / 4 * gain 100 - * Energy (mJ) = 16 * 0.24 * register value * 20000 / rshunt / 4 * gain 100 + * Energy (uJ) = 16 * 0.24 * register value * 20000 / rshunt / 4 * gain * 1000 101 101 */ 102 102 #define INA238_CALIBRATION_VALUE 16384 103 103 #define INA238_FIXED_SHUNT 20000 ··· 500 500 if (ret) 501 501 return ret; 502 502 503 - /* result in mJ */ 504 - energy = div_u64(regval * INA238_FIXED_SHUNT * data->gain * 16 * 505 - data->config->power_calculate_factor, 4 * 100 * data->rshunt); 503 + /* result in uJ */ 504 + energy = div_u64(regval * INA238_FIXED_SHUNT * data->gain * 16 * 10 * 505 + data->config->power_calculate_factor, 4 * data->rshunt); 506 506 507 507 return sysfs_emit(buf, "%llu\n", energy); 508 508 }
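
The ina238 change converts the reported energy from mJ to uJ, so the computed value must grow by exactly 1000x; the diff gets there by dropping the 100 from the divisor and adding a factor of 10 to the numerator (10 * 100 = 1000). A quick check of the arithmetic, with operand values that are arbitrary and chosen only to keep the ratio visible:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t regval = 12345, shunt = 20000, gain = 1, factor = 24,
                     rshunt = 2000;

            /* old: mJ = regval * shunt * gain * 16 * factor / (4 * 100 * rshunt) */
            uint64_t mj = regval * shunt * gain * 16 * factor / (4 * 100 * rshunt);
            /* new: uJ = regval * shunt * gain * 16 * 10 * factor / (4 * rshunt) */
            uint64_t uj = regval * shunt * gain * 16 * 10 * factor / (4 * rshunt);

            printf("%llu mJ -> %llu uJ (ratio %llu)\n",
                   (unsigned long long)mj, (unsigned long long)uj,
                   (unsigned long long)(uj / mj));      /* ratio is 1000 */
            return 0;
    }
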
+4 -4
drivers/hwmon/pmbus/ucd9000.c
··· 226 226 } 227 227 228 228 if (value) { 229 - if (ret & UCD9000_GPIO_CONFIG_STATUS) 229 + if (ret & UCD9000_GPIO_CONFIG_OUT_VALUE) 230 230 return 0; 231 231 232 - ret |= UCD9000_GPIO_CONFIG_STATUS; 232 + ret |= UCD9000_GPIO_CONFIG_OUT_VALUE; 233 233 } else { 234 - if (!(ret & UCD9000_GPIO_CONFIG_STATUS)) 234 + if (!(ret & UCD9000_GPIO_CONFIG_OUT_VALUE)) 235 235 return 0; 236 236 237 - ret &= ~UCD9000_GPIO_CONFIG_STATUS; 237 + ret &= ~UCD9000_GPIO_CONFIG_OUT_VALUE; 238 238 } 239 239 240 240 ret |= UCD9000_GPIO_CONFIG_ENABLE;
+5 -2
drivers/i2c/busses/i2c-omap.c
··· 1472 1472 } 1473 1473 1474 1474 /* reset ASAP, clearing any IRQs */ 1475 - omap_i2c_init(omap); 1475 + r = omap_i2c_init(omap); 1476 + if (r) 1477 + goto err_mux_state_deselect; 1476 1478 1477 1479 if (omap->rev < OMAP_I2C_OMAP1_REV_2) 1478 1480 r = devm_request_irq(&pdev->dev, omap->irq, omap_i2c_omap1_isr, ··· 1517 1515 1518 1516 err_unuse_clocks: 1519 1517 omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0); 1518 + err_mux_state_deselect: 1520 1519 if (omap->mux_state) 1521 1520 mux_state_deselect(omap->mux_state); 1522 1521 err_put_pm: 1523 - pm_runtime_dont_use_autosuspend(omap->dev); 1524 1522 pm_runtime_put_sync(omap->dev); 1525 1523 err_disable_pm: 1524 + pm_runtime_dont_use_autosuspend(omap->dev); 1526 1525 pm_runtime_disable(&pdev->dev); 1527 1526 1528 1527 return r;
+3 -5
drivers/i2c/busses/i2c-stm32.c
··· 102 102 void *dma_async_param) 103 103 { 104 104 struct dma_async_tx_descriptor *txdesc; 105 - struct device *chan_dev; 106 105 int ret; 107 106 108 107 if (rd_wr) { ··· 115 116 } 116 117 117 118 dma->dma_len = len; 118 - chan_dev = dma->chan_using->device->dev; 119 119 120 - dma->dma_buf = dma_map_single(chan_dev, buf, dma->dma_len, 120 + dma->dma_buf = dma_map_single(dev, buf, dma->dma_len, 121 121 dma->dma_data_dir); 122 - if (dma_mapping_error(chan_dev, dma->dma_buf)) { 122 + if (dma_mapping_error(dev, dma->dma_buf)) { 123 123 dev_err(dev, "DMA mapping failed\n"); 124 124 return -EINVAL; 125 125 } ··· 148 150 return 0; 149 151 150 152 err: 151 - dma_unmap_single(chan_dev, dma->dma_buf, dma->dma_len, 153 + dma_unmap_single(dev, dma->dma_buf, dma->dma_len, 152 154 dma->dma_data_dir); 153 155 return ret; 154 156 }
+9 -15
drivers/i2c/busses/i2c-stm32f7.c
··· 739 739 740 740 static void stm32f7_i2c_dma_callback(void *arg) 741 741 { 742 - struct stm32f7_i2c_dev *i2c_dev = (struct stm32f7_i2c_dev *)arg; 742 + struct stm32f7_i2c_dev *i2c_dev = arg; 743 743 struct stm32_i2c_dma *dma = i2c_dev->dma; 744 - struct device *dev = dma->chan_using->device->dev; 745 744 746 745 stm32f7_i2c_disable_dma_req(i2c_dev); 747 - dma_unmap_single(dev, dma->dma_buf, dma->dma_len, dma->dma_data_dir); 746 + dmaengine_terminate_async(dma->chan_using); 747 + dma_unmap_single(i2c_dev->dev, dma->dma_buf, dma->dma_len, 748 + dma->dma_data_dir); 748 749 complete(&dma->dma_complete); 749 750 } 750 751 ··· 1511 1510 u16 addr = f7_msg->addr; 1512 1511 void __iomem *base = i2c_dev->base; 1513 1512 struct device *dev = i2c_dev->dev; 1514 - struct stm32_i2c_dma *dma = i2c_dev->dma; 1515 1513 1516 1514 /* Bus error */ 1517 1515 if (status & STM32F7_I2C_ISR_BERR) { ··· 1551 1551 } 1552 1552 1553 1553 /* Disable dma */ 1554 - if (i2c_dev->use_dma) { 1555 - stm32f7_i2c_disable_dma_req(i2c_dev); 1556 - dmaengine_terminate_async(dma->chan_using); 1557 - } 1554 + if (i2c_dev->use_dma) 1555 + stm32f7_i2c_dma_callback(i2c_dev); 1558 1556 1559 1557 i2c_dev->master_mode = false; 1560 1558 complete(&i2c_dev->complete); ··· 1598 1600 { 1599 1601 struct stm32f7_i2c_dev *i2c_dev = data; 1600 1602 struct stm32f7_i2c_msg *f7_msg = &i2c_dev->f7_msg; 1601 - struct stm32_i2c_dma *dma = i2c_dev->dma; 1602 1603 void __iomem *base = i2c_dev->base; 1603 1604 u32 status, mask; 1604 1605 int ret; ··· 1616 1619 dev_dbg(i2c_dev->dev, "<%s>: Receive NACK (addr %x)\n", 1617 1620 __func__, f7_msg->addr); 1618 1621 writel_relaxed(STM32F7_I2C_ICR_NACKCF, base + STM32F7_I2C_ICR); 1619 - if (i2c_dev->use_dma) { 1620 - stm32f7_i2c_disable_dma_req(i2c_dev); 1621 - dmaengine_terminate_async(dma->chan_using); 1622 - } 1622 + if (i2c_dev->use_dma) 1623 + stm32f7_i2c_dma_callback(i2c_dev); 1623 1624 f7_msg->result = -ENXIO; 1624 1625 } 1625 1626 ··· 1635 1640 ret = wait_for_completion_timeout(&i2c_dev->dma->dma_complete, HZ); 1636 1641 if (!ret) { 1637 1642 dev_dbg(i2c_dev->dev, "<%s>: Timed out\n", __func__); 1638 - stm32f7_i2c_disable_dma_req(i2c_dev); 1639 - dmaengine_terminate_async(dma->chan_using); 1643 + stm32f7_i2c_dma_callback(i2c_dev); 1640 1644 f7_msg->result = -ETIMEDOUT; 1641 1645 } 1642 1646 }
+2
drivers/iio/accel/fxls8962af-core.c
··· 877 877 if (ret) 878 878 return ret; 879 879 880 + synchronize_irq(data->irq); 881 + 880 882 ret = __fxls8962af_fifo_set_mode(data, false); 881 883 882 884 if (data->enable_event)
+5 -5
drivers/iio/accel/st_accel_core.c
··· 1353 1353 union acpi_object *ont; 1354 1354 union acpi_object *elements; 1355 1355 acpi_status status; 1356 + struct device *parent = indio_dev->dev.parent; 1356 1357 int ret = -EINVAL; 1357 1358 unsigned int val; 1358 1359 int i, j; ··· 1372 1371 }; 1373 1372 1374 1373 1375 - adev = ACPI_COMPANION(indio_dev->dev.parent); 1374 + adev = ACPI_COMPANION(parent); 1376 1375 if (!adev) 1377 1376 return -ENXIO; 1378 1377 ··· 1381 1380 if (status == AE_NOT_FOUND) { 1382 1381 return -ENXIO; 1383 1382 } else if (ACPI_FAILURE(status)) { 1384 - dev_warn(&indio_dev->dev, "failed to execute _ONT: %d\n", 1385 - status); 1383 + dev_warn(parent, "failed to execute _ONT: %d\n", status); 1386 1384 return status; 1387 1385 } 1388 1386 ··· 1457 1457 } 1458 1458 1459 1459 ret = 0; 1460 - dev_info(&indio_dev->dev, "computed mount matrix from ACPI\n"); 1460 + dev_info(parent, "computed mount matrix from ACPI\n"); 1461 1461 1462 1462 out: 1463 1463 kfree(buffer.pointer); 1464 1464 if (ret) 1465 - dev_dbg(&indio_dev->dev, 1465 + dev_dbg(parent, 1466 1466 "failed to apply ACPI orientation data: %d\n", ret); 1467 1467 1468 1468 return ret;
+3 -2
drivers/iio/adc/ad7380.c
··· 1953 1953 1954 1954 if (st->chip_info->has_hardware_gain) { 1955 1955 device_for_each_child_node_scoped(dev, node) { 1956 - unsigned int channel, gain; 1956 + unsigned int channel; 1957 1957 int gain_idx; 1958 + u16 gain; 1958 1959 1959 1960 ret = fwnode_property_read_u32(node, "reg", &channel); 1960 1961 if (ret) ··· 1967 1966 "Invalid channel number %i\n", 1968 1967 channel); 1969 1968 1970 - ret = fwnode_property_read_u32(node, "adi,gain-milli", 1969 + ret = fwnode_property_read_u16(node, "adi,gain-milli", 1971 1970 &gain); 1972 1971 if (ret && ret != -EINVAL) 1973 1972 return dev_err_probe(dev, ret,
+3 -4
drivers/iio/adc/ad7949.c
··· 308 308 309 309 static int ad7949_spi_probe(struct spi_device *spi) 310 310 { 311 - u32 spi_ctrl_mask = spi->controller->bits_per_word_mask; 312 311 struct device *dev = &spi->dev; 313 312 const struct ad7949_adc_spec *spec; 314 313 struct ad7949_adc_chip *ad7949_adc; ··· 336 337 ad7949_adc->resolution = spec->resolution; 337 338 338 339 /* Set SPI bits per word */ 339 - if (spi_ctrl_mask & SPI_BPW_MASK(ad7949_adc->resolution)) { 340 + if (spi_is_bpw_supported(spi, ad7949_adc->resolution)) { 340 341 spi->bits_per_word = ad7949_adc->resolution; 341 - } else if (spi_ctrl_mask == SPI_BPW_MASK(16)) { 342 + } else if (spi_is_bpw_supported(spi, 16)) { 342 343 spi->bits_per_word = 16; 343 - } else if (spi_ctrl_mask == SPI_BPW_MASK(8)) { 344 + } else if (spi_is_bpw_supported(spi, 8)) { 344 345 spi->bits_per_word = 8; 345 346 } else { 346 347 dev_err(dev, "unable to find common BPW with spi controller\n");
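
Besides adopting the spi_is_bpw_supported() helper, the ad7949 change fixes a logic bug: the fallback branches compared the controller's bits-per-word mask with ==, which only matches a controller supporting exactly one word size; a capability mask needs a bitwise-AND membership test. A standalone demo with a hypothetical controller mask:

    #include <stdio.h>

    #define BIT(n)      (1u << (n))
    #define BPW_MASK(b) BIT((b) - 1)        /* mirrors SPI_BPW_MASK() */

    int main(void)
    {
            /* hypothetical controller supporting 8-, 16- and 32-bit words */
            unsigned int ctrl_mask = BPW_MASK(8) | BPW_MASK(16) | BPW_MASK(32);

            /* '==' only matches a controller supporting *exactly* 16 bits */
            printf("'==' accepts 16 bpw: %d\n", ctrl_mask == BPW_MASK(16));    /* 0 */
            /* '&' tests membership, which is what a capability mask needs */
            printf("'&'  accepts 16 bpw: %d\n", !!(ctrl_mask & BPW_MASK(16))); /* 1 */
            return 0;
    }
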
+4 -2
drivers/iio/adc/adi-axi-adc.c
··· 445 445 static int ad7606_bus_reg_read(struct iio_backend *back, u32 reg, u32 *val) 446 446 { 447 447 struct adi_axi_adc_state *st = iio_backend_get_priv(back); 448 - int addr; 448 + u32 addr, reg_val; 449 449 450 450 guard(mutex)(&st->lock); 451 451 ··· 455 455 */ 456 456 addr = FIELD_PREP(ADI_AXI_REG_ADDRESS_MASK, reg) | ADI_AXI_REG_READ_BIT; 457 457 axi_adc_raw_write(back, addr); 458 - axi_adc_raw_read(back, val); 458 + axi_adc_raw_read(back, &reg_val); 459 + 460 + *val = FIELD_GET(ADI_AXI_REG_VALUE_MASK, reg_val); 459 461 460 462 /* Write 0x0 on the bus to get back to ADC mode */ 461 463 axi_adc_raw_write(back, 0);
+1
drivers/iio/adc/axp20x_adc.c
··· 187 187 .consumer_channel = "batt_chrg_i", 188 188 .adc_channel_label = "batt_chrg_i", 189 189 }, 190 + { } 190 191 }; 191 192 192 193 /*
+21 -22
drivers/iio/adc/max1363.c
··· 511 511 MAX1363_CHAN_U(1, _s1, 1, bits, ev_spec, num_ev_spec), \ 512 512 MAX1363_CHAN_U(2, _s2, 2, bits, ev_spec, num_ev_spec), \ 513 513 MAX1363_CHAN_U(3, _s3, 3, bits, ev_spec, num_ev_spec), \ 514 - MAX1363_CHAN_B(0, 1, d0m1, 4, bits, ev_spec, num_ev_spec), \ 515 - MAX1363_CHAN_B(2, 3, d2m3, 5, bits, ev_spec, num_ev_spec), \ 516 - MAX1363_CHAN_B(1, 0, d1m0, 6, bits, ev_spec, num_ev_spec), \ 517 - MAX1363_CHAN_B(3, 2, d3m2, 7, bits, ev_spec, num_ev_spec), \ 514 + MAX1363_CHAN_B(0, 1, d0m1, 12, bits, ev_spec, num_ev_spec), \ 515 + MAX1363_CHAN_B(2, 3, d2m3, 13, bits, ev_spec, num_ev_spec), \ 516 + MAX1363_CHAN_B(1, 0, d1m0, 18, bits, ev_spec, num_ev_spec), \ 517 + MAX1363_CHAN_B(3, 2, d3m2, 19, bits, ev_spec, num_ev_spec), \ 518 518 IIO_CHAN_SOFT_TIMESTAMP(8) \ 519 519 } 520 520 ··· 532 532 /* Applies to max1236, max1237 */ 533 533 static const enum max1363_modes max1236_mode_list[] = { 534 534 _s0, _s1, _s2, _s3, 535 - s0to1, s0to2, s0to3, 535 + s0to1, s0to2, s2to3, s0to3, 536 536 d0m1, d2m3, d1m0, d3m2, 537 537 d0m1to2m3, d1m0to3m2, 538 - s2to3, 539 538 }; 540 539 541 540 /* Applies to max1238, max1239 */ 542 541 static const enum max1363_modes max1238_mode_list[] = { 543 542 _s0, _s1, _s2, _s3, _s4, _s5, _s6, _s7, _s8, _s9, _s10, _s11, 544 543 s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, 544 + s6to7, s6to8, s6to9, s6to10, s6to11, 545 545 s0to7, s0to8, s0to9, s0to10, s0to11, 546 546 d0m1, d2m3, d4m5, d6m7, d8m9, d10m11, 547 547 d1m0, d3m2, d5m4, d7m6, d9m8, d11m10, 548 - d0m1to2m3, d0m1to4m5, d0m1to6m7, d0m1to8m9, d0m1to10m11, 549 - d1m0to3m2, d1m0to5m4, d1m0to7m6, d1m0to9m8, d1m0to11m10, 550 - s6to7, s6to8, s6to9, s6to10, s6to11, 551 - d6m7to8m9, d6m7to10m11, d7m6to9m8, d7m6to11m10, 548 + d0m1to2m3, d0m1to4m5, d0m1to6m7, d6m7to8m9, 549 + d0m1to8m9, d6m7to10m11, d0m1to10m11, d1m0to3m2, 550 + d1m0to5m4, d1m0to7m6, d7m6to9m8, d1m0to9m8, 551 + d7m6to11m10, d1m0to11m10, 552 552 }; 553 553 554 554 #define MAX1363_12X_CHANS(bits) { \ ··· 584 584 585 585 static const enum max1363_modes max11607_mode_list[] = { 586 586 _s0, _s1, _s2, _s3, 587 - s0to1, s0to2, s0to3, 588 - s2to3, 587 + s0to1, s0to2, s2to3, 588 + s0to3, 589 589 d0m1, d2m3, d1m0, d3m2, 590 590 d0m1to2m3, d1m0to3m2, 591 591 }; 592 592 593 593 static const enum max1363_modes max11608_mode_list[] = { 594 594 _s0, _s1, _s2, _s3, _s4, _s5, _s6, _s7, 595 - s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, s0to7, 596 - s6to7, 595 + s0to1, s0to2, s0to3, s0to4, s0to5, s0to6, s6to7, s0to7, 597 596 d0m1, d2m3, d4m5, d6m7, 598 597 d1m0, d3m2, d5m4, d7m6, 599 598 d0m1to2m3, d0m1to4m5, d0m1to6m7, ··· 608 609 MAX1363_CHAN_U(5, _s5, 5, bits, NULL, 0), \ 609 610 MAX1363_CHAN_U(6, _s6, 6, bits, NULL, 0), \ 610 611 MAX1363_CHAN_U(7, _s7, 7, bits, NULL, 0), \ 611 - MAX1363_CHAN_B(0, 1, d0m1, 8, bits, NULL, 0), \ 612 - MAX1363_CHAN_B(2, 3, d2m3, 9, bits, NULL, 0), \ 613 - MAX1363_CHAN_B(4, 5, d4m5, 10, bits, NULL, 0), \ 614 - MAX1363_CHAN_B(6, 7, d6m7, 11, bits, NULL, 0), \ 615 - MAX1363_CHAN_B(1, 0, d1m0, 12, bits, NULL, 0), \ 616 - MAX1363_CHAN_B(3, 2, d3m2, 13, bits, NULL, 0), \ 617 - MAX1363_CHAN_B(5, 4, d5m4, 14, bits, NULL, 0), \ 618 - MAX1363_CHAN_B(7, 6, d7m6, 15, bits, NULL, 0), \ 612 + MAX1363_CHAN_B(0, 1, d0m1, 12, bits, NULL, 0), \ 613 + MAX1363_CHAN_B(2, 3, d2m3, 13, bits, NULL, 0), \ 614 + MAX1363_CHAN_B(4, 5, d4m5, 14, bits, NULL, 0), \ 615 + MAX1363_CHAN_B(6, 7, d6m7, 15, bits, NULL, 0), \ 616 + MAX1363_CHAN_B(1, 0, d1m0, 18, bits, NULL, 0), \ 617 + MAX1363_CHAN_B(3, 2, d3m2, 19, bits, NULL, 0), \ 618 + MAX1363_CHAN_B(5, 4, d5m4, 20, bits, NULL, 0), \ 619 + MAX1363_CHAN_B(7, 6, d7m6, 21, bits, NULL, 0), \ 619 620 IIO_CHAN_SOFT_TIMESTAMP(16) \ 620 621 } 621 622 static const struct iio_chan_spec max11602_channels[] = MAX1363_8X_CHANS(8);
+3 -4
drivers/iio/adc/stm32-adc-core.c
··· 430 430 return -ENOMEM; 431 431 } 432 432 433 - for (i = 0; i < priv->cfg->num_irqs; i++) { 434 - irq_set_chained_handler(priv->irq[i], stm32_adc_irq_handler); 435 - irq_set_handler_data(priv->irq[i], priv); 436 - } 433 + for (i = 0; i < priv->cfg->num_irqs; i++) 434 + irq_set_chained_handler_and_data(priv->irq[i], 435 + stm32_adc_irq_handler, priv); 437 436 438 437 return 0; 439 438 }
+17 -19
drivers/iio/common/st_sensors/st_sensors_core.c
··· 154 154 return err; 155 155 156 156 st_accel_set_fullscale_error: 157 - dev_err(&indio_dev->dev, "failed to set new fullscale.\n"); 157 + dev_err(indio_dev->dev.parent, "failed to set new fullscale.\n"); 158 158 return err; 159 159 } 160 160 ··· 231 231 ARRAY_SIZE(regulator_names), 232 232 regulator_names); 233 233 if (err) 234 - return dev_err_probe(&indio_dev->dev, err, 235 - "unable to enable supplies\n"); 234 + return dev_err_probe(parent, err, "unable to enable supplies\n"); 236 235 237 236 return 0; 238 237 } ··· 240 241 static int st_sensors_set_drdy_int_pin(struct iio_dev *indio_dev, 241 242 struct st_sensors_platform_data *pdata) 242 243 { 244 + struct device *parent = indio_dev->dev.parent; 243 245 struct st_sensor_data *sdata = iio_priv(indio_dev); 244 246 245 247 /* Sensor does not support interrupts */ 246 248 if (!sdata->sensor_settings->drdy_irq.int1.addr && 247 249 !sdata->sensor_settings->drdy_irq.int2.addr) { 248 250 if (pdata->drdy_int_pin) 249 - dev_info(&indio_dev->dev, 251 + dev_info(parent, 250 252 "DRDY on pin INT%d specified, but sensor does not support interrupts\n", 251 253 pdata->drdy_int_pin); 252 254 return 0; ··· 256 256 switch (pdata->drdy_int_pin) { 257 257 case 1: 258 258 if (!sdata->sensor_settings->drdy_irq.int1.mask) { 259 - dev_err(&indio_dev->dev, 260 - "DRDY on INT1 not available.\n"); 259 + dev_err(parent, "DRDY on INT1 not available.\n"); 261 260 return -EINVAL; 262 261 } 263 262 sdata->drdy_int_pin = 1; 264 263 break; 265 264 case 2: 266 265 if (!sdata->sensor_settings->drdy_irq.int2.mask) { 267 - dev_err(&indio_dev->dev, 268 - "DRDY on INT2 not available.\n"); 266 + dev_err(parent, "DRDY on INT2 not available.\n"); 269 267 return -EINVAL; 270 268 } 271 269 sdata->drdy_int_pin = 2; 272 270 break; 273 271 default: 274 - dev_err(&indio_dev->dev, "DRDY on pdata not valid.\n"); 272 + dev_err(parent, "DRDY on pdata not valid.\n"); 275 273 return -EINVAL; 276 274 } 277 275 278 276 if (pdata->open_drain) { 279 277 if (!sdata->sensor_settings->drdy_irq.int1.addr_od && 280 278 !sdata->sensor_settings->drdy_irq.int2.addr_od) 281 - dev_err(&indio_dev->dev, 279 + dev_err(parent, 282 280 "open drain requested but unsupported.\n"); 283 281 else 284 282 sdata->int_pin_open_drain = true; ··· 334 336 int st_sensors_init_sensor(struct iio_dev *indio_dev, 335 337 struct st_sensors_platform_data *pdata) 336 338 { 339 + struct device *parent = indio_dev->dev.parent; 337 340 struct st_sensor_data *sdata = iio_priv(indio_dev); 338 341 struct st_sensors_platform_data *of_pdata; 339 342 int err = 0; ··· 342 343 mutex_init(&sdata->odr_lock); 343 344 344 345 /* If OF/DT pdata exists, it will take precedence of anything else */ 345 - of_pdata = st_sensors_dev_probe(indio_dev->dev.parent, pdata); 346 + of_pdata = st_sensors_dev_probe(parent, pdata); 346 347 if (IS_ERR(of_pdata)) 347 348 return PTR_ERR(of_pdata); 348 349 if (of_pdata) ··· 369 370 if (err < 0) 370 371 return err; 371 372 } else 372 - dev_info(&indio_dev->dev, "Full-scale not possible\n"); 373 + dev_info(parent, "Full-scale not possible\n"); 373 374 374 375 err = st_sensors_set_odr(indio_dev, sdata->odr); 375 376 if (err < 0) ··· 404 405 mask = sdata->sensor_settings->drdy_irq.int2.mask_od; 405 406 } 406 407 407 - dev_info(&indio_dev->dev, 408 + dev_info(parent, 408 409 "set interrupt line to open drain mode on pin %d\n", 409 410 sdata->drdy_int_pin); 410 411 err = st_sensors_write_data_with_mask(indio_dev, addr, ··· 592 593 int st_sensors_verify_id(struct iio_dev *indio_dev) 593 594 { 594 595 struct st_sensor_data *sdata = iio_priv(indio_dev); 596 + struct device *parent = indio_dev->dev.parent; 595 597 int wai, err; 596 598 597 599 if (sdata->sensor_settings->wai_addr) { 598 600 err = regmap_read(sdata->regmap, 599 601 sdata->sensor_settings->wai_addr, &wai); 600 602 if (err < 0) { 601 - dev_err(&indio_dev->dev, 602 - "failed to read Who-Am-I register.\n"); 603 - return err; 603 + return dev_err_probe(parent, err, 604 + "failed to read Who-Am-I register.\n"); 604 605 } 605 606 606 607 if (sdata->sensor_settings->wai != wai) { 607 - dev_warn(&indio_dev->dev, 608 - "%s: WhoAmI mismatch (0x%x).\n", 609 - indio_dev->name, wai); 608 + dev_warn(parent, "%s: WhoAmI mismatch (0x%x).\n", 609 + indio_dev->name, wai); 610 610 } 611 611 } 612 612
+9 -11
drivers/iio/common/st_sensors/st_sensors_trigger.c
··· 127 127 sdata->trig = devm_iio_trigger_alloc(parent, "%s-trigger", 128 128 indio_dev->name); 129 129 if (sdata->trig == NULL) { 130 - dev_err(&indio_dev->dev, "failed to allocate iio trigger.\n"); 130 + dev_err(parent, "failed to allocate iio trigger.\n"); 131 131 return -ENOMEM; 132 132 } 133 133 ··· 143 143 case IRQF_TRIGGER_FALLING: 144 144 case IRQF_TRIGGER_LOW: 145 145 if (!sdata->sensor_settings->drdy_irq.addr_ihl) { 146 - dev_err(&indio_dev->dev, 146 + dev_err(parent, 147 147 "falling/low specified for IRQ but hardware supports only rising/high: will request rising/high\n"); 148 148 if (irq_trig == IRQF_TRIGGER_FALLING) 149 149 irq_trig = IRQF_TRIGGER_RISING; ··· 156 156 sdata->sensor_settings->drdy_irq.mask_ihl, 1); 157 157 if (err < 0) 158 158 return err; 159 - dev_info(&indio_dev->dev, 159 + dev_info(parent, 160 160 "interrupts on the falling edge or active low level\n"); 161 161 } 162 162 break; 163 163 case IRQF_TRIGGER_RISING: 164 - dev_info(&indio_dev->dev, 165 - "interrupts on the rising edge\n"); 164 + dev_info(parent, "interrupts on the rising edge\n"); 166 165 break; 167 166 case IRQF_TRIGGER_HIGH: 168 - dev_info(&indio_dev->dev, 169 - "interrupts active high level\n"); 167 + dev_info(parent, "interrupts active high level\n"); 170 168 break; 171 169 default: 172 170 /* This is the most preferred mode, if possible */ 173 - dev_err(&indio_dev->dev, 171 + dev_err(parent, 174 172 "unsupported IRQ trigger specified (%lx), enforce rising edge\n", irq_trig); 175 173 irq_trig = IRQF_TRIGGER_RISING; 176 174 } ··· 177 179 if (irq_trig == IRQF_TRIGGER_FALLING || 178 180 irq_trig == IRQF_TRIGGER_RISING) { 179 181 if (!sdata->sensor_settings->drdy_irq.stat_drdy.addr) { 180 - dev_err(&indio_dev->dev, 182 + dev_err(parent, 181 183 "edge IRQ not supported w/o stat register.\n"); 182 184 return -EOPNOTSUPP; 183 185 } ··· 212 214 sdata->trig->name, 213 215 sdata->trig); 214 216 if (err) { 215 - dev_err(&indio_dev->dev, "failed to request trigger IRQ.\n"); 217 + dev_err(parent, "failed to request trigger IRQ.\n"); 216 218 return err; 217 219 } 218 220 219 221 err = devm_iio_trigger_register(parent, sdata->trig); 220 222 if (err < 0) { 221 - dev_err(&indio_dev->dev, "failed to register iio trigger.\n"); 223 + dev_err(parent, "failed to register iio trigger.\n"); 222 224 return err; 223 225 } 224 226 indio_dev->trig = iio_trigger_get(sdata->trig);
+3 -1
drivers/iio/dac/ad3530r.c
··· 166 166 AD3530R_OUTPUT_OPERATING_MODE_0 : 167 167 AD3530R_OUTPUT_OPERATING_MODE_1; 168 168 pdmode = powerdown ? st->chan[chan->channel].powerdown_mode : 0; 169 - mask = AD3530R_OP_MODE_CHAN_MSK(chan->channel); 169 + mask = chan->channel < AD3531R_MAX_CHANNELS ? 170 + AD3530R_OP_MODE_CHAN_MSK(chan->channel) : 171 + AD3530R_OP_MODE_CHAN_MSK(chan->channel - 4); 170 172 val = field_prep(mask, pdmode); 171 173 172 174 ret = regmap_update_bits(st->regmap, reg, mask, val);
+4 -1
drivers/iio/industrialio-backend.c
··· 155 155 ssize_t rc; 156 156 int ret; 157 157 158 + if (count >= sizeof(buf)) 159 + return -ENOSPC; 160 + 158 161 rc = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf, count); 159 162 if (rc < 0) 160 163 return rc; 161 164 162 - buf[count] = '\0'; 165 + buf[rc] = '\0'; 163 166 164 167 ret = sscanf(buf, "%i %i", &back->cached_reg_addr, &val); 165 168
+4 -1
drivers/iio/industrialio-core.c
··· 411 411 char buf[80]; 412 412 int ret; 413 413 414 + if (count >= sizeof(buf)) 415 + return -EINVAL; 416 + 414 417 ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf, 415 418 count); 416 419 if (ret < 0) 417 420 return ret; 418 421 419 - buf[count] = '\0'; 422 + buf[ret] = '\0'; 420 423 421 424 ret = sscanf(buf, "%i %i", &reg, &val); 422 425
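
Both IIO debugfs fixes (this one and the industrialio-backend.c hunk above) close the same two holes: a write of sizeof(buf) bytes or more would index buf[count] out of bounds, and simple_write_to_buffer() may copy fewer bytes than count, so the terminator belongs at the returned length. A hedged, simplified sketch of the corrected handler shape:

    static ssize_t demo_reg_write(struct file *file, const char __user *userbuf,
                                  size_t count, loff_t *ppos)
    {
            char buf[80];
            ssize_t rc;

            if (count >= sizeof(buf))       /* leave room for the terminator */
                    return -EINVAL;

            rc = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf, count);
            if (rc < 0)
                    return rc;

            buf[rc] = '\0';         /* terminate at what was copied, not count */

            /* ... parse buf and perform the register access ... */
            return count;
    }
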
+1 -1
drivers/input/joystick/xpad.c
··· 172 172 { 0x046d, 0xca88, "Logitech Compact Controller for Xbox", 0, XTYPE_XBOX }, 173 173 { 0x046d, 0xca8a, "Logitech Precision Vibration Feedback Wheel", 0, XTYPE_XBOX }, 174 174 { 0x046d, 0xcaa3, "Logitech DriveFx Racing Wheel", 0, XTYPE_XBOX360 }, 175 + { 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX360 }, 175 176 { 0x056e, 0x2004, "Elecom JC-U3613M", 0, XTYPE_XBOX360 }, 176 177 { 0x05fd, 0x1007, "Mad Catz Controller (unverified)", 0, XTYPE_XBOX }, 177 178 { 0x05fd, 0x107a, "InterAct 'PowerPad Pro' X-Box pad (Germany)", 0, XTYPE_XBOX }, 178 179 { 0x05fe, 0x3030, "Chic Controller", 0, XTYPE_XBOX }, 179 180 { 0x05fe, 0x3031, "Chic Controller", 0, XTYPE_XBOX }, 180 - { 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX }, 181 181 { 0x062a, 0x0020, "Logic3 Xbox GamePad", 0, XTYPE_XBOX }, 182 182 { 0x062a, 0x0033, "Competition Pro Steering Wheel", 0, XTYPE_XBOX }, 183 183 { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX },
+29 -5
drivers/interconnect/core.c
··· 20 20 21 21 #include "internal.h" 22 22 23 - #define ICC_DYN_ID_START 10000 23 + #define ICC_DYN_ID_START 100000 24 24 25 25 #define CREATE_TRACE_POINTS 26 26 #include "trace.h" ··· 819 819 { 820 820 struct icc_node *node; 821 821 822 + if (id >= ICC_DYN_ID_START) 823 + return ERR_PTR(-EINVAL); 824 + 822 825 /* check if node already exists */ 823 826 node = node_find(id); 824 827 if (node) ··· 909 906 return; 910 907 911 908 kfree(node->links); 909 + if (node->id >= ICC_DYN_ID_START) 910 + kfree(node->name); 912 911 kfree(node); 913 912 } 914 913 EXPORT_SYMBOL_GPL(icc_node_destroy); 914 + 915 + /** 916 + * icc_node_set_name() - set node name 917 + * @node: node 918 + * @provider: node provider 919 + * @name: node name 920 + * 921 + * Return: 0 on success, or -ENOMEM on allocation failure 922 + */ 923 + int icc_node_set_name(struct icc_node *node, const struct icc_provider *provider, const char *name) 924 + { 925 + if (node->id >= ICC_DYN_ID_START) { 926 + node->name = kasprintf(GFP_KERNEL, "%s@%s", name, 927 + dev_name(provider->dev)); 928 + if (!node->name) 929 + return -ENOMEM; 930 + } else { 931 + node->name = name; 932 + } 933 + 934 + return 0; 935 + } 936 + EXPORT_SYMBOL_GPL(icc_node_set_name); 915 937 916 938 /** 917 939 * icc_link_nodes() - create link between two nodes ··· 1065 1037 } 1066 1038 node->avg_bw = node->init_avg; 1067 1039 node->peak_bw = node->init_peak; 1068 - 1069 - if (node->id >= ICC_DYN_ID_START) 1070 - node->name = devm_kasprintf(provider->dev, GFP_KERNEL, "%s@%s", 1071 - node->name, dev_name(provider->dev)); 1072 1040 1073 1041 if (node->avg_bw || node->peak_bw) { 1074 1042 if (provider->pre_aggregate)
+2
drivers/interconnect/icc-clk.c
··· 117 117 118 118 node->name = devm_kasprintf(dev, GFP_KERNEL, "%s_master", data[i].name); 119 119 if (!node->name) { 120 + icc_node_destroy(node->id); 120 121 ret = -ENOMEM; 121 122 goto err; 122 123 } ··· 136 135 137 136 node->name = devm_kasprintf(dev, GFP_KERNEL, "%s_slave", data[i].name); 138 137 if (!node->name) { 138 + icc_node_destroy(node->id); 139 139 ret = -ENOMEM; 140 140 goto err; 141 141 }
+6 -1
drivers/interconnect/qcom/icc-rpmh.c
··· 293 293 goto err_remove_nodes; 294 294 } 295 295 296 - node->name = qn->name; 296 + ret = icc_node_set_name(node, provider, qn->name); 297 + if (ret) { 298 + icc_node_destroy(node->id); 299 + goto err_remove_nodes; 300 + } 301 + 297 302 node->data = qn; 298 303 icc_node_add(node, provider); 299 304
+6 -1
drivers/interconnect/qcom/osm-l3.c
··· 236 236 goto err; 237 237 } 238 238 239 - node->name = qnodes[i]->name; 239 + ret = icc_node_set_name(node, provider, qnodes[i]->name); 240 + if (ret) { 241 + icc_node_destroy(node->id); 242 + goto err; 243 + } 244 + 240 245 /* Cast away const and add it back in qcom_osm_l3_set() */ 241 246 node->data = (void *)qnodes[i]; 242 247 icc_node_add(node, provider);
+1
drivers/interconnect/qcom/sc7280.c
··· 238 238 .id = SC7280_MASTER_PCIE_1, 239 239 .channels = 1, 240 240 .buswidth = 8, 241 + .num_links = 1, 241 242 .links = { SC7280_SLAVE_ANOC_PCIE_GEM_NOC }, 242 243 }; 243 244
+5
drivers/interconnect/samsung/exynos.c
··· 134 134 priv->node = icc_node; 135 135 icc_node->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%pOFn", 136 136 bus_dev->of_node); 137 + if (!icc_node->name) { 138 + icc_node_destroy(pdev->id); 139 + return -ENOMEM; 140 + } 141 + 137 142 if (of_property_read_u32(bus_dev->of_node, "samsung,data-clock-ratio", 138 143 &priv->bus_clk_ratio)) 139 144 priv->bus_clk_ratio = EXYNOS_ICC_DEFAULT_BUS_CLK_RATIO;
+12 -21
drivers/iommu/hyperv-iommu.c
··· 193 193 static void 194 194 hyperv_root_ir_compose_msi_msg(struct irq_data *irq_data, struct msi_msg *msg) 195 195 { 196 - u64 status; 197 - u32 vector; 198 - struct irq_cfg *cfg; 199 - int ioapic_id; 200 - const struct cpumask *affinity; 201 - int cpu; 202 - struct hv_interrupt_entry entry; 203 196 struct hyperv_root_ir_data *data = irq_data->chip_data; 197 + struct hv_interrupt_entry entry; 198 + const struct cpumask *affinity; 204 199 struct IO_APIC_route_entry e; 200 + struct irq_cfg *cfg; 201 + int cpu, ioapic_id; 202 + u32 vector; 205 203 206 204 cfg = irqd_cfg(irq_data); 207 205 affinity = irq_data_get_effective_affinity_mask(irq_data); ··· 212 214 && data->entry.ioapic_rte.as_uint64) { 213 215 entry = data->entry; 214 216 215 - status = hv_unmap_ioapic_interrupt(ioapic_id, &entry); 216 - 217 - if (status != HV_STATUS_SUCCESS) 218 - hv_status_debug(status, "failed to unmap\n"); 217 + (void)hv_unmap_ioapic_interrupt(ioapic_id, &entry); 219 218 220 219 data->entry.ioapic_rte.as_uint64 = 0; 221 220 data->entry.source = 0; /* Invalid source */ 222 221 } 223 222 224 223 225 - status = hv_map_ioapic_interrupt(ioapic_id, data->is_level, cpu, 226 - vector, &entry); 227 - 228 - if (status != HV_STATUS_SUCCESS) { 229 - hv_status_err(status, "map failed\n"); 224 + if (hv_map_ioapic_interrupt(ioapic_id, data->is_level, cpu, 225 + vector, &entry)) 230 226 return; 231 - } 232 227 233 228 data->entry = entry; 234 229 ··· 313 322 data = irq_data->chip_data; 314 323 e = &data->entry; 315 324 316 - if (e->source == HV_DEVICE_TYPE_IOAPIC 317 - && e->ioapic_rte.as_uint64) 318 - hv_unmap_ioapic_interrupt(data->ioapic_id, 319 - &data->entry); 325 + if (e->source == HV_DEVICE_TYPE_IOAPIC && 326 + e->ioapic_rte.as_uint64) 327 + (void)hv_unmap_ioapic_interrupt(data->ioapic_id, 328 + &data->entry); 320 329 321 330 kfree(data); 322 331 }
+1 -1
drivers/memstick/core/memstick.c
··· 324 324 static int h_memstick_read_dev_id(struct memstick_dev *card, 325 325 struct memstick_request **mrq) 326 326 { 327 - struct ms_id_register id_reg; 327 + struct ms_id_register id_reg = {}; 328 328 329 329 if (!(*mrq)) { 330 330 memstick_init_req(&card->current_mrq, MS_TPC_READ_REG, &id_reg,
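The memstick fix works because an empty initializer zeroes every byte of the on-stack register struct before it is handed to the TPC read request, so a short or failed device read can no longer leak stack garbage into later consumers. A minimal userspace sketch of the idiom (the struct layout here is hypothetical, not the real ms_id_register):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical register block standing in for ms_id_register. */
struct id_register {
	unsigned char type;
	unsigned char category;
	unsigned char class;
	unsigned char reserved[13];
};

int main(void)
{
	struct id_register id_reg = {};	/* every byte starts at zero */

	/* A short device read that fills only the first two bytes... */
	memset(&id_reg, 0xab, 2);

	/* ...leaves the remaining fields deterministic rather than
	 * whatever happened to be on the stack. */
	printf("class=%u reserved[12]=%u\n", id_reg.class, id_reg.reserved[12]);
	return 0;
}
```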
+17 -7
drivers/misc/amd-sbi/rmi-core.c
··· 42 42 #define RD_MCA_CMD 0x86 43 43 44 44 /* CPUID MCAMSR mask & index */ 45 - #define CPUID_MCA_THRD_MASK GENMASK(15, 0) 46 45 #define CPUID_MCA_THRD_INDEX 32 47 46 #define CPUID_MCA_FUNC_MASK GENMASK(31, 0) 48 47 #define CPUID_EXT_FUNC_INDEX 56 ··· 128 129 goto exit_unlock; 129 130 } 130 131 131 - thread = msg->cpu_in_out << CPUID_MCA_THRD_INDEX & CPUID_MCA_THRD_MASK; 132 + thread = msg->cpu_in_out >> CPUID_MCA_THRD_INDEX; 132 133 133 134 /* Thread > 127, Thread128 CS register, 1'b1 needs to be set to 1 */ 134 135 if (thread > 127) { ··· 209 210 goto exit_unlock; 210 211 } 211 212 212 - thread = msg->mcamsr_in_out << CPUID_MCA_THRD_INDEX & CPUID_MCA_THRD_MASK; 213 + thread = msg->mcamsr_in_out >> CPUID_MCA_THRD_INDEX; 213 214 214 215 /* Thread > 127, Thread128 CS register, 1'b1 needs to be set to 1 */ 215 216 if (thread > 127) { ··· 320 321 ret = regmap_read(data->regmap, SBRMI_OUTBNDMSG7, &ec); 321 322 if (ret || ec) 322 323 goto exit_clear_alert; 324 + 325 + /* Clear the input value before updating the output data */ 326 + msg->mb_in_out = 0; 327 + 323 328 /* 324 329 * For a read operation, the initiator (BMC) reads the firmware 325 330 * response Command Data Out[31:0] from SBRMI::OutBndMsg_inst[4:1] ··· 376 373 mutex_unlock(&data->lock); 377 374 378 375 if (msg.rflag && !ret) 379 - return copy_to_user(arg, &msg, sizeof(struct apml_reg_xfer_msg)); 376 + if (copy_to_user(arg, &msg, sizeof(struct apml_reg_xfer_msg))) 377 + return -EFAULT; 380 378 return ret; 381 379 } 382 380 ··· 395 391 if (ret && ret != -EPROTOTYPE) 396 392 return ret; 397 393 398 - return copy_to_user(arg, &msg, sizeof(struct apml_mbox_msg)); 394 + if (copy_to_user(arg, &msg, sizeof(struct apml_mbox_msg))) 395 + return -EFAULT; 396 + return ret; 399 397 } 400 398 401 399 static int apml_cpuid_xfer(struct sbrmi_data *data, struct apml_cpuid_msg __user *arg) ··· 414 408 if (ret && ret != -EPROTOTYPE) 415 409 return ret; 416 410 417 - return copy_to_user(arg, &msg, sizeof(struct apml_cpuid_msg)); 411 + if (copy_to_user(arg, &msg, sizeof(struct apml_cpuid_msg))) 412 + return -EFAULT; 413 + return ret; 418 414 } 419 415 420 416 static int apml_mcamsr_xfer(struct sbrmi_data *data, struct apml_mcamsr_msg __user *arg) ··· 433 425 if (ret && ret != -EPROTOTYPE) 434 426 return ret; 435 427 436 - return copy_to_user(arg, &msg, sizeof(struct apml_mcamsr_msg)); 428 + if (copy_to_user(arg, &msg, sizeof(struct apml_mcamsr_msg))) 429 + return -EFAULT; 430 + return ret; 437 431 } 438 432 439 433 static long sbrmi_ioctl(struct file *fp, unsigned int cmd, unsigned long arg)
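The central bug fixed here is the thread-ID extraction: left-shifting the user-supplied value by CPUID_MCA_THRD_INDEX and then masking the low 16 bits can only ever yield zero, whereas right-shifting moves the packed field down to bit 0. A standalone before/after sketch (the field layout is assumed from the driver's defines):

```c
#include <stdint.h>
#include <stdio.h>

#define CPUID_MCA_THRD_INDEX 32

int main(void)
{
	/* CPUID payload with thread 5 packed in bits [47:32]. */
	uint64_t cpu_in_out = (5ULL << CPUID_MCA_THRD_INDEX) | 0x1234;

	/* Broken: shifting left by 32 clears the low 32 bits, so masking
	 * with GENMASK(15, 0) (0xFFFF) always produces 0. */
	uint64_t broken = (cpu_in_out << CPUID_MCA_THRD_INDEX) & 0xFFFF;

	/* Fixed: shift the field *down* to bit 0 before using it. */
	uint64_t thread = cpu_in_out >> CPUID_MCA_THRD_INDEX;

	printf("broken=%llu fixed=%llu\n",
	       (unsigned long long)broken, (unsigned long long)thread);
	return 0;
}
```

The same hunk also stops returning copy_to_user()'s nonzero "bytes not copied" count straight to userspace and maps it to -EFAULT, which is the contract ioctl handlers are expected to follow.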
+2 -1
drivers/mmc/host/bcm2835.c
··· 503 503 DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 504 504 505 505 if (!desc) { 506 - dma_unmap_sg(dma_chan->device->dev, data->sg, sg_len, dir_data); 506 + dma_unmap_sg(dma_chan->device->dev, data->sg, data->sg_len, 507 + dir_data); 507 508 return; 508 509 } 509 510
+2 -1
drivers/mmc/host/sdhci-pci-core.c
··· 913 913 { 914 914 return slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_GLK_EMMC && 915 915 (dmi_match(DMI_BIOS_VENDOR, "LENOVO") || 916 - dmi_match(DMI_SYS_VENDOR, "IRBIS")); 916 + dmi_match(DMI_SYS_VENDOR, "IRBIS") || 917 + dmi_match(DMI_SYS_VENDOR, "Positivo Tecnologia SA")); 917 918 } 918 919 919 920 static bool jsl_broken_hs400es(struct sdhci_pci_slot *slot)
+6 -3
drivers/mmc/host/sdhci_am654.c
··· 613 613 static const struct sdhci_pltfm_data sdhci_am654_pdata = { 614 614 .ops = &sdhci_am654_ops, 615 615 .quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12, 616 - .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 616 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 617 + SDHCI_QUIRK2_DISABLE_HW_TIMEOUT, 617 618 }; 618 619 619 620 static const struct sdhci_am654_driver_data sdhci_am654_sr1_drvdata = { ··· 644 643 static const struct sdhci_pltfm_data sdhci_j721e_8bit_pdata = { 645 644 .ops = &sdhci_j721e_8bit_ops, 646 645 .quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12, 647 - .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 646 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 647 + SDHCI_QUIRK2_DISABLE_HW_TIMEOUT, 648 648 }; 649 649 650 650 static const struct sdhci_am654_driver_data sdhci_j721e_8bit_drvdata = { ··· 669 667 static const struct sdhci_pltfm_data sdhci_j721e_4bit_pdata = { 670 668 .ops = &sdhci_j721e_4bit_ops, 671 669 .quirks = SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12, 672 - .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 670 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 671 + SDHCI_QUIRK2_DISABLE_HW_TIMEOUT, 673 672 }; 674 673 675 674 static const struct sdhci_am654_driver_data sdhci_j721e_4bit_drvdata = {
+1
drivers/mux/Kconfig
··· 48 48 config MUX_MMIO 49 49 tristate "MMIO/Regmap register bitfield-controlled Multiplexer" 50 50 depends on OF 51 + select REGMAP_MMIO 51 52 help 52 53 MMIO/Regmap register bitfield-controlled Multiplexer controller. 53 54
+9 -3
drivers/net/can/dev/dev.c
··· 145 145 EXPORT_SYMBOL_GPL(can_change_state); 146 146 147 147 /* CAN device restart for bus-off recovery */ 148 - static void can_restart(struct net_device *dev) 148 + static int can_restart(struct net_device *dev) 149 149 { 150 150 struct can_priv *priv = netdev_priv(dev); 151 151 struct sk_buff *skb; 152 152 struct can_frame *cf; 153 153 int err; 154 + 155 + if (!priv->do_set_mode) 156 + return -EOPNOTSUPP; 154 157 155 158 if (netif_carrier_ok(dev)) 156 159 netdev_err(dev, "Attempt to restart for bus-off recovery, but carrier is OK?\n"); ··· 176 173 if (err) { 177 174 netdev_err(dev, "Restart failed, error %pe\n", ERR_PTR(err)); 178 175 netif_carrier_off(dev); 176 + 177 + return err; 179 178 } else { 180 179 netdev_dbg(dev, "Restarted\n"); 181 180 priv->can_stats.restarts++; 182 181 } 182 + 183 + return 0; 183 184 } 184 185 185 186 static void can_restart_work(struct work_struct *work) ··· 208 201 return -EBUSY; 209 202 210 203 cancel_delayed_work_sync(&priv->restart_work); 211 - can_restart(dev); 212 204 213 - return 0; 205 + return can_restart(dev); 214 206 } 215 207 216 208 /* CAN bus-off
+12
drivers/net/can/dev/netlink.c
··· 285 285 } 286 286 287 287 if (data[IFLA_CAN_RESTART_MS]) { 288 + if (!priv->do_set_mode) { 289 + NL_SET_ERR_MSG(extack, 290 + "Device doesn't support restart from Bus Off"); 291 + return -EOPNOTSUPP; 292 + } 293 + 288 294 /* Do not allow changing restart delay while running */ 289 295 if (dev->flags & IFF_UP) 290 296 return -EBUSY; ··· 298 292 } 299 293 300 294 if (data[IFLA_CAN_RESTART]) { 295 + if (!priv->do_set_mode) { 296 + NL_SET_ERR_MSG(extack, 297 + "Device doesn't support restart from Bus Off"); 298 + return -EOPNOTSUPP; 299 + } 300 + 301 301 /* Do not allow a restart while not running */ 302 302 if (!(dev->flags & IFF_UP)) 303 303 return -EINVAL;
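Together with the dev.c change above, these hunks gate all restart paths on the presence of the do_set_mode callback, failing with -EOPNOTSUPP and an extack string instead of attempting a restart the driver cannot perform. A rough sketch of that gating shape (the struct is an illustrative stand-in, not the real can_priv):

```c
#include <linux/errno.h>
#include <linux/netlink.h>

/* Illustrative stand-in for the relevant part of can_priv. */
struct restart_caps {
	int (*do_set_mode)(void *dev, int mode);
};

static int check_restart_support(const struct restart_caps *priv,
				 struct netlink_ext_ack *extack)
{
	if (!priv->do_set_mode) {
		NL_SET_ERR_MSG(extack,
			       "Device doesn't support restart from Bus Off");
		return -EOPNOTSUPP;
	}
	return 0;
}
```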
+3
drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
··· 816 816 /* Tx SPB */ 817 817 tx_spb_ctrl_wl(intf, ((intf->channel + 8) << TX_SPB_CTRL_XF_BID_SHIFT), 818 818 TX_SPB_CTRL_XF_CTRL2); 819 + 820 + if (intf->parent->tx_chan_offset) 821 + tx_pause_ctrl_wl(intf, (1 << (intf->channel + 8)), TX_PAUSE_MAP_VECTOR); 819 822 tx_spb_top_wl(intf, 0x1e, TX_SPB_TOP_BLKOUT); 820 823 821 824 tx_spb_dma_wq(intf, intf->tx_spb_dma_addr, TX_SPB_DMA_READ);
+12 -3
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
··· 4666 4666 return PTR_ERR(dpmac_dev); 4667 4667 } 4668 4668 4669 - if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) 4669 + if (IS_ERR(dpmac_dev)) 4670 4670 return 0; 4671 4671 4672 + if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) { 4673 + err = 0; 4674 + goto out_put_device; 4675 + } 4676 + 4672 4677 mac = kzalloc(sizeof(struct dpaa2_mac), GFP_KERNEL); 4673 - if (!mac) 4674 - return -ENOMEM; 4678 + if (!mac) { 4679 + err = -ENOMEM; 4680 + goto out_put_device; 4681 + } 4675 4682 4676 4683 mac->mc_dev = dpmac_dev; 4677 4684 mac->mc_io = priv->mc_io; ··· 4712 4705 dpaa2_mac_close(mac); 4713 4706 err_free_mac: 4714 4707 kfree(mac); 4708 + out_put_device: 4709 + put_device(&dpmac_dev->dev); 4715 4710 return err; 4716 4711 } 4717 4712
+12 -3
drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
··· 1448 1448 if (PTR_ERR(dpmac_dev) == -EPROBE_DEFER) 1449 1449 return PTR_ERR(dpmac_dev); 1450 1450 1451 - if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) 1451 + if (IS_ERR(dpmac_dev)) 1452 1452 return 0; 1453 1453 1454 + if (dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) { 1455 + err = 0; 1456 + goto out_put_device; 1457 + } 1458 + 1454 1459 mac = kzalloc(sizeof(*mac), GFP_KERNEL); 1455 - if (!mac) 1456 - return -ENOMEM; 1460 + if (!mac) { 1461 + err = -ENOMEM; 1462 + goto out_put_device; 1463 + } 1457 1464 1458 1465 mac->mc_dev = dpmac_dev; 1459 1466 mac->mc_io = port_priv->ethsw_data->mc_io; ··· 1490 1483 dpaa2_mac_close(mac); 1491 1484 err_free_mac: 1492 1485 kfree(mac); 1486 + out_put_device: 1487 + put_device(&dpmac_dev->dev); 1493 1488 return err; 1494 1489 } 1495 1490
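Both dpaa2 fixes share one shape: the dpmac lookup returns a device with an elevated reference count, so every early exit after the lookup, not just the failure tail, has to funnel through a put_device() label. A generic sketch of the idiom under assumed names (the lookup call and device name are illustrative, not the dpaa2 API):

```c
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

static int connect_peer(void)
{
	struct device *peer;
	void *ctx;
	int err = 0;

	peer = bus_find_device_by_name(&platform_bus_type, NULL, "peer.0");
	if (!peer)
		return 0;		/* nothing found, no reference taken */

	/* From here on, every exit path owns a reference on 'peer'. */
	ctx = kzalloc(16, GFP_KERNEL);
	if (!ctx) {
		err = -ENOMEM;
		goto out_put_device;
	}

	/* ... set up the connection ... */
	kfree(ctx);

out_put_device:
	put_device(peer);	/* pairs with the get inside the lookup */
	return err;
}
```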
+37 -30
drivers/net/ethernet/google/gve/gve_main.c
··· 1988 1988 gve_handle_link_status(priv, GVE_DEVICE_STATUS_LINK_STATUS_MASK & status); 1989 1989 } 1990 1990 1991 - static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue) 1991 + static struct gve_notify_block *gve_get_tx_notify_block(struct gve_priv *priv, 1992 + unsigned int txqueue) 1992 1993 { 1993 - struct gve_notify_block *block; 1994 - struct gve_tx_ring *tx = NULL; 1995 - struct gve_priv *priv; 1996 - u32 last_nic_done; 1997 - u32 current_time; 1998 1994 u32 ntfy_idx; 1999 1995 2000 - netdev_info(dev, "Timeout on tx queue, %d", txqueue); 2001 - priv = netdev_priv(dev); 2002 1996 if (txqueue > priv->tx_cfg.num_queues) 2003 - goto reset; 1997 + return NULL; 2004 1998 2005 1999 ntfy_idx = gve_tx_idx_to_ntfy(priv, txqueue); 2006 2000 if (ntfy_idx >= priv->num_ntfy_blks) 2007 - goto reset; 2001 + return NULL; 2008 2002 2009 - block = &priv->ntfy_blocks[ntfy_idx]; 2010 - tx = block->tx; 2003 + return &priv->ntfy_blocks[ntfy_idx]; 2004 + } 2005 + 2006 + static bool gve_tx_timeout_try_q_kick(struct gve_priv *priv, 2007 + unsigned int txqueue) 2008 + { 2009 + struct gve_notify_block *block; 2010 + u32 current_time; 2011 + 2012 + block = gve_get_tx_notify_block(priv, txqueue); 2013 + 2014 + if (!block) 2015 + return false; 2011 2016 2012 2017 current_time = jiffies_to_msecs(jiffies); 2013 - if (tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time) 2014 - goto reset; 2018 + if (block->tx->last_kick_msec + MIN_TX_TIMEOUT_GAP > current_time) 2019 + return false; 2015 2020 2016 - /* Check to see if there are missed completions, which will allow us to 2017 - * kick the queue. 2018 - */ 2019 - last_nic_done = gve_tx_load_event_counter(priv, tx); 2020 - if (last_nic_done - tx->done) { 2021 - netdev_info(dev, "Kicking queue %d", txqueue); 2022 - iowrite32be(GVE_IRQ_MASK, gve_irq_doorbell(priv, block)); 2023 - napi_schedule(&block->napi); 2024 - tx->last_kick_msec = current_time; 2025 - goto out; 2026 - } // Else reset. 2021 + netdev_info(priv->dev, "Kicking queue %d", txqueue); 2022 + napi_schedule(&block->napi); 2023 + block->tx->last_kick_msec = current_time; 2024 + return true; 2025 + } 2027 2026 2028 - reset: 2029 - gve_schedule_reset(priv); 2027 + static void gve_tx_timeout(struct net_device *dev, unsigned int txqueue) 2028 + { 2029 + struct gve_notify_block *block; 2030 + struct gve_priv *priv; 2030 2031 2031 - out: 2032 - if (tx) 2033 - tx->queue_timeout++; 2032 + netdev_info(dev, "Timeout on tx queue, %d", txqueue); 2033 + priv = netdev_priv(dev); 2034 + 2035 + if (!gve_tx_timeout_try_q_kick(priv, txqueue)) 2036 + gve_schedule_reset(priv); 2037 + 2038 + block = gve_get_tx_notify_block(priv, txqueue); 2039 + if (block) 2040 + block->tx->queue_timeout++; 2034 2041 priv->tx_timeo_cnt++; 2035 2042 } 2036 2043
+31
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 11 11 #include <linux/irq.h> 12 12 #include <linux/ip.h> 13 13 #include <linux/ipv6.h> 14 + #include <linux/iommu.h> 14 15 #include <linux/module.h> 15 16 #include <linux/pci.h> 16 17 #include <linux/skbuff.h> ··· 1040 1039 static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring) 1041 1040 { 1042 1041 u32 alloc_size = ring->tqp->handle->kinfo.tx_spare_buf_size; 1042 + struct net_device *netdev = ring_to_netdev(ring); 1043 + struct hns3_nic_priv *priv = netdev_priv(netdev); 1043 1044 struct hns3_tx_spare *tx_spare; 1044 1045 struct page *page; 1045 1046 dma_addr_t dma; ··· 1083 1080 tx_spare->buf = page_address(page); 1084 1081 tx_spare->len = PAGE_SIZE << order; 1085 1082 ring->tx_spare = tx_spare; 1083 + ring->tx_copybreak = priv->tx_copybreak; 1086 1084 return; 1087 1085 1088 1086 dma_mapping_error: ··· 4878 4874 devm_kfree(&pdev->dev, priv->tqp_vector); 4879 4875 } 4880 4876 4877 + static void hns3_update_tx_spare_buf_config(struct hns3_nic_priv *priv) 4878 + { 4879 + #define HNS3_MIN_SPARE_BUF_SIZE (2 * 1024 * 1024) 4880 + #define HNS3_MAX_PACKET_SIZE (64 * 1024) 4881 + 4882 + struct iommu_domain *domain = iommu_get_domain_for_dev(priv->dev); 4883 + struct hnae3_ae_dev *ae_dev = hns3_get_ae_dev(priv->ae_handle); 4884 + struct hnae3_handle *handle = priv->ae_handle; 4885 + 4886 + if (ae_dev->dev_version < HNAE3_DEVICE_VERSION_V3) 4887 + return; 4888 + 4889 + if (!(domain && iommu_is_dma_domain(domain))) 4890 + return; 4891 + 4892 + priv->min_tx_copybreak = HNS3_MAX_PACKET_SIZE; 4893 + priv->min_tx_spare_buf_size = HNS3_MIN_SPARE_BUF_SIZE; 4894 + 4895 + if (priv->tx_copybreak < priv->min_tx_copybreak) 4896 + priv->tx_copybreak = priv->min_tx_copybreak; 4897 + if (handle->kinfo.tx_spare_buf_size < priv->min_tx_spare_buf_size) 4898 + handle->kinfo.tx_spare_buf_size = priv->min_tx_spare_buf_size; 4899 + } 4900 + 4881 4901 static void hns3_ring_get_cfg(struct hnae3_queue *q, struct hns3_nic_priv *priv, 4882 4902 unsigned int ring_type) 4883 4903 { ··· 5135 5107 int i, j; 5136 5108 int ret; 5137 5109 5110 + hns3_update_tx_spare_buf_config(priv); 5138 5111 for (i = 0; i < ring_num; i++) { 5139 5112 ret = hns3_alloc_ring_memory(&priv->ring[i]); 5140 5113 if (ret) { ··· 5342 5313 priv->ae_handle = handle; 5343 5314 priv->tx_timeout_count = 0; 5344 5315 priv->max_non_tso_bd_num = ae_dev->dev_specs.max_non_tso_bd_num; 5316 + priv->min_tx_copybreak = 0; 5317 + priv->min_tx_spare_buf_size = 0; 5345 5318 set_bit(HNS3_NIC_STATE_DOWN, &priv->state); 5346 5319 5347 5320 handle->msg_enable = netif_msg_init(debug, DEFAULT_MSG_LEVEL);
+2
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
··· 596 596 struct hns3_enet_coalesce rx_coal; 597 597 u32 tx_copybreak; 598 598 u32 rx_copybreak; 599 + u32 min_tx_copybreak; 600 + u32 min_tx_spare_buf_size; 599 601 }; 600 602 601 603 union l3_hdr_info {
+21 -15
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 9576 9576 return false; 9577 9577 } 9578 9578 9579 - int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en) 9579 + static int __hclge_enable_vport_vlan_filter(struct hclge_vport *vport, 9580 + bool request_en) 9580 9581 { 9581 - struct hclge_dev *hdev = vport->back; 9582 9582 bool need_en; 9583 9583 int ret; 9584 9584 9585 - mutex_lock(&hdev->vport_lock); 9586 - 9587 - vport->req_vlan_fltr_en = request_en; 9588 - 9589 9585 need_en = hclge_need_enable_vport_vlan_filter(vport); 9590 - if (need_en == vport->cur_vlan_fltr_en) { 9591 - mutex_unlock(&hdev->vport_lock); 9586 + if (need_en == vport->cur_vlan_fltr_en) 9592 9587 return 0; 9593 - } 9594 9588 9595 9589 ret = hclge_set_vport_vlan_filter(vport, need_en); 9596 - if (ret) { 9597 - mutex_unlock(&hdev->vport_lock); 9590 + if (ret) 9598 9591 return ret; 9599 - } 9600 9592 9601 9593 vport->cur_vlan_fltr_en = need_en; 9602 9594 9595 + return 0; 9596 + } 9597 + 9598 + int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en) 9599 + { 9600 + struct hclge_dev *hdev = vport->back; 9601 + int ret; 9602 + 9603 + mutex_lock(&hdev->vport_lock); 9604 + vport->req_vlan_fltr_en = request_en; 9605 + ret = __hclge_enable_vport_vlan_filter(vport, request_en); 9603 9606 mutex_unlock(&hdev->vport_lock); 9604 9607 9605 - return 0; 9608 + return ret; 9606 9609 } 9607 9610 9608 9611 static int hclge_enable_vlan_filter(struct hnae3_handle *handle, bool enable) ··· 10626 10623 &vport->state)) 10627 10624 continue; 10628 10625 10629 - ret = hclge_enable_vport_vlan_filter(vport, 10630 - vport->req_vlan_fltr_en); 10626 + mutex_lock(&hdev->vport_lock); 10627 + ret = __hclge_enable_vport_vlan_filter(vport, 10628 + vport->req_vlan_fltr_en); 10631 10629 if (ret) { 10632 10630 dev_err(&hdev->pdev->dev, 10633 10631 "failed to sync vlan filter state for vport%u, ret = %d\n", 10634 10632 vport->vport_id, ret); 10635 10633 set_bit(HCLGE_VPORT_STATE_VLAN_FLTR_CHANGE, 10636 10634 &vport->state); 10635 + mutex_unlock(&hdev->vport_lock); 10637 10636 return; 10638 10637 } 10638 + mutex_unlock(&hdev->vport_lock); 10639 10639 } 10640 10640 } 10641 10641
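The hclge rework is the classic locked/unlocked split: the double-underscore variant documents "caller holds vport_lock", which lets the vlan sync loop take the mutex once around both its test-and-clear and the filter update instead of re-acquiring it. The pattern in isolation, with illustrative names:

```c
#include <linux/mutex.h>

static DEFINE_MUTEX(cfg_lock);
static bool cur_state;

/* Caller must hold cfg_lock; does the real work. */
static int __set_state(bool on)
{
	if (on == cur_state)
		return 0;
	/* ... program the hardware ... */
	cur_state = on;
	return 0;
}

/* Public entry point: owns the lock for simple callers. */
static int set_state(bool on)
{
	int ret;

	mutex_lock(&cfg_lock);
	ret = __set_state(on);
	mutex_unlock(&cfg_lock);
	return ret;
}
```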
+6 -3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
··· 497 497 if (ret) { 498 498 dev_err(&hdev->pdev->dev, 499 499 "failed to init freq, ret = %d\n", ret); 500 - goto out; 500 + goto out_clear_int; 501 501 } 502 502 503 503 ret = hclge_ptp_set_ts_mode(hdev, &hdev->ptp->ts_cfg); 504 504 if (ret) { 505 505 dev_err(&hdev->pdev->dev, 506 506 "failed to init ts mode, ret = %d\n", ret); 507 - goto out; 507 + goto out_clear_int; 508 508 } 509 509 510 510 ktime_get_real_ts64(&ts); ··· 512 512 if (ret) { 513 513 dev_err(&hdev->pdev->dev, 514 514 "failed to init ts time, ret = %d\n", ret); 515 - goto out; 515 + goto out_clear_int; 516 516 } 517 517 518 518 set_bit(HCLGE_STATE_PTP_EN, &hdev->state); ··· 520 520 521 521 return 0; 522 522 523 + out_clear_int: 524 + clear_bit(HCLGE_PTP_FLAG_EN, &hdev->ptp->flags); 525 + hclge_ptp_int_en(hdev, false); 523 526 out: 524 527 hclge_ptp_destroy_clock(hdev); 525 528
+1 -5
drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
··· 3094 3094 3095 3095 static u32 hclgevf_get_max_channels(struct hclgevf_dev *hdev) 3096 3096 { 3097 - struct hnae3_handle *nic = &hdev->nic; 3098 - struct hnae3_knic_private_info *kinfo = &nic->kinfo; 3099 - 3100 - return min_t(u32, hdev->rss_size_max, 3101 - hdev->num_tqps / kinfo->tc_info.num_tc); 3097 + return min(hdev->rss_size_max, hdev->num_tqps); 3102 3098 } 3103 3099 3104 3100 /**
+3
drivers/net/ethernet/intel/e1000e/defines.h
··· 638 638 /* For checksumming, the sum of all words in the NVM should equal 0xBABA. */ 639 639 #define NVM_SUM 0xBABA 640 640 641 + /* Uninitialized ("empty") checksum word value */ 642 + #define NVM_CHECKSUM_UNINITIALIZED 0xFFFF 643 + 641 644 /* PBA (printed board assembly) number words */ 642 645 #define NVM_PBA_OFFSET_0 8 643 646 #define NVM_PBA_OFFSET_1 9
+2
drivers/net/ethernet/intel/e1000e/ich8lan.c
··· 4274 4274 ret_val = e1000e_update_nvm_checksum(hw); 4275 4275 if (ret_val) 4276 4276 return ret_val; 4277 + } else if (hw->mac.type == e1000_pch_tgp) { 4278 + return 0; 4277 4279 } 4278 4280 } 4279 4281
+6
drivers/net/ethernet/intel/e1000e/nvm.c
··· 558 558 checksum += nvm_data; 559 559 } 560 560 561 + if (hw->mac.type == e1000_pch_tgp && 562 + nvm_data == NVM_CHECKSUM_UNINITIALIZED) { 563 + e_dbg("Uninitialized NVM Checksum on TGP platform - ignoring\n"); 564 + return 0; 565 + } 566 + 561 567 if (checksum != (u16)NVM_SUM) { 562 568 e_dbg("NVM Checksum Invalid\n"); 563 569 return -E1000_ERR_NVM;
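The NVM validation sums every word, checksum word included, and expects 0xBABA; a factory-blank (0xFFFF) checksum word on TGP is now tolerated because the NVM there is platform-controlled and may legitimately ship uninitialized. A worked userspace example of the scheme on a toy four-word image:

```c
#include <stdint.h>
#include <stdio.h>

#define NVM_SUM			0xBABA
#define NVM_CHECKSUM_UNINITIALIZED	0xFFFF

int main(void)
{
	/* Last word is the checksum, chosen so the 16-bit sum of all
	 * words (checksum included) equals 0xBABA. */
	uint16_t nvm[4] = { 0x1111, 0x2222, 0x3333 };
	uint16_t sum = 0;

	nvm[3] = (uint16_t)(NVM_SUM - (nvm[0] + nvm[1] + nvm[2])); /* 0x5454 */

	for (int i = 0; i < 4; i++)
		sum += nvm[i];

	printf("checksum word 0x%04X, total 0x%04X (%s)\n", nvm[3], sum,
	       sum == NVM_SUM ? "valid" : "invalid");
	return 0;
}
```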
+3 -3
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 3138 3138 const u8 *addr = al->list[i].addr; 3139 3139 3140 3140 /* Allow to delete VF primary MAC only if it was not set 3141 - * administratively by PF or if VF is trusted. 3141 + * administratively by PF. 3142 3142 */ 3143 3143 if (ether_addr_equal(addr, vf->default_lan_addr.addr)) { 3144 - if (i40e_can_vf_change_mac(vf)) 3144 + if (!vf->pf_set_mac) 3145 3145 was_unimac_deleted = true; 3146 3146 else 3147 3147 continue; ··· 5008 5008 vf_stats->broadcast = stats->rx_broadcast; 5009 5009 vf_stats->multicast = stats->rx_multicast; 5010 5010 vf_stats->rx_dropped = stats->rx_discards + stats->rx_discards_other; 5011 - vf_stats->tx_dropped = stats->tx_discards; 5011 + vf_stats->tx_dropped = stats->tx_errors; 5012 5012 5013 5013 return 0; 5014 5014 }
+2
drivers/net/ethernet/intel/ice/ice_ddp.c
··· 2301 2301 return ICE_DDP_PKG_ERR; 2302 2302 2303 2303 buf_copy = devm_kmemdup(ice_hw_to_dev(hw), buf, len, GFP_KERNEL); 2304 + if (!buf_copy) 2305 + return ICE_DDP_PKG_ERR; 2304 2306 2305 2307 state = ice_init_pkg(hw, buf_copy, len); 2306 2308 if (!ice_is_init_pkg_successful(state)) {
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 1947 1947 1948 1948 err = mlx5_cmd_invoke(dev, inb, outb, out, out_size, callback, context, 1949 1949 pages_queue, token, force_polling); 1950 - if (callback) 1951 - return err; 1950 + if (callback && !err) 1951 + return 0; 1952 1952 1953 1953 if (err > 0) /* Failed in FW, command didn't execute */ 1954 1954 err = deliv_status_to_err(err);
+54 -54
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 1182 1182 static int esw_add_fdb_peer_miss_rules(struct mlx5_eswitch *esw, 1183 1183 struct mlx5_core_dev *peer_dev) 1184 1184 { 1185 + struct mlx5_eswitch *peer_esw = peer_dev->priv.eswitch; 1185 1186 struct mlx5_flow_destination dest = {}; 1186 1187 struct mlx5_flow_act flow_act = {0}; 1187 1188 struct mlx5_flow_handle **flows; 1188 - /* total vports is the same for both e-switches */ 1189 - int nvports = esw->total_vports; 1190 1189 struct mlx5_flow_handle *flow; 1190 + struct mlx5_vport *peer_vport; 1191 1191 struct mlx5_flow_spec *spec; 1192 - struct mlx5_vport *vport; 1193 1192 int err, pfindex; 1194 1193 unsigned long i; 1195 1194 void *misc; 1196 1195 1197 - if (!MLX5_VPORT_MANAGER(esw->dev) && !mlx5_core_is_ecpf_esw_manager(esw->dev)) 1196 + if (!MLX5_VPORT_MANAGER(peer_dev) && 1197 + !mlx5_core_is_ecpf_esw_manager(peer_dev)) 1198 1198 return 0; 1199 1199 1200 1200 spec = kvzalloc(sizeof(*spec), GFP_KERNEL); ··· 1203 1203 1204 1204 peer_miss_rules_setup(esw, peer_dev, spec, &dest); 1205 1205 1206 - flows = kvcalloc(nvports, sizeof(*flows), GFP_KERNEL); 1206 + flows = kvcalloc(peer_esw->total_vports, sizeof(*flows), GFP_KERNEL); 1207 1207 if (!flows) { 1208 1208 err = -ENOMEM; 1209 1209 goto alloc_flows_err; ··· 1213 1213 misc = MLX5_ADDR_OF(fte_match_param, spec->match_value, 1214 1214 misc_parameters); 1215 1215 1216 - if (mlx5_core_is_ecpf_esw_manager(esw->dev)) { 1217 - vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF); 1218 - esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch, 1219 - spec, MLX5_VPORT_PF); 1216 + if (mlx5_core_is_ecpf_esw_manager(peer_dev)) { 1217 + peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF); 1218 + esw_set_peer_miss_rule_source_port(esw, peer_esw, spec, 1219 + MLX5_VPORT_PF); 1220 1220 1221 1221 flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), 1222 1222 spec, &flow_act, &dest, 1); ··· 1224 1224 err = PTR_ERR(flow); 1225 1225 goto add_pf_flow_err; 1226 1226 } 1227 - flows[vport->index] = flow; 1227 + flows[peer_vport->index] = flow; 1228 1228 } 1229 1229 1230 - if (mlx5_ecpf_vport_exists(esw->dev)) { 1231 - vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF); 1230 + if (mlx5_ecpf_vport_exists(peer_dev)) { 1231 + peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF); 1232 1232 MLX5_SET(fte_match_set_misc, misc, source_port, MLX5_VPORT_ECPF); 1233 1233 flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), 1234 1234 spec, &flow_act, &dest, 1); ··· 1236 1236 err = PTR_ERR(flow); 1237 1237 goto add_ecpf_flow_err; 1238 1238 } 1239 - flows[vport->index] = flow; 1239 + flows[peer_vport->index] = flow; 1240 1240 } 1241 1241 1242 - mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) { 1242 + mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport, 1243 + mlx5_core_max_vfs(peer_dev)) { 1243 1244 esw_set_peer_miss_rule_source_port(esw, 1244 - peer_dev->priv.eswitch, 1245 - spec, vport->vport); 1245 + peer_esw, 1246 + spec, peer_vport->vport); 1246 1247 1247 1248 flow = mlx5_add_flow_rules(mlx5_eswitch_get_slow_fdb(esw), 1248 1249 spec, &flow_act, &dest, 1); ··· 1251 1250 err = PTR_ERR(flow); 1252 1251 goto add_vf_flow_err; 1253 1252 } 1254 - flows[vport->index] = flow; 1253 + flows[peer_vport->index] = flow; 1255 1254 } 1256 1255 1257 - if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1258 - mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) { 1259 - if (i >= mlx5_core_max_ec_vfs(peer_dev)) 1260 - break; 1261 - esw_set_peer_miss_rule_source_port(esw, peer_dev->priv.eswitch, 1262 
- spec, vport->vport); 1256 + if (mlx5_core_ec_sriov_enabled(peer_dev)) { 1257 + mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport, 1258 + mlx5_core_max_ec_vfs(peer_dev)) { 1259 + esw_set_peer_miss_rule_source_port(esw, peer_esw, 1260 + spec, 1261 + peer_vport->vport); 1263 1262 flow = mlx5_add_flow_rules(esw->fdb_table.offloads.slow_fdb, 1264 1263 spec, &flow_act, &dest, 1); 1265 1264 if (IS_ERR(flow)) { 1266 1265 err = PTR_ERR(flow); 1267 1266 goto add_ec_vf_flow_err; 1268 1267 } 1269 - flows[vport->index] = flow; 1268 + flows[peer_vport->index] = flow; 1270 1269 } 1271 1270 } 1272 1271 ··· 1283 1282 return 0; 1284 1283 1285 1284 add_ec_vf_flow_err: 1286 - mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) { 1287 - if (!flows[vport->index]) 1285 + mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport, 1286 + mlx5_core_max_ec_vfs(peer_dev)) { 1287 + if (!flows[peer_vport->index]) 1288 1288 continue; 1289 - mlx5_del_flow_rules(flows[vport->index]); 1289 + mlx5_del_flow_rules(flows[peer_vport->index]); 1290 1290 } 1291 1291 add_vf_flow_err: 1292 - mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) { 1293 - if (!flows[vport->index]) 1292 + mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport, 1293 + mlx5_core_max_vfs(peer_dev)) { 1294 + if (!flows[peer_vport->index]) 1294 1295 continue; 1295 - mlx5_del_flow_rules(flows[vport->index]); 1296 + mlx5_del_flow_rules(flows[peer_vport->index]); 1296 1297 } 1297 - if (mlx5_ecpf_vport_exists(esw->dev)) { 1298 - vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF); 1299 - mlx5_del_flow_rules(flows[vport->index]); 1298 + if (mlx5_ecpf_vport_exists(peer_dev)) { 1299 + peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF); 1300 + mlx5_del_flow_rules(flows[peer_vport->index]); 1300 1301 } 1301 1302 add_ecpf_flow_err: 1302 - if (mlx5_core_is_ecpf_esw_manager(esw->dev)) { 1303 - vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF); 1304 - mlx5_del_flow_rules(flows[vport->index]); 1303 + if (mlx5_core_is_ecpf_esw_manager(peer_dev)) { 1304 + peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF); 1305 + mlx5_del_flow_rules(flows[peer_vport->index]); 1305 1306 } 1306 1307 add_pf_flow_err: 1307 1308 esw_warn(esw->dev, "FDB: Failed to add peer miss flow rule err %d\n", err); ··· 1316 1313 static void esw_del_fdb_peer_miss_rules(struct mlx5_eswitch *esw, 1317 1314 struct mlx5_core_dev *peer_dev) 1318 1315 { 1316 + struct mlx5_eswitch *peer_esw = peer_dev->priv.eswitch; 1319 1317 u16 peer_index = mlx5_get_dev_index(peer_dev); 1320 1318 struct mlx5_flow_handle **flows; 1321 - struct mlx5_vport *vport; 1319 + struct mlx5_vport *peer_vport; 1322 1320 unsigned long i; 1323 1321 1324 1322 flows = esw->fdb_table.offloads.peer_miss_rules[peer_index]; 1325 1323 if (!flows) 1326 1324 return; 1327 1325 1328 - if (mlx5_core_ec_sriov_enabled(esw->dev)) { 1329 - mlx5_esw_for_each_ec_vf_vport(esw, i, vport, mlx5_core_max_ec_vfs(esw->dev)) { 1330 - /* The flow for a particular vport could be NULL if the other ECPF 1331 - * has fewer or no VFs enabled 1332 - */ 1333 - if (!flows[vport->index]) 1334 - continue; 1335 - mlx5_del_flow_rules(flows[vport->index]); 1336 - } 1326 + if (mlx5_core_ec_sriov_enabled(peer_dev)) { 1327 + mlx5_esw_for_each_ec_vf_vport(peer_esw, i, peer_vport, 1328 + mlx5_core_max_ec_vfs(peer_dev)) 1329 + mlx5_del_flow_rules(flows[peer_vport->index]); 1337 1330 } 1338 1331 1339 - mlx5_esw_for_each_vf_vport(esw, i, vport, mlx5_core_max_vfs(esw->dev)) 1340 - mlx5_del_flow_rules(flows[vport->index]); 1332 
+ mlx5_esw_for_each_vf_vport(peer_esw, i, peer_vport, 1333 + mlx5_core_max_vfs(peer_dev)) 1334 + mlx5_del_flow_rules(flows[peer_vport->index]); 1341 1335 1342 - if (mlx5_ecpf_vport_exists(esw->dev)) { 1343 - vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_ECPF); 1344 - mlx5_del_flow_rules(flows[vport->index]); 1336 + if (mlx5_ecpf_vport_exists(peer_dev)) { 1337 + peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_ECPF); 1338 + mlx5_del_flow_rules(flows[peer_vport->index]); 1345 1339 } 1346 1340 1347 - if (mlx5_core_is_ecpf_esw_manager(esw->dev)) { 1348 - vport = mlx5_eswitch_get_vport(esw, MLX5_VPORT_PF); 1349 - mlx5_del_flow_rules(flows[vport->index]); 1341 + if (mlx5_core_is_ecpf_esw_manager(peer_dev)) { 1342 + peer_vport = mlx5_eswitch_get_vport(peer_esw, MLX5_VPORT_PF); 1343 + mlx5_del_flow_rules(flows[peer_vport->index]); 1350 1344 } 1351 1345 1352 1346 kvfree(flows);
+1
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 8 8 #include <linux/version.h> 9 9 #include <linux/msi.h> 10 10 #include <linux/irqdomain.h> 11 + #include <linux/export.h> 11 12 12 13 #include <net/mana/mana.h> 13 14 #include <net/mana/hw_channel.h>
+1
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 10 10 #include <linux/filter.h> 11 11 #include <linux/mm.h> 12 12 #include <linux/pci.h> 13 + #include <linux/export.h> 13 14 14 15 #include <net/checksum.h> 15 16 #include <net/ip6_checksum.h>
+108 -54
drivers/net/ethernet/ti/icssg/icssg_config.c
··· 288 288 int i; 289 289 290 290 addr = lower_32_bits(prueth->msmcram.pa); 291 - if (slice) 292 - addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE; 291 + if (slice) { 292 + if (prueth->pdata.banked_ms_ram) 293 + addr += MSMC_RAM_BANK_SIZE; 294 + else 295 + addr += PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE; 296 + } 293 297 294 298 if (addr % SZ_64K) { 295 299 dev_warn(prueth->dev, "buffer pool needs to be 64KB aligned\n"); ··· 301 297 } 302 298 303 299 bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET; 304 - /* workaround for f/w bug. bpool 0 needs to be initialized */ 305 - for (i = 0; i < PRUETH_NUM_BUF_POOLS; i++) { 300 + 301 + /* Configure buffer pools for forwarding buffers 302 + * - used by firmware to store packets to be forwarded to other port 303 + * - 8 total pools per slice 304 + */ 305 + for (i = 0; i < PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE; i++) { 306 306 writel(addr, &bpool_cfg[i].addr); 307 - writel(PRUETH_EMAC_BUF_POOL_SIZE, &bpool_cfg[i].len); 308 - addr += PRUETH_EMAC_BUF_POOL_SIZE; 307 + writel(PRUETH_SW_FWD_BUF_POOL_SIZE, &bpool_cfg[i].len); 308 + addr += PRUETH_SW_FWD_BUF_POOL_SIZE; 309 309 } 310 310 311 - if (!slice) 312 - addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE; 313 - else 314 - addr += PRUETH_SW_NUM_BUF_POOLS_HOST * PRUETH_SW_BUF_POOL_SIZE_HOST; 311 + /* Configure buffer pools for Local Injection buffers 312 + * - used by firmware to store packets received from host core 313 + * - 16 total pools per slice 314 + */ 315 + for (i = 0; i < PRUETH_NUM_LI_BUF_POOLS_PER_SLICE; i++) { 316 + int cfg_idx = i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE; 315 317 316 - for (i = PRUETH_NUM_BUF_POOLS; 317 - i < 2 * PRUETH_SW_NUM_BUF_POOLS_HOST + PRUETH_NUM_BUF_POOLS; 318 - i++) { 319 - /* The driver only uses first 4 queues per PRU so only initialize them */ 320 - if (i % PRUETH_SW_NUM_BUF_POOLS_HOST < PRUETH_SW_NUM_BUF_POOLS_PER_PRU) { 321 - writel(addr, &bpool_cfg[i].addr); 322 - writel(PRUETH_SW_BUF_POOL_SIZE_HOST, &bpool_cfg[i].len); 323 - addr += PRUETH_SW_BUF_POOL_SIZE_HOST; 318 + /* The driver only uses first 4 queues per PRU, 319 + * so only initialize buffer for them 320 + */ 321 + if ((i % PRUETH_NUM_LI_BUF_POOLS_PER_PORT_PER_SLICE) 322 + < PRUETH_SW_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE) { 323 + writel(addr, &bpool_cfg[cfg_idx].addr); 324 + writel(PRUETH_SW_LI_BUF_POOL_SIZE, 325 + &bpool_cfg[cfg_idx].len); 326 + addr += PRUETH_SW_LI_BUF_POOL_SIZE; 324 327 } else { 325 - writel(0, &bpool_cfg[i].addr); 326 - writel(0, &bpool_cfg[i].len); 328 + writel(0, &bpool_cfg[cfg_idx].addr); 329 + writel(0, &bpool_cfg[cfg_idx].len); 327 330 } 328 331 } 329 332 330 - if (!slice) 331 - addr += PRUETH_SW_NUM_BUF_POOLS_HOST * PRUETH_SW_BUF_POOL_SIZE_HOST; 332 - else 333 - addr += PRUETH_EMAC_RX_CTX_BUF_SIZE; 333 + /* Express RX buffer queue 334 + * - used by firmware to store express packets to be transmitted 335 + * to the host core 336 + */ 337 + rxq_ctx = emac->dram.va + HOST_RX_Q_EXP_CONTEXT_OFFSET; 338 + for (i = 0; i < 3; i++) 339 + writel(addr, &rxq_ctx->start[i]); 334 340 341 + addr += PRUETH_SW_HOST_EXP_BUF_POOL_SIZE; 342 + writel(addr, &rxq_ctx->end); 343 + 344 + /* Pre-emptible RX buffer queue 345 + * - used by firmware to store preemptible packets to be transmitted 346 + * to the host core 347 + */ 335 348 rxq_ctx = emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET; 336 349 for (i = 0; i < 3; i++) 337 350 writel(addr, &rxq_ctx->start[i]); 338 351 339 - addr += PRUETH_EMAC_RX_CTX_BUF_SIZE; 340 - writel(addr - SZ_2K, &rxq_ctx->end); 352 + addr += 
PRUETH_SW_HOST_PRE_BUF_POOL_SIZE; 353 + writel(addr, &rxq_ctx->end); 354 + 355 + /* Set pointer for default dropped packet write 356 + * - used by firmware to temporarily store packet to be dropped 357 + */ 358 + rxq_ctx = emac->dram.va + DEFAULT_MSMC_Q_OFFSET; 359 + writel(addr, &rxq_ctx->start[0]); 341 360 342 361 return 0; 343 362 } ··· 374 347 u32 addr; 375 348 int i; 376 349 377 - /* Layout to have 64KB aligned buffer pool 378 - * |BPOOL0|BPOOL1|RX_CTX0|RX_CTX1| 379 - */ 380 - 381 350 addr = lower_32_bits(prueth->msmcram.pa); 382 - if (slice) 383 - addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE; 351 + if (slice) { 352 + if (prueth->pdata.banked_ms_ram) 353 + addr += MSMC_RAM_BANK_SIZE; 354 + else 355 + addr += PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE; 356 + } 384 357 385 358 if (addr % SZ_64K) { 386 359 dev_warn(prueth->dev, "buffer pool needs to be 64KB aligned\n"); ··· 388 361 } 389 362 390 363 bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET; 391 - /* workaround for f/w bug. bpool 0 needs to be initilalized */ 392 - writel(addr, &bpool_cfg[0].addr); 393 - writel(0, &bpool_cfg[0].len); 394 364 395 - for (i = PRUETH_EMAC_BUF_POOL_START; 396 - i < PRUETH_EMAC_BUF_POOL_START + PRUETH_NUM_BUF_POOLS; 397 - i++) { 398 - writel(addr, &bpool_cfg[i].addr); 399 - writel(PRUETH_EMAC_BUF_POOL_SIZE, &bpool_cfg[i].len); 400 - addr += PRUETH_EMAC_BUF_POOL_SIZE; 365 + /* Configure buffer pools for forwarding buffers 366 + * - in mac mode - no forwarding so initialize all pools to 0 367 + * - 8 total pools per slice 368 + */ 369 + for (i = 0; i < PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE; i++) { 370 + writel(0, &bpool_cfg[i].addr); 371 + writel(0, &bpool_cfg[i].len); 401 372 } 402 373 403 - if (!slice) 404 - addr += PRUETH_NUM_BUF_POOLS * PRUETH_EMAC_BUF_POOL_SIZE; 405 - else 406 - addr += PRUETH_EMAC_RX_CTX_BUF_SIZE * 2; 374 + /* Configure buffer pools for Local Injection buffers 375 + * - used by firmware to store packets received from host core 376 + * - 16 total pools per slice 377 + */ 378 + bpool_cfg = emac->dram.va + BUFFER_POOL_0_ADDR_OFFSET; 379 + for (i = 0; i < PRUETH_NUM_LI_BUF_POOLS_PER_SLICE; i++) { 380 + int cfg_idx = i + PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE; 407 381 408 - /* Pre-emptible RX buffer queue */ 409 - rxq_ctx = emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET; 410 - for (i = 0; i < 3; i++) 411 - writel(addr, &rxq_ctx->start[i]); 382 + /* In EMAC mode, only first 4 buffers are used, 383 + * as 1 slice needs to handle only 1 port 384 + */ 385 + if (i < PRUETH_EMAC_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE) { 386 + writel(addr, &bpool_cfg[cfg_idx].addr); 387 + writel(PRUETH_EMAC_LI_BUF_POOL_SIZE, 388 + &bpool_cfg[cfg_idx].len); 389 + addr += PRUETH_EMAC_LI_BUF_POOL_SIZE; 390 + } else { 391 + writel(0, &bpool_cfg[cfg_idx].addr); 392 + writel(0, &bpool_cfg[cfg_idx].len); 393 + } 394 + } 412 395 413 - addr += PRUETH_EMAC_RX_CTX_BUF_SIZE; 414 - writel(addr, &rxq_ctx->end); 415 - 416 - /* Express RX buffer queue */ 396 + /* Express RX buffer queue 397 + * - used by firmware to store express packets to be transmitted 398 + * to host core 399 + */ 417 400 rxq_ctx = emac->dram.va + HOST_RX_Q_EXP_CONTEXT_OFFSET; 418 401 for (i = 0; i < 3; i++) 419 402 writel(addr, &rxq_ctx->start[i]); 420 403 421 - addr += PRUETH_EMAC_RX_CTX_BUF_SIZE; 404 + addr += PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE; 422 405 writel(addr, &rxq_ctx->end); 406 + 407 + /* Pre-emptible RX buffer queue 408 + * - used by firmware to store preemptible packets to be transmitted 409 + * to host core 410 + */ 411 + rxq_ctx = 
emac->dram.va + HOST_RX_Q_PRE_CONTEXT_OFFSET; 412 + for (i = 0; i < 3; i++) 413 + writel(addr, &rxq_ctx->start[i]); 414 + 415 + addr += PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE; 416 + writel(addr, &rxq_ctx->end); 417 + 418 + /* Set pointer for default dropped packet write 419 + * - used by firmware to temporarily store packet to be dropped 420 + */ 421 + rxq_ctx = emac->dram.va + DEFAULT_MSMC_Q_OFFSET; 422 + writel(addr, &rxq_ctx->start[0]); 423 423 424 424 return 0; 425 425 }
+64 -14
drivers/net/ethernet/ti/icssg/icssg_config.h
··· 26 26 #define PRUETH_MAX_RX_FLOWS 1 /* excluding default flow */ 27 27 #define PRUETH_RX_FLOW_DATA 0 28 28 29 - #define PRUETH_EMAC_BUF_POOL_SIZE SZ_8K 30 - #define PRUETH_EMAC_POOLS_PER_SLICE 24 31 - #define PRUETH_EMAC_BUF_POOL_START 8 32 - #define PRUETH_NUM_BUF_POOLS 8 33 - #define PRUETH_EMAC_RX_CTX_BUF_SIZE SZ_16K /* per slice */ 34 - #define MSMC_RAM_SIZE \ 35 - (2 * (PRUETH_EMAC_BUF_POOL_SIZE * PRUETH_NUM_BUF_POOLS + \ 36 - PRUETH_EMAC_RX_CTX_BUF_SIZE * 2)) 29 + /* Defines for forwarding path buffer pools: 30 + * - used by firmware to store packets to be forwarded to other port 31 + * - 8 total pools per slice 32 + * - only used in switch mode (as no forwarding in mac mode) 33 + */ 34 + #define PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE 8 35 + #define PRUETH_SW_FWD_BUF_POOL_SIZE (SZ_8K) 37 36 38 - #define PRUETH_SW_BUF_POOL_SIZE_HOST SZ_4K 39 - #define PRUETH_SW_NUM_BUF_POOLS_HOST 8 40 - #define PRUETH_SW_NUM_BUF_POOLS_PER_PRU 4 41 - #define MSMC_RAM_SIZE_SWITCH_MODE \ 42 - (MSMC_RAM_SIZE + \ 43 - (2 * PRUETH_SW_BUF_POOL_SIZE_HOST * PRUETH_SW_NUM_BUF_POOLS_HOST)) 37 + /* Defines for local injection path buffer pools: 38 + * - used by firmware to store packets received from host core 39 + * - 16 total pools per slice 40 + * - 8 pools per port per slice and each slice handles both ports 41 + * - only 4 out of 8 pools used per port (as only 4 real QoS levels in ICSSG) 42 + * - switch mode: 8 total pools used 43 + * - mac mode: 4 total pools used 44 + */ 45 + #define PRUETH_NUM_LI_BUF_POOLS_PER_SLICE 16 46 + #define PRUETH_NUM_LI_BUF_POOLS_PER_PORT_PER_SLICE 8 47 + #define PRUETH_SW_LI_BUF_POOL_SIZE SZ_4K 48 + #define PRUETH_SW_USED_LI_BUF_POOLS_PER_SLICE 8 49 + #define PRUETH_SW_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE 4 50 + #define PRUETH_EMAC_LI_BUF_POOL_SIZE SZ_8K 51 + #define PRUETH_EMAC_USED_LI_BUF_POOLS_PER_SLICE 4 52 + #define PRUETH_EMAC_USED_LI_BUF_POOLS_PER_PORT_PER_SLICE 4 53 + 54 + /* Defines for host egress path - express and preemptible buffers 55 + * - used by firmware to store express and preemptible packets 56 + * to be transmitted to host core 57 + * - used by both mac/switch modes 58 + */ 59 + #define PRUETH_SW_HOST_EXP_BUF_POOL_SIZE SZ_16K 60 + #define PRUETH_SW_HOST_PRE_BUF_POOL_SIZE (SZ_16K - SZ_2K) 61 + #define PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE PRUETH_SW_HOST_EXP_BUF_POOL_SIZE 62 + #define PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE PRUETH_SW_HOST_PRE_BUF_POOL_SIZE 63 + 64 + /* Buffer used by firmware to temporarily store packet to be dropped */ 65 + #define PRUETH_SW_DROP_PKT_BUF_SIZE SZ_2K 66 + #define PRUETH_EMAC_DROP_PKT_BUF_SIZE PRUETH_SW_DROP_PKT_BUF_SIZE 67 + 68 + /* Total switch mode memory usage for buffers per slice */ 69 + #define PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE \ 70 + (PRUETH_SW_FWD_BUF_POOL_SIZE * PRUETH_NUM_FWD_BUF_POOLS_PER_SLICE + \ 71 + PRUETH_SW_LI_BUF_POOL_SIZE * PRUETH_SW_USED_LI_BUF_POOLS_PER_SLICE + \ 72 + PRUETH_SW_HOST_EXP_BUF_POOL_SIZE + \ 73 + PRUETH_SW_HOST_PRE_BUF_POOL_SIZE + \ 74 + PRUETH_SW_DROP_PKT_BUF_SIZE) 75 + 76 + /* Total switch mode memory usage for all buffers */ 77 + #define PRUETH_SW_TOTAL_BUF_SIZE \ 78 + (2 * PRUETH_SW_TOTAL_BUF_SIZE_PER_SLICE) 79 + 80 + /* Total mac mode memory usage for buffers per slice */ 81 + #define PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE \ 82 + (PRUETH_EMAC_LI_BUF_POOL_SIZE * \ 83 + PRUETH_EMAC_USED_LI_BUF_POOLS_PER_SLICE + \ 84 + PRUETH_EMAC_HOST_EXP_BUF_POOL_SIZE + \ 85 + PRUETH_EMAC_HOST_PRE_BUF_POOL_SIZE + \ 86 + PRUETH_EMAC_DROP_PKT_BUF_SIZE) 87 + 88 + /* Total mac mode memory usage for all buffers */ 89 + #define PRUETH_EMAC_TOTAL_BUF_SIZE \ 90 + (2 * PRUETH_EMAC_TOTAL_BUF_SIZE_PER_SLICE) 91 + 92 + /* Size of 1 bank of MSMC/OC_SRAM memory */ 93 + #define MSMC_RAM_BANK_SIZE SZ_256K 44 94 45 95 #define PRUETH_SWITCH_FDB_MASK ((SIZE_OF_FDB / NUMBER_OF_FDB_BUCKET_ENTRIES) - 1) 46 96
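Working the new defines through the totals shows why the bank-based layout fits: per slice, switch mode needs 8×8K forwarding + 8×4K local-injection + 16K express + 14K preemptible + 2K drop = 128 KiB, and MAC mode needs 4×8K + 16K + 14K + 2K = 64 KiB, both well inside one 256 KiB MSMC bank. A quick standalone check of that arithmetic:

```c
#include <stdio.h>

#define SZ_2K	(2 * 1024)
#define SZ_4K	(4 * 1024)
#define SZ_8K	(8 * 1024)
#define SZ_16K	(16 * 1024)

int main(void)
{
	/* Switch mode, per slice: 8 fwd pools + 8 used LI pools +
	 * express queue + preemptible queue + drop buffer. */
	unsigned int sw = 8 * SZ_8K + 8 * SZ_4K +
			  SZ_16K + (SZ_16K - SZ_2K) + SZ_2K;

	/* MAC mode, per slice: no fwd pools, 4 used LI pools of 8K. */
	unsigned int emac = 4 * SZ_8K + SZ_16K + (SZ_16K - SZ_2K) + SZ_2K;

	printf("switch: %u KiB, emac: %u KiB per slice\n",
	       sw / 1024, emac / 1024);	/* prints 128 and 64 */
	return 0;
}
```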
+14 -6
drivers/net/ethernet/ti/icssg/icssg_prueth.c
··· 1814 1814 goto put_mem; 1815 1815 } 1816 1816 1817 - msmc_ram_size = MSMC_RAM_SIZE; 1818 1817 prueth->is_switchmode_supported = prueth->pdata.switch_mode; 1819 - if (prueth->is_switchmode_supported) 1820 - msmc_ram_size = MSMC_RAM_SIZE_SWITCH_MODE; 1818 + if (prueth->pdata.banked_ms_ram) { 1819 + /* Reserve 2 MSMC RAM banks for buffers to avoid arbitration */ 1820 + msmc_ram_size = (2 * MSMC_RAM_BANK_SIZE); 1821 + } else { 1822 + msmc_ram_size = PRUETH_EMAC_TOTAL_BUF_SIZE; 1823 + if (prueth->is_switchmode_supported) 1824 + msmc_ram_size = PRUETH_SW_TOTAL_BUF_SIZE; 1825 + } 1821 1826 1822 1827 /* NOTE: FW bug needs buffer base to be 64KB aligned */ 1823 1828 prueth->msmcram.va = ··· 1990 1985 1991 1986 free_pool: 1992 1987 gen_pool_free(prueth->sram_pool, 1993 - (unsigned long)prueth->msmcram.va, msmc_ram_size); 1988 + (unsigned long)prueth->msmcram.va, 1989 + prueth->msmcram.size); 1994 1990 1995 1991 put_mem: 1996 1992 pruss_release_mem_region(prueth->pruss, &prueth->shram); ··· 2043 2037 icss_iep_put(prueth->iep0); 2044 2038 2045 2039 gen_pool_free(prueth->sram_pool, 2046 - (unsigned long)prueth->msmcram.va, 2047 - MSMC_RAM_SIZE); 2040 + (unsigned long)prueth->msmcram.va, 2041 + prueth->msmcram.size); 2048 2042 2049 2043 pruss_release_mem_region(prueth->pruss, &prueth->shram); 2050 2044 ··· 2061 2055 .fdqring_mode = K3_RINGACC_RING_MODE_MESSAGE, 2062 2056 .quirk_10m_link_issue = 1, 2063 2057 .switch_mode = 1, 2058 + .banked_ms_ram = 0, 2064 2059 }; 2065 2060 2066 2061 static const struct prueth_pdata am64x_icssg_pdata = { 2067 2062 .fdqring_mode = K3_RINGACC_RING_MODE_RING, 2068 2063 .quirk_10m_link_issue = 1, 2069 2064 .switch_mode = 1, 2065 + .banked_ms_ram = 1, 2070 2066 }; 2071 2067 2072 2068 static const struct of_device_id prueth_dt_match[] = {
+2
drivers/net/ethernet/ti/icssg/icssg_prueth.h
··· 252 252 * @fdqring_mode: Free desc queue mode 253 253 * @quirk_10m_link_issue: 10M link detect errata 254 254 * @switch_mode: switch firmware support 255 + * @banked_ms_ram: banked memory support 255 256 */ 256 257 struct prueth_pdata { 257 258 enum k3_ring_mode fdqring_mode; 258 259 u32 quirk_10m_link_issue:1; 259 260 u32 switch_mode:1; 261 + u32 banked_ms_ram:1; 260 262 }; 261 263 262 264 struct icssg_firmwares {
+3
drivers/net/ethernet/ti/icssg/icssg_switch_map.h
··· 180 180 /* Used to notify the FW of the current link speed */ 181 181 #define PORT_LINK_SPEED_OFFSET 0x00A8 182 182 183 + /* 2k memory pointer reserved for default writes by PRU0*/ 184 + #define DEFAULT_MSMC_Q_OFFSET 0x00AC 185 + 183 186 /* TAS gate mask for windows list0 */ 184 187 #define TAS_GATE_MASK_LIST0 0x0100 185 188
+11 -16
drivers/nvme/host/core.c
··· 381 381 nr->status & NVME_SC_MASK, /* Status Code */ 382 382 nr->status & NVME_STATUS_MORE ? "MORE " : "", 383 383 nr->status & NVME_STATUS_DNR ? "DNR " : "", 384 - nr->cmd->common.cdw10, 385 - nr->cmd->common.cdw11, 386 - nr->cmd->common.cdw12, 387 - nr->cmd->common.cdw13, 388 - nr->cmd->common.cdw14, 389 - nr->cmd->common.cdw15); 384 + le32_to_cpu(nr->cmd->common.cdw10), 385 + le32_to_cpu(nr->cmd->common.cdw11), 386 + le32_to_cpu(nr->cmd->common.cdw12), 387 + le32_to_cpu(nr->cmd->common.cdw13), 388 + le32_to_cpu(nr->cmd->common.cdw14), 389 + le32_to_cpu(nr->cmd->common.cdw15)); 390 390 } 391 391 392 392 enum nvme_disposition { ··· 764 764 !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) && 765 765 !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH)) 766 766 return BLK_STS_RESOURCE; 767 + 768 + if (!(rq->rq_flags & RQF_DONTPREP)) 769 + nvme_clear_nvme_request(rq); 770 + 767 771 return nvme_host_path_error(rq); 768 772 } 769 773 EXPORT_SYMBOL_GPL(nvme_fail_nonready_command); ··· 3541 3537 if (ret) 3542 3538 goto out_free; 3543 3539 } 3544 - 3545 - if (le16_to_cpu(id->awupf) != ctrl->subsys->awupf) { 3546 - dev_err_ratelimited(ctrl->device, 3547 - "inconsistent AWUPF, controller not added (%u/%u).\n", 3548 - le16_to_cpu(id->awupf), ctrl->subsys->awupf); 3549 - ret = -EINVAL; 3550 - goto out_free; 3551 - } 3552 - 3553 3540 memcpy(ctrl->subsys->firmware_rev, id->fr, 3554 3541 sizeof(ctrl->subsys->firmware_rev)); 3555 3542 ··· 4072 4077 return; 4073 4078 } 4074 4079 } 4075 - list_add(&ns->list, &ns->ctrl->namespaces); 4080 + list_add_rcu(&ns->list, &ns->ctrl->namespaces); 4076 4081 } 4077 4082 4078 4083 static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
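The logging fix only shows up on big-endian hosts: the CDW fields are __le32 on the wire, so printing them raw would display byte-swapped values there. A userspace model of what le32_to_cpu() does (the helper name is a local stand-in, not the kernel macro):

```c
#include <stdint.h>
#include <stdio.h>

/* Assemble a host-order u32 from little-endian wire bytes; correct on
 * any host endianness, unlike reinterpreting the raw bytes. */
static uint32_t le32_to_cpu_demo(const uint8_t b[4])
{
	return (uint32_t)b[0] | (uint32_t)b[1] << 8 |
	       (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

int main(void)
{
	uint8_t cdw10_wire[4] = { 0x78, 0x56, 0x34, 0x12 }; /* LE 0x12345678 */

	printf("cdw10=0x%08x\n", le32_to_cpu_demo(cdw10_wire));
	return 0;
}
```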
+2 -2
drivers/nvme/target/tcp.c
··· 1928 1928 struct sock *sk = queue->sock->sk; 1929 1929 1930 1930 /* Restore the default callbacks before starting upcall */ 1931 - read_lock_bh(&sk->sk_callback_lock); 1931 + write_lock_bh(&sk->sk_callback_lock); 1932 1932 sk->sk_user_data = NULL; 1933 1933 sk->sk_data_ready = port->data_ready; 1934 - read_unlock_bh(&sk->sk_callback_lock); 1934 + write_unlock_bh(&sk->sk_callback_lock); 1935 1935 if (!nvmet_tcp_try_peek_pdu(queue)) { 1936 1936 if (!nvmet_tcp_tls_handshake(queue)) 1937 1937 return;
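This is a lock-discipline fix: sk_user_data and sk_data_ready are being written, so the write side of sk_callback_lock is required; the read side only guarantees the pointers will not change underneath the holder, it does not grant exclusive access. Sketched generically (the callback pointer is illustrative):

```c
#include <linux/spinlock.h>

static DEFINE_RWLOCK(cb_lock);
static void (*active_cb)(void);

/* Mutating shared callback state: must take the write side. */
static void restore_cb(void (*old_cb)(void))
{
	write_lock_bh(&cb_lock);
	active_cb = old_cb;
	write_unlock_bh(&cb_lock);
}

/* Merely observing/invoking it: the read side suffices. */
static void run_cb(void)
{
	read_lock_bh(&cb_lock);
	if (active_cb)
		active_cb();
	read_unlock_bh(&cb_lock);
}
```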
+4 -1
drivers/nvmem/imx-ocotp-ele.c
··· 12 12 #include <linux/of.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/slab.h> 15 + #include <linux/if_ether.h> /* ETH_ALEN */ 15 16 16 17 enum fuse_type { 17 18 FUSE_FSB = BIT(0), ··· 119 118 int i; 120 119 121 120 /* Deal with some post processing of nvmem cell data */ 122 - if (id && !strcmp(id, "mac-address")) 121 + if (id && !strcmp(id, "mac-address")) { 122 + bytes = min(bytes, ETH_ALEN); 123 123 for (i = 0; i < bytes / 2; i++) 124 124 swap(buf[i], buf[bytes - i - 1]); 125 + } 125 126 126 127 return 0; 127 128 }
+4 -1
drivers/nvmem/imx-ocotp.c
··· 23 23 #include <linux/platform_device.h> 24 24 #include <linux/slab.h> 25 25 #include <linux/delay.h> 26 + #include <linux/if_ether.h> /* ETH_ALEN */ 26 27 27 28 #define IMX_OCOTP_OFFSET_B0W0 0x400 /* Offset from base address of the 28 29 * OTP Bank0 Word0 ··· 228 227 int i; 229 228 230 229 /* Deal with some post processing of nvmem cell data */ 231 - if (id && !strcmp(id, "mac-address")) 230 + if (id && !strcmp(id, "mac-address")) { 231 + bytes = min(bytes, ETH_ALEN); 232 232 for (i = 0; i < bytes / 2; i++) 233 233 swap(buf[i], buf[bytes - i - 1]); 234 + } 234 235 235 236 return 0; 236 237 }
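Both OCOTP post-processing hooks reverse the cell bytes in place to turn the fuse ordering into a readable MAC address, and now clamp the loop to ETH_ALEN so an oversized nvmem cell can no longer swap bytes beyond the six that form the address. A standalone version of the swap:

```c
#include <stdio.h>
#include <string.h>

#define ETH_ALEN 6
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	/* Fuse data stores the MAC reversed; extra trailing bytes model
	 * an oversized cell that must not be touched. */
	unsigned char buf[8] = { 0x66, 0x55, 0x44, 0x33, 0x22, 0x11, 0xaa, 0xbb };
	size_t bytes = MIN(sizeof(buf), (size_t)ETH_ALEN);

	for (size_t i = 0; i < bytes / 2; i++) {
		unsigned char tmp = buf[i];

		buf[i] = buf[bytes - i - 1];
		buf[bytes - i - 1] = tmp;
	}

	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",	/* 11:22:33:44:55:66 */
	       buf[0], buf[1], buf[2], buf[3], buf[4], buf[5]);
	return 0;
}
```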
+3 -3
drivers/nvmem/layouts/u-boot-env.c
··· 92 92 size_t crc32_data_offset; 93 93 size_t crc32_data_len; 94 94 size_t crc32_offset; 95 - __le32 *crc32_addr; 95 + uint32_t *crc32_addr; 96 96 size_t data_offset; 97 97 size_t data_len; 98 98 size_t dev_size; ··· 143 143 goto err_kfree; 144 144 } 145 145 146 - crc32_addr = (__le32 *)(buf + crc32_offset); 147 - crc32 = le32_to_cpu(*crc32_addr); 146 + crc32_addr = (uint32_t *)(buf + crc32_offset); 147 + crc32 = *crc32_addr; 148 148 crc32_data_len = dev_size - crc32_data_offset; 149 149 data_len = dev_size - data_offset; 150 150
+1
drivers/pci/controller/pci-hyperv-intf.c
··· 14 14 #include <linux/kernel.h> 15 15 #include <linux/module.h> 16 16 #include <linux/hyperv.h> 17 + #include <linux/export.h> 17 18 18 19 struct hyperv_pci_block_ops hvpci_block_ops; 19 20 EXPORT_SYMBOL_GPL(hvpci_block_ops);
+19 -2
drivers/pci/controller/pci-hyperv.c
··· 599 599 #define hv_msi_prepare pci_msi_prepare 600 600 601 601 /** 602 - * hv_arch_irq_unmask() - "Unmask" the IRQ by setting its current 602 + * hv_irq_retarget_interrupt() - "Unmask" the IRQ by setting its current 603 603 * affinity. 604 604 * @data: Describes the IRQ 605 605 * ··· 608 608 * is built out of this PCI bus's instance GUID and the function 609 609 * number of the device. 610 610 */ 611 - static void hv_arch_irq_unmask(struct irq_data *data) 611 + static void hv_irq_retarget_interrupt(struct irq_data *data) 612 612 { 613 613 struct msi_desc *msi_desc = irq_data_get_msi_desc(data); 614 614 struct hv_retarget_device_interrupt *params; ··· 712 712 if (!hv_result_success(res) && hbus->state != hv_pcibus_removing) 713 713 dev_err(&hbus->hdev->device, 714 714 "%s() failed: %#llx", __func__, res); 715 + } 716 + 717 + static void hv_arch_irq_unmask(struct irq_data *data) 718 + { 719 + if (hv_root_partition()) 720 + /* 721 + * In case of the nested root partition, the nested hypervisor 722 + * is taking care of interrupt remapping and thus the 723 + * MAP_DEVICE_INTERRUPT hypercall is required instead of 724 + * RETARGET_INTERRUPT. 725 + */ 726 + (void)hv_map_msi_interrupt(data, NULL); 727 + else 728 + hv_irq_retarget_interrupt(data); 715 729 } 716 730 #elif defined(CONFIG_ARM64) 717 731 /* ··· 4212 4198 int ret; 4213 4199 4214 4200 if (!hv_is_hyperv_initialized()) 4201 + return -ENODEV; 4202 + 4203 + if (hv_root_partition() && !hv_nested) 4215 4204 return -ENODEV; 4216 4205 4217 4206 ret = hv_pci_irqchip_init();
+4 -1
drivers/phy/phy-core.c
··· 994 994 } 995 995 996 996 device_initialize(&phy->dev); 997 - mutex_init(&phy->mutex); 997 + lockdep_register_key(&phy->lockdep_key); 998 + mutex_init_with_key(&phy->mutex, &phy->lockdep_key); 998 999 999 1000 phy->dev.class = &phy_class; 1000 1001 phy->dev.parent = dev; ··· 1260 1259 dev_vdbg(dev, "releasing '%s'\n", dev_name(dev)); 1261 1260 debugfs_remove_recursive(phy->debugfs); 1262 1261 regulator_put(phy->pwr); 1262 + mutex_destroy(&phy->mutex); 1263 + lockdep_unregister_key(&phy->lockdep_key); 1263 1264 ida_free(&phy_ida, phy->id); 1264 1265 kfree(phy); 1265 1266 }
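The phy-core change gives every PHY mutex its own lockdep class, the standard cure for false "possible recursive locking" reports when one PHY's ops legitimately take a child PHY's mutex: with a single static key, lockdep treats every instance's lock as the same class. The pattern in isolation (the struct name is illustrative):

```c
#include <linux/lockdep.h>
#include <linux/mutex.h>

struct widget {
	struct lock_class_key key;	/* one lockdep class per instance */
	struct mutex lock;
};

static void widget_init(struct widget *w)
{
	lockdep_register_key(&w->key);
	mutex_init_with_key(&w->lock, &w->key);
}

static void widget_destroy(struct widget *w)
{
	/* Unregister in reverse order of init, as the release path above does. */
	mutex_destroy(&w->lock);
	lockdep_unregister_key(&w->key);
}
```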
+4 -2
drivers/phy/phy-snps-eusb2.c
··· 567 567 } 568 568 } 569 569 570 - if (IS_ERR_OR_NULL(phy->ref_clk)) 571 - return dev_err_probe(dev, PTR_ERR(phy->ref_clk), 570 + if (IS_ERR_OR_NULL(phy->ref_clk)) { 571 + ret = phy->ref_clk ? PTR_ERR(phy->ref_clk) : -ENOENT; 572 + return dev_err_probe(dev, ret, 572 573 "failed to get ref clk\n"); 574 + } 573 575 574 576 num = ARRAY_SIZE(phy->vregs); 575 577 for (i = 0; i < num; i++)
+47 -30
drivers/phy/tegra/xusb-tegra186.c
··· 648 648 udelay(100); 649 649 } 650 650 651 - if (padctl->soc->trk_hw_mode) { 652 - value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL2); 653 - value |= USB2_TRK_HW_MODE; 651 + value = padctl_readl(padctl, XUSB_PADCTL_USB2_BIAS_PAD_CTL2); 652 + if (padctl->soc->trk_update_on_idle) 654 653 value &= ~CYA_TRK_CODE_UPDATE_ON_IDLE; 655 - padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL2); 656 - } else { 654 + if (padctl->soc->trk_hw_mode) 655 + value |= USB2_TRK_HW_MODE; 656 + padctl_writel(padctl, value, XUSB_PADCTL_USB2_BIAS_PAD_CTL2); 657 + 658 + if (!padctl->soc->trk_hw_mode) 657 659 clk_disable_unprepare(priv->usb2_trk_clk); 658 - } 659 660 } 660 661 661 662 static void tegra186_utmi_bias_pad_power_off(struct tegra_xusb_padctl *padctl) ··· 783 782 } 784 783 785 784 static int tegra186_xusb_padctl_id_override(struct tegra_xusb_padctl *padctl, 786 - bool status) 785 + struct tegra_xusb_usb2_port *port, bool status) 787 786 { 788 - u32 value; 787 + u32 value, id_override; 788 + int err = 0; 789 789 790 790 dev_dbg(padctl->dev, "%s id override\n", status ? "set" : "clear"); 791 791 792 792 value = padctl_readl(padctl, USB2_VBUS_ID); 793 + id_override = value & ID_OVERRIDE(~0); 793 794 794 795 if (status) { 795 796 if (value & VBUS_OVERRIDE) { ··· 802 799 value = padctl_readl(padctl, USB2_VBUS_ID); 803 800 } 804 801 805 - value &= ~ID_OVERRIDE(~0); 806 - value |= ID_OVERRIDE_GROUNDED; 807 - } else { 808 - value &= ~ID_OVERRIDE(~0); 809 - value |= ID_OVERRIDE_FLOATING; 810 - } 802 + if (id_override != ID_OVERRIDE_GROUNDED) { 803 + value &= ~ID_OVERRIDE(~0); 804 + value |= ID_OVERRIDE_GROUNDED; 805 + padctl_writel(padctl, value, USB2_VBUS_ID); 811 806 812 - padctl_writel(padctl, value, USB2_VBUS_ID); 807 + err = regulator_enable(port->supply); 808 + if (err) { 809 + dev_err(padctl->dev, "Failed to enable regulator: %d\n", err); 810 + return err; 811 + } 812 + } 813 + } else { 814 + if (id_override == ID_OVERRIDE_GROUNDED) { 815 + /* 816 + * The regulator is disabled only when the role transitions 817 + * from USB_ROLE_HOST to USB_ROLE_NONE. 818 + */ 819 + err = regulator_disable(port->supply); 820 + if (err) { 821 + dev_err(padctl->dev, "Failed to disable regulator: %d\n", err); 822 + return err; 823 + } 824 + 825 + value &= ~ID_OVERRIDE(~0); 826 + value |= ID_OVERRIDE_FLOATING; 827 + padctl_writel(padctl, value, USB2_VBUS_ID); 828 + } 829 + } 813 830 814 831 return 0; 815 832 } ··· 849 826 850 827 if (mode == PHY_MODE_USB_OTG) { 851 828 if (submode == USB_ROLE_HOST) { 852 - tegra186_xusb_padctl_id_override(padctl, true); 853 - 854 - err = regulator_enable(port->supply); 829 + err = tegra186_xusb_padctl_id_override(padctl, port, true); 830 + if (err) 831 + goto out; 855 832 } else if (submode == USB_ROLE_DEVICE) { 856 833 tegra186_xusb_padctl_vbus_override(padctl, true); 857 834 } else if (submode == USB_ROLE_NONE) { 858 - /* 859 - * When port is peripheral only or role transitions to 860 - * USB_ROLE_NONE from USB_ROLE_DEVICE, regulator is not 861 - * enabled. 
862 - */ 863 - if (regulator_is_enabled(port->supply)) 864 - regulator_disable(port->supply); 865 - 866 - tegra186_xusb_padctl_id_override(padctl, false); 835 + err = tegra186_xusb_padctl_id_override(padctl, port, false); 836 + if (err) 837 + goto out; 867 838 tegra186_xusb_padctl_vbus_override(padctl, false); 868 839 } 869 840 } 870 - 841 + out: 871 842 mutex_unlock(&padctl->lock); 872 - 873 843 return err; 874 844 } 875 845 ··· 1726 1710 .num_supplies = ARRAY_SIZE(tegra194_xusb_padctl_supply_names), 1727 1711 .supports_gen2 = true, 1728 1712 .poll_trk_completed = true, 1729 - .trk_hw_mode = true, 1713 + .trk_hw_mode = false, 1714 + .trk_update_on_idle = true, 1730 1715 .supports_lp_cfg_en = true, 1731 1716 }; 1732 1717 EXPORT_SYMBOL_GPL(tegra234_xusb_padctl_soc);
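The tegra186 rework keeps regulator_enable()/regulator_disable() strictly paired by acting only on real ID-override transitions: enable on floating-to-grounded, disable on grounded-to-floating, and no-op otherwise. Reduced to its core, as a sketch with assumed state tracking:

```c
#include <linux/regulator/consumer.h>

/* Only act on actual state changes so the regulator refcount stays
 * balanced; repeated requests for the current state are no-ops. */
static int set_grounded(struct regulator *supply, bool *grounded, bool want)
{
	int err;

	if (want == *grounded)
		return 0;	/* no transition, no refcount change */

	if (want)
		err = regulator_enable(supply);
	else
		err = regulator_disable(supply);
	if (err)
		return err;

	*grounded = want;
	return 0;
}
```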
+1
drivers/phy/tegra/xusb.h
··· 434 434 bool need_fake_usb3_port; 435 435 bool poll_trk_completed; 436 436 bool trk_hw_mode; 437 + bool trk_update_on_idle; 437 438 bool supports_lp_cfg_en; 438 439 }; 439 440
+2
drivers/platform/arm64/huawei-gaokun-ec.c
··· 662 662 { 663 663 struct auxiliary_device *adev = to_auxiliary_dev(dev); 664 664 665 + of_node_put(dev->of_node); 665 666 kfree(adev); 666 667 } 667 668 ··· 694 693 695 694 ret = auxiliary_device_init(adev); 696 695 if (ret) { 696 + of_node_put(adev->dev.of_node); 697 697 kfree(adev); 698 698 return ret; 699 699 }
+17 -8
drivers/platform/mellanox/mlxbf-pmc.c
··· 15 15 #include <linux/hwmon.h> 16 16 #include <linux/platform_device.h> 17 17 #include <linux/string.h> 18 + #include <linux/string_helpers.h> 18 19 #include <uapi/linux/psci.h> 19 20 20 21 #define MLXBF_PMC_WRITE_REG_32 0x82000009 ··· 1223 1222 return -ENODEV; 1224 1223 } 1225 1224 1226 - /* Get the event number given the name */ 1225 + /* Get the event name given the number */ 1227 1226 static char *mlxbf_pmc_get_event_name(const char *blk, u32 evt) 1228 1227 { 1229 1228 const struct mlxbf_pmc_events *events; ··· 1785 1784 attr, struct mlxbf_pmc_attribute, dev_attr); 1786 1785 unsigned int blk_num, cnt_num; 1787 1786 bool is_l3 = false; 1787 + char *evt_name; 1788 1788 int evt_num; 1789 1789 int err; 1790 1790 ··· 1793 1791 cnt_num = attr_event->index; 1794 1792 1795 1793 if (isalpha(buf[0])) { 1794 + /* Remove the trailing newline character if present */ 1795 + evt_name = kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL); 1796 + if (!evt_name) 1797 + return -ENOMEM; 1798 + 1796 1799 evt_num = mlxbf_pmc_get_event_num(pmc->block_name[blk_num], 1797 - buf); 1800 + evt_name); 1801 + kfree(evt_name); 1798 1802 if (evt_num < 0) 1799 1803 return -EINVAL; 1800 1804 } else { 1801 1805 err = kstrtouint(buf, 0, &evt_num); 1802 1806 if (err < 0) 1803 1807 return err; 1808 + 1809 + if (!mlxbf_pmc_get_event_name(pmc->block_name[blk_num], evt_num)) 1810 + return -EINVAL; 1804 1811 } 1805 1812 1806 1813 if (strstr(pmc->block_name[blk_num], "l3cache")) ··· 1890 1879 { 1891 1880 struct mlxbf_pmc_attribute *attr_enable = container_of( 1892 1881 attr, struct mlxbf_pmc_attribute, dev_attr); 1893 - unsigned int en, blk_num; 1882 + unsigned int blk_num; 1894 1883 u32 word; 1895 1884 int err; 1885 + bool en; 1896 1886 1897 1887 blk_num = attr_enable->nr; 1898 1888 1899 - err = kstrtouint(buf, 0, &en); 1889 + err = kstrtobool(buf, &en); 1900 1890 if (err < 0) 1901 1891 return err; 1902 1892 ··· 1917 1905 MLXBF_PMC_CRSPACE_PERFMON_CTL(pmc->block[blk_num].counters), 1918 1906 MLXBF_PMC_WRITE_REG_32, word); 1919 1907 } else { 1920 - if (en && en != 1) 1921 - return -EINVAL; 1922 - 1923 1908 err = mlxbf_pmc_config_l3_counters(blk_num, false, !!en); 1924 1909 if (err) 1925 1910 return err; 1926 1911 1927 - if (en == 1) { 1912 + if (en) { 1928 1913 err = mlxbf_pmc_config_l3_counters(blk_num, true, false); 1929 1914 if (err) 1930 1915 return err;
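Two sysfs-parsing fixes sit in this hunk: event names written via echo arrive with a trailing newline, so the input is duplicated with the newline replaced before the table lookup, and the enable attribute now goes through kstrtobool() instead of hand-rolled 0/1 checks. A userspace model of the newline trim (kstrdup_and_replace() swaps every matching character; stopping at the first newline is equivalent for sysfs input):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Model of kstrdup_and_replace(buf, '\n', '\0', GFP_KERNEL): without
 * it, a lookup key like "HISTOGRAM0\n" never matches the event table. */
static char *dup_and_terminate(const char *src)
{
	char *s = strdup(src);

	if (s) {
		char *nl = strchr(s, '\n');

		if (nl)
			*nl = '\0';
	}
	return s;
}

int main(void)
{
	char *evt = dup_and_terminate("HISTOGRAM0\n");

	printf("lookup key: \"%s\"\n", evt ? evt : "(alloc failed)");
	free(evt);
	return 0;
}
```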
+2 -1
drivers/platform/x86/Makefile
··· 58 58 # Hewlett Packard Enterprise 59 59 obj-$(CONFIG_UV_SYSFS) += uv_sysfs.o 60 60 61 + obj-$(CONFIG_FW_ATTR_CLASS) += firmware_attributes_class.o 62 + 61 63 # IBM Thinkpad and Lenovo 62 64 obj-$(CONFIG_IBM_RTL) += ibm_rtl.o 63 65 obj-$(CONFIG_IDEAPAD_LAPTOP) += ideapad-laptop.o ··· 130 128 obj-$(CONFIG_TOPSTAR_LAPTOP) += topstar-laptop.o 131 129 132 130 # Platform drivers 133 - obj-$(CONFIG_FW_ATTR_CLASS) += firmware_attributes_class.o 134 131 obj-$(CONFIG_SERIAL_MULTI_INSTANTIATE) += serial-multi-instantiate.o 135 132 obj-$(CONFIG_TOUCHSCREEN_DMI) += touchscreen_dmi.o 136 133 obj-$(CONFIG_WIRELESS_HOTKEY) += wireless-hotkey.o
+17
drivers/platform/x86/dell/alienware-wmi-wmax.c
··· 90 90 91 91 static const struct dmi_system_id awcc_dmi_table[] __initconst = { 92 92 { 93 + .ident = "Alienware Area-51m", 94 + .matches = { 95 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 96 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware Area-51m"), 97 + }, 98 + .driver_data = &generic_quirks, 99 + }, 100 + { 93 101 .ident = "Alienware Area-51m R2", 94 102 .matches = { 95 103 DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 96 104 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware Area-51m R2"), 105 + }, 106 + .driver_data = &generic_quirks, 107 + }, 108 + { 109 + .ident = "Alienware m15 R5", 110 + .matches = { 111 + DMI_MATCH(DMI_SYS_VENDOR, "Alienware"), 112 + DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m15 R5"), 97 113 }, 98 114 .driver_data = &generic_quirks, 99 115 }, ··· 249 233 }, 250 234 .driver_data = &g_series_quirks, 251 235 }, 236 + {} 252 237 }; 253 238 254 239 enum AWCC_GET_FAN_SENSORS_OPERATIONS {
+1
drivers/platform/x86/dell/dell-lis3lv02d.c
··· 49 49 DELL_LIS3LV02D_DMI_ENTRY("Latitude E6330", 0x29), 50 50 DELL_LIS3LV02D_DMI_ENTRY("Latitude E6430", 0x29), 51 51 DELL_LIS3LV02D_DMI_ENTRY("Precision 3540", 0x29), 52 + DELL_LIS3LV02D_DMI_ENTRY("Precision 3551", 0x29), 52 53 DELL_LIS3LV02D_DMI_ENTRY("Precision M6800", 0x29), 53 54 DELL_LIS3LV02D_DMI_ENTRY("Vostro V131", 0x1d), 54 55 DELL_LIS3LV02D_DMI_ENTRY("Vostro 5568", 0x29),
+7 -3
drivers/platform/x86/dell/dell-wmi-ddv.c
··· 689 689 
 690 690 dev_dbg(&data->wdev->dev, "Translation cache miss\n");
 691 691 
 692 - /* Perform a translation between a ACPI battery and a battery index */
 693 - 
 694 - ret = power_supply_get_property(battery, POWER_SUPPLY_PROP_SERIAL_NUMBER, &val);
 692 + /*
 693 + * Perform a translation between an ACPI battery and a battery index.
 694 + * We have to use power_supply_get_property_direct() here because this
 695 + * function will also get called from the callbacks of the power supply
 696 + * extension.
 697 + */
 698 + ret = power_supply_get_property_direct(battery, POWER_SUPPLY_PROP_SERIAL_NUMBER, &val);
 695 699 if (ret < 0)
 696 700 return ret;
 697 701 
+2 -2
drivers/platform/x86/ideapad-laptop.c
··· 1669 1669 priv->kbd_bl.led.name = "platform::" LED_FUNCTION_KBD_BACKLIGHT; 1670 1670 priv->kbd_bl.led.brightness_get = ideapad_kbd_bl_led_cdev_brightness_get; 1671 1671 priv->kbd_bl.led.brightness_set_blocking = ideapad_kbd_bl_led_cdev_brightness_set; 1672 - priv->kbd_bl.led.flags = LED_BRIGHT_HW_CHANGED; 1672 + priv->kbd_bl.led.flags = LED_BRIGHT_HW_CHANGED | LED_RETAIN_AT_SHUTDOWN; 1673 1673 1674 1674 err = led_classdev_register(&priv->platform_device->dev, &priv->kbd_bl.led); 1675 1675 if (err) ··· 1728 1728 priv->fn_lock.led.name = "platform::" LED_FUNCTION_FNLOCK; 1729 1729 priv->fn_lock.led.brightness_get = ideapad_fn_lock_led_cdev_get; 1730 1730 priv->fn_lock.led.brightness_set_blocking = ideapad_fn_lock_led_cdev_set; 1731 - priv->fn_lock.led.flags = LED_BRIGHT_HW_CHANGED; 1731 + priv->fn_lock.led.flags = LED_BRIGHT_HW_CHANGED | LED_RETAIN_AT_SHUTDOWN; 1732 1732 1733 1733 err = led_classdev_register(&priv->platform_device->dev, &priv->fn_lock.led); 1734 1734 if (err)
+21 -9
drivers/platform/x86/lenovo-wmi-hotkey-utilities.c
··· 122 122 return -EIO;
 123 123 
 124 124 union acpi_object *obj __free(kfree) = output.pointer;
 125 - if (obj && obj->type == ACPI_TYPE_INTEGER)
 126 - led_version = obj->integer.value;
 127 - else
 125 + if (!obj || obj->type != ACPI_TYPE_INTEGER)
 128 126 return -EIO;
 129 127 
 130 - wpriv->cdev[led_type].max_brightness = LED_ON;
 131 - wpriv->cdev[led_type].flags = LED_CORE_SUSPENDRESUME;
 128 + led_version = obj->integer.value;
 129 + 
 130 + /*
 131 + * A zero output parameter means the mute LED is not supported; a non-zero
 132 + * value means it is supported.
 133 + */
 134 + if (led_version == 0)
 135 + return 0;
 136 + 
 
 137 
 138 switch (led_type) {
 134 139 case MIC_MUTE:
 135 - if (led_version != WMI_LUD_SUPPORT_MICMUTE_LED_VER)
 136 - return -EIO;
 140 + if (led_version != WMI_LUD_SUPPORT_MICMUTE_LED_VER) {
 141 + pr_warn("The MIC_MUTE LED of this device isn't supported.\n");
 142 + return 0;
 143 + }
 137 144 
 138 145 wpriv->cdev[led_type].name = "platform::micmute";
 139 146 wpriv->cdev[led_type].brightness_set_blocking = &lsh_wmi_micmute_led_set;
 140 147 wpriv->cdev[led_type].default_trigger = "audio-micmute";
 141 148 break;
 142 149 case AUDIO_MUTE:
 143 - if (led_version != WMI_LUD_SUPPORT_AUDIOMUTE_LED_VER)
 144 - return -EIO;
 150 + if (led_version != WMI_LUD_SUPPORT_AUDIOMUTE_LED_VER) {
 151 + pr_warn("The AUDIO_MUTE LED of this device isn't supported.\n");
 152 + return 0;
 153 + }
 145 154 
 146 155 wpriv->cdev[led_type].name = "platform::mute";
 147 156 wpriv->cdev[led_type].brightness_set_blocking = &lsh_wmi_audiomute_led_set;
 ··· 160 151 dev_err(dev, "Unknown LED type %d\n", led_type);
 161 152 return -EINVAL;
 162 153 }
 154 
 155 + wpriv->cdev[led_type].max_brightness = LED_ON;
 156 + wpriv->cdev[led_type].flags = LED_CORE_SUSPENDRESUME;
 163 157 
 164 158 err = devm_led_classdev_register(dev, &wpriv->cdev[led_type]);
 165 159 if (err < 0) {
+16 -2
drivers/pmdomain/governor.c
··· 8 8 #include <linux/pm_domain.h> 9 9 #include <linux/pm_qos.h> 10 10 #include <linux/hrtimer.h> 11 + #include <linux/cpu.h> 11 12 #include <linux/cpuidle.h> 12 13 #include <linux/cpumask.h> 13 14 #include <linux/ktime.h> ··· 350 349 struct cpuidle_device *dev; 351 350 ktime_t domain_wakeup, next_hrtimer; 352 351 ktime_t now = ktime_get(); 352 + struct device *cpu_dev; 353 + s64 cpu_constraint, global_constraint; 353 354 s64 idle_duration_ns; 354 355 int cpu, i; 355 356 ··· 362 359 if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN)) 363 360 return true; 364 361 362 + global_constraint = cpu_latency_qos_limit(); 365 363 /* 366 364 * Find the next wakeup for any of the online CPUs within the PM domain 367 365 * and its subdomains. Note, we only need the genpd->cpus, as it already ··· 376 372 if (ktime_before(next_hrtimer, domain_wakeup)) 377 373 domain_wakeup = next_hrtimer; 378 374 } 375 + 376 + cpu_dev = get_cpu_device(cpu); 377 + if (cpu_dev) { 378 + cpu_constraint = dev_pm_qos_raw_resume_latency(cpu_dev); 379 + if (cpu_constraint < global_constraint) 380 + global_constraint = cpu_constraint; 381 + } 379 382 } 380 383 384 + global_constraint *= NSEC_PER_USEC; 381 385 /* The minimum idle duration is from now - until the next wakeup. */ 382 386 idle_duration_ns = ktime_to_ns(ktime_sub(domain_wakeup, now)); 383 387 if (idle_duration_ns <= 0) ··· 401 389 */ 402 390 i = genpd->state_idx; 403 391 do { 404 - if (idle_duration_ns >= (genpd->states[i].residency_ns + 405 - genpd->states[i].power_off_latency_ns)) { 392 + if ((idle_duration_ns >= (genpd->states[i].residency_ns + 393 + genpd->states[i].power_off_latency_ns)) && 394 + (global_constraint >= (genpd->states[i].power_on_latency_ns + 395 + genpd->states[i].power_off_latency_ns))) { 406 396 genpd->state_idx = i; 407 397 genpd->gd->last_enter = now; 408 398 genpd->gd->reflect_residency = true;
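The governor now folds per-CPU resume-latency QoS into domain state selection: the effective budget is the smaller of the global cpu_latency_qos_limit() and each online CPU's dev_pm_qos_raw_resume_latency(), scaled to nanoseconds, and a state is eligible only when its exit cost also fits that budget. The selection predicate in isolation, as a hedged sketch (field names are those of struct genpd_power_state):

    #include <linux/pm_domain.h>

    /* Can state 'st' serve an idle window of idle_ns under a resume-latency
     * budget of constraint_ns? Mirrors the two-part test added above. */
    static bool state_usable(const struct genpd_power_state *st,
                             s64 idle_ns, s64 constraint_ns)
    {
        /* worth entering: the window covers entry cost plus residency */
        if (idle_ns < st->residency_ns + st->power_off_latency_ns)
            return false;

        /* cheap enough to leave: off + on latency fits the QoS budget */
        return constraint_ns >= st->power_off_latency_ns +
                                st->power_on_latency_ns;
    }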
+70 -12
drivers/power/supply/power_supply_core.c
··· 1235 1235 return false; 1236 1236 } 1237 1237 1238 - int power_supply_get_property(struct power_supply *psy, 1239 - enum power_supply_property psp, 1240 - union power_supply_propval *val) 1238 + static int __power_supply_get_property(struct power_supply *psy, enum power_supply_property psp, 1239 + union power_supply_propval *val, bool use_extensions) 1241 1240 { 1242 1241 struct power_supply_ext_registration *reg; 1243 1242 ··· 1246 1247 return -ENODEV; 1247 1248 } 1248 1249 1249 - scoped_guard(rwsem_read, &psy->extensions_sem) { 1250 - power_supply_for_each_extension(reg, psy) { 1251 - if (power_supply_ext_has_property(reg->ext, psp)) 1250 + if (use_extensions) { 1251 + scoped_guard(rwsem_read, &psy->extensions_sem) { 1252 + power_supply_for_each_extension(reg, psy) { 1253 + if (!power_supply_ext_has_property(reg->ext, psp)) 1254 + continue; 1255 + 1252 1256 return reg->ext->get_property(psy, reg->ext, reg->data, psp, val); 1257 + } 1253 1258 } 1254 1259 } 1255 1260 ··· 1264 1261 else 1265 1262 return -EINVAL; 1266 1263 } 1264 + 1265 + int power_supply_get_property(struct power_supply *psy, enum power_supply_property psp, 1266 + union power_supply_propval *val) 1267 + { 1268 + return __power_supply_get_property(psy, psp, val, true); 1269 + } 1267 1270 EXPORT_SYMBOL_GPL(power_supply_get_property); 1268 1271 1269 - int power_supply_set_property(struct power_supply *psy, 1270 - enum power_supply_property psp, 1271 - const union power_supply_propval *val) 1272 + /** 1273 + * power_supply_get_property_direct - Read a power supply property without checking for extensions 1274 + * @psy: The power supply 1275 + * @psp: The power supply property to read 1276 + * @val: The resulting value of the power supply property 1277 + * 1278 + * Read a power supply property without taking into account any power supply extensions registered 1279 + * on the given power supply. This is mostly useful for power supply extensions that want to access 1280 + * their own power supply as using power_supply_get_property() directly will result in a potential 1281 + * deadlock. 1282 + * 1283 + * Return: 0 on success or negative error code on failure. 
1284 + */ 1285 + int power_supply_get_property_direct(struct power_supply *psy, enum power_supply_property psp, 1286 + union power_supply_propval *val) 1287 + { 1288 + return __power_supply_get_property(psy, psp, val, false); 1289 + } 1290 + EXPORT_SYMBOL_GPL(power_supply_get_property_direct); 1291 + 1292 + 1293 + static int __power_supply_set_property(struct power_supply *psy, enum power_supply_property psp, 1294 + const union power_supply_propval *val, bool use_extensions) 1272 1295 { 1273 1296 struct power_supply_ext_registration *reg; 1274 1297 1275 1298 if (atomic_read(&psy->use_cnt) <= 0) 1276 1299 return -ENODEV; 1277 1300 1278 - scoped_guard(rwsem_read, &psy->extensions_sem) { 1279 - power_supply_for_each_extension(reg, psy) { 1280 - if (power_supply_ext_has_property(reg->ext, psp)) { 1301 + if (use_extensions) { 1302 + scoped_guard(rwsem_read, &psy->extensions_sem) { 1303 + power_supply_for_each_extension(reg, psy) { 1304 + if (!power_supply_ext_has_property(reg->ext, psp)) 1305 + continue; 1306 + 1281 1307 if (reg->ext->set_property) 1282 1308 return reg->ext->set_property(psy, reg->ext, reg->data, 1283 1309 psp, val); ··· 1321 1289 1322 1290 return psy->desc->set_property(psy, psp, val); 1323 1291 } 1292 + 1293 + int power_supply_set_property(struct power_supply *psy, enum power_supply_property psp, 1294 + const union power_supply_propval *val) 1295 + { 1296 + return __power_supply_set_property(psy, psp, val, true); 1297 + } 1324 1298 EXPORT_SYMBOL_GPL(power_supply_set_property); 1299 + 1300 + /** 1301 + * power_supply_set_property_direct - Write a power supply property without checking for extensions 1302 + * @psy: The power supply 1303 + * @psp: The power supply property to write 1304 + * @val: The value to write to the power supply property 1305 + * 1306 + * Write a power supply property without taking into account any power supply extensions registered 1307 + * on the given power supply. This is mostly useful for power supply extensions that want to access 1308 + * their own power supply as using power_supply_set_property() directly will result in a potential 1309 + * deadlock. 1310 + * 1311 + * Return: 0 on success or negative error code on failure. 1312 + */ 1313 + int power_supply_set_property_direct(struct power_supply *psy, enum power_supply_property psp, 1314 + const union power_supply_propval *val) 1315 + { 1316 + return __power_supply_set_property(psy, psp, val, false); 1317 + } 1318 + EXPORT_SYMBOL_GPL(power_supply_set_property_direct); 1325 1319 1326 1320 int power_supply_property_is_writeable(struct power_supply *psy, 1327 1321 enum power_supply_property psp)
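The reason for the _direct variants: power_supply_get_property() first walks the registered extensions under extensions_sem, so an extension callback that calls it back on its own supply re-enters that rwsem and, as the new kernel-doc warns, can deadlock. A hedged sketch of an extension deriving one property from another through the new accessor (the callback signature is inferred from the reg->ext->get_property() call sites above; the property mapping is illustrative):

    static int my_ext_get_property(struct power_supply *psy,
                                   const struct power_supply_ext *ext,
                                   void *data, enum power_supply_property psp,
                                   union power_supply_propval *val)
    {
        switch (psp) {
        case POWER_SUPPLY_PROP_TIME_TO_EMPTY_NOW:
            /* Query the base driver directly; the extension list is
             * skipped, so there is no re-entry on extensions_sem. */
            return power_supply_get_property_direct(psy,
                        POWER_SUPPLY_PROP_TIME_TO_EMPTY_AVG, val);
        default:
            return -EINVAL;
        }
    }

The test_power.c hunk just below exercises exactly this pattern in-tree.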
+4
drivers/power/supply/test_power.c
··· 259 259 static int test_power_battery_extmanufacture_year = 1234; 260 260 static int test_power_battery_exttemp_max = 1000; 261 261 static const enum power_supply_property test_power_battery_extprops[] = { 262 + POWER_SUPPLY_PROP_TIME_TO_EMPTY_NOW, 262 263 POWER_SUPPLY_PROP_MANUFACTURE_YEAR, 263 264 POWER_SUPPLY_PROP_TEMP_MAX, 264 265 }; ··· 271 270 union power_supply_propval *val) 272 271 { 273 272 switch (psp) { 273 + case POWER_SUPPLY_PROP_TIME_TO_EMPTY_NOW: 274 + return power_supply_get_property_direct(psy, POWER_SUPPLY_PROP_TIME_TO_EMPTY_AVG, 275 + val); 274 276 case POWER_SUPPLY_PROP_MANUFACTURE_YEAR: 275 277 val->intval = test_power_battery_extmanufacture_year; 276 278 break;
+3
drivers/s390/net/ism_drv.c
··· 130 130 struct ism_req_hdr *req = cmd; 131 131 struct ism_resp_hdr *resp = cmd; 132 132 133 + spin_lock(&ism->cmd_lock); 133 134 __ism_write_cmd(ism, req + 1, sizeof(*req), req->len - sizeof(*req)); 134 135 __ism_write_cmd(ism, req, 0, sizeof(*req)); 135 136 ··· 144 143 } 145 144 __ism_read_cmd(ism, resp + 1, sizeof(*resp), resp->len - sizeof(*resp)); 146 145 out: 146 + spin_unlock(&ism->cmd_lock); 147 147 return resp->ret; 148 148 } 149 149 ··· 608 606 return -ENOMEM; 609 607 610 608 spin_lock_init(&ism->lock); 609 + spin_lock_init(&ism->cmd_lock); 611 610 dev_set_drvdata(&pdev->dev, ism); 612 611 ism->pdev = pdev; 613 612 ism->dev.parent = &pdev->dev;
+3 -1
drivers/soundwire/amd_manager.c
··· 238 238 239 239 if (sts & AMD_SDW_IMM_RES_VALID) { 240 240 dev_err(amd_manager->dev, "SDW%x manager is in bad state\n", amd_manager->instance); 241 - writel(0x00, amd_manager->mmio + ACP_SW_IMM_CMD_STS); 241 + writel(AMD_SDW_IMM_RES_VALID, amd_manager->mmio + ACP_SW_IMM_CMD_STS); 242 242 } 243 243 writel(upper_data, amd_manager->mmio + ACP_SW_IMM_CMD_UPPER_WORD); 244 244 writel(lower_data, amd_manager->mmio + ACP_SW_IMM_CMD_LOWER_QWORD); ··· 1209 1209 } 1210 1210 1211 1211 if (amd_manager->power_mode_mask & AMD_SDW_CLK_STOP_MODE) { 1212 + cancel_work_sync(&amd_manager->amd_sdw_work); 1212 1213 amd_sdw_wake_enable(amd_manager, false); 1213 1214 if (amd_manager->acp_rev >= ACP70_PCI_REV_ID) { 1214 1215 ret = amd_sdw_host_wake_enable(amd_manager, false); ··· 1220 1219 if (ret) 1221 1220 return ret; 1222 1221 } else if (amd_manager->power_mode_mask & AMD_SDW_POWER_OFF_MODE) { 1222 + cancel_work_sync(&amd_manager->amd_sdw_work); 1223 1223 amd_sdw_wake_enable(amd_manager, false); 1224 1224 if (amd_manager->acp_rev >= ACP70_PCI_REV_ID) { 1225 1225 ret = amd_sdw_host_wake_enable(amd_manager, false);
-26
drivers/soundwire/qcom.c
··· 156 156 u8 word_length; 157 157 u8 blk_group_count; 158 158 u8 lane_control; 159 - u8 ch_mask; 160 159 }; 161 160 162 161 /* ··· 1048 1049 { 1049 1050 u32 reg = SWRM_DP_PORT_CTRL_BANK(enable_ch->port_num, bank); 1050 1051 struct qcom_swrm_ctrl *ctrl = to_qcom_sdw(bus); 1051 - struct qcom_swrm_port_config *pcfg; 1052 1052 u32 val; 1053 1053 1054 - pcfg = &ctrl->pconfig[enable_ch->port_num]; 1055 1054 ctrl->reg_read(ctrl, reg, &val); 1056 - if (pcfg->ch_mask != SWR_INVALID_PARAM && pcfg->ch_mask != 0) 1057 - enable_ch->ch_mask = pcfg->ch_mask; 1058 1055 1059 1056 if (enable_ch->enable) 1060 1057 val |= (enable_ch->ch_mask << SWRM_DP_PORT_CTRL_EN_CHAN_SHFT); ··· 1270 1275 return ctrl->sruntime[dai->id]; 1271 1276 } 1272 1277 1273 - static int qcom_swrm_set_channel_map(struct snd_soc_dai *dai, 1274 - unsigned int tx_num, const unsigned int *tx_slot, 1275 - unsigned int rx_num, const unsigned int *rx_slot) 1276 - { 1277 - struct qcom_swrm_ctrl *ctrl = dev_get_drvdata(dai->dev); 1278 - int i; 1279 - 1280 - if (tx_slot) { 1281 - for (i = 0; i < tx_num; i++) 1282 - ctrl->pconfig[i].ch_mask = tx_slot[i]; 1283 - } 1284 - 1285 - if (rx_slot) { 1286 - for (i = 0; i < rx_num; i++) 1287 - ctrl->pconfig[i].ch_mask = rx_slot[i]; 1288 - } 1289 - 1290 - return 0; 1291 - } 1292 - 1293 1278 static int qcom_swrm_startup(struct snd_pcm_substream *substream, 1294 1279 struct snd_soc_dai *dai) 1295 1280 { ··· 1306 1331 .shutdown = qcom_swrm_shutdown, 1307 1332 .set_stream = qcom_swrm_set_sdw_stream, 1308 1333 .get_stream = qcom_swrm_get_sdw_stream, 1309 - .set_channel_map = qcom_swrm_set_channel_map, 1310 1334 }; 1311 1335 1312 1336 static const struct snd_soc_component_driver qcom_swrm_dai_component = {
+10 -4
drivers/spi/spi.c
··· 4138 4138 xfer->tx_nbits != SPI_NBITS_OCTAL) 4139 4139 return -EINVAL; 4140 4140 if ((xfer->tx_nbits == SPI_NBITS_DUAL) && 4141 - !(spi->mode & (SPI_TX_DUAL | SPI_TX_QUAD))) 4141 + !(spi->mode & (SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL))) 4142 4142 return -EINVAL; 4143 4143 if ((xfer->tx_nbits == SPI_NBITS_QUAD) && 4144 - !(spi->mode & SPI_TX_QUAD)) 4144 + !(spi->mode & (SPI_TX_QUAD | SPI_TX_OCTAL))) 4145 + return -EINVAL; 4146 + if ((xfer->tx_nbits == SPI_NBITS_OCTAL) && 4147 + !(spi->mode & SPI_TX_OCTAL)) 4145 4148 return -EINVAL; 4146 4149 } 4147 4150 /* Check transfer rx_nbits */ ··· 4157 4154 xfer->rx_nbits != SPI_NBITS_OCTAL) 4158 4155 return -EINVAL; 4159 4156 if ((xfer->rx_nbits == SPI_NBITS_DUAL) && 4160 - !(spi->mode & (SPI_RX_DUAL | SPI_RX_QUAD))) 4157 + !(spi->mode & (SPI_RX_DUAL | SPI_RX_QUAD | SPI_RX_OCTAL))) 4161 4158 return -EINVAL; 4162 4159 if ((xfer->rx_nbits == SPI_NBITS_QUAD) && 4163 - !(spi->mode & SPI_RX_QUAD)) 4160 + !(spi->mode & (SPI_RX_QUAD | SPI_RX_OCTAL))) 4161 + return -EINVAL; 4162 + if ((xfer->rx_nbits == SPI_NBITS_OCTAL) && 4163 + !(spi->mode & SPI_RX_OCTAL)) 4164 4164 return -EINVAL; 4165 4165 } 4166 4166
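The widened masks make the check monotonic: a transfer at a given bus width is accepted when the device advertises that width or any wider one, so dual is allowed on dual/quad/octal devices, quad on quad/octal, and octal only on octal. For example, a quad-wide TX on an octal-capable device, which the old test rejected (buf and len are placeholders):

    struct spi_transfer t = {
        .tx_buf   = buf,
        .len      = len,
        .tx_nbits = SPI_NBITS_QUAD,    /* drive four data lines */
    };

    /* Accepted when spi->mode contains SPI_TX_QUAD or SPI_TX_OCTAL;
     * before this fix an octal-only device failed the SPI_TX_QUAD test. */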
+56 -42
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
··· 97 97 * tracked separately with the state. 98 98 */ 99 99 int peer_use_count; 100 + 101 + /* 102 + * Flag to indicate that the first vchiq connect has made it through. 103 + * This means that both sides should be fully ready, and we should 104 + * be able to suspend after this point. 105 + */ 106 + int first_connect; 100 107 }; 101 108 102 109 static int ··· 280 273 return 0; 281 274 } 282 275 276 + int 277 + vchiq_platform_init_state(struct vchiq_state *state) 278 + { 279 + struct vchiq_arm_state *platform_state; 280 + 281 + platform_state = devm_kzalloc(state->dev, sizeof(*platform_state), GFP_KERNEL); 282 + if (!platform_state) 283 + return -ENOMEM; 284 + 285 + rwlock_init(&platform_state->susp_res_lock); 286 + 287 + init_completion(&platform_state->ka_evt); 288 + atomic_set(&platform_state->ka_use_count, 0); 289 + atomic_set(&platform_state->ka_use_ack_count, 0); 290 + atomic_set(&platform_state->ka_release_count, 0); 291 + 292 + platform_state->state = state; 293 + 294 + state->platform_state = (struct opaque_platform_state *)platform_state; 295 + 296 + return 0; 297 + } 298 + 283 299 static struct vchiq_arm_state *vchiq_platform_get_arm_state(struct vchiq_state *state) 284 300 { 285 301 return (struct vchiq_arm_state *)state->platform_state; ··· 393 363 struct vchiq_state *state = instance->state; 394 364 int ret = 0; 395 365 396 - if (mutex_lock_killable(&state->mutex)) 397 - return -EAGAIN; 366 + mutex_lock(&state->mutex); 398 367 399 368 /* Remove all services */ 400 369 vchiq_shutdown_internal(state, instance); ··· 1011 982 } 1012 983 1013 984 int 1014 - vchiq_platform_init_state(struct vchiq_state *state) 1015 - { 1016 - struct vchiq_arm_state *platform_state; 1017 - char threadname[16]; 1018 - 1019 - platform_state = devm_kzalloc(state->dev, sizeof(*platform_state), GFP_KERNEL); 1020 - if (!platform_state) 1021 - return -ENOMEM; 1022 - 1023 - snprintf(threadname, sizeof(threadname), "vchiq-keep/%d", 1024 - state->id); 1025 - platform_state->ka_thread = kthread_create(&vchiq_keepalive_thread_func, 1026 - (void *)state, threadname); 1027 - if (IS_ERR(platform_state->ka_thread)) { 1028 - dev_err(state->dev, "couldn't create thread %s\n", threadname); 1029 - return PTR_ERR(platform_state->ka_thread); 1030 - } 1031 - 1032 - rwlock_init(&platform_state->susp_res_lock); 1033 - 1034 - init_completion(&platform_state->ka_evt); 1035 - atomic_set(&platform_state->ka_use_count, 0); 1036 - atomic_set(&platform_state->ka_use_ack_count, 0); 1037 - atomic_set(&platform_state->ka_release_count, 0); 1038 - 1039 - platform_state->state = state; 1040 - 1041 - state->platform_state = (struct opaque_platform_state *)platform_state; 1042 - 1043 - return 0; 1044 - } 1045 - 1046 - int 1047 985 vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service, 1048 986 enum USE_TYPE_E use_type) 1049 987 { ··· 1325 1329 return ret; 1326 1330 } 1327 1331 1328 - void vchiq_platform_connected(struct vchiq_state *state) 1329 - { 1330 - struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); 1331 - 1332 - wake_up_process(arm_state->ka_thread); 1333 - } 1334 - 1335 1332 void vchiq_platform_conn_state_changed(struct vchiq_state *state, 1336 1333 enum vchiq_connstate oldstate, 1337 1334 enum vchiq_connstate newstate) 1338 1335 { 1336 + struct vchiq_arm_state *arm_state = vchiq_platform_get_arm_state(state); 1337 + char threadname[16]; 1338 + 1339 1339 dev_dbg(state->dev, "suspend: %d: %s->%s\n", 1340 1340 state->id, get_conn_state_name(oldstate), get_conn_state_name(newstate)); 
1341 + if (state->conn_state != VCHIQ_CONNSTATE_CONNECTED) 1342 + return; 1343 + 1344 + write_lock_bh(&arm_state->susp_res_lock); 1345 + if (arm_state->first_connect) { 1346 + write_unlock_bh(&arm_state->susp_res_lock); 1347 + return; 1348 + } 1349 + 1350 + arm_state->first_connect = 1; 1351 + write_unlock_bh(&arm_state->susp_res_lock); 1352 + snprintf(threadname, sizeof(threadname), "vchiq-keep/%d", 1353 + state->id); 1354 + arm_state->ka_thread = kthread_create(&vchiq_keepalive_thread_func, 1355 + (void *)state, 1356 + threadname); 1357 + if (IS_ERR(arm_state->ka_thread)) { 1358 + dev_err(state->dev, "suspend: Couldn't create thread %s\n", 1359 + threadname); 1360 + } else { 1361 + wake_up_process(arm_state->ka_thread); 1362 + } 1341 1363 } 1342 1364 1343 1365 static const struct of_device_id vchiq_of_match[] = {
-1
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
··· 3343 3343 return -EAGAIN; 3344 3344 3345 3345 vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED); 3346 - vchiq_platform_connected(state); 3347 3346 complete(&state->connect); 3348 3347 } 3349 3348
-2
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.h
··· 575 575 576 576 int vchiq_send_remote_use_active(struct vchiq_state *state); 577 577 578 - void vchiq_platform_connected(struct vchiq_state *state); 579 - 580 578 void vchiq_platform_conn_state_changed(struct vchiq_state *state, 581 579 enum vchiq_connstate oldstate, 582 580 enum vchiq_connstate newstate);
+5 -5
drivers/thunderbolt/switch.c
··· 1450 1450 return ret; 1451 1451 1452 1452 data[0] &= ~ADP_DP_CS_0_VIDEO_HOPID_MASK; 1453 - data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK; 1453 + data[1] &= ~ADP_DP_CS_1_AUX_TX_HOPID_MASK; 1454 1454 data[1] &= ~ADP_DP_CS_1_AUX_RX_HOPID_MASK; 1455 1455 1456 1456 data[0] |= (video << ADP_DP_CS_0_VIDEO_HOPID_SHIFT) & ··· 3437 3437 } 3438 3438 } 3439 3439 3440 - static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags) 3440 + static int tb_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime) 3441 3441 { 3442 3442 if (flags) 3443 3443 tb_sw_dbg(sw, "enabling wakeup: %#x\n", flags); ··· 3445 3445 tb_sw_dbg(sw, "disabling wakeup\n"); 3446 3446 3447 3447 if (tb_switch_is_usb4(sw)) 3448 - return usb4_switch_set_wake(sw, flags); 3448 + return usb4_switch_set_wake(sw, flags, runtime); 3449 3449 return tb_lc_set_wake(sw, flags); 3450 3450 } 3451 3451 ··· 3521 3521 tb_switch_check_wakes(sw); 3522 3522 3523 3523 /* Disable wakes */ 3524 - tb_switch_set_wake(sw, 0); 3524 + tb_switch_set_wake(sw, 0, true); 3525 3525 3526 3526 err = tb_switch_tmu_init(sw); 3527 3527 if (err) ··· 3603 3603 flags |= TB_WAKE_ON_USB4 | TB_WAKE_ON_USB3 | TB_WAKE_ON_PCIE; 3604 3604 } 3605 3605 3606 - tb_switch_set_wake(sw, flags); 3606 + tb_switch_set_wake(sw, flags, runtime); 3607 3607 3608 3608 if (tb_switch_is_usb4(sw)) 3609 3609 usb4_switch_set_sleep(sw);
+1 -1
drivers/thunderbolt/tb.h
··· 1317 1317 int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf, 1318 1318 size_t size); 1319 1319 bool usb4_switch_lane_bonding_possible(struct tb_switch *sw); 1320 - int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags); 1320 + int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime); 1321 1321 int usb4_switch_set_sleep(struct tb_switch *sw); 1322 1322 int usb4_switch_nvm_sector_size(struct tb_switch *sw); 1323 1323 int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
+5 -7
drivers/thunderbolt/usb4.c
··· 403 403 * usb4_switch_set_wake() - Enabled/disable wake 404 404 * @sw: USB4 router 405 405 * @flags: Wakeup flags (%0 to disable) 406 + * @runtime: Wake is being programmed during system runtime 406 407 * 407 408 * Enables/disables router to wake up from sleep. 408 409 */ 409 - int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags) 410 + int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime) 410 411 { 411 - struct usb4_port *usb4; 412 412 struct tb_port *port; 413 413 u64 route = tb_route(sw); 414 414 u32 val; ··· 438 438 val |= PORT_CS_19_WOU4; 439 439 } else { 440 440 bool configured = val & PORT_CS_19_PC; 441 - usb4 = port->usb4; 441 + bool wakeup = runtime || device_may_wakeup(&port->usb4->dev); 442 442 443 - if (((flags & TB_WAKE_ON_CONNECT) && 444 - device_may_wakeup(&usb4->dev)) && !configured) 443 + if ((flags & TB_WAKE_ON_CONNECT) && wakeup && !configured) 445 444 val |= PORT_CS_19_WOC; 446 - if (((flags & TB_WAKE_ON_DISCONNECT) && 447 - device_may_wakeup(&usb4->dev)) && configured) 445 + if ((flags & TB_WAKE_ON_DISCONNECT) && wakeup && configured) 448 446 val |= PORT_CS_19_WOD; 449 447 if ((flags & TB_WAKE_ON_USB4) && configured) 450 448 val |= PORT_CS_19_WOU4;
+1 -1
drivers/tty/serial/pch_uart.c
··· 954 954 __func__); 955 955 return 0; 956 956 } 957 - dma_sync_sg_for_device(port->dev, priv->sg_tx_p, nent, DMA_TO_DEVICE); 957 + dma_sync_sg_for_device(port->dev, priv->sg_tx_p, num, DMA_TO_DEVICE); 958 958 priv->desc_tx = desc; 959 959 desc->callback = pch_dma_tx_complete; 960 960 desc->callback_param = priv;
+3
drivers/tty/serial/serial_base_bus.c
··· 13 13 #include <linux/device.h> 14 14 #include <linux/idr.h> 15 15 #include <linux/module.h> 16 + #include <linux/of.h> 16 17 #include <linux/serial_core.h> 17 18 #include <linux/slab.h> 18 19 #include <linux/spinlock.h> ··· 94 93 { 95 94 struct serial_ctrl_device *ctrl_dev = to_serial_base_ctrl_device(dev); 96 95 96 + of_node_put(dev->of_node); 97 97 kfree(ctrl_dev); 98 98 } 99 99 ··· 142 140 { 143 141 struct serial_port_device *port_dev = to_serial_base_port_device(dev); 144 142 143 + of_node_put(dev->of_node); 145 144 kfree(port_dev); 146 145 } 147 146
+6 -2
drivers/usb/core/hub.c
··· 5751 5751 struct usb_device *hdev = hub->hdev; 5752 5752 u16 portstatus, portchange; 5753 5753 int i = 0; 5754 + int err; 5754 5755 5755 5756 connect_change = test_bit(port1, hub->change_bits); 5756 5757 clear_bit(port1, hub->event_bits); ··· 5848 5847 } else if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION) 5849 5848 || udev->state == USB_STATE_NOTATTACHED) { 5850 5849 dev_dbg(&port_dev->dev, "do warm reset, port only\n"); 5851 - if (hub_port_reset(hub, port1, NULL, 5852 - HUB_BH_RESET_TIME, true) < 0) 5850 + err = hub_port_reset(hub, port1, NULL, 5851 + HUB_BH_RESET_TIME, true); 5852 + if (!udev && err == -ENOTCONN) 5853 + connect_change = 0; 5854 + else if (err < 0) 5853 5855 hub_port_disable(hub, port1, 1); 5854 5856 } else { 5855 5857 dev_dbg(&port_dev->dev, "do warm reset, full device\n");
+27 -13
drivers/usb/dwc2/gadget.c
··· 5389 5389 if (gusbcfg & GUSBCFG_ULPI_UTMI_SEL) { 5390 5390 /* ULPI interface */ 5391 5391 gpwrdn |= GPWRDN_ULPI_LATCH_EN_DURING_HIB_ENTRY; 5392 + dwc2_writel(hsotg, gpwrdn, GPWRDN); 5393 + udelay(10); 5394 + 5395 + /* Suspend the Phy Clock */ 5396 + pcgcctl = dwc2_readl(hsotg, PCGCTL); 5397 + pcgcctl |= PCGCTL_STOPPCLK; 5398 + dwc2_writel(hsotg, pcgcctl, PCGCTL); 5399 + udelay(10); 5400 + 5401 + gpwrdn = dwc2_readl(hsotg, GPWRDN); 5402 + gpwrdn |= GPWRDN_PMUACTV; 5403 + dwc2_writel(hsotg, gpwrdn, GPWRDN); 5404 + udelay(10); 5405 + } else { 5406 + /* UTMI+ Interface */ 5407 + dwc2_writel(hsotg, gpwrdn, GPWRDN); 5408 + udelay(10); 5409 + 5410 + gpwrdn = dwc2_readl(hsotg, GPWRDN); 5411 + gpwrdn |= GPWRDN_PMUACTV; 5412 + dwc2_writel(hsotg, gpwrdn, GPWRDN); 5413 + udelay(10); 5414 + 5415 + pcgcctl = dwc2_readl(hsotg, PCGCTL); 5416 + pcgcctl |= PCGCTL_STOPPCLK; 5417 + dwc2_writel(hsotg, pcgcctl, PCGCTL); 5418 + udelay(10); 5392 5419 } 5393 - dwc2_writel(hsotg, gpwrdn, GPWRDN); 5394 - udelay(10); 5395 - 5396 - /* Suspend the Phy Clock */ 5397 - pcgcctl = dwc2_readl(hsotg, PCGCTL); 5398 - pcgcctl |= PCGCTL_STOPPCLK; 5399 - dwc2_writel(hsotg, pcgcctl, PCGCTL); 5400 - udelay(10); 5401 - 5402 - gpwrdn = dwc2_readl(hsotg, GPWRDN); 5403 - gpwrdn |= GPWRDN_PMUACTV; 5404 - dwc2_writel(hsotg, gpwrdn, GPWRDN); 5405 - udelay(10); 5406 5420 5407 5421 /* Set flag to indicate that we are in hibernation */ 5408 5422 hsotg->hibernated = 1;
+2 -5
drivers/usb/dwc3/dwc3-qcom.c
··· 680 680 ret = reset_control_deassert(qcom->resets); 681 681 if (ret) { 682 682 dev_err(&pdev->dev, "failed to deassert resets, err=%d\n", ret); 683 - goto reset_assert; 683 + return ret; 684 684 } 685 685 686 686 ret = clk_bulk_prepare_enable(qcom->num_clocks, qcom->clks); 687 687 if (ret < 0) 688 - goto reset_assert; 688 + return ret; 689 689 690 690 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 691 691 if (!r) { ··· 755 755 dwc3_core_remove(&qcom->dwc); 756 756 clk_disable: 757 757 clk_bulk_disable_unprepare(qcom->num_clocks, qcom->clks); 758 - reset_assert: 759 - reset_control_assert(qcom->resets); 760 758 761 759 return ret; 762 760 } ··· 769 771 clk_bulk_disable_unprepare(qcom->num_clocks, qcom->clks); 770 772 771 773 dwc3_qcom_interconnect_exit(qcom); 772 - reset_control_assert(qcom->resets); 773 774 } 774 775 775 776 static int dwc3_qcom_pm_suspend(struct device *dev)
+4
drivers/usb/gadget/configfs.c
··· 1065 1065 unsigned int bytes_to_strip = 0; 1066 1066 int l = len; 1067 1067 1068 + if (!len) 1069 + return len; 1068 1070 if (page[l - 1] == '\n') { 1069 1071 --l; 1070 1072 ++bytes_to_strip; ··· 1190 1188 struct gadget_info *gi = os_desc_item_to_gadget_info(item); 1191 1189 int res, l; 1192 1190 1191 + if (!len) 1192 + return len; 1193 1193 l = min_t(int, len, OS_STRING_QW_SIGN_LEN >> 1); 1194 1194 if (page[l - 1] == '\n') 1195 1195 --l;
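Both configfs hunks close the same hole: an empty write reached page[l - 1] with l == 0, reading one byte before the buffer. A minimal hedged sketch of the hardened trailing-newline handling in a store callback:

    static ssize_t example_store(struct config_item *item, const char *page,
                                 size_t len)
    {
        size_t l = len;

        if (!len)    /* empty write: page[len - 1] would underflow */
            return len;

        if (page[l - 1] == '\n')
            --l;     /* ignore the newline that echo appends */

        /* ... consume the first l bytes of page ... */
        return len;
    }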
+2
drivers/usb/musb/musb_gadget.c
··· 1913 1913 * gadget driver here and have everything work; 1914 1914 * that currently misbehaves. 1915 1915 */ 1916 + usb_gadget_set_state(g, USB_STATE_NOTATTACHED); 1916 1917 1917 1918 /* Force check of devctl register for PM runtime */ 1918 1919 pm_runtime_mark_last_busy(musb->controller); ··· 2020 2019 case OTG_STATE_B_PERIPHERAL: 2021 2020 case OTG_STATE_B_IDLE: 2022 2021 musb_set_state(musb, OTG_STATE_B_IDLE); 2022 + usb_gadget_set_state(&musb->g, USB_STATE_NOTATTACHED); 2023 2023 break; 2024 2024 case OTG_STATE_B_SRP_INIT: 2025 2025 break;
+2
drivers/usb/serial/ftdi_sio.c
··· 803 803 .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk }, 804 804 { USB_DEVICE(FTDI_VID, FTDI_NDI_AURORA_SCU_PID), 805 805 .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk }, 806 + { USB_DEVICE(FTDI_NDI_VID, FTDI_NDI_EMGUIDE_GEMINI_PID), 807 + .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk }, 806 808 { USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) }, 807 809 { USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) }, 808 810 { USB_DEVICE(FTDI_VID, RTSYSTEMS_USB_VX8_PID) },
+3
drivers/usb/serial/ftdi_sio_ids.h
··· 204 204 #define FTDI_NDI_FUTURE_3_PID 0xDA73 /* NDI future device #3 */ 205 205 #define FTDI_NDI_AURORA_SCU_PID 0xDA74 /* NDI Aurora SCU */ 206 206 207 + #define FTDI_NDI_VID 0x23F2 208 + #define FTDI_NDI_EMGUIDE_GEMINI_PID 0x0003 /* NDI Emguide Gemini */ 209 + 207 210 /* 208 211 * ChamSys Limited (www.chamsys.co.uk) USB wing/interface product IDs 209 212 */
+5
drivers/usb/serial/option.c
··· 1415 1415 .driver_info = NCTRL(5) }, 1416 1416 { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) }, 1417 1417 { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) }, 1418 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10c7, 0xff, 0xff, 0x30), /* Telit FE910C04 (ECM) */ 1419 + .driver_info = NCTRL(4) }, 1420 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10c7, 0xff, 0xff, 0x40) }, 1418 1421 { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30), /* Telit FN990B (MBIM) */ 1419 1422 .driver_info = NCTRL(6) }, 1420 1423 { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) }, ··· 2346 2343 .driver_info = RSVD(3) }, 2347 2344 { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe145, 0xff), /* Foxconn T99W651 RNDIS */ 2348 2345 .driver_info = RSVD(5) | RSVD(6) }, 2346 + { USB_DEVICE_INTERFACE_CLASS(0x0489, 0xe167, 0xff), /* Foxconn T99W640 MBIM */ 2347 + .driver_info = RSVD(3) }, 2349 2348 { USB_DEVICE(0x1508, 0x1001), /* Fibocom NL668 (IOT version) */ 2350 2349 .driver_info = RSVD(4) | RSVD(5) | RSVD(6) }, 2351 2350 { USB_DEVICE(0x1782, 0x4d10) }, /* Fibocom L610 (AT mode) */
+2 -1
fs/bcachefs/alloc_foreground.c
··· 511 511 bch2_dev_usage_read_fast(ca, &req->usage); 512 512 avail = dev_buckets_free(ca, req->usage, req->watermark); 513 513 514 - if (req->usage.buckets[BCH_DATA_need_discard] > avail) 514 + if (req->usage.buckets[BCH_DATA_need_discard] > 515 + min(avail, ca->mi.nbuckets >> 7)) 515 516 bch2_dev_do_discards(ca); 516 517 517 518 if (req->usage.buckets[BCH_DATA_need_gc_gens] > avail)
+3 -3
fs/bcachefs/btree_io.c
··· 1295 1295 1296 1296 btree_bounce_free(c, btree_buf_bytes(b), used_mempool, sorted); 1297 1297 1298 - if (updated_range) 1299 - bch2_btree_node_drop_keys_outside_node(b); 1300 - 1301 1298 i = &b->data->keys; 1302 1299 for (k = i->start; k != vstruct_last(i);) { 1303 1300 struct bkey tmp; ··· 1331 1334 set_needs_whiteout(btree_bset_first(b), true); 1332 1335 1333 1336 btree_node_reset_sib_u64s(b); 1337 + 1338 + if (updated_range) 1339 + bch2_btree_node_drop_keys_outside_node(b); 1334 1340 1335 1341 /* 1336 1342 * XXX:
+4
fs/bcachefs/dirent.c
··· 13 13 14 14 #include <linux/dcache.h> 15 15 16 + #ifdef CONFIG_UNICODE 16 17 int bch2_casefold(struct btree_trans *trans, const struct bch_hash_info *info, 17 18 const struct qstr *str, struct qstr *out_cf) 18 19 { ··· 34 33 *out_cf = (struct qstr) QSTR_INIT(buf, ret); 35 34 return 0; 36 35 } 36 + #endif 37 37 38 38 static unsigned bch2_dirent_name_bytes(struct bkey_s_c_dirent d) 39 39 { ··· 256 254 if (!bch2_fs_casefold_enabled(c)) 257 255 return -EOPNOTSUPP; 258 256 257 + #ifdef CONFIG_UNICODE 259 258 memcpy(&dirent->v.d_cf_name_block.d_names[0], name->name, name->len); 260 259 261 260 char *cf_out = &dirent->v.d_cf_name_block.d_names[name->len]; ··· 282 279 dirent->v.d_cf_name_block.d_cf_name_len = cpu_to_le16(cf_len); 283 280 284 281 EBUG_ON(bch2_dirent_get_casefold_name(dirent_i_to_s_c(dirent)).len != cf_len); 282 + #endif 285 283 } 286 284 287 285 unsigned u64s = dirent_val_u64s(name->len, cf_len);
+8
fs/bcachefs/dirent.h
··· 23 23 struct bch_hash_info; 24 24 struct bch_inode_info; 25 25 26 + #ifdef CONFIG_UNICODE 26 27 int bch2_casefold(struct btree_trans *, const struct bch_hash_info *, 27 28 const struct qstr *, struct qstr *); 29 + #else 30 + static inline int bch2_casefold(struct btree_trans *trans, const struct bch_hash_info *info, 31 + const struct qstr *str, struct qstr *out_cf) 32 + { 33 + return -EOPNOTSUPP; 34 + } 35 + #endif 28 36 29 37 static inline int bch2_maybe_casefold(struct btree_trans *trans, 30 38 const struct bch_hash_info *info,
+5
fs/bcachefs/io_read.c
··· 166 166 BUG_ON(ret); 167 167 168 168 async_object_list_del(c, promote, op->list_idx); 169 + async_object_list_del(c, rbio, rbio->list_idx); 169 170 170 171 bch2_data_update_exit(&op->write); 171 172 ··· 457 456 if (rbio->start_time) 458 457 bch2_time_stats_update(&rbio->c->times[BCH_TIME_data_read], 459 458 rbio->start_time); 459 + #ifdef CONFIG_BCACHEFS_ASYNC_OBJECT_LISTS 460 + if (rbio->list_idx) 461 + async_object_list_del(rbio->c, rbio, rbio->list_idx); 462 + #endif 460 463 bio_endio(&rbio->bio); 461 464 } 462 465
+1
fs/bcachefs/journal_io.c
··· 1767 1767 1768 1768 closure_wake_up(&c->freelist_wait); 1769 1769 bch2_reset_alloc_cursors(c); 1770 + do_discards = true; 1770 1771 } 1771 1772 1772 1773 j->seq_ondisk = seq;
+1 -1
fs/bcachefs/movinggc.c
··· 71 71 if (ret) 72 72 return ret; 73 73 74 - struct bch_dev *ca = bch2_dev_tryget(c, k.k->p.inode); 74 + struct bch_dev *ca = bch2_dev_bucket_tryget(c, k.k->p); 75 75 if (!ca) 76 76 goto out; 77 77
-2
fs/cachefiles/io.c
··· 347 347 default: 348 348 ki->was_async = false; 349 349 cachefiles_write_complete(&ki->iocb, ret); 350 - if (ret > 0) 351 - ret = 0; 352 350 break; 353 351 } 354 352
+1 -3
fs/cachefiles/ondemand.c
··· 83 83 84 84 trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len); 85 85 ret = __cachefiles_write(object, file, pos, iter, NULL, NULL); 86 - if (!ret) { 87 - ret = len; 86 + if (ret > 0) 88 87 kiocb->ki_pos += ret; 89 - } 90 88 91 89 out: 92 90 fput(file);
+6
fs/efivarfs/super.c
··· 390 390 return 0; 391 391 } 392 392 393 + static void efivarfs_free(struct fs_context *fc) 394 + { 395 + kfree(fc->s_fs_info); 396 + } 397 + 393 398 static const struct fs_context_operations efivarfs_context_ops = { 394 399 .get_tree = efivarfs_get_tree, 395 400 .parse_param = efivarfs_parse_param, 396 401 .reconfigure = efivarfs_reconfigure, 402 + .free = efivarfs_free, 397 403 }; 398 404 399 405 static int efivarfs_check_missing(efi_char16_t *name16, efi_guid_t vendor,
+3
fs/iomap/buffered-io.c
··· 71 71 unsigned long flags; 72 72 bool uptodate = true; 73 73 74 + if (folio_test_uptodate(folio)) 75 + return; 76 + 74 77 if (ifs) { 75 78 spin_lock_irqsave(&ifs->state_lock, flags); 76 79 uptodate = ifs_set_range_uptodate(folio, ifs, off, len);
+8 -1
fs/isofs/inode.c
··· 1440 1440 inode->i_op = &page_symlink_inode_operations; 1441 1441 inode_nohighmem(inode); 1442 1442 inode->i_data.a_ops = &isofs_symlink_aops; 1443 - } else 1443 + } else if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) || 1444 + S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) { 1444 1445 /* XXX - parse_rock_ridge_inode() had already set i_rdev. */ 1445 1446 init_special_inode(inode, inode->i_mode, inode->i_rdev); 1447 + } else { 1448 + printk(KERN_DEBUG "ISOFS: Invalid file type 0%04o for inode %lu.\n", 1449 + inode->i_mode, inode->i_ino); 1450 + ret = -EIO; 1451 + goto fail; 1452 + } 1446 1453 1447 1454 ret = 0; 1448 1455 out:
+5
fs/netfs/read_pgpriv2.c
··· 110 110 if (!creq->io_streams[1].avail) 111 111 goto cancel_put; 112 112 113 + __set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags); 114 + trace_netfs_copy2cache(rreq, creq); 113 115 trace_netfs_write(creq, netfs_write_trace_copy_to_cache); 114 116 netfs_stat(&netfs_n_wh_copy_to_cache); 115 117 rreq->copy_to_cache = creq; ··· 156 154 netfs_issue_write(creq, &creq->io_streams[1]); 157 155 smp_wmb(); /* Write lists before ALL_QUEUED. */ 158 156 set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags); 157 + trace_netfs_rreq(rreq, netfs_rreq_trace_end_copy_to_cache); 158 + if (list_empty_careful(&creq->io_streams[1].subrequests)) 159 + netfs_wake_collector(creq); 159 160 160 161 netfs_put_request(creq, netfs_rreq_trace_put_return); 161 162 creq->copy_to_cache = NULL;
+4 -4
fs/notify/dnotify/dnotify.c
··· 308 308 goto out_err; 309 309 } 310 310 311 + error = file_f_owner_allocate(filp); 312 + if (error) 313 + goto out_err; 314 + 311 315 /* new fsnotify mark, we expect most fcntl calls to add a new mark */ 312 316 new_dn_mark = kmem_cache_alloc(dnotify_mark_cache, GFP_KERNEL); 313 317 if (!new_dn_mark) { 314 318 error = -ENOMEM; 315 319 goto out_err; 316 320 } 317 - 318 - error = file_f_owner_allocate(filp); 319 - if (error) 320 - goto out_err; 321 321 322 322 /* set up the new_fsn_mark and new_dn_mark */ 323 323 new_fsn_mark = &new_dn_mark->fsn_mark;
+1 -1
fs/pidfs.c
··· 319 319 if (!c) 320 320 return -ESRCH; 321 321 322 - if (!(kinfo.mask & PIDFD_INFO_COREDUMP)) { 322 + if ((kinfo.mask & PIDFD_INFO_COREDUMP) && !(kinfo.coredump_mask)) { 323 323 task_lock(task); 324 324 if (task->mm) 325 325 kinfo.coredump_mask = pidfs_coredump_mask(task->mm->flags);
+4 -2
fs/smb/client/dir.c
··· 190 190 int disposition; 191 191 struct TCP_Server_Info *server = tcon->ses->server; 192 192 struct cifs_open_parms oparms; 193 + struct cached_fid *parent_cfid = NULL; 193 194 int rdwr_for_fscache = 0; 194 195 __le32 lease_flags = 0; 195 196 ··· 314 313 if (!tcon->unix_ext && (mode & S_IWUGO) == 0) 315 314 create_options |= CREATE_OPTION_READONLY; 316 315 316 + 317 317 retry_open: 318 318 if (tcon->cfids && direntry->d_parent && server->dialect >= SMB30_PROT_ID) { 319 - struct cached_fid *parent_cfid; 320 - 319 + parent_cfid = NULL; 321 320 spin_lock(&tcon->cfids->cfid_list_lock); 322 321 list_for_each_entry(parent_cfid, &tcon->cfids->entries, entry) { 323 322 if (parent_cfid->dentry == direntry->d_parent) { ··· 328 327 memcpy(fid->parent_lease_key, 329 328 parent_cfid->fid.lease_key, 330 329 SMB2_LEASE_KEY_SIZE); 330 + parent_cfid->dirents.is_valid = false; 331 331 } 332 332 break; 333 333 }
+9 -1
fs/smb/client/file.c
··· 3088 3088 struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo, 3089 3089 oplock_break); 3090 3090 struct inode *inode = d_inode(cfile->dentry); 3091 - struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); 3091 + struct super_block *sb = inode->i_sb; 3092 + struct cifs_sb_info *cifs_sb = CIFS_SB(sb); 3092 3093 struct cifsInodeInfo *cinode = CIFS_I(inode); 3093 3094 struct cifs_tcon *tcon; 3094 3095 struct TCP_Server_Info *server; ··· 3099 3098 __u64 persistent_fid, volatile_fid; 3100 3099 __u16 net_fid; 3101 3100 3101 + /* 3102 + * Hold a reference to the superblock to prevent it and its inodes from 3103 + * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put() 3104 + * may release the last reference to the sb and trigger inode eviction. 3105 + */ 3106 + cifs_sb_active(sb); 3102 3107 wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS, 3103 3108 TASK_UNINTERRUPTIBLE); 3104 3109 ··· 3177 3170 cifs_put_tlink(tlink); 3178 3171 out: 3179 3172 cifs_done_oplock_break(cinode); 3173 + cifs_sb_deactive(sb); 3180 3174 } 3181 3175 3182 3176 static int cifs_swap_activate(struct swap_info_struct *sis,
+2 -1
fs/smb/client/smb2inode.c
··· 1346 1346 * empty object on the server. 1347 1347 */ 1348 1348 if (!(le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS)) 1349 - return ERR_PTR(-EOPNOTSUPP); 1349 + if (!tcon->posix_extensions) 1350 + return ERR_PTR(-EOPNOTSUPP); 1350 1351 1351 1352 oparms = CIFS_OPARMS(cifs_sb, tcon, full_path, 1352 1353 SYNCHRONIZE | DELETE |
+8 -2
fs/smb/client/smb2ops.c
··· 4316 4316 u8 key[SMB3_ENC_DEC_KEY_SIZE]; 4317 4317 struct aead_request *req; 4318 4318 u8 *iv; 4319 + DECLARE_CRYPTO_WAIT(wait); 4319 4320 unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize); 4320 4321 void *creq; 4321 4322 size_t sensitive_size; ··· 4367 4366 aead_request_set_crypt(req, sg, sg, crypt_len, iv); 4368 4367 aead_request_set_ad(req, assoc_data_len); 4369 4368 4370 - rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req); 4369 + aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, 4370 + crypto_req_done, &wait); 4371 + 4372 + rc = crypto_wait_req(enc ? crypto_aead_encrypt(req) 4373 + : crypto_aead_decrypt(req), &wait); 4371 4374 4372 4375 if (!rc && enc) 4373 4376 memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE); ··· 5260 5255 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_UNX_EMUL) { 5261 5256 rc = cifs_sfu_make_node(xid, inode, dentry, tcon, 5262 5257 full_path, mode, dev); 5263 - } else if (le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS) { 5258 + } else if ((le32_to_cpu(tcon->fsAttrInfo.Attributes) & FILE_SUPPORTS_REPARSE_POINTS) 5259 + || (tcon->posix_extensions)) { 5264 5260 rc = smb2_mknod_reparse(xid, inode, dentry, tcon, 5265 5261 full_path, mode, dev); 5266 5262 }
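The crypto hunk above is the stock recipe for driving a possibly asynchronous AEAD synchronously: crypto_aead_encrypt()/decrypt() may return -EINPROGRESS or -EBUSY, and crypto_wait_req() then sleeps until crypto_req_done() fires the completion declared by DECLARE_CRYPTO_WAIT(). The same pattern in isolation, as a hedged sketch:

    #include <crypto/aead.h>
    #include <linux/crypto.h>

    static int aead_op_sync(struct aead_request *req, bool enc)
    {
        DECLARE_CRYPTO_WAIT(wait);

        /* async backends complete through this callback/completion pair */
        aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                  crypto_req_done, &wait);

        /* returns 0 or the final error, even if the request went async */
        return crypto_wait_req(enc ? crypto_aead_encrypt(req)
                                   : crypto_aead_decrypt(req), &wait);
    }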
+1 -1
fs/ufs/super.c
··· 397 397 pr_err("ufstype can't be changed during remount\n"); 398 398 return -EINVAL; 399 399 } 400 - if (!ctx->flavour) { 400 + if (ctx->flavour) { 401 401 pr_err("conflicting ufstype options\n"); 402 402 return -EINVAL; 403 403 }
+9 -5
fs/xfs/libxfs/xfs_group.c
··· 163 163 164 164 xfs_defer_drain_free(&xg->xg_intents_drain); 165 165 #ifdef __KERNEL__ 166 - kfree(xg->xg_busy_extents); 166 + if (xfs_group_has_extent_busy(xg->xg_mount, xg->xg_type)) 167 + kfree(xg->xg_busy_extents); 167 168 #endif 168 169 169 170 if (uninit) ··· 190 189 xg->xg_type = type; 191 190 192 191 #ifdef __KERNEL__ 193 - xg->xg_busy_extents = xfs_extent_busy_alloc(); 194 - if (!xg->xg_busy_extents) 195 - return -ENOMEM; 192 + if (xfs_group_has_extent_busy(mp, type)) { 193 + xg->xg_busy_extents = xfs_extent_busy_alloc(); 194 + if (!xg->xg_busy_extents) 195 + return -ENOMEM; 196 + } 196 197 spin_lock_init(&xg->xg_state_lock); 197 198 xfs_hooks_init(&xg->xg_rmap_update_hooks); 198 199 #endif ··· 213 210 out_drain: 214 211 xfs_defer_drain_free(&xg->xg_intents_drain); 215 212 #ifdef __KERNEL__ 216 - kfree(xg->xg_busy_extents); 213 + if (xfs_group_has_extent_busy(xg->xg_mount, xg->xg_type)) 214 + kfree(xg->xg_busy_extents); 217 215 #endif 218 216 return error; 219 217 }
+5 -10
fs/xfs/xfs_buf.c
··· 1683 1683 fs_put_dax(btp->bt_daxdev, btp->bt_mount); 1684 1684 /* the main block device is closed by kill_block_super */ 1685 1685 if (btp->bt_bdev != btp->bt_mount->m_super->s_bdev) 1686 - bdev_fput(btp->bt_bdev_file); 1686 + bdev_fput(btp->bt_file); 1687 1687 kfree(btp); 1688 1688 } 1689 1689 ··· 1712 1712 max_bytes = 0; 1713 1713 } 1714 1714 1715 - btp->bt_bdev_awu_min = min_bytes; 1716 - btp->bt_bdev_awu_max = max_bytes; 1715 + btp->bt_awu_min = min_bytes; 1716 + btp->bt_awu_max = max_bytes; 1717 1717 } 1718 1718 1719 1719 /* Configure a buffer target that abstracts a block device. */ ··· 1738 1738 return -EINVAL; 1739 1739 } 1740 1740 1741 - /* 1742 - * Flush the block device pagecache so our bios see anything dirtied 1743 - * before mount. 1744 - */ 1745 1741 if (bdev_can_atomic_write(btp->bt_bdev)) 1746 1742 xfs_configure_buftarg_atomic_writes(btp); 1747 - 1748 - return sync_blockdev(btp->bt_bdev); 1743 + return 0; 1749 1744 } 1750 1745 1751 1746 int ··· 1798 1803 btp = kzalloc(sizeof(*btp), GFP_KERNEL | __GFP_NOFAIL); 1799 1804 1800 1805 btp->bt_mount = mp; 1801 - btp->bt_bdev_file = bdev_file; 1806 + btp->bt_file = bdev_file; 1802 1807 btp->bt_bdev = file_bdev(bdev_file); 1803 1808 btp->bt_dev = btp->bt_bdev->bd_dev; 1804 1809 btp->bt_daxdev = fs_dax_get_by_bdev(btp->bt_bdev, &btp->bt_dax_part_off,
+3 -5
fs/xfs/xfs_buf.h
··· 94 94 */ 95 95 struct xfs_buftarg { 96 96 dev_t bt_dev; 97 - struct file *bt_bdev_file; 98 97 struct block_device *bt_bdev; 99 98 struct dax_device *bt_daxdev; 100 99 struct file *bt_file; ··· 111 112 struct percpu_counter bt_readahead_count; 112 113 struct ratelimit_state bt_ioerror_rl; 113 114 114 - /* Atomic write unit values, bytes */ 115 - unsigned int bt_bdev_awu_min; 116 - unsigned int bt_bdev_awu_max; 115 + /* Hardware atomic write unit values, bytes */ 116 + unsigned int bt_awu_min; 117 + unsigned int bt_awu_max; 117 118 118 119 /* built-in cache, if we're not using the perag one */ 119 120 struct xfs_buf_cache bt_cache[]; ··· 374 375 extern void xfs_buftarg_drain(struct xfs_buftarg *); 375 376 int xfs_configure_buftarg(struct xfs_buftarg *btp, unsigned int sectorsize); 376 377 377 - #define xfs_getsize_buftarg(buftarg) block_size((buftarg)->bt_bdev) 378 378 #define xfs_readonly_buftarg(buftarg) bdev_read_only((buftarg)->bt_bdev) 379 379 380 380 int xfs_buf_reverify(struct xfs_buf *bp, const struct xfs_buf_ops *ops);
+7 -22
fs/xfs/xfs_discard.c
··· 103 103 bio_put(bio); 104 104 } 105 105 106 - static inline struct block_device * 107 - xfs_group_bdev( 108 - const struct xfs_group *xg) 109 - { 110 - struct xfs_mount *mp = xg->xg_mount; 111 - 112 - switch (xg->xg_type) { 113 - case XG_TYPE_AG: 114 - return mp->m_ddev_targp->bt_bdev; 115 - case XG_TYPE_RTG: 116 - return mp->m_rtdev_targp->bt_bdev; 117 - default: 118 - ASSERT(0); 119 - break; 120 - } 121 - return NULL; 122 - } 123 - 124 106 /* 125 107 * Walk the discard list and issue discards on all the busy extents in the 126 108 * list. We plug and chain the bios so that we only need a single completion ··· 120 138 121 139 blk_start_plug(&plug); 122 140 list_for_each_entry(busyp, &extents->extent_list, list) { 123 - trace_xfs_discard_extent(busyp->group, busyp->bno, 124 - busyp->length); 141 + struct xfs_group *xg = busyp->group; 142 + struct xfs_buftarg *btp = 143 + xfs_group_type_buftarg(xg->xg_mount, xg->xg_type); 125 144 126 - error = __blkdev_issue_discard(xfs_group_bdev(busyp->group), 127 - xfs_gbno_to_daddr(busyp->group, busyp->bno), 145 + trace_xfs_discard_extent(xg, busyp->bno, busyp->length); 146 + 147 + error = __blkdev_issue_discard(btp->bt_bdev, 148 + xfs_gbno_to_daddr(xg, busyp->bno), 128 149 XFS_FSB_TO_BB(mp, busyp->length), 129 150 GFP_KERNEL, &bio); 130 151 if (error && error != -EOPNOTSUPP) {
+8
fs/xfs/xfs_extent_busy.h
··· 68 68 list_sort(NULL, list, xfs_extent_busy_ag_cmp); 69 69 } 70 70 71 + /* 72 + * Zoned RTGs don't need to track busy extents, as the actual block freeing only 73 + * happens by a zone reset, which forces out all transactions that touched the 74 + * to be reset zone first. 75 + */ 76 + #define xfs_group_has_extent_busy(mp, type) \ 77 + ((type) == XG_TYPE_AG || !xfs_has_zoned((mp))) 78 + 71 79 #endif /* __XFS_EXTENT_BUSY_H__ */
+1 -1
fs/xfs/xfs_file.c
··· 752 752 * HW offload should be faster, so try that first if it is already 753 753 * known that the write length is not too large. 754 754 */ 755 - if (ocount > xfs_inode_buftarg(ip)->bt_bdev_awu_max) 755 + if (ocount > xfs_inode_buftarg(ip)->bt_awu_max) 756 756 dops = &xfs_atomic_write_cow_iomap_ops; 757 757 else 758 758 dops = &xfs_direct_write_iomap_ops;
+1 -1
fs/xfs/xfs_inode.h
··· 358 358 359 359 static inline bool xfs_inode_can_hw_atomic_write(const struct xfs_inode *ip) 360 360 { 361 - return xfs_inode_buftarg(ip)->bt_bdev_awu_max > 0; 361 + return xfs_inode_buftarg(ip)->bt_awu_max > 0; 362 362 } 363 363 364 364 /*
+1 -1
fs/xfs/xfs_iomap.c
··· 827 827 /* 828 828 * The ->iomap_begin caller should ensure this, but check anyway. 829 829 */ 830 - return len <= xfs_inode_buftarg(ip)->bt_bdev_awu_max; 830 + return len <= xfs_inode_buftarg(ip)->bt_awu_max; 831 831 } 832 832 833 833 static int
+1 -1
fs/xfs/xfs_iops.c
··· 665 665 * less than our out of place write limit, but we don't want to exceed 666 666 * the awu_max. 667 667 */ 668 - return min(awu_max, xfs_inode_buftarg(ip)->bt_bdev_awu_max); 668 + return min(awu_max, xfs_inode_buftarg(ip)->bt_awu_max); 669 669 } 670 670 671 671 static void
+39 -58
fs/xfs/xfs_mount.c
··· 171 171 ASSERT(mp->m_ddev_targp != NULL); 172 172 173 173 /* 174 - * For the initial read, we must guess at the sector 175 - * size based on the block device. It's enough to 176 - * get the sb_sectsize out of the superblock and 177 - * then reread with the proper length. 178 - * We don't verify it yet, because it may not be complete. 174 + * In the first pass, use the device sector size to just read enough 175 + * of the superblock to extract the XFS sector size. 176 + * 177 + * The device sector size must be smaller than or equal to the XFS 178 + * sector size and thus we can always read the superblock. Once we know 179 + * the XFS sector size, re-read it and run the buffer verifier. 179 180 */ 180 - sector_size = xfs_getsize_buftarg(mp->m_ddev_targp); 181 + sector_size = mp->m_ddev_targp->bt_logical_sectorsize; 181 182 buf_ops = NULL; 182 183 183 - /* 184 - * Allocate a (locked) buffer to hold the superblock. This will be kept 185 - * around at all times to optimize access to the superblock. 186 - */ 187 184 reread: 188 185 error = xfs_buf_read_uncached(mp->m_ddev_targp, XFS_SB_DADDR, 189 186 BTOBB(sector_size), &bp, buf_ops); ··· 244 247 /* no need to be quiet anymore, so reset the buf ops */ 245 248 bp->b_ops = &xfs_sb_buf_ops; 246 249 250 + /* 251 + * Keep a pointer of the sb buffer around instead of caching it in the 252 + * buffer cache because we access it frequently. 253 + */ 247 254 mp->m_sb_bp = bp; 248 255 xfs_buf_unlock(bp); 249 256 return 0; ··· 679 678 } 680 679 681 680 /* 682 - * If the data device advertises atomic write support, limit the size of data 683 - * device atomic writes to the greatest power-of-two factor of the AG size so 684 - * that every atomic write unit aligns with the start of every AG. This is 685 - * required so that the per-AG allocations for an atomic write will always be 681 + * If the underlying device advertises atomic write support, limit the size of 682 + * atomic writes to the greatest power-of-two factor of the group size so 683 + * that every atomic write unit aligns with the start of every group. This is 684 + * required so that the allocations for an atomic write will always be 686 685 * aligned compatibly with the alignment requirements of the storage. 687 686 * 688 - * If the data device doesn't advertise atomic writes, then there are no 689 - * alignment restrictions and the largest out-of-place write we can do 690 - * ourselves is the number of blocks that user files can allocate from any AG. 687 + * If the device doesn't advertise atomic writes, then there are no alignment 688 + * restrictions and the largest out-of-place write we can do ourselves is the 689 + * number of blocks that user files can allocate from any group. 691 690 */ 692 - static inline xfs_extlen_t xfs_calc_perag_awu_max(struct xfs_mount *mp) 691 + static xfs_extlen_t 692 + xfs_calc_group_awu_max( 693 + struct xfs_mount *mp, 694 + enum xfs_group_type type) 693 695 { 694 - if (mp->m_ddev_targp->bt_bdev_awu_min > 0) 695 - return max_pow_of_two_factor(mp->m_sb.sb_agblocks); 696 - return rounddown_pow_of_two(mp->m_ag_max_usable); 697 - } 696 + struct xfs_groups *g = &mp->m_groups[type]; 697 + struct xfs_buftarg *btp = xfs_group_type_buftarg(mp, type); 698 698 699 - /* 700 - * Reflink on the realtime device requires rtgroups, and atomic writes require 701 - * reflink. 
702 - * 703 - * If the realtime device advertises atomic write support, limit the size of 704 - * data device atomic writes to the greatest power-of-two factor of the rtgroup 705 - * size so that every atomic write unit aligns with the start of every rtgroup. 706 - * This is required so that the per-rtgroup allocations for an atomic write 707 - * will always be aligned compatibly with the alignment requirements of the 708 - * storage. 709 - * 710 - * If the rt device doesn't advertise atomic writes, then there are no 711 - * alignment restrictions and the largest out-of-place write we can do 712 - * ourselves is the number of blocks that user files can allocate from any 713 - * rtgroup. 714 - */ 715 - static inline xfs_extlen_t xfs_calc_rtgroup_awu_max(struct xfs_mount *mp) 716 - { 717 - struct xfs_groups *rgs = &mp->m_groups[XG_TYPE_RTG]; 718 - 719 - if (rgs->blocks == 0) 699 + if (g->blocks == 0) 720 700 return 0; 721 - if (mp->m_rtdev_targp && mp->m_rtdev_targp->bt_bdev_awu_min > 0) 722 - return max_pow_of_two_factor(rgs->blocks); 723 - return rounddown_pow_of_two(rgs->blocks); 701 + if (btp && btp->bt_awu_min > 0) 702 + return max_pow_of_two_factor(g->blocks); 703 + return rounddown_pow_of_two(g->blocks); 724 704 } 725 705 726 706 /* Compute the maximum atomic write unit size for each section. */ 727 707 static inline void 728 708 xfs_calc_atomic_write_unit_max( 729 - struct xfs_mount *mp) 709 + struct xfs_mount *mp, 710 + enum xfs_group_type type) 730 711 { 731 - struct xfs_groups *ags = &mp->m_groups[XG_TYPE_AG]; 732 - struct xfs_groups *rgs = &mp->m_groups[XG_TYPE_RTG]; 712 + struct xfs_groups *g = &mp->m_groups[type]; 733 713 734 714 const xfs_extlen_t max_write = xfs_calc_atomic_write_max(mp); 735 715 const xfs_extlen_t max_ioend = xfs_reflink_max_atomic_cow(mp); 736 - const xfs_extlen_t max_agsize = xfs_calc_perag_awu_max(mp); 737 - const xfs_extlen_t max_rgsize = xfs_calc_rtgroup_awu_max(mp); 716 + const xfs_extlen_t max_gsize = xfs_calc_group_awu_max(mp, type); 738 717 739 - ags->awu_max = min3(max_write, max_ioend, max_agsize); 740 - rgs->awu_max = min3(max_write, max_ioend, max_rgsize); 741 - 742 - trace_xfs_calc_atomic_write_unit_max(mp, max_write, max_ioend, 743 - max_agsize, max_rgsize); 718 + g->awu_max = min3(max_write, max_ioend, max_gsize); 719 + trace_xfs_calc_atomic_write_unit_max(mp, type, max_write, max_ioend, 720 + max_gsize, g->awu_max); 744 721 } 745 722 746 723 /* ··· 736 757 max(mp->m_groups[XG_TYPE_AG].blocks, 737 758 mp->m_groups[XG_TYPE_RTG].blocks); 738 759 const xfs_extlen_t max_group_write = 739 - max(xfs_calc_perag_awu_max(mp), xfs_calc_rtgroup_awu_max(mp)); 760 + max(xfs_calc_group_awu_max(mp, XG_TYPE_AG), 761 + xfs_calc_group_awu_max(mp, XG_TYPE_RTG)); 740 762 int error; 741 763 742 764 if (new_max_bytes == 0) ··· 793 813 return error; 794 814 } 795 815 796 - xfs_calc_atomic_write_unit_max(mp); 816 + xfs_calc_atomic_write_unit_max(mp, XG_TYPE_AG); 817 + xfs_calc_atomic_write_unit_max(mp, XG_TYPE_RTG); 797 818 mp->m_awu_max_bytes = new_max_bytes; 798 819 return 0; 799 820 }
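The unified helper reduces both branches to one rule per group type: when the backing device supports hardware atomics (bt_awu_min > 0) the limit must divide the group size exactly, hence max_pow_of_two_factor(); otherwise any power of two up to the group size will do, hence rounddown_pow_of_two(). Worked example, assuming a hypothetical group of 96 blocks: max_pow_of_two_factor(96) = 32, since 96 = 32 * 3, while rounddown_pow_of_two(96) = 64, which does not divide 96 and is therefore only usable in the no-hardware-atomics fallback.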
+17
fs/xfs/xfs_mount.h
··· 802 802 int xfs_set_max_atomic_write_opt(struct xfs_mount *mp, 803 803 unsigned long long new_max_bytes); 804 804 805 + static inline struct xfs_buftarg * 806 + xfs_group_type_buftarg( 807 + struct xfs_mount *mp, 808 + enum xfs_group_type type) 809 + { 810 + switch (type) { 811 + case XG_TYPE_AG: 812 + return mp->m_ddev_targp; 813 + case XG_TYPE_RTG: 814 + return mp->m_rtdev_targp; 815 + default: 816 + ASSERT(0); 817 + break; 818 + } 819 + return NULL; 820 + } 821 + 805 822 #endif /* __XFS_MOUNT_H__ */
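Note on the xfs_mount.h hunk above: the new xfs_group_type_buftarg() helper is the single place that maps a group type to its backing device, and the xfs_mount.c and xfs_notify_failure.c hunks in this series switch to it instead of open-coding the AG-versus-RTG choice. A minimal userspace sketch of the same shape (the struct, the string stand-ins, and main() are hypothetical, not the XFS types):

    #include <assert.h>
    #include <stdio.h>

    enum xg_type { XG_TYPE_AG, XG_TYPE_RTG };

    /* Stand-ins for the mount's data and realtime targets. */
    struct mount { const char *data_dev, *rt_dev; };

    /* One switch instead of a ternary at every call site. */
    static const char *group_buftarg(const struct mount *m, enum xg_type t)
    {
        switch (t) {
        case XG_TYPE_AG:  return m->data_dev;
        case XG_TYPE_RTG: return m->rt_dev;
        }
        assert(0);  /* unreachable for valid group types */
        return NULL;
    }

    int main(void)
    {
        struct mount m = { "datadev", "rtdev" };

        printf("%s %s\n", group_buftarg(&m, XG_TYPE_AG),
               group_buftarg(&m, XG_TYPE_RTG));
        return 0;
    }

Centralizing the mapping also means a future group type fails loudly in one spot rather than silently falling through scattered ternaries.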
+1 -2
fs/xfs/xfs_notify_failure.c
··· 253 253 return -EOPNOTSUPP; 254 254 } 255 255 256 - error = xfs_dax_translate_range(type == XG_TYPE_RTG ? 257 - mp->m_rtdev_targp : mp->m_ddev_targp, 256 + error = xfs_dax_translate_range(xfs_group_type_buftarg(mp, type), 258 257 offset, len, &daddr, &bblen); 259 258 if (error) 260 259 return error;
+14 -17
fs/xfs/xfs_trace.h
··· 171 171 DEFINE_ATTR_LIST_EVENT(xfs_attr_node_list); 172 172 173 173 TRACE_EVENT(xfs_calc_atomic_write_unit_max, 174 - TP_PROTO(struct xfs_mount *mp, unsigned int max_write, 175 - unsigned int max_ioend, unsigned int max_agsize, 176 - unsigned int max_rgsize), 177 - TP_ARGS(mp, max_write, max_ioend, max_agsize, max_rgsize), 174 + TP_PROTO(struct xfs_mount *mp, enum xfs_group_type type, 175 + unsigned int max_write, unsigned int max_ioend, 176 + unsigned int max_gsize, unsigned int awu_max), 177 + TP_ARGS(mp, type, max_write, max_ioend, max_gsize, awu_max), 178 178 TP_STRUCT__entry( 179 179 __field(dev_t, dev) 180 + __field(enum xfs_group_type, type) 180 181 __field(unsigned int, max_write) 181 182 __field(unsigned int, max_ioend) 182 - __field(unsigned int, max_agsize) 183 - __field(unsigned int, max_rgsize) 184 - __field(unsigned int, data_awu_max) 185 - __field(unsigned int, rt_awu_max) 183 + __field(unsigned int, max_gsize) 184 + __field(unsigned int, awu_max) 186 185 ), 187 186 TP_fast_assign( 188 187 __entry->dev = mp->m_super->s_dev; 188 + __entry->type = type; 189 189 __entry->max_write = max_write; 190 190 __entry->max_ioend = max_ioend; 191 - __entry->max_agsize = max_agsize; 192 - __entry->max_rgsize = max_rgsize; 193 - __entry->data_awu_max = mp->m_groups[XG_TYPE_AG].awu_max; 194 - __entry->rt_awu_max = mp->m_groups[XG_TYPE_RTG].awu_max; 191 + __entry->max_gsize = max_gsize; 192 + __entry->awu_max = awu_max; 195 193 ), 196 - TP_printk("dev %d:%d max_write %u max_ioend %u max_agsize %u max_rgsize %u data_awu_max %u rt_awu_max %u", 194 + TP_printk("dev %d:%d %s max_write %u max_ioend %u max_gsize %u awu_max %u", 197 195 MAJOR(__entry->dev), MINOR(__entry->dev), 196 + __print_symbolic(__entry->type, XG_TYPE_STRINGS), 198 197 __entry->max_write, 199 198 __entry->max_ioend, 200 - __entry->max_agsize, 201 - __entry->max_rgsize, 202 - __entry->data_awu_max, 203 - __entry->rt_awu_max) 199 + __entry->max_gsize, 200 + __entry->awu_max) 204 201 ); 205 202 206 203 TRACE_EVENT(xfs_calc_max_atomic_write_fsblocks,
+1 -1
fs/xfs/xfs_xattr.c
··· 243 243 offset = context->buffer + context->count; 244 244 memcpy(offset, prefix, prefix_len); 245 245 offset += prefix_len; 246 - strncpy(offset, (char *)name, namelen); /* real name */ 246 + memcpy(offset, (char *)name, namelen); /* real name */ 247 247 offset += namelen; 248 248 *offset = '\0'; 249 249
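Note on the xfs_xattr.c hunk above: strncpy() bounded by the exact source length neither NUL-terminates nor conveys intent, and newer compilers warn about it; since namelen is already known and the terminator is written explicitly on the next line, memcpy() is the accurate tool. A small runnable sketch of the same prefix + name + NUL assembly (the names and sizes are made up for illustration):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *prefix = "user.";    /* hypothetical attr namespace */
        const char *name = "comment";    /* attribute name, length known */
        size_t plen = strlen(prefix), nlen = strlen(name);
        char buf[64];
        char *p = buf;

        memcpy(p, prefix, plen); p += plen;  /* copy prefix bytes */
        memcpy(p, name, nlen);   p += nlen;  /* copy name bytes, no NUL yet */
        *p = '\0';                           /* terminate explicitly */
        printf("%s\n", buf);                 /* prints: user.comment */
        return 0;
    }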
+2
include/drm/drm_buddy.h
··· 160 160 u64 new_size, 161 161 struct list_head *blocks); 162 162 163 + void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear); 164 + 163 165 void drm_buddy_free_block(struct drm_buddy *mm, struct drm_buddy_block *block); 164 166 165 167 void drm_buddy_free_list(struct drm_buddy *mm,
+7
include/linux/interconnect-provider.h
··· 119 119 struct icc_node *icc_node_create_dyn(void); 120 120 struct icc_node *icc_node_create(int id); 121 121 void icc_node_destroy(int id); 122 + int icc_node_set_name(struct icc_node *node, const struct icc_provider *provider, const char *name); 122 123 int icc_link_nodes(struct icc_node *src_node, struct icc_node **dst_node); 123 124 int icc_link_create(struct icc_node *node, const int dst_id); 124 125 void icc_node_add(struct icc_node *node, struct icc_provider *provider); ··· 151 150 152 151 static inline void icc_node_destroy(int id) 153 152 { 153 + } 154 + 155 + static inline int icc_node_set_name(struct icc_node *node, const struct icc_provider *provider, 156 + const char *name) 157 + { 158 + return -EOPNOTSUPP; 154 159 } 155 160 156 161 static inline int icc_link_nodes(struct icc_node *src_node, struct icc_node **dst_node)
+1
include/linux/ism.h
··· 28 28 29 29 struct ism_dev { 30 30 spinlock_t lock; /* protects the ism device */ 31 + spinlock_t cmd_lock; /* serializes cmds */ 31 32 struct list_head list; 32 33 struct pci_dev *pdev; 33 34
+2
include/linux/phy/phy.h
··· 154 154 * @id: id of the phy device 155 155 * @ops: function pointers for performing phy operations 156 156 * @mutex: mutex to protect phy_ops 157 + * @lockdep_key: lockdep information for this mutex 157 158 * @init_count: used to protect when the PHY is used by multiple consumers 158 159 * @power_count: used to protect when the PHY is used by multiple consumers 159 160 * @attrs: used to specify PHY specific attributes ··· 166 165 int id; 167 166 const struct phy_ops *ops; 168 167 struct mutex mutex; 168 + struct lock_class_key lockdep_key; 169 169 int init_count; 170 170 int power_count; 171 171 struct phy_attrs attrs;
+8
include/linux/power_supply.h
··· 888 888 extern int power_supply_get_property(struct power_supply *psy, 889 889 enum power_supply_property psp, 890 890 union power_supply_propval *val); 891 + int power_supply_get_property_direct(struct power_supply *psy, enum power_supply_property psp, 892 + union power_supply_propval *val); 891 893 #if IS_ENABLED(CONFIG_POWER_SUPPLY) 892 894 extern int power_supply_set_property(struct power_supply *psy, 893 895 enum power_supply_property psp, 894 896 const union power_supply_propval *val); 897 + int power_supply_set_property_direct(struct power_supply *psy, enum power_supply_property psp, 898 + const union power_supply_propval *val); 895 899 #else 896 900 static inline int power_supply_set_property(struct power_supply *psy, 897 901 enum power_supply_property psp, 898 902 const union power_supply_propval *val) 903 + { return 0; } 904 + static inline int power_supply_set_property_direct(struct power_supply *psy, 905 + enum power_supply_property psp, 906 + const union power_supply_propval *val) 899 907 { return 0; } 900 908 #endif 901 909 extern void power_supply_external_power_changed(struct power_supply *psy);
+4 -11
include/net/xfrm.h
··· 441 441 int xfrm_input_unregister_afinfo(const struct xfrm_input_afinfo *afinfo); 442 442 443 443 void xfrm_flush_gc(void); 444 - void xfrm_state_delete_tunnel(struct xfrm_state *x); 445 444 446 445 struct xfrm_type { 447 446 struct module *owner; ··· 473 474 474 475 int xfrm_register_type_offload(const struct xfrm_type_offload *type, unsigned short family); 475 476 void xfrm_unregister_type_offload(const struct xfrm_type_offload *type, unsigned short family); 476 - void xfrm_set_type_offload(struct xfrm_state *x); 477 + void xfrm_set_type_offload(struct xfrm_state *x, bool try_load); 477 478 static inline void xfrm_unset_type_offload(struct xfrm_state *x) 478 479 { 479 480 if (!x->type_offload) ··· 915 916 xfrm_pol_put(pols[i]); 916 917 } 917 918 918 - void __xfrm_state_destroy(struct xfrm_state *, bool); 919 + void __xfrm_state_destroy(struct xfrm_state *); 919 920 920 921 static inline void __xfrm_state_put(struct xfrm_state *x) 921 922 { ··· 925 926 static inline void xfrm_state_put(struct xfrm_state *x) 926 927 { 927 928 if (refcount_dec_and_test(&x->refcnt)) 928 - __xfrm_state_destroy(x, false); 929 - } 930 - 931 - static inline void xfrm_state_put_sync(struct xfrm_state *x) 932 - { 933 - if (refcount_dec_and_test(&x->refcnt)) 934 - __xfrm_state_destroy(x, true); 929 + __xfrm_state_destroy(x); 935 930 } 936 931 937 932 static inline void xfrm_state_hold(struct xfrm_state *x) ··· 1763 1770 1764 1771 struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num); 1765 1772 int xfrm_state_delete(struct xfrm_state *x); 1766 - int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync); 1773 + int xfrm_state_flush(struct net *net, u8 proto, bool task_valid); 1767 1774 int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid); 1768 1775 int xfrm_dev_policy_flush(struct net *net, struct net_device *dev, 1769 1776 bool task_valid);
+30
include/trace/events/netfs.h
··· 55 55 EM(netfs_rreq_trace_copy, "COPY ") \ 56 56 EM(netfs_rreq_trace_dirty, "DIRTY ") \ 57 57 EM(netfs_rreq_trace_done, "DONE ") \ 58 + EM(netfs_rreq_trace_end_copy_to_cache, "END-C2C") \ 58 59 EM(netfs_rreq_trace_free, "FREE ") \ 59 60 EM(netfs_rreq_trace_ki_complete, "KI-CMPL") \ 60 61 EM(netfs_rreq_trace_recollect, "RECLLCT") \ ··· 558 557 __entry->cookie, 559 558 __entry->ino, 560 559 __entry->start, __entry->start + __entry->len - 1) 560 + ); 561 + 562 + TRACE_EVENT(netfs_copy2cache, 563 + TP_PROTO(const struct netfs_io_request *rreq, 564 + const struct netfs_io_request *creq), 565 + 566 + TP_ARGS(rreq, creq), 567 + 568 + TP_STRUCT__entry( 569 + __field(unsigned int, rreq) 570 + __field(unsigned int, creq) 571 + __field(unsigned int, cookie) 572 + __field(unsigned int, ino) 573 + ), 574 + 575 + TP_fast_assign( 576 + struct netfs_inode *__ctx = netfs_inode(rreq->inode); 577 + struct fscache_cookie *__cookie = netfs_i_cookie(__ctx); 578 + __entry->rreq = rreq->debug_id; 579 + __entry->creq = creq->debug_id; 580 + __entry->cookie = __cookie ? __cookie->debug_id : 0; 581 + __entry->ino = rreq->inode->i_ino; 582 + ), 583 + 584 + TP_printk("R=%08x CR=%08x c=%08x i=%x ", 585 + __entry->rreq, 586 + __entry->creq, 587 + __entry->cookie, 588 + __entry->ino) 561 589 ); 562 590 563 591 TRACE_EVENT(netfs_collect,
+8 -4
io_uring/net.c
··· 1738 1738 int ret; 1739 1739 bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK; 1740 1740 1741 - if (unlikely(req->flags & REQ_F_FAIL)) { 1742 - ret = -ECONNRESET; 1743 - goto out; 1741 + if (connect->in_progress) { 1742 + struct poll_table_struct pt = { ._key = EPOLLERR }; 1743 + 1744 + if (vfs_poll(req->file, &pt) & EPOLLERR) 1745 + goto get_sock_err; 1744 1746 } 1745 1747 1746 1748 file_flags = force_nonblock ? O_NONBLOCK : 0; ··· 1767 1765 * which means the previous result is good. For both of these, 1768 1766 * grab the sock_error() and use that for the completion. 1769 1767 */ 1770 - if (ret == -EBADFD || ret == -EISCONN) 1768 + if (ret == -EBADFD || ret == -EISCONN) { 1769 + get_sock_err: 1771 1770 ret = sock_error(sock_from_file(req->file)->sk); 1771 + } 1772 1772 } 1773 1773 if (ret == -ERESTARTSYS) 1774 1774 ret = -EINTR;
-2
io_uring/poll.c
··· 273 273 return IOU_POLL_REISSUE; 274 274 } 275 275 } 276 - if (unlikely(req->cqe.res & EPOLLERR)) 277 - req_set_fail(req); 278 276 if (req->apoll_events & EPOLLONESHOT) 279 277 return IOU_POLL_DONE; 280 278
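Note on the two io_uring hunks above: poll no longer fails a request outright on EPOLLERR; instead, when a connect is already in progress, the issue path probes the file itself with a poll table keyed to EPOLLERR and, on error, falls through to reporting sock_error(). A kernel-style sketch of that keyed-probe pattern (the function name is hypothetical; this is not the exact io_uring code):

    #include <linux/poll.h>

    /* Probe a file for one specific event without sleeping, by keying
     * the poll table so ->poll() only reports the bits we care about. */
    static bool file_has_error(struct file *file)
    {
        struct poll_table_struct pt = { ._key = EPOLLERR };

        return vfs_poll(file, &pt) & EPOLLERR;
    }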
+3 -1
io_uring/zcrx.c
··· 76 76 int dmabuf_fd = area_reg->dmabuf_fd; 77 77 int i, ret; 78 78 79 + if (off) 80 + return -EINVAL; 79 81 if (WARN_ON_ONCE(!ifq->dev)) 80 82 return -EFAULT; 81 83 if (!IS_ENABLED(CONFIG_DMA_SHARED_BUFFER)) ··· 108 106 for_each_sgtable_dma_sg(mem->sgt, sg, i) 109 107 total_size += sg_dma_len(sg); 110 108 111 - if (total_size < off + len) { 109 + if (total_size != len) { 112 110 ret = -EINVAL; 113 111 goto err; 114 112 }
+8 -3
kernel/bpf/helpers.c
··· 884 884 if (fmt[i] == 'p') { 885 885 sizeof_cur_arg = sizeof(long); 886 886 887 + if (fmt[i + 1] == 0 || isspace(fmt[i + 1]) || 888 + ispunct(fmt[i + 1])) { 889 + if (tmp_buf) 890 + cur_arg = raw_args[num_spec]; 891 + goto nocopy_fmt; 892 + } 893 + 887 894 if ((fmt[i + 1] == 'k' || fmt[i + 1] == 'u') && 888 895 fmt[i + 2] == 's') { 889 896 fmt_ptype = fmt[i + 1]; ··· 898 891 goto fmt_str; 899 892 } 900 893 901 - if (fmt[i + 1] == 0 || isspace(fmt[i + 1]) || 902 - ispunct(fmt[i + 1]) || fmt[i + 1] == 'K' || 894 + if (fmt[i + 1] == 'K' || 903 895 fmt[i + 1] == 'x' || fmt[i + 1] == 's' || 904 896 fmt[i + 1] == 'S') { 905 - /* just kernel pointers */ 906 897 if (tmp_buf) 907 898 cur_arg = raw_args[num_spec]; 908 899 i++;
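Note on the bpf_bprintf_prepare() hunk above: the bare-%p case (next byte is NUL, whitespace, or punctuation) is now decided before the %pks/%pus and %pK/%px/%ps/%pS forms, and without consuming the following byte; the snprintf selftest hunk later in this series rejects "%p%" and "%s%", which is consistent with that trailing byte now being re-parsed as its own (invalid) specifier. A toy userspace classifier under that reading (this mirrors only the ordering, not the verifier's real control flow):

    #include <ctype.h>
    #include <stdio.h>

    /* Toy classification of the byte following a "%p" conversion. */
    static const char *classify(char next)
    {
        if (next == '\0' || isspace((unsigned char)next) ||
            ispunct((unsigned char)next))
            return "bare %p (no-copy, next byte re-parsed)";
        if (next == 'K' || next == 'x' || next == 's' || next == 'S')
            return "%pK/%px/%ps/%pS";
        return "unknown extension";
    }

    int main(void)
    {
        printf("%%p%% -> %s\n", classify('%'));
        printf("%%pK -> %s\n", classify('K'));
        return 0;
    }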
+1 -1
kernel/bpf/sysfs_btf.c
··· 21 21 { 22 22 unsigned long pages = PAGE_ALIGN(attr->size) >> PAGE_SHIFT; 23 23 size_t vm_size = vma->vm_end - vma->vm_start; 24 - phys_addr_t addr = virt_to_phys(__start_BTF); 24 + phys_addr_t addr = __pa_symbol(__start_BTF); 25 25 unsigned long pfn = addr >> PAGE_SHIFT; 26 26 27 27 if (attr->private != __start_BTF || !PAGE_ALIGNED(addr))
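Note on the sysfs_btf.c hunk above: virt_to_phys() is only valid for linear-map addresses, while __start_BTF is a kernel-image symbol; on architectures such as arm64 the image lives outside the linear region, so __pa_symbol() is the correct translation before deriving a PFN. A kernel-style sketch of the distinction (compile-only against kernel headers, not a standalone program):

    #include <linux/mm.h>

    extern char __start_BTF[];

    /* Image symbols need __pa_symbol(); virt_to_phys() is for
     * linear-map (e.g. kmalloc) addresses only. */
    static unsigned long btf_start_pfn(void)
    {
        phys_addr_t pa = __pa_symbol(__start_BTF);

        return pa >> PAGE_SHIFT;
    }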
+1 -7
kernel/cgroup/legacy_freezer.c
··· 66 66 bool cgroup_freezing(struct task_struct *task) 67 67 { 68 68 bool ret; 69 - unsigned int state; 70 69 71 70 rcu_read_lock(); 72 - /* Check if the cgroup is still FREEZING, but not FROZEN. The extra 73 - * !FROZEN check is required, because the FREEZING bit is not cleared 74 - * when the state FROZEN is reached. 75 - */ 76 - state = task_freezer(task)->state; 77 - ret = (state & CGROUP_FREEZING) && !(state & CGROUP_FROZEN); 71 + ret = task_freezer(task)->state & CGROUP_FREEZING; 78 72 rcu_read_unlock(); 79 73 80 74 return ret;
+3 -12
kernel/freezer.c
··· 201 201 202 202 void __thaw_task(struct task_struct *p) 203 203 { 204 - unsigned long flags; 205 - 206 - spin_lock_irqsave(&freezer_lock, flags); 207 - if (WARN_ON_ONCE(freezing(p))) 208 - goto unlock; 209 - 210 - if (!frozen(p) || task_call_func(p, __restore_freezer_state, NULL)) 211 - goto unlock; 212 - 213 - wake_up_state(p, TASK_FROZEN); 214 - unlock: 215 - spin_unlock_irqrestore(&freezer_lock, flags); 204 + guard(spinlock_irqsave)(&freezer_lock); 205 + if (frozen(p) && !task_call_func(p, __restore_freezer_state, NULL)) 206 + wake_up_state(p, TASK_FROZEN); 216 207 } 217 208 218 209 /**
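Note on the freezer hunk above: the rewrite adopts the scope-based guard() helper from <linux/cleanup.h>, so the IRQ-saving spinlock is dropped automatically at every return and the unlock label disappears; the hunk also drops the WARN_ON_ONCE(freezing(p)) check and folds the wake-up condition into a single if. A kernel-style sketch of the guard pattern (hypothetical lock and function names):

    #include <linux/cleanup.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    static void example(bool cond)
    {
        guard(spinlock_irqsave)(&example_lock); /* unlocks at scope exit */

        if (cond)
            return;  /* early return: lock is still released */
        /* ... work under the lock ... */
    }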
+8 -4
kernel/sched/ext.c
··· 1272 1272 1273 1273 #define SCX_CALL_OP(sch, mask, op, rq, args...) \ 1274 1274 do { \ 1275 - update_locked_rq(rq); \ 1275 + if (rq) \ 1276 + update_locked_rq(rq); \ 1276 1277 if (mask) { \ 1277 1278 scx_kf_allow(mask); \ 1278 1279 (sch)->ops.op(args); \ ··· 1281 1280 } else { \ 1282 1281 (sch)->ops.op(args); \ 1283 1282 } \ 1284 - update_locked_rq(NULL); \ 1283 + if (rq) \ 1284 + update_locked_rq(NULL); \ 1285 1285 } while (0) 1286 1286 1287 1287 #define SCX_CALL_OP_RET(sch, mask, op, rq, args...) \ 1288 1288 ({ \ 1289 1289 __typeof__((sch)->ops.op(args)) __ret; \ 1290 1290 \ 1291 - update_locked_rq(rq); \ 1291 + if (rq) \ 1292 + update_locked_rq(rq); \ 1292 1293 if (mask) { \ 1293 1294 scx_kf_allow(mask); \ 1294 1295 __ret = (sch)->ops.op(args); \ ··· 1298 1295 } else { \ 1299 1296 __ret = (sch)->ops.op(args); \ 1300 1297 } \ 1301 - update_locked_rq(NULL); \ 1298 + if (rq) \ 1299 + update_locked_rq(NULL); \ 1302 1300 __ret; \ 1303 1301 }) 1304 1302
+1 -1
kernel/sched/ext_idle.c
··· 903 903 * selection optimizations and simply check whether the previously 904 904 * used CPU is idle and within the allowed cpumask. 905 905 */ 906 - if (p->nr_cpus_allowed == 1) { 906 + if (p->nr_cpus_allowed == 1 || is_migration_disabled(p)) { 907 907 if (cpumask_test_cpu(prev_cpu, allowed ?: p->cpus_ptr) && 908 908 scx_idle_test_and_clear_cpu(prev_cpu)) 909 909 cpu = prev_cpu;
+1 -1
kernel/sched/loadavg.c
··· 80 80 long nr_active, delta = 0; 81 81 82 82 nr_active = this_rq->nr_running - adjust; 83 - nr_active += (int)this_rq->nr_uninterruptible; 83 + nr_active += (long)this_rq->nr_uninterruptible; 84 84 85 85 if (nr_active != this_rq->calc_load_active) { 86 86 delta = nr_active - this_rq->calc_load_active;
+1 -1
kernel/sched/sched.h
··· 1149 1149 * one CPU and if it got migrated afterwards it may decrease 1150 1150 * it on another CPU. Always updated under the runqueue lock: 1151 1151 */ 1152 - unsigned int nr_uninterruptible; 1152 + unsigned long nr_uninterruptible; 1153 1153 1154 1154 union { 1155 1155 struct task_struct __rcu *donor; /* Scheduler context */
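Note on the loadavg.c and sched.h hunks above: nr_uninterruptible is widened to long end to end. As the retained comment says, a task can increment the counter on one runqueue and decrement it on another, so a single runqueue's value can legitimately be far outside int range even when the global sum is small, and the old (int) cast truncated it on 64-bit. A minimal userspace demonstration of the truncation (values are made up; the behavior shown is the usual LP64/gcc one):

    #include <stdio.h>

    int main(void)
    {
        /* A per-runqueue counter that went to -2^32 locally while other
         * runqueues hold the matching positive counts. */
        unsigned long nr_uninterruptible = -4294967296L;
        long nr_active = 100;

        nr_active += (int)nr_uninterruptible;  /* low 32 bits only: adds 0 */
        printf("int cast:  %ld\n", nr_active); /* 100, silently wrong */

        nr_active = 100;
        nr_active += (long)nr_uninterruptible; /* keeps the full value */
        printf("long cast: %ld\n", nr_active); /* -4294967196 */
        return 0;
    }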
+5
kernel/trace/trace_events.c
··· 3136 3136 if (ret < 0) 3137 3137 return ret; 3138 3138 3139 + down_write(&trace_event_sem); 3139 3140 list_add(&call->list, &ftrace_events); 3141 + up_write(&trace_event_sem); 3142 + 3140 3143 if (call->flags & TRACE_EVENT_FL_DYNAMIC) 3141 3144 atomic_set(&call->refcnt, 0); 3142 3145 else ··· 3752 3749 { 3753 3750 struct trace_event_call *call; 3754 3751 int ret; 3752 + 3753 + lockdep_assert_held(&trace_event_sem); 3755 3754 3756 3755 list_for_each_entry(call, &ftrace_events, list) { 3757 3756 ret = __trace_add_new_event(call, tr);
+1 -1
kernel/trace/trace_osnoise.c
··· 637 637 638 638 entry = ring_buffer_event_data(event); 639 639 640 - memcpy(&entry->caller, fstack->calls, size); 641 640 entry->size = fstack->nr_entries; 641 + memcpy(&entry->caller, fstack->calls, size); 642 642 643 643 trace_buffer_unlock_commit_nostack(buffer, event); 644 644 }
+9 -4
mm/secretmem.c
··· 261 261 262 262 static int secretmem_init_fs_context(struct fs_context *fc) 263 263 { 264 - return init_pseudo(fc, SECRETMEM_MAGIC) ? 0 : -ENOMEM; 264 + struct pseudo_fs_context *ctx; 265 + 266 + ctx = init_pseudo(fc, SECRETMEM_MAGIC); 267 + if (!ctx) 268 + return -ENOMEM; 269 + 270 + fc->s_iflags |= SB_I_NOEXEC; 271 + fc->s_iflags |= SB_I_NODEV; 272 + return 0; 265 273 } 266 274 267 275 static struct file_system_type secretmem_fs = { ··· 286 278 secretmem_mnt = kern_mount(&secretmem_fs); 287 279 if (IS_ERR(secretmem_mnt)) 288 280 return PTR_ERR(secretmem_mnt); 289 - 290 - /* prevent secretmem mappings from ever getting PROT_EXEC */ 291 - secretmem_mnt->mnt_flags |= MNT_NOEXEC; 292 281 293 282 return 0; 294 283 }
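Note on the secretmem hunk above: the noexec policy moves from the internal mount (MNT_NOEXEC on the kern_mount) to the superblock itself via SB_I_NOEXEC, with SB_I_NODEV added alongside, so the restriction is a property of the filesystem instance rather than of one particular mount of it. A kernel-style sketch of baking such flags in at fs-context setup (the function name and magic number are hypothetical):

    #include <linux/fs_context.h>
    #include <linux/pseudo_fs.h>

    #define EXAMPLE_MAGIC 0x2bad1dea  /* hypothetical magic */

    static int example_init_fs_context(struct fs_context *fc)
    {
        struct pseudo_fs_context *ctx = init_pseudo(fc, EXAMPLE_MAGIC);

        if (!ctx)
            return -ENOMEM;
        fc->s_iflags |= SB_I_NOEXEC | SB_I_NODEV; /* superblock-wide */
        return 0;
    }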
+21 -3
net/appletalk/aarp.c
··· 35 35 #include <linux/seq_file.h> 36 36 #include <linux/export.h> 37 37 #include <linux/etherdevice.h> 38 + #include <linux/refcount.h> 38 39 39 40 int sysctl_aarp_expiry_time = AARP_EXPIRY_TIME; 40 41 int sysctl_aarp_tick_time = AARP_TICK_TIME; ··· 45 44 /* Lists of aarp entries */ 46 45 /** 47 46 * struct aarp_entry - AARP entry 47 + * @refcnt: Reference count 48 48 * @last_sent: Last time we xmitted the aarp request 49 49 * @packet_queue: Queue of frames wait for resolution 50 50 * @status: Used for proxy AARP ··· 57 55 * @next: Next entry in chain 58 56 */ 59 57 struct aarp_entry { 58 + refcount_t refcnt; 60 59 /* These first two are only used for unresolved entries */ 61 60 unsigned long last_sent; 62 61 struct sk_buff_head packet_queue; ··· 82 79 /* Used to walk the list and purge/kick entries. */ 83 80 static struct timer_list aarp_timer; 84 81 82 + static inline void aarp_entry_get(struct aarp_entry *a) 83 + { 84 + refcount_inc(&a->refcnt); 85 + } 86 + 87 + static inline void aarp_entry_put(struct aarp_entry *a) 88 + { 89 + if (refcount_dec_and_test(&a->refcnt)) 90 + kfree(a); 91 + } 92 + 85 93 /* 86 94 * Delete an aarp queue 87 95 * ··· 101 87 static void __aarp_expire(struct aarp_entry *a) 102 88 { 103 89 skb_queue_purge(&a->packet_queue); 104 - kfree(a); 90 + aarp_entry_put(a); 105 91 } 106 92 107 93 /* ··· 394 380 static struct aarp_entry *aarp_alloc(void) 395 381 { 396 382 struct aarp_entry *a = kmalloc(sizeof(*a), GFP_ATOMIC); 383 + if (!a) 384 + return NULL; 397 385 398 - if (a) 399 - skb_queue_head_init(&a->packet_queue); 386 + refcount_set(&a->refcnt, 1); 387 + skb_queue_head_init(&a->packet_queue); 400 388 return a; 401 389 } 402 390 ··· 493 477 entry->dev = atif->dev; 494 478 495 479 write_lock_bh(&aarp_lock); 480 + aarp_entry_get(entry); 496 481 497 482 hash = sa->s_node % (AARP_HASH_SIZE - 1); 498 483 entry->next = proxies[hash]; ··· 519 502 retval = 1; 520 503 } 521 504 505 + aarp_entry_put(entry); 522 506 write_unlock_bh(&aarp_lock); 523 507 out: 524 508 return retval;
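Note on the aarp.c hunk above: aarp_entry gains a refcount_t so an entry published in the proxies hash can be pinned across the elided probe region (which, judging by the paired get/put around it, waits with the lock released) without the expiry timer freeing it underneath; __aarp_expire() now drops a reference instead of calling kfree() directly. A kernel-style sketch of the generic pattern (hypothetical struct and function names):

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct entry { refcount_t refcnt; /* ... */ };

    static void entry_put(struct entry *e)
    {
        if (refcount_dec_and_test(&e->refcnt))
            kfree(e);
    }

    /*
     * Pattern: under the table lock, refcount_inc() before unlocking and
     * sleeping; entry_put() when done. A concurrent expiry also goes
     * through entry_put(), so the memory survives until the last user.
     */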
+2
net/ipv4/ipcomp.c
··· 54 54 } 55 55 56 56 /* We always hold one tunnel user reference to indicate a tunnel */ 57 + static struct lock_class_key xfrm_state_lock_key; 57 58 static struct xfrm_state *ipcomp_tunnel_create(struct xfrm_state *x) 58 59 { 59 60 struct net *net = xs_net(x); ··· 63 62 t = xfrm_state_alloc(net); 64 63 if (!t) 65 64 goto out; 65 + lockdep_set_class(&t->lock, &xfrm_state_lock_key); 66 66 67 67 t->id.proto = IPPROTO_IPIP; 68 68 t->id.spi = x->props.saddr.a4;
+3
net/ipv4/xfrm4_input.c
··· 202 202 if (len <= sizeof(struct ip_esp_hdr) || udpdata32[0] == 0) 203 203 goto out; 204 204 205 + /* set the transport header to ESP */ 206 + skb_set_transport_header(skb, offset); 207 + 205 208 NAPI_GRO_CB(skb)->proto = IPPROTO_UDP; 206 209 207 210 pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+2
net/ipv6/ipcomp6.c
··· 71 71 return 0; 72 72 } 73 73 74 + static struct lock_class_key xfrm_state_lock_key; 74 75 static struct xfrm_state *ipcomp6_tunnel_create(struct xfrm_state *x) 75 76 { 76 77 struct net *net = xs_net(x); ··· 80 79 t = xfrm_state_alloc(net); 81 80 if (!t) 82 81 goto out; 82 + lockdep_set_class(&t->lock, &xfrm_state_lock_key); 83 83 84 84 t->id.proto = IPPROTO_IPV6; 85 85 t->id.spi = xfrm6_tunnel_alloc_spi(net, (xfrm_address_t *)&x->props.saddr);
+3
net/ipv6/xfrm6_input.c
··· 202 202 if (len <= sizeof(struct ip_esp_hdr) || udpdata32[0] == 0) 203 203 goto out; 204 204 205 + /* set the transport header to ESP */ 206 + skb_set_transport_header(skb, offset); 207 + 205 208 NAPI_GRO_CB(skb)->proto = IPPROTO_UDP; 206 209 207 210 pp = call_gro_receive(ops->callbacks.gro_receive, head, skb);
+1 -1
net/ipv6/xfrm6_tunnel.c
··· 334 334 struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net); 335 335 unsigned int i; 336 336 337 + xfrm_state_flush(net, IPSEC_PROTO_ANY, false); 337 338 xfrm_flush_gc(); 338 - xfrm_state_flush(net, 0, false, true); 339 339 340 340 for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++) 341 341 WARN_ON_ONCE(!hlist_empty(&xfrm6_tn->spi_byaddr[i]));
+1 -1
net/key/af_key.c
··· 1766 1766 if (proto == 0) 1767 1767 return -EINVAL; 1768 1768 1769 - err = xfrm_state_flush(net, proto, true, false); 1769 + err = xfrm_state_flush(net, proto, true); 1770 1770 err2 = unicast_flush_resp(sk, hdr); 1771 1771 if (err || err2) { 1772 1772 if (err == -ESRCH) /* empty table - go quietly */
+3 -4
net/sched/sch_qfq.c
··· 536 536 537 537 static void qfq_destroy_class(struct Qdisc *sch, struct qfq_class *cl) 538 538 { 539 - struct qfq_sched *q = qdisc_priv(sch); 540 - 541 - qfq_rm_from_agg(q, cl); 542 539 gen_kill_estimator(&cl->rate_est); 543 540 qdisc_put(cl->qdisc); 544 541 kfree(cl); ··· 556 559 557 560 qdisc_purge_queue(cl->qdisc); 558 561 qdisc_class_hash_remove(&q->clhash, &cl->common); 559 - qfq_destroy_class(sch, cl); 562 + qfq_rm_from_agg(q, cl); 560 563 561 564 sch_tree_unlock(sch); 562 565 566 + qfq_destroy_class(sch, cl); 563 567 return 0; 564 568 } 565 569 ··· 1501 1503 for (i = 0; i < q->clhash.hashsize; i++) { 1502 1504 hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i], 1503 1505 common.hnode) { 1506 + qfq_rm_from_agg(q, cl); 1504 1507 qfq_destroy_class(sch, cl); 1505 1508 } 1506 1509 }
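Note on the sch_qfq hunk above: the delete path now unlinks the class from its aggregate while still under sch_tree_lock() and calls qfq_destroy_class() (estimator teardown, qdisc_put(), kfree()) only after unlocking, with the destroy path updated to do the unlink explicitly; detach-under-lock, free-after-unlock is the usual shape for avoiding both use-after-free and blocking teardown in atomic context. A runnable userspace sketch of the split (a pthread mutex stands in for the qdisc tree lock):

    #include <pthread.h>
    #include <stdlib.h>

    struct cls { struct cls *next; };

    static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct cls *head;

    /* Unlink under the lock so no lookup can find the node afterwards... */
    static struct cls *unlink_first(void)
    {
        pthread_mutex_lock(&tree_lock);
        struct cls *c = head;
        if (c)
            head = c->next;
        pthread_mutex_unlock(&tree_lock);
        return c;
    }

    int main(void)
    {
        head = calloc(1, sizeof(*head));
        struct cls *c = unlink_first();
        /* ...then tear down outside the lock, where blocking is safe. */
        free(c);  /* stands in for qfq_destroy_class() */
        return 0;
    }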
-1
net/xfrm/xfrm_device.c
··· 305 305 return -EINVAL; 306 306 } 307 307 308 - xfrm_set_type_offload(x); 309 308 if (!x->type_offload) { 310 309 NL_SET_ERR_MSG(extack, "Type doesn't support offload"); 311 310 dev_put(dev);
+1 -6
net/xfrm/xfrm_interface_core.c
··· 875 875 return -EINVAL; 876 876 } 877 877 878 - if (p.collect_md) { 878 + if (p.collect_md || xi->p.collect_md) { 879 879 NL_SET_ERR_MSG(extack, "collect_md can't be changed"); 880 880 return -EINVAL; 881 881 } ··· 886 886 } else { 887 887 if (xi->dev != dev) 888 888 return -EEXIST; 889 - if (xi->p.collect_md) { 890 - NL_SET_ERR_MSG(extack, 891 - "device can't be changed to collect_md"); 892 - return -EINVAL; 893 - } 894 889 } 895 890 896 891 return xfrmi_update(xi, &p);
+1 -2
net/xfrm/xfrm_ipcomp.c
··· 97 97 struct ip_comp_hdr *ipch = ip_comp_hdr(skb); 98 98 const int plen = skb->len; 99 99 100 - skb_reset_transport_header(skb); 100 + skb->transport_header = skb->network_header + sizeof(*ipch); 101 101 102 102 return ipcomp_post_acomp(skb, err, 0) ?: 103 103 skb->len < (plen + sizeof(ip_comp_hdr)) ? -EINVAL : ··· 313 313 struct ipcomp_data *ipcd = x->data; 314 314 if (!ipcd) 315 315 return; 316 - xfrm_state_delete_tunnel(x); 317 316 ipcomp_free_data(ipcd); 318 317 kfree(ipcd); 319 318 }
+26 -43
net/xfrm/xfrm_state.c
··· 424 424 } 425 425 EXPORT_SYMBOL(xfrm_unregister_type_offload); 426 426 427 - void xfrm_set_type_offload(struct xfrm_state *x) 427 + void xfrm_set_type_offload(struct xfrm_state *x, bool try_load) 428 428 { 429 429 const struct xfrm_type_offload *type = NULL; 430 430 struct xfrm_state_afinfo *afinfo; 431 - bool try_load = true; 432 431 433 432 retry: 434 433 afinfo = xfrm_state_get_afinfo(x->props.family); ··· 592 593 } 593 594 EXPORT_SYMBOL(xfrm_state_free); 594 595 595 - static void ___xfrm_state_destroy(struct xfrm_state *x) 596 + static void xfrm_state_gc_destroy(struct xfrm_state *x) 596 597 { 597 598 if (x->mode_cbs && x->mode_cbs->destroy_state) 598 599 x->mode_cbs->destroy_state(x); ··· 606 607 kfree(x->coaddr); 607 608 kfree(x->replay_esn); 608 609 kfree(x->preplay_esn); 610 + xfrm_unset_type_offload(x); 609 611 if (x->type) { 610 612 x->type->destructor(x); 611 613 xfrm_put_type(x->type); ··· 631 631 synchronize_rcu(); 632 632 633 633 hlist_for_each_entry_safe(x, tmp, &gc_list, gclist) 634 - ___xfrm_state_destroy(x); 634 + xfrm_state_gc_destroy(x); 635 635 } 636 636 637 637 static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me) ··· 780 780 struct xfrm_dev_offload *xso = &x->xso; 781 781 struct net_device *dev = READ_ONCE(xso->dev); 782 782 783 - xfrm_unset_type_offload(x); 784 - 785 783 if (dev && dev->xfrmdev_ops) { 786 784 spin_lock_bh(&xfrm_state_dev_gc_lock); 787 785 if (!hlist_unhashed(&x->dev_gclist)) ··· 795 797 } 796 798 #endif 797 799 798 - void __xfrm_state_destroy(struct xfrm_state *x, bool sync) 800 + void __xfrm_state_destroy(struct xfrm_state *x) 799 801 { 800 802 WARN_ON(x->km.state != XFRM_STATE_DEAD); 801 803 802 - if (sync) { 803 - synchronize_rcu(); 804 - ___xfrm_state_destroy(x); 805 - } else { 806 - spin_lock_bh(&xfrm_state_gc_lock); 807 - hlist_add_head(&x->gclist, &xfrm_state_gc_list); 808 - spin_unlock_bh(&xfrm_state_gc_lock); 809 - schedule_work(&xfrm_state_gc_work); 810 - } 804 + spin_lock_bh(&xfrm_state_gc_lock); 805 + hlist_add_head(&x->gclist, &xfrm_state_gc_list); 806 + spin_unlock_bh(&xfrm_state_gc_lock); 807 + schedule_work(&xfrm_state_gc_work); 811 808 } 812 809 EXPORT_SYMBOL(__xfrm_state_destroy); 813 810 811 + static void xfrm_state_delete_tunnel(struct xfrm_state *x); 814 812 int __xfrm_state_delete(struct xfrm_state *x) 815 813 { 816 814 struct net *net = xs_net(x); ··· 833 839 spin_unlock(&net->xfrm.xfrm_state_lock); 834 840 835 841 xfrm_dev_state_delete(x); 842 + 843 + xfrm_state_delete_tunnel(x); 836 844 837 845 /* All xfrm_state objects are created by xfrm_state_alloc. 838 846 * The xfrm_state_alloc call gives a reference, and that ··· 917 921 } 918 922 #endif 919 923 920 - int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync) 924 + int xfrm_state_flush(struct net *net, u8 proto, bool task_valid) 921 925 { 922 926 int i, err = 0, cnt = 0; 923 927 ··· 939 943 err = xfrm_state_delete(x); 940 944 xfrm_audit_state_delete(x, err ? 0 : 1, 941 945 task_valid); 942 - if (sync) 943 - xfrm_state_put_sync(x); 944 - else 945 - xfrm_state_put(x); 946 + xfrm_state_put(x); 946 947 if (!err) 947 948 cnt++; 948 949 ··· 1300 1307 static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x, 1301 1308 const struct flowi *fl, unsigned short family, 1302 1309 struct xfrm_state **best, int *acq_in_progress, 1303 - int *error) 1310 + int *error, unsigned int pcpu_id) 1304 1311 { 1305 - /* We need the cpu id just as a lookup key, 1306 - * we don't require it to be stable. 
1307 - */ 1308 - unsigned int pcpu_id = get_cpu(); 1309 - put_cpu(); 1310 - 1311 1312 /* Resolution logic: 1312 1313 * 1. There is a valid state with matching selector. Done. 1313 1314 * 2. Valid state with inappropriate selector. Skip. ··· 1368 1381 /* We need the cpu id just as a lookup key, 1369 1382 * we don't require it to be stable. 1370 1383 */ 1371 - pcpu_id = get_cpu(); 1372 - put_cpu(); 1384 + pcpu_id = raw_smp_processor_id(); 1373 1385 1374 1386 to_put = NULL; 1375 1387 1376 1388 sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation); 1377 1389 1378 1390 rcu_read_lock(); 1391 + xfrm_hash_ptrs_get(net, &state_ptrs); 1392 + 1379 1393 hlist_for_each_entry_rcu(x, &pol->state_cache_list, state_cache) { 1380 1394 if (x->props.family == encap_family && 1381 1395 x->props.reqid == tmpl->reqid && ··· 1388 1400 tmpl->id.proto == x->id.proto && 1389 1401 (tmpl->id.spi == x->id.spi || !tmpl->id.spi)) 1390 1402 xfrm_state_look_at(pol, x, fl, encap_family, 1391 - &best, &acquire_in_progress, &error); 1403 + &best, &acquire_in_progress, &error, pcpu_id); 1392 1404 } 1393 1405 1394 1406 if (best) ··· 1405 1417 tmpl->id.proto == x->id.proto && 1406 1418 (tmpl->id.spi == x->id.spi || !tmpl->id.spi)) 1407 1419 xfrm_state_look_at(pol, x, fl, family, 1408 - &best, &acquire_in_progress, &error); 1420 + &best, &acquire_in_progress, &error, pcpu_id); 1409 1421 } 1410 1422 1411 1423 cached: ··· 1416 1428 best = NULL; 1417 1429 else if (acquire_in_progress) /* XXX: acquire_in_progress should not happen */ 1418 1430 WARN_ON(1); 1419 - 1420 - xfrm_hash_ptrs_get(net, &state_ptrs); 1421 1431 1422 1432 h = __xfrm_dst_hash(daddr, saddr, tmpl->reqid, encap_family, state_ptrs.hmask); 1423 1433 hlist_for_each_entry_rcu(x, state_ptrs.bydst + h, bydst) { ··· 1446 1460 tmpl->id.proto == x->id.proto && 1447 1461 (tmpl->id.spi == x->id.spi || !tmpl->id.spi)) 1448 1462 xfrm_state_look_at(pol, x, fl, family, 1449 - &best, &acquire_in_progress, &error); 1463 + &best, &acquire_in_progress, &error, pcpu_id); 1450 1464 } 1451 1465 if (best || acquire_in_progress) 1452 1466 goto found; ··· 1481 1495 tmpl->id.proto == x->id.proto && 1482 1496 (tmpl->id.spi == x->id.spi || !tmpl->id.spi)) 1483 1497 xfrm_state_look_at(pol, x, fl, family, 1484 - &best, &acquire_in_progress, &error); 1498 + &best, &acquire_in_progress, &error, pcpu_id); 1485 1499 } 1486 1500 1487 1501 found: ··· 3082 3096 } 3083 3097 EXPORT_SYMBOL(xfrm_flush_gc); 3084 3098 3085 - /* Temporarily located here until net/xfrm/xfrm_tunnel.c is created */ 3086 - void xfrm_state_delete_tunnel(struct xfrm_state *x) 3099 + static void xfrm_state_delete_tunnel(struct xfrm_state *x) 3087 3100 { 3088 3101 if (x->tunnel) { 3089 3102 struct xfrm_state *t = x->tunnel; 3090 3103 3091 - if (atomic_read(&t->tunnel_users) == 2) 3104 + if (atomic_dec_return(&t->tunnel_users) == 1) 3092 3105 xfrm_state_delete(t); 3093 - atomic_dec(&t->tunnel_users); 3094 - xfrm_state_put_sync(t); 3106 + xfrm_state_put(t); 3095 3107 x->tunnel = NULL; 3096 3108 } 3097 3109 } 3098 - EXPORT_SYMBOL(xfrm_state_delete_tunnel); 3099 3110 3100 3111 u32 xfrm_state_mtu(struct xfrm_state *x, int mtu) 3101 3112 { ··· 3297 3314 unsigned int sz; 3298 3315 3299 3316 flush_work(&net->xfrm.state_hash_work); 3317 + xfrm_state_flush(net, IPSEC_PROTO_ANY, false); 3300 3318 flush_work(&xfrm_state_gc_work); 3301 - xfrm_state_flush(net, 0, false, true); 3302 3319 3303 3320 WARN_ON(!list_empty(&net->xfrm.state_all)); 3304 3321
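Note on the xfrm_state.c hunk above: three related simplifications are visible here. __xfrm_state_destroy() loses its sync variant, so all teardown funnels through the GC worklist; xfrm_state_delete_tunnel() becomes static and is invoked from __xfrm_state_delete(), with tunnel_users now decremented atomically via atomic_dec_return(); and the CPU id used purely as a lookup key is read with raw_smp_processor_id() instead of a get_cpu()/put_cpu() pair, since, as the retained comment says, the value need not be stable. A kernel-style sketch of that last point (the function name is hypothetical):

    static unsigned int lookup_key_cpu(void)
    {
        /* Callers only hash on this value, so a stale id is harmless and
         * there is no reason to disable preemption (get_cpu()) to read it. */
        return raw_smp_processor_id();
    }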
+2 -1
net/xfrm/xfrm_user.c
··· 977 977 /* override default values from above */ 978 978 xfrm_update_ae_params(x, attrs, 0); 979 979 980 + xfrm_set_type_offload(x, attrs[XFRMA_OFFLOAD_DEV]); 980 981 /* configure the hardware if offload is requested */ 981 982 if (attrs[XFRMA_OFFLOAD_DEV]) { 982 983 err = xfrm_dev_state_add(net, x, ··· 2635 2634 struct xfrm_usersa_flush *p = nlmsg_data(nlh); 2636 2635 int err; 2637 2636 2638 - err = xfrm_state_flush(net, p->proto, true, false); 2637 + err = xfrm_state_flush(net, p->proto, true); 2639 2638 if (err) { 2640 2639 if (err == -ESRCH) /* empty table */ 2641 2640 return 0;
+1
rust/Makefile
··· 194 194 RUST_MODFILE=test.rs \ 195 195 OBJTREE=$(abspath $(objtree)) \ 196 196 $(RUSTDOC) --test $(rust_common_flags) \ 197 + -Zcrate-attr='feature(used_with_arg)' \ 197 198 @$(objtree)/include/generated/rustc_cfg \ 198 199 $(rustc_target_flags) $(rustdoc_test_target_flags) \ 199 200 $(rustdoc_test_quiet) \
+1 -1
rust/kernel/firmware.rs
··· 202 202 }; 203 203 204 204 #[link_section = ".modinfo"] 205 - #[used] 205 + #[used(compiler)] 206 206 static __MODULE_FIRMWARE: [u8; $($builder)*::create(__MODULE_FIRMWARE_PREFIX) 207 207 .build_length()] = $($builder)*::create(__MODULE_FIRMWARE_PREFIX).build(); 208 208 };
+4 -4
rust/kernel/init.rs
··· 231 231 ($(&$this:ident in)? $t:ident $(::<$($generics:ty),* $(,)?>)? { 232 232 $($fields:tt)* 233 233 }) => { 234 - ::pin_init::try_init!($(&$this in)? $t $(::<$($generics),* $(,)?>)? { 234 + ::pin_init::try_init!($(&$this in)? $t $(::<$($generics),*>)? { 235 235 $($fields)* 236 236 }? $crate::error::Error) 237 237 }; 238 238 ($(&$this:ident in)? $t:ident $(::<$($generics:ty),* $(,)?>)? { 239 239 $($fields:tt)* 240 240 }? $err:ty) => { 241 - ::pin_init::try_init!($(&$this in)? $t $(::<$($generics),* $(,)?>)? { 241 + ::pin_init::try_init!($(&$this in)? $t $(::<$($generics),*>)? { 242 242 $($fields)* 243 243 }? $err) 244 244 }; ··· 291 291 ($(&$this:ident in)? $t:ident $(::<$($generics:ty),* $(,)?>)? { 292 292 $($fields:tt)* 293 293 }) => { 294 - ::pin_init::try_pin_init!($(&$this in)? $t $(::<$($generics),* $(,)?>)? { 294 + ::pin_init::try_pin_init!($(&$this in)? $t $(::<$($generics),*>)? { 295 295 $($fields)* 296 296 }? $crate::error::Error) 297 297 }; 298 298 ($(&$this:ident in)? $t:ident $(::<$($generics:ty),* $(,)?>)? { 299 299 $($fields:tt)* 300 300 }? $err:ty) => { 301 - ::pin_init::try_pin_init!($(&$this in)? $t $(::<$($generics),* $(,)?>)? { 301 + ::pin_init::try_pin_init!($(&$this in)? $t $(::<$($generics),*>)? { 302 302 $($fields)* 303 303 }? $err) 304 304 };
+1 -1
rust/kernel/kunit.rs
··· 302 302 is_init: false, 303 303 }; 304 304 305 - #[used] 305 + #[used(compiler)] 306 306 #[allow(unused_unsafe)] 307 307 #[cfg_attr(not(target_os = "macos"), link_section = ".kunit_test_suites")] 308 308 static mut KUNIT_TEST_SUITE_ENTRY: *const ::kernel::bindings::kunit_suite =
+3
rust/kernel/lib.rs
··· 34 34 // Expected to become stable. 35 35 #![feature(arbitrary_self_types)] 36 36 // 37 + // To be determined. 38 + #![feature(used_with_arg)] 39 + // 37 40 // `feature(derive_coerce_pointee)` is expected to become stable. Before Rust 38 41 // 1.84.0, it did not exist, so enable the predecessor features. 39 42 #![cfg_attr(CONFIG_RUSTC_HAS_COERCE_POINTEE, feature(derive_coerce_pointee))]
+5 -5
rust/macros/module.rs
··· 57 57 {cfg} 58 58 #[doc(hidden)] 59 59 #[cfg_attr(not(target_os = \"macos\"), link_section = \".modinfo\")] 60 - #[used] 60 + #[used(compiler)] 61 61 pub static __{module}_{counter}: [u8; {length}] = *{string}; 62 62 ", 63 63 cfg = if builtin { ··· 249 249 // key or a new section. For the moment, keep it simple. 250 250 #[cfg(MODULE)] 251 251 #[doc(hidden)] 252 - #[used] 252 + #[used(compiler)] 253 253 static __IS_RUST_MODULE: () = (); 254 254 255 255 static mut __MOD: ::core::mem::MaybeUninit<{type_}> = ··· 273 273 274 274 #[cfg(MODULE)] 275 275 #[doc(hidden)] 276 - #[used] 276 + #[used(compiler)] 277 277 #[link_section = \".init.data\"] 278 278 static __UNIQUE_ID___addressable_init_module: unsafe extern \"C\" fn() -> i32 = init_module; 279 279 ··· 293 293 294 294 #[cfg(MODULE)] 295 295 #[doc(hidden)] 296 - #[used] 296 + #[used(compiler)] 297 297 #[link_section = \".exit.data\"] 298 298 static __UNIQUE_ID___addressable_cleanup_module: extern \"C\" fn() = cleanup_module; 299 299 ··· 303 303 #[cfg(not(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS))] 304 304 #[doc(hidden)] 305 305 #[link_section = \"{initcall_section}\"] 306 - #[used] 306 + #[used(compiler)] 307 307 pub static __{ident}_initcall: extern \"C\" fn() -> 308 308 ::kernel::ffi::c_int = __{ident}_init; 309 309
+2 -1
scripts/Makefile.build
··· 312 312 # - Stable since Rust 1.82.0: `feature(asm_const)`, `feature(raw_ref_op)`. 313 313 # - Stable since Rust 1.87.0: `feature(asm_goto)`. 314 314 # - Expected to become stable: `feature(arbitrary_self_types)`. 315 + # - To be determined: `feature(used_with_arg)`. 315 316 # 316 317 # Please see https://github.com/Rust-for-Linux/linux/issues/2 for details on 317 318 # the unstable features in use. 318 - rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,raw_ref_op 319 + rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,raw_ref_op,used_with_arg 319 320 320 321 # `--out-dir` is required to avoid temporaries being created by `rustc` in the 321 322 # current working directory, which may be not accessible in the out-of-tree
+24 -24
sound/core/compress_offload.c
··· 1269 1269 stream = &data->stream; 1270 1270 1271 1271 guard(mutex)(&stream->device->lock); 1272 - switch (_IOC_NR(cmd)) { 1273 - case _IOC_NR(SNDRV_COMPRESS_IOCTL_VERSION): 1272 + switch (cmd) { 1273 + case SNDRV_COMPRESS_IOCTL_VERSION: 1274 1274 return put_user(SNDRV_COMPRESS_VERSION, 1275 1275 (int __user *)arg) ? -EFAULT : 0; 1276 - case _IOC_NR(SNDRV_COMPRESS_GET_CAPS): 1276 + case SNDRV_COMPRESS_GET_CAPS: 1277 1277 return snd_compr_get_caps(stream, arg); 1278 1278 #ifndef COMPR_CODEC_CAPS_OVERFLOW 1279 - case _IOC_NR(SNDRV_COMPRESS_GET_CODEC_CAPS): 1279 + case SNDRV_COMPRESS_GET_CODEC_CAPS: 1280 1280 return snd_compr_get_codec_caps(stream, arg); 1281 1281 #endif 1282 - case _IOC_NR(SNDRV_COMPRESS_SET_PARAMS): 1282 + case SNDRV_COMPRESS_SET_PARAMS: 1283 1283 return snd_compr_set_params(stream, arg); 1284 - case _IOC_NR(SNDRV_COMPRESS_GET_PARAMS): 1284 + case SNDRV_COMPRESS_GET_PARAMS: 1285 1285 return snd_compr_get_params(stream, arg); 1286 - case _IOC_NR(SNDRV_COMPRESS_SET_METADATA): 1286 + case SNDRV_COMPRESS_SET_METADATA: 1287 1287 return snd_compr_set_metadata(stream, arg); 1288 - case _IOC_NR(SNDRV_COMPRESS_GET_METADATA): 1288 + case SNDRV_COMPRESS_GET_METADATA: 1289 1289 return snd_compr_get_metadata(stream, arg); 1290 1290 } 1291 1291 1292 1292 if (stream->direction == SND_COMPRESS_ACCEL) { 1293 1293 #if IS_ENABLED(CONFIG_SND_COMPRESS_ACCEL) 1294 - switch (_IOC_NR(cmd)) { 1295 - case _IOC_NR(SNDRV_COMPRESS_TASK_CREATE): 1294 + switch (cmd) { 1295 + case SNDRV_COMPRESS_TASK_CREATE: 1296 1296 return snd_compr_task_create(stream, arg); 1297 - case _IOC_NR(SNDRV_COMPRESS_TASK_FREE): 1297 + case SNDRV_COMPRESS_TASK_FREE: 1298 1298 return snd_compr_task_seq(stream, arg, snd_compr_task_free_one); 1299 - case _IOC_NR(SNDRV_COMPRESS_TASK_START): 1299 + case SNDRV_COMPRESS_TASK_START: 1300 1300 return snd_compr_task_start_ioctl(stream, arg); 1301 - case _IOC_NR(SNDRV_COMPRESS_TASK_STOP): 1301 + case SNDRV_COMPRESS_TASK_STOP: 1302 1302 return snd_compr_task_seq(stream, arg, snd_compr_task_stop_one); 1303 - case _IOC_NR(SNDRV_COMPRESS_TASK_STATUS): 1303 + case SNDRV_COMPRESS_TASK_STATUS: 1304 1304 return snd_compr_task_status_ioctl(stream, arg); 1305 1305 } 1306 1306 #endif 1307 1307 return -ENOTTY; 1308 1308 } 1309 1309 1310 - switch (_IOC_NR(cmd)) { 1311 - case _IOC_NR(SNDRV_COMPRESS_TSTAMP): 1310 + switch (cmd) { 1311 + case SNDRV_COMPRESS_TSTAMP: 1312 1312 return snd_compr_tstamp(stream, arg); 1313 - case _IOC_NR(SNDRV_COMPRESS_AVAIL): 1313 + case SNDRV_COMPRESS_AVAIL: 1314 1314 return snd_compr_ioctl_avail(stream, arg); 1315 - case _IOC_NR(SNDRV_COMPRESS_PAUSE): 1315 + case SNDRV_COMPRESS_PAUSE: 1316 1316 return snd_compr_pause(stream); 1317 - case _IOC_NR(SNDRV_COMPRESS_RESUME): 1317 + case SNDRV_COMPRESS_RESUME: 1318 1318 return snd_compr_resume(stream); 1319 - case _IOC_NR(SNDRV_COMPRESS_START): 1319 + case SNDRV_COMPRESS_START: 1320 1320 return snd_compr_start(stream); 1321 - case _IOC_NR(SNDRV_COMPRESS_STOP): 1321 + case SNDRV_COMPRESS_STOP: 1322 1322 return snd_compr_stop(stream); 1323 - case _IOC_NR(SNDRV_COMPRESS_DRAIN): 1323 + case SNDRV_COMPRESS_DRAIN: 1324 1324 return snd_compr_drain(stream); 1325 - case _IOC_NR(SNDRV_COMPRESS_PARTIAL_DRAIN): 1325 + case SNDRV_COMPRESS_PARTIAL_DRAIN: 1326 1326 return snd_compr_partial_drain(stream); 1327 - case _IOC_NR(SNDRV_COMPRESS_NEXT_TRACK): 1327 + case SNDRV_COMPRESS_NEXT_TRACK: 1328 1328 return snd_compr_next_track(stream); 1329 1329 } 1330 1330
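Note on the compress_offload hunk above: switching on the full ioctl command instead of _IOC_NR() keeps the direction and argument-size bits in the match, so two commands that merely share a number can no longer alias each other. A runnable sketch of why _IOC_NR() is lossy (the two commands are hypothetical, defined only for this demo):

    #include <linux/ioctl.h>
    #include <stdio.h>

    /* Same command number, different direction and size. */
    #define CMD_A _IOR('C', 0x10, int)
    #define CMD_B _IOW('C', 0x10, long)

    int main(void)
    {
        printf("_IOC_NR equal:  %d\n", _IOC_NR(CMD_A) == _IOC_NR(CMD_B)); /* 1 */
        printf("full cmd equal: %d\n", CMD_A == CMD_B);                   /* 0 */
        return 0;
    }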
+84 -30
sound/pci/hda/cs35l56_hda.c
··· 873 873 return 0; 874 874 } 875 875 876 + static int cs35l56_hda_fixup_yoga9(struct cs35l56_hda *cs35l56, int *bus_addr) 877 + { 878 + /* The cirrus,dev-index property has the wrong values */ 879 + switch (*bus_addr) { 880 + case 0x30: 881 + cs35l56->index = 1; 882 + return 0; 883 + case 0x31: 884 + cs35l56->index = 0; 885 + return 0; 886 + default: 887 + /* There is a pseudo-address for broadcast to both amps - ignore it */ 888 + dev_dbg(cs35l56->base.dev, "Ignoring I2C address %#x\n", *bus_addr); 889 + return 0; 890 + } 891 + } 892 + 893 + static const struct { 894 + const char *sub; 895 + int (*fixup_fn)(struct cs35l56_hda *cs35l56, int *bus_addr); 896 + } cs35l56_hda_fixups[] = { 897 + { 898 + .sub = "17AA390B", /* Lenovo Yoga Book 9i GenX */ 899 + .fixup_fn = cs35l56_hda_fixup_yoga9, 900 + }, 901 + }; 902 + 903 + static int cs35l56_hda_apply_platform_fixups(struct cs35l56_hda *cs35l56, const char *sub, 904 + int *bus_addr) 905 + { 906 + int i; 907 + 908 + if (IS_ERR(sub)) 909 + return 0; 910 + 911 + for (i = 0; i < ARRAY_SIZE(cs35l56_hda_fixups); i++) { 912 + if (strcasecmp(cs35l56_hda_fixups[i].sub, sub) == 0) { 913 + dev_dbg(cs35l56->base.dev, "Applying fixup for %s\n", 914 + cs35l56_hda_fixups[i].sub); 915 + return (cs35l56_hda_fixups[i].fixup_fn)(cs35l56, bus_addr); 916 + } 917 + } 918 + 919 + return 0; 920 + } 921 + 876 922 static int cs35l56_hda_read_acpi(struct cs35l56_hda *cs35l56, int hid, int id) 877 923 { 878 924 u32 values[HDA_MAX_COMPONENTS]; ··· 943 897 ACPI_COMPANION_SET(cs35l56->base.dev, adev); 944 898 } 945 899 946 - property = "cirrus,dev-index"; 947 - ret = device_property_count_u32(cs35l56->base.dev, property); 948 - if (ret <= 0) 949 - goto err; 950 - 951 - if (ret > ARRAY_SIZE(values)) { 952 - ret = -EINVAL; 953 - goto err; 954 - } 955 - nval = ret; 956 - 957 - ret = device_property_read_u32_array(cs35l56->base.dev, property, values, nval); 958 - if (ret) 959 - goto err; 960 - 900 + /* Initialize things that could be overwritten by a fixup */ 961 901 cs35l56->index = -1; 962 - for (i = 0; i < nval; i++) { 963 - if (values[i] == id) { 964 - cs35l56->index = i; 965 - break; 966 - } 967 - } 968 - /* 969 - * It's not an error for the ID to be missing: for I2C there can be 970 - * an alias address that is not a real device. So reject silently. 971 - */ 972 - if (cs35l56->index == -1) { 973 - dev_dbg(cs35l56->base.dev, "No index found in %s\n", property); 974 - ret = -ENODEV; 975 - goto err; 976 - } 977 902 978 903 sub = acpi_get_subsystem_id(ACPI_HANDLE(cs35l56->base.dev)); 904 + ret = cs35l56_hda_apply_platform_fixups(cs35l56, sub, &id); 905 + if (ret) 906 + return ret; 907 + 908 + if (cs35l56->index == -1) { 909 + property = "cirrus,dev-index"; 910 + ret = device_property_count_u32(cs35l56->base.dev, property); 911 + if (ret <= 0) 912 + goto err; 913 + 914 + if (ret > ARRAY_SIZE(values)) { 915 + ret = -EINVAL; 916 + goto err; 917 + } 918 + nval = ret; 919 + 920 + ret = device_property_read_u32_array(cs35l56->base.dev, property, values, nval); 921 + if (ret) 922 + goto err; 923 + 924 + for (i = 0; i < nval; i++) { 925 + if (values[i] == id) { 926 + cs35l56->index = i; 927 + break; 928 + } 929 + } 930 + 931 + /* 932 + * It's not an error for the ID to be missing: for I2C there can be 933 + * an alias address that is not a real device. So reject silently. 
934 + */ 935 + if (cs35l56->index == -1) { 936 + dev_dbg(cs35l56->base.dev, "No index found in %s\n", property); 937 + ret = -ENODEV; 938 + goto err; 939 + } 940 + } 979 941 980 942 if (IS_ERR(sub)) { 981 943 dev_info(cs35l56->base.dev,
+5
sound/pci/hda/patch_realtek.c
··· 7497 7497 }; 7498 7498 struct alc_spec *spec = codec->spec; 7499 7499 7500 + /* Support Audio mute LED and Mic mute LED on keyboard */ 7501 + hda_fixup_ideapad_acpi(codec, fix, action); 7502 + 7500 7503 switch (action) { 7501 7504 case HDA_FIXUP_ACT_PRE_PROBE: 7502 7505 snd_hda_apply_pincfgs(codec, pincfgs); ··· 10817 10814 SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 10818 10815 SND_PCI_QUIRK(0x103c, 0x8bb3, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2), 10819 10816 SND_PCI_QUIRK(0x103c, 0x8bb4, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2), 10817 + SND_PCI_QUIRK(0x103c, 0x8bbe, "HP Victus 16-r0xxx (MB 8BBE)", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 10820 10818 SND_PCI_QUIRK(0x103c, 0x8bc8, "HP Victus 15-fa1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 10821 10819 SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT), 10822 10820 SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2), ··· 11010 11006 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW), 11011 11007 SND_PCI_QUIRK(0x1043, 0x1a63, "ASUS UX3405MA", ALC245_FIXUP_CS35L41_SPI_2), 11012 11008 SND_PCI_QUIRK(0x1043, 0x1a83, "ASUS UM5302LA", ALC294_FIXUP_CS35L41_I2C_2), 11009 + SND_PCI_QUIRK(0x1043, 0x1a8e, "ASUS G712LWS", ALC294_FIXUP_LENOVO_MIC_LOCATION), 11013 11010 SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2), 11014 11011 SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B), 11015 11012 SND_PCI_QUIRK(0x1043, 0x1b13, "ASUS U41SV/GA403U", ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC),
+21
sound/soc/amd/yc/acp6x-mach.c
··· 420 420 .driver_data = &acp6x_card, 421 421 .matches = { 422 422 DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 423 + DMI_MATCH(DMI_PRODUCT_NAME, "M6501RM"), 424 + } 425 + }, 426 + { 427 + .driver_data = &acp6x_card, 428 + .matches = { 429 + DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."), 423 430 DMI_MATCH(DMI_PRODUCT_NAME, "E1404FA"), 424 431 } 425 432 }, ··· 546 539 .driver_data = &acp6x_card, 547 540 .matches = { 548 541 DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 542 + DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb1xxx"), 543 + } 544 + }, 545 + { 546 + .driver_data = &acp6x_card, 547 + .matches = { 548 + DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 549 549 DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb2xxx"), 550 550 } 551 551 }, ··· 596 582 .matches = { 597 583 DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 598 584 DMI_MATCH(DMI_BOARD_NAME, "8A7F"), 585 + } 586 + }, 587 + { 588 + .driver_data = &acp6x_card, 589 + .matches = { 590 + DMI_MATCH(DMI_BOARD_VENDOR, "HP"), 591 + DMI_MATCH(DMI_BOARD_NAME, "8A81"), 599 592 } 600 593 }, 601 594 {
+5 -2
sound/soc/codecs/rt5660.c
··· 1315 1315 regmap_update_bits(rt5660->regmap, RT5660_GPIO_CTRL1, 1316 1316 RT5660_GP1_PIN_MASK, RT5660_GP1_PIN_DMIC1_SCL); 1317 1317 1318 - if (rt5660->pdata.dmic1_data_pin == RT5660_DMIC1_DATA_GPIO2) 1318 + if (rt5660->pdata.dmic1_data_pin == RT5660_DMIC1_DATA_GPIO2) { 1319 1319 regmap_update_bits(rt5660->regmap, RT5660_DMIC_CTRL1, 1320 1320 RT5660_SEL_DMIC_DATA_MASK, 1321 1321 RT5660_SEL_DMIC_DATA_GPIO2); 1322 - else if (rt5660->pdata.dmic1_data_pin == RT5660_DMIC1_DATA_IN1P) 1322 + regmap_update_bits(rt5660->regmap, RT5660_GPIO_CTRL1, 1323 + RT5660_GP2_PIN_MASK, RT5660_GP2_PIN_DMIC1_SDA); 1324 + } else if (rt5660->pdata.dmic1_data_pin == RT5660_DMIC1_DATA_IN1P) { 1323 1325 regmap_update_bits(rt5660->regmap, RT5660_DMIC_CTRL1, 1324 1326 RT5660_SEL_DMIC_DATA_MASK, 1325 1327 RT5660_SEL_DMIC_DATA_IN1P); 1328 + } 1326 1329 } 1327 1330 1328 1331 return devm_snd_soc_register_component(&i2c->dev,
+3 -1
sound/soc/intel/avs/pcm.c
··· 1570 1570 { 1571 1571 struct snd_soc_acpi_mach *mach; 1572 1572 struct snd_soc_dai *dai, *save; 1573 + struct avs_mach_pdata *pdata; 1573 1574 struct hda_codec *codec; 1574 1575 char name[32]; 1575 1576 1576 1577 mach = dev_get_platdata(component->card->dev); 1577 - codec = mach->pdata; 1578 + pdata = mach->pdata; 1579 + codec = pdata->codec; 1578 1580 snprintf(name, sizeof(name), "%s-cpu", dev_name(&codec->core.dev)); 1579 1581 1580 1582 for_each_component_dais_safe(component, dai, save) {
+1 -1
sound/soc/intel/boards/Kconfig
··· 11 11 kernel: saying N will just cause the configurator to skip all 12 12 the questions about Intel ASoC machine drivers. 13 13 14 - if SND_SOC_INTEL_MACH 14 + if SND_SOC_INTEL_MACH && (SND_SOC_SOF_INTEL_COMMON || !SND_SOC_SOF_INTEL_COMMON) 15 15 16 16 config SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES 17 17 bool "Use more user friendly long card names"
+29
sound/soc/intel/common/soc-acpi-intel-arl-match.c
··· 238 238 } 239 239 }; 240 240 241 + static const struct snd_soc_acpi_adr_device rt1316_3_single_adr[] = { 242 + { 243 + .adr = 0x000330025D131601ull, 244 + .num_endpoints = 1, 245 + .endpoints = &single_endpoint, 246 + .name_prefix = "rt1316-1" 247 + } 248 + }; 249 + 241 250 static const struct snd_soc_acpi_adr_device rt1320_2_single_adr[] = { 242 251 { 243 252 .adr = 0x000230025D132001ull, ··· 377 368 {} 378 369 }; 379 370 371 + static const struct snd_soc_acpi_link_adr arl_rt711_l0_rt1316_l3[] = { 372 + { 373 + .mask = BIT(0), 374 + .num_adr = ARRAY_SIZE(rt711_sdca_0_adr), 375 + .adr_d = rt711_sdca_0_adr, 376 + }, 377 + { 378 + .mask = BIT(3), 379 + .num_adr = ARRAY_SIZE(rt1316_3_single_adr), 380 + .adr_d = rt1316_3_single_adr, 381 + }, 382 + {} 383 + }; 384 + 380 385 static const struct snd_soc_acpi_link_adr arl_rt722_l0_rt1320_l2[] = { 381 386 { 382 387 .mask = BIT(0), ··· 503 480 .drv_name = "sof_sdw", 504 481 .sof_tplg_filename = "sof-arl-cs42l43-l2.tplg", 505 482 .get_function_tplg_files = sof_sdw_get_tplg_files, 483 + }, 484 + { 485 + .link_mask = BIT(0) | BIT(3), 486 + .links = arl_rt711_l0_rt1316_l3, 487 + .drv_name = "sof_sdw", 488 + .sof_tplg_filename = "sof-arl-rt711-l0-rt1316-l3.tplg", 506 489 }, 507 490 { 508 491 .link_mask = 0x1, /* link0 required */
+95 -33
tools/hv/hv_fcopy_uio_daemon.c
··· 35 35 #define WIN8_SRV_MINOR 1 36 36 #define WIN8_SRV_VERSION (WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR) 37 37 38 - #define FCOPY_UIO "/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio" 38 + #define FCOPY_DEVICE_PATH(subdir) \ 39 + "/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/" #subdir 40 + #define FCOPY_UIO_PATH FCOPY_DEVICE_PATH(uio) 41 + #define FCOPY_CHANNELS_PATH FCOPY_DEVICE_PATH(channels) 39 42 40 43 #define FCOPY_VER_COUNT 1 41 44 static const int fcopy_versions[] = { ··· 50 47 UTIL_FW_VERSION 51 48 }; 52 49 53 - #define HV_RING_SIZE 0x4000 /* 16KB ring buffer size */ 50 + static uint32_t get_ring_buffer_size(void) 51 + { 52 + char ring_path[PATH_MAX]; 53 + DIR *dir; 54 + struct dirent *entry; 55 + struct stat st; 56 + uint32_t ring_size = 0; 57 + int retry_count = 0; 54 58 55 - static unsigned char desc[HV_RING_SIZE]; 59 + /* Find the channel directory */ 60 + dir = opendir(FCOPY_CHANNELS_PATH); 61 + if (!dir) { 62 + usleep(100 * 1000); /* Avoid race with kernel, wait 100ms and retry once */ 63 + dir = opendir(FCOPY_CHANNELS_PATH); 64 + if (!dir) { 65 + syslog(LOG_ERR, "Failed to open channels directory: %s", strerror(errno)); 66 + return 0; 67 + } 68 + } 69 + 70 + retry_once: 71 + while ((entry = readdir(dir)) != NULL) { 72 + if (entry->d_type == DT_DIR && strcmp(entry->d_name, ".") != 0 && 73 + strcmp(entry->d_name, "..") != 0) { 74 + snprintf(ring_path, sizeof(ring_path), "%s/%s/ring", 75 + FCOPY_CHANNELS_PATH, entry->d_name); 76 + 77 + if (stat(ring_path, &st) == 0) { 78 + /* 79 + * stat returns size of Tx, Rx rings combined, 80 + * so take half of it for individual ring size. 81 + */ 82 + ring_size = (uint32_t)st.st_size / 2; 83 + syslog(LOG_INFO, "Ring buffer size from %s: %u bytes", 84 + ring_path, ring_size); 85 + break; 86 + } 87 + } 88 + } 89 + 90 + if (!ring_size && retry_count == 0) { 91 + retry_count = 1; 92 + rewinddir(dir); 93 + usleep(100 * 1000); /* Wait 100ms and retry once */ 94 + goto retry_once; 95 + } 96 + 97 + closedir(dir); 98 + 99 + if (!ring_size) 100 + syslog(LOG_ERR, "Could not determine ring size"); 101 + 102 + return ring_size; 103 + } 104 + 105 + static unsigned char *desc; 56 106 57 107 static int target_fd; 58 108 static char target_fname[PATH_MAX]; ··· 118 62 119 63 filesize = 0; 120 64 p = path_name; 121 - snprintf(target_fname, sizeof(target_fname), "%s/%s", 122 - path_name, file_name); 65 + if (snprintf(target_fname, sizeof(target_fname), "%s/%s", 66 + path_name, file_name) >= sizeof(target_fname)) { 67 + syslog(LOG_ERR, "target file name is too long: %s/%s", path_name, file_name); 68 + goto done; 69 + } 123 70 124 71 /* 125 72 * Check to see if the path is already in place; if not, ··· 329 270 { 330 271 size_t len = 0; 331 272 332 - while (len < dest_size) { 273 + while (len < dest_size && *src) { 333 274 if (src[len] < 0x80) 334 275 dest[len++] = (char)(*src++); 335 276 else ··· 341 282 342 283 static int hv_fcopy_start(struct hv_start_fcopy *smsg_in) 343 284 { 285 + /* 286 + * file_name and path_name should have same length with appropriate 287 + * member of hv_start_fcopy. 
288 + */ 289 + char file_name[W_MAX_PATH], path_name[W_MAX_PATH]; 290 + 344 291 setlocale(LC_ALL, "en_US.utf8"); 345 - size_t file_size, path_size; 346 - char *file_name, *path_name; 347 - char *in_file_name = (char *)smsg_in->file_name; 348 - char *in_path_name = (char *)smsg_in->path_name; 349 - 350 - file_size = wcstombs(NULL, (const wchar_t *restrict)in_file_name, 0) + 1; 351 - path_size = wcstombs(NULL, (const wchar_t *restrict)in_path_name, 0) + 1; 352 - 353 - file_name = (char *)malloc(file_size * sizeof(char)); 354 - path_name = (char *)malloc(path_size * sizeof(char)); 355 - 356 - if (!file_name || !path_name) { 357 - free(file_name); 358 - free(path_name); 359 - syslog(LOG_ERR, "Can't allocate memory for file name and/or path name"); 360 - return HV_E_FAIL; 361 - } 362 - 363 - wcstoutf8(file_name, (__u16 *)in_file_name, file_size); 364 - wcstoutf8(path_name, (__u16 *)in_path_name, path_size); 292 + wcstoutf8(file_name, smsg_in->file_name, W_MAX_PATH - 1); 293 + wcstoutf8(path_name, smsg_in->path_name, W_MAX_PATH - 1); 365 294 366 295 return hv_fcopy_create_file(file_name, path_name, smsg_in->copy_flags); 367 296 } ··· 453 406 int daemonize = 1, long_index = 0, opt, ret = -EINVAL; 454 407 struct vmbus_br txbr, rxbr; 455 408 void *ring; 456 - uint32_t len = HV_RING_SIZE; 409 + uint32_t ring_size, len; 457 410 char uio_name[NAME_MAX] = {0}; 458 411 char uio_dev_path[PATH_MAX] = {0}; 459 412 ··· 484 437 openlog("HV_UIO_FCOPY", 0, LOG_USER); 485 438 syslog(LOG_INFO, "starting; pid is:%d", getpid()); 486 439 487 - fcopy_get_first_folder(FCOPY_UIO, uio_name); 440 + ring_size = get_ring_buffer_size(); 441 + if (!ring_size) { 442 + ret = -ENODEV; 443 + goto exit; 444 + } 445 + 446 + desc = malloc(ring_size * sizeof(unsigned char)); 447 + if (!desc) { 448 + syslog(LOG_ERR, "malloc failed for desc buffer"); 449 + ret = -ENOMEM; 450 + goto exit; 451 + } 452 + 453 + fcopy_get_first_folder(FCOPY_UIO_PATH, uio_name); 488 454 snprintf(uio_dev_path, sizeof(uio_dev_path), "/dev/%s", uio_name); 489 455 fcopy_fd = open(uio_dev_path, O_RDWR); 490 456 ··· 505 445 syslog(LOG_ERR, "open %s failed; error: %d %s", 506 446 uio_dev_path, errno, strerror(errno)); 507 447 ret = fcopy_fd; 508 - goto exit; 448 + goto free_desc; 509 449 } 510 450 511 - ring = vmbus_uio_map(&fcopy_fd, HV_RING_SIZE); 451 + ring = vmbus_uio_map(&fcopy_fd, ring_size); 512 452 if (!ring) { 513 453 ret = errno; 514 454 syslog(LOG_ERR, "mmap ringbuffer failed; error: %d %s", ret, strerror(ret)); 515 455 goto close; 516 456 } 517 - vmbus_br_setup(&txbr, ring, HV_RING_SIZE); 518 - vmbus_br_setup(&rxbr, (char *)ring + HV_RING_SIZE, HV_RING_SIZE); 457 + vmbus_br_setup(&txbr, ring, ring_size); 458 + vmbus_br_setup(&rxbr, (char *)ring + ring_size, ring_size); 519 459 520 460 rxbr.vbr->imask = 0; 521 461 ··· 532 472 goto close; 533 473 } 534 474 535 - len = HV_RING_SIZE; 475 + len = ring_size; 536 476 ret = rte_vmbus_chan_recv_raw(&rxbr, desc, &len); 537 477 if (unlikely(ret <= 0)) { 538 478 /* This indicates a failure to communicate (or worse) */ ··· 552 492 } 553 493 close: 554 494 close(fcopy_fd); 495 + free_desc: 496 + free(desc); 555 497 exit: 556 498 return ret; 557 499 }
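Note on the hv_fcopy_uio_daemon hunk above: the hard-coded 16 KiB HV_RING_SIZE gives way to a size probed from the channel's sysfs ring file, where stat() reports the combined Tx+Rx mapping and each direction gets half; the descriptor buffer is then heap-allocated to match. A minimal userspace sketch of the probe (the path is hypothetical; the real code walks the channels directory and retries to avoid racing with the kernel):

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        const char *ring = "/sys/bus/vmbus/devices/DEV/channels/21/ring";

        if (stat(ring, &st) != 0)
            return 1;
        /* The file covers both rings; each direction gets half. */
        uint32_t ring_size = (uint32_t)st.st_size / 2;
        printf("per-direction ring: %u bytes\n", ring_size);
        return 0;
    }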
+13 -7
tools/lib/bpf/libbpf.c
··· 735 735 736 736 struct usdt_manager *usdt_man; 737 737 738 - struct bpf_map *arena_map; 738 + int arena_map_idx; 739 739 void *arena_data; 740 740 size_t arena_data_sz; 741 741 ··· 1517 1517 obj->efile.obj_buf_sz = obj_buf_sz; 1518 1518 obj->efile.btf_maps_shndx = -1; 1519 1519 obj->kconfig_map_idx = -1; 1520 + obj->arena_map_idx = -1; 1520 1521 1521 1522 obj->kern_version = get_kernel_version(); 1522 1523 obj->state = OBJ_OPEN; ··· 2965 2964 const long page_sz = sysconf(_SC_PAGE_SIZE); 2966 2965 size_t mmap_sz; 2967 2966 2968 - mmap_sz = bpf_map_mmap_sz(obj->arena_map); 2967 + mmap_sz = bpf_map_mmap_sz(map); 2969 2968 if (roundup(data_sz, page_sz) > mmap_sz) { 2970 2969 pr_warn("elf: sec '%s': declared ARENA map size (%zu) is too small to hold global __arena variables of size %zu\n", 2971 2970 sec_name, mmap_sz, data_sz); ··· 3039 3038 if (map->def.type != BPF_MAP_TYPE_ARENA) 3040 3039 continue; 3041 3040 3042 - if (obj->arena_map) { 3041 + if (obj->arena_map_idx >= 0) { 3043 3042 pr_warn("map '%s': only single ARENA map is supported (map '%s' is also ARENA)\n", 3044 - map->name, obj->arena_map->name); 3043 + map->name, obj->maps[obj->arena_map_idx].name); 3045 3044 return -EINVAL; 3046 3045 } 3047 - obj->arena_map = map; 3046 + obj->arena_map_idx = i; 3048 3047 3049 3048 if (obj->efile.arena_data) { 3050 3049 err = init_arena_map_data(obj, map, ARENA_SEC, obj->efile.arena_data_shndx, ··· 3054 3053 return err; 3055 3054 } 3056 3055 } 3057 - if (obj->efile.arena_data && !obj->arena_map) { 3056 + if (obj->efile.arena_data && obj->arena_map_idx < 0) { 3058 3057 pr_warn("elf: sec '%s': to use global __arena variables the ARENA map should be explicitly declared in SEC(\".maps\")\n", 3059 3058 ARENA_SEC); 3060 3059 return -ENOENT; ··· 4584 4583 if (shdr_idx == obj->efile.arena_data_shndx) { 4585 4584 reloc_desc->type = RELO_DATA; 4586 4585 reloc_desc->insn_idx = insn_idx; 4587 - reloc_desc->map_idx = obj->arena_map - obj->maps; 4586 + reloc_desc->map_idx = obj->arena_map_idx; 4588 4587 reloc_desc->sym_off = sym->st_value; 4588 + 4589 + map = &obj->maps[obj->arena_map_idx]; 4590 + pr_debug("prog '%s': found arena map %d (%s, sec %d, off %zu) for insn %u\n", 4591 + prog->name, obj->arena_map_idx, map->name, map->sec_idx, 4592 + map->sec_offset, insn_idx); 4589 4593 return 0; 4590 4594 } 4591 4595
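Note on the libbpf hunk above: caching arena_map_idx rather than a struct bpf_map pointer matters because obj->maps is a growable array; libbpf can reallocate it while parsing maps, which would leave a cached element pointer dangling, whereas an index remains valid across reallocation. A minimal userspace demonstration (purely illustrative; inspecting the stale pointer value is for show only):

    #include <stdio.h>
    #include <stdlib.h>

    struct map { char name[16]; };

    int main(void)
    {
        size_t cap = 1;
        struct map *maps = malloc(cap * sizeof(*maps));
        struct map *ptr = &maps[0];  /* cached element pointer */
        size_t idx = 0;              /* cached index */

        maps = realloc(maps, (cap = 64) * sizeof(*maps)); /* may move */

        /* ptr may now dangle; the index still names the same element. */
        printf("via index: %p, stale ptr: %p\n",
               (void *)&maps[idx], (void *)ptr);
        free(maps);
        return 0;
    }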
+1
tools/objtool/check.c
··· 224 224 str_ends_with(func->name, "_4core9panicking14panic_explicit") || 225 225 str_ends_with(func->name, "_4core9panicking14panic_nounwind") || 226 226 str_ends_with(func->name, "_4core9panicking18panic_bounds_check") || 227 + str_ends_with(func->name, "_4core9panicking18panic_nounwind_fmt") || 227 228 str_ends_with(func->name, "_4core9panicking19assert_failed_inner") || 228 229 str_ends_with(func->name, "_4core9panicking30panic_null_pointer_dereference") || 229 230 str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||
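
Mangled Rust symbols carry crate and hash prefixes, so the noreturn panic handlers above are matched by their stable suffix. A generic illustration of that suffix test; ends_with() and the sample symbol here are illustrative, not objtool's actual helper:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Suffix match: compare the tail of s against suffix. */
    static bool ends_with(const char *s, const char *suffix)
    {
    	size_t sl = strlen(s), xl = strlen(suffix);

    	return sl >= xl && !strcmp(s + sl - xl, suffix);
    }

    int main(void)
    {
    	/* hypothetical mangled symbol, for illustration */
    	const char *sym = "_RNvNtCs1234_4core9panicking18panic_nounwind_fmt";

    	printf("%d\n", ends_with(sym, "_4core9panicking18panic_nounwind_fmt"));
    	return 0;
    }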
+67
tools/testing/selftests/bpf/prog_tests/recursive_attach.c
··· 149 149 fentry_recursive_target__destroy(target_skel);
 150 150 fentry_recursive__destroy(tracing_skel);
 151 151 }
 152 +
 153 + static void *fentry_target_test_run(void *arg)
 154 + {
 155 + for (;;) {
 156 + int prog_fd = __atomic_load_n((int *)arg, __ATOMIC_SEQ_CST);
 157 + LIBBPF_OPTS(bpf_test_run_opts, topts);
 158 + int err;
 159 +
 160 + if (prog_fd == -1)
 161 + break;
 162 + err = bpf_prog_test_run_opts(prog_fd, &topts);
 163 + if (!ASSERT_OK(err, "fentry_target test_run"))
 164 + break;
 165 + }
 166 +
 167 + return NULL;
 168 + }
 169 +
 170 + void test_fentry_attach_stress(void)
 171 + {
 172 + struct fentry_recursive_target *target_skel = NULL;
 173 + struct fentry_recursive *tracing_skel = NULL;
 174 + struct bpf_program *prog;
 175 + int err, i, tgt_prog_fd;
 176 + pthread_t thread;
 177 +
 178 + target_skel = fentry_recursive_target__open_and_load();
 179 + if (!ASSERT_OK_PTR(target_skel,
 180 + "fentry_recursive_target__open_and_load"))
 181 + goto close_prog;
 182 + tgt_prog_fd = bpf_program__fd(target_skel->progs.fentry_target);
 183 + err = pthread_create(&thread, NULL,
 184 + fentry_target_test_run, &tgt_prog_fd);
 185 + if (!ASSERT_OK(err, "pthread_create"))
 186 + goto close_prog;
 187 +
 188 + for (i = 0; i < 1000; i++) {
 189 + tracing_skel = fentry_recursive__open();
 190 + if (!ASSERT_OK_PTR(tracing_skel, "fentry_recursive__open"))
 191 + goto stop_thread;
 192 +
 193 + prog = tracing_skel->progs.recursive_attach;
 194 + err = bpf_program__set_attach_target(prog, tgt_prog_fd,
 195 + "fentry_target");
 196 + if (!ASSERT_OK(err, "bpf_program__set_attach_target"))
 197 + goto stop_thread;
 198 +
 199 + err = fentry_recursive__load(tracing_skel);
 200 + if (!ASSERT_OK(err, "fentry_recursive__load"))
 201 + goto stop_thread;
 202 +
 203 + err = fentry_recursive__attach(tracing_skel);
 204 + if (!ASSERT_OK(err, "fentry_recursive__attach"))
 205 + goto stop_thread;
 206 +
 207 + fentry_recursive__destroy(tracing_skel);
 208 + tracing_skel = NULL;
 209 + }
 210 +
 211 + stop_thread:
 212 + __atomic_store_n(&tgt_prog_fd, -1, __ATOMIC_SEQ_CST);
 213 + err = pthread_join(thread, NULL);
 214 + ASSERT_OK(err, "pthread_join");
 215 + close_prog:
 216 + fentry_recursive__destroy(tracing_skel);
 217 + fentry_recursive_target__destroy(target_skel);
 218 + }
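
The stress test stops its worker with an atomic sentinel rather than thread cancellation: the worker re-reads the shared fd on every iteration and exits once the main thread stores -1. A minimal sketch of that handshake, assuming GCC/clang __atomic builtins; worker() and target_fd are illustrative names:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Worker loops until the main thread stores the sentinel -1. */
    static void *worker(void *arg)
    {
    	for (;;) {
    		int fd = __atomic_load_n((int *)arg, __ATOMIC_SEQ_CST);

    		if (fd == -1)
    			break;
    		/* ... one unit of work against fd would go here ... */
    	}
    	return NULL;
    }

    int main(void)
    {
    	pthread_t thread;
    	int target_fd = 42;	/* illustrative descriptor value */

    	if (pthread_create(&thread, NULL, worker, &target_fd))
    		return 1;
    	usleep(1000);		/* let the worker spin briefly */
    	__atomic_store_n(&target_fd, -1, __ATOMIC_SEQ_CST);
    	return pthread_join(thread, NULL);
    }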
+2
tools/testing/selftests/bpf/prog_tests/snprintf.c
··· 116 116 ASSERT_ERR(load_single_snprintf("%llc"), "invalid specifier 7"); 117 117 ASSERT_ERR(load_single_snprintf("\x80"), "non ascii character"); 118 118 ASSERT_ERR(load_single_snprintf("\x1"), "non printable character"); 119 + ASSERT_ERR(load_single_snprintf("%p%"), "invalid specifier 8"); 120 + ASSERT_ERR(load_single_snprintf("%s%"), "invalid specifier 9"); 119 121 } 120 122 121 123 void test_snprintf(void)
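
The two new cases reject format strings such as "%p%" and "%s%", where a '%' opens a specifier but is never completed. A small illustrative check of that rule (not the kernel's actual validator): an odd-length run of trailing '%' characters means one is dangling, while "%%"-escaped pairs are fine.

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if fmt ends with a dangling (unescaped) '%'. */
    static int fmt_has_trailing_percent(const char *fmt)
    {
    	size_t len = strlen(fmt);
    	size_t run = 0;

    	while (run < len && fmt[len - 1 - run] == '%')
    		run++;
    	return run & 1;	/* odd run -> one '%' left unmatched */
    }

    int main(void)
    {
    	printf("%d %d %d\n",
    	       fmt_has_trailing_percent("%p%"),    /* 1: dangling '%' */
    	       fmt_has_trailing_percent("%s"),     /* 0 */
    	       fmt_has_trailing_percent("100%%")); /* 0: escaped pair */
    	return 0;
    }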
+18 -5
tools/testing/selftests/drivers/net/lib/py/load.py
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 + import re 3 4 import time 4 5 5 6 from lib.py import ksft_pr, cmd, ip, rand_port, wait_port_listen ··· 11 10 12 11 self.env = env 13 12 14 - if port is None: 15 - port = rand_port() 16 - self._iperf_server = cmd(f"iperf3 -s -1 -p {port}", background=True) 17 - wait_port_listen(port) 13 + self.port = rand_port() if port is None else port 14 + self._iperf_server = cmd(f"iperf3 -s -1 -p {self.port}", background=True) 15 + wait_port_listen(self.port) 18 16 time.sleep(0.1) 19 - self._iperf_client = cmd(f"iperf3 -c {env.addr} -P 16 -p {port} -t 86400", 17 + self._iperf_client = cmd(f"iperf3 -c {env.addr} -P 16 -p {self.port} -t 86400", 20 18 background=True, host=env.remote) 21 19 22 20 # Wait for traffic to ramp up ··· 56 56 ksft_pr(">> Server:") 57 57 ksft_pr(self._iperf_server.stdout) 58 58 ksft_pr(self._iperf_server.stderr) 59 + self._wait_client_stopped() 60 + 61 + def _wait_client_stopped(self, sleep=0.005, timeout=5): 62 + end = time.monotonic() + timeout 63 + 64 + live_port_pattern = re.compile(fr":{self.port:04X} 0[^6] ") 65 + 66 + while time.monotonic() < end: 67 + data = cmd("cat /proc/net/tcp*", host=self.env.remote).stdout 68 + if not live_port_pattern.search(data): 69 + return 70 + time.sleep(sleep) 71 + raise Exception(f"Waiting for client to stop timed out after {timeout}s")
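
The regex above works because /proc/net/tcp prints each socket as HEXIP:HEXPORT, with the port as four uppercase hex digits followed by the connection state; that is what f":{self.port:04X} " matches. A tiny illustration of the encoding for 127.0.0.1:8080 (sample values only):

    #include <stdio.h>

    int main(void)
    {
    	unsigned int port = 8080;

    	/* little-endian 127.0.0.1 is 0100007F; 8080 is 0x1F90 */
    	printf("0100007F:%04X\n", port);
    	return 0;
    }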
+7 -1
tools/testing/selftests/futex/include/futex2test.h
··· 4 4 * 5 5 * Copyright 2021 Collabora Ltd. 6 6 */ 7 + #include <linux/time_types.h> 7 8 #include <stdint.h> 8 9 9 10 #define u64_to_ptr(x) ((void *)(uintptr_t)(x)) ··· 66 65 static inline int futex_waitv(volatile struct futex_waitv *waiters, unsigned long nr_waiters, 67 66 unsigned long flags, struct timespec *timo, clockid_t clockid) 68 67 { 69 - return syscall(__NR_futex_waitv, waiters, nr_waiters, flags, timo, clockid); 68 + struct __kernel_timespec ts = { 69 + .tv_sec = timo->tv_sec, 70 + .tv_nsec = timo->tv_nsec, 71 + }; 72 + 73 + return syscall(__NR_futex_waitv, waiters, nr_waiters, flags, &ts, clockid); 70 74 } 71 75 72 76 /*
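
The conversion above matters on 32-bit userlands, where struct timespec may carry a 32-bit tv_sec while futex_waitv() expects the always-64-bit layout of struct __kernel_timespec from <linux/time_types.h>. A sketch of the widening copy, with the kernel layout mirrored locally for illustration (kernel_timespec64 is a hypothetical stand-in):

    #include <stdio.h>
    #include <time.h>

    /* Illustrative mirror of struct __kernel_timespec's 64-bit fields. */
    struct kernel_timespec64 {
    	long long tv_sec;
    	long long tv_nsec;
    };

    int main(void)
    {
    	struct timespec ts = { .tv_sec = 1, .tv_nsec = 500000000 };
    	struct kernel_timespec64 kts = {
    		.tv_sec = ts.tv_sec,	/* widened to 64 bits */
    		.tv_nsec = ts.tv_nsec,
    	};

    	printf("libc timespec: %zu bytes, kernel timespec: %zu bytes\n",
    	       sizeof(ts), sizeof(kts));
    	return 0;
    }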
+2 -1
tools/testing/selftests/net/mptcp/Makefile
··· 4 4 5 5 CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES) 6 6 7 - TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \ 7 + TEST_PROGS := mptcp_connect.sh mptcp_connect_mmap.sh mptcp_connect_sendfile.sh \ 8 + mptcp_connect_checksum.sh pm_netlink.sh mptcp_join.sh diag.sh \ 8 9 simult_flows.sh mptcp_sockopt.sh userspace_pm.sh 9 10 10 11 TEST_GEN_FILES = mptcp_connect pm_nl_ctl mptcp_sockopt mptcp_inq mptcp_diag
+5
tools/testing/selftests/net/mptcp/mptcp_connect_checksum.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \ 5 + "$(dirname "${0}")/mptcp_connect.sh" -C "${@}"
+5
tools/testing/selftests/net/mptcp/mptcp_connect_mmap.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \ 5 + "$(dirname "${0}")/mptcp_connect.sh" -m mmap "${@}"
+5
tools/testing/selftests/net/mptcp/mptcp_connect_sendfile.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + MPTCP_LIB_KSFT_TEST="$(basename "${0}" .sh)" \ 5 + "$(dirname "${0}")/mptcp_connect.sh" -m sendfile "${@}"
+22 -23
tools/testing/selftests/net/netfilter/conntrack_clash.sh
··· 93 93 run_one_clash_test() 94 94 { 95 95 local ns="$1" 96 - local daddr="$2" 97 - local dport="$3" 96 + local ctns="$2" 97 + local daddr="$3" 98 + local dport="$4" 98 99 local entries 99 100 local cre 100 101 101 102 if ! ip netns exec "$ns" ./udpclash $daddr $dport;then 102 - echo "FAIL: did not receive expected number of replies for $daddr:$dport" 103 - ret=1 104 - return 1 103 + echo "INFO: did not receive expected number of replies for $daddr:$dport" 104 + ip netns exec "$ctns" conntrack -S 105 + # don't fail: check if clash resolution triggered after all. 105 106 fi 106 107 107 - entries=$(conntrack -S | wc -l) 108 - cre=$(conntrack -S | grep -v "clash_resolve=0" | wc -l) 108 + entries=$(ip netns exec "$ctns" conntrack -S | wc -l) 109 + cre=$(ip netns exec "$ctns" conntrack -S | grep "clash_resolve=0" | wc -l) 109 110 110 - if [ "$cre" -ne "$entries" ] ;then 111 + if [ "$cre" -ne "$entries" ];then 111 112 clash_resolution_active=1 112 113 return 0 113 114 fi 114 115 115 - # 1 cpu -> parallel insertion impossible 116 - if [ "$entries" -eq 1 ]; then 117 - return 0 118 - fi 119 - 120 - # not a failure: clash resolution logic did not trigger, but all replies 121 - # were received. With right timing, xmit completed sequentially and 116 + # not a failure: clash resolution logic did not trigger. 117 + # With right timing, xmit completed sequentially and 122 118 # no parallel insertion occurs. 123 119 return $ksft_skip 124 120 } ··· 122 126 run_clash_test() 123 127 { 124 128 local ns="$1" 125 - local daddr="$2" 126 - local dport="$3" 129 + local ctns="$2" 130 + local daddr="$3" 131 + local dport="$4" 132 + local softerr=0 127 133 128 134 for i in $(seq 1 10);do 129 - run_one_clash_test "$ns" "$daddr" "$dport" 135 + run_one_clash_test "$ns" "$ctns" "$daddr" "$dport" 130 136 local rv=$? 131 137 if [ $rv -eq 0 ];then 132 138 echo "PASS: clash resolution test for $daddr:$dport on attempt $i" 133 139 return 0 134 - elif [ $rv -eq 1 ];then 135 - echo "FAIL: clash resolution test for $daddr:$dport on attempt $i" 136 - return 1 140 + elif [ $rv -eq $ksft_skip ]; then 141 + softerr=1 137 142 fi 138 143 done 144 + 145 + [ $softerr -eq 1 ] && echo "SKIP: clash resolution for $daddr:$dport did not trigger" 139 146 } 140 147 141 148 ip link add veth0 netns "$nsclient1" type veth peer name veth0 netns "$nsrouter" ··· 160 161 161 162 # exercise clash resolution with nat: 162 163 # nsrouter is supposed to dnat to 10.0.2.1:900{0,1,2,3}. 163 - run_clash_test "$nsclient1" 10.0.1.99 "$dport" 164 + run_clash_test "$nsclient1" "$nsrouter" 10.0.1.99 "$dport" 164 165 165 166 # exercise clash resolution without nat. 166 167 load_simple_ruleset "$nsclient2" 167 - run_clash_test "$nsclient2" 127.0.0.1 9001 168 + run_clash_test "$nsclient2" "$nsclient2" 127.0.0.1 9001 168 169 169 170 if [ $clash_resolution_active -eq 0 ];then 170 171 [ "$ret" -eq 0 ] && ret=$ksft_skip
+8
tools/testing/selftests/sched_ext/exit.c
··· 22 22 struct bpf_link *link; 23 23 char buf[16]; 24 24 25 + /* 26 + * On single-CPU systems, ops.select_cpu() is never 27 + * invoked, so skip this test to avoid getting stuck 28 + * indefinitely. 29 + */ 30 + if (tc == EXIT_SELECT_CPU && libbpf_num_possible_cpus() == 1) 31 + continue; 32 + 25 33 skel = exit__open(); 26 34 SCX_ENUM_INIT(skel); 27 35 skel->rodata->exit_point = tc;
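
The guard above uses libbpf's CPU-count query to skip a scenario that cannot trigger with one CPU. A minimal sketch of the same guard in a standalone program (link with -lbpf); libbpf_num_possible_cpus() is a real libbpf API that reads /sys/devices/system/cpu/possible and returns a count or a negative error:

    #include <stdio.h>
    #include <bpf/libbpf.h>

    int main(void)
    {
    	int ncpu = libbpf_num_possible_cpus();

    	if (ncpu < 0) {
    		fprintf(stderr, "failed to query possible CPUs: %d\n", ncpu);
    		return 1;
    	}
    	if (ncpu == 1) {
    		printf("single CPU: skipping select_cpu scenario\n");
    		return 0;
    	}
    	printf("%d possible CPUs: running full test matrix\n", ncpu);
    	return 0;
    }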