Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.19-rc8).

No adjacent changes.

Conflicts:

drivers/net/ethernet/spacemit/k1_emac.c
2c84959167d64 ("net: spacemit: Check for netif_carrier_ok() in emac_stats_update()")
f66086798f91f ("net: spacemit: Remove broken flow control support")
https://lore.kernel.org/aXjAqZA3iEWD_DGM@sirena.org.uk

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2584 -1350
+1 -1
Documentation/admin-guide/laptops/alienware-wmi.rst
··· 105 105 106 106 Manual fan control on the other hand, is not exposed directly by the AWCC 107 107 interface. Instead it let's us control a fan `boost` value. This `boost` value 108 - has the following aproximate behavior over the fan pwm: 108 + has the following approximate behavior over the fan pwm: 109 109 110 110 :: 111 111
+2
Documentation/admin-guide/sysctl/vm.rst
··· 231 231 inode is old enough to be eligible for writeback by the kernel flusher threads. 232 232 And, it is also used as the interval to wakeup dirtytime_writeback thread. 233 233 234 + Setting this to zero disables periodic dirtytime writeback. 235 + 234 236 235 237 dirty_writeback_centisecs 236 238 =========================
+3 -1
Documentation/arch/riscv/uabi.rst
··· 7 7 ------------------------------------ 8 8 9 9 The canonical order of ISA extension names in the ISA string is defined in 10 - chapter 27 of the unprivileged specification. 10 + Chapter 27 of the RISC-V Instruction Set Manual Volume I Unprivileged ISA 11 + (Document Version 20191213). 12 + 11 13 The specification uses vague wording, such as should, when it comes to ordering, 12 14 so for our purposes the following rules apply: 13 15
+2 -2
Documentation/arch/x86/amd_hsmp.rst
··· 14 14 15 15 More details on the interface can be found in chapter 16 16 "7 Host System Management Port (HSMP)" of the family/model PPR 17 - Eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip 17 + Eg: https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50 18 18 19 19 20 20 HSMP interface is supported on EPYC line of server CPUs and MI300A (APU). ··· 185 185 186 186 More details on the interface and message definitions can be found in chapter 187 187 "7 Host System Management Port (HSMP)" of the respective family/model PPR 188 - eg: https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip 188 + eg: https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50 189 189 190 190 User space C-APIs are made available by linking against the esmi library, 191 191 which is provided by the E-SMS project https://www.amd.com/en/developer/e-sms.html.
+1 -1
Documentation/devicetree/bindings/display/mediatek/mediatek,dp.yaml
··· 11 11 - Jitao shi <jitao.shi@mediatek.com> 12 12 13 13 description: | 14 - MediaTek DP and eDP are different hardwares and there are some features 14 + MediaTek DP and eDP are different hardware and there are some features 15 15 which are not supported for eDP. For example, audio is not supported for 16 16 eDP. Therefore, we need to use two different compatibles to describe them. 17 17 In addition, We just need to enable the power domain of DP, so the clock
+31
Documentation/devicetree/bindings/interconnect/qcom,sa8775p-rpmh.yaml
··· 74 74 - description: aggre UFS CARD AXI clock 75 75 - description: RPMH CC IPA clock 76 76 77 + - if: 78 + properties: 79 + compatible: 80 + contains: 81 + enum: 82 + - qcom,sa8775p-config-noc 83 + - qcom,sa8775p-dc-noc 84 + - qcom,sa8775p-gem-noc 85 + - qcom,sa8775p-gpdsp-anoc 86 + - qcom,sa8775p-lpass-ag-noc 87 + - qcom,sa8775p-mmss-noc 88 + - qcom,sa8775p-nspa-noc 89 + - qcom,sa8775p-nspb-noc 90 + - qcom,sa8775p-pcie-anoc 91 + - qcom,sa8775p-system-noc 92 + then: 93 + properties: 94 + clocks: false 95 + 96 + - if: 97 + properties: 98 + compatible: 99 + contains: 100 + enum: 101 + - qcom,sa8775p-clk-virt 102 + - qcom,sa8775p-mc-virt 103 + then: 104 + properties: 105 + reg: false 106 + clocks: false 107 + 77 108 unevaluatedProperties: false 78 109 79 110 examples:
+1 -1
Documentation/devicetree/bindings/pinctrl/marvell,armada3710-xb-pinctrl.yaml
··· 88 88 pcie1_clkreq, pcie1_wakeup, pmic0, pmic1, ptp, ptp_clk, 89 89 ptp_trig, pwm0, pwm1, pwm2, pwm3, rgmii, sdio0, sdio_sb, smi, 90 90 spi_cs1, spi_cs2, spi_cs3, spi_quad, uart1, uart2, 91 - usb2_drvvbus1, usb32_drvvbus ] 91 + usb2_drvvbus1, usb32_drvvbus0 ] 92 92 93 93 function: 94 94 enum: [ drvbus, emmc, gpio, i2c, jtag, led, mii, mii_err, onewire,
+1 -1
Documentation/misc-devices/amd-sbi.rst
··· 15 15 More details on the interface can be found in chapter 16 16 "5 Advanced Platform Management Link (APML)" of the family/model PPR [1]_. 17 17 18 - .. [1] https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/programmer-references/55898_B1_pub_0_50.zip 18 + .. [1] https://docs.amd.com/v/u/en-US/55898_B1_pub_0_50 19 19 20 20 21 21 SBRMI device
+41
Documentation/process/conclave.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Linux kernel project continuity 4 + =============================== 5 + 6 + The Linux kernel development project is widely distributed, with over 7 + 100 maintainers each working to keep changes moving through their own 8 + repositories. The final step, though, is a centralized one where changes 9 + are pulled into the mainline repository. That is normally done by Linus 10 + Torvalds but, as was demonstrated by the 4.19 release in 2018, there are 11 + others who can do that work when the need arises. 12 + 13 + Should the maintainers of that repository become unwilling or unable to 14 + do that work going forward (including facilitating a transition), the 15 + project will need to find one or more replacements without delay. The 16 + process by which that will be done is listed below. $ORGANIZER is the 17 + last Maintainer Summit organizer or the current Linux Foundation (LF) 18 + Technical Advisory Board (TAB) Chair as a backup. 19 + 20 + - Within 72 hours, $ORGANIZER will open a discussion with the invitees 21 + of the most recently concluded Maintainers Summit. A meeting of those 22 + invitees and the TAB, either online or in-person, will be set as soon 23 + as possible in a way that maximizes the number of people who can 24 + participate. 25 + 26 + - If there has been no Maintainers Summit in the last 15 months, the set of 27 + invitees for this meeting will be determined by the TAB. 28 + 29 + - The invitees to this meeting may bring in other maintainers as needed. 30 + 31 + - This meeting, chaired by $ORGANIZER, will consider options for the 32 + ongoing management of the top-level kernel repository consistent with 33 + the expectation that it maximizes the long term health of the project 34 + and its community. 35 + 36 + - Within two weeks, a representative of this group will communicate to the 37 + broader community, using the ksummit@lists.linux.dev mailing list, what 38 + the next steps will be. 39 + 40 + The Linux Foundation, as guided by the TAB, will take the steps 41 + necessary to support and implement this plan.
+1
Documentation/process/index.rst
··· 68 68 stable-kernel-rules 69 69 management-style 70 70 researcher-guidelines 71 + conclave 71 72 72 73 Dealing with bugs 73 74 -----------------
+1 -1
MAINTAINERS
··· 9260 9260 EMULEX 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER (be2net) 9261 9261 M: Ajit Khaparde <ajit.khaparde@broadcom.com> 9262 9262 M: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> 9263 - M: Somnath Kotur <somnath.kotur@broadcom.com> 9264 9263 L: netdev@vger.kernel.org 9265 9264 S: Maintained 9266 9265 W: http://www.emulex.com ··· 13166 13167 F: Documentation/driver-api/interconnect.rst 13167 13168 F: drivers/interconnect/ 13168 13169 F: include/dt-bindings/interconnect/ 13170 + F: include/linux/interconnect-clk.h 13169 13171 F: include/linux/interconnect-provider.h 13170 13172 F: include/linux/interconnect.h 13171 13173
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 19 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc7 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
-1
arch/arm64/configs/defconfig
··· 670 670 CONFIG_PINCTRL_SC7280_LPASS_LPI=m 671 671 CONFIG_PINCTRL_SM6115_LPASS_LPI=m 672 672 CONFIG_PINCTRL_SM8250_LPASS_LPI=m 673 - CONFIG_PINCTRL_SM8350_LPASS_LPI=m 674 673 CONFIG_PINCTRL_SM8450_LPASS_LPI=m 675 674 CONFIG_PINCTRL_SC8280XP_LPASS_LPI=m 676 675 CONFIG_PINCTRL_SM8550_LPASS_LPI=m
+2
arch/arm64/include/asm/kvm_asm.h
··· 300 300 __le32 *origptr, __le32 *updptr, int nr_inst); 301 301 void kvm_compute_final_ctr_el0(struct alt_instr *alt, 302 302 __le32 *origptr, __le32 *updptr, int nr_inst); 303 + void kvm_pan_patch_el2_entry(struct alt_instr *alt, 304 + __le32 *origptr, __le32 *updptr, int nr_inst); 303 305 void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr, u64 elr_virt, 304 306 u64 elr_phys, u64 par, uintptr_t vcpu, u64 far, u64 hpfar); 305 307
-16
arch/arm64/include/asm/kvm_emulate.h
··· 119 119 return (unsigned long *)&vcpu->arch.hcr_el2; 120 120 } 121 121 122 - static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu) 123 - { 124 - vcpu->arch.hcr_el2 &= ~HCR_TWE; 125 - if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) || 126 - vcpu->kvm->arch.vgic.nassgireq) 127 - vcpu->arch.hcr_el2 &= ~HCR_TWI; 128 - else 129 - vcpu->arch.hcr_el2 |= HCR_TWI; 130 - } 131 - 132 - static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu) 133 - { 134 - vcpu->arch.hcr_el2 |= HCR_TWE; 135 - vcpu->arch.hcr_el2 |= HCR_TWI; 136 - } 137 - 138 122 static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu) 139 123 { 140 124 return vcpu->arch.vsesr_el2;
+12 -4
arch/arm64/include/asm/kvm_pgtable.h
··· 87 87 88 88 #define KVM_PTE_LEAF_ATTR_HI_SW GENMASK(58, 55) 89 89 90 - #define KVM_PTE_LEAF_ATTR_HI_S1_XN BIT(54) 90 + #define __KVM_PTE_LEAF_ATTR_HI_S1_XN BIT(54) 91 + #define __KVM_PTE_LEAF_ATTR_HI_S1_UXN BIT(54) 92 + #define __KVM_PTE_LEAF_ATTR_HI_S1_PXN BIT(53) 93 + 94 + #define KVM_PTE_LEAF_ATTR_HI_S1_XN \ 95 + ({ cpus_have_final_cap(ARM64_KVM_HVHE) ? \ 96 + (__KVM_PTE_LEAF_ATTR_HI_S1_UXN | \ 97 + __KVM_PTE_LEAF_ATTR_HI_S1_PXN) : \ 98 + __KVM_PTE_LEAF_ATTR_HI_S1_XN; }) 91 99 92 100 #define KVM_PTE_LEAF_ATTR_HI_S2_XN GENMASK(54, 53) 93 101 ··· 301 293 * children. 302 294 * @KVM_PGTABLE_WALK_SHARED: Indicates the page-tables may be shared 303 295 * with other software walkers. 304 - * @KVM_PGTABLE_WALK_HANDLE_FAULT: Indicates the page-table walk was 305 - * invoked from a fault handler. 296 + * @KVM_PGTABLE_WALK_IGNORE_EAGAIN: Don't terminate the walk early if 297 + * the walker returns -EAGAIN. 306 298 * @KVM_PGTABLE_WALK_SKIP_BBM_TLBI: Visit and update table entries 307 299 * without Break-before-make's 308 300 * TLB invalidation. ··· 315 307 KVM_PGTABLE_WALK_TABLE_PRE = BIT(1), 316 308 KVM_PGTABLE_WALK_TABLE_POST = BIT(2), 317 309 KVM_PGTABLE_WALK_SHARED = BIT(3), 318 - KVM_PGTABLE_WALK_HANDLE_FAULT = BIT(4), 310 + KVM_PGTABLE_WALK_IGNORE_EAGAIN = BIT(4), 319 311 KVM_PGTABLE_WALK_SKIP_BBM_TLBI = BIT(5), 320 312 KVM_PGTABLE_WALK_SKIP_CMO = BIT(6), 321 313 };
+2 -1
arch/arm64/include/asm/sysreg.h
··· 91 91 */ 92 92 #define pstate_field(op1, op2) ((op1) << Op1_shift | (op2) << Op2_shift) 93 93 #define PSTATE_Imm_shift CRm_shift 94 - #define SET_PSTATE(x, r) __emit_inst(0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift)) 94 + #define ENCODE_PSTATE(x, r) (0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift)) 95 + #define SET_PSTATE(x, r) __emit_inst(ENCODE_PSTATE(x, r)) 95 96 96 97 #define PSTATE_PAN pstate_field(0, 4) 97 98 #define PSTATE_UAO pstate_field(0, 3)
+1 -1
arch/arm64/kernel/hibernate.c
··· 402 402 * Memory allocated by get_safe_page() will be dealt with by the hibernate code, 403 403 * we don't need to free it here. 404 404 */ 405 - int swsusp_arch_resume(void) 405 + int __nocfi swsusp_arch_resume(void) 406 406 { 407 407 int rc; 408 408 void *zero_page;
+1
arch/arm64/kernel/image-vars.h
··· 86 86 KVM_NVHE_ALIAS(kvm_update_va_mask); 87 87 KVM_NVHE_ALIAS(kvm_get_kimage_voffset); 88 88 KVM_NVHE_ALIAS(kvm_compute_final_ctr_el0); 89 + KVM_NVHE_ALIAS(kvm_pan_patch_el2_entry); 89 90 KVM_NVHE_ALIAS(spectre_bhb_patch_loop_iter); 90 91 KVM_NVHE_ALIAS(spectre_bhb_patch_loop_mitigation_enable); 91 92 KVM_NVHE_ALIAS(spectre_bhb_patch_wa3);
+12 -14
arch/arm64/kernel/ptrace.c
··· 968 968 vq = sve_vq_from_vl(task_get_vl(target, type)); 969 969 970 970 /* Enter/exit streaming mode */ 971 - if (system_supports_sme()) { 972 - switch (type) { 973 - case ARM64_VEC_SVE: 974 - target->thread.svcr &= ~SVCR_SM_MASK; 975 - set_tsk_thread_flag(target, TIF_SVE); 976 - break; 977 - case ARM64_VEC_SME: 978 - target->thread.svcr |= SVCR_SM_MASK; 979 - set_tsk_thread_flag(target, TIF_SME); 980 - break; 981 - default: 982 - WARN_ON_ONCE(1); 983 - return -EINVAL; 984 - } 971 + switch (type) { 972 + case ARM64_VEC_SVE: 973 + target->thread.svcr &= ~SVCR_SM_MASK; 974 + set_tsk_thread_flag(target, TIF_SVE); 975 + break; 976 + case ARM64_VEC_SME: 977 + target->thread.svcr |= SVCR_SM_MASK; 978 + set_tsk_thread_flag(target, TIF_SME); 979 + break; 980 + default: 981 + WARN_ON_ONCE(1); 982 + return -EINVAL; 985 983 } 986 984 987 985 /* Always zero V regs, FPSR, and FPCR */
+20 -6
arch/arm64/kernel/signal.c
··· 449 449 if (user->sve_size < SVE_SIG_CONTEXT_SIZE(vq)) 450 450 return -EINVAL; 451 451 452 + if (sm) { 453 + sme_alloc(current, false); 454 + if (!current->thread.sme_state) 455 + return -ENOMEM; 456 + } 457 + 452 458 sve_alloc(current, true); 453 459 if (!current->thread.sve_state) { 454 460 clear_thread_flag(TIF_SVE); 455 461 return -ENOMEM; 456 462 } 463 + 464 + if (sm) { 465 + current->thread.svcr |= SVCR_SM_MASK; 466 + set_thread_flag(TIF_SME); 467 + } else { 468 + current->thread.svcr &= ~SVCR_SM_MASK; 469 + set_thread_flag(TIF_SVE); 470 + } 471 + 472 + current->thread.fp_type = FP_STATE_SVE; 457 473 458 474 err = __copy_from_user(current->thread.sve_state, 459 475 (char __user const *)user->sve + ··· 477 461 SVE_SIG_REGS_SIZE(vq)); 478 462 if (err) 479 463 return -EFAULT; 480 - 481 - if (flags & SVE_SIG_FLAG_SM) 482 - current->thread.svcr |= SVCR_SM_MASK; 483 - else 484 - set_thread_flag(TIF_SVE); 485 - current->thread.fp_type = FP_STATE_SVE; 486 464 487 465 err = read_fpsimd_context(&fpsimd, user); 488 466 if (err) ··· 585 575 586 576 if (user->za_size < ZA_SIG_CONTEXT_SIZE(vq)) 587 577 return -EINVAL; 578 + 579 + sve_alloc(current, false); 580 + if (!current->thread.sve_state) 581 + return -ENOMEM; 588 582 589 583 sme_alloc(current, true); 590 584 if (!current->thread.sme_state) {
+1
arch/arm64/kvm/arm.c
··· 569 569 return kvm_wfi_trap_policy == KVM_WFX_NOTRAP; 570 570 571 571 return single_task_running() && 572 + vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 && 572 573 (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) || 573 574 vcpu->kvm->arch.vgic.nassgireq); 574 575 }
+6 -2
arch/arm64/kvm/at.c
··· 403 403 struct s1_walk_result *wr, u64 va) 404 404 { 405 405 u64 va_top, va_bottom, baddr, desc, new_desc, ipa; 406 + struct kvm_s2_trans s2_trans = {}; 406 407 int level, stride, ret; 407 408 408 409 level = wi->sl; ··· 421 420 ipa = baddr | index; 422 421 423 422 if (wi->s2) { 424 - struct kvm_s2_trans s2_trans = {}; 425 - 426 423 ret = kvm_walk_nested_s2(vcpu, ipa, &s2_trans); 427 424 if (ret) { 428 425 fail_s1_walk(wr, ··· 514 515 new_desc |= PTE_AF; 515 516 516 517 if (new_desc != desc) { 518 + if (wi->s2 && !kvm_s2_trans_writable(&s2_trans)) { 519 + fail_s1_walk(wr, ESR_ELx_FSC_PERM_L(level), true); 520 + return -EPERM; 521 + } 522 + 517 523 ret = kvm_swap_s1_desc(vcpu, ipa, desc, new_desc, wi); 518 524 if (ret) 519 525 return ret;
+3 -1
arch/arm64/kvm/hyp/entry.S
··· 126 126 127 127 add x1, x1, #VCPU_CONTEXT 128 128 129 - ALTERNATIVE(nop, SET_PSTATE_PAN(1), ARM64_HAS_PAN, CONFIG_ARM64_PAN) 129 + alternative_cb ARM64_ALWAYS_SYSTEM, kvm_pan_patch_el2_entry 130 + nop 131 + alternative_cb_end 130 132 131 133 // Store the guest regs x2 and x3 132 134 stp x2, x3, [x1, #CPU_XREG_OFFSET(2)]
+1 -1
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 854 854 return false; 855 855 } 856 856 857 - static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code) 857 + static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu) 858 858 { 859 859 /* 860 860 * Check for the conditions of Cortex-A510's #2077057. When these occur
+3
arch/arm64/kvm/hyp/nvhe/hyp-main.c
··· 180 180 /* Propagate WFx trapping flags */ 181 181 hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI); 182 182 hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI); 183 + } else { 184 + memcpy(&hyp_vcpu->vcpu.arch.fgt, hyp_vcpu->host_vcpu->arch.fgt, 185 + sizeof(hyp_vcpu->vcpu.arch.fgt)); 183 186 } 184 187 } 185 188
-1
arch/arm64/kvm/hyp/nvhe/pkvm.c
··· 172 172 173 173 /* Trust the host for non-protected vcpu features. */ 174 174 vcpu->arch.hcrx_el2 = host_vcpu->arch.hcrx_el2; 175 - memcpy(vcpu->arch.fgt, host_vcpu->arch.fgt, sizeof(vcpu->arch.fgt)); 176 175 return 0; 177 176 } 178 177
+1 -1
arch/arm64/kvm/hyp/nvhe/switch.c
··· 211 211 { 212 212 const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu); 213 213 214 - synchronize_vcpu_pstate(vcpu, exit_code); 214 + synchronize_vcpu_pstate(vcpu); 215 215 216 216 /* 217 217 * Some guests (e.g., protected VMs) are not be allowed to run in
+3 -2
arch/arm64/kvm/hyp/pgtable.c
··· 144 144 * page table walk. 145 145 */ 146 146 if (r == -EAGAIN) 147 - return !(walker->flags & KVM_PGTABLE_WALK_HANDLE_FAULT); 147 + return walker->flags & KVM_PGTABLE_WALK_IGNORE_EAGAIN; 148 148 149 149 return !r; 150 150 } ··· 1262 1262 { 1263 1263 return stage2_update_leaf_attrs(pgt, addr, size, 0, 1264 1264 KVM_PTE_LEAF_ATTR_LO_S2_S2AP_W, 1265 - NULL, NULL, 0); 1265 + NULL, NULL, 1266 + KVM_PGTABLE_WALK_IGNORE_EAGAIN); 1266 1267 } 1267 1268 1268 1269 void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+1 -1
arch/arm64/kvm/hyp/vhe/switch.c
··· 536 536 537 537 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) 538 538 { 539 - synchronize_vcpu_pstate(vcpu, exit_code); 539 + synchronize_vcpu_pstate(vcpu); 540 540 541 541 /* 542 542 * If we were in HYP context on entry, adjust the PSTATE view
+5 -7
arch/arm64/kvm/mmu.c
··· 497 497 this->count = 1; 498 498 rb_link_node(&this->node, parent, node); 499 499 rb_insert_color(&this->node, &hyp_shared_pfns); 500 - ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn, 1); 500 + ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn); 501 501 unlock: 502 502 mutex_unlock(&hyp_shared_pfns_lock); 503 503 ··· 523 523 524 524 rb_erase(&this->node, &hyp_shared_pfns); 525 525 kfree(this); 526 - ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn, 1); 526 + ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn); 527 527 unlock: 528 528 mutex_unlock(&hyp_shared_pfns_lock); 529 529 ··· 1563 1563 *prot &= ~KVM_PGTABLE_PROT_PX; 1564 1564 } 1565 1565 1566 - #define KVM_PGTABLE_WALK_MEMABORT_FLAGS (KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED) 1567 - 1568 1566 static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, 1569 1567 struct kvm_s2_trans *nested, 1570 1568 struct kvm_memory_slot *memslot, bool is_perm) 1571 1569 { 1572 1570 bool write_fault, exec_fault, writable; 1573 - enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS; 1571 + enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED; 1574 1572 enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R; 1575 1573 struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt; 1576 1574 unsigned long mmu_seq; ··· 1663 1665 struct kvm_pgtable *pgt; 1664 1666 struct page *page; 1665 1667 vm_flags_t vm_flags; 1666 - enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS; 1668 + enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED; 1667 1669 1668 1670 if (fault_is_perm) 1669 1671 fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu); ··· 1931 1933 /* Resolve the access fault by making the page young again. */ 1932 1934 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa) 1933 1935 { 1934 - enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED; 1936 + enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_SHARED; 1935 1937 struct kvm_s2_mmu *mmu; 1936 1938 1937 1939 trace_kvm_access_fault(fault_ipa);
+4 -1
arch/arm64/kvm/sys_regs.c
··· 4668 4668 * that we don't know how to handle. This certainly qualifies 4669 4669 * as a gross bug that should be fixed right away. 4670 4670 */ 4671 - BUG_ON(!r->access); 4671 + if (!r->access) { 4672 + bad_trap(vcpu, params, r, "register access"); 4673 + return; 4674 + } 4672 4675 4673 4676 /* Skip instruction if instructed so */ 4674 4677 if (likely(r->access(vcpu, params, r)))
+28
arch/arm64/kvm/va_layout.c
··· 296 296 generate_mov_q(read_sanitised_ftr_reg(SYS_CTR_EL0), 297 297 origptr, updptr, nr_inst); 298 298 } 299 + 300 + void kvm_pan_patch_el2_entry(struct alt_instr *alt, 301 + __le32 *origptr, __le32 *updptr, int nr_inst) 302 + { 303 + /* 304 + * If we're running at EL1 without hVHE, then SCTLR_EL2.SPAN means 305 + * nothing to us (it is RES1), and we don't need to set PSTATE.PAN 306 + * to anything useful. 307 + */ 308 + if (!is_kernel_in_hyp_mode() && !cpus_have_cap(ARM64_KVM_HVHE)) 309 + return; 310 + 311 + /* 312 + * Leap of faith: at this point, we must be running VHE one way or 313 + * another, and FEAT_PAN is required to be implemented. If KVM 314 + * explodes at runtime because your system does not abide by this 315 + * requirement, call your favourite HW vendor, they have screwed up. 316 + * 317 + * We don't expect hVHE to access any userspace mapping, so always 318 + * set PSTATE.PAN on enty. Same thing if we have PAN enabled on an 319 + * EL2 kernel. Only force it to 0 if we have not configured PAN in 320 + * the kernel (and you know this is really silly). 321 + */ 322 + if (cpus_have_cap(ARM64_KVM_HVHE) || IS_ENABLED(CONFIG_ARM64_PAN)) 323 + *updptr = cpu_to_le32(ENCODE_PSTATE(1, PAN)); 324 + else 325 + *updptr = cpu_to_le32(ENCODE_PSTATE(0, PAN)); 326 + }
+1
arch/riscv/Kconfig.errata
··· 84 84 select DMA_GLOBAL_POOL 85 85 select RISCV_DMA_NONCOHERENT 86 86 select RISCV_NONSTANDARD_CACHE_OPS 87 + select CACHEMAINT_FOR_DMA 87 88 select SIFIVE_CCACHE 88 89 default n 89 90 help
+12 -2
arch/riscv/include/asm/uaccess.h
··· 97 97 */ 98 98 99 99 #ifdef CONFIG_CC_HAS_ASM_GOTO_OUTPUT 100 + /* 101 + * Use a temporary variable for the output of the asm goto to avoid a 102 + * triggering an LLVM assertion due to sign extending the output when 103 + * it is used in later function calls: 104 + * https://github.com/llvm/llvm-project/issues/143795 105 + */ 100 106 #define __get_user_asm(insn, x, ptr, label) \ 107 + do { \ 108 + u64 __tmp; \ 101 109 asm_goto_output( \ 102 110 "1:\n" \ 103 111 " " insn " %0, %1\n" \ 104 112 _ASM_EXTABLE_UACCESS_ERR(1b, %l2, %0) \ 105 - : "=&r" (x) \ 106 - : "m" (*(ptr)) : : label) 113 + : "=&r" (__tmp) \ 114 + : "m" (*(ptr)) : : label); \ 115 + (x) = (__typeof__(x))(unsigned long)__tmp; \ 116 + } while (0) 107 117 #else /* !CONFIG_CC_HAS_ASM_GOTO_OUTPUT */ 108 118 #define __get_user_asm(insn, x, ptr, label) \ 109 119 do { \
+2 -1
arch/riscv/kernel/suspend.c
··· 51 51 52 52 #ifdef CONFIG_MMU 53 53 if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SSTC)) { 54 - csr_write(CSR_STIMECMP, context->stimecmp); 55 54 #if __riscv_xlen < 64 55 + csr_write(CSR_STIMECMP, ULONG_MAX); 56 56 csr_write(CSR_STIMECMPH, context->stimecmph); 57 57 #endif 58 + csr_write(CSR_STIMECMP, context->stimecmp); 58 59 } 59 60 60 61 csr_write(CSR_SATP, context->satp);
+4 -2
arch/riscv/kvm/vcpu_timer.c
··· 72 72 static int kvm_riscv_vcpu_update_vstimecmp(struct kvm_vcpu *vcpu, u64 ncycles) 73 73 { 74 74 #if defined(CONFIG_32BIT) 75 - ncsr_write(CSR_VSTIMECMP, ncycles & 0xFFFFFFFF); 75 + ncsr_write(CSR_VSTIMECMP, ULONG_MAX); 76 76 ncsr_write(CSR_VSTIMECMPH, ncycles >> 32); 77 + ncsr_write(CSR_VSTIMECMP, (u32)ncycles); 77 78 #else 78 79 ncsr_write(CSR_VSTIMECMP, ncycles); 79 80 #endif ··· 308 307 return; 309 308 310 309 #if defined(CONFIG_32BIT) 311 - ncsr_write(CSR_VSTIMECMP, (u32)t->next_cycles); 310 + ncsr_write(CSR_VSTIMECMP, ULONG_MAX); 312 311 ncsr_write(CSR_VSTIMECMPH, (u32)(t->next_cycles >> 32)); 312 + ncsr_write(CSR_VSTIMECMP, (u32)(t->next_cycles)); 313 313 #else 314 314 ncsr_write(CSR_VSTIMECMP, t->next_cycles); 315 315 #endif
+9 -8
arch/s390/boot/vmlinux.lds.S
··· 137 137 } 138 138 _end = .; 139 139 140 + /* Sections to be discarded */ 141 + /DISCARD/ : { 142 + COMMON_DISCARDS 143 + *(.eh_frame) 144 + *(*__ksymtab*) 145 + *(___kcrctab*) 146 + *(.modinfo) 147 + } 148 + 140 149 DWARF_DEBUG 141 150 ELF_DETAILS 142 151 ··· 170 161 *(.rela.*) *(.rela_*) 171 162 } 172 163 ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) detected!") 173 - 174 - /* Sections to be discarded */ 175 - /DISCARD/ : { 176 - COMMON_DISCARDS 177 - *(.eh_frame) 178 - *(*__ksymtab*) 179 - *(___kcrctab*) 180 - } 181 164 }
+1 -1
arch/s390/kernel/vdso/Makefile
··· 28 28 KBUILD_CFLAGS_VDSO := $(filter-out -munaligned-symbols,$(KBUILD_CFLAGS_VDSO)) 29 29 KBUILD_CFLAGS_VDSO := $(filter-out -fno-asynchronous-unwind-tables,$(KBUILD_CFLAGS_VDSO)) 30 30 KBUILD_CFLAGS_VDSO += -fPIC -fno-common -fno-builtin -fasynchronous-unwind-tables 31 - KBUILD_CFLAGS_VDSO += -fno-stack-protector 31 + KBUILD_CFLAGS_VDSO += -fno-stack-protector $(DISABLE_KSTACK_ERASE) 32 32 ldflags-y := -shared -soname=linux-vdso.so.1 \ 33 33 --hash-style=both --build-id=sha1 -T 34 34
+11 -2
arch/x86/events/perf_event.h
··· 1574 1574 struct hw_perf_event *hwc = &event->hw; 1575 1575 unsigned int hw_event, bts_event; 1576 1576 1577 - if (event->attr.freq) 1577 + /* 1578 + * Only use BTS for fixed rate period==1 events. 1579 + */ 1580 + if (event->attr.freq || period != 1) 1581 + return false; 1582 + 1583 + /* 1584 + * BTS doesn't virtualize. 1585 + */ 1586 + if (event->attr.exclude_host) 1578 1587 return false; 1579 1588 1580 1589 hw_event = hwc->config & INTEL_ARCH_EVENT_MASK; 1581 1590 bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS); 1582 1591 1583 - return hw_event == bts_event && period == 1; 1592 + return hw_event == bts_event; 1584 1593 } 1585 1594 1586 1595 static inline bool intel_pmu_has_bts(struct perf_event *event)
+5 -10
arch/x86/mm/fault.c
··· 821 821 force_sig_pkuerr((void __user *)address, pkey); 822 822 else 823 823 force_sig_fault(SIGSEGV, si_code, (void __user *)address); 824 - 825 - local_irq_disable(); 826 824 } 827 825 828 826 static noinline void ··· 1472 1474 do_kern_addr_fault(regs, error_code, address); 1473 1475 } else { 1474 1476 do_user_addr_fault(regs, error_code, address); 1475 - /* 1476 - * User address page fault handling might have reenabled 1477 - * interrupts. Fixing up all potential exit points of 1478 - * do_user_addr_fault() and its leaf functions is just not 1479 - * doable w/o creating an unholy mess or turning the code 1480 - * upside down. 1481 - */ 1482 - local_irq_disable(); 1483 1477 } 1478 + /* 1479 + * page fault handling might have reenabled interrupts, 1480 + * make sure to disable them again. 1481 + */ 1482 + local_irq_disable(); 1484 1483 } 1485 1484 1486 1485 DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
+1 -1
block/blk-mq.c
··· 1480 1480 static void blk_rq_poll_completion(struct request *rq, struct completion *wait) 1481 1481 { 1482 1482 do { 1483 - blk_hctx_poll(rq->q, rq->mq_hctx, NULL, 0); 1483 + blk_hctx_poll(rq->q, rq->mq_hctx, NULL, BLK_POLL_ONESHOT); 1484 1484 cond_resched(); 1485 1485 } while (!completion_done(wait)); 1486 1486 }
+1
block/blk-zoned.c
··· 1957 1957 1958 1958 disk->nr_zones = args->nr_zones; 1959 1959 if (args->nr_conv_zones >= disk->nr_zones) { 1960 + queue_limits_cancel_update(q); 1960 1961 pr_warn("%s: Invalid number of conventional zones %u / %u\n", 1961 1962 disk->disk_name, args->nr_conv_zones, disk->nr_zones); 1962 1963 ret = -ENODEV;
+6
crypto/authencesn.c
··· 169 169 struct scatterlist *src, *dst; 170 170 int err; 171 171 172 + if (assoclen < 8) 173 + return -EINVAL; 174 + 172 175 sg_init_table(areq_ctx->src, 2); 173 176 src = scatterwalk_ffwd(areq_ctx->src, req->src, assoclen); 174 177 dst = src; ··· 258 255 struct scatterlist *dst = req->dst; 259 256 u32 tmp[2]; 260 257 int err; 258 + 259 + if (assoclen < 8) 260 + return -EINVAL; 261 261 262 262 cryptlen -= authsize; 263 263
+2
drivers/base/dd.c
··· 548 548 static void device_unbind_cleanup(struct device *dev) 549 549 { 550 550 devres_release_all(dev); 551 + if (dev->driver->p_cb.post_unbind_rust) 552 + dev->driver->p_cb.post_unbind_rust(dev); 551 553 arch_teardown_dma_ops(dev); 552 554 kfree(dev->dma_range_map); 553 555 dev->dma_range_map = NULL;
+6 -5
drivers/base/regmap/regcache-maple.c
··· 95 95 96 96 mas_unlock(&mas); 97 97 98 - if (ret == 0) { 99 - kfree(lower); 100 - kfree(upper); 98 + if (ret) { 99 + kfree(entry); 100 + return ret; 101 101 } 102 - 103 - return ret; 102 + kfree(lower); 103 + kfree(upper); 104 + return 0; 104 105 } 105 106 106 107 static int regcache_maple_drop(struct regmap *map, unsigned int min,
+3 -1
drivers/base/regmap/regmap.c
··· 408 408 static void regmap_lock_hwlock_irqsave(void *__map) 409 409 { 410 410 struct regmap *map = __map; 411 + unsigned long flags = 0; 411 412 412 413 hwspin_lock_timeout_irqsave(map->hwlock, UINT_MAX, 413 - &map->spinlock_flags); 414 + &flags); 415 + map->spinlock_flags = flags; 414 416 } 415 417 416 418 static void regmap_unlock_hwlock(void *__map)
+34 -5
drivers/block/ublk_drv.c
··· 2885 2885 return ub; 2886 2886 } 2887 2887 2888 + static bool ublk_validate_user_pid(struct ublk_device *ub, pid_t ublksrv_pid) 2889 + { 2890 + rcu_read_lock(); 2891 + ublksrv_pid = pid_nr(find_vpid(ublksrv_pid)); 2892 + rcu_read_unlock(); 2893 + 2894 + return ub->ublksrv_tgid == ublksrv_pid; 2895 + } 2896 + 2888 2897 static int ublk_ctrl_start_dev(struct ublk_device *ub, 2889 2898 const struct ublksrv_ctrl_cmd *header) 2890 2899 { ··· 2962 2953 if (wait_for_completion_interruptible(&ub->completion) != 0) 2963 2954 return -EINTR; 2964 2955 2965 - if (ub->ublksrv_tgid != ublksrv_pid) 2956 + if (!ublk_validate_user_pid(ub, ublksrv_pid)) 2966 2957 return -EINVAL; 2967 2958 2968 2959 mutex_lock(&ub->mutex); ··· 2981 2972 disk->fops = &ub_fops; 2982 2973 disk->private_data = ub; 2983 2974 2984 - ub->dev_info.ublksrv_pid = ublksrv_pid; 2975 + ub->dev_info.ublksrv_pid = ub->ublksrv_tgid; 2985 2976 ub->ub_disk = disk; 2986 2977 2987 2978 ublk_apply_params(ub); ··· 3329 3320 static int ublk_ctrl_get_dev_info(struct ublk_device *ub, 3330 3321 const struct ublksrv_ctrl_cmd *header) 3331 3322 { 3323 + struct task_struct *p; 3324 + struct pid *pid; 3325 + struct ublksrv_ctrl_dev_info dev_info; 3326 + pid_t init_ublksrv_tgid = ub->dev_info.ublksrv_pid; 3332 3327 void __user *argp = (void __user *)(unsigned long)header->addr; 3333 3328 3334 3329 if (header->len < sizeof(struct ublksrv_ctrl_dev_info) || !header->addr) 3335 3330 return -EINVAL; 3336 3331 3337 - if (copy_to_user(argp, &ub->dev_info, sizeof(ub->dev_info))) 3332 + memcpy(&dev_info, &ub->dev_info, sizeof(dev_info)); 3333 + dev_info.ublksrv_pid = -1; 3334 + 3335 + if (init_ublksrv_tgid > 0) { 3336 + rcu_read_lock(); 3337 + pid = find_pid_ns(init_ublksrv_tgid, &init_pid_ns); 3338 + p = pid_task(pid, PIDTYPE_TGID); 3339 + if (p) { 3340 + int vnr = task_tgid_vnr(p); 3341 + 3342 + if (vnr) 3343 + dev_info.ublksrv_pid = vnr; 3344 + } 3345 + rcu_read_unlock(); 3346 + } 3347 + 3348 + if (copy_to_user(argp, &dev_info, sizeof(dev_info))) 3338 3349 return -EFAULT; 3339 3350 3340 3351 return 0; ··· 3499 3470 pr_devel("%s: All FETCH_REQs received, dev id %d\n", __func__, 3500 3471 header->dev_id); 3501 3472 3502 - if (ub->ublksrv_tgid != ublksrv_pid) 3473 + if (!ublk_validate_user_pid(ub, ublksrv_pid)) 3503 3474 return -EINVAL; 3504 3475 mutex_lock(&ub->mutex); ··· 3510 3481 ret = -EBUSY; 3511 3482 goto out_unlock; 3512 3483 } 3513 - ub->dev_info.ublksrv_pid = ublksrv_pid; 3484 + ub->dev_info.ublksrv_pid = ub->ublksrv_tgid; 3514 3485 ub->dev_info.state = UBLK_S_DEV_LIVE; 3515 3486 pr_devel("%s: new ublksrv_pid %d, dev id %d\n", 3516 3487 __func__, ublksrv_pid, header->dev_id);
+2 -1
drivers/clocksource/timer-riscv.c
··· 50 50 51 51 if (static_branch_likely(&riscv_sstc_available)) { 52 52 #if defined(CONFIG_32BIT) 53 - csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF); 53 + csr_write(CSR_STIMECMP, ULONG_MAX); 54 54 csr_write(CSR_STIMECMPH, next_tval >> 32); 55 + csr_write(CSR_STIMECMP, next_tval & 0xFFFFFFFF); 55 56 #else 56 57 csr_write(CSR_STIMECMP, next_tval); 57 58 #endif
+1 -1
drivers/comedi/comedi_fops.c
··· 1155 1155 for (i = 0; i < s->n_chan; i++) { 1156 1156 int x; 1157 1157 1158 - x = (dev->minor << 28) | (it->subdev << 24) | (i << 16) | 1158 + x = (it->subdev << 24) | (i << 16) | 1159 1159 (s->range_table_list[i]->length); 1160 1160 if (put_user(x, it->rangelist + i)) 1161 1161 return -EFAULT;
+30 -2
drivers/comedi/drivers/dmm32at.c
··· 330 330 331 331 static void dmm32at_setaitimer(struct comedi_device *dev, unsigned int nansec) 332 332 { 333 + unsigned long irq_flags; 333 334 unsigned char lo1, lo2, hi2; 334 335 unsigned short both2; 335 336 ··· 342 341 343 342 /* set counter clocks to 10MHz, disable all aux dio */ 344 343 outb(0, dev->iobase + DMM32AT_CTRDIO_CFG_REG); 344 + 345 + /* serialize access to control register and paged registers */ 346 + spin_lock_irqsave(&dev->spinlock, irq_flags); 345 347 346 348 /* get access to the clock regs */ 347 349 outb(DMM32AT_CTRL_PAGE_8254, dev->iobase + DMM32AT_CTRL_REG); ··· 358 354 outb(lo2, dev->iobase + DMM32AT_CLK2); 359 355 outb(hi2, dev->iobase + DMM32AT_CLK2); 360 356 357 + spin_unlock_irqrestore(&dev->spinlock, irq_flags); 358 + 361 359 /* enable the ai conversion interrupt and the clock to start scans */ 362 360 outb(DMM32AT_INTCLK_ADINT | 363 361 DMM32AT_INTCLK_CLKEN | DMM32AT_INTCLK_CLKSEL, ··· 369 363 static int dmm32at_ai_cmd(struct comedi_device *dev, struct comedi_subdevice *s) 370 364 { 371 365 struct comedi_cmd *cmd = &s->async->cmd; 366 + unsigned long irq_flags; 372 367 int ret; 373 368 374 369 dmm32at_ai_set_chanspec(dev, s, cmd->chanlist[0], cmd->chanlist_len); 375 370 371 + /* serialize access to control register and paged registers */ 372 + spin_lock_irqsave(&dev->spinlock, irq_flags); 373 + 376 374 /* reset the interrupt just in case */ 377 375 outb(DMM32AT_CTRL_INTRST, dev->iobase + DMM32AT_CTRL_REG); 376 + 377 + spin_unlock_irqrestore(&dev->spinlock, irq_flags); 378 378 379 379 /* 380 380 * wait for circuit to settle ··· 441 429 comedi_handle_events(dev, s); 442 430 } 443 431 432 + /* serialize access to control register and paged registers */ 433 + spin_lock(&dev->spinlock); 434 + 444 435 /* reset the interrupt */ 445 436 outb(DMM32AT_CTRL_INTRST, dev->iobase + DMM32AT_CTRL_REG); 437 + 438 + spin_unlock(&dev->spinlock); 446 439 return IRQ_HANDLED; 447 440 } 448 441 ··· 498 481 static int dmm32at_8255_io(struct comedi_device *dev,
499 482 int dir, int port, int data, unsigned long regbase) 500 483 { 484 + unsigned long irq_flags; 485 + int ret; 486 + 487 + /* serialize access to control register and paged registers */ 488 + spin_lock_irqsave(&dev->spinlock, irq_flags); 489 + 501 490 /* get access to the DIO regs */ 502 491 outb(DMM32AT_CTRL_PAGE_8255, dev->iobase + DMM32AT_CTRL_REG); 503 492 504 493 if (dir) { 505 494 outb(data, dev->iobase + regbase + port); 506 - return 0; 495 + ret = 0; 496 + } else { 497 + ret = inb(dev->iobase + regbase + port); 507 498 } 508 - return inb(dev->iobase + regbase + port); 499 + 500 + spin_unlock_irqrestore(&dev->spinlock, irq_flags); 501 + 502 + return ret; 509 503 } 510 504 511 505 /* Make sure the board is there and put it to a known state */
+1 -1
drivers/comedi/range.c
··· 52 52 const struct comedi_lrange *lr; 53 53 struct comedi_subdevice *s; 54 54 55 - subd = (it->range_type >> 24) & 0xf; 55 + subd = (it->range_type >> 24) & 0xff; 56 56 chan = (it->range_type >> 16) & 0xff; 57 57 58 58 if (!dev->attached)
+9 -3
drivers/gpio/gpiolib-cdev.c
··· 2549 2549 ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC); 2550 2550 if (!ctx) { 2551 2551 pr_err("Failed to allocate memory for line info notification\n"); 2552 + fput(fp); 2552 2553 return NOTIFY_DONE; 2553 2554 } 2554 2555 ··· 2697 2696 2698 2697 cdev = kzalloc(sizeof(*cdev), GFP_KERNEL); 2699 2698 if (!cdev) 2700 - return -ENODEV; 2699 + return -ENOMEM; 2701 2700 2702 2701 cdev->watched_lines = bitmap_zalloc(gdev->ngpio, GFP_KERNEL); 2703 2702 if (!cdev->watched_lines) ··· 2797 2796 return -ENOMEM; 2798 2797 2799 2798 ret = cdev_device_add(&gdev->chrdev, &gdev->dev); 2800 - if (ret) 2799 + if (ret) { 2800 + destroy_workqueue(gdev->line_state_wq); 2801 2801 return ret; 2802 + } 2802 2803 2803 2804 guard(srcu)(&gdev->srcu); 2804 2805 gc = srcu_dereference(gdev->chip, &gdev->srcu); 2805 - if (!gc) 2806 + if (!gc) { 2807 + cdev_device_del(&gdev->chrdev, &gdev->dev); 2808 + destroy_workqueue(gdev->line_state_wq); 2806 2809 return -ENODEV; 2810 + } 2807 2811 2808 2812 gpiochip_dbg(gc, "added GPIO chardev (%d:%d)\n", MAJOR(devt), gdev->id); 2809 2813
+11 -5
drivers/gpio/gpiolib-shared.c
··· 515 515 { 516 516 struct gpio_shared_entry *entry; 517 517 struct gpio_shared_ref *ref; 518 - unsigned long *flags; 518 + struct gpio_desc *desc; 519 519 int ret; 520 520 521 521 list_for_each_entry(entry, &gpio_shared_list, list) { ··· 543 543 if (list_count_nodes(&entry->refs) <= 1) 544 544 continue; 545 545 546 - flags = &gdev->descs[entry->offset].flags; 546 + desc = &gdev->descs[entry->offset]; 547 547 548 - __set_bit(GPIOD_FLAG_SHARED, flags); 548 + __set_bit(GPIOD_FLAG_SHARED, &desc->flags); 549 549 /* 550 550 * Shared GPIOs are not requested via the normal path. Make 551 551 * them inaccessible to anyone even before we register the 552 552 * chip. 553 553 */ 554 - __set_bit(GPIOD_FLAG_REQUESTED, flags); 554 + ret = gpiod_request_commit(desc, "shared"); 555 + if (ret) 556 + return ret; 555 557 556 558 pr_debug("GPIO %u owned by %s is shared by multiple consumers\n", 557 559 entry->offset, gpio_device_get_label(gdev)); ··· 564 562 ref->con_id ?: "(none)"); 565 563 566 564 ret = gpio_shared_make_adev(gdev, entry, ref); 567 - if (ret) 565 + if (ret) { 566 + gpiod_free_commit(desc); 568 567 return ret; 568 + } 569 569 } 570 570 } 571 571 ··· 582 578 list_for_each_entry(entry, &gpio_shared_list, list) { 583 579 if (!device_match_fwnode(&gdev->dev, entry->fwnode)) 584 580 continue; 581 + 582 + gpiod_free_commit(&gdev->descs[entry->offset]); 585 583 586 584 list_for_each_entry(ref, &entry->refs, list) { 587 585 guard(mutex)(&ref->lock);
+2 -2
drivers/gpio/gpiolib.c
··· 2453 2453 * on each other, and help provide better diagnostics in debugfs. 2454 2454 * They're called even less than the "set direction" calls. 2455 2455 */ 2456 - static int gpiod_request_commit(struct gpio_desc *desc, const char *label) 2456 + int gpiod_request_commit(struct gpio_desc *desc, const char *label) 2457 2457 { 2458 2458 unsigned int offset; 2459 2459 int ret; ··· 2515 2515 return ret; 2516 2516 } 2517 2517 2518 - static void gpiod_free_commit(struct gpio_desc *desc) 2518 + void gpiod_free_commit(struct gpio_desc *desc) 2519 2519 { 2520 2520 unsigned long flags; 2521 2521
+2
drivers/gpio/gpiolib.h
··· 244 244 struct gpio_desc *desc) 245 245 246 246 int gpiod_request(struct gpio_desc *desc, const char *label); 247 + int gpiod_request_commit(struct gpio_desc *desc, const char *label); 247 248 void gpiod_free(struct gpio_desc *desc); 249 + void gpiod_free_commit(struct gpio_desc *desc); 248 250 249 251 static inline int gpiod_request_user(struct gpio_desc *desc, const char *label) 250 252 {
+1 -1
drivers/gpu/drm/Kconfig
··· 210 210 211 211 config DRM_GPUSVM 212 212 tristate 213 - depends on DRM && DEVICE_PRIVATE 213 + depends on DRM 214 214 select HMM_MIRROR 215 215 select MMU_NOTIFIER 216 216 help
+3 -1
drivers/gpu/drm/Makefile
··· 108 108 obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o 109 109 110 110 drm_gpusvm_helper-y := \ 111 - drm_gpusvm.o\ 111 + drm_gpusvm.o 112 + drm_gpusvm_helper-$(CONFIG_ZONE_DEVICE) += \ 112 113 drm_pagemap.o 114 + 113 115 obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o 114 116 115 117 obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 763 763 } 764 764 765 765 static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring, 766 - u64 start_wptr, u32 end_wptr) 766 + u64 start_wptr, u64 end_wptr) 767 767 { 768 768 unsigned int first_idx = start_wptr & ring->buf_mask; 769 769 unsigned int last_idx = end_wptr & ring->buf_mask;
+4 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
··· 733 733 734 734 if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready) { 735 735 736 - if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid) 737 - return 0; 736 + if (!adev->gmc.gmc_funcs->flush_gpu_tlb_pasid) { 737 + r = 0; 738 + goto error_unlock_reset; 739 + } 738 740 739 741 if (adev->gmc.flush_tlb_needs_extra_type_2) 740 742 adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
··· 302 302 if (job && job->vmid) 303 303 amdgpu_vmid_reset(adev, ring->vm_hub, job->vmid); 304 304 amdgpu_ring_undo(ring); 305 - return r; 305 + goto free_fence; 306 306 } 307 307 *f = &af->base; 308 308 /* get a ref for the job */
+5 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 217 217 if (!entity) 218 218 return 0; 219 219 220 - return drm_sched_job_init(&(*job)->base, entity, 1, owner, 221 - drm_client_id); 220 + r = drm_sched_job_init(&(*job)->base, entity, 1, owner, drm_client_id); 221 + if (!r) 222 + return 0; 223 + 224 + kfree((*job)->hw_vm_fence); 222 225 223 226 err_fence: 224 227 kfree((*job)->hw_fence);
-12
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
··· 278 278 u32 sh_num, u32 instance, int xcc_id); 279 279 static u32 gfx_v12_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev); 280 280 281 - static void gfx_v12_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure); 282 281 static void gfx_v12_0_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg, 283 282 uint32_t val); 284 283 static int gfx_v12_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev); ··· 4633 4634 return r; 4634 4635 } 4635 4636 4636 - static void gfx_v12_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, 4637 - bool start, 4638 - bool secure) 4639 - { 4640 - uint32_t v = secure ? FRAME_TMZ : 0; 4641 - 4642 - amdgpu_ring_write(ring, PACKET3(PACKET3_FRAME_CONTROL, 0)); 4643 - amdgpu_ring_write(ring, v | FRAME_CMD(start ? 0 : 1)); 4644 - } 4645 - 4646 4637 static void gfx_v12_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg, 4647 4638 uint32_t reg_val_offs) 4648 4639 { ··· 5509 5520 .emit_cntxcntl = gfx_v12_0_ring_emit_cntxcntl, 5510 5521 .init_cond_exec = gfx_v12_0_ring_emit_init_cond_exec, 5511 5522 .preempt_ib = gfx_v12_0_ring_preempt_ib, 5512 - .emit_frame_cntl = gfx_v12_0_ring_emit_frame_cntl, 5513 5523 .emit_wreg = gfx_v12_0_ring_emit_wreg, 5514 5524 .emit_reg_wait = gfx_v12_0_ring_emit_reg_wait, 5515 5525 .emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,
+1 -2
drivers/gpu/drm/amd/amdkfd/kfd_debug.h
··· 120 120 && dev->kfd->mec2_fw_version < 0x1b6) || 121 121 (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1) 122 122 && dev->kfd->mec2_fw_version < 0x30) || 123 - (KFD_GC_VERSION(dev) >= IP_VERSION(11, 0, 0) && 124 - KFD_GC_VERSION(dev) < IP_VERSION(12, 0, 0))) 123 + kfd_dbg_has_cwsr_workaround(dev)) 125 124 return false; 126 125 127 126 /* Assume debugging and cooperative launch supported otherwise. */
+3 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_colorop.c
··· 79 79 goto cleanup; 80 80 81 81 list->type = ops[i]->base.id; 82 - list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[i]->base.id); 83 82 84 83 i++; 85 84 ··· 196 197 goto cleanup; 197 198 198 199 drm_colorop_set_next_property(ops[i-1], ops[i]); 200 + 201 + list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[0]->base.id); 202 + 199 203 return 0; 200 204 201 205 cleanup:
-11
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
··· 248 248 struct vblank_control_work *vblank_work = 249 249 container_of(work, struct vblank_control_work, work); 250 250 struct amdgpu_display_manager *dm = vblank_work->dm; 251 - struct amdgpu_device *adev = drm_to_adev(dm->ddev); 252 - int r; 253 251 254 252 mutex_lock(&dm->dc_lock); 255 253 ··· 277 279 278 280 if (dm->active_vblank_irq_count == 0) { 279 281 dc_post_update_surfaces_to_stream(dm->dc); 280 - 281 - r = amdgpu_dpm_pause_power_profile(adev, true); 282 - if (r) 283 - dev_warn(adev->dev, "failed to set default power profile mode\n"); 284 - 285 282 dc_allow_idle_optimizations(dm->dc, true); 286 - 287 - r = amdgpu_dpm_pause_power_profile(adev, false); 288 - if (r) 289 - dev_warn(adev->dev, "failed to restore the power profile mode\n"); 290 283 } 291 284 292 285 mutex_unlock(&dm->dc_lock);
+8 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
··· 915 915 struct amdgpu_dm_connector *amdgpu_dm_connector; 916 916 const struct dc_link *dc_link; 917 917 918 - use_polling |= connector->polled != DRM_CONNECTOR_POLL_HPD; 919 - 920 918 if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK) 921 919 continue; 922 920 923 921 amdgpu_dm_connector = to_amdgpu_dm_connector(connector); 922 + 923 + /* 924 + * Analog connectors may be hot-plugged unlike other connector 925 + * types that don't support HPD. Only poll analog connectors. 926 + */ 927 + use_polling |= 928 + amdgpu_dm_connector->dc_link && 929 + dc_connector_supports_analog(amdgpu_dm_connector->dc_link->link_id.id); 924 930 925 931 dc_link = amdgpu_dm_connector->dc_link; 926 932
+9 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
··· 1790 1790 static int 1791 1791 dm_plane_init_colorops(struct drm_plane *plane) 1792 1792 { 1793 - struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES]; 1793 + struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES] = {}; 1794 1794 struct drm_device *dev = plane->dev; 1795 1795 struct amdgpu_device *adev = drm_to_adev(dev); 1796 1796 struct dc *dc = adev->dm.dc; 1797 1797 int len = 0; 1798 - int ret; 1798 + int ret = 0; 1799 + int i; 1799 1800 1800 1801 if (plane->type == DRM_PLANE_TYPE_CURSOR) 1801 1802 return 0; ··· 1807 1806 if (ret) { 1808 1807 drm_err(plane->dev, "Failed to create color pipeline for plane %d: %d\n", 1809 1808 plane->base.id, ret); 1810 - return ret; 1809 + goto out; 1811 1810 } 1812 1811 len++; 1813 1812 ··· 1815 1814 drm_plane_create_color_pipeline_property(plane, pipelines, len); 1816 1815 } 1817 1816 1818 - return 0; 1817 + out: 1818 + for (i = 0; i < len; i++) 1819 + kfree(pipelines[i].name); 1820 + 1821 + return ret; 1819 1822 } 1820 1823 #endif 1821 1824
+16 -15
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
··· 2273 2273 if (scaling_factor == 0) 2274 2274 return -EINVAL; 2275 2275 2276 - memset(smc_table, 0, sizeof(SISLANDS_SMC_STATETABLE)); 2277 - 2278 2276 ret = si_calculate_adjusted_tdp_limits(adev, 2279 2277 false, /* ??? */ 2280 2278 adev->pm.dpm.tdp_adjustment, ··· 2280 2282 &near_tdp_limit); 2281 2283 if (ret) 2282 2284 return ret; 2285 + 2286 + if (adev->pdev->device == 0x6611 && adev->pdev->revision == 0x87) { 2287 + /* Workaround buggy powertune on Radeon 430 and 520. */ 2288 + tdp_limit = 32; 2289 + near_tdp_limit = 28; 2290 + } 2283 2291 2284 2292 smc_table->dpm2Params.TDPLimit = 2285 2293 cpu_to_be32(si_scale_power_for_smc(tdp_limit, scaling_factor) * 1000); ··· 2332 2328 2333 2329 if (ni_pi->enable_power_containment) { 2334 2330 SISLANDS_SMC_STATETABLE *smc_table = &si_pi->smc_statetable; 2335 - u32 scaling_factor = si_get_smc_power_scaling_factor(adev); 2336 2331 int ret; 2337 - 2338 - memset(smc_table, 0, sizeof(SISLANDS_SMC_STATETABLE)); 2339 - 2340 - smc_table->dpm2Params.NearTDPLimit = 2341 - cpu_to_be32(si_scale_power_for_smc(adev->pm.dpm.near_tdp_limit_adjusted, scaling_factor) * 1000); 2342 - smc_table->dpm2Params.SafePowerLimit = 2343 - cpu_to_be32(si_scale_power_for_smc((adev->pm.dpm.near_tdp_limit_adjusted * SISLANDS_DPM2_TDP_SAFE_LIMIT_PERCENT) / 100, scaling_factor) * 1000); 2344 2332 2345 2333 ret = amdgpu_si_copy_bytes_to_smc(adev, 2346 2334 (si_pi->state_table_start + ··· 3469 3473 (adev->pdev->revision == 0x80) || 3470 3474 (adev->pdev->revision == 0x81) || 3471 3475 (adev->pdev->revision == 0x83) || 3472 - (adev->pdev->revision == 0x87) || 3476 + (adev->pdev->revision == 0x87 && 3477 + adev->pdev->device != 0x6611) || 3473 3478 (adev->pdev->device == 0x6604) || 3474 3479 (adev->pdev->device == 0x6605)) { 3475 3480 max_sclk = 75000; 3481 + } else if (adev->pdev->revision == 0x87 && 3482 + adev->pdev->device == 0x6611) { 3483 + /* Radeon 430 and 520 */ 3484 + max_sclk = 78000; 3476 3485 } 3477 3486 } 3478 3487 ··· 7601 7600 case AMDGPU_IRQ_STATE_DISABLE:
7602 7601 cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT); 7603 7602 cg_thermal_int |= CG_THERMAL_INT__THERM_INT_MASK_HIGH_MASK; 7604 - WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int); 7603 + WREG32(mmCG_THERMAL_INT, cg_thermal_int); 7605 7604 break; 7606 7605 case AMDGPU_IRQ_STATE_ENABLE: 7607 7606 cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT); 7608 7607 cg_thermal_int &= ~CG_THERMAL_INT__THERM_INT_MASK_HIGH_MASK; 7609 - WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int); 7608 + WREG32(mmCG_THERMAL_INT, cg_thermal_int); 7610 7609 break; 7611 7610 default: 7612 7611 break; ··· 7618 7617 case AMDGPU_IRQ_STATE_DISABLE: 7619 7618 cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT); 7620 7619 cg_thermal_int |= CG_THERMAL_INT__THERM_INT_MASK_LOW_MASK; 7621 - WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int); 7620 + WREG32(mmCG_THERMAL_INT, cg_thermal_int); 7622 7621 break; 7623 7622 case AMDGPU_IRQ_STATE_ENABLE: 7624 7623 cg_thermal_int = RREG32_SMC(mmCG_THERMAL_INT); 7625 7624 cg_thermal_int &= ~CG_THERMAL_INT__THERM_INT_MASK_LOW_MASK; 7626 - WREG32_SMC(mmCG_THERMAL_INT, cg_thermal_int); 7625 + WREG32(mmCG_THERMAL_INT, cg_thermal_int); 7627 7626 break; 7628 7627 default: 7629 7628 break;
+14 -6
drivers/gpu/drm/bridge/synopsys/dw-dp.c
··· 2062 2062 } 2063 2063 2064 2064 ret = drm_bridge_attach(encoder, bridge, NULL, DRM_BRIDGE_ATTACH_NO_CONNECTOR); 2065 - if (ret) 2065 + if (ret) { 2066 2066 dev_err_probe(dev, ret, "Failed to attach bridge\n"); 2067 + goto unregister_aux; 2068 + } 2067 2069 2068 2070 dw_dp_init_hw(dp); 2069 2071 2070 2072 ret = phy_init(dp->phy); 2071 2073 if (ret) { 2072 2074 dev_err_probe(dev, ret, "phy init failed\n"); 2073 - return ERR_PTR(ret); 2075 + goto unregister_aux; 2074 2076 } 2075 2077 2076 2078 ret = devm_add_action_or_reset(dev, dw_dp_phy_exit, dp); 2077 2079 if (ret) 2078 - return ERR_PTR(ret); 2080 + goto unregister_aux; 2079 2081 2080 2082 dp->irq = platform_get_irq(pdev, 0); 2081 - if (dp->irq < 0) 2082 - return ERR_PTR(ret); 2083 + if (dp->irq < 0) { 2084 + ret = dp->irq; 2085 + goto unregister_aux; 2086 + } 2083 2087 2084 2088 ret = devm_request_threaded_irq(dev, dp->irq, NULL, dw_dp_irq, 2085 2089 IRQF_ONESHOT, dev_name(dev), dp); 2086 2090 if (ret) { 2087 2091 dev_err_probe(dev, ret, "failed to request irq\n"); 2088 - return ERR_PTR(ret); 2092 + goto unregister_aux; 2089 2093 } 2090 2094 2091 2095 return dp; 2096 + 2097 + unregister_aux: 2098 + drm_dp_aux_unregister(&dp->aux); 2099 + return ERR_PTR(ret); 2092 2100 } 2093 2101 EXPORT_SYMBOL_GPL(dw_dp_bind); 2094 2102
+22 -14
drivers/gpu/drm/i915/display/intel_color_pipeline.c
··· 34 34 return ret; 35 35 36 36 list->type = colorop->base.base.id; 37 - list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", colorop->base.base.id); 38 37 39 38 /* TODO: handle failures and clean up */ 39 + prev_op = &colorop->base; 40 + 41 + colorop = intel_colorop_create(INTEL_PLANE_CB_CSC); 42 + ret = drm_plane_colorop_ctm_3x4_init(dev, &colorop->base, plane, 43 + DRM_COLOROP_FLAG_ALLOW_BYPASS); 44 + if (ret) 45 + return ret; 46 + 47 + drm_colorop_set_next_property(prev_op, &colorop->base); 40 48 prev_op = &colorop->base; 41 49 42 50 if (DISPLAY_VER(display) >= 35 && ··· 63 55 prev_op = &colorop->base; 64 56 } 65 57 66 - colorop = intel_colorop_create(INTEL_PLANE_CB_CSC); 67 - ret = drm_plane_colorop_ctm_3x4_init(dev, &colorop->base, plane, 68 - DRM_COLOROP_FLAG_ALLOW_BYPASS); 69 - if (ret) 70 - return ret; 71 - 72 - drm_colorop_set_next_property(prev_op, &colorop->base); 73 - prev_op = &colorop->base; 74 - 75 58 colorop = intel_colorop_create(INTEL_PLANE_CB_POST_CSC_LUT); 76 59 ret = drm_plane_colorop_curve_1d_lut_init(dev, &colorop->base, plane, 77 60 PLANE_GAMMA_SIZE, ··· 73 74 74 75 drm_colorop_set_next_property(prev_op, &colorop->base); 75 76 77 + list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", list->type); 78 + 76 79 return 0; 77 80 } ··· 82 81 { 83 82 struct drm_device *dev = plane->dev; 84 83 struct intel_display *display = to_intel_display(dev); 85 - struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES]; 84 + struct drm_prop_enum_list pipelines[MAX_COLOR_PIPELINES] = {}; 86 85 int len = 0; 87 - int ret; 86 + int ret = 0; 87 + int i; 88 88 89 89 /* Currently expose pipeline only for HDR planes */ 90 90 if (!icl_is_hdr_plane(display, to_intel_plane(plane)->id)) ··· 94 92 /* Add pipeline consisting of transfer functions */ 95 93 ret = _intel_color_pipeline_plane_init(plane, &pipelines[len], pipe); 96 94 if (ret) 97 - return ret; 95 + goto out; 98 96 len++; 99 97 100 - return drm_plane_create_color_pipeline_property(plane, pipelines, len);
98 + ret = drm_plane_create_color_pipeline_property(plane, pipelines, len); 99 + 100 + for (i = 0; i < len; i++) 101 + kfree(pipelines[i].name); 102 + 103 + out: 104 + return ret; 101 105 }
+7 -1
drivers/gpu/drm/imagination/pvr_fw_trace.c
··· 137 137 struct rogue_fwif_kccb_cmd cmd; 138 138 int idx; 139 139 int err; 140 + int slot; 140 141 141 142 if (group_mask) 142 143 fw_trace->tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_TRACE | group_mask; ··· 155 154 cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_LOGTYPE_UPDATE; 156 155 cmd.kccb_flags = 0; 157 156 158 - err = pvr_kccb_send_cmd(pvr_dev, &cmd, NULL); 157 + err = pvr_kccb_send_cmd(pvr_dev, &cmd, &slot); 158 + if (err) 159 + goto err_drm_dev_exit; 159 160 161 + err = pvr_kccb_wait_for_completion(pvr_dev, slot, HZ, NULL); 162 + 163 + err_drm_dev_exit: 160 164 drm_dev_exit(idx); 161 165 162 166 err_up_read:
+1 -1
drivers/gpu/drm/mediatek/Kconfig
··· 8 8 depends on OF 9 9 depends on MTK_MMSYS 10 10 select DRM_CLIENT_SELECTION 11 - select DRM_GEM_DMA_HELPER if DRM_FBDEV_EMULATION 11 + select DRM_GEM_DMA_HELPER 12 12 select DRM_KMS_HELPER 13 13 select DRM_DISPLAY_HELPER 14 14 select DRM_BRIDGE_CONNECTOR
+9 -14
drivers/gpu/drm/mediatek/mtk_dpi.c
··· 836 836 enum drm_bridge_attach_flags flags) 837 837 { 838 838 struct mtk_dpi *dpi = bridge_to_dpi(bridge); 839 - int ret; 840 - 841 - dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 1, -1); 842 - if (IS_ERR(dpi->next_bridge)) { 843 - ret = PTR_ERR(dpi->next_bridge); 844 - if (ret == -EPROBE_DEFER) 845 - return ret; 846 - 847 - /* Old devicetree has only one endpoint */ 848 - dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 0, 0); 849 - if (IS_ERR(dpi->next_bridge)) 850 - return dev_err_probe(dpi->dev, PTR_ERR(dpi->next_bridge), 851 - "Failed to get bridge\n"); 852 - } 853 839 854 840 return drm_bridge_attach(encoder, dpi->next_bridge, 855 841 &dpi->bridge, flags); ··· 1304 1318 dpi->irq = platform_get_irq(pdev, 0); 1305 1319 if (dpi->irq < 0) 1306 1320 return dpi->irq; 1321 + 1322 + dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 1, -1); 1323 + if (IS_ERR(dpi->next_bridge) && PTR_ERR(dpi->next_bridge) == -ENODEV) { 1324 + /* Old devicetree has only one endpoint */ 1325 + dpi->next_bridge = devm_drm_of_get_bridge(dpi->dev, dpi->dev->of_node, 0, 0); 1326 + } 1327 + if (IS_ERR(dpi->next_bridge)) 1328 + return dev_err_probe(dpi->dev, PTR_ERR(dpi->next_bridge), 1329 + "Failed to get bridge\n"); 1307 1330 1308 1331 platform_set_drvdata(pdev, dpi); 1309 1332
+103 -161
drivers/gpu/drm/mediatek/mtk_gem.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 3 * Copyright (c) 2015 MediaTek Inc. 4 + * Copyright (c) 2025 Collabora Ltd. 5 + * AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> 4 6 */ 5 7 6 8 #include <linux/dma-buf.h> ··· 20 18 21 19 static int mtk_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); 22 20 23 - static const struct vm_operations_struct vm_ops = { 24 - .open = drm_gem_vm_open, 25 - .close = drm_gem_vm_close, 26 - }; 21 + static void mtk_gem_free_object(struct drm_gem_object *obj) 22 + { 23 + struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj); 24 + struct mtk_drm_private *priv = obj->dev->dev_private; 25 + 26 + if (dma_obj->sgt) 27 + drm_prime_gem_destroy(obj, dma_obj->sgt); 28 + else 29 + dma_free_wc(priv->dma_dev, dma_obj->base.size, 30 + dma_obj->vaddr, dma_obj->dma_addr); 31 + 32 + /* release file pointer to gem object. */ 33 + drm_gem_object_release(obj); 34 + 35 + kfree(dma_obj); 36 + } 37 + 38 + /* 39 + * Allocate a sg_table for this GEM object. 40 + * Note: Both the table's contents, and the sg_table itself must be freed by 41 + * the caller. 42 + * Returns a pointer to the newly allocated sg_table, or an ERR_PTR() error. 
43 + */ 44 + static struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj) 45 + { 46 + struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj); 47 + struct mtk_drm_private *priv = obj->dev->dev_private; 48 + struct sg_table *sgt; 49 + int ret; 50 + 51 + sgt = kzalloc(sizeof(*sgt), GFP_KERNEL); 52 + if (!sgt) 53 + return ERR_PTR(-ENOMEM); 54 + 55 + ret = dma_get_sgtable(priv->dma_dev, sgt, dma_obj->vaddr, 56 + dma_obj->dma_addr, obj->size); 57 + if (ret) { 58 + DRM_ERROR("failed to allocate sgt, %d\n", ret); 59 + kfree(sgt); 60 + return ERR_PTR(ret); 61 + } 62 + 63 + return sgt; 64 + } 27 65 28 66 static const struct drm_gem_object_funcs mtk_gem_object_funcs = { 29 67 .free = mtk_gem_free_object, 68 + .print_info = drm_gem_dma_object_print_info, 30 69 .get_sg_table = mtk_gem_prime_get_sg_table, 31 - .vmap = mtk_gem_prime_vmap, 32 - .vunmap = mtk_gem_prime_vunmap, 70 + .vmap = drm_gem_dma_object_vmap, 33 71 .mmap = mtk_gem_object_mmap, 34 - .vm_ops = &vm_ops, 72 + .vm_ops = &drm_gem_dma_vm_ops, 35 73 }; 36 74 37 - static struct mtk_gem_obj *mtk_gem_init(struct drm_device *dev, 38 - unsigned long size) 75 + static struct drm_gem_dma_object *mtk_gem_init(struct drm_device *dev, 76 + unsigned long size, bool private) 39 77 { 40 - struct mtk_gem_obj *mtk_gem_obj; 78 + struct drm_gem_dma_object *dma_obj; 41 79 int ret; 42 80 43 81 size = round_up(size, PAGE_SIZE); ··· 85 43 if (size == 0) 86 44 return ERR_PTR(-EINVAL); 87 45 88 - mtk_gem_obj = kzalloc(sizeof(*mtk_gem_obj), GFP_KERNEL); 89 - if (!mtk_gem_obj) 46 + dma_obj = kzalloc(sizeof(*dma_obj), GFP_KERNEL); 47 + if (!dma_obj) 90 48 return ERR_PTR(-ENOMEM); 91 49 92 - mtk_gem_obj->base.funcs = &mtk_gem_object_funcs; 50 + dma_obj->base.funcs = &mtk_gem_object_funcs; 93 51 94 - ret = drm_gem_object_init(dev, &mtk_gem_obj->base, size); 95 - if (ret < 0) { 52 + if (private) { 53 + ret = 0; 54 + drm_gem_private_object_init(dev, &dma_obj->base, size); 55 + } else { 56 + ret = drm_gem_object_init(dev, &dma_obj->base, size);
57 + } 58 + if (ret) { 96 59 DRM_ERROR("failed to initialize gem object\n"); 97 - kfree(mtk_gem_obj); 60 + kfree(dma_obj); 98 61 return ERR_PTR(ret); 99 62 } 100 63 101 - return mtk_gem_obj; 64 + return dma_obj; 102 65 } 103 66 104 - struct mtk_gem_obj *mtk_gem_create(struct drm_device *dev, 105 - size_t size, bool alloc_kmap) 67 + static struct drm_gem_dma_object *mtk_gem_create(struct drm_device *dev, size_t size) 106 68 { 107 69 struct mtk_drm_private *priv = dev->dev_private; 108 - struct mtk_gem_obj *mtk_gem; 70 + struct drm_gem_dma_object *dma_obj; 109 71 struct drm_gem_object *obj; 110 72 int ret; 111 73 112 - mtk_gem = mtk_gem_init(dev, size); 113 - if (IS_ERR(mtk_gem)) 114 - return ERR_CAST(mtk_gem); 74 + dma_obj = mtk_gem_init(dev, size, false); 75 + if (IS_ERR(dma_obj)) 76 + return ERR_CAST(dma_obj); 115 77 116 - obj = &mtk_gem->base; 78 + obj = &dma_obj->base; 117 79 118 - mtk_gem->dma_attrs = DMA_ATTR_WRITE_COMBINE; 119 - 120 - if (!alloc_kmap) 121 - mtk_gem->dma_attrs |= DMA_ATTR_NO_KERNEL_MAPPING; 122 - 123 - mtk_gem->cookie = dma_alloc_attrs(priv->dma_dev, obj->size, 124 - &mtk_gem->dma_addr, GFP_KERNEL, 125 - mtk_gem->dma_attrs); 126 - if (!mtk_gem->cookie) { 80 + dma_obj->vaddr = dma_alloc_wc(priv->dma_dev, obj->size, 81 + &dma_obj->dma_addr, 82 + GFP_KERNEL | __GFP_NOWARN); 83 + if (!dma_obj->vaddr) { 127 84 DRM_ERROR("failed to allocate %zx byte dma buffer", obj->size); 128 85 ret = -ENOMEM; 129 86 goto err_gem_free; 130 87 } 131 88 132 - if (alloc_kmap) 133 - mtk_gem->kvaddr = mtk_gem->cookie; 134 - 135 - DRM_DEBUG_DRIVER("cookie = %p dma_addr = %pad size = %zu\n", 136 - mtk_gem->cookie, &mtk_gem->dma_addr, 89 + DRM_DEBUG_DRIVER("vaddr = %p dma_addr = %pad size = %zu\n", 90 + dma_obj->vaddr, &dma_obj->dma_addr, 137 91 size); 138 92 139 - return mtk_gem; 93 + return dma_obj; 140 94 141 95 err_gem_free: 142 96 drm_gem_object_release(obj); 143 - kfree(mtk_gem); 97 + kfree(dma_obj); 144 98 return ERR_PTR(ret); 145 - } 146 - 
147 - void mtk_gem_free_object(struct drm_gem_object *obj) 148 - { 149 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 150 - struct mtk_drm_private *priv = obj->dev->dev_private; 151 - 152 - if (mtk_gem->sg) 153 - drm_prime_gem_destroy(obj, mtk_gem->sg); 154 - else 155 - dma_free_attrs(priv->dma_dev, obj->size, mtk_gem->cookie, 156 - mtk_gem->dma_addr, mtk_gem->dma_attrs); 157 - 158 - /* release file pointer to gem object. */ 159 - drm_gem_object_release(obj); 160 - 161 - kfree(mtk_gem); 162 99 } 163 100 164 101 int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev, 165 102 struct drm_mode_create_dumb *args) 166 103 { 167 - struct mtk_gem_obj *mtk_gem; 104 + struct drm_gem_dma_object *dma_obj; 168 105 int ret; 169 106 170 107 args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8); ··· 156 135 args->size = args->pitch; 157 136 args->size *= args->height; 158 137 159 - mtk_gem = mtk_gem_create(dev, args->size, false); 160 - if (IS_ERR(mtk_gem)) 161 - return PTR_ERR(mtk_gem); 138 + dma_obj = mtk_gem_create(dev, args->size); 139 + if (IS_ERR(dma_obj)) 140 + return PTR_ERR(dma_obj); 162 141 163 142 /* 164 143 * allocate a id of idr table where the obj is registered 165 144 * and handle has the id what user can see. 166 145 */ 167 146 ret = drm_gem_handle_create(file_priv, &dma_obj->base, &args->handle); 168 147 if (ret) 169 148 goto err_handle_create; 170 149 171 150 /* drop reference from allocate - handle holds it now. */
172 - drm_gem_object_put(&mtk_gem->base); 151 + drm_gem_object_put(&dma_obj->base); 173 152 174 153 return 0; 175 154 176 155 err_handle_create: 177 - mtk_gem_free_object(&mtk_gem->base); 156 + mtk_gem_free_object(&dma_obj->base); 178 157 return ret; 179 158 } 180 159 ··· 182 161 struct vm_area_struct *vma) 183 162 184 163 { 185 - int ret; 186 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 164 + struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj); 187 165 struct mtk_drm_private *priv = obj->dev->dev_private; 166 + int ret; 188 167 189 168 /* 190 169 * Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the 191 170 * whole buffer from the start. 192 171 */ 193 - vma->vm_pgoff = 0; 172 + vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node); 194 173 195 174 /* 196 175 * dma_alloc_attrs() allocated a struct page table for mtk_gem, so clear 197 176 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap(). 198 177 */ 199 - vm_flags_set(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP); 178 + vm_flags_mod(vma, VM_IO | VM_DONTEXPAND | VM_DONTDUMP, VM_PFNMAP); 179 + 200 180 vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); 201 181 vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot); 202 182 203 - ret = dma_mmap_attrs(priv->dma_dev, vma, mtk_gem->cookie, 204 - mtk_gem->dma_addr, obj->size, mtk_gem->dma_attrs); 183 + ret = dma_mmap_wc(priv->dma_dev, vma, dma_obj->vaddr, 184 + dma_obj->dma_addr, obj->size); 185 + if (ret) 186 + drm_gem_vm_close(vma); 205 187 206 188 return ret; 207 189 } 208 190 209 - /* 210 - * Allocate a sg_table for this GEM object. 211 - * Note: Both the table's contents, and the sg_table itself must be freed by 212 - * the caller. 213 - * Returns a pointer to the newly allocated sg_table, or an ERR_PTR() error. 
214 - */ 215 - struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj) 216 - { 217 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 218 - struct mtk_drm_private *priv = obj->dev->dev_private; 219 - struct sg_table *sgt; 220 - int ret; 221 - 222 - sgt = kzalloc(sizeof(*sgt), GFP_KERNEL); 223 - if (!sgt) 224 - return ERR_PTR(-ENOMEM); 225 - 226 - ret = dma_get_sgtable_attrs(priv->dma_dev, sgt, mtk_gem->cookie, 227 - mtk_gem->dma_addr, obj->size, 228 - mtk_gem->dma_attrs); 229 - if (ret) { 230 - DRM_ERROR("failed to allocate sgt, %d\n", ret); 231 - kfree(sgt); 232 - return ERR_PTR(ret); 233 - } 234 - 235 - return sgt; 236 - } 237 - 238 191 struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev, 239 - struct dma_buf_attachment *attach, struct sg_table *sg) 192 + struct dma_buf_attachment *attach, struct sg_table *sgt) 240 193 { 241 - struct mtk_gem_obj *mtk_gem; 194 + struct drm_gem_dma_object *dma_obj; 242 195 243 196 /* check if the entries in the sg_table are contiguous */ 244 - if (drm_prime_get_contiguous_size(sg) < attach->dmabuf->size) { 197 + if (drm_prime_get_contiguous_size(sgt) < attach->dmabuf->size) { 245 198 DRM_ERROR("sg_table is not contiguous"); 246 199 return ERR_PTR(-EINVAL); 247 200 } 248 201 249 - mtk_gem = mtk_gem_init(dev, attach->dmabuf->size); 250 - if (IS_ERR(mtk_gem)) 251 - return ERR_CAST(mtk_gem); 202 + dma_obj = mtk_gem_init(dev, attach->dmabuf->size, true); 203 + if (IS_ERR(dma_obj)) 204 + return ERR_CAST(dma_obj); 252 205 253 - mtk_gem->dma_addr = sg_dma_address(sg->sgl); 254 - mtk_gem->sg = sg; 206 + dma_obj->dma_addr = sg_dma_address(sgt->sgl); 207 + dma_obj->sgt = sgt; 255 208 256 - return &mtk_gem->base; 257 - } 258 - 259 - int mtk_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) 260 - { 261 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 262 - struct sg_table *sgt = NULL; 263 - unsigned int npages; 264 - 265 - if (mtk_gem->kvaddr) 266 - goto out; 267 - 268 - sgt = 
mtk_gem_prime_get_sg_table(obj); 269 - if (IS_ERR(sgt)) 270 - return PTR_ERR(sgt); 271 - 272 - npages = obj->size >> PAGE_SHIFT; 273 - mtk_gem->pages = kcalloc(npages, sizeof(*mtk_gem->pages), GFP_KERNEL); 274 - if (!mtk_gem->pages) { 275 - sg_free_table(sgt); 276 - kfree(sgt); 277 - return -ENOMEM; 278 - } 279 - 280 - drm_prime_sg_to_page_array(sgt, mtk_gem->pages, npages); 281 - 282 - mtk_gem->kvaddr = vmap(mtk_gem->pages, npages, VM_MAP, 283 - pgprot_writecombine(PAGE_KERNEL)); 284 - if (!mtk_gem->kvaddr) { 285 - sg_free_table(sgt); 286 - kfree(sgt); 287 - kfree(mtk_gem->pages); 288 - return -ENOMEM; 289 - } 290 - sg_free_table(sgt); 291 - kfree(sgt); 292 - 293 - out: 294 - iosys_map_set_vaddr(map, mtk_gem->kvaddr); 295 - 296 - return 0; 297 - } 298 - 299 - void mtk_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map) 300 - { 301 - struct mtk_gem_obj *mtk_gem = to_mtk_gem_obj(obj); 302 - void *vaddr = map->vaddr; 303 - 304 - if (!mtk_gem->pages) 305 - return; 306 - 307 - vunmap(vaddr); 308 - mtk_gem->kvaddr = NULL; 309 - kfree(mtk_gem->pages); 209 + return &dma_obj->base; 310 210 }
+1 -32
drivers/gpu/drm/mediatek/mtk_gem.h
··· 7 7 #define _MTK_GEM_H_ 8 8 9 9 #include <drm/drm_gem.h> 10 + #include <drm/drm_gem_dma_helper.h> 10 11 11 - /* 12 - * mtk drm buffer structure. 13 - * 14 - * @base: a gem object. 15 - * - a new handle to this gem object would be created 16 - * by drm_gem_handle_create(). 17 - * @cookie: the return value of dma_alloc_attrs(), keep it for dma_free_attrs() 18 - * @kvaddr: kernel virtual address of gem buffer. 19 - * @dma_addr: dma address of gem buffer. 20 - * @dma_attrs: dma attributes of gem buffer. 21 - * 22 - * P.S. this object would be transferred to user as kms_bo.handle so 23 - * user can access the buffer through kms_bo.handle. 24 - */ 25 - struct mtk_gem_obj { 26 - struct drm_gem_object base; 27 - void *cookie; 28 - void *kvaddr; 29 - dma_addr_t dma_addr; 30 - unsigned long dma_attrs; 31 - struct sg_table *sg; 32 - struct page **pages; 33 - }; 34 - 35 - #define to_mtk_gem_obj(x) container_of(x, struct mtk_gem_obj, base) 36 - 37 - void mtk_gem_free_object(struct drm_gem_object *gem); 38 - struct mtk_gem_obj *mtk_gem_create(struct drm_device *dev, size_t size, 39 - bool alloc_kmap); 40 12 int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev, 41 13 struct drm_mode_create_dumb *args); 42 - struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj); 43 14 struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev, 44 15 struct dma_buf_attachment *attach, struct sg_table *sg); 45 - int mtk_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); 46 - void mtk_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map); 47 16 48 17 #endif
+1 -1
drivers/gpu/drm/mediatek/mtk_hdmi_common.c
··· 303 303 return dev_err_probe(dev, ret, "Failed to get clocks\n"); 304 304 305 305 hdmi->irq = platform_get_irq(pdev, 0); 306 - if (!hdmi->irq) 306 + if (hdmi->irq < 0) 307 307 return hdmi->irq; 308 308 309 309 hdmi->regs = device_node_to_regmap(dev->of_node);
+1 -1
drivers/gpu/drm/mediatek/mtk_hdmi_common.h
··· 168 168 bool audio_enable; 169 169 bool powered; 170 170 bool enabled; 171 - unsigned int irq; 171 + int irq; 172 172 enum hdmi_hpd_state hpd; 173 173 hdmi_codec_plugged_cb plugged_cb; 174 174 struct device *codec_dev;
+33 -25
drivers/gpu/drm/mediatek/mtk_hdmi_ddc_v2.c
··· 66 66 return 0;
67 67 }
68 68
69 - static int mtk_ddc_wr_one(struct mtk_hdmi_ddc *ddc, u16 addr_id,
70 - u16 offset_id, u8 *wr_data)
69 + static int mtk_ddcm_write_hdmi(struct mtk_hdmi_ddc *ddc, u16 addr_id,
70 + u16 offset_id, u16 data_cnt, u8 *wr_data)
71 71 {
72 72 u32 val;
73 - int ret;
73 + int ret, i;
74 +
75 + /* Don't allow a transfer with a size larger than the transfer FIFO
76 + * size (16 bytes)
77 + */
78 + if (data_cnt > 16) {
79 + dev_err(ddc->dev, "Invalid DDCM write request\n");
80 + return -EINVAL;
81 + }
74 82
75 83 /* If down, rise bus for write operation */
76 84 mtk_ddc_check_and_rise_low_bus(ddc);
··· 86 78 regmap_update_bits(ddc->regs, HPD_DDC_CTRL, HPD_DDC_DELAY_CNT,
87 79 FIELD_PREP(HPD_DDC_DELAY_CNT, DDC2_DLY_CNT));
88 80
81 + /* In case there is no payload data, just do a single write for the
82 + * address only
83 + */
89 84 if (wr_data) {
90 - regmap_write(ddc->regs, SI2C_CTRL,
91 - FIELD_PREP(SI2C_ADDR, SI2C_ADDR_READ) |
92 - FIELD_PREP(SI2C_WDATA, *wr_data) |
93 - SI2C_WR);
85 + /* Fill transfer fifo with payload data */
86 + for (i = 0; i < data_cnt; i++) {
87 + regmap_write(ddc->regs, SI2C_CTRL,
88 + FIELD_PREP(SI2C_ADDR, SI2C_ADDR_READ) |
89 + FIELD_PREP(SI2C_WDATA, wr_data[i]) |
90 + SI2C_WR);
91 + }
94 92 }
95 -
96 93 regmap_write(ddc->regs, DDC_CTRL,
97 94 FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_SEQ_WRITE) |
98 95 FIELD_PREP(DDC_CTRL_DIN_CNT, wr_data == NULL ? 0 : data_cnt) |
99 96 FIELD_PREP(DDC_CTRL_OFFSET, offset_id) |
100 97 FIELD_PREP(DDC_CTRL_ADDR, addr_id));
101 98 usleep_range(1000, 1250);
··· 109 96 !(val & DDC_I2C_IN_PROG), 500, 1000);
110 97 if (ret) {
111 98 dev_err(ddc->dev, "DDC I2C write timeout\n");
99 +
100 + /* Abort transfer if it is still in progress */
101 + regmap_update_bits(ddc->regs, DDC_CTRL, DDC_CTRL_CMD,
102 + FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_ABORT_XFER));
103 +
112 104 return ret;
113 105 }
114 106
··· 197 179 500 * (temp_length + 5));
198 180 if (ret) {
199 181 dev_err(ddc->dev, "Timeout waiting for DDC I2C\n");
182 +
183 + /* Abort transfer if it is still in progress */
184 + regmap_update_bits(ddc->regs, DDC_CTRL, DDC_CTRL_CMD,
185 + FIELD_PREP(DDC_CTRL_CMD, DDC_CMD_ABORT_XFER));
186 +
200 187 return ret;
201 188 }
202 189
··· 273 250 static int mtk_hdmi_ddc_fg_data_write(struct mtk_hdmi_ddc *ddc, u16 b_dev,
274 251 u8 data_addr, u16 data_cnt, u8 *pr_data)
275 252 {
276 - int i, ret;
277 -
278 253 regmap_set_bits(ddc->regs, HDCP2X_POL_CTRL, HDCP2X_DIS_POLL_EN);
279 -
280 - /*
281 - * In case there is no payload data, just do a single write for the
282 - * address only
283 - */
284 - if (data_cnt == 0)
285 - return mtk_ddc_wr_one(ddc, b_dev, data_addr, NULL);
286 -
287 - i = 0;
288 - do {
289 - ret = mtk_ddc_wr_one(ddc, b_dev, data_addr + i, pr_data + i);
290 - if (ret)
291 - return ret;
292 - } while (++i < data_cnt);
293 -
294 - return 0;
255 + return mtk_ddcm_write_hdmi(ddc, b_dev, data_addr, data_cnt, pr_data);
295 257 }
296 258 static int mtk_hdmi_ddc_v2_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs, int num)
+4 -3
drivers/gpu/drm/mediatek/mtk_hdmi_v2.c
··· 1120 1120 mtk_hdmi_v2_disable(hdmi); 1121 1121 } 1122 1122 1123 - static int mtk_hdmi_v2_hdmi_tmds_char_rate_valid(const struct drm_bridge *bridge, 1124 - const struct drm_display_mode *mode, 1125 - unsigned long long tmds_rate) 1123 + static enum drm_mode_status 1124 + mtk_hdmi_v2_hdmi_tmds_char_rate_valid(const struct drm_bridge *bridge, 1125 + const struct drm_display_mode *mode, 1126 + unsigned long long tmds_rate) 1126 1127 { 1127 1128 if (mode->clock < MTK_HDMI_V2_CLOCK_MIN) 1128 1129 return MODE_CLOCK_LOW;
+4 -4
drivers/gpu/drm/mediatek/mtk_plane.c
··· 11 11 #include <drm/drm_fourcc.h> 12 12 #include <drm/drm_framebuffer.h> 13 13 #include <drm/drm_gem_atomic_helper.h> 14 + #include <drm/drm_gem_dma_helper.h> 14 15 #include <drm/drm_print.h> 15 16 #include <linux/align.h> 16 17 17 18 #include "mtk_crtc.h" 18 19 #include "mtk_ddp_comp.h" 19 20 #include "mtk_drm_drv.h" 20 - #include "mtk_gem.h" 21 21 #include "mtk_plane.h" 22 22 23 23 static const u64 modifiers[] = { ··· 114 114 struct mtk_plane_state *mtk_plane_state) 115 115 { 116 116 struct drm_framebuffer *fb = new_state->fb; 117 + struct drm_gem_dma_object *dma_obj; 117 118 struct drm_gem_object *gem; 118 - struct mtk_gem_obj *mtk_gem; 119 119 unsigned int pitch, format; 120 120 u64 modifier; 121 121 dma_addr_t addr; ··· 124 124 int offset; 125 125 126 126 gem = fb->obj[0]; 127 - mtk_gem = to_mtk_gem_obj(gem); 128 - addr = mtk_gem->dma_addr; 127 + dma_obj = to_drm_gem_dma_obj(gem); 128 + addr = dma_obj->dma_addr; 129 129 pitch = fb->pitches[0]; 130 130 format = fb->format->format; 131 131 modifier = fb->modifier;
+74 -21
drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h
··· 1 1 /* SPDX-License-Identifier: MIT */
2 2 #ifndef __NVBIOS_CONN_H__
3 3 #define __NVBIOS_CONN_H__
4 +
5 + /*
6 + * An enumerator representing all of the possible VBIOS connector types defined
7 + * by Nvidia at
8 + * https://nvidia.github.io/open-gpu-doc/DCB/DCB-4.x-Specification.html.
9 + *
10 + * [1] Nvidia's documentation actually claims DCB_CONNECTOR_HDMI_0 is a "3-Pin
11 + * DIN Stereo Connector". This seems very likely to be a documentation typo
12 + * or some sort of funny historical baggage, because we've treated this
13 + * connector type as HDMI for years without issue.
14 + * TODO: Check with Nvidia what's actually happening here.
15 + */
4 16 enum dcb_connector_type {
5 - DCB_CONNECTOR_VGA = 0x00,
6 - DCB_CONNECTOR_TV_0 = 0x10,
7 - DCB_CONNECTOR_TV_1 = 0x11,
8 - DCB_CONNECTOR_TV_3 = 0x13,
9 - DCB_CONNECTOR_DVI_I = 0x30,
10 - DCB_CONNECTOR_DVI_D = 0x31,
11 - DCB_CONNECTOR_DMS59_0 = 0x38,
12 - DCB_CONNECTOR_DMS59_1 = 0x39,
13 - DCB_CONNECTOR_LVDS = 0x40,
14 - DCB_CONNECTOR_LVDS_SPWG = 0x41,
15 - DCB_CONNECTOR_DP = 0x46,
16 - DCB_CONNECTOR_eDP = 0x47,
17 - DCB_CONNECTOR_mDP = 0x48,
18 - DCB_CONNECTOR_HDMI_0 = 0x60,
19 - DCB_CONNECTOR_HDMI_1 = 0x61,
20 - DCB_CONNECTOR_HDMI_C = 0x63,
21 - DCB_CONNECTOR_DMS59_DP0 = 0x64,
22 - DCB_CONNECTOR_DMS59_DP1 = 0x65,
23 - DCB_CONNECTOR_WFD = 0x70,
24 - DCB_CONNECTOR_USB_C = 0x71,
25 - DCB_CONNECTOR_NONE = 0xff
17 + /* Analog outputs */
18 + DCB_CONNECTOR_VGA = 0x00, // VGA 15-pin connector
19 + DCB_CONNECTOR_DVI_A = 0x01, // DVI-A
20 + DCB_CONNECTOR_POD_VGA = 0x02, // Pod - VGA 15-pin connector
21 + DCB_CONNECTOR_TV_0 = 0x10, // TV - Composite Out
22 + DCB_CONNECTOR_TV_1 = 0x11, // TV - S-Video Out
23 + DCB_CONNECTOR_TV_2 = 0x12, // TV - S-Video Breakout - Composite
24 + DCB_CONNECTOR_TV_3 = 0x13, // HDTV Component - YPrPb
25 + DCB_CONNECTOR_TV_SCART = 0x14, // TV - SCART Connector
26 + DCB_CONNECTOR_TV_SCART_D = 0x16, // TV - Composite SCART over D-connector
27 + DCB_CONNECTOR_TV_DTERM = 0x17, // HDTV - D-connector (EIAJ4120)
28 + DCB_CONNECTOR_POD_TV_3 = 0x18, // Pod - HDTV - YPrPb
29 + DCB_CONNECTOR_POD_TV_1 = 0x19, // Pod - S-Video
30 + DCB_CONNECTOR_POD_TV_0 = 0x1a, // Pod - Composite
31 +
32 + /* DVI digital outputs */
33 + DCB_CONNECTOR_DVI_I_TV_1 = 0x20, // DVI-I-TV-S-Video
34 + DCB_CONNECTOR_DVI_I_TV_0 = 0x21, // DVI-I-TV-Composite
35 + DCB_CONNECTOR_DVI_I_TV_2 = 0x22, // DVI-I-TV-S-Video Breakout-Composite
36 + DCB_CONNECTOR_DVI_I = 0x30, // DVI-I
37 + DCB_CONNECTOR_DVI_D = 0x31, // DVI-D
38 + DCB_CONNECTOR_DVI_ADC = 0x32, // Apple Display Connector (ADC)
39 + DCB_CONNECTOR_DMS59_0 = 0x38, // LFH-DVI-I-1
40 + DCB_CONNECTOR_DMS59_1 = 0x39, // LFH-DVI-I-2
41 + DCB_CONNECTOR_BNC = 0x3c, // BNC Connector [for SDI?]
42 +
43 + /* LVDS / TMDS digital outputs */
44 + DCB_CONNECTOR_LVDS = 0x40, // LVDS-SPWG-Attached [is this name correct?]
45 + DCB_CONNECTOR_LVDS_SPWG = 0x41, // LVDS-OEM-Attached (non-removable)
46 + DCB_CONNECTOR_LVDS_REM = 0x42, // LVDS-SPWG-Detached [following naming above]
47 + DCB_CONNECTOR_LVDS_SPWG_REM = 0x43, // LVDS-OEM-Detached (removable)
48 + DCB_CONNECTOR_TMDS = 0x45, // TMDS-OEM-Attached (non-removable)
49 +
50 + /* DP digital outputs */
51 + DCB_CONNECTOR_DP = 0x46, // DisplayPort External Connector
52 + DCB_CONNECTOR_eDP = 0x47, // DisplayPort Internal Connector
53 + DCB_CONNECTOR_mDP = 0x48, // DisplayPort (Mini) External Connector
54 +
55 + /* Dock outputs (not used) */
56 + DCB_CONNECTOR_DOCK_VGA_0 = 0x50, // VGA 15-pin if not docked
57 + DCB_CONNECTOR_DOCK_VGA_1 = 0x51, // VGA 15-pin if docked
58 + DCB_CONNECTOR_DOCK_DVI_I_0 = 0x52, // DVI-I if not docked
59 + DCB_CONNECTOR_DOCK_DVI_I_1 = 0x53, // DVI-I if docked
60 + DCB_CONNECTOR_DOCK_DVI_D_0 = 0x54, // DVI-D if not docked
61 + DCB_CONNECTOR_DOCK_DVI_D_1 = 0x55, // DVI-D if docked
62 + DCB_CONNECTOR_DOCK_DP_0 = 0x56, // DisplayPort if not docked
63 + DCB_CONNECTOR_DOCK_DP_1 = 0x57, // DisplayPort if docked
64 + DCB_CONNECTOR_DOCK_mDP_0 = 0x58, // DisplayPort (Mini) if not docked
65 + DCB_CONNECTOR_DOCK_mDP_1 = 0x59, // DisplayPort (Mini) if docked
66 +
67 + /* HDMI? digital outputs */
68 + DCB_CONNECTOR_HDMI_0 = 0x60, // HDMI? See [1] in top-level enum comment above
69 + DCB_CONNECTOR_HDMI_1 = 0x61, // HDMI-A connector
70 + DCB_CONNECTOR_SPDIF = 0x62, // Audio S/PDIF connector
71 + DCB_CONNECTOR_HDMI_C = 0x63, // HDMI-C (Mini) connector
72 +
73 + /* Misc. digital outputs */
74 + DCB_CONNECTOR_DMS59_DP0 = 0x64, // LFH-DP-1
75 + DCB_CONNECTOR_DMS59_DP1 = 0x65, // LFH-DP-2
76 + DCB_CONNECTOR_WFD = 0x70, // Virtual connector for Wifi Display (WFD)
77 + DCB_CONNECTOR_USB_C = 0x71, // [DP over USB-C; not present in docs]
78 + DCB_CONNECTOR_NONE = 0xff // Skip Entry
26 79 };
27 80
28 81 struct nvbios_connT {
+2
drivers/gpu/drm/nouveau/nouveau_display.c
··· 352 352 353 353 static const struct drm_mode_config_funcs nouveau_mode_config_funcs = { 354 354 .fb_create = nouveau_user_framebuffer_create, 355 + .atomic_commit = drm_atomic_helper_commit, 356 + .atomic_check = drm_atomic_helper_check, 355 357 }; 356 358 357 359
+53 -20
drivers/gpu/drm/nouveau/nvkm/engine/disp/uconn.c
··· 191 191 spin_lock(&disp->client.lock);
192 192 if (!conn->object.func) {
193 193 switch (conn->info.type) {
194 - case DCB_CONNECTOR_VGA : args->v0.type = NVIF_CONN_V0_VGA; break;
195 - case DCB_CONNECTOR_TV_0 :
196 - case DCB_CONNECTOR_TV_1 :
197 - case DCB_CONNECTOR_TV_3 : args->v0.type = NVIF_CONN_V0_TV; break;
198 - case DCB_CONNECTOR_DMS59_0 :
199 - case DCB_CONNECTOR_DMS59_1 :
200 - case DCB_CONNECTOR_DVI_I : args->v0.type = NVIF_CONN_V0_DVI_I; break;
201 - case DCB_CONNECTOR_DVI_D : args->v0.type = NVIF_CONN_V0_DVI_D; break;
202 - case DCB_CONNECTOR_LVDS : args->v0.type = NVIF_CONN_V0_LVDS; break;
203 - case DCB_CONNECTOR_LVDS_SPWG: args->v0.type = NVIF_CONN_V0_LVDS_SPWG; break;
204 - case DCB_CONNECTOR_DMS59_DP0:
205 - case DCB_CONNECTOR_DMS59_DP1:
206 - case DCB_CONNECTOR_DP :
207 - case DCB_CONNECTOR_mDP :
208 - case DCB_CONNECTOR_USB_C : args->v0.type = NVIF_CONN_V0_DP; break;
209 - case DCB_CONNECTOR_eDP : args->v0.type = NVIF_CONN_V0_EDP; break;
210 - case DCB_CONNECTOR_HDMI_0 :
211 - case DCB_CONNECTOR_HDMI_1 :
212 - case DCB_CONNECTOR_HDMI_C : args->v0.type = NVIF_CONN_V0_HDMI; break;
194 + /* VGA */
195 + case DCB_CONNECTOR_DVI_A :
196 + case DCB_CONNECTOR_POD_VGA :
197 + case DCB_CONNECTOR_VGA : args->v0.type = NVIF_CONN_V0_VGA; break;
198 +
199 + /* TV */
200 + case DCB_CONNECTOR_TV_0 :
201 + case DCB_CONNECTOR_TV_1 :
202 + case DCB_CONNECTOR_TV_2 :
203 + case DCB_CONNECTOR_TV_SCART :
204 + case DCB_CONNECTOR_TV_SCART_D :
205 + case DCB_CONNECTOR_TV_DTERM :
206 + case DCB_CONNECTOR_POD_TV_3 :
207 + case DCB_CONNECTOR_POD_TV_1 :
208 + case DCB_CONNECTOR_POD_TV_0 :
209 + case DCB_CONNECTOR_TV_3 : args->v0.type = NVIF_CONN_V0_TV; break;
210 +
211 + /* DVI */
212 + case DCB_CONNECTOR_DVI_I_TV_1 :
213 + case DCB_CONNECTOR_DVI_I_TV_0 :
214 + case DCB_CONNECTOR_DVI_I_TV_2 :
215 + case DCB_CONNECTOR_DVI_ADC :
216 + case DCB_CONNECTOR_DMS59_0 :
217 + case DCB_CONNECTOR_DMS59_1 :
218 + case DCB_CONNECTOR_DVI_I : args->v0.type = NVIF_CONN_V0_DVI_I; break;
219 + case DCB_CONNECTOR_TMDS :
220 + case DCB_CONNECTOR_DVI_D : args->v0.type = NVIF_CONN_V0_DVI_D; break;
221 +
222 + /* LVDS */
223 + case DCB_CONNECTOR_LVDS : args->v0.type = NVIF_CONN_V0_LVDS; break;
224 + case DCB_CONNECTOR_LVDS_SPWG : args->v0.type = NVIF_CONN_V0_LVDS_SPWG; break;
225 +
226 + /* DP */
227 + case DCB_CONNECTOR_DMS59_DP0 :
228 + case DCB_CONNECTOR_DMS59_DP1 :
229 + case DCB_CONNECTOR_DP :
230 + case DCB_CONNECTOR_mDP :
231 + case DCB_CONNECTOR_USB_C : args->v0.type = NVIF_CONN_V0_DP; break;
232 + case DCB_CONNECTOR_eDP : args->v0.type = NVIF_CONN_V0_EDP; break;
233 +
234 + /* HDMI */
235 + case DCB_CONNECTOR_HDMI_0 :
236 + case DCB_CONNECTOR_HDMI_1 :
237 + case DCB_CONNECTOR_HDMI_C : args->v0.type = NVIF_CONN_V0_HDMI; break;
238 +
239 + /*
240 + * Dock & unused outputs.
241 + * BNC, SPDIF, WFD, and detached LVDS go here.
242 + */
213 243 default:
214 - WARN_ON(1);
244 + nvkm_warn(&disp->engine.subdev,
245 + "unimplemented connector type 0x%02x\n",
246 + conn->info.type);
247 + args->v0.type = NVIF_CONN_V0_VGA;
215 248 ret = -EINVAL;
216 249 break;
217 250 }
+8 -7
drivers/gpu/drm/vkms/vkms_colorop.c
··· 37 37 goto cleanup; 38 38 39 39 list->type = ops[i]->base.id; 40 - list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[i]->base.id); 41 40 42 41 i++; 43 42 ··· 87 88 88 89 drm_colorop_set_next_property(ops[i - 1], ops[i]); 89 90 91 + list->name = kasprintf(GFP_KERNEL, "Color Pipeline %d", ops[0]->base.id); 92 + 90 93 return 0; 91 94 92 95 cleanup: ··· 104 103 105 104 int vkms_initialize_colorops(struct drm_plane *plane) 106 105 { 107 - struct drm_prop_enum_list pipeline; 108 - int ret; 106 + struct drm_prop_enum_list pipeline = {}; 107 + int ret = 0; 109 108 110 109 /* Add color pipeline */ 111 110 ret = vkms_initialize_color_pipeline(plane, &pipeline); 112 111 if (ret) 113 - return ret; 112 + goto out; 114 113 115 114 /* Create COLOR_PIPELINE property and attach */ 116 115 ret = drm_plane_create_color_pipeline_property(plane, &pipeline, 1); 117 - if (ret) 118 - return ret; 119 116 120 - return 0; 117 + kfree(pipeline.name); 118 + out: 119 + return ret; 121 120 }
+3 -2
drivers/gpu/drm/xe/Kconfig
··· 39 39 select DRM_TTM 40 40 select DRM_TTM_HELPER 41 41 select DRM_EXEC 42 - select DRM_GPUSVM if !UML && DEVICE_PRIVATE 42 + select DRM_GPUSVM if !UML 43 43 select DRM_GPUVM 44 44 select DRM_SCHED 45 45 select MMU_NOTIFIER ··· 80 80 bool "Enable CPU to GPU address mirroring" 81 81 depends on DRM_XE 82 82 depends on !UML 83 - depends on DEVICE_PRIVATE 83 + depends on ZONE_DEVICE 84 84 default y 85 + select DEVICE_PRIVATE 85 86 select DRM_GPUSVM 86 87 help 87 88 Enable this option if you want support for CPU to GPU address
+7 -2
drivers/gpu/drm/xe/xe_bo.c
··· 1055 1055 unsigned long *scanned) 1056 1056 { 1057 1057 struct xe_device *xe = ttm_to_xe_device(bo->bdev); 1058 + struct ttm_tt *tt = bo->ttm; 1058 1059 long lret; 1059 1060 1060 1061 /* Fake move to system, without copying data. */ ··· 1080 1079 .writeback = false, 1081 1080 .allow_move = false}); 1082 1081 1083 - if (lret > 0) 1082 + if (lret > 0) { 1084 1083 xe_ttm_tt_account_subtract(xe, bo->ttm); 1084 + update_global_total_pages(bo->bdev, -(long)tt->num_pages); 1085 + } 1085 1086 1086 1087 return lret; 1087 1088 } ··· 1169 1166 if (needs_rpm) 1170 1167 xe_pm_runtime_put(xe); 1171 1168 1172 - if (lret > 0) 1169 + if (lret > 0) { 1173 1170 xe_ttm_tt_account_subtract(xe, tt); 1171 + update_global_total_pages(bo->bdev, -(long)tt->num_pages); 1172 + } 1174 1173 1175 1174 out_unref: 1176 1175 xe_bo_put(xe_bo);
+57 -15
drivers/gpu/drm/xe/xe_debugfs.c
··· 256 256 return simple_read_from_buffer(ubuf, size, pos, buf, len);
257 257 }
258 258
259 + static int __wedged_mode_set_reset_policy(struct xe_gt *gt, enum xe_wedged_mode mode)
260 + {
261 + bool enable_engine_reset;
262 + int ret;
263 +
264 + enable_engine_reset = (mode != XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET);
265 + ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads,
266 + enable_engine_reset);
267 + if (ret)
268 + xe_gt_err(gt, "Failed to update GuC ADS scheduler policy (%pe)\n", ERR_PTR(ret));
269 +
270 + return ret;
271 + }
272 +
273 + static int wedged_mode_set_reset_policy(struct xe_device *xe, enum xe_wedged_mode mode)
274 + {
275 + struct xe_gt *gt;
276 + int ret;
277 + u8 id;
278 +
279 + guard(xe_pm_runtime)(xe);
280 + for_each_gt(gt, xe, id) {
281 + ret = __wedged_mode_set_reset_policy(gt, mode);
282 + if (ret) {
283 + if (id > 0) {
284 + xe->wedged.inconsistent_reset = true;
285 + drm_err(&xe->drm, "Inconsistent reset policy state between GTs\n");
286 + }
287 + return ret;
288 + }
289 + }
290 +
291 + xe->wedged.inconsistent_reset = false;
292 +
293 + return 0;
294 + }
295 +
296 + static bool wedged_mode_needs_policy_update(struct xe_device *xe, enum xe_wedged_mode mode)
297 + {
298 + if (xe->wedged.inconsistent_reset)
299 + return true;
300 +
301 + if (xe->wedged.mode == mode)
302 + return false;
303 +
304 + if (xe->wedged.mode == XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET ||
305 + mode == XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET)
306 + return true;
307 +
308 + return false;
309 + }
310 +
259 311 static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
260 312 size_t size, loff_t *pos)
261 313 {
262 314 struct xe_device *xe = file_inode(f)->i_private;
263 - struct xe_gt *gt;
264 315 u32 wedged_mode;
265 316 ssize_t ret;
266 - u8 id;
267 317
268 318 ret = kstrtouint_from_user(ubuf, size, 0, &wedged_mode);
269 319 if (ret)
··· 322 272 if (wedged_mode > 2)
323 273 return -EINVAL;
324 274
325 - if (xe->wedged.mode == wedged_mode)
326 - return size;
275 + if (wedged_mode_needs_policy_update(xe, wedged_mode)) {
276 + ret = wedged_mode_set_reset_policy(xe, wedged_mode);
277 + if (ret)
278 + return ret;
279 + }
327 280
328 281 xe->wedged.mode = wedged_mode;
329 -
330 - xe_pm_runtime_get(xe);
331 - for_each_gt(gt, xe, id) {
332 - ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads);
333 - if (ret) {
334 - xe_gt_err(gt, "Failed to update GuC ADS scheduler policy. GuC may still cause engine reset even with wedged_mode=2\n");
335 - xe_pm_runtime_put(xe);
336 - return -EIO;
337 - }
338 - }
339 - xe_pm_runtime_put(xe);
340 282
341 283 return size;
342 284 }
+18
drivers/gpu/drm/xe/xe_device_types.h
··· 44 44 struct xe_pxp; 45 45 struct xe_vram_region; 46 46 47 + /** 48 + * enum xe_wedged_mode - possible wedged modes 49 + * @XE_WEDGED_MODE_NEVER: Device will never be declared wedged. 50 + * @XE_WEDGED_MODE_UPON_CRITICAL_ERROR: Device will be declared wedged only 51 + * when critical error occurs like GT reset failure or firmware failure. 52 + * This is the default mode. 53 + * @XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET: Device will be declared wedged on 54 + * any hang. In this mode, engine resets are disabled to avoid automatic 55 + * recovery attempts. This mode is primarily intended for debugging hangs. 56 + */ 57 + enum xe_wedged_mode { 58 + XE_WEDGED_MODE_NEVER = 0, 59 + XE_WEDGED_MODE_UPON_CRITICAL_ERROR = 1, 60 + XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET = 2, 61 + }; 62 + 47 63 #define XE_BO_INVALID_OFFSET LONG_MAX 48 64 49 65 #define GRAPHICS_VER(xe) ((xe)->info.graphics_verx100 / 100) ··· 603 587 int mode; 604 588 /** @wedged.method: Recovery method to be sent in the drm device wedged uevent */ 605 589 unsigned long method; 590 + /** @wedged.inconsistent_reset: Inconsistent reset policy state between GTs */ 591 + bool inconsistent_reset; 606 592 } wedged; 607 593 608 594 /** @bo_device: Struct to control async free of BOs */
+31 -1
drivers/gpu/drm/xe/xe_exec_queue.c
··· 328 328 * @xe: Xe device.
329 329 * @tile: tile which bind exec queue belongs to.
330 330 * @flags: exec queue creation flags
331 + * @user_vm: The user VM which this exec queue belongs to
331 332 * @extensions: exec queue creation extensions
332 333 *
333 334 * Normalize bind exec queue creation. Bind exec queue is tied to migration VM
··· 342 341 */
343 342 struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
344 343 struct xe_tile *tile,
344 + struct xe_vm *user_vm,
345 345 u32 flags, u64 extensions)
346 346 {
347 347 struct xe_gt *gt = tile->primary_gt;
··· 379 377 xe_exec_queue_put(q);
380 378 return ERR_PTR(err);
381 379 }
380 +
381 + if (user_vm)
382 + q->user_vm = xe_vm_get(user_vm);
382 383 }
383 384
384 385 return q;
··· 410 405 list_for_each_entry_safe(eq, next, &q->multi_gt_list,
411 406 multi_gt_link)
412 407 xe_exec_queue_put(eq);
408 + }
409 +
410 + if (q->user_vm) {
411 + xe_vm_put(q->user_vm);
412 + q->user_vm = NULL;
413 413 }
414 414
415 415 q->ops->destroy(q);
··· 752 742 XE_IOCTL_DBG(xe, eci[0].engine_instance != 0))
753 743 return -EINVAL;
754 744
745 + vm = xe_vm_lookup(xef, args->vm_id);
746 + if (XE_IOCTL_DBG(xe, !vm))
747 + return -ENOENT;
748 +
749 + err = down_read_interruptible(&vm->lock);
750 + if (err) {
751 + xe_vm_put(vm);
752 + return err;
753 + }
754 +
755 + if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
756 + up_read(&vm->lock);
757 + xe_vm_put(vm);
758 + return -ENOENT;
759 + }
760 +
755 761 for_each_tile(tile, xe, id) {
756 762 struct xe_exec_queue *new;
757 763
··· 775 749 if (id)
776 750 flags |= EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD;
777 751
778 - new = xe_exec_queue_create_bind(xe, tile, flags,
752 + new = xe_exec_queue_create_bind(xe, tile, vm, flags,
779 753 args->extensions);
780 754 if (IS_ERR(new)) {
755 + up_read(&vm->lock);
756 + xe_vm_put(vm);
781 757 err = PTR_ERR(new);
782 758 if (q)
783 759 goto put_exec_queue;
··· 791 763 list_add_tail(&new->multi_gt_list,
792 764 &q->multi_gt_link);
793 765 }
766 + up_read(&vm->lock);
767 + xe_vm_put(vm);
794 768 } else {
795 769 logical_mask = calc_validate_logical_mask(xe, eci,
796 770 args->width,
+1
drivers/gpu/drm/xe/xe_exec_queue.h
··· 28 28 u32 flags, u64 extensions); 29 29 struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe, 30 30 struct xe_tile *tile, 31 + struct xe_vm *user_vm, 31 32 u32 flags, u64 extensions); 32 33 33 34 void xe_exec_queue_fini(struct xe_exec_queue *q);
+6
drivers/gpu/drm/xe/xe_exec_queue_types.h
··· 54 54 struct kref refcount; 55 55 /** @vm: VM (address space) for this exec queue */ 56 56 struct xe_vm *vm; 57 + /** 58 + * @user_vm: User VM (address space) for this exec queue (bind queues 59 + * only) 60 + */ 61 + struct xe_vm *user_vm; 62 + 57 63 /** @class: class of this exec queue */ 58 64 enum xe_engine_class class; 59 65 /**
+1 -1
drivers/gpu/drm/xe/xe_ggtt.c
··· 322 322 else 323 323 ggtt->pt_ops = &xelp_pt_ops; 324 324 325 - ggtt->wq = alloc_workqueue("xe-ggtt-wq", 0, WQ_MEM_RECLAIM); 325 + ggtt->wq = alloc_workqueue("xe-ggtt-wq", WQ_MEM_RECLAIM, 0); 326 326 if (!ggtt->wq) 327 327 return -ENOMEM; 328 328
+2 -2
drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
··· 41 41 }; 42 42 43 43 /** 44 - * xe_gt_sriov_vf_migration - VF migration data. 44 + * struct xe_gt_sriov_vf_migration - VF migration data. 45 45 */ 46 46 struct xe_gt_sriov_vf_migration { 47 - /** @migration: VF migration recovery worker */ 47 + /** @worker: VF migration recovery worker */ 48 48 struct work_struct worker; 49 49 /** @lock: Protects recovery_queued, teardown */ 50 50 spinlock_t lock;
+8 -6
drivers/gpu/drm/xe/xe_guc_ads.c
··· 983 983 /**
984 984 * xe_guc_ads_scheduler_policy_toggle_reset - Toggle reset policy
985 985 * @ads: Additional data structures object
986 + * @enable_engine_reset: true to enable engine resets, false otherwise
986 987 *
987 - * This function update the GuC's engine reset policy based on wedged.mode.
988 + * This function updates the GuC's engine reset policy.
988 989 *
989 990 * Return: 0 on success, and negative error code otherwise.
990 991 */
991 - int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads)
992 + int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads,
993 + bool enable_engine_reset)
992 994 {
993 995 struct guc_policies *policies;
994 996 struct xe_guc *guc = ads_to_guc(ads);
995 - struct xe_device *xe = ads_to_xe(ads);
996 997 CLASS(xe_guc_buf, buf)(&guc->buf, sizeof(*policies));
997 998
998 999 if (!xe_guc_buf_is_valid(buf))
··· 1005 1004 policies->dpc_promote_time = ads_blob_read(ads, policies.dpc_promote_time);
1006 1005 policies->max_num_work_items = ads_blob_read(ads, policies.max_num_work_items);
1007 1006 policies->is_valid = 1;
1008 - if (xe->wedged.mode == 2)
1009 - policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET;
1010 - else
1007 +
1008 + if (enable_engine_reset)
1011 1009 policies->global_flags &= ~GLOBAL_POLICY_DISABLE_ENGINE_RESET;
1010 + else
1011 + policies->global_flags |= GLOBAL_POLICY_DISABLE_ENGINE_RESET;
1012 1012
1013 1013 return guc_ads_action_update_policies(ads, xe_guc_buf_flush(buf));
1014 1014 }
+4 -1
drivers/gpu/drm/xe/xe_guc_ads.h
··· 6 6 #ifndef _XE_GUC_ADS_H_ 7 7 #define _XE_GUC_ADS_H_ 8 8 9 + #include <linux/types.h> 10 + 9 11 struct xe_guc_ads; 10 12 11 13 int xe_guc_ads_init(struct xe_guc_ads *ads); ··· 15 13 void xe_guc_ads_populate(struct xe_guc_ads *ads); 16 14 void xe_guc_ads_populate_minimal(struct xe_guc_ads *ads); 17 15 void xe_guc_ads_populate_post_load(struct xe_guc_ads *ads); 18 - int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads); 16 + int xe_guc_ads_scheduler_policy_toggle_reset(struct xe_guc_ads *ads, 17 + bool enable_engine_reset); 19 18 20 19 #endif
+3 -1
drivers/gpu/drm/xe/xe_late_bind_fw_types.h
··· 15 15 #define XE_LB_MAX_PAYLOAD_SIZE SZ_4K 16 16 17 17 /** 18 - * xe_late_bind_fw_id - enum to determine late binding fw index 18 + * enum xe_late_bind_fw_id - enum to determine late binding fw index 19 19 */ 20 20 enum xe_late_bind_fw_id { 21 + /** @XE_LB_FW_FAN_CONTROL: Fan control */ 21 22 XE_LB_FW_FAN_CONTROL = 0, 23 + /** @XE_LB_FW_MAX_ID: Number of IDs */ 22 24 XE_LB_FW_MAX_ID 23 25 }; 24 26
+3
drivers/gpu/drm/xe/xe_lrc.c
··· 1050 1050 { 1051 1051 u32 *cmd = batch; 1052 1052 1053 + if (IS_SRIOV_VF(gt_to_xe(lrc->gt))) 1054 + return 0; 1055 + 1053 1056 if (xe_gt_WARN_ON(lrc->gt, max_len < 12)) 1054 1057 return -ENOSPC; 1055 1058
+2 -2
drivers/gpu/drm/xe/xe_migrate.c
··· 2445 2445 if (is_migrate) 2446 2446 mutex_lock(&m->job_mutex); 2447 2447 else 2448 - xe_vm_assert_held(q->vm); /* User queues VM's should be locked */ 2448 + xe_vm_assert_held(q->user_vm); /* User queues VM's should be locked */ 2449 2449 } 2450 2450 2451 2451 /** ··· 2463 2463 if (is_migrate) 2464 2464 mutex_unlock(&m->job_mutex); 2465 2465 else 2466 - xe_vm_assert_held(q->vm); /* User queues VM's should be locked */ 2466 + xe_vm_assert_held(q->user_vm); /* User queues VM's should be locked */ 2467 2467 } 2468 2468 2469 2469 #if IS_ENABLED(CONFIG_PROVE_LOCKING)
+1 -1
drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
··· 346 346 flags = EXEC_QUEUE_FLAG_KERNEL | 347 347 EXEC_QUEUE_FLAG_PERMANENT | 348 348 EXEC_QUEUE_FLAG_MIGRATE; 349 - q = xe_exec_queue_create_bind(xe, tile, flags, 0); 349 + q = xe_exec_queue_create_bind(xe, tile, NULL, flags, 0); 350 350 if (IS_ERR(q)) { 351 351 err = PTR_ERR(q); 352 352 goto err_ret;
+6 -1
drivers/gpu/drm/xe/xe_vm.c
··· 1617 1617 if (!vm->pt_root[id]) 1618 1618 continue; 1619 1619 1620 - q = xe_exec_queue_create_bind(xe, tile, create_flags, 0); 1620 + q = xe_exec_queue_create_bind(xe, tile, vm, create_flags, 0); 1621 1621 if (IS_ERR(q)) { 1622 1622 err = PTR_ERR(q); 1623 1623 goto err_close; ··· 3576 3576 err = -EINVAL; 3577 3577 goto put_exec_queue; 3578 3578 } 3579 + } 3580 + 3581 + if (XE_IOCTL_DBG(xe, q && vm != q->user_vm)) { 3582 + err = -EINVAL; 3583 + goto put_exec_queue; 3579 3584 } 3580 3585 3581 3586 /* Ensure all UNMAPs visible */
+1 -1
drivers/gpu/drm/xe/xe_vm.h
··· 379 379 } 380 380 381 381 /** 382 - * xe_vm_set_validation_exec() - Accessor to read the drm_exec object 382 + * xe_vm_validation_exec() - Accessor to read the drm_exec object 383 383 * @vm: The vm we want to register a drm_exec object with. 384 384 * 385 385 * Return: The drm_exec object used to lock the vm's resv. The value
+19 -6
drivers/hwtracing/intel_th/core.c
··· 810 810 int err; 811 811 812 812 dev = bus_find_device_by_devt(&intel_th_bus, inode->i_rdev); 813 - if (!dev || !dev->driver) { 813 + if (!dev) 814 + return -ENODEV; 815 + 816 + if (!dev->driver) { 814 817 err = -ENODEV; 815 - goto out_no_device; 818 + goto err_put_dev; 816 819 } 817 820 818 821 thdrv = to_intel_th_driver(dev->driver); 819 822 fops = fops_get(thdrv->fops); 820 823 if (!fops) { 821 824 err = -ENODEV; 822 - goto out_put_device; 825 + goto err_put_dev; 823 826 } 824 827 825 828 replace_fops(file, fops); ··· 832 829 if (file->f_op->open) { 833 830 err = file->f_op->open(inode, file); 834 831 if (err) 835 - goto out_put_device; 832 + goto err_put_dev; 836 833 } 837 834 838 835 return 0; 839 836 840 - out_put_device: 837 + err_put_dev: 841 838 put_device(dev); 842 - out_no_device: 839 + 843 840 return err; 841 + } 842 + 843 + static int intel_th_output_release(struct inode *inode, struct file *file) 844 + { 845 + struct intel_th_device *thdev = file->private_data; 846 + 847 + put_device(&thdev->dev); 848 + 849 + return 0; 844 850 } 845 851 846 852 static const struct file_operations intel_th_output_fops = { 847 853 .open = intel_th_output_open, 854 + .release = intel_th_output_release, 848 855 .llseek = noop_llseek, 849 856 }; 850 857
+1 -1
drivers/i2c/busses/i2c-k1.c
··· 566 566 return dev_err_probe(dev, i2c->irq, "failed to get irq resource"); 567 567 568 568 ret = devm_request_irq(i2c->dev, i2c->irq, spacemit_i2c_irq_handler, 569 - IRQF_NO_SUSPEND | IRQF_ONESHOT, dev_name(i2c->dev), i2c); 569 + IRQF_NO_SUSPEND, dev_name(i2c->dev), i2c); 570 570 if (ret) 571 571 return dev_err_probe(dev, ret, "failed to request irq"); 572 572
+3 -3
drivers/iio/accel/adxl380.c
··· 1784 1784 st->int_map[1] = ADXL380_INT0_MAP1_REG; 1785 1785 } else { 1786 1786 st->irq = fwnode_irq_get_byname(dev_fwnode(st->dev), "INT1"); 1787 - if (st->irq > 0) 1788 - return dev_err_probe(st->dev, -ENODEV, 1789 - "no interrupt name specified"); 1787 + if (st->irq < 0) 1788 + return dev_err_probe(st->dev, st->irq, 1789 + "no interrupt name specified\n"); 1790 1790 st->int_map[0] = ADXL380_INT1_MAP0_REG; 1791 1791 st->int_map[1] = ADXL380_INT1_MAP1_REG; 1792 1792 }
+71 -1
drivers/iio/accel/st_accel_core.c
··· 517 517 .wai_addr = ST_SENSORS_DEFAULT_WAI_ADDRESS, 518 518 .sensors_supported = { 519 519 [0] = H3LIS331DL_ACCEL_DEV_NAME, 520 - [1] = IIS328DQ_ACCEL_DEV_NAME, 521 520 }, 522 521 .ch = (struct iio_chan_spec *)st_accel_12bit_channels, 523 522 .odr = { ··· 557 558 .num = ST_ACCEL_FS_AVL_400G, 558 559 .value = 0x03, 559 560 .gain = IIO_G_TO_M_S_2(195000), 561 + }, 562 + }, 563 + }, 564 + .bdu = { 565 + .addr = 0x23, 566 + .mask = 0x80, 567 + }, 568 + .drdy_irq = { 569 + .int1 = { 570 + .addr = 0x22, 571 + .mask = 0x02, 572 + }, 573 + .int2 = { 574 + .addr = 0x22, 575 + .mask = 0x10, 576 + }, 577 + .addr_ihl = 0x22, 578 + .mask_ihl = 0x80, 579 + }, 580 + .sim = { 581 + .addr = 0x23, 582 + .value = BIT(0), 583 + }, 584 + .multi_read_bit = true, 585 + .bootime = 2, 586 + }, 587 + { 588 + .wai = 0x32, 589 + .wai_addr = ST_SENSORS_DEFAULT_WAI_ADDRESS, 590 + .sensors_supported = { 591 + [0] = IIS328DQ_ACCEL_DEV_NAME, 592 + }, 593 + .ch = (struct iio_chan_spec *)st_accel_12bit_channels, 594 + .odr = { 595 + .addr = 0x20, 596 + .mask = 0x18, 597 + .odr_avl = { 598 + { .hz = 50, .value = 0x00, }, 599 + { .hz = 100, .value = 0x01, }, 600 + { .hz = 400, .value = 0x02, }, 601 + { .hz = 1000, .value = 0x03, }, 602 + }, 603 + }, 604 + .pw = { 605 + .addr = 0x20, 606 + .mask = 0x20, 607 + .value_on = ST_SENSORS_DEFAULT_POWER_ON_VALUE, 608 + .value_off = ST_SENSORS_DEFAULT_POWER_OFF_VALUE, 609 + }, 610 + .enable_axis = { 611 + .addr = ST_SENSORS_DEFAULT_AXIS_ADDR, 612 + .mask = ST_SENSORS_DEFAULT_AXIS_MASK, 613 + }, 614 + .fs = { 615 + .addr = 0x23, 616 + .mask = 0x30, 617 + .fs_avl = { 618 + [0] = { 619 + .num = ST_ACCEL_FS_AVL_100G, 620 + .value = 0x00, 621 + .gain = IIO_G_TO_M_S_2(980), 622 + }, 623 + [1] = { 624 + .num = ST_ACCEL_FS_AVL_200G, 625 + .value = 0x01, 626 + .gain = IIO_G_TO_M_S_2(1950), 627 + }, 628 + [2] = { 629 + .num = ST_ACCEL_FS_AVL_400G, 630 + .value = 0x03, 631 + .gain = IIO_G_TO_M_S_2(3910), 560 632 }, 561 633 }, 562 634 },
+3 -1
drivers/iio/adc/ad7280a.c
··· 1024 1024 1025 1025 st->spi->max_speed_hz = AD7280A_MAX_SPI_CLK_HZ; 1026 1026 st->spi->mode = SPI_MODE_1; 1027 - spi_setup(st->spi); 1027 + ret = spi_setup(st->spi); 1028 + if (ret < 0) 1029 + return ret; 1028 1030 1029 1031 st->ctrl_lb = FIELD_PREP(AD7280A_CTRL_LB_ACQ_TIME_MSK, st->acquisition_time) | 1030 1032 FIELD_PREP(AD7280A_CTRL_LB_THERMISTOR_MSK, st->thermistor_term_en);
+2 -1
drivers/iio/adc/ad7606_par.c
··· 43 43 struct iio_dev *indio_dev) 44 44 { 45 45 struct ad7606_state *st = iio_priv(indio_dev); 46 - unsigned int ret, c; 46 + unsigned int c; 47 + int ret; 47 48 struct iio_backend_data_fmt data = { 48 49 .sign_extend = true, 49 50 .enable = true,
+1 -1
drivers/iio/adc/ad9467.c
··· 95 95 96 96 #define CHIPID_AD9434 0x6A 97 97 #define AD9434_DEF_OUTPUT_MODE 0x00 98 - #define AD9434_REG_VREF_MASK 0xC0 98 + #define AD9434_REG_VREF_MASK GENMASK(4, 0) 99 99 100 100 /* 101 101 * Analog Devices AD9467 16-Bit, 200/250 MSPS ADC
+1
drivers/iio/adc/at91-sama5d2_adc.c
··· 2481 2481 struct at91_adc_state *st = iio_priv(indio_dev); 2482 2482 2483 2483 iio_device_unregister(indio_dev); 2484 + cancel_work_sync(&st->touch_st.workq); 2484 2485 2485 2486 at91_adc_dma_disable(st); 2486 2487
+2 -13
drivers/iio/adc/exynos_adc.c
··· 540 540 ADC_CHANNEL(9, "adc9"), 541 541 }; 542 542 543 - static int exynos_adc_remove_devices(struct device *dev, void *c) 544 - { 545 - struct platform_device *pdev = to_platform_device(dev); 546 - 547 - platform_device_unregister(pdev); 548 - 549 - return 0; 550 - } 551 - 552 543 static int exynos_adc_probe(struct platform_device *pdev) 553 544 { 554 545 struct exynos_adc *info = NULL; ··· 651 660 return 0; 652 661 653 662 err_of_populate: 654 - device_for_each_child(&indio_dev->dev, NULL, 655 - exynos_adc_remove_devices); 663 + of_platform_depopulate(&indio_dev->dev); 656 664 iio_device_unregister(indio_dev); 657 665 err_irq: 658 666 free_irq(info->irq, info); ··· 671 681 struct iio_dev *indio_dev = platform_get_drvdata(pdev); 672 682 struct exynos_adc *info = iio_priv(indio_dev); 673 683 674 - device_for_each_child(&indio_dev->dev, NULL, 675 - exynos_adc_remove_devices); 684 + of_platform_depopulate(&indio_dev->dev); 676 685 iio_device_unregister(indio_dev); 677 686 free_irq(info->irq, info); 678 687 if (info->data->exit_hw)
+3 -3
drivers/iio/adc/pac1934.c
··· 665 665 /* add the power_acc field */ 666 666 curr_energy += inc; 667 667 668 - clamp(curr_energy, PAC_193X_MIN_POWER_ACC, PAC_193X_MAX_POWER_ACC); 669 - 670 - reg_data->energy_sec_acc[cnt] = curr_energy; 668 + reg_data->energy_sec_acc[cnt] = clamp(curr_energy, 669 + PAC_193X_MIN_POWER_ACC, 670 + PAC_193X_MAX_POWER_ACC); 671 671 } 672 672 673 673 offset_reg_data_p += PAC1934_VPOWER_ACC_REG_LEN;
+3 -3
drivers/iio/chemical/scd4x.c
··· 584 584 .sign = 'u', 585 585 .realbits = 16, 586 586 .storagebits = 16, 587 - .endianness = IIO_BE, 587 + .endianness = IIO_CPU, 588 588 }, 589 589 }, 590 590 { ··· 599 599 .sign = 'u', 600 600 .realbits = 16, 601 601 .storagebits = 16, 602 - .endianness = IIO_BE, 602 + .endianness = IIO_CPU, 603 603 }, 604 604 }, 605 605 { ··· 612 612 .sign = 'u', 613 613 .realbits = 16, 614 614 .storagebits = 16, 615 - .endianness = IIO_BE, 615 + .endianness = IIO_CPU, 616 616 }, 617 617 }, 618 618 };
+4 -1
drivers/iio/dac/ad3552r-hs.c
··· 549 549 550 550 guard(mutex)(&st->lock); 551 551 552 + if (count >= sizeof(buf)) 553 + return -ENOSPC; 554 + 552 555 ret = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf, 553 556 count); 554 557 if (ret < 0) 555 558 return ret; 556 559 557 - buf[count] = '\0'; 560 + buf[ret] = '\0'; 558 561 559 562 ret = match_string(dbgfs_attr_source, ARRAY_SIZE(dbgfs_attr_source), 560 563 buf);
+6
drivers/iio/dac/ad5686.c
··· 434 434 .num_channels = 4, 435 435 .regmap_type = AD5686_REGMAP, 436 436 }, 437 + [ID_AD5695R] = { 438 + .channels = ad5685r_channels, 439 + .int_vref_mv = 2500, 440 + .num_channels = 4, 441 + .regmap_type = AD5686_REGMAP, 442 + }, 437 443 [ID_AD5696] = { 438 444 .channels = ad5686_channels, 439 445 .num_channels = 4,
+5 -4
drivers/iio/imu/inv_icm45600/inv_icm45600_core.c
··· 960 960 return IIO_VAL_INT; 961 961 /* 962 962 * T°C = (temp / 128) + 25 963 - * Tm°C = 1000 * ((temp * 100 / 12800) + 25) 964 - * scale: 100000 / 13248 = 7.8125 965 - * offset: 25000 963 + * Tm°C = ((temp + 25 * 128) / 128)) * 1000 964 + * Tm°C = (temp + 3200) * (1000 / 128) 965 + * scale: 1000 / 128 = 7.8125 966 + * offset: 3200 966 967 */ 967 968 case IIO_CHAN_INFO_SCALE: 968 969 *val = 7; 969 970 *val2 = 812500; 970 971 return IIO_VAL_INT_PLUS_MICRO; 971 972 case IIO_CHAN_INFO_OFFSET: 972 - *val = 25000; 973 + *val = 3200; 973 974 return IIO_VAL_INT; 974 975 default: 975 976 return -EINVAL;
+11 -4
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
··· 101 101 IIO_CHAN_SOFT_TIMESTAMP(3), 102 102 }; 103 103 104 + static const struct iio_chan_spec st_lsm6ds0_acc_channels[] = { 105 + ST_LSM6DSX_CHANNEL(IIO_ACCEL, 0x28, IIO_MOD_X, 0), 106 + ST_LSM6DSX_CHANNEL(IIO_ACCEL, 0x2a, IIO_MOD_Y, 1), 107 + ST_LSM6DSX_CHANNEL(IIO_ACCEL, 0x2c, IIO_MOD_Z, 2), 108 + IIO_CHAN_SOFT_TIMESTAMP(3), 109 + }; 110 + 104 111 static const struct iio_chan_spec st_lsm6dsx_gyro_channels[] = { 105 112 ST_LSM6DSX_CHANNEL(IIO_ANGL_VEL, 0x22, IIO_MOD_X, 0), 106 113 ST_LSM6DSX_CHANNEL(IIO_ANGL_VEL, 0x24, IIO_MOD_Y, 1), ··· 149 142 }, 150 143 .channels = { 151 144 [ST_LSM6DSX_ID_ACC] = { 152 - .chan = st_lsm6dsx_acc_channels, 153 - .len = ARRAY_SIZE(st_lsm6dsx_acc_channels), 145 + .chan = st_lsm6ds0_acc_channels, 146 + .len = ARRAY_SIZE(st_lsm6ds0_acc_channels), 154 147 }, 155 148 [ST_LSM6DSX_ID_GYRO] = { 156 149 .chan = st_lsm6ds0_gyro_channels, ··· 1456 1449 }, 1457 1450 .channels = { 1458 1451 [ST_LSM6DSX_ID_ACC] = { 1459 - .chan = st_lsm6dsx_acc_channels, 1460 - .len = ARRAY_SIZE(st_lsm6dsx_acc_channels), 1452 + .chan = st_lsm6ds0_acc_channels, 1453 + .len = ARRAY_SIZE(st_lsm6ds0_acc_channels), 1461 1454 }, 1462 1455 [ST_LSM6DSX_ID_GYRO] = { 1463 1456 .chan = st_lsm6dsx_gyro_channels,
+3 -1
drivers/iio/industrialio-core.c
··· 1657 1657 mutex_destroy(&iio_dev_opaque->info_exist_lock); 1658 1658 mutex_destroy(&iio_dev_opaque->mlock); 1659 1659 1660 + lockdep_unregister_key(&iio_dev_opaque->info_exist_key); 1660 1661 lockdep_unregister_key(&iio_dev_opaque->mlock_key); 1661 1662 1662 1663 ida_free(&iio_ida, iio_dev_opaque->id); ··· 1718 1717 INIT_LIST_HEAD(&iio_dev_opaque->ioctl_handlers); 1719 1718 1720 1719 lockdep_register_key(&iio_dev_opaque->mlock_key); 1720 + lockdep_register_key(&iio_dev_opaque->info_exist_key); 1721 1721 1722 1722 mutex_init_with_key(&iio_dev_opaque->mlock, &iio_dev_opaque->mlock_key); 1723 - mutex_init(&iio_dev_opaque->info_exist_lock); 1723 + mutex_init_with_key(&iio_dev_opaque->info_exist_lock, &iio_dev_opaque->info_exist_key); 1724 1724 1725 1725 indio_dev->dev.parent = parent; 1726 1726 indio_dev->dev.type = &iio_device_type;
+18
drivers/input/serio/i8042-acpipnpio.h
··· 116 116 .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_NEVER) 117 117 }, 118 118 { 119 + /* 120 + * ASUS Zenbook UX425QA_UM425QA 121 + * Some Zenbooks report "Zenbook" with a lowercase b. 122 + */ 123 + .matches = { 124 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 125 + DMI_MATCH(DMI_PRODUCT_NAME, "Zenbook UX425QA_UM425QA"), 126 + }, 127 + .driver_data = (void *)(SERIO_QUIRK_PROBE_DEFER | SERIO_QUIRK_RESET_NEVER) 128 + }, 129 + { 119 130 /* ASUS ZenBook UX425UA/QA */ 120 131 .matches = { 121 132 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ··· 1183 1172 { 1184 1173 .matches = { 1185 1174 DMI_MATCH(DMI_BOARD_NAME, "X5KK45xS_X5SP45xS"), 1175 + }, 1176 + .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1177 + SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP) 1178 + }, 1179 + { 1180 + .matches = { 1181 + DMI_MATCH(DMI_BOARD_NAME, "WUJIE Series-X5SP4NAG"), 1186 1182 }, 1187 1183 .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS | 1188 1184 SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+5
drivers/interconnect/debugfs-client.c
··· 150 150 return ret; 151 151 } 152 152 153 + src_node = devm_kstrdup(&pdev->dev, "", GFP_KERNEL); 154 + dst_node = devm_kstrdup(&pdev->dev, "", GFP_KERNEL); 155 + if (!src_node || !dst_node) 156 + return -ENOMEM; 157 + 153 158 client_dir = debugfs_create_dir("test_client", icc_dir); 154 159 155 160 debugfs_create_str("src_node", 0600, client_dir, &src_node);
+1 -2
drivers/iommu/amd/iommu.c
··· 2450 2450 goto out_err; 2451 2451 } 2452 2452 2453 - out_err: 2454 - 2455 2453 iommu_completion_wait(iommu); 2456 2454 2457 2455 if (FEATURE_NUM_INT_REMAP_SUP_2K(amd_iommu_efr2)) ··· 2460 2462 if (dev_is_pci(dev)) 2461 2463 pci_prepare_ats(to_pci_dev(dev), PAGE_SHIFT); 2462 2464 2465 + out_err: 2463 2466 return iommu_dev; 2464 2467 } 2465 2468
+1 -1
drivers/iommu/generic_pt/iommu_pt.h
··· 645 645 struct pt_iommu_map_args *map = arg; 646 646 647 647 pts.type = pt_load_single_entry(&pts); 648 - if (level == 0) { 648 + if (pts.level == 0) { 649 649 if (pts.type != PT_ENTRY_EMPTY) 650 650 return -EADDRINUSE; 651 651 pt_install_leaf_entry(&pts, map->oa, PAGE_SHIFT,
+1 -1
drivers/iommu/io-pgtable-arm.c
··· 637 637 pte = READ_ONCE(*ptep); 638 638 if (!pte) { 639 639 WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN)); 640 - return -ENOENT; 640 + return 0; 641 641 } 642 642 643 643 /* If the size matches this level, we're in the right place */
+4 -4
drivers/irqchip/irq-gic-v3-its.c
··· 709 709 struct its_cmd_block *cmd, 710 710 struct its_cmd_desc *desc) 711 711 { 712 - unsigned long itt_addr; 712 + phys_addr_t itt_addr; 713 713 u8 size = ilog2(desc->its_mapd_cmd.dev->nr_ites); 714 714 715 715 itt_addr = virt_to_phys(desc->its_mapd_cmd.dev->itt); ··· 879 879 struct its_cmd_desc *desc) 880 880 { 881 881 struct its_vpe *vpe = valid_vpe(its, desc->its_vmapp_cmd.vpe); 882 - unsigned long vpt_addr, vconf_addr; 882 + phys_addr_t vpt_addr, vconf_addr; 883 883 u64 target; 884 884 bool alloc; 885 885 ··· 2477 2477 baser->psz = psz; 2478 2478 tmp = indirect ? GITS_LVL1_ENTRY_SIZE : esz; 2479 2479 2480 - pr_info("ITS@%pa: allocated %d %s @%lx (%s, esz %d, psz %dK, shr %d)\n", 2480 + pr_info("ITS@%pa: allocated %d %s @%llx (%s, esz %d, psz %dK, shr %d)\n", 2481 2481 &its->phys_base, (int)(PAGE_ORDER_TO_SIZE(order) / (int)tmp), 2482 2482 its_base_type_string[type], 2483 - (unsigned long)virt_to_phys(base), 2483 + (u64)virt_to_phys(base), 2484 2484 indirect ? "indirect" : "flat", (int)esz, 2485 2485 psz / SZ_1K, (int)shr >> GITS_BASER_SHAREABILITY_SHIFT); 2486 2486
+8 -1
drivers/irqchip/irq-renesas-rzv2h.c
··· 328 328 u32 titsr, titsr_k, titsel_n, tien; 329 329 struct rzv2h_icu_priv *priv; 330 330 u32 tssr, tssr_k, tssel_n; 331 + u32 titsr_cur, tssr_cur; 331 332 unsigned int hwirq; 332 333 u32 tint, sense; 333 334 int tint_nr; ··· 377 376 guard(raw_spinlock)(&priv->lock); 378 377 379 378 tssr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TSSR(tssr_k)); 379 + titsr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TITSR(titsr_k)); 380 + 381 + tssr_cur = field_get(ICU_TSSR_TSSEL_MASK(tssel_n, priv->info->field_width), tssr); 382 + titsr_cur = field_get(ICU_TITSR_TITSEL_MASK(titsel_n), titsr); 383 + if (tssr_cur == tint && titsr_cur == sense) 384 + return 0; 385 + 380 386 tssr &= ~(ICU_TSSR_TSSEL_MASK(tssel_n, priv->info->field_width) | tien); 381 387 tssr |= ICU_TSSR_TSSEL_PREP(tint, tssel_n, priv->info->field_width); 382 388 383 389 writel_relaxed(tssr, priv->base + priv->info->t_offs + ICU_TSSR(tssr_k)); 384 390 385 - titsr = readl_relaxed(priv->base + priv->info->t_offs + ICU_TITSR(titsr_k)); 386 391 titsr &= ~ICU_TITSR_TITSEL_MASK(titsel_n); 387 392 titsr |= ICU_TITSR_TITSEL_PREP(sense, titsel_n); 388 393
+9
drivers/md/bcache/bcache.h
··· 273 273 274 274 struct bio_set bio_split; 275 275 276 + struct bio_set bio_detached; 277 + 276 278 unsigned int data_csum:1; 277 279 278 280 int (*cache_miss)(struct btree *b, struct search *s, ··· 753 751 */ 754 752 }; 755 753 struct bio bio; 754 + }; 755 + 756 + struct detached_dev_io_private { 757 + struct bcache_device *d; 758 + unsigned long start_time; 759 + struct bio *orig_bio; 760 + struct bio bio; 756 761 }; 757 762 758 763 #define BTREE_PRIO USHRT_MAX
+36 -45
drivers/md/bcache/request.c
··· 1077 1077 continue_at(cl, cached_dev_bio_complete, NULL); 1078 1078 } 1079 1079 1080 - struct detached_dev_io_private { 1081 - struct bcache_device *d; 1082 - unsigned long start_time; 1083 - bio_end_io_t *bi_end_io; 1084 - void *bi_private; 1085 - struct block_device *orig_bdev; 1086 - }; 1087 - 1088 1080 static void detached_dev_end_io(struct bio *bio) 1089 1081 { 1090 - struct detached_dev_io_private *ddip; 1091 - 1092 - ddip = bio->bi_private; 1093 - bio->bi_end_io = ddip->bi_end_io; 1094 - bio->bi_private = ddip->bi_private; 1082 + struct detached_dev_io_private *ddip = 1083 + container_of(bio, struct detached_dev_io_private, bio); 1084 + struct bio *orig_bio = ddip->orig_bio; 1095 1085 1096 1086 /* Count on the bcache device */ 1097 - bio_end_io_acct_remapped(bio, ddip->start_time, ddip->orig_bdev); 1087 + bio_end_io_acct(orig_bio, ddip->start_time); 1098 1088 1099 1089 if (bio->bi_status) { 1100 - struct cached_dev *dc = container_of(ddip->d, 1101 - struct cached_dev, disk); 1090 + struct cached_dev *dc = bio->bi_private; 1091 + 1102 1092 /* should count I/O error for backing device here */ 1103 1093 bch_count_backing_io_errors(dc, bio); 1094 + orig_bio->bi_status = bio->bi_status; 1104 1095 } 1105 1096 1106 - kfree(ddip); 1107 - bio_endio(bio); 1097 + bio_put(bio); 1098 + bio_endio(orig_bio); 1108 1099 } 1109 1100 1110 - static void detached_dev_do_request(struct bcache_device *d, struct bio *bio, 1111 - struct block_device *orig_bdev, unsigned long start_time) 1101 + static void detached_dev_do_request(struct bcache_device *d, 1102 + struct bio *orig_bio, unsigned long start_time) 1112 1103 { 1113 1104 struct detached_dev_io_private *ddip; 1114 1105 struct cached_dev *dc = container_of(d, struct cached_dev, disk); 1106 + struct bio *clone_bio; 1115 1107 1116 - /* 1117 - * no need to call closure_get(&dc->disk.cl), 1118 - * because upper layer had already opened bcache device, 1119 - * which would call closure_get(&dc->disk.cl) 1120 - */ 1121 - ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO); 1122 - if (!ddip) { 1123 - bio->bi_status = BLK_STS_RESOURCE; 1124 - bio_endio(bio); 1108 + if (bio_op(orig_bio) == REQ_OP_DISCARD && 1109 + !bdev_max_discard_sectors(dc->bdev)) { 1110 + bio_endio(orig_bio); 1125 1111 return; 1126 1112 } 1127 1113 1128 - ddip->d = d; 1129 - /* Count on the bcache device */ 1130 - ddip->orig_bdev = orig_bdev; 1131 - ddip->start_time = start_time; 1132 - ddip->bi_end_io = bio->bi_end_io; 1133 - ddip->bi_private = bio->bi_private; 1134 - bio->bi_end_io = detached_dev_end_io; 1135 - bio->bi_private = ddip; 1114 + clone_bio = bio_alloc_clone(dc->bdev, orig_bio, GFP_NOIO, 1115 + &d->bio_detached); 1116 + if (!clone_bio) { 1117 + orig_bio->bi_status = BLK_STS_RESOURCE; 1118 + bio_endio(orig_bio); 1119 + return; 1120 + } 1136 1121 1137 - if ((bio_op(bio) == REQ_OP_DISCARD) && 1138 - !bdev_max_discard_sectors(dc->bdev)) 1139 - detached_dev_end_io(bio); 1140 - else 1141 - submit_bio_noacct(bio); 1122 + ddip = container_of(clone_bio, struct detached_dev_io_private, bio); 1123 + /* Count on the bcache device */ 1124 + ddip->d = d; 1125 + ddip->start_time = start_time; 1126 + ddip->orig_bio = orig_bio; 1127 + 1128 + clone_bio->bi_end_io = detached_dev_end_io; 1129 + clone_bio->bi_private = dc; 1130 + 1131 + submit_bio_noacct(clone_bio); 1142 1132 } 1143 1133 1144 1134 static void quit_max_writeback_rate(struct cache_set *c, ··· 1204 1214 1205 1215 start_time = bio_start_io_acct(bio); 1206 1216 1207 - bio_set_dev(bio, dc->bdev); 1208 1217 bio->bi_iter.bi_sector += dc->sb.data_offset; 1209 1218 1210 1219 if (cached_dev_get(dc)) { 1220 + bio_set_dev(bio, dc->bdev); 1211 1221 s = search_alloc(bio, d, orig_bdev, start_time); 1212 1222 trace_bcache_request_start(s->d, bio); 1213 1223 ··· 1227 1237 else 1228 1238 cached_dev_read(dc, s); 1229 1239 } 1230 - } else 1240 + } else { 1231 1241 /* I/O request sent to backing device */ 1232 - detached_dev_do_request(d, bio, orig_bdev, start_time); 1242 + detached_dev_do_request(d, bio, start_time); 1243 + } 1233 1244 } 1234 1245 1235 1246 static int cached_dev_ioctl(struct bcache_device *d, blk_mode_t mode,
+10 -2
drivers/md/bcache/super.c
··· 887 887 } 888 888 889 889 bioset_exit(&d->bio_split); 890 + bioset_exit(&d->bio_detached); 890 891 kvfree(d->full_dirty_stripes); 891 892 kvfree(d->stripe_sectors_dirty); 892 893 ··· 950 949 BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER)) 951 950 goto out_ida_remove; 952 951 952 + if (bioset_init(&d->bio_detached, 4, 953 + offsetof(struct detached_dev_io_private, bio), 954 + BIOSET_NEED_BVECS|BIOSET_NEED_RESCUER)) 955 + goto out_bioset_split_exit; 956 + 953 957 if (lim.logical_block_size > PAGE_SIZE && cached_bdev) { 954 958 /* 955 959 * This should only happen with BCACHE_SB_VERSION_BDEV. ··· 970 964 971 965 d->disk = blk_alloc_disk(&lim, NUMA_NO_NODE); 972 966 if (IS_ERR(d->disk)) 973 - goto out_bioset_exit; 967 + goto out_bioset_detach_exit; 974 968 975 969 set_capacity(d->disk, sectors); 976 970 snprintf(d->disk->disk_name, DISK_NAME_LEN, "bcache%i", idx); ··· 982 976 d->disk->private_data = d; 983 977 return 0; 984 978 985 - out_bioset_exit: 979 + out_bioset_detach_exit: 980 + bioset_exit(&d->bio_detached); 981 + out_bioset_split_exit: 986 982 bioset_exit(&d->bio_split); 987 983 out_ida_remove: 988 984 ida_free(&bcache_device_idx, idx);
+9 -9
drivers/misc/mei/mei-trace.h
··· 21 21 TP_ARGS(dev, reg, offs, val), 22 22 TP_STRUCT__entry( 23 23 __string(dev, dev_name(dev)) 24 - __field(const char *, reg) 24 + __string(reg, reg) 25 25 __field(u32, offs) 26 26 __field(u32, val) 27 27 ), 28 28 TP_fast_assign( 29 29 __assign_str(dev); 30 - __entry->reg = reg; 30 + __assign_str(reg); 31 31 __entry->offs = offs; 32 32 __entry->val = val; 33 33 ), 34 34 TP_printk("[%s] read %s:[%#x] = %#x", 35 - __get_str(dev), __entry->reg, __entry->offs, __entry->val) 35 + __get_str(dev), __get_str(reg), __entry->offs, __entry->val) 36 36 ); 37 37 38 38 TRACE_EVENT(mei_reg_write, ··· 40 40 TP_ARGS(dev, reg, offs, val), 41 41 TP_STRUCT__entry( 42 42 __string(dev, dev_name(dev)) 43 - __field(const char *, reg) 43 + __string(reg, reg) 44 44 __field(u32, offs) 45 45 __field(u32, val) 46 46 ), 47 47 TP_fast_assign( 48 48 __assign_str(dev); 49 - __entry->reg = reg; 49 + __assign_str(reg); 50 50 __entry->offs = offs; 51 51 __entry->val = val; 52 52 ), 53 53 TP_printk("[%s] write %s[%#x] = %#x", 54 - __get_str(dev), __entry->reg, __entry->offs, __entry->val) 54 + __get_str(dev), __get_str(reg), __entry->offs, __entry->val) 55 55 ); 56 56 57 57 TRACE_EVENT(mei_pci_cfg_read, ··· 59 59 TP_ARGS(dev, reg, offs, val), 60 60 TP_STRUCT__entry( 61 61 __string(dev, dev_name(dev)) 62 - __field(const char *, reg) 62 + __string(reg, reg) 63 63 __field(u32, offs) 64 64 __field(u32, val) 65 65 ), 66 66 TP_fast_assign( 67 67 __assign_str(dev); 68 - __entry->reg = reg; 68 + __assign_str(reg); 69 69 __entry->offs = offs; 70 70 __entry->val = val; 71 71 ), 72 72 TP_printk("[%s] pci cfg read %s:[%#x] = %#x", 73 - __get_str(dev), __entry->reg, __entry->offs, __entry->val) 73 + __get_str(dev), __get_str(reg), __entry->offs, __entry->val) 74 74 ); 75 75 76 76 #endif /* _MEI_TRACE_H_ */
+40 -8
drivers/misc/uacce/uacce.c
··· 40 40 return 0; 41 41 } 42 42 43 - static int uacce_put_queue(struct uacce_queue *q) 43 + static int uacce_stop_queue(struct uacce_queue *q) 44 44 { 45 45 struct uacce_device *uacce = q->uacce; 46 46 47 - if ((q->state == UACCE_Q_STARTED) && uacce->ops->stop_queue) 47 + if (q->state != UACCE_Q_STARTED) 48 + return 0; 49 + 50 + if (uacce->ops->stop_queue) 48 51 uacce->ops->stop_queue(q); 49 52 50 - if ((q->state == UACCE_Q_INIT || q->state == UACCE_Q_STARTED) && 51 - uacce->ops->put_queue) 53 + q->state = UACCE_Q_INIT; 54 + 55 + return 0; 56 + } 57 + 58 + static void uacce_put_queue(struct uacce_queue *q) 59 + { 60 + struct uacce_device *uacce = q->uacce; 61 + 62 + uacce_stop_queue(q); 63 + 64 + if (q->state != UACCE_Q_INIT) 65 + return; 66 + 67 + if (uacce->ops->put_queue) 52 68 uacce->ops->put_queue(q); 53 69 54 70 q->state = UACCE_Q_ZOMBIE; 55 - 56 - return 0; 57 71 } 58 72 59 73 static long uacce_fops_unl_ioctl(struct file *filep, ··· 94 80 ret = uacce_start_queue(q); 95 81 break; 96 82 case UACCE_CMD_PUT_Q: 97 - ret = uacce_put_queue(q); 83 + ret = uacce_stop_queue(q); 98 84 break; 99 85 default: 100 86 if (uacce->ops->ioctl) ··· 228 214 } 229 215 } 230 216 217 + static int uacce_vma_mremap(struct vm_area_struct *area) 218 + { 219 + return -EPERM; 220 + } 221 + 231 222 static const struct vm_operations_struct uacce_vm_ops = { 232 223 .close = uacce_vma_close, 224 + .mremap = uacce_vma_mremap, 233 225 }; 234 226 235 227 static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma) ··· 402 382 struct uacce_device *uacce = to_uacce_device(dev); 403 383 u32 val; 404 384 385 + if (!uacce->ops->isolate_err_threshold_read) 386 + return -ENOENT; 387 + 405 388 val = uacce->ops->isolate_err_threshold_read(uacce); 406 389 407 390 return sysfs_emit(buf, "%u\n", val); ··· 416 393 struct uacce_device *uacce = to_uacce_device(dev); 417 394 unsigned long val; 418 395 int ret; 396 + 397 + if (!uacce->ops->isolate_err_threshold_write) 398 + return -ENOENT; 419 399 420 400 if (kstrtoul(buf, 0, &val) < 0) 421 401 return -EINVAL; ··· 545 519 */ 546 520 int uacce_register(struct uacce_device *uacce) 547 521 { 522 + int ret; 523 + 548 524 if (!uacce) 549 525 return -ENODEV; 550 526 ··· 557 529 uacce->cdev->ops = &uacce_fops; 558 530 uacce->cdev->owner = THIS_MODULE; 559 531 560 - return cdev_device_add(uacce->cdev, &uacce->dev); 532 + ret = cdev_device_add(uacce->cdev, &uacce->dev); 533 + if (ret) 534 + uacce->cdev = NULL; 535 + 536 + return ret; 561 537 } 562 538 EXPORT_SYMBOL_GPL(uacce_register); 563 539
+41
drivers/mmc/host/rtsx_pci_sdmmc.c
··· 1306 1306 return err; 1307 1307 } 1308 1308 1309 + static int sdmmc_card_busy(struct mmc_host *mmc) 1310 + { 1311 + struct realtek_pci_sdmmc *host = mmc_priv(mmc); 1312 + struct rtsx_pcr *pcr = host->pcr; 1313 + int err; 1314 + u8 stat; 1315 + u8 mask = SD_DAT3_STATUS | SD_DAT2_STATUS | SD_DAT1_STATUS 1316 + | SD_DAT0_STATUS; 1317 + 1318 + mutex_lock(&pcr->pcr_mutex); 1319 + 1320 + rtsx_pci_start_run(pcr); 1321 + 1322 + err = rtsx_pci_write_register(pcr, SD_BUS_STAT, 1323 + SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 1324 + SD_CLK_TOGGLE_EN); 1325 + if (err) 1326 + goto out; 1327 + 1328 + mdelay(1); 1329 + 1330 + err = rtsx_pci_read_register(pcr, SD_BUS_STAT, &stat); 1331 + if (err) 1332 + goto out; 1333 + 1334 + err = rtsx_pci_write_register(pcr, SD_BUS_STAT, 1335 + SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 0); 1336 + out: 1337 + mutex_unlock(&pcr->pcr_mutex); 1338 + 1339 + if (err) 1340 + return err; 1341 + 1342 + /* check if any pin between dat[0:3] is low */ 1343 + if ((stat & mask) != mask) 1344 + return 1; 1345 + else 1346 + return 0; 1347 + } 1348 + 1309 1349 static int sdmmc_execute_tuning(struct mmc_host *mmc, u32 opcode) 1310 1350 { 1311 1351 struct realtek_pci_sdmmc *host = mmc_priv(mmc); ··· 1458 1418 .get_ro = sdmmc_get_ro, 1459 1419 .get_cd = sdmmc_get_cd, 1460 1420 .start_signal_voltage_switch = sdmmc_switch_voltage, 1421 + .card_busy = sdmmc_card_busy, 1461 1422 .execute_tuning = sdmmc_execute_tuning, 1462 1423 .init_sd_express = sdmmc_init_sd_express, 1463 1424 };
+14
drivers/mmc/host/sdhci-of-dwcmshc.c
··· 739 739 sdhci_writel(host, extra, reg); 740 740 741 741 if (clock <= 52000000) { 742 + if (host->mmc->ios.timing == MMC_TIMING_MMC_HS200 || 743 + host->mmc->ios.timing == MMC_TIMING_MMC_HS400) { 744 + dev_err(mmc_dev(host->mmc), 745 + "Can't reduce the clock below 52MHz in HS200/HS400 mode"); 746 + return; 747 + } 748 + 742 749 /* 743 750 * Disable DLL and reset both of sample and drive clock. 744 751 * The bypass bit and start bit need to be set if DLL is not locked. ··· 1595 1588 { 1596 1589 u32 emmc_caps = MMC_CAP2_NO_SD | MMC_CAP2_NO_SDIO; 1597 1590 unsigned int val, hsp_int_status, hsp_pwr_ctrl; 1591 + static const char * const clk_ids[] = {"axi"}; 1598 1592 struct of_phandle_args args; 1599 1593 struct eic7700_priv *priv; 1600 1594 struct regmap *hsp_regmap; ··· 1612 1604 dev_err(dev, "failed to reset\n"); 1613 1605 return ret; 1614 1606 } 1607 + 1608 + ret = dwcmshc_get_enable_other_clks(mmc_dev(host->mmc), dwc_priv, 1609 + ARRAY_SIZE(clk_ids), clk_ids); 1610 + if (ret) 1611 + return ret; 1615 1612 1616 1613 ret = of_parse_phandle_with_fixed_args(dev->of_node, "eswin,hsp-sp-csr", 2, 0, &args); 1617 1614 if (ret) { ··· 1739 1726 .set_uhs_signaling = sdhci_eic7700_set_uhs_wrapper, 1740 1727 .set_power = sdhci_set_power_and_bus_voltage, 1741 1728 .irq = dwcmshc_cqe_irq_handler, 1729 + .adma_write_desc = dwcmshc_adma_write_desc, 1742 1730 .platform_execute_tuning = sdhci_eic7700_executing_tuning, 1743 1731 }; 1744 1732
+4 -4
drivers/mux/mmio.c
··· 101 101 mux_mmio = mux_chip_priv(mux_chip); 102 102 103 103 mux_mmio->fields = devm_kmalloc(dev, num_fields * sizeof(*mux_mmio->fields), GFP_KERNEL); 104 - if (IS_ERR(mux_mmio->fields)) 105 - return PTR_ERR(mux_mmio->fields); 104 + if (!mux_mmio->fields) 105 + return -ENOMEM; 106 106 107 107 mux_mmio->hardware_states = devm_kmalloc(dev, num_fields * 108 108 sizeof(*mux_mmio->hardware_states), GFP_KERNEL); 109 - if (IS_ERR(mux_mmio->hardware_states)) 110 - return PTR_ERR(mux_mmio->hardware_states); 109 + if (!mux_mmio->hardware_states) 110 + return -ENOMEM; 111 111 112 112 for (i = 0; i < num_fields; i++) { 113 113 struct mux_control *mux = &mux_chip->mux[i];
+15 -13
drivers/net/bonding/bond_main.c
··· 2235 2235 unblock_netpoll_tx(); 2236 2236 } 2237 2237 2238 - /* broadcast mode uses the all_slaves to loop through slaves. */ 2239 - if (bond_mode_can_use_xmit_hash(bond) || 2240 - BOND_MODE(bond) == BOND_MODE_BROADCAST) 2241 - bond_update_slave_arr(bond, NULL); 2242 - 2243 2238 if (!slave_dev->netdev_ops->ndo_bpf || 2244 2239 !slave_dev->netdev_ops->ndo_xdp_xmit) { 2245 2240 if (bond->xdp_prog) { ··· 2267 2272 if (bond->xdp_prog) 2268 2273 bpf_prog_inc(bond->xdp_prog); 2269 2274 } 2275 + 2276 + /* broadcast mode uses the all_slaves to loop through slaves. */ 2277 + if (bond_mode_can_use_xmit_hash(bond) || 2278 + BOND_MODE(bond) == BOND_MODE_BROADCAST) 2279 + bond_update_slave_arr(bond, NULL); 2270 2280 2271 2281 bond_xdp_set_features(bond_dev); 2272 2282 ··· 3074 3074 __func__, &sip); 3075 3075 return; 3076 3076 } 3077 - slave->last_rx = jiffies; 3078 - slave->target_last_arp_rx[i] = jiffies; 3077 + WRITE_ONCE(slave->last_rx, jiffies); 3078 + WRITE_ONCE(slave->target_last_arp_rx[i], jiffies); 3079 3079 } 3080 3080 3081 3081 static int bond_arp_rcv(const struct sk_buff *skb, struct bonding *bond, ··· 3294 3294 __func__, saddr); 3295 3295 return; 3296 3296 } 3297 - slave->last_rx = jiffies; 3298 - slave->target_last_arp_rx[i] = jiffies; 3297 + WRITE_ONCE(slave->last_rx, jiffies); 3298 + WRITE_ONCE(slave->target_last_arp_rx[i], jiffies); 3299 3299 } 3300 3300 3301 3301 static int bond_na_rcv(const struct sk_buff *skb, struct bonding *bond, ··· 3365 3365 (slave_do_arp_validate_only(bond) && is_ipv6) || 3366 3366 #endif 3367 3367 !slave_do_arp_validate_only(bond)) 3368 - slave->last_rx = jiffies; 3368 + WRITE_ONCE(slave->last_rx, jiffies); 3369 3369 return RX_HANDLER_ANOTHER; 3370 3370 } else if (is_arp) { 3371 3371 return bond_arp_rcv(skb, bond, slave); ··· 3433 3433 3434 3434 if (slave->link != BOND_LINK_UP) { 3435 3435 if (bond_time_in_interval(bond, last_tx, 1) && 3436 - bond_time_in_interval(bond, slave->last_rx, 1)) { 3436 + bond_time_in_interval(bond, READ_ONCE(slave->last_rx), 1)) { 3437 3437 3438 3438 bond_propose_link_state(slave, BOND_LINK_UP); 3439 3439 slave_state_changed = 1; ··· 3457 3457 * when the source ip is 0, so don't take the link down 3458 3458 * if we don't know our ip yet 3459 3459 */ 3460 - if (!bond_time_in_interval(bond, last_tx, bond->params.missed_max) || 3461 - !bond_time_in_interval(bond, slave->last_rx, bond->params.missed_max)) { 3460 + if (!bond_time_in_interval(bond, last_tx, 3461 + bond->params.missed_max) || 3462 + !bond_time_in_interval(bond, READ_ONCE(slave->last_rx), 3463 + bond->params.missed_max)) { 3462 3464 3463 3465 bond_propose_link_state(slave, BOND_LINK_DOWN); 3464 3466 slave_state_changed = 1;
+4 -4
drivers/net/bonding/bond_options.c
··· 1152 1152 1153 1153 if (slot >= 0 && slot < BOND_MAX_ARP_TARGETS) { 1154 1154 bond_for_each_slave(bond, slave, iter) 1155 - slave->target_last_arp_rx[slot] = last_rx; 1155 + WRITE_ONCE(slave->target_last_arp_rx[slot], last_rx); 1156 1156 targets[slot] = target; 1157 1157 } 1158 1158 } ··· 1221 1221 bond_for_each_slave(bond, slave, iter) { 1222 1222 targets_rx = slave->target_last_arp_rx; 1223 1223 for (i = ind; (i < BOND_MAX_ARP_TARGETS-1) && targets[i+1]; i++) 1224 - targets_rx[i] = targets_rx[i+1]; 1225 - targets_rx[i] = 0; 1224 + WRITE_ONCE(targets_rx[i], READ_ONCE(targets_rx[i+1])); 1225 + WRITE_ONCE(targets_rx[i], 0); 1226 1226 } 1227 1227 for (i = ind; (i < BOND_MAX_ARP_TARGETS-1) && targets[i+1]; i++) 1228 1228 targets[i] = targets[i+1]; ··· 1377 1377 1378 1378 if (slot >= 0 && slot < BOND_MAX_NS_TARGETS) { 1379 1379 bond_for_each_slave(bond, slave, iter) { 1380 - slave->target_last_arp_rx[slot] = last_rx; 1380 + WRITE_ONCE(slave->target_last_arp_rx[slot], last_rx); 1381 1381 slave_set_ns_maddr(bond, slave, target, &targets[slot]); 1382 1382 } 1383 1383 targets[slot] = *target;
+1 -1
drivers/net/can/at91_can.c
··· 1099 1099 if (IS_ERR(transceiver)) { 1100 1100 err = PTR_ERR(transceiver); 1101 1101 dev_err_probe(&pdev->dev, err, "failed to get phy\n"); 1102 - goto exit_iounmap; 1102 + goto exit_free; 1103 1103 } 1104 1104 1105 1105 dev->netdev_ops = &at91_netdev_ops;
+2 -2
drivers/net/can/usb/gs_usb.c
··· 610 610 { 611 611 struct gs_usb *parent = urb->context; 612 612 struct gs_can *dev; 613 - struct net_device *netdev; 613 + struct net_device *netdev = NULL; 614 614 int rc; 615 615 struct net_device_stats *stats; 616 616 struct gs_host_frame *hf = urb->transfer_buffer; ··· 768 768 } 769 769 } else if (rc != -ESHUTDOWN && net_ratelimit()) { 770 770 netdev_info(netdev, "failed to re-submit IN URB: %pe\n", 771 - ERR_PTR(urb->status)); 771 + ERR_PTR(rc)); 772 772 } 773 773 } 774 774
+8 -7
drivers/net/dsa/yt921x.c
··· 682 682 const struct yt921x_mib_desc *desc = &yt921x_mib_descs[i]; 683 683 u32 reg = YT921X_MIBn_DATA0(port) + desc->offset; 684 684 u64 *valp = &((u64 *)mib)[i]; 685 - u64 val = *valp; 686 685 u32 val0; 687 - u32 val1; 686 + u64 val; 688 687 689 688 res = yt921x_reg_read(priv, reg, &val0); 690 689 if (res) 691 690 break; 692 691 693 692 if (desc->size <= 1) { 694 - if (val < (u32)val) 695 - /* overflow */ 696 - val += (u64)U32_MAX + 1; 697 - val &= ~U32_MAX; 698 - val |= val0; 693 + u64 old_val = *valp; 694 + 695 + val = (old_val & ~(u64)U32_MAX) | val0; 696 + if (val < old_val) 697 + val += 1ull << 32; 699 698 } else { 699 + u32 val1; 700 + 700 701 res = yt921x_reg_read(priv, reg + 4, &val1); 701 702 if (res) 702 703 break;
+4 -1
drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
··· 1228 1228 netdev_err(intf->ndev, "invalid PHY mode: %s for port %d\n", 1229 1229 phy_modes(intf->phy_interface), intf->port); 1230 1230 ret = -EINVAL; 1231 - goto err_free_netdev; 1231 + goto err_deregister_fixed_link; 1232 1232 } 1233 1233 1234 1234 ret = of_get_ethdev_address(ndev_dn, ndev); ··· 1252 1252 1253 1253 return intf; 1254 1254 1255 + err_deregister_fixed_link: 1256 + if (of_phy_is_fixed_link(ndev_dn)) 1257 + of_phy_deregister_fixed_link(ndev_dn); 1255 1258 err_free_netdev: 1256 1259 free_netdev(ndev); 1257 1260 err:
+5
drivers/net/ethernet/google/gve/gve.h
··· 1206 1206 } 1207 1207 } 1208 1208 1209 + static inline bool gve_is_clock_enabled(struct gve_priv *priv) 1210 + { 1211 + return priv->nic_ts_report; 1212 + } 1213 + 1209 1214 /* gqi napi handler defined in gve_main.c */ 1210 1215 int gve_napi_poll(struct napi_struct *napi, int budget); 1211 1216
+1 -1
drivers/net/ethernet/google/gve/gve_ethtool.c
··· 942 942 943 943 ethtool_op_get_ts_info(netdev, info); 944 944 945 - if (priv->nic_timestamp_supported) { 945 + if (gve_is_clock_enabled(priv)) { 946 946 info->so_timestamping |= SOF_TIMESTAMPING_RX_HARDWARE | 947 947 SOF_TIMESTAMPING_RAW_HARDWARE; 948 948
+7 -5
drivers/net/ethernet/google/gve/gve_main.c
··· 680 680 } 681 681 } 682 682 683 - err = gve_init_clock(priv); 684 - if (err) { 685 - dev_err(&priv->pdev->dev, "Failed to init clock"); 686 - goto abort_with_ptype_lut; 683 + if (priv->nic_timestamp_supported) { 684 + err = gve_init_clock(priv); 685 + if (err) { 686 + dev_warn(&priv->pdev->dev, "Failed to init clock, continuing without PTP support"); 687 + err = 0; 688 + } 687 689 } 688 690 689 691 err = gve_init_rss_config(priv, priv->rx_cfg.num_queues); ··· 2185 2183 } 2186 2184 2187 2185 if (kernel_config->rx_filter != HWTSTAMP_FILTER_NONE) { 2188 - if (!priv->nic_ts_report) { 2186 + if (!gve_is_clock_enabled(priv)) { 2189 2187 NL_SET_ERR_MSG_MOD(extack, 2190 2188 "RX timestamping is not supported"); 2191 2189 kernel_config->rx_filter = HWTSTAMP_FILTER_NONE;
-8
drivers/net/ethernet/google/gve/gve_ptp.c
··· 70 70 struct gve_ptp *ptp; 71 71 int err; 72 72 73 - if (!priv->nic_timestamp_supported) { 74 - dev_dbg(&priv->pdev->dev, "Device does not support PTP\n"); 75 - return -EOPNOTSUPP; 76 - } 77 - 78 73 priv->ptp = kzalloc(sizeof(*priv->ptp), GFP_KERNEL); 79 74 if (!priv->ptp) 80 75 return -ENOMEM; ··· 110 115 int gve_init_clock(struct gve_priv *priv) 111 116 { 112 117 int err; 113 - 114 - if (!priv->nic_timestamp_supported) 115 - return 0; 116 118 117 119 err = gve_ptp_init(priv); 118 120 if (err)
+1 -1
drivers/net/ethernet/google/gve/gve_rx_dqo.c
··· 484 484 { 485 485 const struct gve_xdp_buff *ctx = (void *)_ctx; 486 486 487 - if (!ctx->gve->nic_ts_report) 487 + if (!gve_is_clock_enabled(ctx->gve)) 488 488 return -ENODATA; 489 489 490 490 if (!(ctx->compl_desc->ts_sub_nsecs_low & GVE_DQO_RX_HWTSTAMP_VALID))
+6 -4
drivers/net/ethernet/intel/ice/ice_lib.c
··· 2787 2787 2788 2788 ASSERT_RTNL(); 2789 2789 ice_for_each_rxq(vsi, q_idx) 2790 - netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, 2791 - &vsi->rx_rings[q_idx]->q_vector->napi); 2790 + if (vsi->rx_rings[q_idx] && vsi->rx_rings[q_idx]->q_vector) 2791 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, 2792 + &vsi->rx_rings[q_idx]->q_vector->napi); 2792 2793 2793 2794 ice_for_each_txq(vsi, q_idx) 2794 - netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX, 2795 - &vsi->tx_rings[q_idx]->q_vector->napi); 2795 + if (vsi->tx_rings[q_idx] && vsi->tx_rings[q_idx]->q_vector) 2796 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX, 2797 + &vsi->tx_rings[q_idx]->q_vector->napi); 2796 2798 /* Also set the interrupt number for the NAPI */ 2797 2799 ice_for_each_q_vector(vsi, v_idx) { 2798 2800 struct ice_q_vector *q_vector = vsi->q_vectors[v_idx];
-1
drivers/net/ethernet/intel/ice/ice_main.c
··· 7038 7038 cur_ns->rx_errors = pf->stats.crc_errors + 7039 7039 pf->stats.illegal_bytes + 7040 7040 pf->stats.rx_undersize + 7041 - pf->hw_csum_rx_error + 7042 7041 pf->stats.rx_jabber + 7043 7042 pf->stats.rx_fragments + 7044 7043 pf->stats.rx_oversize;
+10 -16
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 11468 11468 */ 11469 11469 static int ixgbe_recovery_probe(struct ixgbe_adapter *adapter) 11470 11470 { 11471 - struct net_device *netdev = adapter->netdev; 11472 11471 struct pci_dev *pdev = adapter->pdev; 11473 11472 struct ixgbe_hw *hw = &adapter->hw; 11474 - bool disable_dev; 11475 11473 int err = -EIO; 11476 11474 11477 11475 if (hw->mac.type != ixgbe_mac_e610) 11478 - goto clean_up_probe; 11476 + return err; 11479 11477 11480 11478 ixgbe_get_hw_control(adapter); 11481 - mutex_init(&hw->aci.lock); 11482 11479 err = ixgbe_get_flash_data(&adapter->hw); 11483 11480 if (err) 11484 - goto shutdown_aci; 11481 + goto err_release_hw_control; 11485 11482 11486 11483 timer_setup(&adapter->service_timer, ixgbe_service_timer, 0); 11487 11484 INIT_WORK(&adapter->service_task, ixgbe_recovery_service_task); ··· 11501 11504 devl_unlock(adapter->devlink); 11502 11505 11503 11506 return 0; 11504 - shutdown_aci: 11505 - mutex_destroy(&adapter->hw.aci.lock); 11507 + err_release_hw_control: 11506 11508 ixgbe_release_hw_control(adapter); 11507 - clean_up_probe: 11508 - disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state); 11509 - free_netdev(netdev); 11510 - devlink_free(adapter->devlink); 11511 - pci_release_mem_regions(pdev); 11512 - if (disable_dev) 11513 - pci_disable_device(pdev); 11514 11509 return err; 11515 11510 } 11516 11511 ··· 11644 11655 if (err) 11645 11656 goto err_sw_init; 11646 11657 11647 - if (ixgbe_check_fw_error(adapter)) 11648 - return ixgbe_recovery_probe(adapter); 11658 + if (ixgbe_check_fw_error(adapter)) { 11659 + err = ixgbe_recovery_probe(adapter); 11660 + if (err) 11661 + goto err_sw_init; 11662 + 11663 + return 0; 11664 + } 11649 11665 11650 11666 if (adapter->hw.mac.type == ixgbe_mac_e610) { 11651 11667 err = ixgbe_get_caps(&adapter->hw);
+1 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
··· 1389 1389 efs->rule.flow_type = mvpp2_cls_ethtool_flow_to_type(info->fs.flow_type); 1390 1390 if (efs->rule.flow_type < 0) { 1391 1391 ret = efs->rule.flow_type; 1392 - goto clean_rule; 1392 + goto clean_eth_rule; 1393 1393 } 1394 1394 1395 1395 ret = mvpp2_cls_rfs_parse_rule(&efs->rule);
+1 -1
drivers/net/ethernet/marvell/octeon_ep/octep_main.c
··· 1338 1338 1339 1339 ret = octep_ctrl_net_init(oct); 1340 1340 if (ret) 1341 - return ret; 1341 + goto unsupported_dev; 1342 1342 1343 1343 INIT_WORK(&oct->tx_timeout_task, octep_tx_timeout_task); 1344 1344 INIT_WORK(&oct->ctrl_mbox_task, octep_ctrl_mbox_task);
+16
drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
··· 613 613 cq->dbg = NULL; 614 614 } 615 615 } 616 + 617 + static int vhca_id_show(struct seq_file *file, void *priv) 618 + { 619 + struct mlx5_core_dev *dev = file->private; 620 + 621 + seq_printf(file, "0x%x\n", MLX5_CAP_GEN(dev, vhca_id)); 622 + return 0; 623 + } 624 + 625 + DEFINE_SHOW_ATTRIBUTE(vhca_id); 626 + 627 + void mlx5_vhca_debugfs_init(struct mlx5_core_dev *dev) 628 + { 629 + debugfs_create_file("vhca_id", 0400, dev->priv.dbg.dbg_root, dev, 630 + &vhca_id_fops); 631 + }
+14
drivers/net/ethernet/mellanox/mlx5/core/dev.c
··· 575 575 return plen && flen && flen == plen && 576 576 !memcmp(fsystem_guid, psystem_guid, flen); 577 577 } 578 + 579 + void mlx5_core_reps_aux_devs_remove(struct mlx5_core_dev *dev) 580 + { 581 + struct mlx5_priv *priv = &dev->priv; 582 + 583 + if (priv->adev[MLX5_INTERFACE_PROTOCOL_ETH]) 584 + device_lock_assert(&priv->adev[MLX5_INTERFACE_PROTOCOL_ETH]->adev.dev); 585 + else 586 + mlx5_core_err(dev, "ETH driver already removed\n"); 587 + if (priv->adev[MLX5_INTERFACE_PROTOCOL_IB_REP]) 588 + del_adev(&priv->adev[MLX5_INTERFACE_PROTOCOL_IB_REP]->adev); 589 + if (priv->adev[MLX5_INTERFACE_PROTOCOL_ETH_REP]) 590 + del_adev(&priv->adev[MLX5_INTERFACE_PROTOCOL_ETH_REP]->adev); 591 + }
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
··· 430 430 attrs->replay_esn.esn = sa_entry->esn_state.esn; 431 431 attrs->replay_esn.esn_msb = sa_entry->esn_state.esn_msb; 432 432 attrs->replay_esn.overlap = sa_entry->esn_state.overlap; 433 - if (attrs->dir == XFRM_DEV_OFFLOAD_OUT) 433 + if (attrs->dir == XFRM_DEV_OFFLOAD_OUT || 434 + x->xso.type != XFRM_DEV_OFFLOAD_PACKET) 434 435 goto skip_replay_window; 435 436 436 437 switch (x->replay_esn->replay_window) {
+11 -6
drivers/net/ethernet/mellanox/mlx5/core/en_accel/psp_rxtx.c
··· 177 177 { 178 178 struct mlx5e_priv *priv = netdev_priv(netdev); 179 179 struct net *net = sock_net(skb->sk); 180 - const struct ipv6hdr *ip6; 181 - struct tcphdr *th; 182 180 183 181 if (!mlx5e_psp_set_state(priv, skb, psp_st)) 184 182 return true; ··· 188 190 return false; 189 191 } 190 192 if (skb_is_gso(skb)) { 191 - ip6 = ipv6_hdr(skb); 192 - th = inner_tcp_hdr(skb); 193 + int len = skb_shinfo(skb)->gso_size + inner_tcp_hdrlen(skb); 194 + struct tcphdr *th = inner_tcp_hdr(skb); 193 195 194 - th->check = ~tcp_v6_check(skb_shinfo(skb)->gso_size + inner_tcp_hdrlen(skb), &ip6->saddr, 195 - &ip6->daddr, 0); 196 + if (skb->protocol == htons(ETH_P_IP)) { 197 + const struct iphdr *ip = ip_hdr(skb); 198 + 199 + th->check = ~tcp_v4_check(len, ip->saddr, ip->daddr, 0); 200 + } else { 201 + const struct ipv6hdr *ip6 = ipv6_hdr(skb); 202 + 203 + th->check = ~tcp_v6_check(len, &ip6->saddr, &ip6->daddr, 0); 204 + } 196 205 } 197 206 198 207 return true;
+12 -9
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 4102 4102 mlx5e_queue_update_stats(priv); 4103 4103 } 4104 4104 4105 + netdev_stats_to_stats64(stats, &dev->stats); 4106 + 4105 4107 if (mlx5e_is_uplink_rep(priv)) { 4106 4108 struct mlx5e_vport_stats *vstats = &priv->stats.vport; 4107 4109 ··· 4120 4118 mlx5e_fold_sw_stats64(priv, stats); 4121 4119 } 4122 4120 4123 - stats->rx_missed_errors = priv->stats.qcnt.rx_out_of_buffer; 4124 - stats->rx_dropped = PPORT_2863_GET(pstats, if_in_discards); 4121 + stats->rx_missed_errors += priv->stats.qcnt.rx_out_of_buffer; 4122 + stats->rx_dropped += PPORT_2863_GET(pstats, if_in_discards); 4125 4123 4126 - stats->rx_length_errors = 4124 + stats->rx_length_errors += 4127 4125 PPORT_802_3_GET(pstats, a_in_range_length_errors) + 4128 4126 PPORT_802_3_GET(pstats, a_out_of_range_length_field) + 4129 4127 PPORT_802_3_GET(pstats, a_frame_too_long_errors) + 4130 4128 VNIC_ENV_GET(&priv->stats.vnic, eth_wqe_too_small); 4131 - stats->rx_crc_errors = 4129 + stats->rx_crc_errors += 4132 4130 PPORT_802_3_GET(pstats, a_frame_check_sequence_errors); 4133 - stats->rx_frame_errors = PPORT_802_3_GET(pstats, a_alignment_errors); 4134 - stats->tx_aborted_errors = PPORT_2863_GET(pstats, if_out_discards); 4135 - stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors + 4136 - stats->rx_frame_errors; 4137 - stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors; 4131 + stats->rx_frame_errors += PPORT_802_3_GET(pstats, a_alignment_errors); 4132 + stats->tx_aborted_errors += PPORT_2863_GET(pstats, if_out_discards); 4133 + stats->rx_errors += stats->rx_length_errors + stats->rx_crc_errors + 4134 + stats->rx_frame_errors; 4135 + stats->tx_errors += stats->tx_aborted_errors + stats->tx_carrier_errors; 4138 4136 } 4139 4137 4140 4138 static void mlx5e_nic_set_rx_mode(struct mlx5e_priv *priv) ··· 6897 6895 struct mlx5e_priv *priv = netdev_priv(netdev); 6898 6896 struct mlx5_core_dev *mdev = edev->mdev; 6899 6897 6898 + mlx5_eswitch_safe_aux_devs_remove(mdev); 6900 6899 mlx5_core_uplink_netdev_set(mdev, NULL); 6901 6900 6902 6901 if (priv->profile)
+13 -6
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 2147 2147 2148 2148 static void mlx5e_tc_del_fdb_peers_flow(struct mlx5e_tc_flow *flow) 2149 2149 { 2150 + struct mlx5_devcom_comp_dev *devcom; 2151 + struct mlx5_devcom_comp_dev *pos; 2152 + struct mlx5_eswitch *peer_esw; 2150 2153 int i; 2151 2154 2152 - for (i = 0; i < MLX5_MAX_PORTS; i++) { 2153 - if (i == mlx5_get_dev_index(flow->priv->mdev)) 2154 - continue; 2155 + devcom = flow->priv->mdev->priv.eswitch->devcom; 2156 + mlx5_devcom_for_each_peer_entry(devcom, peer_esw, pos) { 2157 + i = mlx5_get_dev_index(peer_esw->dev); 2155 2158 mlx5e_tc_del_fdb_peer_flow(flow, i); 2156 2159 } 2157 2160 } ··· 5516 5513 5517 5514 void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw) 5518 5515 { 5516 + struct mlx5_devcom_comp_dev *devcom; 5517 + struct mlx5_devcom_comp_dev *pos; 5519 5518 struct mlx5e_tc_flow *flow, *tmp; 5519 + struct mlx5_eswitch *peer_esw; 5520 5520 int i; 5521 5521 5522 - for (i = 0; i < MLX5_MAX_PORTS; i++) { 5523 - if (i == mlx5_get_dev_index(esw->dev)) 5524 - continue; 5522 + devcom = esw->devcom; 5523 + 5524 + mlx5_devcom_for_each_peer_entry(devcom, peer_esw, pos) { 5525 + i = mlx5_get_dev_index(peer_esw->dev); 5525 5526 list_for_each_entry_safe(flow, tmp, &esw->offloads.peer_flows[i], peer[i]) 5526 5527 mlx5e_tc_del_fdb_peers_flow(flow); 5527 5528 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_lgcy.c
··· 188 188 if (IS_ERR(vport->ingress.acl)) { 189 189 err = PTR_ERR(vport->ingress.acl); 190 190 vport->ingress.acl = NULL; 191 - return err; 191 + goto out; 192 192 } 193 193 194 194 err = esw_acl_ingress_lgcy_groups_create(esw, vport);
+5 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
··· 929 929 int mlx5_esw_ipsec_vf_packet_offload_supported(struct mlx5_core_dev *dev, 930 930 u16 vport_num); 931 931 bool mlx5_esw_host_functions_enabled(const struct mlx5_core_dev *dev); 932 + void mlx5_eswitch_safe_aux_devs_remove(struct mlx5_core_dev *dev); 932 933 #else /* CONFIG_MLX5_ESWITCH */ 933 934 /* eswitch API stubs */ 934 935 static inline int mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; } ··· 1010 1009 static inline bool 1011 1010 mlx5_esw_vport_vhca_id(struct mlx5_eswitch *esw, u16 vportn, u16 *vhca_id) 1012 1011 { 1013 - return -EOPNOTSUPP; 1012 + return false; 1014 1013 } 1014 + 1015 + static inline void 1016 + mlx5_eswitch_safe_aux_devs_remove(struct mlx5_core_dev *dev) {} 1015 1017 1016 1018 #endif /* CONFIG_MLX5_ESWITCH */ 1017 1019
+26
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 3981 3981 return true; 3982 3982 } 3983 3983 3984 + #define MLX5_ESW_HOLD_TIMEOUT_MS 7000 3985 + #define MLX5_ESW_HOLD_RETRY_DELAY_MS 500 3986 + 3987 + void mlx5_eswitch_safe_aux_devs_remove(struct mlx5_core_dev *dev) 3988 + { 3989 + unsigned long timeout; 3990 + bool hold_esw = true; 3991 + 3992 + /* Wait for any concurrent eswitch mode transition to complete. */ 3993 + if (!mlx5_esw_hold(dev)) { 3994 + timeout = jiffies + msecs_to_jiffies(MLX5_ESW_HOLD_TIMEOUT_MS); 3995 + while (!mlx5_esw_hold(dev)) { 3996 + if (!time_before(jiffies, timeout)) { 3997 + hold_esw = false; 3998 + break; 3999 + } 4000 + msleep(MLX5_ESW_HOLD_RETRY_DELAY_MS); 4001 + } 4002 + } 4003 + if (hold_esw) { 4004 + if (mlx5_eswitch_mode(dev) == MLX5_ESWITCH_OFFLOADS) 4005 + mlx5_core_reps_aux_devs_remove(dev); 4006 + mlx5_esw_release(dev); 4007 + } 4008 + } 4009 + 3984 4010 int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode, 3985 4011 struct netlink_ext_ack *extack) 3986 4012 {
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
··· 1198 1198 u32 out[MLX5_ST_SZ_DW(set_flow_table_root_out)] = {}; 1199 1199 u32 in[MLX5_ST_SZ_DW(set_flow_table_root_in)] = {}; 1200 1200 1201 - if (disconnect && MLX5_CAP_FLOWTABLE_NIC_TX(dev, reset_root_to_default)) 1201 + if (disconnect && 1202 + !MLX5_CAP_FLOWTABLE_NIC_TX(dev, reset_root_to_default)) 1202 1203 return -EOPNOTSUPP; 1203 1204 1204 1205 MLX5_SET(set_flow_table_root_in, in, opcode,
+3 -11
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1806 1806 return -ENOMEM; 1807 1807 } 1808 1808 1809 - static int vhca_id_show(struct seq_file *file, void *priv) 1810 - { 1811 - struct mlx5_core_dev *dev = file->private; 1812 - 1813 - seq_printf(file, "0x%x\n", MLX5_CAP_GEN(dev, vhca_id)); 1814 - return 0; 1815 - } 1816 - 1817 - DEFINE_SHOW_ATTRIBUTE(vhca_id); 1818 - 1819 1809 static int mlx5_notifiers_init(struct mlx5_core_dev *dev) 1820 1810 { 1821 1811 int err; ··· 1874 1884 priv->numa_node = dev_to_node(mlx5_core_dma_dev(dev)); 1875 1885 priv->dbg.dbg_root = debugfs_create_dir(dev_name(dev->device), 1876 1886 mlx5_debugfs_root); 1877 - debugfs_create_file("vhca_id", 0400, priv->dbg.dbg_root, dev, &vhca_id_fops); 1887 + 1878 1888 INIT_LIST_HEAD(&priv->traps); 1879 1889 1880 1890 err = mlx5_cmd_init(dev); ··· 2011 2021 err); 2012 2022 goto err_init_one; 2013 2023 } 2024 + 2025 + mlx5_vhca_debugfs_init(dev); 2014 2026 2015 2027 pci_save_state(pdev); 2016 2028 return 0;
+2
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 258 258 void mlx5_cmd_flush(struct mlx5_core_dev *dev); 259 259 void mlx5_cq_debugfs_init(struct mlx5_core_dev *dev); 260 260 void mlx5_cq_debugfs_cleanup(struct mlx5_core_dev *dev); 261 + void mlx5_vhca_debugfs_init(struct mlx5_core_dev *dev); 261 262 262 263 int mlx5_query_pcam_reg(struct mlx5_core_dev *dev, u32 *pcam, u8 feature_group, 263 264 u8 access_reg_group); ··· 291 290 void mlx5_unregister_device(struct mlx5_core_dev *dev); 292 291 void mlx5_dev_set_lightweight(struct mlx5_core_dev *dev); 293 292 bool mlx5_dev_is_lightweight(struct mlx5_core_dev *dev); 293 + void mlx5_core_reps_aux_devs_remove(struct mlx5_core_dev *dev); 294 294 295 295 void mlx5_fw_reporters_create(struct mlx5_core_dev *dev); 296 296 int mlx5_query_mtpps(struct mlx5_core_dev *dev, u32 *mtpps, u32 mtpps_size);
+1
drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
··· 76 76 goto init_one_err; 77 77 } 78 78 79 + mlx5_vhca_debugfs_init(mdev); 79 80 return 0; 80 81 81 82 init_one_err:
+2 -3
drivers/net/ethernet/rocker/rocker_main.c
··· 1524 1524 { 1525 1525 struct rocker_world_ops *wops = rocker_port->rocker->wops; 1526 1526 1527 - if (!wops->port_post_fini) 1528 - return; 1529 - wops->port_post_fini(rocker_port); 1527 + if (wops->port_post_fini) 1528 + wops->port_post_fini(rocker_port); 1530 1529 kfree(rocker_port->wpriv); 1531 1530 } 1532 1531
+1 -6
drivers/net/ethernet/sfc/mcdi_filters.c
··· 2182 2182 2183 2183 int efx_mcdi_rx_pull_rss_config(struct efx_nic *efx) 2184 2184 { 2185 - int rc; 2186 - 2187 - mutex_lock(&efx->net_dev->ethtool->rss_lock); 2188 - rc = efx_mcdi_rx_pull_rss_context_config(efx, &efx->rss_context); 2189 - mutex_unlock(&efx->net_dev->ethtool->rss_lock); 2190 - return rc; 2185 + return efx_mcdi_rx_pull_rss_context_config(efx, &efx->rss_context); 2191 2186 } 2192 2187 2193 2188 void efx_mcdi_rx_restore_rss_contexts(struct efx_nic *efx)
+27 -7
drivers/net/ethernet/spacemit/k1_emac.c
··· 1032 1032 100, 10000); 1033 1033 1034 1034 if (ret) { 1035 - netdev_err(priv->ndev, "Read stat timeout\n"); 1035 + /* 1036 + * This could be caused by the PHY stopping its refclk even when 1037 + * the link is up, for power saving. See also comments in 1038 + * emac_stats_update(). 1039 + */ 1040 + dev_err_ratelimited(&priv->ndev->dev, 1041 + "Read stat timeout. PHY clock stopped?\n"); 1036 1042 return ret; 1037 1043 } 1038 1044 ··· 1086 1080 1087 1081 assert_spin_locked(&priv->stats_lock); 1088 1082 1089 - if (!netif_running(priv->ndev) || !netif_device_present(priv->ndev)) { 1090 - /* Not up, don't try to update */ 1083 + /* 1084 + * We can't read statistics if the interface is not up. Also, some PHYs 1085 + * stop their reference clocks for link down power saving, which also 1086 + * causes reading statistics to time out. Don't update and don't 1087 + * reschedule in these cases. 1088 + */ 1089 + if (!netif_running(priv->ndev) || 1090 + !netif_carrier_ok(priv->ndev) || 1091 + !netif_device_present(priv->ndev)) { 1091 1092 return; 1092 1093 } 1093 1094 1094 1095 for (i = 0; i < sizeof(priv->tx_stats) / sizeof(*tx_stats); i++) { 1095 1096 /* 1096 - * If reading stats times out, everything is broken and there's 1097 - * nothing we can do. Reading statistics also can't return an 1098 - * error, so just return without updating and without 1099 - * rescheduling. 1097 + * If reading stats times out anyway, the stat registers will be 1098 + * stuck, and we can't really recover from that. 1099 + * 1100 + * Reading statistics also can't return an error, so just return 1101 + * without updating and without rescheduling. 1100 1102 */ 1101 1103 if (emac_tx_read_stat_cnt(priv, i, &res)) 1102 1104 return; ··· 1545 1531 } 1546 1532 1547 1533 emac_wr(priv, MAC_GLOBAL_CONTROL, ctrl); 1534 + 1535 + /* 1536 + * Reschedule stats updates now that link is up. See comments in 1537 + * emac_stats_update(). 1538 + */ 1539 + mod_timer(&priv->stats_timer, jiffies); 1548 1540 } 1549 1541 1550 1542 phy_print_status(phydev);
+13 -4
drivers/net/phy/micrel.c
··· 2655 2655 2656 2656 kszphy_parse_led_mode(phydev); 2657 2657 2658 - clk = devm_clk_get_optional_enabled(&phydev->mdio.dev, "rmii-ref"); 2658 + clk = devm_clk_get_optional(&phydev->mdio.dev, "rmii-ref"); 2659 2659 /* NOTE: clk may be NULL if building without CONFIG_HAVE_CLK */ 2660 2660 if (!IS_ERR_OR_NULL(clk)) { 2661 - unsigned long rate = clk_get_rate(clk); 2662 2661 bool rmii_ref_clk_sel_25_mhz; 2662 + unsigned long rate; 2663 + int err; 2664 + 2665 + err = clk_prepare_enable(clk); 2666 + if (err) { 2667 + phydev_err(phydev, "Failed to enable rmii-ref clock\n"); 2668 + return err; 2669 + } 2670 + 2671 + rate = clk_get_rate(clk); 2672 + clk_disable_unprepare(clk); 2663 2673 2664 2674 if (type) 2665 2675 priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel; ··· 2687 2677 } 2688 2678 } else if (!clk) { 2689 2679 /* unnamed clock from the generic ethernet-phy binding */ 2690 - clk = devm_clk_get_optional_enabled(&phydev->mdio.dev, NULL); 2680 + clk = devm_clk_get_optional(&phydev->mdio.dev, NULL); 2691 2681 } 2692 2682 2693 2683 if (IS_ERR(clk)) 2694 2684 return PTR_ERR(clk); 2695 2685 2696 - clk_disable_unprepare(clk); 2697 2686 priv->clk = clk; 2698 2687 2699 2688 if (ksz8041_fiber_mode(phydev))
+7 -2
drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
··· 395 395 struct sk_buff *skb) 396 396 { 397 397 unsigned long long data_bus_addr, data_base_addr; 398 + struct skb_shared_info *shinfo = skb_shinfo(skb); 398 399 struct device *dev = rxq->dpmaif_ctrl->dev; 399 400 struct dpmaif_bat_page *page_info; 400 401 unsigned int data_len; ··· 403 402 404 403 page_info = rxq->bat_frag->bat_skb; 405 404 page_info += t7xx_normal_pit_bid(pkt_info); 406 - dma_unmap_page(dev, page_info->data_bus_addr, page_info->data_len, DMA_FROM_DEVICE); 407 405 408 406 if (!page_info->page) 409 407 return -EINVAL; 408 + 409 + if (shinfo->nr_frags >= MAX_SKB_FRAGS) 410 + return -EINVAL; 411 + 412 + dma_unmap_page(dev, page_info->data_bus_addr, page_info->data_len, DMA_FROM_DEVICE); 410 413 411 414 data_bus_addr = le32_to_cpu(pkt_info->pd.data_addr_h); 412 415 data_bus_addr = (data_bus_addr << 32) + le32_to_cpu(pkt_info->pd.data_addr_l); ··· 418 413 data_offset = data_bus_addr - data_base_addr; 419 414 data_offset += page_info->offset; 420 415 data_len = FIELD_GET(PD_PIT_DATA_LEN, le32_to_cpu(pkt_info->header)); 421 - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page_info->page, 416 + skb_add_rx_frag(skb, shinfo->nr_frags, page_info->page, 422 417 data_offset, data_len, page_info->data_len); 423 418 424 419 page_info->page = NULL;
+1
drivers/ntb/ntb_transport.c
··· 1394 1394 goto err2; 1395 1395 } 1396 1396 1397 + mutex_init(&nt->link_event_lock); 1397 1398 INIT_DELAYED_WORK(&nt->link_work, ntb_transport_link_work); 1398 1399 INIT_WORK(&nt->link_cleanup, ntb_transport_link_cleanup_work); 1399 1400
+1 -17
drivers/pci/rebar.c
··· 295 295 int exclude_bars) 296 296 { 297 297 struct pci_host_bridge *host; 298 - int old, ret; 299 298 300 299 /* Check if we must preserve the firmware's resource assignment */ 301 300 host = pci_find_host_bridge(dev->bus); ··· 307 308 if (!pci_rebar_size_supported(dev, resno, size)) 308 309 return -EINVAL; 309 310 310 - old = pci_rebar_get_current_size(dev, resno); 311 - if (old < 0) 312 - return old; 313 - 314 - ret = pci_rebar_set_size(dev, resno, size); 315 - if (ret) 316 - return ret; 317 - 318 - ret = pci_do_resource_release_and_resize(dev, resno, size, exclude_bars); 319 - if (ret) 320 - goto error_resize; 321 - return 0; 322 - 323 - error_resize: 324 - pci_rebar_set_size(dev, resno, old); 325 - return ret; 311 + return pci_do_resource_release_and_resize(dev, resno, size, exclude_bars); 326 312 } 327 313 EXPORT_SYMBOL(pci_resize_resource);
+19 -4
drivers/pci/setup-bus.c
··· 2504 2504 struct resource *b_win, *r; 2505 2505 LIST_HEAD(saved); 2506 2506 unsigned int i; 2507 - int ret = 0; 2507 + int old, ret; 2508 2508 2509 2509 b_win = pbus_select_window(bus, res); 2510 2510 if (!b_win) 2511 2511 return -EINVAL; 2512 + 2513 + old = pci_rebar_get_current_size(pdev, resno); 2514 + if (old < 0) 2515 + return old; 2516 + 2517 + ret = pci_rebar_set_size(pdev, resno, size); 2518 + if (ret) 2519 + return ret; 2512 2520 2513 2521 pci_dev_for_each_resource(pdev, r, i) { 2514 2522 if (i >= PCI_BRIDGE_RESOURCES) ··· 2550 2542 return ret; 2551 2543 2552 2544 restore: 2553 - /* Revert to the old configuration */ 2545 + /* 2546 + * Revert to the old configuration. 2547 + * 2548 + * BAR Size must be restored first because it affects the read-only 2549 + * bits in BAR (the old address might not be restorable otherwise 2550 + * due to low address bits). 2551 + */ 2552 + pci_rebar_set_size(pdev, resno, old); 2553 + 2554 2554 list_for_each_entry(dev_res, &saved, list) { 2555 2555 struct resource *res = dev_res->res; 2556 2556 struct pci_dev *dev = dev_res->dev; ··· 2572 2556 2573 2557 restore_dev_resource(dev_res); 2574 2558 2575 - ret = pci_claim_resource(dev, i); 2576 - if (ret) 2559 + if (pci_claim_resource(dev, i)) 2577 2560 continue; 2578 2561 2579 2562 if (i < PCI_BRIDGE_RESOURCES) {
+1 -1
drivers/pinctrl/meson/pinctrl-meson.c
··· 619 619 pc->chip.set = meson_gpio_set; 620 620 pc->chip.base = -1; 621 621 pc->chip.ngpio = pc->data->num_pins; 622 - pc->chip.can_sleep = false; 622 + pc->chip.can_sleep = true; 623 623 624 624 ret = gpiochip_add_data(&pc->chip, pc); 625 625 if (ret) {
+1 -1
drivers/pinctrl/pinctrl-th1520.c
··· 287 287 TH1520_PAD(5, QSPI0_D0_MOSI, QSPI, PWM, I2S, GPIO, ____, ____, 0), 288 288 TH1520_PAD(6, QSPI0_D1_MISO, QSPI, PWM, I2S, GPIO, ____, ____, 0), 289 289 TH1520_PAD(7, QSPI0_D2_WP, QSPI, PWM, I2S, GPIO, ____, ____, 0), 290 - TH1520_PAD(8, QSPI1_D3_HOLD, QSPI, ____, I2S, GPIO, ____, ____, 0), 290 + TH1520_PAD(8, QSPI0_D3_HOLD, QSPI, ____, I2S, GPIO, ____, ____, 0), 291 291 TH1520_PAD(9, I2C2_SCL, I2C, UART, ____, GPIO, ____, ____, 0), 292 292 TH1520_PAD(10, I2C2_SDA, I2C, UART, ____, GPIO, ____, ____, 0), 293 293 TH1520_PAD(11, I2C3_SCL, I2C, ____, ____, GPIO, ____, ____, 0),
+3 -12
drivers/pinctrl/qcom/Kconfig
··· 61 61 (Low Power Island) found on the Qualcomm Technologies Inc SoCs. 62 62 63 63 config PINCTRL_SC7280_LPASS_LPI 64 - tristate "Qualcomm Technologies Inc SC7280 LPASS LPI pin controller driver" 64 + tristate "Qualcomm Technologies Inc SC7280 and SM8350 LPASS LPI pin controller driver" 65 65 depends on ARM64 || COMPILE_TEST 66 66 depends on PINCTRL_LPASS_LPI 67 67 help 68 68 This is the pinctrl, pinmux, pinconf and gpiolib driver for the 69 69 Qualcomm Technologies Inc LPASS (Low Power Audio SubSystem) LPI 70 - (Low Power Island) found on the Qualcomm Technologies Inc SC7280 platform. 70 + (Low Power Island) found on the Qualcomm Technologies Inc SC7280 71 + and SM8350 platforms. 71 72 72 73 config PINCTRL_SDM660_LPASS_LPI 73 74 tristate "Qualcomm Technologies Inc SDM660 LPASS LPI pin controller driver" ··· 106 105 This is the pinctrl, pinmux, pinconf and gpiolib driver for the 107 106 Qualcomm Technologies Inc LPASS (Low Power Audio SubSystem) LPI 108 107 (Low Power Island) found on the Qualcomm Technologies Inc SM8250 platform. 109 - 110 - config PINCTRL_SM8350_LPASS_LPI 111 - tristate "Qualcomm Technologies Inc SM8350 LPASS LPI pin controller driver" 112 - depends on ARM64 || COMPILE_TEST 113 - depends on PINCTRL_LPASS_LPI 114 - help 115 - This is the pinctrl, pinmux, pinconf and gpiolib driver for the 116 - Qualcomm Technologies Inc LPASS (Low Power Audio SubSystem) LPI 117 - (Low Power Island) found on the Qualcomm Technologies Inc SM8350 118 - platform. 119 108 120 109 config PINCTRL_SM8450_LPASS_LPI 121 110 tristate "Qualcomm Technologies Inc SM8450 LPASS LPI pin controller driver"
-1
drivers/pinctrl/qcom/Makefile
··· 64 64 obj-$(CONFIG_PINCTRL_SM8250) += pinctrl-sm8250.o 65 65 obj-$(CONFIG_PINCTRL_SM8250_LPASS_LPI) += pinctrl-sm8250-lpass-lpi.o 66 66 obj-$(CONFIG_PINCTRL_SM8350) += pinctrl-sm8350.o 67 - obj-$(CONFIG_PINCTRL_SM8350_LPASS_LPI) += pinctrl-sm8350-lpass-lpi.o 68 67 obj-$(CONFIG_PINCTRL_SM8450) += pinctrl-sm8450.o 69 68 obj-$(CONFIG_PINCTRL_SM8450_LPASS_LPI) += pinctrl-sm8450-lpass-lpi.o 70 69 obj-$(CONFIG_PINCTRL_SM8550) += pinctrl-sm8550.o
+17
drivers/pinctrl/qcom/pinctrl-lpass-lpi.c
··· 312 312 .pin_config_group_set = lpi_config_set, 313 313 }; 314 314 315 + static int lpi_gpio_get_direction(struct gpio_chip *chip, unsigned int pin) 316 + { 317 + unsigned long config = pinconf_to_config_packed(PIN_CONFIG_LEVEL, 0); 318 + struct lpi_pinctrl *state = gpiochip_get_data(chip); 319 + unsigned long arg; 320 + int ret; 321 + 322 + ret = lpi_config_get(state->ctrl, pin, &config); 323 + if (ret) 324 + return ret; 325 + 326 + arg = pinconf_to_config_argument(config); 327 + 328 + return arg ? GPIO_LINE_DIRECTION_OUT : GPIO_LINE_DIRECTION_IN; 329 + } 330 + 315 331 static int lpi_gpio_direction_input(struct gpio_chip *chip, unsigned int pin) 316 332 { 317 333 struct lpi_pinctrl *state = gpiochip_get_data(chip); ··· 425 409 #endif 426 410 427 411 static const struct gpio_chip lpi_gpio_template = { 412 + .get_direction = lpi_gpio_get_direction, 428 413 .direction_input = lpi_gpio_direction_input, 429 414 .direction_output = lpi_gpio_direction_output, 430 415 .get = lpi_gpio_get,
+3
drivers/pinctrl/qcom/pinctrl-sc7280-lpass-lpi.c
··· 131 131 { 132 132 .compatible = "qcom,sc7280-lpass-lpi-pinctrl", 133 133 .data = &sc7280_lpi_data, 134 + }, { 135 + .compatible = "qcom,sm8350-lpass-lpi-pinctrl", 136 + .data = &sc7280_lpi_data, 134 137 }, 135 138 { } 136 139 };
-151
drivers/pinctrl/qcom/pinctrl-sm8350-lpass-lpi.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved. 4 - * Copyright (c) 2020-2023 Linaro Ltd. 5 - */ 6 - 7 - #include <linux/gpio/driver.h> 8 - #include <linux/module.h> 9 - #include <linux/platform_device.h> 10 - 11 - #include "pinctrl-lpass-lpi.h" 12 - 13 - enum lpass_lpi_functions { 14 - LPI_MUX_dmic1_clk, 15 - LPI_MUX_dmic1_data, 16 - LPI_MUX_dmic2_clk, 17 - LPI_MUX_dmic2_data, 18 - LPI_MUX_dmic3_clk, 19 - LPI_MUX_dmic3_data, 20 - LPI_MUX_i2s1_clk, 21 - LPI_MUX_i2s1_data, 22 - LPI_MUX_i2s1_ws, 23 - LPI_MUX_i2s2_clk, 24 - LPI_MUX_i2s2_data, 25 - LPI_MUX_i2s2_ws, 26 - LPI_MUX_qua_mi2s_data, 27 - LPI_MUX_qua_mi2s_sclk, 28 - LPI_MUX_qua_mi2s_ws, 29 - LPI_MUX_swr_rx_clk, 30 - LPI_MUX_swr_rx_data, 31 - LPI_MUX_swr_tx_clk, 32 - LPI_MUX_swr_tx_data, 33 - LPI_MUX_wsa_swr_clk, 34 - LPI_MUX_wsa_swr_data, 35 - LPI_MUX_gpio, 36 - LPI_MUX__, 37 - }; 38 - 39 - static const struct pinctrl_pin_desc sm8350_lpi_pins[] = { 40 - PINCTRL_PIN(0, "gpio0"), 41 - PINCTRL_PIN(1, "gpio1"), 42 - PINCTRL_PIN(2, "gpio2"), 43 - PINCTRL_PIN(3, "gpio3"), 44 - PINCTRL_PIN(4, "gpio4"), 45 - PINCTRL_PIN(5, "gpio5"), 46 - PINCTRL_PIN(6, "gpio6"), 47 - PINCTRL_PIN(7, "gpio7"), 48 - PINCTRL_PIN(8, "gpio8"), 49 - PINCTRL_PIN(9, "gpio9"), 50 - PINCTRL_PIN(10, "gpio10"), 51 - PINCTRL_PIN(11, "gpio11"), 52 - PINCTRL_PIN(12, "gpio12"), 53 - PINCTRL_PIN(13, "gpio13"), 54 - PINCTRL_PIN(14, "gpio14"), 55 - }; 56 - 57 - static const char * const swr_tx_clk_groups[] = { "gpio0" }; 58 - static const char * const swr_tx_data_groups[] = { "gpio1", "gpio2", "gpio5", "gpio14" }; 59 - static const char * const swr_rx_clk_groups[] = { "gpio3" }; 60 - static const char * const swr_rx_data_groups[] = { "gpio4", "gpio5" }; 61 - static const char * const dmic1_clk_groups[] = { "gpio6" }; 62 - static const char * const dmic1_data_groups[] = { "gpio7" }; 63 - static const char * const dmic2_clk_groups[] = { "gpio8" };
64 - static const char * const dmic2_data_groups[] = { "gpio9" }; 65 - static const char * const i2s2_clk_groups[] = { "gpio10" }; 66 - static const char * const i2s2_ws_groups[] = { "gpio11" }; 67 - static const char * const dmic3_clk_groups[] = { "gpio12" }; 68 - static const char * const dmic3_data_groups[] = { "gpio13" }; 69 - static const char * const qua_mi2s_sclk_groups[] = { "gpio0" }; 70 - static const char * const qua_mi2s_ws_groups[] = { "gpio1" }; 71 - static const char * const qua_mi2s_data_groups[] = { "gpio2", "gpio3", "gpio4" }; 72 - static const char * const i2s1_clk_groups[] = { "gpio6" }; 73 - static const char * const i2s1_ws_groups[] = { "gpio7" }; 74 - static const char * const i2s1_data_groups[] = { "gpio8", "gpio9" }; 75 - static const char * const wsa_swr_clk_groups[] = { "gpio10" }; 76 - static const char * const wsa_swr_data_groups[] = { "gpio11" }; 77 - static const char * const i2s2_data_groups[] = { "gpio12", "gpio12" }; 78 - 79 - static const struct lpi_pingroup sm8350_groups[] = { 80 - LPI_PINGROUP(0, 0, swr_tx_clk, qua_mi2s_sclk, _, _), 81 - LPI_PINGROUP(1, 2, swr_tx_data, qua_mi2s_ws, _, _), 82 - LPI_PINGROUP(2, 4, swr_tx_data, qua_mi2s_data, _, _), 83 - LPI_PINGROUP(3, 8, swr_rx_clk, qua_mi2s_data, _, _), 84 - LPI_PINGROUP(4, 10, swr_rx_data, qua_mi2s_data, _, _), 85 - LPI_PINGROUP(5, 12, swr_tx_data, swr_rx_data, _, _), 86 - LPI_PINGROUP(6, LPI_NO_SLEW, dmic1_clk, i2s1_clk, _, _), 87 - LPI_PINGROUP(7, LPI_NO_SLEW, dmic1_data, i2s1_ws, _, _), 88 - LPI_PINGROUP(8, LPI_NO_SLEW, dmic2_clk, i2s1_data, _, _), 89 - LPI_PINGROUP(9, LPI_NO_SLEW, dmic2_data, i2s1_data, _, _), 90 - LPI_PINGROUP(10, 16, i2s2_clk, wsa_swr_clk, _, _), 91 - LPI_PINGROUP(11, 18, i2s2_ws, wsa_swr_data, _, _), 92 - LPI_PINGROUP(12, LPI_NO_SLEW, dmic3_clk, i2s2_data, _, _), 93 - LPI_PINGROUP(13, LPI_NO_SLEW, dmic3_data, i2s2_data, _, _), 94 - LPI_PINGROUP(14, 6, swr_tx_data, _, _, _), 95 - }; 96 - 97 - static const struct lpi_function sm8350_functions[] = { 98 - LPI_FUNCTION(dmic1_clk),
99 - LPI_FUNCTION(dmic1_data), 100 - LPI_FUNCTION(dmic2_clk), 101 - LPI_FUNCTION(dmic2_data), 102 - LPI_FUNCTION(dmic3_clk), 103 - LPI_FUNCTION(dmic3_data), 104 - LPI_FUNCTION(i2s1_clk), 105 - LPI_FUNCTION(i2s1_data), 106 - LPI_FUNCTION(i2s1_ws), 107 - LPI_FUNCTION(i2s2_clk), 108 - LPI_FUNCTION(i2s2_data), 109 - LPI_FUNCTION(i2s2_ws), 110 - LPI_FUNCTION(qua_mi2s_data), 111 - LPI_FUNCTION(qua_mi2s_sclk), 112 - LPI_FUNCTION(qua_mi2s_ws), 113 - LPI_FUNCTION(swr_rx_clk), 114 - LPI_FUNCTION(swr_rx_data), 115 - LPI_FUNCTION(swr_tx_clk), 116 - LPI_FUNCTION(swr_tx_data), 117 - LPI_FUNCTION(wsa_swr_clk), 118 - LPI_FUNCTION(wsa_swr_data), 119 - }; 120 - 121 - static const struct lpi_pinctrl_variant_data sm8350_lpi_data = { 122 - .pins = sm8350_lpi_pins, 123 - .npins = ARRAY_SIZE(sm8350_lpi_pins), 124 - .groups = sm8350_groups, 125 - .ngroups = ARRAY_SIZE(sm8350_groups), 126 - .functions = sm8350_functions, 127 - .nfunctions = ARRAY_SIZE(sm8350_functions), 128 - }; 129 - 130 - static const struct of_device_id lpi_pinctrl_of_match[] = { 131 - { 132 - .compatible = "qcom,sm8350-lpass-lpi-pinctrl", 133 - .data = &sm8350_lpi_data, 134 - }, 135 - { } 136 - }; 137 - MODULE_DEVICE_TABLE(of, lpi_pinctrl_of_match); 138 - 139 - static struct platform_driver lpi_pinctrl_driver = { 140 - .driver = { 141 - .name = "qcom-sm8350-lpass-lpi-pinctrl", 142 - .of_match_table = lpi_pinctrl_of_match, 143 - }, 144 - .probe = lpi_pinctrl_probe, 145 - .remove = lpi_pinctrl_remove, 146 - }; 147 - module_platform_driver(lpi_pinctrl_driver); 148 - 149 - MODULE_AUTHOR("Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>"); 150 - MODULE_DESCRIPTION("QTI SM8350 LPI GPIO pin control driver"); 151 - MODULE_LICENSE("GPL");
+1 -1
drivers/platform/mellanox/mlx-platform.c
··· 7381 7381 mlxplat_hotplug = &mlxplat_mlxcpld_ng800_hi171_data; 7382 7382 mlxplat_hotplug->deferred_nr = 7383 7383 mlxplat_msn21xx_channels[MLXPLAT_CPLD_GRP_CHNL_NUM - 1]; 7384 - mlxplat_led = &mlxplat_default_ng_led_data; 7384 + mlxplat_led = &mlxplat_xdr_led_data; 7385 7385 mlxplat_regs_io = &mlxplat_default_ng_regs_io_data; 7386 7386 mlxplat_fan = &mlxplat_xdr_fan_data; 7387 7387
+10 -3
drivers/platform/x86/acer-wmi.c
··· 455 455 .mailled = 1, 456 456 }; 457 457 458 + static struct quirk_entry quirk_acer_nitro_an515_58 = { 459 + .predator_v4 = 1, 460 + .pwm = 1, 461 + }; 462 + 458 463 static struct quirk_entry quirk_acer_predator_ph315_53 = { 459 464 .turbo = 1, 460 465 .cpu_fans = 1, ··· 660 655 DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 661 656 DMI_MATCH(DMI_PRODUCT_NAME, "Nitro AN515-58"), 662 657 }, 663 - .driver_data = &quirk_acer_predator_v4, 658 + .driver_data = &quirk_acer_nitro_an515_58, 664 659 }, 665 660 { 666 661 .callback = dmi_matched, ··· 2070 2065 WMID_gaming_set_u64(0x1, ACER_CAP_TURBO_LED); 2071 2066 2072 2067 /* Set FAN mode to auto */ 2073 - WMID_gaming_set_fan_mode(ACER_WMID_FAN_MODE_AUTO); 2068 + if (has_cap(ACER_CAP_TURBO_FAN)) 2069 + WMID_gaming_set_fan_mode(ACER_WMID_FAN_MODE_AUTO); 2074 2070 2075 2071 /* Set OC to normal */ 2076 2072 if (has_cap(ACER_CAP_TURBO_OC)) { ··· 2085 2079 WMID_gaming_set_u64(0x10001, ACER_CAP_TURBO_LED); 2086 2080 2087 2081 /* Set FAN mode to turbo */ 2088 - WMID_gaming_set_fan_mode(ACER_WMID_FAN_MODE_TURBO); 2082 + if (has_cap(ACER_CAP_TURBO_FAN)) 2083 + WMID_gaming_set_fan_mode(ACER_WMID_FAN_MODE_TURBO); 2089 2084 2090 2085 /* Set OC to turbo mode */ 2091 2086 if (has_cap(ACER_CAP_TURBO_OC)) {
+3 -1
drivers/platform/x86/amd/wbrf.c
··· 104 104 obj = acpi_evaluate_dsm(adev->handle, &wifi_acpi_dsm_guid, 105 105 WBRF_REVISION, WBRF_RECORD, &argv4); 106 106 107 - if (!obj) 107 + if (!obj) { 108 + kfree(tmp); 108 109 return -EINVAL; 110 + } 109 111 110 112 if (obj->type != ACPI_TYPE_INTEGER) { 111 113 ret = -EINVAL;
+221 -3
drivers/platform/x86/asus-armoury.h
··· 348 348 static const struct dmi_system_id power_limits[] = { 349 349 { 350 350 .matches = { 351 + DMI_MATCH(DMI_BOARD_NAME, "FA401UV"), 352 + }, 353 + .driver_data = &(struct power_data) { 354 + .ac_data = &(struct power_limits) { 355 + .ppt_pl1_spl_min = 15, 356 + .ppt_pl1_spl_max = 80, 357 + .ppt_pl2_sppt_min = 35, 358 + .ppt_pl2_sppt_max = 80, 359 + .ppt_pl3_fppt_min = 35, 360 + .ppt_pl3_fppt_max = 80, 361 + .nv_dynamic_boost_min = 5, 362 + .nv_dynamic_boost_max = 25, 363 + .nv_temp_target_min = 75, 364 + .nv_temp_target_max = 87, 365 + .nv_tgp_min = 55, 366 + .nv_tgp_max = 75, 367 + }, 368 + .dc_data = &(struct power_limits) { 369 + .ppt_pl1_spl_min = 25, 370 + .ppt_pl1_spl_max = 35, 371 + .ppt_pl2_sppt_min = 31, 372 + .ppt_pl2_sppt_max = 44, 373 + .ppt_pl3_fppt_min = 45, 374 + .ppt_pl3_fppt_max = 65, 375 + .nv_temp_target_min = 75, 376 + .nv_temp_target_max = 87, 377 + }, 378 + }, 379 + }, 380 + { 381 + .matches = { 351 382 DMI_MATCH(DMI_BOARD_NAME, "FA401W"), 352 383 }, 353 384 .driver_data = &(struct power_data) { ··· 611 580 .ppt_pl2_sppt_def = 54, 612 581 .ppt_pl2_sppt_max = 90, 613 582 .ppt_pl3_fppt_min = 35, 614 - .ppt_pl3_fppt_def = 90, 615 - .ppt_pl3_fppt_max = 65, 583 + .ppt_pl3_fppt_def = 65, 584 + .ppt_pl3_fppt_max = 90, 616 585 .nv_dynamic_boost_min = 10, 617 586 .nv_dynamic_boost_max = 15, 618 587 .nv_temp_target_min = 75, ··· 729 698 .ppt_platform_sppt_max = 100, 730 699 .nv_temp_target_min = 75, 731 700 .nv_temp_target_max = 87, 701 + }, 702 + }, 703 + }, 704 + { 705 + .matches = { 706 + DMI_MATCH(DMI_BOARD_NAME, "FA617XT"), 707 + }, 708 + .driver_data = &(struct power_data) { 709 + .ac_data = &(struct power_limits) { 710 + .ppt_apu_sppt_min = 15, 711 + .ppt_apu_sppt_max = 80, 712 + .ppt_platform_sppt_min = 30, 713 + .ppt_platform_sppt_max = 145, 714 + }, 715 + .dc_data = &(struct power_limits) { 716 + .ppt_apu_sppt_min = 25, 717 + .ppt_apu_sppt_max = 35, 718 + .ppt_platform_sppt_min = 45, 719 + .ppt_platform_sppt_max = 100,
732 720 }, 733 721 }, 734 722 }, ··· 893 843 }, 894 844 { 895 845 .matches = { 896 - DMI_MATCH(DMI_BOARD_NAME, "GA403U"), 846 + DMI_MATCH(DMI_BOARD_NAME, "GA403UI"), 897 847 }, 898 848 .driver_data = &(struct power_data) { 899 849 .ac_data = &(struct power_limits) { ··· 925 875 }, 926 876 { 927 877 .matches = { 878 + DMI_MATCH(DMI_BOARD_NAME, "GA403UV"), 879 + }, 880 + .driver_data = &(struct power_data) { 881 + .ac_data = &(struct power_limits) { 882 + .ppt_pl1_spl_min = 15, 883 + .ppt_pl1_spl_max = 80, 884 + .ppt_pl2_sppt_min = 25, 885 + .ppt_pl2_sppt_max = 80, 886 + .ppt_pl3_fppt_min = 35, 887 + .ppt_pl3_fppt_max = 80, 888 + .nv_dynamic_boost_min = 5, 889 + .nv_dynamic_boost_max = 25, 890 + .nv_temp_target_min = 75, 891 + .nv_temp_target_max = 87, 892 + .nv_tgp_min = 55, 893 + .nv_tgp_max = 65, 894 + }, 895 + .dc_data = &(struct power_limits) { 896 + .ppt_pl1_spl_min = 15, 897 + .ppt_pl1_spl_max = 35, 898 + .ppt_pl2_sppt_min = 25, 899 + .ppt_pl2_sppt_max = 35, 900 + .ppt_pl3_fppt_min = 35, 901 + .ppt_pl3_fppt_max = 65, 902 + .nv_temp_target_min = 75, 903 + .nv_temp_target_max = 87, 904 + }, 905 + .requires_fan_curve = true, 906 + }, 907 + }, 908 + { 909 + .matches = { 910 + DMI_MATCH(DMI_BOARD_NAME, "GA403WM"), 911 + }, 912 + .driver_data = &(struct power_data) { 913 + .ac_data = &(struct power_limits) { 914 + .ppt_pl1_spl_min = 15, 915 + .ppt_pl1_spl_max = 80, 916 + .ppt_pl2_sppt_min = 25, 917 + .ppt_pl2_sppt_max = 80, 918 + .ppt_pl3_fppt_min = 35, 919 + .ppt_pl3_fppt_max = 80, 920 + .nv_dynamic_boost_min = 0, 921 + .nv_dynamic_boost_max = 15, 922 + .nv_temp_target_min = 75, 923 + .nv_temp_target_max = 87, 924 + .nv_tgp_min = 55, 925 + .nv_tgp_max = 85, 926 + }, 927 + .dc_data = &(struct power_limits) { 928 + .ppt_pl1_spl_min = 15, 929 + .ppt_pl1_spl_max = 35, 930 + .ppt_pl2_sppt_min = 25, 931 + .ppt_pl2_sppt_max = 35, 932 + .ppt_pl3_fppt_min = 35, 933 + .ppt_pl3_fppt_max = 65, 934 + .nv_temp_target_min = 75, 935 + .nv_temp_target_max = 87, 936 + },
937 + .requires_fan_curve = true, 938 + }, 939 + }, 940 + { 941 + .matches = { 928 942 DMI_MATCH(DMI_BOARD_NAME, "GA403WR"), 943 + }, 944 + .driver_data = &(struct power_data) { 945 + .ac_data = &(struct power_limits) { 946 + .ppt_pl1_spl_min = 15, 947 + .ppt_pl1_spl_max = 80, 948 + .ppt_pl2_sppt_min = 25, 949 + .ppt_pl2_sppt_max = 80, 950 + .ppt_pl3_fppt_min = 35, 951 + .ppt_pl3_fppt_max = 80, 952 + .nv_dynamic_boost_min = 0, 953 + .nv_dynamic_boost_max = 25, 954 + .nv_temp_target_min = 75, 955 + .nv_temp_target_max = 87, 956 + .nv_tgp_min = 80, 957 + .nv_tgp_max = 95, 958 + }, 959 + .dc_data = &(struct power_limits) { 960 + .ppt_pl1_spl_min = 15, 961 + .ppt_pl1_spl_max = 35, 962 + .ppt_pl2_sppt_min = 25, 963 + .ppt_pl2_sppt_max = 35, 964 + .ppt_pl3_fppt_min = 35, 965 + .ppt_pl3_fppt_max = 65, 966 + .nv_temp_target_min = 75, 967 + .nv_temp_target_max = 87, 968 + }, 969 + .requires_fan_curve = true, 970 + }, 971 + }, 972 + { 973 + .matches = { 974 + DMI_MATCH(DMI_BOARD_NAME, "GA403WW"), 929 975 }, 930 976 .driver_data = &(struct power_data) { 931 977 .ac_data = &(struct power_limits) { ··· 1335 1189 }, 1336 1190 { 1337 1191 .matches = { 1192 + DMI_MATCH(DMI_BOARD_NAME, "GV302XV"), 1193 + }, 1194 + .driver_data = &(struct power_data) { 1195 + .ac_data = &(struct power_limits) { 1196 + .ppt_pl1_spl_min = 15, 1197 + .ppt_pl1_spl_max = 55, 1198 + .ppt_pl2_sppt_min = 25, 1199 + .ppt_pl2_sppt_max = 60, 1200 + .ppt_pl3_fppt_min = 35, 1201 + .ppt_pl3_fppt_max = 65, 1202 + .nv_temp_target_min = 75, 1203 + .nv_temp_target_max = 87, 1204 + }, 1205 + .dc_data = &(struct power_limits) { 1206 + .ppt_pl1_spl_min = 15, 1207 + .ppt_pl1_spl_max = 35, 1208 + .ppt_pl2_sppt_min = 25, 1209 + .ppt_pl2_sppt_max = 35, 1210 + .ppt_pl3_fppt_min = 35, 1211 + .ppt_pl3_fppt_max = 65, 1212 + .nv_temp_target_min = 75, 1213 + .nv_temp_target_max = 87, 1214 + }, 1215 + }, 1216 + }, 1217 + { 1218 + .matches = { 1338 1219 DMI_MATCH(DMI_BOARD_NAME, "GV601R"), 1339 1220 }, 1340 1221 .driver_data = &(struct power_data) { ···
1484 1311 .ppt_pl1_spl_max = 100, 1485 1312 .ppt_pl2_sppt_min = 15, 1486 1313 .ppt_pl2_sppt_max = 190, 1314 + }, 1315 + .dc_data = NULL, 1316 + .requires_fan_curve = true, 1317 + }, 1318 + }, 1319 + { 1320 + .matches = { 1321 + DMI_MATCH(DMI_BOARD_NAME, "G513QY"), 1322 + }, 1323 + .driver_data = &(struct power_data) { 1324 + .ac_data = &(struct power_limits) { 1325 + /* Advantage Edition Laptop, no PL1 or PL2 limits */ 1326 + .ppt_apu_sppt_min = 15, 1327 + .ppt_apu_sppt_max = 100, 1328 + .ppt_platform_sppt_min = 70, 1329 + .ppt_platform_sppt_max = 190, 1487 1330 }, 1488 1331 .dc_data = NULL, 1489 1332 .requires_fan_curve = true, ··· 1744 1555 .nv_dynamic_boost_max = 25, 1745 1556 .nv_temp_target_min = 75, 1746 1557 .nv_temp_target_max = 87, 1558 + }, 1559 + .dc_data = &(struct power_limits) { 1560 + .ppt_pl1_spl_min = 25, 1561 + .ppt_pl1_spl_max = 55, 1562 + .ppt_pl2_sppt_min = 25, 1563 + .ppt_pl2_sppt_max = 70, 1564 + .nv_temp_target_min = 75, 1565 + .nv_temp_target_max = 87, 1566 + }, 1567 + .requires_fan_curve = true, 1568 + }, 1569 + }, 1570 + { 1571 + .matches = { 1572 + DMI_MATCH(DMI_BOARD_NAME, "G835LR"), 1573 + }, 1574 + .driver_data = &(struct power_data) { 1575 + .ac_data = &(struct power_limits) { 1576 + .ppt_pl1_spl_min = 28, 1577 + .ppt_pl1_spl_def = 140, 1578 + .ppt_pl1_spl_max = 175, 1579 + .ppt_pl2_sppt_min = 28, 1580 + .ppt_pl2_sppt_max = 175, 1581 + .nv_dynamic_boost_min = 5, 1582 + .nv_dynamic_boost_max = 25, 1583 + .nv_temp_target_min = 75, 1584 + .nv_temp_target_max = 87, 1585 + .nv_tgp_min = 65, 1586 + .nv_tgp_max = 115, 1747 1587 }, 1748 1588 .dc_data = &(struct power_limits) { 1749 1589 .ppt_pl1_spl_min = 25,
+2 -1
drivers/platform/x86/asus-wmi.c
··· 4889 4889 asus->egpu_enable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_EGPU); 4890 4890 asus->dgpu_disable_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_DGPU); 4891 4891 asus->kbd_rgb_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_TUF_RGB_STATE); 4892 - asus->oobe_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_OOBE); 4893 4892 4894 4893 if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_MINI_LED_MODE)) 4895 4894 asus->mini_led_dev_id = ASUS_WMI_DEVID_MINI_LED_MODE; ··· 4900 4901 else if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_GPU_MUX_VIVO)) 4901 4902 asus->gpu_mux_dev = ASUS_WMI_DEVID_GPU_MUX_VIVO; 4902 4903 #endif /* IS_ENABLED(CONFIG_ASUS_WMI_DEPRECATED_ATTRS) */ 4904 + 4905 + asus->oobe_state_available = asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_OOBE); 4903 4906 4904 4907 if (asus_wmi_dev_is_present(asus, ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY)) 4905 4908 asus->throttle_thermal_policy_dev = ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY;
+8
drivers/platform/x86/hp/hp-bioscfg/bioscfg.c
··· 10 10 #include <linux/fs.h> 11 11 #include <linux/module.h> 12 12 #include <linux/kernel.h> 13 + #include <linux/printk.h> 14 + #include <linux/string.h> 13 15 #include <linux/wmi.h> 14 16 #include "bioscfg.h" 15 17 #include "../../firmware_attributes_class.h" ··· 782 780 783 781 if (ret < 0) 784 782 goto buff_attr_exit; 783 + 784 + if (strlen(str) == 0) { 785 + pr_debug("Ignoring attribute with empty name\n"); 786 + ret = 0; 787 + goto buff_attr_exit; 788 + } 785 789 786 790 if (attr_type == HPWMI_PASSWORD_TYPE || 787 791 attr_type == HPWMI_SECURE_PLATFORM_TYPE)
+7 -5
drivers/platform/x86/hp/hp-bioscfg/bioscfg.h
··· 10 10 11 11 #include <linux/wmi.h> 12 12 #include <linux/types.h> 13 + #include <linux/string.h> 13 14 #include <linux/device.h> 14 15 #include <linux/module.h> 15 16 #include <linux/kernel.h> ··· 57 56 58 57 #define PASSWD_MECHANISM_TYPES "password" 59 58 60 - #define HP_WMI_BIOS_GUID "5FB7F034-2C63-45e9-BE91-3D44E2C707E4" 59 + #define HP_WMI_BIOS_GUID "5FB7F034-2C63-45E9-BE91-3D44E2C707E4" 61 60 62 - #define HP_WMI_BIOS_STRING_GUID "988D08E3-68F4-4c35-AF3E-6A1B8106F83C" 61 + #define HP_WMI_BIOS_STRING_GUID "988D08E3-68F4-4C35-AF3E-6A1B8106F83C" 63 62 #define HP_WMI_BIOS_INTEGER_GUID "8232DE3D-663D-4327-A8F4-E293ADB9BF05" 64 63 #define HP_WMI_BIOS_ENUMERATION_GUID "2D114B49-2DFB-4130-B8FE-4A3C09E75133" 65 64 #define HP_WMI_BIOS_ORDERED_LIST_GUID "14EA9746-CE1F-4098-A0E0-7045CB4DA745" 66 65 #define HP_WMI_BIOS_PASSWORD_GUID "322F2028-0F84-4901-988E-015176049E2D" 67 - #define HP_WMI_SET_BIOS_SETTING_GUID "1F4C91EB-DC5C-460b-951D-C7CB9B4B8D5E" 66 + #define HP_WMI_SET_BIOS_SETTING_GUID "1F4C91EB-DC5C-460B-951D-C7CB9B4B8D5E" 68 67 69 68 enum hp_wmi_spm_commandtype { 70 69 HPWMI_SECUREPLATFORM_GET_STATE = 0x10, ··· 286 285 { \ 287 286 int i; \ 288 287 \ 289 - for (i = 0; i <= bioscfg_drv.type##_instances_count; i++) { \ 290 - if (!strcmp(kobj->name, bioscfg_drv.type##_data[i].attr_name_kobj->name)) \ 288 + for (i = 0; i < bioscfg_drv.type##_instances_count; i++) { \ 289 + if (bioscfg_drv.type##_data[i].attr_name_kobj && \ 290 + !strcmp(kobj->name, bioscfg_drv.type##_data[i].attr_name_kobj->name)) \ 291 291 return i; \ 292 292 } \ 293 293 return -EIO; \
+7 -4
drivers/pmdomain/imx/imx8m-blk-ctrl.c
··· 846 846 return NOTIFY_OK; 847 847 } 848 848 849 + /* 850 + * For i.MX8MQ, the ADB in the VPUMIX domain has no separate reset and clock 851 + * enable bits, but is ungated and reset together with the VPUs. 852 + * Resetting G1 or G2 separately may lead to a system hang. 853 + * Remove the rst_mask and clk_mask from the domain data of G1 and G2, 854 + * and let imx8mq_vpu_power_notifier() do the real VPU reset. 855 + */ 849 856 static const struct imx8m_blk_ctrl_domain_data imx8mq_vpu_blk_ctl_domain_data[] = { 850 857 [IMX8MQ_VPUBLK_PD_G1] = { 851 858 .name = "vpublk-g1", 852 859 .clk_names = (const char *[]){ "g1", }, 853 860 .num_clks = 1, 854 861 .gpc_name = "g1", 855 - .rst_mask = BIT(1), 856 - .clk_mask = BIT(1), 857 862 }, 858 863 [IMX8MQ_VPUBLK_PD_G2] = { 859 864 .name = "vpublk-g2", 860 865 .clk_names = (const char *[]){ "g2", }, 861 866 .num_clks = 1, 862 867 .gpc_name = "g2", 863 - .rst_mask = BIT(0), 864 - .clk_mask = BIT(0), 865 868 }, 866 869 }; 867 870
+10
drivers/pmdomain/rockchip/pm-domains.c
··· 879 879 pd->genpd.name = pd->info->name; 880 880 else 881 881 pd->genpd.name = kbasename(node->full_name); 882 + 883 + /* 884 + * Power domains needing a regulator should default to off, since 885 + * the regulator state is unknown at probe time. The regulator state 886 + * also cannot be checked, since doing so usually requires an IP 887 + * block that itself needs (a different) power domain. 888 + */ 889 + if (pd->info->need_regulator) 890 + rockchip_pd_power(pd, false); 891 + 882 892 pd->genpd.power_off = rockchip_pd_power_off; 883 893 pd->genpd.power_on = rockchip_pd_power_on; 884 894 pd->genpd.attach_dev = rockchip_pd_attach_dev;
+3
drivers/regulator/fp9931.c
··· 439 439 int i; 440 440 441 441 data = devm_kzalloc(&client->dev, sizeof(*data), GFP_KERNEL); 442 + if (!data) 443 + return -ENOMEM; 444 + 442 445 data->regmap = devm_regmap_init_i2c(client, &regmap_config); 443 446 if (IS_ERR(data->regmap)) 444 447 return dev_err_probe(&client->dev, PTR_ERR(data->regmap),
+1 -1
drivers/s390/crypto/ap_card.c
··· 43 43 { 44 44 struct ap_card *ac = to_ap_card(dev); 45 45 46 - return sysfs_emit(buf, "%d\n", ac->hwinfo.qd); 46 + return sysfs_emit(buf, "%d\n", ac->hwinfo.qd + 1); 47 47 } 48 48 49 49 static DEVICE_ATTR_RO(depth);
+1 -1
drivers/s390/crypto/ap_queue.c
··· 285 285 list_move_tail(&ap_msg->list, &aq->pendingq); 286 286 aq->requestq_count--; 287 287 aq->pendingq_count++; 288 - if (aq->queue_count < aq->card->hwinfo.qd) { 288 + if (aq->queue_count < aq->card->hwinfo.qd + 1) { 289 289 aq->sm_state = AP_SM_STATE_WORKING; 290 290 return AP_SM_WAIT_AGAIN; 291 291 }
+7
drivers/scsi/qla2xxx/qla_isr.c
··· 878 878 payload_size = sizeof(purex->els_frame_payload); 879 879 } 880 880 881 + if (total_bytes > sizeof(item->iocb.iocb)) 882 + total_bytes = sizeof(item->iocb.iocb); 883 + 881 884 pending_bytes = total_bytes; 882 885 no_bytes = (pending_bytes > payload_size) ? payload_size : 883 886 pending_bytes; ··· 1166 1163 1167 1164 total_bytes = (le16_to_cpu(purex->frame_size) & 0x0FFF) 1168 1165 - PURX_ELS_HEADER_SIZE; 1166 + 1167 + if (total_bytes > sizeof(item->iocb.iocb)) 1168 + total_bytes = sizeof(item->iocb.iocb); 1169 + 1169 1170 pending_bytes = total_bytes; 1170 1171 entry_count = entry_count_remaining = purex->entry_count; 1171 1172 no_bytes = (pending_bytes > sizeof(purex->els_frame_payload)) ?
+10 -1
drivers/scsi/scsi_error.c
··· 282 282 { 283 283 struct scsi_cmnd *scmd = container_of(head, typeof(*scmd), rcu); 284 284 struct Scsi_Host *shost = scmd->device->host; 285 - unsigned int busy = scsi_host_busy(shost); 285 + unsigned int busy; 286 286 unsigned long flags; 287 287 288 288 spin_lock_irqsave(shost->host_lock, flags); 289 289 shost->host_failed++; 290 + spin_unlock_irqrestore(shost->host_lock, flags); 291 + /* 292 + * Busy requests must be counted after host_failed is incremented 293 + * (or at least after taking the lock used to increment it) to 294 + * avoid racing with host unbusy and missing an eh wakeup. 295 + */ 296 + busy = scsi_host_busy(shost); 297 + 298 + spin_lock_irqsave(shost->host_lock, flags); 290 299 scsi_eh_wakeup(shost, busy); 291 300 spin_unlock_irqrestore(shost->host_lock, flags); 292 301 }
+8
drivers/scsi/scsi_lib.c
··· 376 376 rcu_read_lock(); 377 377 __clear_bit(SCMD_STATE_INFLIGHT, &cmd->state); 378 378 if (unlikely(scsi_host_in_recovery(shost))) { 379 + /* 380 + * Ensure the clear of SCMD_STATE_INFLIGHT is visible to 381 + * other CPUs before counting busy requests. Otherwise, 382 + * reordering can cause CPUs to race and miss an eh wakeup 383 + * when no CPU sees all busy requests as done or timed out. 384 + */ 385 + smp_mb(); 386 + 379 387 unsigned int busy = scsi_host_busy(shost); 380 388 381 389 spin_lock_irqsave(shost->host_lock, flags);
+2 -1
drivers/scsi/storvsc_drv.c
··· 1144 1144 * The current SCSI handling on the host side does 1145 1145 * not correctly handle: 1146 1146 * INQUIRY command with page code parameter set to 0x80 1147 - * MODE_SENSE command with cmd[2] == 0x1c 1147 + * MODE_SENSE and MODE_SENSE_10 command with cmd[2] == 0x1c 1148 1148 * MAINTENANCE_IN is not supported by HyperV FC passthrough 1149 1149 * 1150 1150 * Setup srb and scsi status so this won't be fatal. ··· 1154 1154 1155 1155 if ((stor_pkt->vm_srb.cdb[0] == INQUIRY) || 1156 1156 (stor_pkt->vm_srb.cdb[0] == MODE_SENSE) || 1157 + (stor_pkt->vm_srb.cdb[0] == MODE_SENSE_10) || 1157 1158 (stor_pkt->vm_srb.cdb[0] == MAINTENANCE_IN && 1158 1159 hv_dev_is_fc(device))) { 1159 1160 vstor_packet->vm_srb.scsi_status = 0;
+29 -25
drivers/slimbus/core.c
··· 146 146 { 147 147 struct slim_device *sbdev = to_slim_device(dev); 148 148 149 + of_node_put(sbdev->dev.of_node); 149 150 kfree(sbdev); 150 151 } 151 152 ··· 281 280 /* slim_remove_device: Remove the effect of slim_add_device() */ 282 281 static void slim_remove_device(struct slim_device *sbdev) 283 282 { 284 - of_node_put(sbdev->dev.of_node); 285 283 device_unregister(&sbdev->dev); 286 284 } 287 285 ··· 366 366 * @ctrl: Controller on which this device will be added/queried 367 367 * @e_addr: Enumeration address of the device to be queried 368 368 * 369 + * Takes a reference to the embedded struct device which needs to be dropped 370 + * after use. 371 + * 369 372 * Return: pointer to a device if it has already reported. Creates a new 370 373 * device and returns pointer to it if the device has not yet enumerated. 371 374 */ ··· 382 379 sbdev = slim_alloc_device(ctrl, e_addr, NULL); 383 380 if (!sbdev) 384 381 return ERR_PTR(-ENOMEM); 382 + 383 + get_device(&sbdev->dev); 385 384 } 386 385 387 386 return sbdev; 388 387 } 389 388 EXPORT_SYMBOL_GPL(slim_get_device); 390 389 391 - static struct slim_device *of_find_slim_device(struct slim_controller *ctrl, 392 - struct device_node *np) 390 + /** 391 + * of_slim_get_device() - get handle to a device using dt node. 392 + * 393 + * @ctrl: Controller on which this device will be queried 394 + * @np: node pointer to device 395 + * 396 + * Takes a reference to the embedded struct device which needs to be dropped 397 + * after use. 398 + * 399 + * Return: pointer to a device if it has been registered, otherwise NULL. 400 + */ 401 + struct slim_device *of_slim_get_device(struct slim_controller *ctrl, 402 + struct device_node *np) 393 403 { 394 404 struct slim_device *sbdev; 395 405 struct device *dev; ··· 414 398 } 415 399 416 400 return NULL; 417 - } 418 - 419 - /** 420 - * of_slim_get_device() - get handle to a device using dt node. 
421 - * 422 - * @ctrl: Controller on which this device will be added/queried 423 - * @np: node pointer to device 424 - * 425 - * Return: pointer to a device if it has already reported. Creates a new 426 - * device and returns pointer to it if the device has not yet enumerated. 427 - */ 428 - struct slim_device *of_slim_get_device(struct slim_controller *ctrl, 429 - struct device_node *np) 430 - { 431 - return of_find_slim_device(ctrl, np); 432 401 } 433 402 EXPORT_SYMBOL_GPL(of_slim_get_device); 434 403 ··· 490 489 if (ctrl->sched.clk_state != SLIM_CLK_ACTIVE) { 491 490 dev_err(ctrl->dev, "slim ctrl not active,state:%d, ret:%d\n", 492 491 ctrl->sched.clk_state, ret); 493 - goto slimbus_not_active; 492 + goto out_put_rpm; 494 493 } 495 494 496 495 sbdev = slim_get_device(ctrl, e_addr); 497 - if (IS_ERR(sbdev)) 498 - return -ENODEV; 496 + if (IS_ERR(sbdev)) { 497 + ret = -ENODEV; 498 + goto out_put_rpm; 499 + } 499 500 500 501 if (sbdev->is_laddr_valid) { 501 502 *laddr = sbdev->laddr; 502 - return 0; 503 + ret = 0; 504 + } else { 505 + ret = slim_device_alloc_laddr(sbdev, true); 503 506 } 504 507 505 - ret = slim_device_alloc_laddr(sbdev, true); 506 - 507 - slimbus_not_active: 508 + put_device(&sbdev->dev); 509 + out_put_rpm: 508 510 pm_runtime_mark_last_busy(ctrl->dev); 509 511 pm_runtime_put_autosuspend(ctrl->dev); 510 512 return ret;
+1
drivers/soc/renesas/Kconfig
··· 445 445 depends on RISCV_SBI 446 446 select ARCH_RZG2L 447 447 select AX45MP_L2_CACHE 448 + select CACHEMAINT_FOR_DMA 448 449 select DMA_GLOBAL_POOL 449 450 select ERRATA_ANDES 450 451 select ERRATA_ANDES_CMO
+1
drivers/spi/spi-cadence.c
··· 729 729 ctlr->unprepare_transfer_hardware = cdns_unprepare_transfer_hardware; 730 730 ctlr->mode_bits = SPI_CPOL | SPI_CPHA; 731 731 ctlr->bits_per_word_mask = SPI_BPW_MASK(8); 732 + ctlr->flags = SPI_CONTROLLER_MUST_TX; 732 733 733 734 if (of_device_is_compatible(pdev->dev.of_node, "cix,sky1-spi-r1p6")) 734 735 ctlr->bits_per_word_mask |= SPI_BPW_MASK(16) | SPI_BPW_MASK(32);
+1 -3
drivers/spi/spi-hisi-kunpeng.c
··· 161 161 static int hisi_spi_debugfs_init(struct hisi_spi *hs) 162 162 { 163 163 char name[32]; 164 + struct spi_controller *host = dev_get_drvdata(hs->dev); 164 165 165 - struct spi_controller *host; 166 - 167 - host = container_of(hs->dev, struct spi_controller, dev); 168 166 snprintf(name, 32, "hisi_spi%d", host->bus_num); 169 167 hs->debugfs = debugfs_create_dir(name, NULL); 170 168 if (IS_ERR(hs->debugfs))
+1
drivers/spi/spi-intel-pci.c
··· 81 81 { PCI_VDEVICE(INTEL, 0x54a4), (unsigned long)&cnl_info }, 82 82 { PCI_VDEVICE(INTEL, 0x5794), (unsigned long)&cnl_info }, 83 83 { PCI_VDEVICE(INTEL, 0x5825), (unsigned long)&cnl_info }, 84 + { PCI_VDEVICE(INTEL, 0x6e24), (unsigned long)&cnl_info }, 84 85 { PCI_VDEVICE(INTEL, 0x7723), (unsigned long)&cnl_info }, 85 86 { PCI_VDEVICE(INTEL, 0x7a24), (unsigned long)&cnl_info }, 86 87 { PCI_VDEVICE(INTEL, 0x7aa4), (unsigned long)&cnl_info },
+10 -23
drivers/spi/spi-sprd-adi.c
··· 528 528 pdev->id = of_alias_get_id(np, "spi"); 529 529 num_chipselect = of_get_child_count(np); 530 530 531 - ctlr = spi_alloc_host(&pdev->dev, sizeof(struct sprd_adi)); 531 + ctlr = devm_spi_alloc_host(&pdev->dev, sizeof(struct sprd_adi)); 532 532 if (!ctlr) 533 533 return -ENOMEM; 534 534 ··· 536 536 sadi = spi_controller_get_devdata(ctlr); 537 537 538 538 sadi->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); 539 - if (IS_ERR(sadi->base)) { 540 - ret = PTR_ERR(sadi->base); 541 - goto put_ctlr; 542 - } 539 + if (IS_ERR(sadi->base)) 540 + return PTR_ERR(sadi->base); 543 541 544 542 sadi->slave_vbase = (unsigned long)sadi->base + 545 543 data->slave_offset; ··· 549 551 if (ret > 0 || (IS_ENABLED(CONFIG_HWSPINLOCK) && ret == 0)) { 550 552 sadi->hwlock = 551 553 devm_hwspin_lock_request_specific(&pdev->dev, ret); 552 - if (!sadi->hwlock) { 553 - ret = -ENXIO; 554 - goto put_ctlr; 555 - } 554 + if (!sadi->hwlock) 555 + return -ENXIO; 556 556 } else { 557 557 switch (ret) { 558 558 case -ENOENT: 559 559 dev_info(&pdev->dev, "no hardware spinlock supplied\n"); 560 560 break; 561 561 default: 562 - dev_err_probe(&pdev->dev, ret, "failed to find hwlock id\n"); 563 - goto put_ctlr; 562 + return dev_err_probe(&pdev->dev, ret, "failed to find hwlock id\n"); 564 563 } 565 564 } 566 565 ··· 574 579 ctlr->transfer_one = sprd_adi_transfer_one; 575 580 576 581 ret = devm_spi_register_controller(&pdev->dev, ctlr); 577 - if (ret) { 578 - dev_err(&pdev->dev, "failed to register SPI controller\n"); 579 - goto put_ctlr; 580 - } 582 + if (ret) 583 + return dev_err_probe(&pdev->dev, ret, "failed to register SPI controller\n"); 581 584 582 585 if (sadi->data->restart) { 583 586 ret = devm_register_restart_handler(&pdev->dev, 584 587 sadi->data->restart, 585 588 sadi); 586 - if (ret) { 587 - dev_err(&pdev->dev, "can not register restart handler\n"); 588 - goto put_ctlr; 589 - } 589 + if (ret) 590 + return dev_err_probe(&pdev->dev, ret, "can not register restart handler\n");
590 591 } 591 592 592 593 return 0; 593 - 594 - put_ctlr: 595 - spi_controller_put(ctlr); 596 - return ret; 597 594 } 598 595 599 596 static struct sprd_adi_data sc9860_data = {
+8 -2
drivers/target/iscsi/iscsi_target_util.c
··· 741 741 spin_lock_bh(&sess->session_usage_lock); 742 742 sess->session_usage_count--; 743 743 744 - if (!sess->session_usage_count && sess->session_waiting_on_uc) 744 + if (!sess->session_usage_count && sess->session_waiting_on_uc) { 745 + spin_unlock_bh(&sess->session_usage_lock); 745 746 complete(&sess->session_waiting_on_uc_comp); 747 + return; 748 + } 746 749 747 750 spin_unlock_bh(&sess->session_usage_lock); 748 751 } ··· 813 810 spin_lock_bh(&conn->conn_usage_lock); 814 811 conn->conn_usage_count--; 815 812 816 - if (!conn->conn_usage_count && conn->conn_waiting_on_uc) 813 + if (!conn->conn_usage_count && conn->conn_waiting_on_uc) { 814 + spin_unlock_bh(&conn->conn_usage_lock); 817 815 complete(&conn->conn_waiting_on_uc_comp); 816 + return; 817 + } 818 818 819 819 spin_unlock_bh(&conn->conn_usage_lock); 820 820 }
+1 -1
drivers/tty/serial/8250/8250_pci.c
··· 1658 1658 } 1659 1659 1660 1660 static const struct serial_rs485 pci_fintek_rs485_supported = { 1661 - .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND, 1661 + .flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND | SER_RS485_RTS_AFTER_SEND, 1662 1662 /* F81504/508/512 does not support RTS delay before or after send */ 1663 1663 }; 1664 1664
+6 -7
drivers/tty/serial/qcom_geni_serial.c
··· 1888 1888 if (ret) 1889 1889 goto error; 1890 1890 1891 - devm_pm_runtime_enable(port->se.dev); 1892 - 1893 - ret = uart_add_one_port(drv, uport); 1894 - if (ret) 1895 - goto error; 1896 - 1897 1891 if (port->wakeup_irq > 0) { 1898 1892 device_init_wakeup(&pdev->dev, true); 1899 1893 ret = dev_pm_set_dedicated_wake_irq(&pdev->dev, ··· 1895 1901 if (ret) { 1896 1902 device_init_wakeup(&pdev->dev, false); 1897 1903 ida_free(&port_ida, uport->line); 1898 - uart_remove_one_port(drv, uport); 1899 1904 goto error; 1900 1905 } 1901 1906 } 1907 + 1908 + devm_pm_runtime_enable(port->se.dev); 1909 + 1910 + ret = uart_add_one_port(drv, uport); 1911 + if (ret) 1912 + goto error; 1902 1913 1903 1914 return 0; 1904 1915
+6
drivers/tty/serial/serial_core.c
··· 3074 3074 if (uport->cons && uport->dev) 3075 3075 of_console_check(uport->dev->of_node, uport->cons->name, uport->line); 3076 3076 3077 + /* 3078 + * TTY port has to be linked with the driver before register_console() 3079 + * in uart_configure_port(), because user-space could open the console 3080 + * immediately after. 3081 + */ 3082 + tty_port_link_device(port, drv->tty_driver, uport->line); 3077 3083 uart_configure_port(drv, state, uport); 3078 3084 3079 3085 port->console = uart_console(uport);
+2 -2
drivers/uio/uio_pci_generic_sva.c
··· 29 29 struct uio_pci_sva_dev *udev = info->priv; 30 30 struct iommu_domain *domain; 31 31 32 - if (!udev && !udev->pdev) 32 + if (!udev || !udev->pdev) 33 33 return -ENODEV; 34 34 35 35 domain = iommu_get_domain_for_dev(&udev->pdev->dev); ··· 51 51 { 52 52 struct uio_pci_sva_dev *udev = info->priv; 53 53 54 - if (!udev && !udev->pdev) 54 + if (!udev || !udev->pdev) 55 55 return -ENODEV; 56 56 57 57 iommu_sva_unbind_device(udev->sva_handle);
+12
drivers/vfio/pci/vfio_pci_dmabuf.c
··· 20 20 u8 revoked : 1; 21 21 }; 22 22 23 + static int vfio_pci_dma_buf_pin(struct dma_buf_attachment *attachment) 24 + { 25 + return -EOPNOTSUPP; 26 + } 27 + 28 + static void vfio_pci_dma_buf_unpin(struct dma_buf_attachment *attachment) 29 + { 30 + /* Do nothing */ 31 + } 32 + 23 33 static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf, 24 34 struct dma_buf_attachment *attachment) 25 35 { ··· 86 76 } 87 77 88 78 static const struct dma_buf_ops vfio_pci_dmabuf_ops = { 79 + .pin = vfio_pci_dma_buf_pin, 80 + .unpin = vfio_pci_dma_buf_unpin, 89 81 .attach = vfio_pci_dma_buf_attach, 90 82 .map_dma_buf = vfio_pci_dma_buf_map, 91 83 .unmap_dma_buf = vfio_pci_dma_buf_unmap,
+19 -41
drivers/w1/slaves/w1_therm.c
··· 1836 1836 struct w1_slave *sl = dev_to_w1_slave(device); 1837 1837 struct therm_info info; 1838 1838 u8 new_config_register[3]; /* array of data to be written */ 1839 - int temp, ret; 1840 - char *token = NULL; 1839 + long long temp; 1840 + int ret = 0; 1841 1841 s8 tl, th; /* 1 byte per value + temp ring order */ 1842 - char *p_args, *orig; 1842 + const char *p = buf; 1843 + char *endp; 1843 1844 1844 - p_args = orig = kmalloc(size, GFP_KERNEL); 1845 - /* Safe string copys as buf is const */ 1846 - if (!p_args) { 1847 - dev_warn(device, 1848 - "%s: error unable to allocate memory %d\n", 1849 - __func__, -ENOMEM); 1850 - return size; 1851 - } 1852 - strcpy(p_args, buf); 1853 - 1854 - /* Split string using space char */ 1855 - token = strsep(&p_args, " "); 1856 - 1857 - if (!token) { 1858 - dev_info(device, 1859 - "%s: error parsing args %d\n", __func__, -EINVAL); 1860 - goto free_m; 1861 - } 1862 - 1863 - /* Convert 1st entry to int */ 1864 - ret = kstrtoint (token, 10, &temp); 1845 + temp = simple_strtoll(p, &endp, 10); 1846 + if (p == endp || *endp != ' ') 1847 + ret = -EINVAL; 1848 + else if (temp < INT_MIN || temp > INT_MAX) 1849 + ret = -ERANGE; 1865 1850 if (ret) { 1866 1851 dev_info(device, 1867 1852 "%s: error parsing args %d\n", __func__, ret); 1868 - goto free_m; 1853 + return size; 1869 1854 } 1870 1855 1871 1856 tl = int_to_short(temp); 1872 1857 1873 - /* Split string using space char */ 1874 - token = strsep(&p_args, " "); 1875 - if (!token) { 1876 - dev_info(device, 1877 - "%s: error parsing args %d\n", __func__, -EINVAL); 1878 - goto free_m; 1879 - } 1880 - /* Convert 2nd entry to int */ 1881 - ret = kstrtoint (token, 10, &temp); 1858 + p = endp + 1; 1859 + temp = simple_strtoll(p, &endp, 10); 1860 + if (p == endp) 1861 + ret = -EINVAL; 1862 + else if (temp < INT_MIN || temp > INT_MAX) 1863 + ret = -ERANGE; 1882 1864 if (ret) { 1883 1865 dev_info(device, 1884 1866 "%s: error parsing args %d\n", __func__, ret); 1885 - goto free_m; 1867 + return size; 1886 1868 } 1887 1869 1888 1870 /* Prepare to cast to short by eliminating out of range values */ ··· 1887 1905 dev_info(device, 1888 1906 "%s: error reading from the slave device %d\n", 1889 1907 __func__, ret); 1890 - goto free_m; 1908 + return size; 1891 1909 } 1892 1910 1893 1911 /* Write data in the device RAM */ ··· 1895 1913 dev_info(device, 1896 1914 "%s: Device not supported by the driver %d\n", 1897 1915 __func__, -ENODEV); 1898 - goto free_m; 1916 + return size; 1899 1917 } 1900 1918 1901 1919 ret = SLAVE_SPECIFIC_FUNC(sl)->write_data(sl, new_config_register); ··· 1903 1921 dev_info(device, 1904 1922 "%s: error writing to the slave device %d\n", 1905 1923 __func__, ret); 1906 - 1907 - free_m: 1908 - /* free allocated memory */ 1909 - kfree(orig); 1910 1924 1911 1925 return size; 1912 1926 }
-2
drivers/w1/w1.c
··· 758 758 if (err < 0) { 759 759 dev_err(&dev->dev, "%s: Attaching %s failed.\n", __func__, 760 760 sl->name); 761 - dev->slave_count--; 762 - w1_family_put(sl->family); 763 761 atomic_dec(&sl->master->refcnt); 764 762 kfree(sl); 765 763 return err;
+1
drivers/xen/xen-scsiback.c
··· 1262 1262 gnttab_page_cache_shrink(&info->free_pages, 0); 1263 1263 1264 1264 dev_set_drvdata(&dev->dev, NULL); 1265 + kfree(info); 1265 1266 } 1266 1267 1267 1268 static int scsiback_probe(struct xenbus_device *dev,
+2
fs/9p/vfs_dir.c
··· 242 242 .iterate_shared = v9fs_dir_readdir, 243 243 .open = v9fs_file_open, 244 244 .release = v9fs_dir_release, 245 + .setlease = simple_nosetlease, 245 246 }; 246 247 247 248 const struct file_operations v9fs_dir_operations_dotl = { ··· 252 251 .open = v9fs_file_open, 253 252 .release = v9fs_dir_release, 254 253 .fsync = v9fs_file_fsync_dotl, 254 + .setlease = simple_nosetlease, 255 255 };
-22
fs/btrfs/disk-io.c
··· 498 498 #define btree_migrate_folio NULL 499 499 #endif 500 500 501 - static int btree_writepages(struct address_space *mapping, 502 - struct writeback_control *wbc) 503 - { 504 - int ret; 505 - 506 - if (wbc->sync_mode == WB_SYNC_NONE) { 507 - struct btrfs_fs_info *fs_info; 508 - 509 - if (wbc->for_kupdate) 510 - return 0; 511 - 512 - fs_info = inode_to_fs_info(mapping->host); 513 - /* this is a bit racy, but that's ok */ 514 - ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes, 515 - BTRFS_DIRTY_METADATA_THRESH, 516 - fs_info->dirty_metadata_batch); 517 - if (ret < 0) 518 - return 0; 519 - } 520 - return btree_write_cache_pages(mapping, wbc); 521 - } 522 - 523 501 static bool btree_release_folio(struct folio *folio, gfp_t gfp_flags) 524 502 { 525 503 if (folio_test_writeback(folio) || folio_test_dirty(folio))
+1 -2
fs/btrfs/extent_io.c
··· 2286 2286 } 2287 2287 } 2288 2288 2289 - int btree_write_cache_pages(struct address_space *mapping, 2290 - struct writeback_control *wbc) 2289 + int btree_writepages(struct address_space *mapping, struct writeback_control *wbc) 2291 2290 { 2292 2291 struct btrfs_eb_write_context ctx = { .wbc = wbc }; 2293 2292 struct btrfs_fs_info *fs_info = inode_to_fs_info(mapping->host);
+1 -2
fs/btrfs/extent_io.h
··· 237 237 u64 start, u64 end, struct writeback_control *wbc, 238 238 bool pages_dirty); 239 239 int btrfs_writepages(struct address_space *mapping, struct writeback_control *wbc); 240 - int btree_write_cache_pages(struct address_space *mapping, 241 - struct writeback_control *wbc); 240 + int btree_writepages(struct address_space *mapping, struct writeback_control *wbc); 242 241 void btrfs_btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start, u64 end); 243 242 void btrfs_readahead(struct readahead_control *rac); 244 243 int set_folio_extent_mapped(struct folio *folio);
+1
fs/btrfs/zlib.c
··· 139 139 data_in = kmap_local_folio(folio, offset); 140 140 memcpy(workspace->buf + cur - filepos, data_in, copy_length); 141 141 kunmap_local(data_in); 142 + folio_put(folio); 142 143 cur += copy_length; 143 144 } 144 145 return 0;
+2
fs/ceph/dir.c
··· 2214 2214 .fsync = ceph_fsync, 2215 2215 .lock = ceph_lock, 2216 2216 .flock = ceph_flock, 2217 + .setlease = simple_nosetlease, 2217 2218 }; 2218 2219 2219 2220 const struct file_operations ceph_snapdir_fops = { ··· 2222 2221 .llseek = ceph_dir_llseek, 2223 2222 .open = ceph_open, 2224 2223 .release = ceph_release, 2224 + .setlease = simple_nosetlease, 2225 2225 }; 2226 2226 2227 2227 const struct inode_operations ceph_dir_iops = {
+10
fs/dcache.c
··· 1104 1104 return de; 1105 1105 } 1106 1106 1107 + /** 1108 + * d_dispose_if_unused - move unreferenced dentries to shrink list 1109 + * @dentry: dentry in question 1110 + * @dispose: head of shrink list 1111 + * 1112 + * If dentry has no external references, move it to shrink list. 1113 + * 1114 + * NOTE!!! The caller is responsible for preventing eviction of the dentry by 1115 + * holding dentry->d_inode->i_lock or equivalent. 1116 + */ 1107 1117 void d_dispose_if_unused(struct dentry *dentry, struct list_head *dispose) 1108 1118 { 1109 1119 spin_lock(&dentry->d_lock);
+12 -4
fs/fs-writeback.c
··· 2492 2492 wb_wakeup(wb); 2493 2493 } 2494 2494 rcu_read_unlock(); 2495 - schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ); 2495 + if (dirtytime_expire_interval) 2496 + schedule_delayed_work(&dirtytime_work, 2497 + round_jiffies_relative(dirtytime_expire_interval * HZ)); 2496 2498 } 2497 2499 2498 2500 static int dirtytime_interval_handler(const struct ctl_table *table, int write, ··· 2503 2501 int ret; 2504 2502 2505 2503 ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 2506 - if (ret == 0 && write) 2507 - mod_delayed_work(system_percpu_wq, &dirtytime_work, 0); 2504 + if (ret == 0 && write) { 2505 + if (dirtytime_expire_interval) 2506 + mod_delayed_work(system_percpu_wq, &dirtytime_work, 0); 2507 + else 2508 + cancel_delayed_work_sync(&dirtytime_work); 2509 + } 2508 2510 return ret; 2509 2511 } 2510 2512 ··· 2525 2519 2526 2520 static int __init start_dirtytime_writeback(void) 2527 2521 { 2528 - schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ); 2522 + if (dirtytime_expire_interval) 2523 + schedule_delayed_work(&dirtytime_work, 2524 + round_jiffies_relative(dirtytime_expire_interval * HZ)); 2529 2525 register_sysctl_init("vm", vm_fs_writeback_table); 2530 2526 return 0; 2531 2527 }
+37 -29
fs/fuse/dir.c
··· 32 32 spinlock_t lock; 33 33 }; 34 34 35 - #define HASH_BITS 5 36 - #define HASH_SIZE (1 << HASH_BITS) 37 - static struct dentry_bucket dentry_hash[HASH_SIZE]; 35 + #define FUSE_HASH_BITS 5 36 + #define FUSE_HASH_SIZE (1 << FUSE_HASH_BITS) 37 + static struct dentry_bucket dentry_hash[FUSE_HASH_SIZE]; 38 38 struct delayed_work dentry_tree_work; 39 39 40 40 /* Minimum invalidation work queue frequency */ ··· 83 83 84 84 static inline struct dentry_bucket *get_dentry_bucket(struct dentry *dentry) 85 85 { 86 - int i = hash_ptr(dentry, HASH_BITS); 86 + int i = hash_ptr(dentry, FUSE_HASH_BITS); 87 87 88 88 return &dentry_hash[i]; 89 89 } ··· 164 164 struct rb_node *node; 165 165 int i; 166 166 167 - for (i = 0; i < HASH_SIZE; i++) { 167 + for (i = 0; i < FUSE_HASH_SIZE; i++) { 168 168 spin_lock(&dentry_hash[i].lock); 169 169 node = rb_first(&dentry_hash[i].tree); 170 170 while (node) { 171 171 fd = rb_entry(node, struct fuse_dentry, node); 172 - if (time_after64(get_jiffies_64(), fd->time)) { 173 - rb_erase(&fd->node, &dentry_hash[i].tree); 174 - RB_CLEAR_NODE(&fd->node); 172 + if (!time_before64(fd->time, get_jiffies_64())) 173 + break; 174 + 175 + rb_erase(&fd->node, &dentry_hash[i].tree); 176 + RB_CLEAR_NODE(&fd->node); 177 + spin_lock(&fd->dentry->d_lock); 178 + /* If dentry is still referenced, let next dput release it */ 179 + fd->dentry->d_flags |= DCACHE_OP_DELETE; 180 + spin_unlock(&fd->dentry->d_lock); 181 + d_dispose_if_unused(fd->dentry, &dispose); 182 + if (need_resched()) { 175 183 spin_unlock(&dentry_hash[i].lock); 176 - d_dispose_if_unused(fd->dentry, &dispose); 177 184 cond_resched(); 178 185 spin_lock(&dentry_hash[i].lock); 179 - } else 180 - break; 186 + } 181 187 node = rb_first(&dentry_hash[i].tree); 182 188 } 183 189 spin_unlock(&dentry_hash[i].lock); 184 - shrink_dentry_list(&dispose); 185 190 } 191 + shrink_dentry_list(&dispose); 186 192 187 193 if (inval_wq) 188 194 schedule_delayed_work(&dentry_tree_work, ··· 219 213 { 220 214 int i; 221 215 
222 - for (i = 0; i < HASH_SIZE; i++) { 216 + for (i = 0; i < FUSE_HASH_SIZE; i++) { 223 217 spin_lock_init(&dentry_hash[i].lock); 224 218 dentry_hash[i].tree = RB_ROOT; 225 219 } ··· 233 227 inval_wq = 0; 234 228 cancel_delayed_work_sync(&dentry_tree_work); 235 229 236 - for (i = 0; i < HASH_SIZE; i++) 230 + for (i = 0; i < FUSE_HASH_SIZE; i++) 237 231 WARN_ON_ONCE(!RB_EMPTY_ROOT(&dentry_hash[i].tree)); 238 232 } ··· 485 479 return 0; 486 480 } 487 481 488 - static void fuse_dentry_prune(struct dentry *dentry) 482 + static void fuse_dentry_release(struct dentry *dentry) 489 483 { 490 484 struct fuse_dentry *fd = dentry->d_fsdata; 491 485 492 486 if (!RB_EMPTY_NODE(&fd->node)) 493 487 fuse_dentry_tree_del_node(dentry); 494 - } 495 - 496 - static void fuse_dentry_release(struct dentry *dentry) 497 - { 498 - struct fuse_dentry *fd = dentry->d_fsdata; 499 - 500 488 kfree_rcu(fd, rcu); 501 489 } ··· 527 527 .d_revalidate = fuse_dentry_revalidate, 528 528 .d_delete = fuse_dentry_delete, 529 529 .d_init = fuse_dentry_init, 530 - .d_prune = fuse_dentry_prune, 531 530 .d_release = fuse_dentry_release, 532 531 .d_automount = fuse_dentry_automount, 533 532 }; ··· 1583 1584 { 1584 1585 int err = -ENOTDIR; 1585 1586 struct inode *parent; 1586 - struct dentry *dir; 1587 - struct dentry *entry; 1587 + struct dentry *dir = NULL; 1588 + struct dentry *entry = NULL; 1588 1589 1589 1590 parent = fuse_ilookup(fc, parent_nodeid, NULL); 1590 1591 if (!parent) ··· 1597 1598 dir = d_find_alias(parent); 1598 1599 if (!dir) 1599 1600 goto put_parent; 1600 - 1601 - entry = start_removing_noperm(dir, name); 1602 - dput(dir); 1603 - if (IS_ERR(entry)) 1604 - goto put_parent; 1601 + while (!entry) { 1602 + struct dentry *child = try_lookup_noperm(name, dir); 1603 + if (!child || IS_ERR(child)) 1604 + goto put_parent; 1605 + entry = start_removing_dentry(dir, child); 1606 + dput(child); 1607 + if (IS_ERR(entry)) 1608 + goto put_parent; 1609 + if (!d_same_name(entry, dir, name)) { 1610 + end_removing(entry); 1611 + entry = NULL; 1612 + } 1613 + } 1605 1614 1606 1615 fuse_dir_changed(parent); 1607 1616 if (!(flags & FUSE_EXPIRE_ONLY)) ··· 1647 1640 1648 1641 end_removing(entry); 1649 1642 put_parent: 1643 + dput(dir); 1650 1644 iput(parent); 1651 1645 return err; 1652 1646 }
+1
fs/gfs2/file.c
··· 1608 1608 .lock = gfs2_lock, 1609 1609 .flock = gfs2_flock, 1610 1610 .llseek = default_llseek, 1611 + .setlease = simple_nosetlease, 1611 1612 .fop_flags = FOP_ASYNC_LOCK, 1612 1613 }; 1613 1614
+1
fs/iomap/buffered-io.c
··· 851 851 } 852 852 853 853 folio_get(folio); 854 + folio_wait_stable(folio); 854 855 return folio; 855 856 } 856 857
+1
fs/nfs/dir.c
··· 66 66 .open = nfs_opendir, 67 67 .release = nfs_closedir, 68 68 .fsync = nfs_fsync_dir, 69 + .setlease = simple_nosetlease, 69 70 }; 70 71 71 72 const struct address_space_operations nfs_dir_aops = {
-2
fs/nfs/nfs4file.c
··· 431 431 static int nfs4_setlease(struct file *file, int arg, struct file_lease **lease, 432 432 void **priv) 433 433 { 434 - if (!S_ISREG(file_inode(file)->i_mode)) 435 - return -EINVAL; 436 434 return nfs4_proc_setlease(file, arg, lease, priv); 437 435 } 438 436
+3
fs/readdir.c
··· 316 316 struct getdents_callback buf = { 317 317 .ctx.actor = filldir, 318 318 .ctx.count = count, 319 + .ctx.dt_flags_mask = FILLDIR_FLAG_NOINTR, 319 320 .current_dir = dirent 320 321 }; 321 322 int error; ··· 401 400 struct getdents_callback64 buf = { 402 401 .ctx.actor = filldir64, 403 402 .ctx.count = count, 403 + .ctx.dt_flags_mask = FILLDIR_FLAG_NOINTR, 404 404 .current_dir = dirent 405 405 }; 406 406 int error; ··· 571 569 struct compat_getdents_callback buf = { 572 570 .ctx.actor = compat_filldir, 573 571 .ctx.count = count, 572 + .ctx.dt_flags_mask = FILLDIR_FLAG_NOINTR, 574 573 .current_dir = dirent, 575 574 }; 576 575 int error;
+4 -1
fs/romfs/super.c
··· 458 458 459 459 #ifdef CONFIG_BLOCK 460 460 if (!sb->s_mtd) { 461 - sb_set_blocksize(sb, ROMBSIZE); 461 + if (!sb_set_blocksize(sb, ROMBSIZE)) { 462 + errorf(fc, "romfs: unable to set blocksize\n"); 463 + return -EINVAL; 464 + } 462 465 } else { 463 466 sb->s_blocksize = ROMBSIZE; 464 467 sb->s_blocksize_bits = blksize_bits(ROMBSIZE);
+1 -3
fs/smb/client/cifsfs.c
··· 1149 1149 struct inode *inode = file_inode(file); 1150 1150 struct cifsFileInfo *cfile = file->private_data; 1151 1151 1152 - if (!S_ISREG(inode->i_mode)) 1153 - return -EINVAL; 1154 - 1155 1152 /* Check if file is oplocked if this is request for new lease */ 1156 1153 if (arg == F_UNLCK || 1157 1154 ((arg == F_RDLCK) && CIFS_CACHE_READ(CIFS_I(inode))) || ··· 1709 1712 .remap_file_range = cifs_remap_file_range, 1710 1713 .llseek = generic_file_llseek, 1711 1714 .fsync = cifs_dir_fsync, 1715 + .setlease = simple_nosetlease, 1712 1716 }; 1713 1717 1714 1718 static void
+8 -8
fs/smb/server/transport_rdma.c
··· 1353 1353 1354 1354 static int get_mapped_sg_list(struct ib_device *device, void *buf, int size, 1355 1355 struct scatterlist *sg_list, int nentries, 1356 - enum dma_data_direction dir) 1356 + enum dma_data_direction dir, int *npages) 1357 1357 { 1358 - int npages; 1359 - 1360 - npages = get_sg_list(buf, size, sg_list, nentries); 1361 - if (npages < 0) 1358 + *npages = get_sg_list(buf, size, sg_list, nentries); 1359 + if (*npages < 0) 1362 1360 return -EINVAL; 1363 - return ib_dma_map_sg(device, sg_list, npages, dir); 1361 + return ib_dma_map_sg(device, sg_list, *npages, dir); 1364 1362 } 1365 1363 1366 1364 static int post_sendmsg(struct smbdirect_socket *sc, ··· 1429 1431 for (i = 0; i < niov; i++) { 1430 1432 struct ib_sge *sge; 1431 1433 int sg_cnt; 1434 + int npages; 1432 1435 1433 1436 sg_init_table(sg, SMBDIRECT_SEND_IO_MAX_SGE - 1); 1434 1437 sg_cnt = get_mapped_sg_list(sc->ib.dev, 1435 1438 iov[i].iov_base, iov[i].iov_len, 1436 1439 sg, SMBDIRECT_SEND_IO_MAX_SGE - 1, 1437 - DMA_TO_DEVICE); 1440 + DMA_TO_DEVICE, &npages); 1438 1441 if (sg_cnt <= 0) { 1439 1442 pr_err("failed to map buffer\n"); 1440 1443 ret = -ENOMEM; ··· 1443 1444 } else if (sg_cnt + msg->num_sge > SMBDIRECT_SEND_IO_MAX_SGE) { 1444 1445 pr_err("buffer not fitted into sges\n"); 1445 1446 ret = -E2BIG; 1446 - ib_dma_unmap_sg(sc->ib.dev, sg, sg_cnt, 1447 + ib_dma_unmap_sg(sc->ib.dev, sg, npages, 1447 1448 DMA_TO_DEVICE); 1448 1449 goto err; 1449 1450 } ··· 2707 2708 { 2708 2709 int ret; 2709 2710 2711 + smb_direct_port = SMB_DIRECT_PORT_INFINIBAND; 2710 2712 smb_direct_listener.cm_id = NULL; 2711 2713 2712 2714 ret = ib_register_client(&smb_direct_ib_client);
+1 -1
fs/smb/server/vfs.c
··· 1227 1227 } 1228 1228 1229 1229 /** 1230 - * ksmbd_vfs_kern_path_start_remove() - lookup a file and get path info prior to removal 1230 + * ksmbd_vfs_kern_path_start_removing() - lookup a file and get path info prior to removal 1231 1231 * @work: work 1232 1232 * @filepath: file path that is relative to share 1233 1233 * @flags: lookup flags
+1
fs/vboxsf/dir.c
··· 186 186 .release = vboxsf_dir_release, 187 187 .read = generic_read_dir, 188 188 .llseek = generic_file_llseek, 189 + .setlease = simple_nosetlease, 189 190 }; 190 191 191 192 /*
+17 -2
include/drm/drm_pagemap.h
··· 209 209 struct dma_fence *pre_migrate_fence); 210 210 }; 211 211 212 + #if IS_ENABLED(CONFIG_ZONE_DEVICE) 213 + 214 + struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page); 215 + 216 + #else 217 + 218 + static inline struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page) 219 + { 220 + return NULL; 221 + } 222 + 223 + #endif /* IS_ENABLED(CONFIG_ZONE_DEVICE) */ 224 + 212 225 /** 213 226 * struct drm_pagemap_devmem - Structure representing a GPU SVM device memory allocation 214 227 * ··· 246 233 struct dma_fence *pre_migrate_fence; 247 234 }; 248 235 236 + #if IS_ENABLED(CONFIG_ZONE_DEVICE) 237 + 249 238 int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation, 250 239 struct mm_struct *mm, 251 240 unsigned long start, unsigned long end, ··· 257 242 int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation); 258 243 259 244 const struct dev_pagemap_ops *drm_pagemap_pagemap_ops_get(void); 260 - 261 - struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page); 262 245 263 246 void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation, 264 247 struct device *dev, struct mm_struct *mm, ··· 268 255 unsigned long start, unsigned long end, 269 256 struct mm_struct *mm, 270 257 unsigned long timeslice_ms); 258 + 259 + #endif /* IS_ENABLED(CONFIG_ZONE_DEVICE) */ 271 260 272 261 #endif
+9
include/linux/device/driver.h
··· 85 85 * uevent. 86 86 * @p: Driver core's private data, no one other than the driver 87 87 * core can touch this. 88 + * @p_cb: Callbacks private to the driver core; no one other than the 89 + * driver core is allowed to touch this. 88 90 * 89 91 * The device driver-model tracks all of the drivers known to the system. 90 92 * The main reason for this tracking is to enable the driver core to match ··· 121 119 void (*coredump) (struct device *dev); 122 120 123 121 struct driver_private *p; 122 + struct { 123 + /* 124 + * Called after remove() and after all devres entries have been 125 + * processed. This is a Rust only callback. 126 + */ 127 + void (*post_unbind_rust)(struct device *dev); 128 + } p_cb; 124 129 }; 125 130 126 131
+5 -1
include/linux/fs.h
··· 1855 1855 * INT_MAX unlimited 1856 1856 */ 1857 1857 int count; 1858 + /* @actor supports these flags in d_type high bits */ 1859 + unsigned int dt_flags_mask; 1858 1860 }; 1859 1861 1860 1862 /* If OR-ed with d_type, pending signals are not checked */ ··· 3526 3524 const char *name, int namelen, 3527 3525 u64 ino, unsigned type) 3528 3526 { 3529 - return ctx->actor(ctx, name, namelen, ctx->pos, ino, type); 3527 + unsigned int dt_mask = S_DT_MASK | ctx->dt_flags_mask; 3528 + 3529 + return ctx->actor(ctx, name, namelen, ctx->pos, ino, type & dt_mask); 3530 3530 } 3531 3531 static inline bool dir_emit_dot(struct file *file, struct dir_context *ctx) 3532 3532 {
+2
include/linux/iio/iio-opaque.h
··· 14 14 * @mlock: lock used to prevent simultaneous device state changes 15 15 * @mlock_key: lockdep class for iio_dev lock 16 16 * @info_exist_lock: lock to prevent use during removal 17 + * @info_exist_key: lockdep class for info_exist lock 17 18 * @trig_readonly: mark the current trigger immutable 18 19 * @event_interface: event chrdevs associated with interrupt lines 19 20 * @attached_buffers: array of buffers statically attached by the driver ··· 48 47 struct mutex mlock; 49 48 struct lock_class_key mlock_key; 50 49 struct mutex info_exist_lock; 50 + struct lock_class_key info_exist_key; 51 51 bool trig_readonly; 52 52 struct iio_event_interface *event_interface; 53 53 struct iio_buffer **attached_buffers;
+7 -6
include/net/bonding.h
··· 522 522 static inline unsigned long slave_oldest_target_arp_rx(struct bonding *bond, 523 523 struct slave *slave) 524 524 { 525 + unsigned long tmp, ret = READ_ONCE(slave->target_last_arp_rx[0]); 525 526 int i = 1; 526 - unsigned long ret = slave->target_last_arp_rx[0]; 527 527 528 - for (; (i < BOND_MAX_ARP_TARGETS) && bond->params.arp_targets[i]; i++) 529 - if (time_before(slave->target_last_arp_rx[i], ret)) 530 - ret = slave->target_last_arp_rx[i]; 531 - 528 + for (; (i < BOND_MAX_ARP_TARGETS) && bond->params.arp_targets[i]; i++) { 529 + tmp = READ_ONCE(slave->target_last_arp_rx[i]); 530 + if (time_before(tmp, ret)) 531 + ret = tmp; 532 + } 532 533 return ret; 533 534 } 534 535 ··· 539 538 if (bond->params.arp_all_targets == BOND_ARP_TARGETS_ALL) 540 539 return slave_oldest_target_arp_rx(bond, slave); 541 540 542 - return slave->last_rx; 541 + return READ_ONCE(slave->last_rx); 543 542 } 544 543 545 544 static inline void slave_update_last_tx(struct slave *slave)
+2
include/net/nfc/nfc.h
··· 219 219 220 220 int nfc_register_device(struct nfc_dev *dev); 221 221 222 + void nfc_unregister_rfkill(struct nfc_dev *dev); 223 + void nfc_remove_device(struct nfc_dev *dev); 222 224 void nfc_unregister_device(struct nfc_dev *dev); 223 225 224 226 /**
+4 -2
include/uapi/linux/blkzoned.h
··· 81 81 BLK_ZONE_COND_FULL = 0xE, 82 82 BLK_ZONE_COND_OFFLINE = 0xF, 83 83 84 - BLK_ZONE_COND_ACTIVE = 0xFF, 84 + BLK_ZONE_COND_ACTIVE = 0xFF, /* added in Linux 6.19 */ 85 + #define BLK_ZONE_COND_ACTIVE BLK_ZONE_COND_ACTIVE 85 86 }; 86 87 87 88 /** ··· 101 100 BLK_ZONE_REP_CAPACITY = (1U << 0), 102 101 103 102 /* Input flags */ 104 - BLK_ZONE_REP_CACHED = (1U << 31), 103 + BLK_ZONE_REP_CACHED = (1U << 31), /* added in Linux 6.19 */ 104 + #define BLK_ZONE_REP_CACHED BLK_ZONE_REP_CACHED 105 105 }; 106 106 107 107 /**
+1 -1
include/uapi/linux/comedi.h
··· 640 640 641 641 /** 642 642 * struct comedi_rangeinfo - used to retrieve the range table for a channel 643 - * @range_type: Encodes subdevice index (bits 27:24), channel index 643 + * @range_type: Encodes subdevice index (bits 31:24), channel index 644 644 * (bits 23:16) and range table length (bits 15:0). 645 645 * @range_ptr: Pointer to array of @struct comedi_krange to be filled 646 646 * in with the range table for the channel or subdevice.
+1 -1
io_uring/io-wq.c
··· 598 598 __releases(&acct->lock) 599 599 { 600 600 struct io_wq *wq = worker->wq; 601 - bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state); 602 601 603 602 do { 603 + bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state); 604 604 struct io_wq_work *work; 605 605 606 606 /*
+11 -4
io_uring/rw.c
··· 144 144 return 0; 145 145 } 146 146 147 - static void io_rw_recycle(struct io_kiocb *req, unsigned int issue_flags) 147 + static bool io_rw_recycle(struct io_kiocb *req, unsigned int issue_flags) 148 148 { 149 149 struct io_async_rw *rw = req->async_data; 150 150 151 151 if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) 152 - return; 152 + return false; 153 153 154 154 io_alloc_cache_vec_kasan(&rw->vec); 155 155 if (rw->vec.nr > IO_VEC_CACHE_SOFT_CAP) 156 156 io_vec_free(&rw->vec); 157 157 158 - if (io_alloc_cache_put(&req->ctx->rw_cache, rw)) 158 + if (io_alloc_cache_put(&req->ctx->rw_cache, rw)) { 159 159 io_req_async_data_clear(req, 0); 160 + return true; 161 + } 162 + return false; 160 163 } 161 164 162 165 static void io_req_rw_cleanup(struct io_kiocb *req, unsigned int issue_flags) ··· 193 190 */ 194 191 if (!(req->flags & (REQ_F_REISSUE | REQ_F_REFCOUNT))) { 195 192 req->flags &= ~REQ_F_NEED_CLEANUP; 196 - io_rw_recycle(req, issue_flags); 193 + if (!io_rw_recycle(req, issue_flags)) { 194 + struct io_async_rw *rw = req->async_data; 195 + 196 + io_vec_free(&rw->vec); 197 + } 197 198 } 198 199 } 199 200
+3 -3
io_uring/waitid.c
··· 114 114 struct io_waitid *iw = io_kiocb_to_cmd(req, struct io_waitid); 115 115 struct wait_queue_head *head; 116 116 117 - head = READ_ONCE(iw->head); 117 + head = smp_load_acquire(&iw->head); 118 118 if (head) { 119 119 struct io_waitid_async *iwa = req->async_data; 120 120 121 - iw->head = NULL; 121 + smp_store_release(&iw->head, NULL); 122 122 spin_lock_irq(&head->lock); 123 123 list_del_init(&iwa->wo.child_wait.entry); 124 124 spin_unlock_irq(&head->lock); ··· 246 246 return 0; 247 247 248 248 list_del_init(&wait->entry); 249 - iw->head = NULL; 249 + smp_store_release(&iw->head, NULL); 250 250 251 251 /* cancel is in progress */ 252 252 if (atomic_fetch_inc(&iw->refs) & IO_WAITID_REF_MASK)
+9
kernel/events/core.c
··· 6997 6997 if (data_page_nr(event->rb) != nr_pages) 6998 6998 return -EINVAL; 6999 6999 7000 + /* 7001 + * If this event doesn't have mmap_count, we're attempting to 7002 + * create an alias of another event's mmap(); this would mean 7003 + * both events will end up scribbling the same user_page; 7004 + * which makes no sense. 7005 + */ 7006 + if (!refcount_read(&event->mmap_count)) 7007 + return -EBUSY; 7008 + 7000 7009 if (refcount_inc_not_zero(&event->rb->mmap_count)) { 7001 7010 /* 7002 7011 * Success -- managed to mmap() the same buffer
-16
kernel/sched/fair.c
··· 8828 8828 if ((wake_flags & WF_FORK) || pse->sched_delayed) 8829 8829 return; 8830 8830 8831 - /* 8832 - * If @p potentially is completing work required by current then 8833 - * consider preemption. 8834 - * 8835 - * Reschedule if waker is no longer eligible. */ 8836 - if (in_task() && !entity_eligible(cfs_rq, se)) { 8837 - preempt_action = PREEMPT_WAKEUP_RESCHED; 8838 - goto preempt; 8839 - } 8840 - 8841 8831 /* Prefer picking wakee soon if appropriate. */ 8842 8832 if (sched_feat(NEXT_BUDDY) && 8843 8833 set_preempt_buddy(cfs_rq, wake_flags, pse, se)) { ··· 8984 8994 if (new_tasks > 0) 8985 8995 goto again; 8986 8996 } 8987 - 8988 - /* 8989 - * rq is about to be idle, check if we need to update the 8990 - * lost_idle_time of clock_pelt 8991 - */ 8992 - update_idle_rq_clock_pelt(rq); 8993 8997 8994 8998 return NULL; 8995 8999 }
+1 -1
kernel/sched/features.h
··· 29 29 * wakeup-preemption), since its likely going to consume data we 30 30 * touched, increases cache locality. 31 31 */ 32 - SCHED_FEAT(NEXT_BUDDY, true) 32 + SCHED_FEAT(NEXT_BUDDY, false) 33 33 34 34 /* 35 35 * Allow completely ignoring cfs_rq->next; which can be set from various
+6
kernel/sched/idle.c
··· 468 468 scx_update_idle(rq, true, true); 469 469 schedstat_inc(rq->sched_goidle); 470 470 next->se.exec_start = rq_clock_task(rq); 471 + 472 + /* 473 + * rq is about to be idle, check if we need to update the 474 + * lost_idle_time of clock_pelt 475 + */ 476 + update_idle_rq_clock_pelt(rq); 471 477 } 472 478 473 479 struct task_struct *pick_task_idle(struct rq *rq, struct rq_flags *rf)
+1 -1
kernel/time/clocksource.c
··· 252 252 253 253 static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow) 254 254 { 255 - int64_t md = 2 * watchdog->uncertainty_margin; 255 + int64_t md = watchdog->uncertainty_margin; 256 256 unsigned int nretries, max_retries; 257 257 int64_t wd_delay, wd_seq_delay; 258 258 u64 wd_end, wd_end2;
+1 -1
kernel/time/timekeeping.c
··· 2735 2735 timekeeping_update_from_shadow(tkd, TK_CLOCK_WAS_SET); 2736 2736 result->clock_set = true; 2737 2737 } else { 2738 - tk_update_leap_state_all(&tk_core); 2738 + tk_update_leap_state_all(tkd); 2739 2739 } 2740 2740 2741 2741 /* Update the multiplier immediately if frequency was set directly */
+4 -4
kernel/trace/trace.c
··· 6115 6115 unsigned long addr = (unsigned long)key; 6116 6116 const struct trace_mod_entry *ent = pivot; 6117 6117 6118 - if (addr >= ent[0].mod_addr && addr < ent[1].mod_addr) 6119 - return 0; 6120 - else 6121 - return addr - ent->mod_addr; 6118 + if (addr < ent[0].mod_addr) 6119 + return -1; 6120 + 6121 + return addr >= ent[1].mod_addr; 6122 6122 } 6123 6123 6124 6124 /**
+9
kernel/trace/trace_events_hist.c
··· 2057 2057 hist_field->fn_num = HIST_FIELD_FN_RELDYNSTRING; 2058 2058 else 2059 2059 hist_field->fn_num = HIST_FIELD_FN_PSTRING; 2060 + } else if (field->filter_type == FILTER_STACKTRACE) { 2061 + flags |= HIST_FIELD_FL_STACKTRACE; 2062 + 2063 + hist_field->size = MAX_FILTER_STR_VAL; 2064 + hist_field->type = kstrdup_const(field->type, GFP_KERNEL); 2065 + if (!hist_field->type) 2066 + goto free; 2067 + 2068 + hist_field->fn_num = HIST_FIELD_FN_STACK; 2060 2069 } else { 2061 2070 hist_field->size = field->size; 2062 2071 hist_field->is_signed = field->is_signed;
+7 -1
kernel/trace/trace_events_synth.c
··· 130 130 struct synth_event *event = call->data; 131 131 unsigned int i, size, n_u64; 132 132 char *name, *type; 133 + int filter_type; 133 134 bool is_signed; 135 + bool is_stack; 134 136 int ret = 0; 135 137 136 138 for (i = 0, n_u64 = 0; i < event->n_fields; i++) { ··· 140 138 is_signed = event->fields[i]->is_signed; 141 139 type = event->fields[i]->type; 142 140 name = event->fields[i]->name; 141 + is_stack = event->fields[i]->is_stack; 142 + 143 + filter_type = is_stack ? FILTER_STACKTRACE : FILTER_OTHER; 144 + 143 145 ret = trace_define_field(call, type, name, offset, size, 144 - is_signed, FILTER_OTHER); 146 + is_signed, filter_type); 145 147 if (ret) 146 148 break; 147 149
+1 -1
kernel/trace/trace_functions_graph.c
··· 901 901 trace_seq_printf(s, "%ps", func); 902 902 903 903 if (args_size >= FTRACE_REGS_MAX_ARGS * sizeof(long)) { 904 - print_function_args(s, entry->args, (unsigned long)func); 904 + print_function_args(s, FGRAPH_ENTRY_ARGS(entry), (unsigned long)func); 905 905 trace_seq_putc(s, ';'); 906 906 } else 907 907 trace_seq_puts(s, "();");
+1 -1
net/bridge/br_input.c
··· 274 274 int ret; 275 275 276 276 net = dev_net(skb->dev); 277 - #ifdef HAVE_JUMP_LABEL 277 + #ifdef CONFIG_JUMP_LABEL 278 278 if (!static_key_false(&nf_hooks_needed[NFPROTO_BRIDGE][NF_BR_PRE_ROUTING])) 279 279 goto frame_finish; 280 280 #endif
+2
net/core/filter.c
··· 3353 3353 shinfo->gso_type &= ~SKB_GSO_TCPV4; 3354 3354 shinfo->gso_type |= SKB_GSO_TCPV6; 3355 3355 } 3356 + shinfo->gso_type |= SKB_GSO_DODGY; 3356 3357 } 3357 3358 3358 3359 bpf_skb_change_protocol(skb, ETH_P_IPV6); ··· 3384 3383 shinfo->gso_type &= ~SKB_GSO_TCPV6; 3385 3384 shinfo->gso_type |= SKB_GSO_TCPV4; 3386 3385 } 3386 + shinfo->gso_type |= SKB_GSO_DODGY; 3387 3387 } 3388 3388 3389 3389 bpf_skb_change_protocol(skb, ETH_P_IP);
+2 -1
net/ipv4/tcp_offload.c
··· 107 107 if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) { 108 108 struct tcphdr *th = tcp_hdr(skb); 109 109 110 - if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size) 110 + if ((skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size) && 111 + !(skb_shinfo(skb)->gso_type & SKB_GSO_DODGY)) 111 112 return __tcp4_gso_segment_list(skb, features); 112 113 113 114 skb->ip_summed = CHECKSUM_NONE;
+2 -1
net/ipv4/udp_offload.c
··· 514 514 515 515 if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) { 516 516 /* Detect modified geometry and pass those to skb_segment. */ 517 - if (skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size) 517 + if ((skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size) && 518 + !(skb_shinfo(gso_skb)->gso_type & SKB_GSO_DODGY)) 518 519 return __udp_gso_segment_list(gso_skb, features, is_ipv6); 519 520 520 521 ret = __skb_linearize(gso_skb);
+3 -1
net/ipv6/icmp.c
··· 966 966 fl6.daddr = ipv6_hdr(skb)->saddr; 967 967 if (saddr) 968 968 fl6.saddr = *saddr; 969 - fl6.flowi6_oif = icmp6_iif(skb); 969 + fl6.flowi6_oif = ipv6_addr_loopback(&fl6.daddr) ? 970 + skb->dev->ifindex : 971 + icmp6_iif(skb); 970 972 fl6.fl6_icmp_type = type; 971 973 fl6.flowi6_mark = mark; 972 974 fl6.flowi6_uid = sock_net_uid(net, NULL);
+2 -1
net/ipv6/tcpv6_offload.c
··· 168 168 if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) { 169 169 struct tcphdr *th = tcp_hdr(skb); 170 170 171 - if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size) 171 + if ((skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size) && 172 + !(skb_shinfo(skb)->gso_type & SKB_GSO_DODGY)) 172 173 return __tcp6_gso_segment_list(skb, features); 173 174 174 175 skb->ip_summed = CHECKSUM_NONE;
+5 -3
net/mac80211/mlme.c
··· 8 8 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 9 9 * Copyright 2013-2014 Intel Mobile Communications GmbH 10 10 * Copyright (C) 2015 - 2017 Intel Deutschland GmbH 11 - * Copyright (C) 2018 - 2025 Intel Corporation 11 + * Copyright (C) 2018 - 2026 Intel Corporation 12 12 */ 13 13 14 14 #include <linux/delay.h> ··· 6190 6190 return -EINVAL; 6191 6191 } 6192 6192 6193 - link_map_presence = *pos; 6194 - pos++; 6193 + if (!(control & IEEE80211_TTLM_CONTROL_DEF_LINK_MAP)) { 6194 + link_map_presence = *pos; 6195 + pos++; 6196 + } 6195 6197 6196 6198 if (control & IEEE80211_TTLM_CONTROL_SWITCH_TIME_PRESENT) { 6197 6199 ttlm_info->switch_time = get_unaligned_le16(pos);
+13 -3
net/mptcp/pm_kernel.c
··· 1294 1294 int mptcp_pm_nl_flush_addrs_doit(struct sk_buff *skb, struct genl_info *info) 1295 1295 { 1296 1296 struct pm_nl_pernet *pernet = genl_info_pm_nl(info); 1297 - LIST_HEAD(free_list); 1297 + struct list_head free_list; 1298 1298 1299 1299 spin_lock_bh(&pernet->lock); 1300 - list_splice_init(&pernet->endp_list, &free_list); 1300 + free_list = pernet->endp_list; 1301 + INIT_LIST_HEAD_RCU(&pernet->endp_list); 1301 1302 __reset_counters(pernet); 1302 1303 pernet->next_id = 1; 1303 1304 bitmap_zero(pernet->id_bitmap, MPTCP_PM_MAX_ADDR_ID + 1); 1304 1305 spin_unlock_bh(&pernet->lock); 1305 - mptcp_nl_flush_addrs_list(sock_net(skb->sk), &free_list); 1306 + 1307 + if (free_list.next == &pernet->endp_list) 1308 + return 0; 1309 + 1306 1310 synchronize_rcu(); 1311 + 1312 + /* Adjust the pointers to free_list instead of pernet->endp_list */ 1313 + free_list.prev->next = &free_list; 1314 + free_list.next->prev = &free_list; 1315 + 1316 + mptcp_nl_flush_addrs_list(sock_net(skb->sk), &free_list); 1307 1317 __flush_addrs(&free_list); 1308 1318 return 0; 1309 1319 }
+7 -6
net/mptcp/protocol.c
··· 821 821 822 822 static bool __mptcp_subflow_error_report(struct sock *sk, struct sock *ssk) 823 823 { 824 - int err = sock_error(ssk); 825 824 int ssk_state; 826 - 827 - if (!err) 828 - return false; 825 + int err; 829 826 830 827 /* only propagate errors on fallen-back sockets or 831 828 * on MPC connect 832 829 */ 833 830 if (sk->sk_state != TCP_SYN_SENT && !__mptcp_check_fallback(mptcp_sk(sk))) 831 + return false; 832 + 833 + err = sock_error(ssk); 834 + if (!err) 834 835 return false; 835 836 836 837 /* We need to propagate only transition to CLOSE state. ··· 2599 2598 struct mptcp_sock *msk = mptcp_sk(sk); 2600 2599 struct sk_buff *skb; 2601 2600 2602 - /* The first subflow can already be closed and still in the list */ 2603 - if (subflow->close_event_done) 2601 + /* The first subflow can already be closed or disconnected */ 2602 + if (subflow->close_event_done || READ_ONCE(subflow->local_id) < 0) 2604 2603 return; 2605 2604 2606 2605 subflow->close_event_done = true;
+24 -3
net/nfc/core.c
··· 1147 1147 EXPORT_SYMBOL(nfc_register_device); 1148 1148 1149 1149 /** 1150 - * nfc_unregister_device - unregister a nfc device in the nfc subsystem 1150 + * nfc_unregister_rfkill - unregister a nfc device in the rfkill subsystem 1151 1151 * 1152 1152 * @dev: The nfc device to unregister 1153 1153 */ 1154 - void nfc_unregister_device(struct nfc_dev *dev) 1154 + void nfc_unregister_rfkill(struct nfc_dev *dev) 1155 1155 { 1156 - int rc; 1157 1156 struct rfkill *rfk = NULL; 1157 + int rc; 1158 1158 1159 1159 pr_debug("dev_name=%s\n", dev_name(&dev->dev)); 1160 1160 ··· 1175 1175 rfkill_unregister(rfk); 1176 1176 rfkill_destroy(rfk); 1177 1177 } 1178 + } 1179 + EXPORT_SYMBOL(nfc_unregister_rfkill); 1178 1180 1181 + /** 1182 + * nfc_remove_device - remove a nfc device in the nfc subsystem 1183 + * 1184 + * @dev: The nfc device to remove 1185 + */ 1186 + void nfc_remove_device(struct nfc_dev *dev) 1187 + { 1179 1188 if (dev->ops->check_presence) { 1180 1189 timer_delete_sync(&dev->check_pres_timer); 1181 1190 cancel_work_sync(&dev->check_pres_work); ··· 1196 1187 nfc_devlist_generation++; 1197 1188 device_del(&dev->dev); 1198 1189 mutex_unlock(&nfc_devlist_mutex); 1190 + } 1191 + EXPORT_SYMBOL(nfc_remove_device); 1192 + 1193 + /** 1194 + * nfc_unregister_device - unregister a nfc device in the nfc subsystem 1195 + * 1196 + * @dev: The nfc device to unregister 1197 + */ 1198 + void nfc_unregister_device(struct nfc_dev *dev) 1199 + { 1200 + nfc_unregister_rfkill(dev); 1201 + nfc_remove_device(dev); 1199 1202 } 1200 1203 EXPORT_SYMBOL(nfc_unregister_device); 1201 1204
+16 -1
net/nfc/llcp_commands.c
··· 778 778 if (likely(frag_len > 0)) 779 779 skb_put_data(pdu, msg_ptr, frag_len); 780 780 781 + spin_lock(&local->tx_queue.lock); 782 + 783 + if (list_empty(&local->list)) { 784 + spin_unlock(&local->tx_queue.lock); 785 + 786 + kfree_skb(pdu); 787 + 788 + len -= remaining_len; 789 + if (len == 0) 790 + len = -ENXIO; 791 + break; 792 + } 793 + 781 794 /* No need to check for the peer RW for UI frames */ 782 - skb_queue_tail(&local->tx_queue, pdu); 795 + __skb_queue_tail(&local->tx_queue, pdu); 796 + 797 + spin_unlock(&local->tx_queue.lock); 783 798 784 799 remaining_len -= frag_len; 785 800 msg_ptr += frag_len;
+3 -1
net/nfc/llcp_core.c
··· 316 316 spin_lock(&llcp_devices_lock); 317 317 list_for_each_entry_safe(local, tmp, &llcp_devices, list) 318 318 if (local->dev == dev) { 319 - list_del(&local->list); 319 + spin_lock(&local->tx_queue.lock); 320 + list_del_init(&local->list); 321 + spin_unlock(&local->tx_queue.lock); 320 322 spin_unlock(&llcp_devices_lock); 321 323 return local; 322 324 }
+3 -1
net/nfc/nci/core.c
··· 1303 1303 { 1304 1304 struct nci_conn_info *conn_info, *n; 1305 1305 1306 + nfc_unregister_rfkill(ndev->nfc_dev); 1307 + 1306 1308 /* This set_bit is not protected with specialized barrier, 1307 1309 * However, it is fine because the mutex_lock(&ndev->req_lock); 1308 1310 * in nci_close_device() will help to emit one. ··· 1322 1320 /* conn_info is allocated with devm_kzalloc */ 1323 1321 } 1324 1322 1325 - nfc_unregister_device(ndev->nfc_dev); 1323 + nfc_remove_device(ndev->nfc_dev); 1326 1324 } 1327 1325 EXPORT_SYMBOL(nci_unregister_device); 1328 1326
+33 -8
rust/kernel/auxiliary.rs
··· 23 23 /// An adapter for the registration of auxiliary drivers. 24 24 pub struct Adapter<T: Driver>(T); 25 25 26 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 26 + // SAFETY: 27 + // - `bindings::auxiliary_driver` is a C type declared as `repr(C)`. 28 + // - `T` is the type of the driver's device private data. 29 + // - `struct auxiliary_driver` embeds a `struct device_driver`. 30 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 31 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 32 + type DriverType = bindings::auxiliary_driver; 33 + type DriverData = T; 34 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 35 + } 36 + 37 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 27 38 // a preceding call to `register` has been successful. 28 39 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 29 - type RegType = bindings::auxiliary_driver; 30 - 31 40 unsafe fn register( 32 - adrv: &Opaque<Self::RegType>, 41 + adrv: &Opaque<Self::DriverType>, 33 42 name: &'static CStr, 34 43 module: &'static ThisModule, 35 44 ) -> Result { ··· 50 41 (*adrv.get()).id_table = T::ID_TABLE.as_ptr(); 51 42 } 52 43 53 - // SAFETY: `adrv` is guaranteed to be a valid `RegType`. 44 + // SAFETY: `adrv` is guaranteed to be a valid `DriverType`. 54 45 to_result(unsafe { 55 46 bindings::__auxiliary_driver_register(adrv.get(), module.0, name.as_char_ptr()) 56 47 }) 57 48 } 58 49 59 - unsafe fn unregister(adrv: &Opaque<Self::RegType>) { 60 - // SAFETY: `adrv` is guaranteed to be a valid `RegType`. 50 + unsafe fn unregister(adrv: &Opaque<Self::DriverType>) { 51 + // SAFETY: `adrv` is guaranteed to be a valid `DriverType`. 
61 52 unsafe { bindings::auxiliary_driver_unregister(adrv.get()) } 62 53 } 63 54 } ··· 96 87 // SAFETY: `remove_callback` is only ever called after a successful call to 97 88 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 98 89 // and stored a `Pin<KBox<T>>`. 99 - drop(unsafe { adev.as_ref().drvdata_obtain::<T>() }); 90 + let data = unsafe { adev.as_ref().drvdata_borrow::<T>() }; 91 + 92 + T::unbind(adev, data); 100 93 } 101 94 } 102 95 ··· 198 187 /// 199 188 /// Called when an auxiliary device is matches a corresponding driver. 200 189 fn probe(dev: &Device<device::Core>, id_info: &Self::IdInfo) -> impl PinInit<Self, Error>; 190 + 191 + /// Auxiliary driver unbind. 192 + /// 193 + /// Called when a [`Device`] is unbound from its bound [`Driver`]. Implementing this callback 194 + /// is optional. 195 + /// 196 + /// This callback serves as a place for drivers to perform teardown operations that require a 197 + /// `&Device<Core>` or `&Device<Bound>` reference. For instance, drivers may try to perform I/O 198 + /// operations to gracefully tear down the device. 199 + /// 200 + /// Otherwise, release operations for driver resources should be performed in `Self::drop`. 201 + fn unbind(dev: &Device<device::Core>, this: Pin<&Self>) { 202 + let _ = (dev, this); 203 + } 201 204 } 202 205 203 206 /// The auxiliary device representation.
+11 -9
rust/kernel/device.rs
··· 232 232 /// 233 233 /// # Safety 234 234 /// 235 - /// - Must only be called once after a preceding call to [`Device::set_drvdata`]. 236 235 /// - The type `T` must match the type of the `ForeignOwnable` previously stored by 237 236 /// [`Device::set_drvdata`]. 238 - pub unsafe fn drvdata_obtain<T: 'static>(&self) -> Pin<KBox<T>> { 237 + pub(crate) unsafe fn drvdata_obtain<T: 'static>(&self) -> Option<Pin<KBox<T>>> { 239 238 // SAFETY: By the type invariants, `self.as_raw()` is a valid pointer to a `struct device`. 240 239 let ptr = unsafe { bindings::dev_get_drvdata(self.as_raw()) }; 241 240 242 241 // SAFETY: By the type invariants, `self.as_raw()` is a valid pointer to a `struct device`. 243 242 unsafe { bindings::dev_set_drvdata(self.as_raw(), core::ptr::null_mut()) }; 244 243 244 + if ptr.is_null() { 245 + return None; 246 + } 247 + 245 248 // SAFETY: 246 - // - By the safety requirements of this function, `ptr` comes from a previous call to 247 - // `into_foreign()`. 249 + // - If `ptr` is not NULL, it comes from a previous call to `into_foreign()`. 248 250 // - `dev_get_drvdata()` guarantees to return the same pointer given to `dev_set_drvdata()` 249 251 // in `into_foreign()`. 250 - unsafe { Pin::<KBox<T>>::from_foreign(ptr.cast()) } 252 + Some(unsafe { Pin::<KBox<T>>::from_foreign(ptr.cast()) }) 251 253 } 252 254 253 255 /// Borrow the driver's private data bound to this [`Device`]. 254 256 /// 255 257 /// # Safety 256 258 /// 257 - /// - Must only be called after a preceding call to [`Device::set_drvdata`] and before 258 - /// [`Device::drvdata_obtain`]. 259 + /// - Must only be called after a preceding call to [`Device::set_drvdata`] and before the 260 + /// device is fully unbound. 259 261 /// - The type `T` must match the type of the `ForeignOwnable` previously stored by 260 262 /// [`Device::set_drvdata`]. 
261 263 pub unsafe fn drvdata_borrow<T: 'static>(&self) -> Pin<&T> { ··· 273 271 /// # Safety 274 272 /// 275 273 /// - Must only be called after a preceding call to [`Device::set_drvdata`] and before 276 - /// [`Device::drvdata_obtain`]. 274 + /// the device is fully unbound. 277 275 /// - The type `T` must match the type of the `ForeignOwnable` previously stored by 278 276 /// [`Device::set_drvdata`]. 279 277 unsafe fn drvdata_unchecked<T: 'static>(&self) -> Pin<&T> { ··· 322 320 323 321 // SAFETY: 324 322 // - The above check of `dev_get_drvdata()` guarantees that we are called after 325 - // `set_drvdata()` and before `drvdata_obtain()`. 323 + // `set_drvdata()`. 326 324 // - We've just checked that the type of the driver's private data is in fact `T`. 327 325 Ok(unsafe { self.drvdata_unchecked() }) 328 326 }
+70 -16
rust/kernel/driver.rs
··· 99 99 use core::pin::Pin; 100 100 use pin_init::{pin_data, pinned_drop, PinInit}; 101 101 102 + /// Trait describing the layout of a specific device driver. 103 + /// 104 + /// This trait describes the layout of a specific driver structure, such as `struct pci_driver` or 105 + /// `struct platform_driver`. 106 + /// 107 + /// # Safety 108 + /// 109 + /// Implementors must guarantee that: 110 + /// - `DriverType` is `repr(C)`, 111 + /// - `DriverData` is the type of the driver's device private data. 112 + /// - `DriverType` embeds a valid `struct device_driver` at byte offset `DEVICE_DRIVER_OFFSET`. 113 + pub unsafe trait DriverLayout { 114 + /// The specific driver type embedding a `struct device_driver`. 115 + type DriverType: Default; 116 + 117 + /// The type of the driver's device private data. 118 + type DriverData; 119 + 120 + /// Byte offset of the embedded `struct device_driver` within `DriverType`. 121 + /// 122 + /// This must correspond exactly to the location of the embedded `struct device_driver` field. 123 + const DEVICE_DRIVER_OFFSET: usize; 124 + } 125 + 102 126 /// The [`RegistrationOps`] trait serves as generic interface for subsystems (e.g., PCI, Platform, 103 127 /// Amba, etc.) to provide the corresponding subsystem specific implementation to register / 104 - /// unregister a driver of the particular type (`RegType`). 128 + /// unregister a driver of the particular type (`DriverType`). 105 129 /// 106 - /// For instance, the PCI subsystem would set `RegType` to `bindings::pci_driver` and call 130 + /// For instance, the PCI subsystem would set `DriverType` to `bindings::pci_driver` and call 107 131 /// `bindings::__pci_register_driver` from `RegistrationOps::register` and 108 132 /// `bindings::pci_unregister_driver` from `RegistrationOps::unregister`. 
109 133 /// 110 134 /// # Safety 111 135 /// 112 - /// A call to [`RegistrationOps::unregister`] for a given instance of `RegType` is only valid if a 113 - /// preceding call to [`RegistrationOps::register`] has been successful. 114 - pub unsafe trait RegistrationOps { 115 - /// The type that holds information about the registration. This is typically a struct defined 116 - /// by the C portion of the kernel. 117 - type RegType: Default; 118 - 136 + /// A call to [`RegistrationOps::unregister`] for a given instance of `DriverType` is only valid if 137 + /// a preceding call to [`RegistrationOps::register`] has been successful. 138 + pub unsafe trait RegistrationOps: DriverLayout { 119 139 /// Registers a driver. 120 140 /// 121 141 /// # Safety ··· 143 123 /// On success, `reg` must remain pinned and valid until the matching call to 144 124 /// [`RegistrationOps::unregister`]. 145 125 unsafe fn register( 146 - reg: &Opaque<Self::RegType>, 126 + reg: &Opaque<Self::DriverType>, 147 127 name: &'static CStr, 148 128 module: &'static ThisModule, 149 129 ) -> Result; ··· 154 134 /// 155 135 /// Must only be called after a preceding successful call to [`RegistrationOps::register`] for 156 136 /// the same `reg`. 157 - unsafe fn unregister(reg: &Opaque<Self::RegType>); 137 + unsafe fn unregister(reg: &Opaque<Self::DriverType>); 158 138 } 159 139 160 140 /// A [`Registration`] is a generic type that represents the registration of some driver type (e.g. ··· 166 146 #[pin_data(PinnedDrop)] 167 147 pub struct Registration<T: RegistrationOps> { 168 148 #[pin] 169 - reg: Opaque<T::RegType>, 149 + reg: Opaque<T::DriverType>, 170 150 } 171 151 172 152 // SAFETY: `Registration` has no fields or methods accessible via `&Registration`, so it is safe to ··· 177 157 // any thread, so `Registration` is `Send`. 
178 158 unsafe impl<T: RegistrationOps> Send for Registration<T> {} 179 159 180 - impl<T: RegistrationOps> Registration<T> { 160 + impl<T: RegistrationOps + 'static> Registration<T> { 161 + extern "C" fn post_unbind_callback(dev: *mut bindings::device) { 162 + // SAFETY: The driver core only ever calls the post unbind callback with a valid pointer to 163 + // a `struct device`. 164 + // 165 + // INVARIANT: `dev` is valid for the duration of the `post_unbind_callback()`. 166 + let dev = unsafe { &*dev.cast::<device::Device<device::CoreInternal>>() }; 167 + 168 + // `remove()` and all devres callbacks have been completed at this point, hence drop the 169 + // driver's device private data. 170 + // 171 + // SAFETY: By the safety requirements of the `Driver` trait, `T::DriverData` is the 172 + // driver's device private data type. 173 + drop(unsafe { dev.drvdata_obtain::<T::DriverData>() }); 174 + } 175 + 176 + /// Attach generic `struct device_driver` callbacks. 177 + fn callbacks_attach(drv: &Opaque<T::DriverType>) { 178 + let ptr = drv.get().cast::<u8>(); 179 + 180 + // SAFETY: 181 + // - `drv.get()` yields a valid pointer to `Self::DriverType`. 182 + // - Adding `DEVICE_DRIVER_OFFSET` yields the address of the embedded `struct device_driver` 183 + // as guaranteed by the safety requirements of the `Driver` trait. 184 + let base = unsafe { ptr.add(T::DEVICE_DRIVER_OFFSET) }; 185 + 186 + // CAST: `base` points to the offset of the embedded `struct device_driver`. 187 + let base = base.cast::<bindings::device_driver>(); 188 + 189 + // SAFETY: It is safe to set the fields of `struct device_driver` on initialization. 190 + unsafe { (*base).p_cb.post_unbind_rust = Some(Self::post_unbind_callback) }; 191 + } 192 + 181 193 /// Creates a new instance of the registration object. 
182 194 pub fn new(name: &'static CStr, module: &'static ThisModule) -> impl PinInit<Self, Error> { 183 195 try_pin_init!(Self { 184 - reg <- Opaque::try_ffi_init(|ptr: *mut T::RegType| { 196 + reg <- Opaque::try_ffi_init(|ptr: *mut T::DriverType| { 185 197 // SAFETY: `try_ffi_init` guarantees that `ptr` is valid for write. 186 - unsafe { ptr.write(T::RegType::default()) }; 198 + unsafe { ptr.write(T::DriverType::default()) }; 187 199 188 200 // SAFETY: `try_ffi_init` guarantees that `ptr` is valid for write, and it has 189 201 // just been initialised above, so it's also valid for read. 190 - let drv = unsafe { &*(ptr as *const Opaque<T::RegType>) }; 202 + let drv = unsafe { &*(ptr as *const Opaque<T::DriverType>) }; 203 + 204 + Self::callbacks_attach(drv); 191 205 192 206 // SAFETY: `drv` is guaranteed to be pinned until `T::unregister`. 193 207 unsafe { T::register(drv, name, module) }
+20 -11
rust/kernel/i2c.rs
··· 92 92 /// An adapter for the registration of I2C drivers. 93 93 pub struct Adapter<T: Driver>(T); 94 94 95 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 95 + // SAFETY: 96 + // - `bindings::i2c_driver` is a C type declared as `repr(C)`. 97 + // - `T` is the type of the driver's device private data. 98 + // - `struct i2c_driver` embeds a `struct device_driver`. 99 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 100 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 101 + type DriverType = bindings::i2c_driver; 102 + type DriverData = T; 103 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 104 + } 105 + 106 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 96 107 // a preceding call to `register` has been successful. 97 108 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 98 - type RegType = bindings::i2c_driver; 99 - 100 109 unsafe fn register( 101 - idrv: &Opaque<Self::RegType>, 110 + idrv: &Opaque<Self::DriverType>, 102 111 name: &'static CStr, 103 112 module: &'static ThisModule, 104 113 ) -> Result { ··· 142 133 (*idrv.get()).driver.acpi_match_table = acpi_table; 143 134 } 144 135 145 - // SAFETY: `idrv` is guaranteed to be a valid `RegType`. 136 + // SAFETY: `idrv` is guaranteed to be a valid `DriverType`. 146 137 to_result(unsafe { bindings::i2c_register_driver(module.0, idrv.get()) }) 147 138 } 148 139 149 - unsafe fn unregister(idrv: &Opaque<Self::RegType>) { 150 - // SAFETY: `idrv` is guaranteed to be a valid `RegType`. 140 + unsafe fn unregister(idrv: &Opaque<Self::DriverType>) { 141 + // SAFETY: `idrv` is guaranteed to be a valid `DriverType`. 
151 142 unsafe { bindings::i2c_del_driver(idrv.get()) } 152 143 } 153 144 } ··· 178 169 // SAFETY: `remove_callback` is only ever called after a successful call to 179 170 // `probe_callback`, hence it's guaranteed that `I2cClient::set_drvdata()` has been called 180 171 // and stored a `Pin<KBox<T>>`. 181 - let data = unsafe { idev.as_ref().drvdata_obtain::<T>() }; 172 + let data = unsafe { idev.as_ref().drvdata_borrow::<T>() }; 182 173 183 - T::unbind(idev, data.as_ref()); 174 + T::unbind(idev, data); 184 175 } 185 176 186 177 extern "C" fn shutdown_callback(idev: *mut bindings::i2c_client) { ··· 190 181 // SAFETY: `shutdown_callback` is only ever called after a successful call to 191 182 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 192 183 // and stored a `Pin<KBox<T>>`. 193 - let data = unsafe { idev.as_ref().drvdata_obtain::<T>() }; 184 + let data = unsafe { idev.as_ref().drvdata_borrow::<T>() }; 194 185 195 - T::shutdown(idev, data.as_ref()); 186 + T::shutdown(idev, data); 196 187 } 197 188 198 189 /// The [`i2c::IdTable`] of the corresponding driver.
+6 -3
rust/kernel/io.rs
··· 142 142 /// Bound checks are performed on compile time, hence if the offset is not known at compile 143 143 /// time, the build will fail. 144 144 $(#[$attr])* 145 - #[inline] 145 + // Always inline to optimize out error path of `io_addr_assert`. 146 + #[inline(always)] 146 147 pub fn $name(&self, offset: usize) -> $type_name { 147 148 let addr = self.io_addr_assert::<$type_name>(offset); 148 149 ··· 172 171 /// Bound checks are performed on compile time, hence if the offset is not known at compile 173 172 /// time, the build will fail. 174 173 $(#[$attr])* 175 - #[inline] 174 + // Always inline to optimize out error path of `io_addr_assert`. 175 + #[inline(always)] 176 176 pub fn $name(&self, value: $type_name, offset: usize) { 177 177 let addr = self.io_addr_assert::<$type_name>(offset); 178 178 ··· 241 239 self.addr().checked_add(offset).ok_or(EINVAL) 242 240 } 243 241 244 - #[inline] 242 + // Always inline to optimize out error path of `build_assert`. 243 + #[inline(always)] 245 244 fn io_addr_assert<U>(&self, offset: usize) -> usize { 246 245 build_assert!(Self::offset_valid::<U>(offset, SIZE)); 247 246
+2
rust/kernel/io/resource.rs
··· 226 226 /// Resource represents a memory region that must be ioremaped using `ioremap_np`. 227 227 pub const IORESOURCE_MEM_NONPOSTED: Flags = Flags::new(bindings::IORESOURCE_MEM_NONPOSTED); 228 228 229 + // Always inline to optimize out error path of `build_assert`. 230 + #[inline(always)] 229 231 const fn new(value: u32) -> Self { 230 232 crate::build_assert!(value as u64 <= c_ulong::MAX as u64); 231 233 Flags(value as c_ulong)
+2
rust/kernel/irq/flags.rs
··· 96 96 self.0 97 97 } 98 98 99 + // Always inline to optimize out error path of `build_assert`. 100 + #[inline(always)] 99 101 const fn new(value: u32) -> Self { 100 102 build_assert!(value as u64 <= c_ulong::MAX as u64); 101 103 Self(value as c_ulong)
+18 -9
rust/kernel/pci.rs
··· 50 50 /// An adapter for the registration of PCI drivers. 51 51 pub struct Adapter<T: Driver>(T); 52 52 53 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 53 + // SAFETY: 54 + // - `bindings::pci_driver` is a C type declared as `repr(C)`. 55 + // - `T` is the type of the driver's device private data. 56 + // - `struct pci_driver` embeds a `struct device_driver`. 57 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 58 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 59 + type DriverType = bindings::pci_driver; 60 + type DriverData = T; 61 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 62 + } 63 + 64 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 54 65 // a preceding call to `register` has been successful. 55 66 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 56 - type RegType = bindings::pci_driver; 57 - 58 67 unsafe fn register( 59 - pdrv: &Opaque<Self::RegType>, 68 + pdrv: &Opaque<Self::DriverType>, 60 69 name: &'static CStr, 61 70 module: &'static ThisModule, 62 71 ) -> Result { ··· 77 68 (*pdrv.get()).id_table = T::ID_TABLE.as_ptr(); 78 69 } 79 70 80 - // SAFETY: `pdrv` is guaranteed to be a valid `RegType`. 71 + // SAFETY: `pdrv` is guaranteed to be a valid `DriverType`. 81 72 to_result(unsafe { 82 73 bindings::__pci_register_driver(pdrv.get(), module.0, name.as_char_ptr()) 83 74 }) 84 75 } 85 76 86 - unsafe fn unregister(pdrv: &Opaque<Self::RegType>) { 87 - // SAFETY: `pdrv` is guaranteed to be a valid `RegType`. 77 + unsafe fn unregister(pdrv: &Opaque<Self::DriverType>) { 78 + // SAFETY: `pdrv` is guaranteed to be a valid `DriverType`. 
88 79 unsafe { bindings::pci_unregister_driver(pdrv.get()) } 89 80 } 90 81 } ··· 123 114 // SAFETY: `remove_callback` is only ever called after a successful call to 124 115 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 125 116 // and stored a `Pin<KBox<T>>`. 126 - let data = unsafe { pdev.as_ref().drvdata_obtain::<T>() }; 117 + let data = unsafe { pdev.as_ref().drvdata_borrow::<T>() }; 127 118 128 - T::unbind(pdev, data.as_ref()); 119 + T::unbind(pdev, data); 129 120 } 130 121 } 131 122
+18 -9
rust/kernel/platform.rs
··· 26 26 /// An adapter for the registration of platform drivers. 27 27 pub struct Adapter<T: Driver>(T); 28 28 29 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 29 + // SAFETY: 30 + // - `bindings::platform_driver` is a C type declared as `repr(C)`. 31 + // - `T` is the type of the driver's device private data. 32 + // - `struct platform_driver` embeds a `struct device_driver`. 33 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 34 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 35 + type DriverType = bindings::platform_driver; 36 + type DriverData = T; 37 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 38 + } 39 + 40 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 30 41 // a preceding call to `register` has been successful. 31 42 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 32 - type RegType = bindings::platform_driver; 33 - 34 43 unsafe fn register( 35 - pdrv: &Opaque<Self::RegType>, 44 + pdrv: &Opaque<Self::DriverType>, 36 45 name: &'static CStr, 37 46 module: &'static ThisModule, 38 47 ) -> Result { ··· 64 55 (*pdrv.get()).driver.acpi_match_table = acpi_table; 65 56 } 66 57 67 - // SAFETY: `pdrv` is guaranteed to be a valid `RegType`. 58 + // SAFETY: `pdrv` is guaranteed to be a valid `DriverType`. 68 59 to_result(unsafe { bindings::__platform_driver_register(pdrv.get(), module.0) }) 69 60 } 70 61 71 - unsafe fn unregister(pdrv: &Opaque<Self::RegType>) { 72 - // SAFETY: `pdrv` is guaranteed to be a valid `RegType`. 62 + unsafe fn unregister(pdrv: &Opaque<Self::DriverType>) { 63 + // SAFETY: `pdrv` is guaranteed to be a valid `DriverType`. 
73 64 unsafe { bindings::platform_driver_unregister(pdrv.get()) }; 74 65 } 75 66 } ··· 101 92 // SAFETY: `remove_callback` is only ever called after a successful call to 102 93 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 103 94 // and stored a `Pin<KBox<T>>`. 104 - let data = unsafe { pdev.as_ref().drvdata_obtain::<T>() }; 95 + let data = unsafe { pdev.as_ref().drvdata_borrow::<T>() }; 105 96 106 - T::unbind(pdev, data.as_ref()); 97 + T::unbind(pdev, data); 107 98 } 108 99 } 109 100
+18 -9
rust/kernel/usb.rs
··· 27 27 /// An adapter for the registration of USB drivers. 28 28 pub struct Adapter<T: Driver>(T); 29 29 30 - // SAFETY: A call to `unregister` for a given instance of `RegType` is guaranteed to be valid if 30 + // SAFETY: 31 + // - `bindings::usb_driver` is a C type declared as `repr(C)`. 32 + // - `T` is the type of the driver's device private data. 33 + // - `struct usb_driver` embeds a `struct device_driver`. 34 + // - `DEVICE_DRIVER_OFFSET` is the correct byte offset to the embedded `struct device_driver`. 35 + unsafe impl<T: Driver + 'static> driver::DriverLayout for Adapter<T> { 36 + type DriverType = bindings::usb_driver; 37 + type DriverData = T; 38 + const DEVICE_DRIVER_OFFSET: usize = core::mem::offset_of!(Self::DriverType, driver); 39 + } 40 + 41 + // SAFETY: A call to `unregister` for a given instance of `DriverType` is guaranteed to be valid if 31 42 // a preceding call to `register` has been successful. 32 43 unsafe impl<T: Driver + 'static> driver::RegistrationOps for Adapter<T> { 33 - type RegType = bindings::usb_driver; 34 - 35 44 unsafe fn register( 36 - udrv: &Opaque<Self::RegType>, 45 + udrv: &Opaque<Self::DriverType>, 37 46 name: &'static CStr, 38 47 module: &'static ThisModule, 39 48 ) -> Result { ··· 54 45 (*udrv.get()).id_table = T::ID_TABLE.as_ptr(); 55 46 } 56 47 57 - // SAFETY: `udrv` is guaranteed to be a valid `RegType`. 48 + // SAFETY: `udrv` is guaranteed to be a valid `DriverType`. 58 49 to_result(unsafe { 59 50 bindings::usb_register_driver(udrv.get(), module.0, name.as_char_ptr()) 60 51 }) 61 52 } 62 53 63 - unsafe fn unregister(udrv: &Opaque<Self::RegType>) { 64 - // SAFETY: `udrv` is guaranteed to be a valid `RegType`. 54 + unsafe fn unregister(udrv: &Opaque<Self::DriverType>) { 55 + // SAFETY: `udrv` is guaranteed to be a valid `DriverType`. 
65 56 unsafe { bindings::usb_deregister(udrv.get()) }; 66 57 } 67 58 } ··· 103 94 // SAFETY: `disconnect_callback` is only ever called after a successful call to 104 95 // `probe_callback`, hence it's guaranteed that `Device::set_drvdata()` has been called 105 96 // and stored a `Pin<KBox<T>>`. 106 - let data = unsafe { dev.drvdata_obtain::<T>() }; 97 + let data = unsafe { dev.drvdata_borrow::<T>() }; 107 98 108 - T::disconnect(intf, data.as_ref()); 99 + T::disconnect(intf, data); 109 100 } 110 101 } 111 102
+1 -1
scripts/check-function-names.sh
··· 13 13 exit 1 14 14 fi 15 15 16 - bad_symbols=$(nm "$objfile" | awk '$2 ~ /^[TtWw]$/ {print $3}' | grep -E '^(startup|exit|split|unlikely|hot|unknown)(\.|$)') 16 + bad_symbols=$(${NM:-nm} "$objfile" | awk '$2 ~ /^[TtWw]$/ {print $3}' | grep -E '^(startup|exit|split|unlikely|hot|unknown)(\.|$)') 17 17 18 18 if [ -n "$bad_symbols" ]; then 19 19 echo "$bad_symbols" | while read -r sym; do
+6 -5
scripts/kconfig/nconf-cfg.sh
··· 6 6 cflags=$1 7 7 libs=$2 8 8 9 - PKG="ncursesw menuw panelw" 10 - PKG2="ncurses menu panel" 9 + # Keep library order for static linking (HOSTCC='cc -static') 10 + PKG="menuw panelw ncursesw" 11 + PKG2="menu panel ncurses" 11 12 12 13 if [ -n "$(command -v ${HOSTPKG_CONFIG})" ]; then 13 14 if ${HOSTPKG_CONFIG} --exists $PKG; then ··· 29 28 # find ncurses by pkg-config.) 30 29 if [ -f /usr/include/ncursesw/ncurses.h ]; then 31 30 echo -D_GNU_SOURCE -I/usr/include/ncursesw > ${cflags} 32 - echo -lncursesw -lmenuw -lpanelw > ${libs} 31 + echo -lmenuw -lpanelw -lncursesw > ${libs} 33 32 exit 0 34 33 fi 35 34 36 35 if [ -f /usr/include/ncurses/ncurses.h ]; then 37 36 echo -D_GNU_SOURCE -I/usr/include/ncurses > ${cflags} 38 - echo -lncurses -lmenu -lpanel > ${libs} 37 + echo -lmenu -lpanel -lncurses > ${libs} 39 38 exit 0 40 39 fi 41 40 42 41 if [ -f /usr/include/ncurses.h ]; then 43 42 echo -D_GNU_SOURCE > ${cflags} 44 - echo -lncurses -lmenu -lpanel > ${libs} 43 + echo -lmenu -lpanel -lncurses > ${libs} 45 44 exit 0 46 45 fi 47 46
+2
scripts/tracepoint-update.c
··· 49 49 array = realloc(array, sizeof(char *) * size); 50 50 if (!array) { 51 51 fprintf(stderr, "Failed memory allocation\n"); 52 + free(*vals); 53 + *vals = NULL; 52 54 return -1; 53 55 } 54 56 *vals = array;
+2 -2
security/keys/trusted-keys/trusted_tpm2.c
··· 465 465 } 466 466 467 467 /** 468 - * tpm2_unseal_cmd() - execute a TPM2_Unload command 468 + * tpm2_unseal_cmd() - execute a TPM2_Unseal command 469 469 * 470 470 * @chip: TPM chip to use 471 471 * @payload: the key data in clear and encrypted form ··· 498 498 return rc; 499 499 } 500 500 501 - rc = tpm_buf_append_name(chip, &buf, options->keyhandle, NULL); 501 + rc = tpm_buf_append_name(chip, &buf, blob_handle, NULL); 502 502 if (rc) 503 503 goto out; 504 504
+28 -1
sound/hda/codecs/realtek/alc269.c
··· 3736 3736 ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE, 3737 3737 ALC287_FIXUP_YOGA7_14ITL_SPEAKERS, 3738 3738 ALC298_FIXUP_LENOVO_C940_DUET7, 3739 + ALC287_FIXUP_LENOVO_YOGA_BOOK_9I, 3739 3740 ALC287_FIXUP_13S_GEN2_SPEAKERS, 3740 3741 ALC256_FIXUP_SET_COEF_DEFAULTS, 3741 3742 ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, ··· 3821 3820 id = ALC298_FIXUP_LENOVO_SPK_VOLUME; /* C940 */ 3822 3821 else 3823 3822 id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* Duet 7 */ 3823 + __snd_hda_apply_fixup(codec, id, action, 0); 3824 + } 3825 + 3826 + /* A special fixup for Lenovo Yoga 9i and Yoga Book 9i 13IRU8; 3827 + * both have the very same PCI SSID and vendor ID, so we need 3828 + * to apply different fixups depending on the subsystem ID 3829 + */ 3830 + static void alc287_fixup_lenovo_yoga_book_9i(struct hda_codec *codec, 3831 + const struct hda_fixup *fix, 3832 + int action) 3833 + { 3834 + int id; 3835 + 3836 + if (codec->core.subsystem_id == 0x17aa3881) 3837 + id = ALC287_FIXUP_TAS2781_I2C; /* Yoga Book 9i 13IRU8 */ 3838 + else 3839 + id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP; /* Yoga 9i */ 3824 3840 __snd_hda_apply_fixup(codec, id, action, 0); 3825 3841 } 3826 3842 ··· 5852 5834 .type = HDA_FIXUP_FUNC, 5853 5835 .v.func = alc298_fixup_lenovo_c940_duet7, 5854 5836 }, 5837 + [ALC287_FIXUP_LENOVO_YOGA_BOOK_9I] = { 5838 + .type = HDA_FIXUP_FUNC, 5839 + .v.func = alc287_fixup_lenovo_yoga_book_9i, 5840 + }, 5855 5841 [ALC287_FIXUP_13S_GEN2_SPEAKERS] = { 5856 5842 .type = HDA_FIXUP_VERBS, 5857 5843 .v.verbs = (const struct hda_verb[]) { ··· 7035 7013 SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP), 7036 7014 SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP), 7037 7015 SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7016 + SND_PCI_QUIRK(0x144d, 0xc876, "Samsung 730QED (NP730QED-KA2US)", 
ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7038 7017 SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP), 7039 7018 SND_PCI_QUIRK(0x144d, 0xca06, "Samsung Galaxy Book3 360 (NP730QFG)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7040 7019 SND_PCI_QUIRK(0x144d, 0xc868, "Samsung Galaxy Book2 Pro (NP930XED)", ALC298_FIXUP_SAMSUNG_AMP), ··· 7214 7191 SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF), 7215 7192 SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 7216 7193 SND_PCI_QUIRK(0x17aa, 0x383d, "Legion Y9000X 2019", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS), 7217 - SND_PCI_QUIRK(0x17aa, 0x3843, "Yoga 9i", ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP), 7194 + SND_PCI_QUIRK(0x17aa, 0x3843, "Lenovo Yoga 9i / Yoga Book 9i", ALC287_FIXUP_LENOVO_YOGA_BOOK_9I), 7218 7195 SND_PCI_QUIRK(0x17aa, 0x3847, "Legion 7 16ACHG6", ALC287_FIXUP_LEGION_16ACHG6), 7219 7196 SND_PCI_QUIRK(0x17aa, 0x384a, "Lenovo Yoga 7 15ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 7220 7197 SND_PCI_QUIRK(0x17aa, 0x3852, "Lenovo Yoga 7 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), ··· 7805 7782 {0x12, 0x90a60140}, 7806 7783 {0x19, 0x04a11030}, 7807 7784 {0x21, 0x04211020}), 7785 + SND_HDA_PIN_QUIRK(0x10ec0274, 0x1d05, "TongFang", ALC274_FIXUP_HP_HEADSET_MIC, 7786 + {0x17, 0x90170110}, 7787 + {0x19, 0x03a11030}, 7788 + {0x21, 0x03211020}), 7808 7789 SND_HDA_PIN_QUIRK(0x10ec0282, 0x1025, "Acer", ALC282_FIXUP_ACER_DISABLE_LINEOUT, 7809 7790 ALC282_STANDARD_PINS, 7810 7791 {0x12, 0x90a609c0},
+2
sound/pci/ctxfi/ctamixer.c
··· 205 205 206 206 /* Set amixer specific operations */ 207 207 amixer->rsc.ops = &amixer_basic_rsc_ops; 208 + amixer->rsc.conj = 0; 208 209 amixer->ops = &amixer_ops; 209 210 amixer->input = NULL; 210 211 amixer->sum = NULL; ··· 368 367 return err; 369 368 370 369 sum->rsc.ops = &sum_basic_rsc_ops; 370 + sum->rsc.conj = 0; 371 371 372 372 return 0; 373 373 }
+17 -5
sound/usb/mixer.c
··· 1813 1813 1814 1814 range = (cval->max - cval->min) / cval->res; 1815 1815 /* 1816 - * Are there devices with volume range more than 255? I use a bit more 1817 - * to be sure. 384 is a resolution magic number found on Logitech 1818 - * devices. It will definitively catch all buggy Logitech devices. 1816 + * There are definitely devices with a range of ~20,000, so let's be 1817 + * conservative and allow for a bit more. 1819 1818 */ 1820 - if (range > 384) { 1819 + if (range > 65535) { 1821 1820 usb_audio_warn(mixer->chip, 1822 1821 "Warning! Unlikely big volume range (=%u), cval->res is probably wrong.", 1823 1822 range); ··· 2945 2946 2946 2947 static void snd_usb_mixer_free(struct usb_mixer_interface *mixer) 2947 2948 { 2949 + struct usb_mixer_elem_list *list, *next; 2950 + int id; 2951 + 2948 2952 /* kill pending URBs */ 2949 2953 snd_usb_mixer_disconnect(mixer); 2950 2954 2951 - kfree(mixer->id_elems); 2955 + /* Unregister controls first, snd_ctl_remove() frees the element */ 2956 + if (mixer->id_elems) { 2957 + for (id = 0; id < MAX_ID_ELEMS; id++) { 2958 + for (list = mixer->id_elems[id]; list; list = next) { 2959 + next = list->next_id_elem; 2960 + if (list->kctl) 2961 + snd_ctl_remove(mixer->chip->card, list->kctl); 2962 + } 2963 + } 2964 + kfree(mixer->id_elems); 2965 + } 2952 2966 if (mixer->urb) { 2953 2967 kfree(mixer->urb->transfer_buffer); 2954 2968 usb_free_urb(mixer->urb);
+3 -3
sound/usb/mixer_scarlett2.c
··· 2533 2533 err = scarlett2_usb_get(mixer, config_item->offset, buf, size); 2534 2534 if (err < 0) 2535 2535 return err; 2536 - if (size == 2) { 2536 + if (config_item->size == 16) { 2537 2537 u16 *buf_16 = buf; 2538 2538 2539 2539 for (i = 0; i < count; i++, buf_16++) 2540 2540 *buf_16 = le16_to_cpu(*(__le16 *)buf_16); 2541 - } else if (size == 4) { 2542 - u32 *buf_32 = buf; 2541 + } else if (config_item->size == 32) { 2542 + u32 *buf_32 = (u32 *)buf; 2543 2543 2544 2544 for (i = 0; i < count; i++, buf_32++) 2545 2545 *buf_32 = le32_to_cpu(*(__le32 *)buf_32);
+2 -1
sound/usb/pcm.c
··· 1553 1553 1554 1554 for (i = 0; i < ctx->packets; i++) { 1555 1555 counts = snd_usb_endpoint_next_packet_size(ep, ctx, i, avail); 1556 - if (counts < 0 || frames + counts >= ep->max_urb_frames) 1556 + if (counts < 0 || 1557 + (frames + counts) * stride > ctx->buffer_size) 1557 1558 break; 1558 1559 /* set up descriptor */ 1559 1560 urb->iso_frame_desc[i].offset = frames * stride;
+2
sound/usb/quirks.c
··· 2390 2390 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2391 2391 DEVICE_FLG(0x2d99, 0x0026, /* HECATE G2 GAMING HEADSET */ 2392 2392 QUIRK_FLAG_MIXER_PLAYBACK_MIN_MUTE), 2393 + DEVICE_FLG(0x2fc6, 0xf06b, /* MOONDROP Moonriver2 Ti */ 2394 + QUIRK_FLAG_CTL_MSG_DELAY), 2393 2395 DEVICE_FLG(0x2fc6, 0xf0b7, /* iBasso DC07 Pro */ 2394 2396 QUIRK_FLAG_CTL_MSG_DELAY_1M), 2395 2397 DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+45 -16
tools/include/io_uring/mini_liburing.h
··· 6 6 #include <stdio.h> 7 7 #include <string.h> 8 8 #include <unistd.h> 9 + #include <sys/uio.h> 9 10 10 11 struct io_sq_ring { 11 12 unsigned int *head; ··· 56 55 struct io_uring_sq sq; 57 56 struct io_uring_cq cq; 58 57 int ring_fd; 58 + unsigned flags; 59 59 }; 60 60 61 61 #if defined(__x86_64) || defined(__i386__) ··· 74 72 void *ptr; 75 73 int ret; 76 74 77 - sq->ring_sz = p->sq_off.array + p->sq_entries * sizeof(unsigned int); 75 + if (p->flags & IORING_SETUP_NO_SQARRAY) { 76 + sq->ring_sz = p->cq_off.cqes; 77 + sq->ring_sz += p->cq_entries * sizeof(struct io_uring_cqe); 78 + } else { 79 + sq->ring_sz = p->sq_off.array; 80 + sq->ring_sz += p->sq_entries * sizeof(unsigned int); 81 + } 82 + 78 83 ptr = mmap(0, sq->ring_sz, PROT_READ | PROT_WRITE, 79 84 MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING); 80 85 if (ptr == MAP_FAILED) ··· 92 83 sq->kring_entries = ptr + p->sq_off.ring_entries; 93 84 sq->kflags = ptr + p->sq_off.flags; 94 85 sq->kdropped = ptr + p->sq_off.dropped; 95 - sq->array = ptr + p->sq_off.array; 86 + if (!(p->flags & IORING_SETUP_NO_SQARRAY)) 87 + sq->array = ptr + p->sq_off.array; 96 88 97 89 size = p->sq_entries * sizeof(struct io_uring_sqe); 98 90 sq->sqes = mmap(0, size, PROT_READ | PROT_WRITE, ··· 136 126 flags, sig, _NSIG / 8); 137 127 } 138 128 129 + static inline int io_uring_queue_init_params(unsigned int entries, 130 + struct io_uring *ring, 131 + struct io_uring_params *p) 132 + { 133 + int fd, ret; 134 + 135 + memset(ring, 0, sizeof(*ring)); 136 + 137 + fd = io_uring_setup(entries, p); 138 + if (fd < 0) 139 + return fd; 140 + ret = io_uring_mmap(fd, p, &ring->sq, &ring->cq); 141 + if (!ret) { 142 + ring->ring_fd = fd; 143 + ring->flags = p->flags; 144 + } else { 145 + close(fd); 146 + } 147 + return ret; 148 + } 149 + 139 150 static inline int io_uring_queue_init(unsigned int entries, 140 151 struct io_uring *ring, 141 152 unsigned int flags) 142 153 { 143 154 struct io_uring_params p; 144 - int fd, ret; 145 155 146 - 
memset(ring, 0, sizeof(*ring)); 147 156 memset(&p, 0, sizeof(p)); 148 157 p.flags = flags; 149 158 150 - fd = io_uring_setup(entries, &p); 151 - if (fd < 0) 152 - return fd; 153 - ret = io_uring_mmap(fd, &p, &ring->sq, &ring->cq); 154 - if (!ret) 155 - ring->ring_fd = fd; 156 - else 157 - close(fd); 158 - return ret; 159 + return io_uring_queue_init_params(entries, ring, &p); 159 160 } 160 161 161 162 /* Get a sqe */ ··· 220 199 221 200 ktail = *sq->ktail; 222 201 to_submit = sq->sqe_tail - sq->sqe_head; 223 - for (submitted = 0; submitted < to_submit; submitted++) { 224 - read_barrier(); 225 - sq->array[ktail++ & mask] = sq->sqe_head++ & mask; 202 + 203 + if (!(ring->flags & IORING_SETUP_NO_SQARRAY)) { 204 + for (submitted = 0; submitted < to_submit; submitted++) { 205 + read_barrier(); 206 + sq->array[ktail++ & mask] = sq->sqe_head++ & mask; 207 + } 208 + } else { 209 + ktail += to_submit; 210 + sq->sqe_head += to_submit; 211 + submitted = to_submit; 226 212 } 213 + 227 214 if (!submitted) 228 215 return 0; 229 216
+17 -4
tools/objtool/Makefile
··· 77 77 # We check using HOSTCC directly rather than the shared feature framework 78 78 # because objtool is a host tool that links against host libraries. 79 79 # 80 - HAVE_LIBOPCODES := $(shell echo 'int main(void) { return 0; }' | \ 81 - $(HOSTCC) -xc - -o /dev/null -lopcodes 2>/dev/null && echo y) 80 + # When using shared libraries, -lopcodes is sufficient as dependencies are 81 + # resolved automatically. With static libraries, we must explicitly link 82 + # against libopcodes' dependencies: libbfd, libiberty, and sometimes libz. 83 + # Try each combination and use the first one that succeeds. 84 + # 85 + LIBOPCODES_LIBS := $(shell \ 86 + for libs in "-lopcodes" \ 87 + "-lopcodes -lbfd" \ 88 + "-lopcodes -lbfd -liberty" \ 89 + "-lopcodes -lbfd -liberty -lz"; do \ 90 + echo 'extern void disassemble_init_for_target(void *);' \ 91 + 'int main(void) { disassemble_init_for_target(0); return 0; }' | \ 92 + $(HOSTCC) -xc - -o /dev/null $$libs 2>/dev/null && \ 93 + echo "$$libs" && break; \ 94 + done) 82 95 83 96 # Styled disassembler support requires binutils >= 2.39 84 97 HAVE_DISASM_STYLED := $(shell echo '$(pound)include <dis-asm.h>' | \ ··· 99 86 100 87 BUILD_DISAS := n 101 88 102 - ifeq ($(HAVE_LIBOPCODES),y) 89 + ifneq ($(LIBOPCODES_LIBS),) 103 90 BUILD_DISAS := y 104 91 OBJTOOL_CFLAGS += -DDISAS -DPACKAGE='"objtool"' 105 - OBJTOOL_LDFLAGS += -lopcodes 92 + OBJTOOL_LDFLAGS += $(LIBOPCODES_LIBS) 106 93 ifeq ($(HAVE_DISASM_STYLED),y) 107 94 OBJTOOL_CFLAGS += -DDISASM_INIT_STYLED 108 95 endif
-1
tools/testing/selftests/alsa/utimer-test.c
··· 141 141 TEST(wrong_timers_test) { 142 142 int timer_dev_fd; 143 143 int utimer_fd; 144 - size_t i; 145 144 struct snd_timer_uinfo wrong_timer = { 146 145 .resolution = 0, 147 146 .id = UTIMER_DEFAULT_ID,
+7
tools/testing/selftests/net/fcnal-test.sh
··· 2327 2327 log_test_addr ${a} $? 2 "ping local, device bind" 2328 2328 done 2329 2329 2330 + for a in ${NSA_LO_IP6} ${NSA_LINKIP6}%${NSA_DEV} ${NSA_IP6} 2331 + do 2332 + log_start 2333 + run_cmd ${ping6} -c1 -w1 -I ::1 ${a} 2334 + log_test_addr ${a} $? 0 "ping local, from localhost" 2335 + done 2336 + 2330 2337 # 2331 2338 # ip rule blocks address 2332 2339 #
+74 -7
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 2329 2329 ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=1 2330 2330 speed=slow \ 2331 2331 run_tests $ns1 $ns2 10.0.1.1 2332 + chk_join_nr 3 3 3 2332 2333 2333 2334 # It is not directly linked to the commit introducing this 2334 2335 # symbol but for the parent one which is linked anyway. 2335 - if ! mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2336 - chk_join_nr 3 3 2 2337 - chk_add_nr 4 4 2338 - else 2339 - chk_join_nr 3 3 3 2336 + if mptcp_lib_kallsyms_has "mptcp_pm_subflow_check_next$"; then 2340 2337 # the server will not signal the address terminating 2341 2338 # the MPC subflow 2342 2339 chk_add_nr 3 3 2340 + else 2341 + chk_add_nr 4 4 2343 2342 fi 2344 2343 fi 2345 2344 } ··· 3846 3847 fi 3847 3848 } 3848 3849 3849 - # $1: ns ; $2: event type ; $3: count 3850 + # $1: ns ; $2: event type ; $3: count ; [ $4: attr ; $5: attr count ] 3850 3851 chk_evt_nr() 3851 3852 { 3852 3853 local ns=${1} 3853 3854 local evt_name="${2}" 3854 3855 local exp="${3}" 3856 + local attr="${4}" 3857 + local attr_exp="${5}" 3855 3858 3856 3859 local evts="${evts_ns1}" 3857 3860 local evt="${!evt_name}" 3861 + local attr_name 3858 3862 local count 3863 + 3864 + if [ -n "${attr}" ]; then 3865 + attr_name=", ${attr}: ${attr_exp}" 3866 + fi 3859 3867 3860 3868 evt_name="${evt_name:16}" # without MPTCP_LIB_EVENT_ 3861 3869 [ "${ns}" == "ns2" ] && evts="${evts_ns2}" 3862 3870 3863 - print_check "event ${ns} ${evt_name} (${exp})" 3871 + print_check "event ${ns} ${evt_name} (${exp}${attr_name})" 3864 3872 3865 3873 if [[ "${evt_name}" = "LISTENER_"* ]] && 3866 3874 ! 
mptcp_lib_kallsyms_has "mptcp_event_pm_listener$"; then ··· 3878 3872 count=$(grep -cw "type:${evt}" "${evts}") 3879 3873 if [ "${count}" != "${exp}" ]; then 3880 3874 fail_test "got ${count} events, expected ${exp}" 3875 + cat "${evts}" 3876 + return 3877 + elif [ -z "${attr}" ]; then 3878 + print_ok 3879 + return 3880 + fi 3881 + 3882 + count=$(grep -w "type:${evt}" "${evts}" | grep -c ",${attr}:") 3883 + if [ "${count}" != "${attr_exp}" ]; then 3884 + fail_test "got ${count} event attributes, expected ${attr_exp}" 3885 + grep -w "type:${evt}" "${evts}" 3881 3886 else 3882 3887 print_ok 3883 3888 fi 3889 + } 3890 + 3891 + # $1: ns ; $2: event type ; $3: expected count 3892 + wait_event() 3893 + { 3894 + local ns="${1}" 3895 + local evt_name="${2}" 3896 + local exp="${3}" 3897 + 3898 + local evt="${!evt_name}" 3899 + local evts="${evts_ns1}" 3900 + local count 3901 + 3902 + [ "${ns}" == "ns2" ] && evts="${evts_ns2}" 3903 + 3904 + for _ in $(seq 100); do 3905 + count=$(grep -cw "type:${evt}" "${evts}") 3906 + [ "${count}" -ge "${exp}" ] && break 3907 + sleep 0.1 3908 + done 3884 3909 } 3885 3910 3886 3911 userspace_tests() ··· 4119 4082 chk_rst_nr 0 0 invert 4120 4083 chk_mptcp_info subflows 1 subflows 1 4121 4084 chk_subflows_total 1 1 4085 + kill_events_pids 4086 + mptcp_lib_kill_group_wait $tests_pid 4087 + fi 4088 + 4089 + # userspace pm no duplicated spurious close events after an error 4090 + if reset_with_events "userspace pm no dup close events after error" && 4091 + continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then 4092 + set_userspace_pm $ns2 4093 + pm_nl_set_limits $ns1 0 2 4094 + { timeout_test=120 test_linkfail=128 speed=slow \ 4095 + run_tests $ns1 $ns2 10.0.1.1 & } 2>/dev/null 4096 + local tests_pid=$! 
4097 + wait_event ns2 MPTCP_LIB_EVENT_ESTABLISHED 1 4098 + userspace_pm_add_sf $ns2 10.0.3.2 20 4099 + chk_mptcp_info subflows 1 subflows 1 4100 + chk_subflows_total 2 2 4101 + 4102 + # force quick loss 4103 + ip netns exec $ns2 sysctl -q net.ipv4.tcp_syn_retries=1 4104 + if ip netns exec "${ns1}" ${iptables} -A INPUT -s "10.0.1.2" \ 4105 + -p tcp --tcp-option 30 -j REJECT --reject-with tcp-reset && 4106 + ip netns exec "${ns2}" ${iptables} -A INPUT -d "10.0.1.2" \ 4107 + -p tcp --tcp-option 30 -j REJECT --reject-with tcp-reset; then 4108 + wait_event ns2 MPTCP_LIB_EVENT_SUB_CLOSED 1 4109 + wait_event ns1 MPTCP_LIB_EVENT_SUB_CLOSED 1 4110 + chk_subflows_total 1 1 4111 + userspace_pm_add_sf $ns2 10.0.1.2 0 4112 + wait_event ns2 MPTCP_LIB_EVENT_SUB_CLOSED 2 4113 + chk_evt_nr ns2 MPTCP_LIB_EVENT_SUB_CLOSED 2 error 2 4114 + fi 4122 4115 kill_events_pids 4123 4116 mptcp_lib_kill_group_wait $tests_pid 4124 4117 fi
+7 -4
tools/testing/selftests/ublk/kublk.c
··· 753 753 754 754 static int ublk_thread_is_done(struct ublk_thread *t) 755 755 { 756 - return (t->state & UBLKS_T_STOPPING) && ublk_thread_is_idle(t); 756 + return (t->state & UBLKS_T_STOPPING) && ublk_thread_is_idle(t) && !t->cmd_inflight; 757 757 } 758 758 759 759 static inline void ublksrv_handle_tgt_cqe(struct ublk_thread *t, ··· 1054 1054 } 1055 1055 if (ret < 0) { 1056 1056 ublk_err("%s: ublk_ctrl_start_dev failed: %d\n", __func__, ret); 1057 - goto fail; 1057 + /* stop device so that inflight uring_cmd can be cancelled */ 1058 + ublk_ctrl_stop_dev(dev); 1059 + goto fail_start; 1058 1060 } 1059 1061 1060 1062 ublk_ctrl_get_info(dev); ··· 1064 1062 ublk_ctrl_dump(dev); 1065 1063 else 1066 1064 ublk_send_dev_event(ctx, dev, dev->dev_info.dev_id); 1067 - 1065 + fail_start: 1068 1066 /* wait until we are terminated */ 1069 1067 for (i = 0; i < dev->nthreads; i++) 1070 1068 pthread_join(tinfo[i].thread, &thread_ret); ··· 1274 1272 } 1275 1273 1276 1274 ret = ublk_start_daemon(ctx, dev); 1277 - ublk_dbg(UBLK_DBG_DEV, "%s: daemon exit %d\b", ret); 1275 + ublk_dbg(UBLK_DBG_DEV, "%s: daemon exit %d\n", __func__, ret); 1278 1276 if (ret < 0) 1279 1277 ublk_ctrl_del_dev(dev); 1280 1278 ··· 1620 1618 int option_idx, opt; 1621 1619 const char *cmd = argv[1]; 1622 1620 struct dev_ctx ctx = { 1621 + ._evtfd = -1, 1623 1622 .queue_depth = 128, 1624 1623 .nr_hw_queues = 2, 1625 1624 .dev_id = -1,
+1 -1
tools/testing/selftests/vDSO/vgetrandom-chacha.S
··· 14 14 #elif defined(__riscv) && __riscv_xlen == 64 15 15 #include "../../../../arch/riscv/kernel/vdso/vgetrandom-chacha.S" 16 16 #elif defined(__s390x__) 17 - #include "../../../../arch/s390/kernel/vdso64/vgetrandom-chacha.S" 17 + #include "../../../../arch/s390/kernel/vdso/vgetrandom-chacha.S" 18 18 #elif defined(__x86_64__) 19 19 #include "../../../../arch/x86/entry/vdso/vgetrandom-chacha.S" 20 20 #endif