Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-7.0-rc6).

No conflicts, or adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+4433 -1586
+1
.mailmap
···
 Morten Welinder <welinder@anemone.rentec.com>
 Morten Welinder <welinder@darter.rentec.com>
 Morten Welinder <welinder@troll.com>
+Muhammad Usama Anjum <usama.anjum@arm.com> <usama.anjum@collabora.com>
 Mukesh Ojha <quic_mojha@quicinc.com> <mojha@codeaurora.org>
 Muna Sinada <quic_msinada@quicinc.com> <msinada@codeaurora.org>
 Murali Nalajala <quic_mnalajal@quicinc.com> <mnalajal@codeaurora.org>
+29 -7
Documentation/core-api/dma-attributes.rst
···
 DMA_ATTR_MMIO will not perform any cache flushing. The address
 provided must never be mapped cacheable into the CPU.
 
-DMA_ATTR_CPU_CACHE_CLEAN
-------------------------
+DMA_ATTR_DEBUGGING_IGNORE_CACHELINES
+------------------------------------
 
-This attribute indicates the CPU will not dirty any cacheline overlapping this
-DMA_FROM_DEVICE/DMA_BIDIRECTIONAL buffer while it is mapped. This allows
-multiple small buffers to safely share a cacheline without risk of data
-corruption, suppressing DMA debug warnings about overlapping mappings.
-All mappings sharing a cacheline should have this attribute.
+This attribute indicates that CPU cache lines may overlap for buffers mapped
+with DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
+
+Such overlap may occur when callers map multiple small buffers that reside
+within the same cache line. In this case, callers must guarantee that the CPU
+will not dirty these cache lines after the mappings are established. When this
+condition is met, multiple buffers can safely share a cache line without risking
+data corruption.
+
+All mappings that share a cache line must set this attribute to suppress DMA
+debug warnings about overlapping mappings.
+
+DMA_ATTR_REQUIRE_COHERENT
+-------------------------
+
+DMA mapping requests with the DMA_ATTR_REQUIRE_COHERENT fail on any
+system where SWIOTLB or cache management is required. This should only
+be used to support uAPI designs that require continuous HW DMA
+coherence with userspace processes, for example RDMA and DRM. At a
+minimum the memory being mapped must be userspace memory from
+pin_user_pages() or similar.
+
+Drivers should consider using dma_mmap_pages() instead of this
+interface when building their uAPIs, when possible.
+
+It must never be used in an in-kernel driver that only works with
+kernel memory.
+19 -7
Documentation/devicetree/bindings/mtd/st,spear600-smi.yaml
···
   Flash sub nodes describe the memory range and optional per-flash
   properties.
 
-allOf:
-  - $ref: mtd.yaml#
-
 properties:
   compatible:
     const: st,spear600-smi
···
     $ref: /schemas/types.yaml#/definitions/uint32
     description: Functional clock rate of the SMI controller in Hz.
 
-  st,smi-fast-mode:
-    type: boolean
-    description: Indicates that the attached flash supports fast read mode.
+patternProperties:
+  "^flash@.*$":
+    $ref: /schemas/mtd/mtd.yaml#
+
+    properties:
+      reg:
+        maxItems: 1
+
+      st,smi-fast-mode:
+        type: boolean
+        description: Indicates that the attached flash supports fast read mode.
+
+    unevaluatedProperties: false
+
+    required:
+      - reg
 
 required:
   - compatible
   - reg
   - clock-rate
+  - "#address-cells"
+  - "#size-cells"
 
 unevaluatedProperties: false
 
···
         interrupts = <12>;
         clock-rate = <50000000>; /* 50 MHz */
 
-        flash@f8000000 {
+        flash@fc000000 {
             reg = <0xfc000000 0x1000>;
             st,smi-fast-mode;
         };
+2 -2
Documentation/devicetree/bindings/regulator/regulator.yaml
···
       offset from voltage set to regulator.
 
   regulator-uv-protection-microvolt:
-    description: Set over under voltage protection limit. This is a limit where
+    description: Set under voltage protection limit. This is a limit where
       hardware performs emergency shutdown. Zero can be passed to disable
       protection and value '1' indicates that protection should be enabled but
       limit setting can be omitted. Limit is given as microvolt offset from
···
       is given as microvolt offset from voltage set to regulator.
 
   regulator-uv-warn-microvolt:
-    description: Set over under voltage warning limit. This is a limit where
+    description: Set under voltage warning limit. This is a limit where
       hardware is assumed still to be functional but approaching limit where
       it gets damaged. Recovery actions should be initiated. Zero can be passed
       to disable detection and value '1' indicates that detection should
+48
Documentation/driver-api/driver-model/binding.rst
···
 When a driver is removed, the list of devices that it supports is
 iterated over, and the driver's remove callback is called for each
 one. The device is removed from that list and the symlinks removed.
+
+
+Driver Override
+~~~~~~~~~~~~~~~
+
+Userspace may override the standard matching by writing a driver name to
+a device's ``driver_override`` sysfs attribute. When set, only a driver
+whose name matches the override will be considered during binding. This
+bypasses all bus-specific matching (OF, ACPI, ID tables, etc.).
+
+The override may be cleared by writing an empty string, which returns
+the device to standard matching rules. Writing to ``driver_override``
+does not automatically unbind the device from its current driver or
+make any attempt to load the specified driver.
+
+Buses opt into this mechanism by setting the ``driver_override`` flag in
+their ``struct bus_type``::
+
+  const struct bus_type example_bus_type = {
+          ...
+          .driver_override = true,
+  };
+
+When the flag is set, the driver core automatically creates the
+``driver_override`` sysfs attribute for every device on that bus.
+
+The bus's ``match()`` callback should check the override before performing
+its own matching, using ``device_match_driver_override()``::
+
+  static int example_match(struct device *dev, const struct device_driver *drv)
+  {
+          int ret;
+
+          ret = device_match_driver_override(dev, drv);
+          if (ret >= 0)
+                  return ret;
+
+          /* Fall through to bus-specific matching... */
+  }
+
+``device_match_driver_override()`` returns > 0 if the override matches
+the given driver, 0 if the override is set but does not match, or < 0 if
+no override is set at all.
+
+Additional helpers are available:
+
+- ``device_set_driver_override()`` - set or clear the override from kernel code.
+- ``device_has_driver_override()`` - check whether an override is set.
+5 -3
MAINTAINERS
···
 ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
 M:	Corentin Chary <corentin.chary@gmail.com>
 M:	Luke D. Jones <luke@ljones.dev>
-M:	Denis Benato <benato.denis96@gmail.com>
+M:	Denis Benato <denis.benato@linux.dev>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 W:	https://asus-linux.org/
···
 F:	drivers/gpu/drm/tiny/hx8357d.c
 
 DRM DRIVER FOR HYPERV SYNTHETIC VIDEO DEVICE
-M:	Deepak Rawat <drawat.floss@gmail.com>
+M:	Dexuan Cui <decui@microsoft.com>
+M:	Long Li <longli@microsoft.com>
+M:	Saurabh Sengar <ssengar@linux.microsoft.com>
 L:	linux-hyperv@vger.kernel.org
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
···
 F:	drivers/pinctrl/spear/
 
 SPI NOR SUBSYSTEM
-M:	Tudor Ambarus <tudor.ambarus@linaro.org>
 M:	Pratyush Yadav <pratyush@kernel.org>
 M:	Michael Walle <mwalle@kernel.org>
+R:	Takahiro Kuwano <takahiro.kuwano@infineon.com>
 L:	linux-mtd@lists.infradead.org
 S:	Maintained
 W:	http://www.linux-mtd.infradead.org/
+2 -2
Makefile
···
 VERSION = 7
 PATCHLEVEL = 0
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc5
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
···
 	       modules.builtin.ranges vmlinux.o.map vmlinux.unstripped \
 	       compile_commands.json rust/test \
 	       rust-project.json .vmlinux.objs .vmlinux.export.c \
-	       .builtin-dtbs-list .builtin-dtb.S
+	       .builtin-dtbs-list .builtin-dtbs.S
 
 # Directories & files removed with 'make mrproper'
 MRPROPER_FILES += include/config include/generated \
+8
arch/arm64/kernel/pi/patch-scs.c
···
 		size -= 2;
 		break;
 
+	case DW_CFA_advance_loc4:
+		loc += *opcode++ * code_alignment_factor;
+		loc += (*opcode++ << 8) * code_alignment_factor;
+		loc += (*opcode++ << 16) * code_alignment_factor;
+		loc += (*opcode++ << 24) * code_alignment_factor;
+		size -= 4;
+		break;
+
 	case DW_CFA_def_cfa:
 	case DW_CFA_offset_extended:
 		size = skip_xleb128(&opcode, size);
+2 -1
arch/arm64/kernel/rsi.c
···
 
 #include <asm/io.h>
 #include <asm/mem_encrypt.h>
+#include <asm/pgtable.h>
 #include <asm/rsi.h>
 
 static struct realm_config config;
···
 		return;
 	if (WARN_ON(rsi_get_realm_config(&config)))
 		return;
-	prot_ns_shared = BIT(config.ipa_bits - 1);
+	prot_ns_shared = __phys_to_pte_val(BIT(config.ipa_bits - 1));
 
 	if (arm64_ioremap_prot_hook_register(realm_ioremap_hook))
 		return;
+1 -1
arch/arm64/kvm/at.c
···
 	if (!writable)
 		return -EPERM;
 
-	ptep = (u64 __user *)hva + offset;
+	ptep = (void __user *)hva + offset;
 	if (cpus_have_final_cap(ARM64_HAS_LSE_ATOMICS))
 		r = __lse_swap_desc(ptep, old, new);
 	else
+14
arch/arm64/kvm/reset.c
···
 		kvm_vcpu_set_be(vcpu);
 
 	*vcpu_pc(vcpu) = target_pc;
+
+	/*
+	 * We may come from a state where either a PC update was
+	 * pending (SMC call resulting in PC being incremented to
+	 * skip the SMC) or a pending exception. Make sure we get
+	 * rid of all that, as this cannot be valid out of reset.
+	 *
+	 * Note that clearing the exception mask also clears PC
+	 * updates, but that's an implementation detail, and we
+	 * really want to make it explicit.
+	 */
+	vcpu_clear_flag(vcpu, PENDING_EXCEPTION);
+	vcpu_clear_flag(vcpu, EXCEPT_MASK);
+	vcpu_clear_flag(vcpu, INCREMENT_PC);
 	vcpu_set_reg(vcpu, 0, reset_state.r0);
 }
+2 -2
arch/parisc/kernel/cache.c
···
 #else
 "1:	cmpb,<<,n %0,%2,1b\n"
 #endif
-"	fic,m	%3(%4,%0)\n"
+"	fdc,m	%3(%4,%0)\n"
 "2:	sync\n"
 	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1")
 	: "+r" (start), "+r" (error)
···
 #else
 "1:	cmpb,<<,n %0,%2,1b\n"
 #endif
-"	fdc,m	%3(%4,%0)\n"
+"	fic,m	%3(%4,%0)\n"
 "2:	sync\n"
 	ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1")
 	: "+r" (start), "+r" (error)
+3
arch/s390/include/asm/kvm_host.h
···
 void kvm_arch_crypto_set_masks(struct kvm *kvm, unsigned long *apm,
 			       unsigned long *aqm, unsigned long *adm);
 
+#define SIE64_RETURN_NORMAL	0
+#define SIE64_RETURN_MCCK	1
+
 int __sie64a(phys_addr_t sie_block_phys, struct kvm_s390_sie_block *sie_block, u64 *rsa,
 	     unsigned long gasce);
+1 -1
arch/s390/include/asm/stacktrace.h
···
 	struct {
 		unsigned long sie_control_block;
 		unsigned long sie_savearea;
-		unsigned long sie_reason;
+		unsigned long sie_return;
 		unsigned long sie_flags;
 		unsigned long sie_control_block_phys;
 		unsigned long sie_guest_asce;
+1 -1
arch/s390/kernel/asm-offsets.c
···
 	OFFSET(__SF_EMPTY, stack_frame, empty[0]);
 	OFFSET(__SF_SIE_CONTROL, stack_frame, sie_control_block);
 	OFFSET(__SF_SIE_SAVEAREA, stack_frame, sie_savearea);
-	OFFSET(__SF_SIE_REASON, stack_frame, sie_reason);
+	OFFSET(__SF_SIE_RETURN, stack_frame, sie_return);
 	OFFSET(__SF_SIE_FLAGS, stack_frame, sie_flags);
 	OFFSET(__SF_SIE_CONTROL_PHYS, stack_frame, sie_control_block_phys);
 	OFFSET(__SF_SIE_GUEST_ASCE, stack_frame, sie_guest_asce);
+2 -2
arch/s390/kernel/entry.S
···
 	stg	%r3,__SF_SIE_CONTROL(%r15)	# ...and virtual addresses
 	stg	%r4,__SF_SIE_SAVEAREA(%r15)	# save guest register save area
 	stg	%r5,__SF_SIE_GUEST_ASCE(%r15)	# save guest asce
-	xc	__SF_SIE_REASON(8,%r15),__SF_SIE_REASON(%r15)	# reason code = 0
+	xc	__SF_SIE_RETURN(8,%r15),__SF_SIE_RETURN(%r15)	# return code = 0
 	mvc	__SF_SIE_FLAGS(8,%r15),__TI_flags(%r14)	# copy thread flags
 	lmg	%r0,%r13,0(%r4)			# load guest gprs 0-13
 	mvi	__TI_sie(%r14),1
···
 	xgr	%r4,%r4
 	xgr	%r5,%r5
 	lmg	%r6,%r14,__SF_GPRS(%r15)	# restore kernel registers
-	lg	%r2,__SF_SIE_REASON(%r15)	# return exit reason code
+	lg	%r2,__SF_SIE_RETURN(%r15)	# return sie return code
 	BR_EX	%r14
 SYM_FUNC_END(__sie64a)
 EXPORT_SYMBOL(__sie64a)
+2 -2
arch/s390/kernel/nmi.c
···
 	mcck_dam_code = (mci.val & MCIC_SUBCLASS_MASK);
 	if (test_cpu_flag(CIF_MCCK_GUEST) &&
 	    (mcck_dam_code & MCCK_CODE_NO_GUEST) != mcck_dam_code) {
-		/* Set exit reason code for host's later handling */
-		*((long *)(regs->gprs[15] + __SF_SIE_REASON)) = -EINTR;
+		/* Set sie return code for host's later handling */
+		((struct stack_frame *)regs->gprs[15])->sie_return = SIE64_RETURN_MCCK;
 	}
 	clear_cpu_flag(CIF_MCCK_GUEST);
+4 -2
arch/s390/kvm/gaccess.c
···
 	if (rc)
 		return rc;
 
-	pgste = pgste_get_lock(ptep_h);
+	if (!pgste_get_trylock(ptep_h, &pgste))
+		return -EAGAIN;
 	newpte = _pte(f->pfn, f->writable, !p, 0);
 	newpte.s.d |= ptep->s.d;
 	newpte.s.sd |= ptep->s.sd;
···
 	pgste_set_unlock(ptep_h, pgste);
 
 	newpte = _pte(f->pfn, 0, !p, 0);
-	pgste = pgste_get_lock(ptep);
+	if (!pgste_get_trylock(ptep, &pgste))
+		return -EAGAIN;
 	pgste = __dat_ptep_xchg(ptep, pgste, newpte, gpa_to_gfn(raddr), sg->asce, uses_skeys(sg));
 	pgste_set_unlock(ptep, pgste);
+18
arch/s390/kvm/interrupt.c
···
 
 	bit = bit_nr + (addr % PAGE_SIZE) * 8;
 
+	/* kvm_set_routing_entry() should never allow this to happen */
+	WARN_ON_ONCE(bit > (PAGE_SIZE * BITS_PER_BYTE - 1));
+
 	return swap ? (bit ^ (BITS_PER_LONG - 1)) : bit;
 }
···
 	int rc;
 
 	mci.val = mcck_info->mcic;
+
+	/* log machine checks being reinjected on all debugs */
+	VCPU_EVENT(vcpu, 2, "guest machine check %lx", mci.val);
+	KVM_EVENT(2, "guest machine check %lx", mci.val);
+	pr_info("guest machine check pid %d: %lx", current->pid, mci.val);
+
 	if (mci.sr)
 		cr14 |= CR14_RECOVERY_SUBMASK;
 	if (mci.dg)
···
 			  struct kvm_kernel_irq_routing_entry *e,
 			  const struct kvm_irq_routing_entry *ue)
 {
+	const struct kvm_irq_routing_s390_adapter *adapter;
 	u64 uaddr_s, uaddr_i;
 	int idx;
 
···
 	if (kvm_is_ucontrol(kvm))
 		return -EINVAL;
 	e->set = set_adapter_int;
+
+	adapter = &ue->u.adapter;
+	if (adapter->summary_addr + (adapter->summary_offset / 8) >=
+	    (adapter->summary_addr & PAGE_MASK) + PAGE_SIZE)
+		return -EINVAL;
+	if (adapter->ind_addr + (adapter->ind_offset / 8) >=
+	    (adapter->ind_addr & PAGE_MASK) + PAGE_SIZE)
+		return -EINVAL;
 
 	idx = srcu_read_lock(&kvm->srcu);
 	uaddr_s = gpa_to_hva(kvm, ue->u.adapter.summary_addr);
+8 -8
arch/s390/kvm/kvm-s390.c
···
 	return 0;
 }
 
-static int vcpu_post_run(struct kvm_vcpu *vcpu, int exit_reason)
+static int vcpu_post_run(struct kvm_vcpu *vcpu, int sie_return)
 {
 	struct mcck_volatile_info *mcck_info;
 	struct sie_page *sie_page;
···
 	vcpu->run->s.regs.gprs[14] = vcpu->arch.sie_block->gg14;
 	vcpu->run->s.regs.gprs[15] = vcpu->arch.sie_block->gg15;
 
-	if (exit_reason == -EINTR) {
-		VCPU_EVENT(vcpu, 3, "%s", "machine check");
+	if (sie_return == SIE64_RETURN_MCCK) {
 		sie_page = container_of(vcpu->arch.sie_block,
 					struct sie_page, sie_block);
 		mcck_info = &sie_page->mcck_info;
 		kvm_s390_reinject_machine_check(vcpu, mcck_info);
 		return 0;
 	}
+	WARN_ON_ONCE(sie_return != SIE64_RETURN_NORMAL);
 
 	if (vcpu->arch.sie_block->icptcode > 0) {
 		rc = kvm_handle_sie_intercept(vcpu);
···
 #define PSW_INT_MASK (PSW_MASK_EXT | PSW_MASK_IO | PSW_MASK_MCHECK)
 static int __vcpu_run(struct kvm_vcpu *vcpu)
 {
-	int rc, exit_reason;
+	int rc, sie_return;
 	struct sie_page *sie_page = (struct sie_page *)vcpu->arch.sie_block;
 
 	/*
···
 		guest_timing_enter_irqoff();
 		__disable_cpu_timer_accounting(vcpu);
 
-		exit_reason = kvm_s390_enter_exit_sie(vcpu->arch.sie_block,
-						      vcpu->run->s.regs.gprs,
-						      vcpu->arch.gmap->asce.val);
+		sie_return = kvm_s390_enter_exit_sie(vcpu->arch.sie_block,
+						     vcpu->run->s.regs.gprs,
+						     vcpu->arch.gmap->asce.val);
 
 		__enable_cpu_timer_accounting(vcpu);
 		guest_timing_exit_irqoff();
···
 		}
 		kvm_vcpu_srcu_read_lock(vcpu);
 
-		rc = vcpu_post_run(vcpu, exit_reason);
+		rc = vcpu_post_run(vcpu, sie_return);
 		if (rc || guestdbg_exit_pending(vcpu)) {
 			kvm_vcpu_srcu_read_unlock(vcpu);
 			break;
+5 -3
arch/s390/kvm/vsie.c
···
 {
 	struct kvm_s390_sie_block *scb_s = &vsie_page->scb_s;
 	struct kvm_s390_sie_block *scb_o = vsie_page->scb_o;
+	unsigned long sie_return = SIE64_RETURN_NORMAL;
 	int guest_bp_isolation;
 	int rc = 0;
 
···
 			goto xfer_to_guest_mode_check;
 		}
 		guest_timing_enter_irqoff();
-		rc = kvm_s390_enter_exit_sie(scb_s, vcpu->run->s.regs.gprs, sg->asce.val);
+		sie_return = kvm_s390_enter_exit_sie(scb_s, vcpu->run->s.regs.gprs, sg->asce.val);
 		guest_timing_exit_irqoff();
 		local_irq_enable();
 	}
···
 
 	kvm_vcpu_srcu_read_lock(vcpu);
 
-	if (rc == -EINTR) {
-		VCPU_EVENT(vcpu, 3, "%s", "machine check");
+	if (sie_return == SIE64_RETURN_MCCK) {
 		kvm_s390_reinject_machine_check(vcpu, &vsie_page->mcck_info);
 		return 0;
 	}
+
+	WARN_ON_ONCE(sie_return != SIE64_RETURN_NORMAL);
 
 	if (rc > 0)
 		rc = 0; /* we could still have an icpt */
+9 -2
arch/s390/mm/fault.c
···
 		folio = phys_to_folio(addr);
 		if (unlikely(!folio_try_get(folio)))
 			return;
-		rc = arch_make_folio_accessible(folio);
+		rc = uv_convert_from_secure(folio_to_phys(folio));
+		if (!rc)
+			clear_bit(PG_arch_1, &folio->flags.f);
 		folio_put(folio);
+		/*
+		 * There are some valid fixup types for kernel
+		 * accesses to donated secure memory. zeropad is one
+		 * of them.
+		 */
 		if (rc)
-			BUG();
+			return handle_fault_error_nolock(regs, 0);
 	} else {
 		if (faulthandler_disabled())
 			return handle_fault_error_nolock(regs, 0);
-4
arch/sh/drivers/platform_early.c
···
 	struct platform_device *pdev = to_platform_device(dev);
 	struct platform_driver *pdrv = to_platform_driver(drv);
 
-	/* When driver_override is set, only bind to the matching driver */
-	if (pdev->driver_override)
-		return !strcmp(pdev->driver_override, drv->name);
-
 	/* Then try to match against the id table */
 	if (pdrv->id_table)
 		return platform_match_id(pdrv->id_table, pdev) != NULL;
+1 -1
arch/x86/entry/vdso/common/vclock_gettime.c
···
 #include <linux/types.h>
 #include <vdso/gettime.h>
 
-#include "../../../../lib/vdso/gettimeofday.c"
+#include "lib/vdso/gettimeofday.c"
 
 int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
 {
+5 -2
arch/x86/events/core.c
···
 		else if (i < n_running)
 			continue;
 
-		if (hwc->state & PERF_HES_ARCH)
+		cpuc->events[hwc->idx] = event;
+
+		if (hwc->state & PERF_HES_ARCH) {
+			static_call(x86_pmu_set_period)(event);
 			continue;
+		}
 
 		/*
 		 * if cpuc->enabled = 0, then no wrmsr as
 		 * per x86_pmu_enable_event()
 		 */
-		cpuc->events[hwc->idx] = event;
 		x86_pmu_start(event, PERF_EF_RELOAD);
 	}
 	cpuc->n_added = 0;
+21 -10
arch/x86/events/intel/core.c
···
 	event->hw.dyn_constraint &= hybrid(event->pmu, acr_cause_mask64);
 }
 
+static inline int intel_set_branch_counter_constr(struct perf_event *event,
+						  int *num)
+{
+	if (branch_sample_call_stack(event))
+		return -EINVAL;
+	if (branch_sample_counters(event)) {
+		(*num)++;
+		event->hw.dyn_constraint &= x86_pmu.lbr_counters;
+	}
+
+	return 0;
+}
+
 static int intel_pmu_hw_config(struct perf_event *event)
 {
 	int ret = x86_pmu_hw_config(event);
···
 	 * group, which requires the extra space to store the counters.
 	 */
 	leader = event->group_leader;
-	if (branch_sample_call_stack(leader))
+	if (intel_set_branch_counter_constr(leader, &num))
 		return -EINVAL;
-	if (branch_sample_counters(leader)) {
-		num++;
-		leader->hw.dyn_constraint &= x86_pmu.lbr_counters;
-	}
 	leader->hw.flags |= PERF_X86_EVENT_BRANCH_COUNTERS;
 
 	for_each_sibling_event(sibling, leader) {
-		if (branch_sample_call_stack(sibling))
+		if (intel_set_branch_counter_constr(sibling, &num))
 			return -EINVAL;
-		if (branch_sample_counters(sibling)) {
-			num++;
-			sibling->hw.dyn_constraint &= x86_pmu.lbr_counters;
-		}
+	}
+
+	/* event isn't installed as a sibling yet. */
+	if (event != leader) {
+		if (intel_set_branch_counter_constr(event, &num))
+			return -EINVAL;
 	}
 
 	if (num > fls(x86_pmu.lbr_counters))
+7 -4
arch/x86/events/intel/ds.c
···
 	if (omr.omr_remote)
 		val |= REM;
 
-	val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
-
 	if (omr.omr_source == 0x2) {
-		u8 snoop = omr.omr_snoop | omr.omr_promoted;
+		u8 snoop = omr.omr_snoop | (omr.omr_promoted << 1);
 
-		if (snoop == 0x0)
+		if (omr.omr_hitm)
+			val |= P(SNOOP, HITM);
+		else if (snoop == 0x0)
 			val |= P(SNOOP, NA);
 		else if (snoop == 0x1)
 			val |= P(SNOOP, MISS);
···
 		else if (snoop == 0x3)
 			val |= P(SNOOP, NONE);
 	} else if (omr.omr_source > 0x2 && omr.omr_source < 0x7) {
+		val |= omr.omr_hitm ? P(SNOOP, HITM) : P(SNOOP, HIT);
 		val |= omr.omr_snoop ? P(SNOOPX, FWD) : 0;
+	} else {
+		val |= P(SNOOP, NONE);
 	}
 
 	return val;
+61 -57
arch/x86/hyperv/hv_crash.c
···
 		cpu_relax();
 }
 
-/* This cannot be inlined as it needs stack */
-static noinline __noclone void hv_crash_restore_tss(void)
+static void hv_crash_restore_tss(void)
 {
 	load_TR_desc();
 }
 
-/* This cannot be inlined as it needs stack */
-static noinline void hv_crash_clear_kernpt(void)
+static void hv_crash_clear_kernpt(void)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
···
 	native_p4d_clear(p4d);
 }
 
-/*
- * This is the C entry point from the asm glue code after the disable hypercall.
- * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
- * page tables with our below 4G page identity mapped, but using a temporary
- * GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not
- * available. We restore kernel GDT, and rest of the context, and continue
- * to kexec.
- */
-static asmlinkage void __noreturn hv_crash_c_entry(void)
+
+static void __noreturn hv_crash_handle(void)
 {
-	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
-
-	/* first thing, restore kernel gdt */
-	native_load_gdt(&ctxt->gdtr);
-
-	asm volatile("movw %%ax, %%ss" : : "a"(ctxt->ss));
-	asm volatile("movq %0, %%rsp" : : "m"(ctxt->rsp));
-
-	asm volatile("movw %%ax, %%ds" : : "a"(ctxt->ds));
-	asm volatile("movw %%ax, %%es" : : "a"(ctxt->es));
-	asm volatile("movw %%ax, %%fs" : : "a"(ctxt->fs));
-	asm volatile("movw %%ax, %%gs" : : "a"(ctxt->gs));
-
-	native_wrmsrq(MSR_IA32_CR_PAT, ctxt->pat);
-	asm volatile("movq %0, %%cr0" : : "r"(ctxt->cr0));
-
-	asm volatile("movq %0, %%cr8" : : "r"(ctxt->cr8));
-	asm volatile("movq %0, %%cr4" : : "r"(ctxt->cr4));
-	asm volatile("movq %0, %%cr2" : : "r"(ctxt->cr4));
-
-	native_load_idt(&ctxt->idtr);
-	native_wrmsrq(MSR_GS_BASE, ctxt->gsbase);
-	native_wrmsrq(MSR_EFER, ctxt->efer);
-
-	/* restore the original kernel CS now via far return */
-	asm volatile("movzwq %0, %%rax\n\t"
-		     "pushq %%rax\n\t"
-		     "pushq $1f\n\t"
-		     "lretq\n\t"
-		     "1:nop\n\t" : : "m"(ctxt->cs) : "rax");
-
-	/* We are in asmlinkage without stack frame, hence make C function
-	 * calls which will buy stack frames.
-	 */
 	hv_crash_restore_tss();
 	hv_crash_clear_kernpt();
···
 
 	hv_panic_timeout_reboot();
 }
-/* Tell gcc we are using lretq long jump in the above function intentionally */
+
+/*
+ * __naked functions do not permit function calls, not even to __always_inline
+ * functions that only contain asm() blocks themselves. So use a macro instead.
+ */
+#define hv_wrmsr(msr, val) \
+	asm volatile("wrmsr" :: "c"(msr), "a"((u32)val), "d"((u32)(val >> 32)) : "memory")
+
+/*
+ * This is the C entry point from the asm glue code after the disable hypercall.
+ * We enter here in IA32-e long mode, ie, full 64bit mode running on kernel
+ * page tables with our below 4G page identity mapped, but using a temporary
+ * GDT. ds/fs/gs/es are null. ss is not usable. bp is null. stack is not
+ * available. We restore kernel GDT, and rest of the context, and continue
+ * to kexec.
+ */
+static void __naked hv_crash_c_entry(void)
+{
+	/* first thing, restore kernel gdt */
+	asm volatile("lgdt %0" : : "m" (hv_crash_ctxt.gdtr));
+
+	asm volatile("movw %0, %%ss\n\t"
+		     "movq %1, %%rsp"
+		     :: "m"(hv_crash_ctxt.ss), "m"(hv_crash_ctxt.rsp));
+
+	asm volatile("movw %0, %%ds" : : "m"(hv_crash_ctxt.ds));
+	asm volatile("movw %0, %%es" : : "m"(hv_crash_ctxt.es));
+	asm volatile("movw %0, %%fs" : : "m"(hv_crash_ctxt.fs));
+	asm volatile("movw %0, %%gs" : : "m"(hv_crash_ctxt.gs));
+
+	hv_wrmsr(MSR_IA32_CR_PAT, hv_crash_ctxt.pat);
+	asm volatile("movq %0, %%cr0" : : "r"(hv_crash_ctxt.cr0));
+
+	asm volatile("movq %0, %%cr8" : : "r"(hv_crash_ctxt.cr8));
+	asm volatile("movq %0, %%cr4" : : "r"(hv_crash_ctxt.cr4));
+	asm volatile("movq %0, %%cr2" : : "r"(hv_crash_ctxt.cr2));
+
+	asm volatile("lidt %0" : : "m" (hv_crash_ctxt.idtr));
+	hv_wrmsr(MSR_GS_BASE, hv_crash_ctxt.gsbase);
+	hv_wrmsr(MSR_EFER, hv_crash_ctxt.efer);
+
+	/* restore the original kernel CS now via far return */
+	asm volatile("pushq %q0\n\t"
+		     "pushq %q1\n\t"
+		     "lretq"
+		     :: "r"(hv_crash_ctxt.cs), "r"(hv_crash_handle));
+}
+/* Tell objtool we are using lretq long jump in the above function intentionally */
 STACK_FRAME_NON_STANDARD(hv_crash_c_entry);
 
 static void hv_mark_tss_not_busy(void)
···
 {
 	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
 
-	asm volatile("movq %%rsp,%0" : "=m"(ctxt->rsp));
+	ctxt->rsp = current_stack_pointer;
 
 	ctxt->cr0 = native_read_cr0();
 	ctxt->cr4 = native_read_cr4();
 
-	asm volatile("movq %%cr2, %0" : "=a"(ctxt->cr2));
-	asm volatile("movq %%cr8, %0" : "=a"(ctxt->cr8));
+	asm volatile("movq %%cr2, %0" : "=r"(ctxt->cr2));
+	asm volatile("movq %%cr8, %0" : "=r"(ctxt->cr8));
 
-	asm volatile("movl %%cs, %%eax" : "=a"(ctxt->cs));
-	asm volatile("movl %%ss, %%eax" : "=a"(ctxt->ss));
-	asm volatile("movl %%ds, %%eax" : "=a"(ctxt->ds));
-	asm volatile("movl %%es, %%eax" : "=a"(ctxt->es));
-	asm volatile("movl %%fs, %%eax" : "=a"(ctxt->fs));
-	asm volatile("movl %%gs, %%eax" : "=a"(ctxt->gs));
+	asm volatile("movw %%cs, %0" : "=m"(ctxt->cs));
+	asm volatile("movw %%ss, %0" : "=m"(ctxt->ss));
+	asm volatile("movw %%ds, %0" : "=m"(ctxt->ds));
+	asm volatile("movw %%es, %0" : "=m"(ctxt->es));
+	asm volatile("movw %%fs, %0" : "=m"(ctxt->fs));
+	asm volatile("movw %%gs, %0" : "=m"(ctxt->gs));
 
 	native_store_gdt(&ctxt->gdtr);
 	store_idt(&ctxt->idtr);
+16 -2
arch/x86/kernel/apic/x2apic_uv_x.c
···
 	struct uv_hub_info_s *new_hub;
 
 	/* Allocate & fill new per hub info list */
-	new_hub = (bid == 0) ? &uv_hub_info_node0
-			     : kzalloc_node(bytes, GFP_KERNEL, uv_blade_to_node(bid));
+	if (bid == 0) {
+		new_hub = &uv_hub_info_node0;
+	} else {
+		int nid;
+
+		/*
+		 * Deconfigured sockets are mapped to SOCK_EMPTY. Use
+		 * NUMA_NO_NODE to allocate on a valid node.
+		 */
+		nid = uv_blade_to_node(bid);
+		if (nid == SOCK_EMPTY)
+			nid = NUMA_NO_NODE;
+
+		new_hub = kzalloc_node(bytes, GFP_KERNEL, nid);
+	}
+
 	if (WARN_ON_ONCE(!new_hub)) {
 		/* do not kfree() bid 0, which is statically allocated */
 		while (--bid > 0)
+11 -6
arch/x86/kernel/cpu/mce/amd.c
···
 {
 	amd_reset_thr_limit(m->bank);
 
-	/* Clear MCA_DESTAT for all deferred errors even those logged in MCA_STATUS. */
-	if (m->status & MCI_STATUS_DEFERRED)
-		mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);
+	if (mce_flags.smca) {
+		/*
+		 * Clear MCA_DESTAT for all deferred errors even those
+		 * logged in MCA_STATUS.
+		 */
+		if (m->status & MCI_STATUS_DEFERRED)
+			mce_wrmsrq(MSR_AMD64_SMCA_MCx_DESTAT(m->bank), 0);
 
-	/* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */
-	if (m->kflags & MCE_CHECK_DFR_REGS)
-		return;
+		/* Don't clear MCA_STATUS if MCA_DESTAT was used exclusively. */
+		if (m->kflags & MCE_CHECK_DFR_REGS)
+			return;
+	}
 
 	mce_wrmsrq(mca_msr_reg(m->bank, MCA_STATUS), 0);
 }
+3 -2
arch/x86/kernel/cpu/mshyperv.c
···
 	    test_and_set_bit(HYPERV_DBG_FASTFAIL_VECTOR, system_vectors))
 		BUG();
 
-	pr_info("Hyper-V: reserve vectors: %d %d %d\n", HYPERV_DBG_ASSERT_VECTOR,
-		HYPERV_DBG_SERVICE_VECTOR, HYPERV_DBG_FASTFAIL_VECTOR);
+	pr_info("Hyper-V: reserve vectors: 0x%x 0x%x 0x%x\n",
+		HYPERV_DBG_ASSERT_VECTOR, HYPERV_DBG_SERVICE_VECTOR,
+		HYPERV_DBG_FASTFAIL_VECTOR);
 }
 
 static void __init ms_hyperv_init_platform(void)
+3
drivers/ata/libata-core.c
··· 4188 4188 { "ST3320[68]13AS", "SD1[5-9]", ATA_QUIRK_NONCQ | 4189 4189 ATA_QUIRK_FIRMWARE_WARN }, 4190 4190 4191 + /* ADATA devices with LPM issues. */ 4192 + { "ADATA SU680", NULL, ATA_QUIRK_NOLPM }, 4193 + 4191 4194 /* Seagate disks with LPM issues */ 4192 4195 { "ST1000DM010-2EP102", NULL, ATA_QUIRK_NOLPM }, 4193 4196 { "ST2000DM008-2FR102", NULL, ATA_QUIRK_NOLPM },
+1 -1
drivers/ata/libata-scsi.c
··· 3600 3600 3601 3601 if (cdb[2] != 1 && cdb[2] != 3) { 3602 3602 ata_dev_warn(dev, "invalid command format %d\n", cdb[2]); 3603 - ata_scsi_set_invalid_field(dev, cmd, 1, 0xff); 3603 + ata_scsi_set_invalid_field(dev, cmd, 2, 0xff); 3604 3604 return 0; 3605 3605 } 3606 3606
+42 -1
drivers/base/bus.c
··· 504 504 }
 505 505 EXPORT_SYMBOL_GPL(bus_for_each_drv);
 506 506 
 507 + static ssize_t driver_override_store(struct device *dev,
 508 + struct device_attribute *attr,
 509 + const char *buf, size_t count)
 510 + {
 511 + int ret;
 512 + 
 513 + ret = __device_set_driver_override(dev, buf, count);
 514 + if (ret)
 515 + return ret;
 516 + 
 517 + return count;
 518 + }
 519 + 
 520 + static ssize_t driver_override_show(struct device *dev,
 521 + struct device_attribute *attr, char *buf)
 522 + {
 523 + guard(spinlock)(&dev->driver_override.lock);
 524 + return sysfs_emit(buf, "%s\n", dev->driver_override.name);
 525 + }
 526 + static DEVICE_ATTR_RW(driver_override);
 527 + 
 528 + static struct attribute *driver_override_dev_attrs[] = {
 529 + &dev_attr_driver_override.attr,
 530 + NULL,
 531 + };
 532 + 
 533 + static const struct attribute_group driver_override_dev_group = {
 534 + .attrs = driver_override_dev_attrs,
 535 + };
 536 + 
 507 537 /**
 508 538 * bus_add_device - add device to bus
 509 539 * @dev: device being added
··· 567 537 if (error)
 568 538 goto out_put;
 569 539 
 540 + if (dev->bus->driver_override) {
 541 + error = device_add_group(dev, &driver_override_dev_group);
 542 + if (error)
 543 + goto out_groups;
 544 + }
 545 + 
 570 546 error = sysfs_create_link(&sp->devices_kset->kobj, &dev->kobj, dev_name(dev));
 571 547 if (error)
 572 - goto out_groups;
 548 + goto out_override;
 573 549 
 574 550 error = sysfs_create_link(&dev->kobj, &sp->subsys.kobj, "subsystem");
 575 551 if (error)
··· 586 550 
 587 551 out_subsys:
 588 552 sysfs_remove_link(&sp->devices_kset->kobj, dev_name(dev));
 553 + out_override:
 554 + if (dev->bus->driver_override)
 555 + device_remove_group(dev, &driver_override_dev_group);
 589 556 out_groups:
 590 557 device_remove_groups(dev, sp->bus->dev_groups);
 591 558 out_put:
··· 646 607 
 647 608 sysfs_remove_link(&dev->kobj, "subsystem");
 648 609 sysfs_remove_link(&sp->devices_kset->kobj, dev_name(dev));
 610 + if (dev->bus->driver_override)
 611 + device_remove_group(dev, &driver_override_dev_group);
 649 612 device_remove_groups(dev, dev->bus->dev_groups);
 650 613 if (klist_node_attached(&dev->p->knode_bus))
 651 614 klist_del(&dev->p->knode_bus);
+2
drivers/base/core.c
··· 2556 2556 devres_release_all(dev); 2557 2557 2558 2558 kfree(dev->dma_range_map); 2559 + kfree(dev->driver_override.name); 2559 2560 2560 2561 if (dev->release) 2561 2562 dev->release(dev); ··· 3160 3159 kobject_init(&dev->kobj, &device_ktype); 3161 3160 INIT_LIST_HEAD(&dev->dma_pools); 3162 3161 mutex_init(&dev->mutex); 3162 + spin_lock_init(&dev->driver_override.lock); 3163 3163 lockdep_set_novalidate_class(&dev->mutex); 3164 3164 spin_lock_init(&dev->devres_lock); 3165 3165 INIT_LIST_HEAD(&dev->devres_head);
+60
drivers/base/dd.c
··· 381 381 } 382 382 __exitcall(deferred_probe_exit); 383 383 384 + int __device_set_driver_override(struct device *dev, const char *s, size_t len) 385 + { 386 + const char *new, *old; 387 + char *cp; 388 + 389 + if (!s) 390 + return -EINVAL; 391 + 392 + /* 393 + * The stored value will be used in sysfs show callback (sysfs_emit()), 394 + * which has a length limit of PAGE_SIZE and adds a trailing newline. 395 + * Thus we can store one character less to avoid truncation during sysfs 396 + * show. 397 + */ 398 + if (len >= (PAGE_SIZE - 1)) 399 + return -EINVAL; 400 + 401 + /* 402 + * Compute the real length of the string in case userspace sends us a 403 + * bunch of \0 characters like python likes to do. 404 + */ 405 + len = strlen(s); 406 + 407 + if (!len) { 408 + /* Empty string passed - clear override */ 409 + spin_lock(&dev->driver_override.lock); 410 + old = dev->driver_override.name; 411 + dev->driver_override.name = NULL; 412 + spin_unlock(&dev->driver_override.lock); 413 + kfree(old); 414 + 415 + return 0; 416 + } 417 + 418 + cp = strnchr(s, len, '\n'); 419 + if (cp) 420 + len = cp - s; 421 + 422 + new = kstrndup(s, len, GFP_KERNEL); 423 + if (!new) 424 + return -ENOMEM; 425 + 426 + spin_lock(&dev->driver_override.lock); 427 + old = dev->driver_override.name; 428 + if (cp != s) { 429 + dev->driver_override.name = new; 430 + spin_unlock(&dev->driver_override.lock); 431 + } else { 432 + /* "\n" passed - clear override */ 433 + dev->driver_override.name = NULL; 434 + spin_unlock(&dev->driver_override.lock); 435 + 436 + kfree(new); 437 + } 438 + kfree(old); 439 + 440 + return 0; 441 + } 442 + EXPORT_SYMBOL_GPL(__device_set_driver_override); 443 + 384 444 /** 385 445 * device_is_bound() - Check if device is bound to a driver 386 446 * @dev: device to check
+5 -32
drivers/base/platform.c
··· 603 603 kfree(pa->pdev.dev.platform_data);
 604 604 kfree(pa->pdev.mfd_cell);
 605 605 kfree(pa->pdev.resource);
 606 - kfree(pa->pdev.driver_override);
 607 606 kfree(pa);
 608 607 }
 609 608 
··· 1305 1306 }
 1306 1307 static DEVICE_ATTR_RO(numa_node);
 1307 1308 
 1308 - static ssize_t driver_override_show(struct device *dev,
 1309 - struct device_attribute *attr, char *buf)
 1310 - {
 1311 - struct platform_device *pdev = to_platform_device(dev);
 1312 - ssize_t len;
 1313 - 
 1314 - device_lock(dev);
 1315 - len = sysfs_emit(buf, "%s\n", pdev->driver_override);
 1316 - device_unlock(dev);
 1317 - 
 1318 - return len;
 1319 - }
 1320 - 
 1321 - static ssize_t driver_override_store(struct device *dev,
 1322 - struct device_attribute *attr,
 1323 - const char *buf, size_t count)
 1324 - {
 1325 - struct platform_device *pdev = to_platform_device(dev);
 1326 - int ret;
 1327 - 
 1328 - ret = driver_set_override(dev, &pdev->driver_override, buf, count);
 1329 - if (ret)
 1330 - return ret;
 1331 - 
 1332 - return count;
 1333 - }
 1334 - static DEVICE_ATTR_RW(driver_override);
 1335 - 
 1336 1309 static struct attribute *platform_dev_attrs[] = {
 1337 1310 &dev_attr_modalias.attr,
 1338 1311 &dev_attr_numa_node.attr,
 1339 - &dev_attr_driver_override.attr,
 1340 1312 NULL,
 1341 1313 };
 1342 1314 
··· 1347 1377 {
 1348 1378 struct platform_device *pdev = to_platform_device(dev);
 1349 1379 struct platform_driver *pdrv = to_platform_driver(drv);
 1380 + int ret;
 1350 1381 
 1351 1382 /* When driver_override is set, only bind to the matching driver */
 1352 - if (pdev->driver_override)
 1353 - return !strcmp(pdev->driver_override, drv->name);
 1383 + ret = device_match_driver_override(dev, drv);
 1384 + if (ret >= 0)
 1385 + return ret;
 1354 1386 
 1355 1387 /* Attempt an OF style match first */
 1356 1388 if (of_driver_match_device(dev, drv))
··· 1488 1516 const struct bus_type platform_bus_type = {
 1489 1517 .name = "platform",
 1490 1518 .dev_groups = platform_dev_groups,
 1519 + .driver_override = true,
 1491 1520 .match = platform_match,
 1492 1521 .uevent = platform_uevent,
 1493 1522 .probe = platform_probe,
+14 -25
drivers/block/zram/zram_drv.c
··· 917 917 918 918 static int zram_writeback_complete(struct zram *zram, struct zram_wb_req *req) 919 919 { 920 - u32 size, index = req->pps->index; 921 - int err, prio; 922 - bool huge; 920 + u32 index = req->pps->index; 921 + int err; 923 922 924 923 err = blk_status_to_errno(req->bio.bi_status); 925 924 if (err) { ··· 945 946 goto out; 946 947 } 947 948 948 - if (zram->compressed_wb) { 949 - /* 950 - * ZRAM_WB slots get freed, we need to preserve data required 951 - * for read decompression. 952 - */ 953 - size = get_slot_size(zram, index); 954 - prio = get_slot_comp_priority(zram, index); 955 - huge = test_slot_flag(zram, index, ZRAM_HUGE); 956 - } 957 - 958 - slot_free(zram, index); 959 - set_slot_flag(zram, index, ZRAM_WB); 949 + clear_slot_flag(zram, index, ZRAM_IDLE); 950 + if (test_slot_flag(zram, index, ZRAM_HUGE)) 951 + atomic64_dec(&zram->stats.huge_pages); 952 + atomic64_sub(get_slot_size(zram, index), &zram->stats.compr_data_size); 953 + zs_free(zram->mem_pool, get_slot_handle(zram, index)); 960 954 set_slot_handle(zram, index, req->blk_idx); 961 - 962 - if (zram->compressed_wb) { 963 - if (huge) 964 - set_slot_flag(zram, index, ZRAM_HUGE); 965 - set_slot_size(zram, index, size); 966 - set_slot_comp_priority(zram, index, prio); 967 - } 968 - 969 - atomic64_inc(&zram->stats.pages_stored); 955 + set_slot_flag(zram, index, ZRAM_WB); 970 956 971 957 out: 972 958 slot_unlock(zram, index); ··· 1994 2010 set_slot_comp_priority(zram, index, 0); 1995 2011 1996 2012 if (test_slot_flag(zram, index, ZRAM_HUGE)) { 2013 + /* 2014 + * Writeback completion decrements ->huge_pages but keeps 2015 + * ZRAM_HUGE flag for deferred decompression path. 2016 + */ 2017 + if (!test_slot_flag(zram, index, ZRAM_WB)) 2018 + atomic64_dec(&zram->stats.huge_pages); 1997 2019 clear_slot_flag(zram, index, ZRAM_HUGE); 1998 - atomic64_dec(&zram->stats.huge_pages); 1999 2020 } 2000 2021 2001 2022 if (test_slot_flag(zram, index, ZRAM_WB)) {
+8 -3
drivers/bluetooth/btintel.c
··· 251 251 252 252 bt_dev_err(hdev, "Hardware error 0x%2.2x", code); 253 253 254 + hci_req_sync_lock(hdev); 255 + 254 256 skb = __hci_cmd_sync(hdev, HCI_OP_RESET, 0, NULL, HCI_INIT_TIMEOUT); 255 257 if (IS_ERR(skb)) { 256 258 bt_dev_err(hdev, "Reset after hardware error failed (%ld)", 257 259 PTR_ERR(skb)); 258 - return; 260 + goto unlock; 259 261 } 260 262 kfree_skb(skb); 261 263 ··· 265 263 if (IS_ERR(skb)) { 266 264 bt_dev_err(hdev, "Retrieving Intel exception info failed (%ld)", 267 265 PTR_ERR(skb)); 268 - return; 266 + goto unlock; 269 267 } 270 268 271 269 if (skb->len != 13) { 272 270 bt_dev_err(hdev, "Exception info size mismatch"); 273 271 kfree_skb(skb); 274 - return; 272 + goto unlock; 275 273 } 276 274 277 275 bt_dev_err(hdev, "Exception info %s", (char *)(skb->data + 1)); 278 276 279 277 kfree_skb(skb); 278 + 279 + unlock: 280 + hci_req_sync_unlock(hdev); 280 281 } 281 282 EXPORT_SYMBOL_GPL(btintel_hw_error); 282 283
+4 -1
drivers/bluetooth/btusb.c
··· 2376 2376 if (data->air_mode == HCI_NOTIFY_ENABLE_SCO_CVSD) { 2377 2377 if (hdev->voice_setting & 0x0020) { 2378 2378 static const int alts[3] = { 2, 4, 5 }; 2379 + unsigned int sco_idx; 2379 2380 2380 - new_alts = alts[data->sco_num - 1]; 2381 + sco_idx = min_t(unsigned int, data->sco_num - 1, 2382 + ARRAY_SIZE(alts) - 1); 2383 + new_alts = alts[sco_idx]; 2381 2384 } else { 2382 2385 new_alts = data->sco_num; 2383 2386 }
+2
drivers/bluetooth/hci_ll.c
··· 541 541 if (err || !fw->data || !fw->size) { 542 542 bt_dev_err(lldev->hu.hdev, "request_firmware failed(errno %d) for %s", 543 543 err, bts_scr_name); 544 + if (!err) 545 + release_firmware(fw); 544 546 return -EINVAL; 545 547 } 546 548 ptr = (void *)fw->data;
+2 -2
drivers/bus/simple-pm-bus.c
··· 36 36 * that's not listed in simple_pm_bus_of_match. We don't want to do any 37 37 * of the simple-pm-bus tasks for these devices, so return early. 38 38 */ 39 - if (pdev->driver_override) 39 + if (device_has_driver_override(&pdev->dev)) 40 40 return 0; 41 41 42 42 match = of_match_device(dev->driver->of_match_table, dev); ··· 78 78 { 79 79 const void *data = of_device_get_match_data(&pdev->dev); 80 80 81 - if (pdev->driver_override || data) 81 + if (device_has_driver_override(&pdev->dev) || data) 82 82 return; 83 83 84 84 dev_dbg(&pdev->dev, "%s\n", __func__);
+1 -2
drivers/clk/imx/clk-scu.c
··· 706 706 if (ret) 707 707 goto put_device; 708 708 709 - ret = driver_set_override(&pdev->dev, &pdev->driver_override, 710 - "imx-scu-clk", strlen("imx-scu-clk")); 709 + ret = device_set_driver_override(&pdev->dev, "imx-scu-clk"); 711 710 if (ret) 712 711 goto put_device; 713 712
+1
drivers/cxl/Kconfig
··· 59 59 tristate "CXL ACPI: Platform Support" 60 60 depends on ACPI 61 61 depends on ACPI_NUMA 62 + depends on CXL_PMEM || !CXL_PMEM 62 63 default CXL_BUS 63 64 select ACPI_TABLE_LIB 64 65 select ACPI_HMAT
+9 -16
drivers/cxl/core/hdm.c
··· 94 94 struct cxl_hdm *cxlhdm; 95 95 void __iomem *hdm; 96 96 u32 ctrl; 97 - int i; 98 97 99 98 if (!info) 100 99 return false; ··· 112 113 return false; 113 114 114 115 /* 115 - * If any decoders are committed already, there should not be any 116 - * emulated DVSEC decoders. 116 + * If HDM decoders are globally enabled, do not fall back to DVSEC 117 + * range emulation. Zeroed decoder registers after region teardown 118 + * do not imply absence of HDM capability. 119 + * 120 + * Falling back to DVSEC here would treat the decoder as AUTO and 121 + * may incorrectly latch default interleave settings. 117 122 */ 118 - for (i = 0; i < cxlhdm->decoder_count; i++) { 119 - ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(i)); 120 - dev_dbg(&info->port->dev, 121 - "decoder%d.%d: committed: %ld base: %#x_%.8x size: %#x_%.8x\n", 122 - info->port->id, i, 123 - FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl), 124 - readl(hdm + CXL_HDM_DECODER0_BASE_HIGH_OFFSET(i)), 125 - readl(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(i)), 126 - readl(hdm + CXL_HDM_DECODER0_SIZE_HIGH_OFFSET(i)), 127 - readl(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(i))); 128 - if (FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl)) 129 - return false; 130 - } 123 + ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET); 124 + if (ctrl & CXL_HDM_DECODER_ENABLE) 125 + return false; 131 126 132 127 return true; 133 128 }
+1 -1
drivers/cxl/core/mbox.c
··· 1301 1301 * Require an endpoint to be safe otherwise the driver can not 1302 1302 * be sure that the device is unmapped. 1303 1303 */ 1304 - if (endpoint && cxl_num_decoders_committed(endpoint) == 0) 1304 + if (cxlmd->dev.driver && cxl_num_decoders_committed(endpoint) == 0) 1305 1305 return __cxl_mem_sanitize(mds, cmd); 1306 1306 1307 1307 return -EBUSY;
+6 -2
drivers/cxl/core/port.c
··· 552 552 xa_destroy(&port->dports); 553 553 xa_destroy(&port->regions); 554 554 ida_free(&cxl_port_ida, port->id); 555 - if (is_cxl_root(port)) 555 + 556 + if (is_cxl_root(port)) { 556 557 kfree(to_cxl_root(port)); 557 - else 558 + } else { 559 + put_device(dev->parent); 558 560 kfree(port); 561 + } 559 562 } 560 563 561 564 static ssize_t decoders_committed_show(struct device *dev, ··· 710 707 struct cxl_port *iter; 711 708 712 709 dev->parent = &parent_port->dev; 710 + get_device(dev->parent); 713 711 port->depth = parent_port->depth + 1; 714 712 port->parent_dport = parent_dport; 715 713
+3 -1
drivers/cxl/core/region.c
··· 3854 3854 } 3855 3855 3856 3856 rc = sysfs_update_group(&cxlr->dev.kobj, &cxl_region_group); 3857 - if (rc) 3857 + if (rc) { 3858 + kfree(res); 3858 3859 return rc; 3860 + } 3859 3861 3860 3862 rc = insert_resource(cxlrd->res, res); 3861 3863 if (rc) {
+1 -1
drivers/cxl/pmem.c
··· 554 554 555 555 MODULE_DESCRIPTION("CXL PMEM: Persistent Memory Support"); 556 556 MODULE_LICENSE("GPL v2"); 557 - module_init(cxl_pmem_init); 557 + subsys_initcall(cxl_pmem_init); 558 558 module_exit(cxl_pmem_exit); 559 559 MODULE_IMPORT_NS("CXL"); 560 560 MODULE_ALIAS_CXL(CXL_DEVICE_NVDIMM_BRIDGE);
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_bo_list.c
··· 36 36 37 37 #define AMDGPU_BO_LIST_MAX_PRIORITY 32u 38 38 #define AMDGPU_BO_LIST_NUM_BUCKETS (AMDGPU_BO_LIST_MAX_PRIORITY + 1) 39 + #define AMDGPU_BO_LIST_MAX_ENTRIES (128 * 1024) 39 40 40 41 static void amdgpu_bo_list_free_rcu(struct rcu_head *rcu) 41 42 { ··· 188 187 const uint32_t bo_info_size = in->bo_info_size; 189 188 const uint32_t bo_number = in->bo_number; 190 189 struct drm_amdgpu_bo_list_entry *info; 190 + 191 + if (bo_number > AMDGPU_BO_LIST_MAX_ENTRIES) 192 + return -EINVAL; 191 193 192 194 /* copy the handle array from userspace to a kernel buffer */ 193 195 if (likely(info_size == bo_info_size)) {
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1069 1069 } 1070 1070 1071 1071 /* Prepare a TLB flush fence to be attached to PTs */ 1072 - if (!params->unlocked) { 1072 + /* The check for need_tlb_fence should be dropped once we 1073 + * sort out the issues with KIQ/MES TLB invalidation timeouts. 1074 + */ 1075 + if (!params->unlocked && vm->need_tlb_fence) { 1073 1076 amdgpu_vm_tlb_fence_create(params->adev, vm, fence); 1074 1077 1075 1078 /* Makes sure no PD/PT is freed before the flush */ ··· 2605 2602 ttm_lru_bulk_move_init(&vm->lru_bulk_move); 2606 2603 2607 2604 vm->is_compute_context = false; 2605 + vm->need_tlb_fence = amdgpu_userq_enabled(&adev->ddev); 2608 2606 2609 2607 vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode & 2610 2608 AMDGPU_VM_USE_CPU_FOR_GFX); ··· 2743 2739 dma_fence_put(vm->last_update); 2744 2740 vm->last_update = dma_fence_get_stub(); 2745 2741 vm->is_compute_context = true; 2742 + vm->need_tlb_fence = true; 2746 2743 2747 2744 unreserve_bo: 2748 2745 amdgpu_bo_unreserve(vm->root.bo);
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
··· 441 441 struct ttm_lru_bulk_move lru_bulk_move; 442 442 /* Flag to indicate if VM is used for compute */ 443 443 bool is_compute_context; 444 + /* Flag to indicate if VM needs a TLB fence (KFD or KGD) */ 445 + bool need_tlb_fence; 444 446 445 447 /* Memory partition number, -1 means any partition */ 446 448 int8_t mem_id;
+14 -7
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
··· 662 662 } else { 663 663 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 664 664 case IP_VERSION(9, 0, 0): 665 - mmhub_cid = mmhub_client_ids_vega10[cid][rw]; 665 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega10) ? 666 + mmhub_client_ids_vega10[cid][rw] : NULL; 666 667 break; 667 668 case IP_VERSION(9, 3, 0): 668 - mmhub_cid = mmhub_client_ids_vega12[cid][rw]; 669 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega12) ? 670 + mmhub_client_ids_vega12[cid][rw] : NULL; 669 671 break; 670 672 case IP_VERSION(9, 4, 0): 671 - mmhub_cid = mmhub_client_ids_vega20[cid][rw]; 673 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vega20) ? 674 + mmhub_client_ids_vega20[cid][rw] : NULL; 672 675 break; 673 676 case IP_VERSION(9, 4, 1): 674 - mmhub_cid = mmhub_client_ids_arcturus[cid][rw]; 677 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_arcturus) ? 678 + mmhub_client_ids_arcturus[cid][rw] : NULL; 675 679 break; 676 680 case IP_VERSION(9, 1, 0): 677 681 case IP_VERSION(9, 2, 0): 678 - mmhub_cid = mmhub_client_ids_raven[cid][rw]; 682 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_raven) ? 683 + mmhub_client_ids_raven[cid][rw] : NULL; 679 684 break; 680 685 case IP_VERSION(1, 5, 0): 681 686 case IP_VERSION(2, 4, 0): 682 - mmhub_cid = mmhub_client_ids_renoir[cid][rw]; 687 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_renoir) ? 688 + mmhub_client_ids_renoir[cid][rw] : NULL; 683 689 break; 684 690 case IP_VERSION(1, 8, 0): 685 691 case IP_VERSION(9, 4, 2): 686 - mmhub_cid = mmhub_client_ids_aldebaran[cid][rw]; 692 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_aldebaran) ? 693 + mmhub_client_ids_aldebaran[cid][rw] : NULL; 687 694 break; 688 695 default: 689 696 mmhub_cid = NULL;
+2 -2
drivers/gpu/drm/amd/amdgpu/isp_v4_1_1.c
··· 129 129 if (!pdev) 130 130 return -EINVAL; 131 131 132 - if (!dev->type->name) { 132 + if (!dev->type || !dev->type->name) { 133 133 drm_dbg(&adev->ddev, "Invalid device type to add\n"); 134 134 goto exit; 135 135 } ··· 165 165 if (!pdev) 166 166 return -EINVAL; 167 167 168 - if (!dev->type->name) { 168 + if (!dev->type || !dev->type->name) { 169 169 drm_dbg(&adev->ddev, "Invalid device type to remove\n"); 170 170 goto exit; 171 171 }
+6 -3
drivers/gpu/drm/amd/amdgpu/mmhub_v2_0.c
··· 154 154 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 155 155 case IP_VERSION(2, 0, 0): 156 156 case IP_VERSION(2, 0, 2): 157 - mmhub_cid = mmhub_client_ids_navi1x[cid][rw]; 157 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_navi1x) ? 158 + mmhub_client_ids_navi1x[cid][rw] : NULL; 158 159 break; 159 160 case IP_VERSION(2, 1, 0): 160 161 case IP_VERSION(2, 1, 1): 161 - mmhub_cid = mmhub_client_ids_sienna_cichlid[cid][rw]; 162 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_sienna_cichlid) ? 163 + mmhub_client_ids_sienna_cichlid[cid][rw] : NULL; 162 164 break; 163 165 case IP_VERSION(2, 1, 2): 164 - mmhub_cid = mmhub_client_ids_beige_goby[cid][rw]; 166 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_beige_goby) ? 167 + mmhub_client_ids_beige_goby[cid][rw] : NULL; 165 168 break; 166 169 default: 167 170 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v2_3.c
··· 94 94 case IP_VERSION(2, 3, 0): 95 95 case IP_VERSION(2, 4, 0): 96 96 case IP_VERSION(2, 4, 1): 97 - mmhub_cid = mmhub_client_ids_vangogh[cid][rw]; 97 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_vangogh) ? 98 + mmhub_client_ids_vangogh[cid][rw] : NULL; 98 99 break; 99 100 default: 100 101 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0.c
··· 110 110 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 111 111 case IP_VERSION(3, 0, 0): 112 112 case IP_VERSION(3, 0, 1): 113 - mmhub_cid = mmhub_client_ids_v3_0_0[cid][rw]; 113 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_0) ? 114 + mmhub_client_ids_v3_0_0[cid][rw] : NULL; 114 115 break; 115 116 default: 116 117 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_1.c
··· 117 117 118 118 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 119 119 case IP_VERSION(3, 0, 1): 120 - mmhub_cid = mmhub_client_ids_v3_0_1[cid][rw]; 120 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_1) ? 121 + mmhub_client_ids_v3_0_1[cid][rw] : NULL; 121 122 break; 122 123 default: 123 124 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v3_0_2.c
··· 108 108 "MMVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n", 109 109 status); 110 110 111 - mmhub_cid = mmhub_client_ids_v3_0_2[cid][rw]; 111 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v3_0_2) ? 112 + mmhub_client_ids_v3_0_2[cid][rw] : NULL; 112 113 dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n", 113 114 mmhub_cid ? mmhub_cid : "unknown", cid); 114 115 dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
··· 102 102 status); 103 103 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 104 104 case IP_VERSION(4, 1, 0): 105 - mmhub_cid = mmhub_client_ids_v4_1_0[cid][rw]; 105 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v4_1_0) ? 106 + mmhub_client_ids_v4_1_0[cid][rw] : NULL; 106 107 break; 107 108 default: 108 109 mmhub_cid = NULL;
+2 -1
drivers/gpu/drm/amd/amdgpu/mmhub_v4_2_0.c
··· 688 688 status); 689 689 switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) { 690 690 case IP_VERSION(4, 2, 0): 691 - mmhub_cid = mmhub_client_ids_v4_2_0[cid][rw]; 691 + mmhub_cid = cid < ARRAY_SIZE(mmhub_client_ids_v4_2_0) ? 692 + mmhub_client_ids_v4_2_0[cid][rw] : NULL; 692 693 break; 693 694 default: 694 695 mmhub_cid = NULL;
+3 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2554 2554 fw_meta_info_params.fw_inst_const = adev->dm.dmub_fw->data + 2555 2555 le32_to_cpu(hdr->header.ucode_array_offset_bytes) + 2556 2556 PSP_HEADER_BYTES_256; 2557 - fw_meta_info_params.fw_bss_data = region_params.bss_data_size ? adev->dm.dmub_fw->data + 2557 + fw_meta_info_params.fw_bss_data = fw_meta_info_params.bss_data_size ? adev->dm.dmub_fw->data + 2558 2558 le32_to_cpu(hdr->header.ucode_array_offset_bytes) + 2559 2559 le32_to_cpu(hdr->inst_const_bytes) : NULL; 2560 2560 fw_meta_info_params.custom_psp_footer_size = 0; ··· 13119 13119 u16 min_vfreq; 13120 13120 u16 max_vfreq; 13121 13121 13122 - if (edid == NULL || edid->extensions == 0) 13122 + if (!edid || !edid->extensions) 13123 13123 return; 13124 13124 13125 13125 /* Find DisplayID extension */ ··· 13129 13129 break; 13130 13130 } 13131 13131 13132 - if (edid_ext == NULL) 13132 + if (i == edid->extensions) 13133 13133 return; 13134 13134 13135 13135 while (j < EDID_LENGTH) {
+3 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_colorop.c
··· 37 37 BIT(DRM_COLOROP_1D_CURVE_SRGB_EOTF) | 38 38 BIT(DRM_COLOROP_1D_CURVE_PQ_125_EOTF) | 39 39 BIT(DRM_COLOROP_1D_CURVE_BT2020_INV_OETF) | 40 - BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV); 40 + BIT(DRM_COLOROP_1D_CURVE_GAMMA22); 41 41 42 42 const u64 amdgpu_dm_supported_shaper_tfs = 43 43 BIT(DRM_COLOROP_1D_CURVE_SRGB_INV_EOTF) | 44 44 BIT(DRM_COLOROP_1D_CURVE_PQ_125_INV_EOTF) | 45 45 BIT(DRM_COLOROP_1D_CURVE_BT2020_OETF) | 46 - BIT(DRM_COLOROP_1D_CURVE_GAMMA22); 46 + BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV); 47 47 48 48 const u64 amdgpu_dm_supported_blnd_tfs = 49 49 BIT(DRM_COLOROP_1D_CURVE_SRGB_EOTF) | 50 50 BIT(DRM_COLOROP_1D_CURVE_PQ_125_EOTF) | 51 51 BIT(DRM_COLOROP_1D_CURVE_BT2020_INV_OETF) | 52 - BIT(DRM_COLOROP_1D_CURVE_GAMMA22_INV); 52 + BIT(DRM_COLOROP_1D_CURVE_GAMMA22); 53 53 54 54 #define MAX_COLOR_PIPELINE_OPS 10 55 55
+4 -4
drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
··· 255 255 BREAK_TO_DEBUGGER(); 256 256 return NULL; 257 257 } 258 + if (ctx->dce_version == DCN_VERSION_2_01) { 259 + dcn201_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg); 260 + return &clk_mgr->base; 261 + } 258 262 if (ASICREV_IS_SIENNA_CICHLID_P(asic_id.hw_internal_rev)) { 259 263 dcn3_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg); 260 264 return &clk_mgr->base; ··· 269 265 } 270 266 if (ASICREV_IS_BEIGE_GOBY_P(asic_id.hw_internal_rev)) { 271 267 dcn3_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg); 272 - return &clk_mgr->base; 273 - } 274 - if (ctx->dce_version == DCN_VERSION_2_01) { 275 - dcn201_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg); 276 268 return &clk_mgr->base; 277 269 } 278 270 dcn20_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
+3
drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
··· 1785 1785 1786 1786 dc->res_pool->funcs->calculate_wm_and_dlg(dc, context, pipes, pipe_cnt, vlevel); 1787 1787 1788 + DC_FP_START(); 1788 1789 dcn32_override_min_req_memclk(dc, context); 1790 + DC_FP_END(); 1791 + 1789 1792 dcn32_override_min_req_dcfclk(dc, context); 1790 1793 1791 1794 BW_VAL_TRACE_END_WATERMARKS();
+3 -1
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
··· 3454 3454 if (adev->asic_type == CHIP_HAINAN) { 3455 3455 if ((adev->pdev->revision == 0x81) || 3456 3456 (adev->pdev->revision == 0xC3) || 3457 + (adev->pdev->device == 0x6660) || 3457 3458 (adev->pdev->device == 0x6664) || 3458 3459 (adev->pdev->device == 0x6665) || 3459 - (adev->pdev->device == 0x6667)) { 3460 + (adev->pdev->device == 0x6667) || 3461 + (adev->pdev->device == 0x666F)) { 3460 3462 max_sclk = 75000; 3461 3463 } 3462 3464 if ((adev->pdev->revision == 0xC3) ||
+1 -1
drivers/gpu/drm/bridge/synopsys/dw-hdmi-qp.c
··· 848 848 849 849 regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS0, &header_bytes, 1); 850 850 regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS1, &buffer[3], 1); 851 - regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS2, &buffer[4], 1); 851 + regmap_bulk_write(hdmi->regm, PKT_AUDI_CONTENTS2, &buffer[7], 1); 852 852 853 853 /* Enable ACR, AUDI, AMD */ 854 854 dw_hdmi_qp_mod(hdmi,
+4 -1
drivers/gpu/drm/drm_file.c
··· 233 233 void drm_file_free(struct drm_file *file) 234 234 { 235 235 struct drm_device *dev; 236 + int idx; 236 237 237 238 if (!file) 238 239 return; ··· 250 249 251 250 drm_events_release(file); 252 251 253 - if (drm_core_check_feature(dev, DRIVER_MODESET)) { 252 + if (drm_core_check_feature(dev, DRIVER_MODESET) && 253 + drm_dev_enter(dev, &idx)) { 254 254 drm_fb_release(file); 255 255 drm_property_destroy_user_blobs(dev, file); 256 + drm_dev_exit(idx); 256 257 } 257 258 258 259 if (drm_core_check_feature(dev, DRIVER_SYNCOBJ))
+6 -3
drivers/gpu/drm/drm_mode_config.c
··· 577 577 */ 578 578 WARN_ON(!list_empty(&dev->mode_config.fb_list)); 579 579 list_for_each_entry_safe(fb, fbt, &dev->mode_config.fb_list, head) { 580 - struct drm_printer p = drm_dbg_printer(dev, DRM_UT_KMS, "[leaked fb]"); 580 + if (list_empty(&fb->filp_head) || drm_framebuffer_read_refcount(fb) > 1) { 581 + struct drm_printer p = drm_dbg_printer(dev, DRM_UT_KMS, "[leaked fb]"); 581 582 582 - drm_printf(&p, "framebuffer[%u]:\n", fb->base.id); 583 - drm_framebuffer_print_info(&p, 1, fb); 583 + drm_printf(&p, "framebuffer[%u]:\n", fb->base.id); 584 + drm_framebuffer_print_info(&p, 1, fb); 585 + } 586 + list_del_init(&fb->filp_head); 584 587 drm_framebuffer_free(&fb->base.refcount); 585 588 } 586 589
+5 -9
drivers/gpu/drm/drm_pagemap_util.c
··· 65 65 drm_dbg(cache->shrinker->drm, "Destroying dpagemap cache.\n"); 66 66 spin_lock(&cache->lock); 67 67 dpagemap = cache->dpagemap; 68 - if (!dpagemap) { 69 - spin_unlock(&cache->lock); 70 - goto out; 71 - } 68 + cache->dpagemap = NULL; 69 + if (dpagemap && !drm_pagemap_shrinker_cancel(dpagemap)) 70 + dpagemap = NULL; 71 + spin_unlock(&cache->lock); 72 72 73 - if (drm_pagemap_shrinker_cancel(dpagemap)) { 74 - cache->dpagemap = NULL; 75 - spin_unlock(&cache->lock); 73 + if (dpagemap) 76 74 drm_pagemap_destroy(dpagemap, false); 77 - } 78 75 79 - out: 80 76 mutex_destroy(&cache->lookup_mutex); 81 77 kfree(cache); 82 78 }
+1 -1
drivers/gpu/drm/i915/display/intel_display_power_well.c
··· 806 806 power_domains->dc_state, val & mask); 807 807 808 808 enable_dc6 = state & DC_STATE_EN_UPTO_DC6; 809 - dc6_was_enabled = val & DC_STATE_EN_UPTO_DC6; 809 + dc6_was_enabled = power_domains->dc_state & DC_STATE_EN_UPTO_DC6; 810 810 if (!dc6_was_enabled && enable_dc6) 811 811 intel_dmc_update_dc6_allowed_count(display, true); 812 812
+1
drivers/gpu/drm/i915/display/intel_display_types.h
··· 1186 1186 u32 dc3co_exitline; 1187 1187 u16 su_y_granularity; 1188 1188 u8 active_non_psr_pipes; 1189 + u8 entry_setup_frames; 1189 1190 const char *no_psr_reason; 1190 1191 1191 1192 /*
+1 -2
drivers/gpu/drm/i915/display/intel_dmc.c
··· 1599 1599 return false; 1600 1600 1601 1601 mutex_lock(&power_domains->lock); 1602 - dc6_enabled = intel_de_read(display, DC_STATE_EN) & 1603 - DC_STATE_EN_UPTO_DC6; 1602 + dc6_enabled = power_domains->dc_state & DC_STATE_EN_UPTO_DC6; 1604 1603 if (dc6_enabled) 1605 1604 intel_dmc_update_dc6_allowed_count(display, false); 1606 1605
+5 -2
drivers/gpu/drm/i915/display/intel_psr.c
··· 1717 1717 entry_setup_frames = intel_psr_entry_setup_frames(intel_dp, conn_state, adjusted_mode); 1718 1718 1719 1719 if (entry_setup_frames >= 0) { 1720 - intel_dp->psr.entry_setup_frames = entry_setup_frames; 1720 + crtc_state->entry_setup_frames = entry_setup_frames; 1721 1721 } else { 1722 1722 crtc_state->no_psr_reason = "PSR setup timing not met"; 1723 1723 drm_dbg_kms(display->drm, ··· 1815 1815 { 1816 1816 struct intel_display *display = to_intel_display(intel_dp); 1817 1817 1818 - return (DISPLAY_VER(display) == 20 && intel_dp->psr.entry_setup_frames > 0 && 1818 + return (DISPLAY_VER(display) == 20 && crtc_state->entry_setup_frames > 0 && 1819 1819 !crtc_state->has_sel_update); 1820 1820 } 1821 1821 ··· 2189 2189 intel_dp->psr.pkg_c_latency_used = crtc_state->pkg_c_latency_used; 2190 2190 intel_dp->psr.io_wake_lines = crtc_state->alpm_state.io_wake_lines; 2191 2191 intel_dp->psr.fast_wake_lines = crtc_state->alpm_state.fast_wake_lines; 2192 + intel_dp->psr.entry_setup_frames = crtc_state->entry_setup_frames; 2192 2193 2193 2194 if (!psr_interrupt_error_check(intel_dp)) 2194 2195 return; ··· 3110 3109 * - Display WA #1136: skl, bxt 3111 3110 */ 3112 3111 if (intel_crtc_needs_modeset(new_crtc_state) || 3112 + new_crtc_state->update_m_n || 3113 + new_crtc_state->update_lrr || 3113 3114 !new_crtc_state->has_psr || 3114 3115 !new_crtc_state->active_planes || 3115 3116 new_crtc_state->has_sel_update != psr->sel_update_enabled ||
+2 -1
drivers/gpu/drm/i915/gt/intel_engine_cs.c
··· 1967 1967 if (engine->sanitize) 1968 1968 engine->sanitize(engine); 1969 1969 1970 - engine->set_default_submission(engine); 1970 + if (engine->set_default_submission) 1971 + engine->set_default_submission(engine); 1971 1972 } 1972 1973 } 1973 1974
-17
drivers/gpu/drm/imagination/pvr_device.c
··· 225 225 } 226 226 227 227 if (pvr_dev->has_safety_events) { 228 - int err; 229 - 230 - /* 231 - * Ensure the GPU is powered on since some safety events (such 232 - * as ECC faults) can happen outside of job submissions, which 233 - * are otherwise the only time a power reference is held. 234 - */ 235 - err = pvr_power_get(pvr_dev); 236 - if (err) { 237 - drm_err_ratelimited(drm_dev, 238 - "%s: could not take power reference (%d)\n", 239 - __func__, err); 240 - return ret; 241 - } 242 - 243 228 while (pvr_device_safety_irq_pending(pvr_dev)) { 244 229 pvr_device_safety_irq_clear(pvr_dev); 245 230 pvr_device_handle_safety_events(pvr_dev); 246 231 247 232 ret = IRQ_HANDLED; 248 233 } 249 - 250 - pvr_power_put(pvr_dev); 251 234 } 252 235 253 236 return ret;
+39 -12
drivers/gpu/drm/imagination/pvr_power.c
··· 90 90 } 91 91 92 92 static int 93 - pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset) 93 + pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset, bool rpm_suspend) 94 94 { 95 - if (!hard_reset) { 96 - int err; 95 + int err; 97 96 97 + if (!hard_reset) { 98 98 cancel_delayed_work_sync(&pvr_dev->watchdog.work); 99 99 100 100 err = pvr_power_request_idle(pvr_dev); ··· 106 106 return err; 107 107 } 108 108 109 - return pvr_fw_stop(pvr_dev); 109 + if (rpm_suspend) { 110 + /* This also waits for late processing of GPU or firmware IRQs in other cores */ 111 + disable_irq(pvr_dev->irq); 112 + } 113 + 114 + err = pvr_fw_stop(pvr_dev); 115 + if (err && rpm_suspend) 116 + enable_irq(pvr_dev->irq); 117 + 118 + return err; 110 119 } 111 120 112 121 static int 113 - pvr_power_fw_enable(struct pvr_device *pvr_dev) 122 + pvr_power_fw_enable(struct pvr_device *pvr_dev, bool rpm_resume) 114 123 { 115 124 int err; 116 125 126 + if (rpm_resume) 127 + enable_irq(pvr_dev->irq); 128 + 117 129 err = pvr_fw_start(pvr_dev); 118 130 if (err) 119 - return err; 131 + goto out; 120 132 121 133 err = pvr_wait_for_fw_boot(pvr_dev); 122 134 if (err) { 123 135 drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n"); 124 136 pvr_fw_stop(pvr_dev); 125 - return err; 137 + goto out; 126 138 } 127 139 128 140 queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work, 129 141 msecs_to_jiffies(WATCHDOG_TIME_MS)); 130 142 131 143 return 0; 144 + 145 + out: 146 + if (rpm_resume) 147 + disable_irq(pvr_dev->irq); 148 + 149 + return err; 132 150 } 133 151 134 152 bool ··· 379 361 return -EIO; 380 362 381 363 if (pvr_dev->fw_dev.booted) { 382 - err = pvr_power_fw_disable(pvr_dev, false); 364 + err = pvr_power_fw_disable(pvr_dev, false, true); 383 365 if (err) 384 366 goto err_drm_dev_exit; 385 367 } ··· 409 391 goto err_drm_dev_exit; 410 392 411 393 if (pvr_dev->fw_dev.booted) { 412 - err = pvr_power_fw_enable(pvr_dev); 394 + err = pvr_power_fw_enable(pvr_dev, true); 
413 395 if (err) 414 396 goto err_power_off; 415 397 } ··· 528 510 } 529 511 530 512 /* Disable IRQs for the duration of the reset. */ 531 - disable_irq(pvr_dev->irq); 513 + if (hard_reset) { 514 + disable_irq(pvr_dev->irq); 515 + } else { 516 + /* 517 + * Soft reset is triggered as a response to a FW command to the Host and is 518 + * processed from the threaded IRQ handler. This code cannot (nor needs to) 519 + * wait for any IRQ processing to complete. 520 + */ 521 + disable_irq_nosync(pvr_dev->irq); 522 + } 532 523 533 524 do { 534 525 if (hard_reset) { ··· 545 518 queues_disabled = true; 546 519 } 547 520 548 - err = pvr_power_fw_disable(pvr_dev, hard_reset); 521 + err = pvr_power_fw_disable(pvr_dev, hard_reset, false); 549 522 if (!err) { 550 523 if (hard_reset) { 551 524 pvr_dev->fw_dev.booted = false; ··· 568 541 569 542 pvr_fw_irq_clear(pvr_dev); 570 543 571 - err = pvr_power_fw_enable(pvr_dev); 544 + err = pvr_power_fw_enable(pvr_dev, false); 572 545 } 573 546 574 547 if (err && hard_reset)
+3 -1
drivers/gpu/drm/radeon/si_dpm.c
··· 2915 2915 if (rdev->family == CHIP_HAINAN) { 2916 2916 if ((rdev->pdev->revision == 0x81) || 2917 2917 (rdev->pdev->revision == 0xC3) || 2918 + (rdev->pdev->device == 0x6660) || 2918 2919 (rdev->pdev->device == 0x6664) || 2919 2920 (rdev->pdev->device == 0x6665) || 2920 - (rdev->pdev->device == 0x6667)) { 2921 + (rdev->pdev->device == 0x6667) || 2922 + (rdev->pdev->device == 0x666F)) { 2921 2923 max_sclk = 75000; 2922 2924 } 2923 2925 if ((rdev->pdev->revision == 0xC3) ||
+57 -36
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 96 96 97 97 struct vmw_res_func; 98 98 99 + struct vmw_bo; 100 + struct vmw_bo; 101 + struct vmw_resource_dirty; 102 + 99 103 /** 100 - * struct vmw-resource - base class for hardware resources 104 + * struct vmw_resource - base class for hardware resources 101 105 * 102 106 * @kref: For refcounting. 103 107 * @dev_priv: Pointer to the device private for this resource. Immutable. 104 108 * @id: Device id. Protected by @dev_priv::resource_lock. 109 + * @used_prio: Priority for this resource. 105 110 * @guest_memory_size: Guest memory buffer size. Immutable. 106 111 * @res_dirty: Resource contains data not yet in the guest memory buffer. 107 112 * Protected by resource reserved. ··· 122 117 * pin-count greater than zero. It is not on the resource LRU lists and its 123 118 * guest memory buffer is pinned. Hence it can't be evicted. 124 119 * @func: Method vtable for this resource. Immutable. 125 - * @mob_node; Node for the MOB guest memory rbtree. Protected by 120 + * @mob_node: Node for the MOB guest memory rbtree. Protected by 126 121 * @guest_memory_bo reserved. 127 122 * @lru_head: List head for the LRU list. Protected by @dev_priv::resource_lock. 128 123 * @binding_head: List head for the context binding list. Protected by 129 124 * the @dev_priv::binding_mutex 125 + * @dirty: resource's dirty tracker 130 126 * @res_free: The resource destructor. 131 127 * @hw_destroy: Callback to destroy the resource on the device, as part of 132 128 * resource destruction. 133 129 */ 134 - struct vmw_bo; 135 - struct vmw_bo; 136 - struct vmw_resource_dirty; 137 130 struct vmw_resource { 138 131 struct kref kref; 139 132 struct vmw_private *dev_priv; ··· 199 196 * @quality_level: Quality level. 200 197 * @autogen_filter: Filter for automatically generated mipmaps. 201 198 * @array_size: Number of array elements for a 1D/2D texture. For cubemap 202 - texture number of faces * array_size. This should be 0 for pre 203 - SM4 device. 199 + * texture number of faces * array_size. 
This should be 0 for pre 200 + * SM4 device. 204 201 * @buffer_byte_stride: Buffer byte stride. 205 202 * @num_sizes: Size of @sizes. For GB surface this should always be 1. 206 203 * @base_size: Surface dimension. ··· 268 265 struct vmw_res_cache_entry { 269 266 uint32_t handle; 270 267 struct vmw_resource *res; 268 + /* private: */ 271 269 void *private; 270 + /* public: */ 272 271 unsigned short valid_handle; 273 272 unsigned short valid; 274 273 }; 275 274 276 275 /** 277 276 * enum vmw_dma_map_mode - indicate how to perform TTM page dma mappings. 277 + * @vmw_dma_alloc_coherent: Use TTM coherent pages 278 + * @vmw_dma_map_populate: Unmap from DMA just after unpopulate 279 + * @vmw_dma_map_bind: Unmap from DMA just before unbind 278 280 */ 279 281 enum vmw_dma_map_mode { 280 - vmw_dma_alloc_coherent, /* Use TTM coherent pages */ 281 - vmw_dma_map_populate, /* Unmap from DMA just after unpopulate */ 282 - vmw_dma_map_bind, /* Unmap from DMA just before unbind */ 282 + vmw_dma_alloc_coherent, 283 + vmw_dma_map_populate, 284 + vmw_dma_map_bind, 285 + /* private: */ 283 286 vmw_dma_map_max 284 287 }; 285 288 ··· 293 284 * struct vmw_sg_table - Scatter/gather table for binding, with additional 294 285 * device-specific information. 295 286 * 287 + * @mode: which page mapping mode to use 288 + * @pages: Array of page pointers to the pages. 289 + * @addrs: DMA addresses to the pages if coherent pages are used. 296 290 * @sgt: Pointer to a struct sg_table with binding information 297 - * @num_regions: Number of regions with device-address contiguous pages 291 + * @num_pages: Number of @pages 298 292 */ 299 293 struct vmw_sg_table { 300 294 enum vmw_dma_map_mode mode; ··· 365 353 * than from user-space 366 354 * @fp: If @kernel is false, points to the file of the client. 
Otherwise 367 355 * NULL 356 + * @filp: DRM state for this file 368 357 * @cmd_bounce: Command bounce buffer used for command validation before 369 358 * copying to fifo space 370 359 * @cmd_bounce_size: Current command bounce buffer size ··· 742 729 bool vmwgfx_supported(struct vmw_private *vmw); 743 730 744 731 745 - /** 732 + /* 746 733 * GMR utilities - vmwgfx_gmr.c 747 734 */ 748 735 ··· 752 739 int gmr_id); 753 740 extern void vmw_gmr_unbind(struct vmw_private *dev_priv, int gmr_id); 754 741 755 - /** 742 + /* 756 743 * User handles 757 744 */ 758 745 struct vmw_user_object { ··· 772 759 void vmw_user_object_unmap(struct vmw_user_object *uo); 773 760 bool vmw_user_object_is_mapped(struct vmw_user_object *uo); 774 761 775 - /** 762 + /* 776 763 * Resource utilities - vmwgfx_resource.c 777 764 */ 778 765 struct vmw_user_resource_conv; ··· 832 819 return !RB_EMPTY_NODE(&res->mob_node); 833 820 } 834 821 835 - /** 822 + /* 836 823 * GEM related functionality - vmwgfx_gem.c 837 824 */ 838 825 struct vmw_bo_params; ··· 846 833 struct drm_file *filp); 847 834 extern void vmw_debugfs_gem_init(struct vmw_private *vdev); 848 835 849 - /** 836 + /* 850 837 * Misc Ioctl functionality - vmwgfx_ioctl.c 851 838 */ 852 839 ··· 859 846 extern int vmw_present_readback_ioctl(struct drm_device *dev, void *data, 860 847 struct drm_file *file_priv); 861 848 862 - /** 849 + /* 863 850 * Fifo utilities - vmwgfx_fifo.c 864 851 */ 865 852 ··· 893 880 894 881 895 882 /** 896 - * vmw_fifo_caps - Returns the capabilities of the FIFO command 883 + * vmw_fifo_caps - Get the capabilities of the FIFO command 897 884 * queue or 0 if fifo memory isn't present. 
898 885 * @dev_priv: The device private context 886 + * 887 + * Returns: capabilities of the FIFO command or %0 if fifo memory not present 899 888 */ 900 889 static inline uint32_t vmw_fifo_caps(const struct vmw_private *dev_priv) 901 890 { ··· 908 893 909 894 910 895 /** 911 - * vmw_is_cursor_bypass3_enabled - Returns TRUE iff Cursor Bypass 3 912 - * is enabled in the FIFO. 896 + * vmw_is_cursor_bypass3_enabled - check Cursor Bypass 3 enabled setting 897 + * in the FIFO. 913 898 * @dev_priv: The device private context 899 + * 900 + * Returns: %true iff Cursor Bypass 3 is enabled in the FIFO 914 901 */ 915 902 static inline bool 916 903 vmw_is_cursor_bypass3_enabled(const struct vmw_private *dev_priv) ··· 920 903 return (vmw_fifo_caps(dev_priv) & SVGA_FIFO_CAP_CURSOR_BYPASS_3) != 0; 921 904 } 922 905 923 - /** 906 + /* 924 907 * TTM buffer object driver - vmwgfx_ttm_buffer.c 925 908 */ 926 909 ··· 944 927 * 945 928 * @viter: Pointer to the iterator to advance. 946 929 * 947 - * Returns false if past the list of pages, true otherwise. 930 + * Returns: false if past the list of pages, true otherwise. 948 931 */ 949 932 static inline bool vmw_piter_next(struct vmw_piter *viter) 950 933 { ··· 956 939 * 957 940 * @viter: Pointer to the iterator 958 941 * 959 - * Returns the DMA address of the page pointed to by @viter. 942 + * Returns: the DMA address of the page pointed to by @viter. 960 943 */ 961 944 static inline dma_addr_t vmw_piter_dma_addr(struct vmw_piter *viter) 962 945 { ··· 968 951 * 969 952 * @viter: Pointer to the iterator 970 953 * 971 - * Returns the DMA address of the page pointed to by @viter. 954 + * Returns: the DMA address of the page pointed to by @viter. 
972 955 */ 973 956 static inline struct page *vmw_piter_page(struct vmw_piter *viter) 974 957 { 975 958 return viter->pages[viter->i]; 976 959 } 977 960 978 - /** 961 + /* 979 962 * Command submission - vmwgfx_execbuf.c 980 963 */ 981 964 ··· 1010 993 int32_t out_fence_fd); 1011 994 bool vmw_cmd_describe(const void *buf, u32 *size, char const **cmd); 1012 995 1013 - /** 996 + /* 1014 997 * IRQs and wating - vmwgfx_irq.c 1015 998 */ 1016 999 ··· 1033 1016 bool vmw_generic_waiter_remove(struct vmw_private *dev_priv, 1034 1017 u32 flag, int *waiter_count); 1035 1018 1036 - /** 1019 + /* 1037 1020 * Kernel modesetting - vmwgfx_kms.c 1038 1021 */ 1039 1022 ··· 1065 1048 extern void vmw_resource_unpin(struct vmw_resource *res); 1066 1049 extern enum vmw_res_type vmw_res_type(const struct vmw_resource *res); 1067 1050 1068 - /** 1051 + /* 1069 1052 * Overlay control - vmwgfx_overlay.c 1070 1053 */ 1071 1054 ··· 1080 1063 int vmw_overlay_num_overlays(struct vmw_private *dev_priv); 1081 1064 int vmw_overlay_num_free_overlays(struct vmw_private *dev_priv); 1082 1065 1083 - /** 1066 + /* 1084 1067 * GMR Id manager 1085 1068 */ 1086 1069 1087 1070 int vmw_gmrid_man_init(struct vmw_private *dev_priv, int type); 1088 1071 void vmw_gmrid_man_fini(struct vmw_private *dev_priv, int type); 1089 1072 1090 - /** 1073 + /* 1091 1074 * System memory manager 1092 1075 */ 1093 1076 int vmw_sys_man_init(struct vmw_private *dev_priv); 1094 1077 void vmw_sys_man_fini(struct vmw_private *dev_priv); 1095 1078 1096 - /** 1079 + /* 1097 1080 * Prime - vmwgfx_prime.c 1098 1081 */ 1099 1082 ··· 1309 1292 * @line: The current line of the blit. 1310 1293 * @line_offset: Offset of the current line segment. 1311 1294 * @cpp: Bytes per pixel (granularity information). 1312 - * @memcpy: Which memcpy function to use. 1295 + * @do_cpy: Which memcpy function to use. 
1313 1296 */ 1314 1297 struct vmw_diff_cpy { 1315 1298 struct drm_rect rect; ··· 1397 1380 1398 1381 /** 1399 1382 * VMW_DEBUG_KMS - Debug output for kernel mode-setting 1383 + * @fmt: format string for the args 1400 1384 * 1401 1385 * This macro is for debugging vmwgfx mode-setting code. 1402 1386 */ 1403 1387 #define VMW_DEBUG_KMS(fmt, ...) \ 1404 1388 DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__) 1405 1389 1406 - /** 1390 + /* 1407 1391 * Inline helper functions 1408 1392 */ 1409 1393 ··· 1435 1417 1436 1418 /** 1437 1419 * vmw_fifo_mem_read - Perform a MMIO read from the fifo memory 1438 - * 1420 + * @vmw: The device private structure 1439 1421 * @fifo_reg: The fifo register to read from 1440 1422 * 1441 1423 * This function is intended to be equivalent to ioread32() on 1442 1424 * memremap'd memory, but without byteswapping. 1425 + * 1426 + * Returns: the value read 1443 1427 */ 1444 1428 static inline u32 vmw_fifo_mem_read(struct vmw_private *vmw, uint32 fifo_reg) 1445 1429 { ··· 1451 1431 1452 1432 /** 1453 1433 * vmw_fifo_mem_write - Perform a MMIO write to volatile memory 1454 - * 1455 - * @addr: The fifo register to write to 1434 + * @vmw: The device private structure 1435 + * @fifo_reg: The fifo register to write to 1436 + * @value: The value to write 1456 1437 * 1457 1438 * This function is intended to be equivalent to iowrite32 on 1458 1439 * memremap'd memory, but without byteswapping.
+2 -1
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 771 771 ret = vmw_bo_dirty_add(bo); 772 772 if (!ret && surface && surface->res.func->dirty_alloc) { 773 773 surface->res.coherent = true; 774 - ret = surface->res.func->dirty_alloc(&surface->res); 774 + if (surface->res.dirty == NULL) 775 + ret = surface->res.func->dirty_alloc(&surface->res); 775 776 } 776 777 ttm_bo_unreserve(&bo->tbo); 777 778 }
+4 -6
drivers/gpu/drm/xe/xe_ggtt.c
··· 313 313 { 314 314 struct xe_ggtt *ggtt = arg; 315 315 316 + scoped_guard(mutex, &ggtt->lock) 317 + ggtt->flags &= ~XE_GGTT_FLAGS_ONLINE; 316 318 drain_workqueue(ggtt->wq); 317 319 } 318 320 ··· 379 377 if (err) 380 378 return err; 381 379 380 + ggtt->flags |= XE_GGTT_FLAGS_ONLINE; 382 381 err = devm_add_action_or_reset(xe->drm.dev, dev_fini_ggtt, ggtt); 383 382 if (err) 384 383 return err; ··· 413 410 static void ggtt_node_remove(struct xe_ggtt_node *node) 414 411 { 415 412 struct xe_ggtt *ggtt = node->ggtt; 416 - struct xe_device *xe = tile_to_xe(ggtt->tile); 417 413 bool bound; 418 - int idx; 419 - 420 - bound = drm_dev_enter(&xe->drm, &idx); 421 414 422 415 mutex_lock(&ggtt->lock); 416 + bound = ggtt->flags & XE_GGTT_FLAGS_ONLINE; 423 417 if (bound) 424 418 xe_ggtt_clear(ggtt, node->base.start, node->base.size); 425 419 drm_mm_remove_node(&node->base); ··· 428 428 429 429 if (node->invalidate_on_remove) 430 430 xe_ggtt_invalidate(ggtt); 431 - 432 - drm_dev_exit(idx); 433 431 434 432 free_node: 435 433 xe_ggtt_node_fini(node);
+4 -1
drivers/gpu/drm/xe/xe_ggtt_types.h
··· 28 28 /** @size: Total usable size of this GGTT */ 29 29 u64 size; 30 30 31 - #define XE_GGTT_FLAGS_64K BIT(0) 31 + #define XE_GGTT_FLAGS_64K BIT(0) 32 + #define XE_GGTT_FLAGS_ONLINE BIT(1) 32 33 /** 33 34 * @flags: Flags for this GGTT 34 35 * Acceptable flags: 35 36 * - %XE_GGTT_FLAGS_64K - if PTE size is 64K. Otherwise, regular is 4K. 37 + * - %XE_GGTT_FLAGS_ONLINE - is GGTT online, protected by ggtt->lock 38 + * after init 36 39 */ 37 40 unsigned int flags; 38 41 /** @scratch: Internal object allocation used as a scratch page */
+2
drivers/gpu/drm/xe/xe_gt_ccs_mode.c
··· 12 12 #include "xe_gt_printk.h" 13 13 #include "xe_gt_sysfs.h" 14 14 #include "xe_mmio.h" 15 + #include "xe_pm.h" 15 16 #include "xe_sriov.h" 16 17 17 18 static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines) ··· 151 150 xe_gt_info(gt, "Setting compute mode to %d\n", num_engines); 152 151 gt->ccs_mode = num_engines; 153 152 xe_gt_record_user_engines(gt); 153 + guard(xe_pm_runtime)(xe); 154 154 xe_gt_reset(gt); 155 155 } 156 156
+27 -5
drivers/gpu/drm/xe/xe_guc.c
··· 1124 1124 struct xe_guc_pc *guc_pc = &gt->uc.guc.pc; 1125 1125 u32 before_freq, act_freq, cur_freq; 1126 1126 u32 status = 0, tries = 0; 1127 + int load_result, ret; 1127 1128 ktime_t before; 1128 1129 u64 delta_ms; 1129 - int ret; 1130 1130 1131 1131 before_freq = xe_guc_pc_get_act_freq(guc_pc); 1132 1132 before = ktime_get(); 1133 1133 1134 - ret = poll_timeout_us(ret = guc_load_done(gt, &status, &tries), ret, 1134 + ret = poll_timeout_us(load_result = guc_load_done(gt, &status, &tries), load_result, 1135 1135 10 * USEC_PER_MSEC, 1136 1136 GUC_LOAD_TIMEOUT_SEC * USEC_PER_SEC, false); 1137 1137 ··· 1139 1139 act_freq = xe_guc_pc_get_act_freq(guc_pc); 1140 1140 cur_freq = xe_guc_pc_get_cur_freq_fw(guc_pc); 1141 1141 1142 - if (ret) { 1142 + if (ret || load_result <= 0) { 1143 1143 xe_gt_err(gt, "load failed: status = 0x%08X, time = %lldms, freq = %dMHz (req %dMHz)\n", 1144 1144 status, delta_ms, xe_guc_pc_get_act_freq(guc_pc), 1145 1145 xe_guc_pc_get_cur_freq_fw(guc_pc)); ··· 1347 1347 return 0; 1348 1348 } 1349 1349 1350 - int xe_guc_suspend(struct xe_guc *guc) 1350 + /** 1351 + * xe_guc_softreset() - Soft reset GuC 1352 + * @guc: The GuC object 1353 + * 1354 + * Send soft reset command to GuC through mmio send. 1355 + * 1356 + * Return: 0 if success, otherwise error code 1357 + */ 1358 + int xe_guc_softreset(struct xe_guc *guc) 1351 1359 { 1352 - struct xe_gt *gt = guc_to_gt(guc); 1353 1360 u32 action[] = { 1354 1361 XE_GUC_ACTION_CLIENT_SOFT_RESET, 1355 1362 }; 1356 1363 int ret; 1357 1364 1365 + if (!xe_uc_fw_is_running(&guc->fw)) 1366 + return 0; 1367 + 1358 1368 ret = xe_guc_mmio_send(guc, action, ARRAY_SIZE(action)); 1369 + if (ret) 1370 + return ret; 1371 + 1372 + return 0; 1373 + } 1374 + 1375 + int xe_guc_suspend(struct xe_guc *guc) 1376 + { 1377 + struct xe_gt *gt = guc_to_gt(guc); 1378 + int ret; 1379 + 1380 + ret = xe_guc_softreset(guc); 1359 1381 if (ret) { 1360 1382 xe_gt_err(gt, "GuC suspend failed: %pe\n", ERR_PTR(ret)); 1361 1383 return ret;
+1
drivers/gpu/drm/xe/xe_guc.h
··· 44 44 void xe_guc_runtime_suspend(struct xe_guc *guc); 45 45 void xe_guc_runtime_resume(struct xe_guc *guc); 46 46 int xe_guc_suspend(struct xe_guc *guc); 47 + int xe_guc_softreset(struct xe_guc *guc); 47 48 void xe_guc_notify(struct xe_guc *guc); 48 49 int xe_guc_auth_huc(struct xe_guc *guc, u32 rsa_addr); 49 50 int xe_guc_mmio_send(struct xe_guc *guc, const u32 *request, u32 len);
+1
drivers/gpu/drm/xe/xe_guc_ct.c
··· 345 345 { 346 346 struct xe_guc_ct *ct = arg; 347 347 348 + xe_guc_ct_stop(ct); 348 349 guc_ct_change_state(ct, XE_GUC_CT_STATE_DISABLED); 349 350 } 350 351
+61 -25
drivers/gpu/drm/xe/xe_guc_submit.c
··· 48 48 49 49 #define XE_GUC_EXEC_QUEUE_CGP_CONTEXT_ERROR_LEN 6 50 50 51 + static int guc_submit_reset_prepare(struct xe_guc *guc); 52 + 51 53 static struct xe_guc * 52 54 exec_queue_to_guc(struct xe_exec_queue *q) 53 55 { ··· 241 239 EXEC_QUEUE_STATE_BANNED)); 242 240 } 243 241 244 - static void guc_submit_fini(struct drm_device *drm, void *arg) 242 + static void guc_submit_sw_fini(struct drm_device *drm, void *arg) 245 243 { 246 244 struct xe_guc *guc = arg; 247 245 struct xe_device *xe = guc_to_xe(guc); ··· 257 255 xe_gt_assert(gt, ret); 258 256 259 257 xa_destroy(&guc->submission_state.exec_queue_lookup); 258 + } 259 + 260 + static void guc_submit_fini(void *arg) 261 + { 262 + struct xe_guc *guc = arg; 263 + 264 + /* Forcefully kill any remaining exec queues */ 265 + xe_guc_ct_stop(&guc->ct); 266 + guc_submit_reset_prepare(guc); 267 + xe_guc_softreset(guc); 268 + xe_guc_submit_stop(guc); 269 + xe_uc_fw_sanitize(&guc->fw); 270 + xe_guc_submit_pause_abort(guc); 260 271 } 261 272 262 273 static void guc_submit_wedged_fini(void *arg) ··· 341 326 342 327 guc->submission_state.initialized = true; 343 328 344 - return drmm_add_action_or_reset(&xe->drm, guc_submit_fini, guc); 329 + err = drmm_add_action_or_reset(&xe->drm, guc_submit_sw_fini, guc); 330 + if (err) 331 + return err; 332 + 333 + return devm_add_action_or_reset(xe->drm.dev, guc_submit_fini, guc); 345 334 } 346 335 347 336 /* ··· 1271 1252 */ 1272 1253 void xe_guc_submit_wedge(struct xe_guc *guc) 1273 1254 { 1255 + struct xe_device *xe = guc_to_xe(guc); 1274 1256 struct xe_gt *gt = guc_to_gt(guc); 1275 1257 struct xe_exec_queue *q; 1276 1258 unsigned long index; ··· 1286 1266 if (!guc->submission_state.initialized) 1287 1267 return; 1288 1268 1289 - err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev, 1290 - guc_submit_wedged_fini, guc); 1291 - if (err) { 1292 - xe_gt_err(gt, "Failed to register clean-up in wedged.mode=%s; " 1293 - "Although device is wedged.\n", 1294 - 
xe_wedged_mode_to_string(XE_WEDGED_MODE_UPON_ANY_HANG_NO_RESET)); 1295 - return; 1296 - } 1269 + if (xe->wedged.mode == 2) { 1270 + err = devm_add_action_or_reset(guc_to_xe(guc)->drm.dev, 1271 + guc_submit_wedged_fini, guc); 1272 + if (err) { 1273 + xe_gt_err(gt, "Failed to register clean-up on wedged.mode=2; " 1274 + "Although device is wedged.\n"); 1275 + return; 1276 + } 1297 1277 1298 - mutex_lock(&guc->submission_state.lock); 1299 - xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) 1300 - if (xe_exec_queue_get_unless_zero(q)) 1301 - set_exec_queue_wedged(q); 1302 - mutex_unlock(&guc->submission_state.lock); 1278 + mutex_lock(&guc->submission_state.lock); 1279 + xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) 1280 + if (xe_exec_queue_get_unless_zero(q)) 1281 + set_exec_queue_wedged(q); 1282 + mutex_unlock(&guc->submission_state.lock); 1283 + } else { 1284 + /* Forcefully kill any remaining exec queues, signal fences */ 1285 + guc_submit_reset_prepare(guc); 1286 + xe_guc_submit_stop(guc); 1287 + xe_guc_softreset(guc); 1288 + xe_uc_fw_sanitize(&guc->fw); 1289 + xe_guc_submit_pause_abort(guc); 1290 + } 1303 1291 } 1304 1292 1305 1293 static bool guc_submit_hint_wedged(struct xe_guc *guc) ··· 2258 2230 static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q) 2259 2231 { 2260 2232 struct xe_gpu_scheduler *sched = &q->guc->sched; 2233 + bool do_destroy = false; 2261 2234 2262 2235 /* Stop scheduling + flush any DRM scheduler operations */ 2263 2236 xe_sched_submission_stop(sched); ··· 2266 2237 /* Clean up lost G2H + reset engine state */ 2267 2238 if (exec_queue_registered(q)) { 2268 2239 if (exec_queue_destroyed(q)) 2269 - __guc_exec_queue_destroy(guc, q); 2240 + do_destroy = true; 2270 2241 } 2271 2242 if (q->guc->suspend_pending) { 2272 2243 set_exec_queue_suspended(q); ··· 2302 2273 xe_guc_exec_queue_trigger_cleanup(q); 2303 2274 } 2304 2275 } 2276 + 2277 + if (do_destroy) 2278 + __guc_exec_queue_destroy(guc, q); 
2305 2279 } 2306 2280 2307 - int xe_guc_submit_reset_prepare(struct xe_guc *guc) 2281 + static int guc_submit_reset_prepare(struct xe_guc *guc) 2308 2282 { 2309 2283 int ret; 2310 - 2311 - if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc))) 2312 - return 0; 2313 - 2314 - if (!guc->submission_state.initialized) 2315 - return 0; 2316 2284 2317 2285 /* 2318 2286 * Using an atomic here rather than submission_state.lock as this ··· 2323 2297 wake_up_all(&guc->ct.wq); 2324 2298 2325 2299 return ret; 2300 + } 2301 + 2302 + int xe_guc_submit_reset_prepare(struct xe_guc *guc) 2303 + { 2304 + if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc))) 2305 + return 0; 2306 + 2307 + if (!guc->submission_state.initialized) 2308 + return 0; 2309 + 2310 + return guc_submit_reset_prepare(guc); 2326 2311 } 2327 2312 2328 2313 void xe_guc_submit_reset_wait(struct xe_guc *guc) ··· 2732 2695 continue; 2733 2696 2734 2697 xe_sched_submission_start(sched); 2735 - if (exec_queue_killed_or_banned_or_wedged(q)) 2736 - xe_guc_exec_queue_trigger_cleanup(q); 2698 + guc_exec_queue_kill(q); 2737 2699 } 2738 2700 mutex_unlock(&guc->submission_state.lock); 2739 2701 }
+2 -2
drivers/gpu/drm/xe/xe_lrc.c
··· 2413 2413 * @lrc: Pointer to the lrc. 2414 2414 * 2415 2415 * Return latest ctx timestamp. With support for active contexts, the 2416 - * calculation may bb slightly racy, so follow a read-again logic to ensure that 2416 + * calculation may be slightly racy, so follow a read-again logic to ensure that 2417 2417 * the context is still active before returning the right timestamp. 2418 2418 * 2419 2419 * Returns: New ctx timestamp value 2420 2420 */ 2421 2421 u64 xe_lrc_timestamp(struct xe_lrc *lrc) 2422 2422 { 2423 - u64 lrc_ts, reg_ts, new_ts; 2423 + u64 lrc_ts, reg_ts, new_ts = lrc->ctx_timestamp; 2424 2424 u32 engine_id; 2425 2425 2426 2426 lrc_ts = xe_lrc_ctx_timestamp(lrc);
+5 -2
drivers/gpu/drm/xe/xe_oa.c
··· 543 543 size_t offset = 0; 544 544 int ret; 545 545 546 - /* Can't read from disabled streams */ 547 - if (!stream->enabled || !stream->sample) 546 + if (!stream->sample) 548 547 return -EINVAL; 549 548 550 549 if (!(file->f_flags & O_NONBLOCK)) { ··· 1459 1460 1460 1461 if (stream->sample) 1461 1462 hrtimer_cancel(&stream->poll_check_timer); 1463 + 1464 + /* Update stream->oa_buffer.tail to allow any final reports to be read */ 1465 + if (xe_oa_buffer_check_unlocked(stream)) 1466 + wake_up(&stream->poll_wq); 1462 1467 } 1463 1468 1464 1469 static int xe_oa_enable_preempt_timeslice(struct xe_oa_stream *stream)
+29 -9
drivers/gpu/drm/xe/xe_pt.c
··· 1655 1655 XE_WARN_ON(!level); 1656 1656 /* Check for leaf node */ 1657 1657 if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) && 1658 - (!xe_child->base.children || !xe_child->base.children[first])) { 1658 + xe_child->level <= MAX_HUGEPTE_LEVEL) { 1659 1659 struct iosys_map *leaf_map = &xe_child->bo->vmap; 1660 1660 pgoff_t count = xe_pt_num_entries(addr, next, xe_child->level, walk); 1661 1661 1662 1662 for (pgoff_t i = 0; i < count; i++) { 1663 - u64 pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64); 1663 + u64 pte; 1664 1664 int ret; 1665 + 1666 + /* 1667 + * If not a leaf pt, skip unless non-leaf pt is interleaved between 1668 + * leaf ptes which causes the page walk to skip over the child leaves 1669 + */ 1670 + if (xe_child->base.children && xe_child->base.children[first + i]) { 1671 + u64 pt_size = 1ULL << walk->shifts[xe_child->level]; 1672 + bool edge_pt = (i == 0 && !IS_ALIGNED(addr, pt_size)) || 1673 + (i == count - 1 && !IS_ALIGNED(next, pt_size)); 1674 + 1675 + if (!edge_pt) { 1676 + xe_page_reclaim_list_abort(xe_walk->tile->primary_gt, 1677 + xe_walk->prl, 1678 + "PT is skipped by walk at level=%u offset=%lu", 1679 + xe_child->level, first + i); 1680 + break; 1681 + } 1682 + continue; 1683 + } 1684 + 1685 + pte = xe_map_rd(xe, leaf_map, (first + i) * sizeof(u64), u64); 1665 1686 1666 1687 /* 1667 1688 * In rare scenarios, pte may not be written yet due to racy conditions. 
··· 1695 1674 } 1696 1675 1697 1676 /* Ensure it is a defined page */ 1698 - xe_tile_assert(xe_walk->tile, 1699 - xe_child->level == 0 || 1700 - (pte & (XE_PTE_PS64 | XE_PDE_PS_2M | XE_PDPE_PS_1G))); 1677 + xe_tile_assert(xe_walk->tile, xe_child->level == 0 || 1678 + (pte & (XE_PDE_PS_2M | XE_PDPE_PS_1G))); 1701 1679 1702 1680 /* An entry should be added for 64KB but contigious 4K have XE_PTE_PS64 */ 1703 1681 if (pte & XE_PTE_PS64) ··· 1721 1701 killed = xe_pt_check_kill(addr, next, level - 1, xe_child, action, walk); 1722 1702 1723 1703 /* 1724 - * Verify PRL is active and if entry is not a leaf pte (base.children conditions), 1725 - * there is a potential need to invalidate the PRL if any PTE (num_live) are dropped. 1704 + * Verify if any PTE are potentially dropped at non-leaf levels, either from being 1705 + * killed or the page walk covers the region. 1726 1706 */ 1727 - if (xe_walk->prl && level > 1 && xe_child->num_live && 1728 - xe_child->base.children && xe_child->base.children[first]) { 1707 + if (xe_walk->prl && xe_page_reclaim_list_valid(xe_walk->prl) && 1708 + xe_child->level > MAX_HUGEPTE_LEVEL && xe_child->num_live) { 1729 1709 bool covered = xe_pt_covers(addr, next, xe_child->level, &xe_walk->base); 1730 1710 1731 1711 /*
+4 -2
drivers/hv/mshv_regions.c
··· 314 314 ret = pin_user_pages_fast(userspace_addr, nr_pages, 315 315 FOLL_WRITE | FOLL_LONGTERM, 316 316 pages); 317 - if (ret < 0) 317 + if (ret != nr_pages) 318 318 goto release_pages; 319 319 } 320 320 321 321 return 0; 322 322 323 323 release_pages: 324 + if (ret > 0) 325 + done_count += ret; 324 326 mshv_region_invalidate_pages(region, 0, done_count); 325 - return ret; 327 + return ret < 0 ? ret : -ENOMEM; 326 328 } 327 329 328 330 static int mshv_region_chunk_unmap(struct mshv_mem_region *region,
+2 -3
drivers/hv/mshv_root.h
··· 190 190 }; 191 191 192 192 struct mshv_root { 193 - struct hv_synic_pages __percpu *synic_pages; 194 193 spinlock_t pt_ht_lock; 195 194 DECLARE_HASHTABLE(pt_htable, MSHV_PARTITIONS_HASH_BITS); 196 195 struct hv_partition_property_vmm_capabilities vmm_caps; ··· 248 249 void mshv_unregister_doorbell(u64 partition_id, int doorbell_portid); 249 250 250 251 void mshv_isr(void); 251 - int mshv_synic_init(unsigned int cpu); 252 - int mshv_synic_cleanup(unsigned int cpu); 252 + int mshv_synic_init(struct device *dev); 253 + void mshv_synic_exit(void); 253 254 254 255 static inline bool mshv_partition_encrypted(struct mshv_partition *partition) 255 256 {
+22 -71
drivers/hv/mshv_root_main.c
··· 120 120 HVCALL_SET_VP_REGISTERS, 121 121 HVCALL_TRANSLATE_VIRTUAL_ADDRESS, 122 122 HVCALL_CLEAR_VIRTUAL_INTERRUPT, 123 - HVCALL_SCRUB_PARTITION, 124 123 HVCALL_REGISTER_INTERCEPT_RESULT, 125 124 HVCALL_ASSERT_VIRTUAL_INTERRUPT, 126 125 HVCALL_GET_GPA_PAGES_ACCESS_STATES, ··· 1288 1289 */ 1289 1290 static long 1290 1291 mshv_map_user_memory(struct mshv_partition *partition, 1291 - struct mshv_user_mem_region mem) 1292 + struct mshv_user_mem_region *mem) 1292 1293 { 1293 1294 struct mshv_mem_region *region; 1294 1295 struct vm_area_struct *vma; ··· 1296 1297 ulong mmio_pfn; 1297 1298 long ret; 1298 1299 1299 - if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP) || 1300 - !access_ok((const void __user *)mem.userspace_addr, mem.size)) 1300 + if (mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP) || 1301 + !access_ok((const void __user *)mem->userspace_addr, mem->size)) 1301 1302 return -EINVAL; 1302 1303 1303 1304 mmap_read_lock(current->mm); 1304 - vma = vma_lookup(current->mm, mem.userspace_addr); 1305 + vma = vma_lookup(current->mm, mem->userspace_addr); 1305 1306 is_mmio = vma ? !!(vma->vm_flags & (VM_IO | VM_PFNMAP)) : 0; 1306 1307 mmio_pfn = is_mmio ? 
vma->vm_pgoff : 0; 1307 1308 mmap_read_unlock(current->mm); ··· 1309 1310 if (!vma) 1310 1311 return -EINVAL; 1311 1312 1312 - ret = mshv_partition_create_region(partition, &mem, &region, 1313 + ret = mshv_partition_create_region(partition, mem, &region, 1313 1314 is_mmio); 1314 1315 if (ret) 1315 1316 return ret; ··· 1347 1348 return 0; 1348 1349 1349 1350 errout: 1350 - vfree(region); 1351 + mshv_region_put(region); 1351 1352 return ret; 1352 1353 } 1353 1354 1354 1355 /* Called for unmapping both the guest ram and the mmio space */ 1355 1356 static long 1356 1357 mshv_unmap_user_memory(struct mshv_partition *partition, 1357 - struct mshv_user_mem_region mem) 1358 + struct mshv_user_mem_region *mem) 1358 1359 { 1359 1360 struct mshv_mem_region *region; 1360 1361 1361 - if (!(mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP))) 1362 + if (!(mem->flags & BIT(MSHV_SET_MEM_BIT_UNMAP))) 1362 1363 return -EINVAL; 1363 1364 1364 1365 spin_lock(&partition->pt_mem_regions_lock); 1365 1366 1366 - region = mshv_partition_region_by_gfn(partition, mem.guest_pfn); 1367 + region = mshv_partition_region_by_gfn(partition, mem->guest_pfn); 1367 1368 if (!region) { 1368 1369 spin_unlock(&partition->pt_mem_regions_lock); 1369 1370 return -ENOENT; 1370 1371 } 1371 1372 1372 1373 /* Paranoia check */ 1373 - if (region->start_uaddr != mem.userspace_addr || 1374 - region->start_gfn != mem.guest_pfn || 1375 - region->nr_pages != HVPFN_DOWN(mem.size)) { 1374 + if (region->start_uaddr != mem->userspace_addr || 1375 + region->start_gfn != mem->guest_pfn || 1376 + region->nr_pages != HVPFN_DOWN(mem->size)) { 1376 1377 spin_unlock(&partition->pt_mem_regions_lock); 1377 1378 return -EINVAL; 1378 1379 } ··· 1403 1404 return -EINVAL; 1404 1405 1405 1406 if (mem.flags & BIT(MSHV_SET_MEM_BIT_UNMAP)) 1406 - return mshv_unmap_user_memory(partition, mem); 1407 + return mshv_unmap_user_memory(partition, &mem); 1407 1408 1408 - return mshv_map_user_memory(partition, mem); 1409 + return 
mshv_map_user_memory(partition, &mem); 1409 1410 } 1410 1411 1411 1412 static long ··· 2063 2064 return 0; 2064 2065 } 2065 2066 2066 - static int mshv_cpuhp_online; 2067 2067 static int mshv_root_sched_online; 2068 2068 2069 2069 static const char *scheduler_type_to_string(enum hv_scheduler_type type) ··· 2247 2249 free_percpu(root_scheduler_output); 2248 2250 } 2249 2251 2250 - static int mshv_reboot_notify(struct notifier_block *nb, 2251 - unsigned long code, void *unused) 2252 - { 2253 - cpuhp_remove_state(mshv_cpuhp_online); 2254 - return 0; 2255 - } 2256 - 2257 - struct notifier_block mshv_reboot_nb = { 2258 - .notifier_call = mshv_reboot_notify, 2259 - }; 2260 - 2261 - static void mshv_root_partition_exit(void) 2262 - { 2263 - unregister_reboot_notifier(&mshv_reboot_nb); 2264 - } 2265 - 2266 - static int __init mshv_root_partition_init(struct device *dev) 2267 - { 2268 - return register_reboot_notifier(&mshv_reboot_nb); 2269 - } 2270 - 2271 2252 static int __init mshv_init_vmm_caps(struct device *dev) 2272 2253 { 2273 2254 int ret; ··· 2291 2314 MSHV_HV_MAX_VERSION); 2292 2315 } 2293 2316 2294 - mshv_root.synic_pages = alloc_percpu(struct hv_synic_pages); 2295 - if (!mshv_root.synic_pages) { 2296 - dev_err(dev, "Failed to allocate percpu synic page\n"); 2297 - ret = -ENOMEM; 2317 + ret = mshv_synic_init(dev); 2318 + if (ret) 2298 2319 goto device_deregister; 2299 - } 2300 - 2301 - ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mshv_synic", 2302 - mshv_synic_init, 2303 - mshv_synic_cleanup); 2304 - if (ret < 0) { 2305 - dev_err(dev, "Failed to setup cpu hotplug state: %i\n", ret); 2306 - goto free_synic_pages; 2307 - } 2308 - 2309 - mshv_cpuhp_online = ret; 2310 2320 2311 2321 ret = mshv_init_vmm_caps(dev); 2312 2322 if (ret) 2313 - goto remove_cpu_state; 2323 + goto synic_cleanup; 2314 2324 2315 2325 ret = mshv_retrieve_scheduler_type(dev); 2316 2326 if (ret) 2317 - goto remove_cpu_state; 2318 - 2319 - if (hv_root_partition()) 2320 - ret = 
mshv_root_partition_init(dev); 2321 - if (ret) 2322 - goto remove_cpu_state; 2327 + goto synic_cleanup; 2323 2328 2324 2329 ret = root_scheduler_init(dev); 2325 2330 if (ret) 2326 - goto exit_partition; 2331 + goto synic_cleanup; 2327 2332 2328 2333 ret = mshv_debugfs_init(); 2329 2334 if (ret) ··· 2326 2367 mshv_debugfs_exit(); 2327 2368 deinit_root_scheduler: 2328 2369 root_scheduler_deinit(); 2329 - exit_partition: 2330 - if (hv_root_partition()) 2331 - mshv_root_partition_exit(); 2332 - remove_cpu_state: 2333 - cpuhp_remove_state(mshv_cpuhp_online); 2334 - free_synic_pages: 2335 - free_percpu(mshv_root.synic_pages); 2370 + synic_cleanup: 2371 + mshv_synic_exit(); 2336 2372 device_deregister: 2337 2373 misc_deregister(&mshv_dev); 2338 2374 return ret; ··· 2341 2387 misc_deregister(&mshv_dev); 2342 2388 mshv_irqfd_wq_cleanup(); 2343 2389 root_scheduler_deinit(); 2344 - if (hv_root_partition()) 2345 - mshv_root_partition_exit(); 2346 - cpuhp_remove_state(mshv_cpuhp_online); 2347 - free_percpu(mshv_root.synic_pages); 2390 + mshv_synic_exit(); 2348 2391 } 2349 2392 2350 2393 module_init(mshv_parent_partition_init);
+173 -15
drivers/hv/mshv_synic.c
··· 10 10 #include <linux/kernel.h> 11 11 #include <linux/slab.h> 12 12 #include <linux/mm.h> 13 + #include <linux/interrupt.h> 13 14 #include <linux/io.h> 14 15 #include <linux/random.h> 16 + #include <linux/cpuhotplug.h> 17 + #include <linux/reboot.h> 15 18 #include <asm/mshyperv.h> 19 + #include <linux/acpi.h> 16 20 17 21 #include "mshv_eventfd.h" 18 22 #include "mshv.h" 23 + 24 + static int synic_cpuhp_online; 25 + static struct hv_synic_pages __percpu *synic_pages; 26 + static int mshv_sint_vector = -1; /* hwirq for the SynIC SINTs */ 27 + static int mshv_sint_irq = -1; /* Linux IRQ for mshv_sint_vector */ 19 28 20 29 static u32 synic_event_ring_get_queued_port(u32 sint_index) 21 30 { ··· 35 26 u32 message; 36 27 u8 tail; 37 28 38 - spages = this_cpu_ptr(mshv_root.synic_pages); 29 + spages = this_cpu_ptr(synic_pages); 39 30 event_ring_page = &spages->synic_event_ring_page; 40 31 synic_eventring_tail = (u8 **)this_cpu_ptr(hv_synic_eventring_tail); 41 32 ··· 402 393 403 394 void mshv_isr(void) 404 395 { 405 - struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages); 396 + struct hv_synic_pages *spages = this_cpu_ptr(synic_pages); 406 397 struct hv_message_page **msg_page = &spages->hyp_synic_message_page; 407 398 struct hv_message *msg; 408 399 bool handled; ··· 446 437 if (msg->header.message_flags.msg_pending) 447 438 hv_set_non_nested_msr(HV_MSR_EOM, 0); 448 439 449 - #ifdef HYPERVISOR_CALLBACK_VECTOR 450 - add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR); 451 - #endif 440 + add_interrupt_randomness(mshv_sint_vector); 452 441 } else { 453 442 pr_warn_once("%s: unknown message type 0x%x\n", __func__, 454 443 msg->header.message_type); 455 444 } 456 445 } 457 446 458 - int mshv_synic_init(unsigned int cpu) 447 + static int mshv_synic_cpu_init(unsigned int cpu) 459 448 { 460 449 union hv_synic_simp simp; 461 450 union hv_synic_siefp siefp; 462 451 union hv_synic_sirbp sirbp; 463 - #ifdef HYPERVISOR_CALLBACK_VECTOR 464 452 union hv_synic_sint sint; 
465 - #endif 466 453 union hv_synic_scontrol sctrl; 467 - struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages); 454 + struct hv_synic_pages *spages = this_cpu_ptr(synic_pages); 468 455 struct hv_message_page **msg_page = &spages->hyp_synic_message_page; 469 456 struct hv_synic_event_flags_page **event_flags_page = 470 457 &spages->synic_event_flags_page; ··· 501 496 502 497 hv_set_non_nested_msr(HV_MSR_SIRBP, sirbp.as_uint64); 503 498 504 - #ifdef HYPERVISOR_CALLBACK_VECTOR 499 + if (mshv_sint_irq != -1) 500 + enable_percpu_irq(mshv_sint_irq, 0); 501 + 505 502 /* Enable intercepts */ 506 503 sint.as_uint64 = 0; 507 - sint.vector = HYPERVISOR_CALLBACK_VECTOR; 504 + sint.vector = mshv_sint_vector; 508 505 sint.masked = false; 509 506 sint.auto_eoi = hv_recommend_using_aeoi(); 510 507 hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_INTERCEPTION_SINT_INDEX, ··· 514 507 515 508 /* Doorbell SINT */ 516 509 sint.as_uint64 = 0; 517 - sint.vector = HYPERVISOR_CALLBACK_VECTOR; 510 + sint.vector = mshv_sint_vector; 518 511 sint.masked = false; 519 512 sint.as_intercept = 1; 520 513 sint.auto_eoi = hv_recommend_using_aeoi(); 521 514 hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_DOORBELL_SINT_INDEX, 522 515 sint.as_uint64); 523 - #endif 524 516 525 517 /* Enable global synic bit */ 526 518 sctrl.as_uint64 = hv_get_non_nested_msr(HV_MSR_SCONTROL); ··· 548 542 return -EFAULT; 549 543 } 550 544 551 - int mshv_synic_cleanup(unsigned int cpu) 545 + static int mshv_synic_cpu_exit(unsigned int cpu) 552 546 { 553 547 union hv_synic_sint sint; 554 548 union hv_synic_simp simp; 555 549 union hv_synic_siefp siefp; 556 550 union hv_synic_sirbp sirbp; 557 551 union hv_synic_scontrol sctrl; 558 - struct hv_synic_pages *spages = this_cpu_ptr(mshv_root.synic_pages); 552 + struct hv_synic_pages *spages = this_cpu_ptr(synic_pages); 559 553 struct hv_message_page **msg_page = &spages->hyp_synic_message_page; 560 554 struct hv_synic_event_flags_page **event_flags_page = 561 555 
&spages->synic_event_flags_page; ··· 573 567 sint.masked = true; 574 568 hv_set_non_nested_msr(HV_MSR_SINT0 + HV_SYNIC_DOORBELL_SINT_INDEX, 575 569 sint.as_uint64); 570 + 571 + if (mshv_sint_irq != -1) 572 + disable_percpu_irq(mshv_sint_irq); 576 573 577 574 /* Disable Synic's event ring page */ 578 575 sirbp.as_uint64 = hv_get_non_nested_msr(HV_MSR_SIRBP); ··· 671 662 hv_call_delete_port(hv_current_partition_id, port_id); 672 663 673 664 mshv_portid_free(doorbell_portid); 665 + } 666 + 667 + static int mshv_synic_reboot_notify(struct notifier_block *nb, 668 + unsigned long code, void *unused) 669 + { 670 + if (!hv_root_partition()) 671 + return 0; 672 + 673 + cpuhp_remove_state(synic_cpuhp_online); 674 + return 0; 675 + } 676 + 677 + static struct notifier_block mshv_synic_reboot_nb = { 678 + .notifier_call = mshv_synic_reboot_notify, 679 + }; 680 + 681 + #ifndef HYPERVISOR_CALLBACK_VECTOR 682 + static DEFINE_PER_CPU(long, mshv_evt); 683 + 684 + static irqreturn_t mshv_percpu_isr(int irq, void *dev_id) 685 + { 686 + mshv_isr(); 687 + return IRQ_HANDLED; 688 + } 689 + 690 + #ifdef CONFIG_ACPI 691 + static int __init mshv_acpi_setup_sint_irq(void) 692 + { 693 + return acpi_register_gsi(NULL, mshv_sint_vector, ACPI_EDGE_SENSITIVE, 694 + ACPI_ACTIVE_HIGH); 695 + } 696 + 697 + static void mshv_acpi_cleanup_sint_irq(void) 698 + { 699 + acpi_unregister_gsi(mshv_sint_vector); 700 + } 701 + #else 702 + static int __init mshv_acpi_setup_sint_irq(void) 703 + { 704 + return -ENODEV; 705 + } 706 + 707 + static void mshv_acpi_cleanup_sint_irq(void) 708 + { 709 + } 710 + #endif 711 + 712 + static int __init mshv_sint_vector_setup(void) 713 + { 714 + int ret; 715 + struct hv_register_assoc reg = { 716 + .name = HV_ARM64_REGISTER_SINT_RESERVED_INTERRUPT_ID, 717 + }; 718 + union hv_input_vtl input_vtl = { 0 }; 719 + 720 + if (acpi_disabled) 721 + return -ENODEV; 722 + 723 + ret = hv_call_get_vp_registers(HV_VP_INDEX_SELF, HV_PARTITION_ID_SELF, 724 + 1, input_vtl, &reg); 725 + if 
(ret || !reg.value.reg64) 726 + return -ENODEV; 727 + 728 + mshv_sint_vector = reg.value.reg64; 729 + ret = mshv_acpi_setup_sint_irq(); 730 + if (ret < 0) { 731 + pr_err("Failed to setup IRQ for MSHV SINT vector %d: %d\n", 732 + mshv_sint_vector, ret); 733 + goto out_fail; 734 + } 735 + 736 + mshv_sint_irq = ret; 737 + 738 + ret = request_percpu_irq(mshv_sint_irq, mshv_percpu_isr, "MSHV", 739 + &mshv_evt); 740 + if (ret) 741 + goto out_unregister; 742 + 743 + return 0; 744 + 745 + out_unregister: 746 + mshv_acpi_cleanup_sint_irq(); 747 + out_fail: 748 + return ret; 749 + } 750 + 751 + static void mshv_sint_vector_cleanup(void) 752 + { 753 + free_percpu_irq(mshv_sint_irq, &mshv_evt); 754 + mshv_acpi_cleanup_sint_irq(); 755 + } 756 + #else /* !HYPERVISOR_CALLBACK_VECTOR */ 757 + static int __init mshv_sint_vector_setup(void) 758 + { 759 + mshv_sint_vector = HYPERVISOR_CALLBACK_VECTOR; 760 + return 0; 761 + } 762 + 763 + static void mshv_sint_vector_cleanup(void) 764 + { 765 + } 766 + #endif /* HYPERVISOR_CALLBACK_VECTOR */ 767 + 768 + int __init mshv_synic_init(struct device *dev) 769 + { 770 + int ret = 0; 771 + 772 + ret = mshv_sint_vector_setup(); 773 + if (ret) 774 + return ret; 775 + 776 + synic_pages = alloc_percpu(struct hv_synic_pages); 777 + if (!synic_pages) { 778 + dev_err(dev, "Failed to allocate percpu synic page\n"); 779 + ret = -ENOMEM; 780 + goto sint_vector_cleanup; 781 + } 782 + 783 + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mshv_synic", 784 + mshv_synic_cpu_init, 785 + mshv_synic_cpu_exit); 786 + if (ret < 0) { 787 + dev_err(dev, "Failed to setup cpu hotplug state: %i\n", ret); 788 + goto free_synic_pages; 789 + } 790 + 791 + synic_cpuhp_online = ret; 792 + 793 + ret = register_reboot_notifier(&mshv_synic_reboot_nb); 794 + if (ret) 795 + goto remove_cpuhp_state; 796 + 797 + return 0; 798 + 799 + remove_cpuhp_state: 800 + cpuhp_remove_state(synic_cpuhp_online); 801 + free_synic_pages: 802 + free_percpu(synic_pages); 803 + sint_vector_cleanup: 
804 + mshv_sint_vector_cleanup(); 805 + return ret; 806 + } 807 + 808 + void mshv_synic_exit(void) 809 + { 810 + unregister_reboot_notifier(&mshv_synic_reboot_nb); 811 + cpuhp_remove_state(synic_cpuhp_online); 812 + free_percpu(synic_pages); 813 + mshv_sint_vector_cleanup(); 674 814 }
+1 -1
drivers/hwmon/axi-fan-control.c
··· 507 507 ret = devm_request_threaded_irq(&pdev->dev, ctl->irq, NULL, 508 508 axi_fan_control_irq_handler, 509 509 IRQF_ONESHOT | IRQF_TRIGGER_HIGH, 510 - pdev->driver_override, ctl); 510 + NULL, ctl); 511 511 if (ret) 512 512 return dev_err_probe(&pdev->dev, ret, 513 513 "failed to request an irq\n");
+5 -5
drivers/hwmon/max6639.c
··· 232 232 static int max6639_set_ppr(struct max6639_data *data, int channel, u8 ppr) 233 233 { 234 234 /* Decrement the PPR value and shift left by 6 to match the register format */ 235 - return regmap_write(data->regmap, MAX6639_REG_FAN_PPR(channel), ppr-- << 6); 235 + return regmap_write(data->regmap, MAX6639_REG_FAN_PPR(channel), --ppr << 6); 236 236 } 237 237 238 238 static int max6639_write_fan(struct device *dev, u32 attr, int channel, ··· 524 524 525 525 { 526 526 struct device *dev = &client->dev; 527 - u32 i; 528 - int err, val; 527 + u32 i, val; 528 + int err; 529 529 530 530 err = of_property_read_u32(child, "reg", &i); 531 531 if (err) { ··· 540 540 541 541 err = of_property_read_u32(child, "pulses-per-revolution", &val); 542 542 if (!err) { 543 - if (val < 1 || val > 5) { 544 - dev_err(dev, "invalid pulses-per-revolution %d of %pOFn\n", val, child); 543 + if (val < 1 || val > 4) { 544 + dev_err(dev, "invalid pulses-per-revolution %u of %pOFn\n", val, child); 545 545 return -EINVAL; 546 546 } 547 547 data->ppr[i] = val;
+2
drivers/hwmon/pmbus/hac300s.c
··· 58 58 case PMBUS_MFR_VOUT_MIN: 59 59 case PMBUS_READ_VOUT: 60 60 rv = pmbus_read_word_data(client, page, phase, reg); 61 + if (rv < 0) 62 + return rv; 61 63 return FIELD_GET(LINEAR11_MANTISSA_MASK, rv); 62 64 default: 63 65 return -ENODATA;
+2
drivers/hwmon/pmbus/ina233.c
··· 67 67 switch (reg) { 68 68 case PMBUS_VIRT_READ_VMON: 69 69 ret = pmbus_read_word_data(client, 0, 0xff, MFR_READ_VSHUNT); 70 + if (ret < 0) 71 + return ret; 70 72 71 73 /* Adjust returned value to match VIN coefficients */ 72 74 /* VIN: 1.25 mV VSHUNT: 2.5 uV LSB */
+5 -2
drivers/hwmon/pmbus/isl68137.c
··· 98 98 { 99 99 int val = pmbus_read_byte_data(client, page, PMBUS_OPERATION); 100 100 101 - return sprintf(buf, "%d\n", 102 - (val & ISL68137_VOUT_AVS) == ISL68137_VOUT_AVS ? 1 : 0); 101 + if (val < 0) 102 + return val; 103 + 104 + return sysfs_emit(buf, "%d\n", 105 + (val & ISL68137_VOUT_AVS) == ISL68137_VOUT_AVS); 103 106 } 104 107 105 108 static ssize_t isl68137_avs_enable_store_page(struct i2c_client *client,
+21 -14
drivers/hwmon/pmbus/mp2869.c
··· 165 165 { 166 166 const struct pmbus_driver_info *info = pmbus_get_driver_info(client); 167 167 struct mp2869_data *data = to_mp2869_data(info); 168 - int ret; 168 + int ret, mfr; 169 169 170 170 switch (reg) { 171 171 case PMBUS_VOUT_MODE: ··· 188 188 if (ret < 0) 189 189 return ret; 190 190 191 + mfr = pmbus_read_byte_data(client, page, 192 + PMBUS_STATUS_MFR_SPECIFIC); 193 + if (mfr < 0) 194 + return mfr; 195 + 191 196 ret = (ret & ~GENMASK(2, 2)) | 192 197 FIELD_PREP(GENMASK(2, 2), 193 - FIELD_GET(GENMASK(1, 1), 194 - pmbus_read_byte_data(client, page, 195 - PMBUS_STATUS_MFR_SPECIFIC))); 198 + FIELD_GET(GENMASK(1, 1), mfr)); 196 199 break; 197 200 case PMBUS_STATUS_TEMPERATURE: 198 201 /* ··· 210 207 if (ret < 0) 211 208 return ret; 212 209 210 + mfr = pmbus_read_byte_data(client, page, 211 + PMBUS_STATUS_MFR_SPECIFIC); 212 + if (mfr < 0) 213 + return mfr; 214 + 213 215 ret = (ret & ~GENMASK(7, 6)) | 214 216 FIELD_PREP(GENMASK(6, 6), 215 - FIELD_GET(GENMASK(1, 1), 216 - pmbus_read_byte_data(client, page, 217 - PMBUS_STATUS_MFR_SPECIFIC))) | 217 + FIELD_GET(GENMASK(1, 1), mfr)) | 218 218 FIELD_PREP(GENMASK(7, 7), 219 - FIELD_GET(GENMASK(1, 1), 220 - pmbus_read_byte_data(client, page, 221 - PMBUS_STATUS_MFR_SPECIFIC))); 219 + FIELD_GET(GENMASK(1, 1), mfr)); 222 220 break; 223 221 default: 224 222 ret = -ENODATA; ··· 234 230 { 235 231 const struct pmbus_driver_info *info = pmbus_get_driver_info(client); 236 232 struct mp2869_data *data = to_mp2869_data(info); 237 - int ret; 233 + int ret, mfr; 238 234 239 235 switch (reg) { 240 236 case PMBUS_STATUS_WORD: ··· 250 246 if (ret < 0) 251 247 return ret; 252 248 249 + mfr = pmbus_read_byte_data(client, page, 250 + PMBUS_STATUS_MFR_SPECIFIC); 251 + if (mfr < 0) 252 + return mfr; 253 + 253 254 ret = (ret & ~GENMASK(2, 2)) | 254 255 FIELD_PREP(GENMASK(2, 2), 255 - FIELD_GET(GENMASK(1, 1), 256 - pmbus_read_byte_data(client, page, 257 - PMBUS_STATUS_MFR_SPECIFIC))); 256 + FIELD_GET(GENMASK(1, 1), mfr)); 258 257 break; 
259 258 case PMBUS_READ_VIN: 260 259 /*
+2
drivers/hwmon/pmbus/mp2975.c
··· 313 313 case PMBUS_STATUS_WORD: 314 314 /* MP2973 & MP2971 return PGOOD instead of PB_STATUS_POWER_GOOD_N. */ 315 315 ret = pmbus_read_word_data(client, page, phase, reg); 316 + if (ret < 0) 317 + return ret; 316 318 ret ^= PB_STATUS_POWER_GOOD_N; 317 319 break; 318 320 case PMBUS_OT_FAULT_LIMIT:
+2
drivers/i2c/busses/Kconfig
··· 1213 1213 tristate "NVIDIA Tegra internal I2C controller" 1214 1214 depends on ARCH_TEGRA || (COMPILE_TEST && (ARC || ARM || ARM64 || M68K || RISCV || SUPERH || SPARC)) 1215 1215 # COMPILE_TEST needs architectures with readsX()/writesX() primitives 1216 + depends on PINCTRL 1217 + # ARCH_TEGRA implies PINCTRL, but the COMPILE_TEST side doesn't. 1216 1218 help 1217 1219 If you say yes to this option, support will be included for the 1218 1220 I2C controller embedded in NVIDIA Tegra SOCs
+3
drivers/i2c/busses/i2c-cp2615.c
··· 298 298 if (!adap) 299 299 return -ENOMEM; 300 300 301 + if (!usbdev->serial) 302 + return -EINVAL; 303 + 301 304 strscpy(adap->name, usbdev->serial, sizeof(adap->name)); 302 305 adap->owner = THIS_MODULE; 303 306 adap->dev.parent = &usbif->dev;
+1
drivers/i2c/busses/i2c-fsi.c
··· 729 729 rc = i2c_add_adapter(&port->adapter); 730 730 if (rc < 0) { 731 731 dev_err(dev, "Failed to register adapter: %d\n", rc); 732 + of_node_put(np); 732 733 kfree(port); 733 734 continue; 734 735 }
+16 -1
drivers/i2c/busses/i2c-pxa.c
··· 268 268 struct pinctrl *pinctrl; 269 269 struct pinctrl_state *pinctrl_default; 270 270 struct pinctrl_state *pinctrl_recovery; 271 + bool reset_before_xfer; 271 272 }; 272 273 273 274 #define _IBMR(i2c) ((i2c)->reg_ibmr) ··· 1145 1144 { 1146 1145 struct pxa_i2c *i2c = adap->algo_data; 1147 1146 1147 + if (i2c->reset_before_xfer) { 1148 + i2c_pxa_reset(i2c); 1149 + i2c->reset_before_xfer = false; 1150 + } 1151 + 1148 1152 return i2c_pxa_internal_xfer(i2c, msgs, num, i2c_pxa_do_xfer); 1149 1153 } 1150 1154 ··· 1527 1521 } 1528 1522 } 1529 1523 1530 - i2c_pxa_reset(i2c); 1524 + /* 1525 + * Skip reset on Armada 3700 when recovery is used to avoid 1526 + * controller hang due to the pinctrl state changes done by 1527 + * the generic recovery initialization code. The reset will 1528 + * be performed later, prior to the first transfer. 1529 + */ 1530 + if (i2c_type == REGS_A3700 && i2c->adap.bus_recovery_info) 1531 + i2c->reset_before_xfer = true; 1532 + else 1533 + i2c_pxa_reset(i2c); 1531 1534 1532 1535 ret = i2c_add_numbered_adapter(&i2c->adap); 1533 1536 if (ret < 0)
+4 -1
drivers/i2c/busses/i2c-tegra.c
··· 2047 2047 * 2048 2048 * VI I2C device shouldn't be marked as IRQ-safe because VI I2C won't 2049 2049 * be used for atomic transfers. ACPI device is not IRQ safe also. 2050 + * 2051 + * Devices with pinctrl states cannot be marked IRQ-safe as the pinctrl 2052 + * state transitions during runtime PM require mutexes. 2050 2053 */ 2051 - if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev)) 2054 + if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev) && !i2c_dev->dev->pins) 2052 2055 pm_runtime_irq_safe(i2c_dev->dev); 2053 2056 2054 2057 pm_runtime_enable(i2c_dev->dev);
+3 -2
drivers/infiniband/core/umem.c
··· 55 55 56 56 if (dirty) 57 57 ib_dma_unmap_sgtable_attrs(dev, &umem->sgt_append.sgt, 58 - DMA_BIDIRECTIONAL, 0); 58 + DMA_BIDIRECTIONAL, 59 + DMA_ATTR_REQUIRE_COHERENT); 59 60 60 61 for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i) { 61 62 unpin_user_page_range_dirty_lock(sg_page(sg), ··· 170 169 unsigned long lock_limit; 171 170 unsigned long new_pinned; 172 171 unsigned long cur_base; 173 - unsigned long dma_attr = 0; 172 + unsigned long dma_attr = DMA_ATTR_REQUIRE_COHERENT; 174 173 struct mm_struct *mm; 175 174 unsigned long npages; 176 175 int pinned, ret;
+14 -1
drivers/iommu/amd/iommu.c
··· 2909 2909 2910 2910 static struct protection_domain identity_domain; 2911 2911 2912 + static int amd_iommu_identity_attach(struct iommu_domain *dom, struct device *dev, 2913 + struct iommu_domain *old) 2914 + { 2915 + /* 2916 + * Don't allow attaching a device to the identity domain if SNP is 2917 + * enabled. 2918 + */ 2919 + if (amd_iommu_snp_en) 2920 + return -EINVAL; 2921 + 2922 + return amd_iommu_attach_device(dom, dev, old); 2923 + } 2924 + 2912 2925 static const struct iommu_domain_ops identity_domain_ops = { 2913 - .attach_dev = amd_iommu_attach_device, 2926 + .attach_dev = amd_iommu_identity_attach, 2914 2927 }; 2915 2928 2916 2929 void amd_iommu_init_identity_domain(void)
+17 -4
drivers/iommu/dma-iommu.c
··· 1211 1211 */ 1212 1212 if (dev_use_swiotlb(dev, size, dir) && 1213 1213 iova_unaligned(iovad, phys, size)) { 1214 - if (attrs & DMA_ATTR_MMIO) 1214 + if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)) 1215 1215 return DMA_MAPPING_ERROR; 1216 1216 1217 1217 phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs); ··· 1223 1223 arch_sync_dma_for_device(phys, size, dir); 1224 1224 1225 1225 iova = __iommu_dma_map(dev, phys, size, prot, dma_mask); 1226 - if (iova == DMA_MAPPING_ERROR && !(attrs & DMA_ATTR_MMIO)) 1226 + if (iova == DMA_MAPPING_ERROR && 1227 + !(attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))) 1227 1228 swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs); 1228 1229 return iova; 1229 1230 } ··· 1234 1233 { 1235 1234 phys_addr_t phys; 1236 1235 1237 - if (attrs & DMA_ATTR_MMIO) { 1236 + if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)) { 1238 1237 __iommu_dma_unmap(dev, dma_handle, size); 1239 1238 return; 1240 1239 } ··· 1946 1945 if (WARN_ON_ONCE(iova_start_pad && offset > 0)) 1947 1946 return -EIO; 1948 1947 1948 + /* 1949 + * DMA_IOVA_USE_SWIOTLB is set on the state once any 1950 + * entry has taken the SWIOTLB path, which should have 1951 + * been prevented for DMA_ATTR_REQUIRE_COHERENT. 1952 + */ 1953 + if (WARN_ON_ONCE((state->__size & DMA_IOVA_USE_SWIOTLB) && 1954 + (attrs & DMA_ATTR_REQUIRE_COHERENT))) 1955 + return -EOPNOTSUPP; 1956 + 1957 + if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT)) 1958 + return -EOPNOTSUPP; 1959 + 1949 1960 if (dev_use_swiotlb(dev, size, dir) && 1950 1961 iova_unaligned(iovad, phys, size)) { 1951 - if (attrs & DMA_ATTR_MMIO) 1962 + if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)) 1952 1963 return -EPERM; 1953 1964 1954 1965 return iommu_dma_iova_link_swiotlb(dev, state, phys, offset,
+1 -2
drivers/iommu/intel/dmar.c
··· 1314 1314 if (fault & DMA_FSTS_ITE) { 1315 1315 head = readl(iommu->reg + DMAR_IQH_REG); 1316 1316 head = ((head >> shift) - 1 + QI_LENGTH) % QI_LENGTH; 1317 - head |= 1; 1318 1317 tail = readl(iommu->reg + DMAR_IQT_REG); 1319 1318 tail = ((tail >> shift) - 1 + QI_LENGTH) % QI_LENGTH; 1320 1319 ··· 1330 1331 do { 1331 1332 if (qi->desc_status[head] == QI_IN_USE) 1332 1333 qi->desc_status[head] = QI_ABORT; 1333 - head = (head - 2 + QI_LENGTH) % QI_LENGTH; 1334 + head = (head - 1 + QI_LENGTH) % QI_LENGTH; 1334 1335 } while (head != tail); 1335 1336 1336 1337 /*
+8 -4
drivers/iommu/intel/svm.c
··· 164 164 if (IS_ERR(dev_pasid)) 165 165 return PTR_ERR(dev_pasid); 166 166 167 - ret = iopf_for_domain_replace(domain, old, dev); 168 - if (ret) 169 - goto out_remove_dev_pasid; 167 + /* SVA with non-IOMMU/PRI IOPF handling is allowed. */ 168 + if (info->pri_supported) { 169 + ret = iopf_for_domain_replace(domain, old, dev); 170 + if (ret) 171 + goto out_remove_dev_pasid; 172 + } 170 173 171 174 /* Setup the pasid table: */ 172 175 sflags = cpu_feature_enabled(X86_FEATURE_LA57) ? PASID_FLAG_FL5LP : 0; ··· 184 181 185 182 return 0; 186 183 out_unwind_iopf: 187 - iopf_for_domain_replace(old, domain, dev); 184 + if (info->pri_supported) 185 + iopf_for_domain_replace(old, domain, dev); 188 186 out_remove_dev_pasid: 189 187 domain_remove_dev_pasid(domain, dev, pasid); 190 188 return ret;
+6 -6
drivers/iommu/iommu-sva.c
··· 182 182 iommu_detach_device_pasid(domain, dev, iommu_mm->pasid); 183 183 if (--domain->users == 0) { 184 184 list_del(&domain->next); 185 - iommu_domain_free(domain); 186 - } 185 + if (list_empty(&iommu_mm->sva_domains)) { 186 + list_del(&iommu_mm->mm_list_elm); 187 + if (list_empty(&iommu_sva_mms)) 188 + iommu_sva_present = false; 189 + } 187 190 188 - if (list_empty(&iommu_mm->sva_domains)) { 189 - list_del(&iommu_mm->mm_list_elm); 190 - if (list_empty(&iommu_sva_mms)) 191 - iommu_sva_present = false; 191 + iommu_domain_free(domain); 192 192 } 193 193 194 194 mutex_unlock(&iommu_sva_lock);
+5 -1
drivers/iommu/iommu.c
··· 1213 1213 if (addr == end) 1214 1214 goto map_end; 1215 1215 1216 - phys_addr = iommu_iova_to_phys(domain, addr); 1216 + /* 1217 + * The address returned by iommu_iova_to_phys() for 1218 + * IOVA 0 is ambiguous, so query address 1 instead 1219 + * when addr is 0. 1220 + */ 1221 + phys_addr = iommu_iova_to_phys(domain, addr ? addr : 1); 1217 1222 if (!phys_addr) { 1218 1223 map_size += pg_size; 1219 1224 continue;
+1
drivers/irqchip/irq-riscv-rpmi-sysmsi.c
··· 250 250 rc = riscv_acpi_get_gsi_info(fwnode, &priv->gsi_base, &id, 251 251 &nr_irqs, NULL); 252 252 if (rc) { 253 + mbox_free_channel(priv->chan); 253 254 dev_err(dev, "failed to find GSI mapping\n"); 254 255 return rc; 255 256 }
+5
drivers/media/mc/mc-request.c
··· 192 192 struct media_device *mdev = req->mdev; 193 193 unsigned long flags; 194 194 195 + mutex_lock(&mdev->req_queue_mutex); 196 + 195 197 spin_lock_irqsave(&req->lock, flags); 196 198 if (req->state != MEDIA_REQUEST_STATE_IDLE && 197 199 req->state != MEDIA_REQUEST_STATE_COMPLETE) { ··· 201 199 "request: %s not in idle or complete state, cannot reinit\n", 202 200 req->debug_str); 203 201 spin_unlock_irqrestore(&req->lock, flags); 202 + mutex_unlock(&mdev->req_queue_mutex); 204 203 return -EBUSY; 205 204 } 206 205 if (req->access_count) { ··· 209 206 "request: %s is being accessed, cannot reinit\n", 210 207 req->debug_str); 211 208 spin_unlock_irqrestore(&req->lock, flags); 209 + mutex_unlock(&mdev->req_queue_mutex); 212 210 return -EBUSY; 213 211 } 214 212 req->state = MEDIA_REQUEST_STATE_CLEANING; ··· 220 216 spin_lock_irqsave(&req->lock, flags); 221 217 req->state = MEDIA_REQUEST_STATE_IDLE; 222 218 spin_unlock_irqrestore(&req->lock, flags); 219 + mutex_unlock(&mdev->req_queue_mutex); 223 220 224 221 return 0; 225 222 }
+4
drivers/media/platform/rockchip/rkvdec/rkvdec-hevc-common.c
··· 500 500 ctrl = v4l2_ctrl_find(&ctx->ctrl_hdl, 501 501 V4L2_CID_STATELESS_HEVC_EXT_SPS_ST_RPS); 502 502 run->ext_sps_st_rps = ctrl ? ctrl->p_cur.p : NULL; 503 + } else { 504 + run->ext_sps_st_rps = NULL; 503 505 } 504 506 if (ctx->has_sps_lt_rps) { 505 507 ctrl = v4l2_ctrl_find(&ctx->ctrl_hdl, 506 508 V4L2_CID_STATELESS_HEVC_EXT_SPS_LT_RPS); 507 509 run->ext_sps_lt_rps = ctrl ? ctrl->p_cur.p : NULL; 510 + } else { 511 + run->ext_sps_lt_rps = NULL; 508 512 } 509 513 510 514 rkvdec_run_preamble(ctx, &run->base);
+27 -23
drivers/media/platform/rockchip/rkvdec/rkvdec-vdpu383-h264.c
··· 130 130 struct vdpu383_regs_h26x regs; 131 131 }; 132 132 133 - static void set_field_order_cnt(struct rkvdec_pps *pps, const struct v4l2_h264_dpb_entry *dpb) 133 + static noinline_for_stack void set_field_order_cnt(struct rkvdec_pps *pps, const struct v4l2_h264_dpb_entry *dpb) 134 134 { 135 135 pps->top_field_order_cnt0 = dpb[0].top_field_order_cnt; 136 136 pps->bot_field_order_cnt0 = dpb[0].bottom_field_order_cnt; ··· 166 166 pps->bot_field_order_cnt15 = dpb[15].bottom_field_order_cnt; 167 167 } 168 168 169 + static noinline_for_stack void set_dec_params(struct rkvdec_pps *pps, const struct v4l2_ctrl_h264_decode_params *dec_params) 170 + { 171 + const struct v4l2_h264_dpb_entry *dpb = dec_params->dpb; 172 + 173 + for (int i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) { 174 + if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM) 175 + pps->is_longterm |= (1 << i); 176 + pps->ref_field_flags |= 177 + (!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_FIELD)) << i; 178 + pps->ref_colmv_use_flag |= 179 + (!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)) << i; 180 + pps->ref_topfield_used |= 181 + (!!(dpb[i].fields & V4L2_H264_TOP_FIELD_REF)) << i; 182 + pps->ref_botfield_used |= 183 + (!!(dpb[i].fields & V4L2_H264_BOTTOM_FIELD_REF)) << i; 184 + } 185 + pps->pic_field_flag = 186 + !!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC); 187 + pps->pic_associated_flag = 188 + !!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD); 189 + 190 + pps->cur_top_field = dec_params->top_field_order_cnt; 191 + pps->cur_bot_field = dec_params->bottom_field_order_cnt; 192 + } 193 + 169 194 static void assemble_hw_pps(struct rkvdec_ctx *ctx, 170 195 struct rkvdec_h264_run *run) 171 196 { ··· 202 177 struct rkvdec_h264_priv_tbl *priv_tbl = h264_ctx->priv_tbl.cpu; 203 178 struct rkvdec_sps_pps *hw_ps; 204 179 u32 pic_width, pic_height; 205 - u32 i; 206 180 207 181 /* 208 182 * HW read the SPS/PPS information from PPS packet index by PPS id. 
··· 285 261 !!(pps->flags & V4L2_H264_PPS_FLAG_SCALING_MATRIX_PRESENT); 286 262 287 263 set_field_order_cnt(&hw_ps->pps, dpb); 264 + set_dec_params(&hw_ps->pps, dec_params); 288 265 289 - for (i = 0; i < ARRAY_SIZE(dec_params->dpb); i++) { 290 - if (dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM) 291 - hw_ps->pps.is_longterm |= (1 << i); 292 - 293 - hw_ps->pps.ref_field_flags |= 294 - (!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_FIELD)) << i; 295 - hw_ps->pps.ref_colmv_use_flag |= 296 - (!!(dpb[i].flags & V4L2_H264_DPB_ENTRY_FLAG_ACTIVE)) << i; 297 - hw_ps->pps.ref_topfield_used |= 298 - (!!(dpb[i].fields & V4L2_H264_TOP_FIELD_REF)) << i; 299 - hw_ps->pps.ref_botfield_used |= 300 - (!!(dpb[i].fields & V4L2_H264_BOTTOM_FIELD_REF)) << i; 301 - } 302 - 303 - hw_ps->pps.pic_field_flag = 304 - !!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC); 305 - hw_ps->pps.pic_associated_flag = 306 - !!(dec_params->flags & V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD); 307 - 308 - hw_ps->pps.cur_top_field = dec_params->top_field_order_cnt; 309 - hw_ps->pps.cur_bot_field = dec_params->bottom_field_order_cnt; 310 266 } 311 267 312 268 static void rkvdec_write_regs(struct rkvdec_ctx *ctx)
+2 -1
drivers/media/platform/rockchip/rkvdec/rkvdec-vp9.c
··· 893 893 update_ctx_last_info(vp9_ctx); 894 894 } 895 895 896 - static void rkvdec_init_v4l2_vp9_count_tbl(struct rkvdec_ctx *ctx) 896 + static noinline_for_stack void 897 + rkvdec_init_v4l2_vp9_count_tbl(struct rkvdec_ctx *ctx) 897 898 { 898 899 struct rkvdec_vp9_ctx *vp9_ctx = ctx->priv; 899 900 struct rkvdec_vp9_intra_frame_symbol_counts *intra_cnts = vp9_ctx->count_tbl.cpu;
+1
drivers/media/platform/synopsys/Kconfig
··· 7 7 depends on VIDEO_DEV 8 8 depends on V4L_PLATFORM_DRIVERS 9 9 depends on PM && COMMON_CLK 10 + select GENERIC_PHY_MIPI_DPHY 10 11 select MEDIA_CONTROLLER 11 12 select V4L2_FWNODE 12 13 select VIDEO_V4L2_SUBDEV_API
+1 -1
drivers/media/platform/synopsys/dw-mipi-csi2rx.c
··· 301 301 302 302 return 0; 303 303 case DW_MIPI_CSI2RX_PAD_SINK: 304 - if (code->index > csi2->formats_num) 304 + if (code->index >= csi2->formats_num) 305 305 return -EINVAL; 306 306 307 307 code->code = csi2->formats[code->index].code;
+1 -1
drivers/media/platform/verisilicon/imx8m_vpu_hw.c
··· 343 343 .num_regs = ARRAY_SIZE(imx8mq_reg_names) 344 344 }; 345 345 346 - static const struct of_device_id imx8mq_vpu_shared_resources[] __initconst = { 346 + static const struct of_device_id imx8mq_vpu_shared_resources[] = { 347 347 { .compatible = "nxp,imx8mq-vpu-g1", }, 348 348 { .compatible = "nxp,imx8mq-vpu-g2", }, 349 349 { /* sentinel */ }
+3 -2
drivers/media/v4l2-core/v4l2-ioctl.c
··· 3082 3082 } 3083 3083 3084 3084 /* 3085 - * We need to serialize streamon/off with queueing new requests. 3085 + * We need to serialize streamon/off/reqbufs with queueing new requests. 3086 3086 * These ioctls may trigger the cancellation of a streaming 3087 3087 * operation, and that should not be mixed with queueing a new 3088 3088 * request at the same time. 3089 3089 */ 3090 3090 if (v4l2_device_supports_requests(vfd->v4l2_dev) && 3091 - (cmd == VIDIOC_STREAMON || cmd == VIDIOC_STREAMOFF)) { 3091 + (cmd == VIDIOC_STREAMON || cmd == VIDIOC_STREAMOFF || 3092 + cmd == VIDIOC_REQBUFS)) { 3092 3093 req_queue_lock = &vfd->v4l2_dev->mdev->req_queue_mutex; 3093 3094 3094 3095 if (mutex_lock_interruptible(req_queue_lock))
+9
drivers/mmc/host/sdhci-pci-gli.c
··· 68 68 #define GLI_9750_MISC_TX1_DLY_VALUE 0x5 69 69 #define SDHCI_GLI_9750_MISC_SSC_OFF BIT(26) 70 70 71 + #define SDHCI_GLI_9750_GM_BURST_SIZE 0x510 72 + #define SDHCI_GLI_9750_GM_BURST_SIZE_R_OSRC_LMT GENMASK(17, 16) 73 + 71 74 #define SDHCI_GLI_9750_TUNING_CONTROL 0x540 72 75 #define SDHCI_GLI_9750_TUNING_CONTROL_EN BIT(4) 73 76 #define GLI_9750_TUNING_CONTROL_EN_ON 0x1 ··· 348 345 u32 misc_value; 349 346 u32 parameter_value; 350 347 u32 control_value; 348 + u32 burst_value; 351 349 u16 ctrl2; 352 350 353 351 gl9750_wt_on(host); 352 + 353 + /* clear R_OSRC_Lmt to avoid DMA write corruption */ 354 + burst_value = sdhci_readl(host, SDHCI_GLI_9750_GM_BURST_SIZE); 355 + burst_value &= ~SDHCI_GLI_9750_GM_BURST_SIZE_R_OSRC_LMT; 356 + sdhci_writel(host, burst_value, SDHCI_GLI_9750_GM_BURST_SIZE); 354 357 355 358 driving_value = sdhci_readl(host, SDHCI_GLI_9750_DRIVING); 356 359 pll_value = sdhci_readl(host, SDHCI_GLI_9750_PLL);
+8 -1
drivers/mmc/host/sdhci.c
··· 4532 4532 * their platform code before calling sdhci_add_host(), and we 4533 4533 * won't assume 8-bit width for hosts without that CAP. 4534 4534 */ 4535 - if (!(host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA)) 4535 + if (host->quirks & SDHCI_QUIRK_FORCE_1_BIT_DATA) { 4536 + host->caps1 &= ~(SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50); 4537 + if (host->quirks2 & SDHCI_QUIRK2_CAPS_BIT63_FOR_HS400) 4538 + host->caps1 &= ~SDHCI_SUPPORT_HS400; 4539 + mmc->caps2 &= ~(MMC_CAP2_HS200 | MMC_CAP2_HS400 | MMC_CAP2_HS400_ES); 4540 + mmc->caps &= ~(MMC_CAP_DDR | MMC_CAP_UHS); 4541 + } else { 4536 4542 mmc->caps |= MMC_CAP_4_BIT_DATA; 4543 + } 4537 4544 4538 4545 if (host->quirks2 & SDHCI_QUIRK2_HOST_NO_CMD23) 4539 4546 mmc->caps &= ~MMC_CAP_CMD23;
+2 -4
drivers/mtd/nand/raw/brcmnand/brcmnand.c
··· 2350 2350 for (i = 0; i < ctrl->max_oob; i += 4) 2351 2351 oob_reg_write(ctrl, i, 0xffffffff); 2352 2352 2353 - if (mtd->oops_panic_write) 2353 + if (mtd->oops_panic_write) { 2354 2354 /* switch to interrupt polling and PIO mode */ 2355 2355 disable_ctrl_irqs(ctrl); 2356 - 2357 - if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) { 2356 + } else if (use_dma(ctrl) && (has_edu(ctrl) || !oob) && flash_dma_buf_ok(buf)) { 2358 2357 if (ctrl->dma_trans(host, addr, (u32 *)buf, oob, mtd->writesize, 2359 2358 CMD_PROGRAM_PAGE)) 2360 - 2361 2359 ret = -EIO; 2362 2360 2363 2361 goto out;
+1 -1
drivers/mtd/nand/raw/cadence-nand-controller.c
··· 3133 3133 sizeof(*cdns_ctrl->cdma_desc), 3134 3134 &cdns_ctrl->dma_cdma_desc, 3135 3135 GFP_KERNEL); 3136 - if (!cdns_ctrl->dma_cdma_desc) 3136 + if (!cdns_ctrl->cdma_desc) 3137 3137 return -ENOMEM; 3138 3138 3139 3139 cdns_ctrl->buf_size = SZ_16K;
+12 -2
drivers/mtd/nand/raw/nand_base.c
··· 4737 4737 static int nand_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) 4738 4738 { 4739 4739 struct nand_chip *chip = mtd_to_nand(mtd); 4740 + int ret; 4740 4741 4741 4742 if (!chip->ops.lock_area) 4742 4743 return -ENOTSUPP; 4743 4744 4744 - return chip->ops.lock_area(chip, ofs, len); 4745 + nand_get_device(chip); 4746 + ret = chip->ops.lock_area(chip, ofs, len); 4747 + nand_release_device(chip); 4748 + 4749 + return ret; 4745 4750 } 4746 4751 4747 4752 /** ··· 4758 4753 static int nand_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len) 4759 4754 { 4760 4755 struct nand_chip *chip = mtd_to_nand(mtd); 4756 + int ret; 4761 4757 4762 4758 if (!chip->ops.unlock_area) 4763 4759 return -ENOTSUPP; 4764 4760 4765 - return chip->ops.unlock_area(chip, ofs, len); 4761 + nand_get_device(chip); 4762 + ret = chip->ops.unlock_area(chip, ofs, len); 4763 + nand_release_device(chip); 4764 + 4765 + return ret; 4766 4766 } 4767 4767 4768 4768 /* Set default functions */
+3
drivers/mtd/nand/raw/pl35x-nand-controller.c
··· 862 862 PL35X_SMC_NAND_TAR_CYCLES(tmgs.t_ar) | 863 863 PL35X_SMC_NAND_TRR_CYCLES(tmgs.t_rr); 864 864 865 + writel(plnand->timings, nfc->conf_regs + PL35X_SMC_CYCLES); 866 + pl35x_smc_update_regs(nfc); 867 + 865 868 return 0; 866 869 } 867 870
+3 -3
drivers/mtd/parsers/redboot.c
··· 270 270 271 271 strcpy(names, fl->img->name); 272 272 #ifdef CONFIG_MTD_REDBOOT_PARTS_READONLY 273 - if (!memcmp(names, "RedBoot", 8) || 274 - !memcmp(names, "RedBoot config", 15) || 275 - !memcmp(names, "FIS directory", 14)) { 273 + if (!strcmp(names, "RedBoot") || 274 + !strcmp(names, "RedBoot config") || 275 + !strcmp(names, "FIS directory")) { 276 276 parts[i].mask_flags = MTD_WRITEABLE; 277 277 } 278 278 #endif
+7 -7
drivers/mtd/spi-nor/core.c
··· 2345 2345 } 2346 2346 2347 2347 /** 2348 - * spi_nor_spimem_check_op - check if the operation is supported 2349 - * by controller 2348 + * spi_nor_spimem_check_read_pp_op - check if a read or a page program operation is 2349 + * supported by controller 2350 2350 *@nor: pointer to a 'struct spi_nor' 2351 2351 *@op: pointer to op template to be checked 2352 2352 * 2353 2353 * Returns 0 if operation is supported, -EOPNOTSUPP otherwise. 2354 2354 */ 2355 - static int spi_nor_spimem_check_op(struct spi_nor *nor, 2356 - struct spi_mem_op *op) 2355 + static int spi_nor_spimem_check_read_pp_op(struct spi_nor *nor, 2356 + struct spi_mem_op *op) 2357 2357 { 2358 2358 /* 2359 2359 * First test with 4 address bytes. The opcode itself might ··· 2396 2396 if (spi_nor_protocol_is_dtr(nor->read_proto)) 2397 2397 op.dummy.nbytes *= 2; 2398 2398 2399 - return spi_nor_spimem_check_op(nor, &op); 2399 + return spi_nor_spimem_check_read_pp_op(nor, &op); 2400 2400 } 2401 2401 2402 2402 /** ··· 2414 2414 2415 2415 spi_nor_spimem_setup_op(nor, &op, pp->proto); 2416 2416 2417 - return spi_nor_spimem_check_op(nor, &op); 2417 + return spi_nor_spimem_check_read_pp_op(nor, &op); 2418 2418 } 2419 2419 2420 2420 /** ··· 2466 2466 2467 2467 spi_nor_spimem_setup_op(nor, &op, nor->reg_proto); 2468 2468 2469 - if (spi_nor_spimem_check_op(nor, &op)) 2469 + if (!spi_mem_supports_op(nor->spimem, &op)) 2470 2470 nor->flags |= SNOR_F_NO_READ_CR; 2471 2471 } 2472 2472 }
+3 -1
drivers/net/can/dev/netlink.c
··· 601 601 /* We need synchronization with dev->stop() */ 602 602 ASSERT_RTNL(); 603 603 604 - can_ctrlmode_changelink(dev, data, extack); 604 + err = can_ctrlmode_changelink(dev, data, extack); 605 + if (err) 606 + return err; 605 607 606 608 if (data[IFLA_CAN_BITTIMING]) { 607 609 struct can_bittiming bt;
+24 -5
drivers/net/can/spi/mcp251x.c
··· 1225 1225 } 1226 1226 1227 1227 mutex_lock(&priv->mcp_lock); 1228 - mcp251x_power_enable(priv->transceiver, 1); 1228 + ret = mcp251x_power_enable(priv->transceiver, 1); 1229 + if (ret) { 1230 + dev_err(&spi->dev, "failed to enable transceiver power: %pe\n", ERR_PTR(ret)); 1231 + goto out_close_candev; 1232 + } 1229 1233 1230 1234 priv->force_quit = 0; 1231 1235 priv->tx_skb = NULL; ··· 1276 1272 mcp251x_hw_sleep(spi); 1277 1273 out_close: 1278 1274 mcp251x_power_enable(priv->transceiver, 0); 1275 + out_close_candev: 1279 1276 close_candev(net); 1280 1277 mutex_unlock(&priv->mcp_lock); 1281 1278 if (release_irq) ··· 1521 1516 { 1522 1517 struct spi_device *spi = to_spi_device(dev); 1523 1518 struct mcp251x_priv *priv = spi_get_drvdata(spi); 1519 + int ret = 0; 1524 1520 1525 - if (priv->after_suspend & AFTER_SUSPEND_POWER) 1526 - mcp251x_power_enable(priv->power, 1); 1527 - if (priv->after_suspend & AFTER_SUSPEND_UP) 1528 - mcp251x_power_enable(priv->transceiver, 1); 1521 + if (priv->after_suspend & AFTER_SUSPEND_POWER) { 1522 + ret = mcp251x_power_enable(priv->power, 1); 1523 + if (ret) { 1524 + dev_err(dev, "failed to restore power: %pe\n", ERR_PTR(ret)); 1525 + return ret; 1526 + } 1527 + } 1528 + 1529 + if (priv->after_suspend & AFTER_SUSPEND_UP) { 1530 + ret = mcp251x_power_enable(priv->transceiver, 1); 1531 + if (ret) { 1532 + dev_err(dev, "failed to restore transceiver power: %pe\n", ERR_PTR(ret)); 1533 + if (priv->after_suspend & AFTER_SUSPEND_POWER) 1534 + mcp251x_power_enable(priv->power, 0); 1535 + return ret; 1536 + } 1537 + } 1529 1538 1530 1539 if (priv->after_suspend & (AFTER_SUSPEND_POWER | AFTER_SUSPEND_UP)) 1531 1540 queue_work(priv->wq, &priv->restart_work);
+2
drivers/net/ethernet/airoha/airoha_ppe.c
··· 248 248 if (!dev) 249 249 return -ENODEV; 250 250 251 + rcu_read_lock(); 251 252 err = dev_fill_forward_path(dev, addr, &stack); 253 + rcu_read_unlock(); 252 254 if (err) 253 255 return err; 254 256
+1 -1
drivers/net/ethernet/broadcom/Kconfig
··· 25 25 select SSB 26 26 select MII 27 27 select PHYLIB 28 - select FIXED_PHY if BCM47XX 28 + select FIXED_PHY 29 29 help 30 30 If you have a network (Ethernet) controller of this type, say Y 31 31 or M here.
+23 -18
drivers/net/ethernet/broadcom/asp2/bcmasp.c
··· 1152 1152 } 1153 1153 } 1154 1154 1155 - static void bcmasp_wol_irq_destroy(struct bcmasp_priv *priv) 1156 - { 1157 - if (priv->wol_irq > 0) 1158 - free_irq(priv->wol_irq, priv); 1159 - } 1160 - 1161 1155 static void bcmasp_eee_fixup(struct bcmasp_intf *intf, bool en) 1162 1156 { 1163 1157 u32 reg, phy_lpi_overwrite; ··· 1249 1255 if (priv->irq <= 0) 1250 1256 return -EINVAL; 1251 1257 1252 - priv->clk = devm_clk_get_optional_enabled(dev, "sw_asp"); 1258 + priv->clk = devm_clk_get_optional(dev, "sw_asp"); 1253 1259 if (IS_ERR(priv->clk)) 1254 1260 return dev_err_probe(dev, PTR_ERR(priv->clk), 1255 1261 "failed to request clock\n"); ··· 1277 1283 1278 1284 bcmasp_set_pdata(priv, pdata); 1279 1285 1286 + ret = clk_prepare_enable(priv->clk); 1287 + if (ret) 1288 + return dev_err_probe(dev, ret, "failed to start clock\n"); 1289 + 1280 1290 /* Enable all clocks to ensure successful probing */ 1281 1291 bcmasp_core_clock_set(priv, ASP_CTRL_CLOCK_CTRL_ASP_ALL_DISABLE, 0); 1282 1292 ··· 1292 1294 1293 1295 ret = devm_request_irq(&pdev->dev, priv->irq, bcmasp_isr, 0, 1294 1296 pdev->name, priv); 1295 - if (ret) 1296 - return dev_err_probe(dev, ret, "failed to request ASP interrupt: %d", ret); 1297 + if (ret) { 1298 + dev_err(dev, "Failed to request ASP interrupt: %d", ret); 1299 + goto err_clock_disable; 1300 + } 1297 1301 1298 1302 /* Register mdio child nodes */ 1299 1303 of_platform_populate(dev->of_node, bcmasp_mdio_of_match, NULL, dev); ··· 1307 1307 1308 1308 priv->mda_filters = devm_kcalloc(dev, priv->num_mda_filters, 1309 1309 sizeof(*priv->mda_filters), GFP_KERNEL); 1310 - if (!priv->mda_filters) 1311 - return -ENOMEM; 1310 + if (!priv->mda_filters) { 1311 + ret = -ENOMEM; 1312 + goto err_clock_disable; 1313 + } 1312 1314 1313 1315 priv->net_filters = devm_kcalloc(dev, priv->num_net_filters, 1314 1316 sizeof(*priv->net_filters), GFP_KERNEL); 1315 - if (!priv->net_filters) 1316 - return -ENOMEM; 1317 + if (!priv->net_filters) { 1318 + ret = -ENOMEM; 1319 + goto err_clock_disable; 1320 + } 1317 1321 1318 1322 bcmasp_core_init_filters(priv); 1319 1323 ··· 1326 1322 ports_node = of_find_node_by_name(dev->of_node, "ethernet-ports"); 1327 1323 if (!ports_node) { 1328 1324 dev_warn(dev, "No ports found\n"); 1329 - return -EINVAL; 1325 + ret = -EINVAL; 1326 + goto err_clock_disable; 1330 1327 } 1331 1328 1332 1329 i = 0; ··· 1349 1344 */ 1350 1345 bcmasp_core_clock_set(priv, 0, ASP_CTRL_CLOCK_CTRL_ASP_ALL_DISABLE); 1351 1346 1352 - clk_disable_unprepare(priv->clk); 1353 - 1354 1347 /* Now do the registration of the network ports which will take care 1355 1348 * of managing the clock properly. 1356 1349 */ ··· 1361 1358 count++; 1362 1359 } 1363 1360 1361 + clk_disable_unprepare(priv->clk); 1362 + 1364 1363 dev_info(dev, "Initialized %d port(s)\n", count); 1365 1364 1366 1365 return ret; 1367 1366 1368 1367 err_cleanup: 1369 - bcmasp_wol_irq_destroy(priv); 1370 1368 bcmasp_remove_intfs(priv); 1369 + err_clock_disable: 1370 + clk_disable_unprepare(priv->clk); 1371 1371 1372 1372 return ret; 1373 1373 } ··· 1382 1376 if (!priv) 1383 1377 return; 1384 1378 1385 - bcmasp_wol_irq_destroy(priv); 1386 1379 bcmasp_remove_intfs(priv); 1387 1380 }
+25 -16
drivers/net/ethernet/cadence/macb_main.c
··· 1210 1210 } 1211 1211 1212 1212 if (tx_skb->skb) { 1213 - napi_consume_skb(tx_skb->skb, budget); 1213 + dev_consume_skb_any(tx_skb->skb); 1214 1214 tx_skb->skb = NULL; 1215 1215 } 1216 1216 } ··· 3369 3369 spin_lock_irq(&bp->stats_lock); 3370 3370 gem_update_stats(bp); 3371 3371 memcpy(data, &bp->ethtool_stats, sizeof(u64) 3372 - * (GEM_STATS_LEN + QUEUE_STATS_LEN * MACB_MAX_QUEUES)); 3372 + * (GEM_STATS_LEN + QUEUE_STATS_LEN * bp->num_queues)); 3373 3373 spin_unlock_irq(&bp->stats_lock); 3374 3374 } ··· 5923 5923 struct macb_queue *queue; 5924 5924 struct in_device *idev; 5925 5925 unsigned long flags; 5926 + u32 tmp, ifa_local; 5926 5927 unsigned int q; 5927 5928 int err; 5928 - u32 tmp; 5929 5929 5930 5930 if (!device_may_wakeup(&bp->dev->dev)) 5931 5931 phy_exit(bp->phy); ··· 5934 5934 return 0; 5935 5935 5936 5936 if (bp->wol & MACB_WOL_ENABLED) { 5937 - /* Check for IP address in WOL ARP mode */ 5938 - idev = __in_dev_get_rcu(bp->dev); 5939 - if (idev) 5940 - ifa = rcu_dereference(idev->ifa_list); 5941 - if ((bp->wolopts & WAKE_ARP) && !ifa) { 5942 - netdev_err(netdev, "IP address not assigned as required by WoL walk ARP\n"); 5943 - return -EOPNOTSUPP; 5937 + if (bp->wolopts & WAKE_ARP) { 5938 + /* Check for IP address in WOL ARP mode */ 5939 + rcu_read_lock(); 5940 + idev = __in_dev_get_rcu(bp->dev); 5941 + if (idev) 5942 + ifa = rcu_dereference(idev->ifa_list); 5943 + if (!ifa) { 5944 + rcu_read_unlock(); 5945 + netdev_err(netdev, "IP address not assigned as required by WoL walk ARP\n"); 5946 + return -EOPNOTSUPP; 5947 + } 5948 + ifa_local = be32_to_cpu(ifa->ifa_local); 5949 + rcu_read_unlock(); 5944 5950 } 5951 + 5945 5952 spin_lock_irqsave(&bp->lock, flags); 5946 5953 5947 5954 /* Disable Tx and Rx engines before disabling the queues, ··· 5987 5980 if (bp->wolopts & WAKE_ARP) { 5988 5981 tmp |= MACB_BIT(ARP); 5989 5982 /* write IP address into register */ 5990 - tmp |= MACB_BFEXT(IP, be32_to_cpu(ifa->ifa_local)); 5983 + tmp |= MACB_BFEXT(IP, ifa_local); 5991 5984 } 5985 + spin_unlock_irqrestore(&bp->lock, flags); 5992 5986 5993 5987 /* Change interrupt handler and 5994 5988 * Enable WoL IRQ on queue 0 ··· 6002 5994 dev_err(dev, 6003 5995 "Unable to request IRQ %d (error %d)\n", 6004 5996 bp->queues[0].irq, err); 6005 - spin_unlock_irqrestore(&bp->lock, flags); 6006 5997 return err; 6007 5998 } 5999 + spin_lock_irqsave(&bp->lock, flags); 6008 6000 queue_writel(bp->queues, IER, GEM_BIT(WOL)); 6009 6001 gem_writel(bp, WOL, tmp); 6002 + spin_unlock_irqrestore(&bp->lock, flags); 6010 6003 } else { 6011 6004 err = devm_request_irq(dev, bp->queues[0].irq, macb_wol_interrupt, 6012 6005 IRQF_SHARED, netdev->name, bp->queues); ··· 6015 6006 dev_err(dev, 6016 6007 "Unable to request IRQ %d (error %d)\n", 6017 6008 bp->queues[0].irq, err); 6018 - spin_unlock_irqrestore(&bp->lock, flags); 6019 6009 return err; 6020 6010 } 6011 + spin_lock_irqsave(&bp->lock, flags); 6021 6012 queue_writel(bp->queues, IER, MACB_BIT(WOL)); 6022 6013 macb_writel(bp, WOL, tmp); 6014 + spin_unlock_irqrestore(&bp->lock, flags); 6023 6015 } 6024 - spin_unlock_irqrestore(&bp->lock, flags); 6025 6016 6026 6017 enable_irq_wake(bp->queues[0].irq); 6027 6018 } ··· 6088 6079 queue_readl(bp->queues, ISR); 6089 6080 if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) 6090 6081 queue_writel(bp->queues, ISR, -1); 6082 + spin_unlock_irqrestore(&bp->lock, flags); 6083 + 6091 6084 /* Replace interrupt handler on queue 0 */ 6092 6085 devm_free_irq(dev, bp->queues[0].irq, bp->queues); 6093 6086 err = devm_request_irq(dev, bp->queues[0].irq, macb_interrupt, ··· 6098 6087 dev_err(dev, 6099 6088 "Unable to request IRQ %d (error %d)\n", 6100 6089 bp->queues[0].irq, err); 6101 - spin_unlock_irqrestore(&bp->lock, flags); 6102 6090 return err; 6103 6091 } 6104 - spin_unlock_irqrestore(&bp->lock, flags); 6105 6092 6106 6093 disable_irq_wake(bp->queues[0].irq); 6107 6094
+2
drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
··· 813 813 { 814 814 struct enetc_ndev_priv *priv = netdev_priv(ndev); 815 815 816 + ring->rx_max_pending = priv->rx_bd_count; 817 + ring->tx_max_pending = priv->tx_bd_count; 816 818 ring->rx_pending = priv->rx_bd_count; 817 819 ring->tx_pending = priv->tx_bd_count; 818 820
+15 -16
drivers/net/ethernet/intel/iavf/iavf_ethtool.c
··· 313 313 { 314 314 /* Report the maximum number queues, even if not every queue is 315 315 * currently configured. Since allocation of queues is in pairs, 316 - * use netdev->real_num_tx_queues * 2. The real_num_tx_queues is set 317 - * at device creation and never changes. 316 + * use netdev->num_tx_queues * 2. The num_tx_queues is set at 317 + * device creation and never changes. 318 318 */ 319 319 320 320 if (sset == ETH_SS_STATS) 321 321 return IAVF_STATS_LEN + 322 - (IAVF_QUEUE_STATS_LEN * 2 * 323 - netdev->real_num_tx_queues); 322 + (IAVF_QUEUE_STATS_LEN * 2 * netdev->num_tx_queues); 324 323 else 325 324 return -EINVAL; 326 325 } ··· 344 345 iavf_add_ethtool_stats(&data, adapter, iavf_gstrings_stats); 345 346 346 347 rcu_read_lock(); 347 - /* As num_active_queues describe both tx and rx queues, we can use 348 - * it to iterate over rings' stats. 348 + /* Use num_tx_queues to report stats for the maximum number of queues. 349 + * Queues beyond num_active_queues will report zero. 349 350 */ 350 - for (i = 0; i < adapter->num_active_queues; i++) { 351 - struct iavf_ring *ring; 351 + for (i = 0; i < netdev->num_tx_queues; i++) { 352 + struct iavf_ring *tx_ring = NULL, *rx_ring = NULL; 352 353 353 - /* Tx rings stats */ 354 - ring = &adapter->tx_rings[i]; 355 - iavf_add_queue_stats(&data, ring); 354 + if (i < adapter->num_active_queues) { 355 + tx_ring = &adapter->tx_rings[i]; 356 + rx_ring = &adapter->rx_rings[i]; 357 + } 356 358 357 - /* Rx rings stats */ 358 - ring = &adapter->rx_rings[i]; 359 - iavf_add_queue_stats(&data, ring); 359 + iavf_add_queue_stats(&data, tx_ring); 360 + iavf_add_queue_stats(&data, rx_ring); 360 361 } 361 362 rcu_read_unlock(); 362 363 } ··· 375 376 iavf_add_stat_strings(&data, iavf_gstrings_stats); 376 377 377 378 /* Queues are always allocated in pairs, so we just use 378 - * real_num_tx_queues for both Tx and Rx queues. 379 + * num_tx_queues for both Tx and Rx queues. 379 380 */ 380 - for (i = 0; i < netdev->real_num_tx_queues; i++) { 381 + for (i = 0; i < netdev->num_tx_queues; i++) { 381 382 iavf_add_stat_strings(&data, iavf_gstrings_queue_stats, 382 383 "tx", i); 383 384 iavf_add_stat_strings(&data, iavf_gstrings_queue_stats,
+22
drivers/net/ethernet/intel/ice/ice.h
··· 840 840 } 841 841 842 842 /** 843 + * ice_get_max_txq - return the maximum number of Tx queues for in a PF 844 + * @pf: PF structure 845 + * 846 + * Return: maximum number of Tx queues 847 + */ 848 + static inline int ice_get_max_txq(struct ice_pf *pf) 849 + { 850 + return min(num_online_cpus(), pf->hw.func_caps.common_cap.num_txq); 851 + } 852 + 853 + /** 854 + * ice_get_max_rxq - return the maximum number of Rx queues for in a PF 855 + * @pf: PF structure 856 + * 857 + * Return: maximum number of Rx queues 858 + */ 859 + static inline int ice_get_max_rxq(struct ice_pf *pf) 860 + { 861 + return min(num_online_cpus(), pf->hw.func_caps.common_cap.num_rxq); 862 + } 863 + 864 + /** 843 865 * ice_get_main_vsi - Get the PF VSI 844 866 * @pf: PF instance 845 867 *
+11 -21
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 1930 1930 int i = 0; 1931 1931 char *p; 1932 1932 1933 + if (ice_is_port_repr_netdev(netdev)) { 1934 + ice_update_eth_stats(vsi); 1935 + 1936 + for (j = 0; j < ICE_VSI_STATS_LEN; j++) { 1937 + p = (char *)vsi + ice_gstrings_vsi_stats[j].stat_offset; 1938 + data[i++] = (ice_gstrings_vsi_stats[j].sizeof_stat == 1939 + sizeof(u64)) ? *(u64 *)p : *(u32 *)p; 1940 + } 1941 + return; 1942 + } 1943 + 1933 1944 ice_update_pf_stats(pf); 1934 1945 ice_update_vsi_stats(vsi); 1935 1946 ··· 1949 1938 data[i++] = (ice_gstrings_vsi_stats[j].sizeof_stat == 1950 1939 sizeof(u64)) ? *(u64 *)p : *(u32 *)p; 1951 1940 } 1952 - 1953 - if (ice_is_port_repr_netdev(netdev)) 1954 - return; 1955 1941 1956 1942 /* populate per queue stats */ 1957 1943 rcu_read_lock(); ··· 3779 3771 info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) | BIT(HWTSTAMP_FILTER_ALL); 3780 3772 3781 3773 return 0; 3782 - } 3783 - 3784 - /** 3785 - * ice_get_max_txq - return the maximum number of Tx queues for in a PF 3786 - * @pf: PF structure 3787 - */ 3788 - static int ice_get_max_txq(struct ice_pf *pf) 3789 - { 3790 - return min(num_online_cpus(), pf->hw.func_caps.common_cap.num_txq); 3791 - } 3792 - 3793 - /** 3794 - * ice_get_max_rxq - return the maximum number of Rx queues for in a PF 3795 - * @pf: PF structure 3796 - */ 3797 - static int ice_get_max_rxq(struct ice_pf *pf) 3798 - { 3799 - return min(num_online_cpus(), pf->hw.func_caps.common_cap.num_rxq); 3800 3774 } 3801 3775 3802 3776 /**
+2 -2
drivers/net/ethernet/intel/ice/ice_main.c
··· 4699 4699 struct net_device *netdev; 4700 4700 u8 mac_addr[ETH_ALEN]; 4701 4701 4702 - netdev = alloc_etherdev_mqs(sizeof(*np), vsi->alloc_txq, 4703 - vsi->alloc_rxq); 4702 + netdev = alloc_etherdev_mqs(sizeof(*np), ice_get_max_txq(vsi->back), 4703 + ice_get_max_rxq(vsi->back)); 4704 4704 if (!netdev) 4705 4705 return -ENOMEM; 4706 4706
+3 -2
drivers/net/ethernet/intel/ice/ice_repr.c
··· 2 2 /* Copyright (C) 2019-2021, Intel Corporation. */ 3 3 4 4 #include "ice.h" 5 + #include "ice_lib.h" 5 6 #include "ice_eswitch.h" 6 7 #include "devlink/devlink.h" 7 8 #include "devlink/port.h" ··· 68 67 return; 69 68 vsi = repr->src_vsi; 70 69 71 - ice_update_vsi_stats(vsi); 70 + ice_update_eth_stats(vsi); 72 71 eth_stats = &vsi->eth_stats; 73 72 74 73 stats->tx_packets = eth_stats->tx_unicast + eth_stats->tx_broadcast + ··· 316 315 317 316 static int ice_repr_ready_vf(struct ice_repr *repr) 318 317 { 319 - return !ice_check_vf_ready_for_cfg(repr->vf); 318 + return ice_check_vf_ready_for_cfg(repr->vf); 320 319 } 321 320 322 321 static int ice_repr_ready_sf(struct ice_repr *repr)
+1 -1
drivers/net/ethernet/intel/idpf/idpf.h
··· 1066 1066 int idpf_idc_init(struct idpf_adapter *adapter); 1067 1067 int idpf_idc_init_aux_core_dev(struct idpf_adapter *adapter, 1068 1068 enum iidc_function_type ftype); 1069 - void idpf_idc_deinit_core_aux_device(struct iidc_rdma_core_dev_info *cdev_info); 1069 + void idpf_idc_deinit_core_aux_device(struct idpf_adapter *adapter); 1070 1070 void idpf_idc_deinit_vport_aux_device(struct iidc_rdma_vport_dev_info *vdev_info); 1071 1071 void idpf_idc_issue_reset_event(struct iidc_rdma_core_dev_info *cdev_info); 1072 1072 void idpf_idc_vdev_mtu_event(struct iidc_rdma_vport_dev_info *vdev_info,
+4 -2
drivers/net/ethernet/intel/idpf/idpf_idc.c
··· 470 470 471 471 /** 472 472 * idpf_idc_deinit_core_aux_device - de-initialize Auxiliary Device(s) 473 - * @cdev_info: IDC core device info pointer 473 + * @adapter: driver private data structure 474 474 */ 475 - void idpf_idc_deinit_core_aux_device(struct iidc_rdma_core_dev_info *cdev_info) 475 + void idpf_idc_deinit_core_aux_device(struct idpf_adapter *adapter) 476 476 { 477 + struct iidc_rdma_core_dev_info *cdev_info = adapter->cdev_info; 477 478 struct iidc_rdma_priv_dev_info *privd; 478 479 479 480 if (!cdev_info) ··· 486 485 kfree(privd->mapped_mem_regions); 487 486 kfree(privd); 488 487 kfree(cdev_info); 488 + adapter->cdev_info = NULL; 489 489 } 490 490 491 491 /**
+1 -1
drivers/net/ethernet/intel/idpf/idpf_txrx.c
··· 1860 1860 idpf_queue_assign(HSPLIT_EN, q, hs); 1861 1861 idpf_queue_assign(RSC_EN, q, rsc); 1862 1862 1863 - bufq_set->num_refillqs = num_rxq; 1864 1863 bufq_set->refillqs = kcalloc(num_rxq, swq_size, 1865 1864 GFP_KERNEL); 1866 1865 if (!bufq_set->refillqs) { 1867 1866 err = -ENOMEM; 1868 1867 goto err_alloc; 1869 1868 } 1869 + bufq_set->num_refillqs = num_rxq; 1870 1870 for (unsigned int k = 0; k < bufq_set->num_refillqs; k++) { 1871 1871 struct idpf_sw_queue *refillq = 1872 1872 &bufq_set->refillqs[k];
+1 -1
drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
··· 3668 3668 3669 3669 idpf_ptp_release(adapter); 3670 3670 idpf_deinit_task(adapter); 3671 - idpf_idc_deinit_core_aux_device(adapter->cdev_info); 3671 + idpf_idc_deinit_core_aux_device(adapter); 3672 3672 idpf_rel_rx_pt_lkup(adapter); 3673 3673 idpf_intr_rel(adapter); 3674 3674
+5
drivers/net/ethernet/microchip/lan743x_main.c
··· 3060 3060 else if (speed == SPEED_100) 3061 3061 mac_cr |= MAC_CR_CFG_L_; 3062 3062 3063 + if (duplex == DUPLEX_FULL) 3064 + mac_cr |= MAC_CR_DPX_; 3065 + else 3066 + mac_cr &= ~MAC_CR_DPX_; 3067 + 3063 3068 lan743x_csr_write(adapter, MAC_CR, mac_cr); 3064 3069 3065 3070 lan743x_ptp_update_latency(adapter, speed);
+4 -2
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 3458 3458 struct auxiliary_device *adev; 3459 3459 struct mana_adev *madev; 3460 3460 int ret; 3461 + int id; 3461 3462 3462 3463 madev = kzalloc_obj(*madev); 3463 3464 if (!madev) ··· 3468 3467 ret = mana_adev_idx_alloc(); 3469 3468 if (ret < 0) 3470 3469 goto idx_fail; 3471 - adev->id = ret; 3470 + id = ret; 3471 + adev->id = id; 3472 3472 3473 3473 adev->name = name; 3474 3474 adev->dev.parent = gd->gdma_context->dev; ··· 3495 3493 auxiliary_device_uninit(adev); 3496 3494 3497 3495 init_fail: 3498 - mana_adev_idx_free(adev->id); 3496 + mana_adev_idx_free(id); 3499 3497 3500 3498 idx_fail: 3501 3499 kfree(madev);
+11 -6
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 1719 1719 if (ether_addr_equal(netdev->dev_addr, mac)) 1720 1720 return 0; 1721 1721 1722 - err = ionic_program_mac(lif, mac); 1723 - if (err < 0) 1724 - return err; 1722 + /* Only program macs for virtual functions to avoid losing the permanent 1723 + * Mac across warm reset/reboot. 1724 + */ 1725 + if (lif->ionic->pdev->is_virtfn) { 1726 + err = ionic_program_mac(lif, mac); 1727 + if (err < 0) 1728 + return err; 1725 1729 1726 - if (err > 0) 1727 - netdev_dbg(netdev, "%s: SET and GET ATTR Mac are not equal-due to old FW running\n", 1728 - __func__); 1730 + if (err > 0) 1731 + netdev_dbg(netdev, "%s: SET and GET ATTR Mac are not equal-due to old FW running\n", 1732 + __func__); 1733 + } 1729 1734 1730 1735 err = eth_prepare_mac_addr_change(netdev, addr); 1731 1736 if (err)
+2 -2
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 962 962 pkt_len -= 4; 963 963 cppi5_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL); 964 964 psdata = cppi5_hdesc_get_psdata(desc_rx); 965 - k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx); 966 965 count++; 967 966 xsk_buff_set_size(xdp, pkt_len); 968 967 xsk_buff_dma_sync_for_cpu(xdp); ··· 987 988 emac_dispatch_skb_zc(emac, xdp, psdata); 988 989 xsk_buff_free(xdp); 989 990 } 991 + k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx); 990 992 } 991 993 992 994 if (xdp_status & ICSSG_XDP_REDIR) ··· 1057 1057 /* firmware adds 4 CRC bytes, strip them */ 1058 1058 pkt_len -= 4; 1059 1059 cppi5_desc_get_tags_ids(&desc_rx->hdr, &port_id, NULL); 1060 - k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx); 1061 1060 1062 1061 /* if allocation fails we drop the packet but push the 1063 1062 * descriptor back to the ring with old page to prevent a stall ··· 1114 1115 ndev->stats.rx_packets++; 1115 1116 1116 1117 requeue: 1118 + k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx); 1117 1119 /* queue another RX DMA */ 1118 1120 ret = prueth_dma_rx_push_mapped(emac, &emac->rx_chns, new_page, 1119 1121 PRUETH_MAX_PKT_SIZE);
+64 -1
drivers/net/team/team_core.c
··· 2058 2058 * rt netlink interface 2059 2059 ***********************/ 2060 2060 2061 + /* For tx path we need a linkup && enabled port and for parse any port 2062 + * suffices. 2063 + */ 2064 + static struct team_port *team_header_port_get_rcu(struct team *team, 2065 + bool txable) 2066 + { 2067 + struct team_port *port; 2068 + 2069 + list_for_each_entry_rcu(port, &team->port_list, list) { 2070 + if (!txable || team_port_txable(port)) 2071 + return port; 2072 + } 2073 + 2074 + return NULL; 2075 + } 2076 + 2077 + static int team_header_create(struct sk_buff *skb, struct net_device *team_dev, 2078 + unsigned short type, const void *daddr, 2079 + const void *saddr, unsigned int len) 2080 + { 2081 + struct team *team = netdev_priv(team_dev); 2082 + const struct header_ops *port_ops; 2083 + struct team_port *port; 2084 + int ret = 0; 2085 + 2086 + rcu_read_lock(); 2087 + port = team_header_port_get_rcu(team, true); 2088 + if (port) { 2089 + port_ops = READ_ONCE(port->dev->header_ops); 2090 + if (port_ops && port_ops->create) 2091 + ret = port_ops->create(skb, port->dev, 2092 + type, daddr, saddr, len); 2093 + } 2094 + rcu_read_unlock(); 2095 + return ret; 2096 + } 2097 + 2098 + static int team_header_parse(const struct sk_buff *skb, 2099 + const struct net_device *team_dev, 2100 + unsigned char *haddr) 2101 + { 2102 + struct team *team = netdev_priv(team_dev); 2103 + const struct header_ops *port_ops; 2104 + struct team_port *port; 2105 + int ret = 0; 2106 + 2107 + rcu_read_lock(); 2108 + port = team_header_port_get_rcu(team, false); 2109 + if (port) { 2110 + port_ops = READ_ONCE(port->dev->header_ops); 2111 + if (port_ops && port_ops->parse) 2112 + ret = port_ops->parse(skb, port->dev, haddr); 2113 + } 2114 + rcu_read_unlock(); 2115 + return ret; 2116 + } 2117 + 2118 + static const struct header_ops team_header_ops = { 2119 + .create = team_header_create, 2120 + .parse = team_header_parse, 2121 + }; 2122 + 2061 2123 static void team_setup_by_port(struct net_device *dev, 2062 2124 struct net_device *port_dev) 2063 2125 { ··· 2128 2066 if (port_dev->type == ARPHRD_ETHER) 2129 2067 dev->header_ops = team->header_ops_cache; 2130 2068 else 2131 - dev->header_ops = port_dev->header_ops; 2069 + dev->header_ops = port_dev->header_ops ? 2070 + &team_header_ops : NULL; 2132 2071 dev->type = port_dev->type; 2133 2072 dev->hard_header_len = port_dev->hard_header_len; 2134 2073 dev->needed_headroom = port_dev->needed_headroom;
+1 -1
drivers/net/tun_vnet.h
··· 244 244 245 245 if (virtio_net_hdr_tnl_from_skb(skb, tnl_hdr, has_tnl_offload, 246 246 tun_vnet_is_little_endian(flags), 247 - vlan_hlen, true)) { 247 + vlan_hlen, true, false)) { 248 248 struct virtio_net_hdr_v1 *hdr = &tnl_hdr->hash_hdr.hdr; 249 249 struct skb_shared_info *sinfo = skb_shinfo(skb); 250 250
+6 -1
drivers/net/virtio_net.c
··· 3278 3278 struct virtio_net_hdr_v1_hash_tunnel *hdr; 3279 3279 int num_sg; 3280 3280 unsigned hdr_len = vi->hdr_len; 3281 + bool feature_hdrlen; 3281 3282 bool can_push; 3283 + 3284 + feature_hdrlen = virtio_has_feature(vi->vdev, 3285 + VIRTIO_NET_F_GUEST_HDRLEN); 3282 3286 3283 3287 pr_debug("%s: xmit %p %pM\n", vi->dev->name, skb, dest); 3284 3288 ··· 3303 3299 3304 3300 if (virtio_net_hdr_tnl_from_skb(skb, hdr, vi->tx_tnl, 3305 3301 virtio_is_little_endian(vi->vdev), 0, 3306 - false)) 3302 + false, feature_hdrlen)) 3307 3303 return -EPROTO; 3308 3304 3309 3305 if (vi->mergeable_rx_bufs) ··· 3366 3362 /* Don't wait up for transmitted skbs to be freed. */ 3367 3363 if (!use_napi) { 3368 3364 skb_orphan(skb); 3365 + skb_dst_drop(skb); 3369 3366 nf_reset_ct(skb); 3370 3367 } 3371 3368
+5
drivers/pci/endpoint/functions/pci-epf-test.c
··· 894 894 dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret); 895 895 bar->submap = old_submap; 896 896 bar->num_submap = old_nsub; 897 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar); 898 + if (ret) 899 + dev_warn(&epf->dev, "Failed to restore the original BAR mapping: %d\n", 900 + ret); 901 + 897 902 kfree(submap); 898 903 goto err; 899 904 }
+41 -13
drivers/pci/pwrctrl/core.c
··· 268 268 } 269 269 EXPORT_SYMBOL_GPL(pci_pwrctrl_power_on_devices); 270 270 271 + /* 272 + * Check whether the pwrctrl device really needs to be created or not. The 273 + * pwrctrl device will only be created if the node satisfies below requirements: 274 + * 275 + * 1. Presence of compatible property with "pci" prefix to match against the 276 + * pwrctrl driver (AND) 277 + * 2. At least one of the power supplies defined in the devicetree node of the 278 + * device (OR) in the remote endpoint parent node to indicate pwrctrl 279 + * requirement. 280 + */ 281 + static bool pci_pwrctrl_is_required(struct device_node *np) 282 + { 283 + struct device_node *endpoint; 284 + const char *compat; 285 + int ret; 286 + 287 + ret = of_property_read_string(np, "compatible", &compat); 288 + if (ret < 0) 289 + return false; 290 + 291 + if (!strstarts(compat, "pci")) 292 + return false; 293 + 294 + if (of_pci_supply_present(np)) 295 + return true; 296 + 297 + if (of_graph_is_present(np)) { 298 + for_each_endpoint_of_node(np, endpoint) { 299 + struct device_node *remote __free(device_node) = 300 + of_graph_get_remote_port_parent(endpoint); 301 + if (remote) { 302 + if (of_pci_supply_present(remote)) 303 + return true; 304 + } 305 + } 306 + } 307 + 308 + return false; 309 + } 310 + 271 311 static int pci_pwrctrl_create_device(struct device_node *np, 272 312 struct device *parent) 273 313 { ··· 327 287 return 0; 328 288 } 329 289 330 - /* 331 - * Sanity check to make sure that the node has the compatible property 332 - * to allow driver binding. 333 - */ 334 - if (!of_property_present(np, "compatible")) 335 - return 0; 336 - 337 - /* 338 - * Check whether the pwrctrl device really needs to be created or not. 339 - * This is decided based on at least one of the power supplies defined 340 - * in the devicetree node of the device or the graph property. 341 - */ 342 - if (!of_pci_supply_present(np) && !of_graph_is_present(np)) { 290 + if (!pci_pwrctrl_is_required(np)) { 343 291 dev_dbg(parent, "Skipping OF node: %s\n", np->name); 344 292 return 0; 345 293 }
+6 -3
drivers/pinctrl/mediatek/pinctrl-mtk-common.c
··· 1135 1135 goto chip_error; 1136 1136 } 1137 1137 1138 - ret = mtk_eint_init(pctl, pdev); 1139 - if (ret) 1140 - goto chip_error; 1138 + /* Only initialize EINT if we have EINT pins */ 1139 + if (data->eint_hw.ap_num > 0) { 1140 + ret = mtk_eint_init(pctl, pdev); 1141 + if (ret) 1142 + goto chip_error; 1143 + } 1141 1144 1142 1145 return 0; 1143 1146
+16
drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
··· 723 723 .pin_config_group_dbg_show = pmic_gpio_config_dbg_show, 724 724 }; 725 725 726 + static int pmic_gpio_get_direction(struct gpio_chip *chip, unsigned pin) 727 + { 728 + struct pmic_gpio_state *state = gpiochip_get_data(chip); 729 + struct pmic_gpio_pad *pad; 730 + 731 + pad = state->ctrl->desc->pins[pin].drv_data; 732 + 733 + if (!pad->is_enabled || pad->analog_pass || 734 + (!pad->input_enabled && !pad->output_enabled)) 735 + return -EINVAL; 736 + 737 + /* Make sure the state is aligned on what pmic_gpio_get() returns */ 738 + return pad->input_enabled ? GPIO_LINE_DIRECTION_IN : GPIO_LINE_DIRECTION_OUT; 739 + } 740 + 726 741 static int pmic_gpio_direction_input(struct gpio_chip *chip, unsigned pin) 727 742 { 728 743 struct pmic_gpio_state *state = gpiochip_get_data(chip); ··· 816 801 } 817 802 818 803 static const struct gpio_chip pmic_gpio_gpio_template = { 804 + .get_direction = pmic_gpio_get_direction, 819 805 .direction_input = pmic_gpio_direction_input, 820 806 .direction_output = pmic_gpio_direction_output, 821 807 .get = pmic_gpio_get,
+1 -1
drivers/pinctrl/renesas/pinctrl-rza1.c
··· 589 589 { 590 590 void __iomem *mem = RZA1_ADDR(port->base, reg, port->id); 591 591 592 - return ioread16(mem) & BIT(bit); 592 + return !!(ioread16(mem) & BIT(bit)); 593 593 } 594 594 595 595 /**
+8 -7
drivers/pinctrl/renesas/pinctrl-rzt2h.c
··· 85 85 struct gpio_chip gpio_chip; 86 86 struct pinctrl_gpio_range gpio_range; 87 87 DECLARE_BITMAP(used_irqs, RZT2H_INTERRUPTS_NUM); 88 - spinlock_t lock; /* lock read/write registers */ 88 + raw_spinlock_t lock; /* lock read/write registers */ 89 89 struct mutex mutex; /* serialize adding groups and functions */ 90 90 bool safety_port_enabled; 91 91 atomic_t wakeup_path; ··· 145 145 u64 reg64; 146 146 u16 reg16; 147 147 148 - guard(spinlock_irqsave)(&pctrl->lock); 148 + guard(raw_spinlock_irqsave)(&pctrl->lock); 149 149 150 150 /* Set pin to 'Non-use (Hi-Z input protection)' */ 151 151 reg16 = rzt2h_pinctrl_readw(pctrl, port, PM(port)); ··· 474 474 if (ret) 475 475 return ret; 476 476 477 - guard(spinlock_irqsave)(&pctrl->lock); 477 + guard(raw_spinlock_irqsave)(&pctrl->lock); 478 478 479 479 /* Select GPIO mode in PMC Register */ 480 480 rzt2h_pinctrl_set_gpio_en(pctrl, port, bit, true); ··· 487 487 { 488 488 u16 reg; 489 489 490 - guard(spinlock_irqsave)(&pctrl->lock); 490 + guard(raw_spinlock_irqsave)(&pctrl->lock); 491 491 492 492 reg = rzt2h_pinctrl_readw(pctrl, port, PM(port)); 493 493 reg &= ~PM_PIN_MASK(bit); ··· 509 509 if (ret) 510 510 return ret; 511 511 512 - guard(spinlock_irqsave)(&pctrl->lock); 512 + guard(raw_spinlock_irqsave)(&pctrl->lock); 513 513 514 514 if (rzt2h_pinctrl_readb(pctrl, port, PMC(port)) & BIT(bit)) { 515 515 /* ··· 547 547 u8 bit = RZT2H_PIN_ID_TO_PIN(offset); 548 548 u8 reg; 549 549 550 - guard(spinlock_irqsave)(&pctrl->lock); 550 + guard(raw_spinlock_irqsave)(&pctrl->lock); 551 551 552 552 reg = rzt2h_pinctrl_readb(pctrl, port, P(port)); 553 553 if (value) ··· 833 833 if (ret) 834 834 return dev_err_probe(dev, ret, "Unable to parse gpio-ranges\n"); 835 835 836 + of_node_put(of_args.np); 836 837 if (of_args.args[0] != 0 || of_args.args[1] != 0 || 837 838 of_args.args[2] != pctrl->data->n_port_pins) 838 839 return dev_err_probe(dev, -EINVAL, ··· 965 964 if (ret) 966 965 return ret; 967 966 968 - spin_lock_init(&pctrl->lock); 
967 + raw_spin_lock_init(&pctrl->lock); 969 968 mutex_init(&pctrl->mutex); 970 969 platform_set_drvdata(pdev, pctrl); 971 970
+1
drivers/pinctrl/stm32/Kconfig
··· 65 65 select PINMUX 66 66 select GENERIC_PINCONF 67 67 select GPIOLIB 68 + select GPIO_GENERIC 68 69 help 69 70 The Hardware Debug Port allows the observation of internal signals. 70 71 It uses configurable multiplexer to route signals in a dedicated observation register.
+32 -11
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 157 157 const char *pin_name, 158 158 const char *func_name) 159 159 { 160 + unsigned long variant = pctl->flags & SUNXI_PINCTRL_VARIANT_MASK; 160 161 int i; 161 162 162 163 for (i = 0; i < pctl->desc->npins; i++) { ··· 169 168 while (func->name) { 170 169 if (!strcmp(func->name, func_name) && 171 170 (!func->variant || 172 - func->variant & pctl->variant)) 171 + func->variant & variant)) 173 172 return func; 174 173 175 174 func++; ··· 210 209 const u16 pin_num, 211 210 const u8 muxval) 212 211 { 212 + unsigned long variant = pctl->flags & SUNXI_PINCTRL_VARIANT_MASK; 213 + 213 214 for (unsigned int i = 0; i < pctl->desc->npins; i++) { 214 215 const struct sunxi_desc_pin *pin = pctl->desc->pins + i; 215 216 struct sunxi_desc_function *func = pin->functions; ··· 219 216 if (pin->pin.number != pin_num) 220 217 continue; 221 218 222 - if (pin->variant && !(pctl->variant & pin->variant)) 219 + if (pin->variant && !(variant & pin->variant)) 223 220 continue; 224 221 225 222 while (func->name) { ··· 1092 1089 { 1093 1090 struct sunxi_pinctrl *pctl = irq_data_get_irq_chip_data(d); 1094 1091 struct sunxi_desc_function *func; 1092 + unsigned int offset; 1093 + u32 reg, shift, mask; 1094 + u8 disabled_mux, muxval; 1095 1095 int ret; 1096 1096 1097 1097 func = sunxi_pinctrl_desc_find_function_by_pin(pctl, ··· 1102 1096 if (!func) 1103 1097 return -EINVAL; 1104 1098 1105 - ret = gpiochip_lock_as_irq(pctl->chip, 1106 - pctl->irq_array[d->hwirq] - pctl->desc->pin_base); 1099 + offset = pctl->irq_array[d->hwirq] - pctl->desc->pin_base; 1100 + sunxi_mux_reg(pctl, offset, &reg, &shift, &mask); 1101 + muxval = (readl(pctl->membase + reg) & mask) >> shift; 1102 + 1103 + /* Change muxing to GPIO INPUT mode if at reset value */ 1104 + if (pctl->flags & SUNXI_PINCTRL_NEW_REG_LAYOUT) 1105 + disabled_mux = SUN4I_FUNC_DISABLED_NEW; 1106 + else 1107 + disabled_mux = SUN4I_FUNC_DISABLED_OLD; 1108 + 1109 + if (muxval == disabled_mux) 1110 + sunxi_pmx_set(pctl->pctl_dev, 
pctl->irq_array[d->hwirq], 1111 + SUN4I_FUNC_INPUT); 1112 + 1113 + ret = gpiochip_lock_as_irq(pctl->chip, offset); 1107 1114 if (ret) { 1108 1115 dev_err(pctl->dev, "unable to lock HW IRQ %lu for IRQ\n", 1109 1116 irqd_to_hwirq(d)); ··· 1357 1338 static int sunxi_pinctrl_build_state(struct platform_device *pdev) 1358 1339 { 1359 1340 struct sunxi_pinctrl *pctl = platform_get_drvdata(pdev); 1341 + unsigned long variant = pctl->flags & SUNXI_PINCTRL_VARIANT_MASK; 1360 1342 void *ptr; 1361 1343 int i; 1362 1344 ··· 1382 1362 const struct sunxi_desc_pin *pin = pctl->desc->pins + i; 1383 1363 struct sunxi_pinctrl_group *group = pctl->groups + pctl->ngroups; 1384 1364 1385 - if (pin->variant && !(pctl->variant & pin->variant)) 1365 + if (pin->variant && !(variant & pin->variant)) 1386 1366 continue; 1387 1367 1388 1368 group->name = pin->pin.name; ··· 1407 1387 const struct sunxi_desc_pin *pin = pctl->desc->pins + i; 1408 1388 struct sunxi_desc_function *func; 1409 1389 1410 - if (pin->variant && !(pctl->variant & pin->variant)) 1390 + if (pin->variant && !(variant & pin->variant)) 1411 1391 continue; 1412 1392 1413 1393 for (func = pin->functions; func->name; func++) { 1414 - if (func->variant && !(pctl->variant & func->variant)) 1394 + if (func->variant && !(variant & func->variant)) 1415 1395 continue; 1416 1396 1417 1397 /* Create interrupt mapping while we're at it */ ··· 1439 1419 const struct sunxi_desc_pin *pin = pctl->desc->pins + i; 1440 1420 struct sunxi_desc_function *func; 1441 1421 1442 - if (pin->variant && !(pctl->variant & pin->variant)) 1422 + if (pin->variant && !(variant & pin->variant)) 1443 1423 continue; 1444 1424 1445 1425 for (func = pin->functions; func->name; func++) { 1446 1426 struct sunxi_pinctrl_function *func_item; 1447 1427 const char **func_grp; 1448 1428 1449 - if (func->variant && !(pctl->variant & func->variant)) 1429 + if (func->variant && !(variant & func->variant)) 1450 1430 continue; 1451 1431 1452 1432 func_item = 
sunxi_pinctrl_find_function_by_name(pctl, ··· 1588 1568 1589 1569 pctl->dev = &pdev->dev; 1590 1570 pctl->desc = desc; 1591 - pctl->variant = flags & SUNXI_PINCTRL_VARIANT_MASK; 1571 + pctl->flags = flags; 1592 1572 if (flags & SUNXI_PINCTRL_NEW_REG_LAYOUT) { 1593 1573 pctl->bank_mem_size = D1_BANK_MEM_SIZE; 1594 1574 pctl->pull_regs_offset = D1_PULL_REGS_OFFSET; ··· 1624 1604 1625 1605 for (i = 0, pin_idx = 0; i < pctl->desc->npins; i++) { 1626 1606 const struct sunxi_desc_pin *pin = pctl->desc->pins + i; 1607 + unsigned long variant = pctl->flags & SUNXI_PINCTRL_VARIANT_MASK; 1627 1608 1628 - if (pin->variant && !(pctl->variant & pin->variant)) 1609 + if (pin->variant && !(variant & pin->variant)) 1629 1610 continue; 1630 1611 1631 1612 pins[pin_idx++] = pin->pin;
+3 -1
drivers/pinctrl/sunxi/pinctrl-sunxi.h
··· 86 86 87 87 #define SUN4I_FUNC_INPUT 0 88 88 #define SUN4I_FUNC_IRQ 6 89 + #define SUN4I_FUNC_DISABLED_OLD 7 90 + #define SUN4I_FUNC_DISABLED_NEW 15 89 91 90 92 #define SUNXI_PINCTRL_VARIANT_MASK GENMASK(7, 0) 91 93 #define SUNXI_PINCTRL_NEW_REG_LAYOUT BIT(8) ··· 176 174 unsigned *irq_array; 177 175 raw_spinlock_t lock; 178 176 struct pinctrl_dev *pctl_dev; 179 - unsigned long variant; 177 + unsigned long flags; 180 178 u32 bank_mem_size; 181 179 u32 pull_regs_offset; 182 180 u32 dlevel_field_width;
+1 -1
drivers/platform/olpc/olpc-xo175-ec.c
··· 482 482 dev_dbg(dev, "CMD %x, %zd bytes expected\n", cmd, resp_len); 483 483 484 484 if (inlen > 5) { 485 - dev_err(dev, "command len %zd too big!\n", resp_len); 485 + dev_err(dev, "command len %zd too big!\n", inlen); 486 486 return -EOVERFLOW; 487 487 } 488 488
+1 -1
drivers/platform/x86/amd/hsmp/hsmp.c
··· 117 117 } 118 118 119 119 if (unlikely(mbox_status == HSMP_STATUS_NOT_READY)) { 120 - dev_err(sock->dev, "Message ID 0x%X failure : SMU tmeout (status = 0x%X)\n", 120 + dev_err(sock->dev, "Message ID 0x%X failure : SMU timeout (status = 0x%X)\n", 121 121 msg->msg_id, mbox_status); 122 122 return -ETIMEDOUT; 123 123 } else if (unlikely(mbox_status == HSMP_ERR_INVALID_MSG)) {
+77
drivers/platform/x86/asus-armoury.h
··· 1082 1082 }, 1083 1083 { 1084 1084 .matches = { 1085 + DMI_MATCH(DMI_BOARD_NAME, "GA503QM"), 1086 + }, 1087 + .driver_data = &(struct power_data) { 1088 + .ac_data = &(struct power_limits) { 1089 + .ppt_pl1_spl_min = 15, 1090 + .ppt_pl1_spl_def = 35, 1091 + .ppt_pl1_spl_max = 80, 1092 + .ppt_pl2_sppt_min = 65, 1093 + .ppt_pl2_sppt_max = 80, 1094 + }, 1095 + }, 1096 + }, 1097 + { 1098 + .matches = { 1085 1099 DMI_MATCH(DMI_BOARD_NAME, "GA503QR"), 1086 1100 }, 1087 1101 .driver_data = &(struct power_data) { ··· 1534 1520 }, 1535 1521 { 1536 1522 .matches = { 1523 + DMI_MATCH(DMI_BOARD_NAME, "GZ302EA"), 1524 + }, 1525 + .driver_data = &(struct power_data) { 1526 + .ac_data = &(struct power_limits) { 1527 + .ppt_pl1_spl_min = 28, 1528 + .ppt_pl1_spl_def = 60, 1529 + .ppt_pl1_spl_max = 80, 1530 + .ppt_pl2_sppt_min = 32, 1531 + .ppt_pl2_sppt_def = 75, 1532 + .ppt_pl2_sppt_max = 92, 1533 + .ppt_pl3_fppt_min = 45, 1534 + .ppt_pl3_fppt_def = 86, 1535 + .ppt_pl3_fppt_max = 93, 1536 + }, 1537 + .dc_data = &(struct power_limits) { 1538 + .ppt_pl1_spl_min = 28, 1539 + .ppt_pl1_spl_def = 45, 1540 + .ppt_pl1_spl_max = 80, 1541 + .ppt_pl2_sppt_min = 32, 1542 + .ppt_pl2_sppt_def = 52, 1543 + .ppt_pl2_sppt_max = 92, 1544 + .ppt_pl3_fppt_min = 45, 1545 + .ppt_pl3_fppt_def = 71, 1546 + .ppt_pl3_fppt_max = 93, 1547 + }, 1548 + }, 1549 + }, 1550 + { 1551 + .matches = { 1537 1552 DMI_MATCH(DMI_BOARD_NAME, "G513I"), 1538 1553 }, 1539 1554 .driver_data = &(struct power_data) { ··· 1633 1590 .ppt_pl2_sppt_max = 50, 1634 1591 .ppt_pl3_fppt_min = 28, 1635 1592 .ppt_pl3_fppt_max = 65, 1593 + .nv_temp_target_min = 75, 1594 + .nv_temp_target_max = 87, 1595 + }, 1596 + .requires_fan_curve = true, 1597 + }, 1598 + }, 1599 + { 1600 + .matches = { 1601 + DMI_MATCH(DMI_BOARD_NAME, "G614FP"), 1602 + }, 1603 + .driver_data = &(struct power_data) { 1604 + .ac_data = &(struct power_limits) { 1605 + .ppt_pl1_spl_min = 30, 1606 + .ppt_pl1_spl_max = 120, 1607 + .ppt_pl2_sppt_min = 65, 1608 + 
.ppt_pl2_sppt_def = 140, 1609 + .ppt_pl2_sppt_max = 165, 1610 + .ppt_pl3_fppt_min = 65, 1611 + .ppt_pl3_fppt_def = 140, 1612 + .ppt_pl3_fppt_max = 165, 1613 + .nv_temp_target_min = 75, 1614 + .nv_temp_target_max = 87, 1615 + .nv_dynamic_boost_min = 5, 1616 + .nv_dynamic_boost_max = 15, 1617 + .nv_tgp_min = 50, 1618 + .nv_tgp_max = 100, 1619 + }, 1620 + .dc_data = &(struct power_limits) { 1621 + .ppt_pl1_spl_min = 25, 1622 + .ppt_pl1_spl_max = 65, 1623 + .ppt_pl2_sppt_min = 25, 1624 + .ppt_pl2_sppt_max = 65, 1625 + .ppt_pl3_fppt_min = 35, 1626 + .ppt_pl3_fppt_max = 75, 1636 1627 .nv_temp_target_min = 75, 1637 1628 .nv_temp_target_max = 87, 1638 1629 },
+1 -1
drivers/platform/x86/asus-nb-wmi.c
··· 548 548 .callback = dmi_matched, 549 549 .ident = "ASUS ROG Z13", 550 550 .matches = { 551 - DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 551 + DMI_MATCH(DMI_SYS_VENDOR, "ASUS"), 552 552 DMI_MATCH(DMI_PRODUCT_NAME, "ROG Flow Z13"), 553 553 }, 554 554 .driver_data = &quirk_asus_z13,
+19
drivers/platform/x86/hp/hp-wmi.c
··· 120 120 .ec_tp_offset = HP_VICTUS_S_EC_THERMAL_PROFILE_OFFSET, 121 121 }; 122 122 123 + static const struct thermal_profile_params omen_v1_legacy_thermal_params = { 124 + .performance = HP_OMEN_V1_THERMAL_PROFILE_PERFORMANCE, 125 + .balanced = HP_OMEN_V1_THERMAL_PROFILE_DEFAULT, 126 + .low_power = HP_OMEN_V1_THERMAL_PROFILE_DEFAULT, 127 + .ec_tp_offset = HP_OMEN_EC_THERMAL_PROFILE_OFFSET, 128 + }; 129 + 123 130 /* 124 131 * A generic pointer for the currently-active board's thermal profile 125 132 * parameters. ··· 183 176 /* DMI Board names of Victus 16-r and Victus 16-s laptops */ 184 177 static const struct dmi_system_id victus_s_thermal_profile_boards[] __initconst = { 185 178 { 179 + .matches = { DMI_MATCH(DMI_BOARD_NAME, "8A4D") }, 180 + .driver_data = (void *)&omen_v1_legacy_thermal_params, 181 + }, 182 + { 186 183 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BAB") }, 187 184 .driver_data = (void *)&omen_v1_thermal_params, 188 185 }, 189 186 { 190 187 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BBE") }, 191 188 .driver_data = (void *)&victus_s_thermal_params, 189 + }, 190 + { 191 + .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BCA") }, 192 + .driver_data = (void *)&omen_v1_thermal_params, 192 193 }, 193 194 { 194 195 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BCD") }, ··· 209 194 { 210 195 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8BD5") }, 211 196 .driver_data = (void *)&victus_s_thermal_params, 197 + }, 198 + { 199 + .matches = { DMI_MATCH(DMI_BOARD_NAME, "8C76") }, 200 + .driver_data = (void *)&omen_v1_thermal_params, 212 201 }, 213 202 { 214 203 .matches = { DMI_MATCH(DMI_BOARD_NAME, "8C78") },
+9 -1
drivers/platform/x86/intel/hid.c
··· 438 438 return 0; 439 439 } 440 440 441 + static int intel_hid_pl_freeze_handler(struct device *device) 442 + { 443 + struct intel_hid_priv *priv = dev_get_drvdata(device); 444 + 445 + priv->wakeup_mode = false; 446 + return intel_hid_pl_suspend_handler(device); 447 + } 448 + 441 449 static int intel_hid_pl_resume_handler(struct device *device) 442 450 { 443 451 intel_hid_pm_complete(device); ··· 460 452 static const struct dev_pm_ops intel_hid_pl_pm_ops = { 461 453 .prepare = intel_hid_pm_prepare, 462 454 .complete = intel_hid_pm_complete, 463 - .freeze = intel_hid_pl_suspend_handler, 455 + .freeze = intel_hid_pl_freeze_handler, 464 456 .thaw = intel_hid_pl_resume_handler, 465 457 .restore = intel_hid_pl_resume_handler, 466 458 .suspend = intel_hid_pl_suspend_handler,
+4 -1
drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
··· 558 558 { 559 559 u64 value; 560 560 561 + if (!static_cpu_has(X86_FEATURE_HWP)) 562 + return true; 563 + 561 564 rdmsrq(MSR_PM_ENABLE, value); 562 565 return !(value & 0x1); 563 566 } ··· 872 869 _read_pp_info("current_level", perf_level.current_level, SST_PP_STATUS_OFFSET, 873 870 SST_PP_LEVEL_START, SST_PP_LEVEL_WIDTH, SST_MUL_FACTOR_NONE) 874 871 _read_pp_info("locked", perf_level.locked, SST_PP_STATUS_OFFSET, 875 - SST_PP_LOCK_START, SST_PP_LEVEL_WIDTH, SST_MUL_FACTOR_NONE) 872 + SST_PP_LOCK_START, SST_PP_LOCK_WIDTH, SST_MUL_FACTOR_NONE) 876 873 _read_pp_info("feature_state", perf_level.feature_state, SST_PP_STATUS_OFFSET, 877 874 SST_PP_FEATURE_STATE_START, SST_PP_FEATURE_STATE_WIDTH, SST_MUL_FACTOR_NONE) 878 875 perf_level.enabled = !!(power_domain_info->sst_header.cap_mask & BIT(1));
-2
drivers/platform/x86/lenovo/wmi-gamezone.c
··· 31 31 #define LWMI_GZ_METHOD_ID_SMARTFAN_SET 44 32 32 #define LWMI_GZ_METHOD_ID_SMARTFAN_GET 45 33 33 34 - static BLOCKING_NOTIFIER_HEAD(gz_chain_head); 35 - 36 34 struct lwmi_gz_priv { 37 35 enum thermal_mode current_mode; 38 36 struct notifier_block event_nb;
+4 -8
drivers/pmdomain/bcm/bcm2835-power.c
··· 9 9 #include <linux/clk.h> 10 10 #include <linux/delay.h> 11 11 #include <linux/io.h> 12 + #include <linux/iopoll.h> 12 13 #include <linux/mfd/bcm2835-pm.h> 13 14 #include <linux/module.h> 14 15 #include <linux/platform_device.h> ··· 154 153 static int bcm2835_asb_control(struct bcm2835_power *power, u32 reg, bool enable) 155 154 { 156 155 void __iomem *base = power->asb; 157 - u64 start; 158 156 u32 val; 159 157 160 158 switch (reg) { ··· 166 166 break; 167 167 } 168 168 169 - start = ktime_get_ns(); 170 - 171 169 /* Enable the module's async AXI bridges. */ 172 170 if (enable) { 173 171 val = readl(base + reg) & ~ASB_REQ_STOP; ··· 174 176 } 175 177 writel(PM_PASSWORD | val, base + reg); 176 178 177 - while (!!(readl(base + reg) & ASB_ACK) == enable) { 178 - cpu_relax(); 179 - if (ktime_get_ns() - start >= 1000) 180 - return -ETIMEDOUT; 181 - } 179 + if (readl_poll_timeout_atomic(base + reg, val, 180 + !!(val & ASB_ACK) != enable, 0, 5)) 181 + return -ETIMEDOUT; 182 182 183 183 return 0; 184 184 }
+1 -1
drivers/pmdomain/mediatek/mtk-pm-domains.c
··· 1203 1203 scpsys->soc_data = soc; 1204 1204 1205 1205 scpsys->pd_data.domains = scpsys->domains; 1206 - scpsys->pd_data.num_domains = soc->num_domains; 1206 + scpsys->pd_data.num_domains = num_domains; 1207 1207 1208 1208 parent = dev->parent; 1209 1209 if (!parent) {
+2
drivers/resctrl/mpam_devices.c
··· 1428 1428 static int mpam_restore_mbwu_state(void *_ris) 1429 1429 { 1430 1430 int i; 1431 + u64 val; 1431 1432 struct mon_read mwbu_arg; 1432 1433 struct mpam_msc_ris *ris = _ris; 1433 1434 struct mpam_class *class = ris->vmsc->comp->class; ··· 1438 1437 mwbu_arg.ris = ris; 1439 1438 mwbu_arg.ctx = &ris->mbwu_state[i].cfg; 1440 1439 mwbu_arg.type = mpam_msmon_choose_counter(class); 1440 + mwbu_arg.val = &val; 1441 1441 1442 1442 __ris_msmon_read(&mwbu_arg); 1443 1443 }
+15 -7
drivers/resctrl/test_mpam_devices.c
··· 322 322 mutex_unlock(&mpam_list_lock); 323 323 } 324 324 325 + static void __test_mpam_reset_msc_bitmap(struct mpam_msc *msc, u16 reg, u16 wd) 326 + { 327 + /* Avoid warnings when running with CONFIG_DEBUG_PREEMPT */ 328 + guard(preempt)(); 329 + 330 + mpam_reset_msc_bitmap(msc, reg, wd); 331 + } 332 + 325 333 static void test_mpam_reset_msc_bitmap(struct kunit *test) 326 334 { 327 - char __iomem *buf = kunit_kzalloc(test, SZ_16K, GFP_KERNEL); 335 + char __iomem *buf = (__force char __iomem *)kunit_kzalloc(test, SZ_16K, GFP_KERNEL); 328 336 struct mpam_msc fake_msc = {}; 329 337 u32 *test_result; 330 338 ··· 347 339 mutex_init(&fake_msc.part_sel_lock); 348 340 mutex_lock(&fake_msc.part_sel_lock); 349 341 350 - test_result = (u32 *)(buf + MPAMCFG_CPBM); 342 + test_result = (__force u32 *)(buf + MPAMCFG_CPBM); 351 343 352 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); 344 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 0); 353 345 KUNIT_EXPECT_EQ(test, test_result[0], 0); 354 346 KUNIT_EXPECT_EQ(test, test_result[1], 0); 355 347 test_result[0] = 0; 356 348 test_result[1] = 0; 357 349 358 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 1); 350 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 1); 359 351 KUNIT_EXPECT_EQ(test, test_result[0], 1); 360 352 KUNIT_EXPECT_EQ(test, test_result[1], 0); 361 353 test_result[0] = 0; 362 354 test_result[1] = 0; 363 355 364 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 16); 356 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 16); 365 357 KUNIT_EXPECT_EQ(test, test_result[0], 0xffff); 366 358 KUNIT_EXPECT_EQ(test, test_result[1], 0); 367 359 test_result[0] = 0; 368 360 test_result[1] = 0; 369 361 370 - mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 32); 362 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 32); 371 363 KUNIT_EXPECT_EQ(test, test_result[0], 0xffffffff); 372 364 KUNIT_EXPECT_EQ(test, test_result[1], 0); 373 365 test_result[0] = 0; 374 366 test_result[1] = 0; 375 367 376 - 
mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 33); 368 + __test_mpam_reset_msc_bitmap(&fake_msc, MPAMCFG_CPBM, 33); 377 369 KUNIT_EXPECT_EQ(test, test_result[0], 0xffffffff); 378 370 KUNIT_EXPECT_EQ(test, test_result[1], 1); 379 371 test_result[0] = 0;
+2 -4
drivers/slimbus/qcom-ngd-ctrl.c
··· 1535 1535 ngd->id = id; 1536 1536 ngd->pdev->dev.parent = parent; 1537 1537 1538 - ret = driver_set_override(&ngd->pdev->dev, 1539 - &ngd->pdev->driver_override, 1540 - QCOM_SLIM_NGD_DRV_NAME, 1541 - strlen(QCOM_SLIM_NGD_DRV_NAME)); 1538 + ret = device_set_driver_override(&ngd->pdev->dev, 1539 + QCOM_SLIM_NGD_DRV_NAME); 1542 1540 if (ret) { 1543 1541 platform_device_put(ngd->pdev); 1544 1542 kfree(ngd);
+7 -39
drivers/spi/spi-amlogic-spifc-a4.c
··· 1083 1083 return clk_set_rate(sfc->core_clk, SFC_BUS_DEFAULT_CLK); 1084 1084 } 1085 1085 1086 - static int aml_sfc_disable_clk(struct aml_sfc *sfc) 1087 - { 1088 - clk_disable_unprepare(sfc->core_clk); 1089 - clk_disable_unprepare(sfc->gate_clk); 1090 - 1091 - return 0; 1092 - } 1093 - 1094 1086 static int aml_sfc_probe(struct platform_device *pdev) 1095 1087 { 1096 1088 struct device_node *np = pdev->dev.of_node; ··· 1133 1141 1134 1142 /* Enable Amlogic flash controller spi mode */ 1135 1143 ret = regmap_write(sfc->regmap_base, SFC_SPI_CFG, SPI_MODE_EN); 1136 - if (ret) { 1137 - dev_err(dev, "failed to enable SPI mode\n"); 1138 - goto err_out; 1139 - } 1144 + if (ret) 1145 + return dev_err_probe(dev, ret, "failed to enable SPI mode\n"); 1140 1146 1141 1147 ret = dma_set_mask(sfc->dev, DMA_BIT_MASK(32)); 1142 - if (ret) { 1143 - dev_err(sfc->dev, "failed to set dma mask\n"); 1144 - goto err_out; 1145 - } 1148 + if (ret) 1149 + return dev_err_probe(sfc->dev, ret, "failed to set dma mask\n"); 1146 1150 1147 1151 sfc->ecc_eng.dev = &pdev->dev; 1148 1152 sfc->ecc_eng.integration = NAND_ECC_ENGINE_INTEGRATION_PIPELINED; ··· 1146 1158 sfc->ecc_eng.priv = sfc; 1147 1159 1148 1160 ret = nand_ecc_register_on_host_hw_engine(&sfc->ecc_eng); 1149 - if (ret) { 1150 - dev_err(&pdev->dev, "failed to register Aml host ecc engine.\n"); 1151 - goto err_out; 1152 - } 1161 + if (ret) 1162 + return dev_err_probe(&pdev->dev, ret, "failed to register Aml host ecc engine.\n"); 1153 1163 1154 1164 ret = of_property_read_u32(np, "amlogic,rx-adj", &val); 1155 1165 if (!ret) ··· 1163 1177 ctrl->min_speed_hz = SFC_MIN_FREQUENCY; 1164 1178 ctrl->num_chipselect = SFC_MAX_CS_NUM; 1165 1179 1166 - ret = devm_spi_register_controller(dev, ctrl); 1167 - if (ret) 1168 - goto err_out; 1169 - 1170 - return 0; 1171 - 1172 - err_out: 1173 - aml_sfc_disable_clk(sfc); 1174 - 1175 - return ret; 1176 - } 1177 - 1178 - static void aml_sfc_remove(struct platform_device *pdev) 1179 - { 1180 - struct 
spi_controller *ctlr = platform_get_drvdata(pdev); 1181 - struct aml_sfc *sfc = spi_controller_get_devdata(ctlr); 1182 - 1183 - aml_sfc_disable_clk(sfc); 1180 + return devm_spi_register_controller(dev, ctrl); 1184 1181 } 1185 1182 1186 1183 static const struct of_device_id aml_sfc_of_match[] = { ··· 1181 1212 .of_match_table = aml_sfc_of_match, 1182 1213 }, 1183 1214 .probe = aml_sfc_probe, 1184 - .remove = aml_sfc_remove, 1185 1215 }; 1186 1216 module_platform_driver(aml_sfc_driver); 1187 1217
+4 -8
drivers/spi/spi-amlogic-spisg.c
··· 729 729 }; 730 730 731 731 if (of_property_read_bool(dev->of_node, "spi-slave")) 732 - ctlr = spi_alloc_target(dev, sizeof(*spisg)); 732 + ctlr = devm_spi_alloc_target(dev, sizeof(*spisg)); 733 733 else 734 - ctlr = spi_alloc_host(dev, sizeof(*spisg)); 734 + ctlr = devm_spi_alloc_host(dev, sizeof(*spisg)); 735 735 if (!ctlr) 736 736 return -ENOMEM; 737 737 ··· 750 750 return dev_err_probe(dev, PTR_ERR(spisg->map), "regmap init failed\n"); 751 751 752 752 irq = platform_get_irq(pdev, 0); 753 - if (irq < 0) { 754 - ret = irq; 755 - goto out_controller; 756 - } 753 + if (irq < 0) 754 + return irq; 757 755 758 756 ret = device_reset_optional(dev); 759 757 if (ret) ··· 815 817 if (spisg->core) 816 818 clk_disable_unprepare(spisg->core); 817 819 clk_disable_unprepare(spisg->pclk); 818 - out_controller: 819 - spi_controller_put(ctlr); 820 820 821 821 return ret; 822 822 }
+11 -20
drivers/spi/spi-axiado.c
··· 765 765 platform_set_drvdata(pdev, ctlr); 766 766 767 767 xspi->regs = devm_platform_ioremap_resource(pdev, 0); 768 - if (IS_ERR(xspi->regs)) { 769 - ret = PTR_ERR(xspi->regs); 770 - goto remove_ctlr; 771 - } 768 + if (IS_ERR(xspi->regs)) 769 + return PTR_ERR(xspi->regs); 772 770 773 771 xspi->pclk = devm_clk_get(&pdev->dev, "pclk"); 774 - if (IS_ERR(xspi->pclk)) { 775 - dev_err(&pdev->dev, "pclk clock not found.\n"); 776 - ret = PTR_ERR(xspi->pclk); 777 - goto remove_ctlr; 778 - } 772 + if (IS_ERR(xspi->pclk)) 773 + return dev_err_probe(&pdev->dev, PTR_ERR(xspi->pclk), 774 + "pclk clock not found.\n"); 779 775 780 776 xspi->ref_clk = devm_clk_get(&pdev->dev, "ref"); 781 - if (IS_ERR(xspi->ref_clk)) { 782 - dev_err(&pdev->dev, "ref clock not found.\n"); 783 - ret = PTR_ERR(xspi->ref_clk); 784 - goto remove_ctlr; 785 - } 777 + if (IS_ERR(xspi->ref_clk)) 778 + return dev_err_probe(&pdev->dev, PTR_ERR(xspi->ref_clk), 779 + "ref clock not found.\n"); 786 780 787 781 ret = clk_prepare_enable(xspi->pclk); 788 - if (ret) { 789 - dev_err(&pdev->dev, "Unable to enable APB clock.\n"); 790 - goto remove_ctlr; 791 - } 782 + if (ret) 783 + return dev_err_probe(&pdev->dev, ret, "Unable to enable APB clock.\n"); 792 784 793 785 ret = clk_prepare_enable(xspi->ref_clk); 794 786 if (ret) { ··· 861 869 clk_disable_unprepare(xspi->ref_clk); 862 870 clk_dis_apb: 863 871 clk_disable_unprepare(xspi->pclk); 864 - remove_ctlr: 865 - spi_controller_put(ctlr); 872 + 866 873 return ret; 867 874 } 868 875
+7 -6
drivers/spi/spi-geni-qcom.c
··· 359 359 writel((spi_slv->mode & SPI_LOOP) ? LOOPBACK_ENABLE : 0, se->base + SE_SPI_LOOPBACK); 360 360 if (cs_changed) 361 361 writel(chipselect, se->base + SE_SPI_DEMUX_SEL); 362 - if (mode_changed & SE_SPI_CPHA) 362 + if (mode_changed & SPI_CPHA) 363 363 writel((spi_slv->mode & SPI_CPHA) ? CPHA : 0, se->base + SE_SPI_CPHA); 364 - if (mode_changed & SE_SPI_CPOL) 364 + if (mode_changed & SPI_CPOL) 365 365 writel((spi_slv->mode & SPI_CPOL) ? CPOL : 0, se->base + SE_SPI_CPOL); 366 366 if ((mode_changed & SPI_CS_HIGH) || (cs_changed && (spi_slv->mode & SPI_CS_HIGH))) 367 367 writel((spi_slv->mode & SPI_CS_HIGH) ? BIT(chipselect) : 0, se->base + SE_SPI_DEMUX_OUTPUT_INV); ··· 906 906 struct spi_controller *spi = data; 907 907 struct spi_geni_master *mas = spi_controller_get_devdata(spi); 908 908 struct geni_se *se = &mas->se; 909 - u32 m_irq; 909 + u32 m_irq, dma_tx_status, dma_rx_status; 910 910 911 911 m_irq = readl(se->base + SE_GENI_M_IRQ_STATUS); 912 - if (!m_irq) 912 + dma_tx_status = readl_relaxed(se->base + SE_DMA_TX_IRQ_STAT); 913 + dma_rx_status = readl_relaxed(se->base + SE_DMA_RX_IRQ_STAT); 914 + 915 + if (!m_irq && !dma_tx_status && !dma_rx_status) 913 916 return IRQ_NONE; 914 917 915 918 if (m_irq & (M_CMD_OVERRUN_EN | M_ILLEGAL_CMD_EN | M_CMD_FAILURE_EN | ··· 960 957 } 961 958 } else if (mas->cur_xfer_mode == GENI_SE_DMA) { 962 959 const struct spi_transfer *xfer = mas->cur_xfer; 963 - u32 dma_tx_status = readl_relaxed(se->base + SE_DMA_TX_IRQ_STAT); 964 - u32 dma_rx_status = readl_relaxed(se->base + SE_DMA_RX_IRQ_STAT); 965 960 966 961 if (dma_tx_status) 967 962 writel(dma_tx_status, se->base + SE_DMA_TX_IRQ_CLR);
+12 -13
drivers/spi/spi.c
··· 3049 3049 struct spi_controller *ctlr; 3050 3050 3051 3051 ctlr = container_of(dev, struct spi_controller, dev); 3052 + 3053 + free_percpu(ctlr->pcpu_statistics); 3052 3054 kfree(ctlr); 3053 3055 } 3054 3056 ··· 3193 3191 ctlr = kzalloc(size + ctlr_size, GFP_KERNEL); 3194 3192 if (!ctlr) 3195 3193 return NULL; 3194 + 3195 + ctlr->pcpu_statistics = spi_alloc_pcpu_stats(NULL); 3196 + if (!ctlr->pcpu_statistics) { 3197 + kfree(ctlr); 3198 + return NULL; 3199 + } 3196 3200 3197 3201 device_initialize(&ctlr->dev); 3198 3202 INIT_LIST_HEAD(&ctlr->queue); ··· 3488 3480 dev_info(dev, "controller is unqueued, this is deprecated\n"); 3489 3481 } else if (ctlr->transfer_one || ctlr->transfer_one_message) { 3490 3482 status = spi_controller_initialize_queue(ctlr); 3491 - if (status) { 3492 - device_del(&ctlr->dev); 3493 - goto free_bus_id; 3494 - } 3495 - } 3496 - /* Add statistics */ 3497 - ctlr->pcpu_statistics = spi_alloc_pcpu_stats(dev); 3498 - if (!ctlr->pcpu_statistics) { 3499 - dev_err(dev, "Error allocating per-cpu statistics\n"); 3500 - status = -ENOMEM; 3501 - goto destroy_queue; 3483 + if (status) 3484 + goto del_ctrl; 3502 3485 } 3503 3486 3504 3487 mutex_lock(&board_lock); ··· 3503 3504 acpi_register_spi_devices(ctlr); 3504 3505 return status; 3505 3506 3506 - destroy_queue: 3507 - spi_destroy_queue(ctlr); 3507 + del_ctrl: 3508 + device_del(&ctlr->dev); 3508 3509 free_bus_id: 3509 3510 mutex_lock(&board_lock); 3510 3511 idr_remove(&spi_controller_idr, ctlr->bus_num);
+25
drivers/tty/serial/8250/8250.h
··· 175 175 return value; 176 176 } 177 177 178 + void serial8250_clear_fifos(struct uart_8250_port *p); 178 179 void serial8250_clear_and_reinit_fifos(struct uart_8250_port *p); 180 + void serial8250_fifo_wait_for_lsr_thre(struct uart_8250_port *up, unsigned int count); 179 181 180 182 void serial8250_rpm_get(struct uart_8250_port *p); 181 183 void serial8250_rpm_put(struct uart_8250_port *p); ··· 402 400 403 401 return dma && dma->tx_running; 404 402 } 403 + 404 + static inline void serial8250_tx_dma_pause(struct uart_8250_port *p) 405 + { 406 + struct uart_8250_dma *dma = p->dma; 407 + 408 + if (!dma->tx_running) 409 + return; 410 + 411 + dmaengine_pause(dma->txchan); 412 + } 413 + 414 + static inline void serial8250_tx_dma_resume(struct uart_8250_port *p) 415 + { 416 + struct uart_8250_dma *dma = p->dma; 417 + 418 + if (!dma->tx_running) 419 + return; 420 + 421 + dmaengine_resume(dma->txchan); 422 + } 405 423 #else 406 424 static inline int serial8250_tx_dma(struct uart_8250_port *p) 407 425 { ··· 443 421 { 444 422 return false; 445 423 } 424 + 425 + static inline void serial8250_tx_dma_pause(struct uart_8250_port *p) { } 426 + static inline void serial8250_tx_dma_resume(struct uart_8250_port *p) { } 446 427 #endif 447 428 448 429 static inline int ns16550a_goto_highspeed(struct uart_8250_port *up)
+15
drivers/tty/serial/8250/8250_dma.c
··· 162 162 */ 163 163 dma->tx_size = 0; 164 164 165 + /* 166 + * We can't use `dmaengine_terminate_sync` because `uart_flush_buffer` is 167 + * holding the uart port spinlock. 168 + */ 165 169 dmaengine_terminate_async(dma->txchan); 170 + 171 + /* 172 + * The callback might or might not run. If it doesn't run, we need to ensure 173 + * that `tx_running` is cleared so that we can schedule new transactions. 174 + * If it does run, then the zombie callback will clear `tx_running` again 175 + * and perform a no-op since `tx_size` was cleared above. 176 + * 177 + * In either case, we ASSUME the DMA transaction will terminate before we 178 + * issue a new `serial8250_tx_dma`. 179 + */ 180 + dma->tx_running = 0; 166 181 } 167 182 168 183 int serial8250_rx_dma(struct uart_8250_port *p)
+239 -65
drivers/tty/serial/8250/8250_dw.c
··· 9 9 * LCR is written whilst busy. If it is, then a busy detect interrupt is 10 10 * raised, the LCR needs to be rewritten and the uart status register read. 11 11 */ 12 + #include <linux/bitfield.h> 13 + #include <linux/bits.h> 14 + #include <linux/cleanup.h> 12 15 #include <linux/clk.h> 13 16 #include <linux/delay.h> 14 17 #include <linux/device.h> 15 18 #include <linux/io.h> 19 + #include <linux/lockdep.h> 16 20 #include <linux/mod_devicetable.h> 17 21 #include <linux/module.h> 18 22 #include <linux/notifier.h> ··· 44 40 #define RZN1_UART_RDMACR 0x110 /* DMA Control Register Receive Mode */ 45 41 46 42 /* DesignWare specific register fields */ 43 + #define DW_UART_IIR_IID GENMASK(3, 0) 44 + 47 45 #define DW_UART_MCR_SIRE BIT(6) 46 + 47 + #define DW_UART_USR_BUSY BIT(0) 48 48 49 49 /* Renesas specific register fields */ 50 50 #define RZN1_UART_xDMACR_DMA_EN BIT(0) ··· 64 56 #define DW_UART_QUIRK_IS_DMA_FC BIT(3) 65 57 #define DW_UART_QUIRK_APMC0D08 BIT(4) 66 58 #define DW_UART_QUIRK_CPR_VALUE BIT(5) 59 + #define DW_UART_QUIRK_IER_KICK BIT(6) 60 + 61 + /* 62 + * Number of consecutive IIR_NO_INT interrupts required to trigger interrupt 63 + * storm prevention code. 64 + */ 65 + #define DW_UART_QUIRK_IER_KICK_THRES 4 67 66 68 67 struct dw8250_platform_data { 69 68 u8 usr_reg; ··· 92 77 93 78 unsigned int skip_autocfg:1; 94 79 unsigned int uart_16550_compatible:1; 80 + unsigned int in_idle:1; 81 + 82 + u8 no_int_count; 95 83 }; 96 84 97 85 static inline struct dw8250_data *to_dw8250_data(struct dw8250_port_data *data) ··· 125 107 return value; 126 108 } 127 109 128 - /* 129 - * This function is being called as part of the uart_port::serial_out() 130 - * routine. Hence, it must not call serial_port_out() or serial_out() 131 - * against the modified registers here, i.e. LCR. 
132 - */ 133 - static void dw8250_force_idle(struct uart_port *p) 110 + static void dw8250_idle_exit(struct uart_port *p) 134 111 { 112 + struct dw8250_data *d = to_dw8250_data(p->private_data); 135 113 struct uart_8250_port *up = up_to_u8250p(p); 136 - unsigned int lsr; 137 114 138 - /* 139 - * The following call currently performs serial_out() 140 - * against the FCR register. Because it differs to LCR 141 - * there will be no infinite loop, but if it ever gets 142 - * modified, we might need a new custom version of it 143 - * that avoids infinite recursion. 144 - */ 145 - serial8250_clear_and_reinit_fifos(up); 115 + if (d->uart_16550_compatible) 116 + return; 146 117 147 - /* 148 - * With PSLVERR_RESP_EN parameter set to 1, the device generates an 149 - * error response when an attempt to read an empty RBR with FIFO 150 - * enabled. 151 - */ 152 - if (up->fcr & UART_FCR_ENABLE_FIFO) { 153 - lsr = serial_port_in(p, UART_LSR); 154 - if (!(lsr & UART_LSR_DR)) 155 - return; 118 + if (up->capabilities & UART_CAP_FIFO) 119 + serial_port_out(p, UART_FCR, up->fcr); 120 + serial_port_out(p, UART_MCR, up->mcr); 121 + serial_port_out(p, UART_IER, up->ier); 122 + 123 + /* DMA Rx is restarted by IRQ handler as needed. */ 124 + if (up->dma) 125 + serial8250_tx_dma_resume(up); 126 + 127 + d->in_idle = 0; 128 + } 129 + 130 + /* 131 + * Ensure BUSY is not asserted. If DW UART is configured with 132 + * !uart_16550_compatible, the writes to LCR, DLL, and DLH fail while 133 + * BUSY is asserted. 134 + * 135 + * Context: port's lock must be held 136 + */ 137 + static int dw8250_idle_enter(struct uart_port *p) 138 + { 139 + struct dw8250_data *d = to_dw8250_data(p->private_data); 140 + unsigned int usr_reg = d->pdata ? 
d->pdata->usr_reg : DW_UART_USR; 141 + struct uart_8250_port *up = up_to_u8250p(p); 142 + int retries; 143 + u32 lsr; 144 + 145 + lockdep_assert_held_once(&p->lock); 146 + 147 + if (d->uart_16550_compatible) 148 + return 0; 149 + 150 + d->in_idle = 1; 151 + 152 + /* Prevent triggering interrupt from RBR filling */ 153 + serial_port_out(p, UART_IER, 0); 154 + 155 + if (up->dma) { 156 + serial8250_rx_dma_flush(up); 157 + if (serial8250_tx_dma_running(up)) 158 + serial8250_tx_dma_pause(up); 156 159 } 157 160 158 - serial_port_in(p, UART_RX); 161 + /* 162 + * Wait until Tx becomes empty + one extra frame time to ensure all bits 163 + * have been sent on the wire. 164 + * 165 + * FIXME: frame_time delay is too long with very low baudrates. 166 + */ 167 + serial8250_fifo_wait_for_lsr_thre(up, p->fifosize); 168 + ndelay(p->frame_time); 169 + 170 + serial_port_out(p, UART_MCR, up->mcr | UART_MCR_LOOP); 171 + 172 + retries = 4; /* Arbitrary limit, 2 was always enough in tests */ 173 + do { 174 + serial8250_clear_fifos(up); 175 + if (!(serial_port_in(p, usr_reg) & DW_UART_USR_BUSY)) 176 + break; 177 + /* FIXME: frame_time delay is too long with very low baudrates. */ 178 + ndelay(p->frame_time); 179 + } while (--retries); 180 + 181 + lsr = serial_lsr_in(up); 182 + if (lsr & UART_LSR_DR) { 183 + serial_port_in(p, UART_RX); 184 + up->lsr_saved_flags = 0; 185 + } 186 + 187 + /* Now guaranteed to have BUSY deasserted? 
Just sanity check */ 188 + if (serial_port_in(p, usr_reg) & DW_UART_USR_BUSY) { 189 + dw8250_idle_exit(p); 190 + return -EBUSY; 191 + } 192 + 193 + return 0; 194 + } 195 + 196 + static void dw8250_set_divisor(struct uart_port *p, unsigned int baud, 197 + unsigned int quot, unsigned int quot_frac) 198 + { 199 + struct uart_8250_port *up = up_to_u8250p(p); 200 + int ret; 201 + 202 + ret = dw8250_idle_enter(p); 203 + if (ret < 0) 204 + return; 205 + 206 + serial_port_out(p, UART_LCR, up->lcr | UART_LCR_DLAB); 207 + if (!(serial_port_in(p, UART_LCR) & UART_LCR_DLAB)) 208 + goto idle_failed; 209 + 210 + serial_dl_write(up, quot); 211 + serial_port_out(p, UART_LCR, up->lcr); 212 + 213 + idle_failed: 214 + dw8250_idle_exit(p); 159 215 } 160 216 161 217 /* 162 218 * This function is being called as part of the uart_port::serial_out() 163 - * routine. Hence, it must not call serial_port_out() or serial_out() 164 - * against the modified registers here, i.e. LCR. 219 + * routine. Hence, special care must be taken when serial_port_out() or 220 + * serial_out() against the modified registers here, i.e. LCR (d->in_idle is 221 + * used to break recursion loop). 
165 222 */ 166 223 static void dw8250_check_lcr(struct uart_port *p, unsigned int offset, u32 value) 167 224 { 168 225 struct dw8250_data *d = to_dw8250_data(p->private_data); 169 - void __iomem *addr = p->membase + (offset << p->regshift); 170 - int tries = 1000; 226 + u32 lcr; 227 + int ret; 171 228 172 229 if (offset != UART_LCR || d->uart_16550_compatible) 173 230 return; 174 231 232 + lcr = serial_port_in(p, UART_LCR); 233 + 175 234 /* Make sure LCR write wasn't ignored */ 176 - while (tries--) { 177 - u32 lcr = serial_port_in(p, offset); 235 + if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 236 + return; 178 237 179 - if ((value & ~UART_LCR_SPAR) == (lcr & ~UART_LCR_SPAR)) 180 - return; 238 + if (d->in_idle) 239 + goto write_err; 181 240 182 - dw8250_force_idle(p); 241 + ret = dw8250_idle_enter(p); 242 + if (ret < 0) 243 + goto write_err; 183 244 184 - #ifdef CONFIG_64BIT 185 - if (p->type == PORT_OCTEON) 186 - __raw_writeq(value & 0xff, addr); 187 - else 188 - #endif 189 - if (p->iotype == UPIO_MEM32) 190 - writel(value, addr); 191 - else if (p->iotype == UPIO_MEM32BE) 192 - iowrite32be(value, addr); 193 - else 194 - writeb(value, addr); 195 - } 245 + serial_port_out(p, UART_LCR, value); 246 + dw8250_idle_exit(p); 247 + return; 248 + 249 + write_err: 196 250 /* 197 251 * FIXME: this deadlocks if port->lock is already held 198 252 * dev_err(p->dev, "Couldn't set LCR to %d\n", value); 199 253 */ 254 + return; /* Silences "label at the end of compound statement" */ 255 + } 256 + 257 + /* 258 + * With BUSY, LCR writes can be very expensive (IRQ + complex retry logic). 259 + * If the write does not change the value of the LCR register, skip it entirely. 
260 + */ 261 + static bool dw8250_can_skip_reg_write(struct uart_port *p, unsigned int offset, u32 value) 262 + { 263 + struct dw8250_data *d = to_dw8250_data(p->private_data); 264 + u32 lcr; 265 + 266 + if (offset != UART_LCR || d->uart_16550_compatible) 267 + return false; 268 + 269 + lcr = serial_port_in(p, offset); 270 + return lcr == value; 200 271 } 201 272 202 273 /* Returns once the transmitter is empty or we run out of retries */ ··· 314 207 315 208 static void dw8250_serial_out(struct uart_port *p, unsigned int offset, u32 value) 316 209 { 210 + if (dw8250_can_skip_reg_write(p, offset, value)) 211 + return; 212 + 317 213 writeb(value, p->membase + (offset << p->regshift)); 318 214 dw8250_check_lcr(p, offset, value); 319 215 } 320 216 321 217 static void dw8250_serial_out38x(struct uart_port *p, unsigned int offset, u32 value) 322 218 { 219 + if (dw8250_can_skip_reg_write(p, offset, value)) 220 + return; 221 + 323 222 /* Allow the TX to drain before we reconfigure */ 324 223 if (offset == UART_LCR) 325 224 dw8250_tx_wait_empty(p); ··· 350 237 351 238 static void dw8250_serial_outq(struct uart_port *p, unsigned int offset, u32 value) 352 239 { 240 + if (dw8250_can_skip_reg_write(p, offset, value)) 241 + return; 242 + 353 243 value &= 0xff; 354 244 __raw_writeq(value, p->membase + (offset << p->regshift)); 355 245 /* Read back to ensure register write ordering. 
*/ ··· 364 248 365 249 static void dw8250_serial_out32(struct uart_port *p, unsigned int offset, u32 value) 366 250 { 251 + if (dw8250_can_skip_reg_write(p, offset, value)) 252 + return; 253 + 367 254 writel(value, p->membase + (offset << p->regshift)); 368 255 dw8250_check_lcr(p, offset, value); 369 256 } ··· 380 261 381 262 static void dw8250_serial_out32be(struct uart_port *p, unsigned int offset, u32 value) 382 263 { 264 + if (dw8250_can_skip_reg_write(p, offset, value)) 265 + return; 266 + 383 267 iowrite32be(value, p->membase + (offset << p->regshift)); 384 268 dw8250_check_lcr(p, offset, value); 385 269 } ··· 394 272 return dw8250_modify_msr(p, offset, value); 395 273 } 396 274 275 + /* 276 + * INTC10EE UART can IRQ storm while reporting IIR_NO_INT. Inducing IIR value 277 + * change has been observed to break the storm. 278 + * 279 + * If Tx is empty (THRE asserted), we use here IER_THRI to cause IIR_NO_INT -> 280 + * IIR_THRI transition. 281 + */ 282 + static void dw8250_quirk_ier_kick(struct uart_port *p) 283 + { 284 + struct uart_8250_port *up = up_to_u8250p(p); 285 + u32 lsr; 286 + 287 + if (up->ier & UART_IER_THRI) 288 + return; 289 + 290 + lsr = serial_lsr_in(up); 291 + if (!(lsr & UART_LSR_THRE)) 292 + return; 293 + 294 + serial_port_out(p, UART_IER, up->ier | UART_IER_THRI); 295 + serial_port_in(p, UART_LCR); /* safe, no side-effects */ 296 + serial_port_out(p, UART_IER, up->ier); 297 + } 397 298 398 299 static int dw8250_handle_irq(struct uart_port *p) 399 300 { ··· 426 281 bool rx_timeout = (iir & 0x3f) == UART_IIR_RX_TIMEOUT; 427 282 unsigned int quirks = d->pdata->quirks; 428 283 unsigned int status; 429 - unsigned long flags; 284 + 285 + guard(uart_port_lock_irqsave)(p); 286 + 287 + switch (FIELD_GET(DW_UART_IIR_IID, iir)) { 288 + case UART_IIR_NO_INT: 289 + if (d->uart_16550_compatible || up->dma) 290 + return 0; 291 + 292 + if (quirks & DW_UART_QUIRK_IER_KICK && 293 + d->no_int_count == (DW_UART_QUIRK_IER_KICK_THRES - 1)) 294 + 
dw8250_quirk_ier_kick(p); 295 + d->no_int_count = (d->no_int_count + 1) % DW_UART_QUIRK_IER_KICK_THRES; 296 + 297 + return 0; 298 + 299 + case UART_IIR_BUSY: 300 + /* Clear the USR */ 301 + serial_port_in(p, d->pdata->usr_reg); 302 + 303 + d->no_int_count = 0; 304 + 305 + return 1; 306 + } 307 + 308 + d->no_int_count = 0; 430 309 431 310 /* 432 311 * There are ways to get Designware-based UARTs into a state where ··· 463 294 * so we limit the workaround only to non-DMA mode. 464 295 */ 465 296 if (!up->dma && rx_timeout) { 466 - uart_port_lock_irqsave(p, &flags); 467 297 status = serial_lsr_in(up); 468 298 469 299 if (!(status & (UART_LSR_DR | UART_LSR_BI))) 470 300 serial_port_in(p, UART_RX); 471 - 472 - uart_port_unlock_irqrestore(p, flags); 473 301 } 474 302 475 303 /* Manually stop the Rx DMA transfer when acting as flow controller */ 476 304 if (quirks & DW_UART_QUIRK_IS_DMA_FC && up->dma && up->dma->rx_running && rx_timeout) { 477 - uart_port_lock_irqsave(p, &flags); 478 305 status = serial_lsr_in(up); 479 - uart_port_unlock_irqrestore(p, flags); 480 306 481 307 if (status & (UART_LSR_DR | UART_LSR_BI)) { 482 308 dw8250_writel_ext(p, RZN1_UART_RDMACR, 0); ··· 479 315 } 480 316 } 481 317 482 - if (serial8250_handle_irq(p, iir)) 483 - return 1; 318 + serial8250_handle_irq_locked(p, iir); 484 319 485 - if ((iir & UART_IIR_BUSY) == UART_IIR_BUSY) { 486 - /* Clear the USR */ 487 - serial_port_in(p, d->pdata->usr_reg); 488 - 489 - return 1; 490 - } 491 - 492 - return 0; 320 + return 1; 493 321 } 494 322 495 323 static void dw8250_clk_work_cb(struct work_struct *work) ··· 683 527 reset_control_assert(data); 684 528 } 685 529 530 + static void dw8250_shutdown(struct uart_port *port) 531 + { 532 + struct dw8250_data *d = to_dw8250_data(port->private_data); 533 + 534 + serial8250_do_shutdown(port); 535 + d->no_int_count = 0; 536 + } 537 + 686 538 static int dw8250_probe(struct platform_device *pdev) 687 539 { 688 540 struct uart_8250_port uart = {}, *up = &uart; ··· 
709 545 p->type = PORT_8250; 710 546 p->flags = UPF_FIXED_PORT; 711 547 p->dev = dev; 548 + 712 549 p->set_ldisc = dw8250_set_ldisc; 713 550 p->set_termios = dw8250_set_termios; 551 + p->set_divisor = dw8250_set_divisor; 714 552 715 553 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 716 554 if (!data) ··· 820 654 dw8250_quirks(p, data); 821 655 822 656 /* If the Busy Functionality is not implemented, don't handle it */ 823 - if (data->uart_16550_compatible) 657 + if (data->uart_16550_compatible) { 824 658 p->handle_irq = NULL; 825 - else if (data->pdata) 659 + } else if (data->pdata) { 826 660 p->handle_irq = dw8250_handle_irq; 661 + p->shutdown = dw8250_shutdown; 662 + } 827 663 828 664 dw8250_setup_dma_filter(p, data); 829 665 ··· 957 789 .quirks = DW_UART_QUIRK_SKIP_SET_RATE, 958 790 }; 959 791 792 + static const struct dw8250_platform_data dw8250_intc10ee = { 793 + .usr_reg = DW_UART_USR, 794 + .quirks = DW_UART_QUIRK_IER_KICK, 795 + }; 796 + 960 797 static const struct of_device_id dw8250_of_match[] = { 961 798 { .compatible = "snps,dw-apb-uart", .data = &dw8250_dw_apb }, 962 799 { .compatible = "cavium,octeon-3860-uart", .data = &dw8250_octeon_3860_data }, ··· 991 818 { "INT33C5", (kernel_ulong_t)&dw8250_dw_apb }, 992 819 { "INT3434", (kernel_ulong_t)&dw8250_dw_apb }, 993 820 { "INT3435", (kernel_ulong_t)&dw8250_dw_apb }, 994 - { "INTC10EE", (kernel_ulong_t)&dw8250_dw_apb }, 821 + { "INTC10EE", (kernel_ulong_t)&dw8250_intc10ee }, 995 822 { }, 996 823 }; 997 824 MODULE_DEVICE_TABLE(acpi, dw8250_acpi_match); ··· 1009 836 1010 837 module_platform_driver(dw8250_platform_driver); 1011 838 839 + MODULE_IMPORT_NS("SERIAL_8250"); 1012 840 MODULE_AUTHOR("Jamie Iles"); 1013 841 MODULE_LICENSE("GPL"); 1014 842 MODULE_DESCRIPTION("Synopsys DesignWare 8250 serial port driver");
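The INTC10EE workaround above kicks IER only after DW_UART_QUIRK_IER_KICK_THRES consecutive IIR_NO_INT interrupts, and any serviced interrupt resets the streak. The counter logic can be modeled in plain C as follows (names are illustrative; the threshold mirrors the value defined in the hunk):

```c
/* Hypothetical model of the no_int_count storm-prevention counter. */
#define MODEL_KICK_THRES 4	/* mirrors DW_UART_QUIRK_IER_KICK_THRES */

/* Called on a spurious (IIR_NO_INT) interrupt; returns nonzero when this
 * occurrence should trigger the IER kick. The counter wraps, so a long
 * storm is kicked repeatedly, once per MODEL_KICK_THRES occurrences. */
static int model_no_int_seen(unsigned char *count)
{
	int kick = (*count == MODEL_KICK_THRES - 1);

	*count = (*count + 1) % MODEL_KICK_THRES;
	return kick;
}

/* Any serviced interrupt breaks the streak. */
static void model_real_int_seen(unsigned char *count)
{
	*count = 0;
}
```

The modulo wrap means the workaround keeps firing for as long as the storm persists, rather than only once.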
+17
drivers/tty/serial/8250/8250_pci.c
··· 137 137 }; 138 138 139 139 #define PCI_DEVICE_ID_HPE_PCI_SERIAL 0x37e 140 + #define PCIE_VENDOR_ID_ASIX 0x125B 141 + #define PCIE_DEVICE_ID_AX99100 0x9100 140 142 141 143 static const struct pci_device_id pci_use_msi[] = { 142 144 { PCI_DEVICE_SUB(PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9900, ··· 151 149 0xA000, 0x1000) }, 152 150 { PCI_DEVICE_SUB(PCI_VENDOR_ID_HP_3PAR, PCI_DEVICE_ID_HPE_PCI_SERIAL, 153 151 PCI_ANY_ID, PCI_ANY_ID) }, 152 + { PCI_DEVICE_SUB(PCIE_VENDOR_ID_ASIX, PCIE_DEVICE_ID_AX99100, 153 + 0xA000, 0x1000) }, 154 154 { } 155 155 }; 156 156 ··· 924 920 case PCI_DEVICE_ID_NETMOS_9912: 925 921 case PCI_DEVICE_ID_NETMOS_9922: 926 922 case PCI_DEVICE_ID_NETMOS_9900: 923 + case PCIE_DEVICE_ID_AX99100: 927 924 num_serial = pci_netmos_9900_numports(dev); 928 925 break; 929 926 ··· 2543 2538 */ 2544 2539 { 2545 2540 .vendor = PCI_VENDOR_ID_NETMOS, 2541 + .device = PCI_ANY_ID, 2542 + .subvendor = PCI_ANY_ID, 2543 + .subdevice = PCI_ANY_ID, 2544 + .init = pci_netmos_init, 2545 + .setup = pci_netmos_9900_setup, 2546 + }, 2547 + { 2548 + .vendor = PCIE_VENDOR_ID_ASIX, 2546 2549 .device = PCI_ANY_ID, 2547 2550 .subvendor = PCI_ANY_ID, 2548 2551 .subdevice = PCI_ANY_ID, ··· 6077 6064 { PCI_VENDOR_ID_NETMOS, PCI_DEVICE_ID_NETMOS_9900, 6078 6065 0xA000, 0x3002, 6079 6066 0, 0, pbn_NETMOS9900_2s_115200 }, 6067 + 6068 + { PCIE_VENDOR_ID_ASIX, PCIE_DEVICE_ID_AX99100, 6069 + 0xA000, 0x1000, 6070 + 0, 0, pbn_b0_1_115200 }, 6080 6071 6081 6072 /* 6082 6073 * Best Connectivity and Rosewill PCI Multi I/O cards
+45 -30
drivers/tty/serial/8250/8250_port.c
··· 18 18 #include <linux/irq.h> 19 19 #include <linux/console.h> 20 20 #include <linux/gpio/consumer.h> 21 + #include <linux/lockdep.h> 21 22 #include <linux/sysrq.h> 22 23 #include <linux/delay.h> 23 24 #include <linux/platform_device.h> ··· 489 488 /* 490 489 * FIFO support. 491 490 */ 492 - static void serial8250_clear_fifos(struct uart_8250_port *p) 491 + void serial8250_clear_fifos(struct uart_8250_port *p) 493 492 { 494 493 if (p->capabilities & UART_CAP_FIFO) { 495 494 serial_out(p, UART_FCR, UART_FCR_ENABLE_FIFO); ··· 498 497 serial_out(p, UART_FCR, 0); 499 498 } 500 499 } 500 + EXPORT_SYMBOL_NS_GPL(serial8250_clear_fifos, "SERIAL_8250"); 501 501 502 502 static enum hrtimer_restart serial8250_em485_handle_start_tx(struct hrtimer *t); 503 503 static enum hrtimer_restart serial8250_em485_handle_stop_tx(struct hrtimer *t); ··· 1784 1782 } 1785 1783 1786 1784 /* 1787 - * This handles the interrupt from one port. 1785 + * Context: port's lock must be held by the caller. 1788 1786 */ 1789 - int serial8250_handle_irq(struct uart_port *port, unsigned int iir) 1787 + void serial8250_handle_irq_locked(struct uart_port *port, unsigned int iir) 1790 1788 { 1791 1789 struct uart_8250_port *up = up_to_u8250p(port); 1792 1790 struct tty_port *tport = &port->state->port; 1793 1791 bool skip_rx = false; 1794 - unsigned long flags; 1795 1792 u16 status; 1796 1793 1797 - if (iir & UART_IIR_NO_INT) 1798 - return 0; 1799 - 1800 - uart_port_lock_irqsave(port, &flags); 1794 + lockdep_assert_held_once(&port->lock); 1801 1795 1802 1796 status = serial_lsr_in(up); 1803 1797 ··· 1826 1828 else if (!up->dma->tx_running) 1827 1829 __stop_tx(up); 1828 1830 } 1831 + } 1832 + EXPORT_SYMBOL_NS_GPL(serial8250_handle_irq_locked, "SERIAL_8250"); 1829 1833 1830 - uart_unlock_and_check_sysrq_irqrestore(port, flags); 1834 + /* 1835 + * This handles the interrupt from one port. 
1836 + */ 1837 + int serial8250_handle_irq(struct uart_port *port, unsigned int iir) 1838 + { 1839 + if (iir & UART_IIR_NO_INT) 1840 + return 0; 1841 + 1842 + guard(uart_port_lock_irqsave)(port); 1843 + serial8250_handle_irq_locked(port, iir); 1831 1844 1832 1845 return 1; 1833 1846 } ··· 2156 2147 if (up->port.flags & UPF_NO_THRE_TEST) 2157 2148 return; 2158 2149 2159 - if (port->irqflags & IRQF_SHARED) 2160 - disable_irq_nosync(port->irq); 2150 + disable_irq(port->irq); 2161 2151 2162 2152 /* 2163 2153 * Test for UARTs that do not reassert THRE when the transmitter is idle and the interrupt ··· 2178 2170 serial_port_out(port, UART_IER, 0); 2179 2171 } 2180 2172 2181 - if (port->irqflags & IRQF_SHARED) 2182 - enable_irq(port->irq); 2173 + enable_irq(port->irq); 2183 2174 2184 2175 /* 2185 2176 * If the interrupt is not reasserted, or we otherwise don't trust the iir, setup a timer to ··· 2357 2350 void serial8250_do_shutdown(struct uart_port *port) 2358 2351 { 2359 2352 struct uart_8250_port *up = up_to_u8250p(port); 2353 + u32 lcr; 2360 2354 2361 2355 serial8250_rpm_get(up); 2362 2356 /* ··· 2384 2376 port->mctrl &= ~TIOCM_OUT2; 2385 2377 2386 2378 serial8250_set_mctrl(port, port->mctrl); 2379 + 2380 + /* Disable break condition */ 2381 + lcr = serial_port_in(port, UART_LCR); 2382 + lcr &= ~UART_LCR_SBC; 2383 + serial_port_out(port, UART_LCR, lcr); 2387 2384 } 2388 2385 2389 - /* 2390 - * Disable break condition and FIFOs 2391 - */ 2392 - serial_port_out(port, UART_LCR, 2393 - serial_port_in(port, UART_LCR) & ~UART_LCR_SBC); 2394 2386 serial8250_clear_fifos(up); 2395 2387 2396 2388 rsa_disable(up); ··· 2400 2392 * the IRQ chain. 2401 2393 */ 2402 2394 serial_port_in(port, UART_RX); 2395 + /* 2396 + * LCR writes on DW UART can trigger late (unmaskable) IRQs. 2397 + * Handle them before releasing the handler. 
2398 + */ 2399 + synchronize_irq(port->irq); 2400 + 2403 2401 serial8250_rpm_put(up); 2404 2402 2405 2403 up->ops->release_irq(up); ··· 3199 3185 } 3200 3186 EXPORT_SYMBOL_GPL(serial8250_set_defaults); 3201 3187 3188 + void serial8250_fifo_wait_for_lsr_thre(struct uart_8250_port *up, unsigned int count) 3189 + { 3190 + unsigned int i; 3191 + 3192 + for (i = 0; i < count; i++) { 3193 + if (wait_for_lsr(up, UART_LSR_THRE)) 3194 + return; 3195 + } 3196 + } 3197 + EXPORT_SYMBOL_NS_GPL(serial8250_fifo_wait_for_lsr_thre, "SERIAL_8250"); 3198 + 3202 3199 #ifdef CONFIG_SERIAL_8250_CONSOLE 3203 3200 3204 3201 static void serial8250_console_putchar(struct uart_port *port, unsigned char ch) ··· 3251 3226 serial8250_out_MCR(up, up->mcr | UART_MCR_DTR | UART_MCR_RTS); 3252 3227 } 3253 3228 3254 - static void fifo_wait_for_lsr(struct uart_8250_port *up, unsigned int count) 3255 - { 3256 - unsigned int i; 3257 - 3258 - for (i = 0; i < count; i++) { 3259 - if (wait_for_lsr(up, UART_LSR_THRE)) 3260 - return; 3261 - } 3262 - } 3263 - 3264 3229 /* 3265 3230 * Print a string to the serial port using the device FIFO 3266 3231 * ··· 3269 3254 3270 3255 while (s != end) { 3271 3256 /* Allow timeout for each byte of a possibly full FIFO */ 3272 - fifo_wait_for_lsr(up, fifosize); 3257 + serial8250_fifo_wait_for_lsr_thre(up, fifosize); 3273 3258 3274 3259 for (i = 0; i < fifosize && s != end; ++i) { 3275 3260 if (*s == '\n' && !cr_sent) { ··· 3287 3272 * Allow timeout for each byte written since the caller will only wait 3288 3273 * for UART_LSR_BOTH_EMPTY using the timeout of a single character 3289 3274 */ 3290 - fifo_wait_for_lsr(up, tx_count); 3275 + serial8250_fifo_wait_for_lsr_thre(up, tx_count); 3291 3276 } 3292 3277 3293 3278 /*
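serial8250_fifo_wait_for_lsr_thre(), now exported above, grants one bounded LSR poll budget per byte that may still sit in the FIFO, so the total wait scales with the FIFO depth instead of relying on a single global timeout. A userspace sketch with a mocked LSR read (names and the 1000-poll budget are illustrative assumptions, not the kernel's values):

```c
#include <stdbool.h>

#define MODEL_LSR_THRE 0x20	/* transmit-holding-register-empty bit */

/* Stands in for wait_for_lsr(): poll the (mocked) LSR a bounded number of
 * times, reporting whether THRE was observed. */
static bool model_wait_one_byte(unsigned int (*read_lsr)(void *ctx), void *ctx)
{
	for (int tries = 0; tries < 1000; tries++) {
		if (read_lsr(ctx) & MODEL_LSR_THRE)
			return true;
	}
	return false;
}

/* One poll budget per FIFO slot: total wait scales with fifosize. */
static void model_fifo_wait_for_thre(unsigned int (*read_lsr)(void *ctx),
				     void *ctx, unsigned int fifosize)
{
	for (unsigned int i = 0; i < fifosize; i++) {
		if (model_wait_one_byte(read_lsr, ctx))
			return;	/* transmitter drained: done early */
	}
	/* Budget exhausted; the caller proceeds regardless. */
}

/* Demo mock: LSR reports THRE only after a given number of polls. */
struct model_mock_lsr {
	unsigned int polls;
	unsigned int thre_after;
};

static unsigned int model_mock_read_lsr(void *ctx)
{
	struct model_mock_lsr *m = ctx;
	return (++m->polls >= m->thre_after) ? MODEL_LSR_THRE : 0;
}
```

The early return when THRE is seen is what keeps the common (already-drained) case cheap.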
+4 -1
drivers/tty/serial/serial_core.c
··· 643 643 unsigned int ret; 644 644 645 645 port = uart_port_ref_lock(state, &flags); 646 - ret = kfifo_avail(&state->port.xmit_fifo); 646 + if (!state->port.xmit_buf) 647 + ret = 0; 648 + else 649 + ret = kfifo_avail(&state->port.xmit_fifo); 647 650 uart_port_unlock_deref(port, flags); 648 651 return ret; 649 652 }
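The uart_write_room() fix above reports zero room once the xmit buffer has been torn down, instead of querying a kfifo whose backing store is gone. The guard reduces to the following hypothetical userspace model (not the kernel structures):

```c
#include <stddef.h>

/* Hypothetical model of the NULL-buffer guard in uart_write_room(). */
struct model_port {
	unsigned char *xmit_buf;	/* NULL after the port is shut down */
	size_t capacity;
	size_t used;
};

static size_t model_write_room(const struct model_port *port)
{
	if (!port->xmit_buf)
		return 0;	/* no backing buffer: nothing can be queued */
	return port->capacity - port->used;	/* stands in for kfifo_avail() */
}
```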
+1
drivers/tty/serial/uartlite.c
··· 878 878 pm_runtime_use_autosuspend(&pdev->dev); 879 879 pm_runtime_set_autosuspend_delay(&pdev->dev, UART_AUTOSUSPEND_TIMEOUT); 880 880 pm_runtime_set_active(&pdev->dev); 881 + pm_runtime_get_noresume(&pdev->dev); 881 882 pm_runtime_enable(&pdev->dev); 882 883 883 884 ret = ulite_assign(&pdev->dev, id, res->start, irq, pdata);
+8
drivers/tty/vt/vt.c
··· 1339 1339 kfree(vc->vc_saved_screen); 1340 1340 vc->vc_saved_screen = NULL; 1341 1341 } 1342 + vc_uniscr_free(vc->vc_saved_uni_lines); 1343 + vc->vc_saved_uni_lines = NULL; 1342 1344 } 1343 1345 return vc; 1344 1346 } ··· 1886 1884 vc->vc_saved_screen = kmemdup((u16 *)vc->vc_origin, size, GFP_KERNEL); 1887 1885 if (vc->vc_saved_screen == NULL) 1888 1886 return; 1887 + vc->vc_saved_uni_lines = vc->vc_uni_lines; 1888 + vc->vc_uni_lines = NULL; 1889 1889 vc->vc_saved_rows = vc->vc_rows; 1890 1890 vc->vc_saved_cols = vc->vc_cols; 1891 1891 save_cur(vc); ··· 1909 1905 dest = ((u16 *)vc->vc_origin) + r * vc->vc_cols; 1910 1906 memcpy(dest, src, 2 * cols); 1911 1907 } 1908 + vc_uniscr_set(vc, vc->vc_saved_uni_lines); 1909 + vc->vc_saved_uni_lines = NULL; 1912 1910 restore_cur(vc); 1913 1911 /* Update the entire screen */ 1914 1912 if (con_should_update(vc)) ··· 2233 2227 if (vc->vc_saved_screen != NULL) { 2234 2228 kfree(vc->vc_saved_screen); 2235 2229 vc->vc_saved_screen = NULL; 2230 + vc_uniscr_free(vc->vc_saved_uni_lines); 2231 + vc->vc_saved_uni_lines = NULL; 2236 2232 vc->vc_saved_rows = 0; 2237 2233 vc->vc_saved_cols = 0; 2238 2234 }
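The vt change above moves ownership of the unicode screen lines into vc_saved_uni_lines on save and back on restore, with every teardown path freeing whichever side currently owns the data. The ownership transfer can be sketched as follows (a hypothetical model, not the kernel structures):

```c
#include <stdlib.h>

/* Hypothetical model: exactly one of the two pointers owns the allocation
 * at any time, so reset paths may free both unconditionally. */
struct model_console {
	char *uni_lines;	/* live unicode data, or NULL */
	char *saved_uni_lines;	/* parked while the screen is swapped out */
};

static void model_save(struct model_console *vc)
{
	vc->saved_uni_lines = vc->uni_lines;	/* transfer ownership */
	vc->uni_lines = NULL;
}

static void model_restore(struct model_console *vc)
{
	vc->uni_lines = vc->saved_uni_lines;	/* transfer it back */
	vc->saved_uni_lines = NULL;
}

static void model_reset(struct model_console *vc)
{
	free(vc->uni_lines);		/* free(NULL) is a no-op, so both */
	free(vc->saved_uni_lines);	/* calls are safe for either owner */
	vc->uni_lines = NULL;
	vc->saved_uni_lines = NULL;
}
```

Moving the pointer (rather than copying the data) is what guarantees there is never a moment with two owners to leak or double-free.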
+5 -5
drivers/virtio/virtio_ring.c
··· 2912 2912 * @data: the token identifying the buffer. 2913 2913 * @gfp: how to do memory allocations (if necessary). 2914 2914 * 2915 - * Same as virtqueue_add_inbuf but passes DMA_ATTR_CPU_CACHE_CLEAN to indicate 2916 - * that the CPU will not dirty any cacheline overlapping this buffer while it 2917 - * is available, and to suppress overlapping cacheline warnings in DMA debug 2918 - * builds. 2915 + * Same as virtqueue_add_inbuf but passes DMA_ATTR_DEBUGGING_IGNORE_CACHELINES 2916 + * to indicate that the CPU will not dirty any cacheline overlapping this buffer 2917 + * while it is available, and to suppress overlapping cacheline warnings in DMA 2918 + * debug builds. 2919 2919 * 2920 2920 * Caller must ensure we don't call this with other virtqueue operations 2921 2921 * at the same time (except where noted). ··· 2928 2928 gfp_t gfp) 2929 2929 { 2930 2930 return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp, 2931 - DMA_ATTR_CPU_CACHE_CLEAN); 2931 + DMA_ATTR_DEBUGGING_IGNORE_CACHELINES); 2932 2932 } 2933 2933 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_cache_clean); 2934 2934
+70 -3
drivers/xen/privcmd.c
··· 12 12 #include <linux/eventfd.h> 13 13 #include <linux/file.h> 14 14 #include <linux/kernel.h> 15 + #include <linux/kstrtox.h> 15 16 #include <linux/module.h> 16 17 #include <linux/mutex.h> 17 18 #include <linux/poll.h> ··· 31 30 #include <linux/seq_file.h> 32 31 #include <linux/miscdevice.h> 33 32 #include <linux/moduleparam.h> 33 + #include <linux/notifier.h> 34 + #include <linux/security.h> 34 35 #include <linux/virtio_mmio.h> 36 + #include <linux/wait.h> 35 37 36 38 #include <asm/xen/hypervisor.h> 37 39 #include <asm/xen/hypercall.h> ··· 50 46 #include <xen/page.h> 51 47 #include <xen/xen-ops.h> 52 48 #include <xen/balloon.h> 49 + #include <xen/xenbus.h> 53 50 #ifdef CONFIG_XEN_ACPI 54 51 #include <xen/acpi.h> 55 52 #endif ··· 73 68 MODULE_PARM_DESC(dm_op_buf_max_size, 74 69 "Maximum size of a dm_op hypercall buffer"); 75 70 71 + static bool unrestricted; 72 + module_param(unrestricted, bool, 0); 73 + MODULE_PARM_DESC(unrestricted, 74 + "Don't restrict hypercalls to target domain if running in a domU"); 75 + 76 76 struct privcmd_data { 77 77 domid_t domid; 78 78 }; 79 + 80 + /* DOMID_INVALID implies no restriction */ 81 + static domid_t target_domain = DOMID_INVALID; 82 + static bool restrict_wait; 83 + static DECLARE_WAIT_QUEUE_HEAD(restrict_wait_wq); 79 84 80 85 static int privcmd_vma_range_is_mapped( 81 86 struct vm_area_struct *vma, ··· 1578 1563 1579 1564 static int privcmd_open(struct inode *ino, struct file *file) 1580 1565 { 1581 - struct privcmd_data *data = kzalloc_obj(*data); 1566 + struct privcmd_data *data; 1582 1567 1568 + if (wait_event_interruptible(restrict_wait_wq, !restrict_wait) < 0) 1569 + return -EINTR; 1570 + 1571 + data = kzalloc_obj(*data); 1583 1572 if (!data) 1584 1573 return -ENOMEM; 1585 1574 1586 - /* DOMID_INVALID implies no restriction */ 1587 - data->domid = DOMID_INVALID; 1575 + data->domid = target_domain; 1588 1576 1589 1577 file->private_data = data; 1590 1578 return 0; ··· 1680 1662 .fops = &xen_privcmd_fops, 1681 1663 
}; 1682 1664 1665 + static int init_restrict(struct notifier_block *notifier, 1666 + unsigned long event, 1667 + void *data) 1668 + { 1669 + char *target; 1670 + unsigned int domid; 1671 + 1672 + /* Default to a guaranteed unused domain-id. */ 1673 + target_domain = DOMID_IDLE; 1674 + 1675 + target = xenbus_read(XBT_NIL, "target", "", NULL); 1676 + if (IS_ERR(target) || kstrtouint(target, 10, &domid)) { 1677 + pr_err("No target domain found, blocking all hypercalls\n"); 1678 + goto out; 1679 + } 1680 + 1681 + target_domain = domid; 1682 + 1683 + out: 1684 + if (!IS_ERR(target)) 1685 + kfree(target); 1686 + 1687 + restrict_wait = false; 1688 + wake_up_all(&restrict_wait_wq); 1689 + 1690 + return NOTIFY_DONE; 1691 + } 1692 + 1693 + static struct notifier_block xenstore_notifier = { 1694 + .notifier_call = init_restrict, 1695 + }; 1696 + 1697 + static void __init restrict_driver(void) 1698 + { 1699 + if (unrestricted) { 1700 + if (security_locked_down(LOCKDOWN_XEN_USER_ACTIONS)) 1701 + pr_warn("Kernel is locked down, parameter \"unrestricted\" ignored\n"); 1702 + else 1703 + return; 1704 + } 1705 + 1706 + restrict_wait = true; 1707 + 1708 + register_xenstore_notifier(&xenstore_notifier); 1709 + } 1710 + 1683 1711 static int __init privcmd_init(void) 1684 1712 { 1685 1713 int err; 1686 1714 1687 1715 if (!xen_domain()) 1688 1716 return -ENODEV; 1717 + 1718 + if (!xen_initial_domain()) 1719 + restrict_driver(); 1689 1720 1690 1721 err = misc_register(&privcmd_dev); 1691 1722 if (err != 0) {
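init_restrict() above parses the xenstore "target" node and, when the node is missing or malformed, deliberately falls back to a guaranteed-unused domain id so hypercalls are blocked rather than left unrestricted (fail closed). A userspace sketch of that parse-with-fallback (MODEL_DOMID_IDLE and the range bound are illustrative stand-ins for Xen's DOMID_* constants):

```c
#include <errno.h>
#include <stdlib.h>

#define MODEL_DOMID_IDLE 0x7FFFU	/* stand-in for a guaranteed unused id */
#define MODEL_DOMID_MAX  0x7FEFU	/* largest assignable domain id (assumed) */

/* Fail closed: any parse problem yields the unused id, blocking hypercalls. */
static unsigned int model_parse_target(const char *target)
{
	char *end;
	unsigned long domid;

	if (!target || !*target)
		return MODEL_DOMID_IDLE;	/* node missing or empty */

	errno = 0;
	domid = strtoul(target, &end, 10);	/* stands in for kstrtouint() */
	if (errno || *end != '\0' || domid > MODEL_DOMID_MAX)
		return MODEL_DOMID_IDLE;	/* malformed or out of range */

	return (unsigned int)domid;
}
```

Defaulting to the blocked state before attempting the read means an early error path cannot accidentally leave the driver unrestricted.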
+6
fs/binfmt_elf_fdpic.c
··· 595 595 #ifdef ELF_HWCAP2 596 596 nitems++; 597 597 #endif 598 + #ifdef ELF_HWCAP3 599 + nitems++; 600 + #endif 601 + #ifdef ELF_HWCAP4 602 + nitems++; 603 + #endif 598 604 599 605 csp = sp; 600 606 sp -= nitems * 2 * sizeof(unsigned long);
+28
fs/btrfs/backref.c
··· 1393 1393 .indirect_missing_keys = PREFTREE_INIT 1394 1394 }; 1395 1395 1396 + if (unlikely(!root)) { 1397 + btrfs_err(ctx->fs_info, 1398 + "missing extent root for extent at bytenr %llu", 1399 + ctx->bytenr); 1400 + return -EUCLEAN; 1401 + } 1402 + 1396 1403 /* Roots ulist is not needed when using a sharedness check context. */ 1397 1404 if (sc) 1398 1405 ASSERT(ctx->roots == NULL); ··· 2211 2204 struct btrfs_extent_item *ei; 2212 2205 struct btrfs_key key; 2213 2206 2207 + if (unlikely(!extent_root)) { 2208 + btrfs_err(fs_info, 2209 + "missing extent root for extent at bytenr %llu", 2210 + logical); 2211 + return -EUCLEAN; 2212 + } 2213 + 2214 2214 key.objectid = logical; 2215 2215 if (btrfs_fs_incompat(fs_info, SKINNY_METADATA)) 2216 2216 key.type = BTRFS_METADATA_ITEM_KEY; ··· 2865 2851 struct btrfs_key key; 2866 2852 int ret; 2867 2853 2854 + if (unlikely(!extent_root)) { 2855 + btrfs_err(fs_info, 2856 + "missing extent root for extent at bytenr %llu", 2857 + bytenr); 2858 + return -EUCLEAN; 2859 + } 2860 + 2868 2861 key.objectid = bytenr; 2869 2862 key.type = BTRFS_METADATA_ITEM_KEY; 2870 2863 key.offset = (u64)-1; ··· 3008 2987 3009 2988 /* We're at keyed items, there is no inline item, go to the next one */ 3010 2989 extent_root = btrfs_extent_root(iter->fs_info, iter->bytenr); 2990 + if (unlikely(!extent_root)) { 2991 + btrfs_err(iter->fs_info, 2992 + "missing extent root for extent at bytenr %llu", 2993 + iter->bytenr); 2994 + return -EUCLEAN; 2995 + } 2996 + 3011 2997 ret = btrfs_next_item(extent_root, iter->path); 3012 2998 if (ret) 3013 2999 return ret;
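The backref.c hunks above (and the block-group.c ones that follow) all add the same defensive shape: validate the root returned by btrfs_extent_root() and propagate -EUCLEAN (corruption) instead of dereferencing NULL. Reduced to a hedged sketch (MODEL_EUCLEAN mirrors Linux's EUCLEAN value of 117; the lookup body is a placeholder, not btrfs code):

```c
#include <stddef.h>

#define MODEL_EUCLEAN 117	/* "Structure needs cleaning" */

struct model_root { int dummy; };

/* Mirrors the "if (unlikely(!root)) return -EUCLEAN;" guard. */
static int model_check_root(const struct model_root *root)
{
	return root ? 0 : -MODEL_EUCLEAN;
}

static int model_lookup_extent(const struct model_root *extent_root,
			       unsigned long long bytenr, int *found)
{
	int ret = model_check_root(extent_root);

	if (ret < 0)
		return ret;	/* report corruption instead of crashing */

	*found = (bytenr != 0);	/* placeholder for the real btree search */
	return 0;
}
```

Turning a would-be NULL dereference into a well-defined error code lets callers unwind cleanly and surface the corruption to the user.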
+36
fs/btrfs/block-group.c
··· 739 739 740 740 last = max_t(u64, block_group->start, BTRFS_SUPER_INFO_OFFSET); 741 741 extent_root = btrfs_extent_root(fs_info, last); 742 + if (unlikely(!extent_root)) { 743 + btrfs_err(fs_info, 744 + "missing extent root for block group at offset %llu", 745 + block_group->start); 746 + return -EUCLEAN; 747 + } 742 748 743 749 #ifdef CONFIG_BTRFS_DEBUG 744 750 /* ··· 1067 1061 int ret; 1068 1062 1069 1063 root = btrfs_block_group_root(fs_info); 1064 + if (unlikely(!root)) { 1065 + btrfs_err(fs_info, "missing block group root"); 1066 + return -EUCLEAN; 1067 + } 1068 + 1070 1069 key.objectid = block_group->start; 1071 1070 key.type = BTRFS_BLOCK_GROUP_ITEM_KEY; 1072 1071 key.offset = block_group->length; ··· 1359 1348 struct btrfs_root *root = btrfs_block_group_root(fs_info); 1360 1349 struct btrfs_chunk_map *map; 1361 1350 unsigned int num_items; 1351 + 1352 + if (unlikely(!root)) { 1353 + btrfs_err(fs_info, "missing block group root"); 1354 + return ERR_PTR(-EUCLEAN); 1355 + } 1362 1356 1363 1357 map = btrfs_find_chunk_map(fs_info, chunk_offset, 1); 1364 1358 ASSERT(map != NULL); ··· 2156 2140 int ret; 2157 2141 struct btrfs_key found_key; 2158 2142 2143 + if (unlikely(!root)) { 2144 + btrfs_err(fs_info, "missing block group root"); 2145 + return -EUCLEAN; 2146 + } 2147 + 2159 2148 btrfs_for_each_slot(root, key, &found_key, path, ret) { 2160 2149 if (found_key.objectid >= key->objectid && 2161 2150 found_key.type == BTRFS_BLOCK_GROUP_ITEM_KEY) { ··· 2734 2713 size_t size; 2735 2714 int ret; 2736 2715 2716 + if (unlikely(!root)) { 2717 + btrfs_err(fs_info, "missing block group root"); 2718 + return -EUCLEAN; 2719 + } 2720 + 2737 2721 spin_lock(&block_group->lock); 2738 2722 btrfs_set_stack_block_group_v2_used(&bgi, block_group->used); 2739 2723 btrfs_set_stack_block_group_v2_chunk_objectid(&bgi, block_group->global_root_id); ··· 3074 3048 int ret; 3075 3049 bool dirty_bg_running; 3076 3050 3051 + if (unlikely(!root)) { 3052 + btrfs_err(fs_info, "missing block 
group root"); 3053 + return -EUCLEAN; 3054 + } 3055 + 3077 3056 /* 3078 3057 * This can only happen when we are doing read-only scrub on read-only 3079 3058 * mount. ··· 3222 3191 u32 old_last_identity_remap_count; 3223 3192 u64 used, remap_bytes; 3224 3193 u32 identity_remap_count; 3194 + 3195 + if (unlikely(!root)) { 3196 + btrfs_err(fs_info, "missing block group root"); 3197 + return -EUCLEAN; 3198 + } 3225 3199 3226 3200 /* 3227 3201 * Block group items update can be triggered out of commit transaction
+8 -3
fs/btrfs/compression.c
··· 320 320 321 321 ASSERT(IS_ALIGNED(ordered->file_offset, fs_info->sectorsize)); 322 322 ASSERT(IS_ALIGNED(ordered->num_bytes, fs_info->sectorsize)); 323 - ASSERT(cb->writeback); 323 + /* 324 + * This flag determines if we should clear the writeback flag from the 325 + * page cache. But this function is only utilized by encoded writes, it 326 + * never goes through the page cache. 327 + */ 328 + ASSERT(!cb->writeback); 324 329 325 330 cb->start = ordered->file_offset; 326 331 cb->len = ordered->num_bytes; 332 + ASSERT(cb->bbio.bio.bi_iter.bi_size == ordered->disk_num_bytes); 327 333 cb->compressed_len = ordered->disk_num_bytes; 328 334 cb->bbio.bio.bi_iter.bi_sector = ordered->disk_bytenr >> SECTOR_SHIFT; 329 335 cb->bbio.ordered = ordered; ··· 351 345 cb = alloc_compressed_bio(inode, start, REQ_OP_WRITE, end_bbio_compressed_write); 352 346 cb->start = start; 353 347 cb->len = len; 354 - cb->writeback = true; 355 - 348 + cb->writeback = false; 356 349 return cb; 357 350 } 358 351
+17 -3
fs/btrfs/disk-io.c
··· 1591 1591 * this will bump the backup pointer by one when it is 1592 1592 * done 1593 1593 */ 1594 - static void backup_super_roots(struct btrfs_fs_info *info) 1594 + static int backup_super_roots(struct btrfs_fs_info *info) 1595 1595 { 1596 1596 const int next_backup = info->backup_root_index; 1597 1597 struct btrfs_root_backup *root_backup; ··· 1622 1622 if (!btrfs_fs_incompat(info, EXTENT_TREE_V2)) { 1623 1623 struct btrfs_root *extent_root = btrfs_extent_root(info, 0); 1624 1624 struct btrfs_root *csum_root = btrfs_csum_root(info, 0); 1625 + 1626 + if (unlikely(!extent_root)) { 1627 + btrfs_err(info, "missing extent root for extent at bytenr 0"); 1628 + return -EUCLEAN; 1629 + } 1630 + if (unlikely(!csum_root)) { 1631 + btrfs_err(info, "missing csum root for extent at bytenr 0"); 1632 + return -EUCLEAN; 1633 + } 1625 1634 1626 1635 btrfs_set_backup_extent_root(root_backup, 1627 1636 extent_root->node->start); ··· 1679 1670 memcpy(&info->super_copy->super_roots, 1680 1671 &info->super_for_commit->super_roots, 1681 1672 sizeof(*root_backup) * BTRFS_NUM_BACKUP_ROOTS); 1673 + 1674 + return 0; 1682 1675 } 1683 1676 1684 1677 /* ··· 4062 4051 * not from fsync where the tree roots in fs_info have not 4063 4052 * been consistent on disk. 4064 4053 */ 4065 - if (max_mirrors == 0) 4066 - backup_super_roots(fs_info); 4054 + if (max_mirrors == 0) { 4055 + ret = backup_super_roots(fs_info); 4056 + if (ret < 0) 4057 + return ret; 4058 + } 4067 4059 4068 4060 sb = fs_info->super_for_commit; 4069 4061 dev_item = &sb->dev_item;
+93 -5
fs/btrfs/extent-tree.c
··· 75 75 struct btrfs_key key; 76 76 BTRFS_PATH_AUTO_FREE(path); 77 77 78 + if (unlikely(!root)) { 79 + btrfs_err(fs_info, 80 + "missing extent root for extent at bytenr %llu", start); 81 + return -EUCLEAN; 82 + } 83 + 78 84 path = btrfs_alloc_path(); 79 85 if (!path) 80 86 return -ENOMEM; ··· 137 131 key.offset = offset; 138 132 139 133 extent_root = btrfs_extent_root(fs_info, bytenr); 134 + if (unlikely(!extent_root)) { 135 + btrfs_err(fs_info, 136 + "missing extent root for extent at bytenr %llu", bytenr); 137 + return -EUCLEAN; 138 + } 139 + 140 140 ret = btrfs_search_slot(NULL, extent_root, &key, path, 0, 0); 141 141 if (ret < 0) 142 142 return ret; ··· 448 436 int recow; 449 437 int ret; 450 438 439 + if (unlikely(!root)) { 440 + btrfs_err(trans->fs_info, 441 + "missing extent root for extent at bytenr %llu", bytenr); 442 + return -EUCLEAN; 443 + } 444 + 451 445 key.objectid = bytenr; 452 446 if (parent) { 453 447 key.type = BTRFS_SHARED_DATA_REF_KEY; ··· 527 509 u32 size; 528 510 u32 num_refs; 529 511 int ret; 512 + 513 + if (unlikely(!root)) { 514 + btrfs_err(trans->fs_info, 515 + "missing extent root for extent at bytenr %llu", bytenr); 516 + return -EUCLEAN; 517 + } 530 518 531 519 key.objectid = bytenr; 532 520 if (node->parent) { ··· 692 668 struct btrfs_key key; 693 669 int ret; 694 670 671 + if (unlikely(!root)) { 672 + btrfs_err(trans->fs_info, 673 + "missing extent root for extent at bytenr %llu", bytenr); 674 + return -EUCLEAN; 675 + } 676 + 695 677 key.objectid = bytenr; 696 678 if (parent) { 697 679 key.type = BTRFS_SHARED_BLOCK_REF_KEY; ··· 721 691 struct btrfs_root *root = btrfs_extent_root(trans->fs_info, bytenr); 722 692 struct btrfs_key key; 723 693 int ret; 694 + 695 + if (unlikely(!root)) { 696 + btrfs_err(trans->fs_info, 697 + "missing extent root for extent at bytenr %llu", bytenr); 698 + return -EUCLEAN; 699 + } 724 700 725 701 key.objectid = bytenr; 726 702 if (node->parent) { ··· 817 781 int ret; 818 782 bool skinny_metadata = 
btrfs_fs_incompat(fs_info, SKINNY_METADATA); 819 783 int needed; 784 + 785 + if (unlikely(!root)) { 786 + btrfs_err(fs_info, 787 + "missing extent root for extent at bytenr %llu", bytenr); 788 + return -EUCLEAN; 789 + } 820 790 821 791 key.objectid = bytenr; 822 792 key.type = BTRFS_EXTENT_ITEM_KEY; ··· 1722 1680 } 1723 1681 1724 1682 root = btrfs_extent_root(fs_info, key.objectid); 1683 + if (unlikely(!root)) { 1684 + btrfs_err(fs_info, 1685 + "missing extent root for extent at bytenr %llu", 1686 + key.objectid); 1687 + return -EUCLEAN; 1688 + } 1725 1689 again: 1726 1690 ret = btrfs_search_slot(trans, root, &key, path, 0, 1); 1727 1691 if (ret < 0) { ··· 1974 1926 struct btrfs_root *csum_root; 1975 1927 1976 1928 csum_root = btrfs_csum_root(fs_info, head->bytenr); 1977 - ret = btrfs_del_csums(trans, csum_root, head->bytenr, 1978 - head->num_bytes); 1929 + if (unlikely(!csum_root)) { 1930 + btrfs_err(fs_info, 1931 + "missing csum root for extent at bytenr %llu", 1932 + head->bytenr); 1933 + ret = -EUCLEAN; 1934 + } else { 1935 + ret = btrfs_del_csums(trans, csum_root, head->bytenr, 1936 + head->num_bytes); 1937 + } 1979 1938 } 1980 1939 } 1981 1940 ··· 2433 2378 u32 expected_size; 2434 2379 int type; 2435 2380 int ret; 2381 + 2382 + if (unlikely(!extent_root)) { 2383 + btrfs_err(fs_info, 2384 + "missing extent root for extent at bytenr %llu", bytenr); 2385 + return -EUCLEAN; 2386 + } 2436 2387 2437 2388 key.objectid = bytenr; 2438 2389 key.type = BTRFS_EXTENT_ITEM_KEY; ··· 3154 3093 struct btrfs_root *csum_root; 3155 3094 3156 3095 csum_root = btrfs_csum_root(trans->fs_info, bytenr); 3096 + if (unlikely(!csum_root)) { 3097 + ret = -EUCLEAN; 3098 + btrfs_abort_transaction(trans, ret); 3099 + btrfs_err(trans->fs_info, 3100 + "missing csum root for extent at bytenr %llu", 3101 + bytenr); 3102 + return ret; 3103 + } 3104 + 3157 3105 ret = btrfs_del_csums(trans, csum_root, bytenr, num_bytes); 3158 3106 if (unlikely(ret)) { 3159 3107 btrfs_abort_transaction(trans, ret); 
··· 3292 3222 u64 delayed_ref_root = href->owning_root; 3293 3223 3294 3224 extent_root = btrfs_extent_root(info, bytenr); 3295 - ASSERT(extent_root); 3225 + if (unlikely(!extent_root)) { 3226 + btrfs_err(info, 3227 + "missing extent root for extent at bytenr %llu", bytenr); 3228 + return -EUCLEAN; 3229 + } 3296 3230 3297 3231 path = btrfs_alloc_path(); 3298 3232 if (!path) ··· 5013 4939 size += btrfs_extent_inline_ref_size(BTRFS_EXTENT_OWNER_REF_KEY); 5014 4940 size += btrfs_extent_inline_ref_size(type); 5015 4941 4942 + extent_root = btrfs_extent_root(fs_info, ins->objectid); 4943 + if (unlikely(!extent_root)) { 4944 + btrfs_err(fs_info, 4945 + "missing extent root for extent at bytenr %llu", 4946 + ins->objectid); 4947 + return -EUCLEAN; 4948 + } 4949 + 5016 4950 path = btrfs_alloc_path(); 5017 4951 if (!path) 5018 4952 return -ENOMEM; 5019 4953 5020 - extent_root = btrfs_extent_root(fs_info, ins->objectid); 5021 4954 ret = btrfs_insert_empty_item(trans, extent_root, path, ins, size); 5022 4955 if (ret) { 5023 4956 btrfs_free_path(path); ··· 5100 5019 size += sizeof(*block_info); 5101 5020 } 5102 5021 5022 + extent_root = btrfs_extent_root(fs_info, extent_key.objectid); 5023 + if (unlikely(!extent_root)) { 5024 + btrfs_err(fs_info, 5025 + "missing extent root for extent at bytenr %llu", 5026 + extent_key.objectid); 5027 + return -EUCLEAN; 5028 + } 5029 + 5103 5030 path = btrfs_alloc_path(); 5104 5031 if (!path) 5105 5032 return -ENOMEM; 5106 5033 5107 - extent_root = btrfs_extent_root(fs_info, extent_key.objectid); 5108 5034 ret = btrfs_insert_empty_item(trans, extent_root, path, &extent_key, 5109 5035 size); 5110 5036 if (ret) {
+7
fs/btrfs/file-item.c
··· 308 308 /* Current item doesn't contain the desired range, search again */ 309 309 btrfs_release_path(path); 310 310 csum_root = btrfs_csum_root(fs_info, disk_bytenr); 311 + if (unlikely(!csum_root)) { 312 + btrfs_err(fs_info, 313 + "missing csum root for extent at bytenr %llu", 314 + disk_bytenr); 315 + return -EUCLEAN; 316 + } 317 + 311 318 item = btrfs_lookup_csum(NULL, csum_root, path, disk_bytenr, 0); 312 319 if (IS_ERR(item)) { 313 320 ret = PTR_ERR(item);
+8 -1
fs/btrfs/free-space-tree.c
··· 1073 1073 if (ret) 1074 1074 return ret; 1075 1075 1076 + extent_root = btrfs_extent_root(trans->fs_info, block_group->start); 1077 + if (unlikely(!extent_root)) { 1078 + btrfs_err(trans->fs_info, 1079 + "missing extent root for block group at offset %llu", 1080 + block_group->start); 1081 + return -EUCLEAN; 1082 + } 1083 + 1076 1084 mutex_lock(&block_group->free_space_lock); 1077 1085 1078 1086 /* ··· 1094 1086 key.type = BTRFS_EXTENT_ITEM_KEY; 1095 1087 key.offset = 0; 1096 1088 1097 - extent_root = btrfs_extent_root(trans->fs_info, key.objectid); 1098 1089 ret = btrfs_search_slot_for_read(extent_root, &key, path, 1, 0); 1099 1090 if (ret < 0) 1100 1091 goto out_locked;
+20 -5
fs/btrfs/inode.c
··· 2012 2012 */ 2013 2013 2014 2014 csum_root = btrfs_csum_root(root->fs_info, io_start); 2015 + if (unlikely(!csum_root)) { 2016 + btrfs_err(root->fs_info, 2017 + "missing csum root for extent at bytenr %llu", io_start); 2018 + ret = -EUCLEAN; 2019 + goto out; 2020 + } 2021 + 2015 2022 ret = btrfs_lookup_csums_list(csum_root, io_start, 2016 2023 io_start + args->file_extent.num_bytes - 1, 2017 2024 NULL, nowait); ··· 2756 2749 int ret; 2757 2750 2758 2751 list_for_each_entry(sum, list, list) { 2759 - trans->adding_csums = true; 2760 - if (!csum_root) 2752 + if (!csum_root) { 2761 2753 csum_root = btrfs_csum_root(trans->fs_info, 2762 2754 sum->logical); 2755 + if (unlikely(!csum_root)) { 2756 + btrfs_err(trans->fs_info, 2757 + "missing csum root for extent at bytenr %llu", 2758 + sum->logical); 2759 + return -EUCLEAN; 2760 + } 2761 + } 2762 + trans->adding_csums = true; 2763 2763 ret = btrfs_csum_file_blocks(trans, csum_root, sum); 2764 2764 trans->adding_csums = false; 2765 2765 if (ret) ··· 9888 9874 int compression; 9889 9875 size_t orig_count; 9890 9876 const u32 min_folio_size = btrfs_min_folio_size(fs_info); 9877 + const u32 blocksize = fs_info->sectorsize; 9891 9878 u64 start, end; 9892 9879 u64 num_bytes, ram_bytes, disk_num_bytes; 9893 9880 struct btrfs_key ins; ··· 9999 9984 ret = -EFAULT; 10000 9985 goto out_cb; 10001 9986 } 10002 - if (bytes < min_folio_size) 10003 - folio_zero_range(folio, bytes, min_folio_size - bytes); 10004 - ret = bio_add_folio(&cb->bbio.bio, folio, folio_size(folio), 0); 9987 + if (!IS_ALIGNED(bytes, blocksize)) 9988 + folio_zero_range(folio, bytes, round_up(bytes, blocksize) - bytes); 9989 + ret = bio_add_folio(&cb->bbio.bio, folio, round_up(bytes, blocksize), 0); 10005 9990 if (unlikely(!ret)) { 10006 9991 folio_put(folio); 10007 9992 ret = -EINVAL;
+9 -3
fs/btrfs/ioctl.c
··· 3617 3617 } 3618 3618 } 3619 3619 3620 - trans = btrfs_join_transaction(root); 3620 + /* 2 BTRFS_QGROUP_RELATION_KEY items. */ 3621 + trans = btrfs_start_transaction(root, 2); 3621 3622 if (IS_ERR(trans)) { 3622 3623 ret = PTR_ERR(trans); 3623 3624 goto out; ··· 3690 3689 goto out; 3691 3690 } 3692 3691 3693 - trans = btrfs_join_transaction(root); 3692 + /* 3693 + * 1 BTRFS_QGROUP_INFO_KEY item. 3694 + * 1 BTRFS_QGROUP_LIMIT_KEY item. 3695 + */ 3696 + trans = btrfs_start_transaction(root, 2); 3694 3697 if (IS_ERR(trans)) { 3695 3698 ret = PTR_ERR(trans); 3696 3699 goto out; ··· 3743 3738 goto drop_write; 3744 3739 } 3745 3740 3746 - trans = btrfs_join_transaction(root); 3741 + /* 1 BTRFS_QGROUP_LIMIT_KEY item. */ 3742 + trans = btrfs_start_transaction(root, 1); 3747 3743 if (IS_ERR(trans)) { 3748 3744 ret = PTR_ERR(trans); 3749 3745 goto out;
+2 -2
fs/btrfs/lzo.c
··· 429 429 int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb) 430 430 { 431 431 struct workspace *workspace = list_entry(ws, struct workspace, list); 432 - const struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info; 432 + struct btrfs_fs_info *fs_info = cb->bbio.inode->root->fs_info; 433 433 const u32 sectorsize = fs_info->sectorsize; 434 434 struct folio_iter fi; 435 435 char *kaddr; ··· 447 447 /* There must be a compressed folio and matches the sectorsize. */ 448 448 if (unlikely(!fi.folio)) 449 449 return -EINVAL; 450 - ASSERT(folio_size(fi.folio) == sectorsize); 450 + ASSERT(folio_size(fi.folio) == btrfs_min_folio_size(fs_info)); 451 451 kaddr = kmap_local_folio(fi.folio, 0); 452 452 len_in = read_compress_length(kaddr); 453 453 kunmap_local(kaddr);
+8
fs/btrfs/qgroup.c
··· 3739 3739 mutex_lock(&fs_info->qgroup_rescan_lock); 3740 3740 extent_root = btrfs_extent_root(fs_info, 3741 3741 fs_info->qgroup_rescan_progress.objectid); 3742 + if (unlikely(!extent_root)) { 3743 + btrfs_err(fs_info, 3744 + "missing extent root for extent at bytenr %llu", 3745 + fs_info->qgroup_rescan_progress.objectid); 3746 + mutex_unlock(&fs_info->qgroup_rescan_lock); 3747 + return -EUCLEAN; 3748 + } 3749 + 3742 3750 ret = btrfs_search_slot_for_read(extent_root, 3743 3751 &fs_info->qgroup_rescan_progress, 3744 3752 path, 1, 0);
+10 -2
fs/btrfs/raid56.c
··· 2297 2297 static void fill_data_csums(struct btrfs_raid_bio *rbio) 2298 2298 { 2299 2299 struct btrfs_fs_info *fs_info = rbio->bioc->fs_info; 2300 - struct btrfs_root *csum_root = btrfs_csum_root(fs_info, 2301 - rbio->bioc->full_stripe_logical); 2300 + struct btrfs_root *csum_root; 2302 2301 const u64 start = rbio->bioc->full_stripe_logical; 2303 2302 const u32 len = (rbio->nr_data * rbio->stripe_nsectors) << 2304 2303 fs_info->sectorsize_bits; ··· 2327 2328 GFP_NOFS); 2328 2329 if (!rbio->csum_buf || !rbio->csum_bitmap) { 2329 2330 ret = -ENOMEM; 2331 + goto error; 2332 + } 2333 + 2334 + csum_root = btrfs_csum_root(fs_info, rbio->bioc->full_stripe_logical); 2335 + if (unlikely(!csum_root)) { 2336 + btrfs_err(fs_info, 2337 + "missing csum root for extent at bytenr %llu", 2338 + rbio->bioc->full_stripe_logical); 2339 + ret = -EUCLEAN; 2330 2340 goto error; 2331 2341 } 2332 2342
+32 -7
fs/btrfs/relocation.c
··· 4185 4185 dest_addr = ins.objectid; 4186 4186 dest_length = ins.offset; 4187 4187 4188 + dest_bg = btrfs_lookup_block_group(fs_info, dest_addr); 4189 + 4188 4190 if (!is_data && !IS_ALIGNED(dest_length, fs_info->nodesize)) { 4189 4191 u64 new_length = ALIGN_DOWN(dest_length, fs_info->nodesize); 4190 4192 ··· 4297 4295 if (unlikely(ret)) 4298 4296 goto end; 4299 4297 4300 - dest_bg = btrfs_lookup_block_group(fs_info, dest_addr); 4301 - 4302 4298 adjust_block_group_remap_bytes(trans, dest_bg, dest_length); 4303 4299 4304 4300 mutex_lock(&dest_bg->free_space_lock); 4305 4301 bg_needs_free_space = test_bit(BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE, 4306 4302 &dest_bg->runtime_flags); 4307 4303 mutex_unlock(&dest_bg->free_space_lock); 4308 - btrfs_put_block_group(dest_bg); 4309 4304 4310 4305 if (bg_needs_free_space) { 4311 4306 ret = btrfs_add_block_group_free_space(trans, dest_bg); ··· 4332 4333 btrfs_end_transaction(trans); 4333 4334 } 4334 4335 } else { 4335 - dest_bg = btrfs_lookup_block_group(fs_info, dest_addr); 4336 4336 btrfs_free_reserved_bytes(dest_bg, dest_length, 0); 4337 - btrfs_put_block_group(dest_bg); 4338 4337 4339 4338 ret = btrfs_commit_transaction(trans); 4340 4339 } 4340 + 4341 + btrfs_put_block_group(dest_bg); 4341 4342 4342 4343 return ret; 4343 4344 } ··· 4953 4954 struct btrfs_space_info *sinfo = src_bg->space_info; 4954 4955 4955 4956 extent_root = btrfs_extent_root(fs_info, src_bg->start); 4957 + if (unlikely(!extent_root)) { 4958 + btrfs_err(fs_info, 4959 + "missing extent root for block group at offset %llu", 4960 + src_bg->start); 4961 + return -EUCLEAN; 4962 + } 4956 4963 4957 4964 trans = btrfs_start_transaction(extent_root, 0); 4958 4965 if (IS_ERR(trans)) ··· 5311 5306 int ret; 5312 5307 bool bg_is_ro = false; 5313 5308 5309 + if (unlikely(!extent_root)) { 5310 + btrfs_err(fs_info, 5311 + "missing extent root for block group at offset %llu", 5312 + group_start); 5313 + return -EUCLEAN; 5314 + } 5315 + 5314 5316 /* 5315 5317 * This only 
gets set if we had a half-deleted snapshot on mount. We 5316 5318 * cannot allow relocation to start while we're still trying to clean up ··· 5548 5536 goto out; 5549 5537 } 5550 5538 5539 + rc->extent_root = btrfs_extent_root(fs_info, 0); 5540 + if (unlikely(!rc->extent_root)) { 5541 + btrfs_err(fs_info, "missing extent root for extent at bytenr 0"); 5542 + ret = -EUCLEAN; 5543 + goto out; 5544 + } 5545 + 5551 5546 ret = reloc_chunk_start(fs_info); 5552 5547 if (ret < 0) 5553 5548 goto out_end; 5554 - 5555 - rc->extent_root = btrfs_extent_root(fs_info, 0); 5556 5549 5557 5550 set_reloc_control(rc); 5558 5551 ··· 5652 5635 struct btrfs_root *csum_root = btrfs_csum_root(fs_info, disk_bytenr); 5653 5636 LIST_HEAD(list); 5654 5637 int ret; 5638 + 5639 + if (unlikely(!csum_root)) { 5640 + btrfs_mark_ordered_extent_error(ordered); 5641 + btrfs_err(fs_info, 5642 + "missing csum root for extent at bytenr %llu", 5643 + disk_bytenr); 5644 + return -EUCLEAN; 5645 + } 5655 5646 5656 5647 ret = btrfs_lookup_csums_list(csum_root, disk_bytenr, 5657 5648 disk_bytenr + ordered->num_bytes - 1,
+17
fs/btrfs/tree-checker.c
··· 1288 1288 btrfs_root_drop_level(&ri), BTRFS_MAX_LEVEL - 1); 1289 1289 return -EUCLEAN; 1290 1290 } 1291 + /* 1292 + * If drop_progress.objectid is non-zero, a btrfs_drop_snapshot() was 1293 + * interrupted and the resume point was recorded in drop_progress and 1294 + * drop_level. In that case drop_level must be >= 1: level 0 is the 1295 + * leaf level and drop_snapshot never saves a checkpoint there (it 1296 + * only records checkpoints at internal node levels in DROP_REFERENCE 1297 + * stage). A zero drop_level combined with a non-zero drop_progress 1298 + * objectid indicates on-disk corruption and would cause a BUG_ON in 1299 + * merge_reloc_root() and btrfs_drop_snapshot() at mount time. 1300 + */ 1301 + if (unlikely(btrfs_disk_key_objectid(&ri.drop_progress) != 0 && 1302 + btrfs_root_drop_level(&ri) == 0)) { 1303 + generic_err(leaf, slot, 1304 + "invalid root drop_level 0 with non-zero drop_progress objectid %llu", 1305 + btrfs_disk_key_objectid(&ri.drop_progress)); 1306 + return -EUCLEAN; 1307 + } 1291 1308 1292 1309 /* Flags check */ 1293 1310 if (unlikely(btrfs_root_flags(&ri) & ~valid_root_flags)) {
+21
fs/btrfs/tree-log.c
··· 984 984 985 985 sums = list_first_entry(&ordered_sums, struct btrfs_ordered_sum, list); 986 986 csum_root = btrfs_csum_root(fs_info, sums->logical); 987 + if (unlikely(!csum_root)) { 988 + btrfs_err(fs_info, 989 + "missing csum root for extent at bytenr %llu", 990 + sums->logical); 991 + ret = -EUCLEAN; 992 + } 993 + 987 994 if (!ret) { 988 995 ret = btrfs_del_csums(trans, csum_root, sums->logical, 989 996 sums->len); ··· 4897 4890 } 4898 4891 4899 4892 csum_root = btrfs_csum_root(trans->fs_info, disk_bytenr); 4893 + if (unlikely(!csum_root)) { 4894 + btrfs_err(trans->fs_info, 4895 + "missing csum root for extent at bytenr %llu", 4896 + disk_bytenr); 4897 + return -EUCLEAN; 4898 + } 4899 + 4900 4900 disk_bytenr += extent_offset; 4901 4901 ret = btrfs_lookup_csums_list(csum_root, disk_bytenr, 4902 4902 disk_bytenr + extent_num_bytes - 1, ··· 5100 5086 /* block start is already adjusted for the file extent offset. */ 5101 5087 block_start = btrfs_extent_map_block_start(em); 5102 5088 csum_root = btrfs_csum_root(trans->fs_info, block_start); 5089 + if (unlikely(!csum_root)) { 5090 + btrfs_err(trans->fs_info, 5091 + "missing csum root for extent at bytenr %llu", 5092 + block_start); 5093 + return -EUCLEAN; 5094 + } 5095 + 5103 5096 ret = btrfs_lookup_csums_list(csum_root, block_start + csum_offset, 5104 5097 block_start + csum_offset + csum_len - 1, 5105 5098 &ordered_sums, false);
+17 -8
fs/btrfs/volumes.c
··· 4277 4277 end: 4278 4278 while (!list_empty(chunks)) { 4279 4279 bool is_unused; 4280 + struct btrfs_block_group *bg; 4280 4281 4281 4282 rci = list_first_entry(chunks, struct remap_chunk_info, list); 4282 4283 4283 - spin_lock(&rci->bg->lock); 4284 - is_unused = !btrfs_is_block_group_used(rci->bg); 4285 - spin_unlock(&rci->bg->lock); 4284 + bg = rci->bg; 4285 + if (bg) { 4286 + /* 4287 + * This is a bit racy and the 'used' status can change 4288 + * but this is not a problem as later functions will 4289 + * verify it again. 4290 + */ 4291 + spin_lock(&bg->lock); 4292 + is_unused = !btrfs_is_block_group_used(bg); 4293 + spin_unlock(&bg->lock); 4286 4294 4287 - if (is_unused) 4288 - btrfs_mark_bg_unused(rci->bg); 4295 + if (is_unused) 4296 + btrfs_mark_bg_unused(bg); 4289 4297 4290 - if (rci->made_ro) 4291 - btrfs_dec_block_group_ro(rci->bg); 4298 + if (rci->made_ro) 4299 + btrfs_dec_block_group_ro(bg); 4292 4300 4293 - btrfs_put_block_group(rci->bg); 4301 + btrfs_put_block_group(bg); 4302 + } 4294 4303 4295 4304 list_del(&rci->list); 4296 4305 kfree(rci);
+7
fs/btrfs/zoned.c
··· 1261 1261 key.offset = 0; 1262 1262 1263 1263 root = btrfs_extent_root(fs_info, key.objectid); 1264 + if (unlikely(!root)) { 1265 + btrfs_err(fs_info, 1266 + "missing extent root for extent at bytenr %llu", 1267 + key.objectid); 1268 + return -EUCLEAN; 1269 + } 1270 + 1264 1271 ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 1265 1272 /* We should not find the exact match */ 1266 1273 if (unlikely(!ret))
+1 -1
fs/btrfs/zstd.c
··· 600 600 bio_first_folio(&fi, &cb->bbio.bio, 0); 601 601 if (unlikely(!fi.folio)) 602 602 return -EINVAL; 603 - ASSERT(folio_size(fi.folio) == blocksize); 603 + ASSERT(folio_size(fi.folio) == min_folio_size); 604 604 605 605 stream = zstd_init_dstream( 606 606 ZSTD_BTRFS_MAX_INPUT, workspace->mem, workspace->size);
+29 -14
fs/erofs/Kconfig
··· 16 16 select ZLIB_INFLATE if EROFS_FS_ZIP_DEFLATE 17 17 select ZSTD_DECOMPRESS if EROFS_FS_ZIP_ZSTD 18 18 help 19 - EROFS (Enhanced Read-Only File System) is a lightweight read-only 20 - file system with modern designs (e.g. no buffer heads, inline 21 - xattrs/data, chunk-based deduplication, multiple devices, etc.) for 22 - scenarios which need high-performance read-only solutions, e.g. 23 - smartphones with Android OS, LiveCDs and high-density hosts with 24 - numerous containers; 19 + EROFS (Enhanced Read-Only File System) is a modern, lightweight, 20 + secure read-only filesystem for various use cases, such as immutable 21 + system images, container images, application sandboxes, and datasets. 25 22 26 - It also provides transparent compression and deduplication support to 27 - improve storage density and maintain relatively high compression 28 - ratios, and it implements in-place decompression to temporarily reuse 29 - page cache for compressed data using proper strategies, which is 30 - quite useful for ensuring guaranteed end-to-end runtime decompression 23 + EROFS uses a flexible, hierarchical on-disk design so that features 24 + can be enabled on demand: the core on-disk format is block-aligned in 25 + order to perform optimally on all kinds of devices, including block 26 + and memory-backed devices; the format is easy to parse and has zero 27 + metadata redundancy, unlike generic filesystems, making it ideal for 28 + filesystem auditing and remote access; inline data, random-access 29 + friendly directory data, inline/shared extended attributes and 30 + chunk-based deduplication ensure space efficiency while maintaining 31 + high performance. 32 + 33 + Optionally, it supports multiple devices to reference external data, 34 + enabling data sharing for container images. 35 + 36 + It also has advanced encoded on-disk layouts, particularly for data 37 + compression and fine-grained deduplication. 
It utilizes fixed-size 38 + output compression to improve storage density while keeping relatively 39 + high compression ratios. Furthermore, it implements in-place 40 + decompression to reuse file pages to keep compressed data temporarily 41 + with proper strategies, which ensures guaranteed end-to-end runtime 31 42 performance under extreme memory pressure without extra cost. 32 43 33 - See the documentation at <file:Documentation/filesystems/erofs.rst> 34 - and the web pages at <https://erofs.docs.kernel.org> for more details. 44 + For more details, see the web pages at <https://erofs.docs.kernel.org> 45 + and the documentation at <file:Documentation/filesystems/erofs.rst>. 46 + 47 + To compile EROFS filesystem support as a module, choose M here. The 48 + module will be called erofs. 35 49 36 50 If unsure, say N. 37 51 ··· 119 105 depends on EROFS_FS 120 106 default y 121 107 help 122 - Enable transparent compression support for EROFS file systems. 108 + Enable EROFS compression layouts so that filesystems containing 109 + compressed files can be parsed by the kernel. 123 110 124 111 If you don't want to enable compression feature, say N. 125 112
+2 -4
fs/erofs/fileio.c
··· 25 25 container_of(iocb, struct erofs_fileio_rq, iocb); 26 26 struct folio_iter fi; 27 27 28 - if (ret >= 0 && ret != rq->bio.bi_iter.bi_size) { 29 - bio_advance(&rq->bio, ret); 30 - zero_fill_bio(&rq->bio); 31 - } 28 + if (ret >= 0 && ret != rq->bio.bi_iter.bi_size) 29 + ret = -EIO; 32 30 if (!rq->bio.bi_end_io) { 33 31 bio_for_each_folio_all(fi, &rq->bio) { 34 32 DBG_BUGON(folio_test_uptodate(fi.folio));
+13 -2
fs/erofs/ishare.c
··· 200 200 201 201 int __init erofs_init_ishare(void) 202 202 { 203 - erofs_ishare_mnt = kern_mount(&erofs_anon_fs_type); 204 - return PTR_ERR_OR_ZERO(erofs_ishare_mnt); 203 + struct vfsmount *mnt; 204 + int ret; 205 + 206 + mnt = kern_mount(&erofs_anon_fs_type); 207 + if (IS_ERR(mnt)) 208 + return PTR_ERR(mnt); 209 + /* generic_fadvise() doesn't work if s_bdi == &noop_backing_dev_info */ 210 + ret = super_setup_bdi(mnt->mnt_sb); 211 + if (ret) 212 + kern_unmount(mnt); 213 + else 214 + erofs_ishare_mnt = mnt; 215 + return ret; 205 216 } 206 217 207 218 void erofs_exit_ishare(void)
+3
fs/erofs/zdata.c
··· 1445 1445 int bios) 1446 1446 { 1447 1447 struct erofs_sb_info *const sbi = EROFS_SB(io->sb); 1448 + int gfp_flag; 1448 1449 1449 1450 /* wake up the caller thread for sync decompression */ 1450 1451 if (io->sync) { ··· 1478 1477 sbi->sync_decompress = EROFS_SYNC_DECOMPRESS_FORCE_ON; 1479 1478 return; 1480 1479 } 1480 + gfp_flag = memalloc_noio_save(); 1481 1481 z_erofs_decompressqueue_work(&io->u.work); 1482 + memalloc_noio_restore(gfp_flag); 1482 1483 } 1483 1484 1484 1485 static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+6
fs/smb/client/cifsglob.h
··· 2386 2386 return opts; 2387 2387 } 2388 2388 2389 + /* 2390 + * The number of blocks is not related to (i_size / i_blksize), but instead 2391 + * 512 byte (2**9) size is required for calculating num blocks. 2392 + */ 2393 + #define CIFS_INO_BLOCKS(size) DIV_ROUND_UP_ULL((u64)(size), 512) 2394 + 2389 2395 #endif /* _CIFS_GLOB_H */
+4
fs/smb/client/connect.c
··· 1955 1955 case Kerberos: 1956 1956 if (!uid_eq(ctx->cred_uid, ses->cred_uid)) 1957 1957 return 0; 1958 + if (strncmp(ses->user_name ?: "", 1959 + ctx->username ?: "", 1960 + CIFS_MAX_USERNAME_LEN)) 1961 + return 0; 1958 1962 break; 1959 1963 case NTLMv2: 1960 1964 case RawNTLMSSP:
-1
fs/smb/client/file.c
··· 993 993 if (!rc) { 994 994 netfs_resize_file(&cinode->netfs, 0, true); 995 995 cifs_setsize(inode, 0); 996 - inode->i_blocks = 0; 997 996 } 998 997 } 999 998 if (cfile)
+6 -15
fs/smb/client/inode.c
··· 219 219 */ 220 220 if (is_size_safe_to_change(cifs_i, fattr->cf_eof, from_readdir)) { 221 221 i_size_write(inode, fattr->cf_eof); 222 - 223 - /* 224 - * i_blocks is not related to (i_size / i_blksize), 225 - * but instead 512 byte (2**9) size is required for 226 - * calculating num blocks. 227 - */ 228 - inode->i_blocks = (512 - 1 + fattr->cf_bytes) >> 9; 222 + inode->i_blocks = CIFS_INO_BLOCKS(fattr->cf_bytes); 229 223 } 230 224 231 225 if (S_ISLNK(fattr->cf_mode) && fattr->cf_symlink_target) { ··· 3009 3015 { 3010 3016 spin_lock(&inode->i_lock); 3011 3017 i_size_write(inode, offset); 3018 + /* 3019 + * Until we can query the server for actual allocation size, 3020 + * this is best estimate we have for blocks allocated for a file. 3021 + */ 3022 + inode->i_blocks = CIFS_INO_BLOCKS(offset); 3012 3023 spin_unlock(&inode->i_lock); 3013 3024 inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode)); 3014 3025 truncate_pagecache(inode, offset); ··· 3086 3087 if (rc == 0) { 3087 3088 netfs_resize_file(&cifsInode->netfs, size, true); 3088 3089 cifs_setsize(inode, size); 3089 - /* 3090 - * i_blocks is not related to (i_size / i_blksize), but instead 3091 - * 512 byte (2**9) size is required for calculating num blocks. 3092 - * Until we can query the server for actual allocation size, 3093 - * this is best estimate we have for blocks allocated for a file 3094 - * Number of blocks must be rounded up so size 1 is not 0 blocks 3095 - */ 3096 - inode->i_blocks = (512 - 1 + size) >> 9; 3097 3090 } 3098 3091 3099 3092 return rc;
+1 -1
fs/smb/client/smb1transport.c
··· 460 460 return 0; 461 461 462 462 /* 463 - * Windows NT server returns error resposne (e.g. STATUS_DELETE_PENDING 463 + * Windows NT server returns error response (e.g. STATUS_DELETE_PENDING 464 464 * or STATUS_OBJECT_NAME_NOT_FOUND or ERRDOS/ERRbadfile or any other) 465 465 * for some TRANS2 requests without the RESPONSE flag set in header. 466 466 */
+4 -16
fs/smb/client/smb2ops.c
··· 1497 1497 { 1498 1498 struct smb2_file_network_open_info file_inf; 1499 1499 struct inode *inode; 1500 + u64 asize; 1500 1501 int rc; 1501 1502 1502 1503 rc = __SMB2_close(xid, tcon, cfile->fid.persistent_fid, ··· 1521 1520 inode_set_atime_to_ts(inode, 1522 1521 cifs_NTtimeToUnix(file_inf.LastAccessTime)); 1523 1522 1524 - /* 1525 - * i_blocks is not related to (i_size / i_blksize), 1526 - * but instead 512 byte (2**9) size is required for 1527 - * calculating num blocks. 1528 - */ 1529 - if (le64_to_cpu(file_inf.AllocationSize) > 4096) 1530 - inode->i_blocks = 1531 - (512 - 1 + le64_to_cpu(file_inf.AllocationSize)) >> 9; 1523 + asize = le64_to_cpu(file_inf.AllocationSize); 1524 + if (asize > 4096) 1525 + inode->i_blocks = CIFS_INO_BLOCKS(asize); 1532 1526 1533 1527 /* End of file and Attributes should not have to be updated on close */ 1534 1528 spin_unlock(&inode->i_lock); ··· 2200 2204 rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false); 2201 2205 if (rc) 2202 2206 goto duplicate_extents_out; 2203 - 2204 - /* 2205 - * Although also could set plausible allocation size (i_blocks) 2206 - * here in addition to setting the file size, in reflink 2207 - * it is likely that the target file is sparse. Its allocation 2208 - * size will be queried on next revalidate, but it is important 2209 - * to make sure that file's cached size is updated immediately 2210 - */ 2211 2207 netfs_resize_file(netfs_inode(inode), dest_off + len, true); 2212 2208 cifs_setsize(inode, dest_off + len); 2213 2209 }
+6 -3
fs/smb/server/mgmt/tree_connect.c
··· 102 102 103 103 void ksmbd_tree_connect_put(struct ksmbd_tree_connect *tcon) 104 104 { 105 - if (atomic_dec_and_test(&tcon->refcount)) 105 + if (atomic_dec_and_test(&tcon->refcount)) { 106 + ksmbd_share_config_put(tcon->share_conf); 106 107 kfree(tcon); 108 + } 107 109 } 108 110 109 111 static int __ksmbd_tree_conn_disconnect(struct ksmbd_session *sess, ··· 115 113 116 114 ret = ksmbd_ipc_tree_disconnect_request(sess->id, tree_conn->id); 117 115 ksmbd_release_tree_conn_id(sess, tree_conn->id); 118 - ksmbd_share_config_put(tree_conn->share_conf); 119 116 ksmbd_counter_dec(KSMBD_COUNTER_TREE_CONNS); 120 - if (atomic_dec_and_test(&tree_conn->refcount)) 117 + if (atomic_dec_and_test(&tree_conn->refcount)) { 118 + ksmbd_share_config_put(tree_conn->share_conf); 121 119 kfree(tree_conn); 120 + } 122 121 return ret; 123 122 } 124 123
+12 -5
fs/smb/server/smb2pdu.c
··· 126 126 pr_err("The first operation in the compound does not have tcon\n"); 127 127 return -EINVAL; 128 128 } 129 + if (work->tcon->t_state != TREE_CONNECTED) 130 + return -ENOENT; 129 131 if (tree_id != UINT_MAX && work->tcon->id != tree_id) { 130 132 pr_err("tree id(%u) is different with id(%u) in first operation\n", 131 133 tree_id, work->tcon->id); ··· 1950 1948 } 1951 1949 } 1952 1950 smb2_set_err_rsp(work); 1951 + conn->binding = false; 1953 1952 } else { 1954 1953 unsigned int iov_len; 1955 1954 ··· 2831 2828 goto out; 2832 2829 } 2833 2830 2834 - dh_info->fp->conn = conn; 2831 + if (dh_info->fp->conn) { 2832 + ksmbd_put_durable_fd(dh_info->fp); 2833 + err = -EBADF; 2834 + goto out; 2835 + } 2835 2836 dh_info->reconnected = true; 2836 2837 goto out; 2837 2838 } ··· 5459 5452 struct smb2_query_info_req *req, 5460 5453 struct smb2_query_info_rsp *rsp) 5461 5454 { 5462 - struct ksmbd_session *sess = work->sess; 5463 5455 struct ksmbd_conn *conn = work->conn; 5464 5456 struct ksmbd_share_config *share = work->tcon->share_conf; 5465 5457 int fsinfoclass = 0; ··· 5595 5589 5596 5590 info = (struct object_id_info *)(rsp->Buffer); 5597 5591 5598 - if (!user_guest(sess->user)) 5599 - memcpy(info->objid, user_passkey(sess->user), 16); 5592 + if (path.mnt->mnt_sb->s_uuid_len == 16) 5593 + memcpy(info->objid, path.mnt->mnt_sb->s_uuid.b, 5594 + path.mnt->mnt_sb->s_uuid_len); 5600 5595 else 5601 - memset(info->objid, 0, 16); 5596 + memcpy(info->objid, &stfs.f_fsid, sizeof(stfs.f_fsid)); 5602 5597 5603 5598 info->extended_info.magic = cpu_to_le32(EXTENDED_INFO_MAGIC); 5604 5599 info->extended_info.version = cpu_to_le32(1);
-3
fs/tests/exec_kunit.c
··· 94 94 { { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * (_STK_LIM / 4 * 3 + sizeof(void *)), 95 95 .argc = 0, .envc = 0 }, 96 96 .expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) }, 97 - { { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * (_STK_LIM / 4 * + sizeof(void *)), 98 - .argc = 0, .envc = 0 }, 99 - .expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) }, 100 97 { { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * _STK_LIM, 101 98 .argc = 0, .envc = 0 }, 102 99 .expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) },
+2 -1
include/hyperv/hvgdk_mini.h
··· 477 477 #define HVCALL_NOTIFY_PARTITION_EVENT 0x0087 478 478 #define HVCALL_ENTER_SLEEP_STATE 0x0084 479 479 #define HVCALL_NOTIFY_PORT_RING_EMPTY 0x008b 480 - #define HVCALL_SCRUB_PARTITION 0x008d 481 480 #define HVCALL_REGISTER_INTERCEPT_RESULT 0x0091 482 481 #define HVCALL_ASSERT_VIRTUAL_INTERRUPT 0x0094 483 482 #define HVCALL_CREATE_PORT 0x0095 ··· 1120 1121 HV_X64_REGISTER_MSR_MTRR_FIX4KF8000 = 0x0008007A, 1121 1122 1122 1123 HV_X64_REGISTER_REG_PAGE = 0x0009001C, 1124 + #elif defined(CONFIG_ARM64) 1125 + HV_ARM64_REGISTER_SINT_RESERVED_INTERRUPT_ID = 0x00070001, 1123 1126 #endif 1124 1127 }; 1125 1128
+1 -1
include/linux/auxvec.h
··· 4 4 5 5 #include <uapi/linux/auxvec.h> 6 6 7 - #define AT_VECTOR_SIZE_BASE 22 /* NEW_AUX_ENT entries in auxiliary table */ 7 + #define AT_VECTOR_SIZE_BASE 24 /* NEW_AUX_ENT entries in auxiliary table */ 8 8 /* number of "#define AT_.*" above, minus {AT_NULL, AT_IGNORE, AT_NOTELF} */ 9 9 #endif /* _LINUX_AUXVEC_H */
+1
include/linux/console_struct.h
··· 160 160 struct uni_pagedict **uni_pagedict_loc; /* [!] Location of uni_pagedict variable for this console */ 161 161 u32 **vc_uni_lines; /* unicode screen content */ 162 162 u16 *vc_saved_screen; 163 + u32 **vc_saved_uni_lines; 163 164 unsigned int vc_saved_cols; 164 165 unsigned int vc_saved_rows; 165 166 /* additional information is in vt_kern.h */
+6
include/linux/damon.h
··· 810 810 struct damos_walk_control *walk_control; 811 811 struct mutex walk_control_lock; 812 812 813 + /* 814 + * indicate if this may be corrupted. Currently this is set only for 815 + * damon_commit_ctx() failure. 816 + */ 817 + bool maybe_corrupted; 818 + 813 819 /* Working thread of the given DAMON context */ 814 820 struct task_struct *kdamond; 815 821 /* Protects @kdamond field access */
+54
include/linux/device.h
··· 483 483 * on. This shrinks the "Board Support Packages" (BSPs) and 484 484 * minimizes board-specific #ifdefs in drivers. 485 485 * @driver_data: Private pointer for driver specific info. 486 + * @driver_override: Driver name to force a match. Do not touch directly; use 487 + * device_set_driver_override() instead. 486 488 * @links: Links to suppliers and consumers of this device. 487 489 * @power: For device power management. 488 490 * See Documentation/driver-api/pm/devices.rst for details. ··· 578 576 core doesn't touch it */ 579 577 void *driver_data; /* Driver data, set and get with 580 578 dev_set_drvdata/dev_get_drvdata */ 579 + struct { 580 + const char *name; 581 + spinlock_t lock; 582 + } driver_override; 581 583 struct mutex mutex; /* mutex to synchronize calls to 582 584 * its driver. 583 585 */ ··· 706 700 }; 707 701 708 702 #define kobj_to_dev(__kobj) container_of_const(__kobj, struct device, kobj) 703 + 704 + int __device_set_driver_override(struct device *dev, const char *s, size_t len); 705 + 706 + /** 707 + * device_set_driver_override() - Helper to set or clear driver override. 708 + * @dev: Device to change 709 + * @s: NUL-terminated string, new driver name to force a match, pass empty 710 + * string to clear it ("" or "\n", where the latter is only for sysfs 711 + * interface). 712 + * 713 + * Helper to set or clear driver override of a device. 714 + * 715 + * Returns: 0 on success or a negative error code on failure. 716 + */ 717 + static inline int device_set_driver_override(struct device *dev, const char *s) 718 + { 719 + return __device_set_driver_override(dev, s, s ? strlen(s) : 0); 720 + } 721 + 722 + /** 723 + * device_has_driver_override() - Check if a driver override has been set. 724 + * @dev: device to check 725 + * 726 + * Returns true if a driver override has been set for this device. 
727 + */ 728 + static inline bool device_has_driver_override(struct device *dev) 729 + { 730 + guard(spinlock)(&dev->driver_override.lock); 731 + return !!dev->driver_override.name; 732 + } 733 + 734 + /** 735 + * device_match_driver_override() - Match a driver against the device's driver_override. 736 + * @dev: device to check 737 + * @drv: driver to match against 738 + * 739 + * Returns > 0 if a driver override is set and matches the given driver, 0 if a 740 + * driver override is set but does not match, or < 0 if a driver override is not 741 + * set at all. 742 + */ 743 + static inline int device_match_driver_override(struct device *dev, 744 + const struct device_driver *drv) 745 + { 746 + guard(spinlock)(&dev->driver_override.lock); 747 + if (dev->driver_override.name) 748 + return !strcmp(dev->driver_override.name, drv->name); 749 + return -1; 750 + } 709 751 710 752 /** 711 753 * device_iommu_mapped - Returns true when the device DMA is translated
+4
include/linux/device/bus.h
··· 65 65 * this bus. 66 66 * @pm: Power management operations of this bus, callback the specific 67 67 * device driver's pm-ops. 68 + * @driver_override: Set to true if this bus supports the driver_override 69 + * mechanism, which allows userspace to force a specific 70 + * driver to bind to a device via a sysfs attribute. 68 71 * @need_parent_lock: When probing or removing a device on this bus, the 69 72 * device core should lock the device's parent. 70 73 * ··· 109 106 110 107 const struct dev_pm_ops *pm; 111 108 109 + bool driver_override; 112 110 bool need_parent_lock; 113 111 }; 114 112
+13 -6
include/linux/dma-mapping.h
··· 80 80 #define DMA_ATTR_MMIO (1UL << 10) 81 81 82 82 /* 83 - * DMA_ATTR_CPU_CACHE_CLEAN: Indicates the CPU will not dirty any cacheline 84 - * overlapping this buffer while it is mapped for DMA. All mappings sharing 85 - * a cacheline must have this attribute for this to be considered safe. 83 + * DMA_ATTR_DEBUGGING_IGNORE_CACHELINES: Indicates the CPU cache line can be 84 + * overlapped. All mappings sharing a cacheline must have this attribute for 85 + * this to be considered safe. 86 86 */ 87 - #define DMA_ATTR_CPU_CACHE_CLEAN (1UL << 11) 87 + #define DMA_ATTR_DEBUGGING_IGNORE_CACHELINES (1UL << 11) 88 + 89 + /* 90 + * DMA_ATTR_REQUIRE_COHERENT: Indicates that DMA coherency is required. 91 + * All mappings that carry this attribute can't work with SWIOTLB and cache 92 + * flushing. 93 + */ 94 + #define DMA_ATTR_REQUIRE_COHERENT (1UL << 12) 88 95 89 96 /* 90 97 * A dma_addr_t can hold any valid DMA or bus address for the platform. It can ··· 255 248 { 256 249 return NULL; 257 250 } 258 - static void dma_free_attrs(struct device *dev, size_t size, void *cpu_addr, 259 - dma_addr_t dma_handle, unsigned long attrs) 251 + static inline void dma_free_attrs(struct device *dev, size_t size, 252 + void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs) 260 253 { 261 254 } 262 255 static inline void *dmam_alloc_attrs(struct device *dev, size_t size,
+8 -2
include/linux/io-pgtable.h
··· 53 53 * tables. 54 54 * @ias: Input address (iova) size, in bits. 55 55 * @oas: Output address (paddr) size, in bits. 56 - * @coherent_walk A flag to indicate whether or not page table walks made 56 + * @coherent_walk: A flag to indicate whether or not page table walks made 57 57 * by the IOMMU are coherent with the CPU caches. 58 58 * @tlb: TLB management callbacks for this set of tables. 59 59 * @iommu_dev: The device representing the DMA configuration for the ··· 136 136 void (*free)(void *cookie, void *pages, size_t size); 137 137 138 138 /* Low-level data specific to the table format */ 139 + /* private: */ 139 140 union { 140 141 struct { 141 142 u64 ttbr; ··· 204 203 * @unmap_pages: Unmap a range of virtually contiguous pages of the same size. 205 204 * @iova_to_phys: Translate iova to physical address. 206 205 * @pgtable_walk: (optional) Perform a page table walk for a given iova. 206 + * @read_and_clear_dirty: Record dirty info per IOVA. If an IOVA is dirty, 207 + * clear its dirty state from the PTE unless the 208 + * IOMMU_DIRTY_NO_CLEAR flag is passed in. 207 209 * 208 210 * These functions map directly onto the iommu_ops member functions with 209 211 * the same names. ··· 235 231 * the configuration actually provided by the allocator (e.g. the 236 232 * pgsize_bitmap may be restricted). 237 233 * @cookie: An opaque token provided by the IOMMU driver and passed back to 238 - * the callback routines in cfg->tlb. 234 + * the callback routines. 235 + * 236 + * Returns: Pointer to the &struct io_pgtable_ops for this set of page tables. 239 237 */ 240 238 struct io_pgtable_ops *alloc_io_pgtable_ops(enum io_pgtable_fmt fmt, 241 239 struct io_pgtable_cfg *cfg,
+3
include/linux/io_uring_types.h
··· 541 541 REQ_F_BL_NO_RECYCLE_BIT, 542 542 REQ_F_BUFFERS_COMMIT_BIT, 543 543 REQ_F_BUF_NODE_BIT, 544 + REQ_F_BUF_MORE_BIT, 544 545 REQ_F_HAS_METADATA_BIT, 545 546 REQ_F_IMPORT_BUFFER_BIT, 546 547 REQ_F_SQE_COPIED_BIT, ··· 627 626 REQ_F_BUFFERS_COMMIT = IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT), 628 627 /* buf node is valid */ 629 628 REQ_F_BUF_NODE = IO_REQ_FLAG(REQ_F_BUF_NODE_BIT), 629 + /* incremental buffer consumption, more space available */ 630 + REQ_F_BUF_MORE = IO_REQ_FLAG(REQ_F_BUF_MORE_BIT), 630 631 /* request has read/write metadata assigned */ 631 632 REQ_F_HAS_METADATA = IO_REQ_FLAG(REQ_F_HAS_METADATA_BIT), 632 633 /*
+1 -1
include/linux/local_lock_internal.h
··· 315 315 316 316 #endif /* CONFIG_PREEMPT_RT */ 317 317 318 - #if defined(WARN_CONTEXT_ANALYSIS) 318 + #if defined(WARN_CONTEXT_ANALYSIS) && !defined(__CHECKER__) 319 319 /* 320 320 * Because the compiler only knows about the base per-CPU variable, use this 321 321 * helper function to make the compiler think we lock/unlock the @base variable,
-5
include/linux/platform_device.h
··· 31 31 struct resource *resource; 32 32 33 33 const struct platform_device_id *id_entry; 34 - /* 35 - * Driver name to force a match. Do not set directly, because core 36 - * frees it. Use driver_set_override() to set or clear it. 37 - */ 38 - const char *driver_override; 39 34 40 35 /* MFD cell pointer */ 41 36 struct mfd_cell *mfd_cell;
+1
include/linux/security.h
··· 145 145 LOCKDOWN_BPF_WRITE_USER, 146 146 LOCKDOWN_DBG_WRITE_KERNEL, 147 147 LOCKDOWN_RTAS_ERROR_INJECTION, 148 + LOCKDOWN_XEN_USER_ACTIONS, 148 149 LOCKDOWN_INTEGRITY_MAX, 149 150 LOCKDOWN_KCORE, 150 151 LOCKDOWN_KPROBES,
+1
include/linux/serial_8250.h
··· 195 195 void serial8250_do_set_divisor(struct uart_port *port, unsigned int baud, 196 196 unsigned int quot); 197 197 int fsl8250_handle_irq(struct uart_port *port); 198 + void serial8250_handle_irq_locked(struct uart_port *port, unsigned int iir); 198 199 int serial8250_handle_irq(struct uart_port *port, unsigned int iir); 199 200 u16 serial8250_rx_chars(struct uart_8250_port *up, u16 lsr); 200 201 void serial8250_read_char(struct uart_8250_port *up, u16 lsr);
+4
include/linux/srcutiny.h
··· 11 11 #ifndef _LINUX_SRCU_TINY_H 12 12 #define _LINUX_SRCU_TINY_H 13 13 14 + #include <linux/irq_work_types.h> 14 15 #include <linux/swait.h> 15 16 16 17 struct srcu_struct { ··· 25 24 struct rcu_head *srcu_cb_head; /* Pending callbacks: Head. */ 26 25 struct rcu_head **srcu_cb_tail; /* Pending callbacks: Tail. */ 27 26 struct work_struct srcu_work; /* For driving grace periods. */ 27 + struct irq_work srcu_irq_work; /* Defer schedule_work() to irq work. */ 28 28 #ifdef CONFIG_DEBUG_LOCK_ALLOC 29 29 struct lockdep_map dep_map; 30 30 #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ 31 31 }; 32 32 33 33 void srcu_drive_gp(struct work_struct *wp); 34 + void srcu_tiny_irq_work(struct irq_work *irq_work); 34 35 35 36 #define __SRCU_STRUCT_INIT(name, __ignored, ___ignored, ____ignored) \ 36 37 { \ 37 38 .srcu_wq = __SWAIT_QUEUE_HEAD_INITIALIZER(name.srcu_wq), \ 38 39 .srcu_cb_tail = &name.srcu_cb_head, \ 39 40 .srcu_work = __WORK_INITIALIZER(name.srcu_work, srcu_drive_gp), \ 41 + .srcu_irq_work = { .func = srcu_tiny_irq_work }, \ 40 42 __SRCU_DEP_MAP_INIT(name) \ 41 43 } 42 44
+5 -4
include/linux/srcutree.h
··· 34 34 /* Values: SRCU_READ_FLAVOR_.* */ 35 35 36 36 /* Update-side state. */ 37 - spinlock_t __private lock ____cacheline_internodealigned_in_smp; 37 + raw_spinlock_t __private lock ____cacheline_internodealigned_in_smp; 38 38 struct rcu_segcblist srcu_cblist; /* List of callbacks.*/ 39 39 unsigned long srcu_gp_seq_needed; /* Furthest future GP needed. */ 40 40 unsigned long srcu_gp_seq_needed_exp; /* Furthest future exp GP. */ ··· 55 55 * Node in SRCU combining tree, similar in function to rcu_data. 56 56 */ 57 57 struct srcu_node { 58 - spinlock_t __private lock; 58 + raw_spinlock_t __private lock; 59 59 unsigned long srcu_have_cbs[4]; /* GP seq for children having CBs, but only */ 60 60 /* if greater than ->srcu_gp_seq. */ 61 61 unsigned long srcu_data_have_cbs[4]; /* Which srcu_data structs have CBs for given GP? */ ··· 74 74 /* First node at each level. */ 75 75 int srcu_size_state; /* Small-to-big transition state. */ 76 76 struct mutex srcu_cb_mutex; /* Serialize CB preparation. */ 77 - spinlock_t __private lock; /* Protect counters and size state. */ 77 + raw_spinlock_t __private lock; /* Protect counters and size state. */ 78 78 struct mutex srcu_gp_mutex; /* Serialize GP work. */ 79 79 unsigned long srcu_gp_seq; /* Grace-period seq #. */ 80 80 unsigned long srcu_gp_seq_needed; /* Latest gp_seq needed. */ ··· 95 95 unsigned long reschedule_jiffies; 96 96 unsigned long reschedule_count; 97 97 struct delayed_work work; 98 + struct irq_work irq_work; 98 99 struct srcu_struct *srcu_ssp; 99 100 }; 100 101 ··· 157 156 158 157 #define __SRCU_USAGE_INIT(name) \ 159 158 { \ 160 - .lock = __SPIN_LOCK_UNLOCKED(name.lock), \ 159 + .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \ 161 160 .srcu_gp_seq = SRCU_GP_SEQ_INITIAL_VAL, \ 162 161 .srcu_gp_seq_needed = SRCU_GP_SEQ_INITIAL_VAL_WITH_STATE, \ 163 162 .srcu_gp_seq_needed_exp = SRCU_GP_SEQ_INITIAL_VAL, \
+49 -4
include/linux/virtio_net.h
··· 207 207 return __virtio_net_hdr_to_skb(skb, hdr, little_endian, hdr->gso_type); 208 208 } 209 209 210 + /* This function must be called after virtio_net_hdr_from_skb(). */ 211 + static inline void __virtio_net_set_hdrlen(const struct sk_buff *skb, 212 + struct virtio_net_hdr *hdr, 213 + bool little_endian) 214 + { 215 + u16 hdr_len; 216 + 217 + hdr_len = skb_transport_offset(skb); 218 + 219 + if (hdr->gso_type == VIRTIO_NET_HDR_GSO_UDP_L4) 220 + hdr_len += sizeof(struct udphdr); 221 + else 222 + hdr_len += tcp_hdrlen(skb); 223 + 224 + hdr->hdr_len = __cpu_to_virtio16(little_endian, hdr_len); 225 + } 226 + 227 + /* This function must be called after virtio_net_hdr_from_skb(). */ 228 + static inline void __virtio_net_set_tnl_hdrlen(const struct sk_buff *skb, 229 + struct virtio_net_hdr *hdr) 230 + { 231 + u16 hdr_len; 232 + 233 + hdr_len = skb_inner_transport_offset(skb); 234 + 235 + if (hdr->gso_type == VIRTIO_NET_HDR_GSO_UDP_L4) 236 + hdr_len += sizeof(struct udphdr); 237 + else 238 + hdr_len += inner_tcp_hdrlen(skb); 239 + 240 + hdr->hdr_len = __cpu_to_virtio16(true, hdr_len); 241 + } 242 + 210 243 static inline int virtio_net_hdr_from_skb(const struct sk_buff *skb, 211 244 struct virtio_net_hdr *hdr, 212 245 bool little_endian, ··· 418 385 bool tnl_hdr_negotiated, 419 386 bool little_endian, 420 387 int vlan_hlen, 421 - bool has_data_valid) 388 + bool has_data_valid, 389 + bool feature_hdrlen) 422 390 { 423 391 struct virtio_net_hdr *hdr = (struct virtio_net_hdr *)vhdr; 424 392 unsigned int inner_nh, outer_th; ··· 428 394 429 395 tnl_gso_type = skb_shinfo(skb)->gso_type & (SKB_GSO_UDP_TUNNEL | 430 396 SKB_GSO_UDP_TUNNEL_CSUM); 431 - if (!tnl_gso_type) 432 - return virtio_net_hdr_from_skb(skb, hdr, little_endian, 433 - has_data_valid, vlan_hlen); 397 + if (!tnl_gso_type) { 398 + ret = virtio_net_hdr_from_skb(skb, hdr, little_endian, 399 + has_data_valid, vlan_hlen); 400 + if (ret) 401 + return ret; 402 + 403 + if (feature_hdrlen && hdr->hdr_len) 404 + 
__virtio_net_set_hdrlen(skb, hdr, little_endian); 405 + 406 + return ret; 407 + } 434 408 435 409 /* Tunnel support not negotiated but skb ask for it. */ 436 410 if (!tnl_hdr_negotiated) ··· 455 413 skb_shinfo(skb)->gso_type |= tnl_gso_type; 456 414 if (ret) 457 415 return ret; 416 + 417 + if (feature_hdrlen && hdr->hdr_len) 418 + __virtio_net_set_tnl_hdrlen(skb, hdr); 458 419 459 420 if (skb->protocol == htons(ETH_P_IPV6)) 460 421 hdr->gso_type |= VIRTIO_NET_HDR_GSO_UDP_TUNNEL_IPV6;
+1
include/net/bluetooth/l2cap.h
··· 658 658 struct sk_buff *rx_skb; 659 659 __u32 rx_len; 660 660 struct ida tx_ida; 661 + __u8 tx_ident; 661 662 662 663 struct sk_buff_head pending_rx; 663 664 struct work_struct pending_rx_work;
+1
include/net/codel_impl.h
··· 158 158 bool drop; 159 159 160 160 if (!skb) { 161 + vars->first_above_time = 0; 161 162 vars->dropping = false; 162 163 return skb; 163 164 }
+14
include/net/inet_hashtables.h
··· 264 264 return &hinfo->bhash2[hash & (hinfo->bhash_size - 1)]; 265 265 } 266 266 267 + static inline bool inet_use_hash2_on_bind(const struct sock *sk) 268 + { 269 + #if IS_ENABLED(CONFIG_IPV6) 270 + if (sk->sk_family == AF_INET6) { 271 + if (ipv6_addr_any(&sk->sk_v6_rcv_saddr)) 272 + return false; 273 + 274 + if (!ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr)) 275 + return true; 276 + } 277 + #endif 278 + return sk->sk_rcv_saddr != htonl(INADDR_ANY); 279 + } 280 + 267 281 struct inet_bind_hashbucket * 268 282 inet_bhash2_addr_any_hashbucket(const struct sock *sk, const struct net *net, int port); 269 283
+20 -1
include/net/ip6_fib.h
··· 507 507 void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info, 508 508 unsigned int flags); 509 509 510 + void fib6_age_exceptions(struct fib6_info *rt, struct fib6_gc_args *gc_args, 511 + unsigned long now); 510 512 void fib6_run_gc(unsigned long expires, struct net *net, bool force); 511 - 512 513 void fib6_gc_cleanup(void); 513 514 514 515 int fib6_init(void); 515 516 517 + #if IS_ENABLED(CONFIG_IPV6) 516 518 /* Add the route to the gc list if it is not already there 517 519 * 518 520 * The callers should hold f6i->fib6_table->tb6_lock. ··· 546 544 if (!hlist_unhashed(&f6i->gc_link)) 547 545 hlist_del_init(&f6i->gc_link); 548 546 } 547 + 548 + static inline void fib6_may_remove_gc_list(struct net *net, 549 + struct fib6_info *f6i) 550 + { 551 + struct fib6_gc_args gc_args; 552 + 553 + if (hlist_unhashed(&f6i->gc_link)) 554 + return; 555 + 556 + gc_args.timeout = READ_ONCE(net->ipv6.sysctl.ip6_rt_gc_interval); 557 + gc_args.more = 0; 558 + 559 + rcu_read_lock(); 560 + fib6_age_exceptions(f6i, &gc_args, jiffies); 561 + rcu_read_unlock(); 562 + } 563 + #endif 549 564 550 565 struct ipv6_route_iter { 551 566 struct seq_net_private p;
+5
include/net/netfilter/nf_conntrack_core.h
··· 83 83 84 84 extern spinlock_t nf_conntrack_expect_lock; 85 85 86 + static inline void lockdep_nfct_expect_lock_held(void) 87 + { 88 + lockdep_assert_held(&nf_conntrack_expect_lock); 89 + } 90 + 86 91 /* ctnetlink code shared by both ctnetlink and nf_conntrack_bpf */ 87 92 88 93 static inline void __nf_ct_set_timeout(struct nf_conn *ct, u64 timeout)
+18 -2
include/net/netfilter/nf_conntrack_expect.h
··· 22 22 /* Hash member */ 23 23 struct hlist_node hnode; 24 24 25 + /* Network namespace */ 26 + possible_net_t net; 27 + 25 28 /* We expect this tuple, with the following mask */ 26 29 struct nf_conntrack_tuple tuple; 27 30 struct nf_conntrack_tuple_mask mask; 28 31 32 + #ifdef CONFIG_NF_CONNTRACK_ZONES 33 + struct nf_conntrack_zone zone; 34 + #endif 29 35 /* Usage count. */ 30 36 refcount_t use; 31 37 ··· 46 40 struct nf_conntrack_expect *this); 47 41 48 42 /* Helper to assign to new connection */ 49 - struct nf_conntrack_helper *helper; 43 + struct nf_conntrack_helper __rcu *helper; 50 44 51 45 /* The conntrack of the master connection */ 52 46 struct nf_conn *master; ··· 68 62 69 63 static inline struct net *nf_ct_exp_net(struct nf_conntrack_expect *exp) 70 64 { 71 - return nf_ct_net(exp->master); 65 + return read_pnet(&exp->net); 66 + } 67 + 68 + static inline bool nf_ct_exp_zone_equal_any(const struct nf_conntrack_expect *a, 69 + const struct nf_conntrack_zone *b) 70 + { 71 + #ifdef CONFIG_NF_CONNTRACK_ZONES 72 + return a->zone.id == b->id; 73 + #else 74 + return true; 75 + #endif 72 76 } 73 77 74 78 #define NF_CT_EXP_POLICY_NAME_LEN 16
+1 -1
include/net/netns/xfrm.h
··· 59 59 struct list_head inexact_bins; 60 60 61 61 62 - struct sock *nlsk; 62 + struct sock __rcu *nlsk; 63 63 struct sock *nlsk_stash; 64 64 65 65 u32 sysctl_aevent_etime;
+3 -1
include/trace/events/dma.h
··· 32 32 { DMA_ATTR_ALLOC_SINGLE_PAGES, "ALLOC_SINGLE_PAGES" }, \ 33 33 { DMA_ATTR_NO_WARN, "NO_WARN" }, \ 34 34 { DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \ 35 - { DMA_ATTR_MMIO, "MMIO" }) 35 + { DMA_ATTR_MMIO, "MMIO" }, \ 36 + { DMA_ATTR_DEBUGGING_IGNORE_CACHELINES, "CACHELINES_OVERLAP" }, \ 37 + { DMA_ATTR_REQUIRE_COHERENT, "REQUIRE_COHERENT" }) 36 38 37 39 DECLARE_EVENT_CLASS(dma_map, 38 40 TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
+5 -2
include/trace/events/task.h
··· 38 38 TP_ARGS(task, comm), 39 39 40 40 TP_STRUCT__entry( 41 + __field( pid_t, pid) 41 42 __array( char, oldcomm, TASK_COMM_LEN) 42 43 __array( char, newcomm, TASK_COMM_LEN) 43 44 __field( short, oom_score_adj) 44 45 ), 45 46 46 47 TP_fast_assign( 48 + __entry->pid = task->pid; 47 49 memcpy(entry->oldcomm, task->comm, TASK_COMM_LEN); 48 50 strscpy(entry->newcomm, comm, TASK_COMM_LEN); 49 51 __entry->oom_score_adj = task->signal->oom_score_adj; 50 52 ), 51 53 52 - TP_printk("oldcomm=%s newcomm=%s oom_score_adj=%hd", 53 - __entry->oldcomm, __entry->newcomm, __entry->oom_score_adj) 54 + TP_printk("pid=%d oldcomm=%s newcomm=%s oom_score_adj=%hd", 55 + __entry->pid, __entry->oldcomm, 56 + __entry->newcomm, __entry->oom_score_adj) 54 57 ); 55 58 56 59 /**
+4
include/uapi/linux/netfilter/nf_conntrack_common.h
··· 159 159 #define NF_CT_EXPECT_INACTIVE 0x2 160 160 #define NF_CT_EXPECT_USERSPACE 0x4 161 161 162 + #ifdef __KERNEL__ 163 + #define NF_CT_EXPECT_MASK (NF_CT_EXPECT_PERMANENT | NF_CT_EXPECT_INACTIVE | \ 164 + NF_CT_EXPECT_USERSPACE) 165 + #endif 162 166 163 167 #endif /* _UAPI_NF_CONNTRACK_COMMON_H */
+1 -1
init/Kconfig
··· 146 146 config CC_HAS_COUNTED_BY_PTR 147 147 bool 148 148 # supported since clang 22 149 - default y if CC_IS_CLANG && CLANG_VERSION >= 220000 149 + default y if CC_IS_CLANG && CLANG_VERSION >= 220100 150 150 # supported since gcc 16.0.0 151 151 default y if CC_IS_GCC && GCC_VERSION >= 160000 152 152
+11 -3
io_uring/kbuf.c
··· 34 34 35 35 static bool io_kbuf_inc_commit(struct io_buffer_list *bl, int len) 36 36 { 37 + /* No data consumed, return false early to avoid consuming the buffer */ 38 + if (!len) 39 + return false; 40 + 37 41 while (len) { 38 42 struct io_uring_buf *buf; 39 43 u32 buf_len, this_len; ··· 216 212 sel.addr = u64_to_user_ptr(READ_ONCE(buf->addr)); 217 213 218 214 if (io_should_commit(req, issue_flags)) { 219 - io_kbuf_commit(req, sel.buf_list, *len, 1); 215 + if (!io_kbuf_commit(req, sel.buf_list, *len, 1)) 216 + req->flags |= REQ_F_BUF_MORE; 220 217 sel.buf_list = NULL; 221 218 } 222 219 return sel; ··· 350 345 */ 351 346 if (ret > 0) { 352 347 req->flags |= REQ_F_BUFFERS_COMMIT | REQ_F_BL_NO_RECYCLE; 353 - io_kbuf_commit(req, sel->buf_list, arg->out_len, ret); 348 + if (!io_kbuf_commit(req, sel->buf_list, arg->out_len, ret)) 349 + req->flags |= REQ_F_BUF_MORE; 354 350 } 355 351 } else { 356 352 ret = io_provided_buffers_select(req, &arg->out_len, sel->buf_list, arg->iovs); ··· 397 391 398 392 if (bl) 399 393 ret = io_kbuf_commit(req, bl, len, nr); 394 + if (ret && (req->flags & REQ_F_BUF_MORE)) 395 + ret = false; 400 396 401 - req->flags &= ~REQ_F_BUFFER_RING; 397 + req->flags &= ~(REQ_F_BUFFER_RING | REQ_F_BUF_MORE); 402 398 return ret; 403 399 } 404 400
+7 -2
io_uring/poll.c
··· 272 272 atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs); 273 273 v &= ~IO_POLL_RETRY_FLAG; 274 274 } 275 + v &= IO_POLL_REF_MASK; 275 276 } 276 277 277 278 /* the mask was stashed in __io_poll_execute */ ··· 305 304 return IOU_POLL_REMOVE_POLL_USE_RES; 306 305 } 307 306 } else { 308 - int ret = io_poll_issue(req, tw); 307 + int ret; 309 308 309 + /* multiple refs and HUP, ensure we loop once more */ 310 + if ((req->cqe.res & (POLLHUP | POLLRDHUP)) && v != 1) 311 + v--; 312 + 313 + ret = io_poll_issue(req, tw); 310 314 if (ret == IOU_COMPLETE) 311 315 return IOU_POLL_REMOVE_POLL_USE_RES; 312 316 else if (ret == IOU_REQUEUE) ··· 327 321 * Release all references, retry if someone tried to restart 328 322 * task_work while we were executing it. 329 323 */ 330 - v &= IO_POLL_REF_MASK; 331 324 } while (atomic_sub_return(v, &req->poll_refs) & IO_POLL_REF_MASK); 332 325 333 326 io_napi_add(req);
+20 -4
kernel/bpf/btf.c
··· 1787 1787 * of the _bh() version. 1788 1788 */ 1789 1789 spin_lock_irqsave(&btf_idr_lock, flags); 1790 - idr_remove(&btf_idr, btf->id); 1790 + if (btf->id) { 1791 + idr_remove(&btf_idr, btf->id); 1792 + /* 1793 + * Clear the id here to make this function idempotent, since it will get 1794 + * called a couple of times for module BTFs: on module unload, and then 1795 + * the final btf_put(). btf_alloc_id() starts IDs with 1, so we can use 1796 + * 0 as sentinel value. 1797 + */ 1798 + WRITE_ONCE(btf->id, 0); 1799 + } 1791 1800 spin_unlock_irqrestore(&btf_idr_lock, flags); 1792 1801 } 1793 1802 ··· 8124 8115 { 8125 8116 const struct btf *btf = filp->private_data; 8126 8117 8127 - seq_printf(m, "btf_id:\t%u\n", btf->id); 8118 + seq_printf(m, "btf_id:\t%u\n", READ_ONCE(btf->id)); 8128 8119 } 8129 8120 #endif 8130 8121 ··· 8206 8197 if (copy_from_user(&info, uinfo, info_copy)) 8207 8198 return -EFAULT; 8208 8199 8209 - info.id = btf->id; 8200 + info.id = READ_ONCE(btf->id); 8210 8201 ubtf = u64_to_user_ptr(info.btf); 8211 8202 btf_copy = min_t(u32, btf->data_size, info.btf_size); 8212 8203 if (copy_to_user(ubtf, btf->data, btf_copy)) ··· 8269 8260 8270 8261 u32 btf_obj_id(const struct btf *btf) 8271 8262 { 8272 - return btf->id; 8263 + return READ_ONCE(btf->id); 8273 8264 } 8274 8265 8275 8266 bool btf_is_kernel(const struct btf *btf) ··· 8391 8382 if (btf_mod->module != module) 8392 8383 continue; 8393 8384 8385 + /* 8386 + * For modules, we do the freeing of BTF IDR as soon as 8387 + * module goes away to disable BTF discovery, since the 8388 + * btf_try_get_module() on such BTFs will fail. This may 8389 + * be called again on btf_put(), but it's ok to do so. 8390 + */ 8391 + btf_free_id(btf_mod->btf); 8394 8392 list_del(&btf_mod->list); 8395 8393 if (btf_mod->sysfs_attr) 8396 8394 sysfs_remove_bin_file(btf_kobj, btf_mod->sysfs_attr);
+35 -8
kernel/bpf/core.c
··· 1422 1422 *to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd); 1423 1423 *to++ = BPF_STX_MEM(from->code, from->dst_reg, BPF_REG_AX, from->off); 1424 1424 break; 1425 + 1426 + case BPF_ST | BPF_PROBE_MEM32 | BPF_DW: 1427 + case BPF_ST | BPF_PROBE_MEM32 | BPF_W: 1428 + case BPF_ST | BPF_PROBE_MEM32 | BPF_H: 1429 + case BPF_ST | BPF_PROBE_MEM32 | BPF_B: 1430 + *to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ 1431 + from->imm); 1432 + *to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd); 1433 + /* 1434 + * Cannot use BPF_STX_MEM() macro here as it 1435 + * hardcodes BPF_MEM mode, losing PROBE_MEM32 1436 + * and breaking arena addressing in the JIT. 1437 + */ 1438 + *to++ = (struct bpf_insn) { 1439 + .code = BPF_STX | BPF_PROBE_MEM32 | 1440 + BPF_SIZE(from->code), 1441 + .dst_reg = from->dst_reg, 1442 + .src_reg = BPF_REG_AX, 1443 + .off = from->off, 1444 + }; 1445 + break; 1425 1446 } 1426 1447 out: 1427 1448 return to - to_buff; ··· 1757 1736 } 1758 1737 1759 1738 #ifndef CONFIG_BPF_JIT_ALWAYS_ON 1739 + /* Absolute value of s32 without undefined behavior for S32_MIN */ 1740 + static u32 abs_s32(s32 x) 1741 + { 1742 + return x >= 0 ? 
(u32)x : -(u32)x; 1743 + } 1744 + 1760 1745 /** 1761 1746 * ___bpf_prog_run - run eBPF program on a given context 1762 1747 * @regs: is the array of MAX_BPF_EXT_REG eBPF pseudo-registers ··· 1927 1900 DST = do_div(AX, (u32) SRC); 1928 1901 break; 1929 1902 case 1: 1930 - AX = abs((s32)DST); 1931 - AX = do_div(AX, abs((s32)SRC)); 1903 + AX = abs_s32((s32)DST); 1904 + AX = do_div(AX, abs_s32((s32)SRC)); 1932 1905 if ((s32)DST < 0) 1933 1906 DST = (u32)-AX; 1934 1907 else ··· 1955 1928 DST = do_div(AX, (u32) IMM); 1956 1929 break; 1957 1930 case 1: 1958 - AX = abs((s32)DST); 1959 - AX = do_div(AX, abs((s32)IMM)); 1931 + AX = abs_s32((s32)DST); 1932 + AX = do_div(AX, abs_s32((s32)IMM)); 1960 1933 if ((s32)DST < 0) 1961 1934 DST = (u32)-AX; 1962 1935 else ··· 1982 1955 DST = (u32) AX; 1983 1956 break; 1984 1957 case 1: 1985 - AX = abs((s32)DST); 1986 - do_div(AX, abs((s32)SRC)); 1958 + AX = abs_s32((s32)DST); 1959 + do_div(AX, abs_s32((s32)SRC)); 1987 1960 if (((s32)DST < 0) == ((s32)SRC < 0)) 1988 1961 DST = (u32)AX; 1989 1962 else ··· 2009 1982 DST = (u32) AX; 2010 1983 break; 2011 1984 case 1: 2012 - AX = abs((s32)DST); 2013 - do_div(AX, abs((s32)IMM)); 1985 + AX = abs_s32((s32)DST); 1986 + do_div(AX, abs_s32((s32)IMM)); 2014 1987 if (((s32)DST < 0) == ((s32)IMM < 0)) 2015 1988 DST = (u32)AX; 2016 1989 else
+25 -8
kernel/bpf/verifier.c
··· 15910 15910 /* Apply bswap if alu64 or switch between big-endian and little-endian machines */ 15911 15911 bool need_bswap = alu64 || (to_le == is_big_endian); 15912 15912 15913 + /* 15914 + * If the register is mutated, manually reset its scalar ID to break 15915 + * any existing ties and avoid incorrect bounds propagation. 15916 + */ 15917 + if (need_bswap || insn->imm == 16 || insn->imm == 32) 15918 + dst_reg->id = 0; 15919 + 15913 15920 if (need_bswap) { 15914 15921 if (insn->imm == 16) 15915 15922 dst_reg->var_off = tnum_bswap16(dst_reg->var_off); ··· 15999 15992 else 16000 15993 return 0; 16001 15994 16002 - branch = push_stack(env, env->insn_idx + 1, env->insn_idx, false); 15995 + branch = push_stack(env, env->insn_idx, env->insn_idx, false); 16003 15996 if (IS_ERR(branch)) 16004 15997 return PTR_ERR(branch); 16005 15998 ··· 17415 17408 continue; 17416 17409 if ((reg->id & ~BPF_ADD_CONST) != (known_reg->id & ~BPF_ADD_CONST)) 17417 17410 continue; 17411 + /* 17412 + * Skip mixed 32/64-bit links: the delta relationship doesn't 17413 + * hold across different ALU widths. 17414 + */ 17415 + if (((reg->id ^ known_reg->id) & BPF_ADD_CONST) == BPF_ADD_CONST) 17416 + continue; 17418 17417 if ((!(reg->id & BPF_ADD_CONST) && !(known_reg->id & BPF_ADD_CONST)) || 17419 17418 reg->off == known_reg->off) { 17420 17419 s32 saved_subreg_def = reg->subreg_def; ··· 17448 17435 scalar32_min_max_add(reg, &fake_reg); 17449 17436 scalar_min_max_add(reg, &fake_reg); 17450 17437 reg->var_off = tnum_add(reg->var_off, fake_reg.var_off); 17451 - if (known_reg->id & BPF_ADD_CONST32) 17438 + if ((reg->id | known_reg->id) & BPF_ADD_CONST32) 17452 17439 zext_32_to_64(reg); 17453 17440 reg_bounds_sync(reg); 17454 17441 } ··· 19876 19863 * Also verify that new value satisfies old value range knowledge. 
19877 19864 */ 19878 19865 19879 - /* ADD_CONST mismatch: different linking semantics */ 19880 - if ((rold->id & BPF_ADD_CONST) && !(rcur->id & BPF_ADD_CONST)) 19881 - return false; 19882 - 19883 - if (rold->id && !(rold->id & BPF_ADD_CONST) && (rcur->id & BPF_ADD_CONST)) 19866 + /* 19867 + * ADD_CONST flags must match exactly: BPF_ADD_CONST32 and 19868 + * BPF_ADD_CONST64 have different linking semantics in 19869 + * sync_linked_regs() (alu32 zero-extends, alu64 does not), 19870 + * so pruning across different flag types is unsafe. 19871 + */ 19872 + if (rold->id && 19873 + (rold->id & BPF_ADD_CONST) != (rcur->id & BPF_ADD_CONST)) 19884 19874 return false; 19885 19875 19886 19876 /* Both have offset linkage: offsets must match */ ··· 20920 20904 * state when it exits. 20921 20905 */ 20922 20906 int err = check_resource_leak(env, exception_exit, 20923 - !env->cur_state->curframe, 20907 + exception_exit || !env->cur_state->curframe, 20908 + exception_exit ? "bpf_throw" : 20924 20909 "BPF_EXIT instruction in main prog"); 20925 20910 if (err) 20926 20911 return err;
+5 -4
kernel/dma/debug.c
··· 453 453 return overlap; 454 454 } 455 455 456 - static void active_cacheline_inc_overlap(phys_addr_t cln) 456 + static void active_cacheline_inc_overlap(phys_addr_t cln, bool is_cache_clean) 457 457 { 458 458 int overlap = active_cacheline_read_overlap(cln); 459 459 ··· 462 462 /* If we overflowed the overlap counter then we're potentially 463 463 * leaking dma-mappings. 464 464 */ 465 - WARN_ONCE(overlap > ACTIVE_CACHELINE_MAX_OVERLAP, 465 + WARN_ONCE(!is_cache_clean && overlap > ACTIVE_CACHELINE_MAX_OVERLAP, 466 466 pr_fmt("exceeded %d overlapping mappings of cacheline %pa\n"), 467 467 ACTIVE_CACHELINE_MAX_OVERLAP, &cln); 468 468 } ··· 495 495 if (rc == -EEXIST) { 496 496 struct dma_debug_entry *existing; 497 497 498 - active_cacheline_inc_overlap(cln); 498 + active_cacheline_inc_overlap(cln, entry->is_cache_clean); 499 499 existing = radix_tree_lookup(&dma_active_cacheline, cln); 500 500 /* A lookup failure here after we got -EEXIST is unexpected. */ 501 501 WARN_ON(!existing); ··· 601 601 unsigned long flags; 602 602 int rc; 603 603 604 - entry->is_cache_clean = !!(attrs & DMA_ATTR_CPU_CACHE_CLEAN); 604 + entry->is_cache_clean = attrs & (DMA_ATTR_DEBUGGING_IGNORE_CACHELINES | 605 + DMA_ATTR_REQUIRE_COHERENT); 605 606 606 607 bucket = get_hash_bucket(entry, &flags); 607 608 hash_bucket_add(bucket, entry);
+4 -3
kernel/dma/direct.h
··· 84 84 dma_addr_t dma_addr; 85 85 86 86 if (is_swiotlb_force_bounce(dev)) { 87 - if (attrs & DMA_ATTR_MMIO) 87 + if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)) 88 88 return DMA_MAPPING_ERROR; 89 89 90 90 return swiotlb_map(dev, phys, size, dir, attrs); ··· 98 98 dma_addr = phys_to_dma(dev, phys); 99 99 if (unlikely(!dma_capable(dev, dma_addr, size, true)) || 100 100 dma_kmalloc_needs_bounce(dev, size, dir)) { 101 - if (is_swiotlb_active(dev)) 101 + if (is_swiotlb_active(dev) && 102 + !(attrs & DMA_ATTR_REQUIRE_COHERENT)) 102 103 return swiotlb_map(dev, phys, size, dir, attrs); 103 104 104 105 goto err_overflow; ··· 124 123 { 125 124 phys_addr_t phys; 126 125 127 - if (attrs & DMA_ATTR_MMIO) 126 + if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)) 128 127 /* nothing to do: uncached and no swiotlb */ 129 128 return; 130 129
+6
kernel/dma/mapping.c
··· 164 164 if (WARN_ON_ONCE(!dev->dma_mask)) 165 165 return DMA_MAPPING_ERROR; 166 166 167 + if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT)) 168 + return DMA_MAPPING_ERROR; 169 + 167 170 if (dma_map_direct(dev, ops) || 168 171 (!is_mmio && arch_dma_map_phys_direct(dev, phys + size))) 169 172 addr = dma_direct_map_phys(dev, phys, size, dir, attrs); ··· 237 234 int ents; 238 235 239 236 BUG_ON(!valid_dma_direction(dir)); 237 + 238 + if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT)) 239 + return -EOPNOTSUPP; 240 240 241 241 if (WARN_ON_ONCE(!dev->dma_mask)) 242 242 return 0;
+19 -2
kernel/dma/swiotlb.c
··· 30 30 #include <linux/gfp.h> 31 31 #include <linux/highmem.h> 32 32 #include <linux/io.h> 33 + #include <linux/kmsan-checks.h> 33 34 #include <linux/iommu-helper.h> 34 35 #include <linux/init.h> 35 36 #include <linux/memblock.h> ··· 902 901 903 902 local_irq_save(flags); 904 903 page = pfn_to_page(pfn); 905 - if (dir == DMA_TO_DEVICE) 904 + if (dir == DMA_TO_DEVICE) { 905 + /* 906 + * Ideally, kmsan_check_highmem_page() 907 + * could be used here to detect infoleaks, 908 + * but callers may map uninitialized buffers 909 + * that will be written by the device, 910 + * causing false positives. 911 + */ 906 912 memcpy_from_page(vaddr, page, offset, sz); 907 - else 913 + } else { 914 + kmsan_unpoison_memory(vaddr, sz); 908 915 memcpy_to_page(page, offset, vaddr, sz); 916 + } 909 917 local_irq_restore(flags); 910 918 911 919 size -= sz; ··· 923 913 offset = 0; 924 914 } 925 915 } else if (dir == DMA_TO_DEVICE) { 916 + /* 917 + * Ideally, kmsan_check_memory() could be used here to detect 918 + * infoleaks (uninitialized data being sent to device), but 919 + * callers may map uninitialized buffers that will be written 920 + * by the device, causing false positives. 921 + */ 926 922 memcpy(vaddr, phys_to_virt(orig_addr), size); 927 923 } else { 924 + kmsan_unpoison_memory(vaddr, size); 928 925 memcpy(phys_to_virt(orig_addr), vaddr, size); 929 926 } 930 927 }
+8 -11
kernel/events/core.c
··· 4813 4813 struct perf_event *sub, *event = data->event; 4814 4814 struct perf_event_context *ctx = event->ctx; 4815 4815 struct perf_cpu_context *cpuctx = this_cpu_ptr(&perf_cpu_context); 4816 - struct pmu *pmu = event->pmu; 4816 + struct pmu *pmu; 4817 4817 4818 4818 /* 4819 4819 * If this is a task context, we need to check whether it is ··· 4825 4825 if (ctx->task && cpuctx->task_ctx != ctx) 4826 4826 return; 4827 4827 4828 - raw_spin_lock(&ctx->lock); 4828 + guard(raw_spinlock)(&ctx->lock); 4829 4829 ctx_time_update_event(ctx, event); 4830 4830 4831 4831 perf_event_update_time(event); ··· 4833 4833 perf_event_update_sibling_time(event); 4834 4834 4835 4835 if (event->state != PERF_EVENT_STATE_ACTIVE) 4836 - goto unlock; 4836 + return; 4837 4837 4838 4838 if (!data->group) { 4839 - pmu->read(event); 4839 + perf_pmu_read(event); 4840 4840 data->ret = 0; 4841 - goto unlock; 4841 + return; 4842 4842 } 4843 4843 4844 + pmu = event->pmu_ctx->pmu; 4844 4845 pmu->start_txn(pmu, PERF_PMU_TXN_READ); 4845 4846 4846 - pmu->read(event); 4847 - 4847 + perf_pmu_read(event); 4848 4848 for_each_sibling_event(sub, event) 4849 4849 perf_pmu_read(sub); 4850 4850 4851 4851 data->ret = pmu->commit_txn(pmu); 4852 - 4853 - unlock: 4854 - raw_spin_unlock(&ctx->lock); 4855 4852 } 4856 4853 4857 4854 static inline u64 perf_event_count(struct perf_event *event, bool self) ··· 14741 14744 get_ctx(child_ctx); 14742 14745 child_event->ctx = child_ctx; 14743 14746 14744 - pmu_ctx = find_get_pmu_context(child_event->pmu, child_ctx, child_event); 14747 + pmu_ctx = find_get_pmu_context(parent_event->pmu_ctx->pmu, child_ctx, child_event); 14745 14748 if (IS_ERR(pmu_ctx)) { 14746 14749 free_event(child_event); 14747 14750 return ERR_CAST(pmu_ctx);
+9
kernel/rcu/rcu.h
··· 502 502 ___locked; \ 503 503 }) 504 504 505 + #define raw_spin_trylock_irqsave_rcu_node(p, flags) \ 506 + ({ \ 507 + bool ___locked = raw_spin_trylock_irqsave(&ACCESS_PRIVATE(p, lock), flags); \ 508 + \ 509 + if (___locked) \ 510 + smp_mb__after_unlock_lock(); \ 511 + ___locked; \ 512 + }) 513 + 505 514 #define raw_lockdep_assert_held_rcu_node(p) \ 506 515 lockdep_assert_held(&ACCESS_PRIVATE(p, lock)) 507 516
+18 -1
kernel/rcu/srcutiny.c
··· 9 9 */ 10 10 11 11 #include <linux/export.h> 12 + #include <linux/irq_work.h> 12 13 #include <linux/mutex.h> 13 14 #include <linux/preempt.h> 14 15 #include <linux/rcupdate_wait.h> ··· 42 41 ssp->srcu_idx_max = 0; 43 42 INIT_WORK(&ssp->srcu_work, srcu_drive_gp); 44 43 INIT_LIST_HEAD(&ssp->srcu_work.entry); 44 + init_irq_work(&ssp->srcu_irq_work, srcu_tiny_irq_work); 45 45 return 0; 46 46 } 47 47 ··· 86 84 void cleanup_srcu_struct(struct srcu_struct *ssp) 87 85 { 88 86 WARN_ON(ssp->srcu_lock_nesting[0] || ssp->srcu_lock_nesting[1]); 87 + irq_work_sync(&ssp->srcu_irq_work); 89 88 flush_work(&ssp->srcu_work); 90 89 WARN_ON(ssp->srcu_gp_running); 91 90 WARN_ON(ssp->srcu_gp_waiting); ··· 180 177 } 181 178 EXPORT_SYMBOL_GPL(srcu_drive_gp); 182 179 180 + /* 181 + * Use an irq_work to defer schedule_work() to avoid acquiring the workqueue 182 + * pool->lock while the caller might hold scheduler locks, causing lockdep 183 + * splats due to workqueue_init() doing a wakeup. 184 + */ 185 + void srcu_tiny_irq_work(struct irq_work *irq_work) 186 + { 187 + struct srcu_struct *ssp; 188 + 189 + ssp = container_of(irq_work, struct srcu_struct, srcu_irq_work); 190 + schedule_work(&ssp->srcu_work); 191 + } 192 + EXPORT_SYMBOL_GPL(srcu_tiny_irq_work); 193 + 183 194 static void srcu_gp_start_if_needed(struct srcu_struct *ssp) 184 195 { 185 196 unsigned long cookie; ··· 206 189 WRITE_ONCE(ssp->srcu_idx_max, cookie); 207 190 if (!READ_ONCE(ssp->srcu_gp_running)) { 208 191 if (likely(srcu_init_done)) 209 - schedule_work(&ssp->srcu_work); 192 + irq_work_queue(&ssp->srcu_irq_work); 210 193 else if (list_empty(&ssp->srcu_work.entry)) 211 194 list_add(&ssp->srcu_work.entry, &srcu_boot_list); 212 195 }
+102 -109
kernel/rcu/srcutree.c
··· 19 19 #include <linux/mutex.h> 20 20 #include <linux/percpu.h> 21 21 #include <linux/preempt.h> 22 + #include <linux/irq_work.h> 22 23 #include <linux/rcupdate_wait.h> 23 24 #include <linux/sched.h> 24 25 #include <linux/smp.h> ··· 76 75 static void srcu_invoke_callbacks(struct work_struct *work); 77 76 static void srcu_reschedule(struct srcu_struct *ssp, unsigned long delay); 78 77 static void process_srcu(struct work_struct *work); 78 + static void srcu_irq_work(struct irq_work *work); 79 79 static void srcu_delay_timer(struct timer_list *t); 80 - 81 - /* Wrappers for lock acquisition and release, see raw_spin_lock_rcu_node(). */ 82 - #define spin_lock_rcu_node(p) \ 83 - do { \ 84 - spin_lock(&ACCESS_PRIVATE(p, lock)); \ 85 - smp_mb__after_unlock_lock(); \ 86 - } while (0) 87 - 88 - #define spin_unlock_rcu_node(p) spin_unlock(&ACCESS_PRIVATE(p, lock)) 89 - 90 - #define spin_lock_irq_rcu_node(p) \ 91 - do { \ 92 - spin_lock_irq(&ACCESS_PRIVATE(p, lock)); \ 93 - smp_mb__after_unlock_lock(); \ 94 - } while (0) 95 - 96 - #define spin_unlock_irq_rcu_node(p) \ 97 - spin_unlock_irq(&ACCESS_PRIVATE(p, lock)) 98 - 99 - #define spin_lock_irqsave_rcu_node(p, flags) \ 100 - do { \ 101 - spin_lock_irqsave(&ACCESS_PRIVATE(p, lock), flags); \ 102 - smp_mb__after_unlock_lock(); \ 103 - } while (0) 104 - 105 - #define spin_trylock_irqsave_rcu_node(p, flags) \ 106 - ({ \ 107 - bool ___locked = spin_trylock_irqsave(&ACCESS_PRIVATE(p, lock), flags); \ 108 - \ 109 - if (___locked) \ 110 - smp_mb__after_unlock_lock(); \ 111 - ___locked; \ 112 - }) 113 - 114 - #define spin_unlock_irqrestore_rcu_node(p, flags) \ 115 - spin_unlock_irqrestore(&ACCESS_PRIVATE(p, lock), flags) \ 116 80 117 81 /* 118 82 * Initialize SRCU per-CPU data. 
Note that statically allocated ··· 97 131 */ 98 132 for_each_possible_cpu(cpu) { 99 133 sdp = per_cpu_ptr(ssp->sda, cpu); 100 - spin_lock_init(&ACCESS_PRIVATE(sdp, lock)); 134 + raw_spin_lock_init(&ACCESS_PRIVATE(sdp, lock)); 101 135 rcu_segcblist_init(&sdp->srcu_cblist); 102 136 sdp->srcu_cblist_invoking = false; 103 137 sdp->srcu_gp_seq_needed = ssp->srcu_sup->srcu_gp_seq; ··· 152 186 153 187 /* Each pass through this loop initializes one srcu_node structure. */ 154 188 srcu_for_each_node_breadth_first(ssp, snp) { 155 - spin_lock_init(&ACCESS_PRIVATE(snp, lock)); 189 + raw_spin_lock_init(&ACCESS_PRIVATE(snp, lock)); 156 190 BUILD_BUG_ON(ARRAY_SIZE(snp->srcu_have_cbs) != 157 191 ARRAY_SIZE(snp->srcu_data_have_cbs)); 158 192 for (i = 0; i < ARRAY_SIZE(snp->srcu_have_cbs); i++) { ··· 208 242 if (!ssp->srcu_sup) 209 243 return -ENOMEM; 210 244 if (!is_static) 211 - spin_lock_init(&ACCESS_PRIVATE(ssp->srcu_sup, lock)); 245 + raw_spin_lock_init(&ACCESS_PRIVATE(ssp->srcu_sup, lock)); 212 246 ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL; 213 247 ssp->srcu_sup->node = NULL; 214 248 mutex_init(&ssp->srcu_sup->srcu_cb_mutex); ··· 218 252 mutex_init(&ssp->srcu_sup->srcu_barrier_mutex); 219 253 atomic_set(&ssp->srcu_sup->srcu_barrier_cpu_cnt, 0); 220 254 INIT_DELAYED_WORK(&ssp->srcu_sup->work, process_srcu); 255 + init_irq_work(&ssp->srcu_sup->irq_work, srcu_irq_work); 221 256 ssp->srcu_sup->sda_is_static = is_static; 222 257 if (!is_static) { 223 258 ssp->sda = alloc_percpu(struct srcu_data); ··· 230 263 ssp->srcu_sup->srcu_gp_seq_needed_exp = SRCU_GP_SEQ_INITIAL_VAL; 231 264 ssp->srcu_sup->srcu_last_gp_end = ktime_get_mono_fast_ns(); 232 265 if (READ_ONCE(ssp->srcu_sup->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) { 233 - if (!init_srcu_struct_nodes(ssp, is_static ? 
GFP_ATOMIC : GFP_KERNEL)) 266 + if (!preemptible()) 267 + WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_ALLOC); 268 + else if (init_srcu_struct_nodes(ssp, GFP_KERNEL)) 269 + WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG); 270 + else 234 271 goto err_free_sda; 235 - WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG); 236 272 } 237 273 ssp->srcu_sup->srcu_ssp = ssp; 238 274 smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed, ··· 364 394 /* Double-checked locking on ->srcu_size-state. */ 365 395 if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL) 366 396 return; 367 - spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags); 397 + raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags); 368 398 if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL) { 369 - spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 399 + raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 370 400 return; 371 401 } 372 402 __srcu_transition_to_big(ssp); 373 - spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 403 + raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 374 404 } 375 405 376 406 /* 377 407 * Check to see if the just-encountered contention event justifies 378 408 * a transition to SRCU_SIZE_BIG. 379 409 */ 380 - static void spin_lock_irqsave_check_contention(struct srcu_struct *ssp) 410 + static void raw_spin_lock_irqsave_check_contention(struct srcu_struct *ssp) 381 411 { 382 412 unsigned long j; 383 413 ··· 399 429 * to SRCU_SIZE_BIG. But only if the srcutree.convert_to_big module 400 430 * parameter permits this. 
401 431 */ 402 - static void spin_lock_irqsave_sdp_contention(struct srcu_data *sdp, unsigned long *flags) 432 + static void raw_spin_lock_irqsave_sdp_contention(struct srcu_data *sdp, unsigned long *flags) 403 433 { 404 434 struct srcu_struct *ssp = sdp->ssp; 405 435 406 - if (spin_trylock_irqsave_rcu_node(sdp, *flags)) 436 + if (raw_spin_trylock_irqsave_rcu_node(sdp, *flags)) 407 437 return; 408 - spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags); 409 - spin_lock_irqsave_check_contention(ssp); 410 - spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, *flags); 411 - spin_lock_irqsave_rcu_node(sdp, *flags); 438 + raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags); 439 + raw_spin_lock_irqsave_check_contention(ssp); 440 + raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, *flags); 441 + raw_spin_lock_irqsave_rcu_node(sdp, *flags); 412 442 } 413 443 414 444 /* ··· 417 447 * to SRCU_SIZE_BIG. But only if the srcutree.convert_to_big module 418 448 * parameter permits this. 419 449 */ 420 - static void spin_lock_irqsave_ssp_contention(struct srcu_struct *ssp, unsigned long *flags) 450 + static void raw_spin_lock_irqsave_ssp_contention(struct srcu_struct *ssp, unsigned long *flags) 421 451 { 422 - if (spin_trylock_irqsave_rcu_node(ssp->srcu_sup, *flags)) 452 + if (raw_spin_trylock_irqsave_rcu_node(ssp->srcu_sup, *flags)) 423 453 return; 424 - spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags); 425 - spin_lock_irqsave_check_contention(ssp); 454 + raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags); 455 + raw_spin_lock_irqsave_check_contention(ssp); 426 456 } 427 457 428 458 /* ··· 440 470 /* The smp_load_acquire() pairs with the smp_store_release(). */ 441 471 if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq_needed))) /*^^^*/ 442 472 return; /* Already initialized. 
*/ 443 - spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags); 473 + raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags); 444 474 if (!rcu_seq_state(ssp->srcu_sup->srcu_gp_seq_needed)) { 445 - spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 475 + raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 446 476 return; 447 477 } 448 478 init_srcu_struct_fields(ssp, true); 449 - spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 479 + raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 450 480 } 451 481 452 482 /* ··· 712 742 unsigned long delay; 713 743 struct srcu_usage *sup = ssp->srcu_sup; 714 744 715 - spin_lock_irq_rcu_node(ssp->srcu_sup); 745 + raw_spin_lock_irq_rcu_node(ssp->srcu_sup); 716 746 delay = srcu_get_delay(ssp); 717 - spin_unlock_irq_rcu_node(ssp->srcu_sup); 747 + raw_spin_unlock_irq_rcu_node(ssp->srcu_sup); 718 748 if (WARN_ON(!delay)) 719 749 return; /* Just leak it! */ 720 750 if (WARN_ON(srcu_readers_active(ssp))) 721 751 return; /* Just leak it! */ 752 + /* Wait for irq_work to finish first as it may queue a new work. */ 753 + irq_work_sync(&sup->irq_work); 722 754 flush_delayed_work(&sup->work); 723 755 for_each_possible_cpu(cpu) { 724 756 struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu); ··· 932 960 mutex_lock(&sup->srcu_cb_mutex); 933 961 934 962 /* End the current grace period. */ 935 - spin_lock_irq_rcu_node(sup); 963 + raw_spin_lock_irq_rcu_node(sup); 936 964 idx = rcu_seq_state(sup->srcu_gp_seq); 937 965 WARN_ON_ONCE(idx != SRCU_STATE_SCAN2); 938 966 if (srcu_gp_is_expedited(ssp)) ··· 943 971 gpseq = rcu_seq_current(&sup->srcu_gp_seq); 944 972 if (ULONG_CMP_LT(sup->srcu_gp_seq_needed_exp, gpseq)) 945 973 WRITE_ONCE(sup->srcu_gp_seq_needed_exp, gpseq); 946 - spin_unlock_irq_rcu_node(sup); 974 + raw_spin_unlock_irq_rcu_node(sup); 947 975 mutex_unlock(&sup->srcu_gp_mutex); 948 976 /* A new grace period can start at this point. But only one. 
*/ 949 977 ··· 955 983 } else { 956 984 idx = rcu_seq_ctr(gpseq) % ARRAY_SIZE(snp->srcu_have_cbs); 957 985 srcu_for_each_node_breadth_first(ssp, snp) { 958 - spin_lock_irq_rcu_node(snp); 986 + raw_spin_lock_irq_rcu_node(snp); 959 987 cbs = false; 960 988 last_lvl = snp >= sup->level[rcu_num_lvls - 1]; 961 989 if (last_lvl) ··· 970 998 else 971 999 mask = snp->srcu_data_have_cbs[idx]; 972 1000 snp->srcu_data_have_cbs[idx] = 0; 973 - spin_unlock_irq_rcu_node(snp); 1001 + raw_spin_unlock_irq_rcu_node(snp); 974 1002 if (cbs) 975 1003 srcu_schedule_cbs_snp(ssp, snp, mask, cbdelay); 976 1004 } ··· 980 1008 if (!(gpseq & counter_wrap_check)) 981 1009 for_each_possible_cpu(cpu) { 982 1010 sdp = per_cpu_ptr(ssp->sda, cpu); 983 - spin_lock_irq_rcu_node(sdp); 1011 + raw_spin_lock_irq_rcu_node(sdp); 984 1012 if (ULONG_CMP_GE(gpseq, sdp->srcu_gp_seq_needed + 100)) 985 1013 sdp->srcu_gp_seq_needed = gpseq; 986 1014 if (ULONG_CMP_GE(gpseq, sdp->srcu_gp_seq_needed_exp + 100)) 987 1015 sdp->srcu_gp_seq_needed_exp = gpseq; 988 - spin_unlock_irq_rcu_node(sdp); 1016 + raw_spin_unlock_irq_rcu_node(sdp); 989 1017 } 990 1018 991 1019 /* Callback initiation done, allow grace periods after next. */ 992 1020 mutex_unlock(&sup->srcu_cb_mutex); 993 1021 994 1022 /* Start a new grace period if needed. */ 995 - spin_lock_irq_rcu_node(sup); 1023 + raw_spin_lock_irq_rcu_node(sup); 996 1024 gpseq = rcu_seq_current(&sup->srcu_gp_seq); 997 1025 if (!rcu_seq_state(gpseq) && 998 1026 ULONG_CMP_LT(gpseq, sup->srcu_gp_seq_needed)) { 999 1027 srcu_gp_start(ssp); 1000 - spin_unlock_irq_rcu_node(sup); 1028 + raw_spin_unlock_irq_rcu_node(sup); 1001 1029 srcu_reschedule(ssp, 0); 1002 1030 } else { 1003 - spin_unlock_irq_rcu_node(sup); 1031 + raw_spin_unlock_irq_rcu_node(sup); 1004 1032 } 1005 1033 1006 1034 /* Transition to big if needed. 
*/ ··· 1031 1059 if (WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_sup->srcu_gp_seq, s)) || 1032 1060 (!srcu_invl_snp_seq(sgsne) && ULONG_CMP_GE(sgsne, s))) 1033 1061 return; 1034 - spin_lock_irqsave_rcu_node(snp, flags); 1062 + raw_spin_lock_irqsave_rcu_node(snp, flags); 1035 1063 sgsne = snp->srcu_gp_seq_needed_exp; 1036 1064 if (!srcu_invl_snp_seq(sgsne) && ULONG_CMP_GE(sgsne, s)) { 1037 - spin_unlock_irqrestore_rcu_node(snp, flags); 1065 + raw_spin_unlock_irqrestore_rcu_node(snp, flags); 1038 1066 return; 1039 1067 } 1040 1068 WRITE_ONCE(snp->srcu_gp_seq_needed_exp, s); 1041 - spin_unlock_irqrestore_rcu_node(snp, flags); 1069 + raw_spin_unlock_irqrestore_rcu_node(snp, flags); 1042 1070 } 1043 - spin_lock_irqsave_ssp_contention(ssp, &flags); 1071 + raw_spin_lock_irqsave_ssp_contention(ssp, &flags); 1044 1072 if (ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed_exp, s)) 1045 1073 WRITE_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp, s); 1046 - spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 1074 + raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 1047 1075 } 1048 1076 1049 1077 /* ··· 1081 1109 for (snp = snp_leaf; snp != NULL; snp = snp->srcu_parent) { 1082 1110 if (WARN_ON_ONCE(rcu_seq_done(&sup->srcu_gp_seq, s)) && snp != snp_leaf) 1083 1111 return; /* GP already done and CBs recorded. */ 1084 - spin_lock_irqsave_rcu_node(snp, flags); 1112 + raw_spin_lock_irqsave_rcu_node(snp, flags); 1085 1113 snp_seq = snp->srcu_have_cbs[idx]; 1086 1114 if (!srcu_invl_snp_seq(snp_seq) && ULONG_CMP_GE(snp_seq, s)) { 1087 1115 if (snp == snp_leaf && snp_seq == s) 1088 1116 snp->srcu_data_have_cbs[idx] |= sdp->grpmask; 1089 - spin_unlock_irqrestore_rcu_node(snp, flags); 1117 + raw_spin_unlock_irqrestore_rcu_node(snp, flags); 1090 1118 if (snp == snp_leaf && snp_seq != s) { 1091 1119 srcu_schedule_cbs_sdp(sdp, do_norm ? 
SRCU_INTERVAL : 0); 1092 1120 return; ··· 1101 1129 sgsne = snp->srcu_gp_seq_needed_exp; 1102 1130 if (!do_norm && (srcu_invl_snp_seq(sgsne) || ULONG_CMP_LT(sgsne, s))) 1103 1131 WRITE_ONCE(snp->srcu_gp_seq_needed_exp, s); 1104 - spin_unlock_irqrestore_rcu_node(snp, flags); 1132 + raw_spin_unlock_irqrestore_rcu_node(snp, flags); 1105 1133 } 1106 1134 1107 1135 /* Top of tree, must ensure the grace period will be started. */ 1108 - spin_lock_irqsave_ssp_contention(ssp, &flags); 1136 + raw_spin_lock_irqsave_ssp_contention(ssp, &flags); 1109 1137 if (ULONG_CMP_LT(sup->srcu_gp_seq_needed, s)) { 1110 1138 /* 1111 1139 * Record need for grace period s. Pair with load ··· 1126 1154 // it isn't. And it does not have to be. After all, it 1127 1155 // can only be executed during early boot when there is only 1128 1156 // the one boot CPU running with interrupts still disabled. 1157 + // 1158 + // Use an irq_work here to avoid acquiring the runqueue lock with 1159 + // the srcu rcu_node::lock held. BPF instrumentation could introduce 1160 + // the opposite dependency, hence we need to break the possible 1161 + // locking dependency here. 

1129 1162 if (likely(srcu_init_done)) 1130 - queue_delayed_work(rcu_gp_wq, &sup->work, 1131 - !!srcu_get_delay(ssp)); 1163 + irq_work_queue(&sup->irq_work); 1132 1164 else if (list_empty(&sup->work.work.entry)) 1133 1165 list_add(&sup->work.work.entry, &srcu_boot_list); 1134 1166 } 1135 - spin_unlock_irqrestore_rcu_node(sup, flags); 1167 + raw_spin_unlock_irqrestore_rcu_node(sup, flags); 1136 1168 } 1137 1169 1138 1170 /* ··· 1148 1172 { 1149 1173 unsigned long curdelay; 1150 1174 1151 - spin_lock_irq_rcu_node(ssp->srcu_sup); 1175 + raw_spin_lock_irq_rcu_node(ssp->srcu_sup); 1152 1176 curdelay = !srcu_get_delay(ssp); 1153 - spin_unlock_irq_rcu_node(ssp->srcu_sup); 1177 + raw_spin_unlock_irq_rcu_node(ssp->srcu_sup); 1154 1178 1155 1179 for (;;) { 1156 1180 if (srcu_readers_active_idx_check(ssp, idx)) ··· 1261 1285 return false; 1262 1286 /* If the local srcu_data structure has callbacks, not idle. */ 1263 1287 sdp = raw_cpu_ptr(ssp->sda); 1264 - spin_lock_irqsave_rcu_node(sdp, flags); 1288 + raw_spin_lock_irqsave_rcu_node(sdp, flags); 1265 1289 if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) { 1266 - spin_unlock_irqrestore_rcu_node(sdp, flags); 1290 + raw_spin_unlock_irqrestore_rcu_node(sdp, flags); 1267 1291 return false; /* Callbacks already present, so not idle. */ 1268 1292 } 1269 - spin_unlock_irqrestore_rcu_node(sdp, flags); 1293 + raw_spin_unlock_irqrestore_rcu_node(sdp, flags); 1270 1294 1271 1295 /* 1272 1296 * No local callbacks, so probabilistically probe global state. 
··· 1326 1350 sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id()); 1327 1351 else 1328 1352 sdp = raw_cpu_ptr(ssp->sda); 1329 - spin_lock_irqsave_sdp_contention(sdp, &flags); 1353 + raw_spin_lock_irqsave_sdp_contention(sdp, &flags); 1330 1354 if (rhp) 1331 1355 rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp); 1332 1356 /* ··· 1386 1410 sdp->srcu_gp_seq_needed_exp = s; 1387 1411 needexp = true; 1388 1412 } 1389 - spin_unlock_irqrestore_rcu_node(sdp, flags); 1413 + raw_spin_unlock_irqrestore_rcu_node(sdp, flags); 1390 1414 1391 1415 /* Ensure that snp node tree is fully initialized before traversing it */ 1392 1416 if (ss_state < SRCU_SIZE_WAIT_BARRIER) ··· 1498 1522 1499 1523 /* 1500 1524 * Make sure that later code is ordered after the SRCU grace 1501 - * period. This pairs with the spin_lock_irq_rcu_node() 1525 + * period. This pairs with the raw_spin_lock_irq_rcu_node() 1502 1526 * in srcu_invoke_callbacks(). Unlike Tree RCU, this is needed 1503 1527 * because the current CPU might have been totally uninvolved with 1504 1528 * (and thus unordered against) that grace period. 
··· 1677 1701 */ 1678 1702 static void srcu_barrier_one_cpu(struct srcu_struct *ssp, struct srcu_data *sdp) 1679 1703 { 1680 - spin_lock_irq_rcu_node(sdp); 1704 + raw_spin_lock_irq_rcu_node(sdp); 1681 1705 atomic_inc(&ssp->srcu_sup->srcu_barrier_cpu_cnt); 1682 1706 sdp->srcu_barrier_head.func = srcu_barrier_cb; 1683 1707 debug_rcu_head_queue(&sdp->srcu_barrier_head); ··· 1686 1710 debug_rcu_head_unqueue(&sdp->srcu_barrier_head); 1687 1711 atomic_dec(&ssp->srcu_sup->srcu_barrier_cpu_cnt); 1688 1712 } 1689 - spin_unlock_irq_rcu_node(sdp); 1713 + raw_spin_unlock_irq_rcu_node(sdp); 1690 1714 } 1691 1715 1692 1716 /** ··· 1737 1761 bool needcb = false; 1738 1762 struct srcu_data *sdp = container_of(rhp, struct srcu_data, srcu_ec_head); 1739 1763 1740 - spin_lock_irqsave_sdp_contention(sdp, &flags); 1764 + raw_spin_lock_irqsave_sdp_contention(sdp, &flags); 1741 1765 if (sdp->srcu_ec_state == SRCU_EC_IDLE) { 1742 1766 WARN_ON_ONCE(1); 1743 1767 } else if (sdp->srcu_ec_state == SRCU_EC_PENDING) { ··· 1747 1771 sdp->srcu_ec_state = SRCU_EC_PENDING; 1748 1772 needcb = true; 1749 1773 } 1750 - spin_unlock_irqrestore_rcu_node(sdp, flags); 1774 + raw_spin_unlock_irqrestore_rcu_node(sdp, flags); 1751 1775 // If needed, requeue ourselves as an expedited SRCU callback. 1752 1776 if (needcb) 1753 1777 __call_srcu(sdp->ssp, &sdp->srcu_ec_head, srcu_expedite_current_cb, false); ··· 1771 1795 1772 1796 migrate_disable(); 1773 1797 sdp = this_cpu_ptr(ssp->sda); 1774 - spin_lock_irqsave_sdp_contention(sdp, &flags); 1798 + raw_spin_lock_irqsave_sdp_contention(sdp, &flags); 1775 1799 if (sdp->srcu_ec_state == SRCU_EC_IDLE) { 1776 1800 sdp->srcu_ec_state = SRCU_EC_PENDING; 1777 1801 needcb = true; ··· 1780 1804 } else { 1781 1805 WARN_ON_ONCE(sdp->srcu_ec_state != SRCU_EC_REPOST); 1782 1806 } 1783 - spin_unlock_irqrestore_rcu_node(sdp, flags); 1807 + raw_spin_unlock_irqrestore_rcu_node(sdp, flags); 1784 1808 // If needed, queue an expedited SRCU callback. 
1785 1809 if (needcb) 1786 1810 __call_srcu(ssp, &sdp->srcu_ec_head, srcu_expedite_current_cb, false); ··· 1824 1848 */ 1825 1849 idx = rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq)); /* ^^^ */ 1826 1850 if (idx == SRCU_STATE_IDLE) { 1827 - spin_lock_irq_rcu_node(ssp->srcu_sup); 1851 + raw_spin_lock_irq_rcu_node(ssp->srcu_sup); 1828 1852 if (ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed)) { 1829 1853 WARN_ON_ONCE(rcu_seq_state(ssp->srcu_sup->srcu_gp_seq)); 1830 - spin_unlock_irq_rcu_node(ssp->srcu_sup); 1854 + raw_spin_unlock_irq_rcu_node(ssp->srcu_sup); 1831 1855 mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); 1832 1856 return; 1833 1857 } 1834 1858 idx = rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)); 1835 1859 if (idx == SRCU_STATE_IDLE) 1836 1860 srcu_gp_start(ssp); 1837 - spin_unlock_irq_rcu_node(ssp->srcu_sup); 1861 + raw_spin_unlock_irq_rcu_node(ssp->srcu_sup); 1838 1862 if (idx != SRCU_STATE_IDLE) { 1839 1863 mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); 1840 1864 return; /* Someone else started the grace period. */ ··· 1848 1872 return; /* readers present, retry later. 
*/ 1849 1873 } 1850 1874 srcu_flip(ssp); 1851 - spin_lock_irq_rcu_node(ssp->srcu_sup); 1875 + raw_spin_lock_irq_rcu_node(ssp->srcu_sup); 1852 1876 rcu_seq_set_state(&ssp->srcu_sup->srcu_gp_seq, SRCU_STATE_SCAN2); 1853 1877 ssp->srcu_sup->srcu_n_exp_nodelay = 0; 1854 - spin_unlock_irq_rcu_node(ssp->srcu_sup); 1878 + raw_spin_unlock_irq_rcu_node(ssp->srcu_sup); 1855 1879 } 1856 1880 1857 1881 if (rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)) == SRCU_STATE_SCAN2) { ··· 1889 1913 1890 1914 ssp = sdp->ssp; 1891 1915 rcu_cblist_init(&ready_cbs); 1892 - spin_lock_irq_rcu_node(sdp); 1916 + raw_spin_lock_irq_rcu_node(sdp); 1893 1917 WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL)); 1894 1918 rcu_segcblist_advance(&sdp->srcu_cblist, 1895 1919 rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq)); ··· 1900 1924 */ 1901 1925 if (sdp->srcu_cblist_invoking || 1902 1926 !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) { 1903 - spin_unlock_irq_rcu_node(sdp); 1927 + raw_spin_unlock_irq_rcu_node(sdp); 1904 1928 return; /* Someone else on the job or nothing to do. */ 1905 1929 } 1906 1930 ··· 1908 1932 sdp->srcu_cblist_invoking = true; 1909 1933 rcu_segcblist_extract_done_cbs(&sdp->srcu_cblist, &ready_cbs); 1910 1934 len = ready_cbs.len; 1911 - spin_unlock_irq_rcu_node(sdp); 1935 + raw_spin_unlock_irq_rcu_node(sdp); 1912 1936 rhp = rcu_cblist_dequeue(&ready_cbs); 1913 1937 for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) { 1914 1938 debug_rcu_head_unqueue(rhp); ··· 1923 1947 * Update counts, accelerate new callbacks, and if needed, 1924 1948 * schedule another round of callback invocation. 
1925 1949 */ 1926 - spin_lock_irq_rcu_node(sdp); 1950 + raw_spin_lock_irq_rcu_node(sdp); 1927 1951 rcu_segcblist_add_len(&sdp->srcu_cblist, -len); 1928 1952 sdp->srcu_cblist_invoking = false; 1929 1953 more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist); 1930 - spin_unlock_irq_rcu_node(sdp); 1954 + raw_spin_unlock_irq_rcu_node(sdp); 1931 1955 /* An SRCU barrier or callbacks from previous nesting work pending */ 1932 1956 if (more) 1933 1957 srcu_schedule_cbs_sdp(sdp, 0); ··· 1941 1965 { 1942 1966 bool pushgp = true; 1943 1967 1944 - spin_lock_irq_rcu_node(ssp->srcu_sup); 1968 + raw_spin_lock_irq_rcu_node(ssp->srcu_sup); 1945 1969 if (ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed)) { 1946 1970 if (!WARN_ON_ONCE(rcu_seq_state(ssp->srcu_sup->srcu_gp_seq))) { 1947 1971 /* All requests fulfilled, time to go idle. */ ··· 1951 1975 /* Outstanding request and no GP. Start one. */ 1952 1976 srcu_gp_start(ssp); 1953 1977 } 1954 - spin_unlock_irq_rcu_node(ssp->srcu_sup); 1978 + raw_spin_unlock_irq_rcu_node(ssp->srcu_sup); 1955 1979 1956 1980 if (pushgp) 1957 1981 queue_delayed_work(rcu_gp_wq, &ssp->srcu_sup->work, delay); ··· 1971 1995 ssp = sup->srcu_ssp; 1972 1996 1973 1997 srcu_advance_state(ssp); 1974 - spin_lock_irq_rcu_node(ssp->srcu_sup); 1998 + raw_spin_lock_irq_rcu_node(ssp->srcu_sup); 1975 1999 curdelay = srcu_get_delay(ssp); 1976 - spin_unlock_irq_rcu_node(ssp->srcu_sup); 2000 + raw_spin_unlock_irq_rcu_node(ssp->srcu_sup); 1977 2001 if (curdelay) { 1978 2002 WRITE_ONCE(sup->reschedule_count, 0); 1979 2003 } else { ··· 1989 2013 } 1990 2014 } 1991 2015 srcu_reschedule(ssp, curdelay); 2016 + } 2017 + 2018 + static void srcu_irq_work(struct irq_work *work) 2019 + { 2020 + struct srcu_struct *ssp; 2021 + struct srcu_usage *sup; 2022 + unsigned long delay; 2023 + unsigned long flags; 2024 + 2025 + sup = container_of(work, struct srcu_usage, irq_work); 2026 + ssp = sup->srcu_ssp; 2027 + 2028 + raw_spin_lock_irqsave_rcu_node(ssp->srcu_sup, 
flags); 2029 + delay = srcu_get_delay(ssp); 2030 + raw_spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); 2031 + 2032 + queue_delayed_work(rcu_gp_wq, &sup->work, !!delay); 1992 2033 } 1993 2034 1994 2035 void srcutorture_get_gp_data(struct srcu_struct *ssp, int *flags,
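The new srcu_irq_work() handler above is handed only the embedded irq_work member and recovers its owning srcu_usage with container_of(). A minimal userspace sketch of that recovery pattern — struct names here are hypothetical stand-ins, and the macro is the standard offsetof()-based definition, not the kernel's:

```c
#include <assert.h>
#include <stddef.h>

/* Portable container_of(): map a pointer to an embedded member back to
 * the enclosing structure via offsetof(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct irq_work_stub {		/* stand-in for struct irq_work */
	int pending;
};

struct srcu_usage_stub {	/* stand-in for struct srcu_usage */
	long gp_seq;
	struct irq_work_stub irq_work;	/* embedded by value, not a pointer */
};

/* The handler receives only the embedded member, as srcu_irq_work()
 * does, and recovers the owning object from it. */
static long handler(struct irq_work_stub *work)
{
	struct srcu_usage_stub *sup =
		container_of(work, struct srcu_usage_stub, irq_work);

	return sup->gp_seq;
}
```

This only works because irq_work is embedded by value; container_of() on a pointer member would compute garbage.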
+2 -2
kernel/trace/ftrace.c
··· 6606 6606 if (!orig_hash) 6607 6607 goto unlock; 6608 6608 6609 - /* Enable the tmp_ops to have the same functions as the direct ops */ 6609 + /* Enable the tmp_ops to have the same functions as the hash object. */ 6610 6610 ftrace_ops_init(&tmp_ops); 6611 - tmp_ops.func_hash = ops->func_hash; 6611 + tmp_ops.func_hash->filter_hash = hash; 6612 6612 6613 6613 err = register_ftrace_function_nolock(&tmp_ops); 6614 6614 if (err)
+1 -1
kernel/trace/ring_buffer.c
··· 2053 2053 2054 2054 entries += ret; 2055 2055 entry_bytes += local_read(&head_page->page->commit); 2056 - local_set(&cpu_buffer->head_page->entries, ret); 2056 + local_set(&head_page->entries, ret); 2057 2057 2058 2058 if (head_page == cpu_buffer->commit_page) 2059 2059 break;
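The one-line ring_buffer fix above corrects a classic iteration bug: the loop updated `cpu_buffer->head_page` (the list head) instead of the local `head_page` cursor. A reduced sketch of the bug class, with hypothetical stand-in types:

```c
#include <assert.h>

struct buffer_page_stub {
	long entries;
	struct buffer_page_stub *next;
};

/* Walk nr pages starting at head, updating each visited page through
 * the local cursor. Writing through 'head' instead (the shape of the
 * original bug) would stamp the first page nr times and leave the
 * rest untouched. */
static long stamp_pages(struct buffer_page_stub *head, int nr)
{
	struct buffer_page_stub *cursor = head;
	long total = 0;

	while (nr-- > 0) {
		total++;
		cursor->entries = total;	/* cursor, not head */
		cursor = cursor->next;
	}
	return total;
}
```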
+27 -9
kernel/trace/trace.c
··· 555 555 lockdep_assert_held(&event_mutex); 556 556 557 557 if (enabled) { 558 - if (!list_empty(&tr->marker_list)) 558 + if (tr->trace_flags & TRACE_ITER(COPY_MARKER)) 559 559 return false; 560 560 561 561 list_add_rcu(&tr->marker_list, &marker_copies); ··· 563 563 return true; 564 564 } 565 565 566 - if (list_empty(&tr->marker_list)) 566 + if (!(tr->trace_flags & TRACE_ITER(COPY_MARKER))) 567 567 return false; 568 568 569 - list_del_init(&tr->marker_list); 569 + list_del_rcu(&tr->marker_list); 570 570 tr->trace_flags &= ~TRACE_ITER(COPY_MARKER); 571 571 return true; 572 572 } ··· 6784 6784 6785 6785 do { 6786 6786 /* 6787 + * It is possible that something is trying to migrate this 6788 + * task. What happens then, is when preemption is enabled, 6789 + * the migration thread will preempt this task, try to 6790 + * migrate it, fail, then let it run again. That will 6791 + * cause this to loop again and never succeed. 6792 + * On failures, enabled and disable preemption with 6793 + * migration enabled, to allow the migration thread to 6794 + * migrate this task. 6795 + */ 6796 + if (trys) { 6797 + preempt_enable_notrace(); 6798 + preempt_disable_notrace(); 6799 + cpu = smp_processor_id(); 6800 + buffer = per_cpu_ptr(tinfo->tbuf, cpu)->buf; 6801 + } 6802 + 6803 + /* 6787 6804 * If for some reason, copy_from_user() always causes a context 6788 6805 * switch, this would then cause an infinite loop. 
6789 6806 * If this task is preempted by another user space task, it ··· 9761 9744 9762 9745 list_del(&tr->list); 9763 9746 9747 + if (printk_trace == tr) 9748 + update_printk_trace(&global_trace); 9749 + 9750 + /* Must be done before disabling all the flags */ 9751 + if (update_marker_trace(tr, 0)) 9752 + synchronize_rcu(); 9753 + 9764 9754 /* Disable all the flags that were enabled coming in */ 9765 9755 for (i = 0; i < TRACE_FLAGS_MAX_SIZE; i++) { 9766 9756 if ((1ULL << i) & ZEROED_TRACE_FLAGS) 9767 9757 set_tracer_flag(tr, 1ULL << i, 0); 9768 9758 } 9769 - 9770 - if (printk_trace == tr) 9771 - update_printk_trace(&global_trace); 9772 - 9773 - if (update_marker_trace(tr, 0)) 9774 - synchronize_rcu(); 9775 9759 9776 9760 tracing_set_nop(tr); 9777 9761 clear_ftrace_function_probes(tr);
+2 -1
lib/bootconfig.c
··· 723 723 if (op == ':') { 724 724 unsigned short nidx = child->next; 725 725 726 - xbc_init_node(child, v, XBC_VALUE); 726 + if (xbc_init_node(child, v, XBC_VALUE) < 0) 727 + return xbc_parse_error("Failed to override value", v); 727 728 child->next = nidx; /* keep subkeys */ 728 729 goto array; 729 730 }
+8
mm/damon/core.c
··· 1252 1252 { 1253 1253 int err; 1254 1254 1255 + dst->maybe_corrupted = true; 1255 1256 if (!is_power_of_2(src->min_region_sz)) 1256 1257 return -EINVAL; 1257 1258 ··· 1278 1277 dst->addr_unit = src->addr_unit; 1279 1278 dst->min_region_sz = src->min_region_sz; 1280 1279 1280 + dst->maybe_corrupted = false; 1281 1281 return 0; 1282 1282 } 1283 1283 ··· 2680 2678 complete(&control->completion); 2681 2679 else if (control->canceled && control->dealloc_on_cancel) 2682 2680 kfree(control); 2681 + if (!cancel && ctx->maybe_corrupted) 2682 + break; 2683 2683 } 2684 2684 2685 2685 mutex_lock(&ctx->call_controls_lock); ··· 2711 2707 kdamond_usleep(min_wait_time); 2712 2708 2713 2709 kdamond_call(ctx, false); 2710 + if (ctx->maybe_corrupted) 2711 + return -EINVAL; 2714 2712 damos_walk_cancel(ctx); 2715 2713 } 2716 2714 return -EBUSY; ··· 2796 2790 * kdamond_merge_regions() if possible, to reduce overhead 2797 2791 */ 2798 2792 kdamond_call(ctx, false); 2793 + if (ctx->maybe_corrupted) 2794 + break; 2799 2795 if (!list_empty(&ctx->schemes)) 2800 2796 kdamond_apply_schemes(ctx); 2801 2797 else
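The damon/core.c change sets `maybe_corrupted` at the top of the copy and clears it only on full success, so any early error return leaves the destination marked unusable and the kdamond paths bail out. A sketch of that sentinel-flag pattern — the struct and field names here are illustrative stand-ins, not the real struct damon_ctx layout:

```c
#include <assert.h>
#include <stdbool.h>

struct attrs_stub {
	bool maybe_corrupted;
	unsigned long min_region_sz;
};

static bool is_power_of_2(unsigned long n)
{
	return n && !(n & (n - 1));
}

/* Mark dst suspect before any partial write; clear the mark only once
 * every field has been copied and validated. */
static int copy_attrs(struct attrs_stub *dst, const struct attrs_stub *src)
{
	dst->maybe_corrupted = true;
	if (!is_power_of_2(src->min_region_sz))
		return -22;	/* -EINVAL: dst stays flagged */

	dst->min_region_sz = src->min_region_sz;
	dst->maybe_corrupted = false;
	return 0;
}
```

Callers then treat a still-set flag as "do not trust this object", which is exactly how the kdamond loops above break out.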
+50 -3
mm/damon/stat.c
··· 145 145 return 0; 146 146 } 147 147 148 + struct damon_stat_system_ram_range_walk_arg { 149 + bool walked; 150 + struct resource res; 151 + }; 152 + 153 + static int damon_stat_system_ram_walk_fn(struct resource *res, void *arg) 154 + { 155 + struct damon_stat_system_ram_range_walk_arg *a = arg; 156 + 157 + if (!a->walked) { 158 + a->walked = true; 159 + a->res.start = res->start; 160 + } 161 + a->res.end = res->end; 162 + return 0; 163 + } 164 + 165 + static unsigned long damon_stat_res_to_core_addr(resource_size_t ra, 166 + unsigned long addr_unit) 167 + { 168 + /* 169 + * Use div_u64() for avoiding linking errors related with __udivdi3, 170 + * __aeabi_uldivmod, or similar problems. This should also improve the 171 + * performance optimization (read div_u64() comment for the detail). 172 + */ 173 + if (sizeof(ra) == 8 && sizeof(addr_unit) == 4) 174 + return div_u64(ra, addr_unit); 175 + return ra / addr_unit; 176 + } 177 + 178 + static int damon_stat_set_monitoring_region(struct damon_target *t, 179 + unsigned long addr_unit, unsigned long min_region_sz) 180 + { 181 + struct damon_addr_range addr_range; 182 + struct damon_stat_system_ram_range_walk_arg arg = {}; 183 + 184 + walk_system_ram_res(0, -1, &arg, damon_stat_system_ram_walk_fn); 185 + if (!arg.walked) 186 + return -EINVAL; 187 + addr_range.start = damon_stat_res_to_core_addr( 188 + arg.res.start, addr_unit); 189 + addr_range.end = damon_stat_res_to_core_addr( 190 + arg.res.end + 1, addr_unit); 191 + if (addr_range.end <= addr_range.start) 192 + return -EINVAL; 193 + return damon_set_regions(t, &addr_range, 1, min_region_sz); 194 + } 195 + 148 196 static struct damon_ctx *damon_stat_build_ctx(void) 149 197 { 150 198 struct damon_ctx *ctx; 151 199 struct damon_attrs attrs; 152 200 struct damon_target *target; 153 - unsigned long start = 0, end = 0; 154 201 155 202 ctx = damon_new_ctx(); 156 203 if (!ctx) ··· 227 180 if (!target) 228 181 goto free_out; 229 182 damon_add_target(ctx, target); 230 - if 
(damon_set_region_biggest_system_ram_default(target, &start, &end, 231 - ctx->min_region_sz)) 183 + if (damon_stat_set_monitoring_region(target, ctx->addr_unit, 184 + ctx->min_region_sz)) 232 185 goto free_out; 233 186 return ctx; 234 187 free_out:
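damon_stat_res_to_core_addr() above uses div_u64() so a 64-bit resource address divided by a 32-bit addr_unit never pulls in __udivdi3 on 32-bit builds, and the walk callback keeps the first start and last end of System RAM before converting to a half-open [start, end) range. A userspace sketch of the conversion and range math, with stdint types standing in for resource_size_t:

```c
#include <assert.h>
#include <stdint.h>

/* Convert a byte address to "core" address space by dividing by the
 * unit. Userspace can use plain division; the kernel helper exists to
 * avoid 64/32 libgcc division routines on 32-bit targets. */
static uint64_t res_to_core_addr(uint64_t ra, uint32_t addr_unit)
{
	return ra / addr_unit;
}

/* Given the first RAM start and the last RAM end (inclusive), derive
 * the half-open monitoring region in core address units. */
static int set_region(uint64_t start, uint64_t end_incl, uint32_t unit,
		      uint64_t *out_start, uint64_t *out_end)
{
	*out_start = res_to_core_addr(start, unit);
	*out_end = res_to_core_addr(end_incl + 1, unit);
	if (*out_end <= *out_start)
		return -22;	/* -EINVAL, as in the real helper */
	return 0;
}
```

Note the `end + 1` before division: resource ranges are inclusive, while DAMON regions are half-open.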
+2 -2
mm/hmm.c
··· 778 778 struct page *page = hmm_pfn_to_page(pfns[idx]); 779 779 phys_addr_t paddr = hmm_pfn_to_phys(pfns[idx]); 780 780 size_t offset = idx * map->dma_entry_size; 781 - unsigned long attrs = 0; 781 + unsigned long attrs = DMA_ATTR_REQUIRE_COHERENT; 782 782 dma_addr_t dma_addr; 783 783 int ret; 784 784 ··· 871 871 struct dma_iova_state *state = &map->state; 872 872 dma_addr_t *dma_addrs = map->dma_list; 873 873 unsigned long *pfns = map->pfn_list; 874 - unsigned long attrs = 0; 874 + unsigned long attrs = DMA_ATTR_REQUIRE_COHERENT; 875 875 876 876 if ((pfns[idx] & valid_dma) != valid_dma) 877 877 return false;
+7
mm/rmap.c
··· 457 457 list_del(&avc->same_vma); 458 458 anon_vma_chain_free(avc); 459 459 } 460 + 461 + /* 462 + * The anon_vma assigned to this VMA is no longer valid, as we were not 463 + * able to correctly clone AVC state. Avoid inconsistent anon_vma tree 464 + * state by resetting. 465 + */ 466 + vma->anon_vma = NULL; 460 467 } 461 468 462 469 /**
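The rmap change NULLs vma->anon_vma after a failed clone so nothing later walks a half-built AVC chain. A reduced sketch of the "free the partial chain, then reset the owner pointer" unwind pattern, with hypothetical stand-in types:

```c
#include <assert.h>
#include <stdlib.h>

struct chain_stub {
	struct chain_stub *next;
};

struct vma_stub {
	struct chain_stub *anon_chain;	/* stand-in for the anon_vma state */
};

/* Free every element of a partially cloned chain, then reset the
 * owner's pointer so it cannot be dereferenced in an inconsistent
 * state -- mirroring the added vma->anon_vma = NULL. */
static void unlink_fail(struct vma_stub *vma)
{
	struct chain_stub *c = vma->anon_chain;

	while (c) {
		struct chain_stub *next = c->next;

		free(c);
		c = next;
	}
	vma->anon_chain = NULL;
}
```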
+7 -1
mm/zswap.c
··· 942 942 943 943 /* zswap entries of length PAGE_SIZE are not compressed. */ 944 944 if (entry->length == PAGE_SIZE) { 945 + void *dst; 946 + 945 947 WARN_ON_ONCE(input->length != PAGE_SIZE); 946 - memcpy_from_sglist(kmap_local_folio(folio, 0), input, 0, PAGE_SIZE); 948 + 949 + dst = kmap_local_folio(folio, 0); 950 + memcpy_from_sglist(dst, input, 0, PAGE_SIZE); 947 951 dlen = PAGE_SIZE; 952 + kunmap_local(dst); 953 + flush_dcache_folio(folio); 948 954 } else { 949 955 sg_init_table(&output, 1); 950 956 sg_set_folio(&output, folio, PAGE_SIZE, 0);
+1 -1
net/bluetooth/hci_conn.c
··· 3095 3095 * hci_connect_le serializes the connection attempts so only one 3096 3096 * connection can be in BT_CONNECT at time. 3097 3097 */ 3098 - if (conn->state == BT_CONNECT && hdev->req_status == HCI_REQ_PEND) { 3098 + if (conn->state == BT_CONNECT && READ_ONCE(hdev->req_status) == HCI_REQ_PEND) { 3099 3099 switch (hci_skb_event(hdev->sent_cmd)) { 3100 3100 case HCI_EV_CONN_COMPLETE: 3101 3101 case HCI_EV_LE_CONN_COMPLETE:
+1 -1
net/bluetooth/hci_core.c
··· 4126 4126 kfree_skb(skb); 4127 4127 } 4128 4128 4129 - if (hdev->req_status == HCI_REQ_PEND && 4129 + if (READ_ONCE(hdev->req_status) == HCI_REQ_PEND && 4130 4130 !hci_dev_test_and_set_flag(hdev, HCI_CMD_PENDING)) { 4131 4131 kfree_skb(hdev->req_skb); 4132 4132 hdev->req_skb = skb_clone(hdev->sent_cmd, GFP_KERNEL);
+10 -10
net/bluetooth/hci_sync.c
··· 25 25 { 26 26 bt_dev_dbg(hdev, "result 0x%2.2x", result); 27 27 28 - if (hdev->req_status != HCI_REQ_PEND) 28 + if (READ_ONCE(hdev->req_status) != HCI_REQ_PEND) 29 29 return; 30 30 31 31 hdev->req_result = result; 32 - hdev->req_status = HCI_REQ_DONE; 32 + WRITE_ONCE(hdev->req_status, HCI_REQ_DONE); 33 33 34 34 /* Free the request command so it is not used as response */ 35 35 kfree_skb(hdev->req_skb); ··· 167 167 168 168 hci_cmd_sync_add(&req, opcode, plen, param, event, sk); 169 169 170 - hdev->req_status = HCI_REQ_PEND; 170 + WRITE_ONCE(hdev->req_status, HCI_REQ_PEND); 171 171 172 172 err = hci_req_sync_run(&req); 173 173 if (err < 0) 174 174 return ERR_PTR(err); 175 175 176 176 err = wait_event_interruptible_timeout(hdev->req_wait_q, 177 - hdev->req_status != HCI_REQ_PEND, 177 + READ_ONCE(hdev->req_status) != HCI_REQ_PEND, 178 178 timeout); 179 179 180 180 if (err == -ERESTARTSYS) 181 181 return ERR_PTR(-EINTR); 182 182 183 - switch (hdev->req_status) { 183 + switch (READ_ONCE(hdev->req_status)) { 184 184 case HCI_REQ_DONE: 185 185 err = -bt_to_errno(hdev->req_result); 186 186 break; ··· 194 194 break; 195 195 } 196 196 197 - hdev->req_status = 0; 197 + WRITE_ONCE(hdev->req_status, 0); 198 198 hdev->req_result = 0; 199 199 skb = hdev->req_rsp; 200 200 hdev->req_rsp = NULL; ··· 665 665 { 666 666 bt_dev_dbg(hdev, "err 0x%2.2x", err); 667 667 668 - if (hdev->req_status == HCI_REQ_PEND) { 668 + if (READ_ONCE(hdev->req_status) == HCI_REQ_PEND) { 669 669 hdev->req_result = err; 670 - hdev->req_status = HCI_REQ_CANCELED; 670 + WRITE_ONCE(hdev->req_status, HCI_REQ_CANCELED); 671 671 672 672 queue_work(hdev->workqueue, &hdev->cmd_sync_cancel_work); 673 673 } ··· 683 683 { 684 684 bt_dev_dbg(hdev, "err 0x%2.2x", err); 685 685 686 - if (hdev->req_status == HCI_REQ_PEND) { 686 + if (READ_ONCE(hdev->req_status) == HCI_REQ_PEND) { 687 687 /* req_result is __u32 so error must be positive to be properly 688 688 * propagated. 689 689 */ 690 690 hdev->req_result = err < 0 ? 
-err : err; 691 - hdev->req_status = HCI_REQ_CANCELED; 691 + WRITE_ONCE(hdev->req_status, HCI_REQ_CANCELED); 692 692 693 693 wake_up_interruptible(&hdev->req_wait_q); 694 694 }
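The hci_sync.c changes wrap every cross-thread access to hdev->req_status in READ_ONCE()/WRITE_ONCE() so the compiler can neither tear nor refetch the status word. In portable C11, relaxed atomics give the same single-copy-atomicity guarantee; a sketch with hypothetical names (ordering between req_result and the status word is assumed to come from the surrounding wait-queue machinery, as in the kernel):

```c
#include <assert.h>
#include <stdatomic.h>

enum { REQ_IDLE = 0, REQ_PEND = 1, REQ_DONE = 2, REQ_CANCELED = 3 };

struct hdev_stub {
	atomic_int req_status;	/* read and written from two contexts */
	int req_result;
};

/* Completion path: publish DONE only if a request is actually pending,
 * mirroring the shape of hci_cmd_sync_complete(). Relaxed accesses
 * model READ_ONCE()/WRITE_ONCE(): atomic per access, no implied
 * ordering. */
static void complete_req(struct hdev_stub *h, int result)
{
	if (atomic_load_explicit(&h->req_status,
				 memory_order_relaxed) != REQ_PEND)
		return;
	h->req_result = result;
	atomic_store_explicit(&h->req_status, REQ_DONE, memory_order_relaxed);
}
```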
+53 -18
net/bluetooth/l2cap_core.c
··· 926 926 927 927 static int l2cap_get_ident(struct l2cap_conn *conn) 928 928 { 929 + u8 max; 930 + int ident; 931 + 929 932 /* LE link does not support tools like l2ping so use the full range */ 930 933 if (conn->hcon->type == LE_LINK) 931 - return ida_alloc_range(&conn->tx_ida, 1, 255, GFP_ATOMIC); 932 - 934 + max = 255; 933 935 /* Get next available identificator. 934 936 * 1 - 128 are used by kernel. 935 937 * 129 - 199 are reserved. 936 938 * 200 - 254 are used by utilities like l2ping, etc. 937 939 */ 938 - return ida_alloc_range(&conn->tx_ida, 1, 128, GFP_ATOMIC); 940 + else 941 + max = 128; 942 + 943 + /* Allocate ident using min as last used + 1 (cyclic) */ 944 + ident = ida_alloc_range(&conn->tx_ida, READ_ONCE(conn->tx_ident) + 1, 945 + max, GFP_ATOMIC); 946 + /* Force min 1 to start over */ 947 + if (ident <= 0) { 948 + ident = ida_alloc_range(&conn->tx_ida, 1, max, GFP_ATOMIC); 949 + if (ident <= 0) { 950 + /* If all idents are in use, log an error, this is 951 + * extremely unlikely to happen and would indicate a bug 952 + * in the code that idents are not being freed properly. 
953 + */ 954 + BT_ERR("Unable to allocate ident: %d", ident); 955 + return 0; 956 + } 957 + } 958 + 959 + WRITE_ONCE(conn->tx_ident, ident); 960 + 961 + return ident; 939 962 } 940 963 941 964 static void l2cap_send_acl(struct l2cap_conn *conn, struct sk_buff *skb, ··· 1771 1748 1772 1749 BT_DBG("hcon %p conn %p, err %d", hcon, conn, err); 1773 1750 1751 + disable_delayed_work_sync(&conn->info_timer); 1752 + disable_delayed_work_sync(&conn->id_addr_timer); 1753 + 1774 1754 mutex_lock(&conn->lock); 1775 1755 1776 1756 kfree_skb(conn->rx_skb); ··· 1788 1762 cancel_work_sync(&conn->pending_rx_work); 1789 1763 1790 1764 ida_destroy(&conn->tx_ida); 1791 - 1792 - cancel_delayed_work_sync(&conn->id_addr_timer); 1793 1765 1794 1766 l2cap_unregister_all_users(conn); 1795 1767 ··· 1806 1782 l2cap_chan_unlock(chan); 1807 1783 l2cap_chan_put(chan); 1808 1784 } 1809 - 1810 - if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT) 1811 - cancel_delayed_work_sync(&conn->info_timer); 1812 1785 1813 1786 hci_chan_del(conn->hchan); 1814 1787 conn->hchan = NULL; ··· 2397 2376 2398 2377 /* Remote device may have requested smaller PDUs */ 2399 2378 pdu_len = min_t(size_t, pdu_len, chan->remote_mps); 2379 + 2380 + if (!pdu_len) 2381 + return -EINVAL; 2400 2382 2401 2383 if (len <= pdu_len) { 2402 2384 sar = L2CAP_SAR_UNSEGMENTED; ··· 4336 4312 if (test_bit(CONF_INPUT_DONE, &chan->conf_state)) { 4337 4313 set_default_fcs(chan); 4338 4314 4339 - if (chan->mode == L2CAP_MODE_ERTM || 4340 - chan->mode == L2CAP_MODE_STREAMING) 4341 - err = l2cap_ertm_init(chan); 4315 + if (chan->state != BT_CONNECTED) { 4316 + if (chan->mode == L2CAP_MODE_ERTM || 4317 + chan->mode == L2CAP_MODE_STREAMING) 4318 + err = l2cap_ertm_init(chan); 4342 4319 4343 - if (err < 0) 4344 - l2cap_send_disconn_req(chan, -err); 4345 - else 4346 - l2cap_chan_ready(chan); 4320 + if (err < 0) 4321 + l2cap_send_disconn_req(chan, -err); 4322 + else 4323 + l2cap_chan_ready(chan); 4324 + } 4347 4325 4348 4326 goto unlock; 4349 4327 
} ··· 5107 5081 cmd_len -= sizeof(*req); 5108 5082 num_scid = cmd_len / sizeof(u16); 5109 5083 5110 - /* Always respond with the same number of scids as in the request */ 5111 - rsp_len = cmd_len; 5112 - 5113 5084 if (num_scid > L2CAP_ECRED_MAX_CID) { 5114 5085 result = L2CAP_CR_LE_INVALID_PARAMS; 5115 5086 goto response; 5116 5087 } 5088 + 5089 + /* Always respond with the same number of scids as in the request */ 5090 + rsp_len = cmd_len; 5117 5091 5118 5092 mtu = __le16_to_cpu(req->mtu); 5119 5093 mps = __le16_to_cpu(req->mps); ··· 6633 6607 struct l2cap_le_credits pkt; 6634 6608 u16 return_credits = l2cap_le_rx_credits(chan); 6635 6609 6610 + if (chan->mode != L2CAP_MODE_LE_FLOWCTL && 6611 + chan->mode != L2CAP_MODE_EXT_FLOWCTL) 6612 + return; 6613 + 6636 6614 if (chan->rx_credits >= return_credits) 6637 6615 return; 6638 6616 ··· 6719 6689 6720 6690 if (!chan->sdu) { 6721 6691 u16 sdu_len; 6692 + 6693 + if (!pskb_may_pull(skb, L2CAP_SDULEN_SIZE)) { 6694 + err = -EINVAL; 6695 + goto failed; 6696 + } 6722 6697 6723 6698 sdu_len = get_unaligned_le16(skb->data); 6724 6699 skb_pull(skb, L2CAP_SDULEN_SIZE);
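The reworked l2cap_get_ident() allocates cyclically: try [last + 1, max] first, then wrap to [1, max], and return 0 only when every ident is taken. A self-contained sketch of that cyclic-allocation shape, with a plain bitmap standing in for the kernel IDA and hypothetical names throughout:

```c
#include <assert.h>
#include <string.h>

#define MAX_IDENT 255

struct ident_pool {
	unsigned char used[MAX_IDENT + 1];	/* index 0 never handed out */
	int last;				/* last ident allocated */
};

/* First free slot in [min, max], or -1 -- like ida_alloc_range()
 * failing. An empty range (min > max) also returns -1. */
static int alloc_in_range(struct ident_pool *p, int min, int max)
{
	int i;

	for (i = min; i <= max; i++) {
		if (!p->used[i]) {
			p->used[i] = 1;
			return i;
		}
	}
	return -1;
}

/* Cyclic allocation: [last + 1, max] first, then wrap to [1, max]. */
static int get_ident(struct ident_pool *p, int max)
{
	int ident = alloc_in_range(p, p->last + 1, max);

	if (ident <= 0)
		ident = alloc_in_range(p, 1, max);
	if (ident <= 0)
		return 0;	/* pool exhausted */
	p->last = ident;
	return ident;
}
```

Cyclic handout matters here because an ident identifies an outstanding request; reusing the most recently freed value immediately would make stale responses harder to detect.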
+3
net/bluetooth/l2cap_sock.c
··· 1698 1698 struct sock *sk = chan->data; 1699 1699 struct sock *parent; 1700 1700 1701 + if (!sk) 1702 + return; 1703 + 1701 1704 lock_sock(sk); 1702 1705 1703 1706 parent = bt_sk(sk)->parent;
+1 -1
net/bluetooth/mgmt.c
··· 5355 5355 * hci_adv_monitors_clear is about to be called which will take care of 5356 5356 * freeing the adv_monitor instances. 5357 5357 */ 5358 - if (status == -ECANCELED && !mgmt_pending_valid(hdev, cmd)) 5358 + if (status == -ECANCELED || !mgmt_pending_valid(hdev, cmd)) 5359 5359 return; 5360 5360 5361 5361 monitor = cmd->user_data;
+7 -3
net/bluetooth/sco.c
··· 401 401 struct sock *sk; 402 402 403 403 sco_conn_lock(conn); 404 - sk = conn->sk; 404 + sk = sco_sock_hold(conn); 405 405 sco_conn_unlock(conn); 406 406 407 407 if (!sk) ··· 410 410 BT_DBG("sk %p len %u", sk, skb->len); 411 411 412 412 if (sk->sk_state != BT_CONNECTED) 413 - goto drop; 413 + goto drop_put; 414 414 415 - if (!sock_queue_rcv_skb(sk, skb)) 415 + if (!sock_queue_rcv_skb(sk, skb)) { 416 + sock_put(sk); 416 417 return; 418 + } 417 419 420 + drop_put: 421 + sock_put(sk); 418 422 drop: 419 423 kfree_skb(skb); 420 424 }
+2 -2
net/can/af_can.c
··· 469 469 470 470 rcv->can_id = can_id; 471 471 rcv->mask = mask; 472 - rcv->matches = 0; 472 + atomic_long_set(&rcv->matches, 0); 473 473 rcv->func = func; 474 474 rcv->data = data; 475 475 rcv->ident = ident; ··· 573 573 static inline void deliver(struct sk_buff *skb, struct receiver *rcv) 574 574 { 575 575 rcv->func(skb, rcv->data); 576 - rcv->matches++; 576 + atomic_long_inc(&rcv->matches); 577 577 } 578 578 579 579 static int can_rcv_filter(struct can_dev_rcv_lists *dev_rcv_lists, struct sk_buff *skb)
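The af_can change converts the per-receiver hit counter to atomic_long_t because deliver() runs concurrently on multiple CPUs with no lock covering rcv->matches, so the plain `++` could lose increments. The same conversion in portable C11, with a hypothetical stand-in struct:

```c
#include <assert.h>
#include <stdatomic.h>

struct receiver_stub {
	unsigned int can_id;
	atomic_long matches;	/* was: plain unsigned long */
};

/* Delivery path: lock-free increment, safe from any number of CPUs.
 * Relaxed ordering suffices for a statistics counter. */
static void deliver_stub(struct receiver_stub *rcv)
{
	atomic_fetch_add_explicit(&rcv->matches, 1, memory_order_relaxed);
}

/* Reader side (the /proc output path in the real code). */
static long read_matches(struct receiver_stub *rcv)
{
	return atomic_load_explicit(&rcv->matches, memory_order_relaxed);
}
```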
+1 -1
net/can/af_can.h
··· 52 52 struct hlist_node list; 53 53 canid_t can_id; 54 54 canid_t mask; 55 - unsigned long matches; 55 + atomic_long_t matches; 56 56 void (*func)(struct sk_buff *skb, void *data); 57 57 void *data; 58 58 char *ident;
+3 -3
net/can/gw.c
··· 375 375 return; 376 376 377 377 if (from <= to) { 378 - for (i = crc8->from_idx; i <= crc8->to_idx; i++) 378 + for (i = from; i <= to; i++) 379 379 crc = crc8->crctab[crc ^ cf->data[i]]; 380 380 } else { 381 - for (i = crc8->from_idx; i >= crc8->to_idx; i--) 381 + for (i = from; i >= to; i--) 382 382 crc = crc8->crctab[crc ^ cf->data[i]]; 383 383 } 384 384 ··· 397 397 break; 398 398 } 399 399 400 - cf->data[crc8->result_idx] = crc ^ crc8->final_xor_val; 400 + cf->data[res] = crc ^ crc8->final_xor_val; 401 401 } 402 402 403 403 static void cgw_csum_crc8_pos(struct canfd_frame *cf,
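The gw.c fix makes the CRC loops (and the result store) use the locally validated `from`/`to`/`res` instead of re-reading the struct fields after clamping. A reduced sketch of the bidirectional range walk over those locals — a toy shift-and-XOR step stands in for the real 256-entry CRC-8 table:

```c
#include <assert.h>
#include <stdint.h>

/* Fold data[from..to] in the requested direction. Both loops run over
 * the locally held bounds, never the caller-controlled struct fields.
 * The step is a toy (shift-and-XOR), chosen only so that direction
 * actually changes the result. */
static uint8_t fold_range(const uint8_t *data, int from, int to)
{
	uint8_t crc = 0;
	int i;

	if (from <= to) {
		for (i = from; i <= to; i++)
			crc = (uint8_t)((crc << 1) ^ data[i]);
	} else {
		for (i = from; i >= to; i--)
			crc = (uint8_t)((crc << 1) ^ data[i]);
	}
	return crc;
}
```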
+18 -6
net/can/isotp.c
··· 1248 1248 so->ifindex = 0; 1249 1249 so->bound = 0; 1250 1250 1251 - if (so->rx.buf != so->rx.sbuf) 1252 - kfree(so->rx.buf); 1253 - 1254 - if (so->tx.buf != so->tx.sbuf) 1255 - kfree(so->tx.buf); 1256 - 1257 1251 sock_orphan(sk); 1258 1252 sock->sk = NULL; 1259 1253 ··· 1616 1622 return NOTIFY_DONE; 1617 1623 } 1618 1624 1625 + static void isotp_sock_destruct(struct sock *sk) 1626 + { 1627 + struct isotp_sock *so = isotp_sk(sk); 1628 + 1629 + /* do the standard CAN sock destruct work */ 1630 + can_sock_destruct(sk); 1631 + 1632 + /* free potential extended PDU buffers */ 1633 + if (so->rx.buf != so->rx.sbuf) 1634 + kfree(so->rx.buf); 1635 + 1636 + if (so->tx.buf != so->tx.sbuf) 1637 + kfree(so->tx.buf); 1638 + } 1639 + 1619 1640 static int isotp_init(struct sock *sk) 1620 1641 { 1621 1642 struct isotp_sock *so = isotp_sk(sk); ··· 1674 1665 spin_lock(&isotp_notifier_lock); 1675 1666 list_add_tail(&so->notifier, &isotp_notifier_list); 1676 1667 spin_unlock(&isotp_notifier_lock); 1668 + 1669 + /* re-assign default can_sock_destruct() reference */ 1670 + sk->sk_destruct = isotp_sock_destruct; 1677 1671 1678 1672 return 0; 1679 1673 }
+2 -1
net/can/proc.c
··· 196 196 " %-5s %03x %08x %pK %pK %8ld %s\n"; 197 197 198 198 seq_printf(m, fmt, DNAME(dev), r->can_id, r->mask, 199 - r->func, r->data, r->matches, r->ident); 199 + r->func, r->data, atomic_long_read(&r->matches), 200 + r->ident); 200 201 } 201 202 } 202 203
+17 -5
net/core/dev.c
··· 3770 3770 return vlan_features_check(skb, features); 3771 3771 } 3772 3772 3773 + static bool skb_gso_has_extension_hdr(const struct sk_buff *skb) 3774 + { 3775 + if (!skb->encapsulation) 3776 + return ((skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6 || 3777 + (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && 3778 + vlan_get_protocol(skb) == htons(ETH_P_IPV6))) && 3779 + skb_transport_header_was_set(skb) && 3780 + skb_network_header_len(skb) != sizeof(struct ipv6hdr)); 3781 + else 3782 + return (!skb_inner_network_header_was_set(skb) || 3783 + ((skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6 || 3784 + (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && 3785 + inner_ip_hdr(skb)->version == 6)) && 3786 + skb_inner_network_header_len(skb) != sizeof(struct ipv6hdr))); 3787 + } 3788 + 3773 3789 static netdev_features_t gso_features_check(const struct sk_buff *skb, 3774 3790 struct net_device *dev, 3775 3791 netdev_features_t features) ··· 3833 3817 * so neither does TSO that depends on it. 3834 3818 */ 3835 3819 if (features & NETIF_F_IPV6_CSUM && 3836 - (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6 || 3837 - (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && 3838 - vlan_get_protocol(skb) == htons(ETH_P_IPV6))) && 3839 - skb_transport_header_was_set(skb) && 3840 - skb_network_header_len(skb) != sizeof(struct ipv6hdr)) 3820 + skb_gso_has_extension_hdr(skb)) 3841 3821 features &= ~(NETIF_F_IPV6_CSUM | NETIF_F_TSO6 | NETIF_F_GSO_UDP_L4); 3842 3822 3843 3823 return features;
+25 -3
net/core/rtnetlink.c
··· 629 629 unlock: 630 630 mutex_unlock(&link_ops_mutex); 631 631 632 + if (err) 633 + cleanup_srcu_struct(&ops->srcu); 634 + 632 635 return err; 633 636 } 634 637 EXPORT_SYMBOL_GPL(rtnl_link_register); ··· 710 707 goto out; 711 708 712 709 ops = master_dev->rtnl_link_ops; 713 - if (!ops || !ops->get_slave_size) 710 + if (!ops) 711 + goto out; 712 + size += nla_total_size(strlen(ops->kind) + 1); /* IFLA_INFO_SLAVE_KIND */ 713 + if (!ops->get_slave_size) 714 714 goto out; 715 715 /* IFLA_INFO_SLAVE_DATA + nested data */ 716 - size = nla_total_size(sizeof(struct nlattr)) + 717 - ops->get_slave_size(master_dev, dev); 716 + size += nla_total_size(sizeof(struct nlattr)) + 717 + ops->get_slave_size(master_dev, dev); 718 718 719 719 out: 720 720 rcu_read_unlock(); ··· 1273 1267 return size; 1274 1268 } 1275 1269 1270 + static size_t rtnl_dev_parent_size(const struct net_device *dev) 1271 + { 1272 + size_t size = 0; 1273 + 1274 + /* IFLA_PARENT_DEV_NAME */ 1275 + if (dev->dev.parent) 1276 + size += nla_total_size(strlen(dev_name(dev->dev.parent)) + 1); 1277 + 1278 + /* IFLA_PARENT_DEV_BUS_NAME */ 1279 + if (dev->dev.parent && dev->dev.parent->bus) 1280 + size += nla_total_size(strlen(dev->dev.parent->bus->name) + 1); 1281 + 1282 + return size; 1283 + } 1284 + 1276 1285 static noinline size_t if_nlmsg_size(const struct net_device *dev, 1277 1286 u32 ext_filter_mask) 1278 1287 { ··· 1349 1328 + nla_total_size(8) /* IFLA_MAX_PACING_OFFLOAD_HORIZON */ 1350 1329 + nla_total_size(2) /* IFLA_HEADROOM */ 1351 1330 + nla_total_size(2) /* IFLA_TAILROOM */ 1331 + + rtnl_dev_parent_size(dev) 1352 1332 + 0; 1353 1333 1354 1334 if (!(ext_filter_mask & RTEXT_FILTER_SKIP_STATS))
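Two of the rtnetlink fixes are pure size accounting: the slave-info path must accumulate with `+=` (the old `size =` silently dropped the IFLA_INFO_SLAVE_KIND bytes counted just above it), and the parent-device strings now contribute their own nla_total_size() terms. A sketch of netlink attribute size accounting — nla_total_size() is simplified here but keeps the real NLA_ALIGNTO = 4 alignment:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define NLA_ALIGNTO 4
#define NLA_HDRLEN  4	/* sizeof(struct nlattr), already aligned */

/* Total space one attribute with 'payload' bytes occupies in a
 * message: header plus payload, rounded up to NLA_ALIGNTO. */
static size_t nla_total_size_stub(size_t payload)
{
	size_t t = NLA_HDRLEN + payload;

	return (t + NLA_ALIGNTO - 1) & ~(size_t)(NLA_ALIGNTO - 1);
}

/* Accumulate the sizes of a slave's kind string and its data blob.
 * Replacing the second '+=' with '=' reproduces the undersizing bug
 * the patch fixes. */
static size_t slave_info_size(const char *kind, size_t slave_data)
{
	size_t size = 0;

	size += nla_total_size_stub(strlen(kind) + 1); /* IFLA_INFO_SLAVE_KIND */
	size += nla_total_size_stub(0) + slave_data;   /* IFLA_INFO_SLAVE_DATA */
	return size;
}
```

Undersizing matters because if_nlmsg_size() drives the skb allocation; a short estimate makes the later nla_put() calls fail mid-fill.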
+6 -3
net/ipv4/esp4.c
··· 235 235 xfrm_dev_resume(skb); 236 236 } else { 237 237 if (!err && 238 - x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) 239 - esp_output_tail_tcp(x, skb); 240 - else 238 + x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) { 239 + err = esp_output_tail_tcp(x, skb); 240 + if (err != -EINPROGRESS) 241 + kfree_skb(skb); 242 + } else { 241 243 xfrm_output_resume(skb_to_full_sk(skb), skb, err); 244 + } 242 245 } 243 246 } 244 247
+3 -17
net/ipv4/inet_connection_sock.c
··· 153 153 } 154 154 EXPORT_SYMBOL(inet_sk_get_local_port_range); 155 155 156 - static bool inet_use_bhash2_on_bind(const struct sock *sk) 157 - { 158 - #if IS_ENABLED(CONFIG_IPV6) 159 - if (sk->sk_family == AF_INET6) { 160 - if (ipv6_addr_any(&sk->sk_v6_rcv_saddr)) 161 - return false; 162 - 163 - if (!ipv6_addr_v4mapped(&sk->sk_v6_rcv_saddr)) 164 - return true; 165 - } 166 - #endif 167 - return sk->sk_rcv_saddr != htonl(INADDR_ANY); 168 - } 169 - 170 156 static bool inet_bind_conflict(const struct sock *sk, struct sock *sk2, 171 157 kuid_t uid, bool relax, 172 158 bool reuseport_cb_ok, bool reuseport_ok) ··· 244 258 * checks separately because their spinlocks have to be acquired/released 245 259 * independently of each other, to prevent possible deadlocks 246 260 */ 247 - if (inet_use_bhash2_on_bind(sk)) 261 + if (inet_use_hash2_on_bind(sk)) 248 262 return tb2 && inet_bhash2_conflict(sk, tb2, uid, relax, 249 263 reuseport_cb_ok, reuseport_ok); 250 264 ··· 361 375 head = &hinfo->bhash[inet_bhashfn(net, port, 362 376 hinfo->bhash_size)]; 363 377 spin_lock_bh(&head->lock); 364 - if (inet_use_bhash2_on_bind(sk)) { 378 + if (inet_use_hash2_on_bind(sk)) { 365 379 if (inet_bhash2_addr_any_conflict(sk, port, l3mdev, relax, false)) 366 380 goto next_port; 367 381 } ··· 547 561 check_bind_conflict = false; 548 562 } 549 563 550 - if (check_bind_conflict && inet_use_bhash2_on_bind(sk)) { 564 + if (check_bind_conflict && inet_use_hash2_on_bind(sk)) { 551 565 if (inet_bhash2_addr_any_conflict(sk, port, l3mdev, true, true)) 552 566 goto fail_unlock; 553 567 }
+1 -1
net/ipv4/udp.c
··· 285 285 } else { 286 286 hslot = udp_hashslot(udptable, net, snum); 287 287 spin_lock_bh(&hslot->lock); 288 - if (hslot->count > 10) { 288 + if (inet_use_hash2_on_bind(sk) && hslot->count > 10) { 289 289 int exist; 290 290 unsigned int slot2 = udp_sk(sk)->udp_portaddr_hash ^ snum; 291 291
+2 -2
net/ipv6/addrconf.c
··· 2862 2862 fib6_add_gc_list(rt); 2863 2863 } else { 2864 2864 fib6_clean_expires(rt); 2865 - fib6_remove_gc_list(rt); 2865 + fib6_may_remove_gc_list(net, rt); 2866 2866 } 2867 2867 2868 2868 spin_unlock_bh(&table->tb6_lock); ··· 4840 4840 4841 4841 if (!(flags & RTF_EXPIRES)) { 4842 4842 fib6_clean_expires(f6i); 4843 - fib6_remove_gc_list(f6i); 4843 + fib6_may_remove_gc_list(net, f6i); 4844 4844 } else { 4845 4845 fib6_set_expires(f6i, expires); 4846 4846 fib6_add_gc_list(f6i);
+6 -3
net/ipv6/esp6.c
··· 271 271 xfrm_dev_resume(skb); 272 272 } else { 273 273 if (!err && 274 - x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) 275 - esp_output_tail_tcp(x, skb); 276 - else 274 + x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) { 275 + err = esp_output_tail_tcp(x, skb); 276 + if (err != -EINPROGRESS) 277 + kfree_skb(skb); 278 + } else { 277 279 xfrm_output_resume(skb_to_full_sk(skb), skb, err); 280 + } 278 281 } 279 282 } 280 283
+13 -2
net/ipv6/ip6_fib.c
··· 1136 1136 return -EEXIST; 1137 1137 if (!(rt->fib6_flags & RTF_EXPIRES)) { 1138 1138 fib6_clean_expires(iter); 1139 - fib6_remove_gc_list(iter); 1139 + fib6_may_remove_gc_list(info->nl_net, iter); 1140 1140 } else { 1141 1141 fib6_set_expires(iter, rt->expires); 1142 1142 fib6_add_gc_list(iter); ··· 2351 2351 /* 2352 2352 * Garbage collection 2353 2353 */ 2354 + void fib6_age_exceptions(struct fib6_info *rt, struct fib6_gc_args *gc_args, 2355 + unsigned long now) 2356 + { 2357 + bool may_expire = rt->fib6_flags & RTF_EXPIRES && rt->expires; 2358 + int old_more = gc_args->more; 2359 + 2360 + rt6_age_exceptions(rt, gc_args, now); 2361 + 2362 + if (!may_expire && old_more == gc_args->more) 2363 + fib6_remove_gc_list(rt); 2364 + } 2354 2365 2355 2366 static int fib6_age(struct fib6_info *rt, struct fib6_gc_args *gc_args) 2356 2367 { ··· 2384 2373 * Note, that clones are aged out 2385 2374 * only if they are not in use now. 2386 2375 */ 2387 - rt6_age_exceptions(rt, gc_args, now); 2376 + fib6_age_exceptions(rt, gc_args, now); 2388 2377 2389 2378 return 0; 2390 2379 }
+4
net/ipv6/netfilter/ip6t_rt.c
··· 157 157 pr_debug("unknown flags %X\n", rtinfo->invflags); 158 158 return -EINVAL; 159 159 } 160 + if (rtinfo->addrnr > IP6T_RT_HOPS) { 161 + pr_debug("too many addresses specified\n"); 162 + return -EINVAL; 163 + } 160 164 if ((rtinfo->flags & (IP6T_RT_RES | IP6T_RT_FST_MASK)) && 161 165 (!(rtinfo->flags & IP6T_RT_TYP) || 162 166 (rtinfo->rt_type != 0) ||
+1 -1
net/ipv6/route.c
··· 1033 1033 1034 1034 if (!addrconf_finite_timeout(lifetime)) { 1035 1035 fib6_clean_expires(rt); 1036 - fib6_remove_gc_list(rt); 1036 + fib6_may_remove_gc_list(net, rt); 1037 1037 } else { 1038 1038 fib6_set_expires(rt, jiffies + HZ * lifetime); 1039 1039 fib6_add_gc_list(rt);
+12 -7
net/key/af_key.c
··· 3518 3518 3519 3519 static int set_ipsecrequest(struct sk_buff *skb, 3520 3520 uint8_t proto, uint8_t mode, int level, 3521 - uint32_t reqid, uint8_t family, 3521 + uint32_t reqid, sa_family_t family, 3522 3522 const xfrm_address_t *src, const xfrm_address_t *dst) 3523 3523 { 3524 3524 struct sadb_x_ipsecrequest *rq; ··· 3583 3583 3584 3584 /* ipsecrequests */ 3585 3585 for (i = 0, mp = m; i < num_bundles; i++, mp++) { 3586 - /* old locator pair */ 3587 - size_pol += sizeof(struct sadb_x_ipsecrequest) + 3588 - pfkey_sockaddr_pair_size(mp->old_family); 3589 - /* new locator pair */ 3590 - size_pol += sizeof(struct sadb_x_ipsecrequest) + 3591 - pfkey_sockaddr_pair_size(mp->new_family); 3586 + int pair_size; 3587 + 3588 + pair_size = pfkey_sockaddr_pair_size(mp->old_family); 3589 + if (!pair_size) 3590 + return -EINVAL; 3591 + size_pol += sizeof(struct sadb_x_ipsecrequest) + pair_size; 3592 + 3593 + pair_size = pfkey_sockaddr_pair_size(mp->new_family); 3594 + if (!pair_size) 3595 + return -EINVAL; 3596 + size_pol += sizeof(struct sadb_x_ipsecrequest) + pair_size; 3592 3597 } 3593 3598 3594 3599 size += sizeof(struct sadb_msg) + size_pol;
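The af_key fix checks pfkey_sockaddr_pair_size() for every migrate entry: an unsupported address family yields 0 and must fail the whole message rather than silently undersize the skb. A sketch of that validate-before-accumulate pattern, with hypothetical family constants and sizes:

```c
#include <assert.h>

#define AF_INET_STUB  2
#define AF_INET6_STUB 10

/* 0 means "unknown family" -- callers must treat it as an error
 * rather than adding it into a length calculation. */
static int sockaddr_pair_size(int family)
{
	switch (family) {
	case AF_INET_STUB:
		return 2 * 16;	/* illustrative sizes only */
	case AF_INET6_STUB:
		return 2 * 28;
	default:
		return 0;
	}
}

/* Sum the per-entry sizes, failing fast on any unsupported family,
 * as the patched loop over the migrate bundles does. */
static int total_pair_size(const int *families, int n, int *out)
{
	int i, size = 0;

	for (i = 0; i < n; i++) {
		int pair = sockaddr_pair_size(families[i]);

		if (!pair)
			return -22;	/* -EINVAL */
		size += pair;
	}
	*out = size;
	return 0;
}
```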
+6 -2
net/netfilter/nf_conntrack_broadcast.c
··· 21 21 unsigned int timeout) 22 22 { 23 23 const struct nf_conntrack_helper *helper; 24 + struct net *net = read_pnet(&ct->ct_net); 24 25 struct nf_conntrack_expect *exp; 25 26 struct iphdr *iph = ip_hdr(skb); 26 27 struct rtable *rt = skb_rtable(skb); ··· 71 70 exp->expectfn = NULL; 72 71 exp->flags = NF_CT_EXPECT_PERMANENT; 73 72 exp->class = NF_CT_EXPECT_CLASS_DEFAULT; 74 - exp->helper = NULL; 75 - 73 + rcu_assign_pointer(exp->helper, helper); 74 + write_pnet(&exp->net, net); 75 + #ifdef CONFIG_NF_CONNTRACK_ZONES 76 + exp->zone = ct->zone; 77 + #endif 76 78 nf_ct_expect_related(exp, 0); 77 79 nf_ct_expect_put(exp); 78 80
+2
net/netfilter/nf_conntrack_ecache.c
··· 247 247 struct nf_ct_event_notifier *notify; 248 248 struct nf_conntrack_ecache *e; 249 249 250 + lockdep_nfct_expect_lock_held(); 251 + 250 252 rcu_read_lock(); 251 253 notify = rcu_dereference(net->ct.nf_conntrack_event_cb); 252 254 if (!notify)
+34 -5
net/netfilter/nf_conntrack_expect.c
··· 51 51 struct net *net = nf_ct_exp_net(exp); 52 52 struct nf_conntrack_net *cnet; 53 53 54 + lockdep_nfct_expect_lock_held(); 54 55 WARN_ON(!master_help); 55 56 WARN_ON(timer_pending(&exp->timeout)); 56 57 ··· 113 112 const struct net *net) 114 113 { 115 114 return nf_ct_tuple_mask_cmp(tuple, &i->tuple, &i->mask) && 116 - net_eq(net, nf_ct_net(i->master)) && 117 - nf_ct_zone_equal_any(i->master, zone); 115 + net_eq(net, read_pnet(&i->net)) && 116 + nf_ct_exp_zone_equal_any(i, zone); 118 117 } 119 118 120 119 bool nf_ct_remove_expect(struct nf_conntrack_expect *exp) 121 120 { 121 + lockdep_nfct_expect_lock_held(); 122 + 122 123 if (timer_delete(&exp->timeout)) { 123 124 nf_ct_unlink_expect(exp); 124 125 nf_ct_expect_put(exp); ··· 179 176 struct nf_conntrack_net *cnet = nf_ct_pernet(net); 180 177 struct nf_conntrack_expect *i, *exp = NULL; 181 178 unsigned int h; 179 + 180 + lockdep_nfct_expect_lock_held(); 182 181 183 182 if (!cnet->expect_count) 184 183 return NULL; ··· 314 309 } 315 310 EXPORT_SYMBOL_GPL(nf_ct_expect_alloc); 316 311 312 + /* This function can only be used from packet path, where accessing 313 + * master's helper is safe, because the packet holds a reference on 314 + * the conntrack object. Never use it from control plane. 
315 + */ 317 316 void nf_ct_expect_init(struct nf_conntrack_expect *exp, unsigned int class, 318 317 u_int8_t family, 319 318 const union nf_inet_addr *saddr, 320 319 const union nf_inet_addr *daddr, 321 320 u_int8_t proto, const __be16 *src, const __be16 *dst) 322 321 { 322 + struct nf_conntrack_helper *helper = NULL; 323 + struct nf_conn *ct = exp->master; 324 + struct net *net = read_pnet(&ct->ct_net); 325 + struct nf_conn_help *help; 323 326 int len; 324 327 325 328 if (family == AF_INET) ··· 338 325 exp->flags = 0; 339 326 exp->class = class; 340 327 exp->expectfn = NULL; 341 - exp->helper = NULL; 328 + 329 + help = nfct_help(ct); 330 + if (help) 331 + helper = rcu_dereference(help->helper); 332 + 333 + rcu_assign_pointer(exp->helper, helper); 334 + write_pnet(&exp->net, net); 335 + #ifdef CONFIG_NF_CONNTRACK_ZONES 336 + exp->zone = ct->zone; 337 + #endif 342 338 exp->tuple.src.l3num = family; 343 339 exp->tuple.dst.protonum = proto; 344 340 ··· 464 442 unsigned int h; 465 443 int ret = 0; 466 444 445 + lockdep_nfct_expect_lock_held(); 446 + 467 447 if (!master_help) { 468 448 ret = -ESHUTDOWN; 469 449 goto out; ··· 522 498 523 499 nf_ct_expect_insert(expect); 524 500 525 - spin_unlock_bh(&nf_conntrack_expect_lock); 526 501 nf_ct_expect_event_report(IPEXP_NEW, expect, portid, report); 502 + spin_unlock_bh(&nf_conntrack_expect_lock); 503 + 527 504 return 0; 528 505 out: 529 506 spin_unlock_bh(&nf_conntrack_expect_lock); ··· 652 627 { 653 628 struct nf_conntrack_expect *expect; 654 629 struct nf_conntrack_helper *helper; 630 + struct net *net = seq_file_net(s); 655 631 struct hlist_node *n = v; 656 632 char *delim = ""; 657 633 658 634 expect = hlist_entry(n, struct nf_conntrack_expect, hnode); 635 + 636 + if (!net_eq(nf_ct_exp_net(expect), net)) 637 + return 0; 659 638 660 639 if (expect->timeout.function) 661 640 seq_printf(s, "%ld ", timer_pending(&expect->timeout) ··· 683 654 if (expect->flags & NF_CT_EXPECT_USERSPACE) 684 655 seq_printf(s, "%sUSERSPACE", 
delim); 685 656 686 - helper = rcu_dereference(nfct_help(expect->master)->helper); 657 + helper = rcu_dereference(expect->helper); 687 658 if (helper) { 688 659 seq_printf(s, "%s%s", expect->flags ? " " : "", helper->name); 689 660 if (helper->expect_policy[expect->class].name[0])
+6 -6
net/netfilter/nf_conntrack_h323_main.c
··· 643 643 &ct->tuplehash[!dir].tuple.src.u3, 644 644 &ct->tuplehash[!dir].tuple.dst.u3, 645 645 IPPROTO_TCP, NULL, &port); 646 - exp->helper = &nf_conntrack_helper_h245; 646 + rcu_assign_pointer(exp->helper, &nf_conntrack_helper_h245); 647 647 648 648 nathook = rcu_dereference(nfct_h323_nat_hook); 649 649 if (memcmp(&ct->tuplehash[dir].tuple.src.u3, ··· 767 767 nf_ct_expect_init(exp, NF_CT_EXPECT_CLASS_DEFAULT, nf_ct_l3num(ct), 768 768 &ct->tuplehash[!dir].tuple.src.u3, &addr, 769 769 IPPROTO_TCP, NULL, &port); 770 - exp->helper = nf_conntrack_helper_q931; 770 + rcu_assign_pointer(exp->helper, nf_conntrack_helper_q931); 771 771 772 772 nathook = rcu_dereference(nfct_h323_nat_hook); 773 773 if (memcmp(&ct->tuplehash[dir].tuple.src.u3, ··· 1234 1234 &ct->tuplehash[!dir].tuple.src.u3 : NULL, 1235 1235 &ct->tuplehash[!dir].tuple.dst.u3, 1236 1236 IPPROTO_TCP, NULL, &port); 1237 - exp->helper = nf_conntrack_helper_q931; 1237 + rcu_assign_pointer(exp->helper, nf_conntrack_helper_q931); 1238 1238 exp->flags = NF_CT_EXPECT_PERMANENT; /* Accept multiple calls */ 1239 1239 1240 1240 nathook = rcu_dereference(nfct_h323_nat_hook); ··· 1306 1306 nf_ct_expect_init(exp, NF_CT_EXPECT_CLASS_DEFAULT, nf_ct_l3num(ct), 1307 1307 &ct->tuplehash[!dir].tuple.src.u3, &addr, 1308 1308 IPPROTO_UDP, NULL, &port); 1309 - exp->helper = nf_conntrack_helper_ras; 1309 + rcu_assign_pointer(exp->helper, nf_conntrack_helper_ras); 1310 1310 1311 1311 if (nf_ct_expect_related(exp, 0) == 0) { 1312 1312 pr_debug("nf_ct_ras: expect RAS "); ··· 1523 1523 &ct->tuplehash[!dir].tuple.src.u3, &addr, 1524 1524 IPPROTO_TCP, NULL, &port); 1525 1525 exp->flags = NF_CT_EXPECT_PERMANENT; 1526 - exp->helper = nf_conntrack_helper_q931; 1526 + rcu_assign_pointer(exp->helper, nf_conntrack_helper_q931); 1527 1527 1528 1528 if (nf_ct_expect_related(exp, 0) == 0) { 1529 1529 pr_debug("nf_ct_ras: expect Q.931 "); ··· 1577 1577 &ct->tuplehash[!dir].tuple.src.u3, &addr, 1578 1578 IPPROTO_TCP, NULL, &port); 1579 1579 
exp->flags = NF_CT_EXPECT_PERMANENT; 1580 - exp->helper = nf_conntrack_helper_q931; 1580 + rcu_assign_pointer(exp->helper, nf_conntrack_helper_q931); 1581 1581 1582 1582 if (nf_ct_expect_related(exp, 0) == 0) { 1583 1583 pr_debug("nf_ct_ras: expect Q.931 ");
+6 -5
net/netfilter/nf_conntrack_helper.c
··· 395 395 396 396 static bool expect_iter_me(struct nf_conntrack_expect *exp, void *data) 397 397 { 398 - struct nf_conn_help *help = nfct_help(exp->master); 399 398 const struct nf_conntrack_helper *me = data; 400 399 const struct nf_conntrack_helper *this; 401 400 402 - if (exp->helper == me) 403 - return true; 404 - 405 - this = rcu_dereference_protected(help->helper, 401 + this = rcu_dereference_protected(exp->helper, 406 402 lockdep_is_held(&nf_conntrack_expect_lock)); 407 403 return this == me; 408 404 } ··· 417 421 418 422 nf_ct_expect_iterate_destroy(expect_iter_me, NULL); 419 423 nf_ct_iterate_destroy(unhelp, me); 424 + 425 + /* nf_ct_iterate_destroy() does an unconditional synchronize_rcu() as 426 + * last step, this ensures rcu readers of exp->helper are done. 427 + * No need for another synchronize_rcu() here. 428 + */ 420 429 } 421 430 EXPORT_SYMBOL_GPL(nf_conntrack_helper_unregister); 422 431
+40 -35
net/netfilter/nf_conntrack_netlink.c
··· 908 908 }; 909 909 910 910 static const struct nla_policy cta_filter_nla_policy[CTA_FILTER_MAX + 1] = { 911 - [CTA_FILTER_ORIG_FLAGS] = { .type = NLA_U32 }, 912 - [CTA_FILTER_REPLY_FLAGS] = { .type = NLA_U32 }, 911 + [CTA_FILTER_ORIG_FLAGS] = NLA_POLICY_MASK(NLA_U32, CTA_FILTER_F_ALL), 912 + [CTA_FILTER_REPLY_FLAGS] = NLA_POLICY_MASK(NLA_U32, CTA_FILTER_F_ALL), 913 913 }; 914 914 915 915 static int ctnetlink_parse_filter(const struct nlattr *attr, ··· 923 923 if (ret) 924 924 return ret; 925 925 926 - if (tb[CTA_FILTER_ORIG_FLAGS]) { 926 + if (tb[CTA_FILTER_ORIG_FLAGS]) 927 927 filter->orig_flags = nla_get_u32(tb[CTA_FILTER_ORIG_FLAGS]); 928 - if (filter->orig_flags & ~CTA_FILTER_F_ALL) 929 - return -EOPNOTSUPP; 930 - } 931 928 932 - if (tb[CTA_FILTER_REPLY_FLAGS]) { 929 + if (tb[CTA_FILTER_REPLY_FLAGS]) 933 930 filter->reply_flags = nla_get_u32(tb[CTA_FILTER_REPLY_FLAGS]); 934 - if (filter->reply_flags & ~CTA_FILTER_F_ALL) 935 - return -EOPNOTSUPP; 936 - } 937 931 938 932 return 0; 939 933 } ··· 2626 2632 [CTA_EXPECT_HELP_NAME] = { .type = NLA_NUL_STRING, 2627 2633 .len = NF_CT_HELPER_NAME_LEN - 1 }, 2628 2634 [CTA_EXPECT_ZONE] = { .type = NLA_U16 }, 2629 - [CTA_EXPECT_FLAGS] = { .type = NLA_U32 }, 2635 + [CTA_EXPECT_FLAGS] = NLA_POLICY_MASK(NLA_BE32, NF_CT_EXPECT_MASK), 2630 2636 [CTA_EXPECT_CLASS] = { .type = NLA_U32 }, 2631 2637 [CTA_EXPECT_NAT] = { .type = NLA_NESTED }, 2632 2638 [CTA_EXPECT_FN] = { .type = NLA_NUL_STRING }, ··· 3004 3010 { 3005 3011 struct nf_conn *master = exp->master; 3006 3012 long timeout = ((long)exp->timeout.expires - (long)jiffies) / HZ; 3007 - struct nf_conn_help *help; 3013 + struct nf_conntrack_helper *helper; 3008 3014 #if IS_ENABLED(CONFIG_NF_NAT) 3009 3015 struct nlattr *nest_parms; 3010 3016 struct nf_conntrack_tuple nat_tuple = {}; ··· 3049 3055 nla_put_be32(skb, CTA_EXPECT_FLAGS, htonl(exp->flags)) || 3050 3056 nla_put_be32(skb, CTA_EXPECT_CLASS, htonl(exp->class))) 3051 3057 goto nla_put_failure; 3052 - help = 
nfct_help(master); 3053 - if (help) { 3054 - struct nf_conntrack_helper *helper; 3055 3058 3056 - helper = rcu_dereference(help->helper); 3057 - if (helper && 3058 - nla_put_string(skb, CTA_EXPECT_HELP_NAME, helper->name)) 3059 - goto nla_put_failure; 3060 - } 3059 + helper = rcu_dereference(exp->helper); 3060 + if (helper && 3061 + nla_put_string(skb, CTA_EXPECT_HELP_NAME, helper->name)) 3062 + goto nla_put_failure; 3063 + 3061 3064 expfn = nf_ct_helper_expectfn_find_by_symbol(exp->expectfn); 3062 3065 if (expfn != NULL && 3063 3066 nla_put_string(skb, CTA_EXPECT_FN, expfn->name)) ··· 3347 3356 if (err < 0) 3348 3357 return err; 3349 3358 3359 + skb2 = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 3360 + if (!skb2) 3361 + return -ENOMEM; 3362 + 3363 + spin_lock_bh(&nf_conntrack_expect_lock); 3350 3364 exp = nf_ct_expect_find_get(info->net, &zone, &tuple); 3351 - if (!exp) 3365 + if (!exp) { 3366 + spin_unlock_bh(&nf_conntrack_expect_lock); 3367 + kfree_skb(skb2); 3352 3368 return -ENOENT; 3369 + } 3353 3370 3354 3371 if (cda[CTA_EXPECT_ID]) { 3355 3372 __be32 id = nla_get_be32(cda[CTA_EXPECT_ID]); 3356 3373 3357 3374 if (id != nf_expect_get_id(exp)) { 3358 3375 nf_ct_expect_put(exp); 3376 + spin_unlock_bh(&nf_conntrack_expect_lock); 3377 + kfree_skb(skb2); 3359 3378 return -ENOENT; 3360 3379 } 3361 - } 3362 - 3363 - skb2 = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); 3364 - if (!skb2) { 3365 - nf_ct_expect_put(exp); 3366 - return -ENOMEM; 3367 3380 } 3368 3381 3369 3382 rcu_read_lock(); ··· 3376 3381 exp); 3377 3382 rcu_read_unlock(); 3378 3383 nf_ct_expect_put(exp); 3384 + spin_unlock_bh(&nf_conntrack_expect_lock); 3385 + 3379 3386 if (err <= 0) { 3380 3387 kfree_skb(skb2); 3381 3388 return -ENOMEM; ··· 3389 3392 static bool expect_iter_name(struct nf_conntrack_expect *exp, void *data) 3390 3393 { 3391 3394 struct nf_conntrack_helper *helper; 3392 - const struct nf_conn_help *m_help; 3393 3395 const char *name = data; 3394 3396 3395 - m_help = nfct_help(exp->master); 
3396 - 3397 - helper = rcu_dereference(m_help->helper); 3397 + helper = rcu_dereference(exp->helper); 3398 3398 if (!helper) 3399 3399 return false; 3400 3400 ··· 3424 3430 if (err < 0) 3425 3431 return err; 3426 3432 3433 + spin_lock_bh(&nf_conntrack_expect_lock); 3434 + 3427 3435 /* bump usage count to 2 */ 3428 3436 exp = nf_ct_expect_find_get(info->net, &zone, &tuple); 3429 - if (!exp) 3437 + if (!exp) { 3438 + spin_unlock_bh(&nf_conntrack_expect_lock); 3430 3439 return -ENOENT; 3440 + } 3431 3441 3432 3442 if (cda[CTA_EXPECT_ID]) { 3433 3443 __be32 id = nla_get_be32(cda[CTA_EXPECT_ID]); 3434 3444 3435 3445 if (id != nf_expect_get_id(exp)) { 3436 3446 nf_ct_expect_put(exp); 3447 + spin_unlock_bh(&nf_conntrack_expect_lock); 3437 3448 return -ENOENT; 3438 3449 } 3439 3450 } 3440 3451 3441 3452 /* after list removal, usage count == 1 */ 3442 - spin_lock_bh(&nf_conntrack_expect_lock); 3443 3453 if (timer_delete(&exp->timeout)) { 3444 3454 nf_ct_unlink_expect_report(exp, NETLINK_CB(skb).portid, 3445 3455 nlmsg_report(info->nlh)); ··· 3530 3532 struct nf_conntrack_tuple *tuple, 3531 3533 struct nf_conntrack_tuple *mask) 3532 3534 { 3533 - u_int32_t class = 0; 3535 + struct net *net = read_pnet(&ct->ct_net); 3534 3536 struct nf_conntrack_expect *exp; 3535 3537 struct nf_conn_help *help; 3538 + u32 class = 0; 3536 3539 int err; 3537 3540 3538 3541 help = nfct_help(ct); ··· 3570 3571 3571 3572 exp->class = class; 3572 3573 exp->master = ct; 3573 - exp->helper = helper; 3574 + write_pnet(&exp->net, net); 3575 + #ifdef CONFIG_NF_CONNTRACK_ZONES 3576 + exp->zone = ct->zone; 3577 + #endif 3578 + if (!helper) 3579 + helper = rcu_dereference(help->helper); 3580 + rcu_assign_pointer(exp->helper, helper); 3574 3581 exp->tuple = *tuple; 3575 3582 exp->mask.src.u3 = mask->src.u3; 3576 3583 exp->mask.src.u.all = mask->src.u.all;
+3 -7
net/netfilter/nf_conntrack_proto_tcp.c
··· 1385 1385 } 1386 1386 1387 1387 static const struct nla_policy tcp_nla_policy[CTA_PROTOINFO_TCP_MAX+1] = { 1388 - [CTA_PROTOINFO_TCP_STATE] = { .type = NLA_U8 }, 1389 - [CTA_PROTOINFO_TCP_WSCALE_ORIGINAL] = { .type = NLA_U8 }, 1390 - [CTA_PROTOINFO_TCP_WSCALE_REPLY] = { .type = NLA_U8 }, 1388 + [CTA_PROTOINFO_TCP_STATE] = NLA_POLICY_MAX(NLA_U8, TCP_CONNTRACK_SYN_SENT2), 1389 + [CTA_PROTOINFO_TCP_WSCALE_ORIGINAL] = NLA_POLICY_MAX(NLA_U8, TCP_MAX_WSCALE), 1390 + [CTA_PROTOINFO_TCP_WSCALE_REPLY] = NLA_POLICY_MAX(NLA_U8, TCP_MAX_WSCALE), 1391 1391 [CTA_PROTOINFO_TCP_FLAGS_ORIGINAL] = { .len = sizeof(struct nf_ct_tcp_flags) }, 1392 1392 [CTA_PROTOINFO_TCP_FLAGS_REPLY] = { .len = sizeof(struct nf_ct_tcp_flags) }, 1393 1393 }; ··· 1413 1413 tcp_nla_policy, NULL); 1414 1414 if (err < 0) 1415 1415 return err; 1416 - 1417 - if (tb[CTA_PROTOINFO_TCP_STATE] && 1418 - nla_get_u8(tb[CTA_PROTOINFO_TCP_STATE]) >= TCP_CONNTRACK_MAX) 1419 - return -EINVAL; 1420 1416 1421 1417 spin_lock_bh(&ct->lock); 1422 1418 if (tb[CTA_PROTOINFO_TCP_STATE])
+12 -6
net/netfilter/nf_conntrack_sip.c
··· 924 924 exp = __nf_ct_expect_find(net, nf_ct_zone(ct), &tuple); 925 925 926 926 if (!exp || exp->master == ct || 927 - nfct_help(exp->master)->helper != nfct_help(ct)->helper || 927 + exp->helper != nfct_help(ct)->helper || 928 928 exp->class != class) 929 929 break; 930 930 #if IS_ENABLED(CONFIG_NF_NAT) ··· 1040 1040 unsigned int port; 1041 1041 const struct sdp_media_type *t; 1042 1042 int ret = NF_ACCEPT; 1043 + bool have_rtp_addr = false; 1043 1044 1044 1045 hooks = rcu_dereference(nf_nat_sip_hooks); 1045 1046 ··· 1057 1056 caddr_len = 0; 1058 1057 if (ct_sip_parse_sdp_addr(ct, *dptr, sdpoff, *datalen, 1059 1058 SDP_HDR_CONNECTION, SDP_HDR_MEDIA, 1060 - &matchoff, &matchlen, &caddr) > 0) 1059 + &matchoff, &matchlen, &caddr) > 0) { 1061 1060 caddr_len = matchlen; 1061 + memcpy(&rtp_addr, &caddr, sizeof(rtp_addr)); 1062 + have_rtp_addr = true; 1063 + } 1062 1064 1063 1065 mediaoff = sdpoff; 1064 1066 for (i = 0; i < ARRAY_SIZE(sdp_media_types); ) { ··· 1095 1091 &matchoff, &matchlen, &maddr) > 0) { 1096 1092 maddr_len = matchlen; 1097 1093 memcpy(&rtp_addr, &maddr, sizeof(rtp_addr)); 1098 - } else if (caddr_len) 1094 + have_rtp_addr = true; 1095 + } else if (caddr_len) { 1099 1096 memcpy(&rtp_addr, &caddr, sizeof(rtp_addr)); 1100 - else { 1097 + have_rtp_addr = true; 1098 + } else { 1101 1099 nf_ct_helper_log(skb, ct, "cannot parse SDP message"); 1102 1100 return NF_DROP; 1103 1101 } ··· 1131 1125 1132 1126 /* Update session connection and owner addresses */ 1133 1127 hooks = rcu_dereference(nf_nat_sip_hooks); 1134 - if (hooks && ct->status & IPS_NAT_MASK) 1128 + if (hooks && ct->status & IPS_NAT_MASK && have_rtp_addr) 1135 1129 ret = hooks->sdp_session(skb, protoff, dataoff, 1136 1130 dptr, datalen, sdpoff, 1137 1131 &rtp_addr); ··· 1303 1297 nf_ct_expect_init(exp, SIP_EXPECT_SIGNALLING, nf_ct_l3num(ct), 1304 1298 saddr, &daddr, proto, NULL, &port); 1305 1299 exp->timeout.expires = sip_timeout * HZ; 1306 - exp->helper = helper; 1300 + 
rcu_assign_pointer(exp->helper, helper); 1307 1301 exp->flags = NF_CT_EXPECT_PERMANENT | NF_CT_EXPECT_INACTIVE; 1308 1302 1309 1303 hooks = rcu_dereference(nf_nat_sip_hooks);
+10 -10
net/netfilter/nft_set_pipapo_avx2.c
··· 242 242 243 243 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 244 244 if (last) 245 - return b; 245 + ret = b; 246 246 247 247 if (unlikely(ret == -1)) 248 248 ret = b / XSAVE_YMM_SIZE; ··· 319 319 320 320 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 321 321 if (last) 322 - return b; 322 + ret = b; 323 323 324 324 if (unlikely(ret == -1)) 325 325 ret = b / XSAVE_YMM_SIZE; ··· 414 414 415 415 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 416 416 if (last) 417 - return b; 417 + ret = b; 418 418 419 419 if (unlikely(ret == -1)) 420 420 ret = b / XSAVE_YMM_SIZE; ··· 505 505 506 506 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 507 507 if (last) 508 - return b; 508 + ret = b; 509 509 510 510 if (unlikely(ret == -1)) 511 511 ret = b / XSAVE_YMM_SIZE; ··· 641 641 642 642 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 643 643 if (last) 644 - return b; 644 + ret = b; 645 645 646 646 if (unlikely(ret == -1)) 647 647 ret = b / XSAVE_YMM_SIZE; ··· 699 699 700 700 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 701 701 if (last) 702 - return b; 702 + ret = b; 703 703 704 704 if (unlikely(ret == -1)) 705 705 ret = b / XSAVE_YMM_SIZE; ··· 764 764 765 765 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 766 766 if (last) 767 - return b; 767 + ret = b; 768 768 769 769 if (unlikely(ret == -1)) 770 770 ret = b / XSAVE_YMM_SIZE; ··· 839 839 840 840 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 841 841 if (last) 842 - return b; 842 + ret = b; 843 843 844 844 if (unlikely(ret == -1)) 845 845 ret = b / XSAVE_YMM_SIZE; ··· 925 925 926 926 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 927 927 if (last) 928 - return b; 928 + ret = b; 929 929 930 930 if (unlikely(ret == -1)) 931 931 ret = b / XSAVE_YMM_SIZE; ··· 1019 1019 1020 1020 b = nft_pipapo_avx2_refill(i_ul, &map[i_ul], fill, f->mt, last); 1021 1021 if (last) 1022 - 
return b; 1022 + ret = b; 1023 1023 1024 1024 if (unlikely(ret == -1)) 1025 1025 ret = b / XSAVE_YMM_SIZE;
+75 -17
net/netfilter/nft_set_rbtree.c
··· 572 572 return array; 573 573 } 574 574 575 - #define NFT_ARRAY_EXTRA_SIZE 10240 576 - 577 575 /* Similar to nft_rbtree_{u,k}size to hide details to userspace, but consider 578 576 * packed representation coming from userspace for anonymous sets too. 579 577 */ 580 578 static u32 nft_array_elems(const struct nft_set *set) 581 579 { 582 - u32 nelems = atomic_read(&set->nelems); 580 + u32 nelems = atomic_read(&set->nelems) - set->ndeact; 583 581 584 582 /* Adjacent intervals are represented with a single start element in 585 583 * anonymous sets, use the current element counter as is. ··· 593 595 return (nelems / 2) + 2; 594 596 } 595 597 596 - static int nft_array_may_resize(const struct nft_set *set) 598 + #define NFT_ARRAY_INITIAL_SIZE 1024 599 + #define NFT_ARRAY_INITIAL_ANON_SIZE 16 600 + #define NFT_ARRAY_INITIAL_ANON_THRESH (8192U / sizeof(struct nft_array_interval)) 601 + 602 + static int nft_array_may_resize(const struct nft_set *set, bool flush) 597 603 { 598 - u32 nelems = nft_array_elems(set), new_max_intervals; 604 + u32 initial_intervals, max_intervals, new_max_intervals, delta; 605 + u32 shrinked_max_intervals, nelems = nft_array_elems(set); 599 606 struct nft_rbtree *priv = nft_set_priv(set); 600 607 struct nft_array *array; 601 608 602 - if (!priv->array_next) { 603 - array = nft_array_alloc(nelems + NFT_ARRAY_EXTRA_SIZE); 609 + if (nft_set_is_anonymous(set)) 610 + initial_intervals = NFT_ARRAY_INITIAL_ANON_SIZE; 611 + else 612 + initial_intervals = NFT_ARRAY_INITIAL_SIZE; 613 + 614 + if (priv->array_next) { 615 + max_intervals = priv->array_next->max_intervals; 616 + new_max_intervals = priv->array_next->max_intervals; 617 + } else { 618 + if (priv->array) { 619 + max_intervals = priv->array->max_intervals; 620 + new_max_intervals = priv->array->max_intervals; 621 + } else { 622 + max_intervals = 0; 623 + new_max_intervals = initial_intervals; 624 + } 625 + } 626 + 627 + if (nft_set_is_anonymous(set)) 628 + goto maybe_grow; 629 + 630 + if 
(flush) { 631 + /* Set flush just started, nelems still report elements.*/ 632 + nelems = 0; 633 + new_max_intervals = NFT_ARRAY_INITIAL_SIZE; 634 + goto realloc_array; 635 + } 636 + 637 + if (check_add_overflow(new_max_intervals, new_max_intervals, 638 + &shrinked_max_intervals)) 639 + return -EOVERFLOW; 640 + 641 + shrinked_max_intervals = DIV_ROUND_UP(shrinked_max_intervals, 3); 642 + 643 + if (shrinked_max_intervals > NFT_ARRAY_INITIAL_SIZE && 644 + nelems < shrinked_max_intervals) { 645 + new_max_intervals = shrinked_max_intervals; 646 + goto realloc_array; 647 + } 648 + maybe_grow: 649 + if (nelems > new_max_intervals) { 650 + if (nft_set_is_anonymous(set) && 651 + new_max_intervals < NFT_ARRAY_INITIAL_ANON_THRESH) { 652 + new_max_intervals <<= 1; 653 + } else { 654 + delta = new_max_intervals >> 1; 655 + if (check_add_overflow(new_max_intervals, delta, 656 + &new_max_intervals)) 657 + return -EOVERFLOW; 658 + } 659 + } 660 + 661 + realloc_array: 662 + if (WARN_ON_ONCE(nelems > new_max_intervals)) 663 + return -ENOMEM; 664 + 665 + if (priv->array_next) { 666 + if (max_intervals == new_max_intervals) 667 + return 0; 668 + 669 + if (nft_array_intervals_alloc(priv->array_next, new_max_intervals) < 0) 670 + return -ENOMEM; 671 + } else { 672 + array = nft_array_alloc(new_max_intervals); 604 673 if (!array) 605 674 return -ENOMEM; 606 675 607 676 priv->array_next = array; 608 677 } 609 - 610 - if (nelems < priv->array_next->max_intervals) 611 - return 0; 612 - 613 - new_max_intervals = priv->array_next->max_intervals + NFT_ARRAY_EXTRA_SIZE; 614 - if (nft_array_intervals_alloc(priv->array_next, new_max_intervals) < 0) 615 - return -ENOMEM; 616 678 617 679 return 0; 618 680 } ··· 688 630 689 631 nft_rbtree_maybe_reset_start_cookie(priv, tstamp); 690 632 691 - if (nft_array_may_resize(set) < 0) 633 + if (nft_array_may_resize(set, false) < 0) 692 634 return -ENOMEM; 693 635 694 636 do { ··· 794 736 nft_rbtree_interval_null(set, this)) 795 737 priv->start_rbe_cookie = 
0; 796 738 797 - if (nft_array_may_resize(set) < 0) 739 + if (nft_array_may_resize(set, false) < 0) 798 740 return NULL; 799 741 800 742 while (parent != NULL) { ··· 864 806 865 807 switch (iter->type) { 866 808 case NFT_ITER_UPDATE_CLONE: 867 - if (nft_array_may_resize(set) < 0) { 809 + if (nft_array_may_resize(set, true) < 0) { 868 810 iter->err = -ENOMEM; 869 811 break; 870 812 }
+6 -4
net/nfc/nci/core.c
··· 579 579 skb_queue_purge(&ndev->rx_q); 580 580 skb_queue_purge(&ndev->tx_q); 581 581 582 - /* Flush RX and TX wq */ 583 - flush_workqueue(ndev->rx_wq); 582 + /* Flush TX wq, RX wq flush can't be under the lock */ 584 583 flush_workqueue(ndev->tx_wq); 585 584 586 585 /* Reset device */ ··· 591 592 msecs_to_jiffies(NCI_RESET_TIMEOUT)); 592 593 593 594 /* After this point our queues are empty 594 - * and no works are scheduled. 595 + * rx work may be running but will see that NCI_UP was cleared 595 596 */ 596 597 ndev->ops->close(ndev); 597 598 598 599 clear_bit(NCI_INIT, &ndev->flags); 599 600 600 - /* Flush cmd wq */ 601 + /* Flush cmd and tx wq */ 601 602 flush_workqueue(ndev->cmd_wq); 602 603 603 604 timer_delete_sync(&ndev->cmd_timer); ··· 611 612 ndev->flags &= BIT(NCI_UNREG); 612 613 613 614 mutex_unlock(&ndev->req_lock); 615 + 616 + /* rx_work may take req_lock via nci_deactivate_target */ 617 + flush_workqueue(ndev->rx_wq); 614 618 615 619 return 0; 616 620 }
+2
net/openvswitch/flow_netlink.c
··· 2953 2953 case OVS_KEY_ATTR_MPLS: 2954 2954 if (!eth_p_mpls(eth_type)) 2955 2955 return -EINVAL; 2956 + if (key_len != sizeof(struct ovs_key_mpls)) 2957 + return -EINVAL; 2956 2958 break; 2957 2959 2958 2960 case OVS_KEY_ATTR_SCTP:
+8 -3
net/openvswitch/vport-netdev.c
··· 151 151 void ovs_netdev_detach_dev(struct vport *vport) 152 152 { 153 153 ASSERT_RTNL(); 154 - vport->dev->priv_flags &= ~IFF_OVS_DATAPATH; 155 154 netdev_rx_handler_unregister(vport->dev); 156 155 netdev_upper_dev_unlink(vport->dev, 157 156 netdev_master_upper_dev_get(vport->dev)); 158 157 dev_set_promiscuity(vport->dev, -1); 158 + 159 + /* paired with smp_mb() in netdev_destroy() */ 160 + smp_wmb(); 161 + 162 + vport->dev->priv_flags &= ~IFF_OVS_DATAPATH; 159 163 } 160 164 161 165 static void netdev_destroy(struct vport *vport) ··· 178 174 rtnl_unlock(); 179 175 } 180 176 177 + /* paired with smp_wmb() in ovs_netdev_detach_dev() */ 178 + smp_mb(); 179 + 181 180 call_rcu(&vport->rcu, vport_netdev_free); 182 181 } 183 182 ··· 196 189 */ 197 190 if (vport->dev->reg_state == NETREG_REGISTERED) 198 191 rtnl_delete_link(vport->dev, 0, NULL); 199 - netdev_put(vport->dev, &vport->dev_tracker); 200 - vport->dev = NULL; 201 192 rtnl_unlock(); 202 193 203 194 call_rcu(&vport->rcu, vport_netdev_free);
+1
net/packet/af_packet.c
··· 3135 3135 3136 3136 spin_lock(&po->bind_lock); 3137 3137 unregister_prot_hook(sk, false); 3138 + WRITE_ONCE(po->num, 0); 3138 3139 packet_cached_dev_reset(po); 3139 3140 3140 3141 if (po->prot_hook.dev) {
+8 -1
net/smc/smc_rx.c
··· 135 135 sock_put(sk); 136 136 } 137 137 138 + static bool smc_rx_pipe_buf_get(struct pipe_inode_info *pipe, 139 + struct pipe_buffer *buf) 140 + { 141 + /* smc_spd_priv in buf->private is not shareable; disallow cloning. */ 142 + return false; 143 + } 144 + 138 145 static const struct pipe_buf_operations smc_pipe_ops = { 139 146 .release = smc_rx_pipe_buf_release, 140 - .get = generic_pipe_buf_get 147 + .get = smc_rx_pipe_buf_get, 141 148 }; 142 149 143 150 static void smc_rx_spd_release(struct splice_pipe_desc *spd,
+1 -1
net/tls/tls_sw.c
··· 246 246 crypto_wait_req(-EINPROGRESS, &ctx->async_wait); 247 247 atomic_inc(&ctx->decrypt_pending); 248 248 249 + __skb_queue_purge(&ctx->async_hold); 249 250 return ctx->async_wait.err; 250 251 } 251 252 ··· 2225 2224 2226 2225 /* Wait for all previously submitted records to be decrypted */ 2227 2226 ret = tls_decrypt_async_wait(ctx); 2228 - __skb_queue_purge(&ctx->async_hold); 2229 2227 2230 2228 if (ret) { 2231 2229 if (err >= 0 || err == -EINPROGRESS)
+4 -1
net/xfrm/xfrm_input.c
··· 75 75 76 76 spin_lock_bh(&xfrm_input_afinfo_lock); 77 77 if (likely(xfrm_input_afinfo[afinfo->is_ipip][afinfo->family])) { 78 - if (unlikely(xfrm_input_afinfo[afinfo->is_ipip][afinfo->family] != afinfo)) 78 + const struct xfrm_input_afinfo *cur; 79 + 80 + cur = rcu_access_pointer(xfrm_input_afinfo[afinfo->is_ipip][afinfo->family]); 81 + if (unlikely(cur != afinfo)) 79 82 err = -EINVAL; 80 83 else 81 84 RCU_INIT_POINTER(xfrm_input_afinfo[afinfo->is_ipip][afinfo->family], NULL);
+14 -3
net/xfrm/xfrm_iptfs.c
··· 901 901 iptfs_skb_can_add_frags(newskb, fragwalk, data, copylen)) { 902 902 iptfs_skb_add_frags(newskb, fragwalk, data, copylen); 903 903 } else { 904 + if (skb_linearize(newskb)) { 905 + XFRM_INC_STATS(xs_net(xtfs->x), 906 + LINUX_MIB_XFRMINBUFFERERROR); 907 + goto abandon; 908 + } 909 + 904 910 /* copy fragment data into newskb */ 905 911 if (skb_copy_seq_read(st, data, skb_put(newskb, copylen), 906 912 copylen)) { ··· 997 991 998 992 iplen = be16_to_cpu(iph->tot_len); 999 993 iphlen = iph->ihl << 2; 994 + if (iplen < iphlen || iphlen < sizeof(*iph)) { 995 + XFRM_INC_STATS(net, 996 + LINUX_MIB_XFRMINHDRERROR); 997 + goto done; 998 + } 1000 999 protocol = cpu_to_be16(ETH_P_IP); 1001 1000 XFRM_MODE_SKB_CB(skbseq->root_skb)->tos = iph->tos; 1002 1001 } else if (iph->version == 0x6) { ··· 2664 2653 if (!xtfs) 2665 2654 return -ENOMEM; 2666 2655 2667 - x->mode_data = xtfs; 2668 - xtfs->x = x; 2669 - 2670 2656 xtfs->ra_newskb = NULL; 2671 2657 if (xtfs->cfg.reorder_win_size) { 2672 2658 xtfs->w_saved = kzalloc_objs(*xtfs->w_saved, ··· 2673 2665 return -ENOMEM; 2674 2666 } 2675 2667 } 2668 + 2669 + x->mode_data = xtfs; 2670 + xtfs->x = x; 2676 2671 2677 2672 return 0; 2678 2673 }
+1 -1
net/xfrm/xfrm_nat_keepalive.c
··· 261 261 262 262 int xfrm_nat_keepalive_net_fini(struct net *net) 263 263 { 264 - cancel_delayed_work_sync(&net->xfrm.nat_keepalive_work); 264 + disable_delayed_work_sync(&net->xfrm.nat_keepalive_work); 265 265 return 0; 266 266 } 267 267
+7 -5
net/xfrm/xfrm_policy.c
··· 4156 4156 int i; 4157 4157 4158 4158 for (i = 0; i < ARRAY_SIZE(xfrm_policy_afinfo); i++) { 4159 - if (xfrm_policy_afinfo[i] != afinfo) 4159 + if (rcu_access_pointer(xfrm_policy_afinfo[i]) != afinfo) 4160 4160 continue; 4161 4161 RCU_INIT_POINTER(xfrm_policy_afinfo[i], NULL); 4162 4162 break; ··· 4242 4242 net->xfrm.policy_count[XFRM_POLICY_MAX + dir] = 0; 4243 4243 4244 4244 htab = &net->xfrm.policy_bydst[dir]; 4245 - htab->table = xfrm_hash_alloc(sz); 4245 + rcu_assign_pointer(htab->table, xfrm_hash_alloc(sz)); 4246 4246 if (!htab->table) 4247 4247 goto out_bydst; 4248 4248 htab->hmask = hmask; ··· 4269 4269 struct xfrm_policy_hash *htab; 4270 4270 4271 4271 htab = &net->xfrm.policy_bydst[dir]; 4272 - xfrm_hash_free(htab->table, sz); 4272 + xfrm_hash_free(rcu_dereference_protected(htab->table, true), sz); 4273 4273 } 4274 4274 xfrm_hash_free(net->xfrm.policy_byidx, sz); 4275 4275 out_byidx: ··· 4281 4281 struct xfrm_pol_inexact_bin *b, *t; 4282 4282 unsigned int sz; 4283 4283 int dir; 4284 + 4285 + disable_work_sync(&net->xfrm.policy_hthresh.work); 4284 4286 4285 4287 flush_work(&net->xfrm.policy_hash_work); 4286 4288 #ifdef CONFIG_XFRM_SUB_POLICY ··· 4297 4295 4298 4296 htab = &net->xfrm.policy_bydst[dir]; 4299 4297 sz = (htab->hmask + 1) * sizeof(struct hlist_head); 4300 - WARN_ON(!hlist_empty(htab->table)); 4301 - xfrm_hash_free(htab->table, sz); 4298 + WARN_ON(!hlist_empty(rcu_dereference_protected(htab->table, true))); 4299 + xfrm_hash_free(rcu_dereference_protected(htab->table, true), sz); 4302 4300 } 4303 4301 4304 4302 sz = (net->xfrm.policy_idx_hmask + 1) * sizeof(struct hlist_head);
+64 -52
net/xfrm/xfrm_state.c
··· 53 53 static HLIST_HEAD(xfrm_state_gc_list); 54 54 static HLIST_HEAD(xfrm_state_dev_gc_list); 55 55 56 - static inline bool xfrm_state_hold_rcu(struct xfrm_state __rcu *x) 56 + static inline bool xfrm_state_hold_rcu(struct xfrm_state *x) 57 57 { 58 58 return refcount_inc_not_zero(&x->refcnt); 59 59 } ··· 870 870 for (i = 0; i <= net->xfrm.state_hmask; i++) { 871 871 struct xfrm_state *x; 872 872 873 - hlist_for_each_entry(x, net->xfrm.state_bydst+i, bydst) { 873 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + i, bydst) { 874 874 if (xfrm_id_proto_match(x->id.proto, proto) && 875 875 (err = security_xfrm_state_delete(x)) != 0) { 876 876 xfrm_audit_state_delete(x, 0, task_valid); ··· 891 891 struct xfrm_state *x; 892 892 struct xfrm_dev_offload *xso; 893 893 894 - hlist_for_each_entry(x, net->xfrm.state_bydst+i, bydst) { 894 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + i, bydst) { 895 895 xso = &x->xso; 896 896 897 897 if (xso->dev == dev && ··· 931 931 for (i = 0; i <= net->xfrm.state_hmask; i++) { 932 932 struct xfrm_state *x; 933 933 restart: 934 - hlist_for_each_entry(x, net->xfrm.state_bydst+i, bydst) { 934 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + i, bydst) { 935 935 if (!xfrm_state_kern(x) && 936 936 xfrm_id_proto_match(x->id.proto, proto)) { 937 937 xfrm_state_hold(x); ··· 973 973 err = -ESRCH; 974 974 for (i = 0; i <= net->xfrm.state_hmask; i++) { 975 975 restart: 976 - hlist_for_each_entry(x, net->xfrm.state_bydst+i, bydst) { 976 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + i, bydst) { 977 977 xso = &x->xso; 978 978 979 979 if (!xfrm_state_kern(x) && xso->dev == dev) { ··· 1563 1563 list_add(&x->km.all, &net->xfrm.state_all); 1564 1564 h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family); 1565 1565 XFRM_STATE_INSERT(bydst, &x->bydst, 1566 - net->xfrm.state_bydst + h, 1566 + 
xfrm_state_deref_prot(net->xfrm.state_bydst, net) + h, 1567 1567 x->xso.type); 1568 1568 h = xfrm_src_hash(net, daddr, saddr, encap_family); 1569 1569 XFRM_STATE_INSERT(bysrc, &x->bysrc, 1570 - net->xfrm.state_bysrc + h, 1570 + xfrm_state_deref_prot(net->xfrm.state_bysrc, net) + h, 1571 1571 x->xso.type); 1572 1572 INIT_HLIST_NODE(&x->state_cache); 1573 1573 if (x->id.spi) { 1574 1574 h = xfrm_spi_hash(net, &x->id.daddr, x->id.spi, x->id.proto, encap_family); 1575 1575 XFRM_STATE_INSERT(byspi, &x->byspi, 1576 - net->xfrm.state_byspi + h, 1576 + xfrm_state_deref_prot(net->xfrm.state_byspi, net) + h, 1577 1577 x->xso.type); 1578 1578 } 1579 1579 if (x->km.seq) { 1580 1580 h = xfrm_seq_hash(net, x->km.seq); 1581 1581 XFRM_STATE_INSERT(byseq, &x->byseq, 1582 - net->xfrm.state_byseq + h, 1582 + xfrm_state_deref_prot(net->xfrm.state_byseq, net) + h, 1583 1583 x->xso.type); 1584 1584 } 1585 1585 x->lft.hard_add_expires_seconds = net->xfrm.sysctl_acq_expires; ··· 1652 1652 1653 1653 spin_lock_bh(&net->xfrm.xfrm_state_lock); 1654 1654 h = xfrm_dst_hash(net, daddr, saddr, reqid, family); 1655 - hlist_for_each_entry(x, net->xfrm.state_bydst+h, bydst) { 1655 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + h, bydst) { 1656 1656 if (x->props.family == family && 1657 1657 x->props.reqid == reqid && 1658 1658 (mark & x->mark.m) == x->mark.v && ··· 1703 1703 struct xfrm_state *x; 1704 1704 unsigned int i; 1705 1705 1706 - rcu_read_lock(); 1707 1706 for (i = 0; i <= net->xfrm.state_hmask; i++) { 1708 - hlist_for_each_entry_rcu(x, &net->xfrm.state_byspi[i], byspi) { 1709 - if (x->id.spi == spi && x->id.proto == proto) { 1710 - if (!xfrm_state_hold_rcu(x)) 1711 - continue; 1712 - rcu_read_unlock(); 1707 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_byspi, net) + i, byspi) { 1708 + if (x->id.spi == spi && x->id.proto == proto) 1713 1709 return x; 1714 - } 1715 1710 } 1716 1711 } 1717 - rcu_read_unlock(); 1718 1712 return NULL; 1719 1713 } 
1720 1714 ··· 1724 1730 1725 1731 h = xfrm_dst_hash(net, &x->id.daddr, &x->props.saddr, 1726 1732 x->props.reqid, x->props.family); 1727 - XFRM_STATE_INSERT(bydst, &x->bydst, net->xfrm.state_bydst + h, 1733 + XFRM_STATE_INSERT(bydst, &x->bydst, 1734 + xfrm_state_deref_prot(net->xfrm.state_bydst, net) + h, 1728 1735 x->xso.type); 1729 1736 1730 1737 h = xfrm_src_hash(net, &x->id.daddr, &x->props.saddr, x->props.family); 1731 - XFRM_STATE_INSERT(bysrc, &x->bysrc, net->xfrm.state_bysrc + h, 1738 + XFRM_STATE_INSERT(bysrc, &x->bysrc, 1739 + xfrm_state_deref_prot(net->xfrm.state_bysrc, net) + h, 1732 1740 x->xso.type); 1733 1741 1734 1742 if (x->id.spi) { 1735 1743 h = xfrm_spi_hash(net, &x->id.daddr, x->id.spi, x->id.proto, 1736 1744 x->props.family); 1737 1745 1738 - XFRM_STATE_INSERT(byspi, &x->byspi, net->xfrm.state_byspi + h, 1746 + XFRM_STATE_INSERT(byspi, &x->byspi, 1747 + xfrm_state_deref_prot(net->xfrm.state_byspi, net) + h, 1739 1748 x->xso.type); 1740 1749 } 1741 1750 1742 1751 if (x->km.seq) { 1743 1752 h = xfrm_seq_hash(net, x->km.seq); 1744 1753 1745 - XFRM_STATE_INSERT(byseq, &x->byseq, net->xfrm.state_byseq + h, 1754 + XFRM_STATE_INSERT(byseq, &x->byseq, 1755 + xfrm_state_deref_prot(net->xfrm.state_byseq, net) + h, 1746 1756 x->xso.type); 1747 1757 } 1748 1758 ··· 1773 1775 u32 cpu_id = xnew->pcpu_num; 1774 1776 1775 1777 h = xfrm_dst_hash(net, &xnew->id.daddr, &xnew->props.saddr, reqid, family); 1776 - hlist_for_each_entry(x, net->xfrm.state_bydst+h, bydst) { 1778 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + h, bydst) { 1777 1779 if (x->props.family == family && 1778 1780 x->props.reqid == reqid && 1779 1781 x->if_id == if_id && ··· 1809 1811 struct xfrm_state *x; 1810 1812 u32 mark = m->v & m->m; 1811 1813 1812 - hlist_for_each_entry(x, net->xfrm.state_bydst+h, bydst) { 1814 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + h, bydst) { 1813 1815 if (x->props.reqid != reqid || 1814 1816 
x->props.mode != mode || 1815 1817 x->props.family != family || ··· 1866 1868 ktime_set(net->xfrm.sysctl_acq_expires, 0), 1867 1869 HRTIMER_MODE_REL_SOFT); 1868 1870 list_add(&x->km.all, &net->xfrm.state_all); 1869 - XFRM_STATE_INSERT(bydst, &x->bydst, net->xfrm.state_bydst + h, 1871 + XFRM_STATE_INSERT(bydst, &x->bydst, 1872 + xfrm_state_deref_prot(net->xfrm.state_bydst, net) + h, 1870 1873 x->xso.type); 1871 1874 h = xfrm_src_hash(net, daddr, saddr, family); 1872 - XFRM_STATE_INSERT(bysrc, &x->bysrc, net->xfrm.state_bysrc + h, 1875 + XFRM_STATE_INSERT(bysrc, &x->bysrc, 1876 + xfrm_state_deref_prot(net->xfrm.state_bysrc, net) + h, 1873 1877 x->xso.type); 1874 1878 1875 1879 net->xfrm.state_num++; ··· 2091 2091 if (m->reqid) { 2092 2092 h = xfrm_dst_hash(net, &m->old_daddr, &m->old_saddr, 2093 2093 m->reqid, m->old_family); 2094 - hlist_for_each_entry(x, net->xfrm.state_bydst+h, bydst) { 2094 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + h, bydst) { 2095 2095 if (x->props.mode != m->mode || 2096 2096 x->id.proto != m->proto) 2097 2097 continue; ··· 2110 2110 } else { 2111 2111 h = xfrm_src_hash(net, &m->old_daddr, &m->old_saddr, 2112 2112 m->old_family); 2113 - hlist_for_each_entry(x, net->xfrm.state_bysrc+h, bysrc) { 2113 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bysrc, net) + h, bysrc) { 2114 2114 if (x->props.mode != m->mode || 2115 2115 x->id.proto != m->proto) 2116 2116 continue; ··· 2264 2264 2265 2265 err = 0; 2266 2266 x->km.state = XFRM_STATE_DEAD; 2267 + xfrm_dev_state_delete(x); 2267 2268 __xfrm_state_put(x); 2268 2269 } 2269 2270 ··· 2313 2312 2314 2313 spin_lock_bh(&net->xfrm.xfrm_state_lock); 2315 2314 for (i = 0; i <= net->xfrm.state_hmask; i++) { 2316 - hlist_for_each_entry(x, net->xfrm.state_bydst + i, bydst) 2315 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_bydst, net) + i, bydst) 2317 2316 xfrm_dev_state_update_stats(x); 2318 2317 } 2319 2318 
spin_unlock_bh(&net->xfrm.xfrm_state_lock); ··· 2504 2503 unsigned int h = xfrm_seq_hash(net, seq); 2505 2504 struct xfrm_state *x; 2506 2505 2507 - hlist_for_each_entry_rcu(x, net->xfrm.state_byseq + h, byseq) { 2506 + hlist_for_each_entry(x, xfrm_state_deref_prot(net->xfrm.state_byseq, net) + h, byseq) { 2508 2507 if (x->km.seq == seq && 2509 2508 (mark & x->mark.m) == x->mark.v && 2510 2509 x->pcpu_num == pcpu_num && ··· 2603 2602 if (!x0) { 2604 2603 x->id.spi = newspi; 2605 2604 h = xfrm_spi_hash(net, &x->id.daddr, newspi, x->id.proto, x->props.family); 2606 - XFRM_STATE_INSERT(byspi, &x->byspi, net->xfrm.state_byspi + h, x->xso.type); 2605 + XFRM_STATE_INSERT(byspi, &x->byspi, 2606 + xfrm_state_deref_prot(net->xfrm.state_byspi, net) + h, 2607 + x->xso.type); 2607 2608 spin_unlock_bh(&net->xfrm.xfrm_state_lock); 2608 2609 err = 0; 2609 2610 goto unlock; 2610 2611 } 2611 - xfrm_state_put(x0); 2612 2612 spin_unlock_bh(&net->xfrm.xfrm_state_lock); 2613 2613 2614 2614 next: ··· 3260 3258 3261 3259 int __net_init xfrm_state_init(struct net *net) 3262 3260 { 3261 + struct hlist_head *ndst, *nsrc, *nspi, *nseq; 3263 3262 unsigned int sz; 3264 3263 3265 3264 if (net_eq(net, &init_net)) ··· 3271 3268 3272 3269 sz = sizeof(struct hlist_head) * 8; 3273 3270 3274 - net->xfrm.state_bydst = xfrm_hash_alloc(sz); 3275 - if (!net->xfrm.state_bydst) 3271 + ndst = xfrm_hash_alloc(sz); 3272 + if (!ndst) 3276 3273 goto out_bydst; 3277 - net->xfrm.state_bysrc = xfrm_hash_alloc(sz); 3278 - if (!net->xfrm.state_bysrc) 3274 + rcu_assign_pointer(net->xfrm.state_bydst, ndst); 3275 + 3276 + nsrc = xfrm_hash_alloc(sz); 3277 + if (!nsrc) 3279 3278 goto out_bysrc; 3280 - net->xfrm.state_byspi = xfrm_hash_alloc(sz); 3281 - if (!net->xfrm.state_byspi) 3279 + rcu_assign_pointer(net->xfrm.state_bysrc, nsrc); 3280 + 3281 + nspi = xfrm_hash_alloc(sz); 3282 + if (!nspi) 3282 3283 goto out_byspi; 3283 - net->xfrm.state_byseq = xfrm_hash_alloc(sz); 3284 - if (!net->xfrm.state_byseq) 3284 + 
rcu_assign_pointer(net->xfrm.state_byspi, nspi); 3285 + 3286 + nseq = xfrm_hash_alloc(sz); 3287 + if (!nseq) 3285 3288 goto out_byseq; 3289 + rcu_assign_pointer(net->xfrm.state_byseq, nseq); 3286 3290 3287 3291 net->xfrm.state_cache_input = alloc_percpu(struct hlist_head); 3288 3292 if (!net->xfrm.state_cache_input) ··· 3305 3295 return 0; 3306 3296 3307 3297 out_state_cache_input: 3308 - xfrm_hash_free(net->xfrm.state_byseq, sz); 3298 + xfrm_hash_free(nseq, sz); 3309 3299 out_byseq: 3310 - xfrm_hash_free(net->xfrm.state_byspi, sz); 3300 + xfrm_hash_free(nspi, sz); 3311 3301 out_byspi: 3312 - xfrm_hash_free(net->xfrm.state_bysrc, sz); 3302 + xfrm_hash_free(nsrc, sz); 3313 3303 out_bysrc: 3314 - xfrm_hash_free(net->xfrm.state_bydst, sz); 3304 + xfrm_hash_free(ndst, sz); 3315 3305 out_bydst: 3316 3306 return -ENOMEM; 3317 3307 } 3318 3308 3309 + #define xfrm_state_deref_netexit(table) \ 3310 + rcu_dereference_protected((table), true /* netns is going away */) 3319 3311 void xfrm_state_fini(struct net *net) 3320 3312 { 3321 3313 unsigned int sz; ··· 3330 3318 WARN_ON(!list_empty(&net->xfrm.state_all)); 3331 3319 3332 3320 for (i = 0; i <= net->xfrm.state_hmask; i++) { 3333 - WARN_ON(!hlist_empty(net->xfrm.state_byseq + i)); 3334 - WARN_ON(!hlist_empty(net->xfrm.state_byspi + i)); 3335 - WARN_ON(!hlist_empty(net->xfrm.state_bysrc + i)); 3336 - WARN_ON(!hlist_empty(net->xfrm.state_bydst + i)); 3321 + WARN_ON(!hlist_empty(xfrm_state_deref_netexit(net->xfrm.state_byseq) + i)); 3322 + WARN_ON(!hlist_empty(xfrm_state_deref_netexit(net->xfrm.state_byspi) + i)); 3323 + WARN_ON(!hlist_empty(xfrm_state_deref_netexit(net->xfrm.state_bysrc) + i)); 3324 + WARN_ON(!hlist_empty(xfrm_state_deref_netexit(net->xfrm.state_bydst) + i)); 3337 3325 } 3338 3326 3339 3327 sz = (net->xfrm.state_hmask + 1) * sizeof(struct hlist_head); 3340 - xfrm_hash_free(net->xfrm.state_byseq, sz); 3341 - xfrm_hash_free(net->xfrm.state_byspi, sz); 3342 - xfrm_hash_free(net->xfrm.state_bysrc, sz); 3343 - 
xfrm_hash_free(net->xfrm.state_bydst, sz); 3328 + xfrm_hash_free(xfrm_state_deref_netexit(net->xfrm.state_byseq), sz); 3329 + xfrm_hash_free(xfrm_state_deref_netexit(net->xfrm.state_byspi), sz); 3330 + xfrm_hash_free(xfrm_state_deref_netexit(net->xfrm.state_bysrc), sz); 3331 + xfrm_hash_free(xfrm_state_deref_netexit(net->xfrm.state_bydst), sz); 3344 3332 free_percpu(net->xfrm.state_cache_input); 3345 3333 } 3346 3334
+22 -10
net/xfrm/xfrm_user.c
··· 35 35 #endif 36 36 #include <linux/unaligned.h> 37 37 38 + static struct sock *xfrm_net_nlsk(const struct net *net, const struct sk_buff *skb) 39 + { 40 + /* get the source of this request, see netlink_unicast_kernel */ 41 + const struct sock *sk = NETLINK_CB(skb).sk; 42 + 43 + /* sk is refcounted, the netns stays alive and nlsk with it */ 44 + return rcu_dereference_protected(net->xfrm.nlsk, sk->sk_net_refcnt); 45 + } 46 + 38 47 static int verify_one_alg(struct nlattr **attrs, enum xfrm_attr_type_t type, 39 48 struct netlink_ext_ack *extack) 40 49 { ··· 1736 1727 err = build_spdinfo(r_skb, net, sportid, seq, *flags); 1737 1728 BUG_ON(err < 0); 1738 1729 1739 - return nlmsg_unicast(net->xfrm.nlsk, r_skb, sportid); 1730 + return nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, sportid); 1740 1731 } 1741 1732 1742 1733 static inline unsigned int xfrm_sadinfo_msgsize(void) ··· 1796 1787 err = build_sadinfo(r_skb, net, sportid, seq, *flags); 1797 1788 BUG_ON(err < 0); 1798 1789 1799 - return nlmsg_unicast(net->xfrm.nlsk, r_skb, sportid); 1790 + return nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, sportid); 1800 1791 } 1801 1792 1802 1793 static int xfrm_get_sa(struct sk_buff *skb, struct nlmsghdr *nlh, ··· 1816 1807 if (IS_ERR(resp_skb)) { 1817 1808 err = PTR_ERR(resp_skb); 1818 1809 } else { 1819 - err = nlmsg_unicast(net->xfrm.nlsk, resp_skb, NETLINK_CB(skb).portid); 1810 + err = nlmsg_unicast(xfrm_net_nlsk(net, skb), resp_skb, NETLINK_CB(skb).portid); 1820 1811 } 1821 1812 xfrm_state_put(x); 1822 1813 out_noput: ··· 1859 1850 pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]); 1860 1851 if (pcpu_num >= num_possible_cpus()) { 1861 1852 err = -EINVAL; 1853 + NL_SET_ERR_MSG(extack, "pCPU number too big"); 1862 1854 goto out_noput; 1863 1855 } 1864 1856 } ··· 1907 1897 } 1908 1898 } 1909 1899 1910 - err = nlmsg_unicast(net->xfrm.nlsk, resp_skb, NETLINK_CB(skb).portid); 1900 + err = nlmsg_unicast(xfrm_net_nlsk(net, skb), resp_skb, NETLINK_CB(skb).portid); 1911 1901 1912 1902 
out: 1913 1903 xfrm_state_put(x); ··· 2552 2542 r_up->out = net->xfrm.policy_default[XFRM_POLICY_OUT]; 2553 2543 nlmsg_end(r_skb, r_nlh); 2554 2544 2555 - return nlmsg_unicast(net->xfrm.nlsk, r_skb, portid); 2545 + return nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, portid); 2556 2546 } 2557 2547 2558 2548 static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh, ··· 2618 2608 if (IS_ERR(resp_skb)) { 2619 2609 err = PTR_ERR(resp_skb); 2620 2610 } else { 2621 - err = nlmsg_unicast(net->xfrm.nlsk, resp_skb, 2611 + err = nlmsg_unicast(xfrm_net_nlsk(net, skb), resp_skb, 2622 2612 NETLINK_CB(skb).portid); 2623 2613 } 2624 2614 } else { ··· 2791 2781 err = build_aevent(r_skb, x, &c); 2792 2782 BUG_ON(err < 0); 2793 2783 2794 - err = nlmsg_unicast(net->xfrm.nlsk, r_skb, NETLINK_CB(skb).portid); 2784 + err = nlmsg_unicast(xfrm_net_nlsk(net, skb), r_skb, NETLINK_CB(skb).portid); 2795 2785 spin_unlock_bh(&x->lock); 2796 2786 xfrm_state_put(x); 2797 2787 return err; ··· 3011 3001 if (attrs[XFRMA_SA_PCPU]) { 3012 3002 x->pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]); 3013 3003 err = -EINVAL; 3014 - if (x->pcpu_num >= num_possible_cpus()) 3004 + if (x->pcpu_num >= num_possible_cpus()) { 3005 + NL_SET_ERR_MSG(extack, "pCPU number too big"); 3015 3006 goto free_state; 3007 + } 3016 3008 } 3017 3009 3018 3010 err = verify_newpolicy_info(&ua->policy, extack); ··· 3495 3483 goto err; 3496 3484 } 3497 3485 3498 - err = netlink_dump_start(net->xfrm.nlsk, skb, nlh, &c); 3486 + err = netlink_dump_start(xfrm_net_nlsk(net, skb), skb, nlh, &c); 3499 3487 goto err; 3500 3488 } 3501 3489 ··· 3685 3673 } 3686 3674 if (x->if_id) 3687 3675 l += nla_total_size(sizeof(x->if_id)); 3688 - if (x->pcpu_num) 3676 + if (x->pcpu_num != UINT_MAX) 3689 3677 l += nla_total_size(sizeof(x->pcpu_num)); 3690 3678 3691 3679 /* Must count x->lastused as it may become non-zero behind our back. */
+11
scripts/coccinelle/api/kmalloc_objs.cocci
··· 122 122 - ALLOC(struct_size_t(TYPE, FLEX, COUNT), GFP) 123 123 + ALLOC_FLEX(TYPE, FLEX, COUNT, GFP) 124 124 ) 125 + 126 + @drop_gfp_kernel depends on patch && !(file in "tools") && !(file in "samples")@ 127 + identifier ALLOC = {kmalloc_obj,kmalloc_objs,kmalloc_flex, 128 + kzalloc_obj,kzalloc_objs,kzalloc_flex, 129 + kvmalloc_obj,kvmalloc_objs,kvmalloc_flex, 130 + kvzalloc_obj,kvzalloc_objs,kvzalloc_flex}; 131 + @@ 132 + 133 + ALLOC(... 134 + - , GFP_KERNEL 135 + )
+10 -14
scripts/kconfig/merge_config.sh
··· 151 151 if ! "$AWK" -v prefix="$CONFIG_PREFIX" \ 152 152 -v warnoverride="$WARNOVERRIDE" \ 153 153 -v strict="$STRICT" \ 154 + -v outfile="$TMP_FILE.new" \ 154 155 -v builtin="$BUILTIN" \ 155 156 -v warnredun="$WARNREDUN" ' 156 157 BEGIN { ··· 196 195 197 196 # First pass: read merge file, store all lines and index 198 197 FILENAME == ARGV[1] { 199 - mergefile = FILENAME 198 + mergefile = FILENAME 200 199 merge_lines[FNR] = $0 201 200 merge_total = FNR 202 201 cfg = get_cfg($0) ··· 213 212 214 213 # Not a config or not in merge file - keep it 215 214 if (cfg == "" || !(cfg in merge_cfg)) { 216 - print $0 >> ARGV[3] 215 + print $0 >> outfile 217 216 next 218 217 } 219 218 220 - prev_val = $0 219 + prev_val = $0 221 220 new_val = merge_cfg[cfg] 222 221 223 222 # BUILTIN: do not demote y to m 224 223 if (builtin == "true" && new_val ~ /=m$/ && prev_val ~ /=y$/) { 225 224 warn_builtin(cfg, prev_val, new_val) 226 - print $0 >> ARGV[3] 225 + print $0 >> outfile 227 226 skip_merge[merge_cfg_line[cfg]] = 1 228 227 next 229 228 } ··· 236 235 237 236 # "=n" is the same as "is not set" 238 237 if (prev_val ~ /=n$/ && new_val ~ / is not set$/) { 239 - print $0 >> ARGV[3] 238 + print $0 >> outfile 240 239 next 241 240 } 242 241 ··· 247 246 } 248 247 } 249 248 250 - # output file, skip all lines 251 - FILENAME == ARGV[3] { 252 - nextfile 253 - } 254 - 255 249 END { 256 250 # Newline in case base file lacks trailing newline 257 - print "" >> ARGV[3] 251 + print "" >> outfile 258 252 # Append merge file, skipping lines marked for builtin preservation 259 253 for (i = 1; i <= merge_total; i++) { 260 254 if (!(i in skip_merge)) { 261 - print merge_lines[i] >> ARGV[3] 255 + print merge_lines[i] >> outfile 262 256 } 263 257 } 264 258 if (strict_violated) { 265 259 exit 1 266 260 } 267 261 }' \ 268 - "$ORIG_MERGE_FILE" "$TMP_FILE" "$TMP_FILE.new"; then 262 + "$ORIG_MERGE_FILE" "$TMP_FILE"; then 269 263 # awk exited non-zero, strict mode was violated 270 264 
STRICT_MODE_VIOLATED=true 271 265 fi ··· 377 381 STRICT_MODE_VIOLATED=true 378 382 fi 379 383 380 - if [ "$STRICT" == "true" ] && [ "$STRICT_MODE_VIOLATED" == "true" ]; then 384 + if [ "$STRICT" = "true" ] && [ "$STRICT_MODE_VIOLATED" = "true" ]; then 381 385 echo "Requested and effective config differ" 382 386 exit 1 383 387 fi
+4 -5
scripts/livepatch/klp-build
··· 285 285 # application from appending it with '+' due to a dirty git working tree. 286 286 set_kernelversion() { 287 287 local file="$SRC/scripts/setlocalversion" 288 - local localversion 288 + local kernelrelease 289 289 290 290 stash_file "$file" 291 291 292 - localversion="$(cd "$SRC" && make --no-print-directory kernelversion)" 293 - localversion="$(cd "$SRC" && KERNELVERSION="$localversion" ./scripts/setlocalversion)" 294 - [[ -z "$localversion" ]] && die "setlocalversion failed" 292 + kernelrelease="$(cd "$SRC" && make syncconfig &>/dev/null && make -s kernelrelease)" 293 + [[ -z "$kernelrelease" ]] && die "failed to get kernel version" 295 294 296 - sed -i "2i echo $localversion; exit 0" scripts/setlocalversion 295 + sed -i "2i echo $kernelrelease; exit 0" scripts/setlocalversion 297 296 } 298 297 299 298 get_patch_files() {
+1
security/security.c
··· 61 61 [LOCKDOWN_BPF_WRITE_USER] = "use of bpf to write user RAM", 62 62 [LOCKDOWN_DBG_WRITE_KERNEL] = "use of kgdb/kdb to write kernel RAM", 63 63 [LOCKDOWN_RTAS_ERROR_INJECTION] = "RTAS error injection", 64 + [LOCKDOWN_XEN_USER_ACTIONS] = "Xen guest user action", 64 65 [LOCKDOWN_INTEGRITY_MAX] = "integrity", 65 66 [LOCKDOWN_KCORE] = "/proc/kcore access", 66 67 [LOCKDOWN_KPROBES] = "use of kprobes",
+3 -3
sound/soc/samsung/i2s.c
··· 1360 1360 if (!pdev_sec) 1361 1361 return -ENOMEM; 1362 1362 1363 - pdev_sec->driver_override = kstrdup("samsung-i2s", GFP_KERNEL); 1364 - if (!pdev_sec->driver_override) { 1363 + ret = device_set_driver_override(&pdev_sec->dev, "samsung-i2s"); 1364 + if (ret) { 1365 1365 platform_device_put(pdev_sec); 1366 - return -ENOMEM; 1366 + return ret; 1367 1367 } 1368 1368 1369 1369 ret = platform_device_add(pdev_sec);
+4 -1
tools/arch/x86/include/asm/msr-index.h
··· 740 740 #define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT) 741 741 #define MSR_AMD64_SNP_SECURE_AVIC_BIT 18 742 742 #define MSR_AMD64_SNP_SECURE_AVIC BIT_ULL(MSR_AMD64_SNP_SECURE_AVIC_BIT) 743 - #define MSR_AMD64_SNP_RESV_BIT 19 743 + #define MSR_AMD64_SNP_RESERVED_BITS19_22 GENMASK_ULL(22, 19) 744 + #define MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT 23 745 + #define MSR_AMD64_SNP_IBPB_ON_ENTRY BIT_ULL(MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT) 746 + #define MSR_AMD64_SNP_RESV_BIT 24 744 747 #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT) 745 748 #define MSR_AMD64_SAVIC_CONTROL 0xc0010138 746 749 #define MSR_AMD64_SAVIC_EN_BIT 0
+1
tools/arch/x86/include/uapi/asm/kvm.h
··· 476 476 #define KVM_X86_QUIRK_SLOT_ZAP_ALL (1 << 7) 477 477 #define KVM_X86_QUIRK_STUFF_FEATURE_MSRS (1 << 8) 478 478 #define KVM_X86_QUIRK_IGNORE_GUEST_PAT (1 << 9) 479 + #define KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM (1 << 10) 479 480 480 481 #define KVM_STATE_NESTED_FORMAT_VMX 0 481 482 #define KVM_STATE_NESTED_FORMAT_SVM 1
+5 -2
tools/bootconfig/main.c
··· 162 162 if (fd < 0) 163 163 return -errno; 164 164 ret = fstat(fd, &stat); 165 - if (ret < 0) 166 - return -errno; 165 + if (ret < 0) { 166 + ret = -errno; 167 + close(fd); 168 + return ret; 169 + } 167 170 168 171 ret = load_xbc_fd(fd, buf, stat.st_size); 169 172
+3 -1
tools/include/linux/build_bug.h
··· 32 32 /** 33 33 * BUILD_BUG_ON_MSG - break compile if a condition is true & emit supplied 34 34 * error message. 35 - * @condition: the condition which the compiler should know is false. 35 + * @cond: the condition which the compiler should know is false. 36 + * @msg: build-time error message 36 37 * 37 38 * See BUILD_BUG_ON for description. 38 39 */ ··· 61 60 62 61 /** 63 62 * static_assert - check integer constant expression at build time 63 + * @expr: expression to be checked 64 64 * 65 65 * static_assert() is a wrapper for the C11 _Static_assert, with a 66 66 * little macro magic to make the message optional (defaulting to the
+8
tools/include/uapi/linux/kvm.h
··· 14 14 #include <linux/ioctl.h> 15 15 #include <asm/kvm.h> 16 16 17 + #ifdef __KERNEL__ 18 + #include <linux/kvm_types.h> 19 + #endif 20 + 17 21 #define KVM_API_VERSION 12 18 22 19 23 /* ··· 1605 1601 __u16 size; 1606 1602 __u32 offset; 1607 1603 __u32 bucket_size; 1604 + #ifdef __KERNEL__ 1605 + char name[KVM_STATS_NAME_SIZE]; 1606 + #else 1608 1607 char name[]; 1608 + #endif 1609 1609 }; 1610 1610 1611 1611 #define KVM_GET_STATS_FD _IO(KVMIO, 0xce)
+2 -3
tools/objtool/check.c
··· 2184 2184 last = insn; 2185 2185 2186 2186 /* 2187 - * Store back-pointers for unconditional forward jumps such 2187 + * Store back-pointers for forward jumps such 2188 2188 * that find_jump_table() can back-track using those and 2189 2189 * avoid some potentially confusing code. 2190 2190 */ 2191 - if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->jump_dest && 2192 - insn->offset > last->offset && 2191 + if (insn->jump_dest && 2193 2192 insn->jump_dest->offset > insn->offset && 2194 2193 !insn->jump_dest->first_jump_src) { 2195 2194
+3 -20
tools/objtool/elf.c
··· 16 16 #include <string.h> 17 17 #include <unistd.h> 18 18 #include <errno.h> 19 - #include <libgen.h> 20 19 #include <ctype.h> 21 20 #include <linux/align.h> 22 21 #include <linux/kernel.h> ··· 1188 1189 struct elf *elf_create_file(GElf_Ehdr *ehdr, const char *name) 1189 1190 { 1190 1191 struct section *null, *symtab, *strtab, *shstrtab; 1191 - char *dir, *base, *tmp_name; 1192 + char *tmp_name; 1192 1193 struct symbol *sym; 1193 1194 struct elf *elf; 1194 1195 ··· 1202 1203 1203 1204 INIT_LIST_HEAD(&elf->sections); 1204 1205 1205 - dir = strdup(name); 1206 - if (!dir) { 1207 - ERROR_GLIBC("strdup"); 1208 - return NULL; 1209 - } 1210 - 1211 - dir = dirname(dir); 1212 - 1213 - base = strdup(name); 1214 - if (!base) { 1215 - ERROR_GLIBC("strdup"); 1216 - return NULL; 1217 - } 1218 - 1219 - base = basename(base); 1220 - 1221 - tmp_name = malloc(256); 1206 + tmp_name = malloc(strlen(name) + 8); 1222 1207 if (!tmp_name) { 1223 1208 ERROR_GLIBC("malloc"); 1224 1209 return NULL; 1225 1210 } 1226 1211 1227 - snprintf(tmp_name, 256, "%s/%s.XXXXXX", dir, base); 1212 + sprintf(tmp_name, "%s.XXXXXX", name); 1228 1213 1229 1214 elf->fd = mkstemp(tmp_name); 1230 1215 if (elf->fd == -1) {
+2 -1
tools/objtool/klp-diff.c
··· 14 14 #include <objtool/util.h> 15 15 #include <arch/special.h> 16 16 17 + #include <linux/align.h> 17 18 #include <linux/objtool_types.h> 18 19 #include <linux/livepatch_external.h> 19 20 #include <linux/stringify.h> ··· 561 560 } 562 561 563 562 if (!is_sec_sym(patched_sym)) 564 - offset = sec_size(out_sec); 563 + offset = ALIGN(sec_size(out_sec), out_sec->sh.sh_addralign); 565 564 566 565 if (patched_sym->len || is_sec_sym(patched_sym)) { 567 566 void *data = NULL;
-1
tools/perf/check-headers.sh
··· 187 187 check arch/x86/lib/memcpy_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memcpy_\(erms\|orig\))" -I"^#include <linux/cfi_types.h>"' 188 188 check arch/x86/lib/memset_64.S '-I "^EXPORT_SYMBOL" -I "^#include <asm/export.h>" -I"^SYM_FUNC_START\(_LOCAL\)*(memset_\(erms\|orig\))"' 189 189 check arch/x86/include/asm/amd/ibs.h '-I "^#include .*/msr-index.h"' 190 - check arch/arm64/include/asm/cputype.h '-I "^#include [<\"]\(asm/\)*sysreg.h"' 191 190 check include/linux/unaligned.h '-I "^#include <linux/unaligned/packed_struct.h>" -I "^#include <asm/byteorder.h>" -I "^#pragma GCC diagnostic"' 192 191 check include/uapi/asm-generic/mman.h '-I "^#include <\(uapi/\)*asm-generic/mman-common\(-tools\)*.h>"' 193 192 check include/uapi/linux/mman.h '-I "^#include <\(uapi/\)*asm/mman.h>"'
+3 -3
tools/perf/util/kvm-stat-arch/kvm-stat-x86.c
··· 4 4 #include "../kvm-stat.h" 5 5 #include "../evsel.h" 6 6 #include "../env.h" 7 - #include "../../arch/x86/include/uapi/asm/svm.h" 8 - #include "../../arch/x86/include/uapi/asm/vmx.h" 9 - #include "../../arch/x86/include/uapi/asm/kvm.h" 7 + #include "../../../arch/x86/include/uapi/asm/svm.h" 8 + #include "../../../arch/x86/include/uapi/asm/vmx.h" 9 + #include "../../../arch/x86/include/uapi/asm/kvm.h" 10 10 #include <subcmd/parse-options.h> 11 11 12 12 define_exit_reasons_table(vmx_exit_reasons, VMX_EXIT_REASONS);
+3 -3
tools/perf/util/metricgroup.c
··· 1605 1605 .metric_or_groups = metric_or_groups, 1606 1606 }; 1607 1607 1608 - return pmu_metrics_table__for_each_metric(table, 1609 - metricgroup__has_metric_or_groups_callback, 1610 - &data) 1608 + return metricgroup__for_each_metric(table, 1609 + metricgroup__has_metric_or_groups_callback, 1610 + &data) 1611 1611 ? true : false; 1612 1612 } 1613 1613
+65 -17
tools/perf/util/parse-events.c
··· 1117 1117 1118 1118 static struct evsel_config_term *add_config_term(enum evsel_term_type type, 1119 1119 struct list_head *head_terms, 1120 - bool weak) 1120 + bool weak, char *str, u64 val) 1121 1121 { 1122 1122 struct evsel_config_term *t; 1123 1123 ··· 1128 1128 INIT_LIST_HEAD(&t->list); 1129 1129 t->type = type; 1130 1130 t->weak = weak; 1131 - list_add_tail(&t->list, head_terms); 1132 1131 1132 + switch (type) { 1133 + case EVSEL__CONFIG_TERM_PERIOD: 1134 + case EVSEL__CONFIG_TERM_FREQ: 1135 + case EVSEL__CONFIG_TERM_STACK_USER: 1136 + case EVSEL__CONFIG_TERM_USR_CHG_CONFIG: 1137 + case EVSEL__CONFIG_TERM_USR_CHG_CONFIG1: 1138 + case EVSEL__CONFIG_TERM_USR_CHG_CONFIG2: 1139 + case EVSEL__CONFIG_TERM_USR_CHG_CONFIG3: 1140 + case EVSEL__CONFIG_TERM_USR_CHG_CONFIG4: 1141 + t->val.val = val; 1142 + break; 1143 + case EVSEL__CONFIG_TERM_TIME: 1144 + t->val.time = val; 1145 + break; 1146 + case EVSEL__CONFIG_TERM_INHERIT: 1147 + t->val.inherit = val; 1148 + break; 1149 + case EVSEL__CONFIG_TERM_OVERWRITE: 1150 + t->val.overwrite = val; 1151 + break; 1152 + case EVSEL__CONFIG_TERM_MAX_STACK: 1153 + t->val.max_stack = val; 1154 + break; 1155 + case EVSEL__CONFIG_TERM_MAX_EVENTS: 1156 + t->val.max_events = val; 1157 + break; 1158 + case EVSEL__CONFIG_TERM_PERCORE: 1159 + t->val.percore = val; 1160 + break; 1161 + case EVSEL__CONFIG_TERM_AUX_OUTPUT: 1162 + t->val.aux_output = val; 1163 + break; 1164 + case EVSEL__CONFIG_TERM_AUX_SAMPLE_SIZE: 1165 + t->val.aux_sample_size = val; 1166 + break; 1167 + case EVSEL__CONFIG_TERM_CALLGRAPH: 1168 + case EVSEL__CONFIG_TERM_BRANCH: 1169 + case EVSEL__CONFIG_TERM_DRV_CFG: 1170 + case EVSEL__CONFIG_TERM_RATIO_TO_PREV: 1171 + case EVSEL__CONFIG_TERM_AUX_ACTION: 1172 + if (str) { 1173 + t->val.str = strdup(str); 1174 + if (!t->val.str) { 1175 + zfree(&t); 1176 + return NULL; 1177 + } 1178 + t->free_str = true; 1179 + } 1180 + break; 1181 + default: 1182 + t->val.val = val; 1183 + break; 1184 + } 1185 + 1186 + 
list_add_tail(&t->list, head_terms); 1133 1187 return t; 1134 1188 } 1135 1189 ··· 1196 1142 struct evsel_config_term *new_term; 1197 1143 enum evsel_term_type new_type; 1198 1144 bool str_type = false; 1199 - u64 val; 1145 + u64 val = 0; 1200 1146 1201 1147 switch (term->type_term) { 1202 1148 case PARSE_EVENTS__TERM_TYPE_SAMPLE_PERIOD: ··· 1288 1234 continue; 1289 1235 } 1290 1236 1291 - new_term = add_config_term(new_type, head_terms, term->weak); 1237 + /* 1238 + * Note: Members evsel_config_term::val and 1239 + * parse_events_term::val are unions and endianness needs 1240 + * to be taken into account when changing such union members. 1241 + */ 1242 + new_term = add_config_term(new_type, head_terms, term->weak, 1243 + str_type ? term->val.str : NULL, val); 1292 1244 if (!new_term) 1293 1245 return -ENOMEM; 1294 - 1295 - if (str_type) { 1296 - new_term->val.str = strdup(term->val.str); 1297 - if (!new_term->val.str) { 1298 - zfree(&new_term); 1299 - return -ENOMEM; 1300 - } 1301 - new_term->free_str = true; 1302 - } else { 1303 - new_term->val.val = val; 1304 - } 1305 1246 } 1306 1247 return 0; 1307 1248 } ··· 1326 1277 if (bits) { 1327 1278 struct evsel_config_term *new_term; 1328 1279 1329 - new_term = add_config_term(new_term_type, head_terms, false); 1280 + new_term = add_config_term(new_term_type, head_terms, false, NULL, bits); 1330 1281 if (!new_term) 1331 1282 return -ENOMEM; 1332 - new_term->val.cfg_chg = bits; 1333 1283 } 1334 1284 1335 1285 return 0;
+1 -1
tools/testing/selftests/bpf/Makefile
··· 409 409 CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \ 410 410 LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \ 411 411 EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \ 412 - HOSTPKG_CONFIG=$(PKG_CONFIG) \ 412 + HOSTPKG_CONFIG='$(PKG_CONFIG)' \ 413 413 OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ) 414 414 415 415 # Get Clang's default includes on this system, as opposed to those seen by
+53 -3
tools/testing/selftests/bpf/progs/exceptions_fail.c
··· 8 8 #include "bpf_experimental.h" 9 9 10 10 extern void bpf_rcu_read_lock(void) __ksym; 11 + extern void bpf_rcu_read_unlock(void) __ksym; 12 + extern void bpf_preempt_disable(void) __ksym; 13 + extern void bpf_preempt_enable(void) __ksym; 14 + extern void bpf_local_irq_save(unsigned long *) __ksym; 15 + extern void bpf_local_irq_restore(unsigned long *) __ksym; 11 16 12 17 #define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8))) 13 18 ··· 136 131 } 137 132 138 133 SEC("?tc") 139 - __failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_rcu_read_lock-ed region") 134 + __failure __msg("bpf_throw cannot be used inside bpf_rcu_read_lock-ed region") 140 135 int reject_with_rcu_read_lock(void *ctx) 141 136 { 142 137 bpf_rcu_read_lock(); ··· 152 147 } 153 148 154 149 SEC("?tc") 155 - __failure __msg("BPF_EXIT instruction in main prog cannot be used inside bpf_rcu_read_lock-ed region") 150 + __failure __msg("bpf_throw cannot be used inside bpf_rcu_read_lock-ed region") 156 151 int reject_subprog_with_rcu_read_lock(void *ctx) 157 152 { 158 153 bpf_rcu_read_lock(); 159 - return throwing_subprog(ctx); 154 + throwing_subprog(ctx); 155 + bpf_rcu_read_unlock(); 156 + return 0; 160 157 } 161 158 162 159 static bool rbless(struct bpf_rb_node *n1, const struct bpf_rb_node *n2) ··· 350 343 bpf_loop(5, loop_cb1, NULL, 0); 351 344 else 352 345 bpf_loop(5, loop_cb2, NULL, 0); 346 + return 0; 347 + } 348 + 349 + __noinline static int always_throws(void) 350 + { 351 + bpf_throw(0); 352 + return 0; 353 + } 354 + 355 + __noinline static int rcu_lock_then_throw(void) 356 + { 357 + bpf_rcu_read_lock(); 358 + bpf_throw(0); 359 + return 0; 360 + } 361 + 362 + SEC("?tc") 363 + __failure __msg("bpf_throw cannot be used inside bpf_rcu_read_lock-ed region") 364 + int reject_subprog_rcu_lock_throw(void *ctx) 365 + { 366 + rcu_lock_then_throw(); 367 + return 0; 368 + } 369 + 370 + SEC("?tc") 371 + __failure __msg("bpf_throw cannot be used inside 
bpf_preempt_disable-ed region") 372 + int reject_subprog_throw_preempt_lock(void *ctx) 373 + { 374 + bpf_preempt_disable(); 375 + always_throws(); 376 + bpf_preempt_enable(); 377 + return 0; 378 + } 379 + 380 + SEC("?tc") 381 + __failure __msg("bpf_throw cannot be used inside bpf_local_irq_save-ed region") 382 + int reject_subprog_throw_irq_lock(void *ctx) 383 + { 384 + unsigned long flags; 385 + 386 + bpf_local_irq_save(&flags); 387 + always_throws(); 388 + bpf_local_irq_restore(&flags); 353 389 return 0; 354 390 } 355 391
+94
tools/testing/selftests/bpf/progs/verifier_bounds.c
··· 2037 2037 : __clobber_all); 2038 2038 } 2039 2039 2040 + SEC("socket") 2041 + __description("maybe_fork_scalars: OR with constant rejects OOB") 2042 + __failure __msg("invalid access to map value") 2043 + __naked void or_scalar_fork_rejects_oob(void) 2044 + { 2045 + asm volatile (" \ 2046 + r1 = 0; \ 2047 + *(u64*)(r10 - 8) = r1; \ 2048 + r2 = r10; \ 2049 + r2 += -8; \ 2050 + r1 = %[map_hash_8b] ll; \ 2051 + call %[bpf_map_lookup_elem]; \ 2052 + if r0 == 0 goto l0_%=; \ 2053 + r9 = r0; \ 2054 + r6 = *(u64*)(r9 + 0); \ 2055 + r6 s>>= 63; \ 2056 + r6 |= 8; \ 2057 + /* r6 is -1 (current) or 8 (pushed) */ \ 2058 + if r6 s< 0 goto l0_%=; \ 2059 + /* pushed path: r6 = 8, OOB for value_size=8 */ \ 2060 + r9 += r6; \ 2061 + r0 = *(u8*)(r9 + 0); \ 2062 + l0_%=: r0 = 0; \ 2063 + exit; \ 2064 + " : 2065 + : __imm(bpf_map_lookup_elem), 2066 + __imm_addr(map_hash_8b) 2067 + : __clobber_all); 2068 + } 2069 + 2070 + SEC("socket") 2071 + __description("maybe_fork_scalars: AND with constant still works") 2072 + __success __retval(0) 2073 + __naked void and_scalar_fork_still_works(void) 2074 + { 2075 + asm volatile (" \ 2076 + r1 = 0; \ 2077 + *(u64*)(r10 - 8) = r1; \ 2078 + r2 = r10; \ 2079 + r2 += -8; \ 2080 + r1 = %[map_hash_8b] ll; \ 2081 + call %[bpf_map_lookup_elem]; \ 2082 + if r0 == 0 goto l0_%=; \ 2083 + r9 = r0; \ 2084 + r6 = *(u64*)(r9 + 0); \ 2085 + r6 s>>= 63; \ 2086 + r6 &= 4; \ 2087 + /* \ 2088 + * r6 is 0 (pushed, 0&4==0) or 4 (current) \ 2089 + * both within value_size=8 \ 2090 + */ \ 2091 + if r6 s< 0 goto l0_%=; \ 2092 + r9 += r6; \ 2093 + r0 = *(u8*)(r9 + 0); \ 2094 + l0_%=: r0 = 0; \ 2095 + exit; \ 2096 + " : 2097 + : __imm(bpf_map_lookup_elem), 2098 + __imm_addr(map_hash_8b) 2099 + : __clobber_all); 2100 + } 2101 + 2102 + SEC("socket") 2103 + __description("maybe_fork_scalars: OR with constant allows in-bounds") 2104 + __success __retval(0) 2105 + __naked void or_scalar_fork_allows_inbounds(void) 2106 + { 2107 + asm volatile (" \ 2108 + r1 = 0; \ 2109 + 
*(u64*)(r10 - 8) = r1; \ 2110 + r2 = r10; \ 2111 + r2 += -8; \ 2112 + r1 = %[map_hash_8b] ll; \ 2113 + call %[bpf_map_lookup_elem]; \ 2114 + if r0 == 0 goto l0_%=; \ 2115 + r9 = r0; \ 2116 + r6 = *(u64*)(r9 + 0); \ 2117 + r6 s>>= 63; \ 2118 + r6 |= 4; \ 2119 + /* \ 2120 + * r6 is -1 (current) or 4 (pushed) \ 2121 + * pushed path: r6 = 4, within value_size=8 \ 2122 + */ \ 2123 + if r6 s< 0 goto l0_%=; \ 2124 + r9 += r6; \ 2125 + r0 = *(u8*)(r9 + 0); \ 2126 + l0_%=: r0 = 0; \ 2127 + exit; \ 2128 + " : 2129 + : __imm(bpf_map_lookup_elem), 2130 + __imm_addr(map_hash_8b) 2131 + : __clobber_all); 2132 + } 2133 + 2040 2134 char _license[] SEC("license") = "GPL";
+22
tools/testing/selftests/bpf/progs/verifier_bswap.c
··· 91 91 BSWAP_RANGE_TEST(le64_range, "le64", 0x3f00, 0x3f000000000000) 92 92 #endif 93 93 94 + SEC("socket") 95 + __description("BSWAP, reset reg id") 96 + __failure __msg("math between fp pointer and register with unbounded min value is not allowed") 97 + __naked void bswap_reset_reg_id(void) 98 + { 99 + asm volatile (" \ 100 + call %[bpf_ktime_get_ns]; \ 101 + r1 = r0; \ 102 + r0 = be16 r0; \ 103 + if r0 != 1 goto l0_%=; \ 104 + r2 = r10; \ 105 + r2 += -512; \ 106 + r2 += r1; \ 107 + *(u8 *)(r2 + 0) = 0; \ 108 + l0_%=: \ 109 + r0 = 0; \ 110 + exit; \ 111 + " : 112 + : __imm(bpf_ktime_get_ns) 113 + : __clobber_all); 114 + } 115 + 94 116 #else 95 117 96 118 SEC("socket")
+108
tools/testing/selftests/bpf/progs/verifier_linked_scalars.c
··· 348 348 : __clobber_all); 349 349 } 350 350 351 + /* 352 + * Test that sync_linked_regs() checks reg->id (the linked target register) 353 + * for BPF_ADD_CONST32 rather than known_reg->id (the branch register). 354 + */ 355 + SEC("socket") 356 + __success 357 + __naked void scalars_alu32_zext_linked_reg(void) 358 + { 359 + asm volatile (" \ 360 + call %[bpf_get_prandom_u32]; \ 361 + w6 = w0; /* r6 in [0, 0xFFFFFFFF] */ \ 362 + r7 = r6; /* linked: same id as r6 */ \ 363 + w7 += 1; /* alu32: r7.id |= BPF_ADD_CONST32 */ \ 364 + r8 = 0xFFFFffff ll; \ 365 + if r6 < r8 goto l0_%=; \ 366 + /* r6 in [0xFFFFFFFF, 0xFFFFFFFF] */ \ 367 + /* sync_linked_regs: known_reg=r6, reg=r7 */ \ 368 + /* CPU: w7 = (u32)(0xFFFFFFFF + 1) = 0, zext -> r7 = 0 */ \ 369 + /* With fix: r7 64-bit = [0, 0] (zext applied) */ \ 370 + /* Without fix: r7 64-bit = [0x100000000] (no zext) */ \ 371 + r7 >>= 32; \ 372 + if r7 == 0 goto l0_%=; \ 373 + r0 /= 0; /* unreachable with fix */ \ 374 + l0_%=: \ 375 + r0 = 0; \ 376 + exit; \ 377 + " : 378 + : __imm(bpf_get_prandom_u32) 379 + : __clobber_all); 380 + } 381 + 382 + /* 383 + * Test that sync_linked_regs() skips propagation when one register used 384 + * alu32 (BPF_ADD_CONST32) and the other used alu64 (BPF_ADD_CONST64). 385 + * The delta relationship doesn't hold across different ALU widths. 
386 + */ 387 + SEC("socket") 388 + __failure __msg("div by zero") 389 + __naked void scalars_alu32_alu64_cross_type(void) 390 + { 391 + asm volatile (" \ 392 + call %[bpf_get_prandom_u32]; \ 393 + w6 = w0; /* r6 in [0, 0xFFFFFFFF] */ \ 394 + r7 = r6; /* linked: same id as r6 */ \ 395 + w7 += 1; /* alu32: BPF_ADD_CONST32, delta = 1 */ \ 396 + r8 = r6; /* linked: same id as r6 */ \ 397 + r8 += 2; /* alu64: BPF_ADD_CONST64, delta = 2 */ \ 398 + r9 = 0xFFFFffff ll; \ 399 + if r7 < r9 goto l0_%=; \ 400 + /* r7 = 0xFFFFFFFF */ \ 401 + /* sync: known_reg=r7 (ADD_CONST32), reg=r8 (ADD_CONST64) */ \ 402 + /* Without fix: r8 = zext(0xFFFFFFFF + 1) = 0 */ \ 403 + /* With fix: r8 stays [2, 0x100000001] (r8 >= 2) */ \ 404 + if r8 > 0 goto l1_%=; \ 405 + goto l0_%=; \ 406 + l1_%=: \ 407 + r0 /= 0; /* div by zero */ \ 408 + l0_%=: \ 409 + r0 = 0; \ 410 + exit; \ 411 + " : 412 + : __imm(bpf_get_prandom_u32) 413 + : __clobber_all); 414 + } 415 + 416 + /* 417 + * Test that regsafe() prevents pruning when two paths reach the same program 418 + * point with linked registers carrying different ADD_CONST flags (one 419 + * BPF_ADD_CONST32 from alu32, another BPF_ADD_CONST64 from alu64). 420 + */ 421 + SEC("socket") 422 + __failure __msg("div by zero") 423 + __flag(BPF_F_TEST_STATE_FREQ) 424 + __naked void scalars_alu32_alu64_regsafe_pruning(void) 425 + { 426 + asm volatile (" \ 427 + call %[bpf_get_prandom_u32]; \ 428 + w6 = w0; /* r6 in [0, 0xFFFFFFFF] */ \ 429 + r7 = r6; /* linked: same id as r6 */ \ 430 + /* Get another random value for the path branch */ \ 431 + call %[bpf_get_prandom_u32]; \ 432 + if r0 > 0 goto l_pathb_%=; \ 433 + /* Path A: alu32 */ \ 434 + w7 += 1; /* BPF_ADD_CONST32, delta = 1 */\ 435 + goto l_merge_%=; \ 436 + l_pathb_%=: \ 437 + /* Path B: alu64 */ \ 438 + r7 += 1; /* BPF_ADD_CONST64, delta = 1 */\ 439 + l_merge_%=: \ 440 + /* Merge point: regsafe() compares path B against cached path A. 
*/ \ 441 + /* Narrow r6 to trigger sync_linked_regs for r7 */ \ 442 + r9 = 0xFFFFffff ll; \ 443 + if r6 < r9 goto l0_%=; \ 444 + /* r6 = 0xFFFFFFFF */ \ 445 + /* sync: r7 = 0xFFFFFFFF + 1 = 0x100000000 */ \ 446 + /* Path A: zext -> r7 = 0 */ \ 447 + /* Path B: no zext -> r7 = 0x100000000 */ \ 448 + r7 >>= 32; \ 449 + if r7 == 0 goto l0_%=; \ 450 + r0 /= 0; /* div by zero on path B */ \ 451 + l0_%=: \ 452 + r0 = 0; \ 453 + exit; \ 454 + " : 455 + : __imm(bpf_get_prandom_u32) 456 + : __clobber_all); 457 + } 458 + 351 459 SEC("socket") 352 460 __success 353 461 void alu32_negative_offset(void)
+58
tools/testing/selftests/bpf/progs/verifier_sdiv.c
··· 1209 1209 : __clobber_all); 1210 1210 } 1211 1211 1212 + SEC("socket") 1213 + __description("SDIV32, INT_MIN divided by 2, imm") 1214 + __success __success_unpriv __retval(-1073741824) 1215 + __naked void sdiv32_int_min_div_2_imm(void) 1216 + { 1217 + asm volatile (" \ 1218 + w0 = %[int_min]; \ 1219 + w0 s/= 2; \ 1220 + exit; \ 1221 + " : 1222 + : __imm_const(int_min, INT_MIN) 1223 + : __clobber_all); 1224 + } 1225 + 1226 + SEC("socket") 1227 + __description("SDIV32, INT_MIN divided by 2, reg") 1228 + __success __success_unpriv __retval(-1073741824) 1229 + __naked void sdiv32_int_min_div_2_reg(void) 1230 + { 1231 + asm volatile (" \ 1232 + w0 = %[int_min]; \ 1233 + w1 = 2; \ 1234 + w0 s/= w1; \ 1235 + exit; \ 1236 + " : 1237 + : __imm_const(int_min, INT_MIN) 1238 + : __clobber_all); 1239 + } 1240 + 1241 + SEC("socket") 1242 + __description("SMOD32, INT_MIN modulo 2, imm") 1243 + __success __success_unpriv __retval(0) 1244 + __naked void smod32_int_min_mod_2_imm(void) 1245 + { 1246 + asm volatile (" \ 1247 + w0 = %[int_min]; \ 1248 + w0 s%%= 2; \ 1249 + exit; \ 1250 + " : 1251 + : __imm_const(int_min, INT_MIN) 1252 + : __clobber_all); 1253 + } 1254 + 1255 + SEC("socket") 1256 + __description("SMOD32, INT_MIN modulo -2, imm") 1257 + __success __success_unpriv __retval(0) 1258 + __naked void smod32_int_min_mod_neg2_imm(void) 1259 + { 1260 + asm volatile (" \ 1261 + w0 = %[int_min]; \ 1262 + w0 s%%= -2; \ 1263 + exit; \ 1264 + " : 1265 + : __imm_const(int_min, INT_MIN) 1266 + : __clobber_all); 1267 + } 1268 + 1269 + 1212 1270 #else 1213 1271 1214 1272 SEC("socket")
+1
tools/testing/selftests/drivers/net/team/Makefile
··· 3 3 4 4 TEST_PROGS := \ 5 5 dev_addr_lists.sh \ 6 + non_ether_header_ops.sh \ 6 7 options.sh \ 7 8 propagation.sh \ 8 9 refleak.sh \
+2
tools/testing/selftests/drivers/net/team/config
··· 1 + CONFIG_BONDING=y 1 2 CONFIG_DUMMY=y 2 3 CONFIG_IPV6=y 3 4 CONFIG_MACVLAN=y 4 5 CONFIG_NETDEVSIM=m 6 + CONFIG_NET_IPGRE=y 5 7 CONFIG_NET_TEAM=y 6 8 CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=y 7 9 CONFIG_NET_TEAM_MODE_LOADBALANCE=y
+41
tools/testing/selftests/drivers/net/team/non_ether_header_ops.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # shellcheck disable=SC2154 4 + # 5 + # Reproduce the non-Ethernet header_ops confusion scenario with: 6 + # g0 (gre) -> b0 (bond) -> t0 (team) 7 + # 8 + # Before the fix, direct header_ops inheritance in this stack could call 9 + # callbacks with the wrong net_device context and crash. 10 + 11 + lib_dir=$(dirname "$0") 12 + source "$lib_dir"/../../../net/lib.sh 13 + 14 + trap cleanup_all_ns EXIT 15 + 16 + setup_ns ns1 17 + 18 + ip -n "$ns1" link add d0 type dummy 19 + ip -n "$ns1" addr add 10.10.10.1/24 dev d0 20 + ip -n "$ns1" link set d0 up 21 + 22 + ip -n "$ns1" link add g0 type gre local 10.10.10.1 23 + ip -n "$ns1" link add b0 type bond mode active-backup 24 + ip -n "$ns1" link add t0 type team 25 + 26 + ip -n "$ns1" link set g0 master b0 27 + ip -n "$ns1" link set b0 master t0 28 + 29 + ip -n "$ns1" link set g0 up 30 + ip -n "$ns1" link set b0 up 31 + ip -n "$ns1" link set t0 up 32 + 33 + # IPv6 address assignment triggers MLD join reports that call 34 + # dev_hard_header() on t0, exercising the inherited header_ops path. 35 + ip -n "$ns1" -6 addr add 2001:db8:1::1/64 dev t0 nodad 36 + for i in $(seq 1 20); do 37 + ip netns exec "$ns1" ping -6 -I t0 ff02::1 -c1 -W1 &>/dev/null || true 38 + done 39 + 40 + echo "PASS: non-Ethernet header_ops stacking did not crash" 41 + exit "$EXIT_STATUS"
+1
tools/testing/selftests/kvm/Makefile.kvm
··· 206 206 TEST_GEN_PROGS_s390 += s390/user_operexec 207 207 TEST_GEN_PROGS_s390 += s390/keyop 208 208 TEST_GEN_PROGS_s390 += rseq_test 209 + TEST_GEN_PROGS_s390 += s390/irq_routing 209 210 210 211 TEST_GEN_PROGS_riscv = $(TEST_GEN_PROGS_COMMON) 211 212 TEST_GEN_PROGS_riscv += riscv/sbi_pmu_test
+75
tools/testing/selftests/kvm/s390/irq_routing.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * IRQ routing offset tests. 4 + * 5 + * Copyright IBM Corp. 2026 6 + * 7 + * Authors: 8 + * Janosch Frank <frankja@linux.ibm.com> 9 + */ 10 + #include <stdio.h> 11 + #include <stdlib.h> 12 + #include <string.h> 13 + #include <sys/ioctl.h> 14 + 15 + #include "test_util.h" 16 + #include "kvm_util.h" 17 + #include "kselftest.h" 18 + #include "ucall_common.h" 19 + 20 + extern char guest_code[]; 21 + asm("guest_code:\n" 22 + "diag %r0,%r0,0\n" 23 + "j .\n"); 24 + 25 + static void test(void) 26 + { 27 + struct kvm_irq_routing *routing; 28 + struct kvm_vcpu *vcpu; 29 + struct kvm_vm *vm; 30 + vm_paddr_t mem; 31 + int ret; 32 + 33 + struct kvm_irq_routing_entry ue = { 34 + .type = KVM_IRQ_ROUTING_S390_ADAPTER, 35 + .gsi = 1, 36 + }; 37 + 38 + vm = vm_create_with_one_vcpu(&vcpu, guest_code); 39 + mem = vm_phy_pages_alloc(vm, 2, 4096 * 42, 0); 40 + 41 + routing = kvm_gsi_routing_create(); 42 + routing->nr = 1; 43 + routing->entries[0] = ue; 44 + routing->entries[0].u.adapter.summary_addr = (uintptr_t)mem; 45 + routing->entries[0].u.adapter.ind_addr = (uintptr_t)mem; 46 + 47 + routing->entries[0].u.adapter.summary_offset = 4096 * 8; 48 + ret = __vm_ioctl(vm, KVM_SET_GSI_ROUTING, routing); 49 + ksft_test_result(ret == -1 && errno == EINVAL, "summary offset outside of page\n"); 50 + 51 + routing->entries[0].u.adapter.summary_offset -= 4; 52 + ret = __vm_ioctl(vm, KVM_SET_GSI_ROUTING, routing); 53 + ksft_test_result(ret == 0, "summary offset inside of page\n"); 54 + 55 + routing->entries[0].u.adapter.ind_offset = 4096 * 8; 56 + ret = __vm_ioctl(vm, KVM_SET_GSI_ROUTING, routing); 57 + ksft_test_result(ret == -1 && errno == EINVAL, "ind offset outside of page\n"); 58 + 59 + routing->entries[0].u.adapter.ind_offset -= 4; 60 + ret = __vm_ioctl(vm, KVM_SET_GSI_ROUTING, routing); 61 + ksft_test_result(ret == 0, "ind offset inside of page\n"); 62 + 63 + kvm_vm_free(vm); 64 + } 65 + 66 + int main(int argc, char *argv[]) 67 + { 
68 + TEST_REQUIRE(kvm_has_cap(KVM_CAP_IRQ_ROUTING)); 69 + 70 + ksft_print_header(); 71 + ksft_set_plan(4); 72 + test(); 73 + 74 + ksft_finished(); /* Print results and exit() accordingly */ 75 + }
+58 -3
tools/testing/selftests/net/fib_tests.sh
··· 868 868 check_rt_num 5 $($IP -6 route list |grep -v expires|grep 2001:20::|wc -l) 869 869 log_test $ret 0 "ipv6 route garbage collection (replace with permanent)" 870 870 871 + # Delete dummy_10 and remove all routes 872 + $IP link del dev dummy_10 873 + 874 + # rd6 is required for the next test. (ipv6toolkit) 875 + if [ ! -x "$(command -v rd6)" ]; then 876 + echo "SKIP: rd6 not found." 877 + set +e 878 + cleanup &> /dev/null 879 + return 880 + fi 881 + 882 + setup_ns ns2 883 + $IP link add veth1 type veth peer veth2 netns $ns2 884 + $IP link set veth1 up 885 + ip -netns $ns2 link set veth2 up 886 + $IP addr add fe80:dead::1/64 dev veth1 887 + ip -netns $ns2 addr add fe80:dead::2/64 dev veth2 888 + 889 + # Add NTF_ROUTER neighbour to prevent rt6_age_examine_exception() 890 + # from removing not-yet-expired exceptions. 891 + ip -netns $ns2 link set veth2 address 00:11:22:33:44:55 892 + $IP neigh add fe80:dead::3 lladdr 00:11:22:33:44:55 dev veth1 router 893 + 894 + $NS_EXEC sysctl -wq net.ipv6.conf.veth1.accept_redirects=1 895 + $NS_EXEC sysctl -wq net.ipv6.conf.veth1.forwarding=0 896 + 897 + # Temporary routes 898 + for i in $(seq 1 5); do 899 + # Expire route after $EXPIRE seconds 900 + $IP -6 route add 2001:10::$i \ 901 + via fe80:dead::2 dev veth1 expires $EXPIRE 902 + 903 + ip netns exec $ns2 rd6 -i veth2 \ 904 + -s fe80:dead::2 -d fe80:dead::1 \ 905 + -r 2001:10::$i -t fe80:dead::3 -p ICMP6 906 + done 907 + 908 + check_rt_num 5 $($IP -6 route list | grep expires | grep 2001:10:: | wc -l) 909 + 910 + # Promote to permanent routes by "prepend" (w/o NLM_F_EXCL and NLM_F_REPLACE) 911 + for i in $(seq 1 5); do 912 + # -EEXIST, but the temporary route becomes the permanent route. 
913 + $IP -6 route append 2001:10::$i \ 914 + via fe80:dead::2 dev veth1 2>/dev/null || true 915 + done 916 + 917 + check_rt_num 5 $($IP -6 route list | grep -v expires | grep 2001:10:: | wc -l) 918 + check_rt_num 5 $($IP -6 route list cache | grep 2001:10:: | wc -l) 919 + 920 + # Trigger GC instead of waiting $GC_WAIT_TIME. 921 + # rt6_nh_dump_exceptions() just skips expired exceptions. 922 + $NS_EXEC sysctl -wq net.ipv6.route.flush=1 923 + check_rt_num 0 $($IP -6 route list cache | grep 2001:10:: | wc -l) 924 + log_test $ret 0 "ipv6 route garbage collection (promote to permanent routes)" 925 + 926 + $IP neigh del fe80:dead::3 lladdr 00:11:22:33:44:55 dev veth1 router 927 + $IP link del veth1 928 + 871 929 # ra6 is required for the next test. (ipv6toolkit) 872 930 if [ ! -x "$(command -v ra6)" ]; then 873 931 echo "SKIP: ra6 not found." ··· 933 875 cleanup &> /dev/null 934 876 return 935 877 fi 936 - 937 - # Delete dummy_10 and remove all routes 938 - $IP link del dev dummy_10 939 878 940 879 # Create a pair of veth devices to send a RA message from one 941 880 # device to another.
+69 -1
tools/testing/selftests/net/netfilter/nft_concat_range.sh
··· 29 29 net6_port_net6_port net_port_mac_proto_net" 30 30 31 31 # Reported bugs, also described by TYPE_ variables below 32 - BUGS="flush_remove_add reload net_port_proto_match avx2_mismatch doublecreate insert_overlap" 32 + BUGS="flush_remove_add reload net_port_proto_match avx2_mismatch doublecreate 33 + insert_overlap load_flush_load4 load_flush_load8" 33 34 34 35 # List of possible paths to pktgen script from kernel tree for performance tests 35 36 PKTGEN_SCRIPT_PATHS=" ··· 423 422 424 423 TYPE_insert_overlap=" 425 424 display reject overlapping range on add 425 + type_spec ipv4_addr . ipv4_addr 426 + chain_spec ip saddr . ip daddr 427 + dst addr4 428 + proto icmp 429 + 430 + race_repeat 0 431 + 432 + perf_duration 0 433 + " 434 + 435 + TYPE_load_flush_load4=" 436 + display reload with flush, 4bit groups 437 + type_spec ipv4_addr . ipv4_addr 438 + chain_spec ip saddr . ip daddr 439 + dst addr4 440 + proto icmp 441 + 442 + race_repeat 0 443 + 444 + perf_duration 0 445 + " 446 + 447 + TYPE_load_flush_load8=" 448 + display reload with flush, 8bit groups 426 449 type_spec ipv4_addr . ipv4_addr 427 450 chain_spec ip saddr . ip daddr 428 451 dst addr4 ··· 2018 1993 2019 1994 elements="1.2.3.4 . 1.2.4.1-1.2.4.2" 2020 1995 add_fail "{ $elements }" || return 1 1996 + 1997 + return 0 1998 + } 1999 + 2000 + test_bug_load_flush_load4() 2001 + { 2002 + local i 2003 + 2004 + setup veth send_"${proto}" set || return ${ksft_skip} 2005 + 2006 + for i in $(seq 0 255); do 2007 + local addelem="add element inet filter test" 2008 + local j 2009 + 2010 + for j in $(seq 0 20); do 2011 + echo "$addelem { 10.$j.0.$i . 10.$j.1.$i }" 2012 + echo "$addelem { 10.$j.0.$i . 10.$j.2.$i }" 2013 + done 2014 + done > "$tmp" 2015 + 2016 + nft -f "$tmp" || return 1 2017 + 2018 + ( echo "flush set inet filter test";cat "$tmp") | nft -f - 2019 + [ $? 
-eq 0 ] || return 1 2020 + 2021 + return 0 2022 + } 2023 + 2024 + test_bug_load_flush_load8() 2025 + { 2026 + local i 2027 + 2028 + setup veth send_"${proto}" set || return ${ksft_skip} 2029 + 2030 + for i in $(seq 1 100); do 2031 + echo "add element inet filter test { 10.0.0.$i . 10.0.1.$i }" 2032 + echo "add element inet filter test { 10.0.0.$i . 10.0.2.$i }" 2033 + done > "$tmp" 2034 + 2035 + nft -f "$tmp" || return 1 2036 + 2037 + ( echo "flush set inet filter test";cat "$tmp") | nft -f - 2038 + [ $? -eq 0 ] || return 1 2021 2039 2022 2040 return 0 2023 2041 }