
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

net/devlink/leftover.c / net/core/devlink.c:
565b4824c39f ("devlink: change port event netdev notifier from per-net to global")
f05bd8ebeb69 ("devlink: move code to a dedicated directory")
687125b5799c ("devlink: split out core code")
https://lore.kernel.org/all/20230208094657.379f2b1a@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2478 -1136
+1
.mailmap
··· 130 130 Douglas Gilbert <dougg@torque.net> 131 131 Ed L. Cashin <ecashin@coraid.com> 132 132 Erik Kaneda <erik.kaneda@intel.com> <erik.schmauss@intel.com> 133 + Eugen Hristev <eugen.hristev@collabora.com> <eugen.hristev@microchip.com> 133 134 Evgeniy Polyakov <johnpol@2ka.mipt.ru> 134 135 Ezequiel Garcia <ezequiel@vanguardiasur.com.ar> <ezequiel@collabora.com> 135 136 Felipe W Damasio <felipewd@terra.com.br>
+6 -9
Documentation/admin-guide/cgroup-v2.rst
··· 1245 1245 This is a simple interface to trigger memory reclaim in the 1246 1246 target cgroup. 1247 1247 1248 - This file accepts a string which contains the number of bytes to 1249 - reclaim. 1248 + This file accepts a single key, the number of bytes to reclaim. 1249 + No nested keys are currently supported. 1250 1250 1251 1251 Example:: 1252 1252 1253 1253 echo "1G" > memory.reclaim 1254 + 1255 + The interface can be later extended with nested keys to 1256 + configure the reclaim behavior. For example, specify the 1257 + type of memory to reclaim from (anon, file, ..). 1254 1258 1255 1259 Please note that the kernel can over or under reclaim from 1256 1260 the target cgroup. If less bytes are reclaimed than the ··· 1266 1262 the memory reclaim normally is not exercised in this case. 1267 1263 This means that the networking layer will not adapt based on 1268 1264 reclaim induced by memory.reclaim. 1269 - 1270 - This file also allows the user to specify the nodes to reclaim from, 1271 - via the 'nodes=' key, for example:: 1272 - 1273 - echo "1G nodes=0,1" > memory.reclaim 1274 - 1275 - The above instructs the kernel to reclaim memory from nodes 0,1. 1276 1265 1277 1266 memory.peak 1278 1267 A read-only single value file which exists on non-root
+5
Documentation/devicetree/bindings/.gitignore
··· 2 2 *.example.dts 3 3 /processed-schema*.yaml 4 4 /processed-schema*.json 5 + 6 + # 7 + # We don't want to ignore the following even if they are dot-files 8 + # 9 + !.yamllint
+1 -1
Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.yaml
··· 108 108 109 109 msi-controller: 110 110 description: 111 - Only present if the Message Based Interrupt functionnality is 111 + Only present if the Message Based Interrupt functionality is 112 112 being exposed by the HW, and the mbi-ranges property present. 113 113 114 114 mbi-ranges:
+2
Documentation/devicetree/bindings/rtc/qcom-pm8xxx-rtc.yaml
··· 40 40 description: 41 41 Indicates that the setting of RTC time is allowed by the host CPU. 42 42 43 + wakeup-source: true 44 + 43 45 required: 44 46 - compatible 45 47 - reg
+1 -1
Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst
··· 16 16 17 17 Support 18 18 ======= 19 - If you got any problem, contact Wangxun support team via support@trustnetic.com 19 + If you got any problem, contact Wangxun support team via nic-support@net-swift.com 20 20 and Cc: netdev.
+7 -3
Documentation/virt/kvm/api.rst
··· 8070 8070 state is final and avoid missing dirty pages from another ioctl ordered 8071 8071 after the bitmap collection. 8072 8072 8073 - NOTE: One example of using the backup bitmap is saving arm64 vgic/its 8074 - tables through KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} command on 8075 - KVM device "kvm-arm-vgic-its" when dirty ring is enabled. 8073 + NOTE: Multiple examples of using the backup bitmap: (1) save vgic/its 8074 + tables through command KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} on 8075 + KVM device "kvm-arm-vgic-its". (2) restore vgic/its tables through 8076 + command KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_RESTORE_TABLES} on KVM device 8077 + "kvm-arm-vgic-its". VGICv3 LPI pending status is restored. (3) save 8078 + vgic3 pending table through KVM_DEV_ARM_VGIC_{GRP_CTRL, SAVE_PENDING_TABLES} 8079 + command on KVM device "kvm-arm-vgic-v3". 8076 8080 8077 8081 8.30 KVM_CAP_XEN_HVM 8078 8082 --------------------
+2 -1
MAINTAINERS
··· 15679 15679 M: Jonas Bonn <jonas@southpole.se> 15680 15680 M: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi> 15681 15681 M: Stafford Horne <shorne@gmail.com> 15682 - L: openrisc@lists.librecores.org 15682 + L: linux-openrisc@vger.kernel.org 15683 15683 S: Maintained 15684 15684 W: http://openrisc.io 15685 15685 T: git https://github.com/openrisc/linux.git ··· 21752 21752 21753 21753 USB WEBCAM GADGET 21754 21754 M: Laurent Pinchart <laurent.pinchart@ideasonboard.com> 21755 + M: Daniel Scally <dan.scally@ideasonboard.com> 21755 21756 L: linux-usb@vger.kernel.org 21756 21757 S: Maintained 21757 21758 F: drivers/usb/gadget/function/*uvc*
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 2 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc6 5 + EXTRAVERSION = -rc7 6 6 NAME = Hurr durr I'ma ninja sloth 7 7 8 8 # *DOCUMENTATION*
+5 -8
arch/arm64/kvm/vgic/vgic-its.c
··· 2187 2187 ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) | 2188 2188 ite->collection->collection_id; 2189 2189 val = cpu_to_le64(val); 2190 - return kvm_write_guest_lock(kvm, gpa, &val, ite_esz); 2190 + return vgic_write_guest_lock(kvm, gpa, &val, ite_esz); 2191 2191 } 2192 2192 2193 2193 /** ··· 2339 2339 (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) | 2340 2340 (dev->num_eventid_bits - 1)); 2341 2341 val = cpu_to_le64(val); 2342 - return kvm_write_guest_lock(kvm, ptr, &val, dte_esz); 2342 + return vgic_write_guest_lock(kvm, ptr, &val, dte_esz); 2343 2343 } 2344 2344 2345 2345 /** ··· 2526 2526 ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) | 2527 2527 collection->collection_id); 2528 2528 val = cpu_to_le64(val); 2529 - return kvm_write_guest_lock(its->dev->kvm, gpa, &val, esz); 2529 + return vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz); 2530 2530 } 2531 2531 2532 2532 /* ··· 2607 2607 */ 2608 2608 val = 0; 2609 2609 BUG_ON(cte_esz > sizeof(val)); 2610 - ret = kvm_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz); 2610 + ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz); 2611 2611 return ret; 2612 2612 } 2613 2613 ··· 2743 2743 static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr) 2744 2744 { 2745 2745 const struct vgic_its_abi *abi = vgic_its_get_abi(its); 2746 - struct vgic_dist *dist = &kvm->arch.vgic; 2747 2746 int ret = 0; 2748 2747 2749 2748 if (attr == KVM_DEV_ARM_VGIC_CTRL_INIT) /* Nothing to do */ ··· 2762 2763 vgic_its_reset(kvm, its); 2763 2764 break; 2764 2765 case KVM_DEV_ARM_ITS_SAVE_TABLES: 2765 - dist->save_its_tables_in_progress = true; 2766 2766 ret = abi->save_tables(its); 2767 - dist->save_its_tables_in_progress = false; 2768 2767 break; 2769 2768 case KVM_DEV_ARM_ITS_RESTORE_TABLES: 2770 2769 ret = abi->restore_tables(its); ··· 2789 2792 { 2790 2793 struct vgic_dist *dist = &kvm->arch.vgic; 2791 2794 2792 - return dist->save_its_tables_in_progress; 2795 + return dist->table_write_in_progress; 2793 2796 } 2794 2797 2795 2798 static int vgic_its_set_attr(struct kvm_device *dev,
+2 -2
arch/arm64/kvm/vgic/vgic-v3.c
··· 339 339 if (status) { 340 340 /* clear consumed data */ 341 341 val &= ~(1 << bit_nr); 342 - ret = kvm_write_guest_lock(kvm, ptr, &val, 1); 342 + ret = vgic_write_guest_lock(kvm, ptr, &val, 1); 343 343 if (ret) 344 344 return ret; 345 345 } ··· 434 434 else 435 435 val &= ~(1 << bit_nr); 436 436 437 - ret = kvm_write_guest_lock(kvm, ptr, &val, 1); 437 + ret = vgic_write_guest_lock(kvm, ptr, &val, 1); 438 438 if (ret) 439 439 goto out; 440 440 }
+14
arch/arm64/kvm/vgic/vgic.h
··· 6 6 #define __KVM_ARM_VGIC_NEW_H__ 7 7 8 8 #include <linux/irqchip/arm-gic-common.h> 9 + #include <asm/kvm_mmu.h> 9 10 10 11 #define PRODUCT_ID_KVM 0x4b /* ASCII code K */ 11 12 #define IMPLEMENTER_ARM 0x43b ··· 130 129 static inline bool vgic_irq_is_multi_sgi(struct vgic_irq *irq) 131 130 { 132 131 return vgic_irq_get_lr_count(irq) > 1; 132 + } 133 + 134 + static inline int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa, 135 + const void *data, unsigned long len) 136 + { 137 + struct vgic_dist *dist = &kvm->arch.vgic; 138 + int ret; 139 + 140 + dist->table_write_in_progress = true; 141 + ret = kvm_write_guest_lock(kvm, gpa, data, len); 142 + dist->table_write_in_progress = false; 143 + 144 + return ret; 133 145 } 134 146 135 147 /*
+5 -2
arch/ia64/kernel/sys_ia64.c
··· 170 170 asmlinkage long 171 171 ia64_clock_getres(const clockid_t which_clock, struct __kernel_timespec __user *tp) 172 172 { 173 + struct timespec64 rtn_tp; 174 + s64 tick_ns; 175 + 173 176 /* 174 177 * ia64's clock_gettime() syscall is implemented as a vdso call 175 178 * fsys_clock_gettime(). Currently it handles only ··· 188 185 switch (which_clock) { 189 186 case CLOCK_REALTIME: 190 187 case CLOCK_MONOTONIC: 191 - s64 tick_ns = DIV_ROUND_UP(NSEC_PER_SEC, local_cpu_data->itc_freq); 192 - struct timespec64 rtn_tp = ns_to_timespec64(tick_ns); 188 + tick_ns = DIV_ROUND_UP(NSEC_PER_SEC, local_cpu_data->itc_freq); 189 + rtn_tp = ns_to_timespec64(tick_ns); 193 190 return put_timespec64(&rtn_tp, tp); 194 191 } 195 192
+3 -2
arch/parisc/kernel/firmware.c
··· 1303 1303 */ 1304 1304 int pdc_iodc_print(const unsigned char *str, unsigned count) 1305 1305 { 1306 - unsigned int i; 1306 + unsigned int i, found = 0; 1307 1307 unsigned long flags; 1308 1308 1309 1309 count = min_t(unsigned int, count, sizeof(iodc_dbuf)); ··· 1315 1315 iodc_dbuf[i+0] = '\r'; 1316 1316 iodc_dbuf[i+1] = '\n'; 1317 1317 i += 2; 1318 + found = 1; 1318 1319 goto print; 1319 1320 default: 1320 1321 iodc_dbuf[i] = str[i]; ··· 1331 1330 __pa(pdc_result), 0, __pa(iodc_dbuf), i, 0); 1332 1331 spin_unlock_irqrestore(&pdc_lock, flags); 1333 1332 1334 - return i; 1333 + return i - found; 1335 1334 } 1336 1335 1337 1336 #if !defined(BOOTLOADER)
+16 -5
arch/parisc/kernel/ptrace.c
··· 126 126 unsigned long tmp; 127 127 long ret = -EIO; 128 128 129 + unsigned long user_regs_struct_size = sizeof(struct user_regs_struct); 130 + #ifdef CONFIG_64BIT 131 + if (is_compat_task()) 132 + user_regs_struct_size /= 2; 133 + #endif 134 + 129 135 switch (request) { 130 136 131 137 /* Read the word at location addr in the USER area. For ptraced ··· 172 166 addr >= sizeof(struct pt_regs)) 173 167 break; 174 168 if (addr == PT_IAOQ0 || addr == PT_IAOQ1) { 175 - data |= 3; /* ensure userspace privilege */ 169 + data |= PRIV_USER; /* ensure userspace privilege */ 176 170 } 177 171 if ((addr >= PT_GR1 && addr <= PT_GR31) || 178 172 addr == PT_IAOQ0 || addr == PT_IAOQ1 || ··· 187 181 return copy_regset_to_user(child, 188 182 task_user_regset_view(current), 189 183 REGSET_GENERAL, 190 - 0, sizeof(struct user_regs_struct), 184 + 0, user_regs_struct_size, 191 185 datap); 192 186 193 187 case PTRACE_SETREGS: /* Set all gp regs in the child. */ 194 188 return copy_regset_from_user(child, 195 189 task_user_regset_view(current), 196 190 REGSET_GENERAL, 197 - 0, sizeof(struct user_regs_struct), 191 + 0, user_regs_struct_size, 198 192 datap); 199 193 200 194 case PTRACE_GETFPREGS: /* Get the child FPU state. */ ··· 291 285 if (addr >= sizeof(struct pt_regs)) 292 286 break; 293 287 if (addr == PT_IAOQ0+4 || addr == PT_IAOQ1+4) { 294 - data |= 3; /* ensure userspace privilege */ 288 + data |= PRIV_USER; /* ensure userspace privilege */ 295 289 } 296 290 if (addr >= PT_FR0 && addr <= PT_FR31 + 4) { 297 291 /* Special case, fp regs are 64 bits anyway */ ··· 308 302 } 309 303 } 310 304 break; 305 + case PTRACE_GETREGS: 306 + case PTRACE_SETREGS: 307 + case PTRACE_GETFPREGS: 308 + case PTRACE_SETFPREGS: 309 + return arch_ptrace(child, request, addr, data); 311 310 312 311 default: 313 312 ret = compat_ptrace_request(child, request, addr, data); ··· 495 484 case RI(iaoq[0]): 496 485 case RI(iaoq[1]): 497 486 /* set 2 lowest bits to ensure userspace privilege: */ 498 - regs->iaoq[num - RI(iaoq[0])] = val | 3; 487 + regs->iaoq[num - RI(iaoq[0])] = val | PRIV_USER; 499 488 return; 500 489 case RI(sar): regs->sar = val; 501 490 return;
+2
arch/powerpc/include/asm/book3s/64/tlbflush.h
··· 97 97 { 98 98 if (radix_enabled()) 99 99 radix__tlb_flush(tlb); 100 + 101 + return hash__tlb_flush(tlb); 100 102 } 101 103 102 104 #ifdef CONFIG_SMP
+30 -13
arch/powerpc/include/asm/hw_irq.h
··· 173 173 return flags; 174 174 } 175 175 176 + static inline notrace unsigned long irq_soft_mask_andc_return(unsigned long mask) 177 + { 178 + unsigned long flags = irq_soft_mask_return(); 179 + 180 + irq_soft_mask_set(flags & ~mask); 181 + 182 + return flags; 183 + } 184 + 176 185 static inline unsigned long arch_local_save_flags(void) 177 186 { 178 187 return irq_soft_mask_return(); ··· 201 192 202 193 static inline unsigned long arch_local_irq_save(void) 203 194 { 204 - return irq_soft_mask_set_return(IRQS_DISABLED); 195 + return irq_soft_mask_or_return(IRQS_DISABLED); 205 196 } 206 197 207 198 static inline bool arch_irqs_disabled_flags(unsigned long flags) ··· 340 331 * is a different soft-masked interrupt pending that requires hard 341 332 * masking. 342 333 */ 343 - static inline bool should_hard_irq_enable(void) 334 + static inline bool should_hard_irq_enable(struct pt_regs *regs) 344 335 { 345 336 if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) { 346 - WARN_ON(irq_soft_mask_return() == IRQS_ENABLED); 337 + WARN_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED); 338 + WARN_ON(!(get_paca()->irq_happened & PACA_IRQ_HARD_DIS)); 347 339 WARN_ON(mfmsr() & MSR_EE); 348 340 } 349 341 ··· 357 347 * 358 348 * TODO: Add test for 64e 359 349 */ 360 - if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !power_pmu_wants_prompt_pmi()) 361 - return false; 350 + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) { 351 + if (!power_pmu_wants_prompt_pmi()) 352 + return false; 353 + /* 354 + * If PMIs are disabled then IRQs should be disabled as well, 355 + * so we shouldn't see this condition, check for it just in 356 + * case because we are about to enable PMIs. 357 + */ 358 + if (WARN_ON_ONCE(regs->softe & IRQS_PMI_DISABLED)) 359 + return false; 360 + } 362 361 363 362 if (get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK) 364 363 return false; ··· 377 358 378 359 /* 379 360 * Do the hard enabling, only call this if should_hard_irq_enable is true. 361 + * This allows PMI interrupts to profile irq handlers. 380 362 */ 381 363 static inline void do_hard_irq_enable(void) 382 364 { 383 - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) { 384 - WARN_ON(irq_soft_mask_return() == IRQS_ENABLED); 385 - WARN_ON(get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK); 386 - WARN_ON(mfmsr() & MSR_EE); 387 - } 388 365 /* 389 - * This allows PMI interrupts (and watchdog soft-NMIs) through. 390 - * There is no other reason to enable this way. 366 + * Asynch interrupts come in with IRQS_ALL_DISABLED, 367 + * PACA_IRQ_HARD_DIS, and MSR[EE]=0. 391 368 */ 369 + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) 370 + irq_soft_mask_andc_return(IRQS_PMI_DISABLED); 392 371 get_paca()->irq_happened &= ~PACA_IRQ_HARD_DIS; 393 372 __hard_irq_enable(); 394 373 } ··· 469 452 return !(regs->msr & MSR_EE); 470 453 } 471 454 472 - static __always_inline bool should_hard_irq_enable(void) 455 + static __always_inline bool should_hard_irq_enable(struct pt_regs *regs) 473 456 { 474 457 return false; 475 458 }
+1 -1
arch/powerpc/kernel/dbell.c
··· 27 27 28 28 ppc_msgsync(); 29 29 30 - if (should_hard_irq_enable()) 30 + if (should_hard_irq_enable(regs)) 31 31 do_hard_irq_enable(); 32 32 33 33 kvmppc_clear_host_ipi(smp_processor_id());
+2 -1
arch/powerpc/kernel/head_85xx.S
··· 864 864 * SPE unavailable trap from kernel - print a message, but let 865 865 * the task use SPE in the kernel until it returns to user mode. 866 866 */ 867 - KernelSPE: 867 + SYM_FUNC_START_LOCAL(KernelSPE) 868 868 lwz r3,_MSR(r1) 869 869 oris r3,r3,MSR_SPE@h 870 870 stw r3,_MSR(r1) /* enable use of SPE after return */ ··· 881 881 #endif 882 882 .align 4,0 883 883 884 + SYM_FUNC_END(KernelSPE) 884 885 #endif /* CONFIG_SPE */ 885 886 886 887 /*
+1 -1
arch/powerpc/kernel/irq.c
··· 238 238 irq = static_call(ppc_get_irq)(); 239 239 240 240 /* We can hard enable interrupts now to allow perf interrupts */ 241 - if (should_hard_irq_enable()) 241 + if (should_hard_irq_enable(regs)) 242 242 do_hard_irq_enable(); 243 243 244 244 /* And finally process it */
+1 -1
arch/powerpc/kernel/time.c
··· 515 515 } 516 516 517 517 /* Conditionally hard-enable interrupts. */ 518 - if (should_hard_irq_enable()) { 518 + if (should_hard_irq_enable(regs)) { 519 519 /* 520 520 * Ensure a positive value is written to the decrementer, or 521 521 * else some CPUs will continue to take decrementer exceptions.
+7 -4
arch/powerpc/kexec/file_load_64.c
··· 989 989 * linux,drconf-usable-memory properties. Get an approximate on the 990 990 * number of usable memory entries and use for FDT size estimation. 991 991 */ 992 - usm_entries = ((memblock_end_of_DRAM() / drmem_lmb_size()) + 993 - (2 * (resource_size(&crashk_res) / drmem_lmb_size()))); 994 - 995 - extra_size = (unsigned int)(usm_entries * sizeof(u64)); 992 + if (drmem_lmb_size()) { 993 + usm_entries = ((memory_hotplug_max() / drmem_lmb_size()) + 994 + (2 * (resource_size(&crashk_res) / drmem_lmb_size()))); 995 + extra_size = (unsigned int)(usm_entries * sizeof(u64)); 996 + } else { 997 + extra_size = 0; 998 + } 996 999 997 1000 /* 998 1001 * Get the number of CPU nodes in the current DT. This allows to
+2 -3
arch/powerpc/kvm/booke.c
··· 912 912 913 913 static void kvmppc_fill_pt_regs(struct pt_regs *regs) 914 914 { 915 - ulong r1, ip, msr, lr; 915 + ulong r1, msr, lr; 916 916 917 917 asm("mr %0, 1" : "=r"(r1)); 918 918 asm("mflr %0" : "=r"(lr)); 919 919 asm("mfmsr %0" : "=r"(msr)); 920 - asm("bl 1f; 1: mflr %0" : "=r"(ip)); 921 920 922 921 memset(regs, 0, sizeof(*regs)); 923 922 regs->gpr[1] = r1; 924 - regs->nip = ip; 923 + regs->nip = _THIS_IP_; 925 924 regs->msr = msr; 926 925 regs->link = lr; 927 926 }
+24
arch/powerpc/mm/book3s64/radix_pgtable.c
··· 234 234 end = (unsigned long)__end_rodata; 235 235 236 236 radix__change_memory_range(start, end, _PAGE_WRITE); 237 + 238 + for (start = PAGE_OFFSET; start < (unsigned long)_stext; start += PAGE_SIZE) { 239 + end = start + PAGE_SIZE; 240 + if (overlaps_interrupt_vector_text(start, end)) 241 + radix__change_memory_range(start, end, _PAGE_WRITE); 242 + else 243 + break; 244 + } 237 245 } 238 246 239 247 void radix__mark_initmem_nx(void) ··· 270 262 static unsigned long next_boundary(unsigned long addr, unsigned long end) 271 263 { 272 264 #ifdef CONFIG_STRICT_KERNEL_RWX 265 + unsigned long stext_phys; 266 + 267 + stext_phys = __pa_symbol(_stext); 268 + 269 + // Relocatable kernel running at non-zero real address 270 + if (stext_phys != 0) { 271 + // The end of interrupts code at zero is a rodata boundary 272 + unsigned long end_intr = __pa_symbol(__end_interrupts) - stext_phys; 273 + if (addr < end_intr) 274 + return end_intr; 275 + 276 + // Start of relocated kernel text is a rodata boundary 277 + if (addr < stext_phys) 278 + return stext_phys; 279 + } 280 + 273 281 if (addr < __pa_symbol(__srwx_boundary)) 274 282 return __pa_symbol(__srwx_boundary); 275 283 #endif
+7 -7
arch/powerpc/perf/imc-pmu.c
··· 22 22 * Used to avoid races in counting the nest-pmu units during hotplug 23 23 * register and unregister 24 24 */ 25 - static DEFINE_SPINLOCK(nest_init_lock); 25 + static DEFINE_MUTEX(nest_init_lock); 26 26 static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc); 27 27 static struct imc_pmu **per_nest_pmu_arr; 28 28 static cpumask_t nest_imc_cpumask; ··· 1629 1629 static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr) 1630 1630 { 1631 1631 if (pmu_ptr->domain == IMC_DOMAIN_NEST) { 1632 - spin_lock(&nest_init_lock); 1632 + mutex_lock(&nest_init_lock); 1633 1633 if (nest_pmus == 1) { 1634 1634 cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE); 1635 1635 kfree(nest_imc_refc); ··· 1639 1639 1640 1640 if (nest_pmus > 0) 1641 1641 nest_pmus--; 1642 - spin_unlock(&nest_init_lock); 1642 + mutex_unlock(&nest_init_lock); 1643 1643 } 1644 1644 1645 1645 /* Free core_imc memory */ ··· 1796 1796 * rest. To handle the cpuhotplug callback unregister, we track 1797 1797 * the number of nest pmus in "nest_pmus". 1798 1798 */ 1799 - spin_lock(&nest_init_lock); 1799 + mutex_lock(&nest_init_lock); 1800 1800 if (nest_pmus == 0) { 1801 1801 ret = init_nest_pmu_ref(); 1802 1802 if (ret) { 1803 - spin_unlock(&nest_init_lock); 1803 + mutex_unlock(&nest_init_lock); 1804 1804 kfree(per_nest_pmu_arr); 1805 1805 per_nest_pmu_arr = NULL; 1806 1806 goto err_free_mem; ··· 1808 1808 /* Register for cpu hotplug notification. */ 1809 1809 ret = nest_pmu_cpumask_init(); 1810 1810 if (ret) { 1811 - spin_unlock(&nest_init_lock); 1811 + mutex_unlock(&nest_init_lock); 1812 1812 kfree(nest_imc_refc); 1813 1813 kfree(per_nest_pmu_arr); 1814 1814 per_nest_pmu_arr = NULL; ··· 1816 1816 } 1817 1817 } 1818 1818 nest_pmus++; 1819 - spin_unlock(&nest_init_lock); 1819 + mutex_unlock(&nest_init_lock); 1820 1820 break; 1821 1821 case IMC_DOMAIN_CORE: 1822 1822 ret = core_imc_pmu_cpumask_init();
+3
arch/riscv/Makefile
··· 80 80 KBUILD_CFLAGS += -fno-omit-frame-pointer 81 81 endif 82 82 83 + # Avoid generating .eh_frame sections. 84 + KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables 85 + 83 86 KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax) 84 87 KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax) 85 88
-3
arch/riscv/include/asm/hwcap.h
··· 70 70 */ 71 71 enum riscv_isa_ext_key { 72 72 RISCV_ISA_EXT_KEY_FPU, /* For 'F' and 'D' */ 73 - RISCV_ISA_EXT_KEY_ZIHINTPAUSE, 74 73 RISCV_ISA_EXT_KEY_SVINVAL, 75 74 RISCV_ISA_EXT_KEY_MAX, 76 75 }; ··· 90 91 return RISCV_ISA_EXT_KEY_FPU; 91 92 case RISCV_ISA_EXT_d: 92 93 return RISCV_ISA_EXT_KEY_FPU; 93 - case RISCV_ISA_EXT_ZIHINTPAUSE: 94 - return RISCV_ISA_EXT_KEY_ZIHINTPAUSE; 95 94 case RISCV_ISA_EXT_SVINVAL: 96 95 return RISCV_ISA_EXT_KEY_SVINVAL; 97 96 default:
+12 -16
arch/riscv/include/asm/vdso/processor.h
··· 4 4 5 5 #ifndef __ASSEMBLY__ 6 6 7 - #include <linux/jump_label.h> 8 7 #include <asm/barrier.h> 9 - #include <asm/hwcap.h> 10 8 11 9 static inline void cpu_relax(void) 12 10 { 13 - if (!static_branch_likely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_ZIHINTPAUSE])) { 14 11 #ifdef __riscv_muldiv 15 - int dummy; 16 - /* In lieu of a halt instruction, induce a long-latency stall. */ 17 - __asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy)); 12 + int dummy; 13 + /* In lieu of a halt instruction, induce a long-latency stall. */ 14 + __asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy)); 18 15 #endif 19 - } else { 20 - /* 21 - * Reduce instruction retirement. 22 - * This assumes the PC changes. 23 - */ 24 - #ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE 25 - __asm__ __volatile__ ("pause"); 16 + 17 + #ifdef __riscv_zihintpause 18 + /* 19 + * Reduce instruction retirement. 20 + * This assumes the PC changes. 21 + */ 22 + __asm__ __volatile__ ("pause"); 26 23 #else 27 - /* Encoding of the pause instruction */ 28 - __asm__ __volatile__ (".4byte 0x100000F"); 24 + /* Encoding of the pause instruction */ 25 + __asm__ __volatile__ (".4byte 0x100000F"); 29 26 #endif 30 - } 31 27 barrier(); 32 28 } 33 29
+18
arch/riscv/kernel/probes/kprobes.c
··· 48 48 post_kprobe_handler(p, kcb, regs); 49 49 } 50 50 51 + static bool __kprobes arch_check_kprobe(struct kprobe *p) 52 + { 53 + unsigned long tmp = (unsigned long)p->addr - p->offset; 54 + unsigned long addr = (unsigned long)p->addr; 55 + 56 + while (tmp <= addr) { 57 + if (tmp == addr) 58 + return true; 59 + 60 + tmp += GET_INSN_LENGTH(*(u16 *)tmp); 61 + } 62 + 63 + return false; 64 + } 65 + 51 66 int __kprobes arch_prepare_kprobe(struct kprobe *p) 52 67 { 53 68 unsigned long probe_addr = (unsigned long)p->addr; 54 69 55 70 if (probe_addr & 0x1) 71 + return -EILSEQ; 72 + 73 + if (!arch_check_kprobe(p)) 56 74 return -EILSEQ; 57 75 58 76 /* copy instruction */
+1
arch/sh/kernel/vmlinux.lds.S
··· 4 4 * Written by Niibe Yutaka and Paul Mundt 5 5 */ 6 6 OUTPUT_ARCH(sh) 7 + #define RUNTIME_DISCARD_EXIT 7 8 #include <asm/thread_info.h> 8 9 #include <asm/cache.h> 9 10 #include <asm/vmlinux.lds.h>
+24 -2
arch/x86/include/asm/debugreg.h
··· 39 39 asm("mov %%db6, %0" :"=r" (val)); 40 40 break; 41 41 case 7: 42 - asm("mov %%db7, %0" :"=r" (val)); 42 + /* 43 + * Apply __FORCE_ORDER to DR7 reads to forbid re-ordering them 44 + * with other code. 45 + * 46 + * This is needed because a DR7 access can cause a #VC exception 47 + * when running under SEV-ES. Taking a #VC exception is not a 48 + * safe thing to do just anywhere in the entry code and 49 + * re-ordering might place the access into an unsafe location. 50 + * 51 + * This happened in the NMI handler, where the DR7 read was 52 + * re-ordered to happen before the call to sev_es_ist_enter(), 53 + * causing stack recursion. 54 + */ 55 + asm volatile("mov %%db7, %0" : "=r" (val) : __FORCE_ORDER); 43 56 break; 44 57 default: 45 58 BUG(); ··· 79 66 asm("mov %0, %%db6" ::"r" (value)); 80 67 break; 81 68 case 7: 82 - asm("mov %0, %%db7" ::"r" (value)); 69 + /* 70 + * Apply __FORCE_ORDER to DR7 writes to forbid re-ordering them 71 + * with other code. 72 + * 73 + * While is didn't happen with a DR7 write (see the DR7 read 74 + * comment above which explains where it happened), add the 75 + * __FORCE_ORDER here too to avoid similar problems in the 76 + * future. 77 + */ 78 + asm volatile("mov %0, %%db7" ::"r" (value), __FORCE_ORDER); 83 79 break; 84 80 default: 85 81 BUG();
+1 -1
block/bfq-cgroup.c
··· 769 769 * request from the old cgroup. 770 770 */ 771 771 bfq_put_cooperator(sync_bfqq); 772 - bfq_release_process_ref(bfqd, sync_bfqq); 773 772 bic_set_bfqq(bic, NULL, true); 773 + bfq_release_process_ref(bfqd, sync_bfqq); 774 774 } 775 775 } 776 776 }
+3 -1
block/bfq-iosched.c
··· 5425 5425 5426 5426 bfqq = bic_to_bfqq(bic, false); 5427 5427 if (bfqq) { 5428 - bfq_release_process_ref(bfqd, bfqq); 5428 + struct bfq_queue *old_bfqq = bfqq; 5429 + 5429 5430 bfqq = bfq_get_queue(bfqd, bio, false, bic, true); 5430 5431 bic_set_bfqq(bic, bfqq, false); 5432 + bfq_release_process_ref(bfqd, old_bfqq); 5431 5433 } 5432 5434 5433 5435 bfqq = bic_to_bfqq(bic, true);
+4
block/blk-cgroup.c
··· 2001 2001 struct blkg_iostat_set *bis; 2002 2002 unsigned long flags; 2003 2003 2004 + /* Root-level stats are sourced from system-wide IO stats */ 2005 + if (!cgroup_parent(blkcg->css.cgroup)) 2006 + return; 2007 + 2004 2008 cpu = get_cpu(); 2005 2009 bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu); 2006 2010 flags = u64_stats_update_begin_irqsave(&bis->sync);
+3 -2
block/blk-mq.c
··· 4069 4069 * blk_mq_destroy_queue - shutdown a request queue 4070 4070 * @q: request queue to shutdown 4071 4071 * 4072 - * This shuts down a request queue allocated by blk_mq_init_queue() and drops 4073 - * the initial reference. All future requests will failed with -ENODEV. 4072 + * This shuts down a request queue allocated by blk_mq_init_queue(). All future 4073 + * requests will be failed with -ENODEV. The caller is responsible for dropping 4074 + * the reference from blk_mq_init_queue() by calling blk_put_queue(). 4074 4075 * 4075 4076 * Context: can sleep 4076 4077 */
+2 -2
certs/Makefile
··· 23 23 targets += blacklist_hash_list 24 24 25 25 quiet_cmd_extract_certs = CERT $@ 26 - cmd_extract_certs = $(obj)/extract-cert $(extract-cert-in) $@ 27 - extract-cert-in = $(or $(filter-out $(obj)/extract-cert, $(real-prereqs)),"") 26 + cmd_extract_certs = $(obj)/extract-cert "$(extract-cert-in)" $@ 27 + extract-cert-in = $(filter-out $(obj)/extract-cert, $(real-prereqs)) 28 28 29 29 $(obj)/system_certificates.o: $(obj)/x509_certificate_list 30 30
+1 -1
drivers/ata/libata-core.c
··· 3109 3109 */ 3110 3110 if (spd > 1) 3111 3111 mask &= (1 << (spd - 1)) - 1; 3112 - else 3112 + else if (link->sata_spd) 3113 3113 return -EINVAL; 3114 3114 3115 3115 /* were we already at the bottom? */
+1 -1
drivers/block/ublk_drv.c
··· 137 137 138 138 char *__queues; 139 139 140 - unsigned short queue_size; 140 + unsigned int queue_size; 141 141 struct ublksrv_ctrl_dev_info dev_info; 142 142 143 143 struct blk_mq_tag_set tag_set;
+1 -1
drivers/dma-buf/dma-fence.c
··· 167 167 0, 0); 168 168 169 169 set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, 170 - &dma_fence_stub.flags); 170 + &fence->flags); 171 171 172 172 dma_fence_signal(fence); 173 173
+2
drivers/firmware/efi/efi.c
··· 1007 1007 /* first try to find a slot in an existing linked list entry */ 1008 1008 for (prsv = efi_memreserve_root->next; prsv; ) { 1009 1009 rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB); 1010 + if (!rsv) 1011 + return -ENOMEM; 1010 1012 index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size); 1011 1013 if (index < rsv->size) { 1012 1014 rsv->entry[index].base = addr;
+1 -1
drivers/firmware/efi/memattr.c
··· 33 33 return -ENOMEM; 34 34 } 35 35 36 - if (tbl->version > 1) { 36 + if (tbl->version > 2) { 37 37 pr_warn("Unexpected EFI Memory Attributes table version %d\n", 38 38 tbl->version); 39 39 goto unmap;
+12 -5
drivers/fpga/intel-m10-bmc-sec-update.c
··· 574 574 len = scnprintf(buf, SEC_UPDATE_LEN_MAX, "secure-update%d", 575 575 sec->fw_name_id); 576 576 sec->fw_name = kmemdup_nul(buf, len, GFP_KERNEL); 577 - if (!sec->fw_name) 578 - return -ENOMEM; 577 + if (!sec->fw_name) { 578 + ret = -ENOMEM; 579 + goto fw_name_fail; 580 + } 579 581 580 582 fwl = firmware_upload_register(THIS_MODULE, sec->dev, sec->fw_name, 581 583 &m10bmc_ops, sec); 582 584 if (IS_ERR(fwl)) { 583 585 dev_err(sec->dev, "Firmware Upload driver failed to start\n"); 584 - kfree(sec->fw_name); 585 - xa_erase(&fw_upload_xa, sec->fw_name_id); 586 - return PTR_ERR(fwl); 586 + ret = PTR_ERR(fwl); 587 + goto fw_uploader_fail; 587 588 } 588 589 589 590 sec->fwl = fwl; 590 591 return 0; 592 + 593 + fw_uploader_fail: 594 + kfree(sec->fw_name); 595 + fw_name_fail: 596 + xa_erase(&fw_upload_xa, sec->fw_name_id); 597 + return ret; 591 598 } 592 599 593 600 static int m10bmc_sec_remove(struct platform_device *pdev)
+2 -2
drivers/fpga/stratix10-soc.c
··· 213 213 /* Allocate buffers from the service layer's pool. */ 214 214 for (i = 0; i < NUM_SVC_BUFS; i++) { 215 215 kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE); 216 - if (!kbuf) { 216 + if (IS_ERR(kbuf)) { 217 217 s10_free_buffers(mgr); 218 - ret = -ENOMEM; 218 + ret = PTR_ERR(kbuf); 219 219 goto init_done; 220 220 } 221 221
+2 -2
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 790 790 * zero here */ 791 791 WARN_ON(simd != 0); 792 792 793 - /* type 2 wave data */ 794 - dst[(*no_fields)++] = 2; 793 + /* type 3 wave data */ 794 + dst[(*no_fields)++] = 3; 795 795 dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_STATUS); 796 796 dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_LO); 797 797 dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_HI);
+7 -1
drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c
··· 337 337 338 338 static void nbio_v4_3_init_registers(struct amdgpu_device *adev) 339 339 { 340 - return; 340 + if (adev->ip_versions[NBIO_HWIP][0] == IP_VERSION(4, 3, 0)) { 341 + uint32_t data; 342 + 343 + data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2); 344 + data &= ~RCC_DEV0_EPF2_STRAP2__STRAP_NO_SOFT_RESET_DEV0_F2_MASK; 345 + WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2, data); 346 + } 341 347 } 342 348 343 349 static u32 nbio_v4_3_get_rom_offset(struct amdgpu_device *adev)
+2 -1
drivers/gpu/drm/amd/amdgpu/soc21.c
··· 640 640 AMD_CG_SUPPORT_GFX_CGCG | 641 641 AMD_CG_SUPPORT_GFX_CGLS | 642 642 AMD_CG_SUPPORT_REPEATER_FGCG | 643 - AMD_CG_SUPPORT_GFX_MGCG; 643 + AMD_CG_SUPPORT_GFX_MGCG | 644 + AMD_CG_SUPPORT_HDP_SD; 644 645 adev->pg_flags = AMD_PG_SUPPORT_VCN | 645 646 AMD_PG_SUPPORT_VCN_DPG | 646 647 AMD_PG_SUPPORT_JPEG;
+11
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4501 4501 static int dm_early_init(void *handle) 4502 4502 { 4503 4503 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 4504 + struct amdgpu_mode_info *mode_info = &adev->mode_info; 4505 + struct atom_context *ctx = mode_info->atom_context; 4506 + int index = GetIndexIntoMasterTable(DATA, Object_Header); 4507 + u16 data_offset; 4508 + 4509 + /* if there is no object header, skip DM */ 4510 + if (!amdgpu_atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) { 4511 + adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK; 4512 + dev_info(adev->dev, "No object header, skipping DM\n"); 4513 + return -ENOENT; 4514 + } 4504 4515 4505 4516 switch (adev->asic_type) { 4506 4517 #if defined(CONFIG_DRM_AMD_DC_SI)
+3 -2
drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c
··· 874 874 }, 875 875 876 876 // 6:1 downscaling ratio: 1000/6 = 166.666 877 + // 4:1 downscaling ratio for ARGB888 to prevent underflow during P010 playback: 1000/4 = 250 877 878 .max_downscale_factor = { 878 - .argb8888 = 167, 879 + .argb8888 = 250, 879 880 .nv12 = 167, 880 881 .fp16 = 167 881 882 }, ··· 1764 1763 pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE; 1765 1764 pool->base.pipe_count = pool->base.res_cap->num_timing_generator; 1766 1765 pool->base.mpcc_count = pool->base.res_cap->num_timing_generator; 1767 - dc->caps.max_downscale_ratio = 600; 1766 + dc->caps.max_downscale_ratio = 400; 1768 1767 dc->caps.i2c_speed_in_khz = 100; 1769 1768 dc->caps.i2c_speed_in_khz_hdcp = 100; 1770 1769 dc->caps.max_cursor_size = 256;
+1 -1
drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c
··· 94 94 .get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync, 95 95 .calc_vupdate_position = dcn10_calc_vupdate_position, 96 96 .apply_idle_power_optimizations = dcn32_apply_idle_power_optimizations, 97 - .does_plane_fit_in_mall = dcn30_does_plane_fit_in_mall, 97 + .does_plane_fit_in_mall = NULL, 98 98 .set_backlight_level = dcn21_set_backlight_level, 99 99 .set_abm_immediate_disable = dcn21_set_abm_immediate_disable, 100 100 .hardware_release = dcn30_hardware_release,
+1 -1
drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c
··· 3183 3183 } else { 3184 3184 v->MIN_DST_Y_NEXT_START[k] = v->VTotal[k] - v->VFrontPorch[k] + v->VTotal[k] - v->VActive[k] - v->VStartup[k]; 3185 3185 } 3186 - v->MIN_DST_Y_NEXT_START[k] += dml_floor(4.0 * v->TSetup[k] / (double)v->HTotal[k] / v->PixelClock[k], 1.0) / 4.0; 3186 + v->MIN_DST_Y_NEXT_START[k] += dml_floor(4.0 * v->TSetup[k] / ((double)v->HTotal[k] / v->PixelClock[k]), 1.0) / 4.0; 3187 3187 if (((v->VUpdateOffsetPix[k] + v->VUpdateWidthPix[k] + v->VReadyOffsetPix[k]) / v->HTotal[k]) 3188 3188 <= (isInterlaceTiming ? 3189 3189 dml_floor((v->VTotal[k] - v->VActive[k] - v->VFrontPorch[k] - v->VStartup[k]) / 2.0, 1.0) :
+12
drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c
··· 532 532 if (dmub->hw_funcs.reset) 533 533 dmub->hw_funcs.reset(dmub); 534 534 535 + /* reset the cache of the last wptr as well now that hw is reset */ 536 + dmub->inbox1_last_wptr = 0; 537 + 535 538 cw0.offset.quad_part = inst_fb->gpu_addr; 536 539 cw0.region.base = DMUB_CW0_BASE; 537 540 cw0.region.top = cw0.region.base + inst_fb->size - 1; ··· 651 648 652 649 if (dmub->hw_funcs.reset) 653 650 dmub->hw_funcs.reset(dmub); 651 + 652 + /* mailboxes have been reset in hw, so reset the sw state as well */ 653 + dmub->inbox1_last_wptr = 0; 654 + dmub->inbox1_rb.wrpt = 0; 655 + dmub->inbox1_rb.rptr = 0; 656 + dmub->outbox0_rb.wrpt = 0; 657 + dmub->outbox0_rb.rptr = 0; 658 + dmub->outbox1_rb.wrpt = 0; 659 + dmub->outbox1_rb.rptr = 0; 654 660 655 661 dmub->hw_init = false; 656 662
+4 -2
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 2007 2007 gc_ver == IP_VERSION(10, 3, 0) || 2008 2008 gc_ver == IP_VERSION(10, 1, 2) || 2009 2009 gc_ver == IP_VERSION(11, 0, 0) || 2010 - gc_ver == IP_VERSION(11, 0, 2))) 2010 + gc_ver == IP_VERSION(11, 0, 2) || 2011 + gc_ver == IP_VERSION(11, 0, 3))) 2011 2012 *states = ATTR_STATE_UNSUPPORTED; 2012 2013 } else if (DEVICE_ATTR_IS(pp_dpm_dclk)) { 2013 2014 if (!(gc_ver == IP_VERSION(10, 3, 1) || 2014 2015 gc_ver == IP_VERSION(10, 3, 0) || 2015 2016 gc_ver == IP_VERSION(10, 1, 2) || 2016 2017 gc_ver == IP_VERSION(11, 0, 0) || 2017 - gc_ver == IP_VERSION(11, 0, 2))) 2018 + gc_ver == IP_VERSION(11, 0, 2) || 2019 + gc_ver == IP_VERSION(11, 0, 3))) 2018 2020 *states = ATTR_STATE_UNSUPPORTED; 2019 2021 } else if (DEVICE_ATTR_IS(pp_power_profile_mode)) { 2020 2022 if (amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP)
+14
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1500 1500 } 1501 1501 1502 1502 /* 1503 + * For SMU 13.0.4/11, PMFW will handle the features disablement properly 1504 + * for gpu reset case. Driver involvement is unnecessary. 1505 + */ 1506 + if (amdgpu_in_reset(adev)) { 1507 + switch (adev->ip_versions[MP1_HWIP][0]) { 1508 + case IP_VERSION(13, 0, 4): 1509 + case IP_VERSION(13, 0, 11): 1510 + return 0; 1511 + default: 1512 + break; 1513 + } 1514 + } 1515 + 1516 + /* 1503 1517 * For gpu reset, runpm and hibernation through BACO, 1504 1518 * BACO feature has to be kept enabled. 1505 1519 */
+1 -1
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 1319 1319 { .refclk = 24000, .cdclk = 192000, .divider = 2, .ratio = 16 }, 1320 1320 { .refclk = 24000, .cdclk = 312000, .divider = 2, .ratio = 26 }, 1321 1321 { .refclk = 24000, .cdclk = 552000, .divider = 2, .ratio = 46 }, 1322 - { .refclk = 24400, .cdclk = 648000, .divider = 2, .ratio = 54 }, 1322 + { .refclk = 24000, .cdclk = 648000, .divider = 2, .ratio = 54 }, 1323 1323 1324 1324 { .refclk = 38400, .cdclk = 179200, .divider = 3, .ratio = 14 }, 1325 1325 { .refclk = 38400, .cdclk = 192000, .divider = 2, .ratio = 10 },
+12 -4
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 1861 1861 vm = ctx->vm; 1862 1862 GEM_BUG_ON(!vm); 1863 1863 1864 - err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL); 1865 - if (err) 1866 - return err; 1867 - 1864 + /* 1865 + * Get a reference for the allocated handle. Once the handle is 1866 + * visible in the vm_xa table, userspace could try to close it 1867 + * from under our feet, so we need to hold the extra reference 1868 + * first. 1869 + */ 1868 1870 i915_vm_get(vm); 1871 + 1872 + err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL); 1873 + if (err) { 1874 + i915_vm_put(vm); 1875 + return err; 1876 + } 1869 1877 1870 1878 GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */ 1871 1879 args->value = id;
+5 -4
drivers/gpu/drm/i915/gem/i915_gem_tiling.c
··· 305 305 spin_unlock(&obj->vma.lock); 306 306 307 307 obj->tiling_and_stride = tiling | stride; 308 - i915_gem_object_unlock(obj); 309 - 310 - /* Force the fence to be reacquired for GTT access */ 311 - i915_gem_object_release_mmap_gtt(obj); 312 308 313 309 /* Try to preallocate memory required to save swizzling on put-pages */ 314 310 if (i915_gem_object_needs_bit17_swizzle(obj)) { ··· 316 320 bitmap_free(obj->bit_17); 317 321 obj->bit_17 = NULL; 318 322 } 323 + 324 + i915_gem_object_unlock(obj); 325 + 326 + /* Force the fence to be reacquired for GTT access */ 327 + i915_gem_object_release_mmap_gtt(obj); 319 328 320 329 return 0; 321 330 }
+3 -1
drivers/gpu/drm/i915/gt/intel_context.c
··· 528 528 return rq; 529 529 } 530 530 531 - struct i915_request *intel_context_find_active_request(struct intel_context *ce) 531 + struct i915_request *intel_context_get_active_request(struct intel_context *ce) 532 532 { 533 533 struct intel_context *parent = intel_context_to_parent(ce); 534 534 struct i915_request *rq, *active = NULL; ··· 552 552 553 553 active = rq; 554 554 } 555 + if (active) 556 + active = i915_request_get_rcu(active); 555 557 spin_unlock_irqrestore(&parent->guc_state.lock, flags); 556 558 557 559 return active;
+1 -2
drivers/gpu/drm/i915/gt/intel_context.h
··· 268 268 269 269 struct i915_request *intel_context_create_request(struct intel_context *ce); 270 270 271 - struct i915_request * 272 - intel_context_find_active_request(struct intel_context *ce); 271 + struct i915_request *intel_context_get_active_request(struct intel_context *ce); 273 272 274 273 static inline bool intel_context_is_barrier(const struct intel_context *ce) 275 274 {
+2 -2
drivers/gpu/drm/i915/gt/intel_engine.h
··· 248 248 ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine, 249 249 ktime_t *now); 250 250 251 - struct i915_request * 252 - intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine); 251 + void intel_engine_get_hung_entity(struct intel_engine_cs *engine, 252 + struct intel_context **ce, struct i915_request **rq); 253 253 254 254 u32 intel_engine_context_size(struct intel_gt *gt, u8 class); 255 255 struct intel_context *
+39 -35
drivers/gpu/drm/i915/gt/intel_engine_cs.c
··· 2094 2094 }
 2095 2095 }
 2096 2096 
 2097 - static unsigned long list_count(struct list_head *list)
 2098 - {
 2099 - struct list_head *pos;
 2100 - unsigned long count = 0;
 2101 - 
 2102 - list_for_each(pos, list)
 2103 - count++;
 2104 - 
 2105 - return count;
 2106 - }
 2107 - 
 2108 2097 static unsigned long read_ul(void *p, size_t x)
 2109 2098 {
 2110 2099 return *(unsigned long *)(p + x);
··· 2185 2196 }
 2186 2197 }
 2187 2198 
 2188 - static void engine_dump_active_requests(struct intel_engine_cs *engine, struct drm_printer *m)
 2199 + static void engine_dump_active_requests(struct intel_engine_cs *engine,
 2200 + struct drm_printer *m)
 2189 2201 {
 2202 + struct intel_context *hung_ce = NULL;
 2190 2203 struct i915_request *hung_rq = NULL;
 2191 - struct intel_context *ce;
 2192 - bool guc;
 2193 2204 
 2194 2205 /*
 2195 2206 * No need for an engine->irq_seqno_barrier() before the seqno reads.
··· 2198 2209 * But the intention here is just to report an instantaneous snapshot
 2199 2210 * so that's fine.
 2200 2211 */
 2201 - lockdep_assert_held(&engine->sched_engine->lock);
 2212 + intel_engine_get_hung_entity(engine, &hung_ce, &hung_rq);
 2202 2213 
 2203 2214 drm_printf(m, "\tRequests:\n");
 2204 2215 
 2205 - guc = intel_uc_uses_guc_submission(&engine->gt->uc);
 2206 - if (guc) {
 2207 - ce = intel_engine_get_hung_context(engine);
 2208 - if (ce)
 2209 - hung_rq = intel_context_find_active_request(ce);
 2210 - } else {
 2211 - hung_rq = intel_engine_execlist_find_hung_request(engine);
 2212 - }
 2213 - 
 2214 2216 if (hung_rq)
 2215 2217 engine_dump_request(hung_rq, m, "\t\thung");
 2218 + else if (hung_ce)
 2219 + drm_printf(m, "\t\tGot hung ce but no hung rq!\n");
 2216 2220 
 2217 - if (guc)
 2221 + if (intel_uc_uses_guc_submission(&engine->gt->uc))
 2218 2222 intel_guc_dump_active_requests(engine, hung_rq, m);
 2219 2223 else
 2220 - intel_engine_dump_active_requests(&engine->sched_engine->requests,
 2221 - hung_rq, m);
 2224 + intel_execlists_dump_active_requests(engine, hung_rq, m);
 2225 + 
 2226 + if (hung_rq)
 2227 + i915_request_put(hung_rq);
 2222 2228 }
 2223 2229 
 2224 2230 void intel_engine_dump(struct intel_engine_cs *engine,
··· 2223 2239 struct i915_gpu_error * const error = &engine->i915->gpu_error;
 2224 2240 struct i915_request *rq;
 2225 2241 intel_wakeref_t wakeref;
 2226 - unsigned long flags;
 2227 2242 ktime_t dummy;
 2228 2243 
 2229 2244 if (header) {
··· 2259 2276 i915_reset_count(error));
 2260 2277 print_properties(engine, m);
 2261 2278 
 2262 - spin_lock_irqsave(&engine->sched_engine->lock, flags);
 2263 2279 engine_dump_active_requests(engine, m);
 2264 - 
 2265 - drm_printf(m, "\tOn hold?: %lu\n",
 2266 - list_count(&engine->sched_engine->hold));
 2267 - spin_unlock_irqrestore(&engine->sched_engine->lock, flags);
 2268 2280 
 2269 2281 drm_printf(m, "\tMMIO base: 0x%08x\n", engine->mmio_base);
 2270 2282 wakeref = intel_runtime_pm_get_if_in_use(engine->uncore->rpm);
··· 2306 2328 return siblings[0]->cops->create_virtual(siblings, count, flags);
 2307 2329 }
 2308 2330 
 2309 - struct i915_request *
 2310 - intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine)
 2331 + static struct i915_request *engine_execlist_find_hung_request(struct intel_engine_cs *engine)
 2311 2332 {
 2312 2333 struct i915_request *request, *active = NULL;
 2313 2334 
··· 2356 2379 }
 2357 2380 
 2358 2381 return active;
 2382 + }
 2383 + 
 2384 + void intel_engine_get_hung_entity(struct intel_engine_cs *engine,
 2385 + struct intel_context **ce, struct i915_request **rq)
 2386 + {
 2387 + unsigned long flags;
 2388 + 
 2389 + *ce = intel_engine_get_hung_context(engine);
 2390 + if (*ce) {
 2391 + intel_engine_clear_hung_context(engine);
 2392 + 
 2393 + *rq = intel_context_get_active_request(*ce);
 2394 + return;
 2395 + }
 2396 + 
 2397 + /*
 2398 + * Getting here with GuC enabled means it is a forced error capture
 2399 + * with no actual hang. So, no need to attempt the execlist search.
2400 + */ 2401 + if (intel_uc_uses_guc_submission(&engine->gt->uc)) 2402 + return; 2403 + 2404 + spin_lock_irqsave(&engine->sched_engine->lock, flags); 2405 + *rq = engine_execlist_find_hung_request(engine); 2406 + if (*rq) 2407 + *rq = i915_request_get_rcu(*rq); 2408 + spin_unlock_irqrestore(&engine->sched_engine->lock, flags); 2359 2409 } 2360 2410 2361 2411 void xehp_enable_ccs_engines(struct intel_engine_cs *engine)
+27
drivers/gpu/drm/i915/gt/intel_execlists_submission.c
··· 4148 4148 spin_unlock_irqrestore(&sched_engine->lock, flags); 4149 4149 } 4150 4150 4151 + static unsigned long list_count(struct list_head *list) 4152 + { 4153 + struct list_head *pos; 4154 + unsigned long count = 0; 4155 + 4156 + list_for_each(pos, list) 4157 + count++; 4158 + 4159 + return count; 4160 + } 4161 + 4162 + void intel_execlists_dump_active_requests(struct intel_engine_cs *engine, 4163 + struct i915_request *hung_rq, 4164 + struct drm_printer *m) 4165 + { 4166 + unsigned long flags; 4167 + 4168 + spin_lock_irqsave(&engine->sched_engine->lock, flags); 4169 + 4170 + intel_engine_dump_active_requests(&engine->sched_engine->requests, hung_rq, m); 4171 + 4172 + drm_printf(m, "\tOn hold?: %lu\n", 4173 + list_count(&engine->sched_engine->hold)); 4174 + 4175 + spin_unlock_irqrestore(&engine->sched_engine->lock, flags); 4176 + } 4177 + 4151 4178 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) 4152 4179 #include "selftest_execlists.c" 4153 4180 #endif
+4
drivers/gpu/drm/i915/gt/intel_execlists_submission.h
··· 32 32 int indent), 33 33 unsigned int max); 34 34 35 + void intel_execlists_dump_active_requests(struct intel_engine_cs *engine, 36 + struct i915_request *hung_rq, 37 + struct drm_printer *m); 38 + 35 39 bool 36 40 intel_engine_in_execlists_submission_mode(const struct intel_engine_cs *engine); 37 41
+13 -1
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 1702 1702 goto next_context; 1703 1703 1704 1704 guilty = false; 1705 - rq = intel_context_find_active_request(ce); 1705 + rq = intel_context_get_active_request(ce); 1706 1706 if (!rq) { 1707 1707 head = ce->ring->tail; 1708 1708 goto out_replay; ··· 1715 1715 head = intel_ring_wrap(ce->ring, rq->head); 1716 1716 1717 1717 __i915_request_reset(rq, guilty); 1718 + i915_request_put(rq); 1718 1719 out_replay: 1719 1720 guc_reset_state(ce, head, guilty); 1720 1721 next_context: ··· 4818 4817 4819 4818 xa_lock_irqsave(&guc->context_lookup, flags); 4820 4819 xa_for_each(&guc->context_lookup, index, ce) { 4820 + bool found; 4821 + 4821 4822 if (!kref_get_unless_zero(&ce->ref)) 4822 4823 continue; 4823 4824 ··· 4836 4833 goto next; 4837 4834 } 4838 4835 4836 + found = false; 4837 + spin_lock(&ce->guc_state.lock); 4839 4838 list_for_each_entry(rq, &ce->guc_state.requests, sched.link) { 4840 4839 if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE) 4841 4840 continue; 4842 4841 4842 + found = true; 4843 + break; 4844 + } 4845 + spin_unlock(&ce->guc_state.lock); 4846 + 4847 + if (found) { 4843 4848 intel_engine_set_hung_context(engine, ce); 4844 4849 4845 4850 /* Can only cope with one hang at a time... */ ··· 4855 4844 xa_lock(&guc->context_lookup); 4856 4845 goto done; 4857 4846 } 4847 + 4858 4848 next: 4859 4849 intel_context_put(ce); 4860 4850 xa_lock(&guc->context_lookup);
+6 -27
drivers/gpu/drm/i915/i915_gpu_error.c
··· 1596 1596 { 1597 1597 struct intel_engine_capture_vma *capture = NULL; 1598 1598 struct intel_engine_coredump *ee; 1599 - struct intel_context *ce; 1599 + struct intel_context *ce = NULL; 1600 1600 struct i915_request *rq = NULL; 1601 - unsigned long flags; 1602 1601 1603 1602 ee = intel_engine_coredump_alloc(engine, ALLOW_FAIL, dump_flags); 1604 1603 if (!ee) 1605 1604 return NULL; 1606 1605 1607 - ce = intel_engine_get_hung_context(engine); 1608 - if (ce) { 1609 - intel_engine_clear_hung_context(engine); 1610 - rq = intel_context_find_active_request(ce); 1611 - if (!rq || !i915_request_started(rq)) 1612 - goto no_request_capture; 1613 - } else { 1614 - /* 1615 - * Getting here with GuC enabled means it is a forced error capture 1616 - * with no actual hang. So, no need to attempt the execlist search. 1617 - */ 1618 - if (!intel_uc_uses_guc_submission(&engine->gt->uc)) { 1619 - spin_lock_irqsave(&engine->sched_engine->lock, flags); 1620 - rq = intel_engine_execlist_find_hung_request(engine); 1621 - spin_unlock_irqrestore(&engine->sched_engine->lock, 1622 - flags); 1623 - } 1624 - } 1625 - if (rq) 1626 - rq = i915_request_get_rcu(rq); 1627 - 1628 - if (!rq) 1606 + intel_engine_get_hung_entity(engine, &ce, &rq); 1607 + if (!rq || !i915_request_started(rq)) 1629 1608 goto no_request_capture; 1630 1609 1631 1610 capture = intel_engine_coredump_add_request(ee, rq, ATOMIC_MAYFAIL); 1632 - if (!capture) { 1633 - i915_request_put(rq); 1611 + if (!capture) 1634 1612 goto no_request_capture; 1635 - } 1636 1613 if (dump_flags & CORE_DUMP_FLAG_IS_GUC_CAPTURE) 1637 1614 intel_guc_capture_get_matching_node(engine->gt, ee, ce); 1638 1615 ··· 1619 1642 return ee; 1620 1643 1621 1644 no_request_capture: 1645 + if (rq) 1646 + i915_request_put(rq); 1622 1647 kfree(ee); 1623 1648 return NULL; 1624 1649 }
+1
drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h
··· 97 97 int gp102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 98 98 int gp10b_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 99 99 int gv100_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 100 + int tu102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 100 101 int ga100_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 101 102 int ga102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); 102 103
+3
drivers/gpu/drm/nouveau/nvkm/core/firmware.c
··· 151 151 static enum nvkm_memory_target 152 152 nvkm_firmware_mem_target(struct nvkm_memory *memory) 153 153 { 154 + if (nvkm_firmware_mem(memory)->device->func->tegra) 155 + return NVKM_MEM_TARGET_NCOH; 156 + 154 157 return NVKM_MEM_TARGET_HOST; 155 158 } 156 159
+5 -5
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
··· 2405 2405 .bus = { 0x00000001, gf100_bus_new }, 2406 2406 .devinit = { 0x00000001, tu102_devinit_new }, 2407 2407 .fault = { 0x00000001, tu102_fault_new }, 2408 - .fb = { 0x00000001, gv100_fb_new }, 2408 + .fb = { 0x00000001, tu102_fb_new }, 2409 2409 .fuse = { 0x00000001, gm107_fuse_new }, 2410 2410 .gpio = { 0x00000001, gk104_gpio_new }, 2411 2411 .gsp = { 0x00000001, gv100_gsp_new }, ··· 2440 2440 .bus = { 0x00000001, gf100_bus_new }, 2441 2441 .devinit = { 0x00000001, tu102_devinit_new }, 2442 2442 .fault = { 0x00000001, tu102_fault_new }, 2443 - .fb = { 0x00000001, gv100_fb_new }, 2443 + .fb = { 0x00000001, tu102_fb_new }, 2444 2444 .fuse = { 0x00000001, gm107_fuse_new }, 2445 2445 .gpio = { 0x00000001, gk104_gpio_new }, 2446 2446 .gsp = { 0x00000001, gv100_gsp_new }, ··· 2475 2475 .bus = { 0x00000001, gf100_bus_new }, 2476 2476 .devinit = { 0x00000001, tu102_devinit_new }, 2477 2477 .fault = { 0x00000001, tu102_fault_new }, 2478 - .fb = { 0x00000001, gv100_fb_new }, 2478 + .fb = { 0x00000001, tu102_fb_new }, 2479 2479 .fuse = { 0x00000001, gm107_fuse_new }, 2480 2480 .gpio = { 0x00000001, gk104_gpio_new }, 2481 2481 .gsp = { 0x00000001, gv100_gsp_new }, ··· 2510 2510 .bus = { 0x00000001, gf100_bus_new }, 2511 2511 .devinit = { 0x00000001, tu102_devinit_new }, 2512 2512 .fault = { 0x00000001, tu102_fault_new }, 2513 - .fb = { 0x00000001, gv100_fb_new }, 2513 + .fb = { 0x00000001, tu102_fb_new }, 2514 2514 .fuse = { 0x00000001, gm107_fuse_new }, 2515 2515 .gpio = { 0x00000001, gk104_gpio_new }, 2516 2516 .gsp = { 0x00000001, gv100_gsp_new }, ··· 2545 2545 .bus = { 0x00000001, gf100_bus_new }, 2546 2546 .devinit = { 0x00000001, tu102_devinit_new }, 2547 2547 .fault = { 0x00000001, tu102_fault_new }, 2548 - .fb = { 0x00000001, gv100_fb_new }, 2548 + .fb = { 0x00000001, tu102_fb_new }, 2549 2549 .fuse = { 0x00000001, gm107_fuse_new }, 2550 2550 .gpio = { 0x00000001, gk104_gpio_new }, 2551 2551 .gsp = { 0x00000001, gv100_gsp_new },
+13 -1
drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c
··· 48 48 img += 4; 49 49 len -= 4; 50 50 } 51 + 52 + /* Sigh. Tegra PMU FW's init message... */ 53 + if (len) { 54 + u32 data = nvkm_falcon_rd32(falcon, 0x1c4 + (port * 8)); 55 + 56 + while (len--) { 57 + *(u8 *)img++ = data & 0xff; 58 + data >>= 8; 59 + } 60 + } 51 61 } 52 62 53 63 static void ··· 74 64 img += 4; 75 65 len -= 4; 76 66 } 67 + 68 + WARN_ON(len); 77 69 } 78 70 79 71 static void ··· 86 74 87 75 const struct nvkm_falcon_func_pio 88 76 gm200_flcn_dmem_pio = { 89 - .min = 4, 77 + .min = 1, 90 78 .max = 0x100, 91 79 .wr_init = gm200_flcn_pio_dmem_wr_init, 92 80 .wr = gm200_flcn_pio_dmem_wr,
+23
drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c
··· 65 65 return ret; 66 66 } 67 67 68 + static int 69 + tu102_devinit_wait(struct nvkm_device *device) 70 + { 71 + unsigned timeout = 50 + 2000; 72 + 73 + do { 74 + if (nvkm_rd32(device, 0x118128) & 0x00000001) { 75 + if ((nvkm_rd32(device, 0x118234) & 0x000000ff) == 0xff) 76 + return 0; 77 + } 78 + 79 + usleep_range(1000, 2000); 80 + } while (timeout--); 81 + 82 + return -ETIMEDOUT; 83 + } 84 + 68 85 int 69 86 tu102_devinit_post(struct nvkm_devinit *base, bool post) 70 87 { 71 88 struct nv50_devinit *init = nv50_devinit(base); 89 + int ret; 90 + 91 + ret = tu102_devinit_wait(init->base.subdev.device); 92 + if (ret) 93 + return ret; 94 + 72 95 gm200_devinit_preos(init, post); 73 96 return 0; 74 97 }
+1
drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild
··· 32 32 nvkm-y += nvkm/subdev/fb/gp102.o 33 33 nvkm-y += nvkm/subdev/fb/gp10b.o 34 34 nvkm-y += nvkm/subdev/fb/gv100.o 35 + nvkm-y += nvkm/subdev/fb/tu102.o 35 36 nvkm-y += nvkm/subdev/fb/ga100.o 36 37 nvkm-y += nvkm/subdev/fb/ga102.o 37 38
+1 -7
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c
··· 40 40 return ret; 41 41 } 42 42 43 - static bool 44 - ga102_fb_vpr_scrub_required(struct nvkm_fb *fb) 45 - { 46 - return (nvkm_rd32(fb->subdev.device, 0x1fa80c) & 0x00000010) != 0; 47 - } 48 - 49 43 static const struct nvkm_fb_func 50 44 ga102_fb = { 51 45 .dtor = gf100_fb_dtor, ··· 50 56 .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, 51 57 .ram_new = ga102_ram_new, 52 58 .default_bigpage = 16, 53 - .vpr.scrub_required = ga102_fb_vpr_scrub_required, 59 + .vpr.scrub_required = tu102_fb_vpr_scrub_required, 54 60 .vpr.scrub = ga102_fb_vpr_scrub, 55 61 }; 56 62
-5
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c
··· 49 49 } 50 50 51 51 MODULE_FIRMWARE("nvidia/gv100/nvdec/scrubber.bin"); 52 - MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin"); 53 - MODULE_FIRMWARE("nvidia/tu104/nvdec/scrubber.bin"); 54 - MODULE_FIRMWARE("nvidia/tu106/nvdec/scrubber.bin"); 55 - MODULE_FIRMWARE("nvidia/tu116/nvdec/scrubber.bin"); 56 - MODULE_FIRMWARE("nvidia/tu117/nvdec/scrubber.bin");
+2
drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h
··· 89 89 int gp102_fb_vpr_scrub(struct nvkm_fb *); 90 90 91 91 int gv100_fb_init_page(struct nvkm_fb *); 92 + 93 + bool tu102_fb_vpr_scrub_required(struct nvkm_fb *); 92 94 #endif
+55
drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c
··· 1 + /* 2 + * Copyright 2018 Red Hat Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 
21 + */ 22 + #include "gf100.h" 23 + #include "ram.h" 24 + 25 + bool 26 + tu102_fb_vpr_scrub_required(struct nvkm_fb *fb) 27 + { 28 + return (nvkm_rd32(fb->subdev.device, 0x1fa80c) & 0x00000010) != 0; 29 + } 30 + 31 + static const struct nvkm_fb_func 32 + tu102_fb = { 33 + .dtor = gf100_fb_dtor, 34 + .oneinit = gf100_fb_oneinit, 35 + .init = gm200_fb_init, 36 + .init_page = gv100_fb_init_page, 37 + .init_unkn = gp100_fb_init_unkn, 38 + .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, 39 + .vpr.scrub_required = tu102_fb_vpr_scrub_required, 40 + .vpr.scrub = gp102_fb_vpr_scrub, 41 + .ram_new = gp100_ram_new, 42 + .default_bigpage = 16, 43 + }; 44 + 45 + int 46 + tu102_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb) 47 + { 48 + return gp102_fb_new_(&tu102_fb, device, type, inst, pfb); 49 + } 50 + 51 + MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin"); 52 + MODULE_FIRMWARE("nvidia/tu104/nvdec/scrubber.bin"); 53 + MODULE_FIRMWARE("nvidia/tu106/nvdec/scrubber.bin"); 54 + MODULE_FIRMWARE("nvidia/tu116/nvdec/scrubber.bin"); 55 + MODULE_FIRMWARE("nvidia/tu117/nvdec/scrubber.bin");
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c
··· 225 225 226 226 pmu->initmsg_received = false; 227 227 228 - nvkm_falcon_load_dmem(falcon, &args, addr_args, sizeof(args), 0); 228 + nvkm_falcon_pio_wr(falcon, (u8 *)&args, 0, 0, DMEM, addr_args, sizeof(args), 0, false); 229 229 nvkm_falcon_start(falcon); 230 230 return 0; 231 231 }
+12 -4
drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
··· 1193 1193 return 0; 1194 1194 } 1195 1195 1196 - static int boe_panel_unprepare(struct drm_panel *panel) 1196 + static int boe_panel_disable(struct drm_panel *panel) 1197 1197 { 1198 1198 struct boe_panel *boe = to_boe_panel(panel); 1199 1199 int ret; 1200 - 1201 - if (!boe->prepared) 1202 - return 0; 1203 1200 1204 1201 ret = boe_panel_enter_sleep_mode(boe); 1205 1202 if (ret < 0) { ··· 1205 1208 } 1206 1209 1207 1210 msleep(150); 1211 + 1212 + return 0; 1213 + } 1214 + 1215 + static int boe_panel_unprepare(struct drm_panel *panel) 1216 + { 1217 + struct boe_panel *boe = to_boe_panel(panel); 1218 + 1219 + if (!boe->prepared) 1220 + return 0; 1208 1221 1209 1222 if (boe->desc->discharge_on_disable) { 1210 1223 regulator_disable(boe->avee); ··· 1535 1528 } 1536 1529 1537 1530 static const struct drm_panel_funcs boe_panel_funcs = { 1531 + .disable = boe_panel_disable, 1538 1532 .unprepare = boe_panel_unprepare, 1539 1533 .prepare = boe_panel_prepare, 1540 1534 .enable = boe_panel_enable,
+7 -11
drivers/gpu/drm/solomon/ssd130x.c
··· 656 656 .atomic_check = drm_crtc_helper_atomic_check, 657 657 }; 658 658 659 - static void ssd130x_crtc_reset(struct drm_crtc *crtc) 660 - { 661 - struct drm_device *drm = crtc->dev; 662 - struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); 663 - 664 - ssd130x_init(ssd130x); 665 - 666 - drm_atomic_helper_crtc_reset(crtc); 667 - } 668 - 669 659 static const struct drm_crtc_funcs ssd130x_crtc_funcs = { 670 - .reset = ssd130x_crtc_reset, 660 + .reset = drm_atomic_helper_crtc_reset, 671 661 .destroy = drm_crtc_cleanup, 672 662 .set_config = drm_atomic_helper_set_config, 673 663 .page_flip = drm_atomic_helper_page_flip, ··· 675 685 ret = ssd130x_power_on(ssd130x); 676 686 if (ret) 677 687 return; 688 + 689 + ret = ssd130x_init(ssd130x); 690 + if (ret) { 691 + ssd130x_power_off(ssd130x); 692 + return; 693 + } 678 694 679 695 ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON); 680 696
+2 -1
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 3018 3018 } 3019 3019 3020 3020 vc4_hdmi->cec_adap = cec_allocate_adapter(&vc4_hdmi_cec_adap_ops, 3021 - vc4_hdmi, "vc4", 3021 + vc4_hdmi, 3022 + vc4_hdmi->variant->card_name, 3022 3023 CEC_CAP_DEFAULTS | 3023 3024 CEC_CAP_CONNECTOR_INFO, 1); 3024 3025 ret = PTR_ERR_OR_ZERO(vc4_hdmi->cec_adap);
+11 -2
drivers/hid/amd-sfh-hid/amd_sfh_client.c
··· 227 227 cl_data->num_hid_devices = amd_mp2_get_sensor_num(privdata, &cl_data->sensor_idx[0]); 228 228 if (cl_data->num_hid_devices == 0) 229 229 return -ENODEV; 230 + cl_data->is_any_sensor_enabled = false; 230 231 231 232 INIT_DELAYED_WORK(&cl_data->work, amd_sfh_work); 232 233 INIT_DELAYED_WORK(&cl_data->work_buffer, amd_sfh_work_buffer); ··· 288 287 status = amd_sfh_wait_for_response 289 288 (privdata, cl_data->sensor_idx[i], SENSOR_ENABLED); 290 289 if (status == SENSOR_ENABLED) { 290 + cl_data->is_any_sensor_enabled = true; 291 291 cl_data->sensor_sts[i] = SENSOR_ENABLED; 292 292 rc = amdtp_hid_probe(cl_data->cur_hid_dev, cl_data); 293 293 if (rc) { ··· 303 301 cl_data->sensor_sts[i]); 304 302 goto cleanup; 305 303 } 304 + } else { 305 + cl_data->sensor_sts[i] = SENSOR_DISABLED; 306 + dev_dbg(dev, "sid 0x%x (%s) status 0x%x\n", 307 + cl_data->sensor_idx[i], 308 + get_sensor_name(cl_data->sensor_idx[i]), 309 + cl_data->sensor_sts[i]); 306 310 } 307 311 dev_dbg(dev, "sid 0x%x (%s) status 0x%x\n", 308 312 cl_data->sensor_idx[i], get_sensor_name(cl_data->sensor_idx[i]), 309 313 cl_data->sensor_sts[i]); 310 314 } 311 - if (mp2_ops->discovery_status && mp2_ops->discovery_status(privdata) == 0) { 315 + if (!cl_data->is_any_sensor_enabled || 316 + (mp2_ops->discovery_status && mp2_ops->discovery_status(privdata) == 0)) { 312 317 amd_sfh_hid_client_deinit(privdata); 313 318 for (i = 0; i < cl_data->num_hid_devices; i++) { 314 319 devm_kfree(dev, cl_data->feature_report[i]); 315 320 devm_kfree(dev, in_data->input_report[i]); 316 321 devm_kfree(dev, cl_data->report_descr[i]); 317 322 } 318 - dev_warn(dev, "Failed to discover, sensors not enabled\n"); 323 + dev_warn(dev, "Failed to discover, sensors not enabled is %d\n", cl_data->is_any_sensor_enabled); 319 324 return -EOPNOTSUPP; 320 325 } 321 326 schedule_delayed_work(&cl_data->work_buffer, msecs_to_jiffies(AMD_SFH_IDLE_LOOP));
+1
drivers/hid/amd-sfh-hid/amd_sfh_hid.h
··· 32 32 struct amdtp_cl_data { 33 33 u8 init_done; 34 34 u32 cur_hid_dev; 35 + bool is_any_sensor_enabled; 35 36 u32 hid_dev_count; 36 37 u32 num_hid_devices; 37 38 struct device_info *hid_devices;
+3
drivers/hid/hid-core.c
··· 1202 1202 __u8 *end; 1203 1203 __u8 *next; 1204 1204 int ret; 1205 + int i; 1205 1206 static int (*dispatch_type[])(struct hid_parser *parser, 1206 1207 struct hid_item *item) = { 1207 1208 hid_parser_main, ··· 1253 1252 goto err; 1254 1253 } 1255 1254 device->collection_size = HID_DEFAULT_NUM_COLLECTIONS; 1255 + for (i = 0; i < HID_DEFAULT_NUM_COLLECTIONS; i++) 1256 + device->collection[i].parent_idx = -1; 1256 1257 1257 1258 ret = -EINVAL; 1258 1259 while ((next = fetch_item(start, end, &item)) != NULL) {
+14 -2
drivers/hid/hid-elecom.c
··· 12 12 * Copyright (c) 2017 Alex Manoussakis <amanou@gnu.org> 13 13 * Copyright (c) 2017 Tomasz Kramkowski <tk@the-tk.com> 14 14 * Copyright (c) 2020 YOSHIOKA Takuma <lo48576@hard-wi.red> 15 + * Copyright (c) 2022 Takahiro Fujii <fujii@xaxxi.net> 15 16 */ 16 17 17 18 /* ··· 90 89 case USB_DEVICE_ID_ELECOM_M_DT1URBK: 91 90 case USB_DEVICE_ID_ELECOM_M_DT1DRBK: 92 91 case USB_DEVICE_ID_ELECOM_M_HT1URBK: 93 - case USB_DEVICE_ID_ELECOM_M_HT1DRBK: 92 + case USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D: 94 93 /* 95 94 * Report descriptor format: 96 95 * 12: button bit count ··· 99 98 * 20: button usage maximum 100 99 */ 101 100 mouse_button_fixup(hdev, rdesc, *rsize, 12, 30, 14, 20, 8); 101 + break; 102 + case USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C: 103 + /* 104 + * Report descriptor format: 105 + * 22: button bit count 106 + * 30: padding bit count 107 + * 24: button report size 108 + * 16: button usage maximum 109 + */ 110 + mouse_button_fixup(hdev, rdesc, *rsize, 22, 30, 24, 16, 8); 102 111 break; 103 112 } 104 113 return rdesc; ··· 123 112 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, 124 113 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, 125 114 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) }, 126 - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK) }, 115 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) }, 116 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) }, 127 117 { } 128 118 }; 129 119 MODULE_DEVICE_TABLE(hid, elecom_devices);
+4 -1
drivers/hid/hid-ids.h
··· 413 413 #define I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100 0x29CF 414 414 #define I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV 0x2CF9 415 415 #define I2C_DEVICE_ID_HP_SPECTRE_X360_15 0x2817 416 + #define I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG 0x29DF 417 + #define I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN 0x2BC8 416 418 #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN 0x2544 417 419 #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706 418 420 #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A ··· 430 428 #define USB_DEVICE_ID_ELECOM_M_DT1URBK 0x00fe 431 429 #define USB_DEVICE_ID_ELECOM_M_DT1DRBK 0x00ff 432 430 #define USB_DEVICE_ID_ELECOM_M_HT1URBK 0x010c 433 - #define USB_DEVICE_ID_ELECOM_M_HT1DRBK 0x010d 431 + #define USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D 0x010d 432 + #define USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C 0x011c 434 433 435 434 #define USB_VENDOR_ID_DREAM_CHEEKY 0x1d34 436 435 #define USB_DEVICE_ID_DREAM_CHEEKY_WN 0x0004
+4
drivers/hid/hid-input.c
··· 370 370 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 371 371 USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD), 372 372 HID_BATTERY_QUIRK_IGNORE }, 373 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN), 374 + HID_BATTERY_QUIRK_IGNORE }, 373 375 { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN), 374 376 HID_BATTERY_QUIRK_IGNORE }, 375 377 { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN), ··· 385 383 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV), 386 384 HID_BATTERY_QUIRK_IGNORE }, 387 385 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15), 386 + HID_BATTERY_QUIRK_IGNORE }, 387 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG), 388 388 HID_BATTERY_QUIRK_IGNORE }, 389 389 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN), 390 390 HID_BATTERY_QUIRK_IGNORE },
+2 -1
drivers/hid/hid-logitech-hidpp.c
··· 3978 3978 } 3979 3979 3980 3980 hidpp_initialize_battery(hidpp); 3981 - hidpp_initialize_hires_scroll(hidpp); 3981 + if (!hid_is_usb(hidpp->hid_dev)) 3982 + hidpp_initialize_hires_scroll(hidpp); 3982 3983 3983 3984 /* forward current battery state */ 3984 3985 if (hidpp->capabilities & HIDPP_CAPABILITY_HIDPP10_BATTERY) {
+2 -1
drivers/hid/hid-quirks.c
··· 393 393 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, 394 394 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, 395 395 { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) }, 396 - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK) }, 396 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) }, 397 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) }, 397 398 #endif 398 399 #if IS_ENABLED(CONFIG_HID_ELO) 399 400 { HID_USB_DEVICE(USB_VENDOR_ID_ELO, 0x0009) },
+1 -1
drivers/hv/hv_balloon.c
··· 1963 1963 1964 1964 static void hv_balloon_debugfs_exit(struct hv_dynmem_device *b) 1965 1965 { 1966 - debugfs_remove(debugfs_lookup("hv-balloon", NULL)); 1966 + debugfs_lookup_and_remove("hv-balloon", NULL); 1967 1967 } 1968 1968 1969 1969 #else
+1
drivers/iio/accel/hid-sensor-accel-3d.c
··· 280 280 hid_sensor_convert_timestamp( 281 281 &accel_state->common_attributes, 282 282 *(int64_t *)raw_data); 283 + ret = 0; 283 284 break; 284 285 default: 285 286 break;
+3 -1
drivers/iio/adc/berlin2-adc.c
··· 298 298 int ret; 299 299 300 300 indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv)); 301 - if (!indio_dev) 301 + if (!indio_dev) { 302 + of_node_put(parent_np); 302 303 return -ENOMEM; 304 + } 303 305 304 306 priv = iio_priv(indio_dev); 305 307
+9 -2
drivers/iio/adc/imx8qxp-adc.c
··· 86 86 87 87 #define IMX8QXP_ADC_TIMEOUT msecs_to_jiffies(100) 88 88 89 + #define IMX8QXP_ADC_MAX_FIFO_SIZE 16 90 + 89 91 struct imx8qxp_adc { 90 92 struct device *dev; 91 93 void __iomem *regs; ··· 97 95 /* Serialise ADC channel reads */ 98 96 struct mutex lock; 99 97 struct completion completion; 98 + u32 fifo[IMX8QXP_ADC_MAX_FIFO_SIZE]; 100 99 }; 101 100 102 101 #define IMX8QXP_ADC_CHAN(_idx) { \ ··· 241 238 return ret; 242 239 } 243 240 244 - *val = FIELD_GET(IMX8QXP_ADC_RESFIFO_VAL_MASK, 245 - readl(adc->regs + IMX8QXP_ADR_ADC_RESFIFO)); 241 + *val = adc->fifo[0]; 246 242 247 243 mutex_unlock(&adc->lock); 248 244 return IIO_VAL_INT; ··· 267 265 { 268 266 struct imx8qxp_adc *adc = dev_id; 269 267 u32 fifo_count; 268 + int i; 270 269 271 270 fifo_count = FIELD_GET(IMX8QXP_ADC_FCTRL_FCOUNT_MASK, 272 271 readl(adc->regs + IMX8QXP_ADR_ADC_FCTRL)); 272 + 273 + for (i = 0; i < fifo_count; i++) 274 + adc->fifo[i] = FIELD_GET(IMX8QXP_ADC_RESFIFO_VAL_MASK, 275 + readl_relaxed(adc->regs + IMX8QXP_ADR_ADC_RESFIFO)); 273 276 274 277 if (fifo_count) 275 278 complete(&adc->completion);
+1
drivers/iio/adc/stm32-dfsdm-adc.c
··· 1520 1520 }, 1521 1521 {} 1522 1522 }; 1523 + MODULE_DEVICE_TABLE(of, stm32_dfsdm_adc_match); 1523 1524 1524 1525 static int stm32_dfsdm_adc_probe(struct platform_device *pdev) 1525 1526 {
+32
drivers/iio/adc/twl6030-gpadc.c
··· 57 57 #define TWL6030_GPADCS BIT(1) 58 58 #define TWL6030_GPADCR BIT(0) 59 59 60 + #define USB_VBUS_CTRL_SET 0x04 61 + #define USB_ID_CTRL_SET 0x06 62 + 63 + #define TWL6030_MISC1 0xE4 64 + #define VBUS_MEAS 0x01 65 + #define ID_MEAS 0x01 66 + 67 + #define VAC_MEAS 0x04 68 + #define VBAT_MEAS 0x02 69 + #define BB_MEAS 0x01 70 + 71 + 60 72 /** 61 73 * struct twl6030_chnl_calib - channel calibration 62 74 * @gain: slope coefficient for ideal curve ··· 936 924 TWL6030_REG_TOGGLE1); 937 925 if (ret < 0) { 938 926 dev_err(dev, "failed to enable GPADC module\n"); 927 + return ret; 928 + } 929 + 930 + ret = twl_i2c_write_u8(TWL_MODULE_USB, VBUS_MEAS, USB_VBUS_CTRL_SET); 931 + if (ret < 0) { 932 + dev_err(dev, "failed to wire up inputs\n"); 933 + return ret; 934 + } 935 + 936 + ret = twl_i2c_write_u8(TWL_MODULE_USB, ID_MEAS, USB_ID_CTRL_SET); 937 + if (ret < 0) { 938 + dev_err(dev, "failed to wire up inputs\n"); 939 + return ret; 940 + } 941 + 942 + ret = twl_i2c_write_u8(TWL6030_MODULE_ID0, 943 + VBAT_MEAS | BB_MEAS | VAC_MEAS, 944 + TWL6030_MISC1); 945 + if (ret < 0) { 946 + dev_err(dev, "failed to wire up inputs\n"); 939 947 return ret; 940 948 } 941 949
+1 -1
drivers/iio/adc/xilinx-ams.c
··· 1329 1329 1330 1330 dev_channels = devm_krealloc(dev, ams_channels, dev_size, GFP_KERNEL); 1331 1331 if (!dev_channels) 1332 - ret = -ENOMEM; 1332 + return -ENOMEM; 1333 1333 1334 1334 indio_dev->channels = dev_channels; 1335 1335 indio_dev->num_channels = num_channels;
+1
drivers/iio/gyro/hid-sensor-gyro-3d.c
··· 231 231 gyro_state->timestamp = 232 232 hid_sensor_convert_timestamp(&gyro_state->common_attributes, 233 233 *(s64 *)raw_data); 234 + ret = 0; 234 235 break; 235 236 default: 236 237 break;
+89 -22
drivers/iio/imu/fxos8700_core.c
··· 10 10 #include <linux/regmap.h> 11 11 #include <linux/acpi.h> 12 12 #include <linux/bitops.h> 13 + #include <linux/bitfield.h> 13 14 14 15 #include <linux/iio/iio.h> 15 16 #include <linux/iio/sysfs.h> ··· 145 144 #define FXOS8700_NVM_DATA_BNK0 0xa7 146 145 147 146 /* Bit definitions for FXOS8700_CTRL_REG1 */ 148 - #define FXOS8700_CTRL_ODR_MSK 0x38 149 147 #define FXOS8700_CTRL_ODR_MAX 0x00 150 - #define FXOS8700_CTRL_ODR_MIN GENMASK(4, 3) 148 + #define FXOS8700_CTRL_ODR_MSK GENMASK(5, 3) 151 149 152 150 /* Bit definitions for FXOS8700_M_CTRL_REG1 */ 153 151 #define FXOS8700_HMS_MASK GENMASK(1, 0) ··· 320 320 switch (iio_type) { 321 321 case IIO_ACCEL: 322 322 return FXOS8700_ACCEL; 323 - case IIO_ANGL_VEL: 323 + case IIO_MAGN: 324 324 return FXOS8700_MAGN; 325 325 default: 326 326 return -EINVAL; ··· 345 345 static int fxos8700_set_scale(struct fxos8700_data *data, 346 346 enum fxos8700_sensor t, int uscale) 347 347 { 348 - int i; 348 + int i, ret, val; 349 + bool active_mode; 349 350 static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale); 350 351 struct device *dev = regmap_get_device(data->regmap); 351 352 352 353 if (t == FXOS8700_MAGN) { 353 - dev_err(dev, "Magnetometer scale is locked at 1200uT\n"); 354 + dev_err(dev, "Magnetometer scale is locked at 0.001Gs\n"); 354 355 return -EINVAL; 356 + } 357 + 358 + /* 359 + * When the device is in active mode, setting an ACCEL 360 + * full-scale range (2g/4g/8g) in FXOS8700_XYZ_DATA_CFG fails. 361 + * This does not align with the datasheet, but it is fxos8700 362 + * chip behavior. Set the device to standby mode before setting 363 + * an ACCEL full-scale range. 
364 + */ 365 + ret = regmap_read(data->regmap, FXOS8700_CTRL_REG1, &val); 366 + if (ret) 367 + return ret; 368 + 369 + active_mode = val & FXOS8700_ACTIVE; 370 + if (active_mode) { 371 + ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1, 372 + val & ~FXOS8700_ACTIVE); 373 + if (ret) 374 + return ret; 355 375 } 356 376 357 377 for (i = 0; i < scale_num; i++) ··· 381 361 if (i == scale_num) 382 362 return -EINVAL; 383 363 384 - return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, 364 + ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, 385 365 fxos8700_accel_scale[i].bits); 366 + if (ret) 367 + return ret; 368 + return regmap_write(data->regmap, FXOS8700_CTRL_REG1, 369 + active_mode); 386 370 } 387 371 388 372 static int fxos8700_get_scale(struct fxos8700_data *data, ··· 396 372 static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale); 397 373 398 374 if (t == FXOS8700_MAGN) { 399 - *uscale = 1200; /* Magnetometer is locked at 1200uT */ 375 + *uscale = 1000; /* Magnetometer is locked at 0.001Gs */ 400 376 return 0; 401 377 } 402 378 ··· 418 394 int axis, int *val) 419 395 { 420 396 u8 base, reg; 397 + s16 tmp; 421 398 int ret; 422 - enum fxos8700_sensor type = fxos8700_to_sensor(chan_type); 423 399 424 - base = type ? FXOS8700_OUT_X_MSB : FXOS8700_M_OUT_X_MSB; 400 + /* 401 + * The register base address differs with the channel type. 402 + * This bug went unnoticed before because selecting it via an 403 + * enum was hard to read. Use a switch statement instead. 
404 + */ 405 + switch (chan_type) { 406 + case IIO_ACCEL: 407 + base = FXOS8700_OUT_X_MSB; 408 + break; 409 + case IIO_MAGN: 410 + base = FXOS8700_M_OUT_X_MSB; 411 + break; 412 + default: 413 + return -EINVAL; 414 + } 425 415 426 416 /* Block read 6 bytes of device output registers to avoid data loss */ 427 417 ret = regmap_bulk_read(data->regmap, base, data->buf, 428 - FXOS8700_DATA_BUF_SIZE); 418 + sizeof(data->buf)); 429 419 if (ret) 430 420 return ret; 431 421 432 422 /* Convert axis to buffer index */ 433 423 reg = axis - IIO_MOD_X; 434 424 425 + /* 426 + * Convert to native endianness. The accel data and magn data 427 + * are signed, so a forced type conversion is needed. 428 + */ 429 + tmp = be16_to_cpu(data->buf[reg]); 430 + 431 + /* 432 + * ACCEL output data registers contain the X-axis, Y-axis, and Z-axis 433 + * 14-bit left-justified sample data, and MAGN output data registers 434 + * contain the X-axis, Y-axis, and Z-axis 16-bit sample data. Apply 435 + * a signed 2-bit right shift to the raw data read back from the ACCEL 436 + * output data registers, and keep the MAGN data as-is. The 437 + * value is then sign-extended to 32 bits. 
438 + */ 439 + switch (chan_type) { 440 + case IIO_ACCEL: 441 + tmp = tmp >> 2; 442 + break; 443 + case IIO_MAGN: 444 + /* Nothing to do */ 445 + break; 446 + default: 447 + return -EINVAL; 448 + } 449 + 435 450 /* Convert to native endianness */ 436 - *val = sign_extend32(be16_to_cpu(data->buf[reg]), 15); 451 + *val = sign_extend32(tmp, 15); 437 452 438 453 return 0; 439 454 } ··· 508 445 if (i >= odr_num) 509 446 return -EINVAL; 510 447 511 - return regmap_update_bits(data->regmap, 512 - FXOS8700_CTRL_REG1, 513 - FXOS8700_CTRL_ODR_MSK + FXOS8700_ACTIVE, 514 - fxos8700_odr[i].bits << 3 | active_mode); 448 + val &= ~FXOS8700_CTRL_ODR_MSK; 449 + val |= FIELD_PREP(FXOS8700_CTRL_ODR_MSK, fxos8700_odr[i].bits) | FXOS8700_ACTIVE; 450 + return regmap_write(data->regmap, FXOS8700_CTRL_REG1, val); 515 451 } 516 452 517 453 static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t, ··· 523 461 if (ret) 524 462 return ret; 525 463 526 - val &= FXOS8700_CTRL_ODR_MSK; 464 + val = FIELD_GET(FXOS8700_CTRL_ODR_MSK, val); 527 465 528 466 for (i = 0; i < odr_num; i++) 529 467 if (val == fxos8700_odr[i].bits) ··· 588 526 static IIO_CONST_ATTR(in_magn_sampling_frequency_available, 589 527 "1.5625 6.25 12.5 50 100 200 400 800"); 590 528 static IIO_CONST_ATTR(in_accel_scale_available, "0.000244 0.000488 0.000976"); 591 - static IIO_CONST_ATTR(in_magn_scale_available, "0.000001200"); 529 + static IIO_CONST_ATTR(in_magn_scale_available, "0.001000"); 592 530 593 531 static struct attribute *fxos8700_attrs[] = { 594 532 &iio_const_attr_in_accel_sampling_frequency_available.dev_attr.attr, ··· 654 592 if (ret) 655 593 return ret; 656 594 657 - /* Max ODR (800Hz individual or 400Hz hybrid), active mode */ 658 - ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1, 659 - FXOS8700_CTRL_ODR_MAX | FXOS8700_ACTIVE); 595 + /* 596 + * Set max full-scale range (+/-8G) for ACCEL sensor in chip 597 + * initialization then activate the device. 
598 + */ 599 + ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G); 660 600 if (ret) 661 601 return ret; 662 602 663 - /* Set for max full-scale range (+/-8G) */ 664 - return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G); 603 + /* Max ODR (800Hz individual or 400Hz hybrid), active mode */ 604 + return regmap_update_bits(data->regmap, FXOS8700_CTRL_REG1, 605 + FXOS8700_CTRL_ODR_MSK | FXOS8700_ACTIVE, 606 + FIELD_PREP(FXOS8700_CTRL_ODR_MSK, FXOS8700_CTRL_ODR_MAX) | 607 + FXOS8700_ACTIVE); 665 608 } 666 609 667 610 static void fxos8700_chip_uninit(void *data)
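The fxos8700 rework above replaces open-coded shifts and masks with GENMASK()/FIELD_PREP()/FIELD_GET() over the ODR field in CTRL_REG1. A rough userspace sketch of what those helpers do (the real macros live in `<linux/bitfield.h>` and add compile-time mask validation; these stand-ins only cover 32-bit masks):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's GENMASK()/FIELD_PREP()/FIELD_GET() */
#define GENMASK(h, l) ((~0u << (l)) & (~0u >> (31 - (h))))

#define FXOS8700_CTRL_ODR_MSK GENMASK(5, 3)  /* replaces the open-coded 0x38 */
#define FXOS8700_ACTIVE 0x01

static inline uint32_t field_prep(uint32_t mask, uint32_t val)
{
	/* shift the value up to the field's position within the mask */
	return (val << __builtin_ctz(mask)) & mask;
}

static inline uint32_t field_get(uint32_t mask, uint32_t reg)
{
	/* extract the field and right-justify it */
	return (reg & mask) >> __builtin_ctz(mask);
}
```

With these, the driver's write path becomes `val &= ~FXOS8700_CTRL_ODR_MSK; val |= field_prep(FXOS8700_CTRL_ODR_MSK, odr_bits) | FXOS8700_ACTIVE;`, and the read path right-justifies the field with `field_get()` before comparing it against the table entries.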
+1
drivers/iio/imu/st_lsm6dsx/Kconfig
··· 4 4 tristate "ST_LSM6DSx driver for STM 6-axis IMU MEMS sensors" 5 5 depends on (I2C || SPI || I3C) 6 6 select IIO_BUFFER 7 + select IIO_TRIGGERED_BUFFER 7 8 select IIO_KFIFO_BUF 8 9 select IIO_ST_LSM6DSX_I2C if (I2C) 9 10 select IIO_ST_LSM6DSX_SPI if (SPI_MASTER)
+5 -4
drivers/iio/light/cm32181.c
··· 440 440 if (!indio_dev) 441 441 return -ENOMEM; 442 442 443 + i2c_set_clientdata(client, indio_dev); 444 + 443 445 /* 444 446 * Some ACPI systems list 2 I2C resources for the CM3218 sensor, the 445 447 * SMBus Alert Response Address (ARA, 0x0c) and the actual I2C address. ··· 461 459 if (IS_ERR(client)) 462 460 return PTR_ERR(client); 463 461 } 464 - 465 - i2c_set_clientdata(client, indio_dev); 466 462 467 463 cm32181 = iio_priv(indio_dev); 468 464 cm32181->client = client; ··· 490 490 491 491 static int cm32181_suspend(struct device *dev) 492 492 { 493 - struct i2c_client *client = to_i2c_client(dev); 493 + struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev)); 494 + struct i2c_client *client = cm32181->client; 494 495 495 496 return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD, 496 497 CM32181_CMD_ALS_DISABLE); ··· 499 498 500 499 static int cm32181_resume(struct device *dev) 501 500 { 502 - struct i2c_client *client = to_i2c_client(dev); 503 501 struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev)); 502 + struct i2c_client *client = cm32181->client; 504 503 505 504 return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD, 506 505 cm32181->conf_regs[CM32181_REG_ADDR_CMD]);
+1 -1
drivers/net/bonding/bond_debugfs.c
··· 76 76 77 77 d = debugfs_rename(bonding_debug_root, bond->debug_dir, 78 78 bonding_debug_root, bond->dev->name); 79 - if (d) { 79 + if (!IS_ERR(d)) { 80 80 bond->debug_dir = d; 81 81 } else { 82 82 netdev_warn(bond->dev, "failed to reregister, so just unregister old one\n");
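The bonding fix above works because debugfs_rename() reports failure with an ERR_PTR()-encoded pointer, which is non-NULL, so the old `if (d)` check treated errors as success. A minimal userspace sketch of the ERR_PTR()/IS_ERR() convention, simplified from `include/linux/err.h`:

```c
#include <assert.h>

/* Userspace model of the kernel's error-pointer encoding: errors live
 * in the top MAX_ERRNO addresses of the pointer space, so an ERR_PTR()
 * value is non-NULL and a plain NULL check mistakes it for success. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

For example, `ERR_PTR(-12)` (an encoded -ENOMEM) compares unequal to NULL but satisfies `IS_ERR()`, which is why the hunk tests `!IS_ERR(d)` rather than `d`.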
+19 -7
drivers/net/dsa/mt7530.c
··· 1302 1302 if (!priv->ports[port].pvid) 1303 1303 mt7530_rmw(priv, MT7530_PVC_P(port), ACC_FRM_MASK, 1304 1304 MT7530_VLAN_ACC_TAGGED); 1305 - } 1306 1305 1307 - /* Set the port as a user port which is to be able to recognize VID 1308 - * from incoming packets before fetching entry within the VLAN table. 1309 - */ 1310 - mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK, 1311 - VLAN_ATTR(MT7530_VLAN_USER) | 1312 - PVC_EG_TAG(MT7530_VLAN_EG_DISABLED)); 1306 + /* Set the port as a user port which is to be able to recognize 1307 + * VID from incoming packets before fetching entry within the 1308 + * VLAN table. 1309 + */ 1310 + mt7530_rmw(priv, MT7530_PVC_P(port), 1311 + VLAN_ATTR_MASK | PVC_EG_TAG_MASK, 1312 + VLAN_ATTR(MT7530_VLAN_USER) | 1313 + PVC_EG_TAG(MT7530_VLAN_EG_DISABLED)); 1314 + } else { 1315 + /* Also set CPU ports to the "user" VLAN port attribute, to 1316 + * allow VLAN classification, but keep the EG_TAG attribute as 1317 + * "consistent" (i.o.w. don't change its value) for packets 1318 + * received by the switch from the CPU, so that tagged packets 1319 + * are forwarded to user ports as tagged, and untagged as 1320 + * untagged. 1321 + */ 1322 + mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK, 1323 + VLAN_ATTR(MT7530_VLAN_USER)); 1324 + } 1313 1325 } 1314 1326 1315 1327 static void
+16 -15
drivers/net/ethernet/cadence/macb_main.c
··· 4684 4684 if (ret) 4685 4685 return dev_err_probe(&pdev->dev, ret, 4686 4686 "failed to init SGMII PHY\n"); 4687 - } 4688 4687 4689 - ret = zynqmp_pm_is_function_supported(PM_IOCTL, IOCTL_SET_GEM_CONFIG); 4690 - if (!ret) { 4691 - u32 pm_info[2]; 4688 + ret = zynqmp_pm_is_function_supported(PM_IOCTL, IOCTL_SET_GEM_CONFIG); 4689 + if (!ret) { 4690 + u32 pm_info[2]; 4692 4691 4693 - ret = of_property_read_u32_array(pdev->dev.of_node, "power-domains", 4694 - pm_info, ARRAY_SIZE(pm_info)); 4695 - if (ret) { 4696 - dev_err(&pdev->dev, "Failed to read power management information\n"); 4697 - goto err_out_phy_exit; 4692 + ret = of_property_read_u32_array(pdev->dev.of_node, "power-domains", 4693 + pm_info, ARRAY_SIZE(pm_info)); 4694 + if (ret) { 4695 + dev_err(&pdev->dev, "Failed to read power management information\n"); 4696 + goto err_out_phy_exit; 4697 + } 4698 + ret = zynqmp_pm_set_gem_config(pm_info[1], GEM_CONFIG_FIXED, 0); 4699 + if (ret) 4700 + goto err_out_phy_exit; 4701 + 4702 + ret = zynqmp_pm_set_gem_config(pm_info[1], GEM_CONFIG_SGMII_MODE, 1); 4703 + if (ret) 4704 + goto err_out_phy_exit; 4698 4705 } 4699 - ret = zynqmp_pm_set_gem_config(pm_info[1], GEM_CONFIG_FIXED, 0); 4700 - if (ret) 4701 - goto err_out_phy_exit; 4702 4706 4703 - ret = zynqmp_pm_set_gem_config(pm_info[1], GEM_CONFIG_SGMII_MODE, 1); 4704 - if (ret) 4705 - goto err_out_phy_exit; 4706 4707 } 4707 4708 4708 4709 /* Fully reset controller at hardware level if mapped in device tree */
+4 -5
drivers/net/ethernet/intel/ice/ice_common.c
··· 5534 5534 * returned by the firmware is a 16 bit * value, but is indexed 5535 5535 * by [fls(speed) - 1] 5536 5536 */ 5537 - static const u32 ice_aq_to_link_speed[15] = { 5537 + static const u32 ice_aq_to_link_speed[] = { 5538 5538 SPEED_10, /* BIT(0) */ 5539 5539 SPEED_100, 5540 5540 SPEED_1000, ··· 5546 5546 SPEED_40000, 5547 5547 SPEED_50000, 5548 5548 SPEED_100000, /* BIT(10) */ 5549 - 0, 5550 - 0, 5551 - 0, 5552 - 0 /* BIT(14) */ 5553 5549 }; 5554 5550 5555 5551 /** ··· 5556 5560 */ 5557 5561 u32 ice_get_link_speed(u16 index) 5558 5562 { 5563 + if (index >= ARRAY_SIZE(ice_aq_to_link_speed)) 5564 + return 0; 5565 + 5559 5566 return ice_aq_to_link_speed[index]; 5560 5567 }
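The ice_get_link_speed() change above drops the zero-filled tail slots and instead bounds-checks the index against the real table size. A standalone sketch of the pattern (the speed values mirror the driver's table, but this compiles in userspace):

```c
#include <assert.h>
#include <stdint.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Same shape as the driver's ice_aq_to_link_speed[]: indexed by
 * fls(speed bit) - 1, with no zero padding at the tail. */
static const uint32_t aq_to_link_speed[] = {
	10,     /* BIT(0) */
	100,
	1000,
	2500,
	5000,
	10000,
	20000,
	25000,
	40000,
	50000,
	100000, /* BIT(10) */
};

static uint32_t get_link_speed(uint16_t index)
{
	/* indexes past the table (formerly the zero-filled slots up to
	 * BIT(14)) now fail the bounds check instead of being stored */
	if (index >= ARRAY_SIZE(aq_to_link_speed))
		return 0;
	return aq_to_link_speed[index];
}
```

The behavior is unchanged for the padded range, but unknown future speed bits beyond BIT(14) can no longer read past the end of the array.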
+1 -1
drivers/net/ethernet/intel/ice/ice_main.c
··· 5724 5724 pr_info("%s\n", ice_driver_string); 5725 5725 pr_info("%s\n", ice_copyright); 5726 5726 5727 - ice_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME); 5727 + ice_wq = alloc_workqueue("%s", 0, 0, KBUILD_MODNAME); 5728 5728 if (!ice_wq) { 5729 5729 pr_err("Failed to create workqueue\n"); 5730 5730 return -ENOMEM;
+1 -1
drivers/net/ethernet/intel/ice/ice_switch.c
··· 5420 5420 */ 5421 5421 status = ice_add_special_words(rinfo, lkup_exts, ice_is_dvm_ena(hw)); 5422 5422 if (status) 5423 - goto err_free_lkup_exts; 5423 + goto err_unroll; 5424 5424 5425 5425 /* Group match words into recipes using preferred recipe grouping 5426 5426 * criteria.
+1 -1
drivers/net/ethernet/intel/ice/ice_tc_lib.c
··· 1688 1688 struct ice_vsi *ch_vsi = NULL; 1689 1689 u16 queue = act->rx_queue; 1690 1690 1691 - if (queue > vsi->num_rxq) { 1691 + if (queue >= vsi->num_rxq) { 1692 1692 NL_SET_ERR_MSG_MOD(fltr->extack, 1693 1693 "Unable to add filter because specified queue is invalid"); 1694 1694 return -EINVAL;
+8 -13
drivers/net/ethernet/intel/ice/ice_vf_mbx.c
··· 39 39 return ice_sq_send_cmd(hw, &hw->mailboxq, &desc, msg, msglen, cd); 40 40 } 41 41 42 - static const u32 ice_legacy_aq_to_vc_speed[15] = { 42 + static const u32 ice_legacy_aq_to_vc_speed[] = { 43 43 VIRTCHNL_LINK_SPEED_100MB, /* BIT(0) */ 44 44 VIRTCHNL_LINK_SPEED_100MB, 45 45 VIRTCHNL_LINK_SPEED_1GB, ··· 51 51 VIRTCHNL_LINK_SPEED_40GB, 52 52 VIRTCHNL_LINK_SPEED_40GB, 53 53 VIRTCHNL_LINK_SPEED_40GB, 54 - VIRTCHNL_LINK_SPEED_UNKNOWN, 55 - VIRTCHNL_LINK_SPEED_UNKNOWN, 56 - VIRTCHNL_LINK_SPEED_UNKNOWN, 57 - VIRTCHNL_LINK_SPEED_UNKNOWN /* BIT(14) */ 58 54 }; 59 55 60 56 /** ··· 67 71 */ 68 72 u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed) 69 73 { 70 - u32 speed; 74 + /* convert a BIT() value into an array index */ 75 + u32 index = fls(link_speed) - 1; 71 76 72 - if (adv_link_support) { 73 - /* convert a BIT() value into an array index */ 74 - speed = ice_get_link_speed(fls(link_speed) - 1); 75 - } else { 77 + if (adv_link_support) 78 + return ice_get_link_speed(index); 79 + else if (index < ARRAY_SIZE(ice_legacy_aq_to_vc_speed)) 76 80 /* Virtchnl speeds are not defined for every speed supported in 77 81 * the hardware. To maintain compatibility with older AVF 78 82 * drivers, while reporting the speed the new speed values are 79 83 * resolved to the closest known virtchnl speeds 80 84 */ 81 - speed = ice_legacy_aq_to_vc_speed[fls(link_speed) - 1]; 82 - } 85 + return ice_legacy_aq_to_vc_speed[index]; 83 86 84 - return speed; 87 + return VIRTCHNL_LINK_SPEED_UNKNOWN; 85 88 } 86 89 87 90 /* The mailbox overflow detection algorithm helps to check if there
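ice_conv_link_speed_to_virtchnl() above turns a BIT()-style link-speed flag into a table index with `fls(link_speed) - 1`, now bounds-checked for the legacy table. A hedged userspace sketch of that conversion, using `__builtin_clz()` in place of the kernel's fls() and an illustrative (not the driver's) speed table:

```c
#include <assert.h>
#include <stdint.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
#define BIT(n) (1u << (n))

/* Stand-in for the kernel's fls(): 1-based position of the most
 * significant set bit, 0 when no bit is set. */
static int fls_sketch(uint32_t x)
{
	return x ? 32 - __builtin_clz(x) : 0;
}

/* Illustrative subset only; the real ice_legacy_aq_to_vc_speed[]
 * maps each bit to the closest defined virtchnl speed. */
static const uint32_t legacy_speed[] = { 100, 100, 1000, 10000 };

/* Convert a BIT() speed flag to an array index; out-of-range bits
 * (and a zero flag) fall back to 0 ("unknown") instead of
 * overrunning the table. */
static uint32_t legacy_lookup(uint16_t link_speed)
{
	uint32_t index = fls_sketch(link_speed) - 1;

	if (index < ARRAY_SIZE(legacy_speed))
		return legacy_speed[index];
	return 0;
}
```

Note the same trick as the hunk: for `link_speed == 0` the index underflows to a huge unsigned value, so the single `>=`-style bounds check covers both the empty flag and out-of-range bits.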
+15 -1
drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c
··· 44 44 45 45 /* outer VLAN ops regardless of port VLAN config */ 46 46 vlan_ops->add_vlan = ice_vsi_add_vlan; 47 - vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; 48 47 vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; 49 48 vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; 50 49 51 50 if (ice_vf_is_port_vlan_ena(vf)) { 52 51 /* setup outer VLAN ops */ 53 52 vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan; 53 + /* all Rx traffic should be in the domain of the 54 + * assigned port VLAN, so prevent disabling Rx VLAN 55 + * filtering 56 + */ 57 + vlan_ops->dis_rx_filtering = noop_vlan; 54 58 vlan_ops->ena_rx_filtering = 55 59 ice_vsi_ena_rx_vlan_filtering; 56 60 ··· 67 63 vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; 68 64 vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; 69 65 } else { 66 + vlan_ops->dis_rx_filtering = 67 + ice_vsi_dis_rx_vlan_filtering; 68 + 70 69 if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) 71 70 vlan_ops->ena_rx_filtering = noop_vlan; 72 71 else ··· 103 96 vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan; 104 97 vlan_ops->ena_rx_filtering = 105 98 ice_vsi_ena_rx_vlan_filtering; 99 + /* all Rx traffic should be in the domain of the 100 + * assigned port VLAN, so prevent disabling Rx VLAN 101 + * filtering 102 + */ 103 + vlan_ops->dis_rx_filtering = noop_vlan; 106 104 } else { 105 + vlan_ops->dis_rx_filtering = 106 + ice_vsi_dis_rx_vlan_filtering; 107 107 if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) 108 108 vlan_ops->ena_rx_filtering = noop_vlan; 109 109 else
+23 -2
drivers/net/ethernet/intel/igc/igc_main.c
··· 2942 2942 if (tx_buffer->next_to_watch && 2943 2943 time_after(jiffies, tx_buffer->time_stamp + 2944 2944 (adapter->tx_timeout_factor * HZ)) && 2945 - !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF)) { 2945 + !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF) && 2946 + (rd32(IGC_TDH(tx_ring->reg_idx)) != 2947 + readl(tx_ring->tail))) { 2946 2948 /* detected Tx unit hang */ 2947 2949 netdev_err(tx_ring->netdev, 2948 2950 "Detected Tx Unit Hang\n" ··· 5071 5069 } 5072 5070 5073 5071 /** 5072 + * igc_tx_timeout - Respond to a Tx Hang 5073 + * @netdev: network interface device structure 5074 + * @txqueue: queue number that timed out 5075 + **/ 5076 + static void igc_tx_timeout(struct net_device *netdev, 5077 + unsigned int __always_unused txqueue) 5078 + { 5079 + struct igc_adapter *adapter = netdev_priv(netdev); 5080 + struct igc_hw *hw = &adapter->hw; 5081 + 5082 + /* Do the reset outside of interrupt context */ 5083 + adapter->tx_timeout_count++; 5084 + schedule_work(&adapter->reset_task); 5085 + wr32(IGC_EICS, 5086 + (adapter->eims_enable_mask & ~adapter->eims_other)); 5087 + } 5088 + 5089 + /** 5074 5090 * igc_get_stats64 - Get System Network Statistics 5075 5091 * @netdev: network interface device structure 5076 5092 * @stats: rtnl_link_stats64 pointer ··· 5515 5495 case SPEED_100: 5516 5496 case SPEED_1000: 5517 5497 case SPEED_2500: 5518 - adapter->tx_timeout_factor = 7; 5498 + adapter->tx_timeout_factor = 1; 5519 5499 break; 5520 5500 } 5521 5501 ··· 6367 6347 .ndo_set_rx_mode = igc_set_rx_mode, 6368 6348 .ndo_set_mac_address = igc_set_mac, 6369 6349 .ndo_change_mtu = igc_change_mtu, 6350 + .ndo_tx_timeout = igc_tx_timeout, 6370 6351 .ndo_get_stats64 = igc_get_stats64, 6371 6352 .ndo_fix_features = igc_fix_features, 6372 6353 .ndo_set_features = igc_set_features,
+19 -17
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 1621 1621 if (IS_ERR(pp)) 1622 1622 return pp; 1623 1623 1624 - err = __xdp_rxq_info_reg(xdp_q, &eth->dummy_dev, eth->rx_napi.napi_id, 1625 - id, PAGE_SIZE); 1624 + err = __xdp_rxq_info_reg(xdp_q, &eth->dummy_dev, id, 1625 + eth->rx_napi.napi_id, PAGE_SIZE); 1626 1626 if (err < 0) 1627 1627 goto err_free_pp; 1628 1628 ··· 1921 1921 1922 1922 while (done < budget) { 1923 1923 unsigned int pktlen, *rxdcsum; 1924 + bool has_hwaccel_tag = false; 1924 1925 struct net_device *netdev; 1926 + u16 vlan_proto, vlan_tci; 1925 1927 dma_addr_t dma_addr; 1926 1928 u32 hash, reason; 1927 1929 int mac = 0; ··· 2063 2061 2064 2062 if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) { 2065 2063 if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { 2066 - if (trxd.rxd3 & RX_DMA_VTAG_V2) 2067 - __vlan_hwaccel_put_tag(skb, 2068 - htons(RX_DMA_VPID(trxd.rxd4)), 2069 - RX_DMA_VID(trxd.rxd4)); 2064 + if (trxd.rxd3 & RX_DMA_VTAG_V2) { 2065 + vlan_proto = RX_DMA_VPID(trxd.rxd4); 2066 + vlan_tci = RX_DMA_VID(trxd.rxd4); 2067 + has_hwaccel_tag = true; 2068 + } 2070 2069 } else if (trxd.rxd2 & RX_DMA_VTAG) { 2071 - __vlan_hwaccel_put_tag(skb, htons(RX_DMA_VPID(trxd.rxd3)), 2072 - RX_DMA_VID(trxd.rxd3)); 2070 + vlan_proto = RX_DMA_VPID(trxd.rxd3); 2071 + vlan_tci = RX_DMA_VID(trxd.rxd3); 2072 + has_hwaccel_tag = true; 2073 2073 } 2074 2074 } 2075 2075 2076 2076 /* When using VLAN untagging in combination with DSA, the 2077 2077 * hardware treats the MTK special tag as a VLAN and untags it. 
2078 2078 */ 2079 - if (skb_vlan_tag_present(skb) && netdev_uses_dsa(netdev)) { 2080 - unsigned int port = ntohs(skb->vlan_proto) & GENMASK(2, 0); 2079 + if (has_hwaccel_tag && netdev_uses_dsa(netdev)) { 2080 + unsigned int port = vlan_proto & GENMASK(2, 0); 2081 2081 2082 2082 if (port < ARRAY_SIZE(eth->dsa_meta) && 2083 2083 eth->dsa_meta[port]) 2084 2084 skb_dst_set_noref(skb, &eth->dsa_meta[port]->dst); 2085 - 2086 - __vlan_hwaccel_clear_tag(skb); 2085 + } else if (has_hwaccel_tag) { 2086 + __vlan_hwaccel_put_tag(skb, htons(vlan_proto), vlan_tci); 2087 2087 } 2088 2088 2089 2089 skb_record_rx_queue(skb, 0); ··· 3181 3177 3182 3178 val |= config; 3183 3179 3184 - if (!i && eth->netdev[0] && netdev_uses_dsa(eth->netdev[0])) 3180 + if (eth->netdev[i] && netdev_uses_dsa(eth->netdev[i])) 3185 3181 val |= MTK_GDMA_SPECIAL_TAG; 3186 3182 3187 3183 mtk_w32(eth, val, MTK_GDMA_FWD_CFG(i)); ··· 3247 3243 struct mtk_eth *eth = mac->hw; 3248 3244 int i, err; 3249 3245 3250 - if ((mtk_uses_dsa(dev) && !eth->prog) && 3251 - !(mac->id == 1 && MTK_HAS_CAPS(eth->soc->caps, MTK_GMAC1_TRGMII))) { 3246 + if (mtk_uses_dsa(dev) && !eth->prog) { 3252 3247 for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) { 3253 3248 struct metadata_dst *md_dst = eth->dsa_meta[i]; 3254 3249 ··· 3264 3261 } 3265 3262 } else { 3266 3263 /* Hardware special tag parsing needs to be disabled if at least 3267 - * one MAC does not use DSA, or the second MAC of the MT7621 and 3268 - * MT7623 SoCs is being used. 3264 + * one MAC does not use DSA. 3269 3265 */ 3270 3266 u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL); 3271 3267 val &= ~MTK_CDMP_STAG_EN;
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/debugfs.c
··· 245 245 pages = dev->priv.dbg.pages_debugfs; 246 246 247 247 debugfs_create_u32("fw_pages_total", 0400, pages, &dev->priv.fw_pages); 248 - debugfs_create_u32("fw_pages_vfs", 0400, pages, &dev->priv.vfs_pages); 249 - debugfs_create_u32("fw_pages_host_pf", 0400, pages, &dev->priv.host_pf_pages); 248 + debugfs_create_u32("fw_pages_vfs", 0400, pages, &dev->priv.page_counters[MLX5_VF]); 249 + debugfs_create_u32("fw_pages_sfs", 0400, pages, &dev->priv.page_counters[MLX5_SF]); 250 + debugfs_create_u32("fw_pages_host_pf", 0400, pages, &dev->priv.page_counters[MLX5_HOST_PF]); 250 251 debugfs_create_u32("fw_pages_alloc_failed", 0400, pages, &dev->priv.fw_pages_alloc_failed); 251 252 debugfs_create_u32("fw_pages_give_dropped", 0400, pages, &dev->priv.give_pages_dropped); 252 253 debugfs_create_u32("fw_pages_reclaim_discard", 0400, pages,
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
··· 64 64 MLX5_GET(mtrc_cap, out, num_string_trace); 65 65 tracer->str_db.num_string_db = MLX5_GET(mtrc_cap, out, num_string_db); 66 66 tracer->owner = !!MLX5_GET(mtrc_cap, out, trace_owner); 67 + tracer->str_db.loaded = false; 67 68 68 69 for (i = 0; i < tracer->str_db.num_string_db; i++) { 69 70 mtrc_cap_sp = MLX5_ADDR_OF(mtrc_cap, out, string_db_param[i]); ··· 784 783 if (err) 785 784 mlx5_core_warn(dev, "FWTracer: Failed to set tracer configurations %d\n", err); 786 785 786 + tracer->buff.consumer_index = 0; 787 787 return err; 788 788 } 789 789 ··· 849 847 mlx5_core_dbg(tracer->dev, "FWTracer: ownership changed, current=(%d)\n", tracer->owner); 850 848 if (tracer->owner) { 851 849 tracer->owner = false; 852 - tracer->buff.consumer_index = 0; 853 850 return; 854 851 } 855 852
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/ecpf.c
··· 95 95 96 96 mlx5_host_pf_cleanup(dev); 97 97 98 - err = mlx5_wait_for_pages(dev, &dev->priv.host_pf_pages); 98 + err = mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_HOST_PF]); 99 99 if (err) 100 100 mlx5_core_warn(dev, "Timeout reclaiming external host PF pages err(%d)\n", err); 101 101 }
-4
drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
··· 438 438 439 439 switch (event) { 440 440 case SWITCHDEV_FDB_ADD_TO_BRIDGE: 441 - /* only handle the event on native eswtich of representor */ 442 - if (!mlx5_esw_bridge_is_local(dev, rep, esw)) 443 - break; 444 - 445 441 fdb_info = container_of(info, 446 442 struct switchdev_notifier_fdb_info, 447 443 info);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
··· 450 450 451 451 void mlx5e_disable_cvlan_filter(struct mlx5e_flow_steering *fs, bool promisc) 452 452 { 453 - if (fs->vlan->cvlan_filter_disabled) 453 + if (!fs->vlan || fs->vlan->cvlan_filter_disabled) 454 454 return; 455 455 456 456 fs->vlan->cvlan_filter_disabled = true;
+19 -71
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 597 597 rq->ix = c->ix; 598 598 rq->channel = c; 599 599 rq->mdev = mdev; 600 - rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu); 600 + rq->hw_mtu = 601 + MLX5E_SW2HW_MTU(params, params->sw_mtu) - ETH_FCS_LEN * !params->scatter_fcs_en; 601 602 rq->xdpsq = &c->rq_xdpsq; 602 603 rq->stats = &c->priv->channel_stats[c->ix]->rq; 603 604 rq->ptp_cyc2time = mlx5_rq_ts_translator(mdev); ··· 1019 1018 mlx5e_free_rx_descs(rq); 1020 1019 1021 1020 return mlx5e_rq_to_ready(rq, curr_state); 1022 - } 1023 - 1024 - static int mlx5e_modify_rq_scatter_fcs(struct mlx5e_rq *rq, bool enable) 1025 - { 1026 - struct mlx5_core_dev *mdev = rq->mdev; 1027 - 1028 - void *in; 1029 - void *rqc; 1030 - int inlen; 1031 - int err; 1032 - 1033 - inlen = MLX5_ST_SZ_BYTES(modify_rq_in); 1034 - in = kvzalloc(inlen, GFP_KERNEL); 1035 - if (!in) 1036 - return -ENOMEM; 1037 - 1038 - rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx); 1039 - 1040 - MLX5_SET(modify_rq_in, in, rq_state, MLX5_RQC_STATE_RDY); 1041 - MLX5_SET64(modify_rq_in, in, modify_bitmask, 1042 - MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_SCATTER_FCS); 1043 - MLX5_SET(rqc, rqc, scatter_fcs, enable); 1044 - MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RDY); 1045 - 1046 - err = mlx5_core_modify_rq(mdev, rq->rqn, in); 1047 - 1048 - kvfree(in); 1049 - 1050 - return err; 1051 1021 } 1052 1022 1053 1023 static int mlx5e_modify_rq_vsd(struct mlx5e_rq *rq, bool vsd) ··· 3300 3328 mlx5e_destroy_tises(priv); 3301 3329 } 3302 3330 3303 - static int mlx5e_modify_channels_scatter_fcs(struct mlx5e_channels *chs, bool enable) 3304 - { 3305 - int err = 0; 3306 - int i; 3307 - 3308 - for (i = 0; i < chs->num; i++) { 3309 - err = mlx5e_modify_rq_scatter_fcs(&chs->c[i]->rq, enable); 3310 - if (err) 3311 - return err; 3312 - } 3313 - 3314 - return 0; 3315 - } 3316 - 3317 3331 static int mlx5e_modify_channels_vsd(struct mlx5e_channels *chs, bool vsd) 3318 3332 { 3319 3333 int err; ··· 3875 3917 return mlx5_set_ports_check(mdev, in, sizeof(in)); 3876 3918 } 3877 3919 3920 
+ static int mlx5e_set_rx_port_ts_wrap(struct mlx5e_priv *priv, void *ctx) 3921 + { 3922 + struct mlx5_core_dev *mdev = priv->mdev; 3923 + bool enable = *(bool *)ctx; 3924 + 3925 + return mlx5e_set_rx_port_ts(mdev, enable); 3926 + } 3927 + 3878 3928 static int set_feature_rx_fcs(struct net_device *netdev, bool enable) 3879 3929 { 3880 3930 struct mlx5e_priv *priv = netdev_priv(netdev); 3881 3931 struct mlx5e_channels *chs = &priv->channels; 3882 - struct mlx5_core_dev *mdev = priv->mdev; 3932 + struct mlx5e_params new_params; 3883 3933 int err; 3884 3934 3885 3935 mutex_lock(&priv->state_lock); 3886 3936 3887 - if (enable) { 3888 - err = mlx5e_set_rx_port_ts(mdev, false); 3889 - if (err) 3890 - goto out; 3891 - 3892 - chs->params.scatter_fcs_en = true; 3893 - err = mlx5e_modify_channels_scatter_fcs(chs, true); 3894 - if (err) { 3895 - chs->params.scatter_fcs_en = false; 3896 - mlx5e_set_rx_port_ts(mdev, true); 3897 - } 3898 - } else { 3899 - chs->params.scatter_fcs_en = false; 3900 - err = mlx5e_modify_channels_scatter_fcs(chs, false); 3901 - if (err) { 3902 - chs->params.scatter_fcs_en = true; 3903 - goto out; 3904 - } 3905 - err = mlx5e_set_rx_port_ts(mdev, true); 3906 - if (err) { 3907 - mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err); 3908 - err = 0; 3909 - } 3910 - } 3911 - 3912 - out: 3937 + new_params = chs->params; 3938 + new_params.scatter_fcs_en = enable; 3939 + err = mlx5e_safe_switch_params(priv, &new_params, mlx5e_set_rx_port_ts_wrap, 3940 + &new_params.scatter_fcs_en, true); 3913 3941 mutex_unlock(&priv->state_lock); 3914 3942 return err; 3915 3943 } ··· 4031 4087 features &= ~NETIF_F_GRO_HW; 4032 4088 if (netdev->features & NETIF_F_GRO_HW) 4033 4089 netdev_warn(netdev, "Disabling HW_GRO, not supported in switchdev mode\n"); 4090 + 4091 + features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; 4092 + if (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER) 4093 + netdev_warn(netdev, "Disabling HW_VLAN CTAG FILTERING, not supported in switchdev 
mode\n"); 4034 4094 4035 4095 return features; 4036 4096 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
··· 1715 1715 struct mlx5_esw_bridge *bridge; 1716 1716 1717 1717 port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); 1718 - if (!port || port->flags & MLX5_ESW_BRIDGE_PORT_FLAG_PEER) 1718 + if (!port) 1719 1719 return; 1720 1720 1721 1721 bridge = port->bridge;
+5 -8
drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
··· 191 191 } 192 192 } 193 193 194 - static int mlx5i_get_speed_settings(u16 ib_link_width_oper, u16 ib_proto_oper) 194 + static u32 mlx5i_get_speed_settings(u16 ib_link_width_oper, u16 ib_proto_oper) 195 195 { 196 196 int rate, width; 197 197 198 198 rate = mlx5_ptys_rate_enum_to_int(ib_proto_oper); 199 199 if (rate < 0) 200 - return -EINVAL; 200 + return SPEED_UNKNOWN; 201 201 width = mlx5_ptys_width_enum_to_int(ib_link_width_oper); 202 202 if (width < 0) 203 - return -EINVAL; 203 + return SPEED_UNKNOWN; 204 204 205 205 return rate * width; 206 206 } ··· 223 223 ethtool_link_ksettings_zero_link_mode(link_ksettings, advertising); 224 224 225 225 speed = mlx5i_get_speed_settings(ib_link_width_oper, ib_proto_oper); 226 - if (speed < 0) 227 - return -EINVAL; 226 + link_ksettings->base.speed = speed; 227 + link_ksettings->base.duplex = speed == SPEED_UNKNOWN ? DUPLEX_UNKNOWN : DUPLEX_FULL; 228 228 229 - link_ksettings->base.duplex = DUPLEX_FULL; 230 229 link_ksettings->base.port = PORT_OTHER; 231 230 232 231 link_ksettings->base.autoneg = AUTONEG_DISABLE; 233 - 234 - link_ksettings->base.speed = speed; 235 232 236 233 return 0; 237 234 }
+7 -7
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 2131 2131 mlx5_core_verify_params(); 2132 2132 mlx5_register_debugfs(); 2133 2133 2134 - err = pci_register_driver(&mlx5_core_driver); 2134 + err = mlx5e_init(); 2135 2135 if (err) 2136 2136 goto err_debug; 2137 2137 ··· 2139 2139 if (err) 2140 2140 goto err_sf; 2141 2141 2142 - err = mlx5e_init(); 2142 + err = pci_register_driver(&mlx5_core_driver); 2143 2143 if (err) 2144 - goto err_en; 2144 + goto err_pci; 2145 2145 2146 2146 return 0; 2147 2147 2148 - err_en: 2148 + err_pci: 2149 2149 mlx5_sf_driver_unregister(); 2150 2150 err_sf: 2151 - pci_unregister_driver(&mlx5_core_driver); 2151 + mlx5e_cleanup(); 2152 2152 err_debug: 2153 2153 mlx5_unregister_debugfs(); 2154 2154 return err; ··· 2156 2156 2157 2157 static void __exit mlx5_cleanup(void) 2158 2158 { 2159 - mlx5e_cleanup(); 2160 - mlx5_sf_driver_unregister(); 2161 2159 pci_unregister_driver(&mlx5_core_driver); 2160 + mlx5_sf_driver_unregister(); 2161 + mlx5e_cleanup(); 2162 2162 mlx5_unregister_debugfs(); 2163 2163 } 2164 2164
+21 -16
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
··· 74 74 return (u32)func_id | (ec_function << 16); 75 75 } 76 76 77 + static u16 func_id_to_type(struct mlx5_core_dev *dev, u16 func_id, bool ec_function) 78 + { 79 + if (!func_id) 80 + return mlx5_core_is_ecpf(dev) && !ec_function ? MLX5_HOST_PF : MLX5_PF; 81 + 82 + return func_id <= mlx5_core_max_vfs(dev) ? MLX5_VF : MLX5_SF; 83 + } 84 + 77 85 static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function) 78 86 { 79 87 struct rb_root *root; ··· 341 333 u32 out[MLX5_ST_SZ_DW(manage_pages_out)] = {0}; 342 334 int inlen = MLX5_ST_SZ_BYTES(manage_pages_in); 343 335 int notify_fail = event; 336 + u16 func_type; 344 337 u64 addr; 345 338 int err; 346 339 u32 *in; ··· 393 384 goto out_dropped; 394 385 } 395 386 387 + func_type = func_id_to_type(dev, func_id, ec_function); 388 + dev->priv.page_counters[func_type] += npages; 396 389 dev->priv.fw_pages += npages; 397 - if (func_id) 398 - dev->priv.vfs_pages += npages; 399 - else if (mlx5_core_is_ecpf(dev) && !ec_function) 400 - dev->priv.host_pf_pages += npages; 401 390 402 391 mlx5_core_dbg(dev, "npages %d, ec_function %d, func_id 0x%x, err %d\n", 403 392 npages, ec_function, func_id, err); ··· 422 415 struct rb_root *root; 423 416 struct rb_node *p; 424 417 int npages = 0; 418 + u16 func_type; 425 419 426 420 root = xa_load(&dev->priv.page_root_xa, function); 427 421 if (WARN_ON_ONCE(!root)) ··· 437 429 free_fwp(dev, fwp, fwp->free_count); 438 430 } 439 431 432 + func_type = func_id_to_type(dev, func_id, ec_function); 433 + dev->priv.page_counters[func_type] -= npages; 440 434 dev->priv.fw_pages -= npages; 441 - if (func_id) 442 - dev->priv.vfs_pages -= npages; 443 - else if (mlx5_core_is_ecpf(dev) && !ec_function) 444 - dev->priv.host_pf_pages -= npages; 445 435 446 436 mlx5_core_dbg(dev, "npages %d, ec_function %d, func_id 0x%x\n", 447 437 npages, ec_function, func_id); ··· 505 499 int outlen = MLX5_ST_SZ_BYTES(manage_pages_out); 506 500 u32 in[MLX5_ST_SZ_DW(manage_pages_in)] = {}; 507 501 int 
num_claimed; 502 + u16 func_type; 508 503 u32 *out; 509 504 int err; 510 505 int i; ··· 557 550 if (nclaimed) 558 551 *nclaimed = num_claimed; 559 552 553 + func_type = func_id_to_type(dev, func_id, ec_function); 554 + dev->priv.page_counters[func_type] -= num_claimed; 560 555 dev->priv.fw_pages -= num_claimed; 561 - if (func_id) 562 - dev->priv.vfs_pages -= num_claimed; 563 - else if (mlx5_core_is_ecpf(dev) && !ec_function) 564 - dev->priv.host_pf_pages -= num_claimed; 565 556 566 557 out_free: 567 558 kvfree(out); ··· 712 707 WARN(dev->priv.fw_pages, 713 708 "FW pages counter is %d after reclaiming all pages\n", 714 709 dev->priv.fw_pages); 715 - WARN(dev->priv.vfs_pages, 710 + WARN(dev->priv.page_counters[MLX5_VF], 716 711 "VFs FW pages counter is %d after reclaiming all pages\n", 717 - dev->priv.vfs_pages); 718 - WARN(dev->priv.host_pf_pages, 712 + dev->priv.page_counters[MLX5_VF]); 713 + WARN(dev->priv.page_counters[MLX5_HOST_PF], 719 714 "External host PF FW pages counter is %d after reclaiming all pages\n", 720 - dev->priv.host_pf_pages); 715 + dev->priv.page_counters[MLX5_HOST_PF]); 721 716 722 717 return 0; 723 718 }
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/sriov.c
··· 147 147 148 148 mlx5_eswitch_disable_sriov(dev->priv.eswitch, clear_vf); 149 149 150 - if (mlx5_wait_for_pages(dev, &dev->priv.vfs_pages)) 150 + if (mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_VF])) 151 151 mlx5_core_warn(dev, "timeout reclaiming VFs pages\n"); 152 152 } 153 153
+15 -10
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c
··· 1138 1138 rule->flow_source)) 1139 1139 return 0; 1140 1140 1141 + mlx5dr_domain_nic_lock(nic_dmn); 1142 + 1141 1143 ret = mlx5dr_matcher_select_builders(matcher, 1142 1144 nic_matcher, 1143 1145 dr_rule_get_ipv(&param->outer), 1144 1146 dr_rule_get_ipv(&param->inner)); 1145 1147 if (ret) 1146 - return ret; 1148 + goto err_unlock; 1147 1149 1148 1150 hw_ste_arr_is_opt = nic_matcher->num_of_builders <= DR_RULE_MAX_STES_OPTIMIZED; 1149 1151 if (likely(hw_ste_arr_is_opt)) { ··· 1154 1152 hw_ste_arr = kzalloc((nic_matcher->num_of_builders + DR_ACTION_MAX_STES) * 1155 1153 DR_STE_SIZE, GFP_KERNEL); 1156 1154 1157 - if (!hw_ste_arr) 1158 - return -ENOMEM; 1155 + if (!hw_ste_arr) { 1156 + ret = -ENOMEM; 1157 + goto err_unlock; 1158 + } 1159 1159 } 1160 - 1161 - mlx5dr_domain_nic_lock(nic_dmn); 1162 1160 1163 1161 ret = mlx5dr_matcher_add_to_tbl_nic(dmn, nic_matcher); 1164 1162 if (ret) ··· 1225 1223 1226 1224 mlx5dr_domain_nic_unlock(nic_dmn); 1227 1225 1228 - goto out; 1226 + if (unlikely(!hw_ste_arr_is_opt)) 1227 + kfree(hw_ste_arr); 1228 + 1229 + return 0; 1229 1230 1230 1231 free_rule: 1231 1232 dr_rule_clean_rule_members(rule, nic_rule); ··· 1243 1238 mlx5dr_matcher_remove_from_tbl_nic(dmn, nic_matcher); 1244 1239 1245 1240 free_hw_ste: 1246 - mlx5dr_domain_nic_unlock(nic_dmn); 1247 - 1248 - out: 1249 - if (unlikely(!hw_ste_arr_is_opt)) 1241 + if (!hw_ste_arr_is_opt) 1250 1242 kfree(hw_ste_arr); 1243 + 1244 + err_unlock: 1245 + mlx5dr_domain_nic_unlock(nic_dmn); 1251 1246 1252 1247 return ret; 1253 1248 }
+2 -2
drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c
··· 632 632 /* Enable master counters */ 633 633 spx5_wr(PTP_PTP_DOM_CFG_PTP_ENA_SET(0x7), sparx5, PTP_PTP_DOM_CFG); 634 634 635 - for (i = 0; i < sparx5->port_count; i++) { 635 + for (i = 0; i < SPX5_PORTS; i++) { 636 636 port = sparx5->ports[i]; 637 637 if (!port) 638 638 continue; ··· 648 648 struct sparx5_port *port; 649 649 int i; 650 650 651 - for (i = 0; i < sparx5->port_count; i++) { 651 + for (i = 0; i < SPX5_PORTS; i++) { 652 652 port = sparx5->ports[i]; 653 653 if (!port) 654 654 continue;
+11 -26
drivers/net/ethernet/microsoft/mana/gdma_main.c
··· 1217 1217 unsigned int max_queues_per_port = num_online_cpus(); 1218 1218 struct gdma_context *gc = pci_get_drvdata(pdev); 1219 1219 struct gdma_irq_context *gic; 1220 - unsigned int max_irqs; 1221 - u16 *cpus; 1222 - cpumask_var_t req_mask; 1220 + unsigned int max_irqs, cpu; 1223 1221 int nvec, irq; 1224 1222 int err, i = 0, j; 1225 1223 ··· 1238 1240 goto free_irq_vector; 1239 1241 } 1240 1242 1241 - if (!zalloc_cpumask_var(&req_mask, GFP_KERNEL)) { 1242 - err = -ENOMEM; 1243 - goto free_irq; 1244 - } 1245 - 1246 - cpus = kcalloc(nvec, sizeof(*cpus), GFP_KERNEL); 1247 - if (!cpus) { 1248 - err = -ENOMEM; 1249 - goto free_mask; 1250 - } 1251 - for (i = 0; i < nvec; i++) 1252 - cpus[i] = cpumask_local_spread(i, gc->numa_node); 1253 - 1254 1243 for (i = 0; i < nvec; i++) { 1255 - cpumask_set_cpu(cpus[i], req_mask); 1256 1244 gic = &gc->irq_contexts[i]; 1257 1245 gic->handler = NULL; 1258 1246 gic->arg = NULL; ··· 1253 1269 irq = pci_irq_vector(pdev, i); 1254 1270 if (irq < 0) { 1255 1271 err = irq; 1256 - goto free_mask; 1272 + goto free_irq; 1257 1273 } 1258 1274 1259 1275 err = request_irq(irq, mana_gd_intr, 0, gic->name, gic); 1260 1276 if (err) 1261 - goto free_mask; 1262 - irq_set_affinity_and_hint(irq, req_mask); 1263 - cpumask_clear(req_mask); 1277 + goto free_irq; 1278 + 1279 + cpu = cpumask_local_spread(i, gc->numa_node); 1280 + irq_set_affinity_and_hint(irq, cpumask_of(cpu)); 1264 1281 } 1265 - free_cpumask_var(req_mask); 1266 - kfree(cpus); 1267 1282 1268 1283 err = mana_gd_alloc_res_map(nvec, &gc->msix_resource); 1269 1284 if (err) ··· 1273 1290 1274 1291 return 0; 1275 1292 1276 - free_mask: 1277 - free_cpumask_var(req_mask); 1278 - kfree(cpus); 1279 1293 free_irq: 1280 1294 for (j = i - 1; j >= 0; j--) { 1281 1295 irq = pci_irq_vector(pdev, j); 1282 1296 gic = &gc->irq_contexts[j]; 1297 + 1298 + irq_update_affinity_hint(irq, NULL); 1283 1299 free_irq(irq, gic); 1284 1300 } 1285 1301 ··· 1306 1324 continue; 1307 1325 1308 1326 gic = 
&gc->irq_contexts[i]; 1327 + 1328 + /* Need to clear the hint before free_irq */ 1329 + irq_update_affinity_hint(irq, NULL); 1309 1330 free_irq(irq, gic); 1310 1331 } 1311 1332
+12 -12
drivers/net/ethernet/mscc/ocelot_flower.c
··· 605 605 flow_rule_match_control(rule, &match); 606 606 } 607 607 608 + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) { 609 + struct flow_match_vlan match; 610 + 611 + flow_rule_match_vlan(rule, &match); 612 + filter->key_type = OCELOT_VCAP_KEY_ANY; 613 + filter->vlan.vid.value = match.key->vlan_id; 614 + filter->vlan.vid.mask = match.mask->vlan_id; 615 + filter->vlan.pcp.value[0] = match.key->vlan_priority; 616 + filter->vlan.pcp.mask[0] = match.mask->vlan_priority; 617 + match_protocol = false; 618 + } 619 + 608 620 if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { 609 621 struct flow_match_eth_addrs match; 610 622 ··· 746 734 filter->key.ipv4.sport.mask = ntohs(match.mask->src); 747 735 filter->key.ipv4.dport.value = ntohs(match.key->dst); 748 736 filter->key.ipv4.dport.mask = ntohs(match.mask->dst); 749 - match_protocol = false; 750 - } 751 - 752 - if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) { 753 - struct flow_match_vlan match; 754 - 755 - flow_rule_match_vlan(rule, &match); 756 - filter->key_type = OCELOT_VCAP_KEY_ANY; 757 - filter->vlan.vid.value = match.key->vlan_id; 758 - filter->vlan.vid.mask = match.mask->vlan_id; 759 - filter->vlan.pcp.value[0] = match.key->vlan_priority; 760 - filter->vlan.pcp.mask[0] = match.mask->vlan_priority; 761 737 match_protocol = false; 762 738 } 763 739
+4 -4
drivers/net/ethernet/mscc/ocelot_ptp.c
··· 335 335 ocelot_populate_ipv6_ptp_event_trap_key(struct ocelot_vcap_filter *trap) 336 336 { 337 337 trap->key_type = OCELOT_VCAP_KEY_IPV6; 338 - trap->key.ipv4.proto.value[0] = IPPROTO_UDP; 339 - trap->key.ipv4.proto.mask[0] = 0xff; 338 + trap->key.ipv6.proto.value[0] = IPPROTO_UDP; 339 + trap->key.ipv6.proto.mask[0] = 0xff; 340 340 trap->key.ipv6.dport.value = PTP_EV_PORT; 341 341 trap->key.ipv6.dport.mask = 0xffff; 342 342 } ··· 355 355 ocelot_populate_ipv6_ptp_general_trap_key(struct ocelot_vcap_filter *trap) 356 356 { 357 357 trap->key_type = OCELOT_VCAP_KEY_IPV6; 358 - trap->key.ipv4.proto.value[0] = IPPROTO_UDP; 359 - trap->key.ipv4.proto.mask[0] = 0xff; 358 + trap->key.ipv6.proto.value[0] = IPPROTO_UDP; 359 + trap->key.ipv6.proto.mask[0] = 0xff; 360 360 trap->key.ipv6.dport.value = PTP_GEN_PORT; 361 361 trap->key.ipv6.dport.mask = 0xffff; 362 362 }
+158 -36
drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
··· 293 293 } 294 294 } 295 295 296 - static const u16 nfp_eth_media_table[] = { 297 - [NFP_MEDIA_1000BASE_CX] = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, 298 - [NFP_MEDIA_1000BASE_KX] = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, 299 - [NFP_MEDIA_10GBASE_KX4] = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, 300 - [NFP_MEDIA_10GBASE_KR] = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT, 301 - [NFP_MEDIA_10GBASE_CX4] = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, 302 - [NFP_MEDIA_10GBASE_CR] = ETHTOOL_LINK_MODE_10000baseCR_Full_BIT, 303 - [NFP_MEDIA_10GBASE_SR] = ETHTOOL_LINK_MODE_10000baseSR_Full_BIT, 304 - [NFP_MEDIA_10GBASE_ER] = ETHTOOL_LINK_MODE_10000baseER_Full_BIT, 305 - [NFP_MEDIA_25GBASE_KR] = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, 306 - [NFP_MEDIA_25GBASE_KR_S] = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, 307 - [NFP_MEDIA_25GBASE_CR] = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, 308 - [NFP_MEDIA_25GBASE_CR_S] = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, 309 - [NFP_MEDIA_25GBASE_SR] = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT, 310 - [NFP_MEDIA_40GBASE_CR4] = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT, 311 - [NFP_MEDIA_40GBASE_KR4] = ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT, 312 - [NFP_MEDIA_40GBASE_SR4] = ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT, 313 - [NFP_MEDIA_40GBASE_LR4] = ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT, 314 - [NFP_MEDIA_50GBASE_KR] = ETHTOOL_LINK_MODE_50000baseKR_Full_BIT, 315 - [NFP_MEDIA_50GBASE_SR] = ETHTOOL_LINK_MODE_50000baseSR_Full_BIT, 316 - [NFP_MEDIA_50GBASE_CR] = ETHTOOL_LINK_MODE_50000baseCR_Full_BIT, 317 - [NFP_MEDIA_50GBASE_LR] = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, 318 - [NFP_MEDIA_50GBASE_ER] = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, 319 - [NFP_MEDIA_50GBASE_FR] = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, 320 - [NFP_MEDIA_100GBASE_KR4] = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, 321 - [NFP_MEDIA_100GBASE_SR4] = ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT, 322 - [NFP_MEDIA_100GBASE_CR4] = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, 323 - 
[NFP_MEDIA_100GBASE_KP4] = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, 324 - [NFP_MEDIA_100GBASE_CR10] = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, 296 + static const struct nfp_eth_media_link_mode { 297 + u16 ethtool_link_mode; 298 + u16 speed; 299 + } nfp_eth_media_table[NFP_MEDIA_LINK_MODES_NUMBER] = { 300 + [NFP_MEDIA_1000BASE_CX] = { 301 + .ethtool_link_mode = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, 302 + .speed = NFP_SPEED_1G, 303 + }, 304 + [NFP_MEDIA_1000BASE_KX] = { 305 + .ethtool_link_mode = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, 306 + .speed = NFP_SPEED_1G, 307 + }, 308 + [NFP_MEDIA_10GBASE_KX4] = { 309 + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, 310 + .speed = NFP_SPEED_10G, 311 + }, 312 + [NFP_MEDIA_10GBASE_KR] = { 313 + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT, 314 + .speed = NFP_SPEED_10G, 315 + }, 316 + [NFP_MEDIA_10GBASE_CX4] = { 317 + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, 318 + .speed = NFP_SPEED_10G, 319 + }, 320 + [NFP_MEDIA_10GBASE_CR] = { 321 + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseCR_Full_BIT, 322 + .speed = NFP_SPEED_10G, 323 + }, 324 + [NFP_MEDIA_10GBASE_SR] = { 325 + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseSR_Full_BIT, 326 + .speed = NFP_SPEED_10G, 327 + }, 328 + [NFP_MEDIA_10GBASE_ER] = { 329 + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseER_Full_BIT, 330 + .speed = NFP_SPEED_10G, 331 + }, 332 + [NFP_MEDIA_25GBASE_KR] = { 333 + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, 334 + .speed = NFP_SPEED_25G, 335 + }, 336 + [NFP_MEDIA_25GBASE_KR_S] = { 337 + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, 338 + .speed = NFP_SPEED_25G, 339 + }, 340 + [NFP_MEDIA_25GBASE_CR] = { 341 + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, 342 + .speed = NFP_SPEED_25G, 343 + }, 344 + [NFP_MEDIA_25GBASE_CR_S] = { 345 + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, 346 + .speed = NFP_SPEED_25G, 347 + }, 348 + 
[NFP_MEDIA_25GBASE_SR] = { 349 + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT, 350 + .speed = NFP_SPEED_25G, 351 + }, 352 + [NFP_MEDIA_40GBASE_CR4] = { 353 + .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT, 354 + .speed = NFP_SPEED_40G, 355 + }, 356 + [NFP_MEDIA_40GBASE_KR4] = { 357 + .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT, 358 + .speed = NFP_SPEED_40G, 359 + }, 360 + [NFP_MEDIA_40GBASE_SR4] = { 361 + .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT, 362 + .speed = NFP_SPEED_40G, 363 + }, 364 + [NFP_MEDIA_40GBASE_LR4] = { 365 + .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT, 366 + .speed = NFP_SPEED_40G, 367 + }, 368 + [NFP_MEDIA_50GBASE_KR] = { 369 + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseKR_Full_BIT, 370 + .speed = NFP_SPEED_50G, 371 + }, 372 + [NFP_MEDIA_50GBASE_SR] = { 373 + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseSR_Full_BIT, 374 + .speed = NFP_SPEED_50G, 375 + }, 376 + [NFP_MEDIA_50GBASE_CR] = { 377 + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseCR_Full_BIT, 378 + .speed = NFP_SPEED_50G, 379 + }, 380 + [NFP_MEDIA_50GBASE_LR] = { 381 + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, 382 + .speed = NFP_SPEED_50G, 383 + }, 384 + [NFP_MEDIA_50GBASE_ER] = { 385 + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, 386 + .speed = NFP_SPEED_50G, 387 + }, 388 + [NFP_MEDIA_50GBASE_FR] = { 389 + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, 390 + .speed = NFP_SPEED_50G, 391 + }, 392 + [NFP_MEDIA_100GBASE_KR4] = { 393 + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, 394 + .speed = NFP_SPEED_100G, 395 + }, 396 + [NFP_MEDIA_100GBASE_SR4] = { 397 + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT, 398 + .speed = NFP_SPEED_100G, 399 + }, 400 + [NFP_MEDIA_100GBASE_CR4] = { 401 + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, 402 + .speed = NFP_SPEED_100G, 403 + }, 404 + 
[NFP_MEDIA_100GBASE_KP4] = { 405 + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, 406 + .speed = NFP_SPEED_100G, 407 + }, 408 + [NFP_MEDIA_100GBASE_CR10] = { 409 + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, 410 + .speed = NFP_SPEED_100G, 411 + }, 412 + }; 413 + 414 + static const unsigned int nfp_eth_speed_map[NFP_SUP_SPEED_NUMBER] = { 415 + [NFP_SPEED_1G] = SPEED_1000, 416 + [NFP_SPEED_10G] = SPEED_10000, 417 + [NFP_SPEED_25G] = SPEED_25000, 418 + [NFP_SPEED_40G] = SPEED_40000, 419 + [NFP_SPEED_50G] = SPEED_50000, 420 + [NFP_SPEED_100G] = SPEED_100000, 325 421 }; 326 422 327 423 static void nfp_add_media_link_mode(struct nfp_port *port, ··· 430 334 }; 431 335 struct nfp_cpp *cpp = port->app->cpp; 432 336 433 - if (nfp_eth_read_media(cpp, &ethm)) 337 + if (nfp_eth_read_media(cpp, &ethm)) { 338 + bitmap_fill(port->speed_bitmap, NFP_SUP_SPEED_NUMBER); 434 339 return; 340 + } 341 + 342 + bitmap_zero(port->speed_bitmap, NFP_SUP_SPEED_NUMBER); 435 343 436 344 for (u32 i = 0; i < 2; i++) { 437 345 supported_modes[i] = le64_to_cpu(ethm.supported_modes[i]); ··· 444 344 445 345 for (u32 i = 0; i < NFP_MEDIA_LINK_MODES_NUMBER; i++) { 446 346 if (i < 64) { 447 - if (supported_modes[0] & BIT_ULL(i)) 448 - __set_bit(nfp_eth_media_table[i], 347 + if (supported_modes[0] & BIT_ULL(i)) { 348 + __set_bit(nfp_eth_media_table[i].ethtool_link_mode, 449 349 cmd->link_modes.supported); 350 + __set_bit(nfp_eth_media_table[i].speed, 351 + port->speed_bitmap); 352 + } 450 353 451 354 if (advertised_modes[0] & BIT_ULL(i)) 452 - __set_bit(nfp_eth_media_table[i], 355 + __set_bit(nfp_eth_media_table[i].ethtool_link_mode, 453 356 cmd->link_modes.advertising); 454 357 } else { 455 - if (supported_modes[1] & BIT_ULL(i - 64)) 456 - __set_bit(nfp_eth_media_table[i], 358 + if (supported_modes[1] & BIT_ULL(i - 64)) { 359 + __set_bit(nfp_eth_media_table[i].ethtool_link_mode, 457 360 cmd->link_modes.supported); 361 + __set_bit(nfp_eth_media_table[i].speed, 362 + 
port->speed_bitmap); 363 + } 458 364 459 365 if (advertised_modes[1] & BIT_ULL(i - 64)) 460 - __set_bit(nfp_eth_media_table[i], 366 + __set_bit(nfp_eth_media_table[i].ethtool_link_mode, 461 367 cmd->link_modes.advertising); 462 368 } 463 369 } ··· 574 468 575 469 if (cmd->base.speed != SPEED_UNKNOWN) { 576 470 u32 speed = cmd->base.speed / eth_port->lanes; 471 + bool is_supported = false; 472 + 473 + for (u32 i = 0; i < NFP_SUP_SPEED_NUMBER; i++) { 474 + if (cmd->base.speed == nfp_eth_speed_map[i] && 475 + test_bit(i, port->speed_bitmap)) { 476 + is_supported = true; 477 + break; 478 + } 479 + } 480 + 481 + if (!is_supported) { 482 + netdev_err(netdev, "Speed %u is not supported.\n", 483 + cmd->base.speed); 484 + err = -EINVAL; 485 + goto err_bad_set; 486 + } 577 487 578 488 if (req_aneg) { 579 489 netdev_err(netdev, "Speed changing is not allowed when working on autoneg mode.\n");
+12
drivers/net/ethernet/netronome/nfp/nfp_port.h
··· 38 38 NFP_PORT_CHANGED = 0, 39 39 }; 40 40 41 + enum { 42 + NFP_SPEED_1G, 43 + NFP_SPEED_10G, 44 + NFP_SPEED_25G, 45 + NFP_SPEED_40G, 46 + NFP_SPEED_50G, 47 + NFP_SPEED_100G, 48 + NFP_SUP_SPEED_NUMBER 49 + }; 50 + 41 51 /** 42 52 * struct nfp_port - structure representing NFP port 43 53 * @netdev: backpointer to associated netdev ··· 62 52 * @eth_forced: for %NFP_PORT_PHYS_PORT port is forced UP or DOWN, don't change 63 53 * @eth_port: for %NFP_PORT_PHYS_PORT translated ETH Table port entry 64 54 * @eth_stats: for %NFP_PORT_PHYS_PORT MAC stats if available 55 + * @speed_bitmap: for %NFP_PORT_PHYS_PORT supported speed bitmap 65 56 * @pf_id: for %NFP_PORT_PF_PORT, %NFP_PORT_VF_PORT ID of the PCI PF (0-3) 66 57 * @vf_id: for %NFP_PORT_VF_PORT ID of the PCI VF within @pf_id 67 58 * @pf_split: for %NFP_PORT_PF_PORT %true if PCI PF has more than one vNIC ··· 89 78 bool eth_forced; 90 79 struct nfp_eth_table_port *eth_port; 91 80 u8 __iomem *eth_stats; 81 + DECLARE_BITMAP(speed_bitmap, NFP_SUP_SPEED_NUMBER); 92 82 }; 93 83 /* NFP_PORT_PF_PORT, NFP_PORT_VF_PORT */ 94 84 struct {
+8 -1
drivers/net/ethernet/pensando/ionic/ionic_dev.c
··· 708 708 q->lif->index, q->name, q->hw_type, q->hw_index, 709 709 q->head_idx, ring_doorbell); 710 710 711 - if (ring_doorbell) 711 + if (ring_doorbell) { 712 712 ionic_dbell_ring(lif->kern_dbpage, q->hw_type, 713 713 q->dbval | q->head_idx); 714 + 715 + q->dbell_jiffies = jiffies; 716 + 717 + if (q_to_qcq(q)->napi_qcq) 718 + mod_timer(&q_to_qcq(q)->napi_qcq->napi_deadline, 719 + jiffies + IONIC_NAPI_DEADLINE); 720 + } 714 721 } 715 722 716 723 static bool ionic_q_is_posted(struct ionic_queue *q, unsigned int pos)
+12
drivers/net/ethernet/pensando/ionic/ionic_dev.h
··· 25 25 #define IONIC_DEV_INFO_REG_COUNT 32 26 26 #define IONIC_DEV_CMD_REG_COUNT 32 27 27 28 + #define IONIC_NAPI_DEADLINE (HZ / 200) /* 5ms */ 29 + #define IONIC_ADMIN_DOORBELL_DEADLINE (HZ / 2) /* 500ms */ 30 + #define IONIC_TX_DOORBELL_DEADLINE (HZ / 100) /* 10ms */ 31 + #define IONIC_RX_MIN_DOORBELL_DEADLINE (HZ / 100) /* 10ms */ 32 + #define IONIC_RX_MAX_DOORBELL_DEADLINE (HZ * 5) /* 5s */ 33 + 28 34 struct ionic_dev_bar { 29 35 void __iomem *vaddr; 30 36 phys_addr_t bus_addr; ··· 222 216 struct ionic_lif *lif; 223 217 struct ionic_desc_info *info; 224 218 u64 dbval; 219 + unsigned long dbell_deadline; 220 + unsigned long dbell_jiffies; 225 221 u16 head_idx; 226 222 u16 tail_idx; 227 223 unsigned int index; ··· 368 360 unsigned int stop_index); 369 361 int ionic_heartbeat_check(struct ionic *ionic); 370 362 bool ionic_is_fw_running(struct ionic_dev *idev); 363 + 364 + bool ionic_adminq_poke_doorbell(struct ionic_queue *q); 365 + bool ionic_txq_poke_doorbell(struct ionic_queue *q); 366 + bool ionic_rxq_poke_doorbell(struct ionic_queue *q); 371 367 372 368 #endif /* _IONIC_DEV_H_ */
+59 -9
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 16 16 17 17 #include "ionic.h" 18 18 #include "ionic_bus.h" 19 + #include "ionic_dev.h" 19 20 #include "ionic_lif.h" 20 21 #include "ionic_txrx.h" 21 22 #include "ionic_ethtool.h" ··· 201 200 } 202 201 } 203 202 203 + static void ionic_napi_deadline(struct timer_list *timer) 204 + { 205 + struct ionic_qcq *qcq = container_of(timer, struct ionic_qcq, napi_deadline); 206 + 207 + napi_schedule(&qcq->napi); 208 + } 209 + 204 210 static irqreturn_t ionic_isr(int irq, void *data) 205 211 { 206 212 struct napi_struct *napi = data; ··· 277 269 .oper = IONIC_Q_ENABLE, 278 270 }, 279 271 }; 272 + int ret; 280 273 281 274 idev = &lif->ionic->idev; 282 275 dev = lif->ionic->dev; ··· 285 276 dev_dbg(dev, "q_enable.index %d q_enable.qtype %d\n", 286 277 ctx.cmd.q_control.index, ctx.cmd.q_control.type); 287 278 279 + if (qcq->flags & IONIC_QCQ_F_INTR) 280 + ionic_intr_clean(idev->intr_ctrl, qcq->intr.index); 281 + 282 + ret = ionic_adminq_post_wait(lif, &ctx); 283 + if (ret) 284 + return ret; 285 + 286 + if (qcq->napi.poll) 287 + napi_enable(&qcq->napi); 288 + 288 289 if (qcq->flags & IONIC_QCQ_F_INTR) { 289 290 irq_set_affinity_hint(qcq->intr.vector, 290 291 &qcq->intr.affinity_mask); 291 - napi_enable(&qcq->napi); 292 - ionic_intr_clean(idev->intr_ctrl, qcq->intr.index); 293 292 ionic_intr_mask(idev->intr_ctrl, qcq->intr.index, 294 293 IONIC_INTR_MASK_CLEAR); 295 294 } 296 295 297 - return ionic_adminq_post_wait(lif, &ctx); 296 + return 0; 298 297 } 299 298 300 299 static int ionic_qcq_disable(struct ionic_lif *lif, struct ionic_qcq *qcq, int fw_err) ··· 333 316 synchronize_irq(qcq->intr.vector); 334 317 irq_set_affinity_hint(qcq->intr.vector, NULL); 335 318 napi_disable(&qcq->napi); 319 + del_timer_sync(&qcq->napi_deadline); 336 320 } 337 321 338 322 /* If there was a previous fw communcation error, don't bother with ··· 469 451 470 452 n_qcq->intr.vector = src_qcq->intr.vector; 471 453 n_qcq->intr.index = src_qcq->intr.index; 454 + n_qcq->napi_qcq = src_qcq->napi_qcq; 472 
455 } 473 456 474 457 static int ionic_alloc_qcq_interrupt(struct ionic_lif *lif, struct ionic_qcq *qcq) ··· 583 564 } 584 565 585 566 if (flags & IONIC_QCQ_F_NOTIFYQ) { 586 - int q_size, cq_size; 567 + int q_size; 587 568 588 - /* q & cq need to be contiguous in case of notifyq */ 569 + /* q & cq need to be contiguous in NotifyQ, so alloc it all in q 570 + * and don't alloc qc. We leave new->qc_size and new->qc_base 571 + * as 0 to be sure we don't try to free it later. 572 + */ 589 573 q_size = ALIGN(num_descs * desc_size, PAGE_SIZE); 590 - cq_size = ALIGN(num_descs * cq_desc_size, PAGE_SIZE); 591 - 592 - new->q_size = PAGE_SIZE + q_size + cq_size; 574 + new->q_size = PAGE_SIZE + q_size + 575 + ALIGN(num_descs * cq_desc_size, PAGE_SIZE); 593 576 new->q_base = dma_alloc_coherent(dev, new->q_size, 594 577 &new->q_base_pa, GFP_KERNEL); 595 578 if (!new->q_base) { ··· 794 773 dev_dbg(dev, "txq->hw_type %d\n", q->hw_type); 795 774 dev_dbg(dev, "txq->hw_index %d\n", q->hw_index); 796 775 797 - if (test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state)) 776 + q->dbell_deadline = IONIC_TX_DOORBELL_DEADLINE; 777 + q->dbell_jiffies = jiffies; 778 + 779 + if (test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state)) { 798 780 netif_napi_add(lif->netdev, &qcq->napi, ionic_tx_napi); 781 + qcq->napi_qcq = qcq; 782 + timer_setup(&qcq->napi_deadline, ionic_napi_deadline, 0); 783 + } 799 784 800 785 qcq->flags |= IONIC_QCQ_F_INITED; 801 786 ··· 855 828 dev_dbg(dev, "rxq->hw_type %d\n", q->hw_type); 856 829 dev_dbg(dev, "rxq->hw_index %d\n", q->hw_index); 857 830 831 + q->dbell_deadline = IONIC_RX_MIN_DOORBELL_DEADLINE; 832 + q->dbell_jiffies = jiffies; 833 + 858 834 if (test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state)) 859 835 netif_napi_add(lif->netdev, &qcq->napi, ionic_rx_napi); 860 836 else 861 837 netif_napi_add(lif->netdev, &qcq->napi, ionic_txrx_napi); 838 + 839 + qcq->napi_qcq = qcq; 840 + timer_setup(&qcq->napi_deadline, ionic_napi_deadline, 0); 862 841 863 842 qcq->flags |= IONIC_QCQ_F_INITED; 
864 843 ··· 1183 1150 struct ionic_dev *idev = &lif->ionic->idev; 1184 1151 unsigned long irqflags; 1185 1152 unsigned int flags = 0; 1153 + bool resched = false; 1186 1154 int rx_work = 0; 1187 1155 int tx_work = 0; 1188 1156 int n_work = 0; ··· 1220 1186 credits = n_work + a_work + rx_work + tx_work; 1221 1187 ionic_intr_credits(idev->intr_ctrl, intr->index, credits, flags); 1222 1188 } 1189 + 1190 + if (!a_work && ionic_adminq_poke_doorbell(&lif->adminqcq->q)) 1191 + resched = true; 1192 + if (lif->hwstamp_rxq && !rx_work && ionic_rxq_poke_doorbell(&lif->hwstamp_rxq->q)) 1193 + resched = true; 1194 + if (lif->hwstamp_txq && !tx_work && ionic_txq_poke_doorbell(&lif->hwstamp_txq->q)) 1195 + resched = true; 1196 + if (resched) 1197 + mod_timer(&lif->adminqcq->napi_deadline, 1198 + jiffies + IONIC_NAPI_DEADLINE); 1223 1199 1224 1200 return work_done; 1225 1201 } ··· 3289 3245 dev_dbg(dev, "adminq->hw_type %d\n", q->hw_type); 3290 3246 dev_dbg(dev, "adminq->hw_index %d\n", q->hw_index); 3291 3247 3248 + q->dbell_deadline = IONIC_ADMIN_DOORBELL_DEADLINE; 3249 + q->dbell_jiffies = jiffies; 3250 + 3292 3251 netif_napi_add(lif->netdev, &qcq->napi, ionic_adminq_napi); 3252 + 3253 + qcq->napi_qcq = qcq; 3254 + timer_setup(&qcq->napi_deadline, ionic_napi_deadline, 0); 3293 3255 3294 3256 napi_enable(&qcq->napi); 3295 3257
+2
drivers/net/ethernet/pensando/ionic/ionic_lif.h
··· 74 74 struct ionic_queue q; 75 75 struct ionic_cq cq; 76 76 struct ionic_intr_info intr; 77 + struct timer_list napi_deadline; 77 78 struct napi_struct napi; 78 79 unsigned int flags; 80 + struct ionic_qcq *napi_qcq; 79 81 struct dentry *dentry; 80 82 }; 81 83
+29
drivers/net/ethernet/pensando/ionic/ionic_main.c
··· 289 289 complete_all(&ctx->work); 290 290 } 291 291 292 + bool ionic_adminq_poke_doorbell(struct ionic_queue *q) 293 + { 294 + struct ionic_lif *lif = q->lif; 295 + unsigned long now, then, dif; 296 + unsigned long irqflags; 297 + 298 + spin_lock_irqsave(&lif->adminq_lock, irqflags); 299 + 300 + if (q->tail_idx == q->head_idx) { 301 + spin_unlock_irqrestore(&lif->adminq_lock, irqflags); 302 + return false; 303 + } 304 + 305 + now = READ_ONCE(jiffies); 306 + then = q->dbell_jiffies; 307 + dif = now - then; 308 + 309 + if (dif > q->dbell_deadline) { 310 + ionic_dbell_ring(q->lif->kern_dbpage, q->hw_type, 311 + q->dbval | q->head_idx); 312 + 313 + q->dbell_jiffies = now; 314 + } 315 + 316 + spin_unlock_irqrestore(&lif->adminq_lock, irqflags); 317 + 318 + return true; 319 + } 320 + 292 321 int ionic_adminq_post(struct ionic_lif *lif, struct ionic_admin_ctx *ctx) 293 322 { 294 323 struct ionic_desc_info *desc_info;
+85 -2
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
··· 22 22 ionic_q_post(q, ring_dbell, cb_func, cb_arg); 23 23 } 24 24 25 + bool ionic_txq_poke_doorbell(struct ionic_queue *q) 26 + { 27 + unsigned long now, then, dif; 28 + struct netdev_queue *netdev_txq; 29 + struct net_device *netdev; 30 + 31 + netdev = q->lif->netdev; 32 + netdev_txq = netdev_get_tx_queue(netdev, q->index); 33 + 34 + HARD_TX_LOCK(netdev, netdev_txq, smp_processor_id()); 35 + 36 + if (q->tail_idx == q->head_idx) { 37 + HARD_TX_UNLOCK(netdev, netdev_txq); 38 + return false; 39 + } 40 + 41 + now = READ_ONCE(jiffies); 42 + then = q->dbell_jiffies; 43 + dif = now - then; 44 + 45 + if (dif > q->dbell_deadline) { 46 + ionic_dbell_ring(q->lif->kern_dbpage, q->hw_type, 47 + q->dbval | q->head_idx); 48 + 49 + q->dbell_jiffies = now; 50 + } 51 + 52 + HARD_TX_UNLOCK(netdev, netdev_txq); 53 + 54 + return true; 55 + } 56 + 57 + bool ionic_rxq_poke_doorbell(struct ionic_queue *q) 58 + { 59 + unsigned long now, then, dif; 60 + 61 + /* no lock, called from rx napi or txrx napi, nothing else can fill */ 62 + 63 + if (q->tail_idx == q->head_idx) 64 + return false; 65 + 66 + now = READ_ONCE(jiffies); 67 + then = q->dbell_jiffies; 68 + dif = now - then; 69 + 70 + if (dif > q->dbell_deadline) { 71 + ionic_dbell_ring(q->lif->kern_dbpage, q->hw_type, 72 + q->dbval | q->head_idx); 73 + 74 + q->dbell_jiffies = now; 75 + 76 + dif = 2 * q->dbell_deadline; 77 + if (dif > IONIC_RX_MAX_DOORBELL_DEADLINE) 78 + dif = IONIC_RX_MAX_DOORBELL_DEADLINE; 79 + 80 + q->dbell_deadline = dif; 81 + } 82 + 83 + return true; 84 + } 85 + 25 86 static inline struct netdev_queue *q_to_ndq(struct ionic_queue *q) 26 87 { 27 88 return netdev_get_tx_queue(q->lif->netdev, q->index); ··· 485 424 486 425 ionic_dbell_ring(q->lif->kern_dbpage, q->hw_type, 487 426 q->dbval | q->head_idx); 427 + 428 + q->dbell_deadline = IONIC_RX_MIN_DOORBELL_DEADLINE; 429 + q->dbell_jiffies = jiffies; 430 + 431 + mod_timer(&q_to_qcq(q)->napi_qcq->napi_deadline, 432 + jiffies + IONIC_NAPI_DEADLINE); 488 433 } 489 434 
490 435 void ionic_rx_empty(struct ionic_queue *q) ··· 578 511 work_done, flags); 579 512 } 580 513 514 + if (!work_done && ionic_txq_poke_doorbell(&qcq->q)) 515 + mod_timer(&qcq->napi_deadline, jiffies + IONIC_NAPI_DEADLINE); 516 + 581 517 return work_done; 582 518 } 583 519 ··· 614 544 work_done, flags); 615 545 } 616 546 547 + if (!work_done && ionic_rxq_poke_doorbell(&qcq->q)) 548 + mod_timer(&qcq->napi_deadline, jiffies + IONIC_NAPI_DEADLINE); 549 + 617 550 return work_done; 618 551 } 619 552 620 553 int ionic_txrx_napi(struct napi_struct *napi, int budget) 621 554 { 622 - struct ionic_qcq *qcq = napi_to_qcq(napi); 555 + struct ionic_qcq *rxqcq = napi_to_qcq(napi); 623 556 struct ionic_cq *rxcq = napi_to_cq(napi); 624 557 unsigned int qi = rxcq->bound_q->index; 558 + struct ionic_qcq *txqcq; 625 559 struct ionic_dev *idev; 626 560 struct ionic_lif *lif; 627 561 struct ionic_cq *txcq; 562 + bool resched = false; 628 563 u32 rx_work_done = 0; 629 564 u32 tx_work_done = 0; 630 565 u32 flags = 0; 631 566 632 567 lif = rxcq->bound_q->lif; 633 568 idev = &lif->ionic->idev; 569 + txqcq = lif->txqcqs[qi]; 634 570 txcq = &lif->txqcqs[qi]->cq; 635 571 636 572 tx_work_done = ionic_cq_service(txcq, IONIC_TX_BUDGET_DEFAULT, ··· 648 572 ionic_rx_fill(rxcq->bound_q); 649 573 650 574 if (rx_work_done < budget && napi_complete_done(napi, rx_work_done)) { 651 - ionic_dim_update(qcq, 0); 575 + ionic_dim_update(rxqcq, 0); 652 576 flags |= IONIC_INTR_CRED_UNMASK; 653 577 rxcq->bound_intr->rearm_count++; 654 578 } ··· 658 582 ionic_intr_credits(idev->intr_ctrl, rxcq->bound_intr->index, 659 583 tx_work_done + rx_work_done, flags); 660 584 } 585 + 586 + if (!rx_work_done && ionic_rxq_poke_doorbell(&rxqcq->q)) 587 + resched = true; 588 + if (!tx_work_done && ionic_txq_poke_doorbell(&txqcq->q)) 589 + resched = true; 590 + if (resched) 591 + mod_timer(&rxqcq->napi_deadline, jiffies + IONIC_NAPI_DEADLINE); 661 592 662 593 return rx_work_done; 663 594 }
+1 -1
drivers/net/hyperv/netvsc.c
··· 1034 1034 1035 1035 packet->dma_range = kcalloc(page_count, 1036 1036 sizeof(*packet->dma_range), 1037 - GFP_KERNEL); 1037 + GFP_ATOMIC); 1038 1038 if (!packet->dma_range) 1039 1039 return -ENOMEM; 1040 1040
+2
drivers/net/phy/meson-gxl.c
··· 261 261 .handle_interrupt = meson_gxl_handle_interrupt, 262 262 .suspend = genphy_suspend, 263 263 .resume = genphy_resume, 264 + .read_mmd = genphy_read_mmd_unsupported, 265 + .write_mmd = genphy_write_mmd_unsupported, 264 266 }, { 265 267 PHY_ID_MATCH_EXACT(0x01803301), 266 268 .name = "Meson G12A Internal PHY",
+2 -3
drivers/net/phy/phylink.c
··· 1816 1816 1817 1817 ret = phy_attach_direct(pl->netdev, phy_dev, flags, 1818 1818 pl->link_interface); 1819 - if (ret) { 1820 - phy_device_free(phy_dev); 1819 + phy_device_free(phy_dev); 1820 + if (ret) 1821 1821 return ret; 1822 - } 1823 1822 1824 1823 ret = phylink_bringup_phy(pl, phy_dev, pl->link_config.interface); 1825 1824 if (ret)
+1 -3
drivers/net/usb/plusb.c
··· 57 57 static inline int 58 58 pl_vendor_req(struct usbnet *dev, u8 req, u8 val, u8 index) 59 59 { 60 - return usbnet_read_cmd(dev, req, 61 - USB_DIR_IN | USB_TYPE_VENDOR | 62 - USB_RECIP_DEVICE, 60 + return usbnet_write_cmd(dev, req, USB_TYPE_VENDOR | USB_RECIP_DEVICE, 63 61 val, index, NULL, 0); 64 62 } 65 63
+12 -2
drivers/nvme/host/auth.c
··· 45 45 int sess_key_len; 46 46 }; 47 47 48 + struct workqueue_struct *nvme_auth_wq; 49 + 48 50 #define nvme_auth_flags_from_qid(qid) \ 49 51 (qid == 0) ? 0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED 50 52 #define nvme_auth_queue_from_qid(ctrl, qid) \ ··· 868 866 869 867 chap = &ctrl->dhchap_ctxs[qid]; 870 868 cancel_work_sync(&chap->auth_work); 871 - queue_work(nvme_wq, &chap->auth_work); 869 + queue_work(nvme_auth_wq, &chap->auth_work); 872 870 return 0; 873 871 } 874 872 EXPORT_SYMBOL_GPL(nvme_auth_negotiate); ··· 1010 1008 1011 1009 int __init nvme_init_auth(void) 1012 1010 { 1011 + nvme_auth_wq = alloc_workqueue("nvme-auth-wq", 1012 + WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0); 1013 + if (!nvme_auth_wq) 1014 + return -ENOMEM; 1015 + 1013 1016 nvme_chap_buf_cache = kmem_cache_create("nvme-chap-buf-cache", 1014 1017 CHAP_BUF_SIZE, 0, SLAB_HWCACHE_ALIGN, NULL); 1015 1018 if (!nvme_chap_buf_cache) 1016 - return -ENOMEM; 1019 + goto err_destroy_workqueue; 1017 1020 1018 1021 nvme_chap_buf_pool = mempool_create(16, mempool_alloc_slab, 1019 1022 mempool_free_slab, nvme_chap_buf_cache); ··· 1028 1021 return 0; 1029 1022 err_destroy_chap_buf_cache: 1030 1023 kmem_cache_destroy(nvme_chap_buf_cache); 1024 + err_destroy_workqueue: 1025 + destroy_workqueue(nvme_auth_wq); 1031 1026 return -ENOMEM; 1032 1027 } 1033 1028 ··· 1037 1028 { 1038 1029 mempool_destroy(nvme_chap_buf_pool); 1039 1030 kmem_cache_destroy(nvme_chap_buf_cache); 1031 + destroy_workqueue(nvme_auth_wq); 1040 1032 }
+4 -1
drivers/nvme/host/core.c
··· 4921 4921 blk_mq_destroy_queue(ctrl->admin_q); 4922 4922 blk_put_queue(ctrl->admin_q); 4923 4923 out_free_tagset: 4924 - blk_mq_free_tag_set(ctrl->admin_tagset); 4924 + blk_mq_free_tag_set(set); 4925 + ctrl->admin_q = NULL; 4926 + ctrl->fabrics_q = NULL; 4925 4927 return ret; 4926 4928 } 4927 4929 EXPORT_SYMBOL_GPL(nvme_alloc_admin_tag_set); ··· 4985 4983 4986 4984 out_free_tag_set: 4987 4985 blk_mq_free_tag_set(set); 4986 + ctrl->connect_q = NULL; 4988 4987 return ret; 4989 4988 } 4990 4989 EXPORT_SYMBOL_GPL(nvme_alloc_io_tag_set);
+3 -1
drivers/nvme/target/fc.c
··· 1685 1685 else { 1686 1686 queue = nvmet_fc_alloc_target_queue(iod->assoc, 0, 1687 1687 be16_to_cpu(rqst->assoc_cmd.sqsize)); 1688 - if (!queue) 1688 + if (!queue) { 1689 1689 ret = VERR_QUEUE_ALLOC_FAIL; 1690 + nvmet_fc_tgt_a_put(iod->assoc); 1691 + } 1690 1692 } 1691 1693 } 1692 1694
+3
drivers/nvmem/brcm_nvram.c
··· 98 98 len = le32_to_cpu(header.len); 99 99 100 100 data = kzalloc(len, GFP_KERNEL); 101 + if (!data) 102 + return -ENOMEM; 103 + 101 104 memcpy_fromio(data, priv->base, len); 102 105 data[len - 1] = '\0'; 103 106
+30 -30
drivers/nvmem/core.c
··· 770 770 return ERR_PTR(rval); 771 771 } 772 772 773 - if (config->wp_gpio) 774 - nvmem->wp_gpio = config->wp_gpio; 775 - else if (!config->ignore_wp) 773 + nvmem->id = rval; 774 + 775 + nvmem->dev.type = &nvmem_provider_type; 776 + nvmem->dev.bus = &nvmem_bus_type; 777 + nvmem->dev.parent = config->dev; 778 + 779 + device_initialize(&nvmem->dev); 780 + 781 + if (!config->ignore_wp) 776 782 nvmem->wp_gpio = gpiod_get_optional(config->dev, "wp", 777 783 GPIOD_OUT_HIGH); 778 784 if (IS_ERR(nvmem->wp_gpio)) { 779 - ida_free(&nvmem_ida, nvmem->id); 780 785 rval = PTR_ERR(nvmem->wp_gpio); 781 - kfree(nvmem); 782 - return ERR_PTR(rval); 786 + nvmem->wp_gpio = NULL; 787 + goto err_put_device; 783 788 } 784 789 785 790 kref_init(&nvmem->refcnt); 786 791 INIT_LIST_HEAD(&nvmem->cells); 787 792 788 - nvmem->id = rval; 789 793 nvmem->owner = config->owner; 790 794 if (!nvmem->owner && config->dev->driver) 791 795 nvmem->owner = config->dev->driver->owner; 792 796 nvmem->stride = config->stride ?: 1; 793 797 nvmem->word_size = config->word_size ?: 1; 794 798 nvmem->size = config->size; 795 - nvmem->dev.type = &nvmem_provider_type; 796 - nvmem->dev.bus = &nvmem_bus_type; 797 - nvmem->dev.parent = config->dev; 798 799 nvmem->root_only = config->root_only; 799 800 nvmem->priv = config->priv; 800 801 nvmem->type = config->type; ··· 823 822 break; 824 823 } 825 824 826 - if (rval) { 827 - ida_free(&nvmem_ida, nvmem->id); 828 - kfree(nvmem); 829 - return ERR_PTR(rval); 830 - } 825 + if (rval) 826 + goto err_put_device; 831 827 832 828 nvmem->read_only = device_property_present(config->dev, "read-only") || 833 829 config->read_only || !nvmem->reg_write; ··· 833 835 nvmem->dev.groups = nvmem_dev_groups; 834 836 #endif 835 837 836 - dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name); 837 - 838 - rval = device_register(&nvmem->dev); 839 - if (rval) 840 - goto err_put_device; 841 - 842 838 if (nvmem->nkeepout) { 843 839 rval = nvmem_validate_keepouts(nvmem); 844 840 if 
(rval) 845 - goto err_device_del; 841 + goto err_put_device; 846 842 } 847 843 848 844 if (config->compat) { 849 845 rval = nvmem_sysfs_setup_compat(nvmem, config); 850 846 if (rval) 851 - goto err_device_del; 847 + goto err_put_device; 852 848 } 853 849 854 850 if (config->cells) { 855 851 rval = nvmem_add_cells(nvmem, config->cells, config->ncells); 856 852 if (rval) 857 - goto err_teardown_compat; 853 + goto err_remove_cells; 858 854 } 859 855 860 856 rval = nvmem_add_cells_from_table(nvmem); ··· 859 867 if (rval) 860 868 goto err_remove_cells; 861 869 870 + dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name); 871 + 872 + rval = device_add(&nvmem->dev); 873 + if (rval) 874 + goto err_remove_cells; 875 + 862 876 blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem); 863 877 864 878 return nvmem; 865 879 866 880 err_remove_cells: 867 881 nvmem_device_remove_all_cells(nvmem); 868 - err_teardown_compat: 869 882 if (config->compat) 870 883 nvmem_sysfs_remove_compat(nvmem, config); 871 - err_device_del: 872 - device_del(&nvmem->dev); 873 884 err_put_device: 874 885 put_device(&nvmem->dev); 875 886 ··· 1237 1242 if (!cell_np) 1238 1243 return ERR_PTR(-ENOENT); 1239 1244 1240 - nvmem_np = of_get_next_parent(cell_np); 1241 - if (!nvmem_np) 1245 + nvmem_np = of_get_parent(cell_np); 1246 + if (!nvmem_np) { 1247 + of_node_put(cell_np); 1242 1248 return ERR_PTR(-EINVAL); 1249 + } 1243 1250 1244 1251 nvmem = __nvmem_device_get(nvmem_np, device_match_of_node); 1245 1252 of_node_put(nvmem_np); 1246 - if (IS_ERR(nvmem)) 1253 + if (IS_ERR(nvmem)) { 1254 + of_node_put(cell_np); 1247 1255 return ERR_CAST(nvmem); 1256 + } 1248 1257 1249 1258 cell_entry = nvmem_find_cell_entry_by_node(nvmem, cell_np); 1259 + of_node_put(cell_np); 1250 1260 if (!cell_entry) { 1251 1261 __nvmem_device_put(nvmem); 1252 1262 return ERR_PTR(-ENOENT);
+1
drivers/nvmem/qcom-spmi-sdam.c
··· 166 166 { .compatible = "qcom,spmi-sdam" }, 167 167 {}, 168 168 }; 169 + MODULE_DEVICE_TABLE(of, sdam_match_table); 169 170 170 171 static struct platform_driver sdam_driver = { 171 172 .driver = {
+14 -1
drivers/nvmem/sunxi_sid.c
··· 41 41 void *val, size_t bytes) 42 42 { 43 43 struct sunxi_sid *sid = context; 44 + u32 word; 44 45 45 - memcpy_fromio(val, sid->base + sid->value_offset + offset, bytes); 46 + /* .stride = 4 so offset is guaranteed to be aligned */ 47 + __ioread32_copy(val, sid->base + sid->value_offset + offset, bytes / 4); 48 + 49 + val += round_down(bytes, 4); 50 + offset += round_down(bytes, 4); 51 + bytes = bytes % 4; 52 + 53 + if (!bytes) 54 + return 0; 55 + 56 + /* Handle any trailing bytes */ 57 + word = readl_relaxed(sid->base + sid->value_offset + offset); 58 + memcpy(val, &word, bytes); 46 59 47 60 return 0; 48 61 }
+15 -6
drivers/of/address.c
··· 965 965 } 966 966 967 967 of_dma_range_parser_init(&parser, node); 968 - for_each_of_range(&parser, &range) 968 + for_each_of_range(&parser, &range) { 969 + if (range.cpu_addr == OF_BAD_ADDR) { 970 + pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n", 971 + range.bus_addr, node); 972 + continue; 973 + } 969 974 num_ranges++; 975 + } 976 + 977 + if (!num_ranges) { 978 + ret = -EINVAL; 979 + goto out; 980 + } 970 981 971 982 r = kcalloc(num_ranges + 1, sizeof(*r), GFP_KERNEL); 972 983 if (!r) { ··· 986 975 } 987 976 988 977 /* 989 - * Record all info in the generic DMA ranges array for struct device. 978 + * Record all info in the generic DMA ranges array for struct device, 979 + * returning an error if we don't find any parsable ranges. 990 980 */ 991 981 *map = r; 992 982 of_dma_range_parser_init(&parser, node); 993 983 for_each_of_range(&parser, &range) { 994 984 pr_debug("dma_addr(%llx) cpu_addr(%llx) size(%llx)\n", 995 985 range.bus_addr, range.cpu_addr, range.size); 996 - if (range.cpu_addr == OF_BAD_ADDR) { 997 - pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n", 998 - range.bus_addr, node); 986 + if (range.cpu_addr == OF_BAD_ADDR) 999 987 continue; 1000 - } 1001 988 r->cpu_start = range.cpu_addr; 1002 989 r->dma_start = range.bus_addr; 1003 990 r->size = range.size;
+1 -5
drivers/of/fdt.c
··· 26 26 #include <linux/serial_core.h> 27 27 #include <linux/sysfs.h> 28 28 #include <linux/random.h> 29 - #include <linux/kmemleak.h> 30 29 31 30 #include <asm/setup.h> /* for COMMAND_LINE_SIZE */ 32 31 #include <asm/page.h> ··· 524 525 size = dt_mem_next_cell(dt_root_size_cells, &prop); 525 526 526 527 if (size && 527 - early_init_dt_reserve_memory(base, size, nomap) == 0) { 528 + early_init_dt_reserve_memory(base, size, nomap) == 0) 528 529 pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n", 529 530 uname, &base, (unsigned long)(size / SZ_1M)); 530 - if (!nomap) 531 - kmemleak_alloc_phys(base, size, 0); 532 - } 533 531 else 534 532 pr_err("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n", 535 533 uname, &base, (unsigned long)(size / SZ_1M));
+10 -2
drivers/of/platform.c
··· 525 525 if (IS_ENABLED(CONFIG_PPC)) { 526 526 struct device_node *boot_display = NULL; 527 527 struct platform_device *dev; 528 + int display_number = 0; 528 529 int ret; 529 530 530 531 /* Check if we have a MacOS display without a node spec */ ··· 556 555 if (!of_get_property(node, "linux,opened", NULL) || 557 556 !of_get_property(node, "linux,boot-display", NULL)) 558 557 continue; 559 - dev = of_platform_device_create(node, "of-display", NULL); 558 + dev = of_platform_device_create(node, "of-display.0", NULL); 559 + of_node_put(node); 560 560 if (WARN_ON(!dev)) 561 561 return -ENOMEM; 562 562 boot_display = node; 563 + display_number++; 563 564 break; 564 565 } 565 566 for_each_node_by_type(node, "display") { 567 + char buf[14]; 568 + const char *of_display_format = "of-display.%d"; 569 + 566 570 if (!of_get_property(node, "linux,opened", NULL) || node == boot_display) 567 571 continue; 568 - of_platform_device_create(node, "of-display", NULL); 572 + ret = snprintf(buf, sizeof(buf), of_display_format, display_number++); 573 + if (ret < sizeof(buf)) 574 + of_platform_device_create(node, buf, NULL); 569 575 } 570 576 571 577 } else {
+3 -6
drivers/parisc/pdc_stable.c
··· 274 274 275 275 /* We'll use a local copy of buf */ 276 276 count = min_t(size_t, count, sizeof(in)-1); 277 - strncpy(in, buf, count); 278 - in[count] = '\0'; 277 + strscpy(in, buf, count + 1); 279 278 280 279 /* Let's clean up the target. 0xff is a blank pattern */ 281 280 memset(&hwpath, 0xff, sizeof(hwpath)); ··· 387 388 388 389 /* We'll use a local copy of buf */ 389 390 count = min_t(size_t, count, sizeof(in)-1); 390 - strncpy(in, buf, count); 391 - in[count] = '\0'; 391 + strscpy(in, buf, count + 1); 392 392 393 393 /* Let's clean up the target. 0 is a blank pattern */ 394 394 memset(&layers, 0, sizeof(layers)); ··· 754 756 755 757 /* We'll use a local copy of buf */ 756 758 count = min_t(size_t, count, sizeof(in)-1); 757 - strncpy(in, buf, count); 758 - in[count] = '\0'; 759 + strscpy(in, buf, count + 1); 759 760 760 761 /* Current flags are stored in primary boot path entry */ 761 762 pathentry = &pdcspath_entry_primary;
+26 -20
drivers/rtc/rtc-efi.c
··· 188 188 189 189 static int efi_procfs(struct device *dev, struct seq_file *seq) 190 190 { 191 - efi_time_t eft, alm; 192 - efi_time_cap_t cap; 193 - efi_bool_t enabled, pending; 191 + efi_time_t eft, alm; 192 + efi_time_cap_t cap; 193 + efi_bool_t enabled, pending; 194 + struct rtc_device *rtc = dev_get_drvdata(dev); 194 195 195 196 memset(&eft, 0, sizeof(eft)); 196 197 memset(&alm, 0, sizeof(alm)); ··· 214 213 /* XXX fixme: convert to string? */ 215 214 seq_printf(seq, "Timezone\t: %u\n", eft.timezone); 216 215 217 - seq_printf(seq, 218 - "Alarm Time\t: %u:%u:%u.%09u\n" 219 - "Alarm Date\t: %u-%u-%u\n" 220 - "Alarm Daylight\t: %u\n" 221 - "Enabled\t\t: %s\n" 222 - "Pending\t\t: %s\n", 223 - alm.hour, alm.minute, alm.second, alm.nanosecond, 224 - alm.year, alm.month, alm.day, 225 - alm.daylight, 226 - enabled == 1 ? "yes" : "no", 227 - pending == 1 ? "yes" : "no"); 216 + if (test_bit(RTC_FEATURE_ALARM, rtc->features)) { 217 + seq_printf(seq, 218 + "Alarm Time\t: %u:%u:%u.%09u\n" 219 + "Alarm Date\t: %u-%u-%u\n" 220 + "Alarm Daylight\t: %u\n" 221 + "Enabled\t\t: %s\n" 222 + "Pending\t\t: %s\n", 223 + alm.hour, alm.minute, alm.second, alm.nanosecond, 224 + alm.year, alm.month, alm.day, 225 + alm.daylight, 226 + enabled == 1 ? "yes" : "no", 227 + pending == 1 ? "yes" : "no"); 228 228 229 - if (eft.timezone == EFI_UNSPECIFIED_TIMEZONE) 230 - seq_puts(seq, "Timezone\t: unspecified\n"); 231 - else 232 - /* XXX fixme: convert to string? */ 233 - seq_printf(seq, "Timezone\t: %u\n", alm.timezone); 229 + if (eft.timezone == EFI_UNSPECIFIED_TIMEZONE) 230 + seq_puts(seq, "Timezone\t: unspecified\n"); 231 + else 232 + /* XXX fixme: convert to string? 
*/ 233 + seq_printf(seq, "Timezone\t: %u\n", alm.timezone); 234 + } 234 235 235 236 /* 236 237 * now prints the capabilities ··· 272 269 273 270 rtc->ops = &efi_rtc_ops; 274 271 clear_bit(RTC_FEATURE_UPDATE_INTERRUPT, rtc->features); 275 - set_bit(RTC_FEATURE_ALARM_WAKEUP_ONLY, rtc->features); 272 + if (efi_rt_services_supported(EFI_RT_SUPPORTED_WAKEUP_SERVICES)) 273 + set_bit(RTC_FEATURE_ALARM_WAKEUP_ONLY, rtc->features); 274 + else 275 + clear_bit(RTC_FEATURE_ALARM, rtc->features); 276 276 277 277 device_init_wakeup(&dev->dev, true); 278 278
+2 -2
drivers/rtc/rtc-sunplus.c
··· 240 240 if (IS_ERR(sp_rtc->reg_base)) 241 241 return dev_err_probe(&plat_dev->dev, PTR_ERR(sp_rtc->reg_base), 242 242 "%s devm_ioremap_resource fail\n", RTC_REG_NAME); 243 - dev_dbg(&plat_dev->dev, "res = 0x%x, reg_base = 0x%lx\n", 244 - sp_rtc->res->start, (unsigned long)sp_rtc->reg_base); 243 + dev_dbg(&plat_dev->dev, "res = %pR, reg_base = %p\n", 244 + sp_rtc->res, sp_rtc->reg_base); 245 245 246 246 sp_rtc->irq = platform_get_irq(plat_dev, 0); 247 247 if (sp_rtc->irq < 0)
+17 -4
drivers/tty/serial/8250/8250_dma.c
··· 43 43 struct uart_8250_dma *dma = p->dma; 44 44 struct tty_port *tty_port = &p->port.state->port; 45 45 struct dma_tx_state state; 46 + enum dma_status dma_status; 46 47 int count; 47 48 48 - dma->rx_running = 0; 49 - dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); 49 + /* 50 + * New DMA Rx can be started during the completion handler before it 51 + * could acquire the port's lock and it might still be ongoing. Don't do 52 + * anything in such a case. 53 + */ 54 + dma_status = dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); 55 + if (dma_status == DMA_IN_PROGRESS) 56 + return; 50 57 51 58 count = dma->rx_size - state.residue; 52 59 53 60 tty_insert_flip_string(tty_port, dma->rx_buf, count); 54 61 p->port.icount.rx += count; 62 + dma->rx_running = 0; 55 63 56 64 tty_flip_buffer_push(tty_port); 57 65 } ··· 70 62 struct uart_8250_dma *dma = p->dma; 71 63 unsigned long flags; 72 64 73 - __dma_rx_complete(p); 74 - 75 65 spin_lock_irqsave(&p->port.lock, flags); 66 + if (dma->rx_running) 67 + __dma_rx_complete(p); 68 + 69 + /* 70 + * Cannot be combined with the previous check because __dma_rx_complete() 71 + * changes dma->rx_running. 72 + */ 76 73 if (!dma->rx_running && (serial_lsr_in(p) & UART_LSR_DR)) 77 74 p->dma->rx_dma(p); 78 75 spin_unlock_irqrestore(&p->port.lock, flags);
+5 -28
drivers/tty/serial/stm32-usart.c
··· 797 797 spin_unlock(&port->lock); 798 798 } 799 799 800 - if (stm32_usart_rx_dma_enabled(port)) 801 - return IRQ_WAKE_THREAD; 802 - else 803 - return IRQ_HANDLED; 804 - } 805 - 806 - static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr) 807 - { 808 - struct uart_port *port = ptr; 809 - struct tty_port *tport = &port->state->port; 810 - struct stm32_port *stm32_port = to_stm32_port(port); 811 - unsigned int size; 812 - unsigned long flags; 813 - 814 800 /* Receiver timeout irq for DMA RX */ 815 - if (!stm32_port->throttled) { 816 - spin_lock_irqsave(&port->lock, flags); 801 + if (stm32_usart_rx_dma_enabled(port) && !stm32_port->throttled) { 802 + spin_lock(&port->lock); 817 803 size = stm32_usart_receive_chars(port, false); 818 - uart_unlock_and_check_sysrq_irqrestore(port, flags); 804 + uart_unlock_and_check_sysrq(port); 819 805 if (size) 820 806 tty_flip_buffer_push(tport); 821 807 } ··· 1001 1015 u32 val; 1002 1016 int ret; 1003 1017 1004 - ret = request_threaded_irq(port->irq, stm32_usart_interrupt, 1005 - stm32_usart_threaded_interrupt, 1006 - IRQF_ONESHOT | IRQF_NO_SUSPEND, 1007 - name, port); 1018 + ret = request_irq(port->irq, stm32_usart_interrupt, 1019 + IRQF_NO_SUSPEND, name, port); 1008 1020 if (ret) 1009 1021 return ret; 1010 1022 ··· 1584 1600 struct device *dev = &pdev->dev; 1585 1601 struct dma_slave_config config; 1586 1602 int ret; 1587 - 1588 - /* 1589 - * Using DMA and threaded handler for the console could lead to 1590 - * deadlocks. 1591 - */ 1592 - if (uart_console(port)) 1593 - return -ENODEV; 1594 1603 1595 1604 stm32port->rx_buf = dma_alloc_coherent(dev, RX_BUF_L, 1596 1605 &stm32port->rx_dma_buf,
+5 -4
drivers/tty/vt/vc_screen.c
··· 386 386 387 387 uni_mode = use_unicode(inode); 388 388 attr = use_attributes(inode); 389 - ret = -ENXIO; 390 - vc = vcs_vc(inode, &viewed); 391 - if (!vc) 392 - goto unlock_out; 393 389 394 390 ret = -EINVAL; 395 391 if (pos < 0) ··· 402 406 while (count) { 403 407 unsigned int this_round, skip = 0; 404 408 int size; 409 + 410 + ret = -ENXIO; 411 + vc = vcs_vc(inode, &viewed); 412 + if (!vc) 413 + goto unlock_out; 405 414 406 415 /* Check whether we are above size each round, 407 416 * as copy_to_user at the end of this loop
+1 -1
drivers/usb/dwc3/dwc3-qcom.c
··· 901 901 qcom->mode = usb_get_dr_mode(&qcom->dwc3->dev); 902 902 903 903 /* enable vbus override for device mode */ 904 - if (qcom->mode == USB_DR_MODE_PERIPHERAL) 904 + if (qcom->mode != USB_DR_MODE_HOST) 905 905 dwc3_qcom_vbus_override_enable(qcom, true); 906 906 907 907 /* register extcon to override sw_vbus on Vbus change later */
-1
drivers/usb/fotg210/fotg210-udc.c
··· 1014 1014 int ret; 1015 1015 1016 1016 /* hook up the driver */ 1017 - driver->driver.bus = NULL; 1018 1017 fotg210->driver = driver; 1019 1018 1020 1019 if (!IS_ERR_OR_NULL(fotg210->phy)) {
+3 -1
drivers/usb/gadget/function/f_fs.c
··· 279 279 struct usb_request *req = ffs->ep0req; 280 280 int ret; 281 281 282 - if (!req) 282 + if (!req) { 283 + spin_unlock_irq(&ffs->ev.waitq.lock); 283 284 return -EINVAL; 285 + } 284 286 285 287 req->zero = len < le16_to_cpu(ffs->ev.setup.wLength); 286 288
+1
drivers/usb/gadget/function/f_uac2.c
··· 1142 1142 } 1143 1143 std_as_out_if0_desc.bInterfaceNumber = ret; 1144 1144 std_as_out_if1_desc.bInterfaceNumber = ret; 1145 + std_as_out_if1_desc.bNumEndpoints = 1; 1145 1146 uac2->as_out_intf = ret; 1146 1147 uac2->as_out_alt = 0; 1147 1148
-1
drivers/usb/gadget/udc/bcm63xx_udc.c
··· 1830 1830 bcm63xx_select_phy_mode(udc, true); 1831 1831 1832 1832 udc->driver = driver; 1833 - driver->driver.bus = NULL; 1834 1833 udc->gadget.dev.of_node = udc->dev->of_node; 1835 1834 1836 1835 spin_unlock_irqrestore(&udc->lock, flags);
-1
drivers/usb/gadget/udc/fsl_qe_udc.c
··· 2285 2285 /* lock is needed but whether should use this lock or another */ 2286 2286 spin_lock_irqsave(&udc->lock, flags); 2287 2287 2288 - driver->driver.bus = NULL; 2289 2288 /* hook up the driver */ 2290 2289 udc->driver = driver; 2291 2290 udc->gadget.speed = driver->max_speed;
-1
drivers/usb/gadget/udc/fsl_udc_core.c
··· 1943 1943 /* lock is needed but whether should use this lock or another */ 1944 1944 spin_lock_irqsave(&udc_controller->lock, flags); 1945 1945 1946 - driver->driver.bus = NULL; 1947 1946 /* hook up the driver */ 1948 1947 udc_controller->driver = driver; 1949 1948 spin_unlock_irqrestore(&udc_controller->lock, flags);
-1
drivers/usb/gadget/udc/fusb300_udc.c
··· 1311 1311 struct fusb300 *fusb300 = to_fusb300(g); 1312 1312 1313 1313 /* hook up the driver */ 1314 - driver->driver.bus = NULL; 1315 1314 fusb300->driver = driver; 1316 1315 1317 1316 return 0;
-1
drivers/usb/gadget/udc/goku_udc.c
··· 1375 1375 struct goku_udc *dev = to_goku_udc(g); 1376 1376 1377 1377 /* hook up the driver */ 1378 - driver->driver.bus = NULL; 1379 1378 dev->driver = driver; 1380 1379 1381 1380 /*
-1
drivers/usb/gadget/udc/gr_udc.c
··· 1906 1906 spin_lock(&dev->lock); 1907 1907 1908 1908 /* Hook up the driver */ 1909 - driver->driver.bus = NULL; 1910 1909 dev->driver = driver; 1911 1910 1912 1911 /* Get ready for host detection */
-1
drivers/usb/gadget/udc/m66592-udc.c
··· 1454 1454 struct m66592 *m66592 = to_m66592(g); 1455 1455 1456 1456 /* hook up the driver */ 1457 - driver->driver.bus = NULL; 1458 1457 m66592->driver = driver; 1459 1458 1460 1459 m66592_bset(m66592, M66592_VBSE | M66592_URST, M66592_INTENB0);
-1
drivers/usb/gadget/udc/max3420_udc.c
··· 1108 1108 1109 1109 spin_lock_irqsave(&udc->lock, flags); 1110 1110 /* hook up the driver */ 1111 - driver->driver.bus = NULL; 1112 1111 udc->driver = driver; 1113 1112 udc->gadget.speed = USB_SPEED_FULL; 1114 1113
-1
drivers/usb/gadget/udc/mv_u3d_core.c
··· 1243 1243 } 1244 1244 1245 1245 /* hook up the driver ... */ 1246 - driver->driver.bus = NULL; 1247 1246 u3d->driver = driver; 1248 1247 1249 1248 u3d->ep0_dir = USB_DIR_OUT;
-1
drivers/usb/gadget/udc/mv_udc_core.c
··· 1359 1359 spin_lock_irqsave(&udc->lock, flags); 1360 1360 1361 1361 /* hook up the driver ... */ 1362 - driver->driver.bus = NULL; 1363 1362 udc->driver = driver; 1364 1363 1365 1364 udc->usb_state = USB_STATE_ATTACHED;
-1
drivers/usb/gadget/udc/net2272.c
··· 1451 1451 dev->ep[i].irqs = 0; 1452 1452 /* hook up the driver ... */ 1453 1453 dev->softconnect = 1; 1454 - driver->driver.bus = NULL; 1455 1454 dev->driver = driver; 1456 1455 1457 1456 /* ... then enable host detection and ep0; and we're ready
-1
drivers/usb/gadget/udc/net2280.c
··· 2423 2423 dev->ep[i].irqs = 0; 2424 2424 2425 2425 /* hook up the driver ... */ 2426 - driver->driver.bus = NULL; 2427 2426 dev->driver = driver; 2428 2427 2429 2428 retval = device_create_file(&dev->pdev->dev, &dev_attr_function);
-1
drivers/usb/gadget/udc/omap_udc.c
··· 2066 2066 udc->softconnect = 1; 2067 2067 2068 2068 /* hook up the driver */ 2069 - driver->driver.bus = NULL; 2070 2069 udc->driver = driver; 2071 2070 spin_unlock_irqrestore(&udc->lock, flags); 2072 2071
-1
drivers/usb/gadget/udc/pch_udc.c
··· 2908 2908 { 2909 2909 struct pch_udc_dev *dev = to_pch_udc(g); 2910 2910 2911 - driver->driver.bus = NULL; 2912 2911 dev->driver = driver; 2913 2912 2914 2913 /* get ready for ep0 traffic */
-1
drivers/usb/gadget/udc/snps_udc_core.c
··· 1933 1933 struct udc *dev = to_amd5536_udc(g); 1934 1934 u32 tmp; 1935 1935 1936 - driver->driver.bus = NULL; 1937 1936 dev->driver = driver; 1938 1937 1939 1938 /* Some gadget drivers use both ep0 directions.
+8 -1
drivers/usb/typec/ucsi/ucsi.c
··· 1269 1269 con->port = NULL; 1270 1270 } 1271 1271 1272 + kfree(ucsi->connector); 1273 + ucsi->connector = NULL; 1274 + 1272 1275 err_reset: 1273 1276 memset(&ucsi->cap, 0, sizeof(ucsi->cap)); 1274 1277 ucsi_reset_ppm(ucsi); ··· 1303 1300 1304 1301 int ucsi_resume(struct ucsi *ucsi) 1305 1302 { 1306 - queue_work(system_long_wq, &ucsi->resume_work); 1303 + if (ucsi->connector) 1304 + queue_work(system_long_wq, &ucsi->resume_work); 1307 1305 return 0; 1308 1306 } 1309 1307 EXPORT_SYMBOL_GPL(ucsi_resume); ··· 1423 1419 1424 1420 /* Disable notifications */ 1425 1421 ucsi->ops->async_write(ucsi, UCSI_CONTROL, &cmd, sizeof(cmd)); 1422 + 1423 + if (!ucsi->connector) 1424 + return; 1426 1425 1427 1426 for (i = 0; i < ucsi->cap.num_connectors; i++) { 1428 1427 cancel_work_sync(&ucsi->connector[i].work);
+1 -21
drivers/video/fbdev/atmel_lcdfb.c
··· 49 49 struct clk *lcdc_clk; 50 50 51 51 struct backlight_device *backlight; 52 - u8 bl_power; 53 52 u8 saved_lcdcon; 54 53 55 54 u32 pseudo_palette[16]; ··· 108 109 static int atmel_bl_update_status(struct backlight_device *bl) 109 110 { 110 111 struct atmel_lcdfb_info *sinfo = bl_get_data(bl); 111 - int power = sinfo->bl_power; 112 - int brightness = bl->props.brightness; 113 - 114 - /* REVISIT there may be a meaningful difference between 115 - * fb_blank and power ... there seem to be some cases 116 - * this doesn't handle correctly. 117 - */ 118 - if (bl->props.fb_blank != sinfo->bl_power) 119 - power = bl->props.fb_blank; 120 - else if (bl->props.power != sinfo->bl_power) 121 - power = bl->props.power; 122 - 123 - if (brightness < 0 && power == FB_BLANK_UNBLANK) 124 - brightness = lcdc_readl(sinfo, ATMEL_LCDC_CONTRAST_VAL); 125 - else if (power != FB_BLANK_UNBLANK) 126 - brightness = 0; 112 + int brightness = backlight_get_brightness(bl); 127 113 128 114 lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_VAL, brightness); 129 115 if (contrast_ctr & ATMEL_LCDC_POL_POSITIVE) ··· 116 132 brightness ? contrast_ctr : 0); 117 133 else 118 134 lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_CTR, contrast_ctr); 119 - 120 - bl->props.fb_blank = bl->props.power = sinfo->bl_power = power; 121 135 122 136 return 0; 123 137 } ··· 136 154 { 137 155 struct backlight_properties props; 138 156 struct backlight_device *bl; 139 - 140 - sinfo->bl_power = FB_BLANK_UNBLANK; 141 157 142 158 if (sinfo->backlight) 143 159 return;
+2 -4
drivers/video/fbdev/aty/aty128fb.c
··· 1766 1766 unsigned int reg = aty_ld_le32(LVDS_GEN_CNTL); 1767 1767 int level; 1768 1768 1769 - if (bd->props.power != FB_BLANK_UNBLANK || 1770 - bd->props.fb_blank != FB_BLANK_UNBLANK || 1771 - !par->lcd_on) 1769 + if (!par->lcd_on) 1772 1770 level = 0; 1773 1771 else 1774 - level = bd->props.brightness; 1772 + level = backlight_get_brightness(bd); 1775 1773 1776 1774 reg |= LVDS_BL_MOD_EN | LVDS_BLON; 1777 1775 if (level > 0) {
+1 -7
drivers/video/fbdev/aty/atyfb_base.c
··· 2219 2219 { 2220 2220 struct atyfb_par *par = bl_get_data(bd); 2221 2221 unsigned int reg = aty_ld_lcd(LCD_MISC_CNTL, par); 2222 - int level; 2223 - 2224 - if (bd->props.power != FB_BLANK_UNBLANK || 2225 - bd->props.fb_blank != FB_BLANK_UNBLANK) 2226 - level = 0; 2227 - else 2228 - level = bd->props.brightness; 2222 + int level = backlight_get_brightness(bd); 2229 2223 2230 2224 reg |= (BLMOD_EN | BIASMOD_EN); 2231 2225 if (level > 0) {
+1 -5
drivers/video/fbdev/aty/radeon_backlight.c
··· 57 57 * backlight. This provides some greater power saving and the display 58 58 * is useless without backlight anyway. 59 59 */ 60 - if (bd->props.power != FB_BLANK_UNBLANK || 61 - bd->props.fb_blank != FB_BLANK_UNBLANK) 62 - level = 0; 63 - else 64 - level = bd->props.brightness; 60 + level = backlight_get_brightness(bd); 65 61 66 62 del_timer_sync(&rinfo->lvds_timer); 67 63 radeon_engine_idle();
+5 -2
drivers/video/fbdev/core/fbcon.c
··· 2495 2495 h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres)) 2496 2496 return -EINVAL; 2497 2497 2498 + if (font->width > 32 || font->height > 32) 2499 + return -EINVAL; 2500 + 2498 2501 /* Make sure drawing engine can handle the font */ 2499 - if (!(info->pixmap.blit_x & (1 << (font->width - 1))) || 2500 - !(info->pixmap.blit_y & (1 << (font->height - 1)))) 2502 + if (!(info->pixmap.blit_x & BIT(font->width - 1)) || 2503 + !(info->pixmap.blit_y & BIT(font->height - 1))) 2501 2504 return -EINVAL; 2502 2505 2503 2506 /* Make sure driver can handle the font length */
+1 -1
drivers/video/fbdev/core/fbmon.c
··· 1050 1050 } 1051 1051 1052 1052 /** 1053 - * fb_get_hblank_by_freq - get horizontal blank time given hfreq 1053 + * fb_get_hblank_by_hfreq - get horizontal blank time given hfreq 1054 1054 * @hfreq: horizontal freq 1055 1055 * @xres: horizontal resolution in pixels 1056 1056 *
+1 -6
drivers/video/fbdev/mx3fb.c
··· 283 283 static int mx3fb_bl_update_status(struct backlight_device *bl) 284 284 { 285 285 struct mx3fb_data *fbd = bl_get_data(bl); 286 - int brightness = bl->props.brightness; 287 - 288 - if (bl->props.power != FB_BLANK_UNBLANK) 289 - brightness = 0; 290 - if (bl->props.fb_blank != FB_BLANK_UNBLANK) 291 - brightness = 0; 286 + int brightness = backlight_get_brightness(bl); 292 287 293 288 fbd->backlight_level = (fbd->backlight_level & ~0xFF) | brightness; 294 289
+1 -7
drivers/video/fbdev/nvidia/nv_backlight.c
··· 49 49 { 50 50 struct nvidia_par *par = bl_get_data(bd); 51 51 u32 tmp_pcrt, tmp_pmc, fpcontrol; 52 - int level; 52 + int level = backlight_get_brightness(bd); 53 53 54 54 if (!par->FlatPanel) 55 55 return 0; 56 - 57 - if (bd->props.power != FB_BLANK_UNBLANK || 58 - bd->props.fb_blank != FB_BLANK_UNBLANK) 59 - level = 0; 60 - else 61 - level = bd->props.brightness; 62 56 63 57 tmp_pmc = NV_RD32(par->PMC, 0x10F0) & 0x0000FFFF; 64 58 tmp_pcrt = NV_RD32(par->PCRTC0, 0x081C) & 0xFFFFFFFC;
+1 -7
drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c
··· 331 331 struct panel_drv_data *ddata = dev_get_drvdata(&dev->dev); 332 332 struct omap_dss_device *in = ddata->in; 333 333 int r; 334 - int level; 335 - 336 - if (dev->props.fb_blank == FB_BLANK_UNBLANK && 337 - dev->props.power == FB_BLANK_UNBLANK) 338 - level = dev->props.brightness; 339 - else 340 - level = 0; 334 + int level = backlight_get_brightness(dev); 341 335 342 336 dev_dbg(&ddata->pdev->dev, "update brightness to %d\n", level); 343 337
+4 -3
drivers/video/fbdev/omap2/omapfb/dss/display-sysfs.c
··· 10 10 #define DSS_SUBSYS_NAME "DISPLAY" 11 11 12 12 #include <linux/kernel.h> 13 + #include <linux/kstrtox.h> 13 14 #include <linux/module.h> 14 15 #include <linux/platform_device.h> 15 16 #include <linux/sysfs.h> ··· 37 36 int r; 38 37 bool enable; 39 38 40 - r = strtobool(buf, &enable); 39 + r = kstrtobool(buf, &enable); 41 40 if (r) 42 41 return r; 43 42 ··· 74 73 if (!dssdev->driver->enable_te || !dssdev->driver->get_te) 75 74 return -ENOENT; 76 75 77 - r = strtobool(buf, &te); 76 + r = kstrtobool(buf, &te); 78 77 if (r) 79 78 return r; 80 79 ··· 184 183 if (!dssdev->driver->set_mirror || !dssdev->driver->get_mirror) 185 184 return -ENOENT; 186 185 187 - r = strtobool(buf, &mirror); 186 + r = kstrtobool(buf, &mirror); 188 187 if (r) 189 188 return r; 190 189
+4 -3
drivers/video/fbdev/omap2/omapfb/dss/manager-sysfs.c
··· 10 10 #define DSS_SUBSYS_NAME "MANAGER" 11 11 12 12 #include <linux/kernel.h> 13 + #include <linux/kstrtox.h> 13 14 #include <linux/slab.h> 14 15 #include <linux/module.h> 15 16 #include <linux/platform_device.h> ··· 247 246 bool enable; 248 247 int r; 249 248 250 - r = strtobool(buf, &enable); 249 + r = kstrtobool(buf, &enable); 251 250 if (r) 252 251 return r; 253 252 ··· 291 290 if(!dss_has_feature(FEAT_ALPHA_FIXED_ZORDER)) 292 291 return -ENODEV; 293 292 294 - r = strtobool(buf, &enable); 293 + r = kstrtobool(buf, &enable); 295 294 if (r) 296 295 return r; 297 296 ··· 330 329 if (!dss_has_feature(FEAT_CPR)) 331 330 return -ENODEV; 332 331 333 - r = strtobool(buf, &enable); 332 + r = kstrtobool(buf, &enable); 334 333 if (r) 335 334 return r; 336 335
+2 -1
drivers/video/fbdev/omap2/omapfb/dss/overlay-sysfs.c
··· 13 13 #include <linux/err.h> 14 14 #include <linux/sysfs.h> 15 15 #include <linux/kobject.h> 16 + #include <linux/kstrtox.h> 16 17 #include <linux/platform_device.h> 17 18 18 19 #include <video/omapfb_dss.h> ··· 211 210 int r; 212 211 bool enable; 213 212 214 - r = strtobool(buf, &enable); 213 + r = kstrtobool(buf, &enable); 215 214 if (r) 216 215 return r; 217 216
+2 -1
drivers/video/fbdev/omap2/omapfb/omapfb-sysfs.c
··· 15 15 #include <linux/uaccess.h> 16 16 #include <linux/platform_device.h> 17 17 #include <linux/kernel.h> 18 + #include <linux/kstrtox.h> 18 19 #include <linux/mm.h> 19 20 #include <linux/omapfb.h> 20 21 ··· 97 96 int r; 98 97 struct fb_var_screeninfo new_var; 99 98 100 - r = strtobool(buf, &mirror); 99 + r = kstrtobool(buf, &mirror); 101 100 if (r) 102 101 return r; 103 102
+1 -7
drivers/video/fbdev/riva/fbdev.c
··· 293 293 { 294 294 struct riva_par *par = bl_get_data(bd); 295 295 U032 tmp_pcrt, tmp_pmc; 296 - int level; 297 - 298 - if (bd->props.power != FB_BLANK_UNBLANK || 299 - bd->props.fb_blank != FB_BLANK_UNBLANK) 300 - level = 0; 301 - else 302 - level = bd->props.brightness; 296 + int level = backlight_get_brightness(bd); 303 297 304 298 tmp_pmc = NV_RD32(par->riva.PMC, 0x10F0) & 0x0000FFFF; 305 299 tmp_pcrt = NV_RD32(par->riva.PCRTC0, 0x081C) & 0xFFFFFFFC;
+11 -3
fs/btrfs/raid56.c
··· 1426 1426 u32 bio_size = 0; 1427 1427 struct bio_vec *bvec; 1428 1428 struct bvec_iter_all iter_all; 1429 + int i; 1429 1430 1430 1431 bio_for_each_segment_all(bvec, bio, iter_all) 1431 1432 bio_size += bvec->bv_len; 1432 1433 1433 - bitmap_set(rbio->error_bitmap, total_sector_nr, 1434 - bio_size >> rbio->bioc->fs_info->sectorsize_bits); 1434 + /* 1435 + * Since we can have multiple bios touching the error_bitmap, we cannot 1436 + * call bitmap_set() without protection. 1437 + * 1438 + * Instead use set_bit() for each bit, as set_bit() itself is atomic. 1439 + */ 1440 + for (i = total_sector_nr; i < total_sector_nr + 1441 + (bio_size >> rbio->bioc->fs_info->sectorsize_bits); i++) 1442 + set_bit(i, rbio->error_bitmap); 1435 1443 } 1436 1444 1437 1445 /* Verify the data sectors at read time. */ ··· 1894 1886 sector->uptodate = 1; 1895 1887 } 1896 1888 if (failb >= 0) { 1897 - ret = verify_one_sector(rbio, faila, sector_nr); 1889 + ret = verify_one_sector(rbio, failb, sector_nr); 1898 1890 if (ret < 0) 1899 1891 goto cleanup; 1900 1892
+3 -3
fs/btrfs/send.c
··· 8073 8073 /* 8074 8074 * Check that we don't overflow at later allocations, we request 8075 8075 * clone_sources_count + 1 items, and compare to unsigned long inside 8076 - * access_ok. 8076 + * access_ok. Also set an upper limit for allocation size so this can't 8077 + * easily exhaust memory. Max number of clone sources is about 200K. 8077 8078 */ 8078 - if (arg->clone_sources_count > 8079 - ULONG_MAX / sizeof(struct clone_root) - 1) { 8079 + if (arg->clone_sources_count > SZ_8M / sizeof(struct clone_root)) { 8080 8080 ret = -EINVAL; 8081 8081 goto out; 8082 8082 }
+5 -1
fs/btrfs/volumes.c
··· 1600 1600 if (ret < 0) 1601 1601 goto out; 1602 1602 1603 - while (1) { 1603 + while (search_start < search_end) { 1604 1604 l = path->nodes[0]; 1605 1605 slot = path->slots[0]; 1606 1606 if (slot >= btrfs_header_nritems(l)) { ··· 1622 1622 1623 1623 if (key.type != BTRFS_DEV_EXTENT_KEY) 1624 1624 goto next; 1625 + 1626 + if (key.offset > search_end) 1627 + break; 1625 1628 1626 1629 if (key.offset > search_start) { 1627 1630 hole_size = key.offset - search_start; ··· 1686 1683 else 1687 1684 ret = 0; 1688 1685 1686 + ASSERT(max_hole_start + max_hole_size <= search_end); 1689 1687 out: 1690 1688 btrfs_free_path(path); 1691 1689 *start = max_hole_start;
+1 -1
fs/btrfs/zlib.c
··· 63 63 64 64 workspacesize = max(zlib_deflate_workspacesize(MAX_WBITS, MAX_MEM_LEVEL), 65 65 zlib_inflate_workspacesize()); 66 - workspace->strm.workspace = kvmalloc(workspacesize, GFP_KERNEL); 66 + workspace->strm.workspace = kvzalloc(workspacesize, GFP_KERNEL); 67 67 workspace->level = level; 68 68 workspace->buf = NULL; 69 69 /*
+15 -2
fs/ceph/addr.c
··· 305 305 struct inode *inode = rreq->inode; 306 306 struct ceph_inode_info *ci = ceph_inode(inode); 307 307 struct ceph_fs_client *fsc = ceph_inode_to_client(inode); 308 - struct ceph_osd_request *req; 308 + struct ceph_osd_request *req = NULL; 309 309 struct ceph_vino vino = ceph_vino(inode); 310 310 struct iov_iter iter; 311 311 struct page **pages; 312 312 size_t page_off; 313 313 int err = 0; 314 314 u64 len = subreq->len; 315 + 316 + if (ceph_inode_is_shutdown(inode)) { 317 + err = -EIO; 318 + goto out; 319 + } 315 320 316 321 if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq)) 317 322 return; ··· 567 562 bool caching = ceph_is_cache_enabled(inode); 568 563 569 564 dout("writepage %p idx %lu\n", page, page->index); 565 + 566 + if (ceph_inode_is_shutdown(inode)) 567 + return -EIO; 570 568 571 569 /* verify this is a writeable snap context */ 572 570 snapc = page_snap_context(page); ··· 1651 1643 struct ceph_inode_info *ci = ceph_inode(inode); 1652 1644 struct ceph_fs_client *fsc = ceph_inode_to_client(inode); 1653 1645 struct ceph_osd_request *req = NULL; 1654 - struct ceph_cap_flush *prealloc_cf; 1646 + struct ceph_cap_flush *prealloc_cf = NULL; 1655 1647 struct folio *folio = NULL; 1656 1648 u64 inline_version = CEPH_INLINE_NONE; 1657 1649 struct page *pages[1]; ··· 1664 1656 1665 1657 dout("uninline_data %p %llx.%llx inline_version %llu\n", 1666 1658 inode, ceph_vinop(inode), inline_version); 1659 + 1660 + if (ceph_inode_is_shutdown(inode)) { 1661 + err = -EIO; 1662 + goto out; 1663 + } 1667 1664 1668 1665 if (inline_version == CEPH_INLINE_NONE) 1669 1666 return 0;
+13 -3
fs/ceph/caps.c
··· 4078 4078 void *p, *end; 4079 4079 struct cap_extra_info extra_info = {}; 4080 4080 bool queue_trunc; 4081 + bool close_sessions = false; 4081 4082 4082 4083 dout("handle_caps from mds%d\n", session->s_mds); 4083 4084 ··· 4216 4215 realm = NULL; 4217 4216 if (snaptrace_len) { 4218 4217 down_write(&mdsc->snap_rwsem); 4219 - ceph_update_snap_trace(mdsc, snaptrace, 4220 - snaptrace + snaptrace_len, 4221 - false, &realm); 4218 + if (ceph_update_snap_trace(mdsc, snaptrace, 4219 + snaptrace + snaptrace_len, 4220 + false, &realm)) { 4221 + up_write(&mdsc->snap_rwsem); 4222 + close_sessions = true; 4223 + goto done; 4224 + } 4222 4225 downgrade_write(&mdsc->snap_rwsem); 4223 4226 } else { 4224 4227 down_read(&mdsc->snap_rwsem); ··· 4282 4277 iput(inode); 4283 4278 out: 4284 4279 ceph_put_string(extra_info.pool_ns); 4280 + 4281 + /* Defer closing the sessions after s_mutex lock being released */ 4282 + if (close_sessions) 4283 + ceph_mdsc_close_sessions(mdsc); 4284 + 4285 4285 return; 4286 4286 4287 4287 flush_cap_releases:
+3
fs/ceph/file.c
··· 2011 2011 loff_t zero = 0; 2012 2012 int op; 2013 2013 2014 + if (ceph_inode_is_shutdown(inode)) 2015 + return -EIO; 2016 + 2014 2017 if (!length) { 2015 2018 op = offset ? CEPH_OSD_OP_DELETE : CEPH_OSD_OP_TRUNCATE; 2016 2019 length = &zero;
+27 -3
fs/ceph/mds_client.c
··· 806 806 { 807 807 struct ceph_mds_session *s; 808 808 809 + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) 810 + return ERR_PTR(-EIO); 811 + 809 812 if (mds >= mdsc->mdsmap->possible_max_rank) 810 813 return ERR_PTR(-EINVAL); 811 814 ··· 1480 1477 struct ceph_msg *msg; 1481 1478 int mstate; 1482 1479 int mds = session->s_mds; 1480 + 1481 + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) 1482 + return -EIO; 1483 1483 1484 1484 /* wait for mds to go active? */ 1485 1485 mstate = ceph_mdsmap_get_state(mdsc->mdsmap, mds); ··· 2866 2860 return; 2867 2861 } 2868 2862 2863 + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) { 2864 + dout("do_request metadata corrupted\n"); 2865 + err = -EIO; 2866 + goto finish; 2867 + } 2869 2868 if (req->r_timeout && 2870 2869 time_after_eq(jiffies, req->r_started + req->r_timeout)) { 2871 2870 dout("do_request timed out\n"); ··· 3256 3245 u64 tid; 3257 3246 int err, result; 3258 3247 int mds = session->s_mds; 3248 + bool close_sessions = false; 3259 3249 3260 3250 if (msg->front.iov_len < sizeof(*head)) { 3261 3251 pr_err("mdsc_handle_reply got corrupt (short) reply\n"); ··· 3363 3351 realm = NULL; 3364 3352 if (rinfo->snapblob_len) { 3365 3353 down_write(&mdsc->snap_rwsem); 3366 - ceph_update_snap_trace(mdsc, rinfo->snapblob, 3354 + err = ceph_update_snap_trace(mdsc, rinfo->snapblob, 3367 3355 rinfo->snapblob + rinfo->snapblob_len, 3368 3356 le32_to_cpu(head->op) == CEPH_MDS_OP_RMSNAP, 3369 3357 &realm); 3358 + if (err) { 3359 + up_write(&mdsc->snap_rwsem); 3360 + close_sessions = true; 3361 + if (err == -EIO) 3362 + ceph_msg_dump(msg); 3363 + goto out_err; 3364 + } 3370 3365 downgrade_write(&mdsc->snap_rwsem); 3371 3366 } else { 3372 3367 down_read(&mdsc->snap_rwsem); ··· 3431 3412 req->r_end_latency, err); 3432 3413 out: 3433 3414 ceph_mdsc_put_request(req); 3415 + 3416 + /* Defer closing the sessions after s_mutex lock being released */ 3417 + if (close_sessions) 3418 + ceph_mdsc_close_sessions(mdsc); 3434 3419 return; 3435 3420 } 3436 3421 ··· 5034 5011 } 5035 5012 5036 5013 /* 5037 - * called after sb is ro. 5014 + * called after sb is ro or when metadata corrupted. 5038 5015 */ 5039 5016 void ceph_mdsc_close_sessions(struct ceph_mds_client *mdsc) 5040 5017 { ··· 5324 5301 struct ceph_mds_client *mdsc = s->s_mdsc; 5325 5302 5326 5303 pr_warn("mds%d closed our session\n", s->s_mds); 5327 - send_mds_reconnect(mdsc, s); 5304 + if (READ_ONCE(mdsc->fsc->mount_state) != CEPH_MOUNT_FENCE_IO) 5305 + send_mds_reconnect(mdsc, s); 5328 5306 } 5329 5307 5330 5308 static void mds_dispatch(struct ceph_connection *con, struct ceph_msg *msg)
+34 -2
fs/ceph/snap.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <linux/ceph/ceph_debug.h> 3 3 4 + #include <linux/fs.h> 4 5 #include <linux/sort.h> 5 6 #include <linux/slab.h> 6 7 #include <linux/iversion.h> ··· 767 766 struct ceph_snap_realm *realm; 768 767 struct ceph_snap_realm *first_realm = NULL; 769 768 struct ceph_snap_realm *realm_to_rebuild = NULL; 769 + struct ceph_client *client = mdsc->fsc->client; 770 770 int rebuild_snapcs; 771 771 int err = -ENOMEM; 772 + int ret; 772 773 LIST_HEAD(dirty_realms); 773 774 774 775 lockdep_assert_held_write(&mdsc->snap_rwsem); ··· 887 884 if (first_realm) 888 885 ceph_put_snap_realm(mdsc, first_realm); 889 886 pr_err("%s error %d\n", __func__, err); 887 + 888 + /* 889 + * When receiving a corrupted snap trace we don't know what 890 + * exactly has happened in MDS side. And we shouldn't continue 891 + * writing to OSD, which may corrupt the snapshot contents. 892 + * 893 + * Just try to blocklist this kclient and then this kclient 894 + * must be remounted to continue after the corrupted metadata 895 + * fixed in the MDS side. 896 + */ 897 + WRITE_ONCE(mdsc->fsc->mount_state, CEPH_MOUNT_FENCE_IO); 898 + ret = ceph_monc_blocklist_add(&client->monc, &client->msgr.inst.addr); 899 + if (ret) 900 + pr_err("%s failed to blocklist %s: %d\n", __func__, 901 + ceph_pr_addr(&client->msgr.inst.addr), ret); 902 + 903 + WARN(1, "%s: %s%sdo remount to continue%s", 904 + __func__, ret ? "" : ceph_pr_addr(&client->msgr.inst.addr), 905 + ret ? "" : " was blocklisted, ", 906 + err == -EIO ? " after corrupted snaptrace is fixed" : ""); 907 + 890 908 return err; 891 909 } 892 910 ··· 1008 984 __le64 *split_inos = NULL, *split_realms = NULL; 1009 985 int i; 1010 986 int locked_rwsem = 0; 987 + bool close_sessions = false; 1011 988 1012 989 /* decode */ 1013 990 if (msg->front.iov_len < sizeof(*h)) ··· 1117 1092 * update using the provided snap trace. if we are deleting a 1118 1093 * snap, we can avoid queueing cap_snaps. 1119 1094 */ 1120 - ceph_update_snap_trace(mdsc, p, e, 1121 - op == CEPH_SNAP_OP_DESTROY, NULL); 1095 + if (ceph_update_snap_trace(mdsc, p, e, 1096 + op == CEPH_SNAP_OP_DESTROY, 1097 + NULL)) { 1098 + close_sessions = true; 1099 + goto bad; 1100 + } 1122 1101 1123 1102 if (op == CEPH_SNAP_OP_SPLIT) 1124 1103 /* we took a reference when we created the realm, above */ ··· 1141 1112 out: 1142 1113 if (locked_rwsem) 1143 1114 up_write(&mdsc->snap_rwsem); 1115 + 1116 + if (close_sessions) 1117 + ceph_mdsc_close_sessions(mdsc); 1144 1118 return; 1145 1119 } 1146 1120
+11
fs/ceph/super.h
··· 100 100 char *mon_addr; 101 101 }; 102 102 103 + /* mount state */ 104 + enum { 105 + CEPH_MOUNT_MOUNTING, 106 + CEPH_MOUNT_MOUNTED, 107 + CEPH_MOUNT_UNMOUNTING, 108 + CEPH_MOUNT_UNMOUNTED, 109 + CEPH_MOUNT_SHUTDOWN, 110 + CEPH_MOUNT_RECOVER, 111 + CEPH_MOUNT_FENCE_IO, 112 + }; 113 + 103 114 #define CEPH_ASYNC_CREATE_CONFLICT_BITS 8 104 115 105 116 struct ceph_fs_client {
+2 -2
fs/cifs/file.c
··· 3889 3889 rdata->got_bytes += result; 3890 3890 } 3891 3891 3892 - return rdata->got_bytes > 0 && result != -ECONNABORTED ? 3892 + return result != -ECONNABORTED && rdata->got_bytes > 0 ? 3893 3893 rdata->got_bytes : result; 3894 3894 } 3895 3895 ··· 4665 4665 rdata->got_bytes += result; 4666 4666 } 4667 4667 4668 - return rdata->got_bytes > 0 && result != -ECONNABORTED ? 4668 + return result != -ECONNABORTED && rdata->got_bytes > 0 ? 4669 4669 rdata->got_bytes : result; 4670 4670 } 4671 4671
+24 -24
fs/coredump.c
··· 838 838 } 839 839 } 840 840 841 + int dump_emit(struct coredump_params *cprm, const void *addr, int nr) 842 + { 843 + if (cprm->to_skip) { 844 + if (!__dump_skip(cprm, cprm->to_skip)) 845 + return 0; 846 + cprm->to_skip = 0; 847 + } 848 + return __dump_emit(cprm, addr, nr); 849 + } 850 + EXPORT_SYMBOL(dump_emit); 851 + 852 + void dump_skip_to(struct coredump_params *cprm, unsigned long pos) 853 + { 854 + cprm->to_skip = pos - cprm->pos; 855 + } 856 + EXPORT_SYMBOL(dump_skip_to); 857 + 858 + void dump_skip(struct coredump_params *cprm, size_t nr) 859 + { 860 + cprm->to_skip += nr; 861 + } 862 + EXPORT_SYMBOL(dump_skip); 863 + 864 + #ifdef CONFIG_ELF_CORE 841 865 static int dump_emit_page(struct coredump_params *cprm, struct page *page) 842 866 { 843 867 struct bio_vec bvec = { ··· 895 871 return 1; 896 872 } 897 873 898 - int dump_emit(struct coredump_params *cprm, const void *addr, int nr) 899 - { 900 - if (cprm->to_skip) { 901 - if (!__dump_skip(cprm, cprm->to_skip)) 902 - return 0; 903 - cprm->to_skip = 0; 904 - } 905 - return __dump_emit(cprm, addr, nr); 906 - } 907 - EXPORT_SYMBOL(dump_emit); 908 - 909 - void dump_skip_to(struct coredump_params *cprm, unsigned long pos) 910 - { 911 - cprm->to_skip = pos - cprm->pos; 912 - } 913 - EXPORT_SYMBOL(dump_skip_to); 914 - 915 - void dump_skip(struct coredump_params *cprm, size_t nr) 916 - { 917 - cprm->to_skip += nr; 918 - } 919 - EXPORT_SYMBOL(dump_skip); 920 - 921 - #ifdef CONFIG_ELF_CORE 922 874 int dump_user_range(struct coredump_params *cprm, unsigned long start, 923 875 unsigned long len) 924 876 {
+1 -1
fs/freevxfs/Kconfig
··· 8 8 of SCO UnixWare (and possibly others) and optionally available 9 9 for Sunsoft Solaris, HP-UX and many other operating systems. However 10 10 these particular OS implementations of vxfs may differ in on-disk 11 - data endianess and/or superblock offset. The vxfs module has been 11 + data endianness and/or superblock offset. The vxfs module has been 12 12 tested with SCO UnixWare and HP-UX B.10.20 (pa-risc 1.1 arch.) 13 13 Currently only readonly access is supported and VxFX versions 14 14 2, 3 and 4. Tests were performed with HP-UX VxFS version 3.
+1 -3
fs/proc/task_mmu.c
··· 745 745 page = pfn_swap_entry_to_page(swpent); 746 746 } 747 747 if (page) { 748 - int mapcount = page_mapcount(page); 749 - 750 - if (mapcount >= 2) 748 + if (page_mapcount(page) >= 2 || hugetlb_pmd_shared(pte)) 751 749 mss->shared_hugetlb += huge_page_size(hstate_vma(vma)); 752 750 else 753 751 mss->private_hugetlb += huge_page_size(hstate_vma(vma));
+1 -1
fs/squashfs/squashfs_fs.h
··· 183 183 #define SQUASHFS_ID_BLOCK_BYTES(A) (SQUASHFS_ID_BLOCKS(A) *\ 184 184 sizeof(u64)) 185 185 /* xattr id lookup table defines */ 186 - #define SQUASHFS_XATTR_BYTES(A) ((A) * sizeof(struct squashfs_xattr_id)) 186 + #define SQUASHFS_XATTR_BYTES(A) (((u64) (A)) * sizeof(struct squashfs_xattr_id)) 187 187 188 188 #define SQUASHFS_XATTR_BLOCK(A) (SQUASHFS_XATTR_BYTES(A) / \ 189 189 SQUASHFS_METADATA_SIZE)
+1 -1
fs/squashfs/squashfs_fs_sb.h
··· 63 63 long long bytes_used; 64 64 unsigned int inodes; 65 65 unsigned int fragments; 66 - int xattr_ids; 66 + unsigned int xattr_ids; 67 67 unsigned int ids; 68 68 bool panic_on_errors; 69 69 const struct squashfs_decompressor_thread_ops *thread_ops;
+2 -2
fs/squashfs/xattr.h
··· 10 10 11 11 #ifdef CONFIG_SQUASHFS_XATTR 12 12 extern __le64 *squashfs_read_xattr_id_table(struct super_block *, u64, 13 - u64 *, int *); 13 + u64 *, unsigned int *); 14 14 extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *, 15 15 unsigned int *, unsigned long long *); 16 16 #else 17 17 static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb, 18 - u64 start, u64 *xattr_table_start, int *xattr_ids) 18 + u64 start, u64 *xattr_table_start, unsigned int *xattr_ids) 19 19 { 20 20 struct squashfs_xattr_id_table *id_table; 21 21
+2 -2
fs/squashfs/xattr_id.c
··· 56 56 * Read uncompressed xattr id lookup table indexes from disk into memory 57 57 */ 58 58 __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start, 59 - u64 *xattr_table_start, int *xattr_ids) 59 + u64 *xattr_table_start, unsigned int *xattr_ids) 60 60 { 61 61 struct squashfs_sb_info *msblk = sb->s_fs_info; 62 62 unsigned int len, indexes; ··· 76 76 /* Sanity check values */ 77 77 78 78 /* there is always at least one xattr id */ 79 - if (*xattr_ids == 0) 79 + if (*xattr_ids <= 0) 80 80 return ERR_PTR(-EINVAL); 81 81 82 82 len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
+1 -1
include/kvm/arm_vgic.h
··· 263 263 struct vgic_io_device dist_iodev; 264 264 265 265 bool has_its; 266 - bool save_its_tables_in_progress; 266 + bool table_write_in_progress; 267 267 268 268 /* 269 269 * Contains the attributes and gpa of the LPI configuration table.
-10
include/linux/ceph/libceph.h
··· 99 99 100 100 #define CEPH_AUTH_NAME_DEFAULT "guest" 101 101 102 - /* mount state */ 103 - enum { 104 - CEPH_MOUNT_MOUNTING, 105 - CEPH_MOUNT_MOUNTED, 106 - CEPH_MOUNT_UNMOUNTING, 107 - CEPH_MOUNT_UNMOUNTED, 108 - CEPH_MOUNT_SHUTDOWN, 109 - CEPH_MOUNT_RECOVER, 110 - }; 111 - 112 102 static inline unsigned long ceph_timeout_jiffies(unsigned long timeout) 113 103 { 114 104 return timeout ?: MAX_SCHEDULE_TIMEOUT;
+2 -1
include/linux/efi.h
··· 668 668 669 669 #define EFI_RT_SUPPORTED_ALL 0x3fff 670 670 671 - #define EFI_RT_SUPPORTED_TIME_SERVICES 0x000f 671 + #define EFI_RT_SUPPORTED_TIME_SERVICES 0x0003 672 + #define EFI_RT_SUPPORTED_WAKEUP_SERVICES 0x000c 672 673 #define EFI_RT_SUPPORTED_VARIABLE_SERVICES 0x0070 673 674 674 675 extern struct mm_struct efi_mm;
+2 -2
include/linux/highmem-internal.h
··· 200 200 static inline void __kunmap_local(const void *addr) 201 201 { 202 202 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP 203 - kunmap_flush_on_unmap(addr); 203 + kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE)); 204 204 #endif 205 205 } 206 206 ··· 227 227 static inline void __kunmap_atomic(const void *addr) 228 228 { 229 229 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP 230 - kunmap_flush_on_unmap(addr); 230 + kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE)); 231 231 #endif 232 232 pagefault_enable(); 233 233 if (IS_ENABLED(CONFIG_PREEMPT_RT))
+13
include/linux/hugetlb.h
··· 7 7 #include <linux/fs.h> 8 8 #include <linux/hugetlb_inline.h> 9 9 #include <linux/cgroup.h> 10 + #include <linux/page_ref.h> 10 11 #include <linux/list.h> 11 12 #include <linux/kref.h> 12 13 #include <linux/pgtable.h> ··· 1185 1184 #else 1186 1185 static inline __init void hugetlb_cma_reserve(int order) 1187 1186 { 1187 + } 1188 + #endif 1189 + 1190 + #ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE 1191 + static inline bool hugetlb_pmd_shared(pte_t *pte) 1192 + { 1193 + return page_count(virt_to_page(pte)) > 1; 1194 + } 1195 + #else 1196 + static inline bool hugetlb_pmd_shared(pte_t *pte) 1197 + { 1198 + return false; 1188 1199 } 1189 1200 #endif 1190 1201
+4 -1
include/linux/memcontrol.h
··· 1666 1666 static inline void mem_cgroup_track_foreign_dirty(struct folio *folio, 1667 1667 struct bdi_writeback *wb) 1668 1668 { 1669 + struct mem_cgroup *memcg; 1670 + 1669 1671 if (mem_cgroup_disabled()) 1670 1672 return; 1671 1673 1672 - if (unlikely(&folio_memcg(folio)->css != wb->memcg_css)) 1674 + memcg = folio_memcg(folio); 1675 + if (unlikely(memcg && &memcg->css != wb->memcg_css)) 1673 1676 mem_cgroup_track_foreign_dirty_slowpath(folio, wb); 1674 1677 } 1675 1678
+10 -3
include/linux/mlx5/driver.h
··· 572 572 struct dentry *lag_debugfs; 573 573 }; 574 574 575 + enum mlx5_func_type { 576 + MLX5_PF, 577 + MLX5_VF, 578 + MLX5_SF, 579 + MLX5_HOST_PF, 580 + MLX5_FUNC_TYPE_NUM, 581 + }; 582 + 575 583 struct mlx5_ft_pool; 576 584 struct mlx5_priv { 577 585 /* IRQ table valid only for real pci devices PF or VF */ ··· 590 582 struct mlx5_nb pg_nb; 591 583 struct workqueue_struct *pg_wq; 592 584 struct xarray page_root_xa; 593 - u32 fw_pages; 594 585 atomic_t reg_pages; 595 586 struct list_head free_list; 596 - u32 vfs_pages; 597 - u32 host_pf_pages; 587 + u32 fw_pages; 588 + u32 page_counters[MLX5_FUNC_TYPE_NUM]; 598 589 u32 fw_pages_alloc_failed; 599 590 u32 give_pages_dropped; 600 591 u32 reclaim_pages_discard;
-2
include/linux/nvmem-provider.h
··· 70 70 * @word_size: Minimum read/write access granularity. 71 71 * @stride: Minimum read/write access stride. 72 72 * @priv: User context passed to read/write callbacks. 73 - * @wp-gpio: Write protect pin 74 73 * @ignore_wp: Write Protect pin is managed by the provider. 75 74 * 76 75 * Note: A default "nvmem<id>" name will be assigned to the device if ··· 84 85 const char *name; 85 86 int id; 86 87 struct module *owner; 87 - struct gpio_desc *wp_gpio; 88 88 const struct nvmem_cell_info *cells; 89 89 int ncells; 90 90 const struct nvmem_keepout *keepout;
+9
include/linux/spinlock.h
··· 476 476 #define atomic_dec_and_lock_irqsave(atomic, lock, flags) \ 477 477 __cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags))) 478 478 479 + extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock); 480 + #define atomic_dec_and_raw_lock(atomic, lock) \ 481 + __cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock)) 482 + 483 + extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, 484 + unsigned long *flags); 485 + #define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \ 486 + __cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))) 487 + 479 488 int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask, 480 489 size_t max_size, unsigned int cpu_mult, 481 490 gfp_t gfp, const char *name,
+1 -2
include/linux/swap.h
··· 418 418 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, 419 419 unsigned long nr_pages, 420 420 gfp_t gfp_mask, 421 - unsigned int reclaim_options, 422 - nodemask_t *nodemask); 421 + unsigned int reclaim_options); 423 422 extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem, 424 423 gfp_t gfp_mask, bool noswap, 425 424 pg_data_t *pgdat,
+1
include/uapi/linux/ip.h
··· 18 18 #ifndef _UAPI_LINUX_IP_H 19 19 #define _UAPI_LINUX_IP_H 20 20 #include <linux/types.h> 21 + #include <linux/stddef.h> 21 22 #include <asm/byteorder.h> 22 23 23 24 #define IPTOS_TOS_MASK 0x1E
+1
include/uapi/linux/ipv6.h
··· 4 4 5 5 #include <linux/libc-compat.h> 6 6 #include <linux/types.h> 7 + #include <linux/stddef.h> 7 8 #include <linux/in6.h> 8 9 #include <asm/byteorder.h> 9 10
+36 -9
kernel/cgroup/cpuset.c
··· 1205 1205 /** 1206 1206 * update_tasks_cpumask - Update the cpumasks of tasks in the cpuset. 1207 1207 * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed 1208 + * @new_cpus: the temp variable for the new effective_cpus mask 1208 1209 * 1209 1210 * Iterate through each task of @cs updating its cpus_allowed to the 1210 1211 * effective cpuset's. As this function is called with cpuset_rwsem held, 1211 1212 * cpuset membership stays stable. 1212 1213 */ 1213 - static void update_tasks_cpumask(struct cpuset *cs) 1214 + static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus) 1214 1215 { 1215 1216 struct css_task_iter it; 1216 1217 struct task_struct *task; ··· 1225 1224 if (top_cs && (task->flags & PF_KTHREAD) && 1226 1225 kthread_is_per_cpu(task)) 1227 1226 continue; 1228 - set_cpus_allowed_ptr(task, cs->effective_cpus); 1227 + 1228 + cpumask_and(new_cpus, cs->effective_cpus, 1229 + task_cpu_possible_mask(task)); 1230 + set_cpus_allowed_ptr(task, new_cpus); 1229 1231 } 1230 1232 css_task_iter_end(&it); 1231 1233 } ··· 1513 1509 spin_unlock_irq(&callback_lock); 1514 1510 1515 1511 if (adding || deleting) 1516 - update_tasks_cpumask(parent); 1512 + update_tasks_cpumask(parent, tmp->new_cpus); 1517 1513 1518 1514 /* 1519 1515 * Set or clear CS_SCHED_LOAD_BALANCE when partcmd_update, if necessary. ··· 1665 1661 WARN_ON(!is_in_v2_mode() && 1666 1662 !cpumask_equal(cp->cpus_allowed, cp->effective_cpus)); 1667 1663 1668 - update_tasks_cpumask(cp); 1664 + update_tasks_cpumask(cp, tmp->new_cpus); 1669 1665 1670 1666 /* 1671 1667 * On legacy hierarchy, if the effective cpumask of any non- ··· 2313 2309 } 2314 2310 } 2315 2311 2316 - update_tasks_cpumask(parent); 2312 + update_tasks_cpumask(parent, tmpmask.new_cpus); 2317 2313 2318 2314 if (parent->child_ecpus_count) 2319 2315 update_sibling_cpumasks(parent, cs, &tmpmask); ··· 3352 3348 * as the tasks will be migrated to an ancestor. 3353 3349 */ 3354 3350 if (cpus_updated && !cpumask_empty(cs->cpus_allowed)) 3355 - update_tasks_cpumask(cs); 3351 + update_tasks_cpumask(cs, new_cpus); 3356 3352 if (mems_updated && !nodes_empty(cs->mems_allowed)) 3357 3353 update_tasks_nodemask(cs); 3358 3354 ··· 3389 3385 spin_unlock_irq(&callback_lock); 3390 3386 3391 3387 if (cpus_updated) 3392 - update_tasks_cpumask(cs); 3388 + update_tasks_cpumask(cs, new_cpus); 3393 3389 if (mems_updated) 3394 3390 update_tasks_nodemask(cs); 3395 3391 } ··· 3696 3692 * Description: Returns the cpumask_var_t cpus_allowed of the cpuset 3697 3693 * attached to the specified @tsk. Guaranteed to return some non-empty 3698 3694 * subset of cpu_online_mask, even if this means going outside the 3699 - * tasks cpuset. 3695 + * tasks cpuset, except when the task is in the top cpuset. 3700 3696 **/ 3701 3697 3702 3698 void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask) 3703 3699 { 3704 3700 unsigned long flags; 3701 + struct cpuset *cs; 3705 3702 3706 3703 spin_lock_irqsave(&callback_lock, flags); 3707 - guarantee_online_cpus(tsk, pmask); 3704 + rcu_read_lock(); 3705 + 3706 + cs = task_cs(tsk); 3707 + if (cs != &top_cpuset) 3708 + guarantee_online_cpus(tsk, pmask); 3709 + /* 3710 + * Tasks in the top cpuset won't get update to their cpumasks 3711 + * when a hotplug online/offline event happens. So we include all 3712 + * offline cpus in the allowed cpu list. 3713 + */ 3714 + if ((cs == &top_cpuset) || cpumask_empty(pmask)) { 3715 + const struct cpumask *possible_mask = task_cpu_possible_mask(tsk); 3716 + 3717 + /* 3718 + * We first exclude cpus allocated to partitions. If there is no 3719 + * allowable online cpu left, we fall back to all possible cpus. 3720 + */ 3721 + cpumask_andnot(pmask, possible_mask, top_cpuset.subparts_cpus); 3722 + if (!cpumask_intersects(pmask, cpu_online_mask)) 3723 + cpumask_copy(pmask, possible_mask); 3724 + } 3725 + 3726 + rcu_read_unlock(); 3708 3727 spin_unlock_irqrestore(&callback_lock, flags); 3709 3728 } 3710 3729
+17 -22
kernel/events/core.c
··· 4813 4813 4814 4814 cpc = per_cpu_ptr(pmu->cpu_pmu_context, event->cpu); 4815 4815 epc = &cpc->epc; 4816 - 4816 + raw_spin_lock_irq(&ctx->lock); 4817 4817 if (!epc->ctx) { 4818 4818 atomic_set(&epc->refcount, 1); 4819 4819 epc->embedded = 1; 4820 - raw_spin_lock_irq(&ctx->lock); 4821 4820 list_add(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list); 4822 4821 epc->ctx = ctx; 4823 - raw_spin_unlock_irq(&ctx->lock); 4824 4822 } else { 4825 4823 WARN_ON_ONCE(epc->ctx != ctx); 4826 4824 atomic_inc(&epc->refcount); 4827 4825 } 4828 - 4826 + raw_spin_unlock_irq(&ctx->lock); 4829 4827 return epc; 4830 4828 } 4831 4829 ··· 4894 4896 4895 4897 static void put_pmu_ctx(struct perf_event_pmu_context *epc) 4896 4898 { 4899 + struct perf_event_context *ctx = epc->ctx; 4897 4900 unsigned long flags; 4898 4901 4899 - if (!atomic_dec_and_test(&epc->refcount)) 4902 + /* 4903 + * XXX 4904 + * 4905 + * lockdep_assert_held(&ctx->mutex); 4906 + * 4907 + * can't because of the call-site in _free_event()/put_event() 4908 + * which isn't always called under ctx->mutex. 4909 + */ 4910 + if (!atomic_dec_and_raw_lock_irqsave(&epc->refcount, &ctx->lock, flags)) 4900 4911 return; 4901 4912 4902 - if (epc->ctx) { 4903 - struct perf_event_context *ctx = epc->ctx; 4913 + WARN_ON_ONCE(list_empty(&epc->pmu_ctx_entry)); 4904 4914 4905 - /* 4906 - * XXX 4907 - * 4908 - * lockdep_assert_held(&ctx->mutex); 4909 - * 4910 - * can't because of the call-site in _free_event()/put_event() 4911 - * which isn't always called under ctx->mutex. 
4912 - */ 4913 - 4914 - WARN_ON_ONCE(list_empty(&epc->pmu_ctx_entry)); 4915 - raw_spin_lock_irqsave(&ctx->lock, flags); 4916 - list_del_init(&epc->pmu_ctx_entry); 4917 - epc->ctx = NULL; 4918 - raw_spin_unlock_irqrestore(&ctx->lock, flags); 4919 - } 4915 + list_del_init(&epc->pmu_ctx_entry); 4916 + epc->ctx = NULL; 4920 4917 4921 4918 WARN_ON_ONCE(!list_empty(&epc->pinned_active)); 4922 4919 WARN_ON_ONCE(!list_empty(&epc->flexible_active)); 4920 + 4921 + raw_spin_unlock_irqrestore(&ctx->lock, flags); 4923 4922 4924 4923 if (epc->embedded) 4925 4924 return;
+1 -1
kernel/irq/irqdomain.c
··· 1915 1915 1916 1916 static void debugfs_remove_domain_dir(struct irq_domain *d) 1917 1917 { 1918 - debugfs_remove(debugfs_lookup(d->name, domain_dir)); 1918 + debugfs_lookup_and_remove(d->name, domain_dir); 1919 1919 } 1920 1920 1921 1921 void __init irq_domain_debugfs_init(struct dentry *root)
-3
kernel/trace/trace.c
··· 9148 9148 if (val > 100) 9149 9149 return -EINVAL; 9150 9150 9151 - if (!val) 9152 - val = 1; 9153 - 9154 9151 tr->buffer_percent = val; 9155 9152 9156 9153 (*ppos)++;
+2 -1
lib/Kconfig.debug
··· 763 763 select KALLSYMS 764 764 select CRC32 765 765 select STACKDEPOT 766 + select STACKDEPOT_ALWAYS_INIT if !DEBUG_KMEMLEAK_DEFAULT_OFF 766 767 help 767 768 Say Y here if you want to enable the memory leak 768 769 detector. The memory allocation/freeing is traced in a way ··· 1217 1216 depends on DEBUG_KERNEL && PROC_FS 1218 1217 default y 1219 1218 help 1220 - If you say Y here, the /proc/sched_debug file will be provided 1219 + If you say Y here, the /sys/kernel/debug/sched file will be provided 1221 1220 that can help debug the scheduler. The runtime overhead of this 1222 1221 option is minimal. 1223 1222
+31
lib/dec_and_lock.c
··· 49 49 return 0; 50 50 } 51 51 EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave); 52 + 53 + int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) 54 + { 55 + /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */ 56 + if (atomic_add_unless(atomic, -1, 1)) 57 + return 0; 58 + 59 + /* Otherwise do it the slow way */ 60 + raw_spin_lock(lock); 61 + if (atomic_dec_and_test(atomic)) 62 + return 1; 63 + raw_spin_unlock(lock); 64 + return 0; 65 + } 66 + EXPORT_SYMBOL(_atomic_dec_and_raw_lock); 67 + 68 + int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, 69 + unsigned long *flags) 70 + { 71 + /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */ 72 + if (atomic_add_unless(atomic, -1, 1)) 73 + return 0; 74 + 75 + /* Otherwise do it the slow way */ 76 + raw_spin_lock_irqsave(lock, *flags); 77 + if (atomic_dec_and_test(atomic)) 78 + return 1; 79 + raw_spin_unlock_irqrestore(lock, *flags); 80 + return 0; 81 + } 82 + EXPORT_SYMBOL(_atomic_dec_and_raw_lock_irqsave);
+11 -11
lib/maple_tree.c
··· 670 670 unsigned char piv) 671 671 { 672 672 struct maple_node *node = mte_to_node(mn); 673 + enum maple_type type = mte_node_type(mn); 673 674 674 - if (piv >= mt_pivots[piv]) { 675 + if (piv >= mt_pivots[type]) { 675 676 WARN_ON(1); 676 677 return 0; 677 678 } 678 - switch (mte_node_type(mn)) { 679 + switch (type) { 679 680 case maple_arange_64: 680 681 return node->ma64.pivot[piv]; 681 682 case maple_range_64: ··· 4888 4887 unsigned long *pivots, *gaps; 4889 4888 void __rcu **slots; 4890 4889 unsigned long gap = 0; 4891 - unsigned long max, min, index; 4890 + unsigned long max, min; 4892 4891 unsigned char offset; 4893 4892 4894 4893 if (unlikely(mas_is_err(mas))) ··· 4910 4909 min = mas_safe_min(mas, pivots, --offset); 4911 4910 4912 4911 max = mas_safe_pivot(mas, pivots, offset, type); 4913 - index = mas->index; 4914 - while (index <= max) { 4912 + while (mas->index <= max) { 4915 4913 gap = 0; 4916 4914 if (gaps) 4917 4915 gap = gaps[offset]; ··· 4941 4941 min = mas_safe_min(mas, pivots, offset); 4942 4942 } 4943 4943 4944 - if (unlikely(index > max)) { 4945 - mas_set_err(mas, -EBUSY); 4946 - return false; 4947 - } 4944 + if (unlikely((mas->index > max) || (size - 1 > max - mas->index))) 4945 + goto no_space; 4948 4946 4949 4947 if (unlikely(ma_is_leaf(type))) { 4950 4948 mas->offset = offset; ··· 4959 4961 return false; 4960 4962 4961 4963 ascend: 4962 - if (mte_is_root(mas->node)) 4963 - mas_set_err(mas, -EBUSY); 4964 + if (!mte_is_root(mas->node)) 4965 + return false; 4964 4966 4967 + no_space: 4968 + mas_set_err(mas, -EBUSY); 4965 4969 return false; 4966 4970 } 4967 4971
+89
lib/test_maple_tree.c
··· 2517 2517 mt_set_non_kernel(0); 2518 2518 } 2519 2519 2520 + static noinline void check_empty_area_window(struct maple_tree *mt) 2521 + { 2522 + unsigned long i, nr_entries = 20; 2523 + MA_STATE(mas, mt, 0, 0); 2524 + 2525 + for (i = 1; i <= nr_entries; i++) 2526 + mtree_store_range(mt, i*10, i*10 + 9, 2527 + xa_mk_value(i), GFP_KERNEL); 2528 + 2529 + /* Create another hole besides the one at 0 */ 2530 + mtree_store_range(mt, 160, 169, NULL, GFP_KERNEL); 2531 + 2532 + /* Check lower bounds that don't fit */ 2533 + rcu_read_lock(); 2534 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 90, 10) != -EBUSY); 2535 + 2536 + mas_reset(&mas); 2537 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 6, 90, 5) != -EBUSY); 2538 + 2539 + /* Check lower bound that does fit */ 2540 + mas_reset(&mas); 2541 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 90, 5) != 0); 2542 + MT_BUG_ON(mt, mas.index != 5); 2543 + MT_BUG_ON(mt, mas.last != 9); 2544 + rcu_read_unlock(); 2545 + 2546 + /* Check one gap that doesn't fit and one that does */ 2547 + rcu_read_lock(); 2548 + mas_reset(&mas); 2549 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 217, 9) != 0); 2550 + MT_BUG_ON(mt, mas.index != 161); 2551 + MT_BUG_ON(mt, mas.last != 169); 2552 + 2553 + /* Check one gap that does fit above the min */ 2554 + mas_reset(&mas); 2555 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 218, 3) != 0); 2556 + MT_BUG_ON(mt, mas.index != 216); 2557 + MT_BUG_ON(mt, mas.last != 218); 2558 + 2559 + /* Check size that doesn't fit any gap */ 2560 + mas_reset(&mas); 2561 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 218, 16) != -EBUSY); 2562 + 2563 + /* 2564 + * Check size that doesn't fit the lower end of the window but 2565 + * does fit the gap 2566 + */ 2567 + mas_reset(&mas); 2568 + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 167, 200, 4) != -EBUSY); 2569 + 2570 + /* 2571 + * Check size that doesn't fit the upper end of the window but 2572 + * does fit the gap 2573 + */ 2574 + mas_reset(&mas); 2575 + MT_BUG_ON(mt, 
mas_empty_area_rev(&mas, 100, 162, 4) != -EBUSY); 2576 + 2577 + /* Check mas_empty_area forward */ 2578 + mas_reset(&mas); 2579 + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 9) != 0); 2580 + MT_BUG_ON(mt, mas.index != 0); 2581 + MT_BUG_ON(mt, mas.last != 8); 2582 + 2583 + mas_reset(&mas); 2584 + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 4) != 0); 2585 + MT_BUG_ON(mt, mas.index != 0); 2586 + MT_BUG_ON(mt, mas.last != 3); 2587 + 2588 + mas_reset(&mas); 2589 + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 11) != -EBUSY); 2590 + 2591 + mas_reset(&mas); 2592 + MT_BUG_ON(mt, mas_empty_area(&mas, 5, 100, 6) != -EBUSY); 2593 + 2594 + mas_reset(&mas); 2595 + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 8, 10) != -EBUSY); 2596 + 2597 + mas_reset(&mas); 2598 + mas_empty_area(&mas, 100, 165, 3); 2599 + 2600 + mas_reset(&mas); 2601 + MT_BUG_ON(mt, mas_empty_area(&mas, 100, 163, 6) != -EBUSY); 2602 + rcu_read_unlock(); 2603 + } 2604 + 2520 2605 static DEFINE_MTREE(tree); 2521 2606 static int maple_tree_seed(void) 2522 2607 { ··· 2848 2763 2849 2764 mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); 2850 2765 check_bnode_min_spanning(&tree); 2766 + mtree_destroy(&tree); 2767 + 2768 + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); 2769 + check_empty_area_window(&tree); 2851 2770 mtree_destroy(&tree); 2852 2771 2853 2772 #if defined(BENCH)
+3
mm/hugetlb.c
··· 5051 5051 entry = huge_pte_clear_uffd_wp(entry); 5052 5052 set_huge_pte_at(dst, addr, dst_pte, entry); 5053 5053 } else if (unlikely(is_pte_marker(entry))) { 5054 + /* No swap on hugetlb */ 5055 + WARN_ON_ONCE( 5056 + is_swapin_error_entry(pte_to_swp_entry(entry))); 5054 5057 /* 5055 5058 * We copy the pte marker only if the dst vma has 5056 5059 * uffd-wp enabled.
+21 -1
mm/khugepaged.c
··· 847 847 return SCAN_SUCCEED; 848 848 } 849 849 850 + /* 851 + * See pmd_trans_unstable() for how the result may change out from 852 + * underneath us, even if we hold mmap_lock in read. 853 + */ 850 854 static int find_pmd_or_thp_or_none(struct mm_struct *mm, 851 855 unsigned long address, 852 856 pmd_t **pmd) ··· 869 865 #endif 870 866 if (pmd_none(pmde)) 871 867 return SCAN_PMD_NONE; 868 + if (!pmd_present(pmde)) 869 + return SCAN_PMD_NULL; 872 870 if (pmd_trans_huge(pmde)) 873 871 return SCAN_PMD_MAPPED; 872 + if (pmd_devmap(pmde)) 873 + return SCAN_PMD_NULL; 874 874 if (pmd_bad(pmde)) 875 875 return SCAN_PMD_NULL; 876 876 return SCAN_SUCCEED; ··· 1650 1642 * has higher cost too. It would also probably require locking 1651 1643 * the anon_vma. 1652 1644 */ 1653 - if (vma->anon_vma) { 1645 + if (READ_ONCE(vma->anon_vma)) { 1654 1646 result = SCAN_PAGE_ANON; 1655 1647 goto next; 1656 1648 } ··· 1678 1670 result = SCAN_PTE_MAPPED_HUGEPAGE; 1679 1671 if ((cc->is_khugepaged || is_target) && 1680 1672 mmap_write_trylock(mm)) { 1673 + /* 1674 + * Re-check whether we have an ->anon_vma, because 1675 + * collapse_and_free_pmd() requires that either no 1676 + * ->anon_vma exists or the anon_vma is locked. 1677 + * We already checked ->anon_vma above, but that check 1678 + * is racy because ->anon_vma can be populated under the 1679 + * mmap lock in read mode. 1680 + */ 1681 + if (vma->anon_vma) { 1682 + result = SCAN_PAGE_ANON; 1683 + goto unlock_next; 1684 + } 1681 1685 /* 1682 1686 * When a vma is registered with uffd-wp, we can't 1683 1687 * recycle the pmd pgtable because there can be pte
+3 -2
mm/kmemleak.c
··· 2070 2070 return -EINVAL; 2071 2071 if (strcmp(str, "off") == 0) 2072 2072 kmemleak_disable(); 2073 - else if (strcmp(str, "on") == 0) 2073 + else if (strcmp(str, "on") == 0) { 2074 2074 kmemleak_skip_disable = 1; 2075 + stack_depot_want_early_init(); 2076 + } 2075 2077 else 2076 2078 return -EINVAL; 2077 2079 return 0; ··· 2095 2093 if (kmemleak_error) 2096 2094 return; 2097 2095 2098 - stack_depot_init(); 2099 2096 jiffies_min_age = msecs_to_jiffies(MSECS_MIN_AGE); 2100 2097 jiffies_scan_wait = msecs_to_jiffies(SECS_SCAN_WAIT * 1000); 2101 2098
+13 -54
mm/memcontrol.c
··· 63 63 #include <linux/resume_user_mode.h> 64 64 #include <linux/psi.h> 65 65 #include <linux/seq_buf.h> 66 - #include <linux/parser.h> 67 66 #include "internal.h" 68 67 #include <net/sock.h> 69 68 #include <net/ip.h> ··· 2392 2393 psi_memstall_enter(&pflags); 2393 2394 nr_reclaimed += try_to_free_mem_cgroup_pages(memcg, nr_pages, 2394 2395 gfp_mask, 2395 - MEMCG_RECLAIM_MAY_SWAP, 2396 - NULL); 2396 + MEMCG_RECLAIM_MAY_SWAP); 2397 2397 psi_memstall_leave(&pflags); 2398 2398 } while ((memcg = parent_mem_cgroup(memcg)) && 2399 2399 !mem_cgroup_is_root(memcg)); ··· 2683 2685 2684 2686 psi_memstall_enter(&pflags); 2685 2687 nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages, 2686 - gfp_mask, reclaim_options, 2687 - NULL); 2688 + gfp_mask, reclaim_options); 2688 2689 psi_memstall_leave(&pflags); 2689 2690 2690 2691 if (mem_cgroup_margin(mem_over_limit) >= nr_pages) ··· 3503 3506 } 3504 3507 3505 3508 if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, 3506 - memsw ? 0 : MEMCG_RECLAIM_MAY_SWAP, 3507 - NULL)) { 3509 + memsw ? 
0 : MEMCG_RECLAIM_MAY_SWAP)) { 3508 3510 ret = -EBUSY; 3509 3511 break; 3510 3512 } ··· 3614 3618 return -EINTR; 3615 3619 3616 3620 if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, 3617 - MEMCG_RECLAIM_MAY_SWAP, 3618 - NULL)) 3621 + MEMCG_RECLAIM_MAY_SWAP)) 3619 3622 nr_retries--; 3620 3623 } 3621 3624 ··· 6424 6429 } 6425 6430 6426 6431 reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages - high, 6427 - GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP, 6428 - NULL); 6432 + GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP); 6429 6433 6430 6434 if (!reclaimed && !nr_retries--) 6431 6435 break; ··· 6473 6479 6474 6480 if (nr_reclaims) { 6475 6481 if (!try_to_free_mem_cgroup_pages(memcg, nr_pages - max, 6476 - GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP, 6477 - NULL)) 6482 + GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP)) 6478 6483 nr_reclaims--; 6479 6484 continue; 6480 6485 } ··· 6596 6603 return nbytes; 6597 6604 } 6598 6605 6599 - enum { 6600 - MEMORY_RECLAIM_NODES = 0, 6601 - MEMORY_RECLAIM_NULL, 6602 - }; 6603 - 6604 - static const match_table_t if_tokens = { 6605 - { MEMORY_RECLAIM_NODES, "nodes=%s" }, 6606 - { MEMORY_RECLAIM_NULL, NULL }, 6607 - }; 6608 - 6609 6606 static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf, 6610 6607 size_t nbytes, loff_t off) 6611 6608 { 6612 6609 struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); 6613 6610 unsigned int nr_retries = MAX_RECLAIM_RETRIES; 6614 6611 unsigned long nr_to_reclaim, nr_reclaimed = 0; 6615 - unsigned int reclaim_options = MEMCG_RECLAIM_MAY_SWAP | 6616 - MEMCG_RECLAIM_PROACTIVE; 6617 - char *old_buf, *start; 6618 - substring_t args[MAX_OPT_ARGS]; 6619 - int token; 6620 - char value[256]; 6621 - nodemask_t nodemask = NODE_MASK_ALL; 6612 + unsigned int reclaim_options; 6613 + int err; 6622 6614 6623 6615 buf = strstrip(buf); 6616 + err = page_counter_memparse(buf, "", &nr_to_reclaim); 6617 + if (err) 6618 + return err; 6624 6619 6625 - old_buf = buf; 6626 - nr_to_reclaim = memparse(buf, &buf) / PAGE_SIZE; 6627 - if (buf 
== old_buf) 6628 - return -EINVAL; 6629 - 6630 - buf = strstrip(buf); 6631 - 6632 - while ((start = strsep(&buf, " ")) != NULL) { 6633 - if (!strlen(start)) 6634 - continue; 6635 - token = match_token(start, if_tokens, args); 6636 - match_strlcpy(value, args, sizeof(value)); 6637 - switch (token) { 6638 - case MEMORY_RECLAIM_NODES: 6639 - if (nodelist_parse(value, nodemask) < 0) 6640 - return -EINVAL; 6641 - break; 6642 - default: 6643 - return -EINVAL; 6644 - } 6645 - } 6646 - 6620 + reclaim_options = MEMCG_RECLAIM_MAY_SWAP | MEMCG_RECLAIM_PROACTIVE; 6647 6621 while (nr_reclaimed < nr_to_reclaim) { 6648 6622 unsigned long reclaimed; 6649 6623 ··· 6627 6667 6628 6668 reclaimed = try_to_free_mem_cgroup_pages(memcg, 6629 6669 nr_to_reclaim - nr_reclaimed, 6630 - GFP_KERNEL, reclaim_options, 6631 - &nodemask); 6670 + GFP_KERNEL, reclaim_options); 6632 6671 6633 6672 if (!reclaimed && !nr_retries--) 6634 6673 return -EAGAIN;
+7 -7
mm/memory.c
··· 828 828 return -EBUSY; 829 829 return -ENOENT; 830 830 } else if (is_pte_marker_entry(entry)) { 831 - /* 832 - * We're copying the pgtable should only because dst_vma has 833 - * uffd-wp enabled, do sanity check. 834 - */ 835 - WARN_ON_ONCE(!userfaultfd_wp(dst_vma)); 836 - set_pte_at(dst_mm, addr, dst_pte, pte); 831 + if (is_swapin_error_entry(entry) || userfaultfd_wp(dst_vma)) 832 + set_pte_at(dst_mm, addr, dst_pte, pte); 837 833 return 0; 838 834 } 839 835 if (!userfaultfd_wp(dst_vma)) ··· 3625 3629 /* 3626 3630 * Be careful so that we will only recover a special uffd-wp pte into a 3627 3631 * none pte. Otherwise it means the pte could have changed, so retry. 3632 + * 3633 + * This should also cover the case where e.g. the pte changed 3634 + * quickly from a PTE_MARKER_UFFD_WP into PTE_MARKER_SWAPIN_ERROR. 3635 + * So is_pte_marker() check is not enough to safely drop the pte. 3628 3636 */ 3629 - if (is_pte_marker(*vmf->pte)) 3637 + if (pte_same(vmf->orig_pte, *vmf->pte)) 3630 3638 pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte); 3631 3639 pte_unmap_unlock(vmf->pte, vmf->ptl); 3632 3640 return 0;
+2 -1
mm/mempolicy.c
··· 600 600 601 601 /* With MPOL_MF_MOVE, we migrate only unshared hugepage. */ 602 602 if (flags & (MPOL_MF_MOVE_ALL) || 603 - (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) { 603 + (flags & MPOL_MF_MOVE && page_mapcount(page) == 1 && 604 + !hugetlb_pmd_shared(pte))) { 604 605 if (isolate_hugetlb(page, qp->pagelist) && 605 606 (flags & MPOL_MF_STRICT)) 606 607 /*
+7 -1
mm/mprotect.c
··· 245 245 newpte = pte_swp_mksoft_dirty(newpte); 246 246 if (pte_swp_uffd_wp(oldpte)) 247 247 newpte = pte_swp_mkuffd_wp(newpte); 248 - } else if (pte_marker_entry_uffd_wp(entry)) { 248 + } else if (is_pte_marker_entry(entry)) { 249 + /* 250 + * Ignore swapin errors unconditionally, 251 + * because any access should sigbus anyway. 252 + */ 253 + if (is_swapin_error_entry(entry)) 254 + continue; 249 255 /* 250 256 * If this is uffd-wp pte marker and we'd like 251 257 * to unprotect it, drop it; the next page
+19 -6
mm/mremap.c
··· 1027 1027 } 1028 1028 1029 1029 /* 1030 - * Function vma_merge() is called on the extension we are adding to 1031 - * the already existing vma, vma_merge() will merge this extension with 1032 - * the already existing vma (expand operation itself) and possibly also 1033 - * with the next vma if it becomes adjacent to the expanded vma and 1034 - * otherwise compatible. 1030 + * Function vma_merge() is called on the extension we 1031 + * are adding to the already existing vma, vma_merge() 1032 + * will merge this extension with the already existing 1033 + * vma (expand operation itself) and possibly also with 1034 + * the next vma if it becomes adjacent to the expanded 1035 + * vma and otherwise compatible. 1036 + * 1037 + * However, vma_merge() can currently fail due to 1038 + * is_mergeable_vma() check for vm_ops->close (see the 1039 + * comment there). Yet this should not prevent vma 1040 + * expanding, so perform a simple expand for such vma. 1041 + * Ideally the check for close op should be only done 1042 + * when a vma would be actually removed due to a merge. 1035 1043 */ 1036 - vma = vma_merge(mm, vma, extension_start, extension_end, 1044 + if (!vma->vm_ops || !vma->vm_ops->close) { 1045 + vma = vma_merge(mm, vma, extension_start, extension_end, 1037 1046 vma->vm_flags, vma->anon_vma, vma->vm_file, 1038 1047 extension_pgoff, vma_policy(vma), 1039 1048 vma->vm_userfaultfd_ctx, anon_vma_name(vma)); 1049 + } else if (vma_adjust(vma, vma->vm_start, addr + new_len, 1050 + vma->vm_pgoff, NULL)) { 1051 + vma = NULL; 1052 + } 1040 1053 if (!vma) { 1041 1054 vm_unacct_memory(pages); 1042 1055 ret = -ENOMEM;
+1
mm/swapfile.c
··· 1100 1100 goto check_out; 1101 1101 pr_debug("scan_swap_map of si %d failed to find offset\n", 1102 1102 si->type); 1103 + cond_resched(); 1103 1104 1104 1105 spin_lock(&swap_avail_lock); 1105 1106 nextsi:
+5 -4
mm/vmscan.c
··· 3323 3323 if (mem_cgroup_disabled()) 3324 3324 return; 3325 3325 3326 + /* migration can happen before addition */ 3327 + if (!mm->lru_gen.memcg) 3328 + return; 3329 + 3326 3330 rcu_read_lock(); 3327 3331 memcg = mem_cgroup_from_task(task); 3328 3332 rcu_read_unlock(); 3329 3333 if (memcg == mm->lru_gen.memcg) 3330 3334 return; 3331 3335 3332 - VM_WARN_ON_ONCE(!mm->lru_gen.memcg); 3333 3336 VM_WARN_ON_ONCE(list_empty(&mm->lru_gen.list)); 3334 3337 3335 3338 lru_gen_del_mm(mm); ··· 6757 6754 unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, 6758 6755 unsigned long nr_pages, 6759 6756 gfp_t gfp_mask, 6760 - unsigned int reclaim_options, 6761 - nodemask_t *nodemask) 6757 + unsigned int reclaim_options) 6762 6758 { 6763 6759 unsigned long nr_reclaimed; 6764 6760 unsigned int noreclaim_flag; ··· 6772 6770 .may_unmap = 1, 6773 6771 .may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP), 6774 6772 .proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE), 6775 - .nodemask = nodemask, 6776 6773 }; 6777 6774 /* 6778 6775 * Traverse the ZONELIST_FALLBACK zonelist of the current node to put
+205 -32
mm/zsmalloc.c
··· 113 113 * have room for two bit at least. 114 114 */ 115 115 #define OBJ_ALLOCATED_TAG 1 116 - #define OBJ_TAG_BITS 1 116 + 117 + #ifdef CONFIG_ZPOOL 118 + /* 119 + * The second least-significant bit in the object's header identifies if the 120 + * value stored at the header is a deferred handle from the last reclaim 121 + * attempt. 122 + * 123 + * As noted above, this is valid because we have room for two bits. 124 + */ 125 + #define OBJ_DEFERRED_HANDLE_TAG 2 126 + #define OBJ_TAG_BITS 2 127 + #define OBJ_TAG_MASK (OBJ_ALLOCATED_TAG | OBJ_DEFERRED_HANDLE_TAG) 128 + #else 129 + #define OBJ_TAG_BITS 1 130 + #define OBJ_TAG_MASK OBJ_ALLOCATED_TAG 131 + #endif /* CONFIG_ZPOOL */ 132 + 117 133 #define OBJ_INDEX_BITS (BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS) 118 134 #define OBJ_INDEX_MASK ((_AC(1, UL) << OBJ_INDEX_BITS) - 1) 119 135 ··· 238 222 * Handle of allocated object. 239 223 */ 240 224 unsigned long handle; 225 + #ifdef CONFIG_ZPOOL 226 + /* 227 + * Deferred handle of a reclaimed object. 
228 + */ 229 + unsigned long deferred_handle; 230 + #endif 241 231 }; 242 232 }; 243 233 ··· 294 272 /* links the zspage to the lru list in the pool */ 295 273 struct list_head lru; 296 274 bool under_reclaim; 297 - /* list of unfreed handles whose objects have been reclaimed */ 298 - unsigned long *deferred_handles; 299 275 #endif 300 276 301 277 struct zs_pool *pool; ··· 917 897 return *(unsigned long *)handle; 918 898 } 919 899 920 - static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle) 900 + static bool obj_tagged(struct page *page, void *obj, unsigned long *phandle, 901 + int tag) 921 902 { 922 903 unsigned long handle; 923 904 struct zspage *zspage = get_zspage(page); ··· 929 908 } else 930 909 handle = *(unsigned long *)obj; 931 910 932 - if (!(handle & OBJ_ALLOCATED_TAG)) 911 + if (!(handle & tag)) 933 912 return false; 934 913 935 - *phandle = handle & ~OBJ_ALLOCATED_TAG; 914 + /* Clear all tags before returning the handle */ 915 + *phandle = handle & ~OBJ_TAG_MASK; 936 916 return true; 937 917 } 918 + 919 + static inline bool obj_allocated(struct page *page, void *obj, unsigned long *phandle) 920 + { 921 + return obj_tagged(page, obj, phandle, OBJ_ALLOCATED_TAG); 922 + } 923 + 924 + #ifdef CONFIG_ZPOOL 925 + static bool obj_stores_deferred_handle(struct page *page, void *obj, 926 + unsigned long *phandle) 927 + { 928 + return obj_tagged(page, obj, phandle, OBJ_DEFERRED_HANDLE_TAG); 929 + } 930 + #endif 938 931 939 932 static void reset_page(struct page *page) 940 933 { ··· 981 946 } 982 947 983 948 #ifdef CONFIG_ZPOOL 949 + static unsigned long find_deferred_handle_obj(struct size_class *class, 950 + struct page *page, int *obj_idx); 951 + 984 952 /* 985 953 * Free all the deferred handles whose objects are freed in zs_free. 
986 954 */ 987 - static void free_handles(struct zs_pool *pool, struct zspage *zspage) 955 + static void free_handles(struct zs_pool *pool, struct size_class *class, 956 + struct zspage *zspage) 988 957 { 989 - unsigned long handle = (unsigned long)zspage->deferred_handles; 958 + int obj_idx = 0; 959 + struct page *page = get_first_page(zspage); 960 + unsigned long handle; 990 961 991 - while (handle) { 992 - unsigned long nxt_handle = handle_to_obj(handle); 962 + while (1) { 963 + handle = find_deferred_handle_obj(class, page, &obj_idx); 964 + if (!handle) { 965 + page = get_next_page(page); 966 + if (!page) 967 + break; 968 + obj_idx = 0; 969 + continue; 970 + } 993 971 994 972 cache_free_handle(pool, handle); 995 - handle = nxt_handle; 973 + obj_idx++; 996 974 } 997 975 } 998 976 #else 999 - static inline void free_handles(struct zs_pool *pool, struct zspage *zspage) {} 977 + static inline void free_handles(struct zs_pool *pool, struct size_class *class, 978 + struct zspage *zspage) {} 1000 979 #endif 1001 980 1002 981 static void __free_zspage(struct zs_pool *pool, struct size_class *class, ··· 1028 979 VM_BUG_ON(fg != ZS_EMPTY); 1029 980 1030 981 /* Free all deferred handles from zs_free */ 1031 - free_handles(pool, zspage); 982 + free_handles(pool, class, zspage); 1032 983 1033 984 next = page = get_first_page(zspage); 1034 985 do { ··· 1116 1067 #ifdef CONFIG_ZPOOL 1117 1068 INIT_LIST_HEAD(&zspage->lru); 1118 1069 zspage->under_reclaim = false; 1119 - zspage->deferred_handles = NULL; 1120 1070 #endif 1121 1071 1122 1072 set_freeobj(zspage, 0); ··· 1616 1568 } 1617 1569 EXPORT_SYMBOL_GPL(zs_malloc); 1618 1570 1619 - static void obj_free(int class_size, unsigned long obj) 1571 + static void obj_free(int class_size, unsigned long obj, unsigned long *handle) 1620 1572 { 1621 1573 struct link_free *link; 1622 1574 struct zspage *zspage; ··· 1630 1582 zspage = get_zspage(f_page); 1631 1583 1632 1584 vaddr = kmap_atomic(f_page); 1633 - 1634 - /* Insert this object 
in containing zspage's freelist */ 1635 1585 link = (struct link_free *)(vaddr + f_offset); 1636 - if (likely(!ZsHugePage(zspage))) 1637 - link->next = get_freeobj(zspage) << OBJ_TAG_BITS; 1638 - else 1639 - f_page->index = 0; 1586 + 1587 + if (handle) { 1588 + #ifdef CONFIG_ZPOOL 1589 + /* Stores the (deferred) handle in the object's header */ 1590 + *handle |= OBJ_DEFERRED_HANDLE_TAG; 1591 + *handle &= ~OBJ_ALLOCATED_TAG; 1592 + 1593 + if (likely(!ZsHugePage(zspage))) 1594 + link->deferred_handle = *handle; 1595 + else 1596 + f_page->index = *handle; 1597 + #endif 1598 + } else { 1599 + /* Insert this object in containing zspage's freelist */ 1600 + if (likely(!ZsHugePage(zspage))) 1601 + link->next = get_freeobj(zspage) << OBJ_TAG_BITS; 1602 + else 1603 + f_page->index = 0; 1604 + set_freeobj(zspage, f_objidx); 1605 + } 1606 + 1640 1607 kunmap_atomic(vaddr); 1641 - set_freeobj(zspage, f_objidx); 1642 1608 mod_zspage_inuse(zspage, -1); 1643 1609 } 1644 1610 ··· 1677 1615 zspage = get_zspage(f_page); 1678 1616 class = zspage_class(pool, zspage); 1679 1617 1680 - obj_free(class->size, obj); 1681 1618 class_stat_dec(class, OBJ_USED, 1); 1682 1619 1683 1620 #ifdef CONFIG_ZPOOL ··· 1685 1624 * Reclaim needs the handles during writeback. It'll free 1686 1625 * them along with the zspage when it's done with them. 1687 1626 * 1688 - * Record current deferred handle at the memory location 1689 - * whose address is given by handle. 1627 + * Record current deferred handle in the object's header. 
1690 1628 */ 1691 - record_obj(handle, (unsigned long)zspage->deferred_handles); 1692 - zspage->deferred_handles = (unsigned long *)handle; 1629 + obj_free(class->size, obj, &handle); 1693 1630 spin_unlock(&pool->lock); 1694 1631 return; 1695 1632 } 1696 1633 #endif 1634 + obj_free(class->size, obj, NULL); 1635 + 1697 1636 fullness = fix_fullness_group(class, zspage); 1698 1637 if (fullness == ZS_EMPTY) 1699 1638 free_zspage(pool, class, zspage); ··· 1774 1713 } 1775 1714 1776 1715 /* 1777 - * Find alloced object in zspage from index object and 1716 + * Find object with a certain tag in zspage from index object and 1778 1717 * return handle. 1779 1718 */ 1780 - static unsigned long find_alloced_obj(struct size_class *class, 1781 - struct page *page, int *obj_idx) 1719 + static unsigned long find_tagged_obj(struct size_class *class, 1720 + struct page *page, int *obj_idx, int tag) 1782 1721 { 1783 1722 unsigned int offset; 1784 1723 int index = *obj_idx; ··· 1789 1728 offset += class->size * index; 1790 1729 1791 1730 while (offset < PAGE_SIZE) { 1792 - if (obj_allocated(page, addr + offset, &handle)) 1731 + if (obj_tagged(page, addr + offset, &handle, tag)) 1793 1732 break; 1794 1733 1795 1734 offset += class->size; ··· 1802 1741 1803 1742 return handle; 1804 1743 } 1744 + 1745 + /* 1746 + * Find alloced object in zspage from index object and 1747 + * return handle. 1748 + */ 1749 + static unsigned long find_alloced_obj(struct size_class *class, 1750 + struct page *page, int *obj_idx) 1751 + { 1752 + return find_tagged_obj(class, page, obj_idx, OBJ_ALLOCATED_TAG); 1753 + } 1754 + 1755 + #ifdef CONFIG_ZPOOL 1756 + /* 1757 + * Find object storing a deferred handle in header in zspage from index object 1758 + * and return handle. 
1759 + */ 1760 + static unsigned long find_deferred_handle_obj(struct size_class *class, 1761 + struct page *page, int *obj_idx) 1762 + { 1763 + return find_tagged_obj(class, page, obj_idx, OBJ_DEFERRED_HANDLE_TAG); 1764 + } 1765 + #endif 1805 1766 1806 1767 struct zs_compact_control { 1807 1768 /* Source spage for migration which could be a subpage of zspage */ ··· 1867 1784 zs_object_copy(class, free_obj, used_obj); 1868 1785 obj_idx++; 1869 1786 record_obj(handle, free_obj); 1870 - obj_free(class->size, used_obj); 1787 + obj_free(class->size, used_obj, NULL); 1871 1788 } 1872 1789 1873 1790 /* Remember last position in this iteration */ ··· 2561 2478 EXPORT_SYMBOL_GPL(zs_destroy_pool); 2562 2479 2563 2480 #ifdef CONFIG_ZPOOL 2481 + static void restore_freelist(struct zs_pool *pool, struct size_class *class, 2482 + struct zspage *zspage) 2483 + { 2484 + unsigned int obj_idx = 0; 2485 + unsigned long handle, off = 0; /* off is within-page offset */ 2486 + struct page *page = get_first_page(zspage); 2487 + struct link_free *prev_free = NULL; 2488 + void *prev_page_vaddr = NULL; 2489 + 2490 + /* in case no free object found */ 2491 + set_freeobj(zspage, (unsigned int)(-1UL)); 2492 + 2493 + while (page) { 2494 + void *vaddr = kmap_atomic(page); 2495 + struct page *next_page; 2496 + 2497 + while (off < PAGE_SIZE) { 2498 + void *obj_addr = vaddr + off; 2499 + 2500 + /* skip allocated object */ 2501 + if (obj_allocated(page, obj_addr, &handle)) { 2502 + obj_idx++; 2503 + off += class->size; 2504 + continue; 2505 + } 2506 + 2507 + /* free deferred handle from reclaim attempt */ 2508 + if (obj_stores_deferred_handle(page, obj_addr, &handle)) 2509 + cache_free_handle(pool, handle); 2510 + 2511 + if (prev_free) 2512 + prev_free->next = obj_idx << OBJ_TAG_BITS; 2513 + else /* first free object found */ 2514 + set_freeobj(zspage, obj_idx); 2515 + 2516 + prev_free = (struct link_free *)vaddr + off / sizeof(*prev_free); 2517 + /* if last free object in a previous page, need to 
unmap */ 2518 + if (prev_page_vaddr) { 2519 + kunmap_atomic(prev_page_vaddr); 2520 + prev_page_vaddr = NULL; 2521 + } 2522 + 2523 + obj_idx++; 2524 + off += class->size; 2525 + } 2526 + 2527 + /* 2528 + * Handle the last (full or partial) object on this page. 2529 + */ 2530 + next_page = get_next_page(page); 2531 + if (next_page) { 2532 + if (!prev_free || prev_page_vaddr) { 2533 + /* 2534 + * There is no free object in this page, so we can safely 2535 + * unmap it. 2536 + */ 2537 + kunmap_atomic(vaddr); 2538 + } else { 2539 + /* update prev_page_vaddr since prev_free is on this page */ 2540 + prev_page_vaddr = vaddr; 2541 + } 2542 + } else { /* this is the last page */ 2543 + if (prev_free) { 2544 + /* 2545 + * Reset OBJ_TAG_BITS bit to last link to tell 2546 + * whether it's allocated object or not. 2547 + */ 2548 + prev_free->next = -1UL << OBJ_TAG_BITS; 2549 + } 2550 + 2551 + /* unmap previous page (if not done yet) */ 2552 + if (prev_page_vaddr) { 2553 + kunmap_atomic(prev_page_vaddr); 2554 + prev_page_vaddr = NULL; 2555 + } 2556 + 2557 + kunmap_atomic(vaddr); 2558 + } 2559 + 2560 + page = next_page; 2561 + off %= PAGE_SIZE; 2562 + } 2563 + } 2564 + 2564 2565 static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries) 2565 2566 { 2566 2567 int i, obj_idx, ret = 0; ··· 2728 2561 return 0; 2729 2562 } 2730 2563 2564 + /* 2565 + * Eviction fails on one of the handles, so we need to restore zspage. 2566 + * We need to rebuild its freelist (and free stored deferred handles), 2567 + * put it back to the correct size class, and add it to the LRU list. 2568 + */ 2569 + restore_freelist(pool, class, zspage); 2731 2570 putback_zspage(class, zspage); 2732 2571 list_add(&zspage->lru, &pool->lru); 2733 2572 unlock_zspage(zspage);
+40
net/can/j1939/address-claim.c
··· 165 165 * leaving this function. 166 166 */ 167 167 ecu = j1939_ecu_get_by_name_locked(priv, name); 168 + 169 + if (ecu && ecu->addr == skcb->addr.sa) { 170 + /* The ISO 11783-5 standard, in "4.5.2 - Address claim 171 + * requirements", states: 172 + * d) No CF shall begin, or resume, transmission on the 173 + * network until 250 ms after it has successfully claimed 174 + * an address except when responding to a request for 175 + * address-claimed. 176 + * 177 + * But "Figure 6" and "Figure 7" in "4.5.4.2 - Address-claim 178 + * prioritization" show that the CF begins the transmission 179 + * after 250 ms from the first AC (address-claimed) message 180 + * even if it sends another AC message during that time window 181 + * to resolve the address contention with another CF. 182 + * 183 + * As stated in "4.4.2.3 - Address-claimed message": 184 + * In order to successfully claim an address, the CF sending 185 + * an address claimed message shall not receive a contending 186 + * claim from another CF for at least 250 ms. 187 + * 188 + * As stated in "4.4.3.2 - NAME management (NM) message": 189 + * 1) A commanding CF can 190 + * d) request that a CF with a specified NAME transmit 191 + * the address-claimed message with its current NAME. 192 + * 2) A target CF shall 193 + * d) send an address-claimed message in response to a 194 + * request for a matching NAME 195 + * 196 + * Taking the above arguments into account, the 250 ms wait is 197 + * requested only during network initialization. 198 + * 199 + * Do not restart the timer on AC message if both the NAME and 200 + * the address match and so if the address has already been 201 + * claimed (timer has expired) or the AC message has been sent 202 + * to resolve the contention with another CF (timer is still 203 + * running). 204 + */ 205 + goto out_ecu_put; 206 + } 207 + 168 208 if (!ecu && j1939_address_is_unicast(skcb->addr.sa)) 169 209 ecu = j1939_ecu_create_locked(priv, name); 170 210
+15 -3
net/core/neighbour.c
··· 269 269 (n->nud_state == NUD_NOARP) || 270 270 (tbl->is_multicast && 271 271 tbl->is_multicast(n->primary_key)) || 272 - time_after(tref, n->updated)) 272 + !time_in_range(n->updated, tref, jiffies)) 273 273 remove = true; 274 274 write_unlock(&n->lock); 275 275 ··· 289 289 290 290 static void neigh_add_timer(struct neighbour *n, unsigned long when) 291 291 { 292 + /* Use safe distance from the jiffies - LONG_MAX point while timer 293 + * is running in DELAY/PROBE state but still show to user space 294 + * large times in the past. 295 + */ 296 + unsigned long mint = jiffies - (LONG_MAX - 86400 * HZ); 297 + 292 298 neigh_hold(n); 299 + if (!time_in_range(n->confirmed, mint, jiffies)) 300 + n->confirmed = mint; 301 + if (time_before(n->used, n->confirmed)) 302 + n->used = n->confirmed; 293 303 if (unlikely(mod_timer(&n->timer, when))) { 294 304 printk("NEIGH: BUG, double timer add, state is %x\n", 295 305 n->nud_state); ··· 1011 1001 goto next_elt; 1012 1002 } 1013 1003 1014 - if (time_before(n->used, n->confirmed)) 1004 + if (time_before(n->used, n->confirmed) && 1005 + time_is_before_eq_jiffies(n->confirmed)) 1015 1006 n->used = n->confirmed; 1016 1007 1017 1008 if (refcount_read(&n->refcnt) == 1 && 1018 1009 (state == NUD_FAILED || 1019 - time_after(jiffies, n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) { 1010 + !time_in_range_open(jiffies, n->used, 1011 + n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) { 1020 1012 *np = n->next; 1021 1013 neigh_mark_dead(n); 1022 1014 write_unlock(&n->lock);
+2 -1
net/core/sock.c
··· 1531 1531 ret = -EINVAL; 1532 1532 break; 1533 1533 } 1534 + if ((u8)val == SOCK_TXREHASH_DEFAULT) 1535 + val = READ_ONCE(sock_net(sk)->core.sysctl_txrehash); 1534 1536 /* Paired with READ_ONCE() in tcp_rtx_synack() */ 1535 1537 WRITE_ONCE(sk->sk_txrehash, (u8)val); 1536 1538 break; ··· 3456 3454 sk->sk_pacing_rate = ~0UL; 3457 3455 WRITE_ONCE(sk->sk_pacing_shift, 10); 3458 3456 sk->sk_incoming_cpu = -1; 3459 - sk->sk_txrehash = SOCK_TXREHASH_DEFAULT; 3460 3457 3461 3458 sk_rx_queue_clear(sk); 3462 3459 /*
+2 -3
net/devlink/core.c
··· 205 205 goto err_xa_alloc; 206 206 207 207 devlink->netdevice_nb.notifier_call = devlink_port_netdevice_event; 208 - ret = register_netdevice_notifier_net(net, &devlink->netdevice_nb); 208 + ret = register_netdevice_notifier(&devlink->netdevice_nb); 209 209 if (ret) 210 210 goto err_register_netdevice_notifier; 211 211 ··· 266 266 xa_destroy(&devlink->snapshot_ids); 267 267 xa_destroy(&devlink->ports); 268 268 269 - WARN_ON_ONCE(unregister_netdevice_notifier_net(devlink_net(devlink), 270 - &devlink->netdevice_nb)); 269 + WARN_ON_ONCE(unregister_netdevice_notifier(&devlink->netdevice_nb)); 271 270 272 271 xa_erase(&devlinks, devlink->index); 273 272
+4
net/devlink/leftover.c
··· 8430 8430 break; 8431 8431 case NETDEV_REGISTER: 8432 8432 case NETDEV_CHANGENAME: 8433 + if (devlink_net(devlink) != dev_net(netdev)) 8434 + return NOTIFY_OK; 8433 8435 /* Set the netdev on top of previously set type. Note this 8434 8436 * event happens also during net namespace change so here 8435 8437 * we take into account netdev pointer appearing in this ··· 8441 8439 netdev); 8442 8440 break; 8443 8441 case NETDEV_UNREGISTER: 8442 + if (devlink_net(devlink) != dev_net(netdev)) 8443 + return NOTIFY_OK; 8444 8444 /* Clear netdev pointer, but not the type. This event happens 8445 8445 * also during net namespace change so we need to clear 8446 8446 * pointer to netdev that is going to another net namespace.
+1
net/ipv4/af_inet.c
··· 347 347 sk->sk_destruct = inet_sock_destruct; 348 348 sk->sk_protocol = protocol; 349 349 sk->sk_backlog_rcv = sk->sk_prot->backlog_rcv; 350 + sk->sk_txrehash = READ_ONCE(net->core.sysctl_txrehash); 350 351 351 352 inet->uc_ttl = -1; 352 353 inet->mc_loop = 1;
-3
net/ipv4/inet_connection_sock.c
··· 1246 1246 sk->sk_ack_backlog = 0; 1247 1247 inet_csk_delack_init(sk); 1248 1248 1249 - if (sk->sk_txrehash == SOCK_TXREHASH_DEFAULT) 1250 - sk->sk_txrehash = READ_ONCE(sock_net(sk)->core.sysctl_txrehash); 1251 - 1252 1249 /* There is race window here: we announce ourselves listening, 1253 1250 * but this transition is still not validated by get_port(). 1254 1251 * It is OK, because this socket enters to hash table only
+1
net/ipv6/af_inet6.c
··· 222 222 np->pmtudisc = IPV6_PMTUDISC_WANT; 223 223 np->repflow = net->ipv6.sysctl.flowlabel_reflect & FLOWLABEL_REFLECT_ESTABLISHED; 224 224 sk->sk_ipv6only = net->ipv6.sysctl.bindv6only; 225 + sk->sk_txrehash = READ_ONCE(net->core.sysctl_txrehash); 225 226 226 227 /* Init the ipv4 part of the socket since we can have sockets 227 228 * using v6 API for ipv4.
+6 -4
net/mptcp/pm_netlink.c
··· 1002 1002 { 1003 1003 int addrlen = sizeof(struct sockaddr_in); 1004 1004 struct sockaddr_storage addr; 1005 - struct mptcp_sock *msk; 1006 1005 struct socket *ssock; 1006 + struct sock *newsk; 1007 1007 int backlog = 1024; 1008 1008 int err; 1009 1009 ··· 1012 1012 if (err) 1013 1013 return err; 1014 1014 1015 - msk = mptcp_sk(entry->lsk->sk); 1016 - if (!msk) 1015 + newsk = entry->lsk->sk; 1016 + if (!newsk) 1017 1017 return -EINVAL; 1018 1018 1019 - ssock = __mptcp_nmpc_socket(msk); 1019 + lock_sock(newsk); 1020 + ssock = __mptcp_nmpc_socket(mptcp_sk(newsk)); 1021 + release_sock(newsk); 1020 1022 if (!ssock) 1021 1023 return -EINVAL; 1022 1024
+9
net/mptcp/protocol.c
··· 2902 2902 struct mptcp_subflow_context *subflow; 2903 2903 struct mptcp_sock *msk = mptcp_sk(sk); 2904 2904 bool do_cancel_work = false; 2905 + int subflows_alive = 0; 2905 2906 2906 2907 sk->sk_shutdown = SHUTDOWN_MASK; 2907 2908 ··· 2929 2928 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 2930 2929 bool slow = lock_sock_fast_nested(ssk); 2931 2930 2931 + subflows_alive += ssk->sk_state != TCP_CLOSE; 2932 + 2932 2933 /* since the close timeout takes precedence on the fail one, 2933 2934 * cancel the latter 2934 2935 */ ··· 2945 2942 unlock_sock_fast(ssk, slow); 2946 2943 } 2947 2944 sock_orphan(sk); 2945 + 2946 + /* all the subflows are closed, only timeout can change the msk 2947 + * state, let's not keep resources busy for no reasons 2948 + */ 2949 + if (subflows_alive == 0) 2950 + inet_sk_state_store(sk, TCP_CLOSE); 2948 2951 2949 2952 sock_hold(sk); 2950 2953 pr_debug("msk=%p state=%d", sk, sk->sk_state);
+9 -2
net/mptcp/sockopt.c
··· 760 760 static int mptcp_setsockopt_first_sf_only(struct mptcp_sock *msk, int level, int optname, 761 761 sockptr_t optval, unsigned int optlen) 762 762 { 763 + struct sock *sk = (struct sock *)msk; 763 764 struct socket *sock; 765 + int ret = -EINVAL; 764 766 765 767 /* Limit to first subflow, before the connection establishment */ 768 + lock_sock(sk); 766 769 sock = __mptcp_nmpc_socket(msk); 767 770 if (!sock) 768 - return -EINVAL; 771 + goto unlock; 769 772 770 - return tcp_setsockopt(sock->sk, level, optname, optval, optlen); 773 + ret = tcp_setsockopt(sock->sk, level, optname, optval, optlen); 774 + 775 + unlock: 776 + release_sock(sk); 777 + return ret; 771 778 } 772 779 773 780 static int mptcp_setsockopt_sol_tcp(struct mptcp_sock *msk, int optname,
+10 -2
net/mptcp/subflow.c
··· 1400 1400 mptcp_for_each_subflow(msk, subflow) { 1401 1401 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 1402 1402 int err = sock_error(ssk); 1403 + int ssk_state; 1403 1404 1404 1405 if (!err) 1405 1406 continue; ··· 1411 1410 if (sk->sk_state != TCP_SYN_SENT && !__mptcp_check_fallback(msk)) 1412 1411 continue; 1413 1412 1414 - inet_sk_state_store(sk, inet_sk_state_load(ssk)); 1413 + /* We need to propagate only transition to CLOSE state. 1414 + * Orphaned socket will see such state change via 1415 + * subflow_sched_work_if_closed() and that path will properly 1416 + * destroy the msk as needed. 1417 + */ 1418 + ssk_state = inet_sk_state_load(ssk); 1419 + if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD)) 1420 + inet_sk_state_store(sk, ssk_state); 1415 1421 sk->sk_err = -err; 1416 1422 1417 1423 /* This barrier is coupled with smp_rmb() in mptcp_poll() */ ··· 1690 1682 if (err) 1691 1683 return err; 1692 1684 1693 - lock_sock(sf->sk); 1685 + lock_sock_nested(sf->sk, SINGLE_DEPTH_NESTING); 1694 1686 1695 1687 /* the newly created socket has to be in the same cgroup as its parent */ 1696 1688 mptcp_attach_cgroup(sk, sf->sk);
+3 -3
net/rds/message.c
··· 104 104 spin_lock_irqsave(&q->lock, flags); 105 105 head = &q->zcookie_head; 106 106 if (!list_empty(head)) { 107 - info = list_entry(head, struct rds_msg_zcopy_info, 108 - rs_zcookie_next); 109 - if (info && rds_zcookie_add(info, cookie)) { 107 + info = list_first_entry(head, struct rds_msg_zcopy_info, 108 + rs_zcookie_next); 109 + if (rds_zcookie_add(info, cookie)) { 110 110 spin_unlock_irqrestore(&q->lock, flags); 111 111 kfree(rds_info_from_znotifier(znotif)); 112 112 /* caller invokes rds_wake_sk_sleep() */
+1 -1
net/sched/sch_htb.c
··· 433 433 while (m) { 434 434 unsigned int prio = ffz(~m); 435 435 436 - if (WARN_ON_ONCE(prio > ARRAY_SIZE(p->inner.clprio))) 436 + if (WARN_ON_ONCE(prio >= ARRAY_SIZE(p->inner.clprio))) 437 437 break; 438 438 m &= ~(1 << prio); 439 439
+3 -1
net/xfrm/xfrm_compat.c
··· 5 5 * Based on code and translator idea by: Florian Westphal <fw@strlen.de> 6 6 */ 7 7 #include <linux/compat.h> 8 + #include <linux/nospec.h> 8 9 #include <linux/xfrm.h> 9 10 #include <net/xfrm.h> 10 11 ··· 303 302 nla_for_each_attr(nla, attrs, len, remaining) { 304 303 int err; 305 304 306 - switch (type) { 305 + switch (nlh_src->nlmsg_type) { 307 306 case XFRM_MSG_NEWSPDINFO: 308 307 err = xfrm_nla_cpy(dst, nla, nla_len(nla)); 309 308 break; ··· 438 437 NL_SET_ERR_MSG(extack, "Bad attribute"); 439 438 return -EOPNOTSUPP; 440 439 } 440 + type = array_index_nospec(type, XFRMA_MAX + 1); 441 441 if (nla_len(nla) < compat_policy[type].len) { 442 442 NL_SET_ERR_MSG(extack, "Attribute bad length"); 443 443 return -EOPNOTSUPP;
+1 -2
net/xfrm/xfrm_input.c
··· 279 279 goto out; 280 280 281 281 if (x->props.flags & XFRM_STATE_DECAP_DSCP) 282 - ipv6_copy_dscp(ipv6_get_dsfield(ipv6_hdr(skb)), 283 - ipipv6_hdr(skb)); 282 + ipv6_copy_dscp(XFRM_MODE_SKB_CB(skb)->tos, ipipv6_hdr(skb)); 284 283 if (!(x->props.flags & XFRM_STATE_NOECN)) 285 284 ipip6_ecn_decapsulate(skb); 286 285
+50 -4
net/xfrm/xfrm_interface_core.c
··· 310 310 skb->mark = 0; 311 311 } 312 312 313 + static int xfrmi_input(struct sk_buff *skb, int nexthdr, __be32 spi, 314 + int encap_type, unsigned short family) 315 + { 316 + struct sec_path *sp; 317 + 318 + sp = skb_sec_path(skb); 319 + if (sp && (sp->len || sp->olen) && 320 + !xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family)) 321 + goto discard; 322 + 323 + XFRM_SPI_SKB_CB(skb)->family = family; 324 + if (family == AF_INET) { 325 + XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct iphdr, daddr); 326 + XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = NULL; 327 + } else { 328 + XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr); 329 + XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL; 330 + } 331 + 332 + return xfrm_input(skb, nexthdr, spi, encap_type); 333 + discard: 334 + kfree_skb(skb); 335 + return 0; 336 + } 337 + 338 + static int xfrmi4_rcv(struct sk_buff *skb) 339 + { 340 + return xfrmi_input(skb, ip_hdr(skb)->protocol, 0, 0, AF_INET); 341 + } 342 + 343 + static int xfrmi6_rcv(struct sk_buff *skb) 344 + { 345 + return xfrmi_input(skb, skb_network_header(skb)[IP6CB(skb)->nhoff], 346 + 0, 0, AF_INET6); 347 + } 348 + 349 + static int xfrmi4_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type) 350 + { 351 + return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET); 352 + } 353 + 354 + static int xfrmi6_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type) 355 + { 356 + return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET6); 357 + } 358 + 313 359 static int xfrmi_rcv_cb(struct sk_buff *skb, int err) 314 360 { 315 361 const struct xfrm_mode *inner_mode;
··· 991 945 }; 992 946 993 947 static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = { 994 - .handler = xfrm6_rcv, 995 - .input_handler = xfrm_input, 948 + .handler = xfrmi6_rcv, 949 + .input_handler = xfrmi6_input, 996 950 .cb_handler = xfrmi_rcv_cb, 997 951 .err_handler = xfrmi6_err, 998 952 .priority = 10,
··· 1042 996 #endif 1043 997 1044 998 static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = { 1045 - .handler = xfrm4_rcv, 1046 - .input_handler = xfrm_input, 999 + .handler = xfrmi4_rcv, 1000 + .input_handler = xfrmi4_input, 1047 1001 .cb_handler = xfrmi_rcv_cb, 1048 1002 .err_handler = xfrmi4_err, 1049 1003 .priority = 10,
+10 -4
net/xfrm/xfrm_policy.c
··· 336 336 } 337 337 if (xp->lft.hard_use_expires_seconds) { 338 338 time64_t tmo = xp->lft.hard_use_expires_seconds + 339 - (xp->curlft.use_time ? : xp->curlft.add_time) - now; 339 + (READ_ONCE(xp->curlft.use_time) ? : xp->curlft.add_time) - now; 340 340 if (tmo <= 0) 341 341 goto expired; 342 342 if (tmo < next) ··· 354 354 } 355 355 if (xp->lft.soft_use_expires_seconds) { 356 356 time64_t tmo = xp->lft.soft_use_expires_seconds + 357 - (xp->curlft.use_time ? : xp->curlft.add_time) - now; 357 + (READ_ONCE(xp->curlft.use_time) ? : xp->curlft.add_time) - now; 358 358 if (tmo <= 0) { 359 359 warn = 1; 360 360 tmo = XFRM_KM_TIMEOUT; ··· 3661 3661 return 1; 3662 3662 } 3663 3663 3664 - pol->curlft.use_time = ktime_get_real_seconds(); 3664 + /* This lockless write can happen from different cpus. */ 3665 + WRITE_ONCE(pol->curlft.use_time, ktime_get_real_seconds()); 3665 3666 3666 3667 pols[0] = pol; 3667 3668 npols++; ··· 3677 3676 xfrm_pol_put(pols[0]); 3678 3677 return 0; 3679 3678 } 3680 - pols[1]->curlft.use_time = ktime_get_real_seconds(); 3679 + /* This write can happen from different cpus. */ 3680 + WRITE_ONCE(pols[1]->curlft.use_time, 3681 + ktime_get_real_seconds()); 3681 3682 npols++; 3682 3683 } 3683 3684 } ··· 3744 3741 XFRM_INC_STATS(net, LINUX_MIB_XFRMINTMPLMISMATCH); 3745 3742 goto reject; 3746 3743 } 3744 + 3745 + if (if_id) 3746 + secpath_reset(skb); 3747 3747 3748 3748 xfrm_pols_put(pols, npols); 3749 3749 return 1;
+9 -9
net/xfrm/xfrm_state.c
··· 577 577 if (x->km.state == XFRM_STATE_EXPIRED) 578 578 goto expired; 579 579 if (x->lft.hard_add_expires_seconds) { 580 - long tmo = x->lft.hard_add_expires_seconds + 580 + time64_t tmo = x->lft.hard_add_expires_seconds + 581 581 x->curlft.add_time - now; 582 582 if (tmo <= 0) { 583 583 if (x->xflags & XFRM_SOFT_EXPIRE) { ··· 594 594 next = tmo; 595 595 } 596 596 if (x->lft.hard_use_expires_seconds) { 597 - long tmo = x->lft.hard_use_expires_seconds + 598 - (x->curlft.use_time ? : now) - now; 597 + time64_t tmo = x->lft.hard_use_expires_seconds + 598 + (READ_ONCE(x->curlft.use_time) ? : now) - now; 599 599 if (tmo <= 0) 600 600 goto expired; 601 601 if (tmo < next) ··· 604 604 if (x->km.dying) 605 605 goto resched; 606 606 if (x->lft.soft_add_expires_seconds) { 607 - long tmo = x->lft.soft_add_expires_seconds + 607 + time64_t tmo = x->lft.soft_add_expires_seconds + 608 608 x->curlft.add_time - now; 609 609 if (tmo <= 0) { 610 610 warn = 1; ··· 616 616 } 617 617 } 618 618 if (x->lft.soft_use_expires_seconds) { 619 - long tmo = x->lft.soft_use_expires_seconds + 620 - (x->curlft.use_time ? : now) - now; 619 + time64_t tmo = x->lft.soft_use_expires_seconds + 620 + (READ_ONCE(x->curlft.use_time) ? : now) - now; 621 621 if (tmo <= 0) 622 622 warn = 1; 623 623 else if (tmo < next) ··· 1906 1906 1907 1907 hrtimer_start(&x1->mtimer, ktime_set(1, 0), 1908 1908 HRTIMER_MODE_REL_SOFT); 1909 - if (x1->curlft.use_time) 1909 + if (READ_ONCE(x1->curlft.use_time)) 1910 1910 xfrm_state_check_expire(x1); 1911 1911 1912 1912 if (x->props.smark.m || x->props.smark.v || x->if_id) { ··· 1940 1940 { 1941 1941 xfrm_dev_state_update_curlft(x); 1942 1942 1943 - if (!x->curlft.use_time) 1944 - x->curlft.use_time = ktime_get_real_seconds(); 1943 + if (!READ_ONCE(x->curlft.use_time)) 1944 + WRITE_ONCE(x->curlft.use_time, ktime_get_real_seconds()); 1945 1945 1946 1946 if (x->curlft.bytes >= x->lft.hard_byte_limit || 1947 1947 x->curlft.packets >= x->lft.hard_packet_limit) {
+5 -1
scripts/Makefile.modinst
··· 66 66 # Don't stop modules_install even if we can't sign external modules. 67 67 # 68 68 ifeq ($(CONFIG_MODULE_SIG_ALL),y) 69 + ifeq ($(filter pkcs11:%, $(CONFIG_MODULE_SIG_KEY)),) 69 70 sig-key := $(if $(wildcard $(CONFIG_MODULE_SIG_KEY)),,$(srctree)/)$(CONFIG_MODULE_SIG_KEY) 71 + else 72 + sig-key := $(CONFIG_MODULE_SIG_KEY) 73 + endif 70 74 quiet_cmd_sign = SIGN $@ 71 - cmd_sign = scripts/sign-file $(CONFIG_MODULE_SIG_HASH) $(sig-key) certs/signing_key.x509 $@ \ 75 + cmd_sign = scripts/sign-file $(CONFIG_MODULE_SIG_HASH) "$(sig-key)" certs/signing_key.x509 $@ \ 72 76 $(if $(KBUILD_EXTMOD),|| true) 73 77 else 74 78 quiet_cmd_sign :=
+1 -1
tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh
··· 246 246 bridge vlan add dev $swp2 vid 300 247 247 248 248 tc filter add dev $swp1 ingress chain $(IS1 2) pref 3 \ 249 - protocol 802.1Q flower skip_sw vlan_id 200 \ 249 + protocol 802.1Q flower skip_sw vlan_id 200 src_mac $h1_mac \ 250 250 action vlan modify id 300 \ 251 251 action goto chain $(IS2 0 0) 252 252
tools/testing/selftests/filesystems/fat/run_fat_tests.sh
+103 -84
tools/testing/selftests/kvm/aarch64/page_fault_test.c
··· 237 237 GUEST_SYNC(CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG); 238 238 } 239 239 240 + static void guest_check_no_s1ptw_wr_in_dirty_log(void) 241 + { 242 + GUEST_SYNC(CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG); 243 + } 244 + 240 245 static void guest_exec(void) 241 246 { 242 247 int (*code)(void) = (int (*)(void))TEST_EXEC_GVA;
··· 309 304 310 305 /* Returns true to continue the test, and false if it should be skipped. */ 311 306 static int uffd_generic_handler(int uffd_mode, int uffd, struct uffd_msg *msg, 312 - struct uffd_args *args, bool expect_write) 307 + struct uffd_args *args) 313 308 { 314 309 uint64_t addr = msg->arg.pagefault.address; 315 310 uint64_t flags = msg->arg.pagefault.flags;
··· 318 313 319 314 TEST_ASSERT(uffd_mode == UFFDIO_REGISTER_MODE_MISSING, 320 315 "The only expected UFFD mode is MISSING"); 321 - ASSERT_EQ(!!(flags & UFFD_PAGEFAULT_FLAG_WRITE), expect_write); 322 316 ASSERT_EQ(addr, (uint64_t)args->hva); 323 317 324 318 pr_debug("uffd fault: addr=%p write=%d\n",
··· 341 337 return 0; 342 338 } 343 339 344 - static int uffd_pt_write_handler(int mode, int uffd, struct uffd_msg *msg) 340 + static int uffd_pt_handler(int mode, int uffd, struct uffd_msg *msg) 345 341 { 346 - return uffd_generic_handler(mode, uffd, msg, &pt_args, true); 342 + return uffd_generic_handler(mode, uffd, msg, &pt_args); 347 343 } 348 344 349 - static int uffd_data_write_handler(int mode, int uffd, struct uffd_msg *msg) 345 + static int uffd_data_handler(int mode, int uffd, struct uffd_msg *msg) 350 346 { 351 - return uffd_generic_handler(mode, uffd, msg, &data_args, true); 352 - } 353 - 354 - static int uffd_data_read_handler(int mode, int uffd, struct uffd_msg *msg) 355 - { 356 - return uffd_generic_handler(mode, uffd, msg, &data_args, false); 347 + return uffd_generic_handler(mode, uffd, msg, &data_args); 357 348 } 358 349 359 350 static void setup_uffd_args(struct userspace_mem_region *region,
··· 470 471 { 471 472 struct userspace_mem_region *data_region, *pt_region; 472 473 bool continue_test = true; 474 + uint64_t pte_gpa, pte_pg; 473 475 474 476 data_region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA); 475 477 pt_region = vm_get_mem_region(vm, MEM_REGION_PT); 478 + pte_gpa = addr_hva2gpa(vm, virt_get_pte_hva(vm, TEST_GVA)); 479 + pte_pg = (pte_gpa - pt_region->region.guest_phys_addr) / getpagesize(); 476 480 477 481 if (cmd == CMD_SKIP_TEST) 478 482 continue_test = false;
··· 488 486 TEST_ASSERT(check_write_in_dirty_log(vm, data_region, 0), 489 487 "Missing write in dirty log"); 490 488 if (cmd & CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG) 491 - TEST_ASSERT(check_write_in_dirty_log(vm, pt_region, 0), 489 + TEST_ASSERT(check_write_in_dirty_log(vm, pt_region, pte_pg), 492 490 "Missing s1ptw write in dirty log"); 493 491 if (cmd & CMD_CHECK_NO_WRITE_IN_DIRTY_LOG) 494 492 TEST_ASSERT(!check_write_in_dirty_log(vm, data_region, 0), 495 493 "Unexpected write in dirty log"); 496 494 if (cmd & CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG) 497 495 TEST_ASSERT(!check_write_in_dirty_log(vm, pt_region, pte_pg), 498 496 "Unexpected s1ptw write in dirty log"); 499 497 500 498 return continue_test;
··· 799 797 .expected_events = { .uffd_faults = _uffd_faults, }, \ 800 798 } 801 799 802 - #define TEST_DIRTY_LOG(_access, _with_af, _test_check) \ 800 + #define TEST_DIRTY_LOG(_access, _with_af, _test_check, _pt_check) \ 803 801 { \ 804 802 .name = SCAT3(dirty_log, _access, _with_af), \ 805 803 .data_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \
··· 807 805 .guest_prepare = { _PREPARE(_with_af), \ 808 806 _PREPARE(_access) }, \ 809 807 .guest_test = _access, \ 810 - .guest_test_check = { _CHECK(_with_af), _test_check, \ 811 - guest_check_s1ptw_wr_in_dirty_log}, \ 808 + .guest_test_check = { _CHECK(_with_af), _test_check, _pt_check }, \ 812 809 .expected_events = { 0 }, \ 813 810 } 814 811 815 812 #define TEST_UFFD_AND_DIRTY_LOG(_access, _with_af, _uffd_data_handler, \ 816 813 _uffd_faults, _test_check, _pt_check) \ 817 814 { \ 818 815 .name = SCAT3(uffd_and_dirty_log, _access, _with_af), \ 819 816 .data_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \
··· 821 820 _PREPARE(_access) }, \ 822 821 .guest_test = _access, \ 823 822 .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ 824 - .guest_test_check = { _CHECK(_with_af), _test_check }, \ 823 + .guest_test_check = { _CHECK(_with_af), _test_check, _pt_check }, \ 825 824 .uffd_data_handler = _uffd_data_handler, \ 826 - .uffd_pt_handler = uffd_pt_write_handler, \ 825 + .uffd_pt_handler = uffd_pt_handler, \ 827 826 .expected_events = { .uffd_faults = _uffd_faults, }, \ 828 827 } 829 828 830 829 #define TEST_RO_MEMSLOT(_access, _mmio_handler, _mmio_exits) \ 831 830 { \ 832 - .name = SCAT3(ro_memslot, _access, _with_af), \ 831 + .name = SCAT2(ro_memslot, _access), \ 833 832 .data_memslot_flags = KVM_MEM_READONLY, \ 833 + .pt_memslot_flags = KVM_MEM_READONLY, \ 834 834 .guest_prepare = { _PREPARE(_access) }, \ 835 835 .guest_test = _access, \ 836 836 .mmio_handler = _mmio_handler, \
··· 842 840 { \ 843 841 .name = SCAT2(ro_memslot_no_syndrome, _access), \ 844 842 .data_memslot_flags = KVM_MEM_READONLY, \ 843 + .pt_memslot_flags = KVM_MEM_READONLY, \ 845 844 .guest_test = _access, \ 846 845 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ 847 846 .expected_events = { .fail_vcpu_runs = 1 }, \
··· 851 848 #define TEST_RO_MEMSLOT_AND_DIRTY_LOG(_access, _mmio_handler, _mmio_exits, \ 852 849 _test_check) \ 853 850 { \ 854 - .name = SCAT3(ro_memslot, _access, _with_af), \ 851 + .name = SCAT2(ro_memslot, _access), \ 855 852 .data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 856 - .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ 853 + .pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 857 854 .guest_prepare = { _PREPARE(_access) }, \ 858 855 .guest_test = _access, \ 859 856 .guest_test_check = { _test_check }, \
··· 865 862 { \ 866 863 .name = SCAT2(ro_memslot_no_syn_and_dlog, _access), \ 867 864 .data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 868 - .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ 865 + .pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 869 866 .guest_test = _access, \ 870 867 .guest_test_check = { _test_check }, \ 871 868 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \
··· 877 874 { \ 878 875 .name = SCAT2(ro_memslot_uffd, _access), \ 879 876 .data_memslot_flags = KVM_MEM_READONLY, \ 877 + .pt_memslot_flags = KVM_MEM_READONLY, \ 880 878 .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ 881 879 .guest_prepare = { _PREPARE(_access) }, \ 882 880 .guest_test = _access, \ 883 881 .uffd_data_handler = _uffd_data_handler, \ 884 - .uffd_pt_handler = uffd_pt_write_handler, \ 882 + .uffd_pt_handler = uffd_pt_handler, \ 885 883 .mmio_handler = _mmio_handler, \ 886 884 .expected_events = { .mmio_exits = _mmio_exits, \ 887 885 .uffd_faults = _uffd_faults }, \
··· 893 889 { \ 894 890 .name = SCAT2(ro_memslot_no_syndrome, _access), \ 895 891 .data_memslot_flags = KVM_MEM_READONLY, \ 892 + .pt_memslot_flags = KVM_MEM_READONLY, \ 896 893 .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ 897 894 .guest_test = _access, \ 898 895 .uffd_data_handler = _uffd_data_handler, \ 899 - .uffd_pt_handler = uffd_pt_write_handler, \ 896 + .uffd_pt_handler = uffd_pt_handler, \ 900 897 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ 901 898 .expected_events = { .fail_vcpu_runs = 1, \ 902 899 .uffd_faults = _uffd_faults }, \
··· 938 933 * (S1PTW). 939 934 */ 940 935 TEST_UFFD(guest_read64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 941 - uffd_data_read_handler, uffd_pt_write_handler, 2), 942 - /* no_af should also lead to a PT write. */ 936 + uffd_data_handler, uffd_pt_handler, 2), 943 937 TEST_UFFD(guest_read64, no_af, CMD_HOLE_DATA | CMD_HOLE_PT, 944 - uffd_data_read_handler, uffd_pt_write_handler, 2), 945 - /* Note how that cas invokes the read handler. */ 938 + uffd_data_handler, uffd_pt_handler, 2), 946 939 TEST_UFFD(guest_cas, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 947 - uffd_data_read_handler, uffd_pt_write_handler, 2), 940 + uffd_data_handler, uffd_pt_handler, 2), 948 941 /* 949 942 * Can't test guest_at with_af as it's IMPDEF whether the AF is set. 950 943 * The S1PTW fault should still be marked as a write. 951 944 */ 952 945 TEST_UFFD(guest_at, no_af, CMD_HOLE_DATA | CMD_HOLE_PT, 953 - uffd_data_read_handler, uffd_pt_write_handler, 1), 946 + uffd_no_handler, uffd_pt_handler, 1), 954 947 TEST_UFFD(guest_ld_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 955 - uffd_data_read_handler, uffd_pt_write_handler, 2), 948 + uffd_data_handler, uffd_pt_handler, 2), 956 949 TEST_UFFD(guest_write64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 957 - uffd_data_write_handler, uffd_pt_write_handler, 2), 950 + uffd_data_handler, uffd_pt_handler, 2), 958 951 TEST_UFFD(guest_dc_zva, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 959 - uffd_data_write_handler, uffd_pt_write_handler, 2), 952 + uffd_data_handler, uffd_pt_handler, 2), 960 953 TEST_UFFD(guest_st_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 961 - uffd_data_write_handler, uffd_pt_write_handler, 2), 954 + uffd_data_handler, uffd_pt_handler, 2), 962 955 TEST_UFFD(guest_exec, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, 963 - uffd_data_read_handler, uffd_pt_write_handler, 2), 956 + uffd_data_handler, uffd_pt_handler, 2), 964 957 965 958 /* 966 959 * Try accesses when the data and PT memory regions are both 967 960 * tracked for dirty logging. 968 961 */ 969 - TEST_DIRTY_LOG(guest_read64, with_af, guest_check_no_write_in_dirty_log), 970 - /* no_af should also lead to a PT write. */ 971 - TEST_DIRTY_LOG(guest_read64, no_af, guest_check_no_write_in_dirty_log), 972 - TEST_DIRTY_LOG(guest_ld_preidx, with_af, guest_check_no_write_in_dirty_log), 973 - TEST_DIRTY_LOG(guest_at, no_af, guest_check_no_write_in_dirty_log), 974 - TEST_DIRTY_LOG(guest_exec, with_af, guest_check_no_write_in_dirty_log), 975 - TEST_DIRTY_LOG(guest_write64, with_af, guest_check_write_in_dirty_log), 976 - TEST_DIRTY_LOG(guest_cas, with_af, guest_check_write_in_dirty_log), 977 - TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log), 978 - TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log), 962 + TEST_DIRTY_LOG(guest_read64, with_af, guest_check_no_write_in_dirty_log, 963 + guest_check_s1ptw_wr_in_dirty_log), 964 + TEST_DIRTY_LOG(guest_read64, no_af, guest_check_no_write_in_dirty_log, 965 + guest_check_no_s1ptw_wr_in_dirty_log), 966 + TEST_DIRTY_LOG(guest_ld_preidx, with_af, 967 + guest_check_no_write_in_dirty_log, 968 + guest_check_s1ptw_wr_in_dirty_log), 969 + TEST_DIRTY_LOG(guest_at, no_af, guest_check_no_write_in_dirty_log, 970 + guest_check_no_s1ptw_wr_in_dirty_log), 971 + TEST_DIRTY_LOG(guest_exec, with_af, guest_check_no_write_in_dirty_log, 972 + guest_check_s1ptw_wr_in_dirty_log), 973 + TEST_DIRTY_LOG(guest_write64, with_af, guest_check_write_in_dirty_log, 974 + guest_check_s1ptw_wr_in_dirty_log), 975 + TEST_DIRTY_LOG(guest_cas, with_af, guest_check_write_in_dirty_log, 976 + guest_check_s1ptw_wr_in_dirty_log), 977 + TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log, 978 + guest_check_s1ptw_wr_in_dirty_log), 979 + TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log, 980 + guest_check_s1ptw_wr_in_dirty_log), 979 981 980 982 /* 981 983 * Access when the data and PT memory regions are both marked for
··· 992 980 * fault, and nothing in the dirty log. Any S1PTW should result in 993 981 * a write in the dirty log and a userfaultfd write. 994 982 */ 995 - TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, uffd_data_read_handler, 2, 996 - guest_check_no_write_in_dirty_log), 997 - /* no_af should also lead to a PT write. */ 998 - TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, uffd_data_read_handler, 2, 999 - guest_check_no_write_in_dirty_log), 1000 - TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, uffd_data_read_handler, 1001 - 2, guest_check_no_write_in_dirty_log), 1002 - TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, 0, 1, 1003 - guest_check_no_write_in_dirty_log), 1004 - TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, uffd_data_read_handler, 2, 1005 - guest_check_no_write_in_dirty_log), 1006 - TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, uffd_data_write_handler, 1007 - 2, guest_check_write_in_dirty_log), 1008 - TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, uffd_data_read_handler, 2, 1009 - guest_check_write_in_dirty_log), 1010 - TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, uffd_data_write_handler, 1011 - 2, guest_check_write_in_dirty_log), 983 + TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, 984 + uffd_data_handler, 2, 985 + guest_check_no_write_in_dirty_log, 986 + guest_check_s1ptw_wr_in_dirty_log), 987 + TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, 988 + uffd_data_handler, 2, 989 + guest_check_no_write_in_dirty_log, 990 + guest_check_no_s1ptw_wr_in_dirty_log), 991 + TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, 992 + uffd_data_handler, 993 + 2, guest_check_no_write_in_dirty_log, 994 + guest_check_s1ptw_wr_in_dirty_log), 995 + TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, uffd_no_handler, 1, 996 + guest_check_no_write_in_dirty_log, 997 + guest_check_s1ptw_wr_in_dirty_log), 998 + TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, 999 + uffd_data_handler, 2, 1000 + guest_check_no_write_in_dirty_log, 1001 + guest_check_s1ptw_wr_in_dirty_log), 1002 + TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, 1003 + uffd_data_handler, 1004 + 2, guest_check_write_in_dirty_log, 1005 + guest_check_s1ptw_wr_in_dirty_log), 1006 + TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, 1007 + uffd_data_handler, 2, 1008 + guest_check_write_in_dirty_log, 1009 + guest_check_s1ptw_wr_in_dirty_log), 1010 + TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, 1011 + uffd_data_handler, 1012 + 2, guest_check_write_in_dirty_log, 1013 + guest_check_s1ptw_wr_in_dirty_log), 1012 1014 TEST_UFFD_AND_DIRTY_LOG(guest_st_preidx, with_af, 1013 - uffd_data_write_handler, 2, 1014 - guest_check_write_in_dirty_log), 1015 - 1015 + uffd_data_handler, 2, 1016 + guest_check_write_in_dirty_log, 1017 + guest_check_s1ptw_wr_in_dirty_log), 1016 1018 /* 1017 - * Try accesses when the data memory region is marked read-only 1019 + * Access when both the PT and data regions are marked read-only 1018 1020 * (with KVM_MEM_READONLY). Writes with a syndrome result in an 1019 1021 * MMIO exit, writes with no syndrome (e.g., CAS) result in a 1020 1022 * failed vcpu run, and reads/execs with and without syndroms do
··· 1044 1018 TEST_RO_MEMSLOT_NO_SYNDROME(guest_st_preidx), 1045 1019 1046 1020 /* 1047 - * Access when both the data region is both read-only and marked 1021 + * The PT and data regions are both read-only and marked 1048 1022 * for dirty logging at the same time. The expected result is that 1049 1023 * for writes there should be no write in the dirty log. The 1050 1024 * readonly handling is the same as if the memslot was not marked
··· 1069 1043 guest_check_no_write_in_dirty_log), 1070 1044 1071 1045 /* 1072 - * Access when the data region is both read-only and punched with 1046 + * The PT and data regions are both read-only and punched with 1073 1047 * holes tracked with userfaultfd. The expected result is the 1074 1048 * union of both userfaultfd and read-only behaviors. For example, 1075 1049 * write accesses result in a userfaultfd write fault and an MMIO
··· 1077 1051 * no userfaultfd write fault. Reads result in userfaultfd getting 1078 1052 * triggered. 1079 1053 */ 1080 - TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0, 1081 - uffd_data_read_handler, 2), 1082 - TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0, 1083 - uffd_data_read_handler, 2), 1084 - TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0, 1085 - uffd_no_handler, 1), 1086 - TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0, 1087 - uffd_data_read_handler, 2), 1054 + TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0, uffd_data_handler, 2), 1055 + TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0, uffd_data_handler, 2), 1056 + TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0, uffd_no_handler, 1), 1057 + TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0, uffd_data_handler, 2), 1088 1058 TEST_RO_MEMSLOT_AND_UFFD(guest_write64, mmio_on_test_gpa_handler, 1, 1089 - uffd_data_write_handler, 2), 1090 - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas, 1091 - uffd_data_read_handler, 2), 1092 - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva, 1093 - uffd_no_handler, 1), 1094 - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx, 1095 - uffd_no_handler, 1), 1059 + uffd_data_handler, 2), 1060 + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas, uffd_data_handler, 2), 1061 + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva, uffd_no_handler, 1), 1062 + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx, uffd_no_handler, 1), 1096 1063 1097 1064 { 0 } 1098 1065 };
+2 -2
tools/testing/selftests/net/forwarding/lib.sh
··· 893 893 local value=$1; shift 894 894 895 895 SYSCTL_ORIG[$key]=$(sysctl -n $key) 896 - sysctl -qw $key=$value 896 + sysctl -qw $key="$value" 897 897 } 898 898 899 899 sysctl_restore() 900 900 { 901 901 local key=$1; shift 902 902 903 - sysctl -qw $key=${SYSCTL_ORIG["$key"]} 903 + sysctl -qw $key="${SYSCTL_ORIG[$key]}" 904 904 } 905 905 906 906 forwarding_enable()
+17 -5
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 498 498 kill_wait $evts_ns2_pid 499 499 } 500 500 501 + kill_tests_wait() 502 + { 503 + kill -SIGUSR1 $(ip netns pids $ns2) $(ip netns pids $ns1) 504 + wait 505 + } 506 + 501 507 pm_nl_set_limits() 502 508 { 503 509 local ns=$1 ··· 1693 1687 local subflow_nr=$3 1694 1688 local cnt1 1695 1689 local cnt2 1690 + local dump_stats 1696 1691 1697 1692 if [ -n "${need_title}" ]; then 1698 1693 printf "%03u %-36s %s" "${TEST_COUNT}" "${TEST_NAME}" "${msg}" ··· 1711 1704 echo "[ ok ]" 1712 1705 fi 1713 1706 1714 - [ "${dump_stats}" = 1 ] && ( ss -N $ns1 -tOni ; ss -N $ns1 -tOni | grep token; ip -n $ns1 mptcp endpoint ) 1707 + if [ "${dump_stats}" = 1 ]; then 1708 + ss -N $ns1 -tOni 1709 + ss -N $ns1 -tOni | grep token 1710 + ip -n $ns1 mptcp endpoint 1711 + dump_stats 1712 + fi 1715 1713 } 1716 1714 1717 1715 chk_link_usage() ··· 3095 3083 pm_nl_set_limits $ns1 2 2 3096 3084 pm_nl_set_limits $ns2 2 2 3097 3085 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal 3098 - run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow & 3086 + run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2>/dev/null & 3099 3087 3100 3088 wait_mpj $ns1 3101 3089 pm_nl_check_endpoint 1 "creation" \ ··· 3108 3096 pm_nl_add_endpoint $ns2 10.0.2.2 flags signal 3109 3097 pm_nl_check_endpoint 0 "modif is allowed" \ 3110 3098 $ns2 10.0.2.2 id 1 flags signal 3111 - wait 3099 + kill_tests_wait 3112 3100 fi 3113 3101 3114 3102 if reset "delete and re-add"; then 3115 3103 pm_nl_set_limits $ns1 1 1 3116 3104 pm_nl_set_limits $ns2 1 1 3117 3105 pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow 3118 - run_tests $ns1 $ns2 10.0.1.1 4 0 0 slow & 3106 + run_tests $ns1 $ns2 10.0.1.1 4 0 0 speed_20 2>/dev/null & 3119 3107 3120 3108 wait_mpj $ns2 3121 3109 pm_nl_del_endpoint $ns2 2 10.0.2.2 ··· 3125 3113 pm_nl_add_endpoint $ns2 10.0.2.2 dev ns2eth2 flags subflow 3126 3114 wait_mpj $ns2 3127 3115 chk_subflow_nr "" "after re-add" 2 3128 - wait 3116 + kill_tests_wait 3129 3117 fi 3130 3118 } 3131 3119
+5 -13
tools/testing/selftests/net/test_vxlan_vnifiltering.sh
··· 293 293 elif [[ -n $vtype && $vtype == "vnifilterg" ]]; then 294 294 # Add per vni group config with 'bridge vni' api 295 295 if [ -n "$group" ]; then 296 - if [ "$family" == "v4" ]; then 297 - if [ $mcast -eq 1 ]; then 298 - bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group $group 299 - else 300 - bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote $group 301 - fi 302 - else 303 - if [ $mcast -eq 1 ]; then 304 - bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group6 $group 305 - else 306 - bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote6 $group 307 - fi 308 - fi 296 + if [ $mcast -eq 1 ]; then 297 + bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group $group 298 + else 299 + bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote $group 300 + fi 309 301 fi 310 302 fi 311 303 done
-1
tools/testing/selftests/vm/hugetlb-madvise.c
··· 17 17 #include <stdio.h> 18 18 #include <unistd.h> 19 19 #include <sys/mman.h> 20 - #define __USE_GNU 21 20 #include <fcntl.h> 22 21 23 22 #define MIN_FREE_PAGES 20