Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

No conflicts, no adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2295 -1362
+3
.mailmap
··· 72 72 Andrzej Hajda <andrzej.hajda@intel.com> <a.hajda@samsung.com> 73 73 André Almeida <andrealmeid@igalia.com> <andrealmeid@collabora.com> 74 74 Andy Adamson <andros@citi.umich.edu> 75 + Andy Shevchenko <andy@kernel.org> <andy@smile.org.ua> 76 + Andy Shevchenko <andy@kernel.org> <ext-andriy.shevchenko@nokia.com> 75 77 Anilkumar Kolli <quic_akolli@quicinc.com> <akolli@codeaurora.org> 76 78 Anirudh Ghayal <quic_aghayal@quicinc.com> <aghayal@codeaurora.org> 77 79 Antoine Tenart <atenart@kernel.org> <antoine.tenart@bootlin.com> ··· 219 217 Geliang Tang <geliang@kernel.org> <geliangtang@xiaomi.com> 220 218 Geliang Tang <geliang@kernel.org> <geliangtang@gmail.com> 221 219 Geliang Tang <geliang@kernel.org> <geliangtang@163.com> 220 + Geliang Tang <geliang@kernel.org> <tanggeliang@kylinos.cn> 222 221 Georgi Djakov <djakov@kernel.org> <georgi.djakov@linaro.org> 223 222 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <geraldsc@de.ibm.com> 224 223 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <gerald.schaefer@de.ibm.com>
+2 -2
Documentation/admin-guide/mm/transhuge.rst
··· 467 467 instead falls back to using huge pages with lower orders or 468 468 small pages even though the allocation was successful. 469 469 470 - anon_swpout 470 + swpout 471 471 is incremented every time a huge page is swapped out in one 472 472 piece without splitting. 473 473 474 - anon_swpout_fallback 474 + swpout_fallback 475 475 is incremented if a huge page has to be split before swapout. 476 476 Usually because failed to allocate some continuous swap space 477 477 for the huge page.
+2 -2
Documentation/cdrom/cdrom-standard.rst
··· 217 217 int (*media_changed)(struct cdrom_device_info *, int); 218 218 int (*tray_move)(struct cdrom_device_info *, int); 219 219 int (*lock_door)(struct cdrom_device_info *, int); 220 - int (*select_speed)(struct cdrom_device_info *, int); 220 + int (*select_speed)(struct cdrom_device_info *, unsigned long); 221 221 int (*get_last_session) (struct cdrom_device_info *, 222 222 struct cdrom_multisession *); 223 223 int (*get_mcn)(struct cdrom_device_info *, struct cdrom_mcn *); ··· 396 396 397 397 :: 398 398 399 - int select_speed(struct cdrom_device_info *cdi, int speed) 399 + int select_speed(struct cdrom_device_info *cdi, unsigned long speed) 400 400 401 401 Some CD-ROM drives are capable of changing their head-speed. There 402 402 are several reasons for changing the speed of a CD-ROM drive. Badly
+14 -5
Documentation/devicetree/bindings/input/elan,ekth6915.yaml
··· 18 18 19 19 properties: 20 20 compatible: 21 - enum: 22 - - elan,ekth6915 23 - - ilitek,ili2901 21 + oneOf: 22 + - items: 23 + - enum: 24 + - elan,ekth5015m 25 + - const: elan,ekth6915 26 + - const: elan,ekth6915 24 27 25 28 reg: 26 29 const: 0x10 ··· 35 32 36 33 reset-gpios: 37 34 description: Reset GPIO; not all touchscreens using eKTH6915 hook this up. 35 + 36 + no-reset-on-power-off: 37 + type: boolean 38 + description: 39 + Reset line is wired so that it can (and should) be left deasserted when 40 + the power supply is off. 38 41 39 42 vcc33-supply: 40 43 description: The 3.3V supply to the touchscreen. ··· 67 58 #address-cells = <1>; 68 59 #size-cells = <0>; 69 60 70 - ap_ts: touchscreen@10 { 71 - compatible = "elan,ekth6915"; 61 + touchscreen@10 { 62 + compatible = "elan,ekth5015m", "elan,ekth6915"; 72 63 reg = <0x10>; 73 64 74 65 interrupt-parent = <&tlmm>;
+66
Documentation/devicetree/bindings/input/ilitek,ili2901.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/input/ilitek,ili2901.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Ilitek ILI2901 touchscreen controller 8 + 9 + maintainers: 10 + - Jiri Kosina <jkosina@suse.com> 11 + 12 + description: 13 + Supports the Ilitek ILI2901 touchscreen controller. 14 + This touchscreen controller uses the i2c-hid protocol with a reset GPIO. 15 + 16 + allOf: 17 + - $ref: /schemas/input/touchscreen/touchscreen.yaml# 18 + 19 + properties: 20 + compatible: 21 + enum: 22 + - ilitek,ili2901 23 + 24 + reg: 25 + maxItems: 1 26 + 27 + interrupts: 28 + maxItems: 1 29 + 30 + panel: true 31 + 32 + reset-gpios: 33 + maxItems: 1 34 + 35 + vcc33-supply: true 36 + 37 + vccio-supply: true 38 + 39 + required: 40 + - compatible 41 + - reg 42 + - interrupts 43 + - vcc33-supply 44 + 45 + additionalProperties: false 46 + 47 + examples: 48 + - | 49 + #include <dt-bindings/gpio/gpio.h> 50 + #include <dt-bindings/interrupt-controller/irq.h> 51 + 52 + i2c { 53 + #address-cells = <1>; 54 + #size-cells = <0>; 55 + 56 + touchscreen@41 { 57 + compatible = "ilitek,ili2901"; 58 + reg = <0x41>; 59 + 60 + interrupt-parent = <&tlmm>; 61 + interrupts = <9 IRQ_TYPE_LEVEL_LOW>; 62 + 63 + reset-gpios = <&tlmm 8 GPIO_ACTIVE_LOW>; 64 + vcc33-supply = <&pp3300_ts>; 65 + }; 66 + };
+11 -1
Documentation/kbuild/kconfig-language.rst
··· 150 150 That will limit the usefulness but on the other hand avoid 151 151 the illegal configurations all over. 152 152 153 + If "select" <symbol> is followed by "if" <expr>, <symbol> will be 154 + selected by the logical AND of the value of the current menu symbol 155 + and <expr>. This means, the lower limit can be downgraded due to the 156 + presence of "if" <expr>. This behavior may seem weird, but we rely on 157 + it. (The future of this behavior is undecided.) 158 + 153 159 - weak reverse dependencies: "imply" <symbol> ["if" <expr>] 154 160 155 161 This is similar to "select" as it enforces a lower limit on another ··· 190 184 ability to hook into a secondary subsystem while allowing the user to 191 185 configure that subsystem out without also having to unset these drivers. 192 186 193 - Note: If the combination of FOO=y and BAR=m causes a link error, 187 + Note: If the combination of FOO=y and BAZ=m causes a link error, 194 188 you can guard the function call with IS_REACHABLE():: 195 189 196 190 foo_init() ··· 207 201 tristate "foo" 208 202 imply BAR 209 203 imply BAZ 204 + 205 + Note: If "imply" <symbol> is followed by "if" <expr>, the default of <symbol> 206 + will be the logical AND of the value of the current menu symbol and <expr>. 207 + (The future of this behavior is undecided.) 210 208 211 209 - limiting menu display: "visible if" <expr> 212 210
+1 -1
Documentation/userspace-api/media/v4l/dev-subdev.rst
··· 582 582 Devices generating the streams may allow enabling and disabling some of the 583 583 routes or have a fixed routing configuration. If the routes can be disabled, not 584 584 declaring the routes (or declaring them without 585 - ``VIDIOC_SUBDEV_STREAM_FL_ACTIVE`` flag set) in ``VIDIOC_SUBDEV_S_ROUTING`` will 585 + ``V4L2_SUBDEV_STREAM_FL_ACTIVE`` flag set) in ``VIDIOC_SUBDEV_S_ROUTING`` will 586 586 disable the routes. ``VIDIOC_SUBDEV_S_ROUTING`` will still return such routes 587 587 back to the user in the routes array, with the ``V4L2_SUBDEV_STREAM_FL_ACTIVE`` 588 588 flag unset.
+1 -1
MAINTAINERS
··· 15825 15825 F: tools/testing/selftests/nci/ 15826 15826 15827 15827 NFS, SUNRPC, AND LOCKD CLIENTS 15828 - M: Trond Myklebust <trond.myklebust@hammerspace.com> 15828 + M: Trond Myklebust <trondmy@kernel.org> 15829 15829 M: Anna Schumaker <anna@kernel.org> 15830 15830 L: linux-nfs@vger.kernel.org 15831 15831 S: Maintained
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 10 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Baby Opossum Posse 7 7 8 8 # *DOCUMENTATION*
+15 -2
arch/arm/kernel/ftrace.c
··· 232 232 unsigned long old; 233 233 234 234 if (unlikely(atomic_read(&current->tracing_graph_pause))) 235 + err_out: 235 236 return; 236 237 237 238 if (IS_ENABLED(CONFIG_UNWINDER_FRAME_POINTER)) { 238 - /* FP points one word below parent's top of stack */ 239 - frame_pointer += 4; 239 + /* 240 + * Usually, the stack frames are contiguous in memory but cases 241 + * have been observed where the next stack frame does not live 242 + * at 'frame_pointer + 4' as this code used to assume. 243 + * 244 + * Instead, dereference the field in the stack frame that 245 + * stores the SP of the calling frame: to avoid unbounded 246 + * recursion, this cannot involve any ftrace instrumented 247 + * functions, so use the __get_kernel_nofault() primitive 248 + * directly. 249 + */ 250 + __get_kernel_nofault(&frame_pointer, 251 + (unsigned long *)(frame_pointer - 8), 252 + unsigned long, err_out); 240 253 } else { 241 254 struct stackframe frame = { 242 255 .fp = frame_pointer,
+16 -20
arch/arm64/include/asm/io.h
··· 153 153 * emit the large TLP from the CPU. 154 154 */ 155 155 156 - static inline void __const_memcpy_toio_aligned32(volatile u32 __iomem *to, 157 - const u32 *from, size_t count) 156 + static __always_inline void 157 + __const_memcpy_toio_aligned32(volatile u32 __iomem *to, const u32 *from, 158 + size_t count) 158 159 { 159 160 switch (count) { 160 161 case 8: ··· 197 196 198 197 void __iowrite32_copy_full(void __iomem *to, const void *from, size_t count); 199 198 200 - static inline void __const_iowrite32_copy(void __iomem *to, const void *from, 201 - size_t count) 199 + static __always_inline void 200 + __iowrite32_copy(void __iomem *to, const void *from, size_t count) 202 201 { 203 - if (count == 8 || count == 4 || count == 2 || count == 1) { 202 + if (__builtin_constant_p(count) && 203 + (count == 8 || count == 4 || count == 2 || count == 1)) { 204 204 __const_memcpy_toio_aligned32(to, from, count); 205 205 dgh(); 206 206 } else { 207 207 __iowrite32_copy_full(to, from, count); 208 208 } 209 209 } 210 + #define __iowrite32_copy __iowrite32_copy 210 211 211 - #define __iowrite32_copy(to, from, count) \ 212 - (__builtin_constant_p(count) ? \ 213 - __const_iowrite32_copy(to, from, count) : \ 214 - __iowrite32_copy_full(to, from, count)) 215 - 216 - static inline void __const_memcpy_toio_aligned64(volatile u64 __iomem *to, 217 - const u64 *from, size_t count) 212 + static __always_inline void 213 + __const_memcpy_toio_aligned64(volatile u64 __iomem *to, const u64 *from, 214 + size_t count) 218 215 { 219 216 switch (count) { 220 217 case 8: ··· 254 255 255 256 void __iowrite64_copy_full(void __iomem *to, const void *from, size_t count); 256 257 257 - static inline void __const_iowrite64_copy(void __iomem *to, const void *from, 258 - size_t count) 258 + static __always_inline void 259 + __iowrite64_copy(void __iomem *to, const void *from, size_t count) 259 260 { 260 - if (count == 8 || count == 4 || count == 2 || count == 1) { 261 + if (__builtin_constant_p(count) && 262 + (count == 8 || count == 4 || count == 2 || count == 1)) { 261 263 __const_memcpy_toio_aligned64(to, from, count); 262 264 dgh(); 263 265 } else { 264 266 __iowrite64_copy_full(to, from, count); 265 267 } 266 268 } 267 - 268 - #define __iowrite64_copy(to, from, count) \ 269 - (__builtin_constant_p(count) ? \ 270 - __const_iowrite64_copy(to, from, count) : \ 271 - __iowrite64_copy_full(to, from, count)) 269 + #define __iowrite64_copy __iowrite64_copy 272 270 273 271 /* 274 272 * I/O memory mapping functions.
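A side note on the pattern in this hunk: the old macro wrapper is replaced by an __always_inline function so that __builtin_constant_p() can fold the size check after inlining. The sketch below is a standalone illustration of that dispatch idiom, not the kernel code; copy_words() and copy_words_full() are made-up stand-ins for the __iowrite32_copy helpers.

    #include <stddef.h>
    #include <stdint.h>

    #define __always_inline inline __attribute__((__always_inline__))

    /* Out-of-line fallback, standing in for __iowrite32_copy_full(). */
    static void copy_words_full(volatile uint32_t *to, const uint32_t *from,
                                size_t count)
    {
            while (count--)
                    *to++ = *from++;
    }

    static __always_inline void copy_words(volatile uint32_t *to,
                                           const uint32_t *from, size_t count)
    {
            /*
             * __builtin_constant_p() only becomes true once this wrapper has
             * been inlined into a caller that passes a literal size, which is
             * why it must be __always_inline rather than hidden behind a macro.
             */
            if (__builtin_constant_p(count) &&
                (count == 8 || count == 4 || count == 2 || count == 1)) {
                    for (size_t i = 0; i < count; i++)  /* unrolled by the compiler */
                            to[i] = from[i];
            } else {
                    copy_words_full(to, from, count);
            }
    }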
+3
arch/arm64/kernel/armv8_deprecated.c
··· 462 462 for (int i = 0; i < ARRAY_SIZE(insn_emulations); i++) { 463 463 struct insn_emulation *insn = insn_emulations[i]; 464 464 bool enable = READ_ONCE(insn->current_mode) == INSN_HW; 465 + if (insn->status == INSN_UNAVAILABLE) 466 + continue; 467 + 465 468 if (insn->set_hw_mode && insn->set_hw_mode(enable)) { 466 469 pr_warn("CPU[%u] cannot support the emulation of %s", 467 470 cpu, insn->name);
+2 -2
arch/arm64/mm/contpte.c
··· 376 376 * clearing access/dirty for the whole block. 377 377 */ 378 378 unsigned long start = addr; 379 - unsigned long end = start + nr; 379 + unsigned long end = start + nr * PAGE_SIZE; 380 380 381 381 if (pte_cont(__ptep_get(ptep + nr - 1))) 382 382 end = ALIGN(end, CONT_PTE_SIZE); ··· 386 386 ptep = contpte_align_down(ptep); 387 387 } 388 388 389 - __clear_young_dirty_ptes(vma, start, ptep, end - start, flags); 389 + __clear_young_dirty_ptes(vma, start, ptep, (end - start) / PAGE_SIZE, flags); 390 390 } 391 391 EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes); 392 392
+2 -2
arch/riscv/mm/fault.c
··· 293 293 if (unlikely(access_error(cause, vma))) { 294 294 vma_end_read(vma); 295 295 count_vm_vma_lock_event(VMA_LOCK_SUCCESS); 296 - tsk->thread.bad_cause = SEGV_ACCERR; 297 - bad_area_nosemaphore(regs, code, addr); 296 + tsk->thread.bad_cause = cause; 297 + bad_area_nosemaphore(regs, SEGV_ACCERR, addr); 298 298 return; 299 299 } 300 300
+11 -10
arch/riscv/mm/init.c
··· 250 250 kernel_map.va_pa_offset = PAGE_OFFSET - phys_ram_base; 251 251 252 252 /* 253 - * memblock allocator is not aware of the fact that last 4K bytes of 254 - * the addressable memory can not be mapped because of IS_ERR_VALUE 255 - * macro. Make sure that last 4k bytes are not usable by memblock 256 - * if end of dram is equal to maximum addressable memory. For 64-bit 257 - * kernel, this problem can't happen here as the end of the virtual 258 - * address space is occupied by the kernel mapping then this check must 259 - * be done as soon as the kernel mapping base address is determined. 253 + * Reserve physical address space that would be mapped to virtual 254 + * addresses greater than (void *)(-PAGE_SIZE) because: 255 + * - This memory would overlap with ERR_PTR 256 + * - This memory belongs to high memory, which is not supported 257 + * 258 + * This is not applicable to 64-bit kernel, because virtual addresses 259 + * after (void *)(-PAGE_SIZE) are not linearly mapped: they are 260 + * occupied by kernel mapping. Also it is unrealistic for high memory 261 + * to exist on 64-bit platforms. 260 262 */ 261 263 if (!IS_ENABLED(CONFIG_64BIT)) { 262 - max_mapped_addr = __pa(~(ulong)0); 263 - if (max_mapped_addr == (phys_ram_end - 1)) 264 - memblock_set_current_limit(max_mapped_addr - 4096); 264 + max_mapped_addr = __va_to_pa_nodebug(-PAGE_SIZE); 265 + memblock_reserve(max_mapped_addr, (phys_addr_t)-max_mapped_addr); 265 266 } 266 267 267 268 min_low_pfn = PFN_UP(phys_ram_base);
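Background for the reservation above, as a standalone sketch: the last virtual page overlaps the ERR_PTR encoding range, so any physical memory that would be linearly mapped there must stay unused on 32-bit. MAX_ERRNO and IS_ERR_VALUE below follow their usual definitions and are assumptions of this example, not part of the hunk.

    #include <stdio.h>

    #define MAX_ERRNO       4095
    #define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

    int main(void)
    {
            /* An address inside the final page of the address space... */
            unsigned long addr_in_last_page = (unsigned long)-4096L + 8;

            /* ...is indistinguishable from an encoded error pointer. */
            printf("looks like an error: %d\n", IS_ERR_VALUE(addr_in_last_page));
            return 0;
    }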
+30 -24
arch/s390/kernel/crash_dump.c
··· 451 451 /* 452 452 * Initialize ELF header (new kernel) 453 453 */ 454 - static void *ehdr_init(Elf64_Ehdr *ehdr, int mem_chunk_cnt) 454 + static void *ehdr_init(Elf64_Ehdr *ehdr, int phdr_count) 455 455 { 456 456 memset(ehdr, 0, sizeof(*ehdr)); 457 457 memcpy(ehdr->e_ident, ELFMAG, SELFMAG); ··· 465 465 ehdr->e_phoff = sizeof(Elf64_Ehdr); 466 466 ehdr->e_ehsize = sizeof(Elf64_Ehdr); 467 467 ehdr->e_phentsize = sizeof(Elf64_Phdr); 468 - /* 469 - * Number of memory chunk PT_LOAD program headers plus one kernel 470 - * image PT_LOAD program header plus one PT_NOTE program header. 471 - */ 472 - ehdr->e_phnum = mem_chunk_cnt + 1 + 1; 468 + /* Number of PT_LOAD program headers plus PT_NOTE program header */ 469 + ehdr->e_phnum = phdr_count + 1; 473 470 return ehdr + 1; 474 471 } 475 472 ··· 500 503 /* 501 504 * Initialize ELF loads (new kernel) 502 505 */ 503 - static void loads_init(Elf64_Phdr *phdr) 506 + static void loads_init(Elf64_Phdr *phdr, bool os_info_has_vm) 504 507 { 505 - unsigned long old_identity_base = os_info_old_value(OS_INFO_IDENTITY_BASE); 508 + unsigned long old_identity_base = 0; 506 509 phys_addr_t start, end; 507 510 u64 idx; 508 511 512 + if (os_info_has_vm) 513 + old_identity_base = os_info_old_value(OS_INFO_IDENTITY_BASE); 509 514 for_each_physmem_range(idx, &oldmem_type, &start, &end) { 510 515 phdr->p_type = PT_LOAD; 511 516 phdr->p_vaddr = old_identity_base + start; ··· 519 520 phdr->p_align = PAGE_SIZE; 520 521 phdr++; 521 522 } 523 + } 524 + 525 + static bool os_info_has_vm(void) 526 + { 527 + return os_info_old_value(OS_INFO_KASLR_OFFSET); 522 528 } 523 529 524 530 /* ··· 570 566 return ptr; 571 567 } 572 568 573 - static size_t get_elfcorehdr_size(int mem_chunk_cnt) 569 + static size_t get_elfcorehdr_size(int phdr_count) 574 570 { 575 571 size_t size; 576 572 ··· 585 581 size += nt_vmcoreinfo_size(); 586 582 /* nt_final */ 587 583 size += sizeof(Elf64_Nhdr); 588 - /* PT_LOAD type program header for kernel text region */ 589 - size += sizeof(Elf64_Phdr); 590 584 /* PT_LOADS */ 591 - size += mem_chunk_cnt * sizeof(Elf64_Phdr); 585 + size += phdr_count * sizeof(Elf64_Phdr); 592 586 593 587 return size; 594 588 } ··· 597 595 int elfcorehdr_alloc(unsigned long long *addr, unsigned long long *size) 598 596 { 599 597 Elf64_Phdr *phdr_notes, *phdr_loads, *phdr_text; 598 + int mem_chunk_cnt, phdr_text_cnt; 600 599 size_t alloc_size; 601 - int mem_chunk_cnt; 602 600 void *ptr, *hdr; 603 601 u64 hdr_off; 604 602 ··· 617 615 } 618 616 619 617 mem_chunk_cnt = get_mem_chunk_cnt(); 618 + phdr_text_cnt = os_info_has_vm() ? 1 : 0; 620 619 621 - alloc_size = get_elfcorehdr_size(mem_chunk_cnt); 620 + alloc_size = get_elfcorehdr_size(mem_chunk_cnt + phdr_text_cnt); 622 621 623 622 hdr = kzalloc(alloc_size, GFP_KERNEL); 624 623 625 - /* Without elfcorehdr /proc/vmcore cannot be created. Thus creating 624 + /* 625 + * Without elfcorehdr /proc/vmcore cannot be created. Thus creating 626 626 * a dump with this crash kernel will fail. Panic now to allow other 627 627 * dump mechanisms to take over. 
628 628 */ ··· 632 628 panic("s390 kdump allocating elfcorehdr failed"); 633 629 634 630 /* Init elf header */ 635 - ptr = ehdr_init(hdr, mem_chunk_cnt); 631 + phdr_notes = ehdr_init(hdr, mem_chunk_cnt + phdr_text_cnt); 636 632 /* Init program headers */ 637 - phdr_notes = ptr; 638 - ptr = PTR_ADD(ptr, sizeof(Elf64_Phdr)); 639 - phdr_text = ptr; 640 - ptr = PTR_ADD(ptr, sizeof(Elf64_Phdr)); 641 - phdr_loads = ptr; 642 - ptr = PTR_ADD(ptr, sizeof(Elf64_Phdr) * mem_chunk_cnt); 633 + if (phdr_text_cnt) { 634 + phdr_text = phdr_notes + 1; 635 + phdr_loads = phdr_text + 1; 636 + } else { 637 + phdr_loads = phdr_notes + 1; 638 + } 639 + ptr = PTR_ADD(phdr_loads, sizeof(Elf64_Phdr) * mem_chunk_cnt); 643 640 /* Init notes */ 644 641 hdr_off = PTR_DIFF(ptr, hdr); 645 642 ptr = notes_init(phdr_notes, ptr, ((unsigned long) hdr) + hdr_off); 646 643 /* Init kernel text program header */ 647 - text_init(phdr_text); 644 + if (phdr_text_cnt) 645 + text_init(phdr_text); 648 646 /* Init loads */ 649 - loads_init(phdr_loads); 647 + loads_init(phdr_loads, phdr_text_cnt); 650 648 /* Finalize program headers */ 651 649 hdr_off = PTR_DIFF(ptr, hdr); 652 650 *addr = (unsigned long long) hdr;
+8 -1
arch/x86/kernel/amd_nb.c
··· 215 215 216 216 int amd_smn_read(u16 node, u32 address, u32 *value) 217 217 { 218 - return __amd_smn_rw(node, address, value, false); 218 + int err = __amd_smn_rw(node, address, value, false); 219 + 220 + if (PCI_POSSIBLE_ERROR(*value)) { 221 + err = -ENODEV; 222 + *value = 0; 223 + } 224 + 225 + return err; 219 226 } 220 227 EXPORT_SYMBOL_GPL(amd_smn_read); 221 228
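For context, the added check relies on the convention that a read the fabric could not complete comes back as all-ones. The sketch below illustrates only that idea; raw_reg_read() and checked_reg_read() are hypothetical helpers, and PCI_ERROR_RESPONSE is assumed to be the all-ones sentinel.

    #include <stdint.h>
    #include <stdio.h>
    #include <errno.h>

    #define PCI_ERROR_RESPONSE 0xffffffffU   /* assumed all-ones sentinel */

    /* Hypothetical low-level accessor; here it simulates a faulted read,
     * which real hardware reports as all-ones data with no error code. */
    static int raw_reg_read(uint32_t address, uint32_t *value)
    {
            (void)address;
            *value = PCI_ERROR_RESPONSE;
            return 0;
    }

    static int checked_reg_read(uint32_t address, uint32_t *value)
    {
            int err = raw_reg_read(address, value);

            /* An all-ones result may simply mean the target never answered,
             * so report -ENODEV and hand back a harmless 0 instead of garbage. */
            if (*value == PCI_ERROR_RESPONSE) {
                    err = -ENODEV;
                    *value = 0;
            }
            return err;
    }

    int main(void)
    {
            uint32_t v;
            int err = checked_reg_read(0x1000, &v);

            printf("err=%d value=%u\n", err, v);
            return 0;
    }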
+9 -2
arch/x86/kernel/machine_kexec_64.c
··· 295 295 void machine_kexec(struct kimage *image) 296 296 { 297 297 unsigned long page_list[PAGES_NR]; 298 - void *control_page; 298 + unsigned int host_mem_enc_active; 299 299 int save_ftrace_enabled; 300 + void *control_page; 301 + 302 + /* 303 + * This must be done before load_segments() since if call depth tracking 304 + * is used then GS must be valid to make any function calls. 305 + */ 306 + host_mem_enc_active = cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT); 300 307 301 308 #ifdef CONFIG_KEXEC_JUMP 302 309 if (image->preserve_context) ··· 365 358 (unsigned long)page_list, 366 359 image->start, 367 360 image->preserve_context, 368 - cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)); 361 + host_mem_enc_active); 369 362 370 363 #ifdef CONFIG_KEXEC_JUMP 371 364 if (image->preserve_context)
+3 -3
arch/x86/mm/numa.c
··· 493 493 for_each_reserved_mem_region(mb_region) { 494 494 int nid = memblock_get_region_node(mb_region); 495 495 496 - if (nid != MAX_NUMNODES) 496 + if (nid != NUMA_NO_NODE) 497 497 node_set(nid, reserved_nodemask); 498 498 } 499 499 ··· 614 614 nodes_clear(node_online_map); 615 615 memset(&numa_meminfo, 0, sizeof(numa_meminfo)); 616 616 WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.memory, 617 - MAX_NUMNODES)); 617 + NUMA_NO_NODE)); 618 618 WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.reserved, 619 - MAX_NUMNODES)); 619 + NUMA_NO_NODE)); 620 620 /* In case that parsing SRAT failed. */ 621 621 WARN_ON(memblock_clear_hotplug(0, ULLONG_MAX)); 622 622 numa_reset_distance();
+6 -3
drivers/ata/pata_macio.c
··· 915 915 .sg_tablesize = MAX_DCMDS, 916 916 /* We may not need that strict one */ 917 917 .dma_boundary = ATA_DMA_BOUNDARY, 918 - /* Not sure what the real max is but we know it's less than 64K, let's 919 - * use 64K minus 256 918 + /* 919 + * The SCSI core requires the segment size to cover at least a page, so 920 + * for 64K page size kernels this must be at least 64K. However the 921 + * hardware can't handle 64K, so pata_macio_qc_prep() will split large 922 + * requests. 920 923 */ 921 - .max_segment_size = MAX_DBDMA_SEG, 924 + .max_segment_size = SZ_64K, 922 925 .device_configure = pata_macio_device_configure, 923 926 .sdev_groups = ata_common_sdev_groups, 924 927 .can_queue = ATA_DEF_QUEUE,
+2 -2
drivers/block/null_blk/main.c
··· 1824 1824 dev->queue_mode = NULL_Q_MQ; 1825 1825 } 1826 1826 1827 - dev->blocksize = round_down(dev->blocksize, 512); 1828 - dev->blocksize = clamp_t(unsigned int, dev->blocksize, 512, 4096); 1827 + if (blk_validate_block_size(dev->blocksize)) 1828 + return -EINVAL; 1829 1829 1830 1830 if (dev->use_per_node_hctx) { 1831 1831 if (dev->submit_queues != nr_online_nodes)
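The old code silently rounded and clamped bad block sizes; the new code rejects them. The sketch below shows the kind of check blk_validate_block_size() is assumed to perform (a power of two between 512 bytes and the page size); the exact kernel semantics may differ.

    #include <errno.h>

    #define SECTOR_SIZE 512
    #define PAGE_SIZE   4096   /* assumed for this sketch */

    static int validate_block_size(unsigned int bsize)
    {
            /* Reject anything that is not a power of two in [512, PAGE_SIZE]. */
            if (bsize < SECTOR_SIZE || bsize > PAGE_SIZE || (bsize & (bsize - 1)))
                    return -EINVAL;
            return 0;
    }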
+9 -2
drivers/clk/clkdev.c
··· 204 204 pr_err("%pV:%s: %s ID is greater than %zu\n", 205 205 &vaf, con_id, failure, max_size); 206 206 va_end(ap_copy); 207 - kfree(cla); 208 - return NULL; 207 + 208 + /* 209 + * Don't fail in this case, but as the entry won't ever match just 210 + * fill it with something that also won't match. 211 + */ 212 + strscpy(cla->con_id, "bad", sizeof(cla->con_id)); 213 + strscpy(cla->dev_id, "bad", sizeof(cla->dev_id)); 214 + 215 + return &cla->cl; 209 216 } 210 217 211 218 static struct clk_lookup *
-8
drivers/clk/sifive/sifive-prci.c
··· 4 4 * Copyright (C) 2020 Zong Li 5 5 */ 6 6 7 - #include <linux/clkdev.h> 8 7 #include <linux/delay.h> 9 8 #include <linux/io.h> 10 9 #include <linux/module.h> ··· 532 533 r = devm_clk_hw_register(dev, &pic->hw); 533 534 if (r) { 534 535 dev_warn(dev, "Failed to register clock %s: %d\n", 535 - init.name, r); 536 - return r; 537 - } 538 - 539 - r = clk_hw_register_clkdev(&pic->hw, pic->name, dev_name(dev)); 540 - if (r) { 541 - dev_warn(dev, "Failed to register clkdev for %s: %d\n", 542 536 init.name, r); 543 537 return r; 544 538 }
+5 -3
drivers/edac/amd64_edac.c
··· 81 81 amd64_warn("%s: error reading F%dx%03x.\n", 82 82 func, PCI_FUNC(pdev->devfn), offset); 83 83 84 - return err; 84 + return pcibios_err_to_errno(err); 85 85 } 86 86 87 87 int __amd64_write_pci_cfg_dword(struct pci_dev *pdev, int offset, ··· 94 94 amd64_warn("%s: error writing to F%dx%03x.\n", 95 95 func, PCI_FUNC(pdev->devfn), offset); 96 96 97 - return err; 97 + return pcibios_err_to_errno(err); 98 98 } 99 99 100 100 /* ··· 1025 1025 } 1026 1026 1027 1027 ret = pci_read_config_dword(pdev, REG_LOCAL_NODE_TYPE_MAP, &tmp); 1028 - if (ret) 1028 + if (ret) { 1029 + ret = pcibios_err_to_errno(ret); 1029 1030 goto out; 1031 + } 1030 1032 1031 1033 gpu_node_map.node_count = FIELD_GET(LNTM_NODE_COUNT, tmp); 1032 1034 gpu_node_map.base_node_id = FIELD_GET(LNTM_BASE_NODE_ID, tmp);
+2 -2
drivers/edac/igen6_edac.c
··· 800 800 801 801 rc = pci_read_config_word(imc->pdev, ERRCMD_OFFSET, &errcmd); 802 802 if (rc) 803 - return rc; 803 + return pcibios_err_to_errno(rc); 804 804 805 805 if (enable) 806 806 errcmd |= ERRCMD_CE | ERRSTS_UE; ··· 809 809 810 810 rc = pci_write_config_word(imc->pdev, ERRCMD_OFFSET, errcmd); 811 811 if (rc) 812 - return rc; 812 + return pcibios_err_to_errno(rc); 813 813 814 814 return 0; 815 815 }
+1 -1
drivers/gpio/Kconfig
··· 1576 1576 are "output only" GPIOs. 1577 1577 1578 1578 config GPIO_TQMX86 1579 - tristate "TQ-Systems QTMX86 GPIO" 1579 + tristate "TQ-Systems TQMx86 GPIO" 1580 1580 depends on MFD_TQMX86 || COMPILE_TEST 1581 1581 depends on HAS_IOPORT_MAP 1582 1582 select GPIOLIB_IRQCHIP
+1
drivers/gpio/gpio-gw-pld.c
··· 130 130 }; 131 131 module_i2c_driver(gw_pld_driver); 132 132 133 + MODULE_DESCRIPTION("Gateworks I2C PLD GPIO expander"); 133 134 MODULE_LICENSE("GPL"); 134 135 MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
+1
drivers/gpio/gpio-mc33880.c
··· 168 168 module_exit(mc33880_exit); 169 169 170 170 MODULE_AUTHOR("Mocean Laboratories <info@mocean-labs.com>"); 171 + MODULE_DESCRIPTION("MC33880 high-side/low-side switch GPIO driver"); 171 172 MODULE_LICENSE("GPL v2"); 172 173
+1
drivers/gpio/gpio-pcf857x.c
··· 438 438 } 439 439 module_exit(pcf857x_exit); 440 440 441 + MODULE_DESCRIPTION("Driver for pcf857x, pca857x, and pca967x I2C GPIO expanders"); 441 442 MODULE_LICENSE("GPL"); 442 443 MODULE_AUTHOR("David Brownell");
+1
drivers/gpio/gpio-pl061.c
··· 438 438 }; 439 439 module_amba_driver(pl061_gpio_driver); 440 440 441 + MODULE_DESCRIPTION("Driver for the ARM PrimeCell(tm) General Purpose Input/Output (PL061)"); 441 442 MODULE_LICENSE("GPL v2");
+80 -30
drivers/gpio/gpio-tqmx86.c
··· 6 6 * Vadim V.Vlasov <vvlasov@dev.rtsoft.ru> 7 7 */ 8 8 9 + #include <linux/bitmap.h> 9 10 #include <linux/bitops.h> 10 11 #include <linux/errno.h> 11 12 #include <linux/gpio/driver.h> ··· 29 28 #define TQMX86_GPIIC 3 /* GPI Interrupt Configuration Register */ 30 29 #define TQMX86_GPIIS 4 /* GPI Interrupt Status Register */ 31 30 31 + #define TQMX86_GPII_NONE 0 32 32 #define TQMX86_GPII_FALLING BIT(0) 33 33 #define TQMX86_GPII_RISING BIT(1) 34 + /* Stored in irq_type as a trigger type, but not actually valid as a register 35 + * value, so the name doesn't use "GPII" 36 + */ 37 + #define TQMX86_INT_BOTH (BIT(0) | BIT(1)) 34 38 #define TQMX86_GPII_MASK (BIT(0) | BIT(1)) 35 39 #define TQMX86_GPII_BITS 2 40 + /* Stored in irq_type with GPII bits */ 41 + #define TQMX86_INT_UNMASKED BIT(2) 36 42 37 43 struct tqmx86_gpio_data { 38 44 struct gpio_chip chip; 39 45 void __iomem *io_base; 40 46 int irq; 47 + /* Lock must be held for accessing output and irq_type fields */ 41 48 raw_spinlock_t spinlock; 49 + DECLARE_BITMAP(output, TQMX86_NGPIO); 42 50 u8 irq_type[TQMX86_NGPI]; 43 51 }; 44 52 ··· 74 64 { 75 65 struct tqmx86_gpio_data *gpio = gpiochip_get_data(chip); 76 66 unsigned long flags; 77 - u8 val; 78 67 79 68 raw_spin_lock_irqsave(&gpio->spinlock, flags); 80 - val = tqmx86_gpio_read(gpio, TQMX86_GPIOD); 81 - if (value) 82 - val |= BIT(offset); 83 - else 84 - val &= ~BIT(offset); 85 - tqmx86_gpio_write(gpio, val, TQMX86_GPIOD); 69 + __assign_bit(offset, gpio->output, value); 70 + tqmx86_gpio_write(gpio, bitmap_get_value8(gpio->output, 0), TQMX86_GPIOD); 86 71 raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 87 72 } 88 73 ··· 112 107 return GPIO_LINE_DIRECTION_OUT; 113 108 } 114 109 110 + static void tqmx86_gpio_irq_config(struct tqmx86_gpio_data *gpio, int offset) 111 + __must_hold(&gpio->spinlock) 112 + { 113 + u8 type = TQMX86_GPII_NONE, gpiic; 114 + 115 + if (gpio->irq_type[offset] & TQMX86_INT_UNMASKED) { 116 + type = gpio->irq_type[offset] & TQMX86_GPII_MASK; 117 + 118 + if (type == TQMX86_INT_BOTH) 119 + type = tqmx86_gpio_get(&gpio->chip, offset + TQMX86_NGPO) 120 + ? 
TQMX86_GPII_FALLING 121 + : TQMX86_GPII_RISING; 122 + } 123 + 124 + gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC); 125 + gpiic &= ~(TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS)); 126 + gpiic |= type << (offset * TQMX86_GPII_BITS); 127 + tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC); 128 + } 129 + 115 130 static void tqmx86_gpio_irq_mask(struct irq_data *data) 116 131 { 117 132 unsigned int offset = (data->hwirq - TQMX86_NGPO); 118 133 struct tqmx86_gpio_data *gpio = gpiochip_get_data( 119 134 irq_data_get_irq_chip_data(data)); 120 135 unsigned long flags; 121 - u8 gpiic, mask; 122 - 123 - mask = TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS); 124 136 125 137 raw_spin_lock_irqsave(&gpio->spinlock, flags); 126 - gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC); 127 - gpiic &= ~mask; 128 - tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC); 138 + gpio->irq_type[offset] &= ~TQMX86_INT_UNMASKED; 139 + tqmx86_gpio_irq_config(gpio, offset); 129 140 raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 141 + 130 142 gpiochip_disable_irq(&gpio->chip, irqd_to_hwirq(data)); 131 143 } 132 144 ··· 153 131 struct tqmx86_gpio_data *gpio = gpiochip_get_data( 154 132 irq_data_get_irq_chip_data(data)); 155 133 unsigned long flags; 156 - u8 gpiic, mask; 157 - 158 - mask = TQMX86_GPII_MASK << (offset * TQMX86_GPII_BITS); 159 134 160 135 gpiochip_enable_irq(&gpio->chip, irqd_to_hwirq(data)); 136 + 161 137 raw_spin_lock_irqsave(&gpio->spinlock, flags); 162 - gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC); 163 - gpiic &= ~mask; 164 - gpiic |= gpio->irq_type[offset] << (offset * TQMX86_GPII_BITS); 165 - tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC); 138 + gpio->irq_type[offset] |= TQMX86_INT_UNMASKED; 139 + tqmx86_gpio_irq_config(gpio, offset); 166 140 raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 167 141 } 168 142 ··· 169 151 unsigned int offset = (data->hwirq - TQMX86_NGPO); 170 152 unsigned int edge_type = type & IRQF_TRIGGER_MASK; 171 153 unsigned long flags; 172 - u8 new_type, gpiic; 154 + u8 new_type; 173 155 174 156 switch (edge_type) { 175 157 case IRQ_TYPE_EDGE_RISING: ··· 179 161 new_type = TQMX86_GPII_FALLING; 180 162 break; 181 163 case IRQ_TYPE_EDGE_BOTH: 182 - new_type = TQMX86_GPII_FALLING | TQMX86_GPII_RISING; 164 + new_type = TQMX86_INT_BOTH; 183 165 break; 184 166 default: 185 167 return -EINVAL; /* not supported */ 186 168 } 187 169 188 - gpio->irq_type[offset] = new_type; 189 - 190 170 raw_spin_lock_irqsave(&gpio->spinlock, flags); 191 - gpiic = tqmx86_gpio_read(gpio, TQMX86_GPIIC); 192 - gpiic &= ~((TQMX86_GPII_MASK) << (offset * TQMX86_GPII_BITS)); 193 - gpiic |= new_type << (offset * TQMX86_GPII_BITS); 194 - tqmx86_gpio_write(gpio, gpiic, TQMX86_GPIIC); 171 + gpio->irq_type[offset] &= ~TQMX86_GPII_MASK; 172 + gpio->irq_type[offset] |= new_type; 173 + tqmx86_gpio_irq_config(gpio, offset); 195 174 raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 196 175 197 176 return 0; ··· 199 184 struct gpio_chip *chip = irq_desc_get_handler_data(desc); 200 185 struct tqmx86_gpio_data *gpio = gpiochip_get_data(chip); 201 186 struct irq_chip *irq_chip = irq_desc_get_chip(desc); 202 - unsigned long irq_bits; 203 - int i = 0; 187 + unsigned long irq_bits, flags; 188 + int i; 204 189 u8 irq_status; 205 190 206 191 chained_irq_enter(irq_chip, desc); ··· 209 194 tqmx86_gpio_write(gpio, irq_status, TQMX86_GPIIS); 210 195 211 196 irq_bits = irq_status; 197 + 198 + raw_spin_lock_irqsave(&gpio->spinlock, flags); 199 + for_each_set_bit(i, &irq_bits, TQMX86_NGPI) { 200 + /* 201 + * Edge-both triggers are implemented by 
flipping the edge 202 + * trigger after each interrupt, as the controller only supports 203 + * either rising or falling edge triggers, but not both. 204 + * 205 + * Internally, the TQMx86 GPIO controller has separate status 206 + * registers for rising and falling edge interrupts. GPIIC 207 + * configures which bits from which register are visible in the 208 + * interrupt status register GPIIS and defines what triggers the 209 + * parent IRQ line. Writing to GPIIS always clears both rising 210 + * and falling interrupt flags internally, regardless of the 211 + * currently configured trigger. 212 + * 213 + * In consequence, we can cleanly implement the edge-both 214 + * trigger in software by first clearing the interrupt and then 215 + * setting the new trigger based on the current GPIO input in 216 + * tqmx86_gpio_irq_config() - even if an edge arrives between 217 + * reading the input and setting the trigger, we will have a new 218 + * interrupt pending. 219 + */ 220 + if ((gpio->irq_type[i] & TQMX86_GPII_MASK) == TQMX86_INT_BOTH) 221 + tqmx86_gpio_irq_config(gpio, i); 222 + } 223 + raw_spin_unlock_irqrestore(&gpio->spinlock, flags); 224 + 212 225 for_each_set_bit(i, &irq_bits, TQMX86_NGPI) 213 226 generic_handle_domain_irq(gpio->chip.irq.domain, 214 227 i + TQMX86_NGPO); ··· 319 276 gpio->io_base = io_base; 320 277 321 278 tqmx86_gpio_write(gpio, (u8)~TQMX86_DIR_INPUT_MASK, TQMX86_GPIODD); 279 + 280 + /* 281 + * Reading the previous output state is not possible with TQMx86 hardware. 282 + * Initialize all outputs to 0 to have a defined state that matches the 283 + * shadow register. 284 + */ 285 + tqmx86_gpio_write(gpio, 0, TQMX86_GPIOD); 322 286 323 287 chip = &gpio->chip; 324 288 chip->label = "gpio-tqmx86";
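One pattern worth calling out from this hunk: the driver now keeps a shadow copy of the output register instead of reading the register back. The generic sketch below illustrates that shadow-register idiom; the struct, write_reg() and fake_output_reg are invented for the example, and the locking done in the real driver is omitted.

    #include <stdint.h>
    #include <stdbool.h>

    static uint8_t fake_output_reg;   /* stands in for the write-only hardware register */

    static void write_reg(uint8_t val)
    {
            fake_output_reg = val;
    }

    struct shadowed_gpio {
            uint8_t output;   /* software copy of the last value written */
    };

    static void gpio_set(struct shadowed_gpio *g, unsigned int offset, bool value)
    {
            /*
             * Do the read-modify-write on the shadow copy, not on the register:
             * the hardware register cannot be read back reliably, so the shadow
             * is the only trustworthy record of the current output state.
             */
            if (value)
                    g->output |= (uint8_t)(1u << offset);
            else
                    g->output &= (uint8_t)~(1u << offset);
            write_reg(g->output);
    }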
+49 -42
drivers/gpu/drm/amd/include/pptable.h
··· 477 477 } ATOM_PPLIB_STATE_V2; 478 478 479 479 typedef struct _StateArray{ 480 - //how many states we have 481 - UCHAR ucNumEntries; 482 - 483 - ATOM_PPLIB_STATE_V2 states[1]; 480 + //how many states we have 481 + UCHAR ucNumEntries; 482 + 483 + ATOM_PPLIB_STATE_V2 states[] /* __counted_by(ucNumEntries) */; 484 484 }StateArray; 485 485 486 486 487 487 typedef struct _ClockInfoArray{ 488 - //how many clock levels we have 489 - UCHAR ucNumEntries; 490 - 491 - //sizeof(ATOM_PPLIB_CLOCK_INFO) 492 - UCHAR ucEntrySize; 493 - 494 - UCHAR clockInfo[]; 488 + //how many clock levels we have 489 + UCHAR ucNumEntries; 490 + 491 + //sizeof(ATOM_PPLIB_CLOCK_INFO) 492 + UCHAR ucEntrySize; 493 + 494 + UCHAR clockInfo[]; 495 495 }ClockInfoArray; 496 496 497 497 typedef struct _NonClockInfoArray{ 498 + //how many non-clock levels we have. normally should be same as number of states 499 + UCHAR ucNumEntries; 500 + //sizeof(ATOM_PPLIB_NONCLOCK_INFO) 501 + UCHAR ucEntrySize; 498 502 499 - //how many non-clock levels we have. normally should be same as number of states 500 - UCHAR ucNumEntries; 501 - //sizeof(ATOM_PPLIB_NONCLOCK_INFO) 502 - UCHAR ucEntrySize; 503 - 504 - ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[]; 503 + ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[] __counted_by(ucNumEntries); 505 504 }NonClockInfoArray; 506 505 507 506 typedef struct _ATOM_PPLIB_Clock_Voltage_Dependency_Record ··· 512 513 513 514 typedef struct _ATOM_PPLIB_Clock_Voltage_Dependency_Table 514 515 { 515 - UCHAR ucNumEntries; // Number of entries. 516 - ATOM_PPLIB_Clock_Voltage_Dependency_Record entries[1]; // Dynamically allocate entries. 516 + // Number of entries. 517 + UCHAR ucNumEntries; 518 + // Dynamically allocate entries. 519 + ATOM_PPLIB_Clock_Voltage_Dependency_Record entries[] __counted_by(ucNumEntries); 517 520 }ATOM_PPLIB_Clock_Voltage_Dependency_Table; 518 521 519 522 typedef struct _ATOM_PPLIB_Clock_Voltage_Limit_Record ··· 530 529 531 530 typedef struct _ATOM_PPLIB_Clock_Voltage_Limit_Table 532 531 { 533 - UCHAR ucNumEntries; // Number of entries. 534 - ATOM_PPLIB_Clock_Voltage_Limit_Record entries[1]; // Dynamically allocate entries. 532 + // Number of entries. 533 + UCHAR ucNumEntries; 534 + // Dynamically allocate entries. 535 + ATOM_PPLIB_Clock_Voltage_Limit_Record entries[] __counted_by(ucNumEntries); 535 536 }ATOM_PPLIB_Clock_Voltage_Limit_Table; 536 537 537 538 union _ATOM_PPLIB_CAC_Leakage_Record ··· 556 553 557 554 typedef struct _ATOM_PPLIB_CAC_Leakage_Table 558 555 { 559 - UCHAR ucNumEntries; // Number of entries. 560 - ATOM_PPLIB_CAC_Leakage_Record entries[1]; // Dynamically allocate entries. 556 + // Number of entries. 557 + UCHAR ucNumEntries; 558 + // Dynamically allocate entries. 559 + ATOM_PPLIB_CAC_Leakage_Record entries[] __counted_by(ucNumEntries); 561 560 }ATOM_PPLIB_CAC_Leakage_Table; 562 561 563 562 typedef struct _ATOM_PPLIB_PhaseSheddingLimits_Record ··· 573 568 574 569 typedef struct _ATOM_PPLIB_PhaseSheddingLimits_Table 575 570 { 576 - UCHAR ucNumEntries; // Number of entries. 577 - ATOM_PPLIB_PhaseSheddingLimits_Record entries[1]; // Dynamically allocate entries. 571 + // Number of entries. 572 + UCHAR ucNumEntries; 573 + // Dynamically allocate entries. 
574 + ATOM_PPLIB_PhaseSheddingLimits_Record entries[] __counted_by(ucNumEntries); 578 575 }ATOM_PPLIB_PhaseSheddingLimits_Table; 579 576 580 577 typedef struct _VCEClockInfo{ ··· 587 580 }VCEClockInfo; 588 581 589 582 typedef struct _VCEClockInfoArray{ 590 - UCHAR ucNumEntries; 591 - VCEClockInfo entries[1]; 583 + UCHAR ucNumEntries; 584 + VCEClockInfo entries[] __counted_by(ucNumEntries); 592 585 }VCEClockInfoArray; 593 586 594 587 typedef struct _ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record ··· 599 592 600 593 typedef struct _ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table 601 594 { 602 - UCHAR numEntries; 603 - ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record entries[1]; 595 + UCHAR numEntries; 596 + ATOM_PPLIB_VCE_Clock_Voltage_Limit_Record entries[] __counted_by(numEntries); 604 597 }ATOM_PPLIB_VCE_Clock_Voltage_Limit_Table; 605 598 606 599 typedef struct _ATOM_PPLIB_VCE_State_Record ··· 611 604 612 605 typedef struct _ATOM_PPLIB_VCE_State_Table 613 606 { 614 - UCHAR numEntries; 615 - ATOM_PPLIB_VCE_State_Record entries[1]; 607 + UCHAR numEntries; 608 + ATOM_PPLIB_VCE_State_Record entries[] __counted_by(numEntries); 616 609 }ATOM_PPLIB_VCE_State_Table; 617 610 618 611 ··· 633 626 }UVDClockInfo; 634 627 635 628 typedef struct _UVDClockInfoArray{ 636 - UCHAR ucNumEntries; 637 - UVDClockInfo entries[1]; 629 + UCHAR ucNumEntries; 630 + UVDClockInfo entries[] __counted_by(ucNumEntries); 638 631 }UVDClockInfoArray; 639 632 640 633 typedef struct _ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record ··· 645 638 646 639 typedef struct _ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table 647 640 { 648 - UCHAR numEntries; 649 - ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record entries[1]; 641 + UCHAR numEntries; 642 + ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record entries[] __counted_by(numEntries); 650 643 }ATOM_PPLIB_UVD_Clock_Voltage_Limit_Table; 651 644 652 645 typedef struct _ATOM_PPLIB_UVD_Table ··· 664 657 }ATOM_PPLIB_SAMClk_Voltage_Limit_Record; 665 658 666 659 typedef struct _ATOM_PPLIB_SAMClk_Voltage_Limit_Table{ 667 - UCHAR numEntries; 668 - ATOM_PPLIB_SAMClk_Voltage_Limit_Record entries[]; 660 + UCHAR numEntries; 661 + ATOM_PPLIB_SAMClk_Voltage_Limit_Record entries[] __counted_by(numEntries); 669 662 }ATOM_PPLIB_SAMClk_Voltage_Limit_Table; 670 663 671 664 typedef struct _ATOM_PPLIB_SAMU_Table ··· 682 675 }ATOM_PPLIB_ACPClk_Voltage_Limit_Record; 683 676 684 677 typedef struct _ATOM_PPLIB_ACPClk_Voltage_Limit_Table{ 685 - UCHAR numEntries; 686 - ATOM_PPLIB_ACPClk_Voltage_Limit_Record entries[1]; 678 + UCHAR numEntries; 679 + ATOM_PPLIB_ACPClk_Voltage_Limit_Record entries[] __counted_by(numEntries); 687 680 }ATOM_PPLIB_ACPClk_Voltage_Limit_Table; 688 681 689 682 typedef struct _ATOM_PPLIB_ACP_Table ··· 750 743 } ATOM_PPLIB_VQ_Budgeting_Record; 751 744 752 745 typedef struct ATOM_PPLIB_VQ_Budgeting_Table { 753 - UCHAR revid; 754 - UCHAR numEntries; 755 - ATOM_PPLIB_VQ_Budgeting_Record entries[1]; 746 + UCHAR revid; 747 + UCHAR numEntries; 748 + ATOM_PPLIB_VQ_Budgeting_Record entries[] __counted_by(numEntries); 756 749 } ATOM_PPLIB_VQ_Budgeting_Table; 757 750 758 751 #pragma pack()
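These structures move from the old one-element-array trick to C99 flexible array members annotated with __counted_by, which tells the compiler and bounds-checking instrumentation which field holds the element count. The standalone sketch below shows the general pattern; entry_table and its allocator are made up, and the fallback macro is only there for compilers without the attribute.

    #include <stdlib.h>

    /* Provide a no-op fallback when the compiler does not know counted_by. */
    #ifndef __counted_by
    # ifdef __has_attribute
    #  if __has_attribute(counted_by)
    #   define __counted_by(member) __attribute__((counted_by(member)))
    #  endif
    # endif
    #endif
    #ifndef __counted_by
    # define __counted_by(member)
    #endif

    struct entry_table {
            unsigned char num_entries;                           /* element count */
            unsigned int entries[] __counted_by(num_entries);    /* flexible array */
    };

    static struct entry_table *entry_table_alloc(unsigned char n)
    {
            struct entry_table *t = malloc(sizeof(*t) + n * sizeof(t->entries[0]));

            /* Set the count before entries[] is touched so that bounds-checking
             * instrumentation sees a consistent object. */
            if (t)
                    t->num_entries = n;
            return t;
    }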
+11 -9
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_4_ppt.c
··· 226 226 struct amdgpu_device *adev = smu->adev; 227 227 int ret = 0; 228 228 229 - if (!en && adev->in_s4) { 230 - /* Adds a GFX reset as workaround just before sending the 231 - * MP1_UNLOAD message to prevent GC/RLC/PMFW from entering 232 - * an invalid state. 233 - */ 234 - ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GfxDeviceDriverReset, 235 - SMU_RESET_MODE_2, NULL); 236 - if (ret) 237 - return ret; 229 + if (!en && !adev->in_s0ix) { 230 + if (adev->in_s4) { 231 + /* Adds a GFX reset as workaround just before sending the 232 + * MP1_UNLOAD message to prevent GC/RLC/PMFW from entering 233 + * an invalid state. 234 + */ 235 + ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_GfxDeviceDriverReset, 236 + SMU_RESET_MODE_2, NULL); 237 + if (ret) 238 + return ret; 239 + } 238 240 239 241 ret = smu_cmn_send_smc_msg(smu, SMU_MSG_PrepareMp1ForUnload, NULL); 240 242 }
-5
drivers/gpu/drm/arm/display/komeda/komeda_color_mgmt.c
··· 72 72 u32 segment_width; 73 73 }; 74 74 75 - struct gamma_curve_segment { 76 - u32 start; 77 - u32 end; 78 - }; 79 - 80 75 static struct gamma_curve_sector sector_tbl[] = { 81 76 { 0, 4, 4 }, 82 77 { 16, 4, 4 },
+3 -1
drivers/gpu/drm/panel/panel-sitronix-st7789v.c
··· 643 643 if (ret) 644 644 return dev_err_probe(dev, ret, "Failed to get backlight\n"); 645 645 646 - of_drm_get_panel_orientation(spi->dev.of_node, &ctx->orientation); 646 + ret = of_drm_get_panel_orientation(spi->dev.of_node, &ctx->orientation); 647 + if (ret) 648 + return dev_err_probe(&spi->dev, ret, "Failed to get orientation\n"); 647 649 648 650 drm_panel_add(&ctx->panel); 649 651
+6 -13
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 746 746 dev->vram_size = pci_resource_len(pdev, 2); 747 747 748 748 drm_info(&dev->drm, 749 - "Register MMIO at 0x%pa size is %llu kiB\n", 749 + "Register MMIO at 0x%pa size is %llu KiB\n", 750 750 &rmmio_start, (uint64_t)rmmio_size / 1024); 751 751 dev->rmmio = devm_ioremap(dev->drm.dev, 752 752 rmmio_start, ··· 765 765 fifo_size = pci_resource_len(pdev, 2); 766 766 767 767 drm_info(&dev->drm, 768 - "FIFO at %pa size is %llu kiB\n", 768 + "FIFO at %pa size is %llu KiB\n", 769 769 &fifo_start, (uint64_t)fifo_size / 1024); 770 770 dev->fifo_mem = devm_memremap(dev->drm.dev, 771 771 fifo_start, ··· 790 790 * SVGA_REG_VRAM_SIZE. 791 791 */ 792 792 drm_info(&dev->drm, 793 - "VRAM at %pa size is %llu kiB\n", 793 + "VRAM at %pa size is %llu KiB\n", 794 794 &dev->vram_start, (uint64_t)dev->vram_size / 1024); 795 795 796 796 return 0; ··· 960 960 vmw_read(dev_priv, 961 961 SVGA_REG_SUGGESTED_GBOBJECT_MEM_SIZE_KB); 962 962 963 - /* 964 - * Workaround for low memory 2D VMs to compensate for the 965 - * allocation taken by fbdev 966 - */ 967 - if (!(dev_priv->capabilities & SVGA_CAP_3D)) 968 - mem_size *= 3; 969 - 970 963 dev_priv->max_mob_pages = mem_size * 1024 / PAGE_SIZE; 971 964 dev_priv->max_primary_mem = 972 965 vmw_read(dev_priv, SVGA_REG_MAX_PRIMARY_MEM); ··· 984 991 dev_priv->max_primary_mem = dev_priv->vram_size; 985 992 } 986 993 drm_info(&dev_priv->drm, 987 - "Legacy memory limits: VRAM = %llu kB, FIFO = %llu kB, surface = %u kB\n", 994 + "Legacy memory limits: VRAM = %llu KiB, FIFO = %llu KiB, surface = %u KiB\n", 988 995 (u64)dev_priv->vram_size / 1024, 989 996 (u64)dev_priv->fifo_mem_size / 1024, 990 997 dev_priv->memory_size / 1024); 991 998 992 999 drm_info(&dev_priv->drm, 993 - "MOB limits: max mob size = %u kB, max mob pages = %u\n", 1000 + "MOB limits: max mob size = %u KiB, max mob pages = %u\n", 994 1001 dev_priv->max_mob_size / 1024, dev_priv->max_mob_pages); 995 1002 996 1003 ret = vmw_dma_masks(dev_priv); ··· 1008 1015 (unsigned)dev_priv->max_gmr_pages); 1009 1016 } 1010 1017 drm_info(&dev_priv->drm, 1011 - "Maximum display memory size is %llu kiB\n", 1018 + "Maximum display memory size is %llu KiB\n", 1012 1019 (uint64_t)dev_priv->max_primary_mem / 1024); 1013 1020 1014 1021 /* Need mmio memory to check for fifo pitchlock cap. */
-3
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 1043 1043 int vmw_kms_write_svga(struct vmw_private *vmw_priv, 1044 1044 unsigned width, unsigned height, unsigned pitch, 1045 1045 unsigned bpp, unsigned depth); 1046 - bool vmw_kms_validate_mode_vram(struct vmw_private *dev_priv, 1047 - uint32_t pitch, 1048 - uint32_t height); 1049 1046 int vmw_kms_present(struct vmw_private *dev_priv, 1050 1047 struct drm_file *file_priv, 1051 1048 struct vmw_framebuffer *vfb,
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
··· 94 94 } else 95 95 new_max_pages = gman->max_gmr_pages * 2; 96 96 if (new_max_pages > gman->max_gmr_pages && new_max_pages >= gman->used_gmr_pages) { 97 - DRM_WARN("vmwgfx: increasing guest mob limits to %u kB.\n", 97 + DRM_WARN("vmwgfx: increasing guest mob limits to %u KiB.\n", 98 98 ((new_max_pages) << (PAGE_SHIFT - 10))); 99 99 100 100 gman->max_gmr_pages = new_max_pages; 101 101 } else { 102 102 char buf[256]; 103 103 snprintf(buf, sizeof(buf), 104 - "vmwgfx, error: guest graphics is out of memory (mob limit at: %ukB).\n", 104 + "vmwgfx, error: guest graphics is out of memory (mob limit at: %u KiB).\n", 105 105 ((gman->max_gmr_pages) << (PAGE_SHIFT - 10))); 106 106 vmw_host_printf(buf); 107 107 DRM_WARN("%s", buf);
+10 -18
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 224 224 new_image = vmw_du_cursor_plane_acquire_image(new_vps); 225 225 226 226 changed = false; 227 - if (old_image && new_image) 227 + if (old_image && new_image && old_image != new_image) 228 228 changed = memcmp(old_image, new_image, size) != 0; 229 229 230 230 return changed; ··· 2171 2171 return 0; 2172 2172 } 2173 2173 2174 + static 2174 2175 bool vmw_kms_validate_mode_vram(struct vmw_private *dev_priv, 2175 - uint32_t pitch, 2176 - uint32_t height) 2176 + u64 pitch, 2177 + u64 height) 2177 2178 { 2178 - return ((u64) pitch * (u64) height) < (u64) 2179 - ((dev_priv->active_display_unit == vmw_du_screen_target) ? 2180 - dev_priv->max_primary_mem : dev_priv->vram_size); 2179 + return (pitch * height) < (u64)dev_priv->vram_size; 2181 2180 } 2182 2181 2183 2182 /** ··· 2872 2873 enum drm_mode_status vmw_connector_mode_valid(struct drm_connector *connector, 2873 2874 struct drm_display_mode *mode) 2874 2875 { 2876 + enum drm_mode_status ret; 2875 2877 struct drm_device *dev = connector->dev; 2876 2878 struct vmw_private *dev_priv = vmw_priv(dev); 2877 - u32 max_width = dev_priv->texture_max_width; 2878 - u32 max_height = dev_priv->texture_max_height; 2879 2879 u32 assumed_cpp = 4; 2880 2880 2881 2881 if (dev_priv->assume_16bpp) 2882 2882 assumed_cpp = 2; 2883 2883 2884 - if (dev_priv->active_display_unit == vmw_du_screen_target) { 2885 - max_width = min(dev_priv->stdu_max_width, max_width); 2886 - max_height = min(dev_priv->stdu_max_height, max_height); 2887 - } 2888 - 2889 - if (max_width < mode->hdisplay) 2890 - return MODE_BAD_HVALUE; 2891 - 2892 - if (max_height < mode->vdisplay) 2893 - return MODE_BAD_VVALUE; 2884 + ret = drm_mode_validate_size(mode, dev_priv->texture_max_width, 2885 + dev_priv->texture_max_height); 2886 + if (ret != MODE_OK) 2887 + return ret; 2894 2888 2895 2889 if (!vmw_kms_validate_mode_vram(dev_priv, 2896 2890 mode->hdisplay * assumed_cpp,
+53 -7
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 43 43 #define vmw_connector_to_stdu(x) \ 44 44 container_of(x, struct vmw_screen_target_display_unit, base.connector) 45 45 46 - 46 + /* 47 + * Some renderers such as llvmpipe will align the width and height of their 48 + * buffers to match their tile size. We need to keep this in mind when exposing 49 + * modes to userspace so that this possible over-allocation will not exceed 50 + * graphics memory. 64x64 pixels seems to be a reasonable upper bound for the 51 + * tile size of current renderers. 52 + */ 53 + #define GPU_TILE_SIZE 64 47 54 48 55 enum stdu_content_type { 49 56 SAME_AS_DISPLAY = 0, ··· 90 83 struct vmw_stdu_update { 91 84 SVGA3dCmdHeader header; 92 85 SVGA3dCmdUpdateGBScreenTarget body; 93 - }; 94 - 95 - struct vmw_stdu_dma { 96 - SVGA3dCmdHeader header; 97 - SVGA3dCmdSurfaceDMA body; 98 86 }; 99 87 100 88 struct vmw_stdu_surface_copy { ··· 416 414 { 417 415 struct vmw_private *dev_priv; 418 416 struct vmw_screen_target_display_unit *stdu; 417 + struct drm_crtc_state *new_crtc_state; 419 418 int ret; 420 419 421 420 if (!crtc) { ··· 426 423 427 424 stdu = vmw_crtc_to_stdu(crtc); 428 425 dev_priv = vmw_priv(crtc->dev); 426 + new_crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 429 427 430 428 if (dev_priv->vkms_enabled) 431 429 drm_crtc_vblank_off(crtc); ··· 437 433 DRM_ERROR("Failed to blank CRTC\n"); 438 434 439 435 (void) vmw_stdu_update_st(dev_priv, stdu); 436 + 437 + /* Don't destroy the Screen Target if we are only setting the 438 + * display as inactive 439 + */ 440 + if (new_crtc_state->enable && 441 + !new_crtc_state->active && 442 + !new_crtc_state->mode_changed) 443 + return; 440 444 441 445 ret = vmw_stdu_destroy_st(dev_priv, stdu); 442 446 if (ret) ··· 841 829 vmw_stdu_destroy(vmw_connector_to_stdu(connector)); 842 830 } 843 831 832 + static enum drm_mode_status 833 + vmw_stdu_connector_mode_valid(struct drm_connector *connector, 834 + struct drm_display_mode *mode) 835 + { 836 + enum drm_mode_status ret; 837 + struct drm_device *dev = connector->dev; 838 + struct vmw_private *dev_priv = vmw_priv(dev); 839 + u64 assumed_cpp = dev_priv->assume_16bpp ? 2 : 4; 840 + /* Align width and height to account for GPU tile over-alignment */ 841 + u64 required_mem = ALIGN(mode->hdisplay, GPU_TILE_SIZE) * 842 + ALIGN(mode->vdisplay, GPU_TILE_SIZE) * 843 + assumed_cpp; 844 + required_mem = ALIGN(required_mem, PAGE_SIZE); 844 845 846 + ret = drm_mode_validate_size(mode, dev_priv->stdu_max_width, 847 + dev_priv->stdu_max_height); 848 + if (ret != MODE_OK) 849 + return ret; 850 + 851 + ret = drm_mode_validate_size(mode, dev_priv->texture_max_width, 852 + dev_priv->texture_max_height); 853 + if (ret != MODE_OK) 854 + return ret; 855 + 856 + if (required_mem > dev_priv->max_primary_mem) 857 + return MODE_MEM; 858 + 859 + if (required_mem > dev_priv->max_mob_pages * PAGE_SIZE) 860 + return MODE_MEM; 861 + 862 + if (required_mem > dev_priv->max_mob_size) 863 + return MODE_MEM; 864 + 865 + return MODE_OK; 866 + } 845 867 846 868 static const struct drm_connector_funcs vmw_stdu_connector_funcs = { 847 869 .dpms = vmw_du_connector_dpms, ··· 891 845 static const struct 892 846 drm_connector_helper_funcs vmw_stdu_connector_helper_funcs = { 893 847 .get_modes = vmw_connector_get_modes, 894 - .mode_valid = vmw_connector_mode_valid 848 + .mode_valid = vmw_stdu_connector_mode_valid 895 849 }; 896 850 897 851
+1
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
··· 1749 1749 if (!xe_gt_is_media_type(gt)) { 1750 1750 pf_release_vf_config_ggtt(gt, config); 1751 1751 pf_release_vf_config_lmem(gt, config); 1752 + pf_update_vf_lmtt(gt_to_xe(gt), vfid); 1752 1753 } 1753 1754 pf_release_config_ctxs(gt, config); 1754 1755 pf_release_config_dbs(gt, config);
+2 -2
drivers/hid/hid-asus.c
··· 1204 1204 } 1205 1205 1206 1206 /* match many more n-key devices */ 1207 - if (drvdata->quirks & QUIRK_ROG_NKEY_KEYBOARD) { 1208 - for (int i = 0; i < *rsize + 1; i++) { 1207 + if (drvdata->quirks & QUIRK_ROG_NKEY_KEYBOARD && *rsize > 15) { 1208 + for (int i = 0; i < *rsize - 15; i++) { 1209 1209 /* offset to the count from 0x5a report part always 14 */ 1210 1210 if (rdesc[i] == 0x85 && rdesc[i + 1] == 0x5a && 1211 1211 rdesc[i + 14] == 0x95 && rdesc[i + 15] == 0x05) {
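The fix here is a bounds correction: the loop reads rdesc[i + 15], so it must stop 15 bytes before the end and skip short descriptors entirely. The sketch below shows that windowed-scan guard in isolation; find_report_window() is a made-up helper, with only the byte pattern taken from the hunk.

    #include <stddef.h>
    #include <stdint.h>

    #define WINDOW 16   /* bytes inspected starting at each candidate offset */

    /* Returns the offset of the first matching window, or -1 when no complete
     * window fits inside the buffer. */
    static long find_report_window(const uint8_t *buf, size_t len)
    {
            if (len < WINDOW)   /* descriptor too short: nothing can match safely */
                    return -1;

            for (size_t i = 0; i <= len - WINDOW; i++) {
                    /* i + 15 is at most len - 1 here, so no out-of-bounds read. */
                    if (buf[i] == 0x85 && buf[i + 1] == 0x5a &&
                        buf[i + 14] == 0x95 && buf[i + 15] == 0x05)
                            return (long)i;
            }
            return -1;
    }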
-1
drivers/hid/hid-core.c
··· 1448 1448 hid_warn(hid, 1449 1449 "%s() called with too large value %d (n: %d)! (%s)\n", 1450 1450 __func__, value, n, current->comm); 1451 - WARN_ON(1); 1452 1451 value &= m; 1453 1452 } 1454 1453 }
+2
drivers/hid/hid-debug.c
··· 3366 3366 [KEY_CAMERA_ACCESS_ENABLE] = "CameraAccessEnable", 3367 3367 [KEY_CAMERA_ACCESS_DISABLE] = "CameraAccessDisable", 3368 3368 [KEY_CAMERA_ACCESS_TOGGLE] = "CameraAccessToggle", 3369 + [KEY_ACCESSIBILITY] = "Accessibility", 3370 + [KEY_DO_NOT_DISTURB] = "DoNotDisturb", 3369 3371 [KEY_DICTATE] = "Dictate", 3370 3372 [KEY_MICMUTE] = "MicrophoneMute", 3371 3373 [KEY_BRIGHTNESS_MIN] = "BrightnessMin",
+2
drivers/hid/hid-ids.h
··· 423 423 #define I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG 0x29DF 424 424 #define I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN 0x2BC8 425 425 #define I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN 0x2C82 426 + #define I2C_DEVICE_ID_ASUS_UX3402_TOUCHSCREEN 0x2F2C 427 + #define I2C_DEVICE_ID_ASUS_UX6404_TOUCHSCREEN 0x4116 426 428 #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN 0x2544 427 429 #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706 428 430 #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A
+13
drivers/hid/hid-input.c
··· 377 377 HID_BATTERY_QUIRK_IGNORE }, 378 378 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_GV301RA_TOUCHSCREEN), 379 379 HID_BATTERY_QUIRK_IGNORE }, 380 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_UX3402_TOUCHSCREEN), 381 + HID_BATTERY_QUIRK_IGNORE }, 382 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_UX6404_TOUCHSCREEN), 383 + HID_BATTERY_QUIRK_IGNORE }, 380 384 { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN), 381 385 HID_BATTERY_QUIRK_IGNORE }, 382 386 { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN), ··· 837 833 break; 838 834 } 839 835 836 + if ((usage->hid & 0xf0) == 0x90) { /* SystemControl*/ 837 + switch (usage->hid & 0xf) { 838 + case 0xb: map_key_clear(KEY_DO_NOT_DISTURB); break; 839 + default: goto ignore; 840 + } 841 + break; 842 + } 843 + 840 844 if ((usage->hid & 0xf0) == 0xa0) { /* SystemControl */ 841 845 switch (usage->hid & 0xf) { 842 846 case 0x9: map_key_clear(KEY_MICMUTE); break; 847 + case 0xa: map_key_clear(KEY_ACCESSIBILITY); break; 843 848 default: goto ignore; 844 849 } 845 850 break;
+3 -1
drivers/hid/hid-logitech-dj.c
··· 1284 1284 */ 1285 1285 msleep(50); 1286 1286 1287 - if (retval) 1287 + if (retval) { 1288 + kfree(dj_report); 1288 1289 return retval; 1290 + } 1289 1291 } 1290 1292 1291 1293 /*
+1
drivers/hid/hid-logitech-hidpp.c
··· 27 27 #include "usbhid/usbhid.h" 28 28 #include "hid-ids.h" 29 29 30 + MODULE_DESCRIPTION("Support for Logitech devices relying on the HID++ specification"); 30 31 MODULE_LICENSE("GPL"); 31 32 MODULE_AUTHOR("Benjamin Tissoires <benjamin.tissoires@gmail.com>"); 32 33 MODULE_AUTHOR("Nestor Lopez Casado <nlopezcasad@logitech.com>");
+4 -2
drivers/hid/hid-nintendo.c
··· 2725 2725 ret = joycon_power_supply_create(ctlr); 2726 2726 if (ret) { 2727 2727 hid_err(hdev, "Failed to create power_supply; ret=%d\n", ret); 2728 - goto err_close; 2728 + goto err_ida; 2729 2729 } 2730 2730 2731 2731 ret = joycon_input_create(ctlr); 2732 2732 if (ret) { 2733 2733 hid_err(hdev, "Failed to create input device; ret=%d\n", ret); 2734 - goto err_close; 2734 + goto err_ida; 2735 2735 } 2736 2736 2737 2737 ctlr->ctlr_state = JOYCON_CTLR_STATE_READ; ··· 2739 2739 hid_dbg(hdev, "probe - success\n"); 2740 2740 return 0; 2741 2741 2742 + err_ida: 2743 + ida_free(&nintendo_player_id_allocator, ctlr->player_id); 2742 2744 err_close: 2743 2745 hid_hw_close(hdev); 2744 2746 err_stop:
+3 -1
drivers/hid/hid-nvidia-shield.c
··· 283 283 return haptics; 284 284 285 285 input_set_capability(haptics, EV_FF, FF_RUMBLE); 286 - input_ff_create_memless(haptics, NULL, play_effect); 286 + ret = input_ff_create_memless(haptics, NULL, play_effect); 287 + if (ret) 288 + goto err; 287 289 288 290 ret = input_register_device(haptics); 289 291 if (ret)
+47 -12
drivers/hid/i2c-hid/i2c-hid-of-elan.c
··· 31 31 struct regulator *vcc33; 32 32 struct regulator *vccio; 33 33 struct gpio_desc *reset_gpio; 34 + bool no_reset_on_power_off; 34 35 const struct elan_i2c_hid_chip_data *chip_data; 35 36 }; 36 37 ··· 41 40 container_of(ops, struct i2c_hid_of_elan, ops); 42 41 int ret; 43 42 43 + gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); 44 + 44 45 if (ihid_elan->vcc33) { 45 46 ret = regulator_enable(ihid_elan->vcc33); 46 47 if (ret) 47 - return ret; 48 + goto err_deassert_reset; 48 49 } 49 50 50 51 ret = regulator_enable(ihid_elan->vccio); 51 - if (ret) { 52 - regulator_disable(ihid_elan->vcc33); 53 - return ret; 54 - } 52 + if (ret) 53 + goto err_disable_vcc33; 55 54 56 55 if (ihid_elan->chip_data->post_power_delay_ms) 57 56 msleep(ihid_elan->chip_data->post_power_delay_ms); ··· 61 60 msleep(ihid_elan->chip_data->post_gpio_reset_on_delay_ms); 62 61 63 62 return 0; 63 + 64 + err_disable_vcc33: 65 + if (ihid_elan->vcc33) 66 + regulator_disable(ihid_elan->vcc33); 67 + err_deassert_reset: 68 + if (ihid_elan->no_reset_on_power_off) 69 + gpiod_set_value_cansleep(ihid_elan->reset_gpio, 0); 70 + 71 + return ret; 64 72 } 65 73 66 74 static void elan_i2c_hid_power_down(struct i2chid_ops *ops) ··· 77 67 struct i2c_hid_of_elan *ihid_elan = 78 68 container_of(ops, struct i2c_hid_of_elan, ops); 79 69 80 - gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); 70 + /* 71 + * Do not assert reset when the hardware allows for it to remain 72 + * deasserted regardless of the state of the (shared) power supply to 73 + * avoid wasting power when the supply is left on. 74 + */ 75 + if (!ihid_elan->no_reset_on_power_off) 76 + gpiod_set_value_cansleep(ihid_elan->reset_gpio, 1); 77 + 81 78 if (ihid_elan->chip_data->post_gpio_reset_off_delay_ms) 82 79 msleep(ihid_elan->chip_data->post_gpio_reset_off_delay_ms); 83 80 ··· 96 79 static int i2c_hid_of_elan_probe(struct i2c_client *client) 97 80 { 98 81 struct i2c_hid_of_elan *ihid_elan; 82 + int ret; 99 83 100 84 ihid_elan = devm_kzalloc(&client->dev, sizeof(*ihid_elan), GFP_KERNEL); 101 85 if (!ihid_elan) ··· 111 93 if (IS_ERR(ihid_elan->reset_gpio)) 112 94 return PTR_ERR(ihid_elan->reset_gpio); 113 95 96 + ihid_elan->no_reset_on_power_off = of_property_read_bool(client->dev.of_node, 97 + "no-reset-on-power-off"); 98 + 114 99 ihid_elan->vccio = devm_regulator_get(&client->dev, "vccio"); 115 - if (IS_ERR(ihid_elan->vccio)) 116 - return PTR_ERR(ihid_elan->vccio); 100 + if (IS_ERR(ihid_elan->vccio)) { 101 + ret = PTR_ERR(ihid_elan->vccio); 102 + goto err_deassert_reset; 103 + } 117 104 118 105 ihid_elan->chip_data = device_get_match_data(&client->dev); 119 106 120 107 if (ihid_elan->chip_data->main_supply_name) { 121 108 ihid_elan->vcc33 = devm_regulator_get(&client->dev, 122 109 ihid_elan->chip_data->main_supply_name); 123 - if (IS_ERR(ihid_elan->vcc33)) 124 - return PTR_ERR(ihid_elan->vcc33); 110 + if (IS_ERR(ihid_elan->vcc33)) { 111 + ret = PTR_ERR(ihid_elan->vcc33); 112 + goto err_deassert_reset; 113 + } 125 114 } 126 115 127 - return i2c_hid_core_probe(client, &ihid_elan->ops, 128 - ihid_elan->chip_data->hid_descriptor_address, 0); 116 + ret = i2c_hid_core_probe(client, &ihid_elan->ops, 117 + ihid_elan->chip_data->hid_descriptor_address, 0); 118 + if (ret) 119 + goto err_deassert_reset; 120 + 121 + return 0; 122 + 123 + err_deassert_reset: 124 + if (ihid_elan->no_reset_on_power_off) 125 + gpiod_set_value_cansleep(ihid_elan->reset_gpio, 0); 126 + 127 + return ret; 129 128 } 130 129 131 130 static const struct elan_i2c_hid_chip_data elan_ekth6915_chip_data = {
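The probe and power-up paths here are reorganized around goto-based unwinding, so each early exit releases exactly what was already acquired, in reverse order. The sketch below shows the bare idiom with made-up step_a()/step_b()/undo_a() helpers; step_b() fails on purpose to exercise the unwind path.

    #include <errno.h>

    static int step_a(void) { return 0; }
    static int step_b(void) { return -EIO; }   /* simulated failure */
    static void undo_a(void) { }

    static int setup(void)
    {
            int ret;

            ret = step_a();
            if (ret)
                    return ret;          /* nothing acquired yet, plain return */

            ret = step_b();
            if (ret)
                    goto err_undo_a;     /* unwind only what step_a() set up */

            return 0;

    err_undo_a:
            undo_a();
            return ret;
    }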
+44 -35
drivers/hid/intel-ish-hid/ishtp/loader.c
··· 84 84 static int loader_xfer_cmd(struct ishtp_device *dev, void *req, int req_len, 85 85 void *resp, int resp_len) 86 86 { 87 - struct loader_msg_header *req_hdr = req; 88 - struct loader_msg_header *resp_hdr = resp; 87 + union loader_msg_header req_hdr; 88 + union loader_msg_header resp_hdr; 89 89 struct device *devc = dev->devc; 90 90 int rv; 91 91 ··· 93 93 dev->fw_loader_rx_size = resp_len; 94 94 95 95 rv = loader_write_message(dev, req, req_len); 96 + req_hdr.val32 = le32_to_cpup(req); 97 + 96 98 if (rv < 0) { 97 - dev_err(devc, "write cmd %u failed:%d\n", req_hdr->command, rv); 99 + dev_err(devc, "write cmd %u failed:%d\n", req_hdr.command, rv); 98 100 return rv; 99 101 } 100 102 101 103 /* Wait the ACK */ 102 104 wait_event_interruptible_timeout(dev->wait_loader_recvd_msg, dev->fw_loader_received, 103 105 ISHTP_LOADER_TIMEOUT); 106 + resp_hdr.val32 = le32_to_cpup(resp); 104 107 dev->fw_loader_rx_size = 0; 105 108 dev->fw_loader_rx_buf = NULL; 106 109 if (!dev->fw_loader_received) { 107 - dev_err(devc, "wait response of cmd %u timeout\n", req_hdr->command); 110 + dev_err(devc, "wait response of cmd %u timeout\n", req_hdr.command); 108 111 return -ETIMEDOUT; 109 112 } 110 113 111 - if (!resp_hdr->is_response) { 112 - dev_err(devc, "not a response for %u\n", req_hdr->command); 114 + if (!resp_hdr.is_response) { 115 + dev_err(devc, "not a response for %u\n", req_hdr.command); 113 116 return -EBADMSG; 114 117 } 115 118 116 - if (req_hdr->command != resp_hdr->command) { 117 - dev_err(devc, "unexpected cmd response %u:%u\n", req_hdr->command, 118 - resp_hdr->command); 119 + if (req_hdr.command != resp_hdr.command) { 120 + dev_err(devc, "unexpected cmd response %u:%u\n", req_hdr.command, 121 + resp_hdr.command); 119 122 return -EBADMSG; 120 123 } 121 124 122 - if (resp_hdr->status) { 123 - dev_err(devc, "cmd %u failed %u\n", req_hdr->command, resp_hdr->status); 125 + if (resp_hdr.status) { 126 + dev_err(devc, "cmd %u failed %u\n", req_hdr.command, resp_hdr.status); 124 127 return -EIO; 125 128 } 126 129 ··· 141 138 struct loader_xfer_dma_fragment *fragment, 142 139 void **dma_bufs, u32 fragment_size) 143 140 { 141 + dma_addr_t dma_addr; 144 142 int i; 145 143 146 144 for (i = 0; i < FRAGMENT_MAX_NUM; i++) { 147 145 if (dma_bufs[i]) { 148 - dma_free_coherent(dev->devc, fragment_size, dma_bufs[i], 149 - fragment->fragment_tbl[i].ddr_adrs); 146 + dma_addr = le64_to_cpu(fragment->fragment_tbl[i].ddr_adrs); 147 + dma_free_coherent(dev->devc, fragment_size, dma_bufs[i], dma_addr); 150 148 dma_bufs[i] = NULL; 151 149 } 152 150 } ··· 160 156 * @fragment: The ISHTP firmware fragment descriptor 161 157 * @dma_bufs: The array of DMA fragment buffers 162 158 * @fragment_size: The size of a single DMA fragment 159 + * @fragment_count: Number of fragments 163 160 * 164 161 * Return: 0 on success, negative error code on failure 165 162 */ 166 163 static int prepare_dma_bufs(struct ishtp_device *dev, 167 164 const struct firmware *ish_fw, 168 165 struct loader_xfer_dma_fragment *fragment, 169 - void **dma_bufs, u32 fragment_size) 166 + void **dma_bufs, u32 fragment_size, u32 fragment_count) 170 167 { 168 + dma_addr_t dma_addr; 171 169 u32 offset = 0; 170 + u32 length; 172 171 int i; 173 172 174 - for (i = 0; i < fragment->fragment_cnt && offset < ish_fw->size; i++) { 175 - dma_bufs[i] = dma_alloc_coherent(dev->devc, fragment_size, 176 - &fragment->fragment_tbl[i].ddr_adrs, GFP_KERNEL); 173 + for (i = 0; i < fragment_count && offset < ish_fw->size; i++) { 174 + dma_bufs[i] = 
dma_alloc_coherent(dev->devc, fragment_size, &dma_addr, GFP_KERNEL); 177 175 if (!dma_bufs[i]) 178 176 return -ENOMEM; 179 177 180 - fragment->fragment_tbl[i].length = clamp(ish_fw->size - offset, 0, fragment_size); 181 - fragment->fragment_tbl[i].fw_off = offset; 182 - memcpy(dma_bufs[i], ish_fw->data + offset, fragment->fragment_tbl[i].length); 178 + fragment->fragment_tbl[i].ddr_adrs = cpu_to_le64(dma_addr); 179 + length = clamp(ish_fw->size - offset, 0, fragment_size); 180 + fragment->fragment_tbl[i].length = cpu_to_le32(length); 181 + fragment->fragment_tbl[i].fw_off = cpu_to_le32(offset); 182 + memcpy(dma_bufs[i], ish_fw->data + offset, length); 183 183 clflush_cache_range(dma_bufs[i], fragment_size); 184 184 185 - offset += fragment->fragment_tbl[i].length; 185 + offset += length; 186 186 } 187 187 188 188 return 0; ··· 214 206 { 215 207 DEFINE_RAW_FLEX(struct loader_xfer_dma_fragment, fragment, fragment_tbl, FRAGMENT_MAX_NUM); 216 208 struct ishtp_device *dev = container_of(work, struct ishtp_device, work_fw_loader); 217 - struct loader_xfer_query query = { 218 - .header.command = LOADER_CMD_XFER_QUERY, 219 - }; 220 - struct loader_start start = { 221 - .header.command = LOADER_CMD_START, 222 - }; 209 + union loader_msg_header query_hdr = { .command = LOADER_CMD_XFER_QUERY, }; 210 + union loader_msg_header start_hdr = { .command = LOADER_CMD_START, }; 211 + union loader_msg_header fragment_hdr = { .command = LOADER_CMD_XFER_FRAGMENT, }; 212 + struct loader_xfer_query query = { .header = cpu_to_le32(query_hdr.val32), }; 213 + struct loader_start start = { .header = cpu_to_le32(start_hdr.val32), }; 223 214 union loader_recv_message recv_msg; 224 215 char *filename = dev->driver_data->fw_filename; 225 216 const struct firmware *ish_fw; 226 217 void *dma_bufs[FRAGMENT_MAX_NUM] = {}; 227 218 u32 fragment_size; 219 + u32 fragment_count; 228 220 int retry = ISHTP_LOADER_RETRY_TIMES; 229 221 int rv; 230 222 ··· 234 226 return; 235 227 } 236 228 237 - fragment->fragment.header.command = LOADER_CMD_XFER_FRAGMENT; 238 - fragment->fragment.xfer_mode = LOADER_XFER_MODE_DMA; 239 - fragment->fragment.is_last = 1; 240 - fragment->fragment.size = ish_fw->size; 229 + fragment->fragment.header = cpu_to_le32(fragment_hdr.val32); 230 + fragment->fragment.xfer_mode = cpu_to_le32(LOADER_XFER_MODE_DMA); 231 + fragment->fragment.is_last = cpu_to_le32(1); 232 + fragment->fragment.size = cpu_to_le32(ish_fw->size); 241 233 /* Calculate the size of a single DMA fragment */ 242 234 fragment_size = PFN_ALIGN(DIV_ROUND_UP(ish_fw->size, FRAGMENT_MAX_NUM)); 243 235 /* Calculate the count of DMA fragments */ 244 - fragment->fragment_cnt = DIV_ROUND_UP(ish_fw->size, fragment_size); 236 + fragment_count = DIV_ROUND_UP(ish_fw->size, fragment_size); 237 + fragment->fragment_cnt = cpu_to_le32(fragment_count); 245 238 246 - rv = prepare_dma_bufs(dev, ish_fw, fragment, dma_bufs, fragment_size); 239 + rv = prepare_dma_bufs(dev, ish_fw, fragment, dma_bufs, fragment_size, fragment_count); 247 240 if (rv) { 248 241 dev_err(dev->devc, "prepare DMA buffer failed.\n"); 249 242 goto out; 250 243 } 251 244 252 245 do { 253 - query.image_size = ish_fw->size; 246 + query.image_size = cpu_to_le32(ish_fw->size); 254 247 rv = loader_xfer_cmd(dev, &query, sizeof(query), recv_msg.raw_data, 255 248 sizeof(struct loader_xfer_query_ack)); 256 249 if (rv) ··· 264 255 recv_msg.query_ack.version_build); 265 256 266 257 rv = loader_xfer_cmd(dev, fragment, 267 - struct_size(fragment, fragment_tbl, fragment->fragment_cnt), 258 + 
struct_size(fragment, fragment_tbl, fragment_count), 268 259 recv_msg.raw_data, sizeof(struct loader_xfer_fragment_ack)); 269 260 if (rv) 270 261 continue; /* try again if failed */
+18 -13
drivers/hid/intel-ish-hid/ishtp/loader.h
··· 30 30 #define LOADER_XFER_MODE_DMA BIT(0) 31 31 32 32 /** 33 - * struct loader_msg_header - ISHTP firmware loader message header 33 + * union loader_msg_header - ISHTP firmware loader message header 34 34 * @command: Command type 35 35 * @is_response: Indicates if the message is a response 36 36 * @has_next: Indicates if there is a next message 37 37 * @reserved: Reserved for future use 38 38 * @status: Status of the message 39 + * @val32: entire header as a 32-bit value 39 40 */ 40 - struct loader_msg_header { 41 - __le32 command:7; 42 - __le32 is_response:1; 43 - __le32 has_next:1; 44 - __le32 reserved:15; 45 - __le32 status:8; 41 + union loader_msg_header { 42 + struct { 43 + __u32 command:7; 44 + __u32 is_response:1; 45 + __u32 has_next:1; 46 + __u32 reserved:15; 47 + __u32 status:8; 48 + }; 49 + __u32 val32; 46 50 }; 47 51 48 52 /** ··· 55 51 * @image_size: Size of the image 56 52 */ 57 53 struct loader_xfer_query { 58 - struct loader_msg_header header; 54 + __le32 header; 59 55 __le32 image_size; 60 56 }; 61 57 ··· 107 103 * @capability: Loader capability 108 104 */ 109 105 struct loader_xfer_query_ack { 110 - struct loader_msg_header header; 106 + __le32 header; 111 107 __le16 version_major; 112 108 __le16 version_minor; 113 109 __le16 version_hotfix; ··· 126 122 * @is_last: Is last 127 123 */ 128 124 struct loader_xfer_fragment { 129 - struct loader_msg_header header; 125 + __le32 header; 130 126 __le32 xfer_mode; 131 127 __le32 offset; 132 128 __le32 size; ··· 138 134 * @header: Header of the message 139 135 */ 140 136 struct loader_xfer_fragment_ack { 141 - struct loader_msg_header header; 137 + __le32 header; 142 138 }; 143 139 144 140 /** ··· 174 170 * @header: Header of the message 175 171 */ 176 172 struct loader_start { 177 - struct loader_msg_header header; 173 + __le32 header; 178 174 }; 179 175 180 176 /** ··· 182 178 * @header: Header of the message 183 179 */ 184 180 struct loader_start_ack { 185 - struct loader_msg_header header; 181 + __le32 header; 186 182 }; 187 183 188 184 union loader_recv_message { 185 + __le32 header; 189 186 struct loader_xfer_query_ack query_ack; 190 187 struct loader_xfer_fragment_ack fragment_ack; 191 188 struct loader_start_ack start_ack;
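The loader header is now carried on the wire as a plain __le32 and only interpreted through the host-endian union above. A minimal sketch of that pack/unpack pattern, with illustrative helper names that are not part of the driver:

/* Sketch only: bitfields are filled on the CPU side, byte order is fixed
 * once at the wire boundary via cpu_to_le32()/le32_to_cpu(). */
static __le32 pack_loader_header(u8 cmd)
{
	union loader_msg_header hdr = { .command = cmd };

	return cpu_to_le32(hdr.val32);
}

static bool loader_header_is_response(__le32 wire)
{
	union loader_msg_header hdr = { .val32 = le32_to_cpu(wire) };

	return hdr.is_response;
}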
+5 -14
drivers/input/touchscreen/silead.c
··· 71 71 struct regulator_bulk_data regulators[2]; 72 72 char fw_name[64]; 73 73 struct touchscreen_properties prop; 74 - u32 max_fingers; 75 74 u32 chip_id; 76 75 struct input_mt_pos pos[SILEAD_MAX_FINGERS]; 77 76 int slots[SILEAD_MAX_FINGERS]; ··· 135 136 touchscreen_parse_properties(data->input, true, &data->prop); 136 137 silead_apply_efi_fw_min_max(data); 137 138 138 - input_mt_init_slots(data->input, data->max_fingers, 139 + input_mt_init_slots(data->input, SILEAD_MAX_FINGERS, 139 140 INPUT_MT_DIRECT | INPUT_MT_DROP_UNUSED | 140 141 INPUT_MT_TRACK); 141 142 ··· 255 256 return; 256 257 } 257 258 258 - if (buf[0] > data->max_fingers) { 259 + if (buf[0] > SILEAD_MAX_FINGERS) { 259 260 dev_warn(dev, "More touches reported then supported %d > %d\n", 260 - buf[0], data->max_fingers); 261 - buf[0] = data->max_fingers; 261 + buf[0], SILEAD_MAX_FINGERS); 262 + buf[0] = SILEAD_MAX_FINGERS; 262 263 } 263 264 264 265 if (silead_ts_handle_pen_data(data, buf)) ··· 314 315 315 316 static int silead_ts_init(struct i2c_client *client) 316 317 { 317 - struct silead_ts_data *data = i2c_get_clientdata(client); 318 318 int error; 319 319 320 320 error = i2c_smbus_write_byte_data(client, SILEAD_REG_RESET, ··· 323 325 usleep_range(SILEAD_CMD_SLEEP_MIN, SILEAD_CMD_SLEEP_MAX); 324 326 325 327 error = i2c_smbus_write_byte_data(client, SILEAD_REG_TOUCH_NR, 326 - data->max_fingers); 328 + SILEAD_MAX_FINGERS); 327 329 if (error) 328 330 goto i2c_write_err; 329 331 usleep_range(SILEAD_CMD_SLEEP_MIN, SILEAD_CMD_SLEEP_MAX); ··· 588 590 struct device *dev = &client->dev; 589 591 const char *str; 590 592 int error; 591 - 592 - error = device_property_read_u32(dev, "silead,max-fingers", 593 - &data->max_fingers); 594 - if (error) { 595 - dev_dbg(dev, "Max fingers read error %d\n", error); 596 - data->max_fingers = 5; /* Most devices handle up-to 5 fingers */ 597 - } 598 593 599 594 error = device_property_read_string(dev, "firmware-name", &str); 600 595 if (!error)
+2 -1
drivers/iommu/amd/amd_iommu.h
··· 129 129 static inline bool amd_iommu_gt_ppr_supported(void) 130 130 { 131 131 return (check_feature(FEATURE_GT) && 132 - check_feature(FEATURE_PPR)); 132 + check_feature(FEATURE_PPR) && 133 + check_feature(FEATURE_EPHSUP)); 133 134 } 134 135 135 136 static inline u64 iommu_virt_to_phys(void *vaddr)
+9
drivers/iommu/amd/init.c
··· 1626 1626 } 1627 1627 } 1628 1628 1629 + static void __init free_sysfs(struct amd_iommu *iommu) 1630 + { 1631 + if (iommu->iommu.dev) { 1632 + iommu_device_unregister(&iommu->iommu); 1633 + iommu_device_sysfs_remove(&iommu->iommu); 1634 + } 1635 + } 1636 + 1629 1637 static void __init free_iommu_one(struct amd_iommu *iommu) 1630 1638 { 1639 + free_sysfs(iommu); 1631 1640 free_cwwb_sem(iommu); 1632 1641 free_command_buffer(iommu); 1633 1642 free_event_buffer(iommu);
+24 -24
drivers/iommu/amd/iommu.c
··· 2032 2032 struct protection_domain *domain) 2033 2033 { 2034 2034 struct amd_iommu *iommu = get_amd_iommu_from_dev_data(dev_data); 2035 - struct pci_dev *pdev; 2036 2035 int ret = 0; 2037 2036 2038 2037 /* Update data structures */ ··· 2046 2047 domain->dev_iommu[iommu->index] += 1; 2047 2048 domain->dev_cnt += 1; 2048 2049 2049 - pdev = dev_is_pci(dev_data->dev) ? to_pci_dev(dev_data->dev) : NULL; 2050 + /* Setup GCR3 table */ 2050 2051 if (pdom_is_sva_capable(domain)) { 2051 2052 ret = init_gcr3_table(dev_data, domain); 2052 2053 if (ret) 2053 2054 return ret; 2054 - 2055 - if (pdev) { 2056 - pdev_enable_caps(pdev); 2057 - 2058 - /* 2059 - * Device can continue to function even if IOPF 2060 - * enablement failed. Hence in error path just 2061 - * disable device PRI support. 2062 - */ 2063 - if (amd_iommu_iopf_add_device(iommu, dev_data)) 2064 - pdev_disable_cap_pri(pdev); 2065 - } 2066 - } else if (pdev) { 2067 - pdev_enable_cap_ats(pdev); 2068 2055 } 2069 - 2070 - /* Update device table */ 2071 - amd_iommu_dev_update_dte(dev_data, true); 2072 2056 2073 2057 return ret; 2074 2058 } ··· 2145 2163 2146 2164 do_detach(dev_data); 2147 2165 2166 + out: 2167 + spin_unlock(&dev_data->lock); 2168 + 2169 + spin_unlock_irqrestore(&domain->lock, flags); 2170 + 2148 2171 /* Remove IOPF handler */ 2149 2172 if (ppr) 2150 2173 amd_iommu_iopf_remove_device(iommu, dev_data); ··· 2157 2170 if (dev_is_pci(dev)) 2158 2171 pdev_disable_caps(to_pci_dev(dev)); 2159 2172 2160 - out: 2161 - spin_unlock(&dev_data->lock); 2162 - 2163 - spin_unlock_irqrestore(&domain->lock, flags); 2164 2173 } 2165 2174 2166 2175 static struct iommu_device *amd_iommu_probe_device(struct device *dev) ··· 2468 2485 struct iommu_dev_data *dev_data = dev_iommu_priv_get(dev); 2469 2486 struct protection_domain *domain = to_pdomain(dom); 2470 2487 struct amd_iommu *iommu = get_amd_iommu_from_dev(dev); 2488 + struct pci_dev *pdev; 2471 2489 int ret; 2472 2490 2473 2491 /* ··· 2501 2517 } 2502 2518 #endif 2503 2519 2504 - iommu_completion_wait(iommu); 2520 + pdev = dev_is_pci(dev_data->dev) ? to_pci_dev(dev_data->dev) : NULL; 2521 + if (pdev && pdom_is_sva_capable(domain)) { 2522 + pdev_enable_caps(pdev); 2523 + 2524 + /* 2525 + * Device can continue to function even if IOPF 2526 + * enablement failed. Hence in error path just 2527 + * disable device PRI support. 2528 + */ 2529 + if (amd_iommu_iopf_add_device(iommu, dev_data)) 2530 + pdev_disable_cap_pri(pdev); 2531 + } else if (pdev) { 2532 + pdev_enable_cap_ats(pdev); 2533 + } 2534 + 2535 + /* Update device table */ 2536 + amd_iommu_dev_update_dte(dev_data, true); 2505 2537 2506 2538 return ret; 2507 2539 }
+5 -20
drivers/iommu/amd/ppr.c
··· 222 222 if (iommu->iopf_queue) 223 223 return ret; 224 224 225 - snprintf(iommu->iopfq_name, sizeof(iommu->iopfq_name), 226 - "amdiommu-%#x-iopfq", 225 + snprintf(iommu->iopfq_name, sizeof(iommu->iopfq_name), "amdvi-%#x", 227 226 PCI_SEG_DEVID_TO_SBDF(iommu->pci_seg->id, iommu->devid)); 228 227 229 228 iommu->iopf_queue = iopf_queue_alloc(iommu->iopfq_name); ··· 248 249 int amd_iommu_iopf_add_device(struct amd_iommu *iommu, 249 250 struct iommu_dev_data *dev_data) 250 251 { 251 - unsigned long flags; 252 252 int ret = 0; 253 253 254 254 if (!dev_data->pri_enabled) 255 255 return ret; 256 256 257 - raw_spin_lock_irqsave(&iommu->lock, flags); 258 - 259 - if (!iommu->iopf_queue) { 260 - ret = -EINVAL; 261 - goto out_unlock; 262 - } 257 + if (!iommu->iopf_queue) 258 + return -EINVAL; 263 259 264 260 ret = iopf_queue_add_device(iommu->iopf_queue, dev_data->dev); 265 261 if (ret) 266 - goto out_unlock; 262 + return ret; 267 263 268 264 dev_data->ppr = true; 269 - 270 - out_unlock: 271 - raw_spin_unlock_irqrestore(&iommu->lock, flags); 272 - return ret; 265 + return 0; 273 266 } 274 267 275 268 /* Its assumed that caller has verified that device was added to iopf queue */ 276 269 void amd_iommu_iopf_remove_device(struct amd_iommu *iommu, 277 270 struct iommu_dev_data *dev_data) 278 271 { 279 - unsigned long flags; 280 - 281 - raw_spin_lock_irqsave(&iommu->lock, flags); 282 - 283 272 iopf_queue_remove_device(iommu->iopf_queue, dev_data->dev); 284 273 dev_data->ppr = false; 285 - 286 - raw_spin_unlock_irqrestore(&iommu->lock, flags); 287 274 }
+4 -4
drivers/iommu/dma-iommu.c
··· 686 686 687 687 /* Check the domain allows at least some access to the device... */ 688 688 if (map) { 689 - dma_addr_t base = dma_range_map_min(map); 690 - if (base > domain->geometry.aperture_end || 689 + if (dma_range_map_min(map) > domain->geometry.aperture_end || 691 690 dma_range_map_max(map) < domain->geometry.aperture_start) { 692 691 pr_warn("specified DMA range outside IOMMU capability\n"); 693 692 return -EFAULT; 694 693 } 695 - /* ...then finally give it a kicking to make sure it fits */ 696 - base_pfn = max(base, domain->geometry.aperture_start) >> order; 697 694 } 695 + /* ...then finally give it a kicking to make sure it fits */ 696 + base_pfn = max_t(unsigned long, base_pfn, 697 + domain->geometry.aperture_start >> order); 698 698 699 699 /* start_pfn is always nonzero for an already-initialised domain */ 700 700 mutex_lock(&cookie->mutex);
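The IOVA lower bound is now clamped against the aperture start unconditionally, not only when a dma-range map is present. With illustrative numbers (4 KiB granule, aperture starting at 0x20000000):

/* Illustrative values only, not taken from the commit. */
unsigned long order = 12;		/* __ffs(SZ_4K) */
unsigned long base_pfn = 1;		/* default lower bound */
u64 aperture_start = 0x20000000;

base_pfn = max_t(unsigned long, base_pfn, aperture_start >> order);
/* base_pfn == 0x20000 */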
+12 -32
drivers/irqchip/irq-gic-v3-its.c
··· 1846 1846 { 1847 1847 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 1848 1848 u32 event = its_get_event_id(d); 1849 - int ret = 0; 1850 1849 1851 1850 if (!info->map) 1852 1851 return -EINVAL; 1853 - 1854 - raw_spin_lock(&its_dev->event_map.vlpi_lock); 1855 1852 1856 1853 if (!its_dev->event_map.vm) { 1857 1854 struct its_vlpi_map *maps; 1858 1855 1859 1856 maps = kcalloc(its_dev->event_map.nr_lpis, sizeof(*maps), 1860 1857 GFP_ATOMIC); 1861 - if (!maps) { 1862 - ret = -ENOMEM; 1863 - goto out; 1864 - } 1858 + if (!maps) 1859 + return -ENOMEM; 1865 1860 1866 1861 its_dev->event_map.vm = info->map->vm; 1867 1862 its_dev->event_map.vlpi_maps = maps; 1868 1863 } else if (its_dev->event_map.vm != info->map->vm) { 1869 - ret = -EINVAL; 1870 - goto out; 1864 + return -EINVAL; 1871 1865 } 1872 1866 1873 1867 /* Get our private copy of the mapping information */ ··· 1893 1899 its_dev->event_map.nr_vlpis++; 1894 1900 } 1895 1901 1896 - out: 1897 - raw_spin_unlock(&its_dev->event_map.vlpi_lock); 1898 - return ret; 1902 + return 0; 1899 1903 } 1900 1904 1901 1905 static int its_vlpi_get(struct irq_data *d, struct its_cmd_info *info) 1902 1906 { 1903 1907 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 1904 1908 struct its_vlpi_map *map; 1905 - int ret = 0; 1906 - 1907 - raw_spin_lock(&its_dev->event_map.vlpi_lock); 1908 1909 1909 1910 map = get_vlpi_map(d); 1910 1911 1911 - if (!its_dev->event_map.vm || !map) { 1912 - ret = -EINVAL; 1913 - goto out; 1914 - } 1912 + if (!its_dev->event_map.vm || !map) 1913 + return -EINVAL; 1915 1914 1916 1915 /* Copy our mapping information to the incoming request */ 1917 1916 *info->map = *map; 1918 1917 1919 - out: 1920 - raw_spin_unlock(&its_dev->event_map.vlpi_lock); 1921 - return ret; 1918 + return 0; 1922 1919 } 1923 1920 1924 1921 static int its_vlpi_unmap(struct irq_data *d) 1925 1922 { 1926 1923 struct its_device *its_dev = irq_data_get_irq_chip_data(d); 1927 1924 u32 event = its_get_event_id(d); 1928 - int ret = 0; 1929 1925 1930 - raw_spin_lock(&its_dev->event_map.vlpi_lock); 1931 - 1932 - if (!its_dev->event_map.vm || !irqd_is_forwarded_to_vcpu(d)) { 1933 - ret = -EINVAL; 1934 - goto out; 1935 - } 1926 + if (!its_dev->event_map.vm || !irqd_is_forwarded_to_vcpu(d)) 1927 + return -EINVAL; 1936 1928 1937 1929 /* Drop the virtual mapping */ 1938 1930 its_send_discard(its_dev, event); ··· 1942 1962 kfree(its_dev->event_map.vlpi_maps); 1943 1963 } 1944 1964 1945 - out: 1946 - raw_spin_unlock(&its_dev->event_map.vlpi_lock); 1947 - return ret; 1965 + return 0; 1948 1966 } 1949 1967 1950 1968 static int its_vlpi_prop_update(struct irq_data *d, struct its_cmd_info *info) ··· 1969 1991 /* Need a v4 ITS */ 1970 1992 if (!is_v4(its_dev->its)) 1971 1993 return -EINVAL; 1994 + 1995 + guard(raw_spinlock_irq)(&its_dev->event_map.vlpi_lock); 1972 1996 1973 1997 /* Unmap request? */ 1974 1998 if (!info)
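The per-helper lock/unlock pairs move to a single scope-based guard taken by the caller; guard() from <linux/cleanup.h> releases the raw spinlock on every return path, which is why the out:/unlock boilerplate disappears above. A minimal sketch of the pattern, with an illustrative helper name:

static int example_vlpi_update(struct its_device *its_dev, bool bad_request)
{
	guard(raw_spinlock_irq)(&its_dev->event_map.vlpi_lock);

	if (bad_request)
		return -EINVAL;		/* lock released here */

	/* ... modify the VLPI mapping under the lock ... */
	return 0;			/* and here */
}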
+7 -2
drivers/irqchip/irq-riscv-intc.c
··· 253 253 static int __init riscv_intc_acpi_init(union acpi_subtable_headers *header, 254 254 const unsigned long end) 255 255 { 256 - struct fwnode_handle *fn; 257 256 struct acpi_madt_rintc *rintc; 257 + struct fwnode_handle *fn; 258 + int rc; 258 259 259 260 rintc = (struct acpi_madt_rintc *)header; 260 261 ··· 274 273 return -ENOMEM; 275 274 } 276 275 277 - return riscv_intc_init_common(fn, &riscv_intc_chip); 276 + rc = riscv_intc_init_common(fn, &riscv_intc_chip); 277 + if (rc) 278 + irq_domain_free_fwnode(fn); 279 + 280 + return rc; 278 281 } 279 282 280 283 IRQCHIP_ACPI_DECLARE(riscv_intc, ACPI_MADT_TYPE_RINTC, NULL,
+17 -17
drivers/irqchip/irq-sifive-plic.c
··· 85 85 struct plic_priv *priv; 86 86 }; 87 87 static int plic_parent_irq __ro_after_init; 88 - static bool plic_cpuhp_setup_done __ro_after_init; 88 + static bool plic_global_setup_done __ro_after_init; 89 89 static DEFINE_PER_CPU(struct plic_handler, plic_handlers); 90 90 91 91 static int plic_irq_set_type(struct irq_data *d, unsigned int type); ··· 487 487 unsigned long plic_quirks = 0; 488 488 struct plic_handler *handler; 489 489 u32 nr_irqs, parent_hwirq; 490 - struct irq_domain *domain; 491 490 struct plic_priv *priv; 492 491 irq_hw_number_t hwirq; 493 - bool cpuhp_setup; 494 492 495 493 if (is_of_node(dev->fwnode)) { 496 494 const struct of_device_id *id; ··· 547 549 continue; 548 550 } 549 551 550 - /* Find parent domain and register chained handler */ 551 - domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(), DOMAIN_BUS_ANY); 552 - if (!plic_parent_irq && domain) { 553 - plic_parent_irq = irq_create_mapping(domain, RV_IRQ_EXT); 554 - if (plic_parent_irq) 555 - irq_set_chained_handler(plic_parent_irq, plic_handle_irq); 556 - } 557 - 558 552 /* 559 553 * When running in M-mode we need to ignore the S-mode handler. 560 554 * Here we assume it always comes later, but that might be a ··· 587 597 goto fail_cleanup_contexts; 588 598 589 599 /* 590 - * We can have multiple PLIC instances so setup cpuhp state 600 + * We can have multiple PLIC instances so setup global state 591 601 * and register syscore operations only once after context 592 602 * handlers of all online CPUs are initialized. 593 603 */ 594 - if (!plic_cpuhp_setup_done) { 595 - cpuhp_setup = true; 604 + if (!plic_global_setup_done) { 605 + struct irq_domain *domain; 606 + bool global_setup = true; 607 + 596 608 for_each_online_cpu(cpu) { 597 609 handler = per_cpu_ptr(&plic_handlers, cpu); 598 610 if (!handler->present) { 599 - cpuhp_setup = false; 611 + global_setup = false; 600 612 break; 601 613 } 602 614 } 603 - if (cpuhp_setup) { 615 + 616 + if (global_setup) { 617 + /* Find parent domain and register chained handler */ 618 + domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(), DOMAIN_BUS_ANY); 619 + if (domain) 620 + plic_parent_irq = irq_create_mapping(domain, RV_IRQ_EXT); 621 + if (plic_parent_irq) 622 + irq_set_chained_handler(plic_parent_irq, plic_handle_irq); 623 + 604 624 cpuhp_setup_state(CPUHP_AP_IRQ_SIFIVE_PLIC_STARTING, 605 625 "irqchip/sifive/plic:starting", 606 626 plic_starting_cpu, plic_dying_cpu); 607 627 register_syscore_ops(&plic_irq_syscore_ops); 608 - plic_cpuhp_setup_done = true; 628 + plic_global_setup_done = true; 609 629 } 610 630 } 611 631
+3 -3
drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
··· 301 301 out_requeue: 302 302 if (bl && bl->nbufs) 303 303 ipu6_isys_buffer_list_queue(bl, 304 - (IPU6_ISYS_BUFFER_LIST_FL_INCOMING | 305 - error) ? 304 + IPU6_ISYS_BUFFER_LIST_FL_INCOMING | 305 + (error ? 306 306 IPU6_ISYS_BUFFER_LIST_FL_SET_STATE : 307 - 0, error ? VB2_BUF_STATE_ERROR : 307 + 0), error ? VB2_BUF_STATE_ERROR : 308 308 VB2_BUF_STATE_QUEUED); 309 309 flush_firmware_streamon_fail(stream); 310 310
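Only the parentheses change here: with the old grouping the ternary consumed the whole OR expression, so the INCOMING flag was dropped whenever the error path added SET_STATE. Illustrative flag values (not the driver's real definitions):

#define FL_INCOMING	0x1
#define FL_SET_STATE	0x2

int error = 1;

unsigned int before = (FL_INCOMING | error) ? FL_SET_STATE : 0;	/* 0x2: INCOMING lost */
unsigned int after  = FL_INCOMING | (error ? FL_SET_STATE : 0);	/* 0x3: both flags kept */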
+43 -28
drivers/media/pci/intel/ipu6/ipu6-isys.c
··· 678 678 container_of(asc, struct sensor_async_sd, asc); 679 679 int ret; 680 680 681 + if (s_asd->csi2.port >= isys->pdata->ipdata->csi2.nports) { 682 + dev_err(&isys->adev->auxdev.dev, "invalid csi2 port %u\n", 683 + s_asd->csi2.port); 684 + return -EINVAL; 685 + } 686 + 681 687 ret = ipu_bridge_instantiate_vcm(sd->dev); 682 688 if (ret) { 683 689 dev_err(&isys->adev->auxdev.dev, "instantiate vcm failed\n"); ··· 931 925 .resume = isys_resume, 932 926 }; 933 927 934 - static void isys_remove(struct auxiliary_device *auxdev) 928 + static void free_fw_msg_bufs(struct ipu6_isys *isys) 935 929 { 936 - struct ipu6_bus_device *adev = auxdev_to_adev(auxdev); 937 - struct ipu6_isys *isys = dev_get_drvdata(&auxdev->dev); 938 - struct ipu6_device *isp = adev->isp; 930 + struct device *dev = &isys->adev->auxdev.dev; 939 931 struct isys_fw_msgs *fwmsg, *safe; 940 - unsigned int i; 941 932 942 933 list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist, head) 943 - dma_free_attrs(&auxdev->dev, sizeof(struct isys_fw_msgs), 944 - fwmsg, fwmsg->dma_addr, 0); 934 + dma_free_attrs(dev, sizeof(struct isys_fw_msgs), fwmsg, 935 + fwmsg->dma_addr, 0); 945 936 946 937 list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist_fw, head) 947 - dma_free_attrs(&auxdev->dev, sizeof(struct isys_fw_msgs), 948 - fwmsg, fwmsg->dma_addr, 0); 949 - 950 - isys_unregister_devices(isys); 951 - isys_notifier_cleanup(isys); 952 - 953 - cpu_latency_qos_remove_request(&isys->pm_qos); 954 - 955 - if (!isp->secure_mode) { 956 - ipu6_cpd_free_pkg_dir(adev); 957 - ipu6_buttress_unmap_fw_image(adev, &adev->fw_sgt); 958 - release_firmware(adev->fw); 959 - } 960 - 961 - for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) 962 - mutex_destroy(&isys->streams[i].mutex); 963 - 964 - isys_iwake_watermark_cleanup(isys); 965 - mutex_destroy(&isys->stream_mutex); 966 - mutex_destroy(&isys->mutex); 938 + dma_free_attrs(dev, sizeof(struct isys_fw_msgs), fwmsg, 939 + fwmsg->dma_addr, 0); 967 940 } 968 941 969 942 static int alloc_fw_msg_bufs(struct ipu6_isys *isys, int amount) ··· 1125 1140 1126 1141 ret = isys_register_devices(isys); 1127 1142 if (ret) 1128 - goto out_remove_pkg_dir_shared_buffer; 1143 + goto free_fw_msg_bufs; 1129 1144 1130 1145 ipu6_mmu_hw_cleanup(adev->mmu); 1131 1146 1132 1147 return 0; 1133 1148 1149 + free_fw_msg_bufs: 1150 + free_fw_msg_bufs(isys); 1134 1151 out_remove_pkg_dir_shared_buffer: 1135 1152 if (!isp->secure_mode) 1136 1153 ipu6_cpd_free_pkg_dir(adev); ··· 1152 1165 ipu6_mmu_hw_cleanup(adev->mmu); 1153 1166 1154 1167 return ret; 1168 + } 1169 + 1170 + static void isys_remove(struct auxiliary_device *auxdev) 1171 + { 1172 + struct ipu6_bus_device *adev = auxdev_to_adev(auxdev); 1173 + struct ipu6_isys *isys = dev_get_drvdata(&auxdev->dev); 1174 + struct ipu6_device *isp = adev->isp; 1175 + unsigned int i; 1176 + 1177 + free_fw_msg_bufs(isys); 1178 + 1179 + isys_unregister_devices(isys); 1180 + isys_notifier_cleanup(isys); 1181 + 1182 + cpu_latency_qos_remove_request(&isys->pm_qos); 1183 + 1184 + if (!isp->secure_mode) { 1185 + ipu6_cpd_free_pkg_dir(adev); 1186 + ipu6_buttress_unmap_fw_image(adev, &adev->fw_sgt); 1187 + release_firmware(adev->fw); 1188 + } 1189 + 1190 + for (i = 0; i < IPU6_ISYS_MAX_STREAMS; i++) 1191 + mutex_destroy(&isys->streams[i].mutex); 1192 + 1193 + isys_iwake_watermark_cleanup(isys); 1194 + mutex_destroy(&isys->stream_mutex); 1195 + mutex_destroy(&isys->mutex); 1155 1196 } 1156 1197 1157 1198 struct fwmsg {
+1 -4
drivers/media/pci/intel/ipu6/ipu6.c
··· 285 285 #define IPU6_ISYS_CSI2_NPORTS 4 286 286 #define IPU6SE_ISYS_CSI2_NPORTS 4 287 287 #define IPU6_TGL_ISYS_CSI2_NPORTS 8 288 - #define IPU6EP_MTL_ISYS_CSI2_NPORTS 4 288 + #define IPU6EP_MTL_ISYS_CSI2_NPORTS 6 289 289 290 290 static void ipu6_internal_pdata_init(struct ipu6_device *isp) 291 291 { ··· 726 726 727 727 pm_runtime_forbid(&pdev->dev); 728 728 pm_runtime_get_noresume(&pdev->dev); 729 - 730 - pci_release_regions(pdev); 731 - pci_disable_device(pdev); 732 729 733 730 release_firmware(isp->cpd_fw); 734 731
+4 -1
drivers/media/pci/intel/ivsc/mei_csi.c
··· 677 677 return -ENODEV; 678 678 679 679 ret = ipu_bridge_init(&ipu->dev, ipu_bridge_parse_ssdb); 680 + put_device(&ipu->dev); 680 681 if (ret < 0) 681 682 return ret; 682 - if (WARN_ON(!dev_fwnode(dev))) 683 + if (!dev_fwnode(dev)) { 684 + dev_err(dev, "mei-csi probed without device fwnode!\n"); 683 685 return -ENXIO; 686 + } 684 687 685 688 csi = devm_kzalloc(dev, sizeof(struct mei_csi), GFP_KERNEL); 686 689 if (!csi)
+4 -3
drivers/media/pci/mgb4/mgb4_core.c
··· 642 642 struct mgb4_dev *mgbdev = pci_get_drvdata(pdev); 643 643 int i; 644 644 645 - #ifdef CONFIG_DEBUG_FS 646 - debugfs_remove_recursive(mgbdev->debugfs); 647 - #endif 648 645 #if IS_REACHABLE(CONFIG_HWMON) 649 646 hwmon_device_unregister(mgbdev->hwmon_dev); 650 647 #endif ··· 655 658 for (i = 0; i < MGB4_VIN_DEVICES; i++) 656 659 if (mgbdev->vin[i]) 657 660 mgb4_vin_free(mgbdev->vin[i]); 661 + 662 + #ifdef CONFIG_DEBUG_FS 663 + debugfs_remove_recursive(mgbdev->debugfs); 664 + #endif 658 665 659 666 device_remove_groups(&mgbdev->pdev->dev, mgb4_pci_groups); 660 667 free_spi(mgbdev);
+1 -1
drivers/media/pci/saa7134/saa7134-cards.c
··· 5152 5152 }, 5153 5153 }, 5154 5154 [SAA7134_BOARD_AVERMEDIA_STUDIO_507UA] = { 5155 - /* Andy Shevchenko <andy@smile.org.ua> */ 5155 + /* Andy Shevchenko <andy@kernel.org> */ 5156 5156 .name = "Avermedia AVerTV Studio 507UA", 5157 5157 .audio_clock = 0x00187de7, 5158 5158 .tuner_type = TUNER_PHILIPS_FM1216ME_MK3, /* Should be MK5 */
+10 -2
drivers/net/dsa/qca/qca8k-leds.c
··· 431 431 init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d", 432 432 priv->internal_mdio_bus->id, 433 433 port_num); 434 - if (!init_data.devicename) 434 + if (!init_data.devicename) { 435 + fwnode_handle_put(led); 436 + fwnode_handle_put(leds); 435 437 return -ENOMEM; 438 + } 436 439 437 440 ret = devm_led_classdev_register_ext(priv->dev, &port_led->cdev, &init_data); 438 441 if (ret) ··· 444 441 kfree(init_data.devicename); 445 442 } 446 443 444 + fwnode_handle_put(leds); 447 445 return 0; 448 446 } 449 447 ··· 475 471 * the correct port for LED setup. 476 472 */ 477 473 ret = qca8k_parse_port_leds(priv, port, qca8k_port_to_phy(port_num)); 478 - if (ret) 474 + if (ret) { 475 + fwnode_handle_put(port); 476 + fwnode_handle_put(ports); 479 477 return ret; 478 + } 480 479 } 481 480 481 + fwnode_handle_put(ports); 482 482 return 0; 483 483 }
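Each fwnode handle obtained while walking the LED nodes carries a reference that must also be dropped on the error paths, which is what the added fwnode_handle_put() calls enforce. A generic sketch of that rule, using illustrative node and property names:

static int example_parse_leds(struct fwnode_handle *port)
{
	struct fwnode_handle *leds, *led;

	leds = fwnode_get_named_child_node(port, "leds");
	if (!leds)
		return 0;

	fwnode_for_each_available_child_node(leds, led) {
		if (!fwnode_property_present(led, "reg")) {	/* any failure path */
			fwnode_handle_put(led);		/* ref taken by the iterator */
			fwnode_handle_put(leds);	/* ref taken by the lookup */
			return -EINVAL;
		}
	}

	fwnode_handle_put(leds);
	return 0;
}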
+51
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1434 1434 atomic_t refcnt; 1435 1435 }; 1436 1436 1437 + /* Compat version of hwrm_port_phy_qcfg_output capped at 96 bytes. The 1438 + * first 95 bytes are identical to hwrm_port_phy_qcfg_output in bnxt_hsi.h. 1439 + * The last valid byte in the compat version is different. 1440 + */ 1441 + struct hwrm_port_phy_qcfg_output_compat { 1442 + __le16 error_code; 1443 + __le16 req_type; 1444 + __le16 seq_id; 1445 + __le16 resp_len; 1446 + u8 link; 1447 + u8 active_fec_signal_mode; 1448 + __le16 link_speed; 1449 + u8 duplex_cfg; 1450 + u8 pause; 1451 + __le16 support_speeds; 1452 + __le16 force_link_speed; 1453 + u8 auto_mode; 1454 + u8 auto_pause; 1455 + __le16 auto_link_speed; 1456 + __le16 auto_link_speed_mask; 1457 + u8 wirespeed; 1458 + u8 lpbk; 1459 + u8 force_pause; 1460 + u8 module_status; 1461 + __le32 preemphasis; 1462 + u8 phy_maj; 1463 + u8 phy_min; 1464 + u8 phy_bld; 1465 + u8 phy_type; 1466 + u8 media_type; 1467 + u8 xcvr_pkg_type; 1468 + u8 eee_config_phy_addr; 1469 + u8 parallel_detect; 1470 + __le16 link_partner_adv_speeds; 1471 + u8 link_partner_adv_auto_mode; 1472 + u8 link_partner_adv_pause; 1473 + __le16 adv_eee_link_speed_mask; 1474 + __le16 link_partner_adv_eee_link_speed_mask; 1475 + __le32 xcvr_identifier_type_tx_lpi_timer; 1476 + __le16 fec_cfg; 1477 + u8 duplex_state; 1478 + u8 option_flags; 1479 + char phy_vendor_name[16]; 1480 + char phy_vendor_partnumber[16]; 1481 + __le16 support_pam4_speeds; 1482 + __le16 force_pam4_link_speed; 1483 + __le16 auto_pam4_link_speed_mask; 1484 + u8 link_partner_pam4_adv_speeds; 1485 + u8 valid; 1486 + }; 1487 + 1437 1488 struct bnxt_link_info { 1438 1489 u8 phy_type; 1439 1490 u8 media_type;
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
··· 680 680 req_type); 681 681 else if (rc && rc != HWRM_ERR_CODE_PF_UNAVAILABLE) 682 682 hwrm_err(bp, ctx, "hwrm req_type 0x%x seq id 0x%x error 0x%x\n", 683 - req_type, token->seq_id, rc); 683 + req_type, le16_to_cpu(ctx->req->seq_id), rc); 684 684 rc = __hwrm_to_stderr(rc); 685 685 exit: 686 686 if (token)
+10 -2
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
··· 950 950 struct hwrm_fwd_resp_input *req; 951 951 int rc; 952 952 953 - if (BNXT_FWD_RESP_SIZE_ERR(msg_size)) 953 + if (BNXT_FWD_RESP_SIZE_ERR(msg_size)) { 954 + netdev_warn_once(bp->dev, "HWRM fwd response too big (%d bytes)\n", 955 + msg_size); 954 956 return -EINVAL; 957 + } 955 958 956 959 rc = hwrm_req_init(bp, req, HWRM_FWD_RESP); 957 960 if (!rc) { ··· 1088 1085 rc = bnxt_hwrm_exec_fwd_resp( 1089 1086 bp, vf, sizeof(struct hwrm_port_phy_qcfg_input)); 1090 1087 } else { 1091 - struct hwrm_port_phy_qcfg_output phy_qcfg_resp = {0}; 1088 + struct hwrm_port_phy_qcfg_output_compat phy_qcfg_resp = {}; 1092 1089 struct hwrm_port_phy_qcfg_input *phy_qcfg_req; 1093 1090 1094 1091 phy_qcfg_req = ··· 1099 1096 mutex_unlock(&bp->link_lock); 1100 1097 phy_qcfg_resp.resp_len = cpu_to_le16(sizeof(phy_qcfg_resp)); 1101 1098 phy_qcfg_resp.seq_id = phy_qcfg_req->seq_id; 1099 + /* New SPEEDS2 fields are beyond the legacy structure, so 1100 + * clear the SPEEDS2_SUPPORTED flag. 1101 + */ 1102 + phy_qcfg_resp.option_flags &= 1103 + ~PORT_PHY_QCAPS_RESP_FLAGS2_SPEEDS2_SUPPORTED; 1102 1104 phy_qcfg_resp.valid = 1; 1103 1105 1104 1106 if (vf->flags & BNXT_VF_LINK_UP) {
+5 -6
drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
··· 272 272 pg_info->page_offset; 273 273 memcpy(skb->data, va, MIN_SKB_SIZE); 274 274 skb_put(skb, MIN_SKB_SIZE); 275 + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, 276 + pg_info->page, 277 + pg_info->page_offset + MIN_SKB_SIZE, 278 + len - MIN_SKB_SIZE, 279 + LIO_RXBUFFER_SZ); 275 280 } 276 - 277 - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, 278 - pg_info->page, 279 - pg_info->page_offset + MIN_SKB_SIZE, 280 - len - MIN_SKB_SIZE, 281 - LIO_RXBUFFER_SZ); 282 281 } else { 283 282 struct octeon_skb_page_info *pg_info = 284 283 ((struct octeon_skb_page_info *)(skb->cb));
+5 -3
drivers/net/ethernet/google/gve/gve_rx_dqo.c
··· 647 647 skb_set_hash(skb, le32_to_cpu(compl_desc->hash), hash_type); 648 648 } 649 649 650 - static void gve_rx_free_skb(struct gve_rx_ring *rx) 650 + static void gve_rx_free_skb(struct napi_struct *napi, struct gve_rx_ring *rx) 651 651 { 652 652 if (!rx->ctx.skb_head) 653 653 return; 654 654 655 + if (rx->ctx.skb_head == napi->skb) 656 + napi->skb = NULL; 655 657 dev_kfree_skb_any(rx->ctx.skb_head); 656 658 rx->ctx.skb_head = NULL; 657 659 rx->ctx.skb_tail = NULL; ··· 952 950 953 951 err = gve_rx_dqo(napi, rx, compl_desc, complq->head, rx->q_num); 954 952 if (err < 0) { 955 - gve_rx_free_skb(rx); 953 + gve_rx_free_skb(napi, rx); 956 954 u64_stats_update_begin(&rx->statss); 957 955 if (err == -ENOMEM) 958 956 rx->rx_skb_alloc_fail++; ··· 995 993 996 994 /* gve_rx_complete_skb() will consume skb if successful */ 997 995 if (gve_rx_complete_skb(rx, napi, compl_desc, feat) != 0) { 998 - gve_rx_free_skb(rx); 996 + gve_rx_free_skb(napi, rx); 999 997 u64_stats_update_begin(&rx->statss); 1000 998 rx->rx_desc_err_dropped_pkt++; 1001 999 u64_stats_update_end(&rx->statss);
+5 -15
drivers/net/ethernet/google/gve/gve_tx_dqo.c
··· 555 555 if (unlikely(skb_shinfo(skb)->gso_size < GVE_TX_MIN_TSO_MSS_DQO)) 556 556 return -1; 557 557 558 + if (!(skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))) 559 + return -EINVAL; 560 + 558 561 /* Needed because we will modify header. */ 559 562 err = skb_cow_head(skb, 0); 560 563 if (err < 0) 561 564 return err; 562 565 563 566 tcp = tcp_hdr(skb); 564 - 565 - /* Remove payload length from checksum. */ 566 567 paylen = skb->len - skb_transport_offset(skb); 567 - 568 - switch (skb_shinfo(skb)->gso_type) { 569 - case SKB_GSO_TCPV4: 570 - case SKB_GSO_TCPV6: 571 - csum_replace_by_diff(&tcp->check, 572 - (__force __wsum)htonl(paylen)); 573 - 574 - /* Compute length of segmentation header. */ 575 - header_len = skb_tcp_all_headers(skb); 576 - break; 577 - default: 578 - return -EINVAL; 579 - } 568 + csum_replace_by_diff(&tcp->check, (__force __wsum)htonl(paylen)); 569 + header_len = skb_tcp_all_headers(skb); 580 570 581 571 if (unlikely(header_len > GVE_TX_MAX_HDR_SIZE_DQO)) 582 572 return -EINVAL;
+4
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 3535 3535 ret = hns3_alloc_and_attach_buffer(ring, i); 3536 3536 if (ret) 3537 3537 goto out_buffer_fail; 3538 + 3539 + if (!(i % HNS3_RESCHED_BD_NUM)) 3540 + cond_resched(); 3538 3541 } 3539 3542 3540 3543 return 0; ··· 5110 5107 } 5111 5108 5112 5109 u64_stats_init(&priv->ring[i].syncp); 5110 + cond_resched(); 5113 5111 } 5114 5112 5115 5113 return 0;
+2
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
··· 214 214 #define HNS3_CQ_MODE_EQE 1U 215 215 #define HNS3_CQ_MODE_CQE 0U 216 216 217 + #define HNS3_RESCHED_BD_NUM 1024 218 + 217 219 enum hns3_pkt_l2t_type { 218 220 HNS3_L2_TYPE_UNICAST, 219 221 HNS3_L2_TYPE_MULTICAST,
+16 -5
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 3086 3086 3087 3087 static void hclge_update_link_status(struct hclge_dev *hdev) 3088 3088 { 3089 - struct hnae3_handle *rhandle = &hdev->vport[0].roce; 3090 3089 struct hnae3_handle *handle = &hdev->vport[0].nic; 3091 - struct hnae3_client *rclient = hdev->roce_client; 3092 3090 struct hnae3_client *client = hdev->nic_client; 3093 3091 int state; 3094 3092 int ret; ··· 3110 3112 3111 3113 client->ops->link_status_change(handle, state); 3112 3114 hclge_config_mac_tnl_int(hdev, state); 3113 - if (rclient && rclient->ops->link_status_change) 3114 - rclient->ops->link_status_change(rhandle, state); 3115 + 3116 + if (test_bit(HCLGE_STATE_ROCE_REGISTERED, &hdev->state)) { 3117 + struct hnae3_handle *rhandle = &hdev->vport[0].roce; 3118 + struct hnae3_client *rclient = hdev->roce_client; 3119 + 3120 + if (rclient && rclient->ops->link_status_change) 3121 + rclient->ops->link_status_change(rhandle, 3122 + state); 3123 + } 3115 3124 3116 3125 hclge_push_link_status(hdev); 3117 3126 } ··· 11324 11319 return ret; 11325 11320 } 11326 11321 11322 + static bool hclge_uninit_need_wait(struct hclge_dev *hdev) 11323 + { 11324 + return test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) || 11325 + test_bit(HCLGE_STATE_LINK_UPDATING, &hdev->state); 11326 + } 11327 + 11327 11328 static void hclge_uninit_client_instance(struct hnae3_client *client, 11328 11329 struct hnae3_ae_dev *ae_dev) 11329 11330 { ··· 11338 11327 11339 11328 if (hdev->roce_client) { 11340 11329 clear_bit(HCLGE_STATE_ROCE_REGISTERED, &hdev->state); 11341 - while (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state)) 11330 + while (hclge_uninit_need_wait(hdev)) 11342 11331 msleep(HCLGE_WAIT_RESET_DONE); 11343 11332 11344 11333 hdev->roce_client->ops->uninit_instance(&vport->roce, 0);
+2 -3
drivers/net/ethernet/intel/igc/igc_main.c
··· 7032 7032 device_set_wakeup_enable(&adapter->pdev->dev, 7033 7033 adapter->flags & IGC_FLAG_WOL_SUPPORTED); 7034 7034 7035 + igc_ptp_init(adapter); 7036 + 7035 7037 igc_tsn_clear_schedule(adapter); 7036 7038 7037 7039 /* reset the hardware with the new settings */ ··· 7054 7052 7055 7053 /* Check if Media Autosense is enabled */ 7056 7054 adapter->ei = *ei; 7057 - 7058 - /* do hw tstamp init after resetting */ 7059 - igc_ptp_init(adapter); 7060 7055 7061 7056 /* print pcie link status and MAC address */ 7062 7057 pcie_print_link_status(pdev);
+1 -2
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 4895 4895 4896 4896 /* Verify if UDP port is being offloaded by HW */ 4897 4897 if (mlx5_vxlan_lookup_port(priv->mdev->vxlan, port)) 4898 - return features; 4898 + return vxlan_features_check(skb, features); 4899 4899 4900 4900 #if IS_ENABLED(CONFIG_GENEVE) 4901 4901 /* Support Geneve offload for default UDP port */ ··· 4921 4921 struct mlx5e_priv *priv = netdev_priv(netdev); 4922 4922 4923 4923 features = vlan_features_check(skb, features); 4924 - features = vxlan_features_check(skb, features); 4925 4924 4926 4925 /* Validate if the tunneled packet is being offloaded by HW */ 4927 4926 if (skb->encapsulation &&
+1 -3
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 304 304 if (ret) 305 305 return ret; 306 306 307 - if (qcq->napi.poll) 308 - napi_enable(&qcq->napi); 309 - 310 307 if (qcq->flags & IONIC_QCQ_F_INTR) { 308 + napi_enable(&qcq->napi); 311 309 irq_set_affinity_hint(qcq->intr.vector, 312 310 &qcq->intr.affinity_mask); 313 311 ionic_intr_mask(idev->intr_ctrl, qcq->intr.index,
+4
drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
··· 93 93 bool has_emac_ge_3; 94 94 const char *link_clk_name; 95 95 bool has_integrated_pcs; 96 + u32 dma_addr_width; 96 97 struct dwmac4_addrs dwmac4_addrs; 97 98 }; 98 99 ··· 277 276 .has_emac_ge_3 = true, 278 277 .link_clk_name = "phyaux", 279 278 .has_integrated_pcs = true, 279 + .dma_addr_width = 36, 280 280 .dwmac4_addrs = { 281 281 .dma_chan = 0x00008100, 282 282 .dma_chan_offset = 0x1000, ··· 847 845 plat_dat->flags |= STMMAC_FLAG_RX_CLK_RUNS_IN_LPI; 848 846 if (data->has_integrated_pcs) 849 847 plat_dat->flags |= STMMAC_FLAG_HAS_INTEGRATED_PCS; 848 + if (data->dma_addr_width) 849 + plat_dat->host_dma_width = data->dma_addr_width; 850 850 851 851 if (ethqos->serdes_phy) { 852 852 plat_dat->serdes_powerup = qcom_ethqos_serdes_powerup;
+11 -14
drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
··· 343 343 struct tc_cbs_qopt_offload *qopt) 344 344 { 345 345 u32 tx_queues_count = priv->plat->tx_queues_to_use; 346 + s64 port_transmit_rate_kbps; 346 347 u32 queue = qopt->queue; 347 - u32 ptr, speed_div; 348 348 u32 mode_to_use; 349 349 u64 value; 350 + u32 ptr; 350 351 int ret; 351 352 352 353 /* Queue 0 is not AVB capable */ ··· 356 355 if (!priv->dma_cap.av) 357 356 return -EOPNOTSUPP; 358 357 358 + port_transmit_rate_kbps = qopt->idleslope - qopt->sendslope; 359 + 359 360 /* Port Transmit Rate and Speed Divider */ 360 - switch (priv->speed) { 361 + switch (div_s64(port_transmit_rate_kbps, 1000)) { 361 362 case SPEED_10000: 362 - ptr = 32; 363 - speed_div = 10000000; 364 - break; 365 363 case SPEED_5000: 366 364 ptr = 32; 367 - speed_div = 5000000; 368 365 break; 369 366 case SPEED_2500: 370 - ptr = 8; 371 - speed_div = 2500000; 372 - break; 373 367 case SPEED_1000: 374 368 ptr = 8; 375 - speed_div = 1000000; 376 369 break; 377 370 case SPEED_100: 378 371 ptr = 4; 379 - speed_div = 100000; 380 372 break; 381 373 default: 382 - return -EOPNOTSUPP; 374 + netdev_err(priv->dev, 375 + "Invalid portTransmitRate %lld (idleSlope - sendSlope)\n", 376 + port_transmit_rate_kbps); 377 + return -EINVAL; 383 378 } 384 379 385 380 mode_to_use = priv->plat->tx_queues_cfg[queue].mode_to_use; ··· 395 398 } 396 399 397 400 /* Final adjustments for HW */ 398 - value = div_s64(qopt->idleslope * 1024ll * ptr, speed_div); 401 + value = div_s64(qopt->idleslope * 1024ll * ptr, port_transmit_rate_kbps); 399 402 priv->plat->tx_queues_cfg[queue].idle_slope = value & GENMASK(31, 0); 400 403 401 - value = div_s64(-qopt->sendslope * 1024ll * ptr, speed_div); 404 + value = div_s64(-qopt->sendslope * 1024ll * ptr, port_transmit_rate_kbps); 402 405 priv->plat->tx_queues_cfg[queue].send_slope = value & GENMASK(31, 0); 403 406 404 407 value = qopt->hicredit * 1024ll * 8;
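The credit-based shaper values are now scaled by the port transmit rate derived from the slopes themselves (idleSlope - sendSlope) instead of the current link speed. Worked example with made-up slope values:

/* idleSlope 98000 kbps and sendSlope -902000 kbps imply a 1 Gb/s port,
 * so ptr = 8 and the programmed idle slope is
 * 98000 * 1024 * 8 / 1000000 = 802 (truncating division). */
s64 idleslope = 98000, sendslope = -902000;
s64 port_transmit_rate_kbps = idleslope - sendslope;	/* 1000000 */
u32 ptr = 8;						/* 1 Gb/s / 2.5 Gb/s bucket */
u64 idle_slope = div_s64(idleslope * 1024ll * ptr, port_transmit_rate_kbps);	/* 802 */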
+6 -4
drivers/net/geneve.c
··· 815 815 struct geneve_dev *geneve, 816 816 const struct ip_tunnel_info *info) 817 817 { 818 + bool inner_proto_inherit = geneve->cfg.inner_proto_inherit; 818 819 bool xnet = !net_eq(geneve->net, dev_net(geneve->dev)); 819 820 struct geneve_sock *gs4 = rcu_dereference(geneve->sock4); 820 821 const struct ip_tunnel_key *key = &info->key; ··· 827 826 __be16 sport; 828 827 int err; 829 828 830 - if (!skb_vlan_inet_prepare(skb)) 829 + if (!skb_vlan_inet_prepare(skb, inner_proto_inherit)) 831 830 return -EINVAL; 832 831 833 832 if (!gs4) ··· 909 908 } 910 909 911 910 err = geneve_build_skb(&rt->dst, skb, info, xnet, sizeof(struct iphdr), 912 - geneve->cfg.inner_proto_inherit); 911 + inner_proto_inherit); 913 912 if (unlikely(err)) 914 913 return err; 915 914 ··· 926 925 struct geneve_dev *geneve, 927 926 const struct ip_tunnel_info *info) 928 927 { 928 + bool inner_proto_inherit = geneve->cfg.inner_proto_inherit; 929 929 bool xnet = !net_eq(geneve->net, dev_net(geneve->dev)); 930 930 struct geneve_sock *gs6 = rcu_dereference(geneve->sock6); 931 931 const struct ip_tunnel_key *key = &info->key; ··· 937 935 __be16 sport; 938 936 int err; 939 937 940 - if (!skb_vlan_inet_prepare(skb)) 938 + if (!skb_vlan_inet_prepare(skb, inner_proto_inherit)) 941 939 return -EINVAL; 942 940 943 941 if (!gs6) ··· 999 997 ttl = ttl ? : ip6_dst_hoplimit(dst); 1000 998 } 1001 999 err = geneve_build_skb(dst, skb, info, xnet, sizeof(struct ipv6hdr), 1002 - geneve->cfg.inner_proto_inherit); 1000 + inner_proto_inherit); 1003 1001 if (unlikely(err)) 1004 1002 return err; 1005 1003
+2 -1
drivers/net/netdevsim/netdev.c
··· 324 324 325 325 rcu_read_lock(); 326 326 peer = rcu_dereference(nsim->peer); 327 - iflink = peer ? READ_ONCE(peer->netdev->ifindex) : 0; 327 + iflink = peer ? READ_ONCE(peer->netdev->ifindex) : 328 + READ_ONCE(dev->ifindex); 328 329 rcu_read_unlock(); 329 330 330 331 return iflink;
+1 -2
drivers/net/phy/sfp.c
··· 2429 2429 2430 2430 /* Handle remove event globally, it resets this state machine */ 2431 2431 if (event == SFP_E_REMOVE) { 2432 - if (sfp->sm_mod_state > SFP_MOD_PROBE) 2433 - sfp_sm_mod_remove(sfp); 2432 + sfp_sm_mod_remove(sfp); 2434 2433 sfp_sm_mod_next(sfp, SFP_MOD_EMPTY, 0); 2435 2434 return; 2436 2435 }
+3 -3
drivers/nvme/host/fabrics.c
··· 180 180 cmd.prop_get.offset = cpu_to_le32(off); 181 181 182 182 ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0, 183 - NVME_QID_ANY, 0); 183 + NVME_QID_ANY, NVME_SUBMIT_RESERVED); 184 184 185 185 if (ret >= 0) 186 186 *val = le64_to_cpu(res.u64); ··· 226 226 cmd.prop_get.offset = cpu_to_le32(off); 227 227 228 228 ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0, 229 - NVME_QID_ANY, 0); 229 + NVME_QID_ANY, NVME_SUBMIT_RESERVED); 230 230 231 231 if (ret >= 0) 232 232 *val = le64_to_cpu(res.u64); ··· 271 271 cmd.prop_set.value = cpu_to_le64(val); 272 272 273 273 ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, NULL, NULL, 0, 274 - NVME_QID_ANY, 0); 274 + NVME_QID_ANY, NVME_SUBMIT_RESERVED); 275 275 if (unlikely(ret)) 276 276 dev_err(ctrl->device, 277 277 "Property Set error: %d, offset %#x\n",
+1 -1
drivers/nvme/host/pr.c
··· 77 77 if (nvme_is_path_error(nvme_sc)) 78 78 return PR_STS_PATH_FAILED; 79 79 80 - switch (nvme_sc) { 80 + switch (nvme_sc & 0x7ff) { 81 81 case NVME_SC_SUCCESS: 82 82 return PR_STS_SUCCESS; 83 83 case NVME_SC_RESERVATION_CONFLICT:
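The comparison now masks the NVMe status down to its status-code and status-code-type bits; on my reading of the status layout (treat this as an assumption), the upper bits carry CRD/MORE/DNR hints that would otherwise keep, say, a reservation conflict from matching its case label. Hypothetical value:

/* Assumed layout: SC in bits 7:0, SCT in bits 10:8, hint bits above. */
u16 nvme_sc = 0x4083;			/* hypothetical: conflict plus a hint bit */
u16 code = nvme_sc & 0x7ff;		/* 0x083, now matches the conflict case */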
-4
drivers/pci/access.c
··· 289 289 { 290 290 might_sleep(); 291 291 292 - lock_map_acquire(&dev->cfg_access_lock); 293 - 294 292 raw_spin_lock_irq(&pci_lock); 295 293 if (dev->block_cfg_access) 296 294 pci_wait_cfg(dev); ··· 343 345 raw_spin_unlock_irqrestore(&pci_lock, flags); 344 346 345 347 wake_up_all(&pci_cfg_wait); 346 - 347 - lock_map_release(&dev->cfg_access_lock); 348 348 } 349 349 EXPORT_SYMBOL_GPL(pci_cfg_access_unlock); 350 350
-1
drivers/pci/pci.c
··· 4883 4883 */ 4884 4884 int pci_bridge_secondary_bus_reset(struct pci_dev *dev) 4885 4885 { 4886 - lock_map_assert_held(&dev->cfg_access_lock); 4887 4886 pcibios_reset_secondary_bus(dev); 4888 4887 4889 4888 return pci_bridge_wait_for_secondary_bus(dev, "bus reset");
-3
drivers/pci/probe.c
··· 2546 2546 dev->dev.dma_mask = &dev->dma_mask; 2547 2547 dev->dev.dma_parms = &dev->dma_parms; 2548 2548 dev->dev.coherent_dma_mask = 0xffffffffull; 2549 - lockdep_register_key(&dev->cfg_access_key); 2550 - lockdep_init_map(&dev->cfg_access_lock, dev_name(&dev->dev), 2551 - &dev->cfg_access_key, 0); 2552 2549 2553 2550 dma_set_max_seg_size(&dev->dev, 65536); 2554 2551 dma_set_seg_boundary(&dev->dev, 0xffffffff);
+1
drivers/platform/x86/Kconfig
··· 136 136 config YT2_1380 137 137 tristate "Lenovo Yoga Tablet 2 1380 fast charge driver" 138 138 depends on SERIAL_DEV_BUS 139 + depends on EXTCON 139 140 depends on ACPI 140 141 help 141 142 Say Y here to enable support for the custom fast charging protocol
+43 -7
drivers/platform/x86/amd/hsmp.c
··· 907 907 return ret; 908 908 } 909 909 910 + /* 911 + * This check is only needed for backward compatibility of previous platforms. 912 + * All new platforms are expected to support ACPI based probing. 913 + */ 914 + static bool legacy_hsmp_support(void) 915 + { 916 + if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) 917 + return false; 918 + 919 + switch (boot_cpu_data.x86) { 920 + case 0x19: 921 + switch (boot_cpu_data.x86_model) { 922 + case 0x00 ... 0x1F: 923 + case 0x30 ... 0x3F: 924 + case 0x90 ... 0x9F: 925 + case 0xA0 ... 0xAF: 926 + return true; 927 + default: 928 + return false; 929 + } 930 + case 0x1A: 931 + switch (boot_cpu_data.x86_model) { 932 + case 0x00 ... 0x1F: 933 + return true; 934 + default: 935 + return false; 936 + } 937 + default: 938 + return false; 939 + } 940 + 941 + return false; 942 + } 943 + 910 944 static int __init hsmp_plt_init(void) 911 945 { 912 946 int ret = -ENODEV; 913 - 914 - if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD || boot_cpu_data.x86 < 0x19) { 915 - pr_err("HSMP is not supported on Family:%x model:%x\n", 916 - boot_cpu_data.x86, boot_cpu_data.x86_model); 917 - return ret; 918 - } 919 947 920 948 /* 921 949 * amd_nb_num() returns number of SMN/DF interfaces present in the system ··· 958 930 return ret; 959 931 960 932 if (!plat_dev.is_acpi_device) { 961 - ret = hsmp_plat_dev_register(); 933 + if (legacy_hsmp_support()) { 934 + /* Not ACPI device, but supports HSMP, register a plat_dev */ 935 + ret = hsmp_plat_dev_register(); 936 + } else { 937 + /* Not ACPI, Does not support HSMP */ 938 + pr_info("HSMP is not supported on Family:%x model:%x\n", 939 + boot_cpu_data.x86, boot_cpu_data.x86_model); 940 + ret = -ENODEV; 941 + } 962 942 if (ret) 963 943 platform_driver_unregister(&amd_hsmp_driver); 964 944 }
+39 -62
drivers/platform/x86/dell/dell-smbios-base.c
··· 11 11 */ 12 12 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 13 14 + #include <linux/container_of.h> 14 15 #include <linux/kernel.h> 15 16 #include <linux/module.h> 16 17 #include <linux/capability.h> ··· 26 25 static int da_num_tokens; 27 26 static struct platform_device *platform_device; 28 27 static struct calling_interface_token *da_tokens; 29 - static struct device_attribute *token_location_attrs; 30 - static struct device_attribute *token_value_attrs; 28 + static struct token_sysfs_data *token_entries; 31 29 static struct attribute **token_attrs; 32 30 static DEFINE_MUTEX(smbios_mutex); 31 + 32 + struct token_sysfs_data { 33 + struct device_attribute location_attr; 34 + struct device_attribute value_attr; 35 + struct calling_interface_token *token; 36 + }; 33 37 34 38 struct smbios_device { 35 39 struct list_head list; ··· 422 416 } 423 417 } 424 418 425 - static int match_attribute(struct device *dev, 426 - struct device_attribute *attr) 427 - { 428 - int i; 429 - 430 - for (i = 0; i < da_num_tokens * 2; i++) { 431 - if (!token_attrs[i]) 432 - continue; 433 - if (strcmp(token_attrs[i]->name, attr->attr.name) == 0) 434 - return i/2; 435 - } 436 - dev_dbg(dev, "couldn't match: %s\n", attr->attr.name); 437 - return -EINVAL; 438 - } 439 - 440 419 static ssize_t location_show(struct device *dev, 441 420 struct device_attribute *attr, char *buf) 442 421 { 443 - int i; 422 + struct token_sysfs_data *data = container_of(attr, struct token_sysfs_data, location_attr); 444 423 445 424 if (!capable(CAP_SYS_ADMIN)) 446 425 return -EPERM; 447 426 448 - i = match_attribute(dev, attr); 449 - if (i > 0) 450 - return sysfs_emit(buf, "%08x", da_tokens[i].location); 451 - return 0; 427 + return sysfs_emit(buf, "%08x", data->token->location); 452 428 } 453 429 454 430 static ssize_t value_show(struct device *dev, 455 431 struct device_attribute *attr, char *buf) 456 432 { 457 - int i; 433 + struct token_sysfs_data *data = container_of(attr, struct token_sysfs_data, value_attr); 458 434 459 435 if (!capable(CAP_SYS_ADMIN)) 460 436 return -EPERM; 461 437 462 - i = match_attribute(dev, attr); 463 - if (i > 0) 464 - return sysfs_emit(buf, "%08x", da_tokens[i].value); 465 - return 0; 438 + return sysfs_emit(buf, "%08x", data->token->value); 466 439 } 467 440 468 441 static struct attribute_group smbios_attribute_group = { ··· 458 473 { 459 474 char *location_name; 460 475 char *value_name; 461 - size_t size; 462 476 int ret; 463 477 int i, j; 464 478 465 - /* (number of tokens + 1 for null terminated */ 466 - size = sizeof(struct device_attribute) * (da_num_tokens + 1); 467 - token_location_attrs = kzalloc(size, GFP_KERNEL); 468 - if (!token_location_attrs) 479 + token_entries = kcalloc(da_num_tokens, sizeof(*token_entries), GFP_KERNEL); 480 + if (!token_entries) 469 481 return -ENOMEM; 470 - token_value_attrs = kzalloc(size, GFP_KERNEL); 471 - if (!token_value_attrs) 472 - goto out_allocate_value; 473 482 474 483 /* need to store both location and value + terminator*/ 475 - size = sizeof(struct attribute *) * ((2 * da_num_tokens) + 1); 476 - token_attrs = kzalloc(size, GFP_KERNEL); 484 + token_attrs = kcalloc((2 * da_num_tokens) + 1, sizeof(*token_attrs), GFP_KERNEL); 477 485 if (!token_attrs) 478 486 goto out_allocate_attrs; 479 487 ··· 474 496 /* skip empty */ 475 497 if (da_tokens[i].tokenID == 0) 476 498 continue; 499 + 500 + token_entries[i].token = &da_tokens[i]; 501 + 477 502 /* add location */ 478 503 location_name = kasprintf(GFP_KERNEL, "%04x_location", 479 504 da_tokens[i].tokenID); 480 505 
if (location_name == NULL) 481 506 goto out_unwind_strings; 482 - sysfs_attr_init(&token_location_attrs[i].attr); 483 - token_location_attrs[i].attr.name = location_name; 484 - token_location_attrs[i].attr.mode = 0444; 485 - token_location_attrs[i].show = location_show; 486 - token_attrs[j++] = &token_location_attrs[i].attr; 507 + 508 + sysfs_attr_init(&token_entries[i].location_attr.attr); 509 + token_entries[i].location_attr.attr.name = location_name; 510 + token_entries[i].location_attr.attr.mode = 0444; 511 + token_entries[i].location_attr.show = location_show; 512 + token_attrs[j++] = &token_entries[i].location_attr.attr; 487 513 488 514 /* add value */ 489 515 value_name = kasprintf(GFP_KERNEL, "%04x_value", 490 516 da_tokens[i].tokenID); 491 - if (value_name == NULL) 492 - goto loop_fail_create_value; 493 - sysfs_attr_init(&token_value_attrs[i].attr); 494 - token_value_attrs[i].attr.name = value_name; 495 - token_value_attrs[i].attr.mode = 0444; 496 - token_value_attrs[i].show = value_show; 497 - token_attrs[j++] = &token_value_attrs[i].attr; 498 - continue; 517 + if (!value_name) { 518 + kfree(location_name); 519 + goto out_unwind_strings; 520 + } 499 521 500 - loop_fail_create_value: 501 - kfree(location_name); 502 - goto out_unwind_strings; 522 + sysfs_attr_init(&token_entries[i].value_attr.attr); 523 + token_entries[i].value_attr.attr.name = value_name; 524 + token_entries[i].value_attr.attr.mode = 0444; 525 + token_entries[i].value_attr.show = value_show; 526 + token_attrs[j++] = &token_entries[i].value_attr.attr; 503 527 } 504 528 smbios_attribute_group.attrs = token_attrs; 505 529 ··· 512 532 513 533 out_unwind_strings: 514 534 while (i--) { 515 - kfree(token_location_attrs[i].attr.name); 516 - kfree(token_value_attrs[i].attr.name); 535 + kfree(token_entries[i].location_attr.attr.name); 536 + kfree(token_entries[i].value_attr.attr.name); 517 537 } 518 538 kfree(token_attrs); 519 539 out_allocate_attrs: 520 - kfree(token_value_attrs); 521 - out_allocate_value: 522 - kfree(token_location_attrs); 540 + kfree(token_entries); 523 541 524 542 return -ENOMEM; 525 543 } ··· 529 551 sysfs_remove_group(&pdev->dev.kobj, 530 552 &smbios_attribute_group); 531 553 for (i = 0; i < da_num_tokens; i++) { 532 - kfree(token_location_attrs[i].attr.name); 533 - kfree(token_value_attrs[i].attr.name); 554 + kfree(token_entries[i].location_attr.attr.name); 555 + kfree(token_entries[i].value_attr.attr.name); 534 556 } 535 557 kfree(token_attrs); 536 - kfree(token_value_attrs); 537 - kfree(token_location_attrs); 558 + kfree(token_entries); 538 559 } 539 560 540 561 static int __init dell_smbios_init(void)
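The show() callbacks no longer search the attribute array by name: each token embeds its own device_attribute in a wrapper struct, so container_of() maps the attribute pointer straight back to its entry. A self-contained sketch of that pattern, with illustrative example_* names:

struct example_entry {
	struct device_attribute value_attr;
	u32 value;
};

static ssize_t example_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	/* attr is the member embedded in example_entry, so pointer
	 * arithmetic recovers the owning entry without a table lookup. */
	struct example_entry *e = container_of(attr, struct example_entry, value_attr);

	return sysfs_emit(buf, "%08x", e->value);
}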
+1 -58
drivers/platform/x86/touchscreen_dmi.c
··· 34 34 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 35 35 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 36 36 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 37 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 38 37 PROPERTY_ENTRY_BOOL("silead,home-button"), 39 38 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-archos-101-cesium-educ.fw"), 40 39 { } ··· 48 49 PROPERTY_ENTRY_U32("touchscreen-size-x", 1850), 49 50 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 50 51 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 51 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 52 52 PROPERTY_ENTRY_BOOL("silead,home-button"), 53 53 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-bush-bush-windows-tablet.fw"), 54 54 { } ··· 77 79 PROPERTY_ENTRY_U32("touchscreen-size-y", 1148), 78 80 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 79 81 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-chuwi-hi8-air.fw"), 80 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 81 82 { } 82 83 }; 83 84 ··· 92 95 PROPERTY_ENTRY_U32("touchscreen-size-y", 1148), 93 96 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 94 97 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-chuwi-hi8-pro.fw"), 95 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 96 98 PROPERTY_ENTRY_BOOL("silead,home-button"), 97 99 { } 98 100 }; ··· 119 123 PROPERTY_ENTRY_U32("touchscreen-fuzz-x", 5), 120 124 PROPERTY_ENTRY_U32("touchscreen-fuzz-y", 4), 121 125 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10-air.fw"), 122 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 123 126 PROPERTY_ENTRY_BOOL("silead,home-button"), 124 127 { } 125 128 }; ··· 134 139 PROPERTY_ENTRY_U32("touchscreen-size-x", 1908), 135 140 PROPERTY_ENTRY_U32("touchscreen-size-y", 1270), 136 141 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10plus.fw"), 137 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 138 142 PROPERTY_ENTRY_BOOL("silead,home-button"), 139 143 PROPERTY_ENTRY_BOOL("silead,pen-supported"), 140 144 PROPERTY_ENTRY_U32("silead,pen-resolution-x", 8), ··· 165 171 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 166 172 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hi10-pro.fw"), 167 173 PROPERTY_ENTRY_U32_ARRAY("silead,efi-fw-min-max", chuwi_hi10_pro_efi_min_max), 168 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 169 174 PROPERTY_ENTRY_BOOL("silead,home-button"), 170 175 PROPERTY_ENTRY_BOOL("silead,pen-supported"), 171 176 PROPERTY_ENTRY_U32("silead,pen-resolution-x", 8), ··· 194 201 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 195 202 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 196 203 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-chuwi-hibook.fw"), 197 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 198 204 PROPERTY_ENTRY_BOOL("silead,home-button"), 199 205 { } 200 206 }; ··· 219 227 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 220 228 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 221 229 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-chuwi-vi8.fw"), 222 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 223 230 PROPERTY_ENTRY_BOOL("silead,home-button"), 224 231 { } 225 232 }; ··· 246 255 PROPERTY_ENTRY_U32("touchscreen-size-x", 1858), 247 256 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 248 257 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-chuwi-vi10.fw"), 249 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 250 258 PROPERTY_ENTRY_BOOL("silead,home-button"), 251 259 { } 252 260 }; ··· 261 271 PROPERTY_ENTRY_U32("touchscreen-size-x", 2040), 262 272 PROPERTY_ENTRY_U32("touchscreen-size-y", 1524), 263 273 PROPERTY_ENTRY_STRING("firmware-name", 
"gsl1680-chuwi-surbook-mini.fw"), 264 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 265 274 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 266 275 { } 267 276 }; ··· 278 289 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 279 290 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 280 291 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-connect-tablet9.fw"), 281 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 282 292 { } 283 293 }; 284 294 ··· 294 306 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 295 307 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 296 308 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-csl-panther-tab-hd.fw"), 297 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 298 309 { } 299 310 }; 300 311 ··· 309 322 PROPERTY_ENTRY_U32("touchscreen-size-y", 896), 310 323 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 311 324 PROPERTY_ENTRY_STRING("firmware-name", "gsl3670-cube-iwork8-air.fw"), 312 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 313 325 { } 314 326 }; 315 327 ··· 332 346 PROPERTY_ENTRY_U32("touchscreen-size-x", 1961), 333 347 PROPERTY_ENTRY_U32("touchscreen-size-y", 1513), 334 348 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-cube-knote-i1101.fw"), 335 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 336 349 PROPERTY_ENTRY_BOOL("silead,home-button"), 337 350 { } 338 351 }; ··· 345 360 PROPERTY_ENTRY_U32("touchscreen-size-x", 890), 346 361 PROPERTY_ENTRY_U32("touchscreen-size-y", 630), 347 362 PROPERTY_ENTRY_STRING("firmware-name", "gsl1686-dexp-ursus-7w.fw"), 348 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 349 363 PROPERTY_ENTRY_BOOL("silead,home-button"), 350 364 { } 351 365 }; ··· 360 376 PROPERTY_ENTRY_U32("touchscreen-size-x", 1720), 361 377 PROPERTY_ENTRY_U32("touchscreen-size-y", 1137), 362 378 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-dexp-ursus-kx210i.fw"), 363 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 364 379 PROPERTY_ENTRY_BOOL("silead,home-button"), 365 380 { } 366 381 }; ··· 374 391 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), 375 392 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 376 393 PROPERTY_ENTRY_STRING("firmware-name", "gsl1686-digma_citi_e200.fw"), 377 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 378 394 PROPERTY_ENTRY_BOOL("silead,home-button"), 379 395 { } 380 396 }; ··· 432 450 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 433 451 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 434 452 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-irbis_tw90.fw"), 435 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 436 453 PROPERTY_ENTRY_BOOL("silead,home-button"), 437 454 { } 438 455 }; ··· 447 466 PROPERTY_ENTRY_U32("touchscreen-size-x", 1960), 448 467 PROPERTY_ENTRY_U32("touchscreen-size-y", 1510), 449 468 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-irbis-tw118.fw"), 450 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 451 469 { } 452 470 }; 453 471 ··· 463 483 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 464 484 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 465 485 PROPERTY_ENTRY_STRING("firmware-name", "gsl3670-itworks-tw891.fw"), 466 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 467 486 { } 468 487 }; 469 488 ··· 475 496 PROPERTY_ENTRY_U32("touchscreen-size-x", 1980), 476 497 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), 477 498 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-jumper-ezpad-6-pro.fw"), 478 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 479 499 PROPERTY_ENTRY_BOOL("silead,home-button"), 480 500 { } 481 501 }; ··· 489 511 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), 490 512 
PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-jumper-ezpad-6-pro-b.fw"), 491 513 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 492 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 493 514 PROPERTY_ENTRY_BOOL("silead,home-button"), 494 515 { } 495 516 }; ··· 504 527 PROPERTY_ENTRY_U32("touchscreen-size-x", 1950), 505 528 PROPERTY_ENTRY_U32("touchscreen-size-y", 1525), 506 529 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-jumper-ezpad-6-m4.fw"), 507 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 508 530 PROPERTY_ENTRY_BOOL("silead,home-button"), 509 531 { } 510 532 }; ··· 520 544 PROPERTY_ENTRY_U32("touchscreen-size-y", 1526), 521 545 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 522 546 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-jumper-ezpad-7.fw"), 523 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 524 547 PROPERTY_ENTRY_BOOL("silead,stuck-controller-bug"), 525 548 { } 526 549 }; ··· 536 561 PROPERTY_ENTRY_U32("touchscreen-size-y", 1138), 537 562 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 538 563 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-jumper-ezpad-mini3.fw"), 539 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 540 564 { } 541 565 }; 542 566 ··· 552 578 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 553 579 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 554 580 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-mpman-converter9.fw"), 555 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 556 581 { } 557 582 }; 558 583 ··· 567 594 PROPERTY_ENTRY_U32("touchscreen-size-y", 1150), 568 595 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 569 596 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-mpman-mpwin895cl.fw"), 570 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 571 597 PROPERTY_ENTRY_BOOL("silead,home-button"), 572 598 { } 573 599 }; ··· 583 611 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 584 612 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 585 613 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-myria-my8307.fw"), 586 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 587 614 PROPERTY_ENTRY_BOOL("silead,home-button"), 588 615 { } 589 616 }; ··· 599 628 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 600 629 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 601 630 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-onda-obook-20-plus.fw"), 602 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 603 631 PROPERTY_ENTRY_BOOL("silead,home-button"), 604 632 { } 605 633 }; ··· 615 645 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 616 646 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 617 647 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-onda-v80-plus-v3.fw"), 618 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 619 648 PROPERTY_ENTRY_BOOL("silead,home-button"), 620 649 { } 621 650 }; ··· 638 669 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 639 670 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 640 671 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-onda-v820w-32g.fw"), 641 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 642 672 PROPERTY_ENTRY_BOOL("silead,home-button"), 643 673 { } 644 674 }; ··· 655 687 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 656 688 PROPERTY_ENTRY_STRING("firmware-name", 657 689 "gsl3676-onda-v891-v5.fw"), 658 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 659 690 PROPERTY_ENTRY_BOOL("silead,home-button"), 660 691 { } 661 692 }; ··· 670 703 PROPERTY_ENTRY_U32("touchscreen-size-x", 1676), 671 704 PROPERTY_ENTRY_U32("touchscreen-size-y", 1130), 672 705 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-onda-v891w-v1.fw"), 673 - 
PROPERTY_ENTRY_U32("silead,max-fingers", 10), 674 706 PROPERTY_ENTRY_BOOL("silead,home-button"), 675 707 { } 676 708 }; ··· 686 720 PROPERTY_ENTRY_U32("touchscreen-size-y", 1135), 687 721 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 688 722 PROPERTY_ENTRY_STRING("firmware-name", "gsl3676-onda-v891w-v3.fw"), 689 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 690 723 PROPERTY_ENTRY_BOOL("silead,home-button"), 691 724 { } 692 725 }; ··· 724 759 PROPERTY_ENTRY_U32("touchscreen-size-x", 1984), 725 760 PROPERTY_ENTRY_U32("touchscreen-size-y", 1532), 726 761 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-pipo-w11.fw"), 727 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 728 762 PROPERTY_ENTRY_BOOL("silead,home-button"), 729 763 { } 730 764 }; ··· 739 775 PROPERTY_ENTRY_U32("touchscreen-size-x", 1915), 740 776 PROPERTY_ENTRY_U32("touchscreen-size-y", 1269), 741 777 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-positivo-c4128b.fw"), 742 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 743 778 { } 744 779 }; 745 780 ··· 754 791 PROPERTY_ENTRY_U32("touchscreen-size-y", 1146), 755 792 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 756 793 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-pov-mobii-wintab-p800w-v20.fw"), 757 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 758 794 PROPERTY_ENTRY_BOOL("silead,home-button"), 759 795 { } 760 796 }; ··· 770 808 PROPERTY_ENTRY_U32("touchscreen-size-y", 1148), 771 809 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 772 810 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-pov-mobii-wintab-p800w.fw"), 773 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 774 811 PROPERTY_ENTRY_BOOL("silead,home-button"), 775 812 { } 776 813 }; ··· 786 825 PROPERTY_ENTRY_U32("touchscreen-size-y", 1520), 787 826 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 788 827 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-pov-mobii-wintab-p1006w-v10.fw"), 789 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 790 828 PROPERTY_ENTRY_BOOL("silead,home-button"), 791 829 { } 792 830 }; ··· 802 842 PROPERTY_ENTRY_U32("touchscreen-size-y", 1144), 803 843 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 804 844 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-predia-basic.fw"), 805 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 806 845 PROPERTY_ENTRY_BOOL("silead,home-button"), 807 846 { } 808 847 }; ··· 818 859 PROPERTY_ENTRY_U32("touchscreen-size-y", 874), 819 860 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 820 861 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rca-cambio-w101-v2.fw"), 821 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 822 862 { } 823 863 }; 824 864 ··· 832 874 PROPERTY_ENTRY_U32("touchscreen-size-y", 1140), 833 875 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 834 876 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-rwc-nanote-p8.fw"), 835 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 836 877 { } 837 878 }; 838 879 ··· 847 890 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 848 891 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 849 892 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-schneider-sct101ctm.fw"), 850 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 851 893 PROPERTY_ENTRY_BOOL("silead,home-button"), 852 894 { } 853 895 }; ··· 862 906 PROPERTY_ENTRY_U32("touchscreen-size-x", 1723), 863 907 PROPERTY_ENTRY_U32("touchscreen-size-y", 1077), 864 908 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-globalspace-solt-ivw116.fw"), 865 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 866 909 PROPERTY_ENTRY_BOOL("silead,home-button"), 867 910 { } 868 911 }; ··· 878 923 
PROPERTY_ENTRY_U32("touchscreen-size-y", 1270), 879 924 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 880 925 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-techbite-arc-11-6.fw"), 881 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 882 926 { } 883 927 }; 884 928 ··· 893 939 PROPERTY_ENTRY_U32("touchscreen-size-y", 1264), 894 940 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 895 941 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-teclast-tbook11.fw"), 896 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 897 942 PROPERTY_ENTRY_BOOL("silead,home-button"), 898 943 { } 899 944 }; ··· 918 965 PROPERTY_ENTRY_U32("touchscreen-size-y", 1264), 919 966 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 920 967 PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-teclast-x16-plus.fw"), 921 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 922 968 PROPERTY_ENTRY_BOOL("silead,home-button"), 923 969 { } 924 970 }; ··· 940 988 PROPERTY_ENTRY_U32("touchscreen-size-x", 1980), 941 989 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), 942 990 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-teclast-x3-plus.fw"), 943 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 944 991 PROPERTY_ENTRY_BOOL("silead,home-button"), 945 992 { } 946 993 }; ··· 955 1004 PROPERTY_ENTRY_BOOL("touchscreen-inverted-x"), 956 1005 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 957 1006 PROPERTY_ENTRY_STRING("firmware-name", "gsl1686-teclast_x98plus2.fw"), 958 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 959 1007 { } 960 1008 }; 961 1009 ··· 968 1018 PROPERTY_ENTRY_U32("touchscreen-size-y", 1530), 969 1019 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 970 1020 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-trekstor-primebook-c11.fw"), 971 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 972 1021 PROPERTY_ENTRY_BOOL("silead,home-button"), 973 1022 { } 974 1023 }; ··· 981 1032 PROPERTY_ENTRY_U32("touchscreen-size-x", 2624), 982 1033 PROPERTY_ENTRY_U32("touchscreen-size-y", 1920), 983 1034 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-trekstor-primebook-c13.fw"), 984 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 985 1035 PROPERTY_ENTRY_BOOL("silead,home-button"), 986 1036 { } 987 1037 }; ··· 994 1046 PROPERTY_ENTRY_U32("touchscreen-size-x", 2500), 995 1047 PROPERTY_ENTRY_U32("touchscreen-size-y", 1900), 996 1048 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-trekstor-primetab-t13b.fw"), 997 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 998 1049 PROPERTY_ENTRY_BOOL("silead,home-button"), 999 1050 PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 1000 1051 { } ··· 1021 1074 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 1022 1075 PROPERTY_ENTRY_U32("touchscreen-inverted-y", 1), 1023 1076 PROPERTY_ENTRY_STRING("firmware-name", "gsl3670-surftab-twin-10-1-st10432-8.fw"), 1024 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 1025 1077 PROPERTY_ENTRY_BOOL("silead,home-button"), 1026 1078 { } 1027 1079 }; ··· 1036 1090 PROPERTY_ENTRY_U32("touchscreen-size-x", 884), 1037 1091 PROPERTY_ENTRY_U32("touchscreen-size-y", 632), 1038 1092 PROPERTY_ENTRY_STRING("firmware-name", "gsl1686-surftab-wintron70-st70416-6.fw"), 1039 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 1040 1093 PROPERTY_ENTRY_BOOL("silead,home-button"), 1041 1094 { } 1042 1095 }; ··· 1052 1107 PROPERTY_ENTRY_U32("touchscreen-fuzz-y", 6), 1053 1108 PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"), 1054 1109 PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-viglen-connect-10.fw"), 1055 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 1056 1110 PROPERTY_ENTRY_BOOL("silead,home-button"), 1057 1111 
{ } 1058 1112 }; ··· 1065 1121 PROPERTY_ENTRY_U32("touchscreen-size-x", 1920), 1066 1122 PROPERTY_ENTRY_U32("touchscreen-size-y", 1280), 1067 1123 PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-vinga-twizzle_j116.fw"), 1068 - PROPERTY_ENTRY_U32("silead,max-fingers", 10), 1069 1124 PROPERTY_ENTRY_BOOL("silead,home-button"), 1070 1125 { } 1071 1126 }; ··· 1850 1907 u32 u32val; 1851 1908 int i, ret; 1852 1909 1853 - strscpy(orig_str, str, sizeof(orig_str)); 1910 + strscpy(orig_str, str); 1854 1911 1855 1912 /* 1856 1913 * str is part of the static_command_line from init/main.c and poking
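The tail of the hunk above drops the explicit size argument from strscpy(). As a rough userspace illustration of why the two-argument form is attractive, the sketch below infers the destination size from the array type, so it cannot drift out of sync with a manually passed length; bounded_copy() and copy_str() are made-up names, not kernel APIs.

#include <stdio.h>
#include <string.h>

static size_t bounded_copy(char *dst, const char *src, size_t size)
{
        size_t len = strnlen(src, size - 1);

        memcpy(dst, src, len);
        dst[len] = '\0';
        return len;
}

/* Size taken from the array itself, mirroring the two-argument strscpy(dst, src). */
#define copy_str(dst, src) bounded_copy(dst, src, sizeof(dst))

int main(void)
{
        char buf[16];

        copy_str(buf, "i8042.noaux i8042.nomux i8042.reset");
        printf("%s\n", buf);            /* safely truncated to 15 chars + NUL */
        return 0;
}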
+22 -9
drivers/scsi/device_handler/scsi_dh_alua.c
··· 414 414 } 415 415 } 416 416 417 - static enum scsi_disposition alua_check_sense(struct scsi_device *sdev, 418 - struct scsi_sense_hdr *sense_hdr) 417 + static void alua_handle_state_transition(struct scsi_device *sdev) 419 418 { 420 419 struct alua_dh_data *h = sdev->handler_data; 421 420 struct alua_port_group *pg; 422 421 422 + rcu_read_lock(); 423 + pg = rcu_dereference(h->pg); 424 + if (pg) 425 + pg->state = SCSI_ACCESS_STATE_TRANSITIONING; 426 + rcu_read_unlock(); 427 + alua_check(sdev, false); 428 + } 429 + 430 + static enum scsi_disposition alua_check_sense(struct scsi_device *sdev, 431 + struct scsi_sense_hdr *sense_hdr) 432 + { 423 433 switch (sense_hdr->sense_key) { 424 434 case NOT_READY: 425 435 if (sense_hdr->asc == 0x04 && sense_hdr->ascq == 0x0a) { 426 436 /* 427 437 * LUN Not Accessible - ALUA state transition 428 438 */ 429 - rcu_read_lock(); 430 - pg = rcu_dereference(h->pg); 431 - if (pg) 432 - pg->state = SCSI_ACCESS_STATE_TRANSITIONING; 433 - rcu_read_unlock(); 434 - alua_check(sdev, false); 439 + alua_handle_state_transition(sdev); 435 440 return NEEDS_RETRY; 436 441 } 437 442 break; 438 443 case UNIT_ATTENTION: 444 + if (sense_hdr->asc == 0x04 && sense_hdr->ascq == 0x0a) { 445 + /* 446 + * LUN Not Accessible - ALUA state transition 447 + */ 448 + alua_handle_state_transition(sdev); 449 + return NEEDS_RETRY; 450 + } 439 451 if (sense_hdr->asc == 0x29 && sense_hdr->ascq == 0x00) { 440 452 /* 441 453 * Power On, Reset, or Bus Device Reset. ··· 514 502 515 503 retval = scsi_test_unit_ready(sdev, ALUA_FAILOVER_TIMEOUT * HZ, 516 504 ALUA_FAILOVER_RETRIES, &sense_hdr); 517 - if (sense_hdr.sense_key == NOT_READY && 505 + if ((sense_hdr.sense_key == NOT_READY || 506 + sense_hdr.sense_key == UNIT_ATTENTION) && 518 507 sense_hdr.asc == 0x04 && sense_hdr.ascq == 0x0a) 519 508 return SCSI_DH_RETRY; 520 509 else if (retval)
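A compact userspace reduction of the shape of this change, not the driver code itself: the same additional-sense pair (ASC 0x04 / ASCQ 0x0a, LUN not accessible - ALUA state transition) now has to be recognized under both the NOT READY and the UNIT ATTENTION sense keys, so the transition handling sits in one helper. The sense-key constants are the standard SPC values; everything else is illustrative.

#include <stdio.h>

enum disposition { NEEDS_RETRY, NOT_HANDLED };

static enum disposition handle_state_transition(void)
{
        /* mark the path group as transitioning and schedule a re-check ... */
        return NEEDS_RETRY;
}

static enum disposition check_sense(unsigned char key, unsigned char asc, unsigned char ascq)
{
        switch (key) {
        case 0x02:                      /* NOT READY */
        case 0x06:                      /* UNIT ATTENTION */
                if (asc == 0x04 && ascq == 0x0a)
                        return handle_state_transition();
                break;
        }
        return NOT_HANDLED;
}

int main(void)
{
        printf("%d %d\n", check_sense(0x02, 0x04, 0x0a), check_sense(0x06, 0x04, 0x0a));
        return 0;
}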
+1 -1
drivers/scsi/mpi3mr/mpi3mr_transport.c
··· 1364 1364 continue; 1365 1365 1366 1366 if (i > sizeof(mr_sas_port->phy_mask) * 8) { 1367 - ioc_warn(mrioc, "skipping port %u, max allowed value is %lu\n", 1367 + ioc_warn(mrioc, "skipping port %u, max allowed value is %zu\n", 1368 1368 i, sizeof(mr_sas_port->phy_mask) * 8); 1369 1369 goto out_fail; 1370 1370 }
+2 -2
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 302 302 303 303 /** 304 304 * _scsih_set_debug_level - global setting of ioc->logging_level. 305 - * @val: ? 306 - * @kp: ? 305 + * @val: value of the parameter to be set 306 + * @kp: pointer to kernel_param structure 307 307 * 308 308 * Note: The logging levels are defined in mpt3sas_debug.h. 309 309 */
+1
drivers/scsi/qedf/qedf.h
··· 363 363 #define QEDF_IN_RECOVERY 5 364 364 #define QEDF_DBG_STOP_IO 6 365 365 #define QEDF_PROBING 8 366 + #define QEDF_STAG_IN_PROGRESS 9 366 367 unsigned long flags; /* Miscellaneous state flags */ 367 368 int fipvlan_retries; 368 369 u8 num_queues;
+44 -3
drivers/scsi/qedf/qedf_main.c
··· 318 318 */ 319 319 if (resp == fc_lport_flogi_resp) { 320 320 qedf->flogi_cnt++; 321 + qedf->flogi_pending++; 322 + 323 + if (test_bit(QEDF_UNLOADING, &qedf->flags)) { 324 + QEDF_ERR(&qedf->dbg_ctx, "Driver unloading\n"); 325 + qedf->flogi_pending = 0; 326 + } 327 + 321 328 if (qedf->flogi_pending >= QEDF_FLOGI_RETRY_CNT) { 322 329 schedule_delayed_work(&qedf->stag_work, 2); 323 330 return NULL; 324 331 } 325 - qedf->flogi_pending++; 332 + 326 333 return fc_elsct_send(lport, did, fp, op, qedf_flogi_resp, 327 334 arg, timeout); 328 335 } ··· 919 912 struct qedf_ctx *qedf; 920 913 struct qed_link_output if_link; 921 914 915 + qedf = lport_priv(lport); 916 + 922 917 if (lport->vport) { 918 + clear_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags); 923 919 printk_ratelimited("Cannot issue host reset on NPIV port.\n"); 924 920 return; 925 921 } 926 - 927 - qedf = lport_priv(lport); 928 922 929 923 qedf->flogi_pending = 0; 930 924 /* For host reset, essentially do a soft link up/down */ ··· 946 938 if (!if_link.link_up) { 947 939 QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_DISC, 948 940 "Physical link is not up.\n"); 941 + clear_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags); 949 942 return; 950 943 } 951 944 /* Flush and wait to make sure link down is processed */ ··· 959 950 "Queue link up work.\n"); 960 951 queue_delayed_work(qedf->link_update_wq, &qedf->link_update, 961 952 0); 953 + clear_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags); 962 954 } 963 955 964 956 /* Reset the host by gracefully logging out and then logging back in */ ··· 3473 3463 } 3474 3464 3475 3465 /* Start the Slowpath-process */ 3466 + memset(&slowpath_params, 0, sizeof(struct qed_slowpath_params)); 3476 3467 slowpath_params.int_mode = QED_INT_MODE_MSIX; 3477 3468 slowpath_params.drv_major = QEDF_DRIVER_MAJOR_VER; 3478 3469 slowpath_params.drv_minor = QEDF_DRIVER_MINOR_VER; ··· 3732 3721 { 3733 3722 struct qedf_ctx *qedf; 3734 3723 int rc; 3724 + int cnt = 0; 3735 3725 3736 3726 if (!pdev) { 3737 3727 QEDF_ERR(NULL, "pdev is NULL.\n"); ··· 3748 3736 if (test_bit(QEDF_UNLOADING, &qedf->flags)) { 3749 3737 QEDF_ERR(&qedf->dbg_ctx, "Already removing PCI function.\n"); 3750 3738 return; 3739 + } 3740 + 3741 + stag_in_prog: 3742 + if (test_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags)) { 3743 + QEDF_ERR(&qedf->dbg_ctx, "Stag in progress, cnt=%d.\n", cnt); 3744 + cnt++; 3745 + 3746 + if (cnt < 5) { 3747 + msleep(500); 3748 + goto stag_in_prog; 3749 + } 3751 3750 } 3752 3751 3753 3752 if (mode != QEDF_MODE_RECOVERY) ··· 4019 3996 { 4020 3997 struct qedf_ctx *qedf = 4021 3998 container_of(work, struct qedf_ctx, stag_work.work); 3999 + 4000 + if (!qedf) { 4001 + QEDF_ERR(&qedf->dbg_ctx, "qedf is NULL"); 4002 + return; 4003 + } 4004 + 4005 + if (test_bit(QEDF_IN_RECOVERY, &qedf->flags)) { 4006 + QEDF_ERR(&qedf->dbg_ctx, 4007 + "Already is in recovery, hence not calling software context reset.\n"); 4008 + return; 4009 + } 4010 + 4011 + if (test_bit(QEDF_UNLOADING, &qedf->flags)) { 4012 + QEDF_ERR(&qedf->dbg_ctx, "Driver unloading\n"); 4013 + return; 4014 + } 4015 + 4016 + set_bit(QEDF_STAG_IN_PROGRESS, &qedf->flags); 4022 4017 4023 4018 printk_ratelimited("[%s]:[%s:%d]:%d: Performing software context reset.", 4024 4019 dev_name(&qedf->pdev->dev), __func__, __LINE__,
+7
drivers/scsi/scsi.c
··· 350 350 if (result < SCSI_VPD_HEADER_SIZE) 351 351 return 0; 352 352 353 + if (result > sizeof(vpd)) { 354 + dev_warn_once(&sdev->sdev_gendev, 355 + "%s: long VPD page 0 length: %d bytes\n", 356 + __func__, result); 357 + result = sizeof(vpd); 358 + } 359 + 353 360 result -= SCSI_VPD_HEADER_SIZE; 354 361 if (!memchr(&vpd[SCSI_VPD_HEADER_SIZE], page, result)) 355 362 return 0;
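A standalone illustration of the defensive check added above: the device-reported VPD length is clamped to the size of the local buffer before the buffer is scanned, so a misbehaving device cannot push memchr() past the bytes that were actually read. Buffer sizes and names below are illustrative.

#include <stdio.h>
#include <string.h>

#define HEADER_SIZE 4                   /* stand-in for SCSI_VPD_HEADER_SIZE */

static int page_supported(const unsigned char *vpd, size_t vpd_size,
                          int reported_len, unsigned char page)
{
        if (reported_len < HEADER_SIZE)
                return 0;
        if ((size_t)reported_len > vpd_size)
                reported_len = vpd_size;        /* clamp to what we actually hold */

        return memchr(vpd + HEADER_SIZE, page,
                      reported_len - HEADER_SIZE) != NULL;
}

int main(void)
{
        unsigned char vpd[8] = { 0, 0, 0, 4, 0x80, 0x83, 0x00, 0x00 };

        /* Device claims a 4096-byte page 0; only 8 bytes were read. */
        printf("%d\n", page_supported(vpd, sizeof(vpd), 4096, 0x83));
        return 0;
}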
+1 -1
drivers/scsi/sr.h
··· 65 65 int sr_get_last_session(struct cdrom_device_info *, struct cdrom_multisession *); 66 66 int sr_get_mcn(struct cdrom_device_info *, struct cdrom_mcn *); 67 67 int sr_reset(struct cdrom_device_info *); 68 - int sr_select_speed(struct cdrom_device_info *cdi, int speed); 68 + int sr_select_speed(struct cdrom_device_info *cdi, unsigned long speed); 69 69 int sr_audio_ioctl(struct cdrom_device_info *, unsigned int, void *); 70 70 71 71 int sr_is_xa(Scsi_CD *);
+4 -1
drivers/scsi/sr_ioctl.c
··· 425 425 return 0; 426 426 } 427 427 428 - int sr_select_speed(struct cdrom_device_info *cdi, int speed) 428 + int sr_select_speed(struct cdrom_device_info *cdi, unsigned long speed) 429 429 { 430 430 Scsi_CD *cd = cdi->handle; 431 431 struct packet_command cgc; 432 + 433 + /* avoid exceeding the max speed or overflowing integer bounds */ 434 + speed = clamp(0, speed, 0xffff / 177); 432 435 433 436 if (speed == 0) 434 437 speed = 0xffff; /* set to max */
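A rough standalone illustration of where the 0xffff / 177 bound comes from, assuming the rate is sent to the drive as a 16-bit value in kB/s with roughly 177 kB/s per 1x of CD speed: any larger multiplier would overflow that field once converted.

#include <stdio.h>

int main(void)
{
        unsigned long max_mult = 0xffff / 177;  /* 370: largest multiplier whose kB/s value still fits in 16 bits */
        unsigned long requested = 1000;         /* an absurd request from userspace */
        unsigned long speed = requested > max_mult ? max_mult : requested;

        printf("requested %lux, using %lux = %lu kB/s\n",
               requested, speed, speed * 177);
        return 0;
}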
+8 -9
drivers/ufs/core/ufs-mcq.c
··· 634 634 struct ufshcd_lrb *lrbp = &hba->lrb[tag]; 635 635 struct ufs_hw_queue *hwq; 636 636 unsigned long flags; 637 - int err = FAILED; 637 + int err; 638 638 639 639 if (!ufshcd_cmd_inflight(lrbp->cmd)) { 640 640 dev_err(hba->dev, 641 641 "%s: skip abort. cmd at tag %d already completed.\n", 642 642 __func__, tag); 643 - goto out; 643 + return FAILED; 644 644 } 645 645 646 646 /* Skip task abort in case previous aborts failed and report failure */ 647 647 if (lrbp->req_abort_skip) { 648 648 dev_err(hba->dev, "%s: skip abort. tag %d failed earlier\n", 649 649 __func__, tag); 650 - goto out; 650 + return FAILED; 651 651 } 652 652 653 653 hwq = ufshcd_mcq_req_to_hwq(hba, scsi_cmd_to_rq(cmd)); ··· 659 659 */ 660 660 dev_err(hba->dev, "%s: cmd found in sq. hwq=%d, tag=%d\n", 661 661 __func__, hwq->id, tag); 662 - goto out; 662 + return FAILED; 663 663 } 664 664 665 665 /* ··· 667 667 * in the completion queue either. Query the device to see if 668 668 * the command is being processed in the device. 669 669 */ 670 - if (ufshcd_try_to_abort_task(hba, tag)) { 670 + err = ufshcd_try_to_abort_task(hba, tag); 671 + if (err) { 671 672 dev_err(hba->dev, "%s: device abort failed %d\n", __func__, err); 672 673 lrbp->req_abort_skip = true; 673 - goto out; 674 + return FAILED; 674 675 } 675 676 676 - err = SUCCESS; 677 677 spin_lock_irqsave(&hwq->cq_lock, flags); 678 678 if (ufshcd_cmd_inflight(lrbp->cmd)) 679 679 ufshcd_release_scsi_cmd(hba, lrbp); 680 680 spin_unlock_irqrestore(&hwq->cq_lock, flags); 681 681 682 - out: 683 - return err; 682 + return SUCCESS; 684 683 }
+20 -2
fs/bcachefs/alloc_background.c
··· 741 741 enum btree_iter_update_trigger_flags flags) 742 742 { 743 743 struct bch_fs *c = trans->c; 744 + struct printbuf buf = PRINTBUF; 744 745 int ret = 0; 745 746 746 747 struct bch_dev *ca = bch2_dev_bucket_tryget(c, new.k->p); ··· 861 860 } 862 861 863 862 percpu_down_read(&c->mark_lock); 864 - if (new_a->gen != old_a->gen) 865 - *bucket_gen(ca, new.k->p.offset) = new_a->gen; 863 + if (new_a->gen != old_a->gen) { 864 + u8 *gen = bucket_gen(ca, new.k->p.offset); 865 + if (unlikely(!gen)) { 866 + percpu_up_read(&c->mark_lock); 867 + goto invalid_bucket; 868 + } 869 + *gen = new_a->gen; 870 + } 866 871 867 872 bch2_dev_usage_update(c, ca, old_a, new_a, journal_seq, false); 868 873 percpu_up_read(&c->mark_lock); ··· 902 895 903 896 percpu_down_read(&c->mark_lock); 904 897 struct bucket *g = gc_bucket(ca, new.k->p.offset); 898 + if (unlikely(!g)) { 899 + percpu_up_read(&c->mark_lock); 900 + goto invalid_bucket; 901 + } 902 + g->gen_valid = 1; 905 903 906 904 bucket_lock(g); 907 905 ··· 922 910 percpu_up_read(&c->mark_lock); 923 911 } 924 912 err: 913 + printbuf_exit(&buf); 925 914 bch2_dev_put(ca); 926 915 return ret; 916 + invalid_bucket: 917 + bch2_fs_inconsistent(c, "reference to invalid bucket\n %s", 918 + (bch2_bkey_val_to_text(&buf, c, new.s_c), buf.buf)); 919 + ret = -EIO; 920 + goto err; 927 921 } 928 922 929 923 /*
+2 -1
fs/bcachefs/bcachefs.h
··· 790 790 791 791 /* BTREE CACHE */ 792 792 struct bio_set btree_bio; 793 - struct workqueue_struct *io_complete_wq; 793 + struct workqueue_struct *btree_read_complete_wq; 794 + struct workqueue_struct *btree_write_submit_wq; 794 795 795 796 struct btree_root btree_roots_known[BTREE_ID_NR]; 796 797 DARRAY(struct btree_root) btree_roots_extra;
+5 -4
fs/bcachefs/btree_cache.c
··· 91 91 } 92 92 93 93 static const struct rhashtable_params bch_btree_cache_params = { 94 - .head_offset = offsetof(struct btree, hash), 95 - .key_offset = offsetof(struct btree, hash_val), 96 - .key_len = sizeof(u64), 97 - .obj_cmpfn = bch2_btree_cache_cmp_fn, 94 + .head_offset = offsetof(struct btree, hash), 95 + .key_offset = offsetof(struct btree, hash_val), 96 + .key_len = sizeof(u64), 97 + .obj_cmpfn = bch2_btree_cache_cmp_fn, 98 + .automatic_shrinking = true, 98 99 }; 99 100 100 101 static int btree_node_data_alloc(struct bch_fs *c, struct btree *b, gfp_t gfp)
+12 -5
fs/bcachefs/btree_gc.c
··· 874 874 const struct bch_alloc_v4 *old; 875 875 int ret; 876 876 877 + if (!bucket_valid(ca, k.k->p.offset)) 878 + return 0; 879 + 877 880 old = bch2_alloc_to_v4(k, &old_convert); 878 881 gc = new = *old; 879 882 ··· 993 990 994 991 buckets->first_bucket = ca->mi.first_bucket; 995 992 buckets->nbuckets = ca->mi.nbuckets; 993 + buckets->nbuckets_minus_first = 994 + buckets->nbuckets - buckets->first_bucket; 996 995 rcu_assign_pointer(ca->buckets_gc, buckets); 997 996 } 998 997 ··· 1008 1003 continue; 1009 1004 } 1010 1005 1011 - struct bch_alloc_v4 a_convert; 1012 - const struct bch_alloc_v4 *a = bch2_alloc_to_v4(k, &a_convert); 1006 + if (bucket_valid(ca, k.k->p.offset)) { 1007 + struct bch_alloc_v4 a_convert; 1008 + const struct bch_alloc_v4 *a = bch2_alloc_to_v4(k, &a_convert); 1013 1009 1014 - struct bucket *g = gc_bucket(ca, k.k->p.offset); 1015 - g->gen_valid = 1; 1016 - g->gen = a->gen; 1010 + struct bucket *g = gc_bucket(ca, k.k->p.offset); 1011 + g->gen_valid = 1; 1012 + g->gen = a->gen; 1013 + } 1017 1014 0; 1018 1015 }))); 1019 1016 bch2_dev_put(ca);
+4 -4
fs/bcachefs/btree_io.c
··· 1389 1389 bch2_latency_acct(ca, rb->start_time, READ); 1390 1390 } 1391 1391 1392 - queue_work(c->io_complete_wq, &rb->work); 1392 + queue_work(c->btree_read_complete_wq, &rb->work); 1393 1393 } 1394 1394 1395 1395 struct btree_node_read_all { ··· 1656 1656 btree_node_read_all_replicas_done(&ra->cl.work); 1657 1657 } else { 1658 1658 continue_at(&ra->cl, btree_node_read_all_replicas_done, 1659 - c->io_complete_wq); 1659 + c->btree_read_complete_wq); 1660 1660 } 1661 1661 1662 1662 return 0; ··· 1737 1737 if (sync) 1738 1738 btree_node_read_work(&rb->work); 1739 1739 else 1740 - queue_work(c->io_complete_wq, &rb->work); 1740 + queue_work(c->btree_read_complete_wq, &rb->work); 1741 1741 } 1742 1742 } 1743 1743 ··· 2229 2229 atomic64_add(bytes_to_write, &c->btree_write_stats[type].bytes); 2230 2230 2231 2231 INIT_WORK(&wbio->work, btree_write_submit); 2232 - queue_work(c->io_complete_wq, &wbio->work); 2232 + queue_work(c->btree_write_submit_wq, &wbio->work); 2233 2233 return; 2234 2234 err: 2235 2235 set_btree_node_noevict(b);
+4 -7
fs/bcachefs/btree_iter.c
··· 221 221 struct btree_path *path) 222 222 { 223 223 struct bch_fs *c = trans->c; 224 - unsigned i; 225 224 226 - EBUG_ON(path->btree_id >= BTREE_ID_NR); 227 - 228 - for (i = 0; i < (!path->cached ? BTREE_MAX_DEPTH : 1); i++) { 225 + for (unsigned i = 0; i < (!path->cached ? BTREE_MAX_DEPTH : 1); i++) { 229 226 if (!path->l[i].b) { 230 227 BUG_ON(!path->cached && 231 228 bch2_btree_id_root(c, path->btree_id)->b->c.level > i); ··· 247 250 static void bch2_btree_iter_verify(struct btree_iter *iter) 248 251 { 249 252 struct btree_trans *trans = iter->trans; 250 - 251 - BUG_ON(iter->btree_id >= BTREE_ID_NR); 252 253 253 254 BUG_ON(!!(iter->flags & BTREE_ITER_cached) != btree_iter_path(trans, iter)->cached); 254 255 ··· 3401 3406 bch2_time_stats_exit(&s->lock_hold_times); 3402 3407 } 3403 3408 3404 - if (c->btree_trans_barrier_initialized) 3409 + if (c->btree_trans_barrier_initialized) { 3410 + synchronize_srcu_expedited(&c->btree_trans_barrier); 3405 3411 cleanup_srcu_struct(&c->btree_trans_barrier); 3412 + } 3406 3413 mempool_exit(&c->btree_trans_mem_pool); 3407 3414 mempool_exit(&c->btree_trans_pool); 3408 3415 }
+20 -13
fs/bcachefs/btree_key_cache.c
··· 32 32 } 33 33 34 34 static const struct rhashtable_params bch2_btree_key_cache_params = { 35 - .head_offset = offsetof(struct bkey_cached, hash), 36 - .key_offset = offsetof(struct bkey_cached, key), 37 - .key_len = sizeof(struct bkey_cached_key), 38 - .obj_cmpfn = bch2_btree_key_cache_cmp_fn, 35 + .head_offset = offsetof(struct bkey_cached, hash), 36 + .key_offset = offsetof(struct bkey_cached, key), 37 + .key_len = sizeof(struct bkey_cached_key), 38 + .obj_cmpfn = bch2_btree_key_cache_cmp_fn, 39 + .automatic_shrinking = true, 39 40 }; 40 41 41 42 __flatten ··· 841 840 six_lock_exit(&ck->c.lock); 842 841 kmem_cache_free(bch2_key_cache, ck); 843 842 atomic_long_dec(&bc->nr_freed); 844 - freed++; 845 843 bc->nr_freed_nonpcpu--; 846 844 bc->freed++; 847 845 } ··· 854 854 six_lock_exit(&ck->c.lock); 855 855 kmem_cache_free(bch2_key_cache, ck); 856 856 atomic_long_dec(&bc->nr_freed); 857 - freed++; 858 857 bc->nr_freed_pcpu--; 859 858 bc->freed++; 860 859 } ··· 875 876 876 877 if (test_bit(BKEY_CACHED_DIRTY, &ck->flags)) { 877 878 bc->skipped_dirty++; 878 - goto next; 879 879 } else if (test_bit(BKEY_CACHED_ACCESSED, &ck->flags)) { 880 880 clear_bit(BKEY_CACHED_ACCESSED, &ck->flags); 881 881 bc->skipped_accessed++; 882 - goto next; 883 - } else if (bkey_cached_lock_for_evict(ck)) { 882 + } else if (!bkey_cached_lock_for_evict(ck)) { 883 + bc->skipped_lock_fail++; 884 + } else { 884 885 bkey_cached_evict(bc, ck); 885 886 bkey_cached_free(bc, ck); 886 887 bc->moved_to_freelist++; 887 - } else { 888 - bc->skipped_lock_fail++; 888 + freed++; 889 889 } 890 890 891 891 scanned++; 892 892 if (scanned >= nr) 893 893 break; 894 - next: 894 + 895 895 pos = next; 896 896 } 897 897 ··· 914 916 struct btree_key_cache *bc = &c->btree_key_cache; 915 917 long nr = atomic_long_read(&bc->nr_keys) - 916 918 atomic_long_read(&bc->nr_dirty); 919 + 920 + /* 921 + * Avoid hammering our shrinker too much if it's nearly empty - the 922 + * shrinker code doesn't take into account how big our cache is, if it's 923 + * mostly empty but the system is under memory pressure it causes nasty 924 + * lock contention: 925 + */ 926 + nr -= 128; 917 927 918 928 return max(0L, nr); 919 929 } ··· 1031 1025 if (!shrink) 1032 1026 return -BCH_ERR_ENOMEM_fs_btree_cache_init; 1033 1027 bc->shrink = shrink; 1034 - shrink->seeks = 0; 1035 1028 shrink->count_objects = bch2_btree_key_cache_count; 1036 1029 shrink->scan_objects = bch2_btree_key_cache_scan; 1030 + shrink->batch = 1 << 14; 1031 + shrink->seeks = 0; 1037 1032 shrink->private_data = c; 1038 1033 shrinker_register(shrink); 1039 1034 return 0;
+5 -4
fs/bcachefs/btree_node_scan.c
··· 72 72 73 73 struct btree *b = bch2_btree_node_get_noiter(trans, &k.k, f->btree_id, f->level, false); 74 74 bool ret = !IS_ERR_OR_NULL(b); 75 - if (ret) { 76 - f->sectors_written = b->written; 77 - six_unlock_read(&b->c.lock); 78 - } 75 + if (!ret) 76 + return ret; 77 + 78 + f->sectors_written = b->written; 79 + six_unlock_read(&b->c.lock); 79 80 80 81 /* 81 82 * We might update this node's range; if that happens, we need the node
+173 -134
fs/bcachefs/buckets.c
··· 465 465 return bch2_update_replicas_list(trans, &r.e, sectors); 466 466 } 467 467 468 + static int bch2_check_fix_ptr(struct btree_trans *trans, 469 + struct bkey_s_c k, 470 + struct extent_ptr_decoded p, 471 + const union bch_extent_entry *entry, 472 + bool *do_update) 473 + { 474 + struct bch_fs *c = trans->c; 475 + struct printbuf buf = PRINTBUF; 476 + int ret = 0; 477 + 478 + struct bch_dev *ca = bch2_dev_tryget(c, p.ptr.dev); 479 + if (!ca) { 480 + if (fsck_err(c, ptr_to_invalid_device, 481 + "pointer to missing device %u\n" 482 + "while marking %s", 483 + p.ptr.dev, 484 + (printbuf_reset(&buf), 485 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 486 + *do_update = true; 487 + return 0; 488 + } 489 + 490 + struct bucket *g = PTR_GC_BUCKET(ca, &p.ptr); 491 + if (!g) { 492 + if (fsck_err(c, ptr_to_invalid_device, 493 + "pointer to invalid bucket on device %u\n" 494 + "while marking %s", 495 + p.ptr.dev, 496 + (printbuf_reset(&buf), 497 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 498 + *do_update = true; 499 + goto out; 500 + } 501 + 502 + enum bch_data_type data_type = bch2_bkey_ptr_data_type(k, p, entry); 503 + 504 + if (fsck_err_on(!g->gen_valid, 505 + c, ptr_to_missing_alloc_key, 506 + "bucket %u:%zu data type %s ptr gen %u missing in alloc btree\n" 507 + "while marking %s", 508 + p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), 509 + bch2_data_type_str(ptr_data_type(k.k, &p.ptr)), 510 + p.ptr.gen, 511 + (printbuf_reset(&buf), 512 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 513 + if (!p.ptr.cached) { 514 + g->gen_valid = true; 515 + g->gen = p.ptr.gen; 516 + } else { 517 + *do_update = true; 518 + } 519 + } 520 + 521 + if (fsck_err_on(gen_cmp(p.ptr.gen, g->gen) > 0, 522 + c, ptr_gen_newer_than_bucket_gen, 523 + "bucket %u:%zu data type %s ptr gen in the future: %u > %u\n" 524 + "while marking %s", 525 + p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), 526 + bch2_data_type_str(ptr_data_type(k.k, &p.ptr)), 527 + p.ptr.gen, g->gen, 528 + (printbuf_reset(&buf), 529 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 530 + if (!p.ptr.cached && 531 + (g->data_type != BCH_DATA_btree || 532 + data_type == BCH_DATA_btree)) { 533 + g->gen_valid = true; 534 + g->gen = p.ptr.gen; 535 + g->data_type = 0; 536 + g->dirty_sectors = 0; 537 + g->cached_sectors = 0; 538 + } else { 539 + *do_update = true; 540 + } 541 + } 542 + 543 + if (fsck_err_on(gen_cmp(g->gen, p.ptr.gen) > BUCKET_GC_GEN_MAX, 544 + c, ptr_gen_newer_than_bucket_gen, 545 + "bucket %u:%zu gen %u data type %s: ptr gen %u too stale\n" 546 + "while marking %s", 547 + p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), g->gen, 548 + bch2_data_type_str(ptr_data_type(k.k, &p.ptr)), 549 + p.ptr.gen, 550 + (printbuf_reset(&buf), 551 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 552 + *do_update = true; 553 + 554 + if (fsck_err_on(!p.ptr.cached && gen_cmp(p.ptr.gen, g->gen) < 0, 555 + c, stale_dirty_ptr, 556 + "bucket %u:%zu data type %s stale dirty ptr: %u < %u\n" 557 + "while marking %s", 558 + p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), 559 + bch2_data_type_str(ptr_data_type(k.k, &p.ptr)), 560 + p.ptr.gen, g->gen, 561 + (printbuf_reset(&buf), 562 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 563 + *do_update = true; 564 + 565 + if (data_type != BCH_DATA_btree && p.ptr.gen != g->gen) 566 + goto out; 567 + 568 + if (fsck_err_on(bucket_data_type_mismatch(g->data_type, data_type), 569 + c, ptr_bucket_data_type_mismatch, 570 + "bucket %u:%zu gen %u different types of data in same bucket: %s, %s\n" 571 + "while marking %s", 572 + p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), 
g->gen, 573 + bch2_data_type_str(g->data_type), 574 + bch2_data_type_str(data_type), 575 + (printbuf_reset(&buf), 576 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 577 + if (data_type == BCH_DATA_btree) { 578 + g->gen_valid = true; 579 + g->gen = p.ptr.gen; 580 + g->data_type = data_type; 581 + g->dirty_sectors = 0; 582 + g->cached_sectors = 0; 583 + } else { 584 + *do_update = true; 585 + } 586 + } 587 + 588 + if (p.has_ec) { 589 + struct gc_stripe *m = genradix_ptr(&c->gc_stripes, p.ec.idx); 590 + 591 + if (fsck_err_on(!m || !m->alive, 592 + c, ptr_to_missing_stripe, 593 + "pointer to nonexistent stripe %llu\n" 594 + "while marking %s", 595 + (u64) p.ec.idx, 596 + (printbuf_reset(&buf), 597 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 598 + *do_update = true; 599 + 600 + if (fsck_err_on(m && m->alive && !bch2_ptr_matches_stripe_m(m, p), 601 + c, ptr_to_incorrect_stripe, 602 + "pointer does not match stripe %llu\n" 603 + "while marking %s", 604 + (u64) p.ec.idx, 605 + (printbuf_reset(&buf), 606 + bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 607 + *do_update = true; 608 + } 609 + out: 610 + fsck_err: 611 + bch2_dev_put(ca); 612 + printbuf_exit(&buf); 613 + return ret; 614 + } 615 + 468 616 int bch2_check_fix_ptrs(struct btree_trans *trans, 469 617 enum btree_id btree, unsigned level, struct bkey_s_c k, 470 618 enum btree_iter_update_trigger_flags flags) ··· 628 480 percpu_down_read(&c->mark_lock); 629 481 630 482 bkey_for_each_ptr_decode(k.k, ptrs_c, p, entry_c) { 631 - struct bch_dev *ca = bch2_dev_tryget(c, p.ptr.dev); 632 - if (!ca) { 633 - if (fsck_err(c, ptr_to_invalid_device, 634 - "pointer to missing device %u\n" 635 - "while marking %s", 636 - p.ptr.dev, 637 - (printbuf_reset(&buf), 638 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 639 - do_update = true; 640 - continue; 641 - } 642 - 643 - struct bucket *g = PTR_GC_BUCKET(ca, &p.ptr); 644 - enum bch_data_type data_type = bch2_bkey_ptr_data_type(k, p, entry_c); 645 - 646 - if (fsck_err_on(!g->gen_valid, 647 - c, ptr_to_missing_alloc_key, 648 - "bucket %u:%zu data type %s ptr gen %u missing in alloc btree\n" 649 - "while marking %s", 650 - p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), 651 - bch2_data_type_str(ptr_data_type(k.k, &p.ptr)), 652 - p.ptr.gen, 653 - (printbuf_reset(&buf), 654 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 655 - if (!p.ptr.cached) { 656 - g->gen_valid = true; 657 - g->gen = p.ptr.gen; 658 - } else { 659 - do_update = true; 660 - } 661 - } 662 - 663 - if (fsck_err_on(gen_cmp(p.ptr.gen, g->gen) > 0, 664 - c, ptr_gen_newer_than_bucket_gen, 665 - "bucket %u:%zu data type %s ptr gen in the future: %u > %u\n" 666 - "while marking %s", 667 - p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), 668 - bch2_data_type_str(ptr_data_type(k.k, &p.ptr)), 669 - p.ptr.gen, g->gen, 670 - (printbuf_reset(&buf), 671 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 672 - if (!p.ptr.cached && 673 - (g->data_type != BCH_DATA_btree || 674 - data_type == BCH_DATA_btree)) { 675 - g->gen_valid = true; 676 - g->gen = p.ptr.gen; 677 - g->data_type = 0; 678 - g->dirty_sectors = 0; 679 - g->cached_sectors = 0; 680 - } else { 681 - do_update = true; 682 - } 683 - } 684 - 685 - if (fsck_err_on(gen_cmp(g->gen, p.ptr.gen) > BUCKET_GC_GEN_MAX, 686 - c, ptr_gen_newer_than_bucket_gen, 687 - "bucket %u:%zu gen %u data type %s: ptr gen %u too stale\n" 688 - "while marking %s", 689 - p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), g->gen, 690 - bch2_data_type_str(ptr_data_type(k.k, &p.ptr)), 691 - p.ptr.gen, 692 - (printbuf_reset(&buf), 693 - 
bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 694 - do_update = true; 695 - 696 - if (fsck_err_on(!p.ptr.cached && gen_cmp(p.ptr.gen, g->gen) < 0, 697 - c, stale_dirty_ptr, 698 - "bucket %u:%zu data type %s stale dirty ptr: %u < %u\n" 699 - "while marking %s", 700 - p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), 701 - bch2_data_type_str(ptr_data_type(k.k, &p.ptr)), 702 - p.ptr.gen, g->gen, 703 - (printbuf_reset(&buf), 704 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 705 - do_update = true; 706 - 707 - if (data_type != BCH_DATA_btree && p.ptr.gen != g->gen) 708 - goto next; 709 - 710 - if (fsck_err_on(bucket_data_type_mismatch(g->data_type, data_type), 711 - c, ptr_bucket_data_type_mismatch, 712 - "bucket %u:%zu gen %u different types of data in same bucket: %s, %s\n" 713 - "while marking %s", 714 - p.ptr.dev, PTR_BUCKET_NR(ca, &p.ptr), g->gen, 715 - bch2_data_type_str(g->data_type), 716 - bch2_data_type_str(data_type), 717 - (printbuf_reset(&buf), 718 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 719 - if (data_type == BCH_DATA_btree) { 720 - g->gen_valid = true; 721 - g->gen = p.ptr.gen; 722 - g->data_type = data_type; 723 - g->dirty_sectors = 0; 724 - g->cached_sectors = 0; 725 - } else { 726 - do_update = true; 727 - } 728 - } 729 - 730 - if (p.has_ec) { 731 - struct gc_stripe *m = genradix_ptr(&c->gc_stripes, p.ec.idx); 732 - 733 - if (fsck_err_on(!m || !m->alive, c, 734 - ptr_to_missing_stripe, 735 - "pointer to nonexistent stripe %llu\n" 736 - "while marking %s", 737 - (u64) p.ec.idx, 738 - (printbuf_reset(&buf), 739 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 740 - do_update = true; 741 - 742 - if (fsck_err_on(m && m->alive && !bch2_ptr_matches_stripe_m(m, p), c, 743 - ptr_to_incorrect_stripe, 744 - "pointer does not match stripe %llu\n" 745 - "while marking %s", 746 - (u64) p.ec.idx, 747 - (printbuf_reset(&buf), 748 - bch2_bkey_val_to_text(&buf, c, k), buf.buf))) 749 - do_update = true; 750 - } 751 - next: 752 - bch2_dev_put(ca); 483 + ret = bch2_check_fix_ptr(trans, k, p, entry_c, &do_update); 484 + if (ret) 485 + goto err; 753 486 } 754 487 755 488 if (do_update) { ··· 745 716 bch2_btree_node_update_key_early(trans, btree, level - 1, k, new); 746 717 } 747 718 err: 748 - fsck_err: 749 719 percpu_up_read(&c->mark_lock); 750 720 printbuf_exit(&buf); 751 721 return ret; ··· 1015 987 enum btree_iter_update_trigger_flags flags) 1016 988 { 1017 989 bool insert = !(flags & BTREE_TRIGGER_overwrite); 990 + struct printbuf buf = PRINTBUF; 1018 991 int ret = 0; 1019 992 1020 993 struct bch_fs *c = trans->c; ··· 1048 1019 if (flags & BTREE_TRIGGER_gc) { 1049 1020 percpu_down_read(&c->mark_lock); 1050 1021 struct bucket *g = gc_bucket(ca, bucket.offset); 1022 + if (bch2_fs_inconsistent_on(!g, c, "reference to invalid bucket on device %u\n %s", 1023 + p.ptr.dev, 1024 + (bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 1025 + ret = -EIO; 1026 + goto err_unlock; 1027 + } 1028 + 1051 1029 bucket_lock(g); 1052 1030 struct bch_alloc_v4 old = bucket_m_to_alloc(*g), new = old; 1053 1031 ret = __mark_pointer(trans, ca, k, &p.ptr, *sectors, bp.data_type, &new); ··· 1063 1027 bch2_dev_usage_update(c, ca, &old, &new, 0, true); 1064 1028 } 1065 1029 bucket_unlock(g); 1030 + err_unlock: 1066 1031 percpu_up_read(&c->mark_lock); 1067 1032 } 1068 1033 err: 1069 1034 bch2_dev_put(ca); 1035 + printbuf_exit(&buf); 1070 1036 return ret; 1071 1037 } 1072 1038 ··· 1356 1318 u64 b, enum bch_data_type data_type, unsigned sectors, 1357 1319 enum btree_iter_update_trigger_flags flags) 1358 1320 { 1359 - int ret = 0; 
1360 - 1361 1321 percpu_down_read(&c->mark_lock); 1362 1322 struct bucket *g = gc_bucket(ca, b); 1323 + if (bch2_fs_inconsistent_on(!g, c, "reference to invalid bucket on device %u when marking metadata type %s", 1324 + ca->dev_idx, bch2_data_type_str(data_type))) 1325 + goto err_unlock; 1363 1326 1364 1327 bucket_lock(g); 1365 1328 struct bch_alloc_v4 old = bucket_m_to_alloc(*g); ··· 1369 1330 g->data_type != data_type, c, 1370 1331 "different types of data in same bucket: %s, %s", 1371 1332 bch2_data_type_str(g->data_type), 1372 - bch2_data_type_str(data_type))) { 1373 - ret = -EIO; 1333 + bch2_data_type_str(data_type))) 1374 1334 goto err; 1375 - } 1376 1335 1377 1336 if (bch2_fs_inconsistent_on((u64) g->dirty_sectors + sectors > ca->mi.bucket_size, c, 1378 1337 "bucket %u:%llu gen %u data type %s sector count overflow: %u + %u > bucket size", 1379 1338 ca->dev_idx, b, g->gen, 1380 1339 bch2_data_type_str(g->data_type ?: data_type), 1381 - g->dirty_sectors, sectors)) { 1382 - ret = -EIO; 1340 + g->dirty_sectors, sectors)) 1383 1341 goto err; 1384 - } 1385 1342 1386 1343 g->data_type = data_type; 1387 1344 g->dirty_sectors += sectors; 1388 1345 struct bch_alloc_v4 new = bucket_m_to_alloc(*g); 1346 + bch2_dev_usage_update(c, ca, &old, &new, 0, true); 1347 + percpu_up_read(&c->mark_lock); 1348 + return 0; 1389 1349 err: 1390 1350 bucket_unlock(g); 1391 - if (!ret) 1392 - bch2_dev_usage_update(c, ca, &old, &new, 0, true); 1351 + err_unlock: 1393 1352 percpu_up_read(&c->mark_lock); 1394 - return ret; 1353 + return -EIO; 1395 1354 } 1396 1355 1397 1356 int bch2_trans_mark_metadata_bucket(struct btree_trans *trans, ··· 1632 1595 1633 1596 bucket_gens->first_bucket = ca->mi.first_bucket; 1634 1597 bucket_gens->nbuckets = nbuckets; 1598 + bucket_gens->nbuckets_minus_first = 1599 + bucket_gens->nbuckets - bucket_gens->first_bucket; 1635 1600 1636 1601 if (resize) { 1637 1602 down_write(&c->gc_lock);
+11 -6
fs/bcachefs/buckets.h
··· 93 93 { 94 94 struct bucket_array *buckets = gc_bucket_array(ca); 95 95 96 - BUG_ON(!bucket_valid(ca, b)); 96 + if (b - buckets->first_bucket >= buckets->nbuckets_minus_first) 97 + return NULL; 97 98 return buckets->b + b; 98 99 } 99 100 ··· 111 110 { 112 111 struct bucket_gens *gens = bucket_gens(ca); 113 112 114 - BUG_ON(!bucket_valid(ca, b)); 113 + if (b - gens->first_bucket >= gens->nbuckets_minus_first) 114 + return NULL; 115 115 return gens->b + b; 116 116 } 117 117 ··· 172 170 return r > 0 ? r : 0; 173 171 } 174 172 175 - static inline u8 dev_ptr_stale_rcu(struct bch_dev *ca, const struct bch_extent_ptr *ptr) 173 + static inline int dev_ptr_stale_rcu(struct bch_dev *ca, const struct bch_extent_ptr *ptr) 176 174 { 177 - return gen_after(*bucket_gen(ca, PTR_BUCKET_NR(ca, ptr)), ptr->gen); 175 + u8 *gen = bucket_gen(ca, PTR_BUCKET_NR(ca, ptr)); 176 + if (!gen) 177 + return -1; 178 + return gen_after(*gen, ptr->gen); 178 179 } 179 180 180 181 /** 181 182 * dev_ptr_stale() - check if a pointer points into a bucket that has been 182 183 * invalidated. 183 184 */ 184 - static inline u8 dev_ptr_stale(struct bch_dev *ca, const struct bch_extent_ptr *ptr) 185 + static inline int dev_ptr_stale(struct bch_dev *ca, const struct bch_extent_ptr *ptr) 185 186 { 186 187 rcu_read_lock(); 187 - u8 ret = dev_ptr_stale_rcu(ca, ptr); 188 + int ret = dev_ptr_stale_rcu(ca, ptr); 188 189 rcu_read_unlock(); 189 190 190 191 return ret;
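A userspace sketch of the defensive pattern these accessors adopt: an out-of-range bucket index now yields NULL instead of tripping a BUG_ON(), and a dev_ptr_stale_rcu()-style caller folds that NULL into a negative "invalid" result rather than dereferencing it. The layout is simplified (the generation comparison is reduced to a plain inequality) and all names are illustrative.

#include <stddef.h>
#include <stdio.h>

struct gens {
        size_t first;                   /* first valid bucket */
        size_t nr_minus_first;          /* mirrors nbuckets_minus_first */
        unsigned char g[8];             /* per-bucket generation numbers */
};

static unsigned char *gen_ptr(struct gens *gens, size_t b)
{
        /* Unsigned subtraction also rejects b < first (it wraps to a huge value). */
        if (b - gens->first >= gens->nr_minus_first)
                return NULL;
        return &gens->g[b];
}

static int ptr_stale(struct gens *gens, size_t b, unsigned char ptr_gen)
{
        unsigned char *gen = gen_ptr(gens, b);

        if (!gen)
                return -1;              /* invalid bucket, distinct from merely stale */
        return *gen != ptr_gen;
}

int main(void)
{
        struct gens gens = { .first = 1, .nr_minus_first = 7, .g = { 0, 3, 4 } };

        printf("%d\n", ptr_stale(&gens, 1, 3));         /* 0: generation matches */
        printf("%d\n", ptr_stale(&gens, 2, 3));         /* 1: bucket was reused, pointer is stale */
        printf("%d\n", ptr_stale(&gens, 100, 3));       /* -1: not a valid bucket at all */
        return 0;
}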
+2
fs/bcachefs/buckets_types.h
··· 22 22 struct rcu_head rcu; 23 23 u16 first_bucket; 24 24 size_t nbuckets; 25 + size_t nbuckets_minus_first; 25 26 struct bucket b[]; 26 27 }; 27 28 ··· 30 29 struct rcu_head rcu; 31 30 u16 first_bucket; 32 31 size_t nbuckets; 32 + size_t nbuckets_minus_first; 33 33 u8 b[]; 34 34 }; 35 35
+1 -2
fs/bcachefs/data_update.c
··· 202 202 bch2_bkey_durability(c, bkey_i_to_s_c(&new->k_i)); 203 203 204 204 /* Now, drop excess replicas: */ 205 - restart_drop_extra_replicas: 206 - 207 205 rcu_read_lock(); 206 + restart_drop_extra_replicas: 208 207 bkey_for_each_ptr_decode(old.k, bch2_bkey_ptrs(bkey_i_to_s(insert)), p, entry) { 209 208 unsigned ptr_durability = bch2_extent_ptr_durability(c, &p); 210 209
+20 -6
fs/bcachefs/ec.c
··· 268 268 { 269 269 struct bch_fs *c = trans->c; 270 270 const struct bch_extent_ptr *ptr = s.v->ptrs + ptr_idx; 271 + struct printbuf buf = PRINTBUF; 271 272 int ret = 0; 272 273 273 274 struct bch_dev *ca = bch2_dev_tryget(c, ptr->dev); ··· 290 289 if (flags & BTREE_TRIGGER_gc) { 291 290 percpu_down_read(&c->mark_lock); 292 291 struct bucket *g = gc_bucket(ca, bucket.offset); 292 + if (bch2_fs_inconsistent_on(!g, c, "reference to invalid bucket on device %u\n %s", 293 + ptr->dev, 294 + (bch2_bkey_val_to_text(&buf, c, s.s_c), buf.buf))) { 295 + ret = -EIO; 296 + goto err_unlock; 297 + } 298 + 293 299 bucket_lock(g); 294 300 struct bch_alloc_v4 old = bucket_m_to_alloc(*g), new = old; 295 301 ret = __mark_stripe_bucket(trans, ca, s, ptr_idx, deleting, bucket, &new, flags); ··· 305 297 bch2_dev_usage_update(c, ca, &old, &new, 0, true); 306 298 } 307 299 bucket_unlock(g); 300 + err_unlock: 308 301 percpu_up_read(&c->mark_lock); 309 302 } 310 303 err: 311 304 bch2_dev_put(ca); 305 + printbuf_exit(&buf); 312 306 return ret; 313 307 } 314 308 ··· 724 714 bch2_blk_status_to_str(bio->bi_status))) 725 715 clear_bit(ec_bio->idx, ec_bio->buf->valid); 726 716 727 - if (dev_ptr_stale(ca, ptr)) { 717 + int stale = dev_ptr_stale(ca, ptr); 718 + if (stale) { 728 719 bch_err_ratelimited(ca->fs, 729 - "error %s stripe: stale pointer after io", 730 - bio_data_dir(bio) == READ ? "reading from" : "writing to"); 720 + "error %s stripe: stale/invalid pointer (%i) after io", 721 + bio_data_dir(bio) == READ ? "reading from" : "writing to", 722 + stale); 731 723 clear_bit(ec_bio->idx, ec_bio->buf->valid); 732 724 } 733 725 ··· 755 743 return; 756 744 } 757 745 758 - if (dev_ptr_stale(ca, ptr)) { 746 + int stale = dev_ptr_stale(ca, ptr); 747 + if (stale) { 759 748 bch_err_ratelimited(c, 760 - "error %s stripe: stale pointer", 761 - rw == READ ? "reading from" : "writing to"); 749 + "error %s stripe: stale pointer (%i)", 750 + rw == READ ? "reading from" : "writing to", 751 + stale); 762 752 clear_bit(idx, buf->valid); 763 753 return; 764 754 }
+6 -3
fs/bcachefs/extents.c
··· 137 137 138 138 struct bch_dev *ca = bch2_dev_rcu(c, p.ptr.dev); 139 139 140 - if (p.ptr.cached && (!ca || dev_ptr_stale(ca, &p.ptr))) 140 + if (p.ptr.cached && (!ca || dev_ptr_stale_rcu(ca, &p.ptr))) 141 141 continue; 142 142 143 143 f = failed ? dev_io_failures(failed, p.ptr.dev) : NULL; ··· 999 999 bch2_bkey_drop_ptrs(k, ptr, 1000 1000 ptr->cached && 1001 1001 (ca = bch2_dev_rcu(c, ptr->dev)) && 1002 - dev_ptr_stale_rcu(ca, ptr)); 1002 + dev_ptr_stale_rcu(ca, ptr) > 0); 1003 1003 rcu_read_unlock(); 1004 1004 1005 1005 return bkey_deleted(k.k); ··· 1024 1024 prt_str(out, " cached"); 1025 1025 if (ptr->unwritten) 1026 1026 prt_str(out, " unwritten"); 1027 - if (bucket_valid(ca, b) && dev_ptr_stale_rcu(ca, ptr)) 1027 + int stale = dev_ptr_stale_rcu(ca, ptr); 1028 + if (stale > 0) 1028 1029 prt_printf(out, " stale"); 1030 + else if (stale) 1031 + prt_printf(out, " invalid"); 1029 1032 } 1030 1033 rcu_read_unlock(); 1031 1034 --out->atomic;
+5 -12
fs/bcachefs/fs-ioctl.c
··· 308 308 return ret; 309 309 } 310 310 311 - static long __bch2_ioctl_subvolume_create(struct bch_fs *c, struct file *filp, 312 - struct bch_ioctl_subvolume arg) 311 + static long bch2_ioctl_subvolume_create(struct bch_fs *c, struct file *filp, 312 + struct bch_ioctl_subvolume arg) 313 313 { 314 314 struct inode *dir; 315 315 struct bch_inode_info *inode; ··· 406 406 !arg.src_ptr) 407 407 snapshot_src.subvol = inode_inum(to_bch_ei(dir)).subvol; 408 408 409 + down_write(&c->snapshot_create_lock); 409 410 inode = __bch2_create(file_mnt_idmap(filp), to_bch_ei(dir), 410 411 dst_dentry, arg.mode|S_IFDIR, 411 412 0, snapshot_src, create_flags); 413 + up_write(&c->snapshot_create_lock); 414 + 412 415 error = PTR_ERR_OR_ZERO(inode); 413 416 if (error) 414 417 goto err3; ··· 430 427 } 431 428 err1: 432 429 return error; 433 - } 434 - 435 - static long bch2_ioctl_subvolume_create(struct bch_fs *c, struct file *filp, 436 - struct bch_ioctl_subvolume arg) 437 - { 438 - down_write(&c->snapshot_create_lock); 439 - long ret = __bch2_ioctl_subvolume_create(c, filp, arg); 440 - up_write(&c->snapshot_create_lock); 441 - 442 - return ret; 443 430 } 444 431 445 432 static long bch2_ioctl_subvolume_destroy(struct bch_fs *c, struct file *filp,
+3
fs/bcachefs/fs.c
··· 227 227 mutex_init(&inode->ei_update_lock); 228 228 two_state_lock_init(&inode->ei_pagecache_lock); 229 229 INIT_LIST_HEAD(&inode->ei_vfs_inode_list); 230 + inode->ei_flags = 0; 230 231 mutex_init(&inode->ei_quota_lock); 232 + memset(&inode->ei_devs_need_flush, 0, sizeof(inode->ei_devs_need_flush)); 231 233 inode->v.i_state = 0; 232 234 233 235 if (unlikely(inode_init_always(c->vfs_sb, &inode->v))) { ··· 1969 1967 sb->s_time_min = div_s64(S64_MIN, c->sb.time_units_per_sec) + 1; 1970 1968 sb->s_time_max = div_s64(S64_MAX, c->sb.time_units_per_sec); 1971 1969 sb->s_uuid = c->sb.user_uuid; 1970 + sb->s_shrink->seeks = 0; 1972 1971 c->vfs_sb = sb; 1973 1972 strscpy(sb->s_id, c->name, sizeof(sb->s_id)); 1974 1973
+3
fs/bcachefs/fsck.c
··· 1677 1677 trans_was_restarted(trans, restart_count); 1678 1678 } 1679 1679 1680 + noinline_for_stack 1680 1681 static int check_dirent_inode_dirent(struct btree_trans *trans, 1681 1682 struct btree_iter *iter, 1682 1683 struct bkey_s_c_dirent d, ··· 1774 1773 return ret; 1775 1774 } 1776 1775 1776 + noinline_for_stack 1777 1777 static int check_dirent_target(struct btree_trans *trans, 1778 1778 struct btree_iter *iter, 1779 1779 struct bkey_s_c_dirent d, ··· 1849 1847 return ret; 1850 1848 } 1851 1849 1850 + noinline_for_stack 1852 1851 static int check_dirent_to_subvol(struct btree_trans *trans, struct btree_iter *iter, 1853 1852 struct bkey_s_c_dirent d) 1854 1853 {
+27 -12
fs/bcachefs/io_read.c
··· 84 84 }; 85 85 86 86 static const struct rhashtable_params bch_promote_params = { 87 - .head_offset = offsetof(struct promote_op, hash), 88 - .key_offset = offsetof(struct promote_op, pos), 89 - .key_len = sizeof(struct bpos), 87 + .head_offset = offsetof(struct promote_op, hash), 88 + .key_offset = offsetof(struct promote_op, pos), 89 + .key_len = sizeof(struct bpos), 90 + .automatic_shrinking = true, 90 91 }; 91 92 92 93 static inline int should_promote(struct bch_fs *c, struct bkey_s_c k, ··· 777 776 PTR_BUCKET_POS(ca, &ptr), 778 777 BTREE_ITER_cached); 779 778 780 - prt_printf(&buf, "Attempting to read from stale dirty pointer:\n"); 781 - printbuf_indent_add(&buf, 2); 779 + u8 *gen = bucket_gen(ca, iter.pos.offset); 780 + if (gen) { 782 781 783 - bch2_bkey_val_to_text(&buf, c, k); 784 - prt_newline(&buf); 782 + prt_printf(&buf, "Attempting to read from stale dirty pointer:\n"); 783 + printbuf_indent_add(&buf, 2); 785 784 786 - prt_printf(&buf, "memory gen: %u", *bucket_gen(ca, iter.pos.offset)); 787 - 788 - ret = lockrestart_do(trans, bkey_err(k = bch2_btree_iter_peek_slot(&iter))); 789 - if (!ret) { 790 - prt_newline(&buf); 791 785 bch2_bkey_val_to_text(&buf, c, k); 786 + prt_newline(&buf); 787 + 788 + prt_printf(&buf, "memory gen: %u", *gen); 789 + 790 + ret = lockrestart_do(trans, bkey_err(k = bch2_btree_iter_peek_slot(&iter))); 791 + if (!ret) { 792 + prt_newline(&buf); 793 + bch2_bkey_val_to_text(&buf, c, k); 794 + } 795 + } else { 796 + prt_printf(&buf, "Attempting to read from invalid bucket %llu:%llu:\n", 797 + iter.pos.inode, iter.pos.offset); 798 + printbuf_indent_add(&buf, 2); 799 + 800 + prt_printf(&buf, "first bucket %u nbuckets %llu\n", 801 + ca->mi.first_bucket, ca->mi.nbuckets); 802 + 803 + bch2_bkey_val_to_text(&buf, c, k); 804 + prt_newline(&buf); 792 805 } 793 806 794 807 bch2_fs_inconsistent(c, "%s", buf.buf);
+15 -4
fs/bcachefs/io_write.c
··· 1220 1220 DARRAY_PREALLOCATED(struct bucket_to_lock, 3) buckets; 1221 1221 u32 snapshot; 1222 1222 struct bucket_to_lock *stale_at; 1223 - int ret; 1223 + int stale, ret; 1224 1224 1225 1225 if (op->flags & BCH_WRITE_MOVE) 1226 1226 return; ··· 1299 1299 BUCKET_NOCOW_LOCK_UPDATE); 1300 1300 1301 1301 rcu_read_lock(); 1302 - bool stale = gen_after(*bucket_gen(ca, i->b.offset), i->gen); 1302 + u8 *gen = bucket_gen(ca, i->b.offset); 1303 + stale = !gen ? -1 : gen_after(*gen, i->gen); 1303 1304 rcu_read_unlock(); 1304 1305 1305 1306 if (unlikely(stale)) { ··· 1381 1380 break; 1382 1381 } 1383 1382 1384 - /* We can retry this: */ 1385 - ret = -BCH_ERR_transaction_restart; 1383 + struct printbuf buf = PRINTBUF; 1384 + if (bch2_fs_inconsistent_on(stale < 0, c, 1385 + "pointer to invalid bucket in nocow path on device %llu\n %s", 1386 + stale_at->b.inode, 1387 + (bch2_bkey_val_to_text(&buf, c, k), buf.buf))) { 1388 + ret = -EIO; 1389 + } else { 1390 + /* We can retry this: */ 1391 + ret = -BCH_ERR_transaction_restart; 1392 + } 1393 + printbuf_exit(&buf); 1394 + 1386 1395 goto err_get_ioref; 1387 1396 } 1388 1397
+4 -3
fs/bcachefs/movinggc.c
··· 35 35 }; 36 36 37 37 static const struct rhashtable_params bch_move_bucket_params = { 38 - .head_offset = offsetof(struct move_bucket_in_flight, hash), 39 - .key_offset = offsetof(struct move_bucket_in_flight, bucket.k), 40 - .key_len = sizeof(struct move_bucket_key), 38 + .head_offset = offsetof(struct move_bucket_in_flight, hash), 39 + .key_offset = offsetof(struct move_bucket_in_flight, bucket.k), 40 + .key_len = sizeof(struct move_bucket_key), 41 + .automatic_shrinking = true, 41 42 }; 42 43 43 44 static struct move_bucket_in_flight *
+3 -3
fs/bcachefs/super-io.c
··· 1310 1310 1311 1311 prt_printf(out, "Device index:\t%u\n", sb->dev_idx); 1312 1312 1313 - prt_str(out, "Label:\t"); 1313 + prt_printf(out, "Label:\t"); 1314 1314 prt_printf(out, "%.*s", (int) sizeof(sb->label), sb->label); 1315 1315 prt_newline(out); 1316 1316 1317 - prt_str(out, "Version:\t"); 1317 + prt_printf(out, "Version:\t"); 1318 1318 bch2_version_to_text(out, le16_to_cpu(sb->version)); 1319 1319 prt_newline(out); 1320 1320 1321 - prt_str(out, "Version upgrade complete:\t"); 1321 + prt_printf(out, "Version upgrade complete:\t"); 1322 1322 bch2_version_to_text(out, BCH_SB_VERSION_UPGRADE_COMPLETE(sb)); 1323 1323 prt_newline(out); 1324 1324
+7 -3
fs/bcachefs/super.c
··· 582 582 583 583 if (c->write_ref_wq) 584 584 destroy_workqueue(c->write_ref_wq); 585 - if (c->io_complete_wq) 586 - destroy_workqueue(c->io_complete_wq); 585 + if (c->btree_write_submit_wq) 586 + destroy_workqueue(c->btree_write_submit_wq); 587 + if (c->btree_read_complete_wq) 588 + destroy_workqueue(c->btree_read_complete_wq); 587 589 if (c->copygc_wq) 588 590 destroy_workqueue(c->copygc_wq); 589 591 if (c->btree_io_complete_wq) ··· 880 878 WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 1)) || 881 879 !(c->copygc_wq = alloc_workqueue("bcachefs_copygc", 882 880 WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 1)) || 883 - !(c->io_complete_wq = alloc_workqueue("bcachefs_io", 881 + !(c->btree_read_complete_wq = alloc_workqueue("bcachefs_btree_read_complete", 884 882 WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 512)) || 883 + !(c->btree_write_submit_wq = alloc_workqueue("bcachefs_btree_write_sumit", 884 + WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 1)) || 885 885 !(c->write_ref_wq = alloc_workqueue("bcachefs_write_ref", 886 886 WQ_FREEZABLE, 0)) || 887 887 #ifndef BCH_WRITE_REF_DEBUG
+1 -9
fs/btrfs/disk-io.c
··· 4538 4538 struct btrfs_fs_info *fs_info) 4539 4539 { 4540 4540 struct rb_node *node; 4541 - struct btrfs_delayed_ref_root *delayed_refs; 4541 + struct btrfs_delayed_ref_root *delayed_refs = &trans->delayed_refs; 4542 4542 struct btrfs_delayed_ref_node *ref; 4543 4543 4544 - delayed_refs = &trans->delayed_refs; 4545 - 4546 4544 spin_lock(&delayed_refs->lock); 4547 - if (atomic_read(&delayed_refs->num_entries) == 0) { 4548 - spin_unlock(&delayed_refs->lock); 4549 - btrfs_debug(fs_info, "delayed_refs has NO entry"); 4550 - return; 4551 - } 4552 - 4553 4545 while ((node = rb_first_cached(&delayed_refs->href_root)) != NULL) { 4554 4546 struct btrfs_delayed_ref_head *head; 4555 4547 struct rb_node *n;
+31 -29
fs/btrfs/extent_io.c
··· 3689 3689 struct folio *folio = page_folio(page); 3690 3690 struct extent_buffer *exists; 3691 3691 3692 + lockdep_assert_held(&page->mapping->i_private_lock); 3693 + 3692 3694 /* 3693 3695 * For subpage case, we completely rely on radix tree to ensure we 3694 3696 * don't try to insert two ebs for the same bytenr. So here we always ··· 3758 3756 * The caller needs to free the existing folios and retry using the same order. 3759 3757 */ 3760 3758 static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i, 3759 + struct btrfs_subpage *prealloc, 3761 3760 struct extent_buffer **found_eb_ret) 3762 3761 { 3763 3762 3764 3763 struct btrfs_fs_info *fs_info = eb->fs_info; 3765 3764 struct address_space *mapping = fs_info->btree_inode->i_mapping; 3766 3765 const unsigned long index = eb->start >> PAGE_SHIFT; 3767 - struct folio *existing_folio; 3766 + struct folio *existing_folio = NULL; 3768 3767 int ret; 3769 3768 3770 3769 ASSERT(found_eb_ret); ··· 3777 3774 ret = filemap_add_folio(mapping, eb->folios[i], index + i, 3778 3775 GFP_NOFS | __GFP_NOFAIL); 3779 3776 if (!ret) 3780 - return 0; 3777 + goto finish; 3781 3778 3782 3779 existing_folio = filemap_lock_folio(mapping, index + i); 3783 3780 /* The page cache only exists for a very short time, just retry. */ 3784 - if (IS_ERR(existing_folio)) 3781 + if (IS_ERR(existing_folio)) { 3782 + existing_folio = NULL; 3785 3783 goto retry; 3784 + } 3786 3785 3787 3786 /* For now, we should only have single-page folios for btree inode. */ 3788 3787 ASSERT(folio_nr_pages(existing_folio) == 1); ··· 3795 3790 return -EAGAIN; 3796 3791 } 3797 3792 3798 - if (fs_info->nodesize < PAGE_SIZE) { 3799 - /* 3800 - * We're going to reuse the existing page, can drop our page 3801 - * and subpage structure now. 3802 - */ 3793 + finish: 3794 + spin_lock(&mapping->i_private_lock); 3795 + if (existing_folio && fs_info->nodesize < PAGE_SIZE) { 3796 + /* We're going to reuse the existing page, can drop our folio now. */ 3803 3797 __free_page(folio_page(eb->folios[i], 0)); 3804 3798 eb->folios[i] = existing_folio; 3805 - } else { 3799 + } else if (existing_folio) { 3806 3800 struct extent_buffer *existing_eb; 3807 3801 3808 3802 existing_eb = grab_extent_buffer(fs_info, ··· 3809 3805 if (existing_eb) { 3810 3806 /* The extent buffer still exists, we can use it directly. */ 3811 3807 *found_eb_ret = existing_eb; 3808 + spin_unlock(&mapping->i_private_lock); 3812 3809 folio_unlock(existing_folio); 3813 3810 folio_put(existing_folio); 3814 3811 return 1; ··· 3818 3813 __free_page(folio_page(eb->folios[i], 0)); 3819 3814 eb->folios[i] = existing_folio; 3820 3815 } 3816 + eb->folio_size = folio_size(eb->folios[i]); 3817 + eb->folio_shift = folio_shift(eb->folios[i]); 3818 + /* Should not fail, as we have preallocated the memory. */ 3819 + ret = attach_extent_buffer_folio(eb, eb->folios[i], prealloc); 3820 + ASSERT(!ret); 3821 + /* 3822 + * To inform we have an extra eb under allocation, so that 3823 + * detach_extent_buffer_page() won't release the folio private when the 3824 + * eb hasn't been inserted into radix tree yet. 3825 + * 3826 + * The ref will be decreased when the eb releases the page, in 3827 + * detach_extent_buffer_page(). Thus needs no special handling in the 3828 + * error path. 
3829 + */ 3830 + btrfs_folio_inc_eb_refs(fs_info, eb->folios[i]); 3831 + spin_unlock(&mapping->i_private_lock); 3821 3832 return 0; 3822 3833 } 3823 3834 ··· 3845 3824 int attached = 0; 3846 3825 struct extent_buffer *eb; 3847 3826 struct extent_buffer *existing_eb = NULL; 3848 - struct address_space *mapping = fs_info->btree_inode->i_mapping; 3849 3827 struct btrfs_subpage *prealloc = NULL; 3850 3828 u64 lockdep_owner = owner_root; 3851 3829 bool page_contig = true; ··· 3910 3890 for (int i = 0; i < num_folios; i++) { 3911 3891 struct folio *folio; 3912 3892 3913 - ret = attach_eb_folio_to_filemap(eb, i, &existing_eb); 3893 + ret = attach_eb_folio_to_filemap(eb, i, prealloc, &existing_eb); 3914 3894 if (ret > 0) { 3915 3895 ASSERT(existing_eb); 3916 3896 goto out; ··· 3947 3927 * and free the allocated page. 3948 3928 */ 3949 3929 folio = eb->folios[i]; 3950 - eb->folio_size = folio_size(folio); 3951 - eb->folio_shift = folio_shift(folio); 3952 - spin_lock(&mapping->i_private_lock); 3953 - /* Should not fail, as we have preallocated the memory */ 3954 - ret = attach_extent_buffer_folio(eb, folio, prealloc); 3955 - ASSERT(!ret); 3956 - /* 3957 - * To inform we have extra eb under allocation, so that 3958 - * detach_extent_buffer_page() won't release the folio private 3959 - * when the eb hasn't yet been inserted into radix tree. 3960 - * 3961 - * The ref will be decreased when the eb released the page, in 3962 - * detach_extent_buffer_page(). 3963 - * Thus needs no special handling in error path. 3964 - */ 3965 - btrfs_folio_inc_eb_refs(fs_info, folio); 3966 - spin_unlock(&mapping->i_private_lock); 3967 - 3968 3930 WARN_ON(btrfs_folio_test_dirty(fs_info, folio, eb->start, eb->len)); 3969 3931 3970 3932 /*
+11 -6
fs/btrfs/tree-log.c
··· 4860 4860 path->slots[0]++; 4861 4861 continue; 4862 4862 } 4863 - if (!dropped_extents) { 4864 - /* 4865 - * Avoid logging extent items logged in past fsync calls 4866 - * and leading to duplicate keys in the log tree. 4867 - */ 4863 + /* 4864 + * Avoid overlapping items in the log tree. The first time we 4865 + * get here, get rid of everything from a past fsync. After 4866 + * that, if the current extent starts before the end of the last 4867 + * extent we copied, truncate the last one. This can happen if 4868 + * an ordered extent completion modifies the subvolume tree 4869 + * while btrfs_next_leaf() has the tree unlocked. 4870 + */ 4871 + if (!dropped_extents || key.offset < truncate_offset) { 4868 4872 ret = truncate_inode_items(trans, root->log_root, inode, 4869 - truncate_offset, 4873 + min(key.offset, truncate_offset), 4870 4874 BTRFS_EXTENT_DATA_KEY); 4871 4875 if (ret) 4872 4876 goto out; 4873 4877 dropped_extents = true; 4874 4878 } 4879 + truncate_offset = btrfs_file_extent_end(path); 4875 4880 if (ins_nr == 0) 4876 4881 start_slot = slot; 4877 4882 ins_nr++;
+2 -1
fs/cachefiles/daemon.c
··· 133 133 return 0; 134 134 } 135 135 136 - static void cachefiles_flush_reqs(struct cachefiles_cache *cache) 136 + void cachefiles_flush_reqs(struct cachefiles_cache *cache) 137 137 { 138 138 struct xarray *xa = &cache->reqs; 139 139 struct cachefiles_req *req; ··· 159 159 xa_for_each(xa, index, req) { 160 160 req->error = -EIO; 161 161 complete(&req->done); 162 + __xa_erase(xa, index); 162 163 } 163 164 xa_unlock(xa); 164 165
+5
fs/cachefiles/internal.h
··· 55 55 int ondemand_id; 56 56 enum cachefiles_object_state state; 57 57 struct cachefiles_object *object; 58 + spinlock_t lock; 58 59 }; 59 60 60 61 /* ··· 139 138 struct cachefiles_req { 140 139 struct cachefiles_object *object; 141 140 struct completion done; 141 + refcount_t ref; 142 142 int error; 143 143 struct cachefiles_msg msg; 144 144 }; ··· 188 186 * daemon.c 189 187 */ 190 188 extern const struct file_operations cachefiles_daemon_fops; 189 + extern void cachefiles_flush_reqs(struct cachefiles_cache *cache); 191 190 extern void cachefiles_get_unbind_pincount(struct cachefiles_cache *cache); 192 191 extern void cachefiles_put_unbind_pincount(struct cachefiles_cache *cache); 193 192 ··· 427 424 pr_err("I/O Error: " FMT"\n", ##__VA_ARGS__); \ 428 425 fscache_io_error((___cache)->cache); \ 429 426 set_bit(CACHEFILES_DEAD, &(___cache)->flags); \ 427 + if (cachefiles_in_ondemand_mode(___cache)) \ 428 + cachefiles_flush_reqs(___cache); \ 430 429 } while (0) 431 430 432 431 #define cachefiles_io_error_obj(object, FMT, ...) \
+163 -57
fs/cachefiles/ondemand.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 - #include <linux/fdtable.h> 3 2 #include <linux/anon_inodes.h> 4 3 #include <linux/uio.h> 5 4 #include "internal.h" 5 + 6 + struct ondemand_anon_file { 7 + struct file *file; 8 + int fd; 9 + }; 10 + 11 + static inline void cachefiles_req_put(struct cachefiles_req *req) 12 + { 13 + if (refcount_dec_and_test(&req->ref)) 14 + kfree(req); 15 + } 6 16 7 17 static int cachefiles_ondemand_fd_release(struct inode *inode, 8 18 struct file *file) 9 19 { 10 20 struct cachefiles_object *object = file->private_data; 11 - struct cachefiles_cache *cache = object->volume->cache; 12 - struct cachefiles_ondemand_info *info = object->ondemand; 13 - int object_id = info->ondemand_id; 21 + struct cachefiles_cache *cache; 22 + struct cachefiles_ondemand_info *info; 23 + int object_id; 14 24 struct cachefiles_req *req; 15 - XA_STATE(xas, &cache->reqs, 0); 25 + XA_STATE(xas, NULL, 0); 26 + 27 + if (!object) 28 + return 0; 29 + 30 + info = object->ondemand; 31 + cache = object->volume->cache; 32 + xas.xa = &cache->reqs; 16 33 17 34 xa_lock(&cache->reqs); 35 + spin_lock(&info->lock); 36 + object_id = info->ondemand_id; 18 37 info->ondemand_id = CACHEFILES_ONDEMAND_ID_CLOSED; 19 38 cachefiles_ondemand_set_object_close(object); 39 + spin_unlock(&info->lock); 20 40 21 41 /* Only flush CACHEFILES_REQ_NEW marked req to avoid race with daemon_read */ 22 42 xas_for_each_marked(&xas, req, ULONG_MAX, CACHEFILES_REQ_NEW) { ··· 96 76 } 97 77 98 78 static long cachefiles_ondemand_fd_ioctl(struct file *filp, unsigned int ioctl, 99 - unsigned long arg) 79 + unsigned long id) 100 80 { 101 81 struct cachefiles_object *object = filp->private_data; 102 82 struct cachefiles_cache *cache = object->volume->cache; 103 83 struct cachefiles_req *req; 104 - unsigned long id; 84 + XA_STATE(xas, &cache->reqs, id); 105 85 106 86 if (ioctl != CACHEFILES_IOC_READ_COMPLETE) 107 87 return -EINVAL; ··· 109 89 if (!test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags)) 110 90 return -EOPNOTSUPP; 111 91 112 - id = arg; 113 - req = xa_erase(&cache->reqs, id); 114 - if (!req) 92 + xa_lock(&cache->reqs); 93 + req = xas_load(&xas); 94 + if (!req || req->msg.opcode != CACHEFILES_OP_READ || 95 + req->object != object) { 96 + xa_unlock(&cache->reqs); 115 97 return -EINVAL; 98 + } 99 + xas_store(&xas, NULL); 100 + xa_unlock(&cache->reqs); 116 101 117 102 trace_cachefiles_ondemand_cread(object, id); 118 103 complete(&req->done); ··· 141 116 { 142 117 struct cachefiles_req *req; 143 118 struct fscache_cookie *cookie; 119 + struct cachefiles_ondemand_info *info; 144 120 char *pid, *psize; 145 121 unsigned long id; 146 122 long size; 147 123 int ret; 124 + XA_STATE(xas, &cache->reqs, 0); 148 125 149 126 if (!test_bit(CACHEFILES_ONDEMAND_MODE, &cache->flags)) 150 127 return -EOPNOTSUPP; ··· 170 143 if (ret) 171 144 return ret; 172 145 173 - req = xa_erase(&cache->reqs, id); 174 - if (!req) 146 + xa_lock(&cache->reqs); 147 + xas.xa_index = id; 148 + req = xas_load(&xas); 149 + if (!req || req->msg.opcode != CACHEFILES_OP_OPEN || 150 + !req->object->ondemand->ondemand_id) { 151 + xa_unlock(&cache->reqs); 175 152 return -EINVAL; 153 + } 154 + xas_store(&xas, NULL); 155 + xa_unlock(&cache->reqs); 176 156 157 + info = req->object->ondemand; 177 158 /* fail OPEN request if copen format is invalid */ 178 159 ret = kstrtol(psize, 0, &size); 179 160 if (ret) { ··· 201 166 goto out; 202 167 } 203 168 169 + spin_lock(&info->lock); 170 + /* 171 + * The anonymous fd was closed before copen ? Fail the request. 
172 + * 173 + * t1 | t2 174 + * --------------------------------------------------------- 175 + * cachefiles_ondemand_copen 176 + * req = xa_erase(&cache->reqs, id) 177 + * // Anon fd is maliciously closed. 178 + * cachefiles_ondemand_fd_release 179 + * xa_lock(&cache->reqs) 180 + * cachefiles_ondemand_set_object_close(object) 181 + * xa_unlock(&cache->reqs) 182 + * cachefiles_ondemand_set_object_open 183 + * // No one will ever close it again. 184 + * cachefiles_ondemand_daemon_read 185 + * cachefiles_ondemand_select_req 186 + * 187 + * Get a read req but its fd is already closed. The daemon can't 188 + * issue a cread ioctl with an closed fd, then hung. 189 + */ 190 + if (info->ondemand_id == CACHEFILES_ONDEMAND_ID_CLOSED) { 191 + spin_unlock(&info->lock); 192 + req->error = -EBADFD; 193 + goto out; 194 + } 204 195 cookie = req->object->cookie; 205 196 cookie->object_size = size; 206 197 if (size) ··· 236 175 trace_cachefiles_ondemand_copen(req->object, id, size); 237 176 238 177 cachefiles_ondemand_set_object_open(req->object); 178 + spin_unlock(&info->lock); 239 179 wake_up_all(&cache->daemon_pollwq); 240 180 241 181 out: 182 + spin_lock(&info->lock); 183 + /* Need to set object close to avoid reopen status continuing */ 184 + if (info->ondemand_id == CACHEFILES_ONDEMAND_ID_CLOSED) 185 + cachefiles_ondemand_set_object_close(req->object); 186 + spin_unlock(&info->lock); 242 187 complete(&req->done); 243 188 return ret; 244 189 } ··· 272 205 return 0; 273 206 } 274 207 275 - static int cachefiles_ondemand_get_fd(struct cachefiles_req *req) 208 + static int cachefiles_ondemand_get_fd(struct cachefiles_req *req, 209 + struct ondemand_anon_file *anon_file) 276 210 { 277 211 struct cachefiles_object *object; 278 212 struct cachefiles_cache *cache; 279 213 struct cachefiles_open *load; 280 - struct file *file; 281 214 u32 object_id; 282 - int ret, fd; 215 + int ret; 283 216 284 217 object = cachefiles_grab_object(req->object, 285 218 cachefiles_obj_get_ondemand_fd); ··· 291 224 if (ret < 0) 292 225 goto err; 293 226 294 - fd = get_unused_fd_flags(O_WRONLY); 295 - if (fd < 0) { 296 - ret = fd; 227 + anon_file->fd = get_unused_fd_flags(O_WRONLY); 228 + if (anon_file->fd < 0) { 229 + ret = anon_file->fd; 297 230 goto err_free_id; 298 231 } 299 232 300 - file = anon_inode_getfile("[cachefiles]", &cachefiles_ondemand_fd_fops, 301 - object, O_WRONLY); 302 - if (IS_ERR(file)) { 303 - ret = PTR_ERR(file); 233 + anon_file->file = anon_inode_getfile("[cachefiles]", 234 + &cachefiles_ondemand_fd_fops, object, O_WRONLY); 235 + if (IS_ERR(anon_file->file)) { 236 + ret = PTR_ERR(anon_file->file); 304 237 goto err_put_fd; 305 238 } 306 239 307 - file->f_mode |= FMODE_PWRITE | FMODE_LSEEK; 308 - fd_install(fd, file); 240 + spin_lock(&object->ondemand->lock); 241 + if (object->ondemand->ondemand_id > 0) { 242 + spin_unlock(&object->ondemand->lock); 243 + /* Pair with check in cachefiles_ondemand_fd_release(). 
*/ 244 + anon_file->file->private_data = NULL; 245 + ret = -EEXIST; 246 + goto err_put_file; 247 + } 248 + 249 + anon_file->file->f_mode |= FMODE_PWRITE | FMODE_LSEEK; 309 250 310 251 load = (void *)req->msg.data; 311 - load->fd = fd; 252 + load->fd = anon_file->fd; 312 253 object->ondemand->ondemand_id = object_id; 254 + spin_unlock(&object->ondemand->lock); 313 255 314 256 cachefiles_get_unbind_pincount(cache); 315 257 trace_cachefiles_ondemand_open(object, &req->msg, load); 316 258 return 0; 317 259 260 + err_put_file: 261 + fput(anon_file->file); 262 + anon_file->file = NULL; 318 263 err_put_fd: 319 - put_unused_fd(fd); 264 + put_unused_fd(anon_file->fd); 265 + anon_file->fd = ret; 320 266 err_free_id: 321 267 xa_erase(&cache->ondemand_ids, object_id); 322 268 err: 269 + spin_lock(&object->ondemand->lock); 270 + /* Avoid marking an opened object as closed. */ 271 + if (object->ondemand->ondemand_id <= 0) 272 + cachefiles_ondemand_set_object_close(object); 273 + spin_unlock(&object->ondemand->lock); 323 274 cachefiles_put_object(object, cachefiles_obj_put_ondemand_fd); 324 275 return ret; 325 276 } ··· 379 294 return NULL; 380 295 } 381 296 297 + static inline bool cachefiles_ondemand_finish_req(struct cachefiles_req *req, 298 + struct xa_state *xas, int err) 299 + { 300 + if (unlikely(!xas || !req)) 301 + return false; 302 + 303 + if (xa_cmpxchg(xas->xa, xas->xa_index, req, NULL, 0) != req) 304 + return false; 305 + 306 + req->error = err; 307 + complete(&req->done); 308 + return true; 309 + } 310 + 382 311 ssize_t cachefiles_ondemand_daemon_read(struct cachefiles_cache *cache, 383 312 char __user *_buffer, size_t buflen) 384 313 { 385 314 struct cachefiles_req *req; 386 315 struct cachefiles_msg *msg; 387 - unsigned long id = 0; 388 316 size_t n; 389 317 int ret = 0; 318 + struct ondemand_anon_file anon_file; 390 319 XA_STATE(xas, &cache->reqs, cache->req_id_next); 391 320 392 321 xa_lock(&cache->reqs); ··· 429 330 430 331 xas_clear_mark(&xas, CACHEFILES_REQ_NEW); 431 332 cache->req_id_next = xas.xa_index + 1; 333 + refcount_inc(&req->ref); 334 + cachefiles_grab_object(req->object, cachefiles_obj_get_read_req); 432 335 xa_unlock(&cache->reqs); 433 336 434 - id = xas.xa_index; 435 - 436 337 if (msg->opcode == CACHEFILES_OP_OPEN) { 437 - ret = cachefiles_ondemand_get_fd(req); 438 - if (ret) { 439 - cachefiles_ondemand_set_object_close(req->object); 440 - goto error; 441 - } 338 + ret = cachefiles_ondemand_get_fd(req, &anon_file); 339 + if (ret) 340 + goto out; 442 341 } 443 342 444 - msg->msg_id = id; 343 + msg->msg_id = xas.xa_index; 445 344 msg->object_id = req->object->ondemand->ondemand_id; 446 345 447 - if (copy_to_user(_buffer, msg, n) != 0) { 346 + if (copy_to_user(_buffer, msg, n) != 0) 448 347 ret = -EFAULT; 449 - goto err_put_fd; 348 + 349 + if (msg->opcode == CACHEFILES_OP_OPEN) { 350 + if (ret < 0) { 351 + fput(anon_file.file); 352 + put_unused_fd(anon_file.fd); 353 + goto out; 354 + } 355 + fd_install(anon_file.fd, anon_file.file); 450 356 } 451 - 452 - /* CLOSE request has no reply */ 453 - if (msg->opcode == CACHEFILES_OP_CLOSE) { 454 - xa_erase(&cache->reqs, id); 455 - complete(&req->done); 456 - } 457 - 458 - return n; 459 - 460 - err_put_fd: 461 - if (msg->opcode == CACHEFILES_OP_OPEN) 462 - close_fd(((struct cachefiles_open *)msg->data)->fd); 463 - error: 464 - xa_erase(&cache->reqs, id); 465 - req->error = ret; 466 - complete(&req->done); 467 - return ret; 357 + out: 358 + cachefiles_put_object(req->object, cachefiles_obj_put_read_req); 359 + /* Remove error request 
and CLOSE request has no reply */ 360 + if (ret || msg->opcode == CACHEFILES_OP_CLOSE) 361 + cachefiles_ondemand_finish_req(req, &xas, ret); 362 + cachefiles_req_put(req); 363 + return ret ? ret : n; 468 364 } 469 365 470 366 typedef int (*init_req_fn)(struct cachefiles_req *req, void *private); ··· 489 395 goto out; 490 396 } 491 397 398 + refcount_set(&req->ref, 1); 492 399 req->object = object; 493 400 init_completion(&req->done); 494 401 req->msg.opcode = opcode; ··· 549 454 goto out; 550 455 551 456 wake_up_all(&cache->daemon_pollwq); 552 - wait_for_completion(&req->done); 553 - ret = req->error; 554 - kfree(req); 457 + wait: 458 + ret = wait_for_completion_killable(&req->done); 459 + if (!ret) { 460 + ret = req->error; 461 + } else { 462 + ret = -EINTR; 463 + if (!cachefiles_ondemand_finish_req(req, &xas, ret)) { 464 + /* Someone will complete it soon. */ 465 + cpu_relax(); 466 + goto wait; 467 + } 468 + } 469 + cachefiles_req_put(req); 555 470 return ret; 556 471 out: 557 472 /* Reset the object to close state in error handling path. ··· 683 578 return -ENOMEM; 684 579 685 580 object->ondemand->object = object; 581 + spin_lock_init(&object->ondemand->lock); 686 582 INIT_WORK(&object->ondemand->ondemand_work, ondemand_object_worker); 687 583 return 0; 688 584 }
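Aside, not part of the patch: the ondemand rework above makes request completion single-shot by treating removal from the xarray as the claim to complete it (xa_cmpxchg() in cachefiles_ondemand_finish_req()), so the daemon read path, copen and a killed waiter can no longer complete the same request twice. A rough single-slot userspace analogue of that claim-then-complete pattern using C11 compare-and-swap; every name below is invented for illustration:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct req {
            int error;
            bool done;
    };

    static _Atomic(struct req *) slot;      /* stand-in for one xarray index */

    /* Claim and complete a request; false if another path already did. */
    static bool finish_req(struct req *req, int err)
    {
            struct req *expected = req;

            if (!atomic_compare_exchange_strong(&slot, &expected, NULL))
                    return false;           /* lost the race */

            req->error = err;
            req->done = true;               /* kernel: complete(&req->done) */
            return true;
    }

    int main(void)
    {
            struct req r = { 0 };

            atomic_store(&slot, &r);
            printf("%d\n", finish_req(&r, -5));     /* 1: first caller wins   */
            printf("%d\n", finish_req(&r, 0));      /* 0: already completed   */
            return 0;
    }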
+9 -1
fs/debugfs/inode.c
··· 107 107 int opt; 108 108 109 109 opt = fs_parse(fc, debugfs_param_specs, param, &result); 110 - if (opt < 0) 110 + if (opt < 0) { 111 + /* 112 + * We might like to report bad mount options here; but 113 + * traditionally debugfs has ignored all mount options 114 + */ 115 + if (opt == -ENOPARAM) 116 + return 0; 117 + 111 118 return opt; 119 + } 112 120 113 121 switch (opt) { 114 122 case Opt_uid:
+2 -2
fs/file.c
··· 486 486 487 487 static unsigned int find_next_fd(struct fdtable *fdt, unsigned int start) 488 488 { 489 - unsigned int maxfd = fdt->max_fds; 489 + unsigned int maxfd = fdt->max_fds; /* always multiple of BITS_PER_LONG */ 490 490 unsigned int maxbit = maxfd / BITS_PER_LONG; 491 491 unsigned int bitbit = start / BITS_PER_LONG; 492 492 493 493 bitbit = find_next_zero_bit(fdt->full_fds_bits, maxbit, bitbit) * BITS_PER_LONG; 494 - if (bitbit > maxfd) 494 + if (bitbit >= maxfd) 495 495 return maxfd; 496 496 if (bitbit > start) 497 497 start = bitbit;
+27 -31
fs/iomap/buffered-io.c
··· 241 241 unsigned block_size = (1 << block_bits); 242 242 size_t poff = offset_in_folio(folio, *pos); 243 243 size_t plen = min_t(loff_t, folio_size(folio) - poff, length); 244 + size_t orig_plen = plen; 244 245 unsigned first = poff >> block_bits; 245 246 unsigned last = (poff + plen - 1) >> block_bits; 246 247 ··· 278 277 * handle both halves separately so that we properly zero data in the 279 278 * page cache for blocks that are entirely outside of i_size. 280 279 */ 281 - if (orig_pos <= isize && orig_pos + length > isize) { 280 + if (orig_pos <= isize && orig_pos + orig_plen > isize) { 282 281 unsigned end = offset_in_folio(folio, isize - 1) >> block_bits; 283 282 284 283 if (first <= end && last > end) ··· 878 877 size_t copied, struct folio *folio) 879 878 { 880 879 const struct iomap *srcmap = iomap_iter_srcmap(iter); 880 + loff_t old_size = iter->inode->i_size; 881 + size_t written; 881 882 882 883 if (srcmap->type == IOMAP_INLINE) { 883 884 iomap_write_end_inline(iter, folio, pos, copied); 884 - return true; 885 - } 886 - 887 - if (srcmap->flags & IOMAP_F_BUFFER_HEAD) { 888 - size_t bh_written; 889 - 890 - bh_written = block_write_end(NULL, iter->inode->i_mapping, pos, 885 + written = copied; 886 + } else if (srcmap->flags & IOMAP_F_BUFFER_HEAD) { 887 + written = block_write_end(NULL, iter->inode->i_mapping, pos, 891 888 len, copied, &folio->page, NULL); 892 - WARN_ON_ONCE(bh_written != copied && bh_written != 0); 893 - return bh_written == copied; 889 + WARN_ON_ONCE(written != copied && written != 0); 890 + } else { 891 + written = __iomap_write_end(iter->inode, pos, len, copied, 892 + folio) ? copied : 0; 894 893 } 895 894 896 - return __iomap_write_end(iter->inode, pos, len, copied, folio); 895 + /* 896 + * Update the in-memory inode size after copying the data into the page 897 + * cache. It's up to the file system to write the updated size to disk, 898 + * preferably after I/O completion so that no stale data is exposed. 899 + * Only once that's done can we unlock and release the folio. 900 + */ 901 + if (pos + written > old_size) { 902 + i_size_write(iter->inode, pos + written); 903 + iter->iomap.flags |= IOMAP_F_SIZE_CHANGED; 904 + } 905 + __iomap_put_folio(iter, pos, written, folio); 906 + 907 + if (old_size < pos) 908 + pagecache_isize_extended(iter->inode, old_size, pos); 909 + 910 + return written == copied; 897 911 } 898 912 899 913 static loff_t iomap_write_iter(struct iomap_iter *iter, struct iov_iter *i) ··· 923 907 924 908 do { 925 909 struct folio *folio; 926 - loff_t old_size; 927 910 size_t offset; /* Offset into folio */ 928 911 size_t bytes; /* Bytes to write to folio */ 929 912 size_t copied; /* Bytes copied from user */ ··· 973 958 copied = copy_folio_from_iter_atomic(folio, offset, bytes, i); 974 959 written = iomap_write_end(iter, pos, bytes, copied, folio) ? 975 960 copied : 0; 976 - 977 - /* 978 - * Update the in-memory inode size after copying the data into 979 - * the page cache. It's up to the file system to write the 980 - * updated size to disk, preferably after I/O completion so that 981 - * no stale data is exposed. Only once that's done can we 982 - * unlock and release the folio. 
983 - */ 984 - old_size = iter->inode->i_size; 985 - if (pos + written > old_size) { 986 - i_size_write(iter->inode, pos + written); 987 - iter->iomap.flags |= IOMAP_F_SIZE_CHANGED; 988 - } 989 - __iomap_put_folio(iter, pos, written, folio); 990 - 991 - if (old_size < pos) 992 - pagecache_isize_extended(iter->inode, old_size, pos); 993 961 994 962 cond_resched(); 995 963 if (unlikely(written == 0)) { ··· 1344 1346 bytes = folio_size(folio) - offset; 1345 1347 1346 1348 ret = iomap_write_end(iter, pos, bytes, bytes, folio); 1347 - __iomap_put_folio(iter, pos, bytes, folio); 1348 1349 if (WARN_ON_ONCE(!ret)) 1349 1350 return -EIO; 1350 1351 ··· 1409 1412 folio_mark_accessed(folio); 1410 1413 1411 1414 ret = iomap_write_end(iter, pos, bytes, bytes, folio); 1412 - __iomap_put_folio(iter, pos, bytes, folio); 1413 1415 if (WARN_ON_ONCE(!ret)) 1414 1416 return -EIO; 1415 1417
+48 -29
fs/nfs/dir.c
··· 1627 1627 switch (error) { 1628 1628 case 1: 1629 1629 break; 1630 - case 0: 1630 + case -ETIMEDOUT: 1631 + if (inode && (IS_ROOT(dentry) || 1632 + NFS_SERVER(inode)->flags & NFS_MOUNT_SOFTREVAL)) 1633 + error = 1; 1634 + break; 1635 + case -ESTALE: 1636 + case -ENOENT: 1637 + error = 0; 1638 + fallthrough; 1639 + default: 1631 1640 /* 1632 1641 * We can't d_drop the root of a disconnected tree: 1633 1642 * its d_hash is on the s_anon list and d_drop() would hide ··· 1691 1682 1692 1683 dir_verifier = nfs_save_change_attribute(dir); 1693 1684 ret = NFS_PROTO(dir)->lookup(dir, dentry, fhandle, fattr); 1694 - if (ret < 0) { 1695 - switch (ret) { 1696 - case -ESTALE: 1697 - case -ENOENT: 1698 - ret = 0; 1699 - break; 1700 - case -ETIMEDOUT: 1701 - if (NFS_SERVER(inode)->flags & NFS_MOUNT_SOFTREVAL) 1702 - ret = 1; 1703 - } 1685 + if (ret < 0) 1704 1686 goto out; 1705 - } 1706 1687 1707 1688 /* Request help from readdirplus */ 1708 1689 nfs_lookup_advise_force_readdirplus(dir, flags); ··· 1736 1737 unsigned int flags) 1737 1738 { 1738 1739 struct inode *inode; 1739 - int error; 1740 + int error = 0; 1740 1741 1741 1742 nfs_inc_stats(dir, NFSIOS_DENTRYREVALIDATE); 1742 1743 inode = d_inode(dentry); ··· 1781 1782 out_bad: 1782 1783 if (flags & LOOKUP_RCU) 1783 1784 return -ECHILD; 1784 - return nfs_lookup_revalidate_done(dir, dentry, inode, 0); 1785 + return nfs_lookup_revalidate_done(dir, dentry, inode, error); 1785 1786 } 1786 1787 1787 1788 static int ··· 1803 1804 if (parent != READ_ONCE(dentry->d_parent)) 1804 1805 return -ECHILD; 1805 1806 } else { 1806 - /* Wait for unlink to complete */ 1807 + /* Wait for unlink to complete - see unblock_revalidate() */ 1807 1808 wait_var_event(&dentry->d_fsdata, 1808 - dentry->d_fsdata != NFS_FSDATA_BLOCKED); 1809 + smp_load_acquire(&dentry->d_fsdata) 1810 + != NFS_FSDATA_BLOCKED); 1809 1811 parent = dget_parent(dentry); 1810 1812 ret = reval(d_inode(parent), dentry, flags); 1811 1813 dput(parent); ··· 1817 1817 static int nfs_lookup_revalidate(struct dentry *dentry, unsigned int flags) 1818 1818 { 1819 1819 return __nfs_lookup_revalidate(dentry, flags, nfs_do_lookup_revalidate); 1820 + } 1821 + 1822 + static void block_revalidate(struct dentry *dentry) 1823 + { 1824 + /* old devname - just in case */ 1825 + kfree(dentry->d_fsdata); 1826 + 1827 + /* Any new reference that could lead to an open 1828 + * will take ->d_lock in lookup_open() -> d_lookup(). 1829 + * Holding this lock ensures we cannot race with 1830 + * __nfs_lookup_revalidate() and removes and need 1831 + * for further barriers. 
1832 + */ 1833 + lockdep_assert_held(&dentry->d_lock); 1834 + 1835 + dentry->d_fsdata = NFS_FSDATA_BLOCKED; 1836 + } 1837 + 1838 + static void unblock_revalidate(struct dentry *dentry) 1839 + { 1840 + /* store_release ensures wait_var_event() sees the update */ 1841 + smp_store_release(&dentry->d_fsdata, NULL); 1842 + wake_up_var(&dentry->d_fsdata); 1820 1843 } 1821 1844 1822 1845 /* ··· 2278 2255 */ 2279 2256 int error = 0; 2280 2257 2258 + if (dentry->d_name.len > NFS_SERVER(dir)->namelen) 2259 + return -ENAMETOOLONG; 2260 + 2281 2261 if (open_flags & O_CREAT) { 2282 2262 file->f_mode |= FMODE_CREATED; 2283 2263 error = nfs_do_create(dir, dentry, mode, open_flags); ··· 2575 2549 spin_unlock(&dentry->d_lock); 2576 2550 goto out; 2577 2551 } 2578 - /* old devname */ 2579 - kfree(dentry->d_fsdata); 2580 - dentry->d_fsdata = NFS_FSDATA_BLOCKED; 2552 + block_revalidate(dentry); 2581 2553 2582 2554 spin_unlock(&dentry->d_lock); 2583 2555 error = nfs_safe_remove(dentry); 2584 2556 nfs_dentry_remove_handle_error(dir, dentry, error); 2585 - dentry->d_fsdata = NULL; 2586 - wake_up_var(&dentry->d_fsdata); 2557 + unblock_revalidate(dentry); 2587 2558 out: 2588 2559 trace_nfs_unlink_exit(dir, dentry, error); 2589 2560 return error; ··· 2687 2664 { 2688 2665 struct dentry *new_dentry = data->new_dentry; 2689 2666 2690 - new_dentry->d_fsdata = NULL; 2691 - wake_up_var(&new_dentry->d_fsdata); 2667 + unblock_revalidate(new_dentry); 2692 2668 } 2693 2669 2694 2670 /* ··· 2749 2727 if (WARN_ON(new_dentry->d_flags & DCACHE_NFSFS_RENAMED) || 2750 2728 WARN_ON(new_dentry->d_fsdata == NFS_FSDATA_BLOCKED)) 2751 2729 goto out; 2752 - if (new_dentry->d_fsdata) { 2753 - /* old devname */ 2754 - kfree(new_dentry->d_fsdata); 2755 - new_dentry->d_fsdata = NULL; 2756 - } 2757 2730 2758 2731 spin_lock(&new_dentry->d_lock); 2759 2732 if (d_count(new_dentry) > 2) { ··· 2770 2753 new_dentry = dentry; 2771 2754 new_inode = NULL; 2772 2755 } else { 2773 - new_dentry->d_fsdata = NFS_FSDATA_BLOCKED; 2756 + block_revalidate(new_dentry); 2774 2757 must_unblock = true; 2775 2758 spin_unlock(&new_dentry->d_lock); 2776 2759 } ··· 2782 2765 task = nfs_async_rename(old_dir, new_dir, old_dentry, new_dentry, 2783 2766 must_unblock ? nfs_unblock_rename : NULL); 2784 2767 if (IS_ERR(task)) { 2768 + if (must_unblock) 2769 + unblock_revalidate(new_dentry); 2785 2770 error = PTR_ERR(task); 2786 2771 goto out; 2787 2772 }
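Aside, not from the patch: unblock_revalidate() above pairs smp_store_release() with the smp_load_acquire() in the wait_var_event() condition, so that once a waiter sees d_fsdata cleared it also sees everything the unblocking task wrote beforehand. A loose userspace analogue of that release/acquire handoff in C11 (identifiers invented; the kernel sleeps in wait_var_event()/wake_up_var() rather than spinning):

    #include <stdatomic.h>

    #define BLOCKED ((void *)1L)

    struct dentry_stub {
            _Atomic(void *) fsdata;         /* stand-in for dentry->d_fsdata */
            int result;                     /* data published by the unblocker */
    };

    static void block(struct dentry_stub *d)
    {
            atomic_store_explicit(&d->fsdata, BLOCKED, memory_order_relaxed);
    }

    static void unblock(struct dentry_stub *d, int result)
    {
            d->result = result;             /* plain write ...                 */
            atomic_store_explicit(&d->fsdata, NULL,  /* ... published by release */
                                  memory_order_release);
    }

    static int wait_unblocked(struct dentry_stub *d)
    {
            while (atomic_load_explicit(&d->fsdata, memory_order_acquire) == BLOCKED)
                    ;                       /* kernel: wait_var_event() */
            return d->result;               /* acquire saw the release store */
    }

    int main(void)
    {
            struct dentry_stub d = { 0 };

            block(&d);
            unblock(&d, 42);
            return wait_unblocked(&d) == 42 ? 0 : 1;
    }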
+23 -1
fs/nfs/nfs4proc.c
··· 4023 4023 } 4024 4024 } 4025 4025 4026 + static bool _is_same_nfs4_pathname(struct nfs4_pathname *path1, 4027 + struct nfs4_pathname *path2) 4028 + { 4029 + int i; 4030 + 4031 + if (path1->ncomponents != path2->ncomponents) 4032 + return false; 4033 + for (i = 0; i < path1->ncomponents; i++) { 4034 + if (path1->components[i].len != path2->components[i].len) 4035 + return false; 4036 + if (memcmp(path1->components[i].data, path2->components[i].data, 4037 + path1->components[i].len)) 4038 + return false; 4039 + } 4040 + return true; 4041 + } 4042 + 4026 4043 static int _nfs4_discover_trunking(struct nfs_server *server, 4027 4044 struct nfs_fh *fhandle) 4028 4045 { ··· 4073 4056 if (status) 4074 4057 goto out_free_3; 4075 4058 4076 - for (i = 0; i < locations->nlocations; i++) 4059 + for (i = 0; i < locations->nlocations; i++) { 4060 + if (!_is_same_nfs4_pathname(&locations->fs_path, 4061 + &locations->locations[i].rootpath)) 4062 + continue; 4077 4063 test_fs_location_for_trunking(&locations->locations[i], clp, 4078 4064 server); 4065 + } 4079 4066 out_free_3: 4080 4067 kfree(locations->fattr); 4081 4068 out_free_2: ··· 6289 6268 if (status == 0) 6290 6269 nfs_setsecurity(inode, fattr); 6291 6270 6271 + nfs_free_fattr(fattr); 6292 6272 return status; 6293 6273 } 6294 6274 #endif /* CONFIG_NFS_V4_SECURITY_LABEL */
+5
fs/nfs/pagelist.c
··· 1545 1545 continue; 1546 1546 } else if (index == prev->wb_index + 1) 1547 1547 continue; 1548 + /* 1549 + * We will submit more requests after these. Indicate 1550 + * this to the underlying layers. 1551 + */ 1552 + desc->pg_moreio = 1; 1548 1553 nfs_pageio_complete(desc); 1549 1554 break; 1550 1555 }
+1 -1
fs/nfs/symlink.c
··· 41 41 error: 42 42 folio_set_error(folio); 43 43 folio_unlock(folio); 44 - return -EIO; 44 + return error; 45 45 } 46 46 47 47 static const char *nfs_get_link(struct dentry *dentry,
+1 -1
fs/nilfs2/dir.c
··· 607 607 608 608 kaddr = nilfs_get_folio(inode, i, &folio); 609 609 if (IS_ERR(kaddr)) 610 - continue; 610 + return 0; 611 611 612 612 de = (struct nilfs_dir_entry *)kaddr; 613 613 kaddr += nilfs_last_byte(inode, i) - NILFS_DIR_REC_LEN(1);
+3
fs/nilfs2/segment.c
··· 1652 1652 if (bh->b_folio != bd_folio) { 1653 1653 if (bd_folio) { 1654 1654 folio_lock(bd_folio); 1655 + folio_wait_writeback(bd_folio); 1655 1656 folio_clear_dirty_for_io(bd_folio); 1656 1657 folio_start_writeback(bd_folio); 1657 1658 folio_unlock(bd_folio); ··· 1666 1665 if (bh == segbuf->sb_super_root) { 1667 1666 if (bh->b_folio != bd_folio) { 1668 1667 folio_lock(bd_folio); 1668 + folio_wait_writeback(bd_folio); 1669 1669 folio_clear_dirty_for_io(bd_folio); 1670 1670 folio_start_writeback(bd_folio); 1671 1671 folio_unlock(bd_folio); ··· 1683 1681 } 1684 1682 if (bd_folio) { 1685 1683 folio_lock(bd_folio); 1684 + folio_wait_writeback(bd_folio); 1686 1685 folio_clear_dirty_for_io(bd_folio); 1687 1686 folio_start_writeback(bd_folio); 1688 1687 folio_unlock(bd_folio);
+1 -1
fs/proc/base.c
··· 3214 3214 mm = get_task_mm(task); 3215 3215 if (mm) { 3216 3216 seq_printf(m, "ksm_rmap_items %lu\n", mm->ksm_rmap_items); 3217 - seq_printf(m, "ksm_zero_pages %lu\n", mm->ksm_zero_pages); 3217 + seq_printf(m, "ksm_zero_pages %ld\n", mm_ksm_zero_pages(mm)); 3218 3218 seq_printf(m, "ksm_merging_pages %lu\n", mm->ksm_merging_pages); 3219 3219 seq_printf(m, "ksm_process_profit %ld\n", ksm_process_profit(mm)); 3220 3220 mmput(mm);
-3
fs/smb/client/smb2pdu.c
··· 4577 4577 if (rdata->subreq.start < rdata->subreq.rreq->i_size) 4578 4578 rdata->result = 0; 4579 4579 } 4580 - if (rdata->result == 0 || rdata->result == -EAGAIN) 4581 - iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes); 4582 4580 rdata->credits.value = 0; 4583 4581 netfs_subreq_terminated(&rdata->subreq, 4584 4582 (rdata->result == 0 || rdata->result == -EAGAIN) ? ··· 4787 4789 wdata->result = -ENOSPC; 4788 4790 else 4789 4791 wdata->subreq.len = written; 4790 - iov_iter_advance(&wdata->subreq.io_iter, written); 4791 4792 break; 4792 4793 case MID_REQUEST_SUBMITTED: 4793 4794 case MID_RETRY_NEEDED:
+1 -1
fs/smb/client/smb2transport.c
··· 216 216 } 217 217 tcon = smb2_find_smb_sess_tcon_unlocked(ses, tid); 218 218 if (!tcon) { 219 - cifs_put_smb_ses(ses); 220 219 spin_unlock(&cifs_tcp_ses_lock); 220 + cifs_put_smb_ses(ses); 221 221 return NULL; 222 222 } 223 223 spin_unlock(&cifs_tcp_ses_lock);
+2 -2
include/dt-bindings/net/ti-dp83867.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 1 + /* SPDX-License-Identifier: GPL-2.0-only OR MIT */ 2 2 /* 3 3 * Device Tree constants for the Texas Instruments DP83867 PHY 4 4 * 5 5 * Author: Dan Murphy <dmurphy@ti.com> 6 6 * 7 - * Copyright: (C) 2015 Texas Instruments, Inc. 7 + * Copyright (C) 2015-2024 Texas Instruments Incorporated - https://www.ti.com/ 8 8 */ 9 9 10 10 #ifndef _DT_BINDINGS_TI_DP83867_H
+2 -2
include/dt-bindings/net/ti-dp83869.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 1 + /* SPDX-License-Identifier: GPL-2.0-only OR MIT */ 2 2 /* 3 3 * Device Tree constants for the Texas Instruments DP83869 PHY 4 4 * 5 5 * Author: Dan Murphy <dmurphy@ti.com> 6 6 * 7 - * Copyright: (C) 2019 Texas Instruments, Inc. 7 + * Copyright (C) 2015-2024 Texas Instruments Incorporated - https://www.ti.com/ 8 8 */ 9 9 10 10 #ifndef _DT_BINDINGS_TI_DP83869_H
+3 -3
include/linux/atomic/atomic-arch-fallback.h
··· 2242 2242 2243 2243 /** 2244 2244 * raw_atomic_sub_and_test() - atomic subtract and test if zero with full ordering 2245 - * @i: int value to add 2245 + * @i: int value to subtract 2246 2246 * @v: pointer to atomic_t 2247 2247 * 2248 2248 * Atomically updates @v to (@v - @i) with full ordering. ··· 4368 4368 4369 4369 /** 4370 4370 * raw_atomic64_sub_and_test() - atomic subtract and test if zero with full ordering 4371 - * @i: s64 value to add 4371 + * @i: s64 value to subtract 4372 4372 * @v: pointer to atomic64_t 4373 4373 * 4374 4374 * Atomically updates @v to (@v - @i) with full ordering. ··· 4690 4690 } 4691 4691 4692 4692 #endif /* _LINUX_ATOMIC_FALLBACK_H */ 4693 - // 14850c0b0db20c62fdc78ccd1d42b98b88d76331 4693 + // b565db590afeeff0d7c9485ccbca5bb6e155749f
+4 -4
include/linux/atomic/atomic-instrumented.h
··· 1349 1349 1350 1350 /** 1351 1351 * atomic_sub_and_test() - atomic subtract and test if zero with full ordering 1352 - * @i: int value to add 1352 + * @i: int value to subtract 1353 1353 * @v: pointer to atomic_t 1354 1354 * 1355 1355 * Atomically updates @v to (@v - @i) with full ordering. ··· 2927 2927 2928 2928 /** 2929 2929 * atomic64_sub_and_test() - atomic subtract and test if zero with full ordering 2930 - * @i: s64 value to add 2930 + * @i: s64 value to subtract 2931 2931 * @v: pointer to atomic64_t 2932 2932 * 2933 2933 * Atomically updates @v to (@v - @i) with full ordering. ··· 4505 4505 4506 4506 /** 4507 4507 * atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering 4508 - * @i: long value to add 4508 + * @i: long value to subtract 4509 4509 * @v: pointer to atomic_long_t 4510 4510 * 4511 4511 * Atomically updates @v to (@v - @i) with full ordering. ··· 5050 5050 5051 5051 5052 5052 #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ 5053 - // ce5b65e0f1f8a276268b667194581d24bed219d4 5053 + // 8829b337928e9508259079d32581775ececd415b
+2 -2
include/linux/atomic/atomic-long.h
··· 1535 1535 1536 1536 /** 1537 1537 * raw_atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering 1538 - * @i: long value to add 1538 + * @i: long value to subtract 1539 1539 * @v: pointer to atomic_long_t 1540 1540 * 1541 1541 * Atomically updates @v to (@v - @i) with full ordering. ··· 1809 1809 } 1810 1810 1811 1811 #endif /* _LINUX_ATOMIC_LONG_H */ 1812 - // 1c4a26fc77f345342953770ebe3c4d08e7ce2f9a 1812 + // eadf183c3600b8b92b91839dd3be6bcc560c752d
+1 -1
include/linux/cdrom.h
··· 77 77 unsigned int clearing, int slot); 78 78 int (*tray_move) (struct cdrom_device_info *, int); 79 79 int (*lock_door) (struct cdrom_device_info *, int); 80 - int (*select_speed) (struct cdrom_device_info *, int); 80 + int (*select_speed) (struct cdrom_device_info *, unsigned long); 81 81 int (*get_last_session) (struct cdrom_device_info *, 82 82 struct cdrom_multisession *); 83 83 int (*get_mcn) (struct cdrom_device_info *,
+8 -2
include/linux/huge_mm.h
··· 269 269 MTHP_STAT_ANON_FAULT_ALLOC, 270 270 MTHP_STAT_ANON_FAULT_FALLBACK, 271 271 MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE, 272 - MTHP_STAT_ANON_SWPOUT, 273 - MTHP_STAT_ANON_SWPOUT_FALLBACK, 272 + MTHP_STAT_SWPOUT, 273 + MTHP_STAT_SWPOUT_FALLBACK, 274 274 __MTHP_STAT_COUNT 275 275 }; 276 276 ··· 278 278 unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT]; 279 279 }; 280 280 281 + #ifdef CONFIG_SYSFS 281 282 DECLARE_PER_CPU(struct mthp_stat, mthp_stats); 282 283 283 284 static inline void count_mthp_stat(int order, enum mthp_stat_item item) ··· 288 287 289 288 this_cpu_inc(mthp_stats.stats[order][item]); 290 289 } 290 + #else 291 + static inline void count_mthp_stat(int order, enum mthp_stat_item item) 292 + { 293 + } 294 + #endif 291 295 292 296 #define transparent_hugepage_use_zero_page() \ 293 297 (transparent_hugepage_flags & \
+1 -1
include/linux/iommu.h
··· 1533 1533 static inline struct iommu_sva * 1534 1534 iommu_sva_bind_device(struct device *dev, struct mm_struct *mm) 1535 1535 { 1536 - return NULL; 1536 + return ERR_PTR(-ENODEV); 1537 1537 } 1538 1538 1539 1539 static inline void iommu_sva_unbind_device(struct iommu_sva *handle)
+14 -3
include/linux/ksm.h
··· 33 33 */ 34 34 #define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte)) 35 35 36 - extern unsigned long ksm_zero_pages; 36 + extern atomic_long_t ksm_zero_pages; 37 + 38 + static inline void ksm_map_zero_page(struct mm_struct *mm) 39 + { 40 + atomic_long_inc(&ksm_zero_pages); 41 + atomic_long_inc(&mm->ksm_zero_pages); 42 + } 37 43 38 44 static inline void ksm_might_unmap_zero_page(struct mm_struct *mm, pte_t pte) 39 45 { 40 46 if (is_ksm_zero_pte(pte)) { 41 - ksm_zero_pages--; 42 - mm->ksm_zero_pages--; 47 + atomic_long_dec(&ksm_zero_pages); 48 + atomic_long_dec(&mm->ksm_zero_pages); 43 49 } 50 + } 51 + 52 + static inline long mm_ksm_zero_pages(struct mm_struct *mm) 53 + { 54 + return atomic_long_read(&mm->ksm_zero_pages); 44 55 } 45 56 46 57 static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
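Aside, not part of the patch: the ksm.h change converts the zero-page counters to atomic_long_t and hides the arithmetic behind small helpers (ksm_map_zero_page(), mm_ksm_zero_pages()), so callers never touch the counters directly and readers get a coherent signed value. A trivial userspace sketch of the same helper-around-atomic-counter pattern in C11, with made-up names:

    #include <stdatomic.h>
    #include <stdio.h>

    struct mm { atomic_long zero_pages; };  /* per-mm counter */

    static atomic_long total_zero_pages;    /* global counter */

    static void map_zero_page(struct mm *mm)
    {
            atomic_fetch_add(&total_zero_pages, 1);
            atomic_fetch_add(&mm->zero_pages, 1);
    }

    static void unmap_zero_page(struct mm *mm)
    {
            atomic_fetch_sub(&total_zero_pages, 1);
            atomic_fetch_sub(&mm->zero_pages, 1);
    }

    static long mm_zero_pages(struct mm *mm)
    {
            return atomic_load(&mm->zero_pages);
    }

    int main(void)
    {
            struct mm mm = { 0 };

            map_zero_page(&mm);
            map_zero_page(&mm);
            unmap_zero_page(&mm);
            printf("%ld\n", mm_zero_pages(&mm));    /* 1 */
            return 0;
    }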
-5
include/linux/lockdep.h
··· 297 297 .wait_type_inner = _wait_type, \ 298 298 .lock_type = LD_LOCK_WAIT_OVERRIDE, } 299 299 300 - #define lock_map_assert_held(l) \ 301 - lockdep_assert(lock_is_held(l) != LOCK_STATE_NOT_HELD) 302 - 303 300 #else /* !CONFIG_LOCKDEP */ 304 301 305 302 static inline void lockdep_init_task(struct task_struct *task) ··· 387 390 388 391 #define DEFINE_WAIT_OVERRIDE_MAP(_name, _wait_type) \ 389 392 struct lockdep_map __maybe_unused _name = {} 390 - 391 - #define lock_map_assert_held(l) do { (void)(l); } while (0) 392 393 393 394 #endif /* !LOCKDEP */ 394 395
+1 -1
include/linux/mm_types.h
··· 985 985 * Represent how many empty pages are merged with kernel zero 986 986 * pages when enabling KSM use_zero_pages. 987 987 */ 988 - unsigned long ksm_zero_pages; 988 + atomic_long_t ksm_zero_pages; 989 989 #endif /* CONFIG_KSM */ 990 990 #ifdef CONFIG_LRU_GEN_WALKS_MMU 991 991 struct {
+1 -1
include/linux/netfs.h
··· 521 521 522 522 /** 523 523 * netfs_wait_for_outstanding_io - Wait for outstanding I/O to complete 524 - * @ctx: The netfs inode to wait on 524 + * @inode: The netfs inode to wait on 525 525 * 526 526 * Wait for outstanding I/O requests of any type to complete. This is intended 527 527 * to be called from inode eviction routines. This makes sure that any
-2
include/linux/pci.h
··· 413 413 struct resource driver_exclusive_resource; /* driver exclusive resource ranges */ 414 414 415 415 bool match_driver; /* Skip attaching driver */ 416 - struct lock_class_key cfg_access_key; 417 - struct lockdep_map cfg_access_lock; 418 416 419 417 unsigned int transparent:1; /* Subtractive decode bridge */ 420 418 unsigned int io_window:1; /* Bridge has I/O window */
+2 -2
include/linux/pse-pd/pse.h
··· 167 167 struct netlink_ext_ack *extack, 168 168 struct pse_control_status *status) 169 169 { 170 - return -ENOTSUPP; 170 + return -EOPNOTSUPP; 171 171 } 172 172 173 173 static inline int pse_ethtool_set_config(struct pse_control *psec, 174 174 struct netlink_ext_ack *extack, 175 175 const struct pse_control_config *config) 176 176 { 177 - return -ENOTSUPP; 177 + return -EOPNOTSUPP; 178 178 } 179 179 180 180 static inline bool pse_has_podl(struct pse_control *psec)
+32 -4
include/net/bluetooth/hci_core.h
··· 2113 2113 { 2114 2114 u16 max_latency; 2115 2115 2116 - if (min > max || min < 6 || max > 3200) 2116 + if (min > max) { 2117 + BT_WARN("min %d > max %d", min, max); 2117 2118 return -EINVAL; 2119 + } 2118 2120 2119 - if (to_multiplier < 10 || to_multiplier > 3200) 2121 + if (min < 6) { 2122 + BT_WARN("min %d < 6", min); 2120 2123 return -EINVAL; 2124 + } 2121 2125 2122 - if (max >= to_multiplier * 8) 2126 + if (max > 3200) { 2127 + BT_WARN("max %d > 3200", max); 2123 2128 return -EINVAL; 2129 + } 2130 + 2131 + if (to_multiplier < 10) { 2132 + BT_WARN("to_multiplier %d < 10", to_multiplier); 2133 + return -EINVAL; 2134 + } 2135 + 2136 + if (to_multiplier > 3200) { 2137 + BT_WARN("to_multiplier %d > 3200", to_multiplier); 2138 + return -EINVAL; 2139 + } 2140 + 2141 + if (max >= to_multiplier * 8) { 2142 + BT_WARN("max %d >= to_multiplier %d * 8", max, to_multiplier); 2143 + return -EINVAL; 2144 + } 2124 2145 2125 2146 max_latency = (to_multiplier * 4 / max) - 1; 2126 - if (latency > 499 || latency > max_latency) 2147 + if (latency > 499) { 2148 + BT_WARN("latency %d > 499", latency); 2127 2149 return -EINVAL; 2150 + } 2151 + 2152 + if (latency > max_latency) { 2153 + BT_WARN("latency %d > max_latency %d", latency, max_latency); 2154 + return -EINVAL; 2155 + } 2128 2156 2129 2157 return 0; 2130 2158 }
+3 -2
include/net/ip_tunnels.h
··· 473 473 474 474 /* Variant of pskb_inet_may_pull(). 475 475 */ 476 - static inline bool skb_vlan_inet_prepare(struct sk_buff *skb) 476 + static inline bool skb_vlan_inet_prepare(struct sk_buff *skb, 477 + bool inner_proto_inherit) 477 478 { 478 - int nhlen = 0, maclen = ETH_HLEN; 479 + int nhlen = 0, maclen = inner_proto_inherit ? 0 : ETH_HLEN; 479 480 __be16 type = skb->protocol; 480 481 481 482 /* Essentially this is skb_protocol(skb, true)
+7 -1
include/trace/events/cachefiles.h
··· 33 33 cachefiles_obj_see_withdrawal, 34 34 cachefiles_obj_get_ondemand_fd, 35 35 cachefiles_obj_put_ondemand_fd, 36 + cachefiles_obj_get_read_req, 37 + cachefiles_obj_put_read_req, 36 38 }; 37 39 38 40 enum fscache_why_object_killed { ··· 129 127 EM(cachefiles_obj_see_lookup_cookie, "SEE lookup_cookie") \ 130 128 EM(cachefiles_obj_see_lookup_failed, "SEE lookup_failed") \ 131 129 EM(cachefiles_obj_see_withdraw_cookie, "SEE withdraw_cookie") \ 132 - E_(cachefiles_obj_see_withdrawal, "SEE withdrawal") 130 + EM(cachefiles_obj_see_withdrawal, "SEE withdrawal") \ 131 + EM(cachefiles_obj_get_ondemand_fd, "GET ondemand_fd") \ 132 + EM(cachefiles_obj_put_ondemand_fd, "PUT ondemand_fd") \ 133 + EM(cachefiles_obj_get_read_req, "GET read_req") \ 134 + E_(cachefiles_obj_put_read_req, "PUT read_req") 133 135 134 136 #define cachefiles_coherency_traces \ 135 137 EM(cachefiles_coherency_check_aux, "BAD aux ") \
+2
include/uapi/linux/input-event-codes.h
··· 618 618 #define KEY_CAMERA_ACCESS_ENABLE 0x24b /* Enables programmatic access to camera devices. (HUTRR72) */ 619 619 #define KEY_CAMERA_ACCESS_DISABLE 0x24c /* Disables programmatic access to camera devices. (HUTRR72) */ 620 620 #define KEY_CAMERA_ACCESS_TOGGLE 0x24d /* Toggles the current state of the camera access control. (HUTRR72) */ 621 + #define KEY_ACCESSIBILITY 0x24e /* Toggles the system bound accessibility UI/command (HUTRR116) */ 622 + #define KEY_DO_NOT_DISTURB 0x24f /* Toggles the system-wide "Do Not Disturb" control (HUTRR94)*/ 621 623 622 624 #define KEY_BRIGHTNESS_MIN 0x250 /* Set Brightness to Minimum */ 623 625 #define KEY_BRIGHTNESS_MAX 0x251 /* Set Brightness to Maximum */
+1 -1
include/uapi/linux/stat.h
··· 126 126 __u64 stx_mnt_id; 127 127 __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */ 128 128 __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */ 129 - __u64 stx_subvol; /* Subvolume identifier */ 130 129 /* 0xa0 */ 130 + __u64 stx_subvol; /* Subvolume identifier */ 131 131 __u64 __spare3[11]; /* Spare space for future expansion */ 132 132 /* 0x100 */ 133 133 };
+5 -5
io_uring/io-wq.c
··· 927 927 { 928 928 struct io_wq_acct *acct = io_work_get_acct(wq, work); 929 929 unsigned long work_flags = work->flags; 930 - struct io_cb_cancel_data match; 930 + struct io_cb_cancel_data match = { 931 + .fn = io_wq_work_match_item, 932 + .data = work, 933 + .cancel_all = false, 934 + }; 931 935 bool do_create; 932 936 933 937 /* ··· 969 965 raw_spin_unlock(&wq->lock); 970 966 971 967 /* fatal condition, failed to create the first worker */ 972 - match.fn = io_wq_work_match_item, 973 - match.data = work, 974 - match.cancel_all = false, 975 - 976 968 io_acct_cancel_pending_work(wq, acct, &match); 977 969 } 978 970 }
+1 -1
io_uring/io_uring.h
··· 433 433 { 434 434 if (req->flags & REQ_F_CAN_POLL) 435 435 return true; 436 - if (file_can_poll(req->file)) { 436 + if (req->file && file_can_poll(req->file)) { 437 437 req->flags |= REQ_F_CAN_POLL; 438 438 return true; 439 439 }
+12 -10
io_uring/napi.c
··· 261 261 } 262 262 263 263 /* 264 - * __io_napi_adjust_timeout() - Add napi id to the busy poll list 264 + * __io_napi_adjust_timeout() - adjust busy loop timeout 265 265 * @ctx: pointer to io-uring context structure 266 266 * @iowq: pointer to io wait queue 267 267 * @ts: pointer to timespec or NULL 268 268 * 269 269 * Adjust the busy loop timeout according to timespec and busy poll timeout. 270 + * If the specified NAPI timeout is bigger than the wait timeout, then adjust 271 + * the NAPI timeout accordingly. 270 272 */ 271 273 void __io_napi_adjust_timeout(struct io_ring_ctx *ctx, struct io_wait_queue *iowq, 272 274 struct timespec64 *ts) ··· 276 274 unsigned int poll_to = READ_ONCE(ctx->napi_busy_poll_to); 277 275 278 276 if (ts) { 279 - struct timespec64 poll_to_ts = ns_to_timespec64(1000 * (s64)poll_to); 277 + struct timespec64 poll_to_ts; 280 278 281 - if (timespec64_compare(ts, &poll_to_ts) > 0) { 282 - *ts = timespec64_sub(*ts, poll_to_ts); 283 - } else { 284 - u64 to = timespec64_to_ns(ts); 285 - 286 - do_div(to, 1000); 287 - ts->tv_sec = 0; 288 - ts->tv_nsec = 0; 279 + poll_to_ts = ns_to_timespec64(1000 * (s64)poll_to); 280 + if (timespec64_compare(ts, &poll_to_ts) < 0) { 281 + s64 poll_to_ns = timespec64_to_ns(ts); 282 + if (poll_to_ns > 0) { 283 + u64 val = poll_to_ns + 999; 284 + do_div(val, (s64) 1000); 285 + poll_to = val; 286 + } 289 287 } 290 288 } 291 289
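Aside, not part of the diff: the napi.c hunk clamps the busy-poll timeout to the caller's wait budget, converting the nanosecond budget to whole microseconds rounded up ((ns + 999) / 1000) so a short but non-zero wait still yields at least one microsecond of polling. A small standalone illustration of that arithmetic; clamp_busy_poll_us() is an invented name, not a kernel function:

    #include <stdint.h>
    #include <stdio.h>

    /* Clamp a busy-poll timeout (us) to a wait budget given in ns, rounding up. */
    static unsigned int clamp_busy_poll_us(unsigned int poll_to_us, int64_t wait_ns)
    {
            if (wait_ns > 0 && wait_ns < (int64_t)poll_to_us * 1000)
                    return (unsigned int)((wait_ns + 999) / 1000);
            return poll_to_us;
    }

    int main(void)
    {
            printf("%u\n", clamp_busy_poll_us(100, 50000));  /* 50             */
            printf("%u\n", clamp_busy_poll_us(100, 50001));  /* 51: rounds up  */
            printf("%u\n", clamp_busy_poll_us(100, 200000)); /* 100: unchanged */
            return 0;
    }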
+4
io_uring/register.c
··· 355 355 } 356 356 357 357 if (sqd) { 358 + mutex_unlock(&ctx->uring_lock); 358 359 mutex_unlock(&sqd->lock); 359 360 io_put_sq_data(sqd); 361 + mutex_lock(&ctx->uring_lock); 360 362 } 361 363 362 364 if (copy_to_user(arg, new_count, sizeof(new_count))) ··· 382 380 return 0; 383 381 err: 384 382 if (sqd) { 383 + mutex_unlock(&ctx->uring_lock); 385 384 mutex_unlock(&sqd->lock); 386 385 io_put_sq_data(sqd); 386 + mutex_lock(&ctx->uring_lock); 387 387 } 388 388 return ret; 389 389 }
+13
kernel/events/core.c
··· 5384 5384 again: 5385 5385 mutex_lock(&event->child_mutex); 5386 5386 list_for_each_entry(child, &event->child_list, child_list) { 5387 + void *var = NULL; 5387 5388 5388 5389 /* 5389 5390 * Cannot change, child events are not migrated, see the ··· 5425 5424 * this can't be the last reference. 5426 5425 */ 5427 5426 put_event(event); 5427 + } else { 5428 + var = &ctx->refcount; 5428 5429 } 5429 5430 5430 5431 mutex_unlock(&event->child_mutex); 5431 5432 mutex_unlock(&ctx->mutex); 5432 5433 put_ctx(ctx); 5434 + 5435 + if (var) { 5436 + /* 5437 + * If perf_event_free_task() has deleted all events from the 5438 + * ctx while the child_mutex got released above, make sure to 5439 + * notify about the preceding put_ctx(). 5440 + */ 5441 + smp_mb(); /* pairs with wait_var_event() */ 5442 + wake_up_var(var); 5443 + } 5433 5444 goto again; 5434 5445 } 5435 5446 mutex_unlock(&event->child_mutex);
+1 -1
mm/filemap.c
··· 1000 1000 do { 1001 1001 cpuset_mems_cookie = read_mems_allowed_begin(); 1002 1002 n = cpuset_mem_spread_node(); 1003 - folio = __folio_alloc_node(gfp, order, n); 1003 + folio = __folio_alloc_node_noprof(gfp, order, n); 1004 1004 } while (!folio && read_mems_allowed_retry(cpuset_mems_cookie)); 1005 1005 1006 1006 return folio;
+4 -4
mm/huge_memory.c
··· 558 558 DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC); 559 559 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK); 560 560 DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE); 561 - DEFINE_MTHP_STAT_ATTR(anon_swpout, MTHP_STAT_ANON_SWPOUT); 562 - DEFINE_MTHP_STAT_ATTR(anon_swpout_fallback, MTHP_STAT_ANON_SWPOUT_FALLBACK); 561 + DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT); 562 + DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK); 563 563 564 564 static struct attribute *stats_attrs[] = { 565 565 &anon_fault_alloc_attr.attr, 566 566 &anon_fault_fallback_attr.attr, 567 567 &anon_fault_fallback_charge_attr.attr, 568 - &anon_swpout_attr.attr, 569 - &anon_swpout_fallback_attr.attr, 568 + &swpout_attr.attr, 569 + &swpout_fallback_attr.attr, 570 570 NULL, 571 571 }; 572 572
+14 -2
mm/hugetlb.c
··· 5768 5768 * do_exit() will not see it, and will keep the reservation 5769 5769 * forever. 5770 5770 */ 5771 - if (adjust_reservation && vma_needs_reservation(h, vma, address)) 5772 - vma_add_reservation(h, vma, address); 5771 + if (adjust_reservation) { 5772 + int rc = vma_needs_reservation(h, vma, address); 5773 + 5774 + if (rc < 0) 5775 + /* Pressumably allocate_file_region_entries failed 5776 + * to allocate a file_region struct. Clear 5777 + * hugetlb_restore_reserve so that global reserve 5778 + * count will not be incremented by free_huge_folio. 5779 + * Act as if we consumed the reservation. 5780 + */ 5781 + folio_clear_hugetlb_restore_reserve(page_folio(page)); 5782 + else if (rc) 5783 + vma_add_reservation(h, vma, address); 5784 + } 5773 5785 5774 5786 tlb_remove_page_size(tlb, page, huge_page_size(h)); 5775 5787 /*
+11 -4
mm/kmsan/core.c
··· 196 196 u32 origin, bool checked) 197 197 { 198 198 u64 address = (u64)addr; 199 - void *shadow_start; 200 - u32 *origin_start; 199 + u32 *shadow_start, *origin_start; 201 200 size_t pad = 0; 202 201 203 202 KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); ··· 224 225 origin_start = 225 226 (u32 *)kmsan_get_metadata((void *)address, KMSAN_META_ORIGIN); 226 227 227 - for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) 228 - origin_start[i] = origin; 228 + /* 229 + * If the new origin is non-zero, assume that the shadow byte is also non-zero, 230 + * and unconditionally overwrite the old origin slot. 231 + * If the new origin is zero, overwrite the old origin slot iff the 232 + * corresponding shadow slot is zero. 233 + */ 234 + for (int i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) { 235 + if (origin || !shadow_start[i]) 236 + origin_start[i] = origin; 237 + } 229 238 } 230 239 231 240 struct page *kmsan_vmalloc_to_page_or_null(void *vaddr)
+7 -10
mm/ksm.c
··· 296 296 static bool ksm_smart_scan = true; 297 297 298 298 /* The number of zero pages which is placed by KSM */ 299 - unsigned long ksm_zero_pages; 299 + atomic_long_t ksm_zero_pages = ATOMIC_LONG_INIT(0); 300 300 301 301 /* The number of pages that have been skipped due to "smart scanning" */ 302 302 static unsigned long ksm_pages_skipped; ··· 1429 1429 * the dirty bit in zero page's PTE is set. 1430 1430 */ 1431 1431 newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot))); 1432 - ksm_zero_pages++; 1433 - mm->ksm_zero_pages++; 1432 + ksm_map_zero_page(mm); 1434 1433 /* 1435 1434 * We're replacing an anonymous page with a zero page, which is 1436 1435 * not anonymous. We need to do proper accounting otherwise we ··· 2753 2754 { 2754 2755 struct ksm_rmap_item *rmap_item; 2755 2756 struct page *page; 2756 - unsigned int npages = scan_npages; 2757 2757 2758 - while (npages-- && likely(!freezing(current))) { 2758 + while (scan_npages-- && likely(!freezing(current))) { 2759 2759 cond_resched(); 2760 2760 rmap_item = scan_get_next_rmap_item(&page); 2761 2761 if (!rmap_item) 2762 2762 return; 2763 2763 cmp_and_merge_page(page, rmap_item); 2764 2764 put_page(page); 2765 + ksm_pages_scanned++; 2765 2766 } 2766 - 2767 - ksm_pages_scanned += scan_npages - npages; 2768 2767 } 2769 2768 2770 2769 static int ksmd_should_run(void) ··· 3373 3376 #ifdef CONFIG_PROC_FS 3374 3377 long ksm_process_profit(struct mm_struct *mm) 3375 3378 { 3376 - return (long)(mm->ksm_merging_pages + mm->ksm_zero_pages) * PAGE_SIZE - 3379 + return (long)(mm->ksm_merging_pages + mm_ksm_zero_pages(mm)) * PAGE_SIZE - 3377 3380 mm->ksm_rmap_items * sizeof(struct ksm_rmap_item); 3378 3381 } 3379 3382 #endif /* CONFIG_PROC_FS */ ··· 3662 3665 static ssize_t ksm_zero_pages_show(struct kobject *kobj, 3663 3666 struct kobj_attribute *attr, char *buf) 3664 3667 { 3665 - return sysfs_emit(buf, "%ld\n", ksm_zero_pages); 3668 + return sysfs_emit(buf, "%ld\n", atomic_long_read(&ksm_zero_pages)); 3666 3669 } 3667 3670 KSM_ATTR_RO(ksm_zero_pages); 3668 3671 ··· 3671 3674 { 3672 3675 long general_profit; 3673 3676 3674 - general_profit = (ksm_pages_sharing + ksm_zero_pages) * PAGE_SIZE - 3677 + general_profit = (ksm_pages_sharing + atomic_long_read(&ksm_zero_pages)) * PAGE_SIZE - 3675 3678 ksm_rmap_items * sizeof(struct ksm_rmap_item); 3676 3679 3677 3680 return sysfs_emit(buf, "%ld\n", general_profit);
+4
mm/memblock.c
··· 1339 1339 int start_rgn, end_rgn; 1340 1340 int i, ret; 1341 1341 1342 + if (WARN_ONCE(nid == MAX_NUMNODES, 1343 + "Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead\n")) 1344 + nid = NUMA_NO_NODE; 1345 + 1342 1346 ret = memblock_isolate_range(type, base, size, &start_rgn, &end_rgn); 1343 1347 if (ret) 1344 1348 return ret;
-2
mm/memcontrol.c
··· 3147 3147 struct mem_cgroup *memcg; 3148 3148 struct lruvec *lruvec; 3149 3149 3150 - lockdep_assert_irqs_disabled(); 3151 - 3152 3150 rcu_read_lock(); 3153 3151 memcg = obj_cgroup_memcg(objcg); 3154 3152 lruvec = mem_cgroup_lruvec(memcg, pgdat);
+1 -1
mm/mempool.c
··· 273 273 { 274 274 mempool_t *pool; 275 275 276 - pool = kzalloc_node(sizeof(*pool), gfp_mask, node_id); 276 + pool = kmalloc_node_noprof(sizeof(*pool), gfp_mask | __GFP_ZERO, node_id); 277 277 if (!pool) 278 278 return NULL; 279 279
+34 -16
mm/page_alloc.c
··· 1955 1955 } 1956 1956 1957 1957 /* 1958 - * Reserve a pageblock for exclusive use of high-order atomic allocations if 1959 - * there are no empty page blocks that contain a page with a suitable order 1958 + * Reserve the pageblock(s) surrounding an allocation request for 1959 + * exclusive use of high-order atomic allocations if there are no 1960 + * empty page blocks that contain a page with a suitable order 1960 1961 */ 1961 - static void reserve_highatomic_pageblock(struct page *page, struct zone *zone) 1962 + static void reserve_highatomic_pageblock(struct page *page, int order, 1963 + struct zone *zone) 1962 1964 { 1963 1965 int mt; 1964 1966 unsigned long max_managed, flags; ··· 1986 1984 /* Yoink! */ 1987 1985 mt = get_pageblock_migratetype(page); 1988 1986 /* Only reserve normal pageblocks (i.e., they can merge with others) */ 1989 - if (migratetype_is_mergeable(mt)) 1990 - if (move_freepages_block(zone, page, mt, 1991 - MIGRATE_HIGHATOMIC) != -1) 1992 - zone->nr_reserved_highatomic += pageblock_nr_pages; 1987 + if (!migratetype_is_mergeable(mt)) 1988 + goto out_unlock; 1989 + 1990 + if (order < pageblock_order) { 1991 + if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1) 1992 + goto out_unlock; 1993 + zone->nr_reserved_highatomic += pageblock_nr_pages; 1994 + } else { 1995 + change_pageblock_range(page, order, MIGRATE_HIGHATOMIC); 1996 + zone->nr_reserved_highatomic += 1 << order; 1997 + } 1993 1998 1994 1999 out_unlock: 1995 2000 spin_unlock_irqrestore(&zone->lock, flags); ··· 2008 1999 * intense memory pressure but failed atomic allocations should be easier 2009 2000 * to recover from than an OOM. 2010 2001 * 2011 - * If @force is true, try to unreserve a pageblock even though highatomic 2002 + * If @force is true, try to unreserve pageblocks even though highatomic 2012 2003 * pageblock is exhausted. 2013 2004 */ 2014 2005 static bool unreserve_highatomic_pageblock(const struct alloc_context *ac, ··· 2050 2041 * adjust the count once. 2051 2042 */ 2052 2043 if (is_migrate_highatomic(mt)) { 2044 + unsigned long size; 2053 2045 /* 2054 2046 * It should never happen but changes to 2055 2047 * locking could inadvertently allow a per-cpu ··· 2058 2048 * while unreserving so be safe and watch for 2059 2049 * underflows. 2060 2050 */ 2061 - zone->nr_reserved_highatomic -= min( 2062 - pageblock_nr_pages, 2063 - zone->nr_reserved_highatomic); 2051 + size = max(pageblock_nr_pages, 1UL << order); 2052 + size = min(size, zone->nr_reserved_highatomic); 2053 + zone->nr_reserved_highatomic -= size; 2064 2054 } 2065 2055 2066 2056 /* ··· 2072 2062 * of pageblocks that cannot be completely freed 2073 2063 * may increase. 2074 2064 */ 2075 - ret = move_freepages_block(zone, page, mt, 2076 - ac->migratetype); 2065 + if (order < pageblock_order) 2066 + ret = move_freepages_block(zone, page, mt, 2067 + ac->migratetype); 2068 + else { 2069 + move_to_free_list(page, zone, order, mt, 2070 + ac->migratetype); 2071 + change_pageblock_range(page, order, 2072 + ac->migratetype); 2073 + ret = 1; 2074 + } 2077 2075 /* 2078 - * Reserving this block already succeeded, so this should 2079 - * not fail on zone boundaries. 2076 + * Reserving the block(s) already succeeded, 2077 + * so this should not fail on zone boundaries. 
2080 2078 */ 2081 2079 WARN_ON_ONCE(ret == -1); 2082 2080 if (ret > 0) { ··· 3424 3406 * if the pageblock should be reserved for the future 3425 3407 */ 3426 3408 if (unlikely(alloc_flags & ALLOC_HIGHATOMIC)) 3427 - reserve_highatomic_pageblock(page, zone); 3409 + reserve_highatomic_pageblock(page, order, zone); 3428 3410 3429 3411 return page; 3430 3412 } else {
+1 -1
mm/page_io.c
··· 217 217 count_memcg_folio_events(folio, THP_SWPOUT, 1); 218 218 count_vm_event(THP_SWPOUT); 219 219 } 220 - count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_SWPOUT); 220 + count_mthp_stat(folio_order(folio), MTHP_STAT_SWPOUT); 221 221 #endif 222 222 count_vm_events(PSWPOUT, folio_nr_pages(folio)); 223 223 }
+3 -2
mm/slub.c
··· 1952 1952 #ifdef CONFIG_MEMCG 1953 1953 new_exts |= MEMCG_DATA_OBJEXTS; 1954 1954 #endif 1955 - old_exts = slab->obj_exts; 1955 + old_exts = READ_ONCE(slab->obj_exts); 1956 1956 handle_failed_objexts_alloc(old_exts, vec, objects); 1957 1957 if (new_slab) { 1958 1958 /* ··· 1961 1961 * be simply assigned. 1962 1962 */ 1963 1963 slab->obj_exts = new_exts; 1964 - } else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) { 1964 + } else if ((old_exts & ~OBJEXTS_FLAGS_MASK) || 1965 + cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) { 1965 1966 /* 1966 1967 * If the slab is already in use, somebody can allocate and 1967 1968 * assign slabobj_exts in parallel. In this case the existing
+5 -5
mm/util.c
··· 705 705 706 706 if (oldsize >= newsize) 707 707 return (void *)p; 708 - newp = kvmalloc(newsize, flags); 708 + newp = kvmalloc_noprof(newsize, flags); 709 709 if (!newp) 710 710 return NULL; 711 711 memcpy(newp, p, oldsize); ··· 726 726 727 727 if (unlikely(check_mul_overflow(n, size, &bytes))) 728 728 return NULL; 729 - return __vmalloc(bytes, flags); 729 + return __vmalloc_noprof(bytes, flags); 730 730 } 731 731 EXPORT_SYMBOL(__vmalloc_array_noprof); 732 732 ··· 737 737 */ 738 738 void *vmalloc_array_noprof(size_t n, size_t size) 739 739 { 740 - return __vmalloc_array(n, size, GFP_KERNEL); 740 + return __vmalloc_array_noprof(n, size, GFP_KERNEL); 741 741 } 742 742 EXPORT_SYMBOL(vmalloc_array_noprof); 743 743 ··· 749 749 */ 750 750 void *__vcalloc_noprof(size_t n, size_t size, gfp_t flags) 751 751 { 752 - return __vmalloc_array(n, size, flags | __GFP_ZERO); 752 + return __vmalloc_array_noprof(n, size, flags | __GFP_ZERO); 753 753 } 754 754 EXPORT_SYMBOL(__vcalloc_noprof); 755 755 ··· 760 760 */ 761 761 void *vcalloc_noprof(size_t n, size_t size) 762 762 { 763 - return __vmalloc_array(n, size, GFP_KERNEL | __GFP_ZERO); 763 + return __vmalloc_array_noprof(n, size, GFP_KERNEL | __GFP_ZERO); 764 764 } 765 765 EXPORT_SYMBOL(vcalloc_noprof); 766 766
+1 -1
mm/vmalloc.c
··· 722 722 * and fall back on vmalloc() if that fails. Others 723 723 * just put it in the vmalloc space. 724 724 */ 725 - #if defined(CONFIG_MODULES) && defined(MODULES_VADDR) 725 + #if defined(CONFIG_EXECMEM) && defined(MODULES_VADDR) 726 726 unsigned long addr = (unsigned long)kasan_reset_tag(x); 727 727 if (addr >= MODULES_VADDR && addr < MODULES_END) 728 728 return 1;
+1 -1
mm/vmscan.c
··· 1227 1227 THP_SWPOUT_FALLBACK, 1); 1228 1228 count_vm_event(THP_SWPOUT_FALLBACK); 1229 1229 } 1230 - count_mthp_stat(order, MTHP_STAT_ANON_SWPOUT_FALLBACK); 1230 + count_mthp_stat(order, MTHP_STAT_SWPOUT_FALLBACK); 1231 1231 #endif 1232 1232 if (!add_to_swap(folio)) 1233 1233 goto activate_locked_split;
+1 -1
net/bluetooth/hci_sync.c
··· 1194 1194 1195 1195 cp.own_addr_type = own_addr_type; 1196 1196 cp.channel_map = hdev->le_adv_channel_map; 1197 - cp.handle = instance; 1197 + cp.handle = adv ? adv->handle : instance; 1198 1198 1199 1199 if (flags & MGMT_ADV_FLAG_SEC_2M) { 1200 1200 cp.primary_phy = HCI_ADV_PHY_1M;
+3 -9
net/bluetooth/l2cap_core.c
··· 4011 4011 status = L2CAP_CS_AUTHOR_PEND; 4012 4012 chan->ops->defer(chan); 4013 4013 } else { 4014 - l2cap_state_change(chan, BT_CONNECT2); 4015 - result = L2CAP_CR_PEND; 4014 + l2cap_state_change(chan, BT_CONFIG); 4015 + result = L2CAP_CR_SUCCESS; 4016 4016 status = L2CAP_CS_NO_INFO; 4017 4017 } 4018 4018 } else { ··· 4647 4647 4648 4648 memset(&rsp, 0, sizeof(rsp)); 4649 4649 4650 - if (max > hcon->le_conn_max_interval) { 4651 - BT_DBG("requested connection interval exceeds current bounds."); 4652 - err = -EINVAL; 4653 - } else { 4654 - err = hci_check_conn_params(min, max, latency, to_multiplier); 4655 - } 4656 - 4650 + err = hci_check_conn_params(min, max, latency, to_multiplier); 4657 4651 if (err) 4658 4652 rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED); 4659 4653 else
+6 -7
net/bridge/br_mst.c
··· 73 73 } 74 74 EXPORT_SYMBOL_GPL(br_mst_get_state); 75 75 76 - static void br_mst_vlan_set_state(struct net_bridge_port *p, struct net_bridge_vlan *v, 76 + static void br_mst_vlan_set_state(struct net_bridge_vlan_group *vg, 77 + struct net_bridge_vlan *v, 77 78 u8 state) 78 79 { 79 - struct net_bridge_vlan_group *vg = nbp_vlan_group(p); 80 - 81 80 if (br_vlan_get_state(v) == state) 82 81 return; 83 82 ··· 102 103 int err = 0; 103 104 104 105 rcu_read_lock(); 105 - vg = nbp_vlan_group(p); 106 + vg = nbp_vlan_group_rcu(p); 106 107 if (!vg) 107 108 goto out; 108 109 ··· 120 121 if (v->brvlan->msti != msti) 121 122 continue; 122 123 123 - br_mst_vlan_set_state(p, v, state); 124 + br_mst_vlan_set_state(vg, v, state); 124 125 } 125 126 126 127 out: ··· 139 140 * it. 140 141 */ 141 142 if (v != pv && v->brvlan->msti == msti) { 142 - br_mst_vlan_set_state(pv->port, pv, v->state); 143 + br_mst_vlan_set_state(vg, pv, v->state); 143 144 return; 144 145 } 145 146 } 146 147 147 148 /* Otherwise, start out in a new MSTI with all ports disabled. */ 148 - return br_mst_vlan_set_state(pv->port, pv, BR_STATE_DISABLED); 149 + return br_mst_vlan_set_state(vg, pv, BR_STATE_DISABLED); 149 150 } 150 151 151 152 int br_mst_vlan_set_msti(struct net_bridge_vlan *mv, u16 msti)
+5 -1
net/ipv4/tcp_timer.c
··· 481 481 { 482 482 const struct tcp_sock *tp = tcp_sk(sk); 483 483 const int timeout = TCP_RTO_MAX * 2; 484 - u32 rcv_delta; 484 + s32 rcv_delta; 485 485 486 + /* Note: timer interrupt might have been delayed by at least one jiffy, 487 + * and tp->rcv_tstamp might very well have been written recently. 488 + * rcv_delta can thus be negative. 489 + */ 486 490 rcv_delta = inet_csk(sk)->icsk_timeout - tp->rcv_tstamp; 487 491 if (rcv_delta <= timeout) 488 492 return false;
+1
net/ipv6/netfilter.c
··· 36 36 .flowi6_uid = sock_net_uid(net, sk), 37 37 .daddr = iph->daddr, 38 38 .saddr = iph->saddr, 39 + .flowlabel = ip6_flowinfo(iph), 39 40 }; 40 41 int err; 41 42
+2 -2
net/ipv6/route.c
··· 6341 6341 if (!write) 6342 6342 return -EINVAL; 6343 6343 6344 - net = (struct net *)ctl->extra1; 6345 - delay = net->ipv6.sysctl.flush_delay; 6346 6344 ret = proc_dointvec(ctl, write, buffer, lenp, ppos); 6347 6345 if (ret) 6348 6346 return ret; 6349 6347 6348 + net = (struct net *)ctl->extra1; 6349 + delay = net->ipv6.sysctl.flush_delay; 6350 6350 fib6_run_gc(delay <= 0 ? 0 : (unsigned long)delay, net, delay > 0); 6351 6351 return 0; 6352 6352 }
+2 -1
net/ipv6/tcp_ipv6.c
··· 1435 1435 */ 1436 1436 1437 1437 newsk->sk_gso_type = SKB_GSO_TCPV6; 1438 - ip6_dst_store(newsk, dst, NULL, NULL); 1439 1438 inet6_sk_rx_dst_set(newsk, skb); 1440 1439 1441 1440 inet_sk(newsk)->pinet6 = tcp_inet6_sk(newsk); ··· 1444 1445 newnp = tcp_inet6_sk(newsk); 1445 1446 1446 1447 memcpy(newnp, np, sizeof(struct ipv6_pinfo)); 1448 + 1449 + ip6_dst_store(newsk, dst, NULL, NULL); 1447 1450 1448 1451 newsk->sk_v6_daddr = ireq->ir_v6_rmt_addr; 1449 1452 newnp->saddr = ireq->ir_v6_loc_addr;
+14 -7
net/mptcp/pm_netlink.c
··· 677 677 unsigned int add_addr_accept_max; 678 678 struct mptcp_addr_info remote; 679 679 unsigned int subflows_max; 680 + bool sf_created = false; 680 681 int i, nr; 681 682 682 683 add_addr_accept_max = mptcp_pm_get_add_addr_accept_max(msk); ··· 705 704 if (nr == 0) 706 705 return; 707 706 708 - msk->pm.add_addr_accepted++; 709 - if (msk->pm.add_addr_accepted >= add_addr_accept_max || 710 - msk->pm.subflows >= subflows_max) 711 - WRITE_ONCE(msk->pm.accept_addr, false); 712 - 713 707 spin_unlock_bh(&msk->pm.lock); 714 708 for (i = 0; i < nr; i++) 715 - __mptcp_subflow_connect(sk, &addrs[i], &remote); 709 + if (__mptcp_subflow_connect(sk, &addrs[i], &remote) == 0) 710 + sf_created = true; 716 711 spin_lock_bh(&msk->pm.lock); 712 + 713 + if (sf_created) { 714 + msk->pm.add_addr_accepted++; 715 + if (msk->pm.add_addr_accepted >= add_addr_accept_max || 716 + msk->pm.subflows >= subflows_max) 717 + WRITE_ONCE(msk->pm.accept_addr, false); 718 + } 717 719 } 718 720 719 721 void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk) ··· 818 814 spin_lock_bh(&msk->pm.lock); 819 815 820 816 removed = true; 821 - __MPTCP_INC_STATS(sock_net(sk), rm_type); 817 + if (rm_type == MPTCP_MIB_RMSUBFLOW) 818 + __MPTCP_INC_STATS(sock_net(sk), rm_type); 822 819 } 823 820 if (rm_type == MPTCP_MIB_RMSUBFLOW) 824 821 __set_bit(rm_id ? rm_id : msk->mpc_endpoint_id, msk->pm.id_avail_bitmap); 822 + else if (rm_type == MPTCP_MIB_RMADDR) 823 + __MPTCP_INC_STATS(sock_net(sk), rm_type); 825 824 if (!removed) 826 825 continue; 827 826
+1
net/mptcp/protocol.c
··· 3740 3740 3741 3741 WRITE_ONCE(msk->write_seq, subflow->idsn); 3742 3742 WRITE_ONCE(msk->snd_nxt, subflow->idsn); 3743 + WRITE_ONCE(msk->snd_una, subflow->idsn); 3743 3744 if (likely(!__mptcp_check_fallback(msk))) 3744 3745 MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEACTIVE); 3745 3746
+52 -41
net/netfilter/ipset/ip_set_core.c
··· 1172 1172 .len = IPSET_MAXNAMELEN - 1 }, 1173 1173 }; 1174 1174 1175 - static void 1176 - ip_set_destroy_set(struct ip_set *set) 1177 - { 1178 - pr_debug("set: %s\n", set->name); 1179 - 1180 - /* Must call it without holding any lock */ 1181 - set->variant->destroy(set); 1182 - module_put(set->type->me); 1183 - kfree(set); 1184 - } 1175 + /* In order to return quickly when destroying a single set, it is split 1176 + * into two stages: 1177 + * - Cancel garbage collector 1178 + * - Destroy the set itself via call_rcu() 1179 + */ 1185 1180 1186 1181 static void 1187 1182 ip_set_destroy_set_rcu(struct rcu_head *head) 1188 1183 { 1189 1184 struct ip_set *set = container_of(head, struct ip_set, rcu); 1190 1185 1191 - ip_set_destroy_set(set); 1186 + set->variant->destroy(set); 1187 + module_put(set->type->me); 1188 + kfree(set); 1189 + } 1190 + 1191 + static void 1192 + _destroy_all_sets(struct ip_set_net *inst) 1193 + { 1194 + struct ip_set *set; 1195 + ip_set_id_t i; 1196 + bool need_wait = false; 1197 + 1198 + /* First cancel gc's: set:list sets are flushed as well */ 1199 + for (i = 0; i < inst->ip_set_max; i++) { 1200 + set = ip_set(inst, i); 1201 + if (set) { 1202 + set->variant->cancel_gc(set); 1203 + if (set->type->features & IPSET_TYPE_NAME) 1204 + need_wait = true; 1205 + } 1206 + } 1207 + /* Must wait for flush to be really finished */ 1208 + if (need_wait) 1209 + rcu_barrier(); 1210 + for (i = 0; i < inst->ip_set_max; i++) { 1211 + set = ip_set(inst, i); 1212 + if (set) { 1213 + ip_set(inst, i) = NULL; 1214 + set->variant->destroy(set); 1215 + module_put(set->type->me); 1216 + kfree(set); 1217 + } 1218 + } 1192 1219 } 1193 1220 1194 1221 static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info, ··· 1229 1202 if (unlikely(protocol_min_failed(attr))) 1230 1203 return -IPSET_ERR_PROTOCOL; 1231 1204 1232 - 1233 1205 /* Commands are serialized and references are 1234 1206 * protected by the ip_set_ref_lock. 1235 1207 * External systems (i.e. xt_set) must call 1236 - * ip_set_put|get_nfnl_* functions, that way we 1208 + * ip_set_nfnl_get_* functions, that way we 1237 1209 * can safely check references here. 1238 1210 * 1239 1211 * list:set timer can only decrement the reference ··· 1240 1214 * without holding the lock. 
1241 1215 */ 1242 1216 if (!attr[IPSET_ATTR_SETNAME]) { 1243 - /* Must wait for flush to be really finished in list:set */ 1244 - rcu_barrier(); 1245 1217 read_lock_bh(&ip_set_ref_lock); 1246 1218 for (i = 0; i < inst->ip_set_max; i++) { 1247 1219 s = ip_set(inst, i); ··· 1250 1226 } 1251 1227 inst->is_destroyed = true; 1252 1228 read_unlock_bh(&ip_set_ref_lock); 1253 - for (i = 0; i < inst->ip_set_max; i++) { 1254 - s = ip_set(inst, i); 1255 - if (s) { 1256 - ip_set(inst, i) = NULL; 1257 - /* Must cancel garbage collectors */ 1258 - s->variant->cancel_gc(s); 1259 - ip_set_destroy_set(s); 1260 - } 1261 - } 1229 + _destroy_all_sets(inst); 1262 1230 /* Modified by ip_set_destroy() only, which is serialized */ 1263 1231 inst->is_destroyed = false; 1264 1232 } else { ··· 1271 1255 features = s->type->features; 1272 1256 ip_set(inst, i) = NULL; 1273 1257 read_unlock_bh(&ip_set_ref_lock); 1258 + /* Must cancel garbage collectors */ 1259 + s->variant->cancel_gc(s); 1274 1260 if (features & IPSET_TYPE_NAME) { 1275 1261 /* Must wait for flush to be really finished */ 1276 1262 rcu_barrier(); 1277 1263 } 1278 - /* Must cancel garbage collectors */ 1279 - s->variant->cancel_gc(s); 1280 1264 call_rcu(&s->rcu, ip_set_destroy_set_rcu); 1281 1265 } 1282 1266 return 0; ··· 2381 2365 } 2382 2366 2383 2367 static void __net_exit 2368 + ip_set_net_pre_exit(struct net *net) 2369 + { 2370 + struct ip_set_net *inst = ip_set_pernet(net); 2371 + 2372 + inst->is_deleted = true; /* flag for ip_set_nfnl_put */ 2373 + } 2374 + 2375 + static void __net_exit 2384 2376 ip_set_net_exit(struct net *net) 2385 2377 { 2386 2378 struct ip_set_net *inst = ip_set_pernet(net); 2387 2379 2388 - struct ip_set *set = NULL; 2389 - ip_set_id_t i; 2390 - 2391 - inst->is_deleted = true; /* flag for ip_set_nfnl_put */ 2392 - 2393 - nfnl_lock(NFNL_SUBSYS_IPSET); 2394 - for (i = 0; i < inst->ip_set_max; i++) { 2395 - set = ip_set(inst, i); 2396 - if (set) { 2397 - ip_set(inst, i) = NULL; 2398 - set->variant->cancel_gc(set); 2399 - ip_set_destroy_set(set); 2400 - } 2401 - } 2402 - nfnl_unlock(NFNL_SUBSYS_IPSET); 2380 + _destroy_all_sets(inst); 2403 2381 kvfree(rcu_dereference_protected(inst->ip_set_list, 1)); 2404 2382 } 2405 2383 2406 2384 static struct pernet_operations ip_set_net_ops = { 2407 2385 .init = ip_set_net_init, 2386 + .pre_exit = ip_set_net_pre_exit, 2408 2387 .exit = ip_set_net_exit, 2409 2388 .id = &ip_set_net_id, 2410 2389 .size = sizeof(struct ip_set_net),
+14 -16
net/netfilter/ipset/ip_set_list_set.c
··· 79 79 struct set_elem *e; 80 80 int ret; 81 81 82 - list_for_each_entry(e, &map->members, list) { 82 + list_for_each_entry_rcu(e, &map->members, list) { 83 83 if (SET_WITH_TIMEOUT(set) && 84 84 ip_set_timeout_expired(ext_timeout(e, set))) 85 85 continue; ··· 99 99 struct set_elem *e; 100 100 int ret; 101 101 102 - list_for_each_entry(e, &map->members, list) { 102 + list_for_each_entry_rcu(e, &map->members, list) { 103 103 if (SET_WITH_TIMEOUT(set) && 104 104 ip_set_timeout_expired(ext_timeout(e, set))) 105 105 continue; ··· 188 188 struct list_set *map = set->data; 189 189 struct set_adt_elem *d = value; 190 190 struct set_elem *e, *next, *prev = NULL; 191 - int ret; 191 + int ret = 0; 192 192 193 - list_for_each_entry(e, &map->members, list) { 193 + rcu_read_lock(); 194 + list_for_each_entry_rcu(e, &map->members, list) { 194 195 if (SET_WITH_TIMEOUT(set) && 195 196 ip_set_timeout_expired(ext_timeout(e, set))) 196 197 continue; ··· 202 201 203 202 if (d->before == 0) { 204 203 ret = 1; 204 + goto out; 205 205 } else if (d->before > 0) { 206 206 next = list_next_entry(e, list); 207 207 ret = !list_is_last(&e->list, &map->members) && ··· 210 208 } else { 211 209 ret = prev && prev->id == d->refid; 212 210 } 213 - return ret; 211 + goto out; 214 212 } 215 - return 0; 213 + out: 214 + rcu_read_unlock(); 215 + return ret; 216 216 } 217 217 218 218 static void ··· 243 239 244 240 /* Find where to add the new entry */ 245 241 n = prev = next = NULL; 246 - list_for_each_entry(e, &map->members, list) { 242 + list_for_each_entry_rcu(e, &map->members, list) { 247 243 if (SET_WITH_TIMEOUT(set) && 248 244 ip_set_timeout_expired(ext_timeout(e, set))) 249 245 continue; ··· 320 316 { 321 317 struct list_set *map = set->data; 322 318 struct set_adt_elem *d = value; 323 - struct set_elem *e, *next, *prev = NULL; 319 + struct set_elem *e, *n, *next, *prev = NULL; 324 320 325 - list_for_each_entry(e, &map->members, list) { 321 + list_for_each_entry_safe(e, n, &map->members, list) { 326 322 if (SET_WITH_TIMEOUT(set) && 327 323 ip_set_timeout_expired(ext_timeout(e, set))) 328 324 continue; ··· 428 424 list_set_destroy(struct ip_set *set) 429 425 { 430 426 struct list_set *map = set->data; 431 - struct set_elem *e, *n; 432 427 433 - list_for_each_entry_safe(e, n, &map->members, list) { 434 - list_del(&e->list); 435 - ip_set_put_byindex(map->net, e->id); 436 - ip_set_ext_destroy(set, e); 437 - kfree(e); 438 - } 428 + WARN_ON_ONCE(!list_empty(&map->members)); 439 429 kfree(map); 440 430 441 431 set->data = NULL;
+3
net/netfilter/nft_meta.c
··· 839 839 struct nft_meta *priv = nft_expr_priv(expr); 840 840 unsigned int len; 841 841 842 + if (!tb[NFTA_META_KEY] || !tb[NFTA_META_DREG]) 843 + return -EINVAL; 844 + 842 845 priv->key = ntohl(nla_get_be32(tb[NFTA_META_KEY])); 843 846 switch (priv->key) { 844 847 case NFT_META_PROTOCOL:
+4
net/netfilter/nft_payload.c
··· 650 650 struct nft_payload *priv = nft_expr_priv(expr); 651 651 u32 base; 652 652 653 + if (!tb[NFTA_PAYLOAD_BASE] || !tb[NFTA_PAYLOAD_OFFSET] || 654 + !tb[NFTA_PAYLOAD_LEN] || !tb[NFTA_PAYLOAD_DREG]) 655 + return -EINVAL; 656 + 653 657 base = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE])); 654 658 switch (base) { 655 659 case NFT_PAYLOAD_TUN_HEADER:
+1
net/sched/sch_generic.c
··· 677 677 .qlen = 0, 678 678 .lock = __SPIN_LOCK_UNLOCKED(noop_qdisc.skb_bad_txq.lock), 679 679 }, 680 + .owner = -1, 680 681 }; 681 682 EXPORT_SYMBOL(noop_qdisc); 682 683
+3 -1
net/sunrpc/auth_gss/auth_gss.c
··· 1875 1875 offset = (u8 *)p - (u8 *)snd_buf->head[0].iov_base; 1876 1876 maj_stat = gss_wrap(ctx->gc_gss_ctx, offset, snd_buf, inpages); 1877 1877 /* slack space should prevent this ever happening: */ 1878 - if (unlikely(snd_buf->len > snd_buf->buflen)) 1878 + if (unlikely(snd_buf->len > snd_buf->buflen)) { 1879 + status = -EIO; 1879 1880 goto wrap_failed; 1881 + } 1880 1882 /* We're assuming that when GSS_S_CONTEXT_EXPIRED, the encryption was 1881 1883 * done anyway, so it's safe to put the request on the wire: */ 1882 1884 if (maj_stat == GSS_S_CONTEXT_EXPIRED)
+1 -1
net/sunrpc/auth_gss/svcauth_gss.c
··· 1069 1069 goto out_denied_free; 1070 1070 1071 1071 pages = DIV_ROUND_UP(inlen, PAGE_SIZE); 1072 - in_token->pages = kcalloc(pages, sizeof(struct page *), GFP_KERNEL); 1072 + in_token->pages = kcalloc(pages + 1, sizeof(struct page *), GFP_KERNEL); 1073 1073 if (!in_token->pages) 1074 1074 goto out_denied_free; 1075 1075 in_token->page_base = 0;
+9 -9
net/unix/af_unix.c
··· 2625 2625 if (skb == u->oob_skb) { 2626 2626 if (copied) { 2627 2627 skb = NULL; 2628 - } else if (sock_flag(sk, SOCK_URGINLINE)) { 2629 - if (!(flags & MSG_PEEK)) { 2628 + } else if (!(flags & MSG_PEEK)) { 2629 + if (sock_flag(sk, SOCK_URGINLINE)) { 2630 2630 WRITE_ONCE(u->oob_skb, NULL); 2631 2631 consume_skb(skb); 2632 + } else { 2633 + __skb_unlink(skb, &sk->sk_receive_queue); 2634 + WRITE_ONCE(u->oob_skb, NULL); 2635 + unlinked_skb = skb; 2636 + skb = skb_peek(&sk->sk_receive_queue); 2632 2637 } 2633 - } else if (flags & MSG_PEEK) { 2634 - skb = NULL; 2635 - } else { 2636 - __skb_unlink(skb, &sk->sk_receive_queue); 2637 - WRITE_ONCE(u->oob_skb, NULL); 2638 - unlinked_skb = skb; 2639 - skb = skb_peek(&sk->sk_receive_queue); 2638 + } else if (!sock_flag(sk, SOCK_URGINLINE)) { 2639 + skb = skb_peek_next(skb, &sk->sk_receive_queue); 2640 2640 } 2641 2641 } 2642 2642
+1 -1
scripts/atomic/kerneldoc/sub_and_test
··· 1 1 cat <<EOF 2 2 /** 3 3 * ${class}${atomicname}() - atomic subtract and test if zero with ${desc_order} ordering 4 - * @i: ${int} value to add 4 + * @i: ${int} value to subtract 5 5 * @v: pointer to ${atomic}_t 6 6 * 7 7 * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
-13
scripts/kconfig/confdata.c
··· 533 533 */ 534 534 if (sym->visible == no && !conf_unsaved) 535 535 sym->flags &= ~SYMBOL_DEF_USER; 536 - switch (sym->type) { 537 - case S_STRING: 538 - case S_INT: 539 - case S_HEX: 540 - /* Reset a string value if it's out of range */ 541 - if (sym_string_within_range(sym, sym->def[S_DEF_USER].val)) 542 - break; 543 - sym->flags &= ~SYMBOL_VALID; 544 - conf_unsaved++; 545 - break; 546 - default: 547 - break; 548 - } 549 536 } 550 537 } 551 538
-29
scripts/kconfig/expr.c
··· 397 397 } 398 398 399 399 /* 400 - * bool FOO!=n => FOO 401 - */ 402 - struct expr *expr_trans_bool(struct expr *e) 403 - { 404 - if (!e) 405 - return NULL; 406 - switch (e->type) { 407 - case E_AND: 408 - case E_OR: 409 - case E_NOT: 410 - e->left.expr = expr_trans_bool(e->left.expr); 411 - e->right.expr = expr_trans_bool(e->right.expr); 412 - break; 413 - case E_UNEQUAL: 414 - // FOO!=n -> FOO 415 - if (e->left.sym->type == S_TRISTATE) { 416 - if (e->right.sym == &symbol_no) { 417 - e->type = E_SYMBOL; 418 - e->right.sym = NULL; 419 - } 420 - } 421 - break; 422 - default: 423 - ; 424 - } 425 - return e; 426 - } 427 - 428 - /* 429 400 * e1 || e2 -> ? 430 401 */ 431 402 static struct expr *expr_join_or(struct expr *e1, struct expr *e2)
-1
scripts/kconfig/expr.h
··· 284 284 void expr_eliminate_eq(struct expr **ep1, struct expr **ep2); 285 285 int expr_eq(struct expr *e1, struct expr *e2); 286 286 tristate expr_calc_value(struct expr *e); 287 - struct expr *expr_trans_bool(struct expr *e); 288 287 struct expr *expr_eliminate_dups(struct expr *e); 289 288 struct expr *expr_transform(struct expr *e); 290 289 int expr_contains_symbol(struct expr *dep, struct symbol *sym);
+2 -1
scripts/kconfig/gconf.c
··· 1422 1422 1423 1423 conf_parse(name); 1424 1424 fixup_rootmenu(&rootmenu); 1425 - conf_read(NULL); 1426 1425 1427 1426 /* Load the interface and connect signals */ 1428 1427 init_main_window(glade_file); 1429 1428 init_tree_model(); 1430 1429 init_left_tree(); 1431 1430 init_right_tree(); 1431 + 1432 + conf_read(NULL); 1432 1433 1433 1434 switch (view_mode) { 1434 1435 case SINGLE_VIEW:
-2
scripts/kconfig/menu.c
··· 398 398 dep = expr_transform(dep); 399 399 dep = expr_alloc_and(expr_copy(basedep), dep); 400 400 dep = expr_eliminate_dups(dep); 401 - if (menu->sym && menu->sym->type != S_TRISTATE) 402 - dep = expr_trans_bool(dep); 403 401 prop->visible.expr = dep; 404 402 405 403 /*
+3 -2
scripts/mod/modpost.c
··· 1647 1647 namespace = get_next_modinfo(&info, "import_ns", 1648 1648 namespace); 1649 1649 } 1650 + 1651 + if (extra_warn && !get_modinfo(&info, "description")) 1652 + warn("missing MODULE_DESCRIPTION() in %s\n", modname); 1650 1653 } 1651 1654 1652 - if (extra_warn && !get_modinfo(&info, "description")) 1653 - warn("missing MODULE_DESCRIPTION() in %s\n", modname); 1654 1655 for (sym = info.symtab_start; sym < info.symtab_stop; sym++) { 1655 1656 symname = remove_dot(info.strtab + sym->st_name); 1656 1657
+6
tools/arch/arm64/include/asm/cputype.h
··· 86 86 #define ARM_CPU_PART_CORTEX_X2 0xD48 87 87 #define ARM_CPU_PART_NEOVERSE_N2 0xD49 88 88 #define ARM_CPU_PART_CORTEX_A78C 0xD4B 89 + #define ARM_CPU_PART_NEOVERSE_V2 0xD4F 90 + #define ARM_CPU_PART_CORTEX_X4 0xD82 91 + #define ARM_CPU_PART_NEOVERSE_V3 0xD84 89 92 90 93 #define APM_CPU_PART_XGENE 0x000 91 94 #define APM_CPU_VAR_POTENZA 0x00 ··· 162 159 #define MIDR_CORTEX_X2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2) 163 160 #define MIDR_NEOVERSE_N2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2) 164 161 #define MIDR_CORTEX_A78C MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78C) 162 + #define MIDR_NEOVERSE_V2 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V2) 163 + #define MIDR_CORTEX_X4 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X4) 164 + #define MIDR_NEOVERSE_V3 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V3) 165 165 #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) 166 166 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) 167 167 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
+4 -5
tools/arch/x86/include/asm/msr-index.h
··· 170 170 * CPU is not affected by Branch 171 171 * History Injection. 172 172 */ 173 + #define ARCH_CAP_XAPIC_DISABLE BIT(21) /* 174 + * IA32_XAPIC_DISABLE_STATUS MSR 175 + * supported 176 + */ 173 177 #define ARCH_CAP_PBRSB_NO BIT(24) /* 174 178 * Not susceptible to Post-Barrier 175 179 * Return Stack Buffer Predictions. ··· 194 190 #define ARCH_CAP_RFDS_CLEAR BIT(28) /* 195 191 * VERW clears CPU Register 196 192 * File. 197 - */ 198 - 199 - #define ARCH_CAP_XAPIC_DISABLE BIT(21) /* 200 - * IA32_XAPIC_DISABLE_STATUS MSR 201 - * supported 202 193 */ 203 194 204 195 #define MSR_IA32_FLUSH_CMD 0x0000010b
+20 -2
tools/arch/x86/include/uapi/asm/kvm.h
··· 457 457 458 458 #define KVM_STATE_VMX_PREEMPTION_TIMER_DEADLINE 0x00000001 459 459 460 - /* attributes for system fd (group 0) */ 461 - #define KVM_X86_XCOMP_GUEST_SUPP 0 460 + /* vendor-independent attributes for system fd (group 0) */ 461 + #define KVM_X86_GRP_SYSTEM 0 462 + # define KVM_X86_XCOMP_GUEST_SUPP 0 463 + 464 + /* vendor-specific groups and attributes for system fd */ 465 + #define KVM_X86_GRP_SEV 1 466 + # define KVM_X86_SEV_VMSA_FEATURES 0 462 467 463 468 struct kvm_vmx_nested_state_data { 464 469 __u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE]; ··· 694 689 /* Guest Migration Extension */ 695 690 KVM_SEV_SEND_CANCEL, 696 691 692 + /* Second time is the charm; improved versions of the above ioctls. */ 693 + KVM_SEV_INIT2, 694 + 697 695 KVM_SEV_NR_MAX, 698 696 }; 699 697 ··· 706 698 __u64 data; 707 699 __u32 error; 708 700 __u32 sev_fd; 701 + }; 702 + 703 + struct kvm_sev_init { 704 + __u64 vmsa_features; 705 + __u32 flags; 706 + __u16 ghcb_version; 707 + __u16 pad1; 708 + __u32 pad2[8]; 709 709 }; 710 710 711 711 struct kvm_sev_launch_start { ··· 872 856 873 857 #define KVM_X86_DEFAULT_VM 0 874 858 #define KVM_X86_SW_PROTECTED_VM 1 859 + #define KVM_X86_SEV_VM 2 860 + #define KVM_X86_SEV_ES_VM 3 875 861 876 862 #endif /* _ASM_X86_KVM_H */
+4 -1
tools/include/uapi/asm-generic/unistd.h
··· 842 842 #define __NR_lsm_list_modules 461 843 843 __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules) 844 844 845 + #define __NR_mseal 462 846 + __SYSCALL(__NR_mseal, sys_mseal) 847 + 845 848 #undef __NR_syscalls 846 - #define __NR_syscalls 462 849 + #define __NR_syscalls 463 847 850 848 851 /* 849 852 * 32 bit systems traditionally used different
+28 -3
tools/include/uapi/drm/i915_drm.h
··· 806 806 */ 807 807 #define I915_PARAM_PXP_STATUS 58 808 808 809 + /* 810 + * Query if kernel allows marking a context to send a Freq hint to SLPC. This 811 + * will enable use of the strategies allowed by the SLPC algorithm. 812 + */ 813 + #define I915_PARAM_HAS_CONTEXT_FREQ_HINT 59 814 + 809 815 /* Must be kept compact -- no holes and well documented */ 810 816 811 817 /** ··· 2154 2148 * -EIO: The firmware did not succeed in creating the protected context. 2155 2149 */ 2156 2150 #define I915_CONTEXT_PARAM_PROTECTED_CONTENT 0xd 2151 + 2152 + /* 2153 + * I915_CONTEXT_PARAM_LOW_LATENCY: 2154 + * 2155 + * Mark this context as a low latency workload which requires aggressive GT 2156 + * frequency scaling. Use I915_PARAM_HAS_CONTEXT_FREQ_HINT to check if the kernel 2157 + * supports this per context flag. 2158 + */ 2159 + #define I915_CONTEXT_PARAM_LOW_LATENCY 0xe 2157 2160 /* Must be kept compact -- no holes and well documented */ 2158 2161 2159 2162 /** @value: Context parameter value to be set or queried */ ··· 2638 2623 * 2639 2624 */ 2640 2625 2626 + /* 2627 + * struct drm_i915_reset_stats - Return global reset and other context stats 2628 + * 2629 + * Driver keeps few stats for each contexts and also global reset count. 2630 + * This struct can be used to query those stats. 2631 + */ 2641 2632 struct drm_i915_reset_stats { 2633 + /** @ctx_id: ID of the requested context */ 2642 2634 __u32 ctx_id; 2635 + 2636 + /** @flags: MBZ */ 2643 2637 __u32 flags; 2644 2638 2645 - /* All resets since boot/module reload, for all contexts */ 2639 + /** @reset_count: All resets since boot/module reload, for all contexts */ 2646 2640 __u32 reset_count; 2647 2641 2648 - /* Number of batches lost when active in GPU, for this context */ 2642 + /** @batch_active: Number of batches lost when active in GPU, for this context */ 2649 2643 __u32 batch_active; 2650 2644 2651 - /* Number of batches lost pending for execution, for this context */ 2645 + /** @batch_pending: Number of batches lost pending for execution, for this context */ 2652 2646 __u32 batch_pending; 2653 2647 2648 + /** @pad: MBZ */ 2654 2649 __u32 pad; 2655 2650 }; 2656 2651
+2 -2
tools/include/uapi/linux/kvm.h
··· 1221 1221 /* Available with KVM_CAP_SPAPR_RESIZE_HPT */ 1222 1222 #define KVM_PPC_RESIZE_HPT_PREPARE _IOR(KVMIO, 0xad, struct kvm_ppc_resize_hpt) 1223 1223 #define KVM_PPC_RESIZE_HPT_COMMIT _IOR(KVMIO, 0xae, struct kvm_ppc_resize_hpt) 1224 - /* Available with KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_MMU_HASH_V3 */ 1224 + /* Available with KVM_CAP_PPC_MMU_RADIX or KVM_CAP_PPC_MMU_HASH_V3 */ 1225 1225 #define KVM_PPC_CONFIGURE_V3_MMU _IOW(KVMIO, 0xaf, struct kvm_ppc_mmuv3_cfg) 1226 - /* Available with KVM_CAP_PPC_RADIX_MMU */ 1226 + /* Available with KVM_CAP_PPC_MMU_RADIX */ 1227 1227 #define KVM_PPC_GET_RMMU_INFO _IOW(KVMIO, 0xb0, struct kvm_ppc_rmmu_info) 1228 1228 /* Available with KVM_CAP_PPC_GET_CPU_CHAR */ 1229 1229 #define KVM_PPC_GET_CPU_CHAR _IOR(KVMIO, 0xb1, struct kvm_ppc_cpu_char)
+3 -1
tools/include/uapi/linux/stat.h
··· 126 126 __u64 stx_mnt_id; 127 127 __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */ 128 128 __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */ 129 + __u64 stx_subvol; /* Subvolume identifier */ 129 130 /* 0xa0 */ 130 - __u64 __spare3[12]; /* Spare space for future expansion */ 131 + __u64 __spare3[11]; /* Spare space for future expansion */ 131 132 /* 0x100 */ 132 133 }; 133 134 ··· 156 155 #define STATX_MNT_ID 0x00001000U /* Got stx_mnt_id */ 157 156 #define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */ 158 157 #define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */ 158 + #define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */ 159 159 160 160 #define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */ 161 161
+1
tools/perf/Makefile.perf
··· 214 214 215 215 ifdef MAKECMDGOALS 216 216 ifeq ($(filter-out $(NON_CONFIG_TARGETS),$(MAKECMDGOALS)),) 217 + VMLINUX_H=$(src-perf)/util/bpf_skel/vmlinux/vmlinux.h 217 218 config := 0 218 219 endif 219 220 endif
+1
tools/perf/arch/mips/entry/syscalls/syscall_n64.tbl
··· 376 376 459 n64 lsm_get_self_attr sys_lsm_get_self_attr 377 377 460 n64 lsm_set_self_attr sys_lsm_set_self_attr 378 378 461 n64 lsm_list_modules sys_lsm_list_modules 379 + 462 n64 mseal sys_mseal
+1
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 548 548 459 common lsm_get_self_attr sys_lsm_get_self_attr 549 549 460 common lsm_set_self_attr sys_lsm_set_self_attr 550 550 461 common lsm_list_modules sys_lsm_list_modules 551 + 462 common mseal sys_mseal
+1
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 464 464 459 common lsm_get_self_attr sys_lsm_get_self_attr sys_lsm_get_self_attr 465 465 460 common lsm_set_self_attr sys_lsm_set_self_attr sys_lsm_set_self_attr 466 466 461 common lsm_list_modules sys_lsm_list_modules sys_lsm_list_modules 467 + 462 common mseal sys_mseal sys_mseal
+2 -1
tools/perf/arch/x86/entry/syscalls/syscall_64.tbl
··· 374 374 450 common set_mempolicy_home_node sys_set_mempolicy_home_node 375 375 451 common cachestat sys_cachestat 376 376 452 common fchmodat2 sys_fchmodat2 377 - 453 64 map_shadow_stack sys_map_shadow_stack 377 + 453 common map_shadow_stack sys_map_shadow_stack 378 378 454 common futex_wake sys_futex_wake 379 379 455 common futex_wait sys_futex_wait 380 380 456 common futex_requeue sys_futex_requeue ··· 383 383 459 common lsm_get_self_attr sys_lsm_get_self_attr 384 384 460 common lsm_set_self_attr sys_lsm_set_self_attr 385 385 461 common lsm_list_modules sys_lsm_list_modules 386 + 462 common mseal sys_mseal 386 387 387 388 # 388 389 # Due to a historical design error, certain syscalls are numbered differently
+2 -4
tools/perf/builtin-record.c
··· 1956 1956 1957 1957 if (count.lost) { 1958 1958 if (!lost) { 1959 - lost = zalloc(sizeof(*lost) + 1960 - session->machines.host.id_hdr_size); 1959 + lost = zalloc(PERF_SAMPLE_MAX_SIZE); 1961 1960 if (!lost) { 1962 1961 pr_debug("Memory allocation failed\n"); 1963 1962 return; ··· 1972 1973 lost_count = perf_bpf_filter__lost_count(evsel); 1973 1974 if (lost_count) { 1974 1975 if (!lost) { 1975 - lost = zalloc(sizeof(*lost) + 1976 - session->machines.host.id_hdr_size); 1976 + lost = zalloc(PERF_SAMPLE_MAX_SIZE); 1977 1977 if (!lost) { 1978 1978 pr_debug("Memory allocation failed\n"); 1979 1979 return;
+1 -1
tools/perf/builtin-trace.c
··· 765 765 static DEFINE_STRARRAY(fcntl_cmds, "F_"); 766 766 767 767 static const char *fcntl_linux_specific_cmds[] = { 768 - "SETLEASE", "GETLEASE", "NOTIFY", [5] = "CANCELLK", "DUPFD_CLOEXEC", 768 + "SETLEASE", "GETLEASE", "NOTIFY", "DUPFD_QUERY", [5] = "CANCELLK", "DUPFD_CLOEXEC", 769 769 "SETPIPE_SZ", "GETPIPE_SZ", "ADD_SEALS", "GET_SEALS", 770 770 "GET_RW_HINT", "SET_RW_HINT", "GET_FILE_RW_HINT", "SET_FILE_RW_HINT", 771 771 };
+7 -1
tools/perf/trace/beauty/arch/x86/include/asm/irq_vectors.h
··· 97 97 98 98 #define LOCAL_TIMER_VECTOR 0xec 99 99 100 + /* 101 + * Posted interrupt notification vector for all device MSIs delivered to 102 + * the host kernel. 103 + */ 104 + #define POSTED_MSI_NOTIFICATION_VECTOR 0xeb 105 + 100 106 #define NR_VECTORS 256 101 107 102 108 #ifdef CONFIG_X86_LOCAL_APIC 103 - #define FIRST_SYSTEM_VECTOR LOCAL_TIMER_VECTOR 109 + #define FIRST_SYSTEM_VECTOR POSTED_MSI_NOTIFICATION_VECTOR 104 110 #else 105 111 #define FIRST_SYSTEM_VECTOR NR_VECTORS 106 112 #endif
+2 -1
tools/perf/trace/beauty/include/linux/socket.h
··· 16 16 struct socket; 17 17 struct sock; 18 18 struct sk_buff; 19 + struct proto_accept_arg; 19 20 20 21 #define __sockaddr_check_size(size) \ 21 22 BUILD_BUG_ON(((size) > sizeof(struct __kernel_sockaddr_storage))) ··· 434 433 extern int __sys_sendto(int fd, void __user *buff, size_t len, 435 434 unsigned int flags, struct sockaddr __user *addr, 436 435 int addr_len); 437 - extern struct file *do_accept(struct file *file, unsigned file_flags, 436 + extern struct file *do_accept(struct file *file, struct proto_accept_arg *arg, 438 437 struct sockaddr __user *upeer_sockaddr, 439 438 int __user *upeer_addrlen, int flags); 440 439 extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr,
+8 -6
tools/perf/trace/beauty/include/uapi/linux/fcntl.h
··· 9 9 #define F_GETLEASE (F_LINUX_SPECIFIC_BASE + 1) 10 10 11 11 /* 12 + * Request nofications on a directory. 13 + * See below for events that may be notified. 14 + */ 15 + #define F_NOTIFY (F_LINUX_SPECIFIC_BASE + 2) 16 + 17 + #define F_DUPFD_QUERY (F_LINUX_SPECIFIC_BASE + 3) 18 + 19 + /* 12 20 * Cancel a blocking posix lock; internal use only until we expose an 13 21 * asynchronous lock api to userspace: 14 22 */ ··· 24 16 25 17 /* Create a file descriptor with FD_CLOEXEC set. */ 26 18 #define F_DUPFD_CLOEXEC (F_LINUX_SPECIFIC_BASE + 6) 27 - 28 - /* 29 - * Request nofications on a directory. 30 - * See below for events that may be notified. 31 - */ 32 - #define F_NOTIFY (F_LINUX_SPECIFIC_BASE+2) 33 19 34 20 /* 35 21 * Set and get of pipe page size array
+22
tools/perf/trace/beauty/include/uapi/linux/prctl.h
··· 306 306 # define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc 307 307 # define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f 308 308 309 + #define PR_RISCV_SET_ICACHE_FLUSH_CTX 71 310 + # define PR_RISCV_CTX_SW_FENCEI_ON 0 311 + # define PR_RISCV_CTX_SW_FENCEI_OFF 1 312 + # define PR_RISCV_SCOPE_PER_PROCESS 0 313 + # define PR_RISCV_SCOPE_PER_THREAD 1 314 + 315 + /* PowerPC Dynamic Execution Control Register (DEXCR) controls */ 316 + #define PR_PPC_GET_DEXCR 72 317 + #define PR_PPC_SET_DEXCR 73 318 + /* DEXCR aspect to act on */ 319 + # define PR_PPC_DEXCR_SBHE 0 /* Speculative branch hint enable */ 320 + # define PR_PPC_DEXCR_IBRTPD 1 /* Indirect branch recurrent target prediction disable */ 321 + # define PR_PPC_DEXCR_SRAPD 2 /* Subroutine return address prediction disable */ 322 + # define PR_PPC_DEXCR_NPHIE 3 /* Non-privileged hash instruction enable */ 323 + /* Action to apply / return */ 324 + # define PR_PPC_DEXCR_CTRL_EDITABLE 0x1 /* Aspect can be modified with PR_PPC_SET_DEXCR */ 325 + # define PR_PPC_DEXCR_CTRL_SET 0x2 /* Set the aspect for this process */ 326 + # define PR_PPC_DEXCR_CTRL_CLEAR 0x4 /* Clear the aspect for this process */ 327 + # define PR_PPC_DEXCR_CTRL_SET_ONEXEC 0x8 /* Set the aspect on exec */ 328 + # define PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC 0x10 /* Clear the aspect on exec */ 329 + # define PR_PPC_DEXCR_CTRL_MASK 0x1f 330 + 309 331 #endif /* _LINUX_PRCTL_H */
+3 -1
tools/perf/trace/beauty/include/uapi/linux/stat.h
··· 126 126 __u64 stx_mnt_id; 127 127 __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */ 128 128 __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */ 129 + __u64 stx_subvol; /* Subvolume identifier */ 129 130 /* 0xa0 */ 130 - __u64 __spare3[12]; /* Spare space for future expansion */ 131 + __u64 __spare3[11]; /* Spare space for future expansion */ 131 132 /* 0x100 */ 132 133 }; 133 134 ··· 156 155 #define STATX_MNT_ID 0x00001000U /* Got stx_mnt_id */ 157 156 #define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */ 158 157 #define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */ 158 + #define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */ 159 159 160 160 #define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */ 161 161
+1
tools/testing/selftests/kvm/Makefile
··· 183 183 TEST_GEN_PROGS_s390x += s390x/tprot 184 184 TEST_GEN_PROGS_s390x += s390x/cmma_test 185 185 TEST_GEN_PROGS_s390x += s390x/debug_test 186 + TEST_GEN_PROGS_s390x += s390x/shared_zeropage_test 186 187 TEST_GEN_PROGS_s390x += demand_paging_test 187 188 TEST_GEN_PROGS_s390x += dirty_log_test 188 189 TEST_GEN_PROGS_s390x += guest_print_test
+111
tools/testing/selftests/kvm/s390x/shared_zeropage_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Test shared zeropage handling (with/without storage keys) 4 + * 5 + * Copyright (C) 2024, Red Hat, Inc. 6 + */ 7 + #include <sys/mman.h> 8 + 9 + #include <linux/fs.h> 10 + 11 + #include "test_util.h" 12 + #include "kvm_util.h" 13 + #include "kselftest.h" 14 + #include "ucall_common.h" 15 + 16 + static void set_storage_key(void *addr, uint8_t skey) 17 + { 18 + asm volatile("sske %0,%1" : : "d" (skey), "a" (addr)); 19 + } 20 + 21 + static void guest_code(void) 22 + { 23 + /* Issue some storage key instruction. */ 24 + set_storage_key((void *)0, 0x98); 25 + GUEST_DONE(); 26 + } 27 + 28 + /* 29 + * Returns 1 if the shared zeropage is mapped, 0 if something else is mapped. 30 + * Returns < 0 on error or if nothing is mapped. 31 + */ 32 + static int maps_shared_zeropage(int pagemap_fd, void *addr) 33 + { 34 + struct page_region region; 35 + struct pm_scan_arg arg = { 36 + .start = (uintptr_t)addr, 37 + .end = (uintptr_t)addr + 4096, 38 + .vec = (uintptr_t)&region, 39 + .vec_len = 1, 40 + .size = sizeof(struct pm_scan_arg), 41 + .category_mask = PAGE_IS_PFNZERO, 42 + .category_anyof_mask = PAGE_IS_PRESENT, 43 + .return_mask = PAGE_IS_PFNZERO, 44 + }; 45 + return ioctl(pagemap_fd, PAGEMAP_SCAN, &arg); 46 + } 47 + 48 + int main(int argc, char *argv[]) 49 + { 50 + char *mem, *page0, *page1, *page2, tmp; 51 + const size_t pagesize = getpagesize(); 52 + struct kvm_vcpu *vcpu; 53 + struct kvm_vm *vm; 54 + struct ucall uc; 55 + int pagemap_fd; 56 + 57 + ksft_print_header(); 58 + ksft_set_plan(3); 59 + 60 + /* 61 + * We'll use memory that is not mapped into the VM for simplicity. 62 + * Shared zeropages are enabled/disabled per-process. 63 + */ 64 + mem = mmap(0, 3 * pagesize, PROT_READ, MAP_PRIVATE | MAP_ANON, -1, 0); 65 + TEST_ASSERT(mem != MAP_FAILED, "mmap() failed"); 66 + 67 + /* Disable THP. Ignore errors on older kernels. */ 68 + madvise(mem, 3 * pagesize, MADV_NOHUGEPAGE); 69 + 70 + page0 = mem; 71 + page1 = page0 + pagesize; 72 + page2 = page1 + pagesize; 73 + 74 + /* Can we even detect shared zeropages? */ 75 + pagemap_fd = open("/proc/self/pagemap", O_RDONLY); 76 + TEST_REQUIRE(pagemap_fd >= 0); 77 + 78 + tmp = *page0; 79 + asm volatile("" : "+r" (tmp)); 80 + TEST_REQUIRE(maps_shared_zeropage(pagemap_fd, page0) == 1); 81 + 82 + vm = vm_create_with_one_vcpu(&vcpu, guest_code); 83 + 84 + /* Verify that we get the shared zeropage after VM creation. */ 85 + tmp = *page1; 86 + asm volatile("" : "+r" (tmp)); 87 + ksft_test_result(maps_shared_zeropage(pagemap_fd, page1) == 1, 88 + "Shared zeropages should be enabled\n"); 89 + 90 + /* 91 + * Let our VM execute a storage key instruction that should 92 + * unshare all shared zeropages. 93 + */ 94 + vcpu_run(vcpu); 95 + get_ucall(vcpu, &uc); 96 + TEST_ASSERT_EQ(uc.cmd, UCALL_DONE); 97 + 98 + /* Verify that we don't have a shared zeropage anymore. */ 99 + ksft_test_result(!maps_shared_zeropage(pagemap_fd, page1), 100 + "Shared zeropage should be gone\n"); 101 + 102 + /* Verify that we don't get any new shared zeropages. */ 103 + tmp = *page2; 104 + asm volatile("" : "+r" (tmp)); 105 + ksft_test_result(!maps_shared_zeropage(pagemap_fd, page2), 106 + "Shared zeropages should be disabled\n"); 107 + 108 + kvm_vm_free(vm); 109 + 110 + ksft_finished(); 111 + }
+3 -2
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 2249 2249 if reset "remove invalid addresses"; then 2250 2250 pm_nl_set_limits $ns1 3 3 2251 2251 pm_nl_add_endpoint $ns1 10.0.12.1 flags signal 2252 + # broadcast IP: no packet for this address will be received on ns1 2253 + pm_nl_add_endpoint $ns1 224.0.0.1 flags signal 2252 2254 pm_nl_add_endpoint $ns1 10.0.3.1 flags signal 2253 - pm_nl_add_endpoint $ns1 10.0.14.1 flags signal 2254 - pm_nl_set_limits $ns2 3 3 2255 + pm_nl_set_limits $ns2 2 2 2255 2256 addr_nr_ns1=-3 speed=10 \ 2256 2257 run_tests $ns1 $ns2 10.0.1.1 2257 2258 chk_join_nr 1 1 1