Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-nonmm-stable-2025-05-31-15-28' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:

- "hung_task: extend blocking task stacktrace dump to semaphore" from
Lance Yang enhances the hung task detector.

The detector presently dumps the blocking task's stack when it is
blocked on a mutex. Lance's series extends this to semaphores

- "nilfs2: improve sanity checks in dirty state propagation" from
Wentao Liang addresses a couple of minor flaws in nilfs2

- "scripts/gdb: Fixes related to lx_per_cpu()" from Illia Ostapyshyn
fixes a couple of issues in the gdb scripts

- "Support kdump with LUKS encryption by reusing LUKS volume keys" from
Coiby Xu addresses a usability problem with kdump.

When the dump device is LUKS-encrypted, the kdump kernel may not have
the keys to the encrypted filesystem. A full writeup of this is in
the series [0/N] cover letter

- "sysfs: add counters for lockups and stalls" from Max Kellermann adds
/sys/kernel/hardlockup_count, /sys/kernel/softlockup_count and
/sys/kernel/rcu_stall_count
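The counters are plain read-only sysfs attributes; a minimal shell sketch for polling them (the files only exist on kernels that carry this series with the matching detectors configured, so the sketch falls back to "absent"):

```shell
# Read the lockup/stall counters added by this series; each file holds
# a single integer. Report "absent" where the kernel does not expose it.
for f in hardlockup_count softlockup_count rcu_stall_count; do
    if [ -r "/sys/kernel/$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "/sys/kernel/$f")"
    else
        printf '%s: absent\n' "$f"
    fi
done
```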

- "fork: Page operation cleanups in the fork code" from Pasha Tatashin
implements a number of code cleanups in fork.c

- "scripts/gdb/symbols: determine KASLR offset on s390 during early
boot" from Ilya Leoshkevich fixes some s390 issues in the gdb
scripts

* tag 'mm-nonmm-stable-2025-05-31-15-28' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (67 commits)
llist: make llist_add_batch() a static inline
delayacct: remove redundant code and adjust indentation
squashfs: add optional full compressed block caching
crash_dump, nvme: select CONFIGFS_FS as built-in
scripts/gdb/symbols: determine KASLR offset on s390 during early boot
scripts/gdb/symbols: factor out pagination_off()
scripts/gdb/symbols: factor out get_vmlinux()
kernel/panic.c: format kernel-doc comments
mailmap: update and consolidate Casey Connolly's name and email
nilfs2: remove wbc->for_reclaim handling
fork: define a local GFP_VMAP_STACK
fork: check charging success before zeroing stack
fork: clean-up naming of vm_stack/vm_struct variables in vmap stacks code
fork: clean-up ifdef logic around stack allocation
kernel/rcu/tree_stall: add /sys/kernel/rcu_stall_count
kernel/watchdog: add /sys/kernel/{hard,soft}lockup_count
x86/crash: make the page that stores the dm crypt keys inaccessible
x86/crash: pass dm crypt keys to kdump kernel
Revert "x86/mm: Remove unused __set_memory_prot()"
crash_dump: retrieve dm crypt keys in kdump kernel
...

+1568 -814
+3
.mailmap
···
 Brian Silverman <bsilver16384@gmail.com> <brian.silverman@bluerivertech.com>
 Bryan Tan <bryan-bt.tan@broadcom.com> <bryantan@vmware.com>
 Cai Huoqing <cai.huoqing@linux.dev> <caihuoqing@baidu.com>
+Casey Connolly <casey.connolly@linaro.org> <caleb.connolly@linaro.org>
+Casey Connolly <casey.connolly@linaro.org> <caleb@connolly.tech>
+Casey Connolly <casey.connolly@linaro.org> <caleb@postmarketos.org>
 Can Guo <quic_cang@quicinc.com> <cang@codeaurora.org>
 Carl Huang <quic_cjhuang@quicinc.com> <cjhuang@codeaurora.org>
 Carlos Bilbao <carlos.bilbao@kernel.org> <carlos.bilbao@amd.com>
+7
Documentation/ABI/testing/sysfs-kernel-hardlockup_count
···
+What:		/sys/kernel/hardlockup_count
+Date:		May 2025
+KernelVersion:	6.16
+Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:
+		Shows how many times the system has detected a hard lockup since last boot.
+		Available only if CONFIG_HARDLOCKUP_DETECTOR is enabled.
+6
Documentation/ABI/testing/sysfs-kernel-rcu_stall_count
···
+What:		/sys/kernel/rcu_stall_count
+Date:		May 2025
+KernelVersion:	6.16
+Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:
+		Shows how many times the system has detected an RCU stall since last boot.
+7
Documentation/ABI/testing/sysfs-kernel-softlockup_count
···
+What:		/sys/kernel/softlockup_count
+Date:		May 2025
+KernelVersion:	6.16
+Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+Description:
+		Shows how many times the system has detected a soft lockup since last boot.
+		Available only if CONFIG_SOFTLOCKUP_DETECTOR is enabled.
+32
Documentation/admin-guide/kdump/kdump.rst
···
 bit flag being set by add_taint().
 This will cause a kdump to occur at the add_taint()->panic() call.
 
+Write the dump file to encrypted disk volume
+============================================
+
+CONFIG_CRASH_DM_CRYPT can be enabled to support saving the dump file to an
+encrypted disk volume (only x86_64 supported for now). User space can interact
+with /sys/kernel/config/crash_dm_crypt_keys for setup,
+
+1. Tell the first kernel what logon keys are needed to unlock the disk volumes,
+   # Add key #1
+   mkdir /sys/kernel/config/crash_dm_crypt_keys/7d26b7b4-e342-4d2d-b660-7426b0996720
+   # Add key #1's description
+   echo cryptsetup:7d26b7b4-e342-4d2d-b660-7426b0996720 > /sys/kernel/config/crash_dm_crypt_keys/description
+
+   # how many keys do we have now?
+   cat /sys/kernel/config/crash_dm_crypt_keys/count
+   1
+
+   # Add key #2 in the same way
+
+   # how many keys do we have now?
+   cat /sys/kernel/config/crash_dm_crypt_keys/count
+   2
+
+   # To support CPU/memory hot-plugging, re-use keys already saved to reserved
+   # memory
+   echo true > /sys/kernel/config/crash_dm_crypt_key/reuse
+
+2. Load the dump-capture kernel
+
+3. After the dump-capture kernel has booted, restore the keys to user keyring
+   echo yes > /sys/kernel/crash_dm_crypt_keys/restore
+
 Contact
 =======
 
+2 -2
Documentation/admin-guide/kdump/vmcoreinfo.rst
···
 Page attributes. These flags are used to filter various unnecessary for
 dumping pages.
 
-PAGE_BUDDY_MAPCOUNT_VALUE(~PG_buddy)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_offline)
------------------------------------------------------------------------------
+PAGE_BUDDY_MAPCOUNT_VALUE(~PG_buddy)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_offline)|PAGE_OFFLINE_MAPCOUNT_VALUE(~PG_unaccepted)
+-------------------------------------------------------------------------------------------------------------------------
 
 More page attributes. These flags are used to filter various unnecessary for
 dumping pages.
+1 -1
Documentation/devicetree/bindings/display/panel/lg,sw43408.yaml
···
 title: LG SW43408 1080x2160 DSI panel
 
 maintainers:
-  - Caleb Connolly <caleb.connolly@linaro.org>
+  - Casey Connolly <casey.connolly@linaro.org>
 
 description:
   This panel is used on the Pixel 3, it is a 60hz OLED panel which
+1 -1
Documentation/devicetree/bindings/iio/adc/qcom,spmi-rradc.yaml
···
 title: Qualcomm's SPMI PMIC Round Robin ADC
 
 maintainers:
-  - Caleb Connolly <caleb.connolly@linaro.org>
+  - Casey Connolly <casey.connolly@linaro.org>
 
 description: |
   The Qualcomm SPMI Round Robin ADC (RRADC) provides interface to clients to
+1 -1
Documentation/devicetree/bindings/power/supply/qcom,pmi8998-charger.yaml
···
 title: Qualcomm PMI8998/PM660 Switch-Mode Battery Charger "2"
 
 maintainers:
-  - Caleb Connolly <caleb.connolly@linaro.org>
+  - Casey Connolly <casey.connolly@linaro.org>
 
 properties:
   compatible:
-10
Documentation/filesystems/relay.rst
···
 (including in create_buf_file()) via chan->private_data or
 buf->chan->private_data.
 
-Buffer-only channels
---------------------
-
-These channels have no files associated and can be created with
-relay_open(NULL, NULL, ...). Such channels are useful in scenarios such
-as when doing early tracing in the kernel, before the VFS is up. In these
-cases, one may open a buffer-only channel and then call
-relay_late_setup_files() when the kernel is ready to handle files,
-to expose the buffered data to the userspace.
-
 Channel 'modes'
 ---------------
 
+15 -19
Documentation/process/debugging/gdb-kernel-debugging.rst
···
 
 - Make use of the per-cpu function for the current or a specified CPU::
 
-    (gdb) p $lx_per_cpu("runqueues").nr_running
+    (gdb) p $lx_per_cpu(runqueues).nr_running
     $3 = 1
-    (gdb) p $lx_per_cpu("runqueues", 2).nr_running
+    (gdb) p $lx_per_cpu(runqueues, 2).nr_running
     $4 = 0
 
 - Dig into hrtimers using the container_of helper::
 
-    (gdb) set $next = $lx_per_cpu("hrtimer_bases").clock_base[0].active.next
-    (gdb) p *$container_of($next, "struct hrtimer", "node")
+    (gdb) set $leftmost = $lx_per_cpu(hrtimer_bases).clock_base[0].active.rb_root.rb_leftmost
+    (gdb) p *$container_of($leftmost, "struct hrtimer", "node")
     $5 = {
       node = {
         node = {
-          __rb_parent_color = 18446612133355256072,
-          rb_right = 0x0 <irq_stack_union>,
-          rb_left = 0x0 <irq_stack_union>
+          __rb_parent_color = 18446612686384860673,
+          rb_right = 0xffff888231da8b00,
+          rb_left = 0x0
         },
-        expires = {
-          tv64 = 1835268000000
-        }
+        expires = 1228461000000
       },
-      _softexpires = {
-        tv64 = 1835268000000
-      },
-      function = 0xffffffff81078232 <tick_sched_timer>,
-      base = 0xffff88003fd0d6f0,
-      state = 1,
-      start_pid = 0,
-      start_site = 0xffffffff81055c1f <hrtimer_start_range_ns+20>,
-      start_comm = "swapper/2\000\000\000\000\000\000"
+      _softexpires = 1228461000000,
+      function = 0xffffffff8137ab20 <tick_nohz_handler>,
+      base = 0xffff888231d9b4c0,
+      state = 1 '\001',
+      is_rel = 0 '\000',
+      is_soft = 0 '\000',
+      is_hard = 1 '\001'
     }
 
 
+15 -19
Documentation/translations/zh_CN/dev-tools/gdb-kernel-debugging.rst
···
 
 - 对当前或指定的CPU使用per-cpu函数::
 
-    (gdb) p $lx_per_cpu("runqueues").nr_running
+    (gdb) p $lx_per_cpu(runqueues).nr_running
     $3 = 1
-    (gdb) p $lx_per_cpu("runqueues", 2).nr_running
+    (gdb) p $lx_per_cpu(runqueues, 2).nr_running
     $4 = 0
 
 - 使用container_of查看更多hrtimers信息::
 
-    (gdb) set $next = $lx_per_cpu("hrtimer_bases").clock_base[0].active.next
-    (gdb) p *$container_of($next, "struct hrtimer", "node")
+    (gdb) set $leftmost = $lx_per_cpu(hrtimer_bases).clock_base[0].active.rb_root.rb_leftmost
+    (gdb) p *$container_of($leftmost, "struct hrtimer", "node")
     $5 = {
       node = {
         node = {
-          __rb_parent_color = 18446612133355256072,
-          rb_right = 0x0 <irq_stack_union>,
-          rb_left = 0x0 <irq_stack_union>
+          __rb_parent_color = 18446612686384860673,
+          rb_right = 0xffff888231da8b00,
+          rb_left = 0x0
         },
-        expires = {
-          tv64 = 1835268000000
-        }
+        expires = 1228461000000
       },
-      _softexpires = {
-        tv64 = 1835268000000
-      },
-      function = 0xffffffff81078232 <tick_sched_timer>,
-      base = 0xffff88003fd0d6f0,
-      state = 1,
-      start_pid = 0,
-      start_site = 0xffffffff81055c1f <hrtimer_start_range_ns+20>,
-      start_comm = "swapper/2\000\000\000\000\000\000"
+      _softexpires = 1228461000000,
+      function = 0xffffffff8137ab20 <tick_nohz_handler>,
+      base = 0xffff888231d9b4c0,
+      state = 1 '\001',
+      is_rel = 0 '\000',
+      is_soft = 0 '\000',
+      is_hard = 1 '\001'
     }
 
 
+15 -19
Documentation/translations/zh_TW/dev-tools/gdb-kernel-debugging.rst
···
 
 - 對當前或指定的CPU使用per-cpu函數::
 
-    (gdb) p $lx_per_cpu("runqueues").nr_running
+    (gdb) p $lx_per_cpu(runqueues).nr_running
     $3 = 1
-    (gdb) p $lx_per_cpu("runqueues", 2).nr_running
+    (gdb) p $lx_per_cpu(runqueues, 2).nr_running
     $4 = 0
 
 - 使用container_of查看更多hrtimers信息::
 
-    (gdb) set $next = $lx_per_cpu("hrtimer_bases").clock_base[0].active.next
-    (gdb) p *$container_of($next, "struct hrtimer", "node")
+    (gdb) set $leftmost = $lx_per_cpu(hrtimer_bases).clock_base[0].active.rb_root.rb_leftmost
+    (gdb) p *$container_of($leftmost, "struct hrtimer", "node")
     $5 = {
       node = {
         node = {
-          __rb_parent_color = 18446612133355256072,
-          rb_right = 0x0 <irq_stack_union>,
-          rb_left = 0x0 <irq_stack_union>
+          __rb_parent_color = 18446612686384860673,
+          rb_right = 0xffff888231da8b00,
+          rb_left = 0x0
         },
-        expires = {
-          tv64 = 1835268000000
-        }
+        expires = 1228461000000
       },
-      _softexpires = {
-        tv64 = 1835268000000
-      },
-      function = 0xffffffff81078232 <tick_sched_timer>,
-      base = 0xffff88003fd0d6f0,
-      state = 1,
-      start_pid = 0,
-      start_site = 0xffffffff81055c1f <hrtimer_start_range_ns+20>,
-      start_comm = "swapper/2\000\000\000\000\000\000"
+      _softexpires = 1228461000000,
+      function = 0xffffffff8137ab20 <tick_nohz_handler>,
+      base = 0xffff888231d9b4c0,
+      state = 1 '\001',
+      is_rel = 0 '\000',
+      is_soft = 0 '\000',
+      is_hard = 1 '\001'
     }
 
 
+1 -1
MAINTAINERS
···
 
 DRM DRIVER FOR LG SW43408 PANELS
 M:	Sumit Semwal <sumit.semwal@linaro.org>
-M:	Caleb Connolly <caleb.connolly@linaro.org>
+M:	Casey Connolly <casey.connolly@linaro.org>
 S:	Maintained
 T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
 F:	Documentation/devicetree/bindings/display/panel/lg,sw43408.yaml
+1 -1
arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts
···
 // SPDX-License-Identifier: BSD-3-Clause
 /*
  * Copyright (c) 2023, Luca Weiss <luca.weiss@fairphone.com>
- * Copyright (c) 2024, Caleb Connolly <caleb@postmarketos.org>
+ * Copyright (c) 2024, Casey Connolly <casey.connolly@linaro.org>
  */
 
 /dts-v1/;
+1 -1
arch/arm64/boot/dts/qcom/sdm845-shift-axolotl.dts
···
 // SPDX-License-Identifier: GPL-2.0
 /*
  * Copyright (c) 2022, Alexander Martinz <amartinz@shiftphones.com>
- * Copyright (c) 2022, Caleb Connolly <caleb@connolly.tech>
+ * Copyright (c) 2022, Casey Connolly <casey.connolly@linaro.org>
  * Copyright (c) 2022, Dylan Van Assche <me@dylanvanassche.be>
  */
 
+2
arch/x86/include/asm/set_memory.h
···
 
 #include <asm/page.h>
 #include <asm-generic/set_memory.h>
+#include <asm/pgtable.h>
 
 #define set_memory_rox set_memory_rox
 int set_memory_rox(unsigned long addr, int numpages);
···
  * The caller is required to take care of these.
  */
 
+int __set_memory_prot(unsigned long addr, int numpages, pgprot_t prot);
 int _set_memory_uc(unsigned long addr, int numpages);
 int _set_memory_wc(unsigned long addr, int numpages);
 int _set_memory_wt(unsigned long addr, int numpages);
+24 -2
arch/x86/kernel/crash.c
···
 				      unsigned long long mend)
 {
 	unsigned long start, end;
+	int ret;
 
 	cmem->ranges[0].start = mstart;
 	cmem->ranges[0].end = mend;
···
 	/* Exclude elf header region */
 	start = image->elf_load_addr;
 	end = start + image->elf_headers_sz - 1;
-	return crash_exclude_mem_range(cmem, start, end);
+	ret = crash_exclude_mem_range(cmem, start, end);
+
+	if (ret)
+		return ret;
+
+	/* Exclude dm crypt keys region */
+	if (image->dm_crypt_keys_addr) {
+		start = image->dm_crypt_keys_addr;
+		end = start + image->dm_crypt_keys_sz - 1;
+		return crash_exclude_mem_range(cmem, start, end);
+	}
+
+	return ret;
 }
 
 /* Prepare memory map for crash dump kernel */
 int crash_setup_memmap_entries(struct kimage *image, struct boot_params *params)
 {
+	unsigned int nr_ranges = 0;
 	int i, ret = 0;
 	unsigned long flags;
 	struct e820_entry ei;
 	struct crash_memmap_data cmd;
 	struct crash_mem *cmem;
 
-	cmem = vzalloc(struct_size(cmem, ranges, 1));
+	/*
+	 * Using random kexec_buf for passing dm crypt keys may cause a range
+	 * split. So use two slots here.
+	 */
+	nr_ranges = 2;
+	cmem = vzalloc(struct_size(cmem, ranges, nr_ranges));
 	if (!cmem)
 		return -ENOMEM;
+
+	cmem->max_nr_ranges = nr_ranges;
+	cmem->nr_ranges = 0;
 
 	memset(&cmd, 0, sizeof(struct crash_memmap_data));
 	cmd.params = params;
+21
arch/x86/kernel/kexec-bzimage64.c
···
 #include <asm/kexec-bzimage64.h>
 
 #define MAX_ELFCOREHDR_STR_LEN	30 /* elfcorehdr=0x<64bit-value> */
+#define MAX_DMCRYPTKEYS_STR_LEN	31 /* dmcryptkeys=0x<64bit-value> */
+
 
 /*
  * Defines lowest physical address for various segments. Not sure where
···
 	if (image->type == KEXEC_TYPE_CRASH) {
 		len = sprintf(cmdline_ptr,
 			"elfcorehdr=0x%lx ", image->elf_load_addr);
+
+		if (image->dm_crypt_keys_addr != 0)
+			len += sprintf(cmdline_ptr + len,
+					"dmcryptkeys=0x%lx ", image->dm_crypt_keys_addr);
 	}
 	memcpy(cmdline_ptr + len, cmdline, cmdline_len);
 	cmdline_len += len;
···
 		ret = crash_load_segments(image);
 		if (ret)
 			return ERR_PTR(ret);
+		ret = crash_load_dm_crypt_keys(image);
+		if (ret == -ENOENT) {
+			kexec_dprintk("No dm crypt key to load\n");
+		} else if (ret) {
+			pr_err("Failed to load dm crypt keys\n");
+			return ERR_PTR(ret);
+		}
+		if (image->dm_crypt_keys_addr &&
+		    cmdline_len + MAX_ELFCOREHDR_STR_LEN + MAX_DMCRYPTKEYS_STR_LEN >
+			    header->cmdline_size) {
+			pr_err("Appending dmcryptkeys=<addr> to command line exceeds maximum allowed length\n");
+			return ERR_PTR(-EINVAL);
+		}
 	}
 #endif
···
 	efi_map_sz = efi_get_runtime_map_size();
 	params_cmdline_sz = sizeof(struct boot_params) + cmdline_len +
 				MAX_ELFCOREHDR_STR_LEN;
+	if (image->dm_crypt_keys_addr)
+		params_cmdline_sz += MAX_DMCRYPTKEYS_STR_LEN;
 	params_cmdline_sz = ALIGN(params_cmdline_sz, 16);
 	kbuf.bufsz = params_cmdline_sz + ALIGN(efi_map_sz, 16) +
 				sizeof(struct setup_data) +
+22
arch/x86/kernel/machine_kexec_64.c
···
 		kexec_mark_range(control, crashk_res.end, protect);
 }
 
+/* make the memory storing dm crypt keys in/accessible */
+static void kexec_mark_dm_crypt_keys(bool protect)
+{
+	unsigned long start_paddr, end_paddr;
+	unsigned int nr_pages;
+
+	if (kexec_crash_image->dm_crypt_keys_addr) {
+		start_paddr = kexec_crash_image->dm_crypt_keys_addr;
+		end_paddr = start_paddr + kexec_crash_image->dm_crypt_keys_sz - 1;
+		nr_pages = (PAGE_ALIGN(end_paddr) - PAGE_ALIGN_DOWN(start_paddr))/PAGE_SIZE;
+		if (protect)
+			set_memory_np((unsigned long)phys_to_virt(start_paddr), nr_pages);
+		else
+			__set_memory_prot(
+				(unsigned long)phys_to_virt(start_paddr),
+				nr_pages,
+				__pgprot(_PAGE_PRESENT | _PAGE_NX | _PAGE_RW));
+	}
+}
+
 void arch_kexec_protect_crashkres(void)
 {
 	kexec_mark_crashkres(true);
+	kexec_mark_dm_crypt_keys(true);
 }
 
 void arch_kexec_unprotect_crashkres(void)
 {
+	kexec_mark_dm_crypt_keys(false);
 	kexec_mark_crashkres(false);
 }
 #endif
+13
arch/x86/mm/pat/set_memory.c
···
 					CPA_PAGES_ARRAY, pages);
 }
 
+/*
+ * __set_memory_prot is an internal helper for callers that have been passed
+ * a pgprot_t value from upper layers and a reservation has already been taken.
+ * If you want to set the pgprot to a specific page protocol, use the
+ * set_memory_xx() functions.
+ */
+int __set_memory_prot(unsigned long addr, int numpages, pgprot_t prot)
+{
+	return change_page_attr_set_clr(&addr, numpages, prot,
+					__pgprot(~pgprot_val(prot)), 0, 0,
+					NULL);
+}
+
 int _set_memory_uc(unsigned long addr, int numpages)
 {
 	/*
+1 -1
drivers/cpufreq/powernow-k8.c
···
 	cpuid(CPUID_FREQ_VOLT_CAPABILITIES, &eax, &ebx, &ecx, &edx);
 	if ((edx & P_STATE_TRANSITION_CAPABLE)
 		!= P_STATE_TRANSITION_CAPABLE) {
-		pr_info("Power state transitions not supported\n");
+		pr_info_once("Power state transitions not supported\n");
 		return;
 	}
 	*rc = 0;
+2 -2
drivers/gpu/drm/panel/panel-samsung-sofef00.c
···
 // SPDX-License-Identifier: GPL-2.0-only
-/* Copyright (c) 2020 Caleb Connolly <caleb@connolly.tech>
+/* Copyright (c) 2020 Casey Connolly <casey.connolly@linaro.org>
  * Generated with linux-mdss-dsi-panel-driver-generator from vendor device tree:
  * Copyright (c) 2020, The Linux Foundation. All rights reserved.
  */
···
 
 module_mipi_dsi_driver(sofef00_panel_driver);
 
-MODULE_AUTHOR("Caleb Connolly <caleb@connolly.tech>");
+MODULE_AUTHOR("Casey Connolly <casey.connolly@linaro.org>");
 MODULE_DESCRIPTION("DRM driver for Samsung AMOLED DSI panels found in OnePlus 6/6T phones");
 MODULE_LICENSE("GPL v2");
+2 -2
drivers/iio/adc/qcom-spmi-rradc.c
···
 /*
  * Copyright (c) 2016-2017, 2019, The Linux Foundation. All rights reserved.
  * Copyright (c) 2022 Linaro Limited.
- * Author: Caleb Connolly <caleb.connolly@linaro.org>
+ * Author: Casey Connolly <casey.connolly@linaro.org>
  *
  * This driver is for the Round Robin ADC found in the pmi8998 and pm660 PMICs.
  */
···
 module_platform_driver(rradc_driver);
 
 MODULE_DESCRIPTION("QCOM SPMI PMIC RR ADC driver");
-MODULE_AUTHOR("Caleb Connolly <caleb.connolly@linaro.org>");
+MODULE_AUTHOR("Casey Connolly <casey.connolly@linaro.org>");
 MODULE_LICENSE("GPL");
+1 -2
drivers/md/bcache/btree.c
···
 #include <linux/sched/clock.h>
 #include <linux/rculist.h>
 #include <linux/delay.h>
+#include <linux/sort.h>
 #include <trace/events/bcache.h>
 
 /*
···
 		list_move(&b->list, &b->c->btree_cache_freed);
 	}
 }
-
-#define cmp_int(l, r) ((l > r) - (l < r))
 
 #ifdef CONFIG_PROVE_LOCKING
 static int btree_lock_cmp_fn(const struct lockdep_map *_a,
+1 -1
drivers/nvme/target/Kconfig
···
 config NVME_TARGET
 	tristate "NVMe Target support"
 	depends on BLOCK
-	depends on CONFIGFS_FS
+	select CONFIGFS_FS
 	select NVME_KEYRING if NVME_TARGET_TCP_TLS
 	select KEYS if NVME_TARGET_TCP_TLS
 	select SGL_ALLOC
+2 -2
drivers/power/supply/qcom_pmi8998_charger.c
···
 /*
  * Copyright (c) 2016-2019 The Linux Foundation. All rights reserved.
  * Copyright (c) 2023, Linaro Ltd.
- * Author: Caleb Connolly <caleb.connolly@linaro.org>
+ * Author: Casey Connolly <casey.connolly@linaro.org>
  *
  * This driver is for the switch-mode battery charger and boost
  * hardware found in pmi8998 and related PMICs.
···
 
 module_platform_driver(qcom_spmi_smb2);
 
-MODULE_AUTHOR("Caleb Connolly <caleb.connolly@linaro.org>");
+MODULE_AUTHOR("Casey Connolly <casey.connolly@linaro.org>");
 MODULE_DESCRIPTION("Qualcomm SMB2 Charger Driver");
 MODULE_LICENSE("GPL");
-20
drivers/rapidio/devices/rio_mport_cdev.c
···
 #endif
 
 /*
- * An internal DMA coherent buffer
- */
-struct mport_dma_buf {
-	void *ib_base;
-	dma_addr_t ib_phys;
-	u32 ib_size;
-	u64 ib_rio_base;
-	bool ib_map;
-	struct file *filp;
-};
-
-/*
  * Internal memory mapping structure
  */
 enum rio_mport_map_dir {
···
 	struct file *filp;
 };
 
-struct rio_mport_dma_map {
-	int valid;
-	u64 length;
-	void *vaddr;
-	dma_addr_t paddr;
-};
-
-#define MPORT_MAX_DMA_BUFS	16
 #define MPORT_EVENT_DEPTH	10
 
 /*
-103
drivers/rapidio/rio.c
···
 EXPORT_SYMBOL_GPL(rio_request_mport_dma);
 
 /**
- * rio_request_dma - request RapidIO capable DMA channel that supports
- *   specified target RapidIO device.
- * @rdev: RIO device associated with DMA transfer
- *
- * Returns pointer to allocated DMA channel or NULL if failed.
- */
-struct dma_chan *rio_request_dma(struct rio_dev *rdev)
-{
-	return rio_request_mport_dma(rdev->net->hport);
-}
-EXPORT_SYMBOL_GPL(rio_request_dma);
-
-/**
  * rio_release_dma - release specified DMA channel
  * @dchan: DMA channel to release
  */
···
 }
 EXPORT_SYMBOL_GPL(rio_dma_prep_xfer);
 
-/**
- * rio_dma_prep_slave_sg - RapidIO specific wrapper
- *   for device_prep_slave_sg callback defined by DMAENGINE.
- * @rdev: RIO device control structure
- * @dchan: DMA channel to configure
- * @data: RIO specific data descriptor
- * @direction: DMA data transfer direction (TO or FROM the device)
- * @flags: dmaengine defined flags
- *
- * Initializes RapidIO capable DMA channel for the specified data transfer.
- * Uses DMA channel private extension to pass information related to remote
- * target RIO device.
- *
- * Returns: pointer to DMA transaction descriptor if successful,
- *          error-valued pointer or NULL if failed.
- */
-struct dma_async_tx_descriptor *rio_dma_prep_slave_sg(struct rio_dev *rdev,
-	struct dma_chan *dchan, struct rio_dma_data *data,
-	enum dma_transfer_direction direction, unsigned long flags)
-{
-	return rio_dma_prep_xfer(dchan, rdev->destid, data, direction, flags);
-}
-EXPORT_SYMBOL_GPL(rio_dma_prep_slave_sg);
-
 #endif /* CONFIG_RAPIDIO_DMA_ENGINE */
-
-/**
- * rio_find_mport - find RIO mport by its ID
- * @mport_id: number (ID) of mport device
- *
- * Given a RIO mport number, the desired mport is located
- * in the global list of mports. If the mport is found, a pointer to its
- * data structure is returned. If no mport is found, %NULL is returned.
- */
-struct rio_mport *rio_find_mport(int mport_id)
-{
-	struct rio_mport *port;
-
-	mutex_lock(&rio_mport_list_lock);
-	list_for_each_entry(port, &rio_mports, node) {
-		if (port->id == mport_id)
-			goto found;
-	}
-	port = NULL;
-found:
-	mutex_unlock(&rio_mport_list_lock);
-
-	return port;
-}
 
 /**
  * rio_register_scan - enumeration/discovery method registration interface
···
 	return rc;
 }
 EXPORT_SYMBOL_GPL(rio_register_scan);
-
-/**
- * rio_unregister_scan - removes enumeration/discovery method from mport
- * @mport_id: mport device ID for which fabric scan routine has to be
- *            unregistered (RIO_MPORT_ANY = apply to all mports that use
- *            the specified scan_ops)
- * @scan_ops: enumeration/discovery operations structure
- *
- * Removes enumeration or discovery method assigned to the specified mport
- * device. If RIO_MPORT_ANY is specified, removes the specified operations from
- * all mports that have them attached.
- */
-int rio_unregister_scan(int mport_id, struct rio_scan *scan_ops)
-{
-	struct rio_mport *port;
-	struct rio_scan_node *scan;
-
-	pr_debug("RIO: %s for mport_id=%d\n", __func__, mport_id);
-
-	if (mport_id != RIO_MPORT_ANY && mport_id >= RIO_MAX_MPORTS)
-		return -EINVAL;
-
-	mutex_lock(&rio_mport_list_lock);
-
-	list_for_each_entry(port, &rio_mports, node)
-		if (port->id == mport_id ||
-		    (mport_id == RIO_MPORT_ANY && port->nscan == scan_ops))
-			port->nscan = NULL;
-
-	list_for_each_entry(scan, &rio_scans, node) {
-		if (scan->mport_id == mport_id) {
-			list_del(&scan->node);
-			kfree(scan);
-			break;
-		}
-	}
-
-	mutex_unlock(&rio_mport_list_lock);
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(rio_unregister_scan);
 
 /**
  * rio_mport_scan - execute enumeration/discovery on the specified mport
-2
drivers/rapidio/rio.h
···
 extern int rio_enable_rx_tx_port(struct rio_mport *port, int local, u16 destid,
 				 u8 hopcount, u8 port_num);
 extern int rio_register_scan(int mport_id, struct rio_scan *scan_ops);
-extern int rio_unregister_scan(int mport_id, struct rio_scan *scan_ops);
 extern void rio_attach_device(struct rio_dev *rdev);
-extern struct rio_mport *rio_find_mport(int mport_id);
 extern int rio_mport_scan(int mport_id);
 
 /* Structures internal to the RIO core code */
-6
drivers/rapidio/rio_cm.c
···
 	struct rio_dev *rdev;
 };
 
-struct rio_cm_work {
-	struct work_struct work;
-	struct cm_dev *cm;
-	void *data;
-};
-
 struct conn_req {
 	struct list_head node;
 	u32 destid;	/* requester destID */
+2 -2
drivers/s390/char/vmlogrdr.c
···
 
 /*
  * The recording commands needs to be called with option QID
- * for guests that have previlege classes A or B.
+ * for guests that have privilege classes A or B.
  * Purging has to be done as separate step, because recording
  * can't be switched on as long as records are on the queue.
  * Doing both at the same time doesn't work.
···
 
 	/*
 	 * The recording command needs to be called with option QID
-	 * for guests that have previlege classes A or B.
+	 * for guests that have privilege classes A or B.
 	 * Other guests will not recognize the command and we have to
 	 * issue the same command without the QID parameter.
 	 */
+1 -2
fs/bcachefs/util.h
···
 #include <linux/random.h>
 #include <linux/ratelimit.h>
 #include <linux/slab.h>
+#include <linux/sort.h>
 #include <linux/vmalloc.h>
 #include <linux/workqueue.h>
 
···
 }
 
 u64 *bch2_acc_percpu_u64s(u64 __percpu *, unsigned);
-
-#define cmp_int(l, r) ((l > r) - (l < r))
 
 static inline int u8_cmp(u8 l, u8 r)
 {
-1
fs/configfs/Kconfig
···
 # SPDX-License-Identifier: GPL-2.0-only
 config CONFIGFS_FS
 	tristate "Userspace-driven configuration filesystem"
-	select SYSFS
 	help
 	  configfs is a RAM-based filesystem that provides the converse
 	  of sysfs's functionality. Where sysfs is a filesystem-based
+3 -1
fs/nilfs2/btree.c
···
 
 	ret = nilfs_btree_do_lookup(btree, path, key, NULL, level + 1, 0);
 	if (ret < 0) {
-		if (unlikely(ret == -ENOENT))
+		if (unlikely(ret == -ENOENT)) {
 			nilfs_crit(btree->b_inode->i_sb,
 				   "writing node/leaf block does not appear in b-tree (ino=%lu) at key=%llu, level=%d",
 				   btree->b_inode->i_ino,
 				   (unsigned long long)key, level);
+			ret = -EINVAL;
+		}
 		goto out;
 	}
 
+3
fs/nilfs2/direct.c
···
 	dat = nilfs_bmap_get_dat(bmap);
 	key = nilfs_bmap_data_get_key(bmap, bh);
 	ptr = nilfs_direct_get_ptr(bmap, key);
+	if (ptr == NILFS_BMAP_INVALID_PTR)
+		return -EINVAL;
+
 	if (!buffer_nilfs_volatile(bh)) {
 		oldreq.pr_entry_nr = ptr;
 		newreq.pr_entry_nr = ptr;
-2
fs/nilfs2/mdt.c
···
 
 	if (wbc->sync_mode == WB_SYNC_ALL)
 		err = nilfs_construct_segment(sb);
-	else if (wbc->for_reclaim)
-		nilfs_flush_segment(sb, inode->i_ino);
 
 	return err;
 }
-16
fs/nilfs2/segment.c
···
 	spin_unlock(&sci->sc_state_lock);
 }
 
-/**
- * nilfs_flush_segment - trigger a segment construction for resource control
- * @sb: super block
- * @ino: inode number of the file to be flushed out.
- */
-void nilfs_flush_segment(struct super_block *sb, ino_t ino)
-{
-	struct the_nilfs *nilfs = sb->s_fs_info;
-	struct nilfs_sc_info *sci = nilfs->ns_writer;
-
-	if (!sci || nilfs_doing_construction())
-		return;
-	nilfs_segctor_do_flush(sci, NILFS_MDT_INODE(sb, ino) ? ino : 0);
-	/* assign bit 0 to data files */
-}
-
 struct nilfs_segctor_wait_request {
 	wait_queue_entry_t wq;
 	__u32 seq;
-1
fs/nilfs2/segment.h
··· 226 226 extern int nilfs_construct_segment(struct super_block *); 227 227 extern int nilfs_construct_dsync_segment(struct super_block *, struct inode *, 228 228 loff_t, loff_t); 229 - extern void nilfs_flush_segment(struct super_block *, ino_t); 230 229 extern int nilfs_clean_segments(struct super_block *, struct nilfs_argv *, 231 230 void **); 232 231
+1 -1
fs/ocfs2/cluster/tcp.c
··· 1483 1483 sc_put(sc); 1484 1484 } 1485 1485 1486 - /* socket shutdown does a del_timer_sync against this as it tears down. 1486 + /* socket shutdown does a timer_delete_sync against this as it tears down. 1487 1487 * we can't start this timer until we've got to the point in sc buildup 1488 1488 * where shutdown is going to be involved */ 1489 1489 static void o2net_idle_timer(struct timer_list *t)
+1 -1
fs/ocfs2/filecheck.c
··· 505 505 ocfs2_filecheck_handle_entry(ent, entry); 506 506 507 507 exit: 508 - return (!ret ? count : ret); 508 + return ret ?: count; 509 509 }
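The new return uses GCC's `?:` (Elvis) extension: `ret ?: count` yields `ret` when it is nonzero (an error code here) and `count` otherwise, evaluating `ret` only once. A standalone userspace illustration (the function name is made up):

```c
/* GNU ?: extension: a ?: b is equivalent to a ? a : b, with a
 * evaluated only once. Common kernel idiom for "error or count". */
static long result_or_count(long ret, long count)
{
	return ret ?: count;
}
```

Compilers in GNU mode (gcc, clang) accept this; strict ISO C does not.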
+1 -1
fs/ocfs2/quota_local.c
··· 674 674 break; 675 675 } 676 676 out: 677 - kfree(rec); 677 + ocfs2_free_quota_recovery(rec); 678 678 return status; 679 679 } 680 680
+1 -2
fs/ocfs2/stackglue.c
··· 691 691 memset(&locking_max_version, 0, 692 692 sizeof(struct ocfs2_protocol_version)); 693 693 ocfs2_sysfs_exit(); 694 - if (ocfs2_table_header) 695 - unregister_sysctl_table(ocfs2_table_header); 694 + unregister_sysctl_table(ocfs2_table_header); 696 695 } 697 696 698 697 MODULE_AUTHOR("Oracle");
+1 -2
fs/pipe.c
··· 26 26 #include <linux/memcontrol.h> 27 27 #include <linux/watch_queue.h> 28 28 #include <linux/sysctl.h> 29 + #include <linux/sort.h> 29 30 30 31 #include <linux/uaccess.h> 31 32 #include <asm/ioctls.h> ··· 76 75 * pipe_read & write cleanup 77 76 * -- Manfred Spraul <manfred@colorfullife.com> 2002-05-09 78 77 */ 79 - 80 - #define cmp_int(l, r) ((l > r) - (l < r)) 81 78 82 79 #ifdef CONFIG_PROVE_LOCKING 83 80 static int pipe_lock_cmp_fn(const struct lockdep_map *a,
+9 -3
fs/proc/base.c
··· 827 827 .release = single_release, 828 828 }; 829 829 830 - 830 + /* 831 + * proc_mem_open() can return errno, NULL or mm_struct*. 832 + * 833 + * - Returns NULL if the task has no mm (PF_KTHREAD or PF_EXITING) 834 + * - Returns mm_struct* on success 835 + * - Returns error code on failure 836 + */ 831 837 struct mm_struct *proc_mem_open(struct inode *inode, unsigned int mode) 832 838 { 833 839 struct task_struct *task = get_proc_task(inode); ··· 860 854 { 861 855 struct mm_struct *mm = proc_mem_open(inode, mode); 862 856 863 - if (IS_ERR(mm)) 864 - return PTR_ERR(mm); 857 + if (IS_ERR_OR_NULL(mm)) 858 + return mm ? PTR_ERR(mm) : -ESRCH; 865 859 866 860 file->private_data = mm; 867 861 return 0;
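With this change proc_mem_open() has three distinct outcomes, and every caller must map the NULL case to -ESRCH itself. A userspace sketch of that ERR_PTR/NULL/pointer convention, with simplified stand-ins for the real `<linux/err.h>` helpers:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRNO 4095
#define ESRCH 3

/* Simplified versions of the <linux/err.h> helpers: small negative
 * errnos are encoded into the top page of the address space. */
static inline void *ERR_PTR(long err) { return (void *)(intptr_t)err; }
static inline long PTR_ERR(const void *p) { return (long)(intptr_t)p; }
static inline int IS_ERR_OR_NULL(const void *p)
{
	return !p || (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

/* Collapse the three-way result into errno-or-zero, the way the
 * proc callers now do: NULL (task has no mm) becomes -ESRCH. */
static long open_result(void *mm)
{
	if (IS_ERR_OR_NULL(mm))
		return mm ? PTR_ERR(mm) : -ESRCH;
	return 0;
}
```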
+6 -6
fs/proc/task_mmu.c
··· 212 212 213 213 priv->inode = inode; 214 214 priv->mm = proc_mem_open(inode, PTRACE_MODE_READ); 215 - if (IS_ERR(priv->mm)) { 216 - int err = PTR_ERR(priv->mm); 215 + if (IS_ERR_OR_NULL(priv->mm)) { 216 + int err = priv->mm ? PTR_ERR(priv->mm) : -ESRCH; 217 217 218 218 seq_release_private(inode, file); 219 219 return err; ··· 1325 1325 1326 1326 priv->inode = inode; 1327 1327 priv->mm = proc_mem_open(inode, PTRACE_MODE_READ); 1328 - if (IS_ERR(priv->mm)) { 1329 - ret = PTR_ERR(priv->mm); 1328 + if (IS_ERR_OR_NULL(priv->mm)) { 1329 + ret = priv->mm ? PTR_ERR(priv->mm) : -ESRCH; 1330 1330 1331 1331 single_release(inode, file); 1332 1332 goto out_free; ··· 2069 2069 struct mm_struct *mm; 2070 2070 2071 2071 mm = proc_mem_open(inode, PTRACE_MODE_READ); 2072 - if (IS_ERR(mm)) 2073 - return PTR_ERR(mm); 2072 + if (IS_ERR_OR_NULL(mm)) 2073 + return mm ? PTR_ERR(mm) : -ESRCH; 2074 2074 file->private_data = mm; 2075 2075 return 0; 2076 2076 }
+2 -2
fs/proc/task_nommu.c
··· 260 260 261 261 priv->inode = inode; 262 262 priv->mm = proc_mem_open(inode, PTRACE_MODE_READ); 263 - if (IS_ERR(priv->mm)) { 264 - int err = PTR_ERR(priv->mm); 263 + if (IS_ERR_OR_NULL(priv->mm)) { 264 + int err = priv->mm ? PTR_ERR(priv->mm) : -ESRCH; 265 265 266 266 seq_release_private(inode, file); 267 267 return err;
+21
fs/squashfs/Kconfig
··· 149 149 150 150 If unsure, say N. 151 151 152 + config SQUASHFS_COMP_CACHE_FULL 153 + bool "Enable full caching of compressed blocks" 154 + depends on SQUASHFS 155 + default n 156 + help 157 + This option enables caching of all compressed blocks. Without caching, 158 + repeated reads of the same files trigger excessive disk I/O, significantly 159 + reducing performance in workloads like fio-based benchmarks. 160 + 161 + For example, fio tests (iodepth=1, numjobs=1, ioengine=psync) show: 162 + With caching: IOPS=2223, BW=278MiB/s (291MB/s) 163 + Without caching: IOPS=815, BW=102MiB/s (107MB/s) 164 + 165 + Enabling this option restores performance to pre-regression levels by 166 + caching all compressed blocks in the page cache, reducing disk I/O for 167 + repeated reads. However, this increases memory usage, which may be a 168 + concern in memory-constrained environments. 169 + 170 + Enable this option if your workload involves frequent repeated reads and 171 + memory usage is not a limiting factor. If unsure, say N. 172 + 152 173 config SQUASHFS_ZLIB 153 174 bool "Include support for ZLIB compressed file systems" 154 175 depends on SQUASHFS
+28
fs/squashfs/block.c
··· 88 88 struct bio_vec *bv; 89 89 int idx = 0; 90 90 int err = 0; 91 + #ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL 92 + struct page **cache_pages = kmalloc_array(page_count, 93 + sizeof(void *), GFP_KERNEL | __GFP_ZERO); 94 + #endif 91 95 92 96 bio_for_each_segment_all(bv, fullbio, iter_all) { 93 97 struct page *page = bv->bv_page; ··· 114 110 head_to_cache = page; 115 111 else if (idx == page_count - 1 && index + length != read_end) 116 112 tail_to_cache = page; 113 + #ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL 114 + /* Cache all pages in the BIO for repeated reads */ 115 + else if (cache_pages) 116 + cache_pages[idx] = page; 117 + #endif 117 118 118 119 if (!bio || idx != end_idx) { 119 120 struct bio *new = bio_alloc_clone(bdev, fullbio, ··· 172 163 } 173 164 } 174 165 166 + #ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL 167 + if (!cache_pages) 168 + goto out; 169 + 170 + for (idx = 0; idx < page_count; idx++) { 171 + if (!cache_pages[idx]) 172 + continue; 173 + int ret = add_to_page_cache_lru(cache_pages[idx], cache_mapping, 174 + (read_start >> PAGE_SHIFT) + idx, 175 + GFP_NOIO); 176 + 177 + if (!ret) { 178 + SetPageUptodate(cache_pages[idx]); 179 + unlock_page(cache_pages[idx]); 180 + } 181 + } 182 + kfree(cache_pages); 183 + out: 184 + #endif 175 185 return 0; 176 186 } 177 187
+5
fs/squashfs/super.c
··· 202 202 msblk->panic_on_errors = (opts->errors == Opt_errors_panic); 203 203 204 204 msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE); 205 + if (!msblk->devblksize) { 206 + errorf(fc, "squashfs: unable to set blocksize\n"); 207 + return -EINVAL; 208 + } 209 + 205 210 msblk->devblksize_log2 = ffz(~msblk->devblksize); 206 211 207 212 mutex_init(&msblk->meta_index_mutex);
-2
fs/xfs/xfs_zone_gc.c
··· 290 290 return 0; 291 291 } 292 292 293 - #define cmp_int(l, r) ((l > r) - (l < r)) 294 - 295 293 static int 296 294 xfs_zone_gc_rmap_rec_cmp( 297 295 const void *a,
+7 -1
include/linux/compiler_types.h
··· 530 530 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long)) 531 531 532 532 #ifdef __OPTIMIZE__ 533 + /* 534 + * #ifdef __OPTIMIZE__ is only a good approximation; for instance "make 535 + * CFLAGS_foo.o=-Og" defines __OPTIMIZE__, does not elide the conditional code 536 + * and can break compilation with wrong error message(s). Combine with 537 + * -U__OPTIMIZE__ when needed. 538 + */ 533 539 # define __compiletime_assert(condition, msg, prefix, suffix) \ 534 540 do { \ 535 541 /* \ ··· 549 543 prefix ## suffix(); \ 550 544 } while (0) 551 545 #else 552 - # define __compiletime_assert(condition, msg, prefix, suffix) do { } while (0) 546 + # define __compiletime_assert(condition, msg, prefix, suffix) ((void)(condition)) 553 547 #endif 554 548 555 549 #define _compiletime_assert(condition, msg, prefix, suffix) \
+6 -1
include/linux/crash_core.h
··· 34 34 static inline void arch_kexec_unprotect_crashkres(void) { } 35 35 #endif 36 36 37 - 37 + #ifdef CONFIG_CRASH_DM_CRYPT 38 + int crash_load_dm_crypt_keys(struct kimage *image); 39 + ssize_t dm_crypt_keys_read(char *buf, size_t count, u64 *ppos); 40 + #else 41 + static inline int crash_load_dm_crypt_keys(struct kimage *image) {return 0; } 42 + #endif 38 43 39 44 #ifndef arch_crash_handle_hotplug_event 40 45 static inline void arch_crash_handle_hotplug_event(struct kimage *image, void *arg) { }
+2
include/linux/crash_dump.h
··· 15 15 extern unsigned long long elfcorehdr_addr; 16 16 extern unsigned long long elfcorehdr_size; 17 17 18 + extern unsigned long long dm_crypt_keys_addr; 19 + 18 20 #ifdef CONFIG_CRASH_DUMP 19 21 extern int elfcorehdr_alloc(unsigned long long *addr, unsigned long long *size); 20 22 extern void elfcorehdr_free(unsigned long long addr);
+1 -1
include/linux/habanalabs/hl_boot_if.h
··· 295 295 * Initialized in: linux 296 296 * 297 297 * CPU_BOOT_DEV_STS0_GIC_PRIVILEGED_EN GIC access permission only from 298 - * previleged entity. FW sets this status 298 + * privileged entity. FW sets this status 299 299 * bit for host. If this bit is set then 300 300 * GIC can not be accessed from host. 301 301 * Initialized in: linux
+99
include/linux/hung_task.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Detect Hung Task: detecting tasks stuck in D state 4 + * 5 + * Copyright (C) 2025 Tongcheng Travel (www.ly.com) 6 + * Author: Lance Yang <mingzhe.yang@ly.com> 7 + */ 8 + #ifndef __LINUX_HUNG_TASK_H 9 + #define __LINUX_HUNG_TASK_H 10 + 11 + #include <linux/bug.h> 12 + #include <linux/sched.h> 13 + #include <linux/compiler.h> 14 + 15 + /* 16 + * @blocker: Combines lock address and blocking type. 17 + * 18 + * Lock pointers are at least 4-byte aligned (32-bit) or 8-byte 19 + * aligned (64-bit), which leaves the 2 least significant bits (LSBs) of 20 + * the pointer always zero, so we can use these bits to encode the specific 21 + * blocking type. 22 + * 23 + * Type encoding: 24 + * 00 - Blocked on mutex (BLOCKER_TYPE_MUTEX) 25 + * 01 - Blocked on semaphore (BLOCKER_TYPE_SEM) 26 + * 10 - Blocked on rt-mutex (BLOCKER_TYPE_RTMUTEX) 27 + * 11 - Blocked on rw-semaphore (BLOCKER_TYPE_RWSEM) 28 + */ 29 + #define BLOCKER_TYPE_MUTEX 0x00UL 30 + #define BLOCKER_TYPE_SEM 0x01UL 31 + #define BLOCKER_TYPE_RTMUTEX 0x02UL 32 + #define BLOCKER_TYPE_RWSEM 0x03UL 33 + 34 + #define BLOCKER_TYPE_MASK 0x03UL 35 + 36 + #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER 37 + static inline void hung_task_set_blocker(void *lock, unsigned long type) 38 + { 39 + unsigned long lock_ptr = (unsigned long)lock; 40 + 41 + WARN_ON_ONCE(!lock_ptr); 42 + WARN_ON_ONCE(READ_ONCE(current->blocker)); 43 + 44 + /* 45 + * If the lock pointer matches the BLOCKER_TYPE_MASK, return 46 + * without writing anything. 47 + */ 48 + if (WARN_ON_ONCE(lock_ptr & BLOCKER_TYPE_MASK)) 49 + return; 50 + 51 + WRITE_ONCE(current->blocker, lock_ptr | type); 52 + } 53 + 54 + static inline void hung_task_clear_blocker(void) 55 + { 56 + WARN_ON_ONCE(!READ_ONCE(current->blocker)); 57 + 58 + WRITE_ONCE(current->blocker, 0UL); 59 + } 60 + 61 + /* 62 + * hung_task_get_blocker_type - Extracts blocker type from encoded blocker 63 + * address.
64 + * 65 + * @blocker: Blocker pointer with encoded type (via LSB bits) 66 + * 67 + * Returns: BLOCKER_TYPE_MUTEX, BLOCKER_TYPE_SEM, etc. 68 + */ 69 + static inline unsigned long hung_task_get_blocker_type(unsigned long blocker) 70 + { 71 + WARN_ON_ONCE(!blocker); 72 + 73 + return blocker & BLOCKER_TYPE_MASK; 74 + } 75 + 76 + static inline void *hung_task_blocker_to_lock(unsigned long blocker) 77 + { 78 + WARN_ON_ONCE(!blocker); 79 + 80 + return (void *)(blocker & ~BLOCKER_TYPE_MASK); 81 + } 82 + #else 83 + static inline void hung_task_set_blocker(void *lock, unsigned long type) 84 + { 85 + } 86 + static inline void hung_task_clear_blocker(void) 87 + { 88 + } 89 + static inline unsigned long hung_task_get_blocker_type(unsigned long blocker) 90 + { 91 + return 0UL; 92 + } 93 + static inline void *hung_task_blocker_to_lock(unsigned long blocker) 94 + { 95 + return NULL; 96 + } 97 + #endif 98 + 99 + #endif /* __LINUX_HUNG_TASK_H */
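The tagging scheme above works because any lock pointer is at least 4-byte aligned, so the two low bits are free to carry a type code. A minimal userspace sketch of the same encode/decode trick (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define TYPE_MASK 0x03UL

/* Pack an aligned pointer with a 2-bit type tag in the low bits. */
static inline unsigned long encode(void *ptr, unsigned long type)
{
	unsigned long p = (uintptr_t)ptr;

	assert((p & TYPE_MASK) == 0); /* alignment frees the two LSBs */
	return p | (type & TYPE_MASK);
}

static inline unsigned long decode_type(unsigned long enc)
{
	return enc & TYPE_MASK;
}

static inline void *decode_ptr(unsigned long enc)
{
	return (void *)(enc & ~TYPE_MASK);
}
```

Both halves round-trip losslessly as long as the pointer's alignment guarantee holds.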
+1 -13
include/linux/kernel.h
··· 33 33 #include <linux/sprintf.h> 34 34 #include <linux/static_call_types.h> 35 35 #include <linux/instruction_pointer.h> 36 + #include <linux/util_macros.h> 36 37 #include <linux/wordpart.h> 37 38 38 39 #include <asm/byteorder.h> ··· 41 40 #include <uapi/linux/kernel.h> 42 41 43 42 #define STACK_MAGIC 0xdeadbeef 44 - 45 - /* generic data direction definitions */ 46 - #define READ 0 47 - #define WRITE 1 48 - 49 - #define PTR_IF(cond, ptr) ((cond) ? (ptr) : NULL) 50 - 51 - #define u64_to_user_ptr(x) ( \ 52 - { \ 53 - typecheck(u64, (x)); \ 54 - (void __user *)(uintptr_t)(x); \ 55 - } \ 56 - ) 57 43 58 44 struct completion; 59 45 struct user;
+34
include/linux/kexec.h
··· 25 25 26 26 extern note_buf_t __percpu *crash_notes; 27 27 28 + #ifdef CONFIG_CRASH_DUMP 29 + #include <linux/prandom.h> 30 + #endif 31 + 28 32 #ifdef CONFIG_KEXEC_CORE 29 33 #include <linux/list.h> 30 34 #include <linux/compat.h> ··· 173 169 * @buf_min: The buffer can't be placed below this address. 174 170 * @buf_max: The buffer can't be placed above this address. 175 171 * @top_down: Allocate from top of memory. 172 + * @random: Place the buffer at a random position. 176 173 */ 177 174 struct kexec_buf { 178 175 struct kimage *image; ··· 185 180 unsigned long buf_min; 186 181 unsigned long buf_max; 187 182 bool top_down; 183 + #ifdef CONFIG_CRASH_DUMP 184 + bool random; 185 + #endif 188 186 }; 187 + 188 + 189 + #ifdef CONFIG_CRASH_DUMP 190 + static inline void kexec_random_range_start(unsigned long start, 191 + unsigned long end, 192 + struct kexec_buf *kbuf, 193 + unsigned long *temp_start) 194 + { 195 + unsigned short i; 196 + 197 + if (kbuf->random) { 198 + get_random_bytes(&i, sizeof(unsigned short)); 199 + *temp_start = start + (end - start) / USHRT_MAX * i; 200 + } 201 + } 202 + #else 203 + static inline void kexec_random_range_start(unsigned long start, 204 + unsigned long end, 205 + struct kexec_buf *kbuf, 206 + unsigned long *temp_start) 207 + {} 208 + #endif 189 209 190 210 int kexec_load_purgatory(struct kimage *image, struct kexec_buf *kbuf); 191 211 int kexec_purgatory_get_set_symbol(struct kimage *image, const char *name, ··· 413 383 void *elf_headers; 414 384 unsigned long elf_headers_sz; 415 385 unsigned long elf_load_addr; 386 + 387 + /* dm crypt keys buffer */ 388 + unsigned long dm_crypt_keys_addr; 389 + unsigned long dm_crypt_keys_sz; 416 390 }; 417 391 418 392 /* kexec interface functions */
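kexec_random_range_start() scales a 16-bit random value into the window as `start + (end - start) / USHRT_MAX * i`; because the division happens before the multiplication, the result stays within [start, end] even at `i == USHRT_MAX`. A standalone check of that arithmetic (the helper name is made up):

```c
#include <limits.h>

/* Same arithmetic as the kernel helper: map a 16-bit random value i
 * onto a position in [start, end]. Integer division first keeps the
 * result in range for every possible i. */
static unsigned long pick(unsigned long start, unsigned long end,
			  unsigned short i)
{
	return start + (end - start) / USHRT_MAX * i;
}
```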
+4 -4
include/linux/list.h
··· 50 50 * Performs the full set of list corruption checks before __list_add(). 51 51 * On list corruption reports a warning, and returns false. 52 52 */ 53 - extern bool __list_valid_slowpath __list_add_valid_or_report(struct list_head *new, 54 - struct list_head *prev, 55 - struct list_head *next); 53 + bool __list_valid_slowpath __list_add_valid_or_report(struct list_head *new, 54 + struct list_head *prev, 55 + struct list_head *next); 56 56 57 57 /* 58 58 * Performs list corruption checks before __list_add(). Returns false if a ··· 93 93 * Performs the full set of list corruption checks before __list_del_entry(). 94 94 * On list corruption reports a warning, and returns false. 95 95 */ 96 - extern bool __list_valid_slowpath __list_del_entry_valid_or_report(struct list_head *entry); 96 + bool __list_valid_slowpath __list_del_entry_valid_or_report(struct list_head *entry); 97 97 98 98 /* 99 99 * Performs list corruption checks before __list_del_entry(). Returns false if a
+20 -3
include/linux/llist.h
··· 223 223 return node->next; 224 224 } 225 225 226 - extern bool llist_add_batch(struct llist_node *new_first, 227 - struct llist_node *new_last, 228 - struct llist_head *head); 226 + /** 227 + * llist_add_batch - add several linked entries in batch 228 + * @new_first: first entry in batch to be added 229 + * @new_last: last entry in batch to be added 230 + * @head: the head for your lock-less list 231 + * 232 + * Return whether list is empty before adding. 233 + */ 234 + static inline bool llist_add_batch(struct llist_node *new_first, 235 + struct llist_node *new_last, 236 + struct llist_head *head) 237 + { 238 + struct llist_node *first = READ_ONCE(head->first); 239 + 240 + do { 241 + new_last->next = first; 242 + } while (!try_cmpxchg(&head->first, &first, new_first)); 243 + 244 + return !first; 245 + } 229 246 230 247 static inline bool __llist_add_batch(struct llist_node *new_first, 231 248 struct llist_node *new_last,
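The now-inline llist_add_batch() is the classic lock-free push: point the batch's tail at the observed head, then publish the batch's first node with a compare-and-swap, retrying on contention. A userspace C11 sketch of the same loop (types and names are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct node { struct node *next; };
struct lhead { _Atomic(struct node *) first; };

/* Push the pre-linked chain [new_first..new_last] onto the stack.
 * Returns true if the list was empty before the add, matching the
 * kernel API's contract. */
static bool add_batch(struct node *new_first, struct node *new_last,
		      struct lhead *h)
{
	struct node *first = atomic_load(&h->first);

	do {
		new_last->next = first; /* splice observed head after our tail */
	} while (!atomic_compare_exchange_weak(&h->first, &first, new_first));

	return first == NULL;
}
```

On CAS failure `first` is reloaded with the current head, so the tail link is re-established before the retry; that is why the store sits inside the loop.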
-1
include/linux/oid_registry.h
··· 151 151 extern enum OID look_up_OID(const void *data, size_t datasize); 152 152 extern int parse_OID(const void *data, size_t datasize, enum OID *oid); 153 153 extern int sprint_oid(const void *, size_t, char *, size_t); 154 - extern int sprint_OID(enum OID, char *, size_t); 155 154 156 155 #endif /* _LINUX_OID_REGISTRY_H */
-3
include/linux/relay.h
··· 159 159 size_t n_subbufs, 160 160 const struct rchan_callbacks *cb, 161 161 void *private_data); 162 - extern int relay_late_setup_files(struct rchan *chan, 163 - const char *base_filename, 164 - struct dentry *parent); 165 162 extern void relay_close(struct rchan *chan); 166 163 extern void relay_flush(struct rchan *chan); 167 164 extern void relay_subbufs_consumed(struct rchan *chan,
-5
include/linux/rio_drv.h
··· 391 391 void rio_dev_put(struct rio_dev *); 392 392 393 393 #ifdef CONFIG_RAPIDIO_DMA_ENGINE 394 - extern struct dma_chan *rio_request_dma(struct rio_dev *rdev); 395 394 extern struct dma_chan *rio_request_mport_dma(struct rio_mport *mport); 396 395 extern void rio_release_dma(struct dma_chan *dchan); 397 - extern struct dma_async_tx_descriptor *rio_dma_prep_slave_sg( 398 - struct rio_dev *rdev, struct dma_chan *dchan, 399 - struct rio_dma_data *data, 400 - enum dma_transfer_direction direction, unsigned long flags); 401 396 extern struct dma_async_tx_descriptor *rio_dma_prep_xfer( 402 397 struct dma_chan *dchan, u16 destid, 403 398 struct rio_dma_data *data,
+22 -1
include/linux/scatterlist.h
··· 95 95 } 96 96 97 97 /** 98 + * sg_next - return the next scatterlist entry in a list 99 + * @sg: The current sg entry 100 + * 101 + * Description: 102 + * Usually the next entry will be @sg@ + 1, but if this sg element is part 103 + * of a chained scatterlist, it could jump to the start of a new 104 + * scatterlist array. 105 + * 106 + **/ 107 + static inline struct scatterlist *sg_next(struct scatterlist *sg) 108 + { 109 + if (sg_is_last(sg)) 110 + return NULL; 111 + 112 + sg++; 113 + if (unlikely(sg_is_chain(sg))) 114 + sg = sg_chain_ptr(sg); 115 + 116 + return sg; 117 + } 118 + 119 + /** 98 120 * sg_assign_page - Assign a given page to an SG entry 99 121 * @sg: SG entry 100 122 * @page: The page ··· 440 418 441 419 int sg_nents(struct scatterlist *sg); 442 420 int sg_nents_for_len(struct scatterlist *sg, u64 len); 443 - struct scatterlist *sg_next(struct scatterlist *); 444 421 struct scatterlist *sg_last(struct scatterlist *s, unsigned int); 445 422 void sg_init_table(struct scatterlist *, unsigned int); 446 423 void sg_init_one(struct scatterlist *, const void *, unsigned int);
+5 -1
include/linux/sched.h
··· 1240 1240 #endif 1241 1241 1242 1242 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER 1243 - struct mutex *blocker_mutex; 1243 + /* 1244 + * Encoded lock address causing task block (lower 2 bits = type from 1245 + * <linux/hung_task.h>). Accessed via hung_task_*() helpers. 1246 + */ 1247 + unsigned long blocker; 1244 1248 #endif 1245 1249 1246 1250 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-2
include/linux/sched/task_stack.h
··· 106 106 #endif 107 107 extern void set_task_stack_end_magic(struct task_struct *tsk); 108 108 109 - #ifndef __HAVE_ARCH_KSTACK_END 110 109 static inline int kstack_end(void *addr) 111 110 { 112 111 /* Reliable end of stack detection: ··· 113 114 */ 114 115 return !(((unsigned long)addr+sizeof(void*)-1) & (THREAD_SIZE-sizeof(void*))); 115 116 } 116 - #endif 117 117 118 118 #endif /* _LINUX_SCHED_TASK_STACK_H */
+14 -1
include/linux/semaphore.h
··· 16 16 raw_spinlock_t lock; 17 17 unsigned int count; 18 18 struct list_head wait_list; 19 + 20 + #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER 21 + unsigned long last_holder; 22 + #endif 19 23 }; 24 + 25 + #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER 26 + #define __LAST_HOLDER_SEMAPHORE_INITIALIZER \ 27 + , .last_holder = 0UL 28 + #else 29 + #define __LAST_HOLDER_SEMAPHORE_INITIALIZER 30 + #endif 20 31 21 32 #define __SEMAPHORE_INITIALIZER(name, n) \ 22 33 { \ 23 34 .lock = __RAW_SPIN_LOCK_UNLOCKED((name).lock), \ 24 35 .count = n, \ 25 - .wait_list = LIST_HEAD_INIT((name).wait_list), \ 36 + .wait_list = LIST_HEAD_INIT((name).wait_list) \ 37 + __LAST_HOLDER_SEMAPHORE_INITIALIZER \ 26 38 } 27 39 28 40 /* ··· 59 47 extern int __must_check down_trylock(struct semaphore *sem); 60 48 extern int __must_check down_timeout(struct semaphore *sem, long jiffies); 61 49 extern void up(struct semaphore *sem); 50 + extern unsigned long sem_last_holder(struct semaphore *sem); 62 51 63 52 #endif /* __LINUX_SEMAPHORE_H */
+10
include/linux/sort.h
··· 4 4 5 5 #include <linux/types.h> 6 6 7 + /** 8 + * cmp_int - perform a three-way comparison of the arguments 9 + * @l: the left argument 10 + * @r: the right argument 11 + * 12 + * Return: 1 if the left argument is greater than the right one; 0 if the 13 + * arguments are equal; -1 if the left argument is less than the right one. 14 + */ 15 + #define cmp_int(l, r) (((l) > (r)) - ((l) < (r))) 16 + 7 17 void sort_r(void *base, size_t num, size_t size, 8 18 cmp_r_func_t cmp_func, 9 19 swap_r_func_t swap_func,
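cmp_int() is the standard branch-light three-way comparison: the difference of the two boolean tests yields -1, 0, or 1 and avoids the signed-overflow trap of returning `l - r`. For example, as a qsort comparator (plain C, outside the kernel):

```c
#include <stdlib.h>

#define cmp_int(l, r) (((l) > (r)) - ((l) < (r)))

/* qsort comparator for ints; the naive `*a - *b` can overflow,
 * cmp_int() cannot. */
static int cmp_ints(const void *a, const void *b)
{
	return cmp_int(*(const int *)a, *(const int *)b);
}
```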
+4
include/linux/types.h
··· 136 136 typedef u64 sector_t; 137 137 typedef u64 blkcnt_t; 138 138 139 + /* generic data direction definitions */ 140 + #define READ 0 141 + #define WRITE 1 142 + 139 143 /* 140 144 * The type of an index into the pagecache. 141 145 */
+66
include/linux/util_macros.h
··· 83 83 }) 84 84 85 85 /** 86 + * PTR_IF - evaluate to @ptr if @cond is true, or to NULL otherwise. 87 + * @cond: A conditional, usually in a form of IS_ENABLED(CONFIG_FOO) 88 + * @ptr: A pointer to assign if @cond is true. 89 + * 90 + * PTR_IF(IS_ENABLED(CONFIG_FOO), ptr) evaluates to @ptr if CONFIG_FOO is set 91 + * to 'y' or 'm', or to NULL otherwise. The @ptr argument must be a pointer. 92 + * 93 + * The macro can be very useful to help compiler dropping dead code. 94 + * 95 + * For instance, consider the following:: 96 + * 97 + * #ifdef CONFIG_FOO_SUSPEND 98 + * static int foo_suspend(struct device *dev) 99 + * { 100 + * ... 101 + * } 102 + * #endif 103 + * 104 + * static struct pm_ops foo_ops = { 105 + * #ifdef CONFIG_FOO_SUSPEND 106 + * .suspend = foo_suspend, 107 + * #endif 108 + * }; 109 + * 110 + * While this works, the foo_suspend() macro is compiled conditionally, 111 + * only when CONFIG_FOO_SUSPEND is set. This is problematic, as there could 112 + * be a build bug in this function, we wouldn't have a way to know unless 113 + * the configuration option is set. 114 + * 115 + * An alternative is to declare foo_suspend() always, but mark it 116 + * as __maybe_unused. This works, but the __maybe_unused attribute 117 + * is required to instruct the compiler that the function may not 118 + * be referenced anywhere, and is safe to remove without making 119 + * a fuss about it. This makes the programmer responsible for tagging 120 + * the functions that can be garbage-collected. 121 + * 122 + * With the macro it is possible to write the following: 123 + * 124 + * static int foo_suspend(struct device *dev) 125 + * { 126 + * ... 127 + * } 128 + * 129 + * static struct pm_ops foo_ops = { 130 + * .suspend = PTR_IF(IS_ENABLED(CONFIG_FOO_SUSPEND), foo_suspend), 131 + * }; 132 + * 133 + * The foo_suspend() function will now be automatically dropped by the 134 + * compiler, and it does not require any specific attribute. 
135 + */ 136 + #define PTR_IF(cond, ptr) ((cond) ? (ptr) : NULL) 137 + 138 + /** 139 + * u64_to_user_ptr - cast a pointer passed as u64 from user space to void __user * 140 + * @x: The u64 value from user space, usually via IOCTL 141 + * 142 + * u64_to_user_ptr() simply casts a pointer passed as u64 from user space to void 143 + * __user * correctly. Using this lets us get rid of all the tiresome casts. 144 + */ 145 + #define u64_to_user_ptr(x) \ 146 + ({ \ 147 + typecheck(u64, (x)); \ 148 + (void __user *)(uintptr_t)(x); \ 149 + }) 150 + 151 + /** 86 152 * is_insidevar - check if the @ptr points inside the @var memory range. 87 153 * @ptr: the pointer to a memory address. 88 154 * @var: the variable which address and size identify the memory range.
+1 -1
include/soc/qcom/qcom-spmi-pmic.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* Copyright (c) 2022 Linaro. All rights reserved. 3 - * Author: Caleb Connolly <caleb.connolly@linaro.org> 3 + * Author: Casey Connolly <casey.connolly@linaro.org> 4 4 */ 5 5 6 6 #ifndef __QCOM_SPMI_PMIC_H__
+16 -2
init/main.c
··· 1216 1216 fn, ret, (unsigned long long)ktime_us_delta(rettime, *calltime)); 1217 1217 } 1218 1218 1219 + static __init_or_module void 1220 + trace_initcall_level_cb(void *data, const char *level) 1221 + { 1222 + printk(KERN_DEBUG "entering initcall level: %s\n", level); 1223 + } 1224 + 1219 1225 static ktime_t initcall_calltime; 1220 1226 1221 1227 #ifdef TRACEPOINTS_ENABLED ··· 1233 1227 &initcall_calltime); 1234 1228 ret |= register_trace_initcall_finish(trace_initcall_finish_cb, 1235 1229 &initcall_calltime); 1230 + ret |= register_trace_initcall_level(trace_initcall_level_cb, NULL); 1236 1231 WARN(ret, "Failed to register initcall tracepoints\n"); 1237 1232 } 1238 1233 # define do_trace_initcall_start trace_initcall_start 1239 1234 # define do_trace_initcall_finish trace_initcall_finish 1235 + # define do_trace_initcall_level trace_initcall_level 1240 1236 #else 1241 1237 static inline void do_trace_initcall_start(initcall_t fn) 1242 1238 { ··· 1251 1243 if (!initcall_debug) 1252 1244 return; 1253 1245 trace_initcall_finish_cb(&initcall_calltime, fn, ret); 1246 + } 1247 + static inline void do_trace_initcall_level(const char *level) 1248 + { 1249 + if (!initcall_debug) 1250 + return; 1251 + trace_initcall_level_cb(NULL, level); 1254 1252 } 1255 1253 #endif /* !TRACEPOINTS_ENABLED */ 1256 1254 ··· 1330 1316 level, level, 1331 1317 NULL, ignore_unknown_bootoption); 1332 1318 1333 - trace_initcall_level(initcall_level_names[level]); 1319 + do_trace_initcall_level(initcall_level_names[level]); 1334 1320 for (fn = initcall_levels[level]; fn < initcall_levels[level+1]; fn++) 1335 1321 do_one_initcall(initcall_from_entry(fn)); 1336 1322 } ··· 1374 1360 { 1375 1361 initcall_entry_t *fn; 1376 1362 1377 - trace_initcall_level("early"); 1363 + do_trace_initcall_level("early"); 1378 1364 for (fn = __initcall_start; fn < __initcall0_start; fn++) 1379 1365 do_one_initcall(initcall_from_entry(fn)); 1380 1366 }
+4 -1
ipc/shm.c
··· 431 431 void shm_destroy_orphaned(struct ipc_namespace *ns) 432 432 { 433 433 down_write(&shm_ids(ns).rwsem); 434 - if (shm_ids(ns).in_use) 434 + if (shm_ids(ns).in_use) { 435 + rcu_read_lock(); 435 436 idr_for_each(&shm_ids(ns).ipcs_idr, &shm_try_destroy_orphaned, ns); 437 + rcu_read_unlock(); 438 + } 436 439 up_write(&shm_ids(ns).rwsem); 437 440 } 438 441
+18 -2
kernel/Kconfig.kexec
··· 38 38 config KEXEC_FILE 39 39 bool "Enable kexec file based system call" 40 40 depends on ARCH_SUPPORTS_KEXEC_FILE 41 - select CRYPTO 42 - select CRYPTO_SHA256 41 + select CRYPTO_LIB_SHA256 43 42 select KEXEC_CORE 44 43 help 45 44 This is new version of kexec system call. This system call is ··· 128 129 129 130 For s390, this option also enables zfcpdump. 130 131 See also <file:Documentation/arch/s390/zfcpdump.rst> 132 + 133 + config CRASH_DM_CRYPT 134 + bool "Support saving crash dump to dm-crypt encrypted volume" 135 + depends on KEXEC_FILE 136 + depends on CRASH_DUMP 137 + depends on DM_CRYPT 138 + help 139 + With this option enabled, user space can interact with 140 + /sys/kernel/config/crash_dm_crypt_keys to make the dm crypt keys 141 + persistent for the dump-capture kernel. 142 + 143 + config CRASH_DM_CRYPT_CONFIGS 144 + def_tristate CRASH_DM_CRYPT 145 + select CONFIGFS_FS 146 + help 147 + CRASH_DM_CRYPT cannot directly select CONFIGFS_FS, because that 148 + is required to be built-in. 131 149 132 150 config CRASH_HOTPLUG 133 151 bool "Update the crash elfcorehdr on system configuration changes"
+1
kernel/Makefile
··· 77 77 obj-$(CONFIG_CRASH_RESERVE) += crash_reserve.o 78 78 obj-$(CONFIG_KEXEC_CORE) += kexec_core.o 79 79 obj-$(CONFIG_CRASH_DUMP) += crash_core.o 80 + obj-$(CONFIG_CRASH_DM_CRYPT) += crash_dump_dm_crypt.o 80 81 obj-$(CONFIG_KEXEC) += kexec.o 81 82 obj-$(CONFIG_KEXEC_FILE) += kexec_file.o 82 83 obj-$(CONFIG_KEXEC_ELF) += kexec_elf.o
+464
kernel/crash_dump_dm_crypt.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + #include <linux/key.h> 3 + #include <linux/keyctl.h> 4 + #include <keys/user-type.h> 5 + #include <linux/crash_dump.h> 6 + #include <linux/cc_platform.h> 7 + #include <linux/configfs.h> 8 + #include <linux/module.h> 9 + 10 + #define KEY_NUM_MAX 128 /* maximum dm crypt keys */ 11 + #define KEY_SIZE_MAX 256 /* maximum dm crypt key size */ 12 + #define KEY_DESC_MAX_LEN 128 /* maximum dm crypt key description size */ 13 + 14 + static unsigned int key_count; 15 + 16 + struct dm_crypt_key { 17 + unsigned int key_size; 18 + char key_desc[KEY_DESC_MAX_LEN]; 19 + u8 data[KEY_SIZE_MAX]; 20 + }; 21 + 22 + static struct keys_header { 23 + unsigned int total_keys; 24 + struct dm_crypt_key keys[] __counted_by(total_keys); 25 + } *keys_header; 26 + 27 + static size_t get_keys_header_size(size_t total_keys) 28 + { 29 + return struct_size(keys_header, keys, total_keys); 30 + } 31 + 32 + unsigned long long dm_crypt_keys_addr; 33 + EXPORT_SYMBOL_GPL(dm_crypt_keys_addr); 34 + 35 + static int __init setup_dmcryptkeys(char *arg) 36 + { 37 + char *end; 38 + 39 + if (!arg) 40 + return -EINVAL; 41 + dm_crypt_keys_addr = memparse(arg, &end); 42 + if (end > arg) 43 + return 0; 44 + 45 + dm_crypt_keys_addr = 0; 46 + return -EINVAL; 47 + } 48 + 49 + early_param("dmcryptkeys", setup_dmcryptkeys); 50 + 51 + /* 52 + * Architectures may override this function to read dm crypt keys 53 + */ 54 + ssize_t __weak dm_crypt_keys_read(char *buf, size_t count, u64 *ppos) 55 + { 56 + struct kvec kvec = { .iov_base = buf, .iov_len = count }; 57 + struct iov_iter iter; 58 + 59 + iov_iter_kvec(&iter, READ, &kvec, 1, count); 60 + return read_from_oldmem(&iter, count, ppos, cc_platform_has(CC_ATTR_MEM_ENCRYPT)); 61 + } 62 + 63 + static int add_key_to_keyring(struct dm_crypt_key *dm_key, 64 + key_ref_t keyring_ref) 65 + { 66 + key_ref_t key_ref; 67 + int r; 68 + 69 + /* create or update the requested key and add it to the target keyring */ 70 + key_ref = 
key_create_or_update(keyring_ref, "user", dm_key->key_desc, 71 + dm_key->data, dm_key->key_size, 72 + KEY_USR_ALL, KEY_ALLOC_IN_QUOTA); 73 + 74 + if (!IS_ERR(key_ref)) { 75 + r = key_ref_to_ptr(key_ref)->serial; 76 + key_ref_put(key_ref); 77 + kexec_dprintk("Success adding key %s", dm_key->key_desc); 78 + } else { 79 + r = PTR_ERR(key_ref); 80 + kexec_dprintk("Error when adding key"); 81 + } 82 + 83 + key_ref_put(keyring_ref); 84 + return r; 85 + } 86 + 87 + static void get_keys_from_kdump_reserved_memory(void) 88 + { 89 + struct keys_header *keys_header_loaded; 90 + 91 + arch_kexec_unprotect_crashkres(); 92 + 93 + keys_header_loaded = kmap_local_page(pfn_to_page( 94 + kexec_crash_image->dm_crypt_keys_addr >> PAGE_SHIFT)); 95 + 96 + memcpy(keys_header, keys_header_loaded, get_keys_header_size(key_count)); 97 + kunmap_local(keys_header_loaded); 98 + arch_kexec_protect_crashkres(); 99 + } 100 + 101 + static int restore_dm_crypt_keys_to_thread_keyring(void) 102 + { 103 + struct dm_crypt_key *key; 104 + size_t keys_header_size; 105 + key_ref_t keyring_ref; 106 + u64 addr; 107 + 108 + /* find the target keyring (which must be writable) */ 109 + keyring_ref = 110 + lookup_user_key(KEY_SPEC_USER_KEYRING, 0x01, KEY_NEED_WRITE); 111 + if (IS_ERR(keyring_ref)) { 112 + kexec_dprintk("Failed to get the user keyring\n"); 113 + return PTR_ERR(keyring_ref); 114 + } 115 + 116 + addr = dm_crypt_keys_addr; 117 + dm_crypt_keys_read((char *)&key_count, sizeof(key_count), &addr); 118 + if (key_count < 0 || key_count > KEY_NUM_MAX) { 119 + kexec_dprintk("Failed to read the number of dm-crypt keys\n"); 120 + return -1; 121 + } 122 + 123 + kexec_dprintk("There are %u keys\n", key_count); 124 + addr = dm_crypt_keys_addr; 125 + 126 + keys_header_size = get_keys_header_size(key_count); 127 + keys_header = kzalloc(keys_header_size, GFP_KERNEL); 128 + if (!keys_header) 129 + return -ENOMEM; 130 + 131 + dm_crypt_keys_read((char *)keys_header, keys_header_size, &addr); 132 + 133 + for (int i = 
0; i < keys_header->total_keys; i++) { 134 + key = &keys_header->keys[i]; 135 + kexec_dprintk("Get key (size=%u)\n", key->key_size); 136 + add_key_to_keyring(key, keyring_ref); 137 + } 138 + 139 + return 0; 140 + } 141 + 142 + static int read_key_from_user_keying(struct dm_crypt_key *dm_key) 143 + { 144 + const struct user_key_payload *ukp; 145 + struct key *key; 146 + 147 + kexec_dprintk("Requesting logon key %s", dm_key->key_desc); 148 + key = request_key(&key_type_logon, dm_key->key_desc, NULL); 149 + 150 + if (IS_ERR(key)) { 151 + pr_warn("No such logon key %s\n", dm_key->key_desc); 152 + return PTR_ERR(key); 153 + } 154 + 155 + ukp = user_key_payload_locked(key); 156 + if (!ukp) 157 + return -EKEYREVOKED; 158 + 159 + if (ukp->datalen > KEY_SIZE_MAX) { 160 + pr_err("Key size %u exceeds maximum (%u)\n", ukp->datalen, KEY_SIZE_MAX); 161 + return -EINVAL; 162 + } 163 + 164 + memcpy(dm_key->data, ukp->data, ukp->datalen); 165 + dm_key->key_size = ukp->datalen; 166 + kexec_dprintk("Get dm crypt key (size=%u) %s: %8ph\n", dm_key->key_size, 167 + dm_key->key_desc, dm_key->data); 168 + return 0; 169 + } 170 + 171 + struct config_key { 172 + struct config_item item; 173 + const char *description; 174 + }; 175 + 176 + static inline struct config_key *to_config_key(struct config_item *item) 177 + { 178 + return container_of(item, struct config_key, item); 179 + } 180 + 181 + static ssize_t config_key_description_show(struct config_item *item, char *page) 182 + { 183 + return sprintf(page, "%s\n", to_config_key(item)->description); 184 + } 185 + 186 + static ssize_t config_key_description_store(struct config_item *item, 187 + const char *page, size_t count) 188 + { 189 + struct config_key *config_key = to_config_key(item); 190 + size_t len; 191 + int ret; 192 + 193 + ret = -EINVAL; 194 + len = strcspn(page, "\n"); 195 + 196 + if (len > KEY_DESC_MAX_LEN) { 197 + pr_err("The key description shouldn't exceed %u characters", KEY_DESC_MAX_LEN); 198 + return ret; 199 + } 200 + 
201 + if (!len) 202 + return ret; 203 + 204 + kfree(config_key->description); 205 + ret = -ENOMEM; 206 + config_key->description = kmemdup_nul(page, len, GFP_KERNEL); 207 + if (!config_key->description) 208 + return ret; 209 + 210 + return count; 211 + } 212 + 213 + CONFIGFS_ATTR(config_key_, description); 214 + 215 + static struct configfs_attribute *config_key_attrs[] = { 216 + &config_key_attr_description, 217 + NULL, 218 + }; 219 + 220 + static void config_key_release(struct config_item *item) 221 + { 222 + kfree(to_config_key(item)); 223 + key_count--; 224 + } 225 + 226 + static struct configfs_item_operations config_key_item_ops = { 227 + .release = config_key_release, 228 + }; 229 + 230 + static const struct config_item_type config_key_type = { 231 + .ct_item_ops = &config_key_item_ops, 232 + .ct_attrs = config_key_attrs, 233 + .ct_owner = THIS_MODULE, 234 + }; 235 + 236 + static struct config_item *config_keys_make_item(struct config_group *group, 237 + const char *name) 238 + { 239 + struct config_key *config_key; 240 + 241 + if (key_count > KEY_NUM_MAX) { 242 + pr_err("Only %u keys at maximum to be created\n", KEY_NUM_MAX); 243 + return ERR_PTR(-EINVAL); 244 + } 245 + 246 + config_key = kzalloc(sizeof(struct config_key), GFP_KERNEL); 247 + if (!config_key) 248 + return ERR_PTR(-ENOMEM); 249 + 250 + config_item_init_type_name(&config_key->item, name, &config_key_type); 251 + 252 + key_count++; 253 + 254 + return &config_key->item; 255 + } 256 + 257 + static ssize_t config_keys_count_show(struct config_item *item, char *page) 258 + { 259 + return sprintf(page, "%d\n", key_count); 260 + } 261 + 262 + CONFIGFS_ATTR_RO(config_keys_, count); 263 + 264 + static bool is_dm_key_reused; 265 + 266 + static ssize_t config_keys_reuse_show(struct config_item *item, char *page) 267 + { 268 + return sprintf(page, "%d\n", is_dm_key_reused); 269 + } 270 + 271 + static ssize_t config_keys_reuse_store(struct config_item *item, 272 + const char *page, size_t count) 273 + { 
274 + if (!kexec_crash_image || !kexec_crash_image->dm_crypt_keys_addr) { 275 + kexec_dprintk( 276 + "dm-crypt keys haven't be saved to crash-reserved memory\n"); 277 + return -EINVAL; 278 + } 279 + 280 + if (kstrtobool(page, &is_dm_key_reused)) 281 + return -EINVAL; 282 + 283 + if (is_dm_key_reused) 284 + get_keys_from_kdump_reserved_memory(); 285 + 286 + return count; 287 + } 288 + 289 + CONFIGFS_ATTR(config_keys_, reuse); 290 + 291 + static struct configfs_attribute *config_keys_attrs[] = { 292 + &config_keys_attr_count, 293 + &config_keys_attr_reuse, 294 + NULL, 295 + }; 296 + 297 + /* 298 + * Note that, since no extra work is required on ->drop_item(), 299 + * no ->drop_item() is provided. 300 + */ 301 + static struct configfs_group_operations config_keys_group_ops = { 302 + .make_item = config_keys_make_item, 303 + }; 304 + 305 + static const struct config_item_type config_keys_type = { 306 + .ct_group_ops = &config_keys_group_ops, 307 + .ct_attrs = config_keys_attrs, 308 + .ct_owner = THIS_MODULE, 309 + }; 310 + 311 + static bool restore; 312 + 313 + static ssize_t config_keys_restore_show(struct config_item *item, char *page) 314 + { 315 + return sprintf(page, "%d\n", restore); 316 + } 317 + 318 + static ssize_t config_keys_restore_store(struct config_item *item, 319 + const char *page, size_t count) 320 + { 321 + if (!restore) 322 + restore_dm_crypt_keys_to_thread_keyring(); 323 + 324 + if (kstrtobool(page, &restore)) 325 + return -EINVAL; 326 + 327 + return count; 328 + } 329 + 330 + CONFIGFS_ATTR(config_keys_, restore); 331 + 332 + static struct configfs_attribute *kdump_config_keys_attrs[] = { 333 + &config_keys_attr_restore, 334 + NULL, 335 + }; 336 + 337 + static const struct config_item_type kdump_config_keys_type = { 338 + .ct_attrs = kdump_config_keys_attrs, 339 + .ct_owner = THIS_MODULE, 340 + }; 341 + 342 + static struct configfs_subsystem config_keys_subsys = { 343 + .su_group = { 344 + .cg_item = { 345 + .ci_namebuf = "crash_dm_crypt_keys", 346 
+ .ci_type = &config_keys_type, 347 + }, 348 + }, 349 + }; 350 + 351 + static int build_keys_header(void) 352 + { 353 + struct config_item *item = NULL; 354 + struct config_key *key; 355 + int i, r; 356 + 357 + if (keys_header != NULL) 358 + kvfree(keys_header); 359 + 360 + keys_header = kzalloc(get_keys_header_size(key_count), GFP_KERNEL); 361 + if (!keys_header) 362 + return -ENOMEM; 363 + 364 + keys_header->total_keys = key_count; 365 + 366 + i = 0; 367 + list_for_each_entry(item, &config_keys_subsys.su_group.cg_children, 368 + ci_entry) { 369 + if (item->ci_type != &config_key_type) 370 + continue; 371 + 372 + key = to_config_key(item); 373 + 374 + if (!key->description) { 375 + pr_warn("No key description for key %s\n", item->ci_name); 376 + return -EINVAL; 377 + } 378 + 379 + strscpy(keys_header->keys[i].key_desc, key->description, 380 + KEY_DESC_MAX_LEN); 381 + r = read_key_from_user_keying(&keys_header->keys[i]); 382 + if (r != 0) { 383 + kexec_dprintk("Failed to read key %s\n", 384 + keys_header->keys[i].key_desc); 385 + return r; 386 + } 387 + i++; 388 + kexec_dprintk("Found key: %s\n", item->ci_name); 389 + } 390 + 391 + return 0; 392 + } 393 + 394 + int crash_load_dm_crypt_keys(struct kimage *image) 395 + { 396 + struct kexec_buf kbuf = { 397 + .image = image, 398 + .buf_min = 0, 399 + .buf_max = ULONG_MAX, 400 + .top_down = false, 401 + .random = true, 402 + }; 403 + int r; 404 + 405 + 406 + if (key_count <= 0) { 407 + kexec_dprintk("No dm-crypt keys\n"); 408 + return -ENOENT; 409 + } 410 + 411 + if (!is_dm_key_reused) { 412 + image->dm_crypt_keys_addr = 0; 413 + r = build_keys_header(); 414 + if (r) 415 + return r; 416 + } 417 + 418 + kbuf.buffer = keys_header; 419 + kbuf.bufsz = get_keys_header_size(key_count); 420 + 421 + kbuf.memsz = kbuf.bufsz; 422 + kbuf.buf_align = ELF_CORE_HEADER_ALIGN; 423 + kbuf.mem = KEXEC_BUF_MEM_UNKNOWN; 424 + r = kexec_add_buffer(&kbuf); 425 + if (r) { 426 + kvfree((void *)kbuf.buffer); 427 + return r; 428 + } 429 + 
image->dm_crypt_keys_addr = kbuf.mem; 430 + image->dm_crypt_keys_sz = kbuf.bufsz; 431 + kexec_dprintk( 432 + "Loaded dm crypt keys to kexec_buffer bufsz=0x%lx memsz=0x%lx\n", 433 + kbuf.bufsz, kbuf.memsz); 434 + 435 + return r; 436 + } 437 + 438 + static int __init configfs_dmcrypt_keys_init(void) 439 + { 440 + int ret; 441 + 442 + if (is_kdump_kernel()) { 443 + config_keys_subsys.su_group.cg_item.ci_type = 444 + &kdump_config_keys_type; 445 + } 446 + 447 + config_group_init(&config_keys_subsys.su_group); 448 + mutex_init(&config_keys_subsys.su_mutex); 449 + ret = configfs_register_subsystem(&config_keys_subsys); 450 + if (ret) { 451 + pr_err("Error %d while registering subsystem %s\n", ret, 452 + config_keys_subsys.su_group.cg_item.ci_namebuf); 453 + goto out_unregister; 454 + } 455 + 456 + return 0; 457 + 458 + out_unregister: 459 + configfs_unregister_subsystem(&config_keys_subsys); 460 + 461 + return ret; 462 + } 463 + 464 + module_init(configfs_dmcrypt_keys_init);
+1 -1
kernel/crash_reserve.c
··· 131 131 cur++; 132 132 *crash_base = memparse(cur, &tmp); 133 133 if (cur == tmp) { 134 - pr_warn("crahskernel: Memory value expected after '@'\n"); 134 + pr_warn("crashkernel: Memory value expected after '@'\n"); 135 135 return -EINVAL; 136 136 } 137 137 }
+16 -35
kernel/delayacct.c
··· 14 14 #include <linux/delayacct.h> 15 15 #include <linux/module.h> 16 16 17 + #define UPDATE_DELAY(type) \ 18 + do { \ 19 + d->type##_delay_max = tsk->delays->type##_delay_max; \ 20 + d->type##_delay_min = tsk->delays->type##_delay_min; \ 21 + tmp = d->type##_delay_total + tsk->delays->type##_delay; \ 22 + d->type##_delay_total = (tmp < d->type##_delay_total) ? 0 : tmp; \ 23 + d->type##_count += tsk->delays->type##_count; \ 24 + } while (0) 25 + 17 26 DEFINE_STATIC_KEY_FALSE(delayacct_key); 18 27 int delayacct_on __read_mostly; /* Delay accounting turned on/off */ 19 28 struct kmem_cache *delayacct_cache; ··· 182 173 183 174 /* zero XXX_total, non-zero XXX_count implies XXX stat overflowed */ 184 175 raw_spin_lock_irqsave(&tsk->delays->lock, flags); 185 - d->blkio_delay_max = tsk->delays->blkio_delay_max; 186 - d->blkio_delay_min = tsk->delays->blkio_delay_min; 187 - tmp = d->blkio_delay_total + tsk->delays->blkio_delay; 188 - d->blkio_delay_total = (tmp < d->blkio_delay_total) ? 0 : tmp; 189 - d->swapin_delay_max = tsk->delays->swapin_delay_max; 190 - d->swapin_delay_min = tsk->delays->swapin_delay_min; 191 - tmp = d->swapin_delay_total + tsk->delays->swapin_delay; 192 - d->swapin_delay_total = (tmp < d->swapin_delay_total) ? 0 : tmp; 193 - d->freepages_delay_max = tsk->delays->freepages_delay_max; 194 - d->freepages_delay_min = tsk->delays->freepages_delay_min; 195 - tmp = d->freepages_delay_total + tsk->delays->freepages_delay; 196 - d->freepages_delay_total = (tmp < d->freepages_delay_total) ? 0 : tmp; 197 - d->thrashing_delay_max = tsk->delays->thrashing_delay_max; 198 - d->thrashing_delay_min = tsk->delays->thrashing_delay_min; 199 - tmp = d->thrashing_delay_total + tsk->delays->thrashing_delay; 200 - d->thrashing_delay_total = (tmp < d->thrashing_delay_total) ? 
0 : tmp; 201 - d->compact_delay_max = tsk->delays->compact_delay_max; 202 - d->compact_delay_min = tsk->delays->compact_delay_min; 203 - tmp = d->compact_delay_total + tsk->delays->compact_delay; 204 - d->compact_delay_total = (tmp < d->compact_delay_total) ? 0 : tmp; 205 - d->wpcopy_delay_max = tsk->delays->wpcopy_delay_max; 206 - d->wpcopy_delay_min = tsk->delays->wpcopy_delay_min; 207 - tmp = d->wpcopy_delay_total + tsk->delays->wpcopy_delay; 208 - d->wpcopy_delay_total = (tmp < d->wpcopy_delay_total) ? 0 : tmp; 209 - d->irq_delay_max = tsk->delays->irq_delay_max; 210 - d->irq_delay_min = tsk->delays->irq_delay_min; 211 - tmp = d->irq_delay_total + tsk->delays->irq_delay; 212 - d->irq_delay_total = (tmp < d->irq_delay_total) ? 0 : tmp; 213 - d->blkio_count += tsk->delays->blkio_count; 214 - d->swapin_count += tsk->delays->swapin_count; 215 - d->freepages_count += tsk->delays->freepages_count; 216 - d->thrashing_count += tsk->delays->thrashing_count; 217 - d->compact_count += tsk->delays->compact_count; 218 - d->wpcopy_count += tsk->delays->wpcopy_count; 219 - d->irq_count += tsk->delays->irq_count; 176 + UPDATE_DELAY(blkio); 177 + UPDATE_DELAY(swapin); 178 + UPDATE_DELAY(freepages); 179 + UPDATE_DELAY(thrashing); 180 + UPDATE_DELAY(compact); 181 + UPDATE_DELAY(wpcopy); 182 + UPDATE_DELAY(irq); 220 183 raw_spin_unlock_irqrestore(&tsk->delays->lock, flags); 221 184 222 185 return 0;
+32 -36
kernel/exit.c
··· 421 421 } 422 422 } 423 423 424 - static void coredump_task_exit(struct task_struct *tsk) 424 + static void coredump_task_exit(struct task_struct *tsk, 425 + struct core_state *core_state) 425 426 { 426 - struct core_state *core_state; 427 + struct core_thread self; 427 428 429 + self.task = tsk; 430 + if (self.task->flags & PF_SIGNALED) 431 + self.next = xchg(&core_state->dumper.next, &self); 432 + else 433 + self.task = NULL; 428 434 /* 429 - * Serialize with any possible pending coredump. 430 - * We must hold siglock around checking core_state 431 - * and setting PF_POSTCOREDUMP. The core-inducing thread 432 - * will increment ->nr_threads for each thread in the 433 - * group without PF_POSTCOREDUMP set. 435 + * Implies mb(), the result of xchg() must be visible 436 + * to core_state->dumper. 434 437 */ 435 - spin_lock_irq(&tsk->sighand->siglock); 436 - tsk->flags |= PF_POSTCOREDUMP; 437 - core_state = tsk->signal->core_state; 438 - spin_unlock_irq(&tsk->sighand->siglock); 439 - if (core_state) { 440 - struct core_thread self; 438 + if (atomic_dec_and_test(&core_state->nr_threads)) 439 + complete(&core_state->startup); 441 440 442 - self.task = current; 443 - if (self.task->flags & PF_SIGNALED) 444 - self.next = xchg(&core_state->dumper.next, &self); 445 - else 446 - self.task = NULL; 447 - /* 448 - * Implies mb(), the result of xchg() must be visible 449 - * to core_state->dumper. 
450 - */ 451 - if (atomic_dec_and_test(&core_state->nr_threads)) 452 - complete(&core_state->startup); 453 - 454 - for (;;) { 455 - set_current_state(TASK_IDLE|TASK_FREEZABLE); 456 - if (!self.task) /* see coredump_finish() */ 457 - break; 458 - schedule(); 459 - } 460 - __set_current_state(TASK_RUNNING); 441 + for (;;) { 442 + set_current_state(TASK_IDLE|TASK_FREEZABLE); 443 + if (!self.task) /* see coredump_finish() */ 444 + break; 445 + schedule(); 461 446 } 447 + __set_current_state(TASK_RUNNING); 462 448 } 463 449 464 450 #ifdef CONFIG_MEMCG ··· 868 882 { 869 883 struct sighand_struct *sighand = tsk->sighand; 870 884 struct signal_struct *signal = tsk->signal; 885 + struct core_state *core_state; 871 886 872 887 spin_lock_irq(&sighand->siglock); 873 888 signal->quick_threads--; ··· 878 891 signal->group_exit_code = code; 879 892 signal->group_stop_count = 0; 880 893 } 894 + /* 895 + * Serialize with any possible pending coredump. 896 + * We must hold siglock around checking core_state 897 + * and setting PF_POSTCOREDUMP. The core-inducing thread 898 + * will increment ->nr_threads for each thread in the 899 + * group without PF_POSTCOREDUMP set. 900 + */ 901 + tsk->flags |= PF_POSTCOREDUMP; 902 + core_state = signal->core_state; 881 903 spin_unlock_irq(&sighand->siglock); 904 + 905 + if (unlikely(core_state)) 906 + coredump_task_exit(tsk, core_state); 882 907 } 883 908 884 909 void __noreturn do_exit(long code) ··· 899 900 int group_dead; 900 901 901 902 WARN_ON(irqs_disabled()); 902 - 903 - synchronize_group_exit(tsk, code); 904 - 905 903 WARN_ON(tsk->plug); 906 904 907 905 kcov_task_exit(tsk); 908 906 kmsan_task_exit(tsk); 909 907 910 - coredump_task_exit(tsk); 908 + synchronize_group_exit(tsk, code); 911 909 ptrace_event(PTRACE_EVENT_EXIT, code); 912 910 user_events_exit(tsk); 913 911
+44 -11
kernel/hung_task.c
··· 22 22 #include <linux/sched/signal.h> 23 23 #include <linux/sched/debug.h> 24 24 #include <linux/sched/sysctl.h> 25 + #include <linux/hung_task.h> 25 26 26 27 #include <trace/events/sched.h> 27 28 ··· 99 98 static void debug_show_blocker(struct task_struct *task) 100 99 { 101 100 struct task_struct *g, *t; 102 - unsigned long owner; 103 - struct mutex *lock; 101 + unsigned long owner, blocker, blocker_type; 104 102 105 103 RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "No rcu lock held"); 106 104 107 - lock = READ_ONCE(task->blocker_mutex); 108 - if (!lock) 105 + blocker = READ_ONCE(task->blocker); 106 + if (!blocker) 109 107 return; 110 108 111 - owner = mutex_get_owner(lock); 109 + blocker_type = hung_task_get_blocker_type(blocker); 110 + 111 + switch (blocker_type) { 112 + case BLOCKER_TYPE_MUTEX: 113 + owner = mutex_get_owner( 114 + (struct mutex *)hung_task_blocker_to_lock(blocker)); 115 + break; 116 + case BLOCKER_TYPE_SEM: 117 + owner = sem_last_holder( 118 + (struct semaphore *)hung_task_blocker_to_lock(blocker)); 119 + break; 120 + default: 121 + WARN_ON_ONCE(1); 122 + return; 123 + } 124 + 125 + 112 126 if (unlikely(!owner)) { 113 - pr_err("INFO: task %s:%d is blocked on a mutex, but the owner is not found.\n", 114 - task->comm, task->pid); 127 + switch (blocker_type) { 128 + case BLOCKER_TYPE_MUTEX: 129 + pr_err("INFO: task %s:%d is blocked on a mutex, but the owner is not found.\n", 130 + task->comm, task->pid); 131 + break; 132 + case BLOCKER_TYPE_SEM: 133 + pr_err("INFO: task %s:%d is blocked on a semaphore, but the last holder is not found.\n", 134 + task->comm, task->pid); 135 + break; 136 + } 115 137 return; 116 138 } 117 139 118 140 /* Ensure the owner information is correct. 
*/ 119 141 for_each_process_thread(g, t) { 120 - if ((unsigned long)t == owner) { 142 + if ((unsigned long)t != owner) 143 + continue; 144 + 145 + switch (blocker_type) { 146 + case BLOCKER_TYPE_MUTEX: 121 147 pr_err("INFO: task %s:%d is blocked on a mutex likely owned by task %s:%d.\n", 122 - task->comm, task->pid, t->comm, t->pid); 123 - sched_show_task(t); 124 - return; 148 + task->comm, task->pid, t->comm, t->pid); 149 + break; 150 + case BLOCKER_TYPE_SEM: 151 + pr_err("INFO: task %s:%d blocked on a semaphore likely last held by task %s:%d\n", 152 + task->comm, task->pid, t->comm, t->pid); 153 + break; 125 154 } 155 + sched_show_task(t); 156 + return; 126 157 } 127 158 } 128 159 #else
+18 -63
kernel/kexec_file.c
··· 19 19 #include <linux/list.h> 20 20 #include <linux/fs.h> 21 21 #include <linux/ima.h> 22 - #include <crypto/hash.h> 23 22 #include <crypto/sha2.h> 24 23 #include <linux/elf.h> 25 24 #include <linux/elfcore.h> ··· 473 474 474 475 temp_end = min(end, kbuf->buf_max); 475 476 temp_start = temp_end - kbuf->memsz + 1; 477 + kexec_random_range_start(temp_start, temp_end, kbuf, &temp_start); 476 478 477 479 do { 478 480 /* align down start */ ··· 517 517 unsigned long temp_start, temp_end; 518 518 519 519 temp_start = max(start, kbuf->buf_min); 520 + 521 + kexec_random_range_start(temp_start, end, kbuf, &temp_start); 520 522 521 523 do { 522 524 temp_start = ALIGN(temp_start, kbuf->buf_align); ··· 751 749 /* Calculate and store the digest of segments */ 752 750 static int kexec_calculate_store_digests(struct kimage *image) 753 751 { 754 - struct crypto_shash *tfm; 755 - struct shash_desc *desc; 752 + struct sha256_state state; 756 753 int ret = 0, i, j, zero_buf_sz, sha_region_sz; 757 - size_t desc_size, nullsz; 758 - char *digest; 754 + size_t nullsz; 755 + u8 digest[SHA256_DIGEST_SIZE]; 759 756 void *zero_buf; 760 757 struct kexec_sha_region *sha_regions; 761 758 struct purgatory_info *pi = &image->purgatory_info; ··· 765 764 zero_buf = __va(page_to_pfn(ZERO_PAGE(0)) << PAGE_SHIFT); 766 765 zero_buf_sz = PAGE_SIZE; 767 766 768 - tfm = crypto_alloc_shash("sha256", 0, 0); 769 - if (IS_ERR(tfm)) { 770 - ret = PTR_ERR(tfm); 771 - goto out; 772 - } 773 - 774 - desc_size = crypto_shash_descsize(tfm) + sizeof(*desc); 775 - desc = kzalloc(desc_size, GFP_KERNEL); 776 - if (!desc) { 777 - ret = -ENOMEM; 778 - goto out_free_tfm; 779 - } 780 - 781 767 sha_region_sz = KEXEC_SEGMENT_MAX * sizeof(struct kexec_sha_region); 782 768 sha_regions = vzalloc(sha_region_sz); 783 - if (!sha_regions) { 784 - ret = -ENOMEM; 785 - goto out_free_desc; 786 - } 769 + if (!sha_regions) 770 + return -ENOMEM; 787 771 788 - desc->tfm = tfm; 789 - 790 - ret = crypto_shash_init(desc); 791 - if (ret < 
0) 792 - goto out_free_sha_regions; 793 - 794 - digest = kzalloc(SHA256_DIGEST_SIZE, GFP_KERNEL); 795 - if (!digest) { 796 - ret = -ENOMEM; 797 - goto out_free_sha_regions; 798 - } 772 + sha256_init(&state); 799 773 800 774 for (j = i = 0; i < image->nr_segments; i++) { 801 775 struct kexec_segment *ksegment; ··· 796 820 if (check_ima_segment_index(image, i)) 797 821 continue; 798 822 799 - ret = crypto_shash_update(desc, ksegment->kbuf, 800 - ksegment->bufsz); 801 - if (ret) 802 - break; 823 + sha256_update(&state, ksegment->kbuf, ksegment->bufsz); 803 824 804 825 /* 805 826 * Assume rest of the buffer is filled with zero and ··· 808 835 809 836 if (bytes > zero_buf_sz) 810 837 bytes = zero_buf_sz; 811 - ret = crypto_shash_update(desc, zero_buf, bytes); 812 - if (ret) 813 - break; 838 + sha256_update(&state, zero_buf, bytes); 814 839 nullsz -= bytes; 815 840 } 816 - 817 - if (ret) 818 - break; 819 841 820 842 sha_regions[j].start = ksegment->mem; 821 843 sha_regions[j].len = ksegment->memsz; 822 844 j++; 823 845 } 824 846 825 - if (!ret) { 826 - ret = crypto_shash_final(desc, digest); 827 - if (ret) 828 - goto out_free_digest; 829 - ret = kexec_purgatory_get_set_symbol(image, "purgatory_sha_regions", 830 - sha_regions, sha_region_sz, 0); 831 - if (ret) 832 - goto out_free_digest; 847 + sha256_final(&state, digest); 833 848 834 - ret = kexec_purgatory_get_set_symbol(image, "purgatory_sha256_digest", 835 - digest, SHA256_DIGEST_SIZE, 0); 836 - if (ret) 837 - goto out_free_digest; 838 - } 849 + ret = kexec_purgatory_get_set_symbol(image, "purgatory_sha_regions", 850 + sha_regions, sha_region_sz, 0); 851 + if (ret) 852 + goto out_free_sha_regions; 839 853 840 - out_free_digest: 841 - kfree(digest); 854 + ret = kexec_purgatory_get_set_symbol(image, "purgatory_sha256_digest", 855 + digest, SHA256_DIGEST_SIZE, 0); 842 856 out_free_sha_regions: 843 857 vfree(sha_regions); 844 - out_free_desc: 845 - kfree(desc); 846 - out_free_tfm: 847 - kfree(tfm); 848 - out: 849 858 
return ret; 850 859 } 851 860
+3 -2
kernel/locking/mutex.c
··· 29 29 #include <linux/interrupt.h> 30 30 #include <linux/debug_locks.h> 31 31 #include <linux/osq_lock.h> 32 + #include <linux/hung_task.h> 32 33 33 34 #define CREATE_TRACE_POINTS 34 35 #include <trace/events/lock.h> ··· 192 191 struct list_head *list) 193 192 { 194 193 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER 195 - WRITE_ONCE(current->blocker_mutex, lock); 194 + hung_task_set_blocker(lock, BLOCKER_TYPE_MUTEX); 196 195 #endif 197 196 debug_mutex_add_waiter(lock, waiter, current); 198 197 ··· 210 209 211 210 debug_mutex_remove_waiter(lock, waiter, current); 212 211 #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER 213 - WRITE_ONCE(current->blocker_mutex, NULL); 212 + hung_task_clear_blocker(); 214 213 #endif 215 214 } 216 215
+51 -6
kernel/locking/semaphore.c
··· 34 34 #include <linux/spinlock.h> 35 35 #include <linux/ftrace.h> 36 36 #include <trace/events/lock.h> 37 + #include <linux/hung_task.h> 37 38 38 39 static noinline void __down(struct semaphore *sem); 39 40 static noinline int __down_interruptible(struct semaphore *sem); 40 41 static noinline int __down_killable(struct semaphore *sem); 41 42 static noinline int __down_timeout(struct semaphore *sem, long timeout); 42 43 static noinline void __up(struct semaphore *sem, struct wake_q_head *wake_q); 44 + 45 + #ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER 46 + static inline void hung_task_sem_set_holder(struct semaphore *sem) 47 + { 48 + WRITE_ONCE((sem)->last_holder, (unsigned long)current); 49 + } 50 + 51 + static inline void hung_task_sem_clear_if_holder(struct semaphore *sem) 52 + { 53 + if (READ_ONCE((sem)->last_holder) == (unsigned long)current) 54 + WRITE_ONCE((sem)->last_holder, 0UL); 55 + } 56 + 57 + unsigned long sem_last_holder(struct semaphore *sem) 58 + { 59 + return READ_ONCE(sem->last_holder); 60 + } 61 + #else 62 + static inline void hung_task_sem_set_holder(struct semaphore *sem) 63 + { 64 + } 65 + static inline void hung_task_sem_clear_if_holder(struct semaphore *sem) 66 + { 67 + } 68 + unsigned long sem_last_holder(struct semaphore *sem) 69 + { 70 + return 0UL; 71 + } 72 + #endif 73 + 74 + static inline void __sem_acquire(struct semaphore *sem) 75 + { 76 + sem->count--; 77 + hung_task_sem_set_holder(sem); 78 + } 43 79 44 80 /** 45 81 * down - acquire the semaphore ··· 95 59 might_sleep(); 96 60 raw_spin_lock_irqsave(&sem->lock, flags); 97 61 if (likely(sem->count > 0)) 98 - sem->count--; 62 + __sem_acquire(sem); 99 63 else 100 64 __down(sem); 101 65 raw_spin_unlock_irqrestore(&sem->lock, flags); ··· 119 83 might_sleep(); 120 84 raw_spin_lock_irqsave(&sem->lock, flags); 121 85 if (likely(sem->count > 0)) 122 - sem->count--; 86 + __sem_acquire(sem); 123 87 else 124 88 result = __down_interruptible(sem); 125 89 raw_spin_unlock_irqrestore(&sem->lock, flags); 
··· 146 110 might_sleep(); 147 111 raw_spin_lock_irqsave(&sem->lock, flags); 148 112 if (likely(sem->count > 0)) 149 - sem->count--; 113 + __sem_acquire(sem); 150 114 else 151 115 result = __down_killable(sem); 152 116 raw_spin_unlock_irqrestore(&sem->lock, flags); ··· 176 140 raw_spin_lock_irqsave(&sem->lock, flags); 177 141 count = sem->count - 1; 178 142 if (likely(count >= 0)) 179 - sem->count = count; 143 + __sem_acquire(sem); 180 144 raw_spin_unlock_irqrestore(&sem->lock, flags); 181 145 182 146 return (count < 0); ··· 201 165 might_sleep(); 202 166 raw_spin_lock_irqsave(&sem->lock, flags); 203 167 if (likely(sem->count > 0)) 204 - sem->count--; 168 + __sem_acquire(sem); 205 169 else 206 170 result = __down_timeout(sem, timeout); 207 171 raw_spin_unlock_irqrestore(&sem->lock, flags); ··· 223 187 DEFINE_WAKE_Q(wake_q); 224 188 225 189 raw_spin_lock_irqsave(&sem->lock, flags); 190 + 191 + hung_task_sem_clear_if_holder(sem); 192 + 226 193 if (likely(list_empty(&sem->wait_list))) 227 194 sem->count++; 228 195 else ··· 267 228 raw_spin_unlock_irq(&sem->lock); 268 229 timeout = schedule_timeout(timeout); 269 230 raw_spin_lock_irq(&sem->lock); 270 - if (waiter.up) 231 + if (waiter.up) { 232 + hung_task_sem_set_holder(sem); 271 233 return 0; 234 + } 272 235 } 273 236 274 237 timed_out: ··· 287 246 { 288 247 int ret; 289 248 249 + hung_task_set_blocker(sem, BLOCKER_TYPE_SEM); 250 + 290 251 trace_contention_begin(sem, 0); 291 252 ret = ___down_common(sem, state, timeout); 292 253 trace_contention_end(sem, ret); 254 + 255 + hung_task_clear_blocker(); 293 256 294 257 return ret; 295 258 }
+3 -5
kernel/panic.c
··· 307 307 } 308 308 309 309 /** 310 - * panic - halt the system 311 - * @fmt: The text string to print 310 + * panic - halt the system 311 + * @fmt: The text string to print 312 312 * 313 - * Display a message, then perform cleanups. 314 - * 315 - * This function never returns. 313 + * Display a message, then perform cleanups. This function never returns. 316 314 */ 317 315 void panic(const char *fmt, ...) 318 316 {
+1 -110
kernel/relay.c
··· 452 452 453 453 /** 454 454 * relay_open - create a new relay channel 455 - * @base_filename: base name of files to create, %NULL for buffering only 455 + * @base_filename: base name of files to create 456 456 * @parent: dentry of parent directory, %NULL for root directory or buffer 457 457 * @subbuf_size: size of sub-buffers 458 458 * @n_subbufs: number of sub-buffers ··· 465 465 * attributes specified. The created channel buffer files 466 466 * will be named base_filename0...base_filenameN-1. File 467 467 * permissions will be %S_IRUSR. 468 - * 469 - * If opening a buffer (@parent = NULL) that you later wish to register 470 - * in a filesystem, call relay_late_setup_files() once the @parent dentry 471 - * is available. 472 468 */ 473 469 struct rchan *relay_open(const char *base_filename, 474 470 struct dentry *parent, ··· 535 539 struct rchan_buf *buf; 536 540 struct dentry *dentry; 537 541 }; 538 - 539 - /* Called in atomic context. */ 540 - static void __relay_set_buf_dentry(void *info) 541 - { 542 - struct rchan_percpu_buf_dispatcher *p = info; 543 - 544 - relay_set_buf_dentry(p->buf, p->dentry); 545 - } 546 - 547 - /** 548 - * relay_late_setup_files - triggers file creation 549 - * @chan: channel to operate on 550 - * @base_filename: base name of files to create 551 - * @parent: dentry of parent directory, %NULL for root directory 552 - * 553 - * Returns 0 if successful, non-zero otherwise. 554 - * 555 - * Use to setup files for a previously buffer-only channel created 556 - * by relay_open() with a NULL parent dentry. 557 - * 558 - * For example, this is useful for perfomring early tracing in kernel, 559 - * before VFS is up and then exposing the early results once the dentry 560 - * is available. 
561 - */ 562 - int relay_late_setup_files(struct rchan *chan, 563 - const char *base_filename, 564 - struct dentry *parent) 565 - { 566 - int err = 0; 567 - unsigned int i, curr_cpu; 568 - unsigned long flags; 569 - struct dentry *dentry; 570 - struct rchan_buf *buf; 571 - struct rchan_percpu_buf_dispatcher disp; 572 - 573 - if (!chan || !base_filename) 574 - return -EINVAL; 575 - 576 - strscpy(chan->base_filename, base_filename, NAME_MAX); 577 - 578 - mutex_lock(&relay_channels_mutex); 579 - /* Is chan already set up? */ 580 - if (unlikely(chan->has_base_filename)) { 581 - mutex_unlock(&relay_channels_mutex); 582 - return -EEXIST; 583 - } 584 - chan->has_base_filename = 1; 585 - chan->parent = parent; 586 - 587 - if (chan->is_global) { 588 - err = -EINVAL; 589 - buf = *per_cpu_ptr(chan->buf, 0); 590 - if (!WARN_ON_ONCE(!buf)) { 591 - dentry = relay_create_buf_file(chan, buf, 0); 592 - if (dentry && !WARN_ON_ONCE(!chan->is_global)) { 593 - relay_set_buf_dentry(buf, dentry); 594 - err = 0; 595 - } 596 - } 597 - mutex_unlock(&relay_channels_mutex); 598 - return err; 599 - } 600 - 601 - curr_cpu = get_cpu(); 602 - /* 603 - * The CPU hotplug notifier ran before us and created buffers with 604 - * no files associated. So it's safe to call relay_setup_buf_file() 605 - * on all currently online CPUs. 606 - */ 607 - for_each_online_cpu(i) { 608 - buf = *per_cpu_ptr(chan->buf, i); 609 - if (unlikely(!buf)) { 610 - WARN_ONCE(1, KERN_ERR "CPU has no buffer!\n"); 611 - err = -EINVAL; 612 - break; 613 - } 614 - 615 - dentry = relay_create_buf_file(chan, buf, i); 616 - if (unlikely(!dentry)) { 617 - err = -EINVAL; 618 - break; 619 - } 620 - 621 - if (curr_cpu == i) { 622 - local_irq_save(flags); 623 - relay_set_buf_dentry(buf, dentry); 624 - local_irq_restore(flags); 625 - } else { 626 - disp.buf = buf; 627 - disp.dentry = dentry; 628 - smp_mb(); 629 - /* relay_channels_mutex must be held, so wait. 
*/ 630 - err = smp_call_function_single(i, 631 - __relay_set_buf_dentry, 632 - &disp, 1); 633 - } 634 - if (unlikely(err)) 635 - break; 636 - } 637 - put_cpu(); 638 - mutex_unlock(&relay_channels_mutex); 639 - 640 - return err; 641 - } 642 - EXPORT_SYMBOL_GPL(relay_late_setup_files); 643 542 644 543 /** 645 544 * relay_switch_subbuf - switch to a new sub-buffer
+4
kernel/vmcore_info.c
··· 210 210 VMCOREINFO_NUMBER(PAGE_HUGETLB_MAPCOUNT_VALUE); 211 211 #define PAGE_OFFLINE_MAPCOUNT_VALUE (PGTY_offline << 24) 212 212 VMCOREINFO_NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE); 213 + #ifdef CONFIG_UNACCEPTED_MEMORY 214 + #define PAGE_UNACCEPTED_MAPCOUNT_VALUE (PGTY_unaccepted << 24) 215 + VMCOREINFO_NUMBER(PAGE_UNACCEPTED_MAPCOUNT_VALUE); 216 + #endif 213 217 214 218 #ifdef CONFIG_KALLSYMS 215 219 VMCOREINFO_SYMBOL(kallsyms_names);
+80 -14
kernel/watchdog.c
··· 47 47 static int __read_mostly watchdog_hardlockup_user_enabled = WATCHDOG_HARDLOCKUP_DEFAULT; 48 48 static int __read_mostly watchdog_softlockup_user_enabled = 1; 49 49 int __read_mostly watchdog_thresh = 10; 50 + static int __read_mostly watchdog_thresh_next; 50 51 static int __read_mostly watchdog_hardlockup_available; 51 52 52 53 struct cpumask watchdog_cpumask __read_mostly; ··· 64 63 */ 65 64 unsigned int __read_mostly hardlockup_panic = 66 65 IS_ENABLED(CONFIG_BOOTPARAM_HARDLOCKUP_PANIC); 66 + 67 + #ifdef CONFIG_SYSFS 68 + 69 + static unsigned int hardlockup_count; 70 + 71 + static ssize_t hardlockup_count_show(struct kobject *kobj, struct kobj_attribute *attr, 72 + char *page) 73 + { 74 + return sysfs_emit(page, "%u\n", hardlockup_count); 75 + } 76 + 77 + static struct kobj_attribute hardlockup_count_attr = __ATTR_RO(hardlockup_count); 78 + 79 + static __init int kernel_hardlockup_sysfs_init(void) 80 + { 81 + sysfs_add_file_to_group(kernel_kobj, &hardlockup_count_attr.attr, NULL); 82 + return 0; 83 + } 84 + 85 + late_initcall(kernel_hardlockup_sysfs_init); 86 + 87 + #endif // CONFIG_SYSFS 88 + 67 89 /* 68 90 * We may not want to enable hard lockup detection by default in all cases, 69 91 * for example when running the kernel as a guest on a hypervisor. In these ··· 192 168 if (is_hardlockup(cpu)) { 193 169 unsigned int this_cpu = smp_processor_id(); 194 170 unsigned long flags; 171 + 172 + #ifdef CONFIG_SYSFS 173 + ++hardlockup_count; 174 + #endif 195 175 196 176 /* Only print hardlockups once. 
*/ 197 177 if (per_cpu(watchdog_hardlockup_warned, cpu)) ··· 338 310 339 311 static bool softlockup_initialized __read_mostly; 340 312 static u64 __read_mostly sample_period; 313 + 314 + #ifdef CONFIG_SYSFS 315 + 316 + static unsigned int softlockup_count; 317 + 318 + static ssize_t softlockup_count_show(struct kobject *kobj, struct kobj_attribute *attr, 319 + char *page) 320 + { 321 + return sysfs_emit(page, "%u\n", softlockup_count); 322 + } 323 + 324 + static struct kobj_attribute softlockup_count_attr = __ATTR_RO(softlockup_count); 325 + 326 + static __init int kernel_softlockup_sysfs_init(void) 327 + { 328 + sysfs_add_file_to_group(kernel_kobj, &softlockup_count_attr.attr, NULL); 329 + return 0; 330 + } 331 + 332 + late_initcall(kernel_softlockup_sysfs_init); 333 + 334 + #endif // CONFIG_SYSFS 341 335 342 336 /* Timestamp taken after the last successful reschedule. */ 343 337 static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts); ··· 792 742 touch_ts = __this_cpu_read(watchdog_touch_ts); 793 743 duration = is_softlockup(touch_ts, period_ts, now); 794 744 if (unlikely(duration)) { 745 + #ifdef CONFIG_SYSFS 746 + ++softlockup_count; 747 + #endif 748 + 795 749 /* 796 750 * Prevent multiple soft-lockup reports if one cpu is already 797 751 * engaged in dumping all cpu back traces. ··· 924 870 return 0; 925 871 } 926 872 927 - static void __lockup_detector_reconfigure(void) 873 + static void __lockup_detector_reconfigure(bool thresh_changed) 928 874 { 929 875 cpus_read_lock(); 930 876 watchdog_hardlockup_stop(); 931 877 932 878 softlockup_stop_all(); 879 + /* 880 + * To prevent watchdog_timer_fn from using the old interval and 881 + * the new watchdog_thresh at the same time, which could lead to 882 + * false softlockup reports, it is necessary to update the 883 + * watchdog_thresh after the softlockup is completed. 
884 + */ 885 + if (thresh_changed) 886 + watchdog_thresh = READ_ONCE(watchdog_thresh_next); 933 887 set_sample_period(); 934 888 lockup_detector_update_enable(); 935 889 if (watchdog_enabled && watchdog_thresh) ··· 950 888 void lockup_detector_reconfigure(void) 951 889 { 952 890 mutex_lock(&watchdog_mutex); 953 - __lockup_detector_reconfigure(); 891 + __lockup_detector_reconfigure(false); 954 892 mutex_unlock(&watchdog_mutex); 955 893 } 956 894 ··· 970 908 return; 971 909 972 910 mutex_lock(&watchdog_mutex); 973 - __lockup_detector_reconfigure(); 911 + __lockup_detector_reconfigure(false); 974 912 softlockup_initialized = true; 975 913 mutex_unlock(&watchdog_mutex); 976 914 } 977 915 978 916 #else /* CONFIG_SOFTLOCKUP_DETECTOR */ 979 - static void __lockup_detector_reconfigure(void) 917 + static void __lockup_detector_reconfigure(bool thresh_changed) 980 918 { 981 919 cpus_read_lock(); 982 920 watchdog_hardlockup_stop(); 921 + if (thresh_changed) 922 + watchdog_thresh = READ_ONCE(watchdog_thresh_next); 983 923 lockup_detector_update_enable(); 984 924 watchdog_hardlockup_start(); 985 925 cpus_read_unlock(); 986 926 } 987 927 void lockup_detector_reconfigure(void) 988 928 { 989 - __lockup_detector_reconfigure(); 929 + __lockup_detector_reconfigure(false); 990 930 } 991 931 static inline void lockup_detector_setup(void) 992 932 { 993 - __lockup_detector_reconfigure(); 933 + __lockup_detector_reconfigure(false); 994 934 } 995 935 #endif /* !CONFIG_SOFTLOCKUP_DETECTOR */ 996 936 ··· 1010 946 #ifdef CONFIG_SYSCTL 1011 947 1012 948 /* Propagate any changes to the watchdog infrastructure */ 1013 - static void proc_watchdog_update(void) 949 + static void proc_watchdog_update(bool thresh_changed) 1014 950 { 1015 951 /* Remove impossible cpus to keep sysctl output clean. 
*/ 1016 952 cpumask_and(&watchdog_cpumask, &watchdog_cpumask, cpu_possible_mask); 1017 - __lockup_detector_reconfigure(); 953 + __lockup_detector_reconfigure(thresh_changed); 1018 954 } 1019 955 1020 956 /* ··· 1048 984 } else { 1049 985 err = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 1050 986 if (!err && old != READ_ONCE(*param)) 1051 - proc_watchdog_update(); 987 + proc_watchdog_update(false); 1052 988 } 1053 989 mutex_unlock(&watchdog_mutex); 1054 990 return err; ··· 1099 1035 1100 1036 mutex_lock(&watchdog_mutex); 1101 1037 1102 - old = READ_ONCE(watchdog_thresh); 1038 + watchdog_thresh_next = READ_ONCE(watchdog_thresh); 1039 + 1040 + old = watchdog_thresh_next; 1103 1041 err = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 1104 1042 1105 - if (!err && write && old != READ_ONCE(watchdog_thresh)) 1106 - proc_watchdog_update(); 1043 + if (!err && write && old != READ_ONCE(watchdog_thresh_next)) 1044 + proc_watchdog_update(true); 1107 1045 1108 1046 mutex_unlock(&watchdog_mutex); 1109 1047 return err; ··· 1126 1060 1127 1061 err = proc_do_large_bitmap(table, write, buffer, lenp, ppos); 1128 1062 if (!err && write) 1129 - proc_watchdog_update(); 1063 + proc_watchdog_update(false); 1130 1064 1131 1065 mutex_unlock(&watchdog_mutex); 1132 1066 return err; ··· 1146 1080 }, 1147 1081 { 1148 1082 .procname = "watchdog_thresh", 1149 - .data = &watchdog_thresh, 1083 + .data = &watchdog_thresh_next, 1150 1084 .maxlen = sizeof(int), 1151 1085 .mode = 0644, 1152 1086 .proc_handler = proc_watchdog_thresh,
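The watchdog.c change above stages sysctl writes into `watchdog_thresh_next` and publishes them to `watchdog_thresh` only inside `__lockup_detector_reconfigure()`, after the softlockup detectors have been stopped, so the timer callback never mixes an old sample period with a new threshold. A minimal userspace sketch of that staged-update pattern (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace sketch of the staged-threshold pattern used in
 * the watchdog.c hunks above: writers only touch a "next" value; the
 * live value is republished inside reconfigure(), between stopping and
 * restarting the detector, so consumers never observe a new threshold
 * paired with a stale derived sample period. */
static int thresh = 10;        /* live value, read by the "detector" */
static int thresh_next = 10;   /* staging area, written by "sysctl"  */
static int sample_period;      /* always derived from the live value */

static void set_sample_period(void) { sample_period = thresh * 2; }

static void reconfigure(bool thresh_changed)
{
    /* detector stopped here in the real code */
    if (thresh_changed)
        thresh = thresh_next;   /* publish the staged value */
    set_sample_period();        /* derive period from the new value */
    /* detector restarted here */
}

static void sysctl_write_thresh(int val)
{
    thresh_next = val;          /* stage only; never write thresh directly */
    reconfigure(true);
}
```

This mirrors why the sysctl table entry now points `.data` at `watchdog_thresh_next` rather than `watchdog_thresh`.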
-6
lib/Kconfig.debug
··· 2982 2982 config TEST_KMOD 2983 2983 tristate "kmod stress tester" 2984 2984 depends on m 2985 - depends on NETDEVICES && NET_CORE && INET # for TUN 2986 - depends on BLOCK 2987 - depends on PAGE_SIZE_LESS_THAN_256KB # for BTRFS 2988 2985 select TEST_LKM 2989 - select XFS_FS 2990 - select TUN 2991 - select BTRFS_FS 2992 2986 help 2993 2987 Test the kernel's module loading mechanism: kmod. kmod implements 2994 2988 support to load modules using the Linux kernel's usermode helper.
+7 -6
lib/errseq.c
··· 34 34 */ 35 35 36 36 /* The low bits are designated for error code (max of MAX_ERRNO) */ 37 - #define ERRSEQ_SHIFT ilog2(MAX_ERRNO + 1) 37 + #define ERRSEQ_SHIFT (ilog2(MAX_ERRNO) + 1) 38 38 39 39 /* This bit is used as a flag to indicate whether the value has been seen */ 40 40 #define ERRSEQ_SEEN (1 << ERRSEQ_SHIFT) 41 + 42 + /* Leverage macro ERRSEQ_SEEN to define errno mask macro here */ 43 + #define ERRNO_MASK (ERRSEQ_SEEN - 1) 41 44 42 45 /* The lowest bit of the counter */ 43 46 #define ERRSEQ_CTR_INC (1 << (ERRSEQ_SHIFT + 1)) ··· 63 60 { 64 61 errseq_t cur, old; 65 62 66 - /* MAX_ERRNO must be able to serve as a mask */ 67 - BUILD_BUG_ON_NOT_POWER_OF_2(MAX_ERRNO + 1); 68 63 69 64 /* 70 65 * Ensure the error code actually fits where we want it to go. If it ··· 80 79 errseq_t new; 81 80 82 81 /* Clear out error bits and set new error */ 83 - new = (old & ~(MAX_ERRNO|ERRSEQ_SEEN)) | -err; 82 + new = (old & ~(ERRNO_MASK | ERRSEQ_SEEN)) | -err; 84 83 85 84 /* Only increment if someone has looked at it */ 86 85 if (old & ERRSEQ_SEEN) ··· 149 148 150 149 if (likely(cur == since)) 151 150 return 0; 152 - return -(cur & MAX_ERRNO); 151 + return -(cur & ERRNO_MASK); 153 152 } 154 153 EXPORT_SYMBOL(errseq_check); 155 154 ··· 201 200 if (new != old) 202 201 cmpxchg(eseq, old, new); 203 202 *since = new; 204 - err = -(new & MAX_ERRNO); 203 + err = -(new & ERRNO_MASK); 205 204 } 206 205 return err; 207 206 }
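The errseq.c patch derives the errno mask from `ERRSEQ_SEEN` instead of reusing `MAX_ERRNO` directly, which also lets it drop the `BUILD_BUG_ON_NOT_POWER_OF_2(MAX_ERRNO + 1)` check. The bit layout (errno in the low bits, a "seen" flag, counter above) can be replicated in a small userspace sketch; `MAX_ERRNO` is 4095 in the kernel, so `ilog2(MAX_ERRNO) + 1` is 12:

```c
#include <assert.h>

/* Userspace replica of the errseq_t bit layout after this patch:
 * bits 0..11 hold -errno, bit 12 is the "seen" flag, and the change
 * counter starts at bit 13. ERRNO_MASK is derived from ERRSEQ_SEEN. */
#define MAX_ERRNO       4095
#define ERRSEQ_SHIFT    12                    /* ilog2(MAX_ERRNO) + 1 */
#define ERRSEQ_SEEN     (1u << ERRSEQ_SHIFT)
#define ERRNO_MASK      (ERRSEQ_SEEN - 1)
#define ERRSEQ_CTR_INC  (1u << (ERRSEQ_SHIFT + 1))

/* Record a new (negative) errno; bump the counter only if someone
 * has already looked at the old value, as errseq_set() does. */
static unsigned int errseq_set_sketch(unsigned int old, int err)
{
    unsigned int new = (old & ~(ERRNO_MASK | ERRSEQ_SEEN)) | -err;

    if (old & ERRSEQ_SEEN)
        new += ERRSEQ_CTR_INC;
    return new;
}

/* Extract the stored errno, as errseq_check() does on a mismatch. */
static int errseq_errno_sketch(unsigned int cur)
{
    return -(int)(cur & ERRNO_MASK);
}
```

Since `ERRNO_MASK` equals `ERRSEQ_SEEN - 1`, it masks exactly the bits below the flag, which is why the power-of-two build assertion on `MAX_ERRNO + 1` is no longer needed.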
+4
lib/kstrtox.c
··· 351 351 return -EINVAL; 352 352 353 353 switch (s[0]) { 354 + case 'e': 355 + case 'E': 354 356 case 'y': 355 357 case 'Y': 356 358 case 't': ··· 360 358 case '1': 361 359 *res = true; 362 360 return 0; 361 + case 'd': 362 + case 'D': 363 363 case 'n': 364 364 case 'N': 365 365 case 'f':
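The kstrtox.c hunk teaches `kstrtobool()` to accept "enable"/"disable" style strings: a leading 'e'/'E' now parses as true and 'd'/'D' as false, alongside the existing y/t/1 and n/f/0 prefixes. A userspace sketch of the first-character dispatch (the real function also handles "on"/"off" via a second-character check on 'o', omitted here for brevity):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace sketch of kstrtobool()'s prefix dispatch after this patch.
 * Only the first character matters ("enable", "E", "eggs" all map to
 * true). Returns 0 on success, -1 standing in for -EINVAL. */
static int kstrtobool_sketch(const char *s, bool *res)
{
    if (!s || !s[0])
        return -1;

    switch (s[0]) {
    case 'e': case 'E':   /* new in this patch: "enable" */
    case 'y': case 'Y':
    case 't': case 'T':
    case '1':
        *res = true;
        return 0;
    case 'd': case 'D':   /* new in this patch: "disable" */
    case 'n': case 'N':
    case 'f': case 'F':
    case '0':
        *res = false;
        return 0;
    default:
        return -1;
    }
}
```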
-22
lib/llist.c
··· 14 14 #include <linux/export.h> 15 15 #include <linux/llist.h> 16 16 17 - 18 - /** 19 - * llist_add_batch - add several linked entries in batch 20 - * @new_first: first entry in batch to be added 21 - * @new_last: last entry in batch to be added 22 - * @head: the head for your lock-less list 23 - * 24 - * Return whether list is empty before adding. 25 - */ 26 - bool llist_add_batch(struct llist_node *new_first, struct llist_node *new_last, 27 - struct llist_head *head) 28 - { 29 - struct llist_node *first = READ_ONCE(head->first); 30 - 31 - do { 32 - new_last->next = first; 33 - } while (!try_cmpxchg(&head->first, &first, new_first)); 34 - 35 - return !first; 36 - } 37 - EXPORT_SYMBOL_GPL(llist_add_batch); 38 - 39 17 /** 40 18 * llist_del_first - delete the first entry of lock-less list 41 19 * @head: the head for your lock-less list
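The removed `llist_add_batch()` survives as a static inline in `<linux/llist.h>` (per the "llist: make llist_add_batch() a static inline" commit in this merge). Its core is a compare-exchange retry loop that splices an entire pre-linked chain onto the list head in one atomic publish. A C11 userspace replica of the same loop:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* C11 userspace replica of llist_add_batch(): point new_last->next at
 * the current head, then publish new_first with a compare-exchange,
 * retrying if another producer raced us. Returns true if the list was
 * empty before the add, matching the kernel function's contract. */
struct llist_node { struct llist_node *next; };
struct llist_head { _Atomic(struct llist_node *) first; };

static bool llist_add_batch_sketch(struct llist_node *new_first,
                                   struct llist_node *new_last,
                                   struct llist_head *head)
{
    struct llist_node *first = atomic_load(&head->first);

    do {
        new_last->next = first;   /* splice batch in front of old head */
    } while (!atomic_compare_exchange_weak(&head->first, &first, new_first));

    return first == NULL;
}
```

On failure, `atomic_compare_exchange_weak` reloads the observed head into `first`, so the loop re-links and retries, which is exactly the `try_cmpxchg()` pattern in the removed code.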
+1 -24
lib/oid_registry.c
··· 117 117 EXPORT_SYMBOL_GPL(parse_OID); 118 118 119 119 /* 120 - * sprint_OID - Print an Object Identifier into a buffer 120 + * sprint_oid - Print an Object Identifier into a buffer 121 121 * @data: The encoded OID to print 122 122 * @datasize: The size of the encoded OID 123 123 * @buffer: The buffer to render into ··· 173 173 return -EBADMSG; 174 174 } 175 175 EXPORT_SYMBOL_GPL(sprint_oid); 176 - 177 - /** 178 - * sprint_OID - Print an Object Identifier into a buffer 179 - * @oid: The OID to print 180 - * @buffer: The buffer to render into 181 - * @bufsize: The size of the buffer 182 - * 183 - * The OID is rendered into the buffer in "a.b.c.d" format and the number of 184 - * bytes is returned. 185 - */ 186 - int sprint_OID(enum OID oid, char *buffer, size_t bufsize) 187 - { 188 - int ret; 189 - 190 - BUG_ON(oid >= OID__NR); 191 - 192 - ret = sprint_oid(oid_data + oid_index[oid], 193 - oid_index[oid + 1] - oid_index[oid], 194 - buffer, bufsize); 195 - BUG_ON(ret == -EBADMSG); 196 - return ret; 197 - } 198 - EXPORT_SYMBOL_GPL(sprint_OID);
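The oid_registry.c patch fixes the kernel-doc name (`sprint_oid`, not `sprint_OID`) and drops the now-unused `sprint_OID()` wrapper. The rendering that `sprint_oid()` performs can be sketched in userspace: the first BER byte packs the first two arcs as `40*a + b`, and each later arc is base-128 with the high bit set on all but its last byte. This sketch omits the bounds and malformed-input checks of the real function and assumes a first arc below 2.40 encodings:

```c
#include <assert.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Userspace sketch of sprint_oid()-style rendering: BER-encoded OID
 * bytes to "a.b.c.d" form. No overflow/malformed-input handling. */
static int sprint_oid_sketch(const unsigned char *data, size_t len,
                             char *buf, size_t bufsize)
{
    size_t i = 1, n = 0;
    unsigned long arc;

    if (!len)
        return -1;
    /* First byte encodes the first two arcs together. */
    n += snprintf(buf + n, bufsize - n, "%u.%u", data[0] / 40, data[0] % 40);
    while (i < len) {
        arc = 0;
        do {    /* accumulate 7-bit digits, MSB is the continuation flag */
            arc = (arc << 7) | (data[i] & 0x7f);
        } while ((data[i++] & 0x80) && i < len);
        n += snprintf(buf + n, bufsize - n, ".%lu", arc);
    }
    return (int)n;
}
```

For example, the commonName OID 2.5.4.3 is encoded as the bytes `55 04 03`, and 1.2.840 as `2a 86 48`.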
+4 -4
lib/rbtree.c
··· 297 297 * / \ / \ 298 298 * N S --> N sl 299 299 * / \ \ 300 - * sl sr S 300 + * sl Sr S 301 301 * \ 302 - * sr 302 + * Sr 303 303 * 304 304 * Note: p might be red, and then both 305 305 * p and sl are red after rotation(which ··· 312 312 * / \ / \ 313 313 * N sl --> P S 314 314 * \ / \ 315 - * S N sr 315 + * S N Sr 316 316 * \ 317 - * sr 317 + * Sr 318 318 */ 319 319 tmp1 = tmp2->rb_right; 320 320 WRITE_ONCE(sibling->rb_left, tmp1);
-23
lib/scatterlist.c
··· 14 14 #include <linux/folio_queue.h> 15 15 16 16 /** 17 - * sg_next - return the next scatterlist entry in a list 18 - * @sg: The current sg entry 19 - * 20 - * Description: 21 - * Usually the next entry will be @sg@ + 1, but if this sg element is part 22 - * of a chained scatterlist, it could jump to the start of a new 23 - * scatterlist array. 24 - * 25 - **/ 26 - struct scatterlist *sg_next(struct scatterlist *sg) 27 - { 28 - if (sg_is_last(sg)) 29 - return NULL; 30 - 31 - sg++; 32 - if (unlikely(sg_is_chain(sg))) 33 - sg = sg_chain_ptr(sg); 34 - 35 - return sg; 36 - } 37 - EXPORT_SYMBOL(sg_next); 38 - 39 - /** 40 17 * sg_nents - return total count of entries in scatterlist 41 18 * @sg: The scatterlist 42 19 *
+34 -30
lib/test_kmod.c
··· 28 28 29 29 #define TEST_START_NUM_THREADS 50 30 30 #define TEST_START_DRIVER "test_module" 31 - #define TEST_START_TEST_FS "xfs" 32 31 #define TEST_START_TEST_CASE TEST_KMOD_DRIVER 33 32 34 - 35 33 static bool force_init_test = false; 36 - module_param(force_init_test, bool_enable_only, 0644); 34 + module_param(force_init_test, bool_enable_only, 0444); 37 35 MODULE_PARM_DESC(force_init_test, 38 36 "Force kicking a test immediately after driver loads"); 37 + static char *start_driver; 38 + module_param(start_driver, charp, 0444); 39 + MODULE_PARM_DESC(start_driver, 40 + "Module/driver to use for the testing after driver loads"); 41 + static char *start_test_fs; 42 + module_param(start_test_fs, charp, 0444); 43 + MODULE_PARM_DESC(start_test_fs, 44 + "File system to use for the testing after driver loads"); 39 45 40 46 /* 41 47 * For device allocation / registration ··· 514 508 case TEST_KMOD_DRIVER: 515 509 return run_test_driver(test_dev); 516 510 case TEST_KMOD_FS_TYPE: 511 + if (!config->test_fs) { 512 + dev_warn(test_dev->dev, 513 + "No fs type specified, can't run the test\n"); 514 + return -EINVAL; 515 + } 517 516 return run_test_fs_type(test_dev); 518 517 default: 519 518 dev_warn(test_dev->dev, ··· 732 721 static DEVICE_ATTR_RW(config_test_fs); 733 722 734 723 static int trigger_config_run_type(struct kmod_test_device *test_dev, 735 - enum kmod_test_case test_case, 736 - const char *test_str) 724 + enum kmod_test_case test_case) 737 725 { 738 - int copied = 0; 739 726 struct test_config *config = &test_dev->config; 740 727 741 728 mutex_lock(&test_dev->config_mutex); 742 729 743 730 switch (test_case) { 744 731 case TEST_KMOD_DRIVER: 745 - kfree_const(config->test_driver); 746 - config->test_driver = NULL; 747 - copied = config_copy_test_driver_name(config, test_str, 748 - strlen(test_str)); 749 732 break; 750 733 case TEST_KMOD_FS_TYPE: 751 - kfree_const(config->test_fs); 752 - config->test_fs = NULL; 753 - copied = config_copy_test_fs(config, test_str, 
754 - strlen(test_str)); 734 + if (!config->test_fs) { 735 + mutex_unlock(&test_dev->config_mutex); 736 + return 0; 737 + } 755 738 break; 756 739 default: 757 740 mutex_unlock(&test_dev->config_mutex); ··· 755 750 config->test_case = test_case; 756 751 757 752 mutex_unlock(&test_dev->config_mutex); 758 - 759 - if (copied <= 0 || copied != strlen(test_str)) { 760 - test_dev->test_is_oom = true; 761 - return -ENOMEM; 762 - } 763 753 764 754 test_dev->test_is_oom = false; 765 755 ··· 800 800 static int __kmod_config_init(struct kmod_test_device *test_dev) 801 801 { 802 802 struct test_config *config = &test_dev->config; 803 + const char *test_start_driver = start_driver ? start_driver : 804 + TEST_START_DRIVER; 803 805 int ret = -ENOMEM, copied; 804 806 805 807 __kmod_config_free(config); 806 808 807 - copied = config_copy_test_driver_name(config, TEST_START_DRIVER, 808 - strlen(TEST_START_DRIVER)); 809 - if (copied != strlen(TEST_START_DRIVER)) 809 + copied = config_copy_test_driver_name(config, test_start_driver, 810 + strlen(test_start_driver)); 811 + if (copied != strlen(test_start_driver)) 810 812 goto err_out; 811 813 812 - copied = config_copy_test_fs(config, TEST_START_TEST_FS, 813 - strlen(TEST_START_TEST_FS)); 814 - if (copied != strlen(TEST_START_TEST_FS)) 815 - goto err_out; 814 + 815 + if (start_test_fs) { 816 + copied = config_copy_test_fs(config, start_test_fs, 817 + strlen(start_test_fs)); 818 + if (copied != strlen(start_test_fs)) 819 + goto err_out; 820 + } 816 821 817 822 config->num_threads = kmod_init_test_thread_limit(); 818 823 config->test_result = 0; ··· 1183 1178 * lowering the init level for more fun. 
1184 1179 */ 1185 1180 if (force_init_test) { 1186 - ret = trigger_config_run_type(test_dev, 1187 - TEST_KMOD_DRIVER, "tun"); 1181 + ret = trigger_config_run_type(test_dev, TEST_KMOD_DRIVER); 1188 1182 if (WARN_ON(ret)) 1189 1183 return ret; 1190 - ret = trigger_config_run_type(test_dev, 1191 - TEST_KMOD_FS_TYPE, "btrfs"); 1184 + 1185 + ret = trigger_config_run_type(test_dev, TEST_KMOD_FS_TYPE); 1192 1186 if (WARN_ON(ret)) 1193 1187 return ret; 1194 1188 }
+1 -1
mm/maccess.c
··· 196 196 if (ret >= count) { 197 197 ret = count; 198 198 dst[ret - 1] = '\0'; 199 - } else if (ret > 0) { 199 + } else if (ret >= 0) { 200 200 ret++; 201 201 } 202 202
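The one-character maccess.c fix matters for empty strings: the inner copy reports the string length *excluding* the NUL, and the wrapper must add 1 to include it. With `ret > 0`, a successfully copied empty string (inner result 0) was reported as length 0 instead of 1; `ret >= 0` includes that case while still letting negative error codes fall through. A userspace sketch of the fixup logic:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Userspace sketch of the return-value fixup in
 * strncpy_from_user_nofault() after this patch. `ret` is the inner
 * copy's result: the copied length excluding the NUL, or a negative
 * error. The caller-visible result includes the terminating NUL. */
static long copy_fixup_sketch(long ret, long count, char *dst)
{
    if (ret >= count) {          /* truncated: force NUL termination */
        ret = count;
        dst[ret - 1] = '\0';
    } else if (ret >= 0) {       /* success, including the empty string */
        ret++;                   /* count the terminating NUL too */
    }
    return ret;                  /* negative errors pass through */
}
```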
+5 -4
samples/Kconfig
··· 315 315 tristate "Hung task detector test code" 316 316 depends on DETECT_HUNG_TASK && DEBUG_FS 317 317 help 318 - Build a module which provide a simple debugfs file. If user reads 319 - the file, it will sleep long time (256 seconds) with holding a 320 - mutex. Thus if there are 2 or more processes read this file, it 321 - will be detected by the hung_task watchdog. 318 + Build a module that provides debugfs files (e.g., mutex, semaphore, 319 + etc.) under <debugfs>/hung_task. If user reads one of these files, 320 + it will sleep long time (256 seconds) with holding a lock. Thus, 321 + if 2 or more processes read the same file concurrently, it will 322 + be detected by the hung_task watchdog. 322 323 323 324 source "samples/rust/Kconfig" 324 325
+1 -1
samples/hung_task/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 - obj-$(CONFIG_SAMPLE_HUNG_TASK) += hung_task_mutex.o 2 + obj-$(CONFIG_SAMPLE_HUNG_TASK) += hung_task_tests.o
-66
samples/hung_task/hung_task_mutex.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * hung_task_mutex.c - Sample code which causes hung task by mutex 4 - * 5 - * Usage: load this module and read `<debugfs>/hung_task/mutex` 6 - * by 2 or more processes. 7 - * 8 - * This is for testing kernel hung_task error message. 9 - * Note that this will make your system freeze and maybe 10 - * cause panic. So do not use this except for the test. 11 - */ 12 - 13 - #include <linux/debugfs.h> 14 - #include <linux/delay.h> 15 - #include <linux/fs.h> 16 - #include <linux/module.h> 17 - #include <linux/mutex.h> 18 - 19 - #define HUNG_TASK_DIR "hung_task" 20 - #define HUNG_TASK_FILE "mutex" 21 - #define SLEEP_SECOND 256 22 - 23 - static const char dummy_string[] = "This is a dummy string."; 24 - static DEFINE_MUTEX(dummy_mutex); 25 - static struct dentry *hung_task_dir; 26 - 27 - static ssize_t read_dummy(struct file *file, char __user *user_buf, 28 - size_t count, loff_t *ppos) 29 - { 30 - /* If the second task waits on the lock, it is uninterruptible sleep. */ 31 - guard(mutex)(&dummy_mutex); 32 - 33 - /* When the first task sleep here, it is interruptible. 
*/ 34 - msleep_interruptible(SLEEP_SECOND * 1000); 35 - 36 - return simple_read_from_buffer(user_buf, count, ppos, 37 - dummy_string, sizeof(dummy_string)); 38 - } 39 - 40 - static const struct file_operations hung_task_fops = { 41 - .read = read_dummy, 42 - }; 43 - 44 - static int __init hung_task_sample_init(void) 45 - { 46 - hung_task_dir = debugfs_create_dir(HUNG_TASK_DIR, NULL); 47 - if (IS_ERR(hung_task_dir)) 48 - return PTR_ERR(hung_task_dir); 49 - 50 - debugfs_create_file(HUNG_TASK_FILE, 0400, hung_task_dir, 51 - NULL, &hung_task_fops); 52 - 53 - return 0; 54 - } 55 - 56 - static void __exit hung_task_sample_exit(void) 57 - { 58 - debugfs_remove_recursive(hung_task_dir); 59 - } 60 - 61 - module_init(hung_task_sample_init); 62 - module_exit(hung_task_sample_exit); 63 - 64 - MODULE_LICENSE("GPL"); 65 - MODULE_AUTHOR("Masami Hiramatsu"); 66 - MODULE_DESCRIPTION("Simple sleep under mutex file for testing hung task");
+97
samples/hung_task/hung_task_tests.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * hung_task_tests.c - Sample code for testing hung tasks with mutex, 4 + * semaphore, etc. 5 + * 6 + * Usage: Load this module and read `<debugfs>/hung_task/mutex`, 7 + * `<debugfs>/hung_task/semaphore`, etc., with 2 or more processes. 8 + * 9 + * This is for testing kernel hung_task error messages with various locking 10 + * mechanisms (e.g., mutex, semaphore, etc.). Note that this may freeze 11 + * your system or cause a panic. Use only for testing purposes. 12 + */ 13 + 14 + #include <linux/debugfs.h> 15 + #include <linux/delay.h> 16 + #include <linux/fs.h> 17 + #include <linux/module.h> 18 + #include <linux/mutex.h> 19 + #include <linux/semaphore.h> 20 + 21 + #define HUNG_TASK_DIR "hung_task" 22 + #define HUNG_TASK_MUTEX_FILE "mutex" 23 + #define HUNG_TASK_SEM_FILE "semaphore" 24 + #define SLEEP_SECOND 256 25 + 26 + static const char dummy_string[] = "This is a dummy string."; 27 + static DEFINE_MUTEX(dummy_mutex); 28 + static DEFINE_SEMAPHORE(dummy_sem, 1); 29 + static struct dentry *hung_task_dir; 30 + 31 + /* Mutex-based read function */ 32 + static ssize_t read_dummy_mutex(struct file *file, char __user *user_buf, 33 + size_t count, loff_t *ppos) 34 + { 35 + /* Second task waits on mutex, entering uninterruptible sleep */ 36 + guard(mutex)(&dummy_mutex); 37 + 38 + /* First task sleeps here, interruptible */ 39 + msleep_interruptible(SLEEP_SECOND * 1000); 40 + 41 + return simple_read_from_buffer(user_buf, count, ppos, dummy_string, 42 + sizeof(dummy_string)); 43 + } 44 + 45 + /* Semaphore-based read function */ 46 + static ssize_t read_dummy_semaphore(struct file *file, char __user *user_buf, 47 + size_t count, loff_t *ppos) 48 + { 49 + /* Second task waits on semaphore, entering uninterruptible sleep */ 50 + down(&dummy_sem); 51 + 52 + /* First task sleeps here, interruptible */ 53 + msleep_interruptible(SLEEP_SECOND * 1000); 54 + 55 + up(&dummy_sem); 56 + 57 + return 
simple_read_from_buffer(user_buf, count, ppos, dummy_string, 58 + sizeof(dummy_string)); 59 + } 60 + 61 + /* File operations for mutex */ 62 + static const struct file_operations hung_task_mutex_fops = { 63 + .read = read_dummy_mutex, 64 + }; 65 + 66 + /* File operations for semaphore */ 67 + static const struct file_operations hung_task_sem_fops = { 68 + .read = read_dummy_semaphore, 69 + }; 70 + 71 + static int __init hung_task_tests_init(void) 72 + { 73 + hung_task_dir = debugfs_create_dir(HUNG_TASK_DIR, NULL); 74 + if (IS_ERR(hung_task_dir)) 75 + return PTR_ERR(hung_task_dir); 76 + 77 + /* Create debugfs files for mutex and semaphore tests */ 78 + debugfs_create_file(HUNG_TASK_MUTEX_FILE, 0400, hung_task_dir, NULL, 79 + &hung_task_mutex_fops); 80 + debugfs_create_file(HUNG_TASK_SEM_FILE, 0400, hung_task_dir, NULL, 81 + &hung_task_sem_fops); 82 + 83 + return 0; 84 + } 85 + 86 + static void __exit hung_task_tests_exit(void) 87 + { 88 + debugfs_remove_recursive(hung_task_dir); 89 + } 90 + 91 + module_init(hung_task_tests_init); 92 + module_exit(hung_task_tests_exit); 93 + 94 + MODULE_LICENSE("GPL"); 95 + MODULE_AUTHOR("Masami Hiramatsu <mhiramat@kernel.org>"); 96 + MODULE_AUTHOR("Zi Li <amaindex@outlook.com>"); 97 + MODULE_DESCRIPTION("Simple sleep under lock files for testing hung task");
+28 -7
scripts/checkpatch.pl
··· 151 151 exit($exitcode); 152 152 } 153 153 154 + my $DO_WHILE_0_ADVICE = q{ 155 + do {} while (0) advice is over-stated in a few situations: 156 + 157 + The more obvious case is macros, like MODULE_PARM_DESC, invoked at 158 + file-scope, where C disallows code (it must be in functions). See 159 + $exceptions if you have one to add by name. 160 + 161 + More troublesome is declarative macros used at top of new scope, 162 + like DECLARE_PER_CPU. These might just compile with a do-while-0 163 + wrapper, but would be incorrect. Most of these are handled by 164 + detecting struct,union,etc declaration primitives in $exceptions. 165 + 166 + Theres also macros called inside an if (block), which "return" an 167 + expression. These cannot do-while, and need a ({}) wrapper. 168 + 169 + Enjoy this qualification while we work to improve our heuristics. 170 + }; 171 + 154 172 sub uniq { 155 173 my %seen; 156 174 return grep { !$seen{$_}++ } @_; ··· 5903 5885 } 5904 5886 } 5905 5887 5906 - # multi-statement macros should be enclosed in a do while loop, grab the 5907 - # first statement and ensure its the whole macro if its not enclosed 5908 - # in a known good container 5888 + # Usually multi-statement macros should be enclosed in a do {} while 5889 + # (0) loop. 
Grab the first statement and ensure its the whole macro 5890 + # if its not enclosed in a known good container 5909 5891 if ($realfile !~ m@/vmlinux.lds.h$@ && 5910 5892 $line =~ /^.\s*\#\s*define\s*$Ident(\()?/) { 5911 5893 my $ln = $linenr; ··· 5958 5940 5959 5941 my $exceptions = qr{ 5960 5942 $Declare| 5943 + # named exceptions 5961 5944 module_param_named| 5962 5945 MODULE_PARM_DESC| 5963 5946 DECLARE_PER_CPU| 5964 5947 DEFINE_PER_CPU| 5948 + static_assert| 5949 + # declaration primitives 5965 5950 __typeof__\(| 5966 5951 union| 5967 5952 struct| ··· 5999 5978 ERROR("MULTISTATEMENT_MACRO_USE_DO_WHILE", 6000 5979 "Macros starting with if should be enclosed by a do - while loop to avoid possible if/else logic defects\n" . "$herectx"); 6001 5980 } elsif ($dstat =~ /;/) { 6002 - ERROR("MULTISTATEMENT_MACRO_USE_DO_WHILE", 6003 - "Macros with multiple statements should be enclosed in a do - while loop\n" . "$herectx"); 5981 + WARN("MULTISTATEMENT_MACRO_USE_DO_WHILE", 5982 + "Non-declarative macros with multiple statements should be enclosed in a do - while loop\n" . "$herectx\nBUT SEE:\n$DO_WHILE_0_ADVICE"); 6004 5983 } else { 6005 5984 ERROR("COMPLEX_MACRO", 6006 - "Macros with complex values should be enclosed in parentheses\n" . "$herectx"); 5985 + "Macros with complex values should be enclosed in parentheses\n" . "$herectx\nBUT SEE:\n$DO_WHILE_0_ADVICE"); 6007 5986 } 6008 5987 6009 5988 } ··· 6047 6026 } 6048 6027 6049 6028 # check if this is an unused argument 6050 - if ($define_stmt !~ /\b$arg\b/) { 6029 + if ($define_stmt !~ /\b$arg\b/ && $define_stmt) { 6051 6030 WARN("MACRO_ARG_UNUSED", 6052 6031 "Argument '$arg' is not used in function-like macro\n" . "$herectx"); 6053 6032 }
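The `$DO_WHILE_0_ADVICE` text added above distinguishes macro shapes the do-while-0 rule should not apply to. A C illustration of the two main cases (names here are illustrative, not from the kernel):

```c
#include <assert.h>

/* Illustrates the checkpatch advice above. A declarative macro used at
 * the top of a scope cannot be wrapped in do {} while (0): the
 * declaration would be confined to the inner scope and vanish. A
 * multi-statement action macro, by contrast, should be wrapped so it
 * expands to exactly one statement after an unbraced if. */

/* Must stay unwrapped: pure declaration. */
#define DECLARE_COUNTER(name)  int name = 0

/* Should be wrapped: two statements acting on existing state. */
#define BUMP_TWICE(c)  do { (c)++; (c)++; } while (0)

static int demo(int flag)
{
    DECLARE_COUNTER(count);   /* do-while wrapping would hide 'count' */

    if (flag)
        BUMP_TWICE(count);    /* expands to a single statement */
    return count;
}
```

The third case the advice mentions, macros that "return" a value inside an `if ()` condition, needs a GNU C `({ })` statement expression instead, which neither form above covers.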
+2 -2
scripts/gdb/linux/cpus.py
··· 141 141 class PerCpu(gdb.Function): 142 142 """Return per-cpu variable. 143 143 144 - $lx_per_cpu("VAR"[, CPU]): Return the per-cpu variable called VAR for the 144 + $lx_per_cpu(VAR[, CPU]): Return the per-cpu variable called VAR for the 145 145 given CPU number. If CPU is omitted, the CPU of the current context is used. 146 146 Note that VAR has to be quoted as string.""" 147 147 ··· 158 158 class PerCpuPtr(gdb.Function): 159 159 """Return per-cpu pointer. 160 160 161 - $lx_per_cpu_ptr("VAR"[, CPU]): Return the per-cpu pointer called VAR for the 161 + $lx_per_cpu_ptr(VAR[, CPU]): Return the per-cpu pointer called VAR for the 162 162 given CPU number. If CPU is omitted, the CPU of the current context is used. 163 163 Note that VAR has to be quoted as string.""" 164 164
+20 -18
scripts/gdb/linux/symbols.py
··· 38 38 # Disable pagination while reporting symbol (re-)loading. 39 39 # The console input is blocked in this context so that we would 40 40 # get stuck waiting for the user to acknowledge paged output. 41 - show_pagination = gdb.execute("show pagination", to_string=True) 42 - pagination = show_pagination.endswith("on.\n") 43 - gdb.execute("set pagination off") 44 - 45 - if module_name in cmd.loaded_modules: 46 - gdb.write("refreshing all symbols to reload module " 47 - "'{0}'\n".format(module_name)) 48 - cmd.load_all_symbols() 49 - else: 50 - cmd.load_module_symbols(module) 51 - 52 - # restore pagination state 53 - gdb.execute("set pagination %s" % ("on" if pagination else "off")) 41 + with utils.pagination_off(): 42 + if module_name in cmd.loaded_modules: 43 + gdb.write("refreshing all symbols to reload module " 44 + "'{0}'\n".format(module_name)) 45 + cmd.load_all_symbols() 46 + else: 47 + cmd.load_module_symbols(module) 54 48 55 49 return False 56 50 ··· 54 60 vmcore_info = 0x0e0c 55 61 paddr_vmcoreinfo_note = gdb.parse_and_eval("*(unsigned long long *)" + 56 62 hex(vmcore_info)) 63 + if paddr_vmcoreinfo_note == 0 or paddr_vmcoreinfo_note & 1: 64 + # In the early boot case, extract vm_layout.kaslr_offset from the 65 + # vmlinux image in physical memory. 
66 + if paddr_vmcoreinfo_note == 0: 67 + kaslr_offset_phys = 0 68 + else: 69 + kaslr_offset_phys = paddr_vmcoreinfo_note - 1 70 + with utils.pagination_off(): 71 + gdb.execute("symbol-file {0} -o {1}".format( 72 + utils.get_vmlinux(), hex(kaslr_offset_phys))) 73 + kaslr_offset = gdb.parse_and_eval("vm_layout.kaslr_offset") 74 + return "KERNELOFFSET=" + hex(kaslr_offset)[2:] 57 75 inferior = gdb.selected_inferior() 58 76 elf_note = inferior.read_memory(paddr_vmcoreinfo_note, 12) 59 77 n_namesz, n_descsz, n_type = struct.unpack(">III", elf_note) ··· 184 178 saved_states.append({'breakpoint': bp, 'enabled': bp.enabled}) 185 179 186 180 # drop all current symbols and reload vmlinux 187 - orig_vmlinux = 'vmlinux' 188 - for obj in gdb.objfiles(): 189 - if (obj.filename.endswith('vmlinux') or 190 - obj.filename.endswith('vmlinux.debug')): 191 - orig_vmlinux = obj.filename 181 + orig_vmlinux = utils.get_vmlinux() 192 182 gdb.execute("symbol-file", to_string=True) 193 183 kerneloffset = get_kerneloffset() 194 184 if kerneloffset is None:
+21 -1
scripts/gdb/linux/utils.py
··· 200 200 201 201 def probe_kgdb(): 202 202 try: 203 - thread_info = gdb.execute("info thread 2", to_string=True) 203 + thread_info = gdb.execute("info thread 1", to_string=True) 204 204 return "shadowCPU" in thread_info 205 205 except gdb.error: 206 206 return False ··· 251 251 else: 252 252 kerneloffset = int(match.group(1), 16) 253 253 return VmCore(kerneloffset=kerneloffset) 254 + 255 + 256 + def get_vmlinux(): 257 + vmlinux = 'vmlinux' 258 + for obj in gdb.objfiles(): 259 + if (obj.filename.endswith('vmlinux') or 260 + obj.filename.endswith('vmlinux.debug')): 261 + vmlinux = obj.filename 262 + return vmlinux 263 + 264 + 265 + @contextlib.contextmanager 266 + def pagination_off(): 267 + show_pagination = gdb.execute("show pagination", to_string=True) 268 + pagination = show_pagination.endswith("on.\n") 269 + gdb.execute("set pagination off") 270 + try: 271 + yield 272 + finally: 273 + gdb.execute("set pagination %s" % ("on" if pagination else "off"))
+2
scripts/spelling.txt
··· 1240 1240 prefferably||preferably 1241 1241 prefitler||prefilter 1242 1242 preform||perform 1243 + previleged||privileged 1244 + previlege||privilege 1243 1245 premption||preemption 1244 1246 prepaired||prepared 1245 1247 prepate||prepare
+1 -1
tools/testing/selftests/filesystems/file_stressor.c
··· 156 156 ssize_t nr_read; 157 157 158 158 /* 159 - * Concurrently read /proc/<pid>/fd/ which rougly does: 159 + * Concurrently read /proc/<pid>/fd/ which roughly does: 160 160 * 161 161 * f = fget_task_next(p, &fd); 162 162 * if (!f)
-5
tools/testing/selftests/kmod/config
··· 1 1 CONFIG_TEST_KMOD=m 2 2 CONFIG_TEST_LKM=m 3 - CONFIG_XFS_FS=m 4 - 5 - # For the module parameter force_init_test is used 6 - CONFIG_TUN=m 7 - CONFIG_BTRFS_FS=m
+1 -1
tools/testing/selftests/mm/gup_longterm.c
··· 158 158 /* 159 159 * R/O pinning or pinning in a private mapping is always 160 160 * expected to work. Otherwise, we expect long-term R/W pinning 161 - * to only succeed for special fielesystems. 161 + * to only succeed for special filesystems. 162 162 */ 163 163 should_work = !shared || !rw || 164 164 fs_supports_writable_longterm_pinning(fs_type);
+1 -1
tools/testing/selftests/thermal/intel/power_floor/power_floor_test.c
··· 56 56 } 57 57 58 58 if (write(fd, "1\n", 2) < 0) { 59 - perror("Can' enable power floor notifications\n"); 59 + perror("Can't enable power floor notifications\n"); 60 60 exit(1); 61 61 } 62 62
+2 -2
tools/testing/selftests/thermal/intel/workload_hint/workload_hint_test.c
··· 37 37 } 38 38 39 39 if (write(fd, "0\n", 2) < 0) { 40 - perror("Can' disable workload hints\n"); 40 + perror("Can't disable workload hints\n"); 41 41 exit(1); 42 42 } 43 43 ··· 99 99 } 100 100 101 101 if (write(fd, "1\n", 2) < 0) { 102 - perror("Can' enable workload hints\n"); 102 + perror("Can't enable workload hints\n"); 103 103 exit(1); 104 104 } 105 105