Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-nonmm-stable-2025-03-30-18-23' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:

- The series "powerpc/crash: use generic crashkernel reservation" from
Sourabh Jain changes powerpc's kexec code to use more of the generic
layers.

- The series "get_maintainer: report subsystem status separately" from
Vlastimil Babka makes some long-requested improvements to the
get_maintainer output.

- The series "ucount: Simplify refcounting with rcuref_t" from
Sebastian Siewior cleans up and optimizing the refcounting in the
ucount code.

- The series "reboot: support runtime configuration of emergency
hw_protection action" from Ahmad Fatoum improves the ability for a
driver to perform an emergency system shutdown or reboot.

- The series "Converge on using secs_to_jiffies() part two" from Easwar
Hariharan performs further migrations from msecs_to_jiffies() to
secs_to_jiffies().

- The series "lib/interval_tree: add some test cases and cleanup" from
Wei Yang permits more userspace testing of kernel library code, adds
some more tests and performs some cleanups.

- The series "hung_task: Dump the blocking task stacktrace" from Masami
Hiramatsu arranges for the hung_task detector to dump the stack of
the blocking task and not just that of the blocked task.

- The series "resource: Split and use DEFINE_RES*() macros" from Andy
Shevchenko provides some cleanups to the resource definition macros.

- Plus the usual shower of singleton patches - please see the
individual changelogs for details.

* tag 'mm-nonmm-stable-2025-03-30-18-23' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (77 commits)
mailmap: consolidate email addresses of Alexander Sverdlin
fs/procfs: fix the comment above proc_pid_wchan()
relay: use kasprintf() instead of fixed buffer formatting
resource: replace open coded variant of DEFINE_RES()
resource: replace open coded variants of DEFINE_RES_*_NAMED()
resource: replace open coded variant of DEFINE_RES_NAMED_DESC()
resource: split DEFINE_RES_NAMED_DESC() out of DEFINE_RES_NAMED()
samples: add hung_task detector mutex blocking sample
hung_task: show the blocker task if the task is hung on mutex
kexec_core: accept unaccepted kexec segments' destination addresses
watchdog/perf: optimize bytes copied and remove manual NUL-termination
lib/interval_tree: fix the comment of interval_tree_span_iter_next_gap()
lib/interval_tree: skip the check before go to the right subtree
lib/interval_tree: add test case for span iteration
lib/interval_tree: add test case for interval_tree_iter_xxx() helpers
lib/rbtree: add random seed
lib/rbtree: split tests
lib/rbtree: enable userland test suite for rbtree related data structure
checkpatch: describe --min-conf-desc-length
scripts/gdb/symbols: determine KASLR offset on s390
...

+1392 -627
+8 -3
.mailmap
···
 Alexander Lobakin <alobakin@pm.me> <bloodyreaper@yandex.ru>
 Alexander Mikhalitsyn <alexander@mihalicyn.com> <alexander.mikhalitsyn@virtuozzo.com>
 Alexander Mikhalitsyn <alexander@mihalicyn.com> <aleksandr.mikhalitsyn@canonical.com>
+Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin.ext@nsn.com>
+Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@gmx.de>
+Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@nokia.com>
+Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@nsn.com>
+Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@siemens.com>
+Alexander Sverdlin <alexander.sverdlin@gmail.com> <alexander.sverdlin@sysgo.com>
+Alexander Sverdlin <alexander.sverdlin@gmail.com> <subaparts@yandex.ru>
 Alexandre Belloni <alexandre.belloni@bootlin.com> <alexandre.belloni@free-electrons.com>
 Alexandre Ghiti <alex@ghiti.fr> <alexandre.ghiti@canonical.com>
 Alexei Avshalom Lazar <quic_ailizaro@quicinc.com> <ailizaro@codeaurora.org>
···
 Carlos Bilbao <carlos.bilbao@kernel.org> <carlos.bilbao.osdev@gmail.com>
 Carlos Bilbao <carlos.bilbao@kernel.org> <bilbao@vt.edu>
 Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com>
-Changbin Du <changbin.du@intel.com> <changbin.du@intel.com>
 Chao Yu <chao@kernel.org> <chao2.yu@samsung.com>
 Chao Yu <chao@kernel.org> <yuchao0@huawei.com>
 Chester Lin <chester62515@gmail.com> <clin@suse.com>
···
 Hanjun Guo <guohanjun@huawei.com> <hanjun.guo@linaro.org>
 Hans Verkuil <hverkuil@xs4all.nl> <hansverk@cisco.com>
 Hans Verkuil <hverkuil@xs4all.nl> <hverkuil-cisco@xs4all.nl>
+Harry Yoo <harry.yoo@oracle.com> <42.hyeyoo@gmail.com>
 Heiko Carstens <hca@linux.ibm.com> <h.carstens@de.ibm.com>
 Heiko Carstens <hca@linux.ibm.com> <heiko.carstens@de.ibm.com>
 Heiko Stuebner <heiko@sntech.de> <heiko.stuebner@bqreaders.com>
···
 Jan Kuliga <jtkuliga.kdev@gmail.com> <jankul@alatek.krakow.pl>
 Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@linux.intel.com>
 Jarkko Sakkinen <jarkko@kernel.org> <jarkko@profian.com>
-Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@parity.io>
 Jason Gunthorpe <jgg@ziepe.ca> <jgg@mellanox.com>
 Jason Gunthorpe <jgg@ziepe.ca> <jgg@nvidia.com>
 Jason Gunthorpe <jgg@ziepe.ca> <jgunthorpe@obsidianresearch.com>
···
 Viresh Kumar <vireshk@kernel.org> <viresh.kumar2@arm.com>
 Viresh Kumar <vireshk@kernel.org> <viresh.kumar@st.com>
 Viresh Kumar <vireshk@kernel.org> <viresh.linux@gmail.com>
-Viresh Kumar <viresh.kumar@linaro.org> <viresh.kumar@linaro.org>
 Viresh Kumar <viresh.kumar@linaro.org> <viresh.kumar@linaro.com>
 Vishnu Dasa <vishnu.dasa@broadcom.com> <vdasa@vmware.com>
 Vivek Aknurwar <quic_viveka@quicinc.com> <viveka@codeaurora.org>
+8
Documentation/ABI/testing/sysfs-kernel-reboot
···
 Contact:	Matteo Croce <mcroce@microsoft.com>
 Description:	Don't wait for any other CPUs on reboot and
 		avoid anything that could hang.
+
+What:		/sys/kernel/reboot/hw_protection
+Date:		April 2025
+KernelVersion:	6.15
+Contact:	Ahmad Fatoum <a.fatoum@pengutronix.de>
+Description:	Hardware protection action taken on critical events like
+		overtemperature or imminent voltage loss.
+		Valid values are: reboot shutdown
+6
Documentation/admin-guide/kernel-parameters.txt
···
 			which allow the hypervisor to 'idle' the guest
 			on lock contention.
 
+	hw_protection=	[HW]
+			Format: reboot | shutdown
+
+			Hardware protection action taken on critical events like
+			overtemperature or imminent voltage loss.
+
 	i2c_bus=	[HW]	Override the default board specific I2C bus speed
 			or register an additional I2C bus that is not
 			registered from board initialization code.
+2 -3
Documentation/devicetree/bindings/thermal/thermal-zones.yaml
···
       $ref: /schemas/types.yaml#/definitions/string
       description: |
         The action the OS should perform after the critical temperature is reached.
-        By default the system will shutdown as a safe action to prevent damage
-        to the hardware, if the property is not set.
-        The shutdown action should be always the default and preferred one.
+        If the property is not set, it is up to the system to select the correct
+        action. The recommended and preferred default is shutdown.
         Choose 'reboot' with care, as the hardware may be in thermal stress,
         thus leading to infinite reboots that may cause damage to the hardware.
         Make sure the firmware/bootloader will act as the last resort and take
+14 -11
Documentation/driver-api/thermal/sysfs-api.rst
···
    device. It sets the cooling device to the deepest cooling state if
    possible.
 
-5. thermal_emergency_poweroff
-=============================
+5. Critical Events
+==================
 
-On an event of critical trip temperature crossing the thermal framework
-shuts down the system by calling hw_protection_shutdown(). The
-hw_protection_shutdown() first attempts to perform an orderly shutdown
-but accepts a delay after which it proceeds doing a forced power-off
-or as last resort an emergency_restart.
+On an event of critical trip temperature crossing, the thermal framework
+will trigger a hardware protection power-off (shutdown) or reboot,
+depending on configuration.
+
+At first, the kernel will attempt an orderly power-off or reboot, but
+accepts a delay after which it proceeds to do a forced power-off or
+reboot, respectively. If this fails, ``emergency_restart()`` is invoked
+as last resort.
 
 The delay should be carefully profiled so as to give adequate time for
-orderly poweroff.
+orderly power-off or reboot.
 
-If the delay is set to 0 emergency poweroff will not be supported. So a
-carefully profiled non-zero positive value is a must for emergency
-poweroff to be triggered.
+If the delay is set to 0, the emergency action will not be supported. So a
+carefully profiled non-zero positive value is a must for the emergency
+action to be triggered.
+10
Documentation/filesystems/proc.rst
···
 The link 'self' points to the process reading the file system. Each process
 subdirectory has the entries listed in Table 1-1.
 
+A process can read its own information from /proc/PID/* with no extra
+permissions. When reading /proc/PID/* information for other processes, reading
+process is required to have either CAP_SYS_PTRACE capability with
+PTRACE_MODE_READ access permissions, or, alternatively, CAP_PERFMON
+capability. This applies to all read-only information like `maps`, `environ`,
+`pagemap`, etc. The only exception is `mem` file due to its read-write nature,
+which requires CAP_SYS_PTRACE capabilities with more elevated
+PTRACE_MODE_ATTACH permissions; CAP_PERFMON capability does not grant access
+to /proc/PID/mem for other processes.
+
 Note that an open file descriptor to /proc/<pid> or to any of its
 contained files or subdirectories does not prevent <pid> being reused
 for some other process in the event that <pid> exits. Operations on
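For illustration, a minimal userspace sketch (not part of this merge) of the access rules documented above: opening another process's /proc/PID/maps fails with EACCES/EPERM unless the reader holds CAP_SYS_PTRACE (PTRACE_MODE_READ) or, with this change, CAP_PERFMON. The read_maps() helper name is made up.

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

int read_maps(pid_t pid)
{
        char path[64], line[256];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/maps", (int)pid);
        f = fopen(path, "r");
        if (!f) {
                /* EACCES/EPERM here means neither capability was held. */
                fprintf(stderr, "open %s: %s\n", path, strerror(errno));
                return -1;
        }
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
        return 0;
}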
+2 -1
MAINTAINERS
···
 
 PER-TASK DELAY ACCOUNTING
 M:	Balbir Singh <bsingharora@gmail.com>
+M:	Yang Yang <yang.yang29@zte.com.cn>
 S:	Maintained
 F:	include/linux/delayacct.h
 F:	kernel/delayacct.c
···
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Vlastimil Babka <vbabka@suse.cz>
 R:	Roman Gushchin <roman.gushchin@linux.dev>
-R:	Hyeonggon Yoo <42.hyeyoo@gmail.com>
+R:	Harry Yoo <harry.yoo@oracle.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
+2 -4
arch/arm64/mm/init.c
···
 {
 	unsigned long long low_size = 0;
 	unsigned long long crash_base, crash_size;
-	char *cmdline = boot_command_line;
 	bool high = false;
 	int ret;
 
 	if (!IS_ENABLED(CONFIG_CRASH_RESERVE))
 		return;
 
-	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
+	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
 				&low_size, &high);
 	if (ret)
 		return;
 
-	reserve_crashkernel_generic(cmdline, crash_size, crash_base,
-				    low_size, high);
+	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
 }
 
 static phys_addr_t __init max_zone_phys(phys_addr_t zone_limit)
+2 -3
arch/loongarch/kernel/setup.c
···
 	int ret;
 	unsigned long long low_size = 0;
 	unsigned long long crash_base, crash_size;
-	char *cmdline = boot_command_line;
 	bool high = false;
 
 	if (!IS_ENABLED(CONFIG_CRASH_RESERVE))
 		return;
 
-	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
+	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base, &low_size, &high);
 	if (ret)
 		return;
 
-	reserve_crashkernel_generic(cmdline, crash_size, crash_base, low_size, high);
+	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
 }
 
 static void __init fdt_setup(void)
+3
arch/powerpc/Kconfig
···
 	def_bool y
 	depends on PPC64
 
+config ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION
+	def_bool CRASH_RESERVE
+
 config FA_DUMP
 	bool "Firmware-assisted dump"
 	depends on CRASH_DUMP && PPC64 && (PPC_RTAS || PPC_POWERNV)
+8
arch/powerpc/include/asm/crash_reserve.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_CRASH_RESERVE_H
+#define _ASM_POWERPC_CRASH_RESERVE_H
+
+/* crash kernel regions are Page size agliged */
+#define CRASH_ALIGN PAGE_SIZE
+
+#endif /* _ASM_POWERPC_CRASH_RESERVE_H */
+6 -4
arch/powerpc/include/asm/kexec.h
···
 int arch_kimage_file_post_load_cleanup(struct kimage *image);
 #define arch_kimage_file_post_load_cleanup arch_kimage_file_post_load_cleanup
 
-int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);
-#define arch_kexec_locate_mem_hole arch_kexec_locate_mem_hole
+int arch_check_excluded_range(struct kimage *image, unsigned long start,
+			      unsigned long end);
+#define arch_check_excluded_range arch_check_excluded_range
+
 
 int load_crashdump_segments_ppc64(struct kimage *image,
 				  struct kexec_buf *kbuf);
···
 
 #ifdef CONFIG_CRASH_RESERVE
 int __init overlaps_crashkernel(unsigned long start, unsigned long size);
-extern void reserve_crashkernel(void);
+extern void arch_reserve_crashkernel(void);
 #else
-static inline void reserve_crashkernel(void) {}
+static inline void arch_reserve_crashkernel(void) {}
 static inline int overlaps_crashkernel(unsigned long start, unsigned long size) { return 0; }
 #endif
 
+1 -1
arch/powerpc/kernel/prom.c
···
 	 */
 	if (fadump_reserve_mem() == 0)
 #endif
-		reserve_crashkernel();
+		arch_reserve_crashkernel();
 	early_reserve_mem();
 
 	if (memory_limit > memblock_phys_mem_size())
+40 -60
arch/powerpc/kexec/core.c
···
 }
 
 #ifdef CONFIG_CRASH_RESERVE
-void __init reserve_crashkernel(void)
+
+static unsigned long long __init get_crash_base(unsigned long long crash_base)
 {
-	unsigned long long crash_size, crash_base, total_mem_sz;
-	int ret;
-
-	total_mem_sz = memory_limit ? memory_limit : memblock_phys_mem_size();
-	/* use common parsing */
-	ret = parse_crashkernel(boot_command_line, total_mem_sz,
-			&crash_size, &crash_base, NULL, NULL);
-	if (ret == 0 && crash_size > 0) {
-		crashk_res.start = crash_base;
-		crashk_res.end = crash_base + crash_size - 1;
-	}
-
-	if (crashk_res.end == crashk_res.start) {
-		crashk_res.start = crashk_res.end = 0;
-		return;
-	}
-
-	/* We might have got these values via the command line or the
-	 * device tree, either way sanitise them now. */
-
-	crash_size = resource_size(&crashk_res);
 
 #ifndef CONFIG_NONSTATIC_KERNEL
-	if (crashk_res.start != KDUMP_KERNELBASE)
+	if (crash_base != KDUMP_KERNELBASE)
 		printk("Crash kernel location must be 0x%x\n",
 				KDUMP_KERNELBASE);
 
-	crashk_res.start = KDUMP_KERNELBASE;
+	return KDUMP_KERNELBASE;
 #else
-	if (!crashk_res.start) {
+	unsigned long long crash_base_align;
+
+	if (!crash_base) {
 #ifdef CONFIG_PPC64
 		/*
 		 * On the LPAR platform place the crash kernel to mid of
···
 		 * kernel starts at 128MB offset on other platforms.
 		 */
 		if (firmware_has_feature(FW_FEATURE_LPAR))
-			crashk_res.start = min_t(u64, ppc64_rma_size / 2, SZ_512M);
+			crash_base = min_t(u64, ppc64_rma_size / 2, SZ_512M);
 		else
-			crashk_res.start = min_t(u64, ppc64_rma_size / 2, SZ_128M);
+			crash_base = min_t(u64, ppc64_rma_size / 2, SZ_128M);
 #else
-		crashk_res.start = KDUMP_KERNELBASE;
+		crash_base = KDUMP_KERNELBASE;
 #endif
 	}
 
-	crash_base = PAGE_ALIGN(crashk_res.start);
-	if (crash_base != crashk_res.start) {
-		printk("Crash kernel base must be aligned to 0x%lx\n",
-				PAGE_SIZE);
-		crashk_res.start = crash_base;
-	}
+	crash_base_align = PAGE_ALIGN(crash_base);
+	if (crash_base != crash_base_align)
+		pr_warn("Crash kernel base must be aligned to 0x%lx\n", PAGE_SIZE);
 
+	return crash_base_align;
 #endif
-	crash_size = PAGE_ALIGN(crash_size);
-	crashk_res.end = crashk_res.start + crash_size - 1;
+}
+
+void __init arch_reserve_crashkernel(void)
+{
+	unsigned long long crash_size, crash_base, crash_end;
+	unsigned long long kernel_start, kernel_size;
+	unsigned long long total_mem_sz;
+	int ret;
+
+	total_mem_sz = memory_limit ? memory_limit : memblock_phys_mem_size();
+
+	/* use common parsing */
+	ret = parse_crashkernel(boot_command_line, total_mem_sz, &crash_size,
+				&crash_base, NULL, NULL);
+
+	if (ret)
+		return;
+
+	crash_base = get_crash_base(crash_base);
+	crash_end = crash_base + crash_size - 1;
+
+	kernel_start = __pa(_stext);
+	kernel_size = _end - _stext;
 
 	/* The crash region must not overlap the current kernel */
-	if (overlaps_crashkernel(__pa(_stext), _end - _stext)) {
-		printk(KERN_WARNING
-			"Crash kernel can not overlap current kernel\n");
-		crashk_res.start = crashk_res.end = 0;
+	if ((kernel_start + kernel_size > crash_base) && (kernel_start <= crash_end)) {
+		pr_warn("Crash kernel can not overlap current kernel\n");
 		return;
 	}
 
-	/* Crash kernel trumps memory limit */
-	if (memory_limit && memory_limit <= crashk_res.end) {
-		memory_limit = crashk_res.end + 1;
-		total_mem_sz = memory_limit;
-		printk("Adjusted memory limit for crashkernel, now 0x%llx\n",
-		       memory_limit);
-	}
-
-	printk(KERN_INFO "Reserving %ldMB of memory at %ldMB "
-			"for crashkernel (System RAM: %ldMB)\n",
-			(unsigned long)(crash_size >> 20),
-			(unsigned long)(crashk_res.start >> 20),
-			(unsigned long)(total_mem_sz >> 20));
-
-	if (!memblock_is_region_memory(crashk_res.start, crash_size) ||
-	    memblock_reserve(crashk_res.start, crash_size)) {
-		pr_err("Failed to reserve memory for crashkernel!\n");
-		crashk_res.start = crashk_res.end = 0;
-		return;
-	}
+	reserve_crashkernel_generic(crash_size, crash_base, 0, false);
 }
 
 int __init overlaps_crashkernel(unsigned long start, unsigned long size)
+9 -250
arch/powerpc/kexec/file_load_64.c
···
 	NULL
 };
 
-/**
- * __locate_mem_hole_top_down - Looks top down for a large enough memory hole
- *                              in the memory regions between buf_min & buf_max
- *                              for the buffer. If found, sets kbuf->mem.
- * @kbuf:                       Buffer contents and memory parameters.
- * @buf_min:                    Minimum address for the buffer.
- * @buf_max:                    Maximum address for the buffer.
- *
- * Returns 0 on success, negative errno on error.
- */
-static int __locate_mem_hole_top_down(struct kexec_buf *kbuf,
-				      u64 buf_min, u64 buf_max)
+int arch_check_excluded_range(struct kimage *image, unsigned long start,
+			      unsigned long end)
 {
-	int ret = -EADDRNOTAVAIL;
-	phys_addr_t start, end;
-	u64 i;
+	struct crash_mem *emem;
+	int i;
 
-	for_each_mem_range_rev(i, &start, &end) {
-		/*
-		 * memblock uses [start, end) convention while it is
-		 * [start, end] here. Fix the off-by-one to have the
-		 * same convention.
-		 */
-		end -= 1;
+	emem = image->arch.exclude_ranges;
+	for (i = 0; i < emem->nr_ranges; i++)
+		if (start < emem->ranges[i].end && end > emem->ranges[i].start)
+			return 1;
 
-		if (start > buf_max)
-			continue;
-
-		/* Memory hole not found */
-		if (end < buf_min)
-			break;
-
-		/* Adjust memory region based on the given range */
-		if (start < buf_min)
-			start = buf_min;
-		if (end > buf_max)
-			end = buf_max;
-
-		start = ALIGN(start, kbuf->buf_align);
-		if (start < end && (end - start + 1) >= kbuf->memsz) {
-			/* Suitable memory range found. Set kbuf->mem */
-			kbuf->mem = ALIGN_DOWN(end - kbuf->memsz + 1,
-					       kbuf->buf_align);
-			ret = 0;
-			break;
-		}
-	}
-
-	return ret;
-}
-
-/**
- * locate_mem_hole_top_down_ppc64 - Skip special memory regions to find a
- *                                  suitable buffer with top down approach.
- * @kbuf:                           Buffer contents and memory parameters.
- * @buf_min:                        Minimum address for the buffer.
- * @buf_max:                        Maximum address for the buffer.
- * @emem:                           Exclude memory ranges.
- *
- * Returns 0 on success, negative errno on error.
- */
-static int locate_mem_hole_top_down_ppc64(struct kexec_buf *kbuf,
-					  u64 buf_min, u64 buf_max,
-					  const struct crash_mem *emem)
-{
-	int i, ret = 0, err = -EADDRNOTAVAIL;
-	u64 start, end, tmin, tmax;
-
-	tmax = buf_max;
-	for (i = (emem->nr_ranges - 1); i >= 0; i--) {
-		start = emem->ranges[i].start;
-		end = emem->ranges[i].end;
-
-		if (start > tmax)
-			continue;
-
-		if (end < tmax) {
-			tmin = (end < buf_min ? buf_min : end + 1);
-			ret = __locate_mem_hole_top_down(kbuf, tmin, tmax);
-			if (!ret)
-				return 0;
-		}
-
-		tmax = start - 1;
-
-		if (tmax < buf_min) {
-			ret = err;
-			break;
-		}
-		ret = 0;
-	}
-
-	if (!ret) {
-		tmin = buf_min;
-		ret = __locate_mem_hole_top_down(kbuf, tmin, tmax);
-	}
-	return ret;
-}
-
-/**
- * __locate_mem_hole_bottom_up - Looks bottom up for a large enough memory hole
- *                               in the memory regions between buf_min & buf_max
- *                               for the buffer. If found, sets kbuf->mem.
- * @kbuf:                        Buffer contents and memory parameters.
- * @buf_min:                     Minimum address for the buffer.
- * @buf_max:                     Maximum address for the buffer.
- *
- * Returns 0 on success, negative errno on error.
- */
-static int __locate_mem_hole_bottom_up(struct kexec_buf *kbuf,
-				       u64 buf_min, u64 buf_max)
-{
-	int ret = -EADDRNOTAVAIL;
-	phys_addr_t start, end;
-	u64 i;
-
-	for_each_mem_range(i, &start, &end) {
-		/*
-		 * memblock uses [start, end) convention while it is
-		 * [start, end] here. Fix the off-by-one to have the
-		 * same convention.
-		 */
-		end -= 1;
-
-		if (end < buf_min)
-			continue;
-
-		/* Memory hole not found */
-		if (start > buf_max)
-			break;
-
-		/* Adjust memory region based on the given range */
-		if (start < buf_min)
-			start = buf_min;
-		if (end > buf_max)
-			end = buf_max;
-
-		start = ALIGN(start, kbuf->buf_align);
-		if (start < end && (end - start + 1) >= kbuf->memsz) {
-			/* Suitable memory range found. Set kbuf->mem */
-			kbuf->mem = start;
-			ret = 0;
-			break;
-		}
-	}
-
-	return ret;
-}
-
-/**
- * locate_mem_hole_bottom_up_ppc64 - Skip special memory regions to find a
- *                                   suitable buffer with bottom up approach.
- * @kbuf:                            Buffer contents and memory parameters.
- * @buf_min:                         Minimum address for the buffer.
- * @buf_max:                         Maximum address for the buffer.
- * @emem:                            Exclude memory ranges.
- *
- * Returns 0 on success, negative errno on error.
- */
-static int locate_mem_hole_bottom_up_ppc64(struct kexec_buf *kbuf,
-					   u64 buf_min, u64 buf_max,
-					   const struct crash_mem *emem)
-{
-	int i, ret = 0, err = -EADDRNOTAVAIL;
-	u64 start, end, tmin, tmax;
-
-	tmin = buf_min;
-	for (i = 0; i < emem->nr_ranges; i++) {
-		start = emem->ranges[i].start;
-		end = emem->ranges[i].end;
-
-		if (end < tmin)
-			continue;
-
-		if (start > tmin) {
-			tmax = (start > buf_max ? buf_max : start - 1);
-			ret = __locate_mem_hole_bottom_up(kbuf, tmin, tmax);
-			if (!ret)
-				return 0;
-		}
-
-		tmin = end + 1;
-
-		if (tmin > buf_max) {
-			ret = err;
-			break;
-		}
-		ret = 0;
-	}
-
-	if (!ret) {
-		tmax = buf_max;
-		ret = __locate_mem_hole_bottom_up(kbuf, tmin, tmax);
-	}
-	return ret;
+	return 0;
 }
 
 #ifdef CONFIG_CRASH_DUMP
···
 
 out:
 	kfree(umem);
-	return ret;
-}
-
-/**
- * arch_kexec_locate_mem_hole - Skip special memory regions like rtas, opal,
- *                              tce-table, reserved-ranges & such (exclude
- *                              memory ranges) as they can't be used for kexec
- *                              segment buffer. Sets kbuf->mem when a suitable
- *                              memory hole is found.
- * @kbuf:                       Buffer contents and memory parameters.
- *
- * Assumes minimum of PAGE_SIZE alignment for kbuf->memsz & kbuf->buf_align.
- *
- * Returns 0 on success, negative errno on error.
- */
-int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
-{
-	struct crash_mem **emem;
-	u64 buf_min, buf_max;
-	int ret;
-
-	/* Look up the exclude ranges list while locating the memory hole */
-	emem = &(kbuf->image->arch.exclude_ranges);
-	if (!(*emem) || ((*emem)->nr_ranges == 0)) {
-		pr_warn("No exclude range list. Using the default locate mem hole method\n");
-		return kexec_locate_mem_hole(kbuf);
-	}
-
-	buf_min = kbuf->buf_min;
-	buf_max = kbuf->buf_max;
-	/* Segments for kdump kernel should be within crashkernel region */
-	if (IS_ENABLED(CONFIG_CRASH_DUMP) && kbuf->image->type == KEXEC_TYPE_CRASH) {
-		buf_min = (buf_min < crashk_res.start ?
-			   crashk_res.start : buf_min);
-		buf_max = (buf_max > crashk_res.end ?
-			   crashk_res.end : buf_max);
-	}
-
-	if (buf_min > buf_max) {
-		pr_err("Invalid buffer min and/or max values\n");
-		return -EINVAL;
-	}
-
-	if (kbuf->top_down)
-		ret = locate_mem_hole_top_down_ppc64(kbuf, buf_min, buf_max,
-						     *emem);
-	else
-		ret = locate_mem_hole_bottom_up_ppc64(kbuf, buf_min, buf_max,
-						      *emem);
-
-	/* Add the buffer allocated to the exclude list for the next lookup */
-	if (!ret) {
-		add_mem_range(emem, kbuf->mem, kbuf->memsz);
-		sort_memory_ranges(*emem, true);
-	} else {
-		pr_err("Failed to locate memory buffer of size %lu\n",
-		       kbuf->memsz);
-	}
 	return ret;
 }
 
+1 -1
arch/powerpc/mm/mem.c
···
 	 */
 	res->end = end - 1;
 	res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
-	WARN_ON(request_resource(&iomem_resource, res) < 0);
+	WARN_ON(insert_resource(&iomem_resource, res) < 0);
 	}
 }
 
+2 -4
arch/riscv/mm/init.c
···
 {
 	unsigned long long low_size = 0;
 	unsigned long long crash_base, crash_size;
-	char *cmdline = boot_command_line;
 	bool high = false;
 	int ret;
 
 	if (!IS_ENABLED(CONFIG_CRASH_RESERVE))
 		return;
 
-	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
+	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
 				&low_size, &high);
 	if (ret)
 		return;
 
-	reserve_crashkernel_generic(cmdline, crash_size, crash_base,
-				    low_size, high);
+	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
 }
 
 void __init paging_init(void)
+2 -4
arch/x86/kernel/setup.c
···
 static void __init arch_reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size, low_size = 0;
-	char *cmdline = boot_command_line;
 	bool high = false;
 	int ret;
 
 	if (!IS_ENABLED(CONFIG_CRASH_RESERVE))
 		return;
 
-	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
+	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
 				&low_size, &high);
 	if (ret)
···
 		return;
 	}
 
-	reserve_crashkernel_generic(cmdline, crash_size, crash_base,
-				    low_size, high);
+	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
 }
 
 static struct resource standard_io_resources[] = {
+1 -1
drivers/accel/habanalabs/common/command_submission.c
···
 		cs_seq = args->in.seq;
 
 	timeout = flags & HL_CS_FLAGS_CUSTOM_TIMEOUT
-			? msecs_to_jiffies(args->in.timeout * 1000)
+			? secs_to_jiffies(args->in.timeout)
 			: hpriv->hdev->timeout_jiffies;
 
 	switch (cs_type) {
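This hunk is representative of the whole secs_to_jiffies() series: wherever a timeout is already expressed in seconds, msecs_to_jiffies(x * 1000) becomes secs_to_jiffies(x). A minimal sketch (the arm_watchdog() helper is hypothetical, not from this merge) showing the two equivalent forms:

#include <linux/jiffies.h>
#include <linux/timer.h>

static void arm_watchdog(struct timer_list *t, unsigned int timeout_sec)
{
        /* Old style: scale seconds to milliseconds first. */
        mod_timer(t, jiffies + msecs_to_jiffies(timeout_sec * 1000));

        /* New style: convert seconds directly, no intermediate multiply. */
        mod_timer(t, jiffies + secs_to_jiffies(timeout_sec));
}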
+1 -1
drivers/accel/habanalabs/common/debugfs.c
···
 		return rc;
 
 	if (value)
-		hdev->timeout_jiffies = msecs_to_jiffies(value * 1000);
+		hdev->timeout_jiffies = secs_to_jiffies(value);
 	else
 		hdev->timeout_jiffies = MAX_SCHEDULE_TIMEOUT;
 
+1 -1
drivers/accel/habanalabs/common/device.c
···
 	dev_dbg(hdev->dev, "Device is going to be hard-reset in %u sec unless being released\n",
 		hdev->device_release_watchdog_timeout_sec);
 	schedule_delayed_work(&hdev->device_release_watchdog_work.reset_work,
-				msecs_to_jiffies(hdev->device_release_watchdog_timeout_sec * 1000));
+				secs_to_jiffies(hdev->device_release_watchdog_timeout_sec));
 	hdev->reset_info.watchdog_active = 1;
 out:
 	spin_unlock(&hdev->reset_info.lock);
+1 -1
drivers/accel/habanalabs/common/habanalabs_drv.c
···
 	hdev->fw_comms_poll_interval_usec = HL_FW_STATUS_POLL_INTERVAL_USEC;
 
 	if (tmp_timeout)
-		hdev->timeout_jiffies = msecs_to_jiffies(tmp_timeout * MSEC_PER_SEC);
+		hdev->timeout_jiffies = secs_to_jiffies(tmp_timeout);
 	else
 		hdev->timeout_jiffies = MAX_SCHEDULE_TIMEOUT;
 
+1 -2
drivers/ata/libata-zpodd.c
···
 		return;
 	}
 
-	expires = zpodd->last_ready +
-		  msecs_to_jiffies(zpodd_poweroff_delay * 1000);
+	expires = zpodd->last_ready + secs_to_jiffies(zpodd_poweroff_delay);
 	if (time_before(jiffies, expires))
 		return;
 
+1 -1
drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
···
 	wait_event_timeout(cmdq->waitq,
 			   !crsqe->is_in_used ||
 			   test_bit(ERR_DEVICE_DETACHED, &cmdq->flags),
-			   msecs_to_jiffies(rcfw->max_timeout * 1000));
+			   secs_to_jiffies(rcfw->max_timeout));
 
 	if (!crsqe->is_in_used)
 		return 0;
+2 -4
drivers/nvme/host/core.c
···
 	nvme_auth_stop(ctrl);
 
 	if (ctrl->mtfa)
-		fw_act_timeout = jiffies +
-				msecs_to_jiffies(ctrl->mtfa * 100);
+		fw_act_timeout = jiffies + msecs_to_jiffies(ctrl->mtfa * 100);
 	else
-		fw_act_timeout = jiffies +
-				msecs_to_jiffies(admin_timeout * 1000);
+		fw_act_timeout = jiffies + secs_to_jiffies(admin_timeout);
 
 	nvme_quiesce_io_queues(ctrl);
 	while (nvme_ctrl_pp_status(ctrl)) {
+1 -1
drivers/platform/chrome/cros_ec_lpc.c
···
 	blocking_notifier_call_chain(&ec_dev->panic_notifier, 0, ec_dev);
 	kobject_uevent_env(&ec_dev->dev->kobj, KOBJ_CHANGE, (char **)env);
 	/* Begin orderly shutdown. EC will force reset after a short period. */
-	hw_protection_shutdown("CrOS EC Panic", -1);
+	__hw_protection_trigger("CrOS EC Panic", -1, HWPROT_ACT_SHUTDOWN);
 	/* Do not query for other events after a panic is reported */
 	return;
 }
+1 -2
drivers/power/supply/da9030_battery.c
···
 
 	/* 10 seconds between monitor runs unless platform defines other
 	   interval */
-	charger->interval = msecs_to_jiffies(
-		(pdata->batmon_interval ? : 10) * 1000);
+	charger->interval = secs_to_jiffies(pdata->batmon_interval ? : 10);
 
 	charger->charge_milliamp = pdata->charge_milliamp;
 	charger->charge_millivolt = pdata->charge_millivolt;
+2 -2
drivers/regulator/core.c
···
 	if (!reason)
 		return;
 
-	hw_protection_shutdown(reason,
-			       rdev->constraints->uv_less_critical_window_ms);
+	hw_protection_trigger(reason,
+			      rdev->constraints->uv_less_critical_window_ms);
 }
 
 /**
+8 -8
drivers/regulator/irq_helpers.c
···
 reread:
 	if (d->fatal_cnt && h->retry_cnt > d->fatal_cnt) {
 		if (!d->die)
-			return hw_protection_shutdown("Regulator HW failure? - no IC recovery",
-						      REGULATOR_FORCED_SAFETY_SHUTDOWN_WAIT_MS);
+			return hw_protection_trigger("Regulator HW failure? - no IC recovery",
+						     REGULATOR_FORCED_SAFETY_SHUTDOWN_WAIT_MS);
 		ret = d->die(rid);
 		/*
 		 * If the 'last resort' IC recovery failed we will have
 		 * nothing else left to do...
 		 */
 		if (ret)
-			return hw_protection_shutdown("Regulator HW failure. IC recovery failed",
-						      REGULATOR_FORCED_SAFETY_SHUTDOWN_WAIT_MS);
+			return hw_protection_trigger("Regulator HW failure. IC recovery failed",
+						     REGULATOR_FORCED_SAFETY_SHUTDOWN_WAIT_MS);
 
 		/*
 		 * If h->die() was implemented we assume recovery has been
···
 	if (d->fatal_cnt && h->retry_cnt > d->fatal_cnt) {
 		/* If we have no recovery, just try shut down straight away */
 		if (!d->die) {
-			hw_protection_shutdown("Regulator failure. Retry count exceeded",
-					       REGULATOR_FORCED_SAFETY_SHUTDOWN_WAIT_MS);
+			hw_protection_trigger("Regulator failure. Retry count exceeded",
+					      REGULATOR_FORCED_SAFETY_SHUTDOWN_WAIT_MS);
 		} else {
 			ret = d->die(rid);
 			/* If die() failed shut down as a last attempt to save the HW */
 			if (ret)
-				hw_protection_shutdown("Regulator failure. Recovery failed",
-						       REGULATOR_FORCED_SAFETY_SHUTDOWN_WAIT_MS);
+				hw_protection_trigger("Regulator failure. Recovery failed",
+						      REGULATOR_FORCED_SAFETY_SHUTDOWN_WAIT_MS);
 		}
 	}
 
+10 -7
drivers/thermal/thermal_core.c
···
 		tz->governor->update_tz(tz, reason);
 }
 
-static void thermal_zone_device_halt(struct thermal_zone_device *tz, bool shutdown)
+static void thermal_zone_device_halt(struct thermal_zone_device *tz,
+				     enum hw_protection_action action)
 {
 	/*
 	 * poweroff_delay_ms must be a carefully profiled positive value.
···
 
 	dev_emerg(&tz->device, "%s: critical temperature reached\n", tz->type);
 
-	if (shutdown)
-		hw_protection_shutdown(msg, poweroff_delay_ms);
-	else
-		hw_protection_reboot(msg, poweroff_delay_ms);
+	__hw_protection_trigger(msg, poweroff_delay_ms, action);
 }
 
 void thermal_zone_device_critical(struct thermal_zone_device *tz)
 {
-	thermal_zone_device_halt(tz, true);
+	thermal_zone_device_halt(tz, HWPROT_ACT_DEFAULT);
 }
 EXPORT_SYMBOL(thermal_zone_device_critical);
 
+void thermal_zone_device_critical_shutdown(struct thermal_zone_device *tz)
+{
+	thermal_zone_device_halt(tz, HWPROT_ACT_SHUTDOWN);
+}
+
 void thermal_zone_device_critical_reboot(struct thermal_zone_device *tz)
 {
-	thermal_zone_device_halt(tz, false);
+	thermal_zone_device_halt(tz, HWPROT_ACT_REBOOT);
 }
 
 static void handle_critical_trips(struct thermal_zone_device *tz,
+1
drivers/thermal/thermal_core.h
···
 void __thermal_zone_device_update(struct thermal_zone_device *tz,
 				  enum thermal_notify_event event);
 void thermal_zone_device_critical_reboot(struct thermal_zone_device *tz);
+void thermal_zone_device_critical_shutdown(struct thermal_zone_device *tz);
 void thermal_governor_update_tz(struct thermal_zone_device *tz,
 				enum thermal_notify_event reason);
 
+5 -2
drivers/thermal/thermal_of.c
···
 	of_ops.should_bind = thermal_of_should_bind;
 
 	ret = of_property_read_string(np, "critical-action", &action);
-	if (!ret)
-		if (!of_ops.critical && !strcasecmp(action, "reboot"))
+	if (!ret && !of_ops.critical) {
+		if (!strcasecmp(action, "reboot"))
 			of_ops.critical = thermal_zone_device_critical_reboot;
+		else if (!strcasecmp(action, "shutdown"))
+			of_ops.critical = thermal_zone_device_critical_shutdown;
+	}
 
 	tz = thermal_zone_device_register_with_trips(np->name, trips, ntrips,
 						     data, &of_ops, &tzp,
+3 -3
fs/btrfs/disk-io.c
···
 
 	do {
 		cannot_commit = false;
-		delay = msecs_to_jiffies(fs_info->commit_interval * 1000);
+		delay = secs_to_jiffies(fs_info->commit_interval);
 		mutex_lock(&fs_info->transaction_kthread_mutex);
 
 		spin_lock(&fs_info->trans_lock);
···
 		    cur->state < TRANS_STATE_COMMIT_PREP &&
 		    delta < fs_info->commit_interval) {
 			spin_unlock(&fs_info->trans_lock);
-			delay -= msecs_to_jiffies((delta - 1) * 1000);
+			delay -= secs_to_jiffies(delta - 1);
 			delay = min(delay,
-				    msecs_to_jiffies(fs_info->commit_interval * 1000));
+				    secs_to_jiffies(fs_info->commit_interval));
 			goto sleep;
 		}
 		transid = cur->transid;
+8
fs/ocfs2/alloc.c
···
 
 	el = root_el;
 	while (el->l_tree_depth) {
+		if (unlikely(le16_to_cpu(el->l_tree_depth) >= OCFS2_MAX_PATH_DEPTH)) {
+			ocfs2_error(ocfs2_metadata_cache_get_super(ci),
+				    "Owner %llu has invalid tree depth %u in extent list\n",
+				    (unsigned long long)ocfs2_metadata_cache_owner(ci),
+				    le16_to_cpu(el->l_tree_depth));
+			ret = -EROFS;
+			goto out;
+		}
 		if (le16_to_cpu(el->l_next_free_rec) == 0) {
 			ocfs2_error(ocfs2_metadata_cache_get_super(ci),
 				    "Owner %llu has empty extent list at depth %u\n",
+5 -12
fs/ocfs2/aops.c
···
 	struct buffer_head *bh = NULL;
 	struct buffer_head *buffer_cache_bh = NULL;
 	struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
-	void *kaddr;
 
 	trace_ocfs2_symlink_get_block(
 			(unsigned long long)OCFS2_I(inode)->ip_blkno,
···
 		 * could've happened. Since we've got a reference on
 		 * the bh, even if it commits while we're doing the
 		 * copy, the data is still good. */
-		if (buffer_jbd(buffer_cache_bh)
-		    && ocfs2_inode_is_new(inode)) {
-			kaddr = kmap_atomic(bh_result->b_page);
-			if (!kaddr) {
-				mlog(ML_ERROR, "couldn't kmap!\n");
-				goto bail;
-			}
-			memcpy(kaddr + (bh_result->b_size * iblock),
-			       buffer_cache_bh->b_data,
-			       bh_result->b_size);
-			kunmap_atomic(kaddr);
+		if (buffer_jbd(buffer_cache_bh) && ocfs2_inode_is_new(inode)) {
+			memcpy_to_folio(bh_result->b_folio,
+					bh_result->b_size * iblock,
+					buffer_cache_bh->b_data,
+					bh_result->b_size);
 			set_buffer_uptodate(bh_result);
 		}
 		brelse(buffer_cache_bh);
+1 -1
fs/ocfs2/quota_global.c
···
 	if (new)
 		memset(bh->b_data, 0, sb->s_blocksize);
 	memcpy(bh->b_data + offset, data, len);
-	flush_dcache_page(bh->b_page);
+	flush_dcache_folio(bh->b_folio);
 	set_buffer_uptodate(bh);
 	unlock_buffer(bh);
 	ocfs2_set_buffer_uptodate(INODE_CACHE(gqinode), bh);
+1 -1
fs/proc/base.c
···
 #ifdef CONFIG_KALLSYMS
 /*
  * Provides a wchan file via kallsyms in a proper one-value-per-file format.
- * Returns the resolved symbol. If that fails, simply return the address.
+ * Returns the resolved symbol to user space.
  */
 static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
 			  struct pid *pid, struct task_struct *task)
+1 -1
fs/xfs/xfs_icache.c
···
 	rcu_read_lock();
 	if (radix_tree_tagged(&pag->pag_ici_root, XFS_ICI_BLOCKGC_TAG))
 		queue_delayed_work(mp->m_blockgc_wq, &pag->pag_blockgc_work,
-				   msecs_to_jiffies(xfs_blockgc_secs * 1000));
+				   secs_to_jiffies(xfs_blockgc_secs));
 	rcu_read_unlock();
 }
 
+4 -4
fs/xfs/xfs_sysfs.c
···
 	if (val == -1)
 		cfg->retry_timeout = XFS_ERR_RETRY_FOREVER;
 	else {
-		cfg->retry_timeout = msecs_to_jiffies(val * MSEC_PER_SEC);
-		ASSERT(msecs_to_jiffies(val * MSEC_PER_SEC) < LONG_MAX);
+		cfg->retry_timeout = secs_to_jiffies(val);
+		ASSERT(secs_to_jiffies(val) < LONG_MAX);
 	}
 	return count;
 }
···
 		if (init[i].retry_timeout == XFS_ERR_RETRY_FOREVER)
 			cfg->retry_timeout = XFS_ERR_RETRY_FOREVER;
 		else
-			cfg->retry_timeout = msecs_to_jiffies(
-					init[i].retry_timeout * MSEC_PER_SEC);
+			cfg->retry_timeout =
+				secs_to_jiffies(init[i].retry_timeout);
 	}
 	return 0;
 
+1 -1
include/linux/cpu.h
···
 }
 static inline void suspend_enable_secondary_cpus(void)
 {
-	return thaw_secondary_cpus();
+	thaw_secondary_cpus();
 }
 
 #else /* !CONFIG_PM_SLEEP_SMP */
+5 -6
include/linux/crash_reserve.h
···
 #define CRASH_ADDR_HIGH_MAX	memblock_end_of_DRAM()
 #endif
 
-void __init reserve_crashkernel_generic(char *cmdline,
-					unsigned long long crash_size,
-					unsigned long long crash_base,
-					unsigned long long crash_low_size,
-					bool high);
+void __init reserve_crashkernel_generic(unsigned long long crash_size,
+					unsigned long long crash_base,
+					unsigned long long crash_low_size,
+					bool high);
 #else
-static inline void __init reserve_crashkernel_generic(char *cmdline,
+static inline void __init reserve_crashkernel_generic(
 		unsigned long long crash_size,
 		unsigned long long crash_base,
 		unsigned long long crash_low_size,
+2 -6
include/linux/interval_tree_generic.h
···
 		if (ITSTART(node) <= last) {		/* Cond1 */	      \
 			if (start <= ITLAST(node))	/* Cond2 */	      \
 				return node;	/* node is leftmost match */  \
-			if (node->ITRB.rb_right) {			      \
-				node = rb_entry(node->ITRB.rb_right,	      \
-						ITSTRUCT, ITRB);	      \
-				if (start <= node->ITSUBTREE)		      \
-					continue;			      \
-			}						      \
+			node = rb_entry(node->ITRB.rb_right, ITSTRUCT, ITRB); \
+			continue;					      \
 		}							      \
 		return NULL;	/* No match */				      \
 	}								      \
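For context, a short sketch of how the generic interval tree built from this template is typically consumed; the count_overlaps() helper is invented, but interval_tree_iter_first()/interval_tree_iter_next() are the stock API from <linux/interval_tree.h>:

#include <linux/interval_tree.h>
#include <linux/printk.h>

/* Walk all nodes overlapping the closed range [start, last]. */
static void count_overlaps(struct rb_root_cached *root,
                           unsigned long start, unsigned long last)
{
        struct interval_tree_node *node;
        unsigned int n = 0;

        for (node = interval_tree_iter_first(root, start, last); node;
             node = interval_tree_iter_next(node, start, last))
                n++;

        pr_info("%u intervals overlap [%lu, %lu]\n", n, start, last);
}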
+7 -2
include/linux/ioport.h
···
 };
 
 /* helpers to define resources */
-#define DEFINE_RES_NAMED(_start, _size, _name, _flags)			\
+#define DEFINE_RES_NAMED_DESC(_start, _size, _name, _flags, _desc)	\
 (struct resource) {							\
 		.start = (_start),					\
 		.end = (_start) + (_size) - 1,				\
 		.name = (_name),					\
 		.flags = (_flags),					\
-		.desc = IORES_DESC_NONE,				\
+		.desc = (_desc),					\
 	}
+
+#define DEFINE_RES_NAMED(_start, _size, _name, _flags)			\
+	DEFINE_RES_NAMED_DESC(_start, _size, _name, _flags, IORES_DESC_NONE)
+#define DEFINE_RES(_start, _size, _flags)				\
+	DEFINE_RES_NAMED(_start, _size, NULL, _flags)
 
 #define DEFINE_RES_IO_NAMED(_start, _size, _name)			\
 	DEFINE_RES_NAMED((_start), (_size), (_name), IORESOURCE_IO)
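A sketch of the resulting macro family in use; the device, addresses, and names below are invented for illustration:

#include <linux/ioport.h>

static struct resource demo_resources[] = {
        /* Anonymous resource via the new DEFINE_RES() shorthand. */
        DEFINE_RES(0xfed40000, 0x1000, IORESOURCE_MEM),
        /* Named resource; the descriptor defaults to IORES_DESC_NONE. */
        DEFINE_RES_MEM_NAMED(0xfed50000, 0x1000, "demo-regs"),
        /* Full control, including the io-resource descriptor. */
        DEFINE_RES_NAMED_DESC(0xfed60000, 0x1000, "demo-crash",
                              IORESOURCE_MEM, IORES_DESC_CRASH_KERNEL),
};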
+9
include/linux/kexec.h
···
 }
 #endif
 
+#ifndef arch_check_excluded_range
+static inline int arch_check_excluded_range(struct kimage *image,
+					    unsigned long start,
+					    unsigned long end)
+{
+	return 0;
+}
+#endif
+
 #ifdef CONFIG_KEXEC_SIG
 #ifdef CONFIG_SIGNED_PE_FILE_VERIFICATION
 int kexec_kernel_verify_pe_sig(const char *kernel, unsigned long kernel_len);
+1
include/linux/list_nulls.h
···
 #define NULLS_MARKER(value) (1UL | (((long)value) << 1))
 #define INIT_HLIST_NULLS_HEAD(ptr, nulls) \
 	((ptr)->first = (struct hlist_nulls_node *) NULLS_MARKER(nulls))
+#define HLIST_NULLS_HEAD_INIT(nulls) {.first = (struct hlist_nulls_node *)NULLS_MARKER(nulls)}
 
 #define hlist_nulls_entry(ptr, type, member) container_of(ptr,type,member)
 
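The new macro allows a nulls list head to be defined statically instead of being set up at runtime with INIT_HLIST_NULLS_HEAD(). A trivial sketch (the head name and the nulls marker value 0 are arbitrary choices for illustration):

#include <linux/list_nulls.h>

/* Statically initialized empty nulls list; the end pointer encodes 0. */
static struct hlist_nulls_head demo_head = HLIST_NULLS_HEAD_INIT(0);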
+6 -6
include/linux/min_heap.h
···
 
 /* Initialize a min-heap. */
 static __always_inline
-void __min_heap_init_inline(min_heap_char *heap, void *data, int size)
+void __min_heap_init_inline(min_heap_char *heap, void *data, size_t size)
 {
 	heap->nr = 0;
 	heap->size = size;
···
 
 /* Sift the element at pos down the heap. */
 static __always_inline
-void __min_heap_sift_down_inline(min_heap_char *heap, int pos, size_t elem_size,
+void __min_heap_sift_down_inline(min_heap_char *heap, size_t pos, size_t elem_size,
 				 const struct min_heap_callbacks *func, void *args)
 {
 	const unsigned long lsbit = elem_size & -elem_size;
···
 void __min_heapify_all_inline(min_heap_char *heap, size_t elem_size,
 			      const struct min_heap_callbacks *func, void *args)
 {
-	int i;
+	ssize_t i;
 
 	for (i = heap->nr / 2 - 1; i >= 0; i--)
 		__min_heap_sift_down_inline(heap, i, elem_size, func, args);
···
 			   const struct min_heap_callbacks *func, void *args)
 {
 	void *data = heap->data;
-	int pos;
+	size_t pos;
 
 	if (WARN_ONCE(heap->nr >= heap->size, "Pushing on a full heap"))
 		return false;
···
 	__min_heap_del_inline(container_of(&(_heap)->nr, min_heap_char, nr),	\
 			      __minheap_obj_size(_heap), _idx, _func, _args)
 
-void __min_heap_init(min_heap_char *heap, void *data, int size);
+void __min_heap_init(min_heap_char *heap, void *data, size_t size);
 void *__min_heap_peek(struct min_heap_char *heap);
 bool __min_heap_full(min_heap_char *heap);
-void __min_heap_sift_down(min_heap_char *heap, int pos, size_t elem_size,
+void __min_heap_sift_down(min_heap_char *heap, size_t pos, size_t elem_size,
 			  const struct min_heap_callbacks *func, void *args);
 void __min_heap_sift_up(min_heap_char *heap, size_t elem_size, size_t idx,
 			const struct min_heap_callbacks *func, void *args);
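For reference, a sketch of the min-heap API whose index types are widened above. The demo() helper and heap name are invented, and this assumes a NULL ->swp callback falls back to the built-in byte swap, as in recent versions of <linux/min_heap.h>:

#include <linux/kernel.h>
#include <linux/min_heap.h>

DEFINE_MIN_HEAP(int, min_heap_int);

static bool demo_less(const void *lhs, const void *rhs, void *args)
{
        return *(const int *)lhs < *(const int *)rhs;
}

static const struct min_heap_callbacks demo_cb = {
        .less = demo_less,
        .swp  = NULL,   /* assumed: NULL selects the default swap */
};

static void demo(void)
{
        int backing[8];
        struct min_heap_int heap;
        int v = 42;

        min_heap_init(&heap, backing, ARRAY_SIZE(backing));
        min_heap_push(&heap, &v, &demo_cb, NULL);
        /* *min_heap_peek(&heap) == 42 at this point */
}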
+2
include/linux/mutex.h
···
 DEFINE_GUARD_COND(mutex, _try, mutex_trylock(_T))
 DEFINE_GUARD_COND(mutex, _intr, mutex_lock_interruptible(_T) == 0)
 
+extern unsigned long mutex_get_owner(struct mutex *lock);
+
 #endif /* __LINUX_MUTEX_H */
+29 -7
include/linux/reboot.h
···
 
 extern void orderly_poweroff(bool force);
 extern void orderly_reboot(void);
-void __hw_protection_shutdown(const char *reason, int ms_until_forced, bool shutdown);
 
-static inline void hw_protection_reboot(const char *reason, int ms_until_forced)
-{
-	__hw_protection_shutdown(reason, ms_until_forced, false);
-}
+/**
+ * enum hw_protection_action - Hardware protection action
+ *
+ * @HWPROT_ACT_DEFAULT:
+ *	The default action should be taken. This is HWPROT_ACT_SHUTDOWN
+ *	by default, but can be overridden.
+ * @HWPROT_ACT_SHUTDOWN:
+ *	The system should be shut down (powered off) for HW protection.
+ * @HWPROT_ACT_REBOOT:
+ *	The system should be rebooted for HW protection.
+ */
+enum hw_protection_action { HWPROT_ACT_DEFAULT, HWPROT_ACT_SHUTDOWN, HWPROT_ACT_REBOOT };
 
-static inline void hw_protection_shutdown(const char *reason, int ms_until_forced)
+void __hw_protection_trigger(const char *reason, int ms_until_forced,
+			     enum hw_protection_action action);
+
+/**
+ * hw_protection_trigger - Trigger default emergency system hardware protection action
+ *
+ * @reason:		Reason of emergency shutdown or reboot to be printed.
+ * @ms_until_forced:	Time to wait for orderly shutdown or reboot before
+ *			triggering it. Negative value disables the forced
+ *			shutdown or reboot.
+ *
+ * Initiate an emergency system shutdown or reboot in order to protect
+ * hardware from further damage. The exact action taken is controllable at
+ * runtime and defaults to shutdown.
+ */
+static inline void hw_protection_trigger(const char *reason, int ms_until_forced)
 {
-	__hw_protection_shutdown(reason, ms_until_forced, true);
+	__hw_protection_trigger(reason, ms_until_forced, HWPROT_ACT_DEFAULT);
 }
 
 /*
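A sketch of how a driver would use the new interface; the device and handler name are invented. hw_protection_trigger() takes the runtime-configured default action, while __hw_protection_trigger() pins a specific one:

#include <linux/reboot.h>

static void demo_overtemp_event(void)
{
        /* Give userspace up to 5 s for an orderly shutdown/reboot. */
        hw_protection_trigger("demo: core overtemperature", 5000);

        /* Or insist on a reboot regardless of the configured default: */
        __hw_protection_trigger("demo: core overtemperature", 5000,
                                HWPROT_ACT_REBOOT);
}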
+3 -3
include/linux/rhashtable.h
···
 static inline void rhltable_walk_enter(struct rhltable *hlt,
 				       struct rhashtable_iter *iter)
 {
-	return rhashtable_walk_enter(&hlt->ht, iter);
+	rhashtable_walk_enter(&hlt->ht, iter);
 }
 
 /**
···
 					    void *arg),
 			   void *arg)
 {
-	return rhashtable_free_and_destroy(&hlt->ht, free_fn, arg);
+	rhashtable_free_and_destroy(&hlt->ht, free_fn, arg);
 }
 
 static inline void rhltable_destroy(struct rhltable *hlt)
 {
-	return rhltable_free_and_destroy(hlt, NULL, NULL);
+	rhltable_free_and_destroy(hlt, NULL, NULL);
 }
 
 #endif /* _LINUX_RHASHTABLE_H */
+4
include/linux/sched.h
···
 	struct mutex_waiter		*blocked_on;
 #endif
 
+#ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
+	struct mutex			*blocker_mutex;
+#endif
+
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 	int				non_block_count;
 #endif
+1
include/linux/types.h
···
 typedef unsigned short		ushort;
 typedef unsigned int		uint;
 typedef unsigned long		ulong;
+typedef unsigned long long	ullong;
 
 #ifndef __BIT_TYPES_DEFINED__
 #define __BIT_TYPES_DEFINED__
+12 -3
include/linux/user_namespace.h
···
 #include <linux/kref.h>
 #include <linux/nsproxy.h>
 #include <linux/ns_common.h>
+#include <linux/rculist_nulls.h>
 #include <linux/sched.h>
 #include <linux/workqueue.h>
+#include <linux/rcuref.h>
 #include <linux/rwsem.h>
 #include <linux/sysctl.h>
 #include <linux/err.h>
···
 } __randomize_layout;
 
 struct ucounts {
-	struct hlist_node node;
+	struct hlist_nulls_node node;
 	struct user_namespace *ns;
 	kuid_t uid;
-	atomic_t count;
+	struct rcu_head rcu;
+	rcuref_t count;
 	atomic_long_t ucount[UCOUNT_COUNTS];
 	atomic_long_t rlimit[UCOUNT_RLIMIT_COUNTS];
 };
···
 struct ucounts *inc_ucount(struct user_namespace *ns, kuid_t uid, enum ucount_type type);
 void dec_ucount(struct ucounts *ucounts, enum ucount_type type);
 struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid);
-struct ucounts * __must_check get_ucounts(struct ucounts *ucounts);
 void put_ucounts(struct ucounts *ucounts);
+
+static inline struct ucounts * __must_check get_ucounts(struct ucounts *ucounts)
+{
+	if (rcuref_get(&ucounts->count))
+		return ucounts;
+	return NULL;
+}
 
 static inline long get_rlimit_value(struct ucounts *ucounts, enum rlimit_type type)
 {
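The same get/put pattern as the new get_ucounts()/put_ucounts(), shown on an invented demo_obj type for illustration: rcuref_get() fails once the count has dropped to zero, so an RCU-protected lookup cannot revive a dying object. This sketch assumes the object was set up with rcuref_init(&obj->count, 1):

#include <linux/rcuref.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_obj {
        rcuref_t count;
        struct rcu_head rcu;
};

static struct demo_obj *demo_get(struct demo_obj *obj)
{
        /* Fails (returns false) once the count has reached zero. */
        return rcuref_get(&obj->count) ? obj : NULL;
}

static void demo_put(struct demo_obj *obj)
{
        /* rcuref_put() returns true when the last reference is dropped. */
        if (rcuref_put(&obj->count))
                kfree_rcu(obj, rcu);
}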
+1
include/uapi/linux/capability.h
···
 /* Allow setting encryption key on loopback filesystem */
 /* Allow setting zone reclaim policy */
 /* Allow everything under CAP_BPF and CAP_PERFMON for backward compatibility */
+/* Allow setting hardware protection emergency action */
 
 #define CAP_SYS_ADMIN        21
 
+4 -5
kernel/crash_reserve.c
···
 	return 0;
 }
 
-void __init reserve_crashkernel_generic(char *cmdline,
-					unsigned long long crash_size,
-					unsigned long long crash_base,
-					unsigned long long crash_low_size,
-					bool high)
+void __init reserve_crashkernel_generic(unsigned long long crash_size,
+					unsigned long long crash_base,
+					unsigned long long crash_low_size,
+					bool high)
 {
 	unsigned long long search_end = CRASH_ADDR_LOW_MAX, search_base = 0;
 	bool fixed_base = false;
+12 -1
kernel/fork.c
···
 }
 EXPORT_SYMBOL_GPL(get_task_mm);
 
+static bool may_access_mm(struct mm_struct *mm, struct task_struct *task, unsigned int mode)
+{
+	if (mm == current->mm)
+		return true;
+	if (ptrace_may_access(task, mode))
+		return true;
+	if ((mode & PTRACE_MODE_READ) && perfmon_capable())
+		return true;
+	return false;
+}
+
 struct mm_struct *mm_access(struct task_struct *task, unsigned int mode)
 {
 	struct mm_struct *mm;
···
 	mm = get_task_mm(task);
 	if (!mm) {
 		mm = ERR_PTR(-ESRCH);
-	} else if (mm != current->mm && !ptrace_may_access(task, mode)) {
+	} else if (!may_access_mm(mm, task, mode)) {
 		mmput(mm);
 		mm = ERR_PTR(-EACCES);
 	}
+38
kernel/hung_task.c
···
 	.notifier_call = hung_task_panic,
 };
 
+
+#ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
+static void debug_show_blocker(struct task_struct *task)
+{
+	struct task_struct *g, *t;
+	unsigned long owner;
+	struct mutex *lock;
+
+	RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "No rcu lock held");
+
+	lock = READ_ONCE(task->blocker_mutex);
+	if (!lock)
+		return;
+
+	owner = mutex_get_owner(lock);
+	if (unlikely(!owner)) {
+		pr_err("INFO: task %s:%d is blocked on a mutex, but the owner is not found.\n",
+			task->comm, task->pid);
+		return;
+	}
+
+	/* Ensure the owner information is correct. */
+	for_each_process_thread(g, t) {
+		if ((unsigned long)t == owner) {
+			pr_err("INFO: task %s:%d is blocked on a mutex likely owned by task %s:%d.\n",
+				task->comm, task->pid, t->comm, t->pid);
+			sched_show_task(t);
+			return;
+		}
+	}
+}
+#else
+static inline void debug_show_blocker(struct task_struct *task)
+{
+}
+#endif
+
 static void check_hung_task(struct task_struct *t, unsigned long timeout)
 {
 	unsigned long switch_count = t->nvcsw + t->nivcsw;
···
 	pr_err("\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\""
 		" disables this message.\n");
 	sched_show_task(t);
+	debug_show_blocker(t);
 	hung_task_show_lock = true;
 
 	if (sysctl_hung_task_all_cpu_backtrace)
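In the spirit of the new sample under samples/ (a sketch, not the sample's actual code): one kthread holds a mutex well past hung_task_timeout_secs while a second blocks on it, so the detector dumps the waiter and, with CONFIG_DETECT_HUNG_TASK_BLOCKER, the likely owner's stack as well. All names here are invented.

#include <linux/delay.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);

static int demo_holder_fn(void *unused)
{
        mutex_lock(&demo_lock);
        ssleep(300);                    /* hold well past the hung-task timeout */
        mutex_unlock(&demo_lock);
        return 0;
}

static int demo_waiter_fn(void *unused)
{
        mutex_lock(&demo_lock);         /* blocks in D state; gets reported */
        mutex_unlock(&demo_lock);
        return 0;
}

static int __init demo_init(void)
{
        kthread_run(demo_holder_fn, NULL, "demo_holder");
        kthread_run(demo_waiter_fn, NULL, "demo_waiter");
        return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");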
+10
kernel/kexec_core.c
···
 	}
 #endif
 
+	/*
+	 * The destination addresses are searched from system RAM rather than
+	 * being allocated from the buddy allocator, so they are not guaranteed
+	 * to be accepted by the current kernel. Accept the destination
+	 * addresses before kexec swaps their content with the segments' source
+	 * pages to avoid accessing memory before it is accepted.
+	 */
+	for (i = 0; i < nr_segments; i++)
+		accept_memory(image->segment[i].mem, image->segment[i].memsz);
+
 	return 0;
 }
 
+1 -1
kernel/kexec_elf.c
···
 			       struct kexec_buf *kbuf,
 			       unsigned long *lowest_load_addr)
 {
-	unsigned long lowest_addr = UINT_MAX;
+	unsigned long lowest_addr = ULONG_MAX;
 	int ret;
 	size_t i;
 
+12
kernel/kexec_file.c
···
 			continue;
 		}
 
+		/* Make sure this does not conflict with exclude range */
+		if (arch_check_excluded_range(image, temp_start, temp_end)) {
+			temp_start = temp_start - PAGE_SIZE;
+			continue;
+		}
+
 		/* We found a suitable memory range */
 		break;
 	} while (1);
···
 		 * segments
 		 */
 		if (kimage_is_destination_range(image, temp_start, temp_end)) {
+			temp_start = temp_start + PAGE_SIZE;
+			continue;
+		}
+
+		/* Make sure this does not conflict with exclude range */
+		if (arch_check_excluded_range(image, temp_start, temp_end)) {
 			temp_start = temp_start + PAGE_SIZE;
 			continue;
 		}
+14
kernel/locking/mutex.c
···
 	return owner & MUTEX_FLAGS;
 }
 
+/* Do not use the return value as a pointer directly. */
+unsigned long mutex_get_owner(struct mutex *lock)
+{
+	unsigned long owner = atomic_long_read(&lock->owner);
+
+	return (unsigned long)__owner_task(owner);
+}
+
 /*
  * Returns: __mutex_owner(lock) on failure or NULL on success.
  */
···
 __mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
 		   struct list_head *list)
 {
+#ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
+	WRITE_ONCE(current->blocker_mutex, lock);
+#endif
 	debug_mutex_add_waiter(lock, waiter, current);
 
 	list_add_tail(&waiter->list, list);
···
 	__mutex_clear_flag(lock, MUTEX_FLAGS);
 
 	debug_mutex_remove_waiter(lock, waiter, current);
+#ifdef CONFIG_DETECT_HUNG_TASK_BLOCKER
+	WRITE_ONCE(current->blocker_mutex, NULL);
+#endif
 }
 
 /*
+110 -32
kernel/reboot.c
··· 36 36 EXPORT_SYMBOL_GPL(reboot_mode);
37 37 enum reboot_mode panic_reboot_mode = REBOOT_UNDEFINED;
38 38
39 + static enum hw_protection_action hw_protection_action = HWPROT_ACT_SHUTDOWN;
40 +
39 41 /*
40 42 * This variable is used privately to keep track of whether or not
41 43 * reboot_type is still set to its default value (i.e., reboot= hasn't
··· 230 228
231 229 /**
232 230 * do_kernel_restart - Execute kernel restart handler call chain
231 +
232 + * @cmd: pointer to buffer containing command to execute for restart
233 + * or %NULL
233 234 *
234 235 * Calls functions registered with register_restart_handler.
235 236 *
··· 938 933 }
939 934 EXPORT_SYMBOL_GPL(orderly_reboot);
940 935
936 + static const char *hw_protection_action_str(enum hw_protection_action action)
937 + {
938 + switch (action) {
939 + case HWPROT_ACT_SHUTDOWN:
940 + return "shutdown";
941 + case HWPROT_ACT_REBOOT:
942 + return "reboot";
943 + default:
944 + return "undefined";
945 + }
946 + }
947 +
948 + static enum hw_protection_action hw_failure_emergency_action;
949 +
941 950 /**
942 - * hw_failure_emergency_poweroff_func - emergency poweroff work after a known delay
943 - * @work: work_struct associated with the emergency poweroff function
951 + * hw_failure_emergency_action_func - emergency action work after a known delay
952 + * @work: work_struct associated with the emergency action function
944 953 *
945 954 * This function is called in very critical situations to force
946 - * a kernel poweroff after a configurable timeout value.
955 + * a kernel poweroff or reboot after a configurable timeout value.
947 956 */
948 - static void hw_failure_emergency_poweroff_func(struct work_struct *work)
957 + static void hw_failure_emergency_action_func(struct work_struct *work)
949 958 {
959 + const char *action_str = hw_protection_action_str(hw_failure_emergency_action);
960 +
961 + pr_emerg("Hardware protection timed-out. Trying forced %s\n",
962 + action_str);
963 +
950 964 /*
951 - * We have reached here after the emergency shutdown waiting period has
952 - * expired. This means orderly_poweroff has not been able to shut off
953 - * the system for some reason.
965 + * We have reached here after the emergency action waiting period has
966 + * expired. This means orderly_poweroff/reboot has not been able to
967 + * shut off the system for some reason.
954 968 *
955 - * Try to shut down the system immediately using kernel_power_off
956 - * if populated
969 + * Try to shut off the system immediately if possible
957 970 */
958 - pr_emerg("Hardware protection timed-out. Trying forced poweroff\n");
959 - kernel_power_off();
971 +
972 + if (hw_failure_emergency_action == HWPROT_ACT_REBOOT)
973 + kernel_restart(NULL);
974 + else
975 + kernel_power_off();
960 976
961 977 /*
962 978 * Worst of the worst case trigger emergency restart
963 979 */
964 - pr_emerg("Hardware protection shutdown failed. Trying emergency restart\n");
980 + pr_emerg("Hardware protection %s failed. Trying emergency restart\n",
981 + action_str);
965 982 emergency_restart();
966 983 }
967 984
968 - static DECLARE_DELAYED_WORK(hw_failure_emergency_poweroff_work,
969 - hw_failure_emergency_poweroff_func);
985 + static DECLARE_DELAYED_WORK(hw_failure_emergency_action_work,
986 + hw_failure_emergency_action_func);
970 987
971 988 /**
972 - * hw_failure_emergency_poweroff - Trigger an emergency system poweroff
989 + * hw_failure_emergency_schedule - Schedule an emergency system shutdown or reboot
990 +
991 + * @action: The hardware protection action to be taken
992 + * @action_delay_ms: Time in milliseconds to elapse before triggering action
973 993 *
974 994 * This may be called from any critical situation to trigger a system shutdown
975 - * after a given period of time. If time is negative this is not scheduled.
995 + * or reboot after a given period of time.
996 + * If time is negative this is not scheduled.
976 997 */
977 - static void hw_failure_emergency_poweroff(int poweroff_delay_ms)
998 + static void hw_failure_emergency_schedule(enum hw_protection_action action,
999 + int action_delay_ms)
978 1000 {
979 - if (poweroff_delay_ms <= 0)
1001 + if (action_delay_ms <= 0)
980 1002 return;
981 - schedule_delayed_work(&hw_failure_emergency_poweroff_work,
982 - msecs_to_jiffies(poweroff_delay_ms));
1003 + hw_failure_emergency_action = action;
1004 + schedule_delayed_work(&hw_failure_emergency_action_work,
1005 + msecs_to_jiffies(action_delay_ms));
983 1006 }
984 1007
985 1008 /**
986 - * __hw_protection_shutdown - Trigger an emergency system shutdown or reboot
1009 + * __hw_protection_trigger - Trigger an emergency system shutdown or reboot
987 1010 *
988 1011 * @reason: Reason of emergency shutdown or reboot to be printed.
989 1012 * @ms_until_forced: Time to wait for orderly shutdown or reboot before
990 1013 * triggering it. Negative value disables the forced
991 1014 * shutdown or reboot.
992 - * @shutdown: If true, indicates that a shutdown will happen
993 - * after the critical tempeature is reached.
994 - * If false, indicates that a reboot will happen
995 - * after the critical tempeature is reached.
1015 + * @action: The hardware protection action to be taken.
996 1016 *
997 1017 * Initiate an emergency system shutdown or reboot in order to protect
998 1018 * hardware from further damage. Usage examples include a thermal protection.
··· 1025 995 * pending even if the previous request has given a large timeout for forced
1026 996 * shutdown/reboot.
1027 997 */
1028 - void __hw_protection_shutdown(const char *reason, int ms_until_forced, bool shutdown)
998 + void __hw_protection_trigger(const char *reason, int ms_until_forced,
999 + enum hw_protection_action action)
1029 1000 {
1030 1001 static atomic_t allow_proceed = ATOMIC_INIT(1);
1031 1002
1032 - pr_emerg("HARDWARE PROTECTION shutdown (%s)\n", reason);
1003 + if (action == HWPROT_ACT_DEFAULT)
1004 + action = hw_protection_action;
1005 +
1006 + pr_emerg("HARDWARE PROTECTION %s (%s)\n",
1007 + hw_protection_action_str(action), reason);
1033 1008
1034 1009 /* Shutdown should be initiated only once. */
1035 1010 if (!atomic_dec_and_test(&allow_proceed))
··· 1044 1009 * Queue a backup emergency shutdown in the event of
1045 1010 * orderly_poweroff failure
1046 1011 */
1047 - hw_failure_emergency_poweroff(ms_until_forced);
1048 - if (shutdown)
1049 - orderly_poweroff(true);
1050 - else
1012 + hw_failure_emergency_schedule(action, ms_until_forced);
1013 + if (action == HWPROT_ACT_REBOOT)
1051 1014 orderly_reboot();
1015 + else
1016 + orderly_poweroff(true);
1052 1017 }
1053 - EXPORT_SYMBOL_GPL(__hw_protection_shutdown);
1018 + EXPORT_SYMBOL_GPL(__hw_protection_trigger);
1019 +
1020 + static bool hw_protection_action_parse(const char *str,
1021 + enum hw_protection_action *action)
1022 + {
1023 + if (sysfs_streq(str, "shutdown"))
1024 + *action = HWPROT_ACT_SHUTDOWN;
1025 + else if (sysfs_streq(str, "reboot"))
1026 + *action = HWPROT_ACT_REBOOT;
1027 + else
1028 + return false;
1029 +
1030 + return true;
1031 + }
1032 +
1033 + static int __init hw_protection_setup(char *str)
1034 + {
1035 + hw_protection_action_parse(str, &hw_protection_action);
1036 + return 1;
1037 + }
1038 + __setup("hw_protection=", hw_protection_setup);
1039 +
1040 + #ifdef CONFIG_SYSFS
1041 + static ssize_t hw_protection_show(struct kobject *kobj,
1042 + struct kobj_attribute *attr, char *buf)
1043 + {
1044 + return sysfs_emit(buf, "%s\n",
1045 + hw_protection_action_str(hw_protection_action));
1046 + }
1047 + static ssize_t hw_protection_store(struct kobject *kobj,
1048 + struct kobj_attribute *attr, const char *buf,
1049 + size_t count)
1050 + {
1051 + if (!capable(CAP_SYS_ADMIN))
1052 + return -EPERM;
1053 +
1054 + if (!hw_protection_action_parse(buf, &hw_protection_action))
1055 + return -EINVAL;
1056 +
1057 + return count;
1058 + }
1059 + static struct kobj_attribute hw_protection_attr = __ATTR_RW(hw_protection);
1060 + #endif
1054 1061
1055 1062 static int __init reboot_setup(char *str)
1056 1063 {
··· 1353 1276 #endif
1354 1277
1355 1278 static struct attribute *reboot_attrs[] = {
1279 + &hw_protection_attr.attr,
1356 1280 &reboot_mode_attr.attr,
1357 1281 #ifdef CONFIG_X86
1358 1282 &reboot_force_attr.attr,
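For driver authors, a minimal caller sketch using only the symbols visible in this diff (the callback name is made up; real callers would normally go through the wrappers in <linux/reboot.h>). The default action can be switched to "reboot" at boot time via the hw_protection= parameter or at runtime through the new sysfs attribute:

    /* Hypothetical thermal callback escalating to the configured action. */
    static void my_overtemp_handler(void)
    {
            /* Force the action after 3000 ms if the orderly path stalls. */
            __hw_protection_trigger("critical temperature reached", 3000,
                                    HWPROT_ACT_DEFAULT);
    }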
+1 -2
kernel/relay.c
··· 351 351 struct dentry *dentry; 352 352 char *tmpname; 353 353 354 - tmpname = kzalloc(NAME_MAX + 1, GFP_KERNEL); 354 + tmpname = kasprintf(GFP_KERNEL, "%s%d", chan->base_filename, cpu); 355 355 if (!tmpname) 356 356 return NULL; 357 - snprintf(tmpname, NAME_MAX, "%s%d", chan->base_filename, cpu); 358 357 359 358 /* Create file in fs */ 360 359 dentry = chan->cb->create_buf_file(tmpname, chan->parent,
+4 -14
kernel/resource.c
··· 561 561 struct resource res, o; 562 562 bool covered; 563 563 564 - res.start = start; 565 - res.end = start + size - 1; 564 + res = DEFINE_RES(start, size, 0); 566 565 567 566 for (p = parent->child; p ; p = p->sibling) { 568 567 if (!resource_intersection(p, &res, &o)) ··· 1713 1714 * I/O port space; otherwise assume it's memory. 1714 1715 */ 1715 1716 if (io_start < 0x10000) { 1716 - res->flags = IORESOURCE_IO; 1717 + *res = DEFINE_RES_IO_NAMED(io_start, io_num, "reserved"); 1717 1718 parent = &ioport_resource; 1718 1719 } else { 1719 - res->flags = IORESOURCE_MEM; 1720 + *res = DEFINE_RES_MEM_NAMED(io_start, io_num, "reserved"); 1720 1721 parent = &iomem_resource; 1721 1722 } 1722 - res->name = "reserved"; 1723 - res->start = io_start; 1724 - res->end = io_start + io_num - 1; 1725 1723 res->flags |= IORESOURCE_BUSY; 1726 - res->desc = IORES_DESC_NONE; 1727 - res->child = NULL; 1728 1724 if (request_resource(parent, res) == 0) 1729 1725 reserved = x+1; 1730 1726 } ··· 1969 1975 */ 1970 1976 revoke_iomem(res); 1971 1977 } else { 1972 - res->start = addr; 1973 - res->end = addr + size - 1; 1974 - res->name = name; 1975 - res->desc = desc; 1976 - res->flags = IORESOURCE_MEM; 1978 + *res = DEFINE_RES_NAMED_DESC(addr, size, name, IORESOURCE_MEM, desc); 1977 1979 1978 1980 /* 1979 1981 * Only succeed if the resource hosts an exclusive
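For reference, the macro family used in these conversions composes as follows; a small sketch with arbitrary addresses and names:

    static struct resource a = DEFINE_RES(0x1000, 0x100, 0);
    static struct resource b = DEFINE_RES_MEM_NAMED(0xfed40000, 0x5000, "tpm");
    static struct resource c = DEFINE_RES_NAMED_DESC(0x9000, 0x1000, "example",
                                                     IORESOURCE_MEM, IORES_DESC_NONE);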
+4 -3
kernel/signal.c
··· 176 176 177 177 void recalc_sigpending(void) 178 178 { 179 - if (!recalc_sigpending_tsk(current) && !freezing(current)) 180 - clear_thread_flag(TIF_SIGPENDING); 181 - 179 + if (!recalc_sigpending_tsk(current) && !freezing(current)) { 180 + if (unlikely(test_thread_flag(TIF_SIGPENDING))) 181 + clear_thread_flag(TIF_SIGPENDING); 182 + } 182 183 } 183 184 EXPORT_SYMBOL(recalc_sigpending); 184 185
+42 -55
kernel/ucount.c
··· 11 11 struct ucounts init_ucounts = {
12 12 .ns = &init_user_ns,
13 13 .uid = GLOBAL_ROOT_UID,
14 - .count = ATOMIC_INIT(1),
14 + .count = RCUREF_INIT(1),
15 15 };
16 16
17 17 #define UCOUNTS_HASHTABLE_BITS 10
18 - static struct hlist_head ucounts_hashtable[(1 << UCOUNTS_HASHTABLE_BITS)];
18 + #define UCOUNTS_HASHTABLE_ENTRIES (1 << UCOUNTS_HASHTABLE_BITS)
19 + static struct hlist_nulls_head ucounts_hashtable[UCOUNTS_HASHTABLE_ENTRIES] = {
20 + [0 ... UCOUNTS_HASHTABLE_ENTRIES - 1] = HLIST_NULLS_HEAD_INIT(0)
21 + };
19 22 static DEFINE_SPINLOCK(ucounts_lock);
20 23
21 24 #define ucounts_hashfn(ns, uid) \
··· 26 23 UCOUNTS_HASHTABLE_BITS)
27 24 #define ucounts_hashentry(ns, uid) \
28 25 (ucounts_hashtable + ucounts_hashfn(ns, uid))
29 26
30 -
31 27 #ifdef CONFIG_SYSCTL
32 28 static struct ctl_table_set *
··· 129 127 #endif
130 128 }
131 129
132 - static struct ucounts *find_ucounts(struct user_namespace *ns, kuid_t uid, struct hlist_head *hashent)
130 + static struct ucounts *find_ucounts(struct user_namespace *ns, kuid_t uid,
131 + struct hlist_nulls_head *hashent)
133 132 {
134 133 struct ucounts *ucounts;
134 + struct hlist_nulls_node *pos;
135 135
136 - hlist_for_each_entry(ucounts, hashent, node) {
137 - if (uid_eq(ucounts->uid, uid) && (ucounts->ns == ns))
138 - return ucounts;
136 + guard(rcu)();
137 + hlist_nulls_for_each_entry_rcu(ucounts, pos, hashent, node) {
138 + if (uid_eq(ucounts->uid, uid) && (ucounts->ns == ns)) {
139 + if (rcuref_get(&ucounts->count))
140 + return ucounts;
141 + }
139 142 }
140 143 return NULL;
141 144 }
142 145
143 146 static void hlist_add_ucounts(struct ucounts *ucounts)
144 147 {
145 - struct hlist_head *hashent = ucounts_hashentry(ucounts->ns, ucounts->uid);
148 + struct hlist_nulls_head *hashent = ucounts_hashentry(ucounts->ns, ucounts->uid);
149 +
146 150 spin_lock_irq(&ucounts_lock);
147 - hlist_add_head(&ucounts->node, hashent);
151 + hlist_nulls_add_head_rcu(&ucounts->node, hashent);
148 152 spin_unlock_irq(&ucounts_lock);
149 - }
150 -
151 - static inline bool get_ucounts_or_wrap(struct ucounts *ucounts)
152 - {
153 - /* Returns true on a successful get, false if the count wraps. */
154 - return !atomic_add_negative(1, &ucounts->count);
155 - }
156 -
157 - struct ucounts *get_ucounts(struct ucounts *ucounts)
158 - {
159 - if (!get_ucounts_or_wrap(ucounts)) {
160 - put_ucounts(ucounts);
161 - ucounts = NULL;
162 - }
163 - return ucounts;
164 153 }
165 154
166 155 struct ucounts *alloc_ucounts(struct user_namespace *ns, kuid_t uid)
167 156 {
168 - struct hlist_head *hashent = ucounts_hashentry(ns, uid);
169 - bool wrapped;
170 - struct ucounts *ucounts, *new = NULL;
157 + struct hlist_nulls_head *hashent = ucounts_hashentry(ns, uid);
158 + struct ucounts *ucounts, *new;
159 +
160 + ucounts = find_ucounts(ns, uid, hashent);
161 + if (ucounts)
162 + return ucounts;
163 +
164 + new = kzalloc(sizeof(*new), GFP_KERNEL);
165 + if (!new)
166 + return NULL;
167 +
168 + new->ns = ns;
169 + new->uid = uid;
170 + rcuref_init(&new->count, 1);
171 171
172 172 spin_lock_irq(&ucounts_lock);
173 173 ucounts = find_ucounts(ns, uid, hashent);
174 - if (!ucounts) {
174 + if (ucounts) {
175 175 spin_unlock_irq(&ucounts_lock);
176 -
177 - new = kzalloc(sizeof(*new), GFP_KERNEL);
178 - if (!new)
179 - return NULL;
180 -
181 - new->ns = ns;
182 - new->uid = uid;
183 - atomic_set(&new->count, 1);
184 -
185 - spin_lock_irq(&ucounts_lock);
186 - ucounts = find_ucounts(ns, uid, hashent);
187 - if (!ucounts) {
188 - hlist_add_head(&new->node, hashent);
189 - get_user_ns(new->ns);
190 - spin_unlock_irq(&ucounts_lock);
191 - return new;
192 - }
176 + kfree(new);
177 + return ucounts;
193 178 }
194 179
195 - wrapped = !get_ucounts_or_wrap(ucounts);
180 + hlist_nulls_add_head_rcu(&new->node, hashent);
181 + get_user_ns(new->ns);
196 182 spin_unlock_irq(&ucounts_lock);
197 - kfree(new);
198 - if (wrapped) {
199 - put_ucounts(ucounts);
200 - return NULL;
201 - }
202 - return ucounts;
183 + return new;
203 184 }
204 185
205 186 void put_ucounts(struct ucounts *ucounts)
206 187 {
207 188 unsigned long flags;
208 189
209 - if (atomic_dec_and_lock_irqsave(&ucounts->count, &ucounts_lock, flags)) {
210 - hlist_del_init(&ucounts->node);
190 + if (rcuref_put(&ucounts->count)) {
191 + spin_lock_irqsave(&ucounts_lock, flags);
192 + hlist_nulls_del_rcu(&ucounts->node);
211 193 spin_unlock_irqrestore(&ucounts_lock, flags);
194 +
212 195 put_user_ns(ucounts->ns);
213 - kfree(ucounts);
196 + kfree_rcu(ucounts, rcu);
214 197 }
215 198 }
216 199
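A minimal lifecycle sketch under the rcuref scheme (ns and uid are placeholders, error handling elided): alloc_ucounts() returns with a reference held, and the final put unhashes the entry and frees it after an RCU grace period:

    struct ucounts *uc = alloc_ucounts(ns, uid);    /* lookup-or-create */
    if (uc) {
            /* ... charge/uncharge counters on uc ... */
            put_ucounts(uc); /* last put: hlist_nulls_del_rcu() + kfree_rcu() */
    }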
+1 -3
kernel/watchdog_perf.c
··· 269 269 } else {
270 270 unsigned int len = comma - str;
271 271
272 272 if (len >= sizeof(buf))
273 273 return;
274 274
275 - if (strscpy(buf, str, sizeof(buf)) < 0)
276 - return;
277 - buf[len] = 0;
275 + strscpy(buf, str, len + 1);
278 276 if (kstrtoull(buf, 16, &config))
279 277 return;
280 278 }
+11
lib/Kconfig.debug
··· 1280 1280
1281 1281 Say N if unsure.
1282 1282
1283 + config DETECT_HUNG_TASK_BLOCKER
1284 + bool "Dump Hung Tasks Blocker"
1285 + depends on DETECT_HUNG_TASK
1286 + depends on !PREEMPT_RT
1287 + default y
1288 + help
1289 + Say Y here to dump the stacktrace of the blocker task that
1290 + holds the mutex a "hung task" is waiting on.
1291 + This adds a small overhead, but helps to spot the suspicious
1292 + task and its call trace when a hang is caused by mutex waiting.
1293 +
1283 1294 config WQ_WATCHDOG
1284 1295 bool "Detect Workqueue Stalls"
1285 1296 depends on DEBUG_KERNEL
+9 -3
lib/interval_tree.c
··· 20 20 /*
21 21 * Roll nodes[1] into nodes[0] by advancing nodes[1] to the end of a contiguous
22 22 * span of nodes. This makes nodes[0]->last the end of that contiguous used span
23 - * indexes that started at the original nodes[1]->start. nodes[1] is now the
24 - * first node starting the next used span. A hole span is between nodes[0]->last
25 - * and nodes[1]->start. nodes[1] must be !NULL.
23 + * of indexes that started at the original nodes[1]->start.
24 +
25 + * If there is an interior hole, nodes[1] is now the first node starting the
26 + * next used span. A hole span is between nodes[0]->last and nodes[1]->start.
27 +
28 + * If there is a trailing hole, nodes[1] is now NULL. A hole span is between
29 + * nodes[0]->last and last_index.
30 +
31 + * If the contiguous used range spans to last_index, nodes[1] is set to NULL.
26 32 */
27 33 static void
28 34 interval_tree_span_iter_next_gap(struct interval_tree_span_iter *state)
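Concretely (values made up): iterating a span with first_index = 0 and last_index = 30 over used ranges [5,10] and [12,20] yields hole [0,4], used [5,10], hole [11,11], used [12,20], and finally the trailing hole [21,30], during which nodes[1] is already NULL.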
+223 -14
lib/interval_tree_test.c
··· 5 5 #include <linux/prandom.h>
6 6 #include <linux/slab.h>
7 7 #include <asm/timex.h>
8 + #include <linux/bitmap.h>
9 + #include <linux/maple_tree.h>
8 10
9 11 #define __param(type, name, init, msg) \
10 12 static type name = init; \
··· 21 19 __param(bool, search_all, false, "Searches will iterate all nodes in the tree");
22 20
23 21 __param(uint, max_endpoint, ~0, "Largest value for the interval's endpoint");
22 + __param(ullong, seed, 3141592653589793238ULL, "Random seed");
24 23
25 24 static struct rb_root_cached root = RB_ROOT_CACHED;
26 25 static struct interval_tree_node *nodes = NULL;
··· 62 59 queries[i] = (prandom_u32_state(&rnd) >> 4) % max_endpoint;
63 60 }
64 61
65 - static int interval_tree_test_init(void)
62 + static int basic_check(void)
66 63 {
67 64 int i, j;
68 - unsigned long results;
69 65 cycles_t time1, time2, time;
70 -
71 - nodes = kmalloc_array(nnodes, sizeof(struct interval_tree_node),
72 - GFP_KERNEL);
73 - if (!nodes)
74 - return -ENOMEM;
75 -
76 - queries = kmalloc_array(nsearches, sizeof(int), GFP_KERNEL);
77 - if (!queries) {
78 - kfree(nodes);
79 - return -ENOMEM;
80 - }
81 66
82 67 printk(KERN_ALERT "interval tree insert/remove");
83 68
84 - prandom_seed_state(&rnd, 3141592653589793238ULL);
85 69 init();
86 70
87 71 time1 = get_cycles();
··· 86 96 time = div_u64(time, perf_loops);
87 97 printk(" -> %llu cycles\n", (unsigned long long)time);
88 98
99 + return 0;
100 + }
101 +
102 + static int search_check(void)
103 + {
104 + int i, j;
105 + unsigned long results;
106 + cycles_t time1, time2, time;
107 +
89 108 printk(KERN_ALERT "interval tree search");
109 +
110 + init();
90 111
91 112 for (j = 0; j < nnodes; j++)
92 113 interval_tree_insert(nodes + j, &root);
··· 120 119 results = div_u64(results, search_loops);
121 120 printk(" -> %llu cycles (%lu results)\n",
122 121 (unsigned long long)time, results);
122 +
123 + for (j = 0; j < nnodes; j++)
124 + interval_tree_remove(nodes + j, &root);
125 +
126 + return 0;
127 + }
128 +
129 + static int intersection_range_check(void)
130 + {
131 + int i, j, k;
132 + unsigned long start, last;
133 + struct interval_tree_node *node;
134 + unsigned long *intxn1;
135 + unsigned long *intxn2;
136 +
137 + printk(KERN_ALERT "interval tree iteration\n");
138 +
139 + intxn1 = bitmap_alloc(nnodes, GFP_KERNEL);
140 + if (!intxn1) {
141 + WARN_ON_ONCE("Failed to allocate intxn1\n");
142 + return -ENOMEM;
143 + }
144 +
145 + intxn2 = bitmap_alloc(nnodes, GFP_KERNEL);
146 + if (!intxn2) {
147 + WARN_ON_ONCE("Failed to allocate intxn2\n");
148 + bitmap_free(intxn1);
149 + return -ENOMEM;
150 + }
151 +
152 + for (i = 0; i < search_loops; i++) {
153 + /* Initialize interval tree for each round */
154 + init();
155 + for (j = 0; j < nnodes; j++)
156 + interval_tree_insert(nodes + j, &root);
157 +
158 + /* Let's try nsearches different ranges */
159 + for (k = 0; k < nsearches; k++) {
160 + /* Try whole range once */
161 + if (!k) {
162 + start = 0UL;
163 + last = ULONG_MAX;
164 + } else {
165 + last = (prandom_u32_state(&rnd) >> 4) % max_endpoint;
166 + start = (prandom_u32_state(&rnd) >> 4) % last;
167 + }
168 +
169 + /* Walk nodes to mark intersection nodes */
170 + bitmap_zero(intxn1, nnodes);
171 + for (j = 0; j < nnodes; j++) {
172 + node = nodes + j;
173 +
174 + if (start <= node->last && last >= node->start)
175 + bitmap_set(intxn1, j, 1);
176 + }
177 +
178 + /* Iterate tree to clear intersection nodes */
179 + bitmap_zero(intxn2, nnodes);
180 + for (node = interval_tree_iter_first(&root, start, last); node;
181 + node = interval_tree_iter_next(node, start, last))
182 + bitmap_set(intxn2, node - nodes, 1);
183 +
184 + WARN_ON_ONCE(!bitmap_equal(intxn1, intxn2, nnodes));
185 + }
186 +
187 + for (j = 0; j < nnodes; j++)
188 + interval_tree_remove(nodes + j, &root);
189 + }
190 +
191 + bitmap_free(intxn1);
192 + bitmap_free(intxn2);
193 + return 0;
194 + }
195 +
196 + #ifdef CONFIG_INTERVAL_TREE_SPAN_ITER
197 + /*
198 + * Helper function to get span of current position from maple tree point of
199 + * view.
200 + */
201 + static void mas_cur_span(struct ma_state *mas, struct interval_tree_span_iter *state)
202 + {
203 + unsigned long cur_start;
204 + unsigned long cur_last;
205 + int is_hole;
206 +
207 + if (mas->status == ma_overflow)
208 + return;
209 +
210 + /* walk to current position */
211 + state->is_hole = mas_walk(mas) ? 0 : 1;
212 +
213 + cur_start = mas->index < state->first_index ?
214 + state->first_index : mas->index;
215 +
216 + /* whether we have followers */
217 + do {
218 +
219 + cur_last = mas->last > state->last_index ?
220 + state->last_index : mas->last;
221 +
222 + is_hole = mas_next_range(mas, state->last_index) ? 0 : 1;
223 +
224 + } while (mas->status != ma_overflow && is_hole == state->is_hole);
225 +
226 + if (state->is_hole) {
227 + state->start_hole = cur_start;
228 + state->last_hole = cur_last;
229 + } else {
230 + state->start_used = cur_start;
231 + state->last_used = cur_last;
232 + }
233 +
234 + /* advance position for next round */
235 + if (mas->status != ma_overflow)
236 + mas_set(mas, cur_last + 1);
237 + }
238 +
239 + static int span_iteration_check(void)
240 + {
241 + int i, j, k;
242 + unsigned long start, last;
243 + struct interval_tree_span_iter span, mas_span;
244 +
245 + DEFINE_MTREE(tree);
246 +
247 + MA_STATE(mas, &tree, 0, 0);
248 +
249 + printk(KERN_ALERT "interval tree span iteration\n");
250 +
251 + for (i = 0; i < search_loops; i++) {
252 + /* Initialize interval tree for each round */
253 + init();
254 + for (j = 0; j < nnodes; j++)
255 + interval_tree_insert(nodes + j, &root);
256 +
257 + /* Put all the range into maple tree */
258 + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE);
259 + mt_set_in_rcu(&tree);
260 +
261 + for (j = 0; j < nnodes; j++)
262 + WARN_ON_ONCE(mtree_store_range(&tree, nodes[j].start,
263 + nodes[j].last, nodes + j, GFP_KERNEL));
264 +
265 + /* Let's try nsearches different ranges */
266 + for (k = 0; k < nsearches; k++) {
267 + /* Try whole range once */
268 + if (!k) {
269 + start = 0UL;
270 + last = ULONG_MAX;
271 + } else {
272 + last = (prandom_u32_state(&rnd) >> 4) % max_endpoint;
273 + start = (prandom_u32_state(&rnd) >> 4) % last;
274 + }
275 +
276 + mas_span.first_index = start;
277 + mas_span.last_index = last;
278 + mas_span.is_hole = -1;
279 + mas_set(&mas, start);
280 +
281 + interval_tree_for_each_span(&span, &root, start, last) {
282 + mas_cur_span(&mas, &mas_span);
283 +
284 + WARN_ON_ONCE(span.is_hole != mas_span.is_hole);
285 +
286 + if (span.is_hole) {
287 + WARN_ON_ONCE(span.start_hole != mas_span.start_hole);
288 + WARN_ON_ONCE(span.last_hole != mas_span.last_hole);
289 + } else {
290 + WARN_ON_ONCE(span.start_used != mas_span.start_used);
291 + WARN_ON_ONCE(span.last_used != mas_span.last_used);
292 + }
293 + }
294 +
295 + }
296 +
297 + WARN_ON_ONCE(mas.status != ma_overflow);
298 +
299 + /* Cleanup maple tree for each round */
300 + mtree_destroy(&tree);
301 + /* Cleanup interval tree for each round */
302 + for (j = 0; j < nnodes; j++)
303 + interval_tree_remove(nodes + j, &root);
304 + }
305 + return 0;
306 + }
307 + #else
308 + static inline int span_iteration_check(void) { return 0; }
309 + #endif
310 +
311 + static int interval_tree_test_init(void)
312 + {
313 + nodes = kmalloc_array(nnodes, sizeof(struct interval_tree_node),
314 + GFP_KERNEL);
315 + if (!nodes)
316 + return -ENOMEM;
317 +
318 + queries = kmalloc_array(nsearches, sizeof(int), GFP_KERNEL);
319 + if (!queries) {
320 + kfree(nodes);
321 + return -ENOMEM;
322 + }
323 +
324 + prandom_seed_state(&rnd, seed);
325 +
326 + basic_check();
327 + search_check();
328 + intersection_range_check();
329 + span_iteration_check();
123 330
124 331 kfree(queries);
125 332 kfree(nodes);
+2 -2
lib/min_heap.c
··· 2 2 #include <linux/export.h> 3 3 #include <linux/min_heap.h> 4 4 5 - void __min_heap_init(min_heap_char *heap, void *data, int size) 5 + void __min_heap_init(min_heap_char *heap, void *data, size_t size) 6 6 { 7 7 __min_heap_init_inline(heap, data, size); 8 8 } ··· 20 20 } 21 21 EXPORT_SYMBOL(__min_heap_full); 22 22 23 - void __min_heap_sift_down(min_heap_char *heap, int pos, size_t elem_size, 23 + void __min_heap_sift_down(min_heap_char *heap, size_t pos, size_t elem_size, 24 24 const struct min_heap_callbacks *func, void *args) 25 25 { 26 26 __min_heap_sift_down_inline(heap, pos, elem_size, func, args);
+12
lib/plist.c
··· 171 171 172 172 plist_del(node, head); 173 173 174 + /* 175 + * After plist_del(), iter is the replacement of the node. If the node 176 + * was on prio_list, take shortcut to find node_next instead of looping. 177 + */ 178 + if (!list_empty(&iter->prio_list)) { 179 + iter = list_entry(iter->prio_list.next, struct plist_node, 180 + prio_list); 181 + node_next = &iter->node_list; 182 + goto queue; 183 + } 184 + 174 185 plist_for_each_continue(iter, head) { 175 186 if (node->prio != iter->prio) { 176 187 node_next = &iter->node_list; 177 188 break; 178 189 } 179 190 } 191 + queue: 180 192 list_add_tail(&node->node_list, node_next); 181 193 182 194 plist_check_head(head);
+24 -6
lib/rbtree_test.c
··· 14 14 __param(int, nnodes, 100, "Number of nodes in the rb-tree"); 15 15 __param(int, perf_loops, 1000, "Number of iterations modifying the rb-tree"); 16 16 __param(int, check_loops, 100, "Number of iterations modifying and verifying the rb-tree"); 17 + __param(ullong, seed, 3141592653589793238ULL, "Random seed"); 17 18 18 19 struct test_node { 19 20 u32 key; ··· 240 239 } 241 240 } 242 241 243 - static int __init rbtree_test_init(void) 242 + static int basic_check(void) 244 243 { 245 244 int i, j; 246 245 cycles_t time1, time2, time; 247 246 struct rb_node *node; 248 247 249 - nodes = kmalloc_array(nnodes, sizeof(*nodes), GFP_KERNEL); 250 - if (!nodes) 251 - return -ENOMEM; 252 - 253 248 printk(KERN_ALERT "rbtree testing"); 254 249 255 - prandom_seed_state(&rnd, 3141592653589793238ULL); 256 250 init(); 257 251 258 252 time1 = get_cycles(); ··· 339 343 check(0); 340 344 } 341 345 346 + return 0; 347 + } 348 + 349 + static int augmented_check(void) 350 + { 351 + int i, j; 352 + cycles_t time1, time2, time; 353 + 342 354 printk(KERN_ALERT "augmented rbtree testing"); 343 355 344 356 init(); ··· 393 389 } 394 390 check_augmented(0); 395 391 } 392 + 393 + return 0; 394 + } 395 + 396 + static int __init rbtree_test_init(void) 397 + { 398 + nodes = kmalloc_array(nnodes, sizeof(*nodes), GFP_KERNEL); 399 + if (!nodes) 400 + return -ENOMEM; 401 + 402 + prandom_seed_state(&rnd, seed); 403 + 404 + basic_check(); 405 + augmented_check(); 396 406 397 407 kfree(nodes); 398 408
+1 -5
lib/zlib_deflate/deflate.c
··· 151 151 * meaning. 152 152 */ 153 153 154 - #define EQUAL 0 155 - /* result of memcmp for equal strings */ 156 - 157 154 /* =========================================================================== 158 155 * Update a hash value with the given input byte 159 156 * IN assertion: all calls to UPDATE_HASH are made with consecutive ··· 710 713 ) 711 714 { 712 715 /* check that the match is indeed a match */ 713 - if (memcmp((char *)s->window + match, 714 - (char *)s->window + start, length) != EQUAL) { 716 + if (memcmp((char *)s->window + match, (char *)s->window + start, length)) { 715 717 fprintf(stderr, " start %u, match %u, length %d\n", 716 718 start, match, length); 717 719 do {
+9
samples/Kconfig
··· 300 300 demonstrate how they should be used with execveat(2) +
301 301 AT_EXECVE_CHECK.
302 302
303 + config SAMPLE_HUNG_TASK
304 + tristate "Hung task detector test code"
305 + depends on DETECT_HUNG_TASK && DEBUG_FS
306 + help
307 + Build a module which provides a simple debugfs file. If a user
308 + reads the file, it sleeps for a long time (256 seconds) while
309 + holding a mutex. Thus if two or more processes read this file,
310 + the later readers will be detected by the hung_task watchdog.
311 +
303 312 source "samples/rust/Kconfig"
304 313
305 314 source "samples/damon/Kconfig"
+1
samples/Makefile
··· 42 42 obj-$(CONFIG_SAMPLES_RUST) += rust/ 43 43 obj-$(CONFIG_SAMPLE_DAMON_WSSE) += damon/ 44 44 obj-$(CONFIG_SAMPLE_DAMON_PRCL) += damon/ 45 + obj-$(CONFIG_SAMPLE_HUNG_TASK) += hung_task/
+2
samples/hung_task/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + obj-$(CONFIG_SAMPLE_HUNG_TASK) += hung_task_mutex.o
+66
samples/hung_task/hung_task_mutex.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later
2 + /*
3 + * hung_task_mutex.c - Sample code which causes a hung task via a mutex
4 + *
5 + * Usage: load this module and read `<debugfs>/hung_task/mutex`
6 + * from 2 or more processes.
7 + *
8 + * This is for testing the kernel hung_task error message.
9 + * Note that this will freeze your system and may cause
10 + * a panic. Do not use this except for testing.
11 + */
12 +
13 + #include <linux/debugfs.h>
14 + #include <linux/delay.h>
15 + #include <linux/fs.h>
16 + #include <linux/module.h>
17 + #include <linux/mutex.h>
18 +
19 + #define HUNG_TASK_DIR "hung_task"
20 + #define HUNG_TASK_FILE "mutex"
21 + #define SLEEP_SECOND 256
22 +
23 + static const char dummy_string[] = "This is a dummy string.";
24 + static DEFINE_MUTEX(dummy_mutex);
25 + static struct dentry *hung_task_dir;
26 +
27 + static ssize_t read_dummy(struct file *file, char __user *user_buf,
28 + size_t count, loff_t *ppos)
29 + {
30 + /* If a second task waits on the lock, its sleep is uninterruptible. */
31 + guard(mutex)(&dummy_mutex);
32 +
33 + /* While the first task sleeps here, it is interruptible. */
34 + msleep_interruptible(SLEEP_SECOND * 1000);
35 +
36 + return simple_read_from_buffer(user_buf, count, ppos,
37 + dummy_string, sizeof(dummy_string));
38 + }
39 +
40 + static const struct file_operations hung_task_fops = {
41 + .read = read_dummy,
42 + };
43 +
44 + static int __init hung_task_sample_init(void)
45 + {
46 + hung_task_dir = debugfs_create_dir(HUNG_TASK_DIR, NULL);
47 + if (IS_ERR(hung_task_dir))
48 + return PTR_ERR(hung_task_dir);
49 +
50 + debugfs_create_file(HUNG_TASK_FILE, 0400, hung_task_dir,
51 + NULL, &hung_task_fops);
52 +
53 + return 0;
54 + }
55 +
56 + static void __exit hung_task_sample_exit(void)
57 + {
58 + debugfs_remove_recursive(hung_task_dir);
59 + }
60 +
61 + module_init(hung_task_sample_init);
62 + module_exit(hung_task_sample_exit);
63 +
64 + MODULE_LICENSE("GPL");
65 + MODULE_AUTHOR("Masami Hiramatsu");
66 + MODULE_DESCRIPTION("Simple sleep under mutex file for testing hung task");
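A quick way to exercise the sample (assuming debugfs is mounted at /sys/kernel/debug): load the module and start two concurrent readers of /sys/kernel/debug/hung_task/mutex. The first reader sleeps interruptibly while holding dummy_mutex; the second blocks uninterruptibly on the mutex and, once hung_task_timeout_secs elapses, is reported by the detector together with the blocker information added in kernel/hung_task.c above.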
+3 -2
scripts/checkpatch.pl
··· 113 113 --max-line-length=n set the maximum line length, (default $max_line_length) 114 114 if exceeded, warn on patches 115 115 requires --strict for use with --file 116 - --min-conf-desc-length=n set the min description length, if shorter, warn 116 + --min-conf-desc-length=n set the minimum description length for config symbols 117 + in lines, if shorter, warn (default $min_conf_desc_length) 117 118 --tab-size=n set the number of spaces for tab (default $tabsize) 118 119 --root=PATH PATH to the kernel tree root 119 120 --no-summary suppress the per-file summary ··· 3646 3645 $help_length < $min_conf_desc_length) { 3647 3646 my $stat_real = get_stat_real($linenr, $ln - 1); 3648 3647 WARN("CONFIG_DESCRIPTION", 3649 - "please write a help paragraph that fully describes the config symbol\n" . "$here\n$stat_real\n"); 3648 + "please write a help paragraph that fully describes the config symbol with at least $min_conf_desc_length lines\n" . "$here\n$stat_real\n"); 3650 3649 } 3651 3650 } 3652 3651
+10
scripts/coccinelle/misc/secs_to_jiffies.cocci
··· 20 20 21 21 - msecs_to_jiffies(C * MSEC_PER_SEC) 22 22 + secs_to_jiffies(C) 23 + 24 + @depends on patch@ expression E; @@ 25 + 26 + - msecs_to_jiffies(E * 1000) 27 + + secs_to_jiffies(E) 28 + 29 + @depends on patch@ expression E; @@ 30 + 31 + - msecs_to_jiffies(E * MSEC_PER_SEC) 32 + + secs_to_jiffies(E)
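The two added rules generalize the constant-only rule above to arbitrary expressions; for instance (wq, work and delay are placeholders), they rewrite:

    queue_delayed_work(wq, &work, msecs_to_jiffies(delay * MSEC_PER_SEC));

into:

    queue_delayed_work(wq, &work, secs_to_jiffies(delay));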
+30
scripts/extract-fwblobs
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # ----------------------------------------------------------------------------- 5 + # Extracts the vmlinux built-in firmware blobs - requires a non-stripped image 6 + # ----------------------------------------------------------------------------- 7 + 8 + if [ -z "$1" ]; then 9 + echo "Must provide a non-stripped vmlinux as argument" 10 + exit 1 11 + fi 12 + 13 + read -r RD_ADDR_HEX RD_OFF_HEX <<< "$( readelf -SW "$1" |\ 14 + grep -w rodata | awk '{print "0x"$5" 0x"$6}' )" 15 + 16 + FW_SYMS="$(readelf -sW "$1" |\ 17 + awk -n '/fw_end/ { end=$2 ; print name " 0x" start " 0x" end; } { start=$2; name=$8; }')" 18 + 19 + while IFS= read -r entry; do 20 + read -r FW_NAME FW_ADDR_ST_HEX FW_ADDR_END_HEX <<< "$entry" 21 + 22 + # Notice kernel prepends _fw_ and appends _bin to the FW name 23 + # in rodata; hence we hereby filter that out. 24 + FW_NAME=${FW_NAME:4:-4} 25 + 26 + FW_OFFSET="$(printf "%d" $((FW_ADDR_ST_HEX - RD_ADDR_HEX + RD_OFF_HEX)))" 27 + FW_SIZE="$(printf "%d" $((FW_ADDR_END_HEX - FW_ADDR_ST_HEX)))" 28 + 29 + dd if="$1" of="./${FW_NAME}" bs="${FW_SIZE}" count=1 iflag=skip_bytes skip="${FW_OFFSET}" 30 + done <<< "${FW_SYMS}"
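Usage is ./scripts/extract-fwblobs vmlinux on a non-stripped image: the script locates the firmware begin/end symbols with readelf, converts them to file offsets via the .rodata section, and writes each blob into the current directory under its firmware name (with the _fw_/_bin decoration stripped, as the comment above notes).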
+20 -2
scripts/gdb/linux/cpus.py
··· 46 46 # !CONFIG_SMP case 47 47 offset = 0 48 48 pointer = var_ptr.cast(utils.get_long_type()) + offset 49 - return pointer.cast(var_ptr.type).dereference() 49 + return pointer.cast(var_ptr.type) 50 50 51 51 52 52 cpu_mask = {} ··· 149 149 super(PerCpu, self).__init__("lx_per_cpu") 150 150 151 151 def invoke(self, var, cpu=-1): 152 - return per_cpu(var.address, cpu) 152 + return per_cpu(var.address, cpu).dereference() 153 153 154 154 155 155 PerCpu() 156 + 157 + 158 + class PerCpuPtr(gdb.Function): 159 + """Return per-cpu pointer. 160 + 161 + $lx_per_cpu_ptr("VAR"[, CPU]): Return the per-cpu pointer called VAR for the 162 + given CPU number. If CPU is omitted, the CPU of the current context is used. 163 + Note that VAR has to be quoted as string.""" 164 + 165 + def __init__(self): 166 + super(PerCpuPtr, self).__init__("lx_per_cpu_ptr") 167 + 168 + def invoke(self, var, cpu=-1): 169 + return per_cpu(var, cpu) 170 + 171 + 172 + PerCpuPtr() 173 + 156 174 157 175 def get_current_task(cpu): 158 176 task_ptr_type = task_type.get_type().pointer()
+39 -5
scripts/gdb/linux/symbols.py
··· 14 14 import gdb 15 15 import os 16 16 import re 17 + import struct 17 18 19 + from itertools import count 18 20 from linux import modules, utils, constants 19 21 20 22 ··· 53 51 gdb.execute("set pagination %s" % ("on" if pagination else "off")) 54 52 55 53 return False 54 + 55 + 56 + def get_vmcore_s390(): 57 + with utils.qemu_phy_mem_mode(): 58 + vmcore_info = 0x0e0c 59 + paddr_vmcoreinfo_note = gdb.parse_and_eval("*(unsigned long long *)" + 60 + hex(vmcore_info)) 61 + inferior = gdb.selected_inferior() 62 + elf_note = inferior.read_memory(paddr_vmcoreinfo_note, 12) 63 + n_namesz, n_descsz, n_type = struct.unpack(">III", elf_note) 64 + desc_paddr = paddr_vmcoreinfo_note + len(elf_note) + n_namesz + 1 65 + return gdb.parse_and_eval("(char *)" + hex(desc_paddr)).string() 66 + 67 + 68 + def get_kerneloffset(): 69 + if utils.is_target_arch('s390'): 70 + try: 71 + vmcore_str = get_vmcore_s390() 72 + except gdb.error as e: 73 + gdb.write("{}\n".format(e)) 74 + return None 75 + return utils.parse_vmcore(vmcore_str).kerneloffset 76 + return None 56 77 57 78 58 79 class LxSymbols(gdb.Command): ··· 120 95 except gdb.error: 121 96 return str(module_addr) 122 97 123 - attrs = sect_attrs['attrs'] 124 - section_name_to_address = { 125 - attrs[n]['battr']['attr']['name'].string(): attrs[n]['address'] 126 - for n in range(int(sect_attrs['nsections']))} 98 + section_name_to_address = {} 99 + for i in count(): 100 + # this is a NULL terminated array 101 + if sect_attrs['grp']['bin_attrs'][i] == 0x0: 102 + break 103 + 104 + attr = sect_attrs['grp']['bin_attrs'][i].dereference() 105 + section_name_to_address[attr['attr']['name'].string()] = attr['private'] 127 106 128 107 textaddr = section_name_to_address.get(".text", module_addr) 129 108 args = [] ··· 184 155 obj.filename.endswith('vmlinux.debug')): 185 156 orig_vmlinux = obj.filename 186 157 gdb.execute("symbol-file", to_string=True) 187 - gdb.execute("symbol-file {0}".format(orig_vmlinux)) 158 + kerneloffset = get_kerneloffset() 159 + if kerneloffset is None: 160 + offset_arg = "" 161 + else: 162 + offset_arg = " -o " + hex(kerneloffset) 163 + gdb.execute("symbol-file {0}{1}".format(orig_vmlinux, offset_arg)) 188 164 189 165 self.loaded_modules = [] 190 166 module_list = modules.module_list()
+35
scripts/gdb/linux/utils.py
··· 11 11 # This work is licensed under the terms of the GNU GPL version 2. 12 12 # 13 13 14 + import contextlib 15 + import dataclasses 16 + import re 17 + import typing 18 + 14 19 import gdb 15 20 16 21 ··· 221 216 return gdb.parse_and_eval(expresssion) 222 217 except gdb.error: 223 218 return None 219 + 220 + 221 + @contextlib.contextmanager 222 + def qemu_phy_mem_mode(): 223 + connection = gdb.selected_inferior().connection 224 + orig = connection.send_packet("qqemu.PhyMemMode") 225 + if orig not in b"01": 226 + raise gdb.error("Unexpected qemu.PhyMemMode") 227 + orig = orig.decode() 228 + if connection.send_packet("Qqemu.PhyMemMode:1") != b"OK": 229 + raise gdb.error("Failed to set qemu.PhyMemMode") 230 + try: 231 + yield 232 + finally: 233 + if connection.send_packet("Qqemu.PhyMemMode:" + orig) != b"OK": 234 + raise gdb.error("Failed to restore qemu.PhyMemMode") 235 + 236 + 237 + @dataclasses.dataclass 238 + class VmCore: 239 + kerneloffset: typing.Optional[int] 240 + 241 + 242 + def parse_vmcore(s): 243 + match = re.search(r"KERNELOFFSET=([0-9a-f]+)", s) 244 + if match is None: 245 + kerneloffset = None 246 + else: 247 + kerneloffset = int(match.group(1), 16) 248 + return VmCore(kerneloffset=kerneloffset)
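As an example of the helper (string shortened), a VMCOREINFO blob containing "KERNELOFFSET=1a2b0000" parses to VmCore(kerneloffset=0x1a2b0000), while a blob lacking the key yields kerneloffset=None, which symbols.py above turns into an empty -o argument for symbol-file.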
+31 -18
scripts/get_maintainer.pl
··· 50 50 my $output_separator = ", "; 51 51 my $output_roles = 0; 52 52 my $output_rolestats = 1; 53 + my $output_substatus = undef; 53 54 my $output_section_maxlen = 50; 54 55 my $scm = 0; 55 56 my $tree = 1; ··· 270 269 'separator=s' => \$output_separator, 271 270 'subsystem!' => \$subsystem, 272 271 'status!' => \$status, 272 + 'substatus!' => \$output_substatus, 273 273 'scm!' => \$scm, 274 274 'tree!' => \$tree, 275 275 'web!' => \$web, ··· 315 313 $output_multiline = 0 if ($output_separator ne ", "); 316 314 $output_rolestats = 1 if ($interactive); 317 315 $output_roles = 1 if ($output_rolestats); 316 + 317 + if (!defined $output_substatus) { 318 + $output_substatus = $email && $output_roles && -t STDOUT; 319 + } 318 320 319 321 if ($sections || $letters ne "") { 320 322 $sections = 1; ··· 643 637 my @bug = (); 644 638 my @subsystem = (); 645 639 my @status = (); 640 + my @substatus = (); 646 641 my %deduplicate_name_hash = (); 647 642 my %deduplicate_address_hash = (); 648 643 ··· 656 649 if ($scm) { 657 650 @scm = uniq(@scm); 658 651 output(@scm); 652 + } 653 + 654 + if ($output_substatus) { 655 + @substatus = uniq(@substatus); 656 + output(@substatus); 659 657 } 660 658 661 659 if ($status) { ··· 871 859 @bug = (); 872 860 @subsystem = (); 873 861 @status = (); 862 + @substatus = (); 874 863 %deduplicate_name_hash = (); 875 864 %deduplicate_address_hash = (); 876 865 if ($email_git_all_signature_types) { ··· 1084 1071 --moderated => include moderated lists(s) if any (default: true) 1085 1072 --s => include subscriber only list(s) if any (default: false) 1086 1073 --remove-duplicates => minimize duplicate email names/addresses 1087 - --roles => show roles (status:subsystem, git-signer, list, etc...) 1074 + --roles => show roles (role:subsystem, git-signer, list, etc...) 
1088 1075 --rolestats => show roles and statistics (commits/total_commits, %)
1076 + --substatus => show subsystem status if not Maintained (default: match --roles when output is tty)
1089 1077 --file-emails => add email addresses found in -f file (default: 0 (off))
1090 1078 --fixes => for patches, add signatures of commits with 'Fixes: <commit>' (default: 1 (on))
1091 1079 --scm => print SCM tree(s) if any
··· 1298 1284 my $start = find_starting_index($index);
1299 1285 my $end = find_ending_index($index);
1300 1286
1301 - my $role = "unknown";
1287 + my $role = "maintainer";
1302 1288 my $subsystem = get_subsystem_name($index);
1289 + my $status = "unknown";
1303 1290
1304 1291 for ($i = $start + 1; $i < $end; $i++) {
1305 1292 my $tv = $typevalue[$i];
··· 1308 1293 my $ptype = $1;
1309 1294 my $pvalue = $2;
1310 1295 if ($ptype eq "S") {
1311 - $role = $pvalue;
1296 + $status = $pvalue;
1312 1297 }
1313 1298 }
1314 1299 }
1315 1300
1316 - $role = lc($role);
1317 - if ($role eq "supported") {
1318 - $role = "supporter";
1319 - } elsif ($role eq "maintained") {
1320 - $role = "maintainer";
1321 - } elsif ($role eq "odd fixes") {
1322 - $role = "odd fixer";
1323 - } elsif ($role eq "orphan") {
1324 - $role = "orphan minder";
1325 - } elsif ($role eq "obsolete") {
1326 - $role = "obsolete minder";
1327 - } elsif ($role eq "buried alive in reporters") {
1301 + $status = lc($status);
1302 + if ($status eq "buried alive in reporters") {
1328 1303 $role = "chief penguin";
1329 1304 }
1330 1305
··· 1340 1335 my $start = find_starting_index($index);
1341 1336 my $end = find_ending_index($index);
1342 1337
1343 - push(@subsystem, $typevalue[$start]);
1338 + my $subsystem = $typevalue[$start];
1339 + push(@subsystem, $subsystem);
1340 + my $status = "Unknown";
1344 1341
1345 1342 for ($i = $start + 1; $i < $end; $i++) {
1346 1343 my $tv = $typevalue[$i];
··· 1393 1386 }
1394 1387 } elsif ($ptype eq "R") {
1395 1388 if ($email_reviewer) {
1396 - my $subsystem = get_subsystem_name($i);
1397 - push_email_addresses($pvalue, "reviewer:$subsystem" . $suffix);
1389 + my $subs = get_subsystem_name($i);
1390 + push_email_addresses($pvalue, "reviewer:$subs" . $suffix);
1398 1391 }
1399 1392 } elsif ($ptype eq "T") {
1400 1393 push(@scm, $pvalue . $suffix);
··· 1404 1397 push(@bug, $pvalue . $suffix);
1405 1398 } elsif ($ptype eq "S") {
1406 1399 push(@status, $pvalue . $suffix);
1400 + $status = $pvalue;
1407 1401 }
1408 1402 }
1403 + }
1404 +
1405 + if ($subsystem ne "THE REST" and $status ne "Maintained") {
1406 + push(@substatus, $subsystem . " status: " . $status . $suffix)
1409 1407 }
1410 1408 }
1411 1409
··· 1915 1903 $done = 1;
1916 1904 $output_rolestats = 0;
1917 1905 $output_roles = 0;
1906 + $output_substatus = 0;
1918 1907 last;
1919 1908 } elsif ($nr =~ /^\d+$/ && $nr > 0 && $nr <= $count) {
1920 1909 $selected{$nr - 1} = !$selected{$nr - 1};
+1 -2
sound/pci/ac97/ac97_codec.c
··· 2461 2461 * (for avoiding loud click noises for many (OSS) apps 2462 2462 * that open/close frequently) 2463 2463 */ 2464 - schedule_delayed_work(&ac97->power_work, 2465 - msecs_to_jiffies(power_save * 1000)); 2464 + schedule_delayed_work(&ac97->power_work, secs_to_jiffies(power_save)); 2466 2465 else { 2467 2466 cancel_delayed_work(&ac97->power_work); 2468 2467 update_power_regs(ac97);
+13
tools/include/asm/timex.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __TOOLS_LINUX_ASM_TIMEX_H 3 + #define __TOOLS_LINUX_ASM_TIMEX_H 4 + 5 + #include <time.h> 6 + 7 + #define cycles_t clock_t 8 + 9 + static inline cycles_t get_cycles(void) 10 + { 11 + return clock(); 12 + } 13 + #endif // __TOOLS_LINUX_ASM_TIMEX_H
+21
tools/include/linux/bitmap.h
··· 19 19 const unsigned long *bitmap2, unsigned int bits); 20 20 bool __bitmap_equal(const unsigned long *bitmap1, 21 21 const unsigned long *bitmap2, unsigned int bits); 22 + void __bitmap_set(unsigned long *map, unsigned int start, int len); 22 23 void __bitmap_clear(unsigned long *map, unsigned int start, int len); 23 24 bool __bitmap_intersects(const unsigned long *bitmap1, 24 25 const unsigned long *bitmap2, unsigned int bits); ··· 78 77 *dst = *src1 | *src2; 79 78 else 80 79 __bitmap_or(dst, src1, src2, nbits); 80 + } 81 + 82 + static inline unsigned long *bitmap_alloc(unsigned int nbits, gfp_t flags __maybe_unused) 83 + { 84 + return malloc(bitmap_size(nbits)); 81 85 } 82 86 83 87 /** ··· 154 148 return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0; 155 149 else 156 150 return __bitmap_intersects(src1, src2, nbits); 151 + } 152 + 153 + static inline void bitmap_set(unsigned long *map, unsigned int start, unsigned int nbits) 154 + { 155 + if (__builtin_constant_p(nbits) && nbits == 1) 156 + __set_bit(start, map); 157 + else if (small_const_nbits(start + nbits)) 158 + *map |= GENMASK(start + nbits - 1, start); 159 + else if (__builtin_constant_p(start & BITMAP_MEM_MASK) && 160 + IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) && 161 + __builtin_constant_p(nbits & BITMAP_MEM_MASK) && 162 + IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT)) 163 + memset((char *)map + start / 8, 0xff, nbits / 8); 164 + else 165 + __bitmap_set(map, start, nbits); 157 166 } 158 167 159 168 static inline void bitmap_clear(unsigned long *map, unsigned int start,
+18
tools/include/linux/container_of.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _TOOLS_LINUX_CONTAINER_OF_H 3 + #define _TOOLS_LINUX_CONTAINER_OF_H 4 + 5 + #ifndef container_of 6 + /** 7 + * container_of - cast a member of a structure out to the containing structure 8 + * @ptr: the pointer to the member. 9 + * @type: the type of the container struct this is embedded in. 10 + * @member: the name of the member within the struct. 11 + * 12 + */ 13 + #define container_of(ptr, type, member) ({ \ 14 + const typeof(((type *)0)->member) * __mptr = (ptr); \ 15 + (type *)((char *)__mptr - offsetof(type, member)); }) 16 + #endif 17 + 18 + #endif /* _TOOLS_LINUX_CONTAINER_OF_H */
+1 -13
tools/include/linux/kernel.h
··· 11 11 #include <linux/panic.h> 12 12 #include <endian.h> 13 13 #include <byteswap.h> 14 + #include <linux/container_of.h> 14 15 15 16 #ifndef UINT_MAX 16 17 #define UINT_MAX (~0U) ··· 24 23 25 24 #ifndef offsetof 26 25 #define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER) 27 - #endif 28 - 29 - #ifndef container_of 30 - /** 31 - * container_of - cast a member of a structure out to the containing structure 32 - * @ptr: the pointer to the member. 33 - * @type: the type of the container struct this is embedded in. 34 - * @member: the name of the member within the struct. 35 - * 36 - */ 37 - #define container_of(ptr, type, member) ({ \ 38 - const typeof(((type *)0)->member) * __mptr = (ptr); \ 39 - (type *)((char *)__mptr - offsetof(type, member)); }) 40 26 #endif 41 27 42 28 #ifndef max
+5
tools/include/linux/math64.h
··· 72 72 } 73 73 #endif 74 74 75 + static inline u64 div_u64(u64 dividend, u32 divisor) 76 + { 77 + return dividend / divisor; 78 + } 79 + 75 80 #endif /* _LINUX_MATH64_H */
+7
tools/include/linux/moduleparam.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _TOOLS_LINUX_MODULE_PARAMS_H 3 + #define _TOOLS_LINUX_MODULE_PARAMS_H 4 + 5 + #define MODULE_PARM_DESC(parm, desc) 6 + 7 + #endif // _TOOLS_LINUX_MODULE_PARAMS_H
+51
tools/include/linux/prandom.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __TOOLS_LINUX_PRANDOM_H 3 + #define __TOOLS_LINUX_PRANDOM_H 4 + 5 + #include <linux/types.h> 6 + 7 + struct rnd_state { 8 + __u32 s1, s2, s3, s4; 9 + }; 10 + 11 + /* 12 + * Handle minimum values for seeds 13 + */ 14 + static inline u32 __seed(u32 x, u32 m) 15 + { 16 + return (x < m) ? x + m : x; 17 + } 18 + 19 + /** 20 + * prandom_seed_state - set seed for prandom_u32_state(). 21 + * @state: pointer to state structure to receive the seed. 22 + * @seed: arbitrary 64-bit value to use as a seed. 23 + */ 24 + static inline void prandom_seed_state(struct rnd_state *state, u64 seed) 25 + { 26 + u32 i = ((seed >> 32) ^ (seed << 10) ^ seed) & 0xffffffffUL; 27 + 28 + state->s1 = __seed(i, 2U); 29 + state->s2 = __seed(i, 8U); 30 + state->s3 = __seed(i, 16U); 31 + state->s4 = __seed(i, 128U); 32 + } 33 + 34 + /** 35 + * prandom_u32_state - seeded pseudo-random number generator. 36 + * @state: pointer to state structure holding seeded state. 37 + * 38 + * This is used for pseudo-randomness with no outside seeding. 39 + * For more random results, use get_random_u32(). 40 + */ 41 + static inline u32 prandom_u32_state(struct rnd_state *state) 42 + { 43 + #define TAUSWORTHE(s, a, b, c, d) (((s & c) << d) ^ (((s << a) ^ s) >> b)) 44 + state->s1 = TAUSWORTHE(state->s1, 6U, 13U, 4294967294U, 18U); 45 + state->s2 = TAUSWORTHE(state->s2, 2U, 27U, 4294967288U, 2U); 46 + state->s3 = TAUSWORTHE(state->s3, 13U, 21U, 4294967280U, 7U); 47 + state->s4 = TAUSWORTHE(state->s4, 3U, 12U, 4294967168U, 13U); 48 + 49 + return (state->s1 ^ state->s2 ^ state->s3 ^ state->s4); 50 + } 51 + #endif // __TOOLS_LINUX_PRANDOM_H
+1
tools/include/linux/slab.h
··· 12 12 13 13 void *kmalloc(size_t size, gfp_t gfp); 14 14 void kfree(void *p); 15 + void *kmalloc_array(size_t n, size_t size, gfp_t gfp); 15 16 16 17 bool slab_is_available(void); 17 18
+2
tools/include/linux/types.h
··· 42 42 typedef __u8 u8; 43 43 typedef __s8 s8; 44 44 45 + typedef unsigned long long ullong; 46 + 45 47 #ifdef __CHECKER__ 46 48 #define __bitwise __attribute__((bitwise)) 47 49 #else
+20
tools/lib/bitmap.c
··· 101 101 return false; 102 102 } 103 103 104 + void __bitmap_set(unsigned long *map, unsigned int start, int len) 105 + { 106 + unsigned long *p = map + BIT_WORD(start); 107 + const unsigned int size = start + len; 108 + int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG); 109 + unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start); 110 + 111 + while (len - bits_to_set >= 0) { 112 + *p |= mask_to_set; 113 + len -= bits_to_set; 114 + bits_to_set = BITS_PER_LONG; 115 + mask_to_set = ~0UL; 116 + p++; 117 + } 118 + if (len) { 119 + mask_to_set &= BITMAP_LAST_WORD_MASK(size); 120 + *p |= mask_to_set; 121 + } 122 + } 123 + 104 124 void __bitmap_clear(unsigned long *map, unsigned int start, int len) 105 125 { 106 126 unsigned long *p = map + BIT_WORD(start);
+16
tools/lib/slab.c
··· 36 36 printf("Freeing %p to malloc\n", p); 37 37 free(p); 38 38 } 39 + 40 + void *kmalloc_array(size_t n, size_t size, gfp_t gfp) 41 + { 42 + void *ret; 43 + 44 + if (!(gfp & __GFP_DIRECT_RECLAIM)) 45 + return NULL; 46 + 47 + ret = calloc(n, size); 48 + uatomic_inc(&kmalloc_nr_allocated); 49 + if (kmalloc_verbose) 50 + printf("Allocating %p from calloc\n", ret); 51 + if (gfp & __GFP_ZERO) 52 + memset(ret, 0, n * size); 53 + return ret; 54 + }
+33
tools/testing/rbtree/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + .PHONY: clean 4 + 5 + TARGETS = rbtree_test interval_tree_test 6 + OFILES = $(SHARED_OFILES) rbtree-shim.o interval_tree-shim.o maple-shim.o 7 + DEPS = ../../../include/linux/rbtree.h \ 8 + ../../../include/linux/rbtree_types.h \ 9 + ../../../include/linux/rbtree_augmented.h \ 10 + ../../../include/linux/interval_tree.h \ 11 + ../../../include/linux/interval_tree_generic.h \ 12 + ../../../lib/rbtree.c \ 13 + ../../../lib/interval_tree.c 14 + 15 + targets: $(TARGETS) 16 + 17 + include ../shared/shared.mk 18 + 19 + ifeq ($(DEBUG), 1) 20 + CFLAGS += -g 21 + endif 22 + 23 + $(TARGETS): $(OFILES) 24 + 25 + rbtree-shim.o: $(DEPS) 26 + rbtree_test.o: ../../../lib/rbtree_test.c 27 + interval_tree-shim.o: $(DEPS) 28 + interval_tree-shim.o: CFLAGS += -DCONFIG_INTERVAL_TREE_SPAN_ITER 29 + interval_tree_test.o: ../../../lib/interval_tree_test.c 30 + interval_tree_test.o: CFLAGS += -DCONFIG_INTERVAL_TREE_SPAN_ITER 31 + 32 + clean: 33 + $(RM) $(TARGETS) *.o radix-tree.c idr.c generated/*
+58
tools/testing/rbtree/interval_tree_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * interval_tree.c: Userspace Interval Tree test-suite 4 + * Copyright (c) 2025 Wei Yang <richard.weiyang@gmail.com> 5 + */ 6 + #include <linux/math64.h> 7 + #include <linux/kern_levels.h> 8 + #include "shared.h" 9 + #include "maple-shared.h" 10 + 11 + #include "../../../lib/interval_tree_test.c" 12 + 13 + int usage(void) 14 + { 15 + fprintf(stderr, "Userland interval tree test cases\n"); 16 + fprintf(stderr, " -n: Number of nodes in the interval tree\n"); 17 + fprintf(stderr, " -p: Number of iterations modifying the tree\n"); 18 + fprintf(stderr, " -q: Number of searches to the interval tree\n"); 19 + fprintf(stderr, " -s: Number of iterations searching the tree\n"); 20 + fprintf(stderr, " -a: Searches will iterate all nodes in the tree\n"); 21 + fprintf(stderr, " -m: Largest value for the interval's endpoint\n"); 22 + fprintf(stderr, " -r: Random seed\n"); 23 + exit(-1); 24 + } 25 + 26 + void interval_tree_tests(void) 27 + { 28 + interval_tree_test_init(); 29 + interval_tree_test_exit(); 30 + } 31 + 32 + int main(int argc, char **argv) 33 + { 34 + int opt; 35 + 36 + while ((opt = getopt(argc, argv, "n:p:q:s:am:r:")) != -1) { 37 + if (opt == 'n') 38 + nnodes = strtoul(optarg, NULL, 0); 39 + else if (opt == 'p') 40 + perf_loops = strtoul(optarg, NULL, 0); 41 + else if (opt == 'q') 42 + nsearches = strtoul(optarg, NULL, 0); 43 + else if (opt == 's') 44 + search_loops = strtoul(optarg, NULL, 0); 45 + else if (opt == 'a') 46 + search_all = true; 47 + else if (opt == 'm') 48 + max_endpoint = strtoul(optarg, NULL, 0); 49 + else if (opt == 'r') 50 + seed = strtoul(optarg, NULL, 0); 51 + else 52 + usage(); 53 + } 54 + 55 + maple_tree_init(); 56 + interval_tree_tests(); 57 + return 0; 58 + }
+48
tools/testing/rbtree/rbtree_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * rbtree_test.c: Userspace Red Black Tree test-suite 4 + * Copyright (c) 2025 Wei Yang <richard.weiyang@gmail.com> 5 + */ 6 + #include <linux/init.h> 7 + #include <linux/math64.h> 8 + #include <linux/kern_levels.h> 9 + #include "shared.h" 10 + 11 + #include "../../../lib/rbtree_test.c" 12 + 13 + int usage(void) 14 + { 15 + fprintf(stderr, "Userland rbtree test cases\n"); 16 + fprintf(stderr, " -n: Number of nodes in the rb-tree\n"); 17 + fprintf(stderr, " -p: Number of iterations modifying the rb-tree\n"); 18 + fprintf(stderr, " -c: Number of iterations modifying and verifying the rb-tree\n"); 19 + fprintf(stderr, " -r: Random seed\n"); 20 + exit(-1); 21 + } 22 + 23 + void rbtree_tests(void) 24 + { 25 + rbtree_test_init(); 26 + rbtree_test_exit(); 27 + } 28 + 29 + int main(int argc, char **argv) 30 + { 31 + int opt; 32 + 33 + while ((opt = getopt(argc, argv, "n:p:c:r:")) != -1) { 34 + if (opt == 'n') 35 + nnodes = strtoul(optarg, NULL, 0); 36 + else if (opt == 'p') 37 + perf_loops = strtoul(optarg, NULL, 0); 38 + else if (opt == 'c') 39 + check_loops = strtoul(optarg, NULL, 0); 40 + else if (opt == 'r') 41 + seed = strtoul(optarg, NULL, 0); 42 + else 43 + usage(); 44 + } 45 + 46 + rbtree_tests(); 47 + return 0; 48 + }
+4
tools/testing/rbtree/test.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + void rbtree_tests(void); 4 + void interval_tree_tests(void);
+5
tools/testing/shared/interval_tree-shim.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + /* Very simple shim around the interval tree. */ 4 + 5 + #include "../../../lib/interval_tree.c"
+7
tools/testing/shared/linux/interval_tree.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _TEST_INTERVAL_TREE_H 3 + #define _TEST_INTERVAL_TREE_H 4 + 5 + #include "../../../../include/linux/interval_tree.h" 6 + 7 + #endif /* _TEST_INTERVAL_TREE_H */
+2
tools/testing/shared/linux/interval_tree_generic.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #include "../../../../include/linux/interval_tree_generic.h"
+8
tools/testing/shared/linux/rbtree.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _TEST_RBTREE_H 3 + #define _TEST_RBTREE_H 4 + 5 + #include <linux/kernel.h> 6 + #include "../../../../include/linux/rbtree.h" 7 + 8 + #endif /* _TEST_RBTREE_H */
+7
tools/testing/shared/linux/rbtree_augmented.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _TEST_RBTREE_AUGMENTED_H 3 + #define _TEST_RBTREE_AUGMENTED_H 4 + 5 + #include "../../../../include/linux/rbtree_augmented.h" 6 + 7 + #endif /* _TEST_RBTREE_AUGMENTED_H */
+8
tools/testing/shared/linux/rbtree_types.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _TEST_RBTREE_TYPES_H 3 + #define _TEST_RBTREE_TYPES_H 4 + 5 + #include "../../../../include/linux/rbtree_types.h" 6 + 7 + #endif /* _TEST_RBTREE_TYPES_H */ 8 +
+6
tools/testing/shared/rbtree-shim.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + /* Very simple shim around the rbtree. */ 4 + 5 + #include "../../../lib/rbtree.c" 6 +