Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-nonmm-stable-2026-02-12-10-48' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:

- "ocfs2: give ocfs2 the ability to reclaim suballocator free bg" saves
disk space by teaching ocfs2 to reclaim suballocator block group
space (Heming Zhao)

- "Add ARRAY_END(), and use it to fix off-by-one bugs" adds the
ARRAY_END() macro and uses it in various places (Alejandro Colomar)

- "vmcoreinfo: support VMCOREINFO_BYTES larger than PAGE_SIZE" makes
the vmcore code future-safe should VMCOREINFO_BYTES ever exceed the
page size (Pnina Feder)

- "kallsyms: Prevent invalid access when showing module buildid" cleans
up kallsyms code related to module buildid and fixes an invalid
access crash when printing backtraces (Petr Mladek)

- "Address page fault in ima_restore_measurement_list()" fixes a
kexec-related crash that can occur when booting the second-stage
kernel on x86 (Harshit Mogalapalli)

- "kho: ABI headers and Documentation updates" updates the kexec
handover ABI documentation (Mike Rapoport)

- "Align atomic storage" adds the __aligned attribute to atomic_t and
atomic64_t definitions to get natural alignment of both types on
csky, m68k, microblaze, nios2, openrisc and sh (Finn Thain)

- "kho: clean up page initialization logic" simplifies the page
initialization logic in kho_restore_page() (Pratyush Yadav)

- "Unload linux/kernel.h" moves several things out of kernel.h and into
more appropriate places (Yury Norov)

- "don't abuse task_struct.group_leader" removes the usage of
->group_leader when it is "obviously unnecessary" (Oleg Nesterov)

- "list private v2 & luo flb" adds some infrastructure improvements to
the live update orchestrator (Pasha Tatashin)

* tag 'mm-nonmm-stable-2026-02-12-10-48' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (107 commits)
watchdog/hardlockup: simplify perf event probe and remove per-cpu dependency
procfs: fix missing RCU protection when reading real_parent in do_task_stat()
watchdog/softlockup: fix sample ring index wrap in need_counting_irqs()
kcsan, compiler_types: avoid duplicate type issues in BPF Type Format
kho: fix doc for kho_restore_pages()
tests/liveupdate: add in-kernel liveupdate test
liveupdate: luo_flb: introduce File-Lifecycle-Bound global state
liveupdate: luo_file: Use private list
list: add kunit test for private list primitives
list: add primitives for private list manipulations
delayacct: fix uapi timespec64 definition
panic: add panic_force_cpu= parameter to redirect panic to a specific CPU
netclassid: use thread_group_leader(p) in update_classid_task()
RDMA/umem: don't abuse current->group_leader
drm/pan*: don't abuse current->group_leader
drm/amd: kill the outdated "Only the pthreads threading model is supported" checks
drm/amdgpu: don't abuse current->group_leader
android/binder: use same_thread_group(proc->tsk, current) in binder_mmap()
android/binder: don't abuse current->group_leader
kho: skip memoryless NUMA nodes when reserving scratch areas
...

+4133 -1554
+1 -3
.editorconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 - root = true 4 - 5 - [{*.{awk,c,dts,dtsi,dtso,h,mk,s,S},Kconfig,Makefile,Makefile.*}] 3 + [{*.{awk,c,dts,dtsi,dtso,h,mk,rst,s,S},Kconfig,Makefile,Makefile.*}] 6 4 charset = utf-8 7 5 end_of_line = lf 8 6 insert_final_newline = true
+16 -16
Documentation/accounting/delay-accounting.rst
··· 107 107 TGID 242 108 108 109 109 110 - CPU count real total virtual total delay total delay average delay max delay min 111 - 39 156000000 156576579 2111069 0.054ms 0.212296ms 0.031307ms 112 - IO count delay total delay average delay max delay min 113 - 0 0 0.000ms 0.000000ms 0.000000ms 114 - SWAP count delay total delay average delay max delay min 115 - 0 0 0.000ms 0.000000ms 0.000000ms 116 - RECLAIM count delay total delay average delay max delay min 117 - 0 0 0.000ms 0.000000ms 0.000000ms 118 - THRASHING count delay total delay average delay max delay min 119 - 0 0 0.000ms 0.000000ms 0.000000ms 120 - COMPACT count delay total delay average delay max delay min 121 - 0 0 0.000ms 0.000000ms 0.000000ms 122 - WPCOPY count delay total delay average delay max delay min 123 - 156 11215873 0.072ms 0.207403ms 0.033913ms 124 - IRQ count delay total delay average delay max delay min 125 - 0 0 0.000ms 0.000000ms 0.000000ms 110 + CPU count real total virtual total delay total delay average delay max delay min delay max timestamp 111 + 46 188000000 192348334 4098012 0.089ms 0.429260ms 0.051205ms 2026-01-15T15:06:58 112 + IO count delay total delay average delay max delay min delay max timestamp 113 + 0 0 0.000ms 0.000000ms 0.000000ms N/A 114 + SWAP count delay total delay average delay max delay min delay max timestamp 115 + 0 0 0.000ms 0.000000ms 0.000000ms N/A 116 + RECLAIM count delay total delay average delay max delay min delay max timestamp 117 + 0 0 0.000ms 0.000000ms 0.000000ms N/A 118 + THRASHING count delay total delay average delay max delay min delay max timestamp 119 + 0 0 0.000ms 0.000000ms 0.000000ms N/A 120 + COMPACT count delay total delay average delay max delay min delay max timestamp 121 + 0 0 0.000ms 0.000000ms 0.000000ms N/A 122 + WPCOPY count delay total delay average delay max delay min delay max timestamp 123 + 182 19413338 0.107ms 0.547353ms 0.022462ms 2026-01-15T15:05:24 124 + IRQ count delay total delay average delay max delay min delay max timestamp 125 + 0 0 0.000ms 0.000000ms 0.000000ms N/A 126 126 127 127 Get IO accounting for pid 1, it works only with -p:: 128 128
+20 -5
Documentation/admin-guide/kernel-parameters.txt
··· 4815 4815 panic_on_warn=1 panic() instead of WARN(). Useful to cause kdump 4816 4816 on a WARN(). 4817 4817 4818 + panic_force_cpu= 4819 + [KNL,SMP] Force panic handling to execute on a specific CPU. 4820 + Format: <cpu number> 4821 + Some platforms require panic handling to occur on a 4822 + specific CPU for the crash kernel to function correctly. 4823 + This can be due to firmware limitations, interrupt routing 4824 + constraints, or platform-specific requirements where only 4825 + a particular CPU can safely enter the crash kernel. 4826 + When set, panic() will redirect execution to the specified 4827 + CPU before proceeding with the normal panic and kexec flow. 4828 + If the target CPU is offline or unavailable, panic proceeds 4829 + on the current CPU. 4830 + This option should only be used for systems with the above 4831 + constraints as it might cause the panic operation to be less reliable. 4832 + 4818 4833 panic_print= Bitmask for printing system info when panic happens. 4819 4834 User can chose combination of the following bits: 4820 4835 bit 0: print all tasks info ··· 7004 6989 7005 6990 softlockup_panic= 7006 6991 [KNL] Should the soft-lockup detector generate panics. 7007 - Format: 0 | 1 6992 + Format: <int> 7008 6993 7009 - A value of 1 instructs the soft-lockup detector 7010 - to panic the machine when a soft-lockup occurs. It is 7011 - also controlled by the kernel.softlockup_panic sysctl 7012 - and CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC, which is the 6994 + A value of non-zero instructs the soft-lockup detector 6995 + to panic the machine when a soft-lockup duration exceeds 6996 + N thresholds. It is also controlled by the kernel.softlockup_panic 6997 + sysctl and CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC, which is the 7013 6998 respective build-time switch to that functionality. 7014 6999 7015 7000 softlockup_all_cpu_backtrace=
+28
Documentation/core-api/kho/abi.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0-or-later 2 + 3 + ================== 4 + Kexec Handover ABI 5 + ================== 6 + 7 + Core Kexec Handover ABI 8 + ======================== 9 + 10 + .. kernel-doc:: include/linux/kho/abi/kexec_handover.h 11 + :doc: Kexec Handover ABI 12 + 13 + vmalloc preservation ABI 14 + ======================== 15 + 16 + .. kernel-doc:: include/linux/kho/abi/kexec_handover.h 17 + :doc: Kexec Handover ABI for vmalloc Preservation 18 + 19 + memblock preservation ABI 20 + ========================= 21 + 22 + .. kernel-doc:: include/linux/kho/abi/memblock.h 23 + :doc: memblock kexec handover ABI 24 + 25 + See Also 26 + ======== 27 + 28 + - :doc:`/admin-guide/mm/kho`
-43
Documentation/core-api/kho/bindings/kho.yaml
··· 1 - # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 - %YAML 1.2 3 - --- 4 - title: Kexec HandOver (KHO) root tree 5 - 6 - maintainers: 7 - - Mike Rapoport <rppt@kernel.org> 8 - - Changyuan Lyu <changyuanl@google.com> 9 - 10 - description: | 11 - System memory preserved by KHO across kexec. 12 - 13 - properties: 14 - compatible: 15 - enum: 16 - - kho-v1 17 - 18 - preserved-memory-map: 19 - description: | 20 - physical address (u64) of an in-memory structure describing all preserved 21 - folios and memory ranges. 22 - 23 - patternProperties: 24 - "$[0-9a-f_]+^": 25 - $ref: sub-fdt.yaml# 26 - description: physical address of a KHO user's own FDT. 27 - 28 - required: 29 - - compatible 30 - - preserved-memory-map 31 - 32 - additionalProperties: false 33 - 34 - examples: 35 - - | 36 - kho { 37 - compatible = "kho-v1"; 38 - preserved-memory-map = <0xf0be16 0x1000000>; 39 - 40 - memblock { 41 - fdt = <0x80cc16 0x1000000>; 42 - }; 43 - };
-39
Documentation/core-api/kho/bindings/memblock/memblock.yaml
··· 1 - # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 - %YAML 1.2 3 - --- 4 - title: Memblock reserved memory 5 - 6 - maintainers: 7 - - Mike Rapoport <rppt@kernel.org> 8 - 9 - description: | 10 - Memblock can serialize its current memory reservations created with 11 - reserve_mem command line option across kexec through KHO. 12 - The post-KHO kernel can then consume these reservations and they are 13 - guaranteed to have the same physical address. 14 - 15 - properties: 16 - compatible: 17 - enum: 18 - - reserve-mem-v1 19 - 20 - patternProperties: 21 - "$[0-9a-f_]+^": 22 - $ref: reserve-mem.yaml# 23 - description: reserved memory regions 24 - 25 - required: 26 - - compatible 27 - 28 - additionalProperties: false 29 - 30 - examples: 31 - - | 32 - memblock { 33 - compatible = "memblock-v1"; 34 - n1 { 35 - compatible = "reserve-mem-v1"; 36 - start = <0xc06b 0x4000000>; 37 - size = <0x04 0x00>; 38 - }; 39 - };
-40
Documentation/core-api/kho/bindings/memblock/reserve-mem.yaml
··· 1 - # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 - %YAML 1.2 3 - --- 4 - title: Memblock reserved memory regions 5 - 6 - maintainers: 7 - - Mike Rapoport <rppt@kernel.org> 8 - 9 - description: | 10 - Memblock can serialize its current memory reservations created with 11 - reserve_mem command line option across kexec through KHO. 12 - This object describes each such region. 13 - 14 - properties: 15 - compatible: 16 - enum: 17 - - reserve-mem-v1 18 - 19 - start: 20 - description: | 21 - physical address (u64) of the reserved memory region. 22 - 23 - size: 24 - description: | 25 - size (u64) of the reserved memory region. 26 - 27 - required: 28 - - compatible 29 - - start 30 - - size 31 - 32 - additionalProperties: false 33 - 34 - examples: 35 - - | 36 - n1 { 37 - compatible = "reserve-mem-v1"; 38 - start = <0xc06b 0x4000000>; 39 - size = <0x04 0x00>; 40 - };
-27
Documentation/core-api/kho/bindings/sub-fdt.yaml
··· 1 - # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 - %YAML 1.2 3 - --- 4 - title: KHO users' FDT address 5 - 6 - maintainers: 7 - - Mike Rapoport <rppt@kernel.org> 8 - - Changyuan Lyu <changyuanl@google.com> 9 - 10 - description: | 11 - Physical address of an FDT blob registered by a KHO user. 12 - 13 - properties: 14 - fdt: 15 - description: | 16 - physical address (u64) of an FDT blob. 17 - 18 - required: 19 - - fdt 20 - 21 - additionalProperties: false 22 - 23 - examples: 24 - - | 25 - memblock { 26 - fdt = <0x80cc16 0x1000000>; 27 - };
-74
Documentation/core-api/kho/concepts.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0-or-later 2 - .. _kho-concepts: 3 - 4 - ======================= 5 - Kexec Handover Concepts 6 - ======================= 7 - 8 - Kexec HandOver (KHO) is a mechanism that allows Linux to preserve memory 9 - regions, which could contain serialized system states, across kexec. 10 - 11 - It introduces multiple concepts: 12 - 13 - KHO FDT 14 - ======= 15 - 16 - Every KHO kexec carries a KHO specific flattened device tree (FDT) blob 17 - that describes preserved memory regions. These regions contain either 18 - serialized subsystem states, or in-memory data that shall not be touched 19 - across kexec. After KHO, subsystems can retrieve and restore preserved 20 - memory regions from KHO FDT. 21 - 22 - KHO only uses the FDT container format and libfdt library, but does not 23 - adhere to the same property semantics that normal device trees do: Properties 24 - are passed in native endianness and standardized properties like ``regs`` and 25 - ``ranges`` do not exist, hence there are no ``#...-cells`` properties. 26 - 27 - KHO is still under development. The FDT schema is unstable and would change 28 - in the future. 29 - 30 - Scratch Regions 31 - =============== 32 - 33 - To boot into kexec, we need to have a physically contiguous memory range that 34 - contains no handed over memory. Kexec then places the target kernel and initrd 35 - into that region. The new kernel exclusively uses this region for memory 36 - allocations before during boot up to the initialization of the page allocator. 37 - 38 - We guarantee that we always have such regions through the scratch regions: On 39 - first boot KHO allocates several physically contiguous memory regions. Since 40 - after kexec these regions will be used by early memory allocations, there is a 41 - scratch region per NUMA node plus a scratch region to satisfy allocations 42 - requests that do not require particular NUMA node assignment. 
43 - By default, size of the scratch region is calculated based on amount of memory 44 - allocated during boot. The ``kho_scratch`` kernel command line option may be 45 - used to explicitly define size of the scratch regions. 46 - The scratch regions are declared as CMA when page allocator is initialized so 47 - that their memory can be used during system lifetime. CMA gives us the 48 - guarantee that no handover pages land in that region, because handover pages 49 - must be at a static physical memory location and CMA enforces that only 50 - movable pages can be located inside. 51 - 52 - After KHO kexec, we ignore the ``kho_scratch`` kernel command line option and 53 - instead reuse the exact same region that was originally allocated. This allows 54 - us to recursively execute any amount of KHO kexecs. Because we used this region 55 - for boot memory allocations and as target memory for kexec blobs, some parts 56 - of that memory region may be reserved. These reservations are irrelevant for 57 - the next KHO, because kexec can overwrite even the original kernel. 58 - 59 - .. _kho-finalization-phase: 60 - 61 - KHO finalization phase 62 - ====================== 63 - 64 - To enable user space based kexec file loader, the kernel needs to be able to 65 - provide the FDT that describes the current kernel's state before 66 - performing the actual kexec. The process of generating that FDT is 67 - called serialization. When the FDT is generated, some properties 68 - of the system may become immutable because they are already written down 69 - in the FDT. That state is called the KHO finalization phase. 70 - 71 - Public API 72 - ========== 73 - .. kernel-doc:: kernel/liveupdate/kexec_handover.c 74 - :export:
-80
Documentation/core-api/kho/fdt.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0-or-later 2 - 3 - ======= 4 - KHO FDT 5 - ======= 6 - 7 - KHO uses the flattened device tree (FDT) container format and libfdt 8 - library to create and parse the data that is passed between the 9 - kernels. The properties in KHO FDT are stored in native format. 10 - It includes the physical address of an in-memory structure describing 11 - all preserved memory regions, as well as physical addresses of KHO users' 12 - own FDTs. Interpreting those sub FDTs is the responsibility of KHO users. 13 - 14 - KHO nodes and properties 15 - ======================== 16 - 17 - Property ``preserved-memory-map`` 18 - --------------------------------- 19 - 20 - KHO saves a special property named ``preserved-memory-map`` under the root node. 21 - This node contains the physical address of an in-memory structure for KHO to 22 - preserve memory regions across kexec. 23 - 24 - Property ``compatible`` 25 - ----------------------- 26 - 27 - The ``compatible`` property determines compatibility between the kernel 28 - that created the KHO FDT and the kernel that attempts to load it. 29 - If the kernel that loads the KHO FDT is not compatible with it, the entire 30 - KHO process will be bypassed. 31 - 32 - Property ``fdt`` 33 - ---------------- 34 - 35 - Generally, a KHO user serialize its state into its own FDT and instructs 36 - KHO to preserve the underlying memory, such that after kexec, the new kernel 37 - can recover its state from the preserved FDT. 38 - 39 - A KHO user thus can create a node in KHO root tree and save the physical address 40 - of its own FDT in that node's property ``fdt`` . 
41 - 42 - Examples 43 - ======== 44 - 45 - The following example demonstrates KHO FDT that preserves two memory 46 - regions created with ``reserve_mem`` kernel command line parameter:: 47 - 48 - /dts-v1/; 49 - 50 - / { 51 - compatible = "kho-v1"; 52 - 53 - preserved-memory-map = <0x40be16 0x1000000>; 54 - 55 - memblock { 56 - fdt = <0x1517 0x1000000>; 57 - }; 58 - }; 59 - 60 - where the ``memblock`` node contains an FDT that is requested by the 61 - subsystem memblock for preservation. The FDT contains the following 62 - serialized data:: 63 - 64 - /dts-v1/; 65 - 66 - / { 67 - compatible = "memblock-v1"; 68 - 69 - n1 { 70 - compatible = "reserve-mem-v1"; 71 - start = <0xc06b 0x4000000>; 72 - size = <0x04 0x00>; 73 - }; 74 - 75 - n2 { 76 - compatible = "reserve-mem-v1"; 77 - start = <0xc067 0x4000000>; 78 - size = <0x04 0x00>; 79 - }; 80 - };
+80 -2
Documentation/core-api/kho/index.rst
··· 1 1 .. SPDX-License-Identifier: GPL-2.0-or-later 2 2 3 + .. _kho-concepts: 4 + 3 5 ======================== 4 6 Kexec Handover Subsystem 5 7 ======================== 6 8 9 + Overview 10 + ======== 11 + 12 + Kexec HandOver (KHO) is a mechanism that allows Linux to preserve memory 13 + regions, which could contain serialized system states, across kexec. 14 + 15 + KHO uses :ref:`flattened device tree (FDT) <kho_fdt>` to pass information about 16 + the preserved state from pre-exec kernel to post-kexec kernel and :ref:`scratch 17 + memory regions <kho_scratch>` to ensure integrity of the preserved memory. 18 + 19 + .. _kho_fdt: 20 + 21 + KHO FDT 22 + ======= 23 + Every KHO kexec carries a KHO specific flattened device tree (FDT) blob that 24 + describes the preserved state. The FDT includes properties describing preserved 25 + memory regions and nodes that hold subsystem specific state. 26 + 27 + The preserved memory regions contain either serialized subsystem states, or 28 + in-memory data that shall not be touched across kexec. After KHO, subsystems 29 + can retrieve and restore the preserved state from KHO FDT. 30 + 31 + Subsystems participating in KHO can define their own format for state 32 + serialization and preservation. 33 + 34 + KHO FDT and structures defined by the subsystems form an ABI between pre-kexec 35 + and post-kexec kernels. This ABI is defined by header files in 36 + ``include/linux/kho/abi`` directory. 37 + 7 38 .. toctree:: 8 39 :maxdepth: 1 9 40 10 - concepts 11 - fdt 41 + abi.rst 42 + 43 + .. _kho_scratch: 44 + 45 + Scratch Regions 46 + =============== 47 + 48 + To boot into kexec, we need to have a physically contiguous memory range that 49 + contains no handed over memory. Kexec then places the target kernel and initrd 50 + into that region. The new kernel exclusively uses this region for memory 51 + allocations before during boot up to the initialization of the page allocator. 
52 + 53 + We guarantee that we always have such regions through the scratch regions: On 54 + first boot KHO allocates several physically contiguous memory regions. Since 55 + after kexec these regions will be used by early memory allocations, there is a 56 + scratch region per NUMA node plus a scratch region to satisfy allocations 57 + requests that do not require particular NUMA node assignment. 58 + By default, size of the scratch region is calculated based on amount of memory 59 + allocated during boot. The ``kho_scratch`` kernel command line option may be 60 + used to explicitly define size of the scratch regions. 61 + The scratch regions are declared as CMA when page allocator is initialized so 62 + that their memory can be used during system lifetime. CMA gives us the 63 + guarantee that no handover pages land in that region, because handover pages 64 + must be at a static physical memory location and CMA enforces that only 65 + movable pages can be located inside. 66 + 67 + After KHO kexec, we ignore the ``kho_scratch`` kernel command line option and 68 + instead reuse the exact same region that was originally allocated. This allows 69 + us to recursively execute any amount of KHO kexecs. Because we used this region 70 + for boot memory allocations and as target memory for kexec blobs, some parts 71 + of that memory region may be reserved. These reservations are irrelevant for 72 + the next KHO, because kexec can overwrite even the original kernel. 73 + 74 + .. _kho-finalization-phase: 75 + 76 + KHO finalization phase 77 + ====================== 78 + 79 + To enable user space based kexec file loader, the kernel needs to be able to 80 + provide the FDT that describes the current kernel's state before 81 + performing the actual kexec. The process of generating that FDT is 82 + called serialization. When the FDT is generated, some properties 83 + of the system may become immutable because they are already written down 84 + in the FDT. That state is called the KHO finalization phase. 85 + 86 + See Also 87 + ======== 88 + 89 + - :doc:`/admin-guide/mm/kho`
+9
Documentation/core-api/list.rst
··· 774 774 775 775 .. kernel-doc:: include/linux/list.h 776 776 :internal: 777 + 778 + Private List API 779 + ================ 780 + 781 + .. kernel-doc:: include/linux/list_private.h 782 + :doc: Private List Primitives 783 + 784 + .. kernel-doc:: include/linux/list_private.h 785 + :internal:
+12 -1
Documentation/core-api/liveupdate.rst
··· 18 18 .. kernel-doc:: kernel/liveupdate/luo_file.c 19 19 :doc: LUO File Descriptors 20 20 21 + LUO File Lifecycle Bound Global Data 22 + ==================================== 23 + .. kernel-doc:: kernel/liveupdate/luo_flb.c 24 + :doc: LUO File Lifecycle Bound Global Data 25 + 21 26 Live Update Orchestrator ABI 22 27 ============================ 23 28 .. kernel-doc:: include/linux/kho/abi/luo.h ··· 45 40 .. kernel-doc:: kernel/liveupdate/luo_core.c 46 41 :export: 47 42 43 + .. kernel-doc:: kernel/liveupdate/luo_flb.c 44 + :export: 45 + 48 46 .. kernel-doc:: kernel/liveupdate/luo_file.c 49 47 :export: 50 48 51 49 Internal API 52 50 ============ 53 51 .. kernel-doc:: kernel/liveupdate/luo_core.c 52 + :internal: 53 + 54 + .. kernel-doc:: kernel/liveupdate/luo_flb.c 54 55 :internal: 55 56 56 57 .. kernel-doc:: kernel/liveupdate/luo_session.c ··· 69 58 ======== 70 59 71 60 - :doc:`Live Update uAPI </userspace-api/liveupdate>` 72 - - :doc:`/core-api/kho/concepts` 61 + - :doc:`/core-api/kho/index`
+5
Documentation/dev-tools/checkpatch.rst
··· 601 601 602 602 See: https://www.kernel.org/doc/html/latest/process/submitting-patches.html#describe-your-changes 603 603 604 + **BAD_COMMIT_SEPARATOR** 605 + The commit separator is a single line with 3 dashes. 606 + The regex match is '^---$' 607 + Lines that start with 3 dashes and have more content on the same line 608 + may confuse tools that apply patches. 604 609 605 610 Comparison style 606 611 ----------------
+1 -1
Documentation/filesystems/sysfs.rst
··· 120 120 .store = store_foo, 121 121 }; 122 122 123 - Note as stated in include/linux/kernel.h "OTHER_WRITABLE? Generally 123 + Note as stated in include/linux/sysfs.h "OTHER_WRITABLE? Generally 124 124 considered a bad idea." so trying to set a sysfs file writable for 125 125 everyone will fail reverting to RO mode for "Others". 126 126
+1 -1
Documentation/mm/memfd_preservation.rst
··· 20 20 ======== 21 21 22 22 - :doc:`/core-api/liveupdate` 23 - - :doc:`/core-api/kho/concepts` 23 + - :doc:`/core-api/kho/index`
+5 -3
MAINTAINERS
··· 14076 14076 F: Documentation/core-api/kho/* 14077 14077 F: include/linux/kexec_handover.h 14078 14078 F: include/linux/kho/ 14079 + F: include/linux/kho/abi/ 14079 14080 F: kernel/liveupdate/kexec_handover* 14080 14081 F: lib/test_kho.c 14081 14082 F: tools/testing/selftests/kho/ ··· 14769 14768 F: include/linux/liveupdate/ 14770 14769 F: include/uapi/linux/liveupdate.h 14771 14770 F: kernel/liveupdate/ 14771 + F: lib/tests/liveupdate.c 14772 14772 F: mm/memfd_luo.c 14773 14773 F: tools/testing/selftests/liveupdate/ 14774 14774 ··· 16522 16520 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git for-next 16523 16521 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock.git fixes 16524 16522 F: Documentation/core-api/boot-time-mm.rst 16525 - F: Documentation/core-api/kho/bindings/memblock/* 16523 + F: include/linux/kho/abi/memblock.h 16526 16524 F: include/linux/memblock.h 16527 16525 F: mm/bootmem_info.c 16528 16526 F: mm/memblock.c ··· 17593 17591 F: Documentation/core-api/min_heap.rst 17594 17592 F: include/linux/min_heap.h 17595 17593 F: lib/min_heap.c 17596 - F: lib/test_min_heap.c 17594 + F: lib/tests/min_heap_kunit.c 17597 17595 17598 17596 MIPI CCS, SMIA AND SMIA++ IMAGE SENSOR DRIVER 17599 17597 M: Sakari Ailus <sakari.ailus@linux.intel.com> ··· 27501 27499 L: linux-kernel@vger.kernel.org 27502 27500 S: Maintained 27503 27501 F: include/linux/uuid.h 27504 - F: lib/test_uuid.c 27502 + F: lib/tests/uuid_kunit.c 27505 27503 F: lib/uuid.c 27506 27504 27507 27505 UV SYSFS DRIVER
+1 -1
arch/arm/configs/aspeed_g5_defconfig
··· 306 306 CONFIG_PANIC_ON_OOPS=y 307 307 CONFIG_PANIC_TIMEOUT=-1 308 308 CONFIG_SOFTLOCKUP_DETECTOR=y 309 - CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y 309 + CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=1 310 310 CONFIG_BOOTPARAM_HUNG_TASK_PANIC=1 311 311 CONFIG_WQ_WATCHDOG=y 312 312 # CONFIG_SCHED_DEBUG is not set
+1 -1
arch/arm/configs/pxa3xx_defconfig
··· 100 100 CONFIG_DEBUG_KERNEL=y 101 101 CONFIG_MAGIC_SYSRQ=y 102 102 CONFIG_DEBUG_SHIRQ=y 103 - CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y 103 + CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=1 104 104 # CONFIG_SCHED_DEBUG is not set 105 105 CONFIG_DEBUG_SPINLOCK=y 106 106 CONFIG_DEBUG_SPINLOCK_SLEEP=y
+1 -1
arch/arm64/net/bpf_jit_comp.c
··· 2997 2997 u64 plt_target = 0ULL; 2998 2998 bool poking_bpf_entry; 2999 2999 3000 - if (!__bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf)) 3000 + if (!bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf)) 3001 3001 /* Only poking bpf text is supported. Since kernel function 3002 3002 * entry is set up by ftrace, we reply on ftrace to poke kernel 3003 3003 * functions.
+1 -1
arch/loongarch/net/bpf_jit.c
··· 1319 1319 /* Only poking bpf text is supported. Since kernel function entry 1320 1320 * is set up by ftrace, we rely on ftrace to poke kernel functions. 1321 1321 */ 1322 - if (!__bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf)) 1322 + if (!bpf_address_lookup((unsigned long)ip, &size, &offset, namebuf)) 1323 1323 return -ENOTSUPP; 1324 1324 1325 1325 image = ip - offset;
-3
arch/m68k/configs/amiga_defconfig
··· 599 599 CONFIG_PRIME_NUMBERS=m 600 600 CONFIG_CRC_BENCHMARK=y 601 601 CONFIG_XZ_DEC_TEST=m 602 - CONFIG_GLOB_SELFTEST=m 603 602 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set 604 603 CONFIG_MAGIC_SYSRQ=y 605 604 CONFIG_TEST_LOCKUP=m ··· 607 608 CONFIG_KUNIT=m 608 609 CONFIG_KUNIT_ALL_TESTS=m 609 610 CONFIG_TEST_DHRY=m 610 - CONFIG_TEST_MIN_HEAP=m 611 611 CONFIG_TEST_DIV64=m 612 612 CONFIG_TEST_MULDIV64=m 613 613 CONFIG_REED_SOLOMON_TEST=m ··· 615 617 CONFIG_TEST_HEXDUMP=m 616 618 CONFIG_TEST_KSTRTOX=m 617 619 CONFIG_TEST_BITMAP=m 618 - CONFIG_TEST_UUID=m 619 620 CONFIG_TEST_XARRAY=m 620 621 CONFIG_TEST_MAPLE_TREE=m 621 622 CONFIG_TEST_RHASHTABLE=m
-3
arch/m68k/configs/apollo_defconfig
··· 556 556 CONFIG_PRIME_NUMBERS=m 557 557 CONFIG_CRC_BENCHMARK=y 558 558 CONFIG_XZ_DEC_TEST=m 559 - CONFIG_GLOB_SELFTEST=m 560 559 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set 561 560 CONFIG_MAGIC_SYSRQ=y 562 561 CONFIG_TEST_LOCKUP=m ··· 564 565 CONFIG_KUNIT=m 565 566 CONFIG_KUNIT_ALL_TESTS=m 566 567 CONFIG_TEST_DHRY=m 567 - CONFIG_TEST_MIN_HEAP=m 568 568 CONFIG_TEST_DIV64=m 569 569 CONFIG_TEST_MULDIV64=m 570 570 CONFIG_REED_SOLOMON_TEST=m ··· 572 574 CONFIG_TEST_HEXDUMP=m 573 575 CONFIG_TEST_KSTRTOX=m 574 576 CONFIG_TEST_BITMAP=m 575 - CONFIG_TEST_UUID=m 576 577 CONFIG_TEST_XARRAY=m 577 578 CONFIG_TEST_MAPLE_TREE=m 578 579 CONFIG_TEST_RHASHTABLE=m
-3
arch/m68k/configs/atari_defconfig
··· 576 576 CONFIG_PRIME_NUMBERS=m 577 577 CONFIG_CRC_BENCHMARK=y 578 578 CONFIG_XZ_DEC_TEST=m 579 - CONFIG_GLOB_SELFTEST=m 580 579 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set 581 580 CONFIG_MAGIC_SYSRQ=y 582 581 CONFIG_TEST_LOCKUP=m ··· 584 585 CONFIG_KUNIT=m 585 586 CONFIG_KUNIT_ALL_TESTS=m 586 587 CONFIG_TEST_DHRY=m 587 - CONFIG_TEST_MIN_HEAP=m 588 588 CONFIG_TEST_DIV64=m 589 589 CONFIG_TEST_MULDIV64=m 590 590 CONFIG_REED_SOLOMON_TEST=m ··· 592 594 CONFIG_TEST_HEXDUMP=m 593 595 CONFIG_TEST_KSTRTOX=m 594 596 CONFIG_TEST_BITMAP=m 595 - CONFIG_TEST_UUID=m 596 597 CONFIG_TEST_XARRAY=m 597 598 CONFIG_TEST_MAPLE_TREE=m 598 599 CONFIG_TEST_RHASHTABLE=m
-3
arch/m68k/configs/bvme6000_defconfig
··· 548 548 CONFIG_PRIME_NUMBERS=m 549 549 CONFIG_CRC_BENCHMARK=y 550 550 CONFIG_XZ_DEC_TEST=m 551 - CONFIG_GLOB_SELFTEST=m 552 551 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set 553 552 CONFIG_MAGIC_SYSRQ=y 554 553 CONFIG_TEST_LOCKUP=m ··· 556 557 CONFIG_KUNIT=m 557 558 CONFIG_KUNIT_ALL_TESTS=m 558 559 CONFIG_TEST_DHRY=m 559 - CONFIG_TEST_MIN_HEAP=m 560 560 CONFIG_TEST_DIV64=m 561 561 CONFIG_TEST_MULDIV64=m 562 562 CONFIG_REED_SOLOMON_TEST=m ··· 564 566 CONFIG_TEST_HEXDUMP=m 565 567 CONFIG_TEST_KSTRTOX=m 566 568 CONFIG_TEST_BITMAP=m 567 - CONFIG_TEST_UUID=m 568 569 CONFIG_TEST_XARRAY=m 569 570 CONFIG_TEST_MAPLE_TREE=m 570 571 CONFIG_TEST_RHASHTABLE=m
-3
arch/m68k/configs/hp300_defconfig
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC_BENCHMARK=y
 CONFIG_XZ_DEC_TEST=m
-CONFIG_GLOB_SELFTEST=m
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_TEST_LOCKUP=m
···
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_TEST_DHRY=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_TEST_MULDIV64=m
 CONFIG_REED_SOLOMON_TEST=m
···
 CONFIG_TEST_HEXDUMP=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m

arch/m68k/configs/mac_defconfig | -3
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC_BENCHMARK=y
 CONFIG_XZ_DEC_TEST=m
-CONFIG_GLOB_SELFTEST=m
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_TEST_LOCKUP=m
···
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_TEST_DHRY=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_TEST_MULDIV64=m
 CONFIG_REED_SOLOMON_TEST=m
···
 CONFIG_TEST_HEXDUMP=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m

arch/m68k/configs/multi_defconfig | -3
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC_BENCHMARK=y
 CONFIG_XZ_DEC_TEST=m
-CONFIG_GLOB_SELFTEST=m
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_TEST_LOCKUP=m
···
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_TEST_DHRY=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_TEST_MULDIV64=m
 CONFIG_REED_SOLOMON_TEST=m
···
 CONFIG_TEST_HEXDUMP=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m

arch/m68k/configs/mvme147_defconfig | -3
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC_BENCHMARK=y
 CONFIG_XZ_DEC_TEST=m
-CONFIG_GLOB_SELFTEST=m
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_TEST_LOCKUP=m
···
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_TEST_DHRY=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_TEST_MULDIV64=m
 CONFIG_REED_SOLOMON_TEST=m
···
 CONFIG_TEST_HEXDUMP=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m

arch/m68k/configs/mvme16x_defconfig | -3
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC_BENCHMARK=y
 CONFIG_XZ_DEC_TEST=m
-CONFIG_GLOB_SELFTEST=m
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_TEST_LOCKUP=m
···
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_TEST_DHRY=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_TEST_MULDIV64=m
 CONFIG_REED_SOLOMON_TEST=m
···
 CONFIG_TEST_HEXDUMP=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m

arch/m68k/configs/q40_defconfig | -3
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC_BENCHMARK=y
 CONFIG_XZ_DEC_TEST=m
-CONFIG_GLOB_SELFTEST=m
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_TEST_LOCKUP=m
···
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_TEST_DHRY=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_TEST_MULDIV64=m
 CONFIG_REED_SOLOMON_TEST=m
···
 CONFIG_TEST_HEXDUMP=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m

arch/m68k/configs/sun3_defconfig | -3
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC_BENCHMARK=y
 CONFIG_XZ_DEC_TEST=m
-CONFIG_GLOB_SELFTEST=m
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_TEST_LOCKUP=m
···
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_TEST_DHRY=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_TEST_MULDIV64=m
 CONFIG_REED_SOLOMON_TEST=m
···
 CONFIG_TEST_HEXDUMP=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m

arch/m68k/configs/sun3x_defconfig | -3
 CONFIG_PRIME_NUMBERS=m
 CONFIG_CRC_BENCHMARK=y
 CONFIG_XZ_DEC_TEST=m
-CONFIG_GLOB_SELFTEST=m
 # CONFIG_SECTION_MISMATCH_WARN_ONLY is not set
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_TEST_LOCKUP=m
···
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_TEST_DHRY=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_TEST_MULDIV64=m
 CONFIG_REED_SOLOMON_TEST=m
···
 CONFIG_TEST_HEXDUMP=m
 CONFIG_TEST_KSTRTOX=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m
arch/mips/kernel/setup.c | +1
 #include <linux/init.h>
 #include <linux/cpu.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/ioport.h>
 #include <linux/export.h>
 #include <linux/memblock.h>

arch/mips/rb532/devices.c | +1
  */
 #include <linux/kernel.h>
 #include <linux/export.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/ctype.h>
 #include <linux/string.h>

arch/openrisc/configs/or1klitex_defconfig | +1 -1
 CONFIG_PRINTK_TIME=y
 CONFIG_PANIC_ON_OOPS=y
 CONFIG_SOFTLOCKUP_DETECTOR=y
-CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=1
 CONFIG_BUG_ON_DATA_CORRUPTION=y

arch/powerpc/configs/ppc64_defconfig | -2
 CONFIG_KUNIT=m
 CONFIG_KUNIT_ALL_TESTS=m
 CONFIG_LKDTM=m
-CONFIG_TEST_MIN_HEAP=m
 CONFIG_TEST_DIV64=m
 CONFIG_BACKTRACE_SELF_TEST=m
 CONFIG_TEST_REF_TRACKER=m
···
 CONFIG_TEST_PRINTF=m
 CONFIG_TEST_SCANF=m
 CONFIG_TEST_BITMAP=m
-CONFIG_TEST_UUID=m
 CONFIG_TEST_XARRAY=m
 CONFIG_TEST_MAPLE_TREE=m
 CONFIG_TEST_RHASHTABLE=m

arch/powerpc/configs/skiroot_defconfig | +1 -1
 CONFIG_DEBUG_STACKOVERFLOW=y
 CONFIG_PANIC_ON_OOPS=y
 CONFIG_SOFTLOCKUP_DETECTOR=y
-CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=1
 CONFIG_HARDLOCKUP_DETECTOR=y
 CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
 CONFIG_WQ_WATCHDOG=y

arch/powerpc/kernel/btext.c | +1
  */
 #include <linux/kernel.h>
 #include <linux/string.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/export.h>
 #include <linux/font.h>

arch/powerpc/net/bpf_jit_comp.c | +1 -1
 	bpf_func = (unsigned long)ip;
 
 	/* We currently only support poking bpf programs */
-	if (!__bpf_address_lookup(bpf_func, &size, &offset, name)) {
+	if (!bpf_address_lookup(bpf_func, &size, &offset, name)) {
 		pr_err("%s (0x%lx): kernel/modules are not supported\n", __func__, bpf_func);
 		return -EOPNOTSUPP;
 	}

arch/s390/configs/debug_defconfig | +1 -1
 CONFIG_FAULT_INJECTION_CONFIGFS=y
 CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
 CONFIG_LKDTM=m
-CONFIG_TEST_MIN_HEAP=y
+CONFIG_MIN_HEAP_KUNIT_TEST=m
 CONFIG_KPROBES_SANITY_TEST=m
 CONFIG_RBTREE_TEST=y
 CONFIG_INTERVAL_TREE_TEST=m
arch/s390/include/asm/processor.h | +1
 #include <linux/cpumask.h>
 #include <linux/linkage.h>
 #include <linux/irqflags.h>
+#include <linux/instruction_pointer.h>
 #include <linux/bitops.h>
 #include <asm/fpu-types.h>
 #include <asm/cpu.h>

arch/s390/kernel/alternative.c | +1
 #define pr_fmt(fmt) "alt: " fmt
 #endif
 
+#include <linux/hex.h>
 #include <linux/uaccess.h>
 #include <linux/printk.h>
 #include <asm/nospec-branch.h>

arch/s390/kernel/stackprotector.c | +1
 #endif
 
 #include <linux/export.h>
+#include <linux/hex.h>
 #include <linux/uaccess.h>
 #include <linux/printk.h>
 #include <asm/abs_lowcore.h>

arch/um/drivers/vector_kern.c | +1
 #include <linux/memblock.h>
 #include <linux/etherdevice.h>
 #include <linux/ethtool.h>
+#include <linux/hex.h>
 #include <linux/inetdevice.h>
 #include <linux/init.h>
 #include <linux/list.h>

arch/x86/kernel/setup.c | +6
 
 int __init ima_get_kexec_buffer(void **addr, size_t *size)
 {
+	int ret;
+
 	if (!ima_kexec_buffer_size)
 		return -ENOENT;
+
+	ret = ima_validate_range(ima_kexec_buffer_phys, ima_kexec_buffer_size);
+	if (ret)
+		return ret;
 
 	*addr = __va(ima_kexec_buffer_phys);
 	*size = ima_kexec_buffer_size;
arch/xtensa/platforms/iss/network.c | +1
 
 #define pr_fmt(fmt) "%s: " fmt, __func__
 
+#include <linux/hex.h>
 #include <linux/list.h>
 #include <linux/irq.h>
 #include <linux/spinlock.h>

certs/blacklist.c | +1
 #include <linux/sched.h>
 #include <linux/ctype.h>
 #include <linux/err.h>
+#include <linux/hex.h>
 #include <linux/seq_file.h>
 #include <linux/uidgid.h>
 #include <keys/asymmetric-type.h>

crypto/asymmetric_keys/asymmetric_type.c | +1
 #include <keys/asymmetric-subtype.h>
 #include <keys/asymmetric-parser.h>
 #include <crypto/public_key.h>
+#include <linux/hex.h>
 #include <linux/seq_file.h>
 #include <linux/module.h>
 #include <linux/overflow.h>

crypto/asymmetric_keys/x509_public_key.c | +1
 #include <keys/asymmetric-parser.h>
 #include <keys/asymmetric-subtype.h>
 #include <keys/system_keyring.h>
+#include <linux/hex.h>
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>

crypto/krb5/selftest.c | +1
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/hex.h>
 #include <linux/slab.h>
 #include <crypto/skcipher.h>
 #include <crypto/hash.h>
drivers/android/binder.c | +4 -5
 {
 	struct binder_proc *proc = filp->private_data;
 
-	if (proc->tsk != current->group_leader)
+	if (!same_thread_group(proc->tsk, current))
 		return -EINVAL;
 
 	binder_debug(BINDER_DEBUG_OPEN_CLOSE,
···
 	bool existing_pid = false;
 
 	binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__,
-		     current->group_leader->pid, current->pid);
+		     current->tgid, current->pid);
 
 	proc = kzalloc(sizeof(*proc), GFP_KERNEL);
 	if (proc == NULL)
···
 	dbitmap_init(&proc->dmap);
 	spin_lock_init(&proc->inner_lock);
 	spin_lock_init(&proc->outer_lock);
-	get_task_struct(current->group_leader);
-	proc->tsk = current->group_leader;
+	proc->tsk = get_task_struct(current->group_leader);
+	proc->pid = current->tgid;
 	proc->cred = get_cred(filp->f_cred);
 	INIT_LIST_HEAD(&proc->todo);
 	init_waitqueue_head(&proc->freeze_wait);
···
 	binder_alloc_init(&proc->alloc);
 
 	binder_stats_created(BINDER_STAT_PROC);
-	proc->pid = current->group_leader->pid;
 	INIT_LIST_HEAD(&proc->delivered_death);
 	INIT_LIST_HEAD(&proc->delivered_freeze);
 	INIT_LIST_HEAD(&proc->waiting_threads);

drivers/android/binder_alloc.c | +1 -1
 VISIBLE_IF_KUNIT void __binder_alloc_init(struct binder_alloc *alloc,
 					  struct list_lru *freelist)
 {
-	alloc->pid = current->group_leader->pid;
+	alloc->pid = current->tgid;
 	alloc->mm = current->mm;
 	mmgrab(alloc->mm);
 	mutex_init(&alloc->mutex);
drivers/atm/nicstar.c | +1
 #include <linux/types.h>
 #include <linux/string.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/sched.h>
 #include <linux/timer.h>

drivers/auxdisplay/hd44780_common.c | +1
 // SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/hex.h>
 #include <linux/module.h>
 #include <linux/sched.h>
 #include <linux/slab.h>

drivers/auxdisplay/lcd2s.c | +1
  * Author: Lars Pöschel <poeschel@lemonage.de>
  * All rights reserved.
  */
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/mod_devicetable.h>
 #include <linux/module.h>

drivers/block/floppy.c | -2
 	}
 }
 
-#define ARRAY_END(X) (&((X)[ARRAY_SIZE(X)]))
-
 static int floppy_request_regions(int fdc)
 {
 	const struct io_region *p;
drivers/bus/moxtet.c | +1
 #include <dt-bindings/bus/moxtet.h>
 #include <linux/bitops.h>
 #include <linux/debugfs.h>
+#include <linux/hex.h>
 #include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/moxtet.h>

drivers/char/tpm/tpm.h | +1
 #include <linux/module.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/mutex.h>
 #include <linux/sched.h>
 #include <linux/platform_device.h>

drivers/comedi/drivers/jr3_pci.c | +1
 #include <linux/module.h>
 #include <linux/delay.h>
 #include <linux/ctype.h>
+#include <linux/hex.h>
 #include <linux/jiffies.h>
 #include <linux/slab.h>
 #include <linux/timer.h>

drivers/firmware/broadcom/bcm47xx_sprom.c | +1
 #include <linux/bcm47xx_sprom.h>
 #include <linux/bcma/bcma.h>
 #include <linux/etherdevice.h>
+#include <linux/hex.h>
 #include <linux/if_ether.h>
 #include <linux/ssb/ssb.h>
 

drivers/gpio/gpio-macsmc.c | +1
 #include <linux/bitmap.h>
 #include <linux/device.h>
 #include <linux/gpio/driver.h>
+#include <linux/hex.h>
 #include <linux/mfd/core.h>
 #include <linux/mfd/macsmc.h>
 

drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | +1 -1
 		goto create_evict_fence_fail;
 	}
 
-	info->pid = get_task_pid(current->group_leader, PIDTYPE_PID);
+	info->pid = get_task_pid(current, PIDTYPE_TGID);
 	INIT_DELAYED_WORK(&info->restore_userptr_work,
 			  amdgpu_amdkfd_restore_userptr_worker);
 

drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | +1 -4
 	vm->task_info->task.pid = current->pid;
 	get_task_comm(vm->task_info->task.comm, current);
 
-	if (current->group_leader->mm != current->mm)
-		return;
-
-	vm->task_info->tgid = current->group_leader->pid;
+	vm->task_info->tgid = current->tgid;
 	get_task_comm(vm->task_info->process_name, current->group_leader);
 }
 
drivers/gpu/drm/amd/amdkfd/kfd_process.c | -6
 	if (!(thread->mm && mmget_not_zero(thread->mm)))
 		return ERR_PTR(-EINVAL);
 
-	/* Only the pthreads threading model is supported. */
-	if (thread->group_leader->mm != thread->mm) {
-		mmput(thread->mm);
-		return ERR_PTR(-EINVAL);
-	}
-
 	/* If the process just called exec(3), it is possible that the
 	 * cleanup of the kfd_process (following the release of the mm
 	 * of the old process image) is still in the cleanup work queue.

drivers/gpu/drm/ci/arm.config | +1 -1
 CONFIG_PROVE_LOCKING=n
 CONFIG_DEBUG_LOCKDEP=n
 CONFIG_SOFTLOCKUP_DETECTOR=n
-CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=n
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=0
 
 CONFIG_FW_LOADER_COMPRESS=y
 

drivers/gpu/drm/ci/arm64.config | +1 -1
 CONFIG_PROVE_LOCKING=n
 CONFIG_DEBUG_LOCKDEP=n
 CONFIG_SOFTLOCKUP_DETECTOR=y
-CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=1
 
 CONFIG_DETECT_HUNG_TASK=y
 

drivers/gpu/drm/ci/x86_64.config | +1 -1
 CONFIG_PROVE_LOCKING=n
 CONFIG_DEBUG_LOCKDEP=n
 CONFIG_SOFTLOCKUP_DETECTOR=y
-CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=1
 
 CONFIG_DETECT_HUNG_TASK=y
 

drivers/gpu/drm/i915/gt/selftest_ring_submission.c | +1
  * Copyright © 2020 Intel Corporation
  */
 
+#include "i915_selftest.h"
 #include "intel_engine_pm.h"
 #include "selftests/igt_flush_test.h"
 

drivers/gpu/drm/i915/i915_selftest.h | +2
 
 #include <linux/types.h>
 
+#define STACK_MAGIC 0xdeadbeef
+
 struct pci_dev;
 struct drm_i915_private;
 

drivers/gpu/drm/panfrost/panfrost_gem.c | +1 -1
 static void panfrost_gem_debugfs_bo_add(struct panfrost_device *pfdev,
 					struct panfrost_gem_object *bo)
 {
-	bo->debugfs.creator.tgid = current->group_leader->pid;
+	bo->debugfs.creator.tgid = current->tgid;
 	get_task_comm(bo->debugfs.creator.process_name, current->group_leader);
 
 	mutex_lock(&pfdev->debugfs.gems_lock);

drivers/gpu/drm/panthor/panthor_gem.c | +1 -1
 	struct panthor_device *ptdev = container_of(bo->base.base.dev,
 						    struct panthor_device, base);
 
-	bo->debugfs.creator.tgid = current->group_leader->pid;
+	bo->debugfs.creator.tgid = current->tgid;
 	get_task_comm(bo->debugfs.creator.process_name, current->group_leader);
 
 	mutex_lock(&ptdev->gems.lock);
drivers/hid/hid-picolcd_debugfs.c | +1
 #include <linux/hid-debug.h>
 
 #include <linux/fb.h>
+#include <linux/hex.h>
 #include <linux/seq_file.h>
 #include <linux/debugfs.h>
 

drivers/hwmon/pmbus/q54sj108a2.c | +1
  */
 
 #include <linux/debugfs.h>
+#include <linux/hex.h>
 #include <linux/i2c.h>
 #include <linux/kstrtox.h>
 #include <linux/module.h>

drivers/hwmon/pmbus/ucd9000.c | +1
 
 #include <linux/debugfs.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of.h>

drivers/infiniband/core/umem_odp.c | +2 -2
 	umem->owning_mm = current->mm;
 	umem_odp->page_shift = PAGE_SHIFT;
 
-	umem_odp->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
+	umem_odp->tgid = get_task_pid(current, PIDTYPE_TGID);
 	ib_init_umem_implicit_odp(umem_odp);
 	return umem_odp;
 }
···
 	umem_odp->page_shift = HPAGE_SHIFT;
 #endif
 
-	umem_odp->tgid = get_task_pid(current->group_leader, PIDTYPE_PID);
+	umem_odp->tgid = get_task_pid(current, PIDTYPE_TGID);
 	ret = ib_init_umem_odp(umem_odp, ops);
 	if (ret)
 		goto err_put_pid;
drivers/infiniband/ulp/srp/ib_srp.c | +1
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <linux/module.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/err.h>

drivers/infiniband/ulp/srpt/ib_srpt.c | +1
  */
 
 #include <linux/module.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/err.h>

drivers/input/touchscreen/iqs5xx.c | +1
 #include <linux/err.h>
 #include <linux/firmware.h>
 #include <linux/gpio/consumer.h>
+#include <linux/hex.h>
 #include <linux/i2c.h>
 #include <linux/input.h>
 #include <linux/input/mt.h>

drivers/md/dm-crypt.c | +1
 #include <linux/completion.h>
 #include <linux/err.h>
 #include <linux/module.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/key.h>

drivers/md/dm-integrity.c | +1
 #include <linux/sort.h>
 #include <linux/rbtree.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/random.h>
 #include <linux/reboot.h>
 #include <crypto/hash.h>

drivers/md/dm-verity-target.c | +1
 #include "dm-verity-fec.h"
 #include "dm-verity-verify-sig.h"
 #include "dm-audit.h"
+#include <linux/hex.h>
 #include <linux/module.h>
 #include <linux/reboot.h>
 #include <linux/string.h>

drivers/media/cec/usb/extron-da-hd-4k-plus/extron-da-hd-4k-plus.c | +1
 #include <linux/completion.h>
 #include <linux/ctype.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/kernel.h>

drivers/media/cec/usb/rainshadow/rainshadow-cec.c | +1
 #include <linux/completion.h>
 #include <linux/ctype.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/kernel.h>

drivers/media/i2c/ccs/ccs-reg-access.c | +1
 #include <linux/unaligned.h>
 
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/i2c.h>
 
 #include "ccs.h"

drivers/media/usb/pvrusb2/pvrusb2-debugifc.c | +1
  * Copyright (C) 2005 Mike Isely <isely@pobox.com>
  */
 
+#include <linux/hex.h>
 #include <linux/string.h>
 #include "pvrusb2-debugifc.h"
 #include "pvrusb2-hdw.h"
drivers/misc/kgdbts.c | +1
 #include <linux/syscalls.h>
 #include <linux/nmi.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/kthread.h>
 #include <linux/module.h>
 #include <linux/sched/task.h>

drivers/misc/pch_phub.c | +1
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/fs.h>
+#include <linux/hex.h>
 #include <linux/uaccess.h>
 #include <linux/string.h>
 #include <linux/pci.h>

drivers/net/bonding/bond_options.c | +1
  */
 
 #include <linux/errno.h>
+#include <linux/hex.h>
 #include <linux/if.h>
 #include <linux/netdevice.h>
 #include <linux/spinlock.h>

drivers/net/can/can327.c | +1
 #include <linux/bitops.h>
 #include <linux/ctype.h>
 #include <linux/errno.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/list.h>
 #include <linux/lockdep.h>

drivers/net/can/slcan/slcan-core.c | +1
 #include <linux/netdevice.h>
 #include <linux/skbuff.h>
 #include <linux/rtnetlink.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/workqueue.h>

drivers/net/ethernet/chelsio/cxgb3/common.h | +1
 #include <linux/types.h>
 #include <linux/ctype.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/netdevice.h>
 #include <linux/ethtool.h>
 #include <linux/mdio.h>

drivers/net/ethernet/mellanox/mlx5/core/steering/sws/dr_dbg.c | +1
 // Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 
 #include <linux/debugfs.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/seq_file.h>
 #include <linux/version.h>

drivers/net/ethernet/micrel/ksz884x.c | +1
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/hex.h>
 #include <linux/ioport.h>
 #include <linux/pci.h>
 #include <linux/proc_fs.h>

drivers/net/ethernet/pasemi/pasemi_mac.c | +1
 #include <linux/interrupt.h>
 #include <linux/dmaengine.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/netdevice.h>
 #include <linux/of_mdio.h>
 #include <linux/etherdevice.h>

drivers/net/netconsole.c | +1
 #include <linux/inet.h>
 #include <linux/configfs.h>
 #include <linux/etherdevice.h>
+#include <linux/hex.h>
 #include <linux/u64_stats_sync.h>
 #include <linux/utsname.h>
 #include <linux/rtnetlink.h>
drivers/net/netdevsim/dev.c | +1
 #include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/etherdevice.h>
+#include <linux/hex.h>
 #include <linux/inet.h>
 #include <linux/jiffies.h>
 #include <linux/kernel.h>

drivers/net/usb/r8152.c | +1
 #include <linux/etherdevice.h>
 #include <linux/mii.h>
 #include <linux/ethtool.h>
+#include <linux/hex.h>
 #include <linux/phy.h>
 #include <linux/usb.h>
 #include <linux/crc32.h>

drivers/net/usb/usbnet.c | +1
  */
 
 #include <linux/module.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/netdevice.h>
 #include <linux/etherdevice.h>

drivers/net/wireless/ath/ath6kl/debug.c | +1
 
 #include <linux/skbuff.h>
 #include <linux/fs.h>
+#include <linux/hex.h>
 #include <linux/vmalloc.h>
 #include <linux/export.h>
 

drivers/net/wireless/intel/iwlwifi/fw/debugfs.c | +1
 #include "api/commands.h"
 #include "debugfs.h"
 #include "dbg.h"
+#include <linux/hex.h>
 #include <linux/seq_file.h>
 
 #define FWRT_DEBUGFS_OPEN_WRAPPER(name, buflen, argtype) \

drivers/net/wireless/intel/iwlwifi/mld/debugfs.c | +1
 #include "fw/api/rfi.h"
 #include "fw/dhc-utils.h"
 #include <linux/dmi.h>
+#include <linux/hex.h>
 
 #define MLD_DEBUGFS_READ_FILE_OPS(name, bufsz) \
 	_MLD_DEBUGFS_READ_FILE_OPS(name, bufsz, struct iwl_mld)

drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c | +1
  */
 #include <linux/vmalloc.h>
 #include <linux/err.h>
+#include <linux/hex.h>
 #include <linux/ieee80211.h>
 #include <linux/netdevice.h>
 #include <linux/dmi.h>

drivers/net/wireless/intel/iwlwifi/mvm/mvm.h | +1
 #include <linux/spinlock.h>
 #include <linux/cleanup.h>
 #include <linux/leds.h>
+#include <linux/hex.h>
 #include <linux/in6.h>
 
 #ifdef CONFIG_THERMAL

drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h | +1
 #define __MT7615_H
 
 #include <linux/completion.h>
+#include <linux/hex.h>
 #include <linux/interrupt.h>
 #include <linux/ktime.h>
 #include <linux/regmap.h>

drivers/net/wireless/realtek/rtw89/debug.c | +1
 /* Copyright(c) 2019-2020 Realtek Corporation
  */
 
+#include <linux/hex.h>
 #include <linux/vmalloc.h>
 
 #include "coex.h"
drivers/net/wireless/silabs/wfx/fwio.c | +1
  * Copyright (c) 2010, ST-Ericsson
  */
 #include <linux/firmware.h>
+#include <linux/hex.h>
 #include <linux/slab.h>
 #include <linux/mm.h>
 #include <linux/bitfield.h>

drivers/nvme/target/configfs.c | +1
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/hex.h>
 #include <linux/kstrtox.h>
 #include <linux/kernel.h>
 #include <linux/module.h>

drivers/nvme/target/core.c | +1
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/hex.h>
 #include <linux/module.h>
 #include <linux/random.h>
 #include <linux/rculist.h>

drivers/nvmem/brcm_nvram.c | +1
 
 #include <linux/bcm47xx_nvram.h>
 #include <linux/etherdevice.h>
+#include <linux/hex.h>
 #include <linux/if_ether.h>
 #include <linux/io.h>
 #include <linux/mod_devicetable.h>

drivers/nvmem/layouts/u-boot-env.c | +1
 #include <linux/crc32.h>
 #include <linux/etherdevice.h>
 #include <linux/export.h>
+#include <linux/hex.h>
 #include <linux/if_ether.h>
 #include <linux/nvmem-consumer.h>
 #include <linux/nvmem-provider.h>

drivers/of/kexec.c | +3 -12
 {
 	int ret, len;
 	unsigned long tmp_addr;
-	unsigned long start_pfn, end_pfn;
 	size_t tmp_size;
 	const void *prop;
 
···
 	if (!tmp_size)
 		return -ENOENT;
 
-	/*
-	 * Calculate the PFNs for the buffer and ensure
-	 * they are with in addressable memory.
-	 */
-	start_pfn = PHYS_PFN(tmp_addr);
-	end_pfn = PHYS_PFN(tmp_addr + tmp_size - 1);
-	if (!page_is_ram(start_pfn) || !page_is_ram(end_pfn)) {
-		pr_warn("IMA buffer at 0x%lx, size = 0x%zx beyond memory\n",
-			tmp_addr, tmp_size);
-		return -EINVAL;
-	}
+	ret = ima_validate_range(tmp_addr, tmp_size);
+	if (ret)
+		return ret;
 
 	*addr = __va(tmp_addr);
 	*size = tmp_size;
drivers/platform/x86/intel/wmi/thunderbolt.c | +1
 #include <linux/acpi.h>
 #include <linux/device.h>
 #include <linux/fs.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/string.h>

drivers/pnp/support.c | +1
 
 #include <linux/module.h>
 #include <linux/ctype.h>
+#include <linux/hex.h>
 #include <linux/pnp.h>
 #include "base.h"
 

drivers/ptp/ptp_pch.c | +1
 
 #include <linux/device.h>
 #include <linux/err.h>
+#include <linux/hex.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/io-64-nonatomic-lo-hi.h>

drivers/rapidio/rio-scan.c | +2 -1
 
 	if (idtab == NULL) {
 		pr_err("RIO: failed to allocate destID table\n");
-		rio_free_net(net);
+		kfree(net);
+		mport->net = NULL;
 		net = NULL;
 	} else {
 		net->enum_data = idtab;
drivers/s390/cio/blacklist.c | +1
 
 #define pr_fmt(fmt) "cio: " fmt
 
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/vmalloc.h>
 #include <linux/proc_fs.h>

drivers/s390/crypto/ap_bus.c | +1
 #include <linux/kernel_stat.h>
 #include <linux/moduleparam.h>
 #include <linux/export.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/delay.h>
 #include <linux/err.h>

drivers/s390/crypto/zcrypt_cex4.c | +1
 
 #include <linux/module.h>
 #include <linux/slab.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/err.h>
 #include <linux/atomic.h>

drivers/s390/virtio/virtio_ccw.c | +1
  */
 
 #include <linux/kernel_stat.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/memblock.h>
 #include <linux/err.h>

drivers/scsi/aacraid/rx.c | +1
  */
 
 #include <linux/kernel.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/pci.h>

drivers/scsi/ips.c | +1
 #include <linux/stddef.h>
 #include <linux/string.h>
 #include <linux/errno.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/ioport.h>
 #include <linux/slab.h>

drivers/scsi/libsas/sas_scsi_host.c | +1
 #include <linux/firmware.h>
 #include <linux/export.h>
 #include <linux/ctype.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 
 #include "sas_internal.h"

drivers/scsi/qla2xxx/tcm_qla2xxx.c | +1
 #include <linux/module.h>
 #include <linux/utsname.h>
 #include <linux/vmalloc.h>
+#include <linux/hex.h>
 #include <linux/list.h>
 #include <linux/slab.h>
 #include <linux/types.h>

drivers/scsi/scsi_transport_fc.c | +1
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/bsg-lib.h>
 #include <scsi/scsi_device.h>

drivers/staging/rtl8723bs/core/rtw_ieee80211.c | +1
 ******************************************************************************/
 
 #include <drv_types.h>
+#include <linux/hex.h>
 #include <linux/of.h>
 #include <linux/unaligned.h>
 

drivers/target/iscsi/iscsi_target_auth.c | +1
 #include <linux/kernel.h>
 #include <linux/string.h>
 #include <linux/err.h>
+#include <linux/hex.h>
 #include <linux/random.h>
 #include <linux/scatterlist.h>
 #include <target/iscsi/iscsi_target_core.h>
drivers/target/target_core_fabric_lib.c | +1
  * on the formats implemented in this file.
  */
 
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/string.h>
 #include <linux/ctype.h>

drivers/target/target_core_spc.c | +1
  * Nicholas A. Bellinger <nab@kernel.org>
  */
 
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/unaligned.h>

drivers/target/tcm_fc/tfc_conf.c | +1
 #include <linux/moduleparam.h>
 #include <generated/utsrelease.h>
 #include <linux/utsname.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/kthread.h>

drivers/thunderbolt/switch.c | +1
  */
 
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/idr.h>
 #include <linux/module.h>
 #include <linux/nvmem-provider.h>

drivers/tty/vt/vt.c | +1
 #include <linux/kernel.h>
 #include <linux/string.h>
 #include <linux/errno.h>
+#include <linux/hex.h>
 #include <linux/kd.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>

drivers/ufs/core/ufshcd.c | +1
 #include <linux/blkdev.h>
 #include <linux/clk.h>
 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/pm_opp.h>

drivers/usb/atm/speedtch.c | +1
 #include <linux/device.h>
 #include <linux/errno.h>
 #include <linux/firmware.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>

drivers/usb/atm/ueagle-atm.c | +1
 #include <linux/module.h>
 #include <linux/moduleparam.h>
 #include <linux/crc32.h>
+#include <linux/hex.h>
 #include <linux/usb.h>
 #include <linux/firmware.h>
 #include <linux/ctype.h>

drivers/usb/gadget/function/u_ether.c | +1
 #include <linux/ctype.h>
 #include <linux/etherdevice.h>
 #include <linux/ethtool.h>
+#include <linux/hex.h>
 #include <linux/if_vlan.h>
 #include <linux/string_helpers.h>
 #include <linux/usb/composite.h>

drivers/usb/gadget/function/uvc_configfs.c | +1
 
 #include "uvc_configfs.h"
 
+#include <linux/hex.h>
 #include <linux/sort.h>
 #include <linux/usb/uvc.h>
 #include <linux/usb/video.h>

drivers/usb/typec/ucsi/debugfs.c | +1
  *	Gopal Saranya <saranya.gopal@intel.com>
  */
 #include <linux/debugfs.h>
+#include <linux/hex.h>
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <linux/types.h>

drivers/usb/typec/ucsi/ucsi_ccg.c | +1
 #include <linux/acpi.h>
 #include <linux/delay.h>
 #include <linux/firmware.h>
+#include <linux/hex.h>
 #include <linux/i2c.h>
 #include <linux/module.h>
 #include <linux/pci.h>

drivers/watchdog/hpwdt.c | +1
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
 #include <linux/device.h>
+#include <linux/hex.h>
 #include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/module.h>

fs/adfs/dir.c | +1
  *
  * Common directory handling for ADFS
  */
+#include <linux/hex.h>
 #include <linux/slab.h>
 #include "adfs.h"
 

fs/binfmt_misc.c | +1
 
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/sched/mm.h>
 #include <linux/magic.h>

fs/ecryptfs/ecryptfs_kernel.h | +1
 #include <linux/kernel.h>
 #include <linux/fs.h>
 #include <linux/fs_stack.h>
+#include <linux/hex.h>
 #include <linux/namei.h>
 #include <linux/scatterlist.h>
 #include <linux/hash.h>
+1
fs/efivarfs/vars.c
··· 9 9 #include <linux/capability.h> 10 10 #include <linux/types.h> 11 11 #include <linux/errno.h> 12 + #include <linux/hex.h> 12 13 #include <linux/init.h> 13 14 #include <linux/mm.h> 14 15 #include <linux/module.h>
+2 -2
fs/fat/cache.c
··· 54 54 kmem_cache_destroy(fat_cache_cachep); 55 55 } 56 56 57 - static inline struct fat_cache *fat_cache_alloc(struct inode *inode) 57 + static inline struct fat_cache *fat_cache_alloc(void) 58 58 { 59 59 return kmem_cache_alloc(fat_cache_cachep, GFP_NOFS); 60 60 } ··· 144 144 MSDOS_I(inode)->nr_caches++; 145 145 spin_unlock(&MSDOS_I(inode)->cache_lru_lock); 146 146 147 - tmp = fat_cache_alloc(inode); 147 + tmp = fat_cache_alloc(); 148 148 if (!tmp) { 149 149 spin_lock(&MSDOS_I(inode)->cache_lru_lock); 150 150 MSDOS_I(inode)->nr_caches--;
+1
fs/fat/dir.c
··· 17 17 #include <linux/slab.h> 18 18 #include <linux/compat.h> 19 19 #include <linux/filelock.h> 20 + #include <linux/hex.h> 20 21 #include <linux/uaccess.h> 21 22 #include <linux/iversion.h> 22 23 #include "fat.h"
+6 -1
fs/fat/namei_msdos.c
··· 325 325 err = fat_remove_entries(dir, &sinfo); /* and releases bh */ 326 326 if (err) 327 327 goto out; 328 - drop_nlink(dir); 328 + if (dir->i_nlink >= 3) 329 + drop_nlink(dir); 330 + else { 331 + fat_fs_error(sb, "parent dir link count too low (%u)", 332 + dir->i_nlink); 333 + } 329 334 330 335 clear_nlink(inode); 331 336 fat_detach(inode);
+7 -1
fs/fat/namei_vfat.c
··· 20 20 #include <linux/ctype.h> 21 21 #include <linux/slab.h> 22 22 #include <linux/namei.h> 23 + #include <linux/hex.h> 23 24 #include <linux/kernel.h> 24 25 #include <linux/iversion.h> 25 26 #include "fat.h" ··· 804 803 err = fat_remove_entries(dir, &sinfo); /* and releases bh */ 805 804 if (err) 806 805 goto out; 807 - drop_nlink(dir); 806 + if (dir->i_nlink >= 3) 807 + drop_nlink(dir); 808 + else { 809 + fat_fs_error(sb, "parent dir link count too low (%u)", 810 + dir->i_nlink); 811 + } 808 812 809 813 clear_nlink(inode); 810 814 fat_truncate_time(inode, NULL, FAT_UPDATE_ATIME | FAT_UPDATE_CMTIME);
+1
fs/gfs2/lock_dlm.c
··· 8 8 9 9 #include <linux/fs.h> 10 10 #include <linux/dlm.h> 11 + #include <linux/hex.h> 11 12 #include <linux/slab.h> 12 13 #include <linux/types.h> 13 14 #include <linux/delay.h>
+1
fs/nfsd/nfs4recover.c
··· 39 39 #include <linux/namei.h> 40 40 #include <linux/sched.h> 41 41 #include <linux/fs.h> 42 + #include <linux/hex.h> 42 43 #include <linux/module.h> 43 44 #include <net/net_namespace.h> 44 45 #include <linux/sunrpc/rpc_pipe_fs.h>
+1
fs/ntfs3/ntfs_fs.h
··· 14 14 #include <linux/fs.h> 15 15 #include <linux/highmem.h> 16 16 #include <linux/kernel.h> 17 + #include <linux/hex.h> 17 18 #include <linux/mm.h> 18 19 #include <linux/mutex.h> 19 20 #include <linux/page-flags.h>
+5 -4
fs/ocfs2/alloc.c
··· 1812 1812 ret = -EROFS; 1813 1813 goto out; 1814 1814 } 1815 - if (le16_to_cpu(el->l_next_free_rec) == 0) { 1815 + if (!el->l_next_free_rec || !el->l_count) { 1816 1816 ocfs2_error(ocfs2_metadata_cache_get_super(ci), 1817 - "Owner %llu has empty extent list at depth %u\n", 1817 + "Owner %llu has empty extent list at depth %u\n" 1818 + "(next free=%u count=%u)\n", 1818 1819 (unsigned long long)ocfs2_metadata_cache_owner(ci), 1819 - le16_to_cpu(el->l_tree_depth)); 1820 + le16_to_cpu(el->l_tree_depth), 1821 + le16_to_cpu(el->l_next_free_rec), le16_to_cpu(el->l_count)); 1820 1822 ret = -EROFS; 1821 1823 goto out; 1822 - 1823 1824 } 1824 1825 1825 1826 for(i = 0; i < le16_to_cpu(el->l_next_free_rec) - 1; i++) {
+2 -2
fs/ocfs2/cluster/heartbeat.c
··· 1942 1942 NULL, 1943 1943 }; 1944 1944 1945 - static struct configfs_item_operations o2hb_region_item_ops = { 1945 + static const struct configfs_item_operations o2hb_region_item_ops = { 1946 1946 .release = o2hb_region_release, 1947 1947 }; 1948 1948 ··· 2193 2193 NULL, 2194 2194 }; 2195 2195 2196 - static struct configfs_group_operations o2hb_heartbeat_group_group_ops = { 2196 + static const struct configfs_group_operations o2hb_heartbeat_group_group_ops = { 2197 2197 .make_item = o2hb_heartbeat_group_make_item, 2198 2198 .drop_item = o2hb_heartbeat_group_drop_item, 2199 2199 };
+4 -4
fs/ocfs2/cluster/nodemanager.c
··· 396 396 NULL, 397 397 }; 398 398 399 - static struct configfs_item_operations o2nm_node_item_ops = { 399 + static const struct configfs_item_operations o2nm_node_item_ops = { 400 400 .release = o2nm_node_release, 401 401 }; 402 402 ··· 638 638 config_item_put(item); 639 639 } 640 640 641 - static struct configfs_group_operations o2nm_node_group_group_ops = { 641 + static const struct configfs_group_operations o2nm_node_group_group_ops = { 642 642 .make_item = o2nm_node_group_make_item, 643 643 .drop_item = o2nm_node_group_drop_item, 644 644 }; ··· 657 657 kfree(cluster); 658 658 } 659 659 660 - static struct configfs_item_operations o2nm_cluster_item_ops = { 660 + static const struct configfs_item_operations o2nm_cluster_item_ops = { 661 661 .release = o2nm_cluster_release, 662 662 }; 663 663 ··· 741 741 config_item_put(item); 742 742 } 743 743 744 - static struct configfs_group_operations o2nm_cluster_group_group_ops = { 744 + static const struct configfs_group_operations o2nm_cluster_group_group_ops = { 745 745 .make_group = o2nm_cluster_group_make_group, 746 746 .drop_item = o2nm_cluster_group_drop_item, 747 747 };
+1 -1
fs/ocfs2/dlm/dlmdomain.c
··· 1105 1105 mlog(0, "Node %u queries hb regions on domain %s\n", qr->qr_node, 1106 1106 qr->qr_domain); 1107 1107 1108 - /* buffer used in dlm_mast_regions() */ 1108 + /* buffer used in dlm_match_regions() */ 1109 1109 local = kmalloc(sizeof(qr->qr_regions), GFP_KERNEL); 1110 1110 if (!local) 1111 1111 return -ENOMEM;
+4 -2
fs/ocfs2/export.c
··· 74 74 * nice 75 75 */ 76 76 status = -ESTALE; 77 - } else 77 + } else if (status != -ESTALE) { 78 78 mlog(ML_ERROR, "test inode bit failed %d\n", status); 79 + } 79 80 goto unlock_nfs_sync; 80 81 } 81 82 ··· 163 162 if (status < 0) { 164 163 if (status == -EINVAL) { 165 164 status = -ESTALE; 166 - } else 165 + } else if (status != -ESTALE) { 167 166 mlog(ML_ERROR, "test inode bit failed %d\n", status); 167 + } 168 168 parent = ERR_PTR(status); 169 169 goto bail_unlock; 170 170 }
+26 -6
fs/ocfs2/inode.c
··· 1494 1494 goto bail; 1495 1495 } 1496 1496 1497 - if ((le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL) && 1498 - le32_to_cpu(di->i_clusters)) { 1499 - rc = ocfs2_error(sb, "Invalid dinode %llu: %u clusters\n", 1500 - (unsigned long long)bh->b_blocknr, 1501 - le32_to_cpu(di->i_clusters)); 1502 - goto bail; 1497 + if (le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL) { 1498 + struct ocfs2_inline_data *data = &di->id2.i_data; 1499 + 1500 + if (le32_to_cpu(di->i_clusters)) { 1501 + rc = ocfs2_error(sb, 1502 + "Invalid dinode %llu: %u clusters\n", 1503 + (unsigned long long)bh->b_blocknr, 1504 + le32_to_cpu(di->i_clusters)); 1505 + goto bail; 1506 + } 1507 + 1508 + if (le64_to_cpu(di->i_size) > le16_to_cpu(data->id_count)) { 1509 + rc = ocfs2_error(sb, 1510 + "Invalid dinode #%llu: inline data i_size %llu exceeds id_count %u\n", 1511 + (unsigned long long)bh->b_blocknr, 1512 + (unsigned long long)le64_to_cpu(di->i_size), 1513 + le16_to_cpu(data->id_count)); 1514 + goto bail; 1515 + } 1503 1516 } 1504 1517 1505 1518 if (le32_to_cpu(di->i_flags) & OCFS2_CHAIN_FL) { ··· 1540 1527 le16_to_cpu(cl->cl_bpc)); 1541 1528 goto bail; 1542 1529 } 1530 + } 1531 + 1532 + if ((le16_to_cpu(di->i_dyn_features) & OCFS2_HAS_REFCOUNT_FL) && 1533 + !di->i_refcount_loc) { 1534 + rc = ocfs2_error(sb, "Inode #%llu has refcount flag but no i_refcount_loc\n", 1535 + (unsigned long long)bh->b_blocknr); 1536 + goto bail; 1543 1537 } 1544 1538 1545 1539 rc = 0;
+1 -3
fs/ocfs2/localalloc.c
··· 905 905 static void ocfs2_clear_local_alloc(struct ocfs2_dinode *alloc) 906 906 { 907 907 struct ocfs2_local_alloc *la = OCFS2_LOCAL_ALLOC(alloc); 908 - int i; 909 908 910 909 alloc->id1.bitmap1.i_total = 0; 911 910 alloc->id1.bitmap1.i_used = 0; 912 911 la->la_bm_off = 0; 913 - for(i = 0; i < le16_to_cpu(la->la_size); i++) 914 - la->la_bitmap[i] = 0; 912 + memset(la->la_bitmap, 0, le16_to_cpu(la->la_size)); 915 913 } 916 914 917 915 #if 0
+6 -1
fs/ocfs2/move_extents.c
··· 662 662 goto out_commit; 663 663 } 664 664 665 + gd = (struct ocfs2_group_desc *)gd_bh->b_data; 666 + if (le16_to_cpu(gd->bg_free_bits_count) < len) { 667 + ret = -ENOSPC; 668 + goto out_commit; 669 + } 670 + 665 671 /* 666 672 * probe the victim cluster group to find a proper 667 673 * region to fit wanted movement, it even will perform ··· 688 682 goto out_commit; 689 683 } 690 684 691 - gd = (struct ocfs2_group_desc *)gd_bh->b_data; 692 685 ret = ocfs2_alloc_dinode_update_counts(gb_inode, handle, gb_bh, len, 693 686 le16_to_cpu(gd->bg_chain)); 694 687 if (ret) {
+2 -2
fs/ocfs2/ocfs2_fs.h
··· 641 641 __le16 la_size; /* Size of included bitmap, in bytes */ 642 642 __le16 la_reserved1; 643 643 __le64 la_reserved2; 644 - /*10*/ __u8 la_bitmap[]; 644 + /*10*/ __u8 la_bitmap[] __counted_by_le(la_size); 645 645 }; 646 646 647 647 /* ··· 654 654 * for data, starting at id_data */ 655 655 __le16 id_reserved0; 656 656 __le32 id_reserved1; 657 - __u8 id_data[]; /* Start of user data */ 657 + __u8 id_data[] __counted_by_le(id_count); /* Start of user data */ 658 658 }; 659 659 660 660 /*
+25 -2
fs/ocfs2/slot_map.c
··· 44 44 static int __ocfs2_node_num_to_slot(struct ocfs2_slot_info *si, 45 45 unsigned int node_num); 46 46 47 + static int ocfs2_validate_slot_map_block(struct super_block *sb, 48 + struct buffer_head *bh); 49 + 47 50 static void ocfs2_invalidate_slot(struct ocfs2_slot_info *si, 48 51 int slot_num) 49 52 { ··· 135 132 * this is not true, the read of -1 (UINT64_MAX) will fail. 136 133 */ 137 134 ret = ocfs2_read_blocks(INODE_CACHE(si->si_inode), -1, si->si_blocks, 138 - si->si_bh, OCFS2_BH_IGNORE_CACHE, NULL); 135 + si->si_bh, OCFS2_BH_IGNORE_CACHE, 136 + ocfs2_validate_slot_map_block); 139 137 if (ret == 0) { 140 138 spin_lock(&osb->osb_lock); 141 139 ocfs2_update_slot_info(si); ··· 336 332 return ocfs2_update_disk_slot(osb, osb->slot_info, slot_num); 337 333 } 338 334 335 + static int ocfs2_validate_slot_map_block(struct super_block *sb, 336 + struct buffer_head *bh) 337 + { 338 + int rc; 339 + 340 + BUG_ON(!buffer_uptodate(bh)); 341 + 342 + if (bh->b_blocknr < OCFS2_SUPER_BLOCK_BLKNO) { 343 + rc = ocfs2_error(sb, 344 + "Invalid Slot Map Buffer Head " 345 + "Block Number : %llu, Should be >= %d", 346 + (unsigned long long)bh->b_blocknr, 347 + OCFS2_SUPER_BLOCK_BLKNO); 348 + return rc; 349 + } 350 + return 0; 351 + } 352 + 339 353 static int ocfs2_map_slot_buffers(struct ocfs2_super *osb, 340 354 struct ocfs2_slot_info *si) 341 355 { ··· 405 383 406 384 bh = NULL; /* Acquire a fresh bh */ 407 385 status = ocfs2_read_blocks(INODE_CACHE(si->si_inode), blkno, 408 - 1, &bh, OCFS2_BH_IGNORE_CACHE, NULL); 386 + 1, &bh, OCFS2_BH_IGNORE_CACHE, 387 + ocfs2_validate_slot_map_block); 409 388 if (status < 0) { 410 389 mlog_errno(status); 411 390 goto bail;
+316 -18
fs/ocfs2/suballoc.c
··· 295 295 return ocfs2_validate_gd_self(sb, bh, 0); 296 296 } 297 297 298 + /* 299 + * The hint group descriptor (gd) may already have been released 300 + * in _ocfs2_free_suballoc_bits(). We first check the gd signature, 301 + * then perform the standard ocfs2_read_group_descriptor() jobs. 302 + * 303 + * If the gd signature is invalid, we return 'rc=0' and set 304 + * '*released=1'. The caller is expected to handle this specific case. 305 + * Otherwise, we return the actual error code. 306 + * 307 + * We treat gd signature corruption case as a release case. The 308 + * caller ocfs2_claim_suballoc_bits() will use ocfs2_search_chain() 309 + * to search each gd block. The code will eventually find this 310 + * corrupted gd block - Late, but not missed. 311 + * 312 + * Note: 313 + * The caller is responsible for initializing the '*released' status. 314 + */ 315 + static int ocfs2_read_hint_group_descriptor(struct inode *inode, 316 + struct ocfs2_dinode *di, u64 gd_blkno, 317 + struct buffer_head **bh, int *released) 318 + { 319 + int rc; 320 + struct buffer_head *tmp = *bh; 321 + struct ocfs2_group_desc *gd; 322 + 323 + rc = ocfs2_read_block(INODE_CACHE(inode), gd_blkno, &tmp, NULL); 324 + if (rc) 325 + goto out; 326 + 327 + gd = (struct ocfs2_group_desc *) tmp->b_data; 328 + if (!OCFS2_IS_VALID_GROUP_DESC(gd)) { 329 + /* 330 + * Invalid gd cache was set in ocfs2_read_block(), 331 + * which will affect block_group allocation. 
332 + * Path: 333 + * ocfs2_reserve_suballoc_bits 334 + * ocfs2_block_group_alloc 335 + * ocfs2_block_group_alloc_contig 336 + * ocfs2_set_new_buffer_uptodate 337 + */ 338 + ocfs2_remove_from_cache(INODE_CACHE(inode), tmp); 339 + *released = 1; /* we return 'rc=0' for this case */ 340 + goto free_bh; 341 + } 342 + 343 + /* the jobs below are the same as in ocfs2_read_group_descriptor() */ 344 + if (!buffer_jbd(tmp)) { 345 + rc = ocfs2_validate_group_descriptor(inode->i_sb, tmp); 346 + if (rc) 347 + goto free_bh; 348 + } 349 + 350 + rc = ocfs2_validate_gd_parent(inode->i_sb, di, tmp, 0); 351 + if (rc) 352 + goto free_bh; 353 + 354 + /* If ocfs2_read_block() got us a new bh, pass it up. */ 355 + if (!*bh) 356 + *bh = tmp; 357 + 358 + return rc; 359 + 360 + free_bh: 361 + brelse(tmp); 362 + out: 363 + return rc; 364 + } 365 + 298 366 int ocfs2_read_group_descriptor(struct inode *inode, struct ocfs2_dinode *di, 299 367 u64 gd_blkno, struct buffer_head **bh) 300 368 { ··· 1793 1725 u32 bits_wanted, 1794 1726 u32 min_bits, 1795 1727 struct ocfs2_suballoc_result *res, 1796 - u16 *bits_left) 1728 + u16 *bits_left, int *released) 1797 1729 { 1798 1730 int ret; 1799 1731 struct buffer_head *group_bh = NULL; ··· 1801 1733 struct ocfs2_dinode *di = (struct ocfs2_dinode *)ac->ac_bh->b_data; 1802 1734 struct inode *alloc_inode = ac->ac_inode; 1803 1735 1804 - ret = ocfs2_read_group_descriptor(alloc_inode, di, 1805 - res->sr_bg_blkno, &group_bh); 1806 - if (ret < 0) { 1736 + ret = ocfs2_read_hint_group_descriptor(alloc_inode, di, 1737 + res->sr_bg_blkno, &group_bh, released); 1738 + if (*released) { 1739 + return 0; 1740 + } else if (ret < 0) { 1807 1741 mlog_errno(ret); 1808 1742 return ret; 1809 1743 } ··· 2020 1950 struct ocfs2_suballoc_result *res) 2021 1951 { 2022 1952 int status; 1953 + int released = 0; 2023 1954 u16 victim, i; 2024 1955 u16 bits_left = 0; 2025 1956 u64 hint = ac->ac_last_group; ··· 2047 1976 goto bail; 2048 1977 } 2049 1978 1979 + /* the hint bg may already be 
released, so we quietly search this group. */ 2050 1980 res->sr_bg_blkno = hint; 2051 1981 if (res->sr_bg_blkno) { 2052 1982 /* Attempt to short-circuit the usual search mechanism ··· 2055 1983 * allocation group. This helps us maintain some 2056 1984 * contiguousness across allocations. */ 2057 1985 status = ocfs2_search_one_group(ac, handle, bits_wanted, 2058 - min_bits, res, &bits_left); 1986 + min_bits, res, &bits_left, 1987 + &released); 1988 + if (released) { 1989 + res->sr_bg_blkno = 0; 1990 + goto chain_search; 1991 + } 2059 1992 if (!status) 2060 1993 goto set_hint; 2061 1994 if (status < 0 && status != -ENOSPC) { ··· 2068 1991 goto bail; 2069 1992 } 2070 1993 } 2071 - 1994 + chain_search: 2072 1995 cl = (struct ocfs2_chain_list *) &fe->id2.i_chain; 2073 1996 if (!le16_to_cpu(cl->cl_next_free_rec) || 2074 1997 le16_to_cpu(cl->cl_next_free_rec) > le16_to_cpu(cl->cl_count)) { ··· 2190 2113 return status; 2191 2114 } 2192 2115 2116 + /* 2117 + * now that ocfs2 can release unused block group space, the 2118 + * ->ip_last_used_group hint may be stale, so the ac->ac_last_group 2119 + * value this function sets needs to be verified by the caller. 2120 + * refer to the 'hint' in ocfs2_claim_suballoc_bits() for more details. 2121 + */ 2193 2122 static void ocfs2_init_inode_ac_group(struct inode *dir, 2194 2123 struct buffer_head *parent_di_bh, 2195 2124 struct ocfs2_alloc_context *ac) ··· 2635 2552 } 2636 2553 2637 2554 /* 2555 + * Reclaim suballocator-managed space to the main bitmap. 2556 + * This function first works on the suballocator to perform the 2557 + * cleanup rec/alloc_inode job, then switches to the main bitmap 2558 + * to reclaim the released space. 2559 + * 2560 + * handle: The transaction handle 2561 + * alloc_inode: The suballoc inode 2562 + * alloc_bh: The buffer_head of the suballoc inode 2563 + * group_bh: The group descriptor buffer_head managed by the suballocator. 2564 + * The caller should release the input group_bh. 
2565 + */ 2566 + static int _ocfs2_reclaim_suballoc_to_main(handle_t *handle, 2567 + struct inode *alloc_inode, 2568 + struct buffer_head *alloc_bh, 2569 + struct buffer_head *group_bh) 2570 + { 2571 + int idx, status = 0; 2572 + int i, next_free_rec, len = 0; 2573 + __le16 old_bg_contig_free_bits = 0; 2574 + u16 start_bit; 2575 + u32 tmp_used; 2576 + u64 bg_blkno, start_blk; 2577 + unsigned int count; 2578 + struct ocfs2_chain_rec *rec; 2579 + struct buffer_head *main_bm_bh = NULL; 2580 + struct inode *main_bm_inode = NULL; 2581 + struct ocfs2_super *osb = OCFS2_SB(alloc_inode->i_sb); 2582 + struct ocfs2_dinode *fe = (struct ocfs2_dinode *) alloc_bh->b_data; 2583 + struct ocfs2_chain_list *cl = &fe->id2.i_chain; 2584 + struct ocfs2_group_desc *group = (struct ocfs2_group_desc *) group_bh->b_data; 2585 + 2586 + idx = le16_to_cpu(group->bg_chain); 2587 + rec = &(cl->cl_recs[idx]); 2588 + 2589 + status = ocfs2_extend_trans(handle, 2590 + ocfs2_calc_group_alloc_credits(osb->sb, 2591 + le16_to_cpu(cl->cl_cpg))); 2592 + if (status) { 2593 + mlog_errno(status); 2594 + goto bail; 2595 + } 2596 + status = ocfs2_journal_access_di(handle, INODE_CACHE(alloc_inode), 2597 + alloc_bh, OCFS2_JOURNAL_ACCESS_WRITE); 2598 + if (status < 0) { 2599 + mlog_errno(status); 2600 + goto bail; 2601 + } 2602 + 2603 + /* 2604 + * Only clear the suballocator rec item in-place. 2605 + * 2606 + * If idx is not the last entry, we don't compress cl_recs[] (i.e. 2607 + * remove the empty item), since that would require a lot of work. 2608 + * 2609 + * Compress cl_recs[] code example: 2610 + * if (idx != cl->cl_next_free_rec - 1) 2611 + * memmove(&cl->cl_recs[idx], &cl->cl_recs[idx + 1], 2612 + * sizeof(struct ocfs2_chain_rec) * 2613 + * (cl->cl_next_free_rec - idx - 1)); 2614 + * for(i = idx; i < cl->cl_next_free_rec-1; i++) { 2615 + * group->bg_chain = "later group->bg_chain"; 2616 + * group->bg_blkno = xxx; 2617 + * ... ... 
2618 + * } 2619 + */ 2620 + 2621 + tmp_used = le32_to_cpu(fe->id1.bitmap1.i_total); 2622 + fe->id1.bitmap1.i_total = cpu_to_le32(tmp_used - le32_to_cpu(rec->c_total)); 2623 + 2624 + /* Subtract 1 for the block group itself */ 2625 + tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used); 2626 + fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - 1); 2627 + 2628 + tmp_used = le32_to_cpu(fe->i_clusters); 2629 + fe->i_clusters = cpu_to_le32(tmp_used - le16_to_cpu(cl->cl_cpg)); 2630 + 2631 + spin_lock(&OCFS2_I(alloc_inode)->ip_lock); 2632 + OCFS2_I(alloc_inode)->ip_clusters = le32_to_cpu(fe->i_clusters); 2633 + fe->i_size = cpu_to_le64(ocfs2_clusters_to_bytes(alloc_inode->i_sb, 2634 + le32_to_cpu(fe->i_clusters))); 2635 + spin_unlock(&OCFS2_I(alloc_inode)->ip_lock); 2636 + i_size_write(alloc_inode, le64_to_cpu(fe->i_size)); 2637 + alloc_inode->i_blocks = ocfs2_inode_sector_count(alloc_inode); 2638 + 2639 + ocfs2_journal_dirty(handle, alloc_bh); 2640 + ocfs2_update_inode_fsync_trans(handle, alloc_inode, 0); 2641 + 2642 + start_blk = le64_to_cpu(rec->c_blkno); 2643 + count = le32_to_cpu(rec->c_total) / le16_to_cpu(cl->cl_bpc); 2644 + 2645 + /* 2646 + * If the rec is the last one, let's compress the chain list by 2647 + * removing the empty cl_recs[] at the end. 
2648 + */ 2649 + next_free_rec = le16_to_cpu(cl->cl_next_free_rec); 2650 + if (idx == (next_free_rec - 1)) { 2651 + len++; /* the last item should be counted first */ 2652 + for (i = (next_free_rec - 2); i > 0; i--) { 2653 + if (cl->cl_recs[i].c_free == cl->cl_recs[i].c_total) 2654 + len++; 2655 + else 2656 + break; 2657 + } 2658 + } 2659 + le16_add_cpu(&cl->cl_next_free_rec, -len); 2660 + 2661 + rec->c_free = 0; 2662 + rec->c_total = 0; 2663 + rec->c_blkno = 0; 2664 + ocfs2_remove_from_cache(INODE_CACHE(alloc_inode), group_bh); 2665 + memset(group, 0, sizeof(struct ocfs2_group_desc)); 2666 + 2667 + /* prepare to reclaim the clusters */ 2668 + main_bm_inode = ocfs2_get_system_file_inode(osb, 2669 + GLOBAL_BITMAP_SYSTEM_INODE, 2670 + OCFS2_INVALID_SLOT); 2671 + if (!main_bm_inode) 2672 + goto bail; /* ignore the error in reclaim path */ 2673 + 2674 + inode_lock(main_bm_inode); 2675 + 2676 + status = ocfs2_inode_lock(main_bm_inode, &main_bm_bh, 1); 2677 + if (status < 0) 2678 + goto free_bm_inode; /* ignore the error in reclaim path */ 2679 + 2680 + ocfs2_block_to_cluster_group(main_bm_inode, start_blk, &bg_blkno, 2681 + &start_bit); 2682 + fe = (struct ocfs2_dinode *) main_bm_bh->b_data; 2683 + cl = &fe->id2.i_chain; 2684 + /* reuse group_bh, caller will release the input group_bh */ 2685 + group_bh = NULL; 2686 + 2687 + /* reclaim clusters to global_bitmap */ 2688 + status = ocfs2_read_group_descriptor(main_bm_inode, fe, bg_blkno, 2689 + &group_bh); 2690 + if (status < 0) { 2691 + mlog_errno(status); 2692 + goto free_bm_bh; 2693 + } 2694 + group = (struct ocfs2_group_desc *) group_bh->b_data; 2695 + 2696 + if ((count + start_bit) > le16_to_cpu(group->bg_bits)) { 2697 + ocfs2_error(alloc_inode->i_sb, 2698 + "reclaim length (%d) beyond block group length (%d)\n", 2699 + count + start_bit, le16_to_cpu(group->bg_bits)); 2700 + goto free_group_bh; 2701 + } 2702 + 2703 + old_bg_contig_free_bits = group->bg_contig_free_bits; 2704 + status = 
ocfs2_block_group_clear_bits(handle, main_bm_inode, 2705 + group, group_bh, 2706 + start_bit, count, 0, 2707 + _ocfs2_clear_bit); 2708 + if (status < 0) { 2709 + mlog_errno(status); 2710 + goto free_group_bh; 2711 + } 2712 + 2713 + status = ocfs2_journal_access_di(handle, INODE_CACHE(main_bm_inode), 2714 + main_bm_bh, OCFS2_JOURNAL_ACCESS_WRITE); 2715 + if (status < 0) { 2716 + mlog_errno(status); 2717 + ocfs2_block_group_set_bits(handle, main_bm_inode, group, group_bh, 2718 + start_bit, count, 2719 + le16_to_cpu(old_bg_contig_free_bits), 1); 2720 + goto free_group_bh; 2721 + } 2722 + 2723 + idx = le16_to_cpu(group->bg_chain); 2724 + rec = &(cl->cl_recs[idx]); 2725 + 2726 + le32_add_cpu(&rec->c_free, count); 2727 + tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used); 2728 + fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - count); 2729 + ocfs2_journal_dirty(handle, main_bm_bh); 2730 + 2731 + free_group_bh: 2732 + brelse(group_bh); 2733 + 2734 + free_bm_bh: 2735 + ocfs2_inode_unlock(main_bm_inode, 1); 2736 + brelse(main_bm_bh); 2737 + 2738 + free_bm_inode: 2739 + inode_unlock(main_bm_inode); 2740 + iput(main_bm_inode); 2741 + 2742 + bail: 2743 + return status; 2744 + } 2745 + 2746 + /* 2638 2747 * expects the suballoc inode to already be locked. 
2639 2748 */ 2640 2749 static int _ocfs2_free_suballoc_bits(handle_t *handle, ··· 2838 2563 void (*undo_fn)(unsigned int bit, 2839 2564 unsigned long *bitmap)) 2840 2565 { 2841 - int status = 0; 2566 + int idx, status = 0; 2842 2567 u32 tmp_used; 2843 2568 struct ocfs2_dinode *fe = (struct ocfs2_dinode *) alloc_bh->b_data; 2844 2569 struct ocfs2_chain_list *cl = &fe->id2.i_chain; 2845 2570 struct buffer_head *group_bh = NULL; 2846 2571 struct ocfs2_group_desc *group; 2572 + struct ocfs2_chain_rec *rec; 2847 2573 __le16 old_bg_contig_free_bits = 0; 2848 2574 2849 2575 /* The alloc_bh comes from ocfs2_free_dinode() or ··· 2890 2614 goto bail; 2891 2615 } 2892 2616 2893 - le32_add_cpu(&cl->cl_recs[le16_to_cpu(group->bg_chain)].c_free, 2894 - count); 2617 + idx = le16_to_cpu(group->bg_chain); 2618 + rec = &(cl->cl_recs[idx]); 2619 + 2620 + le32_add_cpu(&rec->c_free, count); 2895 2621 tmp_used = le32_to_cpu(fe->id1.bitmap1.i_used); 2896 2622 fe->id1.bitmap1.i_used = cpu_to_le32(tmp_used - count); 2897 2623 ocfs2_journal_dirty(handle, alloc_bh); 2624 + 2625 + /* 2626 + * Reclaim suballocator free space. 2627 + * Bypass: global_bitmap, non empty rec, first rec in cl_recs[] 2628 + */ 2629 + if (ocfs2_is_cluster_bitmap(alloc_inode) || 2630 + (le32_to_cpu(rec->c_free) != (le32_to_cpu(rec->c_total) - 1)) || 2631 + (le16_to_cpu(cl->cl_next_free_rec) == 1)) { 2632 + goto bail; 2633 + } 2634 + 2635 + _ocfs2_reclaim_suballoc_to_main(handle, alloc_inode, alloc_bh, group_bh); 2898 2636 2899 2637 bail: 2900 2638 brelse(group_bh); ··· 3163 2873 struct ocfs2_group_desc *group; 3164 2874 struct buffer_head *group_bh = NULL; 3165 2875 u64 bg_blkno; 3166 - int status; 2876 + int status, quiet = 0, released = 0; 3167 2877 3168 2878 trace_ocfs2_test_suballoc_bit((unsigned long long)blkno, 3169 2879 (unsigned int)bit); ··· 3179 2889 3180 2890 bg_blkno = group_blkno ? 
group_blkno : 3181 2891 ocfs2_which_suballoc_group(blkno, bit); 3182 - status = ocfs2_read_group_descriptor(suballoc, alloc_di, bg_blkno, 3183 - &group_bh); 3184 - if (status < 0) { 2892 + status = ocfs2_read_hint_group_descriptor(suballoc, alloc_di, bg_blkno, 2893 + &group_bh, &released); 2894 + if (released) { 2895 + quiet = 1; 2896 + status = -ESTALE; 2897 + goto bail; 2898 + } else if (status < 0) { 3185 2899 mlog(ML_ERROR, "read group %llu failed %d\n", 3186 2900 (unsigned long long)bg_blkno, status); 3187 2901 goto bail; ··· 3197 2903 bail: 3198 2904 brelse(group_bh); 3199 2905 3200 - if (status) 2906 + if (status && !quiet) 3201 2907 mlog_errno(status); 3202 2908 return status; 3203 2909 } ··· 3217 2923 */ 3218 2924 int ocfs2_test_inode_bit(struct ocfs2_super *osb, u64 blkno, int *res) 3219 2925 { 3220 - int status; 2926 + int status, quiet = 0; 3221 2927 u64 group_blkno = 0; 3222 2928 u16 suballoc_bit = 0, suballoc_slot = 0; 3223 2929 struct inode *inode_alloc_inode; ··· 3259 2965 3260 2966 status = ocfs2_test_suballoc_bit(osb, inode_alloc_inode, alloc_bh, 3261 2967 group_blkno, blkno, suballoc_bit, res); 3262 - if (status < 0) 3263 - mlog(ML_ERROR, "test suballoc bit failed %d\n", status); 2968 + if (status < 0) { 2969 + if (status == -ESTALE) 2970 + quiet = 1; 2971 + else 2972 + mlog(ML_ERROR, "test suballoc bit failed %d\n", status); 2973 + } 3264 2974 3265 2975 ocfs2_inode_unlock(inode_alloc_inode, 0); 3266 2976 inode_unlock(inode_alloc_inode); ··· 3272 2974 iput(inode_alloc_inode); 3273 2975 brelse(alloc_bh); 3274 2976 bail: 3275 - if (status) 2977 + if (status && !quiet) 3276 2978 mlog_errno(status); 3277 2979 return status; 3278 2980 }
+7 -2
fs/ocfs2/xattr.c
··· 1971 1971 ocfs2_xa_wipe_namevalue(loc); 1972 1972 loc->xl_entry = NULL; 1973 1973 1974 - le16_add_cpu(&xh->xh_count, -1); 1975 - count = le16_to_cpu(xh->xh_count); 1974 + count = le16_to_cpu(xh->xh_count) - 1; 1976 1975 1977 1976 /* 1978 1977 * Only zero out the entry if there are more remaining. This is ··· 1986 1987 memset(&xh->xh_entries[count], 0, 1987 1988 sizeof(struct ocfs2_xattr_entry)); 1988 1989 } 1990 + 1991 + xh->xh_count = cpu_to_le16(count); 1989 1992 } 1990 1993 1991 1994 /* ··· 6395 6394 (void *)last - (void *)xe); 6396 6395 memset(last, 0, 6397 6396 sizeof(struct ocfs2_xattr_entry)); 6397 + last = &new_xh->xh_entries[le16_to_cpu(new_xh->xh_count)] - 1; 6398 + } else { 6399 + memset(xe, 0, sizeof(struct ocfs2_xattr_entry)); 6400 + last = NULL; 6398 6401 } 6399 6402 6400 6403 /*
+1
fs/overlayfs/namei.c
··· 7 7 #include <linux/fs.h> 8 8 #include <linux/cred.h> 9 9 #include <linux/ctype.h> 10 + #include <linux/hex.h> 10 11 #include <linux/namei.h> 11 12 #include <linux/xattr.h> 12 13 #include <linux/ratelimit.h>
+2 -1
fs/proc/array.c
··· 55 55 56 56 #include <linux/types.h> 57 57 #include <linux/errno.h> 58 + #include <linux/hex.h> 58 59 #include <linux/time.h> 59 60 #include <linux/time_namespace.h> 60 61 #include <linux/kernel.h> ··· 529 528 } 530 529 531 530 sid = task_session_nr_ns(task, ns); 532 - ppid = task_tgid_nr_ns(task->real_parent, ns); 531 + ppid = task_ppid_nr_ns(task, ns); 533 532 pgid = task_pgrp_nr_ns(task, ns); 534 533 535 534 unlock_task_sighand(task, &flags);
+1
fs/seq_file.c
··· 11 11 #include <linux/cache.h> 12 12 #include <linux/fs.h> 13 13 #include <linux/export.h> 14 + #include <linux/hex.h> 14 15 #include <linux/seq_file.h> 15 16 #include <linux/vmalloc.h> 16 17 #include <linux/slab.h>
+1
fs/udf/unicode.c
··· 16 16 17 17 #include "udfdecl.h" 18 18 19 + #include <linux/hex.h> 19 20 #include <linux/kernel.h> 20 21 #include <linux/string.h> /* for memset */ 21 22 #include <linux/nls.h>
+1 -1
include/asm-generic/atomic64.h
··· 10 10 #include <linux/types.h> 11 11 12 12 typedef struct { 13 - s64 counter; 13 + s64 __aligned(sizeof(s64)) counter; 14 14 } atomic64_t; 15 15 16 16 #define ATOMIC64_INIT(i) { (i) }
+1 -1
include/asm-generic/rqspinlock.h
··· 28 28 */ 29 29 struct bpf_res_spin_lock { 30 30 u32 val; 31 - }; 31 + } __aligned(__alignof__(struct rqspinlock)); 32 32 33 33 struct qspinlock; 34 34 #ifdef CONFIG_QUEUED_SPINLOCKS
+6
include/linux/array_size.h
··· 10 10 */ 11 11 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr)) 12 12 13 + /** 14 + * ARRAY_END - get a pointer to one past the last element in array @arr 15 + * @arr: array 16 + */ 17 + #define ARRAY_END(arr) (&(arr)[ARRAY_SIZE(arr)]) 18 + 13 19 #endif /* _LINUX_ARRAY_SIZE_H */
+6
include/linux/capability.h
··· 203 203 ns_capable(ns, CAP_SYS_ADMIN); 204 204 } 205 205 206 + static inline bool checkpoint_restore_ns_capable_noaudit(struct user_namespace *ns) 207 + { 208 + return ns_capable_noaudit(ns, CAP_CHECKPOINT_RESTORE) || 209 + ns_capable_noaudit(ns, CAP_SYS_ADMIN); 210 + } 211 + 206 212 /* audit system wants to get cap info from files as well */ 207 213 int get_vfs_caps_from_disk(struct mnt_idmap *idmap, 208 214 const struct dentry *dentry,
+1 -1
include/linux/compiler-clang.h
··· 153 153 * Bindgen uses LLVM even if our C compiler is GCC, so we cannot 154 154 * rely on the auto-detected CONFIG_CC_HAS_TYPEOF_UNQUAL. 155 155 */ 156 - #define CC_HAS_TYPEOF_UNQUAL (__clang_major__ >= 19) 156 + #define CC_HAS_TYPEOF_UNQUAL (__clang_major__ > 19 || (__clang_major__ == 19 && __clang_minor__ > 0))
+16 -7
include/linux/compiler_types.h
··· 289 289 # define __no_kasan_or_inline __always_inline 290 290 #endif 291 291 292 + #ifdef CONFIG_KCSAN 293 + /* 294 + * Type qualifier to mark variables where all data-racy accesses should be 295 + * ignored by KCSAN. Note, the implementation simply marks these variables as 296 + * volatile, since KCSAN will treat such accesses as "marked". 297 + * 298 + * Defined here because defining __data_racy as volatile for KCSAN objects only 299 + * causes problems in BPF Type Format (BTF) generation since struct members 300 + * of core kernel data structs will be volatile in some objects and not in 301 + * others. Instead define it globally for KCSAN kernels. 302 + */ 303 + # define __data_racy volatile 304 + #else 305 + # define __data_racy 306 + #endif 307 + 292 308 #ifdef __SANITIZE_THREAD__ 293 309 /* 294 310 * Clang still emits instrumentation for __tsan_func_{entry,exit}() and builtin ··· 316 300 * disable all instrumentation. See Kconfig.kcsan where this is mandatory. 317 301 */ 318 302 # define __no_kcsan __no_sanitize_thread __disable_sanitizer_instrumentation 319 - /* 320 - * Type qualifier to mark variables where all data-racy accesses should be 321 - * ignored by KCSAN. Note, the implementation simply marks these variables as 322 - * volatile, since KCSAN will treat such accesses as "marked". 323 - */ 324 - # define __data_racy volatile 325 303 # define __no_sanitize_or_inline __no_kcsan notrace __maybe_unused 326 304 #else 327 305 # define __no_kcsan 328 - # define __data_racy 329 306 #endif 330 307 331 308 #ifdef __SANITIZE_MEMORY__
+8
include/linux/delayacct.h
··· 69 69 u32 compact_count; /* total count of memory compact */ 70 70 u32 wpcopy_count; /* total count of write-protect copy */ 71 71 u32 irq_count; /* total count of IRQ/SOFTIRQ */ 72 + 73 + struct timespec64 blkio_delay_max_ts; 74 + struct timespec64 swapin_delay_max_ts; 75 + struct timespec64 freepages_delay_max_ts; 76 + struct timespec64 thrashing_delay_max_ts; 77 + struct timespec64 compact_delay_max_ts; 78 + struct timespec64 wpcopy_delay_max_ts; 79 + struct timespec64 irq_delay_max_ts; 72 80 }; 73 81 #endif 74 82
+4 -22
include/linux/filter.h
··· 1376 1376 return false; 1377 1377 } 1378 1378 1379 - int __bpf_address_lookup(unsigned long addr, unsigned long *size, 1380 - unsigned long *off, char *sym); 1379 + int bpf_address_lookup(unsigned long addr, unsigned long *size, 1380 + unsigned long *off, char *sym); 1381 1381 bool is_bpf_text_address(unsigned long addr); 1382 1382 int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type, 1383 1383 char *sym); 1384 1384 struct bpf_prog *bpf_prog_ksym_find(unsigned long addr); 1385 - 1386 - static inline int 1387 - bpf_address_lookup(unsigned long addr, unsigned long *size, 1388 - unsigned long *off, char **modname, char *sym) 1389 - { 1390 - int ret = __bpf_address_lookup(addr, size, off, sym); 1391 - 1392 - if (ret && modname) 1393 - *modname = NULL; 1394 - return ret; 1395 - } 1396 1385 1397 1386 void bpf_prog_kallsyms_add(struct bpf_prog *fp); 1398 1387 void bpf_prog_kallsyms_del(struct bpf_prog *fp); ··· 1421 1432 } 1422 1433 1423 1434 static inline int 1424 - __bpf_address_lookup(unsigned long addr, unsigned long *size, 1425 - unsigned long *off, char *sym) 1435 + bpf_address_lookup(unsigned long addr, unsigned long *size, 1436 + unsigned long *off, char *sym) 1426 1437 { 1427 1438 return 0; 1428 1439 } ··· 1441 1452 static inline struct bpf_prog *bpf_prog_ksym_find(unsigned long addr) 1442 1453 { 1443 1454 return NULL; 1444 - } 1445 - 1446 - static inline int 1447 - bpf_address_lookup(unsigned long addr, unsigned long *size, 1448 - unsigned long *off, char **modname, char *sym) 1449 - { 1450 - return 0; 1451 1455 } 1452 1456 1453 1457 static inline void bpf_prog_kallsyms_add(struct bpf_prog *fp)
+4 -2
include/linux/ftrace.h
··· 88 88 defined(CONFIG_DYNAMIC_FTRACE) 89 89 int 90 90 ftrace_mod_address_lookup(unsigned long addr, unsigned long *size, 91 - unsigned long *off, char **modname, char *sym); 91 + unsigned long *off, char **modname, 92 + const unsigned char **modbuildid, char *sym); 92 93 #else 93 94 static inline int 94 95 ftrace_mod_address_lookup(unsigned long addr, unsigned long *size, 95 - unsigned long *off, char **modname, char *sym) 96 + unsigned long *off, char **modname, 97 + const unsigned char **modbuildid, char *sym) 96 98 { 97 99 return 0; 98 100 }
+1
include/linux/ima.h
··· 69 69 #ifdef CONFIG_HAVE_IMA_KEXEC 70 70 int __init ima_free_kexec_buffer(void); 71 71 int __init ima_get_kexec_buffer(void **addr, size_t *size); 72 + int ima_validate_range(phys_addr_t phys, size_t size); 72 73 #endif 73 74 74 75 #ifdef CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT
+17
include/linux/instrumented.h
··· 7 7 #ifndef _LINUX_INSTRUMENTED_H 8 8 #define _LINUX_INSTRUMENTED_H 9 9 10 + #include <linux/bug.h> 10 11 #include <linux/compiler.h> 11 12 #include <linux/kasan-checks.h> 12 13 #include <linux/kcsan-checks.h> ··· 56 55 kcsan_check_read_write(v, size); 57 56 } 58 57 58 + static __always_inline void instrument_atomic_check_alignment(const volatile void *v, size_t size) 59 + { 60 + #ifndef __DISABLE_EXPORTS 61 + if (IS_ENABLED(CONFIG_DEBUG_ATOMIC)) { 62 + unsigned int mask = size - 1; 63 + 64 + if (IS_ENABLED(CONFIG_DEBUG_ATOMIC_LARGEST_ALIGN)) 65 + mask &= sizeof(struct { long x; } __aligned_largest) - 1; 66 + WARN_ON_ONCE((unsigned long)v & mask); 67 + } 68 + #endif 69 + } 70 + 59 71 /** 60 72 * instrument_atomic_read - instrument atomic read access 61 73 * @v: address of access ··· 81 67 { 82 68 kasan_check_read(v, size); 83 69 kcsan_check_atomic_read(v, size); 70 + instrument_atomic_check_alignment(v, size); 84 71 } 85 72 86 73 /** ··· 96 81 { 97 82 kasan_check_write(v, size); 98 83 kcsan_check_atomic_write(v, size); 84 + instrument_atomic_check_alignment(v, size); 99 85 } 100 86 101 87 /** ··· 111 95 { 112 96 kasan_check_write(v, size); 113 97 kcsan_check_atomic_read_write(v, size); 98 + instrument_atomic_check_alignment(v, size); 114 99 } 115 100 116 101 /**
+6 -1
include/linux/ioport.h
··· 10 10 #define _LINUX_IOPORT_H 11 11 12 12 #ifndef __ASSEMBLY__ 13 + #include <linux/args.h> 13 14 #include <linux/bits.h> 14 15 #include <linux/compiler.h> 15 16 #include <linux/minmax.h> ··· 166 165 167 166 #define DEFINE_RES_NAMED(_start, _size, _name, _flags) \ 168 167 DEFINE_RES_NAMED_DESC(_start, _size, _name, _flags, IORES_DESC_NONE) 169 - #define DEFINE_RES(_start, _size, _flags) \ 168 + #define __DEFINE_RES0() \ 169 + DEFINE_RES_NAMED(0, 0, NULL, IORESOURCE_UNSET) 170 + #define __DEFINE_RES3(_start, _size, _flags) \ 170 171 DEFINE_RES_NAMED(_start, _size, NULL, _flags) 172 + #define DEFINE_RES(...) \ 173 + CONCATENATE(__DEFINE_RES, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__) 171 174 172 175 #define DEFINE_RES_IO_NAMED(_start, _size, _name) \ 173 176 DEFINE_RES_NAMED((_start), (_size), (_name), IORESOURCE_IO)
+1 -210
include/linux/kernel.h
··· 21 21 #include <linux/compiler.h> 22 22 #include <linux/container_of.h> 23 23 #include <linux/bitops.h> 24 - #include <linux/hex.h> 25 24 #include <linux/kstrtox.h> 26 25 #include <linux/log2.h> 27 26 #include <linux/math.h> ··· 31 32 #include <linux/build_bug.h> 32 33 #include <linux/sprintf.h> 33 34 #include <linux/static_call_types.h> 34 - #include <linux/instruction_pointer.h> 35 + #include <linux/trace_printk.h> 35 36 #include <linux/util_macros.h> 36 37 #include <linux/wordpart.h> 37 38 38 39 #include <asm/byteorder.h> 39 40 40 41 #include <uapi/linux/kernel.h> 41 - 42 - #define STACK_MAGIC 0xdeadbeef 43 42 44 43 struct completion; 45 44 struct user; ··· 189 192 }; 190 193 extern enum system_states system_state; 191 194 192 - /* 193 - * General tracing related utility functions - trace_printk(), 194 - * tracing_on/tracing_off and tracing_start()/tracing_stop 195 - * 196 - * Use tracing_on/tracing_off when you want to quickly turn on or off 197 - * tracing. It simply enables or disables the recording of the trace events. 198 - * This also corresponds to the user space /sys/kernel/tracing/tracing_on 199 - * file, which gives a means for the kernel and userspace to interact. 200 - * Place a tracing_off() in the kernel where you want tracing to end. 201 - * From user space, examine the trace, and then echo 1 > tracing_on 202 - * to continue tracing. 203 - * 204 - * tracing_stop/tracing_start has slightly more overhead. It is used 205 - * by things like suspend to ram where disabling the recording of the 206 - * trace is not enough, but tracing must actually stop because things 207 - * like calling smp_processor_id() may crash the system. 208 - * 209 - * Most likely, you want to use tracing_on/tracing_off. 
210 - */ 211 - 212 - enum ftrace_dump_mode { 213 - DUMP_NONE, 214 - DUMP_ALL, 215 - DUMP_ORIG, 216 - DUMP_PARAM, 217 - }; 218 - 219 - #ifdef CONFIG_TRACING 220 - void tracing_on(void); 221 - void tracing_off(void); 222 - int tracing_is_on(void); 223 - void tracing_snapshot(void); 224 - void tracing_snapshot_alloc(void); 225 - 226 - extern void tracing_start(void); 227 - extern void tracing_stop(void); 228 - 229 - static inline __printf(1, 2) 230 - void ____trace_printk_check_format(const char *fmt, ...) 231 - { 232 - } 233 - #define __trace_printk_check_format(fmt, args...) \ 234 - do { \ 235 - if (0) \ 236 - ____trace_printk_check_format(fmt, ##args); \ 237 - } while (0) 238 - 239 - /** 240 - * trace_printk - printf formatting in the ftrace buffer 241 - * @fmt: the printf format for printing 242 - * 243 - * Note: __trace_printk is an internal function for trace_printk() and 244 - * the @ip is passed in via the trace_printk() macro. 245 - * 246 - * This function allows a kernel developer to debug fast path sections 247 - * that printk is not appropriate for. By scattering in various 248 - * printk like tracing in the code, a developer can quickly see 249 - * where problems are occurring. 250 - * 251 - * This is intended as a debugging tool for the developer only. 252 - * Please refrain from leaving trace_printks scattered around in 253 - * your code. (Extra memory is used for special buffers that are 254 - * allocated when trace_printk() is used.) 255 - * 256 - * A little optimization trick is done here. If there's only one 257 - * argument, there's no need to scan the string for printf formats. 258 - * The trace_puts() will suffice. But how can we take advantage of 259 - * using trace_puts() when trace_printk() has only one argument? 260 - * By stringifying the args and checking the size we can tell 261 - * whether or not there are args. 
__stringify((__VA_ARGS__)) will 262 - * turn into "()\0" with a size of 3 when there are no args, anything 263 - * else will be bigger. All we need to do is define a string to this, 264 - * and then take its size and compare to 3. If it's bigger, use 265 - * do_trace_printk() otherwise, optimize it to trace_puts(). Then just 266 - * let gcc optimize the rest. 267 - */ 268 - 269 - #define trace_printk(fmt, ...) \ 270 - do { \ 271 - char _______STR[] = __stringify((__VA_ARGS__)); \ 272 - if (sizeof(_______STR) > 3) \ 273 - do_trace_printk(fmt, ##__VA_ARGS__); \ 274 - else \ 275 - trace_puts(fmt); \ 276 - } while (0) 277 - 278 - #define do_trace_printk(fmt, args...) \ 279 - do { \ 280 - static const char *trace_printk_fmt __used \ 281 - __section("__trace_printk_fmt") = \ 282 - __builtin_constant_p(fmt) ? fmt : NULL; \ 283 - \ 284 - __trace_printk_check_format(fmt, ##args); \ 285 - \ 286 - if (__builtin_constant_p(fmt)) \ 287 - __trace_bprintk(_THIS_IP_, trace_printk_fmt, ##args); \ 288 - else \ 289 - __trace_printk(_THIS_IP_, fmt, ##args); \ 290 - } while (0) 291 - 292 - extern __printf(2, 3) 293 - int __trace_bprintk(unsigned long ip, const char *fmt, ...); 294 - 295 - extern __printf(2, 3) 296 - int __trace_printk(unsigned long ip, const char *fmt, ...); 297 - 298 - /** 299 - * trace_puts - write a string into the ftrace buffer 300 - * @str: the string to record 301 - * 302 - * Note: __trace_bputs is an internal function for trace_puts and 303 - * the @ip is passed in via the trace_puts macro. 304 - * 305 - * This is similar to trace_printk() but is made for those really fast 306 - * paths that a developer wants the least amount of "Heisenbug" effects, 307 - * where the processing of the print format is still too much. 308 - * 309 - * This function allows a kernel developer to debug fast path sections 310 - * that printk is not appropriate for. 
By scattering in various 311 - * printk like tracing in the code, a developer can quickly see 312 - * where problems are occurring. 313 - * 314 - * This is intended as a debugging tool for the developer only. 315 - * Please refrain from leaving trace_puts scattered around in 316 - * your code. (Extra memory is used for special buffers that are 317 - * allocated when trace_puts() is used.) 318 - * 319 - * Returns: 0 if nothing was written, positive # if string was. 320 - * (1 when __trace_bputs is used, strlen(str) when __trace_puts is used) 321 - */ 322 - 323 - #define trace_puts(str) ({ \ 324 - static const char *trace_printk_fmt __used \ 325 - __section("__trace_printk_fmt") = \ 326 - __builtin_constant_p(str) ? str : NULL; \ 327 - \ 328 - if (__builtin_constant_p(str)) \ 329 - __trace_bputs(_THIS_IP_, trace_printk_fmt); \ 330 - else \ 331 - __trace_puts(_THIS_IP_, str, strlen(str)); \ 332 - }) 333 - extern int __trace_bputs(unsigned long ip, const char *str); 334 - extern int __trace_puts(unsigned long ip, const char *str, int size); 335 - 336 - extern void trace_dump_stack(int skip); 337 - 338 - /* 339 - * The double __builtin_constant_p is because gcc will give us an error 340 - * if we try to allocate the static variable to fmt if it is not a 341 - * constant. Even with the outer if statement. 342 - */ 343 - #define ftrace_vprintk(fmt, vargs) \ 344 - do { \ 345 - if (__builtin_constant_p(fmt)) { \ 346 - static const char *trace_printk_fmt __used \ 347 - __section("__trace_printk_fmt") = \ 348 - __builtin_constant_p(fmt) ? 
fmt : NULL; \ 349 - \ 350 - __ftrace_vbprintk(_THIS_IP_, trace_printk_fmt, vargs); \ 351 - } else \ 352 - __ftrace_vprintk(_THIS_IP_, fmt, vargs); \ 353 - } while (0) 354 - 355 - extern __printf(2, 0) int 356 - __ftrace_vbprintk(unsigned long ip, const char *fmt, va_list ap); 357 - 358 - extern __printf(2, 0) int 359 - __ftrace_vprintk(unsigned long ip, const char *fmt, va_list ap); 360 - 361 - extern void ftrace_dump(enum ftrace_dump_mode oops_dump_mode); 362 - #else 363 - static inline void tracing_start(void) { } 364 - static inline void tracing_stop(void) { } 365 - static inline void trace_dump_stack(int skip) { } 366 - 367 - static inline void tracing_on(void) { } 368 - static inline void tracing_off(void) { } 369 - static inline int tracing_is_on(void) { return 0; } 370 - static inline void tracing_snapshot(void) { } 371 - static inline void tracing_snapshot_alloc(void) { } 372 - 373 - static inline __printf(1, 2) 374 - int trace_printk(const char *fmt, ...) 375 - { 376 - return 0; 377 - } 378 - static __printf(1, 0) inline int 379 - ftrace_vprintk(const char *fmt, va_list ap) 380 - { 381 - return 0; 382 - } 383 - static inline void ftrace_dump(enum ftrace_dump_mode oops_dump_mode) { } 384 - #endif /* CONFIG_TRACING */ 385 - 386 195 /* Rebuild everything on CONFIG_DYNAMIC_FTRACE */ 387 196 #ifdef CONFIG_DYNAMIC_FTRACE 388 197 # define REBUILD_DUE_TO_DYNAMIC_FTRACE 389 198 #endif 390 199 391 - /* Permissions on a sysfs file: you didn't miss the 0 prefix did you? */ 392 - #define VERIFY_OCTAL_PERMISSIONS(perms) \ 393 - (BUILD_BUG_ON_ZERO((perms) < 0) + \ 394 - BUILD_BUG_ON_ZERO((perms) > 0777) + \ 395 - /* USER_READABLE >= GROUP_READABLE >= OTHER_READABLE */ \ 396 - BUILD_BUG_ON_ZERO((((perms) >> 6) & 4) < (((perms) >> 3) & 4)) + \ 397 - BUILD_BUG_ON_ZERO((((perms) >> 3) & 4) < ((perms) & 4)) + \ 398 - /* USER_WRITABLE >= GROUP_WRITABLE */ \ 399 - BUILD_BUG_ON_ZERO((((perms) >> 6) & 2) < (((perms) >> 3) & 2)) + \ 400 - /* OTHER_WRITABLE? 
Generally considered a bad idea. */ \ 401 - BUILD_BUG_ON_ZERO((perms) & 2) + \ 402 - (perms)) 403 200 #endif
+5 -28
include/linux/kexec_handover.h
··· 11 11 phys_addr_t size; 12 12 }; 13 13 14 + struct kho_vmalloc; 15 + 14 16 struct folio; 15 17 struct page; 16 - 17 - #define DECLARE_KHOSER_PTR(name, type) \ 18 - union { \ 19 - phys_addr_t phys; \ 20 - type ptr; \ 21 - } name 22 - #define KHOSER_STORE_PTR(dest, val) \ 23 - ({ \ 24 - typeof(val) v = val; \ 25 - typecheck(typeof((dest).ptr), v); \ 26 - (dest).phys = virt_to_phys(v); \ 27 - }) 28 - #define KHOSER_LOAD_PTR(src) \ 29 - ({ \ 30 - typeof(src) s = src; \ 31 - (typeof((s).ptr))((s).phys ? phys_to_virt((s).phys) : NULL); \ 32 - }) 33 - 34 - struct kho_vmalloc_chunk; 35 - struct kho_vmalloc { 36 - DECLARE_KHOSER_PTR(first, struct kho_vmalloc_chunk *); 37 - unsigned int total_pages; 38 - unsigned short flags; 39 - unsigned short order; 40 - }; 41 18 42 19 #ifdef CONFIG_KEXEC_HANDOVER 43 20 bool kho_is_enabled(void); ··· 22 45 23 46 int kho_preserve_folio(struct folio *folio); 24 47 void kho_unpreserve_folio(struct folio *folio); 25 - int kho_preserve_pages(struct page *page, unsigned int nr_pages); 26 - void kho_unpreserve_pages(struct page *page, unsigned int nr_pages); 48 + int kho_preserve_pages(struct page *page, unsigned long nr_pages); 49 + void kho_unpreserve_pages(struct page *page, unsigned long nr_pages); 27 50 int kho_preserve_vmalloc(void *ptr, struct kho_vmalloc *preservation); 28 51 void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation); 29 52 void *kho_alloc_preserve(size_t size); 30 53 void kho_unpreserve_free(void *mem); 31 54 void kho_restore_free(void *mem); 32 55 struct folio *kho_restore_folio(phys_addr_t phys); 33 - struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages); 56 + struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages); 34 57 void *kho_restore_vmalloc(const struct kho_vmalloc *preservation); 35 58 int kho_add_subtree(const char *name, void *fdt); 36 59 void kho_remove_subtree(void *fdt);
+163
include/linux/kho/abi/kexec_handover.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + /* 4 + * Copyright (C) 2023 Alexander Graf <graf@amazon.com> 5 + * Copyright (C) 2025 Microsoft Corporation, Mike Rapoport <rppt@kernel.org> 6 + * Copyright (C) 2025 Google LLC, Changyuan Lyu <changyuanl@google.com> 7 + * Copyright (C) 2025 Google LLC, Jason Miu <jasonmiu@google.com> 8 + */ 9 + 10 + #ifndef _LINUX_KHO_ABI_KEXEC_HANDOVER_H 11 + #define _LINUX_KHO_ABI_KEXEC_HANDOVER_H 12 + 13 + #include <linux/types.h> 14 + 15 + /** 16 + * DOC: Kexec Handover ABI 17 + * 18 + * Kexec Handover uses the ABI defined below for passing preserved data from 19 + * one kernel to the next. 20 + * The ABI uses Flattened Device Tree (FDT) format. The first kernel creates an 21 + * FDT which is then passed to the next kernel during a kexec handover. 22 + * 23 + * This interface is a contract. Any modification to the FDT structure, node 24 + * properties, compatible string, or the layout of the data structures 25 + * referenced here constitutes a breaking change. Such changes require 26 + * incrementing the version number in KHO_FDT_COMPATIBLE to prevent a new kernel 27 + * from misinterpreting data from an older kernel. Changes are allowed provided 28 + * the compatibility version is incremented. However, backward/forward 29 + * compatibility is only guaranteed for kernels supporting the same ABI version. 30 + * 31 + * FDT Structure Overview: 32 + * The FDT serves as a central registry for physical 33 + * addresses of preserved data structures and sub-FDTs. The first kernel 34 + * populates this FDT with references to memory regions and other FDTs that 35 + * need to persist across the kexec transition. 
The subsequent kernel then 36 + * parses this FDT to locate and restore the preserved data.:: 37 + * 38 + * / { 39 + * compatible = "kho-v1"; 40 + * 41 + * preserved-memory-map = <0x...>; 42 + * 43 + * <subnode-name-1> { 44 + * fdt = <0x...>; 45 + * }; 46 + * 47 + * <subnode-name-2> { 48 + * fdt = <0x...>; 49 + * }; 50 + * ... ... 51 + * <subnode-name-N> { 52 + * fdt = <0x...>; 53 + * }; 54 + * }; 55 + * 56 + * Root KHO Node (/): 57 + * - compatible: "kho-v1" 58 + 59 + * Identifies the overall KHO ABI version. 60 + * 61 + * - preserved-memory-map: u64 62 + * 63 + * Physical memory address pointing to the root of the 64 + * preserved memory map data structure. 65 + * 66 + * Subnodes (<subnode-name-N>): 67 + * Subnodes can also be added to the root node to 68 + * describe other preserved data blobs. The <subnode-name-N> 69 + * is provided by the subsystem that uses KHO for preserving its 70 + * data. 71 + * 72 + * - fdt: u64 73 + * 74 + * Physical address pointing to a subnode FDT blob that is also 75 + * being preserved. 76 + */ 77 + 78 + /* The compatible string for the KHO FDT root node. */ 79 + #define KHO_FDT_COMPATIBLE "kho-v1" 80 + 81 + /* The FDT property for the preserved memory map. */ 82 + #define KHO_FDT_MEMORY_MAP_PROP_NAME "preserved-memory-map" 83 + 84 + /* The FDT property for sub-FDTs. */ 85 + #define KHO_FDT_SUB_TREE_PROP_NAME "fdt" 86 + 87 + /** 88 + * DOC: Kexec Handover ABI for vmalloc Preservation 89 + * 90 + * The Kexec Handover ABI for preserving vmalloc'ed memory is defined by 91 + * a set of structures and helper macros. The layout of these structures is a 92 + * stable contract between kernels and is versioned by the KHO_FDT_COMPATIBLE 93 + * string. 94 + * 95 + * The preservation is managed through a main descriptor &struct kho_vmalloc, 96 + * which points to a linked list of &struct kho_vmalloc_chunk structures. 
These 97 + * chunks contain the physical addresses of the preserved pages, allowing the 98 + * next kernel to reconstruct the vmalloc area with the same content and layout. 99 + * Helper macros are also defined for storing and loading pointers within 100 + * these structures. 101 + */ 102 + 103 + /* Helper macro to define a union for a serializable pointer. */ 104 + #define DECLARE_KHOSER_PTR(name, type) \ 105 + union { \ 106 + u64 phys; \ 107 + type ptr; \ 108 + } name 109 + 110 + /* Stores the physical address of a serializable pointer. */ 111 + #define KHOSER_STORE_PTR(dest, val) \ 112 + ({ \ 113 + typeof(val) v = val; \ 114 + typecheck(typeof((dest).ptr), v); \ 115 + (dest).phys = virt_to_phys(v); \ 116 + }) 117 + 118 + /* Loads the stored physical address back to a pointer. */ 119 + #define KHOSER_LOAD_PTR(src) \ 120 + ({ \ 121 + typeof(src) s = src; \ 122 + (typeof((s).ptr))((s).phys ? phys_to_virt((s).phys) : NULL); \ 123 + }) 124 + 125 + /* 126 + * This header is embedded at the beginning of each `kho_vmalloc_chunk` 127 + * and contains a pointer to the next chunk in the linked list, 128 + * stored as a physical address for handover. 129 + */ 130 + struct kho_vmalloc_hdr { 131 + DECLARE_KHOSER_PTR(next, struct kho_vmalloc_chunk *); 132 + }; 133 + 134 + #define KHO_VMALLOC_SIZE \ 135 + ((PAGE_SIZE - sizeof(struct kho_vmalloc_hdr)) / \ 136 + sizeof(u64)) 137 + 138 + /* 139 + * Each chunk is a single page and is part of a linked list that describes 140 + * a preserved vmalloc area. It contains the header with the link to the next 141 + * chunk and a zero terminated array of physical addresses of the pages that 142 + * make up the preserved vmalloc area. 
143 + */ 144 + struct kho_vmalloc_chunk { 145 + struct kho_vmalloc_hdr hdr; 146 + u64 phys[KHO_VMALLOC_SIZE]; 147 + }; 148 + 149 + static_assert(sizeof(struct kho_vmalloc_chunk) == PAGE_SIZE); 150 + 151 + /* 152 + * Describes a preserved vmalloc memory area, including the 153 + * total number of pages, allocation flags, page order, and a pointer to the 154 + * first chunk of physical page addresses. 155 + */ 156 + struct kho_vmalloc { 157 + DECLARE_KHOSER_PTR(first, struct kho_vmalloc_chunk *); 158 + unsigned int total_pages; 159 + unsigned short flags; 160 + unsigned short order; 161 + }; 162 + 163 + #endif /* _LINUX_KHO_ABI_KEXEC_HANDOVER_H */
+85 -4
include/linux/kho/abi/luo.h
··· 8 8 /** 9 9 * DOC: Live Update Orchestrator ABI 10 10 * 11 - * This header defines the stable Application Binary Interface used by the 12 - * Live Update Orchestrator to pass state from a pre-update kernel to a 13 - * post-update kernel. The ABI is built upon the Kexec HandOver framework 14 - * and uses a Flattened Device Tree to describe the preserved data. 11 + * Live Update Orchestrator uses the stable Application Binary Interface 12 + * defined below to pass state from a pre-update kernel to a post-update 13 + * kernel. The ABI is built upon the Kexec HandOver framework and uses a 14 + * Flattened Device Tree to describe the preserved data. 15 15 * 16 16 * This interface is a contract. Any modification to the FDT structure, node 17 17 * properties, compatible strings, or the layout of the `__packed` serialization ··· 37 37 * compatible = "luo-session-v1"; 38 38 * luo-session-header = <phys_addr_of_session_header_ser>; 39 39 * }; 40 + * 41 + * luo-flb { 42 + * compatible = "luo-flb-v1"; 43 + * luo-flb-header = <phys_addr_of_flb_header_ser>; 44 + * }; 40 45 * }; 41 46 * 42 47 * Main LUO Node (/): ··· 61 56 * is the header for a contiguous block of memory containing an array of 62 57 * `struct luo_session_ser`, one for each preserved session. 63 58 * 59 + * File-Lifecycle-Bound Node (luo-flb): 60 + * This node describes all preserved global objects whose lifecycle is bound 61 + * to that of the preserved files (e.g., shared IOMMU state). 62 + * 63 + * - compatible: "luo-flb-v1" 64 + * Identifies the FLB ABI version. 65 + * - luo-flb-header: u64 66 + * The physical address of a `struct luo_flb_header_ser`. This structure is 67 + * the header for a contiguous block of memory containing an array of 68 + * `struct luo_flb_ser`, one for each preserved global object. 69 + * 64 70 * Serialization Structures: 65 71 * The FDT properties point to memory regions containing arrays of simple, 66 72 * `__packed` structures. 
These structures contain the actual preserved state. ··· 90 74 * Metadata for a single preserved file. Contains the `compatible` string to 91 75 * find the correct handler in the new kernel, a user-provided `token` for 92 76 * identification, and an opaque `data` handle for the handler to use. 77 + * 78 + * - struct luo_flb_header_ser: 79 + * Header for the FLB array. Contains the total page count of the 80 + * preserved memory block and the number of `struct luo_flb_ser` entries 81 + * that follow. 82 + * 83 + * - struct luo_flb_ser: 84 + * Metadata for a single preserved global object. Contains its `name` 85 + * (compatible string), an opaque `data` handle, and the `count` 86 + * number of files depending on it. 93 87 */ 94 88 95 89 #ifndef _LINUX_KHO_ABI_LUO_H ··· 188 162 char name[LIVEUPDATE_SESSION_NAME_LENGTH]; 189 163 struct luo_file_set_ser file_set_ser; 190 164 } __packed; 165 + 166 + /* The max size is set so it can be reliably used during serialization */ 167 + #define LIVEUPDATE_FLB_COMPAT_LENGTH 48 168 + 169 + #define LUO_FDT_FLB_NODE_NAME "luo-flb" 170 + #define LUO_FDT_FLB_COMPATIBLE "luo-flb-v1" 171 + #define LUO_FDT_FLB_HEADER "luo-flb-header" 172 + 173 + /** 174 + * struct luo_flb_header_ser - Header for the serialized FLB data block. 175 + * @pgcnt: The total number of pages occupied by the entire preserved memory 176 + * region, including this header and the subsequent array of 177 + * &struct luo_flb_ser entries. 178 + * @count: The number of &struct luo_flb_ser entries that follow this header 179 + * in the memory block. 180 + * 181 + * This structure is located at the physical address specified by the 182 + * `LUO_FDT_FLB_HEADER` FDT property. It provides the new kernel with the 183 + * necessary information to find and iterate over the array of preserved 184 + * File-Lifecycle-Bound objects and to manage the underlying memory. 185 + * 186 + * If this structure is modified, LUO_FDT_FLB_COMPATIBLE must be updated. 
187 + */ 188 + struct luo_flb_header_ser { 189 + u64 pgcnt; 190 + u64 count; 191 + } __packed; 192 + 193 + /** 194 + * struct luo_flb_ser - Represents the serialized state of a single FLB object. 195 + * @name: The unique compatibility string of the FLB object, used to find the 196 + * corresponding &struct liveupdate_flb handler in the new kernel. 197 + * @data: The opaque u64 handle returned by the FLB's .preserve() operation 198 + * in the old kernel. This handle encapsulates the entire state needed 199 + * for restoration. 200 + * @count: The reference count at the time of serialization; i.e., the number 201 + * of preserved files that depended on this FLB. This is used by the 202 + * new kernel to correctly manage the FLB's lifecycle. 203 + * 204 + * An array of these structures is created in a preserved memory region and 205 + * passed to the new kernel. Each entry allows the LUO core to restore one 206 + * global, shared object. 207 + * 208 + * If this structure is modified, LUO_FDT_FLB_COMPATIBLE must be updated. 209 + */ 210 + struct luo_flb_ser { 211 + char name[LIVEUPDATE_FLB_COMPAT_LENGTH]; 212 + u64 data; 213 + u64 count; 214 + } __packed; 215 + 216 + /* Kernel Live Update Test ABI */ 217 + #ifdef CONFIG_LIVEUPDATE_TEST 218 + #define LIVEUPDATE_TEST_FLB_COMPATIBLE(i) "liveupdate-test-flb-v" #i 219 + #endif 191 220 192 221 #endif /* _LINUX_KHO_ABI_LUO_H */
+73
include/linux/kho/abi/memblock.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef _LINUX_KHO_ABI_MEMBLOCK_H 4 + #define _LINUX_KHO_ABI_MEMBLOCK_H 5 + 6 + /** 7 + * DOC: memblock kexec handover ABI 8 + * 9 + * Memblock can serialize its current memory reservations created with 10 + * reserve_mem command line option across kexec through KHO. 11 + * The post-KHO kernel can then consume these reservations and they are 12 + * guaranteed to have the same physical address. 13 + * 14 + * The state is serialized using Flattened Device Tree (FDT) format. Any 15 + * modification to the FDT structure, node properties, or the compatible 16 + * strings constitutes a breaking change. Such changes require incrementing the 17 + * version number in the relevant `_COMPATIBLE` string to prevent a new kernel 18 + * from misinterpreting data from an old kernel. 19 + * 20 + * Changes are allowed provided the compatibility version is incremented. 21 + * However, backward/forward compatibility is only guaranteed for kernels 22 + * supporting the same ABI version. 23 + * 24 + * FDT Structure Overview: 25 + * The entire memblock state is encapsulated within a single KHO entry named 26 + * "memblock". 27 + * This entry contains an FDT with the following layout: 28 + * 29 + * .. code-block:: none 30 + * 31 + * / { 32 + * compatible = "memblock-v1"; 33 + * 34 + * n1 { 35 + * compatible = "reserve-mem-v1"; 36 + * start = <0xc06b 0x4000000>; 37 + * size = <0x04 0x00>; 38 + * }; 39 + * }; 40 + * 41 + * Main memblock node (/): 42 + * 43 + * - compatible: "memblock-v1" 44 + 45 + * Identifies the overall memblock ABI version. 46 + * 47 + * reserved_mem node: 48 + * These nodes describe all reserve_mem regions. The node name is the name 49 + * defined by the user for a reserve_mem region. 50 + * 51 + * - compatible: "reserve-mem-v1" 52 + * 53 + * Identifies the ABI version of reserve_mem descriptions 54 + * 55 + * - start: u64 56 + * 57 + * Physical address of the reserved memory region. 
58 + * 59 + * - size: u64 60 + * 61 + * size in bytes of the reserved memory region. 62 + */ 63 + 64 + /* Top level memblock FDT node name. */ 65 + #define MEMBLOCK_KHO_FDT "memblock" 66 + 67 + /* The compatible string for the memblock FDT root node. */ 68 + #define MEMBLOCK_KHO_NODE_COMPATIBLE "memblock-v1" 69 + 70 + /* The compatible string for the reserve_mem FDT nodes. */ 71 + #define RESERVE_MEM_KHO_NODE_COMPATIBLE "reserve-mem-v1" 72 + 73 + #endif /* _LINUX_KHO_ABI_MEMBLOCK_H */
+3 -3
include/linux/kho/abi/memfd.h
··· 12 12 #define _LINUX_KHO_ABI_MEMFD_H 13 13 14 14 #include <linux/types.h> 15 - #include <linux/kexec_handover.h> 15 + #include <linux/kho/abi/kexec_handover.h> 16 16 17 17 /** 18 18 * DOC: memfd Live Update ABI 19 19 * 20 - * This header defines the ABI for preserving the state of a memfd across a 21 - * kexec reboot using the LUO. 20 + * memfd uses the ABI defined below for preserving its state across a kexec 21 + * reboot using the LUO. 22 22 * 23 23 * The state is serialized into a packed structure `struct memfd_luo_ser` 24 24 * which is handed over to the next kernel via the KHO mechanism.
+256
include/linux/list_private.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + /* 4 + * Copyright (c) 2025, Google LLC. 5 + * Pasha Tatashin <pasha.tatashin@soleen.com> 6 + */ 7 + #ifndef _LINUX_LIST_PRIVATE_H 8 + #define _LINUX_LIST_PRIVATE_H 9 + 10 + /** 11 + * DOC: Private List Primitives 12 + * 13 + * Provides a set of list primitives identical in function to those in 14 + * ``<linux/list.h>``, but designed for cases where the embedded 15 + * ``&struct list_head`` is a private member. 16 + */ 17 + 18 + #include <linux/compiler.h> 19 + #include <linux/list.h> 20 + 21 + #define __list_private_offset(type, member) \ 22 + ((size_t)(&ACCESS_PRIVATE(((type *)0), member))) 23 + 24 + /** 25 + * list_private_entry - get the struct for this entry 26 + * @ptr: the &struct list_head pointer. 27 + * @type: the type of the struct this is embedded in. 28 + * @member: the identifier passed to ACCESS_PRIVATE. 29 + */ 30 + #define list_private_entry(ptr, type, member) ({ \ 31 + const struct list_head *__mptr = (ptr); \ 32 + (type *)((char *)__mptr - __list_private_offset(type, member)); \ 33 + }) 34 + 35 + /** 36 + * list_private_first_entry - get the first element from a list 37 + * @ptr: the list head to take the element from. 38 + * @type: the type of the struct this is embedded in. 39 + * @member: the identifier passed to ACCESS_PRIVATE. 40 + */ 41 + #define list_private_first_entry(ptr, type, member) \ 42 + list_private_entry((ptr)->next, type, member) 43 + 44 + /** 45 + * list_private_last_entry - get the last element from a list 46 + * @ptr: the list head to take the element from. 47 + * @type: the type of the struct this is embedded in. 48 + * @member: the identifier passed to ACCESS_PRIVATE. 49 + */ 50 + #define list_private_last_entry(ptr, type, member) \ 51 + list_private_entry((ptr)->prev, type, member) 52 + 53 + /** 54 + * list_private_next_entry - get the next element in list 55 + * @pos: the type * to cursor 56 + * @member: the name of the list_head within the struct. 
57 + */ 58 + #define list_private_next_entry(pos, member) \ 59 + list_private_entry(ACCESS_PRIVATE(pos, member).next, typeof(*(pos)), member) 60 + 61 + /** 62 + * list_private_next_entry_circular - get the next element in list 63 + * @pos: the type * to cursor. 64 + * @head: the list head to take the element from. 65 + * @member: the name of the list_head within the struct. 66 + * 67 + * Wraps around if pos is the last element (returns the first element). 68 + * Note that the list is expected to be non-empty. 69 + */ 70 + #define list_private_next_entry_circular(pos, head, member) \ 71 + (list_is_last(&ACCESS_PRIVATE(pos, member), head) ? \ 72 + list_private_first_entry(head, typeof(*(pos)), member) : \ 73 + list_private_next_entry(pos, member)) 74 + 75 + /** 76 + * list_private_prev_entry - get the prev element in list 77 + * @pos: the type * to cursor 78 + * @member: the name of the list_head within the struct. 79 + */ 80 + #define list_private_prev_entry(pos, member) \ 81 + list_private_entry(ACCESS_PRIVATE(pos, member).prev, typeof(*(pos)), member) 82 + 83 + /** 84 + * list_private_prev_entry_circular - get the prev element in list 85 + * @pos: the type * to cursor. 86 + * @head: the list head to take the element from. 87 + * @member: the name of the list_head within the struct. 88 + * 89 + * Wraps around if pos is the first element (returns the last element). 90 + * Note that the list is expected to be non-empty. 91 + */ 92 + #define list_private_prev_entry_circular(pos, head, member) \ 93 + (list_is_first(&ACCESS_PRIVATE(pos, member), head) ? \ 94 + list_private_last_entry(head, typeof(*(pos)), member) : \ 95 + list_private_prev_entry(pos, member)) 96 + 97 + /** 98 + * list_private_entry_is_head - test if the entry points to the head of the list 99 + * @pos: the type * to cursor 100 + * @head: the head for your list. 101 + * @member: the name of the list_head within the struct. 
102 + */ 103 + #define list_private_entry_is_head(pos, head, member) \ 104 + list_is_head(&ACCESS_PRIVATE(pos, member), (head)) 105 + 106 + /** 107 + * list_private_for_each_entry - iterate over list of given type 108 + * @pos: the type * to use as a loop cursor. 109 + * @head: the head for your list. 110 + * @member: the name of the list_head within the struct. 111 + */ 112 + #define list_private_for_each_entry(pos, head, member) \ 113 + for (pos = list_private_first_entry(head, typeof(*pos), member); \ 114 + !list_private_entry_is_head(pos, head, member); \ 115 + pos = list_private_next_entry(pos, member)) 116 + 117 + /** 118 + * list_private_for_each_entry_reverse - iterate backwards over list of given type. 119 + * @pos: the type * to use as a loop cursor. 120 + * @head: the head for your list. 121 + * @member: the name of the list_head within the struct. 122 + */ 123 + #define list_private_for_each_entry_reverse(pos, head, member) \ 124 + for (pos = list_private_last_entry(head, typeof(*pos), member); \ 125 + !list_private_entry_is_head(pos, head, member); \ 126 + pos = list_private_prev_entry(pos, member)) 127 + 128 + /** 129 + * list_private_for_each_entry_continue - continue iteration over list of given type 130 + * @pos: the type * to use as a loop cursor. 131 + * @head: the head for your list. 132 + * @member: the name of the list_head within the struct. 133 + * 134 + * Continue to iterate over list of given type, continuing after 135 + * the current position. 136 + */ 137 + #define list_private_for_each_entry_continue(pos, head, member) \ 138 + for (pos = list_private_next_entry(pos, member); \ 139 + !list_private_entry_is_head(pos, head, member); \ 140 + pos = list_private_next_entry(pos, member)) 141 + 142 + /** 143 + * list_private_for_each_entry_continue_reverse - iterate backwards from the given point 144 + * @pos: the type * to use as a loop cursor. 145 + * @head: the head for your list. 146 + * @member: the name of the list_head within the struct. 
147 + * 148 + * Start to iterate over list of given type backwards, continuing after 149 + * the current position. 150 + */ 151 + #define list_private_for_each_entry_continue_reverse(pos, head, member) \ 152 + for (pos = list_private_prev_entry(pos, member); \ 153 + !list_private_entry_is_head(pos, head, member); \ 154 + pos = list_private_prev_entry(pos, member)) 155 + 156 + /** 157 + * list_private_for_each_entry_from - iterate over list of given type from the current point 158 + * @pos: the type * to use as a loop cursor. 159 + * @head: the head for your list. 160 + * @member: the name of the list_head within the struct. 161 + * 162 + * Iterate over list of given type, continuing from current position. 163 + */ 164 + #define list_private_for_each_entry_from(pos, head, member) \ 165 + for (; !list_private_entry_is_head(pos, head, member); \ 166 + pos = list_private_next_entry(pos, member)) 167 + 168 + /** 169 + * list_private_for_each_entry_from_reverse - iterate backwards over list of given type 170 + * from the current point 171 + * @pos: the type * to use as a loop cursor. 172 + * @head: the head for your list. 173 + * @member: the name of the list_head within the struct. 174 + * 175 + * Iterate backwards over list of given type, continuing from current position. 176 + */ 177 + #define list_private_for_each_entry_from_reverse(pos, head, member) \ 178 + for (; !list_private_entry_is_head(pos, head, member); \ 179 + pos = list_private_prev_entry(pos, member)) 180 + 181 + /** 182 + * list_private_for_each_entry_safe - iterate over list of given type safe against removal of list entry 183 + * @pos: the type * to use as a loop cursor. 184 + * @n: another type * to use as temporary storage 185 + * @head: the head for your list. 186 + * @member: the name of the list_head within the struct. 
187 + */ 188 + #define list_private_for_each_entry_safe(pos, n, head, member) \ 189 + for (pos = list_private_first_entry(head, typeof(*pos), member), \ 190 + n = list_private_next_entry(pos, member); \ 191 + !list_private_entry_is_head(pos, head, member); \ 192 + pos = n, n = list_private_next_entry(n, member)) 193 + 194 + /** 195 + * list_private_for_each_entry_safe_continue - continue list iteration safe against removal 196 + * @pos: the type * to use as a loop cursor. 197 + * @n: another type * to use as temporary storage 198 + * @head: the head for your list. 199 + * @member: the name of the list_head within the struct. 200 + * 201 + * Iterate over list of given type, continuing after current point, 202 + * safe against removal of list entry. 203 + */ 204 + #define list_private_for_each_entry_safe_continue(pos, n, head, member) \ 205 + for (pos = list_private_next_entry(pos, member), \ 206 + n = list_private_next_entry(pos, member); \ 207 + !list_private_entry_is_head(pos, head, member); \ 208 + pos = n, n = list_private_next_entry(n, member)) 209 + 210 + /** 211 + * list_private_for_each_entry_safe_from - iterate over list from current point safe against removal 212 + * @pos: the type * to use as a loop cursor. 213 + * @n: another type * to use as temporary storage 214 + * @head: the head for your list. 215 + * @member: the name of the list_head within the struct. 216 + * 217 + * Iterate over list of given type from current point, safe against 218 + * removal of list entry. 219 + */ 220 + #define list_private_for_each_entry_safe_from(pos, n, head, member) \ 221 + for (n = list_private_next_entry(pos, member); \ 222 + !list_private_entry_is_head(pos, head, member); \ 223 + pos = n, n = list_private_next_entry(n, member)) 224 + 225 + /** 226 + * list_private_for_each_entry_safe_reverse - iterate backwards over list safe against removal 227 + * @pos: the type * to use as a loop cursor. 
228 + * @n: another type * to use as temporary storage 229 + * @head: the head for your list. 230 + * @member: the name of the list_head within the struct. 231 + * 232 + * Iterate backwards over list of given type, safe against removal 233 + * of list entry. 234 + */ 235 + #define list_private_for_each_entry_safe_reverse(pos, n, head, member) \ 236 + for (pos = list_private_last_entry(head, typeof(*pos), member), \ 237 + n = list_private_prev_entry(pos, member); \ 238 + !list_private_entry_is_head(pos, head, member); \ 239 + pos = n, n = list_private_prev_entry(n, member)) 240 + 241 + /** 242 + * list_private_safe_reset_next - reset a stale list_for_each_entry_safe loop 243 + * @pos: the loop cursor used in the list_for_each_entry_safe loop 244 + * @n: temporary storage used in list_for_each_entry_safe 245 + * @member: the name of the list_head within the struct. 246 + * 247 + * list_safe_reset_next is not safe to use in general if the list may be 248 + * modified concurrently (eg. the lock is dropped in the loop body). An 249 + * exception to this is if the cursor element (pos) is pinned in the list, 250 + * and list_safe_reset_next is called after re-taking the lock and before 251 + * completing the current iteration of the loop body. 252 + */ 253 + #define list_private_safe_reset_next(pos, n, member) \ 254 + n = list_private_next_entry(pos, member) 255 + 256 + #endif /* _LINUX_LIST_PRIVATE_H */
+147
include/linux/liveupdate.h
··· 11 11 #include <linux/compiler.h> 12 12 #include <linux/kho/abi/luo.h> 13 13 #include <linux/list.h> 14 + #include <linux/mutex.h> 14 15 #include <linux/types.h> 15 16 #include <uapi/linux/liveupdate.h> 16 17 17 18 struct liveupdate_file_handler; 19 + struct liveupdate_flb; 20 + struct liveupdate_session; 18 21 struct file; 19 22 20 23 /** ··· 102 99 * registered file handlers. 103 100 */ 104 101 struct list_head __private list; 102 + /* A list of FLB dependencies. */ 103 + struct list_head __private flb_list; 104 + }; 105 + 106 + /** 107 + * struct liveupdate_flb_op_args - Arguments for FLB operation callbacks. 108 + * @flb: The global FLB instance for which this call is performed. 109 + * @data: For .preserve(): [OUT] The callback sets this field. 110 + * For .unpreserve(): [IN] The handle from .preserve(). 111 + * For .retrieve(): [IN] The handle from .preserve(). 112 + * @obj: For .preserve(): [OUT] Sets this to the live object. 113 + * For .retrieve(): [OUT] Sets this to the live object. 114 + * For .finish(): [IN] The live object from .retrieve(). 115 + * 116 + * This structure bundles all parameters for the FLB operation callbacks. 117 + */ 118 + struct liveupdate_flb_op_args { 119 + struct liveupdate_flb *flb; 120 + u64 data; 121 + void *obj; 122 + }; 123 + 124 + /** 125 + * struct liveupdate_flb_ops - Callbacks for global File-Lifecycle-Bound data. 126 + * @preserve: Called when the first file using this FLB is preserved. 127 + * The callback must save its state and return a single, 128 + * self-contained u64 handle by setting the 'argp->data' 129 + * field and 'argp->obj'. 130 + * @unpreserve: Called when the last file using this FLB is unpreserved 131 + * (aborted before reboot). Receives the handle via 132 + * 'argp->data' and live object via 'argp->obj'. 133 + * @retrieve: Called on-demand in the new kernel, the first time a 134 + * component requests access to the shared object. 
It receives 135 + * the preserved handle via 'argp->data' and must reconstruct 136 + * the live object, returning it by setting the 'argp->obj' 137 + * field. 138 + * @finish: Called in the new kernel when the last file using this FLB 139 + * is finished. Receives the live object via 'argp->obj' for 140 + * cleanup. 141 + * @owner: Module reference 142 + * 143 + * Operations that manage global shared data with a file-bound lifecycle, 144 + * triggered by the first file that uses it and concluded by the last file that 145 + * uses it, across all sessions. 146 + */ 147 + struct liveupdate_flb_ops { 148 + int (*preserve)(struct liveupdate_flb_op_args *argp); 149 + void (*unpreserve)(struct liveupdate_flb_op_args *argp); 150 + int (*retrieve)(struct liveupdate_flb_op_args *argp); 151 + void (*finish)(struct liveupdate_flb_op_args *argp); 152 + struct module *owner; 153 + }; 154 + 155 + /* 156 + * struct luo_flb_private_state - Private FLB state structure. 157 + * @count: The number of preserved files currently depending on this FLB. 158 + * This is used to trigger the preserve/unpreserve/finish ops on the 159 + * first/last file. 160 + * @data: The opaque u64 handle returned by .preserve() or passed to 161 + * .retrieve(). 162 + * @obj: The live kernel object returned by .preserve() or .retrieve(). 163 + * @lock: A mutex that protects all fields within this structure, providing 164 + * the synchronization service for the FLB's ops. 165 + * @finished: True once the FLB's finish() callback has run. 166 + * @retrieved: True once the FLB's retrieve() callback has run. 167 + */ 168 + struct luo_flb_private_state { 169 + long count; 170 + u64 data; 171 + void *obj; 172 + struct mutex lock; 173 + bool finished; 174 + bool retrieved; 175 + }; 176 + 177 + /* 178 + * struct luo_flb_private - Keep separate incoming and outgoing states. 179 + * @list: A global list of registered FLBs. 180 + * @outgoing: The runtime state for the pre-reboot 181 + * (preserve/unpreserve) lifecycle. 
182 + * @incoming: The runtime state for the post-reboot (retrieve/finish) 183 + * lifecycle. 184 + * @users: The number of file handlers this FLB is registered with. 185 + * @initialized: true when private fields have been initialized. 186 + */ 187 + struct luo_flb_private { 188 + struct list_head list; 189 + struct luo_flb_private_state outgoing; 190 + struct luo_flb_private_state incoming; 191 + int users; 192 + bool initialized; 193 + }; 194 + 195 + /** 196 + * struct liveupdate_flb - A global definition for a shared data object. 197 + * @ops: Callback functions 198 + * @compatible: The compatibility string (e.g., "iommu-core-v1") 199 + * that uniquely identifies the FLB type this handler 200 + * supports. This is matched against the compatible string 201 + * associated with individual &struct liveupdate_flb 202 + * instances. 203 + * 204 + * This struct is the "template" that a driver registers to define a shared, 205 + * file-lifecycle-bound object. The actual runtime state (the live object, 206 + * refcount, etc.) is managed privately by the LUO core. 
207 + */ 208 + struct liveupdate_flb { 209 + const struct liveupdate_flb_ops *ops; 210 + const char compatible[LIVEUPDATE_FLB_COMPAT_LENGTH]; 211 + 212 + /* private: */ 213 + struct luo_flb_private __private private; 105 214 }; 106 215 107 216 #ifdef CONFIG_LIVEUPDATE ··· 226 111 227 112 int liveupdate_register_file_handler(struct liveupdate_file_handler *fh); 228 113 int liveupdate_unregister_file_handler(struct liveupdate_file_handler *fh); 114 + 115 + int liveupdate_register_flb(struct liveupdate_file_handler *fh, 116 + struct liveupdate_flb *flb); 117 + int liveupdate_unregister_flb(struct liveupdate_file_handler *fh, 118 + struct liveupdate_flb *flb); 119 + 120 + int liveupdate_flb_get_incoming(struct liveupdate_flb *flb, void **objp); 121 + int liveupdate_flb_get_outgoing(struct liveupdate_flb *flb, void **objp); 229 122 230 123 #else /* CONFIG_LIVEUPDATE */ 231 124 ··· 253 130 } 254 131 255 132 static inline int liveupdate_unregister_file_handler(struct liveupdate_file_handler *fh) 133 + { 134 + return -EOPNOTSUPP; 135 + } 136 + 137 + static inline int liveupdate_register_flb(struct liveupdate_file_handler *fh, 138 + struct liveupdate_flb *flb) 139 + { 140 + return -EOPNOTSUPP; 141 + } 142 + 143 + static inline int liveupdate_unregister_flb(struct liveupdate_file_handler *fh, 144 + struct liveupdate_flb *flb) 145 + { 146 + return -EOPNOTSUPP; 147 + } 148 + 149 + static inline int liveupdate_flb_get_incoming(struct liveupdate_flb *flb, 150 + void **objp) 151 + { 152 + return -EOPNOTSUPP; 153 + } 154 + 155 + static inline int liveupdate_flb_get_outgoing(struct liveupdate_flb *flb, 156 + void **objp) 256 157 { 257 158 return -EOPNOTSUPP; 258 159 }
+1 -1
include/linux/log2.h
··· 44 44 static __always_inline __attribute__((const)) 45 45 bool is_power_of_2(unsigned long n) 46 46 { 47 - return (n != 0 && ((n & (n - 1)) == 0)); 47 + return n - 1 < (n ^ (n - 1)); 48 48 } 49 49 50 50 /**
+9
include/linux/module.h
··· 742 742 __mod ? __mod->name : "kernel"; \ 743 743 }) 744 744 745 + static inline const unsigned char *module_buildid(struct module *mod) 746 + { 747 + #ifdef CONFIG_STACKTRACE_BUILD_ID 748 + return mod->build_id; 749 + #else 750 + return NULL; 751 + #endif 752 + } 753 + 745 754 /* Dereference module function descriptor */ 746 755 void *dereference_module_function_descriptor(struct module *mod, void *ptr); 747 756
+6 -1
include/linux/moduleparam.h
··· 2 2 #ifndef _LINUX_MODULE_PARAMS_H 3 3 #define _LINUX_MODULE_PARAMS_H 4 4 /* (C) Copyright 2001, 2002 Rusty Russell IBM Corporation */ 5 + 6 + #include <linux/array_size.h> 7 + #include <linux/build_bug.h> 8 + #include <linux/compiler.h> 5 9 #include <linux/init.h> 6 10 #include <linux/stringify.h> 7 - #include <linux/kernel.h> 11 + #include <linux/sysfs.h> 12 + #include <linux/types.h> 8 13 9 14 /* 10 15 * The maximum module name length, including the NUL byte.
+8
include/linux/panic.h
··· 41 41 * PANIC_CPU_INVALID means no CPU has entered panic() or crash_kexec(). 42 42 */ 43 43 extern atomic_t panic_cpu; 44 + 45 + /* 46 + * panic_redirect_cpu is used when panic is redirected to a specific CPU via 47 + * the panic_force_cpu= boot parameter. It holds the CPU number that originally 48 + * triggered the panic before redirection. A value of PANIC_CPU_INVALID means 49 + * no redirection has occurred. 50 + */ 51 + extern atomic_t panic_redirect_cpu; 44 52 #define PANIC_CPU_INVALID -1 45 53 46 54 bool panic_try_start(void);
+5
include/linux/sched.h
··· 49 49 #include <linux/tracepoint-defs.h> 50 50 #include <linux/unwind_deferred_types.h> 51 51 #include <asm/kmap_size.h> 52 + #include <linux/time64.h> 52 53 #ifndef COMPILE_OFFSETS 53 54 #include <generated/rq-offsets.h> 54 55 #endif ··· 87 86 struct task_delay_info; 88 87 struct task_group; 89 88 struct task_struct; 89 + struct timespec64; 90 90 struct user_event_mm; 91 91 92 92 #include <linux/sched/ext.h> ··· 436 434 437 435 /* When were we last queued to run? */ 438 436 unsigned long long last_queued; 437 + 438 + /* Timestamp of max time spent waiting on a runqueue: */ 439 + struct timespec64 max_run_delay_ts; 439 440 440 441 #endif /* CONFIG_SCHED_INFO */ 441 442 };
+1
include/linux/smp.h
··· 62 62 void __noreturn panic_smp_self_stop(void); 63 63 void __noreturn nmi_panic_self_stop(struct pt_regs *regs); 64 64 void crash_smp_send_stop(void); 65 + int panic_smp_redirect_cpu(int target_cpu, void *msg); 65 66 66 67 /* 67 68 * Call a function on all processors
+13
include/linux/sysfs.h
··· 808 808 kernfs_put(kn); 809 809 } 810 810 811 + /* Permissions on a sysfs file: you didn't miss the 0 prefix did you? */ 812 + #define VERIFY_OCTAL_PERMISSIONS(perms) \ 813 + (BUILD_BUG_ON_ZERO((perms) < 0) + \ 814 + BUILD_BUG_ON_ZERO((perms) > 0777) + \ 815 + /* USER_READABLE >= GROUP_READABLE >= OTHER_READABLE */ \ 816 + BUILD_BUG_ON_ZERO((((perms) >> 6) & 4) < (((perms) >> 3) & 4)) + \ 817 + BUILD_BUG_ON_ZERO((((perms) >> 3) & 4) < ((perms) & 4)) + \ 818 + /* USER_WRITABLE >= GROUP_WRITABLE */ \ 819 + BUILD_BUG_ON_ZERO((((perms) >> 6) & 2) < (((perms) >> 3) & 2)) + \ 820 + /* OTHER_WRITABLE? Generally considered a bad idea. */ \ 821 + BUILD_BUG_ON_ZERO((perms) & 2) + \ 822 + (perms)) 823 + 811 824 #endif /* _SYSFS_H_ */
+204
include/linux/trace_printk.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_TRACE_PRINTK_H 3 + #define _LINUX_TRACE_PRINTK_H 4 + 5 + #include <linux/compiler_attributes.h> 6 + #include <linux/instruction_pointer.h> 7 + #include <linux/stddef.h> 8 + #include <linux/stringify.h> 9 + 10 + /* 11 + * General tracing related utility functions - trace_printk(), 12 + * tracing_on/tracing_off and tracing_start()/tracing_stop 13 + * 14 + * Use tracing_on/tracing_off when you want to quickly turn on or off 15 + * tracing. It simply enables or disables the recording of the trace events. 16 + * This also corresponds to the user space /sys/kernel/tracing/tracing_on 17 + * file, which gives a means for the kernel and userspace to interact. 18 + * Place a tracing_off() in the kernel where you want tracing to end. 19 + * From user space, examine the trace, and then echo 1 > tracing_on 20 + * to continue tracing. 21 + * 22 + * tracing_stop/tracing_start has slightly more overhead. It is used 23 + * by things like suspend to ram where disabling the recording of the 24 + * trace is not enough, but tracing must actually stop because things 25 + * like calling smp_processor_id() may crash the system. 26 + * 27 + * Most likely, you want to use tracing_on/tracing_off. 28 + */ 29 + 30 + enum ftrace_dump_mode { 31 + DUMP_NONE, 32 + DUMP_ALL, 33 + DUMP_ORIG, 34 + DUMP_PARAM, 35 + }; 36 + 37 + #ifdef CONFIG_TRACING 38 + void tracing_on(void); 39 + void tracing_off(void); 40 + int tracing_is_on(void); 41 + void tracing_snapshot(void); 42 + void tracing_snapshot_alloc(void); 43 + 44 + extern void tracing_start(void); 45 + extern void tracing_stop(void); 46 + 47 + static inline __printf(1, 2) 48 + void ____trace_printk_check_format(const char *fmt, ...) 49 + { 50 + } 51 + #define __trace_printk_check_format(fmt, args...) 
\ 52 + do { \ 53 + if (0) \ 54 + ____trace_printk_check_format(fmt, ##args); \ 55 + } while (0) 56 + 57 + /** 58 + * trace_printk - printf formatting in the ftrace buffer 59 + * @fmt: the printf format for printing 60 + * 61 + * Note: __trace_printk is an internal function for trace_printk() and 62 + * the @ip is passed in via the trace_printk() macro. 63 + * 64 + * This function allows a kernel developer to debug fast path sections 65 + * that printk is not appropriate for. By scattering in various 66 + * printk like tracing in the code, a developer can quickly see 67 + * where problems are occurring. 68 + * 69 + * This is intended as a debugging tool for the developer only. 70 + * Please refrain from leaving trace_printks scattered around in 71 + * your code. (Extra memory is used for special buffers that are 72 + * allocated when trace_printk() is used.) 73 + * 74 + * A little optimization trick is done here. If there's only one 75 + * argument, there's no need to scan the string for printf formats. 76 + * The trace_puts() will suffice. But how can we take advantage of 77 + * using trace_puts() when trace_printk() has only one argument? 78 + * By stringifying the args and checking the size we can tell 79 + * whether or not there are args. __stringify((__VA_ARGS__)) will 80 + * turn into "()\0" with a size of 3 when there are no args, anything 81 + * else will be bigger. All we need to do is define a string to this, 82 + * and then take its size and compare to 3. If it's bigger, use 83 + * do_trace_printk() otherwise, optimize it to trace_puts(). Then just 84 + * let gcc optimize the rest. 85 + */ 86 + 87 + #define trace_printk(fmt, ...) \ 88 + do { \ 89 + char _______STR[] = __stringify((__VA_ARGS__)); \ 90 + if (sizeof(_______STR) > 3) \ 91 + do_trace_printk(fmt, ##__VA_ARGS__); \ 92 + else \ 93 + trace_puts(fmt); \ 94 + } while (0) 95 + 96 + #define do_trace_printk(fmt, args...) 
\ 97 + do { \ 98 + static const char *trace_printk_fmt __used \ 99 + __section("__trace_printk_fmt") = \ 100 + __builtin_constant_p(fmt) ? fmt : NULL; \ 101 + \ 102 + __trace_printk_check_format(fmt, ##args); \ 103 + \ 104 + if (__builtin_constant_p(fmt)) \ 105 + __trace_bprintk(_THIS_IP_, trace_printk_fmt, ##args); \ 106 + else \ 107 + __trace_printk(_THIS_IP_, fmt, ##args); \ 108 + } while (0) 109 + 110 + extern __printf(2, 3) 111 + int __trace_bprintk(unsigned long ip, const char *fmt, ...); 112 + 113 + extern __printf(2, 3) 114 + int __trace_printk(unsigned long ip, const char *fmt, ...); 115 + 116 + /** 117 + * trace_puts - write a string into the ftrace buffer 118 + * @str: the string to record 119 + * 120 + * Note: __trace_bputs is an internal function for trace_puts and 121 + * the @ip is passed in via the trace_puts macro. 122 + * 123 + * This is similar to trace_printk() but is made for those really fast 124 + * paths that a developer wants the least amount of "Heisenbug" effects, 125 + * where the processing of the print format is still too much. 126 + * 127 + * This function allows a kernel developer to debug fast path sections 128 + * that printk is not appropriate for. By scattering in various 129 + * printk like tracing in the code, a developer can quickly see 130 + * where problems are occurring. 131 + * 132 + * This is intended as a debugging tool for the developer only. 133 + * Please refrain from leaving trace_puts scattered around in 134 + * your code. (Extra memory is used for special buffers that are 135 + * allocated when trace_puts() is used.) 136 + * 137 + * Returns: 0 if nothing was written, positive # if string was. 138 + * (1 when __trace_bputs is used, strlen(str) when __trace_puts is used) 139 + */ 140 + 141 + #define trace_puts(str) ({ \ 142 + static const char *trace_printk_fmt __used \ 143 + __section("__trace_printk_fmt") = \ 144 + __builtin_constant_p(str) ? 
str : NULL; \ 145 + \ 146 + if (__builtin_constant_p(str)) \ 147 + __trace_bputs(_THIS_IP_, trace_printk_fmt); \ 148 + else \ 149 + __trace_puts(_THIS_IP_, str); \ 150 + }) 151 + extern int __trace_bputs(unsigned long ip, const char *str); 152 + extern int __trace_puts(unsigned long ip, const char *str); 153 + 154 + extern void trace_dump_stack(int skip); 155 + 156 + /* 157 + * The double __builtin_constant_p is because gcc will give us an error 158 + * if we try to allocate the static variable to fmt if it is not a 159 + * constant. Even with the outer if statement. 160 + */ 161 + #define ftrace_vprintk(fmt, vargs) \ 162 + do { \ 163 + if (__builtin_constant_p(fmt)) { \ 164 + static const char *trace_printk_fmt __used \ 165 + __section("__trace_printk_fmt") = \ 166 + __builtin_constant_p(fmt) ? fmt : NULL; \ 167 + \ 168 + __ftrace_vbprintk(_THIS_IP_, trace_printk_fmt, vargs); \ 169 + } else \ 170 + __ftrace_vprintk(_THIS_IP_, fmt, vargs); \ 171 + } while (0) 172 + 173 + extern __printf(2, 0) int 174 + __ftrace_vbprintk(unsigned long ip, const char *fmt, va_list ap); 175 + 176 + extern __printf(2, 0) int 177 + __ftrace_vprintk(unsigned long ip, const char *fmt, va_list ap); 178 + 179 + extern void ftrace_dump(enum ftrace_dump_mode oops_dump_mode); 180 + #else 181 + static inline void tracing_start(void) { } 182 + static inline void tracing_stop(void) { } 183 + static inline void trace_dump_stack(int skip) { } 184 + 185 + static inline void tracing_on(void) { } 186 + static inline void tracing_off(void) { } 187 + static inline int tracing_is_on(void) { return 0; } 188 + static inline void tracing_snapshot(void) { } 189 + static inline void tracing_snapshot_alloc(void) { } 190 + 191 + static inline __printf(1, 2) 192 + int trace_printk(const char *fmt, ...) 
193 + { 194 + return 0; 195 + } 196 + static __printf(1, 0) inline int 197 + ftrace_vprintk(const char *fmt, va_list ap) 198 + { 199 + return 0; 200 + } 201 + static inline void ftrace_dump(enum ftrace_dump_mode oops_dump_mode) { } 202 + #endif /* CONFIG_TRACING */ 203 + 204 + #endif
+1 -2
include/linux/types.h
··· 2 2 #ifndef _LINUX_TYPES_H 3 3 #define _LINUX_TYPES_H 4 4 5 - #define __EXPORTED_HEADERS__ 6 5 #include <uapi/linux/types.h> 7 6 8 7 #ifndef __ASSEMBLY__ ··· 184 185 typedef unsigned long irq_hw_number_t; 185 186 186 187 typedef struct { 187 - int counter; 188 + int __aligned(sizeof(int)) counter; 188 189 } atomic_t; 189 190 190 191 #define ATOMIC_INIT(i) { (i) }
+1
include/linux/ww_mutex.h
··· 17 17 #ifndef __LINUX_WW_MUTEX_H 18 18 #define __LINUX_WW_MUTEX_H 19 19 20 + #include <linux/instruction_pointer.h> 20 21 #include <linux/mutex.h> 21 22 #include <linux/rtmutex.h> 22 23
-3
include/uapi/linux/shm.h
··· 5 5 #include <linux/ipc.h> 6 6 #include <linux/errno.h> 7 7 #include <asm-generic/hugetlb_encode.h> 8 - #ifndef __KERNEL__ 9 - #include <unistd.h> 10 - #endif 11 8 12 9 /* 13 10 * SHMMNI, SHMMAX and SHMALL are default upper limits which can be
+12 -1
include/uapi/linux/taskstats.h
··· 18 18 #define _LINUX_TASKSTATS_H 19 19 20 20 #include <linux/types.h> 21 + #include <linux/time_types.h> 21 22 22 23 /* Format for per-task data returned to userland when 23 24 * - a task exits ··· 35 34 */ 36 35 37 36 38 - #define TASKSTATS_VERSION 16 37 + #define TASKSTATS_VERSION 17 39 38 #define TS_COMM_LEN 32 /* should be >= TASK_COMM_LEN 40 39 * in linux/sched.h */ 41 40 ··· 231 230 232 231 __u64 irq_delay_max; 233 232 __u64 irq_delay_min; 233 + 234 + /*v17: delay max timestamp record*/ 235 + struct __kernel_timespec cpu_delay_max_ts; 236 + struct __kernel_timespec blkio_delay_max_ts; 237 + struct __kernel_timespec swapin_delay_max_ts; 238 + struct __kernel_timespec freepages_delay_max_ts; 239 + struct __kernel_timespec thrashing_delay_max_ts; 240 + struct __kernel_timespec compact_delay_max_ts; 241 + struct __kernel_timespec wpcopy_delay_max_ts; 242 + struct __kernel_timespec irq_delay_max_ts; 234 243 }; 235 244 236 245
+10 -6
init/main.c
··· 104 104 #include <linux/pidfs.h> 105 105 #include <linux/ptdump.h> 106 106 #include <linux/time_namespace.h> 107 + #include <linux/unaligned.h> 107 108 #include <net/net_namespace.h> 108 109 109 110 #include <asm/io.h> ··· 163 162 164 163 static char *execute_command; 165 164 static char *ramdisk_execute_command = "/init"; 165 + static bool __initdata ramdisk_execute_command_set; 166 166 167 167 /* 168 168 * Used to generate warnings if static_key manipulation functions are used ··· 271 269 { 272 270 u32 size, csum; 273 271 char *data; 274 - u32 *hdr; 272 + u8 *hdr; 275 273 int i; 276 274 277 275 if (!initrd_end) ··· 290 288 return NULL; 291 289 292 290 found: 293 - hdr = (u32 *)(data - 8); 294 - size = le32_to_cpu(hdr[0]); 295 - csum = le32_to_cpu(hdr[1]); 291 + hdr = (u8 *)(data - 8); 292 + size = get_unaligned_le32(hdr); 293 + csum = get_unaligned_le32(hdr + 4); 296 294 297 295 data = ((void *)hdr) - size; 298 296 if ((unsigned long)data < initrd_start) { ··· 625 623 unsigned int i; 626 624 627 625 ramdisk_execute_command = str; 626 + ramdisk_execute_command_set = true; 628 627 /* See "auto" comment in init_setup */ 629 628 for (i = 1; i < MAX_INIT_ARGS; i++) 630 629 argv_init[i] = NULL; ··· 1703 1700 int ramdisk_command_access; 1704 1701 ramdisk_command_access = init_eaccess(ramdisk_execute_command); 1705 1702 if (ramdisk_command_access != 0) { 1706 - pr_warn("check access for rdinit=%s failed: %i, ignoring\n", 1707 - ramdisk_execute_command, ramdisk_command_access); 1703 + if (ramdisk_execute_command_set) 1704 + pr_warn("check access for rdinit=%s failed: %i, ignoring\n", 1705 + ramdisk_execute_command, ramdisk_command_access); 1708 1706 ramdisk_execute_command = NULL; 1709 1707 prepare_namespace(); 1710 1708 }
+1 -1
ipc/ipc_sysctl.c
··· 214 214 if (((table->data == &ns->ids[IPC_SEM_IDS].next_id) || 215 215 (table->data == &ns->ids[IPC_MSG_IDS].next_id) || 216 216 (table->data == &ns->ids[IPC_SHM_IDS].next_id)) && 217 - checkpoint_restore_ns_capable(ns->user_ns)) 217 + checkpoint_restore_ns_capable_noaudit(ns->user_ns)) 218 218 mode = 0666; 219 219 else 220 220 #endif
+1
kernel/audit.c
··· 32 32 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 33 33 34 34 #include <linux/file.h> 35 + #include <linux/hex.h> 35 36 #include <linux/init.h> 36 37 #include <linux/types.h> 37 38 #include <linux/atomic.h>
+3 -2
kernel/bpf/core.c
··· 25 25 #include <linux/prandom.h> 26 26 #include <linux/bpf.h> 27 27 #include <linux/btf.h> 28 + #include <linux/hex.h> 28 29 #include <linux/objtool.h> 29 30 #include <linux/overflow.h> 30 31 #include <linux/rbtree_latch.h> ··· 717 716 return n ? container_of(n, struct bpf_ksym, tnode) : NULL; 718 717 } 719 718 720 - int __bpf_address_lookup(unsigned long addr, unsigned long *size, 721 - unsigned long *off, char *sym) 719 + int bpf_address_lookup(unsigned long addr, unsigned long *size, 720 + unsigned long *off, char *sym) 722 721 { 723 722 struct bpf_ksym *ksym; 724 723 int ret = 0;
-1
kernel/bpf/rqspinlock.c
··· 695 695 int ret; 696 696 697 697 BUILD_BUG_ON(sizeof(rqspinlock_t) != sizeof(struct bpf_res_spin_lock)); 698 - BUILD_BUG_ON(__alignof__(rqspinlock_t) != __alignof__(struct bpf_res_spin_lock)); 699 698 700 699 preempt_disable(); 701 700 ret = res_spin_lock((rqspinlock_t *)lock);
+1
kernel/bpf/syscall.c
··· 9 9 #include <linux/bpf_verifier.h> 10 10 #include <linux/bsearch.h> 11 11 #include <linux/btf.h> 12 + #include <linux/hex.h> 12 13 #include <linux/syscalls.h> 13 14 #include <linux/slab.h> 14 15 #include <linux/sched/signal.h>
+1 -1
kernel/configs/debug.config
··· 84 84 # Debug Oops, Lockups and Hangs 85 85 # 86 86 CONFIG_BOOTPARAM_HUNG_TASK_PANIC=0 87 - # CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set 87 + CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=0 88 88 CONFIG_DEBUG_ATOMIC_SLEEP=y 89 89 CONFIG_DETECT_HUNG_TASK=y 90 90 CONFIG_PANIC_ON_OOPS=y
+13 -4
kernel/crash_core.c
··· 44 44 45 45 int kimage_crash_copy_vmcoreinfo(struct kimage *image) 46 46 { 47 - struct page *vmcoreinfo_page; 47 + struct page *vmcoreinfo_base; 48 + struct page *vmcoreinfo_pages[DIV_ROUND_UP(VMCOREINFO_BYTES, PAGE_SIZE)]; 49 + unsigned int order, nr_pages; 50 + int i; 48 51 void *safecopy; 52 + 53 + nr_pages = DIV_ROUND_UP(VMCOREINFO_BYTES, PAGE_SIZE); 54 + order = get_order(VMCOREINFO_BYTES); 49 55 50 56 if (!IS_ENABLED(CONFIG_CRASH_DUMP)) 51 57 return 0; ··· 67 61 * happens to generate vmcoreinfo note, hereby we rely on 68 62 * vmap for this purpose. 69 63 */ 70 - vmcoreinfo_page = kimage_alloc_control_pages(image, 0); 71 - if (!vmcoreinfo_page) { 64 + vmcoreinfo_base = kimage_alloc_control_pages(image, order); 65 + if (!vmcoreinfo_base) { 72 66 pr_warn("Could not allocate vmcoreinfo buffer\n"); 73 67 return -ENOMEM; 74 68 } 75 - safecopy = vmap(&vmcoreinfo_page, 1, VM_MAP, PAGE_KERNEL); 69 + for (i = 0; i < nr_pages; i++) 70 + vmcoreinfo_pages[i] = vmcoreinfo_base + i; 71 + 72 + safecopy = vmap(vmcoreinfo_pages, nr_pages, VM_MAP, PAGE_KERNEL); 76 73 if (!safecopy) { 77 74 pr_warn("Could not vmap vmcoreinfo buffer\n"); 78 75 return -ENOMEM;
+15 -6
kernel/crash_dump_dm_crypt.c
··· 143 143 { 144 144 const struct user_key_payload *ukp; 145 145 struct key *key; 146 + int ret = 0; 146 147 147 148 kexec_dprintk("Requesting logon key %s", dm_key->key_desc); 148 149 key = request_key(&key_type_logon, dm_key->key_desc, NULL); ··· 153 152 return PTR_ERR(key); 154 153 } 155 154 155 + down_read(&key->sem); 156 156 ukp = user_key_payload_locked(key); 157 - if (!ukp) 158 - return -EKEYREVOKED; 157 + if (!ukp) { 158 + ret = -EKEYREVOKED; 159 + goto out; 160 + } 159 161 160 162 if (ukp->datalen > KEY_SIZE_MAX) { 161 163 pr_err("Key size %u exceeds maximum (%u)\n", ukp->datalen, KEY_SIZE_MAX); 162 - return -EINVAL; 164 + ret = -EINVAL; 165 + goto out; 163 166 } 164 167 165 168 memcpy(dm_key->data, ukp->data, ukp->datalen); 166 169 dm_key->key_size = ukp->datalen; 167 170 kexec_dprintk("Get dm crypt key (size=%u) %s: %8ph\n", dm_key->key_size, 168 171 dm_key->key_desc, dm_key->data); 169 - return 0; 172 + 173 + out: 174 + up_read(&key->sem); 175 + key_put(key); 176 + return ret; 170 177 } 171 178 172 179 struct config_key { ··· 232 223 key_count--; 233 224 } 234 225 235 - static struct configfs_item_operations config_key_item_ops = { 226 + static const struct configfs_item_operations config_key_item_ops = { 236 227 .release = config_key_release, 237 228 }; 238 229 ··· 307 298 * Note that, since no extra work is required on ->drop_item(), 308 299 * no ->drop_item() is provided. 309 300 */ 310 - static struct configfs_group_operations config_keys_group_ops = { 301 + static const struct configfs_group_operations config_keys_group_ops = { 311 302 .make_item = config_keys_make_item, 312 303 }; 313 304
+1
kernel/debug/gdbstub.c
··· 27 27 28 28 #include <linux/kernel.h> 29 29 #include <linux/sched/signal.h> 30 + #include <linux/hex.h> 30 31 #include <linux/kgdb.h> 31 32 #include <linux/kdb.h> 32 33 #include <linux/serial_core.h>
+24 -9
kernel/delayacct.c
··· 18 18 do { \ 19 19 d->type##_delay_max = tsk->delays->type##_delay_max; \ 20 20 d->type##_delay_min = tsk->delays->type##_delay_min; \ 21 + d->type##_delay_max_ts.tv_sec = tsk->delays->type##_delay_max_ts.tv_sec; \ 22 + d->type##_delay_max_ts.tv_nsec = tsk->delays->type##_delay_max_ts.tv_nsec; \ 21 23 tmp = d->type##_delay_total + tsk->delays->type##_delay; \ 22 24 d->type##_delay_total = (tmp < d->type##_delay_total) ? 0 : tmp; \ 23 25 d->type##_count += tsk->delays->type##_count; \ ··· 106 104 * Finish delay accounting for a statistic using its timestamps (@start), 107 105 * accumulator (@total) and @count 108 106 */ 109 - static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total, u32 *count, u64 *max, u64 *min) 107 + static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total, u32 *count, 108 + u64 *max, u64 *min, struct timespec64 *ts) 110 109 { 111 110 s64 ns = local_clock() - *start; 112 111 unsigned long flags; ··· 116 113 raw_spin_lock_irqsave(lock, flags); 117 114 *total += ns; 118 115 (*count)++; 119 - if (ns > *max) 116 + if (ns > *max) { 120 117 *max = ns; 118 + ktime_get_real_ts64(ts); 119 + } 121 120 if (*min == 0 || ns < *min) 122 121 *min = ns; 123 122 raw_spin_unlock_irqrestore(lock, flags); ··· 142 137 &p->delays->blkio_delay, 143 138 &p->delays->blkio_count, 144 139 &p->delays->blkio_delay_max, 145 - &p->delays->blkio_delay_min); 140 + &p->delays->blkio_delay_min, 141 + &p->delays->blkio_delay_max_ts); 146 142 } 147 143 148 144 int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk) ··· 176 170 177 171 d->cpu_delay_max = tsk->sched_info.max_run_delay; 178 172 d->cpu_delay_min = tsk->sched_info.min_run_delay; 173 + d->cpu_delay_max_ts.tv_sec = tsk->sched_info.max_run_delay_ts.tv_sec; 174 + d->cpu_delay_max_ts.tv_nsec = tsk->sched_info.max_run_delay_ts.tv_nsec; 179 175 tmp = (s64)d->cpu_delay_total + t2; 180 176 d->cpu_delay_total = (tmp < (s64)d->cpu_delay_total) ? 
0 : tmp; 181 177 tmp = (s64)d->cpu_run_virtual_total + t3; ··· 225 217 &current->delays->freepages_delay, 226 218 &current->delays->freepages_count, 227 219 &current->delays->freepages_delay_max, 228 - &current->delays->freepages_delay_min); 220 + &current->delays->freepages_delay_min, 221 + &current->delays->freepages_delay_max_ts); 229 222 } 230 223 231 224 void __delayacct_thrashing_start(bool *in_thrashing) ··· 250 241 &current->delays->thrashing_delay, 251 242 &current->delays->thrashing_count, 252 243 &current->delays->thrashing_delay_max, 253 - &current->delays->thrashing_delay_min); 244 + &current->delays->thrashing_delay_min, 245 + &current->delays->thrashing_delay_max_ts); 254 246 } 255 247 256 248 void __delayacct_swapin_start(void) ··· 266 256 &current->delays->swapin_delay, 267 257 &current->delays->swapin_count, 268 258 &current->delays->swapin_delay_max, 269 - &current->delays->swapin_delay_min); 259 + &current->delays->swapin_delay_min, 260 + &current->delays->swapin_delay_max_ts); 270 261 } 271 262 272 263 void __delayacct_compact_start(void) ··· 282 271 &current->delays->compact_delay, 283 272 &current->delays->compact_count, 284 273 &current->delays->compact_delay_max, 285 - &current->delays->compact_delay_min); 274 + &current->delays->compact_delay_min, 275 + &current->delays->compact_delay_max_ts); 286 276 } 287 277 288 278 void __delayacct_wpcopy_start(void) ··· 298 286 &current->delays->wpcopy_delay, 299 287 &current->delays->wpcopy_count, 300 288 &current->delays->wpcopy_delay_max, 301 - &current->delays->wpcopy_delay_min); 289 + &current->delays->wpcopy_delay_min, 290 + &current->delays->wpcopy_delay_max_ts); 302 291 } 303 292 304 293 void __delayacct_irq(struct task_struct *task, u32 delta) ··· 309 296 raw_spin_lock_irqsave(&task->delays->lock, flags); 310 297 task->delays->irq_delay += delta; 311 298 task->delays->irq_count++; 312 - if (delta > task->delays->irq_delay_max) 299 + if (delta > task->delays->irq_delay_max) { 313 300 
task->delays->irq_delay_max = delta; 301 + ktime_get_real_ts64(&task->delays->irq_delay_max_ts); 302 + } 314 303 if (delta && (!task->delays->irq_delay_min || delta < task->delays->irq_delay_min)) 315 304 task->delays->irq_delay_min = delta; 316 305 raw_spin_unlock_irqrestore(&task->delays->lock, flags);
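The delayacct hunks above all follow one pattern: when a sample exceeds the running per-type maximum, record a wall-clock timestamp next to it (via `ktime_get_real_ts64()`), so taskstats v17 can report *when* the worst delay happened. A hedged userspace sketch of that pattern (the type and function names here are illustrative, not kernel ones):

```c
#include <stdint.h>
#include <time.h>

/* Running maximum plus the wall-clock time at which it was observed. */
struct delay_stat {
	uint64_t max;           /* largest delay seen, in ns */
	struct timespec max_ts; /* when that maximum was recorded */
};

/*
 * Update the maximum and stamp it only when a new record is set,
 * mirroring the "if (ns > *max) { *max = ns; ktime_get_real_ts64(ts); }"
 * shape added to delayacct_end() and __delayacct_irq().
 */
static void record_delay(struct delay_stat *s, uint64_t ns,
			 const struct timespec *now)
{
	if (ns > s->max) {
		s->max = ns;
		s->max_ts = *now;
	}
}
```

Note the timestamp is only written on a new maximum, so smaller samples leave both fields untouched; in the kernel the update runs under `task->delays->lock`.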
+2 -2
kernel/fork.c
··· 1357 1357 * @task: The task. 1358 1358 * 1359 1359 * Returns %NULL if the task has no mm. Checks PF_KTHREAD (meaning 1360 - * this kernel workthread has transiently adopted a user mm with use_mm, 1360 + * this kernel workthread has transiently adopted a user mm with kthread_use_mm, 1361 1361 * to do its AIO) is not set and if so returns a reference to it, after 1362 1362 * bumping up the use count. User must release the mm via mmput() 1363 1363 * after use. Typically used by /proc and ptrace. ··· 2069 2069 2070 2070 p->set_child_tid = (clone_flags & CLONE_CHILD_SETTID) ? args->child_tid : NULL; 2071 2071 /* 2072 - * Clear TID on mm_release()? 2072 + * TID is cleared in mm_release() when the task exits 2073 2073 */ 2074 2074 p->clear_child_tid = (clone_flags & CLONE_CHILD_CLEARTID) ? args->child_tid : NULL; 2075 2075
+54 -19
kernel/kallsyms.c
··· 347 347 return 1; 348 348 } 349 349 return !!module_address_lookup(addr, symbolsize, offset, NULL, NULL, namebuf) || 350 - !!__bpf_address_lookup(addr, symbolsize, offset, namebuf); 350 + !!bpf_address_lookup(addr, symbolsize, offset, namebuf); 351 351 352 352 353 353 static int kallsyms_lookup_buildid(unsigned long addr, ··· 357 357 { 358 358 int ret; 359 359 360 - namebuf[KSYM_NAME_LEN - 1] = 0; 360 + /* 361 + * kallsyms_lookup() returns a pointer to namebuf on success and 362 + * NULL on error. But some callers ignore the return value. 363 + * Instead they expect @namebuf filled with either a valid 364 + * or an empty string. 365 + */ 361 366 namebuf[0] = 0; 367 + /* 368 + * Initialize the module-related return values. They are not set 369 + * when the symbol is in vmlinux or it is a bpf address. 370 + */ 371 + if (modname) 372 + *modname = NULL; 373 + if (modbuildid) 374 + *modbuildid = NULL; 362 375 363 376 if (is_ksym_addr(addr)) { 364 377 unsigned long pos; ··· 380 367 /* Grab name */ 381 368 kallsyms_expand_symbol(get_symbol_offset(pos), 382 369 namebuf, KSYM_NAME_LEN); 383 - if (modname) 384 - *modname = NULL; 385 - if (modbuildid) 386 - *modbuildid = NULL; 387 370 388 371 return strlen(namebuf); 389 372 } ··· 388 379 ret = module_address_lookup(addr, symbolsize, offset, 389 380 modname, modbuildid, namebuf); 390 381 if (!ret) 391 - ret = bpf_address_lookup(addr, symbolsize, 392 - offset, modname, namebuf); 382 + ret = bpf_address_lookup(addr, symbolsize, offset, namebuf); 393 383 394 384 if (!ret) 395 - ret = ftrace_mod_address_lookup(addr, symbolsize, 396 - offset, modname, namebuf); 385 + ret = ftrace_mod_address_lookup(addr, symbolsize, offset, 386 + modname, modbuildid, namebuf); 397 387 398 388 return ret; 399 389 } ··· 436 428 return lookup_module_symbol_name(addr, symname); 437 429 } 438 430 431 + #ifdef CONFIG_STACKTRACE_BUILD_ID 432 + 433 + static int append_buildid(char *buffer, const char *modname, 434 + const unsigned char *buildid) 435 + { 436 
+ if (!modname) 437 + return 0; 438 + 439 + if (!buildid) { 440 + pr_warn_once("Undefined buildid for the module %s\n", modname); 441 + return 0; 442 + } 443 + 444 + /* build ID should match length of sprintf */ 445 + #ifdef CONFIG_MODULES 446 + static_assert(sizeof(typeof_member(struct module, build_id)) == 20); 447 + #endif 448 + 449 + return sprintf(buffer, " %20phN", buildid); 450 + } 451 + 452 + #else /* CONFIG_STACKTRACE_BUILD_ID */ 453 + 454 + static int append_buildid(char *buffer, const char *modname, 455 + const unsigned char *buildid) 456 + { 457 + return 0; 458 + } 459 + 460 + #endif /* CONFIG_STACKTRACE_BUILD_ID */ 461 + 439 462 /* Look up a kernel symbol and return it in a text buffer. */ 440 463 static int __sprint_symbol(char *buffer, unsigned long address, 441 464 int symbol_offset, int add_offset, int add_buildid) ··· 475 436 const unsigned char *buildid; 476 437 unsigned long offset, size; 477 438 int len; 439 + 440 + /* Prevent module removal until modname and modbuildid are printed */ 441 + guard(rcu)(); 478 442 479 443 address += symbol_offset; 480 444 len = kallsyms_lookup_buildid(address, &size, &offset, &modname, &buildid, ··· 492 450 493 451 if (modname) { 494 452 len += sprintf(buffer + len, " [%s", modname); 495 - #if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID) 496 - if (add_buildid && buildid) { 497 - /* build ID should match length of sprintf */ 498 - #if IS_ENABLED(CONFIG_MODULES) 499 - static_assert(sizeof(typeof_member(struct module, build_id)) == 20); 500 - #endif 501 - len += sprintf(buffer + len, " %20phN", buildid); 502 - } 503 - #endif 453 + if (add_buildid) 454 + len += append_buildid(buffer + len, modname, buildid); 504 455 len += sprintf(buffer + len, "]"); 505 456 } 506 457
+2 -2
kernel/kcsan/kcsan_test.c
··· 176 176 177 177 /* Title */ 178 178 cur = expect[0]; 179 - end = &expect[0][sizeof(expect[0]) - 1]; 179 + end = ARRAY_END(expect[0]); 180 180 cur += scnprintf(cur, end - cur, "BUG: KCSAN: %s in ", 181 181 is_assert ? "assert: race" : "data-race"); 182 182 if (r->access[1].fn) { ··· 200 200 201 201 /* Access 1 */ 202 202 cur = expect[1]; 203 - end = &expect[1][sizeof(expect[1]) - 1]; 203 + end = ARRAY_END(expect[1]); 204 204 if (!r->access[1].fn) 205 205 cur += scnprintf(cur, end - cur, "race at unknown origin, with "); 206 206
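The kcsan_test hunk above swaps hand-written `&expect[0][sizeof(expect[0]) - 1]` end pointers for the new `ARRAY_END()` macro from the "Add ARRAY_END(), and use it to fix off-by-one bugs" series. As a hedged sketch (this is the conventional one-past-the-end definition, not necessarily the kernel's exact macro text), the idiom looks like:

```c
#include <assert.h>

/* Sketch of the macros; the kernel defines ARRAY_SIZE() in its headers. */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
#define ARRAY_END(a)  (&(a)[ARRAY_SIZE(a)])

static int array_end_demo(void)
{
	char buf[32];
	char *cur = buf;
	char *end = ARRAY_END(buf); /* one past the last element */

	/* end - cur is the full capacity, suitable for scnprintf-style
	 * bounds math; the old "sizeof - 1" form silently dropped a byte. */
	return (int)(end - cur);
}
```

The point of the series is exactly that difference: `&a[sizeof(a) - 1]` is the *last* element, while the natural bound for size arithmetic is one past it.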
+74 -57
kernel/kexec_file.c
··· 883 883 884 884 #ifdef CONFIG_ARCH_SUPPORTS_KEXEC_PURGATORY 885 885 /* 886 + * kexec_purgatory_find_symbol - find a symbol in the purgatory 887 + * @pi: Purgatory to search in. 888 + * @name: Name of the symbol. 889 + * 890 + * Return: pointer to symbol in read-only symtab on success, NULL on error. 891 + */ 892 + static const Elf_Sym *kexec_purgatory_find_symbol(struct purgatory_info *pi, 893 + const char *name) 894 + { 895 + const Elf_Shdr *sechdrs; 896 + const Elf_Ehdr *ehdr; 897 + const Elf_Sym *syms; 898 + const char *strtab; 899 + int i, k; 900 + 901 + if (!pi->ehdr) 902 + return NULL; 903 + 904 + ehdr = pi->ehdr; 905 + sechdrs = (void *)ehdr + ehdr->e_shoff; 906 + 907 + for (i = 0; i < ehdr->e_shnum; i++) { 908 + if (sechdrs[i].sh_type != SHT_SYMTAB) 909 + continue; 910 + 911 + if (sechdrs[i].sh_link >= ehdr->e_shnum) 912 + /* Invalid strtab section number */ 913 + continue; 914 + strtab = (void *)ehdr + sechdrs[sechdrs[i].sh_link].sh_offset; 915 + syms = (void *)ehdr + sechdrs[i].sh_offset; 916 + 917 + /* Go through symbols for a match */ 918 + for (k = 0; k < sechdrs[i].sh_size/sizeof(Elf_Sym); k++) { 919 + if (ELF_ST_BIND(syms[k].st_info) != STB_GLOBAL) 920 + continue; 921 + 922 + if (strcmp(strtab + syms[k].st_name, name) != 0) 923 + continue; 924 + 925 + if (syms[k].st_shndx == SHN_UNDEF || 926 + syms[k].st_shndx >= ehdr->e_shnum) { 927 + pr_debug("Symbol: %s has bad section index %d.\n", 928 + name, syms[k].st_shndx); 929 + return NULL; 930 + } 931 + 932 + /* Found the symbol we are looking for */ 933 + return &syms[k]; 934 + } 935 + } 936 + 937 + return NULL; 938 + } 939 + /* 886 940 * kexec_purgatory_setup_kbuf - prepare buffer to load purgatory. 887 941 * @pi: Purgatory to be loaded. 888 942 * @kbuf: Buffer to setup. 
··· 1014 960 unsigned long offset; 1015 961 size_t sechdrs_size; 1016 962 Elf_Shdr *sechdrs; 963 + const Elf_Sym *entry_sym; 964 + u16 entry_shndx = 0; 965 + unsigned long entry_off = 0; 966 + bool start_fixed = false; 1017 967 int i; 1018 968 1019 969 /* ··· 1034 976 offset = 0; 1035 977 bss_addr = kbuf->mem + kbuf->bufsz; 1036 978 kbuf->image->start = pi->ehdr->e_entry; 979 + 980 + entry_sym = kexec_purgatory_find_symbol(pi, "purgatory_start"); 981 + if (entry_sym) { 982 + entry_shndx = entry_sym->st_shndx; 983 + entry_off = entry_sym->st_value; 984 + } 1037 985 1038 986 for (i = 0; i < pi->ehdr->e_shnum; i++) { 1039 987 unsigned long align; ··· 1058 994 1059 995 offset = ALIGN(offset, align); 1060 996 997 + if (!start_fixed && entry_sym && i == entry_shndx && 998 + (sechdrs[i].sh_flags & SHF_EXECINSTR) && 999 + entry_off < sechdrs[i].sh_size) { 1000 + kbuf->image->start = kbuf->mem + offset + entry_off; 1001 + start_fixed = true; 1002 + } 1003 + 1061 1004 /* 1062 1005 * Check if the segment contains the entry point, if so, 1063 1006 * calculate the value of image->start based on it. ··· 1075 1004 * is not set to the initial value, and warn the user so they 1076 1005 * have a chance to fix their purgatory's linker script. 
1077 1006 */ 1078 - if (sechdrs[i].sh_flags & SHF_EXECINSTR && 1007 + if (!start_fixed && sechdrs[i].sh_flags & SHF_EXECINSTR && 1079 1008 pi->ehdr->e_entry >= sechdrs[i].sh_addr && 1080 1009 pi->ehdr->e_entry < (sechdrs[i].sh_addr 1081 1010 + sechdrs[i].sh_size) && 1082 - !WARN_ON(kbuf->image->start != pi->ehdr->e_entry)) { 1011 + kbuf->image->start == pi->ehdr->e_entry) { 1083 1012 kbuf->image->start -= sechdrs[i].sh_addr; 1084 1013 kbuf->image->start += kbuf->mem + offset; 1014 + start_fixed = true; 1085 1015 } 1086 1016 1087 1017 src = (void *)pi->ehdr + sechdrs[i].sh_offset; ··· 1198 1126 vfree(pi->purgatory_buf); 1199 1127 pi->purgatory_buf = NULL; 1200 1128 return ret; 1201 - } 1202 - 1203 - /* 1204 - * kexec_purgatory_find_symbol - find a symbol in the purgatory 1205 - * @pi: Purgatory to search in. 1206 - * @name: Name of the symbol. 1207 - * 1208 - * Return: pointer to symbol in read-only symtab on success, NULL on error. 1209 - */ 1210 - static const Elf_Sym *kexec_purgatory_find_symbol(struct purgatory_info *pi, 1211 - const char *name) 1212 - { 1213 - const Elf_Shdr *sechdrs; 1214 - const Elf_Ehdr *ehdr; 1215 - const Elf_Sym *syms; 1216 - const char *strtab; 1217 - int i, k; 1218 - 1219 - if (!pi->ehdr) 1220 - return NULL; 1221 - 1222 - ehdr = pi->ehdr; 1223 - sechdrs = (void *)ehdr + ehdr->e_shoff; 1224 - 1225 - for (i = 0; i < ehdr->e_shnum; i++) { 1226 - if (sechdrs[i].sh_type != SHT_SYMTAB) 1227 - continue; 1228 - 1229 - if (sechdrs[i].sh_link >= ehdr->e_shnum) 1230 - /* Invalid strtab section number */ 1231 - continue; 1232 - strtab = (void *)ehdr + sechdrs[sechdrs[i].sh_link].sh_offset; 1233 - syms = (void *)ehdr + sechdrs[i].sh_offset; 1234 - 1235 - /* Go through symbols for a match */ 1236 - for (k = 0; k < sechdrs[i].sh_size/sizeof(Elf_Sym); k++) { 1237 - if (ELF_ST_BIND(syms[k].st_info) != STB_GLOBAL) 1238 - continue; 1239 - 1240 - if (strcmp(strtab + syms[k].st_name, name) != 0) 1241 - continue; 1242 - 1243 - if (syms[k].st_shndx == 
SHN_UNDEF || 1244 - syms[k].st_shndx >= ehdr->e_shnum) { 1245 - pr_debug("Symbol: %s has bad section index %d.\n", 1246 - name, syms[k].st_shndx); 1247 - return NULL; 1248 - } 1249 - 1250 - /* Found the symbol we are looking for */ 1251 - return &syms[k]; 1252 - } 1253 - } 1254 - 1255 - return NULL; 1256 1129 } 1257 1130 1258 1131 void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name)
+16 -1
kernel/liveupdate/Kconfig
··· 54 54 config LIVEUPDATE 55 55 bool "Live Update Orchestrator" 56 56 depends on KEXEC_HANDOVER 57 - depends on SHMEM 58 57 help 59 58 Enable the Live Update Orchestrator. Live Update is a mechanism, 60 59 typically based on kexec, that allows the kernel to be updated ··· 69 70 This feature primarily targets virtual machine hosts to quickly update 70 71 the kernel hypervisor with minimal disruption to the running virtual 71 72 machines. 73 + 74 + If unsure, say N. 75 + 76 + config LIVEUPDATE_MEMFD 77 + bool "Live update support for memfd" 78 + depends on LIVEUPDATE 79 + depends on MEMFD_CREATE 80 + depends on SHMEM 81 + default LIVEUPDATE 82 + help 83 + Enable live update support for memfd regions. This allows preserving 84 + memfd-backed memory across kernel live updates. 85 + 86 + This can be used to back VM memory with memfds, allowing the guest 87 + memory to persist, or for other user workloads needing to preserve 88 + pages. 72 89 73 90 If unsure, say N. 74 91
+1
kernel/liveupdate/Makefile
··· 3 3 luo-y := \ 4 4 luo_core.o \ 5 5 luo_file.o \ 6 + luo_flb.o \ 6 7 luo_session.o 7 8 8 9 obj-$(CONFIG_KEXEC_HANDOVER) += kexec_handover.o
+75 -72
kernel/liveupdate/kexec_handover.c
··· 15 15 #include <linux/count_zeros.h> 16 16 #include <linux/kexec.h> 17 17 #include <linux/kexec_handover.h> 18 + #include <linux/kho/abi/kexec_handover.h> 18 19 #include <linux/libfdt.h> 19 20 #include <linux/list.h> 20 21 #include <linux/memblock.h> ··· 25 24 26 25 #include <asm/early_ioremap.h> 27 26 28 - #include "kexec_handover_internal.h" 29 27 /* 30 28 * KHO is tightly coupled with mm init and needs access to some of mm 31 29 * internal APIs. ··· 33 33 #include "../kexec_internal.h" 34 34 #include "kexec_handover_internal.h" 35 35 36 - #define KHO_FDT_COMPATIBLE "kho-v1" 37 - #define PROP_PRESERVED_MEMORY_MAP "preserved-memory-map" 38 - #define PROP_SUB_FDT "fdt" 39 - 36 + /* The magic token for preserved pages */ 40 37 #define KHO_PAGE_MAGIC 0x4b484f50U /* ASCII for 'KHOP' */ 41 38 42 39 /* ··· 216 219 return 0; 217 220 } 218 221 222 + /* For physically contiguous 0-order pages. */ 223 + static void kho_init_pages(struct page *page, unsigned long nr_pages) 224 + { 225 + for (unsigned long i = 0; i < nr_pages; i++) 226 + set_page_count(page + i, 1); 227 + } 228 + 229 + static void kho_init_folio(struct page *page, unsigned int order) 230 + { 231 + unsigned long nr_pages = (1 << order); 232 + 233 + /* Head page gets refcount of 1. */ 234 + set_page_count(page, 1); 235 + 236 + /* For higher order folios, tail pages get a page count of zero. */ 237 + for (unsigned long i = 1; i < nr_pages; i++) 238 + set_page_count(page + i, 0); 239 + 240 + if (order > 0) 241 + prep_compound_page(page, order); 242 + } 243 + 219 244 static struct page *kho_restore_page(phys_addr_t phys, bool is_folio) 220 245 { 221 246 struct page *page = pfn_to_online_page(PHYS_PFN(phys)); 222 - unsigned int nr_pages, ref_cnt; 247 + unsigned long nr_pages; 223 248 union kho_page_info info; 224 249 225 250 if (!page) ··· 259 240 260 241 /* Clear private to make sure later restores on this page error out. */ 261 242 page->private = 0; 262 - /* Head page gets refcount of 1. 
*/ 263 - set_page_count(page, 1); 264 243 265 - /* 266 - * For higher order folios, tail pages get a page count of zero. 267 - * For physically contiguous order-0 pages every pages gets a page 268 - * count of 1 269 - */ 270 - ref_cnt = is_folio ? 0 : 1; 271 - for (unsigned int i = 1; i < nr_pages; i++) 272 - set_page_count(page + i, ref_cnt); 273 - 274 - if (is_folio && info.order) 275 - prep_compound_page(page, info.order); 244 + if (is_folio) 245 + kho_init_folio(page, info.order); 246 + else 247 + kho_init_pages(page, nr_pages); 276 248 277 249 /* Always mark headpage's codetag as empty to avoid accounting mismatch */ 278 250 clear_page_tag_ref(page); ··· 299 289 * Restore a contiguous list of order 0 pages that was preserved with 300 290 * kho_preserve_pages(). 301 291 * 302 - * Return: 0 on success, error code on failure 292 + * Return: the first page on success, NULL on failure. 303 293 */ 304 - struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages) 294 + struct page *kho_restore_pages(phys_addr_t phys, unsigned long nr_pages) 305 295 { 306 296 const unsigned long start_pfn = PHYS_PFN(phys); 307 297 const unsigned long end_pfn = start_pfn + nr_pages; ··· 396 386 void *ptr; 397 387 u64 phys; 398 388 399 - ptr = fdt_getprop_w(kho_out.fdt, 0, PROP_PRESERVED_MEMORY_MAP, NULL); 389 + ptr = fdt_getprop_w(kho_out.fdt, 0, KHO_FDT_MEMORY_MAP_PROP_NAME, NULL); 400 390 401 391 /* Check and discard previous memory map */ 402 392 phys = get_unaligned((u64 *)ptr); ··· 484 474 const void *mem_ptr; 485 475 int len; 486 476 487 - mem_ptr = fdt_getprop(fdt, 0, PROP_PRESERVED_MEMORY_MAP, &len); 477 + mem_ptr = fdt_getprop(fdt, 0, KHO_FDT_MEMORY_MAP_PROP_NAME, &len); 488 478 if (!mem_ptr || len != sizeof(u64)) { 489 479 pr_err("failed to get preserved memory bitmaps\n"); 490 480 return 0; ··· 655 645 scratch_size_update(); 656 646 657 647 /* FIXME: deal with node hot-plug/remove */ 658 - kho_scratch_cnt = num_online_nodes() + 2; 648 + kho_scratch_cnt = 
nodes_weight(node_states[N_MEMORY]) + 2; 659 649 size = kho_scratch_cnt * sizeof(*kho_scratch); 660 650 kho_scratch = memblock_alloc(size, PAGE_SIZE); 661 - if (!kho_scratch) 651 + if (!kho_scratch) { 652 + pr_err("Failed to reserve scratch array\n"); 662 653 goto err_disable_kho; 654 + } 663 655 664 656 /* 665 657 * reserve scratch area in low memory for lowmem allocations in the ··· 670 658 size = scratch_size_lowmem; 671 659 addr = memblock_phys_alloc_range(size, CMA_MIN_ALIGNMENT_BYTES, 0, 672 660 ARCH_LOW_ADDRESS_LIMIT); 673 - if (!addr) 661 + if (!addr) { 662 + pr_err("Failed to reserve lowmem scratch buffer\n"); 674 663 goto err_free_scratch_desc; 664 + } 675 665 676 666 kho_scratch[i].addr = addr; 677 667 kho_scratch[i].size = size; ··· 682 668 /* reserve large contiguous area for allocations without nid */ 683 669 size = scratch_size_global; 684 670 addr = memblock_phys_alloc(size, CMA_MIN_ALIGNMENT_BYTES); 685 - if (!addr) 671 + if (!addr) { 672 + pr_err("Failed to reserve global scratch buffer\n"); 686 673 goto err_free_scratch_areas; 674 + } 687 675 688 676 kho_scratch[i].addr = addr; 689 677 kho_scratch[i].size = size; 690 678 i++; 691 679 692 - for_each_online_node(nid) { 680 + /* 681 + * Loop over nodes that have both memory and are online. Skip 682 + * memoryless nodes, as we can not allocate scratch areas there. 
683 + */ 684 + for_each_node_state(nid, N_MEMORY) { 693 685 size = scratch_size_node(nid); 694 686 addr = memblock_alloc_range_nid(size, CMA_MIN_ALIGNMENT_BYTES, 695 687 0, MEMBLOCK_ALLOC_ACCESSIBLE, 696 688 nid, true); 697 - if (!addr) 689 + if (!addr) { 690 + pr_err("Failed to reserve nid %d scratch buffer\n", nid); 698 691 goto err_free_scratch_areas; 692 + } 699 693 700 694 kho_scratch[i].addr = addr; 701 695 kho_scratch[i].size = size; ··· 757 735 goto out_pack; 758 736 } 759 737 760 - err = fdt_setprop(root_fdt, off, PROP_SUB_FDT, &phys, sizeof(phys)); 738 + err = fdt_setprop(root_fdt, off, KHO_FDT_SUB_TREE_PROP_NAME, 739 + &phys, sizeof(phys)); 761 740 if (err < 0) 762 741 goto out_pack; 763 742 ··· 789 766 const u64 *val; 790 767 int len; 791 768 792 - val = fdt_getprop(root_fdt, off, PROP_SUB_FDT, &len); 769 + val = fdt_getprop(root_fdt, off, KHO_FDT_SUB_TREE_PROP_NAME, &len); 793 770 if (!val || len != sizeof(phys_addr_t)) 794 771 continue; 795 772 ··· 854 831 * 855 832 * Return: 0 on success, error code on failure 856 833 */ 857 - int kho_preserve_pages(struct page *page, unsigned int nr_pages) 834 + int kho_preserve_pages(struct page *page, unsigned long nr_pages) 858 835 { 859 836 struct kho_mem_track *track = &kho_out.track; 860 837 const unsigned long start_pfn = page_to_pfn(page); ··· 898 875 * kho_preserve_pages() call. Unpreserving arbitrary sub-ranges of larger 899 876 * preserved blocks is not supported. 
900 877 */ 901 - void kho_unpreserve_pages(struct page *page, unsigned int nr_pages) 878 + void kho_unpreserve_pages(struct page *page, unsigned long nr_pages) 902 879 { 903 880 struct kho_mem_track *track = &kho_out.track; 904 881 const unsigned long start_pfn = page_to_pfn(page); ··· 907 884 __kho_unpreserve(track, start_pfn, end_pfn); 908 885 } 909 886 EXPORT_SYMBOL_GPL(kho_unpreserve_pages); 910 - 911 - struct kho_vmalloc_hdr { 912 - DECLARE_KHOSER_PTR(next, struct kho_vmalloc_chunk *); 913 - }; 914 - 915 - #define KHO_VMALLOC_SIZE \ 916 - ((PAGE_SIZE - sizeof(struct kho_vmalloc_hdr)) / \ 917 - sizeof(phys_addr_t)) 918 - 919 - struct kho_vmalloc_chunk { 920 - struct kho_vmalloc_hdr hdr; 921 - phys_addr_t phys[KHO_VMALLOC_SIZE]; 922 - }; 923 - 924 - static_assert(sizeof(struct kho_vmalloc_chunk) == PAGE_SIZE); 925 887 926 888 /* vmalloc flags KHO supports */ 927 889 #define KHO_VMALLOC_SUPPORTED_FLAGS (VM_ALLOC | VM_ALLOW_HUGE_VMAP) ··· 1323 1315 if (offset < 0) 1324 1316 return -ENOENT; 1325 1317 1326 - val = fdt_getprop(fdt, offset, PROP_SUB_FDT, &len); 1318 + val = fdt_getprop(fdt, offset, KHO_FDT_SUB_TREE_PROP_NAME, &len); 1327 1319 if (!val || len != sizeof(*val)) 1328 1320 return -EINVAL; 1329 1321 ··· 1343 1335 err |= fdt_finish_reservemap(root); 1344 1336 err |= fdt_begin_node(root, ""); 1345 1337 err |= fdt_property_string(root, "compatible", KHO_FDT_COMPATIBLE); 1346 - err |= fdt_property(root, PROP_PRESERVED_MEMORY_MAP, &empty_mem_map, 1338 + err |= fdt_property(root, KHO_FDT_MEMORY_MAP_PROP_NAME, &empty_mem_map, 1347 1339 sizeof(empty_mem_map)); 1348 1340 err |= fdt_end_node(root); 1349 1341 err |= fdt_finish(root); ··· 1459 1451 void __init kho_populate(phys_addr_t fdt_phys, u64 fdt_len, 1460 1452 phys_addr_t scratch_phys, u64 scratch_len) 1461 1453 { 1454 + unsigned int scratch_cnt = scratch_len / sizeof(*kho_scratch); 1462 1455 struct kho_scratch *scratch = NULL; 1463 1456 phys_addr_t mem_map_phys; 1464 1457 void *fdt = NULL; 1465 - int err = 0; 
1466 - unsigned int scratch_cnt = scratch_len / sizeof(*kho_scratch); 1458 + int err; 1467 1459 1468 1460 /* Validate the input FDT */ 1469 1461 fdt = early_memremap(fdt_phys, fdt_len); 1470 1462 if (!fdt) { 1471 1463 pr_warn("setup: failed to memremap FDT (0x%llx)\n", fdt_phys); 1472 - err = -EFAULT; 1473 - goto out; 1464 + goto err_report; 1474 1465 } 1475 1466 err = fdt_check_header(fdt); 1476 1467 if (err) { 1477 1468 pr_warn("setup: handover FDT (0x%llx) is invalid: %d\n", 1478 1469 fdt_phys, err); 1479 - err = -EINVAL; 1480 - goto out; 1470 + goto err_unmap_fdt; 1481 1471 } 1482 1472 err = fdt_node_check_compatible(fdt, 0, KHO_FDT_COMPATIBLE); 1483 1473 if (err) { 1484 1474 pr_warn("setup: handover FDT (0x%llx) is incompatible with '%s': %d\n", 1485 1475 fdt_phys, KHO_FDT_COMPATIBLE, err); 1486 - err = -EINVAL; 1487 - goto out; 1476 + goto err_unmap_fdt; 1488 1477 } 1489 1478 1490 1479 mem_map_phys = kho_get_mem_map_phys(fdt); 1491 - if (!mem_map_phys) { 1492 - err = -ENOENT; 1493 - goto out; 1494 - } 1480 + if (!mem_map_phys) 1481 + goto err_unmap_fdt; 1495 1482 1496 1483 scratch = early_memremap(scratch_phys, scratch_len); 1497 1484 if (!scratch) { 1498 1485 pr_warn("setup: failed to memremap scratch (phys=0x%llx, len=%lld)\n", 1499 1486 scratch_phys, scratch_len); 1500 - err = -EFAULT; 1501 - goto out; 1487 + goto err_unmap_fdt; 1502 1488 } 1503 1489 1504 1490 /* ··· 1509 1507 if (WARN_ON(err)) { 1510 1508 pr_warn("failed to mark the scratch region 0x%pa+0x%pa: %pe", 1511 1509 &area->addr, &size, ERR_PTR(err)); 1512 - goto out; 1510 + goto err_unmap_scratch; 1513 1511 } 1514 1512 pr_debug("Marked 0x%pa+0x%pa as scratch", &area->addr, &size); 1515 1513 } ··· 1531 1529 kho_scratch_cnt = scratch_cnt; 1532 1530 pr_info("found kexec handover data.\n"); 1533 1531 1534 - out: 1535 - if (fdt) 1536 - early_memunmap(fdt, fdt_len); 1537 - if (scratch) 1538 - early_memunmap(scratch, scratch_len); 1539 - if (err) 1540 - pr_warn("disabling KHO revival: %d\n", err); 1532 
+ return; 1533 + 1534 + err_unmap_scratch: 1535 + early_memunmap(scratch, scratch_len); 1536 + err_unmap_fdt: 1537 + early_memunmap(fdt, fdt_len); 1538 + err_report: 1539 + pr_warn("disabling KHO revival\n"); 1541 1540 } 1542 1541 1543 1542 /* Helper functions for kexec_file_load */
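The hunk above replaces the shared `out:` label with dedicated unwind labels, so each failure path tears down exactly what was acquired. A minimal userspace sketch of that idiom, with hypothetical `map_region()`/`unmap_region()` helpers standing in for `early_memremap()`/`early_memunmap()` (not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the early_memremap()'d FDT and scratch
 * regions in kho_memory_setup(); map_region(1) simulates a failure. */
static int unmaps;

static void *map_region(int fail)
{
	return fail ? NULL : malloc(1);
}

static void unmap_region(void *p)
{
	free(p);
	unmaps++;
}

/* Each failure jumps to a label that unwinds only the resources
 * acquired so far, replacing a shared "goto out" with conditional
 * if (fdt)/if (scratch) cleanup. */
int kho_setup_sketch(int fail_fdt, int fail_scratch)
{
	void *fdt, *scratch;

	fdt = map_region(fail_fdt);
	if (!fdt)
		goto err_report;

	scratch = map_region(fail_scratch);
	if (!scratch)
		goto err_unmap_fdt;

	/* ... validate and record the handover data ... */
	unmap_region(scratch);
	unmap_region(fdt);
	return 0;

err_unmap_fdt:
	unmap_region(fdt);
err_report:
	return -1;
}

int unmap_count(void)
{
	return unmaps;
}
```

The payoff is that no error path ever frees a resource it did not acquire, so the `if (x)` guards in the old cleanup block disappear.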
+7 -3
kernel/liveupdate/luo_core.c
··· 35 35 * iommu, interrupts, vfio, participating filesystems, and memory management. 36 36 * 37 37 * LUO uses Kexec Handover to transfer memory state from the current kernel to 38 - * the next kernel. For more details see 39 - * Documentation/core-api/kho/concepts.rst. 38 + * the next kernel. For more details see Documentation/core-api/kho/index.rst. 40 39 */ 41 40 42 41 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt ··· 127 128 if (err) 128 129 return err; 129 130 130 - return 0; 131 + err = luo_flb_setup_incoming(luo_global.fdt_in); 132 + 133 + return err; 131 134 } 132 135 133 136 static int __init liveupdate_early_init(void) ··· 166 165 err |= fdt_property_string(fdt_out, "compatible", LUO_FDT_COMPATIBLE); 167 166 err |= fdt_property(fdt_out, LUO_FDT_LIVEUPDATE_NUM, &ln, sizeof(ln)); 168 167 err |= luo_session_setup_outgoing(fdt_out); 168 + err |= luo_flb_setup_outgoing(fdt_out); 169 169 err |= fdt_end_node(fdt_out); 170 170 err |= fdt_finish(fdt_out); 171 171 if (err) ··· 227 225 err = luo_session_serialize(); 228 226 if (err) 229 227 return err; 228 + 229 + luo_flb_serialize(); 230 230 231 231 err = kho_finalize(); 232 232 if (err) {
+33 -6
kernel/liveupdate/luo_file.c
··· 104 104 #include <linux/io.h> 105 105 #include <linux/kexec_handover.h> 106 106 #include <linux/kho/abi/luo.h> 107 + #include <linux/list_private.h> 107 108 #include <linux/liveupdate.h> 108 109 #include <linux/module.h> 109 110 #include <linux/sizes.h> ··· 274 273 goto err_fput; 275 274 276 275 err = -ENOENT; 277 - luo_list_for_each_private(fh, &luo_file_handler_list, list) { 276 + list_private_for_each_entry(fh, &luo_file_handler_list, list) { 278 277 if (fh->ops->can_preserve(fh, file)) { 279 278 err = 0; 280 279 break; ··· 285 284 if (err) 286 285 goto err_free_files_mem; 287 286 287 + err = luo_flb_file_preserve(fh); 288 + if (err) 289 + goto err_free_files_mem; 290 + 288 291 luo_file = kzalloc(sizeof(*luo_file), GFP_KERNEL); 289 292 if (!luo_file) { 290 293 err = -ENOMEM; 291 - goto err_free_files_mem; 294 + goto err_flb_unpreserve; 292 295 } 293 296 294 297 luo_file->file = file; ··· 316 311 317 312 err_kfree: 318 313 kfree(luo_file); 314 + err_flb_unpreserve: 315 + luo_flb_file_unpreserve(fh); 319 316 err_free_files_mem: 320 317 luo_free_files_mem(file_set); 321 318 err_fput: ··· 359 352 args.serialized_data = luo_file->serialized_data; 360 353 args.private_data = luo_file->private_data; 361 354 luo_file->fh->ops->unpreserve(&args); 355 + luo_flb_file_unpreserve(luo_file->fh); 362 356 363 357 list_del(&luo_file->list); 364 358 file_set->count--; ··· 635 627 args.retrieved = luo_file->retrieved; 636 628 637 629 luo_file->fh->ops->finish(&args); 630 + luo_flb_file_finish(luo_file->fh); 638 631 } 639 632 640 633 /** ··· 767 758 bool handler_found = false; 768 759 struct luo_file *luo_file; 769 760 770 - luo_list_for_each_private(fh, &luo_file_handler_list, list) { 761 + list_private_for_each_entry(fh, &luo_file_handler_list, list) { 771 762 if (!strcmp(fh->compatible, file_ser[i].compatible)) { 772 763 handler_found = true; 773 764 break; ··· 842 833 return -EBUSY; 843 834 844 835 /* Check for duplicate compatible strings */ 845 - 
luo_list_for_each_private(fh_iter, &luo_file_handler_list, list) { 836 + list_private_for_each_entry(fh_iter, &luo_file_handler_list, list) { 846 837 if (!strcmp(fh_iter->compatible, fh->compatible)) { 847 838 pr_err("File handler registration failed: Compatible string '%s' already registered.\n", 848 839 fh->compatible); ··· 857 848 goto err_resume; 858 849 } 859 850 851 + INIT_LIST_HEAD(&ACCESS_PRIVATE(fh, flb_list)); 860 852 INIT_LIST_HEAD(&ACCESS_PRIVATE(fh, list)); 861 853 list_add_tail(&ACCESS_PRIVATE(fh, list), &luo_file_handler_list); 862 854 luo_session_resume(); 855 + 856 + liveupdate_test_register(fh); 863 857 864 858 return 0; 865 859 ··· 880 868 * 881 869 * It ensures safe removal by checking that: 882 870 * No live update session is currently in progress. 871 + * No FLB is registered with this file handler. 883 872 * 884 873 * If the unregistration fails, the internal test state is reverted. 885 874 * 886 875 * Return: 0 Success. -EOPNOTSUPP when live update is not enabled. -EBUSY A live 887 - * update is in progress, can't quiesce live update. 876 + * update is in progress, can't quiesce live update, or an FLB is registered 877 + * with this file handler. 888 878 */ 889 879 int liveupdate_unregister_file_handler(struct liveupdate_file_handler *fh) 890 880 { 881 + int err = -EBUSY; 882 + 891 883 if (!liveupdate_enabled()) 892 884 return -EOPNOTSUPP; 893 885 886 + liveupdate_test_unregister(fh); 887 + 894 888 if (!luo_session_quiesce()) 895 - return -EBUSY; 889 + goto err_register; 890 + 891 + if (!list_empty(&ACCESS_PRIVATE(fh, flb_list))) 892 + goto err_resume; 896 893 897 894 list_del(&ACCESS_PRIVATE(fh, list)); 898 895 module_put(fh->ops->owner); 899 896 luo_session_resume(); 900 897 901 898 return 0; 899 + 900 + err_resume: 901 + luo_session_resume(); 902 + err_register: 903 + liveupdate_test_register(fh); 904 + return err; 902 905 }
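The reworked `liveupdate_unregister_file_handler()` above quiesces the session, refuses removal while FLBs remain attached, and undoes each step in reverse on failure. A userspace sketch of that shape, with illustrative names and an `int` flag in place of the real list-emptiness check:

```c
#include <assert.h>

/* Model of the unregister error path: quiesce, bail with -EBUSY while
 * dependencies remain, and resume in reverse on any failure. All
 * helpers and names here are illustrative, not the kernel API. */
enum { EBUSY_ERR = -16 };

static int quiesced;

static int session_quiesce(void)
{
	quiesced = 1;
	return 1;
}

static void session_resume(void)
{
	quiesced = 0;
}

int unregister_handler(int flb_list_empty)
{
	int err = EBUSY_ERR;

	if (!session_quiesce())
		return err;

	if (!flb_list_empty)
		goto err_resume;

	/* ... list_del() and module_put() would happen here ... */
	session_resume();
	return 0;

err_resume:
	session_resume();
	return err;
}

int is_quiesced(void)
{
	return quiesced;
}
```

Either way the function exits with the session resumed, which is the invariant the added `err_resume:`/`err_register:` labels preserve.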
+654
kernel/liveupdate/luo_flb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + /* 4 + * Copyright (c) 2025, Google LLC. 5 + * Pasha Tatashin <pasha.tatashin@soleen.com> 6 + */ 7 + 8 + /** 9 + * DOC: LUO File Lifecycle Bound Global Data 10 + * 11 + * File-Lifecycle-Bound (FLB) objects provide a mechanism for managing global 12 + * state that is shared across multiple live-updatable files. The lifecycle of 13 + * this shared state is tied to the preservation of the files that depend on it. 14 + * 15 + * An FLB represents a global resource, such as the IOMMU core state, that is 16 + * required by multiple file descriptors (e.g., all VFIO fds). 17 + * 18 + * The preservation of the FLB's state is triggered when the *first* file 19 + * depending on it is preserved. The cleanup of this state (unpreserve or 20 + * finish) is triggered when the *last* file depending on it is unpreserved or 21 + * finished. 22 + * 23 + * Handler Dependency: A file handler declares its dependency on one or more 24 + * FLBs by registering them via liveupdate_register_flb(). 25 + * 26 + * Callback Model: Each FLB is defined by a set of operations 27 + * (&struct liveupdate_flb_ops) that LUO invokes at key points: 28 + * 29 + * - .preserve(): Called for the first file. Saves global state. 30 + * - .unpreserve(): Called for the last file (if aborted pre-reboot). 31 + * - .retrieve(): Called on-demand in the new kernel to restore the state. 32 + * - .finish(): Called for the last file in the new kernel for cleanup. 33 + * 34 + * This reference-counted approach ensures that shared state is saved exactly 35 + * once and restored exactly once, regardless of how many files depend on it, 36 + * and that its lifecycle is correctly managed across the kexec transition. 
37 + */ 38 + 39 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 40 + 41 + #include <linux/cleanup.h> 42 + #include <linux/err.h> 43 + #include <linux/errno.h> 44 + #include <linux/io.h> 45 + #include <linux/kexec_handover.h> 46 + #include <linux/kho/abi/luo.h> 47 + #include <linux/libfdt.h> 48 + #include <linux/list_private.h> 49 + #include <linux/liveupdate.h> 50 + #include <linux/module.h> 51 + #include <linux/mutex.h> 52 + #include <linux/slab.h> 53 + #include <linux/unaligned.h> 54 + #include "luo_internal.h" 55 + 56 + #define LUO_FLB_PGCNT 1ul 57 + #define LUO_FLB_MAX (((LUO_FLB_PGCNT << PAGE_SHIFT) - \ 58 + sizeof(struct luo_flb_header_ser)) / sizeof(struct luo_flb_ser)) 59 + 60 + struct luo_flb_header { 61 + struct luo_flb_header_ser *header_ser; 62 + struct luo_flb_ser *ser; 63 + bool active; 64 + }; 65 + 66 + struct luo_flb_global { 67 + struct luo_flb_header incoming; 68 + struct luo_flb_header outgoing; 69 + struct list_head list; 70 + long count; 71 + }; 72 + 73 + static struct luo_flb_global luo_flb_global = { 74 + .list = LIST_HEAD_INIT(luo_flb_global.list), 75 + }; 76 + 77 + /* 78 + * struct luo_flb_link - Links an FLB definition to a file handler's internal 79 + * list of dependencies. 80 + * @flb: A pointer to the registered &struct liveupdate_flb definition. 81 + * @list: The list_head for linking. 82 + */ 83 + struct luo_flb_link { 84 + struct liveupdate_flb *flb; 85 + struct list_head list; 86 + }; 87 + 88 + /* luo_flb_get_private - Access private field, and if needed initialize it. 
*/ 89 + static struct luo_flb_private *luo_flb_get_private(struct liveupdate_flb *flb) 90 + { 91 + struct luo_flb_private *private = &ACCESS_PRIVATE(flb, private); 92 + 93 + if (!private->initialized) { 94 + mutex_init(&private->incoming.lock); 95 + mutex_init(&private->outgoing.lock); 96 + INIT_LIST_HEAD(&private->list); 97 + private->users = 0; 98 + private->initialized = true; 99 + } 100 + 101 + return private; 102 + } 103 + 104 + static int luo_flb_file_preserve_one(struct liveupdate_flb *flb) 105 + { 106 + struct luo_flb_private *private = luo_flb_get_private(flb); 107 + 108 + scoped_guard(mutex, &private->outgoing.lock) { 109 + if (!private->outgoing.count) { 110 + struct liveupdate_flb_op_args args = {0}; 111 + int err; 112 + 113 + args.flb = flb; 114 + err = flb->ops->preserve(&args); 115 + if (err) 116 + return err; 117 + private->outgoing.data = args.data; 118 + private->outgoing.obj = args.obj; 119 + } 120 + private->outgoing.count++; 121 + } 122 + 123 + return 0; 124 + } 125 + 126 + static void luo_flb_file_unpreserve_one(struct liveupdate_flb *flb) 127 + { 128 + struct luo_flb_private *private = luo_flb_get_private(flb); 129 + 130 + scoped_guard(mutex, &private->outgoing.lock) { 131 + private->outgoing.count--; 132 + if (!private->outgoing.count) { 133 + struct liveupdate_flb_op_args args = {0}; 134 + 135 + args.flb = flb; 136 + args.data = private->outgoing.data; 137 + args.obj = private->outgoing.obj; 138 + 139 + if (flb->ops->unpreserve) 140 + flb->ops->unpreserve(&args); 141 + 142 + private->outgoing.data = 0; 143 + private->outgoing.obj = NULL; 144 + } 145 + } 146 + } 147 + 148 + static int luo_flb_retrieve_one(struct liveupdate_flb *flb) 149 + { 150 + struct luo_flb_private *private = luo_flb_get_private(flb); 151 + struct luo_flb_header *fh = &luo_flb_global.incoming; 152 + struct liveupdate_flb_op_args args = {0}; 153 + bool found = false; 154 + int err; 155 + 156 + guard(mutex)(&private->incoming.lock); 157 + 158 + if 
(private->incoming.finished) 159 + return -ENODATA; 160 + 161 + if (private->incoming.retrieved) 162 + return 0; 163 + 164 + if (!fh->active) 165 + return -ENODATA; 166 + 167 + for (int i = 0; i < fh->header_ser->count; i++) { 168 + if (!strcmp(fh->ser[i].name, flb->compatible)) { 169 + private->incoming.data = fh->ser[i].data; 170 + private->incoming.count = fh->ser[i].count; 171 + found = true; 172 + break; 173 + } 174 + } 175 + 176 + if (!found) 177 + return -ENOENT; 178 + 179 + args.flb = flb; 180 + args.data = private->incoming.data; 181 + 182 + err = flb->ops->retrieve(&args); 183 + if (err) 184 + return err; 185 + 186 + private->incoming.obj = args.obj; 187 + private->incoming.retrieved = true; 188 + 189 + return 0; 190 + } 191 + 192 + static void luo_flb_file_finish_one(struct liveupdate_flb *flb) 193 + { 194 + struct luo_flb_private *private = luo_flb_get_private(flb); 195 + u64 count; 196 + 197 + scoped_guard(mutex, &private->incoming.lock) 198 + count = --private->incoming.count; 199 + 200 + if (!count) { 201 + struct liveupdate_flb_op_args args = {0}; 202 + 203 + if (!private->incoming.retrieved) { 204 + int err = luo_flb_retrieve_one(flb); 205 + 206 + if (WARN_ON(err)) 207 + return; 208 + } 209 + 210 + scoped_guard(mutex, &private->incoming.lock) { 211 + args.flb = flb; 212 + args.obj = private->incoming.obj; 213 + flb->ops->finish(&args); 214 + 215 + private->incoming.data = 0; 216 + private->incoming.obj = NULL; 217 + private->incoming.finished = true; 218 + } 219 + } 220 + } 221 + 222 + /** 223 + * luo_flb_file_preserve - Notifies FLBs that a file is about to be preserved. 224 + * @fh: The file handler for the preserved file. 225 + * 226 + * This function iterates through all FLBs associated with the given file 227 + * handler. It increments the reference count for each FLB. If the count becomes 228 + * 1, it triggers the FLB's .preserve() callback to save the global state. 229 + * 230 + * This operation is atomic. 
If any FLB's .preserve() op fails, it will roll 231 + * back by calling .unpreserve() on any FLBs that were successfully preserved 232 + * during this call. 233 + * 234 + * Context: Called from luo_preserve_file() 235 + * Return: 0 on success, or a negative errno on failure. 236 + */ 237 + int luo_flb_file_preserve(struct liveupdate_file_handler *fh) 238 + { 239 + struct list_head *flb_list = &ACCESS_PRIVATE(fh, flb_list); 240 + struct luo_flb_link *iter; 241 + int err = 0; 242 + 243 + list_for_each_entry(iter, flb_list, list) { 244 + err = luo_flb_file_preserve_one(iter->flb); 245 + if (err) 246 + goto exit_err; 247 + } 248 + 249 + return 0; 250 + 251 + exit_err: 252 + list_for_each_entry_continue_reverse(iter, flb_list, list) 253 + luo_flb_file_unpreserve_one(iter->flb); 254 + 255 + return err; 256 + } 257 + 258 + /** 259 + * luo_flb_file_unpreserve - Notifies FLBs that a dependent file was unpreserved. 260 + * @fh: The file handler for the unpreserved file. 261 + * 262 + * This function iterates through all FLBs associated with the given file 263 + * handler, in reverse order of registration. It decrements the reference count 264 + * for each FLB. If the count becomes 0, it triggers the FLB's .unpreserve() 265 + * callback to clean up the global state. 266 + * 267 + * Context: Called when a preserved file is being cleaned up before reboot 268 + * (e.g., from luo_file_unpreserve_files()). 269 + */ 270 + void luo_flb_file_unpreserve(struct liveupdate_file_handler *fh) 271 + { 272 + struct list_head *flb_list = &ACCESS_PRIVATE(fh, flb_list); 273 + struct luo_flb_link *iter; 274 + 275 + list_for_each_entry_reverse(iter, flb_list, list) 276 + luo_flb_file_unpreserve_one(iter->flb); 277 + } 278 + 279 + /** 280 + * luo_flb_file_finish - Notifies FLBs that a dependent file has been finished. 281 + * @fh: The file handler for the finished file. 
282 + * 283 + * This function iterates through all FLBs associated with the given file 284 + * handler, in reverse order of registration. It decrements the incoming 285 + * reference count for each FLB. If the count becomes 0, it triggers the FLB's 286 + * .finish() callback for final cleanup in the new kernel. 287 + * 288 + * Context: Called from luo_file_finish() for each file being finished. 289 + */ 290 + void luo_flb_file_finish(struct liveupdate_file_handler *fh) 291 + { 292 + struct list_head *flb_list = &ACCESS_PRIVATE(fh, flb_list); 293 + struct luo_flb_link *iter; 294 + 295 + list_for_each_entry_reverse(iter, flb_list, list) 296 + luo_flb_file_finish_one(iter->flb); 297 + } 298 + 299 + /** 300 + * liveupdate_register_flb - Associate an FLB with a file handler and register it globally. 301 + * @fh: The file handler that will now depend on the FLB. 302 + * @flb: The File-Lifecycle-Bound object to associate. 303 + * 304 + * Establishes a dependency, informing the LUO core that whenever a file of 305 + * type @fh is preserved, the state of @flb must also be managed. 306 + * 307 + * On the first registration of a given @flb object, it is added to a global 308 + * registry. This function checks for duplicate registrations, both for a 309 + * specific handler and globally, and ensures the total number of unique 310 + * FLBs does not exceed the system limit. 311 + * 312 + * Context: Typically called from a subsystem's module init function after 313 + * both the handler and the FLB have been defined and initialized. 314 + * Return: 0 on success. Returns a negative errno on failure: 315 + * -EINVAL if arguments are NULL or not initialized. 316 + * -ENOMEM on memory allocation failure. 317 + * -EEXIST if this FLB is already registered with this handler. 318 + * -ENOSPC if the maximum number of global FLBs has been reached. 319 + * -EOPNOTSUPP if live update is disabled or not configured. 
320 + */ 321 + int liveupdate_register_flb(struct liveupdate_file_handler *fh, 322 + struct liveupdate_flb *flb) 323 + { 324 + struct luo_flb_private *private = luo_flb_get_private(flb); 325 + struct list_head *flb_list = &ACCESS_PRIVATE(fh, flb_list); 326 + struct luo_flb_link *link __free(kfree) = NULL; 327 + struct liveupdate_flb *gflb; 328 + struct luo_flb_link *iter; 329 + int err; 330 + 331 + if (!liveupdate_enabled()) 332 + return -EOPNOTSUPP; 333 + 334 + if (WARN_ON(!flb->ops->preserve || !flb->ops->unpreserve || 335 + !flb->ops->retrieve || !flb->ops->finish)) { 336 + return -EINVAL; 337 + } 338 + 339 + /* 340 + * File handler must already be registered, as it initializes the 341 + * flb_list 342 + */ 343 + if (WARN_ON(list_empty(&ACCESS_PRIVATE(fh, list)))) 344 + return -EINVAL; 345 + 346 + link = kzalloc(sizeof(*link), GFP_KERNEL); 347 + if (!link) 348 + return -ENOMEM; 349 + 350 + /* 351 + * Ensure the system is quiescent (no active sessions). 352 + * This acts as a global lock for registration: no other thread can 353 + * be in this section, and no sessions can be creating/using FDs. 
354 + */ 355 + if (!luo_session_quiesce()) 356 + return -EBUSY; 357 + 358 + /* Check that this FLB is not already linked to this file handler */ 359 + err = -EEXIST; 360 + list_for_each_entry(iter, flb_list, list) { 361 + if (iter->flb == flb) 362 + goto err_resume; 363 + } 364 + 365 + /* 366 + * If this FLB is not linked to global list it's the first time the FLB 367 + * is registered 368 + */ 369 + if (!private->users) { 370 + if (WARN_ON(!list_empty(&private->list))) { 371 + err = -EINVAL; 372 + goto err_resume; 373 + } 374 + 375 + if (luo_flb_global.count == LUO_FLB_MAX) { 376 + err = -ENOSPC; 377 + goto err_resume; 378 + } 379 + 380 + /* Check that compatible string is unique in global list */ 381 + list_private_for_each_entry(gflb, &luo_flb_global.list, private.list) { 382 + if (!strcmp(gflb->compatible, flb->compatible)) 383 + goto err_resume; 384 + } 385 + 386 + if (!try_module_get(flb->ops->owner)) { 387 + err = -EAGAIN; 388 + goto err_resume; 389 + } 390 + 391 + list_add_tail(&private->list, &luo_flb_global.list); 392 + luo_flb_global.count++; 393 + } 394 + 395 + /* Finally, link the FLB to the file handler */ 396 + private->users++; 397 + link->flb = flb; 398 + list_add_tail(&no_free_ptr(link)->list, flb_list); 399 + luo_session_resume(); 400 + 401 + return 0; 402 + 403 + err_resume: 404 + luo_session_resume(); 405 + return err; 406 + } 407 + 408 + /** 409 + * liveupdate_unregister_flb - Remove an FLB dependency from a file handler. 410 + * @fh: The file handler that is currently depending on the FLB. 411 + * @flb: The File-Lifecycle-Bound object to remove. 412 + * 413 + * Removes the association between the specified file handler and the FLB 414 + * previously established by liveupdate_register_flb(). 415 + * 416 + * This function manages the global lifecycle of the FLB. It decrements the 417 + * FLB's usage count. 
If this was the last file handler referencing this FLB, 418 + * the FLB is removed from the global registry and the reference to its 419 + * owner module (acquired during registration) is released. 420 + * 421 + * Context: This function ensures the session is quiesced (no active FDs 422 + * being created) during the update. It is typically called from a 423 + * subsystem's module exit function. 424 + * Return: 0 on success. 425 + * -EOPNOTSUPP if live update is disabled. 426 + * -EBUSY if the live update session is active and cannot be quiesced. 427 + * -ENOENT if the FLB was not found in the file handler's list. 428 + */ 429 + int liveupdate_unregister_flb(struct liveupdate_file_handler *fh, 430 + struct liveupdate_flb *flb) 431 + { 432 + struct luo_flb_private *private = luo_flb_get_private(flb); 433 + struct list_head *flb_list = &ACCESS_PRIVATE(fh, flb_list); 434 + struct luo_flb_link *iter; 435 + int err = -ENOENT; 436 + 437 + if (!liveupdate_enabled()) 438 + return -EOPNOTSUPP; 439 + 440 + /* 441 + * Ensure the system is quiescent (no active sessions). 442 + * This acts as a global lock for unregistration. 443 + */ 444 + if (!luo_session_quiesce()) 445 + return -EBUSY; 446 + 447 + /* Find and remove the link from the file handler's list */ 448 + list_for_each_entry(iter, flb_list, list) { 449 + if (iter->flb == flb) { 450 + list_del(&iter->list); 451 + kfree(iter); 452 + err = 0; 453 + break; 454 + } 455 + } 456 + 457 + if (err) 458 + goto err_resume; 459 + 460 + private->users--; 461 + /* 462 + * If this is the last file-handler with which we are registered, remove 463 + * from the global list, and release the module reference. 
464 + */ 465 + if (!private->users) { 466 + list_del_init(&private->list); 467 + luo_flb_global.count--; 468 + module_put(flb->ops->owner); 469 + } 470 + 471 + luo_session_resume(); 472 + 473 + return 0; 474 + 475 + err_resume: 476 + luo_session_resume(); 477 + return err; 478 + } 479 + 480 + /** 481 + * liveupdate_flb_get_incoming - Retrieve the incoming FLB object. 482 + * @flb: The FLB definition. 483 + * @objp: Output parameter; will be populated with the live shared object. 484 + * 485 + * Returns a pointer to its shared live object for the incoming (post-reboot) 486 + * path. 487 + * 488 + * If this is the first time the object is requested in the new kernel, this 489 + * function will trigger the FLB's .retrieve() callback to reconstruct the 490 + * object from its preserved state. Subsequent calls will return the same 491 + * cached object. 492 + * 493 + * Return: 0 on success, or a negative errno on failure. -ENODATA means no 494 + * incoming FLB data, -ENOENT means specific flb not found in the incoming 495 + * data, and -EOPNOTSUPP when live update is disabled or not configured. 496 + */ 497 + int liveupdate_flb_get_incoming(struct liveupdate_flb *flb, void **objp) 498 + { 499 + struct luo_flb_private *private = luo_flb_get_private(flb); 500 + 501 + if (!liveupdate_enabled()) 502 + return -EOPNOTSUPP; 503 + 504 + if (!private->incoming.obj) { 505 + int err = luo_flb_retrieve_one(flb); 506 + 507 + if (err) 508 + return err; 509 + } 510 + 511 + guard(mutex)(&private->incoming.lock); 512 + *objp = private->incoming.obj; 513 + 514 + return 0; 515 + } 516 + 517 + /** 518 + * liveupdate_flb_get_outgoing - Retrieve the outgoing FLB object. 519 + * @flb: The FLB definition. 520 + * @objp: Output parameter; will be populated with the live shared object. 521 + * 522 + * Returns a pointer to its shared live object for the outgoing (pre-reboot) 523 + * path. 
524 + * 525 + * This function assumes the object has already been created by the FLB's 526 + * .preserve() callback, which is triggered when the first dependent file 527 + * is preserved. 528 + * 529 + * Return: 0 on success, or a negative errno on failure. 530 + */ 531 + int liveupdate_flb_get_outgoing(struct liveupdate_flb *flb, void **objp) 532 + { 533 + struct luo_flb_private *private = luo_flb_get_private(flb); 534 + 535 + if (!liveupdate_enabled()) 536 + return -EOPNOTSUPP; 537 + 538 + guard(mutex)(&private->outgoing.lock); 539 + *objp = private->outgoing.obj; 540 + 541 + return 0; 542 + } 543 + 544 + int __init luo_flb_setup_outgoing(void *fdt_out) 545 + { 546 + struct luo_flb_header_ser *header_ser; 547 + u64 header_ser_pa; 548 + int err; 549 + 550 + header_ser = kho_alloc_preserve(LUO_FLB_PGCNT << PAGE_SHIFT); 551 + if (IS_ERR(header_ser)) 552 + return PTR_ERR(header_ser); 553 + 554 + header_ser_pa = virt_to_phys(header_ser); 555 + 556 + err = fdt_begin_node(fdt_out, LUO_FDT_FLB_NODE_NAME); 557 + err |= fdt_property_string(fdt_out, "compatible", 558 + LUO_FDT_FLB_COMPATIBLE); 559 + err |= fdt_property(fdt_out, LUO_FDT_FLB_HEADER, &header_ser_pa, 560 + sizeof(header_ser_pa)); 561 + err |= fdt_end_node(fdt_out); 562 + 563 + if (err) 564 + goto err_unpreserve; 565 + 566 + header_ser->pgcnt = LUO_FLB_PGCNT; 567 + luo_flb_global.outgoing.header_ser = header_ser; 568 + luo_flb_global.outgoing.ser = (void *)(header_ser + 1); 569 + luo_flb_global.outgoing.active = true; 570 + 571 + return 0; 572 + 573 + err_unpreserve: 574 + kho_unpreserve_free(header_ser); 575 + 576 + return err; 577 + } 578 + 579 + int __init luo_flb_setup_incoming(void *fdt_in) 580 + { 581 + struct luo_flb_header_ser *header_ser; 582 + int err, header_size, offset; 583 + const void *ptr; 584 + u64 header_ser_pa; 585 + 586 + offset = fdt_subnode_offset(fdt_in, 0, LUO_FDT_FLB_NODE_NAME); 587 + if (offset < 0) { 588 + pr_err("Unable to get FLB node [%s]\n", LUO_FDT_FLB_NODE_NAME); 589 + 590 + 
return -ENOENT; 591 + } 592 + 593 + err = fdt_node_check_compatible(fdt_in, offset, 594 + LUO_FDT_FLB_COMPATIBLE); 595 + if (err) { 596 + pr_err("FLB node is incompatible with '%s' [%d]\n", 597 + LUO_FDT_FLB_COMPATIBLE, err); 598 + 599 + return -EINVAL; 600 + } 601 + 602 + header_size = 0; 603 + ptr = fdt_getprop(fdt_in, offset, LUO_FDT_FLB_HEADER, &header_size); 604 + if (!ptr || header_size != sizeof(u64)) { 605 + pr_err("Unable to get FLB header property '%s' [%d]\n", 606 + LUO_FDT_FLB_HEADER, header_size); 607 + 608 + return -EINVAL; 609 + } 610 + 611 + header_ser_pa = get_unaligned((u64 *)ptr); 612 + header_ser = phys_to_virt(header_ser_pa); 613 + 614 + luo_flb_global.incoming.header_ser = header_ser; 615 + luo_flb_global.incoming.ser = (void *)(header_ser + 1); 616 + luo_flb_global.incoming.active = true; 617 + 618 + return 0; 619 + } 620 + 621 + /** 622 + * luo_flb_serialize - Serializes all active FLB objects for KHO. 623 + * 624 + * This function is called from the reboot path. It iterates through all 625 + * registered File-Lifecycle-Bound (FLB) objects. For each FLB that has been 626 + * preserved (i.e., its reference count is greater than zero), it writes its 627 + * metadata into the memory region designated for Kexec Handover. 628 + * 629 + * The serialized data includes the FLB's compatibility string, its opaque 630 + * data handle, and the final reference count. This allows the new kernel to 631 + * find the appropriate handler and reconstruct the FLB's state. 632 + * 633 + * Context: Called from liveupdate_reboot() just before kho_finalize(). 
634 + */ 635 + void luo_flb_serialize(void) 636 + { 637 + struct luo_flb_header *fh = &luo_flb_global.outgoing; 638 + struct liveupdate_flb *gflb; 639 + int i = 0; 640 + 641 + list_private_for_each_entry(gflb, &luo_flb_global.list, private.list) { 642 + struct luo_flb_private *private = luo_flb_get_private(gflb); 643 + 644 + if (private->outgoing.count > 0) { 645 + strscpy(fh->ser[i].name, gflb->compatible, 646 + sizeof(fh->ser[i].name)); 647 + fh->ser[i].data = private->outgoing.data; 648 + fh->ser[i].count = private->outgoing.count; 649 + i++; 650 + } 651 + } 652 + 653 + fh->header_ser->count = i; 654 + }
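The new `luo_flb.c` above keys everything off a per-FLB reference count: `.preserve()` fires only when the first dependent file is preserved, `.unpreserve()` only after the last one goes away. A minimal userspace model of that counting (struct and function names are illustrative, not the kernel API):

```c
#include <assert.h>

/* Model of the FLB lifecycle counting: global state is saved exactly
 * once for the first file and dropped exactly once after the last. */
struct flb_model {
	long count;
	int preserve_calls;	/* times .preserve() would have run */
	int unpreserve_calls;	/* times .unpreserve() would have run */
};

int flb_file_preserve(struct flb_model *flb)
{
	if (!flb->count)
		flb->preserve_calls++;	/* first file: save global state */
	flb->count++;
	return 0;
}

void flb_file_unpreserve(struct flb_model *flb)
{
	if (!--flb->count)
		flb->unpreserve_calls++; /* last file: drop global state */
}
```

Three preserved files therefore produce a single preserve/unpreserve pair, which is the invariant the DOC comment states ("saved exactly once and restored exactly once").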
+15 -7
kernel/liveupdate/luo_internal.h
··· 40 40 */ 41 41 #define luo_restore_fail(__fmt, ...) panic(__fmt, ##__VA_ARGS__) 42 42 43 - /* Mimics list_for_each_entry() but for private list head entries */ 44 - #define luo_list_for_each_private(pos, head, member) \ 45 - for (struct list_head *__iter = (head)->next; \ 46 - __iter != (head) && \ 47 - ({ pos = container_of(__iter, typeof(*(pos)), member); 1; }); \ 48 - __iter = __iter->next) 49 - 50 43 /** 51 44 * struct luo_file_set - A set of files that belong to the same sessions. 52 45 * @files_list: An ordered list of files associated with this session, it is ··· 99 106 struct luo_file_set_ser *file_set_ser); 100 107 void luo_file_set_init(struct luo_file_set *file_set); 101 108 void luo_file_set_destroy(struct luo_file_set *file_set); 109 + 110 + int luo_flb_file_preserve(struct liveupdate_file_handler *fh); 111 + void luo_flb_file_unpreserve(struct liveupdate_file_handler *fh); 112 + void luo_flb_file_finish(struct liveupdate_file_handler *fh); 113 + int __init luo_flb_setup_outgoing(void *fdt); 114 + int __init luo_flb_setup_incoming(void *fdt); 115 + void luo_flb_serialize(void); 116 + 117 + #ifdef CONFIG_LIVEUPDATE_TEST 118 + void liveupdate_test_register(struct liveupdate_file_handler *fh); 119 + void liveupdate_test_unregister(struct liveupdate_file_handler *fh); 120 + #else 121 + static inline void liveupdate_test_register(struct liveupdate_file_handler *fh) { } 122 + static inline void liveupdate_test_unregister(struct liveupdate_file_handler *fh) { } 123 + #endif 102 124 103 125 #endif /* _LINUX_LUO_INTERNAL_H */
+2 -7
kernel/module/kallsyms.c
··· 334 334 if (mod) { 335 335 if (modname) 336 336 *modname = mod->name; 337 - if (modbuildid) { 338 - #if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID) 339 - *modbuildid = mod->build_id; 340 - #else 341 - *modbuildid = NULL; 342 - #endif 343 - } 337 + if (modbuildid) 338 + *modbuildid = module_buildid(mod); 344 339 345 340 sym = find_kallsyms_symbol(mod, addr, size, offset); 346 341
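The kallsyms change moves the `CONFIG_STACKTRACE_BUILD_ID` conditional out of the lookup path and behind a single `module_buildid()` helper. A sketch of that refactoring pattern, with `HAVE_BUILD_ID` and the struct below standing in for the kernel's config option and `struct module` (assumptions, not the real definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for CONFIG_STACKTRACE_BUILD_ID; flip to 0 to model a kernel
 * built without build-ID support. */
#define HAVE_BUILD_ID 1

struct module_model {
	const char *name;
	const char *build_id;
};

/* One helper owns the #if, so every caller is preprocessor-free. */
const char *module_buildid_sketch(const struct module_model *mod)
{
#if HAVE_BUILD_ID
	return mod->build_id;
#else
	return NULL;
#endif
}
```

Callers then read `*modbuildid = module_buildid_sketch(mod);` unconditionally, which is the shape the diff gives `kallsyms.c`.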
+162 -2
kernel/panic.c
··· 42 42 43 43 #define PANIC_TIMER_STEP 100 44 44 #define PANIC_BLINK_SPD 18 45 + #define PANIC_MSG_BUFSZ 1024 45 46 46 47 #ifdef CONFIG_SMP 47 48 /* ··· 74 73 EXPORT_SYMBOL_GPL(panic_timeout); 75 74 76 75 unsigned long panic_print; 76 + 77 + static int panic_force_cpu = -1; 77 78 78 79 ATOMIC_NOTIFIER_HEAD(panic_notifier_list); 79 80 ··· 303 300 } 304 301 305 302 atomic_t panic_cpu = ATOMIC_INIT(PANIC_CPU_INVALID); 303 + atomic_t panic_redirect_cpu = ATOMIC_INIT(PANIC_CPU_INVALID); 304 + 305 + #if defined(CONFIG_SMP) && defined(CONFIG_CRASH_DUMP) 306 + static char *panic_force_buf; 307 + 308 + static int __init panic_force_cpu_setup(char *str) 309 + { 310 + int cpu; 311 + 312 + if (!str) 313 + return -EINVAL; 314 + 315 + if (kstrtoint(str, 0, &cpu) || cpu < 0 || cpu >= nr_cpu_ids) { 316 + pr_warn("panic_force_cpu: invalid value '%s'\n", str); 317 + return -EINVAL; 318 + } 319 + 320 + panic_force_cpu = cpu; 321 + return 0; 322 + } 323 + early_param("panic_force_cpu", panic_force_cpu_setup); 324 + 325 + static int __init panic_force_cpu_late_init(void) 326 + { 327 + if (panic_force_cpu < 0) 328 + return 0; 329 + 330 + panic_force_buf = kmalloc(PANIC_MSG_BUFSZ, GFP_KERNEL); 331 + 332 + return 0; 333 + } 334 + late_initcall(panic_force_cpu_late_init); 335 + 336 + static void do_panic_on_target_cpu(void *info) 337 + { 338 + panic("%s", (char *)info); 339 + } 340 + 341 + /** 342 + * panic_smp_redirect_cpu - Redirect panic to target CPU 343 + * @target_cpu: CPU that should handle the panic 344 + * @msg: formatted panic message 345 + * 346 + * Default implementation uses IPI. Architectures with NMI support 347 + * can override this for more reliable delivery. 
348 + * 349 + * Return: 0 on success, negative errno on failure 350 + */ 351 + int __weak panic_smp_redirect_cpu(int target_cpu, void *msg) 352 + { 353 + static call_single_data_t panic_csd; 354 + 355 + panic_csd.func = do_panic_on_target_cpu; 356 + panic_csd.info = msg; 357 + 358 + return smp_call_function_single_async(target_cpu, &panic_csd); 359 + } 360 + 361 + /** 362 + * panic_try_force_cpu - Redirect panic to a specific CPU for crash kernel 363 + * @fmt: panic message format string 364 + * @args: arguments for format string 365 + * 366 + * Some platforms require panic handling to occur on a specific CPU 367 + * for the crash kernel to function correctly. This function redirects 368 + * panic handling to the CPU specified via the panic_force_cpu= boot parameter. 369 + * 370 + * Returns false if panic should proceed on current CPU. 371 + * Returns true if panic was redirected. 372 + */ 373 + __printf(1, 0) 374 + static bool panic_try_force_cpu(const char *fmt, va_list args) 375 + { 376 + int this_cpu = raw_smp_processor_id(); 377 + int old_cpu = PANIC_CPU_INVALID; 378 + const char *msg; 379 + 380 + /* Feature not enabled via boot parameter */ 381 + if (panic_force_cpu < 0) 382 + return false; 383 + 384 + /* Already on target CPU - proceed normally */ 385 + if (this_cpu == panic_force_cpu) 386 + return false; 387 + 388 + /* Target CPU is offline, can't redirect */ 389 + if (!cpu_online(panic_force_cpu)) { 390 + pr_warn("panic: target CPU %d is offline, continuing on CPU %d\n", 391 + panic_force_cpu, this_cpu); 392 + return false; 393 + } 394 + 395 + /* Another panic already in progress */ 396 + if (panic_in_progress()) 397 + return false; 398 + 399 + /* 400 + * Only one CPU can do the redirect. Use atomic cmpxchg to ensure 401 + * we don't race with another CPU also trying to redirect. 
402 + */ 403 + if (!atomic_try_cmpxchg(&panic_redirect_cpu, &old_cpu, this_cpu)) 404 + return false; 405 + 406 + /* 407 + * Use dynamically allocated buffer if available, otherwise 408 + * fall back to static message for early boot panics or allocation failure. 409 + */ 410 + if (panic_force_buf) { 411 + vsnprintf(panic_force_buf, PANIC_MSG_BUFSZ, fmt, args); 412 + msg = panic_force_buf; 413 + } else { 414 + msg = "Redirected panic (buffer unavailable)"; 415 + } 416 + 417 + console_verbose(); 418 + bust_spinlocks(1); 419 + 420 + pr_emerg("panic: Redirecting from CPU %d to CPU %d for crash kernel.\n", 421 + this_cpu, panic_force_cpu); 422 + 423 + /* Dump original CPU before redirecting */ 424 + if (!test_taint(TAINT_DIE) && 425 + oops_in_progress <= 1 && 426 + IS_ENABLED(CONFIG_DEBUG_BUGVERBOSE)) { 427 + dump_stack(); 428 + } 429 + 430 + if (panic_smp_redirect_cpu(panic_force_cpu, (void *)msg) != 0) { 431 + atomic_set(&panic_redirect_cpu, PANIC_CPU_INVALID); 432 + pr_warn("panic: failed to redirect to CPU %d, continuing on CPU %d\n", 433 + panic_force_cpu, this_cpu); 434 + return false; 435 + } 436 + 437 + /* IPI/NMI sent, this CPU should stop */ 438 + return true; 439 + } 440 + #else 441 + __printf(1, 0) 442 + static inline bool panic_try_force_cpu(const char *fmt, va_list args) 443 + { 444 + return false; 445 + } 446 + #endif /* CONFIG_SMP && CONFIG_CRASH_DUMP */ 306 447 307 448 bool panic_try_start(void) 308 449 { ··· 575 428 */ 576 429 void vpanic(const char *fmt, va_list args) 577 430 { 578 - static char buf[1024]; 431 + static char buf[PANIC_MSG_BUFSZ]; 579 432 long i, i_next = 0, len; 580 433 int state = 0; 581 434 bool _crash_kexec_post_notifiers = crash_kexec_post_notifiers; ··· 599 452 local_irq_disable(); 600 453 preempt_disable_notrace(); 601 454 455 + /* Redirect panic to target CPU if configured via panic_force_cpu=. 
*/ 456 + if (panic_try_force_cpu(fmt, args)) { 457 + /* 458 + * Mark ourselves offline so panic_other_cpus_shutdown() won't wait 459 + * for us on architectures that check num_online_cpus(). 460 + */ 461 + set_cpu_online(smp_processor_id(), false); 462 + panic_smp_self_stop(); 463 + } 602 464 /* 603 465 * It's possible to come here directly from a panic-assertion and 604 466 * not have preempt disabled. Some functions called from here want ··· 640 484 /* 641 485 * Avoid nested stack-dumping if a panic occurs during oops processing 642 486 */ 643 - if (test_taint(TAINT_DIE) || oops_in_progress > 1) { 487 + if (atomic_read(&panic_redirect_cpu) != PANIC_CPU_INVALID && 488 + panic_force_cpu == raw_smp_processor_id()) { 489 + pr_emerg("panic: Redirected from CPU %d, skipping stack dump.\n", 490 + atomic_read(&panic_redirect_cpu)); 491 + } else if (test_taint(TAINT_DIE) || oops_in_progress > 1) { 644 492 panic_this_cpu_backtrace_printed = true; 645 493 } else if (IS_ENABLED(CONFIG_DEBUG_BUGVERBOSE)) { 646 494 dump_stack();
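The redirect path above elects exactly one CPU via atomic compare-and-exchange on `panic_redirect_cpu`. A minimal user-space sketch of the same single-winner pattern, using C11 atomics in place of the kernel's `atomic_try_cmpxchg()` (names are ours, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define CPU_INVALID (-1)

/* Mirrors panic_redirect_cpu: holds the winning CPU, or CPU_INVALID. */
static atomic_int redirect_cpu = CPU_INVALID;

/*
 * Only one CPU may redirect the panic.  The compare-and-exchange
 * succeeds only for the first caller; every later caller sees a
 * non-CPU_INVALID value and backs off, just like the
 * atomic_try_cmpxchg(&panic_redirect_cpu, ...) check in the hunk.
 */
static bool try_claim_redirect(int this_cpu)
{
    int expected = CPU_INVALID;

    return atomic_compare_exchange_strong(&redirect_cpu, &expected, this_cpu);
}
```

The loser simply falls through and panics locally, which is why the kernel code returns `false` on cmpxchg failure.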
+6 -2
kernel/sched/stats.h
··· 253 253 delta = rq_clock(rq) - t->sched_info.last_queued; 254 254 t->sched_info.last_queued = 0; 255 255 t->sched_info.run_delay += delta; 256 - if (delta > t->sched_info.max_run_delay) 256 + if (delta > t->sched_info.max_run_delay) { 257 257 t->sched_info.max_run_delay = delta; 258 + ktime_get_real_ts64(&t->sched_info.max_run_delay_ts); 259 + } 258 260 if (delta && (!t->sched_info.min_run_delay || delta < t->sched_info.min_run_delay)) 259 261 t->sched_info.min_run_delay = delta; 260 262 rq_sched_info_dequeue(rq, delta); ··· 280 278 t->sched_info.run_delay += delta; 281 279 t->sched_info.last_arrival = now; 282 280 t->sched_info.pcount++; 283 - if (delta > t->sched_info.max_run_delay) 281 + if (delta > t->sched_info.max_run_delay) { 284 282 t->sched_info.max_run_delay = delta; 283 + ktime_get_real_ts64(&t->sched_info.max_run_delay_ts); 284 + } 285 285 if (delta && (!t->sched_info.min_run_delay || delta < t->sched_info.min_run_delay)) 286 286 t->sched_info.min_run_delay = delta; 287 287
+4 -1
kernel/trace/ftrace.c
··· 8112 8112 8113 8113 int 8114 8114 ftrace_mod_address_lookup(unsigned long addr, unsigned long *size, 8115 - unsigned long *off, char **modname, char *sym) 8115 + unsigned long *off, char **modname, 8116 + const unsigned char **modbuildid, char *sym) 8116 8117 { 8117 8118 struct ftrace_mod_map *mod_map; 8118 8119 int ret = 0; ··· 8125 8124 if (ret) { 8126 8125 if (modname) 8127 8126 *modname = mod_map->mod->name; 8127 + if (modbuildid) 8128 + *modbuildid = module_buildid(mod_map->mod); 8128 8129 break; 8129 8130 } 8130 8131 }
+3 -4
kernel/trace/trace.c
··· 1178 1178 * __trace_puts - write a constant string into the trace buffer. 1179 1179 * @ip: The address of the caller 1180 1180 * @str: The constant string to write 1181 - * @size: The size of the string. 1182 1181 */ 1183 - int __trace_puts(unsigned long ip, const char *str, int size) 1182 + int __trace_puts(unsigned long ip, const char *str) 1184 1183 { 1185 - return __trace_array_puts(printk_trace, ip, str, size); 1184 + return __trace_array_puts(printk_trace, ip, str, strlen(str)); 1186 1185 } 1187 1186 EXPORT_SYMBOL_GPL(__trace_puts); 1188 1187 ··· 1200 1201 int size = sizeof(struct bputs_entry); 1201 1202 1202 1203 if (!printk_binsafe(tr)) 1203 - return __trace_puts(ip, str, strlen(str)); 1204 + return __trace_puts(ip, str); 1204 1205 1205 1206 if (!(tr->trace_flags & TRACE_ITER(PRINTK))) 1206 1207 return 0;
+1 -1
kernel/trace/trace.h
··· 2119 2119 * about performance). The internal_trace_puts() is for such 2120 2120 * a purpose. 2121 2121 */ 2122 - #define internal_trace_puts(str) __trace_puts(_THIS_IP_, str, strlen(str)) 2122 + #define internal_trace_puts(str) __trace_puts(_THIS_IP_, str) 2123 2123 2124 2124 #undef FTRACE_ENTRY 2125 2125 #define FTRACE_ENTRY(call, struct_name, id, tstruct, print) \
+1 -1
kernel/tsacct.c
··· 125 125 { 126 126 u64 time, delta; 127 127 128 - if (!likely(tsk->mm)) 128 + if (unlikely(!tsk->mm || (tsk->flags & PF_KTHREAD))) 129 129 return; 130 130 131 131 time = stime + utime;
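The old test `!likely(tsk->mm)` computed the right boolean but was unidiomatic, and it did not skip kernel threads; the replacement folds both conditions into one `unlikely()` predicate. A user-space sketch of the kernel's hint macros and the corrected check (the struct and flag value are simplified stand-ins for `task_struct` and `linux/sched.h`):

```c
#include <assert.h>

/* The kernel's branch hints, reduced to their GCC/Clang builtin. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

#define PF_KTHREAD 0x00200000 /* same value as in linux/sched.h */

struct task {
    void *mm;
    unsigned int flags;
};

/* Skip RSS accounting for mm-less tasks and for kernel threads. */
static int skip_acct(const struct task *t)
{
    return unlikely(!t->mm || (t->flags & PF_KTHREAD));
}
```

Putting the negation inside `unlikely()` expresses directly that the bail-out path is the cold one.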
+1 -1
kernel/ucount.c
··· 47 47 int mode; 48 48 49 49 /* Allow users with CAP_SYS_RESOURCE unrestrained access */ 50 - if (ns_capable(user_ns, CAP_SYS_RESOURCE)) 50 + if (ns_capable_noaudit(user_ns, CAP_SYS_RESOURCE)) 51 51 mode = (table->mode & S_IRWXU) >> 6; 52 52 else 53 53 /* Allow all others at most read-only access */
+4 -2
kernel/vmcore_info.c
··· 141 141 142 142 static int __init crash_save_vmcoreinfo_init(void) 143 143 { 144 - vmcoreinfo_data = (unsigned char *)get_zeroed_page(GFP_KERNEL); 144 + int order; 145 + order = get_order(VMCOREINFO_BYTES); 146 + vmcoreinfo_data = (unsigned char *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order); 145 147 if (!vmcoreinfo_data) { 146 148 pr_warn("Memory allocation for vmcoreinfo_data failed\n"); 147 149 return -ENOMEM; ··· 152 150 vmcoreinfo_note = alloc_pages_exact(VMCOREINFO_NOTE_SIZE, 153 151 GFP_KERNEL | __GFP_ZERO); 154 152 if (!vmcoreinfo_note) { 155 - free_page((unsigned long)vmcoreinfo_data); 153 + free_pages((unsigned long)vmcoreinfo_data, order); 156 154 vmcoreinfo_data = NULL; 157 155 pr_warn("Memory allocation for vmcoreinfo_note failed\n"); 158 156 return -ENOMEM;
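`get_zeroed_page()` caps the buffer at one page, so the hunk switches to `__get_free_pages()` at an order computed from `VMCOREINFO_BYTES`. The order is the smallest `n` with `PAGE_SIZE << n >= size`. A user-space reimplementation of that computation, assuming 4 KiB pages (the real kernel version lives in `asm-generic/getorder.h`):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Smallest order n such that (PAGE_SIZE << n) >= size; size must be > 0. */
static int get_order(size_t size)
{
    int order = 0;

    size = (size - 1) >> PAGE_SHIFT;
    while (size) {
        order++;
        size >>= 1;
    }
    return order;
}
```

Note that page-order allocations round up to a power of two: three pages' worth of data still costs an order-2 (four-page) allocation, which is why the matching free must use `free_pages(..., order)` as in the error path above.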
+7 -5
kernel/watchdog.c
··· 363 363 364 364 /* Global variables, exported for sysctl */ 365 365 unsigned int __read_mostly softlockup_panic = 366 - IS_ENABLED(CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC); 366 + CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC; 367 367 368 368 static bool softlockup_initialized __read_mostly; 369 369 static u64 __read_mostly sample_period; ··· 550 550 u8 util; 551 551 int tail = __this_cpu_read(cpustat_tail); 552 552 553 - tail = (tail + NUM_HARDIRQ_REPORT - 1) % NUM_HARDIRQ_REPORT; 553 + tail = (tail + NUM_SAMPLE_PERIODS - 1) % NUM_SAMPLE_PERIODS; 554 554 util = __this_cpu_read(cpustat_util[tail][STATS_HARDIRQ]); 555 555 return util > HARDIRQ_PERCENT_THRESH; 556 556 } ··· 774 774 { 775 775 unsigned long touch_ts, period_ts, now; 776 776 struct pt_regs *regs = get_irq_regs(); 777 - int duration; 778 777 int softlockup_all_cpu_backtrace; 778 + int duration, thresh_count; 779 779 unsigned long flags; 780 780 781 781 if (!watchdog_enabled) ··· 879 879 880 880 add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK); 881 881 sys_info(softlockup_si_mask & ~SYS_INFO_ALL_BT); 882 - if (softlockup_panic) 882 + thresh_count = duration / get_softlockup_thresh(); 883 + 884 + if (softlockup_panic && thresh_count >= softlockup_panic) 883 885 panic("softlockup: hung tasks"); 884 886 } 885 887 ··· 1230 1228 .mode = 0644, 1231 1229 .proc_handler = proc_dointvec_minmax, 1232 1230 .extra1 = SYSCTL_ZERO, 1233 - .extra2 = SYSCTL_ONE, 1231 + .extra2 = SYSCTL_INT_MAX, 1234 1232 }, 1235 1233 { 1236 1234 .procname = "softlockup_sys_info",
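`softlockup_panic` changes here from a boolean into a multiplier N: the watchdog panics only once the stall has lasted N watchdog thresholds, and 0 keeps panicking disabled. A standalone sketch of the new decision (the function name is ours):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Panic once the observed stall duration reaches softlockup_panic
 * multiples of the watchdog threshold; softlockup_panic == 0
 * disables panicking, preserving the old "say N" behaviour.
 */
static bool softlockup_should_panic(int duration, int thresh,
                                    int softlockup_panic)
{
    int thresh_count = duration / thresh;

    return softlockup_panic && thresh_count >= softlockup_panic;
}
```

This is also why the sysctl bound widens from `SYSCTL_ONE` to `SYSCTL_INT_MAX`: any non-negative count is now meaningful.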
+28 -22
kernel/watchdog_perf.c
··· 118 118 watchdog_hardlockup_check(smp_processor_id(), regs); 119 119 } 120 120 121 - static int hardlockup_detector_event_create(void) 121 + static struct perf_event *hardlockup_detector_event_create(unsigned int cpu) 122 122 { 123 - unsigned int cpu; 124 123 struct perf_event_attr *wd_attr; 125 124 struct perf_event *evt; 126 125 127 - /* 128 - * Preemption is not disabled because memory will be allocated. 129 - * Ensure CPU-locality by calling this in per-CPU kthread. 130 - */ 131 - WARN_ON(!is_percpu_thread()); 132 - cpu = raw_smp_processor_id(); 133 126 wd_attr = &wd_hw_attr; 134 127 wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh); 135 128 ··· 136 143 watchdog_overflow_callback, NULL); 137 144 } 138 145 139 - if (IS_ERR(evt)) { 140 - pr_debug("Perf event create on CPU %d failed with %ld\n", cpu, 141 - PTR_ERR(evt)); 142 - return PTR_ERR(evt); 143 - } 144 - WARN_ONCE(this_cpu_read(watchdog_ev), "unexpected watchdog_ev leak"); 145 - this_cpu_write(watchdog_ev, evt); 146 - return 0; 146 + return evt; 147 147 } 148 148 149 149 /** ··· 145 159 */ 146 160 void watchdog_hardlockup_enable(unsigned int cpu) 147 161 { 162 + struct perf_event *evt; 163 + 148 164 WARN_ON_ONCE(cpu != smp_processor_id()); 149 165 150 - if (hardlockup_detector_event_create()) 166 + evt = hardlockup_detector_event_create(cpu); 167 + if (IS_ERR(evt)) { 168 + pr_debug("Perf event create on CPU %d failed with %ld\n", cpu, 169 + PTR_ERR(evt)); 151 170 return; 171 + } 152 172 153 173 /* use original value for check */ 154 174 if (!atomic_fetch_inc(&watchdog_cpus)) 155 175 pr_info("Enabled. 
Permanently consumes one hw-PMU counter.\n"); 156 176 177 + WARN_ONCE(this_cpu_read(watchdog_ev), "unexpected watchdog_ev leak"); 178 + this_cpu_write(watchdog_ev, evt); 179 + 157 180 watchdog_init_timestamp(); 158 - perf_event_enable(this_cpu_read(watchdog_ev)); 181 + perf_event_enable(evt); 159 182 } 160 183 161 184 /** ··· 258 263 */ 259 264 int __init watchdog_hardlockup_probe(void) 260 265 { 266 + struct perf_event *evt; 267 + unsigned int cpu; 261 268 int ret; 262 269 263 270 if (!arch_perf_nmi_is_available()) 264 271 return -ENODEV; 265 272 266 - ret = hardlockup_detector_event_create(); 273 + if (!hw_nmi_get_sample_period(watchdog_thresh)) 274 + return -EINVAL; 267 275 268 - if (ret) { 276 + /* 277 + * Test hardware PMU availability by creating a temporary perf event. 278 + * The event is released immediately. 279 + */ 280 + cpu = raw_smp_processor_id(); 281 + evt = hardlockup_detector_event_create(cpu); 282 + if (IS_ERR(evt)) { 269 283 pr_info("Perf NMI watchdog permanently disabled\n"); 284 + ret = PTR_ERR(evt); 270 285 } else { 271 - perf_event_release_kernel(this_cpu_read(watchdog_ev)); 272 - this_cpu_write(watchdog_ev, NULL); 286 + perf_event_release_kernel(evt); 287 + ret = 0; 273 288 } 289 + 274 290 return ret; 275 291 } 276 292
-13
lib/Kconfig
··· 430 430 are compiling an out-of tree driver which tells you that it 431 431 depends on this. 432 432 433 - config GLOB_SELFTEST 434 - tristate "glob self-test on init" 435 - depends on GLOB 436 - help 437 - This option enables a simple self-test of the glob_match 438 - function on startup. It is primarily useful for people 439 - working on the code to ensure they haven't introduced any 440 - regressions. 441 - 442 - It only adds a little bit of code and slows kernel boot (or 443 - module load) by a small amount, so you're welcome to play with 444 - it, but you probably don't need it. 445 - 446 433 # 447 434 # Netlink attribute parsing support is select'ed if needed 448 435 #
+98 -20
lib/Kconfig.debug
··· 1147 1147 the CPU stats and the interrupt counts during the "soft lockups". 1148 1148 1149 1149 config BOOTPARAM_SOFTLOCKUP_PANIC 1150 - bool "Panic (Reboot) On Soft Lockups" 1150 + int "Panic (Reboot) On Soft Lockups" 1151 1151 depends on SOFTLOCKUP_DETECTOR 1152 + default 0 1152 1153 help 1153 - Say Y here to enable the kernel to panic on "soft lockups", 1154 - which are bugs that cause the kernel to loop in kernel 1155 - mode for more than 20 seconds (configurable using the watchdog_thresh 1156 - sysctl), without giving other tasks a chance to run. 1154 + Set to a non-zero value N to enable the kernel to panic on "soft 1155 + lockups", which are bugs that cause the kernel to loop in kernel 1156 + mode for more than (N * 20 seconds) (configurable using the 1157 + watchdog_thresh sysctl), without giving other tasks a chance to run. 1157 1158 1158 1159 The panic can be used in combination with panic_timeout, 1159 1160 to cause the system to reboot automatically after a ··· 1162 1161 high-availability systems that have uptime guarantees and 1163 1162 where a lockup must be resolved ASAP. 1164 1163 1165 - Say N if unsure. 1164 + Say 0 if unsure. 1166 1165 1167 1166 config HAVE_HARDLOCKUP_DETECTOR_BUDDY 1168 1167 bool ··· 1311 1310 high-availability systems that have uptime guarantees and 1312 1311 where a hung tasks must be resolved ASAP. 1313 1312 1314 - Say N if unsure. 1313 + Say 0 if unsure. 1315 1314 1316 1315 config DETECT_HUNG_TASK_BLOCKER 1317 1316 bool "Dump Hung Tasks Blocker" ··· 1419 1418 This option has potential to introduce high runtime overhead, 1420 1419 depending on workload as it triggers debugging routines for each 1421 1420 this_cpu operation. It should only be used for debugging purposes. 1421 + 1422 + config DEBUG_ATOMIC 1423 + bool "Debug atomic variables" 1424 + depends on DEBUG_KERNEL 1425 + help 1426 + If you say Y here then the kernel will add a runtime alignment check 1427 + to atomic accesses. 
Useful for architectures that do not have trap on 1428 + mis-aligned access. 1429 + 1430 + This option has potentially significant overhead. 1431 + 1432 + config DEBUG_ATOMIC_LARGEST_ALIGN 1433 + bool "Check alignment only up to __aligned_largest" 1434 + depends on DEBUG_ATOMIC 1435 + help 1436 + If you say Y here then the check for natural alignment of 1437 + atomic accesses will be constrained to the compiler's largest 1438 + alignment for scalar types. 1422 1439 1423 1440 menu "Lock Debugging (spinlocks, mutexes, etc...)" 1424 1441 ··· 2356 2337 2357 2338 If unsure, say N. 2358 2339 2359 - config TEST_MIN_HEAP 2360 - tristate "Min heap test" 2361 - depends on DEBUG_KERNEL || m 2362 - help 2363 - Enable this to turn on min heap function tests. This test is 2364 - executed only once during system boot (so affects only boot time), 2365 - or at module load time. 2366 - 2367 - If unsure, say N. 2368 - 2369 2340 config TEST_SORT 2370 2341 tristate "Array-based sort test" if !KUNIT_ALL_TESTS 2371 2342 depends on KUNIT ··· 2567 2558 Enable this option to test the bitmap functions at boot. 2568 2559 2569 2560 If unsure, say N. 2570 - 2571 - config TEST_UUID 2572 - tristate "Test functions located in the uuid module at runtime" 2573 2561 2574 2562 config TEST_XARRAY 2575 2563 tristate "Test the XArray code at runtime" ··· 2851 2845 2852 2846 If unsure, say N. 2853 2847 2848 + config LIST_PRIVATE_KUNIT_TEST 2849 + tristate "KUnit Test for Kernel Private Linked-list structures" if !KUNIT_ALL_TESTS 2850 + depends on KUNIT 2851 + default KUNIT_ALL_TESTS 2852 + help 2853 + This builds the KUnit test for the private linked-list primitives 2854 + defined in include/linux/list_private.h. 2855 + 2856 + These primitives allow manipulation of list_head members that are 2857 + marked as private and require special accessors (ACCESS_PRIVATE) 2858 + to strip qualifiers or handle encapsulation. 2859 + 2860 + If unsure, say N. 
2861 + 2854 2862 config HASHTABLE_KUNIT_TEST 2855 2863 tristate "KUnit Test for Kernel Hashtable structures" if !KUNIT_ALL_TESTS 2856 2864 depends on KUNIT ··· 2903 2883 to add supported patterns to this test. 2904 2884 2905 2885 If unsure, say N. 2886 + 2887 + config LIVEUPDATE_TEST 2888 + bool "Live Update Kernel Test" 2889 + default n 2890 + depends on LIVEUPDATE 2891 + help 2892 + Enable a built-in kernel test module for the Live Update 2893 + Orchestrator. 2894 + 2895 + This module validates the File-Lifecycle-Bound subsystem by 2896 + registering a set of mock FLB objects with any real file handlers 2897 + that support live update (such as the memfd handler). 2898 + 2899 + When live update operations are performed, this test module will 2900 + output messages to the kernel log (dmesg), confirming that its 2901 + registration and various callback functions (preserve, retrieve, 2902 + finish, etc.) are being invoked correctly. 2903 + 2904 + This is a debugging and regression testing tool for developers 2905 + working on the Live Update subsystem. It should not be enabled in 2906 + production kernels. 2907 + 2908 + If unsure, say N 2906 2909 2907 2910 config CMDLINE_KUNIT_TEST 2908 2911 tristate "KUnit test for cmdline API" if !KUNIT_ALL_TESTS ··· 3001 2958 to the KUnit documentation in Documentation/dev-tools/kunit/. 3002 2959 3003 2960 If unsure, say N. 2961 + 2962 + config MIN_HEAP_KUNIT_TEST 2963 + tristate "Min heap test" if !KUNIT_ALL_TESTS 2964 + depends on KUNIT 2965 + default KUNIT_ALL_TESTS 2966 + help 2967 + This option enables the KUnit test suite for the min heap library 2968 + which provides functions for creating and managing min heaps. 2969 + The test suite checks the functionality of the min heap library. 2970 + 2971 + If unsure, say N 3004 2972 3005 2973 config IS_SIGNED_TYPE_KUNIT_TEST 3006 2974 tristate "Test is_signed_type() macro" if !KUNIT_ALL_TESTS ··· 3418 3364 3419 3365 If unsure, say N. 
3420 3366 3367 + config UUID_KUNIT_TEST 3368 + tristate "KUnit test for UUID" if !KUNIT_ALL_TESTS 3369 + depends on KUNIT 3370 + default KUNIT_ALL_TESTS 3371 + help 3372 + This option enables the KUnit test suite for the uuid library, 3373 + which provides functions for generating and parsing UUID and GUID. 3374 + The test suite checks parsing of UUID and GUID strings. 3375 + 3376 + If unsure, say N. 3377 + 3421 3378 config INT_POW_KUNIT_TEST 3422 3379 tristate "Integer exponentiation (int_pow) test" if !KUNIT_ALL_TESTS 3423 3380 depends on KUNIT ··· 3495 3430 3496 3431 Enabling this option will include tests that compare the prime number 3497 3432 generator functions against a brute force implementation. 3433 + 3434 + If unsure, say N 3435 + 3436 + config GLOB_KUNIT_TEST 3437 + tristate "Glob matching test" if !KUNIT_ALL_TESTS 3438 + depends on GLOB 3439 + depends on KUNIT 3440 + default KUNIT_ALL_TESTS 3441 + help 3442 + Enable this option to test the glob functions at runtime. 3443 + 3444 + This test suite verifies the correctness of glob_match() across various 3445 + scenarios, including edge cases. 3498 3446 3499 3447 If unsure, say N 3500 3448
-3
lib/Makefile
··· 77 77 CFLAGS_test_ubsan.o += $(call cc-disable-warning, unused-but-set-variable) 78 78 UBSAN_SANITIZE_test_ubsan.o := y 79 79 obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o 80 - obj-$(CONFIG_TEST_MIN_HEAP) += test_min_heap.o 81 80 obj-$(CONFIG_TEST_LKM) += test_module.o 82 81 obj-$(CONFIG_TEST_VMALLOC) += test_vmalloc.o 83 82 obj-$(CONFIG_TEST_RHASHTABLE) += test_rhashtable.o ··· 90 91 GCOV_PROFILE_test_bitmap.o := n 91 92 endif 92 93 93 - obj-$(CONFIG_TEST_UUID) += test_uuid.o 94 94 obj-$(CONFIG_TEST_XARRAY) += test_xarray.o 95 95 obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o 96 96 obj-$(CONFIG_TEST_PARMAN) += test_parman.o ··· 226 228 obj-$(CONFIG_DQL) += dynamic_queue_limits.o 227 229 228 230 obj-$(CONFIG_GLOB) += glob.o 229 - obj-$(CONFIG_GLOB_SELFTEST) += globtest.o 230 231 231 232 obj-$(CONFIG_DIMLIB) += dim/ 232 233 obj-$(CONFIG_SIGNATURE) += digsig.o
+18 -8
lib/build_OID_registry
··· 60 60 # Determine the encoded length of this OID 61 61 my $size = $#components; 62 62 for (my $loop = 2; $loop <= $#components; $loop++) { 63 - my $c = $components[$loop]; 63 + $ENV{'BC_LINE_LENGTH'} = "0"; 64 + my $c = `echo "ibase=10; obase=2; $components[$loop]" | bc`; 65 + chomp($c); 64 66 65 67 # We will base128 encode the number 66 - my $tmp = ($c == 0) ? 0 : int(log($c)/log(2)); 68 + my $tmp = length($c) - 1; 67 69 $tmp = int($tmp / 7); 68 70 $size += $tmp; 69 71 } ··· 102 100 push @octets, $components[0] * 40 + $components[1]; 103 101 104 102 for (my $loop = 2; $loop <= $#components; $loop++) { 105 - my $c = $components[$loop]; 103 + # get the base 2 representation of the component 104 + $ENV{'BC_LINE_LENGTH'} = "0"; 105 + my $c = `echo "ibase=10; obase=2; $components[$loop]" | bc`; 106 + chomp($c); 106 107 107 - # Base128 encode the number 108 - my $tmp = ($c == 0) ? 0 : int(log($c)/log(2)); 108 + my $tmp = length($c) - 1; 109 109 $tmp = int($tmp / 7); 110 110 111 - for (; $tmp > 0; $tmp--) { 112 - push @octets, (($c >> $tmp * 7) & 0x7f) | 0x80; 111 + # zero pad upto length multiple of 7 112 + $c = substr("0000000", 0, ($tmp + 1) * 7 - length($c)).$c; 113 + 114 + # Base128 encode the number 115 + for (my $j = 0; $j < $tmp; $j++) { 116 + my $b = oct("0b".substr($c, $j * 7, 7)); 117 + 118 + push @octets, $b | 0x80; 113 119 } 114 - push @octets, $c & 0x7f; 120 + push @octets, oct("0b".substr($c, $tmp * 7, 7)); 115 121 } 116 122 117 123 push @encoded_oids, \@octets;
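The script now shells out to bc so that OID components too large for Perl's native integer precision are sized and encoded exactly, instead of via the old floating-point `log()` arithmetic. The output format is unchanged: standard ASN.1 BER base-128, i.e. big-endian 7-bit groups with the continuation bit 0x80 on every octet except the last. A C sketch of that encoding for components that fit in 64 bits (the helper name is ours, not from the script):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Encode one OID sub-identifier in ASN.1 base-128: 7-bit groups,
 * most significant group first, 0x80 set on all but the final
 * octet.  Returns the number of octets written (at most 10).
 */
static size_t oid_encode_base128(uint64_t c, uint8_t *out)
{
    uint8_t tmp[10];
    size_t n = 0, i;

    do {
        tmp[n++] = c & 0x7f;    /* least significant group first */
        c >>= 7;
    } while (c);

    for (i = 0; i < n; i++)     /* emit in big-endian order */
        out[i] = tmp[n - 1 - i] | (i + 1 < n ? 0x80 : 0);

    return n;
}
```

For example, the component 840 in 1.2.840 encodes as the two octets 0x86 0x48.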
-167
lib/globtest.c
··· 1 - /* 2 - * Extracted fronm glob.c 3 - */ 4 - 5 - #include <linux/module.h> 6 - #include <linux/moduleparam.h> 7 - #include <linux/glob.h> 8 - #include <linux/printk.h> 9 - 10 - /* Boot with "glob.verbose=1" to show successful tests, too */ 11 - static bool verbose = false; 12 - module_param(verbose, bool, 0); 13 - 14 - struct glob_test { 15 - char const *pat, *str; 16 - bool expected; 17 - }; 18 - 19 - static bool __pure __init test(char const *pat, char const *str, bool expected) 20 - { 21 - bool match = glob_match(pat, str); 22 - bool success = match == expected; 23 - 24 - /* Can't get string literals into a particular section, so... */ 25 - static char const msg_error[] __initconst = 26 - KERN_ERR "glob: \"%s\" vs. \"%s\": %s *** ERROR ***\n"; 27 - static char const msg_ok[] __initconst = 28 - KERN_DEBUG "glob: \"%s\" vs. \"%s\": %s OK\n"; 29 - static char const mismatch[] __initconst = "mismatch"; 30 - char const *message; 31 - 32 - if (!success) 33 - message = msg_error; 34 - else if (verbose) 35 - message = msg_ok; 36 - else 37 - return success; 38 - 39 - printk(message, pat, str, mismatch + 3*match); 40 - return success; 41 - } 42 - 43 - /* 44 - * The tests are all jammed together in one array to make it simpler 45 - * to place that array in the .init.rodata section. The obvious 46 - * "array of structures containing char *" has no way to force the 47 - * pointed-to strings to be in a particular section. 48 - * 49 - * Anyway, a test consists of: 50 - * 1. Expected glob_match result: '1' or '0'. 51 - * 2. Pattern to match: null-terminated string 52 - * 3. String to match against: null-terminated string 53 - * 54 - * The list of tests is terminated with a final '\0' instead of 55 - * a glob_match result character. 
56 - */ 57 - static char const glob_tests[] __initconst = 58 - /* Some basic tests */ 59 - "1" "a\0" "a\0" 60 - "0" "a\0" "b\0" 61 - "0" "a\0" "aa\0" 62 - "0" "a\0" "\0" 63 - "1" "\0" "\0" 64 - "0" "\0" "a\0" 65 - /* Simple character class tests */ 66 - "1" "[a]\0" "a\0" 67 - "0" "[a]\0" "b\0" 68 - "0" "[!a]\0" "a\0" 69 - "1" "[!a]\0" "b\0" 70 - "1" "[ab]\0" "a\0" 71 - "1" "[ab]\0" "b\0" 72 - "0" "[ab]\0" "c\0" 73 - "1" "[!ab]\0" "c\0" 74 - "1" "[a-c]\0" "b\0" 75 - "0" "[a-c]\0" "d\0" 76 - /* Corner cases in character class parsing */ 77 - "1" "[a-c-e-g]\0" "-\0" 78 - "0" "[a-c-e-g]\0" "d\0" 79 - "1" "[a-c-e-g]\0" "f\0" 80 - "1" "[]a-ceg-ik[]\0" "a\0" 81 - "1" "[]a-ceg-ik[]\0" "]\0" 82 - "1" "[]a-ceg-ik[]\0" "[\0" 83 - "1" "[]a-ceg-ik[]\0" "h\0" 84 - "0" "[]a-ceg-ik[]\0" "f\0" 85 - "0" "[!]a-ceg-ik[]\0" "h\0" 86 - "0" "[!]a-ceg-ik[]\0" "]\0" 87 - "1" "[!]a-ceg-ik[]\0" "f\0" 88 - /* Simple wild cards */ 89 - "1" "?\0" "a\0" 90 - "0" "?\0" "aa\0" 91 - "0" "??\0" "a\0" 92 - "1" "?x?\0" "axb\0" 93 - "0" "?x?\0" "abx\0" 94 - "0" "?x?\0" "xab\0" 95 - /* Asterisk wild cards (backtracking) */ 96 - "0" "*??\0" "a\0" 97 - "1" "*??\0" "ab\0" 98 - "1" "*??\0" "abc\0" 99 - "1" "*??\0" "abcd\0" 100 - "0" "??*\0" "a\0" 101 - "1" "??*\0" "ab\0" 102 - "1" "??*\0" "abc\0" 103 - "1" "??*\0" "abcd\0" 104 - "0" "?*?\0" "a\0" 105 - "1" "?*?\0" "ab\0" 106 - "1" "?*?\0" "abc\0" 107 - "1" "?*?\0" "abcd\0" 108 - "1" "*b\0" "b\0" 109 - "1" "*b\0" "ab\0" 110 - "0" "*b\0" "ba\0" 111 - "1" "*b\0" "bb\0" 112 - "1" "*b\0" "abb\0" 113 - "1" "*b\0" "bab\0" 114 - "1" "*bc\0" "abbc\0" 115 - "1" "*bc\0" "bc\0" 116 - "1" "*bc\0" "bbc\0" 117 - "1" "*bc\0" "bcbc\0" 118 - /* Multiple asterisks (complex backtracking) */ 119 - "1" "*ac*\0" "abacadaeafag\0" 120 - "1" "*ac*ae*ag*\0" "abacadaeafag\0" 121 - "1" "*a*b*[bc]*[ef]*g*\0" "abacadaeafag\0" 122 - "0" "*a*b*[ef]*[cd]*g*\0" "abacadaeafag\0" 123 - "1" "*abcd*\0" "abcabcabcabcdefg\0" 124 - "1" "*ab*cd*\0" "abcabcabcabcdefg\0" 125 - "1" "*abcd*abcdef*\0" 
"abcabcdabcdeabcdefg\0" 126 - "0" "*abcd*\0" "abcabcabcabcefg\0" 127 - "0" "*ab*cd*\0" "abcabcabcabcefg\0"; 128 - 129 - static int __init glob_init(void) 130 - { 131 - unsigned successes = 0; 132 - unsigned n = 0; 133 - char const *p = glob_tests; 134 - static char const message[] __initconst = 135 - KERN_INFO "glob: %u self-tests passed, %u failed\n"; 136 - 137 - /* 138 - * Tests are jammed together in a string. The first byte is '1' 139 - * or '0' to indicate the expected outcome, or '\0' to indicate the 140 - * end of the tests. Then come two null-terminated strings: the 141 - * pattern and the string to match it against. 142 - */ 143 - while (*p) { 144 - bool expected = *p++ & 1; 145 - char const *pat = p; 146 - 147 - p += strlen(p) + 1; 148 - successes += test(pat, p, expected); 149 - p += strlen(p) + 1; 150 - n++; 151 - } 152 - 153 - n -= successes; 154 - printk(message, successes, n); 155 - 156 - /* What's the errno for "kernel bug detected"? Guess... */ 157 - return n ? -ECANCELED : 0; 158 - } 159 - 160 - /* We need a dummy exit function to allow unload */ 161 - static void __exit glob_fini(void) { } 162 - 163 - module_init(glob_init); 164 - module_exit(glob_fini); 165 - 166 - MODULE_DESCRIPTION("glob(7) matching tests"); 167 - MODULE_LICENSE("Dual MIT/GPL");
+206 -65
lib/group_cpus.c
··· 114 114 return ln->ncpus - rn->ncpus; 115 115 } 116 116 117 - /* 118 - * Allocate group number for each node, so that for each node: 119 - * 120 - * 1) the allocated number is >= 1 121 - * 122 - * 2) the allocated number is <= active CPU number of this node 123 - * 124 - * The actual allocated total groups may be less than @numgrps when 125 - * active total CPU number is less than @numgrps. 126 - * 127 - * Active CPUs means the CPUs in '@cpu_mask AND @node_to_cpumask[]' 128 - * for each node. 129 - */ 130 - static void alloc_nodes_groups(unsigned int numgrps, 131 - cpumask_var_t *node_to_cpumask, 132 - const struct cpumask *cpu_mask, 133 - const nodemask_t nodemsk, 134 - struct cpumask *nmsk, 135 - struct node_groups *node_groups) 117 + static void alloc_groups_to_nodes(unsigned int numgrps, 118 + unsigned int numcpus, 119 + struct node_groups *node_groups, 120 + unsigned int num_nodes) 136 121 { 137 - unsigned n, remaining_ncpus = 0; 122 + unsigned int n, remaining_ncpus = numcpus; 123 + unsigned int ngroups, ncpus; 138 124 139 - for (n = 0; n < nr_node_ids; n++) { 140 - node_groups[n].id = n; 141 - node_groups[n].ncpus = UINT_MAX; 142 - } 143 - 144 - for_each_node_mask(n, nodemsk) { 145 - unsigned ncpus; 146 - 147 - cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]); 148 - ncpus = cpumask_weight(nmsk); 149 - 150 - if (!ncpus) 151 - continue; 152 - remaining_ncpus += ncpus; 153 - node_groups[n].ncpus = ncpus; 154 - } 155 - 156 - numgrps = min_t(unsigned, remaining_ncpus, numgrps); 157 - 158 - sort(node_groups, nr_node_ids, sizeof(node_groups[0]), 125 + sort(node_groups, num_nodes, sizeof(node_groups[0]), 159 126 ncpus_cmp_func, NULL); 160 127 161 128 /* ··· 193 226 * finally for each node X: grps(X) <= ncpu(X). 
194 227 * 195 228 */ 196 - for (n = 0; n < nr_node_ids; n++) { 197 - unsigned ngroups, ncpus; 198 229 230 + for (n = 0; n < num_nodes; n++) { 199 231 if (node_groups[n].ncpus == UINT_MAX) 200 232 continue; 201 233 ··· 212 246 } 213 247 } 214 248 249 + /* 250 + * Allocate group number for each node, so that for each node: 251 + * 252 + * 1) the allocated number is >= 1 253 + * 254 + * 2) the allocated number is <= active CPU number of this node 255 + * 256 + * The actual allocated total groups may be less than @numgrps when 257 + * active total CPU number is less than @numgrps. 258 + * 259 + * Active CPUs means the CPUs in '@cpu_mask AND @node_to_cpumask[]' 260 + * for each node. 261 + */ 262 + static void alloc_nodes_groups(unsigned int numgrps, 263 + cpumask_var_t *node_to_cpumask, 264 + const struct cpumask *cpu_mask, 265 + const nodemask_t nodemsk, 266 + struct cpumask *nmsk, 267 + struct node_groups *node_groups) 268 + { 269 + unsigned int n, numcpus = 0; 270 + 271 + for (n = 0; n < nr_node_ids; n++) { 272 + node_groups[n].id = n; 273 + node_groups[n].ncpus = UINT_MAX; 274 + } 275 + 276 + for_each_node_mask(n, nodemsk) { 277 + unsigned int ncpus; 278 + 279 + cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]); 280 + ncpus = cpumask_weight(nmsk); 281 + 282 + if (!ncpus) 283 + continue; 284 + numcpus += ncpus; 285 + node_groups[n].ncpus = ncpus; 286 + } 287 + 288 + numgrps = min_t(unsigned int, numcpus, numgrps); 289 + alloc_groups_to_nodes(numgrps, numcpus, node_groups, nr_node_ids); 290 + } 291 + 292 + static void assign_cpus_to_groups(unsigned int ncpus, 293 + struct cpumask *nmsk, 294 + struct node_groups *nv, 295 + struct cpumask *masks, 296 + unsigned int *curgrp, 297 + unsigned int last_grp) 298 + { 299 + unsigned int v, cpus_per_grp, extra_grps; 300 + /* Account for rounding errors */ 301 + extra_grps = ncpus - nv->ngroups * (ncpus / nv->ngroups); 302 + 303 + /* Spread allocated groups on CPUs of the current node */ 304 + for (v = 0; v < nv->ngroups; v++, 
*curgrp += 1) { 305 + cpus_per_grp = ncpus / nv->ngroups; 306 + 307 + /* Account for extra groups to compensate rounding errors */ 308 + if (extra_grps) { 309 + cpus_per_grp++; 310 + --extra_grps; 311 + } 312 + 313 + /* 314 + * wrapping has to be considered given 'startgrp' 315 + * may start anywhere 316 + */ 317 + if (*curgrp >= last_grp) 318 + *curgrp = 0; 319 + grp_spread_init_one(&masks[*curgrp], nmsk, cpus_per_grp); 320 + } 321 + } 322 + 323 + static int alloc_cluster_groups(unsigned int ncpus, 324 + unsigned int ngroups, 325 + struct cpumask *node_cpumask, 326 + cpumask_var_t msk, 327 + const struct cpumask ***clusters_ptr, 328 + struct node_groups **cluster_groups_ptr) 329 + { 330 + unsigned int ncluster = 0; 331 + unsigned int cpu, nc, n; 332 + const struct cpumask *cluster_mask; 333 + const struct cpumask **clusters; 334 + struct node_groups *cluster_groups; 335 + 336 + cpumask_copy(msk, node_cpumask); 337 + 338 + /* Probe how many clusters in this node. */ 339 + while (1) { 340 + cpu = cpumask_first(msk); 341 + if (cpu >= nr_cpu_ids) 342 + break; 343 + 344 + cluster_mask = topology_cluster_cpumask(cpu); 345 + if (!cpumask_weight(cluster_mask)) 346 + goto no_cluster; 347 + /* Clean out CPUs on the same cluster. */ 348 + cpumask_andnot(msk, msk, cluster_mask); 349 + ncluster++; 350 + } 351 + 352 + /* If ngroups < ncluster, cross cluster is inevitable, skip. */ 353 + if (ncluster == 0 || ncluster > ngroups) 354 + goto no_cluster; 355 + 356 + /* Allocate memory based on cluster number. */ 357 + clusters = kcalloc(ncluster, sizeof(struct cpumask *), GFP_KERNEL); 358 + if (!clusters) 359 + goto no_cluster; 360 + cluster_groups = kcalloc(ncluster, sizeof(struct node_groups), GFP_KERNEL); 361 + if (!cluster_groups) 362 + goto fail_cluster_groups; 363 + 364 + /* Filling cluster info for later process. 
*/ 365 + cpumask_copy(msk, node_cpumask); 366 + for (n = 0; n < ncluster; n++) { 367 + cpu = cpumask_first(msk); 368 + cluster_mask = topology_cluster_cpumask(cpu); 369 + nc = cpumask_weight_and(cluster_mask, node_cpumask); 370 + clusters[n] = cluster_mask; 371 + cluster_groups[n].id = n; 372 + cluster_groups[n].ncpus = nc; 373 + cpumask_andnot(msk, msk, cluster_mask); 374 + } 375 + 376 + alloc_groups_to_nodes(ngroups, ncpus, cluster_groups, ncluster); 377 + 378 + *clusters_ptr = clusters; 379 + *cluster_groups_ptr = cluster_groups; 380 + return ncluster; 381 + 382 + fail_cluster_groups: 383 + kfree(clusters); 384 + no_cluster: 385 + return 0; 386 + } 387 + 388 + /* 389 + * Try group CPUs evenly for cluster locality within a NUMA node. 390 + * 391 + * Return: true if success, false otherwise. 392 + */ 393 + static bool __try_group_cluster_cpus(unsigned int ncpus, 394 + unsigned int ngroups, 395 + struct cpumask *node_cpumask, 396 + struct cpumask *masks, 397 + unsigned int *curgrp, 398 + unsigned int last_grp) 399 + { 400 + struct node_groups *cluster_groups; 401 + const struct cpumask **clusters; 402 + unsigned int ncluster; 403 + bool ret = false; 404 + cpumask_var_t nmsk; 405 + unsigned int i, nc; 406 + 407 + if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL)) 408 + goto fail_nmsk_alloc; 409 + 410 + ncluster = alloc_cluster_groups(ncpus, ngroups, node_cpumask, nmsk, 411 + &clusters, &cluster_groups); 412 + 413 + if (ncluster == 0) 414 + goto fail_no_clusters; 415 + 416 + for (i = 0; i < ncluster; i++) { 417 + struct node_groups *nv = &cluster_groups[i]; 418 + 419 + /* Get the cpus on this cluster. 
*/ 420 + cpumask_and(nmsk, node_cpumask, clusters[nv->id]); 421 + nc = cpumask_weight(nmsk); 422 + if (!nc) 423 + continue; 424 + WARN_ON_ONCE(nv->ngroups > nc); 425 + 426 + assign_cpus_to_groups(nc, nmsk, nv, masks, curgrp, last_grp); 427 + } 428 + 429 + ret = true; 430 + kfree(cluster_groups); 431 + kfree(clusters); 432 + fail_no_clusters: 433 + free_cpumask_var(nmsk); 434 + fail_nmsk_alloc: 435 + return ret; 436 + } 437 + 215 438 static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps, 216 439 cpumask_var_t *node_to_cpumask, 217 440 const struct cpumask *cpu_mask, 218 441 struct cpumask *nmsk, struct cpumask *masks) 219 442 { 220 - unsigned int i, n, nodes, cpus_per_grp, extra_grps, done = 0; 443 + unsigned int i, n, nodes, done = 0; 221 444 unsigned int last_grp = numgrps; 222 445 unsigned int curgrp = startgrp; 223 446 nodemask_t nodemsk = NODE_MASK_NONE; ··· 442 287 alloc_nodes_groups(numgrps, node_to_cpumask, cpu_mask, 443 288 nodemsk, nmsk, node_groups); 444 289 for (i = 0; i < nr_node_ids; i++) { 445 - unsigned int ncpus, v; 290 + unsigned int ncpus; 446 291 struct node_groups *nv = &node_groups[i]; 447 292 448 293 if (nv->ngroups == UINT_MAX) ··· 456 301 457 302 WARN_ON_ONCE(nv->ngroups > ncpus); 458 303 459 - /* Account for rounding errors */ 460 - extra_grps = ncpus - nv->ngroups * (ncpus / nv->ngroups); 461 - 462 - /* Spread allocated groups on CPUs of the current node */ 463 - for (v = 0; v < nv->ngroups; v++, curgrp++) { 464 - cpus_per_grp = ncpus / nv->ngroups; 465 - 466 - /* Account for extra groups to compensate rounding errors */ 467 - if (extra_grps) { 468 - cpus_per_grp++; 469 - --extra_grps; 470 - } 471 - 472 - /* 473 - * wrapping has to be considered given 'startgrp' 474 - * may start anywhere 475 - */ 476 - if (curgrp >= last_grp) 477 - curgrp = 0; 478 - grp_spread_init_one(&masks[curgrp], nmsk, 479 - cpus_per_grp); 304 + if (__try_group_cluster_cpus(ncpus, nv->ngroups, nmsk, 305 + masks, &curgrp, last_grp)) { 306 + done 
+= nv->ngroups; 307 + continue; 480 308 } 309 + 310 + assign_cpus_to_groups(ncpus, nmsk, nv, masks, &curgrp, 311 + last_grp); 481 312 done += nv->ngroups; 482 313 } 483 314 kfree(node_groups);
+1
lib/hexdump.c
··· 6 6 #include <linux/types.h> 7 7 #include <linux/ctype.h> 8 8 #include <linux/errno.h> 9 + #include <linux/hex.h> 9 10 #include <linux/kernel.h> 10 11 #include <linux/minmax.h> 11 12 #include <linux/export.h>
+1 -1
lib/kfifo.c
··· 41 41 return -EINVAL; 42 42 } 43 43 44 - fifo->data = kmalloc_array_node(esize, size, gfp_mask, node); 44 + fifo->data = kmalloc_array_node(size, esize, gfp_mask, node); 45 45 46 46 if (!fifo->data) { 47 47 fifo->mask = 0;
+2 -2
lib/kstrtox.c
··· 340 340 * @s: input string 341 341 * @res: result 342 342 * 343 - * This routine returns 0 iff the first character is one of 'YyTt1NnFf0', or 344 - * [oO][NnFf] for "on" and "off". Otherwise it will return -EINVAL. Value 343 + * This routine returns 0 iff the first character is one of 'EeYyTt1DdNnFf0', 344 + * or [oO][NnFf] for "on" and "off". Otherwise it will return -EINVAL. Value 345 345 * pointed to by res is updated upon finding a match. 346 346 */ 347 347 noinline
+1 -1
lib/once.c
··· 93 93 { 94 94 *done = true; 95 95 mutex_unlock(&once_mutex); 96 - once_disable_jump(once_key, mod); 96 + static_branch_disable(once_key); 97 97 } 98 98 EXPORT_SYMBOL(__do_once_sleepable_done);
+1
lib/string_helpers.c
··· 13 13 #include <linux/device.h> 14 14 #include <linux/errno.h> 15 15 #include <linux/fs.h> 16 + #include <linux/hex.h> 16 17 #include <linux/limits.h> 17 18 #include <linux/mm.h> 18 19 #include <linux/slab.h>
+6 -1
lib/test_kho.c
··· 19 19 #include <linux/printk.h> 20 20 #include <linux/vmalloc.h> 21 21 #include <linux/kexec_handover.h> 22 + #include <linux/kho/abi/kexec_handover.h> 22 23 23 24 #include <net/checksum.h> 24 25 ··· 340 339 341 340 static void kho_test_cleanup(void) 342 341 { 342 + /* unpreserve and free the data stored in folios */ 343 + kho_test_unpreserve_data(&kho_test_state); 343 344 for (int i = 0; i < kho_test_state.nr_folios; i++) 344 345 folio_put(kho_test_state.folios[i]); 345 346 346 347 kvfree(kho_test_state.folios); 347 - vfree(kho_test_state.folios_info); 348 + 349 + /* Unpreserve and release the FDT folio */ 350 + kho_unpreserve_folio(kho_test_state.fdt); 348 351 folio_put(kho_test_state.fdt); 349 352 } 350 353
+66 -79
lib/test_min_heap.c lib/tests/min_heap_kunit.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - #define pr_fmt(fmt) "min_heap_test: " fmt 3 - 4 2 /* 5 3 * Test cases for the min max heap. 6 4 */ 7 5 8 - #include <linux/log2.h> 6 + #include <kunit/test.h> 9 7 #include <linux/min_heap.h> 10 8 #include <linux/module.h> 11 - #include <linux/printk.h> 12 9 #include <linux/random.h> 10 + 11 + struct min_heap_test_case { 12 + const char *str; 13 + bool min_heap; 14 + }; 15 + 16 + static struct min_heap_test_case min_heap_cases[] = { 17 + { 18 + .str = "min", 19 + .min_heap = true, 20 + }, 21 + { 22 + .str = "max", 23 + .min_heap = false, 24 + }, 25 + }; 26 + 27 + KUNIT_ARRAY_PARAM_DESC(min_heap, min_heap_cases, str); 13 28 14 29 DEFINE_MIN_HEAP(int, min_heap_test); 15 30 16 - static __init bool less_than(const void *lhs, const void *rhs, void __always_unused *args) 31 + static bool less_than(const void *lhs, const void *rhs, void __always_unused *args) 17 32 { 18 33 return *(int *)lhs < *(int *)rhs; 19 34 } 20 35 21 - static __init bool greater_than(const void *lhs, const void *rhs, void __always_unused *args) 36 + static bool greater_than(const void *lhs, const void *rhs, void __always_unused *args) 22 37 { 23 38 return *(int *)lhs > *(int *)rhs; 24 39 } 25 40 26 - static __init int pop_verify_heap(bool min_heap, 27 - struct min_heap_test *heap, 28 - const struct min_heap_callbacks *funcs) 41 + static void pop_verify_heap(struct kunit *test, 42 + bool min_heap, 43 + struct min_heap_test *heap, 44 + const struct min_heap_callbacks *funcs) 29 45 { 30 46 int *values = heap->data; 31 - int err = 0; 32 47 int last; 33 48 34 49 last = values[0]; 35 50 min_heap_pop_inline(heap, funcs, NULL); 36 51 while (heap->nr > 0) { 37 - if (min_heap) { 38 - if (last > values[0]) { 39 - pr_err("error: expected %d <= %d\n", last, 40 - values[0]); 41 - err++; 42 - } 43 - } else { 44 - if (last < values[0]) { 45 - pr_err("error: expected %d >= %d\n", last, 46 - values[0]); 47 - err++; 48 - } 49 - } 52 + if (min_heap) 53 + 
KUNIT_EXPECT_LE(test, last, values[0]); 54 + else 55 + KUNIT_EXPECT_GE(test, last, values[0]); 50 56 last = values[0]; 51 57 min_heap_pop_inline(heap, funcs, NULL); 52 58 } 53 - return err; 54 59 } 55 60 56 - static __init int test_heapify_all(bool min_heap) 61 + static void test_heapify_all(struct kunit *test) 57 62 { 63 + const struct min_heap_test_case *params = test->param_value; 58 64 int values[] = { 3, 1, 2, 4, 0x8000000, 0x7FFFFFF, 0, 59 65 -3, -1, -2, -4, 0x8000000, 0x7FFFFFF }; 60 66 struct min_heap_test heap = { ··· 69 63 .size = ARRAY_SIZE(values), 70 64 }; 71 65 struct min_heap_callbacks funcs = { 72 - .less = min_heap ? less_than : greater_than, 66 + .less = params->min_heap ? less_than : greater_than, 73 67 .swp = NULL, 74 68 }; 75 - int i, err; 69 + int i; 76 70 77 71 /* Test with known set of values. */ 78 72 min_heapify_all_inline(&heap, &funcs, NULL); 79 - err = pop_verify_heap(min_heap, &heap, &funcs); 80 - 73 + pop_verify_heap(test, params->min_heap, &heap, &funcs); 81 74 82 75 /* Test with randomly generated values. */ 83 76 heap.nr = ARRAY_SIZE(values); ··· 84 79 values[i] = get_random_u32(); 85 80 86 81 min_heapify_all_inline(&heap, &funcs, NULL); 87 - err += pop_verify_heap(min_heap, &heap, &funcs); 88 - 89 - return err; 82 + pop_verify_heap(test, params->min_heap, &heap, &funcs); 90 83 } 91 84 92 - static __init int test_heap_push(bool min_heap) 85 + static void test_heap_push(struct kunit *test) 93 86 { 87 + const struct min_heap_test_case *params = test->param_value; 94 88 const int data[] = { 3, 1, 2, 4, 0x80000000, 0x7FFFFFFF, 0, 95 89 -3, -1, -2, -4, 0x80000000, 0x7FFFFFFF }; 96 90 int values[ARRAY_SIZE(data)]; ··· 99 95 .size = ARRAY_SIZE(values), 100 96 }; 101 97 struct min_heap_callbacks funcs = { 102 - .less = min_heap ? less_than : greater_than, 98 + .less = params->min_heap ? 
less_than : greater_than, 103 99 .swp = NULL, 104 100 }; 105 - int i, temp, err; 101 + int i, temp; 106 102 107 103 /* Test with known set of values copied from data. */ 108 104 for (i = 0; i < ARRAY_SIZE(data); i++) 109 105 min_heap_push_inline(&heap, &data[i], &funcs, NULL); 110 106 111 - err = pop_verify_heap(min_heap, &heap, &funcs); 107 + pop_verify_heap(test, params->min_heap, &heap, &funcs); 112 108 113 109 /* Test with randomly generated values. */ 114 110 while (heap.nr < heap.size) { 115 111 temp = get_random_u32(); 116 112 min_heap_push_inline(&heap, &temp, &funcs, NULL); 117 113 } 118 - err += pop_verify_heap(min_heap, &heap, &funcs); 119 - 120 - return err; 114 + pop_verify_heap(test, params->min_heap, &heap, &funcs); 121 115 } 122 116 123 - static __init int test_heap_pop_push(bool min_heap) 117 + static void test_heap_pop_push(struct kunit *test) 124 118 { 119 + const struct min_heap_test_case *params = test->param_value; 125 120 const int data[] = { 3, 1, 2, 4, 0x80000000, 0x7FFFFFFF, 0, 126 121 -3, -1, -2, -4, 0x80000000, 0x7FFFFFFF }; 127 122 int values[ARRAY_SIZE(data)]; ··· 130 127 .size = ARRAY_SIZE(values), 131 128 }; 132 129 struct min_heap_callbacks funcs = { 133 - .less = min_heap ? less_than : greater_than, 130 + .less = params->min_heap ? less_than : greater_than, 134 131 .swp = NULL, 135 132 }; 136 - int i, temp, err; 133 + int i, temp; 137 134 138 135 /* Fill values with data to pop and replace. */ 139 - temp = min_heap ? 0x80000000 : 0x7FFFFFFF; 136 + temp = params->min_heap ? 
0x80000000 : 0x7FFFFFFF; 140 137 for (i = 0; i < ARRAY_SIZE(data); i++) 141 138 min_heap_push_inline(&heap, &temp, &funcs, NULL); 142 139 ··· 144 141 for (i = 0; i < ARRAY_SIZE(data); i++) 145 142 min_heap_pop_push_inline(&heap, &data[i], &funcs, NULL); 146 143 147 - err = pop_verify_heap(min_heap, &heap, &funcs); 144 + pop_verify_heap(test, params->min_heap, &heap, &funcs); 148 145 149 146 heap.nr = 0; 150 147 for (i = 0; i < ARRAY_SIZE(data); i++) ··· 155 152 temp = get_random_u32(); 156 153 min_heap_pop_push_inline(&heap, &temp, &funcs, NULL); 157 154 } 158 - err += pop_verify_heap(min_heap, &heap, &funcs); 159 - 160 - return err; 155 + pop_verify_heap(test, params->min_heap, &heap, &funcs); 161 156 } 162 157 163 - static __init int test_heap_del(bool min_heap) 158 + static void test_heap_del(struct kunit *test) 164 159 { 160 + const struct min_heap_test_case *params = test->param_value; 165 161 int values[] = { 3, 1, 2, 4, 0x8000000, 0x7FFFFFF, 0, 166 162 -3, -1, -2, -4, 0x8000000, 0x7FFFFFF }; 167 163 struct min_heap_test heap; ··· 168 166 min_heap_init_inline(&heap, values, ARRAY_SIZE(values)); 169 167 heap.nr = ARRAY_SIZE(values); 170 168 struct min_heap_callbacks funcs = { 171 - .less = min_heap ? less_than : greater_than, 169 + .less = params->min_heap ? less_than : greater_than, 172 170 .swp = NULL, 173 171 }; 174 - int i, err; 172 + int i; 175 173 176 174 /* Test with known set of values. */ 177 175 min_heapify_all_inline(&heap, &funcs, NULL); 178 176 for (i = 0; i < ARRAY_SIZE(values) / 2; i++) 179 177 min_heap_del_inline(&heap, get_random_u32() % heap.nr, &funcs, NULL); 180 - err = pop_verify_heap(min_heap, &heap, &funcs); 181 - 178 + pop_verify_heap(test, params->min_heap, &heap, &funcs); 182 179 183 180 /* Test with randomly generated values. 
*/ 184 181 heap.nr = ARRAY_SIZE(values); ··· 187 186 188 187 for (i = 0; i < ARRAY_SIZE(values) / 2; i++) 189 188 min_heap_del_inline(&heap, get_random_u32() % heap.nr, &funcs, NULL); 190 - err += pop_verify_heap(min_heap, &heap, &funcs); 191 - 192 - return err; 189 + pop_verify_heap(test, params->min_heap, &heap, &funcs); 193 190 } 194 191 195 - static int __init test_min_heap_init(void) 196 - { 197 - int err = 0; 192 + static struct kunit_case min_heap_test_cases[] = { 193 + KUNIT_CASE_PARAM(test_heapify_all, min_heap_gen_params), 194 + KUNIT_CASE_PARAM(test_heap_push, min_heap_gen_params), 195 + KUNIT_CASE_PARAM(test_heap_pop_push, min_heap_gen_params), 196 + KUNIT_CASE_PARAM(test_heap_del, min_heap_gen_params), 197 + {}, 198 + }; 198 199 199 - err += test_heapify_all(true); 200 - err += test_heapify_all(false); 201 - err += test_heap_push(true); 202 - err += test_heap_push(false); 203 - err += test_heap_pop_push(true); 204 - err += test_heap_pop_push(false); 205 - err += test_heap_del(true); 206 - err += test_heap_del(false); 207 - if (err) { 208 - pr_err("test failed with %d errors\n", err); 209 - return -EINVAL; 210 - } 211 - pr_info("test passed\n"); 212 - return 0; 213 - } 214 - module_init(test_min_heap_init); 200 + static struct kunit_suite min_heap_test_suite = { 201 + .name = "min_heap", 202 + .test_cases = min_heap_test_cases, 203 + }; 215 204 216 - static void __exit test_min_heap_exit(void) 217 - { 218 - /* do nothing */ 219 - } 220 - module_exit(test_min_heap_exit); 205 + kunit_test_suite(min_heap_test_suite); 221 206 222 207 MODULE_DESCRIPTION("Test cases for the min max heap"); 223 208 MODULE_LICENSE("GPL");
-134
lib/test_uuid.c
··· 1 - /* 2 - * Test cases for lib/uuid.c module. 3 - */ 4 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 5 - 6 - #include <linux/init.h> 7 - #include <linux/kernel.h> 8 - #include <linux/module.h> 9 - #include <linux/string.h> 10 - #include <linux/uuid.h> 11 - 12 - struct test_uuid_data { 13 - const char *uuid; 14 - guid_t le; 15 - uuid_t be; 16 - }; 17 - 18 - static const struct test_uuid_data test_uuid_test_data[] = { 19 - { 20 - .uuid = "c33f4995-3701-450e-9fbf-206a2e98e576", 21 - .le = GUID_INIT(0xc33f4995, 0x3701, 0x450e, 0x9f, 0xbf, 0x20, 0x6a, 0x2e, 0x98, 0xe5, 0x76), 22 - .be = UUID_INIT(0xc33f4995, 0x3701, 0x450e, 0x9f, 0xbf, 0x20, 0x6a, 0x2e, 0x98, 0xe5, 0x76), 23 - }, 24 - { 25 - .uuid = "64b4371c-77c1-48f9-8221-29f054fc023b", 26 - .le = GUID_INIT(0x64b4371c, 0x77c1, 0x48f9, 0x82, 0x21, 0x29, 0xf0, 0x54, 0xfc, 0x02, 0x3b), 27 - .be = UUID_INIT(0x64b4371c, 0x77c1, 0x48f9, 0x82, 0x21, 0x29, 0xf0, 0x54, 0xfc, 0x02, 0x3b), 28 - }, 29 - { 30 - .uuid = "0cb4ddff-a545-4401-9d06-688af53e7f84", 31 - .le = GUID_INIT(0x0cb4ddff, 0xa545, 0x4401, 0x9d, 0x06, 0x68, 0x8a, 0xf5, 0x3e, 0x7f, 0x84), 32 - .be = UUID_INIT(0x0cb4ddff, 0xa545, 0x4401, 0x9d, 0x06, 0x68, 0x8a, 0xf5, 0x3e, 0x7f, 0x84), 33 - }, 34 - }; 35 - 36 - static const char * const test_uuid_wrong_data[] = { 37 - "c33f4995-3701-450e-9fbf206a2e98e576 ", /* no hyphen(s) */ 38 - "64b4371c-77c1-48f9-8221-29f054XX023b", /* invalid character(s) */ 39 - "0cb4ddff-a545-4401-9d06-688af53e", /* not enough data */ 40 - }; 41 - 42 - static unsigned total_tests __initdata; 43 - static unsigned failed_tests __initdata; 44 - 45 - static void __init test_uuid_failed(const char *prefix, bool wrong, bool be, 46 - const char *data, const char *actual) 47 - { 48 - pr_err("%s test #%u %s %s data: '%s'\n", 49 - prefix, 50 - total_tests, 51 - wrong ? "passed on wrong" : "failed on", 52 - be ? 
"BE" : "LE", 53 - data); 54 - if (actual && *actual) 55 - pr_err("%s test #%u actual data: '%s'\n", 56 - prefix, 57 - total_tests, 58 - actual); 59 - failed_tests++; 60 - } 61 - 62 - static void __init test_uuid_test(const struct test_uuid_data *data) 63 - { 64 - guid_t le; 65 - uuid_t be; 66 - char buf[48]; 67 - 68 - /* LE */ 69 - total_tests++; 70 - if (guid_parse(data->uuid, &le)) 71 - test_uuid_failed("conversion", false, false, data->uuid, NULL); 72 - 73 - total_tests++; 74 - if (!guid_equal(&data->le, &le)) { 75 - sprintf(buf, "%pUl", &le); 76 - test_uuid_failed("cmp", false, false, data->uuid, buf); 77 - } 78 - 79 - /* BE */ 80 - total_tests++; 81 - if (uuid_parse(data->uuid, &be)) 82 - test_uuid_failed("conversion", false, true, data->uuid, NULL); 83 - 84 - total_tests++; 85 - if (!uuid_equal(&data->be, &be)) { 86 - sprintf(buf, "%pUb", &be); 87 - test_uuid_failed("cmp", false, true, data->uuid, buf); 88 - } 89 - } 90 - 91 - static void __init test_uuid_wrong(const char *data) 92 - { 93 - guid_t le; 94 - uuid_t be; 95 - 96 - /* LE */ 97 - total_tests++; 98 - if (!guid_parse(data, &le)) 99 - test_uuid_failed("negative", true, false, data, NULL); 100 - 101 - /* BE */ 102 - total_tests++; 103 - if (!uuid_parse(data, &be)) 104 - test_uuid_failed("negative", true, true, data, NULL); 105 - } 106 - 107 - static int __init test_uuid_init(void) 108 - { 109 - unsigned int i; 110 - 111 - for (i = 0; i < ARRAY_SIZE(test_uuid_test_data); i++) 112 - test_uuid_test(&test_uuid_test_data[i]); 113 - 114 - for (i = 0; i < ARRAY_SIZE(test_uuid_wrong_data); i++) 115 - test_uuid_wrong(test_uuid_wrong_data[i]); 116 - 117 - if (failed_tests == 0) 118 - pr_info("all %u tests passed\n", total_tests); 119 - else 120 - pr_err("failed %u out of %u tests\n", failed_tests, total_tests); 121 - 122 - return failed_tests ? 
-EINVAL : 0; 123 - } 124 - module_init(test_uuid_init); 125 - 126 - static void __exit test_uuid_exit(void) 127 - { 128 - /* do nothing */ 129 - } 130 - module_exit(test_uuid_exit); 131 - 132 - MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>"); 133 - MODULE_DESCRIPTION("Test cases for lib/uuid.c module"); 134 - MODULE_LICENSE("Dual BSD/GPL");
+5
lib/tests/Makefile
··· 20 20 obj-$(CONFIG_FORTIFY_KUNIT_TEST) += fortify_kunit.o 21 21 CFLAGS_test_fprobe.o += $(CC_FLAGS_FTRACE) 22 22 obj-$(CONFIG_FPROBE_SANITY_TEST) += test_fprobe.o 23 + obj-$(CONFIG_GLOB_KUNIT_TEST) += glob_kunit.o 23 24 obj-$(CONFIG_HASHTABLE_KUNIT_TEST) += hashtable_test.o 24 25 obj-$(CONFIG_HASH_KUNIT_TEST) += test_hash.o 25 26 obj-$(CONFIG_TEST_IOV_ITER) += kunit_iov_iter.o 26 27 obj-$(CONFIG_IS_SIGNED_TYPE_KUNIT_TEST) += is_signed_type_kunit.o 27 28 obj-$(CONFIG_KPROBES_SANITY_TEST) += test_kprobes.o 28 29 obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o 30 + obj-$(CONFIG_LIST_PRIVATE_KUNIT_TEST) += list-private-test.o 29 31 obj-$(CONFIG_KFIFO_KUNIT_TEST) += kfifo_kunit.o 30 32 obj-$(CONFIG_TEST_LIST_SORT) += test_list_sort.o 31 33 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o 34 + obj-$(CONFIG_LIVEUPDATE_TEST) += liveupdate.o 32 35 33 36 CFLAGS_longest_symbol_kunit.o += $(call cc-disable-warning, missing-prototypes) 34 37 obj-$(CONFIG_LONGEST_SYM_KUNIT_TEST) += longest_symbol_kunit.o 35 38 36 39 obj-$(CONFIG_MEMCPY_KUNIT_TEST) += memcpy_kunit.o 40 + obj-$(CONFIG_MIN_HEAP_KUNIT_TEST) += min_heap_kunit.o 37 41 CFLAGS_overflow_kunit.o = $(call cc-disable-warning, tautological-constant-out-of-range-compare) 38 42 obj-$(CONFIG_OVERFLOW_KUNIT_TEST) += overflow_kunit.o 39 43 obj-$(CONFIG_PRINTF_KUNIT_TEST) += printf_kunit.o ··· 54 50 obj-$(CONFIG_USERCOPY_KUNIT_TEST) += usercopy_kunit.o 55 51 obj-$(CONFIG_UTIL_MACROS_KUNIT) += util_macros_kunit.o 56 52 obj-$(CONFIG_RATELIMIT_KUNIT_TEST) += test_ratelimit.o 53 + obj-$(CONFIG_UUID_KUNIT_TEST) += uuid_kunit.o 57 54 58 55 obj-$(CONFIG_TEST_RUNTIME_MODULE) += module/
+125
lib/tests/glob_kunit.c
··· 1 + // SPDX-License-Identifier: MIT OR GPL-2.0 2 + /* 3 + * Test cases for glob functions. 4 + */ 5 + 6 + #include <kunit/test.h> 7 + #include <linux/glob.h> 8 + #include <linux/module.h> 9 + 10 + /** 11 + * struct glob_test_case - Test case for glob matching. 12 + * @pat: Pattern to match. 13 + * @str: String to match against. 14 + * @expected: Expected glob_match result, true if matched. 15 + */ 16 + struct glob_test_case { 17 + const char *pat; 18 + const char *str; 19 + bool expected; 20 + }; 21 + 22 + static const struct glob_test_case glob_test_cases[] = { 23 + /* Some basic tests */ 24 + { .pat = "a", .str = "a", .expected = true }, 25 + { .pat = "a", .str = "b", .expected = false }, 26 + { .pat = "a", .str = "aa", .expected = false }, 27 + { .pat = "a", .str = "", .expected = false }, 28 + { .pat = "", .str = "", .expected = true }, 29 + { .pat = "", .str = "a", .expected = false }, 30 + /* Simple character class tests */ 31 + { .pat = "[a]", .str = "a", .expected = true }, 32 + { .pat = "[a]", .str = "b", .expected = false }, 33 + { .pat = "[!a]", .str = "a", .expected = false }, 34 + { .pat = "[!a]", .str = "b", .expected = true }, 35 + { .pat = "[ab]", .str = "a", .expected = true }, 36 + { .pat = "[ab]", .str = "b", .expected = true }, 37 + { .pat = "[ab]", .str = "c", .expected = false }, 38 + { .pat = "[!ab]", .str = "c", .expected = true }, 39 + { .pat = "[a-c]", .str = "b", .expected = true }, 40 + { .pat = "[a-c]", .str = "d", .expected = false }, 41 + /* Corner cases in character class parsing */ 42 + { .pat = "[a-c-e-g]", .str = "-", .expected = true }, 43 + { .pat = "[a-c-e-g]", .str = "d", .expected = false }, 44 + { .pat = "[a-c-e-g]", .str = "f", .expected = true }, 45 + { .pat = "[]a-ceg-ik[]", .str = "a", .expected = true }, 46 + { .pat = "[]a-ceg-ik[]", .str = "]", .expected = true }, 47 + { .pat = "[]a-ceg-ik[]", .str = "[", .expected = true }, 48 + { .pat = "[]a-ceg-ik[]", .str = "h", .expected = true }, 49 + { .pat = "[]a-ceg-ik[]", 
.str = "f", .expected = false }, 50 + { .pat = "[!]a-ceg-ik[]", .str = "h", .expected = false }, 51 + { .pat = "[!]a-ceg-ik[]", .str = "]", .expected = false }, 52 + { .pat = "[!]a-ceg-ik[]", .str = "f", .expected = true }, 53 + /* Simple wild cards */ 54 + { .pat = "?", .str = "a", .expected = true }, 55 + { .pat = "?", .str = "aa", .expected = false }, 56 + { .pat = "??", .str = "a", .expected = false }, 57 + { .pat = "?x?", .str = "axb", .expected = true }, 58 + { .pat = "?x?", .str = "abx", .expected = false }, 59 + { .pat = "?x?", .str = "xab", .expected = false }, 60 + /* Asterisk wild cards (backtracking) */ 61 + { .pat = "*??", .str = "a", .expected = false }, 62 + { .pat = "*??", .str = "ab", .expected = true }, 63 + { .pat = "*??", .str = "abc", .expected = true }, 64 + { .pat = "*??", .str = "abcd", .expected = true }, 65 + { .pat = "??*", .str = "a", .expected = false }, 66 + { .pat = "??*", .str = "ab", .expected = true }, 67 + { .pat = "??*", .str = "abc", .expected = true }, 68 + { .pat = "??*", .str = "abcd", .expected = true }, 69 + { .pat = "?*?", .str = "a", .expected = false }, 70 + { .pat = "?*?", .str = "ab", .expected = true }, 71 + { .pat = "?*?", .str = "abc", .expected = true }, 72 + { .pat = "?*?", .str = "abcd", .expected = true }, 73 + { .pat = "*b", .str = "b", .expected = true }, 74 + { .pat = "*b", .str = "ab", .expected = true }, 75 + { .pat = "*b", .str = "ba", .expected = false }, 76 + { .pat = "*b", .str = "bb", .expected = true }, 77 + { .pat = "*b", .str = "abb", .expected = true }, 78 + { .pat = "*b", .str = "bab", .expected = true }, 79 + { .pat = "*bc", .str = "abbc", .expected = true }, 80 + { .pat = "*bc", .str = "bc", .expected = true }, 81 + { .pat = "*bc", .str = "bbc", .expected = true }, 82 + { .pat = "*bc", .str = "bcbc", .expected = true }, 83 + /* Multiple asterisks (complex backtracking) */ 84 + { .pat = "*ac*", .str = "abacadaeafag", .expected = true }, 85 + { .pat = "*ac*ae*ag*", .str = "abacadaeafag", .expected 
= true }, 86 + { .pat = "*a*b*[bc]*[ef]*g*", .str = "abacadaeafag", .expected = true }, 87 + { .pat = "*a*b*[ef]*[cd]*g*", .str = "abacadaeafag", .expected = false }, 88 + { .pat = "*abcd*", .str = "abcabcabcabcdefg", .expected = true }, 89 + { .pat = "*ab*cd*", .str = "abcabcabcabcdefg", .expected = true }, 90 + { .pat = "*abcd*abcdef*", .str = "abcabcdabcdeabcdefg", .expected = true }, 91 + { .pat = "*abcd*", .str = "abcabcabcabcefg", .expected = false }, 92 + { .pat = "*ab*cd*", .str = "abcabcabcabcefg", .expected = false }, 93 + }; 94 + 95 + static void glob_case_to_desc(const struct glob_test_case *t, char *desc) 96 + { 97 + snprintf(desc, KUNIT_PARAM_DESC_SIZE, "pat:\"%s\" str:\"%s\"", t->pat, t->str); 98 + } 99 + 100 + KUNIT_ARRAY_PARAM(glob, glob_test_cases, glob_case_to_desc); 101 + 102 + static void glob_test_match(struct kunit *test) 103 + { 104 + const struct glob_test_case *params = test->param_value; 105 + 106 + KUNIT_EXPECT_EQ_MSG(test, 107 + glob_match(params->pat, params->str), 108 + params->expected, 109 + "Pattern: \"%s\", String: \"%s\", Expected: %d", 110 + params->pat, params->str, params->expected); 111 + } 112 + 113 + static struct kunit_case glob_kunit_test_cases[] = { 114 + KUNIT_CASE_PARAM(glob_test_match, glob_gen_params), 115 + {} 116 + }; 117 + 118 + static struct kunit_suite glob_test_suite = { 119 + .name = "glob", 120 + .test_cases = glob_kunit_test_cases, 121 + }; 122 + 123 + kunit_test_suite(glob_test_suite); 124 + MODULE_DESCRIPTION("Test cases for glob functions"); 125 + MODULE_LICENSE("Dual MIT/GPL");
+76
lib/tests/list-private-test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * KUnit compilation/smoke test for Private list primitives. 4 + * 5 + * Copyright (c) 2025, Google LLC. 6 + * Pasha Tatashin <pasha.tatashin@soleen.com> 7 + */ 8 + #include <linux/list_private.h> 9 + #include <kunit/test.h> 10 + 11 + /* 12 + * This forces compiler to warn if you access it directly, because list 13 + * primitives expect (struct list_head *), not (volatile struct list_head *). 14 + */ 15 + #undef __private 16 + #define __private volatile 17 + 18 + /* Redefine ACCESS_PRIVATE for this test. */ 19 + #undef ACCESS_PRIVATE 20 + #define ACCESS_PRIVATE(p, member) \ 21 + (*((struct list_head *)((unsigned long)&((p)->member)))) 22 + 23 + struct list_test_struct { 24 + int data; 25 + struct list_head __private list; 26 + }; 27 + 28 + static void list_private_compile_test(struct kunit *test) 29 + { 30 + struct list_test_struct entry; 31 + struct list_test_struct *pos, *n; 32 + LIST_HEAD(head); 33 + 34 + INIT_LIST_HEAD(&ACCESS_PRIVATE(&entry, list)); 35 + list_add(&ACCESS_PRIVATE(&entry, list), &head); 36 + pos = &entry; 37 + 38 + pos = list_private_entry(&ACCESS_PRIVATE(&entry, list), struct list_test_struct, list); 39 + pos = list_private_first_entry(&head, struct list_test_struct, list); 40 + pos = list_private_last_entry(&head, struct list_test_struct, list); 41 + pos = list_private_next_entry(pos, list); 42 + pos = list_private_prev_entry(pos, list); 43 + pos = list_private_next_entry_circular(pos, &head, list); 44 + pos = list_private_prev_entry_circular(pos, &head, list); 45 + 46 + if (list_private_entry_is_head(pos, &head, list)) 47 + return; 48 + 49 + list_private_for_each_entry(pos, &head, list) { } 50 + list_private_for_each_entry_reverse(pos, &head, list) { } 51 + list_private_for_each_entry_continue(pos, &head, list) { } 52 + list_private_for_each_entry_continue_reverse(pos, &head, list) { } 53 + list_private_for_each_entry_from(pos, &head, list) { } 54 + 
list_private_for_each_entry_from_reverse(pos, &head, list) { } 55 + 56 + list_private_for_each_entry_safe(pos, n, &head, list) 57 + list_private_safe_reset_next(pos, n, list); 58 + list_private_for_each_entry_safe_continue(pos, n, &head, list) { } 59 + list_private_for_each_entry_safe_from(pos, n, &head, list) { } 60 + list_private_for_each_entry_safe_reverse(pos, n, &head, list) { } 61 + } 62 + 63 + static struct kunit_case list_private_test_cases[] = { 64 + KUNIT_CASE(list_private_compile_test), 65 + {}, 66 + }; 67 + 68 + static struct kunit_suite list_private_test_module = { 69 + .name = "list-private-kunit-test", 70 + .test_cases = list_private_test_cases, 71 + }; 72 + 73 + kunit_test_suite(list_private_test_module); 74 + 75 + MODULE_DESCRIPTION("KUnit compilation test for private list primitives"); 76 + MODULE_LICENSE("GPL");
+158
lib/tests/liveupdate.c
// SPDX-License-Identifier: GPL-2.0

/*
 * Copyright (c) 2025, Google LLC.
 * Pasha Tatashin <pasha.tatashin@soleen.com>
 */

#define pr_fmt(fmt) KBUILD_MODNAME " test: " fmt

#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/liveupdate.h>
#include <linux/module.h>
#include "../../kernel/liveupdate/luo_internal.h"

static const struct liveupdate_flb_ops test_flb_ops;
#define DEFINE_TEST_FLB(i) { \
    .ops = &test_flb_ops, \
    .compatible = LIVEUPDATE_TEST_FLB_COMPATIBLE(i), \
}

/* Number of Test FLBs to register with every file handler */
#define TEST_NFLBS 3
static struct liveupdate_flb test_flbs[TEST_NFLBS] = {
    DEFINE_TEST_FLB(0),
    DEFINE_TEST_FLB(1),
    DEFINE_TEST_FLB(2),
};

#define TEST_FLB_MAGIC_BASE 0xFEEDF00DCAFEBEE0ULL

static int test_flb_preserve(struct liveupdate_flb_op_args *argp)
{
    ptrdiff_t index = argp->flb - test_flbs;

    pr_info("%s: preserve was triggered\n", argp->flb->compatible);
    argp->data = TEST_FLB_MAGIC_BASE + index;

    return 0;
}

static void test_flb_unpreserve(struct liveupdate_flb_op_args *argp)
{
    pr_info("%s: unpreserve was triggered\n", argp->flb->compatible);
}

static int test_flb_retrieve(struct liveupdate_flb_op_args *argp)
{
    ptrdiff_t index = argp->flb - test_flbs;
    u64 expected_data = TEST_FLB_MAGIC_BASE + index;

    if (argp->data == expected_data) {
        pr_info("%s: found flb data from the previous boot\n",
                argp->flb->compatible);
        argp->obj = (void *)argp->data;
    } else {
        pr_err("%s: ERROR - incorrect data handle: %llx, expected %llx\n",
               argp->flb->compatible, argp->data, expected_data);
        return -EINVAL;
    }

    return 0;
}

static void test_flb_finish(struct liveupdate_flb_op_args *argp)
{
    ptrdiff_t index = argp->flb - test_flbs;
    void *expected_obj = (void *)(TEST_FLB_MAGIC_BASE + index);

    if (argp->obj == expected_obj) {
        pr_info("%s: finish was triggered\n", argp->flb->compatible);
    } else {
        pr_err("%s: ERROR - finish called with invalid object\n",
               argp->flb->compatible);
    }
}

static const struct liveupdate_flb_ops test_flb_ops = {
    .preserve = test_flb_preserve,
    .unpreserve = test_flb_unpreserve,
    .retrieve = test_flb_retrieve,
    .finish = test_flb_finish,
    .owner = THIS_MODULE,
};

static void liveupdate_test_init(void)
{
    static DEFINE_MUTEX(init_lock);
    static bool initialized;
    int i;

    guard(mutex)(&init_lock);

    if (initialized)
        return;

    for (i = 0; i < TEST_NFLBS; i++) {
        struct liveupdate_flb *flb = &test_flbs[i];
        void *obj;
        int err;

        err = liveupdate_flb_get_incoming(flb, &obj);
        if (err && err != -ENODATA && err != -ENOENT) {
            pr_err("liveupdate_flb_get_incoming for %s failed: %pe\n",
                   flb->compatible, ERR_PTR(err));
        }
    }
    initialized = true;
}

void liveupdate_test_register(struct liveupdate_file_handler *fh)
{
    int err, i;

    liveupdate_test_init();

    for (i = 0; i < TEST_NFLBS; i++) {
        struct liveupdate_flb *flb = &test_flbs[i];

        err = liveupdate_register_flb(fh, flb);
        if (err) {
            pr_err("Failed to register %s %pe\n",
                   flb->compatible, ERR_PTR(err));
        }
    }

    err = liveupdate_register_flb(fh, &test_flbs[0]);
    if (!err || err != -EEXIST) {
        pr_err("Failed: %s should be already registered, but got err: %pe\n",
               test_flbs[0].compatible, ERR_PTR(err));
    }

    pr_info("Registered %d FLBs with file handler: [%s]\n",
            TEST_NFLBS, fh->compatible);
}

void liveupdate_test_unregister(struct liveupdate_file_handler *fh)
{
    int err, i;

    for (i = 0; i < TEST_NFLBS; i++) {
        struct liveupdate_flb *flb = &test_flbs[i];

        err = liveupdate_unregister_flb(fh, flb);
        if (err) {
            pr_err("Failed to unregister %s %pe\n",
                   flb->compatible, ERR_PTR(err));
        }
    }

    pr_info("Unregistered %d FLBs from file handler: [%s]\n",
            TEST_NFLBS, fh->compatible);
}

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Pasha Tatashin <pasha.tatashin@soleen.com>");
MODULE_DESCRIPTION("In-kernel test for LUO mechanism");
+106
lib/tests/uuid_kunit.c
// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
/*
 * Test cases for lib/uuid.c module.
 */

#include <kunit/test.h>
#include <linux/uuid.h>

struct test_uuid_data {
    const char *uuid;
    guid_t le;
    uuid_t be;
};

static const struct test_uuid_data test_uuid_test_data[] = {
    {
        .uuid = "c33f4995-3701-450e-9fbf-206a2e98e576",
        .le = GUID_INIT(0xc33f4995, 0x3701, 0x450e, 0x9f, 0xbf, 0x20, 0x6a, 0x2e, 0x98, 0xe5, 0x76),
        .be = UUID_INIT(0xc33f4995, 0x3701, 0x450e, 0x9f, 0xbf, 0x20, 0x6a, 0x2e, 0x98, 0xe5, 0x76),
    },
    {
        .uuid = "64b4371c-77c1-48f9-8221-29f054fc023b",
        .le = GUID_INIT(0x64b4371c, 0x77c1, 0x48f9, 0x82, 0x21, 0x29, 0xf0, 0x54, 0xfc, 0x02, 0x3b),
        .be = UUID_INIT(0x64b4371c, 0x77c1, 0x48f9, 0x82, 0x21, 0x29, 0xf0, 0x54, 0xfc, 0x02, 0x3b),
    },
    {
        .uuid = "0cb4ddff-a545-4401-9d06-688af53e7f84",
        .le = GUID_INIT(0x0cb4ddff, 0xa545, 0x4401, 0x9d, 0x06, 0x68, 0x8a, 0xf5, 0x3e, 0x7f, 0x84),
        .be = UUID_INIT(0x0cb4ddff, 0xa545, 0x4401, 0x9d, 0x06, 0x68, 0x8a, 0xf5, 0x3e, 0x7f, 0x84),
    },
};

static const char * const test_uuid_wrong_data[] = {
    "c33f4995-3701-450e-9fbf206a2e98e576 ",    /* no hyphen(s) */
    "64b4371c-77c1-48f9-8221-29f054XX023b",    /* invalid character(s) */
    "0cb4ddff-a545-4401-9d06-688af53e",        /* not enough data */
};

static void uuid_test_guid_valid(struct kunit *test)
{
    unsigned int i;
    const struct test_uuid_data *data;
    guid_t le;

    for (i = 0; i < ARRAY_SIZE(test_uuid_test_data); i++) {
        data = &test_uuid_test_data[i];
        KUNIT_EXPECT_EQ(test, guid_parse(data->uuid, &le), 0);
        KUNIT_EXPECT_TRUE(test, guid_equal(&data->le, &le));
    }
}

static void uuid_test_uuid_valid(struct kunit *test)
{
    unsigned int i;
    const struct test_uuid_data *data;
    uuid_t be;

    for (i = 0; i < ARRAY_SIZE(test_uuid_test_data); i++) {
        data = &test_uuid_test_data[i];
        KUNIT_EXPECT_EQ(test, uuid_parse(data->uuid, &be), 0);
        KUNIT_EXPECT_TRUE(test, uuid_equal(&data->be, &be));
    }
}

static void uuid_test_guid_invalid(struct kunit *test)
{
    unsigned int i;
    const char *uuid;
    guid_t le;

    for (i = 0; i < ARRAY_SIZE(test_uuid_wrong_data); i++) {
        uuid = test_uuid_wrong_data[i];
        KUNIT_EXPECT_EQ(test, guid_parse(uuid, &le), -EINVAL);
    }
}

static void uuid_test_uuid_invalid(struct kunit *test)
{
    unsigned int i;
    const char *uuid;
    uuid_t be;

    for (i = 0; i < ARRAY_SIZE(test_uuid_wrong_data); i++) {
        uuid = test_uuid_wrong_data[i];
        KUNIT_EXPECT_EQ(test, uuid_parse(uuid, &be), -EINVAL);
    }
}

static struct kunit_case uuid_test_cases[] = {
    KUNIT_CASE(uuid_test_guid_valid),
    KUNIT_CASE(uuid_test_uuid_valid),
    KUNIT_CASE(uuid_test_guid_invalid),
    KUNIT_CASE(uuid_test_uuid_invalid),
    {},
};

static struct kunit_suite uuid_test_suite = {
    .name = "uuid",
    .test_cases = uuid_test_cases,
};

kunit_test_suite(uuid_test_suite);

MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
MODULE_DESCRIPTION("Test cases for lib/uuid.c module");
MODULE_LICENSE("Dual BSD/GPL");
+1
lib/uuid.c
 #include <linux/ctype.h>
 #include <linux/errno.h>
 #include <linux/export.h>
+#include <linux/hex.h>
 #include <linux/uuid.h>
 #include <linux/random.h>
+1
lib/vsprintf.c
 #include <linux/types.h>
 #include <linux/string.h>
 #include <linux/ctype.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>
 #include <linux/kallsyms.h>
 #include <linux/math64.h>
+1 -1
mm/Makefile
 obj-$(CONFIG_DEVICE_MIGRATION) += migrate_device.o
 obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
 obj-$(CONFIG_PAGE_COUNTER) += page_counter.o
-obj-$(CONFIG_LIVEUPDATE) += memfd_luo.o
+obj-$(CONFIG_LIVEUPDATE_MEMFD) += memfd_luo.o
 obj-$(CONFIG_MEMCG_V1) += memcontrol-v1.o
 obj-$(CONFIG_MEMCG) += memcontrol.o vmpressure.o
 ifdef CONFIG_SWAP
+2 -2
mm/kfence/kfence_test.c
 	/* Title */
 	cur = expect[0];
-	end = &expect[0][sizeof(expect[0]) - 1];
+	end = ARRAY_END(expect[0]);
 	switch (r->type) {
 	case KFENCE_ERROR_OOB:
 		cur += scnprintf(cur, end - cur, "BUG: KFENCE: out-of-bounds %s",
···
 	/* Access information */
 	cur = expect[1];
-	end = &expect[1][sizeof(expect[1]) - 1];
+	end = ARRAY_END(expect[1]);

 	switch (r->type) {
 	case KFENCE_ERROR_OOB:
+1 -1
mm/kmemleak.c
 {
 	unsigned long flags;

-	if (object < mem_pool || object >= mem_pool + ARRAY_SIZE(mem_pool)) {
+	if (object < mem_pool || object >= ARRAY_END(mem_pool)) {
 		kmem_cache_free(object_cache, object);
 		return;
 	}
+1 -1
mm/kmsan/kmsan_test.c
 	/* Title */
 	cur = expected_header;
-	end = &expected_header[sizeof(expected_header) - 1];
+	end = ARRAY_END(expected_header);

 	cur += scnprintf(cur, end - cur, "BUG: KMSAN: %s", r->error_type);
+1 -3
mm/memblock.c
 #ifdef CONFIG_KEXEC_HANDOVER
 #include <linux/libfdt.h>
 #include <linux/kexec_handover.h>
+#include <linux/kho/abi/memblock.h>
 #endif /* CONFIG_KEXEC_HANDOVER */

 #include <asm/sections.h>
···
 }

 #ifdef CONFIG_KEXEC_HANDOVER
-#define MEMBLOCK_KHO_FDT "memblock"
-#define MEMBLOCK_KHO_NODE_COMPATIBLE "memblock-v1"
-#define RESERVE_MEM_KHO_NODE_COMPATIBLE "reserve-mem-v1"

 static int __init reserved_mem_preserve(void)
 {
+2 -2
mm/memcontrol-v1.c
 	mem_cgroup_flush_stats(memcg);

-	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
+	for (stat = stats; stat < ARRAY_END(stats); stat++) {
 		seq_printf(m, "%s=%lu", stat->name,
 			   mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
 						   false));
···
 	seq_putc(m, '\n');
 }

-	for (stat = stats; stat < stats + ARRAY_SIZE(stats); stat++) {
+	for (stat = stats; stat < ARRAY_END(stats); stat++) {

 		seq_printf(m, "hierarchical_%s=%lu", stat->name,
 			   mem_cgroup_nr_lru_pages(memcg, stat->lru_mask,
+1
net/bridge/br_sysfs_br.c
 #include <linux/kernel.h>
 #include <linux/netdevice.h>
 #include <linux/etherdevice.h>
+#include <linux/hex.h>
 #include <linux/if_bridge.h>
 #include <linux/rtnetlink.h>
 #include <linux/spinlock.h>
+1 -1
net/core/netclassid_cgroup.c
 	/* Only update the leader task, when many threads in this task,
 	 * so it can avoid the useless traversal.
 	 */
-	if (p != p->group_leader)
+	if (!thread_group_leader(p))
 		return;

 	do {
+1
net/core/pktgen.c
 #include <linux/string.h>
 #include <linux/ptrace.h>
 #include <linux/errno.h>
+#include <linux/hex.h>
 #include <linux/ioport.h>
 #include <linux/interrupt.h>
 #include <linux/capability.h>
+1
net/core/utils.c
  */

 #include <linux/module.h>
+#include <linux/hex.h>
 #include <linux/jiffies.h>
 #include <linux/kernel.h>
 #include <linux/ctype.h>
+1
net/ipv4/arp.c
 #include <linux/socket.h>
 #include <linux/sockios.h>
 #include <linux/errno.h>
+#include <linux/hex.h>
 #include <linux/in.h>
 #include <linux/mm.h>
 #include <linux/inet.h>
+1
net/mac80211/debugfs_netdev.c
 #include <linux/kernel.h>
 #include <linux/device.h>
+#include <linux/hex.h>
 #include <linux/if.h>
 #include <linux/if_ether.h>
 #include <linux/interrupt.h>
+1
net/sunrpc/cache.c
 #include <linux/types.h>
 #include <linux/fs.h>
 #include <linux/file.h>
+#include <linux/hex.h>
 #include <linux/slab.h>
 #include <linux/signal.h>
 #include <linux/sched.h>
+1
net/tipc/core.h
 #include <linux/types.h>
 #include <linux/kernel.h>
 #include <linux/errno.h>
+#include <linux/hex.h>
 #include <linux/mm.h>
 #include <linux/timer.h>
 #include <linux/string.h>
+12 -12
rust/kernel/task.rs
         self.0.get()
     }

-    /// Returns the group leader of the given task.
-    pub fn group_leader(&self) -> &Task {
-        // SAFETY: The group leader of a task never changes after initialization, so reading this
-        // field is not a data race.
-        let ptr = unsafe { *ptr::addr_of!((*self.as_ptr()).group_leader) };
-
-        // SAFETY: The lifetime of the returned task reference is tied to the lifetime of `self`,
-        // and given that a task has a reference to its group leader, we know it must be valid for
-        // the lifetime of the returned task reference.
-        unsafe { &*ptr.cast() }
-    }
-
     /// Returns the PID of the given task.
     pub fn pid(&self) -> Pid {
         // SAFETY: The pid of a task never changes after initialization, so reading this field is
···
         // escape the scope in which the current pointer was obtained, e.g. it cannot live past a
         // `release_task()` call.
         Some(unsafe { PidNamespace::from_ptr(active_ns) })
+    }
+
+    /// Returns the group leader of the current task.
+    pub fn group_leader(&self) -> &Task {
+        // SAFETY: The group leader of a task never changes while the task is running, and `self`
+        // is the current task, which is guaranteed running.
+        let ptr = unsafe { (*self.as_ptr()).group_leader };
+
+        // SAFETY: `current->group_leader` stays valid for at least the duration in which `current`
+        // is running, and the signature of this function ensures that the returned `&Task` can
+        // only be used while `current` is still valid, thus still running.
+        unsafe { &*ptr.cast() }
     }
 }
+1
scripts/bloat-o-meter
     if name.startswith("__se_sys"): continue
     if name.startswith("__se_compat_sys"): continue
     if name.startswith("__addressable_"): continue
+    if name.startswith("__noinstr_text_start"): continue
     if name == "linux_banner": continue
     if name == "vermagic": continue
     # statics and some other optimizations adds random .NUMBER
+10
scripts/checkpatch.pl
 		}
 	}

+# Check for invalid patch separator
+		if ($in_commit_log &&
+		    $line =~ /^---.+/) {
+			if (ERROR("BAD_COMMIT_SEPARATOR",
+				  "Invalid commit separator - some tools may have problems applying this\n" . $herecurr) &&
+			    $fix) {
+				$fixed[$fixlinenr] =~ s/-/=/g;
+			}
+		}
+
 # Check for patch separator
 		if ($line =~ /^---$/) {
 			$has_patch_separator = 1;
+1
security/integrity/evm/evm_crypto.c
 #define pr_fmt(fmt) "EVM: "fmt

 #include <linux/export.h>
+#include <linux/hex.h>
 #include <linux/crypto.h>
 #include <linux/xattr.h>
 #include <linux/evm.h>
+1
security/integrity/ima/ima_api.c
 #include <linux/slab.h>
 #include <linux/file.h>
 #include <linux/fs.h>
+#include <linux/hex.h>
 #include <linux/xattr.h>
 #include <linux/evm.h>
 #include <linux/fsverity.h>
+35
security/integrity/ima/ima_kexec.c
 #include <linux/kexec.h>
 #include <linux/of.h>
 #include <linux/ima.h>
+#include <linux/mm.h>
+#include <linux/overflow.h>
 #include <linux/reboot.h>
 #include <asm/page.h>
 #include "ima.h"
···
 	default:
 		pr_debug("Error restoring the measurement list: %d\n", rc);
 	}
+}
+
+/*
+ * ima_validate_range - verify a physical buffer lies in addressable RAM
+ * @phys: physical start address of the buffer from previous kernel
+ * @size: size of the buffer
+ *
+ * On success return 0. On failure returns -EINVAL so callers can skip
+ * restoring.
+ */
+int ima_validate_range(phys_addr_t phys, size_t size)
+{
+	unsigned long start_pfn, end_pfn;
+	phys_addr_t end_phys;
+
+	if (check_add_overflow(phys, (phys_addr_t)size - 1, &end_phys))
+		return -EINVAL;
+
+	start_pfn = PHYS_PFN(phys);
+	end_pfn = PHYS_PFN(end_phys);
+
+#ifdef CONFIG_X86
+	if (!pfn_range_is_mapped(start_pfn, end_pfn))
+#else
+	if (!page_is_ram(start_pfn) || !page_is_ram(end_pfn))
+#endif
+	{
+		pr_warn("IMA: previous kernel measurement buffer %pa (size 0x%zx) lies outside available memory\n",
+			&phys, size);
+		return -EINVAL;
+	}
+
+	return 0;
 }
+1
security/ipe/digest.c
  * Copyright (C) 2020-2024 Microsoft Corporation. All rights reserved.
  */

+#include <linux/hex.h>
 #include "digest.h"

 /**
+1
security/keys/encrypted-keys/encrypted.c
 #include <linux/uaccess.h>
 #include <linux/module.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/parser.h>
+1
security/keys/trusted-keys/trusted_core.c
 #include <keys/trusted_pkwm.h>
 #include <linux/capability.h>
 #include <linux/err.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/key-type.h>
 #include <linux/module.h>
+1
security/keys/trusted-keys/trusted_tpm1.c
 #include <crypto/hash_info.h>
 #include <crypto/sha1.h>
 #include <crypto/utils.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/parser.h>
+1
security/loadpin/loadpin.c
 #include <linux/module.h>
 #include <linux/fs.h>
+#include <linux/hex.h>
 #include <linux/kernel_read_file.h>
 #include <linux/lsm_hooks.h>
 #include <linux/mount.h>
+1
security/selinux/selinuxfs.c
 #include <linux/vmalloc.h>
 #include <linux/fs.h>
 #include <linux/fs_context.h>
+#include <linux/hex.h>
 #include <linux/mount.h>
 #include <linux/mutex.h>
 #include <linux/namei.h>
+1
sound/pci/riptide/riptide.c
  */

 #include <linux/delay.h>
+#include <linux/hex.h>
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/pci.h>
+1
sound/usb/6fire/firmware.c
 #include <linux/firmware.h>
 #include <linux/module.h>
 #include <linux/bitrev.h>
+#include <linux/hex.h>
 #include <linux/kernel.h>

 #include "firmware.h"
+145 -27
tools/accounting/getdelays.c
 #include <sys/socket.h>
 #include <sys/wait.h>
 #include <signal.h>
+#include <time.h>

 #include <linux/genetlink.h>
 #include <linux/taskstats.h>
···
 #define delay_ms(t) (t / 1000000ULL)

 /*
+ * Format timespec64 to human readable string (YYYY-MM-DD HH:MM:SS)
+ * Returns formatted string or "N/A" if timestamp is zero
+ */
+static const char *format_timespec64(struct timespec64 *ts)
+{
+	static char buffer[32];
+	struct tm tm_info;
+	time_t time_sec;
+
+	/* Check if timestamp is zero (not set) */
+	if (ts->tv_sec == 0 && ts->tv_nsec == 0)
+		return "N/A";
+
+	time_sec = (time_t)ts->tv_sec;
+
+	/* Use thread-safe localtime_r */
+	if (localtime_r(&time_sec, &tm_info) == NULL)
+		return "N/A";
+
+	snprintf(buffer, sizeof(buffer), "%04d-%02d-%02dT%02d:%02d:%02d",
+		 tm_info.tm_year + 1900,
+		 tm_info.tm_mon + 1,
+		 tm_info.tm_mday,
+		 tm_info.tm_hour,
+		 tm_info.tm_min,
+		 tm_info.tm_sec);
+
+	return buffer;
+}
+
+/*
  * Version compatibility note:
  * Field availability depends on taskstats version (t->version),
  * corresponding to TASKSTATS_VERSION in kernel headers
···
  * version >= 13 - supports WPCOPY statistics
  * version >= 14 - supports IRQ statistics
  * version >= 16 - supports *_max and *_min delay statistics
+ * version >= 17 - supports delay max timestamp statistics
  *
  * Always verify version before accessing version-dependent fields
  * to maintain backward compatibility.
  */
 #define PRINT_CPU_DELAY(version, t) \
 do { \
-	if (version >= 16) { \
+	if (version >= 17) { \
+		printf("%-10s%15s%15s%15s%15s%15s%15s%15s%25s\n", \
+		       "CPU", "count", "real total", "virtual total", \
+		       "delay total", "delay average", "delay max", \
+		       "delay min", "delay max timestamp"); \
+		printf("      %15llu%15llu%15llu%15llu%15.3fms%13.6fms%13.6fms%23s\n", \
+		       (unsigned long long)(t)->cpu_count, \
+		       (unsigned long long)(t)->cpu_run_real_total, \
+		       (unsigned long long)(t)->cpu_run_virtual_total, \
+		       (unsigned long long)(t)->cpu_delay_total, \
+		       average_ms((double)(t)->cpu_delay_total, (t)->cpu_count), \
+		       delay_ms((double)(t)->cpu_delay_max), \
+		       delay_ms((double)(t)->cpu_delay_min), \
+		       format_timespec64(&(t)->cpu_delay_max_ts)); \
+	} else if (version >= 16) { \
 		printf("%-10s%15s%15s%15s%15s%15s%15s%15s\n", \
 		       "CPU", "count", "real total", "virtual total", \
 		       "delay total", "delay average", "delay max", "delay min"); \
···
 	} \
 } while (0)

+#define PRINT_FILED_DELAY_WITH_TS(name, version, t, count, total, max, min, max_ts) \
+do { \
+	if (version >= 17) { \
+		printf("%-10s%15s%15s%15s%15s%15s%25s\n", \
+		       name, "count", "delay total", "delay average", \
+		       "delay max", "delay min", "delay max timestamp"); \
+		printf("      %15llu%15llu%15.3fms%13.6fms%13.6fms%23s\n", \
+		       (unsigned long long)(t)->count, \
+		       (unsigned long long)(t)->total, \
+		       average_ms((double)(t)->total, (t)->count), \
+		       delay_ms((double)(t)->max), \
+		       delay_ms((double)(t)->min), \
+		       format_timespec64(&(t)->max_ts)); \
+	} else if (version >= 16) { \
+		printf("%-10s%15s%15s%15s%15s%15s\n", \
+		       name, "count", "delay total", "delay average", \
+		       "delay max", "delay min"); \
+		printf("      %15llu%15llu%15.3fms%13.6fms%13.6fms\n", \
+		       (unsigned long long)(t)->count, \
+		       (unsigned long long)(t)->total, \
+		       average_ms((double)(t)->total, (t)->count), \
+		       delay_ms((double)(t)->max), \
+		       delay_ms((double)(t)->min)); \
+	} else { \
+		printf("%-10s%15s%15s%15s\n", \
+		       name, "count", "delay total", "delay average"); \
+		printf("      %15llu%15llu%15.3fms\n", \
+		       (unsigned long long)(t)->count, \
+		       (unsigned long long)(t)->total, \
+		       average_ms((double)(t)->total, (t)->count)); \
+	} \
+} while (0)
+
 static void print_delayacct(struct taskstats *t)
 {
 	printf("\n\n");

 	PRINT_CPU_DELAY(t->version, t);

-	PRINT_FILED_DELAY("IO", t->version, t,
-			  blkio_count, blkio_delay_total,
-			  blkio_delay_max, blkio_delay_min);
+	/* Use new macro with timestamp support for version >= 17 */
+	if (t->version >= 17) {
+		PRINT_FILED_DELAY_WITH_TS("IO", t->version, t,
+				blkio_count, blkio_delay_total,
+				blkio_delay_max, blkio_delay_min, blkio_delay_max_ts);

-	PRINT_FILED_DELAY("SWAP", t->version, t,
-			  swapin_count, swapin_delay_total,
-			  swapin_delay_max, swapin_delay_min);
+		PRINT_FILED_DELAY_WITH_TS("SWAP", t->version, t,
+				swapin_count, swapin_delay_total,
+				swapin_delay_max, swapin_delay_min, swapin_delay_max_ts);

-	PRINT_FILED_DELAY("RECLAIM", t->version, t,
-			  freepages_count, freepages_delay_total,
-			  freepages_delay_max, freepages_delay_min);
+		PRINT_FILED_DELAY_WITH_TS("RECLAIM", t->version, t,
+				freepages_count, freepages_delay_total,
+				freepages_delay_max, freepages_delay_min, freepages_delay_max_ts);

-	PRINT_FILED_DELAY("THRASHING", t->version, t,
-			  thrashing_count, thrashing_delay_total,
-			  thrashing_delay_max, thrashing_delay_min);
+		PRINT_FILED_DELAY_WITH_TS("THRASHING", t->version, t,
+				thrashing_count, thrashing_delay_total,
+				thrashing_delay_max, thrashing_delay_min, thrashing_delay_max_ts);

-	if (t->version >= 11) {
-		PRINT_FILED_DELAY("COMPACT", t->version, t,
-				  compact_count, compact_delay_total,
-				  compact_delay_max, compact_delay_min);
-	}
+		if (t->version >= 11) {
+			PRINT_FILED_DELAY_WITH_TS("COMPACT", t->version, t,
+					compact_count, compact_delay_total,
+					compact_delay_max, compact_delay_min, compact_delay_max_ts);
+		}

-	if (t->version >= 13) {
-		PRINT_FILED_DELAY("WPCOPY", t->version, t,
-				  wpcopy_count, wpcopy_delay_total,
-				  wpcopy_delay_max, wpcopy_delay_min);
-	}
+		if (t->version >= 13) {
+			PRINT_FILED_DELAY_WITH_TS("WPCOPY", t->version, t,
+					wpcopy_count, wpcopy_delay_total,
+					wpcopy_delay_max, wpcopy_delay_min, wpcopy_delay_max_ts);
+		}

-	if (t->version >= 14) {
-		PRINT_FILED_DELAY("IRQ", t->version, t,
-				  irq_count, irq_delay_total,
-				  irq_delay_max, irq_delay_min);
+		if (t->version >= 14) {
+			PRINT_FILED_DELAY_WITH_TS("IRQ", t->version, t,
+					irq_count, irq_delay_total,
+					irq_delay_max, irq_delay_min, irq_delay_max_ts);
+		}
+	} else {
+		/* Use original macro for older versions */
+		PRINT_FILED_DELAY("IO", t->version, t,
+				blkio_count, blkio_delay_total,
+				blkio_delay_max, blkio_delay_min);
+
+		PRINT_FILED_DELAY("SWAP", t->version, t,
+				swapin_count, swapin_delay_total,
+				swapin_delay_max, swapin_delay_min);
+
+		PRINT_FILED_DELAY("RECLAIM", t->version, t,
+				freepages_count, freepages_delay_total,
+				freepages_delay_max, freepages_delay_min);
+
+		PRINT_FILED_DELAY("THRASHING", t->version, t,
+				thrashing_count, thrashing_delay_total,
+				thrashing_delay_max, thrashing_delay_min);
+
+		if (t->version >= 11) {
+			PRINT_FILED_DELAY("COMPACT", t->version, t,
+					compact_count, compact_delay_total,
+					compact_delay_max, compact_delay_min);
+		}
+
+		if (t->version >= 13) {
+			PRINT_FILED_DELAY("WPCOPY", t->version, t,
+					wpcopy_count, wpcopy_delay_total,
+					wpcopy_delay_max, wpcopy_delay_min);
+		}
+
+		if (t->version >= 14) {
+			PRINT_FILED_DELAY("IRQ", t->version, t,
+					irq_count, irq_delay_total,
+					irq_delay_max, irq_delay_min);
+		}
 	}
 }
+17 -1
tools/debugging/kernel-chktaint
 	addout "J"
 	echo " * fwctl's mutating debug interface was used (#19)"
 fi
+echo "Raw taint value as int/string: $taint/'$out'"

+# report on any tainted loadable modules
+[ "$1" = "" ] && [ -r /sys/module/ ] && \
+	cnt=`grep [A-Z] /sys/module/*/taint | wc -l` || cnt=0
+
+if [ $cnt -ne 0 ]; then
+	echo
+	echo "Tainted modules:"
+	for dir in `ls /sys/module` ; do
+		if [ -r /sys/module/$dir/taint ]; then
+			modtnt=`cat /sys/module/$dir/taint`
+			[ "$modtnt" = "" ] || echo " * $dir ($modtnt)"
+		fi
+	done
+fi
+
+echo
 echo "For a more detailed explanation of the various taint flags see"
 echo " Documentation/admin-guide/tainted-kernels.rst in the Linux kernel sources"
 echo " or https://kernel.org/doc/html/latest/admin-guide/tainted-kernels.html"
-echo "Raw taint value as int/string: $taint/'$out'"
 #EOF#
+1 -1
tools/testing/selftests/bpf/config
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
-CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=1
 CONFIG_BPF=y
 CONFIG_BPF_EVENTS=y
 CONFIG_BPF_JIT=y
+1 -1
tools/testing/selftests/wireguard/qemu/kernel.config
 CONFIG_WQ_WATCHDOG=y
 CONFIG_DETECT_HUNG_TASK=y
 CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
-CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
+CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=1
 CONFIG_BOOTPARAM_HUNG_TASK_PANIC=1
 CONFIG_PANIC_TIMEOUT=-1
 CONFIG_STACKTRACE=y