Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'hardening-v6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull kernel hardening updates from Kees Cook:
"Most of the collected changes here are fixes across the tree for
various hardening features (details noted below).

The most notable new feature here is the addition of the memcpy()
overflow warning (under CONFIG_FORTIFY_SOURCE), which is the next step
on the path to killing the common class of "trivially detectable"
buffer overflow conditions (i.e. on arrays with sizes known at compile
time) that have resulted in many exploitable vulnerabilities over the
years (e.g. BleedingTooth).

This feature is expected to still have some undiscovered false
positives. It's been in -next for a full development cycle and all the
reported false positives have been fixed in their respective trees.
All the known-bad code patterns we could find with Coccinelle are also
either fixed in their respective trees or in flight.

The commit message in commit 54d9469bc515 ("fortify: Add run-time WARN
for cross-field memcpy()") for the feature has extensive details, but
I'll repeat here that this is a warning _only_, and is not intended to
actually block overflows (yet). The many patches fixing array sizes
and struct members have been landing for several years now, and we're
finally able to turn this on to find any remaining stragglers.

Summary:

Various fixes across several hardening areas:

- loadpin: Fix verity target enforcement (Matthias Kaehlcke).

- zero-call-used-regs: Add missing clobbers in paravirt (Bill
Wendling).

- CFI: Clean up sparc function pointer type mismatches (Bart Van
Assche).

- Clang: Adjust compiler flag detection for various Clang changes
(Sami Tolvanen, Kees Cook).

- fortify: Fix warnings in arch-specific code in sh, ARM, and xen.

Improvements to existing features:

- testing: Improve overflow KUnit test, introduce fortify KUnit test,
add more coverage to LKDTM tests (Bart Van Assche, Kees Cook).

- overflow: Relax overflow type checking for wider utility.

New features:

- string: Introduce strtomem() and strtomem_pad() to fill a gap in
strncpy() replacement needs.

- um: Enable FORTIFY_SOURCE support.

- fortify: Enable run-time struct member memcpy() overflow warning"

* tag 'hardening-v6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (27 commits)
Makefile.extrawarn: Move -Wcast-function-type-strict to W=1
hardening: Remove Clang's enable flag for -ftrivial-auto-var-init=zero
sparc: Unbreak the build
x86/paravirt: add extra clobbers with ZERO_CALL_USED_REGS enabled
x86/paravirt: clean up typos and grammaros
fortify: Convert to struct vs member helpers
fortify: Explicitly check bounds are compile-time constants
x86/entry: Work around Clang __bdos() bug
ARM: decompressor: Include .data.rel.ro.local
fortify: Adjust KUnit test for modular build
sh: machvec: Use char[] for section boundaries
kunit/memcpy: Avoid pathological compile-time string size
lib: Improve the is_signed_type() kunit test
LoadPin: Require file with verity root digests to have a header
dm: verity-loadpin: Only trust verity targets with enforcement
LoadPin: Fix Kconfig doc about format of file with verity digests
um: Enable FORTIFY_SOURCE
lkdtm: Update tests for memcpy() run-time warnings
fortify: Add run-time WARN for cross-field memcpy()
fortify: Use SIZE_MAX instead of (size_t)-1
...

+815 -240
+7 -4
Documentation/process/deprecated.rst
··· 138 138 other misbehavior due to the missing termination. It also NUL-pads 139 139 the destination buffer if the source contents are shorter than the 140 140 destination buffer size, which may be a needless performance penalty 141 - for callers using only NUL-terminated strings. The safe replacement is 141 + for callers using only NUL-terminated strings. 142 + 143 + When the destination is required to be NUL-terminated, the replacement is 142 144 strscpy(), though care must be given to any cases where the return value 143 145 of strncpy() was used, since strscpy() does not return a pointer to the 144 146 destination, but rather a count of non-NUL bytes copied (or negative 145 147 errno when it truncates). Any cases still needing NUL-padding should 146 148 instead use strscpy_pad(). 147 149 148 - If a caller is using non-NUL-terminated strings, strncpy() can 149 - still be used, but destinations should be marked with the `__nonstring 150 + If a caller is using non-NUL-terminated strings, strtomem() should be 151 + used, and the destinations should be marked with the `__nonstring 150 152 <https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html>`_ 151 - attribute to avoid future compiler warnings. 153 + attribute to avoid future compiler warnings. For cases still needing 154 + NUL-padding, strtomem_pad() can be used. 152 155 153 156 strlcpy() 154 157 ---------
+1
MAINTAINERS
··· 8001 8001 S: Supported 8002 8002 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening 8003 8003 F: include/linux/fortify-string.h 8004 + F: lib/fortify_kunit.c 8004 8005 F: lib/test_fortify/* 8005 8006 F: scripts/test_fortify.sh 8006 8007 K: \b__NO_FORTIFY\b
+2 -2
Makefile
··· 909 909 # Initialize all stack variables with a zero value. 910 910 ifdef CONFIG_INIT_STACK_ALL_ZERO 911 911 KBUILD_CFLAGS += -ftrivial-auto-var-init=zero 912 - ifdef CONFIG_CC_IS_CLANG 913 - # https://bugs.llvm.org/show_bug.cgi?id=45497 912 + ifdef CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER 913 + # https://github.com/llvm/llvm-project/issues/44842 914 914 KBUILD_CFLAGS += -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang 915 915 endif 916 916 endif
+2
arch/arm/boot/compressed/vmlinux.lds.S
··· 23 23 *(.ARM.extab*) 24 24 *(.note.*) 25 25 *(.rel.*) 26 + *(.printk_index) 26 27 /* 27 28 * Discard any r/w data - this produces a link error if we have any, 28 29 * which is required for PIC decompression. Local data generates ··· 58 57 *(.rodata) 59 58 *(.rodata.*) 60 59 *(.data.rel.ro) 60 + *(.data.rel.ro.*) 61 61 } 62 62 .piggydata : { 63 63 *(.piggydata)
+1 -1
arch/sh/include/asm/sections.h
··· 4 4 5 5 #include <asm-generic/sections.h> 6 6 7 - extern long __machvec_start, __machvec_end; 7 + extern char __machvec_start[], __machvec_end[]; 8 8 extern char __uncached_start, __uncached_end; 9 9 extern char __start_eh_frame[], __stop_eh_frame[]; 10 10
+5 -5
arch/sh/kernel/machvec.c
··· 20 20 #define MV_NAME_SIZE 32 21 21 22 22 #define for_each_mv(mv) \ 23 - for ((mv) = (struct sh_machine_vector *)&__machvec_start; \ 24 - (mv) && (unsigned long)(mv) < (unsigned long)&__machvec_end; \ 23 + for ((mv) = (struct sh_machine_vector *)__machvec_start; \ 24 + (mv) && (unsigned long)(mv) < (unsigned long)__machvec_end; \ 25 25 (mv)++) 26 26 27 27 static struct sh_machine_vector * __init get_mv_byname(const char *name) ··· 87 87 if (!machvec_selected) { 88 88 unsigned long machvec_size; 89 89 90 - machvec_size = ((unsigned long)&__machvec_end - 91 - (unsigned long)&__machvec_start); 90 + machvec_size = ((unsigned long)__machvec_end - 91 + (unsigned long)__machvec_start); 92 92 93 93 /* 94 94 * Sanity check for machvec section alignment. Ensure ··· 102 102 * vector (usually the only one) from .machvec.init. 103 103 */ 104 104 if (machvec_size >= sizeof(struct sh_machine_vector)) 105 - sh_mv = *(struct sh_machine_vector *)&__machvec_start; 105 + sh_mv = *(struct sh_machine_vector *)__machvec_start; 106 106 } 107 107 108 108 pr_notice("Booting machvec: %s\n", get_system_type());
+6 -9
arch/sparc/include/asm/smp_32.h
··· 33 33 extern cpumask_t smp_commenced_mask; 34 34 extern struct linux_prom_registers smp_penguin_ctable; 35 35 36 - typedef void (*smpfunc_t)(unsigned long, unsigned long, unsigned long, 37 - unsigned long, unsigned long); 38 - 39 36 void cpu_panic(void); 40 37 41 38 /* ··· 54 57 void smp_info(struct seq_file *); 55 58 56 59 struct sparc32_ipi_ops { 57 - void (*cross_call)(smpfunc_t func, cpumask_t mask, unsigned long arg1, 60 + void (*cross_call)(void *func, cpumask_t mask, unsigned long arg1, 58 61 unsigned long arg2, unsigned long arg3, 59 62 unsigned long arg4); 60 63 void (*resched)(int cpu); ··· 63 66 }; 64 67 extern const struct sparc32_ipi_ops *sparc32_ipi_ops; 65 68 66 - static inline void xc0(smpfunc_t func) 69 + static inline void xc0(void *func) 67 70 { 68 71 sparc32_ipi_ops->cross_call(func, *cpu_online_mask, 0, 0, 0, 0); 69 72 } 70 73 71 - static inline void xc1(smpfunc_t func, unsigned long arg1) 74 + static inline void xc1(void *func, unsigned long arg1) 72 75 { 73 76 sparc32_ipi_ops->cross_call(func, *cpu_online_mask, arg1, 0, 0, 0); 74 77 } 75 - static inline void xc2(smpfunc_t func, unsigned long arg1, unsigned long arg2) 78 + static inline void xc2(void *func, unsigned long arg1, unsigned long arg2) 76 79 { 77 80 sparc32_ipi_ops->cross_call(func, *cpu_online_mask, arg1, arg2, 0, 0); 78 81 } 79 82 80 - static inline void xc3(smpfunc_t func, unsigned long arg1, unsigned long arg2, 83 + static inline void xc3(void *func, unsigned long arg1, unsigned long arg2, 81 84 unsigned long arg3) 82 85 { 83 86 sparc32_ipi_ops->cross_call(func, *cpu_online_mask, 84 87 arg1, arg2, arg3, 0); 85 88 } 86 89 87 - static inline void xc4(smpfunc_t func, unsigned long arg1, unsigned long arg2, 90 + static inline void xc4(void *func, unsigned long arg1, unsigned long arg2, 88 91 unsigned long arg3, unsigned long arg4) 89 92 { 90 93 sparc32_ipi_ops->cross_call(func, *cpu_online_mask,
+7 -5
arch/sparc/kernel/leon_smp.c
··· 359 359 } 360 360 361 361 static struct smp_funcall { 362 - smpfunc_t func; 362 + void *func; 363 363 unsigned long arg1; 364 364 unsigned long arg2; 365 365 unsigned long arg3; ··· 372 372 static DEFINE_SPINLOCK(cross_call_lock); 373 373 374 374 /* Cross calls must be serialized, at least currently. */ 375 - static void leon_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1, 375 + static void leon_cross_call(void *func, cpumask_t mask, unsigned long arg1, 376 376 unsigned long arg2, unsigned long arg3, 377 377 unsigned long arg4) 378 378 { ··· 384 384 385 385 { 386 386 /* If you make changes here, make sure gcc generates proper code... */ 387 - register smpfunc_t f asm("i0") = func; 387 + register void *f asm("i0") = func; 388 388 register unsigned long a1 asm("i1") = arg1; 389 389 register unsigned long a2 asm("i2") = arg2; 390 390 register unsigned long a3 asm("i3") = arg3; ··· 444 444 /* Running cross calls. */ 445 445 void leon_cross_call_irq(void) 446 446 { 447 + void (*func)(unsigned long, unsigned long, unsigned long, unsigned long, 448 + unsigned long) = ccall_info.func; 447 449 int i = smp_processor_id(); 448 450 449 451 ccall_info.processors_in[i] = 1; 450 - ccall_info.func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, 451 - ccall_info.arg4, ccall_info.arg5); 452 + func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, ccall_info.arg4, 453 + ccall_info.arg5); 452 454 ccall_info.processors_out[i] = 1; 453 455 } 454 456
+7 -5
arch/sparc/kernel/sun4d_smp.c
··· 268 268 } 269 269 270 270 static struct smp_funcall { 271 - smpfunc_t func; 271 + void *func; 272 272 unsigned long arg1; 273 273 unsigned long arg2; 274 274 unsigned long arg3; ··· 281 281 static DEFINE_SPINLOCK(cross_call_lock); 282 282 283 283 /* Cross calls must be serialized, at least currently. */ 284 - static void sun4d_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1, 284 + static void sun4d_cross_call(void *func, cpumask_t mask, unsigned long arg1, 285 285 unsigned long arg2, unsigned long arg3, 286 286 unsigned long arg4) 287 287 { ··· 296 296 * If you make changes here, make sure 297 297 * gcc generates proper code... 298 298 */ 299 - register smpfunc_t f asm("i0") = func; 299 + register void *f asm("i0") = func; 300 300 register unsigned long a1 asm("i1") = arg1; 301 301 register unsigned long a2 asm("i2") = arg2; 302 302 register unsigned long a3 asm("i3") = arg3; ··· 353 353 /* Running cross calls. */ 354 354 void smp4d_cross_call_irq(void) 355 355 { 356 + void (*func)(unsigned long, unsigned long, unsigned long, unsigned long, 357 + unsigned long) = ccall_info.func; 356 358 int i = hard_smp_processor_id(); 357 359 358 360 ccall_info.processors_in[i] = 1; 359 - ccall_info.func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, 360 - ccall_info.arg4, ccall_info.arg5); 361 + func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, ccall_info.arg4, 362 + ccall_info.arg5); 361 363 ccall_info.processors_out[i] = 1; 362 364 } 363 365
+6 -4
arch/sparc/kernel/sun4m_smp.c
··· 157 157 } 158 158 159 159 static struct smp_funcall { 160 - smpfunc_t func; 160 + void *func; 161 161 unsigned long arg1; 162 162 unsigned long arg2; 163 163 unsigned long arg3; ··· 170 170 static DEFINE_SPINLOCK(cross_call_lock); 171 171 172 172 /* Cross calls must be serialized, at least currently. */ 173 - static void sun4m_cross_call(smpfunc_t func, cpumask_t mask, unsigned long arg1, 173 + static void sun4m_cross_call(void *func, cpumask_t mask, unsigned long arg1, 174 174 unsigned long arg2, unsigned long arg3, 175 175 unsigned long arg4) 176 176 { ··· 230 230 /* Running cross calls. */ 231 231 void smp4m_cross_call_irq(void) 232 232 { 233 + void (*func)(unsigned long, unsigned long, unsigned long, unsigned long, 234 + unsigned long) = ccall_info.func; 233 235 int i = smp_processor_id(); 234 236 235 237 ccall_info.processors_in[i] = 1; 236 - ccall_info.func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, 237 - ccall_info.arg4, ccall_info.arg5); 238 + func(ccall_info.arg1, ccall_info.arg2, ccall_info.arg3, ccall_info.arg4, 239 + ccall_info.arg5); 238 240 ccall_info.processors_out[i] = 1; 239 241 } 240 242
+13 -16
arch/sparc/mm/srmmu.c
··· 1636 1636 /* Local cross-calls. */ 1637 1637 static void smp_flush_page_for_dma(unsigned long page) 1638 1638 { 1639 - xc1((smpfunc_t) local_ops->page_for_dma, page); 1639 + xc1(local_ops->page_for_dma, page); 1640 1640 local_ops->page_for_dma(page); 1641 1641 } 1642 1642 1643 1643 static void smp_flush_cache_all(void) 1644 1644 { 1645 - xc0((smpfunc_t) local_ops->cache_all); 1645 + xc0(local_ops->cache_all); 1646 1646 local_ops->cache_all(); 1647 1647 } 1648 1648 1649 1649 static void smp_flush_tlb_all(void) 1650 1650 { 1651 - xc0((smpfunc_t) local_ops->tlb_all); 1651 + xc0(local_ops->tlb_all); 1652 1652 local_ops->tlb_all(); 1653 1653 } 1654 1654 ··· 1659 1659 cpumask_copy(&cpu_mask, mm_cpumask(mm)); 1660 1660 cpumask_clear_cpu(smp_processor_id(), &cpu_mask); 1661 1661 if (!cpumask_empty(&cpu_mask)) 1662 - xc1((smpfunc_t) local_ops->cache_mm, (unsigned long) mm); 1662 + xc1(local_ops->cache_mm, (unsigned long)mm); 1663 1663 local_ops->cache_mm(mm); 1664 1664 } 1665 1665 } ··· 1671 1671 cpumask_copy(&cpu_mask, mm_cpumask(mm)); 1672 1672 cpumask_clear_cpu(smp_processor_id(), &cpu_mask); 1673 1673 if (!cpumask_empty(&cpu_mask)) { 1674 - xc1((smpfunc_t) local_ops->tlb_mm, (unsigned long) mm); 1674 + xc1(local_ops->tlb_mm, (unsigned long)mm); 1675 1675 if (atomic_read(&mm->mm_users) == 1 && current->active_mm == mm) 1676 1676 cpumask_copy(mm_cpumask(mm), 1677 1677 cpumask_of(smp_processor_id())); ··· 1691 1691 cpumask_copy(&cpu_mask, mm_cpumask(mm)); 1692 1692 cpumask_clear_cpu(smp_processor_id(), &cpu_mask); 1693 1693 if (!cpumask_empty(&cpu_mask)) 1694 - xc3((smpfunc_t) local_ops->cache_range, 1695 - (unsigned long) vma, start, end); 1694 + xc3(local_ops->cache_range, (unsigned long)vma, start, 1695 + end); 1696 1696 local_ops->cache_range(vma, start, end); 1697 1697 } 1698 1698 } ··· 1708 1708 cpumask_copy(&cpu_mask, mm_cpumask(mm)); 1709 1709 cpumask_clear_cpu(smp_processor_id(), &cpu_mask); 1710 1710 if (!cpumask_empty(&cpu_mask)) 1711 - xc3((smpfunc_t) local_ops->tlb_range, 1712 - (unsigned long) vma, start, end); 1711 + xc3(local_ops->tlb_range, (unsigned long)vma, start, 1712 + end); 1713 1713 local_ops->tlb_range(vma, start, end); 1714 1714 } 1715 1715 } ··· 1723 1723 cpumask_copy(&cpu_mask, mm_cpumask(mm)); 1724 1724 cpumask_clear_cpu(smp_processor_id(), &cpu_mask); 1725 1725 if (!cpumask_empty(&cpu_mask)) 1726 - xc2((smpfunc_t) local_ops->cache_page, 1727 - (unsigned long) vma, page); 1726 + xc2(local_ops->cache_page, (unsigned long)vma, page); 1728 1727 local_ops->cache_page(vma, page); 1729 1728 } 1730 1729 } ··· 1737 1738 cpumask_copy(&cpu_mask, mm_cpumask(mm)); 1738 1739 cpumask_clear_cpu(smp_processor_id(), &cpu_mask); 1739 1740 if (!cpumask_empty(&cpu_mask)) 1740 - xc2((smpfunc_t) local_ops->tlb_page, 1741 - (unsigned long) vma, page); 1741 + xc2(local_ops->tlb_page, (unsigned long)vma, page); 1742 1742 local_ops->tlb_page(vma, page); 1743 1743 } 1744 1744 } ··· 1751 1753 * XXX This experiment failed, research further... -DaveM 1752 1754 */ 1753 1755 #if 1 1754 - xc1((smpfunc_t) local_ops->page_to_ram, page); 1756 + xc1(local_ops->page_to_ram, page); 1755 1757 #endif 1756 1758 local_ops->page_to_ram(page); 1757 1759 } ··· 1762 1764 cpumask_copy(&cpu_mask, mm_cpumask(mm)); 1763 1765 cpumask_clear_cpu(smp_processor_id(), &cpu_mask); 1764 1766 if (!cpumask_empty(&cpu_mask)) 1765 - xc2((smpfunc_t) local_ops->sig_insns, 1766 - (unsigned long) mm, insn_addr); 1767 + xc2(local_ops->sig_insns, (unsigned long)mm, insn_addr); 1767 1768 local_ops->sig_insns(mm, insn_addr); 1768 1769 1769 1770
+1
arch/um/Kconfig
··· 6 6 bool 7 7 default y 8 8 select ARCH_EPHEMERAL_INODES 9 + select ARCH_HAS_FORTIFY_SOURCE 9 10 select ARCH_HAS_GCOV_PROFILE_ALL 10 11 select ARCH_HAS_KCOV 11 12 select ARCH_HAS_STRNCPY_FROM_USER
+1
arch/um/os-Linux/user_syms.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 + #define __NO_FORTIFY 2 3 #include <linux/types.h> 3 4 #include <linux/module.h> 4 5
+18 -9
arch/x86/include/asm/paravirt_types.h
··· 328 328 * Unfortunately, this is a relatively slow operation for modern CPUs, 329 329 * because it cannot necessarily determine what the destination 330 330 * address is. In this case, the address is a runtime constant, so at 331 - * the very least we can patch the call to e a simple direct call, or 331 + * the very least we can patch the call to a simple direct call, or, 332 332 * ideally, patch an inline implementation into the callsite. (Direct 333 333 * calls are essentially free, because the call and return addresses 334 334 * are completely predictable.) ··· 339 339 * on the stack. All caller-save registers (eax,edx,ecx) are expected 340 340 * to be modified (either clobbered or used for return values). 341 341 * X86_64, on the other hand, already specifies a register-based calling 342 - * conventions, returning at %rax, with parameters going on %rdi, %rsi, 342 + * conventions, returning at %rax, with parameters going in %rdi, %rsi, 343 343 * %rdx, and %rcx. Note that for this reason, x86_64 does not need any 344 344 * special handling for dealing with 4 arguments, unlike i386. 345 - * However, x86_64 also have to clobber all caller saved registers, which 345 + * However, x86_64 also has to clobber all caller saved registers, which 346 346 * unfortunately, are quite a bit (r8 - r11) 347 347 * 348 348 * The call instruction itself is marked by placing its start address ··· 360 360 * There are 5 sets of PVOP_* macros for dealing with 0-4 arguments. 361 361 * It could be extended to more arguments, but there would be little 362 362 * to be gained from that. For each number of arguments, there are 363 - * the two VCALL and CALL variants for void and non-void functions. 363 + * two VCALL and CALL variants for void and non-void functions. 364 364 * 365 365 * When there is a return value, the invoker of the macro must specify 366 366 * the return type. 
The macro then uses sizeof() on that type to 367 - * determine whether its a 32 or 64 bit value, and places the return 367 + * determine whether it's a 32 or 64 bit value and places the return 368 368 * in the right register(s) (just %eax for 32-bit, and %edx:%eax for 369 - * 64-bit). For x86_64 machines, it just returns at %rax regardless of 369 + * 64-bit). For x86_64 machines, it just returns in %rax regardless of 370 370 * the return value size. 371 371 * 372 - * 64-bit arguments are passed as a pair of adjacent 32-bit arguments 372 + * 64-bit arguments are passed as a pair of adjacent 32-bit arguments; 373 373 * i386 also passes 64-bit arguments as a pair of adjacent 32-bit arguments 374 374 * in low,high order 375 375 * 376 376 * Small structures are passed and returned in registers. The macro 377 377 * calling convention can't directly deal with this, so the wrapper 378 - * functions must do this. 378 + * functions must do it. 379 379 * 380 380 * These PVOP_* macros are only defined within this header. This 381 381 * means that all uses must be wrapped in inline functions. This also ··· 414 414 "=c" (__ecx) 415 415 #define PVOP_CALL_CLOBBERS PVOP_VCALL_CLOBBERS, "=a" (__eax) 416 416 417 - /* void functions are still allowed [re]ax for scratch */ 417 + /* 418 + * void functions are still allowed [re]ax for scratch. 419 + * 420 + * The ZERO_CALL_USED REGS feature may end up zeroing out callee-saved 421 + * registers. Make sure we model this with the appropriate clobbers. 422 + */ 423 + #ifdef CONFIG_ZERO_CALL_USED_REGS 424 + #define PVOP_VCALLEE_CLOBBERS "=a" (__eax), PVOP_VCALL_CLOBBERS 425 + #else 418 426 #define PVOP_VCALLEE_CLOBBERS "=a" (__eax) 427 + #endif 419 428 #define PVOP_CALLEE_CLOBBERS PVOP_VCALLEE_CLOBBERS 420 429 421 430 #define EXTRA_CLOBBERS , "r8", "r9", "r10", "r11"
+2 -1
arch/x86/xen/enlighten_pv.c
··· 765 765 { 766 766 static DEFINE_SPINLOCK(lock); 767 767 static struct trap_info traps[257]; 768 + static const struct trap_info zero = { }; 768 769 unsigned out; 769 770 770 771 trace_xen_cpu_load_idt(desc); ··· 775 774 memcpy(this_cpu_ptr(&idt_desc), desc, sizeof(idt_desc)); 776 775 777 776 out = xen_convert_trap_info(desc, traps, false); 778 - memset(&traps[out], 0, sizeof(traps[0])); 777 + traps[out] = zero; 779 778 780 779 xen_mc_flush(); 781 780 if (HYPERVISOR_set_trap_table(traps))
+8
drivers/md/dm-verity-loadpin.c
··· 14 14 15 15 static bool is_trusted_verity_target(struct dm_target *ti) 16 16 { 17 + int verity_mode; 17 18 u8 *root_digest; 18 19 unsigned int digest_size; 19 20 struct dm_verity_loadpin_trusted_root_digest *trd; 20 21 bool trusted = false; 21 22 22 23 if (!dm_is_verity_target(ti)) 24 + return false; 25 + 26 + verity_mode = dm_verity_get_mode(ti); 27 + 28 + if ((verity_mode != DM_VERITY_MODE_EIO) && 29 + (verity_mode != DM_VERITY_MODE_RESTART) && 30 + (verity_mode != DM_VERITY_MODE_PANIC)) 23 31 return false; 24 32 25 33 if (dm_verity_get_root_digest(ti, &root_digest, &digest_size))
+16
drivers/md/dm-verity-target.c
··· 1447 1447 } 1448 1448 1449 1449 /* 1450 + * Get the verity mode (error behavior) of a verity target. 1451 + * 1452 + * Returns the verity mode of the target, or -EINVAL if 'ti' is not a verity 1453 + * target. 1454 + */ 1455 + int dm_verity_get_mode(struct dm_target *ti) 1456 + { 1457 + struct dm_verity *v = ti->private; 1458 + 1459 + if (!dm_is_verity_target(ti)) 1460 + return -EINVAL; 1461 + 1462 + return v->mode; 1463 + } 1464 + 1465 + /* 1450 1466 * Get the root digest of a verity target. 1451 1467 * 1452 1468 * Returns a copy of the root digest, the caller is responsible for
+1
drivers/md/dm-verity.h
··· 134 134 sector_t block, u8 *digest, bool *is_zero); 135 135 136 136 extern bool dm_is_verity_target(struct dm_target *ti); 137 + extern int dm_verity_get_mode(struct dm_target *ti); 137 138 extern int dm_verity_get_root_digest(struct dm_target *ti, u8 **root_digest, 138 139 unsigned int *digest_size); 139 140
+83 -13
drivers/misc/lkdtm/fortify.c
··· 10 10 11 11 static volatile int fortify_scratch_space; 12 12 13 - static void lkdtm_FORTIFIED_OBJECT(void) 13 + static void lkdtm_FORTIFY_STR_OBJECT(void) 14 14 { 15 15 struct target { 16 16 char a[10]; 17 - } target[2] = {}; 17 + int foo; 18 + } target[3] = {}; 18 19 /* 19 20 * Using volatile prevents the compiler from determining the value of 20 21 * 'size' at compile time. Without that, we would get a compile error 21 22 * rather than a runtime error. 22 23 */ 23 - volatile int size = 11; 24 + volatile int size = 20; 24 25 25 - pr_info("trying to read past the end of a struct\n"); 26 + pr_info("trying to strcmp() past the end of a struct\n"); 27 + 28 + strncpy(target[0].a, target[1].a, size); 26 29 27 30 /* Store result to global to prevent the code from being eliminated */ 28 - fortify_scratch_space = memcmp(&target[0], &target[1], size); 31 + fortify_scratch_space = target[0].a[3]; 29 32 30 - pr_err("FAIL: fortify did not block an object overread!\n"); 33 + pr_err("FAIL: fortify did not block a strncpy() object write overflow!\n"); 31 34 pr_expected_config(CONFIG_FORTIFY_SOURCE); 32 35 } 33 36 34 - static void lkdtm_FORTIFIED_SUBOBJECT(void) 37 + static void lkdtm_FORTIFY_STR_MEMBER(void) 35 38 { 36 39 struct target { 37 40 char a[10]; ··· 47 44 strscpy(src, "over ten bytes", size); 48 45 size = strlen(src) + 1; 49 46 50 - pr_info("trying to strncpy past the end of a member of a struct\n"); 47 + pr_info("trying to strncpy() past the end of a struct member...\n"); 51 48 52 49 /* 53 50 * strncpy(target.a, src, 20); will hit a compile error because the ··· 59 56 /* Store result to global to prevent the code from being eliminated */ 60 57 fortify_scratch_space = target.a[3]; 61 58 62 - pr_err("FAIL: fortify did not block an sub-object overrun!\n"); 59 + pr_err("FAIL: fortify did not block a strncpy() struct member write overflow!\n"); 60 + pr_expected_config(CONFIG_FORTIFY_SOURCE); 61 + 62 + kfree(src); 63 + } 64 + 65 + static void 
lkdtm_FORTIFY_MEM_OBJECT(void) 66 + { 67 + int before[10]; 68 + struct target { 69 + char a[10]; 70 + int foo; 71 + } target = {}; 72 + int after[10]; 73 + /* 74 + * Using volatile prevents the compiler from determining the value of 75 + * 'size' at compile time. Without that, we would get a compile error 76 + * rather than a runtime error. 77 + */ 78 + volatile int size = 20; 79 + 80 + memset(before, 0, sizeof(before)); 81 + memset(after, 0, sizeof(after)); 82 + fortify_scratch_space = before[5]; 83 + fortify_scratch_space = after[5]; 84 + 85 + pr_info("trying to memcpy() past the end of a struct\n"); 86 + 87 + pr_info("0: %zu\n", __builtin_object_size(&target, 0)); 88 + pr_info("1: %zu\n", __builtin_object_size(&target, 1)); 89 + pr_info("s: %d\n", size); 90 + memcpy(&target, &before, size); 91 + 92 + /* Store result to global to prevent the code from being eliminated */ 93 + fortify_scratch_space = target.a[3]; 94 + 95 + pr_err("FAIL: fortify did not block a memcpy() object write overflow!\n"); 96 + pr_expected_config(CONFIG_FORTIFY_SOURCE); 97 + } 98 + 99 + static void lkdtm_FORTIFY_MEM_MEMBER(void) 100 + { 101 + struct target { 102 + char a[10]; 103 + char b[10]; 104 + } target; 105 + volatile int size = 20; 106 + char *src; 107 + 108 + src = kmalloc(size, GFP_KERNEL); 109 + strscpy(src, "over ten bytes", size); 110 + size = strlen(src) + 1; 111 + 112 + pr_info("trying to memcpy() past the end of a struct member...\n"); 113 + 114 + /* 115 + * strncpy(target.a, src, 20); will hit a compile error because the 116 + * compiler knows at build time that target.a < 20 bytes. Use a 117 + * volatile to force a runtime error. 
118 + */ 119 + memcpy(target.a, src, size); 120 + 121 + /* Store result to global to prevent the code from being eliminated */ 122 + fortify_scratch_space = target.a[3]; 123 + 124 + pr_err("FAIL: fortify did not block a memcpy() struct member write overflow!\n"); 63 125 pr_expected_config(CONFIG_FORTIFY_SOURCE); 64 126 65 127 kfree(src); ··· 135 67 * strscpy and generate a panic because there is a write overflow (i.e. src 136 68 * length is greater than dst length). 137 69 */ 138 - static void lkdtm_FORTIFIED_STRSCPY(void) 70 + static void lkdtm_FORTIFY_STRSCPY(void) 139 71 { 140 72 char *src; 141 73 char dst[5]; ··· 204 136 } 205 137 206 138 static struct crashtype crashtypes[] = { 207 - CRASHTYPE(FORTIFIED_OBJECT), 208 - CRASHTYPE(FORTIFIED_SUBOBJECT), 209 - CRASHTYPE(FORTIFIED_STRSCPY), 139 + CRASHTYPE(FORTIFY_STR_OBJECT), 140 + CRASHTYPE(FORTIFY_STR_MEMBER), 141 + CRASHTYPE(FORTIFY_MEM_OBJECT), 142 + CRASHTYPE(FORTIFY_MEM_MEMBER), 143 + CRASHTYPE(FORTIFY_STRSCPY), 210 144 }; 211 145 212 146 struct crashtype_category fortify_crashtypes = {
+178 -67
include/linux/fortify-string.h
··· 2 2 #ifndef _LINUX_FORTIFY_STRING_H_ 3 3 #define _LINUX_FORTIFY_STRING_H_ 4 4 5 + #include <linux/bug.h> 5 6 #include <linux/const.h> 7 + #include <linux/limits.h> 6 8 7 9 #define __FORTIFY_INLINE extern __always_inline __gnu_inline __overloadable 8 10 #define __RENAME(x) __asm__(#x) ··· 19 17 #define __compiletime_strlen(p) \ 20 18 ({ \ 21 19 unsigned char *__p = (unsigned char *)(p); \ 22 - size_t __ret = (size_t)-1; \ 23 - size_t __p_size = __builtin_object_size(p, 1); \ 24 - if (__p_size != (size_t)-1) { \ 20 + size_t __ret = SIZE_MAX; \ 21 + size_t __p_size = __member_size(p); \ 22 + if (__p_size != SIZE_MAX && \ 23 + __builtin_constant_p(*__p)) { \ 25 24 size_t __p_len = __p_size - 1; \ 26 25 if (__builtin_constant_p(__p[__p_len]) && \ 27 26 __p[__p_len] == '\0') \ ··· 72 69 __underlying_memcpy(dst, src, bytes) 73 70 74 71 /* 75 - * Clang's use of __builtin_object_size() within inlines needs hinting via 76 - * __pass_object_size(). The preference is to only ever use type 1 (member 72 + * Clang's use of __builtin_*object_size() within inlines needs hinting via 73 + * __pass_*object_size(). The preference is to only ever use type 1 (member 77 74 * size, rather than struct size), but there remain some stragglers using 78 75 * type 0 that will be converted in the future. 
79 76 */ 80 - #define POS __pass_object_size(1) 81 - #define POS0 __pass_object_size(0) 77 + #define POS __pass_object_size(1) 78 + #define POS0 __pass_object_size(0) 79 + #define __struct_size(p) __builtin_object_size(p, 0) 80 + #define __member_size(p) __builtin_object_size(p, 1) 82 81 82 + #define __compiletime_lessthan(bounds, length) ( \ 83 + __builtin_constant_p((bounds) < (length)) && \ 84 + (bounds) < (length) \ 85 + ) 86 + 87 + /** 88 + * strncpy - Copy a string to memory with non-guaranteed NUL padding 89 + * 90 + * @p: pointer to destination of copy 91 + * @q: pointer to NUL-terminated source string to copy 92 + * @size: bytes to write at @p 93 + * 94 + * If strlen(@q) >= @size, the copy of @q will stop after @size bytes, 95 + * and @p will NOT be NUL-terminated 96 + * 97 + * If strlen(@q) < @size, following the copy of @q, trailing NUL bytes 98 + * will be written to @p until @size total bytes have been written. 99 + * 100 + * Do not use this function. While FORTIFY_SOURCE tries to avoid 101 + * over-reads of @q, it cannot defend against writing unterminated 102 + * results to @p. Using strncpy() remains ambiguous and fragile. 103 + * Instead, please choose an alternative, so that the expectation 104 + * of @p's contents is unambiguous: 105 + * 106 + * +--------------------+-----------------+------------+ 107 + * | @p needs to be: | padded to @size | not padded | 108 + * +====================+=================+============+ 109 + * | NUL-terminated | strscpy_pad() | strscpy() | 110 + * +--------------------+-----------------+------------+ 111 + * | not NUL-terminated | strtomem_pad() | strtomem() | 112 + * +--------------------+-----------------+------------+ 113 + * 114 + * Note strscpy*()'s differing return values for detecting truncation, 115 + * and strtomem*()'s expectation that the destination is marked with 116 + * __nonstring when it is a character array. 
117 + * 118 + */ 83 119 __FORTIFY_INLINE __diagnose_as(__builtin_strncpy, 1, 2, 3) 84 120 char *strncpy(char * const POS p, const char *q, __kernel_size_t size) 85 121 { 86 - size_t p_size = __builtin_object_size(p, 1); 122 + size_t p_size = __member_size(p); 87 123 88 - if (__builtin_constant_p(size) && p_size < size) 124 + if (__compiletime_lessthan(p_size, size)) 89 125 __write_overflow(); 90 126 if (p_size < size) 91 127 fortify_panic(__func__); ··· 134 92 __FORTIFY_INLINE __diagnose_as(__builtin_strcat, 1, 2) 135 93 char *strcat(char * const POS p, const char *q) 136 94 { 137 - size_t p_size = __builtin_object_size(p, 1); 95 + size_t p_size = __member_size(p); 138 96 139 - if (p_size == (size_t)-1) 97 + if (p_size == SIZE_MAX) 140 98 return __underlying_strcat(p, q); 141 99 if (strlcat(p, q, p_size) >= p_size) 142 100 fortify_panic(__func__); ··· 146 104 extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen); 147 105 __FORTIFY_INLINE __kernel_size_t strnlen(const char * const POS p, __kernel_size_t maxlen) 148 106 { 149 - size_t p_size = __builtin_object_size(p, 1); 107 + size_t p_size = __member_size(p); 150 108 size_t p_len = __compiletime_strlen(p); 151 109 size_t ret; 152 110 153 111 /* We can take compile-time actions when maxlen is const. */ 154 - if (__builtin_constant_p(maxlen) && p_len != (size_t)-1) { 112 + if (__builtin_constant_p(maxlen) && p_len != SIZE_MAX) { 155 113 /* If p is const, we can use its compile-time-known len. */ 156 114 if (maxlen >= p_size) 157 115 return p_len; ··· 176 134 __kernel_size_t __fortify_strlen(const char * const POS p) 177 135 { 178 136 __kernel_size_t ret; 179 - size_t p_size = __builtin_object_size(p, 1); 137 + size_t p_size = __member_size(p); 180 138 181 139 /* Give up if we don't know how large p is. 
*/ 182 - if (p_size == (size_t)-1) 140 + if (p_size == SIZE_MAX) 183 141 return __underlying_strlen(p); 184 142 ret = strnlen(p, p_size); 185 143 if (p_size <= ret) ··· 191 149 extern size_t __real_strlcpy(char *, const char *, size_t) __RENAME(strlcpy); 192 150 __FORTIFY_INLINE size_t strlcpy(char * const POS p, const char * const POS q, size_t size) 193 151 { 194 - size_t p_size = __builtin_object_size(p, 1); 195 - size_t q_size = __builtin_object_size(q, 1); 152 + size_t p_size = __member_size(p); 153 + size_t q_size = __member_size(q); 196 154 size_t q_len; /* Full count of source string length. */ 197 155 size_t len; /* Count of characters going into destination. */ 198 156 199 - if (p_size == (size_t)-1 && q_size == (size_t)-1) 157 + if (p_size == SIZE_MAX && q_size == SIZE_MAX) 200 158 return __real_strlcpy(p, q, size); 201 159 q_len = strlen(q); 202 160 len = (q_len >= size) ? size - 1 : q_len; ··· 220 178 { 221 179 size_t len; 222 180 /* Use string size rather than possible enclosing struct size. */ 223 - size_t p_size = __builtin_object_size(p, 1); 224 - size_t q_size = __builtin_object_size(q, 1); 181 + size_t p_size = __member_size(p); 182 + size_t q_size = __member_size(q); 225 183 226 184 /* If we cannot get size of p and q default to call strscpy. */ 227 - if (p_size == (size_t) -1 && q_size == (size_t) -1) 185 + if (p_size == SIZE_MAX && q_size == SIZE_MAX) 228 186 return __real_strscpy(p, q, size); 229 187 230 188 /* 231 189 * If size can be known at compile time and is greater than 232 190 * p_size, generate a compile time write overflow error. 
233 191 */ 234 - if (__builtin_constant_p(size) && size > p_size) 192 + if (__compiletime_lessthan(p_size, size)) 235 193 __write_overflow(); 236 194 237 195 /* ··· 266 224 char *strncat(char * const POS p, const char * const POS q, __kernel_size_t count) 267 225 { 268 226 size_t p_len, copy_len; 269 - size_t p_size = __builtin_object_size(p, 1); 270 - size_t q_size = __builtin_object_size(q, 1); 227 + size_t p_size = __member_size(p); 228 + size_t q_size = __member_size(q); 271 229 272 - if (p_size == (size_t)-1 && q_size == (size_t)-1) 230 + if (p_size == SIZE_MAX && q_size == SIZE_MAX) 273 231 return __underlying_strncat(p, q, count); 274 232 p_len = strlen(p); 275 233 copy_len = strnlen(q, count); ··· 288 246 /* 289 247 * Length argument is a constant expression, so we 290 248 * can perform compile-time bounds checking where 291 - * buffer sizes are known. 249 + * buffer sizes are also known at compile time. 292 250 */ 293 251 294 252 /* Error when size is larger than enclosing struct. */ 295 - if (p_size > p_size_field && p_size < size) 253 + if (__compiletime_lessthan(p_size_field, p_size) && 254 + __compiletime_lessthan(p_size, size)) 296 255 __write_overflow(); 297 256 298 257 /* Warn when write size is larger than dest field. */ 299 - if (p_size_field < size) 258 + if (__compiletime_lessthan(p_size_field, size)) 300 259 __write_overflow_field(p_size_field, size); 301 260 } 302 261 /* ··· 311 268 /* 312 269 * Always stop accesses beyond the struct that contains the 313 270 * field, when the buffer's remaining size is known. 314 - * (The -1 test is to optimize away checks where the buffer 271 + * (The SIZE_MAX test is to optimize away checks where the buffer 315 272 * lengths are unknown.) 
316 273 */ 317 - if (p_size != (size_t)(-1) && p_size < size) 274 + if (p_size != SIZE_MAX && p_size < size) 318 275 fortify_panic("memset"); 319 276 } 320 277 ··· 325 282 }) 326 283 327 284 /* 328 - * __builtin_object_size() must be captured here to avoid evaluating argument 329 - * side-effects further into the macro layers. 285 + * __struct_size() vs __member_size() must be captured here to avoid 286 + * evaluating argument side-effects further into the macro layers. 330 287 */ 331 288 #define memset(p, c, s) __fortify_memset_chk(p, c, s, \ 332 - __builtin_object_size(p, 0), __builtin_object_size(p, 1)) 289 + __struct_size(p), __member_size(p)) 333 290 334 291 /* 335 292 * To make sure the compiler can enforce protection against buffer overflows, ··· 362 319 * V = vulnerable to run-time overflow (will need refactoring to solve) 363 320 * 364 321 */ 365 - __FORTIFY_INLINE void fortify_memcpy_chk(__kernel_size_t size, 322 + __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size, 366 323 const size_t p_size, 367 324 const size_t q_size, 368 325 const size_t p_size_field, ··· 373 330 /* 374 331 * Length argument is a constant expression, so we 375 332 * can perform compile-time bounds checking where 376 - * buffer sizes are known. 333 + * buffer sizes are also known at compile time. 377 334 */ 378 335 379 336 /* Error when size is larger than enclosing struct. */ 380 - if (p_size > p_size_field && p_size < size) 337 + if (__compiletime_lessthan(p_size_field, p_size) && 338 + __compiletime_lessthan(p_size, size)) 381 339 __write_overflow(); 382 - if (q_size > q_size_field && q_size < size) 340 + if (__compiletime_lessthan(q_size_field, q_size) && 341 + __compiletime_lessthan(q_size, size)) 383 342 __read_overflow2(); 384 343 385 344 /* Warn when write size argument larger than dest field. 
*/ 386 - if (p_size_field < size) 345 + if (__compiletime_lessthan(p_size_field, size)) 387 346 __write_overflow_field(p_size_field, size); 388 347 /* 389 348 * Warn for source field over-read when building with W=1 390 349 * or when an over-write happened, so both can be fixed at 391 350 * the same time. 392 351 */ 393 - if ((IS_ENABLED(KBUILD_EXTRA_WARN1) || p_size_field < size) && 394 - q_size_field < size) 352 + if ((IS_ENABLED(KBUILD_EXTRA_WARN1) || 353 + __compiletime_lessthan(p_size_field, size)) && 354 + __compiletime_lessthan(q_size_field, size)) 395 355 __read_overflow2_field(q_size_field, size); 396 356 } 397 357 /* ··· 408 362 /* 409 363 * Always stop accesses beyond the struct that contains the 410 364 * field, when the buffer's remaining size is known. 411 - * (The -1 test is to optimize away checks where the buffer 365 + * (The SIZE_MAX test is to optimize away checks where the buffer 412 366 * lengths are unknown.) 413 367 */ 414 - if ((p_size != (size_t)(-1) && p_size < size) || 415 - (q_size != (size_t)(-1) && q_size < size)) 368 + if ((p_size != SIZE_MAX && p_size < size) || 369 + (q_size != SIZE_MAX && q_size < size)) 416 370 fortify_panic(func); 371 + 372 + /* 373 + * Warn when writing beyond destination field size. 374 + * 375 + * We must ignore p_size_field == 0 for existing 0-element 376 + * fake flexible arrays, until they are all converted to 377 + * proper flexible arrays. 378 + * 379 + * The implementation of __builtin_*object_size() behaves 380 + * like sizeof() when not directly referencing a flexible 381 + * array member, which means there will be many bounds checks 382 + * that will appear at run-time, without a way for them to be 383 + * detected at compile-time (as can be done when the destination 384 + * is specifically the flexible array member). 
385 + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101832 386 + */ 387 + if (p_size_field != 0 && p_size_field != SIZE_MAX && 388 + p_size != p_size_field && p_size_field < size) 389 + return true; 390 + 391 + return false; 417 392 } 418 393 419 394 #define __fortify_memcpy_chk(p, q, size, p_size, q_size, \ 420 395 p_size_field, q_size_field, op) ({ \ 421 396 size_t __fortify_size = (size_t)(size); \ 422 - fortify_memcpy_chk(__fortify_size, p_size, q_size, \ 423 - p_size_field, q_size_field, #op); \ 397 + WARN_ONCE(fortify_memcpy_chk(__fortify_size, p_size, q_size, \ 398 + p_size_field, q_size_field, #op), \ 399 + #op ": detected field-spanning write (size %zu) of single %s (size %zu)\n", \ 400 + __fortify_size, \ 401 + "field \"" #p "\" at " __FILE__ ":" __stringify(__LINE__), \ 402 + p_size_field); \ 424 403 __underlying_##op(p, q, __fortify_size); \ 425 404 }) 426 405 427 406 /* 428 - * __builtin_object_size() must be captured here to avoid evaluating argument 429 - * side-effects further into the macro layers. 407 + * Notes about compile-time buffer size detection: 408 + * 409 + * With these types... 410 + * 411 + * struct middle { 412 + * u16 a; 413 + * u8 middle_buf[16]; 414 + * int b; 415 + * }; 416 + * struct end { 417 + * u16 a; 418 + * u8 end_buf[16]; 419 + * }; 420 + * struct flex { 421 + * int a; 422 + * u8 flex_buf[]; 423 + * }; 424 + * 425 + * void func(TYPE *ptr) { ... 
} 426 + * 427 + * Cases where destination size cannot be currently detected: 428 + * - the size of ptr's object (seemingly by design, gcc & clang fail): 429 + * __builtin_object_size(ptr, 1) == SIZE_MAX 430 + * - the size of flexible arrays in ptr's obj (by design, dynamic size): 431 + * __builtin_object_size(ptr->flex_buf, 1) == SIZE_MAX 432 + * - the size of ANY array at the end of ptr's obj (gcc and clang bug): 433 + * __builtin_object_size(ptr->end_buf, 1) == SIZE_MAX 434 + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101836 435 + * 436 + * Cases where destination size is currently detected: 437 + * - the size of non-array members within ptr's object: 438 + * __builtin_object_size(ptr->a, 1) == 2 439 + * - the size of non-flexible-array in the middle of ptr's obj: 440 + * __builtin_object_size(ptr->middle_buf, 1) == 16 441 + * 442 + */ 443 + 444 + /* 445 + * __struct_size() vs __member_size() must be captured here to avoid 446 + * evaluating argument side-effects further into the macro layers. 
430 447 */ 431 448 #define memcpy(p, q, s) __fortify_memcpy_chk(p, q, s, \ 432 - __builtin_object_size(p, 0), __builtin_object_size(q, 0), \ 433 - __builtin_object_size(p, 1), __builtin_object_size(q, 1), \ 449 + __struct_size(p), __struct_size(q), \ 450 + __member_size(p), __member_size(q), \ 434 451 memcpy) 435 452 #define memmove(p, q, s) __fortify_memcpy_chk(p, q, s, \ 436 - __builtin_object_size(p, 0), __builtin_object_size(q, 0), \ 437 - __builtin_object_size(p, 1), __builtin_object_size(q, 1), \ 453 + __struct_size(p), __struct_size(q), \ 454 + __member_size(p), __member_size(q), \ 438 455 memmove) 439 456 440 457 extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan); 441 458 __FORTIFY_INLINE void *memscan(void * const POS0 p, int c, __kernel_size_t size) 442 459 { 443 - size_t p_size = __builtin_object_size(p, 0); 460 + size_t p_size = __struct_size(p); 444 461 445 - if (__builtin_constant_p(size) && p_size < size) 462 + if (__compiletime_lessthan(p_size, size)) 446 463 __read_overflow(); 447 464 if (p_size < size) 448 465 fortify_panic(__func__); ··· 515 406 __FORTIFY_INLINE __diagnose_as(__builtin_memcmp, 1, 2, 3) 516 407 int memcmp(const void * const POS0 p, const void * const POS0 q, __kernel_size_t size) 517 408 { 518 - size_t p_size = __builtin_object_size(p, 0); 519 - size_t q_size = __builtin_object_size(q, 0); 409 + size_t p_size = __struct_size(p); 410 + size_t q_size = __struct_size(q); 520 411 521 412 if (__builtin_constant_p(size)) { 522 - if (p_size < size) 413 + if (__compiletime_lessthan(p_size, size)) 523 414 __read_overflow(); 524 - if (q_size < size) 415 + if (__compiletime_lessthan(q_size, size)) 525 416 __read_overflow2(); 526 417 } 527 418 if (p_size < size || q_size < size) ··· 532 423 __FORTIFY_INLINE __diagnose_as(__builtin_memchr, 1, 2, 3) 533 424 void *memchr(const void * const POS0 p, int c, __kernel_size_t size) 534 425 { 535 - size_t p_size = __builtin_object_size(p, 0); 426 + size_t p_size = 
__struct_size(p); 536 427 537 - if (__builtin_constant_p(size) && p_size < size) 428 + if (__compiletime_lessthan(p_size, size)) 538 429 __read_overflow(); 539 430 if (p_size < size) 540 431 fortify_panic(__func__); ··· 544 435 void *__real_memchr_inv(const void *s, int c, size_t n) __RENAME(memchr_inv); 545 436 __FORTIFY_INLINE void *memchr_inv(const void * const POS0 p, int c, size_t size) 546 437 { 547 - size_t p_size = __builtin_object_size(p, 0); 438 + size_t p_size = __struct_size(p); 548 439 549 - if (__builtin_constant_p(size) && p_size < size) 440 + if (__compiletime_lessthan(p_size, size)) 550 441 __read_overflow(); 551 442 if (p_size < size) 552 443 fortify_panic(__func__); ··· 556 447 extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup); 557 448 __FORTIFY_INLINE void *kmemdup(const void * const POS0 p, size_t size, gfp_t gfp) 558 449 { 559 - size_t p_size = __builtin_object_size(p, 0); 450 + size_t p_size = __struct_size(p); 560 451 561 - if (__builtin_constant_p(size) && p_size < size) 452 + if (__compiletime_lessthan(p_size, size)) 562 453 __read_overflow(); 563 454 if (p_size < size) 564 455 fortify_panic(__func__); ··· 569 460 __FORTIFY_INLINE __diagnose_as(__builtin_strcpy, 1, 2) 570 461 char *strcpy(char * const POS p, const char * const POS q) 571 462 { 572 - size_t p_size = __builtin_object_size(p, 1); 573 - size_t q_size = __builtin_object_size(q, 1); 463 + size_t p_size = __member_size(p); 464 + size_t q_size = __member_size(q); 574 465 size_t size; 575 466 576 467 /* If neither buffer size is known, immediately give up. */ 577 - if (p_size == (size_t)-1 && q_size == (size_t)-1) 468 + if (__builtin_constant_p(p_size) && 469 + __builtin_constant_p(q_size) && 470 + p_size == SIZE_MAX && q_size == SIZE_MAX) 578 471 return __underlying_strcpy(p, q); 579 472 size = strlen(q) + 1; 580 473 /* Compile-time check for const size overflow. 
*/ 581 - if (__builtin_constant_p(size) && p_size < size) 474 + if (__compiletime_lessthan(p_size, size)) 582 475 __write_overflow(); 583 476 /* Run-time check for dynamic size overflow. */ 584 477 if (p_size < size)
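The `__compiletime_lessthan()` helper introduced in this file only fires when the compiler can prove the comparison at build time; otherwise it evaluates to false and enforcement falls through to the run-time `p_size < size` check. A minimal userspace sketch of the same idiom (macro and function names here are illustrative, not kernel API):

```c
#include <stddef.h>

/* Same shape as the kernel's __compiletime_lessthan(): true only when
 * the comparison is itself a compile-time constant AND it holds. */
#define compiletime_lessthan(bounds, length) ( \
	__builtin_constant_p((bounds) < (length)) && \
	(bounds) < (length) \
)

int const_overflow_detected(void)
{
	/* Both operands constant: the compiler proves 4 < 10 at build time. */
	return compiletime_lessthan(4, 10);
}

int runtime_size_stays_quiet(void)
{
	/* A volatile read can never be constant-folded, so the check is
	 * silently false and the run-time path must catch any overflow. */
	volatile size_t size = 16;

	return compiletime_lessthan(8, size);
}
```

This is why the diff can replace the open-coded `__builtin_constant_p(size) && p_size < size` tests: the macro bundles the constancy test and the comparison into one expression that is safe to use even when neither operand is constant.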
include/linux/overflow.h (+41 -31)
··· 51 51 return unlikely(overflow); 52 52 } 53 53 54 - /* 55 - * For simplicity and code hygiene, the fallback code below insists on 56 - * a, b and *d having the same type (similar to the min() and max() 57 - * macros), whereas gcc's type-generic overflow checkers accept 58 - * different types. Hence we don't just make check_add_overflow an 59 - * alias for __builtin_add_overflow, but add type checks similar to 60 - * below. 54 + /** check_add_overflow() - Calculate addition with overflow checking 55 + * 56 + * @a: first addend 57 + * @b: second addend 58 + * @d: pointer to store sum 59 + * 60 + * Returns 0 on success. 61 + * 62 + * *@d holds the results of the attempted addition, but is not considered 63 + * "safe for use" on a non-zero return value, which indicates that the 64 + * sum has overflowed or been truncated. 61 65 */ 62 - #define check_add_overflow(a, b, d) __must_check_overflow(({ \ 63 - typeof(a) __a = (a); \ 64 - typeof(b) __b = (b); \ 65 - typeof(d) __d = (d); \ 66 - (void) (&__a == &__b); \ 67 - (void) (&__a == __d); \ 68 - __builtin_add_overflow(__a, __b, __d); \ 69 - })) 66 + #define check_add_overflow(a, b, d) \ 67 + __must_check_overflow(__builtin_add_overflow(a, b, d)) 70 68 71 - #define check_sub_overflow(a, b, d) __must_check_overflow(({ \ 72 - typeof(a) __a = (a); \ 73 - typeof(b) __b = (b); \ 74 - typeof(d) __d = (d); \ 75 - (void) (&__a == &__b); \ 76 - (void) (&__a == __d); \ 77 - __builtin_sub_overflow(__a, __b, __d); \ 78 - })) 69 + /** check_sub_overflow() - Calculate subtraction with overflow checking 70 + * 71 + * @a: minuend; value to subtract from 72 + * @b: subtrahend; value to subtract from @a 73 + * @d: pointer to store difference 74 + * 75 + * Returns 0 on success. 76 + * 77 + * *@d holds the results of the attempted subtraction, but is not considered 78 + * "safe for use" on a non-zero return value, which indicates that the 79 + * difference has underflowed or been truncated. 
80 + */ 81 + #define check_sub_overflow(a, b, d) \ 82 + __must_check_overflow(__builtin_sub_overflow(a, b, d)) 79 83 80 - #define check_mul_overflow(a, b, d) __must_check_overflow(({ \ 81 - typeof(a) __a = (a); \ 82 - typeof(b) __b = (b); \ 83 - typeof(d) __d = (d); \ 84 - (void) (&__a == &__b); \ 85 - (void) (&__a == __d); \ 86 - __builtin_mul_overflow(__a, __b, __d); \ 87 - })) 84 + /** check_mul_overflow() - Calculate multiplication with overflow checking 85 + * 86 + * @a: first factor 87 + * @b: second factor 88 + * @d: pointer to store product 89 + * 90 + * Returns 0 on success. 91 + * 92 + * *@d holds the results of the attempted multiplication, but is not 93 + * considered "safe for use" on a non-zero return value, which indicates 94 + * that the product has overflowed or been truncated. 95 + */ 96 + #define check_mul_overflow(a, b, d) \ 97 + __must_check_overflow(__builtin_mul_overflow(a, b, d)) 88 98 89 99 /** check_shl_overflow() - Calculate a left-shifted value and check overflow 90 100 *
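With the `typeof`-based fallback removed, the helpers above become thin wrappers around the type-generic compiler builtins, so operand and destination types may now differ. A userspace sketch of the new behavior (the `__must_check_overflow()` annotation is omitted here):

```c
#include <limits.h>
#include <stdbool.h>

/* Thin wrappers mirroring the simplified kernel macros. */
#define check_add_overflow(a, b, d) __builtin_add_overflow(a, b, d)
#define check_mul_overflow(a, b, d) __builtin_mul_overflow(a, b, d)

bool int_max_plus_one_overflows(int *sum)
{
	/* Returns true (overflow); *sum still holds the wrapped result. */
	return check_add_overflow(INT_MAX, 1, sum);
}

bool product_truncates_uchar(unsigned char *prod)
{
	/* Mixed types are accepted: 20 * 15 = 300 does not fit in an
	 * unsigned char, so this reports overflow and *prod holds the
	 * truncated value (300 % 256 == 44). */
	return check_mul_overflow(20U, 15U, prod);
}
```

As the kernel-doc notes, the destination is written even on failure; the non-zero return is what marks it as not "safe for use".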
include/linux/string.h (+43)
··· 261 261 int pad); 262 262 263 263 /** 264 + * strtomem_pad - Copy NUL-terminated string to non-NUL-terminated buffer 265 + * 266 + * @dest: Pointer of destination character array (marked as __nonstring) 267 + * @src: Pointer to NUL-terminated string 268 + * @pad: Padding character to fill any remaining bytes of @dest after copy 269 + * 270 + * This is a replacement for strncpy() uses where the destination is not 271 + * a NUL-terminated string, but with bounds checking on the source size, and 272 + * an explicit padding character. If padding is not required, use strtomem(). 273 + * 274 + * Note that the size of @dest is not an argument, as the length of @dest 275 + * must be discoverable by the compiler. 276 + */ 277 + #define strtomem_pad(dest, src, pad) do { \ 278 + const size_t _dest_len = __builtin_object_size(dest, 1); \ 279 + \ 280 + BUILD_BUG_ON(!__builtin_constant_p(_dest_len) || \ 281 + _dest_len == (size_t)-1); \ 282 + memcpy_and_pad(dest, _dest_len, src, strnlen(src, _dest_len), pad); \ 283 + } while (0) 284 + 285 + /** 286 + * strtomem - Copy NUL-terminated string to non-NUL-terminated buffer 287 + * 288 + * @dest: Pointer of destination character array (marked as __nonstring) 289 + * @src: Pointer to NUL-terminated string 290 + * 291 + * This is a replacement for strncpy() uses where the destination is not 292 + * a NUL-terminated string, but with bounds checking on the source size, and 293 + * without trailing padding. If padding is required, use strtomem_pad(). 294 + * 295 + * Note that the size of @dest is not an argument, as the length of @dest 296 + * must be discoverable by the compiler. 
297 + */ 298 + #define strtomem(dest, src) do { \ 299 + const size_t _dest_len = __builtin_object_size(dest, 1); \ 300 + \ 301 + BUILD_BUG_ON(!__builtin_constant_p(_dest_len) || \ 302 + _dest_len == (size_t)-1); \ 303 + memcpy(dest, src, min(_dest_len, strnlen(src, _dest_len))); \ 304 + } while (0) 305 + 306 + /** 264 307 * memset_after - Set a value after a struct member to the end of a struct 265 308 * 266 309 * @obj: Address of target struct instance
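The `strtomem()` helpers above derive the destination size with `__builtin_object_size()` and refuse to build when it is unknown. Their copy/pad behavior can be sketched in plain C with the size passed explicitly (function names here are illustrative, not the kernel API):

```c
#include <string.h>

/* Bounded, non-NUL-terminating copy: at most dest_len bytes of src,
 * with no trailing padding written. */
static void strtomem_sketch(unsigned char *dest, size_t dest_len,
			    const char *src)
{
	memcpy(dest, src, strnlen(src, dest_len));
}

/* Same, but fill any remaining destination bytes with 'pad'. */
static void strtomem_pad_sketch(unsigned char *dest, size_t dest_len,
				const char *src, unsigned char pad)
{
	size_t len = strnlen(src, dest_len);

	memcpy(dest, src, len);
	memset(dest + len, pad, dest_len - len);
}
```

Unlike strncpy(), neither variant is ambiguous: the destination is explicitly a non-NUL-terminated buffer, and padding is either always applied or never applied.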
lib/Kconfig.debug (+21)
··· 2511 2511 2512 2512 If unsure, say N. 2513 2513 2514 + config IS_SIGNED_TYPE_KUNIT_TEST 2515 + tristate "Test is_signed_type() macro" if !KUNIT_ALL_TESTS 2516 + depends on KUNIT 2517 + default KUNIT_ALL_TESTS 2518 + help 2519 + Builds unit tests for the is_signed_type() macro. 2520 + 2521 + For more information on KUnit and unit tests in general please refer 2522 + to the KUnit documentation in Documentation/dev-tools/kunit/. 2523 + 2524 + If unsure, say N. 2525 + 2514 2526 config OVERFLOW_KUNIT_TEST 2515 2527 tristate "Test check_*_overflow() functions at runtime" if !KUNIT_ALL_TESTS 2516 2528 depends on KUNIT ··· 2546 2534 CONFIG_INIT_STACK_ALL_PATTERN, CONFIG_INIT_STACK_ALL_ZERO, 2547 2535 CONFIG_GCC_PLUGIN_STRUCTLEAK, CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF, 2548 2536 or CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. 2537 + 2538 + config FORTIFY_KUNIT_TEST 2539 + tristate "Test fortified str*() and mem*() function internals at runtime" if !KUNIT_ALL_TESTS 2540 + depends on KUNIT && FORTIFY_SOURCE 2541 + default KUNIT_ALL_TESTS 2542 + help 2543 + Builds unit tests for checking internals of FORTIFY_SOURCE as used 2544 + by the str*() and mem*() family of functions. For testing runtime 2545 + traps of FORTIFY_SOURCE, see LKDTM's "FORTIFY_*" tests. 2549 2546 2550 2547 config TEST_UDELAY 2551 2548 tristate "udelay test driver"
lib/Makefile (+2)
··· 377 377 obj-$(CONFIG_CMDLINE_KUNIT_TEST) += cmdline_kunit.o 378 378 obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o 379 379 obj-$(CONFIG_MEMCPY_KUNIT_TEST) += memcpy_kunit.o 380 + obj-$(CONFIG_IS_SIGNED_TYPE_KUNIT_TEST) += is_signed_type_kunit.o 380 381 obj-$(CONFIG_OVERFLOW_KUNIT_TEST) += overflow_kunit.o 381 382 CFLAGS_stackinit_kunit.o += $(call cc-disable-warning, switch-unreachable) 382 383 obj-$(CONFIG_STACKINIT_KUNIT_TEST) += stackinit_kunit.o 384 + obj-$(CONFIG_FORTIFY_KUNIT_TEST) += fortify_kunit.o 383 385 384 386 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o 385 387
lib/fortify_kunit.c (new file, +76)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Runtime test cases for CONFIG_FORTIFY_SOURCE that aren't expected to 4 + * Oops the kernel on success. (For those, see drivers/misc/lkdtm/fortify.c) 5 + * 6 + * For corner cases with UBSAN, try testing with: 7 + * 8 + * ./tools/testing/kunit/kunit.py run --arch=x86_64 \ 9 + * --kconfig_add CONFIG_FORTIFY_SOURCE=y \ 10 + * --kconfig_add CONFIG_UBSAN=y \ 11 + * --kconfig_add CONFIG_UBSAN_TRAP=y \ 12 + * --kconfig_add CONFIG_UBSAN_BOUNDS=y \ 13 + * --kconfig_add CONFIG_UBSAN_LOCAL_BOUNDS=y \ 14 + * --make_options LLVM=1 fortify 15 + */ 16 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 17 + 18 + #include <kunit/test.h> 19 + #include <linux/string.h> 20 + 21 + static const char array_of_10[] = "this is 10"; 22 + static const char *ptr_of_11 = "this is 11!"; 23 + static char array_unknown[] = "compiler thinks I might change"; 24 + 25 + static void known_sizes_test(struct kunit *test) 26 + { 27 + KUNIT_EXPECT_EQ(test, __compiletime_strlen("88888888"), 8); 28 + KUNIT_EXPECT_EQ(test, __compiletime_strlen(array_of_10), 10); 29 + KUNIT_EXPECT_EQ(test, __compiletime_strlen(ptr_of_11), 11); 30 + 31 + KUNIT_EXPECT_EQ(test, __compiletime_strlen(array_unknown), SIZE_MAX); 32 + /* Externally defined and dynamically sized string pointer: */ 33 + KUNIT_EXPECT_EQ(test, __compiletime_strlen(test->name), SIZE_MAX); 34 + } 35 + 36 + /* This is volatile so the optimizer can't perform DCE below. */ 37 + static volatile int pick; 38 + 39 + /* Not inline to keep optimizer from figuring out which string we want. 
*/ 40 + static noinline size_t want_minus_one(int pick) 41 + { 42 + const char *str; 43 + 44 + switch (pick) { 45 + case 1: 46 + str = "4444"; 47 + break; 48 + case 2: 49 + str = "333"; 50 + break; 51 + default: 52 + str = "1"; 53 + break; 54 + } 55 + return __compiletime_strlen(str); 56 + } 57 + 58 + static void control_flow_split_test(struct kunit *test) 59 + { 60 + KUNIT_EXPECT_EQ(test, want_minus_one(pick), SIZE_MAX); 61 + } 62 + 63 + static struct kunit_case fortify_test_cases[] = { 64 + KUNIT_CASE(known_sizes_test), 65 + KUNIT_CASE(control_flow_split_test), 66 + {} 67 + }; 68 + 69 + static struct kunit_suite fortify_test_suite = { 70 + .name = "fortify", 71 + .test_cases = fortify_test_cases, 72 + }; 73 + 74 + kunit_test_suite(fortify_test_suite); 75 + 76 + MODULE_LICENSE("GPL");
lib/is_signed_type_kunit.c (new file, +53)
··· 1 + // SPDX-License-Identifier: GPL-2.0 OR MIT 2 + /* 3 + * ./tools/testing/kunit/kunit.py run is_signed_type [--raw_output] 4 + */ 5 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 6 + 7 + #include <kunit/test.h> 8 + #include <linux/compiler.h> 9 + 10 + enum unsigned_enum { 11 + constant_a = 3, 12 + }; 13 + 14 + enum signed_enum { 15 + constant_b = -1, 16 + constant_c = 2, 17 + }; 18 + 19 + static void is_signed_type_test(struct kunit *test) 20 + { 21 + KUNIT_EXPECT_EQ(test, is_signed_type(bool), false); 22 + KUNIT_EXPECT_EQ(test, is_signed_type(signed char), true); 23 + KUNIT_EXPECT_EQ(test, is_signed_type(unsigned char), false); 24 + #ifdef __CHAR_UNSIGNED__ 25 + KUNIT_EXPECT_EQ(test, is_signed_type(char), false); 26 + #else 27 + KUNIT_EXPECT_EQ(test, is_signed_type(char), true); 28 + #endif 29 + KUNIT_EXPECT_EQ(test, is_signed_type(int), true); 30 + KUNIT_EXPECT_EQ(test, is_signed_type(unsigned int), false); 31 + KUNIT_EXPECT_EQ(test, is_signed_type(long), true); 32 + KUNIT_EXPECT_EQ(test, is_signed_type(unsigned long), false); 33 + KUNIT_EXPECT_EQ(test, is_signed_type(long long), true); 34 + KUNIT_EXPECT_EQ(test, is_signed_type(unsigned long long), false); 35 + KUNIT_EXPECT_EQ(test, is_signed_type(enum unsigned_enum), false); 36 + KUNIT_EXPECT_EQ(test, is_signed_type(enum signed_enum), true); 37 + KUNIT_EXPECT_EQ(test, is_signed_type(void *), false); 38 + KUNIT_EXPECT_EQ(test, is_signed_type(const char *), false); 39 + } 40 + 41 + static struct kunit_case is_signed_type_test_cases[] = { 42 + KUNIT_CASE(is_signed_type_test), 43 + {} 44 + }; 45 + 46 + static struct kunit_suite is_signed_type_test_suite = { 47 + .name = "is_signed_type", 48 + .test_cases = is_signed_type_test_cases, 49 + }; 50 + 51 + kunit_test_suite(is_signed_type_test_suite); 52 + 53 + MODULE_LICENSE("Dual MIT/GPL");
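The macro under test is small enough to sketch directly; the kernel defines `is_signed_type()` in `<linux/compiler.h>` roughly as a cast-and-compare, which is what produces the enum, `bool`, and pointer results the test expects:

```c
#include <stdbool.h>

/* Mirror of the kernel macro: a type is signed iff (type)-1 compares
 * below (type)1, i.e. -1 did not wrap around to the type's maximum. */
#define is_signed_type(type) (((type)(-1)) < (type)1)
```

For `bool`, `(bool)(-1)` collapses to `1`, so the comparison is `1 < 1` and the type correctly reads as unsigned; the kernel version additionally carries a `__force` annotation for sparse, dropped in this sketch.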
lib/memcpy_kunit.c (+55 -4)
··· 29 29 }; 30 30 31 31 #define check(instance, v) do { \ 32 - int i; \ 33 32 BUILD_BUG_ON(sizeof(instance.data) != 32); \ 34 - for (i = 0; i < sizeof(instance.data); i++) { \ 33 + for (size_t i = 0; i < sizeof(instance.data); i++) { \ 35 34 KUNIT_ASSERT_EQ_MSG(test, instance.data[i], v, \ 36 35 "line %d: '%s' not initialized to 0x%02x @ %d (saw 0x%02x)\n", \ 37 36 __LINE__, #instance, v, i, instance.data[i]); \ ··· 38 39 } while (0) 39 40 40 41 #define compare(name, one, two) do { \ 41 - int i; \ 42 42 BUILD_BUG_ON(sizeof(one) != sizeof(two)); \ 43 - for (i = 0; i < sizeof(one); i++) { \ 43 + for (size_t i = 0; i < sizeof(one); i++) { \ 44 44 KUNIT_EXPECT_EQ_MSG(test, one.data[i], two.data[i], \ 45 45 "line %d: %s.data[%d] (0x%02x) != %s.data[%d] (0x%02x)\n", \ 46 46 __LINE__, #one, i, one.data[i], #two, i, two.data[i]); \ ··· 270 272 #undef TEST_OP 271 273 } 272 274 275 + static void strtomem_test(struct kunit *test) 276 + { 277 + static const char input[sizeof(unsigned long)] = "hi"; 278 + static const char truncate[] = "this is too long"; 279 + struct { 280 + unsigned long canary1; 281 + unsigned char output[sizeof(unsigned long)] __nonstring; 282 + unsigned long canary2; 283 + } wrap; 284 + 285 + memset(&wrap, 0xFF, sizeof(wrap)); 286 + KUNIT_EXPECT_EQ_MSG(test, wrap.canary1, ULONG_MAX, 287 + "bad initial canary value"); 288 + KUNIT_EXPECT_EQ_MSG(test, wrap.canary2, ULONG_MAX, 289 + "bad initial canary value"); 290 + 291 + /* Check unpadded copy leaves surroundings untouched. */ 292 + strtomem(wrap.output, input); 293 + KUNIT_EXPECT_EQ(test, wrap.canary1, ULONG_MAX); 294 + KUNIT_EXPECT_EQ(test, wrap.output[0], input[0]); 295 + KUNIT_EXPECT_EQ(test, wrap.output[1], input[1]); 296 + for (size_t i = 2; i < sizeof(wrap.output); i++) 297 + KUNIT_EXPECT_EQ(test, wrap.output[i], 0xFF); 298 + KUNIT_EXPECT_EQ(test, wrap.canary2, ULONG_MAX); 299 + 300 + /* Check truncated copy leaves surroundings untouched. 
*/ 301 + memset(&wrap, 0xFF, sizeof(wrap)); 302 + strtomem(wrap.output, truncate); 303 + KUNIT_EXPECT_EQ(test, wrap.canary1, ULONG_MAX); 304 + for (size_t i = 0; i < sizeof(wrap.output); i++) 305 + KUNIT_EXPECT_EQ(test, wrap.output[i], truncate[i]); 306 + KUNIT_EXPECT_EQ(test, wrap.canary2, ULONG_MAX); 307 + 308 + /* Check padded copy leaves only string padded. */ 309 + memset(&wrap, 0xFF, sizeof(wrap)); 310 + strtomem_pad(wrap.output, input, 0xAA); 311 + KUNIT_EXPECT_EQ(test, wrap.canary1, ULONG_MAX); 312 + KUNIT_EXPECT_EQ(test, wrap.output[0], input[0]); 313 + KUNIT_EXPECT_EQ(test, wrap.output[1], input[1]); 314 + for (size_t i = 2; i < sizeof(wrap.output); i++) 315 + KUNIT_EXPECT_EQ(test, wrap.output[i], 0xAA); 316 + KUNIT_EXPECT_EQ(test, wrap.canary2, ULONG_MAX); 317 + 318 + /* Check truncated padded copy has no padding. */ 319 + memset(&wrap, 0xFF, sizeof(wrap)); 320 + strtomem_pad(wrap.output, truncate, 0xAA); 321 + KUNIT_EXPECT_EQ(test, wrap.canary1, ULONG_MAX); 322 + for (size_t i = 0; i < sizeof(wrap.output); i++) 323 + KUNIT_EXPECT_EQ(test, wrap.output[i], truncate[i]); 324 + KUNIT_EXPECT_EQ(test, wrap.canary2, ULONG_MAX); 325 + } 326 + 273 327 static struct kunit_case memcpy_test_cases[] = { 274 328 KUNIT_CASE(memset_test), 275 329 KUNIT_CASE(memcpy_test), 276 330 KUNIT_CASE(memmove_test), 331 + KUNIT_CASE(strtomem_test), 277 332 {} 278 333 }; 279 334
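The new strtomem_test() relies on a canary-wrapping pattern worth noting: the output buffer is sandwiched between two sentinel words, so any write past either end of the buffer corrupts a canary and is caught by the later KUNIT_EXPECT_EQ() checks. A standalone sketch of the same technique (names are illustrative):

```c
#include <limits.h>
#include <string.h>

/* Sentinel words on both sides of the buffer under test. */
struct wrapped {
	unsigned long canary1;
	unsigned char output[sizeof(unsigned long)];
	unsigned long canary2;
};

/* Returns nonzero when neither canary was disturbed by the copy. */
int bounded_copy_stayed_inside(struct wrapped *w, const char *src)
{
	memset(w, 0xFF, sizeof(*w));	/* both canaries become ULONG_MAX */
	memcpy(w->output, src, strnlen(src, sizeof(w->output)));
	return w->canary1 == ULONG_MAX && w->canary2 == ULONG_MAX;
}
```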
lib/overflow_kunit.c (+122 -55)
··· 16 16 #include <linux/types.h> 17 17 #include <linux/vmalloc.h> 18 18 19 - #define DEFINE_TEST_ARRAY(t) \ 20 - static const struct test_ ## t { \ 21 - t a, b; \ 22 - t sum, diff, prod; \ 23 - bool s_of, d_of, p_of; \ 24 - } t ## _tests[] 19 + #define DEFINE_TEST_ARRAY_TYPED(t1, t2, t) \ 20 + static const struct test_ ## t1 ## _ ## t2 ## __ ## t { \ 21 + t1 a; \ 22 + t2 b; \ 23 + t sum, diff, prod; \ 24 + bool s_of, d_of, p_of; \ 25 + } t1 ## _ ## t2 ## __ ## t ## _tests[] 26 + 27 + #define DEFINE_TEST_ARRAY(t) DEFINE_TEST_ARRAY_TYPED(t, t, t) 25 28 26 29 DEFINE_TEST_ARRAY(u8) = { 27 30 {0, 0, 0, 0, 0, false, false, false}, ··· 225 222 }; 226 223 #endif 227 224 228 - #define check_one_op(t, fmt, op, sym, a, b, r, of) do { \ 229 - t _r; \ 230 - bool _of; \ 231 - \ 232 - _of = check_ ## op ## _overflow(a, b, &_r); \ 233 - KUNIT_EXPECT_EQ_MSG(test, _of, of, \ 225 + #define check_one_op(t, fmt, op, sym, a, b, r, of) do { \ 226 + int _a_orig = a, _a_bump = a + 1; \ 227 + int _b_orig = b, _b_bump = b + 1; \ 228 + bool _of; \ 229 + t _r; \ 230 + \ 231 + _of = check_ ## op ## _overflow(a, b, &_r); \ 232 + KUNIT_EXPECT_EQ_MSG(test, _of, of, \ 234 233 "expected "fmt" "sym" "fmt" to%s overflow (type %s)\n", \ 235 - a, b, of ? "" : " not", #t); \ 236 - KUNIT_EXPECT_EQ_MSG(test, _r, r, \ 234 + a, b, of ? "" : " not", #t); \ 235 + KUNIT_EXPECT_EQ_MSG(test, _r, r, \ 237 236 "expected "fmt" "sym" "fmt" == "fmt", got "fmt" (type %s)\n", \ 238 - a, b, r, _r, #t); \ 237 + a, b, r, _r, #t); \ 238 + /* Check for internal macro side-effects. 
*/ \ 239 + _of = check_ ## op ## _overflow(_a_orig++, _b_orig++, &_r); \ 240 + KUNIT_EXPECT_EQ_MSG(test, _a_orig, _a_bump, "Unexpected " #op " macro side-effect!\n"); \ 241 + KUNIT_EXPECT_EQ_MSG(test, _b_orig, _b_bump, "Unexpected " #op " macro side-effect!\n"); \ 239 242 } while (0) 240 243 241 - #define DEFINE_TEST_FUNC(t, fmt) \ 242 - static void do_test_ ## t(struct kunit *test, const struct test_ ## t *p) \ 244 + #define DEFINE_TEST_FUNC_TYPED(n, t, fmt) \ 245 + static void do_test_ ## n(struct kunit *test, const struct test_ ## n *p) \ 243 246 { \ 244 247 check_one_op(t, fmt, add, "+", p->a, p->b, p->sum, p->s_of); \ 245 248 check_one_op(t, fmt, add, "+", p->b, p->a, p->sum, p->s_of); \ ··· 254 245 check_one_op(t, fmt, mul, "*", p->b, p->a, p->prod, p->p_of); \ 255 246 } \ 256 247 \ 257 - static void t ## _overflow_test(struct kunit *test) { \ 248 + static void n ## _overflow_test(struct kunit *test) { \ 258 249 unsigned i; \ 259 250 \ 260 - for (i = 0; i < ARRAY_SIZE(t ## _tests); ++i) \ 261 - do_test_ ## t(test, &t ## _tests[i]); \ 251 + for (i = 0; i < ARRAY_SIZE(n ## _tests); ++i) \ 252 + do_test_ ## n(test, &n ## _tests[i]); \ 262 253 kunit_info(test, "%zu %s arithmetic tests finished\n", \ 263 - ARRAY_SIZE(t ## _tests), #t); \ 254 + ARRAY_SIZE(n ## _tests), #n); \ 264 255 } 256 + 257 + #define DEFINE_TEST_FUNC(t, fmt) \ 258 + DEFINE_TEST_FUNC_TYPED(t ## _ ## t ## __ ## t, t, fmt) 265 259 266 260 DEFINE_TEST_FUNC(u8, "%d"); 267 261 DEFINE_TEST_FUNC(s8, "%d"); ··· 277 265 DEFINE_TEST_FUNC(s64, "%lld"); 278 266 #endif 279 267 280 - static void overflow_shift_test(struct kunit *test) 281 - { 282 - int count = 0; 268 + DEFINE_TEST_ARRAY_TYPED(u32, u32, u8) = { 269 + {0, 0, 0, 0, 0, false, false, false}, 270 + {U8_MAX, 2, 1, U8_MAX - 2, U8_MAX - 1, true, false, true}, 271 + {U8_MAX + 1, 0, 0, 0, 0, true, true, false}, 272 + }; 273 + DEFINE_TEST_FUNC_TYPED(u32_u32__u8, u8, "%d"); 274 + 275 + DEFINE_TEST_ARRAY_TYPED(u32, u32, int) = { 276 + {0, 0, 0, 0, 0, 
false, false, false}, 277 + {U32_MAX, 0, -1, -1, 0, true, true, false}, 278 + }; 279 + DEFINE_TEST_FUNC_TYPED(u32_u32__int, int, "%d"); 280 + 281 + DEFINE_TEST_ARRAY_TYPED(u8, u8, int) = { 282 + {0, 0, 0, 0, 0, false, false, false}, 283 + {U8_MAX, U8_MAX, 2 * U8_MAX, 0, U8_MAX * U8_MAX, false, false, false}, 284 + {1, 2, 3, -1, 2, false, false, false}, 285 + }; 286 + DEFINE_TEST_FUNC_TYPED(u8_u8__int, int, "%d"); 287 + 288 + DEFINE_TEST_ARRAY_TYPED(int, int, u8) = { 289 + {0, 0, 0, 0, 0, false, false, false}, 290 + {1, 2, 3, U8_MAX, 2, false, true, false}, 291 + {-1, 0, U8_MAX, U8_MAX, 0, true, true, false}, 292 + }; 293 + DEFINE_TEST_FUNC_TYPED(int_int__u8, u8, "%d"); 283 294 284 295 /* Args are: value, shift, type, expected result, overflow expected */ 285 296 #define TEST_ONE_SHIFT(a, s, t, expect, of) do { \ ··· 326 291 } \ 327 292 count++; \ 328 293 } while (0) 294 + 295 + static void shift_sane_test(struct kunit *test) 296 + { 297 + int count = 0; 329 298 330 299 /* Sane shifts. */ 331 300 TEST_ONE_SHIFT(1, 0, u8, 1 << 0, false); ··· 372 333 TEST_ONE_SHIFT(0, 30, int, 0, false); 373 334 TEST_ONE_SHIFT(0, 30, s32, 0, false); 374 335 TEST_ONE_SHIFT(0, 62, s64, 0, false); 336 + 337 + kunit_info(test, "%d sane shift tests finished\n", count); 338 + } 339 + 340 + static void shift_overflow_test(struct kunit *test) 341 + { 342 + int count = 0; 375 343 376 344 /* Overflow: shifted the bit off the end. */ 377 345 TEST_ONE_SHIFT(1, 8, u8, 0, true); ··· 427 381 /* 0100000100001000001000000010000001000010000001000100010001001011 */ 428 382 TEST_ONE_SHIFT(4686030735197619275LL, 2, s64, 0, true); 429 383 384 + kunit_info(test, "%d overflow shift tests finished\n", count); 385 + } 386 + 387 + static void shift_truncate_test(struct kunit *test) 388 + { 389 + int count = 0; 390 + 430 391 /* Overflow: values larger than destination type. 
*/ 431 392 TEST_ONE_SHIFT(0x100, 0, u8, 0, true); 432 393 TEST_ONE_SHIFT(0xFF, 0, s8, 0, true); ··· 444 391 TEST_ONE_SHIFT(0xFFFFFFFFUL, 0, s32, 0, true); 445 392 TEST_ONE_SHIFT(0xFFFFFFFFUL, 0, int, 0, true); 446 393 TEST_ONE_SHIFT(0xFFFFFFFFFFFFFFFFULL, 0, s64, 0, true); 394 + 395 + /* Overflow: shifted at or beyond entire type's bit width. */ 396 + TEST_ONE_SHIFT(0, 8, u8, 0, true); 397 + TEST_ONE_SHIFT(0, 9, u8, 0, true); 398 + TEST_ONE_SHIFT(0, 8, s8, 0, true); 399 + TEST_ONE_SHIFT(0, 9, s8, 0, true); 400 + TEST_ONE_SHIFT(0, 16, u16, 0, true); 401 + TEST_ONE_SHIFT(0, 17, u16, 0, true); 402 + TEST_ONE_SHIFT(0, 16, s16, 0, true); 403 + TEST_ONE_SHIFT(0, 17, s16, 0, true); 404 + TEST_ONE_SHIFT(0, 32, u32, 0, true); 405 + TEST_ONE_SHIFT(0, 33, u32, 0, true); 406 + TEST_ONE_SHIFT(0, 32, int, 0, true); 407 + TEST_ONE_SHIFT(0, 33, int, 0, true); 408 + TEST_ONE_SHIFT(0, 32, s32, 0, true); 409 + TEST_ONE_SHIFT(0, 33, s32, 0, true); 410 + TEST_ONE_SHIFT(0, 64, u64, 0, true); 411 + TEST_ONE_SHIFT(0, 65, u64, 0, true); 412 + TEST_ONE_SHIFT(0, 64, s64, 0, true); 413 + TEST_ONE_SHIFT(0, 65, s64, 0, true); 414 + 415 + kunit_info(test, "%d truncate shift tests finished\n", count); 416 + } 417 + 418 + static void shift_nonsense_test(struct kunit *test) 419 + { 420 + int count = 0; 447 421 448 422 /* Nonsense: negative initial value. */ 449 423 TEST_ONE_SHIFT(-1, 0, s8, 0, true); ··· 496 416 TEST_ONE_SHIFT(0, -30, s64, 0, true); 497 417 TEST_ONE_SHIFT(0, -30, u64, 0, true); 498 418 499 - /* Overflow: shifted at or beyond entire type's bit width. 
*/ 500 - TEST_ONE_SHIFT(0, 8, u8, 0, true); 501 - TEST_ONE_SHIFT(0, 9, u8, 0, true); 502 - TEST_ONE_SHIFT(0, 8, s8, 0, true); 503 - TEST_ONE_SHIFT(0, 9, s8, 0, true); 504 - TEST_ONE_SHIFT(0, 16, u16, 0, true); 505 - TEST_ONE_SHIFT(0, 17, u16, 0, true); 506 - TEST_ONE_SHIFT(0, 16, s16, 0, true); 507 - TEST_ONE_SHIFT(0, 17, s16, 0, true); 508 - TEST_ONE_SHIFT(0, 32, u32, 0, true); 509 - TEST_ONE_SHIFT(0, 33, u32, 0, true); 510 - TEST_ONE_SHIFT(0, 32, int, 0, true); 511 - TEST_ONE_SHIFT(0, 33, int, 0, true); 512 - TEST_ONE_SHIFT(0, 32, s32, 0, true); 513 - TEST_ONE_SHIFT(0, 33, s32, 0, true); 514 - TEST_ONE_SHIFT(0, 64, u64, 0, true); 515 - TEST_ONE_SHIFT(0, 65, u64, 0, true); 516 - TEST_ONE_SHIFT(0, 64, s64, 0, true); 517 - TEST_ONE_SHIFT(0, 65, s64, 0, true); 518 - 519 419 /* 520 420 * Corner case: for unsigned types, we fail when we've shifted 521 421 * through the entire width of bits. For signed types, we might ··· 511 451 TEST_ONE_SHIFT(0, 31, s32, 0, false); 512 452 TEST_ONE_SHIFT(0, 63, s64, 0, false); 513 453 514 - kunit_info(test, "%d shift tests finished\n", count); 515 - #undef TEST_ONE_SHIFT 454 + kunit_info(test, "%d nonsense shift tests finished\n", count); 516 455 } 456 + #undef TEST_ONE_SHIFT 517 457 518 458 /* 519 459 * Deal with the various forms of allocator arguments. See comments above ··· 709 649 } 710 650 711 651 static struct kunit_case overflow_test_cases[] = { 712 - KUNIT_CASE(u8_overflow_test), 713 - KUNIT_CASE(s8_overflow_test), 714 - KUNIT_CASE(u16_overflow_test), 715 - KUNIT_CASE(s16_overflow_test), 716 - KUNIT_CASE(u32_overflow_test), 717 - KUNIT_CASE(s32_overflow_test), 652 + KUNIT_CASE(u8_u8__u8_overflow_test), 653 + KUNIT_CASE(s8_s8__s8_overflow_test), 654 + KUNIT_CASE(u16_u16__u16_overflow_test), 655 + KUNIT_CASE(s16_s16__s16_overflow_test), 656 + KUNIT_CASE(u32_u32__u32_overflow_test), 657 + KUNIT_CASE(s32_s32__s32_overflow_test), 718 658 /* Clang 13 and earlier generate unwanted libcalls on 32-bit. 
*/ 719 659 #if BITS_PER_LONG == 64 720 - KUNIT_CASE(u64_overflow_test), 721 - KUNIT_CASE(s64_overflow_test), 660 + KUNIT_CASE(u64_u64__u64_overflow_test), 661 + KUNIT_CASE(s64_s64__s64_overflow_test), 722 662 #endif 723 - KUNIT_CASE(overflow_shift_test), 663 + KUNIT_CASE(u32_u32__u8_overflow_test), 664 + KUNIT_CASE(u32_u32__int_overflow_test), 665 + KUNIT_CASE(u8_u8__int_overflow_test), 666 + KUNIT_CASE(int_int__u8_overflow_test), 667 + KUNIT_CASE(shift_sane_test), 668 + KUNIT_CASE(shift_overflow_test), 669 + KUNIT_CASE(shift_truncate_test), 670 + KUNIT_CASE(shift_nonsense_test), 724 671 KUNIT_CASE(overflow_allocation_test), 725 672 KUNIT_CASE(overflow_size_helpers_test), 726 673 {}
scripts/Makefile.extrawarn (+1)
··· 64 64 KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast) 65 65 KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare 66 66 KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access) 67 + KBUILD_CFLAGS += $(call cc-disable-warning, cast-function-type-strict) 67 68 endif 68 69 69 70 endif
security/Kconfig.hardening (+10 -4)
··· 22 22 config CC_HAS_AUTO_VAR_INIT_PATTERN 23 23 def_bool $(cc-option,-ftrivial-auto-var-init=pattern) 24 24 25 - config CC_HAS_AUTO_VAR_INIT_ZERO 26 - # GCC ignores the -enable flag, so we can test for the feature with 27 - # a single invocation using the flag, but drop it as appropriate in 28 - # the Makefile, depending on the presence of Clang. 25 + config CC_HAS_AUTO_VAR_INIT_ZERO_BARE 26 + def_bool $(cc-option,-ftrivial-auto-var-init=zero) 27 + 28 + config CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER 29 + # Clang 16 and later warn about using the -enable flag, but it 30 + # is required before then. 29 31 def_bool $(cc-option,-ftrivial-auto-var-init=zero -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang) 32 + depends on !CC_HAS_AUTO_VAR_INIT_ZERO_BARE 33 + 34 + config CC_HAS_AUTO_VAR_INIT_ZERO 35 + def_bool CC_HAS_AUTO_VAR_INIT_ZERO_BARE || CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER 30 36 31 37 choice 32 38 prompt "Initialize kernel stack variables at function entry"
security/loadpin/Kconfig (+6 -1)
··· 33 33 on the LoadPin securityfs entry 'dm-verity'. The ioctl 34 34 expects a file descriptor of a file with verity digests as 35 35 parameter. The file must be located on the pinned root and 36 - contain a comma separated list of digests. 36 + start with the line: 37 + 38 + # LOADPIN_TRUSTED_VERITY_ROOT_DIGESTS 39 + 40 + This is followed by the verity digests, with one digest per 41 + line.
security/loadpin/loadpin.c (+15 -1)
··· 21 21 #include <linux/dm-verity-loadpin.h> 22 22 #include <uapi/linux/loadpin.h> 23 23 24 + #define VERITY_DIGEST_FILE_HEADER "# LOADPIN_TRUSTED_VERITY_ROOT_DIGESTS" 25 + 24 26 static void report_load(const char *origin, struct file *file, char *operation) 25 27 { 26 28 char *cmdline, *pathname; ··· 294 292 295 293 p = strim(data); 296 294 while ((d = strsep(&p, "\n")) != NULL) { 297 - int len = strlen(d); 295 + int len; 298 296 struct dm_verity_loadpin_trusted_root_digest *trd; 297 + 298 + if (d == data) { 299 + /* first line, validate header */ 300 + if (strcmp(d, VERITY_DIGEST_FILE_HEADER)) { 301 + rc = -EPROTO; 302 + goto err; 303 + } 304 + 305 + continue; 306 + } 307 + 308 + len = strlen(d); 299 309 300 310 if (len % 2) { 301 311 rc = -EPROTO;
tools/testing/selftests/lkdtm/tests.txt (+5 -3)
··· 75 75 STACKLEAK_ERASING OK: the rest of the thread stack is properly erased 76 76 CFI_FORWARD_PROTO 77 77 CFI_BACKWARD call trace:|ok: control flow unchanged 78 - FORTIFIED_STRSCPY 79 - FORTIFIED_OBJECT 80 - FORTIFIED_SUBOBJECT 78 + FORTIFY_STRSCPY detected buffer overflow 79 + FORTIFY_STR_OBJECT detected buffer overflow 80 + FORTIFY_STR_MEMBER detected buffer overflow 81 + FORTIFY_MEM_OBJECT detected buffer overflow 82 + FORTIFY_MEM_MEMBER detected field-spanning write 81 83 PPC_SLB_MULTIHIT Recovered