Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'hardening-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull kernel hardening updates from Kees Cook:

- Convert flexible array members, fix -Wstringop-overflow warnings, and
fix KCFI function type mismatches that went ignored by maintainers
(Gustavo A. R. Silva, Nathan Chancellor, Kees Cook)

- Remove the remaining side-effect users of ksize() by converting
dma-buf, btrfs, and coredump to using kmalloc_size_roundup(), add
more __alloc_size attributes, and introduce full testing of all
allocator functions. Finally, remove the ksize() side-effect so that
each allocation-aware checker can behave without exceptions

- Introduce oops_limit (default 10,000) and warn_limit (default off) to
provide greater granularity of control for panic_on_oops and
panic_on_warn (Jann Horn, Kees Cook)

- Introduce overflows_type() and castable_to_type() helpers for cleaner
overflow checking

- Improve code generation for strscpy() and update str*() kern-doc

- Convert strscpy and siphash tests to KUnit, and expand memcpy tests

- Always use a non-NULL argument for prepare_kernel_cred()

- Disable structleak plugin in FORTIFY KUnit test (Anders Roxell)

- Adjust orphan linker section checking to respect CONFIG_WERROR (Xin
Li)

- Make sure siginfo is cleared for forced SIGKILL (haifeng.xu)

- Fix um vs FORTIFY warnings for always-NULL arguments
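Several of the changes above convert one-element arrays to flexible-array members. A rough userspace sketch of that pattern (the kernel uses its struct_size() helper for the allocation size, which additionally checks the arithmetic for overflow; name_alloc is a hypothetical illustration, not kernel code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Old style: a one-element array forces "- 1" size arithmetic
 * everywhere and defeats compiler bounds checking. */
struct name_old {
	size_t len;
	char name[1];
};

/* New style: a flexible-array member is sized exactly, so
 * -Wstringop-overflow and FORTIFY can see the real bounds. */
struct name_new {
	size_t len;
	char name[];	/* flexible array member */
};

/* Allocate a name_new holding a copy of s. offsetof() + len + 1
 * approximates the kernel's struct_size(n, name, len + 1). */
static struct name_new *name_alloc(const char *s)
{
	size_t len = strlen(s);
	struct name_new *n;

	n = malloc(offsetof(struct name_new, name) + len + 1);
	if (!n)
		return NULL;
	n->len = len;
	memcpy(n->name, s, len + 1);
	return n;
}
```

With the one-element form, the same allocation would have needed sizeof(struct name_old) + len, silently over-allocating by one byte and confusing every size-aware checker.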

* tag 'hardening-v6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (31 commits)
ksmbd: replace one-element arrays with flexible-array members
hpet: Replace one-element array with flexible-array member
um: virt-pci: Avoid GCC non-NULL warning
signal: Initialize the info in ksignal
lib: fortify_kunit: build without structleak plugin
panic: Expose "warn_count" to sysfs
panic: Introduce warn_limit
panic: Consolidate open-coded panic_on_warn checks
exit: Allow oops_limit to be disabled
exit: Expose "oops_count" to sysfs
exit: Put an upper limit on how often we can oops
panic: Separate sysctl logic from CONFIG_SMP
mm/pgtable: Fix multiple -Wstringop-overflow warnings
mm: Make ksize() a reporting-only function
kunit/fortify: Validate __alloc_size attribute results
drm/sti: Fix return type of sti_{dvo,hda,hdmi}_connector_mode_valid()
drm/fsl-dcu: Fix return type of fsl_dcu_drm_connector_mode_valid()
driver core: Add __alloc_size hint to devm allocators
overflow: Introduce overflows_type() and castable_to_type()
coredump: Proactively round up to kmalloc bucket size
...

+1533 -463
+6
Documentation/ABI/testing/sysfs-kernel-oops_count
··· 1 + What: /sys/kernel/oops_count 2 + Date: November 2022 3 + KernelVersion: 6.2.0 4 + Contact: Linux Kernel Hardening List <linux-hardening@vger.kernel.org> 5 + Description: 6 + Shows how many times the system has Oopsed since last boot.
+6
Documentation/ABI/testing/sysfs-kernel-warn_count
··· 1 + What: /sys/kernel/warn_count 2 + Date: November 2022 3 + KernelVersion: 6.2.0 4 + Contact: Linux Kernel Hardening List <linux-hardening@vger.kernel.org> 5 + Description: 6 + Shows how many times the system has Warned since last boot.
+19
Documentation/admin-guide/sysctl/kernel.rst
··· 670 670 an oops event is detected. 671 671 672 672 673 + oops_limit 674 + ========== 675 + 676 + Number of kernel oopses after which the kernel should panic when 677 + ``panic_on_oops`` is not set. Setting this to 0 disables checking 678 + the count. Setting this to 1 has the same effect as setting 679 + ``panic_on_oops=1``. The default value is 10000. 680 + 681 + 673 682 osrelease, ostype & version 674 683 =========================== 675 684 ··· 1534 1525 1 Unprivileged calls to ``bpf()`` are disabled without recovery 1535 1526 2 Unprivileged calls to ``bpf()`` are disabled 1536 1527 = ============================================================= 1528 + 1529 + 1530 + warn_limit 1531 + ========== 1532 + 1533 + Number of kernel warnings after which the kernel should panic when 1534 + ``panic_on_warn`` is not set. Setting this to 0 disables checking 1535 + the warning count. Setting this to 1 has the same effect as setting 1536 + ``panic_on_warn=1``. The default value is 0. 1537 + 1537 1538 1538 1539 watchdog 1539 1540 ========
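The two limits documented above are plain counters compared against a threshold. A hedged userspace model of the policy (the real logic lives in kernel/exit.c and kernel/panic.c and escalates via panic(); this stub only returns whether escalation would happen, and oops_should_panic is an invented name):

```c
#include <assert.h>
#include <stdbool.h>

static unsigned int oops_count;

/* Model of the oops_limit policy: every oops bumps the counter,
 * and once the counter reaches a non-zero limit the kernel would
 * panic. A limit of 0 disables the check; a limit of 1 acts like
 * panic_on_oops=1. warn_limit works the same way for WARN()s. */
static bool oops_should_panic(unsigned int oops_limit)
{
	++oops_count;
	return oops_limit != 0 && oops_count >= oops_limit;
}
```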
+3
Documentation/core-api/kernel-api.rst
··· 36 36 String Manipulation 37 37 ------------------- 38 38 39 + .. kernel-doc:: include/linux/fortify-string.h 40 + :internal: 41 + 39 42 .. kernel-doc:: lib/string.c 40 43 :export: 41 44
+5 -1
MAINTAINERS
··· 8105 8105 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening 8106 8106 F: include/linux/fortify-string.h 8107 8107 F: lib/fortify_kunit.c 8108 + F: lib/memcpy_kunit.c 8109 + F: lib/strscpy_kunit.c 8108 8110 F: lib/test_fortify/* 8109 8111 F: scripts/test_fortify.sh 8110 8112 K: \b__NO_FORTIFY\b ··· 11210 11208 L: linux-hardening@vger.kernel.org 11211 11209 S: Supported 11212 11210 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening 11211 + F: Documentation/ABI/testing/sysfs-kernel-oops_count 11212 + F: Documentation/ABI/testing/sysfs-kernel-warn_count 11213 11213 F: include/linux/overflow.h 11214 11214 F: include/linux/randomize_kstack.h 11215 11215 F: mm/usercopy.c ··· 19054 19050 S: Maintained 19055 19051 F: include/linux/siphash.h 19056 19052 F: lib/siphash.c 19057 - F: lib/test_siphash.c 19053 + F: lib/siphash_kunit.c 19058 19054 19059 19055 SIS 190 ETHERNET DRIVER 19060 19056 M: Francois Romieu <romieu@fr.zoreil.com>
+1 -1
Makefile
··· 1120 1120 # We never want expected sections to be placed heuristically by the 1121 1121 # linker. All sections should be explicitly named in the linker script. 1122 1122 ifdef CONFIG_LD_ORPHAN_WARN 1123 - LDFLAGS_vmlinux += --orphan-handling=warn 1123 + LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL) 1124 1124 endif 1125 1125 1126 1126 # Align the bit size of userspace programs with the kernel
+1 -1
arch/arm/boot/compressed/Makefile
··· 124 124 LDFLAGS_vmlinux += -X 125 125 # Report orphan sections 126 126 ifdef CONFIG_LD_ORPHAN_WARN 127 - LDFLAGS_vmlinux += --orphan-handling=warn 127 + LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL) 128 128 endif 129 129 # Next argument is a linker script 130 130 LDFLAGS_vmlinux += -T
+1 -1
arch/arm64/kernel/vdso/Makefile
··· 27 27 -Bsymbolic --build-id=sha1 -n $(btildflags-y) 28 28 29 29 ifdef CONFIG_LD_ORPHAN_WARN 30 - ldflags-y += --orphan-handling=warn 30 + ldflags-y += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL) 31 31 endif 32 32 33 33 ldflags-y += -T
+1 -1
arch/arm64/kernel/vdso32/Makefile
··· 104 104 VDSO_LDFLAGS += -Bsymbolic --no-undefined -soname=linux-vdso.so.1 105 105 VDSO_LDFLAGS += -z max-page-size=4096 -z common-page-size=4096 106 106 VDSO_LDFLAGS += -shared --hash-style=sysv --build-id=sha1 107 - VDSO_LDFLAGS += --orphan-handling=warn 107 + VDSO_LDFLAGS += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL) 108 108 109 109 110 110 # Borrow vdsomunge.c from the arm vDSO
+6 -3
arch/um/drivers/virt-pci.c
··· 97 97 } 98 98 99 99 buf = get_cpu_var(um_pci_msg_bufs); 100 - memcpy(buf, cmd, cmd_size); 100 + if (buf) 101 + memcpy(buf, cmd, cmd_size); 101 102 102 103 if (posted) { 103 104 u8 *ncmd = kmalloc(cmd_size + extra_size, GFP_ATOMIC); ··· 183 182 struct um_pci_message_buffer *buf; 184 183 u8 *data; 185 184 unsigned long ret = ULONG_MAX; 185 + size_t bytes = sizeof(buf->data); 186 186 187 187 if (!dev) 188 188 return ULONG_MAX; ··· 191 189 buf = get_cpu_var(um_pci_msg_bufs); 192 190 data = buf->data; 193 191 194 - memset(buf->data, 0xff, sizeof(buf->data)); 192 + if (buf) 193 + memset(data, 0xff, bytes); 195 194 196 195 switch (size) { 197 196 case 1: ··· 207 204 goto out; 208 205 } 209 206 210 - if (um_pci_send_cmd(dev, &hdr, sizeof(hdr), NULL, 0, data, 8)) 207 + if (um_pci_send_cmd(dev, &hdr, sizeof(hdr), NULL, 0, data, bytes)) 211 208 goto out; 212 209 213 210 switch (size) {
+1 -1
arch/x86/boot/compressed/Makefile
··· 68 68 # address by the bootloader. 69 69 LDFLAGS_vmlinux := -pie $(call ld-option, --no-dynamic-linker) 70 70 ifdef CONFIG_LD_ORPHAN_WARN 71 - LDFLAGS_vmlinux += --orphan-handling=warn 71 + LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL) 72 72 endif 73 73 LDFLAGS_vmlinux += -z noexecstack 74 74 ifeq ($(CONFIG_LD_IS_BFD),y)
+13 -9
arch/x86/mm/pgtable.c
··· 299 299 pud_t *pud; 300 300 int i; 301 301 302 - if (PREALLOCATED_PMDS == 0) /* Work around gcc-3.4.x bug */ 303 - return; 304 - 305 302 p4d = p4d_offset(pgd, 0); 306 303 pud = pud_offset(p4d, 0); 307 304 ··· 431 434 432 435 mm->pgd = pgd; 433 436 434 - if (preallocate_pmds(mm, pmds, PREALLOCATED_PMDS) != 0) 437 + if (sizeof(pmds) != 0 && 438 + preallocate_pmds(mm, pmds, PREALLOCATED_PMDS) != 0) 435 439 goto out_free_pgd; 436 440 437 - if (preallocate_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS) != 0) 441 + if (sizeof(u_pmds) != 0 && 442 + preallocate_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS) != 0) 438 443 goto out_free_pmds; 439 444 440 445 if (paravirt_pgd_alloc(mm) != 0) ··· 450 451 spin_lock(&pgd_lock); 451 452 452 453 pgd_ctor(mm, pgd); 453 - pgd_prepopulate_pmd(mm, pgd, pmds); 454 - pgd_prepopulate_user_pmd(mm, pgd, u_pmds); 454 + if (sizeof(pmds) != 0) 455 + pgd_prepopulate_pmd(mm, pgd, pmds); 456 + 457 + if (sizeof(u_pmds) != 0) 458 + pgd_prepopulate_user_pmd(mm, pgd, u_pmds); 455 459 456 460 spin_unlock(&pgd_lock); 457 461 458 462 return pgd; 459 463 460 464 out_free_user_pmds: 461 - free_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS); 465 + if (sizeof(u_pmds) != 0) 466 + free_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS); 462 467 out_free_pmds: 463 - free_pmds(mm, pmds, PREALLOCATED_PMDS); 468 + if (sizeof(pmds) != 0) 469 + free_pmds(mm, pmds, PREALLOCATED_PMDS); 464 470 out_free_pgd: 465 471 _pgd_free(pgd); 466 472 out:
+1 -1
drivers/base/firmware_loader/main.c
··· 821 821 * called by a driver when serving an unrelated request from userland, we use 822 822 * the kernel credentials to read the file. 823 823 */ 824 - kern_cred = prepare_kernel_cred(NULL); 824 + kern_cred = prepare_kernel_cred(&init_task); 825 825 if (!kern_cred) { 826 826 ret = -ENOMEM; 827 827 goto out;
+7 -2
drivers/dma-buf/dma-resv.c
··· 98 98 static struct dma_resv_list *dma_resv_list_alloc(unsigned int max_fences) 99 99 { 100 100 struct dma_resv_list *list; 101 + size_t size; 101 102 102 - list = kmalloc(struct_size(list, table, max_fences), GFP_KERNEL); 103 + /* Round up to the next kmalloc bucket size. */ 104 + size = kmalloc_size_roundup(struct_size(list, table, max_fences)); 105 + 106 + list = kmalloc(size, GFP_KERNEL); 103 107 if (!list) 104 108 return NULL; 105 109 106 - list->max_fences = (ksize(list) - offsetof(typeof(*list), table)) / 110 + /* Given the resulting bucket size, recalculate max_fences. */ 111 + list->max_fences = (size - offsetof(typeof(*list), table)) / 107 112 sizeof(*list->table); 108 113 109 114 return list;
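kmalloc_size_roundup() reports the bucket size kmalloc() will actually use, so callers like the hunk above can claim the full capacity up front instead of querying ksize() after the fact. A simplified model of the rounding (the real slab allocator also has 96- and 192-byte buckets, so this pure power-of-two version is only an illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for kmalloc_size_roundup(): round a request
 * up to the next power-of-two allocation bucket, starting at the
 * smallest (8-byte) bucket. */
static size_t bucket_roundup(size_t size)
{
	size_t bucket = 8;

	while (bucket < size)
		bucket <<= 1;
	return bucket;
}
```

Allocating bucket_roundup(n) bytes and recording that as the capacity gives the same "free" headroom that ksize() used to reveal, without the side effect that made ksize() incompatible with __alloc_size hints.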
+3 -2
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_rgb.c
··· 60 60 return drm_panel_get_modes(fsl_connector->panel, connector); 61 61 } 62 62 63 - static int fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector, 64 - struct drm_display_mode *mode) 63 + static enum drm_mode_status 64 + fsl_dcu_drm_connector_mode_valid(struct drm_connector *connector, 65 + struct drm_display_mode *mode) 65 66 { 66 67 if (mode->hdisplay & 0xf) 67 68 return MODE_ERROR;
+1 -1
drivers/gpu/drm/i915/i915_user_extensions.c
··· 51 51 return err; 52 52 53 53 if (get_user(next, &ext->next_extension) || 54 - overflows_type(next, ext)) 54 + overflows_type(next, uintptr_t)) 55 55 return -EFAULT; 56 56 57 57 ext = u64_to_user_ptr(next);
-4
drivers/gpu/drm/i915/i915_utils.h
··· 111 111 #define range_overflows_end_t(type, start, size, max) \ 112 112 range_overflows_end((type)(start), (type)(size), (type)(max)) 113 113 114 - /* Note we don't consider signbits :| */ 115 - #define overflows_type(x, T) \ 116 - (sizeof(x) > sizeof(T) && (x) >> BITS_PER_TYPE(T)) 117 - 118 114 #define ptr_mask_bits(ptr, n) ({ \ 119 115 unsigned long __v = (unsigned long)(ptr); \ 120 116 (typeof(ptr))(__v & -BIT(n)); \
+3 -2
drivers/gpu/drm/sti/sti_dvo.c
··· 346 346 347 347 #define CLK_TOLERANCE_HZ 50 348 348 349 - static int sti_dvo_connector_mode_valid(struct drm_connector *connector, 350 - struct drm_display_mode *mode) 349 + static enum drm_mode_status 350 + sti_dvo_connector_mode_valid(struct drm_connector *connector, 351 + struct drm_display_mode *mode) 351 352 { 352 353 int target = mode->clock * 1000; 353 354 int target_min = target - CLK_TOLERANCE_HZ;
+3 -2
drivers/gpu/drm/sti/sti_hda.c
··· 601 601 602 602 #define CLK_TOLERANCE_HZ 50 603 603 604 - static int sti_hda_connector_mode_valid(struct drm_connector *connector, 605 - struct drm_display_mode *mode) 604 + static enum drm_mode_status 605 + sti_hda_connector_mode_valid(struct drm_connector *connector, 606 + struct drm_display_mode *mode) 606 607 { 607 608 int target = mode->clock * 1000; 608 609 int target_min = target - CLK_TOLERANCE_HZ;
+3 -2
drivers/gpu/drm/sti/sti_hdmi.c
··· 1004 1004 1005 1005 #define CLK_TOLERANCE_HZ 50 1006 1006 1007 - static int sti_hdmi_connector_mode_valid(struct drm_connector *connector, 1008 - struct drm_display_mode *mode) 1007 + static enum drm_mode_status 1008 + sti_hdmi_connector_mode_valid(struct drm_connector *connector, 1009 + struct drm_display_mode *mode) 1009 1010 { 1010 1011 int target = mode->clock * 1000; 1011 1012 int target_min = target - CLK_TOLERANCE_HZ;
+6 -5
fs/btrfs/send.c
··· 486 486 old_buf_len = p->buf_len; 487 487 488 488 /* 489 + * Allocate to the next largest kmalloc bucket size, to let 490 + * the fast path happen most of the time. 491 + */ 492 + len = kmalloc_size_roundup(len); 493 + /* 489 494 * First time the inline_buf does not suffice 490 495 */ 491 496 if (p->buf == p->inline_buf) { ··· 503 498 if (!tmp_buf) 504 499 return -ENOMEM; 505 500 p->buf = tmp_buf; 506 - /* 507 - * The real size of the buffer is bigger, this will let the fast path 508 - * happen most of the time 509 - */ 510 - p->buf_len = ksize(p->buf); 501 + p->buf_len = len; 511 502 512 503 if (p->reversed) { 513 504 tmp_buf = p->buf + old_buf_len - path_len - 1;
+1 -1
fs/cifs/cifs_spnego.c
··· 189 189 * spnego upcalls. 190 190 */ 191 191 192 - cred = prepare_kernel_cred(NULL); 192 + cred = prepare_kernel_cred(&init_task); 193 193 if (!cred) 194 194 return -ENOMEM; 195 195
+1 -1
fs/cifs/cifsacl.c
··· 470 470 * this is used to prevent malicious redirections from being installed 471 471 * with add_key(). 472 472 */ 473 - cred = prepare_kernel_cred(NULL); 473 + cred = prepare_kernel_cred(&init_task); 474 474 if (!cred) 475 475 return -ENOMEM; 476 476
+5 -2
fs/coredump.c
··· 68 68 69 69 static int expand_corename(struct core_name *cn, int size) 70 70 { 71 - char *corename = krealloc(cn->corename, size, GFP_KERNEL); 71 + char *corename; 72 + 73 + size = kmalloc_size_roundup(size); 74 + corename = krealloc(cn->corename, size, GFP_KERNEL); 72 75 73 76 if (!corename) 74 77 return -ENOMEM; ··· 79 76 if (size > core_name_size) /* racy but harmless */ 80 77 core_name_size = size; 81 78 82 - cn->size = ksize(corename); 79 + cn->size = size; 83 80 cn->corename = corename; 84 81 return 0; 85 82 }
+2 -2
fs/ksmbd/smb2pdu.c
··· 3438 3438 goto free_conv_name; 3439 3439 } 3440 3440 3441 - struct_sz = readdir_info_level_struct_sz(info_level) - 1 + conv_len; 3441 + struct_sz = readdir_info_level_struct_sz(info_level) + conv_len; 3442 3442 next_entry_offset = ALIGN(struct_sz, KSMBD_DIR_INFO_ALIGNMENT); 3443 3443 d_info->last_entry_off_align = next_entry_offset - struct_sz; 3444 3444 ··· 3690 3690 return -EOPNOTSUPP; 3691 3691 3692 3692 conv_len = (d_info->name_len + 1) * 2; 3693 - next_entry_offset = ALIGN(struct_sz - 1 + conv_len, 3693 + next_entry_offset = ALIGN(struct_sz + conv_len, 3694 3694 KSMBD_DIR_INFO_ALIGNMENT); 3695 3695 3696 3696 if (next_entry_offset > d_info->out_buf_len) {
+1 -1
fs/ksmbd/smb2pdu.h
··· 443 443 /* SidBuffer contain two sids (UNIX user sid(16), UNIX group sid(16)) */ 444 444 u8 SidBuffer[32]; 445 445 __le32 name_len; 446 - u8 name[1]; 446 + u8 name[]; 447 447 /* 448 448 * var sized owner SID 449 449 * var sized group SID
+1 -1
fs/ksmbd/smb_common.c
··· 623 623 if (share->force_gid != KSMBD_SHARE_INVALID_GID) 624 624 gid = share->force_gid; 625 625 626 - cred = prepare_kernel_cred(NULL); 626 + cred = prepare_kernel_cred(&init_task); 627 627 if (!cred) 628 628 return -ENOMEM; 629 629
+6 -6
fs/ksmbd/smb_common.h
··· 277 277 __le64 AllocationSize; 278 278 __le32 ExtFileAttributes; 279 279 __le32 FileNameLength; 280 - char FileName[1]; 280 + char FileName[]; 281 281 } __packed; /* level 0x101 FF resp data */ 282 282 283 283 struct file_names_info { 284 284 __le32 NextEntryOffset; 285 285 __u32 FileIndex; 286 286 __le32 FileNameLength; 287 - char FileName[1]; 287 + char FileName[]; 288 288 } __packed; /* level 0xc FF resp data */ 289 289 290 290 struct file_full_directory_info { ··· 299 299 __le32 ExtFileAttributes; 300 300 __le32 FileNameLength; 301 301 __le32 EaSize; 302 - char FileName[1]; 302 + char FileName[]; 303 303 } __packed; /* level 0x102 FF resp */ 304 304 305 305 struct file_both_directory_info { ··· 317 317 __u8 ShortNameLength; 318 318 __u8 Reserved; 319 319 __u8 ShortName[24]; 320 - char FileName[1]; 320 + char FileName[]; 321 321 } __packed; /* level 0x104 FFrsp data */ 322 322 323 323 struct file_id_both_directory_info { ··· 337 337 __u8 ShortName[24]; 338 338 __le16 Reserved2; 339 339 __le64 UniqueId; 340 - char FileName[1]; 340 + char FileName[]; 341 341 } __packed; 342 342 343 343 struct file_id_full_dir_info { ··· 354 354 __le32 EaSize; /* EA size */ 355 355 __le32 Reserved; 356 356 __le64 UniqueId; /* inode num - le since Samba puts ino in low 32 bit*/ 357 - char FileName[1]; 357 + char FileName[]; 358 358 } __packed; /* level 0x105 FF rsp data */ 359 359 360 360 struct smb_version_values {
+2 -2
fs/nfs/flexfilelayout/flexfilelayout.c
··· 493 493 gid = make_kgid(&init_user_ns, id); 494 494 495 495 if (gfp_flags & __GFP_FS) 496 - kcred = prepare_kernel_cred(NULL); 496 + kcred = prepare_kernel_cred(&init_task); 497 497 else { 498 498 unsigned int nofs_flags = memalloc_nofs_save(); 499 - kcred = prepare_kernel_cred(NULL); 499 + kcred = prepare_kernel_cred(&init_task); 500 500 memalloc_nofs_restore(nofs_flags); 501 501 } 502 502 rc = -ENOMEM;
+1 -1
fs/nfs/nfs4idmap.c
··· 203 203 printk(KERN_NOTICE "NFS: Registering the %s key type\n", 204 204 key_type_id_resolver.name); 205 205 206 - cred = prepare_kernel_cred(NULL); 206 + cred = prepare_kernel_cred(&init_task); 207 207 if (!cred) 208 208 return -ENOMEM; 209 209
+1 -1
fs/nfsd/nfs4callback.c
··· 942 942 } else { 943 943 struct cred *kcred; 944 944 945 - kcred = prepare_kernel_cred(NULL); 945 + kcred = prepare_kernel_cred(&init_task); 946 946 if (!kcred) 947 947 return NULL; 948 948
+1
include/linux/compiler.h
··· 236 236 * bool and also pointer types. 237 237 */ 238 238 #define is_signed_type(type) (((type)(-1)) < (__force type)1) 239 + #define is_unsigned_type(type) (!is_signed_type(type)) 239 240 240 241 /* 241 242 * This is needed in functions which generate the stack canary, see
+4 -3
include/linux/device.h
··· 197 197 int devres_release_group(struct device *dev, void *id); 198 198 199 199 /* managed devm_k.alloc/kfree for device drivers */ 200 - void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp) __malloc; 200 + void *devm_kmalloc(struct device *dev, size_t size, gfp_t gfp) __alloc_size(2); 201 201 void *devm_krealloc(struct device *dev, void *ptr, size_t size, 202 - gfp_t gfp) __must_check; 202 + gfp_t gfp) __must_check __realloc_size(3); 203 203 __printf(3, 0) char *devm_kvasprintf(struct device *dev, gfp_t gfp, 204 204 const char *fmt, va_list ap) __malloc; 205 205 __printf(3, 4) char *devm_kasprintf(struct device *dev, gfp_t gfp, ··· 226 226 void devm_kfree(struct device *dev, const void *p); 227 227 char *devm_kstrdup(struct device *dev, const char *s, gfp_t gfp) __malloc; 228 228 const char *devm_kstrdup_const(struct device *dev, const char *s, gfp_t gfp); 229 - void *devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp); 229 + void *devm_kmemdup(struct device *dev, const void *src, size_t len, gfp_t gfp) 230 + __realloc_size(3); 230 231 231 232 unsigned long devm_get_free_pages(struct device *dev, 232 233 gfp_t gfp_mask, unsigned int order);
+136 -12
include/linux/fortify-string.h
··· 18 18 19 19 #define __compiletime_strlen(p) \ 20 20 ({ \ 21 - unsigned char *__p = (unsigned char *)(p); \ 21 + char *__p = (char *)(p); \ 22 22 size_t __ret = SIZE_MAX; \ 23 23 size_t __p_size = __member_size(p); \ 24 24 if (__p_size != SIZE_MAX && \ ··· 119 119 * Instead, please choose an alternative, so that the expectation 120 120 * of @p's contents is unambiguous: 121 121 * 122 - * +--------------------+-----------------+------------+ 123 - * | @p needs to be: | padded to @size | not padded | 124 - * +====================+=================+============+ 125 - * | NUL-terminated | strscpy_pad() | strscpy() | 126 - * +--------------------+-----------------+------------+ 127 - * | not NUL-terminated | strtomem_pad() | strtomem() | 128 - * +--------------------+-----------------+------------+ 122 + * +--------------------+--------------------+------------+ 123 + * | **p** needs to be: | padded to **size** | not padded | 124 + * +====================+====================+============+ 125 + * | NUL-terminated | strscpy_pad() | strscpy() | 126 + * +--------------------+--------------------+------------+ 127 + * | not NUL-terminated | strtomem_pad() | strtomem() | 128 + * +--------------------+--------------------+------------+ 129 129 * 130 130 * Note strscpy*()'s differing return values for detecting truncation, 131 131 * and strtomem*()'s expectation that the destination is marked with ··· 144 144 return __underlying_strncpy(p, q, size); 145 145 } 146 146 147 + /** 148 + * strcat - Append a string to an existing string 149 + * 150 + * @p: pointer to NUL-terminated string to append to 151 + * @q: pointer to NUL-terminated source string to append from 152 + * 153 + * Do not use this function. While FORTIFY_SOURCE tries to avoid 154 + * read and write overflows, this is only possible when the 155 + * destination buffer size is known to the compiler. Prefer 156 + * building the string with formatting, via scnprintf() or similar.
157 + At the very least, use strncat(). 158 + * 159 + * Returns @p. 160 + * 161 + */ 147 162 __FORTIFY_INLINE __diagnose_as(__builtin_strcat, 1, 2) 148 163 char *strcat(char * const POS p, const char *q) 149 164 { ··· 172 157 } 173 158 174 159 extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen); 160 + /** 161 + * strnlen - Return bounded count of characters in a NUL-terminated string 162 + * 163 + * @p: pointer to NUL-terminated string to count. 164 + * @maxlen: maximum number of characters to count. 165 + * 166 + * Returns number of characters in @p (NOT including the final NUL), or 167 + * @maxlen, if no NUL has been found up to there. 168 + * 169 + */ 175 170 __FORTIFY_INLINE __kernel_size_t strnlen(const char * const POS p, __kernel_size_t maxlen) 176 171 { 177 172 size_t p_size = __member_size(p); ··· 207 182 * possible for strlen() to be used on compile-time strings for use in 208 183 * static initializers (i.e. as a constant expression). 209 184 */ 185 + /** 186 + * strlen - Return count of characters in a NUL-terminated string 187 + * 188 + * @p: pointer to NUL-terminated string to count. 189 + * 190 + * Do not use this function unless the string length is known at 191 + * compile-time. When @p is unterminated, this function may crash 192 + * or return unexpected counts that could lead to memory content 193 + * exposures. Prefer strnlen(). 194 + * 195 + * Returns number of characters in @p (NOT including the final NUL). 196 + * 197 + */ 210 198 #define strlen(p) \ 211 199 __builtin_choose_expr(__is_constexpr(__builtin_strlen(p)), \ 212 200 __builtin_strlen(p), __fortify_strlen(p)) ··· 238 200 return ret; 239 201 } 240 202 241 - /* defined after fortified strlen to reuse it */ 203 + /* Defined after fortified strlen() to reuse it.
*/ 242 204 extern size_t __real_strlcpy(char *, const char *, size_t) __RENAME(strlcpy); 205 + /** 206 + * strlcpy - Copy a string into another string buffer 207 + * 208 + * @p: pointer to destination of copy 209 + * @q: pointer to NUL-terminated source string to copy 210 + * @size: maximum number of bytes to write at @p 211 + * 212 + * If strlen(@q) >= @size, the copy of @q will be truncated at 213 + * @size - 1 bytes. @p will always be NUL-terminated. 214 + * 215 + * Do not use this function. While FORTIFY_SOURCE tries to avoid 216 + * over-reads when calculating strlen(@q), it is still possible. 217 + * Prefer strscpy(), though note its different return values for 218 + * detecting truncation. 219 + * 220 + * Returns total number of bytes written to @p, including terminating NUL. 221 + * 222 + */ 243 223 __FORTIFY_INLINE size_t strlcpy(char * const POS p, const char * const POS q, size_t size) 244 224 { 245 225 size_t p_size = __member_size(p); ··· 283 227 return q_len; 284 228 } 285 229 286 - /* defined after fortified strnlen to reuse it */ 230 + /* Defined after fortified strnlen() to reuse it. */ 287 231 extern ssize_t __real_strscpy(char *, const char *, size_t) __RENAME(strscpy); 232 + /** 233 + * strscpy - Copy a C-string into a sized buffer 234 + * 235 + * @p: Where to copy the string to 236 + * @q: Where to copy the string from 237 + * @size: Size of destination buffer 238 + * 239 + * Copy the source string @q, or as much of it as fits, into the destination 240 + * @p buffer. The behavior is undefined if the string buffers overlap. The 241 + * destination @p buffer is always NUL terminated, unless it's zero-sized. 242 + * 243 + * Preferred to strlcpy() since the API doesn't require reading memory 244 + * from the source @q string beyond the specified @size bytes, and since 245 + * the return value is easier to error-check than strlcpy()'s.
246 + In addition, the implementation is robust to the string changing out 247 + from underneath it, unlike the current strlcpy() implementation. 248 + * 249 + * Preferred to strncpy() since it always returns a valid string, and 250 + * doesn't unnecessarily force the tail of the destination buffer to be 251 + * zero padded. If padding is desired please use strscpy_pad(). 252 + * 253 + * Returns the number of characters copied in @p (not including the 254 + * trailing %NUL) or -E2BIG if @size is 0 or the copy of @q was truncated. 255 + */ 288 256 __FORTIFY_INLINE ssize_t strscpy(char * const POS p, const char * const POS q, size_t size) 289 257 { 290 258 size_t len; ··· 326 246 */ 327 247 if (__compiletime_lessthan(p_size, size)) 328 248 __write_overflow(); 249 + 250 + /* Short-circuit for compile-time known-safe lengths. */ 251 + if (__compiletime_lessthan(p_size, SIZE_MAX)) { 252 + len = __compiletime_strlen(q); 253 + 254 + if (len < SIZE_MAX && __compiletime_lessthan(len, size)) { 255 + __underlying_memcpy(p, q, len + 1); 256 + return len; 257 + } 258 + } 329 259 330 260 /* 331 261 * This call protects from read overflow, because len will default to q ··· 364 274 return __real_strscpy(p, q, len); 365 275 } 366 276 367 - /* defined after fortified strlen and strnlen to reuse them */ 277 + /** 278 + * strncat - Append a string to an existing string 279 + * 280 + * @p: pointer to NUL-terminated string to append to 281 + * @q: pointer to source string to append from 282 + * @count: Maximum bytes to read from @q 283 + * 284 + * Appends at most @count bytes from @q (stopping at the first 285 + * NUL byte) after the NUL-terminated string at @p. @p will be 286 + * NUL-terminated. 287 + * 288 + * Do not use this function. While FORTIFY_SOURCE tries to avoid 289 + * read and write overflows, this is only possible when the sizes 290 + * of @p and @q are known to the compiler. Prefer building the 291 + * string with formatting, via scnprintf() or similar.
292 + * 293 + * Returns @p. 294 + * 295 + */ 296 + /* Defined after fortified strlen() and strnlen() to reuse them. */ 368 297 __FORTIFY_INLINE __diagnose_as(__builtin_strncat, 1, 2, 3) 369 298 char *strncat(char * const POS p, const char * const POS q, __kernel_size_t count) 370 299 { ··· 682 573 return __real_memchr_inv(p, c, size); 683 574 } 684 575 685 - extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup); 576 + extern void *__real_kmemdup(const void *src, size_t len, gfp_t gfp) __RENAME(kmemdup) 577 + __realloc_size(2); 686 578 __FORTIFY_INLINE void *kmemdup(const void * const POS0 p, size_t size, gfp_t gfp) 687 579 { 688 580 size_t p_size = __struct_size(p); ··· 695 585 return __real_kmemdup(p, size, gfp); 696 586 } 697 587 588 + /** 589 + * strcpy - Copy a string into another string buffer 590 + * 591 + * @p: pointer to destination of copy 592 + * @q: pointer to NUL-terminated source string to copy 593 + * 594 + * Do not use this function. While FORTIFY_SOURCE tries to avoid 595 + * overflows, this is only possible when the sizes of @q and @p are 596 + * known to the compiler. Prefer strscpy(), though note its different 597 + * return values for detecting truncation. 598 + * 599 + * Returns @p. 600 + * 601 + */ 698 602 /* Defined after fortified strlen to reuse it. */ 699 603 __FORTIFY_INLINE __diagnose_as(__builtin_strcpy, 1, 2) 700 604 char *strcpy(char * const POS p, const char * const POS q)
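The strscpy() kern-doc above boils down to a small contract: copy at most size - 1 bytes, always NUL-terminate a non-empty destination, and report truncation as -E2BIG. A minimal userspace model of that contract (my_strscpy and E2BIG_ERR are hypothetical stand-ins; the kernel version additionally does word-at-a-time copies and the FORTIFY bounds checks shown in the hunk):

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

#define E2BIG_ERR 7	/* stand-in for the kernel's E2BIG */

/* Model of strscpy() semantics: bounded copy, guaranteed NUL
 * termination, and an unambiguous truncation indicator. */
static ssize_t my_strscpy(char *dst, const char *src, size_t size)
{
	size_t len = strlen(src);

	if (size == 0)
		return -E2BIG_ERR;
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -E2BIG_ERR;	/* truncated */
	}
	memcpy(dst, src, len + 1);
	return (ssize_t)len;
}
```

Unlike strlcpy(), the return value never depends on bytes of the source beyond size, which is why the pull description calls its return easier to error-check.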
+1 -1
include/linux/hpet.h
··· 30 30 unsigned long _hpet_compare; 31 31 } _u1; 32 32 u64 hpet_fsb[2]; /* FSB route */ 33 - } hpet_timers[1]; 33 + } hpet_timers[]; 34 34 }; 35 35 36 36 #define hpet_mc _u0._hpet_mc
+47
include/linux/overflow.h
··· 128 128 (*_d >> _to_shift) != _a); \ 129 129 })) 130 130 131 + #define __overflows_type_constexpr(x, T) ( \ 132 + is_unsigned_type(typeof(x)) ? \ 133 + (x) > type_max(typeof(T)) : \ 134 + is_unsigned_type(typeof(T)) ? \ 135 + (x) < 0 || (x) > type_max(typeof(T)) : \ 136 + (x) < type_min(typeof(T)) || (x) > type_max(typeof(T))) 137 + 138 + #define __overflows_type(x, T) ({ \ 139 + typeof(T) v = 0; \ 140 + check_add_overflow((x), v, &v); \ 141 + }) 142 + 143 + /** 144 + * overflows_type - helper for checking the overflows between value, variables, 145 + * or data type 146 + * 147 + * @n: source constant value or variable to be checked 148 + * @T: destination variable or data type proposed to store @x 149 + * 150 + * Compares the @x expression for whether or not it can safely fit in 151 + * the storage of the type in @T. @x and @T can have different types. 152 + * If @x is a constant expression, this will also resolve to a constant 153 + * expression. 154 + * 155 + * Returns: true if overflow can occur, false otherwise. 156 + */ 157 + #define overflows_type(n, T) \ 158 + __builtin_choose_expr(__is_constexpr(n), \ 159 + __overflows_type_constexpr(n, T), \ 160 + __overflows_type(n, T)) 161 + 162 + /** 163 + * castable_to_type - like __same_type(), but also allows for casted literals 164 + * 165 + * @n: variable or constant value 166 + * @T: variable or data type 167 + * 168 + * Unlike the __same_type() macro, this allows a constant value as the 169 + * first argument. If this value would not overflow into an assignment 170 + * of the second argument's type, it returns true. Otherwise, this falls 171 + * back to __same_type(). 172 + */ 173 + #define castable_to_type(n, T) \ 174 + __builtin_choose_expr(__is_constexpr(n), \ 175 + !__overflows_type_constexpr(n, T), \ 176 + __same_type(n, T)) 177 + 131 178 /** 132 179 * size_mul() - Calculate size_t multiplication with saturation at SIZE_MAX 133 180 * @factor1: first factor
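For one concrete type pair, the range check that overflows_type() performs generically (and as a constant expression for literals, via __overflows_type_constexpr() above) looks like this; overflows_schar is an invented name for the illustration:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Does the value fit a signed char? This is the signed-destination
 * arm of __overflows_type_constexpr(), spelled out for one type:
 * overflow means falling outside [type_min(T), type_max(T)]. */
static bool overflows_schar(long long n)
{
	return n < SCHAR_MIN || n > SCHAR_MAX;
}
```

castable_to_type(n, T) is then essentially the negation of this check for constant values, falling back to __same_type() for non-constant expressions.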
+1
include/linux/panic.h
··· 11 11 __printf(1, 2) 12 12 void panic(const char *fmt, ...) __noreturn __cold; 13 13 void nmi_panic(struct pt_regs *regs, const char *msg); 14 + void check_panic_on_warn(const char *origin); 14 15 extern void oops_enter(void); 15 16 extern void oops_exit(void); 16 17 extern bool oops_may_print(void);
+1 -1
include/linux/string.h
··· 176 176 extern char *kstrdup(const char *s, gfp_t gfp) __malloc; 177 177 extern const char *kstrdup_const(const char *s, gfp_t gfp); 178 178 extern char *kstrndup(const char *s, size_t len, gfp_t gfp); 179 - extern void *kmemdup(const void *src, size_t len, gfp_t gfp); 179 + extern void *kmemdup(const void *src, size_t len, gfp_t gfp) __realloc_size(2); 180 180 extern char *kmemdup_nul(const char *s, size_t len, gfp_t gfp); 181 181 182 182 extern char **argv_split(gfp_t gfp, const char *str, int *argcp);
+12 -3
init/Kconfig
··· 159 159 help 160 160 A kernel build should not cause any compiler warnings, and this 161 161 enables the '-Werror' (for C) and '-Dwarnings' (for Rust) flags 162 - to enforce that rule by default. 162 + to enforce that rule by default. Certain warnings from other tools 163 + such as the linker may be upgraded to errors with this option as 164 + well. 163 165 164 - However, if you have a new (or very old) compiler with odd and 165 - unusual warnings, or you have some architecture with problems, 166 + However, if you have a new (or very old) compiler or linker with odd 167 + and unusual warnings, or you have some architecture with problems, 166 168 you may need to disable this config option in order to 167 169 successfully build the kernel. 168 170 ··· 1456 1454 def_bool y 1457 1455 depends on ARCH_WANT_LD_ORPHAN_WARN 1458 1456 depends on $(ld-option,--orphan-handling=warn) 1457 + depends on $(ld-option,--orphan-handling=error) 1458 + 1459 + config LD_ORPHAN_WARN_LEVEL 1460 + string 1461 + depends on LD_ORPHAN_WARN 1462 + default "error" if WERROR 1463 + default "warn" 1459 1464 1460 1465 config SYSCTL 1461 1466 bool
+7 -8
kernel/cred.c
··· 701 701 * override a task's own credentials so that work can be done on behalf of that 702 702 * task that requires a different subjective context. 703 703 * 704 - * @daemon is used to provide a base for the security record, but can be NULL. 705 - * If @daemon is supplied, then the security data will be derived from that; 706 - * otherwise they'll be set to 0 and no groups, full capabilities and no keys. 704 + * @daemon is used to provide a base cred, with the security data derived from 705 + * that; if this is "&init_task", they'll be set to 0, no groups, full 706 + * capabilities, and no keys. 707 707 * 708 708 * The caller may change these controls afterwards if desired. 709 709 * ··· 714 714 const struct cred *old; 715 715 struct cred *new; 716 716 717 + if (WARN_ON_ONCE(!daemon)) 718 + return NULL; 719 + 717 720 new = kmem_cache_alloc(cred_jar, GFP_KERNEL); 718 721 if (!new) 719 722 return NULL; 720 723 721 724 kdebug("prepare_kernel_cred() alloc %p", new); 722 725 723 - if (daemon) 724 - old = get_task_cred(daemon); 725 - else 726 - old = get_cred(&init_cred); 727 - 726 + old = get_task_cred(daemon); 728 727 validate_creds(old); 729 728 730 729 *new = *old;
+60
kernel/exit.c
··· 67 67 #include <linux/io_uring.h> 68 68 #include <linux/kprobes.h> 69 69 #include <linux/rethook.h> 70 + #include <linux/sysfs.h> 70 71 71 72 #include <linux/uaccess.h> 72 73 #include <asm/unistd.h> 73 74 #include <asm/mmu_context.h> 75 + 76 + /* 77 + * The default value should be high enough to not crash a system that randomly 78 + * crashes its kernel from time to time, but low enough to at least not permit 79 + * overflowing 32-bit refcounts or the ldsem writer count. 80 + */ 81 + static unsigned int oops_limit = 10000; 82 + 83 + #ifdef CONFIG_SYSCTL 84 + static struct ctl_table kern_exit_table[] = { 85 + { 86 + .procname = "oops_limit", 87 + .data = &oops_limit, 88 + .maxlen = sizeof(oops_limit), 89 + .mode = 0644, 90 + .proc_handler = proc_douintvec, 91 + }, 92 + { } 93 + }; 94 + 95 + static __init int kernel_exit_sysctls_init(void) 96 + { 97 + register_sysctl_init("kernel", kern_exit_table); 98 + return 0; 99 + } 100 + late_initcall(kernel_exit_sysctls_init); 101 + #endif 102 + 103 + static atomic_t oops_count = ATOMIC_INIT(0); 104 + 105 + #ifdef CONFIG_SYSFS 106 + static ssize_t oops_count_show(struct kobject *kobj, struct kobj_attribute *attr, 107 + char *page) 108 + { 109 + return sysfs_emit(page, "%d\n", atomic_read(&oops_count)); 110 + } 111 + 112 + static struct kobj_attribute oops_count_attr = __ATTR_RO(oops_count); 113 + 114 + static __init int kernel_exit_sysfs_init(void) 115 + { 116 + sysfs_add_file_to_group(kernel_kobj, &oops_count_attr.attr, NULL); 117 + return 0; 118 + } 119 + late_initcall(kernel_exit_sysfs_init); 120 + #endif 74 121 75 122 static void __unhash_process(struct task_struct *p, bool group_dead) 76 123 { ··· 943 896 preempt_count()); 944 897 preempt_count_set(PREEMPT_ENABLED); 945 898 } 899 + 900 + /* 901 + * Every time the system oopses, if the oops happens while a reference 902 + * to an object was held, the reference leaks. 
903 + * If the oops doesn't also leak memory, repeated oopsing can cause 904 + * reference counters to wrap around (if they're not using refcount_t). 905 + * This means that repeated oopsing can make unexploitable-looking bugs 906 + * exploitable through repeated oopsing. 907 + * To make sure this can't happen, place an upper bound on how often the 908 + * kernel may oops without panic(). 909 + */ 910 + if (atomic_inc_return(&oops_count) >= READ_ONCE(oops_limit) && oops_limit) 911 + panic("Oopsed too often (kernel.oops_limit is %d)", oops_limit); 946 912 947 913 /* 948 914 * We're taking recursive faults here in make_task_dead. Safest is to just
+1 -2
kernel/kcsan/report.c
··· 492 492 dump_stack_print_info(KERN_DEFAULT); 493 493 pr_err("==================================================================\n"); 494 494 495 - if (panic_on_warn) 496 - panic("panic_on_warn set ...\n"); 495 + check_panic_on_warn("KCSAN"); 497 496 } 498 497 499 498 static void release_report(unsigned long *flags, struct other_info *other_info)
+42 -3
kernel/panic.c
··· 33 33 #include <linux/bug.h> 34 34 #include <linux/ratelimit.h> 35 35 #include <linux/debugfs.h> 36 + #include <linux/sysfs.h> 36 37 #include <trace/events/error_report.h> 37 38 #include <asm/sections.h> 38 39 ··· 60 59 int panic_on_warn __read_mostly; 61 60 unsigned long panic_on_taint; 62 61 bool panic_on_taint_nousertaint = false; 62 + static unsigned int warn_limit __read_mostly; 63 63 64 64 int panic_timeout = CONFIG_PANIC_TIMEOUT; 65 65 EXPORT_SYMBOL_GPL(panic_timeout); ··· 78 76 79 77 EXPORT_SYMBOL(panic_notifier_list); 80 78 81 - #if defined(CONFIG_SMP) && defined(CONFIG_SYSCTL) 79 + #ifdef CONFIG_SYSCTL 82 80 static struct ctl_table kern_panic_table[] = { 81 + #ifdef CONFIG_SMP 83 82 { 84 83 .procname = "oops_all_cpu_backtrace", 85 84 .data = &sysctl_oops_all_cpu_backtrace, ··· 89 86 .proc_handler = proc_dointvec_minmax, 90 87 .extra1 = SYSCTL_ZERO, 91 88 .extra2 = SYSCTL_ONE, 89 + }, 90 + #endif 91 + { 92 + .procname = "warn_limit", 93 + .data = &warn_limit, 94 + .maxlen = sizeof(warn_limit), 95 + .mode = 0644, 96 + .proc_handler = proc_douintvec, 92 97 }, 93 98 { } 94 99 }; ··· 107 96 return 0; 108 97 } 109 98 late_initcall(kernel_panic_sysctls_init); 99 + #endif 100 + 101 + static atomic_t warn_count = ATOMIC_INIT(0); 102 + 103 + #ifdef CONFIG_SYSFS 104 + static ssize_t warn_count_show(struct kobject *kobj, struct kobj_attribute *attr, 105 + char *page) 106 + { 107 + return sysfs_emit(page, "%d\n", atomic_read(&warn_count)); 108 + } 109 + 110 + static struct kobj_attribute warn_count_attr = __ATTR_RO(warn_count); 111 + 112 + static __init int kernel_panic_sysfs_init(void) 113 + { 114 + sysfs_add_file_to_group(kernel_kobj, &warn_count_attr.attr, NULL); 115 + return 0; 116 + } 117 + late_initcall(kernel_panic_sysfs_init); 110 118 #endif 111 119 112 120 static long no_blink(int state) ··· 228 198 229 199 if (panic_print & PANIC_PRINT_FTRACE_INFO) 230 200 ftrace_dump(DUMP_ALL); 201 + } 202 + 203 + void check_panic_on_warn(const char *origin) 204 + { 205 
+ if (panic_on_warn) 206 + panic("%s: panic_on_warn set ...\n", origin); 207 + 208 + if (atomic_inc_return(&warn_count) >= READ_ONCE(warn_limit) && warn_limit) 209 + panic("%s: system warned too often (kernel.warn_limit is %d)", 210 + origin, warn_limit); 231 211 } 232 212 233 213 /** ··· 658 618 if (regs) 659 619 show_regs(regs); 660 620 661 - if (panic_on_warn) 662 - panic("panic_on_warn set ...\n"); 621 + check_panic_on_warn("kernel"); 663 622 664 623 if (!regs) 665 624 dump_stack();
+1 -2
kernel/sched/core.c
··· 5782 5782 pr_err("Preemption disabled at:"); 5783 5783 print_ip_sym(KERN_ERR, preempt_disable_ip); 5784 5784 } 5785 - if (panic_on_warn) 5786 - panic("scheduling while atomic\n"); 5785 + check_panic_on_warn("scheduling while atomic"); 5787 5786 5788 5787 dump_stack(); 5789 5788 add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
+1
kernel/signal.c
··· 2693 2693 /* Has this task already been marked for death? */ 2694 2694 if ((signal->flags & SIGNAL_GROUP_EXIT) || 2695 2695 signal->group_exec_task) { 2696 + clear_siginfo(&ksig->info); 2696 2697 ksig->info.si_signo = signr = SIGKILL; 2697 2698 sigdelset(&current->pending.signal, SIGKILL); 2698 2699 trace_signal_deliver(SIGKILL, SEND_SIG_NOINFO,
+16 -12
lib/Kconfig.debug
··· 2234 2234 config TEST_STRING_HELPERS 2235 2235 tristate "Test functions located in the string_helpers module at runtime" 2236 2236 2237 - config TEST_STRSCPY 2238 - tristate "Test strscpy*() family of functions at runtime" 2239 - 2240 2237 config TEST_KSTRTOX 2241 2238 tristate "Test kstrto*() family of functions at runtime" 2242 2239 ··· 2267 2270 Enable this option to test the rhashtable functions at boot. 2268 2271 2269 2272 If unsure, say N. 2270 - 2271 - config TEST_SIPHASH 2272 - tristate "Perform selftest on siphash functions" 2273 - help 2274 - Enable this option to test the kernel's siphash (<linux/siphash.h>) hash 2275 - functions on boot (or module load). 2276 - 2277 - This is intended to help people writing architecture-specific 2278 - optimized versions. If unsure, say N. 2279 2273 2280 2274 config TEST_IDA 2281 2275 tristate "Perform selftest on IDA functions" ··· 2594 2606 Tests for hw_breakpoint constraints accounting. 2595 2607 2596 2608 If unsure, say N. 2609 + 2610 + config STRSCPY_KUNIT_TEST 2611 + tristate "Test strscpy*() family of functions at runtime" if !KUNIT_ALL_TESTS 2612 + depends on KUNIT 2613 + default KUNIT_ALL_TESTS 2614 + 2615 + config SIPHASH_KUNIT_TEST 2616 + tristate "Perform selftest on siphash functions" if !KUNIT_ALL_TESTS 2617 + depends on KUNIT 2618 + default KUNIT_ALL_TESTS 2619 + help 2620 + Enable this option to test the kernel's siphash (<linux/siphash.h>) hash 2621 + functions on boot (or module load). 2622 + 2623 + This is intended to help people writing architecture-specific 2624 + optimized versions. If unsure, say N. 2597 2625 2598 2626 config TEST_UDELAY 2599 2627 tristate "udelay test driver"
+5 -2
lib/Makefile
··· 62 62 CFLAGS_test_bitops.o += -Werror 63 63 obj-$(CONFIG_CPUMASK_KUNIT_TEST) += cpumask_kunit.o 64 64 obj-$(CONFIG_TEST_SYSCTL) += test_sysctl.o 65 - obj-$(CONFIG_TEST_SIPHASH) += test_siphash.o 66 65 obj-$(CONFIG_HASH_KUNIT_TEST) += test_hash.o 67 66 obj-$(CONFIG_TEST_IDA) += test_ida.o 68 67 obj-$(CONFIG_TEST_UBSAN) += test_ubsan.o ··· 81 82 obj-$(CONFIG_TEST_PRINTF) += test_printf.o 82 83 obj-$(CONFIG_TEST_SCANF) += test_scanf.o 83 84 obj-$(CONFIG_TEST_BITMAP) += test_bitmap.o 84 - obj-$(CONFIG_TEST_STRSCPY) += test_strscpy.o 85 85 obj-$(CONFIG_TEST_UUID) += test_uuid.o 86 86 obj-$(CONFIG_TEST_XARRAY) += test_xarray.o 87 87 obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o ··· 375 377 obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o 376 378 obj-$(CONFIG_MEMCPY_KUNIT_TEST) += memcpy_kunit.o 377 379 obj-$(CONFIG_IS_SIGNED_TYPE_KUNIT_TEST) += is_signed_type_kunit.o 380 + CFLAGS_overflow_kunit.o = $(call cc-disable-warning, tautological-constant-out-of-range-compare) 378 381 obj-$(CONFIG_OVERFLOW_KUNIT_TEST) += overflow_kunit.o 379 382 CFLAGS_stackinit_kunit.o += $(call cc-disable-warning, switch-unreachable) 380 383 obj-$(CONFIG_STACKINIT_KUNIT_TEST) += stackinit_kunit.o 384 + CFLAGS_fortify_kunit.o += $(call cc-disable-warning, unsequenced) 385 + CFLAGS_fortify_kunit.o += $(DISABLE_STRUCTLEAK_PLUGIN) 381 386 obj-$(CONFIG_FORTIFY_KUNIT_TEST) += fortify_kunit.o 387 + obj-$(CONFIG_STRSCPY_KUNIT_TEST) += strscpy_kunit.o 388 + obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o 382 389 383 390 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o 384 391
+255
lib/fortify_kunit.c
··· 16 16 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 17 17 18 18 #include <kunit/test.h> 19 + #include <linux/device.h> 20 + #include <linux/slab.h> 19 21 #include <linux/string.h> 22 + #include <linux/vmalloc.h> 20 23 21 24 static const char array_of_10[] = "this is 10"; 22 25 static const char *ptr_of_11 = "this is 11!"; ··· 63 60 KUNIT_EXPECT_EQ(test, want_minus_one(pick), SIZE_MAX); 64 61 } 65 62 63 + #define KUNIT_EXPECT_BOS(test, p, expected, name) \ 64 + KUNIT_EXPECT_EQ_MSG(test, __builtin_object_size(p, 1), \ 65 + expected, \ 66 + "__alloc_size() not working with __bos on " name "\n") 67 + 68 + #if !__has_builtin(__builtin_dynamic_object_size) 69 + #define KUNIT_EXPECT_BDOS(test, p, expected, name) \ 70 + /* Silence "unused variable 'expected'" warning. */ \ 71 + KUNIT_EXPECT_EQ(test, expected, expected) 72 + #else 73 + #define KUNIT_EXPECT_BDOS(test, p, expected, name) \ 74 + KUNIT_EXPECT_EQ_MSG(test, __builtin_dynamic_object_size(p, 1), \ 75 + expected, \ 76 + "__alloc_size() not working with __bdos on " name "\n") 77 + #endif 78 + 79 + /* If the execpted size is a constant value, __bos can see it. */ 80 + #define check_const(_expected, alloc, free) do { \ 81 + size_t expected = (_expected); \ 82 + void *p = alloc; \ 83 + KUNIT_EXPECT_TRUE_MSG(test, p != NULL, #alloc " failed?!\n"); \ 84 + KUNIT_EXPECT_BOS(test, p, expected, #alloc); \ 85 + KUNIT_EXPECT_BDOS(test, p, expected, #alloc); \ 86 + free; \ 87 + } while (0) 88 + 89 + /* If the execpted size is NOT a constant value, __bos CANNOT see it. */ 90 + #define check_dynamic(_expected, alloc, free) do { \ 91 + size_t expected = (_expected); \ 92 + void *p = alloc; \ 93 + KUNIT_EXPECT_TRUE_MSG(test, p != NULL, #alloc " failed?!\n"); \ 94 + KUNIT_EXPECT_BOS(test, p, SIZE_MAX, #alloc); \ 95 + KUNIT_EXPECT_BDOS(test, p, expected, #alloc); \ 96 + free; \ 97 + } while (0) 98 + 99 + /* Assortment of constant-value kinda-edge cases. 
*/ 100 + #define CONST_TEST_BODY(TEST_alloc) do { \ 101 + /* Special-case vmalloc()-family to skip 0-sized allocs. */ \ 102 + if (strcmp(#TEST_alloc, "TEST_vmalloc") != 0) \ 103 + TEST_alloc(check_const, 0, 0); \ 104 + TEST_alloc(check_const, 1, 1); \ 105 + TEST_alloc(check_const, 128, 128); \ 106 + TEST_alloc(check_const, 1023, 1023); \ 107 + TEST_alloc(check_const, 1025, 1025); \ 108 + TEST_alloc(check_const, 4096, 4096); \ 109 + TEST_alloc(check_const, 4097, 4097); \ 110 + } while (0) 111 + 112 + static volatile size_t zero_size; 113 + static volatile size_t unknown_size = 50; 114 + 115 + #if !__has_builtin(__builtin_dynamic_object_size) 116 + #define DYNAMIC_TEST_BODY(TEST_alloc) \ 117 + kunit_skip(test, "Compiler is missing __builtin_dynamic_object_size() support\n") 118 + #else 119 + #define DYNAMIC_TEST_BODY(TEST_alloc) do { \ 120 + size_t size = unknown_size; \ 121 + \ 122 + /* \ 123 + * Expected size is "size" in each test, before it is then \ 124 + * internally incremented in each test. Requires we disable \ 125 + * -Wunsequenced. \ 126 + */ \ 127 + TEST_alloc(check_dynamic, size, size++); \ 128 + /* Make sure incrementing actually happened. 
*/ \ 129 + KUNIT_EXPECT_NE(test, size, unknown_size); \ 130 + } while (0) 131 + #endif 132 + 133 + #define DEFINE_ALLOC_SIZE_TEST_PAIR(allocator) \ 134 + static void alloc_size_##allocator##_const_test(struct kunit *test) \ 135 + { \ 136 + CONST_TEST_BODY(TEST_##allocator); \ 137 + } \ 138 + static void alloc_size_##allocator##_dynamic_test(struct kunit *test) \ 139 + { \ 140 + DYNAMIC_TEST_BODY(TEST_##allocator); \ 141 + } 142 + 143 + #define TEST_kmalloc(checker, expected_size, alloc_size) do { \ 144 + gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; \ 145 + void *orig; \ 146 + size_t len; \ 147 + \ 148 + checker(expected_size, kmalloc(alloc_size, gfp), \ 149 + kfree(p)); \ 150 + checker(expected_size, \ 151 + kmalloc_node(alloc_size, gfp, NUMA_NO_NODE), \ 152 + kfree(p)); \ 153 + checker(expected_size, kzalloc(alloc_size, gfp), \ 154 + kfree(p)); \ 155 + checker(expected_size, \ 156 + kzalloc_node(alloc_size, gfp, NUMA_NO_NODE), \ 157 + kfree(p)); \ 158 + checker(expected_size, kcalloc(1, alloc_size, gfp), \ 159 + kfree(p)); \ 160 + checker(expected_size, kcalloc(alloc_size, 1, gfp), \ 161 + kfree(p)); \ 162 + checker(expected_size, \ 163 + kcalloc_node(1, alloc_size, gfp, NUMA_NO_NODE), \ 164 + kfree(p)); \ 165 + checker(expected_size, \ 166 + kcalloc_node(alloc_size, 1, gfp, NUMA_NO_NODE), \ 167 + kfree(p)); \ 168 + checker(expected_size, kmalloc_array(1, alloc_size, gfp), \ 169 + kfree(p)); \ 170 + checker(expected_size, kmalloc_array(alloc_size, 1, gfp), \ 171 + kfree(p)); \ 172 + checker(expected_size, \ 173 + kmalloc_array_node(1, alloc_size, gfp, NUMA_NO_NODE), \ 174 + kfree(p)); \ 175 + checker(expected_size, \ 176 + kmalloc_array_node(alloc_size, 1, gfp, NUMA_NO_NODE), \ 177 + kfree(p)); \ 178 + checker(expected_size, __kmalloc(alloc_size, gfp), \ 179 + kfree(p)); \ 180 + checker(expected_size, \ 181 + __kmalloc_node(alloc_size, gfp, NUMA_NO_NODE), \ 182 + kfree(p)); \ 183 + \ 184 + orig = kmalloc(alloc_size, gfp); \ 185 + KUNIT_EXPECT_TRUE(test, orig != NULL); \ 
186 + checker((expected_size) * 2, \ 187 + krealloc(orig, (alloc_size) * 2, gfp), \ 188 + kfree(p)); \ 189 + orig = kmalloc(alloc_size, gfp); \ 190 + KUNIT_EXPECT_TRUE(test, orig != NULL); \ 191 + checker((expected_size) * 2, \ 192 + krealloc_array(orig, 1, (alloc_size) * 2, gfp), \ 193 + kfree(p)); \ 194 + orig = kmalloc(alloc_size, gfp); \ 195 + KUNIT_EXPECT_TRUE(test, orig != NULL); \ 196 + checker((expected_size) * 2, \ 197 + krealloc_array(orig, (alloc_size) * 2, 1, gfp), \ 198 + kfree(p)); \ 199 + \ 200 + len = 11; \ 201 + /* Using memdup() with fixed size, so force unknown length. */ \ 202 + if (!__builtin_constant_p(expected_size)) \ 203 + len += zero_size; \ 204 + checker(len, kmemdup("hello there", len, gfp), kfree(p)); \ 205 + } while (0) 206 + DEFINE_ALLOC_SIZE_TEST_PAIR(kmalloc) 207 + 208 + /* Sizes are in pages, not bytes. */ 209 + #define TEST_vmalloc(checker, expected_pages, alloc_pages) do { \ 210 + gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; \ 211 + checker((expected_pages) * PAGE_SIZE, \ 212 + vmalloc((alloc_pages) * PAGE_SIZE), vfree(p)); \ 213 + checker((expected_pages) * PAGE_SIZE, \ 214 + vzalloc((alloc_pages) * PAGE_SIZE), vfree(p)); \ 215 + checker((expected_pages) * PAGE_SIZE, \ 216 + __vmalloc((alloc_pages) * PAGE_SIZE, gfp), vfree(p)); \ 217 + } while (0) 218 + DEFINE_ALLOC_SIZE_TEST_PAIR(vmalloc) 219 + 220 + /* Sizes are in pages (and open-coded for side-effects), not bytes. 
*/ 221 + #define TEST_kvmalloc(checker, expected_pages, alloc_pages) do { \ 222 + gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; \ 223 + size_t prev_size; \ 224 + void *orig; \ 225 + \ 226 + checker((expected_pages) * PAGE_SIZE, \ 227 + kvmalloc((alloc_pages) * PAGE_SIZE, gfp), \ 228 + vfree(p)); \ 229 + checker((expected_pages) * PAGE_SIZE, \ 230 + kvmalloc_node((alloc_pages) * PAGE_SIZE, gfp, NUMA_NO_NODE), \ 231 + vfree(p)); \ 232 + checker((expected_pages) * PAGE_SIZE, \ 233 + kvzalloc((alloc_pages) * PAGE_SIZE, gfp), \ 234 + vfree(p)); \ 235 + checker((expected_pages) * PAGE_SIZE, \ 236 + kvzalloc_node((alloc_pages) * PAGE_SIZE, gfp, NUMA_NO_NODE), \ 237 + vfree(p)); \ 238 + checker((expected_pages) * PAGE_SIZE, \ 239 + kvcalloc(1, (alloc_pages) * PAGE_SIZE, gfp), \ 240 + vfree(p)); \ 241 + checker((expected_pages) * PAGE_SIZE, \ 242 + kvcalloc((alloc_pages) * PAGE_SIZE, 1, gfp), \ 243 + vfree(p)); \ 244 + checker((expected_pages) * PAGE_SIZE, \ 245 + kvmalloc_array(1, (alloc_pages) * PAGE_SIZE, gfp), \ 246 + vfree(p)); \ 247 + checker((expected_pages) * PAGE_SIZE, \ 248 + kvmalloc_array((alloc_pages) * PAGE_SIZE, 1, gfp), \ 249 + vfree(p)); \ 250 + \ 251 + prev_size = (expected_pages) * PAGE_SIZE; \ 252 + orig = kvmalloc(prev_size, gfp); \ 253 + KUNIT_EXPECT_TRUE(test, orig != NULL); \ 254 + checker(((expected_pages) * PAGE_SIZE) * 2, \ 255 + kvrealloc(orig, prev_size, \ 256 + ((alloc_pages) * PAGE_SIZE) * 2, gfp), \ 257 + kvfree(p)); \ 258 + } while (0) 259 + DEFINE_ALLOC_SIZE_TEST_PAIR(kvmalloc) 260 + 261 + #define TEST_devm_kmalloc(checker, expected_size, alloc_size) do { \ 262 + gfp_t gfp = GFP_KERNEL | __GFP_NOWARN; \ 263 + const char dev_name[] = "fortify-test"; \ 264 + struct device *dev; \ 265 + void *orig; \ 266 + size_t len; \ 267 + \ 268 + /* Create dummy device for devm_kmalloc()-family tests. 
*/ \ 269 + dev = root_device_register(dev_name); \ 270 + KUNIT_ASSERT_FALSE_MSG(test, IS_ERR(dev), \ 271 + "Cannot register test device\n"); \ 272 + \ 273 + checker(expected_size, devm_kmalloc(dev, alloc_size, gfp), \ 274 + devm_kfree(dev, p)); \ 275 + checker(expected_size, devm_kzalloc(dev, alloc_size, gfp), \ 276 + devm_kfree(dev, p)); \ 277 + checker(expected_size, \ 278 + devm_kmalloc_array(dev, 1, alloc_size, gfp), \ 279 + devm_kfree(dev, p)); \ 280 + checker(expected_size, \ 281 + devm_kmalloc_array(dev, alloc_size, 1, gfp), \ 282 + devm_kfree(dev, p)); \ 283 + checker(expected_size, \ 284 + devm_kcalloc(dev, 1, alloc_size, gfp), \ 285 + devm_kfree(dev, p)); \ 286 + checker(expected_size, \ 287 + devm_kcalloc(dev, alloc_size, 1, gfp), \ 288 + devm_kfree(dev, p)); \ 289 + \ 290 + orig = devm_kmalloc(dev, alloc_size, gfp); \ 291 + KUNIT_EXPECT_TRUE(test, orig != NULL); \ 292 + checker((expected_size) * 2, \ 293 + devm_krealloc(dev, orig, (alloc_size) * 2, gfp), \ 294 + devm_kfree(dev, p)); \ 295 + \ 296 + len = 4; \ 297 + /* Using memdup() with fixed size, so force unknown length. */ \ 298 + if (!__builtin_constant_p(expected_size)) \ 299 + len += zero_size; \ 300 + checker(len, devm_kmemdup(dev, "Ohai", len, gfp), \ 301 + devm_kfree(dev, p)); \ 302 + \ 303 + device_unregister(dev); \ 304 + } while (0) 305 + DEFINE_ALLOC_SIZE_TEST_PAIR(devm_kmalloc) 306 + 66 307 static struct kunit_case fortify_test_cases[] = { 67 308 KUNIT_CASE(known_sizes_test), 68 309 KUNIT_CASE(control_flow_split_test), 310 + KUNIT_CASE(alloc_size_kmalloc_const_test), 311 + KUNIT_CASE(alloc_size_kmalloc_dynamic_test), 312 + KUNIT_CASE(alloc_size_vmalloc_const_test), 313 + KUNIT_CASE(alloc_size_vmalloc_dynamic_test), 314 + KUNIT_CASE(alloc_size_kvmalloc_const_test), 315 + KUNIT_CASE(alloc_size_kvmalloc_dynamic_test), 316 + KUNIT_CASE(alloc_size_devm_kmalloc_const_test), 317 + KUNIT_CASE(alloc_size_devm_kmalloc_dynamic_test), 69 318 {} 70 319 }; 71 320
+205
lib/memcpy_kunit.c
··· 292 292 #undef TEST_OP 293 293 } 294 294 295 + static u8 large_src[1024]; 296 + static u8 large_dst[2048]; 297 + static const u8 large_zero[2048]; 298 + 299 + static void set_random_nonzero(struct kunit *test, u8 *byte) 300 + { 301 + int failed_rng = 0; 302 + 303 + while (*byte == 0) { 304 + get_random_bytes(byte, 1); 305 + KUNIT_ASSERT_LT_MSG(test, failed_rng++, 100, 306 + "Is the RNG broken?"); 307 + } 308 + } 309 + 310 + static void init_large(struct kunit *test) 311 + { 312 + 313 + /* Get many bit patterns. */ 314 + get_random_bytes(large_src, ARRAY_SIZE(large_src)); 315 + 316 + /* Make sure we have non-zero edges. */ 317 + set_random_nonzero(test, &large_src[0]); 318 + set_random_nonzero(test, &large_src[ARRAY_SIZE(large_src) - 1]); 319 + 320 + /* Explicitly zero the entire destination. */ 321 + memset(large_dst, 0, ARRAY_SIZE(large_dst)); 322 + } 323 + 324 + /* 325 + * Instead of an indirect function call for "copy" or a giant macro, 326 + * use a bool to pick memcpy or memmove. 327 + */ 328 + static void copy_large_test(struct kunit *test, bool use_memmove) 329 + { 330 + init_large(test); 331 + 332 + /* Copy a growing number of non-overlapping bytes ... */ 333 + for (int bytes = 1; bytes <= ARRAY_SIZE(large_src); bytes++) { 334 + /* Over a shifting destination window ... */ 335 + for (int offset = 0; offset < ARRAY_SIZE(large_src); offset++) { 336 + int right_zero_pos = offset + bytes; 337 + int right_zero_size = ARRAY_SIZE(large_dst) - right_zero_pos; 338 + 339 + /* Copy! */ 340 + if (use_memmove) 341 + memmove(large_dst + offset, large_src, bytes); 342 + else 343 + memcpy(large_dst + offset, large_src, bytes); 344 + 345 + /* Did we touch anything before the copy area? */ 346 + KUNIT_ASSERT_EQ_MSG(test, 347 + memcmp(large_dst, large_zero, offset), 0, 348 + "with size %d at offset %d", bytes, offset); 349 + /* Did we touch anything after the copy area? 
*/ 350 + KUNIT_ASSERT_EQ_MSG(test, 351 + memcmp(&large_dst[right_zero_pos], large_zero, right_zero_size), 0, 352 + "with size %d at offset %d", bytes, offset); 353 + 354 + /* Are we byte-for-byte exact across the copy? */ 355 + KUNIT_ASSERT_EQ_MSG(test, 356 + memcmp(large_dst + offset, large_src, bytes), 0, 357 + "with size %d at offset %d", bytes, offset); 358 + 359 + /* Zero out what we copied for the next cycle. */ 360 + memset(large_dst + offset, 0, bytes); 361 + } 362 + /* Avoid stall warnings if this loop gets slow. */ 363 + cond_resched(); 364 + } 365 + } 366 + 367 + static void memcpy_large_test(struct kunit *test) 368 + { 369 + copy_large_test(test, false); 370 + } 371 + 372 + static void memmove_large_test(struct kunit *test) 373 + { 374 + copy_large_test(test, true); 375 + } 376 + 377 + /* 378 + * On the assumption that boundary conditions are going to be the most 379 + * sensitive, instead of taking a full step (inc) each iteration, 380 + * take single index steps for at least the first "inc"-many indexes 381 + * from the "start" and at least the last "inc"-many indexes before 382 + * the "end". When in the middle, take full "inc"-wide steps. For 383 + * example, calling next_step(idx, 1, 15, 3) with idx starting at 0 384 + * would see the following pattern: 1 2 3 4 7 10 11 12 13 14 15. 385 + */ 386 + static int next_step(int idx, int start, int end, int inc) 387 + { 388 + start += inc; 389 + end -= inc; 390 + 391 + if (idx < start || idx + inc > end) 392 + inc = 1; 393 + return idx + inc; 394 + } 395 + 396 + static void inner_loop(struct kunit *test, int bytes, int d_off, int s_off) 397 + { 398 + int left_zero_pos, left_zero_size; 399 + int right_zero_pos, right_zero_size; 400 + int src_pos, src_orig_pos, src_size; 401 + int pos; 402 + 403 + /* Place the source in the destination buffer. */ 404 + memcpy(&large_dst[s_off], large_src, bytes); 405 + 406 + /* Copy to destination offset. 
*/ 407 + memmove(&large_dst[d_off], &large_dst[s_off], bytes); 408 + 409 + /* Make sure destination entirely matches. */ 410 + KUNIT_ASSERT_EQ_MSG(test, memcmp(&large_dst[d_off], large_src, bytes), 0, 411 + "with size %d at src offset %d and dest offset %d", 412 + bytes, s_off, d_off); 413 + 414 + /* Calculate the expected zero spans. */ 415 + if (s_off < d_off) { 416 + left_zero_pos = 0; 417 + left_zero_size = s_off; 418 + 419 + right_zero_pos = d_off + bytes; 420 + right_zero_size = ARRAY_SIZE(large_dst) - right_zero_pos; 421 + 422 + src_pos = s_off; 423 + src_orig_pos = 0; 424 + src_size = d_off - s_off; 425 + } else { 426 + left_zero_pos = 0; 427 + left_zero_size = d_off; 428 + 429 + right_zero_pos = s_off + bytes; 430 + right_zero_size = ARRAY_SIZE(large_dst) - right_zero_pos; 431 + 432 + src_pos = d_off + bytes; 433 + src_orig_pos = src_pos - s_off; 434 + src_size = right_zero_pos - src_pos; 435 + } 436 + 437 + /* Check non-overlapping source is unchanged.*/ 438 + KUNIT_ASSERT_EQ_MSG(test, 439 + memcmp(&large_dst[src_pos], &large_src[src_orig_pos], src_size), 0, 440 + "with size %d at src offset %d and dest offset %d", 441 + bytes, s_off, d_off); 442 + 443 + /* Check leading buffer contents are zero. */ 444 + KUNIT_ASSERT_EQ_MSG(test, 445 + memcmp(&large_dst[left_zero_pos], large_zero, left_zero_size), 0, 446 + "with size %d at src offset %d and dest offset %d", 447 + bytes, s_off, d_off); 448 + /* Check trailing buffer contents are zero. 
*/ 449 + KUNIT_ASSERT_EQ_MSG(test, 450 + memcmp(&large_dst[right_zero_pos], large_zero, right_zero_size), 0, 451 + "with size %d at src offset %d and dest offset %d", 452 + bytes, s_off, d_off); 453 + 454 + /* Zero out everything not already zeroed.*/ 455 + pos = left_zero_pos + left_zero_size; 456 + memset(&large_dst[pos], 0, right_zero_pos - pos); 457 + } 458 + 459 + static void memmove_overlap_test(struct kunit *test) 460 + { 461 + /* 462 + * Running all possible offset and overlap combinations takes a 463 + * very long time. Instead, only check up to 128 bytes offset 464 + * into the destination buffer (which should result in crossing 465 + * cachelines), with a step size of 1 through 7 to try to skip some 466 + * redundancy. 467 + */ 468 + static const int offset_max = 128; /* less than ARRAY_SIZE(large_src); */ 469 + static const int bytes_step = 7; 470 + static const int window_step = 7; 471 + 472 + static const int bytes_start = 1; 473 + static const int bytes_end = ARRAY_SIZE(large_src) + 1; 474 + 475 + init_large(test); 476 + 477 + /* Copy a growing number of overlapping bytes ... */ 478 + for (int bytes = bytes_start; bytes < bytes_end; 479 + bytes = next_step(bytes, bytes_start, bytes_end, bytes_step)) { 480 + 481 + /* Over a shifting destination window ... */ 482 + for (int d_off = 0; d_off < offset_max; d_off++) { 483 + int s_start = max(d_off - bytes, 0); 484 + int s_end = min_t(int, d_off + bytes, ARRAY_SIZE(large_src)); 485 + 486 + /* Over a shifting source window ... */ 487 + for (int s_off = s_start; s_off < s_end; 488 + s_off = next_step(s_off, s_start, s_end, window_step)) 489 + inner_loop(test, bytes, d_off, s_off); 490 + 491 + /* Avoid stall warnings. 
*/ 492 + cond_resched(); 493 + } 494 + } 495 + } 496 + 295 497 static void strtomem_test(struct kunit *test) 296 498 { 297 499 static const char input[sizeof(unsigned long)] = "hi"; ··· 549 347 static struct kunit_case memcpy_test_cases[] = { 550 348 KUNIT_CASE(memset_test), 551 349 KUNIT_CASE(memcpy_test), 350 + KUNIT_CASE(memcpy_large_test), 552 351 KUNIT_CASE(memmove_test), 352 + KUNIT_CASE(memmove_large_test), 353 + KUNIT_CASE(memmove_overlap_test), 553 354 KUNIT_CASE(strtomem_test), 554 355 {} 555 356 };
+381
lib/overflow_kunit.c
··· 736 736 #undef check_one_size_helper 737 737 } 738 738 739 + static void overflows_type_test(struct kunit *test) 740 + { 741 + int count = 0; 742 + unsigned int var; 743 + 744 + #define __TEST_OVERFLOWS_TYPE(func, arg1, arg2, of) do { \ 745 + bool __of = func(arg1, arg2); \ 746 + KUNIT_EXPECT_EQ_MSG(test, __of, of, \ 747 + "expected " #func "(" #arg1 ", " #arg2 " to%s overflow\n",\ 748 + of ? "" : " not"); \ 749 + count++; \ 750 + } while (0) 751 + 752 + /* Args are: first type, second type, value, overflow expected */ 753 + #define TEST_OVERFLOWS_TYPE(__t1, __t2, v, of) do { \ 754 + __t1 t1 = (v); \ 755 + __t2 t2; \ 756 + __TEST_OVERFLOWS_TYPE(__overflows_type, t1, t2, of); \ 757 + __TEST_OVERFLOWS_TYPE(__overflows_type, t1, __t2, of); \ 758 + __TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, t2, of); \ 759 + __TEST_OVERFLOWS_TYPE(__overflows_type_constexpr, t1, __t2, of);\ 760 + } while (0) 761 + 762 + TEST_OVERFLOWS_TYPE(u8, u8, U8_MAX, false); 763 + TEST_OVERFLOWS_TYPE(u8, u16, U8_MAX, false); 764 + TEST_OVERFLOWS_TYPE(u8, s8, U8_MAX, true); 765 + TEST_OVERFLOWS_TYPE(u8, s8, S8_MAX, false); 766 + TEST_OVERFLOWS_TYPE(u8, s8, (u8)S8_MAX + 1, true); 767 + TEST_OVERFLOWS_TYPE(u8, s16, U8_MAX, false); 768 + TEST_OVERFLOWS_TYPE(s8, u8, S8_MAX, false); 769 + TEST_OVERFLOWS_TYPE(s8, u8, -1, true); 770 + TEST_OVERFLOWS_TYPE(s8, u8, S8_MIN, true); 771 + TEST_OVERFLOWS_TYPE(s8, u16, S8_MAX, false); 772 + TEST_OVERFLOWS_TYPE(s8, u16, -1, true); 773 + TEST_OVERFLOWS_TYPE(s8, u16, S8_MIN, true); 774 + TEST_OVERFLOWS_TYPE(s8, u32, S8_MAX, false); 775 + TEST_OVERFLOWS_TYPE(s8, u32, -1, true); 776 + TEST_OVERFLOWS_TYPE(s8, u32, S8_MIN, true); 777 + #if BITS_PER_LONG == 64 778 + TEST_OVERFLOWS_TYPE(s8, u64, S8_MAX, false); 779 + TEST_OVERFLOWS_TYPE(s8, u64, -1, true); 780 + TEST_OVERFLOWS_TYPE(s8, u64, S8_MIN, true); 781 + #endif 782 + TEST_OVERFLOWS_TYPE(s8, s8, S8_MAX, false); 783 + TEST_OVERFLOWS_TYPE(s8, s8, S8_MIN, false); 784 + TEST_OVERFLOWS_TYPE(s8, s16, S8_MAX, 
false); 785 + TEST_OVERFLOWS_TYPE(s8, s16, S8_MIN, false); 786 + TEST_OVERFLOWS_TYPE(u16, u8, U8_MAX, false); 787 + TEST_OVERFLOWS_TYPE(u16, u8, (u16)U8_MAX + 1, true); 788 + TEST_OVERFLOWS_TYPE(u16, u8, U16_MAX, true); 789 + TEST_OVERFLOWS_TYPE(u16, s8, S8_MAX, false); 790 + TEST_OVERFLOWS_TYPE(u16, s8, (u16)S8_MAX + 1, true); 791 + TEST_OVERFLOWS_TYPE(u16, s8, U16_MAX, true); 792 + TEST_OVERFLOWS_TYPE(u16, s16, S16_MAX, false); 793 + TEST_OVERFLOWS_TYPE(u16, s16, (u16)S16_MAX + 1, true); 794 + TEST_OVERFLOWS_TYPE(u16, s16, U16_MAX, true); 795 + TEST_OVERFLOWS_TYPE(u16, u32, U16_MAX, false); 796 + TEST_OVERFLOWS_TYPE(u16, s32, U16_MAX, false); 797 + TEST_OVERFLOWS_TYPE(s16, u8, U8_MAX, false); 798 + TEST_OVERFLOWS_TYPE(s16, u8, (s16)U8_MAX + 1, true); 799 + TEST_OVERFLOWS_TYPE(s16, u8, -1, true); 800 + TEST_OVERFLOWS_TYPE(s16, u8, S16_MIN, true); 801 + TEST_OVERFLOWS_TYPE(s16, u16, S16_MAX, false); 802 + TEST_OVERFLOWS_TYPE(s16, u16, -1, true); 803 + TEST_OVERFLOWS_TYPE(s16, u16, S16_MIN, true); 804 + TEST_OVERFLOWS_TYPE(s16, u32, S16_MAX, false); 805 + TEST_OVERFLOWS_TYPE(s16, u32, -1, true); 806 + TEST_OVERFLOWS_TYPE(s16, u32, S16_MIN, true); 807 + #if BITS_PER_LONG == 64 808 + TEST_OVERFLOWS_TYPE(s16, u64, S16_MAX, false); 809 + TEST_OVERFLOWS_TYPE(s16, u64, -1, true); 810 + TEST_OVERFLOWS_TYPE(s16, u64, S16_MIN, true); 811 + #endif 812 + TEST_OVERFLOWS_TYPE(s16, s8, S8_MAX, false); 813 + TEST_OVERFLOWS_TYPE(s16, s8, S8_MIN, false); 814 + TEST_OVERFLOWS_TYPE(s16, s8, (s16)S8_MAX + 1, true); 815 + TEST_OVERFLOWS_TYPE(s16, s8, (s16)S8_MIN - 1, true); 816 + TEST_OVERFLOWS_TYPE(s16, s8, S16_MAX, true); 817 + TEST_OVERFLOWS_TYPE(s16, s8, S16_MIN, true); 818 + TEST_OVERFLOWS_TYPE(s16, s16, S16_MAX, false); 819 + TEST_OVERFLOWS_TYPE(s16, s16, S16_MIN, false); 820 + TEST_OVERFLOWS_TYPE(s16, s32, S16_MAX, false); 821 + TEST_OVERFLOWS_TYPE(s16, s32, S16_MIN, false); 822 + TEST_OVERFLOWS_TYPE(u32, u8, U8_MAX, false); 823 + TEST_OVERFLOWS_TYPE(u32, u8, (u32)U8_MAX + 1, 
true); 824 + TEST_OVERFLOWS_TYPE(u32, u8, U32_MAX, true); 825 + TEST_OVERFLOWS_TYPE(u32, s8, S8_MAX, false); 826 + TEST_OVERFLOWS_TYPE(u32, s8, (u32)S8_MAX + 1, true); 827 + TEST_OVERFLOWS_TYPE(u32, s8, U32_MAX, true); 828 + TEST_OVERFLOWS_TYPE(u32, u16, U16_MAX, false); 829 + TEST_OVERFLOWS_TYPE(u32, u16, U16_MAX + 1, true); 830 + TEST_OVERFLOWS_TYPE(u32, u16, U32_MAX, true); 831 + TEST_OVERFLOWS_TYPE(u32, s16, S16_MAX, false); 832 + TEST_OVERFLOWS_TYPE(u32, s16, (u32)S16_MAX + 1, true); 833 + TEST_OVERFLOWS_TYPE(u32, s16, U32_MAX, true); 834 + TEST_OVERFLOWS_TYPE(u32, u32, U32_MAX, false); 835 + TEST_OVERFLOWS_TYPE(u32, s32, S32_MAX, false); 836 + TEST_OVERFLOWS_TYPE(u32, s32, U32_MAX, true); 837 + TEST_OVERFLOWS_TYPE(u32, s32, (u32)S32_MAX + 1, true); 838 + #if BITS_PER_LONG == 64 839 + TEST_OVERFLOWS_TYPE(u32, u64, U32_MAX, false); 840 + TEST_OVERFLOWS_TYPE(u32, s64, U32_MAX, false); 841 + #endif 842 + TEST_OVERFLOWS_TYPE(s32, u8, U8_MAX, false); 843 + TEST_OVERFLOWS_TYPE(s32, u8, (s32)U8_MAX + 1, true); 844 + TEST_OVERFLOWS_TYPE(s32, u16, S32_MAX, true); 845 + TEST_OVERFLOWS_TYPE(s32, u8, -1, true); 846 + TEST_OVERFLOWS_TYPE(s32, u8, S32_MIN, true); 847 + TEST_OVERFLOWS_TYPE(s32, u16, U16_MAX, false); 848 + TEST_OVERFLOWS_TYPE(s32, u16, (s32)U16_MAX + 1, true); 849 + TEST_OVERFLOWS_TYPE(s32, u16, S32_MAX, true); 850 + TEST_OVERFLOWS_TYPE(s32, u16, -1, true); 851 + TEST_OVERFLOWS_TYPE(s32, u16, S32_MIN, true); 852 + TEST_OVERFLOWS_TYPE(s32, u32, S32_MAX, false); 853 + TEST_OVERFLOWS_TYPE(s32, u32, -1, true); 854 + TEST_OVERFLOWS_TYPE(s32, u32, S32_MIN, true); 855 + #if BITS_PER_LONG == 64 856 + TEST_OVERFLOWS_TYPE(s32, u64, S32_MAX, false); 857 + TEST_OVERFLOWS_TYPE(s32, u64, -1, true); 858 + TEST_OVERFLOWS_TYPE(s32, u64, S32_MIN, true); 859 + #endif 860 + TEST_OVERFLOWS_TYPE(s32, s8, S8_MAX, false); 861 + TEST_OVERFLOWS_TYPE(s32, s8, S8_MIN, false); 862 + TEST_OVERFLOWS_TYPE(s32, s8, (s32)S8_MAX + 1, true); 863 + TEST_OVERFLOWS_TYPE(s32, s8, (s32)S8_MIN - 1, 
true); 864 + TEST_OVERFLOWS_TYPE(s32, s8, S32_MAX, true); 865 + TEST_OVERFLOWS_TYPE(s32, s8, S32_MIN, true); 866 + TEST_OVERFLOWS_TYPE(s32, s16, S16_MAX, false); 867 + TEST_OVERFLOWS_TYPE(s32, s16, S16_MIN, false); 868 + TEST_OVERFLOWS_TYPE(s32, s16, (s32)S16_MAX + 1, true); 869 + TEST_OVERFLOWS_TYPE(s32, s16, (s32)S16_MIN - 1, true); 870 + TEST_OVERFLOWS_TYPE(s32, s16, S32_MAX, true); 871 + TEST_OVERFLOWS_TYPE(s32, s16, S32_MIN, true); 872 + TEST_OVERFLOWS_TYPE(s32, s32, S32_MAX, false); 873 + TEST_OVERFLOWS_TYPE(s32, s32, S32_MIN, false); 874 + #if BITS_PER_LONG == 64 875 + TEST_OVERFLOWS_TYPE(s32, s64, S32_MAX, false); 876 + TEST_OVERFLOWS_TYPE(s32, s64, S32_MIN, false); 877 + TEST_OVERFLOWS_TYPE(u64, u8, U64_MAX, true); 878 + TEST_OVERFLOWS_TYPE(u64, u8, U8_MAX, false); 879 + TEST_OVERFLOWS_TYPE(u64, u8, (u64)U8_MAX + 1, true); 880 + TEST_OVERFLOWS_TYPE(u64, u16, U64_MAX, true); 881 + TEST_OVERFLOWS_TYPE(u64, u16, U16_MAX, false); 882 + TEST_OVERFLOWS_TYPE(u64, u16, (u64)U16_MAX + 1, true); 883 + TEST_OVERFLOWS_TYPE(u64, u32, U64_MAX, true); 884 + TEST_OVERFLOWS_TYPE(u64, u32, U32_MAX, false); 885 + TEST_OVERFLOWS_TYPE(u64, u32, (u64)U32_MAX + 1, true); 886 + TEST_OVERFLOWS_TYPE(u64, u64, U64_MAX, false); 887 + TEST_OVERFLOWS_TYPE(u64, s8, S8_MAX, false); 888 + TEST_OVERFLOWS_TYPE(u64, s8, (u64)S8_MAX + 1, true); 889 + TEST_OVERFLOWS_TYPE(u64, s8, U64_MAX, true); 890 + TEST_OVERFLOWS_TYPE(u64, s16, S16_MAX, false); 891 + TEST_OVERFLOWS_TYPE(u64, s16, (u64)S16_MAX + 1, true); 892 + TEST_OVERFLOWS_TYPE(u64, s16, U64_MAX, true); 893 + TEST_OVERFLOWS_TYPE(u64, s32, S32_MAX, false); 894 + TEST_OVERFLOWS_TYPE(u64, s32, (u64)S32_MAX + 1, true); 895 + TEST_OVERFLOWS_TYPE(u64, s32, U64_MAX, true); 896 + TEST_OVERFLOWS_TYPE(u64, s64, S64_MAX, false); 897 + TEST_OVERFLOWS_TYPE(u64, s64, U64_MAX, true); 898 + TEST_OVERFLOWS_TYPE(u64, s64, (u64)S64_MAX + 1, true); 899 + TEST_OVERFLOWS_TYPE(s64, u8, S64_MAX, true); 900 + TEST_OVERFLOWS_TYPE(s64, u8, S64_MIN, true); 901 + 
TEST_OVERFLOWS_TYPE(s64, u8, -1, true); 902 + TEST_OVERFLOWS_TYPE(s64, u8, U8_MAX, false); 903 + TEST_OVERFLOWS_TYPE(s64, u8, (s64)U8_MAX + 1, true); 904 + TEST_OVERFLOWS_TYPE(s64, u16, S64_MAX, true); 905 + TEST_OVERFLOWS_TYPE(s64, u16, S64_MIN, true); 906 + TEST_OVERFLOWS_TYPE(s64, u16, -1, true); 907 + TEST_OVERFLOWS_TYPE(s64, u16, U16_MAX, false); 908 + TEST_OVERFLOWS_TYPE(s64, u16, (s64)U16_MAX + 1, true); 909 + TEST_OVERFLOWS_TYPE(s64, u32, S64_MAX, true); 910 + TEST_OVERFLOWS_TYPE(s64, u32, S64_MIN, true); 911 + TEST_OVERFLOWS_TYPE(s64, u32, -1, true); 912 + TEST_OVERFLOWS_TYPE(s64, u32, U32_MAX, false); 913 + TEST_OVERFLOWS_TYPE(s64, u32, (s64)U32_MAX + 1, true); 914 + TEST_OVERFLOWS_TYPE(s64, u64, S64_MAX, false); 915 + TEST_OVERFLOWS_TYPE(s64, u64, S64_MIN, true); 916 + TEST_OVERFLOWS_TYPE(s64, u64, -1, true); 917 + TEST_OVERFLOWS_TYPE(s64, s8, S8_MAX, false); 918 + TEST_OVERFLOWS_TYPE(s64, s8, S8_MIN, false); 919 + TEST_OVERFLOWS_TYPE(s64, s8, (s64)S8_MAX + 1, true); 920 + TEST_OVERFLOWS_TYPE(s64, s8, (s64)S8_MIN - 1, true); 921 + TEST_OVERFLOWS_TYPE(s64, s8, S64_MAX, true); 922 + TEST_OVERFLOWS_TYPE(s64, s16, S16_MAX, false); 923 + TEST_OVERFLOWS_TYPE(s64, s16, S16_MIN, false); 924 + TEST_OVERFLOWS_TYPE(s64, s16, (s64)S16_MAX + 1, true); 925 + TEST_OVERFLOWS_TYPE(s64, s16, (s64)S16_MIN - 1, true); 926 + TEST_OVERFLOWS_TYPE(s64, s16, S64_MAX, true); 927 + TEST_OVERFLOWS_TYPE(s64, s32, S32_MAX, false); 928 + TEST_OVERFLOWS_TYPE(s64, s32, S32_MIN, false); 929 + TEST_OVERFLOWS_TYPE(s64, s32, (s64)S32_MAX + 1, true); 930 + TEST_OVERFLOWS_TYPE(s64, s32, (s64)S32_MIN - 1, true); 931 + TEST_OVERFLOWS_TYPE(s64, s32, S64_MAX, true); 932 + TEST_OVERFLOWS_TYPE(s64, s64, S64_MAX, false); 933 + TEST_OVERFLOWS_TYPE(s64, s64, S64_MIN, false); 934 + #endif 935 + 936 + /* Check for macro side-effects. 
*/ 937 + var = INT_MAX - 1; 938 + __TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, false); 939 + __TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, false); 940 + __TEST_OVERFLOWS_TYPE(__overflows_type, var++, int, true); 941 + var = INT_MAX - 1; 942 + __TEST_OVERFLOWS_TYPE(overflows_type, var++, int, false); 943 + __TEST_OVERFLOWS_TYPE(overflows_type, var++, int, false); 944 + __TEST_OVERFLOWS_TYPE(overflows_type, var++, int, true); 945 + 946 + kunit_info(test, "%d overflows_type() tests finished\n", count); 947 + #undef TEST_OVERFLOWS_TYPE 948 + #undef __TEST_OVERFLOWS_TYPE 949 + } 950 + 951 + static void same_type_test(struct kunit *test) 952 + { 953 + int count = 0; 954 + int var; 955 + 956 + #define TEST_SAME_TYPE(t1, t2, same) do { \ 957 + typeof(t1) __t1h = type_max(t1); \ 958 + typeof(t1) __t1l = type_min(t1); \ 959 + typeof(t2) __t2h = type_max(t2); \ 960 + typeof(t2) __t2l = type_min(t2); \ 961 + KUNIT_EXPECT_EQ(test, true, __same_type(t1, __t1h)); \ 962 + KUNIT_EXPECT_EQ(test, true, __same_type(t1, __t1l)); \ 963 + KUNIT_EXPECT_EQ(test, true, __same_type(__t1h, t1)); \ 964 + KUNIT_EXPECT_EQ(test, true, __same_type(__t1l, t1)); \ 965 + KUNIT_EXPECT_EQ(test, true, __same_type(t2, __t2h)); \ 966 + KUNIT_EXPECT_EQ(test, true, __same_type(t2, __t2l)); \ 967 + KUNIT_EXPECT_EQ(test, true, __same_type(__t2h, t2)); \ 968 + KUNIT_EXPECT_EQ(test, true, __same_type(__t2l, t2)); \ 969 + KUNIT_EXPECT_EQ(test, same, __same_type(t1, t2)); \ 970 + KUNIT_EXPECT_EQ(test, same, __same_type(t2, __t1h)); \ 971 + KUNIT_EXPECT_EQ(test, same, __same_type(t2, __t1l)); \ 972 + KUNIT_EXPECT_EQ(test, same, __same_type(__t1h, t2)); \ 973 + KUNIT_EXPECT_EQ(test, same, __same_type(__t1l, t2)); \ 974 + KUNIT_EXPECT_EQ(test, same, __same_type(t1, __t2h)); \ 975 + KUNIT_EXPECT_EQ(test, same, __same_type(t1, __t2l)); \ 976 + KUNIT_EXPECT_EQ(test, same, __same_type(__t2h, t1)); \ 977 + KUNIT_EXPECT_EQ(test, same, __same_type(__t2l, t1)); \ 978 + } while (0) 979 + 980 + #if BITS_PER_LONG 
== 64 981 + # define TEST_SAME_TYPE64(base, t, m) TEST_SAME_TYPE(base, t, m) 982 + #else 983 + # define TEST_SAME_TYPE64(base, t, m) do { } while (0) 984 + #endif 985 + 986 + #define TEST_TYPE_SETS(base, mu8, mu16, mu32, ms8, ms16, ms32, mu64, ms64) \ 987 + do { \ 988 + TEST_SAME_TYPE(base, u8, mu8); \ 989 + TEST_SAME_TYPE(base, u16, mu16); \ 990 + TEST_SAME_TYPE(base, u32, mu32); \ 991 + TEST_SAME_TYPE(base, s8, ms8); \ 992 + TEST_SAME_TYPE(base, s16, ms16); \ 993 + TEST_SAME_TYPE(base, s32, ms32); \ 994 + TEST_SAME_TYPE64(base, u64, mu64); \ 995 + TEST_SAME_TYPE64(base, s64, ms64); \ 996 + } while (0) 997 + 998 + TEST_TYPE_SETS(u8, true, false, false, false, false, false, false, false); 999 + TEST_TYPE_SETS(u16, false, true, false, false, false, false, false, false); 1000 + TEST_TYPE_SETS(u32, false, false, true, false, false, false, false, false); 1001 + TEST_TYPE_SETS(s8, false, false, false, true, false, false, false, false); 1002 + TEST_TYPE_SETS(s16, false, false, false, false, true, false, false, false); 1003 + TEST_TYPE_SETS(s32, false, false, false, false, false, true, false, false); 1004 + #if BITS_PER_LONG == 64 1005 + TEST_TYPE_SETS(u64, false, false, false, false, false, false, true, false); 1006 + TEST_TYPE_SETS(s64, false, false, false, false, false, false, false, true); 1007 + #endif 1008 + 1009 + /* Check for macro side-effects. 
*/ 1010 + var = 4; 1011 + KUNIT_EXPECT_EQ(test, var, 4); 1012 + KUNIT_EXPECT_TRUE(test, __same_type(var++, int)); 1013 + KUNIT_EXPECT_EQ(test, var, 4); 1014 + KUNIT_EXPECT_TRUE(test, __same_type(int, var++)); 1015 + KUNIT_EXPECT_EQ(test, var, 4); 1016 + KUNIT_EXPECT_TRUE(test, __same_type(var++, var++)); 1017 + KUNIT_EXPECT_EQ(test, var, 4); 1018 + 1019 + kunit_info(test, "%d __same_type() tests finished\n", count); 1020 + 1021 + #undef TEST_TYPE_SETS 1022 + #undef TEST_SAME_TYPE64 1023 + #undef TEST_SAME_TYPE 1024 + } 1025 + 1026 + static void castable_to_type_test(struct kunit *test) 1027 + { 1028 + int count = 0; 1029 + 1030 + #define TEST_CASTABLE_TO_TYPE(arg1, arg2, pass) do { \ 1031 + bool __pass = castable_to_type(arg1, arg2); \ 1032 + KUNIT_EXPECT_EQ_MSG(test, __pass, pass, \ 1033 + "expected castable_to_type(" #arg1 ", " #arg2 ") to%s pass\n",\ 1034 + pass ? "" : " not"); \ 1035 + count++; \ 1036 + } while (0) 1037 + 1038 + TEST_CASTABLE_TO_TYPE(16, u8, true); 1039 + TEST_CASTABLE_TO_TYPE(16, u16, true); 1040 + TEST_CASTABLE_TO_TYPE(16, u32, true); 1041 + TEST_CASTABLE_TO_TYPE(16, s8, true); 1042 + TEST_CASTABLE_TO_TYPE(16, s16, true); 1043 + TEST_CASTABLE_TO_TYPE(16, s32, true); 1044 + TEST_CASTABLE_TO_TYPE(-16, s8, true); 1045 + TEST_CASTABLE_TO_TYPE(-16, s16, true); 1046 + TEST_CASTABLE_TO_TYPE(-16, s32, true); 1047 + #if BITS_PER_LONG == 64 1048 + TEST_CASTABLE_TO_TYPE(16, u64, true); 1049 + TEST_CASTABLE_TO_TYPE(-16, s64, true); 1050 + #endif 1051 + 1052 + #define TEST_CASTABLE_TO_TYPE_VAR(width) do { \ 1053 + u ## width u ## width ## var = 0; \ 1054 + s ## width s ## width ## var = 0; \ 1055 + \ 1056 + /* Constant expressions that fit types. 
*/ \ 1057 + TEST_CASTABLE_TO_TYPE(type_max(u ## width), u ## width, true); \ 1058 + TEST_CASTABLE_TO_TYPE(type_min(u ## width), u ## width, true); \ 1059 + TEST_CASTABLE_TO_TYPE(type_max(u ## width), u ## width ## var, true); \ 1060 + TEST_CASTABLE_TO_TYPE(type_min(u ## width), u ## width ## var, true); \ 1061 + TEST_CASTABLE_TO_TYPE(type_max(s ## width), s ## width, true); \ 1062 + TEST_CASTABLE_TO_TYPE(type_min(s ## width), s ## width, true); \ 1063 + TEST_CASTABLE_TO_TYPE(type_max(s ## width), s ## width ## var, true); \ 1064 + TEST_CASTABLE_TO_TYPE(type_min(u ## width), s ## width ## var, true); \ 1065 + /* Constant expressions that do not fit types. */ \ 1066 + TEST_CASTABLE_TO_TYPE(type_max(u ## width), s ## width, false); \ 1067 + TEST_CASTABLE_TO_TYPE(type_max(u ## width), s ## width ## var, false); \ 1068 + TEST_CASTABLE_TO_TYPE(type_min(s ## width), u ## width, false); \ 1069 + TEST_CASTABLE_TO_TYPE(type_min(s ## width), u ## width ## var, false); \ 1070 + /* Non-constant expression with mismatched type. */ \ 1071 + TEST_CASTABLE_TO_TYPE(s ## width ## var, u ## width, false); \ 1072 + TEST_CASTABLE_TO_TYPE(u ## width ## var, s ## width, false); \ 1073 + } while (0) 1074 + 1075 + #define TEST_CASTABLE_TO_TYPE_RANGE(width) do { \ 1076 + unsigned long big = U ## width ## _MAX; \ 1077 + signed long small = S ## width ## _MIN; \ 1078 + u ## width u ## width ## var = 0; \ 1079 + s ## width s ## width ## var = 0; \ 1080 + \ 1081 + /* Constant expression in range. */ \ 1082 + TEST_CASTABLE_TO_TYPE(U ## width ## _MAX, u ## width, true); \ 1083 + TEST_CASTABLE_TO_TYPE(U ## width ## _MAX, u ## width ## var, true); \ 1084 + TEST_CASTABLE_TO_TYPE(S ## width ## _MIN, s ## width, true); \ 1085 + TEST_CASTABLE_TO_TYPE(S ## width ## _MIN, s ## width ## var, true); \ 1086 + /* Constant expression out of range. 
*/ \ 1087 + TEST_CASTABLE_TO_TYPE((unsigned long)U ## width ## _MAX + 1, u ## width, false); \ 1088 + TEST_CASTABLE_TO_TYPE((unsigned long)U ## width ## _MAX + 1, u ## width ## var, false); \ 1089 + TEST_CASTABLE_TO_TYPE((signed long)S ## width ## _MIN - 1, s ## width, false); \ 1090 + TEST_CASTABLE_TO_TYPE((signed long)S ## width ## _MIN - 1, s ## width ## var, false); \ 1091 + /* Non-constant expression with mismatched type. */ \ 1092 + TEST_CASTABLE_TO_TYPE(big, u ## width, false); \ 1093 + TEST_CASTABLE_TO_TYPE(big, u ## width ## var, false); \ 1094 + TEST_CASTABLE_TO_TYPE(small, s ## width, false); \ 1095 + TEST_CASTABLE_TO_TYPE(small, s ## width ## var, false); \ 1096 + } while (0) 1097 + 1098 + TEST_CASTABLE_TO_TYPE_VAR(8); 1099 + TEST_CASTABLE_TO_TYPE_VAR(16); 1100 + TEST_CASTABLE_TO_TYPE_VAR(32); 1101 + #if BITS_PER_LONG == 64 1102 + TEST_CASTABLE_TO_TYPE_VAR(64); 1103 + #endif 1104 + 1105 + TEST_CASTABLE_TO_TYPE_RANGE(8); 1106 + TEST_CASTABLE_TO_TYPE_RANGE(16); 1107 + #if BITS_PER_LONG == 64 1108 + TEST_CASTABLE_TO_TYPE_RANGE(32); 1109 + #endif 1110 + kunit_info(test, "%d castable_to_type() tests finished\n", count); 1111 + 1112 + #undef TEST_CASTABLE_TO_TYPE_RANGE 1113 + #undef TEST_CASTABLE_TO_TYPE_VAR 1114 + #undef TEST_CASTABLE_TO_TYPE 1115 + } 1116 + 739 1117 static struct kunit_case overflow_test_cases[] = { 740 1118 KUNIT_CASE(u8_u8__u8_overflow_test), 741 1119 KUNIT_CASE(s8_s8__s8_overflow_test), ··· 1133 755 KUNIT_CASE(shift_nonsense_test), 1134 756 KUNIT_CASE(overflow_allocation_test), 1135 757 KUNIT_CASE(overflow_size_helpers_test), 758 + KUNIT_CASE(overflows_type_test), 759 + KUNIT_CASE(same_type_test), 760 + KUNIT_CASE(castable_to_type_test), 1136 761 {} 1137 762 }; 1138 763
lib/string.c  -82
··· 76 76 #endif 77 77 78 78 #ifndef __HAVE_ARCH_STRCPY 79 - /** 80 - * strcpy - Copy a %NUL terminated string 81 - * @dest: Where to copy the string to 82 - * @src: Where to copy the string from 83 - */ 84 79 char *strcpy(char *dest, const char *src) 85 80 { 86 81 char *tmp = dest; ··· 88 93 #endif 89 94 90 95 #ifndef __HAVE_ARCH_STRNCPY 91 - /** 92 - * strncpy - Copy a length-limited, C-string 93 - * @dest: Where to copy the string to 94 - * @src: Where to copy the string from 95 - * @count: The maximum number of bytes to copy 96 - * 97 - * The result is not %NUL-terminated if the source exceeds 98 - * @count bytes. 99 - * 100 - * In the case where the length of @src is less than that of 101 - * count, the remainder of @dest will be padded with %NUL. 102 - * 103 - */ 104 96 char *strncpy(char *dest, const char *src, size_t count) 105 97 { 106 98 char *tmp = dest; ··· 104 122 #endif 105 123 106 124 #ifndef __HAVE_ARCH_STRLCPY 107 - /** 108 - * strlcpy - Copy a C-string into a sized buffer 109 - * @dest: Where to copy the string to 110 - * @src: Where to copy the string from 111 - * @size: size of destination buffer 112 - * 113 - * Compatible with ``*BSD``: the result is always a valid 114 - * NUL-terminated string that fits in the buffer (unless, 115 - * of course, the buffer size is zero). It does not pad 116 - * out the result like strncpy() does. 117 - */ 118 125 size_t strlcpy(char *dest, const char *src, size_t size) 119 126 { 120 127 size_t ret = strlen(src); ··· 119 148 #endif 120 149 121 150 #ifndef __HAVE_ARCH_STRSCPY 122 - /** 123 - * strscpy - Copy a C-string into a sized buffer 124 - * @dest: Where to copy the string to 125 - * @src: Where to copy the string from 126 - * @count: Size of destination buffer 127 - * 128 - * Copy the string, or as much of it as fits, into the dest buffer. The 129 - * behavior is undefined if the string buffers overlap. The destination 130 - * buffer is always NUL terminated, unless it's zero-sized. 
131 - * 132 - * Preferred to strlcpy() since the API doesn't require reading memory 133 - * from the src string beyond the specified "count" bytes, and since 134 - * the return value is easier to error-check than strlcpy()'s. 135 - * In addition, the implementation is robust to the string changing out 136 - * from underneath it, unlike the current strlcpy() implementation. 137 - * 138 - * Preferred to strncpy() since it always returns a valid string, and 139 - * doesn't unnecessarily force the tail of the destination buffer to be 140 - * zeroed. If zeroing is desired please use strscpy_pad(). 141 - * 142 - * Returns: 143 - * * The number of characters copied (not including the trailing %NUL) 144 - * * -E2BIG if count is 0 or @src was truncated. 145 - */ 146 151 ssize_t strscpy(char *dest, const char *src, size_t count) 147 152 { 148 153 const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS; ··· 213 266 EXPORT_SYMBOL(stpcpy); 214 267 215 268 #ifndef __HAVE_ARCH_STRCAT 216 - /** 217 - * strcat - Append one %NUL-terminated string to another 218 - * @dest: The string to be appended to 219 - * @src: The string to append to it 220 - */ 221 269 char *strcat(char *dest, const char *src) 222 270 { 223 271 char *tmp = dest; ··· 227 285 #endif 228 286 229 287 #ifndef __HAVE_ARCH_STRNCAT 230 - /** 231 - * strncat - Append a length-limited, C-string to another 232 - * @dest: The string to be appended to 233 - * @src: The string to append to it 234 - * @count: The maximum numbers of bytes to copy 235 - * 236 - * Note that in contrast to strncpy(), strncat() ensures the result is 237 - * terminated. 
238 - */ 239 288 char *strncat(char *dest, const char *src, size_t count) 240 289 { 241 290 char *tmp = dest; ··· 247 314 #endif 248 315 249 316 #ifndef __HAVE_ARCH_STRLCAT 250 - /** 251 - * strlcat - Append a length-limited, C-string to another 252 - * @dest: The string to be appended to 253 - * @src: The string to append to it 254 - * @count: The size of the destination buffer. 255 - */ 256 317 size_t strlcat(char *dest, const char *src, size_t count) 257 318 { 258 319 size_t dsize = strlen(dest); ··· 411 484 #endif 412 485 413 486 #ifndef __HAVE_ARCH_STRLEN 414 - /** 415 - * strlen - Find the length of a string 416 - * @s: The string to be sized 417 - */ 418 487 size_t strlen(const char *s) 419 488 { 420 489 const char *sc; ··· 423 500 #endif 424 501 425 502 #ifndef __HAVE_ARCH_STRNLEN 426 - /** 427 - * strnlen - Find the length of a length-limited string 428 - * @s: The string to be sized 429 - * @count: The maximum number of bytes to search 430 - */ 431 503 size_t strnlen(const char *s, size_t count) 432 504 { 433 505 const char *sc;
lib/strscpy_kunit.c  +142
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Kernel module for testing 'strscpy' family of functions. 4 + */ 5 + 6 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 7 + 8 + #include <kunit/test.h> 9 + #include <linux/string.h> 10 + 11 + /* 12 + * tc() - Run a specific test case. 13 + * @src: Source string, argument to strscpy_pad() 14 + * @count: Size of destination buffer, argument to strscpy_pad() 15 + * @expected: Expected return value from call to strscpy_pad() 16 + * @terminator: 1 if there should be a terminating null byte 0 otherwise. 17 + * @chars: Number of characters from the src string expected to be 18 + * written to the dst buffer. 19 + * @pad: Number of pad characters expected (in the tail of dst buffer). 20 + * (@pad does not include the null terminator byte.) 21 + * 22 + * Calls strscpy_pad() and verifies the return value and state of the 23 + * destination buffer after the call returns. 24 + */ 25 + static void tc(struct kunit *test, char *src, int count, int expected, 26 + int chars, int terminator, int pad) 27 + { 28 + int nr_bytes_poison; 29 + int max_expected; 30 + int max_count; 31 + int written; 32 + char buf[6]; 33 + int index, i; 34 + const char POISON = 'z'; 35 + 36 + KUNIT_ASSERT_TRUE_MSG(test, src != NULL, 37 + "null source string not supported"); 38 + 39 + memset(buf, POISON, sizeof(buf)); 40 + /* Future proofing test suite, validate args */ 41 + max_count = sizeof(buf) - 2; /* Space for null and to verify overflow */ 42 + max_expected = count - 1; /* Space for the null */ 43 + 44 + KUNIT_ASSERT_LE_MSG(test, count, max_count, 45 + "count (%d) is too big (%d) ... 
aborting", count, max_count); 46 + KUNIT_EXPECT_LE_MSG(test, expected, max_expected, 47 + "expected (%d) is bigger than can possibly be returned (%d)", 48 + expected, max_expected); 49 + 50 + written = strscpy_pad(buf, src, count); 51 + KUNIT_ASSERT_EQ(test, written, expected); 52 + 53 + if (count && written == -E2BIG) { 54 + KUNIT_ASSERT_EQ_MSG(test, 0, strncmp(buf, src, count - 1), 55 + "buffer state invalid for -E2BIG"); 56 + KUNIT_ASSERT_EQ_MSG(test, buf[count - 1], '\0', 57 + "too big string is not null terminated correctly"); 58 + } 59 + 60 + for (i = 0; i < chars; i++) 61 + KUNIT_ASSERT_EQ_MSG(test, buf[i], src[i], 62 + "buf[i]==%c != src[i]==%c", buf[i], src[i]); 63 + 64 + if (terminator) 65 + KUNIT_ASSERT_EQ_MSG(test, buf[count - 1], '\0', 66 + "string is not null terminated correctly"); 67 + 68 + for (i = 0; i < pad; i++) { 69 + index = chars + terminator + i; 70 + KUNIT_ASSERT_EQ_MSG(test, buf[index], '\0', 71 + "padding missing at index: %d", i); 72 + } 73 + 74 + nr_bytes_poison = sizeof(buf) - chars - terminator - pad; 75 + for (i = 0; i < nr_bytes_poison; i++) { 76 + index = sizeof(buf) - 1 - i; /* Check from the end back */ 77 + KUNIT_ASSERT_EQ_MSG(test, buf[index], POISON, 78 + "poison value missing at index: %d", i); 79 + } 80 + } 81 + 82 + static void strscpy_test(struct kunit *test) 83 + { 84 + char dest[8]; 85 + 86 + /* 87 + * tc() uses a destination buffer of size 6 and needs at 88 + * least 2 characters spare (one for null and one to check for 89 + * overflow). This means we should only call tc() with 90 + * strings up to a maximum of 4 characters long and 'count' 91 + * should not exceed 4. To test with longer strings increase 92 + * the buffer size in tc(). 
93 + */ 94 + 95 + /* tc(test, src, count, expected, chars, terminator, pad) */ 96 + tc(test, "a", 0, -E2BIG, 0, 0, 0); 97 + tc(test, "", 0, -E2BIG, 0, 0, 0); 98 + 99 + tc(test, "a", 1, -E2BIG, 0, 1, 0); 100 + tc(test, "", 1, 0, 0, 1, 0); 101 + 102 + tc(test, "ab", 2, -E2BIG, 1, 1, 0); 103 + tc(test, "a", 2, 1, 1, 1, 0); 104 + tc(test, "", 2, 0, 0, 1, 1); 105 + 106 + tc(test, "abc", 3, -E2BIG, 2, 1, 0); 107 + tc(test, "ab", 3, 2, 2, 1, 0); 108 + tc(test, "a", 3, 1, 1, 1, 1); 109 + tc(test, "", 3, 0, 0, 1, 2); 110 + 111 + tc(test, "abcd", 4, -E2BIG, 3, 1, 0); 112 + tc(test, "abc", 4, 3, 3, 1, 0); 113 + tc(test, "ab", 4, 2, 2, 1, 1); 114 + tc(test, "a", 4, 1, 1, 1, 2); 115 + tc(test, "", 4, 0, 0, 1, 3); 116 + 117 + /* Compile-time-known source strings. */ 118 + KUNIT_EXPECT_EQ(test, strscpy(dest, "", ARRAY_SIZE(dest)), 0); 119 + KUNIT_EXPECT_EQ(test, strscpy(dest, "", 3), 0); 120 + KUNIT_EXPECT_EQ(test, strscpy(dest, "", 1), 0); 121 + KUNIT_EXPECT_EQ(test, strscpy(dest, "", 0), -E2BIG); 122 + KUNIT_EXPECT_EQ(test, strscpy(dest, "Fixed", ARRAY_SIZE(dest)), 5); 123 + KUNIT_EXPECT_EQ(test, strscpy(dest, "Fixed", 3), -E2BIG); 124 + KUNIT_EXPECT_EQ(test, strscpy(dest, "Fixed", 1), -E2BIG); 125 + KUNIT_EXPECT_EQ(test, strscpy(dest, "Fixed", 0), -E2BIG); 126 + KUNIT_EXPECT_EQ(test, strscpy(dest, "This is too long", ARRAY_SIZE(dest)), -E2BIG); 127 + } 128 + 129 + static struct kunit_case strscpy_test_cases[] = { 130 + KUNIT_CASE(strscpy_test), 131 + {} 132 + }; 133 + 134 + static struct kunit_suite strscpy_test_suite = { 135 + .name = "strscpy", 136 + .test_cases = strscpy_test_cases, 137 + }; 138 + 139 + kunit_test_suite(strscpy_test_suite); 140 + 141 + MODULE_AUTHOR("Tobin C. Harding <tobin@kernel.org>"); 142 + MODULE_LICENSE("GPL");
lib/test_siphash.c → lib/siphash_kunit.c  +70 -95
··· 13 13 14 14 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 15 15 16 + #include <kunit/test.h> 16 17 #include <linux/siphash.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/string.h> ··· 110 109 }; 111 110 #endif 112 111 113 - static int __init siphash_test_init(void) 112 + #define chk(hash, vector, fmt...) \ 113 + KUNIT_EXPECT_EQ_MSG(test, hash, vector, fmt) 114 + 115 + static void siphash_test(struct kunit *test) 114 116 { 115 117 u8 in[64] __aligned(SIPHASH_ALIGNMENT); 116 118 u8 in_unaligned[65] __aligned(SIPHASH_ALIGNMENT); 117 119 u8 i; 118 - int ret = 0; 119 120 120 121 for (i = 0; i < 64; ++i) { 121 122 in[i] = i; 122 123 in_unaligned[i + 1] = i; 123 - if (siphash(in, i, &test_key_siphash) != 124 - test_vectors_siphash[i]) { 125 - pr_info("siphash self-test aligned %u: FAIL\n", i + 1); 126 - ret = -EINVAL; 127 - } 128 - if (siphash(in_unaligned + 1, i, &test_key_siphash) != 129 - test_vectors_siphash[i]) { 130 - pr_info("siphash self-test unaligned %u: FAIL\n", i + 1); 131 - ret = -EINVAL; 132 - } 133 - if (hsiphash(in, i, &test_key_hsiphash) != 134 - test_vectors_hsiphash[i]) { 135 - pr_info("hsiphash self-test aligned %u: FAIL\n", i + 1); 136 - ret = -EINVAL; 137 - } 138 - if (hsiphash(in_unaligned + 1, i, &test_key_hsiphash) != 139 - test_vectors_hsiphash[i]) { 140 - pr_info("hsiphash self-test unaligned %u: FAIL\n", i + 1); 141 - ret = -EINVAL; 142 - } 124 + chk(siphash(in, i, &test_key_siphash), 125 + test_vectors_siphash[i], 126 + "siphash self-test aligned %u: FAIL", i + 1); 127 + chk(siphash(in_unaligned + 1, i, &test_key_siphash), 128 + test_vectors_siphash[i], 129 + "siphash self-test unaligned %u: FAIL", i + 1); 130 + chk(hsiphash(in, i, &test_key_hsiphash), 131 + test_vectors_hsiphash[i], 132 + "hsiphash self-test aligned %u: FAIL", i + 1); 133 + chk(hsiphash(in_unaligned + 1, i, &test_key_hsiphash), 134 + test_vectors_hsiphash[i], 135 + "hsiphash self-test unaligned %u: FAIL", i + 1); 143 136 } 144 - if 
-	if (siphash_1u64(0x0706050403020100ULL, &test_key_siphash) !=
-	    test_vectors_siphash[8]) {
-		pr_info("siphash self-test 1u64: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
-			 &test_key_siphash) != test_vectors_siphash[16]) {
-		pr_info("siphash self-test 2u64: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
-			 0x1716151413121110ULL, &test_key_siphash) !=
-	    test_vectors_siphash[24]) {
-		pr_info("siphash self-test 3u64: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
+	chk(siphash_1u64(0x0706050403020100ULL, &test_key_siphash),
+	    test_vectors_siphash[8],
+	    "siphash self-test 1u64: FAIL");
+	chk(siphash_2u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
+	    &test_key_siphash),
+	    test_vectors_siphash[16],
+	    "siphash self-test 2u64: FAIL");
+	chk(siphash_3u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
+	    0x1716151413121110ULL, &test_key_siphash),
+	    test_vectors_siphash[24],
+	    "siphash self-test 3u64: FAIL");
+	chk(siphash_4u64(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL,
 	    0x1716151413121110ULL, 0x1f1e1d1c1b1a1918ULL,
-	    &test_key_siphash) != test_vectors_siphash[32]) {
-		pr_info("siphash self-test 4u64: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (siphash_1u32(0x03020100U, &test_key_siphash) !=
-	    test_vectors_siphash[4]) {
-		pr_info("siphash self-test 1u32: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (siphash_2u32(0x03020100U, 0x07060504U, &test_key_siphash) !=
-	    test_vectors_siphash[8]) {
-		pr_info("siphash self-test 2u32: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (siphash_3u32(0x03020100U, 0x07060504U,
-			 0x0b0a0908U, &test_key_siphash) !=
-	    test_vectors_siphash[12]) {
-		pr_info("siphash self-test 3u32: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (siphash_4u32(0x03020100U, 0x07060504U,
-			 0x0b0a0908U, 0x0f0e0d0cU, &test_key_siphash) !=
-	    test_vectors_siphash[16]) {
-		pr_info("siphash self-test 4u32: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (hsiphash_1u32(0x03020100U, &test_key_hsiphash) !=
-	    test_vectors_hsiphash[4]) {
-		pr_info("hsiphash self-test 1u32: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (hsiphash_2u32(0x03020100U, 0x07060504U, &test_key_hsiphash) !=
-	    test_vectors_hsiphash[8]) {
-		pr_info("hsiphash self-test 2u32: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (hsiphash_3u32(0x03020100U, 0x07060504U,
-			  0x0b0a0908U, &test_key_hsiphash) !=
-	    test_vectors_hsiphash[12]) {
-		pr_info("hsiphash self-test 3u32: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (hsiphash_4u32(0x03020100U, 0x07060504U,
-			  0x0b0a0908U, 0x0f0e0d0cU, &test_key_hsiphash) !=
-	    test_vectors_hsiphash[16]) {
-		pr_info("hsiphash self-test 4u32: FAIL\n");
-		ret = -EINVAL;
-	}
-	if (!ret)
-		pr_info("self-tests: pass\n");
-	return ret;
+	    &test_key_siphash),
+	    test_vectors_siphash[32],
+	    "siphash self-test 4u64: FAIL");
+	chk(siphash_1u32(0x03020100U, &test_key_siphash),
+	    test_vectors_siphash[4],
+	    "siphash self-test 1u32: FAIL");
+	chk(siphash_2u32(0x03020100U, 0x07060504U, &test_key_siphash),
+	    test_vectors_siphash[8],
+	    "siphash self-test 2u32: FAIL");
+	chk(siphash_3u32(0x03020100U, 0x07060504U,
+	    0x0b0a0908U, &test_key_siphash),
+	    test_vectors_siphash[12],
+	    "siphash self-test 3u32: FAIL");
+	chk(siphash_4u32(0x03020100U, 0x07060504U,
+	    0x0b0a0908U, 0x0f0e0d0cU, &test_key_siphash),
+	    test_vectors_siphash[16],
+	    "siphash self-test 4u32: FAIL");
+	chk(hsiphash_1u32(0x03020100U, &test_key_hsiphash),
+	    test_vectors_hsiphash[4],
+	    "hsiphash self-test 1u32: FAIL");
+	chk(hsiphash_2u32(0x03020100U, 0x07060504U, &test_key_hsiphash),
+	    test_vectors_hsiphash[8],
+	    "hsiphash self-test 2u32: FAIL");
+	chk(hsiphash_3u32(0x03020100U, 0x07060504U,
+	    0x0b0a0908U, &test_key_hsiphash),
+	    test_vectors_hsiphash[12],
+	    "hsiphash self-test 3u32: FAIL");
+	chk(hsiphash_4u32(0x03020100U, 0x07060504U,
+	    0x0b0a0908U, 0x0f0e0d0cU, &test_key_hsiphash),
+	    test_vectors_hsiphash[16],
+	    "hsiphash self-test 4u32: FAIL");
 }
 
-static void __exit siphash_test_exit(void)
-{
-}
+static struct kunit_case siphash_test_cases[] = {
+	KUNIT_CASE(siphash_test),
+	{}
+};
 
-module_init(siphash_test_init);
-module_exit(siphash_test_exit);
+static struct kunit_suite siphash_test_suite = {
+	.name = "siphash",
+	.test_cases = siphash_test_cases,
+};
+
+kunit_test_suite(siphash_test_suite);
 
 MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
 MODULE_LICENSE("Dual BSD/GPL");
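The `chk()` helper the converted test relies on is defined earlier in the file and not shown in this hunk; presumably it wraps a KUnit expectation such as KUNIT_EXPECT_EQ_MSG. As a rough userspace sketch of the same check-against-vector pattern (the `chk` expansion and `toy_hash` are illustrative assumptions, not the kernel's code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the KUnit chk() pattern: compare a hash
 * result against its expected test vector, printing the label and
 * failing on mismatch. The real macro presumably expands to
 * KUNIT_EXPECT_EQ_MSG(test, hash, vector, ...). */
#define chk(hash, vector, label)				\
	do {							\
		if ((hash) != (vector))				\
			fprintf(stderr, "%s\n", (label));	\
		assert((hash) == (vector));			\
	} while (0)

/* Toy 64-bit mixer standing in for siphash_1u64() so the sketch is
 * self-contained; it is NOT SipHash. */
static uint64_t toy_hash(uint64_t x)
{
	return x * 0x9e3779b97f4a7c15ULL;
}
```

A failing `chk()` reports the label and trips the expectation, matching the behaviour of the replaced if/pr_info/-EINVAL blocks in one line per vector.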
-150
lib/test_strscpy.c
···
-// SPDX-License-Identifier: GPL-2.0+
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/string.h>
-
-#include "../tools/testing/selftests/kselftest_module.h"
-
-/*
- * Kernel module for testing 'strscpy' family of functions.
- */
-
-KSTM_MODULE_GLOBALS();
-
-/*
- * tc() - Run a specific test case.
- * @src: Source string, argument to strscpy_pad()
- * @count: Size of destination buffer, argument to strscpy_pad()
- * @expected: Expected return value from call to strscpy_pad()
- * @terminator: 1 if there should be a terminating null byte 0 otherwise.
- * @chars: Number of characters from the src string expected to be
- *         written to the dst buffer.
- * @pad: Number of pad characters expected (in the tail of dst buffer).
- *       (@pad does not include the null terminator byte.)
- *
- * Calls strscpy_pad() and verifies the return value and state of the
- * destination buffer after the call returns.
- */
-static int __init tc(char *src, int count, int expected,
-		     int chars, int terminator, int pad)
-{
-	int nr_bytes_poison;
-	int max_expected;
-	int max_count;
-	int written;
-	char buf[6];
-	int index, i;
-	const char POISON = 'z';
-
-	total_tests++;
-
-	if (!src) {
-		pr_err("null source string not supported\n");
-		return -1;
-	}
-
-	memset(buf, POISON, sizeof(buf));
-	/* Future proofing test suite, validate args */
-	max_count = sizeof(buf) - 2; /* Space for null and to verify overflow */
-	max_expected = count - 1;    /* Space for the null */
-	if (count > max_count) {
-		pr_err("count (%d) is too big (%d) ... aborting", count, max_count);
-		return -1;
-	}
-	if (expected > max_expected) {
-		pr_warn("expected (%d) is bigger than can possibly be returned (%d)",
-			expected, max_expected);
-	}
-
-	written = strscpy_pad(buf, src, count);
-	if ((written) != (expected)) {
-		pr_err("%d != %d (written, expected)\n", written, expected);
-		goto fail;
-	}
-
-	if (count && written == -E2BIG) {
-		if (strncmp(buf, src, count - 1) != 0) {
-			pr_err("buffer state invalid for -E2BIG\n");
-			goto fail;
-		}
-		if (buf[count - 1] != '\0') {
-			pr_err("too big string is not null terminated correctly\n");
-			goto fail;
-		}
-	}
-
-	for (i = 0; i < chars; i++) {
-		if (buf[i] != src[i]) {
-			pr_err("buf[i]==%c != src[i]==%c\n", buf[i], src[i]);
-			goto fail;
-		}
-	}
-
-	if (terminator) {
-		if (buf[count - 1] != '\0') {
-			pr_err("string is not null terminated correctly\n");
-			goto fail;
-		}
-	}
-
-	for (i = 0; i < pad; i++) {
-		index = chars + terminator + i;
-		if (buf[index] != '\0') {
-			pr_err("padding missing at index: %d\n", i);
-			goto fail;
-		}
-	}
-
-	nr_bytes_poison = sizeof(buf) - chars - terminator - pad;
-	for (i = 0; i < nr_bytes_poison; i++) {
-		index = sizeof(buf) - 1 - i; /* Check from the end back */
-		if (buf[index] != POISON) {
-			pr_err("poison value missing at index: %d\n", i);
-			goto fail;
-		}
-	}
-
-	return 0;
-fail:
-	failed_tests++;
-	return -1;
-}
-
-static void __init selftest(void)
-{
-	/*
-	 * tc() uses a destination buffer of size 6 and needs at
-	 * least 2 characters spare (one for null and one to check for
-	 * overflow). This means we should only call tc() with
-	 * strings up to a maximum of 4 characters long and 'count'
-	 * should not exceed 4. To test with longer strings increase
-	 * the buffer size in tc().
-	 */
-
-	/* tc(src, count, expected, chars, terminator, pad) */
-	KSTM_CHECK_ZERO(tc("a", 0, -E2BIG, 0, 0, 0));
-	KSTM_CHECK_ZERO(tc("", 0, -E2BIG, 0, 0, 0));
-
-	KSTM_CHECK_ZERO(tc("a", 1, -E2BIG, 0, 1, 0));
-	KSTM_CHECK_ZERO(tc("", 1, 0, 0, 1, 0));
-
-	KSTM_CHECK_ZERO(tc("ab", 2, -E2BIG, 1, 1, 0));
-	KSTM_CHECK_ZERO(tc("a", 2, 1, 1, 1, 0));
-	KSTM_CHECK_ZERO(tc("", 2, 0, 0, 1, 1));
-
-	KSTM_CHECK_ZERO(tc("abc", 3, -E2BIG, 2, 1, 0));
-	KSTM_CHECK_ZERO(tc("ab", 3, 2, 2, 1, 0));
-	KSTM_CHECK_ZERO(tc("a", 3, 1, 1, 1, 1));
-	KSTM_CHECK_ZERO(tc("", 3, 0, 0, 1, 2));
-
-	KSTM_CHECK_ZERO(tc("abcd", 4, -E2BIG, 3, 1, 0));
-	KSTM_CHECK_ZERO(tc("abc", 4, 3, 3, 1, 0));
-	KSTM_CHECK_ZERO(tc("ab", 4, 2, 2, 1, 1));
-	KSTM_CHECK_ZERO(tc("a", 4, 1, 1, 1, 2));
-	KSTM_CHECK_ZERO(tc("", 4, 0, 0, 1, 3));
-}
-
-KSTM_MODULE_LOADERS(test_strscpy);
-MODULE_AUTHOR("Tobin C. Harding <tobin@kernel.org>");
-MODULE_LICENSE("GPL");
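The deleted kselftest (the changelog says strscpy testing moved to KUnit) encodes strscpy_pad()'s contract in its tc() table: return the copied length, or -E2BIG on truncation, always NUL-terminate when count > 0, and zero-fill the tail. A minimal userspace re-implementation of that contract (my own sketch, not the kernel's code; `E2BIG_SKETCH` stands in for the kernel's errno value):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define E2BIG_SKETCH 7	/* stand-in for the kernel's -E2BIG */

/* Userspace sketch of strscpy_pad(): copy at most count-1 bytes,
 * always NUL-terminate when count > 0, zero-pad the rest of dst, and
 * return the number of bytes copied or -E2BIG_SKETCH on truncation. */
static long strscpy_pad_sketch(char *dst, const char *src, size_t count)
{
	size_t len;

	if (count == 0)
		return -E2BIG_SKETCH;

	len = strlen(src);
	if (len >= count) {
		/* Truncate: copy what fits, terminate, report overflow. */
		memcpy(dst, src, count - 1);
		dst[count - 1] = '\0';
		return -E2BIG_SKETCH;
	}
	/* Fits: copy, then zero the NUL terminator plus the padding. */
	memcpy(dst, src, len);
	memset(dst + len, 0, count - len);
	return (long)len;
}
```

With a 4-byte count this reproduces the tc() table rows above, e.g. `tc("ab", 4, 2, ...)` copies two characters plus a NUL and one pad byte, while `tc("abcd", 4, -E2BIG, ...)` leaves `"abc"` terminated.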
+1 -2
lib/ubsan.c
···
 
 	current->in_ubsan--;
 
-	if (panic_on_warn)
-		panic("panic_on_warn set ...\n");
+	check_panic_on_warn("UBSAN");
 }
 
 void __ubsan_handle_divrem_overflow(void *_data, void *lhs, void *rhs)
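UBSAN, KASAN, and KFENCE all switch their open-coded `panic_on_warn` checks to the `check_panic_on_warn()` helper consolidated by this series, which also ties into the new warn_limit accounting described in the changelog. A rough userspace sketch of what such a helper does (names, the return-value convention, and the "warn_limit == 0 means off" rule are assumptions here; the real helper calls panic() and does not return):

```c
#include <assert.h>
#include <stdio.h>

/* Assumed stand-ins for the kernel knobs: warn_limit of 0 means the
 * limit is disabled, matching the changelog's "default off". */
static int panic_on_warn;
static unsigned int warn_limit;
static unsigned int warn_count;

/* Sketch of check_panic_on_warn(): returns 1 where the kernel would
 * panic (with the origin subsystem in the message), 0 otherwise. */
static int check_panic_on_warn_sketch(const char *origin)
{
	++warn_count;
	if (panic_on_warn) {
		fprintf(stderr, "%s: panic_on_warn set ...\n", origin);
		return 1;
	}
	if (warn_limit && warn_count >= warn_limit) {
		fprintf(stderr, "%s: system warned too often\n", origin);
		return 1;
	}
	return 0;
}
```

Centralizing the check means each caller passes only its origin string ("UBSAN", "KASAN", "KFENCE"), instead of duplicating the panic logic.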
+13 -6
mm/kasan/kasan_test.c
···
 		KUNIT_EXPECT_KASAN_FAIL(test, *(volatile char *)p);
 }
 
-/* Check that ksize() makes the whole object accessible. */
+/* Check that ksize() does NOT unpoison whole object. */
 static void ksize_unpoisons_memory(struct kunit *test)
 {
 	char *ptr;
-	size_t size = 123, real_size;
+	size_t size = 128 - KASAN_GRANULE_SIZE - 5;
+	size_t real_size;
 
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
 	real_size = ksize(ptr);
+	KUNIT_EXPECT_GT(test, real_size, size);
 
 	OPTIMIZER_HIDE_VAR(ptr);
 
-	/* This access shouldn't trigger a KASAN report. */
-	ptr[size] = 'x';
+	/* These accesses shouldn't trigger a KASAN report. */
+	ptr[0] = 'x';
+	ptr[size - 1] = 'x';
 
-	/* This one must. */
-	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size]);
+	/* These must trigger a KASAN report. */
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
+	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size + 5]);
+	KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[real_size - 1]);
 
 	kfree(ptr);
 }
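The updated test's size choice, `128 - KASAN_GRANULE_SIZE - 5`, deliberately leaves the object's end inside a shadow granule so that generic KASAN can flag the first out-of-bounds byte at `ptr[size]`. KASAN tracks validity at granule resolution, so the poisoned boundary is effectively the size rounded up to a granule. A sketch of that rounding (the granule size of 8 is the generic-mode value; hardware tag-based modes use a larger granule):

```c
#include <assert.h>
#include <stddef.h>

#define KASAN_GRANULE_SIZE_SKETCH 8	/* generic KASAN granule */

/* Round an allocation size up to the KASAN granule: accesses past the
 * object but before this boundary are only catchable by generic
 * KASAN's partial-granule shadow encoding, which is why the test
 * guards ptr[size] with IS_ENABLED(CONFIG_KASAN_GENERIC). */
static size_t kasan_granule_round_up_sketch(size_t size)
{
	return (size + KASAN_GRANULE_SIZE_SKETCH - 1) &
	       ~((size_t)KASAN_GRANULE_SIZE_SKETCH - 1);
}
```

For the test's size of 115 (with an 8-byte granule), the granule boundary is 120, so `ptr[size + 5]` lands in a fully poisoned granule that every KASAN mode can report.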
+2 -2
mm/kasan/report.c
···
 		(unsigned long)addr);
 	pr_err("==================================================================\n");
 	spin_unlock_irqrestore(&report_lock, *flags);
-	if (panic_on_warn && !test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
-		panic("panic_on_warn set ...\n");
+	if (!test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
+		check_panic_on_warn("KASAN");
 	if (kasan_arg_fault == KASAN_ARG_FAULT_PANIC)
 		panic("kasan.fault=panic set ...\n");
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
+1 -2
mm/kfence/report.c
···
 
 	lockdep_on();
 
-	if (panic_on_warn)
-		panic("panic_on_warn set ...\n");
+	check_panic_on_warn("KFENCE");
 
 	/* We encountered a memory safety error, taint the kernel! */
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_STILL_OK);
+10 -16
mm/slab_common.c
···
 	void *ret;
 	size_t ks;
 
-	/* Don't use instrumented ksize to allow precise KASAN poisoning. */
+	/* Check for double-free before calling ksize. */
 	if (likely(!ZERO_OR_NULL_PTR(p))) {
 		if (!kasan_check_byte(p))
 			return NULL;
-		ks = kfence_ksize(p) ?: __ksize(p);
+		ks = ksize(p);
 	} else
 		ks = 0;
 
···
 	void *mem = (void *)p;
 
 	ks = ksize(mem);
-	if (ks)
+	if (ks) {
+		kasan_unpoison_range(mem, ks);
 		memzero_explicit(mem, ks);
+	}
 	kfree(mem);
 }
 EXPORT_SYMBOL(kfree_sensitive);
 
 size_t ksize(const void *objp)
 {
-	size_t size;
-
 	/*
-	 * We need to first check that the pointer to the object is valid, and
-	 * only then unpoison the memory. The report printed from ksize() is
-	 * more useful, then when it's printed later when the behaviour could
-	 * be undefined due to a potential use-after-free or double-free.
+	 * We need to first check that the pointer to the object is valid.
+	 * The KASAN report printed from ksize() is more useful, then when
+	 * it's printed later when the behaviour could be undefined due to
+	 * a potential use-after-free or double-free.
 	 *
 	 * We use kasan_check_byte(), which is supported for the hardware
 	 * tag-based KASAN mode, unlike kasan_check_read/write().
···
 	if (unlikely(ZERO_OR_NULL_PTR(objp)) || !kasan_check_byte(objp))
 		return 0;
 
-	size = kfence_ksize(objp) ?: __ksize(objp);
-	/*
-	 * We assume that ksize callers could use whole allocated area,
-	 * so we need to unpoison this area.
-	 */
-	kasan_unpoison_range(objp, size);
-	return size;
+	return kfence_ksize(objp) ?: __ksize(objp);
 }
 EXPORT_SYMBOL(ksize);
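With the unpoisoning side effect removed from ksize(), the changelog says callers that want to exploit allocation slack are expected to ask for it up front with kmalloc_size_roundup() and then allocate the rounded size. A hedged sketch of that usage pattern, with a power-of-two roundup standing in for the real kmalloc bucket lookup (the actual buckets also include non-power-of-two sizes such as 96 and 192):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for kmalloc_size_roundup(): the real function returns the
 * usable size of the kmalloc bucket the request would land in; a
 * power-of-two roundup (minimum 8) approximates that here. */
static size_t kmalloc_size_roundup_sketch(size_t size)
{
	size_t bucket = 8;

	while (bucket < size)
		bucket <<= 1;
	return bucket;
}

/* Usage pattern after this series: round the request up first, then
 * allocate and track the full usable size, rather than calling
 * kmalloc(size) and probing the slack with ksize() afterwards. */
```

Because the caller records the rounded size at allocation time, KASAN can keep the tail precisely poisoned for everyone who did not opt in.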
+1 -1
net/dns_resolver/dns_key.c
···
 	 * this is used to prevent malicious redirections from being installed
 	 * with add_key().
 	 */
-	cred = prepare_kernel_cred(NULL);
+	cred = prepare_kernel_cred(&init_task);
 	if (!cred)
 		return -ENOMEM;
+5 -1
scripts/kernel-doc
···
 	foreach my $arg (split($splitter, $args)) {
 		# strip comments
 		$arg =~ s/\/\*.*\*\///;
+		# ignore argument attributes
+		$arg =~ s/\sPOS0?\s/ /;
 		# strip leading/trailing spaces
 		$arg =~ s/^\s*//;
 		$arg =~ s/\s*$//;
···
 	$prototype =~ s/^__inline +//;
 	$prototype =~ s/^__always_inline +//;
 	$prototype =~ s/^noinline +//;
+	$prototype =~ s/^__FORTIFY_INLINE +//;
 	$prototype =~ s/__init +//;
 	$prototype =~ s/__init_or_module +//;
 	$prototype =~ s/__deprecated +//;
···
 	$prototype =~ s/__weak +//;
 	$prototype =~ s/__sched +//;
 	$prototype =~ s/__printf\s*\(\s*\d*\s*,\s*\d*\s*\) +//;
-	$prototype =~ s/__alloc_size\s*\(\s*\d+\s*(?:,\s*\d+\s*)?\) +//;
+	$prototype =~ s/__(?:re)?alloc_size\s*\(\s*\d+\s*(?:,\s*\d+\s*)?\) +//;
+	$prototype =~ s/__diagnose_as\s*\(\s*\S+\s*(?:,\s*\d+\s*)*\) +//;
 	my $define = $prototype =~ s/^#\s*define\s+//; #ak added
 	$prototype =~ s/__attribute_const__ +//;
 	$prototype =~ s/__attribute__\s*\(\(