Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mm-hotfixes-stable-2024-06-17-11-43' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
"Mainly MM singleton fixes. And a couple of ocfs2 regression fixes"

* tag 'mm-hotfixes-stable-2024-06-17-11-43' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
kcov: don't lose track of remote references during softirqs
mm: shmem: fix getting incorrect lruvec when replacing a shmem folio
mm/debug_vm_pgtable: drop RANDOM_ORVALUE trick
mm: fix possible OOB in numa_rebuild_large_mapping()
mm/migrate: fix kernel BUG at mm/compaction.c:2761!
selftests: mm: make map_fixed_noreplace test names stable
mm/memfd: add documentation for MFD_NOEXEC_SEAL MFD_EXEC
mm: mmap: allow for the maximum number of bits for randomizing mmap_base by default
gcov: add support for GCC 14
zap_pid_ns_processes: clear TIF_NOTIFY_SIGNAL along with TIF_SIGPENDING
mm: huge_memory: fix misused mapping_large_folio_support() for anon folios
lib/alloc_tag: fix RCU imbalance in pgalloc_tag_get()
lib/alloc_tag: do not register sysctl interface when CONFIG_SYSCTL=n
MAINTAINERS: remove Lorenzo as vmalloc reviewer
Revert "mm: init_mlocked_on_free_v3"
mm/page_table_check: fix crash on ZONE_DEVICE
gcc: disable '-Warray-bounds' for gcc-9
ocfs2: fix NULL pointer dereference in ocfs2_abort_trigger()
ocfs2: fix NULL pointer dereference in ocfs2_journal_dirty()

+345 -222
-6
Documentation/admin-guide/kernel-parameters.txt
···
 			Format: 0 | 1
 			Default set by CONFIG_INIT_ON_FREE_DEFAULT_ON.

-	init_mlocked_on_free=	[MM] Fill freed userspace memory with zeroes if
-			it was mlock'ed and not explicitly munlock'ed
-			afterwards.
-			Format: 0 | 1
-			Default set by CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON
-
 	init_pkru=	[X86] Specify the default memory protection keys rights
 			register contents for all processes. 0x55555554 by
 			default (disallow access to all but pkey 0). Can
+1
Documentation/userspace-api/index.rst
···
    seccomp_filter
    landlock
    lsm
+   mfd_noexec
    spec_ctrl
    tee
+86
Documentation/userspace-api/mfd_noexec.rst
···
+.. SPDX-License-Identifier: GPL-2.0
+
+==================================
+Introduction of non-executable mfd
+==================================
+:Author:
+    Daniel Verkamp <dverkamp@chromium.org>
+    Jeff Xu <jeffxu@chromium.org>
+
+:Contributor:
+    Aleksa Sarai <cyphar@cyphar.com>
+
+Since Linux introduced the memfd feature, memfds have always had their
+execute bit set, and the memfd_create() syscall doesn't allow setting
+it differently.
+
+However, in a secure-by-default system, such as ChromeOS, (where all
+executables should come from the rootfs, which is protected by verified
+boot), this executable nature of memfd opens a door for NoExec bypass
+and enables “confused deputy attack”. E.g, in VRP bug [1]: cros_vm
+process created a memfd to share the content with an external process,
+however the memfd is overwritten and used for executing arbitrary code
+and root escalation. [2] lists more VRP of this kind.
+
+On the other hand, executable memfd has its legit use: runc uses memfd’s
+seal and executable feature to copy the contents of the binary then
+execute them. For such a system, we need a solution to differentiate runc's
+use of executable memfds and an attacker's [3].
+
+To address those above:
+
+- Let memfd_create() set X bit at creation time.
+- Let memfd be sealed for modifying X bit when NX is set.
+- Add a new pid namespace sysctl: vm.memfd_noexec to help applications in
+  migrating and enforcing non-executable MFD.
+
+User API
+========
+``int memfd_create(const char *name, unsigned int flags)``
+
+``MFD_NOEXEC_SEAL``
+	When MFD_NOEXEC_SEAL bit is set in the ``flags``, memfd is created
+	with NX. F_SEAL_EXEC is set and the memfd can't be modified to
+	add X later. MFD_ALLOW_SEALING is also implied.
+	This is the most common case for the application to use memfd.
+
+``MFD_EXEC``
+	When MFD_EXEC bit is set in the ``flags``, memfd is created with X.
+
+Note:
+	``MFD_NOEXEC_SEAL`` implies ``MFD_ALLOW_SEALING``. In case that
+	an app doesn't want sealing, it can add F_SEAL_SEAL after creation.
+
+
+Sysctl:
+========
+``pid namespaced sysctl vm.memfd_noexec``
+
+The new pid namespaced sysctl vm.memfd_noexec has 3 values:
+
+- 0: MEMFD_NOEXEC_SCOPE_EXEC
+	memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL acts like
+	MFD_EXEC was set.
+
+- 1: MEMFD_NOEXEC_SCOPE_NOEXEC_SEAL
+	memfd_create() without MFD_EXEC nor MFD_NOEXEC_SEAL acts like
+	MFD_NOEXEC_SEAL was set.
+
+- 2: MEMFD_NOEXEC_SCOPE_NOEXEC_ENFORCED
+	memfd_create() without MFD_NOEXEC_SEAL will be rejected.
+
+The sysctl allows finer control of memfd_create for old software that
+doesn't set the executable bit; for example, a container with
+vm.memfd_noexec=1 means the old software will create non-executable memfd
+by default while new software can create executable memfd by setting
+MFD_EXEC.
+
+The value of vm.memfd_noexec is passed to child namespace at creation
+time. In addition, the setting is hierarchical, i.e. during memfd_create,
+we will search from current ns to root ns and use the most restrictive
+setting.
+
+[1] https://crbug.com/1305267
+
+[2] https://bugs.chromium.org/p/chromium/issues/list?q=type%3Dbug-security%20memfd%20escalation&can=1
+
+[3] https://lwn.net/Articles/781013/
-1
MAINTAINERS
···
 M:	Andrew Morton <akpm@linux-foundation.org>
 R:	Uladzislau Rezki <urezki@gmail.com>
 R:	Christoph Hellwig <hch@infradead.org>
-R:	Lorenzo Stoakes <lstoakes@gmail.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 W:	http://www.linux-mm.org
+12
arch/Kconfig
···
 config ARCH_MMAP_RND_BITS_DEFAULT
 	int

+config FORCE_MAX_MMAP_RND_BITS
+	bool "Force maximum number of bits to use for ASLR of mmap base address"
+	default y if !64BIT
+	help
+	  ARCH_MMAP_RND_BITS and ARCH_MMAP_RND_COMPAT_BITS represent the number
+	  of bits to use for ASLR and if no custom value is assigned (EXPERT)
+	  then the architecture's lower bound (minimum) value is assumed.
+	  This toggle changes that default assumption to assume the arch upper
+	  bound (maximum) value instead.
+
 config ARCH_MMAP_RND_BITS
 	int "Number of bits to use for ASLR of mmap base address" if EXPERT
 	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
 	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
+	default ARCH_MMAP_RND_BITS_MAX if FORCE_MAX_MMAP_RND_BITS
 	default ARCH_MMAP_RND_BITS_MIN
 	depends on HAVE_ARCH_MMAP_RND_BITS
 	help
···
 	int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
 	range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
 	default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
+	default ARCH_MMAP_RND_COMPAT_BITS_MAX if FORCE_MAX_MMAP_RND_BITS
 	default ARCH_MMAP_RND_COMPAT_BITS_MIN
 	depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
 	help
+107 -85
fs/ocfs2/journal.c
···
 	return status;
 }

-
-struct ocfs2_triggers {
-	struct jbd2_buffer_trigger_type	ot_triggers;
-	int				ot_offset;
-};
-
 static inline struct ocfs2_triggers *to_ocfs2_trigger(struct jbd2_buffer_trigger_type *triggers)
 {
 	return container_of(triggers, struct ocfs2_triggers, ot_triggers);
···
 static void ocfs2_abort_trigger(struct jbd2_buffer_trigger_type *triggers,
 				struct buffer_head *bh)
 {
+	struct ocfs2_triggers *ot = to_ocfs2_trigger(triggers);
+
 	mlog(ML_ERROR,
 	     "ocfs2_abort_trigger called by JBD2. bh = 0x%lx, "
 	     "bh->b_blocknr = %llu\n",
 	     (unsigned long)bh,
 	     (unsigned long long)bh->b_blocknr);

-	ocfs2_error(bh->b_assoc_map->host->i_sb,
+	ocfs2_error(ot->sb,
 		    "JBD2 has aborted our journal, ocfs2 cannot continue\n");
 }

-static struct ocfs2_triggers di_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-	.ot_offset	= offsetof(struct ocfs2_dinode, i_check),
-};
-
-static struct ocfs2_triggers eb_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-	.ot_offset	= offsetof(struct ocfs2_extent_block, h_check),
-};
-
-static struct ocfs2_triggers rb_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-	.ot_offset	= offsetof(struct ocfs2_refcount_block, rf_check),
-};
-
-static struct ocfs2_triggers gd_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-	.ot_offset	= offsetof(struct ocfs2_group_desc, bg_check),
-};
-
-static struct ocfs2_triggers db_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_db_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-};
-
-static struct ocfs2_triggers xb_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-	.ot_offset	= offsetof(struct ocfs2_xattr_block, xb_check),
-};
-
-static struct ocfs2_triggers dq_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_dq_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-};
-
-static struct ocfs2_triggers dr_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-	.ot_offset	= offsetof(struct ocfs2_dx_root_block, dr_check),
-};
-
-static struct ocfs2_triggers dl_triggers = {
-	.ot_triggers = {
-		.t_frozen = ocfs2_frozen_trigger,
-		.t_abort = ocfs2_abort_trigger,
-	},
-	.ot_offset	= offsetof(struct ocfs2_dx_leaf, dl_check),
-};
+static void ocfs2_setup_csum_triggers(struct super_block *sb,
+				      enum ocfs2_journal_trigger_type type,
+				      struct ocfs2_triggers *ot)
+{
+	BUG_ON(type >= OCFS2_JOURNAL_TRIGGER_COUNT);
+
+	switch (type) {
+	case OCFS2_JTR_DI:
+		ot->ot_triggers.t_frozen = ocfs2_frozen_trigger;
+		ot->ot_offset = offsetof(struct ocfs2_dinode, i_check);
+		break;
+	case OCFS2_JTR_EB:
+		ot->ot_triggers.t_frozen = ocfs2_frozen_trigger;
+		ot->ot_offset = offsetof(struct ocfs2_extent_block, h_check);
+		break;
+	case OCFS2_JTR_RB:
+		ot->ot_triggers.t_frozen = ocfs2_frozen_trigger;
+		ot->ot_offset = offsetof(struct ocfs2_refcount_block, rf_check);
+		break;
+	case OCFS2_JTR_GD:
+		ot->ot_triggers.t_frozen = ocfs2_frozen_trigger;
+		ot->ot_offset = offsetof(struct ocfs2_group_desc, bg_check);
+		break;
+	case OCFS2_JTR_DB:
+		ot->ot_triggers.t_frozen = ocfs2_db_frozen_trigger;
+		break;
+	case OCFS2_JTR_XB:
+		ot->ot_triggers.t_frozen = ocfs2_frozen_trigger;
+		ot->ot_offset = offsetof(struct ocfs2_xattr_block, xb_check);
+		break;
+	case OCFS2_JTR_DQ:
+		ot->ot_triggers.t_frozen = ocfs2_dq_frozen_trigger;
+		break;
+	case OCFS2_JTR_DR:
+		ot->ot_triggers.t_frozen = ocfs2_frozen_trigger;
+		ot->ot_offset = offsetof(struct ocfs2_dx_root_block, dr_check);
+		break;
+	case OCFS2_JTR_DL:
+		ot->ot_triggers.t_frozen = ocfs2_frozen_trigger;
+		ot->ot_offset = offsetof(struct ocfs2_dx_leaf, dl_check);
+		break;
+	case OCFS2_JTR_NONE:
+		/* To make compiler happy... */
+		return;
+	}
+
+	ot->ot_triggers.t_abort = ocfs2_abort_trigger;
+	ot->sb = sb;
+}
+
+void ocfs2_initialize_journal_triggers(struct super_block *sb,
+				       struct ocfs2_triggers triggers[])
+{
+	enum ocfs2_journal_trigger_type type;
+
+	for (type = OCFS2_JTR_DI; type < OCFS2_JOURNAL_TRIGGER_COUNT; type++)
+		ocfs2_setup_csum_triggers(sb, type, &triggers[type]);
+}

 static int __ocfs2_journal_access(handle_t *handle,
 				  struct ocfs2_caching_info *ci,
···
 int ocfs2_journal_access_di(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &di_triggers, type);
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_DI],
+				      type);
 }

 int ocfs2_journal_access_eb(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &eb_triggers, type);
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_EB],
+				      type);
 }

 int ocfs2_journal_access_rb(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &rb_triggers,
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_RB],
 				      type);
 }

 int ocfs2_journal_access_gd(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &gd_triggers, type);
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_GD],
+				      type);
 }

 int ocfs2_journal_access_db(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &db_triggers, type);
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_DB],
+				      type);
 }

 int ocfs2_journal_access_xb(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &xb_triggers, type);
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_XB],
+				      type);
 }

 int ocfs2_journal_access_dq(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &dq_triggers, type);
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_DQ],
+				      type);
 }

 int ocfs2_journal_access_dr(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &dr_triggers, type);
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_DR],
+				      type);
 }

 int ocfs2_journal_access_dl(handle_t *handle, struct ocfs2_caching_info *ci,
 			    struct buffer_head *bh, int type)
 {
-	return __ocfs2_journal_access(handle, ci, bh, &dl_triggers, type);
+	struct ocfs2_super *osb = OCFS2_SB(ocfs2_metadata_cache_get_super(ci));
+
+	return __ocfs2_journal_access(handle, ci, bh,
+				      &osb->s_journal_triggers[OCFS2_JTR_DL],
+				      type);
 }

 int ocfs2_journal_access(handle_t *handle, struct ocfs2_caching_info *ci,
···
 	if (!is_handle_aborted(handle)) {
 		journal_t *journal = handle->h_transaction->t_journal;

-		mlog(ML_ERROR, "jbd2_journal_dirty_metadata failed. "
-		     "Aborting transaction and journal.\n");
+		mlog(ML_ERROR, "jbd2_journal_dirty_metadata failed: "
+		     "handle type %u started at line %u, credits %u/%u "
+		     "errcode %d. Aborting transaction and journal.\n",
+		     handle->h_type, handle->h_line_no,
+		     handle->h_requested_credits,
+		     jbd2_handle_buffer_credits(handle), status);
 		handle->h_err = status;
 		jbd2_journal_abort_handle(handle);
 		jbd2_journal_abort(journal, status);
-		ocfs2_abort(bh->b_assoc_map->host->i_sb,
-			    "Journal already aborted.\n");
 	}
 }
+27
fs/ocfs2/ocfs2.h
···
 #define OCFS2_OSB_ERROR_FS	0x0004
 #define OCFS2_DEFAULT_ATIME_QUANTUM	60

+struct ocfs2_triggers {
+	struct jbd2_buffer_trigger_type	ot_triggers;
+	int				ot_offset;
+	struct super_block		*sb;
+};
+
+enum ocfs2_journal_trigger_type {
+	OCFS2_JTR_DI,
+	OCFS2_JTR_EB,
+	OCFS2_JTR_RB,
+	OCFS2_JTR_GD,
+	OCFS2_JTR_DB,
+	OCFS2_JTR_XB,
+	OCFS2_JTR_DQ,
+	OCFS2_JTR_DR,
+	OCFS2_JTR_DL,
+	OCFS2_JTR_NONE	/* This must be the last entry */
+};
+
+#define OCFS2_JOURNAL_TRIGGER_COUNT OCFS2_JTR_NONE
+
+void ocfs2_initialize_journal_triggers(struct super_block *sb,
+				       struct ocfs2_triggers triggers[]);
+
 struct ocfs2_journal;
 struct ocfs2_slot_info;
 struct ocfs2_recovery_map;
···
 	wait_queue_head_t checkpoint_event;
 	struct ocfs2_journal *journal;
 	unsigned long osb_commit_interval;
+
+	/* Journal triggers for checksum */
+	struct ocfs2_triggers s_journal_triggers[OCFS2_JOURNAL_TRIGGER_COUNT];

 	struct delayed_work		la_enable_wq;
+3 -1
fs/ocfs2/super.c
···
 	debugfs_create_file("fs_state", S_IFREG|S_IRUSR, osb->osb_debug_root,
 			    osb, &ocfs2_osb_debug_fops);

-	if (ocfs2_meta_ecc(osb))
+	if (ocfs2_meta_ecc(osb)) {
+		ocfs2_initialize_journal_triggers(sb, osb->s_journal_triggers);
 		ocfs2_blockcheck_stats_debugfs_install( &osb->osb_ecc_stats,
 							osb->osb_debug_root);
+	}

 	status = ocfs2_mount_volume(sb);
 	if (status < 0)
+2
include/linux/kcov.h
···
 	KCOV_MODE_TRACE_PC = 2,
 	/* Collecting comparison operands mode. */
 	KCOV_MODE_TRACE_CMP = 3,
+	/* The process owns a KCOV remote reference. */
+	KCOV_MODE_REMOTE = 4,
 };

 #define KCOV_IN_CTXSW	(1 << 30)
+1 -8
include/linux/mm.h
···
 static inline bool want_init_on_free(void)
 {
 	return static_branch_maybe(CONFIG_INIT_ON_FREE_DEFAULT_ON,
-				   &init_on_free);
-}
-
-DECLARE_STATIC_KEY_MAYBE(CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON, init_mlocked_on_free);
-static inline bool want_init_mlocked_on_free(void)
-{
-	return static_branch_maybe(CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON,
-				   &init_mlocked_on_free);
+				   &init_on_free);
 }

 extern bool _debug_pagealloc_enabled_early;
+4
include/linux/pagemap.h
···
  */
 static inline bool mapping_large_folio_support(struct address_space *mapping)
 {
+	/* AS_LARGE_FOLIO_SUPPORT is only reasonable for pagecache folios */
+	VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON,
+			"Anonymous mapping always supports large folio");
+
 	return IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 		test_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
 }
+8 -3
include/linux/pgalloc_tag.h
···
 static inline void put_page_tag_ref(union codetag_ref *ref)
 {
+	if (WARN_ON(!ref))
+		return;
+
 	page_ext_put(page_ext_from_codetag_ref(ref));
 }
···
 		union codetag_ref *ref = get_page_tag_ref(page);

 		alloc_tag_sub_check(ref);
-		if (ref && ref->ct)
-			tag = ct_to_alloc_tag(ref->ct);
-		put_page_tag_ref(ref);
+		if (ref) {
+			if (ref->ct)
+				tag = ct_to_alloc_tag(ref->ct);
+			put_page_tag_ref(ref);
+		}
 	}

 	return tag;
+1 -1
init/Kconfig
···
 config CC_NO_ARRAY_BOUNDS
 	bool
-	default y if CC_IS_GCC && GCC_VERSION >= 100000 && GCC10_NO_ARRAY_BOUNDS
+	default y if CC_IS_GCC && GCC_VERSION >= 90000 && GCC10_NO_ARRAY_BOUNDS

 # Currently, disable -Wstringop-overflow for GCC globally.
 config GCC_NO_STRINGOP_OVERFLOW
+3 -1
kernel/gcov/gcc_4_7.c
···
 #include <linux/mm.h>
 #include "gcov.h"

-#if (__GNUC__ >= 10)
+#if (__GNUC__ >= 14)
+#define GCOV_COUNTERS			9
+#elif (__GNUC__ >= 10)
 #define GCOV_COUNTERS			8
 #elif (__GNUC__ >= 7)
 #define GCOV_COUNTERS			9
+1
kernel/kcov.c
···
 		return -EINVAL;
 	kcov->mode = mode;
 	t->kcov = kcov;
+	t->kcov_mode = KCOV_MODE_REMOTE;
 	kcov->t = t;
 	kcov->remote = true;
 	kcov->remote_size = remote_arg->area_size;
+1
kernel/pid_namespace.c
···
 	 */
 	do {
 		clear_thread_flag(TIF_SIGPENDING);
+		clear_thread_flag(TIF_NOTIFY_SIGNAL);
 		rc = kernel_wait4(-1, NULL, __WALL, NULL);
 	} while (rc != -ECHILD);
+13 -3
lib/alloc_tag.c
···
 };
 EXPORT_SYMBOL(page_alloc_tagging_ops);

+#ifdef CONFIG_SYSCTL
 static struct ctl_table memory_allocation_profiling_sysctls[] = {
 	{
 		.procname	= "mem_profiling",
···
 	{ }
 };

+static void __init sysctl_init(void)
+{
+	if (!mem_profiling_support)
+		memory_allocation_profiling_sysctls[0].mode = 0444;
+
+	register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+}
+#else /* CONFIG_SYSCTL */
+static inline void sysctl_init(void) {}
+#endif /* CONFIG_SYSCTL */
+
 static int __init alloc_tag_init(void)
 {
 	const struct codetag_type_desc desc = {
···
 	if (IS_ERR(alloc_tag_cttype))
 		return PTR_ERR(alloc_tag_cttype);

-	if (!mem_profiling_support)
-		memory_allocation_profiling_sysctls[0].mode = 0444;
-	register_sysctl_init("vm", memory_allocation_profiling_sysctls);
+	sysctl_init();
 	procfs_init();

 	return 0;
+5 -26
mm/debug_vm_pgtable.c
···
  * Please refer Documentation/mm/arch_pgtable_helpers.rst for the semantics
  * expectations that are being validated here. All future changes in here
  * or the documentation need to be in sync.
- *
- * On s390 platform, the lower 4 bits are used to identify given page table
- * entry type. But these bits might affect the ability to clear entries with
- * pxx_clear() because of how dynamic page table folding works on s390. So
- * while loading up the entries do not change the lower 4 bits. It does not
- * have affect any other platform. Also avoid the 62nd bit on ppc64 that is
- * used to mark a pte entry.
  */
-#define S390_SKIP_MASK		GENMASK(3, 0)
-#if __BITS_PER_LONG == 64
-#define PPC64_SKIP_MASK		GENMASK(62, 62)
-#else
-#define PPC64_SKIP_MASK		0x0
-#endif
-#define ARCH_SKIP_MASK (S390_SKIP_MASK | PPC64_SKIP_MASK)
-#define RANDOM_ORVALUE (GENMASK(BITS_PER_LONG - 1, 0) & ~ARCH_SKIP_MASK)
 #define RANDOM_NZVALUE	GENMASK(7, 0)

 struct pgtable_debug_args {
···
 		return;

 	pr_debug("Validating PUD clear\n");
-	pud = __pud(pud_val(pud) | RANDOM_ORVALUE);
-	WRITE_ONCE(*args->pudp, pud);
+	WARN_ON(pud_none(pud));
 	pud_clear(args->pudp);
 	pud = READ_ONCE(*args->pudp);
 	WARN_ON(!pud_none(pud));
···
 		return;

 	pr_debug("Validating P4D clear\n");
-	p4d = __p4d(p4d_val(p4d) | RANDOM_ORVALUE);
-	WRITE_ONCE(*args->p4dp, p4d);
+	WARN_ON(p4d_none(p4d));
 	p4d_clear(args->p4dp);
 	p4d = READ_ONCE(*args->p4dp);
 	WARN_ON(!p4d_none(p4d));
···
 		return;

 	pr_debug("Validating PGD clear\n");
-	pgd = __pgd(pgd_val(pgd) | RANDOM_ORVALUE);
-	WRITE_ONCE(*args->pgdp, pgd);
+	WARN_ON(pgd_none(pgd));
 	pgd_clear(args->pgdp);
 	pgd = READ_ONCE(*args->pgdp);
 	WARN_ON(!pgd_none(pgd));
···
 	if (WARN_ON(!args->ptep))
 		return;

-#ifndef CONFIG_RISCV
-	pte = __pte(pte_val(pte) | RANDOM_ORVALUE);
-#endif
 	set_pte_at(args->mm, args->vaddr, args->ptep, pte);
+	WARN_ON(pte_none(pte));
 	flush_dcache_page(page);
 	barrier();
 	ptep_clear(args->mm, args->vaddr, args->ptep);
···
 	pmd_t pmd = READ_ONCE(*args->pmdp);

 	pr_debug("Validating PMD clear\n");
-	pmd = __pmd(pmd_val(pmd) | RANDOM_ORVALUE);
-	WRITE_ONCE(*args->pmdp, pmd);
+	WARN_ON(pmd_none(pmd));
 	pmd_clear(args->pmdp);
 	pmd = READ_ONCE(*args->pmdp);
 	WARN_ON(!pmd_none(pmd));
+17 -11
mm/huge_memory.c
···
 	if (new_order >= folio_order(folio))
 		return -EINVAL;

-	/* Cannot split anonymous THP to order-1 */
-	if (new_order == 1 && folio_test_anon(folio)) {
-		VM_WARN_ONCE(1, "Cannot split to order-1 folio");
-		return -EINVAL;
-	}
-
-	if (new_order) {
-		/* Only swapping a whole PMD-mapped folio is supported */
-		if (folio_test_swapcache(folio))
+	if (folio_test_anon(folio)) {
+		/* order-1 is not supported for anonymous THP. */
+		if (new_order == 1) {
+			VM_WARN_ONCE(1, "Cannot split to order-1 folio");
 			return -EINVAL;
+		}
+	} else if (new_order) {
 		/* Split shmem folio to non-zero order not supported */
 		if (shmem_mapping(folio->mapping)) {
 			VM_WARN_ONCE(1,
 				"Cannot split shmem folio to non-0 order");
 			return -EINVAL;
 		}
-		/* No split if the file system does not support large folio */
-		if (!mapping_large_folio_support(folio->mapping)) {
+		/*
+		 * No split if the file system does not support large folio.
+		 * Note that we might still have THPs in such mappings due to
+		 * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
+		 * does not actually support large folios properly.
+		 */
+		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
+		    !mapping_large_folio_support(folio->mapping)) {
 			VM_WARN_ONCE(1,
 				"Cannot split file folio to non-0 order");
 			return -EINVAL;
 		}
 	}

+	/* Only swapping a whole PMD-mapped folio is supported */
+	if (folio_test_swapcache(folio) && new_order)
+		return -EINVAL;

 	is_hzp = is_huge_zero_folio(folio);
 	if (is_hzp) {
-1
mm/internal.h
···
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
 					unsigned int order);
 extern void __free_pages_core(struct page *page, unsigned int order);
-extern void kernel_init_pages(struct page *page, int numpages);

 /*
  * This will have no effect, other than possibly generating a warning, if the
+1 -2
mm/memcontrol.c
···
  * @new: Replacement folio.
  *
  * Charge @new as a replacement folio for @old. @old will
- * be uncharged upon free. This is only used by the page cache
- * (in replace_page_cache_folio()).
+ * be uncharged upon free.
  *
  * Both folios must be locked, @new->mapping must be set up.
  */
+10 -10
mm/memory.c
···
 		if (unlikely(folio_mapcount(folio) < 0))
 			print_bad_pte(vma, addr, ptent, page);
 	}
-
-	if (want_init_mlocked_on_free() && folio_test_mlocked(folio) &&
-	    !delay_rmap && folio_test_anon(folio)) {
-		kernel_init_pages(page, folio_nr_pages(folio));
-	}
-
 	if (unlikely(__tlb_remove_folio_pages(tlb, page, nr, delay_rmap))) {
 		*force_flush = true;
 		*force_break = true;
···
 		bool ignore_writable, bool pte_write_upgrade)
 {
 	int nr = pte_pfn(fault_pte) - folio_pfn(folio);
-	unsigned long start = max(vmf->address - nr * PAGE_SIZE, vma->vm_start);
-	unsigned long end = min(vmf->address + (folio_nr_pages(folio) - nr) * PAGE_SIZE, vma->vm_end);
-	pte_t *start_ptep = vmf->pte - (vmf->address - start) / PAGE_SIZE;
-	unsigned long addr;
+	unsigned long start, end, addr = vmf->address;
+	unsigned long addr_start = addr - (nr << PAGE_SHIFT);
+	unsigned long pt_start = ALIGN_DOWN(addr, PMD_SIZE);
+	pte_t *start_ptep;
+
+	/* Stay within the VMA and within the page table. */
+	start = max3(addr_start, pt_start, vma->vm_start);
+	end = min3(addr_start + folio_size(folio), pt_start + PMD_SIZE,
+		   vma->vm_end);
+	start_ptep = vmf->pte - ((addr - start) >> PAGE_SHIFT);

 	/* Restore all PTEs' mapping of the large folio */
 	for (addr = start; addr != end; start_ptep++, addr += PAGE_SIZE) {
+7 -1
mm/migrate.c
···
 		/*
 		 * The rare folio on the deferred split list should
-		 * be split now. It should not count as a failure.
+		 * be split now. It should not count as a failure:
+		 * but increment nr_failed because, without doing so,
+		 * migrate_pages() may report success with (split but
+		 * unmigrated) pages still on its fromlist; whereas it
+		 * always reports success when its fromlist is empty.
+		 *
 		 * Only check it without removing it from the list.
 		 * Since the folio can be on deferred_split_scan()
 		 * local list and removing it can cause the local list
···
 			if (nr_pages > 2 &&
 			    !list_empty(&folio->_deferred_list)) {
 				if (try_split_folio(folio, split_folios) == 0) {
+					nr_failed++;
 					stats->nr_thp_split += is_thp;
 					stats->nr_split++;
 					continue;
+7 -36
mm/mm_init.c
···
 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
 EXPORT_SYMBOL(init_on_free);

-DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON, init_mlocked_on_free);
-EXPORT_SYMBOL(init_mlocked_on_free);
-
 static bool _init_on_alloc_enabled_early __read_mostly
 				= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
 static int __init early_init_on_alloc(char *buf)
···
 	return kstrtobool(buf, &_init_on_free_enabled_early);
 }
 early_param("init_on_free", early_init_on_free);
-
-static bool _init_mlocked_on_free_enabled_early __read_mostly
-				= IS_ENABLED(CONFIG_INIT_MLOCKED_ON_FREE_DEFAULT_ON);
-static int __init early_init_mlocked_on_free(char *buf)
-{
-	return kstrtobool(buf, &_init_mlocked_on_free_enabled_early);
-}
-early_param("init_mlocked_on_free", early_init_mlocked_on_free);

 DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
···
 	}
 #endif

-	if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early ||
-	     _init_mlocked_on_free_enabled_early) &&
+	if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
 	    page_poisoning_requested) {
 		pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
-			"will take precedence over init_on_alloc, init_on_free "
-			"and init_mlocked_on_free\n");
+			"will take precedence over init_on_alloc and init_on_free\n");
 		_init_on_alloc_enabled_early = false;
 		_init_on_free_enabled_early = false;
-		_init_mlocked_on_free_enabled_early = false;
-	}
-
-	if (_init_mlocked_on_free_enabled_early && _init_on_free_enabled_early) {
-		pr_info("mem auto-init: init_on_free is on, "
-			"will take precedence over init_mlocked_on_free\n");
-		_init_mlocked_on_free_enabled_early = false;
 	}

 	if (_init_on_alloc_enabled_early) {
···
 		static_branch_disable(&init_on_free);
 	}

-	if (_init_mlocked_on_free_enabled_early) {
-		want_check_pages = true;
-		static_branch_enable(&init_mlocked_on_free);
-	} else {
-		static_branch_disable(&init_mlocked_on_free);
-	}
-
-	if (IS_ENABLED(CONFIG_KMSAN) && (_init_on_alloc_enabled_early ||
-	    _init_on_free_enabled_early || _init_mlocked_on_free_enabled_early))
-		pr_info("mem auto-init: please make sure init_on_alloc, init_on_free and "
-			"init_mlocked_on_free are disabled when running KMSAN\n");
+	if (IS_ENABLED(CONFIG_KMSAN) &&
+	    (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
+		pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");

 #ifdef CONFIG_DEBUG_PAGEALLOC
 	if (debug_pagealloc_enabled()) {
···
 	else
 		stack = "off";

-	pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s, mlocked free:%s\n",
+	pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s\n",
 		stack, want_init_on_alloc(GFP_KERNEL) ? "on" : "off",
-		want_init_on_free() ? "on" : "off",
-		want_init_mlocked_on_free() ? "on" : "off");
+		want_init_on_free() ? "on" : "off");
 	if (want_init_on_free())
 		pr_info("mem auto-init: clearing system memory may take some time...\n");
 }
+1 -1
mm/page_alloc.c
···
1016 1016 return page_kasan_tag(page) == KASAN_TAG_KERNEL;
1017 1017 }
1018 1018
1019 - void kernel_init_pages(struct page *page, int numpages)
1019 + static void kernel_init_pages(struct page *page, int numpages)
1020 1020 {
1021 1021 int i;
1022 1022
+10 -1
mm/page_table_check.c
···
73 73 page = pfn_to_page(pfn);
74 74 page_ext = page_ext_get(page);
75 75
76 + if (!page_ext)
77 + return;
78 +
76 79 BUG_ON(PageSlab(page));
77 80 anon = PageAnon(page);
78 81
···
113 110 page = pfn_to_page(pfn);
114 111 page_ext = page_ext_get(page);
115 112
113 + if (!page_ext)
114 + return;
115 +
116 116 BUG_ON(PageSlab(page));
117 117 anon = PageAnon(page);
118 118
···
146 140 BUG_ON(PageSlab(page));
147 141
148 142 page_ext = page_ext_get(page);
149 - BUG_ON(!page_ext);
143 +
144 + if (!page_ext)
145 + return;
146 +
150 147 for (i = 0; i < (1ul << order); i++) {
151 148 struct page_table_check *ptc = get_page_table_check(page_ext);
152 149
+1 -1
mm/shmem.c
···
1786 1786 xa_lock_irq(&swap_mapping->i_pages);
1787 1787 error = shmem_replace_entry(swap_mapping, swap_index, old, new);
1788 1788 if (!error) {
1789 - mem_cgroup_migrate(old, new);
1789 + mem_cgroup_replace_folio(old, new);
1790 1790 __lruvec_stat_mod_folio(new, NR_FILE_PAGES, 1);
1791 1791 __lruvec_stat_mod_folio(new, NR_SHMEM, 1);
1792 1792 __lruvec_stat_mod_folio(old, NR_FILE_PAGES, -1);
-15
security/Kconfig.hardening
···
255 255 touching "cold" memory areas. Most cases see 3-5% impact. Some
256 256 synthetic workloads have measured as high as 8%.
257 257
258 - config INIT_MLOCKED_ON_FREE_DEFAULT_ON
259 - bool "Enable mlocked memory zeroing on free"
260 - depends on !KMSAN
261 - help
262 - This config has the effect of setting "init_mlocked_on_free=1"
263 - on the kernel command line. If it is enabled, all mlocked process
264 - memory is zeroed when freed. This restriction to mlocked memory
265 - improves performance over "init_on_free" but can still be used to
266 - protect confidential data like key material from content exposures
267 - to other processes, as well as live forensics and cold boot attacks.
268 - Any non-mlocked memory is not cleared before it is reassigned. This
269 - configuration can be overwritten by setting "init_mlocked_on_free=0"
270 - on the command line. The "init_on_free" boot option takes
271 - precedence over "init_mlocked_on_free".
272 -
273 258 config CC_HAS_ZERO_CALL_USED_REGS
274 259 def_bool $(cc-option,-fzero-call-used-regs=used-gpr)
275 260 # https://github.com/ClangBuiltLinux/linux/issues/1766
+16 -8
tools/testing/selftests/mm/map_fixed_noreplace.c
···
67 67 dump_maps();
68 68 ksft_exit_fail_msg("Error: munmap failed!?\n");
69 69 }
70 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
70 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
71 + ksft_test_result_pass("mmap() 5*PAGE_SIZE at base\n");
71 72
72 73 addr = base_addr + page_size;
73 74 size = 3 * page_size;
···
77 76 dump_maps();
78 77 ksft_exit_fail_msg("Error: first mmap() failed unexpectedly\n");
79 78 }
80 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
79 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
80 + ksft_test_result_pass("mmap() 3*PAGE_SIZE at base+PAGE_SIZE\n");
81 81
82 82 /*
83 83 * Exact same mapping again:
···
95 93 dump_maps();
96 94 ksft_exit_fail_msg("Error:1: mmap() succeeded when it shouldn't have\n");
97 95 }
98 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
96 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
97 + ksft_test_result_pass("mmap() 5*PAGE_SIZE at base\n");
99 98
100 99 /*
101 100 * Second mapping contained within first:
···
114 111 dump_maps();
115 112 ksft_exit_fail_msg("Error:2: mmap() succeeded when it shouldn't have\n");
116 113 }
117 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
114 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
115 + ksft_test_result_pass("mmap() 2*PAGE_SIZE at base+PAGE_SIZE\n");
118 116
119 117 /*
120 118 * Overlap end of existing mapping:
···
132 128 dump_maps();
133 129 ksft_exit_fail_msg("Error:3: mmap() succeeded when it shouldn't have\n");
134 130 }
135 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
131 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
132 + ksft_test_result_pass("mmap() 2*PAGE_SIZE at base+(3*PAGE_SIZE)\n");
136 133
137 134 /*
138 135 * Overlap start of existing mapping:
···
150 145 dump_maps();
151 146 ksft_exit_fail_msg("Error:4: mmap() succeeded when it shouldn't have\n");
152 147 }
153 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
148 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
149 + ksft_test_result_pass("mmap() 2*PAGE_SIZE bytes at base\n");
154 150
155 151 /*
156 152 * Adjacent to start of existing mapping:
···
168 162 dump_maps();
169 163 ksft_exit_fail_msg("Error:5: mmap() failed when it shouldn't have\n");
170 164 }
171 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
165 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
166 + ksft_test_result_pass("mmap() PAGE_SIZE at base\n");
172 167
173 168 /*
174 169 * Adjacent to end of existing mapping:
···
186 179 dump_maps();
187 180 ksft_exit_fail_msg("Error:6: mmap() failed when it shouldn't have\n");
188 181 }
189 - ksft_test_result_pass("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
182 + ksft_print_msg("mmap() @ 0x%lx-0x%lx p=%p result=%m\n", addr, addr + size, p);
183 + ksft_test_result_pass("mmap() PAGE_SIZE at base+(4*PAGE_SIZE)\n");
190 184
191 185 addr = base_addr;
192 186 size = 5 * page_size;