Merge tag 'mm-hotfixes-stable-2025-02-01-03-56' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
"21 hotfixes. 8 are cc:stable and the remainder address post-6.13
issues. 13 are for MM and 8 are for non-MM.

All are singletons, please see the changelogs for details"

* tag 'mm-hotfixes-stable-2025-02-01-03-56' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (21 commits)
MAINTAINERS: include linux-mm for xarray maintenance
revert "xarray: port tests to kunit"
MAINTAINERS: add lib/test_xarray.c
mailmap, MAINTAINERS, docs: update Carlos's email address
mm/hugetlb: fix hugepage allocation for interleaved memory nodes
mm: gup: fix infinite loop within __get_longterm_locked
mm, swap: fix reclaim offset calculation error during allocation
.mailmap: update email address for Christopher Obbard
kfence: skip __GFP_THISNODE allocations on NUMA systems
nilfs2: fix possible int overflows in nilfs_fiemap()
mm: compaction: use the proper flag to determine watermarks
kernel: be more careful about dup_mmap() failures and uprobe registering
mm/fake-numa: handle cases with no SRAT info
mm: kmemleak: fix upper boundary check for physical address objects
mailmap: add an entry for Hamza Mahfooz
MAINTAINERS: mailmap: update Yosry Ahmed's email address
scripts/gdb: fix aarch64 userspace detection in get_current_task
mm/vmscan: accumulate nr_demoted for accurate demotion statistics
ocfs2: fix incorrect CPU endianness conversion causing mount failure
mm/zsmalloc: add __maybe_unused attribute for is_first_zpdesc()
...

+379 -441
+6 -1
.mailmap
··· 150 Cai Huoqing <cai.huoqing@linux.dev> <caihuoqing@baidu.com> 151 Can Guo <quic_cang@quicinc.com> <cang@codeaurora.org> 152 Carl Huang <quic_cjhuang@quicinc.com> <cjhuang@codeaurora.org> 153 - Carlos Bilbao <carlos.bilbao.osdev@gmail.com> <carlos.bilbao@amd.com> 154 Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com> 155 Changbin Du <changbin.du@intel.com> <changbin.du@intel.com> 156 Chao Yu <chao@kernel.org> <chao2.yu@samsung.com> ··· 169 Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com> 170 Christian Marangi <ansuelsmth@gmail.com> 171 Christophe Ricard <christophe.ricard@gmail.com> 172 Christoph Hellwig <hch@lst.de> 173 Chuck Lever <chuck.lever@oracle.com> <cel@kernel.org> 174 Chuck Lever <chuck.lever@oracle.com> <cel@netapp.com> ··· 266 Guru Das Srinagesh <quic_gurus@quicinc.com> <gurus@codeaurora.org> 267 Gustavo Padovan <gustavo@las.ic.unicamp.br> 268 Gustavo Padovan <padovan@profusion.mobi> 269 Hanjun Guo <guohanjun@huawei.com> <hanjun.guo@linaro.org> 270 Hans Verkuil <hverkuil@xs4all.nl> <hansverk@cisco.com> 271 Hans Verkuil <hverkuil@xs4all.nl> <hverkuil-cisco@xs4all.nl> ··· 767 Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com> 768 Yanteng Si <si.yanteng@linux.dev> <siyanteng@loongson.cn> 769 Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com> 770 Yusuke Goda <goda.yusuke@renesas.com> 771 Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com> 772 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>
··· 150 Cai Huoqing <cai.huoqing@linux.dev> <caihuoqing@baidu.com> 151 Can Guo <quic_cang@quicinc.com> <cang@codeaurora.org> 152 Carl Huang <quic_cjhuang@quicinc.com> <cjhuang@codeaurora.org> 153 + Carlos Bilbao <carlos.bilbao@kernel.org> <carlos.bilbao@amd.com> 154 + Carlos Bilbao <carlos.bilbao@kernel.org> <carlos.bilbao.osdev@gmail.com> 155 + Carlos Bilbao <carlos.bilbao@kernel.org> <bilbao@vt.edu> 156 Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com> 157 Changbin Du <changbin.du@intel.com> <changbin.du@intel.com> 158 Chao Yu <chao@kernel.org> <chao2.yu@samsung.com> ··· 167 Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com> 168 Christian Marangi <ansuelsmth@gmail.com> 169 Christophe Ricard <christophe.ricard@gmail.com> 170 + Christopher Obbard <christopher.obbard@linaro.org> <chris.obbard@collabora.com> 171 Christoph Hellwig <hch@lst.de> 172 Chuck Lever <chuck.lever@oracle.com> <cel@kernel.org> 173 Chuck Lever <chuck.lever@oracle.com> <cel@netapp.com> ··· 263 Guru Das Srinagesh <quic_gurus@quicinc.com> <gurus@codeaurora.org> 264 Gustavo Padovan <gustavo@las.ic.unicamp.br> 265 Gustavo Padovan <padovan@profusion.mobi> 266 + Hamza Mahfooz <hamzamahfooz@linux.microsoft.com> <hamza.mahfooz@amd.com> 267 Hanjun Guo <guohanjun@huawei.com> <hanjun.guo@linaro.org> 268 Hans Verkuil <hverkuil@xs4all.nl> <hansverk@cisco.com> 269 Hans Verkuil <hverkuil@xs4all.nl> <hverkuil-cisco@xs4all.nl> ··· 763 Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com> 764 Yanteng Si <si.yanteng@linux.dev> <siyanteng@loongson.cn> 765 Ying Huang <huang.ying.caritas@gmail.com> <ying.huang@intel.com> 766 + Yosry Ahmed <yosry.ahmed@linux.dev> <yosryahmed@google.com> 767 Yusuke Goda <goda.yusuke@renesas.com> 768 Zack Rusin <zack.rusin@broadcom.com> <zackr@vmware.com> 769 Zhu Yanjun <zyjzyj2000@gmail.com> <yanjunz@nvidia.com>
+1 -1
Documentation/translations/sp_SP/index.rst
··· 7 8 \kerneldocCJKoff 9 10 - :maintainer: Carlos Bilbao <carlos.bilbao.osdev@gmail.com> 11 12 .. _sp_disclaimer: 13
··· 7 8 \kerneldocCJKoff 9 10 + :maintainer: Carlos Bilbao <carlos.bilbao@kernel.org> 11 12 .. _sp_disclaimer: 13
+7 -5
MAINTAINERS
··· 1090 1091 AMD HSMP DRIVER 1092 M: Naveen Krishna Chatradhi <naveenkrishna.chatradhi@amd.com> 1093 - R: Carlos Bilbao <carlos.bilbao.osdev@gmail.com> 1094 L: platform-driver-x86@vger.kernel.org 1095 S: Maintained 1096 F: Documentation/arch/x86/amd_hsmp.rst ··· 5857 5858 CONFIDENTIAL COMPUTING THREAT MODEL FOR X86 VIRTUALIZATION (SNP/TDX) 5859 M: Elena Reshetova <elena.reshetova@intel.com> 5860 - M: Carlos Bilbao <carlos.bilbao.osdev@gmail.com> 5861 S: Maintained 5862 F: Documentation/security/snp-tdx-threat-model.rst 5863 ··· 11331 F: drivers/video/fbdev/imsttfb.c 11332 11333 INDEX OF FURTHER KERNEL DOCUMENTATION 11334 - M: Carlos Bilbao <carlos.bilbao.osdev@gmail.com> 11335 S: Maintained 11336 F: Documentation/process/kernel-docs.rst 11337 ··· 22215 F: drivers/media/dvb-frontends/sp2* 22216 22217 SPANISH DOCUMENTATION 22218 - M: Carlos Bilbao <carlos.bilbao.osdev@gmail.com> 22219 R: Avadhut Naik <avadhut.naik@amd.com> 22220 S: Maintained 22221 F: Documentation/translations/sp_SP/ ··· 25739 XARRAY 25740 M: Matthew Wilcox <willy@infradead.org> 25741 L: linux-fsdevel@vger.kernel.org 25742 S: Supported 25743 F: Documentation/core-api/xarray.rst 25744 F: include/linux/idr.h 25745 F: include/linux/xarray.h 25746 F: lib/idr.c 25747 F: lib/xarray.c 25748 F: tools/testing/radix-tree 25749 ··· 26225 26226 ZSWAP COMPRESSED SWAP CACHING 26227 M: Johannes Weiner <hannes@cmpxchg.org> 26228 - M: Yosry Ahmed <yosryahmed@google.com> 26229 M: Nhat Pham <nphamcs@gmail.com> 26230 R: Chengming Zhou <chengming.zhou@linux.dev> 26231 L: linux-mm@kvack.org
··· 1090 1091 AMD HSMP DRIVER 1092 M: Naveen Krishna Chatradhi <naveenkrishna.chatradhi@amd.com> 1093 + R: Carlos Bilbao <carlos.bilbao@kernel.org> 1094 L: platform-driver-x86@vger.kernel.org 1095 S: Maintained 1096 F: Documentation/arch/x86/amd_hsmp.rst ··· 5857 5858 CONFIDENTIAL COMPUTING THREAT MODEL FOR X86 VIRTUALIZATION (SNP/TDX) 5859 M: Elena Reshetova <elena.reshetova@intel.com> 5860 + M: Carlos Bilbao <carlos.bilbao@kernel.org> 5861 S: Maintained 5862 F: Documentation/security/snp-tdx-threat-model.rst 5863 ··· 11331 F: drivers/video/fbdev/imsttfb.c 11332 11333 INDEX OF FURTHER KERNEL DOCUMENTATION 11334 + M: Carlos Bilbao <carlos.bilbao@kernel.org> 11335 S: Maintained 11336 F: Documentation/process/kernel-docs.rst 11337 ··· 22215 F: drivers/media/dvb-frontends/sp2* 22216 22217 SPANISH DOCUMENTATION 22218 + M: Carlos Bilbao <carlos.bilbao@kernel.org> 22219 R: Avadhut Naik <avadhut.naik@amd.com> 22220 S: Maintained 22221 F: Documentation/translations/sp_SP/ ··· 25739 XARRAY 25740 M: Matthew Wilcox <willy@infradead.org> 25741 L: linux-fsdevel@vger.kernel.org 25742 + L: linux-mm@kvack.org 25743 S: Supported 25744 F: Documentation/core-api/xarray.rst 25745 F: include/linux/idr.h 25746 F: include/linux/xarray.h 25747 F: lib/idr.c 25748 + F: lib/test_xarray.c 25749 F: lib/xarray.c 25750 F: tools/testing/radix-tree 25751 ··· 26223 26224 ZSWAP COMPRESSED SWAP CACHING 26225 M: Johannes Weiner <hannes@cmpxchg.org> 26226 + M: Yosry Ahmed <yosry.ahmed@linux.dev> 26227 M: Nhat Pham <nphamcs@gmail.com> 26228 R: Chengming Zhou <chengming.zhou@linux.dev> 26229 L: linux-mm@kvack.org
+1
arch/m68k/configs/amiga_defconfig
··· 626 CONFIG_TEST_SCANF=m 627 CONFIG_TEST_BITMAP=m 628 CONFIG_TEST_UUID=m 629 CONFIG_TEST_MAPLE_TREE=m 630 CONFIG_TEST_RHASHTABLE=m 631 CONFIG_TEST_IDA=m
··· 626 CONFIG_TEST_SCANF=m 627 CONFIG_TEST_BITMAP=m 628 CONFIG_TEST_UUID=m 629 + CONFIG_TEST_XARRAY=m 630 CONFIG_TEST_MAPLE_TREE=m 631 CONFIG_TEST_RHASHTABLE=m 632 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/apollo_defconfig
··· 583 CONFIG_TEST_SCANF=m 584 CONFIG_TEST_BITMAP=m 585 CONFIG_TEST_UUID=m 586 CONFIG_TEST_MAPLE_TREE=m 587 CONFIG_TEST_RHASHTABLE=m 588 CONFIG_TEST_IDA=m
··· 583 CONFIG_TEST_SCANF=m 584 CONFIG_TEST_BITMAP=m 585 CONFIG_TEST_UUID=m 586 + CONFIG_TEST_XARRAY=m 587 CONFIG_TEST_MAPLE_TREE=m 588 CONFIG_TEST_RHASHTABLE=m 589 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/atari_defconfig
··· 603 CONFIG_TEST_SCANF=m 604 CONFIG_TEST_BITMAP=m 605 CONFIG_TEST_UUID=m 606 CONFIG_TEST_MAPLE_TREE=m 607 CONFIG_TEST_RHASHTABLE=m 608 CONFIG_TEST_IDA=m
··· 603 CONFIG_TEST_SCANF=m 604 CONFIG_TEST_BITMAP=m 605 CONFIG_TEST_UUID=m 606 + CONFIG_TEST_XARRAY=m 607 CONFIG_TEST_MAPLE_TREE=m 608 CONFIG_TEST_RHASHTABLE=m 609 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/bvme6000_defconfig
··· 575 CONFIG_TEST_SCANF=m 576 CONFIG_TEST_BITMAP=m 577 CONFIG_TEST_UUID=m 578 CONFIG_TEST_MAPLE_TREE=m 579 CONFIG_TEST_RHASHTABLE=m 580 CONFIG_TEST_IDA=m
··· 575 CONFIG_TEST_SCANF=m 576 CONFIG_TEST_BITMAP=m 577 CONFIG_TEST_UUID=m 578 + CONFIG_TEST_XARRAY=m 579 CONFIG_TEST_MAPLE_TREE=m 580 CONFIG_TEST_RHASHTABLE=m 581 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/hp300_defconfig
··· 585 CONFIG_TEST_SCANF=m 586 CONFIG_TEST_BITMAP=m 587 CONFIG_TEST_UUID=m 588 CONFIG_TEST_MAPLE_TREE=m 589 CONFIG_TEST_RHASHTABLE=m 590 CONFIG_TEST_IDA=m
··· 585 CONFIG_TEST_SCANF=m 586 CONFIG_TEST_BITMAP=m 587 CONFIG_TEST_UUID=m 588 + CONFIG_TEST_XARRAY=m 589 CONFIG_TEST_MAPLE_TREE=m 590 CONFIG_TEST_RHASHTABLE=m 591 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/mac_defconfig
··· 602 CONFIG_TEST_SCANF=m 603 CONFIG_TEST_BITMAP=m 604 CONFIG_TEST_UUID=m 605 CONFIG_TEST_MAPLE_TREE=m 606 CONFIG_TEST_RHASHTABLE=m 607 CONFIG_TEST_IDA=m
··· 602 CONFIG_TEST_SCANF=m 603 CONFIG_TEST_BITMAP=m 604 CONFIG_TEST_UUID=m 605 + CONFIG_TEST_XARRAY=m 606 CONFIG_TEST_MAPLE_TREE=m 607 CONFIG_TEST_RHASHTABLE=m 608 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/multi_defconfig
··· 689 CONFIG_TEST_SCANF=m 690 CONFIG_TEST_BITMAP=m 691 CONFIG_TEST_UUID=m 692 CONFIG_TEST_MAPLE_TREE=m 693 CONFIG_TEST_RHASHTABLE=m 694 CONFIG_TEST_IDA=m
··· 689 CONFIG_TEST_SCANF=m 690 CONFIG_TEST_BITMAP=m 691 CONFIG_TEST_UUID=m 692 + CONFIG_TEST_XARRAY=m 693 CONFIG_TEST_MAPLE_TREE=m 694 CONFIG_TEST_RHASHTABLE=m 695 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/mvme147_defconfig
··· 575 CONFIG_TEST_SCANF=m 576 CONFIG_TEST_BITMAP=m 577 CONFIG_TEST_UUID=m 578 CONFIG_TEST_MAPLE_TREE=m 579 CONFIG_TEST_RHASHTABLE=m 580 CONFIG_TEST_IDA=m
··· 575 CONFIG_TEST_SCANF=m 576 CONFIG_TEST_BITMAP=m 577 CONFIG_TEST_UUID=m 578 + CONFIG_TEST_XARRAY=m 579 CONFIG_TEST_MAPLE_TREE=m 580 CONFIG_TEST_RHASHTABLE=m 581 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/mvme16x_defconfig
··· 576 CONFIG_TEST_SCANF=m 577 CONFIG_TEST_BITMAP=m 578 CONFIG_TEST_UUID=m 579 CONFIG_TEST_MAPLE_TREE=m 580 CONFIG_TEST_RHASHTABLE=m 581 CONFIG_TEST_IDA=m
··· 576 CONFIG_TEST_SCANF=m 577 CONFIG_TEST_BITMAP=m 578 CONFIG_TEST_UUID=m 579 + CONFIG_TEST_XARRAY=m 580 CONFIG_TEST_MAPLE_TREE=m 581 CONFIG_TEST_RHASHTABLE=m 582 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/q40_defconfig
··· 592 CONFIG_TEST_SCANF=m 593 CONFIG_TEST_BITMAP=m 594 CONFIG_TEST_UUID=m 595 CONFIG_TEST_MAPLE_TREE=m 596 CONFIG_TEST_RHASHTABLE=m 597 CONFIG_TEST_IDA=m
··· 592 CONFIG_TEST_SCANF=m 593 CONFIG_TEST_BITMAP=m 594 CONFIG_TEST_UUID=m 595 + CONFIG_TEST_XARRAY=m 596 CONFIG_TEST_MAPLE_TREE=m 597 CONFIG_TEST_RHASHTABLE=m 598 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/sun3_defconfig
··· 572 CONFIG_TEST_SCANF=m 573 CONFIG_TEST_BITMAP=m 574 CONFIG_TEST_UUID=m 575 CONFIG_TEST_MAPLE_TREE=m 576 CONFIG_TEST_RHASHTABLE=m 577 CONFIG_TEST_IDA=m
··· 572 CONFIG_TEST_SCANF=m 573 CONFIG_TEST_BITMAP=m 574 CONFIG_TEST_UUID=m 575 + CONFIG_TEST_XARRAY=m 576 CONFIG_TEST_MAPLE_TREE=m 577 CONFIG_TEST_RHASHTABLE=m 578 CONFIG_TEST_IDA=m
+1
arch/m68k/configs/sun3x_defconfig
··· 573 CONFIG_TEST_SCANF=m 574 CONFIG_TEST_BITMAP=m 575 CONFIG_TEST_UUID=m 576 CONFIG_TEST_MAPLE_TREE=m 577 CONFIG_TEST_RHASHTABLE=m 578 CONFIG_TEST_IDA=m
··· 573 CONFIG_TEST_SCANF=m 574 CONFIG_TEST_BITMAP=m 575 CONFIG_TEST_UUID=m 576 + CONFIG_TEST_XARRAY=m 577 CONFIG_TEST_MAPLE_TREE=m 578 CONFIG_TEST_RHASHTABLE=m 579 CONFIG_TEST_IDA=m
+1
arch/powerpc/configs/ppc64_defconfig
··· 448 CONFIG_TEST_SCANF=m 449 CONFIG_TEST_BITMAP=m 450 CONFIG_TEST_UUID=m 451 CONFIG_TEST_MAPLE_TREE=m 452 CONFIG_TEST_RHASHTABLE=m 453 CONFIG_TEST_IDA=m
··· 448 CONFIG_TEST_SCANF=m 449 CONFIG_TEST_BITMAP=m 450 CONFIG_TEST_UUID=m 451 + CONFIG_TEST_XARRAY=m 452 CONFIG_TEST_MAPLE_TREE=m 453 CONFIG_TEST_RHASHTABLE=m 454 CONFIG_TEST_IDA=m
+10 -1
drivers/acpi/numa/srat.c
··· 95 int i, j, index = -1, count = 0; 96 nodemask_t nodes_to_enable; 97 98 - if (numa_off || srat_disabled()) 99 return -1; 100 101 /* find fake nodes PXM mapping */ 102 for (i = 0; i < MAX_NUMNODES; i++) { ··· 120 } 121 } 122 } 123 } 124 if (WARN(index != max_nid, "%d max nid when expected %d\n", 125 index, max_nid))
··· 95 int i, j, index = -1, count = 0; 96 nodemask_t nodes_to_enable; 97 98 + if (numa_off) 99 return -1; 100 + 101 + /* no or incomplete node/PXM mapping set, nothing to do */ 102 + if (srat_disabled()) 103 + return 0; 104 105 /* find fake nodes PXM mapping */ 106 for (i = 0; i < MAX_NUMNODES; i++) { ··· 116 } 117 } 118 } 119 + } 120 + if (index == -1) { 121 + pr_debug("No node/PXM mapping has been set\n"); 122 + /* nothing more to be done */ 123 + return 0; 124 } 125 if (WARN(index != max_nid, "%d max nid when expected %d\n", 126 index, max_nid))
+3 -3
fs/nilfs2/inode.c
··· 1186 if (size) { 1187 if (phys && blkphy << blkbits == phys + size) { 1188 /* The current extent goes on */ 1189 - size += n << blkbits; 1190 } else { 1191 /* Terminate the current extent */ 1192 ret = fiemap_fill_next_extent( ··· 1199 flags = FIEMAP_EXTENT_MERGED; 1200 logical = blkoff << blkbits; 1201 phys = blkphy << blkbits; 1202 - size = n << blkbits; 1203 } 1204 } else { 1205 /* Start a new extent */ 1206 flags = FIEMAP_EXTENT_MERGED; 1207 logical = blkoff << blkbits; 1208 phys = blkphy << blkbits; 1209 - size = n << blkbits; 1210 } 1211 blkoff += n; 1212 }
··· 1186 if (size) { 1187 if (phys && blkphy << blkbits == phys + size) { 1188 /* The current extent goes on */ 1189 + size += (u64)n << blkbits; 1190 } else { 1191 /* Terminate the current extent */ 1192 ret = fiemap_fill_next_extent( ··· 1199 flags = FIEMAP_EXTENT_MERGED; 1200 logical = blkoff << blkbits; 1201 phys = blkphy << blkbits; 1202 + size = (u64)n << blkbits; 1203 } 1204 } else { 1205 /* Start a new extent */ 1206 flags = FIEMAP_EXTENT_MERGED; 1207 logical = blkoff << blkbits; 1208 phys = blkphy << blkbits; 1209 + size = (u64)n << blkbits; 1210 } 1211 blkoff += n; 1212 }
+1 -1
fs/ocfs2/super.c
··· 2285 mlog(ML_ERROR, "found superblock with incorrect block " 2286 "size bits: found %u, should be 9, 10, 11, or 12\n", 2287 blksz_bits); 2288 - } else if ((1 << le32_to_cpu(blksz_bits)) != blksz) { 2289 mlog(ML_ERROR, "found superblock with incorrect block " 2290 "size: found %u, should be %u\n", 1 << blksz_bits, blksz); 2291 } else if (le16_to_cpu(di->id2.i_super.s_major_rev_level) !=
··· 2285 mlog(ML_ERROR, "found superblock with incorrect block " 2286 "size bits: found %u, should be 9, 10, 11, or 12\n", 2287 blksz_bits); 2288 + } else if ((1 << blksz_bits) != blksz) { 2289 mlog(ML_ERROR, "found superblock with incorrect block " 2290 "size: found %u, should be %u\n", 1 << blksz_bits, blksz); 2291 } else if (le16_to_cpu(di->id2.i_super.s_major_rev_level) !=
+1
include/linux/swap.h
··· 222 }; 223 224 #define SWAP_CLUSTER_MAX 32UL 225 #define COMPACT_CLUSTER_MAX SWAP_CLUSTER_MAX 226 227 /* Bit flag in swap_map */
··· 222 }; 223 224 #define SWAP_CLUSTER_MAX 32UL 225 + #define SWAP_CLUSTER_MAX_SKIPPED (SWAP_CLUSTER_MAX << 10) 226 #define COMPACT_CLUSTER_MAX SWAP_CLUSTER_MAX 227 228 /* Bit flag in swap_map */
+4
kernel/events/uprobes.c
··· 28 #include <linux/rcupdate_trace.h> 29 #include <linux/workqueue.h> 30 #include <linux/srcu.h> 31 32 #include <linux/uprobes.h> 33 ··· 1261 * returns NULL in find_active_uprobe_rcu(). 1262 */ 1263 mmap_write_lock(mm); 1264 vma = find_vma(mm, info->vaddr); 1265 if (!vma || !valid_vma(vma, is_register) || 1266 file_inode(vma->vm_file) != uprobe->inode)
··· 28 #include <linux/rcupdate_trace.h> 29 #include <linux/workqueue.h> 30 #include <linux/srcu.h> 31 + #include <linux/oom.h> /* check_stable_address_space */ 32 33 #include <linux/uprobes.h> 34 ··· 1260 * returns NULL in find_active_uprobe_rcu(). 1261 */ 1262 mmap_write_lock(mm); 1263 + if (check_stable_address_space(mm)) 1264 + goto unlock; 1265 + 1266 vma = find_vma(mm, info->vaddr); 1267 if (!vma || !valid_vma(vma, is_register) || 1268 file_inode(vma->vm_file) != uprobe->inode)
+14 -3
kernel/fork.c
··· 760 mt_set_in_rcu(vmi.mas.tree); 761 ksm_fork(mm, oldmm); 762 khugepaged_fork(mm, oldmm); 763 - } else if (mpnt) { 764 /* 765 * The entire maple tree has already been duplicated. If the 766 * mmap duplication fails, mark the failure point with ··· 769 * stop releasing VMAs that have not been duplicated after this 770 * point. 771 */ 772 - mas_set_range(&vmi.mas, mpnt->vm_start, mpnt->vm_end - 1); 773 - mas_store(&vmi.mas, XA_ZERO_ENTRY); 774 } 775 out: 776 mmap_write_unlock(mm);
··· 760 mt_set_in_rcu(vmi.mas.tree); 761 ksm_fork(mm, oldmm); 762 khugepaged_fork(mm, oldmm); 763 + } else { 764 + 765 /* 766 * The entire maple tree has already been duplicated. If the 767 * mmap duplication fails, mark the failure point with ··· 768 * stop releasing VMAs that have not been duplicated after this 769 * point. 770 */ 771 + if (mpnt) { 772 + mas_set_range(&vmi.mas, mpnt->vm_start, mpnt->vm_end - 1); 773 + mas_store(&vmi.mas, XA_ZERO_ENTRY); 774 + /* Avoid OOM iterating a broken tree */ 775 + set_bit(MMF_OOM_SKIP, &mm->flags); 776 + } 777 + /* 778 + * The mm_struct is going to exit, but the locks will be dropped 779 + * first. Set the mm_struct as unstable is advisable as it is 780 + * not fully initialised. 781 + */ 782 + set_bit(MMF_UNSTABLE, &mm->flags); 783 } 784 out: 785 mmap_write_unlock(mm);
+2 -16
lib/Kconfig.debug
··· 2456 config TEST_UUID 2457 tristate "Test functions located in the uuid module at runtime" 2458 2459 - config XARRAY_KUNIT 2460 - tristate "KUnit test XArray code at runtime" if !KUNIT_ALL_TESTS 2461 - depends on KUNIT 2462 - default KUNIT_ALL_TESTS 2463 - help 2464 - Enable this option to test the Xarray code at boot. 2465 - 2466 - KUnit tests run during boot and output the results to the debug log 2467 - in TAP format (http://testanything.org/). Only useful for kernel devs 2468 - running the KUnit test harness, and not intended for inclusion into a 2469 - production build. 2470 - 2471 - For more information on KUnit and unit tests in general please refer 2472 - to the KUnit documentation in Documentation/dev-tools/kunit/. 2473 - 2474 - If unsure, say N. 2475 2476 config TEST_MAPLE_TREE 2477 tristate "Test the Maple Tree code at runtime or module load"
··· 2456 config TEST_UUID 2457 tristate "Test functions located in the uuid module at runtime" 2458 2459 + config TEST_XARRAY 2460 + tristate "Test the XArray code at runtime" 2461 2462 config TEST_MAPLE_TREE 2463 tristate "Test the Maple Tree code at runtime or module load"
+1 -1
lib/Makefile
··· 94 endif 95 96 obj-$(CONFIG_TEST_UUID) += test_uuid.o 97 obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o 98 obj-$(CONFIG_TEST_PARMAN) += test_parman.o 99 obj-$(CONFIG_TEST_KMOD) += test_kmod.o ··· 373 obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o 374 obj-$(CONFIG_CHECKSUM_KUNIT) += checksum_kunit.o 375 obj-$(CONFIG_UTIL_MACROS_KUNIT) += util_macros_kunit.o 376 - obj-$(CONFIG_XARRAY_KUNIT) += test_xarray.o 377 obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o 378 obj-$(CONFIG_HASHTABLE_KUNIT_TEST) += hashtable_test.o 379 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
··· 94 endif 95 96 obj-$(CONFIG_TEST_UUID) += test_uuid.o 97 + obj-$(CONFIG_TEST_XARRAY) += test_xarray.o 98 obj-$(CONFIG_TEST_MAPLE_TREE) += test_maple_tree.o 99 obj-$(CONFIG_TEST_PARMAN) += test_parman.o 100 obj-$(CONFIG_TEST_KMOD) += test_kmod.o ··· 372 obj-$(CONFIG_BITFIELD_KUNIT) += bitfield_kunit.o 373 obj-$(CONFIG_CHECKSUM_KUNIT) += checksum_kunit.o 374 obj-$(CONFIG_UTIL_MACROS_KUNIT) += util_macros_kunit.o 375 obj-$(CONFIG_LIST_KUNIT_TEST) += list-test.o 376 obj-$(CONFIG_HASHTABLE_KUNIT_TEST) += hashtable_test.o 377 obj-$(CONFIG_LINEAR_RANGES_TEST) += test_linear_ranges.o
+271 -386
lib/test_xarray.c
··· 6 * Author: Matthew Wilcox <willy@infradead.org> 7 */ 8 9 - #include <kunit/test.h> 10 - 11 - #include <linux/module.h> 12 #include <linux/xarray.h> 13 14 static const unsigned int order_limit = 15 IS_ENABLED(CONFIG_XARRAY_MULTI) ? BITS_PER_LONG : 1; ··· 20 void xa_dump(const struct xarray *xa) { } 21 # endif 22 #undef XA_BUG_ON 23 - #define XA_BUG_ON(xa, x) do { \ 24 - if (x) { \ 25 - KUNIT_FAIL(test, #x); \ 26 - xa_dump(xa); \ 27 - dump_stack(); \ 28 - } \ 29 } while (0) 30 #endif 31 ··· 42 return xa_store(xa, index, xa_mk_index(index), gfp); 43 } 44 45 - static void xa_insert_index(struct kunit *test, struct xarray *xa, unsigned long index) 46 { 47 XA_BUG_ON(xa, xa_insert(xa, index, xa_mk_index(index), 48 GFP_KERNEL) != 0); 49 } 50 51 - static void xa_alloc_index(struct kunit *test, struct xarray *xa, unsigned long index, gfp_t gfp) 52 { 53 u32 id; 54 ··· 57 XA_BUG_ON(xa, id != index); 58 } 59 60 - static void xa_erase_index(struct kunit *test, struct xarray *xa, unsigned long index) 61 { 62 XA_BUG_ON(xa, xa_erase(xa, index) != xa_mk_index(index)); 63 XA_BUG_ON(xa, xa_load(xa, index) != NULL); ··· 83 return curr; 84 } 85 86 - static inline struct xarray *xa_param(struct kunit *test) 87 { 88 - return *(struct xarray **)test->param_value; 89 - } 90 - 91 - static noinline void check_xa_err(struct kunit *test) 92 - { 93 - struct xarray *xa = xa_param(test); 94 - 95 XA_BUG_ON(xa, xa_err(xa_store_index(xa, 0, GFP_NOWAIT)) != 0); 96 XA_BUG_ON(xa, xa_err(xa_erase(xa, 0)) != 0); 97 #ifndef __KERNEL__ ··· 99 // XA_BUG_ON(xa, xa_err(xa_store(xa, 0, xa_mk_internal(0), 0)) != -EINVAL); 100 } 101 102 - static noinline void check_xas_retry(struct kunit *test) 103 { 104 - struct xarray *xa = xa_param(test); 105 - 106 XA_STATE(xas, xa, 0); 107 void *entry; 108 ··· 109 110 rcu_read_lock(); 111 XA_BUG_ON(xa, xas_find(&xas, ULONG_MAX) != xa_mk_value(0)); 112 - xa_erase_index(test, xa, 1); 113 XA_BUG_ON(xa, !xa_is_retry(xas_reload(&xas))); 114 XA_BUG_ON(xa, xas_retry(&xas, NULL)); 115 XA_BUG_ON(xa, xas_retry(&xas, xa_mk_value(0))); ··· 140 } 141 xas_unlock(&xas); 142 143 - xa_erase_index(test, xa, 0); 144 - xa_erase_index(test, xa, 1); 145 } 146 147 - static noinline void check_xa_load(struct kunit *test) 148 { 149 - struct xarray *xa = xa_param(test); 150 - 151 unsigned long i, j; 152 153 for (i = 0; i < 1024; i++) { ··· 167 else 168 XA_BUG_ON(xa, entry); 169 } 170 - xa_erase_index(test, xa, i); 171 } 172 XA_BUG_ON(xa, !xa_empty(xa)); 173 } 174 175 - static noinline void check_xa_mark_1(struct kunit *test, unsigned long index) 176 { 177 - struct xarray *xa = xa_param(test); 178 - 179 unsigned int order; 180 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 
8 : 1; 181 ··· 193 XA_BUG_ON(xa, xa_get_mark(xa, index, XA_MARK_1)); 194 195 /* Storing NULL clears marks, and they can't be set again */ 196 - xa_erase_index(test, xa, index); 197 XA_BUG_ON(xa, !xa_empty(xa)); 198 XA_BUG_ON(xa, xa_get_mark(xa, index, XA_MARK_0)); 199 xa_set_mark(xa, index, XA_MARK_0); ··· 244 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0)); 245 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_1)); 246 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_2)); 247 - xa_erase_index(test, xa, index); 248 - xa_erase_index(test, xa, next); 249 XA_BUG_ON(xa, !xa_empty(xa)); 250 } 251 XA_BUG_ON(xa, !xa_empty(xa)); 252 } 253 254 - static noinline void check_xa_mark_2(struct kunit *test) 255 { 256 - struct xarray *xa = xa_param(test); 257 - 258 XA_STATE(xas, xa, 0); 259 unsigned long index; 260 unsigned int count = 0; ··· 289 xa_destroy(xa); 290 } 291 292 - static noinline void check_xa_mark_3(struct kunit *test) 293 { 294 #ifdef CONFIG_XARRAY_MULTI 295 - struct xarray *xa = xa_param(test); 296 - 297 XA_STATE(xas, xa, 0x41); 298 void *entry; 299 int count = 0; ··· 310 #endif 311 } 312 313 - static noinline void check_xa_mark(struct kunit *test) 314 { 315 unsigned long index; 316 317 for (index = 0; index < 16384; index += 4) 318 - check_xa_mark_1(test, index); 319 320 - check_xa_mark_2(test); 321 - check_xa_mark_3(test); 322 } 323 324 - static noinline void check_xa_shrink(struct kunit *test) 325 { 326 - struct xarray *xa = xa_param(test); 327 - 328 XA_STATE(xas, xa, 1); 329 struct xa_node *node; 330 unsigned int order; ··· 347 XA_BUG_ON(xa, xas_load(&xas) != NULL); 348 xas_unlock(&xas); 349 XA_BUG_ON(xa, xa_load(xa, 0) != xa_mk_value(0)); 350 - xa_erase_index(test, xa, 0); 351 XA_BUG_ON(xa, !xa_empty(xa)); 352 353 for (order = 0; order < max_order; order++) { ··· 364 XA_BUG_ON(xa, xa_head(xa) == node); 365 rcu_read_unlock(); 366 XA_BUG_ON(xa, xa_load(xa, max + 1) != NULL); 367 - xa_erase_index(test, xa, ULONG_MAX); 368 XA_BUG_ON(xa, xa->xa_head != node); 369 - xa_erase_index(test, xa, 0); 370 } 371 } 372 373 - static noinline void check_insert(struct kunit *test) 374 { 375 - struct xarray *xa = xa_param(test); 376 - 377 unsigned long i; 378 379 for (i = 0; i < 1024; i++) { 380 - xa_insert_index(test, xa, i); 381 XA_BUG_ON(xa, xa_load(xa, i - 1) != NULL); 382 XA_BUG_ON(xa, xa_load(xa, i + 1) != NULL); 383 - xa_erase_index(test, xa, i); 384 } 385 386 for (i = 10; i < BITS_PER_LONG; i++) { 387 - xa_insert_index(test, xa, 1UL << i); 388 XA_BUG_ON(xa, xa_load(xa, (1UL << i) - 1) != NULL); 389 XA_BUG_ON(xa, xa_load(xa, (1UL << i) + 1) != NULL); 390 - xa_erase_index(test, xa, 1UL << i); 391 392 - xa_insert_index(test, xa, (1UL << i) - 1); 393 XA_BUG_ON(xa, xa_load(xa, (1UL << i) - 2) != NULL); 394 XA_BUG_ON(xa, xa_load(xa, 1UL << i) != NULL); 395 - xa_erase_index(test, xa, (1UL << i) - 1); 396 } 397 398 - xa_insert_index(test, xa, ~0UL); 399 XA_BUG_ON(xa, xa_load(xa, 0UL) != NULL); 400 XA_BUG_ON(xa, xa_load(xa, ~1UL) != NULL); 401 - xa_erase_index(test, xa, ~0UL); 402 403 XA_BUG_ON(xa, !xa_empty(xa)); 404 } 405 406 - static noinline void check_cmpxchg(struct kunit *test) 407 { 408 - struct xarray *xa = xa_param(test); 409 - 410 void *FIVE = xa_mk_value(5); 411 void *SIX = xa_mk_value(6); 412 void *LOTS = xa_mk_value(12345678); ··· 418 XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) != -EBUSY); 419 XA_BUG_ON(xa, xa_cmpxchg(xa, 5, FIVE, NULL, GFP_KERNEL) != FIVE); 420 XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) == -EBUSY); 421 - xa_erase_index(test, xa, 12345678); 422 - xa_erase_index(test, xa, 
5); 423 XA_BUG_ON(xa, !xa_empty(xa)); 424 } 425 426 - static noinline void check_cmpxchg_order(struct kunit *test) 427 { 428 #ifdef CONFIG_XARRAY_MULTI 429 - struct xarray *xa = xa_param(test); 430 - 431 void *FIVE = xa_mk_value(5); 432 unsigned int i, order = 3; 433 ··· 476 #endif 477 } 478 479 - static noinline void check_reserve(struct kunit *test) 480 { 481 - struct xarray *xa = xa_param(test); 482 - 483 void *entry; 484 unsigned long index; 485 int count; ··· 494 XA_BUG_ON(xa, xa_reserve(xa, 12345678, GFP_KERNEL) != 0); 495 XA_BUG_ON(xa, xa_store_index(xa, 12345678, GFP_NOWAIT) != NULL); 496 xa_release(xa, 12345678); 497 - xa_erase_index(test, xa, 12345678); 498 XA_BUG_ON(xa, !xa_empty(xa)); 499 500 /* cmpxchg sees a reserved entry as ZERO */ ··· 502 XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, XA_ZERO_ENTRY, 503 xa_mk_value(12345678), GFP_NOWAIT) != NULL); 504 xa_release(xa, 12345678); 505 - xa_erase_index(test, xa, 12345678); 506 XA_BUG_ON(xa, !xa_empty(xa)); 507 508 /* xa_insert treats it as busy */ ··· 542 xa_destroy(xa); 543 } 544 545 - static noinline void check_xas_erase(struct kunit *test) 546 { 547 - struct xarray *xa = xa_param(test); 548 - 549 XA_STATE(xas, xa, 0); 550 void *entry; 551 unsigned long i, j; ··· 581 } 582 583 #ifdef CONFIG_XARRAY_MULTI 584 - static noinline void check_multi_store_1(struct kunit *test, unsigned long index, 585 unsigned int order) 586 { 587 - struct xarray *xa = xa_param(test); 588 - 589 XA_STATE(xas, xa, index); 590 unsigned long min = index & ~((1UL << order) - 1); 591 unsigned long max = min + (1UL << order); ··· 602 XA_BUG_ON(xa, xa_load(xa, max) != NULL); 603 XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL); 604 605 - xa_erase_index(test, xa, min); 606 XA_BUG_ON(xa, !xa_empty(xa)); 607 } 608 609 - static noinline void check_multi_store_2(struct kunit *test, unsigned long index, 610 unsigned int order) 611 { 612 - struct xarray *xa = xa_param(test); 613 - 614 XA_STATE(xas, xa, index); 615 xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL); 616 ··· 620 XA_BUG_ON(xa, !xa_empty(xa)); 621 } 622 623 - static noinline void check_multi_store_3(struct kunit *test, unsigned long index, 624 unsigned int order) 625 { 626 - struct xarray *xa = xa_param(test); 627 - 628 XA_STATE(xas, xa, 0); 629 void *entry; 630 int n = 0; ··· 647 } 648 #endif 649 650 - static noinline void check_multi_store(struct kunit *test) 651 { 652 #ifdef CONFIG_XARRAY_MULTI 653 - struct xarray *xa = xa_param(test); 654 - 655 unsigned long i, j, k; 656 unsigned int max_order = (sizeof(long) == 4) ? 
30 : 60; 657 ··· 714 } 715 716 for (i = 0; i < 20; i++) { 717 - check_multi_store_1(test, 200, i); 718 - check_multi_store_1(test, 0, i); 719 - check_multi_store_1(test, (1UL << i) + 1, i); 720 } 721 - check_multi_store_2(test, 4095, 9); 722 723 for (i = 1; i < 20; i++) { 724 - check_multi_store_3(test, 0, i); 725 - check_multi_store_3(test, 1UL << i, i); 726 } 727 #endif 728 } 729 730 #ifdef CONFIG_XARRAY_MULTI 731 /* mimics page cache __filemap_add_folio() */ 732 - static noinline void check_xa_multi_store_adv_add(struct kunit *test, 733 unsigned long index, 734 unsigned int order, 735 void *p) 736 { 737 - struct xarray *xa = xa_param(test); 738 - 739 XA_STATE(xas, xa, index); 740 unsigned int nrpages = 1UL << order; 741 ··· 761 } 762 763 /* mimics page_cache_delete() */ 764 - static noinline void check_xa_multi_store_adv_del_entry(struct kunit *test, 765 unsigned long index, 766 unsigned int order) 767 { 768 - struct xarray *xa = xa_param(test); 769 - 770 XA_STATE(xas, xa, index); 771 772 xas_set_order(&xas, index, order); ··· 772 xas_init_marks(&xas); 773 } 774 775 - static noinline void check_xa_multi_store_adv_delete(struct kunit *test, 776 unsigned long index, 777 unsigned int order) 778 { 779 - struct xarray *xa = xa_param(test); 780 - 781 xa_lock_irq(xa); 782 - check_xa_multi_store_adv_del_entry(test, index, order); 783 xa_unlock_irq(xa); 784 } 785 ··· 814 static unsigned long some_val_2 = 0xdeaddead; 815 816 /* mimics the page cache usage */ 817 - static noinline void check_xa_multi_store_adv(struct kunit *test, 818 unsigned long pos, 819 unsigned int order) 820 { 821 - struct xarray *xa = xa_param(test); 822 - 823 unsigned int nrpages = 1UL << order; 824 unsigned long index, base, next_index, next_next_index; 825 unsigned int i; ··· 827 next_index = round_down(base + nrpages, nrpages); 828 next_next_index = round_down(next_index + nrpages, nrpages); 829 830 - check_xa_multi_store_adv_add(test, base, order, &some_val); 831 832 for (i = 0; i < nrpages; i++) 833 XA_BUG_ON(xa, test_get_entry(xa, base + i) != &some_val); ··· 835 XA_BUG_ON(xa, test_get_entry(xa, next_index) != NULL); 836 837 /* Use order 0 for the next item */ 838 - check_xa_multi_store_adv_add(test, next_index, 0, &some_val_2); 839 XA_BUG_ON(xa, test_get_entry(xa, next_index) != &some_val_2); 840 841 /* Remove the next item */ 842 - check_xa_multi_store_adv_delete(test, next_index, 0); 843 844 /* Now use order for a new pointer */ 845 - check_xa_multi_store_adv_add(test, next_index, order, &some_val_2); 846 847 for (i = 0; i < nrpages; i++) 848 XA_BUG_ON(xa, test_get_entry(xa, next_index + i) != &some_val_2); 849 850 - check_xa_multi_store_adv_delete(test, next_index, order); 851 - check_xa_multi_store_adv_delete(test, base, order); 852 XA_BUG_ON(xa, !xa_empty(xa)); 853 854 /* starting fresh again */ ··· 856 /* let's test some holes now */ 857 858 /* hole at base and next_next */ 859 - check_xa_multi_store_adv_add(test, next_index, order, &some_val_2); 860 861 for (i = 0; i < nrpages; i++) 862 XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL); ··· 867 for (i = 0; i < nrpages; i++) 868 XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != NULL); 869 870 - check_xa_multi_store_adv_delete(test, next_index, order); 871 XA_BUG_ON(xa, !xa_empty(xa)); 872 873 /* hole at base and next */ 874 875 - check_xa_multi_store_adv_add(test, next_next_index, order, &some_val_2); 876 877 for (i = 0; i < nrpages; i++) 878 XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL); ··· 883 for (i = 0; i < nrpages; i++) 884 XA_BUG_ON(xa, 
test_get_entry(xa, next_next_index + i) != &some_val_2); 885 886 - check_xa_multi_store_adv_delete(test, next_next_index, order); 887 XA_BUG_ON(xa, !xa_empty(xa)); 888 } 889 #endif 890 891 - static noinline void check_multi_store_advanced(struct kunit *test) 892 { 893 #ifdef CONFIG_XARRAY_MULTI 894 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; ··· 900 */ 901 for (pos = 7; pos < end; pos = (pos * pos) + 564) { 902 for (i = 0; i < max_order; i++) { 903 - check_xa_multi_store_adv(test, pos, i); 904 - check_xa_multi_store_adv(test, pos + 157, i); 905 } 906 } 907 #endif 908 } 909 910 - static noinline void check_xa_alloc_1(struct kunit *test, struct xarray *xa, unsigned int base) 911 { 912 int i; 913 u32 id; 914 915 XA_BUG_ON(xa, !xa_empty(xa)); 916 /* An empty array should assign %base to the first alloc */ 917 - xa_alloc_index(test, xa, base, GFP_KERNEL); 918 919 /* Erasing it should make the array empty again */ 920 - xa_erase_index(test, xa, base); 921 XA_BUG_ON(xa, !xa_empty(xa)); 922 923 /* And it should assign %base again */ 924 - xa_alloc_index(test, xa, base, GFP_KERNEL); 925 926 /* Allocating and then erasing a lot should not lose base */ 927 for (i = base + 1; i < 2 * XA_CHUNK_SIZE; i++) 928 - xa_alloc_index(test, xa, i, GFP_KERNEL); 929 for (i = base; i < 2 * XA_CHUNK_SIZE; i++) 930 - xa_erase_index(test, xa, i); 931 - xa_alloc_index(test, xa, base, GFP_KERNEL); 932 933 /* Destroying the array should do the same as erasing */ 934 xa_destroy(xa); 935 936 /* And it should assign %base again */ 937 - xa_alloc_index(test, xa, base, GFP_KERNEL); 938 939 /* The next assigned ID should be base+1 */ 940 - xa_alloc_index(test, xa, base + 1, GFP_KERNEL); 941 - xa_erase_index(test, xa, base + 1); 942 943 /* Storing a value should mark it used */ 944 xa_store_index(xa, base + 1, GFP_KERNEL); 945 - xa_alloc_index(test, xa, base + 2, GFP_KERNEL); 946 947 /* If we then erase base, it should be free */ 948 - xa_erase_index(test, xa, base); 949 - xa_alloc_index(test, xa, base, GFP_KERNEL); 950 951 - xa_erase_index(test, xa, base + 1); 952 - xa_erase_index(test, xa, base + 2); 953 954 for (i = 1; i < 5000; i++) { 955 - xa_alloc_index(test, xa, base + i, GFP_KERNEL); 956 } 957 958 xa_destroy(xa); ··· 975 976 XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5), 977 GFP_KERNEL) != -EBUSY); 978 - XA_BUG_ON(xa, xa_store_index(xa, 3, GFP_KERNEL) != NULL); 979 XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5), 980 GFP_KERNEL) != -EBUSY); 981 - xa_erase_index(test, xa, 3); 982 XA_BUG_ON(xa, !xa_empty(xa)); 983 } 984 985 - static noinline void check_xa_alloc_2(struct kunit *test, struct xarray *xa, unsigned int base) 986 { 987 unsigned int i, id; 988 unsigned long index; ··· 1018 XA_BUG_ON(xa, id != 5); 1019 1020 xa_for_each(xa, index, entry) { 1021 - xa_erase_index(test, xa, index); 1022 } 1023 1024 for (i = base; i < base + 9; i++) { ··· 1033 xa_destroy(xa); 1034 } 1035 1036 - static noinline void check_xa_alloc_3(struct kunit *test, struct xarray *xa, unsigned int base) 1037 { 1038 struct xa_limit limit = XA_LIMIT(1, 0x3fff); 1039 u32 next = 0; ··· 1049 XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(0x3ffd), limit, 1050 &next, GFP_KERNEL) != 0); 1051 XA_BUG_ON(xa, id != 0x3ffd); 1052 - xa_erase_index(test, xa, 0x3ffd); 1053 - xa_erase_index(test, xa, 1); 1054 XA_BUG_ON(xa, !xa_empty(xa)); 1055 1056 for (i = 0x3ffe; i < 0x4003; i++) { ··· 1065 1066 /* Check wrap-around is handled correctly */ 1067 if (base != 0) 1068 - xa_erase_index(test, xa, base); 1069 
- xa_erase_index(test, xa, base + 1); 1070 next = UINT_MAX; 1071 XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(UINT_MAX), 1072 xa_limit_32b, &next, GFP_KERNEL) != 0); ··· 1079 XA_BUG_ON(xa, id != base + 1); 1080 1081 xa_for_each(xa, index, entry) 1082 - xa_erase_index(test, xa, index); 1083 1084 XA_BUG_ON(xa, !xa_empty(xa)); 1085 } ··· 1087 static DEFINE_XARRAY_ALLOC(xa0); 1088 static DEFINE_XARRAY_ALLOC1(xa1); 1089 1090 - static noinline void check_xa_alloc(struct kunit *test) 1091 { 1092 - check_xa_alloc_1(test, &xa0, 0); 1093 - check_xa_alloc_1(test, &xa1, 1); 1094 - check_xa_alloc_2(test, &xa0, 0); 1095 - check_xa_alloc_2(test, &xa1, 1); 1096 - check_xa_alloc_3(test, &xa0, 0); 1097 - check_xa_alloc_3(test, &xa1, 1); 1098 } 1099 1100 - static noinline void __check_store_iter(struct kunit *test, unsigned long start, 1101 unsigned int order, unsigned int present) 1102 { 1103 - struct xarray *xa = xa_param(test); 1104 - 1105 XA_STATE_ORDER(xas, xa, start, order); 1106 void *entry; 1107 unsigned int count = 0; ··· 1123 XA_BUG_ON(xa, xa_load(xa, start) != xa_mk_index(start)); 1124 XA_BUG_ON(xa, xa_load(xa, start + (1UL << order) - 1) != 1125 xa_mk_index(start)); 1126 - xa_erase_index(test, xa, start); 1127 } 1128 1129 - static noinline void check_store_iter(struct kunit *test) 1130 { 1131 - struct xarray *xa = xa_param(test); 1132 - 1133 unsigned int i, j; 1134 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; 1135 1136 for (i = 0; i < max_order; i++) { 1137 unsigned int min = 1 << i; 1138 unsigned int max = (2 << i) - 1; 1139 - __check_store_iter(test, 0, i, 0); 1140 XA_BUG_ON(xa, !xa_empty(xa)); 1141 - __check_store_iter(test, min, i, 0); 1142 XA_BUG_ON(xa, !xa_empty(xa)); 1143 1144 xa_store_index(xa, min, GFP_KERNEL); 1145 - __check_store_iter(test, min, i, 1); 1146 XA_BUG_ON(xa, !xa_empty(xa)); 1147 xa_store_index(xa, max, GFP_KERNEL); 1148 - __check_store_iter(test, min, i, 1); 1149 XA_BUG_ON(xa, !xa_empty(xa)); 1150 1151 for (j = 0; j < min; j++) 1152 xa_store_index(xa, j, GFP_KERNEL); 1153 - __check_store_iter(test, 0, i, min); 1154 XA_BUG_ON(xa, !xa_empty(xa)); 1155 for (j = 0; j < min; j++) 1156 xa_store_index(xa, min + j, GFP_KERNEL); 1157 - __check_store_iter(test, min, i, min); 1158 XA_BUG_ON(xa, !xa_empty(xa)); 1159 } 1160 #ifdef CONFIG_XARRAY_MULTI 1161 xa_store_index(xa, 63, GFP_KERNEL); 1162 xa_store_index(xa, 65, GFP_KERNEL); 1163 - __check_store_iter(test, 64, 2, 1); 1164 - xa_erase_index(test, xa, 63); 1165 #endif 1166 XA_BUG_ON(xa, !xa_empty(xa)); 1167 } 1168 1169 - static noinline void check_multi_find_1(struct kunit *test, unsigned int order) 1170 { 1171 #ifdef CONFIG_XARRAY_MULTI 1172 - struct xarray *xa = xa_param(test); 1173 - 1174 unsigned long multi = 3 << order; 1175 unsigned long next = 4 << order; 1176 unsigned long index; ··· 1189 XA_BUG_ON(xa, xa_find_after(xa, &index, next, XA_PRESENT) != NULL); 1190 XA_BUG_ON(xa, index != next); 1191 1192 - xa_erase_index(test, xa, multi); 1193 - xa_erase_index(test, xa, next); 1194 - xa_erase_index(test, xa, next + 1); 1195 XA_BUG_ON(xa, !xa_empty(xa)); 1196 #endif 1197 } 1198 1199 - static noinline void check_multi_find_2(struct kunit *test) 1200 { 1201 - struct xarray *xa = xa_param(test); 1202 - 1203 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 
10 : 1; 1204 unsigned int i, j; 1205 void *entry; ··· 1211 GFP_KERNEL); 1212 rcu_read_lock(); 1213 xas_for_each(&xas, entry, ULONG_MAX) { 1214 - xa_erase_index(test, xa, index); 1215 } 1216 rcu_read_unlock(); 1217 - xa_erase_index(test, xa, index - 1); 1218 XA_BUG_ON(xa, !xa_empty(xa)); 1219 } 1220 } 1221 } 1222 1223 - static noinline void check_multi_find_3(struct kunit *test) 1224 { 1225 - struct xarray *xa = xa_param(test); 1226 - 1227 unsigned int order; 1228 1229 for (order = 5; order < order_limit; order++) { ··· 1230 XA_BUG_ON(xa, !xa_empty(xa)); 1231 xa_store_order(xa, 0, order - 4, xa_mk_index(0), GFP_KERNEL); 1232 XA_BUG_ON(xa, xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT)); 1233 - xa_erase_index(test, xa, 0); 1234 } 1235 } 1236 1237 - static noinline void check_find_1(struct kunit *test) 1238 { 1239 - struct xarray *xa = xa_param(test); 1240 - 1241 unsigned long i, j, k; 1242 1243 XA_BUG_ON(xa, !xa_empty(xa)); ··· 1272 else 1273 XA_BUG_ON(xa, entry != NULL); 1274 } 1275 - xa_erase_index(test, xa, j); 1276 XA_BUG_ON(xa, xa_get_mark(xa, j, XA_MARK_0)); 1277 XA_BUG_ON(xa, !xa_get_mark(xa, i, XA_MARK_0)); 1278 } 1279 - xa_erase_index(test, xa, i); 1280 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0)); 1281 } 1282 XA_BUG_ON(xa, !xa_empty(xa)); 1283 } 1284 1285 - static noinline void check_find_2(struct kunit *test) 1286 { 1287 - struct xarray *xa = xa_param(test); 1288 - 1289 void *entry; 1290 unsigned long i, j, index; 1291 ··· 1303 xa_destroy(xa); 1304 } 1305 1306 - static noinline void check_find_3(struct kunit *test) 1307 { 1308 - struct xarray *xa = xa_param(test); 1309 - 1310 XA_STATE(xas, xa, 0); 1311 unsigned long i, j, k; 1312 void *entry; ··· 1328 xa_destroy(xa); 1329 } 1330 1331 - static noinline void check_find_4(struct kunit *test) 1332 { 1333 - struct xarray *xa = xa_param(test); 1334 - 1335 unsigned long index = 0; 1336 void *entry; 1337 ··· 1341 entry = xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT); 1342 XA_BUG_ON(xa, entry); 1343 1344 - xa_erase_index(test, xa, ULONG_MAX); 1345 } 1346 1347 - static noinline void check_find(struct kunit *test) 1348 { 1349 unsigned i; 1350 1351 - check_find_1(test); 1352 - check_find_2(test); 1353 - check_find_3(test); 1354 - check_find_4(test); 1355 1356 for (i = 2; i < 10; i++) 1357 - check_multi_find_1(test, i); 1358 - check_multi_find_2(test); 1359 - check_multi_find_3(test); 1360 } 1361 1362 /* See find_swap_entry() in mm/shmem.c */ ··· 1382 return entry ? 
xas.xa_index : -1; 1383 } 1384 1385 - static noinline void check_find_entry(struct kunit *test) 1386 { 1387 - struct xarray *xa = xa_param(test); 1388 - 1389 #ifdef CONFIG_XARRAY_MULTI 1390 unsigned int order; 1391 unsigned long offset, index; ··· 1410 xa_store_index(xa, ULONG_MAX, GFP_KERNEL); 1411 XA_BUG_ON(xa, xa_find_entry(xa, xa) != -1); 1412 XA_BUG_ON(xa, xa_find_entry(xa, xa_mk_index(ULONG_MAX)) != -1); 1413 - xa_erase_index(test, xa, ULONG_MAX); 1414 XA_BUG_ON(xa, !xa_empty(xa)); 1415 } 1416 1417 - static noinline void check_pause(struct kunit *test) 1418 { 1419 - struct xarray *xa = xa_param(test); 1420 - 1421 XA_STATE(xas, xa, 0); 1422 void *entry; 1423 unsigned int order; ··· 1485 1486 } 1487 1488 - static noinline void check_move_tiny(struct kunit *test) 1489 { 1490 - struct xarray *xa = xa_param(test); 1491 - 1492 XA_STATE(xas, xa, 0); 1493 1494 XA_BUG_ON(xa, !xa_empty(xa)); ··· 1503 XA_BUG_ON(xa, xas_prev(&xas) != xa_mk_index(0)); 1504 XA_BUG_ON(xa, xas_prev(&xas) != NULL); 1505 rcu_read_unlock(); 1506 - xa_erase_index(test, xa, 0); 1507 XA_BUG_ON(xa, !xa_empty(xa)); 1508 } 1509 1510 - static noinline void check_move_max(struct kunit *test) 1511 { 1512 - struct xarray *xa = xa_param(test); 1513 - 1514 XA_STATE(xas, xa, 0); 1515 1516 xa_store_index(xa, ULONG_MAX, GFP_KERNEL); ··· 1524 XA_BUG_ON(xa, xas_find(&xas, ULONG_MAX) != NULL); 1525 rcu_read_unlock(); 1526 1527 - xa_erase_index(test, xa, ULONG_MAX); 1528 XA_BUG_ON(xa, !xa_empty(xa)); 1529 } 1530 1531 - static noinline void check_move_small(struct kunit *test, unsigned long idx) 1532 { 1533 - struct xarray *xa = xa_param(test); 1534 - 1535 XA_STATE(xas, xa, 0); 1536 unsigned long i; 1537 ··· 1571 XA_BUG_ON(xa, xas.xa_index != ULONG_MAX); 1572 rcu_read_unlock(); 1573 1574 - xa_erase_index(test, xa, 0); 1575 - xa_erase_index(test, xa, idx); 1576 XA_BUG_ON(xa, !xa_empty(xa)); 1577 } 1578 1579 - static noinline void check_move(struct kunit *test) 1580 { 1581 - struct xarray *xa = xa_param(test); 1582 - 1583 XA_STATE(xas, xa, (1 << 16) - 1); 1584 unsigned long i; 1585 ··· 1604 rcu_read_unlock(); 1605 1606 for (i = (1 << 8); i < (1 << 15); i++) 1607 - xa_erase_index(test, xa, i); 1608 1609 i = xas.xa_index; 1610 ··· 1635 1636 xa_destroy(xa); 1637 1638 - check_move_tiny(test); 1639 - check_move_max(test); 1640 1641 for (i = 0; i < 16; i++) 1642 - check_move_small(test, 1UL << i); 1643 1644 for (i = 2; i < 16; i++) 1645 - check_move_small(test, (1UL << i) - 1); 1646 } 1647 1648 - static noinline void xa_store_many_order(struct kunit *test, struct xarray *xa, 1649 unsigned long index, unsigned order) 1650 { 1651 XA_STATE_ORDER(xas, xa, index, order); ··· 1668 XA_BUG_ON(xa, xas_error(&xas)); 1669 } 1670 1671 - static noinline void check_create_range_1(struct kunit *test, 1672 unsigned long index, unsigned order) 1673 { 1674 - struct xarray *xa = xa_param(test); 1675 - 1676 unsigned long i; 1677 1678 - xa_store_many_order(test, xa, index, order); 1679 for (i = index; i < index + (1UL << order); i++) 1680 - xa_erase_index(test, xa, i); 1681 XA_BUG_ON(xa, !xa_empty(xa)); 1682 } 1683 1684 - static noinline void check_create_range_2(struct kunit *test, unsigned int order) 1685 { 1686 - struct xarray *xa = xa_param(test); 1687 - 1688 unsigned long i; 1689 unsigned long nr = 1UL << order; 1690 1691 for (i = 0; i < nr * nr; i += nr) 1692 - xa_store_many_order(test, xa, i, order); 1693 for (i = 0; i < nr * nr; i++) 1694 - xa_erase_index(test, xa, i); 1695 XA_BUG_ON(xa, !xa_empty(xa)); 1696 } 1697 1698 - static noinline void 
check_create_range_3(struct kunit *test) 1699 { 1700 XA_STATE(xas, NULL, 0); 1701 xas_set_err(&xas, -EEXIST); ··· 1699 XA_BUG_ON(NULL, xas_error(&xas) != -EEXIST); 1700 } 1701 1702 - static noinline void check_create_range_4(struct kunit *test, 1703 unsigned long index, unsigned order) 1704 { 1705 - struct xarray *xa = xa_param(test); 1706 - 1707 XA_STATE_ORDER(xas, xa, index, order); 1708 unsigned long base = xas.xa_index; 1709 unsigned long i = 0; ··· 1727 XA_BUG_ON(xa, xas_error(&xas)); 1728 1729 for (i = base; i < base + (1UL << order); i++) 1730 - xa_erase_index(test, xa, i); 1731 XA_BUG_ON(xa, !xa_empty(xa)); 1732 } 1733 1734 - static noinline void check_create_range_5(struct kunit *test, 1735 unsigned long index, unsigned int order) 1736 { 1737 - struct xarray *xa = xa_param(test); 1738 - 1739 XA_STATE_ORDER(xas, xa, index, order); 1740 unsigned int i; 1741 ··· 1750 xa_destroy(xa); 1751 } 1752 1753 - static noinline void check_create_range(struct kunit *test) 1754 { 1755 unsigned int order; 1756 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 12 : 1; 1757 1758 for (order = 0; order < max_order; order++) { 1759 - check_create_range_1(test, 0, order); 1760 - check_create_range_1(test, 1U << order, order); 1761 - check_create_range_1(test, 2U << order, order); 1762 - check_create_range_1(test, 3U << order, order); 1763 - check_create_range_1(test, 1U << 24, order); 1764 if (order < 10) 1765 - check_create_range_2(test, order); 1766 1767 - check_create_range_4(test, 0, order); 1768 - check_create_range_4(test, 1U << order, order); 1769 - check_create_range_4(test, 2U << order, order); 1770 - check_create_range_4(test, 3U << order, order); 1771 - check_create_range_4(test, 1U << 24, order); 1772 1773 - check_create_range_4(test, 1, order); 1774 - check_create_range_4(test, (1U << order) + 1, order); 1775 - check_create_range_4(test, (2U << order) + 1, order); 1776 - check_create_range_4(test, (2U << order) - 1, order); 1777 - check_create_range_4(test, (3U << order) + 1, order); 1778 - check_create_range_4(test, (3U << order) - 1, order); 1779 - check_create_range_4(test, (1U << 24) + 1, order); 1780 1781 - check_create_range_5(test, 0, order); 1782 - check_create_range_5(test, (1U << order), order); 1783 } 1784 1785 - check_create_range_3(test); 1786 } 1787 1788 - static noinline void __check_store_range(struct kunit *test, unsigned long first, 1789 unsigned long last) 1790 { 1791 - struct xarray *xa = xa_param(test); 1792 - 1793 #ifdef CONFIG_XARRAY_MULTI 1794 xa_store_range(xa, first, last, xa_mk_index(first), GFP_KERNEL); 1795 ··· 1802 XA_BUG_ON(xa, !xa_empty(xa)); 1803 } 1804 1805 - static noinline void check_store_range(struct kunit *test) 1806 { 1807 unsigned long i, j; 1808 1809 for (i = 0; i < 128; i++) { 1810 for (j = i; j < 128; j++) { 1811 - __check_store_range(test, i, j); 1812 - __check_store_range(test, 128 + i, 128 + j); 1813 - __check_store_range(test, 4095 + i, 4095 + j); 1814 - __check_store_range(test, 4096 + i, 4096 + j); 1815 - __check_store_range(test, 123456 + i, 123456 + j); 1816 - __check_store_range(test, (1 << 24) + i, (1 << 24) + j); 1817 } 1818 } 1819 } 1820 1821 #ifdef CONFIG_XARRAY_MULTI 1822 - static void check_split_1(struct kunit *test, unsigned long index, 1823 unsigned int order, unsigned int new_order) 1824 { 1825 - struct xarray *xa = xa_param(test); 1826 - 1827 XA_STATE_ORDER(xas, xa, index, new_order); 1828 unsigned int i, found; 1829 void *entry; ··· 1857 xa_destroy(xa); 1858 } 1859 1860 - static noinline void check_split(struct kunit 
*test) 1861 { 1862 - struct xarray *xa = xa_param(test); 1863 - 1864 unsigned int order, new_order; 1865 1866 XA_BUG_ON(xa, !xa_empty(xa)); 1867 1868 for (order = 1; order < 2 * XA_CHUNK_SHIFT; order++) { 1869 for (new_order = 0; new_order < order; new_order++) { 1870 - check_split_1(test, 0, order, new_order); 1871 - check_split_1(test, 1UL << order, order, new_order); 1872 - check_split_1(test, 3UL << order, order, new_order); 1873 } 1874 } 1875 } 1876 #else 1877 - static void check_split(struct kunit *test) { } 1878 #endif 1879 1880 - static void check_align_1(struct kunit *test, char *name) 1881 { 1882 - struct xarray *xa = xa_param(test); 1883 - 1884 int i; 1885 unsigned int id; 1886 unsigned long index; ··· 1896 * We should always be able to store without allocating memory after 1897 * reserving a slot. 1898 */ 1899 - static void check_align_2(struct kunit *test, char *name) 1900 { 1901 - struct xarray *xa = xa_param(test); 1902 - 1903 int i; 1904 1905 XA_BUG_ON(xa, !xa_empty(xa)); ··· 1916 XA_BUG_ON(xa, !xa_empty(xa)); 1917 } 1918 1919 - static noinline void check_align(struct kunit *test) 1920 { 1921 char name[] = "Motorola 68000"; 1922 1923 - check_align_1(test, name); 1924 - check_align_1(test, name + 1); 1925 - check_align_1(test, name + 2); 1926 - check_align_1(test, name + 3); 1927 - check_align_2(test, name); 1928 } 1929 1930 static LIST_HEAD(shadow_nodes); ··· 1940 } 1941 } 1942 1943 - static noinline void shadow_remove(struct kunit *test, struct xarray *xa) 1944 { 1945 struct xa_node *node; 1946 ··· 1954 xa_unlock(xa); 1955 } 1956 1957 - struct workingset_testcase { 1958 - struct xarray *xa; 1959 - unsigned long index; 1960 - }; 1961 - 1962 - static noinline void check_workingset(struct kunit *test) 1963 { 1964 - struct workingset_testcase tc = *(struct workingset_testcase *)test->param_value; 1965 - struct xarray *xa = tc.xa; 1966 - unsigned long index = tc.index; 1967 - 1968 XA_STATE(xas, xa, index); 1969 xas_set_update(&xas, test_update_node); 1970 ··· 1978 xas_unlock(&xas); 1979 XA_BUG_ON(xa, list_empty(&shadow_nodes)); 1980 1981 - shadow_remove(test, xa); 1982 XA_BUG_ON(xa, !list_empty(&shadow_nodes)); 1983 XA_BUG_ON(xa, !xa_empty(xa)); 1984 } ··· 1987 * Check that the pointer / value / sibling entries are accounted the 1988 * way we expect them to be. 1989 */ 1990 - static noinline void check_account(struct kunit *test) 1991 { 1992 #ifdef CONFIG_XARRAY_MULTI 1993 - struct xarray *xa = xa_param(test); 1994 - 1995 unsigned int order; 1996 1997 for (order = 1; order < 12; order++) { ··· 2016 #endif 2017 } 2018 2019 - static noinline void check_get_order(struct kunit *test) 2020 { 2021 - struct xarray *xa = xa_param(test); 2022 - 2023 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; 2024 unsigned int order; 2025 unsigned long i, j; ··· 2036 } 2037 } 2038 2039 - static noinline void check_xas_get_order(struct kunit *test) 2040 { 2041 - struct xarray *xa = xa_param(test); 2042 - 2043 XA_STATE(xas, xa, 0); 2044 2045 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 
20 : 1; ··· 2069 } 2070 } 2071 2072 - static noinline void check_xas_conflict_get_order(struct kunit *test) 2073 { 2074 - struct xarray *xa = xa_param(test); 2075 - 2076 XA_STATE(xas, xa, 0); 2077 2078 void *entry; ··· 2127 } 2128 2129 2130 - static noinline void check_destroy(struct kunit *test) 2131 { 2132 - struct xarray *xa = xa_param(test); 2133 - 2134 unsigned long index; 2135 2136 XA_BUG_ON(xa, !xa_empty(xa)); ··· 2161 } 2162 2163 static DEFINE_XARRAY(array); 2164 - static struct xarray *arrays[] = { &array }; 2165 - KUNIT_ARRAY_PARAM(array, arrays, NULL); 2166 2167 - static struct xarray *xa0s[] = { &xa0 }; 2168 - KUNIT_ARRAY_PARAM(xa0, xa0s, NULL); 2169 2170 - static struct workingset_testcase workingset_testcases[] = { 2171 - { &array, 0 }, 2172 - { &array, 64 }, 2173 - { &array, 4096 }, 2174 - }; 2175 - KUNIT_ARRAY_PARAM(workingset, workingset_testcases, NULL); 2176 2177 - static struct kunit_case xarray_cases[] = { 2178 - KUNIT_CASE_PARAM(check_xa_err, array_gen_params), 2179 - KUNIT_CASE_PARAM(check_xas_retry, array_gen_params), 2180 - KUNIT_CASE_PARAM(check_xa_load, array_gen_params), 2181 - KUNIT_CASE_PARAM(check_xa_mark, array_gen_params), 2182 - KUNIT_CASE_PARAM(check_xa_shrink, array_gen_params), 2183 - KUNIT_CASE_PARAM(check_xas_erase, array_gen_params), 2184 - KUNIT_CASE_PARAM(check_insert, array_gen_params), 2185 - KUNIT_CASE_PARAM(check_cmpxchg, array_gen_params), 2186 - KUNIT_CASE_PARAM(check_cmpxchg_order, array_gen_params), 2187 - KUNIT_CASE_PARAM(check_reserve, array_gen_params), 2188 - KUNIT_CASE_PARAM(check_reserve, xa0_gen_params), 2189 - KUNIT_CASE_PARAM(check_multi_store, array_gen_params), 2190 - KUNIT_CASE_PARAM(check_multi_store_advanced, array_gen_params), 2191 - KUNIT_CASE_PARAM(check_get_order, array_gen_params), 2192 - KUNIT_CASE_PARAM(check_xas_get_order, array_gen_params), 2193 - KUNIT_CASE_PARAM(check_xas_conflict_get_order, array_gen_params), 2194 - KUNIT_CASE(check_xa_alloc), 2195 - KUNIT_CASE_PARAM(check_find, array_gen_params), 2196 - KUNIT_CASE_PARAM(check_find_entry, array_gen_params), 2197 - KUNIT_CASE_PARAM(check_pause, array_gen_params), 2198 - KUNIT_CASE_PARAM(check_account, array_gen_params), 2199 - KUNIT_CASE_PARAM(check_destroy, array_gen_params), 2200 - KUNIT_CASE_PARAM(check_move, array_gen_params), 2201 - KUNIT_CASE_PARAM(check_create_range, array_gen_params), 2202 - KUNIT_CASE_PARAM(check_store_range, array_gen_params), 2203 - KUNIT_CASE_PARAM(check_store_iter, array_gen_params), 2204 - KUNIT_CASE_PARAM(check_align, xa0_gen_params), 2205 - KUNIT_CASE_PARAM(check_split, array_gen_params), 2206 - KUNIT_CASE_PARAM(check_workingset, workingset_gen_params), 2207 - {}, 2208 - }; 2209 2210 - static struct kunit_suite xarray_suite = { 2211 - .name = "xarray", 2212 - .test_cases = xarray_cases, 2213 - }; 2214 2215 - kunit_test_suite(xarray_suite); 2216 - 2217 MODULE_AUTHOR("Matthew Wilcox <willy@infradead.org>"); 2218 MODULE_DESCRIPTION("XArray API test module"); 2219 MODULE_LICENSE("GPL");
··· 6 * Author: Matthew Wilcox <willy@infradead.org> 7 */ 8 9 #include <linux/xarray.h> 10 + #include <linux/module.h> 11 + 12 + static unsigned int tests_run; 13 + static unsigned int tests_passed; 14 15 static const unsigned int order_limit = 16 IS_ENABLED(CONFIG_XARRAY_MULTI) ? BITS_PER_LONG : 1; ··· 19 void xa_dump(const struct xarray *xa) { } 20 # endif 21 #undef XA_BUG_ON 22 + #define XA_BUG_ON(xa, x) do { \ 23 + tests_run++; \ 24 + if (x) { \ 25 + printk("BUG at %s:%d\n", __func__, __LINE__); \ 26 + xa_dump(xa); \ 27 + dump_stack(); \ 28 + } else { \ 29 + tests_passed++; \ 30 + } \ 31 } while (0) 32 #endif 33 ··· 38 return xa_store(xa, index, xa_mk_index(index), gfp); 39 } 40 41 + static void xa_insert_index(struct xarray *xa, unsigned long index) 42 { 43 XA_BUG_ON(xa, xa_insert(xa, index, xa_mk_index(index), 44 GFP_KERNEL) != 0); 45 } 46 47 + static void xa_alloc_index(struct xarray *xa, unsigned long index, gfp_t gfp) 48 { 49 u32 id; 50 ··· 53 XA_BUG_ON(xa, id != index); 54 } 55 56 + static void xa_erase_index(struct xarray *xa, unsigned long index) 57 { 58 XA_BUG_ON(xa, xa_erase(xa, index) != xa_mk_index(index)); 59 XA_BUG_ON(xa, xa_load(xa, index) != NULL); ··· 79 return curr; 80 } 81 82 + static noinline void check_xa_err(struct xarray *xa) 83 { 84 XA_BUG_ON(xa, xa_err(xa_store_index(xa, 0, GFP_NOWAIT)) != 0); 85 XA_BUG_ON(xa, xa_err(xa_erase(xa, 0)) != 0); 86 #ifndef __KERNEL__ ··· 102 // XA_BUG_ON(xa, xa_err(xa_store(xa, 0, xa_mk_internal(0), 0)) != -EINVAL); 103 } 104 105 + static noinline void check_xas_retry(struct xarray *xa) 106 { 107 XA_STATE(xas, xa, 0); 108 void *entry; 109 ··· 114 115 rcu_read_lock(); 116 XA_BUG_ON(xa, xas_find(&xas, ULONG_MAX) != xa_mk_value(0)); 117 + xa_erase_index(xa, 1); 118 XA_BUG_ON(xa, !xa_is_retry(xas_reload(&xas))); 119 XA_BUG_ON(xa, xas_retry(&xas, NULL)); 120 XA_BUG_ON(xa, xas_retry(&xas, xa_mk_value(0))); ··· 145 } 146 xas_unlock(&xas); 147 148 + xa_erase_index(xa, 0); 149 + xa_erase_index(xa, 1); 150 } 151 152 + static noinline void check_xa_load(struct xarray *xa) 153 { 154 unsigned long i, j; 155 156 for (i = 0; i < 1024; i++) { ··· 174 else 175 XA_BUG_ON(xa, entry); 176 } 177 + xa_erase_index(xa, i); 178 } 179 XA_BUG_ON(xa, !xa_empty(xa)); 180 } 181 182 + static noinline void check_xa_mark_1(struct xarray *xa, unsigned long index) 183 { 184 unsigned int order; 185 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 
8 : 1; 186 ··· 202 XA_BUG_ON(xa, xa_get_mark(xa, index, XA_MARK_1)); 203 204 /* Storing NULL clears marks, and they can't be set again */ 205 + xa_erase_index(xa, index); 206 XA_BUG_ON(xa, !xa_empty(xa)); 207 XA_BUG_ON(xa, xa_get_mark(xa, index, XA_MARK_0)); 208 xa_set_mark(xa, index, XA_MARK_0); ··· 253 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_0)); 254 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_1)); 255 XA_BUG_ON(xa, xa_get_mark(xa, next, XA_MARK_2)); 256 + xa_erase_index(xa, index); 257 + xa_erase_index(xa, next); 258 XA_BUG_ON(xa, !xa_empty(xa)); 259 } 260 XA_BUG_ON(xa, !xa_empty(xa)); 261 } 262 263 + static noinline void check_xa_mark_2(struct xarray *xa) 264 { 265 XA_STATE(xas, xa, 0); 266 unsigned long index; 267 unsigned int count = 0; ··· 300 xa_destroy(xa); 301 } 302 303 + static noinline void check_xa_mark_3(struct xarray *xa) 304 { 305 #ifdef CONFIG_XARRAY_MULTI 306 XA_STATE(xas, xa, 0x41); 307 void *entry; 308 int count = 0; ··· 323 #endif 324 } 325 326 + static noinline void check_xa_mark(struct xarray *xa) 327 { 328 unsigned long index; 329 330 for (index = 0; index < 16384; index += 4) 331 + check_xa_mark_1(xa, index); 332 333 + check_xa_mark_2(xa); 334 + check_xa_mark_3(xa); 335 } 336 337 + static noinline void check_xa_shrink(struct xarray *xa) 338 { 339 XA_STATE(xas, xa, 1); 340 struct xa_node *node; 341 unsigned int order; ··· 362 XA_BUG_ON(xa, xas_load(&xas) != NULL); 363 xas_unlock(&xas); 364 XA_BUG_ON(xa, xa_load(xa, 0) != xa_mk_value(0)); 365 + xa_erase_index(xa, 0); 366 XA_BUG_ON(xa, !xa_empty(xa)); 367 368 for (order = 0; order < max_order; order++) { ··· 379 XA_BUG_ON(xa, xa_head(xa) == node); 380 rcu_read_unlock(); 381 XA_BUG_ON(xa, xa_load(xa, max + 1) != NULL); 382 + xa_erase_index(xa, ULONG_MAX); 383 XA_BUG_ON(xa, xa->xa_head != node); 384 + xa_erase_index(xa, 0); 385 } 386 } 387 388 + static noinline void check_insert(struct xarray *xa) 389 { 390 unsigned long i; 391 392 for (i = 0; i < 1024; i++) { 393 + xa_insert_index(xa, i); 394 XA_BUG_ON(xa, xa_load(xa, i - 1) != NULL); 395 XA_BUG_ON(xa, xa_load(xa, i + 1) != NULL); 396 + xa_erase_index(xa, i); 397 } 398 399 for (i = 10; i < BITS_PER_LONG; i++) { 400 + xa_insert_index(xa, 1UL << i); 401 XA_BUG_ON(xa, xa_load(xa, (1UL << i) - 1) != NULL); 402 XA_BUG_ON(xa, xa_load(xa, (1UL << i) + 1) != NULL); 403 + xa_erase_index(xa, 1UL << i); 404 405 + xa_insert_index(xa, (1UL << i) - 1); 406 XA_BUG_ON(xa, xa_load(xa, (1UL << i) - 2) != NULL); 407 XA_BUG_ON(xa, xa_load(xa, 1UL << i) != NULL); 408 + xa_erase_index(xa, (1UL << i) - 1); 409 } 410 411 + xa_insert_index(xa, ~0UL); 412 XA_BUG_ON(xa, xa_load(xa, 0UL) != NULL); 413 XA_BUG_ON(xa, xa_load(xa, ~1UL) != NULL); 414 + xa_erase_index(xa, ~0UL); 415 416 XA_BUG_ON(xa, !xa_empty(xa)); 417 } 418 419 + static noinline void check_cmpxchg(struct xarray *xa) 420 { 421 void *FIVE = xa_mk_value(5); 422 void *SIX = xa_mk_value(6); 423 void *LOTS = xa_mk_value(12345678); ··· 437 XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) != -EBUSY); 438 XA_BUG_ON(xa, xa_cmpxchg(xa, 5, FIVE, NULL, GFP_KERNEL) != FIVE); 439 XA_BUG_ON(xa, xa_insert(xa, 5, FIVE, GFP_KERNEL) == -EBUSY); 440 + xa_erase_index(xa, 12345678); 441 + xa_erase_index(xa, 5); 442 XA_BUG_ON(xa, !xa_empty(xa)); 443 } 444 445 + static noinline void check_cmpxchg_order(struct xarray *xa) 446 { 447 #ifdef CONFIG_XARRAY_MULTI 448 void *FIVE = xa_mk_value(5); 449 unsigned int i, order = 3; 450 ··· 497 #endif 498 } 499 500 + static noinline void check_reserve(struct xarray *xa) 501 { 502 void *entry; 503 unsigned long 
index; 504 int count; ··· 517 XA_BUG_ON(xa, xa_reserve(xa, 12345678, GFP_KERNEL) != 0); 518 XA_BUG_ON(xa, xa_store_index(xa, 12345678, GFP_NOWAIT) != NULL); 519 xa_release(xa, 12345678); 520 + xa_erase_index(xa, 12345678); 521 XA_BUG_ON(xa, !xa_empty(xa)); 522 523 /* cmpxchg sees a reserved entry as ZERO */ ··· 525 XA_BUG_ON(xa, xa_cmpxchg(xa, 12345678, XA_ZERO_ENTRY, 526 xa_mk_value(12345678), GFP_NOWAIT) != NULL); 527 xa_release(xa, 12345678); 528 + xa_erase_index(xa, 12345678); 529 XA_BUG_ON(xa, !xa_empty(xa)); 530 531 /* xa_insert treats it as busy */ ··· 565 xa_destroy(xa); 566 } 567 568 + static noinline void check_xas_erase(struct xarray *xa) 569 { 570 XA_STATE(xas, xa, 0); 571 void *entry; 572 unsigned long i, j; ··· 606 } 607 608 #ifdef CONFIG_XARRAY_MULTI 609 + static noinline void check_multi_store_1(struct xarray *xa, unsigned long index, 610 unsigned int order) 611 { 612 XA_STATE(xas, xa, index); 613 unsigned long min = index & ~((1UL << order) - 1); 614 unsigned long max = min + (1UL << order); ··· 629 XA_BUG_ON(xa, xa_load(xa, max) != NULL); 630 XA_BUG_ON(xa, xa_load(xa, min - 1) != NULL); 631 632 + xa_erase_index(xa, min); 633 XA_BUG_ON(xa, !xa_empty(xa)); 634 } 635 636 + static noinline void check_multi_store_2(struct xarray *xa, unsigned long index, 637 unsigned int order) 638 { 639 XA_STATE(xas, xa, index); 640 xa_store_order(xa, index, order, xa_mk_value(0), GFP_KERNEL); 641 ··· 649 XA_BUG_ON(xa, !xa_empty(xa)); 650 } 651 652 + static noinline void check_multi_store_3(struct xarray *xa, unsigned long index, 653 unsigned int order) 654 { 655 XA_STATE(xas, xa, 0); 656 void *entry; 657 int n = 0; ··· 678 } 679 #endif 680 681 + static noinline void check_multi_store(struct xarray *xa) 682 { 683 #ifdef CONFIG_XARRAY_MULTI 684 unsigned long i, j, k; 685 unsigned int max_order = (sizeof(long) == 4) ? 
30 : 60; 686 ··· 747 } 748 749 for (i = 0; i < 20; i++) { 750 + check_multi_store_1(xa, 200, i); 751 + check_multi_store_1(xa, 0, i); 752 + check_multi_store_1(xa, (1UL << i) + 1, i); 753 } 754 + check_multi_store_2(xa, 4095, 9); 755 756 for (i = 1; i < 20; i++) { 757 + check_multi_store_3(xa, 0, i); 758 + check_multi_store_3(xa, 1UL << i, i); 759 } 760 #endif 761 } 762 763 #ifdef CONFIG_XARRAY_MULTI 764 /* mimics page cache __filemap_add_folio() */ 765 + static noinline void check_xa_multi_store_adv_add(struct xarray *xa, 766 unsigned long index, 767 unsigned int order, 768 void *p) 769 { 770 XA_STATE(xas, xa, index); 771 unsigned int nrpages = 1UL << order; 772 ··· 796 } 797 798 /* mimics page_cache_delete() */ 799 + static noinline void check_xa_multi_store_adv_del_entry(struct xarray *xa, 800 unsigned long index, 801 unsigned int order) 802 { 803 XA_STATE(xas, xa, index); 804 805 xas_set_order(&xas, index, order); ··· 809 xas_init_marks(&xas); 810 } 811 812 + static noinline void check_xa_multi_store_adv_delete(struct xarray *xa, 813 unsigned long index, 814 unsigned int order) 815 { 816 xa_lock_irq(xa); 817 + check_xa_multi_store_adv_del_entry(xa, index, order); 818 xa_unlock_irq(xa); 819 } 820 ··· 853 static unsigned long some_val_2 = 0xdeaddead; 854 855 /* mimics the page cache usage */ 856 + static noinline void check_xa_multi_store_adv(struct xarray *xa, 857 unsigned long pos, 858 unsigned int order) 859 { 860 unsigned int nrpages = 1UL << order; 861 unsigned long index, base, next_index, next_next_index; 862 unsigned int i; ··· 868 next_index = round_down(base + nrpages, nrpages); 869 next_next_index = round_down(next_index + nrpages, nrpages); 870 871 + check_xa_multi_store_adv_add(xa, base, order, &some_val); 872 873 for (i = 0; i < nrpages; i++) 874 XA_BUG_ON(xa, test_get_entry(xa, base + i) != &some_val); ··· 876 XA_BUG_ON(xa, test_get_entry(xa, next_index) != NULL); 877 878 /* Use order 0 for the next item */ 879 + check_xa_multi_store_adv_add(xa, next_index, 0, &some_val_2); 880 XA_BUG_ON(xa, test_get_entry(xa, next_index) != &some_val_2); 881 882 /* Remove the next item */ 883 + check_xa_multi_store_adv_delete(xa, next_index, 0); 884 885 /* Now use order for a new pointer */ 886 + check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2); 887 888 for (i = 0; i < nrpages; i++) 889 XA_BUG_ON(xa, test_get_entry(xa, next_index + i) != &some_val_2); 890 891 + check_xa_multi_store_adv_delete(xa, next_index, order); 892 + check_xa_multi_store_adv_delete(xa, base, order); 893 XA_BUG_ON(xa, !xa_empty(xa)); 894 895 /* starting fresh again */ ··· 897 /* let's test some holes now */ 898 899 /* hole at base and next_next */ 900 + check_xa_multi_store_adv_add(xa, next_index, order, &some_val_2); 901 902 for (i = 0; i < nrpages; i++) 903 XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL); ··· 908 for (i = 0; i < nrpages; i++) 909 XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != NULL); 910 911 + check_xa_multi_store_adv_delete(xa, next_index, order); 912 XA_BUG_ON(xa, !xa_empty(xa)); 913 914 /* hole at base and next */ 915 916 + check_xa_multi_store_adv_add(xa, next_next_index, order, &some_val_2); 917 918 for (i = 0; i < nrpages; i++) 919 XA_BUG_ON(xa, test_get_entry(xa, base + i) != NULL); ··· 924 for (i = 0; i < nrpages; i++) 925 XA_BUG_ON(xa, test_get_entry(xa, next_next_index + i) != &some_val_2); 926 927 + check_xa_multi_store_adv_delete(xa, next_next_index, order); 928 XA_BUG_ON(xa, !xa_empty(xa)); 929 } 930 #endif 931 932 + static noinline void 
check_multi_store_advanced(struct xarray *xa) 933 { 934 #ifdef CONFIG_XARRAY_MULTI 935 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; ··· 941 */ 942 for (pos = 7; pos < end; pos = (pos * pos) + 564) { 943 for (i = 0; i < max_order; i++) { 944 + check_xa_multi_store_adv(xa, pos, i); 945 + check_xa_multi_store_adv(xa, pos + 157, i); 946 } 947 } 948 #endif 949 } 950 951 + static noinline void check_xa_alloc_1(struct xarray *xa, unsigned int base) 952 { 953 int i; 954 u32 id; 955 956 XA_BUG_ON(xa, !xa_empty(xa)); 957 /* An empty array should assign %base to the first alloc */ 958 + xa_alloc_index(xa, base, GFP_KERNEL); 959 960 /* Erasing it should make the array empty again */ 961 + xa_erase_index(xa, base); 962 XA_BUG_ON(xa, !xa_empty(xa)); 963 964 /* And it should assign %base again */ 965 + xa_alloc_index(xa, base, GFP_KERNEL); 966 967 /* Allocating and then erasing a lot should not lose base */ 968 for (i = base + 1; i < 2 * XA_CHUNK_SIZE; i++) 969 + xa_alloc_index(xa, i, GFP_KERNEL); 970 for (i = base; i < 2 * XA_CHUNK_SIZE; i++) 971 + xa_erase_index(xa, i); 972 + xa_alloc_index(xa, base, GFP_KERNEL); 973 974 /* Destroying the array should do the same as erasing */ 975 xa_destroy(xa); 976 977 /* And it should assign %base again */ 978 + xa_alloc_index(xa, base, GFP_KERNEL); 979 980 /* The next assigned ID should be base+1 */ 981 + xa_alloc_index(xa, base + 1, GFP_KERNEL); 982 + xa_erase_index(xa, base + 1); 983 984 /* Storing a value should mark it used */ 985 xa_store_index(xa, base + 1, GFP_KERNEL); 986 + xa_alloc_index(xa, base + 2, GFP_KERNEL); 987 988 /* If we then erase base, it should be free */ 989 + xa_erase_index(xa, base); 990 + xa_alloc_index(xa, base, GFP_KERNEL); 991 992 + xa_erase_index(xa, base + 1); 993 + xa_erase_index(xa, base + 2); 994 995 for (i = 1; i < 5000; i++) { 996 + xa_alloc_index(xa, base + i, GFP_KERNEL); 997 } 998 999 xa_destroy(xa); ··· 1016 1017 XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5), 1018 GFP_KERNEL) != -EBUSY); 1019 + XA_BUG_ON(xa, xa_store_index(xa, 3, GFP_KERNEL) != 0); 1020 XA_BUG_ON(xa, xa_alloc(xa, &id, xa_mk_index(10), XA_LIMIT(10, 5), 1021 GFP_KERNEL) != -EBUSY); 1022 + xa_erase_index(xa, 3); 1023 XA_BUG_ON(xa, !xa_empty(xa)); 1024 } 1025 1026 + static noinline void check_xa_alloc_2(struct xarray *xa, unsigned int base) 1027 { 1028 unsigned int i, id; 1029 unsigned long index; ··· 1059 XA_BUG_ON(xa, id != 5); 1060 1061 xa_for_each(xa, index, entry) { 1062 + xa_erase_index(xa, index); 1063 } 1064 1065 for (i = base; i < base + 9; i++) { ··· 1074 xa_destroy(xa); 1075 } 1076 1077 + static noinline void check_xa_alloc_3(struct xarray *xa, unsigned int base) 1078 { 1079 struct xa_limit limit = XA_LIMIT(1, 0x3fff); 1080 u32 next = 0; ··· 1090 XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(0x3ffd), limit, 1091 &next, GFP_KERNEL) != 0); 1092 XA_BUG_ON(xa, id != 0x3ffd); 1093 + xa_erase_index(xa, 0x3ffd); 1094 + xa_erase_index(xa, 1); 1095 XA_BUG_ON(xa, !xa_empty(xa)); 1096 1097 for (i = 0x3ffe; i < 0x4003; i++) { ··· 1106 1107 /* Check wrap-around is handled correctly */ 1108 if (base != 0) 1109 + xa_erase_index(xa, base); 1110 + xa_erase_index(xa, base + 1); 1111 next = UINT_MAX; 1112 XA_BUG_ON(xa, xa_alloc_cyclic(xa, &id, xa_mk_index(UINT_MAX), 1113 xa_limit_32b, &next, GFP_KERNEL) != 0); ··· 1120 XA_BUG_ON(xa, id != base + 1); 1121 1122 xa_for_each(xa, index, entry) 1123 + xa_erase_index(xa, index); 1124 1125 XA_BUG_ON(xa, !xa_empty(xa)); 1126 } ··· 1128 static DEFINE_XARRAY_ALLOC(xa0); 1129 static 
DEFINE_XARRAY_ALLOC1(xa1); 1130 1131 + static noinline void check_xa_alloc(void) 1132 { 1133 + check_xa_alloc_1(&xa0, 0); 1134 + check_xa_alloc_1(&xa1, 1); 1135 + check_xa_alloc_2(&xa0, 0); 1136 + check_xa_alloc_2(&xa1, 1); 1137 + check_xa_alloc_3(&xa0, 0); 1138 + check_xa_alloc_3(&xa1, 1); 1139 } 1140 1141 + static noinline void __check_store_iter(struct xarray *xa, unsigned long start, 1142 unsigned int order, unsigned int present) 1143 { 1144 XA_STATE_ORDER(xas, xa, start, order); 1145 void *entry; 1146 unsigned int count = 0; ··· 1166 XA_BUG_ON(xa, xa_load(xa, start) != xa_mk_index(start)); 1167 XA_BUG_ON(xa, xa_load(xa, start + (1UL << order) - 1) != 1168 xa_mk_index(start)); 1169 + xa_erase_index(xa, start); 1170 } 1171 1172 + static noinline void check_store_iter(struct xarray *xa) 1173 { 1174 unsigned int i, j; 1175 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; 1176 1177 for (i = 0; i < max_order; i++) { 1178 unsigned int min = 1 << i; 1179 unsigned int max = (2 << i) - 1; 1180 + __check_store_iter(xa, 0, i, 0); 1181 XA_BUG_ON(xa, !xa_empty(xa)); 1182 + __check_store_iter(xa, min, i, 0); 1183 XA_BUG_ON(xa, !xa_empty(xa)); 1184 1185 xa_store_index(xa, min, GFP_KERNEL); 1186 + __check_store_iter(xa, min, i, 1); 1187 XA_BUG_ON(xa, !xa_empty(xa)); 1188 xa_store_index(xa, max, GFP_KERNEL); 1189 + __check_store_iter(xa, min, i, 1); 1190 XA_BUG_ON(xa, !xa_empty(xa)); 1191 1192 for (j = 0; j < min; j++) 1193 xa_store_index(xa, j, GFP_KERNEL); 1194 + __check_store_iter(xa, 0, i, min); 1195 XA_BUG_ON(xa, !xa_empty(xa)); 1196 for (j = 0; j < min; j++) 1197 xa_store_index(xa, min + j, GFP_KERNEL); 1198 + __check_store_iter(xa, min, i, min); 1199 XA_BUG_ON(xa, !xa_empty(xa)); 1200 } 1201 #ifdef CONFIG_XARRAY_MULTI 1202 xa_store_index(xa, 63, GFP_KERNEL); 1203 xa_store_index(xa, 65, GFP_KERNEL); 1204 + __check_store_iter(xa, 64, 2, 1); 1205 + xa_erase_index(xa, 63); 1206 #endif 1207 XA_BUG_ON(xa, !xa_empty(xa)); 1208 } 1209 1210 + static noinline void check_multi_find_1(struct xarray *xa, unsigned order) 1211 { 1212 #ifdef CONFIG_XARRAY_MULTI 1213 unsigned long multi = 3 << order; 1214 unsigned long next = 4 << order; 1215 unsigned long index; ··· 1236 XA_BUG_ON(xa, xa_find_after(xa, &index, next, XA_PRESENT) != NULL); 1237 XA_BUG_ON(xa, index != next); 1238 1239 + xa_erase_index(xa, multi); 1240 + xa_erase_index(xa, next); 1241 + xa_erase_index(xa, next + 1); 1242 XA_BUG_ON(xa, !xa_empty(xa)); 1243 #endif 1244 } 1245 1246 + static noinline void check_multi_find_2(struct xarray *xa) 1247 { 1248 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 
10 : 1; 1249 unsigned int i, j; 1250 void *entry; ··· 1260 GFP_KERNEL); 1261 rcu_read_lock(); 1262 xas_for_each(&xas, entry, ULONG_MAX) { 1263 + xa_erase_index(xa, index); 1264 } 1265 rcu_read_unlock(); 1266 + xa_erase_index(xa, index - 1); 1267 XA_BUG_ON(xa, !xa_empty(xa)); 1268 } 1269 } 1270 } 1271 1272 + static noinline void check_multi_find_3(struct xarray *xa) 1273 { 1274 unsigned int order; 1275 1276 for (order = 5; order < order_limit; order++) { ··· 1281 XA_BUG_ON(xa, !xa_empty(xa)); 1282 xa_store_order(xa, 0, order - 4, xa_mk_index(0), GFP_KERNEL); 1283 XA_BUG_ON(xa, xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT)); 1284 + xa_erase_index(xa, 0); 1285 } 1286 } 1287 1288 + static noinline void check_find_1(struct xarray *xa) 1289 { 1290 unsigned long i, j, k; 1291 1292 XA_BUG_ON(xa, !xa_empty(xa)); ··· 1325 else 1326 XA_BUG_ON(xa, entry != NULL); 1327 } 1328 + xa_erase_index(xa, j); 1329 XA_BUG_ON(xa, xa_get_mark(xa, j, XA_MARK_0)); 1330 XA_BUG_ON(xa, !xa_get_mark(xa, i, XA_MARK_0)); 1331 } 1332 + xa_erase_index(xa, i); 1333 XA_BUG_ON(xa, xa_get_mark(xa, i, XA_MARK_0)); 1334 } 1335 XA_BUG_ON(xa, !xa_empty(xa)); 1336 } 1337 1338 + static noinline void check_find_2(struct xarray *xa) 1339 { 1340 void *entry; 1341 unsigned long i, j, index; 1342 ··· 1358 xa_destroy(xa); 1359 } 1360 1361 + static noinline void check_find_3(struct xarray *xa) 1362 { 1363 XA_STATE(xas, xa, 0); 1364 unsigned long i, j, k; 1365 void *entry; ··· 1385 xa_destroy(xa); 1386 } 1387 1388 + static noinline void check_find_4(struct xarray *xa) 1389 { 1390 unsigned long index = 0; 1391 void *entry; 1392 ··· 1400 entry = xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT); 1401 XA_BUG_ON(xa, entry); 1402 1403 + xa_erase_index(xa, ULONG_MAX); 1404 } 1405 1406 + static noinline void check_find(struct xarray *xa) 1407 { 1408 unsigned i; 1409 1410 + check_find_1(xa); 1411 + check_find_2(xa); 1412 + check_find_3(xa); 1413 + check_find_4(xa); 1414 1415 for (i = 2; i < 10; i++) 1416 + check_multi_find_1(xa, i); 1417 + check_multi_find_2(xa); 1418 + check_multi_find_3(xa); 1419 } 1420 1421 /* See find_swap_entry() in mm/shmem.c */ ··· 1441 return entry ? 
xas.xa_index : -1; 1442 } 1443 1444 + static noinline void check_find_entry(struct xarray *xa) 1445 { 1446 #ifdef CONFIG_XARRAY_MULTI 1447 unsigned int order; 1448 unsigned long offset, index; ··· 1471 xa_store_index(xa, ULONG_MAX, GFP_KERNEL); 1472 XA_BUG_ON(xa, xa_find_entry(xa, xa) != -1); 1473 XA_BUG_ON(xa, xa_find_entry(xa, xa_mk_index(ULONG_MAX)) != -1); 1474 + xa_erase_index(xa, ULONG_MAX); 1475 XA_BUG_ON(xa, !xa_empty(xa)); 1476 } 1477 1478 + static noinline void check_pause(struct xarray *xa) 1479 { 1480 XA_STATE(xas, xa, 0); 1481 void *entry; 1482 unsigned int order; ··· 1548 1549 } 1550 1551 + static noinline void check_move_tiny(struct xarray *xa) 1552 { 1553 XA_STATE(xas, xa, 0); 1554 1555 XA_BUG_ON(xa, !xa_empty(xa)); ··· 1568 XA_BUG_ON(xa, xas_prev(&xas) != xa_mk_index(0)); 1569 XA_BUG_ON(xa, xas_prev(&xas) != NULL); 1570 rcu_read_unlock(); 1571 + xa_erase_index(xa, 0); 1572 XA_BUG_ON(xa, !xa_empty(xa)); 1573 } 1574 1575 + static noinline void check_move_max(struct xarray *xa) 1576 { 1577 XA_STATE(xas, xa, 0); 1578 1579 xa_store_index(xa, ULONG_MAX, GFP_KERNEL); ··· 1591 XA_BUG_ON(xa, xas_find(&xas, ULONG_MAX) != NULL); 1592 rcu_read_unlock(); 1593 1594 + xa_erase_index(xa, ULONG_MAX); 1595 XA_BUG_ON(xa, !xa_empty(xa)); 1596 } 1597 1598 + static noinline void check_move_small(struct xarray *xa, unsigned long idx) 1599 { 1600 XA_STATE(xas, xa, 0); 1601 unsigned long i; 1602 ··· 1640 XA_BUG_ON(xa, xas.xa_index != ULONG_MAX); 1641 rcu_read_unlock(); 1642 1643 + xa_erase_index(xa, 0); 1644 + xa_erase_index(xa, idx); 1645 XA_BUG_ON(xa, !xa_empty(xa)); 1646 } 1647 1648 + static noinline void check_move(struct xarray *xa) 1649 { 1650 XA_STATE(xas, xa, (1 << 16) - 1); 1651 unsigned long i; 1652 ··· 1675 rcu_read_unlock(); 1676 1677 for (i = (1 << 8); i < (1 << 15); i++) 1678 + xa_erase_index(xa, i); 1679 1680 i = xas.xa_index; 1681 ··· 1706 1707 xa_destroy(xa); 1708 1709 + check_move_tiny(xa); 1710 + check_move_max(xa); 1711 1712 for (i = 0; i < 16; i++) 1713 + check_move_small(xa, 1UL << i); 1714 1715 for (i = 2; i < 16; i++) 1716 + check_move_small(xa, (1UL << i) - 1); 1717 } 1718 1719 + static noinline void xa_store_many_order(struct xarray *xa, 1720 unsigned long index, unsigned order) 1721 { 1722 XA_STATE_ORDER(xas, xa, index, order); ··· 1739 XA_BUG_ON(xa, xas_error(&xas)); 1740 } 1741 1742 + static noinline void check_create_range_1(struct xarray *xa, 1743 unsigned long index, unsigned order) 1744 { 1745 unsigned long i; 1746 1747 + xa_store_many_order(xa, index, order); 1748 for (i = index; i < index + (1UL << order); i++) 1749 + xa_erase_index(xa, i); 1750 XA_BUG_ON(xa, !xa_empty(xa)); 1751 } 1752 1753 + static noinline void check_create_range_2(struct xarray *xa, unsigned order) 1754 { 1755 unsigned long i; 1756 unsigned long nr = 1UL << order; 1757 1758 for (i = 0; i < nr * nr; i += nr) 1759 + xa_store_many_order(xa, i, order); 1760 for (i = 0; i < nr * nr; i++) 1761 + xa_erase_index(xa, i); 1762 XA_BUG_ON(xa, !xa_empty(xa)); 1763 } 1764 1765 + static noinline void check_create_range_3(void) 1766 { 1767 XA_STATE(xas, NULL, 0); 1768 xas_set_err(&xas, -EEXIST); ··· 1774 XA_BUG_ON(NULL, xas_error(&xas) != -EEXIST); 1775 } 1776 1777 + static noinline void check_create_range_4(struct xarray *xa, 1778 unsigned long index, unsigned order) 1779 { 1780 XA_STATE_ORDER(xas, xa, index, order); 1781 unsigned long base = xas.xa_index; 1782 unsigned long i = 0; ··· 1804 XA_BUG_ON(xa, xas_error(&xas)); 1805 1806 for (i = base; i < base + (1UL << order); i++) 1807 + xa_erase_index(xa, i); 
1808 XA_BUG_ON(xa, !xa_empty(xa)); 1809 } 1810 1811 + static noinline void check_create_range_5(struct xarray *xa, 1812 unsigned long index, unsigned int order) 1813 { 1814 XA_STATE_ORDER(xas, xa, index, order); 1815 unsigned int i; 1816 ··· 1829 xa_destroy(xa); 1830 } 1831 1832 + static noinline void check_create_range(struct xarray *xa) 1833 { 1834 unsigned int order; 1835 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 12 : 1; 1836 1837 for (order = 0; order < max_order; order++) { 1838 + check_create_range_1(xa, 0, order); 1839 + check_create_range_1(xa, 1U << order, order); 1840 + check_create_range_1(xa, 2U << order, order); 1841 + check_create_range_1(xa, 3U << order, order); 1842 + check_create_range_1(xa, 1U << 24, order); 1843 if (order < 10) 1844 + check_create_range_2(xa, order); 1845 1846 + check_create_range_4(xa, 0, order); 1847 + check_create_range_4(xa, 1U << order, order); 1848 + check_create_range_4(xa, 2U << order, order); 1849 + check_create_range_4(xa, 3U << order, order); 1850 + check_create_range_4(xa, 1U << 24, order); 1851 1852 + check_create_range_4(xa, 1, order); 1853 + check_create_range_4(xa, (1U << order) + 1, order); 1854 + check_create_range_4(xa, (2U << order) + 1, order); 1855 + check_create_range_4(xa, (2U << order) - 1, order); 1856 + check_create_range_4(xa, (3U << order) + 1, order); 1857 + check_create_range_4(xa, (3U << order) - 1, order); 1858 + check_create_range_4(xa, (1U << 24) + 1, order); 1859 1860 + check_create_range_5(xa, 0, order); 1861 + check_create_range_5(xa, (1U << order), order); 1862 } 1863 1864 + check_create_range_3(); 1865 } 1866 1867 + static noinline void __check_store_range(struct xarray *xa, unsigned long first, 1868 unsigned long last) 1869 { 1870 #ifdef CONFIG_XARRAY_MULTI 1871 xa_store_range(xa, first, last, xa_mk_index(first), GFP_KERNEL); 1872 ··· 1883 XA_BUG_ON(xa, !xa_empty(xa)); 1884 } 1885 1886 + static noinline void check_store_range(struct xarray *xa) 1887 { 1888 unsigned long i, j; 1889 1890 for (i = 0; i < 128; i++) { 1891 for (j = i; j < 128; j++) { 1892 + __check_store_range(xa, i, j); 1893 + __check_store_range(xa, 128 + i, 128 + j); 1894 + __check_store_range(xa, 4095 + i, 4095 + j); 1895 + __check_store_range(xa, 4096 + i, 4096 + j); 1896 + __check_store_range(xa, 123456 + i, 123456 + j); 1897 + __check_store_range(xa, (1 << 24) + i, (1 << 24) + j); 1898 } 1899 } 1900 } 1901 1902 #ifdef CONFIG_XARRAY_MULTI 1903 + static void check_split_1(struct xarray *xa, unsigned long index, 1904 unsigned int order, unsigned int new_order) 1905 { 1906 XA_STATE_ORDER(xas, xa, index, new_order); 1907 unsigned int i, found; 1908 void *entry; ··· 1940 xa_destroy(xa); 1941 } 1942 1943 + static noinline void check_split(struct xarray *xa) 1944 { 1945 unsigned int order, new_order; 1946 1947 XA_BUG_ON(xa, !xa_empty(xa)); 1948 1949 for (order = 1; order < 2 * XA_CHUNK_SHIFT; order++) { 1950 for (new_order = 0; new_order < order; new_order++) { 1951 + check_split_1(xa, 0, order, new_order); 1952 + check_split_1(xa, 1UL << order, order, new_order); 1953 + check_split_1(xa, 3UL << order, order, new_order); 1954 } 1955 } 1956 } 1957 #else 1958 + static void check_split(struct xarray *xa) { } 1959 #endif 1960 1961 + static void check_align_1(struct xarray *xa, char *name) 1962 { 1963 int i; 1964 unsigned int id; 1965 unsigned long index; ··· 1983 * We should always be able to store without allocating memory after 1984 * reserving a slot. 
1985 */ 1986 + static void check_align_2(struct xarray *xa, char *name) 1987 { 1988 int i; 1989 1990 XA_BUG_ON(xa, !xa_empty(xa)); ··· 2005 XA_BUG_ON(xa, !xa_empty(xa)); 2006 } 2007 2008 + static noinline void check_align(struct xarray *xa) 2009 { 2010 char name[] = "Motorola 68000"; 2011 2012 + check_align_1(xa, name); 2013 + check_align_1(xa, name + 1); 2014 + check_align_1(xa, name + 2); 2015 + check_align_1(xa, name + 3); 2016 + check_align_2(xa, name); 2017 } 2018 2019 static LIST_HEAD(shadow_nodes); ··· 2029 } 2030 } 2031 2032 + static noinline void shadow_remove(struct xarray *xa) 2033 { 2034 struct xa_node *node; 2035 ··· 2043 xa_unlock(xa); 2044 } 2045 2046 + static noinline void check_workingset(struct xarray *xa, unsigned long index) 2047 { 2048 XA_STATE(xas, xa, index); 2049 xas_set_update(&xas, test_update_node); 2050 ··· 2076 xas_unlock(&xas); 2077 XA_BUG_ON(xa, list_empty(&shadow_nodes)); 2078 2079 + shadow_remove(xa); 2080 XA_BUG_ON(xa, !list_empty(&shadow_nodes)); 2081 XA_BUG_ON(xa, !xa_empty(xa)); 2082 } ··· 2085 * Check that the pointer / value / sibling entries are accounted the 2086 * way we expect them to be. 2087 */ 2088 + static noinline void check_account(struct xarray *xa) 2089 { 2090 #ifdef CONFIG_XARRAY_MULTI 2091 unsigned int order; 2092 2093 for (order = 1; order < 12; order++) { ··· 2116 #endif 2117 } 2118 2119 + static noinline void check_get_order(struct xarray *xa) 2120 { 2121 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; 2122 unsigned int order; 2123 unsigned long i, j; ··· 2138 } 2139 } 2140 2141 + static noinline void check_xas_get_order(struct xarray *xa) 2142 { 2143 XA_STATE(xas, xa, 0); 2144 2145 unsigned int max_order = IS_ENABLED(CONFIG_XARRAY_MULTI) ? 20 : 1; ··· 2173 } 2174 } 2175 2176 + static noinline void check_xas_conflict_get_order(struct xarray *xa) 2177 { 2178 XA_STATE(xas, xa, 0); 2179 2180 void *entry; ··· 2233 } 2234 2235 2236 + static noinline void check_destroy(struct xarray *xa) 2237 { 2238 unsigned long index; 2239 2240 XA_BUG_ON(xa, !xa_empty(xa)); ··· 2269 } 2270 2271 static DEFINE_XARRAY(array); 2272 2273 + static int xarray_checks(void) 2274 + { 2275 + check_xa_err(&array); 2276 + check_xas_retry(&array); 2277 + check_xa_load(&array); 2278 + check_xa_mark(&array); 2279 + check_xa_shrink(&array); 2280 + check_xas_erase(&array); 2281 + check_insert(&array); 2282 + check_cmpxchg(&array); 2283 + check_cmpxchg_order(&array); 2284 + check_reserve(&array); 2285 + check_reserve(&xa0); 2286 + check_multi_store(&array); 2287 + check_multi_store_advanced(&array); 2288 + check_get_order(&array); 2289 + check_xas_get_order(&array); 2290 + check_xas_conflict_get_order(&array); 2291 + check_xa_alloc(); 2292 + check_find(&array); 2293 + check_find_entry(&array); 2294 + check_pause(&array); 2295 + check_account(&array); 2296 + check_destroy(&array); 2297 + check_move(&array); 2298 + check_create_range(&array); 2299 + check_store_range(&array); 2300 + check_store_iter(&array); 2301 + check_align(&xa0); 2302 + check_split(&array); 2303 2304 + check_workingset(&array, 0); 2305 + check_workingset(&array, 64); 2306 + check_workingset(&array, 4096); 2307 2308 + printk("XArray: %u of %u tests passed\n", tests_passed, tests_run); 2309 + return (tests_run == tests_passed) ? 
0 : -EINVAL; 2310 + } 2311 2312 + static void xarray_exit(void) 2313 + { 2314 + } 2315 2316 + module_init(xarray_checks); 2317 + module_exit(xarray_exit); 2318 MODULE_AUTHOR("Matthew Wilcox <willy@infradead.org>"); 2319 MODULE_DESCRIPTION("XArray API test module"); 2320 MODULE_LICENSE("GPL");
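
The restored harness above counts every check through XA_BUG_ON(), tallies tests_run/tests_passed, and reports the result from module_init(), returning -EINVAL when any check failed so a failing module load is visible; the #ifndef __KERNEL__ guards also suggest the file is meant to build outside the kernel, where a plain counting macro is convenient. As a rough illustration of that pattern, here is a minimal userspace sketch (the macro and checks below are hypothetical, not taken from the kernel file):

        #include <stdio.h>

        static unsigned int tests_run;
        static unsigned int tests_passed;

        /*
         * Model of the XA_BUG_ON() pattern: count every check, report
         * failures, and keep running so one bad assertion does not hide
         * the rest of the run.
         */
        #define CHECK_BUG_ON(x) do {                                        \
                tests_run++;                                                \
                if (x)                                                      \
                        printf("BUG at %s:%d\n", __func__, __LINE__);       \
                else                                                        \
                        tests_passed++;                                     \
        } while (0)

        static void check_arithmetic(void)
        {
                CHECK_BUG_ON(1 + 1 != 2);       /* passes */
                CHECK_BUG_ON(2 * 2 == 5);       /* passes */
        }

        int main(void)
        {
                check_arithmetic();
                printf("demo: %u of %u checks passed\n", tests_passed, tests_run);
                return (tests_run == tests_passed) ? 0 : 1;
        }
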
+25 -4
mm/compaction.c
··· 2491 */ 2492 static enum compact_result 2493 compaction_suit_allocation_order(struct zone *zone, unsigned int order, 2494 - int highest_zoneidx, unsigned int alloc_flags) 2495 { 2496 unsigned long watermark; 2497 ··· 2500 if (zone_watermark_ok(zone, order, watermark, highest_zoneidx, 2501 alloc_flags)) 2502 return COMPACT_SUCCESS; 2503 2504 if (!compaction_suitable(zone, order, highest_zoneidx)) 2505 return COMPACT_SKIPPED; ··· 2553 if (!is_via_compact_memory(cc->order)) { 2554 ret = compaction_suit_allocation_order(cc->zone, cc->order, 2555 cc->highest_zoneidx, 2556 - cc->alloc_flags); 2557 if (ret != COMPACT_CONTINUE) 2558 return ret; 2559 } ··· 3057 3058 ret = compaction_suit_allocation_order(zone, 3059 pgdat->kcompactd_max_order, 3060 - highest_zoneidx, ALLOC_WMARK_MIN); 3061 if (ret == COMPACT_CONTINUE) 3062 return true; 3063 } ··· 3099 continue; 3100 3101 ret = compaction_suit_allocation_order(zone, 3102 - cc.order, zoneid, ALLOC_WMARK_MIN); 3103 if (ret != COMPACT_CONTINUE) 3104 continue; 3105
··· 2491 */ 2492 static enum compact_result 2493 compaction_suit_allocation_order(struct zone *zone, unsigned int order, 2494 + int highest_zoneidx, unsigned int alloc_flags, 2495 + bool async) 2496 { 2497 unsigned long watermark; 2498 ··· 2499 if (zone_watermark_ok(zone, order, watermark, highest_zoneidx, 2500 alloc_flags)) 2501 return COMPACT_SUCCESS; 2502 + 2503 + /* 2504 + * For unmovable allocations (without ALLOC_CMA), check if there is enough 2505 + * free memory in the non-CMA pageblocks. Otherwise compaction could form 2506 + * the high-order page in CMA pageblocks, which would not help the 2507 + * allocation to succeed. However, limit the check to costly order async 2508 + * compaction (such as opportunistic THP attempts) because there is the 2509 + * possibility that compaction would migrate pages from non-CMA to CMA 2510 + * pageblock. 2511 + */ 2512 + if (order > PAGE_ALLOC_COSTLY_ORDER && async && 2513 + !(alloc_flags & ALLOC_CMA)) { 2514 + watermark = low_wmark_pages(zone) + compact_gap(order); 2515 + if (!__zone_watermark_ok(zone, 0, watermark, highest_zoneidx, 2516 + 0, zone_page_state(zone, NR_FREE_PAGES))) 2517 + return COMPACT_SKIPPED; 2518 + } 2519 2520 if (!compaction_suitable(zone, order, highest_zoneidx)) 2521 return COMPACT_SKIPPED; ··· 2535 if (!is_via_compact_memory(cc->order)) { 2536 ret = compaction_suit_allocation_order(cc->zone, cc->order, 2537 cc->highest_zoneidx, 2538 + cc->alloc_flags, 2539 + cc->mode == MIGRATE_ASYNC); 2540 if (ret != COMPACT_CONTINUE) 2541 return ret; 2542 } ··· 3038 3039 ret = compaction_suit_allocation_order(zone, 3040 pgdat->kcompactd_max_order, 3041 + highest_zoneidx, ALLOC_WMARK_MIN, 3042 + false); 3043 if (ret == COMPACT_CONTINUE) 3044 return true; 3045 } ··· 3079 continue; 3080 3081 ret = compaction_suit_allocation_order(zone, 3082 + cc.order, zoneid, ALLOC_WMARK_MIN, 3083 + false); 3084 if (ret != COMPACT_CONTINUE) 3085 continue; 3086
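
The added block only applies to costly-order async compaction for allocations without ALLOC_CMA: it rechecks the order-0 low watermark plus compact_gap() against the zone's free pages with no CMA credit, so compaction is skipped when only the CMA pageblocks hold enough free memory to clear the bar. A standalone sketch of that decision follows; the PAGE_ALLOC_COSTLY_ORDER threshold matches the kernel's value, every other number is made up:

        #include <stdbool.h>
        #include <stdio.h>

        #define PAGE_ALLOC_COSTLY_ORDER 3   /* same threshold the kernel uses */

        /*
         * Toy model of the new bail-out: for a costly-order async attempt
         * that may not use CMA, require that the non-CMA free pages alone
         * clear the low watermark plus the compaction gap.
         */
        static bool skip_async_costly(unsigned int order, bool async, bool alloc_cma,
                                      unsigned long free_pages,
                                      unsigned long free_cma_pages,
                                      unsigned long low_wmark,
                                      unsigned long compact_gap)
        {
                if (order <= PAGE_ALLOC_COSTLY_ORDER || !async || alloc_cma)
                        return false;               /* check does not apply */
                return free_pages - free_cma_pages < low_wmark + compact_gap;
        }

        int main(void)
        {
                /* Hypothetical zone: 10000 free pages, 9000 of them in CMA. */
                unsigned long free = 10000, free_cma = 9000;
                unsigned long low_wmark = 800, gap = 1024;   /* made-up values */

                printf("order-9 async, !ALLOC_CMA -> skip=%d\n",
                       skip_async_costly(9, true, false, free, free_cma, low_wmark, gap));
                printf("order-9 async, ALLOC_CMA  -> skip=%d\n",
                       skip_async_costly(9, true, true, free, free_cma, low_wmark, gap));
                return 0;
        }
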
+4 -10
mm/gup.c
··· 2320 /* 2321 * Returns the number of collected folios. Return value is always >= 0. 2322 */ 2323 - static unsigned long collect_longterm_unpinnable_folios( 2324 struct list_head *movable_folio_list, 2325 struct pages_or_folios *pofs) 2326 { 2327 - unsigned long i, collected = 0; 2328 struct folio *prev_folio = NULL; 2329 bool drain_allow = true; 2330 2331 for (i = 0; i < pofs->nr_entries; i++) { 2332 struct folio *folio = pofs_get_folio(pofs, i); ··· 2337 2338 if (folio_is_longterm_pinnable(folio)) 2339 continue; 2340 - 2341 - collected++; 2342 2343 if (folio_is_device_coherent(folio)) 2344 continue; ··· 2359 NR_ISOLATED_ANON + folio_is_file_lru(folio), 2360 folio_nr_pages(folio)); 2361 } 2362 - 2363 - return collected; 2364 } 2365 2366 /* ··· 2435 check_and_migrate_movable_pages_or_folios(struct pages_or_folios *pofs) 2436 { 2437 LIST_HEAD(movable_folio_list); 2438 - unsigned long collected; 2439 2440 - collected = collect_longterm_unpinnable_folios(&movable_folio_list, 2441 - pofs); 2442 - if (!collected) 2443 return 0; 2444 2445 return migrate_longterm_unpinnable_folios(&movable_folio_list, pofs);
··· 2320 /* 2321 * Returns the number of collected folios. Return value is always >= 0. 2322 */ 2323 + static void collect_longterm_unpinnable_folios( 2324 struct list_head *movable_folio_list, 2325 struct pages_or_folios *pofs) 2326 { 2327 struct folio *prev_folio = NULL; 2328 bool drain_allow = true; 2329 + unsigned long i; 2330 2331 for (i = 0; i < pofs->nr_entries; i++) { 2332 struct folio *folio = pofs_get_folio(pofs, i); ··· 2337 2338 if (folio_is_longterm_pinnable(folio)) 2339 continue; 2340 2341 if (folio_is_device_coherent(folio)) 2342 continue; ··· 2361 NR_ISOLATED_ANON + folio_is_file_lru(folio), 2362 folio_nr_pages(folio)); 2363 } 2364 } 2365 2366 /* ··· 2439 check_and_migrate_movable_pages_or_folios(struct pages_or_folios *pofs) 2440 { 2441 LIST_HEAD(movable_folio_list); 2442 2443 + collect_longterm_unpinnable_folios(&movable_folio_list, pofs); 2444 + if (list_empty(&movable_folio_list)) 2445 return 0; 2446 2447 return migrate_longterm_unpinnable_folios(&movable_folio_list, pofs);
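
The old helper counted every folio it deemed unpinnable, including device-coherent folios and folios it failed to isolate, none of which reach movable_folio_list; the caller could then see a non-zero count alongside an empty list and keep retrying. Switching the caller to list_empty() ties the retry decision to work that was actually queued (note that the pre-existing comment above the function still describes a return value the void version no longer has). A small userspace model of the counter-versus-list mismatch, with made-up data:

        #include <stdbool.h>
        #include <stdio.h>

        /* Toy "folio": some entries are unpinnable but cannot be queued for
         * migration (think device-coherent or already-isolated folios). */
        struct item {
                bool unpinnable;
                bool queueable;
        };

        int main(void)
        {
                struct item items[] = {
                        { .unpinnable = true,  .queueable = false },  /* counted, never listed */
                        { .unpinnable = false, .queueable = false },
                };
                unsigned long collected = 0, listed = 0;

                for (unsigned int i = 0; i < sizeof(items) / sizeof(items[0]); i++) {
                        if (!items[i].unpinnable)
                                continue;
                        collected++;            /* what the old code reported  */
                        if (items[i].queueable)
                                listed++;       /* what migration can act on   */
                }

                /* Old check: "collected != 0" -> retry with an empty list.
                 * New check: "list not empty" -> done, nothing to migrate.   */
                printf("collected=%lu listed=%lu -> old retries=%s, new retries=%s\n",
                       collected, listed,
                       collected ? "yes" : "no", listed ? "yes" : "no");
                return 0;
        }
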
+1 -1
mm/hugetlb.c
··· 3309 .thread_fn = gather_bootmem_prealloc_parallel, 3310 .fn_arg = NULL, 3311 .start = 0, 3312 - .size = num_node_state(N_MEMORY), 3313 .align = 1, 3314 .min_chunk = 1, 3315 .max_threads = num_node_state(N_MEMORY),
··· 3309 .thread_fn = gather_bootmem_prealloc_parallel, 3310 .fn_arg = NULL, 3311 .start = 0, 3312 + .size = nr_node_ids, 3313 .align = 1, 3314 .min_chunk = 1, 3315 .max_threads = num_node_state(N_MEMORY),
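
nr_node_ids is the number of possible node IDs (highest possible node plus one), while num_node_state(N_MEMORY) only counts nodes that currently have memory; with interleaved or sparse memory nodes the count can be smaller than the highest node index, so a parallel job sized by the count never reaches the high-numbered nodes. A toy illustration with a hypothetical node map (memory on nodes 0 and 2 only):

        #include <stdbool.h>
        #include <stdio.h>

        int main(void)
        {
                /* Hypothetical machine: 3 possible nodes, node 1 has no memory. */
                bool node_has_memory[] = { true, false, true };
                unsigned int nr_node_ids = 3;      /* highest possible node + 1 */
                unsigned int memory_nodes = 0;     /* num_node_state(N_MEMORY)  */

                for (unsigned int nid = 0; nid < nr_node_ids; nid++)
                        if (node_has_memory[nid])
                                memory_nodes++;

                /* A job that walks indices [0, size) and skips memory-less nodes. */
                printf("size = memory_nodes (%u): visits nodes", memory_nodes);
                for (unsigned int nid = 0; nid < memory_nodes; nid++)
                        if (node_has_memory[nid])
                                printf(" %u", nid);
                printf("   <- node 2 is never reached\n");

                printf("size = nr_node_ids  (%u): visits nodes", nr_node_ids);
                for (unsigned int nid = 0; nid < nr_node_ids; nid++)
                        if (node_has_memory[nid])
                                printf(" %u", nid);
                printf("\n");
                return 0;
        }
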
+2
mm/kfence/core.c
··· 21 #include <linux/log2.h> 22 #include <linux/memblock.h> 23 #include <linux/moduleparam.h> 24 #include <linux/notifier.h> 25 #include <linux/panic_notifier.h> 26 #include <linux/random.h> ··· 1085 * properties (e.g. reside in DMAable memory). 1086 */ 1087 if ((flags & GFP_ZONEMASK) || 1088 (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32))) { 1089 atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]); 1090 return NULL;
··· 21 #include <linux/log2.h> 22 #include <linux/memblock.h> 23 #include <linux/moduleparam.h> 24 + #include <linux/nodemask.h> 25 #include <linux/notifier.h> 26 #include <linux/panic_notifier.h> 27 #include <linux/random.h> ··· 1084 * properties (e.g. reside in DMAable memory). 1085 */ 1086 if ((flags & GFP_ZONEMASK) || 1087 + ((flags & __GFP_THISNODE) && num_online_nodes() > 1) || 1088 (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32))) { 1089 atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]); 1090 return NULL;
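
The KFENCE pool is a single area that lives on one node, so a __GFP_THISNODE request can only be satisfied from the pool by luck on a multi-node machine; such requests are now counted as incompatible and skipped. A tiny standalone model of the extended skip predicate (the flag names and bit values below are stand-ins for GFP_ZONEMASK and __GFP_THISNODE, not the real GFP bits):

        #include <stdbool.h>
        #include <stdio.h>

        #define FLAG_ZONEMASK   0x0fu   /* stand-in, made-up value */
        #define FLAG_THISNODE   0x40u   /* stand-in, made-up value */

        /*
         * Mirrors the shape of the check in __kfence_alloc(): skip the pool
         * when the caller asks for zone or node placement the shared pool
         * cannot guarantee.
         */
        static bool kfence_would_skip(unsigned int flags, unsigned int online_nodes,
                                      bool cache_is_dma)
        {
                return (flags & FLAG_ZONEMASK) ||
                       ((flags & FLAG_THISNODE) && online_nodes > 1) ||
                       cache_is_dma;
        }

        int main(void)
        {
                printf("THISNODE, 1 node : skip=%d\n",
                       kfence_would_skip(FLAG_THISNODE, 1, false));
                printf("THISNODE, 2 nodes: skip=%d\n",
                       kfence_would_skip(FLAG_THISNODE, 2, false));
                return 0;
        }
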
+1 -1
mm/kmemleak.c
··· 1689 unsigned long phys = object->pointer; 1690 1691 if (PHYS_PFN(phys) < min_low_pfn || 1692 - PHYS_PFN(phys + object->size) >= max_low_pfn) 1693 __paint_it(object, KMEMLEAK_BLACK); 1694 } 1695
··· 1689 unsigned long phys = object->pointer; 1690 1691 if (PHYS_PFN(phys) < min_low_pfn || 1692 + PHYS_PFN(phys + object->size) > max_low_pfn) 1693 __paint_it(object, KMEMLEAK_BLACK); 1694 } 1695
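
PHYS_PFN(phys + object->size) is the first page frame past the object, so a value equal to max_low_pfn still means the object ends inside the checked range; the old ">=" treated that exact-boundary case as out of range and blacklisted the object. A quick boundary calculation with hypothetical numbers (4 KiB pages, max_low_pfn treated here as the exclusive upper bound):

        #include <stdio.h>

        #define PAGE_SHIFT 12
        #define PHYS_PFN(x) ((unsigned long)(x) >> PAGE_SHIFT)

        int main(void)
        {
                unsigned long min_low_pfn = 0x10;     /* hypothetical lowmem range */
                unsigned long max_low_pfn = 0x1000;   /* first pfn past lowmem     */

                /* Object occupying the very last lowmem page. */
                unsigned long phys = 0xfffUL << PAGE_SHIFT;
                unsigned long size = 1UL << PAGE_SHIFT;
                unsigned long end_pfn = PHYS_PFN(phys + size);   /* == 0x1000 */

                printf("end_pfn=%#lx max_low_pfn=%#lx\n", end_pfn, max_low_pfn);
                printf("old check (>=): out of range? %d  (object wrongly ignored)\n",
                       PHYS_PFN(phys) < min_low_pfn || end_pfn >= max_low_pfn);
                printf("new check (>) : out of range? %d\n",
                       PHYS_PFN(phys) < min_low_pfn || end_pfn > max_low_pfn);
                return 0;
        }
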
+1 -1
mm/swapfile.c
··· 794 if (!cluster_scan_range(si, ci, offset, nr_pages, &need_reclaim)) 795 continue; 796 if (need_reclaim) { 797 - ret = cluster_reclaim_range(si, ci, start, end); 798 /* 799 * Reclaim drops ci->lock and cluster could be used 800 * by another order. Not checking flag as off-list
··· 794 if (!cluster_scan_range(si, ci, offset, nr_pages, &need_reclaim)) 795 continue; 796 if (need_reclaim) { 797 + ret = cluster_reclaim_range(si, ci, offset, offset + nr_pages); 798 /* 799 * Reclaim drops ci->lock and cluster could be used 800 * by another order. Not checking flag as off-list
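
The scan walks the cluster in windows of nr_pages starting at offset; the old call apparently asked reclaim to cover a range based on the scan's start/end variables rather than the window that was just checked, which is what the new offset-based arguments correct. A toy model of the difference in call shape (all numbers are hypothetical):

        #include <stdio.h>

        /* Toy model: scan a range in windows of nr entries; when a window
         * needs reclaim, reclaim exactly that window, not a range anchored
         * at the scan's original start. */
        static void reclaim(const char *tag, unsigned long lo, unsigned long hi)
        {
                printf("%s: reclaim [%lu, %lu)\n", tag, lo, hi);
        }

        int main(void)
        {
                unsigned long start = 0, end = 512, nr = 4;

                for (unsigned long offset = start; offset + nr <= end; offset += nr) {
                        int need_reclaim = (offset == 256);   /* pretend one window is dirty */

                        if (!need_reclaim)
                                continue;
                        reclaim("old call shape", start, end);          /* whole range    */
                        reclaim("new call shape", offset, offset + nr); /* scanned window */
                }
                return 0;
        }
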
+9 -4
mm/vmscan.c
··· 1086 struct folio_batch free_folios; 1087 LIST_HEAD(ret_folios); 1088 LIST_HEAD(demote_folios); 1089 - unsigned int nr_reclaimed = 0; 1090 unsigned int pgactivate = 0; 1091 bool do_demote_pass; 1092 struct swap_iocb *plug = NULL; ··· 1550 /* 'folio_list' is always empty here */ 1551 1552 /* Migrate folios selected for demotion */ 1553 - stat->nr_demoted = demote_folio_list(&demote_folios, pgdat); 1554 - nr_reclaimed += stat->nr_demoted; 1555 /* Folios that could not be demoted are still in @demote_folios */ 1556 if (!list_empty(&demote_folios)) { 1557 /* Folios which weren't demoted go back on @folio_list */ ··· 1693 unsigned long nr_skipped[MAX_NR_ZONES] = { 0, }; 1694 unsigned long skipped = 0; 1695 unsigned long scan, total_scan, nr_pages; 1696 LIST_HEAD(folios_skipped); 1697 1698 total_scan = 0; ··· 1708 nr_pages = folio_nr_pages(folio); 1709 total_scan += nr_pages; 1710 1711 - if (folio_zonenum(folio) > sc->reclaim_idx) { 1712 nr_skipped[folio_zonenum(folio)] += nr_pages; 1713 move_to = &folios_skipped; 1714 goto move; 1715 } 1716
··· 1086 struct folio_batch free_folios; 1087 LIST_HEAD(ret_folios); 1088 LIST_HEAD(demote_folios); 1089 + unsigned int nr_reclaimed = 0, nr_demoted = 0; 1090 unsigned int pgactivate = 0; 1091 bool do_demote_pass; 1092 struct swap_iocb *plug = NULL; ··· 1550 /* 'folio_list' is always empty here */ 1551 1552 /* Migrate folios selected for demotion */ 1553 + nr_demoted = demote_folio_list(&demote_folios, pgdat); 1554 + nr_reclaimed += nr_demoted; 1555 + stat->nr_demoted += nr_demoted; 1556 /* Folios that could not be demoted are still in @demote_folios */ 1557 if (!list_empty(&demote_folios)) { 1558 /* Folios which weren't demoted go back on @folio_list */ ··· 1692 unsigned long nr_skipped[MAX_NR_ZONES] = { 0, }; 1693 unsigned long skipped = 0; 1694 unsigned long scan, total_scan, nr_pages; 1695 + unsigned long max_nr_skipped = 0; 1696 LIST_HEAD(folios_skipped); 1697 1698 total_scan = 0; ··· 1706 nr_pages = folio_nr_pages(folio); 1707 total_scan += nr_pages; 1708 1709 + /* Using max_nr_skipped to prevent hard LOCKUP*/ 1710 + if (max_nr_skipped < SWAP_CLUSTER_MAX_SKIPPED && 1711 + (folio_zonenum(folio) > sc->reclaim_idx)) { 1712 nr_skipped[folio_zonenum(folio)] += nr_pages; 1713 move_to = &folios_skipped; 1714 + max_nr_skipped++; 1715 goto move; 1716 } 1717
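
Two fixes land here: stat->nr_demoted is now accumulated, since the demotion step can run a second time after partial failures and an empty second pass would otherwise overwrite the first count with zero; and isolate_lru_folios() now caps how many ineligible-zone folios it will skip (SWAP_CLUSTER_MAX_SKIPPED) so a pathological list cannot keep the CPU busy long enough to trigger a hard lockup. A minimal model of why assignment loses counts while accumulation keeps them (counts are made up):

        #include <stdio.h>

        /* Toy model: a routine whose "demote" step can run twice per call,
         * with the second pass finding nothing to demote. */
        int main(void)
        {
                unsigned int pass_results[] = { 32, 0 };   /* hypothetical counts */
                unsigned int assigned = 0, accumulated = 0;

                for (unsigned int i = 0; i < 2; i++) {
                        assigned = pass_results[i];        /* old: stat->nr_demoted =  */
                        accumulated += pass_results[i];    /* new: stat->nr_demoted += */
                }
                printf("assigned=%u (first pass lost), accumulated=%u\n",
                       assigned, accumulated);
                return 0;
        }
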
+1 -1
mm/zsmalloc.c
··· 452 .lock = INIT_LOCAL_LOCK(lock), 453 }; 454 455 - static inline bool is_first_zpdesc(struct zpdesc *zpdesc) 456 { 457 return PagePrivate(zpdesc_page(zpdesc)); 458 }
··· 452 .lock = INIT_LOCAL_LOCK(lock), 453 }; 454 455 + static inline bool __maybe_unused is_first_zpdesc(struct zpdesc *zpdesc) 456 { 457 return PagePrivate(zpdesc_page(zpdesc)); 458 }
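
__maybe_unused keeps the compiler from warning when a static helper ends up with no callers in a given configuration, which is presumably the situation is_first_zpdesc() can land in; the function itself is unchanged. A minimal demonstration of the attribute (the macro below mirrors the kernel's definition for the purpose of this sketch):

        #include <stdio.h>

        /* With -Wall, a static function with no callers normally triggers
         * -Wunused-function; the unused attribute suppresses the warning
         * without removing the function. */
        #define __maybe_unused __attribute__((unused))

        static int __maybe_unused helper_only_used_in_some_configs(int x)
        {
                return x * 2;
        }

        int main(void)
        {
                puts("compiles cleanly with -Wall even though the helper is unused");
                return 0;
        }
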
+1 -1
scripts/gdb/linux/cpus.py
··· 167 var_ptr = gdb.parse_and_eval("&pcpu_hot.current_task") 168 return per_cpu(var_ptr, cpu).dereference() 169 elif utils.is_target_arch("aarch64"): 170 - current_task_addr = gdb.parse_and_eval("$SP_EL0") 171 if (current_task_addr >> 63) != 0: 172 current_task = current_task_addr.cast(task_ptr_type) 173 return current_task.dereference()
··· 167 var_ptr = gdb.parse_and_eval("&pcpu_hot.current_task") 168 return per_cpu(var_ptr, cpu).dereference() 169 elif utils.is_target_arch("aarch64"): 170 + current_task_addr = gdb.parse_and_eval("(unsigned long)$SP_EL0") 171 if (current_task_addr >> 63) != 0: 172 current_task = current_task_addr.cast(task_ptr_type) 173 return current_task.dereference()