
mm/shmem, swap: fix race of truncate and swap entry split

The shmem swap freeing helper does not handle the order of swap entries
correctly. It erases the swap entry with xa_cmpxchg_irq, but it reads the
entry's order beforehand with xa_get_order, without lock protection, so it
may see a stale order if the entry is split or otherwise modified between
the xa_get_order and the xa_cmpxchg_irq.
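
For reference, this is the pre-fix helper as removed by the diff below
(the comments marking the race window are added here for illustration):

	static long shmem_free_swap(struct address_space *mapping,
				    pgoff_t index, void *radswap)
	{
		/* Order is read without holding the xa_lock... */
		int order = xa_get_order(&mapping->i_pages, index);
		void *old;

		/* ...so the entry can be split or replaced before this. */
		old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
		if (old != radswap)
			return 0;
		/* "order" may be stale here, freeing the wrong number of pages. */
		free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);

		return 1 << order;
	}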

Worse, the order could grow beyond the expected value and cause truncation
to erase data past the end border. For example, if the target entry and
the following entries are swapped in or freed, and a large folio is then
installed in their place and swapped out reusing the same swap entry, the
xa_cmpxchg_irq will still succeed. This is very unlikely to happen,
though.

To fix this, open code the XArray cmpxchg and put the order retrieval and
the value check in the same critical section. Also ensure the order cannot
exceed the end border: skip the entry if it crosses the border.
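
Condensed from the first hunk below (local declarations elided, comments
added for illustration), the core of the new critical section is:

	XA_STATE(xas, &mapping->i_pages, index);

	xas_lock_irq(&xas);
	entry = xas_load(&xas);
	if (entry == radswap) {
		/* Order is read under xa_lock, so it cannot go stale. */
		nr_pages = 1 << xas_get_order(&xas);
		base = round_down(xas.xa_index, nr_pages);
		/* Refuse entries starting early or crossing the end border. */
		if (base < index || base + nr_pages - 1 > end)
			nr_pages = 0;
		else
			xas_store(&xas, NULL);
	}
	xas_unlock_irq(&xas);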

Skipping large swap entries that cross the end border is safe here. Shmem
truncation iterates the range twice. In the first pass, find_lock_entries
already filtered out such entries, and shmem swaps in any entry that
crosses the end border and partially truncates the folio (splitting the
folio, or at least zeroing part of it). So if the second pass still sees a
swap entry crossing the end border, its content must already have been
erased.
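
In the second pass this shows up as the new check in the last hunk below:
shmem_confirm_swap reports the entry's order, and an entry that still
crosses the border is skipped rather than retried:

	order = shmem_confirm_swap(mapping, indices[i],
				   radix_to_swp_entry(folio));
	/* Entry crosses the end border: contents were already zeroed. */
	if (order > 0 && indices[i] + (1 << order) > end)
		continue;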

I observed random swapoff hangs and kernel panics when stress testing
ZSWAP with shmem. After applying this patch, all problems are gone.

Link: https://lkml.kernel.org/r/20260120-shmem-swap-fix-v3-1-3d33ebfbc057@tencent.com
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Kairui Song and committed by Andrew Morton (8a1968bd 6c790212)

+34 -11
mm/shmem.c
@@ -962,17 +962,29 @@
 	 * being freed).
 	 */
 static long shmem_free_swap(struct address_space *mapping,
-			    pgoff_t index, void *radswap)
+			    pgoff_t index, pgoff_t end, void *radswap)
 {
-	int order = xa_get_order(&mapping->i_pages, index);
-	void *old;
+	XA_STATE(xas, &mapping->i_pages, index);
+	unsigned int nr_pages = 0;
+	pgoff_t base;
+	void *entry;
 
-	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
-	if (old != radswap)
-		return 0;
-	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);
+	xas_lock_irq(&xas);
+	entry = xas_load(&xas);
+	if (entry == radswap) {
+		nr_pages = 1 << xas_get_order(&xas);
+		base = round_down(xas.xa_index, nr_pages);
+		if (base < index || base + nr_pages - 1 > end)
+			nr_pages = 0;
+		else
+			xas_store(&xas, NULL);
+	}
+	xas_unlock_irq(&xas);
 
-	return 1 << order;
+	if (nr_pages)
+		free_swap_and_cache_nr(radix_to_swp_entry(radswap), nr_pages);
+
+	return nr_pages;
 }
 
 /*
@@ -1136,8 +1124,8 @@
 		if (xa_is_value(folio)) {
 			if (unfalloc)
 				continue;
-			nr_swaps_freed += shmem_free_swap(mapping,
-					indices[i], folio);
+			nr_swaps_freed += shmem_free_swap(mapping, indices[i],
+							  end - 1, folio);
 			continue;
 		}
 
@@ -1203,12 +1191,23 @@
 		folio = fbatch.folios[i];
 
 		if (xa_is_value(folio)) {
+			int order;
 			long swaps_freed;
 
 			if (unfalloc)
 				continue;
-			swaps_freed = shmem_free_swap(mapping, indices[i], folio);
+			swaps_freed = shmem_free_swap(mapping, indices[i],
+						      end - 1, folio);
 			if (!swaps_freed) {
+				/*
+				 * If we find a large swap entry crossing the end
+				 * border, skip it: truncate_inode_partial_folio above
+				 * should have at least zeroed its content once.
+				 */
+				order = shmem_confirm_swap(mapping, indices[i],
+						radix_to_swp_entry(folio));
+				if (order > 0 && indices[i] + (1 << order) > end)
+					continue;
 				/* Swap was replaced by page: retry */
 				index = indices[i];
 				break;