mm, shmem: prevent infinite loop on truncate race

When truncating a large swap entry, shmem_free_swap() returns 0 if the
entry's base index doesn't match the given index due to lookup alignment.
The failure fallback path checks whether the entry crosses the end border
and aborts when it does, so truncate won't erase an unexpected entry or
range. But one scenario was overlooked.
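
For illustration only, the mismatch and the old fallback check can be
modelled in plain userspace C; the helper names and numbers below (an
order-4 entry at base 16, truncation end at page 40) are made up and are
not the kernel's actual shmem_free_swap():

#include <stdio.h>

/*
 * Illustrative model only, not the kernel code: a large swap entry of
 * the given order covers pages [base, base + (1 << order)).  Freeing
 * only succeeds when the lookup index sits on the entry's base;
 * otherwise it reports 0 and the caller takes the fallback path.
 */
static long model_free_swap(unsigned long base, int order, unsigned long index)
{
	if (index != base)		/* lookup alignment mismatch */
		return 0;
	return 1L << order;		/* number of slots freed */
}

/* The pre-fix fallback: only skip entries that cross the end border. */
static int old_fallback_skips(unsigned long index, int order, unsigned long end)
{
	return order > 0 && index + (1UL << order) > end;
}

int main(void)
{
	/* hypothetical order-4 entry at base 16, truncation ends at page 40 */
	printf("freed = %ld\n", model_free_swap(16, 4, 18));	/* 0: mismatch */
	printf("skip  = %d\n", old_fallback_skips(18, 4, 40));	/* 0: 34 <= 40 */
	return 0;
}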

When `index` points to the middle of a large swap entry, and the large
swap entry doesn't cross the end border, find_get_entries() will return
that large swap entry as the first item in the batch with `indices[0]`
equal to `index`. The entry's base index will be smaller than
`indices[0]`, so shmem_free_swap() will fail and return 0 due to the
"base < index" check. The code will then call shmem_confirm_swap(), get
the order, check whether the entry crosses the end border (it doesn't),
and retry with the same index.

The next iteration will find the same entry again at the same index with
the same indices, leading to an infinite loop.
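
With made-up numbers (an order-4, 16-page entry at base 16 and a truncate
range ending at page 40; none of these values are from the report), the
stuck retry can be traced with this minimal standalone sketch:

#include <stdio.h>

int main(void)
{
	/* hypothetical order-4 (16-page) swap entry at base 16, pages 16..31 */
	const unsigned long base = 16, end = 40;
	const int order = 4;
	unsigned long index = 18;	/* truncation index lands mid-entry */
	int iter;

	for (iter = 0; iter < 3; iter++) {
		/* the lookup returns the large entry with indices[0] == index */
		unsigned long indices0 = index;
		/* free fails: the entry's base (16) is below indices[0] (18) */
		long freed = (indices0 == base) ? (1L << order) : 0;
		/* old border check: 18 + 16 = 34 does not cross end (40) */
		int crosses_end = indices0 + (1UL << order) > end;

		printf("iter %d: index=%lu freed=%ld crosses_end=%d\n",
		       iter, index, freed, crosses_end);
		if (freed || crosses_end)
			break;
		index = indices0;	/* retry with the same index: no progress */
	}
	return 0;
}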

Fix this by retrying with the index rounded down to the entry's base,
and aborting if that index falls below the start of the truncate range.
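
Continuing the same made-up numbers, a minimal sketch of the fixed
fallback; the round_down() macro here is a simplified power-of-two
stand-in, not the kernel header:

#include <stdio.h>

/* simplified power-of-two rounding, standing in for the kernel's round_down() */
#define round_down(x, y)	((x) & ~((unsigned long)(y) - 1))

int main(void)
{
	/* same hypothetical setup: order-4 entry, truncate range [18, 40) */
	const unsigned long start = 18, end = 40;
	const int order = 4;
	unsigned long index = 18;

	/* fixed fallback: retry from the entry's base, not the middle */
	unsigned long base = round_down(index, 1UL << order);	/* 16 */

	if (base < start || base + (1UL << order) > end)
		printf("skip entry at %lu, already handled by partial truncate\n",
		       base);
	else
		printf("retry from base %lu\n", base);
	return 0;
}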

Link: https://lkml.kernel.org/r/aXo6ltB5iqAKJzY8@KASONG-MC4
Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
Fixes: 8a1968bd997f ("mm/shmem, swap: fix race of truncate and swap entry split")
Signed-off-by: Kairui Song <kasong@tencent.com>
Reported-by: Chris Mason <clm@meta.com>
Closes: https://lore.kernel.org/linux-mm/20260128130336.727049-1-clm@meta.com/
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Chris Li <chrisl@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

mm/shmem.c | +14 -9

@@ -1211,17 +1211,22 @@
 				swaps_freed = shmem_free_swap(mapping, indices[i],
 							      end - 1, folio);
 				if (!swaps_freed) {
-					/*
-					 * If found a large swap entry cross the end border,
-					 * skip it as the truncate_inode_partial_folio above
-					 * should have at least zerod its content once.
-					 */
+					pgoff_t base = indices[i];
+
 					order = shmem_confirm_swap(mapping, indices[i],
 								   radix_to_swp_entry(folio));
-					if (order > 0 && indices[i] + (1 << order) > end)
-						continue;
-					/* Swap was replaced by page: retry */
-					index = indices[i];
+					/*
+					 * If found a large swap entry cross the end or start
+					 * border, skip it as the truncate_inode_partial_folio
+					 * above should have at least zerod its content once.
+					 */
+					if (order > 0) {
+						base = round_down(base, 1 << order);
+						if (base < start || base + (1 << order) > end)
+							continue;
+					}
+					/* Swap was replaced by page or extended, retry */
+					index = base;
 					break;
 				}
 				nr_swaps_freed += swaps_freed;