
mm: move memory_failure_queue() into copy_mc_[user]_highpage()

Patch series "mm: migrate: support poison recover from migrate folio", v5.

Folio migration is widely used in the kernel (memory compaction, memory
hotplug, soft offline, NUMA balancing, memory demotion/promotion, etc),
but once a poisoned source folio is accessed during migration, the kernel
will panic.

There is a mechanism in the kernel to recover from uncorrectable memory
errors, ARCH_HAS_COPY_MC (eg, Machine Check Safe Memory Copy on x86),
which is already used in NVDIMM and core-mm paths (eg, CoW, khugepaged,
coredump, ksm copy); see the copy_mc_to_{user,kernel}() and
copy_mc_[user_]highpage() callers.

This series of patches adds a recovery mechanism for the folio copy step
of the widely used folio migration. Please note, since folio migration
is not guaranteed to succeed anyway, we can choose to make it tolerant
of memory failures: add folio_mc_copy(), a #MC-safe version of
folio_copy(), so that once a poisoned source folio is accessed we return
an error and fail the folio migration, avoiding panics like the one
shown below.

CPU: 1 PID: 88343 Comm: test_softofflin Kdump: loaded Not tainted 6.6.0
pc : copy_page+0x10/0xc0
lr : copy_highpage+0x38/0x50
...
Call trace:
copy_page+0x10/0xc0
folio_copy+0x78/0x90
migrate_folio_extra+0x54/0xa0
move_to_new_folio+0xd8/0x1f0
migrate_folio_move+0xb8/0x300
migrate_pages_batch+0x528/0x788
migrate_pages_sync+0x8c/0x258
migrate_pages+0x440/0x528
soft_offline_in_use_page+0x2ec/0x3c0
soft_offline_page+0x238/0x310
soft_offline_page_store+0x6c/0xc0
dev_attr_store+0x20/0x40
sysfs_kf_write+0x4c/0x68
kernfs_fop_write_iter+0x130/0x1c8
new_sync_write+0xa4/0x138
vfs_write+0x238/0x2d8
ksys_write+0x74/0x110


This patch (of 5):

There is a memory_failure_queue() call after each failed
copy_mc_[user_]highpage(); see the callers, eg, the CoW and KSM page
copy paths. It marks the source page as h/w poisoned and unmaps it from
other tasks. The upcoming poison recovery for folio migration will need
to do the same thing, so let's move the memory_failure_queue() call into
copy_mc_[user_]highpage() itself instead of duplicating it in each
caller. This also improves the handling of poisoned pages in khugepaged.

Link: https://lkml.kernel.org/r/20240626085328.608006-1-wangkefeng.wang@huawei.com
Link: https://lkml.kernel.org/r/20240626085328.608006-2-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Benjamin LaHaise <bcrl@kvack.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Jiaqi Yan <jiaqiyan@google.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Kefeng Wang, committed by Andrew Morton
28bdacbc 8ef6fd0e

+9 -10

include/linux/highmem.h (+6):

···
 	kunmap_local(vto);
 	kunmap_local(vfrom);

+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
···
 	kmsan_copy_page_meta(to, from);
 	kunmap_local(vto);
 	kunmap_local(vfrom);
+
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);

 	return ret;
 }
mm/ksm.c (-1):

···
 	if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
 				  addr, vma)) {
 		folio_put(new_folio);
-		memory_failure_queue(folio_pfn(folio), 0);
 		return ERR_PTR(-EHWPOISON);
 	}
 	folio_set_dirty(new_folio);
mm/memory.c (+3 -9):

···
 	unsigned long addr = vmf->address;

 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma)) {
-			memory_failure_queue(page_to_pfn(src), 0);
+		if (copy_mc_user_highpage(dst, src, addr, vma))
 			return -EHWPOISON;
-		}
 		return 0;
 	}
···
 		cond_resched();
 		if (copy_mc_user_highpage(dst_page, src_page,
-					  addr + i*PAGE_SIZE, vma)) {
-			memory_failure_queue(page_to_pfn(src_page), 0);
+					  addr + i*PAGE_SIZE, vma))
 			return -EHWPOISON;
-		}
 	}
 	return 0;
 }
···
 	struct page *dst = folio_page(copy_arg->dst, idx);
 	struct page *src = folio_page(copy_arg->src, idx);

-	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
-		memory_failure_queue(page_to_pfn(src), 0);
+	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
 		return -EHWPOISON;
-	}
 	return 0;
 }