Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: filemap_unaccount_folio() large skip mapcount fixup

The page_mapcount_reset() when folio_mapped() while mapping_exiting() was
devised long before there were huge or compound pages in the cache. It is
still valid for small pages, but not at all clear what's right to check
and reset on large pages. Just don't try when folio_test_large().

Link: https://lkml.kernel.org/r/879c4426-4122-da9c-1a86-697f2c9a083@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Hugh Dickins, committed by Linus Torvalds
85207ad8 bb43b14b

+13 -13
mm/filemap.c
@@ -153,24 +153,24 @@
 	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
 	if (!IS_ENABLED(CONFIG_DEBUG_VM) && unlikely(folio_mapped(folio))) {
-		int mapcount;
-
 		pr_alert("BUG: Bad page cache in process %s pfn:%05lx\n",
 			 current->comm, folio_pfn(folio));
 		dump_page(&folio->page, "still mapped when deleted");
 		dump_stack();
 		add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 
-		mapcount = page_mapcount(&folio->page);
-		if (mapping_exiting(mapping) &&
-		    folio_ref_count(folio) >= mapcount + 2) {
-			/*
-			 * All vmas have already been torn down, so it's
-			 * a good bet that actually the folio is unmapped,
-			 * and we'd prefer not to leak it: if we're wrong,
-			 * some other bad page check should catch it later.
-			 */
-			page_mapcount_reset(&folio->page);
-			folio_ref_sub(folio, mapcount);
+		if (mapping_exiting(mapping) && !folio_test_large(folio)) {
+			int mapcount = page_mapcount(&folio->page);
+
+			if (folio_ref_count(folio) >= mapcount + 2) {
+				/*
+				 * All vmas have already been torn down, so it's
+				 * a good bet that actually the page is unmapped
+				 * and we'd rather not leak it: if we're wrong,
+				 * another bad page check should catch it later.
+				 */
+				page_mapcount_reset(&folio->page);
+				folio_ref_sub(folio, mapcount);
+			}
 		}
 	}
 