Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: use __SetPageSwapBacked and dont ClearPageSwapBacked

v3.16 commit 07a427884348 ("mm: shmem: avoid atomic operation during
shmem_getpage_gfp") rightly replaced one instance of SetPageSwapBacked
by __SetPageSwapBacked, pointing out that the newly allocated page is
not yet visible to other users (except speculative get_page_unless_zero-
ers, who may not update page flags before their further checks).

That was part of a series in which Mel was focused on tmpfs profiles:
but almost all SetPageSwapBacked uses can be so optimized, with the same
justification.

Remove ClearPageSwapBacked from __read_swap_cache_async() error path:
it's not an error to free a page with PG_swapbacked set.

Follow a convention of __SetPageLocked, __SetPageSwapBacked instead of
doing it differently in different places; but that's for tidiness - if
the ordering actually mattered, we should not be using the __variants.

There's probably scope for further __SetPageFlags in other places, but
SwapBacked is the one I'm interested in at the moment.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Reviewed-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Hugh Dickins, committed by Linus Torvalds
fa9949da 9d5e6a9f

+7 -8 total

mm/migrate.c +3 -3
@@ -332,7 +332,7 @@
 	newpage->index = page->index;
 	newpage->mapping = page->mapping;
 	if (PageSwapBacked(page))
-		SetPageSwapBacked(newpage);
+		__SetPageSwapBacked(newpage);
 
 	return MIGRATEPAGE_SUCCESS;
 }
@@ -378,7 +378,7 @@
 	newpage->index = page->index;
 	newpage->mapping = page->mapping;
 	if (PageSwapBacked(page))
-		SetPageSwapBacked(newpage);
+		__SetPageSwapBacked(newpage);
 
 	get_page(newpage);	/* add cache reference */
 	if (PageSwapCache(page)) {
@@ -1791,7 +1791,7 @@
 
 	/* Prepare a page as a migration target */
 	__SetPageLocked(new_page);
-	SetPageSwapBacked(new_page);
+	__SetPageSwapBacked(new_page);
 
 	/* anon mapping, we can simply copy page->mapping to the new page: */
 	new_page->mapping = page->mapping;
mm/rmap.c +1 -1
@@ -1249,7 +1249,7 @@
 	int nr = compound ? hpage_nr_pages(page) : 1;
 
 	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	SetPageSwapBacked(page);
+	__SetPageSwapBacked(page);
 	if (compound) {
 		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
 		/* increment count (starts at -1) */
mm/shmem.c +2 -2
@@ -1085,7 +1085,7 @@
 	flush_dcache_page(newpage);
 
 	__SetPageLocked(newpage);
+	__SetPageSwapBacked(newpage);
 	SetPageUptodate(newpage);
-	SetPageSwapBacked(newpage);
 	set_page_private(newpage, swap_index);
 	SetPageSwapCache(newpage);
 
@@ -1276,8 +1276,8 @@
 			goto decused;
 		}
 
-		__SetPageSwapBacked(page);
 		__SetPageLocked(page);
+		__SetPageSwapBacked(page);
 		if (sgp == SGP_WRITE)
 			__SetPageReferenced(page);
 
mm/swap_state.c +1 -2
@@ -358,7 +358,7 @@
 
 	/* May fail (-ENOMEM) if radix-tree node allocation failed. */
 	__SetPageLocked(new_page);
-	SetPageSwapBacked(new_page);
+	__SetPageSwapBacked(new_page);
 	err = __add_to_swap_cache(new_page, entry);
 	if (likely(!err)) {
 		radix_tree_preload_end();
@@ -370,7 +370,6 @@
 			return new_page;
 		}
 		radix_tree_preload_end();
-		ClearPageSwapBacked(new_page);
 		__ClearPageLocked(new_page);
 		/*
 		 * add_to_swap_cache() doesn't return -EEXIST, so we can safely