Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: create non-atomic version of SetPageReserved for init use

It doesn't make much sense to use the atomic SetPageReserved at init time
when we are using memset to clear the memory and manipulating the page
flags via simple "&=" and "|=" operations in __init_single_page.

This patch adds a non-atomic version __SetPageReserved that can be used
during page init and shows about a 10% improvement in initialization times
on the systems I have available for testing. On those systems I saw
initialization times drop from around 35 seconds to around 32 seconds to
initialize a 3TB block of persistent memory. I believe the main advantage
of this is that it allows for more compiler optimization as the __set_bit
operation can be reordered whereas the atomic version cannot.

I tried adding a bit of documentation based on f1dd2cd13c4 ("mm,
memory_hotplug: do not associate hotadded memory to zones until online").

Ideally the reserved flag should be set earlier since there is a brief
window where the page is initialized via __init_single_page and we have
not yet set the PG_reserved flag. I'm leaving that for a future patch set as
that will require a more significant refactor.

Link: http://lkml.kernel.org/r/20180925202018.3576.11607.stgit@localhost.localdomain
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Alexander Duyck and committed by Linus Torvalds
d483da5b f682a97a

+8 -2
+1
include/linux/page-flags.h
@@ -303,6 +303,7 @@

 PAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
 	__CLEARPAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
+	__SETPAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
 PAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
 	__CLEARPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
 	__SETPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
+7 -2
mm/page_alloc.c
@@ -1232,7 +1232,12 @@
 			/* Avoid false-positive PageTail() */
 			INIT_LIST_HEAD(&page->lru);
 
-			SetPageReserved(page);
+			/*
+			 * no need for atomic set_bit because the struct
+			 * page is not visible yet so nobody should
+			 * access it yet.
+			 */
+			__SetPageReserved(page);
 		}
 	}
 }
@@ -5513,7 +5508,7 @@
 		page = pfn_to_page(pfn);
 		__init_single_page(page, pfn, zone, nid);
 		if (context == MEMMAP_HOTPLUG)
-			SetPageReserved(page);
+			__SetPageReserved(page);
 
 		/*
 		 * Mark the block movable so that blocks are reserved for