==============================
UNEVICTABLE LRU INFRASTRUCTURE
==============================

========
CONTENTS
========

 (*) The Unevictable LRU

     - The unevictable page list.
     - Memory control group interaction.
     - Marking address spaces unevictable.
     - Detecting unevictable pages.
     - vmscan's handling of unevictable pages.

 (*) mlock()'d pages.

     - History.
     - Basic management.
     - mlock()/mlockall() system call handling.
     - Filtering special vmas.
     - munlock()/munlockall() system call handling.
     - Migrating mlocked pages.
     - mmap(MAP_LOCKED) system call handling.
     - munmap()/exit()/exec() system call handling.
     - try_to_unmap().
     - try_to_munlock() reverse map scan.
     - Page reclaim in shrink_*_list().


============
INTRODUCTION
============

This document describes the Linux memory manager's "Unevictable LRU"
infrastructure and the use of this to manage several types of "unevictable"
pages.

The document attempts to provide the overall rationale behind this mechanism
and the rationale for some of the design decisions that drove the
implementation.  The latter design rationale is discussed in the context of an
implementation description.  Admittedly, one can obtain the implementation
details - the "what does it do?" - by reading the code.  One hopes that the
descriptions below add value by providing the answer to "why does it do
that?".


===================
THE UNEVICTABLE LRU
===================

The Unevictable LRU facility adds an additional LRU list to track unevictable
pages and to hide these pages from vmscan.  This mechanism is based on a patch
by Larry Woodman of Red Hat to address several scalability problems with page
reclaim in Linux.  The problems have been observed at customer sites on large
memory x86_64 systems.

To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single zone.  When a large
fraction of these pages are not evictable for any reason [see below], vmscan
will spend a lot of time scanning the LRU lists looking for the small fraction
of pages that are evictable.  This can result in a situation where all CPUs
are spending 100% of their time in vmscan for hours or days on end, with the
system completely unresponsive.

The unevictable list addresses the following classes of unevictable pages:

 (*) Those owned by ramfs.

 (*) Those mapped into SHM_LOCK'd shared memory regions.

 (*) Those mapped into VM_LOCKED [mlock()ed] VMAs.

The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.


THE UNEVICTABLE PAGE LIST
-------------------------

The Unevictable LRU infrastructure consists of an additional, per-zone, LRU
list called the "unevictable" list and an associated page flag,
PG_unevictable, to indicate that the page is being managed on the unevictable
list.

The PG_unevictable flag is analogous to, and mutually exclusive with, the
PG_active flag in that it indicates on which LRU list a page resides when
PG_lru is set.  The unevictable list is compile-time configurable based on the
UNEVICTABLE_LRU Kconfig option.
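As a rough illustration of how these flags cooperate, the following sketch -
modelled on page_lru() in include/linux/mm_inline.h, but not a verbatim copy -
shows how a page's LRU list is selected:

	/*
	 * Illustrative sketch only: how PG_unevictable, PG_active and the
	 * file/anon distinction select a page's LRU list.  See page_lru()
	 * in include/linux/mm_inline.h for the real version.
	 */
	static enum lru_list page_lru_sketch(struct page *page)
	{
		if (PageUnevictable(page))
			return LRU_UNEVICTABLE;	/* hidden from vmscan */

		if (page_is_file_cache(page))
			return PageActive(page) ? LRU_ACTIVE_FILE
						: LRU_INACTIVE_FILE;

		return PageActive(page) ? LRU_ACTIVE_ANON
					: LRU_INACTIVE_ANON;
	}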
The Unevictable LRU infrastructure maintains unevictable pages on an
additional LRU list for a few reasons:

 (1) We get to "treat unevictable pages just like we treat other pages in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep
     track of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable pages between nodes for memory
     defragmentation, workload management and memory hotplug.  The Linux
     kernel can only migrate pages that it can successfully isolate from the
     LRU lists.  If we were to maintain pages elsewhere than on an LRU-like
     list, where they can be found by isolate_lru_page(), we would prevent
     their migration, unless we reworked migration code to find the
     unevictable pages itself.


The unevictable list does not differentiate between file-backed and anonymous,
swap-backed pages.  This differentiation is only important while the pages
are, in fact, evictable.

The unevictable list benefits from the "arrayification" of the per-zone LRU
lists and statistics originally proposed and posted by Christoph Lameter.

The unevictable list does not use the LRU pagevec mechanism.  Rather,
unevictable pages are placed directly on the page's zone's unevictable list
under the zone lru_lock.  This allows us to prevent the stranding of pages on
the unevictable list when one task has the page isolated from the LRU and
other tasks are changing the "evictability" state of the page.


MEMORY CONTROL GROUP INTERACTION
--------------------------------

The unevictable LRU facility interacts with the memory control group [aka
memory controller; see Documentation/cgroups/memory.txt] by extending the
lru_list enum.

The memory controller data structure automatically gets a per-zone unevictable
list as a result of the "arrayification" of the per-zone LRU lists (one per
lru_list enum element).  The memory controller tracks the movement of pages to
and from the unevictable list.

When a memory control group comes under memory pressure, the controller will
not attempt to reclaim pages on the unevictable list.  This has a couple of
effects:

 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
     reclaim process can be more efficient, dealing only with pages that have
     a chance of being reclaimed.

 (2) On the other hand, if too many of the pages charged to the control group
     are unevictable, the evictable portion of the working set of the tasks in
     the control group may not fit into the available memory.  This can cause
     the control group to thrash or to OOM-kill tasks.


MARKING ADDRESS SPACES UNEVICTABLE
----------------------------------

For facilities such as ramfs none of the pages attached to the address space
may be evicted.  To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using a number of wrapper functions:

 (*) void mapping_set_unevictable(struct address_space *mapping);

	Mark the address space as being completely unevictable.

 (*) void mapping_clear_unevictable(struct address_space *mapping);

	Mark the address space as being evictable.

 (*) int mapping_unevictable(struct address_space *mapping);

	Query the address space, and return true if it is completely
	unevictable.
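A filesystem opts in when it sets up an inode's mapping.  As an illustrative
fragment (modelled on what ramfs does at inode creation time; not a verbatim
copy of fs/ramfs/inode.c):

	/*
	 * Illustrative: a filesystem pinning all pages of a new inode's
	 * address space, as ramfs does when it creates an inode.
	 */
	inode->i_mapping->a_ops = &ramfs_aops;
	mapping_set_unevictable(inode->i_mapping);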
These are currently used in two places in the kernel:

 (1) By ramfs to mark the address spaces of its inodes when they are created,
     and this mark remains for the life of the inode.

 (2) By SYSV SHM to mark SHM_LOCK'd address spaces until SHM_UNLOCK is called.

     Note that SHM_LOCK is not required to page in the locked pages if they're
     swapped out; the application must touch the pages manually if it wants to
     ensure they're in memory.


DETECTING UNEVICTABLE PAGES
---------------------------

The function page_evictable() in vmscan.c determines whether a page is
evictable or not using the query function outlined above [see section "Marking
address spaces unevictable"] to check the AS_UNEVICTABLE flag.

For address spaces that are so marked after being populated (as SHM regions
might be), the lock action (eg: SHM_LOCK) can be lazy, and need not populate
the page tables for the region as does, for example, mlock(), nor need it make
any special effort to push any pages in the SHM_LOCK'd area to the unevictable
list.  Instead, vmscan will do this if and when it encounters the pages during
a reclamation scan.

On an unlock action (such as SHM_UNLOCK), the unlocker (eg: shmctl()) must
scan the pages in the region and "rescue" them from the unevictable list if no
other condition is keeping them unevictable.  If an unevictable region is
destroyed, the pages are also "rescued" from the unevictable list in the
process of freeing them.

page_evictable() also checks for mlocked pages by testing an additional page
flag, PG_mlocked (as wrapped by PageMlocked()).  If the page is NOT mlocked,
and a non-NULL VMA is supplied, page_evictable() will check whether the VMA is
VM_LOCKED via is_mlocked_vma().  is_mlocked_vma() will SetPageMlocked() and
update the appropriate statistics if the vma is VM_LOCKED.  This method allows
efficient "culling" of pages in the fault path that are being faulted in to
VM_LOCKED VMAs.


VMSCAN'S HANDLING OF UNEVICTABLE PAGES
--------------------------------------

If unevictable pages are culled in the fault path, or moved to the unevictable
list at mlock() or mmap() time, vmscan will not encounter the pages until they
have become evictable again (via munlock() for example) and have been
"rescued" from the unevictable list.  However, there may be situations where
we decide, for the sake of expediency, to leave an unevictable page on one of
the regular active/inactive LRU lists for vmscan to deal with.  vmscan checks
for such pages in all of the shrink_{active|inactive|page}_list() functions
and will "cull" such pages that it encounters: that is, it diverts those pages
to the unevictable list for the zone being scanned.

There may be situations where a page is mapped into a VM_LOCKED VMA, but the
page is not marked as PG_mlocked.  Such pages will make it all the way to
shrink_page_list() where they will be detected when vmscan walks the reverse
map in try_to_unmap().  If try_to_unmap() returns SWAP_MLOCK,
shrink_page_list() will cull the page at that point.

To "cull" an unevictable page, vmscan simply puts the page back on the LRU
list using putback_lru_page() - the inverse operation to isolate_lru_page() -
after dropping the page lock.  Because the condition which makes the page
unevictable may change once the page is unlocked, putback_lru_page() will
recheck the unevictable state of a page that it places on the unevictable
list.  If the page has since become evictable, putback_lru_page() removes it
from the list and retries, including the page_evictable() test.  Because such
a race is a rare event and movement of pages onto the unevictable list should
be rare, these extra evictability checks should not occur in the majority of
calls to putback_lru_page().
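A simplified sketch of this recheck loop (condensed from mm/vmscan.c of this
era; reference counting and statistics are omitted, so this is illustrative
rather than verbatim):

	/*
	 * Simplified sketch of putback_lru_page(): place an isolated page
	 * back on a list, then re-test evictability in case it changed
	 * while the page was off the LRU.  The reference taken by
	 * isolate_lru_page() is dropped in the real code.
	 */
	void putback_lru_page(struct page *page)
	{
		enum lru_list lru;
	redo:
		ClearPageUnevictable(page);
		if (page_evictable(page, NULL)) {
			lru = page_lru(page);	/* active or inactive list */
			lru_cache_add_lru(page, lru);
		} else {
			lru = LRU_UNEVICTABLE;
			add_page_to_unevictable_list(page);
		}

		/*
		 * The page may have become evictable while it was off the
		 * LRU; if so, rescue it from the unevictable list and retry.
		 */
		if (lru == LRU_UNEVICTABLE && page_evictable(page, NULL)) {
			if (!isolate_lru_page(page))
				goto redo;
		}
	}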
=============
MLOCKED PAGES
=============

The unevictable page list is also useful for mlock(), in addition to ramfs and
SYSV SHM.  Note that mlock() is only available in CONFIG_MMU=y situations; in
NOMMU situations, all mappings are effectively mlocked.


HISTORY
-------

The "Unevictable mlocked Pages" infrastructure is based on work originally
posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
Nick posted his patch as an alternative to a patch posted by Christoph Lameter
to achieve the same objective: hiding mlocked pages from vmscan.

In Nick's patch, he used one of the struct page LRU list link fields as a
count of VM_LOCKED VMAs that map the page.  This use of the link field for a
count prevented the management of the pages on an LRU list, and thus mlocked
pages were not migratable as isolate_lru_page() could not find them, and the
LRU list link field was not available to the migration subsystem.

Nick resolved this by putting mlocked pages back on the lru list before
attempting to isolate them, thus abandoning the count of VM_LOCKED VMAs.  When
Nick's patch was integrated with the Unevictable LRU work, the count was
replaced by walking the reverse map to determine whether any VM_LOCKED VMAs
mapped the page.  More on this below.


BASIC MANAGEMENT
----------------

mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages.  When such a page has been "noticed" by the memory management
subsystem, the page is marked with the PG_mlocked flag.  This can be
manipulated using the PageMlocked() functions.

A PG_mlocked page will be placed on the unevictable list when it is added to
the LRU.  Such pages can be "noticed" by memory management in several places:

 (1) in the mlock()/mlockall() system call handlers;

 (2) in the mmap() system call handler when mmapping a region with the
     MAP_LOCKED flag;

 (3) mmapping a region in a task that has called mlockall() with the
     MCL_FUTURE flag;

 (4) in the fault path, if mlocked pages are "culled" there, and when a
     VM_LOCKED stack segment is expanded; or

 (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
     reclaim a page in a VM_LOCKED VMA via try_to_unmap();

all of which result in the VM_LOCKED flag being set for the VMA if it doesn't
already have it set.

mlocked pages become unlocked and rescued from the unevictable list when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
     unmapping at task exit;

 (3) when the page is truncated from the last VM_LOCKED VMA of an mmapped
     file; or

 (4) before a page is COW'd in a VM_LOCKED VMA.
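For reference, the userspace operations that drive the events in the two
lists above look like this (a minimal, illustrative program; error handling
is mostly elided):

	#include <sys/mman.h>
	#include <stdlib.h>

	int main(void)
	{
		size_t len = 16 * 4096;
		char *buf;

		/* "noticed" case (2): mmap() a region with MAP_LOCKED */
		buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
		if (buf == MAP_FAILED)
			return EXIT_FAILURE;

		munlock(buf, len);	/* "rescued" case (1)            */
		mlock(buf, len);	/* "noticed" case (1)            */
		mlockall(MCL_FUTURE);	/* "noticed" case (3): future
					   mappings are mlocked         */

		munmap(buf, len);	/* "rescued" case (2): last
					   VM_LOCKED VMA goes away      */
		return EXIT_SUCCESS;
	}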
mlock()/mlockall() SYSTEM CALL HANDLING
---------------------------------------

Both [do_]mlock() and [do_]mlockall() system call handlers call mlock_fixup()
for each VMA in the range specified by the call.  In the case of mlockall(),
this is the entire active address space of the task.  Note that mlock_fixup()
is used for both mlocking and munlocking a range of memory.  A call to mlock()
an already VM_LOCKED VMA, or to munlock() a VMA that is not VM_LOCKED, is
treated as a no-op, and mlock_fixup() simply returns.

If the VMA passes some filtering as described in "Filtering Special VMAs"
below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
off a subset of the VMA if the range does not cover the entire VMA.  Once the
VMA has been merged or split or neither, mlock_fixup() will call
__mlock_vma_pages_range() to fault in the pages via get_user_pages() and to
mark the pages as mlocked via mlock_vma_page().

Note that the VMA being mlocked might be mapped with PROT_NONE.  In this case,
get_user_pages() will be unable to fault in the pages.  That's okay.  If pages
do end up getting faulted into this VM_LOCKED VMA, we'll handle them in the
fault path or in vmscan.

Also note that a page returned by get_user_pages() could be truncated or
migrated out from under us, while we're trying to mlock it.  To detect this,
__mlock_vma_pages_range() checks page_mapping() after acquiring the page lock.
If the page is still associated with its mapping, we'll go ahead and call
mlock_vma_page().  If the mapping is gone, we just unlock the page and move
on.  In the worst case, this will result in a page mapped in a VM_LOCKED VMA
remaining on a normal LRU list without being PageMlocked().  Again, vmscan
will detect and cull such pages.

mlock_vma_page() will call TestSetPageMlocked() for each page returned by
get_user_pages().  We use TestSetPageMlocked() because the page might already
be mlocked by another task/VMA and we don't want to do extra work.  We
especially do not want to count an mlocked page more than once in the
statistics.  If the page was already mlocked, mlock_vma_page() need do nothing
more.

If the page was NOT already mlocked, mlock_vma_page() attempts to isolate the
page from the LRU, as it is likely on the appropriate active or inactive list
at that time.  If isolate_lru_page() succeeds, mlock_vma_page() will put back
the page - by calling putback_lru_page() - which will notice that the page is
now mlocked and divert the page to the zone's unevictable list.  If
mlock_vma_page() is unable to isolate the page from the LRU, vmscan will
handle it later if and when it attempts to reclaim the page.


FILTERING SPECIAL VMAS
----------------------

mlock_fixup() filters several classes of "special" VMAs:

1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely.  The pages behind
   these mappings are inherently pinned, so we don't need to mark them as
   mlocked.  In any case, most of these pages have no struct page in which to
   mark them.  Because of this, get_user_pages() will fail for these VMAs, so
   there is no sense in attempting to visit them.

2) VMAs mapping hugetlbfs pages are already effectively pinned into memory.
   We neither need nor want to mlock() these pages.
   However, to preserve the prior behavior of mlock() - before the
   unevictable/mlock changes - mlock_fixup() will call make_pages_present() in
   the hugetlbfs VMA range to allocate the huge pages and populate the ptes.

3) VMAs with VM_DONTEXPAND or VM_RESERVED are generally userspace mappings of
   kernel pages, such as the VDSO page, relay channel pages, etc.  These pages
   are inherently unevictable and are not managed on the LRU lists.
   mlock_fixup() treats these VMAs the same as hugetlbfs VMAs.  It calls
   make_pages_present() to populate the ptes.

Note that for all of these special VMAs, mlock_fixup() does not set the
VM_LOCKED flag.  Therefore, we won't have to deal with them later during
munlock(), munmap() or task exit.  Neither does mlock_fixup() account these
VMAs against the task's "locked_vm".


munlock()/munlockall() SYSTEM CALL HANDLING
-------------------------------------------

The munlock() and munlockall() system calls are handled by the same functions
- do_mlock[all]() - as the mlock() and mlockall() system calls, with the
unlock vs lock operation indicated by an argument.  So, these system calls are
also handled by mlock_fixup().  Again, if called for an already munlocked VMA,
mlock_fixup() simply returns.  Because of the VMA filtering discussed above,
VM_LOCKED will not be set in any "special" VMAs.  So, these VMAs will be
ignored for munlock.

If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off
the specified range.  The range is then munlocked via the function
__mlock_vma_pages_range() - the same function used to mlock a VMA range -
passing a flag to indicate that munlock() is being performed.

Because the VMA access protections could have been changed to PROT_NONE after
faulting in and mlocking pages, get_user_pages() was unreliable for visiting
these pages for munlocking.  Because we don't want to leave pages mlocked,
get_user_pages() was enhanced to accept a flag to ignore the permissions when
fetching the pages - all of which should be resident as a result of previous
mlocking.

For munlock(), __mlock_vma_pages_range() unlocks individual pages by calling
munlock_vma_page().  munlock_vma_page() unconditionally clears the PG_mlocked
flag using TestClearPageMlocked().  As with mlock_vma_page(),
munlock_vma_page() uses the Test*PageMlocked() function to handle the case
where the page might have already been unlocked by another task.  If the page
was mlocked, munlock_vma_page() updates the zone statistics for the number of
mlocked pages.  Note, however, that at this point we haven't checked whether
the page is mapped by other VM_LOCKED VMAs.

We can't call try_to_munlock(), the function that walks the reverse map to
check for other VM_LOCKED VMAs, without first isolating the page from the LRU.
try_to_munlock() is a variant of try_to_unmap() and thus requires that the
page not be on an LRU list [more on these below].  However, the call to
isolate_lru_page() could fail, in which case we couldn't try_to_munlock().
So, we go ahead and clear PG_mlocked up front, as this might be the only
chance we have.  If we can successfully isolate the page, we go ahead and
try_to_munlock(), which will restore the PG_mlocked flag and update the zone
page statistics if it finds another VMA holding the page mlocked.  If we fail
to isolate the page, we'll have left a potentially mlocked page on the LRU.
This is fine, because we'll catch it later if and when vmscan tries to reclaim
the page.  This should be relatively rare.
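A condensed sketch of munlock_vma_page() (modelled on mm/mlock.c of this era;
the vm_event counters are trimmed, so treat this as illustrative rather than
verbatim):

	/*
	 * Condensed sketch of munlock_vma_page(): clear PG_mlocked up
	 * front, then, if the page can be isolated, let try_to_munlock()
	 * walk the reverse map and re-mlock the page if another
	 * VM_LOCKED VMA still maps it.  Event counters omitted.
	 */
	void munlock_vma_page(struct page *page)
	{
		BUG_ON(!PageLocked(page));

		if (TestClearPageMlocked(page)) {
			dec_zone_page_state(page, NR_MLOCK);
			if (!isolate_lru_page(page)) {
				/*
				 * try_to_munlock() restores PG_mlocked and
				 * the statistics if some other VM_LOCKED
				 * VMA maps the page.
				 */
				try_to_munlock(page);
				putback_lru_page(page);
			}
			/* else: vmscan will spot the page later */
		}
	}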
MIGRATING MLOCKED PAGES
-----------------------

A page that is being migrated has been isolated from the LRU lists and is held
locked across unmapping of the page, updating the page's address space entry
and copying the contents and state, until the page table entry has been
replaced with an entry that refers to the new page.  Linux supports migration
of mlocked pages and other unevictable pages.  This involves simply moving the
PG_mlocked and PG_unevictable states from the old page to the new page.

Note that page migration can race with mlocking or munlocking of the same
page.  This has been discussed from the mlock/munlock perspective in the
respective sections above.  Both processes (migration and m[un]locking) hold
the page locked.  This provides the first level of synchronization.  Page
migration zeros out the page_mapping of the old page before unlocking it, so
m[un]lock can skip these pages by testing the page mapping under page lock.

To complete page migration, we place the new and old pages back onto the LRU
after dropping the page lock.  The "unneeded" page - old page on success, new
page on failure - will be freed when the reference count held by the migration
process is released.  To ensure that we don't strand pages on the unevictable
list because of a race between munlock and migration, page migration uses the
putback_lru_page() function to add migrated pages back to the LRU.


mmap(MAP_LOCKED) SYSTEM CALL HANDLING
-------------------------------------

In addition to the mlock()/mlockall() system calls, an application can request
that a region of memory be mlocked by supplying the MAP_LOCKED flag to the
mmap() call.  Furthermore, any mmap() call or brk() call that expands the heap
by a task that has previously called mlockall() with the MCL_FUTURE flag will
result in the newly mapped memory being mlocked.  Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages and
populate the page table.

To mlock a range of memory under the unevictable/mlock infrastructure, the
mmap() handler and task address space expansion functions call
mlock_vma_pages_range() specifying the vma and the address range to mlock.
mlock_vma_pages_range() filters VMAs like mlock_fixup(), as described above in
"Filtering Special VMAs".  It will clear the VM_LOCKED flag, which will have
already been set by the caller, in filtered VMAs.  Thus these VMAs need not be
visited for munlock when the region is unmapped.

For "normal" VMAs, mlock_vma_pages_range() calls __mlock_vma_pages_range() to
fault/allocate the pages and mlock them.  Again, like mlock_fixup(),
mlock_vma_pages_range() downgrades the mmap semaphore to read mode before
attempting to fault/allocate and mlock the pages and "upgrades" the semaphore
back to write mode before returning.

The callers of mlock_vma_pages_range() will have already added the memory
range to be mlocked to the task's "locked_vm".  To account for filtered VMAs,
mlock_vma_pages_range() returns the number of pages NOT mlocked.  All of the
callers then subtract a non-negative return value from the task's locked_vm.
A negative return value represents an error - for example, from
get_user_pages() attempting to fault in a VMA with PROT_NONE access.
In this case, we leave the memory range accounted as locked_vm, as the
protections could be changed later and pages allocated into that region.


munmap()/exit()/exec() SYSTEM CALL HANDLING
-------------------------------------------

When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps the
pages.  Before the unevictable/mlock changes, mlocking did not mark the pages
in any way, so unmapping them required no processing.

To munlock a range of memory under the unevictable/mlock infrastructure, the
munmap() handler and task address space tear down function call
munlock_vma_pages_all().  The name reflects the observation that one always
specifies the entire VMA range when munlock()ing during unmap of a region.
Because of the VMA filtering when mlock()ing regions, only "normal" VMAs that
actually contain mlocked pages will be passed to munlock_vma_pages_all().

munlock_vma_pages_all() clears the VM_LOCKED VMA flag and, like mlock_fixup()
for the munlock case, calls __munlock_vma_pages_range() to walk the page table
for the VMA's memory range and munlock_vma_page() each resident page mapped by
the VMA.  This effectively munlocks the page, but only if this is the last
VM_LOCKED VMA that maps the page.


try_to_unmap()
--------------

Pages can, of course, be mapped into multiple VMAs.  Some of these VMAs may
have the VM_LOCKED flag set.  It is possible for a page mapped into one or
more VM_LOCKED VMAs not to have the PG_mlocked flag set and therefore reside
on one of the active or inactive LRU lists.  This could happen if, for
example, a task in the process of munlocking the page could not isolate the
page from the LRU.  As a result, vmscan/shrink_page_list() might encounter
such a page as described in section "vmscan's handling of unevictable pages".
To handle this situation, try_to_unmap() checks for VM_LOCKED VMAs while it is
walking a page's reverse map.

try_to_unmap() is always called, by either vmscan for reclaim or for page
migration, with the argument page locked and isolated from the LRU.  Separate
functions handle anonymous and mapped file pages, as these types of pages have
different reverse map mechanisms.

 (*) try_to_unmap_anon()

     To unmap anonymous pages, each VMA in the list anchored in the anon_vma
     must be visited - at least until a VM_LOCKED VMA is encountered.  If the
     page is being unmapped for migration, VM_LOCKED VMAs do not stop the
     process because mlocked pages are migratable.  However, for reclaim, if
     the page is mapped into a VM_LOCKED VMA, the scan stops.

     try_to_unmap_anon() attempts to acquire in read mode the mmap semaphore
     of the mm_struct to which the VMA belongs.  If this is successful, it
     will mlock the page via mlock_vma_page() - we wouldn't have gotten to
     try_to_unmap_anon() if the page were already mlocked - and will return
     SWAP_MLOCK, indicating that the page is unevictable.

     If the mmap semaphore cannot be acquired, we are not sure whether the
     page is really unevictable or not.  In this case, try_to_unmap_anon()
     will return SWAP_AGAIN.
 (*) try_to_unmap_file() - linear mappings

     Unmapping of a mapped file page works the same as for anonymous mappings,
     except that the scan visits all VMAs that map the page's index/page
     offset in the page's mapping's reverse map priority search tree.  It also
     visits each VMA in the page's mapping's non-linear list, if the list is
     non-empty.

     As for anonymous pages, on encountering a VM_LOCKED VMA for a mapped file
     page, try_to_unmap_file() will attempt to acquire the associated
     mm_struct's mmap semaphore to mlock the page, returning SWAP_MLOCK if
     this is successful, and SWAP_AGAIN, if not.

 (*) try_to_unmap_file() - non-linear mappings

     If a page's mapping contains a non-empty non-linear mapping VMA list,
     then try_to_un{map|lock}() must also visit each VMA in that list to
     determine whether the page is mapped in a VM_LOCKED VMA.  Again, the scan
     must visit all VMAs in the non-linear list to ensure that the page is
     not/should not be mlocked.

     If a VM_LOCKED VMA is found in the list, the scan could terminate.
     However, there is no easy way to determine whether the page is actually
     mapped in a given VMA - either for unmapping or testing whether the
     VM_LOCKED VMA actually pins the page.

     try_to_unmap_file() handles non-linear mappings by scanning a certain
     number of pages - a "cluster" - in each non-linear VMA associated with
     the page's mapping, for each file mapped page that vmscan tries to unmap.
     If this happens to unmap the page we're trying to unmap, try_to_unmap()
     will notice this on return (page_mapcount(page) will be 0) and return
     SWAP_SUCCESS.  Otherwise, it will return SWAP_AGAIN, causing vmscan to
     recirculate this page.  We take advantage of the cluster scan in
     try_to_unmap_cluster() as follows:

	For each non-linear VMA, try_to_unmap_cluster() attempts to acquire
	the mmap semaphore of the associated mm_struct for read without
	blocking.

	If this attempt is successful and the VMA is VM_LOCKED,
	try_to_unmap_cluster() will retain the mmap semaphore for the scan;
	otherwise it drops it here.

	Then, for each page in the cluster, if we're holding the mmap
	semaphore for a locked VMA, try_to_unmap_cluster() calls
	mlock_vma_page() to mlock the page.  This call is a no-op if the page
	is already locked, but will mlock any pages in the non-linear mapping
	that happen to be unlocked.

	If one of the pages so mlocked is the page passed in to
	try_to_unmap(), try_to_unmap_cluster() will return SWAP_MLOCK, rather
	than the default SWAP_AGAIN.  This will allow vmscan to cull the page,
	rather than recirculating it on the inactive list.

	Again, if try_to_unmap_cluster() cannot acquire the VMA's mmap sem, it
	returns SWAP_AGAIN, indicating that the page is mapped by a VM_LOCKED
	VMA, but couldn't be mlocked.


try_to_munlock() REVERSE MAP SCAN
---------------------------------

 [!] TODO/FIXME: a better name might be page_mlocked() - analogous to the
     page_referenced() reverse map walker.

When munlock_vma_page() [see section "munlock()/munlockall() System Call
Handling" above] tries to munlock a page, it needs to determine whether or not
the page is mapped by any VM_LOCKED VMA without actually attempting to unmap
all PTEs from the page.  For this purpose, the unevictable/mlock
infrastructure introduced a variant of try_to_unmap() called try_to_munlock().
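In outline, the dispatch looks roughly as follows (a sketch assuming the
TTU_MUNLOCK flag convention used by this kernel's rmap code; see mm/rmap.c
for the real function):

	/*
	 * Sketch of try_to_munlock(): reuse the try_to_unmap() reverse
	 * map walkers, with a flag selecting munlock processing.  The
	 * page must be locked and already isolated from the LRU.
	 */
	int try_to_munlock(struct page *page)
	{
		VM_BUG_ON(!PageLocked(page) || PageLRU(page));

		if (PageAnon(page))
			return try_to_unmap_anon(page, TTU_MUNLOCK);
		else
			return try_to_unmap_file(page, TTU_MUNLOCK);
	}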
try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
mapped file pages with an additional argument specifying unlock versus unmap
processing.  Again, these functions walk the respective reverse maps looking
for VM_LOCKED VMAs.  When such a VMA is found for anonymous pages and file
pages mapped in linear VMAs, as in the try_to_unmap() case, the functions
attempt to acquire the associated mmap semaphore, mlock the page via
mlock_vma_page() and return SWAP_MLOCK.  This effectively undoes the
pre-clearing of the page's PG_mlocked done by munlock_vma_page().

If try_to_unmap() is unable to acquire a VM_LOCKED VMA's associated mmap
semaphore, it will return SWAP_AGAIN.  This will allow shrink_page_list() to
recycle the page on the inactive list and hope that it has better luck with
the page next time.

For file pages mapped into non-linear VMAs, the try_to_munlock() logic works
slightly differently.  On encountering a VM_LOCKED non-linear VMA that might
map the page, try_to_munlock() returns SWAP_AGAIN without actually mlocking
the page.  munlock_vma_page() will just leave the page unlocked and let vmscan
deal with it - the usual fallback position.

Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
However, the scan can terminate when it encounters a VM_LOCKED VMA and can
successfully acquire the VMA's mmap semaphore for read and mlock the page.
Although try_to_munlock() might be called a great many times when munlocking a
large region or tearing down a large address space that has been mlocked via
mlockall(), overall this is a fairly rare event.


PAGE RECLAIM IN shrink_*_list()
-------------------------------

shrink_active_list() culls any obviously unevictable pages - i.e.
!page_evictable(page, NULL) - diverting these to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto
the active/inactive lru lists.  Note that these pages do not have
PageUnevictable set - otherwise they would be on the unevictable list and
shrink_active_list would never see them.

Some examples of these unevictable pages on the LRU lists are:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages.  shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region.  This happens
     when an application accesses the page the first time after SHM_LOCK'ing
     the segment.

 (3) mlocked pages that could not be isolated from the LRU and moved to the
     unevictable list in mlock_vma_page().

 (4) Pages mapped into multiple VM_LOCKED VMAs, but try_to_munlock() couldn't
     acquire the VMA's mmap semaphore to test the flags and set PageMlocked.
     munlock_vma_page() was forced to let the page back on to the normal LRU
     list for vmscan to handle.

shrink_inactive_list() also diverts any unevictable pages that it finds on the
inactive lists to the appropriate zone's unevictable list.

shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
after shrink_active_list() had moved them to the inactive list, or pages
mapped into VM_LOCKED VMAs that munlock_vma_page() couldn't isolate from the
LRU to recheck via try_to_munlock().
shrink_inactive_list() won't notice the latter, but will pass them on to
shrink_page_list().

shrink_page_list() again culls obviously unevictable pages that it could
encounter for similar reasons to shrink_inactive_list().  Pages mapped into
VM_LOCKED VMAs but without PG_mlocked set will make it all the way to
try_to_unmap().  shrink_page_list() will divert them to the unevictable list
when try_to_unmap() returns SWAP_MLOCK, as discussed above.
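A condensed sketch of that cull path, abstracted from the per-page loop of
shrink_page_list() in mm/vmscan.c of this era (most of the reclaim logic is
elided, so this fragment is illustrative rather than verbatim):

	/*
	 * Condensed sketch of the shrink_page_list() cull path: a page
	 * found to be mapped by a VM_LOCKED VMA is diverted to the
	 * unevictable list rather than recirculated or reclaimed.
	 */
	if (page_mapped(page) && mapping) {
		switch (try_to_unmap(page, TTU_UNMAP)) {
		case SWAP_FAIL:
			goto activate_locked;
		case SWAP_AGAIN:
			goto keep_locked;
		case SWAP_MLOCK:
			goto cull_mlocked;	/* page is mlocked */
		case SWAP_SUCCESS:
			;			/* try to free the page below */
		}
	}
	/* ... reclaim proper elided ... */

cull_mlocked:
	unlock_page(page);
	putback_lru_page(page);	/* diverts the page to the unevictable list */
	continue;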