Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

vmscan: move isolate_lru_page() to vmscan.c

On large memory systems, the VM can spend way too much time scanning
through pages that it cannot (or should not) evict from memory. Not only
does it use up CPU time, but it also provokes lock contention and can
leave large systems under memory pressure in a catatonic state.

This patch series improves VM scalability by:

1) putting filesystem-backed, swap-backed and unevictable pages
onto their own LRUs, so the system only scans the pages that it
can/should evict from memory

2) switching to two-handed clock replacement for the anonymous LRUs,
so the number of pages that need to be scanned when the system
starts swapping is bounded to a reasonable number

3) keeping unevictable pages off the LRU completely, so the
VM does not waste CPU time scanning them. ramfs, ramdisk,
SHM_LOCKED shared memory segments and mlock()ed VMA pages
are kept on the unevictable list.

This patch:

isolate_lru_page() logically belongs in vmscan.c rather than migrate.c.

It is a tough call, because the function is not needed without memory
migration, so there is a valid argument for keeping it in migrate.c.
However, a subsequent patch needs to use it in the core mm, so we can
happily move it to vmscan.c.

Also, make the function a little more generic by not requiring that it
adds an isolated page to a given list. Callers can do that.

Note that we now have '__isolate_lru_page()', which does
something quite different, visible outside of vmscan.c
for use with the memory controller. Methinks we need to
rationalize these names/purposes. --lts

[akpm@linux-foundation.org: fix mm/memory_hotplug.c build]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Nick Piggin, committed by Linus Torvalds
62695a84 71088785

+59 -37
include/linux/migrate.h (-3)

···
 typedef struct page *new_page_t(struct page *, unsigned long private, int **);

 #ifdef CONFIG_MIGRATION
-extern int isolate_lru_page(struct page *p, struct list_head *pagelist);
 extern int putback_lru_pages(struct list_head *l);
 extern int migrate_page(struct address_space *,
 			struct page *, struct page *);
···
 			const nodemask_t *from, const nodemask_t *to,
 			unsigned long flags);
 #else
-static inline int isolate_lru_page(struct page *p, struct list_head *list)
-					{ return -ENOSYS; }
 static inline int putback_lru_pages(struct list_head *l) { return 0; }
 static inline int migrate_pages(struct list_head *l, new_page_t x,
 		unsigned long private) { return -ENOSYS; }
mm/internal.h (+2)

···
 	atomic_dec(&page->_count);
 }

+extern int isolate_lru_page(struct page *page);
+
 extern void __free_pages_bootmem(struct page *page, unsigned int order);

 /*
mm/memory_hotplug.c (+2 -1)

···
 		 * We can skip free pages. And we can only deal with pages on
 		 * LRU.
 		 */
-		ret = isolate_lru_page(page, &source);
+		ret = isolate_lru_page(page);
 		if (!ret) { /* Success */
+			list_add_tail(&page->lru, &source);
 			move_pages--;
 		} else {
 			/* Becasue we don't have big zone->lock. we should
mm/mempolicy.c (+7 -2)

···
 #include <asm/tlbflush.h>
 #include <asm/uaccess.h>

+#include "internal.h"
+
 /* Internal flags */
 #define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0)	/* Skip checks for continuous vmas */
 #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1)		/* Invert check for nodemask */
···
 	/*
 	 * Avoid migrating a page that is shared with others.
 	 */
-	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(page) == 1)
-		isolate_lru_page(page, pagelist);
+	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(page) == 1) {
+		if (!isolate_lru_page(page)) {
+			list_add_tail(&page->lru, pagelist);
+		}
+	}
 }

 static struct page *new_node_page(struct page *page, unsigned long node, int **x)
mm/migrate.c (+3 -31)

···
 #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))

 /*
- * Isolate one page from the LRU lists. If successful put it onto
- * the indicated list with elevated page count.
- *
- * Result:
- * -EBUSY: page not on LRU list
- * 0: page removed from LRU list and added to the specified list.
- */
-int isolate_lru_page(struct page *page, struct list_head *pagelist)
-{
-	int ret = -EBUSY;
-
-	if (PageLRU(page)) {
-		struct zone *zone = page_zone(page);
-
-		spin_lock_irq(&zone->lru_lock);
-		if (PageLRU(page) && get_page_unless_zero(page)) {
-			ret = 0;
-			ClearPageLRU(page);
-			if (PageActive(page))
-				del_page_from_active_list(zone, page);
-			else
-				del_page_from_inactive_list(zone, page);
-			list_add_tail(&page->lru, pagelist);
-		}
-		spin_unlock_irq(&zone->lru_lock);
-	}
-	return ret;
-}
-
-/*
  * migrate_prep() needs to be called before we start compiling a list of pages
  * to be migrated using isolate_lru_page().
  */
···
 				!migrate_all)
 			goto put_and_set;

-		err = isolate_lru_page(page, &pagelist);
+		err = isolate_lru_page(page);
+		if (!err)
+			list_add_tail(&page->lru, &pagelist);
 put_and_set:
 		/*
 		 * Either remove the duplicate refcount from
mm/vmscan.c (+45)

···
 	return nr_active;
 }

+/**
+ * isolate_lru_page - tries to isolate a page from its LRU list
+ * @page: page to isolate from its LRU list
+ *
+ * Isolates a @page from an LRU list, clears PageLRU and adjusts the
+ * vmstat statistic corresponding to whatever LRU list the page was on.
+ *
+ * Returns 0 if the page was removed from an LRU list.
+ * Returns -EBUSY if the page was not on an LRU list.
+ *
+ * The returned page will have PageLRU() cleared.  If it was found on
+ * the active list, it will have PageActive set.  That flag may need
+ * to be cleared by the caller before letting the page go.
+ *
+ * The vmstat statistic corresponding to the list on which the page was
+ * found will be decremented.
+ *
+ * Restrictions:
+ * (1) Must be called with an elevated refcount on the page. This is a
+ *     fundamental difference from isolate_lru_pages (which is called
+ *     without a stable reference).
+ * (2) the lru_lock must not be held.
+ * (3) interrupts must be enabled.
+ */
+int isolate_lru_page(struct page *page)
+{
+	int ret = -EBUSY;
+
+	if (PageLRU(page)) {
+		struct zone *zone = page_zone(page);
+
+		spin_lock_irq(&zone->lru_lock);
+		if (PageLRU(page) && get_page_unless_zero(page)) {
+			ret = 0;
+			ClearPageLRU(page);
+			if (PageActive(page))
+				del_page_from_active_list(zone, page);
+			else
+				del_page_from_inactive_list(zone, page);
+		}
+		spin_unlock_irq(&zone->lru_lock);
+	}
+	return ret;
+}
+
 /*
  * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
  * of reclaimed pages