vmscan,memcg: reintroduce sc->may_swap

Commit a6dc60f8975ad96d162915e07703a4439c80dcf0 ("vmscan: rename
sc.may_swap to may_unmap") removed the may_swap flag, but memcg had been
using it as a "do we need to use swap?" flag, as the name indicates.

As a result, in the current implementation memcg cannot reclaim mapped
file caches when the mem+swap limit is hit: the noswap path clears
may_unmap, which makes reclaim skip every mapped page, file-backed as
well as anonymous.
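
For illustration, a minimal self-contained sketch of that behaviour; the
struct and the can_reclaim() helper below are simplified stand-ins for
scan_control and the mapped-page check in shrink_page_list(), not the real
kernel code:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct scan_control: only the flag matters here. */
struct scan_control {
	int may_unmap;	/* may mapped pages be unmapped and reclaimed? */
};

/*
 * Rough paraphrase of the gate in shrink_page_list(): when may_unmap is
 * clear, every mapped page is skipped, file-backed pages included.
 */
static bool can_reclaim(const struct scan_control *sc, bool page_mapped)
{
	if (!sc->may_unmap && page_mapped)
		return false;
	return true;
}

int main(void)
{
	/* Old memcg noswap path: "don't swap" was expressed as may_unmap = 0. */
	struct scan_control sc = { .may_unmap = 0 };

	/* So a mapped file-cache page is never reclaimed either. */
	printf("mapped file page reclaimable: %d\n", can_reclaim(&sc, true));
	return 0;
}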

Reintroduce the may_swap flag and handle it in get_scan_ratio(). This
patch does not affect any scan_control users other than memcg.
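
As a rough, self-contained illustration of the new behaviour: scan_ratio()
below is a simplified stand-in for get_scan_ratio(), and the 50/50 fallback
is only a placeholder for the real recent_scanned/recent_rotated balancing:

#include <stdio.h>

/*
 * Simplified stand-in for get_scan_ratio(): when swap must not (or cannot)
 * be used, the anon share of scanning is forced to zero and reclaim works
 * the file LRU only, so mapped file cache stays reclaimable.
 */
static void scan_ratio(int may_swap, long nr_swap_pages, unsigned int percent[2])
{
	if (!may_swap || nr_swap_pages <= 0) {
		percent[0] = 0;		/* anon */
		percent[1] = 100;	/* file */
		return;
	}
	/* Placeholder for the real anon/file balancing. */
	percent[0] = percent[1] = 50;
}

int main(void)
{
	unsigned int percent[2];

	/* memcg reclaim at the mem+swap limit: .may_swap = !noswap == 0 */
	scan_ratio(0, 1000, percent);
	printf("anon %u%%, file %u%%\n", percent[0], percent[1]);
	return 0;
}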

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

+8 -4
mm/vmscan.c
···
 	/* Can mapped pages be reclaimed? */
 	int may_unmap;
 
+	/* Can pages be swapped as part of reclaim? */
+	int may_swap;
+
 	/* This context's SWAP_CLUSTER_MAX. If freeing memory for
 	 * suspend, we effectively ignore SWAP_CLUSTER_MAX.
 	 * In this context, it doesn't matter that we scan the
···
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
 
 	/* If we have no swap space, do not bother scanning anon pages. */
-	if (nr_swap_pages <= 0) {
+	if (!sc->may_swap || (nr_swap_pages <= 0)) {
 		percent[0] = 0;
 		percent[1] = 100;
 		return;
···
 		.may_writepage = !laptop_mode,
 		.swap_cluster_max = SWAP_CLUSTER_MAX,
 		.may_unmap = 1,
+		.may_swap = 1,
 		.swappiness = vm_swappiness,
 		.order = order,
 		.mem_cgroup = NULL,
···
 	struct scan_control sc = {
 		.may_writepage = !laptop_mode,
 		.may_unmap = 1,
+		.may_swap = !noswap,
 		.swap_cluster_max = SWAP_CLUSTER_MAX,
 		.swappiness = swappiness,
 		.order = 0,
···
 		.nodemask = NULL, /* we don't care the placement */
 	};
 	struct zonelist *zonelist;
-
-	if (noswap)
-		sc.may_unmap = 0;
 
 	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
 			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
···
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
 		.may_unmap = 1,
+		.may_swap = 1,
 		.swap_cluster_max = SWAP_CLUSTER_MAX,
 		.swappiness = vm_swappiness,
 		.order = order,
···
 	struct scan_control sc = {
 		.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
 		.may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
+		.may_swap = 1,
 		.swap_cluster_max = max_t(unsigned long, nr_pages,
 				       SWAP_CLUSTER_MAX),
 		.gfp_mask = gfp_mask,