Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: memcontrol: rewrite charge API

These patches rework memcg charge lifetime to integrate more naturally
with the lifetime of user pages. This drastically simplifies the code and
reduces charging and uncharging overhead. The most expensive part of
charging and uncharging is the page_cgroup bit spinlock, which is removed
entirely after this series.

Here are the top-10 profile entries of a stress test that reads a 128G
sparse file on a freshly booted box, without even a dedicated cgroup (i.e.
executing in the root memcg). Before:

15.36% cat [kernel.kallsyms] [k] copy_user_generic_string
13.31% cat [kernel.kallsyms] [k] memset
11.48% cat [kernel.kallsyms] [k] do_mpage_readpage
4.23% cat [kernel.kallsyms] [k] get_page_from_freelist
2.38% cat [kernel.kallsyms] [k] put_page
2.32% cat [kernel.kallsyms] [k] __mem_cgroup_commit_charge
2.18% kswapd0 [kernel.kallsyms] [k] __mem_cgroup_uncharge_common
1.92% kswapd0 [kernel.kallsyms] [k] shrink_page_list
1.86% cat [kernel.kallsyms] [k] __radix_tree_lookup
1.62% cat [kernel.kallsyms] [k] __pagevec_lru_add_fn

After:

15.67% cat [kernel.kallsyms] [k] copy_user_generic_string
13.48% cat [kernel.kallsyms] [k] memset
11.42% cat [kernel.kallsyms] [k] do_mpage_readpage
3.98% cat [kernel.kallsyms] [k] get_page_from_freelist
2.46% cat [kernel.kallsyms] [k] put_page
2.13% kswapd0 [kernel.kallsyms] [k] shrink_page_list
1.88% cat [kernel.kallsyms] [k] __radix_tree_lookup
1.67% cat [kernel.kallsyms] [k] __pagevec_lru_add_fn
1.39% kswapd0 [kernel.kallsyms] [k] free_pcppages_bulk
1.30% cat [kernel.kallsyms] [k] kfree

As you can see, the memcg footprint has shrunk quite a bit.

text data bss dec hex filename
37970 9892 400 48262 bc86 mm/memcontrol.o.old
35239 9892 400 45531 b1db mm/memcontrol.o

This patch (of 4):

The memcg charge API charges pages before they are rmapped - i.e. have an
actual "type" - and so every callsite needs its own set of charge and
uncharge functions to know what type is being operated on. Worse,
uncharge has to happen from a context that is still type-specific, rather
than at the end of the page's lifetime with exclusive access, and so
requires a lot of synchronization.
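
For reference, these are the type-specific entry points the current API exposes and that this series removes (declarations as they appear in the include/linux/memcontrol.h hunk below):

    extern int mem_cgroup_charge_anon(struct page *page, struct mm_struct *mm,
                                      gfp_t gfp_mask);
    extern int mem_cgroup_charge_file(struct page *page, struct mm_struct *mm,
                                      gfp_t gfp_mask);
    /* swap-in already follows a charge-commit-cancel protocol: */
    extern int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
                    struct page *page, gfp_t mask, struct mem_cgroup **memcgp);
    extern void mem_cgroup_commit_charge_swapin(struct page *page,
                    struct mem_cgroup *memcg);
    extern void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *memcg);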

Rewrite the charge API to provide a generic set of try_charge(),
commit_charge() and cancel_charge() transaction operations, much like
what's currently done for swap-in:

mem_cgroup_try_charge() attempts to reserve a charge, reclaiming
pages from the memcg if necessary.

mem_cgroup_commit_charge() commits the page to the charge once it
has a valid page->mapping and PageAnon() reliably tells the type.

mem_cgroup_cancel_charge() aborts the transaction.

This reduces the charge API and enables subsequent patches to
drastically simplify uncharging.

As pages need to be committed after rmap is established but before they
are added to the LRU, page_add_new_anon_rmap() must stop doing LRU
additions again. Revive lru_cache_add_active_or_unevictable().
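
To illustrate, here is a minimal sketch of the new calling convention for a freshly allocated anonymous page, modeled on the do_anonymous_page() conversion in this patch (locking and most error handling elided):

    struct mem_cgroup *memcg;

    if (mem_cgroup_try_charge(page, mm, GFP_KERNEL, &memcg))
        goto oom_free_page;

    /* ... prepare the pte ... */
    inc_mm_counter_fast(mm, MM_ANONPAGES);
    page_add_new_anon_rmap(page, vma, address);
    /* page->mapping is now valid and PageAnon() tells the type */
    mem_cgroup_commit_charge(page, memcg, false);
    lru_cache_add_active_or_unevictable(page, vma);
    set_pte_at(mm, address, page_table, entry);

    /* should instantiation fail after try_charge(): */
    mem_cgroup_cancel_charge(page, memcg);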

[hughd@google.com: fix shmem_unuse]
[hughd@google.com: Add comments on the private use of -EAGAIN]
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Johannes Weiner, committed by Linus Torvalds (00501b53 4449a51a)

+339 -396
+5 -27
Documentation/cgroups/memcg_test.txt
··· 24 24 25 25 a page/swp_entry may be charged (usage += PAGE_SIZE) at 26 26 27 - mem_cgroup_charge_anon() 28 - Called at new page fault and Copy-On-Write. 29 - 30 - mem_cgroup_try_charge_swapin() 31 - Called at do_swap_page() (page fault on swap entry) and swapoff. 32 - Followed by charge-commit-cancel protocol. (With swap accounting) 33 - At commit, a charge recorded in swap_cgroup is removed. 34 - 35 - mem_cgroup_charge_file() 36 - Called at add_to_page_cache() 37 - 38 - mem_cgroup_cache_charge_swapin() 39 - Called at shmem's swapin. 40 - 41 - mem_cgroup_prepare_migration() 42 - Called before migration. "extra" charge is done and followed by 43 - charge-commit-cancel protocol. 44 - At commit, charge against oldpage or newpage will be committed. 27 + mem_cgroup_try_charge() 45 28 46 29 2. Uncharge 47 30 a page/swp_entry may be uncharged (usage -= PAGE_SIZE) by ··· 52 69 to new page is committed. At failure, charge to old page is committed. 53 70 54 71 3. charge-commit-cancel 55 - In some case, we can't know this "charge" is valid or not at charging 56 - (because of races). 57 - To handle such case, there are charge-commit-cancel functions. 58 - mem_cgroup_try_charge_XXX 59 - mem_cgroup_commit_charge_XXX 60 - mem_cgroup_cancel_charge_XXX 61 - these are used in swap-in and migration. 72 + Memcg pages are charged in two steps: 73 + mem_cgroup_try_charge() 74 + mem_cgroup_commit_charge() or mem_cgroup_cancel_charge() 62 75 63 76 At try_charge(), there are no flags to say "this page is charged". 64 77 at this point, usage += PAGE_SIZE. 65 78 66 - At commit(), the function checks the page should be charged or not 67 - and set flags or avoid charging.(usage -= PAGE_SIZE) 79 + At commit(), the page is associated with the memcg. 68 80 69 81 At cancel(), simply usage -= PAGE_SIZE. 70 82
+14 -39
include/linux/memcontrol.h
··· 54 54 }; 55 55 56 56 #ifdef CONFIG_MEMCG 57 - /* 58 - * All "charge" functions with gfp_mask should use GFP_KERNEL or 59 - * (gfp_mask & GFP_RECLAIM_MASK). In current implementatin, memcg doesn't 60 - * alloc memory but reclaims memory from all available zones. So, "where I want 61 - * memory from" bits of gfp_mask has no meaning. So any bits of that field is 62 - * available but adding a rule is better. charge functions' gfp_mask should 63 - * be set to GFP_KERNEL or gfp_mask & GFP_RECLAIM_MASK for avoiding ambiguous 64 - * codes. 65 - * (Of course, if memcg does memory allocation in future, GFP_KERNEL is sane.) 66 - */ 67 - 68 - extern int mem_cgroup_charge_anon(struct page *page, struct mm_struct *mm, 69 - gfp_t gfp_mask); 70 - /* for swap handling */ 71 - extern int mem_cgroup_try_charge_swapin(struct mm_struct *mm, 72 - struct page *page, gfp_t mask, struct mem_cgroup **memcgp); 73 - extern void mem_cgroup_commit_charge_swapin(struct page *page, 74 - struct mem_cgroup *memcg); 75 - extern void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *memcg); 76 - 77 - extern int mem_cgroup_charge_file(struct page *page, struct mm_struct *mm, 78 - gfp_t gfp_mask); 57 + int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, 58 + gfp_t gfp_mask, struct mem_cgroup **memcgp); 59 + void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, 60 + bool lrucare); 61 + void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg); 79 62 80 63 struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *); 81 64 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct zone *); ··· 216 233 #else /* CONFIG_MEMCG */ 217 234 struct mem_cgroup; 218 235 219 - static inline int mem_cgroup_charge_anon(struct page *page, 220 - struct mm_struct *mm, gfp_t gfp_mask) 236 + static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, 237 + gfp_t gfp_mask, 238 + struct mem_cgroup **memcgp) 221 239 { 240 + *memcgp = NULL; 222 241 return 0; 223 242 } 224 243 225 - static inline int mem_cgroup_charge_file(struct page *page, 226 - struct mm_struct *mm, gfp_t gfp_mask) 227 - { 228 - return 0; 229 - } 230 - 231 - static inline int mem_cgroup_try_charge_swapin(struct mm_struct *mm, 232 - struct page *page, gfp_t gfp_mask, struct mem_cgroup **memcgp) 233 - { 234 - return 0; 235 - } 236 - 237 - static inline void mem_cgroup_commit_charge_swapin(struct page *page, 238 - struct mem_cgroup *memcg) 244 + static inline void mem_cgroup_commit_charge(struct page *page, 245 + struct mem_cgroup *memcg, 246 + bool lrucare) 239 247 { 240 248 } 241 249 242 - static inline void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *memcg) 250 + static inline void mem_cgroup_cancel_charge(struct page *page, 251 + struct mem_cgroup *memcg) 243 252 { 244 253 } 245 254
+3
include/linux/swap.h
··· 320 320 321 321 extern void add_page_to_unevictable_list(struct page *page); 322 322 323 + extern void lru_cache_add_active_or_unevictable(struct page *page, 324 + struct vm_area_struct *vma); 325 + 323 326 /* linux/mm/vmscan.c */ 324 327 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order, 325 328 gfp_t gfp_mask, nodemask_t *mask);
+8 -7
kernel/events/uprobes.c
··· 167 167 /* For mmu_notifiers */ 168 168 const unsigned long mmun_start = addr; 169 169 const unsigned long mmun_end = addr + PAGE_SIZE; 170 + struct mem_cgroup *memcg; 171 + 172 + err = mem_cgroup_try_charge(kpage, vma->vm_mm, GFP_KERNEL, &memcg); 173 + if (err) 174 + return err; 170 175 171 176 /* For try_to_free_swap() and munlock_vma_page() below */ 172 177 lock_page(page); ··· 184 179 185 180 get_page(kpage); 186 181 page_add_new_anon_rmap(kpage, vma, addr); 182 + mem_cgroup_commit_charge(kpage, memcg, false); 183 + lru_cache_add_active_or_unevictable(kpage, vma); 187 184 188 185 if (!PageAnon(page)) { 189 186 dec_mm_counter(mm, MM_FILEPAGES); ··· 207 200 208 201 err = 0; 209 202 unlock: 203 + mem_cgroup_cancel_charge(kpage, memcg); 210 204 mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 211 205 unlock_page(page); 212 206 return err; ··· 323 315 if (!new_page) 324 316 goto put_old; 325 317 326 - if (mem_cgroup_charge_anon(new_page, mm, GFP_KERNEL)) 327 - goto put_new; 328 - 329 318 __SetPageUptodate(new_page); 330 319 copy_highpage(new_page, old_page); 331 320 copy_to_page(new_page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE); 332 321 333 322 ret = __replace_page(vma, vaddr, old_page, new_page); 334 - if (ret) 335 - mem_cgroup_uncharge_page(new_page); 336 - 337 - put_new: 338 323 page_cache_release(new_page); 339 324 put_old: 340 325 put_page(old_page);
+15 -6
mm/filemap.c
··· 31 31 #include <linux/security.h> 32 32 #include <linux/cpuset.h> 33 33 #include <linux/hardirq.h> /* for BUG_ON(!in_atomic()) only */ 34 + #include <linux/hugetlb.h> 34 35 #include <linux/memcontrol.h> 35 36 #include <linux/cleancache.h> 36 37 #include <linux/rmap.h> ··· 549 548 pgoff_t offset, gfp_t gfp_mask, 550 549 void **shadowp) 551 550 { 551 + int huge = PageHuge(page); 552 + struct mem_cgroup *memcg; 552 553 int error; 553 554 554 555 VM_BUG_ON_PAGE(!PageLocked(page), page); 555 556 VM_BUG_ON_PAGE(PageSwapBacked(page), page); 556 557 557 - error = mem_cgroup_charge_file(page, current->mm, 558 - gfp_mask & GFP_RECLAIM_MASK); 559 - if (error) 560 - return error; 558 + if (!huge) { 559 + error = mem_cgroup_try_charge(page, current->mm, 560 + gfp_mask, &memcg); 561 + if (error) 562 + return error; 563 + } 561 564 562 565 error = radix_tree_maybe_preload(gfp_mask & ~__GFP_HIGHMEM); 563 566 if (error) { 564 - mem_cgroup_uncharge_cache_page(page); 567 + if (!huge) 568 + mem_cgroup_cancel_charge(page, memcg); 565 569 return error; 566 570 } 567 571 ··· 581 575 goto err_insert; 582 576 __inc_zone_page_state(page, NR_FILE_PAGES); 583 577 spin_unlock_irq(&mapping->tree_lock); 578 + if (!huge) 579 + mem_cgroup_commit_charge(page, memcg, false); 584 580 trace_mm_filemap_add_to_page_cache(page); 585 581 return 0; 586 582 err_insert: 587 583 page->mapping = NULL; 588 584 /* Leave page->index set: truncation relies upon it */ 589 585 spin_unlock_irq(&mapping->tree_lock); 590 - mem_cgroup_uncharge_cache_page(page); 586 + if (!huge) 587 + mem_cgroup_cancel_charge(page, memcg); 591 588 page_cache_release(page); 592 589 return error; 593 590 }
+38 -21
mm/huge_memory.c
··· 715 715 unsigned long haddr, pmd_t *pmd, 716 716 struct page *page) 717 717 { 718 + struct mem_cgroup *memcg; 718 719 pgtable_t pgtable; 719 720 spinlock_t *ptl; 720 721 721 722 VM_BUG_ON_PAGE(!PageCompound(page), page); 722 - pgtable = pte_alloc_one(mm, haddr); 723 - if (unlikely(!pgtable)) 723 + 724 + if (mem_cgroup_try_charge(page, mm, GFP_TRANSHUGE, &memcg)) 724 725 return VM_FAULT_OOM; 726 + 727 + pgtable = pte_alloc_one(mm, haddr); 728 + if (unlikely(!pgtable)) { 729 + mem_cgroup_cancel_charge(page, memcg); 730 + return VM_FAULT_OOM; 731 + } 725 732 726 733 clear_huge_page(page, haddr, HPAGE_PMD_NR); 727 734 /* ··· 741 734 ptl = pmd_lock(mm, pmd); 742 735 if (unlikely(!pmd_none(*pmd))) { 743 736 spin_unlock(ptl); 744 - mem_cgroup_uncharge_page(page); 737 + mem_cgroup_cancel_charge(page, memcg); 745 738 put_page(page); 746 739 pte_free(mm, pgtable); 747 740 } else { ··· 749 742 entry = mk_huge_pmd(page, vma->vm_page_prot); 750 743 entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); 751 744 page_add_new_anon_rmap(page, vma, haddr); 745 + mem_cgroup_commit_charge(page, memcg, false); 746 + lru_cache_add_active_or_unevictable(page, vma); 752 747 pgtable_trans_huge_deposit(mm, pmd, pgtable); 753 748 set_pmd_at(mm, haddr, pmd, entry); 754 749 add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR); ··· 836 827 count_vm_event(THP_FAULT_FALLBACK); 837 828 return VM_FAULT_FALLBACK; 838 829 } 839 - if (unlikely(mem_cgroup_charge_anon(page, mm, GFP_TRANSHUGE))) { 840 - put_page(page); 841 - count_vm_event(THP_FAULT_FALLBACK); 842 - return VM_FAULT_FALLBACK; 843 - } 844 830 if (unlikely(__do_huge_pmd_anonymous_page(mm, vma, haddr, pmd, page))) { 845 - mem_cgroup_uncharge_page(page); 846 831 put_page(page); 847 832 count_vm_event(THP_FAULT_FALLBACK); 848 833 return VM_FAULT_FALLBACK; ··· 982 979 struct page *page, 983 980 unsigned long haddr) 984 981 { 982 + struct mem_cgroup *memcg; 985 983 spinlock_t *ptl; 986 984 pgtable_t pgtable; 987 985 pmd_t _pmd; ··· 1003 999 __GFP_OTHER_NODE, 1004 1000 vma, address, page_to_nid(page)); 1005 1001 if (unlikely(!pages[i] || 1006 - mem_cgroup_charge_anon(pages[i], mm, 1007 - GFP_KERNEL))) { 1002 + mem_cgroup_try_charge(pages[i], mm, GFP_KERNEL, 1003 + &memcg))) { 1008 1004 if (pages[i]) 1009 1005 put_page(pages[i]); 1010 - mem_cgroup_uncharge_start(); 1011 1006 while (--i >= 0) { 1012 - mem_cgroup_uncharge_page(pages[i]); 1007 + memcg = (void *)page_private(pages[i]); 1008 + set_page_private(pages[i], 0); 1009 + mem_cgroup_cancel_charge(pages[i], memcg); 1013 1010 put_page(pages[i]); 1014 1011 } 1015 - mem_cgroup_uncharge_end(); 1016 1012 kfree(pages); 1017 1013 ret |= VM_FAULT_OOM; 1018 1014 goto out; 1019 1015 } 1016 + set_page_private(pages[i], (unsigned long)memcg); 1020 1017 } 1021 1018 1022 1019 for (i = 0; i < HPAGE_PMD_NR; i++) { ··· 1046 1041 pte_t *pte, entry; 1047 1042 entry = mk_pte(pages[i], vma->vm_page_prot); 1048 1043 entry = maybe_mkwrite(pte_mkdirty(entry), vma); 1044 + memcg = (void *)page_private(pages[i]); 1045 + set_page_private(pages[i], 0); 1049 1046 page_add_new_anon_rmap(pages[i], vma, haddr); 1047 + mem_cgroup_commit_charge(pages[i], memcg, false); 1048 + lru_cache_add_active_or_unevictable(pages[i], vma); 1050 1049 pte = pte_offset_map(&_pmd, haddr); 1051 1050 VM_BUG_ON(!pte_none(*pte)); 1052 1051 set_pte_at(mm, haddr, pte, entry); ··· 1074 1065 out_free_pages: 1075 1066 spin_unlock(ptl); 1076 1067 mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end); 1077 - mem_cgroup_uncharge_start(); 1078 1068 for (i = 0; i < HPAGE_PMD_NR; 
i++) { 1079 - mem_cgroup_uncharge_page(pages[i]); 1069 + memcg = (void *)page_private(pages[i]); 1070 + set_page_private(pages[i], 0); 1071 + mem_cgroup_cancel_charge(pages[i], memcg); 1080 1072 put_page(pages[i]); 1081 1073 } 1082 - mem_cgroup_uncharge_end(); 1083 1074 kfree(pages); 1084 1075 goto out; 1085 1076 } ··· 1090 1081 spinlock_t *ptl; 1091 1082 int ret = 0; 1092 1083 struct page *page = NULL, *new_page; 1084 + struct mem_cgroup *memcg; 1093 1085 unsigned long haddr; 1094 1086 unsigned long mmun_start; /* For mmu_notifiers */ 1095 1087 unsigned long mmun_end; /* For mmu_notifiers */ ··· 1142 1132 goto out; 1143 1133 } 1144 1134 1145 - if (unlikely(mem_cgroup_charge_anon(new_page, mm, GFP_TRANSHUGE))) { 1135 + if (unlikely(mem_cgroup_try_charge(new_page, mm, 1136 + GFP_TRANSHUGE, &memcg))) { 1146 1137 put_page(new_page); 1147 1138 if (page) { 1148 1139 split_huge_page(page); ··· 1172 1161 put_user_huge_page(page); 1173 1162 if (unlikely(!pmd_same(*pmd, orig_pmd))) { 1174 1163 spin_unlock(ptl); 1175 - mem_cgroup_uncharge_page(new_page); 1164 + mem_cgroup_cancel_charge(new_page, memcg); 1176 1165 put_page(new_page); 1177 1166 goto out_mn; 1178 1167 } else { ··· 1181 1170 entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); 1182 1171 pmdp_clear_flush(vma, haddr, pmd); 1183 1172 page_add_new_anon_rmap(new_page, vma, haddr); 1173 + mem_cgroup_commit_charge(new_page, memcg, false); 1174 + lru_cache_add_active_or_unevictable(new_page, vma); 1184 1175 set_pmd_at(mm, haddr, pmd, entry); 1185 1176 update_mmu_cache_pmd(vma, address, pmd); 1186 1177 if (!page) { ··· 2426 2413 spinlock_t *pmd_ptl, *pte_ptl; 2427 2414 int isolated; 2428 2415 unsigned long hstart, hend; 2416 + struct mem_cgroup *memcg; 2429 2417 unsigned long mmun_start; /* For mmu_notifiers */ 2430 2418 unsigned long mmun_end; /* For mmu_notifiers */ 2431 2419 ··· 2437 2423 if (!new_page) 2438 2424 return; 2439 2425 2440 - if (unlikely(mem_cgroup_charge_anon(new_page, mm, GFP_TRANSHUGE))) 2426 + if (unlikely(mem_cgroup_try_charge(new_page, mm, 2427 + GFP_TRANSHUGE, &memcg))) 2441 2428 return; 2442 2429 2443 2430 /* ··· 2525 2510 spin_lock(pmd_ptl); 2526 2511 BUG_ON(!pmd_none(*pmd)); 2527 2512 page_add_new_anon_rmap(new_page, vma, address); 2513 + mem_cgroup_commit_charge(new_page, memcg, false); 2514 + lru_cache_add_active_or_unevictable(new_page, vma); 2528 2515 pgtable_trans_huge_deposit(mm, pmd, pgtable); 2529 2516 set_pmd_at(mm, address, pmd, _pmd); 2530 2517 update_mmu_cache_pmd(vma, address, pmd); ··· 2540 2523 return; 2541 2524 2542 2525 out: 2543 - mem_cgroup_uncharge_page(new_page); 2526 + mem_cgroup_cancel_charge(new_page, memcg); 2544 2527 goto out_up_write; 2545 2528 } 2546 2529
+167 -240
mm/memcontrol.c
··· 2551 2551 return NOTIFY_OK; 2552 2552 } 2553 2553 2554 - /** 2555 - * mem_cgroup_try_charge - try charging a memcg 2556 - * @memcg: memcg to charge 2557 - * @nr_pages: number of pages to charge 2558 - * 2559 - * Returns 0 if @memcg was charged successfully, -EINTR if the charge 2560 - * was bypassed to root_mem_cgroup, and -ENOMEM if the charge failed. 2561 - */ 2562 - static int mem_cgroup_try_charge(struct mem_cgroup *memcg, 2563 - gfp_t gfp_mask, 2564 - unsigned int nr_pages) 2554 + static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask, 2555 + unsigned int nr_pages) 2565 2556 { 2566 2557 unsigned int batch = max(CHARGE_BATCH, nr_pages); 2567 2558 int nr_retries = MEM_CGROUP_RECLAIM_RETRIES; ··· 2651 2660 return ret; 2652 2661 } 2653 2662 2654 - /** 2655 - * mem_cgroup_try_charge_mm - try charging a mm 2656 - * @mm: mm_struct to charge 2657 - * @nr_pages: number of pages to charge 2658 - * @oom: trigger OOM if reclaim fails 2659 - * 2660 - * Returns the charged mem_cgroup associated with the given mm_struct or 2661 - * NULL the charge failed. 2662 - */ 2663 - static struct mem_cgroup *mem_cgroup_try_charge_mm(struct mm_struct *mm, 2664 - gfp_t gfp_mask, 2665 - unsigned int nr_pages) 2666 - 2667 - { 2668 - struct mem_cgroup *memcg; 2669 - int ret; 2670 - 2671 - memcg = get_mem_cgroup_from_mm(mm); 2672 - ret = mem_cgroup_try_charge(memcg, gfp_mask, nr_pages); 2673 - css_put(&memcg->css); 2674 - if (ret == -EINTR) 2675 - memcg = root_mem_cgroup; 2676 - else if (ret) 2677 - memcg = NULL; 2678 - 2679 - return memcg; 2680 - } 2681 - 2682 - /* 2683 - * Somemtimes we have to undo a charge we got by try_charge(). 2684 - * This function is for that and do uncharge, put css's refcnt. 2685 - * gotten by try_charge(). 2686 - */ 2687 - static void __mem_cgroup_cancel_charge(struct mem_cgroup *memcg, 2688 - unsigned int nr_pages) 2663 + static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages) 2689 2664 { 2690 2665 unsigned long bytes = nr_pages * PAGE_SIZE; 2691 2666 ··· 2717 2760 return memcg; 2718 2761 } 2719 2762 2720 - static void __mem_cgroup_commit_charge(struct mem_cgroup *memcg, 2721 - struct page *page, 2722 - unsigned int nr_pages, 2723 - enum charge_type ctype, 2724 - bool lrucare) 2763 + static void commit_charge(struct page *page, struct mem_cgroup *memcg, 2764 + unsigned int nr_pages, bool anon, bool lrucare) 2725 2765 { 2726 2766 struct page_cgroup *pc = lookup_page_cgroup(page); 2727 2767 struct zone *uninitialized_var(zone); 2728 2768 struct lruvec *lruvec; 2729 2769 bool was_on_lru = false; 2730 - bool anon; 2731 2770 2732 2771 lock_page_cgroup(pc); 2733 2772 VM_BUG_ON_PAGE(PageCgroupUsed(pc), page); ··· 2759 2806 } 2760 2807 spin_unlock_irq(&zone->lru_lock); 2761 2808 } 2762 - 2763 - if (ctype == MEM_CGROUP_CHARGE_TYPE_ANON) 2764 - anon = true; 2765 - else 2766 - anon = false; 2767 2809 2768 2810 mem_cgroup_charge_statistics(memcg, page, anon, nr_pages); 2769 2811 unlock_page_cgroup(pc); ··· 2830 2882 if (ret) 2831 2883 return ret; 2832 2884 2833 - ret = mem_cgroup_try_charge(memcg, gfp, size >> PAGE_SHIFT); 2885 + ret = try_charge(memcg, gfp, size >> PAGE_SHIFT); 2834 2886 if (ret == -EINTR) { 2835 2887 /* 2836 - * mem_cgroup_try_charge() chosed to bypass to root due to 2837 - * OOM kill or fatal signal. Since our only options are to 2838 - * either fail the allocation or charge it to this cgroup, do 2839 - * it as a temporary condition. But we can't fail. 
From a 2840 - * kmem/slab perspective, the cache has already been selected, 2841 - * by mem_cgroup_kmem_get_cache(), so it is too late to change 2888 + * try_charge() chose to bypass to root due to OOM kill or 2889 + * fatal signal. Since our only options are to either fail 2890 + * the allocation or charge it to this cgroup, do it as a 2891 + * temporary condition. But we can't fail. From a kmem/slab 2892 + * perspective, the cache has already been selected, by 2893 + * mem_cgroup_kmem_get_cache(), so it is too late to change 2842 2894 * our minds. 2843 2895 * 2844 2896 * This condition will only trigger if the task entered 2845 - * memcg_charge_kmem in a sane state, but was OOM-killed during 2846 - * mem_cgroup_try_charge() above. Tasks that were already 2847 - * dying when the allocation triggers should have been already 2897 + * memcg_charge_kmem in a sane state, but was OOM-killed 2898 + * during try_charge() above. Tasks that were already dying 2899 + * when the allocation triggers should have been already 2848 2900 * directed to the root cgroup in memcontrol.h 2849 2901 */ 2850 2902 res_counter_charge_nofail(&memcg->res, size, &fail_res); ··· 3566 3618 return ret; 3567 3619 } 3568 3620 3569 - int mem_cgroup_charge_anon(struct page *page, 3570 - struct mm_struct *mm, gfp_t gfp_mask) 3571 - { 3572 - unsigned int nr_pages = 1; 3573 - struct mem_cgroup *memcg; 3574 - 3575 - if (mem_cgroup_disabled()) 3576 - return 0; 3577 - 3578 - VM_BUG_ON_PAGE(page_mapped(page), page); 3579 - VM_BUG_ON_PAGE(page->mapping && !PageAnon(page), page); 3580 - VM_BUG_ON(!mm); 3581 - 3582 - if (PageTransHuge(page)) { 3583 - nr_pages <<= compound_order(page); 3584 - VM_BUG_ON_PAGE(!PageTransHuge(page), page); 3585 - } 3586 - 3587 - memcg = mem_cgroup_try_charge_mm(mm, gfp_mask, nr_pages); 3588 - if (!memcg) 3589 - return -ENOMEM; 3590 - __mem_cgroup_commit_charge(memcg, page, nr_pages, 3591 - MEM_CGROUP_CHARGE_TYPE_ANON, false); 3592 - return 0; 3593 - } 3594 - 3595 - /* 3596 - * While swap-in, try_charge -> commit or cancel, the page is locked. 3597 - * And when try_charge() successfully returns, one refcnt to memcg without 3598 - * struct page_cgroup is acquired. This refcnt will be consumed by 3599 - * "commit()" or removed by "cancel()" 3600 - */ 3601 - static int __mem_cgroup_try_charge_swapin(struct mm_struct *mm, 3602 - struct page *page, 3603 - gfp_t mask, 3604 - struct mem_cgroup **memcgp) 3605 - { 3606 - struct mem_cgroup *memcg = NULL; 3607 - struct page_cgroup *pc; 3608 - int ret; 3609 - 3610 - pc = lookup_page_cgroup(page); 3611 - /* 3612 - * Every swap fault against a single page tries to charge the 3613 - * page, bail as early as possible. shmem_unuse() encounters 3614 - * already charged pages, too. The USED bit is protected by 3615 - * the page lock, which serializes swap cache removal, which 3616 - * in turn serializes uncharging. 
3617 - */ 3618 - if (PageCgroupUsed(pc)) 3619 - goto out; 3620 - if (do_swap_account) 3621 - memcg = try_get_mem_cgroup_from_page(page); 3622 - if (!memcg) 3623 - memcg = get_mem_cgroup_from_mm(mm); 3624 - ret = mem_cgroup_try_charge(memcg, mask, 1); 3625 - css_put(&memcg->css); 3626 - if (ret == -EINTR) 3627 - memcg = root_mem_cgroup; 3628 - else if (ret) 3629 - return ret; 3630 - out: 3631 - *memcgp = memcg; 3632 - return 0; 3633 - } 3634 - 3635 - int mem_cgroup_try_charge_swapin(struct mm_struct *mm, struct page *page, 3636 - gfp_t gfp_mask, struct mem_cgroup **memcgp) 3637 - { 3638 - if (mem_cgroup_disabled()) { 3639 - *memcgp = NULL; 3640 - return 0; 3641 - } 3642 - /* 3643 - * A racing thread's fault, or swapoff, may have already 3644 - * updated the pte, and even removed page from swap cache: in 3645 - * those cases unuse_pte()'s pte_same() test will fail; but 3646 - * there's also a KSM case which does need to charge the page. 3647 - */ 3648 - if (!PageSwapCache(page)) { 3649 - struct mem_cgroup *memcg; 3650 - 3651 - memcg = mem_cgroup_try_charge_mm(mm, gfp_mask, 1); 3652 - if (!memcg) 3653 - return -ENOMEM; 3654 - *memcgp = memcg; 3655 - return 0; 3656 - } 3657 - return __mem_cgroup_try_charge_swapin(mm, page, gfp_mask, memcgp); 3658 - } 3659 - 3660 - void mem_cgroup_cancel_charge_swapin(struct mem_cgroup *memcg) 3661 - { 3662 - if (mem_cgroup_disabled()) 3663 - return; 3664 - if (!memcg) 3665 - return; 3666 - __mem_cgroup_cancel_charge(memcg, 1); 3667 - } 3668 - 3669 - static void 3670 - __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *memcg, 3671 - enum charge_type ctype) 3672 - { 3673 - if (mem_cgroup_disabled()) 3674 - return; 3675 - if (!memcg) 3676 - return; 3677 - 3678 - __mem_cgroup_commit_charge(memcg, page, 1, ctype, true); 3679 - /* 3680 - * Now swap is on-memory. This means this page may be 3681 - * counted both as mem and swap....double count. 3682 - * Fix it by uncharging from memsw. Basically, this SwapCache is stable 3683 - * under lock_page(). But in do_swap_page()::memory.c, reuse_swap_page() 3684 - * may call delete_from_swap_cache() before reach here. 
3685 - */ 3686 - if (do_swap_account && PageSwapCache(page)) { 3687 - swp_entry_t ent = {.val = page_private(page)}; 3688 - mem_cgroup_uncharge_swap(ent); 3689 - } 3690 - } 3691 - 3692 - void mem_cgroup_commit_charge_swapin(struct page *page, 3693 - struct mem_cgroup *memcg) 3694 - { 3695 - __mem_cgroup_commit_charge_swapin(page, memcg, 3696 - MEM_CGROUP_CHARGE_TYPE_ANON); 3697 - } 3698 - 3699 - int mem_cgroup_charge_file(struct page *page, struct mm_struct *mm, 3700 - gfp_t gfp_mask) 3701 - { 3702 - enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE; 3703 - struct mem_cgroup *memcg; 3704 - int ret; 3705 - 3706 - if (mem_cgroup_disabled()) 3707 - return 0; 3708 - if (PageCompound(page)) 3709 - return 0; 3710 - 3711 - if (PageSwapCache(page)) { /* shmem */ 3712 - ret = __mem_cgroup_try_charge_swapin(mm, page, 3713 - gfp_mask, &memcg); 3714 - if (ret) 3715 - return ret; 3716 - __mem_cgroup_commit_charge_swapin(page, memcg, type); 3717 - return 0; 3718 - } 3719 - 3720 - memcg = mem_cgroup_try_charge_mm(mm, gfp_mask, 1); 3721 - if (!memcg) 3722 - return -ENOMEM; 3723 - __mem_cgroup_commit_charge(memcg, page, 1, type, false); 3724 - return 0; 3725 - } 3726 - 3727 3621 static void mem_cgroup_do_uncharge(struct mem_cgroup *memcg, 3728 3622 unsigned int nr_pages, 3729 3623 const enum charge_type ctype) ··· 3912 4122 struct mem_cgroup *memcg = NULL; 3913 4123 unsigned int nr_pages = 1; 3914 4124 struct page_cgroup *pc; 3915 - enum charge_type ctype; 3916 4125 3917 4126 *memcgp = NULL; 3918 4127 ··· 3973 4184 * page. In the case new page is migrated but not remapped, new page's 3974 4185 * mapcount will be finally 0 and we call uncharge in end_migration(). 3975 4186 */ 3976 - if (PageAnon(page)) 3977 - ctype = MEM_CGROUP_CHARGE_TYPE_ANON; 3978 - else 3979 - ctype = MEM_CGROUP_CHARGE_TYPE_CACHE; 3980 4187 /* 3981 4188 * The page is committed to the memcg, but it's not actually 3982 4189 * charged to the res_counter since we plan on replacing the 3983 4190 * old one and only one page is going to be left afterwards. 3984 4191 */ 3985 - __mem_cgroup_commit_charge(memcg, newpage, nr_pages, ctype, false); 4192 + commit_charge(newpage, memcg, nr_pages, PageAnon(page), false); 3986 4193 } 3987 4194 3988 4195 /* remove redundant charge if migration failed*/ ··· 4037 4252 { 4038 4253 struct mem_cgroup *memcg = NULL; 4039 4254 struct page_cgroup *pc; 4040 - enum charge_type type = MEM_CGROUP_CHARGE_TYPE_CACHE; 4041 4255 4042 4256 if (mem_cgroup_disabled()) 4043 4257 return; ··· 4062 4278 * the newpage may be on LRU(or pagevec for LRU) already. We lock 4063 4279 * LRU while we overwrite pc->mem_cgroup. 
4064 4280 */ 4065 - __mem_cgroup_commit_charge(memcg, newpage, 1, type, true); 4281 + commit_charge(newpage, memcg, 1, false, true); 4066 4282 } 4067 4283 4068 4284 #ifdef CONFIG_DEBUG_VM ··· 6103 6319 int ret; 6104 6320 6105 6321 /* Try a single bulk charge without reclaim first */ 6106 - ret = mem_cgroup_try_charge(mc.to, GFP_KERNEL & ~__GFP_WAIT, count); 6322 + ret = try_charge(mc.to, GFP_KERNEL & ~__GFP_WAIT, count); 6107 6323 if (!ret) { 6108 6324 mc.precharge += count; 6109 6325 return ret; 6110 6326 } 6111 6327 if (ret == -EINTR) { 6112 - __mem_cgroup_cancel_charge(root_mem_cgroup, count); 6328 + cancel_charge(root_mem_cgroup, count); 6113 6329 return ret; 6114 6330 } 6115 6331 6116 6332 /* Try charges one by one with reclaim */ 6117 6333 while (count--) { 6118 - ret = mem_cgroup_try_charge(mc.to, 6119 - GFP_KERNEL & ~__GFP_NORETRY, 1); 6334 + ret = try_charge(mc.to, GFP_KERNEL & ~__GFP_NORETRY, 1); 6120 6335 /* 6121 6336 * In case of failure, any residual charges against 6122 6337 * mc.to will be dropped by mem_cgroup_clear_mc() ··· 6123 6340 * bypassed to root right away or they'll be lost. 6124 6341 */ 6125 6342 if (ret == -EINTR) 6126 - __mem_cgroup_cancel_charge(root_mem_cgroup, 1); 6343 + cancel_charge(root_mem_cgroup, 1); 6127 6344 if (ret) 6128 6345 return ret; 6129 6346 mc.precharge++; ··· 6392 6609 6393 6610 /* we must uncharge all the leftover precharges from mc.to */ 6394 6611 if (mc.precharge) { 6395 - __mem_cgroup_cancel_charge(mc.to, mc.precharge); 6612 + cancel_charge(mc.to, mc.precharge); 6396 6613 mc.precharge = 0; 6397 6614 } 6398 6615 /* ··· 6400 6617 * we must uncharge here. 6401 6618 */ 6402 6619 if (mc.moved_charge) { 6403 - __mem_cgroup_cancel_charge(mc.from, mc.moved_charge); 6620 + cancel_charge(mc.from, mc.moved_charge); 6404 6621 mc.moved_charge = 0; 6405 6622 } 6406 6623 /* we must fixup refcnts and charges */ ··· 6728 6945 { 6729 6946 } 6730 6947 #endif 6948 + 6949 + /** 6950 + * mem_cgroup_try_charge - try charging a page 6951 + * @page: page to charge 6952 + * @mm: mm context of the victim 6953 + * @gfp_mask: reclaim mode 6954 + * @memcgp: charged memcg return 6955 + * 6956 + * Try to charge @page to the memcg that @mm belongs to, reclaiming 6957 + * pages according to @gfp_mask if necessary. 6958 + * 6959 + * Returns 0 on success, with *@memcgp pointing to the charged memcg. 6960 + * Otherwise, an error code is returned. 6961 + * 6962 + * After page->mapping has been set up, the caller must finalize the 6963 + * charge with mem_cgroup_commit_charge(). Or abort the transaction 6964 + * with mem_cgroup_cancel_charge() in case page instantiation fails. 6965 + */ 6966 + int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm, 6967 + gfp_t gfp_mask, struct mem_cgroup **memcgp) 6968 + { 6969 + struct mem_cgroup *memcg = NULL; 6970 + unsigned int nr_pages = 1; 6971 + int ret = 0; 6972 + 6973 + if (mem_cgroup_disabled()) 6974 + goto out; 6975 + 6976 + if (PageSwapCache(page)) { 6977 + struct page_cgroup *pc = lookup_page_cgroup(page); 6978 + /* 6979 + * Every swap fault against a single page tries to charge the 6980 + * page, bail as early as possible. shmem_unuse() encounters 6981 + * already charged pages, too. The USED bit is protected by 6982 + * the page lock, which serializes swap cache removal, which 6983 + * in turn serializes uncharging. 
6984 + */ 6985 + if (PageCgroupUsed(pc)) 6986 + goto out; 6987 + } 6988 + 6989 + if (PageTransHuge(page)) { 6990 + nr_pages <<= compound_order(page); 6991 + VM_BUG_ON_PAGE(!PageTransHuge(page), page); 6992 + } 6993 + 6994 + if (do_swap_account && PageSwapCache(page)) 6995 + memcg = try_get_mem_cgroup_from_page(page); 6996 + if (!memcg) 6997 + memcg = get_mem_cgroup_from_mm(mm); 6998 + 6999 + ret = try_charge(memcg, gfp_mask, nr_pages); 7000 + 7001 + css_put(&memcg->css); 7002 + 7003 + if (ret == -EINTR) { 7004 + memcg = root_mem_cgroup; 7005 + ret = 0; 7006 + } 7007 + out: 7008 + *memcgp = memcg; 7009 + return ret; 7010 + } 7011 + 7012 + /** 7013 + * mem_cgroup_commit_charge - commit a page charge 7014 + * @page: page to charge 7015 + * @memcg: memcg to charge the page to 7016 + * @lrucare: page might be on LRU already 7017 + * 7018 + * Finalize a charge transaction started by mem_cgroup_try_charge(), 7019 + * after page->mapping has been set up. This must happen atomically 7020 + * as part of the page instantiation, i.e. under the page table lock 7021 + * for anonymous pages, under the page lock for page and swap cache. 7022 + * 7023 + * In addition, the page must not be on the LRU during the commit, to 7024 + * prevent racing with task migration. If it might be, use @lrucare. 7025 + * 7026 + * Use mem_cgroup_cancel_charge() to cancel the transaction instead. 7027 + */ 7028 + void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg, 7029 + bool lrucare) 7030 + { 7031 + unsigned int nr_pages = 1; 7032 + 7033 + VM_BUG_ON_PAGE(!page->mapping, page); 7034 + VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page); 7035 + 7036 + if (mem_cgroup_disabled()) 7037 + return; 7038 + /* 7039 + * Swap faults will attempt to charge the same page multiple 7040 + * times. But reuse_swap_page() might have removed the page 7041 + * from swapcache already, so we can't check PageSwapCache(). 7042 + */ 7043 + if (!memcg) 7044 + return; 7045 + 7046 + if (PageTransHuge(page)) { 7047 + nr_pages <<= compound_order(page); 7048 + VM_BUG_ON_PAGE(!PageTransHuge(page), page); 7049 + } 7050 + 7051 + commit_charge(page, memcg, nr_pages, PageAnon(page), lrucare); 7052 + 7053 + if (do_swap_account && PageSwapCache(page)) { 7054 + swp_entry_t entry = { .val = page_private(page) }; 7055 + /* 7056 + * The swap entry might not get freed for a long time, 7057 + * let's not wait for it. The page already received a 7058 + * memory+swap charge, drop the swap entry duplicate. 7059 + */ 7060 + mem_cgroup_uncharge_swap(entry); 7061 + } 7062 + } 7063 + 7064 + /** 7065 + * mem_cgroup_cancel_charge - cancel a page charge 7066 + * @page: page to charge 7067 + * @memcg: memcg to charge the page to 7068 + * 7069 + * Cancel a charge transaction started by mem_cgroup_try_charge(). 7070 + */ 7071 + void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg) 7072 + { 7073 + unsigned int nr_pages = 1; 7074 + 7075 + if (mem_cgroup_disabled()) 7076 + return; 7077 + /* 7078 + * Swap faults will attempt to charge the same page multiple 7079 + * times. But reuse_swap_page() might have removed the page 7080 + * from swapcache already, so we can't check PageSwapCache(). 7081 + */ 7082 + if (!memcg) 7083 + return; 7084 + 7085 + if (PageTransHuge(page)) { 7086 + nr_pages <<= compound_order(page); 7087 + VM_BUG_ON_PAGE(!PageTransHuge(page), page); 7088 + } 7089 + 7090 + cancel_charge(memcg, nr_pages); 7091 + } 6731 7092 6732 7093 /* 6733 7094 * subsys_initcall() for memory controller.
+24 -17
mm/memory.c
··· 2049 2049 struct page *dirty_page = NULL; 2050 2050 unsigned long mmun_start = 0; /* For mmu_notifiers */ 2051 2051 unsigned long mmun_end = 0; /* For mmu_notifiers */ 2052 + struct mem_cgroup *memcg; 2052 2053 2053 2054 old_page = vm_normal_page(vma, address, orig_pte); 2054 2055 if (!old_page) { ··· 2205 2204 } 2206 2205 __SetPageUptodate(new_page); 2207 2206 2208 - if (mem_cgroup_charge_anon(new_page, mm, GFP_KERNEL)) 2207 + if (mem_cgroup_try_charge(new_page, mm, GFP_KERNEL, &memcg)) 2209 2208 goto oom_free_new; 2210 2209 2211 2210 mmun_start = address & PAGE_MASK; ··· 2235 2234 */ 2236 2235 ptep_clear_flush(vma, address, page_table); 2237 2236 page_add_new_anon_rmap(new_page, vma, address); 2237 + mem_cgroup_commit_charge(new_page, memcg, false); 2238 + lru_cache_add_active_or_unevictable(new_page, vma); 2238 2239 /* 2239 2240 * We call the notify macro here because, when using secondary 2240 2241 * mmu page tables (such as kvm shadow page tables), we want the ··· 2274 2271 new_page = old_page; 2275 2272 ret |= VM_FAULT_WRITE; 2276 2273 } else 2277 - mem_cgroup_uncharge_page(new_page); 2274 + mem_cgroup_cancel_charge(new_page, memcg); 2278 2275 2279 2276 if (new_page) 2280 2277 page_cache_release(new_page); ··· 2413 2410 { 2414 2411 spinlock_t *ptl; 2415 2412 struct page *page, *swapcache; 2413 + struct mem_cgroup *memcg; 2416 2414 swp_entry_t entry; 2417 2415 pte_t pte; 2418 2416 int locked; 2419 - struct mem_cgroup *ptr; 2420 2417 int exclusive = 0; 2421 2418 int ret = 0; 2422 2419 ··· 2492 2489 goto out_page; 2493 2490 } 2494 2491 2495 - if (mem_cgroup_try_charge_swapin(mm, page, GFP_KERNEL, &ptr)) { 2492 + if (mem_cgroup_try_charge(page, mm, GFP_KERNEL, &memcg)) { 2496 2493 ret = VM_FAULT_OOM; 2497 2494 goto out_page; 2498 2495 } ··· 2517 2514 * while the page is counted on swap but not yet in mapcount i.e. 2518 2515 * before page_add_anon_rmap() and swap_free(); try_to_free_swap() 2519 2516 * must be called after the swap_free(), or it will never succeed. 2520 - * Because delete_from_swap_page() may be called by reuse_swap_page(), 2521 - * mem_cgroup_commit_charge_swapin() may not be able to find swp_entry 2522 - * in page->private. In this case, a record in swap_cgroup is silently 2523 - * discarded at swap_free(). 
2524 2517 */ 2525 2518 2526 2519 inc_mm_counter_fast(mm, MM_ANONPAGES); ··· 2532 2533 if (pte_swp_soft_dirty(orig_pte)) 2533 2534 pte = pte_mksoft_dirty(pte); 2534 2535 set_pte_at(mm, address, page_table, pte); 2535 - if (page == swapcache) 2536 + if (page == swapcache) { 2536 2537 do_page_add_anon_rmap(page, vma, address, exclusive); 2537 - else /* ksm created a completely new copy */ 2538 + mem_cgroup_commit_charge(page, memcg, true); 2539 + } else { /* ksm created a completely new copy */ 2538 2540 page_add_new_anon_rmap(page, vma, address); 2539 - /* It's better to call commit-charge after rmap is established */ 2540 - mem_cgroup_commit_charge_swapin(page, ptr); 2541 + mem_cgroup_commit_charge(page, memcg, false); 2542 + lru_cache_add_active_or_unevictable(page, vma); 2543 + } 2541 2544 2542 2545 swap_free(entry); 2543 2546 if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page)) ··· 2572 2571 out: 2573 2572 return ret; 2574 2573 out_nomap: 2575 - mem_cgroup_cancel_charge_swapin(ptr); 2574 + mem_cgroup_cancel_charge(page, memcg); 2576 2575 pte_unmap_unlock(page_table, ptl); 2577 2576 out_page: 2578 2577 unlock_page(page); ··· 2628 2627 unsigned long address, pte_t *page_table, pmd_t *pmd, 2629 2628 unsigned int flags) 2630 2629 { 2630 + struct mem_cgroup *memcg; 2631 2631 struct page *page; 2632 2632 spinlock_t *ptl; 2633 2633 pte_t entry; ··· 2662 2660 */ 2663 2661 __SetPageUptodate(page); 2664 2662 2665 - if (mem_cgroup_charge_anon(page, mm, GFP_KERNEL)) 2663 + if (mem_cgroup_try_charge(page, mm, GFP_KERNEL, &memcg)) 2666 2664 goto oom_free_page; 2667 2665 2668 2666 entry = mk_pte(page, vma->vm_page_prot); ··· 2675 2673 2676 2674 inc_mm_counter_fast(mm, MM_ANONPAGES); 2677 2675 page_add_new_anon_rmap(page, vma, address); 2676 + mem_cgroup_commit_charge(page, memcg, false); 2677 + lru_cache_add_active_or_unevictable(page, vma); 2678 2678 setpte: 2679 2679 set_pte_at(mm, address, page_table, entry); 2680 2680 ··· 2686 2682 pte_unmap_unlock(page_table, ptl); 2687 2683 return 0; 2688 2684 release: 2689 - mem_cgroup_uncharge_page(page); 2685 + mem_cgroup_cancel_charge(page, memcg); 2690 2686 page_cache_release(page); 2691 2687 goto unlock; 2692 2688 oom_free_page: ··· 2923 2919 pgoff_t pgoff, unsigned int flags, pte_t orig_pte) 2924 2920 { 2925 2921 struct page *fault_page, *new_page; 2922 + struct mem_cgroup *memcg; 2926 2923 spinlock_t *ptl; 2927 2924 pte_t *pte; 2928 2925 int ret; ··· 2935 2930 if (!new_page) 2936 2931 return VM_FAULT_OOM; 2937 2932 2938 - if (mem_cgroup_charge_anon(new_page, mm, GFP_KERNEL)) { 2933 + if (mem_cgroup_try_charge(new_page, mm, GFP_KERNEL, &memcg)) { 2939 2934 page_cache_release(new_page); 2940 2935 return VM_FAULT_OOM; 2941 2936 } ··· 2955 2950 goto uncharge_out; 2956 2951 } 2957 2952 do_set_pte(vma, address, new_page, pte, true, true); 2953 + mem_cgroup_commit_charge(new_page, memcg, false); 2954 + lru_cache_add_active_or_unevictable(new_page, vma); 2958 2955 pte_unmap_unlock(pte, ptl); 2959 2956 unlock_page(fault_page); 2960 2957 page_cache_release(fault_page); 2961 2958 return ret; 2962 2959 uncharge_out: 2963 - mem_cgroup_uncharge_page(new_page); 2960 + mem_cgroup_cancel_charge(new_page, memcg); 2964 2961 page_cache_release(new_page); 2965 2962 return ret; 2966 2963 }
-19
mm/rmap.c
··· 1032 1032 __mod_zone_page_state(page_zone(page), NR_ANON_PAGES, 1033 1033 hpage_nr_pages(page)); 1034 1034 __page_set_anon_rmap(page, vma, address, 1); 1035 - 1036 - VM_BUG_ON_PAGE(PageLRU(page), page); 1037 - if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED)) { 1038 - SetPageActive(page); 1039 - lru_cache_add(page); 1040 - return; 1041 - } 1042 - 1043 - if (!TestSetPageMlocked(page)) { 1044 - /* 1045 - * We use the irq-unsafe __mod_zone_page_stat because this 1046 - * counter is not modified from interrupt context, and the pte 1047 - * lock is held(spinlock), which implies preemption disabled. 1048 - */ 1049 - __mod_zone_page_state(page_zone(page), NR_MLOCK, 1050 - hpage_nr_pages(page)); 1051 - count_vm_event(UNEVICTABLE_PGMLOCKED); 1052 - } 1053 - add_page_to_unevictable_list(page); 1054 1035 } 1055 1036 1056 1037 /**
+23 -14
mm/shmem.c
··· 621 621 radswap = swp_to_radix_entry(swap); 622 622 index = radix_tree_locate_item(&mapping->page_tree, radswap); 623 623 if (index == -1) 624 - return 0; 624 + return -EAGAIN; /* tell shmem_unuse we found nothing */ 625 625 626 626 /* 627 627 * Move _head_ to start search for next from here. ··· 680 680 spin_unlock(&info->lock); 681 681 swap_free(swap); 682 682 } 683 - error = 1; /* not an error, but entry was found */ 684 683 } 685 684 return error; 686 685 } ··· 691 692 { 692 693 struct list_head *this, *next; 693 694 struct shmem_inode_info *info; 694 - int found = 0; 695 + struct mem_cgroup *memcg; 695 696 int error = 0; 696 697 697 698 /* ··· 706 707 * the shmem_swaplist_mutex which might hold up shmem_writepage(). 707 708 * Charged back to the user (not to caller) when swap account is used. 708 709 */ 709 - error = mem_cgroup_charge_file(page, current->mm, GFP_KERNEL); 710 + error = mem_cgroup_try_charge(page, current->mm, GFP_KERNEL, &memcg); 710 711 if (error) 711 712 goto out; 712 713 /* No radix_tree_preload: swap entry keeps a place for page in tree */ 714 + error = -EAGAIN; 713 715 714 716 mutex_lock(&shmem_swaplist_mutex); 715 717 list_for_each_safe(this, next, &shmem_swaplist) { 716 718 info = list_entry(this, struct shmem_inode_info, swaplist); 717 719 if (info->swapped) 718 - found = shmem_unuse_inode(info, swap, &page); 720 + error = shmem_unuse_inode(info, swap, &page); 719 721 else 720 722 list_del_init(&info->swaplist); 721 723 cond_resched(); 722 - if (found) 724 + if (error != -EAGAIN) 723 725 break; 726 + /* found nothing in this: move on to search the next */ 724 727 } 725 728 mutex_unlock(&shmem_swaplist_mutex); 726 729 727 - if (found < 0) 728 - error = found; 730 + if (error) { 731 + if (error != -ENOMEM) 732 + error = 0; 733 + mem_cgroup_cancel_charge(page, memcg); 734 + } else 735 + mem_cgroup_commit_charge(page, memcg, true); 729 736 out: 730 737 unlock_page(page); 731 738 page_cache_release(page); ··· 1035 1030 struct address_space *mapping = inode->i_mapping; 1036 1031 struct shmem_inode_info *info; 1037 1032 struct shmem_sb_info *sbinfo; 1033 + struct mem_cgroup *memcg; 1038 1034 struct page *page; 1039 1035 swp_entry_t swap; 1040 1036 int error; ··· 1114 1108 goto failed; 1115 1109 } 1116 1110 1117 - error = mem_cgroup_charge_file(page, current->mm, 1118 - gfp & GFP_RECLAIM_MASK); 1111 + error = mem_cgroup_try_charge(page, current->mm, gfp, &memcg); 1119 1112 if (!error) { 1120 1113 error = shmem_add_to_page_cache(page, mapping, index, 1121 1114 swp_to_radix_entry(swap)); ··· 1130 1125 * Reset swap.val? No, leave it so "failed" goes back to 1131 1126 * "repeat": reading a hole and writing should succeed. 
1132 1127 */ 1133 - if (error) 1128 + if (error) { 1129 + mem_cgroup_cancel_charge(page, memcg); 1134 1130 delete_from_swap_cache(page); 1131 + } 1135 1132 } 1136 1133 if (error) 1137 1134 goto failed; 1135 + 1136 + mem_cgroup_commit_charge(page, memcg, true); 1138 1137 1139 1138 spin_lock(&info->lock); 1140 1139 info->swapped--; ··· 1177 1168 if (sgp == SGP_WRITE) 1178 1169 __SetPageReferenced(page); 1179 1170 1180 - error = mem_cgroup_charge_file(page, current->mm, 1181 - gfp & GFP_RECLAIM_MASK); 1171 + error = mem_cgroup_try_charge(page, current->mm, gfp, &memcg); 1182 1172 if (error) 1183 1173 goto decused; 1184 1174 error = radix_tree_maybe_preload(gfp & GFP_RECLAIM_MASK); ··· 1187 1179 radix_tree_preload_end(); 1188 1180 } 1189 1181 if (error) { 1190 - mem_cgroup_uncharge_cache_page(page); 1182 + mem_cgroup_cancel_charge(page, memcg); 1191 1183 goto decused; 1192 1184 } 1185 + mem_cgroup_commit_charge(page, memcg, false); 1193 1186 lru_cache_add_anon(page); 1194 1187 1195 1188 spin_lock(&info->lock);
+34
mm/swap.c
··· 687 687 spin_unlock_irq(&zone->lru_lock); 688 688 } 689 689 690 + /** 691 + * lru_cache_add_active_or_unevictable 692 + * @page: the page to be added to LRU 693 + * @vma: vma in which page is mapped for determining reclaimability 694 + * 695 + * Place @page on the active or unevictable LRU list, depending on its 696 + * evictability. Note that if the page is not evictable, it goes 697 + * directly back onto it's zone's unevictable list, it does NOT use a 698 + * per cpu pagevec. 699 + */ 700 + void lru_cache_add_active_or_unevictable(struct page *page, 701 + struct vm_area_struct *vma) 702 + { 703 + VM_BUG_ON_PAGE(PageLRU(page), page); 704 + 705 + if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED)) { 706 + SetPageActive(page); 707 + lru_cache_add(page); 708 + return; 709 + } 710 + 711 + if (!TestSetPageMlocked(page)) { 712 + /* 713 + * We use the irq-unsafe __mod_zone_page_stat because this 714 + * counter is not modified from interrupt context, and the pte 715 + * lock is held(spinlock), which implies preemption disabled. 716 + */ 717 + __mod_zone_page_state(page_zone(page), NR_MLOCK, 718 + hpage_nr_pages(page)); 719 + count_vm_event(UNEVICTABLE_PGMLOCKED); 720 + } 721 + add_page_to_unevictable_list(page); 722 + } 723 + 690 724 /* 691 725 * If the page can not be invalidated, it is moved to the 692 726 * inactive list to speed up its reclaim. It is moved to the
+8 -6
mm/swapfile.c
··· 1106 1106 if (unlikely(!page)) 1107 1107 return -ENOMEM; 1108 1108 1109 - if (mem_cgroup_try_charge_swapin(vma->vm_mm, page, 1110 - GFP_KERNEL, &memcg)) { 1109 + if (mem_cgroup_try_charge(page, vma->vm_mm, GFP_KERNEL, &memcg)) { 1111 1110 ret = -ENOMEM; 1112 1111 goto out_nolock; 1113 1112 } 1114 1113 1115 1114 pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl); 1116 1115 if (unlikely(!maybe_same_pte(*pte, swp_entry_to_pte(entry)))) { 1117 - mem_cgroup_cancel_charge_swapin(memcg); 1116 + mem_cgroup_cancel_charge(page, memcg); 1118 1117 ret = 0; 1119 1118 goto out; 1120 1119 } ··· 1123 1124 get_page(page); 1124 1125 set_pte_at(vma->vm_mm, addr, pte, 1125 1126 pte_mkold(mk_pte(page, vma->vm_page_prot))); 1126 - if (page == swapcache) 1127 + if (page == swapcache) { 1127 1128 page_add_anon_rmap(page, vma, addr); 1128 - else /* ksm created a completely new copy */ 1129 + mem_cgroup_commit_charge(page, memcg, true); 1130 + } else { /* ksm created a completely new copy */ 1129 1131 page_add_new_anon_rmap(page, vma, addr); 1130 - mem_cgroup_commit_charge_swapin(page, memcg); 1132 + mem_cgroup_commit_charge(page, memcg, false); 1133 + lru_cache_add_active_or_unevictable(page, vma); 1134 + } 1131 1135 swap_free(entry); 1132 1136 /* 1133 1137 * Move the page to the active list so it is not