Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

memcg: clean up checking of the disabled flag

Those checks are unnecessary: when the subsystem is disabled it cannot be mounted, so those functions will never be called.

The check is still needed in functions that are called from places other than the cgroup callbacks.
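The rule can be sketched in plain C. This is a minimal standalone illustration, not kernel code: `struct subsys`, `mem_subsys`, `charge_page` and `populate_files` are hypothetical stand-ins for `mem_cgroup_subsys` and its entry points.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a cgroup subsystem with a disabled flag. */
struct subsys {
        bool disabled;
};

static struct subsys mem_subsys = { .disabled = true };

/* Reachable from generic MM paths even when the controller was never
 * mounted, so it must check the flag itself and bail out early. */
static int charge_page(void)
{
        if (mem_subsys.disabled)
                return 0;       /* nothing to account; report success */
        return 1;               /* ...real accounting would happen here... */
}

/* Invoked only through the mounted cgroup filesystem; a disabled
 * subsystem cannot be mounted, so no check is needed here. */
static int populate_files(void)
{
        return 1;               /* ...add control files unconditionally... */
}
```

The asymmetry is the whole point of the patch: checks move out of the filesystem-only callbacks and into the entry points the rest of the kernel can reach directly.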

[hugh@veritas.com: further checking of disabled flag]
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Li Zefan, committed by Linus Torvalds
cede86ac accf163e

+12 -11
mm/memcontrol.c
···
 	struct mem_cgroup_per_zone *mz;
 	unsigned long flags;
 
+	if (mem_cgroup_subsys.disabled)
+		return;
+
 	/*
 	 * We cannot lock_page_cgroup while holding zone's lru_lock,
 	 * because other holders of lock_page_cgroup can be interrupted
···
 	unsigned long nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
 	struct mem_cgroup_per_zone *mz;
 
-	if (mem_cgroup_subsys.disabled)
-		return 0;
-
 	pc = kmem_cache_alloc(page_cgroup_cache, gfp_mask);
 	if (unlikely(pc == NULL))
 		goto err;
···
 
 int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 {
+	if (mem_cgroup_subsys.disabled)
+		return 0;
+
 	/*
 	 * If already mapped, we don't have to account.
 	 * If page cache, page->mapping has address_space.
···
 int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
 				gfp_t gfp_mask)
 {
+	if (mem_cgroup_subsys.disabled)
+		return 0;
+
 	/*
 	 * Corner case handling. This is called from add_to_page_cache()
 	 * in usual. But some FS (shmem) precharges this page before calling it
···
 	int progress = 0;
 	int retry = MEM_CGROUP_RECLAIM_RETRIES;
 
+	if (mem_cgroup_subsys.disabled)
+		return 0;
+
 	rcu_read_lock();
 	mem = mem_cgroup_from_task(rcu_dereference(mm->owner));
 	css_get(&mem->css);
···
 {
 	int ret = -EBUSY;
 	int node, zid;
-
-	if (mem_cgroup_subsys.disabled)
-		return 0;
 
 	css_get(&mem->css);
 	/*
···
 static int mem_cgroup_populate(struct cgroup_subsys *ss,
 				struct cgroup *cont)
 {
-	if (mem_cgroup_subsys.disabled)
-		return 0;
 	return cgroup_add_files(cont, ss, mem_cgroup_files,
 				ARRAY_SIZE(mem_cgroup_files));
 }
···
 {
 	struct mm_struct *mm;
 	struct mem_cgroup *mem, *old_mem;
-
-	if (mem_cgroup_subsys.disabled)
-		return;
 
 	mm = get_task_mm(p);
 	if (mm == NULL)