Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

memcg: css_put after remove_list

mem_cgroup_uncharge_page does css_put on the mem_cgroup before uncharging from
it, and before removing page_cgroup from one of its lru lists: isn't there a
danger that struct mem_cgroup memory could be freed and reused before
completing that, so corrupting something? Never seen it, and for all I know
there may be other constraints which make it impossible; but let's be
defensive and reverse the ordering there.

mem_cgroup_force_empty_list is safe because there's an extra css_get around
all its work; but even so, change its ordering the same way round, to help
get into the habit of doing it like this.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hirokazu Takahashi <taka@valinux.co.jp>
Cc: YAMAMOTO Takashi <yamamoto@valinux.co.jp>
Cc: Paul Menage <menage@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Hugh Dickins, committed by Linus Torvalds
6d48ff8b b9c565d5

+6 -6
mm/memcontrol.c
@@ -665,14 +665,14 @@
 	page_assign_page_cgroup(page, NULL);
 	unlock_page_cgroup(page);
 
-	mem = pc->mem_cgroup;
-	css_put(&mem->css);
-	res_counter_uncharge(&mem->res, PAGE_SIZE);
-
 	mz = page_cgroup_zoneinfo(pc);
 	spin_lock_irqsave(&mz->lru_lock, flags);
 	__mem_cgroup_remove_list(pc);
 	spin_unlock_irqrestore(&mz->lru_lock, flags);
+
+	mem = pc->mem_cgroup;
+	res_counter_uncharge(&mem->res, PAGE_SIZE);
+	css_put(&mem->css);
 
 	kfree(pc);
 	return;
@@ -774,9 +774,9 @@
 	if (page_get_page_cgroup(page) == pc) {
 		page_assign_page_cgroup(page, NULL);
 		unlock_page_cgroup(page);
-		css_put(&mem->css);
-		res_counter_uncharge(&mem->res, PAGE_SIZE);
 		__mem_cgroup_remove_list(pc);
+		res_counter_uncharge(&mem->res, PAGE_SIZE);
+		css_put(&mem->css);
 		kfree(pc);
 	} else {
 		/* racing uncharge: let page go then retry */