Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: memcontrol: reuse memory cgroup ID for kmem ID

There are two ID allocators used by memory cgroup: an IDR for the memory
cgroup ID and an IDA (memcg_cache_ida) for the kmem ID. The maximum ID of
both is 64Ki, so each of them can limit the total number of memory cgroups.
Actually, we can reuse the memory cgroup ID as the kmem ID and drop the
separate allocator to simplify the code.

Link: https://lkml.kernel.org/r/20220228122126.37293-14-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Cc: Alex Shi <alexs@kernel.org>
Cc: Anna Schumaker <Anna.Schumaker@Netapp.com>
Cc: Chao Yu <chao@kernel.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Fam Zheng <fam.zheng@bytedance.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kari Argillander <kari.argillander@gmail.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Cc: Yang Shi <shy828301@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Muchun Song, committed by Linus Torvalds
f9c69d63 bbca91cc

+3 -36
mm/memcontrol.c
···
 }
 
 /*
- * This will be used as a shrinker list's index.
- * The main reason for not using cgroup id for this:
- * this works better in sparse environments, where we have a lot of memcgs,
- * but only a few kmem-limited.
- */
-static DEFINE_IDA(memcg_cache_ida);
-
-/*
- * MAX_SIZE should be as large as the number of cgrp_ids. Ideally, we could get
- * this constant directly from cgroup, but it is understandable that this is
- * better kept as an internal representation in cgroup.c. In any case, the
- * cgrp_id space is not getting any smaller, and we don't have to necessarily
- * increase ours as well if it increases.
- */
-#define MEMCG_CACHES_MAX_SIZE MEM_CGROUP_ID_MAX
-
-/*
  * A lot of the calls to the cache allocation functions are expected to be
  * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
  * conditional to this static branch, we'll have to allow modules that does
···
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
 	struct obj_cgroup *objcg;
-	int memcg_id;
 
 	if (cgroup_memory_nokmem)
 		return 0;
 
 	if (unlikely(mem_cgroup_is_root(memcg)))
 		return 0;
 
-	memcg_id = ida_alloc_max(&memcg_cache_ida, MEMCG_CACHES_MAX_SIZE - 1,
-				 GFP_KERNEL);
-	if (memcg_id < 0)
-		return memcg_id;
-
 	objcg = obj_cgroup_alloc();
-	if (!objcg) {
-		ida_free(&memcg_cache_ida, memcg_id);
+	if (!objcg)
 		return -ENOMEM;
-	}
+
 	objcg->memcg = memcg;
 	rcu_assign_pointer(memcg->objcg, objcg);
 
 	static_branch_enable(&memcg_kmem_enabled_key);
 
-	memcg->kmemcg_id = memcg_id;
+	memcg->kmemcg_id = memcg->id.id;
 
 	return 0;
 }
···
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
 	struct mem_cgroup *parent;
-	int kmemcg_id;
 
 	if (cgroup_memory_nokmem)
 		return;
···
 	memcg_reparent_objcgs(memcg, parent);
 
 	/*
-	 * memcg_reparent_list_lrus() can change memcg->kmemcg_id.
-	 * Cache it to local @kmemcg_id.
-	 */
-	kmemcg_id = memcg->kmemcg_id;
-
-	/*
 	 * After we have finished memcg_reparent_objcgs(), all list_lrus
 	 * corresponding to this cgroup are guaranteed to remain empty.
 	 * The ordering is imposed by list_lru_node->lock taken by
 	 * memcg_reparent_list_lrus().
 	 */
 	memcg_reparent_list_lrus(memcg, parent);
-
-	ida_free(&memcg_cache_ida, kmemcg_id);
 }
 #else
 static int memcg_online_kmem(struct mem_cgroup *memcg)