Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/memcg: align memcg_data define condition

commit 21c690a349ba ("mm: introduce slabobj_ext to support slab object
extensions") changed the define condition of folio/page->memcg_data
from MEMCG to SLAB_OBJ_EXT, which exposes memcg_data even when
CONFIG_MEMCG is disabled.

As Vlastimil Babka suggested, add an _unused_slab_obj_exts placeholder
so that SLAB_MATCH can still match slab.obj_exts while !MEMCG. That
resolves the match issue and cleans up the feature logic.

Signed-off-by: Alex Shi (Tencent) <alexs@kernel.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Yoann Congal <yoann.congal@smile.fr>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Authored by Alex Shi (Tencent), committed by Vlastimil Babka
a52c6330 7b1fdf2b

2 files changed, +10 -3

include/linux/mm_types.h (+7 -2)

@@ -169,8 +169,10 @@
 	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
 	atomic_t _refcount;
 
-#ifdef CONFIG_SLAB_OBJ_EXT
+#ifdef CONFIG_MEMCG
 	unsigned long memcg_data;
+#elif defined(CONFIG_SLAB_OBJ_EXT)
+	unsigned long _unused_slab_obj_exts;
 #endif
 
 	/*
@@ -298,7 +300,8 @@
  * @_hugetlb_cgroup_rsvd: Do not use directly, use accessor in hugetlb_cgroup.h.
  * @_hugetlb_hwpoison: Do not use directly, call raw_hwp_list_head().
  * @_deferred_list: Folios to be split under memory pressure.
+ * @_unused_slab_obj_exts: Placeholder to match obj_exts in struct slab.
  *
  * A folio is a physically, virtually and logically contiguous set
  * of bytes. It is a power-of-two in size, and it is aligned to that
@@ -332,8 +335,10 @@
 	};
 	atomic_t _mapcount;
 	atomic_t _refcount;
-#ifdef CONFIG_SLAB_OBJ_EXT
+#ifdef CONFIG_MEMCG
 	unsigned long memcg_data;
+#elif defined(CONFIG_SLAB_OBJ_EXT)
+	unsigned long _unused_slab_obj_exts;
 #endif
 #if defined(WANT_PAGE_VIRTUAL)
 	void *virtual;
mm/slab.h (+3 -1)

@@ -97,8 +97,10 @@
 SLAB_MATCH(flags, __page_flags);
 SLAB_MATCH(compound_head, slab_cache);	/* Ensure bit 0 is clear */
 SLAB_MATCH(_refcount, __page_refcount);
-#ifdef CONFIG_SLAB_OBJ_EXT
+#ifdef CONFIG_MEMCG
 SLAB_MATCH(memcg_data, obj_exts);
+#elif defined(CONFIG_SLAB_OBJ_EXT)
+SLAB_MATCH(_unused_slab_obj_exts, obj_exts);
 #endif
 #undef SLAB_MATCH
 static_assert(sizeof(struct slab) <= sizeof(struct page));