Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

slab: Add SL_partial flag

Give slab its own name for this flag. Keep the PG_workingset alias
information in one place.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Link: https://patch.msgid.link/20250611155916.2579160-4-willy@infradead.org
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Authored by Matthew Wilcox (Oracle), committed by Vlastimil Babka
c5c44900 30908096

+10 -12
mm/slub.c
···
  * The partially empty slabs cached on the CPU partial list are used
  * for performance reasons, which speeds up the allocation process.
  * These slabs are not frozen, but are also exempt from list management,
- * by clearing the PG_workingset flag when moving out of the node
+ * by clearing the SL_partial flag when moving out of the node
  * partial list. Please see __slab_free() for more details.
  *
  * To sum up, the current scheme is:
- * - node partial slab: PG_Workingset && !frozen
- * - cpu partial slab: !PG_Workingset && !frozen
- * - cpu slab: !PG_Workingset && frozen
- * - full slab: !PG_Workingset && !frozen
+ * - node partial slab: SL_partial && !frozen
+ * - cpu partial slab: !SL_partial && !frozen
+ * - cpu slab: !SL_partial && frozen
+ * - full slab: !SL_partial && !frozen
  *
  * list_lock
  *
···
 /**
  * enum slab_flags - How the slab flags bits are used.
  * @SL_locked: Is locked with slab_lock()
+ * @SL_partial: On the per-node partial list
  *
  * The slab flags share space with the page flags but some bits have
  * different interpretations. The high bits are used for information
···
  */
 enum slab_flags {
 	SL_locked = PG_locked,
+	SL_partial = PG_workingset,	/* Historical reasons for this bit */
 };
···
 	free_slab(s, slab);
 }
 
-/*
- * SLUB reuses PG_workingset bit to keep track of whether it's on
- * the per-node partial list.
- */
 static inline bool slab_test_node_partial(const struct slab *slab)
 {
-	return folio_test_workingset(slab_folio(slab));
+	return test_bit(SL_partial, &slab->flags);
 }
 
 static inline void slab_set_node_partial(struct slab *slab)
 {
-	set_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	set_bit(SL_partial, &slab->flags);
 }
 
 static inline void slab_clear_node_partial(struct slab *slab)
 {
-	clear_bit(PG_workingset, folio_flags(slab_folio(slab), 0));
+	clear_bit(SL_partial, &slab->flags);
 }
 
 /*