mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation

After commit d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than
order-1 page to page allocator"), SLAB passes large (> PAGE_SIZE * 2)
requests to the buddy allocator, as SLUB does.

SLAB has been using kmalloc caches to allocate the freelist_idx_t array
for off-slab caches. But after that commit, freelist_size can be bigger
than KMALLOC_MAX_CACHE_SIZE.
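
To illustrate the constraint (a simplified sketch in the context of
mm/slab.c, not kernel source verbatim; the helper name is made up): once
the freelist outgrows KMALLOC_MAX_CACHE_SIZE there is no kmalloc cache
that could back it, while kmalloc_node() can still serve such a size by
falling through to the page allocator.

	/*
	 * Simplified sketch: the off-slab freelist needs
	 * num * sizeof(freelist_idx_t) bytes.  Beyond
	 * KMALLOC_MAX_CACHE_SIZE (2 * PAGE_SIZE after d6a71648dbc0)
	 * no kmalloc cache covers that size, but kmalloc_node() still
	 * can, by going straight to the buddy allocator.
	 */
	static bool freelist_needs_page_allocator(unsigned int num)
	{
		size_t freelist_size = num * sizeof(freelist_idx_t);

		return freelist_size > KMALLOC_MAX_CACHE_SIZE;
	}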

Instead of keeping a pointer to the kmalloc cache in struct kmem_cache,
use kmalloc_node() and only check whether the kmalloc cache is off-slab
during calculate_slab_order(). If freelist_size > KMALLOC_MAX_CACHE_SIZE,
the looping condition cannot occur, because the freelist_idx_t array is
allocated directly from the buddy allocator.
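
In calculate_slab_order() the "enough benefit" comparison then uses the
size the freelist will actually consume. A condensed restatement of that
logic (helper name is ours, purely for illustration; assumes the mm/slab.c
context):

	static size_t off_slab_freelist_footprint(size_t freelist_size)
	{
		struct kmem_cache *freelist_cache;

		if (freelist_size > KMALLOC_MAX_CACHE_SIZE)
			/* buddy-backed: a whole power-of-two page block */
			return PAGE_SIZE << get_order(freelist_size);

		freelist_cache = kmalloc_slab(freelist_size, 0u);
		if (!freelist_cache || OFF_SLAB(freelist_cache))
			return SIZE_MAX;	/* caller skips this order */

		/* cache-backed: the kmalloc cache's object size */
		return freelist_cache->size;
	}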

Link: https://lore.kernel.org/all/20221014205818.GA1428667@roeck-us.net/
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes: d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator")
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

---
 include/linux/slab_def.h |  1 -
 mm/slab.c                | 35 ++++++++++++++++++-----------------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -33,7 +33,6 @@
 
 	size_t colour;			/* cache colouring range */
 	unsigned int colour_off;	/* colour offset */
-	struct kmem_cache *freelist_cache;
 	unsigned int freelist_size;
 
 	/* constructor func */

diff --git a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1619,7 +1619,7 @@
 	 * although actual page can be freed in rcu context
 	 */
 	if (OFF_SLAB(cachep))
-		kmem_cache_free(cachep->freelist_cache, freelist);
+		kfree(freelist);
 }
 
 /*
@@ -1671,21 +1671,27 @@
 	if (flags & CFLGS_OFF_SLAB) {
 		struct kmem_cache *freelist_cache;
 		size_t freelist_size;
+		size_t freelist_cache_size;
 
 		freelist_size = num * sizeof(freelist_idx_t);
-		freelist_cache = kmalloc_slab(freelist_size, 0u);
-		if (!freelist_cache)
-			continue;
+		if (freelist_size > KMALLOC_MAX_CACHE_SIZE) {
+			freelist_cache_size = PAGE_SIZE << get_order(freelist_size);
+		} else {
+			freelist_cache = kmalloc_slab(freelist_size, 0u);
+			if (!freelist_cache)
+				continue;
+			freelist_cache_size = freelist_cache->size;
 
-		/*
-		 * Needed to avoid possible looping condition
-		 * in cache_grow_begin()
-		 */
-		if (OFF_SLAB(freelist_cache))
-			continue;
+			/*
+			 * Needed to avoid possible looping condition
+			 * in cache_grow_begin()
+			 */
+			if (OFF_SLAB(freelist_cache))
+				continue;
+		}
 
 		/* check if off slab has enough benefit */
-		if (freelist_cache->size > cachep->size / 2)
+		if (freelist_cache_size > cachep->size / 2)
 			continue;
 	}
 
@@ -2067,11 +2061,6 @@
 	cachep->flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
 #endif
 
-	if (OFF_SLAB(cachep)) {
-		cachep->freelist_cache =
-			kmalloc_slab(cachep->freelist_size, 0u);
-	}
-
 	err = setup_cpu_cache(cachep, gfp);
 	if (err) {
 		__kmem_cache_release(cachep);
@@ -2293,7 +2292,7 @@
 		freelist = NULL;
 	else if (OFF_SLAB(cachep)) {
 		/* Slab management obj is off-slab. */
-		freelist = kmem_cache_alloc_node(cachep->freelist_cache,
+		freelist = kmalloc_node(cachep->freelist_size,
 						 local_flags, nodeid);
 	} else {
 		/* We will use last bytes at the slab for freelist */
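
Taken together, the hunks let one generic allocation/free pair manage the
off-slab freelist, which is why struct kmem_cache no longer needs to
remember which kmalloc cache backs it. A minimal sketch of the resulting
pairing (helper names are ours, for illustration only):

	/*
	 * cache_grow_begin() path: kmalloc cache for small freelists,
	 * buddy allocator for anything above KMALLOC_MAX_CACHE_SIZE.
	 */
	static void *alloc_off_slab_freelist(struct kmem_cache *cachep,
					     gfp_t local_flags, int nodeid)
	{
		return kmalloc_node(cachep->freelist_size, local_flags, nodeid);
	}

	/* slab_destroy() path: kfree() undoes either backing store. */
	static void free_off_slab_freelist(void *freelist)
	{
		kfree(freelist);
	}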