slub: Fallback to kmalloc_large for failing higher order allocs

Slub already has two ways of allocating an object. One is via its own
slab logic and the other is via a call to kmalloc_large(), which hands
the allocation off to the page allocator. kmalloc_large() is typically
used for objects >= PAGE_SIZE.
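
Roughly, that split looks like the sketch below (simplified, not the
exact kmalloc() inline; kmalloc_slab_for() is a hypothetical stand-in
for the size-to-cache lookup):

/*
 * Simplified sketch of the two allocation paths. kmalloc_slab_for() is
 * a hypothetical placeholder for the lookup of the per-size kmalloc cache.
 */
static inline void *kmalloc_sketch(size_t size, gfp_t flags)
{
	if (size >= PAGE_SIZE)
		/* Hand off to the page allocator (compound page, freed by kfree()). */
		return kmalloc_large(size, flags);

	/* Normal case: allocate from the per-size kmalloc slab cache. */
	return kmem_cache_alloc(kmalloc_slab_for(size), flags);
}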

We can use that handoff to avoid failing if a higher order kmalloc slab
allocation cannot be satisfied by the page allocator. If we reach the
out-of-memory path we simply try a kmalloc_large() instead. kfree() can
already handle the case of an object that was allocated via the page
allocator, so this works just fine (apart from object
accounting...).
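
That works because kfree() keys off the page flags rather than any slab
metadata. A sketch approximating the existing kfree() path (names and
details are approximate, not a verbatim quote of mm/slub.c):

void kfree_sketch(const void *x)
{
	struct page *page = virt_to_head_page(x);

	if (unlikely(!PageSlab(page))) {
		/*
		 * The object came from kmalloc_large(): a compound page
		 * without PG_slab set, so hand it straight back to the
		 * page allocator.
		 */
		put_page(page);
		return;
	}
	/* Regular slab object: free it back into its kmem_cache. */
	slab_free(page->slab, page, (void *)x, __builtin_return_address(0));
}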

For any kmalloc slab that already requires higher order allocs (which
makes it impossible to use the page allocator fastpath!), we just use
PAGE_ALLOC_COSTLY_ORDER to get the largest number of objects in one go
from the page allocator slowpath.

On a 4k platform this patch leads to the following use of higher order
pages for the kmalloc slabs:

   8 ... 1024	order 0
2048 ... 4096	order 3 (4k slab only after the next patch)

We may waste some space if the fallback occurs for a 2k slab, but we
are always able to fall back to an order 0 allocation.
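
The table above follows directly from the arithmetic in the patch. The
small user-space model below reproduces it (4k PAGE_SIZE and
PAGE_ALLOC_COSTLY_ORDER = 3; the slub_min_objects and slub_max_order
values are illustrative, not read from a running kernel):

#include <stdio.h>

#define PAGE_SIZE		4096
#define PAGE_ALLOC_COSTLY_ORDER	3

static const int slub_min_objects = 4;	/* illustrative value */
static const int slub_max_order = 1;	/* illustrative value */

static int kmalloc_slab_order(int size)
{
	/*
	 * Mirrors the check added in kmem_cache_open(): would an order 0
	 * page hold too few objects of this size?
	 */
	if (PAGE_SIZE / size < slub_min_objects)
		return slub_max_order > PAGE_ALLOC_COSTLY_ORDER ?
				slub_max_order : PAGE_ALLOC_COSTLY_ORDER;
	return 0;	/* stand-in for calculate_order(), which picks order 0 here */
}

int main(void)
{
	int size;

	for (size = 8; size <= 4096; size <<= 1) {
		int order = kmalloc_slab_order(size);
		int slab_bytes = PAGE_SIZE << order;

		printf("kmalloc-%-4d  order %d  (%3d objects per slab)\n",
		       size, order, slab_bytes / size);
	}
	return 0;
}

For the 2048 byte case an emergency order 0 fallback holds a single
object and wastes the other half of the page, which is the waste
mentioned above.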

Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Christoph Lameter <clameter@sgi.com>


+38 -5
mm/slub.c
···
 /* Internal SLUB flags */
 #define __OBJECT_POISON		0x80000000 /* Poison object */
 #define __SYSFS_ADD_DEFERRED	0x40000000 /* Not yet visible via sysfs */
+#define __KMALLOC_CACHE		0x20000000 /* objects freed using kfree */
+#define __PAGE_ALLOC_FALLBACK	0x10000000 /* Allow fallback to page alloc */
 
 /* Not all arches define cache_line_size */
 #ifndef cache_line_size
···
 unlock_out:
 	slab_unlock(c->page);
 	stat(c, ALLOC_SLOWPATH);
-out:
 #ifdef SLUB_FASTPATH
 	local_irq_restore(flags);
 #endif
···
 		c->page = new;
 		goto load_freelist;
 	}
-	object = NULL;
-	goto out;
+#ifdef SLUB_FASTPATH
+	local_irq_restore(flags);
+#endif
+	/*
+	 * No memory available.
+	 *
+	 * If the slab uses higher order allocs but the object is
+	 * smaller than a page size then we can fallback in emergencies
+	 * to the page allocator via kmalloc_large. The page allocator may
+	 * have failed to obtain a higher order page and we can try to
+	 * allocate a single page if the object fits into a single page.
+	 * That is only possible if certain conditions are met that are being
+	 * checked when a slab is created.
+	 */
+	if (!(gfpflags & __GFP_NORETRY) && (s->flags & __PAGE_ALLOC_FALLBACK))
+		return kmalloc_large(s->objsize, gfpflags);
+
+	return NULL;
 debug:
 	object = c->page->freelist;
 	if (!alloc_debug_processing(s, c->page, object, addr))
···
 	size = ALIGN(size, align);
 	s->size = size;
 
-	s->order = calculate_order(size);
+	if ((flags & __KMALLOC_CACHE) &&
+			PAGE_SIZE / size < slub_min_objects) {
+		/*
+		 * Kmalloc cache that would not have enough objects in
+		 * an order 0 page. Kmalloc slabs can fallback to
+		 * page allocator order 0 allocs so take a reasonably large
+		 * order that will allows us a good number of objects.
+		 */
+		s->order = max(slub_max_order, PAGE_ALLOC_COSTLY_ORDER);
+		s->flags |= __PAGE_ALLOC_FALLBACK;
+		s->allocflags |= __GFP_NOWARN;
+	} else
+		s->order = calculate_order(size);
+
 	if (s->order < 0)
 		return 0;
 
···
 
 	down_write(&slub_lock);
 	if (!kmem_cache_open(s, gfp_flags, name, size, ARCH_KMALLOC_MINALIGN,
-			flags, NULL))
+			flags | __KMALLOC_CACHE, NULL))
 		goto panic;
 
 	list_add(&s->list, &slab_caches);
···
 static int slab_unmergeable(struct kmem_cache *s)
 {
 	if (slub_nomerge || (s->flags & SLUB_NEVER_MERGE))
+		return 1;
+
+	if ((s->flags & __PAGE_ALLOC_FALLBACK))
 		return 1;
 
 	if (s->ctor)