slub: always align cpu_slab to honor cmpxchg_double requirement

On an architecture without CMPXCHG_LOCAL but with DEBUG_VM enabled,
the VM_BUG_ON() in __pcpu_double_call_return_bool() will cause an early
panic during boot unless we always align cpu_slab properly.
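
For illustration, here is a minimal userspace sketch of the invariant that the VM_BUG_ON() enforces; the struct name and field layout are stand-ins for struct kmem_cache_cpu, not the kernel's actual code. The first word of the pair must sit on a double-word boundary and the second must follow directly after it.

/* Sketch only: models the alignment/adjacency check, not kernel code. */
#include <assert.h>
#include <stdio.h>

struct kmem_cache_cpu_like {
	void *freelist;      /* first word of the cmpxchg_double pair */
	unsigned long tid;   /* second word, must follow directly */
} __attribute__((aligned(2 * sizeof(void *))));

int main(void)
{
	static struct kmem_cache_cpu_like c;

	/* Mirrors VM_BUG_ON((unsigned long)(&pcp1) % (2 * sizeof(pcp1))) */
	assert((unsigned long)&c.freelist % (2 * sizeof(void *)) == 0);

	/* "The second has to follow directly thereafter" */
	assert((char *)&c.tid == (char *)&c.freelist + sizeof(void *));

	printf("alignment invariant holds\n");
	return 0;
}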

In principle we could remove the alignment-testing VM_BUG_ON() for
architectures that don't have CMPXCHG_LOCAL, but leaving it in means
that new code will tend not to break x86 even if it is introduced
on another platform, and it's low cost to require alignment.

Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>

Authored by Chris Metcalf and committed by Pekka Enberg (d4d84fef, 55922c9d)

Changed files: +7 -8

include/linux/percpu.h (+3)

@@ -259,6 +259,9 @@
  * Special handling for cmpxchg_double. cmpxchg_double is passed two
  * percpu variables. The first has to be aligned to a double word
  * boundary and the second has to follow directly thereafter.
+ * We enforce this on all architectures even if they don't support
+ * a double cmpxchg instruction, since it's a cheap requirement, and it
+ * avoids breaking the requirement for architectures with the instruction.
  */
 #define __pcpu_double_call_return_bool(stem, pcp1, pcp2, ...)    \
 ({                                                               \
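
As background, the reason the pair must be aligned and adjacent is that SLUB's fast paths hand both words to this_cpu_cmpxchg_double() at once. A paraphrased fragment in the style of the mm/slub.c allocation fast path of this era (not verbatim, and not compilable on its own) looks roughly like:

/* object/tid were read from the per-cpu freelist/tid pair earlier */
if (unlikely(!this_cpu_cmpxchg_double(
		s->cpu_slab->freelist, s->cpu_slab->tid,
		object, tid,
		get_freepointer_safe(s, object), next_tid(tid)))) {
	note_cmpxchg_failure("slab_alloc", s, tid);
	goto redo;
}
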
mm/slub.c (+4 -8)

@@ -2320,16 +2320,12 @@
 	BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
 			SLUB_PAGE_SHIFT * sizeof(struct kmem_cache_cpu));
 
-#ifdef CONFIG_CMPXCHG_LOCAL
 	/*
-	 * Must align to double word boundary for the double cmpxchg instructions
-	 * to work.
+	 * Must align to double word boundary for the double cmpxchg
+	 * instructions to work; see __pcpu_double_call_return_bool().
 	 */
-	s->cpu_slab = __alloc_percpu(sizeof(struct kmem_cache_cpu), 2 * sizeof(void *));
-#else
-	/* Regular alignment is sufficient */
-	s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
-#endif
+	s->cpu_slab = __alloc_percpu(sizeof(struct kmem_cache_cpu),
+				     2 * sizeof(void *));
 
 	if (!s->cpu_slab)
 		return 0;
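
For completeness, a sketch of why the explicit alignment argument matters, based on the generic percpu allocator interface and not part of this patch: alloc_percpu(type) only requests the type's natural alignment, so SLUB has to ask for double-word alignment explicitly.

/* The generic helper, roughly as defined in include/linux/percpu.h: */
#define alloc_percpu(type)						\
	(typeof(type) __percpu *)__alloc_percpu(sizeof(type),		\
						__alignof__(type))

/* Hence the explicit request for 2 * sizeof(void *) alignment in SLUB: */
s->cpu_slab = __alloc_percpu(sizeof(struct kmem_cache_cpu),
			     2 * sizeof(void *));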