
memcg, slab: fix races in per-memcg cache creation/destruction

We obtain a per-memcg cache from a root kmem_cache by dereferencing an
entry of the root cache's memcg_params::memcg_caches array. If we find
no cache for a memcg there on allocation, we initiate the memcg cache
creation (see memcg_kmem_get_cache()). The cache creation proceeds
asynchronously in memcg_create_kmem_cache() in order to avoid lock
clashes, so there can be several threads trying to create the same
kmem_cache concurrently, but only one of them may succeed. However, due
to a race in the code, it is not always true. The point is that the
memcg_caches array can be relocated when we activate kmem accounting for
a memcg (see memcg_update_all_caches(), memcg_update_cache_size()). If
memcg_update_cache_size() and memcg_create_kmem_cache() proceed
concurrently as described below, we can leak a kmem_cache.

Assume two threads schedule creation of the same kmem_cache. One of them
successfully creates it. The other one should then fail, but if
memcg_create_kmem_cache() interleaves with memcg_update_cache_size() as
follows, it won't:

  memcg_create_kmem_cache()                   memcg_update_cache_size()
  (called w/o mutexes held)                   (called with slab_mutex,
                                               set_limit_mutex held)
  -------------------------                   -------------------------

  mutex_lock(&memcg_cache_mutex)

                                              s->memcg_params=kzalloc(...)

  new_cachep=cache_from_memcg_idx(cachep,idx)
  // new_cachep==NULL => proceed to creation

                                              s->memcg_params->memcg_caches[i]=
                                                  cur_params->memcg_caches[i]

  // kmem_cache_create_memcg takes slab_mutex
  // so we will hang around until
  // memcg_update_cache_size finishes, but
  // nothing will prevent it from succeeding so
  // memcg_caches[idx] will be overwritten in
  // memcg_register_cache!

  new_cachep = kmem_cache_create_memcg(...)
  mutex_unlock(&memcg_cache_mutex)

Let's fix this by moving the check for existence of the memcg cache into
kmem_cache_create_memcg(), where it is done under the slab_mutex, and
making kmem_cache_create_memcg() return NULL if the cache already exists.

A similar race is possible when destroying a memcg cache (see
kmem_cache_destroy()). Since memcg_unregister_cache(), which clears the
pointer in the memcg_caches array, is called w/o any locks held, it can
race with memcg_update_cache_size() and fail to clear the pointer in the
relocated array. Therefore memcg_unregister_cache() should be called
before we release the slab_mutex.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Glauber Costa <glommer@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

mm/memcontrol.c (+14 -9)

--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ memcg_register_cache() @@
 	if (is_root_cache(s))
 		return;
 
+	/*
+	 * Holding the slab_mutex assures nobody will touch the memcg_caches
+	 * array while we are modifying it.
+	 */
+	lockdep_assert_held(&slab_mutex);
+
 	root = s->memcg_params->root_cache;
 	memcg = s->memcg_params->memcg;
 	id = memcg_cache_id(memcg);
@@ memcg_register_cache() @@
 	 * before adding it to the memcg_slab_caches list, otherwise we can
 	 * fail to convert memcg_params_to_cache() while traversing the list.
 	 */
+	VM_BUG_ON(root->memcg_params->memcg_caches[id]);
 	root->memcg_params->memcg_caches[id] = s;
 
 	mutex_lock(&memcg->slab_caches_mutex);
@@ memcg_unregister_cache() @@
 	if (is_root_cache(s))
 		return;
 
+	/*
+	 * Holding the slab_mutex assures nobody will touch the memcg_caches
+	 * array while we are modifying it.
+	 */
+	lockdep_assert_held(&slab_mutex);
+
 	root = s->memcg_params->root_cache;
 	memcg = s->memcg_params->memcg;
 	id = memcg_cache_id(memcg);
@@ memcg_unregister_cache() @@
 	 * after removing it from the memcg_slab_caches list, otherwise we can
 	 * fail to convert memcg_params_to_cache() while traversing the list.
 	 */
+	VM_BUG_ON(!root->memcg_params->memcg_caches[id]);
 	root->memcg_params->memcg_caches[id] = NULL;
 
 	css_put(&memcg->css);
@@ memcg_create_kmem_cache() @@
 					   struct kmem_cache *cachep)
 {
 	struct kmem_cache *new_cachep;
-	int idx;
 
 	BUG_ON(!memcg_can_account_kmem(memcg));
 
-	idx = memcg_cache_id(memcg);
-
 	mutex_lock(&memcg_cache_mutex);
-	new_cachep = cache_from_memcg_idx(cachep, idx);
-	if (new_cachep)
-		goto out;
-
 	new_cachep = kmem_cache_dup(memcg, cachep);
 	if (new_cachep == NULL)
 		new_cachep = cachep;
-
-out:
 	mutex_unlock(&memcg_cache_mutex);
 	return new_cachep;
 }
mm/slab_common.c (+13 -1)

--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ kmem_cache_create_memcg() @@
 	if (err)
 		goto out_unlock;
 
+	if (memcg) {
+		/*
+		 * Since per-memcg caches are created asynchronously on first
+		 * allocation (see memcg_kmem_get_cache()), several threads can
+		 * try to create the same cache, but only one of them may
+		 * succeed. Therefore if we get here and see the cache has
+		 * already been created, we silently return NULL.
+		 */
+		if (cache_from_memcg_idx(parent_cache, memcg_cache_id(memcg)))
+			goto out_unlock;
+	}
+
 	/*
 	 * Some allocators will constraint the set of valid flags to a subset
 	 * of all flags. We expect them to define CACHE_CREATE_MASK in this
@@ kmem_cache_destroy() @@
 	list_del(&s->list);
 
 	if (!__kmem_cache_shutdown(s)) {
+		memcg_unregister_cache(s);
 		mutex_unlock(&slab_mutex);
 		if (s->flags & SLAB_DESTROY_BY_RCU)
 			rcu_barrier();
 
-		memcg_unregister_cache(s);
 		memcg_free_cache_params(s);
 		kfree(s->name);
 		kmem_cache_free(kmem_cache, s);