Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: Rename SLAB_DESTROY_BY_RCU to SLAB_TYPESAFE_BY_RCU

A group of Linux kernel hackers reported chasing a bug that resulted
from their assumption that SLAB_DESTROY_BY_RCU provided an existence
guarantee, that is, that no block from such a slab would be reallocated
during an RCU read-side critical section. Of course, that is not the
case. Instead, SLAB_DESTROY_BY_RCU only prevents freeing of an entire
slab of blocks.

However, there is a phrase for this, namely "type safety". This commit
therefore renames SLAB_DESTROY_BY_RCU to SLAB_TYPESAFE_BY_RCU in order
to avoid future instances of this sort of confusion.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-mm@kvack.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
[ paulmck: Add comments mentioning the old name, as requested by Eric
Dumazet, in order to help people familiar with the old name find
the new one. ]
Acked-by: David Rientjes <rientjes@google.com>

+57 -54
+1 -1
Documentation/RCU/00-INDEX
@@ -17,7 +17,7 @@
 rcubarrier.txt
 	- RCU and Unloadable Modules
 rculist_nulls.txt
-	- RCU list primitives for use with SLAB_DESTROY_BY_RCU
+	- RCU list primitives for use with SLAB_TYPESAFE_BY_RCU
 rcuref.txt
 	- Reference-count design for elements of lists/arrays protected by RCU
 rcu.txt
+3 -3
Documentation/RCU/rculist_nulls.txt
@@ -1,5 +1,5 @@
 Using hlist_nulls to protect read-mostly linked lists and
-objects using SLAB_DESTROY_BY_RCU allocations.
+objects using SLAB_TYPESAFE_BY_RCU allocations.
 
 Please read the basics in Documentation/RCU/listRCU.txt
 
@@ -7,7 +7,7 @@
 to solve following problem :
 
 A typical RCU linked list managing objects which are
-allocated with SLAB_DESTROY_BY_RCU kmem_cache can
+allocated with SLAB_TYPESAFE_BY_RCU kmem_cache can
 use following algos :
 
 1) Lookup algo
@@ -96,7 +96,7 @@
 3) Remove algo
 --------------
 Nothing special here, we can use a standard RCU hlist deletion.
-But thanks to SLAB_DESTROY_BY_RCU, beware a deleted object can be reused
+But thanks to SLAB_TYPESAFE_BY_RCU, beware a deleted object can be reused
 very very fast (before the end of RCU grace period)
 
 if (put_last_reference_on(obj) {
+2 -1
Documentation/RCU/whatisRCU.txt
@@ -925,7 +925,8 @@
 
 e.	Is your workload too update-intensive for normal use of
 	RCU, but inappropriate for other synchronization mechanisms?
-	If so, consider SLAB_DESTROY_BY_RCU.  But please be careful!
+	If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
+	named SLAB_DESTROY_BY_RCU).  But please be careful!
 
 f.	Do you need read-side critical sections that are respected
 	even though they are in the middle of the idle loop, during
+1 -1
drivers/gpu/drm/i915/i915_gem.c
@@ -4552,7 +4552,7 @@
 	dev_priv->requests = KMEM_CACHE(drm_i915_gem_request,
 					SLAB_HWCACHE_ALIGN |
 					SLAB_RECLAIM_ACCOUNT |
-					SLAB_DESTROY_BY_RCU);
+					SLAB_TYPESAFE_BY_RCU);
 	if (!dev_priv->requests)
 		goto err_vmas;
 
+1 -1
drivers/gpu/drm/i915/i915_gem_request.h
@@ -493,7 +493,7 @@
 __i915_gem_active_get_rcu(const struct i915_gem_active *active)
 {
 	/* Performing a lockless retrieval of the active request is super
-	 * tricky. SLAB_DESTROY_BY_RCU merely guarantees that the backing
+	 * tricky. SLAB_TYPESAFE_BY_RCU merely guarantees that the backing
 	 * slab of request objects will not be freed whilst we hold the
 	 * RCU read lock. It does not guarantee that the request itself
 	 * will not be freed and then *reused*. Viz,
+1 -1
drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
@@ -1071,7 +1071,7 @@
 	ldlm_lock_slab = kmem_cache_create("ldlm_locks",
 					   sizeof(struct ldlm_lock), 0,
 					   SLAB_HWCACHE_ALIGN |
-					   SLAB_DESTROY_BY_RCU, NULL);
+					   SLAB_TYPESAFE_BY_RCU, NULL);
 	if (!ldlm_lock_slab) {
 		kmem_cache_destroy(ldlm_resource_slab);
 		return -ENOMEM;
+1 -1
fs/jbd2/journal.c
@@ -2340,7 +2340,7 @@
 	jbd2_journal_head_cache = kmem_cache_create("jbd2_journal_head",
 				sizeof(struct journal_head),
 				0,		/* offset */
-				SLAB_TEMPORARY | SLAB_DESTROY_BY_RCU,
+				SLAB_TEMPORARY | SLAB_TYPESAFE_BY_RCU,
 				NULL);		/* ctor */
 	retval = 0;
 	if (!jbd2_journal_head_cache) {
+1 -1
fs/signalfd.c
@@ -38,7 +38,7 @@
 	/*
 	 * The lockless check can race with remove_wait_queue() in progress,
 	 * but in this case its caller should run under rcu_read_lock() and
-	 * sighand_cachep is SLAB_DESTROY_BY_RCU, we can safely return.
+	 * sighand_cachep is SLAB_TYPESAFE_BY_RCU, we can safely return.
 	 */
 	if (likely(!waitqueue_active(wqh)))
 		return;
+2 -2
include/linux/dma-fence.h
@@ -229,7 +229,7 @@
  *
  * Function returns NULL if no refcount could be obtained, or the fence.
  * This function handles acquiring a reference to a fence that may be
- * reallocated within the RCU grace period (such as with SLAB_DESTROY_BY_RCU),
+ * reallocated within the RCU grace period (such as with SLAB_TYPESAFE_BY_RCU),
  * so long as the caller is using RCU on the pointer to the fence.
  *
  * An alternative mechanism is to employ a seqlock to protect a bunch of
@@ -257,7 +257,7 @@
 	 * have successfully acquire a reference to it. If it no
 	 * longer matches, we are holding a reference to some other
 	 * reallocated pointer. This is possible if the allocator
-	 * is using a freelist like SLAB_DESTROY_BY_RCU where the
+	 * is using a freelist like SLAB_TYPESAFE_BY_RCU where the
 	 * fence remains valid for the RCU grace period, but it
 	 * may be reallocated. When using such allocators, we are
 	 * responsible for ensuring the reference we get is to
+4 -2
include/linux/slab.h
@@ -28,7 +28,7 @@
 #define SLAB_STORE_USER		0x00010000UL	/* DEBUG: Store the last owner for bug hunting */
 #define SLAB_PANIC		0x00040000UL	/* Panic if kmem_cache_create() fails */
 /*
- * SLAB_DESTROY_BY_RCU - **WARNING** READ THIS!
+ * SLAB_TYPESAFE_BY_RCU - **WARNING** READ THIS!
  *
  * This delays freeing the SLAB page by a grace period, it does _NOT_
  * delay object freeing. This means that if you do kmem_cache_free()
@@ -61,8 +61,10 @@
  *
  * rcu_read_lock before reading the address, then rcu_read_unlock after
  * taking the spinlock within the structure expected at that address.
+ *
+ * Note that SLAB_TYPESAFE_BY_RCU was originally named SLAB_DESTROY_BY_RCU.
  */
-#define SLAB_DESTROY_BY_RCU	0x00080000UL	/* Defer freeing slabs to RCU */
+#define SLAB_TYPESAFE_BY_RCU	0x00080000UL	/* Defer freeing slabs to RCU */
 #define SLAB_MEM_SPREAD		0x00100000UL	/* Spread some memory over cpuset */
 #define SLAB_TRACE		0x00200000UL	/* Trace allocations and frees */
 
+1 -1
include/net/sock.h
@@ -993,7 +993,7 @@
 struct module;
 
 /*
- * caches using SLAB_DESTROY_BY_RCU should let .next pointer from nulls nodes
+ * caches using SLAB_TYPESAFE_BY_RCU should let .next pointer from nulls nodes
  * un-modified. Special care is taken when initializing object to zero.
  */
 static inline void sk_prot_clear_nulls(struct sock *sk, int size)
+2 -2
kernel/fork.c
@@ -1313,7 +1313,7 @@
 	if (atomic_dec_and_test(&sighand->count)) {
 		signalfd_cleanup(sighand);
 		/*
-		 * sighand_cachep is SLAB_DESTROY_BY_RCU so we can free it
+		 * sighand_cachep is SLAB_TYPESAFE_BY_RCU so we can free it
 		 * without an RCU grace period, see __lock_task_sighand().
 		 */
 		kmem_cache_free(sighand_cachep, sighand);
@@ -2144,7 +2144,7 @@
 {
 	sighand_cachep = kmem_cache_create("sighand_cache",
 			sizeof(struct sighand_struct), 0,
-			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_DESTROY_BY_RCU|
+			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
 			SLAB_NOTRACK|SLAB_ACCOUNT, sighand_ctor);
 	signal_cachep = kmem_cache_create("signal_cache",
 			sizeof(struct signal_struct), 0,
+1 -1
kernel/signal.c
@@ -1237,7 +1237,7 @@
 	}
 	/*
 	 * This sighand can be already freed and even reused, but
-	 * we rely on SLAB_DESTROY_BY_RCU and sighand_ctor() which
+	 * we rely on SLAB_TYPESAFE_BY_RCU and sighand_ctor() which
 	 * initializes ->siglock: this slab can't go away, it has
 	 * the same object type, ->siglock can't be reinitialized.
 	 *
+3 -3
mm/kasan/kasan.c
@@ -413,7 +413,7 @@
 	*size += sizeof(struct kasan_alloc_meta);
 
 	/* Add free meta. */
-	if (cache->flags & SLAB_DESTROY_BY_RCU || cache->ctor ||
+	if (cache->flags & SLAB_TYPESAFE_BY_RCU || cache->ctor ||
 	    cache->object_size < sizeof(struct kasan_free_meta)) {
 		cache->kasan_info.free_meta_offset = *size;
 		*size += sizeof(struct kasan_free_meta);
@@ -561,7 +561,7 @@
 	unsigned long rounded_up_size = round_up(size, KASAN_SHADOW_SCALE_SIZE);
 
 	/* RCU slabs could be legally used after free within the RCU period */
-	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+	if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
 		return;
 
 	kasan_poison_shadow(object, rounded_up_size, KASAN_KMALLOC_FREE);
@@ -572,7 +572,7 @@
 	s8 shadow_byte;
 
 	/* RCU slabs could be legally used after free within the RCU period */
-	if (unlikely(cache->flags & SLAB_DESTROY_BY_RCU))
+	if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
 		return false;
 
 	shadow_byte = READ_ONCE(*(s8 *)kasan_mem_to_shadow(object));
+1 -1
mm/kmemcheck.c
@@ -95,7 +95,7 @@
 void kmemcheck_slab_free(struct kmem_cache *s, void *object, size_t size)
 {
 	/* TODO: RCU freeing is unsupported for now; hide false positives. */
-	if (!s->ctor && !(s->flags & SLAB_DESTROY_BY_RCU))
+	if (!s->ctor && !(s->flags & SLAB_TYPESAFE_BY_RCU))
 		kmemcheck_mark_freed(object, size);
 }
 
+2 -2
mm/rmap.c
@@ -430,7 +430,7 @@
 void __init anon_vma_init(void)
 {
 	anon_vma_cachep = kmem_cache_create("anon_vma", sizeof(struct anon_vma),
-			0, SLAB_DESTROY_BY_RCU|SLAB_PANIC|SLAB_ACCOUNT,
+			0, SLAB_TYPESAFE_BY_RCU|SLAB_PANIC|SLAB_ACCOUNT,
 			anon_vma_ctor);
 	anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain,
 			SLAB_PANIC|SLAB_ACCOUNT);
@@ -481,7 +481,7 @@
 	 * If this page is still mapped, then its anon_vma cannot have been
 	 * freed.  But if it has been unmapped, we have no security against the
 	 * anon_vma structure being freed and reused (for another anon_vma:
-	 * SLAB_DESTROY_BY_RCU guarantees that - so the atomic_inc_not_zero()
+	 * SLAB_TYPESAFE_BY_RCU guarantees that - so the atomic_inc_not_zero()
 	 * above cannot corrupt).
 	 */
 	if (!page_mapped(page)) {
+3 -3
mm/slab.c
@@ -1728,7 +1728,7 @@
 
 	freelist = page->freelist;
 	slab_destroy_debugcheck(cachep, page);
-	if (unlikely(cachep->flags & SLAB_DESTROY_BY_RCU))
+	if (unlikely(cachep->flags & SLAB_TYPESAFE_BY_RCU))
 		call_rcu(&page->rcu_head, kmem_rcu_free);
 	else
 		kmem_freepages(cachep, page);
@@ -1924,7 +1924,7 @@
 
 	cachep->num = 0;
 
-	if (cachep->ctor || flags & SLAB_DESTROY_BY_RCU)
+	if (cachep->ctor || flags & SLAB_TYPESAFE_BY_RCU)
 		return false;
 
 	left = calculate_slab_order(cachep, size,
@@ -2030,7 +2030,7 @@
 	if (size < 4096 || fls(size - 1) == fls(size-1 + REDZONE_ALIGN +
 						2 * sizeof(unsigned long long)))
 		flags |= SLAB_RED_ZONE | SLAB_STORE_USER;
-	if (!(flags & SLAB_DESTROY_BY_RCU))
+	if (!(flags & SLAB_TYPESAFE_BY_RCU))
 		flags |= SLAB_POISON;
 #endif
 #endif
+2 -2
mm/slab.h
@@ -126,7 +126,7 @@
 
 /* Legal flag mask for kmem_cache_create(), for various configurations */
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
-			 SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS )
+			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
 
 #if defined(CONFIG_DEBUG_SLAB)
 #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
@@ -415,7 +415,7 @@
 	 * back there or track user information then we can
 	 * only use the space before that information.
 	 */
-	if (s->flags & (SLAB_DESTROY_BY_RCU | SLAB_STORE_USER))
+	if (s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_STORE_USER))
 		return s->inuse;
 	/*
 	 * Else we can use all the padding etc for the allocation
+3 -3
mm/slab_common.c
@@ -39,7 +39,7 @@
  * Set of flags that will prevent slab merging
  */
 #define SLAB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
-		SLAB_TRACE | SLAB_DESTROY_BY_RCU | SLAB_NOLEAKTRACE | \
+		SLAB_TRACE | SLAB_TYPESAFE_BY_RCU | SLAB_NOLEAKTRACE | \
 		SLAB_FAILSLAB | SLAB_KASAN)
 
 #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
@@ -500,7 +500,7 @@
 	struct kmem_cache *s, *s2;
 
 	/*
-	 * On destruction, SLAB_DESTROY_BY_RCU kmem_caches are put on the
+	 * On destruction, SLAB_TYPESAFE_BY_RCU kmem_caches are put on the
 	 * @slab_caches_to_rcu_destroy list.  The slab pages are freed
 	 * through RCU and and the associated kmem_cache are dereferenced
 	 * while freeing the pages, so the kmem_caches should be freed only
@@ -537,7 +537,7 @@
 	memcg_unlink_cache(s);
 	list_del(&s->list);
 
-	if (s->flags & SLAB_DESTROY_BY_RCU) {
+	if (s->flags & SLAB_TYPESAFE_BY_RCU) {
 		list_add_tail(&s->list, &slab_caches_to_rcu_destroy);
 		schedule_work(&slab_caches_to_rcu_destroy_work);
 	} else {
+3 -3
mm/slob.c
@@ -126,7 +126,7 @@
 
 /*
  * struct slob_rcu is inserted at the tail of allocated slob blocks, which
- * were created with a SLAB_DESTROY_BY_RCU slab. slob_rcu is used to free
+ * were created with a SLAB_TYPESAFE_BY_RCU slab. slob_rcu is used to free
  * the block using call_rcu.
  */
 struct slob_rcu {
@@ -524,7 +524,7 @@
 
 int __kmem_cache_create(struct kmem_cache *c, unsigned long flags)
 {
-	if (flags & SLAB_DESTROY_BY_RCU) {
+	if (flags & SLAB_TYPESAFE_BY_RCU) {
 		/* leave room for rcu footer at the end of object */
 		c->size += sizeof(struct slob_rcu);
 	}
@@ -598,7 +598,7 @@
 void kmem_cache_free(struct kmem_cache *c, void *b)
 {
 	kmemleak_free_recursive(b, c->flags);
-	if (unlikely(c->flags & SLAB_DESTROY_BY_RCU)) {
+	if (unlikely(c->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
 		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
 		slob_rcu->size = c->size;
+6 -6
mm/slub.c
@@ -1687,7 +1687,7 @@
 
 static void free_slab(struct kmem_cache *s, struct page *page)
 {
-	if (unlikely(s->flags & SLAB_DESTROY_BY_RCU)) {
+	if (unlikely(s->flags & SLAB_TYPESAFE_BY_RCU)) {
 		struct rcu_head *head;
 
 		if (need_reserve_slab_rcu) {
@@ -2963,7 +2963,7 @@
 	 * slab_free_freelist_hook() could have put the items into quarantine.
 	 * If so, no need to free them.
 	 */
-	if (s->flags & SLAB_KASAN && !(s->flags & SLAB_DESTROY_BY_RCU))
+	if (s->flags & SLAB_KASAN && !(s->flags & SLAB_TYPESAFE_BY_RCU))
 		return;
 	do_slab_free(s, page, head, tail, cnt, addr);
 }
@@ -3433,7 +3433,7 @@
 	 * the slab may touch the object after free or before allocation
 	 * then we should never poison the object itself.
 	 */
-	if ((flags & SLAB_POISON) && !(flags & SLAB_DESTROY_BY_RCU) &&
+	if ((flags & SLAB_POISON) && !(flags & SLAB_TYPESAFE_BY_RCU) &&
 	    !s->ctor)
 		s->flags |= __OBJECT_POISON;
 	else
@@ -3455,7 +3455,7 @@
 	 */
 	s->inuse = size;
 
-	if (((flags & (SLAB_DESTROY_BY_RCU | SLAB_POISON)) ||
+	if (((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
 	    s->ctor)) {
 		/*
 		 * Relocate free pointer after the object if it is not
@@ -3537,7 +3537,7 @@
 	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
 	s->reserved = 0;
 
-	if (need_reserve_slab_rcu && (s->flags & SLAB_DESTROY_BY_RCU))
+	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
 		s->reserved = sizeof(struct rcu_head);
 
 	if (!calculate_sizes(s, -1))
@@ -5042,7 +5042,7 @@
 
 static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
 {
-	return sprintf(buf, "%d\n", !!(s->flags & SLAB_DESTROY_BY_RCU));
+	return sprintf(buf, "%d\n", !!(s->flags & SLAB_TYPESAFE_BY_RCU));
 }
 SLAB_ATTR_RO(destroy_by_rcu);
 
+1 -1
net/dccp/ipv4.c
@@ -950,7 +950,7 @@
 	.orphan_count		= &dccp_orphan_count,
 	.max_header		= MAX_DCCP_HEADER,
 	.obj_size		= sizeof(struct dccp_sock),
-	.slab_flags		= SLAB_DESTROY_BY_RCU,
+	.slab_flags		= SLAB_TYPESAFE_BY_RCU,
 	.rsk_prot		= &dccp_request_sock_ops,
 	.twsk_prot		= &dccp_timewait_sock_ops,
 	.h.hashinfo		= &dccp_hashinfo,
+1 -1
net/dccp/ipv6.c
@@ -1012,7 +1012,7 @@
 	.orphan_count		= &dccp_orphan_count,
 	.max_header		= MAX_DCCP_HEADER,
 	.obj_size		= sizeof(struct dccp6_sock),
-	.slab_flags		= SLAB_DESTROY_BY_RCU,
+	.slab_flags		= SLAB_TYPESAFE_BY_RCU,
 	.rsk_prot		= &dccp6_request_sock_ops,
 	.twsk_prot		= &dccp6_timewait_sock_ops,
 	.h.hashinfo		= &dccp_hashinfo,
+1 -1
net/ipv4/tcp_ipv4.c
@@ -2398,7 +2398,7 @@
 	.sysctl_rmem		= sysctl_tcp_rmem,
 	.max_header		= MAX_TCP_HEADER,
 	.obj_size		= sizeof(struct tcp_sock),
-	.slab_flags		= SLAB_DESTROY_BY_RCU,
+	.slab_flags		= SLAB_TYPESAFE_BY_RCU,
 	.twsk_prot		= &tcp_timewait_sock_ops,
 	.rsk_prot		= &tcp_request_sock_ops,
 	.h.hashinfo		= &tcp_hashinfo,
+1 -1
net/ipv6/tcp_ipv6.c
@@ -1919,7 +1919,7 @@
 	.sysctl_rmem		= sysctl_tcp_rmem,
 	.max_header		= MAX_TCP_HEADER,
 	.obj_size		= sizeof(struct tcp6_sock),
-	.slab_flags		= SLAB_DESTROY_BY_RCU,
+	.slab_flags		= SLAB_TYPESAFE_BY_RCU,
 	.twsk_prot		= &tcp6_timewait_sock_ops,
 	.rsk_prot		= &tcp6_request_sock_ops,
 	.h.hashinfo		= &tcp_hashinfo,
+1 -1
net/llc/af_llc.c
@@ -142,7 +142,7 @@
 	.name	  = "LLC",
 	.owner	  = THIS_MODULE,
 	.obj_size = sizeof(struct llc_sock),
-	.slab_flags = SLAB_DESTROY_BY_RCU,
+	.slab_flags = SLAB_TYPESAFE_BY_RCU,
 };
 
 /**
+2 -2
net/llc/llc_conn.c
@@ -506,7 +506,7 @@
 again:
 	sk_nulls_for_each_rcu(rc, node, laddr_hb) {
 		if (llc_estab_match(sap, daddr, laddr, rc)) {
-			/* Extra checks required by SLAB_DESTROY_BY_RCU */
+			/* Extra checks required by SLAB_TYPESAFE_BY_RCU */
 			if (unlikely(!atomic_inc_not_zero(&rc->sk_refcnt)))
 				goto again;
 			if (unlikely(llc_sk(rc)->sap != sap ||
@@ -565,7 +565,7 @@
 again:
 	sk_nulls_for_each_rcu(rc, node, laddr_hb) {
 		if (llc_listener_match(sap, laddr, rc)) {
-			/* Extra checks required by SLAB_DESTROY_BY_RCU */
+			/* Extra checks required by SLAB_TYPESAFE_BY_RCU */
 			if (unlikely(!atomic_inc_not_zero(&rc->sk_refcnt)))
 				goto again;
 			if (unlikely(llc_sk(rc)->sap != sap ||
+1 -1
net/llc/llc_sap.c
@@ -328,7 +328,7 @@
 again:
	sk_nulls_for_each_rcu(rc, node, laddr_hb) {
 		if (llc_dgram_match(sap, laddr, rc)) {
-			/* Extra checks required by SLAB_DESTROY_BY_RCU */
+			/* Extra checks required by SLAB_TYPESAFE_BY_RCU */
 			if (unlikely(!atomic_inc_not_zero(&rc->sk_refcnt)))
 				goto again;
 			if (unlikely(llc_sk(rc)->sap != sap ||
+4 -4
net/netfilter/nf_conntrack_core.c
@@ -914,7 +914,7 @@
 			continue;
 
 		/* kill only if still in same netns -- might have moved due to
-		 * SLAB_DESTROY_BY_RCU rules.
+		 * SLAB_TYPESAFE_BY_RCU rules.
 		 *
 		 * We steal the timer reference. If that fails timer has
 		 * already fired or someone else deleted it. Just drop ref
@@ -1069,7 +1069,7 @@
 
 	/*
 	 * Do not use kmem_cache_zalloc(), as this cache uses
-	 * SLAB_DESTROY_BY_RCU.
+	 * SLAB_TYPESAFE_BY_RCU.
 	 */
 	ct = kmem_cache_alloc(nf_conntrack_cachep, gfp);
 	if (ct == NULL)
@@ -1114,7 +1114,7 @@
 	struct net *net = nf_ct_net(ct);
 
 	/* A freed object has refcnt == 0, that's
-	 * the golden rule for SLAB_DESTROY_BY_RCU
+	 * the golden rule for SLAB_TYPESAFE_BY_RCU
 	 */
 	NF_CT_ASSERT(atomic_read(&ct->ct_general.use) == 0);
 
@@ -1878,7 +1878,7 @@
 	nf_conntrack_cachep = kmem_cache_create("nf_conntrack",
 						sizeof(struct nf_conn),
 						NFCT_INFOMASK + 1,
-						SLAB_DESTROY_BY_RCU | SLAB_HWCACHE_ALIGN, NULL);
+						SLAB_TYPESAFE_BY_RCU | SLAB_HWCACHE_ALIGN, NULL);
 	if (!nf_conntrack_cachep)
 		goto err_cachep;
 
+1 -1
net/smc/af_smc.c
@@ -101,7 +101,7 @@
 	.unhash		= smc_unhash_sk,
 	.obj_size	= sizeof(struct smc_sock),
 	.h.smc_hash	= &smc_v4_hashinfo,
-	.slab_flags	= SLAB_DESTROY_BY_RCU,
+	.slab_flags	= SLAB_TYPESAFE_BY_RCU,
 };
 EXPORT_SYMBOL_GPL(smc_proto);
 