
Merge tag 'locking-urgent-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Thomas Gleixner:
"A set of locking fixes and updates:

- Untangle the header spaghetti which caused build failures in
various situations, triggered by the lockdep additions to seqcount
that validate that the write side critical sections are non-preemptible.

- The seqcount associated lock debug addons which were blocked by the
above fallout.

seqcount writers, contrary to seqlock writers, must be externally
serialized, which usually happens via locking - except for strict
per-CPU seqcounts. As the lock is not part of the seqcount, lockdep
cannot validate that the lock is held.

This new debug mechanism adds the concept of associated locks. The
sequence count now has lock type variants and corresponding
initializers which take a pointer to the associated lock used for
writer serialization. If lockdep is enabled, the pointer is stored
and write_seqcount_begin() has a lockdep assertion to validate that
the lock is held.

Aside from the type and the initializer, no other code changes are
required at the seqcount usage sites. The rest of the seqcount API
is unchanged and determines the type at compile time with the help
of _Generic, which is possible now that the minimal GCC version has
been moved up.

Adding this lockdep coverage unearthed a handful of seqcount bugs,
which have already been addressed independently of this series.

While generally useful, this comes with a Trojan Horse twist: on RT
kernels the write side critical section can become preemptible if
the writers are serialized by an associated lock, which leads to the
well-known reader-preempts-writer livelock. RT prevents this by
storing the associated lock pointer in the seqcount independently of
lockdep, and by changing the reader side to block on the lock when a
reader detects that a writer is in the write side critical section.

- Conversion of seqcount usage sites to associated types and
initializers"

* tag 'locking-urgent-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (25 commits)
locking/seqlock, headers: Untangle the spaghetti monster
locking, arch/ia64: Reduce <asm/smp.h> header dependencies by moving XTP bits into the new <asm/xtp.h> header
x86/headers: Remove APIC headers from <asm/smp.h>
seqcount: More consistent seqprop names
seqcount: Compress SEQCNT_LOCKNAME_ZERO()
seqlock: Fold seqcount_LOCKNAME_init() definition
seqlock: Fold seqcount_LOCKNAME_t definition
seqlock: s/__SEQ_LOCKDEP/__SEQ_LOCK/g
hrtimer: Use sequence counter with associated raw spinlock
kvm/eventfd: Use sequence counter with associated spinlock
userfaultfd: Use sequence counter with associated spinlock
NFSv4: Use sequence counter with associated spinlock
iocost: Use sequence counter with associated spinlock
raid5: Use sequence counter with associated spinlock
vfs: Use sequence counter with associated spinlock
timekeeping: Use sequence counter with associated raw spinlock
xfrm: policy: Use sequence counters with associated lock
netfilter: nft_set_rbtree: Use sequence counter with associated rwlock
netfilter: conntrack: Use sequence counter with associated spinlock
sched: tasks: Use sequence counter with associated spinlock
...

+500 -202
+52
Documentation/locking/seqlock.rst
···
 } while (read_seqcount_retry(&foo_seqcount, seq));


+.. _seqcount_locktype_t:
+
+Sequence counters with associated locks (``seqcount_LOCKTYPE_t``)
+-----------------------------------------------------------------
+
+As discussed at :ref:`seqcount_t`, sequence count write side critical
+sections must be serialized and non-preemptible. This variant of
+sequence counters associates the lock used for writer serialization at
+initialization time, which enables lockdep to validate that the write
+side critical sections are properly serialized.
+
+This lock association is a NOOP if lockdep is disabled and has neither
+storage nor runtime overhead. If lockdep is enabled, the lock pointer is
+stored in struct seqcount and lockdep's "lock is held" assertions are
+injected at the beginning of the write side critical section to validate
+that it is properly protected.
+
+For lock types which do not implicitly disable preemption, preemption
+protection is enforced in the write side function.
+
+The following sequence counters with associated locks are defined:
+
+- ``seqcount_spinlock_t``
+- ``seqcount_raw_spinlock_t``
+- ``seqcount_rwlock_t``
+- ``seqcount_mutex_t``
+- ``seqcount_ww_mutex_t``
+
+The plain seqcount read and write APIs branch out to the specific
+seqcount_LOCKTYPE_t implementation at compile-time. This avoids kernel
+API explosion per each new seqcount LOCKTYPE.
+
+Initialization (replace "LOCKTYPE" with one of the supported locks)::
+
+    /* dynamic */
+    seqcount_LOCKTYPE_t foo_seqcount;
+    seqcount_LOCKTYPE_init(&foo_seqcount, &lock);
+
+    /* static */
+    static seqcount_LOCKTYPE_t foo_seqcount =
+        SEQCNT_LOCKTYPE_ZERO(foo_seqcount, &lock);
+
+    /* C99 struct init */
+    struct {
+        .seq = SEQCNT_LOCKTYPE_ZERO(foo.seq, &lock),
+    } foo;
+
+Write path: same as in :ref:`seqcount_t`, while running from a context
+with the associated LOCKTYPE lock acquired.
+
+Read path: same as in :ref:`seqcount_t`.
+
 .. _seqlock_t:

 Sequential locks (``seqlock_t``)
-35
arch/ia64/include/asm/smp.h
···
 #include <linux/bitops.h>
 #include <linux/irqreturn.h>

-#include <asm/io.h>
 #include <asm/param.h>
 #include <asm/processor.h>
 #include <asm/ptrace.h>
···

 #ifdef CONFIG_SMP

-#define XTP_OFFSET		0x1e0008
-
-#define SMP_IRQ_REDIRECTION	(1 << 0)
-#define SMP_IPI_REDIRECTION	(1 << 1)
-
 #define raw_smp_processor_id() (current_thread_info()->cpu)

 extern struct smp_boot_data {
···
 DECLARE_PER_CPU_SHARED_ALIGNED(cpumask_t, cpu_sibling_map);
 extern int smp_num_siblings;
 extern void __iomem *ipi_base_addr;
-extern unsigned char smp_int_redirect;

 extern volatile int ia64_cpu_to_sapicid[];
 #define cpu_physical_id(i)	ia64_cpu_to_sapicid[i]
···
	if (cpu_physical_id(i) == cpuid)
		break;
	return i;
-}
-
-/*
- * XTP control functions:
- *	min_xtp : route all interrupts to this CPU
- *	normal_xtp: nominal XTP value
- *	max_xtp : never deliver interrupts to this CPU.
- */
-
-static inline void
-min_xtp (void)
-{
-	if (smp_int_redirect & SMP_IRQ_REDIRECTION)
-		writeb(0x00, ipi_base_addr + XTP_OFFSET); /* XTP to min */
-}
-
-static inline void
-normal_xtp (void)
-{
-	if (smp_int_redirect & SMP_IRQ_REDIRECTION)
-		writeb(0x08, ipi_base_addr + XTP_OFFSET); /* XTP normal */
-}
-
-static inline void
-max_xtp (void)
-{
-	if (smp_int_redirect & SMP_IRQ_REDIRECTION)
-		writeb(0x0f, ipi_base_addr + XTP_OFFSET); /* Set XTP to max */
 }

 /* Upping and downing of CPUs */
+46
arch/ia64/include/asm/xtp.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_IA64_XTP_H
+#define _ASM_IA64_XTP_H
+
+#include <asm/io.h>
+
+#ifdef CONFIG_SMP
+
+#define XTP_OFFSET		0x1e0008
+
+#define SMP_IRQ_REDIRECTION	(1 << 0)
+#define SMP_IPI_REDIRECTION	(1 << 1)
+
+extern unsigned char smp_int_redirect;
+
+/*
+ * XTP control functions:
+ *	min_xtp : route all interrupts to this CPU
+ *	normal_xtp: nominal XTP value
+ *	max_xtp : never deliver interrupts to this CPU.
+ */
+
+static inline void
+min_xtp (void)
+{
+	if (smp_int_redirect & SMP_IRQ_REDIRECTION)
+		writeb(0x00, ipi_base_addr + XTP_OFFSET); /* XTP to min */
+}
+
+static inline void
+normal_xtp (void)
+{
+	if (smp_int_redirect & SMP_IRQ_REDIRECTION)
+		writeb(0x08, ipi_base_addr + XTP_OFFSET); /* XTP normal */
+}
+
+static inline void
+max_xtp (void)
+{
+	if (smp_int_redirect & SMP_IRQ_REDIRECTION)
+		writeb(0x0f, ipi_base_addr + XTP_OFFSET); /* Set XTP to max */
+}
+
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_IA64_XTP_H */
+1
arch/ia64/kernel/iosapic.c
···
 #include <asm/iosapic.h>
 #include <asm/processor.h>
 #include <asm/ptrace.h>
+#include <asm/xtp.h>

 #undef DEBUG_INTERRUPT_ROUTING
+1
arch/ia64/kernel/irq.c
···
 #include <linux/kernel_stat.h>

 #include <asm/mca.h>
+#include <asm/xtp.h>

 /*
  * 'what should we do if we get a hw irq event on an illegal vector'.
+1
arch/ia64/kernel/process.c
···
 #include <linux/uaccess.h>
 #include <asm/unwind.h>
 #include <asm/user.h>
+#include <asm/xtp.h>

 #include "entry.h"
+1
arch/ia64/kernel/sal.c
···
 #include <asm/page.h>
 #include <asm/sal.h>
 #include <asm/pal.h>
+#include <asm/xtp.h>

 __cacheline_aligned DEFINE_SPINLOCK(sal_lock);
 unsigned long sal_platform_features;
+1
arch/ia64/kernel/setup.c
···
 #include <asm/tlbflush.h>
 #include <asm/unistd.h>
 #include <asm/uv/uv.h>
+#include <asm/xtp.h>

 #if defined(CONFIG_SMP) && (IA64_CPU_SIZE > PAGE_SIZE)
 # error "struct cpuinfo_ia64 too big!"
+1
arch/ia64/kernel/smp.c
···
 #include <asm/tlbflush.h>
 #include <asm/unistd.h>
 #include <asm/mca.h>
+#include <asm/xtp.h>

 /*
  * Note: alignment of 4 entries/cacheline was empirically determined
+1
arch/parisc/include/asm/timex.h
···
 #ifndef _ASMPARISC_TIMEX_H
 #define _ASMPARISC_TIMEX_H

+#include <asm/special_insns.h>

 #define CLOCK_TICK_RATE	1193180 /* Underlying HZ */
+1
arch/sh/include/asm/io.h
···
 #include <asm/cache.h>
 #include <asm/addrspace.h>
 #include <asm/machvec.h>
+#include <asm/page.h>
 #include <linux/pgtable.h>
 #include <asm-generic/iomap.h>
+1
arch/sh/kernel/machvec.c
···
 #include <asm/setup.h>
 #include <asm/io.h>
 #include <asm/irq.h>
+#include <asm/processor.h>

 #define MV_NAME_SIZE 32
+1
arch/sparc/include/asm/timer_64.h
···
 #ifndef _SPARC64_TIMER_H
 #define _SPARC64_TIMER_H

+#include <uapi/asm/asi.h>
 #include <linux/types.h>
 #include <linux/init.h>
+2 -1
arch/sparc/include/asm/vvar.h
···
 #define _ASM_SPARC_VVAR_DATA_H

 #include <asm/clocksource.h>
-#include <linux/seqlock.h>
+#include <asm/processor.h>
+#include <asm/barrier.h>
 #include <linux/time.h>
 #include <linux/types.h>
-1
arch/sparc/kernel/vdso.c
···
  * a different vsyscall implementation for Linux/IA32 and for the name.
  */

-#include <linux/seqlock.h>
 #include <linux/time.h>
 #include <linux/timekeeper_internal.h>
+1 -1
arch/x86/include/asm/fixmap.h
···
 #ifndef __ASSEMBLY__
 #include <linux/kernel.h>
-#include <asm/acpi.h>
 #include <asm/apicdef.h>
 #include <asm/page.h>
+#include <asm/pgtable_types.h>
 #ifdef CONFIG_X86_32
 #include <linux/threads.h>
 #include <asm/kmap_types.h>
-10
arch/x86/include/asm/smp.h
···
 #include <linux/cpumask.h>
 #include <asm/percpu.h>

-/*
- * We need the APIC definitions automatically as part of 'smp.h'
- */
-#ifdef CONFIG_X86_LOCAL_APIC
-# include <asm/mpspec.h>
-# include <asm/apic.h>
-# ifdef CONFIG_X86_IO_APIC
-#  include <asm/io_apic.h>
-# endif
-#endif
 #include <asm/thread_info.h>
 #include <asm/cpumask.h>
+1
arch/x86/include/asm/tsc.h
···
 #define _ASM_X86_TSC_H

 #include <asm/processor.h>
+#include <asm/cpufeature.h>

 /*
  * Standard way to access the cycle counter.
+1
arch/x86/kernel/apic/apic.c
···
 #include <asm/proto.h>
 #include <asm/traps.h>
 #include <asm/apic.h>
+#include <asm/acpi.h>
 #include <asm/io_apic.h>
 #include <asm/desc.h>
 #include <asm/hpet.h>
+1
arch/x86/kernel/apic/apic_noop.c
···
  * like self-ipi, etc...
  */
 #include <linux/cpumask.h>
+#include <linux/thread_info.h>

 #include <asm/apic.h>
+1
arch/x86/kernel/apic/bigsmp_32.c
···
 #include <linux/smp.h>

 #include <asm/apic.h>
+#include <asm/io_apic.h>

 #include "local.h"
+1
arch/x86/kernel/apic/hw_nmi.c
···
  * Bits copied from original nmi.c file
  *
  */
+#include <linux/thread_info.h>
 #include <asm/apic.h>
 #include <asm/nmi.h>
+1
arch/x86/kernel/apic/ipi.c
···
 #include <linux/cpumask.h>
 #include <linux/smp.h>
+#include <asm/io_apic.h>

 #include "local.h"
+1
arch/x86/kernel/apic/local.h
···
 #include <linux/jump_label.h>

+#include <asm/irq_vectors.h>
 #include <asm/apic.h>

 /* APIC flat 64 */
+1
arch/x86/kernel/apic/probe_32.c
···
 #include <linux/errno.h>
 #include <linux/smp.h>

+#include <asm/io_apic.h>
 #include <asm/apic.h>
 #include <asm/acpi.h>
+1
arch/x86/kernel/apic/probe_64.c
···
  * Martin Bligh, Andi Kleen, James Bottomley, John Stultz, and
  * James Cleverdon.
  */
+#include <linux/thread_info.h>
 #include <asm/apic.h>

 #include "local.h"
+1
arch/x86/kernel/cpu/amd.c
···
 #include <asm/cpu.h>
 #include <asm/spec-ctrl.h>
 #include <asm/smp.h>
+#include <asm/numa.h>
 #include <asm/pci-direct.h>
 #include <asm/delay.h>
 #include <asm/debugreg.h>
+1
arch/x86/kernel/cpu/common.c
···
 #include <asm/mtrr.h>
 #include <asm/hwcap2.h>
 #include <linux/numa.h>
+#include <asm/numa.h>
 #include <asm/asm.h>
 #include <asm/bugs.h>
 #include <asm/cpu.h>
+1
arch/x86/kernel/cpu/hygon.c
···
 #include <asm/cpu.h>
 #include <asm/smp.h>
+#include <asm/numa.h>
 #include <asm/cacheinfo.h>
 #include <asm/spec-ctrl.h>
 #include <asm/delay.h>
+1
arch/x86/kernel/cpu/intel.c
···
 #include <asm/cmdline.h>
 #include <asm/traps.h>
 #include <asm/resctrl.h>
+#include <asm/numa.h>

 #ifdef CONFIG_X86_64
 #include <linux/topology.h>
+1
arch/x86/kernel/devicetree.c
···
 #include <asm/irqdomain.h>
 #include <asm/hpet.h>
 #include <asm/apic.h>
+#include <asm/io_apic.h>
 #include <asm/pci_x86.h>
 #include <asm/setup.h>
 #include <asm/i8259.h>
+2
arch/x86/kernel/irqinit.c
···
 #include <asm/timer.h>
 #include <asm/hw_irq.h>
 #include <asm/desc.h>
+#include <asm/io_apic.h>
+#include <asm/acpi.h>
 #include <asm/apic.h>
 #include <asm/setup.h>
 #include <asm/i8259.h>
+2
arch/x86/kernel/jailhouse.c
···
 #include <linux/reboot.h>
 #include <linux/serial_8250.h>
 #include <asm/apic.h>
+#include <asm/io_apic.h>
+#include <asm/acpi.h>
 #include <asm/cpu.h>
 #include <asm/hypervisor.h>
 #include <asm/i8259.h>
+2
arch/x86/kernel/mpparse.c
···
 #include <linux/smp.h>
 #include <linux/pci.h>

+#include <asm/io_apic.h>
+#include <asm/acpi.h>
 #include <asm/irqdomain.h>
 #include <asm/mtrr.h>
 #include <asm/mpspec.h>
+1
arch/x86/kernel/setup.c
···
 #include <xen/xen.h>

 #include <asm/apic.h>
+#include <asm/numa.h>
 #include <asm/bios_ebda.h>
 #include <asm/bugs.h>
 #include <asm/cpu.h>
+1
arch/x86/kernel/topology.c
···
 #include <linux/init.h>
 #include <linux/smp.h>
 #include <linux/irq.h>
+#include <asm/io_apic.h>
 #include <asm/cpu.h>

 static DEFINE_PER_CPU(struct x86_cpu, cpu_devices);
+1
arch/x86/kernel/tsc_msr.c
···
  */

 #include <linux/kernel.h>
+#include <linux/thread_info.h>

 #include <asm/apic.h>
 #include <asm/cpu_device_id.h>
+1
arch/x86/mm/init_32.c
···
 #include <asm/cpu_entry_area.h>
 #include <asm/init.h>
 #include <asm/pgtable_areas.h>
+#include <asm/numa.h>

 #include "mm_internal.h"
+2
arch/x86/xen/apic.c
···
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/init.h>
+#include <linux/thread_info.h>

 #include <asm/x86_init.h>
 #include <asm/apic.h>
+#include <asm/io_apic.h>
 #include <asm/xen/hypercall.h>

 #include <xen/xen.h>
+1
arch/x86/xen/enlighten_hvm.c
···
 #include <asm/cpu.h>
 #include <asm/smp.h>
+#include <asm/io_apic.h>
 #include <asm/reboot.h>
 #include <asm/setup.h>
 #include <asm/idtentry.h>
+1
arch/x86/xen/smp_hvm.c
···
 // SPDX-License-Identifier: GPL-2.0
+#include <linux/thread_info.h>
 #include <asm/smp.h>

 #include <xen/events.h>
+1
arch/x86/xen/smp_pv.c
···
 #include <asm/idtentry.h>
 #include <asm/desc.h>
 #include <asm/cpu.h>
+#include <asm/io_apic.h>

 #include <xen/interface/xen.h>
 #include <xen/interface/vcpu.h>
+2 -2
arch/x86/xen/suspend_pv.c
···
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/types.h>

-#include <asm/fixmap.h>
-
 #include <asm/xen/hypercall.h>
 #include <asm/xen/page.h>
+
+#include <asm/fixmap.h>

 #include "xen-ops.h"
+2 -3
block/blk-iocost.c
···
	enum ioc_running		running;
	atomic64_t			vtime_rate;

-	seqcount_t			period_seqcount;
+	seqcount_spinlock_t		period_seqcount;
	u32				period_at;	/* wallclock starttime */
	u64				period_at_vtime; /* vtime starttime */
···

 static void ioc_start_period(struct ioc *ioc, struct ioc_now *now)
 {
-	lockdep_assert_held(&ioc->lock);
	WARN_ON_ONCE(ioc->running != IOC_RUNNING);

	write_seqcount_begin(&ioc->period_seqcount);
···

	ioc->running = IOC_IDLE;
	atomic64_set(&ioc->vtime_rate, VTIME_PER_USEC);
-	seqcount_init(&ioc->period_seqcount);
+	seqcount_spinlock_init(&ioc->period_seqcount, &ioc->lock);
	ioc->period_at = ktime_to_us(ktime_get());
	atomic64_set(&ioc->cur_period, 0);
	atomic_set(&ioc->hweight_gen, 0);
+1 -14
drivers/dma-buf/dma-resv.c
···
 DEFINE_WD_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);

-struct lock_class_key reservation_seqcount_class;
-EXPORT_SYMBOL(reservation_seqcount_class);
-
-const char reservation_seqcount_string[] = "reservation_seqcount";
-EXPORT_SYMBOL(reservation_seqcount_string);
-
 /**
  * dma_resv_list_alloc - allocate fence list
  * @shared_max: number of fences we need space for
···
 void dma_resv_init(struct dma_resv *obj)
 {
	ww_mutex_init(&obj->lock, &reservation_ww_class);
+	seqcount_ww_mutex_init(&obj->seq, &obj->lock);

-	__seqcount_init(&obj->seq, reservation_seqcount_string,
-			&reservation_seqcount_class);
	RCU_INIT_POINTER(obj->fence, NULL);
	RCU_INIT_POINTER(obj->fence_excl, NULL);
 }
···
	fobj = dma_resv_get_list(obj);
	count = fobj->shared_count;

-	preempt_disable();
	write_seqcount_begin(&obj->seq);

	for (i = 0; i < count; ++i) {
···
	smp_store_mb(fobj->shared_count, count);

	write_seqcount_end(&obj->seq);
-	preempt_enable();
	dma_fence_put(old);
 }
 EXPORT_SYMBOL(dma_resv_add_shared_fence);
···
	if (fence)
		dma_fence_get(fence);

-	preempt_disable();
	write_seqcount_begin(&obj->seq);
	/* write_seqcount_begin provides the necessary memory barrier */
	RCU_INIT_POINTER(obj->fence_excl, fence);
	if (old)
		old->shared_count = 0;
	write_seqcount_end(&obj->seq);
-	preempt_enable();

	/* inplace update, no shared fences */
	while (i--)
···
	src_list = dma_resv_get_list(dst);
	old = dma_resv_get_excl(dst);

-	preempt_disable();
	write_seqcount_begin(&dst->seq);
	/* write_seqcount_begin provides the necessary memory barrier */
	RCU_INIT_POINTER(dst->fence_excl, new);
	RCU_INIT_POINTER(dst->fence, dst_list);
	write_seqcount_end(&dst->seq);
-	preempt_enable();

	dma_resv_list_free(src_list);
	dma_fence_put(old);
-2
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
···
	new->shared_count = k;

	/* Install the new fence list, seqcount provides the barriers */
-	preempt_disable();
	write_seqcount_begin(&resv->seq);
	RCU_INIT_POINTER(resv->fence, new);
	write_seqcount_end(&resv->seq);
-	preempt_enable();

	/* Drop the references to the removed fences or move them to ef_list */
	for (i = j, k = 0; i < old->shared_count; ++i) {
+1
drivers/iommu/intel/irq_remapping.c
···
 #include <linux/irqdomain.h>
 #include <linux/crash_dump.h>
 #include <asm/io_apic.h>
+#include <asm/apic.h>
 #include <asm/smp.h>
 #include <asm/cpu.h>
 #include <asm/irq_remapping.h>
+1 -1
drivers/md/raid5.c
···
	} else
		goto abort;
	spin_lock_init(&conf->device_lock);
-	seqcount_init(&conf->gen_lock);
+	seqcount_spinlock_init(&conf->gen_lock, &conf->device_lock);
	mutex_init(&conf->cache_size_mutex);
	init_waitqueue_head(&conf->wait_for_quiescent);
	init_waitqueue_head(&conf->wait_for_stripe);
+1 -1
drivers/md/raid5.h
···
	int			prev_chunk_sectors;
	int			prev_algo;
	short			generation; /* increments with every reshape */
-	seqcount_t		gen_lock;	/* lock against generation changes */
+	seqcount_spinlock_t	gen_lock;	/* lock against generation changes */
	unsigned long		reshape_checkpoint; /* Time we last updated
						     * metadata */
	long long		min_offset_diff; /* minimum difference between
+1 -1
fs/dcache.c
···
	dentry->d_lockref.count = 1;
	dentry->d_flags = 0;
	spin_lock_init(&dentry->d_lock);
-	seqcount_init(&dentry->d_seq);
+	seqcount_spinlock_init(&dentry->d_seq, &dentry->d_lock);
	dentry->d_inode = NULL;
	dentry->d_parent = dentry;
	dentry->d_sb = sb;
+2 -2
fs/fs_struct.c
···
	fs->users = 1;
	fs->in_exec = 0;
	spin_lock_init(&fs->lock);
-	seqcount_init(&fs->seq);
+	seqcount_spinlock_init(&fs->seq, &fs->lock);
	fs->umask = old->umask;

	spin_lock(&old->lock);
···
 struct fs_struct init_fs = {
	.users		= 1,
	.lock		= __SPIN_LOCK_UNLOCKED(init_fs.lock),
-	.seq		= SEQCNT_ZERO(init_fs.seq),
+	.seq		= SEQCNT_SPINLOCK_ZERO(init_fs.seq, &init_fs.lock),
	.umask		= 0022,
 };
+1 -1
fs/nfs/nfs4_fs.h
···
	unsigned long	     so_flags;
	struct list_head     so_states;
	struct nfs_seqid_counter so_seqid;
-	seqcount_t	     so_reclaim_seqcount;
+	seqcount_spinlock_t  so_reclaim_seqcount;
	struct mutex	     so_delegreturn_mutex;
 };
+1 -1
fs/nfs/nfs4state.c
···
	nfs4_init_seqid_counter(&sp->so_seqid);
	atomic_set(&sp->so_count, 1);
	INIT_LIST_HEAD(&sp->so_lru);
-	seqcount_init(&sp->so_reclaim_seqcount);
+	seqcount_spinlock_init(&sp->so_reclaim_seqcount, &sp->so_lock);
	mutex_init(&sp->so_delegreturn_mutex);
	return sp;
 }
+2 -2
fs/userfaultfd.c
···
	/* waitqueue head for events */
	wait_queue_head_t event_wqh;
	/* a refile sequence protected by fault_pending_wqh lock */
-	struct seqcount refile_seq;
+	seqcount_spinlock_t refile_seq;
	/* pseudo fd refcounting */
	refcount_t refcount;
	/* userfaultfd syscall flags */
···
	init_waitqueue_head(&ctx->fault_wqh);
	init_waitqueue_head(&ctx->event_wqh);
	init_waitqueue_head(&ctx->fd_wqh);
-	seqcount_init(&ctx->refile_seq);
+	seqcount_spinlock_init(&ctx->refile_seq, &ctx->fault_pending_wqh.lock);
 }

 SYSCALL_DEFINE1(userfaultfd, int, flags)
+1 -1
include/linux/dcache.h
···
 struct dentry {
	/* RCU lookup touched fields */
	unsigned int d_flags;		/* protected by d_lock */
-	seqcount_t d_seq;		/* per dentry seqlock */
+	seqcount_spinlock_t d_seq;	/* per dentry seqlock */
	struct hlist_bl_node d_hash;	/* lookup hash list */
	struct dentry *d_parent;	/* parent directory */
	struct qstr d_name;
+1 -3
include/linux/dma-resv.h
···
 #include <linux/rcupdate.h>

 extern struct ww_class reservation_ww_class;
-extern struct lock_class_key reservation_seqcount_class;
-extern const char reservation_seqcount_string[];

 /**
  * struct dma_resv_list - a list of shared fences
···
  */
 struct dma_resv {
	struct ww_mutex lock;
-	seqcount_t seq;
+	seqcount_ww_mutex_t seq;

	struct dma_fence __rcu *fence_excl;
	struct dma_resv_list __rcu *fence;
+2
include/linux/dynamic_queue_limits.h
···

 #ifdef __KERNEL__

+#include <asm/bug.h>
+
 struct dql {
	/* Fields accessed in enqueue path (dql_queued) */
	unsigned int	num_queued;		/* Total ever queued */
+1 -1
include/linux/fs_struct.h
···
 struct fs_struct {
	int users;
	spinlock_t lock;
-	seqcount_t seq;
+	seqcount_spinlock_t seq;
	int umask;
	int in_exec;
	struct path root, pwd;
+2 -1
include/linux/hrtimer.h
···
 #include <linux/init.h>
 #include <linux/list.h>
 #include <linux/percpu.h>
+#include <linux/seqlock.h>
 #include <linux/timer.h>
 #include <linux/timerqueue.h>
···
	struct hrtimer_cpu_base	*cpu_base;
	unsigned int		index;
	clockid_t		clockid;
-	seqcount_t		seq;
+	seqcount_raw_spinlock_t	seq;
	struct hrtimer		*running;
	struct timerqueue_head	active;
	ktime_t			(*get_time)(void);
+1
include/linux/ktime.h
···
 #include <linux/time.h>
 #include <linux/jiffies.h>
+#include <asm/bug.h>

 /* Nanosecond scalar representation for kernel time values */
 typedef s64 ktime_t;
+1 -1
include/linux/kvm_irqfd.h
···
	wait_queue_entry_t wait;
	/* Update side is protected by irqfds.lock */
	struct kvm_kernel_irq_routing_entry irq_entry;
-	seqcount_t irq_entry_sc;
+	seqcount_spinlock_t irq_entry_sc;
	/* Used for level IRQ fast-path */
	int gsi;
	struct work_struct inject;
+1
include/linux/lockdep.h
···
 #define __LINUX_LOCKDEP_H

 #include <linux/lockdep_types.h>
+#include <linux/smp.h>
 #include <asm/percpu.h>

 struct task_struct;
+11
include/linux/mutex.h
···
 #endif
 };

+struct ww_class;
+struct ww_acquire_ctx;
+
+struct ww_mutex {
+	struct mutex		base;
+	struct ww_acquire_ctx	*ctx;
+#ifdef CONFIG_DEBUG_MUTEXES
+	struct ww_class		*ww_class;
+#endif
+};
+
 /*
  * This is the control structure for tasks blocked on mutex,
  * which resides on the blocked task's kernel stack:
+2 -1
include/linux/sched.h
···
 #include <linux/task_io_accounting.h>
 #include <linux/posix-timers.h>
 #include <linux/rseq.h>
+#include <linux/seqlock.h>
 #include <linux/kcsan.h>

 /* task_struct member predeclarations (sorted alphabetically): */
···
	/* Protected by ->alloc_lock: */
	nodemask_t			mems_allowed;
	/* Seqence number to catch updates: */
-	seqcount_t			mems_allowed_seq;
+	seqcount_spinlock_t		mems_allowed_seq;
	int				cpuset_mem_spread_rotor;
	int				cpuset_slab_spread_rotor;
 #endif
+284 -82
include/linux/seqlock.h
···
  *
  * Copyrights:
  * - Based on x86_64 vsyscall gettimeofday: Keith Owens, Andrea Arcangeli
+ * - Sequence counters with associated locks, (C) 2020 Linutronix GmbH
  */

-#include <linux/spinlock.h>
-#include <linux/preempt.h>
-#include <linux/lockdep.h>
 #include <linux/compiler.h>
 #include <linux/kcsan-checks.h>
+#include <linux/lockdep.h>
+#include <linux/mutex.h>
+#include <linux/preempt.h>
+#include <linux/spinlock.h>
+
 #include <asm/processor.h>

 /*
···
  * This mechanism can't be used if the protected data contains pointers,
  * as the writer can invalidate a pointer that a reader is following.
  *
+ * If the write serialization mechanism is one of the common kernel
+ * locking primitives, use a sequence counter with associated lock
+ * (seqcount_LOCKTYPE_t) instead.
+ *
  * If it's desired to automatically handle the sequence counter writer
  * serialization and non-preemptibility requirements, use a sequential
  * lock (seqlock_t) instead.
···
 }

 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-# define SEQCOUNT_DEP_MAP_INIT(lockname) \
-		.dep_map = { .name = #lockname } \
+
+# define SEQCOUNT_DEP_MAP_INIT(lockname)			\
+		.dep_map = { .name = #lockname }

 /**
  * seqcount_init() - runtime initializer for seqcount_t
  * @s: Pointer to the seqcount_t instance
  */
-# define seqcount_init(s) \
-	do { \
-		static struct lock_class_key __key; \
-		__seqcount_init((s), #s, &__key); \
+# define seqcount_init(s)					\
+	do {							\
+		static struct lock_class_key __key;		\
+		__seqcount_init((s), #s, &__key);		\
	} while (0)

 static inline void seqcount_lockdep_reader_access(const seqcount_t *s)
···
  */
 #define SEQCNT_ZERO(name) { .sequence = 0, SEQCOUNT_DEP_MAP_INIT(name) }

+/*
+ * Sequence counters with associated locks (seqcount_LOCKTYPE_t)
+ *
+ * A sequence counter which associates the lock used for writer
+ * serialization at initialization time. This enables lockdep to validate
+ * that the write side critical section is properly serialized.
+ *
+ * For associated locks which do not implicitly disable preemption,
+ * preemption protection is enforced in the write side function.
+ *
+ * Lockdep is never used in any of the raw write variants.
+ *
+ * See Documentation/locking/seqlock.rst
+ */
+
+#ifdef CONFIG_LOCKDEP
+#define __SEQ_LOCK(expr)	expr
+#else
+#define __SEQ_LOCK(expr)
+#endif
+
+/**
+ * typedef seqcount_LOCKNAME_t - sequence counter with LOCKTYPE associated
+ * @seqcount:	The real sequence counter
+ * @lock:	Pointer to the associated spinlock
+ *
+ * A plain sequence counter with external writer synchronization by a
+ * spinlock. The spinlock is associated to the sequence count in the
+ * static initializer or init function. This enables lockdep to validate
+ * that the write side critical section is properly serialized.
+ */
+
+/**
+ * seqcount_LOCKNAME_init() - runtime initializer for seqcount_LOCKNAME_t
+ * @s:		Pointer to the seqcount_LOCKNAME_t instance
+ * @lock:	Pointer to the associated LOCKTYPE
+ */
+
+/*
+ * SEQCOUNT_LOCKTYPE() - Instantiate seqcount_LOCKNAME_t and helpers
+ * @locktype:		actual typename
+ * @lockname:		name
+ * @preemptible:	preemptibility of above locktype
+ * @lockmember:		argument for lockdep_assert_held()
+ */
+#define SEQCOUNT_LOCKTYPE(locktype, lockname, preemptible, lockmember)	\
+typedef struct seqcount_##lockname {					\
+	seqcount_t		seqcount;				\
+	__SEQ_LOCK(locktype	*lock);					\
+} seqcount_##lockname##_t;						\
+									\
+static __always_inline void						\
+seqcount_##lockname##_init(seqcount_##lockname##_t *s, locktype *lock)	\
+{									\
+	seqcount_init(&s->seqcount);					\
+	__SEQ_LOCK(s->lock = lock);					\
+}									\
+									\
+static __always_inline seqcount_t *					\
+__seqcount_##lockname##_ptr(seqcount_##lockname##_t *s)			\
+{									\
+	return &s->seqcount;						\
+}									\
+									\
+static __always_inline bool						\
+__seqcount_##lockname##_preemptible(seqcount_##lockname##_t *s)		\
+{									\
+	return preemptible;						\
+}									\
+									\
+static __always_inline void						\
+__seqcount_##lockname##_assert(seqcount_##lockname##_t *s)		\
+{									\
+	__SEQ_LOCK(lockdep_assert_held(lockmember));			\
+}
+
+/*
+ * __seqprop() for seqcount_t
+ */
+
+static inline seqcount_t *__seqcount_ptr(seqcount_t *s)
+{
+	return s;
+}
+
+static inline bool __seqcount_preemptible(seqcount_t *s)
+{
+	return false;
+}
+
+static inline void __seqcount_assert(seqcount_t *s)
+{
+	lockdep_assert_preemption_disabled();
+}
+
+SEQCOUNT_LOCKTYPE(raw_spinlock_t,	raw_spinlock,	false,	s->lock)
+SEQCOUNT_LOCKTYPE(spinlock_t,		spinlock,	false,	s->lock)
+SEQCOUNT_LOCKTYPE(rwlock_t,		rwlock,		false,	s->lock)
+SEQCOUNT_LOCKTYPE(struct mutex,		mutex,		true,	s->lock)
+SEQCOUNT_LOCKTYPE(struct ww_mutex,	ww_mutex,	true,	&s->lock->base)
+
+/**
+ * SEQCOUNT_LOCKTYPE_ZERO - static initializer for seqcount_LOCKNAME_t
+ * @name:	Name of the seqcount_LOCKNAME_t instance
+ * @lock:	Pointer to the associated LOCKTYPE
+ */
+
+#define SEQCOUNT_LOCKTYPE_ZERO(seq_name, assoc_lock) {			\
+	.seqcount		= SEQCNT_ZERO(seq_name.seqcount),	\
+	__SEQ_LOCK(.lock	= (assoc_lock))				\
+}
+
+#define SEQCNT_SPINLOCK_ZERO(name, lock)	SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+#define SEQCNT_RAW_SPINLOCK_ZERO(name, lock)	SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+#define SEQCNT_RWLOCK_ZERO(name, lock)		SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+#define SEQCNT_MUTEX_ZERO(name, lock)		SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+#define SEQCNT_WW_MUTEX_ZERO(name, lock)	SEQCOUNT_LOCKTYPE_ZERO(name, lock)
+
+
+#define __seqprop_case(s, lockname, prop)				\
+	seqcount_##lockname##_t: __seqcount_##lockname##_##prop((void *)(s))
+
+#define __seqprop(s, prop) _Generic(*(s),				\
+	seqcount_t:		__seqcount_##prop((void *)(s)),		\
+	__seqprop_case((s),	raw_spinlock,	prop),			\
+	__seqprop_case((s),	spinlock,	prop),			\
+	__seqprop_case((s),	rwlock,		prop),			\
+	__seqprop_case((s),	mutex,		prop),			\
+	__seqprop_case((s),	ww_mutex,	prop))
+
+#define __seqcount_ptr(s)		__seqprop(s, ptr)
+#define __seqcount_lock_preemptible(s)	__seqprop(s, preemptible)
+#define __seqcount_assert_lock_held(s)	__seqprop(s, assert)
+
 /**
  * __read_seqcount_begin() - begin a seqcount_t read section w/o barrier
- * @s: Pointer to seqcount_t
+ * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants
  *
  * __read_seqcount_begin is like read_seqcount_begin, but has
no smp_rmb() 124 250 * barrier. Callers should ensure that smp_rmb() or equivalent ordering is ··· 264 122 * 265 123 * Return: count to be passed to read_seqcount_retry() 266 124 */ 267 - static inline unsigned __read_seqcount_begin(const seqcount_t *s) 125 + #define __read_seqcount_begin(s) \ 126 + __read_seqcount_t_begin(__seqcount_ptr(s)) 127 + 128 + static inline unsigned __read_seqcount_t_begin(const seqcount_t *s) 268 129 { 269 130 unsigned ret; 270 131 ··· 283 138 284 139 /** 285 140 * raw_read_seqcount_begin() - begin a seqcount_t read section w/o lockdep 286 - * @s: Pointer to seqcount_t 141 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 287 142 * 288 143 * Return: count to be passed to read_seqcount_retry() 289 144 */ 290 - static inline unsigned raw_read_seqcount_begin(const seqcount_t *s) 145 + #define raw_read_seqcount_begin(s) \ 146 + raw_read_seqcount_t_begin(__seqcount_ptr(s)) 147 + 148 + static inline unsigned raw_read_seqcount_t_begin(const seqcount_t *s) 291 149 { 292 - unsigned ret = __read_seqcount_begin(s); 150 + unsigned ret = __read_seqcount_t_begin(s); 293 151 smp_rmb(); 294 152 return ret; 295 153 } 296 154 297 155 /** 298 156 * read_seqcount_begin() - begin a seqcount_t read critical section 299 - * @s: Pointer to seqcount_t 157 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 300 158 * 301 159 * Return: count to be passed to read_seqcount_retry() 302 160 */ 303 - static inline unsigned read_seqcount_begin(const seqcount_t *s) 161 + #define read_seqcount_begin(s) \ 162 + read_seqcount_t_begin(__seqcount_ptr(s)) 163 + 164 + static inline unsigned read_seqcount_t_begin(const seqcount_t *s) 304 165 { 305 166 seqcount_lockdep_reader_access(s); 306 - return raw_read_seqcount_begin(s); 167 + return raw_read_seqcount_t_begin(s); 307 168 } 308 169 309 170 /** 310 171 * raw_read_seqcount() - read the raw seqcount_t counter value 311 - * @s: Pointer to seqcount_t 172 + * @s: Pointer to seqcount_t or 
any of the seqcount_locktype_t variants 312 173 * 313 174 * raw_read_seqcount opens a read critical section of the given 314 175 * seqcount_t, without any lockdep checking, and without checking or ··· 323 172 * 324 173 * Return: count to be passed to read_seqcount_retry() 325 174 */ 326 - static inline unsigned raw_read_seqcount(const seqcount_t *s) 175 + #define raw_read_seqcount(s) \ 176 + raw_read_seqcount_t(__seqcount_ptr(s)) 177 + 178 + static inline unsigned raw_read_seqcount_t(const seqcount_t *s) 327 179 { 328 180 unsigned ret = READ_ONCE(s->sequence); 329 181 smp_rmb(); ··· 337 183 /** 338 184 * raw_seqcount_begin() - begin a seqcount_t read critical section w/o 339 185 * lockdep and w/o counter stabilization 340 - * @s: Pointer to seqcount_t 186 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 341 187 * 342 188 * raw_seqcount_begin opens a read critical section of the given 343 189 * seqcount_t. Unlike read_seqcount_begin(), this function will not wait ··· 351 197 * 352 198 * Return: count to be passed to read_seqcount_retry() 353 199 */ 354 - static inline unsigned raw_seqcount_begin(const seqcount_t *s) 200 + #define raw_seqcount_begin(s) \ 201 + raw_seqcount_t_begin(__seqcount_ptr(s)) 202 + 203 + static inline unsigned raw_seqcount_t_begin(const seqcount_t *s) 355 204 { 356 205 /* 357 206 * If the counter is odd, let read_seqcount_retry() fail 358 207 * by decrementing the counter. 
359 208 */ 360 - return raw_read_seqcount(s) & ~1; 209 + return raw_read_seqcount_t(s) & ~1; 361 210 } 362 211 363 212 /** 364 213 * __read_seqcount_retry() - end a seqcount_t read section w/o barrier 365 - * @s: Pointer to seqcount_t 214 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 366 215 * @start: count, from read_seqcount_begin() 367 216 * 368 217 * __read_seqcount_retry is like read_seqcount_retry, but has no smp_rmb() ··· 378 221 * 379 222 * Return: true if a read section retry is required, else false 380 223 */ 381 - static inline int __read_seqcount_retry(const seqcount_t *s, unsigned start) 224 + #define __read_seqcount_retry(s, start) \ 225 + __read_seqcount_t_retry(__seqcount_ptr(s), start) 226 + 227 + static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start) 382 228 { 383 229 kcsan_atomic_next(0); 384 230 return unlikely(READ_ONCE(s->sequence) != start); ··· 389 229 390 230 /** 391 231 * read_seqcount_retry() - end a seqcount_t read critical section 392 - * @s: Pointer to seqcount_t 232 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 393 233 * @start: count, from read_seqcount_begin() 394 234 * 395 235 * read_seqcount_retry closes the read critical section of given ··· 398 238 * 399 239 * Return: true if a read section retry is required, else false 400 240 */ 401 - static inline int read_seqcount_retry(const seqcount_t *s, unsigned start) 241 + #define read_seqcount_retry(s, start) \ 242 + read_seqcount_t_retry(__seqcount_ptr(s), start) 243 + 244 + static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start) 402 245 { 403 246 smp_rmb(); 404 - return __read_seqcount_retry(s, start); 247 + return __read_seqcount_t_retry(s, start); 405 248 } 406 249 407 250 /** 408 251 * raw_write_seqcount_begin() - start a seqcount_t write section w/o lockdep 409 - * @s: Pointer to seqcount_t 252 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 410 253 */ 411 
- static inline void raw_write_seqcount_begin(seqcount_t *s) 254 + #define raw_write_seqcount_begin(s) \ 255 + do { \ 256 + if (__seqcount_lock_preemptible(s)) \ 257 + preempt_disable(); \ 258 + \ 259 + raw_write_seqcount_t_begin(__seqcount_ptr(s)); \ 260 + } while (0) 261 + 262 + static inline void raw_write_seqcount_t_begin(seqcount_t *s) 412 263 { 413 264 kcsan_nestable_atomic_begin(); 414 265 s->sequence++; ··· 428 257 429 258 /** 430 259 * raw_write_seqcount_end() - end a seqcount_t write section w/o lockdep 431 - * @s: Pointer to seqcount_t 260 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 432 261 */ 433 - static inline void raw_write_seqcount_end(seqcount_t *s) 262 + #define raw_write_seqcount_end(s) \ 263 + do { \ 264 + raw_write_seqcount_t_end(__seqcount_ptr(s)); \ 265 + \ 266 + if (__seqcount_lock_preemptible(s)) \ 267 + preempt_enable(); \ 268 + } while (0) 269 + 270 + static inline void raw_write_seqcount_t_end(seqcount_t *s) 434 271 { 435 272 smp_wmb(); 436 273 s->sequence++; 437 274 kcsan_nestable_atomic_end(); 438 275 } 439 276 440 - static inline void __write_seqcount_begin_nested(seqcount_t *s, int subclass) 441 - { 442 - raw_write_seqcount_begin(s); 443 - seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_); 444 - } 445 - 446 277 /** 447 278 * write_seqcount_begin_nested() - start a seqcount_t write section with 448 279 * custom lockdep nesting level 449 - * @s: Pointer to seqcount_t 280 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 450 281 * @subclass: lockdep nesting level 451 282 * 452 283 * See Documentation/locking/lockdep-design.rst 453 284 */ 454 - static inline void write_seqcount_begin_nested(seqcount_t *s, int subclass) 455 - { 456 - lockdep_assert_preemption_disabled(); 457 - __write_seqcount_begin_nested(s, subclass); 458 - } 285 + #define write_seqcount_begin_nested(s, subclass) \ 286 + do { \ 287 + __seqcount_assert_lock_held(s); \ 288 + \ 289 + if (__seqcount_lock_preemptible(s)) 
\ 290 + preempt_disable(); \ 291 + \ 292 + write_seqcount_t_begin_nested(__seqcount_ptr(s), subclass); \ 293 + } while (0) 459 294 460 - /* 461 - * A write_seqcount_begin() variant w/o lockdep non-preemptibility checks. 462 - * 463 - * Use for internal seqlock.h code where it's known that preemption is 464 - * already disabled. For example, seqlock_t write side functions. 465 - */ 466 - static inline void __write_seqcount_begin(seqcount_t *s) 295 + static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass) 467 296 { 468 - __write_seqcount_begin_nested(s, 0); 297 + raw_write_seqcount_t_begin(s); 298 + seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_); 469 299 } 470 300 471 301 /** 472 302 * write_seqcount_begin() - start a seqcount_t write side critical section 473 - * @s: Pointer to seqcount_t 303 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 474 304 * 475 305 * write_seqcount_begin opens a write side critical section of the given 476 306 * seqcount_t. ··· 480 308 * non-preemptible. If readers can be invoked from hardirq or softirq 481 309 * context, interrupts or bottom halves must be respectively disabled. 482 310 */ 483 - static inline void write_seqcount_begin(seqcount_t *s) 311 + #define write_seqcount_begin(s) \ 312 + do { \ 313 + __seqcount_assert_lock_held(s); \ 314 + \ 315 + if (__seqcount_lock_preemptible(s)) \ 316 + preempt_disable(); \ 317 + \ 318 + write_seqcount_t_begin(__seqcount_ptr(s)); \ 319 + } while (0) 320 + 321 + static inline void write_seqcount_t_begin(seqcount_t *s) 484 322 { 485 - write_seqcount_begin_nested(s, 0); 323 + write_seqcount_t_begin_nested(s, 0); 486 324 } 487 325 488 326 /** 489 327 * write_seqcount_end() - end a seqcount_t write side critical section 490 - * @s: Pointer to seqcount_t 328 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 491 329 * 492 330 * The write section must've been opened with write_seqcount_begin(). 
493 331 */ 494 - static inline void write_seqcount_end(seqcount_t *s) 332 + #define write_seqcount_end(s) \ 333 + do { \ 334 + write_seqcount_t_end(__seqcount_ptr(s)); \ 335 + \ 336 + if (__seqcount_lock_preemptible(s)) \ 337 + preempt_enable(); \ 338 + } while (0) 339 + 340 + static inline void write_seqcount_t_end(seqcount_t *s) 495 341 { 496 342 seqcount_release(&s->dep_map, _RET_IP_); 497 - raw_write_seqcount_end(s); 343 + raw_write_seqcount_t_end(s); 498 344 } 499 345 500 346 /** 501 347 * raw_write_seqcount_barrier() - do a seqcount_t write barrier 502 - * @s: Pointer to seqcount_t 348 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 503 349 * 504 350 * This can be used to provide an ordering guarantee instead of the usual 505 351 * consistency guarantee. It is one wmb cheaper, because it can collapse ··· 556 366 * WRITE_ONCE(X, false); 557 367 * } 558 368 */ 559 - static inline void raw_write_seqcount_barrier(seqcount_t *s) 369 + #define raw_write_seqcount_barrier(s) \ 370 + raw_write_seqcount_t_barrier(__seqcount_ptr(s)) 371 + 372 + static inline void raw_write_seqcount_t_barrier(seqcount_t *s) 560 373 { 561 374 kcsan_nestable_atomic_begin(); 562 375 s->sequence++; ··· 571 378 /** 572 379 * write_seqcount_invalidate() - invalidate in-progress seqcount_t read 573 380 * side operations 574 - * @s: Pointer to seqcount_t 381 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 575 382 * 576 383 * After write_seqcount_invalidate, no seqcount_t read side operations 577 384 * will complete successfully and see data older than this. 
578 385 */ 579 - static inline void write_seqcount_invalidate(seqcount_t *s) 386 + #define write_seqcount_invalidate(s) \ 387 + write_seqcount_t_invalidate(__seqcount_ptr(s)) 388 + 389 + static inline void write_seqcount_t_invalidate(seqcount_t *s) 580 390 { 581 391 smp_wmb(); 582 392 kcsan_nestable_atomic_begin(); ··· 589 393 590 394 /** 591 395 * raw_read_seqcount_latch() - pick even/odd seqcount_t latch data copy 592 - * @s: Pointer to seqcount_t 396 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 593 397 * 594 398 * Use seqcount_t latching to switch between two storage places protected 595 399 * by a sequence counter. Doing so allows having interruptible, preemptible, ··· 602 406 * picking which data copy to read. The full counter value must then be 603 407 * checked with read_seqcount_retry(). 604 408 */ 605 - static inline int raw_read_seqcount_latch(seqcount_t *s) 409 + #define raw_read_seqcount_latch(s) \ 410 + raw_read_seqcount_t_latch(__seqcount_ptr(s)) 411 + 412 + static inline int raw_read_seqcount_t_latch(seqcount_t *s) 606 413 { 607 414 /* Pairs with the first smp_wmb() in raw_write_seqcount_latch() */ 608 415 int seq = READ_ONCE(s->sequence); /* ^^^ */ ··· 614 415 615 416 /** 616 417 * raw_write_seqcount_latch() - redirect readers to even/odd copy 617 - * @s: Pointer to seqcount_t 418 + * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants 618 419 * 619 420 * The latch technique is a multiversion concurrency control method that allows 620 421 * queries during non-atomic modifications. If you can guarantee queries never ··· 693 494 * When data is a dynamic data structure; one should use regular RCU 694 495 * patterns to manage the lifetimes of the objects within. 
695 496 */ 696 - static inline void raw_write_seqcount_latch(seqcount_t *s) 497 + #define raw_write_seqcount_latch(s) \ 498 + raw_write_seqcount_t_latch(__seqcount_ptr(s)) 499 + 500 + static inline void raw_write_seqcount_t_latch(seqcount_t *s) 697 501 { 698 502 smp_wmb(); /* prior stores before incrementing "sequence" */ 699 503 s->sequence++; ··· 718 516 spinlock_t lock; 719 517 } seqlock_t; 720 518 721 - #define __SEQLOCK_UNLOCKED(lockname) \ 722 - { \ 723 - .seqcount = SEQCNT_ZERO(lockname), \ 724 - .lock = __SPIN_LOCK_UNLOCKED(lockname) \ 519 + #define __SEQLOCK_UNLOCKED(lockname) \ 520 + { \ 521 + .seqcount = SEQCNT_ZERO(lockname), \ 522 + .lock = __SPIN_LOCK_UNLOCKED(lockname) \ 725 523 } 726 524 727 525 /** 728 526 * seqlock_init() - dynamic initializer for seqlock_t 729 527 * @sl: Pointer to the seqlock_t instance 730 528 */ 731 - #define seqlock_init(sl) \ 732 - do { \ 733 - seqcount_init(&(sl)->seqcount); \ 734 - spin_lock_init(&(sl)->lock); \ 529 + #define seqlock_init(sl) \ 530 + do { \ 531 + seqcount_init(&(sl)->seqcount); \ 532 + spin_lock_init(&(sl)->lock); \ 735 533 } while (0) 736 534 737 535 /** ··· 794 592 static inline void write_seqlock(seqlock_t *sl) 795 593 { 796 594 spin_lock(&sl->lock); 797 - __write_seqcount_begin(&sl->seqcount); 595 + write_seqcount_t_begin(&sl->seqcount); 798 596 } 799 597 800 598 /** ··· 806 604 */ 807 605 static inline void write_sequnlock(seqlock_t *sl) 808 606 { 809 - write_seqcount_end(&sl->seqcount); 607 + write_seqcount_t_end(&sl->seqcount); 810 608 spin_unlock(&sl->lock); 811 609 } 812 610 ··· 820 618 static inline void write_seqlock_bh(seqlock_t *sl) 821 619 { 822 620 spin_lock_bh(&sl->lock); 823 - __write_seqcount_begin(&sl->seqcount); 621 + write_seqcount_t_begin(&sl->seqcount); 824 622 } 825 623 826 624 /** ··· 833 631 */ 834 632 static inline void write_sequnlock_bh(seqlock_t *sl) 835 633 { 836 - write_seqcount_end(&sl->seqcount); 634 + write_seqcount_t_end(&sl->seqcount); 837 635 spin_unlock_bh(&sl->lock); 
838 636 } 839 637 ··· 847 645 static inline void write_seqlock_irq(seqlock_t *sl) 848 646 { 849 647 spin_lock_irq(&sl->lock); 850 - __write_seqcount_begin(&sl->seqcount); 648 + write_seqcount_t_begin(&sl->seqcount); 851 649 } 852 650 853 651 /** ··· 859 657 */ 860 658 static inline void write_sequnlock_irq(seqlock_t *sl) 861 659 { 862 - write_seqcount_end(&sl->seqcount); 660 + write_seqcount_t_end(&sl->seqcount); 863 661 spin_unlock_irq(&sl->lock); 864 662 } 865 663 ··· 868 666 unsigned long flags; 869 667 870 668 spin_lock_irqsave(&sl->lock, flags); 871 - __write_seqcount_begin(&sl->seqcount); 669 + write_seqcount_t_begin(&sl->seqcount); 872 670 return flags; 873 671 } 874 672 ··· 897 695 static inline void 898 696 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags) 899 697 { 900 - write_seqcount_end(&sl->seqcount); 698 + write_seqcount_t_end(&sl->seqcount); 901 699 spin_unlock_irqrestore(&sl->lock, flags); 902 700 } 903 701 904 702 /** 905 703 * read_seqlock_excl() - begin a seqlock_t locking reader section 906 - * @sl: Pointer to seqlock_t 704 + * @sl: Pointer to seqlock_t 907 705 * 908 706 * read_seqlock_excl opens a seqlock_t locking reader critical section. A 909 707 * locking reader exclusively locks out *both* other writers *and* other
-1
include/linux/time.h
··· 3 3 #define _LINUX_TIME_H 4 4 5 5 # include <linux/cache.h> 6 - # include <linux/seqlock.h> 7 6 # include <linux/math64.h> 8 7 # include <linux/time64.h> 9 8
+1
include/linux/videodev2.h
··· 57 57 #define __LINUX_VIDEODEV2_H 58 58 59 59 #include <linux/time.h> /* need struct timeval */ 60 + #include <linux/kernel.h> 60 61 #include <uapi/linux/videodev2.h> 61 62 62 63 #endif /* __LINUX_VIDEODEV2_H */
-8
include/linux/ww_mutex.h
··· 48 48 #endif 49 49 }; 50 50 51 - struct ww_mutex { 52 - struct mutex base; 53 - struct ww_acquire_ctx *ctx; 54 - #ifdef CONFIG_DEBUG_MUTEXES 55 - struct ww_class *ww_class; 56 - #endif 57 - }; 58 - 59 51 #ifdef CONFIG_DEBUG_LOCK_ALLOC 60 52 # define __WW_CLASS_MUTEX_INITIALIZER(lockname, class) \ 61 53 , .ww_class = class
+1 -1
include/net/netfilter/nf_conntrack.h
··· 298 298 299 299 extern struct hlist_nulls_head *nf_conntrack_hash; 300 300 extern unsigned int nf_conntrack_htable_size; 301 - extern seqcount_t nf_conntrack_generation; 301 + extern seqcount_spinlock_t nf_conntrack_generation; 302 302 extern unsigned int nf_conntrack_max; 303 303 304 304 /* must be called with rcu read lock held */
+2 -1
init/init_task.c
··· 154 154 .trc_holdout_list = LIST_HEAD_INIT(init_task.trc_holdout_list), 155 155 #endif 156 156 #ifdef CONFIG_CPUSETS 157 - .mems_allowed_seq = SEQCNT_ZERO(init_task.mems_allowed_seq), 157 + .mems_allowed_seq = SEQCNT_SPINLOCK_ZERO(init_task.mems_allowed_seq, 158 + &init_task.alloc_lock), 158 159 #endif 159 160 #ifdef CONFIG_RT_MUTEXES 160 161 .pi_waiters = RB_ROOT_CACHED,
+1 -1
kernel/fork.c
··· 2011 2011 #ifdef CONFIG_CPUSETS 2012 2012 p->cpuset_mem_spread_rotor = NUMA_NO_NODE; 2013 2013 p->cpuset_slab_spread_rotor = NUMA_NO_NODE; 2014 - seqcount_init(&p->mems_allowed_seq); 2014 + seqcount_spinlock_init(&p->mems_allowed_seq, &p->alloc_lock); 2015 2015 #endif 2016 2016 #ifdef CONFIG_TRACE_IRQFLAGS 2017 2017 memset(&p->irqtrace, 0, sizeof(p->irqtrace));
+1 -1
kernel/locking/lockdep_proc.c
··· 423 423 seq_time(m, lt->min); 424 424 seq_time(m, lt->max); 425 425 seq_time(m, lt->total); 426 - seq_time(m, lt->nr ? div_s64(lt->total, lt->nr) : 0); 426 + seq_time(m, lt->nr ? div64_u64(lt->total, lt->nr) : 0); 427 427 } 428 428 429 429 static void seq_stats(struct seq_file *m, struct lock_stat_data *data)
+10 -3
kernel/time/hrtimer.c
··· 135 135 * timer->base->cpu_base 136 136 */ 137 137 static struct hrtimer_cpu_base migration_cpu_base = { 138 - .clock_base = { { .cpu_base = &migration_cpu_base, }, }, 138 + .clock_base = { { 139 + .cpu_base = &migration_cpu_base, 140 + .seq = SEQCNT_RAW_SPINLOCK_ZERO(migration_cpu_base.seq, 141 + &migration_cpu_base.lock), 142 + }, }, 139 143 }; 140 144 141 145 #define migration_base migration_cpu_base.clock_base[0] ··· 2002 1998 int i; 2003 1999 2004 2000 for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) { 2005 - cpu_base->clock_base[i].cpu_base = cpu_base; 2006 - timerqueue_init_head(&cpu_base->clock_base[i].active); 2001 + struct hrtimer_clock_base *clock_b = &cpu_base->clock_base[i]; 2002 + 2003 + clock_b->cpu_base = cpu_base; 2004 + seqcount_raw_spinlock_init(&clock_b->seq, &cpu_base->lock); 2005 + timerqueue_init_head(&clock_b->active); 2007 2006 } 2008 2007 2009 2008 cpu_base->cpu = cpu;
+11 -8
kernel/time/timekeeping.c
··· 39 39 TK_ADV_FREQ 40 40 }; 41 41 42 + static DEFINE_RAW_SPINLOCK(timekeeper_lock); 43 + 42 44 /* 43 45 * The most important data for readout fits into a single 64 byte 44 46 * cache line. 45 47 */ 46 48 static struct { 47 - seqcount_t seq; 49 + seqcount_raw_spinlock_t seq; 48 50 struct timekeeper timekeeper; 49 51 } tk_core ____cacheline_aligned = { 50 - .seq = SEQCNT_ZERO(tk_core.seq), 52 + .seq = SEQCNT_RAW_SPINLOCK_ZERO(tk_core.seq, &timekeeper_lock), 51 53 }; 52 54 53 - static DEFINE_RAW_SPINLOCK(timekeeper_lock); 54 55 static struct timekeeper shadow_timekeeper; 55 56 56 57 /** ··· 64 63 * See @update_fast_timekeeper() below. 65 64 */ 66 65 struct tk_fast { 67 - seqcount_t seq; 66 + seqcount_raw_spinlock_t seq; 68 67 struct tk_read_base base[2]; 69 68 }; 70 69 ··· 81 80 }; 82 81 83 82 static struct tk_fast tk_fast_mono ____cacheline_aligned = { 83 + .seq = SEQCNT_RAW_SPINLOCK_ZERO(tk_fast_mono.seq, &timekeeper_lock), 84 84 .base[0] = { .clock = &dummy_clock, }, 85 85 .base[1] = { .clock = &dummy_clock, }, 86 86 }; 87 87 88 88 static struct tk_fast tk_fast_raw ____cacheline_aligned = { 89 + .seq = SEQCNT_RAW_SPINLOCK_ZERO(tk_fast_raw.seq, &timekeeper_lock), 89 90 .base[0] = { .clock = &dummy_clock, }, 90 91 .base[1] = { .clock = &dummy_clock, }, 91 92 }; ··· 160 157 * tk_clock_read - atomic clocksource read() helper 161 158 * 162 159 * This helper is necessary to use in the read paths because, while the 163 - * seqlock ensures we don't return a bad value while structures are updated, 160 + * seqcount ensures we don't return a bad value while structures are updated, 164 161 * it doesn't protect from potential crashes. There is the possibility that 165 162 * the tkr's clocksource may change between the read reference, and the 166 163 * clock reference passed to the read function. 
This can cause crashes if ··· 225 222 unsigned int seq; 226 223 227 224 /* 228 - * Since we're called holding a seqlock, the data may shift 225 + * Since we're called holding a seqcount, the data may shift 229 226 * under us while we're doing the calculation. This can cause 230 227 * false positives, since we'd note a problem but throw the 231 - * results away. So nest another seqlock here to atomically 228 + * results away. So nest another seqcount here to atomically 232 229 * grab the points we are checking with. 233 230 */ 234 231 do { ··· 489 486 * 490 487 * To keep it NMI safe since we're accessing from tracing, we're not using a 491 488 * separate timekeeper with updates to monotonic clock and boot offset 492 - * protected with seqlocks. This has the following minor side effects: 489 + * protected with seqcounts. This has the following minor side effects: 493 490 * 494 491 * (1) Its possible that a timestamp be taken after the boot offset is updated 495 492 * but before the timekeeper is updated. If this happens, the new boot offset
+3 -2
net/netfilter/nf_conntrack_core.c
··· 180 180 181 181 unsigned int nf_conntrack_max __read_mostly; 182 182 EXPORT_SYMBOL_GPL(nf_conntrack_max); 183 - seqcount_t nf_conntrack_generation __read_mostly; 183 + seqcount_spinlock_t nf_conntrack_generation __read_mostly; 184 184 static unsigned int nf_conntrack_hash_rnd __read_mostly; 185 185 186 186 static u32 hash_conntrack_raw(const struct nf_conntrack_tuple *tuple, ··· 2588 2588 /* struct nf_ct_ext uses u8 to store offsets/size */ 2589 2589 BUILD_BUG_ON(total_extension_size() > 255u); 2590 2590 2591 - seqcount_init(&nf_conntrack_generation); 2591 + seqcount_spinlock_init(&nf_conntrack_generation, 2592 + &nf_conntrack_locks_all_lock); 2592 2593 2593 2594 for (i = 0; i < CONNTRACK_LOCKS; i++) 2594 2595 spin_lock_init(&nf_conntrack_locks[i]);
+2 -2
net/netfilter/nft_set_rbtree.c
··· 18 18 struct nft_rbtree { 19 19 struct rb_root root; 20 20 rwlock_t lock; 21 - seqcount_t count; 21 + seqcount_rwlock_t count; 22 22 struct delayed_work gc_work; 23 23 }; 24 24 ··· 523 523 struct nft_rbtree *priv = nft_set_priv(set); 524 524 525 525 rwlock_init(&priv->lock); 526 - seqcount_init(&priv->count); 526 + seqcount_rwlock_init(&priv->count, &priv->lock); 527 527 priv->root = RB_ROOT; 528 528 529 529 INIT_DEFERRABLE_WORK(&priv->gc_work, nft_rbtree_gc);
+5 -5
net/xfrm/xfrm_policy.c
··· 122 122 /* list containing '*:*' policies */ 123 123 struct hlist_head hhead; 124 124 125 - seqcount_t count; 125 + seqcount_spinlock_t count; 126 126 /* tree sorted by daddr/prefix */ 127 127 struct rb_root root_d; 128 128 ··· 155 155 __read_mostly; 156 156 157 157 static struct kmem_cache *xfrm_dst_cache __ro_after_init; 158 - static __read_mostly seqcount_t xfrm_policy_hash_generation; 158 + static __read_mostly seqcount_mutex_t xfrm_policy_hash_generation; 159 159 160 160 static struct rhashtable xfrm_policy_inexact_table; 161 161 static const struct rhashtable_params xfrm_pol_inexact_params; ··· 719 719 INIT_HLIST_HEAD(&bin->hhead); 720 720 bin->root_d = RB_ROOT; 721 721 bin->root_s = RB_ROOT; 722 - seqcount_init(&bin->count); 722 + seqcount_spinlock_init(&bin->count, &net->xfrm.xfrm_policy_lock); 723 723 724 724 prev = rhashtable_lookup_get_insert_key(&xfrm_policy_inexact_table, 725 725 &bin->k, &bin->head, ··· 1899 1899 1900 1900 static struct xfrm_pol_inexact_node * 1901 1901 xfrm_policy_lookup_inexact_addr(const struct rb_root *r, 1902 - seqcount_t *count, 1902 + seqcount_spinlock_t *count, 1903 1903 const xfrm_address_t *addr, u16 family) 1904 1904 { 1905 1905 const struct rb_node *parent; ··· 4157 4157 { 4158 4158 register_pernet_subsys(&xfrm_net_ops); 4159 4159 xfrm_dev_init(); 4160 - seqcount_init(&xfrm_policy_hash_generation); 4160 + seqcount_mutex_init(&xfrm_policy_hash_generation, &hash_resize_mutex); 4161 4161 xfrm_input_init(); 4162 4162 4163 4163 #ifdef CONFIG_XFRM_ESPINTCP
+1 -1
virt/kvm/eventfd.c
··· 303 303 INIT_LIST_HEAD(&irqfd->list); 304 304 INIT_WORK(&irqfd->inject, irqfd_inject); 305 305 INIT_WORK(&irqfd->shutdown, irqfd_shutdown); 306 - seqcount_init(&irqfd->irq_entry_sc); 306 + seqcount_spinlock_init(&irqfd->irq_entry_sc, &kvm->irqfds.lock); 307 307 308 308 f = fdget(args->fd); 309 309 if (!f.file) {