
percpu: add raw_cpu_ops

The kernel has never been audited to ensure that this_cpu operations are
consistently used throughout the kernel. The code generated in many
places can be improved through the use of this_cpu operations (which
use a segment register for relocation of per cpu offsets instead of
performing explicit address calculations).

The patch set also addresses various consistency issues in general with
the per cpu macros.

A. The semantics of __this_cpu_ptr() differ from those of this_cpu_ptr()
only in that checks are skipped. Such unchecked variants are typically
indicated by a raw_ prefix, so this patch set changes the places where
__this_cpu_ptr() is used to raw_cpu_ptr().

B. There has been a long-standing wish by some that __this_cpu operations
would check for preemption. However, there are cases where preemption
checks need to be skipped. This patch set adds raw_cpu operations that
do not check for preemption and then adds preemption checks to the
__this_cpu operations.

C. The use of __get_cpu_var is always a reference to a percpu variable
that can also be handled via a this_cpu operation. This patch set
replaces all uses of __get_cpu_var with this_cpu operations.

D. We can then use this_cpu RMW operations in various places replacing
sequences of instructions by a single one.

E. The use of this_cpu operations throughout will allow architectures
other than x86 to implement optimized references and RMW operations
that work with per cpu local data.

F. The use of this_cpu operations opens up the possibility to
further optimize code that relies on synchronization through
per cpu data.

The patch set works in a couple of stages:

I. Patch 1 adds the additional raw_cpu operations and raw_cpu_ptr().
   It also converts the existing __this_cpu_xx_# primitives in the x86
   code to raw_cpu_xx_#.

II. Patches 2-4 use the raw_cpu operations in places that would otherwise
   give us false positives once the preemption checks are enabled.

III. Patch 5 adds preemption checks to __this_cpu operations to allow
checking if preemption is properly disabled when these functions
are used.

IV. Patches 6-20 are patches that simply replace uses of __get_cpu_var
with this_cpu_ptr. They do not depend on any changes to the percpu
code. No preemption tests are skipped if they are applied.

V. Patches 21-46 are conversion patches that use this_cpu operations
in various kernel subsystems/drivers or arch code.

VI. Patches 47/48 (not included in this series) remove no longer used
functions (__this_cpu_ptr and __get_cpu_var). These should only be
applied after all the conversion patches have made it and after we
have done additional passes through the kernel to ensure that none of
the uses of these functions remain.

This patch (of 46):

The patches following this one will add preemption checks to __this_cpu
ops so we need to have an alternative way to use this_cpu operations
without preemption checks.

raw_cpu_ops will be the basis for all other ops since these will be the
operations that do not implement any checks.

Primitive operations are renamed by this patch from __this_cpu_xxx to
raw_cpu_xxx.

Also change the uses of the x86 percpu primitives in preempt.h.
These depend directly on asm/percpu.h (header #include nesting issue).

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Christoph Lameter <cl@linux.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Alex Shi <alex.shi@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Bryan Wu <cooloney@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: David Daney <david.daney@cavium.com>
Cc: David Miller <davem@davemloft.net>
Cc: David S. Miller <davem@davemloft.net>
Cc: Dimitri Sivanich <sivanich@sgi.com>
Cc: Dipankar Sarma <dipankar@in.ibm.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Hedi Berriche <hedi@sgi.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Mike Travis <travis@sgi.com>
Cc: Neil Brown <neilb@suse.de>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Robert Richter <rric@kernel.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Christoph Lameter, committed by Linus Torvalds
b3ca1c10 54b6a731

+290 -228
arch/x86/include/asm/percpu.h (+47 -47)
···
  * Compared to the generic __my_cpu_offset version, the following
  * saves one instruction and avoids clobbering a temp register.
  */
-#define __this_cpu_ptr(ptr)			\
+#define raw_cpu_ptr(ptr)			\
 ({						\
 	unsigned long tcp_ptr__;		\
 	__verify_pcpu_ptr(ptr);			\
···
  */
 #define this_cpu_read_stable(var)	percpu_from_op("mov", var, "p" (&(var)))

-#define __this_cpu_read_1(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
-#define __this_cpu_read_2(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
-#define __this_cpu_read_4(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
+#define raw_cpu_read_1(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
+#define raw_cpu_read_2(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
+#define raw_cpu_read_4(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))

-#define __this_cpu_write_1(pcp, val)	percpu_to_op("mov", (pcp), val)
-#define __this_cpu_write_2(pcp, val)	percpu_to_op("mov", (pcp), val)
-#define __this_cpu_write_4(pcp, val)	percpu_to_op("mov", (pcp), val)
-#define __this_cpu_add_1(pcp, val)	percpu_add_op((pcp), val)
-#define __this_cpu_add_2(pcp, val)	percpu_add_op((pcp), val)
-#define __this_cpu_add_4(pcp, val)	percpu_add_op((pcp), val)
-#define __this_cpu_and_1(pcp, val)	percpu_to_op("and", (pcp), val)
-#define __this_cpu_and_2(pcp, val)	percpu_to_op("and", (pcp), val)
-#define __this_cpu_and_4(pcp, val)	percpu_to_op("and", (pcp), val)
-#define __this_cpu_or_1(pcp, val)	percpu_to_op("or", (pcp), val)
-#define __this_cpu_or_2(pcp, val)	percpu_to_op("or", (pcp), val)
-#define __this_cpu_or_4(pcp, val)	percpu_to_op("or", (pcp), val)
-#define __this_cpu_xchg_1(pcp, val)	percpu_xchg_op(pcp, val)
-#define __this_cpu_xchg_2(pcp, val)	percpu_xchg_op(pcp, val)
-#define __this_cpu_xchg_4(pcp, val)	percpu_xchg_op(pcp, val)
+#define raw_cpu_write_1(pcp, val)	percpu_to_op("mov", (pcp), val)
+#define raw_cpu_write_2(pcp, val)	percpu_to_op("mov", (pcp), val)
+#define raw_cpu_write_4(pcp, val)	percpu_to_op("mov", (pcp), val)
+#define raw_cpu_add_1(pcp, val)		percpu_add_op((pcp), val)
+#define raw_cpu_add_2(pcp, val)		percpu_add_op((pcp), val)
+#define raw_cpu_add_4(pcp, val)		percpu_add_op((pcp), val)
+#define raw_cpu_and_1(pcp, val)		percpu_to_op("and", (pcp), val)
+#define raw_cpu_and_2(pcp, val)		percpu_to_op("and", (pcp), val)
+#define raw_cpu_and_4(pcp, val)		percpu_to_op("and", (pcp), val)
+#define raw_cpu_or_1(pcp, val)		percpu_to_op("or", (pcp), val)
+#define raw_cpu_or_2(pcp, val)		percpu_to_op("or", (pcp), val)
+#define raw_cpu_or_4(pcp, val)		percpu_to_op("or", (pcp), val)
+#define raw_cpu_xchg_1(pcp, val)	percpu_xchg_op(pcp, val)
+#define raw_cpu_xchg_2(pcp, val)	percpu_xchg_op(pcp, val)
+#define raw_cpu_xchg_4(pcp, val)	percpu_xchg_op(pcp, val)

 #define this_cpu_read_1(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
 #define this_cpu_read_2(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
···
 #define this_cpu_xchg_2(pcp, nval)	percpu_xchg_op(pcp, nval)
 #define this_cpu_xchg_4(pcp, nval)	percpu_xchg_op(pcp, nval)

-#define __this_cpu_add_return_1(pcp, val)	percpu_add_return_op(pcp, val)
-#define __this_cpu_add_return_2(pcp, val)	percpu_add_return_op(pcp, val)
-#define __this_cpu_add_return_4(pcp, val)	percpu_add_return_op(pcp, val)
-#define __this_cpu_cmpxchg_1(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
-#define __this_cpu_cmpxchg_2(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
-#define __this_cpu_cmpxchg_4(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
+#define raw_cpu_add_return_1(pcp, val)		percpu_add_return_op(pcp, val)
+#define raw_cpu_add_return_2(pcp, val)		percpu_add_return_op(pcp, val)
+#define raw_cpu_add_return_4(pcp, val)		percpu_add_return_op(pcp, val)
+#define raw_cpu_cmpxchg_1(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
+#define raw_cpu_cmpxchg_2(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
+#define raw_cpu_cmpxchg_4(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)

-#define this_cpu_add_return_1(pcp, val)	percpu_add_return_op(pcp, val)
-#define this_cpu_add_return_2(pcp, val)	percpu_add_return_op(pcp, val)
-#define this_cpu_add_return_4(pcp, val)	percpu_add_return_op(pcp, val)
+#define this_cpu_add_return_1(pcp, val)		percpu_add_return_op(pcp, val)
+#define this_cpu_add_return_2(pcp, val)		percpu_add_return_op(pcp, val)
+#define this_cpu_add_return_4(pcp, val)		percpu_add_return_op(pcp, val)
 #define this_cpu_cmpxchg_1(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
 #define this_cpu_cmpxchg_2(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
 #define this_cpu_cmpxchg_4(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
···
 	__ret;							\
 })

-#define __this_cpu_cmpxchg_double_4	percpu_cmpxchg8b_double
+#define raw_cpu_cmpxchg_double_4	percpu_cmpxchg8b_double
 #define this_cpu_cmpxchg_double_4	percpu_cmpxchg8b_double
 #endif /* CONFIG_X86_CMPXCHG64 */
···
  * 32 bit must fall back to generic operations.
  */
 #ifdef CONFIG_X86_64
-#define __this_cpu_read_8(pcp)			percpu_from_op("mov", (pcp), "m"(pcp))
-#define __this_cpu_write_8(pcp, val)		percpu_to_op("mov", (pcp), val)
-#define __this_cpu_add_8(pcp, val)		percpu_add_op((pcp), val)
-#define __this_cpu_and_8(pcp, val)		percpu_to_op("and", (pcp), val)
-#define __this_cpu_or_8(pcp, val)		percpu_to_op("or", (pcp), val)
-#define __this_cpu_add_return_8(pcp, val)	percpu_add_return_op(pcp, val)
-#define __this_cpu_xchg_8(pcp, nval)		percpu_xchg_op(pcp, nval)
-#define __this_cpu_cmpxchg_8(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)
+#define raw_cpu_read_8(pcp)			percpu_from_op("mov", (pcp), "m"(pcp))
+#define raw_cpu_write_8(pcp, val)		percpu_to_op("mov", (pcp), val)
+#define raw_cpu_add_8(pcp, val)			percpu_add_op((pcp), val)
+#define raw_cpu_and_8(pcp, val)			percpu_to_op("and", (pcp), val)
+#define raw_cpu_or_8(pcp, val)			percpu_to_op("or", (pcp), val)
+#define raw_cpu_add_return_8(pcp, val)		percpu_add_return_op(pcp, val)
+#define raw_cpu_xchg_8(pcp, nval)		percpu_xchg_op(pcp, nval)
+#define raw_cpu_cmpxchg_8(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)

-#define this_cpu_read_8(pcp)		percpu_from_op("mov", (pcp), "m"(pcp))
-#define this_cpu_write_8(pcp, val)	percpu_to_op("mov", (pcp), val)
-#define this_cpu_add_8(pcp, val)	percpu_add_op((pcp), val)
-#define this_cpu_and_8(pcp, val)	percpu_to_op("and", (pcp), val)
-#define this_cpu_or_8(pcp, val)		percpu_to_op("or", (pcp), val)
-#define this_cpu_add_return_8(pcp, val)	percpu_add_return_op(pcp, val)
-#define this_cpu_xchg_8(pcp, nval)	percpu_xchg_op(pcp, nval)
+#define this_cpu_read_8(pcp)			percpu_from_op("mov", (pcp), "m"(pcp))
+#define this_cpu_write_8(pcp, val)		percpu_to_op("mov", (pcp), val)
+#define this_cpu_add_8(pcp, val)		percpu_add_op((pcp), val)
+#define this_cpu_and_8(pcp, val)		percpu_to_op("and", (pcp), val)
+#define this_cpu_or_8(pcp, val)			percpu_to_op("or", (pcp), val)
+#define this_cpu_add_return_8(pcp, val)		percpu_add_return_op(pcp, val)
+#define this_cpu_xchg_8(pcp, nval)		percpu_xchg_op(pcp, nval)
 #define this_cpu_cmpxchg_8(pcp, oval, nval)	percpu_cmpxchg_op(pcp, oval, nval)

 /*
···
 	__ret;							\
 })

-#define __this_cpu_cmpxchg_double_8	percpu_cmpxchg16b_double
+#define raw_cpu_cmpxchg_double_8	percpu_cmpxchg16b_double
 #define this_cpu_cmpxchg_double_8	percpu_cmpxchg16b_double

 #endif
···
 	unsigned long __percpu *a = (unsigned long *)addr + nr / BITS_PER_LONG;

 #ifdef CONFIG_X86_64
-	return ((1UL << (nr % BITS_PER_LONG)) & __this_cpu_read_8(*a)) != 0;
+	return ((1UL << (nr % BITS_PER_LONG)) & raw_cpu_read_8(*a)) != 0;
 #else
-	return ((1UL << (nr % BITS_PER_LONG)) & __this_cpu_read_4(*a)) != 0;
+	return ((1UL << (nr % BITS_PER_LONG)) & raw_cpu_read_4(*a)) != 0;
 #endif
 }
arch/x86/include/asm/preempt.h (+8 -8)
···
  */
 static __always_inline int preempt_count(void)
 {
-	return __this_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
+	return raw_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
 }

 static __always_inline void preempt_count_set(int pc)
 {
-	__this_cpu_write_4(__preempt_count, pc);
+	raw_cpu_write_4(__preempt_count, pc);
 }

 /*
···
 static __always_inline void set_preempt_need_resched(void)
 {
-	__this_cpu_and_4(__preempt_count, ~PREEMPT_NEED_RESCHED);
+	raw_cpu_and_4(__preempt_count, ~PREEMPT_NEED_RESCHED);
 }

 static __always_inline void clear_preempt_need_resched(void)
 {
-	__this_cpu_or_4(__preempt_count, PREEMPT_NEED_RESCHED);
+	raw_cpu_or_4(__preempt_count, PREEMPT_NEED_RESCHED);
 }

 static __always_inline bool test_preempt_need_resched(void)
 {
-	return !(__this_cpu_read_4(__preempt_count) & PREEMPT_NEED_RESCHED);
+	return !(raw_cpu_read_4(__preempt_count) & PREEMPT_NEED_RESCHED);
 }

 /*
···
 static __always_inline void __preempt_count_add(int val)
 {
-	__this_cpu_add_4(__preempt_count, val);
+	raw_cpu_add_4(__preempt_count, val);
 }

 static __always_inline void __preempt_count_sub(int val)
 {
-	__this_cpu_add_4(__preempt_count, -val);
+	raw_cpu_add_4(__preempt_count, -val);
 }

 /*
···
  */
 static __always_inline bool should_resched(void)
 {
-	return unlikely(!__this_cpu_read_4(__preempt_count));
+	return unlikely(!raw_cpu_read_4(__preempt_count));
 }

 #ifdef CONFIG_PREEMPT
include/asm-generic/percpu.h (+8 -5)
···
 #define per_cpu(var, cpu) \
 	(*SHIFT_PERCPU_PTR(&(var), per_cpu_offset(cpu)))

-#ifndef __this_cpu_ptr
-#define __this_cpu_ptr(ptr) SHIFT_PERCPU_PTR(ptr, __my_cpu_offset)
+#ifndef raw_cpu_ptr
+#define raw_cpu_ptr(ptr) SHIFT_PERCPU_PTR(ptr, __my_cpu_offset)
 #endif
 #ifdef CONFIG_DEBUG_PREEMPT
 #define this_cpu_ptr(ptr) SHIFT_PERCPU_PTR(ptr, my_cpu_offset)
 #else
-#define this_cpu_ptr(ptr) __this_cpu_ptr(ptr)
+#define this_cpu_ptr(ptr) raw_cpu_ptr(ptr)
 #endif

 #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
-#define __raw_get_cpu_var(var) (*__this_cpu_ptr(&(var)))
+#define __raw_get_cpu_var(var) (*raw_cpu_ptr(&(var)))

 #ifdef CONFIG_HAVE_SETUP_PER_CPU_AREA
 extern void setup_per_cpu_areas(void);
···
 #define __get_cpu_var(var)	(*VERIFY_PERCPU_PTR(&(var)))
 #define __raw_get_cpu_var(var)	(*VERIFY_PERCPU_PTR(&(var)))
 #define this_cpu_ptr(ptr)	per_cpu_ptr(ptr, 0)
-#define __this_cpu_ptr(ptr)	this_cpu_ptr(ptr)
+#define raw_cpu_ptr(ptr)	this_cpu_ptr(ptr)

 #endif	/* SMP */
···
 #ifndef PER_CPU_DEF_ATTRIBUTES
 #define PER_CPU_DEF_ATTRIBUTES
 #endif
+
+/* Keep until we have removed all uses of __this_cpu_ptr */
+#define __this_cpu_ptr raw_cpu_ptr

 #endif /* _ASM_GENERIC_PERCPU_H_ */
include/linux/percpu.h (+227 -168)
···
 } while (0)

 /*
+ * this_cpu operations (C) 2008-2013 Christoph Lameter <cl@linux.com>
+ *
  * Optimized manipulation for memory allocated through the per cpu
  * allocator or for addresses of per cpu variables.
  *
···
 do {								\
 	unsigned long flags;					\
 	raw_local_irq_save(flags);				\
-	*__this_cpu_ptr(&(pcp)) op val;				\
+	*raw_cpu_ptr(&(pcp)) op val;				\
 	raw_local_irq_restore(flags);				\
 } while (0)
···
 	typeof(pcp) ret__;					\
 	unsigned long flags;					\
 	raw_local_irq_save(flags);				\
-	__this_cpu_add(pcp, val);				\
-	ret__ = __this_cpu_read(pcp);				\
+	raw_cpu_add(pcp, val);					\
+	ret__ = raw_cpu_read(pcp);				\
 	raw_local_irq_restore(flags);				\
 	ret__;							\
 })
···
 ({	typeof(pcp) ret__;					\
 	unsigned long flags;					\
 	raw_local_irq_save(flags);				\
-	ret__ = __this_cpu_read(pcp);				\
-	__this_cpu_write(pcp, nval);				\
+	ret__ = raw_cpu_read(pcp);				\
+	raw_cpu_write(pcp, nval);				\
 	raw_local_irq_restore(flags);				\
 	ret__;							\
 })
···
 	typeof(pcp) ret__;					\
 	unsigned long flags;					\
 	raw_local_irq_save(flags);				\
-	ret__ = __this_cpu_read(pcp);				\
+	ret__ = raw_cpu_read(pcp);				\
 	if (ret__ == (oval))					\
-		__this_cpu_write(pcp, nval);			\
+		raw_cpu_write(pcp, nval);			\
 	raw_local_irq_restore(flags);				\
 	ret__;							\
 })
···
 	int ret__;						\
 	unsigned long flags;					\
 	raw_local_irq_save(flags);				\
-	ret__ = __this_cpu_generic_cmpxchg_double(pcp1, pcp2,	\
+	ret__ = raw_cpu_generic_cmpxchg_double(pcp1, pcp2,	\
 			oval1, oval2, nval1, nval2);		\
 	raw_local_irq_restore(flags);				\
 	ret__;							\
···
 #endif

 /*
- * Generic percpu operations for context that are safe from preemption/interrupts.
- * Either we do not care about races or the caller has the
- * responsibility of handling preemption/interrupt issues. Arch code can still
- * override these instructions since the arch per cpu code may be more
- * efficient and may actually get race freeness for free (that is the
- * case for x86 for example).
+ * Generic percpu operations for contexts where we do not want to do
+ * any checks for preemptiosn.
  *
  * If there is no other protection through preempt disable and/or
  * disabling interupts then one of these RMW operations can show unexpected
···
  * or an interrupt occurred and the same percpu variable was modified from
  * the interrupt context.
  */
-#ifndef __this_cpu_read
-# ifndef __this_cpu_read_1
-#  define __this_cpu_read_1(pcp)	(*__this_cpu_ptr(&(pcp)))
+#ifndef raw_cpu_read
+# ifndef raw_cpu_read_1
+#  define raw_cpu_read_1(pcp)	(*raw_cpu_ptr(&(pcp)))
 # endif
-# ifndef __this_cpu_read_2
-#  define __this_cpu_read_2(pcp)	(*__this_cpu_ptr(&(pcp)))
+# ifndef raw_cpu_read_2
+#  define raw_cpu_read_2(pcp)	(*raw_cpu_ptr(&(pcp)))
 # endif
-# ifndef __this_cpu_read_4
-#  define __this_cpu_read_4(pcp)	(*__this_cpu_ptr(&(pcp)))
+# ifndef raw_cpu_read_4
+#  define raw_cpu_read_4(pcp)	(*raw_cpu_ptr(&(pcp)))
 # endif
-# ifndef __this_cpu_read_8
-#  define __this_cpu_read_8(pcp)	(*__this_cpu_ptr(&(pcp)))
+# ifndef raw_cpu_read_8
+#  define raw_cpu_read_8(pcp)	(*raw_cpu_ptr(&(pcp)))
 # endif
-# define __this_cpu_read(pcp)	__pcpu_size_call_return(__this_cpu_read_, (pcp))
+# define raw_cpu_read(pcp)	__pcpu_size_call_return(raw_cpu_read_, (pcp))
 #endif

-#define __this_cpu_generic_to_op(pcp, val, op)			\
+#define raw_cpu_generic_to_op(pcp, val, op)			\
 do {								\
-	*__this_cpu_ptr(&(pcp)) op val;				\
+	*raw_cpu_ptr(&(pcp)) op val;				\
 } while (0)

+
+#ifndef raw_cpu_write
+# ifndef raw_cpu_write_1
+#  define raw_cpu_write_1(pcp, val)	raw_cpu_generic_to_op((pcp), (val), =)
+# endif
+# ifndef raw_cpu_write_2
+#  define raw_cpu_write_2(pcp, val)	raw_cpu_generic_to_op((pcp), (val), =)
+# endif
+# ifndef raw_cpu_write_4
+#  define raw_cpu_write_4(pcp, val)	raw_cpu_generic_to_op((pcp), (val), =)
+# endif
+# ifndef raw_cpu_write_8
+#  define raw_cpu_write_8(pcp, val)	raw_cpu_generic_to_op((pcp), (val), =)
+# endif
+# define raw_cpu_write(pcp, val)	__pcpu_size_call(raw_cpu_write_, (pcp), (val))
+#endif
+
+#ifndef raw_cpu_add
+# ifndef raw_cpu_add_1
+#  define raw_cpu_add_1(pcp, val)	raw_cpu_generic_to_op((pcp), (val), +=)
+# endif
+# ifndef raw_cpu_add_2
+#  define raw_cpu_add_2(pcp, val)	raw_cpu_generic_to_op((pcp), (val), +=)
+# endif
+# ifndef raw_cpu_add_4
+#  define raw_cpu_add_4(pcp, val)	raw_cpu_generic_to_op((pcp), (val), +=)
+# endif
+# ifndef raw_cpu_add_8
+#  define raw_cpu_add_8(pcp, val)	raw_cpu_generic_to_op((pcp), (val), +=)
+# endif
+# define raw_cpu_add(pcp, val)	__pcpu_size_call(raw_cpu_add_, (pcp), (val))
+#endif
+
+#ifndef raw_cpu_sub
+# define raw_cpu_sub(pcp, val)	raw_cpu_add((pcp), -(val))
+#endif
+
+#ifndef raw_cpu_inc
+# define raw_cpu_inc(pcp)	raw_cpu_add((pcp), 1)
+#endif
+
+#ifndef raw_cpu_dec
+# define raw_cpu_dec(pcp)	raw_cpu_sub((pcp), 1)
+#endif
+
+#ifndef raw_cpu_and
+# ifndef raw_cpu_and_1
+#  define raw_cpu_and_1(pcp, val)	raw_cpu_generic_to_op((pcp), (val), &=)
+# endif
+# ifndef raw_cpu_and_2
+#  define raw_cpu_and_2(pcp, val)	raw_cpu_generic_to_op((pcp), (val), &=)
+# endif
+# ifndef raw_cpu_and_4
+#  define raw_cpu_and_4(pcp, val)	raw_cpu_generic_to_op((pcp), (val), &=)
+# endif
+# ifndef raw_cpu_and_8
+#  define raw_cpu_and_8(pcp, val)	raw_cpu_generic_to_op((pcp), (val), &=)
+# endif
+# define raw_cpu_and(pcp, val)	__pcpu_size_call(raw_cpu_and_, (pcp), (val))
+#endif
+
+#ifndef raw_cpu_or
+# ifndef raw_cpu_or_1
+#  define raw_cpu_or_1(pcp, val)	raw_cpu_generic_to_op((pcp), (val), |=)
+# endif
+# ifndef raw_cpu_or_2
+#  define raw_cpu_or_2(pcp, val)	raw_cpu_generic_to_op((pcp), (val), |=)
+# endif
+# ifndef raw_cpu_or_4
+#  define raw_cpu_or_4(pcp, val)	raw_cpu_generic_to_op((pcp), (val), |=)
+# endif
+# ifndef raw_cpu_or_8
+#  define raw_cpu_or_8(pcp, val)	raw_cpu_generic_to_op((pcp), (val), |=)
+# endif
+# define raw_cpu_or(pcp, val)	__pcpu_size_call(raw_cpu_or_, (pcp), (val))
+#endif
+
+#define raw_cpu_generic_add_return(pcp, val)			\
+({								\
+	raw_cpu_add(pcp, val);					\
+	raw_cpu_read(pcp);					\
+})
+
+#ifndef raw_cpu_add_return
+# ifndef raw_cpu_add_return_1
+#  define raw_cpu_add_return_1(pcp, val)	raw_cpu_generic_add_return(pcp, val)
+# endif
+# ifndef raw_cpu_add_return_2
+#  define raw_cpu_add_return_2(pcp, val)	raw_cpu_generic_add_return(pcp, val)
+# endif
+# ifndef raw_cpu_add_return_4
+#  define raw_cpu_add_return_4(pcp, val)	raw_cpu_generic_add_return(pcp, val)
+# endif
+# ifndef raw_cpu_add_return_8
+#  define raw_cpu_add_return_8(pcp, val)	raw_cpu_generic_add_return(pcp, val)
+# endif
+# define raw_cpu_add_return(pcp, val)				\
+	__pcpu_size_call_return2(raw_add_return_, pcp, val)
+#endif
+
+#define raw_cpu_sub_return(pcp, val)	raw_cpu_add_return(pcp, -(typeof(pcp))(val))
+#define raw_cpu_inc_return(pcp)	raw_cpu_add_return(pcp, 1)
+#define raw_cpu_dec_return(pcp)	raw_cpu_add_return(pcp, -1)
+
+#define raw_cpu_generic_xchg(pcp, nval)				\
+({	typeof(pcp) ret__;					\
+	ret__ = raw_cpu_read(pcp);				\
+	raw_cpu_write(pcp, nval);				\
+	ret__;							\
+})
+
+#ifndef raw_cpu_xchg
+# ifndef raw_cpu_xchg_1
+#  define raw_cpu_xchg_1(pcp, nval)	raw_cpu_generic_xchg(pcp, nval)
+# endif
+# ifndef raw_cpu_xchg_2
+#  define raw_cpu_xchg_2(pcp, nval)	raw_cpu_generic_xchg(pcp, nval)
+# endif
+# ifndef raw_cpu_xchg_4
+#  define raw_cpu_xchg_4(pcp, nval)	raw_cpu_generic_xchg(pcp, nval)
+# endif
+# ifndef raw_cpu_xchg_8
+#  define raw_cpu_xchg_8(pcp, nval)	raw_cpu_generic_xchg(pcp, nval)
+# endif
+# define raw_cpu_xchg(pcp, nval)				\
+	__pcpu_size_call_return2(raw_cpu_xchg_, (pcp), nval)
+#endif
+
+#define raw_cpu_generic_cmpxchg(pcp, oval, nval)		\
+({								\
+	typeof(pcp) ret__;					\
+	ret__ = raw_cpu_read(pcp);				\
+	if (ret__ == (oval))					\
+		raw_cpu_write(pcp, nval);			\
+	ret__;							\
+})
+
+#ifndef raw_cpu_cmpxchg
+# ifndef raw_cpu_cmpxchg_1
+#  define raw_cpu_cmpxchg_1(pcp, oval, nval)	raw_cpu_generic_cmpxchg(pcp, oval, nval)
+# endif
+# ifndef raw_cpu_cmpxchg_2
+#  define raw_cpu_cmpxchg_2(pcp, oval, nval)	raw_cpu_generic_cmpxchg(pcp, oval, nval)
+# endif
+# ifndef raw_cpu_cmpxchg_4
+#  define raw_cpu_cmpxchg_4(pcp, oval, nval)	raw_cpu_generic_cmpxchg(pcp, oval, nval)
+# endif
+# ifndef raw_cpu_cmpxchg_8
+#  define raw_cpu_cmpxchg_8(pcp, oval, nval)	raw_cpu_generic_cmpxchg(pcp, oval, nval)
+# endif
+# define raw_cpu_cmpxchg(pcp, oval, nval)			\
+	__pcpu_size_call_return2(raw_cpu_cmpxchg_, pcp, oval, nval)
+#endif
+
+#define raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
+({								\
+	int __ret = 0;						\
+	if (raw_cpu_read(pcp1) == (oval1) &&			\
+	    raw_cpu_read(pcp2) == (oval2)) {			\
+		raw_cpu_write(pcp1, (nval1));			\
+		raw_cpu_write(pcp2, (nval2));			\
+		__ret = 1;					\
+	}							\
+	(__ret);						\
+})
+
+#ifndef raw_cpu_cmpxchg_double
+# ifndef raw_cpu_cmpxchg_double_1
+#  define raw_cpu_cmpxchg_double_1(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
+	raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+# endif
+# ifndef raw_cpu_cmpxchg_double_2
+#  define raw_cpu_cmpxchg_double_2(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
+	raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+# endif
+# ifndef raw_cpu_cmpxchg_double_4
+#  define raw_cpu_cmpxchg_double_4(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
+	raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+# endif
+# ifndef raw_cpu_cmpxchg_double_8
+#  define raw_cpu_cmpxchg_double_8(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
+	raw_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+# endif
+# define raw_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
+	__pcpu_double_call_return_bool(raw_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2))
+#endif
+
+/*
+ * Generic percpu operations for context that are safe from preemption/interrupts.
+ * Checks will be added here soon.
+ */
+#ifndef __this_cpu_read
+# define __this_cpu_read(pcp)	__pcpu_size_call_return(raw_cpu_read_, (pcp))
+#endif
+
 #ifndef __this_cpu_write
-# ifndef __this_cpu_write_1
-#  define __this_cpu_write_1(pcp, val)	__this_cpu_generic_to_op((pcp), (val), =)
-# endif
-# ifndef __this_cpu_write_2
-#  define __this_cpu_write_2(pcp, val)	__this_cpu_generic_to_op((pcp), (val), =)
-# endif
-# ifndef __this_cpu_write_4
-#  define __this_cpu_write_4(pcp, val)	__this_cpu_generic_to_op((pcp), (val), =)
-# endif
-# ifndef __this_cpu_write_8
-#  define __this_cpu_write_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), =)
-# endif
-# define __this_cpu_write(pcp, val)	__pcpu_size_call(__this_cpu_write_, (pcp), (val))
+# define __this_cpu_write(pcp, val)	__pcpu_size_call(raw_cpu_write_, (pcp), (val))
 #endif

 #ifndef __this_cpu_add
-# ifndef __this_cpu_add_1
-#  define __this_cpu_add_1(pcp, val)	__this_cpu_generic_to_op((pcp), (val), +=)
-# endif
-# ifndef __this_cpu_add_2
-#  define __this_cpu_add_2(pcp, val)	__this_cpu_generic_to_op((pcp), (val), +=)
-# endif
-# ifndef __this_cpu_add_4
-#  define __this_cpu_add_4(pcp, val)	__this_cpu_generic_to_op((pcp), (val), +=)
-# endif
-# ifndef __this_cpu_add_8
-#  define __this_cpu_add_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), +=)
-# endif
-# define __this_cpu_add(pcp, val)	__pcpu_size_call(__this_cpu_add_, (pcp), (val))
+# define __this_cpu_add(pcp, val)	__pcpu_size_call(raw_cpu_add_, (pcp), (val))
 #endif

 #ifndef __this_cpu_sub
···
 #endif

 #ifndef __this_cpu_and
-# ifndef __this_cpu_and_1
-#  define __this_cpu_and_1(pcp, val)	__this_cpu_generic_to_op((pcp), (val), &=)
-# endif
-# ifndef __this_cpu_and_2
-#  define __this_cpu_and_2(pcp, val)	__this_cpu_generic_to_op((pcp), (val), &=)
-# endif
-# ifndef __this_cpu_and_4
-#  define __this_cpu_and_4(pcp, val)	__this_cpu_generic_to_op((pcp), (val), &=)
-# endif
-# ifndef __this_cpu_and_8
-#  define __this_cpu_and_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), &=)
-# endif
-# define __this_cpu_and(pcp, val)	__pcpu_size_call(__this_cpu_and_, (pcp), (val))
+# define __this_cpu_and(pcp, val)	__pcpu_size_call(raw_cpu_and_, (pcp), (val))
 #endif

 #ifndef __this_cpu_or
-# ifndef __this_cpu_or_1
-#  define __this_cpu_or_1(pcp, val)	__this_cpu_generic_to_op((pcp), (val), |=)
-# endif
-# ifndef __this_cpu_or_2
-#  define __this_cpu_or_2(pcp, val)	__this_cpu_generic_to_op((pcp), (val), |=)
-# endif
-# ifndef __this_cpu_or_4
-#  define __this_cpu_or_4(pcp, val)	__this_cpu_generic_to_op((pcp), (val), |=)
-# endif
-# ifndef __this_cpu_or_8
-#  define __this_cpu_or_8(pcp, val)	__this_cpu_generic_to_op((pcp), (val), |=)
-# endif
-# define __this_cpu_or(pcp, val)	__pcpu_size_call(__this_cpu_or_, (pcp), (val))
+# define __this_cpu_or(pcp, val)	__pcpu_size_call(raw_cpu_or_, (pcp), (val))
 #endif

-#define __this_cpu_generic_add_return(pcp, val)			\
-({								\
-	__this_cpu_add(pcp, val);				\
-	__this_cpu_read(pcp);					\
-})
-
 #ifndef __this_cpu_add_return
-# ifndef __this_cpu_add_return_1
-#  define __this_cpu_add_return_1(pcp, val)	__this_cpu_generic_add_return(pcp, val)
-# endif
-# ifndef __this_cpu_add_return_2
-#  define __this_cpu_add_return_2(pcp, val)	__this_cpu_generic_add_return(pcp, val)
-# endif
-# ifndef __this_cpu_add_return_4
-#  define __this_cpu_add_return_4(pcp, val)	__this_cpu_generic_add_return(pcp, val)
-# endif
-# ifndef __this_cpu_add_return_8
-#  define __this_cpu_add_return_8(pcp, val)	__this_cpu_generic_add_return(pcp, val)
-# endif
 # define __this_cpu_add_return(pcp, val)			\
-	__pcpu_size_call_return2(__this_cpu_add_return_, pcp, val)
+	__pcpu_size_call_return2(raw_cpu_add_return_, pcp, val)
 #endif

 #define __this_cpu_sub_return(pcp, val)	__this_cpu_add_return(pcp, -(typeof(pcp))(val))
 #define __this_cpu_inc_return(pcp)	__this_cpu_add_return(pcp, 1)
 #define __this_cpu_dec_return(pcp)	__this_cpu_add_return(pcp, -1)

-#define __this_cpu_generic_xchg(pcp, nval)			\
-({	typeof(pcp) ret__;					\
-	ret__ = __this_cpu_read(pcp);				\
-	__this_cpu_write(pcp, nval);				\
-	ret__;							\
-})
-
 #ifndef __this_cpu_xchg
-# ifndef __this_cpu_xchg_1
-#  define __this_cpu_xchg_1(pcp, nval)	__this_cpu_generic_xchg(pcp, nval)
-# endif
-# ifndef __this_cpu_xchg_2
-#  define __this_cpu_xchg_2(pcp, nval)	__this_cpu_generic_xchg(pcp, nval)
-# endif
-# ifndef __this_cpu_xchg_4
-#  define __this_cpu_xchg_4(pcp, nval)	__this_cpu_generic_xchg(pcp, nval)
-# endif
-# ifndef __this_cpu_xchg_8
-#  define __this_cpu_xchg_8(pcp, nval)	__this_cpu_generic_xchg(pcp, nval)
-# endif
 # define __this_cpu_xchg(pcp, nval)				\
-	__pcpu_size_call_return2(__this_cpu_xchg_, (pcp), nval)
+	__pcpu_size_call_return2(raw_cpu_xchg_, (pcp), nval)
 #endif

-#define __this_cpu_generic_cmpxchg(pcp, oval, nval)		\
-({								\
-	typeof(pcp) ret__;					\
-	ret__ = __this_cpu_read(pcp);				\
-	if (ret__ == (oval))					\
-		__this_cpu_write(pcp, nval);			\
-	ret__;							\
-})
-
 #ifndef __this_cpu_cmpxchg
-# ifndef __this_cpu_cmpxchg_1
-#  define __this_cpu_cmpxchg_1(pcp, oval, nval)	__this_cpu_generic_cmpxchg(pcp, oval, nval)
-# endif
-# ifndef __this_cpu_cmpxchg_2
-#  define __this_cpu_cmpxchg_2(pcp, oval, nval)	__this_cpu_generic_cmpxchg(pcp, oval, nval)
-# endif
-# ifndef __this_cpu_cmpxchg_4
-#  define __this_cpu_cmpxchg_4(pcp, oval, nval)	__this_cpu_generic_cmpxchg(pcp, oval, nval)
-# endif
-# ifndef __this_cpu_cmpxchg_8
-#  define __this_cpu_cmpxchg_8(pcp, oval, nval)	__this_cpu_generic_cmpxchg(pcp, oval, nval)
-# endif
 # define __this_cpu_cmpxchg(pcp, oval, nval)			\
-	__pcpu_size_call_return2(__this_cpu_cmpxchg_, pcp, oval, nval)
+	__pcpu_size_call_return2(raw_cpu_cmpxchg_, pcp, oval, nval)
 #endif

-#define __this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
-({								\
-	int __ret = 0;						\
-	if (__this_cpu_read(pcp1) == (oval1) &&			\
-	    __this_cpu_read(pcp2) == (oval2)) {			\
-		__this_cpu_write(pcp1, (nval1));		\
-		__this_cpu_write(pcp2, (nval2));		\
-		__ret = 1;					\
-	}							\
-	(__ret);						\
-})
-
 #ifndef __this_cpu_cmpxchg_double
-# ifndef __this_cpu_cmpxchg_double_1
-#  define __this_cpu_cmpxchg_double_1(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
-	__this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
-# endif
-# ifndef __this_cpu_cmpxchg_double_2
-#  define __this_cpu_cmpxchg_double_2(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
-	__this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
-# endif
-# ifndef __this_cpu_cmpxchg_double_4
-#  define __this_cpu_cmpxchg_double_4(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
-	__this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
-# endif
-# ifndef __this_cpu_cmpxchg_double_8
-#  define __this_cpu_cmpxchg_double_8(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
-	__this_cpu_generic_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
-# endif
 # define __this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)	\
-	__pcpu_double_call_return_bool(__this_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2))
+	__pcpu_double_call_return_bool(raw_cpu_cmpxchg_double_, (pcp1), (pcp2), (oval1), (oval2), (nval1), (nval2))
(oval2), (nval1), (nval2)) 894 615 #endif 895 616 896 617 #endif /* __LINUX_PERCPU_H */