Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
kernel os linux

Merge branches 'torture.2014.11.03a', 'cpu.2014.11.03a', 'doc.2014.11.13a', 'fixes.2014.11.13a', 'signal.2014.10.29a' and 'rt.2014.10.29a' into HEAD

cpu.2014.11.03a: Changes for per-CPU variables.
doc.2014.11.13a: Documentation updates.
fixes.2014.11.13a: Miscellaneous fixes.
signal.2014.10.29a: Signal changes.
rt.2014.10.29a: Real-time changes.
torture.2014.11.03a: Torture-test changes.

+296 -252
+2 -2
Documentation/RCU/rcu.txt
···
 	executed in user mode, or executed in the idle loop, we can
 	safely free up that item.
 
-	Preemptible variants of RCU (CONFIG_TREE_PREEMPT_RCU) get the
+	Preemptible variants of RCU (CONFIG_PREEMPT_RCU) get the
 	same effect, but require that the readers manipulate CPU-local
 	counters.  These counters allow limited types of blocking within
 	RCU read-side critical sections.  SRCU also uses CPU-local
···
 o	I hear that RCU needs work in order to support realtime kernels?
 
 	This work is largely completed.  Realtime-friendly RCU can be
-	enabled via the CONFIG_TREE_PREEMPT_RCU kernel configuration
+	enabled via the CONFIG_PREEMPT_RCU kernel configuration
 	parameter.  However, work is in progress for enabling priority
 	boosting of preempted RCU read-side critical sections.  This is
 	needed if you have CPU-bound realtime threads.
+4 -10
Documentation/RCU/stallwarn.txt
···
 Stall-warning messages may be enabled and disabled completely via
 /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.
 
-CONFIG_RCU_CPU_STALL_VERBOSE
-
-This kernel configuration parameter causes the stall warning to
-also dump the stacks of any tasks that are blocking the current
-RCU-preempt grace period.
-
 CONFIG_RCU_CPU_STALL_INFO
 
 This kernel configuration parameter causes the stall warning to
···
 and that the stall was affecting RCU-sched.  This message will normally be
 followed by a stack dump of the offending CPU.  On TREE_RCU kernel builds,
 RCU and RCU-sched are implemented by the same underlying mechanism,
-while on TREE_PREEMPT_RCU kernel builds, RCU is instead implemented
+while on PREEMPT_RCU kernel builds, RCU is instead implemented
 by rcu_preempt_state.
 
 On the other hand, if the offending CPU fails to print out a stall-warning
···
 This message indicates that CPU 2 detected that CPUs 3 and 5 were both
 causing stalls, and that the stall was affecting RCU-bh.  This message
 will normally be followed by stack dumps for each CPU.  Please note that
-TREE_PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
+PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
 and that the tasks will be indicated by PID, for example, "P3421".
 It is even possible for a rcu_preempt_state stall to be caused by both
 CPUs -and- tasks, in which case the offending CPUs and tasks will all
···
 o	A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
 	is running at a higher priority than the RCU softirq threads.
 	This will prevent RCU callbacks from ever being invoked,
-	and in a CONFIG_TREE_PREEMPT_RCU kernel will further prevent
+	and in a CONFIG_PREEMPT_RCU kernel will further prevent
 	RCU grace periods from ever completing.  Either way, the
 	system will eventually run out of memory and hang.  In the
-	CONFIG_TREE_PREEMPT_RCU case, you might see stall-warning
+	CONFIG_PREEMPT_RCU case, you might see stall-warning
 	messages.
 
 o	A hardware or software issue shuts off the scheduler-clock
+2 -2
Documentation/RCU/trace.txt
···
 for rcutree and next for rcutiny.
 
 
-CONFIG_TREE_RCU and CONFIG_TREE_PREEMPT_RCU debugfs Files and Formats
+CONFIG_TREE_RCU and CONFIG_PREEMPT_RCU debugfs Files and Formats
 
 These implementations of RCU provide several debugfs directories under the
 top-level directory "rcu":
···
 rcu/rcu_sched
 
 Each directory contains files for the corresponding flavor of RCU.
-Note that rcu/rcu_preempt is only present for CONFIG_TREE_PREEMPT_RCU.
+Note that rcu/rcu_preempt is only present for CONFIG_PREEMPT_RCU.
 For CONFIG_TREE_RCU, the RCU flavor maps onto the RCU-sched flavor,
 so that activity for both appears in rcu/rcu_sched.
+1 -1
Documentation/RCU/whatisRCU.txt
···
 	Used by a reader to inform the reclaimer that the reader is
 	entering an RCU read-side critical section.  It is illegal
 	to block while in an RCU read-side critical section, though
-	kernels built with CONFIG_TREE_PREEMPT_RCU can preempt RCU
+	kernels built with CONFIG_PREEMPT_RCU can preempt RCU
 	read-side critical sections.  Any RCU-protected data structure
 	accessed during an RCU read-side critical section is guaranteed to
 	remain unreclaimed for the full duration of that critical section.
+8 -4
Documentation/atomic_ops.txt
···
 maintainers on how to implement atomic counter, bitops, and spinlock
 interfaces properly.
 
-The atomic_t type should be defined as a signed integer.
-Also, it should be made opaque such that any kind of cast to a normal
-C integer type will fail.  Something like the following should
-suffice:
+The atomic_t type should be defined as a signed integer and
+the atomic_long_t type as a signed long integer.  Also, they should
+be made opaque such that any kind of cast to a normal C integer type
+will fail.  Something like the following should suffice:
 
 	typedef struct { int counter; } atomic_t;
+	typedef struct { long counter; } atomic_long_t;
 
 Historically, counter has been declared volatile.  This is now discouraged.
 See Documentation/volatile-considered-harmful.txt for the complete rationale.
···
 initializer is used before runtime.  If the initializer is used at runtime, a
 proper implicit or explicit read memory barrier is needed before reading the
 value with atomic_read from another thread.
+
+As with all of the atomic_ interfaces, replace the leading "atomic_"
+with "atomic_long_" to operate on atomic_long_t.
 
 The second interface can be used at runtime, as in:
+16
Documentation/kernel-parameters.txt
···
 			quiescent states.  Units are jiffies, minimum
 			value is one, and maximum value is HZ.
 
+	rcutree.kthread_prio= 	 [KNL,BOOT]
+			Set the SCHED_FIFO priority of the RCU
+			per-CPU kthreads (rcuc/N). This value is also
+			used for the priority of the RCU boost threads
+			(rcub/N). Valid values are 1-99 and the default
+			is 1 (the least-favored priority).
+
 	rcutree.rcu_nocb_leader_stride= [KNL]
 			Set the number of NOCB kthread groups, which
 			defaults to the square root of the number of
···
 			Set timeout in jiffies for RCU task stall warning
 			messages.  Disable with a value less than or equal
 			to zero.
+
+	rcupdate.rcu_self_test= [KNL]
+			Run the RCU early boot self tests
+
+	rcupdate.rcu_self_test_bh= [KNL]
+			Run the RCU bh early boot self tests
+
+	rcupdate.rcu_self_test_sched= [KNL]
+			Run the RCU sched early boot self tests
 
 	rdinit=		[KNL]
 			Format: <full_path>
+29 -11
Documentation/memory-barriers.txt
···
 The set of accesses as seen by the memory system in the middle can be arranged
 in 24 different combinations:
 
-	STORE A=3,	STORE B=4,	x=LOAD A->3,	y=LOAD B->4
-	STORE A=3,	STORE B=4,	y=LOAD B->4,	x=LOAD A->3
-	STORE A=3,	x=LOAD A->3,	STORE B=4,	y=LOAD B->4
-	STORE A=3,	x=LOAD A->3,	y=LOAD B->2,	STORE B=4
-	STORE A=3,	y=LOAD B->2,	STORE B=4,	x=LOAD A->3
-	STORE A=3,	y=LOAD B->2,	x=LOAD A->3,	STORE B=4
-	STORE B=4,	STORE A=3,	x=LOAD A->3,	y=LOAD B->4
+	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
+	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
+	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
+	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
+	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
+	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
+	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
 	STORE B=4, ...
 	...
 
 and can thus result in four different combinations of values:
 
-	x == 1, y == 2
-	x == 1, y == 4
-	x == 3, y == 2
-	x == 3, y == 4
+	x == 2, y == 1
+	x == 2, y == 3
+	x == 4, y == 1
+	x == 4, y == 3
···
 Please note once again that the stores to 'b' differ.  If they were
 identical, as noted earlier, the compiler could pull this store outside
 of the 'if' statement.
+
+You must also be careful not to rely too much on boolean short-circuit
+evaluation.  Consider this example:
+
+	q = ACCESS_ONCE(a);
+	if (a || 1 > 0)
+		ACCESS_ONCE(b) = 1;
+
+Because the second condition is always true, the compiler can transform
+this example as following, defeating control dependency:
+
+	q = ACCESS_ONCE(a);
+	ACCESS_ONCE(b) = 1;
+
+This example underscores the need to ensure that the compiler cannot
+out-guess your code.  More generally, although ACCESS_ONCE() does force
+the compiler to actually emit code for a given load, it does not force
+the compiler to use the results.
 
 Finally, control dependencies do -not- provide transitivity.  This is
 demonstrated by two related examples, with the initial values of
+1 -1
include/linux/init_task.h
···
 #define INIT_IDS
 #endif
 
-#ifdef CONFIG_TREE_PREEMPT_RCU
+#ifdef CONFIG_PREEMPT_RCU
 #define INIT_TASK_RCU_TREE_PREEMPT()					\
 	.rcu_blocked_node = NULL,
 #else
+11 -8
include/linux/rcupdate.h
···
 	INVALID_RCU_FLAVOR
 };
 
-#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU)
+#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
 void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
 			    unsigned long *gpnum, unsigned long *completed);
 void rcutorture_record_test_transition(void);
···
 void rcu_init(void);
 void rcu_sched_qs(void);
 void rcu_bh_qs(void);
-void rcu_check_callbacks(int cpu, int user);
+void rcu_check_callbacks(int user);
 struct notifier_block;
 void rcu_idle_enter(void);
 void rcu_idle_exit(void);
···
  */
 #define cond_resched_rcu_qs() \
 do { \
-	rcu_note_voluntary_context_switch(current); \
-	cond_resched(); \
+	if (!cond_resched()) \
+		rcu_note_voluntary_context_switch(current); \
 } while (0)
 
 #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) || defined(CONFIG_SMP)
···
 		      void (*func)(struct rcu_head *head));
 void wait_rcu_gp(call_rcu_func_t crf);
 
-#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU)
+#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
 #include <linux/rcutree.h>
 #elif defined(CONFIG_TINY_RCU)
 #include <linux/rcutiny.h>
···
  *
  * In non-preemptible RCU implementations (TREE_RCU and TINY_RCU),
  * it is illegal to block while in an RCU read-side critical section.
- * In preemptible RCU implementations (TREE_PREEMPT_RCU) in CONFIG_PREEMPT
+ * In preemptible RCU implementations (PREEMPT_RCU) in CONFIG_PREEMPT
  * kernel builds, RCU read-side critical sections may be preempted,
  * but explicit blocking is illegal.  Finally, in preemptible RCU
  * implementations in real-time (with -rt patchset) kernel builds, RCU
···
  * Unfortunately, this function acquires the scheduler's runqueue and
  * priority-inheritance spinlocks.  This means that deadlock could result
  * if the caller of rcu_read_unlock() already holds one of these locks or
- * any lock that is ever acquired while holding them.
+ * any lock that is ever acquired while holding them; or any lock which
+ * can be taken from interrupt context because rcu_boost()->rt_mutex_lock()
+ * does not disable irqs while taking ->wait_lock.
  *
  * That said, RCU readers are never priority boosted unless they were
  * preempted.  Therefore, one way to avoid deadlock is to make sure
···
  */
 #define RCU_INIT_POINTER(p, v) \
 	do { \
+		rcu_dereference_sparse(p, __rcu); \
 		p = RCU_INITIALIZER(v); \
 	} while (0)
···
 	__kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head))
 
 #if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL)
-static inline int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies)
+static inline int rcu_needs_cpu(unsigned long *delta_jiffies)
 {
 	*delta_jiffies = ULONG_MAX;
 	return 0;
+1 -1
include/linux/rcutiny.h
···
 	call_rcu(head, func);
 }
 
-static inline void rcu_note_context_switch(int cpu)
+static inline void rcu_note_context_switch(void)
 {
 	rcu_sched_qs();
 }
+3 -3
include/linux/rcutree.h
···
 #ifndef __LINUX_RCUTREE_H
 #define __LINUX_RCUTREE_H
 
-void rcu_note_context_switch(int cpu);
+void rcu_note_context_switch(void);
 #ifndef CONFIG_RCU_NOCB_CPU_ALL
-int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies);
+int rcu_needs_cpu(unsigned long *delta_jiffies);
 #endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */
 void rcu_cpu_stall_reset(void);
 
···
  */
 static inline void rcu_virt_note_context_switch(int cpu)
 {
-	rcu_note_context_switch(cpu);
+	rcu_note_context_switch();
 }
 
 void synchronize_rcu_bh(void);
+2 -2
include/linux/sched.h
···
 	union rcu_special rcu_read_unlock_special;
 	struct list_head rcu_node_entry;
 #endif /* #ifdef CONFIG_PREEMPT_RCU */
-#ifdef CONFIG_TREE_PREEMPT_RCU
+#ifdef CONFIG_PREEMPT_RCU
 	struct rcu_node *rcu_blocked_node;
-#endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */
+#endif /* #ifdef CONFIG_PREEMPT_RCU */
 #ifdef CONFIG_TASKS_RCU
 	unsigned long rcu_tasks_nvcsw;
 	bool rcu_tasks_holdout;
+2 -2
include/trace/events/rcu.h
···
 
 #ifdef CONFIG_RCU_TRACE
 
-#if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU)
+#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
 
 /*
  * Tracepoint for grace-period events.  Takes a string identifying the
···
 		  __entry->cpu, __entry->qsevent)
 );
 
-#endif /* #if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU) */
+#endif /* #if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU) */
 
 /*
  * Tracepoint for dyntick-idle entry/exit events.  These take a string
+21 -28
init/Kconfig
···
 	  thousands of CPUs.  It also scales down nicely to
 	  smaller systems.
 
-config TREE_PREEMPT_RCU
+config PREEMPT_RCU
 	bool "Preemptible tree-based hierarchical RCU"
 	depends on PREEMPT
 	select IRQ_WORK
···
 
 endchoice
 
-config PREEMPT_RCU
-	def_bool TREE_PREEMPT_RCU
-	help
-	  This option enables preemptible-RCU code that is common between
-	  TREE_PREEMPT_RCU and, in the old days, TINY_PREEMPT_RCU.
-
 config TASKS_RCU
 	bool "Task_based RCU implementation using voluntary context switch"
 	default n
···
 	  If unsure, say N.
 
 config RCU_STALL_COMMON
-	def_bool ( TREE_RCU || TREE_PREEMPT_RCU || RCU_TRACE )
+	def_bool ( TREE_RCU || PREEMPT_RCU || RCU_TRACE )
 	help
 	  This option enables RCU CPU stall code that is common between
 	  the TINY and TREE variants of RCU.  The purpose is to allow
···
 	int "Tree-based hierarchical RCU fanout value"
 	range 2 64 if 64BIT
 	range 2 32 if !64BIT
-	depends on TREE_RCU || TREE_PREEMPT_RCU
+	depends on TREE_RCU || PREEMPT_RCU
 	default 64 if 64BIT
 	default 32 if !64BIT
 	help
···
 	int "Tree-based hierarchical RCU leaf-level fanout value"
 	range 2 RCU_FANOUT if 64BIT
 	range 2 RCU_FANOUT if !64BIT
-	depends on TREE_RCU || TREE_PREEMPT_RCU
+	depends on TREE_RCU || PREEMPT_RCU
 	default 16
 	help
 	  This option controls the leaf-level fanout of hierarchical
···
 
 config RCU_FANOUT_EXACT
 	bool "Disable tree-based hierarchical RCU auto-balancing"
-	depends on TREE_RCU || TREE_PREEMPT_RCU
+	depends on TREE_RCU || PREEMPT_RCU
 	default n
 	help
 	  This option forces use of the exact RCU_FANOUT value specified,
···
 	  Say N if you are unsure.
 
 config TREE_RCU_TRACE
-	def_bool RCU_TRACE && ( TREE_RCU || TREE_PREEMPT_RCU )
+	def_bool RCU_TRACE && ( TREE_RCU || PREEMPT_RCU )
 	select DEBUG_FS
 	help
 	  This option provides tracing for the TREE_RCU and
-	  TREE_PREEMPT_RCU implementations, permitting Makefile to
+	  PREEMPT_RCU implementations, permitting Makefile to
 	  trivially select kernel/rcutree_trace.c.
 
 config RCU_BOOST
···
 	  Say Y here if you are working with real-time apps or heavy loads
 	  Say N here if you are unsure.
 
-config RCU_BOOST_PRIO
-	int "Real-time priority to boost RCU readers to"
+config RCU_KTHREAD_PRIO
+	int "Real-time priority to use for RCU worker threads"
 	range 1 99
 	depends on RCU_BOOST
 	default 1
 	help
-	  This option specifies the real-time priority to which long-term
-	  preempted RCU readers are to be boosted.  If you are working
-	  with a real-time application that has one or more CPU-bound
-	  threads running at a real-time priority level, you should set
-	  RCU_BOOST_PRIO to a priority higher then the highest-priority
-	  real-time CPU-bound thread.  The default RCU_BOOST_PRIO value
-	  of 1 is appropriate in the common case, which is real-time
+	  This option specifies the SCHED_FIFO priority value that will be
+	  assigned to the rcuc/n and rcub/n threads and is also the value
+	  used for RCU_BOOST (if enabled).  If you are working with a
+	  real-time application that has one or more CPU-bound threads
+	  running at a real-time priority level, you should set
+	  RCU_KTHREAD_PRIO to a priority higher than the highest-priority
+	  real-time CPU-bound application thread.  The default RCU_KTHREAD_PRIO
+	  value of 1 is appropriate in the common case, which is real-time
 	  applications that do not have any CPU-bound threads.
 
 	  Some real-time applications might not have a single real-time
 	  thread that saturates a given CPU, but instead might have
 	  multiple real-time threads that, taken together, fully utilize
-	  that CPU.  In this case, you should set RCU_BOOST_PRIO to
+	  that CPU.  In this case, you should set RCU_KTHREAD_PRIO to
 	  a priority higher than the lowest-priority thread that is
 	  conspiring to prevent the CPU from running any non-real-time
 	  tasks.  For example, if one thread at priority 10 and another
 	  thread at priority 5 are between themselves fully consuming
-	  the CPU time on a given CPU, then RCU_BOOST_PRIO should be
+	  the CPU time on a given CPU, then RCU_KTHREAD_PRIO should be
 	  set to priority 6 or higher.
 
 	  Specify the real-time priority, or take the default if unsure.
···
 
 config RCU_NOCB_CPU
 	bool "Offload RCU callback processing from boot-selected CPUs"
-	depends on TREE_RCU || TREE_PREEMPT_RCU
+	depends on TREE_RCU || PREEMPT_RCU
 	default n
 	help
 	  Use this option to reduce OS jitter for aggressive HPC or
···
 choice
 	prompt "Build-forced no-CBs CPUs"
 	default RCU_NOCB_CPU_NONE
+	depends on RCU_NOCB_CPU
 	help
 	  This option allows no-CBs CPUs (whose RCU callbacks are invoked
 	  from kthreads rather than from softirq context) to be specified
···
 config RCU_NOCB_CPU_NONE
 	bool "No build_forced no-CBs CPUs"
-	depends on RCU_NOCB_CPU
 	help
 	  This option does not force any of the CPUs to be no-CBs CPUs.
 	  Only CPUs designated by the rcu_nocbs= boot parameter will be
···
 config RCU_NOCB_CPU_ZERO
 	bool "CPU 0 is a build_forced no-CBs CPU"
-	depends on RCU_NOCB_CPU
 	help
 	  This option forces CPU 0 to be a no-CBs CPU, so that its RCU
 	  callbacks are invoked by a per-CPU kthread whose name begins
···
 config RCU_NOCB_CPU_ALL
 	bool "All CPUs are build_forced no-CBs CPUs"
-	depends on RCU_NOCB_CPU
 	help
 	  This option forces all CPUs to be no-CBs CPUs.  The rcu_nocbs=
 	  boot parameter will be ignored.  All CPUs' RCU callbacks will
+13 -6
kernel/cpu.c
···
 #define cpuhp_lock_acquire()      lock_map_acquire(&cpu_hotplug.dep_map)
 #define cpuhp_lock_release()      lock_map_release(&cpu_hotplug.dep_map)
 
+static void apply_puts_pending(int max)
+{
+	int delta;
+
+	if (atomic_read(&cpu_hotplug.puts_pending) >= max) {
+		delta = atomic_xchg(&cpu_hotplug.puts_pending, 0);
+		cpu_hotplug.refcount -= delta;
+	}
+}
+
 void get_online_cpus(void)
 {
 	might_sleep();
···
 		return;
 	cpuhp_lock_acquire_read();
 	mutex_lock(&cpu_hotplug.lock);
+	apply_puts_pending(65536);
 	cpu_hotplug.refcount++;
 	mutex_unlock(&cpu_hotplug.lock);
 }
···
 	if (!mutex_trylock(&cpu_hotplug.lock))
 		return false;
 	cpuhp_lock_acquire_tryread();
+	apply_puts_pending(65536);
 	cpu_hotplug.refcount++;
 	mutex_unlock(&cpu_hotplug.lock);
 	return true;
···
 	cpuhp_lock_acquire();
 	for (;;) {
 		mutex_lock(&cpu_hotplug.lock);
-		if (atomic_read(&cpu_hotplug.puts_pending)) {
-			int delta;
-
-			delta = atomic_xchg(&cpu_hotplug.puts_pending, 0);
-			cpu_hotplug.refcount -= delta;
-		}
+		apply_puts_pending(1);
 		if (likely(!cpu_hotplug.refcount))
 			break;
 		__set_current_state(TASK_UNINTERRUPTIBLE);
+4 -1
kernel/fork.c
···
 {
 	if (atomic_dec_and_test(&sighand->count)) {
 		signalfd_cleanup(sighand);
+		/*
+		 * sighand_cachep is SLAB_DESTROY_BY_RCU so we can free it
+		 * without an RCU grace period, see __lock_task_sighand().
+		 */
 		kmem_cache_free(sighand_cachep, sighand);
 	}
 }
-
 
 /*
  * Initialize POSIX timer handling for a thread group.
+1 -1
kernel/rcu/Makefile
···
 obj-y += update.o srcu.o
 obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
 obj-$(CONFIG_TREE_RCU) += tree.o
-obj-$(CONFIG_TREE_PREEMPT_RCU) += tree.o
+obj-$(CONFIG_PREEMPT_RCU) += tree.o
 obj-$(CONFIG_TREE_RCU_TRACE) += tree_trace.o
 obj-$(CONFIG_TINY_RCU) += tiny.o
+1 -1
kernel/rcu/tiny.c
···
  * be called from hardirq context.  It is normally called from the
  * scheduling-clock interrupt.
  */
-void rcu_check_callbacks(int cpu, int user)
+void rcu_check_callbacks(int user)
 {
 	RCU_TRACE(check_cpu_stalls());
 	if (user || rcu_is_cpu_rrupt_from_idle())
+55 -40
kernel/rcu/tree.c
···
 	.name = RCU_STATE_NAME(sname), \
 	.abbr = sabbr, \
 }; \
-DEFINE_PER_CPU(struct rcu_data, sname##_data)
+DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, sname##_data)
 
 RCU_STATE_INITIALIZER(rcu_sched, 's', call_rcu_sched);
 RCU_STATE_INITIALIZER(rcu_bh, 'b', call_rcu_bh);
···
  * a time.
  */
 static int rcu_scheduler_fully_active __read_mostly;
-
-#ifdef CONFIG_RCU_BOOST
-
-/*
- * Control variables for per-CPU and per-rcu_node kthreads.  These
- * handle all flavors of RCU.
- */
-static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);
-DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
-DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
-DEFINE_PER_CPU(char, rcu_cpu_has_work);
-
-#endif /* #ifdef CONFIG_RCU_BOOST */
 
 static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu);
 static void invoke_rcu_core(void);
···
  * and requires special handling for preemptible RCU.
  * The caller must have disabled preemption.
  */
-void rcu_note_context_switch(int cpu)
+void rcu_note_context_switch(void)
 {
 	trace_rcu_utilization(TPS("Start context switch"));
 	rcu_sched_qs();
-	rcu_preempt_note_context_switch(cpu);
+	rcu_preempt_note_context_switch();
 	if (unlikely(raw_cpu_read(rcu_sched_qs_mask)))
 		rcu_momentary_dyntick_idle();
 	trace_rcu_utilization(TPS("End context switch"));
···
 				  unsigned long *maxj),
 			 bool *isidle, unsigned long *maxj);
 static void force_quiescent_state(struct rcu_state *rsp);
-static int rcu_pending(int cpu);
+static int rcu_pending(void);
 
 /*
  * Return the number of RCU-sched batches processed thus far for debug & stats.
···
  * we really have entered idle, and must do the appropriate accounting.
  * The caller must have disabled interrupts.
  */
-static void rcu_eqs_enter_common(struct rcu_dynticks *rdtp, long long oldval,
-				 bool user)
+static void rcu_eqs_enter_common(long long oldval, bool user)
 {
 	struct rcu_state *rsp;
 	struct rcu_data *rdp;
+	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
 
 	trace_rcu_dyntick(TPS("Start"), oldval, rdtp->dynticks_nesting);
 	if (!user && !is_idle_task(current)) {
···
 		rdp = this_cpu_ptr(rsp->rda);
 		do_nocb_deferred_wakeup(rdp);
 	}
-	rcu_prepare_for_idle(smp_processor_id());
+	rcu_prepare_for_idle();
 	/* CPUs seeing atomic_inc() must see prior RCU read-side crit sects */
 	smp_mb__before_atomic();  /* See above. */
 	atomic_inc(&rdtp->dynticks);
···
 	WARN_ON_ONCE((oldval & DYNTICK_TASK_NEST_MASK) == 0);
 	if ((oldval & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE) {
 		rdtp->dynticks_nesting = 0;
-		rcu_eqs_enter_common(rdtp, oldval, user);
+		rcu_eqs_enter_common(oldval, user);
 	} else {
 		rdtp->dynticks_nesting -= DYNTICK_TASK_NEST_VALUE;
 	}
···
 
 	local_irq_save(flags);
 	rcu_eqs_enter(false);
-	rcu_sysidle_enter(this_cpu_ptr(&rcu_dynticks), 0);
+	rcu_sysidle_enter(0);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(rcu_idle_enter);
···
 	if (rdtp->dynticks_nesting)
 		trace_rcu_dyntick(TPS("--="), oldval, rdtp->dynticks_nesting);
 	else
-		rcu_eqs_enter_common(rdtp, oldval, true);
-	rcu_sysidle_enter(rdtp, 1);
+		rcu_eqs_enter_common(oldval, true);
+	rcu_sysidle_enter(1);
 	local_irq_restore(flags);
 }
···
  * we really have exited idle, and must do the appropriate accounting.
  * The caller must have disabled interrupts.
  */
-static void rcu_eqs_exit_common(struct rcu_dynticks *rdtp, long long oldval,
-				int user)
+static void rcu_eqs_exit_common(long long oldval, int user)
 {
+	struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
+
 	rcu_dynticks_task_exit();
 	smp_mb__before_atomic();  /* Force ordering w/previous sojourn. */
 	atomic_inc(&rdtp->dynticks);
 	/* CPUs seeing atomic_inc() must see later RCU read-side crit sects */
 	smp_mb__after_atomic();  /* See above. */
 	WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks) & 0x1));
-	rcu_cleanup_after_idle(smp_processor_id());
+	rcu_cleanup_after_idle();
 	trace_rcu_dyntick(TPS("End"), oldval, rdtp->dynticks_nesting);
 	if (!user && !is_idle_task(current)) {
 		struct task_struct *idle __maybe_unused =
···
 		rdtp->dynticks_nesting += DYNTICK_TASK_NEST_VALUE;
 	} else {
 		rdtp->dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
-		rcu_eqs_exit_common(rdtp, oldval, user);
+		rcu_eqs_exit_common(oldval, user);
 	}
 }
···
 
 	local_irq_save(flags);
 	rcu_eqs_exit(false);
-	rcu_sysidle_exit(this_cpu_ptr(&rcu_dynticks), 0);
+	rcu_sysidle_exit(0);
 	local_irq_restore(flags);
 }
 EXPORT_SYMBOL_GPL(rcu_idle_exit);
···
 	if (oldval)
 		trace_rcu_dyntick(TPS("++="), oldval, rdtp->dynticks_nesting);
 	else
-		rcu_eqs_exit_common(rdtp, oldval, true);
-	rcu_sysidle_exit(rdtp, 1);
+		rcu_eqs_exit_common(oldval, true);
+	rcu_sysidle_exit(1);
 	local_irq_restore(flags);
 }
···
  * invoked from the scheduling-clock interrupt.  If rcu_pending returns
  * false, there is no point in invoking rcu_check_callbacks().
  */
-void rcu_check_callbacks(int cpu, int user)
+void rcu_check_callbacks(int user)
 {
 	trace_rcu_utilization(TPS("Start scheduler-tick"));
 	increment_cpu_stall_ticks();
···
 
 		rcu_bh_qs();
 	}
-	rcu_preempt_check_callbacks(cpu);
-	if (rcu_pending(cpu))
+	rcu_preempt_check_callbacks();
+	if (rcu_pending())
 		invoke_rcu_core();
 	if (user)
 		rcu_note_voluntary_context_switch(current);
···
  */
 void synchronize_sched_expedited(void)
 {
+	cpumask_var_t cm;
+	bool cma = false;
+	int cpu;
 	long firstsnap, s, snap;
 	int trycount = 0;
 	struct rcu_state *rsp = &rcu_sched_state;
···
 	}
 	WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id()));
 
+	/* Offline CPUs, idle CPUs, and any CPU we run on are quiescent. */
+	cma = zalloc_cpumask_var(&cm, GFP_KERNEL);
+	if (cma) {
+		cpumask_copy(cm, cpu_online_mask);
+		cpumask_clear_cpu(raw_smp_processor_id(), cm);
+		for_each_cpu(cpu, cm) {
+			struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu);
+
+			if (!(atomic_add_return(0, &rdtp->dynticks) & 0x1))
+				cpumask_clear_cpu(cpu, cm);
+		}
+		if (cpumask_weight(cm) == 0)
+			goto all_cpus_idle;
+	}
+
 	/*
 	 * Each pass through the following loop attempts to force a
 	 * context switch on each CPU.
 	 */
-	while (try_stop_cpus(cpu_online_mask,
+	while (try_stop_cpus(cma ? cm : cpu_online_mask,
 			     synchronize_sched_expedited_cpu_stop,
 			     NULL) == -EAGAIN) {
 		put_online_cpus();
···
 			/* ensure test happens before caller kfree */
 			smp_mb__before_atomic(); /* ^^^ */
 			atomic_long_inc(&rsp->expedited_workdone1);
+			free_cpumask_var(cm);
 			return;
 		}
···
 		} else {
 			wait_rcu_gp(call_rcu_sched);
 			atomic_long_inc(&rsp->expedited_normal);
+			free_cpumask_var(cm);
 			return;
 		}
···
 			/* ensure test happens before caller kfree */
 			smp_mb__before_atomic(); /* ^^^ */
 			atomic_long_inc(&rsp->expedited_workdone2);
+			free_cpumask_var(cm);
 			return;
 		}
···
 			/* CPU hotplug operation in flight, use normal GP. */
 			wait_rcu_gp(call_rcu_sched);
 			atomic_long_inc(&rsp->expedited_normal);
+			free_cpumask_var(cm);
 			return;
 		}
 		snap = atomic_long_read(&rsp->expedited_start);
 		smp_mb(); /* ensure read is before try_stop_cpus(). */
 	}
 	atomic_long_inc(&rsp->expedited_stoppedcpus);
+
+all_cpus_idle:
+	free_cpumask_var(cm);
 
 	/*
 	 * Everyone up to our most recent fetch is covered by our grace
···
  * by the current CPU, returning 1 if so.  This function is part of the
  * RCU implementation; it is -not- an exported member of the RCU API.
  */
-static int rcu_pending(int cpu)
+static int rcu_pending(void)
 {
 	struct rcu_state *rsp;
 
 	for_each_rcu_flavor(rsp)
-		if (__rcu_pending(rsp, per_cpu_ptr(rsp->rda, cpu)))
+		if (__rcu_pending(rsp, this_cpu_ptr(rsp->rda)))
 			return 1;
 	return 0;
 }
···
  * non-NULL, store an indication of whether all callbacks are lazy.
  * (If there are no callbacks, all of them are deemed to be lazy.)
  */
-static int __maybe_unused rcu_cpu_has_callbacks(int cpu, bool *all_lazy)
+static int __maybe_unused rcu_cpu_has_callbacks(bool *all_lazy)
 {
 	bool al = true;
 	bool hc = false;
···
 	struct rcu_state *rsp;
 
 	for_each_rcu_flavor(rsp) {
-		rdp = per_cpu_ptr(rsp->rda, cpu);
+		rdp = this_cpu_ptr(rsp->rda);
 		if (!rdp->nxtlist)
 			continue;
 		hc = true;
···
 	case CPU_DEAD_FROZEN:
 	case CPU_UP_CANCELED:
 	case CPU_UP_CANCELED_FROZEN:
-		for_each_rcu_flavor(rsp)
+		for_each_rcu_flavor(rsp) {
 			rcu_cleanup_dead_cpu(cpu, rsp);
+			do_nocb_deferred_wakeup(per_cpu_ptr(rsp->rda, cpu));
+		}
 		break;
 	default:
 		break;
+11 -11
kernel/rcu/tree.h
··· 139 139 unsigned long expmask; /* Groups that have ->blkd_tasks */ 140 140 /* elements that need to drain to allow the */ 141 141 /* current expedited grace period to */ 142 - /* complete (only for TREE_PREEMPT_RCU). */ 142 + /* complete (only for PREEMPT_RCU). */ 143 143 unsigned long qsmaskinit; 144 144 /* Per-GP initial value for qsmask & expmask. */ 145 145 unsigned long grpmask; /* Mask to apply to parent qsmask. */ ··· 530 530 extern struct rcu_state rcu_bh_state; 531 531 DECLARE_PER_CPU(struct rcu_data, rcu_bh_data); 532 532 533 - #ifdef CONFIG_TREE_PREEMPT_RCU 533 + #ifdef CONFIG_PREEMPT_RCU 534 534 extern struct rcu_state rcu_preempt_state; 535 535 DECLARE_PER_CPU(struct rcu_data, rcu_preempt_data); 536 - #endif /* #ifdef CONFIG_TREE_PREEMPT_RCU */ 536 + #endif /* #ifdef CONFIG_PREEMPT_RCU */ 537 537 538 538 #ifdef CONFIG_RCU_BOOST 539 539 DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_status); ··· 547 547 /* Forward declarations for rcutree_plugin.h */ 548 548 static void rcu_bootup_announce(void); 549 549 long rcu_batches_completed(void); 550 - static void rcu_preempt_note_context_switch(int cpu); 550 + static void rcu_preempt_note_context_switch(void); 551 551 static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp); 552 552 #ifdef CONFIG_HOTPLUG_CPU 553 553 static void rcu_report_unblock_qs_rnp(struct rcu_node *rnp, ··· 561 561 struct rcu_node *rnp, 562 562 struct rcu_data *rdp); 563 563 #endif /* #ifdef CONFIG_HOTPLUG_CPU */ 564 - static void rcu_preempt_check_callbacks(int cpu); 564 + static void rcu_preempt_check_callbacks(void); 565 565 void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu)); 566 - #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) 566 + #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_PREEMPT_RCU) 567 567 static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp, 568 568 bool wake); 569 - #endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_TREE_PREEMPT_RCU) */ 
569 + #endif /* #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_PREEMPT_RCU) */ 570 570 static void __init __rcu_init_preempt(void); 571 571 static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags); 572 572 static void rcu_preempt_boost_start_gp(struct rcu_node *rnp); ··· 579 579 #endif /* #ifdef CONFIG_RCU_BOOST */ 580 580 static void __init rcu_spawn_boost_kthreads(void); 581 581 static void rcu_prepare_kthreads(int cpu); 582 - static void rcu_cleanup_after_idle(int cpu); 583 - static void rcu_prepare_for_idle(int cpu); 582 + static void rcu_cleanup_after_idle(void); 583 + static void rcu_prepare_for_idle(void); 584 584 static void rcu_idle_count_callbacks_posted(void); 585 585 static void print_cpu_stall_info_begin(void); 586 586 static void print_cpu_stall_info(struct rcu_state *rsp, int cpu); ··· 606 606 #endif /* #ifdef CONFIG_RCU_NOCB_CPU */ 607 607 static void __maybe_unused rcu_kick_nohz_cpu(int cpu); 608 608 static bool init_nocb_callback_list(struct rcu_data *rdp); 609 - static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq); 610 - static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq); 609 + static void rcu_sysidle_enter(int irq); 610 + static void rcu_sysidle_exit(int irq); 611 611 static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle, 612 612 unsigned long *maxj); 613 613 static bool is_sysidle_rcu_state(struct rcu_state *rsp);
+59 -52
kernel/rcu/tree_plugin.h
··· 30 30 #include <linux/smpboot.h> 31 31 #include "../time/tick-internal.h" 32 32 33 - #define RCU_KTHREAD_PRIO 1 34 - 35 33 #ifdef CONFIG_RCU_BOOST 34 + 36 35 #include "../locking/rtmutex_common.h" 37 - #define RCU_BOOST_PRIO CONFIG_RCU_BOOST_PRIO 38 - #else 39 - #define RCU_BOOST_PRIO RCU_KTHREAD_PRIO 40 - #endif 36 + 37 + /* rcuc/rcub kthread realtime priority */ 38 + static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO; 39 + module_param(kthread_prio, int, 0644); 40 + 41 + /* 42 + * Control variables for per-CPU and per-rcu_node kthreads. These 43 + * handle all flavors of RCU. 44 + */ 45 + static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task); 46 + DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_status); 47 + DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops); 48 + DEFINE_PER_CPU(char, rcu_cpu_has_work); 49 + 50 + #endif /* #ifdef CONFIG_RCU_BOOST */ 41 51 42 52 #ifdef CONFIG_RCU_NOCB_CPU 43 53 static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */ ··· 82 72 #ifdef CONFIG_RCU_TORTURE_TEST_RUNNABLE 83 73 pr_info("\tRCU torture testing starts during boot.\n"); 84 74 #endif 85 - #if defined(CONFIG_TREE_PREEMPT_RCU) && !defined(CONFIG_RCU_CPU_STALL_VERBOSE) 86 - pr_info("\tDump stacks of tasks blocking RCU-preempt GP.\n"); 87 - #endif 88 75 #if defined(CONFIG_RCU_CPU_STALL_INFO) 89 76 pr_info("\tAdditional per-CPU info printed with stalls.\n"); 90 77 #endif ··· 92 85 pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf); 93 86 if (nr_cpu_ids != NR_CPUS) 94 87 pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids); 88 + #ifdef CONFIG_RCU_BOOST 89 + pr_info("\tRCU kthread priority: %d.\n", kthread_prio); 90 + #endif 95 91 } 96 92 97 - #ifdef CONFIG_TREE_PREEMPT_RCU 93 + #ifdef CONFIG_PREEMPT_RCU 98 94 99 95 RCU_STATE_INITIALIZER(rcu_preempt, 'p', call_rcu); 100 96 static struct rcu_state *rcu_state_p = &rcu_preempt_state; ··· 166 156 * 167 157 * Caller must disable preemption. 
168 158 */ 169 - static void rcu_preempt_note_context_switch(int cpu) 159 + static void rcu_preempt_note_context_switch(void) 170 160 { 171 161 struct task_struct *t = current; 172 162 unsigned long flags; ··· 177 167 !t->rcu_read_unlock_special.b.blocked) { 178 168 179 169 /* Possibly blocking in an RCU read-side critical section. */ 180 - rdp = per_cpu_ptr(rcu_preempt_state.rda, cpu); 170 + rdp = this_cpu_ptr(rcu_preempt_state.rda); 181 171 rnp = rdp->mynode; 182 172 raw_spin_lock_irqsave(&rnp->lock, flags); 183 173 smp_mb__after_unlock_lock(); ··· 425 415 } 426 416 } 427 417 428 - #ifdef CONFIG_RCU_CPU_STALL_VERBOSE 429 - 430 418 /* 431 419 * Dump detailed information for all tasks blocking the current RCU 432 420 * grace period on the specified rcu_node structure. ··· 458 450 rcu_for_each_leaf_node(rsp, rnp) 459 451 rcu_print_detail_task_stall_rnp(rnp); 460 452 } 461 - 462 - #else /* #ifdef CONFIG_RCU_CPU_STALL_VERBOSE */ 463 - 464 - static void rcu_print_detail_task_stall(struct rcu_state *rsp) 465 - { 466 - } 467 - 468 - #endif /* #else #ifdef CONFIG_RCU_CPU_STALL_VERBOSE */ 469 453 470 454 #ifdef CONFIG_RCU_CPU_STALL_INFO 471 455 ··· 621 621 * 622 622 * Caller must disable hard irqs. 
623 623 */ 624 - static void rcu_preempt_check_callbacks(int cpu) 624 + static void rcu_preempt_check_callbacks(void) 625 625 { 626 626 struct task_struct *t = current; 627 627 ··· 630 630 return; 631 631 } 632 632 if (t->rcu_read_lock_nesting > 0 && 633 - per_cpu(rcu_preempt_data, cpu).qs_pending && 634 - !per_cpu(rcu_preempt_data, cpu).passed_quiesce) 633 + __this_cpu_read(rcu_preempt_data.qs_pending) && 634 + !__this_cpu_read(rcu_preempt_data.passed_quiesce)) 635 635 t->rcu_read_unlock_special.b.need_qs = true; 636 636 } 637 637 ··· 919 919 __rcu_read_unlock(); 920 920 } 921 921 922 - #else /* #ifdef CONFIG_TREE_PREEMPT_RCU */ 922 + #else /* #ifdef CONFIG_PREEMPT_RCU */ 923 923 924 924 static struct rcu_state *rcu_state_p = &rcu_sched_state; 925 925 ··· 945 945 * Because preemptible RCU does not exist, we never have to check for 946 946 * CPUs being in quiescent states. 947 947 */ 948 - static void rcu_preempt_note_context_switch(int cpu) 948 + static void rcu_preempt_note_context_switch(void) 949 949 { 950 950 } 951 951 ··· 1017 1017 * Because preemptible RCU does not exist, it never has any callbacks 1018 1018 * to check. 1019 1019 */ 1020 - static void rcu_preempt_check_callbacks(int cpu) 1020 + static void rcu_preempt_check_callbacks(void) 1021 1021 { 1022 1022 } 1023 1023 ··· 1070 1070 { 1071 1071 } 1072 1072 1073 - #endif /* #else #ifdef CONFIG_TREE_PREEMPT_RCU */ 1073 + #endif /* #else #ifdef CONFIG_PREEMPT_RCU */ 1074 1074 1075 1075 #ifdef CONFIG_RCU_BOOST 1076 1076 ··· 1326 1326 smp_mb__after_unlock_lock(); 1327 1327 rnp->boost_kthread_task = t; 1328 1328 raw_spin_unlock_irqrestore(&rnp->lock, flags); 1329 - sp.sched_priority = RCU_BOOST_PRIO; 1329 + sp.sched_priority = kthread_prio; 1330 1330 sched_setscheduler_nocheck(t, SCHED_FIFO, &sp); 1331 1331 wake_up_process(t); /* get to TASK_INTERRUPTIBLE quickly. 
*/ 1332 1332 return 0; ··· 1343 1343 { 1344 1344 struct sched_param sp; 1345 1345 1346 - sp.sched_priority = RCU_KTHREAD_PRIO; 1346 + sp.sched_priority = kthread_prio; 1347 1347 sched_setscheduler_nocheck(current, SCHED_FIFO, &sp); 1348 1348 } 1349 1349 ··· 1512 1512 * any flavor of RCU. 1513 1513 */ 1514 1514 #ifndef CONFIG_RCU_NOCB_CPU_ALL 1515 - int rcu_needs_cpu(int cpu, unsigned long *delta_jiffies) 1515 + int rcu_needs_cpu(unsigned long *delta_jiffies) 1516 1516 { 1517 1517 *delta_jiffies = ULONG_MAX; 1518 - return rcu_cpu_has_callbacks(cpu, NULL); 1518 + return rcu_cpu_has_callbacks(NULL); 1519 1519 } 1520 1520 #endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ 1521 1521 ··· 1523 1523 * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up 1524 1524 * after it. 1525 1525 */ 1526 - static void rcu_cleanup_after_idle(int cpu) 1526 + static void rcu_cleanup_after_idle(void) 1527 1527 { 1528 1528 } 1529 1529 ··· 1531 1531 * Do the idle-entry grace-period work, which, because CONFIG_RCU_FAST_NO_HZ=n, 1532 1532 * is nothing. 1533 1533 */ 1534 - static void rcu_prepare_for_idle(int cpu) 1534 + static void rcu_prepare_for_idle(void) 1535 1535 { 1536 1536 } 1537 1537 ··· 1624 1624 * The caller must have disabled interrupts. 1625 1625 */ 1626 1626 #ifndef CONFIG_RCU_NOCB_CPU_ALL 1627 - int rcu_needs_cpu(int cpu, unsigned long *dj) 1627 + int rcu_needs_cpu(unsigned long *dj) 1628 1628 { 1629 - struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); 1629 + struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 1630 1630 1631 1631 /* Snapshot to detect later posting of non-lazy callback. */ 1632 1632 rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted; 1633 1633 1634 1634 /* If no callbacks, RCU doesn't need the CPU. */ 1635 - if (!rcu_cpu_has_callbacks(cpu, &rdtp->all_lazy)) { 1635 + if (!rcu_cpu_has_callbacks(&rdtp->all_lazy)) { 1636 1636 *dj = ULONG_MAX; 1637 1637 return 0; 1638 1638 } ··· 1666 1666 * 1667 1667 * The caller must have disabled interrupts. 
1668 1668 */ 1669 - static void rcu_prepare_for_idle(int cpu) 1669 + static void rcu_prepare_for_idle(void) 1670 1670 { 1671 1671 #ifndef CONFIG_RCU_NOCB_CPU_ALL 1672 1672 bool needwake; 1673 1673 struct rcu_data *rdp; 1674 - struct rcu_dynticks *rdtp = &per_cpu(rcu_dynticks, cpu); 1674 + struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 1675 1675 struct rcu_node *rnp; 1676 1676 struct rcu_state *rsp; 1677 1677 int tne; ··· 1679 1679 /* Handle nohz enablement switches conservatively. */ 1680 1680 tne = ACCESS_ONCE(tick_nohz_active); 1681 1681 if (tne != rdtp->tick_nohz_enabled_snap) { 1682 - if (rcu_cpu_has_callbacks(cpu, NULL)) 1682 + if (rcu_cpu_has_callbacks(NULL)) 1683 1683 invoke_rcu_core(); /* force nohz to see update. */ 1684 1684 rdtp->tick_nohz_enabled_snap = tne; 1685 1685 return; ··· 1688 1688 return; 1689 1689 1690 1690 /* If this is a no-CBs CPU, no callbacks, just return. */ 1691 - if (rcu_is_nocb_cpu(cpu)) 1691 + if (rcu_is_nocb_cpu(smp_processor_id())) 1692 1692 return; 1693 1693 1694 1694 /* ··· 1712 1712 return; 1713 1713 rdtp->last_accelerate = jiffies; 1714 1714 for_each_rcu_flavor(rsp) { 1715 - rdp = per_cpu_ptr(rsp->rda, cpu); 1715 + rdp = this_cpu_ptr(rsp->rda); 1716 1716 if (!*rdp->nxttail[RCU_DONE_TAIL]) 1717 1717 continue; 1718 1718 rnp = rdp->mynode; ··· 1731 1731 * any grace periods that elapsed while the CPU was idle, and if any 1732 1732 * callbacks are now ready to invoke, initiate invocation. 
1733 1733 */ 1734 - static void rcu_cleanup_after_idle(int cpu) 1734 + static void rcu_cleanup_after_idle(void) 1735 1735 { 1736 1736 #ifndef CONFIG_RCU_NOCB_CPU_ALL 1737 - if (rcu_is_nocb_cpu(cpu)) 1737 + if (rcu_is_nocb_cpu(smp_processor_id())) 1738 1738 return; 1739 1739 if (rcu_try_advance_all_cbs()) 1740 1740 invoke_rcu_core(); ··· 2573 2573 rdp->nocb_leader = rdp_spawn; 2574 2574 if (rdp_last && rdp != rdp_spawn) 2575 2575 rdp_last->nocb_next_follower = rdp; 2576 - rdp_last = rdp; 2577 - rdp = rdp->nocb_next_follower; 2578 - rdp_last->nocb_next_follower = NULL; 2576 + if (rdp == rdp_spawn) { 2577 + rdp = rdp->nocb_next_follower; 2578 + } else { 2579 + rdp_last = rdp; 2580 + rdp = rdp->nocb_next_follower; 2581 + rdp_last->nocb_next_follower = NULL; 2582 + } 2579 2583 } while (rdp); 2580 2584 rdp_spawn->nocb_next_follower = rdp_old_leader; 2581 2585 } ··· 2765 2761 * to detect full-system idle states, not RCU quiescent states and grace 2766 2762 * periods. The caller must have disabled interrupts. 2767 2763 */ 2768 - static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq) 2764 + static void rcu_sysidle_enter(int irq) 2769 2765 { 2770 2766 unsigned long j; 2767 + struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 2771 2768 2772 2769 /* If there are no nohz_full= CPUs, no need to track this. */ 2773 2770 if (!tick_nohz_full_enabled()) ··· 2837 2832 * usermode execution does -not- count as idle here! The caller must 2838 2833 * have disabled interrupts. 2839 2834 */ 2840 - static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq) 2835 + static void rcu_sysidle_exit(int irq) 2841 2836 { 2837 + struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 2838 + 2842 2839 /* If there are no nohz_full= CPUs, no need to track this. 
*/ 2843 2840 if (!tick_nohz_full_enabled()) 2844 2841 return; ··· 3134 3127 3135 3128 #else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */ 3136 3129 3137 - static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq) 3130 + static void rcu_sysidle_enter(int irq) 3138 3131 { 3139 3132 } 3140 3133 3141 - static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq) 3134 + static void rcu_sysidle_exit(int irq) 3142 3135 { 3143 3136 } 3144 3137
+3 -2
kernel/rcu/update.c
··· 306 306 EXPORT_SYMBOL_GPL(rcuhead_debug_descr); 307 307 #endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */ 308 308 309 - #if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU) || defined(CONFIG_RCU_TRACE) 309 + #if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU) || defined(CONFIG_RCU_TRACE) 310 310 void do_trace_rcu_torture_read(const char *rcutorturename, struct rcu_head *rhp, 311 311 unsigned long secs, 312 312 unsigned long c_old, unsigned long c) ··· 531 531 struct rcu_head *next; 532 532 LIST_HEAD(rcu_tasks_holdouts); 533 533 534 - /* FIXME: Add housekeeping affinity. */ 534 + /* Run on housekeeping CPUs by default. Sysadm can move if desired. */ 535 + housekeeping_affine(current); 535 536 536 537 /* 537 538 * Each pass through the following loop makes one check for
+1 -1
kernel/sched/core.c
··· 2802 2802 preempt_disable(); 2803 2803 cpu = smp_processor_id(); 2804 2804 rq = cpu_rq(cpu); 2805 - rcu_note_context_switch(cpu); 2805 + rcu_note_context_switch(); 2806 2806 prev = rq->curr; 2807 2807 2808 2808 schedule_debug(prev);
+25 -17
kernel/signal.c
··· 1275 1275 local_irq_restore(*flags); 1276 1276 break; 1277 1277 } 1278 - 1278 + /* 1279 + * This sighand can be already freed and even reused, but 1280 + * we rely on SLAB_DESTROY_BY_RCU and sighand_ctor() which 1281 + * initializes ->siglock: this slab can't go away, it has 1282 + * the same object type, ->siglock can't be reinitialized. 1283 + * 1284 + * We need to ensure that tsk->sighand is still the same 1285 + * after we take the lock, we can race with de_thread() or 1286 + * __exit_signal(). In the latter case the next iteration 1287 + * must see ->sighand == NULL. 1288 + */ 1279 1289 spin_lock(&sighand->siglock); 1280 1290 if (likely(sighand == tsk->sighand)) { 1281 1291 rcu_read_unlock(); ··· 1341 1331 int error = -ESRCH; 1342 1332 struct task_struct *p; 1343 1333 1344 - rcu_read_lock(); 1345 - retry: 1346 - p = pid_task(pid, PIDTYPE_PID); 1347 - if (p) { 1348 - error = group_send_sig_info(sig, info, p); 1349 - if (unlikely(error == -ESRCH)) 1350 - /* 1351 - * The task was unhashed in between, try again. 1352 - * If it is dead, pid_task() will return NULL, 1353 - * if we race with de_thread() it will find the 1354 - * new leader. 1355 - */ 1356 - goto retry; 1357 - } 1358 - rcu_read_unlock(); 1334 + for (;;) { 1335 + rcu_read_lock(); 1336 + p = pid_task(pid, PIDTYPE_PID); 1337 + if (p) 1338 + error = group_send_sig_info(sig, info, p); 1339 + rcu_read_unlock(); 1340 + if (likely(!p || error != -ESRCH)) 1341 + return error; 1359 1342 1360 - return error; 1343 + /* 1344 + * The task was unhashed in between, try again. If it 1345 + * is dead, pid_task() will return NULL, if we race with 1346 + * de_thread() it will find the new leader. 1347 + */ 1348 + } 1361 1349 } 1362 1350 1363 1351 int kill_proc_info(int sig, struct siginfo *info, pid_t pid)
+1 -1
kernel/softirq.c
··· 656 656 * in the task stack here. 657 657 */ 658 658 __do_softirq(); 659 - rcu_note_context_switch(cpu); 659 + rcu_note_context_switch(); 660 660 local_irq_enable(); 661 661 cond_resched(); 662 662 return;
+1 -1
kernel/time/tick-sched.c
··· 585 585 last_jiffies = jiffies; 586 586 } while (read_seqretry(&jiffies_lock, seq)); 587 587 588 - if (rcu_needs_cpu(cpu, &rcu_delta_jiffies) || 588 + if (rcu_needs_cpu(&rcu_delta_jiffies) || 589 589 arch_needs_cpu() || irq_work_needs_cpu()) { 590 590 next_jiffies = last_jiffies + 1; 591 591 delta_jiffies = 1;
+1 -2
kernel/time/timer.c
··· 1377 1377 void update_process_times(int user_tick) 1378 1378 { 1379 1379 struct task_struct *p = current; 1380 - int cpu = smp_processor_id(); 1381 1380 1382 1381 /* Note: this timer irq context must be accounted for as well. */ 1383 1382 account_process_tick(p, user_tick); 1384 1383 run_local_timers(); 1385 - rcu_check_callbacks(cpu, user_tick); 1384 + rcu_check_callbacks(user_tick); 1386 1385 #ifdef CONFIG_IRQ_WORK 1387 1386 if (in_irq()) 1388 1387 irq_work_tick();
+1 -13
lib/Kconfig.debug
··· 1238 1238 RCU grace period persists, additional CPU stall warnings are 1239 1239 printed at more widely spaced intervals. 1240 1240 1241 - config RCU_CPU_STALL_VERBOSE 1242 - bool "Print additional per-task information for RCU_CPU_STALL_DETECTOR" 1243 - depends on TREE_PREEMPT_RCU 1244 - default y 1245 - help 1246 - This option causes RCU to printk detailed per-task information 1247 - for any tasks that are stalling the current RCU grace period. 1248 - 1249 - Say N if you are unsure. 1250 - 1251 - Say Y if you want to enable such checks. 1252 - 1253 1241 config RCU_CPU_STALL_INFO 1254 1242 bool "Print additional diagnostics on RCU CPU stall" 1255 - depends on (TREE_RCU || TREE_PREEMPT_RCU) && DEBUG_KERNEL 1243 + depends on (TREE_RCU || PREEMPT_RCU) && DEBUG_KERNEL 1256 1244 default n 1257 1245 help 1258 1246 For each stalled CPU that is aware of the current RCU grace
+1 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE01
··· 2 2 CONFIG_PREEMPT_NONE=n 3 3 CONFIG_PREEMPT_VOLUNTARY=n 4 4 CONFIG_PREEMPT=y 5 - #CHECK#CONFIG_TREE_PREEMPT_RCU=y 5 + #CHECK#CONFIG_PREEMPT_RCU=y 6 6 CONFIG_HZ_PERIODIC=n 7 7 CONFIG_NO_HZ_IDLE=y 8 8 CONFIG_NO_HZ_FULL=n ··· 14 14 CONFIG_RCU_NOCB_CPU_ZERO=y 15 15 CONFIG_DEBUG_LOCK_ALLOC=n 16 16 CONFIG_RCU_CPU_STALL_INFO=n 17 - CONFIG_RCU_CPU_STALL_VERBOSE=n 18 17 CONFIG_RCU_BOOST=n 19 18 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+1 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE02
··· 3 3 CONFIG_PREEMPT_NONE=n 4 4 CONFIG_PREEMPT_VOLUNTARY=n 5 5 CONFIG_PREEMPT=y 6 - #CHECK#CONFIG_TREE_PREEMPT_RCU=y 6 + #CHECK#CONFIG_PREEMPT_RCU=y 7 7 CONFIG_HZ_PERIODIC=n 8 8 CONFIG_NO_HZ_IDLE=y 9 9 CONFIG_NO_HZ_FULL=n ··· 19 19 CONFIG_DEBUG_LOCK_ALLOC=y 20 20 CONFIG_PROVE_LOCKING=n 21 21 CONFIG_RCU_CPU_STALL_INFO=n 22 - CONFIG_RCU_CPU_STALL_VERBOSE=y 23 22 CONFIG_RCU_BOOST=n 24 23 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+1 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE02-T
··· 3 3 CONFIG_PREEMPT_NONE=n 4 4 CONFIG_PREEMPT_VOLUNTARY=n 5 5 CONFIG_PREEMPT=y 6 - #CHECK#CONFIG_TREE_PREEMPT_RCU=y 6 + #CHECK#CONFIG_PREEMPT_RCU=y 7 7 CONFIG_HZ_PERIODIC=n 8 8 CONFIG_NO_HZ_IDLE=y 9 9 CONFIG_NO_HZ_FULL=n ··· 19 19 CONFIG_DEBUG_LOCK_ALLOC=y 20 20 CONFIG_PROVE_LOCKING=n 21 21 CONFIG_RCU_CPU_STALL_INFO=n 22 - CONFIG_RCU_CPU_STALL_VERBOSE=y 23 22 CONFIG_RCU_BOOST=n 24 23 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+2 -3
tools/testing/selftests/rcutorture/configs/rcu/TREE03
··· 3 3 CONFIG_PREEMPT_NONE=n 4 4 CONFIG_PREEMPT_VOLUNTARY=n 5 5 CONFIG_PREEMPT=y 6 - #CHECK#CONFIG_TREE_PREEMPT_RCU=y 6 + #CHECK#CONFIG_PREEMPT_RCU=y 7 7 CONFIG_HZ_PERIODIC=y 8 8 CONFIG_NO_HZ_IDLE=n 9 9 CONFIG_NO_HZ_FULL=n ··· 15 15 CONFIG_RCU_NOCB_CPU=n 16 16 CONFIG_DEBUG_LOCK_ALLOC=n 17 17 CONFIG_RCU_CPU_STALL_INFO=n 18 - CONFIG_RCU_CPU_STALL_VERBOSE=n 19 18 CONFIG_RCU_BOOST=y 20 - CONFIG_RCU_BOOST_PRIO=2 19 + CONFIG_RCU_KTHREAD_PRIO=2 21 20 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
-1
tools/testing/selftests/rcutorture/configs/rcu/TREE04
··· 19 19 CONFIG_RCU_NOCB_CPU=n 20 20 CONFIG_DEBUG_LOCK_ALLOC=n 21 21 CONFIG_RCU_CPU_STALL_INFO=y 22 - CONFIG_RCU_CPU_STALL_VERBOSE=y 23 22 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
-1
tools/testing/selftests/rcutorture/configs/rcu/TREE05
··· 19 19 CONFIG_PROVE_LOCKING=y 20 20 CONFIG_PROVE_RCU=y 21 21 CONFIG_RCU_CPU_STALL_INFO=n 22 - CONFIG_RCU_CPU_STALL_VERBOSE=n 23 22 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
-1
tools/testing/selftests/rcutorture/configs/rcu/TREE06
··· 20 20 CONFIG_PROVE_LOCKING=y 21 21 CONFIG_PROVE_RCU=y 22 22 CONFIG_RCU_CPU_STALL_INFO=n 23 - CONFIG_RCU_CPU_STALL_VERBOSE=n 24 23 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
-1
tools/testing/selftests/rcutorture/configs/rcu/TREE07
··· 19 19 CONFIG_RCU_NOCB_CPU=n 20 20 CONFIG_DEBUG_LOCK_ALLOC=n 21 21 CONFIG_RCU_CPU_STALL_INFO=y 22 - CONFIG_RCU_CPU_STALL_VERBOSE=n 23 22 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+1 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE08
··· 3 3 CONFIG_PREEMPT_NONE=n 4 4 CONFIG_PREEMPT_VOLUNTARY=n 5 5 CONFIG_PREEMPT=y 6 - #CHECK#CONFIG_TREE_PREEMPT_RCU=y 6 + #CHECK#CONFIG_PREEMPT_RCU=y 7 7 CONFIG_HZ_PERIODIC=n 8 8 CONFIG_NO_HZ_IDLE=y 9 9 CONFIG_NO_HZ_FULL=n ··· 21 21 CONFIG_PROVE_LOCKING=y 22 22 CONFIG_PROVE_RCU=y 23 23 CONFIG_RCU_CPU_STALL_INFO=n 24 - CONFIG_RCU_CPU_STALL_VERBOSE=n 25 24 CONFIG_RCU_BOOST=n 26 25 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+1 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE08-T
··· 3 3 CONFIG_PREEMPT_NONE=n 4 4 CONFIG_PREEMPT_VOLUNTARY=n 5 5 CONFIG_PREEMPT=y 6 - #CHECK#CONFIG_TREE_PREEMPT_RCU=y 6 + #CHECK#CONFIG_PREEMPT_RCU=y 7 7 CONFIG_HZ_PERIODIC=n 8 8 CONFIG_NO_HZ_IDLE=y 9 9 CONFIG_NO_HZ_FULL=n ··· 19 19 CONFIG_RCU_NOCB_CPU_ALL=y 20 20 CONFIG_DEBUG_LOCK_ALLOC=n 21 21 CONFIG_RCU_CPU_STALL_INFO=n 22 - CONFIG_RCU_CPU_STALL_VERBOSE=n 23 22 CONFIG_RCU_BOOST=n 24 23 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+1 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE09
··· 3 3 CONFIG_PREEMPT_NONE=n 4 4 CONFIG_PREEMPT_VOLUNTARY=n 5 5 CONFIG_PREEMPT=y 6 - #CHECK#CONFIG_TREE_PREEMPT_RCU=y 6 + #CHECK#CONFIG_PREEMPT_RCU=y 7 7 CONFIG_HZ_PERIODIC=n 8 8 CONFIG_NO_HZ_IDLE=y 9 9 CONFIG_NO_HZ_FULL=n ··· 14 14 CONFIG_RCU_NOCB_CPU=n 15 15 CONFIG_DEBUG_LOCK_ALLOC=n 16 16 CONFIG_RCU_CPU_STALL_INFO=n 17 - CONFIG_RCU_CPU_STALL_VERBOSE=n 18 17 CONFIG_RCU_BOOST=n 19 18 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
+1 -1
tools/testing/selftests/rcutorture/doc/TINY_RCU.txt
··· 34 34 CONFIG_PREEMPT_RCU 35 35 CONFIG_SMP 36 36 CONFIG_TINY_RCU 37 - CONFIG_TREE_PREEMPT_RCU 37 + CONFIG_PREEMPT_RCU 38 38 CONFIG_TREE_RCU 39 39 40 40 All forced by CONFIG_TINY_RCU.
+7 -8
tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
··· 1 1 This document gives a brief rationale for the TREE_RCU-related test 2 - cases, a group that includes TREE_PREEMPT_RCU. 2 + cases, a group that includes PREEMPT_RCU. 3 3 4 4 5 5 Kconfig Parameters: ··· 14 14 CONFIG_PREEMPT -- Do half. (First three and #8.) 15 15 CONFIG_PROVE_LOCKING -- Do all but two, covering CONFIG_PROVE_RCU and not. 16 16 CONFIG_PROVE_RCU -- Do all but one under CONFIG_PROVE_LOCKING. 17 - CONFIG_RCU_BOOST -- one of TREE_PREEMPT_RCU. 18 - CONFIG_RCU_BOOST_PRIO -- set to 2 for _BOOST testing. 19 - CONFIG_RCU_CPU_STALL_INFO -- do one with and without _VERBOSE. 20 - CONFIG_RCU_CPU_STALL_VERBOSE -- do one with and without _INFO. 17 + CONFIG_RCU_BOOST -- one of PREEMPT_RCU. 18 + CONFIG_RCU_KTHREAD_PRIO -- set to 2 for _BOOST testing. 19 + CONFIG_RCU_CPU_STALL_INFO -- Do one. 21 20 CONFIG_RCU_FANOUT -- Cover hierarchy as currently, but overlap with others. 22 21 CONFIG_RCU_FANOUT_EXACT -- Do one. 23 22 CONFIG_RCU_FANOUT_LEAF -- Do one non-default. ··· 26 27 CONFIG_RCU_NOCB_CPU_NONE -- Do one. 27 28 CONFIG_RCU_NOCB_CPU_ZERO -- Do one. 28 29 CONFIG_RCU_TRACE -- Do half. 29 - CONFIG_SMP -- Need one !SMP for TREE_PREEMPT_RCU. 30 + CONFIG_SMP -- Need one !SMP for PREEMPT_RCU. 30 31 RCU-bh: Do one with PREEMPT and one with !PREEMPT. 31 32 RCU-sched: Do one with PREEMPT but not BOOST. 32 33 ··· 76 77 77 78 CONFIG_RCU_STALL_COMMON 78 79 79 - Implied by TREE_RCU and TREE_PREEMPT_RCU. 80 + Implied by TREE_RCU and PREEMPT_RCU. 80 81 81 82 CONFIG_RCU_TORTURE_TEST 82 83 CONFIG_RCU_TORTURE_TEST_RUNNABLE ··· 87 88 88 89 Redundant with CONFIG_NO_HZ_FULL. 89 90 90 - CONFIG_TREE_PREEMPT_RCU 91 + CONFIG_PREEMPT_RCU 91 92 CONFIG_TREE_RCU 92 93 93 94 These are controlled by CONFIG_PREEMPT.