Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branches 'array.2015.05.27a', 'doc.2015.05.27a', 'fixes.2015.05.27a', 'hotplug.2015.05.27a', 'init.2015.05.27a', 'tiny.2015.05.27a' and 'torture.2015.05.27a' into HEAD

array.2015.05.27a: Remove all uses of RCU-protected array indexes.
doc.2015.05.27a: Documentation updates.
fixes.2015.05.27a: Miscellaneous fixes.
hotplug.2015.05.27a: CPU-hotplug updates.
init.2015.05.27a: Initialization/Kconfig updates.
tiny.2015.05.27a: Updates to Tiny RCU.
torture.2015.05.27a: Torture-testing updates.

+581 -341
+5
Documentation/RCU/rcu_dereference.txt
··· 184 184 pointer. Note that the volatile cast in rcu_dereference() 185 185 will normally prevent the compiler from knowing too much. 186 186 187 + However, please note that if the compiler knows that the 188 + pointer takes on only one of two values, a not-equal 189 + comparison will provide exactly the information that the 190 + compiler needs to deduce the value of the pointer. 191 + 187 192 o Disable any value-speculation optimizations that your compiler 188 193 might provide, especially if you are making use of feedback-based 189 194 optimizations that take data collected from prior runs. Such
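The paragraph added above warns that a not-equal comparison can hand the compiler the exact value of an RCU-protected pointer. A minimal user-space sketch of that hazard; the names (default_foo, gp, the reader functions) are illustrative, not taken from the kernel:

```c
#include <stddef.h>

struct foo { int a; };

static struct foo default_foo = { .a = 42 };

/* Suppose the compiler can prove gp is either NULL or &default_foo. */
static struct foo *gp = &default_foo;

/* After "p != NULL" the compiler may deduce p == &default_foo, so it
 * can legally emit the second form below, loading through the known
 * constant address instead of through p, which discards the address
 * dependency that rcu_dereference() ordering relies on. */
int reader_as_written(void)
{
	struct foo *p = gp;	/* stands in for rcu_dereference(gp) */

	if (p != NULL)
		return p->a;
	return -1;
}

int reader_as_compiled(void)
{
	struct foo *p = gp;

	if (p != NULL)
		return default_foo.a;	/* p no longer used for the load! */
	return -1;
}
```

Both functions return the same value single-threaded, which is exactly why the transformation is legal for the compiler yet fatal for the ordering guarantee.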
+3 -1
Documentation/RCU/whatisRCU.txt
··· 256 256 If you are going to be fetching multiple fields from the 257 257 RCU-protected structure, using the local variable is of 258 258 course preferred. Repeated rcu_dereference() calls look 259 - ugly and incur unnecessary overhead on Alpha CPUs. 259 + ugly, do not guarantee that the same pointer will be returned 260 + if an update happened while in the critical section, and incur 261 + unnecessary overhead on Alpha CPUs. 260 262 261 263 Note that the value returned by rcu_dereference() is valid 262 264 only within the enclosing RCU read-side critical section.
+30 -3
Documentation/kernel-parameters.txt
··· 2992 2992 Set maximum number of finished RCU callbacks to 2993 2993 process in one batch. 2994 2994 2995 + rcutree.dump_tree= [KNL] 2996 + Dump the structure of the rcu_node combining tree 2997 + out at early boot. This is used for diagnostic 2998 + purposes, to verify correct tree setup. 2999 + 3000 + rcutree.gp_cleanup_delay= [KNL] 3001 + Set the number of jiffies to delay each step of 3002 + RCU grace-period cleanup. This only has effect 3003 + when CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP is set. 3004 + 2995 3005 rcutree.gp_init_delay= [KNL] 2996 3006 Set the number of jiffies to delay each step of 2997 3007 RCU grace-period initialization. This only has 2998 - effect when CONFIG_RCU_TORTURE_TEST_SLOW_INIT is 2999 - set. 3008 + effect when CONFIG_RCU_TORTURE_TEST_SLOW_INIT 3009 + is set. 3010 + 3011 + rcutree.gp_preinit_delay= [KNL] 3012 + Set the number of jiffies to delay each step of 3013 + RCU grace-period pre-initialization, that is, 3014 + the propagation of recent CPU-hotplug changes up 3015 + the rcu_node combining tree. This only has effect 3016 + when CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT is set. 3017 + 3018 + rcutree.rcu_fanout_exact= [KNL] 3019 + Disable autobalancing of the rcu_node combining 3020 + tree. This is used by rcutorture, and might 3021 + possibly be useful for architectures having high 3022 + cache-to-cache transfer latencies. 3000 3023 3001 3024 rcutree.rcu_fanout_leaf= [KNL] 3002 3025 Increase the number of CPUs assigned to each ··· 3124 3101 test, hence the "fake". 3125 3102 3126 3103 rcutorture.nreaders= [KNL] 3127 - Set number of RCU readers. 3104 + Set number of RCU readers. The value -1 selects 3105 + N-1, where N is the number of CPUs. A value 3106 + "n" less than -1 selects N-n-2, where N is again 3107 + the number of CPUs. For example, -2 selects N 3108 + (the number of CPUs), -3 selects N+1, and so on. 3128 3109 3129 3110 rcutorture.object_debug= [KNL] 3130 3111 Enable debug-object double-call_rcu() testing.
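The negative-value convention documented for rcutorture.nreaders can be captured in a few lines. This is an illustrative user-space re-implementation of the mapping (the kernel itself computes num_online_cpus() - 2 - nreaders and clamps the result to at least one reader):

```c
/* Map the rcutorture.nreaders module parameter to the real reader
 * count: nreaders >= 0 is used as-is; nreaders == -1 gives N-1;
 * in general nreaders == n < 0 gives N - n - 2, clamped to >= 1. */
static int nrealreaders(int nreaders, int ncpus)
{
	int n;

	if (nreaders >= 0)
		return nreaders;
	n = ncpus - 2 - nreaders;	/* -1 -> N-1, -2 -> N, -3 -> N+1 */
	return n > 0 ? n : 1;
}
```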
+36 -26
Documentation/memory-barriers.txt
··· 617 617 However, stores are not speculated. This means that ordering -is- provided 618 618 for load-store control dependencies, as in the following example: 619 619 620 - q = ACCESS_ONCE(a); 620 + q = READ_ONCE_CTRL(a); 621 621 if (q) { 622 622 ACCESS_ONCE(b) = p; 623 623 } 624 624 625 - Control dependencies pair normally with other types of barriers. 626 - That said, please note that ACCESS_ONCE() is not optional! Without the 627 - ACCESS_ONCE(), might combine the load from 'a' with other loads from 628 - 'a', and the store to 'b' with other stores to 'b', with possible highly 629 - counterintuitive effects on ordering. 625 + Control dependencies pair normally with other types of barriers. That 626 + said, please note that READ_ONCE_CTRL() is not optional! Without the 627 + READ_ONCE_CTRL(), the compiler might combine the load from 'a' with 628 + other loads from 'a', and the store to 'b' with other stores to 'b', 629 + with possible highly counterintuitive effects on ordering. 630 630 631 631 Worse yet, if the compiler is able to prove (say) that the value of 632 632 variable 'a' is always non-zero, it would be well within its rights ··· 636 636 q = a; 637 637 b = p; /* BUG: Compiler and CPU can both reorder!!! */ 638 638 639 - So don't leave out the ACCESS_ONCE(). 639 + Finally, the READ_ONCE_CTRL() includes an smp_read_barrier_depends() 640 + that DEC Alpha needs in order to respect control dependencies. 641 + 642 + So don't leave out the READ_ONCE_CTRL(). 640 643 641 644 It is tempting to try to enforce ordering on identical stores on both 642 645 branches of the "if" statement as follows: 643 646 644 - q = ACCESS_ONCE(a); 647 + q = READ_ONCE_CTRL(a); 645 648 if (q) { 646 649 barrier(); 647 650 ACCESS_ONCE(b) = p; ··· 658 655 Unfortunately, current compilers will transform this as follows at high 659 656 optimization levels: 660 657 661 - q = ACCESS_ONCE(a); 658 + q = READ_ONCE_CTRL(a); 662 659 barrier(); 663 660 ACCESS_ONCE(b) = p; /* BUG: No ordering vs. 
load from a!!! */ 664 661 if (q) { ··· 688 685 In contrast, without explicit memory barriers, two-legged-if control 689 686 ordering is guaranteed only when the stores differ, for example: 690 687 691 - q = ACCESS_ONCE(a); 688 + q = READ_ONCE_CTRL(a); 692 689 if (q) { 693 690 ACCESS_ONCE(b) = p; 694 691 do_something(); ··· 697 694 do_something_else(); 698 695 } 699 696 700 - The initial ACCESS_ONCE() is still required to prevent the compiler from 701 - proving the value of 'a'. 697 + The initial READ_ONCE_CTRL() is still required to prevent the compiler 698 + from proving the value of 'a'. 702 699 703 700 In addition, you need to be careful what you do with the local variable 'q', 704 701 otherwise the compiler might be able to guess the value and again remove 705 702 the needed conditional. For example: 706 703 707 - q = ACCESS_ONCE(a); 704 + q = READ_ONCE_CTRL(a); 708 705 if (q % MAX) { 709 706 ACCESS_ONCE(b) = p; 710 707 do_something(); ··· 717 714 equal to zero, in which case the compiler is within its rights to 718 715 transform the above code into the following: 719 716 720 - q = ACCESS_ONCE(a); 717 + q = READ_ONCE_CTRL(a); 721 718 ACCESS_ONCE(b) = p; 722 719 do_something_else(); 723 720 ··· 728 725 relying on this ordering, you should make sure that MAX is greater than 729 726 one, perhaps as follows: 730 727 731 - q = ACCESS_ONCE(a); 728 + q = READ_ONCE_CTRL(a); 732 729 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */ 733 730 if (q % MAX) { 734 731 ACCESS_ONCE(b) = p; ··· 745 742 You must also be careful not to rely too much on boolean short-circuit 746 743 evaluation. 
Consider this example: 747 744 748 - q = ACCESS_ONCE(a); 745 + q = READ_ONCE_CTRL(a); 749 746 if (a || 1 > 0) 750 747 ACCESS_ONCE(b) = 1; 751 748 752 - Because the second condition is always true, the compiler can transform 753 - this example as following, defeating control dependency: 749 + Because the first condition cannot fault and the second condition is 750 + always true, the compiler can transform this example as following, 751 + defeating control dependency: 754 752 755 - q = ACCESS_ONCE(a); 753 + q = READ_ONCE_CTRL(a); 756 754 ACCESS_ONCE(b) = 1; 757 755 758 756 This example underscores the need to ensure that the compiler cannot ··· 766 762 x and y both being zero: 767 763 768 764 CPU 0 CPU 1 769 - ===================== ===================== 770 - r1 = ACCESS_ONCE(x); r2 = ACCESS_ONCE(y); 765 + ======================= ======================= 766 + r1 = READ_ONCE_CTRL(x); r2 = READ_ONCE_CTRL(y); 771 767 if (r1 > 0) if (r2 > 0) 772 768 ACCESS_ONCE(y) = 1; ACCESS_ONCE(x) = 1; 773 769 ··· 787 783 assertion can fail after the combined three-CPU example completes. If you 788 784 need the three-CPU example to provide ordering, you will need smp_mb() 789 785 between the loads and stores in the CPU 0 and CPU 1 code fragments, 790 - that is, just before or just after the "if" statements. 786 + that is, just before or just after the "if" statements. Furthermore, 787 + the original two-CPU example is very fragile and should be avoided. 791 788 792 789 These two examples are the LB and WWC litmus tests from this paper: 793 790 http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this 794 791 site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html. 795 792 796 793 In summary: 794 + 795 + (*) Control dependencies must be headed by READ_ONCE_CTRL(). 
796 + Or, as a much less preferable alternative, they can be 797 + headed by a READ_ONCE() or an ACCESS_ONCE() read, with 798 + smp_read_barrier_depends() interposed between this read 799 + and the control-dependent write. 797 800 798 801 (*) Control dependencies can order prior loads against later stores. 799 802 However, they do -not- guarantee any other sort of ordering: ··· 1795 1784 1796 1785 Memory operations issued before the ACQUIRE may be completed after 1797 1786 the ACQUIRE operation has completed. An smp_mb__before_spinlock(), 1798 - combined with a following ACQUIRE, orders prior loads against 1799 - subsequent loads and stores and also orders prior stores against 1800 - subsequent stores. Note that this is weaker than smp_mb()! The 1801 - smp_mb__before_spinlock() primitive is free on many architectures. 1787 + combined with a following ACQUIRE, orders prior stores against 1788 + subsequent loads and stores. Note that this is weaker than smp_mb()! 1789 + The smp_mb__before_spinlock() primitive is free on many architectures. 1802 1790 1803 1792 (2) RELEASE operation implication:
+1
arch/powerpc/include/asm/barrier.h
··· 89 89 90 90 #define smp_mb__before_atomic() smp_mb() 91 91 #define smp_mb__after_atomic() smp_mb() 92 + #define smp_mb__before_spinlock() smp_mb() 92 93 93 94 #endif /* _ASM_POWERPC_BARRIER_H */
+16
include/linux/compiler.h
··· 252 252 #define WRITE_ONCE(x, val) \ 253 253 ({ typeof(x) __val = (val); __write_once_size(&(x), &__val, sizeof(__val)); __val; }) 254 254 255 + /** 256 + * READ_ONCE_CTRL - Read a value heading a control dependency 257 + * @x: The value to be read, heading the control dependency 258 + * 259 + * Control dependencies are tricky. See Documentation/memory-barriers.txt 260 + * for important information on how to use them. Note that in many cases, 261 + * use of smp_load_acquire() will be much simpler. Control dependencies 262 + * should be avoided except on the hottest of hotpaths. 263 + */ 264 + #define READ_ONCE_CTRL(x) \ 265 + ({ \ 266 + typeof(x) __val = READ_ONCE(x); \ 267 + smp_read_barrier_depends(); /* Enforce control dependency. */ \ 268 + __val; \ 269 + }) 270 + 255 271 #endif /* __KERNEL__ */ 256 272 257 273 #endif /* __ASSEMBLY__ */
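A loose user-space analogue of the READ_ONCE_CTRL() pattern, sketched with C11 atomics under the assumption that memory_order_consume stands in for READ_ONCE() plus smp_read_barrier_depends() (C11 has no exact equivalent of the kernel's control-dependency guarantee; this only illustrates the shape of the idiom):

```c
#include <stdatomic.h>

static _Atomic int a;
static _Atomic int b;

/* The store to b is ordered after the load from a only because it is
 * control-dependent on the loaded value q; the consume load plays the
 * role of READ_ONCE_CTRL(), including the DEC Alpha barrier. */
void ctrl_dep_writer(int p)
{
	int q = atomic_load_explicit(&a, memory_order_consume); /* ~READ_ONCE_CTRL(a) */

	if (q)	/* the control dependency heads the store */
		atomic_store_explicit(&b, p, memory_order_relaxed); /* ~ACCESS_ONCE(b) = p */
}
```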
+2 -2
include/linux/rculist.h
··· 549 549 */ 550 550 #define hlist_for_each_entry_from_rcu(pos, member) \ 551 551 for (; pos; \ 552 - pos = hlist_entry_safe(rcu_dereference((pos)->member.next),\ 553 - typeof(*(pos)), member)) 552 + pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu( \ 553 + &(pos)->member)), typeof(*(pos)), member)) 554 554 555 555 #endif /* __KERNEL__ */ 556 556 #endif
+2 -6
include/linux/rcupdate.h
··· 292 292 void rcu_bh_qs(void); 293 293 void rcu_check_callbacks(int user); 294 294 struct notifier_block; 295 - void rcu_idle_enter(void); 296 - void rcu_idle_exit(void); 297 - void rcu_irq_enter(void); 298 - void rcu_irq_exit(void); 299 295 int rcu_cpu_notify(struct notifier_block *self, 300 296 unsigned long action, void *hcpu); 301 297 ··· 1099 1103 #define kfree_rcu(ptr, rcu_head) \ 1100 1104 __kfree_rcu(&((ptr)->rcu_head), offsetof(typeof(*(ptr)), rcu_head)) 1101 1105 1102 - #if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL) 1106 + #ifdef CONFIG_TINY_RCU 1103 1107 static inline int rcu_needs_cpu(unsigned long *delta_jiffies) 1104 1108 { 1105 1109 *delta_jiffies = ULONG_MAX; 1106 1110 return 0; 1107 1111 } 1108 - #endif /* #if defined(CONFIG_TINY_RCU) || defined(CONFIG_RCU_NOCB_CPU_ALL) */ 1112 + #endif /* #ifdef CONFIG_TINY_RCU */ 1109 1113 1110 1114 #if defined(CONFIG_RCU_NOCB_CPU_ALL) 1111 1115 static inline bool rcu_is_nocb_cpu(int cpu) { return true; }
+16
include/linux/rcutiny.h
··· 159 159 { 160 160 } 161 161 162 + static inline void rcu_idle_enter(void) 163 + { 164 + } 165 + 166 + static inline void rcu_idle_exit(void) 167 + { 168 + } 169 + 170 + static inline void rcu_irq_enter(void) 171 + { 172 + } 173 + 174 + static inline void rcu_irq_exit(void) 175 + { 176 + } 177 + 162 178 static inline void exit_rcu(void) 163 179 { 164 180 }
+5 -2
include/linux/rcutree.h
··· 31 31 #define __LINUX_RCUTREE_H 32 32 33 33 void rcu_note_context_switch(void); 34 - #ifndef CONFIG_RCU_NOCB_CPU_ALL 35 34 int rcu_needs_cpu(unsigned long *delta_jiffies); 36 - #endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ 37 35 void rcu_cpu_stall_reset(void); 38 36 39 37 /* ··· 90 92 void rcu_force_quiescent_state(void); 91 93 void rcu_bh_force_quiescent_state(void); 92 94 void rcu_sched_force_quiescent_state(void); 95 + 96 + void rcu_idle_enter(void); 97 + void rcu_idle_exit(void); 98 + void rcu_irq_enter(void); 99 + void rcu_irq_exit(void); 93 100 94 101 void exit_rcu(void); 95 102
+1 -1
include/linux/spinlock.h
··· 120 120 /* 121 121 * Despite its name it doesn't necessarily has to be a full barrier. 122 122 * It should only guarantee that a STORE before the critical section 123 - * can not be reordered with a LOAD inside this section. 123 + * can not be reordered with LOADs and STOREs inside this section. 124 124 * spin_lock() is the one-way barrier, this LOAD can not escape out 125 125 * of the region. So the default implementation simply ensures that 126 126 * a STORE can not move into the critical section, smp_wmb() should
+29 -43
init/Kconfig
··· 465 465 466 466 menu "RCU Subsystem" 467 467 468 - choice 469 - prompt "RCU Implementation" 470 - default TREE_RCU 471 - 472 468 config TREE_RCU 473 - bool "Tree-based hierarchical RCU" 474 - depends on !PREEMPT && SMP 469 + bool 470 + default y if !PREEMPT && SMP 475 471 help 476 472 This option selects the RCU implementation that is 477 473 designed for very large SMP system with hundreds or ··· 475 479 smaller systems. 476 480 477 481 config PREEMPT_RCU 478 - bool "Preemptible tree-based hierarchical RCU" 479 - depends on PREEMPT 482 + bool 483 + default y if PREEMPT 480 484 help 481 485 This option selects the RCU implementation that is 482 486 designed for very large SMP systems with hundreds or ··· 487 491 Select this option if you are unsure. 488 492 489 493 config TINY_RCU 490 - bool "UP-only small-memory-footprint RCU" 491 - depends on !PREEMPT && !SMP 494 + bool 495 + default y if !PREEMPT && !SMP 492 496 help 493 497 This option selects the RCU implementation that is 494 498 designed for UP systems from which real-time response 495 499 is not required. This option greatly reduces the 496 500 memory footprint of RCU. 497 501 498 - endchoice 502 + config RCU_EXPERT 503 + bool "Make expert-level adjustments to RCU configuration" 504 + default n 505 + help 506 + This option needs to be enabled if you wish to make 507 + expert-level adjustments to RCU configuration. By default, 508 + no such adjustments can be made, which has the often-beneficial 509 + side-effect of preventing "make oldconfig" from asking you all 510 + sorts of detailed questions about how you would like numerous 511 + obscure RCU options to be set up. 512 + 513 + Say Y if you need to make expert-level adjustments to RCU. 514 + 515 + Say N if you are unsure. 499 516 500 517 config SRCU 501 518 bool ··· 518 509 sections. 
519 510 520 511 config TASKS_RCU 521 - bool "Task_based RCU implementation using voluntary context switch" 512 + bool 522 513 default n 523 514 select SRCU 524 515 help 525 516 This option enables a task-based RCU implementation that uses 526 517 only voluntary context switch (not preemption!), idle, and 527 518 user-mode execution as quiescent states. 528 - 529 - If unsure, say N. 530 519 531 520 config RCU_STALL_COMMON 532 521 def_bool ( TREE_RCU || PREEMPT_RCU || RCU_TRACE ) ··· 538 531 bool 539 532 540 533 config RCU_USER_QS 541 - bool "Consider userspace as in RCU extended quiescent state" 542 - depends on HAVE_CONTEXT_TRACKING && SMP 543 - select CONTEXT_TRACKING 534 + bool 544 535 help 545 536 This option sets hooks on kernel / userspace boundaries and 546 537 puts RCU in extended quiescent state when the CPU runs in 547 538 userspace. It means that when a CPU runs in userspace, it is 548 539 excluded from the global RCU state machine and thus doesn't 549 540 try to keep the timer tick on for RCU. 550 - 551 - Unless you want to hack and help the development of the full 552 - dynticks mode, you shouldn't enable this option. It also 553 - adds unnecessary overhead. 
554 - 555 - If unsure say N 556 541 557 542 config CONTEXT_TRACKING_FORCE 558 543 bool "Force context tracking" ··· 577 578 int "Tree-based hierarchical RCU fanout value" 578 579 range 2 64 if 64BIT 579 580 range 2 32 if !64BIT 580 - depends on TREE_RCU || PREEMPT_RCU 581 + depends on (TREE_RCU || PREEMPT_RCU) && RCU_EXPERT 581 582 default 64 if 64BIT 582 583 default 32 if !64BIT 583 584 help ··· 595 596 596 597 config RCU_FANOUT_LEAF 597 598 int "Tree-based hierarchical RCU leaf-level fanout value" 598 - range 2 RCU_FANOUT if 64BIT 599 - range 2 RCU_FANOUT if !64BIT 600 - depends on TREE_RCU || PREEMPT_RCU 599 + range 2 64 if 64BIT 600 + range 2 32 if !64BIT 601 + depends on (TREE_RCU || PREEMPT_RCU) && RCU_EXPERT 601 602 default 16 602 603 help 603 604 This option controls the leaf-level fanout of hierarchical ··· 620 621 621 622 Take the default if unsure. 622 623 623 - config RCU_FANOUT_EXACT 624 - bool "Disable tree-based hierarchical RCU auto-balancing" 625 - depends on TREE_RCU || PREEMPT_RCU 626 - default n 627 - help 628 - This option forces use of the exact RCU_FANOUT value specified, 629 - regardless of imbalances in the hierarchy. This is useful for 630 - testing RCU itself, and might one day be useful on systems with 631 - strong NUMA behavior. 632 - 633 - Without RCU_FANOUT_EXACT, the code will balance the hierarchy. 634 - 635 - Say N if unsure. 
636 - 637 624 config RCU_FAST_NO_HZ 638 625 bool "Accelerate last non-dyntick-idle CPU's grace periods" 639 - depends on NO_HZ_COMMON && SMP 626 + depends on NO_HZ_COMMON && SMP && RCU_EXPERT 640 627 default n 641 628 help 642 629 This option permits CPUs to enter dynticks-idle state even if ··· 648 663 649 664 config RCU_BOOST 650 665 bool "Enable RCU priority boosting" 651 - depends on RT_MUTEXES && PREEMPT_RCU 666 + depends on RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT 652 667 default n 653 668 help 654 669 This option boosts the priority of preempted RCU readers that ··· 665 680 range 0 99 if !RCU_BOOST 666 681 default 1 if RCU_BOOST 667 682 default 0 if !RCU_BOOST 683 + depends on RCU_EXPERT 668 684 help 669 685 This option specifies the SCHED_FIFO priority value that will be 670 686 assigned to the rcuc/n and rcub/n threads and is also the value
+2 -2
kernel/cpu.c
··· 398 398 err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu)); 399 399 if (err) { 400 400 /* CPU didn't die: tell everyone. Can't complain. */ 401 - smpboot_unpark_threads(cpu); 402 401 cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu); 403 402 goto out_release; 404 403 } ··· 462 463 463 464 switch (action & ~CPU_TASKS_FROZEN) { 464 465 466 + case CPU_DOWN_FAILED: 465 467 case CPU_ONLINE: 466 468 smpboot_unpark_threads(cpu); 467 469 break; ··· 479 479 .priority = CPU_PRI_SMPBOOT, 480 480 }; 481 481 482 - void __cpuinit smpboot_thread_init(void) 482 + void smpboot_thread_init(void) 483 483 { 484 484 register_cpu_notifier(&smpboot_thread_notifier); 485 485 }
+1 -1
kernel/events/ring_buffer.c
··· 141 141 perf_output_get_handle(handle); 142 142 143 143 do { 144 - tail = ACCESS_ONCE(rb->user_page->data_tail); 144 + tail = READ_ONCE_CTRL(rb->user_page->data_tail); 145 145 offset = head = local_read(&rb->head); 146 146 if (!rb->overwrite && 147 147 unlikely(CIRC_SPACE(head, tail, perf_data_size(rb)) < size))
+7 -7
kernel/locking/locktorture.c
··· 122 122 123 123 static void torture_lock_busted_write_delay(struct torture_random_state *trsp) 124 124 { 125 - const unsigned long longdelay_us = 100; 125 + const unsigned long longdelay_ms = 100; 126 126 127 127 /* We want a long delay occasionally to force massive contention. */ 128 128 if (!(torture_random(trsp) % 129 - (cxt.nrealwriters_stress * 2000 * longdelay_us))) 130 - mdelay(longdelay_us); 129 + (cxt.nrealwriters_stress * 2000 * longdelay_ms))) 130 + mdelay(longdelay_ms); 131 131 #ifdef CONFIG_PREEMPT 132 132 if (!(torture_random(trsp) % (cxt.nrealwriters_stress * 20000))) 133 133 preempt_schedule(); /* Allow test to be preempted. */ ··· 160 160 static void torture_spin_lock_write_delay(struct torture_random_state *trsp) 161 161 { 162 162 const unsigned long shortdelay_us = 2; 163 - const unsigned long longdelay_us = 100; 163 + const unsigned long longdelay_ms = 100; 164 164 165 165 /* We want a short delay mostly to emulate likely code, and 166 166 * we want a long delay occasionally to force massive contention. 167 167 */ 168 168 if (!(torture_random(trsp) % 169 - (cxt.nrealwriters_stress * 2000 * longdelay_us))) 170 - mdelay(longdelay_us); 169 + (cxt.nrealwriters_stress * 2000 * longdelay_ms))) 170 + mdelay(longdelay_ms); 171 171 if (!(torture_random(trsp) % 172 172 (cxt.nrealwriters_stress * 2 * shortdelay_us))) 173 173 udelay(shortdelay_us); ··· 309 309 static void torture_rwlock_read_unlock_irq(void) 310 310 __releases(torture_rwlock) 311 311 { 312 - write_unlock_irqrestore(&torture_rwlock, cxt.cur_ops->flags); 312 + read_unlock_irqrestore(&torture_rwlock, cxt.cur_ops->flags); 313 313 } 314 314 315 315 static struct lock_torture_ops rw_lock_irq_ops = {
+72 -31
kernel/rcu/rcutorture.c
··· 241 241 struct rcu_torture_ops { 242 242 int ttype; 243 243 void (*init)(void); 244 + void (*cleanup)(void); 244 245 int (*readlock)(void); 245 246 void (*read_delay)(struct torture_random_state *rrsp); 246 247 void (*readunlock)(int idx); ··· 478 477 */ 479 478 480 479 DEFINE_STATIC_SRCU(srcu_ctl); 480 + static struct srcu_struct srcu_ctld; 481 + static struct srcu_struct *srcu_ctlp = &srcu_ctl; 481 482 482 - static int srcu_torture_read_lock(void) __acquires(&srcu_ctl) 483 + static int srcu_torture_read_lock(void) __acquires(srcu_ctlp) 483 484 { 484 - return srcu_read_lock(&srcu_ctl); 485 + return srcu_read_lock(srcu_ctlp); 485 486 } 486 487 487 488 static void srcu_read_delay(struct torture_random_state *rrsp) ··· 502 499 rcu_read_delay(rrsp); 503 500 } 504 501 505 - static void srcu_torture_read_unlock(int idx) __releases(&srcu_ctl) 502 + static void srcu_torture_read_unlock(int idx) __releases(srcu_ctlp) 506 503 { 507 - srcu_read_unlock(&srcu_ctl, idx); 504 + srcu_read_unlock(srcu_ctlp, idx); 508 505 } 509 506 510 507 static unsigned long srcu_torture_completed(void) 511 508 { 512 - return srcu_batches_completed(&srcu_ctl); 509 + return srcu_batches_completed(srcu_ctlp); 513 510 } 514 511 515 512 static void srcu_torture_deferred_free(struct rcu_torture *rp) 516 513 { 517 - call_srcu(&srcu_ctl, &rp->rtort_rcu, rcu_torture_cb); 514 + call_srcu(srcu_ctlp, &rp->rtort_rcu, rcu_torture_cb); 518 515 } 519 516 520 517 static void srcu_torture_synchronize(void) 521 518 { 522 - synchronize_srcu(&srcu_ctl); 519 + synchronize_srcu(srcu_ctlp); 523 520 } 524 521 525 522 static void srcu_torture_call(struct rcu_head *head, 526 523 void (*func)(struct rcu_head *head)) 527 524 { 528 - call_srcu(&srcu_ctl, head, func); 525 + call_srcu(srcu_ctlp, head, func); 529 526 } 530 527 531 528 static void srcu_torture_barrier(void) 532 529 { 533 - srcu_barrier(&srcu_ctl); 530 + srcu_barrier(srcu_ctlp); 534 531 } 535 532 536 533 static void srcu_torture_stats(void) 537 534 { 538 535 
int cpu; 539 - int idx = srcu_ctl.completed & 0x1; 536 + int idx = srcu_ctlp->completed & 0x1; 540 537 541 538 pr_alert("%s%s per-CPU(idx=%d):", 542 539 torture_type, TORTURE_FLAG, idx); 543 540 for_each_possible_cpu(cpu) { 544 541 long c0, c1; 545 542 546 - c0 = (long)per_cpu_ptr(srcu_ctl.per_cpu_ref, cpu)->c[!idx]; 547 - c1 = (long)per_cpu_ptr(srcu_ctl.per_cpu_ref, cpu)->c[idx]; 543 + c0 = (long)per_cpu_ptr(srcu_ctlp->per_cpu_ref, cpu)->c[!idx]; 544 + c1 = (long)per_cpu_ptr(srcu_ctlp->per_cpu_ref, cpu)->c[idx]; 548 545 pr_cont(" %d(%ld,%ld)", cpu, c0, c1); 549 546 } 550 547 pr_cont("\n"); ··· 552 549 553 550 static void srcu_torture_synchronize_expedited(void) 554 551 { 555 - synchronize_srcu_expedited(&srcu_ctl); 552 + synchronize_srcu_expedited(srcu_ctlp); 556 553 } 557 554 558 555 static struct rcu_torture_ops srcu_ops = { ··· 570 567 .cb_barrier = srcu_torture_barrier, 571 568 .stats = srcu_torture_stats, 572 569 .name = "srcu" 570 + }; 571 + 572 + static void srcu_torture_init(void) 573 + { 574 + rcu_sync_torture_init(); 575 + WARN_ON(init_srcu_struct(&srcu_ctld)); 576 + srcu_ctlp = &srcu_ctld; 577 + } 578 + 579 + static void srcu_torture_cleanup(void) 580 + { 581 + cleanup_srcu_struct(&srcu_ctld); 582 + srcu_ctlp = &srcu_ctl; /* In case of a later rcutorture run. */ 583 + } 584 + 585 + /* As above, but dynamically allocated. 
*/ 586 + static struct rcu_torture_ops srcud_ops = { 587 + .ttype = SRCU_FLAVOR, 588 + .init = srcu_torture_init, 589 + .cleanup = srcu_torture_cleanup, 590 + .readlock = srcu_torture_read_lock, 591 + .read_delay = srcu_read_delay, 592 + .readunlock = srcu_torture_read_unlock, 593 + .started = NULL, 594 + .completed = srcu_torture_completed, 595 + .deferred_free = srcu_torture_deferred_free, 596 + .sync = srcu_torture_synchronize, 597 + .exp_sync = srcu_torture_synchronize_expedited, 598 + .call = srcu_torture_call, 599 + .cb_barrier = srcu_torture_barrier, 600 + .stats = srcu_torture_stats, 601 + .name = "srcud" 573 602 }; 574 603 575 604 /* ··· 707 672 struct rcu_boost_inflight *rbip = 708 673 container_of(head, struct rcu_boost_inflight, rcu); 709 674 710 - smp_mb(); /* Ensure RCU-core accesses precede clearing ->inflight */ 711 - rbip->inflight = 0; 675 + /* Ensure RCU-core accesses precede clearing ->inflight */ 676 + smp_store_release(&rbip->inflight, 0); 712 677 } 713 678 714 679 static int rcu_torture_boost(void *arg) ··· 745 710 call_rcu_time = jiffies; 746 711 while (ULONG_CMP_LT(jiffies, endtime)) { 747 712 /* If we don't have a callback in flight, post one. */ 748 - if (!rbi.inflight) { 749 - smp_mb(); /* RCU core before ->inflight = 1. */ 750 - rbi.inflight = 1; 713 + if (!smp_load_acquire(&rbi.inflight)) { 714 + /* RCU core before ->inflight = 1. */ 715 + smp_store_release(&rbi.inflight, 1); 751 716 call_rcu(&rbi.rcu, rcu_torture_boost_cb); 752 717 if (jiffies - call_rcu_time > 753 718 test_boost_duration * HZ - HZ / 2) { ··· 786 751 } while (!torture_must_stop()); 787 752 788 753 /* Clean up and exit. */ 789 - while (!kthread_should_stop() || rbi.inflight) { 754 + while (!kthread_should_stop() || smp_load_acquire(&rbi.inflight)) { 790 755 torture_shutdown_absorb("rcu_torture_boost"); 791 756 schedule_timeout_uninterruptible(1); 792 757 } 793 - smp_mb(); /* order accesses to ->inflight before stack-frame death. 
*/ 794 758 destroy_rcu_head_on_stack(&rbi.rcu); 795 759 torture_kthread_stopping("rcu_torture_boost"); 796 760 return 0; ··· 1088 1054 p = rcu_dereference_check(rcu_torture_current, 1089 1055 rcu_read_lock_bh_held() || 1090 1056 rcu_read_lock_sched_held() || 1091 - srcu_read_lock_held(&srcu_ctl)); 1057 + srcu_read_lock_held(srcu_ctlp)); 1092 1058 if (p == NULL) { 1093 1059 /* Leave because rcu_torture_writer is not yet underway */ 1094 1060 cur_ops->readunlock(idx); ··· 1162 1128 p = rcu_dereference_check(rcu_torture_current, 1163 1129 rcu_read_lock_bh_held() || 1164 1130 rcu_read_lock_sched_held() || 1165 - srcu_read_lock_held(&srcu_ctl)); 1131 + srcu_read_lock_held(srcu_ctlp)); 1166 1132 if (p == NULL) { 1167 1133 /* Wait for rcu_torture_writer to get underway */ 1168 1134 cur_ops->readunlock(idx); ··· 1447 1413 do { 1448 1414 wait_event(barrier_cbs_wq[myid], 1449 1415 (newphase = 1450 - READ_ONCE(barrier_phase)) != lastphase || 1416 + smp_load_acquire(&barrier_phase)) != lastphase || 1451 1417 torture_must_stop()); 1452 1418 lastphase = newphase; 1453 - smp_mb(); /* ensure barrier_phase load before ->call(). */ 1454 1419 if (torture_must_stop()) 1455 1420 break; 1421 + /* 1422 + * The above smp_load_acquire() ensures barrier_phase load 1423 + * is ordered before the following ->call(). 1424 + */ 1456 1425 cur_ops->call(&rcu, rcu_torture_barrier_cbf); 1457 1426 if (atomic_dec_and_test(&barrier_cbs_count)) 1458 1427 wake_up(&barrier_wq); ··· 1476 1439 do { 1477 1440 atomic_set(&barrier_cbs_invoked, 0); 1478 1441 atomic_set(&barrier_cbs_count, n_barrier_cbs); 1479 - smp_mb(); /* Ensure barrier_phase after prior assignments. 
*/ 1443 + smp_store_release(&barrier_phase, !barrier_phase); 1481 1444 for (i = 0; i < n_barrier_cbs; i++) 1482 1445 wake_up(&barrier_cbs_wq[i]); 1483 1446 wait_event(barrier_wq, ··· 1625 1588 rcutorture_booster_cleanup(i); 1626 1589 } 1627 1590 1628 - /* Wait for all RCU callbacks to fire. */ 1629 - 1591 + /* 1592 + * Wait for all RCU callbacks to fire, then do flavor-specific 1593 + * cleanup operations. 1594 + */ 1630 1595 if (cur_ops->cb_barrier != NULL) 1631 1596 cur_ops->cb_barrier(); 1597 + if (cur_ops->cleanup != NULL) 1598 + cur_ops->cleanup(); 1632 1599 1633 1600 rcu_torture_stats_print(); /* -After- the stats thread is stopped! */ 1634 1601 ··· 1709 1668 int cpu; 1710 1669 int firsterr = 0; 1711 1670 static struct rcu_torture_ops *torture_ops[] = { 1712 - &rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &sched_ops, 1713 - RCUTORTURE_TASKS_OPS 1671 + &rcu_ops, &rcu_bh_ops, &rcu_busted_ops, &srcu_ops, &srcud_ops, 1672 + &sched_ops, RCUTORTURE_TASKS_OPS 1714 1673 }; 1715 1674 1716 1675 if (!torture_init_begin(torture_type, verbose, &torture_runnable)) ··· 1742 1701 if (nreaders >= 0) { 1743 1702 nrealreaders = nreaders; 1744 1703 } else { 1745 - nrealreaders = num_online_cpus() - 1; 1704 + nrealreaders = num_online_cpus() - 2 - nreaders; 1746 1705 if (nrealreaders <= 0) 1747 1706 nrealreaders = 1; 1748 1707 }
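The rcutorture changes above replace full smp_mb() pairs with smp_store_release()/smp_load_acquire() on the ->inflight flag. A user-space C11 analogue of that handoff, with illustrative names (callback_done, try_reuse) rather than the kernel's:

```c
#include <stdatomic.h>

static _Atomic int inflight;
static int payload;

/* The callback publishes its result with a release store ... */
void callback_done(int result)
{
	payload = result;	/* plain write ... */
	atomic_store_explicit(&inflight, 0, memory_order_release);	/* ... published here */
}

/* ... and the poster may touch the structure again only after an
 * acquire load observes inflight == 0, pairing with the release. */
int try_reuse(void)
{
	if (atomic_load_explicit(&inflight, memory_order_acquire))
		return -1;	/* callback still in flight */
	return payload;		/* safe: ordered after the release store */
}
```

The release/acquire pair gives exactly the one-way ordering the flag needs, at lower cost than the two full barriers it replaces.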
+5 -33
kernel/rcu/tiny.c
··· 49 49 50 50 #include "tiny_plugin.h" 51 51 52 - /* 53 - * Enter idle, which is an extended quiescent state if we have fully 54 - * entered that mode. 55 - */ 56 - void rcu_idle_enter(void) 57 - { 58 - } 59 - EXPORT_SYMBOL_GPL(rcu_idle_enter); 60 - 61 - /* 62 - * Exit an interrupt handler towards idle. 63 - */ 64 - void rcu_irq_exit(void) 65 - { 66 - } 67 - EXPORT_SYMBOL_GPL(rcu_irq_exit); 68 - 69 - /* 70 - * Exit idle, so that we are no longer in an extended quiescent state. 71 - */ 72 - void rcu_idle_exit(void) 73 - { 74 - } 75 - EXPORT_SYMBOL_GPL(rcu_idle_exit); 76 - 77 - /* 78 - * Enter an interrupt handler, moving away from idle. 79 - */ 80 - void rcu_irq_enter(void) 81 - { 82 - } 83 - EXPORT_SYMBOL_GPL(rcu_irq_enter); 84 - 85 52 #if defined(CONFIG_DEBUG_LOCK_ALLOC) || defined(CONFIG_RCU_TRACE) 86 53 87 54 /* ··· 137 170 138 171 /* Move the ready-to-invoke callbacks to a local list. */ 139 172 local_irq_save(flags); 173 + if (rcp->donetail == &rcp->rcucblist) { 174 + /* No callbacks ready, so just leave. */ 175 + local_irq_restore(flags); 176 + return; 177 + } 140 178 RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1)); 141 179 list = rcp->rcucblist; 142 180 rcp->rcucblist = *rcp->donetail;
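The early exit added above tests rcp->donetail == &rcp->rcucblist, which in Tiny RCU's tail-pointer list encoding means the ready-to-invoke sublist is empty. A sketch of that encoding; the structure and helper names here are illustrative, not the kernel's:

```c
#include <stddef.h>

struct cb {
	struct cb *next;
};

/* donetail points at the ->next slot just past the last ready
 * callback, so donetail == &rcucblist encodes "nothing ready". */
struct cblist {
	struct cb *rcucblist;	/* head of the callback list */
	struct cb **donetail;	/* tail of the ready-to-invoke sublist */
	struct cb **curtail;	/* tail of the whole list */
};

void cblist_init(struct cblist *l)
{
	l->rcucblist = NULL;
	l->donetail = &l->rcucblist;
	l->curtail = &l->rcucblist;
}

void cblist_enqueue(struct cblist *l, struct cb *c)
{
	c->next = NULL;
	*l->curtail = c;
	l->curtail = &c->next;
}

/* Grace period ends: everything queued so far becomes ready. */
void cblist_end_gp(struct cblist *l)
{
	l->donetail = l->curtail;
}

/* The check behind the new early exit. */
int cblist_ready(struct cblist *l)
{
	return l->donetail != &l->rcucblist;
}
```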
+120 -61
kernel/rcu/tree.c
··· 91 91 92 92 #define RCU_STATE_INITIALIZER(sname, sabbr, cr) \ 93 93 DEFINE_RCU_TPS(sname) \ 94 - DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, sname##_data); \ 94 + static DEFINE_PER_CPU_SHARED_ALIGNED(struct rcu_data, sname##_data); \ 95 95 struct rcu_state sname##_state = { \ 96 96 .level = { &sname##_state.node[0] }, \ 97 97 .rda = &sname##_data, \ ··· 110 110 RCU_STATE_INITIALIZER(rcu_sched, 's', call_rcu_sched); 111 111 RCU_STATE_INITIALIZER(rcu_bh, 'b', call_rcu_bh); 112 112 113 - static struct rcu_state *rcu_state_p; 113 + static struct rcu_state *const rcu_state_p; 114 + static struct rcu_data __percpu *const rcu_data_p; 114 115 LIST_HEAD(rcu_struct_flavors); 115 116 116 - /* Increase (but not decrease) the CONFIG_RCU_FANOUT_LEAF at boot time. */ 117 - static int rcu_fanout_leaf = CONFIG_RCU_FANOUT_LEAF; 117 + /* Dump rcu_node combining tree at boot to verify correct setup. */ 118 + static bool dump_tree; 119 + module_param(dump_tree, bool, 0444); 120 + /* Control rcu_node-tree auto-balancing at boot time. */ 121 + static bool rcu_fanout_exact; 122 + module_param(rcu_fanout_exact, bool, 0444); 123 + /* Increase (but not decrease) the RCU_FANOUT_LEAF at boot time. */ 124 + static int rcu_fanout_leaf = RCU_FANOUT_LEAF; 118 125 module_param(rcu_fanout_leaf, int, 0444); 119 126 int rcu_num_lvls __read_mostly = RCU_NUM_LVLS; 120 127 static int num_rcu_lvl[] = { /* Number of rcu_nodes at specified level. */ ··· 166 159 static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp); 167 160 168 161 /* rcuc/rcub kthread realtime priority */ 162 + #ifdef CONFIG_RCU_KTHREAD_PRIO 169 163 static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO; 164 + #else /* #ifdef CONFIG_RCU_KTHREAD_PRIO */ 165 + static int kthread_prio = IS_ENABLED(CONFIG_RCU_BOOST) ? 1 : 0; 166 + #endif /* #else #ifdef CONFIG_RCU_KTHREAD_PRIO */ 170 167 module_param(kthread_prio, int, 0644); 171 168 172 169 /* Delay in jiffies for grace-period initialization delays, debug only. 
*/ 170 + 171 + #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT 172 + static int gp_preinit_delay = CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT_DELAY; 173 + module_param(gp_preinit_delay, int, 0644); 174 + #else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT */ 175 + static const int gp_preinit_delay; 176 + #endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT */ 177 + 178 + #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT 173 179 static int gp_init_delay = CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY; 174 180 module_param(gp_init_delay, int, 0644); 175 181 #else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */ 176 182 static const int gp_init_delay; 177 183 #endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */ 178 - #define PER_RCU_NODE_PERIOD 10 /* Number of grace periods between delays. */ 184 + 185 + #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP 186 + static int gp_cleanup_delay = CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY; 187 + module_param(gp_cleanup_delay, int, 0644); 188 + #else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP */ 189 + static const int gp_cleanup_delay; 190 + #endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP */ 191 + 192 + /* 193 + * Number of grace periods between delays, normalized by the duration of 194 + * the delay. The longer the delay, the more the grace periods between 195 + * each delay. The reason for this normalization is that it means that, 196 + * for non-zero delays, the overall slowdown of grace periods is constant 197 + * regardless of the duration of the delay. This arrangement balances 198 + * the need for long delays to increase some race probabilities with the 199 + * need for fast grace periods to increase other race probabilities. 200 + */ 201 + #define PER_RCU_NODE_PERIOD 3 /* Number of grace periods between delays. 
*/ 180 202 181 203 /* 182 204 * Track the rcutorture test sequence number and the update version ··· 621 585 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 622 586 623 587 trace_rcu_dyntick(TPS("Start"), oldval, rdtp->dynticks_nesting); 624 - if (!user && !is_idle_task(current)) { 588 + if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && 589 + !user && !is_idle_task(current)) { 625 590 struct task_struct *idle __maybe_unused = 626 591 idle_task(smp_processor_id()); 627 592 ··· 641 604 smp_mb__before_atomic(); /* See above. */ 642 605 atomic_inc(&rdtp->dynticks); 643 606 smp_mb__after_atomic(); /* Force ordering with next sojourn. */ 644 - WARN_ON_ONCE(atomic_read(&rdtp->dynticks) & 0x1); 607 + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && 608 + atomic_read(&rdtp->dynticks) & 0x1); 645 609 rcu_dynticks_task_enter(); 646 610 647 611 /* ··· 668 630 669 631 rdtp = this_cpu_ptr(&rcu_dynticks); 670 632 oldval = rdtp->dynticks_nesting; 671 - WARN_ON_ONCE((oldval & DYNTICK_TASK_NEST_MASK) == 0); 633 + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && 634 + (oldval & DYNTICK_TASK_NEST_MASK) == 0); 672 635 if ((oldval & DYNTICK_TASK_NEST_MASK) == DYNTICK_TASK_NEST_VALUE) { 673 636 rdtp->dynticks_nesting = 0; 674 637 rcu_eqs_enter_common(oldval, user); ··· 742 703 rdtp = this_cpu_ptr(&rcu_dynticks); 743 704 oldval = rdtp->dynticks_nesting; 744 705 rdtp->dynticks_nesting--; 745 - WARN_ON_ONCE(rdtp->dynticks_nesting < 0); 706 + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && 707 + rdtp->dynticks_nesting < 0); 746 708 if (rdtp->dynticks_nesting) 747 709 trace_rcu_dyntick(TPS("--="), oldval, rdtp->dynticks_nesting); 748 710 else ··· 768 728 atomic_inc(&rdtp->dynticks); 769 729 /* CPUs seeing atomic_inc() must see later RCU read-side crit sects */ 770 730 smp_mb__after_atomic(); /* See above. 
*/ 771 - WARN_ON_ONCE(!(atomic_read(&rdtp->dynticks) & 0x1)); 731 + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && 732 + !(atomic_read(&rdtp->dynticks) & 0x1)); 772 733 rcu_cleanup_after_idle(); 773 734 trace_rcu_dyntick(TPS("End"), oldval, rdtp->dynticks_nesting); 774 - if (!user && !is_idle_task(current)) { 735 + if (IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && 736 + !user && !is_idle_task(current)) { 775 737 struct task_struct *idle __maybe_unused = 776 738 idle_task(smp_processor_id()); 777 739 ··· 797 755 798 756 rdtp = this_cpu_ptr(&rcu_dynticks); 799 757 oldval = rdtp->dynticks_nesting; 800 - WARN_ON_ONCE(oldval < 0); 758 + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && oldval < 0); 801 759 if (oldval & DYNTICK_TASK_NEST_MASK) { 802 760 rdtp->dynticks_nesting += DYNTICK_TASK_NEST_VALUE; 803 761 } else { ··· 870 828 rdtp = this_cpu_ptr(&rcu_dynticks); 871 829 oldval = rdtp->dynticks_nesting; 872 830 rdtp->dynticks_nesting++; 873 - WARN_ON_ONCE(rdtp->dynticks_nesting == 0); 831 + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && 832 + rdtp->dynticks_nesting == 0); 874 833 if (oldval) 875 834 trace_rcu_dyntick(TPS("++="), oldval, rdtp->dynticks_nesting); 876 835 else ··· 1178 1135 j = jiffies; 1179 1136 gpa = READ_ONCE(rsp->gp_activity); 1180 1137 if (j - gpa > 2 * HZ) 1181 - pr_err("%s kthread starved for %ld jiffies!\n", 1182 - rsp->name, j - gpa); 1138 + pr_err("%s kthread starved for %ld jiffies! g%lu c%lu f%#x\n", 1139 + rsp->name, j - gpa, 1140 + rsp->gpnum, rsp->completed, rsp->gp_flags); 1183 1141 } 1184 1142 1185 1143 /* ··· 1776 1732 rcu_gp_kthread_wake(rsp); 1777 1733 } 1778 1734 1735 + static void rcu_gp_slow(struct rcu_state *rsp, int delay) 1736 + { 1737 + if (delay > 0 && 1738 + !(rsp->gpnum % (rcu_num_nodes * PER_RCU_NODE_PERIOD * delay))) 1739 + schedule_timeout_uninterruptible(delay); 1740 + } 1741 + 1779 1742 /* 1780 1743 * Initialize a new grace period. Return 0 if no grace period required. 
1781 1744 */ ··· 1825 1774 * will handle subsequent offline CPUs. 1826 1775 */ 1827 1776 rcu_for_each_leaf_node(rsp, rnp) { 1777 + rcu_gp_slow(rsp, gp_preinit_delay); 1828 1778 raw_spin_lock_irq(&rnp->lock); 1829 1779 smp_mb__after_unlock_lock(); 1830 1780 if (rnp->qsmaskinit == rnp->qsmaskinitnext && ··· 1882 1830 * process finishes, because this kthread handles both. 1883 1831 */ 1884 1832 rcu_for_each_node_breadth_first(rsp, rnp) { 1833 + rcu_gp_slow(rsp, gp_init_delay); 1885 1834 raw_spin_lock_irq(&rnp->lock); 1886 1835 smp_mb__after_unlock_lock(); 1887 1836 rdp = this_cpu_ptr(rsp->rda); ··· 1900 1847 raw_spin_unlock_irq(&rnp->lock); 1901 1848 cond_resched_rcu_qs(); 1902 1849 WRITE_ONCE(rsp->gp_activity, jiffies); 1903 - if (gp_init_delay > 0 && 1904 - !(rsp->gpnum % (rcu_num_nodes * PER_RCU_NODE_PERIOD))) 1905 - schedule_timeout_uninterruptible(gp_init_delay); 1906 1850 } 1907 1851 1908 1852 return 1; ··· 1994 1944 raw_spin_unlock_irq(&rnp->lock); 1995 1945 cond_resched_rcu_qs(); 1996 1946 WRITE_ONCE(rsp->gp_activity, jiffies); 1947 + rcu_gp_slow(rsp, gp_cleanup_delay); 1997 1948 } 1998 1949 rnp = rcu_get_root(rsp); 1999 1950 raw_spin_lock_irq(&rnp->lock); ··· 2189 2138 __releases(rcu_get_root(rsp)->lock) 2190 2139 { 2191 2140 WARN_ON_ONCE(!rcu_gp_in_progress(rsp)); 2141 + WRITE_ONCE(rsp->gp_flags, READ_ONCE(rsp->gp_flags) | RCU_GP_FLAG_FQS); 2192 2142 raw_spin_unlock_irqrestore(&rcu_get_root(rsp)->lock, flags); 2193 2143 rcu_gp_kthread_wake(rsp); 2194 2144 } ··· 2387 2335 rcu_report_qs_rdp(rdp->cpu, rsp, rdp); 2388 2336 } 2389 2337 2390 - #ifdef CONFIG_HOTPLUG_CPU 2391 - 2392 2338 /* 2393 2339 * Send the specified CPU's RCU callbacks to the orphanage. The 2394 2340 * specified CPU must be offline, and the caller must hold the ··· 2397 2347 struct rcu_node *rnp, struct rcu_data *rdp) 2398 2348 { 2399 2349 /* No-CBs CPUs do not have orphanable callbacks. 
*/ 2400 - if (rcu_is_nocb_cpu(rdp->cpu)) 2350 + if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) || rcu_is_nocb_cpu(rdp->cpu)) 2401 2351 return; 2402 2352 2403 2353 /* ··· 2456 2406 struct rcu_data *rdp = raw_cpu_ptr(rsp->rda); 2457 2407 2458 2408 /* No-CBs CPUs are handled specially. */ 2459 - if (rcu_nocb_adopt_orphan_cbs(rsp, rdp, flags)) 2409 + if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) || 2410 + rcu_nocb_adopt_orphan_cbs(rsp, rdp, flags)) 2460 2411 return; 2461 2412 2462 2413 /* Do the accounting first. */ ··· 2504 2453 RCU_TRACE(struct rcu_data *rdp = this_cpu_ptr(rsp->rda)); 2505 2454 RCU_TRACE(struct rcu_node *rnp = rdp->mynode); 2506 2455 2456 + if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) 2457 + return; 2458 + 2507 2459 RCU_TRACE(mask = rdp->grpmask); 2508 2460 trace_rcu_grace_period(rsp->name, 2509 2461 rnp->gpnum + 1 - !!(rnp->qsmask & mask), ··· 2535 2481 long mask; 2536 2482 struct rcu_node *rnp = rnp_leaf; 2537 2483 2538 - if (rnp->qsmaskinit || rcu_preempt_has_tasks(rnp)) 2484 + if (!IS_ENABLED(CONFIG_HOTPLUG_CPU) || 2485 + rnp->qsmaskinit || rcu_preempt_has_tasks(rnp)) 2539 2486 return; 2540 2487 for (;;) { 2541 2488 mask = rnp->grpmask; ··· 2567 2512 struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu); 2568 2513 struct rcu_node *rnp = rdp->mynode; /* Outgoing CPU's rdp & rnp. */ 2569 2514 2515 + if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) 2516 + return; 2517 + 2570 2518 /* Remove outgoing CPU from mask in the leaf rcu_node structure. */ 2571 2519 mask = rdp->grpmask; 2572 2520 raw_spin_lock_irqsave(&rnp->lock, flags); ··· 2591 2533 struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu); 2592 2534 struct rcu_node *rnp = rdp->mynode; /* Outgoing CPU's rdp & rnp. */ 2593 2535 2536 + if (!IS_ENABLED(CONFIG_HOTPLUG_CPU)) 2537 + return; 2538 + 2594 2539 /* Adjust any no-longer-needed kthreads. 
*/ 2595 2540 rcu_boost_kthread_setaffinity(rnp, -1); 2596 2541 ··· 2607 2546 "rcu_cleanup_dead_cpu: Callbacks on offline CPU %d: qlen=%lu, nxtlist=%p\n", 2608 2547 cpu, rdp->qlen, rdp->nxtlist); 2609 2548 } 2610 - 2611 - #else /* #ifdef CONFIG_HOTPLUG_CPU */ 2612 - 2613 - static void rcu_cleanup_dying_cpu(struct rcu_state *rsp) 2614 - { 2615 - } 2616 - 2617 - static void __maybe_unused rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf) 2618 - { 2619 - } 2620 - 2621 - static void rcu_cleanup_dying_idle_cpu(int cpu, struct rcu_state *rsp) 2622 - { 2623 - } 2624 - 2625 - static void rcu_cleanup_dead_cpu(int cpu, struct rcu_state *rsp) 2626 - { 2627 - } 2628 - 2629 - #endif /* #else #ifdef CONFIG_HOTPLUG_CPU */ 2630 2549 2631 2550 /* 2632 2551 * Invoke any RCU callbacks that have made it to the end of their grace ··· 2772 2731 mask = 0; 2773 2732 raw_spin_lock_irqsave(&rnp->lock, flags); 2774 2733 smp_mb__after_unlock_lock(); 2775 - if (!rcu_gp_in_progress(rsp)) { 2776 - raw_spin_unlock_irqrestore(&rnp->lock, flags); 2777 - return; 2778 - } 2779 2734 if (rnp->qsmask == 0) { 2780 2735 if (rcu_state_p == &rcu_sched_state || 2781 2736 rsp != rcu_state_p || ··· 2801 2764 bit = 1; 2802 2765 for (; cpu <= rnp->grphi; cpu++, bit <<= 1) { 2803 2766 if ((rnp->qsmask & bit) != 0) { 2804 - if ((rnp->qsmaskinit & bit) == 0) 2805 - *isidle = false; /* Pending hotplug. */ 2806 2767 if (f(per_cpu_ptr(rsp->rda, cpu), isidle, maxj)) 2807 2768 mask |= bit; 2808 2769 } ··· 3322 3287 if (ULONG_CMP_GE((ulong)atomic_long_read(&rsp->expedited_start), 3323 3288 (ulong)atomic_long_read(&rsp->expedited_done) + 3324 3289 ULONG_MAX / 8)) { 3325 - synchronize_sched(); 3290 + wait_rcu_gp(call_rcu_sched); 3326 3291 atomic_long_inc(&rsp->expedited_wrap); 3327 3292 return; 3328 3293 } ··· 3528 3493 * non-NULL, store an indication of whether all callbacks are lazy. 3529 3494 * (If there are no callbacks, all of them are deemed to be lazy.) 
3530 3495 */ 3531 - static int __maybe_unused rcu_cpu_has_callbacks(bool *all_lazy) 3496 + static bool __maybe_unused rcu_cpu_has_callbacks(bool *all_lazy) 3532 3497 { 3533 3498 bool al = true; 3534 3499 bool hc = false; ··· 3815 3780 rdp->gpnum = rnp->completed; /* Make CPU later note any new GP. */ 3816 3781 rdp->completed = rnp->completed; 3817 3782 rdp->passed_quiesce = false; 3818 - rdp->rcu_qs_ctr_snap = __this_cpu_read(rcu_qs_ctr); 3783 + rdp->rcu_qs_ctr_snap = per_cpu(rcu_qs_ctr, cpu); 3819 3784 rdp->qs_pending = false; 3820 3785 trace_rcu_grace_period(rsp->name, rdp->gpnum, TPS("cpuonl")); 3821 3786 raw_spin_unlock_irqrestore(&rnp->lock, flags); ··· 3959 3924 3960 3925 /* 3961 3926 * Compute the per-level fanout, either using the exact fanout specified 3962 - * or balancing the tree, depending on CONFIG_RCU_FANOUT_EXACT. 3927 + * or balancing the tree, depending on the rcu_fanout_exact boot parameter. 3963 3928 */ 3964 3929 static void __init rcu_init_levelspread(struct rcu_state *rsp) 3965 3930 { 3966 3931 int i; 3967 3932 3968 - if (IS_ENABLED(CONFIG_RCU_FANOUT_EXACT)) { 3933 + if (rcu_fanout_exact) { 3969 3934 rsp->levelspread[rcu_num_lvls - 1] = rcu_fanout_leaf; 3970 3935 for (i = rcu_num_lvls - 2; i >= 0; i--) 3971 - rsp->levelspread[i] = CONFIG_RCU_FANOUT; 3936 + rsp->levelspread[i] = RCU_FANOUT; 3972 3937 } else { 3973 3938 int ccur; 3974 3939 int cprv; ··· 4006 3971 4007 3972 BUILD_BUG_ON(MAX_RCU_LVLS > ARRAY_SIZE(buf)); /* Fix buf[] init! */ 4008 3973 4009 - /* Silence gcc 4.8 warning about array index out of range. */ 4010 - if (rcu_num_lvls > RCU_NUM_LVLS) 4011 - panic("rcu_init_one: rcu_num_lvls overflow"); 3974 + /* Silence gcc 4.8 false positive about array index out of range. */ 3975 + if (rcu_num_lvls <= 0 || rcu_num_lvls > RCU_NUM_LVLS) 3976 + panic("rcu_init_one: rcu_num_lvls out of range"); 4012 3977 4013 3978 /* Initialize the level-tracking arrays. 
*/ 4014 3979 ··· 4094 4059 jiffies_till_next_fqs = d; 4095 4060 4096 4061 /* If the compile-time values are accurate, just leave. */ 4097 - if (rcu_fanout_leaf == CONFIG_RCU_FANOUT_LEAF && 4062 + if (rcu_fanout_leaf == RCU_FANOUT_LEAF && 4098 4063 nr_cpu_ids == NR_CPUS) 4099 4064 return; 4100 4065 pr_info("RCU: Adjusting geometry for rcu_fanout_leaf=%d, nr_cpu_ids=%d\n", ··· 4108 4073 rcu_capacity[0] = 1; 4109 4074 rcu_capacity[1] = rcu_fanout_leaf; 4110 4075 for (i = 2; i <= MAX_RCU_LVLS; i++) 4111 - rcu_capacity[i] = rcu_capacity[i - 1] * CONFIG_RCU_FANOUT; 4076 + rcu_capacity[i] = rcu_capacity[i - 1] * RCU_FANOUT; 4112 4077 4113 4078 /* 4114 4079 * The boot-time rcu_fanout_leaf parameter is only permitted ··· 4118 4083 * the configured number of CPUs. Complain and fall back to the 4119 4084 * compile-time values if these limits are exceeded. 4120 4085 */ 4121 - if (rcu_fanout_leaf < CONFIG_RCU_FANOUT_LEAF || 4086 + if (rcu_fanout_leaf < RCU_FANOUT_LEAF || 4122 4087 rcu_fanout_leaf > sizeof(unsigned long) * 8 || 4123 4088 n > rcu_capacity[MAX_RCU_LVLS]) { 4124 4089 WARN_ON(1); ··· 4144 4109 rcu_num_nodes -= n; 4145 4110 } 4146 4111 4112 + /* 4113 + * Dump out the structure of the rcu_node combining tree associated 4114 + * with the rcu_state structure referenced by rsp. 
4115 + */ 4116 + static void __init rcu_dump_rcu_node_tree(struct rcu_state *rsp) 4117 + { 4118 + int level = 0; 4119 + struct rcu_node *rnp; 4120 + 4121 + pr_info("rcu_node tree layout dump\n"); 4122 + pr_info(" "); 4123 + rcu_for_each_node_breadth_first(rsp, rnp) { 4124 + if (rnp->level != level) { 4125 + pr_cont("\n"); 4126 + pr_info(" "); 4127 + level = rnp->level; 4128 + } 4129 + pr_cont("%d:%d ^%d ", rnp->grplo, rnp->grphi, rnp->grpnum); 4130 + } 4131 + pr_cont("\n"); 4132 + } 4133 + 4147 4134 void __init rcu_init(void) 4148 4135 { 4149 4136 int cpu; ··· 4176 4119 rcu_init_geometry(); 4177 4120 rcu_init_one(&rcu_bh_state, &rcu_bh_data); 4178 4121 rcu_init_one(&rcu_sched_state, &rcu_sched_data); 4122 + if (dump_tree) 4123 + rcu_dump_rcu_node_tree(&rcu_sched_state); 4179 4124 __rcu_init_preempt(); 4180 4125 open_softirq(RCU_SOFTIRQ, rcu_process_callbacks); 4181 4126
+26 -9
kernel/rcu/tree.h
··· 35 35 * In practice, this did work well going from three levels to four. 36 36 * Of course, your mileage may vary. 37 37 */ 38 + 38 39 #define MAX_RCU_LVLS 4 39 - #define RCU_FANOUT_1 (CONFIG_RCU_FANOUT_LEAF) 40 - #define RCU_FANOUT_2 (RCU_FANOUT_1 * CONFIG_RCU_FANOUT) 41 - #define RCU_FANOUT_3 (RCU_FANOUT_2 * CONFIG_RCU_FANOUT) 42 - #define RCU_FANOUT_4 (RCU_FANOUT_3 * CONFIG_RCU_FANOUT) 40 + 41 + #ifdef CONFIG_RCU_FANOUT 42 + #define RCU_FANOUT CONFIG_RCU_FANOUT 43 + #else /* #ifdef CONFIG_RCU_FANOUT */ 44 + # ifdef CONFIG_64BIT 45 + # define RCU_FANOUT 64 46 + # else 47 + # define RCU_FANOUT 32 48 + # endif 49 + #endif /* #else #ifdef CONFIG_RCU_FANOUT */ 50 + 51 + #ifdef CONFIG_RCU_FANOUT_LEAF 52 + #define RCU_FANOUT_LEAF CONFIG_RCU_FANOUT_LEAF 53 + #else /* #ifdef CONFIG_RCU_FANOUT_LEAF */ 54 + # ifdef CONFIG_64BIT 55 + # define RCU_FANOUT_LEAF 64 56 + # else 57 + # define RCU_FANOUT_LEAF 32 58 + # endif 59 + #endif /* #else #ifdef CONFIG_RCU_FANOUT_LEAF */ 60 + 61 + #define RCU_FANOUT_1 (RCU_FANOUT_LEAF) 62 + #define RCU_FANOUT_2 (RCU_FANOUT_1 * RCU_FANOUT) 63 + #define RCU_FANOUT_3 (RCU_FANOUT_2 * RCU_FANOUT) 64 + #define RCU_FANOUT_4 (RCU_FANOUT_3 * RCU_FANOUT) 43 65 44 66 #if NR_CPUS <= RCU_FANOUT_1 45 67 # define RCU_NUM_LVLS 1 ··· 192 170 /* if there is no such task. If there */ 193 171 /* is no current expedited grace period, */ 194 172 /* then there cannot be any such task. */ 195 - #ifdef CONFIG_RCU_BOOST 196 173 struct list_head *boost_tasks; 197 174 /* Pointer to first task that needs to be */ 198 175 /* priority boosted, or NULL if no priority */ ··· 229 208 unsigned long n_balk_nos; 230 209 /* Refused to boost: not sure why, though. */ 231 210 /* This can happen due to race conditions. */ 232 - #endif /* #ifdef CONFIG_RCU_BOOST */ 233 211 #ifdef CONFIG_RCU_NOCB_CPU 234 212 wait_queue_head_t nocb_gp_wq[2]; 235 213 /* Place for rcu_nocb_kthread() to wait GP. 
*/ ··· 539 519 * RCU implementation internal declarations: 540 520 */ 541 521 extern struct rcu_state rcu_sched_state; 542 - DECLARE_PER_CPU(struct rcu_data, rcu_sched_data); 543 522 544 523 extern struct rcu_state rcu_bh_state; 545 - DECLARE_PER_CPU(struct rcu_data, rcu_bh_data); 546 524 547 525 #ifdef CONFIG_PREEMPT_RCU 548 526 extern struct rcu_state rcu_preempt_state; 549 - DECLARE_PER_CPU(struct rcu_data, rcu_preempt_data); 550 527 #endif /* #ifdef CONFIG_PREEMPT_RCU */ 551 528 552 529 #ifdef CONFIG_RCU_BOOST
+67 -56
kernel/rcu/tree_plugin.h
··· 43 43 DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops); 44 44 DEFINE_PER_CPU(char, rcu_cpu_has_work); 45 45 46 - #endif /* #ifdef CONFIG_RCU_BOOST */ 46 + #else /* #ifdef CONFIG_RCU_BOOST */ 47 + 48 + /* 49 + * Some architectures do not define rt_mutexes, but if !CONFIG_RCU_BOOST, 50 + * all uses are in dead code. Provide a definition to keep the compiler 51 + * happy, but add WARN_ON_ONCE() to complain if used in the wrong place. 52 + * This probably needs to be excluded from -rt builds. 53 + */ 54 + #define rt_mutex_owner(a) ({ WARN_ON_ONCE(1); NULL; }) 55 + 56 + #endif /* #else #ifdef CONFIG_RCU_BOOST */ 47 57 48 58 #ifdef CONFIG_RCU_NOCB_CPU 49 59 static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */ ··· 70 60 { 71 61 if (IS_ENABLED(CONFIG_RCU_TRACE)) 72 62 pr_info("\tRCU debugfs-based tracing is enabled.\n"); 73 - if ((IS_ENABLED(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 64) || 74 - (!IS_ENABLED(CONFIG_64BIT) && CONFIG_RCU_FANOUT != 32)) 63 + if ((IS_ENABLED(CONFIG_64BIT) && RCU_FANOUT != 64) || 64 + (!IS_ENABLED(CONFIG_64BIT) && RCU_FANOUT != 32)) 75 65 pr_info("\tCONFIG_RCU_FANOUT set to non-default value of %d\n", 76 - CONFIG_RCU_FANOUT); 77 - if (IS_ENABLED(CONFIG_RCU_FANOUT_EXACT)) 66 + RCU_FANOUT); 67 + if (rcu_fanout_exact) 78 68 pr_info("\tHierarchical RCU autobalancing is disabled.\n"); 79 69 if (IS_ENABLED(CONFIG_RCU_FAST_NO_HZ)) 80 70 pr_info("\tRCU dyntick-idle grace-period acceleration is enabled.\n"); ··· 86 76 pr_info("\tAdditional per-CPU info printed with stalls.\n"); 87 77 if (NUM_RCU_LVL_4 != 0) 88 78 pr_info("\tFour-level hierarchy is enabled.\n"); 89 - if (CONFIG_RCU_FANOUT_LEAF != 16) 79 + if (RCU_FANOUT_LEAF != 16) 90 80 pr_info("\tBuild-time adjustment of leaf fanout to %d.\n", 91 - CONFIG_RCU_FANOUT_LEAF); 92 - if (rcu_fanout_leaf != CONFIG_RCU_FANOUT_LEAF) 81 + RCU_FANOUT_LEAF); 82 + if (rcu_fanout_leaf != RCU_FANOUT_LEAF) 93 83 pr_info("\tBoot-time adjustment of leaf fanout to %d.\n", rcu_fanout_leaf); 94 84 
if (nr_cpu_ids != NR_CPUS) 95 85 pr_info("\tRCU restricting CPUs from NR_CPUS=%d to nr_cpu_ids=%d.\n", NR_CPUS, nr_cpu_ids); ··· 100 90 #ifdef CONFIG_PREEMPT_RCU 101 91 102 92 RCU_STATE_INITIALIZER(rcu_preempt, 'p', call_rcu); 103 - static struct rcu_state *rcu_state_p = &rcu_preempt_state; 93 + static struct rcu_state *const rcu_state_p = &rcu_preempt_state; 94 + static struct rcu_data __percpu *const rcu_data_p = &rcu_preempt_data; 104 95 105 96 static int rcu_preempted_readers_exp(struct rcu_node *rnp); 106 97 static void rcu_report_exp_rnp(struct rcu_state *rsp, struct rcu_node *rnp, ··· 127 116 */ 128 117 static void rcu_preempt_qs(void) 129 118 { 130 - if (!__this_cpu_read(rcu_preempt_data.passed_quiesce)) { 119 + if (!__this_cpu_read(rcu_data_p->passed_quiesce)) { 131 120 trace_rcu_grace_period(TPS("rcu_preempt"), 132 - __this_cpu_read(rcu_preempt_data.gpnum), 121 + __this_cpu_read(rcu_data_p->gpnum), 133 122 TPS("cpuqs")); 134 - __this_cpu_write(rcu_preempt_data.passed_quiesce, 1); 123 + __this_cpu_write(rcu_data_p->passed_quiesce, 1); 135 124 barrier(); /* Coordinate with rcu_preempt_check_callbacks(). */ 136 125 current->rcu_read_unlock_special.b.need_qs = false; 137 126 } ··· 161 150 !t->rcu_read_unlock_special.b.blocked) { 162 151 163 152 /* Possibly blocking in an RCU read-side critical section. 
*/ 164 - rdp = this_cpu_ptr(rcu_preempt_state.rda); 153 + rdp = this_cpu_ptr(rcu_state_p->rda); 165 154 rnp = rdp->mynode; 166 155 raw_spin_lock_irqsave(&rnp->lock, flags); 167 156 smp_mb__after_unlock_lock(); ··· 191 180 if ((rnp->qsmask & rdp->grpmask) && rnp->gp_tasks != NULL) { 192 181 list_add(&t->rcu_node_entry, rnp->gp_tasks->prev); 193 182 rnp->gp_tasks = &t->rcu_node_entry; 194 - #ifdef CONFIG_RCU_BOOST 195 - if (rnp->boost_tasks != NULL) 183 + if (IS_ENABLED(CONFIG_RCU_BOOST) && 184 + rnp->boost_tasks != NULL) 196 185 rnp->boost_tasks = rnp->gp_tasks; 197 - #endif /* #ifdef CONFIG_RCU_BOOST */ 198 186 } else { 199 187 list_add(&t->rcu_node_entry, &rnp->blkd_tasks); 200 188 if (rnp->qsmask & rdp->grpmask) ··· 273 263 bool empty_exp_now; 274 264 unsigned long flags; 275 265 struct list_head *np; 276 - #ifdef CONFIG_RCU_BOOST 277 266 bool drop_boost_mutex = false; 278 - #endif /* #ifdef CONFIG_RCU_BOOST */ 279 267 struct rcu_node *rnp; 280 268 union rcu_special special; 281 269 ··· 315 307 t->rcu_read_unlock_special.b.blocked = false; 316 308 317 309 /* 318 - * Remove this task from the list it blocked on. The 319 - * task can migrate while we acquire the lock, but at 320 - * most one time. So at most two passes through loop. 310 + * Remove this task from the list it blocked on. The task 311 + * now remains queued on the rcu_node corresponding to 312 + * the CPU it first blocked on, so the first attempt to 313 + * acquire the task's rcu_node's ->lock will succeed. 314 + * Keep the loop and add a WARN_ON() out of sheer paranoia. 321 315 */ 322 316 for (;;) { 323 317 rnp = t->rcu_blocked_node; ··· 327 317 smp_mb__after_unlock_lock(); 328 318 if (rnp == t->rcu_blocked_node) 329 319 break; 320 + WARN_ON_ONCE(1); 330 321 raw_spin_unlock(&rnp->lock); /* irqs remain disabled. 
*/ 331 322 } 332 323 empty_norm = !rcu_preempt_blocked_readers_cgp(rnp); ··· 342 331 rnp->gp_tasks = np; 343 332 if (&t->rcu_node_entry == rnp->exp_tasks) 344 333 rnp->exp_tasks = np; 345 - #ifdef CONFIG_RCU_BOOST 346 - if (&t->rcu_node_entry == rnp->boost_tasks) 347 - rnp->boost_tasks = np; 348 - /* Snapshot ->boost_mtx ownership with rcu_node lock held. */ 349 - drop_boost_mutex = rt_mutex_owner(&rnp->boost_mtx) == t; 350 - #endif /* #ifdef CONFIG_RCU_BOOST */ 334 + if (IS_ENABLED(CONFIG_RCU_BOOST)) { 335 + if (&t->rcu_node_entry == rnp->boost_tasks) 336 + rnp->boost_tasks = np; 337 + /* Snapshot ->boost_mtx ownership w/rnp->lock held. */ 338 + drop_boost_mutex = rt_mutex_owner(&rnp->boost_mtx) == t; 339 + } 351 340 352 341 /* 353 342 * If this was the last task on the current list, and if ··· 364 353 rnp->grplo, 365 354 rnp->grphi, 366 355 !!rnp->gp_tasks); 367 - rcu_report_unblock_qs_rnp(&rcu_preempt_state, 368 - rnp, flags); 356 + rcu_report_unblock_qs_rnp(rcu_state_p, rnp, flags); 369 357 } else { 370 358 raw_spin_unlock_irqrestore(&rnp->lock, flags); 371 359 } 372 360 373 - #ifdef CONFIG_RCU_BOOST 374 361 /* Unboost if we were boosted. */ 375 - if (drop_boost_mutex) 362 + if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex) 376 363 rt_mutex_unlock(&rnp->boost_mtx); 377 - #endif /* #ifdef CONFIG_RCU_BOOST */ 378 364 379 365 /* 380 366 * If this was the last task on the expedited lists, 381 367 * then we need to report up the rcu_node hierarchy. 
382 368 */ 383 369 if (!empty_exp && empty_exp_now) 384 - rcu_report_exp_rnp(&rcu_preempt_state, rnp, true); 370 + rcu_report_exp_rnp(rcu_state_p, rnp, true); 385 371 } else { 386 372 local_irq_restore(flags); 387 373 } ··· 398 390 raw_spin_unlock_irqrestore(&rnp->lock, flags); 399 391 return; 400 392 } 401 - t = list_entry(rnp->gp_tasks, 393 + t = list_entry(rnp->gp_tasks->prev, 402 394 struct task_struct, rcu_node_entry); 403 395 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) 404 396 sched_show_task(t); ··· 455 447 if (!rcu_preempt_blocked_readers_cgp(rnp)) 456 448 return 0; 457 449 rcu_print_task_stall_begin(rnp); 458 - t = list_entry(rnp->gp_tasks, 450 + t = list_entry(rnp->gp_tasks->prev, 459 451 struct task_struct, rcu_node_entry); 460 452 list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { 461 453 pr_cont(" P%d", t->pid); ··· 499 491 return; 500 492 } 501 493 if (t->rcu_read_lock_nesting > 0 && 502 - __this_cpu_read(rcu_preempt_data.qs_pending) && 503 - !__this_cpu_read(rcu_preempt_data.passed_quiesce)) 494 + __this_cpu_read(rcu_data_p->qs_pending) && 495 + !__this_cpu_read(rcu_data_p->passed_quiesce)) 504 496 t->rcu_read_unlock_special.b.need_qs = true; 505 497 } 506 498 ··· 508 500 509 501 static void rcu_preempt_do_callbacks(void) 510 502 { 511 - rcu_do_batch(&rcu_preempt_state, this_cpu_ptr(&rcu_preempt_data)); 503 + rcu_do_batch(rcu_state_p, this_cpu_ptr(rcu_data_p)); 512 504 } 513 505 514 506 #endif /* #ifdef CONFIG_RCU_BOOST */ ··· 518 510 */ 519 511 void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu)) 520 512 { 521 - __call_rcu(head, func, &rcu_preempt_state, -1, 0); 513 + __call_rcu(head, func, rcu_state_p, -1, 0); 522 514 } 523 515 EXPORT_SYMBOL_GPL(call_rcu); 524 516 ··· 719 711 void synchronize_rcu_expedited(void) 720 712 { 721 713 struct rcu_node *rnp; 722 - struct rcu_state *rsp = &rcu_preempt_state; 714 + struct rcu_state *rsp = rcu_state_p; 723 715 unsigned long snap; 724 716 int trycount 
= 0; 725 717 ··· 806 798 */ 807 799 void rcu_barrier(void) 808 800 { 809 - _rcu_barrier(&rcu_preempt_state); 801 + _rcu_barrier(rcu_state_p); 810 802 } 811 803 EXPORT_SYMBOL_GPL(rcu_barrier); 812 804 ··· 815 807 */ 816 808 static void __init __rcu_init_preempt(void) 817 809 { 818 - rcu_init_one(&rcu_preempt_state, &rcu_preempt_data); 810 + rcu_init_one(rcu_state_p, rcu_data_p); 819 811 } 820 812 821 813 /* ··· 838 830 839 831 #else /* #ifdef CONFIG_PREEMPT_RCU */ 840 832 841 - static struct rcu_state *rcu_state_p = &rcu_sched_state; 833 + static struct rcu_state *const rcu_state_p = &rcu_sched_state; 834 + static struct rcu_data __percpu *const rcu_data_p = &rcu_sched_data; 842 835 843 836 /* 844 837 * Tell them what RCU they are running. ··· 1181 1172 struct sched_param sp; 1182 1173 struct task_struct *t; 1183 1174 1184 - if (&rcu_preempt_state != rsp) 1175 + if (rcu_state_p != rsp) 1185 1176 return 0; 1186 1177 1187 1178 if (!rcu_scheduler_fully_active || rcu_rnp_online_cpus(rnp) == 0) ··· 1375 1366 * Because we not have RCU_FAST_NO_HZ, just check whether this CPU needs 1376 1367 * any flavor of RCU. 1377 1368 */ 1378 - #ifndef CONFIG_RCU_NOCB_CPU_ALL 1379 1369 int rcu_needs_cpu(unsigned long *delta_jiffies) 1380 1370 { 1381 1371 *delta_jiffies = ULONG_MAX; 1382 - return rcu_cpu_has_callbacks(NULL); 1372 + return IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL) 1373 + ? 0 : rcu_cpu_has_callbacks(NULL); 1383 1374 } 1384 - #endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ 1385 1375 1386 1376 /* 1387 1377 * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up ··· 1487 1479 * 1488 1480 * The caller must have disabled interrupts. 
1489 1481 */ 1490 - #ifndef CONFIG_RCU_NOCB_CPU_ALL 1491 1482 int rcu_needs_cpu(unsigned long *dj) 1492 1483 { 1493 1484 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 1485 + 1486 + if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL)) { 1487 + *dj = ULONG_MAX; 1488 + return 0; 1489 + } 1494 1490 1495 1491 /* Snapshot to detect later posting of non-lazy callback. */ 1496 1492 rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted; ··· 1522 1510 } 1523 1511 return 0; 1524 1512 } 1525 - #endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ 1526 1513 1527 1514 /* 1528 1515 * Prepare a CPU for idle from an RCU perspective. The first major task ··· 1535 1524 */ 1536 1525 static void rcu_prepare_for_idle(void) 1537 1526 { 1538 - #ifndef CONFIG_RCU_NOCB_CPU_ALL 1539 1527 bool needwake; 1540 1528 struct rcu_data *rdp; 1541 1529 struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks); 1542 1530 struct rcu_node *rnp; 1543 1531 struct rcu_state *rsp; 1544 1532 int tne; 1533 + 1534 + if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL)) 1535 + return; 1545 1536 1546 1537 /* Handle nohz enablement switches conservatively. 
*/ 1547 1538 tne = READ_ONCE(tick_nohz_active); ··· 1592 1579 if (needwake) 1593 1580 rcu_gp_kthread_wake(rsp); 1594 1581 } 1595 - #endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ 1596 1582 } 1597 1583 1598 1584 /* ··· 1601 1589 */ 1602 1590 static void rcu_cleanup_after_idle(void) 1603 1591 { 1604 - #ifndef CONFIG_RCU_NOCB_CPU_ALL 1605 - if (rcu_is_nocb_cpu(smp_processor_id())) 1592 + if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL) || 1593 + rcu_is_nocb_cpu(smp_processor_id())) 1606 1594 return; 1607 1595 if (rcu_try_advance_all_cbs()) 1608 1596 invoke_rcu_core(); 1609 - #endif /* #ifndef CONFIG_RCU_NOCB_CPU_ALL */ 1610 1597 } 1611 1598 1612 1599 /* ··· 3059 3048 if (tick_nohz_full_cpu(smp_processor_id()) && 3060 3049 (!rcu_gp_in_progress(rsp) || 3061 3050 ULONG_CMP_LT(jiffies, READ_ONCE(rsp->gp_start) + HZ))) 3062 - return 1; 3051 + return true; 3063 3052 #endif /* #ifdef CONFIG_NO_HZ_FULL */ 3064 - return 0; 3053 + return false; 3065 3054 } 3066 3055 3067 3056 /*
+64 -2
lib/Kconfig.debug
··· 1233 1233 depends on DEBUG_KERNEL 1234 1234 select TORTURE_TEST 1235 1235 select SRCU 1236 + select TASKS_RCU 1236 1237 default n 1237 1238 help 1238 1239 This option provides a kernel module that runs torture tests ··· 1262 1261 Say N here if you want the RCU torture tests to start only 1263 1262 after being manually enabled via /proc. 1264 1263 1264 + config RCU_TORTURE_TEST_SLOW_PREINIT 1265 + bool "Slow down RCU grace-period pre-initialization to expose races" 1266 + depends on RCU_TORTURE_TEST 1267 + help 1268 + This option delays grace-period pre-initialization (the 1269 + propagation of CPU-hotplug changes up the rcu_node combining 1270 + tree) for a few jiffies between initializing each pair of 1271 + consecutive rcu_node structures. This helps to expose races 1272 + involving grace-period pre-initialization, in other words, it 1273 + makes your kernel less stable. It can also greatly increase 1274 + grace-period latency, especially on systems with large numbers 1275 + of CPUs. This is useful when torture-testing RCU, but in 1276 + almost no other circumstance. 1277 + 1278 + Say Y here if you want your system to crash and hang more often. 1279 + Say N if you want a sane system. 1280 + 1281 + config RCU_TORTURE_TEST_SLOW_PREINIT_DELAY 1282 + int "How much to slow down RCU grace-period pre-initialization" 1283 + range 0 5 1284 + default 3 1285 + depends on RCU_TORTURE_TEST_SLOW_PREINIT 1286 + help 1287 + This option specifies the number of jiffies to wait between 1288 + each rcu_node structure pre-initialization step. 
1289 + 1265 1290 config RCU_TORTURE_TEST_SLOW_INIT 1266 1291 bool "Slow down RCU grace-period initialization to expose races" 1267 1292 depends on RCU_TORTURE_TEST 1268 1293 help 1269 - This option makes grace-period initialization block for a 1270 - few jiffies between initializing each pair of consecutive 1294 + This option delays grace-period initialization for a few 1295 + jiffies between initializing each pair of consecutive 1271 1296 rcu_node structures. This helps to expose races involving 1272 1297 grace-period initialization, in other words, it makes your 1273 1298 kernel less stable. It can also greatly increase grace-period ··· 1312 1285 help 1313 1286 This option specifies the number of jiffies to wait between 1314 1287 each rcu_node structure initialization. 1288 + 1289 + config RCU_TORTURE_TEST_SLOW_CLEANUP 1290 + bool "Slow down RCU grace-period cleanup to expose races" 1291 + depends on RCU_TORTURE_TEST 1292 + help 1293 + This option delays grace-period cleanup for a few jiffies 1294 + between cleaning up each pair of consecutive rcu_node 1295 + structures. This helps to expose races involving grace-period 1296 + cleanup, in other words, it makes your kernel less stable. 1297 + It can also greatly increase grace-period latency, especially 1298 + on systems with large numbers of CPUs. This is useful when 1299 + torture-testing RCU, but in almost no other circumstance. 1300 + 1301 + Say Y here if you want your system to crash and hang more often. 1302 + Say N if you want a sane system. 1303 + 1304 + config RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY 1305 + int "How much to slow down RCU grace-period cleanup" 1306 + range 0 5 1307 + default 3 1308 + depends on RCU_TORTURE_TEST_SLOW_CLEANUP 1309 + help 1310 + This option specifies the number of jiffies to wait between 1311 + each rcu_node structure cleanup operation. 
1315 1312 1316 1313 config RCU_CPU_STALL_TIMEOUT 1317 1314 int "RCU CPU stall timeout in seconds" ··· 1372 1321 1373 1322 Say Y here if you want to enable RCU tracing 1374 1323 Say N if you are unsure. 1324 + 1325 + config RCU_EQS_DEBUG 1326 + bool "Use this when adding any sort of NO_HZ support to your arch" 1327 + depends on DEBUG_KERNEL 1328 + help 1329 + This option provides consistency checks in RCU's handling of 1330 + NO_HZ. These checks have proven quite helpful in detecting 1331 + bugs in arch-specific NO_HZ code. 1332 + 1333 + Say N here if you need ultimate kernel/user switch latencies 1334 + Say Y if you are unsure 1375 1335 1376 1336 endmenu # "RCU Debugging" 1377 1337
+1 -1
tools/testing/selftests/rcutorture/bin/configinit.sh
··· 66 66 mv $builddir/.config $builddir/.config.sav 67 67 sh $T/upd.sh < $builddir/.config.sav > $builddir/.config 68 68 cp $builddir/.config $builddir/.config.new 69 - yes '' | make $buildloc oldconfig > $builddir/Make.modconfig.out 2>&1 69 + yes '' | make $buildloc oldconfig > $builddir/Make.oldconfig.out 2> $builddir/Make.oldconfig.err 70 70 71 71 # verify new config matches specification. 72 72 configcheck.sh $builddir/.config $c
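The change above stops mixing the `make oldconfig` stdout and stderr into a single `Make.modconfig.out` file; warnings now land in a separate `Make.oldconfig.err` that kvm-recheck.sh can surface later. A minimal sketch of the same redirection pattern, using a stand-in command rather than a real kernel build (the `fake_oldconfig` helper and demo `builddir` are illustrative, not part of configinit.sh):

```shell
#!/bin/sh
# Sketch of the configinit.sh redirection change: answer every
# oldconfig prompt with the default (yes '') while capturing stdout
# and stderr into separate files.
builddir=${TMPDIR:-/tmp}/configinit-demo.$$
mkdir -p $builddir

# Stand-in for "make $buildloc oldconfig": writes to both streams.
fake_oldconfig () {
	echo "scripts/kconfig/conf --oldconfig Kconfig"
	echo "warning: symbol value 'm' invalid for RCU_EXPERT" >&2
}

yes '' | fake_oldconfig > $builddir/Make.oldconfig.out 2> $builddir/Make.oldconfig.err

# Surface any captured errors.  kvm-recheck.sh guards with "test -r";
# this demo uses "test -s" (non-empty) to skip clean runs entirely.
if test -s $builddir/Make.oldconfig.err
then
	cat $builddir/Make.oldconfig.err
fi
```

Keeping the two streams apart means a clean build produces an empty `.err` file, so the recheck pass can print errors unconditionally without drowning them in ordinary build chatter.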
+4
tools/testing/selftests/rcutorture/bin/kvm-recheck.sh
··· 43 43 if test -f "$i/console.log" 44 44 then 45 45 configcheck.sh $i/.config $i/ConfigFragment 46 + if test -r $i/Make.oldconfig.err 47 + then 48 + cat $i/Make.oldconfig.err 49 + fi 46 50 parse-build.sh $i/Make.out $configfile 47 51 parse-torture.sh $i/console.log $configfile 48 52 parse-console.sh $i/console.log $configfile
+19 -6
tools/testing/selftests/rcutorture/bin/kvm.sh
··· 55 55 echo " --bootargs kernel-boot-arguments" 56 56 echo " --bootimage relative-path-to-kernel-boot-image" 57 57 echo " --buildonly" 58 - echo " --configs \"config-file list\"" 58 + echo " --configs \"config-file list w/ repeat factor (3*TINY01)\"" 59 59 echo " --cpus N" 60 60 echo " --datestamp string" 61 61 echo " --defconfig string" ··· 178 178 touch $T/cfgcpu 179 179 for CF in $configs 180 180 do 181 - if test -f "$CONFIGFRAG/$CF" 181 + case $CF in 182 + [0-9]\**|[0-9][0-9]\**|[0-9][0-9][0-9]\**) 183 + config_reps=`echo $CF | sed -e 's/\*.*$//'` 184 + CF1=`echo $CF | sed -e 's/^[^*]*\*//'` 185 + ;; 186 + *) 187 + config_reps=1 188 + CF1=$CF 189 + ;; 190 + esac 191 + if test -f "$CONFIGFRAG/$CF1" 182 192 then 183 - cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF` 184 - cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF" "$cpu_count"` 185 - echo $CF $cpu_count >> $T/cfgcpu 193 + cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF1` 194 + cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF1" "$cpu_count"` 195 + for ((cur_rep=0;cur_rep<$config_reps;cur_rep++)) 196 + do 197 + echo $CF1 $cpu_count >> $T/cfgcpu 198 + done 186 199 else 187 - echo "The --configs file $CF does not exist, terminating." 200 + echo "The --configs file $CF1 does not exist, terminating." 188 201 exit 1 189 202 fi 190 203 done
+2
tools/testing/selftests/rcutorture/configs/rcu/CFcommon
··· 1 1 CONFIG_RCU_TORTURE_TEST=y 2 2 CONFIG_PRINTK_TIME=y 3 + CONFIG_RCU_TORTURE_TEST_SLOW_CLEANUP=y 3 4 CONFIG_RCU_TORTURE_TEST_SLOW_INIT=y 5 + CONFIG_RCU_TORTURE_TEST_SLOW_PREINIT=y
+1
tools/testing/selftests/rcutorture/configs/rcu/SRCU-N
··· 5 5 CONFIG_PREEMPT_NONE=y 6 6 CONFIG_PREEMPT_VOLUNTARY=n 7 7 CONFIG_PREEMPT=n 8 + CONFIG_RCU_EXPERT=y
+1
tools/testing/selftests/rcutorture/configs/rcu/SRCU-P
··· 5 5 CONFIG_PREEMPT_NONE=n 6 6 CONFIG_PREEMPT_VOLUNTARY=n 7 7 CONFIG_PREEMPT=y 8 + #CHECK#CONFIG_RCU_EXPERT=n
+1 -1
tools/testing/selftests/rcutorture/configs/rcu/SRCU-P.boot
··· 1 - rcutorture.torture_type=srcu 1 + rcutorture.torture_type=srcud
+3 -2
tools/testing/selftests/rcutorture/configs/rcu/TASKS01
··· 5 5 CONFIG_PREEMPT_VOLUNTARY=n 6 6 CONFIG_PREEMPT=y 7 7 CONFIG_DEBUG_LOCK_ALLOC=y 8 - CONFIG_PROVE_RCU=y 9 - CONFIG_TASKS_RCU=y 8 + CONFIG_PROVE_LOCKING=n 9 + #CHECK#CONFIG_PROVE_RCU=n 10 + CONFIG_RCU_EXPERT=y
-1
tools/testing/selftests/rcutorture/configs/rcu/TASKS02
··· 2 2 CONFIG_PREEMPT_NONE=y 3 3 CONFIG_PREEMPT_VOLUNTARY=n 4 4 CONFIG_PREEMPT=n 5 - CONFIG_TASKS_RCU=y
+1 -1
tools/testing/selftests/rcutorture/configs/rcu/TASKS03
··· 6 6 CONFIG_PREEMPT_NONE=n 7 7 CONFIG_PREEMPT_VOLUNTARY=n 8 8 CONFIG_PREEMPT=y 9 - CONFIG_TASKS_RCU=y 10 9 CONFIG_HZ_PERIODIC=n 11 10 CONFIG_NO_HZ_IDLE=n 12 11 CONFIG_NO_HZ_FULL=y 13 12 CONFIG_NO_HZ_FULL_ALL=y 13 + #CHECK#CONFIG_RCU_EXPERT=n
+1 -1
tools/testing/selftests/rcutorture/configs/rcu/TINY02
··· 8 8 CONFIG_NO_HZ_FULL=n 9 9 CONFIG_RCU_TRACE=y 10 10 CONFIG_PROVE_LOCKING=y 11 - CONFIG_PROVE_RCU=y 11 + #CHECK#CONFIG_PROVE_RCU=y 12 12 CONFIG_DEBUG_LOCK_ALLOC=y 13 13 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 14 14 CONFIG_PREEMPT_COUNT=y
+1
tools/testing/selftests/rcutorture/configs/rcu/TINY02.boot
··· 1 1 rcupdate.rcu_self_test=1 2 2 rcupdate.rcu_self_test_bh=1 3 + rcutorture.torture_type=rcu_bh
+1
tools/testing/selftests/rcutorture/configs/rcu/TREE01
··· 16 16 CONFIG_RCU_CPU_STALL_INFO=n 17 17 CONFIG_RCU_BOOST=n 18 18 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 19 + CONFIG_RCU_EXPERT=y
+1 -1
tools/testing/selftests/rcutorture/configs/rcu/TREE02
··· 14 14 CONFIG_HIBERNATION=n 15 15 CONFIG_RCU_FANOUT=3 16 16 CONFIG_RCU_FANOUT_LEAF=3 17 - CONFIG_RCU_FANOUT_EXACT=n 18 17 CONFIG_RCU_NOCB_CPU=n 19 18 CONFIG_DEBUG_LOCK_ALLOC=y 20 19 CONFIG_PROVE_LOCKING=n 21 20 CONFIG_RCU_CPU_STALL_INFO=n 22 21 CONFIG_RCU_BOOST=n 23 22 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 23 + CONFIG_RCU_EXPERT=y
-1
tools/testing/selftests/rcutorture/configs/rcu/TREE02-T
··· 14 14 CONFIG_HIBERNATION=n 15 15 CONFIG_RCU_FANOUT=3 16 16 CONFIG_RCU_FANOUT_LEAF=3 17 - CONFIG_RCU_FANOUT_EXACT=n 18 17 CONFIG_RCU_NOCB_CPU=n 19 18 CONFIG_DEBUG_LOCK_ALLOC=y 20 19 CONFIG_PROVE_LOCKING=n
+4 -4
tools/testing/selftests/rcutorture/configs/rcu/TREE03
··· 1 1 CONFIG_SMP=y 2 - CONFIG_NR_CPUS=8 2 + CONFIG_NR_CPUS=16 3 3 CONFIG_PREEMPT_NONE=n 4 4 CONFIG_PREEMPT_VOLUNTARY=n 5 5 CONFIG_PREEMPT=y ··· 9 9 CONFIG_NO_HZ_FULL=n 10 10 CONFIG_RCU_TRACE=y 11 11 CONFIG_HOTPLUG_CPU=y 12 - CONFIG_RCU_FANOUT=4 13 - CONFIG_RCU_FANOUT_LEAF=4 14 - CONFIG_RCU_FANOUT_EXACT=n 12 + CONFIG_RCU_FANOUT=2 13 + CONFIG_RCU_FANOUT_LEAF=2 15 14 CONFIG_RCU_NOCB_CPU=n 16 15 CONFIG_DEBUG_LOCK_ALLOC=n 17 16 CONFIG_RCU_CPU_STALL_INFO=n 18 17 CONFIG_RCU_BOOST=y 19 18 CONFIG_RCU_KTHREAD_PRIO=2 20 19 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 20 + CONFIG_RCU_EXPERT=y
+1
tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot
··· 1 + rcutorture.onoff_interval=1 rcutorture.onoff_holdoff=30
+4 -4
tools/testing/selftests/rcutorture/configs/rcu/TREE04
··· 13 13 CONFIG_HOTPLUG_CPU=n 14 14 CONFIG_SUSPEND=n 15 15 CONFIG_HIBERNATION=n 16 - CONFIG_RCU_FANOUT=2 17 - CONFIG_RCU_FANOUT_LEAF=2 18 - CONFIG_RCU_FANOUT_EXACT=n 16 + CONFIG_RCU_FANOUT=4 17 + CONFIG_RCU_FANOUT_LEAF=4 19 18 CONFIG_RCU_NOCB_CPU=n 20 19 CONFIG_DEBUG_LOCK_ALLOC=n 21 - CONFIG_RCU_CPU_STALL_INFO=y 20 + CONFIG_RCU_CPU_STALL_INFO=n 22 21 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 22 + CONFIG_RCU_EXPERT=y
+2 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE05
··· 12 12 CONFIG_HOTPLUG_CPU=y 13 13 CONFIG_RCU_FANOUT=6 14 14 CONFIG_RCU_FANOUT_LEAF=6 15 - CONFIG_RCU_FANOUT_EXACT=n 16 15 CONFIG_RCU_NOCB_CPU=y 17 16 CONFIG_RCU_NOCB_CPU_NONE=y 18 17 CONFIG_DEBUG_LOCK_ALLOC=y 19 18 CONFIG_PROVE_LOCKING=y 20 - CONFIG_PROVE_RCU=y 19 + #CHECK#CONFIG_PROVE_RCU=y 21 20 CONFIG_RCU_CPU_STALL_INFO=n 22 21 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 22 + CONFIG_RCU_EXPERT=y
+2 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE06
··· 14 14 CONFIG_HIBERNATION=n 15 15 CONFIG_RCU_FANOUT=6 16 16 CONFIG_RCU_FANOUT_LEAF=6 17 - CONFIG_RCU_FANOUT_EXACT=y 18 17 CONFIG_RCU_NOCB_CPU=n 19 18 CONFIG_DEBUG_LOCK_ALLOC=y 20 19 CONFIG_PROVE_LOCKING=y 21 - CONFIG_PROVE_RCU=y 20 + #CHECK#CONFIG_PROVE_RCU=y 22 21 CONFIG_RCU_CPU_STALL_INFO=n 23 22 CONFIG_DEBUG_OBJECTS_RCU_HEAD=y 23 + CONFIG_RCU_EXPERT=y
+1
tools/testing/selftests/rcutorture/configs/rcu/TREE06.boot
··· 1 1 rcupdate.rcu_self_test=1 2 2 rcupdate.rcu_self_test_bh=1 3 3 rcupdate.rcu_self_test_sched=1 4 + rcutree.rcu_fanout_exact=1
+2 -2
tools/testing/selftests/rcutorture/configs/rcu/TREE07
··· 15 15 CONFIG_HOTPLUG_CPU=y 16 16 CONFIG_RCU_FANOUT=2 17 17 CONFIG_RCU_FANOUT_LEAF=2 18 - CONFIG_RCU_FANOUT_EXACT=n 19 18 CONFIG_RCU_NOCB_CPU=n 20 19 CONFIG_DEBUG_LOCK_ALLOC=n 21 - CONFIG_RCU_CPU_STALL_INFO=y 20 + CONFIG_RCU_CPU_STALL_INFO=n 22 21 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 22 + CONFIG_RCU_EXPERT=y
+3 -3
tools/testing/selftests/rcutorture/configs/rcu/TREE08
··· 1 1 CONFIG_SMP=y 2 - CONFIG_NR_CPUS=16 2 + CONFIG_NR_CPUS=8 3 3 CONFIG_PREEMPT_NONE=n 4 4 CONFIG_PREEMPT_VOLUNTARY=n 5 5 CONFIG_PREEMPT=y ··· 13 13 CONFIG_SUSPEND=n 14 14 CONFIG_HIBERNATION=n 15 15 CONFIG_RCU_FANOUT=3 16 - CONFIG_RCU_FANOUT_EXACT=y 17 16 CONFIG_RCU_FANOUT_LEAF=2 18 17 CONFIG_RCU_NOCB_CPU=y 19 18 CONFIG_RCU_NOCB_CPU_ALL=y 20 19 CONFIG_DEBUG_LOCK_ALLOC=n 21 20 CONFIG_PROVE_LOCKING=y 22 - CONFIG_PROVE_RCU=y 21 + #CHECK#CONFIG_PROVE_RCU=y 23 22 CONFIG_RCU_CPU_STALL_INFO=n 24 23 CONFIG_RCU_BOOST=n 25 24 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 25 + CONFIG_RCU_EXPERT=y
-1
tools/testing/selftests/rcutorture/configs/rcu/TREE08-T
··· 13 13 CONFIG_SUSPEND=n 14 14 CONFIG_HIBERNATION=n 15 15 CONFIG_RCU_FANOUT=3 16 - CONFIG_RCU_FANOUT_EXACT=y 17 16 CONFIG_RCU_FANOUT_LEAF=2 18 17 CONFIG_RCU_NOCB_CPU=y 19 18 CONFIG_RCU_NOCB_CPU_ALL=y
+1
tools/testing/selftests/rcutorture/configs/rcu/TREE08-T.boot
··· 1 + rcutree.rcu_fanout_exact=1
+1
tools/testing/selftests/rcutorture/configs/rcu/TREE08.boot
··· 1 1 rcutorture.torture_type=sched 2 2 rcupdate.rcu_self_test=1 3 3 rcupdate.rcu_self_test_sched=1 4 + rcutree.rcu_fanout_exact=1
+1
tools/testing/selftests/rcutorture/configs/rcu/TREE09
··· 16 16 CONFIG_RCU_CPU_STALL_INFO=n 17 17 CONFIG_RCU_BOOST=n 18 18 CONFIG_DEBUG_OBJECTS_RCU_HEAD=n 19 + #CHECK#CONFIG_RCU_EXPERT=n
+12 -22
tools/testing/selftests/rcutorture/doc/TREE_RCU-kconfig.txt
··· 12 12 CONFIG_NO_HZ_FULL -- Do two, one with CONFIG_NO_HZ_FULL_SYSIDLE. 13 13 CONFIG_NO_HZ_FULL_SYSIDLE -- Do one. 14 14 CONFIG_PREEMPT -- Do half. (First three and #8.) 15 - CONFIG_PROVE_LOCKING -- Do all but two, covering CONFIG_PROVE_RCU and not. 16 - CONFIG_PROVE_RCU -- Do all but one under CONFIG_PROVE_LOCKING. 15 + CONFIG_PROVE_LOCKING -- Do several, covering CONFIG_DEBUG_LOCK_ALLOC=y and not. 16 + CONFIG_PROVE_RCU -- Hardwired to CONFIG_PROVE_LOCKING. 17 17 CONFIG_RCU_BOOST -- one of PREEMPT_RCU. 18 18 CONFIG_RCU_KTHREAD_PRIO -- set to 2 for _BOOST testing. 19 - CONFIG_RCU_CPU_STALL_INFO -- Do one. 20 - CONFIG_RCU_FANOUT -- Cover hierarchy as currently, but overlap with others. 21 - CONFIG_RCU_FANOUT_EXACT -- Do one. 19 + CONFIG_RCU_CPU_STALL_INFO -- Now default, avoid at least twice. 20 + CONFIG_RCU_FANOUT -- Cover hierarchy, but overlap with others. 22 21 CONFIG_RCU_FANOUT_LEAF -- Do one non-default. 23 22 CONFIG_RCU_FAST_NO_HZ -- Do one, but not with CONFIG_RCU_NOCB_CPU_ALL. 24 23 CONFIG_RCU_NOCB_CPU -- Do three, see below. ··· 26 27 CONFIG_RCU_NOCB_CPU_ZERO -- Do one. 27 28 CONFIG_RCU_TRACE -- Do half. 28 29 CONFIG_SMP -- Need one !SMP for PREEMPT_RCU. 30 + !RCU_EXPERT -- Do a few, but these have to be vanilla configurations. 29 31 RCU-bh: Do one with PREEMPT and one with !PREEMPT. 30 32 RCU-sched: Do one with PREEMPT but not BOOST. 31 33 32 34 33 - Hierarchy: 35 + Boot parameters: 34 36 35 - TREE01. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=8, CONFIG_RCU_FANOUT_EXACT=n. 36 - TREE02. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=3, CONFIG_RCU_FANOUT_EXACT=n, 37 - CONFIG_RCU_FANOUT_LEAF=3. 38 - TREE03. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=4, CONFIG_RCU_FANOUT_EXACT=n, 39 - CONFIG_RCU_FANOUT_LEAF=4. 40 - TREE04. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=2, CONFIG_RCU_FANOUT_EXACT=n, 41 - CONFIG_RCU_FANOUT_LEAF=2. 42 - TREE05. CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=6, CONFIG_RCU_FANOUT_EXACT=n 43 - CONFIG_RCU_FANOUT_LEAF=6. 44 - TREE06. 
CONFIG_NR_CPUS=8, CONFIG_RCU_FANOUT=6, CONFIG_RCU_FANOUT_EXACT=y 45 - CONFIG_RCU_FANOUT_LEAF=6. 46 - TREE07. CONFIG_NR_CPUS=16, CONFIG_RCU_FANOUT=2, CONFIG_RCU_FANOUT_EXACT=n, 47 - CONFIG_RCU_FANOUT_LEAF=2. 48 - TREE08. CONFIG_NR_CPUS=16, CONFIG_RCU_FANOUT=3, CONFIG_RCU_FANOUT_EXACT=y, 49 - CONFIG_RCU_FANOUT_LEAF=2. 50 - TREE09. CONFIG_NR_CPUS=1. 37 + nohz_full - do at least one. 38 + maxcpu -- do at least one. 39 + rcupdate.rcu_self_test_bh -- Do at least one each, offloaded and not. 40 + rcupdate.rcu_self_test_sched -- Do at least one each, offloaded and not. 41 + rcupdate.rcu_self_test -- Do at least one each, offloaded and not. 42 + rcutree.rcu_fanout_exact -- Do at least one. 51 43 52 44 53 45 Kconfig Parameters Ignored: