Merge branch 'scftorture.2020.08.24a' into HEAD

scftorture.2020.08.24a: Torture tests for smp_call_function() and friends.

+982 -210
+110 -18
Documentation/admin-guide/kernel-parameters.txt
··· 4157 4157 rcu_node tree with an eye towards determining 4158 4158 why a new grace period has not yet started. 4159 4159 4160 - rcuperf.gp_async= [KNL] 4160 + rcuscale.gp_async= [KNL] 4161 4161 Measure performance of asynchronous 4162 4162 grace-period primitives such as call_rcu(). 4163 4163 4164 - rcuperf.gp_async_max= [KNL] 4164 + rcuscale.gp_async_max= [KNL] 4165 4165 Specify the maximum number of outstanding 4166 4166 callbacks per writer thread. When a writer 4167 4167 thread exceeds this limit, it invokes the 4168 4168 corresponding flavor of rcu_barrier() to allow 4169 4169 previously posted callbacks to drain. 4170 4170 4171 - rcuperf.gp_exp= [KNL] 4171 + rcuscale.gp_exp= [KNL] 4172 4172 Measure performance of expedited synchronous 4173 4173 grace-period primitives. 4174 4174 4175 - rcuperf.holdoff= [KNL] 4175 + rcuscale.holdoff= [KNL] 4176 4176 Set test-start holdoff period. The purpose of 4177 4177 this parameter is to delay the start of the 4178 4178 test until boot completes in order to avoid 4179 4179 interference. 4180 4180 4181 - rcuperf.kfree_rcu_test= [KNL] 4181 + rcuscale.kfree_rcu_test= [KNL] 4182 4182 Set to measure performance of kfree_rcu() flooding. 4183 4183 4184 - rcuperf.kfree_nthreads= [KNL] 4184 + rcuscale.kfree_nthreads= [KNL] 4185 4185 The number of threads running loops of kfree_rcu(). 4186 4186 4187 - rcuperf.kfree_alloc_num= [KNL] 4187 + rcuscale.kfree_alloc_num= [KNL] 4188 4188 Number of allocations and frees done in an iteration. 4189 4189 4190 - rcuperf.kfree_loops= [KNL] 4191 - Number of loops doing rcuperf.kfree_alloc_num number 4190 + rcuscale.kfree_loops= [KNL] 4191 + Number of loops doing rcuscale.kfree_alloc_num number 4192 4192 of allocations and frees. 4193 4193 4194 - rcuperf.nreaders= [KNL] 4194 + rcuscale.nreaders= [KNL] 4195 4195 Set number of RCU readers. The value -1 selects 4196 4196 N, where N is the number of CPUs. A value 4197 4197 "n" less than -1 selects N-n+1, where N is again ··· 4200 4200 A value of "n" less than or equal to -N selects 4201 4201 a single reader. 4202 4202 4203 - rcuperf.nwriters= [KNL] 4203 + rcuscale.nwriters= [KNL] 4204 4204 Set number of RCU writers. The values operate 4205 - the same as for rcuperf.nreaders. 4205 + the same as for rcuscale.nreaders. 4206 4206 N, where N is the number of CPUs 4207 4207 4208 - rcuperf.perf_type= [KNL] 4208 + rcuscale.scale_type= [KNL] 4209 4209 Specify the RCU implementation to test. 4210 4210 4211 - rcuperf.shutdown= [KNL] 4211 + rcuscale.shutdown= [KNL] 4212 4212 Shut the system down after performance tests 4213 4213 complete. This is useful for hands-off automated 4214 4214 testing. 4215 4215 4216 - rcuperf.verbose= [KNL] 4216 + rcuscale.verbose= [KNL] 4217 4217 Enable additional printk() statements. 4218 4218 4219 - rcuperf.writer_holdoff= [KNL] 4219 + rcuscale.writer_holdoff= [KNL] 4220 4220 Write-side holdoff between grace periods, 4221 4221 in microseconds. The default of zero says 4222 4222 no holdoff. ··· 4502 4502 refscale.shutdown= [KNL] 4503 4503 Shut down the system at the end of the performance 4504 4504 test. This defaults to 1 (shut it down) when 4505 - rcuperf is built into the kernel and to 0 (leave 4506 - it running) when rcuperf is built as a module. 4505 + refscale is built into the kernel and to 0 (leave 4506 + it running) when refscale is built as a module. 4507 4507 4508 4508 refscale.verbose= [KNL] 4509 4509 Enable additional printk() statements. ··· 4648 4648 and so on. 4649 4649 Format: integer between 0 and 10 4650 4650 Default is 0. 
4651 + 4652 + scftorture.holdoff= [KNL] 4653 + Number of seconds to hold off before starting 4654 + test. Defaults to zero for module insertion and 4655 + to 10 seconds for built-in smp_call_function() 4656 + tests. 4657 + 4658 + scftorture.longwait= [KNL] 4659 + Request ridiculously long waits randomly selected 4660 + up to the chosen limit in seconds. Zero (the 4661 + default) disables this feature. Please note 4662 + that requesting even small non-zero numbers of 4663 + seconds can result in RCU CPU stall warnings, 4664 + softlockup complaints, and so on. 4665 + 4666 + scftorture.nthreads= [KNL] 4667 + Number of kthreads to spawn to invoke the 4668 + smp_call_function() family of functions. 4669 + The default of -1 specifies a number of kthreads 4670 + equal to the number of CPUs. 4671 + 4672 + scftorture.onoff_holdoff= [KNL] 4673 + Number of seconds to wait after the start of the 4674 + test before initiating CPU-hotplug operations. 4675 + 4676 + scftorture.onoff_interval= [KNL] 4677 + Number of seconds to wait between successive 4678 + CPU-hotplug operations. Specifying zero (which 4679 + is the default) disables CPU-hotplug operations. 4680 + 4681 + scftorture.shutdown_secs= [KNL] 4682 + The number of seconds following the start of the 4683 + test after which to shut down the system. The 4684 + default of zero avoids shutting down the system. 4685 + Non-zero values are useful for automated tests. 4686 + 4687 + scftorture.stat_interval= [KNL] 4688 + The number of seconds between outputting the 4689 + current test statistics to the console. A value 4690 + of zero disables statistics output. 4691 + 4692 + scftorture.stutter_cpus= [KNL] 4693 + The number of jiffies to wait between each change 4694 + to the set of CPUs under test. 4695 + 4696 + scftorture.use_cpus_read_lock= [KNL] 4697 + Use cpus_read_lock() instead of the default 4698 + preempt_disable() to disable CPU hotplug 4699 + while invoking one of the smp_call_function*() 4700 + functions. 4701 + 4702 + scftorture.verbose= [KNL] 4703 + Enable additional printk() statements. 4704 + 4705 + scftorture.weight_single= [KNL] 4706 + The probability weighting to use for the 4707 + smp_call_function_single() function with a zero 4708 + "wait" parameter. A value of -1 selects the 4709 + default if all other weights are -1. However, 4710 + if at least one weight has some other value, a 4711 + value of -1 will instead select a weight of zero. 4712 + 4713 + scftorture.weight_single_wait= [KNL] 4714 + The probability weighting to use for the 4715 + smp_call_function_single() function with a 4716 + non-zero "wait" parameter. See weight_single. 4717 + 4718 + scftorture.weight_many= [KNL] 4719 + The probability weighting to use for the 4720 + smp_call_function_many() function with a zero 4721 + "wait" parameter. See weight_single. 4722 + Note well that setting a high probability for 4723 + this weighting can place serious IPI load 4724 + on the system. 4725 + 4726 + scftorture.weight_many_wait= [KNL] 4727 + The probability weighting to use for the 4728 + smp_call_function_many() function with a 4729 + non-zero "wait" parameter. See weight_single 4730 + and weight_many. 4731 + 4732 + scftorture.weight_all= [KNL] 4733 + The probability weighting to use for the 4734 + smp_call_function() function with a zero 4735 + "wait" parameter. See weight_single and 4736 + weight_many. 4737 + 4738 + scftorture.weight_all_wait= [KNL] 4739 + The probability weighting to use for the 4740 + smp_call_function() function with a 4741 + non-zero "wait" parameter. 
See weight_single 4742 + and weight_many. 4651 4743 4652 4744 skew_tick= [KNL] Offset the periodic timer tick per cpu to mitigate 4653 4745 xtime_lock contention on larger systems, and/or RCU lock
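
The six weight_* parameters above form a single discrete distribution: scf_sel_add() in kernel/scftorture.c (below) accumulates them into a cumulative-weight table, and scf_sel_rand() picks the first entry whose cumulative weight covers a random draw. As a rough illustration of how the defaults play out, here is a minimal user-space C sketch (not kernel code; the nr_cpu_ids value of 8 is an assumption for illustration) that builds the same table from the default weights chosen in scf_torture_init():

#include <stdio.h>
#include <stdlib.h>

struct sel { unsigned long cumweight; const char *name; };

int main(void)
{
	unsigned long nr_cpu_ids = 8;			/* assumed CPU count */
	unsigned long w[6] = {				/* defaults from scf_torture_init() */
		2 * nr_cpu_ids, 2 * nr_cpu_ids,		/* single no-wait, single wait */
		2, 2,					/* many no-wait, many wait */
		1, 1,					/* all no-wait, all wait */
	};
	const char *name[6] = {
		"single", "single_wait", "many", "many_wait", "all", "all_wait",
	};
	struct sel tab[6];
	unsigned long tot = 0;
	int i;

	for (i = 0; i < 6; i++) {	/* cumulative table, as in scf_sel_add() */
		tot += w[i];
		tab[i].cumweight = tot;
		tab[i].name = name[i];
	}
	for (i = 0; i < 6; i++)		/* per-primitive share, as in scf_sel_dump() */
		printf("%-12s %6.2f%%\n", tab[i].name, 100.0 * w[i] / tot);

	/* Pick the first entry whose cumulative weight covers the draw, */
	/* as in scf_sel_rand(). */
	unsigned long draw = (unsigned long)rand() % (tot + 1);
	for (i = 0; i < 6; i++)
		if (tab[i].cumweight >= draw)
			break;
	printf("draw %lu -> %s\n", draw, tab[i].name);
	return 0;
}

With eight CPUs this gives the two single-CPU variants roughly 84% of the draws between them, matching the intent that single-CPU IPIs dominate the default load.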
+2 -1
MAINTAINERS
··· 17510 17510 T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev 17511 17511 F: Documentation/RCU/torture.rst 17512 17512 F: kernel/locking/locktorture.c 17513 - F: kernel/rcu/rcuperf.c 17513 + F: kernel/rcu/rcuscale.c 17514 17514 F: kernel/rcu/rcutorture.c 17515 + F: kernel/rcu/refscale.c 17515 17516 F: kernel/torture.c 17516 17517 17517 17518 TOSHIBA ACPI EXTRAS DRIVER
+2
kernel/Makefile
··· 133 133 KCSAN_SANITIZE_stackleak.o := n 134 134 KCOV_INSTRUMENT_stackleak.o := n 135 135 136 + obj-$(CONFIG_SCF_TORTURE_TEST) += scftorture.o 137 + 136 138 $(obj)/configs.o: $(obj)/config_data.gz 137 139 138 140 targets += config_data.gz
+1 -1
kernel/rcu/Kconfig.debug
··· 23 23 tristate 24 24 default n 25 25 26 - config RCU_PERF_TEST 26 + config RCU_SCALE_TEST 27 27 tristate "performance tests for RCU" 28 28 depends on DEBUG_KERNEL 29 29 select TORTURE_TEST
+1 -1
kernel/rcu/Makefile
··· 11 11 obj-$(CONFIG_TREE_SRCU) += srcutree.o 12 12 obj-$(CONFIG_TINY_SRCU) += srcutiny.o 13 13 obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o 14 - obj-$(CONFIG_RCU_PERF_TEST) += rcuperf.o 14 + obj-$(CONFIG_RCU_SCALE_TEST) += rcuscale.o 15 15 obj-$(CONFIG_RCU_REF_SCALE_TEST) += refscale.o 16 16 obj-$(CONFIG_TREE_RCU) += tree.o 17 17 obj-$(CONFIG_TINY_RCU) += tiny.o
+165 -165
kernel/rcu/rcuperf.c kernel/rcu/rcuscale.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 2 /* 3 - * Read-Copy Update module-based performance-test facility 3 + * Read-Copy Update module-based scalability-test facility 4 4 * 5 5 * Copyright (C) IBM Corporation, 2015 6 6 * ··· 44 44 MODULE_LICENSE("GPL"); 45 45 MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com>"); 46 46 47 - #define PERF_FLAG "-perf:" 48 - #define PERFOUT_STRING(s) \ 49 - pr_alert("%s" PERF_FLAG " %s\n", perf_type, s) 50 - #define VERBOSE_PERFOUT_STRING(s) \ 51 - do { if (verbose) pr_alert("%s" PERF_FLAG " %s\n", perf_type, s); } while (0) 52 - #define VERBOSE_PERFOUT_ERRSTRING(s) \ 53 - do { if (verbose) pr_alert("%s" PERF_FLAG "!!! %s\n", perf_type, s); } while (0) 47 + #define SCALE_FLAG "-scale:" 48 + #define SCALEOUT_STRING(s) \ 49 + pr_alert("%s" SCALE_FLAG " %s\n", scale_type, s) 50 + #define VERBOSE_SCALEOUT_STRING(s) \ 51 + do { if (verbose) pr_alert("%s" SCALE_FLAG " %s\n", scale_type, s); } while (0) 52 + #define VERBOSE_SCALEOUT_ERRSTRING(s) \ 53 + do { if (verbose) pr_alert("%s" SCALE_FLAG "!!! %s\n", scale_type, s); } while (0) 54 54 55 55 /* 56 56 * The intended use cases for the nreaders and nwriters module parameters ··· 61 61 * nr_cpus for a mixed reader/writer test. 62 62 * 63 63 * 2. Specify the nr_cpus kernel boot parameter, but set 64 - * rcuperf.nreaders to zero. This will set nwriters to the 64 + * rcuscale.nreaders to zero. This will set nwriters to the 65 65 * value specified by nr_cpus for an update-only test. 66 66 * 67 67 * 3. Specify the nr_cpus kernel boot parameter, but set 68 - * rcuperf.nwriters to zero. This will set nreaders to the 68 + * rcuscale.nwriters to zero. This will set nreaders to the 69 69 * value specified by nr_cpus for a read-only test. 70 70 * 71 71 * Various other use cases may of course be specified. 72 72 * 73 73 * Note that this test's readers are intended only as a test load for 74 - * the writers. The reader performance statistics will be overly 74 + * the writers. The reader scalability statistics will be overly 75 75 * pessimistic due to the per-critical-section interrupt disabling, 76 76 * test-end checks, and the pair of calls through pointers. 
77 77 */ 78 78 79 79 #ifdef MODULE 80 - # define RCUPERF_SHUTDOWN 0 80 + # define RCUSCALE_SHUTDOWN 0 81 81 #else 82 - # define RCUPERF_SHUTDOWN 1 82 + # define RCUSCALE_SHUTDOWN 1 83 83 #endif 84 84 85 85 torture_param(bool, gp_async, false, "Use asynchronous GP wait primitives"); ··· 88 88 torture_param(int, holdoff, 10, "Holdoff time before test start (s)"); 89 89 torture_param(int, nreaders, -1, "Number of RCU reader threads"); 90 90 torture_param(int, nwriters, -1, "Number of RCU updater threads"); 91 - torture_param(bool, shutdown, RCUPERF_SHUTDOWN, 92 - "Shutdown at end of performance tests."); 91 + torture_param(bool, shutdown, RCUSCALE_SHUTDOWN, 92 + "Shutdown at end of scalability tests."); 93 93 torture_param(int, verbose, 1, "Enable verbose debugging printk()s"); 94 94 torture_param(int, writer_holdoff, 0, "Holdoff (us) between GPs, zero to disable"); 95 - torture_param(int, kfree_rcu_test, 0, "Do we run a kfree_rcu() perf test?"); 95 + torture_param(int, kfree_rcu_test, 0, "Do we run a kfree_rcu() scale test?"); 96 96 torture_param(int, kfree_mult, 1, "Multiple of kfree_obj size to allocate."); 97 97 98 - static char *perf_type = "rcu"; 99 - module_param(perf_type, charp, 0444); 100 - MODULE_PARM_DESC(perf_type, "Type of RCU to performance-test (rcu, srcu, ...)"); 98 + static char *scale_type = "rcu"; 99 + module_param(scale_type, charp, 0444); 100 + MODULE_PARM_DESC(scale_type, "Type of RCU to scalability-test (rcu, srcu, ...)"); 101 101 102 102 static int nrealreaders; 103 103 static int nrealwriters; ··· 107 107 108 108 static u64 **writer_durations; 109 109 static int *writer_n_durations; 110 - static atomic_t n_rcu_perf_reader_started; 111 - static atomic_t n_rcu_perf_writer_started; 112 - static atomic_t n_rcu_perf_writer_finished; 110 + static atomic_t n_rcu_scale_reader_started; 111 + static atomic_t n_rcu_scale_writer_started; 112 + static atomic_t n_rcu_scale_writer_finished; 113 113 static wait_queue_head_t shutdown_wq; 114 - static u64 t_rcu_perf_writer_started; 115 - static u64 t_rcu_perf_writer_finished; 114 + static u64 t_rcu_scale_writer_started; 115 + static u64 t_rcu_scale_writer_finished; 116 116 static unsigned long b_rcu_gp_test_started; 117 117 static unsigned long b_rcu_gp_test_finished; 118 118 static DEFINE_PER_CPU(atomic_t, n_async_inflight); ··· 124 124 * Operations vector for selecting different types of tests. 125 125 */ 126 126 127 - struct rcu_perf_ops { 127 + struct rcu_scale_ops { 128 128 int ptype; 129 129 void (*init)(void); 130 130 void (*cleanup)(void); ··· 140 140 const char *name; 141 141 }; 142 142 143 - static struct rcu_perf_ops *cur_ops; 143 + static struct rcu_scale_ops *cur_ops; 144 144 145 145 /* 146 - * Definitions for rcu perf testing. 146 + * Definitions for rcu scalability testing. 
147 147 */ 148 148 149 - static int rcu_perf_read_lock(void) __acquires(RCU) 149 + static int rcu_scale_read_lock(void) __acquires(RCU) 150 150 { 151 151 rcu_read_lock(); 152 152 return 0; 153 153 } 154 154 155 - static void rcu_perf_read_unlock(int idx) __releases(RCU) 155 + static void rcu_scale_read_unlock(int idx) __releases(RCU) 156 156 { 157 157 rcu_read_unlock(); 158 158 } ··· 162 162 return 0; 163 163 } 164 164 165 - static void rcu_sync_perf_init(void) 165 + static void rcu_sync_scale_init(void) 166 166 { 167 167 } 168 168 169 - static struct rcu_perf_ops rcu_ops = { 169 + static struct rcu_scale_ops rcu_ops = { 170 170 .ptype = RCU_FLAVOR, 171 - .init = rcu_sync_perf_init, 172 - .readlock = rcu_perf_read_lock, 173 - .readunlock = rcu_perf_read_unlock, 171 + .init = rcu_sync_scale_init, 172 + .readlock = rcu_scale_read_lock, 173 + .readunlock = rcu_scale_read_unlock, 174 174 .get_gp_seq = rcu_get_gp_seq, 175 175 .gp_diff = rcu_seq_diff, 176 176 .exp_completed = rcu_exp_batches_completed, ··· 182 182 }; 183 183 184 184 /* 185 - * Definitions for srcu perf testing. 185 + * Definitions for srcu scalability testing. 186 186 */ 187 187 188 - DEFINE_STATIC_SRCU(srcu_ctl_perf); 189 - static struct srcu_struct *srcu_ctlp = &srcu_ctl_perf; 188 + DEFINE_STATIC_SRCU(srcu_ctl_scale); 189 + static struct srcu_struct *srcu_ctlp = &srcu_ctl_scale; 190 190 191 - static int srcu_perf_read_lock(void) __acquires(srcu_ctlp) 191 + static int srcu_scale_read_lock(void) __acquires(srcu_ctlp) 192 192 { 193 193 return srcu_read_lock(srcu_ctlp); 194 194 } 195 195 196 - static void srcu_perf_read_unlock(int idx) __releases(srcu_ctlp) 196 + static void srcu_scale_read_unlock(int idx) __releases(srcu_ctlp) 197 197 { 198 198 srcu_read_unlock(srcu_ctlp, idx); 199 199 } 200 200 201 - static unsigned long srcu_perf_completed(void) 201 + static unsigned long srcu_scale_completed(void) 202 202 { 203 203 return srcu_batches_completed(srcu_ctlp); 204 204 } ··· 213 213 srcu_barrier(srcu_ctlp); 214 214 } 215 215 216 - static void srcu_perf_synchronize(void) 216 + static void srcu_scale_synchronize(void) 217 217 { 218 218 synchronize_srcu(srcu_ctlp); 219 219 } 220 220 221 - static void srcu_perf_synchronize_expedited(void) 221 + static void srcu_scale_synchronize_expedited(void) 222 222 { 223 223 synchronize_srcu_expedited(srcu_ctlp); 224 224 } 225 225 226 - static struct rcu_perf_ops srcu_ops = { 226 + static struct rcu_scale_ops srcu_ops = { 227 227 .ptype = SRCU_FLAVOR, 228 - .init = rcu_sync_perf_init, 229 - .readlock = srcu_perf_read_lock, 230 - .readunlock = srcu_perf_read_unlock, 231 - .get_gp_seq = srcu_perf_completed, 228 + .init = rcu_sync_scale_init, 229 + .readlock = srcu_scale_read_lock, 230 + .readunlock = srcu_scale_read_unlock, 231 + .get_gp_seq = srcu_scale_completed, 232 232 .gp_diff = rcu_seq_diff, 233 - .exp_completed = srcu_perf_completed, 233 + .exp_completed = srcu_scale_completed, 234 234 .async = srcu_call_rcu, 235 235 .gp_barrier = srcu_rcu_barrier, 236 - .sync = srcu_perf_synchronize, 237 - .exp_sync = srcu_perf_synchronize_expedited, 236 + .sync = srcu_scale_synchronize, 237 + .exp_sync = srcu_scale_synchronize_expedited, 238 238 .name = "srcu" 239 239 }; 240 240 241 241 static struct srcu_struct srcud; 242 242 243 - static void srcu_sync_perf_init(void) 243 + static void srcu_sync_scale_init(void) 244 244 { 245 245 srcu_ctlp = &srcud; 246 246 init_srcu_struct(srcu_ctlp); 247 247 } 248 248 249 - static void srcu_sync_perf_cleanup(void) 249 + static void srcu_sync_scale_cleanup(void) 250 250 { 
251 251 cleanup_srcu_struct(srcu_ctlp); 252 252 } 253 253 254 - static struct rcu_perf_ops srcud_ops = { 254 + static struct rcu_scale_ops srcud_ops = { 255 255 .ptype = SRCU_FLAVOR, 256 - .init = srcu_sync_perf_init, 257 - .cleanup = srcu_sync_perf_cleanup, 258 - .readlock = srcu_perf_read_lock, 259 - .readunlock = srcu_perf_read_unlock, 260 - .get_gp_seq = srcu_perf_completed, 256 + .init = srcu_sync_scale_init, 257 + .cleanup = srcu_sync_scale_cleanup, 258 + .readlock = srcu_scale_read_lock, 259 + .readunlock = srcu_scale_read_unlock, 260 + .get_gp_seq = srcu_scale_completed, 261 261 .gp_diff = rcu_seq_diff, 262 - .exp_completed = srcu_perf_completed, 262 + .exp_completed = srcu_scale_completed, 263 263 .async = srcu_call_rcu, 264 264 .gp_barrier = srcu_rcu_barrier, 265 - .sync = srcu_perf_synchronize, 266 - .exp_sync = srcu_perf_synchronize_expedited, 265 + .sync = srcu_scale_synchronize, 266 + .exp_sync = srcu_scale_synchronize_expedited, 267 267 .name = "srcud" 268 268 }; 269 269 270 270 /* 271 - * Definitions for RCU-tasks perf testing. 271 + * Definitions for RCU-tasks scalability testing. 272 272 */ 273 273 274 - static int tasks_perf_read_lock(void) 274 + static int tasks_scale_read_lock(void) 275 275 { 276 276 return 0; 277 277 } 278 278 279 - static void tasks_perf_read_unlock(int idx) 279 + static void tasks_scale_read_unlock(int idx) 280 280 { 281 281 } 282 282 283 - static struct rcu_perf_ops tasks_ops = { 283 + static struct rcu_scale_ops tasks_ops = { 284 284 .ptype = RCU_TASKS_FLAVOR, 285 - .init = rcu_sync_perf_init, 286 - .readlock = tasks_perf_read_lock, 287 - .readunlock = tasks_perf_read_unlock, 285 + .init = rcu_sync_scale_init, 286 + .readlock = tasks_scale_read_lock, 287 + .readunlock = tasks_scale_read_unlock, 288 288 .get_gp_seq = rcu_no_completed, 289 289 .gp_diff = rcu_seq_diff, 290 290 .async = call_rcu_tasks, ··· 294 294 .name = "tasks" 295 295 }; 296 296 297 - static unsigned long rcuperf_seq_diff(unsigned long new, unsigned long old) 297 + static unsigned long rcuscale_seq_diff(unsigned long new, unsigned long old) 298 298 { 299 299 if (!cur_ops->gp_diff) 300 300 return new - old; ··· 302 302 } 303 303 304 304 /* 305 - * If performance tests complete, wait for shutdown to commence. 305 + * If scalability tests complete, wait for shutdown to commence. 306 306 */ 307 - static void rcu_perf_wait_shutdown(void) 307 + static void rcu_scale_wait_shutdown(void) 308 308 { 309 309 cond_resched_tasks_rcu_qs(); 310 - if (atomic_read(&n_rcu_perf_writer_finished) < nrealwriters) 310 + if (atomic_read(&n_rcu_scale_writer_finished) < nrealwriters) 311 311 return; 312 312 while (!torture_must_stop()) 313 313 schedule_timeout_uninterruptible(1); 314 314 } 315 315 316 316 /* 317 - * RCU perf reader kthread. Repeatedly does empty RCU read-side critical 318 - * section, minimizing update-side interference. However, the point of 319 - * this test is not to evaluate reader performance, but instead to serve 320 - * as a test load for update-side performance testing. 317 + * RCU scalability reader kthread. Repeatedly does empty RCU read-side 318 + * critical section, minimizing update-side interference. However, the 319 + * point of this test is not to evaluate reader scalability, but instead 320 + * to serve as a test load for update-side scalability testing. 
321 321 */ 322 322 static int 323 - rcu_perf_reader(void *arg) 323 + rcu_scale_reader(void *arg) 324 324 { 325 325 unsigned long flags; 326 326 int idx; 327 327 long me = (long)arg; 328 328 329 - VERBOSE_PERFOUT_STRING("rcu_perf_reader task started"); 329 + VERBOSE_SCALEOUT_STRING("rcu_scale_reader task started"); 330 330 set_cpus_allowed_ptr(current, cpumask_of(me % nr_cpu_ids)); 331 331 set_user_nice(current, MAX_NICE); 332 - atomic_inc(&n_rcu_perf_reader_started); 332 + atomic_inc(&n_rcu_scale_reader_started); 333 333 334 334 do { 335 335 local_irq_save(flags); 336 336 idx = cur_ops->readlock(); 337 337 cur_ops->readunlock(idx); 338 338 local_irq_restore(flags); 339 - rcu_perf_wait_shutdown(); 339 + rcu_scale_wait_shutdown(); 340 340 } while (!torture_must_stop()); 341 - torture_kthread_stopping("rcu_perf_reader"); 341 + torture_kthread_stopping("rcu_scale_reader"); 342 342 return 0; 343 343 } 344 344 345 345 /* 346 - * Callback function for asynchronous grace periods from rcu_perf_writer(). 346 + * Callback function for asynchronous grace periods from rcu_scale_writer(). 347 347 */ 348 - static void rcu_perf_async_cb(struct rcu_head *rhp) 348 + static void rcu_scale_async_cb(struct rcu_head *rhp) 349 349 { 350 350 atomic_dec(this_cpu_ptr(&n_async_inflight)); 351 351 kfree(rhp); 352 352 } 353 353 354 354 /* 355 - * RCU perf writer kthread. Repeatedly does a grace period. 355 + * RCU scale writer kthread. Repeatedly does a grace period. 356 356 */ 357 357 static int 358 - rcu_perf_writer(void *arg) 358 + rcu_scale_writer(void *arg) 359 359 { 360 360 int i = 0; 361 361 int i_max; ··· 366 366 u64 *wdp; 367 367 u64 *wdpp = writer_durations[me]; 368 368 369 - VERBOSE_PERFOUT_STRING("rcu_perf_writer task started"); 369 + VERBOSE_SCALEOUT_STRING("rcu_scale_writer task started"); 370 370 WARN_ON(!wdpp); 371 371 set_cpus_allowed_ptr(current, cpumask_of(me % nr_cpu_ids)); 372 372 sched_set_fifo_low(current); ··· 383 383 schedule_timeout_uninterruptible(1); 384 384 385 385 t = ktime_get_mono_fast_ns(); 386 - if (atomic_inc_return(&n_rcu_perf_writer_started) >= nrealwriters) { 387 - t_rcu_perf_writer_started = t; 386 + if (atomic_inc_return(&n_rcu_scale_writer_started) >= nrealwriters) { 387 + t_rcu_scale_writer_started = t; 388 388 if (gp_exp) { 389 389 b_rcu_gp_test_started = 390 390 cur_ops->exp_completed() / 2; ··· 404 404 rhp = kmalloc(sizeof(*rhp), GFP_KERNEL); 405 405 if (rhp && atomic_read(this_cpu_ptr(&n_async_inflight)) < gp_async_max) { 406 406 atomic_inc(this_cpu_ptr(&n_async_inflight)); 407 - cur_ops->async(rhp, rcu_perf_async_cb); 407 + cur_ops->async(rhp, rcu_scale_async_cb); 408 408 rhp = NULL; 409 409 } else if (!kthread_should_stop()) { 410 410 cur_ops->gp_barrier(); ··· 421 421 *wdp = t - *wdp; 422 422 i_max = i; 423 423 if (!started && 424 - atomic_read(&n_rcu_perf_writer_started) >= nrealwriters) 424 + atomic_read(&n_rcu_scale_writer_started) >= nrealwriters) 425 425 started = true; 426 426 if (!done && i >= MIN_MEAS) { 427 427 done = true; 428 428 sched_set_normal(current, 0); 429 - pr_alert("%s%s rcu_perf_writer %ld has %d measurements\n", 430 - perf_type, PERF_FLAG, me, MIN_MEAS); 431 - if (atomic_inc_return(&n_rcu_perf_writer_finished) >= 429 + pr_alert("%s%s rcu_scale_writer %ld has %d measurements\n", 430 + scale_type, SCALE_FLAG, me, MIN_MEAS); 431 + if (atomic_inc_return(&n_rcu_scale_writer_finished) >= 432 432 nrealwriters) { 433 433 schedule_timeout_interruptible(10); 434 434 rcu_ftrace_dump(DUMP_ALL); 435 - PERFOUT_STRING("Test complete"); 436 - 
t_rcu_perf_writer_finished = t; 435 + SCALEOUT_STRING("Test complete"); 436 + t_rcu_scale_writer_finished = t; 437 437 if (gp_exp) { 438 438 b_rcu_gp_test_finished = 439 439 cur_ops->exp_completed() / 2; ··· 448 448 } 449 449 } 450 450 if (done && !alldone && 451 - atomic_read(&n_rcu_perf_writer_finished) >= nrealwriters) 451 + atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters) 452 452 alldone = true; 453 453 if (started && !alldone && i < MAX_MEAS - 1) 454 454 i++; 455 - rcu_perf_wait_shutdown(); 455 + rcu_scale_wait_shutdown(); 456 456 } while (!torture_must_stop()); 457 457 if (gp_async) { 458 458 cur_ops->gp_barrier(); 459 459 } 460 460 writer_n_durations[me] = i_max; 461 - torture_kthread_stopping("rcu_perf_writer"); 461 + torture_kthread_stopping("rcu_scale_writer"); 462 462 return 0; 463 463 } 464 464 465 465 static void 466 - rcu_perf_print_module_parms(struct rcu_perf_ops *cur_ops, const char *tag) 466 + rcu_scale_print_module_parms(struct rcu_scale_ops *cur_ops, const char *tag) 467 467 { 468 - pr_alert("%s" PERF_FLAG 468 + pr_alert("%s" SCALE_FLAG 469 469 "--- %s: nreaders=%d nwriters=%d verbose=%d shutdown=%d\n", 470 - perf_type, tag, nrealreaders, nrealwriters, verbose, shutdown); 470 + scale_type, tag, nrealreaders, nrealwriters, verbose, shutdown); 471 471 } 472 472 473 473 static void 474 - rcu_perf_cleanup(void) 474 + rcu_scale_cleanup(void) 475 475 { 476 476 int i; 477 477 int j; ··· 484 484 * during the mid-boot phase, so have to wait till the end. 485 485 */ 486 486 if (rcu_gp_is_expedited() && !rcu_gp_is_normal() && !gp_exp) 487 - VERBOSE_PERFOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!"); 487 + VERBOSE_SCALEOUT_ERRSTRING("All grace periods expedited, no normal ones to measure!"); 488 488 if (rcu_gp_is_normal() && gp_exp) 489 - VERBOSE_PERFOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!"); 489 + VERBOSE_SCALEOUT_ERRSTRING("All grace periods normal, no expedited ones to measure!"); 490 490 if (gp_exp && gp_async) 491 - VERBOSE_PERFOUT_ERRSTRING("No expedited async GPs, so went with async!"); 491 + VERBOSE_SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!"); 492 492 493 493 if (torture_cleanup_begin()) 494 494 return; ··· 499 499 500 500 if (reader_tasks) { 501 501 for (i = 0; i < nrealreaders; i++) 502 - torture_stop_kthread(rcu_perf_reader, 502 + torture_stop_kthread(rcu_scale_reader, 503 503 reader_tasks[i]); 504 504 kfree(reader_tasks); 505 505 } 506 506 507 507 if (writer_tasks) { 508 508 for (i = 0; i < nrealwriters; i++) { 509 - torture_stop_kthread(rcu_perf_writer, 509 + torture_stop_kthread(rcu_scale_writer, 510 510 writer_tasks[i]); 511 511 if (!writer_n_durations) 512 512 continue; 513 513 j = writer_n_durations[i]; 514 514 pr_alert("%s%s writer %d gps: %d\n", 515 - perf_type, PERF_FLAG, i, j); 515 + scale_type, SCALE_FLAG, i, j); 516 516 ngps += j; 517 517 } 518 518 pr_alert("%s%s start: %llu end: %llu duration: %llu gps: %d batches: %ld\n", 519 - perf_type, PERF_FLAG, 520 - t_rcu_perf_writer_started, t_rcu_perf_writer_finished, 521 - t_rcu_perf_writer_finished - 522 - t_rcu_perf_writer_started, 519 + scale_type, SCALE_FLAG, 520 + t_rcu_scale_writer_started, t_rcu_scale_writer_finished, 521 + t_rcu_scale_writer_finished - 522 + t_rcu_scale_writer_started, 523 523 ngps, 524 - rcuperf_seq_diff(b_rcu_gp_test_finished, 525 - b_rcu_gp_test_started)); 524 + rcuscale_seq_diff(b_rcu_gp_test_finished, 525 + b_rcu_gp_test_started)); 526 526 for (i = 0; i < nrealwriters; i++) { 527 527 if 
(!writer_durations) 528 528 break; ··· 534 534 for (j = 0; j <= writer_n_durations[i]; j++) { 535 535 wdp = &wdpp[j]; 536 536 pr_alert("%s%s %4d writer-duration: %5d %llu\n", 537 - perf_type, PERF_FLAG, 537 + scale_type, SCALE_FLAG, 538 538 i, j, *wdp); 539 539 if (j % 100 == 0) 540 540 schedule_timeout_uninterruptible(1); ··· 573 573 } 574 574 575 575 /* 576 - * RCU perf shutdown kthread. Just waits to be awakened, then shuts 576 + * RCU scalability shutdown kthread. Just waits to be awakened, then shuts 577 577 * down system. 578 578 */ 579 579 static int 580 - rcu_perf_shutdown(void *arg) 580 + rcu_scale_shutdown(void *arg) 581 581 { 582 582 wait_event(shutdown_wq, 583 - atomic_read(&n_rcu_perf_writer_finished) >= nrealwriters); 583 + atomic_read(&n_rcu_scale_writer_finished) >= nrealwriters); 584 584 smp_mb(); /* Wake before output. */ 585 - rcu_perf_cleanup(); 585 + rcu_scale_cleanup(); 586 586 kernel_power_off(); 587 587 return -EINVAL; 588 588 } 589 589 590 590 /* 591 - * kfree_rcu() performance tests: Start a kfree_rcu() loop on all CPUs for number 591 + * kfree_rcu() scalability tests: Start a kfree_rcu() loop on all CPUs for number 592 592 * of iterations and measure total time and number of GP for all iterations to complete. 593 593 */ 594 594 ··· 598 598 599 599 static struct task_struct **kfree_reader_tasks; 600 600 static int kfree_nrealthreads; 601 - static atomic_t n_kfree_perf_thread_started; 602 - static atomic_t n_kfree_perf_thread_ended; 601 + static atomic_t n_kfree_scale_thread_started; 602 + static atomic_t n_kfree_scale_thread_ended; 603 603 604 604 struct kfree_obj { 605 605 char kfree_obj[8]; ··· 607 607 }; 608 608 609 609 static int 610 - kfree_perf_thread(void *arg) 610 + kfree_scale_thread(void *arg) 611 611 { 612 612 int i, loop = 0; 613 613 long me = (long)arg; ··· 615 615 u64 start_time, end_time; 616 616 long long mem_begin, mem_during = 0; 617 617 618 - VERBOSE_PERFOUT_STRING("kfree_perf_thread task started"); 618 + VERBOSE_SCALEOUT_STRING("kfree_scale_thread task started"); 619 619 set_cpus_allowed_ptr(current, cpumask_of(me % nr_cpu_ids)); 620 620 set_user_nice(current, MAX_NICE); 621 621 622 622 start_time = ktime_get_mono_fast_ns(); 623 623 624 - if (atomic_inc_return(&n_kfree_perf_thread_started) >= kfree_nrealthreads) { 624 + if (atomic_inc_return(&n_kfree_scale_thread_started) >= kfree_nrealthreads) { 625 625 if (gp_exp) 626 626 b_rcu_gp_test_started = cur_ops->exp_completed() / 2; 627 627 else ··· 646 646 cond_resched(); 647 647 } while (!torture_must_stop() && ++loop < kfree_loops); 648 648 649 - if (atomic_inc_return(&n_kfree_perf_thread_ended) >= kfree_nrealthreads) { 649 + if (atomic_inc_return(&n_kfree_scale_thread_ended) >= kfree_nrealthreads) { 650 650 end_time = ktime_get_mono_fast_ns(); 651 651 652 652 if (gp_exp) ··· 656 656 657 657 pr_alert("Total time taken by all kfree'ers: %llu ns, loops: %d, batches: %ld, memory footprint: %lldMB\n", 658 658 (unsigned long long)(end_time - start_time), kfree_loops, 659 - rcuperf_seq_diff(b_rcu_gp_test_finished, b_rcu_gp_test_started), 659 + rcuscale_seq_diff(b_rcu_gp_test_finished, b_rcu_gp_test_started), 660 660 (mem_begin - mem_during) >> (20 - PAGE_SHIFT)); 661 661 662 662 if (shutdown) { ··· 665 665 } 666 666 } 667 667 668 - torture_kthread_stopping("kfree_perf_thread"); 668 + torture_kthread_stopping("kfree_scale_thread"); 669 669 return 0; 670 670 } 671 671 672 672 static void 673 - kfree_perf_cleanup(void) 673 + kfree_scale_cleanup(void) 674 674 { 675 675 int i; 676 676 ··· 679 679 680 680 if 
(kfree_reader_tasks) { 681 681 for (i = 0; i < kfree_nrealthreads; i++) 682 - torture_stop_kthread(kfree_perf_thread, 682 + torture_stop_kthread(kfree_scale_thread, 683 683 kfree_reader_tasks[i]); 684 684 kfree(kfree_reader_tasks); 685 685 } ··· 691 691 * shutdown kthread. Just waits to be awakened, then shuts down system. 692 692 */ 693 693 static int 694 - kfree_perf_shutdown(void *arg) 694 + kfree_scale_shutdown(void *arg) 695 695 { 696 696 wait_event(shutdown_wq, 697 - atomic_read(&n_kfree_perf_thread_ended) >= kfree_nrealthreads); 697 + atomic_read(&n_kfree_scale_thread_ended) >= kfree_nrealthreads); 698 698 699 699 smp_mb(); /* Wake before output. */ 700 700 701 - kfree_perf_cleanup(); 701 + kfree_scale_cleanup(); 702 702 kernel_power_off(); 703 703 return -EINVAL; 704 704 } 705 705 706 706 static int __init 707 - kfree_perf_init(void) 707 + kfree_scale_init(void) 708 708 { 709 709 long i; 710 710 int firsterr = 0; ··· 713 713 /* Start up the kthreads. */ 714 714 if (shutdown) { 715 715 init_waitqueue_head(&shutdown_wq); 716 - firsterr = torture_create_kthread(kfree_perf_shutdown, NULL, 716 + firsterr = torture_create_kthread(kfree_scale_shutdown, NULL, 717 717 shutdown_task); 718 718 if (firsterr) 719 719 goto unwind; ··· 730 730 } 731 731 732 732 for (i = 0; i < kfree_nrealthreads; i++) { 733 - firsterr = torture_create_kthread(kfree_perf_thread, (void *)i, 733 + firsterr = torture_create_kthread(kfree_scale_thread, (void *)i, 734 734 kfree_reader_tasks[i]); 735 735 if (firsterr) 736 736 goto unwind; 737 737 } 738 738 739 - while (atomic_read(&n_kfree_perf_thread_started) < kfree_nrealthreads) 739 + while (atomic_read(&n_kfree_scale_thread_started) < kfree_nrealthreads) 740 740 schedule_timeout_uninterruptible(1); 741 741 742 742 torture_init_end(); ··· 744 744 745 745 unwind: 746 746 torture_init_end(); 747 - kfree_perf_cleanup(); 747 + kfree_scale_cleanup(); 748 748 return firsterr; 749 749 } 750 750 751 751 static int __init 752 - rcu_perf_init(void) 752 + rcu_scale_init(void) 753 753 { 754 754 long i; 755 755 int firsterr = 0; 756 - static struct rcu_perf_ops *perf_ops[] = { 756 + static struct rcu_scale_ops *scale_ops[] = { 757 757 &rcu_ops, &srcu_ops, &srcud_ops, &tasks_ops, 758 758 }; 759 759 760 - if (!torture_init_begin(perf_type, verbose)) 760 + if (!torture_init_begin(scale_type, verbose)) 761 761 return -EBUSY; 762 762 763 - /* Process args and tell the world that the perf'er is on the job. */ 764 - for (i = 0; i < ARRAY_SIZE(perf_ops); i++) { 765 - cur_ops = perf_ops[i]; 766 - if (strcmp(perf_type, cur_ops->name) == 0) 763 + /* Process args and announce that the scalability'er is on the job. 
*/ 764 + for (i = 0; i < ARRAY_SIZE(scale_ops); i++) { 765 + cur_ops = scale_ops[i]; 766 + if (strcmp(scale_type, cur_ops->name) == 0) 767 767 break; 768 768 } 769 - if (i == ARRAY_SIZE(perf_ops)) { 770 - pr_alert("rcu-perf: invalid perf type: \"%s\"\n", perf_type); 771 - pr_alert("rcu-perf types:"); 772 - for (i = 0; i < ARRAY_SIZE(perf_ops); i++) 773 - pr_cont(" %s", perf_ops[i]->name); 769 + if (i == ARRAY_SIZE(scale_ops)) { 770 + pr_alert("rcu-scale: invalid scale type: \"%s\"\n", scale_type); 771 + pr_alert("rcu-scale types:"); 772 + for (i = 0; i < ARRAY_SIZE(scale_ops); i++) 773 + pr_cont(" %s", scale_ops[i]->name); 774 774 pr_cont("\n"); 775 - WARN_ON(!IS_MODULE(CONFIG_RCU_PERF_TEST)); 775 + WARN_ON(!IS_MODULE(CONFIG_RCU_SCALE_TEST)); 776 776 firsterr = -EINVAL; 777 777 cur_ops = NULL; 778 778 goto unwind; ··· 781 781 cur_ops->init(); 782 782 783 783 if (kfree_rcu_test) 784 - return kfree_perf_init(); 784 + return kfree_scale_init(); 785 785 786 786 nrealwriters = compute_real(nwriters); 787 787 nrealreaders = compute_real(nreaders); 788 - atomic_set(&n_rcu_perf_reader_started, 0); 789 - atomic_set(&n_rcu_perf_writer_started, 0); 790 - atomic_set(&n_rcu_perf_writer_finished, 0); 791 - rcu_perf_print_module_parms(cur_ops, "Start of test"); 788 + atomic_set(&n_rcu_scale_reader_started, 0); 789 + atomic_set(&n_rcu_scale_writer_started, 0); 790 + atomic_set(&n_rcu_scale_writer_finished, 0); 791 + rcu_scale_print_module_parms(cur_ops, "Start of test"); 792 792 793 793 /* Start up the kthreads. */ 794 794 795 795 if (shutdown) { 796 796 init_waitqueue_head(&shutdown_wq); 797 - firsterr = torture_create_kthread(rcu_perf_shutdown, NULL, 797 + firsterr = torture_create_kthread(rcu_scale_shutdown, NULL, 798 798 shutdown_task); 799 799 if (firsterr) 800 800 goto unwind; ··· 803 803 reader_tasks = kcalloc(nrealreaders, sizeof(reader_tasks[0]), 804 804 GFP_KERNEL); 805 805 if (reader_tasks == NULL) { 806 - VERBOSE_PERFOUT_ERRSTRING("out of memory"); 806 + VERBOSE_SCALEOUT_ERRSTRING("out of memory"); 807 807 firsterr = -ENOMEM; 808 808 goto unwind; 809 809 } 810 810 for (i = 0; i < nrealreaders; i++) { 811 - firsterr = torture_create_kthread(rcu_perf_reader, (void *)i, 811 + firsterr = torture_create_kthread(rcu_scale_reader, (void *)i, 812 812 reader_tasks[i]); 813 813 if (firsterr) 814 814 goto unwind; 815 815 } 816 - while (atomic_read(&n_rcu_perf_reader_started) < nrealreaders) 816 + while (atomic_read(&n_rcu_scale_reader_started) < nrealreaders) 817 817 schedule_timeout_uninterruptible(1); 818 818 writer_tasks = kcalloc(nrealwriters, sizeof(reader_tasks[0]), 819 819 GFP_KERNEL); ··· 823 823 kcalloc(nrealwriters, sizeof(*writer_n_durations), 824 824 GFP_KERNEL); 825 825 if (!writer_tasks || !writer_durations || !writer_n_durations) { 826 - VERBOSE_PERFOUT_ERRSTRING("out of memory"); 826 + VERBOSE_SCALEOUT_ERRSTRING("out of memory"); 827 827 firsterr = -ENOMEM; 828 828 goto unwind; 829 829 } ··· 835 835 firsterr = -ENOMEM; 836 836 goto unwind; 837 837 } 838 - firsterr = torture_create_kthread(rcu_perf_writer, (void *)i, 838 + firsterr = torture_create_kthread(rcu_scale_writer, (void *)i, 839 839 writer_tasks[i]); 840 840 if (firsterr) 841 841 goto unwind; ··· 845 845 846 846 unwind: 847 847 torture_init_end(); 848 - rcu_perf_cleanup(); 848 + rcu_scale_cleanup(); 849 849 return firsterr; 850 850 } 851 851 852 - module_init(rcu_perf_init); 853 - module_exit(rcu_perf_cleanup); 852 + module_init(rcu_scale_init); 853 + module_exit(rcu_scale_cleanup);
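
When rcuscale runs with gp_async, rcu_scale_writer() posts callbacks via cur_ops->async() only while the per-CPU count of outstanding callbacks (n_async_inflight) stays below gp_async_max, and otherwise falls back to cur_ops->gp_barrier() to let the backlog drain. A minimal user-space sketch of that throttling pattern, with the kernel's atomics and RCU machinery replaced by C11 atomics (names and the cap value are illustrative, not the kernel implementation):

#include <stdio.h>
#include <stdatomic.h>

#define GP_ASYNC_MAX 1000	/* assumed cap, standing in for gp_async_max */

static atomic_int n_async_inflight;

static void async_cb(void)		/* stands in for rcu_scale_async_cb() */
{
	atomic_fetch_sub(&n_async_inflight, 1);
}

static void gp_barrier(void)		/* stands in for cur_ops->gp_barrier() */
{
	/* A real rcu_barrier() waits for the callbacks; here we just run them. */
	while (atomic_load(&n_async_inflight) > 0)
		async_cb();
}

int main(void)
{
	int posted = 0, drained = 0;

	for (int i = 0; i < 10000; i++) {
		if (atomic_load(&n_async_inflight) < GP_ASYNC_MAX) {
			atomic_fetch_add(&n_async_inflight, 1);	/* cur_ops->async() */
			posted++;
		} else {
			gp_barrier();		/* over the cap: drain */
			drained++;
		}
	}
	gp_barrier();	/* final drain, as at the end of rcu_scale_writer() */
	printf("posted %d callbacks, drained %d times\n", posted, drained);
	return 0;
}

The cap keeps a fast writer from flooding memory with outstanding callbacks while still measuring the asynchronous posting rate rather than grace-period latency.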
+575
kernel/scftorture.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // 3 + // Torture test for smp_call_function() and friends. 4 + // 5 + // Copyright (C) Facebook, 2020. 6 + // 7 + // Author: Paul E. McKenney <paulmck@kernel.org> 8 + 9 + #define pr_fmt(fmt) fmt 10 + 11 + #include <linux/atomic.h> 12 + #include <linux/bitops.h> 13 + #include <linux/completion.h> 14 + #include <linux/cpu.h> 15 + #include <linux/delay.h> 16 + #include <linux/err.h> 17 + #include <linux/init.h> 18 + #include <linux/interrupt.h> 19 + #include <linux/kthread.h> 20 + #include <linux/kernel.h> 21 + #include <linux/mm.h> 22 + #include <linux/module.h> 23 + #include <linux/moduleparam.h> 24 + #include <linux/notifier.h> 25 + #include <linux/percpu.h> 26 + #include <linux/rcupdate.h> 27 + #include <linux/rcupdate_trace.h> 28 + #include <linux/reboot.h> 29 + #include <linux/sched.h> 30 + #include <linux/spinlock.h> 31 + #include <linux/smp.h> 32 + #include <linux/stat.h> 33 + #include <linux/srcu.h> 34 + #include <linux/slab.h> 35 + #include <linux/torture.h> 36 + #include <linux/types.h> 37 + 38 + #define SCFTORT_STRING "scftorture" 39 + #define SCFTORT_FLAG SCFTORT_STRING ": " 40 + 41 + #define SCFTORTOUT(s, x...) \ 42 + pr_alert(SCFTORT_FLAG s, ## x) 43 + 44 + #define VERBOSE_SCFTORTOUT(s, x...) \ 45 + do { if (verbose) pr_alert(SCFTORT_FLAG s, ## x); } while (0) 46 + 47 + #define VERBOSE_SCFTORTOUT_ERRSTRING(s, x...) \ 48 + do { if (verbose) pr_alert(SCFTORT_FLAG "!!! " s, ## x); } while (0) 49 + 50 + MODULE_LICENSE("GPL"); 51 + MODULE_AUTHOR("Paul E. McKenney <paulmck@kernel.org>"); 52 + 53 + // Wait until there are multiple CPUs before starting test. 54 + torture_param(int, holdoff, IS_BUILTIN(CONFIG_SCF_TORTURE_TEST) ? 10 : 0, 55 + "Holdoff time before test start (s)"); 56 + torture_param(int, longwait, 0, "Include ridiculously long waits? 
(seconds)"); 57 + torture_param(int, nthreads, -1, "# threads, defaults to -1 for all CPUs."); 58 + torture_param(int, onoff_holdoff, 0, "Time after boot before CPU hotplugs (s)"); 59 + torture_param(int, onoff_interval, 0, "Time between CPU hotplugs (s), 0=disable"); 60 + torture_param(int, shutdown_secs, 0, "Shutdown time (s), <= zero to disable."); 61 + torture_param(int, stat_interval, 60, "Number of seconds between stats printk()s."); 62 + torture_param(int, stutter_cpus, 5, "Number of jiffies to change CPUs under test, 0=disable"); 63 + torture_param(bool, use_cpus_read_lock, 0, "Use cpus_read_lock() to exclude CPU hotplug."); 64 + torture_param(int, verbose, 0, "Enable verbose debugging printk()s"); 65 + torture_param(int, weight_single, -1, "Testing weight for single-CPU no-wait operations."); 66 + torture_param(int, weight_single_wait, -1, "Testing weight for single-CPU operations."); 67 + torture_param(int, weight_many, -1, "Testing weight for multi-CPU no-wait operations."); 68 + torture_param(int, weight_many_wait, -1, "Testing weight for multi-CPU operations."); 69 + torture_param(int, weight_all, -1, "Testing weight for all-CPU no-wait operations."); 70 + torture_param(int, weight_all_wait, -1, "Testing weight for all-CPU operations."); 71 + 72 + char *torture_type = ""; 73 + 74 + #ifdef MODULE 75 + # define SCFTORT_SHUTDOWN 0 76 + #else 77 + # define SCFTORT_SHUTDOWN 1 78 + #endif 79 + 80 + torture_param(bool, shutdown, SCFTORT_SHUTDOWN, "Shutdown at end of torture test."); 81 + 82 + struct scf_statistics { 83 + struct task_struct *task; 84 + int cpu; 85 + long long n_single; 86 + long long n_single_ofl; 87 + long long n_single_wait; 88 + long long n_single_wait_ofl; 89 + long long n_many; 90 + long long n_many_wait; 91 + long long n_all; 92 + long long n_all_wait; 93 + }; 94 + 95 + static struct scf_statistics *scf_stats_p; 96 + static struct task_struct *scf_torture_stats_task; 97 + static DEFINE_PER_CPU(long long, scf_invoked_count); 98 + 99 + // Data for random primitive selection 100 + #define SCF_PRIM_SINGLE 0 101 + #define SCF_PRIM_MANY 1 102 + #define SCF_PRIM_ALL 2 103 + #define SCF_NPRIMS (2 * 3) // Need wait and no-wait versions of each. 104 + 105 + static char *scf_prim_name[] = { 106 + "smp_call_function_single", 107 + "smp_call_function_many", 108 + "smp_call_function", 109 + }; 110 + 111 + struct scf_selector { 112 + unsigned long scfs_weight; 113 + int scfs_prim; 114 + bool scfs_wait; 115 + }; 116 + static struct scf_selector scf_sel_array[SCF_NPRIMS]; 117 + static int scf_sel_array_len; 118 + static unsigned long scf_sel_totweight; 119 + 120 + // Communicate between caller and handler. 121 + struct scf_check { 122 + bool scfc_in; 123 + bool scfc_out; 124 + int scfc_cpu; // -1 for not _single(). 125 + bool scfc_wait; 126 + }; 127 + 128 + // Use to wait for all threads to start. 129 + static atomic_t n_started; 130 + static atomic_t n_errs; 131 + static atomic_t n_mb_in_errs; 132 + static atomic_t n_mb_out_errs; 133 + static atomic_t n_alloc_errs; 134 + static bool scfdone; 135 + static char *bangstr = ""; 136 + 137 + static DEFINE_TORTURE_RANDOM_PERCPU(scf_torture_rand); 138 + 139 + // Print torture statistics. Caller must ensure serialization. 
140 + static void scf_torture_stats_print(void) 141 + { 142 + int cpu; 143 + int i; 144 + long long invoked_count = 0; 145 + bool isdone = READ_ONCE(scfdone); 146 + struct scf_statistics scfs = {}; 147 + 148 + for_each_possible_cpu(cpu) 149 + invoked_count += data_race(per_cpu(scf_invoked_count, cpu)); 150 + for (i = 0; i < nthreads; i++) { 151 + scfs.n_single += scf_stats_p[i].n_single; 152 + scfs.n_single_ofl += scf_stats_p[i].n_single_ofl; 153 + scfs.n_single_wait += scf_stats_p[i].n_single_wait; 154 + scfs.n_single_wait_ofl += scf_stats_p[i].n_single_wait_ofl; 155 + scfs.n_many += scf_stats_p[i].n_many; 156 + scfs.n_many_wait += scf_stats_p[i].n_many_wait; 157 + scfs.n_all += scf_stats_p[i].n_all; 158 + scfs.n_all_wait += scf_stats_p[i].n_all_wait; 159 + } 160 + if (atomic_read(&n_errs) || atomic_read(&n_mb_in_errs) || 161 + atomic_read(&n_mb_out_errs) || atomic_read(&n_alloc_errs)) 162 + bangstr = "!!! "; 163 + pr_alert("%s %sscf_invoked_count %s: %lld single: %lld/%lld single_ofl: %lld/%lld many: %lld/%lld all: %lld/%lld ", 164 + SCFTORT_FLAG, bangstr, isdone ? "VER" : "ver", invoked_count, 165 + scfs.n_single, scfs.n_single_wait, scfs.n_single_ofl, scfs.n_single_wait_ofl, 166 + scfs.n_many, scfs.n_many_wait, scfs.n_all, scfs.n_all_wait); 167 + torture_onoff_stats(); 168 + pr_cont("ste: %d stnmie: %d stnmoe: %d staf: %d\n", atomic_read(&n_errs), 169 + atomic_read(&n_mb_in_errs), atomic_read(&n_mb_out_errs), 170 + atomic_read(&n_alloc_errs)); 171 + } 172 + 173 + // Periodically prints torture statistics, if periodic statistics printing 174 + // was specified via the stat_interval module parameter. 175 + static int 176 + scf_torture_stats(void *arg) 177 + { 178 + VERBOSE_TOROUT_STRING("scf_torture_stats task started"); 179 + do { 180 + schedule_timeout_interruptible(stat_interval * HZ); 181 + scf_torture_stats_print(); 182 + torture_shutdown_absorb("scf_torture_stats"); 183 + } while (!torture_must_stop()); 184 + torture_kthread_stopping("scf_torture_stats"); 185 + return 0; 186 + } 187 + 188 + // Add a primitive to the scf_sel_array[]. 189 + static void scf_sel_add(unsigned long weight, int prim, bool wait) 190 + { 191 + struct scf_selector *scfsp = &scf_sel_array[scf_sel_array_len]; 192 + 193 + // If no weight, if array would overflow, if computing three-place 194 + // percentages would overflow, or if the scf_prim_name[] array would 195 + // overflow, don't bother. In the last three cases, complain. 196 + if (!weight || 197 + WARN_ON_ONCE(scf_sel_array_len >= ARRAY_SIZE(scf_sel_array)) || 198 + WARN_ON_ONCE(0 - 100000 * weight <= 100000 * scf_sel_totweight) || 199 + WARN_ON_ONCE(prim >= ARRAY_SIZE(scf_prim_name))) 200 + return; 201 + scf_sel_totweight += weight; 202 + scfsp->scfs_weight = scf_sel_totweight; 203 + scfsp->scfs_prim = prim; 204 + scfsp->scfs_wait = wait; 205 + scf_sel_array_len++; 206 + } 207 + 208 + // Dump out weighting percentages for scf_prim_name[] array. 209 + static void scf_sel_dump(void) 210 + { 211 + int i; 212 + unsigned long oldw = 0; 213 + struct scf_selector *scfsp; 214 + unsigned long w; 215 + 216 + for (i = 0; i < scf_sel_array_len; i++) { 217 + scfsp = &scf_sel_array[i]; 218 + w = (scfsp->scfs_weight - oldw) * 100000 / scf_sel_totweight; 219 + pr_info("%s: %3lu.%03lu %s(%s)\n", __func__, w / 1000, w % 1000, 220 + scf_prim_name[scfsp->scfs_prim], 221 + scfsp->scfs_wait ? "wait" : "nowait"); 222 + oldw = scfsp->scfs_weight; 223 + } 224 + } 225 + 226 + // Randomly pick a primitive and wait/nowait, based on weightings. 
227 + static struct scf_selector *scf_sel_rand(struct torture_random_state *trsp) 228 + { 229 + int i; 230 + unsigned long w = torture_random(trsp) % (scf_sel_totweight + 1); 231 + 232 + for (i = 0; i < scf_sel_array_len; i++) 233 + if (scf_sel_array[i].scfs_weight >= w) 234 + return &scf_sel_array[i]; 235 + WARN_ON_ONCE(1); 236 + return &scf_sel_array[0]; 237 + } 238 + 239 + // Update statistics and occasionally burn up mass quantities of CPU time, 240 + // if told to do so via scftorture.longwait. Otherwise, occasionally burn 241 + // a little bit. 242 + static void scf_handler(void *scfc_in) 243 + { 244 + int i; 245 + int j; 246 + unsigned long r = torture_random(this_cpu_ptr(&scf_torture_rand)); 247 + struct scf_check *scfcp = scfc_in; 248 + 249 + if (likely(scfcp)) { 250 + WRITE_ONCE(scfcp->scfc_out, false); // For multiple receivers. 251 + if (WARN_ON_ONCE(unlikely(!READ_ONCE(scfcp->scfc_in)))) 252 + atomic_inc(&n_mb_in_errs); 253 + } 254 + this_cpu_inc(scf_invoked_count); 255 + if (longwait <= 0) { 256 + if (!(r & 0xffc0)) 257 + udelay(r & 0x3f); 258 + goto out; 259 + } 260 + if (r & 0xfff) 261 + goto out; 262 + r = (r >> 12); 263 + if (longwait <= 0) { 264 + udelay((r & 0xff) + 1); 265 + goto out; 266 + } 267 + r = r % longwait + 1; 268 + for (i = 0; i < r; i++) { 269 + for (j = 0; j < 1000; j++) { 270 + udelay(1000); 271 + cpu_relax(); 272 + } 273 + } 274 + out: 275 + if (unlikely(!scfcp)) 276 + return; 277 + if (scfcp->scfc_wait) 278 + WRITE_ONCE(scfcp->scfc_out, true); 279 + else 280 + kfree(scfcp); 281 + } 282 + 283 + // As above, but check for correct CPU. 284 + static void scf_handler_1(void *scfc_in) 285 + { 286 + struct scf_check *scfcp = scfc_in; 287 + 288 + if (likely(scfcp) && WARN_ONCE(smp_processor_id() != scfcp->scfc_cpu, "%s: Wanted CPU %d got CPU %d\n", __func__, scfcp->scfc_cpu, smp_processor_id())) { 289 + atomic_inc(&n_errs); 290 + } 291 + scf_handler(scfcp); 292 + } 293 + 294 + // Randomly do an smp_call_function*() invocation. 295 + static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_random_state *trsp) 296 + { 297 + uintptr_t cpu; 298 + int ret = 0; 299 + struct scf_check *scfcp = NULL; 300 + struct scf_selector *scfsp = scf_sel_rand(trsp); 301 + 302 + if (use_cpus_read_lock) 303 + cpus_read_lock(); 304 + else 305 + preempt_disable(); 306 + if (scfsp->scfs_prim == SCF_PRIM_SINGLE || scfsp->scfs_wait) { 307 + scfcp = kmalloc(sizeof(*scfcp), GFP_ATOMIC); 308 + if (WARN_ON_ONCE(!scfcp)) { 309 + atomic_inc(&n_alloc_errs); 310 + } else { 311 + scfcp->scfc_cpu = -1; 312 + scfcp->scfc_wait = scfsp->scfs_wait; 313 + scfcp->scfc_out = false; 314 + } 315 + } 316 + switch (scfsp->scfs_prim) { 317 + case SCF_PRIM_SINGLE: 318 + cpu = torture_random(trsp) % nr_cpu_ids; 319 + if (scfsp->scfs_wait) 320 + scfp->n_single_wait++; 321 + else 322 + scfp->n_single++; 323 + if (scfcp) { 324 + scfcp->scfc_cpu = cpu; 325 + barrier(); // Prevent race-reduction compiler optimizations. 326 + scfcp->scfc_in = true; 327 + } 328 + ret = smp_call_function_single(cpu, scf_handler_1, (void *)scfcp, scfsp->scfs_wait); 329 + if (ret) { 330 + if (scfsp->scfs_wait) 331 + scfp->n_single_wait_ofl++; 332 + else 333 + scfp->n_single_ofl++; 334 + kfree(scfcp); 335 + scfcp = NULL; 336 + } 337 + break; 338 + case SCF_PRIM_MANY: 339 + if (scfsp->scfs_wait) 340 + scfp->n_many_wait++; 341 + else 342 + scfp->n_many++; 343 + if (scfcp) { 344 + barrier(); // Prevent race-reduction compiler optimizations. 
345 + scfcp->scfc_in = true; 346 + } 347 + smp_call_function_many(cpu_online_mask, scf_handler, scfcp, scfsp->scfs_wait); 348 + break; 349 + case SCF_PRIM_ALL: 350 + if (scfsp->scfs_wait) 351 + scfp->n_all_wait++; 352 + else 353 + scfp->n_all++; 354 + if (scfcp) { 355 + barrier(); // Prevent race-reduction compiler optimizations. 356 + scfcp->scfc_in = true; 357 + } 358 + smp_call_function(scf_handler, scfcp, scfsp->scfs_wait); 359 + break; 360 + default: 361 + WARN_ON_ONCE(1); 362 + if (scfcp) 363 + scfcp->scfc_out = true; 364 + } 365 + if (scfcp && scfsp->scfs_wait) { 366 + if (WARN_ON_ONCE((num_online_cpus() > 1 || scfsp->scfs_prim == SCF_PRIM_SINGLE) && 367 + !scfcp->scfc_out)) 368 + atomic_inc(&n_mb_out_errs); // Leak rather than trash! 369 + else 370 + kfree(scfcp); 371 + barrier(); // Prevent race-reduction compiler optimizations. 372 + } 373 + if (use_cpus_read_lock) 374 + cpus_read_unlock(); 375 + else 376 + preempt_enable(); 377 + if (!(torture_random(trsp) & 0xfff)) 378 + schedule_timeout_uninterruptible(1); 379 + } 380 + 381 + // SCF test kthread. Repeatedly does calls to members of the 382 + // smp_call_function() family of functions. 383 + static int scftorture_invoker(void *arg) 384 + { 385 + int cpu; 386 + DEFINE_TORTURE_RANDOM(rand); 387 + struct scf_statistics *scfp = (struct scf_statistics *)arg; 388 + bool was_offline = false; 389 + 390 + VERBOSE_SCFTORTOUT("scftorture_invoker %d: task started", scfp->cpu); 391 + cpu = scfp->cpu % nr_cpu_ids; 392 + set_cpus_allowed_ptr(current, cpumask_of(cpu)); 393 + set_user_nice(current, MAX_NICE); 394 + if (holdoff) 395 + schedule_timeout_interruptible(holdoff * HZ); 396 + 397 + VERBOSE_SCFTORTOUT("scftorture_invoker %d: Waiting for all SCF torturers from cpu %d", scfp->cpu, smp_processor_id()); 398 + 399 + // Make sure that the CPU is affinitized appropriately during testing. 
400 + WARN_ON_ONCE(smp_processor_id() != scfp->cpu); 401 + 402 + if (!atomic_dec_return(&n_started)) 403 + while (atomic_read_acquire(&n_started)) { 404 + if (torture_must_stop()) { 405 + VERBOSE_SCFTORTOUT("scftorture_invoker %d ended before starting", scfp->cpu); 406 + goto end; 407 + } 408 + schedule_timeout_uninterruptible(1); 409 + } 410 + 411 + VERBOSE_SCFTORTOUT("scftorture_invoker %d started", scfp->cpu); 412 + 413 + do { 414 + scftorture_invoke_one(scfp, &rand); 415 + while (cpu_is_offline(cpu) && !torture_must_stop()) { 416 + schedule_timeout_interruptible(HZ / 5); 417 + was_offline = true; 418 + } 419 + if (was_offline) { 420 + set_cpus_allowed_ptr(current, cpumask_of(cpu)); 421 + was_offline = false; 422 + } 423 + cond_resched(); 424 + } while (!torture_must_stop()); 425 + 426 + VERBOSE_SCFTORTOUT("scftorture_invoker %d ended", scfp->cpu); 427 + end: 428 + torture_kthread_stopping("scftorture_invoker"); 429 + return 0; 430 + } 431 + 432 + static void 433 + scftorture_print_module_parms(const char *tag) 434 + { 435 + pr_alert(SCFTORT_FLAG 436 + "--- %s: verbose=%d holdoff=%d longwait=%d nthreads=%d onoff_holdoff=%d onoff_interval=%d shutdown_secs=%d stat_interval=%d stutter_cpus=%d use_cpus_read_lock=%d, weight_single=%d, weight_single_wait=%d, weight_many=%d, weight_many_wait=%d, weight_all=%d, weight_all_wait=%d\n", tag, 437 + verbose, holdoff, longwait, nthreads, onoff_holdoff, onoff_interval, shutdown_secs, stat_interval, stutter_cpus, use_cpus_read_lock, weight_single, weight_single_wait, weight_many, weight_many_wait, weight_all, weight_all_wait); 438 + } 439 + 440 + static void scf_cleanup_handler(void *unused) 441 + { 442 + } 443 + 444 + static void scf_torture_cleanup(void) 445 + { 446 + int i; 447 + 448 + if (torture_cleanup_begin()) 449 + return; 450 + 451 + WRITE_ONCE(scfdone, true); 452 + if (nthreads) 453 + for (i = 0; i < nthreads; i++) 454 + torture_stop_kthread("scftorture_invoker", scf_stats_p[i].task); 455 + else 456 + goto end; 457 + smp_call_function(scf_cleanup_handler, NULL, 0); 458 + torture_stop_kthread(scf_torture_stats, scf_torture_stats_task); 459 + scf_torture_stats_print(); // -After- the stats thread is stopped! 460 + kfree(scf_stats_p); // -After- the last stats print has completed! 
461 + scf_stats_p = NULL; 462 + 463 + if (atomic_read(&n_errs) || atomic_read(&n_mb_in_errs) || atomic_read(&n_mb_out_errs)) 464 + scftorture_print_module_parms("End of test: FAILURE"); 465 + else if (torture_onoff_failures()) 466 + scftorture_print_module_parms("End of test: LOCK_HOTPLUG"); 467 + else 468 + scftorture_print_module_parms("End of test: SUCCESS"); 469 + 470 + end: 471 + torture_cleanup_end(); 472 + } 473 + 474 + static int __init scf_torture_init(void) 475 + { 476 + long i; 477 + int firsterr = 0; 478 + unsigned long weight_single1 = weight_single; 479 + unsigned long weight_single_wait1 = weight_single_wait; 480 + unsigned long weight_many1 = weight_many; 481 + unsigned long weight_many_wait1 = weight_many_wait; 482 + unsigned long weight_all1 = weight_all; 483 + unsigned long weight_all_wait1 = weight_all_wait; 484 + 485 + if (!torture_init_begin(SCFTORT_STRING, verbose)) 486 + return -EBUSY; 487 + 488 + scftorture_print_module_parms("Start of test"); 489 + 490 + if (weight_single == -1 && weight_single_wait == -1 && 491 + weight_many == -1 && weight_many_wait == -1 && 492 + weight_all == -1 && weight_all_wait == -1) { 493 + weight_single1 = 2 * nr_cpu_ids; 494 + weight_single_wait1 = 2 * nr_cpu_ids; 495 + weight_many1 = 2; 496 + weight_many_wait1 = 2; 497 + weight_all1 = 1; 498 + weight_all_wait1 = 1; 499 + } else { 500 + if (weight_single == -1) 501 + weight_single1 = 0; 502 + if (weight_single_wait == -1) 503 + weight_single_wait1 = 0; 504 + if (weight_many == -1) 505 + weight_many1 = 0; 506 + if (weight_many_wait == -1) 507 + weight_many_wait1 = 0; 508 + if (weight_all == -1) 509 + weight_all1 = 0; 510 + if (weight_all_wait == -1) 511 + weight_all_wait1 = 0; 512 + } 513 + if (weight_single1 == 0 && weight_single_wait1 == 0 && 514 + weight_many1 == 0 && weight_many_wait1 == 0 && 515 + weight_all1 == 0 && weight_all_wait1 == 0) { 516 + VERBOSE_SCFTORTOUT_ERRSTRING("all zero weights makes no sense"); 517 + firsterr = -EINVAL; 518 + goto unwind; 519 + } 520 + scf_sel_add(weight_single1, SCF_PRIM_SINGLE, false); 521 + scf_sel_add(weight_single_wait1, SCF_PRIM_SINGLE, true); 522 + scf_sel_add(weight_many1, SCF_PRIM_MANY, false); 523 + scf_sel_add(weight_many_wait1, SCF_PRIM_MANY, true); 524 + scf_sel_add(weight_all1, SCF_PRIM_ALL, false); 525 + scf_sel_add(weight_all_wait1, SCF_PRIM_ALL, true); 526 + scf_sel_dump(); 527 + 528 + if (onoff_interval > 0) { 529 + firsterr = torture_onoff_init(onoff_holdoff * HZ, onoff_interval, NULL); 530 + if (firsterr) 531 + goto unwind; 532 + } 533 + if (shutdown_secs > 0) { 534 + firsterr = torture_shutdown_init(shutdown_secs, scf_torture_cleanup); 535 + if (firsterr) 536 + goto unwind; 537 + } 538 + 539 + // Worker tasks invoking smp_call_function(). 
540 + if (nthreads < 0) 541 + nthreads = num_online_cpus(); 542 + scf_stats_p = kcalloc(nthreads, sizeof(scf_stats_p[0]), GFP_KERNEL); 543 + if (!scf_stats_p) { 544 + VERBOSE_SCFTORTOUT_ERRSTRING("out of memory"); 545 + firsterr = -ENOMEM; 546 + goto unwind; 547 + } 548 + 549 + VERBOSE_SCFTORTOUT("Starting %d smp_call_function() threads\n", nthreads); 550 + 551 + atomic_set(&n_started, nthreads); 552 + for (i = 0; i < nthreads; i++) { 553 + scf_stats_p[i].cpu = i; 554 + firsterr = torture_create_kthread(scftorture_invoker, (void *)&scf_stats_p[i], 555 + scf_stats_p[i].task); 556 + if (firsterr) 557 + goto unwind; 558 + } 559 + if (stat_interval > 0) { 560 + firsterr = torture_create_kthread(scf_torture_stats, NULL, scf_torture_stats_task); 561 + if (firsterr) 562 + goto unwind; 563 + } 564 + 565 + torture_init_end(); 566 + return 0; 567 + 568 + unwind: 569 + torture_init_end(); 570 + scf_torture_cleanup(); 571 + return firsterr; 572 + } 573 + 574 + module_init(scf_torture_init); 575 + module_exit(scf_torture_cleanup);
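
The scfc_in/scfc_out handshake in scftorture_invoke_one() and scf_handler() doubles as a memory-ordering check: the caller stores scfc_in just before invoking the primitive, the handler complains (n_mb_in_errs) if that store is not yet visible, then stores scfc_out, which the caller verifies after a waited call (n_mb_out_errs). A user-space pthread sketch of the protocol follows; it models the handshake only, under the assumption that thread creation/join stand in for IPI dispatch and wait=1, whereas the kernel relies on the ordering guarantees of the smp_call_function*() machinery itself:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct scf_check {		/* caller/handler handshake, as in scftorture */
	bool scfc_in;
	bool scfc_out;
};

static void *handler(void *arg)	/* stands in for scf_handler() */
{
	struct scf_check *scfcp = arg;

	if (!__atomic_load_n(&scfcp->scfc_in, __ATOMIC_RELAXED))
		fprintf(stderr, "handshake error: scfc_in not visible\n"); /* n_mb_in_errs */
	__atomic_store_n(&scfcp->scfc_out, true, __ATOMIC_RELEASE);
	return NULL;
}

int main(void)
{
	struct scf_check *scfcp = calloc(1, sizeof(*scfcp));
	pthread_t tid;

	if (!scfcp)
		return 1;		/* n_alloc_errs in the kernel version */
	scfcp->scfc_out = false;
	scfcp->scfc_in = true;		/* caller's pre-"IPI" store */
	pthread_create(&tid, NULL, handler, scfcp);	/* stands in for the IPI */
	pthread_join(tid, NULL);			/* stands in for wait=1 */
	if (!__atomic_load_n(&scfcp->scfc_out, __ATOMIC_ACQUIRE))
		fprintf(stderr, "handshake error: scfc_out not visible\n"); /* n_mb_out_errs */
	else
		printf("handshake OK\n");
	free(scfcp);
	return 0;
}

For the no-wait variants the kernel caller cannot check scfc_out, so the handler frees the scf_check structure itself, which is why scf_handler() branches on scfc_wait.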
+1 -1
kernel/time/tick-sched.c
··· 927 927 
928 928 	if (ratelimit < 10 &&
929 929 	    (local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
930 - 		pr_warn("NOHZ: local_softirq_pending %02x\n",
930 + 		pr_warn("NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #%02x!!!\n",
931 931 			(unsigned int) local_softirq_pending());
932 932 		ratelimit++;
933 933 	}
+10
lib/Kconfig.debug
··· 1367 1367 	  Say M if you want these self tests to build as a module.
1368 1368 	  Say N if you are unsure.
1369 1369 
1370 + config SCF_TORTURE_TEST
1371 + 	tristate "torture tests for smp_call_function*()"
1372 + 	depends on DEBUG_KERNEL
1373 + 	select TORTURE_TEST
1374 + 	help
1375 + 	  This option provides a kernel module that runs torture tests
1376 + 	  on the smp_call_function() family of primitives. The kernel
1377 + 	  module may be built after the fact on the running kernel to
1378 + 	  be tested, if desired.
1379 + 
1370 1380 endmenu # lock debugging
1371 1381 
1372 1382 config TRACE_IRQFLAGS
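For completeness, one plausible way to exercise the new option; this sequence is a sketch assuming an already-configured tree (scripts/config is the in-tree helper), not part of this patch:

    # Enable the torture test as a module and rebuild.  Per the help text,
    # the module may instead be built later against the running kernel.
    scripts/config --module CONFIG_SCF_TORTURE_TEST
    make olddefconfig && make
    modprobe scftorture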
+3 -3
tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuscale-ftrace.sh
··· 1 1 #!/bin/bash
2 2 # SPDX-License-Identifier: GPL-2.0+
3 3 #
4 - # Analyze a given results directory for rcuperf performance measurements,
4 + # Analyze a given results directory for rcuscale performance measurements,
5 5 # looking for ftrace data. Exits with 0 if data was found, analyzed, and
6 - # printed. Intended to be invoked from kvm-recheck-rcuperf.sh after
6 + # printed. Intended to be invoked from kvm-recheck-rcuscale.sh after
7 7 # argument checking.
8 8 #
9 - # Usage: kvm-recheck-rcuperf-ftrace.sh resdir
9 + # Usage: kvm-recheck-rcuscale-ftrace.sh resdir
10 10 #
11 11 # Copyright (C) IBM Corporation, 2016
12 12 #
+7 -7
tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh tools/testing/selftests/rcutorture/bin/kvm-recheck-rcuscale.sh
··· 1 1 #!/bin/bash
2 2 # SPDX-License-Identifier: GPL-2.0+
3 3 #
4 - # Analyze a given results directory for rcuperf performance measurements.
4 + # Analyze a given results directory for rcuscale scalability measurements.
5 5 #
6 - # Usage: kvm-recheck-rcuperf.sh resdir
6 + # Usage: kvm-recheck-rcuscale.sh resdir
7 7 #
8 8 # Copyright (C) IBM Corporation, 2016
9 9 #
··· 20 20 PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH
21 21 . functions.sh
22 22 
23 - if kvm-recheck-rcuscale-ftrace.sh $i
23 + if kvm-recheck-rcuscale-ftrace.sh $i
24 24 then
25 25 	# ftrace data was successfully analyzed, call it good!
26 26 	exit 0
··· 30 30 
31 31 sed -e 's/^\[[^]]*]//' < $i/console.log |
32 32 awk '
33 - /-perf: .* gps: .* batches:/ {
33 + /-scale: .* gps: .* batches:/ {
34 34 	ngps = $9;
35 35 	nbatches = $11;
36 36 }
37 37 
38 - /-perf: .*writer-duration/ {
38 + /-scale: .*writer-duration/ {
39 39 	gptimes[++n] = $5 / 1000.;
40 40 	sum += $5 / 1000.;
41 41 }
··· 43 43 END {
44 44 	newNR = asort(gptimes);
45 45 	if (newNR <= 0) {
46 - 		print "No rcuperf records found???"
46 + 		print "No rcuscale records found???"
47 47 		exit;
48 48 	}
49 49 	pct50 = int(newNR * 50 / 100);
··· 79 79 	print "99th percentile grace-period duration: " gptimes[pct99];
80 80 	print "Maximum grace-period duration: " gptimes[newNR];
81 81 	print "Grace periods: " ngps + 0 " Batches: " nbatches + 0 " Ratio: " ngps / nbatches;
82 - 	print "Computed from rcuperf printk output.";
82 + 	print "Computed from rcuscale printk output.";
83 83 }'
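The END block reports percentiles by indexing the gptimes array after asort() has sorted it. A worked instance of that index arithmetic, with an illustrative sample count:

    # With newNR = 200 sorted grace-period durations, the 90th-percentile
    # index is int(200 * 90 / 100) = 180, selecting gptimes[180].
    awk 'BEGIN { newNR = 200; print int(newNR * 90 / 100) }'   # prints 180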
+38
tools/testing/selftests/rcutorture/bin/kvm-recheck-scf.sh
··· 1 + #!/bin/bash
2 + # SPDX-License-Identifier: GPL-2.0+
3 + #
4 + # Analyze a given results directory for scftorture progress.
5 + #
6 + # Usage: kvm-recheck-scf.sh resdir
7 + #
8 + # Copyright (C) Facebook, 2020
9 + #
10 + # Authors: Paul E. McKenney <paulmck@kernel.org>
11 + 
12 + i="$1"
13 + if test -d "$i" -a -r "$i"
14 + then
15 + 	:
16 + else
17 + 	echo Unreadable results directory: $i
18 + 	exit 1
19 + fi
20 + . functions.sh
21 + 
22 + configfile=`echo $i | sed -e 's/^.*\///'`
23 + nscfs="`grep 'scf_invoked_count ver:' $i/console.log 2> /dev/null | tail -1 | sed -e 's/^.* scf_invoked_count ver: //' -e 's/ .*$//' | tr -d '\015'`"
24 + if test -z "$nscfs"
25 + then
26 + 	echo "$configfile ------- "
27 + else
28 + 	dur="`sed -e 's/^.* scftorture.shutdown_secs=//' -e 's/ .*$//' < $i/qemu-cmd 2> /dev/null`"
29 + 	if test -z "$dur"
30 + 	then
31 + 		rate=""
32 + 	else
33 + 		nscfss=`awk -v nscfs=$nscfs -v dur=$dur '
34 + 			BEGIN { print nscfs / dur }' < /dev/null`
35 + 		rate=" ($nscfss/s)"
36 + 	fi
37 + 	echo "${configfile} ------- ${nscfs} SCF handler invocations$rate"
38 + fi
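This checker is normally dispatched per scenario by kvm-recheck.sh; a standalone run needs the rcutorture bin directory on PATH so that ". functions.sh" resolves. A sketch, with an illustrative results path:

    PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH
    kvm-recheck-scf.sh tools/testing/selftests/rcutorture/res/2020.08.24-10.10.10/NOPREEMPT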
+4 -4
tools/testing/selftests/rcutorture/bin/kvm.sh
··· 71 71 	echo "       --qemu-args qemu-arguments"
72 72 	echo "       --qemu-cmd qemu-system-..."
73 73 	echo "       --results absolute-pathname"
74 - 	echo "       --torture lock|rcu|rcuperf|refscale|scf"
74 + 	echo "       --torture lock|rcu|rcuscale|refscale|scf"
75 75 	echo "       --trust-make"
76 76 	exit 1
77 77 }
··· 198 198 		shift
199 199 		;;
200 200 	--torture)
201 - 		checkarg --torture "(suite name)" "$#" "$2" '^\(lock\|rcu\|rcuperf\|refscale\)$' '^--'
201 + 		checkarg --torture "(suite name)" "$#" "$2" '^\(lock\|rcu\|rcuscale\|refscale\|scf\)$' '^--'
202 202 		TORTURE_SUITE=$2
203 203 		shift
204 - 		if test "$TORTURE_SUITE" = rcuperf || test "$TORTURE_SUITE" = refscale
204 + 		if test "$TORTURE_SUITE" = rcuscale || test "$TORTURE_SUITE" = refscale
205 205 		then
206 206 			# If you really want jitter for refscale or
207 - 			# rcuperf, specify it after specifying the rcuperf
207 + 			# rcuscale, specify it after specifying the rcuscale
208 208 			# or the refscale. (But why jitter in these cases?)
209 209 			jitter=0
210 210 		fi
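With scf now accepted by the argument check above, a run of the new suite looks like the following sketch (flag spellings from the usage text; the duration and config list are illustrative):

    tools/testing/selftests/rcutorture/bin/kvm.sh --torture scf \
        --duration 10 --configs "NOPREEMPT PREEMPT" --trust-make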
+6 -5
tools/testing/selftests/rcutorture/bin/parse-console.sh
··· 33 33 fi
34 34 cat /dev/null > $file.diags
35 35 
36 - # Check for proper termination, except for rcuperf and refscale.
37 - if test "$TORTURE_SUITE" != rcuperf && test "$TORTURE_SUITE" != refscale
36 + # Check for proper termination, except for rcuscale and refscale.
37 + if test "$TORTURE_SUITE" != rcuscale && test "$TORTURE_SUITE" != refscale
38 38 then
39 39 	# check for abject failure
40 40 
··· 67 67 	grep --binary-files=text 'torture:.*ver:' $file |
68 68 	egrep --binary-files=text -v '\(null\)|rtc: 000000000* ' |
69 69 	sed -e 's/^(initramfs)[^]]*] //' -e 's/^\[[^]]*] //' |
70 + 	sed -e 's/^.*ver: //' |
70 71 	awk '
71 72 	BEGIN	{
72 73 		ver = 0;
73 74 		badseq = 0;
74 75 		}
75 76 
76 77 	{
77 - 	if (!badseq && ($5 + 0 != $5 || $5 <= ver)) {
78 + 	if (!badseq && ($1 + 0 != $1 || $1 <= ver)) {
78 79 		badseqno1 = ver;
79 - 		badseqno2 = $5;
80 + 		badseqno2 = $1;
80 81 		badseqnr = NR;
81 82 		badseq = 1;
82 83 		}
83 - 	ver = $5
84 + 	ver = $1
84 85 	}
85 86 
86 87 	END {
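The added sed stage strips everything up to and including the "ver: " tag, which is why the awk sequence check now reads the version number from $1 rather than $5. A quick demonstration, using a made-up console line rather than verbatim output:

    # After the added sed stage, each line begins with the version number.
    echo 'torture: scf_invoked_count ver: 42 (t=60)' |
    sed -e 's/^.*ver: //' | awk '{ print $1 }'    # prints 42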
tools/testing/selftests/rcutorture/configs/rcuperf/CFLIST tools/testing/selftests/rcutorture/configs/rcuscale/CFLIST
-2
tools/testing/selftests/rcutorture/configs/rcuperf/CFcommon
··· 1 - CONFIG_RCU_PERF_TEST=y
2 - CONFIG_PRINTK_TIME=y
tools/testing/selftests/rcutorture/configs/rcuperf/TINY tools/testing/selftests/rcutorture/configs/rcuscale/TINY
tools/testing/selftests/rcutorture/configs/rcuperf/TREE tools/testing/selftests/rcutorture/configs/rcuscale/TREE
tools/testing/selftests/rcutorture/configs/rcuperf/TREE54 tools/testing/selftests/rcutorture/configs/rcuscale/TREE54
+2 -2
tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh tools/testing/selftests/rcutorture/configs/rcuscale/ver_functions.sh
··· 11 11 #
12 12 # Adds per-version torture-module parameters to kernels supporting them.
13 13 per_version_boot_params () {
14 - 	echo $1 rcuperf.shutdown=1 \
15 - 		rcuperf.verbose=1
14 + 	echo $1 rcuscale.shutdown=1 \
15 + 		rcuscale.verbose=1
16 16 }
+2
tools/testing/selftests/rcutorture/configs/rcuscale/CFcommon
··· 1 + CONFIG_RCU_SCALE_TEST=y
2 + CONFIG_PRINTK_TIME=y
+2
tools/testing/selftests/rcutorture/configs/scf/CFLIST
··· 1 + NOPREEMPT
2 + PREEMPT
+2
tools/testing/selftests/rcutorture/configs/scf/CFcommon
··· 1 + CONFIG_SCF_TORTURE_TEST=y
2 + CONFIG_PRINTK_TIME=y
+9
tools/testing/selftests/rcutorture/configs/scf/NOPREEMPT
··· 1 + CONFIG_SMP=y
2 + CONFIG_PREEMPT_NONE=y
3 + CONFIG_PREEMPT_VOLUNTARY=n
4 + CONFIG_PREEMPT=n
5 + CONFIG_HZ_PERIODIC=n
6 + CONFIG_NO_HZ_IDLE=n
7 + CONFIG_NO_HZ_FULL=y
8 + CONFIG_DEBUG_LOCK_ALLOC=n
9 + CONFIG_PROVE_LOCKING=n
+1
tools/testing/selftests/rcutorture/configs/scf/NOPREEMPT.boot
··· 1 + nohz_full=1
+9
tools/testing/selftests/rcutorture/configs/scf/PREEMPT
··· 1 + CONFIG_SMP=y
2 + CONFIG_PREEMPT_NONE=n
3 + CONFIG_PREEMPT_VOLUNTARY=n
4 + CONFIG_PREEMPT=y
5 + CONFIG_HZ_PERIODIC=n
6 + CONFIG_NO_HZ_IDLE=y
7 + CONFIG_NO_HZ_FULL=n
8 + CONFIG_DEBUG_LOCK_ALLOC=y
9 + CONFIG_PROVE_LOCKING=y
+30
tools/testing/selftests/rcutorture/configs/scf/ver_functions.sh
··· 1 + #!/bin/bash
2 + # SPDX-License-Identifier: GPL-2.0+
3 + #
4 + # Torture-suite-dependent shell functions for the rest of the scripts.
5 + #
6 + # Copyright (C) Facebook, 2020
7 + #
8 + # Authors: Paul E. McKenney <paulmck@kernel.org>
9 + 
10 + # scftorture_param_onoff bootparam-string config-file
11 + #
12 + # Adds onoff scftorture module parameters to kernels having it.
13 + scftorture_param_onoff () {
14 + 	if ! bootparam_hotplug_cpu "$1" && configfrag_hotplug_cpu "$2"
15 + 	then
16 + 		echo CPU-hotplug kernel, adding scftorture onoff. 1>&2
17 + 		echo scftorture.onoff_interval=1000 scftorture.onoff_holdoff=30
18 + 	fi
19 + }
20 + 
21 + # per_version_boot_params bootparam-string config-file seconds
22 + #
23 + # Adds per-version torture-module parameters to kernels supporting them.
24 + per_version_boot_params () {
25 + 	echo $1 `scftorture_param_onoff "$1" "$2"` \
26 + 		scftorture.stat_interval=15 \
27 + 		scftorture.shutdown_secs=$3 \
28 + 		scftorture.verbose=1 \
29 + 		scf
30 + }
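As a rough illustration of what per_version_boot_params emits for a 600-second run on a CPU-hotplug-capable config, derived from the functions above (values and ordering illustrative):

    # bootparam-string first, then the conditional onoff parameters, then:
    #   scftorture.onoff_interval=1000 scftorture.onoff_holdoff=30
    #   scftorture.stat_interval=15 scftorture.shutdown_secs=600
    #   scftorture.verbose=1 scf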