Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

cpufreq: governor: remove copy_prev_load from 'struct cpu_dbs_common_info'

'copy_prev_load' was recently added by commit: 18b46ab (cpufreq: governor: Be
friendly towards latency-sensitive bursty workloads).

It is actually somewhat redundant, as we also have 'prev_load', which can store
any integer value and can be used instead of 'copy_prev_load' by setting it to
zero.

True load can also turn out to be zero during long idle intervals (and hence the
actual value of 'prev_load' and the overloaded value can clash). However this is
not a problem because, if the true load was really zero in the previous
interval, it makes sense to evaluate the load afresh for the current interval
rather than copying the previous load.

So, drop 'copy_prev_load' and use 'prev_load' instead.

Update the comments as well to make this clearer.

There is another change here, which was probably missed by Srivatsa during the
last round of updates he made: the unlikely() in the 'if' statement covered
only half of the condition, whereas the whole condition should actually come
under it.

Also, checkpatch is silenced, as it was reporting this (with the --strict
option):

CHECK: Alignment should match open parenthesis
+ if (unlikely(wall_time > (2 * sampling_rate) &&
+ j_cdbs->prev_load)) {

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Authored by Viresh Kumar and committed by Rafael J. Wysocki
c8ae481b 18b46abd

+19 -9 (total)

drivers/cpufreq/cpufreq_governor.c  +14 -5

--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -131,15 +131,25 @@
 		 * timer would not have fired during CPU-idle periods. Hence
 		 * an unusually large 'wall_time' (as compared to the sampling
 		 * rate) indicates this scenario.
+		 *
+		 * prev_load can be zero in two cases and we must recalculate it
+		 * for both cases:
+		 * - during long idle intervals
+		 * - explicitly set to zero
 		 */
-		if (unlikely(wall_time > (2 * sampling_rate)) &&
-					j_cdbs->copy_prev_load) {
+		if (unlikely(wall_time > (2 * sampling_rate) &&
+			     j_cdbs->prev_load)) {
 			load = j_cdbs->prev_load;
-			j_cdbs->copy_prev_load = false;
+
+			/*
+			 * Perform a destructive copy, to ensure that we copy
+			 * the previous load only once, upon the first wake-up
+			 * from idle.
+			 */
+			j_cdbs->prev_load = 0;
 		} else {
 			load = 100 * (wall_time - idle_time) / wall_time;
 			j_cdbs->prev_load = load;
-			j_cdbs->copy_prev_load = true;
 		}
 
 		if (load > max_load)
@@ -383,7 +373,6 @@
 			(j_cdbs->prev_cpu_wall - j_cdbs->prev_cpu_idle);
 		j_cdbs->prev_load = 100 * prev_load /
 				(unsigned int) j_cdbs->prev_cpu_wall;
-		j_cdbs->copy_prev_load = true;
 
 		if (ignore_nice)
 			j_cdbs->prev_cpu_nice =
drivers/cpufreq/cpufreq_governor.h  +5 -4

--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -134,12 +134,13 @@
 	u64 prev_cpu_idle;
 	u64 prev_cpu_wall;
 	u64 prev_cpu_nice;
-	unsigned int prev_load;
 	/*
-	 * Flag to ensure that we copy the previous load only once, upon the
-	 * first wake-up from idle.
+	 * Used to keep track of load in the previous interval. However, when
+	 * explicitly set to zero, it is used as a flag to ensure that we copy
+	 * the previous load to the current interval only once, upon the first
+	 * wake-up from idle.
 	 */
-	bool copy_prev_load;
+	unsigned int prev_load;
 	struct cpufreq_policy *cur_policy;
 	struct delayed_work work;
 	/*