Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

cpufreq: governor: Quit work-handlers early if governor is stopped

gov_queue_work() acquires cpufreq_governor_lock to allow
cpufreq_governor_stop() to drain delayed work items possibly scheduled
on CPUs that share the policy with a CPU being taken offline.

However, the same goal may be achieved in a more straightforward way if
the policy pointer in the struct cpu_dbs_info matching the policy CPU is
reset upfront by cpufreq_governor_stop() under the timer_mutex belonging
to it and checked against NULL, under the same lock, at the beginning of
dbs_timer().

In that case every instance of dbs_timer() run for a struct cpu_dbs_info
sharing the policy pointer in question after cpufreq_governor_stop() has
started will notice that the pointer is NULL and bail out immediately
without queuing up any new work items. In turn, gov_cancel_work()
called by cpufreq_governor_stop() before destroying timer_mutex will
wait for all of the delayed work items currently running on the CPUs
sharing the policy to drop the mutex, so it may be destroyed safely.

Make cpufreq_governor_stop() and dbs_timer() work as described and
modify gov_queue_work() so it does not acquire cpufreq_governor_lock any
more.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

Authored by Viresh Kumar, committed by Rafael J. Wysocki
3a91b069 539342f6

+23 -10
drivers/cpufreq/cpufreq_governor.c
```diff
@@ ... @@ gov_queue_work
 {
 	int i;
 
-	mutex_lock(&cpufreq_governor_lock);
-	if (!policy->governor_enabled)
-		goto out_unlock;
-
 	if (!all_cpus) {
 		/*
 		 * Use raw_smp_processor_id() to avoid preemptible warnings.
@@ ... @@
 		for_each_cpu(i, policy->cpus)
 			__gov_queue_work(i, dbs_data, delay);
 	}
-
-out_unlock:
-	mutex_unlock(&cpufreq_governor_lock);
 }
 EXPORT_SYMBOL_GPL(gov_queue_work);
@@ ... @@ dbs_timer
 	struct cpu_dbs_info *cdbs = container_of(work, struct cpu_dbs_info,
 						 dwork.work);
 	struct cpu_common_dbs_info *shared = cdbs->shared;
-	struct cpufreq_policy *policy = shared->policy;
-	struct dbs_data *dbs_data = policy->governor_data;
+	struct cpufreq_policy *policy;
+	struct dbs_data *dbs_data;
 	unsigned int sampling_rate, delay;
 	bool modify_all = true;
 
 	mutex_lock(&shared->timer_mutex);
+
+	policy = shared->policy;
+
+	/*
+	 * Governor might already be disabled and there is no point continuing
+	 * with the work-handler.
+	 */
+	if (!policy)
+		goto unlock;
+
+	dbs_data = policy->governor_data;
 
 	if (dbs_data->cdata->governor == GOV_CONSERVATIVE) {
 		struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
@@ ... @@
 	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
 	gov_queue_work(dbs_data, policy, delay, modify_all);
 
+unlock:
 	mutex_unlock(&shared->timer_mutex);
 }
@@ ... @@ cpufreq_governor_stop
 	if (!shared || !shared->policy)
 		return -EBUSY;
 
+	/*
+	 * Work-handler must see this updated, as it should not proceed any
+	 * further after governor is disabled. And so timer_mutex is taken while
+	 * updating this value.
+	 */
+	mutex_lock(&shared->timer_mutex);
+	shared->policy = NULL;
+	mutex_unlock(&shared->timer_mutex);
+
 	gov_cancel_work(dbs_data, policy);
 
-	shared->policy = NULL;
 	mutex_destroy(&shared->timer_mutex);
 	return 0;
 }
```