Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


clocksource: Revert "Remove kthread"

It turns out that the silly spawn-kthread-from-worker construct was actually needed.

clocksource_watchdog_kthread() cannot be called directly from
clocksource_watchdog_work(), because clocksource_select() calls
timekeeping_notify(), which uses stop_machine(). One cannot use
stop_machine() from a workqueue due to lock inversions with respect to
CPU hotplug.

Revert the patch, but add a comment that explains why we jump through
such apparently silly hoops.

Fixes: 7197e77abcb6 ("clocksource: Remove kthread")
Reported-by: Siegfried Metz <frame@mailbox.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Niklas Cassel <niklas.cassel@linaro.org>
Tested-by: Kevin Shanahan <kevin@shanahan.id.au>
Tested-by: viktor_jaegerskuepper@freenet.de
Tested-by: Siegfried Metz <frame@mailbox.org>
Cc: rafael.j.wysocki@intel.com
Cc: len.brown@intel.com
Cc: diego.viola@gmail.com
Cc: rui.zhang@intel.com
Cc: bjorn.andersson@linaro.org
Link: https://lkml.kernel.org/r/20180905084158.GR24124@hirez.programming.kicks-ass.net

Authored by Peter Zijlstra and committed by Thomas Gleixner
e2c631ba c43c5e9f

+30 -10
kernel/time/clocksource.c
···
 	spin_unlock_irqrestore(&watchdog_lock, *flags);
 }

+static int clocksource_watchdog_kthread(void *data);
+static void __clocksource_change_rating(struct clocksource *cs, int rating);
+
 /*
  * Interval: 0.5sec Threshold: 0.0625s
  */
 #define WATCHDOG_INTERVAL (HZ >> 1)
 #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
+
+static void clocksource_watchdog_work(struct work_struct *work)
+{
+	/*
+	 * We cannot directly run clocksource_watchdog_kthread() here, because
+	 * clocksource_select() calls timekeeping_notify() which uses
+	 * stop_machine(). One cannot use stop_machine() from a workqueue() due
+	 * lock inversions wrt CPU hotplug.
+	 *
+	 * Also, we only ever run this work once or twice during the lifetime
+	 * of the kernel, so there is no point in creating a more permanent
+	 * kthread for this.
+	 *
+	 * If kthread_run fails the next watchdog scan over the
+	 * watchdog_list will find the unstable clock again.
+	 */
+	kthread_run(clocksource_watchdog_kthread, NULL, "kwatchdog");
+}

 static void __clocksource_unstable(struct clocksource *cs)
 {
···
 	cs->flags |= CLOCK_SOURCE_UNSTABLE;

 	/*
-	 * If the clocksource is registered clocksource_watchdog_work() will
+	 * If the clocksource is registered clocksource_watchdog_kthread() will
 	 * re-rate and re-select.
 	 */
 	if (list_empty(&cs->list)) {
···
 	if (cs->mark_unstable)
 		cs->mark_unstable(cs);

-	/* kick clocksource_watchdog_work() */
+	/* kick clocksource_watchdog_kthread() */
 	if (finished_booting)
 		schedule_work(&watchdog_work);
 }
···
  * @cs: clocksource to be marked unstable
  *
  * This function is called by the x86 TSC code to mark clocksources as unstable;
- * it defers demotion and re-selection to a work.
+ * it defers demotion and re-selection to a kthread.
  */
 void clocksource_mark_unstable(struct clocksource *cs)
 {
···
 	}
 }

-static void __clocksource_change_rating(struct clocksource *cs, int rating);
-
-static int __clocksource_watchdog_work(void)
+static int __clocksource_watchdog_kthread(void)
 {
 	struct clocksource *cs, *tmp;
 	unsigned long flags;
···
 	return select;
 }

-static void clocksource_watchdog_work(struct work_struct *work)
+static int clocksource_watchdog_kthread(void *data)
 {
 	mutex_lock(&clocksource_mutex);
-	if (__clocksource_watchdog_work())
+	if (__clocksource_watchdog_kthread())
 		clocksource_select();
 	mutex_unlock(&clocksource_mutex);
+	return 0;
 }
···
 static void clocksource_select_watchdog(bool fallback) { }
 static inline void clocksource_dequeue_watchdog(struct clocksource *cs) { }
 static inline void clocksource_resume_watchdog(void) { }
-static inline int __clocksource_watchdog_work(void) { return 0; }
+static inline int __clocksource_watchdog_kthread(void) { return 0; }
 static bool clocksource_is_watchdog(struct clocksource *cs) { return false; }
 void clocksource_mark_unstable(struct clocksource *cs) { }
···
 	/*
 	 * Run the watchdog first to eliminate unstable clock sources
 	 */
-	__clocksource_watchdog_work();
+	__clocksource_watchdog_kthread();
 	clocksource_select();
 	mutex_unlock(&clocksource_mutex);
 	return 0;