Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'timers-core-2021-06-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer updates from Thomas Gleixner:
"Time and clocksource/clockevent related updates:

Core changes:

- Infrastructure to support per CPU "broadcast" devices for per CPU
clockevent devices which stop in deep idle states. This allows us
to utilize the more efficient architected timer on certain ARM SoCs
for normal operation instead of permanently using the slow-to-access
SoC specific clockevent device.

- Print the name of the broadcast/wakeup device in /proc/timer_list

- Make the clocksource watchdog more robust against delays between
reading the current active clocksource and the watchdog
clocksource. Such delays can be caused by NMIs, SMIs and vCPU
preemption.

Handle this by reading the watchdog clocksource twice, i.e. before
and after reading the current active clocksource. In case that the
In case the two watchdog reads show an excessive time delta, the
read sequence is repeated up to 3 times.

- Improve the debug output and add a test module for the watchdog
mechanism.

- Reimplementation of the venerable time64_to_tm() function with a
faster and significantly smaller version. Straight from the source,
i.e. the author of the related research paper contributed this!

Driver changes:

- No new drivers, not even new device tree bindings!

- Fixes, improvements and cleanups all over the place"
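The double-read scheme described in the watchdog bullet above can be sketched in userspace C. This is an illustrative simplification, not the kernel's code: `read_watchdog()`, `read_clocksource()`, the 50 µs bound and the retry count are stand-ins for the real hardware readers and the kernel's `cs_watchdog_read()` logic.

```c
#include <assert.h>
#include <stdint.h>

#define WD_MAX_SKEW_NS 50000 /* hypothetical read-back delay bound (50 us) */
#define MAX_RETRIES    3     /* mirrors the default of three retries */

/* Stand-ins for hardware counter reads; the kernel reads e.g. HPET/TSC. */
static uint64_t fake_ns;
static uint64_t read_watchdog(void)    { return fake_ns; }
static uint64_t read_clocksource(void) { fake_ns += 100; return fake_ns; }

/*
 * Double-read pattern: sample the watchdog clock before and after
 * reading the clocksource under test.  If the two watchdog samples are
 * too far apart, an NMI, SMI or vCPU preemption landed in between and
 * the clocksource sample cannot be trusted, so retry; give up after
 * MAX_RETRIES attempts.
 */
static int watchdog_read(uint64_t *csnow, uint64_t *wdnow)
{
	for (int nretries = 0; nretries <= MAX_RETRIES; nretries++) {
		uint64_t wd_start = read_watchdog();
		*csnow = read_clocksource();
		uint64_t wd_end = read_watchdog();

		if (wd_end - wd_start <= WD_MAX_SKEW_NS) {
			*wdnow = wd_start;
			return 1; /* sample usable for skew checking */
		}
	}
	return 0; /* persistently delayed: caller marks the clock unstable */
}
```

Only when a clean pair of watchdog reads brackets the clocksource read is the sample compared against the watchdog for skew; a persistently delayed read is what gets a clock marked unstable.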

* tag 'timers-core-2021-06-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (30 commits)
time/kunit: Add missing MODULE_LICENSE()
time: Improve performance of time64_to_tm()
clockevents: Use list_move() instead of list_del()/list_add()
clocksource: Print deviation in nanoseconds when a clocksource becomes unstable
clocksource: Provide kernel module to test clocksource watchdog
clocksource: Reduce clocksource-skew threshold
clocksource: Limit number of CPUs checked for clock synchronization
clocksource: Check per-CPU clock synchronization when marked unstable
clocksource: Retry clock read if long delays detected
clockevents: Add missing parameter documentation
clocksource/drivers/timer-ti-dm: Drop unnecessary restore
clocksource/arm_arch_timer: Improve Allwinner A64 timer workaround
clocksource/drivers/arm_global_timer: Remove duplicated argument in arm_global_timer
clocksource/drivers/arm_global_timer: Make symbol 'gt_clk_rate_change_nb' static
arm: zynq: don't disable CONFIG_ARM_GLOBAL_TIMER due to CONFIG_CPU_FREQ anymore
clocksource/drivers/arm_global_timer: Implement rate compensation whenever source clock changes
clocksource/drivers/ingenic: Rename unreasonable array names
clocksource/drivers/timer-ti-dm: Save and restore timer TIOCP_CFG
clocksource/drivers/mediatek: Ack and disable interrupts on suspend
clocksource/drivers/samsung_pwm: Constify source IO memory
...
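The time64_to_tm() rewrite in this pull replaces loop-based date computation with closed-form arithmetic. As a hedged illustration of the general technique only (this is the well-known "civil from days" formulation, not the kernel's actual time64_to_tm(), which also fills in weekday and day-of-year), days since the Unix epoch can be converted to a calendar date with a handful of divisions:

```c
#include <assert.h>

/*
 * Convert days since 1970-01-01 into year/month/day without iterating
 * over years or months.  Shifting the epoch to 0000-03-01 makes leap
 * days fall at the end of each 400-year era, so everything reduces to
 * a few fixed divisions.  Illustrative sketch, not kernel code.
 */
static void civil_from_days(long long z, long long *y, unsigned *m, unsigned *d)
{
	z += 719468; /* shift epoch from 1970-01-01 to 0000-03-01 */
	long long era = (z >= 0 ? z : z - 146096) / 146097;
	unsigned long long doe = (unsigned long long)(z - era * 146097); /* day of era   [0, 146096] */
	unsigned long long yoe = (doe - doe / 1460 + doe / 36524 -
				  doe / 146096) / 365;                   /* year of era  [0, 399] */
	long long yr = (long long)yoe + era * 400;
	unsigned long long doy = doe - (365 * yoe + yoe / 4 - yoe / 100); /* day of year [0, 365] */
	unsigned long long mp = (5 * doy + 2) / 153;                      /* March-based month */

	*d = (unsigned)(doy - (153 * mp + 2) / 5 + 1);  /* [1, 31] */
	*m = (unsigned)(mp < 10 ? mp + 3 : mp - 9);     /* [1, 12] */
	*y = yr + (*m <= 2);                            /* Jan/Feb belong to the next March-based year */
}
```

The performance win comes from trading data-dependent loops for straight-line arithmetic the compiler can turn into multiplies and shifts.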

+990 -138
+22
Documentation/admin-guide/kernel-parameters.txt
···
 			loops can be debugged more effectively on production
 			systems.
 
+	clocksource.max_cswd_read_retries= [KNL]
+			Number of clocksource_watchdog() retries due to
+			external delays before the clock will be marked
+			unstable.  Defaults to three retries, that is,
+			four attempts to read the clock under test.
+
+	clocksource.verify_n_cpus= [KNL]
+			Limit the number of CPUs checked for clocksources
+			marked with CLOCK_SOURCE_VERIFY_PERCPU that
+			are marked unstable due to excessive skew.
+			A negative value says to check all CPUs, while
+			zero says not to check any.  Values larger than
+			nr_cpu_ids are silently truncated to nr_cpu_ids.
+			The actual CPUs are chosen randomly, with
+			no replacement if the same CPU is chosen twice.
+
+	clocksource-wdtest.holdoff= [KNL]
+			Set the time in seconds that the clocksource
+			watchdog test waits before commencing its tests.
+			Defaults to zero when built as a module and to
+			10 seconds when built into the kernel.
+
 	clearcpuid=BITNUM[,BITNUM...] [X86]
 			Disable CPUID feature X for the kernel. See
 			arch/x86/include/asm/cpufeatures.h for the valid bit
+1 -1
arch/arm/mach-zynq/Kconfig
···
 	select ARCH_SUPPORTS_BIG_ENDIAN
 	select ARM_AMBA
 	select ARM_GIC
-	select ARM_GLOBAL_TIMER if !CPU_FREQ
+	select ARM_GLOBAL_TIMER
 	select CADENCE_TTC_TIMER
 	select HAVE_ARM_SCU if SMP
 	select HAVE_ARM_TWD if SMP
+3 -1
arch/x86/kernel/tsc.c
···
 static struct clocksource clocksource_tsc_early = {
 	.name			= "tsc-early",
 	.rating			= 299,
+	.uncertainty_margin	= 32 * NSEC_PER_MSEC,
 	.read			= read_tsc,
 	.mask			= CLOCKSOURCE_MASK(64),
 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
···
 	.mask			= CLOCKSOURCE_MASK(64),
 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
 				  CLOCK_SOURCE_VALID_FOR_HRES |
-				  CLOCK_SOURCE_MUST_VERIFY,
+				  CLOCK_SOURCE_MUST_VERIFY |
+				  CLOCK_SOURCE_VERIFY_PERCPU,
 	.vdso_clock_mode	= VDSO_CLOCKMODE_TSC,
 	.enable			= tsc_cs_enable,
 	.resume			= tsc_resume,
+14
drivers/clocksource/Kconfig
···
 	help
 	  This option enables support for the ARM global timer unit.
 
+config ARM_GT_INITIAL_PRESCALER_VAL
+	int "ARM global timer initial prescaler value"
+	default 2 if ARCH_ZYNQ
+	default 1
+	depends on ARM_GLOBAL_TIMER
+	help
+	  When the ARM global timer initializes, its current rate is declared
+	  to the kernel and maintained forever. Should it's parent clock
+	  change, the driver tries to fix the timer's internal prescaler.
+	  On some machs (i.e. Zynq) the initial prescaler value thus poses
+	  bounds about how much the parent clock is allowed to decrease or
+	  increase wrt the initial clock value.
+	  This affects CPU_FREQ max delta from the initial frequency.
+
 config ARM_TIMER_SP804
 	bool "Support for Dual Timer SP804 module" if COMPILE_TEST
 	depends on GENERIC_SCHED_CLOCK && CLKDEV_LOOKUP
+1 -2
drivers/clocksource/arm_arch_timer.c
···
 #define to_arch_timer(e) container_of(e, struct arch_timer, evt)
 
 static u32 arch_timer_rate __ro_after_init;
-u32 arch_timer_rate1 __ro_after_init;
 static int arch_timer_ppi[ARCH_TIMER_MAX_TIMER_PPI] __ro_after_init;
 
 static const char *arch_timer_ppi_names[ARCH_TIMER_MAX_TIMER_PPI] = {
···
 	do {								\
 		_val = read_sysreg(reg);				\
 		_retries--;						\
-	} while (((_val + 1) & GENMASK(9, 0)) <= 1 && _retries);	\
+	} while (((_val + 1) & GENMASK(8, 0)) <= 1 && _retries);	\
 									\
 	WARN_ON_ONCE(!_retries);					\
 	_val;								\
+112 -10
drivers/clocksource/arm_global_timer.c
···
 #define GT_CONTROL_COMP_ENABLE		BIT(1)	/* banked */
 #define GT_CONTROL_IRQ_ENABLE		BIT(2)	/* banked */
 #define GT_CONTROL_AUTO_INC		BIT(3)	/* banked */
+#define GT_CONTROL_PRESCALER_SHIFT	8
+#define GT_CONTROL_PRESCALER_MAX	0xF
+#define GT_CONTROL_PRESCALER_MASK	(GT_CONTROL_PRESCALER_MAX << \
+					 GT_CONTROL_PRESCALER_SHIFT)
 
 #define GT_INT_STATUS			0x0c
 #define GT_INT_STATUS_EVENT_FLAG	BIT(0)
···
 #define GT_COMP1			0x14
 #define GT_AUTO_INC			0x18
 
+#define MAX_F_ERR 50
 /*
  * We are expecting to be clocked by the ARM peripheral clock.
  *
···
 * the units for all operations.
 */
 static void __iomem *gt_base;
-static unsigned long gt_clk_rate;
+static struct notifier_block gt_clk_rate_change_nb;
+static u32 gt_psv_new, gt_psv_bck, gt_target_rate;
 static int gt_ppi;
 static struct clock_event_device __percpu *gt_evt;
···
 	unsigned long ctrl;
 
 	counter += delta;
-	ctrl = GT_CONTROL_TIMER_ENABLE;
+	ctrl = readl(gt_base + GT_CONTROL);
+	ctrl &= ~(GT_CONTROL_COMP_ENABLE | GT_CONTROL_IRQ_ENABLE |
+		  GT_CONTROL_AUTO_INC);
+	ctrl |= GT_CONTROL_TIMER_ENABLE;
 	writel_relaxed(ctrl, gt_base + GT_CONTROL);
 	writel_relaxed(lower_32_bits(counter), gt_base + GT_COMP0);
 	writel_relaxed(upper_32_bits(counter), gt_base + GT_COMP1);
···
 static int gt_clockevent_set_periodic(struct clock_event_device *evt)
 {
-	gt_compare_set(DIV_ROUND_CLOSEST(gt_clk_rate, HZ), 1);
+	gt_compare_set(DIV_ROUND_CLOSEST(gt_target_rate, HZ), 1);
 	return 0;
 }
···
 	clk->cpumask = cpumask_of(cpu);
 	clk->rating = 300;
 	clk->irq = gt_ppi;
-	clockevents_config_and_register(clk, gt_clk_rate,
+	clockevents_config_and_register(clk, gt_target_rate,
 					1, 0xffffffff);
 	enable_percpu_irq(clk->irq, IRQ_TYPE_NONE);
 	return 0;
···
 	.read_current_timer = gt_read_long,
 };
 
+static void gt_write_presc(u32 psv)
+{
+	u32 reg;
+
+	reg = readl(gt_base + GT_CONTROL);
+	reg &= ~GT_CONTROL_PRESCALER_MASK;
+	reg |= psv << GT_CONTROL_PRESCALER_SHIFT;
+	writel(reg, gt_base + GT_CONTROL);
+}
+
+static u32 gt_read_presc(void)
+{
+	u32 reg;
+
+	reg = readl(gt_base + GT_CONTROL);
+	reg &= GT_CONTROL_PRESCALER_MASK;
+	return reg >> GT_CONTROL_PRESCALER_SHIFT;
+}
+
 static void __init gt_delay_timer_init(void)
 {
-	gt_delay_timer.freq = gt_clk_rate;
+	gt_delay_timer.freq = gt_target_rate;
 	register_current_timer_delay(&gt_delay_timer);
 }
···
 	writel(0, gt_base + GT_CONTROL);
 	writel(0, gt_base + GT_COUNTER0);
 	writel(0, gt_base + GT_COUNTER1);
-	/* enables timer on all the cores */
-	writel(GT_CONTROL_TIMER_ENABLE, gt_base + GT_CONTROL);
+	/* set prescaler and enable timer on all the cores */
+	writel(((CONFIG_ARM_GT_INITIAL_PRESCALER_VAL - 1) <<
+		GT_CONTROL_PRESCALER_SHIFT)
+	       | GT_CONTROL_TIMER_ENABLE, gt_base + GT_CONTROL);
 
 #ifdef CONFIG_CLKSRC_ARM_GLOBAL_TIMER_SCHED_CLOCK
-	sched_clock_register(gt_sched_clock_read, 64, gt_clk_rate);
+	sched_clock_register(gt_sched_clock_read, 64, gt_target_rate);
 #endif
-	return clocksource_register_hz(&gt_clocksource, gt_clk_rate);
+	return clocksource_register_hz(&gt_clocksource, gt_target_rate);
+}
+
+static int gt_clk_rate_change_cb(struct notifier_block *nb,
+				 unsigned long event, void *data)
+{
+	struct clk_notifier_data *ndata = data;
+
+	switch (event) {
+	case PRE_RATE_CHANGE:
+	{
+		int psv;
+
+		psv = DIV_ROUND_CLOSEST(ndata->new_rate,
+					gt_target_rate);
+
+		if (abs(gt_target_rate - (ndata->new_rate / psv)) > MAX_F_ERR)
+			return NOTIFY_BAD;
+
+		psv--;
+
+		/* prescaler within legal range? */
+		if (psv < 0 || psv > GT_CONTROL_PRESCALER_MAX)
+			return NOTIFY_BAD;
+
+		/*
+		 * store timer clock ctrl register so we can restore it in case
+		 * of an abort.
+		 */
+		gt_psv_bck = gt_read_presc();
+		gt_psv_new = psv;
+		/* scale down: adjust divider in post-change notification */
+		if (ndata->new_rate < ndata->old_rate)
+			return NOTIFY_DONE;
+
+		/* scale up: adjust divider now - before frequency change */
+		gt_write_presc(psv);
+		break;
+	}
+	case POST_RATE_CHANGE:
+		/* scale up: pre-change notification did the adjustment */
+		if (ndata->new_rate > ndata->old_rate)
+			return NOTIFY_OK;
+
+		/* scale down: adjust divider now - after frequency change */
+		gt_write_presc(gt_psv_new);
+		break;
+
+	case ABORT_RATE_CHANGE:
+		/* we have to undo the adjustment in case we scale up */
+		if (ndata->new_rate < ndata->old_rate)
+			return NOTIFY_OK;
+
+		/* restore original register value */
+		gt_write_presc(gt_psv_bck);
+		break;
+	default:
+		return NOTIFY_DONE;
+	}
+
+	return NOTIFY_DONE;
 }
 
 static int __init global_timer_of_register(struct device_node *np)
 {
 	struct clk *gt_clk;
+	static unsigned long gt_clk_rate;
 	int err = 0;
 
 	/*
···
 	}
 
 	gt_clk_rate = clk_get_rate(gt_clk);
+	gt_target_rate = gt_clk_rate / CONFIG_ARM_GT_INITIAL_PRESCALER_VAL;
+	gt_clk_rate_change_nb.notifier_call =
+		gt_clk_rate_change_cb;
+	err = clk_notifier_register(gt_clk, &gt_clk_rate_change_nb);
+	if (err) {
+		pr_warn("Unable to register clock notifier\n");
+		goto out_clk;
+	}
+
 	gt_evt = alloc_percpu(struct clock_event_device);
 	if (!gt_evt) {
 		pr_warn("global-timer: can't allocate memory\n");
 		err = -ENOMEM;
-		goto out_clk;
+		goto out_clk_nb;
 	}
 
 	err = request_percpu_irq(gt_ppi, gt_clockevent_interrupt,
···
 	free_percpu_irq(gt_ppi, gt_evt);
 out_free:
 	free_percpu(gt_evt);
+out_clk_nb:
+	clk_notifier_unregister(gt_clk, &gt_clk_rate_change_nb);
 out_clk:
 	clk_disable_unprepare(gt_clk);
 out_unmap:
+5 -5
drivers/clocksource/ingenic-sysost.c
···
 
 static const char * const ingenic_ost_clk_parents[] = { "ext" };
 
-static const struct ingenic_ost_clk_info ingenic_ost_clk_info[] = {
+static const struct ingenic_ost_clk_info x1000_ost_clk_info[] = {
 	[OST_CLK_PERCPU_TIMER] = {
 		.init_data = {
 			.name = "percpu timer",
···
 	.num_channels = 2,
 };
 
-static const struct of_device_id __maybe_unused ingenic_ost_of_match[] __initconst = {
-	{ .compatible = "ingenic,x1000-ost", .data = &x1000_soc_info, },
+static const struct of_device_id __maybe_unused ingenic_ost_of_matches[] __initconst = {
+	{ .compatible = "ingenic,x1000-ost", .data = &x1000_soc_info },
 	{ /* sentinel */ }
 };
 
 static int __init ingenic_ost_probe(struct device_node *np)
 {
-	const struct of_device_id *id = of_match_node(ingenic_ost_of_match, np);
+	const struct of_device_id *id = of_match_node(ingenic_ost_of_matches, np);
 	struct ingenic_ost *ost;
 	unsigned int i;
 	int ret;
···
 	ost->clocks->num = ost->soc_info->num_channels;
 
 	for (i = 0; i < ost->clocks->num; i++) {
-		ret = ingenic_ost_register_clock(ost, i, &ingenic_ost_clk_info[i], ost->clocks);
+		ret = ingenic_ost_register_clock(ost, i, &x1000_ost_clk_info[i], ost->clocks);
 		if (ret) {
 			pr_crit("%s: Cannot register clock %d\n", __func__, i);
 			goto err_unregister_ost_clocks;
+29 -12
drivers/clocksource/samsung_pwm_timer.c
···
 *		http://www.samsung.com/
 *
 * samsung - Common hr-timer support (s3c and s5p)
-*/
+ */
 
 #include <linux/interrupt.h>
 #include <linux/irq.h>
···
 
 #include <clocksource/samsung_pwm.h>
 
-
 /*
  * Clocksource driver
  */
···
 #define TCFG0_PRESCALER_MASK		0xff
 #define TCFG0_PRESCALER1_SHIFT		8
 
-#define TCFG1_SHIFT(x)	((x) * 4)
-#define TCFG1_MUX_MASK	0xf
+#define TCFG1_SHIFT(x)			((x) * 4)
+#define TCFG1_MUX_MASK			0xf
 
 /*
  * Each channel occupies 4 bits in TCON register, but there is a gap of 4
···
 
 struct samsung_pwm_clocksource {
 	void __iomem *base;
-	void __iomem *source_reg;
+	const void __iomem *source_reg;
 	unsigned int irq[SAMSUNG_PWM_NUM];
 	struct samsung_pwm_variant variant;
 
···
 }
 
 static int samsung_set_next_event(unsigned long cycles,
-				struct clock_event_device *evt)
+				  struct clock_event_device *evt)
 {
 	/*
 	 * This check is needed to account for internal rounding
···
 
 	if (pwm.variant.has_tint_cstat) {
 		u32 mask = (1 << pwm.event_id);
+
 		writel(mask | (mask << 5), pwm.base + REG_TINT_CSTAT);
 	}
 }
···
 
 	if (pwm.variant.has_tint_cstat) {
 		u32 mask = (1 << pwm.event_id);
+
 		writel(mask | (mask << 5), pwm.base + REG_TINT_CSTAT);
 	}
 
···
 
 	time_event_device.cpumask = cpumask_of(0);
 	clockevents_config_and_register(&time_event_device,
-					clock_rate, 1, pwm.tcnt_max);
+					clock_rate, 1, pwm.tcnt_max);
 
 	irq_number = pwm.irq[pwm.event_id];
 	if (request_irq(irq_number, samsung_clock_event_isr,
···
 
 	if (pwm.variant.has_tint_cstat) {
 		u32 mask = (1 << pwm.event_id);
+
 		writel(mask | (mask << 5), pwm.base + REG_TINT_CSTAT);
 	}
 }
···
 	pwm.source_reg = pwm.base + pwm.source_id * 0x0c + 0x14;
 
 	sched_clock_register(samsung_read_sched_clock,
-			pwm.variant.bits, clock_rate);
+			     pwm.variant.bits, clock_rate);
 
 	samsung_clocksource.mask = CLOCKSOURCE_MASK(pwm.variant.bits);
 	return clocksource_register_hz(&samsung_clocksource, clock_rate);
···
 }
 
 void __init samsung_pwm_clocksource_init(void __iomem *base,
-		unsigned int *irqs, struct samsung_pwm_variant *variant)
+					 unsigned int *irqs,
+					 const struct samsung_pwm_variant *variant)
 {
 	pwm.base = base;
 	memcpy(&pwm.variant, variant, sizeof(pwm.variant));
···
 	struct property *prop;
 	const __be32 *cur;
 	u32 val;
-	int i;
+	int i, ret;
 
 	memcpy(&pwm.variant, variant, sizeof(pwm.variant));
 	for (i = 0; i < SAMSUNG_PWM_NUM; ++i)
···
 	pwm.timerclk = of_clk_get_by_name(np, "timers");
 	if (IS_ERR(pwm.timerclk)) {
 		pr_crit("failed to get timers clock for timer\n");
-		return PTR_ERR(pwm.timerclk);
+		ret = PTR_ERR(pwm.timerclk);
+		goto err_clk;
 	}
 
-	return _samsung_pwm_clocksource_init();
+	ret = _samsung_pwm_clocksource_init();
+	if (ret)
+		goto err_clocksource;
+
+	return 0;
+
+err_clocksource:
+	clk_put(pwm.timerclk);
+	pwm.timerclk = NULL;
+err_clk:
+	iounmap(pwm.base);
+	pwm.base = NULL;
+
+	return ret;
 }
 
 static const struct samsung_pwm_variant s3c24xx_variant = {
+24
drivers/clocksource/timer-mediatek.c
···
 		       timer_of_base(to) + GPT_IRQ_EN_REG);
 }
 
+static void mtk_gpt_resume(struct clock_event_device *clk)
+{
+	struct timer_of *to = to_timer_of(clk);
+
+	mtk_gpt_enable_irq(to, TIMER_CLK_EVT);
+}
+
+static void mtk_gpt_suspend(struct clock_event_device *clk)
+{
+	struct timer_of *to = to_timer_of(clk);
+
+	/* Disable all interrupts */
+	writel(0x0, timer_of_base(to) + GPT_IRQ_EN_REG);
+
+	/*
+	 * This is called with interrupts disabled,
+	 * so we need to ack any interrupt that is pending
+	 * or for example ATF will prevent a suspend from completing.
+	 */
+	writel(0x3f, timer_of_base(to) + GPT_IRQ_ACK_REG);
+}
+
 static struct timer_of to = {
 	.flags = TIMER_OF_IRQ | TIMER_OF_BASE | TIMER_OF_CLOCK,
 
···
 	to.clkevt.set_state_oneshot = mtk_gpt_clkevt_shutdown;
 	to.clkevt.tick_resume = mtk_gpt_clkevt_shutdown;
 	to.clkevt.set_next_event = mtk_gpt_clkevt_next_event;
+	to.clkevt.suspend = mtk_gpt_suspend;
+	to.clkevt.resume = mtk_gpt_resume;
 	to.of_irq.handler = mtk_gpt_interrupt;
 
 	ret = timer_of_init(node, &to);
+8 -1
drivers/clocksource/timer-ti-dm.c
···
 
 static void omap_timer_restore_context(struct omap_dm_timer *timer)
 {
+	__omap_dm_timer_write(timer, OMAP_TIMER_OCP_CFG_OFFSET,
+			      timer->context.ocp_cfg, 0);
+
 	omap_dm_timer_write_reg(timer, OMAP_TIMER_WAKEUP_EN_REG,
 				timer->context.twer);
 	omap_dm_timer_write_reg(timer, OMAP_TIMER_COUNTER_REG,
···
 
 static void omap_timer_save_context(struct omap_dm_timer *timer)
 {
+	timer->context.ocp_cfg =
+		__omap_dm_timer_read(timer, OMAP_TIMER_OCP_CFG_OFFSET, 0);
+
 	timer->context.tclr =
 		omap_dm_timer_read_reg(timer, OMAP_TIMER_CTRL_REG);
 	timer->context.twer =
···
 			break;
 		omap_timer_save_context(timer);
 		break;
-	case CPU_CLUSTER_PM_ENTER_FAILED:
+	case CPU_CLUSTER_PM_ENTER_FAILED:	/* No need to restore context */
+		break;
 	case CPU_CLUSTER_PM_EXIT:
 		if ((timer->capability & OMAP_TIMER_ALWON) ||
 		    !atomic_read(&timer->enabled))
+2 -1
include/clocksource/samsung_pwm.h
···
 };
 
 void samsung_pwm_clocksource_init(void __iomem *base,
-		unsigned int *irqs, struct samsung_pwm_variant *variant);
+				  unsigned int *irqs,
+				  const struct samsung_pwm_variant *variant);
 
 #endif /* __CLOCKSOURCE_SAMSUNG_PWM_H */
+1
include/clocksource/timer-ti-dm.h
···
 #define OMAP_TIMER_ERRATA_I103_I767			0x80000000
 
 struct timer_regs {
+	u32 ocp_cfg;
 	u32 tidr;
 	u32 tier;
 	u32 twer;
+7 -1
include/linux/clocksource.h
···
  * @shift:		Cycle to nanosecond divisor (power of two)
  * @max_idle_ns:	Maximum idle time permitted by the clocksource (nsecs)
  * @maxadj:		Maximum adjustment value to mult (~11%)
+ * @uncertainty_margin:	Maximum uncertainty in nanoseconds per half second.
+ *			Zero says to use default WATCHDOG_THRESHOLD.
  * @archdata:		Optional arch-specific data
  * @max_cycles:		Maximum safe cycle value which won't overflow on
  *			multiplication
···
 	u32			shift;
 	u64			max_idle_ns;
 	u32			maxadj;
+	u32			uncertainty_margin;
 #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA
 	struct arch_clocksource_data archdata;
 #endif
···
 #define CLOCK_SOURCE_UNSTABLE			0x40
 #define CLOCK_SOURCE_SUSPEND_NONSTOP		0x80
 #define CLOCK_SOURCE_RESELECT			0x100
-
+#define CLOCK_SOURCE_VERIFY_PERCPU		0x200
 /* simplify initialization of mask field */
 #define CLOCKSOURCE_MASK(bits) GENMASK_ULL((bits) - 1, 0)
···
 
 #define TIMER_ACPI_DECLARE(name, table_id, fn)		\
 	ACPI_DECLARE_PROBE_ENTRY(timer, name, table_id, 0, NULL, 0, fn)
+
+extern ulong max_cswd_read_retries;
+void clocksource_verify_percpu(struct clocksource *cs);
 
 #endif /* _LINUX_CLOCKSOURCE_H */
+9
kernel/time/Kconfig
···
 	  lack support for the generic clockevent framework.
 	  New platforms should use generic clockevents instead.
 
+config TIME_KUNIT_TEST
+	tristate "KUnit test for kernel/time functions" if !KUNIT_ALL_TESTS
+	depends on KUNIT
+	default KUNIT_ALL_TESTS
+	help
+	  Enable this option to test RTC library functions.
+
+	  If unsure, say N.
+
 if GENERIC_CLOCKEVENTS
 menu "Timers subsystem"
 
+2
kernel/time/Makefile
···
 obj-$(CONFIG_DEBUG_FS)				+= timekeeping_debug.o
 obj-$(CONFIG_TEST_UDELAY)			+= test_udelay.o
 obj-$(CONFIG_TIME_NS)				+= namespace.o
+obj-$(CONFIG_TEST_CLOCKSOURCE_WATCHDOG)	+= clocksource-wdtest.o
+obj-$(CONFIG_TIME_KUNIT_TEST)			+= time_test.o
+11 -12
kernel/time/clockevents.c
···
 	while (!list_empty(&clockevents_released)) {
 		dev = list_entry(clockevents_released.next,
 				 struct clock_event_device, list);
-		list_del(&dev->list);
-		list_add(&dev->list, &clockevent_devices);
+		list_move(&dev->list, &clockevent_devices);
 		tick_check_new_device(dev);
 	}
 }
···
 	if (old) {
 		module_put(old->owner);
 		clockevents_switch_state(old, CLOCK_EVT_STATE_DETACHED);
-		list_del(&old->list);
-		list_add(&old->list, &clockevents_released);
+		list_move(&old->list, &clockevents_released);
 	}
 
 	if (new) {
···
 
 /**
  * tick_cleanup_dead_cpu - Cleanup the tick and clockevents of a dead cpu
+ * @cpu:	The dead CPU
  */
 void tick_cleanup_dead_cpu(int cpu)
 {
···
 static DEFINE_PER_CPU(struct device, tick_percpu_dev);
 static struct tick_device *tick_get_tick_dev(struct device *dev);
 
-static ssize_t sysfs_show_current_tick_dev(struct device *dev,
-					   struct device_attribute *attr,
-					   char *buf)
+static ssize_t current_device_show(struct device *dev,
+				   struct device_attribute *attr,
+				   char *buf)
 {
 	struct tick_device *td;
 	ssize_t count = 0;
···
 	raw_spin_unlock_irq(&clockevents_lock);
 	return count;
 }
-static DEVICE_ATTR(current_device, 0444, sysfs_show_current_tick_dev, NULL);
+static DEVICE_ATTR_RO(current_device);
 
 /* We don't support the abomination of removable broadcast devices */
-static ssize_t sysfs_unbind_tick_dev(struct device *dev,
-				     struct device_attribute *attr,
-				     const char *buf, size_t count)
+static ssize_t unbind_device_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t count)
 {
 	char name[CS_NAME_LEN];
 	ssize_t ret = sysfs_get_uname(buf, name, count);
···
 	mutex_unlock(&clockevents_mutex);
 	return ret ? ret : count;
 }
-static DEVICE_ATTR(unbind_device, 0200, NULL, sysfs_unbind_tick_dev);
+static DEVICE_ATTR_WO(unbind_device);
 
 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 static struct device tick_bc_dev = {
+202
kernel/time/clocksource-wdtest.c
···
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Unit test for the clocksource watchdog.
+ *
+ * Copyright (C) 2021 Facebook, Inc.
+ *
+ * Author: Paul E. McKenney <paulmck@kernel.org>
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/device.h>
+#include <linux/clocksource.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/sched.h> /* for spin_unlock_irq() using preempt_count() m68k */
+#include <linux/tick.h>
+#include <linux/kthread.h>
+#include <linux/delay.h>
+#include <linux/prandom.h>
+#include <linux/cpu.h>
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Paul E. McKenney <paulmck@kernel.org>");
+
+static int holdoff = IS_BUILTIN(CONFIG_TEST_CLOCKSOURCE_WATCHDOG) ? 10 : 0;
+module_param(holdoff, int, 0444);
+MODULE_PARM_DESC(holdoff, "Time to wait to start test (s).");
+
+/* Watchdog kthread's task_struct pointer for debug purposes. */
+static struct task_struct *wdtest_task;
+
+static u64 wdtest_jiffies_read(struct clocksource *cs)
+{
+	return (u64)jiffies;
+}
+
+/* Assume HZ > 100. */
+#define JIFFIES_SHIFT	8
+
+static struct clocksource clocksource_wdtest_jiffies = {
+	.name			= "wdtest-jiffies",
+	.rating			= 1, /* lowest valid rating*/
+	.uncertainty_margin	= TICK_NSEC,
+	.read			= wdtest_jiffies_read,
+	.mask			= CLOCKSOURCE_MASK(32),
+	.flags			= CLOCK_SOURCE_MUST_VERIFY,
+	.mult			= TICK_NSEC << JIFFIES_SHIFT, /* details above */
+	.shift			= JIFFIES_SHIFT,
+	.max_cycles		= 10,
+};
+
+static int wdtest_ktime_read_ndelays;
+static bool wdtest_ktime_read_fuzz;
+
+static u64 wdtest_ktime_read(struct clocksource *cs)
+{
+	int wkrn = READ_ONCE(wdtest_ktime_read_ndelays);
+	static int sign = 1;
+	u64 ret;
+
+	if (wkrn) {
+		udelay(cs->uncertainty_margin / 250);
+		WRITE_ONCE(wdtest_ktime_read_ndelays, wkrn - 1);
+	}
+	ret = ktime_get_real_fast_ns();
+	if (READ_ONCE(wdtest_ktime_read_fuzz)) {
+		sign = -sign;
+		ret = ret + sign * 100 * NSEC_PER_MSEC;
+	}
+	return ret;
+}
+
+static void wdtest_ktime_cs_mark_unstable(struct clocksource *cs)
+{
+	pr_info("--- Marking %s unstable due to clocksource watchdog.\n", cs->name);
+}
+
+#define KTIME_FLAGS (CLOCK_SOURCE_IS_CONTINUOUS | \
+		     CLOCK_SOURCE_VALID_FOR_HRES | \
+		     CLOCK_SOURCE_MUST_VERIFY | \
+		     CLOCK_SOURCE_VERIFY_PERCPU)
+
+static struct clocksource clocksource_wdtest_ktime = {
+	.name			= "wdtest-ktime",
+	.rating			= 300,
+	.read			= wdtest_ktime_read,
+	.mask			= CLOCKSOURCE_MASK(64),
+	.flags			= KTIME_FLAGS,
+	.mark_unstable		= wdtest_ktime_cs_mark_unstable,
+	.list			= LIST_HEAD_INIT(clocksource_wdtest_ktime.list),
+};
+
+/* Reset the clocksource if needed. */
+static void wdtest_ktime_clocksource_reset(void)
+{
+	if (clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE) {
+		clocksource_unregister(&clocksource_wdtest_ktime);
+		clocksource_wdtest_ktime.flags = KTIME_FLAGS;
+		schedule_timeout_uninterruptible(HZ / 10);
+		clocksource_register_khz(&clocksource_wdtest_ktime, 1000 * 1000);
+	}
+}
+
+/* Run the specified series of watchdog tests. */
+static int wdtest_func(void *arg)
+{
+	unsigned long j1, j2;
+	char *s;
+	int i;
+
+	schedule_timeout_uninterruptible(holdoff * HZ);
+
+	/*
+	 * Verify that jiffies-like clocksources get the manually
+	 * specified uncertainty margin.
+	 */
+	pr_info("--- Verify jiffies-like uncertainty margin.\n");
+	__clocksource_register(&clocksource_wdtest_jiffies);
+	WARN_ON_ONCE(clocksource_wdtest_jiffies.uncertainty_margin != TICK_NSEC);
+
+	j1 = clocksource_wdtest_jiffies.read(&clocksource_wdtest_jiffies);
+	schedule_timeout_uninterruptible(HZ);
+	j2 = clocksource_wdtest_jiffies.read(&clocksource_wdtest_jiffies);
+	WARN_ON_ONCE(j1 == j2);
+
+	clocksource_unregister(&clocksource_wdtest_jiffies);
+
+	/*
+	 * Verify that tsc-like clocksources are assigned a reasonable
+	 * uncertainty margin.
+	 */
+	pr_info("--- Verify tsc-like uncertainty margin.\n");
+	clocksource_register_khz(&clocksource_wdtest_ktime, 1000 * 1000);
+	WARN_ON_ONCE(clocksource_wdtest_ktime.uncertainty_margin < NSEC_PER_USEC);
+
+	j1 = clocksource_wdtest_ktime.read(&clocksource_wdtest_ktime);
+	udelay(1);
+	j2 = clocksource_wdtest_ktime.read(&clocksource_wdtest_ktime);
+	pr_info("--- tsc-like times: %lu - %lu = %lu.\n", j2, j1, j2 - j1);
+	WARN_ON_ONCE(time_before(j2, j1 + NSEC_PER_USEC));
+
+	/* Verify tsc-like stability with various numbers of errors injected. */
+	for (i = 0; i <= max_cswd_read_retries + 1; i++) {
+		if (i <= 1 && i < max_cswd_read_retries)
+			s = "";
+		else if (i <= max_cswd_read_retries)
+			s = ", expect message";
+		else
+			s = ", expect clock skew";
+		pr_info("--- Watchdog with %dx error injection, %lu retries%s.\n", i, max_cswd_read_retries, s);
+		WRITE_ONCE(wdtest_ktime_read_ndelays, i);
+		schedule_timeout_uninterruptible(2 * HZ);
+		WARN_ON_ONCE(READ_ONCE(wdtest_ktime_read_ndelays));
+		WARN_ON_ONCE((i <= max_cswd_read_retries) !=
+			     !(clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE));
+		wdtest_ktime_clocksource_reset();
+	}
+
+	/* Verify tsc-like stability with clock-value-fuzz error injection. */
+	pr_info("--- Watchdog clock-value-fuzz error injection, expect clock skew and per-CPU mismatches.\n");
+	WRITE_ONCE(wdtest_ktime_read_fuzz, true);
+	schedule_timeout_uninterruptible(2 * HZ);
+	WARN_ON_ONCE(!(clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE));
+	clocksource_verify_percpu(&clocksource_wdtest_ktime);
+	WRITE_ONCE(wdtest_ktime_read_fuzz, false);
+
+	clocksource_unregister(&clocksource_wdtest_ktime);
+
+	pr_info("--- Done with test.\n");
+	return 0;
+}
+
+static void wdtest_print_module_parms(void)
+{
+	pr_alert("--- holdoff=%d\n", holdoff);
+}
+
+/* Cleanup function. */
+static void clocksource_wdtest_cleanup(void)
+{
+}
+
+static int __init clocksource_wdtest_init(void)
+{
+	int ret = 0;
+
+	wdtest_print_module_parms();
+
+	/* Create watchdog-test task. */
+	wdtest_task = kthread_run(wdtest_func, NULL, "wdtest");
+	if (IS_ERR(wdtest_task)) {
+		ret = PTR_ERR(wdtest_task);
+		pr_warn("%s: Failed to create wdtest kthread.\n", __func__);
+		wdtest_task = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+module_init(clocksource_wdtest_init);
+module_exit(clocksource_wdtest_cleanup);
+214 -13
kernel/time/clocksource.c
···
 #include <linux/sched.h> /* for spin_unlock_irq() using preempt_count() m68k */
 #include <linux/tick.h>
 #include <linux/kthread.h>
+#include <linux/prandom.h>
+#include <linux/cpu.h>
 
 #include "tick-internal.h"
 #include "timekeeping_internal.h"
···
 static int finished_booting;
 static u64 suspend_start;
 
+/*
+ * Threshold: 0.0312s, when doubled: 0.0625s.
+ * Also a default for cs->uncertainty_margin when registering clocks.
+ */
+#define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 5)
+
+/*
+ * Maximum permissible delay between two readouts of the watchdog
+ * clocksource surrounding a read of the clocksource being validated.
+ * This delay could be due to SMIs, NMIs, or to VCPU preemptions.  Used as
+ * a lower bound for cs->uncertainty_margin values when registering clocks.
+ */
+#define WATCHDOG_MAX_SKEW (50 * NSEC_PER_USEC)
+
 #ifdef CONFIG_CLOCKSOURCE_WATCHDOG
 static void clocksource_watchdog_work(struct work_struct *work);
 static void clocksource_select(void);
···
 static void __clocksource_change_rating(struct clocksource *cs, int rating);
 
 /*
- * Interval: 0.5sec Threshold: 0.0625s
+ * Interval: 0.5sec.
  */
 #define WATCHDOG_INTERVAL (HZ >> 1)
-#define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
 
 static void clocksource_watchdog_work(struct work_struct *work)
 {
···
 	spin_unlock_irqrestore(&watchdog_lock, flags);
 }
 
+ulong max_cswd_read_retries = 3;
+module_param(max_cswd_read_retries, ulong, 0644);
+EXPORT_SYMBOL_GPL(max_cswd_read_retries);
+static int verify_n_cpus = 8;
+module_param(verify_n_cpus, int, 0644);
+
+static bool cs_watchdog_read(struct clocksource *cs, u64 *csnow, u64 *wdnow)
+{
+	unsigned int nretries;
+	u64 wd_end, wd_delta;
+	int64_t wd_delay;
+
+	for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
+		local_irq_disable();
+		*wdnow = watchdog->read(watchdog);
+		*csnow = cs->read(cs);
+		wd_end = watchdog->read(watchdog);
+		local_irq_enable();
+
+		wd_delta = clocksource_delta(wd_end, *wdnow, watchdog->mask);
+		wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult,
+					      watchdog->shift);
+		if (wd_delay <= WATCHDOG_MAX_SKEW) {
+			if (nretries > 1 || nretries >= max_cswd_read_retries) {
+				pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
+					smp_processor_id(), watchdog->name, nretries);
+			}
+			return true;
+		}
+	}
+
+	pr_warn("timekeeping watchdog on CPU%d: %s read-back delay of %lldns, attempt %d, marking unstable\n",
+		smp_processor_id(), watchdog->name, wd_delay, nretries);
+	return false;
+}
+
+static u64 csnow_mid;
+static cpumask_t cpus_ahead;
+static cpumask_t cpus_behind;
+static cpumask_t cpus_chosen;
+
+static void clocksource_verify_choose_cpus(void)
+{
+	int cpu, i, n = verify_n_cpus;
+
+	if (n < 0) {
+		/* Check all of the CPUs. */
+		cpumask_copy(&cpus_chosen, cpu_online_mask);
+		cpumask_clear_cpu(smp_processor_id(), &cpus_chosen);
+		return;
+	}
+
+	/* If no checking desired, or no other CPU to check, leave. */
+	cpumask_clear(&cpus_chosen);
+	if (n == 0 || num_online_cpus() <= 1)
+		return;
+
+	/* Make sure to select at least one CPU other than the current CPU. */
+	cpu = cpumask_next(-1, cpu_online_mask);
+	if (cpu == smp_processor_id())
+		cpu = cpumask_next(cpu, cpu_online_mask);
+	if (WARN_ON_ONCE(cpu >= nr_cpu_ids))
+		return;
+	cpumask_set_cpu(cpu, &cpus_chosen);
+
+	/* Force a sane value for the boot parameter. */
+	if (n > nr_cpu_ids)
+		n = nr_cpu_ids;
+
+	/*
+	 * Randomly select the specified number of CPUs.  If the same
+	 * CPU is selected multiple times, that CPU is checked only once,
+	 * and no replacement CPU is selected.  This gracefully handles
+	 * situations where verify_n_cpus is greater than the number of
+	 * CPUs that are currently online.
+	 */
+	for (i = 1; i < n; i++) {
+		cpu = prandom_u32() % nr_cpu_ids;
+		cpu = cpumask_next(cpu - 1, cpu_online_mask);
+		if (cpu >= nr_cpu_ids)
+			cpu = cpumask_next(-1, cpu_online_mask);
+		if (!WARN_ON_ONCE(cpu >= nr_cpu_ids))
+			cpumask_set_cpu(cpu, &cpus_chosen);
+	}
+
+	/* Don't verify ourselves. */
+	cpumask_clear_cpu(smp_processor_id(), &cpus_chosen);
+}
+
+static void clocksource_verify_one_cpu(void *csin)
+{
+	struct clocksource *cs = (struct clocksource *)csin;
+
+	csnow_mid = cs->read(cs);
+}
+
+void clocksource_verify_percpu(struct clocksource *cs)
+{
+	int64_t cs_nsec, cs_nsec_max = 0, cs_nsec_min = LLONG_MAX;
+	u64 csnow_begin, csnow_end;
+	int cpu, testcpu;
+	s64 delta;
+
+	if (verify_n_cpus == 0)
+		return;
+	cpumask_clear(&cpus_ahead);
+	cpumask_clear(&cpus_behind);
+	get_online_cpus();
+	preempt_disable();
+	clocksource_verify_choose_cpus();
+	if (cpumask_weight(&cpus_chosen) == 0) {
+		preempt_enable();
+		put_online_cpus();
+		pr_warn("Not enough CPUs to check clocksource '%s'.\n", cs->name);
+		return;
+	}
+	testcpu = smp_processor_id();
+	pr_warn("Checking clocksource %s synchronization from CPU %d to CPUs %*pbl.\n", cs->name, testcpu, cpumask_pr_args(&cpus_chosen));
+	for_each_cpu(cpu, &cpus_chosen) {
+		if (cpu == testcpu)
+			continue;
+		csnow_begin = cs->read(cs);
+		smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1);
+		csnow_end = cs->read(cs);
+		delta = (s64)((csnow_mid - csnow_begin) & cs->mask);
+		if (delta < 0)
+			cpumask_set_cpu(cpu, &cpus_behind);
+		delta = (csnow_end - csnow_mid) & cs->mask;
+		if (delta < 0)
+			cpumask_set_cpu(cpu, &cpus_ahead);
+		delta = clocksource_delta(csnow_end, csnow_begin, cs->mask);
+		cs_nsec = clocksource_cyc2ns(delta, cs->mult, cs->shift);
+		if (cs_nsec > cs_nsec_max)
+			cs_nsec_max = cs_nsec;
+		if (cs_nsec < cs_nsec_min)
+			cs_nsec_min = cs_nsec;
+	}
+	preempt_enable();
+	put_online_cpus();
+	if (!cpumask_empty(&cpus_ahead))
+		pr_warn("        CPUs %*pbl ahead of CPU %d for clocksource %s.\n",
+			cpumask_pr_args(&cpus_ahead), testcpu, cs->name);
+	if (!cpumask_empty(&cpus_behind))
+		pr_warn("        CPUs %*pbl behind CPU %d for clocksource %s.\n",
+			cpumask_pr_args(&cpus_behind), testcpu, cs->name);
+	if (!cpumask_empty(&cpus_ahead) || !cpumask_empty(&cpus_behind))
+		pr_warn("        CPU %d check durations %lldns - %lldns for clocksource %s.\n",
+			testcpu, cs_nsec_min, cs_nsec_max, cs->name);
+}
+EXPORT_SYMBOL_GPL(clocksource_verify_percpu);
+
 static void clocksource_watchdog(struct timer_list *unused)
 {
-	struct clocksource *cs;
 	u64 csnow, wdnow, cslast, wdlast, delta;
-	int64_t wd_nsec, cs_nsec;
 	int next_cpu, reset_pending;
+	int64_t wd_nsec, cs_nsec;
+	struct clocksource *cs;
+	u32 md;
 
 	spin_lock(&watchdog_lock);
 	if (!watchdog_running)
···
 			continue;
 		}
 
-		local_irq_disable();
-		csnow = cs->read(cs);
-		wdnow = watchdog->read(watchdog);
-		local_irq_enable();
+		if (!cs_watchdog_read(cs, &csnow, &wdnow)) {
+			/* Clock readout unreliable, so give it up. */
+			__clocksource_unstable(cs);
+			continue;
+		}
 
 		/* Clocksource initialized ? */
 		if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) ||
···
 			continue;
 
 		/* Check the deviation from the watchdog clocksource. */
-		if (abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD) {
+		md = cs->uncertainty_margin + watchdog->uncertainty_margin;
+		if (abs(cs_nsec - wd_nsec) > md) {
 			pr_warn("timekeeping watchdog on CPU%d: Marking clocksource '%s' as unstable because the skew is too large:\n",
 				smp_processor_id(), cs->name);
-			pr_warn("                      '%s' wd_now: %llx wd_last: %llx mask: %llx\n",
-				watchdog->name, wdnow, wdlast, watchdog->mask);
-			pr_warn("                      '%s' cs_now: %llx cs_last: %llx mask: %llx\n",
-				cs->name, csnow, cslast, cs->mask);
+			pr_warn("                      '%s' wd_nsec: %lld wd_now: %llx wd_last: %llx mask: %llx\n",
+				watchdog->name, wd_nsec, wdnow, wdlast, watchdog->mask);
+			pr_warn("                      '%s' cs_nsec: %lld cs_now: %llx cs_last: %llx mask: %llx\n",
+				cs->name, cs_nsec, csnow, cslast, cs->mask);
+			if (curr_clocksource == cs)
+				pr_warn("                      '%s' is current clocksource.\n", cs->name);
+			else if (curr_clocksource)
+				pr_warn("                      '%s' (not '%s') is current clocksource.\n", curr_clocksource->name, cs->name);
+			else
+				pr_warn("                      No current clocksource.\n");
 			__clocksource_unstable(cs);
 			continue;
 		}
···
 	struct clocksource *cs, *tmp;
 	unsigned long flags;
 	int select = 0;
+
+	/* Do any required per-CPU skew verification. */
+	if (curr_clocksource &&
+	    curr_clocksource->flags & CLOCK_SOURCE_UNSTABLE &&
+	    curr_clocksource->flags & CLOCK_SOURCE_VERIFY_PERCPU)
+		clocksource_verify_percpu(curr_clocksource);
 
 	spin_lock_irqsave(&watchdog_lock, flags);
 	list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) {
···
 		clocks_calc_mult_shift(&cs->mult, &cs->shift, freq,
 				       NSEC_PER_SEC / scale, sec * scale);
 	}
+
+	/*
+	 * If the uncertainty margin is not specified, calculate it.
+	 * If both scale and freq are non-zero, calculate the clock
+	 * period, but bound below at 2*WATCHDOG_MAX_SKEW.  However,
+	 * if either of scale or freq is zero, be very conservative and
+	 * take the tens-of-milliseconds WATCHDOG_THRESHOLD value for the
+	 * uncertainty margin.  Allow stupidly small uncertainty margins
+	 * to be specified by the caller for testing purposes, but warn
+	 * to discourage production use of this capability.
+	 */
+	if (scale && freq && !cs->uncertainty_margin) {
+		cs->uncertainty_margin = NSEC_PER_SEC / (scale * freq);
+		if (cs->uncertainty_margin < 2 * WATCHDOG_MAX_SKEW)
+			cs->uncertainty_margin = 2 * WATCHDOG_MAX_SKEW;
+	} else if (!cs->uncertainty_margin) {
+		cs->uncertainty_margin = WATCHDOG_THRESHOLD;
+	}
+	WARN_ON_ONCE(cs->uncertainty_margin < 2 * WATCHDOG_MAX_SKEW);
+
 	/*
 	 * Ensure clocksources that have large 'mult' values don't overflow
 	 * when adjusted.
+8 -7
kernel/time/jiffies.c
···
  * for "tick-less" systems.
  */
 static struct clocksource clocksource_jiffies = {
-	.name		= "jiffies",
-	.rating		= 1, /* lowest valid rating*/
-	.read		= jiffies_read,
-	.mask		= CLOCKSOURCE_MASK(32),
-	.mult		= TICK_NSEC << JIFFIES_SHIFT, /* details above */
-	.shift		= JIFFIES_SHIFT,
-	.max_cycles	= 10,
+	.name			= "jiffies",
+	.rating			= 1, /* lowest valid rating*/
+	.uncertainty_margin	= 32 * NSEC_PER_MSEC,
+	.read			= jiffies_read,
+	.mask			= CLOCKSOURCE_MASK(32),
+	.mult			= TICK_NSEC << JIFFIES_SHIFT, /* details above */
+	.shift			= JIFFIES_SHIFT,
+	.max_cycles		= 10,
 };
 
 __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(jiffies_lock);
+127 -16
kernel/time/tick-broadcast.c
···
 static __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(tick_broadcast_lock);
 
 #ifdef CONFIG_TICK_ONESHOT
+static DEFINE_PER_CPU(struct clock_event_device *, tick_oneshot_wakeup_device);
+
 static void tick_broadcast_setup_oneshot(struct clock_event_device *bc);
 static void tick_broadcast_clear_oneshot(int cpu);
 static void tick_resume_broadcast_oneshot(struct clock_event_device *bc);
···
 struct cpumask *tick_get_broadcast_mask(void)
 {
 	return tick_broadcast_mask;
+}
+
+static struct clock_event_device *tick_get_oneshot_wakeup_device(int cpu);
+
+const struct clock_event_device *tick_get_wakeup_device(int cpu)
+{
+	return tick_get_oneshot_wakeup_device(cpu);
 }
 
 /*
···
 	return !curdev || newdev->rating > curdev->rating;
 }
 
+#ifdef CONFIG_TICK_ONESHOT
+static struct clock_event_device *tick_get_oneshot_wakeup_device(int cpu)
+{
+	return per_cpu(tick_oneshot_wakeup_device, cpu);
+}
+
+static void tick_oneshot_wakeup_handler(struct clock_event_device *wd)
+{
+	/*
+	 * If we woke up early and the tick was reprogrammed in the
+	 * meantime then this may be spurious but harmless.
+	 */
+	tick_receive_broadcast();
+}
+
+static bool tick_set_oneshot_wakeup_device(struct clock_event_device *newdev,
+					   int cpu)
+{
+	struct clock_event_device *curdev = tick_get_oneshot_wakeup_device(cpu);
+
+	if (!newdev)
+		goto set_device;
+
+	if ((newdev->features & CLOCK_EVT_FEAT_DUMMY) ||
+	    (newdev->features & CLOCK_EVT_FEAT_C3STOP))
+		return false;
+
+	if (!(newdev->features & CLOCK_EVT_FEAT_PERCPU) ||
+	    !(newdev->features & CLOCK_EVT_FEAT_ONESHOT))
+		return false;
+
+	if (!cpumask_equal(newdev->cpumask, cpumask_of(cpu)))
+		return false;
+
+	if (curdev && newdev->rating <= curdev->rating)
+		return false;
+
+	if (!try_module_get(newdev->owner))
+		return false;
+
+	newdev->event_handler = tick_oneshot_wakeup_handler;
+set_device:
+	clockevents_exchange_device(curdev, newdev);
+	per_cpu(tick_oneshot_wakeup_device, cpu) = newdev;
+	return true;
+}
+#else
+static struct clock_event_device *tick_get_oneshot_wakeup_device(int cpu)
+{
+	return NULL;
+}
+
+static bool tick_set_oneshot_wakeup_device(struct clock_event_device *newdev,
+					   int cpu)
+{
+	return false;
+}
+#endif
+
 /*
  * Conditionally install/replace broadcast device
  */
-void tick_install_broadcast_device(struct clock_event_device *dev)
+void tick_install_broadcast_device(struct clock_event_device *dev, int cpu)
 {
 	struct clock_event_device *cur = tick_broadcast_device.evtdev;
+
+	if (tick_set_oneshot_wakeup_device(dev, cpu))
+		return;
 
 	if (!tick_check_broadcast_device(cur, dev))
 		return;
···
 	return ret;
 }
 
-#ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 int tick_receive_broadcast(void)
 {
 	struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
···
 	evt->event_handler(evt);
 	return 0;
 }
-#endif
 
 /*
  * Broadcast the event to the cpus, which are set in the mask (mangled).
···
 	clockevents_switch_state(dev, CLOCK_EVT_STATE_SHUTDOWN);
 }
 
-int __tick_broadcast_oneshot_control(enum tick_broadcast_state state)
+static int ___tick_broadcast_oneshot_control(enum tick_broadcast_state state,
+					     struct tick_device *td,
+					     int cpu)
 {
-	struct clock_event_device *bc, *dev;
-	int cpu, ret = 0;
+	struct clock_event_device *bc, *dev = td->evtdev;
+	int ret = 0;
 	ktime_t now;
-
-	/*
-	 * If there is no broadcast device, tell the caller not to go
-	 * into deep idle.
-	 */
-	if (!tick_broadcast_device.evtdev)
-		return -EBUSY;
-
-	dev = this_cpu_ptr(&tick_cpu_device)->evtdev;
 
 	raw_spin_lock(&tick_broadcast_lock);
 	bc = tick_broadcast_device.evtdev;
-	cpu = smp_processor_id();
 
 	if (state == TICK_BROADCAST_ENTER) {
 		/*
···
 	return ret;
 }
 
+static int tick_oneshot_wakeup_control(enum tick_broadcast_state state,
+				       struct tick_device *td,
+				       int cpu)
+{
+	struct clock_event_device *dev, *wd;
+
+	dev = td->evtdev;
+	if (td->mode != TICKDEV_MODE_ONESHOT)
+		return -EINVAL;
+
+	wd = tick_get_oneshot_wakeup_device(cpu);
+	if (!wd)
+		return -ENODEV;
+
+	switch (state) {
+	case TICK_BROADCAST_ENTER:
+		clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT_STOPPED);
+		clockevents_switch_state(wd, CLOCK_EVT_STATE_ONESHOT);
+		clockevents_program_event(wd, dev->next_event, 1);
+		break;
+	case TICK_BROADCAST_EXIT:
+		/* We may have transitioned to oneshot mode while idle */
+		if (clockevent_get_state(wd) != CLOCK_EVT_STATE_ONESHOT)
+			return -ENODEV;
+	}
+
+	return 0;
+}
+
+int __tick_broadcast_oneshot_control(enum tick_broadcast_state state)
+{
+	struct tick_device *td = this_cpu_ptr(&tick_cpu_device);
+	int cpu = smp_processor_id();
+
+	if (!tick_oneshot_wakeup_control(state, td, cpu))
+		return 0;
+
+	if (tick_broadcast_device.evtdev)
+		return ___tick_broadcast_oneshot_control(state, td, cpu);
+
+	/*
+	 * If there is no broadcast or wakeup device, tell the caller not
+	 * to go into deep idle.
+	 */
+	return -EBUSY;
+}
+
 /*
  * Reset the one shot broadcast for a cpu
  *
···
  */
 static void tick_broadcast_oneshot_offline(unsigned int cpu)
 {
+	if (tick_get_oneshot_wakeup_device(cpu))
+		tick_set_oneshot_wakeup_device(NULL, cpu);
+
 	/*
 	 * Clear the broadcast masks for the dead cpu, but do not stop
 	 * the broadcast device!
+1 -1
kernel/time/tick-common.c
···
 	/*
 	 * Can the new device be used as a broadcast device ?
 	 */
-	tick_install_broadcast_device(newdev);
+	tick_install_broadcast_device(newdev, cpu);
 }
 
 /**
+3 -2
kernel/time/tick-internal.h
···
 /* Broadcasting support */
 # ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
 extern int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu);
-extern void tick_install_broadcast_device(struct clock_event_device *dev);
+extern void tick_install_broadcast_device(struct clock_event_device *dev, int cpu);
 extern int tick_is_broadcast_device(struct clock_event_device *dev);
 extern void tick_suspend_broadcast(void);
 extern void tick_resume_broadcast(void);
···
 extern int tick_broadcast_update_freq(struct clock_event_device *dev, u32 freq);
 extern struct tick_device *tick_get_broadcast_device(void);
 extern struct cpumask *tick_get_broadcast_mask(void);
+extern const struct clock_event_device *tick_get_wakeup_device(int cpu);
 # else /* !CONFIG_GENERIC_CLOCKEVENTS_BROADCAST: */
-static inline void tick_install_broadcast_device(struct clock_event_device *dev) { }
+static inline void tick_install_broadcast_device(struct clock_event_device *dev, int cpu) { }
 static inline int tick_is_broadcast_device(struct clock_event_device *dev) { return 0; }
 static inline int tick_device_uses_broadcast(struct clock_event_device *dev, int cpu) { return 0; }
 static inline void tick_do_periodic_broadcast(struct clock_event_device *d) { }
+99
kernel/time/time_test.c
···
+// SPDX-License-Identifier: LGPL-2.1+
+
+#include <kunit/test.h>
+#include <linux/time.h>
+
+/*
+ * Traditional implementation of leap year evaluation.
+ */
+static bool is_leap(long year)
+{
+	return year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
+}
+
+/*
+ * Gets the last day of a month.
+ */
+static int last_day_of_month(long year, int month)
+{
+	if (month == 2)
+		return 28 + is_leap(year);
+	if (month == 4 || month == 6 || month == 9 || month == 11)
+		return 30;
+	return 31;
+}
+
+/*
+ * Advances a date by one day.
+ */
+static void advance_date(long *year, int *month, int *mday, int *yday)
+{
+	if (*mday != last_day_of_month(*year, *month)) {
+		++*mday;
+		++*yday;
+		return;
+	}
+
+	*mday = 1;
+	if (*month != 12) {
+		++*month;
+		++*yday;
+		return;
+	}
+
+	*month = 1;
+	*yday = 0;
+	++*year;
+}
+
+/*
+ * Checks every day in a 160000 years interval centered at 1970-01-01
+ * against the expected result.
+ */
+static void time64_to_tm_test_date_range(struct kunit *test)
+{
+	/*
+	 * 80000 years = (80000 / 400) * 400 years
+	 *             = (80000 / 400) * 146097 days
+	 *             = (80000 / 400) * 146097 * 86400 seconds
+	 */
+	time64_t total_secs = ((time64_t) 80000) / 400 * 146097 * 86400;
+	long year = 1970 - 80000;
+	int month = 1;
+	int mdday = 1;
+	int yday = 0;
+
+	struct tm result;
+	time64_t secs;
+	s64 days;
+
+	for (secs = -total_secs; secs <= total_secs; secs += 86400) {
+
+		time64_to_tm(secs, 0, &result);
+
+		days = div_s64(secs, 86400);
+
+		#define FAIL_MSG "%05ld/%02d/%02d (%2d) : %ld", \
+			year, month, mdday, yday, days
+
+		KUNIT_ASSERT_EQ_MSG(test, year - 1900, result.tm_year, FAIL_MSG);
+		KUNIT_ASSERT_EQ_MSG(test, month - 1, result.tm_mon, FAIL_MSG);
+		KUNIT_ASSERT_EQ_MSG(test, mdday, result.tm_mday, FAIL_MSG);
+		KUNIT_ASSERT_EQ_MSG(test, yday, result.tm_yday, FAIL_MSG);
+
+		advance_date(&year, &month, &mdday, &yday);
+	}
+}
+
+static struct kunit_case time_test_cases[] = {
+	KUNIT_CASE(time64_to_tm_test_date_range),
+	{}
+};
+
+static struct kunit_suite time_test_suite = {
+	.name = "time_test_cases",
+	.test_cases = time_test_cases,
+};
+
+kunit_test_suite(time_test_suite);
+MODULE_LICENSE("GPL");
+64 -52
kernel/time/timeconv.c
···
 /*
  * Converts the calendar time to broken-down time representation
- * Based on code from glibc-2.6
  *
  * 2009-7-14:
  *   Moved from glibc-2.6 to kernel by Zhaolei<zhaolei@cn.fujitsu.com>
+ * 2021-06-02:
+ *   Reimplemented by Cassio Neri <cassio.neri@gmail.com>
  */
 
 #include <linux/time.h>
 #include <linux/module.h>
-
-/*
- * Nonzero if YEAR is a leap year (every 4 years,
- * except every 100th isn't, and every 400th is).
- */
-static int __isleap(long year)
-{
-	return (year) % 4 == 0 && ((year) % 100 != 0 || (year) % 400 == 0);
-}
-
-/* do a mathdiv for long type */
-static long math_div(long a, long b)
-{
-	return a / b - (a % b < 0);
-}
-
-/* How many leap years between y1 and y2, y1 must less or equal to y2 */
-static long leaps_between(long y1, long y2)
-{
-	long leaps1 = math_div(y1 - 1, 4) - math_div(y1 - 1, 100)
-		+ math_div(y1 - 1, 400);
-	long leaps2 = math_div(y2 - 1, 4) - math_div(y2 - 1, 100)
-		+ math_div(y2 - 1, 400);
-	return leaps2 - leaps1;
-}
-
-/* How many days come before each month (0-12). */
-static const unsigned short __mon_yday[2][13] = {
-	/* Normal years. */
-	{0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365},
-	/* Leap years. */
-	{0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366}
-};
+#include <linux/kernel.h>
 
 #define SECS_PER_HOUR (60 * 60)
 #define SECS_PER_DAY (SECS_PER_HOUR * 24)
···
  */
 void time64_to_tm(time64_t totalsecs, int offset, struct tm *result)
 {
-	long days, rem, y;
+	u32 u32tmp, day_of_century, year_of_century, day_of_year, month, day;
+	u64 u64tmp, udays, century, year;
+	bool is_Jan_or_Feb, is_leap_year;
+	long days, rem;
 	int remainder;
-	const unsigned short *ip;
 
 	days = div_s64_rem(totalsecs, SECS_PER_DAY, &remainder);
 	rem = remainder;
···
 	if (result->tm_wday < 0)
 		result->tm_wday += 7;
 
-	y = 1970;
+	/*
+	 * The following algorithm is, basically, Proposition 6.3 of Neri
+	 * and Schneider [1]. In a few words: it works on the computational
+	 * (fictitious) calendar where the year starts in March, month = 2
+	 * (*), and finishes in February, month = 13. This calendar is
+	 * mathematically convenient because the day of the year does not
+	 * depend on whether the year is leap or not. For instance:
+	 *
+	 * March 1st		0-th day of the year;
+	 * ...
+	 * April 1st		31-st day of the year;
+	 * ...
+	 * January 1st		306-th day of the year; (Important!)
+	 * ...
+	 * February 28th	364-th day of the year;
+	 * February 29th	365-th day of the year (if it exists).
+	 *
+	 * After having worked out the date in the computational calendar
+	 * (using just arithmetics) it's easy to convert it to the
+	 * corresponding date in the Gregorian calendar.
+	 *
+	 * [1] "Euclidean Affine Functions and Applications to Calendar
+	 *     Algorithms". https://arxiv.org/abs/2102.06959
+	 *
+	 * (*) The numbering of months follows tm more closely and thus,
+	 *     is slightly different from [1].
+	 */
 
-	while (days < 0 || days >= (__isleap(y) ? 366 : 365)) {
-		/* Guess a corrected year, assuming 365 days per year. */
-		long yg = y + math_div(days, 365);
+	udays		= ((u64) days) + 2305843009213814918ULL;
 
-		/* Adjust DAYS and Y to match the guessed year. */
-		days -= (yg - y) * 365 + leaps_between(y, yg);
-		y = yg;
-	}
+	u64tmp		= 4 * udays + 3;
+	century		= div64_u64_rem(u64tmp, 146097, &u64tmp);
+	day_of_century	= (u32) (u64tmp / 4);
 
-	result->tm_year = y - 1900;
+	u32tmp		= 4 * day_of_century + 3;
+	u64tmp		= 2939745ULL * u32tmp;
+	year_of_century	= upper_32_bits(u64tmp);
+	day_of_year	= lower_32_bits(u64tmp) / 2939745 / 4;
 
-	result->tm_yday = days;
+	year		= 100 * century + year_of_century;
+	is_leap_year	= year_of_century ? !(year_of_century % 4) : !(century % 4);
 
-	ip = __mon_yday[__isleap(y)];
-	for (y = 11; days < ip[y]; y--)
-		continue;
-	days -= ip[y];
+	u32tmp		= 2141 * day_of_year + 132377;
+	month		= u32tmp >> 16;
+	day		= ((u16) u32tmp) / 2141;
 
-	result->tm_mon = y;
-	result->tm_mday = days + 1;
+	/*
+	 * Recall that January 1st is the 306-th day of the year in the
+	 * computational (not Gregorian) calendar.
+	 */
+	is_Jan_or_Feb	= day_of_year >= 306;
+
+	/* Convert to the Gregorian calendar and adjust to Unix time. */
+	year		= year + is_Jan_or_Feb - 6313183731940000ULL;
+	month		= is_Jan_or_Feb ? month - 12 : month;
+	day		= day + 1;
+	day_of_year	+= is_Jan_or_Feb ? -306 : 31 + 28 + is_leap_year;
+
+	/* Convert to tm's format. */
+	result->tm_year = (long) (year - 1900);
+	result->tm_mon  = (int) month;
+	result->tm_mday = (int) day;
+	result->tm_yday = (int) day_of_year;
 }
 EXPORT_SYMBOL(time64_to_tm);
+9 -1
kernel/time/timer_list.c
···
 	SEQ_printf(m, " event_handler: %ps\n", dev->event_handler);
 	SEQ_printf(m, "\n");
 	SEQ_printf(m, " retries: %lu\n", dev->retries);
+
+#ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
+	if (cpu >= 0) {
+		const struct clock_event_device *wd = tick_get_wakeup_device(cpu);
+
+		SEQ_printf(m, "Wakeup Device: %s\n", wd ? wd->name : "<NULL>");
+	}
+#endif
 	SEQ_printf(m, "\n");
 }
···
 
 static inline void timer_list_header(struct seq_file *m, u64 now)
 {
-	SEQ_printf(m, "Timer List Version: v0.8\n");
+	SEQ_printf(m, "Timer List Version: v0.9\n");
 	SEQ_printf(m, "HRTIMER_MAX_CLOCK_BASES: %d\n", HRTIMER_MAX_CLOCK_BASES);
 	SEQ_printf(m, "now at %Ld nsecs\n", (unsigned long long)now);
 	SEQ_printf(m, "\n");
+12
lib/Kconfig.debug
···
 
 	  If unsure, say N.
 
+config TEST_CLOCKSOURCE_WATCHDOG
+	tristate "Test clocksource watchdog in kernel space"
+	depends on CLOCKSOURCE_WATCHDOG
+	help
+	  Enable this option to create a kernel module that will trigger
+	  a test of the clocksource watchdog.  This module may be loaded
+	  via modprobe or insmod in which case it will run upon being
+	  loaded, or it may be built in, in which case it will run
+	  shortly after boot.
+
+	  If unsure, say N.
+
 endif # RUNTIME_TESTING_MENU
 
 config ARCH_USE_MEMTEST