Merge tag 'timers-core-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timers and timekeeping updates from Thomas Gleixner:
"Core:

- Robustness improvements for the NOHZ tick management

- Fixes and consolidation of the NTP/RTC synchronization code

- Small fixes and improvements in various places

- A set of function documentation updates and fixes

Drivers:

- Cleanups and improvements in various clocksource/event drivers

- Removal of the EZChip NPS clocksource driver as the platform
support was removed from ARC

- The usual set of new device tree bindings and json-schema conversions

- The RTC driver changes which have been acked by the RTC maintainer:

* fix a long-standing bug in the MC146818 library code which can
cause reading garbage during the RTC internal update.

* changes related to the NTP/RTC consolidation work"

* tag 'timers-core-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (46 commits)
ntp: Fix prototype in the !CONFIG_GENERIC_CMOS_UPDATE case
tick/sched: Make jiffies update quick check more robust
ntp: Consolidate the RTC update implementation
ntp: Make the RTC sync offset less obscure
ntp, rtc: Move rtc_set_ntp_time() to ntp code
ntp: Make the RTC synchronization more reliable
rtc: core: Make the sync offset default more realistic
rtc: cmos: Make rtc_cmos sync offset correct
rtc: mc146818: Reduce spinlock section in mc146818_set_time()
rtc: mc146818: Prevent reading garbage
clocksource/drivers/sh_cmt: Fix potential deadlock when calling runtime PM
clocksource/drivers/arm_arch_timer: Correct fault programming of CNTKCTL_EL1.EVNTI
clocksource/drivers/arm_arch_timer: Use stable count reader in erratum sne
clocksource/drivers/dw_apb_timer_of: Add error handling if no clock available
clocksource/drivers/riscv: Make RISCV_TIMER depends on RISCV_SBI
clocksource/drivers/ingenic: Fix section mismatch
clocksource/drivers/cadence_ttc: Fix memory leak in ttc_setup_clockevent()
dt-bindings: timer: renesas: tmu: Convert to json-schema
dt-bindings: timer: renesas: tmu: Document r8a774e1 bindings
clocksource/drivers/orion: Add missing clk_disable_unprepare() on error path
...

+674 -819
-49
Documentation/devicetree/bindings/timer/renesas,tmu.txt
··· 1 - * Renesas R-Mobile/R-Car Timer Unit (TMU) 2 - 3 - The TMU is a 32-bit timer/counter with configurable clock inputs and 4 - programmable compare match. 5 - 6 - Channels share hardware resources but their counter and compare match value 7 - are independent. The TMU hardware supports up to three channels. 8 - 9 - Required Properties: 10 - 11 - - compatible: must contain one or more of the following: 12 - - "renesas,tmu-r8a7740" for the r8a7740 TMU 13 - - "renesas,tmu-r8a774a1" for the r8a774A1 TMU 14 - - "renesas,tmu-r8a774b1" for the r8a774B1 TMU 15 - - "renesas,tmu-r8a774c0" for the r8a774C0 TMU 16 - - "renesas,tmu-r8a7778" for the r8a7778 TMU 17 - - "renesas,tmu-r8a7779" for the r8a7779 TMU 18 - - "renesas,tmu-r8a77970" for the r8a77970 TMU 19 - - "renesas,tmu-r8a77980" for the r8a77980 TMU 20 - - "renesas,tmu" for any TMU. 21 - This is a fallback for the above renesas,tmu-* entries 22 - 23 - - reg: base address and length of the registers block for the timer module. 24 - 25 - - interrupts: interrupt-specifier for the timer, one per channel. 26 - 27 - - clocks: a list of phandle + clock-specifier pairs, one for each entry 28 - in clock-names. 29 - - clock-names: must contain "fck" for the functional clock. 30 - 31 - Optional Properties: 32 - 33 - - #renesas,channels: number of channels implemented by the timer, must be 2 34 - or 3 (if not specified the value defaults to 3). 35 - 36 - 37 - Example: R8A7779 (R-Car H1) TMU0 node 38 - 39 - tmu0: timer@ffd80000 { 40 - compatible = "renesas,tmu-r8a7779", "renesas,tmu"; 41 - reg = <0xffd80000 0x30>; 42 - interrupts = <0 32 IRQ_TYPE_LEVEL_HIGH>, 43 - <0 33 IRQ_TYPE_LEVEL_HIGH>, 44 - <0 34 IRQ_TYPE_LEVEL_HIGH>; 45 - clocks = <&mstp0_clks R8A7779_CLK_TMU0>; 46 - clock-names = "fck"; 47 - 48 - #renesas,channels = <3>; 49 - };
+99
Documentation/devicetree/bindings/timer/renesas,tmu.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/timer/renesas,tmu.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Renesas R-Mobile/R-Car Timer Unit (TMU) 8 + 9 + maintainers: 10 + - Geert Uytterhoeven <geert+renesas@glider.be> 11 + - Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> 12 + 13 + description: 14 + The TMU is a 32-bit timer/counter with configurable clock inputs and 15 + programmable compare match. 16 + 17 + Channels share hardware resources but their counter and compare match value 18 + are independent. The TMU hardware supports up to three channels. 19 + 20 + properties: 21 + compatible: 22 + items: 23 + - enum: 24 + - renesas,tmu-r8a7740 # R-Mobile A1 25 + - renesas,tmu-r8a774a1 # RZ/G2M 26 + - renesas,tmu-r8a774b1 # RZ/G2N 27 + - renesas,tmu-r8a774c0 # RZ/G2E 28 + - renesas,tmu-r8a774e1 # RZ/G2H 29 + - renesas,tmu-r8a7778 # R-Car M1A 30 + - renesas,tmu-r8a7779 # R-Car H1 31 + - renesas,tmu-r8a77970 # R-Car V3M 32 + - renesas,tmu-r8a77980 # R-Car V3H 33 + - const: renesas,tmu 34 + 35 + reg: 36 + maxItems: 1 37 + 38 + interrupts: 39 + minItems: 2 40 + maxItems: 3 41 + 42 + clocks: 43 + maxItems: 1 44 + 45 + clock-names: 46 + const: fck 47 + 48 + power-domains: 49 + maxItems: 1 50 + 51 + resets: 52 + maxItems: 1 53 + 54 + '#renesas,channels': 55 + description: 56 + Number of channels implemented by the timer. 57 + $ref: /schemas/types.yaml#/definitions/uint32 58 + enum: [ 2, 3 ] 59 + default: 3 60 + 61 + required: 62 + - compatible 63 + - reg 64 + - interrupts 65 + - clocks 66 + - clock-names 67 + - power-domains 68 + 69 + if: 70 + not: 71 + properties: 72 + compatible: 73 + contains: 74 + enum: 75 + - renesas,tmu-r8a7740 76 + - renesas,tmu-r8a7778 77 + - renesas,tmu-r8a7779 78 + then: 79 + required: 80 + - resets 81 + 82 + additionalProperties: false 83 + 84 + examples: 85 + - | 86 + #include <dt-bindings/clock/r8a7779-clock.h> 87 + #include <dt-bindings/interrupt-controller/arm-gic.h> 88 + #include <dt-bindings/power/r8a7779-sysc.h> 89 + tmu0: timer@ffd80000 { 90 + compatible = "renesas,tmu-r8a7779", "renesas,tmu"; 91 + reg = <0xffd80000 0x30>; 92 + interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>, 93 + <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>, 94 + <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>; 95 + clocks = <&mstp0_clks R8A7779_CLK_TMU0>; 96 + clock-names = "fck"; 97 + power-domains = <&sysc R8A7779_PD_ALWAYS_ON>; 98 + #renesas,channels = <3>; 99 + };
+1 -11
drivers/clocksource/Kconfig
··· 275 275 This option enables support for Texas Instruments 32.768 Hz clocksource 276 276 available on many OMAP-like platforms. 277 277 278 - config CLKSRC_NPS 279 - bool "NPS400 clocksource driver" if COMPILE_TEST 280 - depends on !PHYS_ADDR_T_64BIT 281 - select CLKSRC_MMIO 282 - select TIMER_OF if OF 283 - help 284 - NPS400 clocksource support. 285 - It has a 64-bit counter with update rate up to 1000MHz. 286 - This counter is accessed via couple of 32-bit memory-mapped registers. 287 - 288 278 config CLKSRC_STM32 289 279 bool "Clocksource for STM32 SoCs" if !ARCH_STM32 290 280 depends on OF && ARM && (ARCH_STM32 || COMPILE_TEST) ··· 644 654 645 655 config RISCV_TIMER 646 656 bool "Timer for the RISC-V platform" if COMPILE_TEST 647 - depends on GENERIC_SCHED_CLOCK && RISCV 657 + depends on GENERIC_SCHED_CLOCK && RISCV && RISCV_SBI 648 658 select TIMER_PROBE 649 659 select TIMER_OF 650 660 help
-1
drivers/clocksource/Makefile
··· 56 56 obj-$(CONFIG_MTK_TIMER) += timer-mediatek.o 57 57 obj-$(CONFIG_CLKSRC_PISTACHIO) += timer-pistachio.o 58 58 obj-$(CONFIG_CLKSRC_TI_32K) += timer-ti-32k.o 59 - obj-$(CONFIG_CLKSRC_NPS) += timer-nps.o 60 59 obj-$(CONFIG_OXNAS_RPS_TIMER) += timer-oxnas-rps.o 61 60 obj-$(CONFIG_OWL_TIMER) += timer-owl.o 62 61 obj-$(CONFIG_MILBEAUT_TIMER) += timer-milbeaut.o
+18 -9
drivers/clocksource/arm_arch_timer.c
··· 396 396 ctrl &= ~ARCH_TIMER_CTRL_IT_MASK; 397 397 398 398 if (access == ARCH_TIMER_PHYS_ACCESS) { 399 - cval = evt + arch_counter_get_cntpct(); 399 + cval = evt + arch_counter_get_cntpct_stable(); 400 400 write_sysreg(cval, cntp_cval_el0); 401 401 } else { 402 - cval = evt + arch_counter_get_cntvct(); 402 + cval = evt + arch_counter_get_cntvct_stable(); 403 403 write_sysreg(cval, cntv_cval_el0); 404 404 } 405 405 ··· 822 822 823 823 static void arch_timer_configure_evtstream(void) 824 824 { 825 - int evt_stream_div, pos; 825 + int evt_stream_div, lsb; 826 826 827 - /* Find the closest power of two to the divisor */ 828 - evt_stream_div = arch_timer_rate / ARCH_TIMER_EVT_STREAM_FREQ; 829 - pos = fls(evt_stream_div); 830 - if (pos > 1 && !(evt_stream_div & (1 << (pos - 2)))) 831 - pos--; 827 + /* 828 + * As the event stream can at most be generated at half the frequency 829 + * of the counter, use half the frequency when computing the divider. 830 + */ 831 + evt_stream_div = arch_timer_rate / ARCH_TIMER_EVT_STREAM_FREQ / 2; 832 + 833 + /* 834 + * Find the closest power of two to the divisor. If the adjacent bit 835 + * of lsb (last set bit, starts from 0) is set, then we use (lsb + 1). 836 + */ 837 + lsb = fls(evt_stream_div) - 1; 838 + if (lsb > 0 && (evt_stream_div & BIT(lsb - 1))) 839 + lsb++; 840 + 832 841 /* enable event stream */ 833 - arch_timer_evtstrm_enable(min(pos, 15)); 842 + arch_timer_evtstrm_enable(max(0, min(lsb, 15))); 834 843 } 835 844 836 845 static void arch_counter_set_user_access(void)
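The divider selection above rounds to the closest power of two. As a hedged standalone sketch of the same rounding (not the kernel code), the snippet below reimplements it with a local fls() built on __builtin_clz; the 19.2 MHz counter rate and 10 kHz event stream frequency are illustrative values only:

    #include <stdio.h>

    /* Illustrative stand-in for the kernel's fls(): position of the last set bit, 1-based. */
    static int fls(unsigned int x)
    {
            return x ? 32 - __builtin_clz(x) : 0;
    }

    int main(void)
    {
            const unsigned int evt_stream_freq = 10000;   /* assumed target event stream frequency, Hz */
            unsigned int rate = 19200000;                 /* assumed counter rate, Hz */
            unsigned int evt_stream_div;
            int lsb;

            /* The event stream can run at most at half the counter frequency. */
            evt_stream_div = rate / evt_stream_freq / 2;

            /* Round to the closest power of two: bump lsb if the next lower bit is set. */
            lsb = fls(evt_stream_div) - 1;
            if (lsb > 0 && (evt_stream_div & (1u << (lsb - 1))))
                    lsb++;
            if (lsb > 15)
                    lsb = 15;

            printf("divider exponent: %d (divide by %u)\n", lsb, 1u << lsb);
            return 0;
    }

With the values above the divisor is 960, whose closest power of two is 1024, so the exponent comes out as 10.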
+39 -18
drivers/clocksource/dw_apb_timer_of.c
··· 14 14 #include <linux/reset.h> 15 15 #include <linux/sched_clock.h> 16 16 17 - static void __init timer_get_base_and_rate(struct device_node *np, 17 + static int __init timer_get_base_and_rate(struct device_node *np, 18 18 void __iomem **base, u32 *rate) 19 19 { 20 20 struct clk *timer_clk; 21 21 struct clk *pclk; 22 22 struct reset_control *rstc; 23 + int ret; 23 24 24 25 *base = of_iomap(np, 0); 25 26 ··· 47 46 pr_warn("pclk for %pOFn is present, but could not be activated\n", 48 47 np); 49 48 49 + if (!of_property_read_u32(np, "clock-freq", rate) && 50 + !of_property_read_u32(np, "clock-frequency", rate)) 51 + return 0; 52 + 50 53 timer_clk = of_clk_get_by_name(np, "timer"); 51 54 if (IS_ERR(timer_clk)) 52 - goto try_clock_freq; 55 + return PTR_ERR(timer_clk); 53 56 54 - if (!clk_prepare_enable(timer_clk)) { 55 - *rate = clk_get_rate(timer_clk); 56 - return; 57 - } 57 + ret = clk_prepare_enable(timer_clk); 58 + if (ret) 59 + return ret; 58 60 59 - try_clock_freq: 60 - if (of_property_read_u32(np, "clock-freq", rate) && 61 - of_property_read_u32(np, "clock-frequency", rate)) 62 - panic("No clock nor clock-frequency property for %pOFn", np); 61 + *rate = clk_get_rate(timer_clk); 62 + if (!(*rate)) 63 + return -EINVAL; 64 + 65 + return 0; 63 66 } 64 67 65 - static void __init add_clockevent(struct device_node *event_timer) 68 + static int __init add_clockevent(struct device_node *event_timer) 66 69 { 67 70 void __iomem *iobase; 68 71 struct dw_apb_clock_event_device *ced; 69 72 u32 irq, rate; 73 + int ret = 0; 70 74 71 75 irq = irq_of_parse_and_map(event_timer, 0); 72 76 if (irq == 0) 73 77 panic("No IRQ for clock event timer"); 74 78 75 - timer_get_base_and_rate(event_timer, &iobase, &rate); 79 + ret = timer_get_base_and_rate(event_timer, &iobase, &rate); 80 + if (ret) 81 + return ret; 76 82 77 83 ced = dw_apb_clockevent_init(-1, event_timer->name, 300, iobase, irq, 78 84 rate); 79 85 if (!ced) 80 - panic("Unable to initialise clockevent device"); 86 + return -EINVAL; 81 87 82 88 dw_apb_clockevent_register(ced); 89 + 90 + return 0; 83 91 } 84 92 85 93 static void __iomem *sched_io_base; 86 94 static u32 sched_rate; 87 95 88 - static void __init add_clocksource(struct device_node *source_timer) 96 + static int __init add_clocksource(struct device_node *source_timer) 89 97 { 90 98 void __iomem *iobase; 91 99 struct dw_apb_clocksource *cs; 92 100 u32 rate; 101 + int ret; 93 102 94 - timer_get_base_and_rate(source_timer, &iobase, &rate); 103 + ret = timer_get_base_and_rate(source_timer, &iobase, &rate); 104 + if (ret) 105 + return ret; 95 106 96 107 cs = dw_apb_clocksource_init(300, source_timer->name, iobase, rate); 97 108 if (!cs) 98 - panic("Unable to initialise clocksource device"); 109 + return -EINVAL; 99 110 100 111 dw_apb_clocksource_start(cs); 101 112 dw_apb_clocksource_register(cs); ··· 119 106 */ 120 107 sched_io_base = iobase + 0x04; 121 108 sched_rate = rate; 109 + 110 + return 0; 122 111 } 123 112 124 113 static u64 notrace read_sched_clock(void) ··· 161 146 static int num_called; 162 147 static int __init dw_apb_timer_init(struct device_node *timer) 163 148 { 149 + int ret = 0; 150 + 164 151 switch (num_called) { 165 152 case 1: 166 153 pr_debug("%s: found clocksource timer\n", __func__); 167 - add_clocksource(timer); 154 + ret = add_clocksource(timer); 155 + if (ret) 156 + return ret; 168 157 init_sched_clock(); 169 158 #ifdef CONFIG_ARM 170 159 dw_apb_delay_timer.freq = sched_rate; ··· 177 158 break; 178 159 default: 179 160 pr_debug("%s: found clockevent timer\n", 
__func__); 180 - add_clockevent(timer); 161 + ret = add_clockevent(timer); 162 + if (ret) 163 + return ret; 181 164 break; 182 165 } 183 166
+1 -1
drivers/clocksource/ingenic-timer.c
··· 127 127 return IRQ_HANDLED; 128 128 } 129 129 130 - static struct clk * __init ingenic_tcu_get_clock(struct device_node *np, int id) 130 + static struct clk *ingenic_tcu_get_clock(struct device_node *np, int id) 131 131 { 132 132 struct of_phandle_args args; 133 133
+14 -4
drivers/clocksource/sh_cmt.c
··· 319 319 { 320 320 int k, ret; 321 321 322 - pm_runtime_get_sync(&ch->cmt->pdev->dev); 323 322 dev_pm_syscore_device(&ch->cmt->pdev->dev, true); 324 323 325 324 /* enable clock */ ··· 393 394 clk_disable(ch->cmt->clk); 394 395 395 396 dev_pm_syscore_device(&ch->cmt->pdev->dev, false); 396 - pm_runtime_put(&ch->cmt->pdev->dev); 397 397 } 398 398 399 399 /* private flags */ ··· 560 562 int ret = 0; 561 563 unsigned long flags; 562 564 565 + if (flag & FLAG_CLOCKSOURCE) 566 + pm_runtime_get_sync(&ch->cmt->pdev->dev); 567 + 563 568 raw_spin_lock_irqsave(&ch->lock, flags); 564 569 565 - if (!(ch->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE))) 570 + if (!(ch->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE))) { 571 + if (flag & FLAG_CLOCKEVENT) 572 + pm_runtime_get_sync(&ch->cmt->pdev->dev); 566 573 ret = sh_cmt_enable(ch); 574 + } 567 575 568 576 if (ret) 569 577 goto out; ··· 594 590 f = ch->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE); 595 591 ch->flags &= ~flag; 596 592 597 - if (f && !(ch->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE))) 593 + if (f && !(ch->flags & (FLAG_CLOCKEVENT | FLAG_CLOCKSOURCE))) { 598 594 sh_cmt_disable(ch); 595 + if (flag & FLAG_CLOCKEVENT) 596 + pm_runtime_put(&ch->cmt->pdev->dev); 597 + } 599 598 600 599 /* adjust the timeout to maximum if only clocksource left */ 601 600 if ((flag == FLAG_CLOCKEVENT) && (ch->flags & FLAG_CLOCKSOURCE)) 602 601 __sh_cmt_set_next(ch, ch->max_match_value); 603 602 604 603 raw_spin_unlock_irqrestore(&ch->lock, flags); 604 + 605 + if (flag & FLAG_CLOCKSOURCE) 606 + pm_runtime_put(&ch->cmt->pdev->dev); 605 607 } 606 608 607 609 static struct sh_cmt_channel *cs_to_sh_cmt(struct clocksource *cs)
+9 -9
drivers/clocksource/timer-cadence-ttc.c
··· 413 413 ttcce->ttc.clk = clk; 414 414 415 415 err = clk_prepare_enable(ttcce->ttc.clk); 416 - if (err) { 417 - kfree(ttcce); 418 - return err; 419 - } 416 + if (err) 417 + goto out_kfree; 420 418 421 419 ttcce->ttc.clk_rate_change_nb.notifier_call = 422 420 ttc_rate_change_clockevent_cb; ··· 424 426 &ttcce->ttc.clk_rate_change_nb); 425 427 if (err) { 426 428 pr_warn("Unable to register clock notifier.\n"); 427 - return err; 429 + goto out_kfree; 428 430 } 429 431 430 432 ttcce->ttc.freq = clk_get_rate(ttcce->ttc.clk); ··· 453 455 454 456 err = request_irq(irq, ttc_clock_event_interrupt, 455 457 IRQF_TIMER, ttcce->ce.name, ttcce); 456 - if (err) { 457 - kfree(ttcce); 458 - return err; 459 - } 458 + if (err) 459 + goto out_kfree; 460 460 461 461 clockevents_config_and_register(&ttcce->ce, 462 462 ttcce->ttc.freq / PRESCALE, 1, 0xfffe); 463 463 464 464 return 0; 465 + 466 + out_kfree: 467 + kfree(ttcce); 468 + return err; 465 469 } 466 470 467 471 static int __init ttc_timer_probe(struct platform_device *pdev)
-284
drivers/clocksource/timer-nps.c
··· 1 - /* 2 - * Copyright (c) 2016, Mellanox Technologies. All rights reserved. 3 - * 4 - * This software is available to you under a choice of one of two 5 - * licenses. You may choose to be licensed under the terms of the GNU 6 - * General Public License (GPL) Version 2, available from the file 7 - * COPYING in the main directory of this source tree, or the 8 - * OpenIB.org BSD license below: 9 - * 10 - * Redistribution and use in source and binary forms, with or 11 - * without modification, are permitted provided that the following 12 - * conditions are met: 13 - * 14 - * - Redistributions of source code must retain the above 15 - * copyright notice, this list of conditions and the following 16 - * disclaimer. 17 - * 18 - * - Redistributions in binary form must reproduce the above 19 - * copyright notice, this list of conditions and the following 20 - * disclaimer in the documentation and/or other materials 21 - * provided with the distribution. 22 - * 23 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 24 - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 25 - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 26 - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS 27 - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN 28 - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 29 - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 30 - * SOFTWARE. 31 - */ 32 - 33 - #include <linux/interrupt.h> 34 - #include <linux/clocksource.h> 35 - #include <linux/clockchips.h> 36 - #include <linux/clk.h> 37 - #include <linux/of.h> 38 - #include <linux/of_irq.h> 39 - #include <linux/cpu.h> 40 - #include <soc/nps/common.h> 41 - 42 - #define NPS_MSU_TICK_LOW 0xC8 43 - #define NPS_CLUSTER_OFFSET 8 44 - #define NPS_CLUSTER_NUM 16 45 - 46 - /* This array is per cluster of CPUs (Each NPS400 cluster got 256 CPUs) */ 47 - static void *nps_msu_reg_low_addr[NPS_CLUSTER_NUM] __read_mostly; 48 - 49 - static int __init nps_get_timer_clk(struct device_node *node, 50 - unsigned long *timer_freq, 51 - struct clk **clk) 52 - { 53 - int ret; 54 - 55 - *clk = of_clk_get(node, 0); 56 - ret = PTR_ERR_OR_ZERO(*clk); 57 - if (ret) { 58 - pr_err("timer missing clk\n"); 59 - return ret; 60 - } 61 - 62 - ret = clk_prepare_enable(*clk); 63 - if (ret) { 64 - pr_err("Couldn't enable parent clk\n"); 65 - clk_put(*clk); 66 - return ret; 67 - } 68 - 69 - *timer_freq = clk_get_rate(*clk); 70 - if (!(*timer_freq)) { 71 - pr_err("Couldn't get clk rate\n"); 72 - clk_disable_unprepare(*clk); 73 - clk_put(*clk); 74 - return -EINVAL; 75 - } 76 - 77 - return 0; 78 - } 79 - 80 - static u64 nps_clksrc_read(struct clocksource *clksrc) 81 - { 82 - int cluster = raw_smp_processor_id() >> NPS_CLUSTER_OFFSET; 83 - 84 - return (u64)ioread32be(nps_msu_reg_low_addr[cluster]); 85 - } 86 - 87 - static int __init nps_setup_clocksource(struct device_node *node) 88 - { 89 - int ret, cluster; 90 - struct clk *clk; 91 - unsigned long nps_timer1_freq; 92 - 93 - 94 - for (cluster = 0; cluster < NPS_CLUSTER_NUM; cluster++) 95 - nps_msu_reg_low_addr[cluster] = 96 - nps_host_reg((cluster << NPS_CLUSTER_OFFSET), 97 - NPS_MSU_BLKID, NPS_MSU_TICK_LOW); 98 - 99 - ret = nps_get_timer_clk(node, &nps_timer1_freq, &clk); 100 - if (ret) 101 - return ret; 102 - 103 - ret = clocksource_mmio_init(nps_msu_reg_low_addr, "nps-tick", 104 - nps_timer1_freq, 300, 32, nps_clksrc_read); 105 - if (ret) { 106 - pr_err("Couldn't register clock source.\n"); 107 - 
clk_disable_unprepare(clk); 108 - } 109 - 110 - return ret; 111 - } 112 - 113 - TIMER_OF_DECLARE(ezchip_nps400_clksrc, "ezchip,nps400-timer", 114 - nps_setup_clocksource); 115 - TIMER_OF_DECLARE(ezchip_nps400_clk_src, "ezchip,nps400-timer1", 116 - nps_setup_clocksource); 117 - 118 - #ifdef CONFIG_EZNPS_MTM_EXT 119 - #include <soc/nps/mtm.h> 120 - 121 - /* Timer related Aux registers */ 122 - #define NPS_REG_TIMER0_TSI 0xFFFFF850 123 - #define NPS_REG_TIMER0_LIMIT 0x23 124 - #define NPS_REG_TIMER0_CTRL 0x22 125 - #define NPS_REG_TIMER0_CNT 0x21 126 - 127 - /* 128 - * Interrupt Enabled (IE) - re-arm the timer 129 - * Not Halted (NH) - is cleared when working with JTAG (for debug) 130 - */ 131 - #define TIMER0_CTRL_IE BIT(0) 132 - #define TIMER0_CTRL_NH BIT(1) 133 - 134 - static unsigned long nps_timer0_freq; 135 - static unsigned long nps_timer0_irq; 136 - 137 - static void nps_clkevent_rm_thread(void) 138 - { 139 - int thread; 140 - unsigned int cflags, enabled_threads; 141 - 142 - hw_schd_save(&cflags); 143 - 144 - enabled_threads = read_aux_reg(NPS_REG_TIMER0_TSI); 145 - 146 - /* remove thread from TSI1 */ 147 - thread = read_aux_reg(CTOP_AUX_THREAD_ID); 148 - enabled_threads &= ~(1 << thread); 149 - write_aux_reg(NPS_REG_TIMER0_TSI, enabled_threads); 150 - 151 - /* Acknowledge and if needed re-arm the timer */ 152 - if (!enabled_threads) 153 - write_aux_reg(NPS_REG_TIMER0_CTRL, TIMER0_CTRL_NH); 154 - else 155 - write_aux_reg(NPS_REG_TIMER0_CTRL, 156 - TIMER0_CTRL_IE | TIMER0_CTRL_NH); 157 - 158 - hw_schd_restore(cflags); 159 - } 160 - 161 - static void nps_clkevent_add_thread(unsigned long delta) 162 - { 163 - int thread; 164 - unsigned int cflags, enabled_threads; 165 - 166 - hw_schd_save(&cflags); 167 - 168 - /* add thread to TSI1 */ 169 - thread = read_aux_reg(CTOP_AUX_THREAD_ID); 170 - enabled_threads = read_aux_reg(NPS_REG_TIMER0_TSI); 171 - enabled_threads |= (1 << thread); 172 - write_aux_reg(NPS_REG_TIMER0_TSI, enabled_threads); 173 - 174 - /* set next timer event */ 175 - write_aux_reg(NPS_REG_TIMER0_LIMIT, delta); 176 - write_aux_reg(NPS_REG_TIMER0_CNT, 0); 177 - write_aux_reg(NPS_REG_TIMER0_CTRL, 178 - TIMER0_CTRL_IE | TIMER0_CTRL_NH); 179 - 180 - hw_schd_restore(cflags); 181 - } 182 - 183 - /* 184 - * Whenever anyone tries to change modes, we just mask interrupts 185 - * and wait for the next event to get set. 
186 - */ 187 - static int nps_clkevent_set_state(struct clock_event_device *dev) 188 - { 189 - nps_clkevent_rm_thread(); 190 - disable_percpu_irq(nps_timer0_irq); 191 - 192 - return 0; 193 - } 194 - 195 - static int nps_clkevent_set_next_event(unsigned long delta, 196 - struct clock_event_device *dev) 197 - { 198 - nps_clkevent_add_thread(delta); 199 - enable_percpu_irq(nps_timer0_irq, IRQ_TYPE_NONE); 200 - 201 - return 0; 202 - } 203 - 204 - static DEFINE_PER_CPU(struct clock_event_device, nps_clockevent_device) = { 205 - .name = "NPS Timer0", 206 - .features = CLOCK_EVT_FEAT_ONESHOT, 207 - .rating = 300, 208 - .set_next_event = nps_clkevent_set_next_event, 209 - .set_state_oneshot = nps_clkevent_set_state, 210 - .set_state_oneshot_stopped = nps_clkevent_set_state, 211 - .set_state_shutdown = nps_clkevent_set_state, 212 - .tick_resume = nps_clkevent_set_state, 213 - }; 214 - 215 - static irqreturn_t timer_irq_handler(int irq, void *dev_id) 216 - { 217 - struct clock_event_device *evt = dev_id; 218 - 219 - nps_clkevent_rm_thread(); 220 - evt->event_handler(evt); 221 - 222 - return IRQ_HANDLED; 223 - } 224 - 225 - static int nps_timer_starting_cpu(unsigned int cpu) 226 - { 227 - struct clock_event_device *evt = this_cpu_ptr(&nps_clockevent_device); 228 - 229 - evt->cpumask = cpumask_of(smp_processor_id()); 230 - 231 - clockevents_config_and_register(evt, nps_timer0_freq, 0, ULONG_MAX); 232 - enable_percpu_irq(nps_timer0_irq, IRQ_TYPE_NONE); 233 - 234 - return 0; 235 - } 236 - 237 - static int nps_timer_dying_cpu(unsigned int cpu) 238 - { 239 - disable_percpu_irq(nps_timer0_irq); 240 - return 0; 241 - } 242 - 243 - static int __init nps_setup_clockevent(struct device_node *node) 244 - { 245 - struct clk *clk; 246 - int ret; 247 - 248 - nps_timer0_irq = irq_of_parse_and_map(node, 0); 249 - if (nps_timer0_irq <= 0) { 250 - pr_err("clockevent: missing irq\n"); 251 - return -EINVAL; 252 - } 253 - 254 - ret = nps_get_timer_clk(node, &nps_timer0_freq, &clk); 255 - if (ret) 256 - return ret; 257 - 258 - /* Needs apriori irq_set_percpu_devid() done in intc map function */ 259 - ret = request_percpu_irq(nps_timer0_irq, timer_irq_handler, 260 - "Timer0 (per-cpu-tick)", 261 - &nps_clockevent_device); 262 - if (ret) { 263 - pr_err("Couldn't request irq\n"); 264 - clk_disable_unprepare(clk); 265 - return ret; 266 - } 267 - 268 - ret = cpuhp_setup_state(CPUHP_AP_ARC_TIMER_STARTING, 269 - "clockevents/nps:starting", 270 - nps_timer_starting_cpu, 271 - nps_timer_dying_cpu); 272 - if (ret) { 273 - pr_err("Failed to setup hotplug state\n"); 274 - clk_disable_unprepare(clk); 275 - free_percpu_irq(nps_timer0_irq, &nps_clockevent_device); 276 - return ret; 277 - } 278 - 279 - return 0; 280 - } 281 - 282 - TIMER_OF_DECLARE(ezchip_nps400_clk_evt, "ezchip,nps400-timer0", 283 - nps_setup_clockevent); 284 - #endif /* CONFIG_EZNPS_MTM_EXT */
+8 -3
drivers/clocksource/timer-orion.c
··· 143 143 irq = irq_of_parse_and_map(np, 1); 144 144 if (irq <= 0) { 145 145 pr_err("%pOFn: unable to parse timer1 irq\n", np); 146 - return -EINVAL; 146 + ret = -EINVAL; 147 + goto out_unprep_clk; 147 148 } 148 149 149 150 rate = clk_get_rate(clk); ··· 161 160 clocksource_mmio_readl_down); 162 161 if (ret) { 163 162 pr_err("Failed to initialize mmio timer\n"); 164 - return ret; 163 + goto out_unprep_clk; 165 164 } 166 165 167 166 sched_clock_register(orion_read_sched_clock, 32, rate); ··· 171 170 "orion_event", NULL); 172 171 if (ret) { 173 172 pr_err("%pOFn: unable to setup irq\n", np); 174 - return ret; 173 + goto out_unprep_clk; 175 174 } 176 175 177 176 ticks_per_jiffy = (clk_get_rate(clk) + HZ/2) / HZ; ··· 184 183 orion_delay_timer_init(rate); 185 184 186 185 return 0; 186 + 187 + out_unprep_clk: 188 + clk_disable_unprepare(clk); 189 + return ret; 187 190 } 188 191 TIMER_OF_DECLARE(orion_timer, "marvell,orion-timer", orion_timer_init);
+17 -32
drivers/clocksource/timer-sp804.c
··· 5 5 * Copyright (C) 1999 - 2003 ARM Limited 6 6 * Copyright (C) 2000 Deep Blue Solutions Ltd 7 7 */ 8 + 9 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 10 + 8 11 #include <linux/clk.h> 9 12 #include <linux/clocksource.h> 10 13 #include <linux/clockchips.h> ··· 37 34 #define HISI_TIMER_BGLOAD 0x20 38 35 #define HISI_TIMER_BGLOAD_H 0x24 39 36 40 - 41 - struct sp804_timer __initdata arm_sp804_timer = { 37 + static struct sp804_timer arm_sp804_timer __initdata = { 42 38 .load = TIMER_LOAD, 43 39 .value = TIMER_VALUE, 44 40 .ctrl = TIMER_CTRL, ··· 46 44 .width = 32, 47 45 }; 48 46 49 - struct sp804_timer __initdata hisi_sp804_timer = { 47 + static struct sp804_timer hisi_sp804_timer __initdata = { 50 48 .load = HISI_TIMER_LOAD, 51 49 .load_h = HISI_TIMER_LOAD_H, 52 50 .value = HISI_TIMER_VALUE, ··· 61 59 62 60 static long __init sp804_get_clock_rate(struct clk *clk, const char *name) 63 61 { 64 - long rate; 65 62 int err; 66 63 67 64 if (!clk) 68 65 clk = clk_get_sys("sp804", name); 69 66 if (IS_ERR(clk)) { 70 - pr_err("sp804: %s clock not found: %ld\n", name, PTR_ERR(clk)); 67 + pr_err("%s clock not found: %ld\n", name, PTR_ERR(clk)); 71 68 return PTR_ERR(clk); 72 69 } 73 70 74 - err = clk_prepare(clk); 71 + err = clk_prepare_enable(clk); 75 72 if (err) { 76 - pr_err("sp804: clock failed to prepare: %d\n", err); 73 + pr_err("clock failed to enable: %d\n", err); 77 74 clk_put(clk); 78 75 return err; 79 76 } 80 77 81 - err = clk_enable(clk); 82 - if (err) { 83 - pr_err("sp804: clock failed to enable: %d\n", err); 84 - clk_unprepare(clk); 85 - clk_put(clk); 86 - return err; 87 - } 88 - 89 - rate = clk_get_rate(clk); 90 - if (rate < 0) { 91 - pr_err("sp804: clock failed to get rate: %ld\n", rate); 92 - clk_disable(clk); 93 - clk_unprepare(clk); 94 - clk_put(clk); 95 - } 96 - 97 - return rate; 78 + return clk_get_rate(clk); 98 79 } 99 80 100 81 static struct sp804_clkevt * __init sp804_clkevt_get(void __iomem *base) ··· 102 117 return ~readl_relaxed(sched_clkevt->value); 103 118 } 104 119 105 - int __init sp804_clocksource_and_sched_clock_init(void __iomem *base, 106 - const char *name, 107 - struct clk *clk, 108 - int use_sched_clock) 120 + static int __init sp804_clocksource_and_sched_clock_init(void __iomem *base, 121 + const char *name, 122 + struct clk *clk, 123 + int use_sched_clock) 109 124 { 110 125 long rate; 111 126 struct sp804_clkevt *clkevt; ··· 201 216 .rating = 300, 202 217 }; 203 218 204 - int __init sp804_clockevents_init(void __iomem *base, unsigned int irq, 205 - struct clk *clk, const char *name) 219 + static int __init sp804_clockevents_init(void __iomem *base, unsigned int irq, 220 + struct clk *clk, const char *name) 206 221 { 207 222 struct clock_event_device *evt = &sp804_clockevent; 208 223 long rate; ··· 221 236 222 237 if (request_irq(irq, sp804_timer_interrupt, IRQF_TIMER | IRQF_IRQPOLL, 223 238 "timer", &sp804_clockevent)) 224 - pr_err("%s: request_irq() failed\n", "timer"); 239 + pr_err("request_irq() failed\n"); 225 240 clockevents_config_and_register(evt, rate, 0xf, 0xffffffff); 226 241 227 242 return 0; ··· 283 298 if (of_clk_get_parent_count(np) == 3) { 284 299 clk2 = of_clk_get(np, 1); 285 300 if (IS_ERR(clk2)) { 286 - pr_err("sp804: %pOFn clock not found: %d\n", np, 301 + pr_err("%pOFn clock not found: %d\n", np, 287 302 (int)PTR_ERR(clk2)); 288 303 clk2 = NULL; 289 304 }
-1
drivers/rtc/Makefile
··· 6 6 ccflags-$(CONFIG_RTC_DEBUG) := -DDEBUG 7 7 8 8 obj-$(CONFIG_RTC_LIB) += lib.o 9 - obj-$(CONFIG_RTC_SYSTOHC) += systohc.o 10 9 obj-$(CONFIG_RTC_CLASS) += rtc-core.o 11 10 obj-$(CONFIG_RTC_MC146818_LIB) += rtc-mc146818-lib.o 12 11 rtc-core-y := class.o interface.o
+7 -2
drivers/rtc/class.c
··· 200 200 201 201 device_initialize(&rtc->dev); 202 202 203 - /* Drivers can revise this default after allocating the device. */ 204 - rtc->set_offset_nsec = NSEC_PER_SEC / 2; 203 + /* 204 + * Drivers can revise this default after allocating the device. 205 + * The default is what most RTCs do: Increment seconds exactly one 206 + * second after the write happened. This adds a default transport 207 + * time of 5ms which is at least halfways close to reality. 208 + */ 209 + rtc->set_offset_nsec = NSEC_PER_SEC + 5 * NSEC_PER_MSEC; 205 210 206 211 rtc->irq_freq = 1; 207 212 rtc->max_user_freq = 64;
+3
drivers/rtc/rtc-cmos.c
··· 868 868 if (retval) 869 869 goto cleanup2; 870 870 871 + /* Set the sync offset for the periodic 11min update correct */ 872 + cmos_rtc.rtc->set_offset_nsec = NSEC_PER_SEC / 2; 873 + 871 874 /* export at least the first block of NVRAM */ 872 875 nvmem_cfg.size = address_space - NVRAM_OFFSET; 873 876 if (rtc_nvmem_register(cmos_rtc.rtc, &nvmem_cfg))
+41 -29
drivers/rtc/rtc-mc146818-lib.c
··· 8 8 #include <linux/acpi.h> 9 9 #endif 10 10 11 - /* 12 - * Returns true if a clock update is in progress 13 - */ 14 - static inline unsigned char mc146818_is_updating(void) 15 - { 16 - unsigned char uip; 17 - unsigned long flags; 18 - 19 - spin_lock_irqsave(&rtc_lock, flags); 20 - uip = (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP); 21 - spin_unlock_irqrestore(&rtc_lock, flags); 22 - return uip; 23 - } 24 - 25 11 unsigned int mc146818_get_time(struct rtc_time *time) 26 12 { 27 13 unsigned char ctrl; 28 14 unsigned long flags; 29 15 unsigned char century = 0; 16 + bool retry; 30 17 31 18 #ifdef CONFIG_MACH_DECSTATION 32 19 unsigned int real_year; 33 20 #endif 34 21 22 + again: 23 + spin_lock_irqsave(&rtc_lock, flags); 35 24 /* 36 - * read RTC once any update in progress is done. The update 37 - * can take just over 2ms. We wait 20ms. There is no need to 38 - * to poll-wait (up to 1s - eeccch) for the falling edge of RTC_UIP. 39 - * If you need to know *exactly* when a second has started, enable 40 - * periodic update complete interrupts, (via ioctl) and then 41 - * immediately read /dev/rtc which will block until you get the IRQ. 42 - * Once the read clears, read the RTC time (again via ioctl). Easy. 25 + * Check whether there is an update in progress during which the 26 + * readout is unspecified. The maximum update time is ~2ms. Poll 27 + * every msec for completion. 28 + * 29 + * Store the second value before checking UIP so a long lasting NMI 30 + * which happens to hit after the UIP check cannot make an update 31 + * cycle invisible. 43 32 */ 44 - if (mc146818_is_updating()) 45 - mdelay(20); 33 + time->tm_sec = CMOS_READ(RTC_SECONDS); 34 + 35 + if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP) { 36 + spin_unlock_irqrestore(&rtc_lock, flags); 37 + mdelay(1); 38 + goto again; 39 + } 40 + 41 + /* Revalidate the above readout */ 42 + if (time->tm_sec != CMOS_READ(RTC_SECONDS)) { 43 + spin_unlock_irqrestore(&rtc_lock, flags); 44 + goto again; 45 + } 46 46 47 47 /* 48 48 * Only the values that we read from the RTC are set. We leave ··· 50 50 * RTC has RTC_DAY_OF_WEEK, we ignore it, as it is only updated 51 51 * by the RTC when initially set to a non-zero value. 52 52 */ 53 - spin_lock_irqsave(&rtc_lock, flags); 54 - time->tm_sec = CMOS_READ(RTC_SECONDS); 55 53 time->tm_min = CMOS_READ(RTC_MINUTES); 56 54 time->tm_hour = CMOS_READ(RTC_HOURS); 57 55 time->tm_mday = CMOS_READ(RTC_DAY_OF_MONTH); ··· 64 66 century = CMOS_READ(acpi_gbl_FADT.century); 65 67 #endif 66 68 ctrl = CMOS_READ(RTC_CONTROL); 69 + /* 70 + * Check for the UIP bit again. If it is set now then 71 + * the above values may contain garbage. 72 + */ 73 + retry = CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP; 74 + /* 75 + * A NMI might have interrupted the above sequence so check whether 76 + * the seconds value has changed which indicates that the NMI took 77 + * longer than the UIP bit was set. Unlikely, but possible and 78 + * there is also virt... 79 + */ 80 + retry |= time->tm_sec != CMOS_READ(RTC_SECONDS); 81 + 67 82 spin_unlock_irqrestore(&rtc_lock, flags); 83 + 84 + if (retry) 85 + goto again; 68 86 69 87 if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD) 70 88 { ··· 135 121 if (yrs > 255) /* They are unsigned */ 136 122 return -EINVAL; 137 123 138 - spin_lock_irqsave(&rtc_lock, flags); 139 124 #ifdef CONFIG_MACH_DECSTATION 140 125 real_yrs = yrs; 141 126 leap_yr = ((!((yrs + 1900) % 4) && ((yrs + 1900) % 100)) || ··· 163 150 /* These limits and adjustments are independent of 164 151 * whether the chip is in binary mode or not. 
165 152 */ 166 - if (yrs > 169) { 167 - spin_unlock_irqrestore(&rtc_lock, flags); 153 + if (yrs > 169) 168 154 return -EINVAL; 169 - } 170 155 171 156 if (yrs >= 100) 172 157 yrs -= 100; ··· 180 169 century = bin2bcd(century); 181 170 } 182 171 172 + spin_lock_irqsave(&rtc_lock, flags); 183 173 save_control = CMOS_READ(RTC_CONTROL); 184 174 CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL); 185 175 save_freq_select = CMOS_READ(RTC_FREQ_SELECT);
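The retry pattern introduced above (read the seconds register first, start over while the update-in-progress bit is set, then revalidate the seconds value) can be shown outside the kernel. The following is a hedged toy sketch: the register model, names and the 1-in-64 UIP cadence are made up for illustration and are not the real MC146818 accessors:

    #include <stdio.h>

    /* Toy model of an MC146818-style RTC: every 64th access the "chip" is in
     * the middle of an update (UIP set) and the seconds value is advancing. */
    #define RTC_UIP 0x80

    static unsigned int ticks;

    static unsigned char read_freq_select(void)
    {
            return (++ticks % 64 == 0) ? RTC_UIP : 0;
    }

    static unsigned char read_seconds(void)
    {
            return (ticks / 64) % 60;
    }

    /* Read seconds, retrying until no update was in progress around the read. */
    static unsigned char read_seconds_stable(void)
    {
            unsigned char sec;

            for (;;) {
                    /* Read the value first so a late update cannot go unnoticed. */
                    sec = read_seconds();

                    /* Update in progress? Start over. */
                    if (read_freq_select() & RTC_UIP)
                            continue;

                    /* Revalidate: a changed value means an update slipped in. */
                    if (sec == read_seconds())
                            return sec;
            }
    }

    int main(void)
    {
            printf("seconds: %u\n", read_seconds_stable());
            return 0;
    }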
-61
drivers/rtc/systohc.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - #include <linux/rtc.h> 3 - #include <linux/time.h> 4 - 5 - /** 6 - * rtc_set_ntp_time - Save NTP synchronized time to the RTC 7 - * @now: Current time of day 8 - * @target_nsec: pointer for desired now->tv_nsec value 9 - * 10 - * Replacement for the NTP platform function update_persistent_clock64 11 - * that stores time for later retrieval by rtc_hctosys. 12 - * 13 - * Returns 0 on successful RTC update, -ENODEV if a RTC update is not 14 - * possible at all, and various other -errno for specific temporary failure 15 - * cases. 16 - * 17 - * -EPROTO is returned if now.tv_nsec is not close enough to *target_nsec. 18 - * 19 - * If temporary failure is indicated the caller should try again 'soon' 20 - */ 21 - int rtc_set_ntp_time(struct timespec64 now, unsigned long *target_nsec) 22 - { 23 - struct rtc_device *rtc; 24 - struct rtc_time tm; 25 - struct timespec64 to_set; 26 - int err = -ENODEV; 27 - bool ok; 28 - 29 - rtc = rtc_class_open(CONFIG_RTC_SYSTOHC_DEVICE); 30 - if (!rtc) 31 - goto out_err; 32 - 33 - if (!rtc->ops || !rtc->ops->set_time) 34 - goto out_close; 35 - 36 - /* Compute the value of tv_nsec we require the caller to supply in 37 - * now.tv_nsec. This is the value such that (now + 38 - * set_offset_nsec).tv_nsec == 0. 39 - */ 40 - set_normalized_timespec64(&to_set, 0, -rtc->set_offset_nsec); 41 - *target_nsec = to_set.tv_nsec; 42 - 43 - /* The ntp code must call this with the correct value in tv_nsec, if 44 - * it does not we update target_nsec and return EPROTO to make the ntp 45 - * code try again later. 46 - */ 47 - ok = rtc_tv_nsec_ok(rtc->set_offset_nsec, &to_set, &now); 48 - if (!ok) { 49 - err = -EPROTO; 50 - goto out_close; 51 - } 52 - 53 - rtc_time64_to_tm(to_set.tv_sec, &tm); 54 - 55 - err = rtc_set_time(rtc, &tm); 56 - 57 - out_close: 58 - rtc_class_close(rtc); 59 - out_err: 60 - return err; 61 - }
+7 -3
include/dt-bindings/clock/ingenic,sysost.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 - * This header provides clock numbers for the ingenic,tcu DT binding. 3 + * This header provides clock numbers for the Ingenic OST DT binding. 4 4 */ 5 5 6 6 #ifndef __DT_BINDINGS_CLOCK_INGENIC_OST_H__ 7 7 #define __DT_BINDINGS_CLOCK_INGENIC_OST_H__ 8 8 9 - #define OST_CLK_PERCPU_TIMER 0 10 - #define OST_CLK_GLOBAL_TIMER 1 9 + #define OST_CLK_PERCPU_TIMER 1 10 + #define OST_CLK_GLOBAL_TIMER 0 11 + #define OST_CLK_PERCPU_TIMER0 1 12 + #define OST_CLK_PERCPU_TIMER1 2 13 + #define OST_CLK_PERCPU_TIMER2 3 14 + #define OST_CLK_PERCPU_TIMER3 4 11 15 12 16 #endif /* __DT_BINDINGS_CLOCK_INGENIC_OST_H__ */
+5 -1
include/linux/hrtimer.h
··· 447 447 /* Query timers: */ 448 448 extern ktime_t __hrtimer_get_remaining(const struct hrtimer *timer, bool adjust); 449 449 450 + /** 451 + * hrtimer_get_remaining - get remaining time for the timer 452 + * @timer: the timer to read 453 + */ 450 454 static inline ktime_t hrtimer_get_remaining(const struct hrtimer *timer) 451 455 { 452 456 return __hrtimer_get_remaining(timer, false); ··· 462 458 extern bool hrtimer_active(const struct hrtimer *timer); 463 459 464 460 /** 465 - * hrtimer_is_queued = check, whether the timer is on one of the queues 461 + * hrtimer_is_queued - check, whether the timer is on one of the queues 466 462 * @timer: Timer to check 467 463 * 468 464 * Returns: True if the timer is queued, false otherwise
+29 -40
include/linux/rtc.h
··· 110 110 /* Some hardware can't support UIE mode */ 111 111 int uie_unsupported; 112 112 113 - /* Number of nsec it takes to set the RTC clock. This influences when 114 - * the set ops are called. An offset: 115 - * - of 0.5 s will call RTC set for wall clock time 10.0 s at 9.5 s 116 - * - of 1.5 s will call RTC set for wall clock time 10.0 s at 8.5 s 117 - * - of -0.5 s will call RTC set for wall clock time 10.0 s at 10.5 s 113 + /* 114 + * This offset specifies the update timing of the RTC. 115 + * 116 + * tsched t1 write(t2.tv_sec - 1sec)) t2 RTC increments seconds 117 + * 118 + * The offset defines how tsched is computed so that the write to 119 + * the RTC (t2.tv_sec - 1sec) is correct versus the time required 120 + * for the transport of the write and the time which the RTC needs 121 + * to increment seconds the first time after the write (t2). 122 + * 123 + * For direct accessible RTCs tsched ~= t1 because the write time 124 + * is negligible. For RTCs behind slow busses the transport time is 125 + * significant and has to be taken into account. 126 + * 127 + * The time between the write (t1) and the first increment after 128 + * the write (t2) is RTC specific. For a MC146818 RTC it's 500ms, 129 + * for many others it's exactly 1 second. Consult the datasheet. 130 + * 131 + * The value of this offset is also used to calculate the to be 132 + * written value (t2.tv_sec - 1sec) at tsched. 133 + * 134 + * The default value for this is NSEC_PER_SEC + 10 msec default 135 + * transport time. The offset can be adjusted by drivers so the 136 + * calculation for the to be written value at tsched becomes 137 + * correct: 138 + * 139 + * newval = tsched + set_offset_nsec - NSEC_PER_SEC 140 + * and (tsched + set_offset_nsec) % NSEC_PER_SEC == 0 118 141 */ 119 - long set_offset_nsec; 142 + unsigned long set_offset_nsec; 120 143 121 144 bool registered; 122 145 ··· 188 165 189 166 extern int rtc_read_time(struct rtc_device *rtc, struct rtc_time *tm); 190 167 extern int rtc_set_time(struct rtc_device *rtc, struct rtc_time *tm); 191 - extern int rtc_set_ntp_time(struct timespec64 now, unsigned long *target_nsec); 192 168 int __rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm); 193 169 extern int rtc_read_alarm(struct rtc_device *rtc, 194 170 struct rtc_wkalrm *alrm); ··· 225 203 static inline bool is_leap_year(unsigned int year) 226 204 { 227 205 return (!(year % 4) && (year % 100)) || !(year % 400); 228 - } 229 - 230 - /* Determine if we can call to driver to set the time. Drivers can only be 231 - * called to set a second aligned time value, and the field set_offset_nsec 232 - * specifies how far away from the second aligned time to call the driver. 233 - * 234 - * This also computes 'to_set' which is the time we are trying to set, and has 235 - * a zero in tv_nsecs, such that: 236 - * to_set - set_delay_nsec == now +/- FUZZ 237 - * 238 - */ 239 - static inline bool rtc_tv_nsec_ok(s64 set_offset_nsec, 240 - struct timespec64 *to_set, 241 - const struct timespec64 *now) 242 - { 243 - /* Allowed error in tv_nsec, arbitarily set to 5 jiffies in ns. 
*/ 244 - const unsigned long TIME_SET_NSEC_FUZZ = TICK_NSEC * 5; 245 - struct timespec64 delay = {.tv_sec = 0, 246 - .tv_nsec = set_offset_nsec}; 247 - 248 - *to_set = timespec64_add(*now, delay); 249 - 250 - if (to_set->tv_nsec < TIME_SET_NSEC_FUZZ) { 251 - to_set->tv_nsec = 0; 252 - return true; 253 - } 254 - 255 - if (to_set->tv_nsec > NSEC_PER_SEC - TIME_SET_NSEC_FUZZ) { 256 - to_set->tv_sec++; 257 - to_set->tv_nsec = 0; 258 - return true; 259 - } 260 - return false; 261 206 } 262 207 263 208 #define rtc_register_device(device) \
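To make the timing relation described in that comment concrete, here is a small hedged worked example of the arithmetic; note that the comment above mentions a 10 msec default transport time while class.c in this series uses 5 ms, so the example simply picks 5 ms as an illustrative offset:

    #include <stdio.h>

    #define NSEC_PER_SEC  1000000000LL
    #define NSEC_PER_MSEC 1000000LL

    int main(void)
    {
            /* The RTC increments seconds at t2; that edge should land on a full second. */
            long long t2_ns = 100 * NSEC_PER_SEC;        /* example: wall clock 100.000000000 s */

            /* One second increment delay plus an assumed 5 ms transport time. */
            long long set_offset_nsec = NSEC_PER_SEC + 5 * NSEC_PER_MSEC;

            /* Schedule the write set_offset_nsec before the targeted increment ... */
            long long tsched_ns = t2_ns - set_offset_nsec;

            /* ... and write the seconds value that is one second behind the target. */
            long long newval_ns = tsched_ns + set_offset_nsec - NSEC_PER_SEC;

            printf("tsched = %lld.%09lld s, write %lld s\n",
                   tsched_ns / NSEC_PER_SEC, tsched_ns % NSEC_PER_SEC,
                   newval_ns / NSEC_PER_SEC);
            return 0;
    }

With these numbers the write of "99 s" is scheduled at 98.995 s, so the RTC's first increment after the write lands at 100.000 s.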
+2
include/linux/timekeeping.h
··· 303 303 extern void read_persistent_clock64(struct timespec64 *ts); 304 304 void read_persistent_wall_and_boot_offset(struct timespec64 *wall_clock, 305 305 struct timespec64 *boot_offset); 306 + #ifdef CONFIG_GENERIC_CMOS_UPDATE 306 307 extern int update_persistent_clock64(struct timespec64 now); 308 + #endif 307 309 308 310 #endif
-1
include/linux/timer.h
··· 193 193 #define del_singleshot_timer_sync(t) del_timer_sync(t) 194 194 195 195 extern void init_timers(void); 196 - extern void run_local_timers(void); 197 196 struct hrtimer; 198 197 extern enum hrtimer_restart it_real_fn(struct hrtimer *); 199 198
-1
include/linux/timex.h
··· 157 157 extern void hardpps(const struct timespec64 *, const struct timespec64 *); 158 158 159 159 int read_current_timer(unsigned long *timer_val); 160 - void ntp_notify_cmos_timer(void); 161 160 162 161 /* The clock frequency of the i8253/i8254 PIT */ 163 162 #define PIT_TICK_RATE 1193182ul
+1 -1
kernel/time/hrtimer.c
··· 1284 1284 EXPORT_SYMBOL_GPL(hrtimer_cancel); 1285 1285 1286 1286 /** 1287 - * hrtimer_get_remaining - get remaining time for the timer 1287 + * __hrtimer_get_remaining - get remaining time for the timer 1288 1288 * @timer: the timer to read 1289 1289 * @adjust: adjust relative timers when CONFIG_TIME_LOW_RES=y 1290 1290 */
+2 -1
kernel/time/jiffies.c
··· 59 59 }; 60 60 61 61 __cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(jiffies_lock); 62 - __cacheline_aligned_in_smp seqcount_t jiffies_seq; 62 + __cacheline_aligned_in_smp seqcount_raw_spinlock_t jiffies_seq = 63 + SEQCNT_RAW_SPINLOCK_ZERO(jiffies_seq, &jiffies_lock); 63 64 64 65 #if (BITS_PER_LONG < 64) 65 66 u64 get_jiffies_64(void)
+140 -91
kernel/time/ntp.c
··· 494 494 return leap; 495 495 } 496 496 497 + #if defined(CONFIG_GENERIC_CMOS_UPDATE) || defined(CONFIG_RTC_SYSTOHC) 497 498 static void sync_hw_clock(struct work_struct *work); 498 - static DECLARE_DELAYED_WORK(sync_work, sync_hw_clock); 499 + static DECLARE_WORK(sync_work, sync_hw_clock); 500 + static struct hrtimer sync_hrtimer; 501 + #define SYNC_PERIOD_NS (11UL * 60 * NSEC_PER_SEC) 499 502 500 - static void sched_sync_hw_clock(struct timespec64 now, 501 - unsigned long target_nsec, bool fail) 502 - 503 + static enum hrtimer_restart sync_timer_callback(struct hrtimer *timer) 503 504 { 504 - struct timespec64 next; 505 + queue_work(system_power_efficient_wq, &sync_work); 505 506 506 - ktime_get_real_ts64(&next); 507 - if (!fail) 508 - next.tv_sec = 659; 509 - else { 510 - /* 511 - * Try again as soon as possible. Delaying long periods 512 - * decreases the accuracy of the work queue timer. Due to this 513 - * the algorithm is very likely to require a short-sleep retry 514 - * after the above long sleep to synchronize ts_nsec. 515 - */ 516 - next.tv_sec = 0; 517 - } 518 - 519 - /* Compute the needed delay that will get to tv_nsec == target_nsec */ 520 - next.tv_nsec = target_nsec - next.tv_nsec; 521 - if (next.tv_nsec <= 0) 522 - next.tv_nsec += NSEC_PER_SEC; 523 - if (next.tv_nsec >= NSEC_PER_SEC) { 524 - next.tv_sec++; 525 - next.tv_nsec -= NSEC_PER_SEC; 526 - } 527 - 528 - queue_delayed_work(system_power_efficient_wq, &sync_work, 529 - timespec64_to_jiffies(&next)); 507 + return HRTIMER_NORESTART; 530 508 } 531 509 532 - static void sync_rtc_clock(void) 510 + static void sched_sync_hw_clock(unsigned long offset_nsec, bool retry) 533 511 { 534 - unsigned long target_nsec; 535 - struct timespec64 adjust, now; 536 - int rc; 512 + ktime_t exp = ktime_set(ktime_get_real_seconds(), 0); 537 513 538 - if (!IS_ENABLED(CONFIG_RTC_SYSTOHC)) 539 - return; 514 + if (retry) 515 + exp = ktime_add_ns(exp, 2 * NSEC_PER_SEC - offset_nsec); 516 + else 517 + exp = ktime_add_ns(exp, SYNC_PERIOD_NS - offset_nsec); 540 518 541 - ktime_get_real_ts64(&now); 519 + hrtimer_start(&sync_hrtimer, exp, HRTIMER_MODE_ABS); 520 + } 542 521 543 - adjust = now; 544 - if (persistent_clock_is_local) 545 - adjust.tv_sec -= (sys_tz.tz_minuteswest * 60); 522 + /* 523 + * Check whether @now is correct versus the required time to update the RTC 524 + * and calculate the value which needs to be written to the RTC so that the 525 + * next seconds increment of the RTC after the write is aligned with the next 526 + * seconds increment of clock REALTIME. 527 + * 528 + * tsched t1 write(t2.tv_sec - 1sec)) t2 RTC increments seconds 529 + * 530 + * t2.tv_nsec == 0 531 + * tsched = t2 - set_offset_nsec 532 + * newval = t2 - NSEC_PER_SEC 533 + * 534 + * ==> neval = tsched + set_offset_nsec - NSEC_PER_SEC 535 + * 536 + * As the execution of this code is not guaranteed to happen exactly at 537 + * tsched this allows it to happen within a fuzzy region: 538 + * 539 + * abs(now - tsched) < FUZZ 540 + * 541 + * If @now is not inside the allowed window the function returns false. 542 + */ 543 + static inline bool rtc_tv_nsec_ok(unsigned long set_offset_nsec, 544 + struct timespec64 *to_set, 545 + const struct timespec64 *now) 546 + { 547 + /* Allowed error in tv_nsec, arbitarily set to 5 jiffies in ns. 
*/ 548 + const unsigned long TIME_SET_NSEC_FUZZ = TICK_NSEC * 5; 549 + struct timespec64 delay = {.tv_sec = -1, 550 + .tv_nsec = set_offset_nsec}; 546 551 547 - /* 548 - * The current RTC in use will provide the target_nsec it wants to be 549 - * called at, and does rtc_tv_nsec_ok internally. 550 - */ 551 - rc = rtc_set_ntp_time(adjust, &target_nsec); 552 - if (rc == -ENODEV) 553 - return; 552 + *to_set = timespec64_add(*now, delay); 554 553 555 - sched_sync_hw_clock(now, target_nsec, rc); 554 + if (to_set->tv_nsec < TIME_SET_NSEC_FUZZ) { 555 + to_set->tv_nsec = 0; 556 + return true; 557 + } 558 + 559 + if (to_set->tv_nsec > NSEC_PER_SEC - TIME_SET_NSEC_FUZZ) { 560 + to_set->tv_sec++; 561 + to_set->tv_nsec = 0; 562 + return true; 563 + } 564 + return false; 556 565 } 557 566 558 567 #ifdef CONFIG_GENERIC_CMOS_UPDATE ··· 569 560 { 570 561 return -ENODEV; 571 562 } 563 + #else 564 + static inline int update_persistent_clock64(struct timespec64 now64) 565 + { 566 + return -ENODEV; 567 + } 572 568 #endif 573 569 574 - static bool sync_cmos_clock(void) 570 + #ifdef CONFIG_RTC_SYSTOHC 571 + /* Save NTP synchronized time to the RTC */ 572 + static int update_rtc(struct timespec64 *to_set, unsigned long *offset_nsec) 575 573 { 576 - static bool no_cmos; 577 - struct timespec64 now; 578 - struct timespec64 adjust; 579 - int rc = -EPROTO; 580 - long target_nsec = NSEC_PER_SEC / 2; 574 + struct rtc_device *rtc; 575 + struct rtc_time tm; 576 + int err = -ENODEV; 581 577 582 - if (!IS_ENABLED(CONFIG_GENERIC_CMOS_UPDATE)) 583 - return false; 578 + rtc = rtc_class_open(CONFIG_RTC_SYSTOHC_DEVICE); 579 + if (!rtc) 580 + return -ENODEV; 584 581 585 - if (no_cmos) 586 - return false; 582 + if (!rtc->ops || !rtc->ops->set_time) 583 + goto out_close; 587 584 588 - /* 589 - * Historically update_persistent_clock64() has followed x86 590 - * semantics, which match the MC146818A/etc RTC. This RTC will store 591 - * 'adjust' and then in .5s it will advance once second. 592 - * 593 - * Architectures are strongly encouraged to use rtclib and not 594 - * implement this legacy API. 595 - */ 596 - ktime_get_real_ts64(&now); 597 - if (rtc_tv_nsec_ok(-1 * target_nsec, &adjust, &now)) { 598 - if (persistent_clock_is_local) 599 - adjust.tv_sec -= (sys_tz.tz_minuteswest * 60); 600 - rc = update_persistent_clock64(adjust); 601 - /* 602 - * The machine does not support update_persistent_clock64 even 603 - * though it defines CONFIG_GENERIC_CMOS_UPDATE. 
604 - */ 605 - if (rc == -ENODEV) { 606 - no_cmos = true; 607 - return false; 608 - } 585 + /* First call might not have the correct offset */ 586 + if (*offset_nsec == rtc->set_offset_nsec) { 587 + rtc_time64_to_tm(to_set->tv_sec, &tm); 588 + err = rtc_set_time(rtc, &tm); 589 + } else { 590 + /* Store the update offset and let the caller try again */ 591 + *offset_nsec = rtc->set_offset_nsec; 592 + err = -EAGAIN; 609 593 } 610 - 611 - sched_sync_hw_clock(now, target_nsec, rc); 612 - return true; 594 + out_close: 595 + rtc_class_close(rtc); 596 + return err; 613 597 } 598 + #else 599 + static inline int update_rtc(struct timespec64 *to_set, unsigned long *offset_nsec) 600 + { 601 + return -ENODEV; 602 + } 603 + #endif 614 604 615 605 /* 616 606 * If we have an externally synchronized Linux clock, then update RTC clock ··· 621 613 */ 622 614 static void sync_hw_clock(struct work_struct *work) 623 615 { 624 - if (!ntp_synced()) 616 + /* 617 + * The default synchronization offset is 500ms for the deprecated 618 + * update_persistent_clock64() under the assumption that it uses 619 + * the infamous CMOS clock (MC146818). 620 + */ 621 + static unsigned long offset_nsec = NSEC_PER_SEC / 2; 622 + struct timespec64 now, to_set; 623 + int res = -EAGAIN; 624 + 625 + /* 626 + * Don't update if STA_UNSYNC is set and if ntp_notify_cmos_timer() 627 + * managed to schedule the work between the timer firing and the 628 + * work being able to rearm the timer. Wait for the timer to expire. 629 + */ 630 + if (!ntp_synced() || hrtimer_is_queued(&sync_hrtimer)) 625 631 return; 626 632 627 - if (sync_cmos_clock()) 628 - return; 633 + ktime_get_real_ts64(&now); 634 + /* If @now is not in the allowed window, try again */ 635 + if (!rtc_tv_nsec_ok(offset_nsec, &to_set, &now)) 636 + goto rearm; 629 637 630 - sync_rtc_clock(); 638 + /* Take timezone adjusted RTCs into account */ 639 + if (persistent_clock_is_local) 640 + to_set.tv_sec -= (sys_tz.tz_minuteswest * 60); 641 + 642 + /* Try the legacy RTC first. */ 643 + res = update_persistent_clock64(to_set); 644 + if (res != -ENODEV) 645 + goto rearm; 646 + 647 + /* Try the RTC class */ 648 + res = update_rtc(&to_set, &offset_nsec); 649 + if (res == -ENODEV) 650 + return; 651 + rearm: 652 + sched_sync_hw_clock(offset_nsec, res != 0); 631 653 } 632 654 633 655 void ntp_notify_cmos_timer(void) 634 656 { 635 - if (!ntp_synced()) 636 - return; 637 - 638 - if (IS_ENABLED(CONFIG_GENERIC_CMOS_UPDATE) || 639 - IS_ENABLED(CONFIG_RTC_SYSTOHC)) 640 - queue_delayed_work(system_power_efficient_wq, &sync_work, 0); 657 + /* 658 + * When the work is currently executed but has not yet the timer 659 + * rearmed this queues the work immediately again. No big issue, 660 + * just a pointless work scheduled. 661 + */ 662 + if (ntp_synced() && !hrtimer_is_queued(&sync_hrtimer)) 663 + queue_work(system_power_efficient_wq, &sync_work); 641 664 } 665 + 666 + static void __init ntp_init_cmos_sync(void) 667 + { 668 + hrtimer_init(&sync_hrtimer, CLOCK_REALTIME, HRTIMER_MODE_ABS); 669 + sync_hrtimer.function = sync_timer_callback; 670 + } 671 + #else /* CONFIG_GENERIC_CMOS_UPDATE) || defined(CONFIG_RTC_SYSTOHC) */ 672 + static inline void __init ntp_init_cmos_sync(void) { } 673 + #endif /* !CONFIG_GENERIC_CMOS_UPDATE) || defined(CONFIG_RTC_SYSTOHC) */ 642 674 643 675 /* 644 676 * Propagate a new txc->status value into the NTP state: ··· 1092 1044 void __init ntp_init(void) 1093 1045 { 1094 1046 ntp_clear(); 1047 + ntp_init_cmos_sync(); 1095 1048 }
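The rearm path above aligns the hrtimer expiry so that the next attempt lands offset_nsec before a full second. Below is a small standalone computation of the two expiry cases (periodic and retry), assuming the legacy 500 ms MC146818 offset from the same patch and a made-up current time:

    #include <stdio.h>

    #define NSEC_PER_SEC   1000000000LL
    #define SYNC_PERIOD_NS (11LL * 60 * NSEC_PER_SEC)   /* ~11 minutes, as in the patch */

    int main(void)
    {
            long long now_sec = 1000;                    /* illustrative CLOCK_REALTIME seconds */
            long long offset_nsec = NSEC_PER_SEC / 2;    /* legacy 500 ms update offset */

            /* Periodic case: one sync period after the current full second,
             * shifted back by the RTC update offset so the write lands at tsched. */
            long long exp = now_sec * NSEC_PER_SEC + (SYNC_PERIOD_NS - offset_nsec);

            /* Retry case: try again roughly two seconds later, same alignment. */
            long long retry = now_sec * NSEC_PER_SEC + (2 * NSEC_PER_SEC - offset_nsec);

            printf("next sync at %lld.%09lld, retry at %lld.%09lld\n",
                   exp / NSEC_PER_SEC, exp % NSEC_PER_SEC,
                   retry / NSEC_PER_SEC, retry % NSEC_PER_SEC);
            return 0;
    }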
+7
kernel/time/ntp_internal.h
··· 12 12 const struct timespec64 *ts, 13 13 s32 *time_tai, struct audit_ntp_data *ad); 14 14 extern void __hardpps(const struct timespec64 *phase_ts, const struct timespec64 *raw_ts); 15 + 16 + #if defined(CONFIG_GENERIC_CMOS_UPDATE) || defined(CONFIG_RTC_SYSTOHC) 17 + extern void ntp_notify_cmos_timer(void); 18 + #else 19 + static inline void ntp_notify_cmos_timer(void) { } 20 + #endif 21 + 15 22 #endif /* _LINUX_NTP_INTERNAL_H */
+21 -4
kernel/time/tick-broadcast.c
··· 331 331 bc_local = tick_do_periodic_broadcast(); 332 332 333 333 if (clockevent_state_oneshot(dev)) { 334 - ktime_t next = ktime_add(dev->next_event, tick_period); 334 + ktime_t next = ktime_add_ns(dev->next_event, TICK_NSEC); 335 335 336 336 clockevents_program_event(dev, next, true); 337 337 } ··· 877 877 } 878 878 } 879 879 880 + static inline ktime_t tick_get_next_period(void) 881 + { 882 + ktime_t next; 883 + 884 + /* 885 + * Protect against concurrent updates (store /load tearing on 886 + * 32bit). It does not matter if the time is already in the 887 + * past. The broadcast device which is about to be programmed will 888 + * fire in any case. 889 + */ 890 + raw_spin_lock(&jiffies_lock); 891 + next = tick_next_period; 892 + raw_spin_unlock(&jiffies_lock); 893 + return next; 894 + } 895 + 880 896 /** 881 897 * tick_broadcast_setup_oneshot - setup the broadcast device 882 898 */ ··· 921 905 tick_broadcast_oneshot_mask, tmpmask); 922 906 923 907 if (was_periodic && !cpumask_empty(tmpmask)) { 908 + ktime_t nextevt = tick_get_next_period(); 909 + 924 910 clockevents_switch_state(bc, CLOCK_EVT_STATE_ONESHOT); 925 - tick_broadcast_init_next_event(tmpmask, 926 - tick_next_period); 927 - tick_broadcast_set_event(bc, cpu, tick_next_period); 911 + tick_broadcast_init_next_event(tmpmask, nextevt); 912 + tick_broadcast_set_event(bc, cpu, nextevt); 928 913 } else 929 914 bc->next_event = KTIME_MAX; 930 915 } else {
+6 -6
kernel/time/tick-common.c
··· 27 27 */ 28 28 DEFINE_PER_CPU(struct tick_device, tick_cpu_device); 29 29 /* 30 - * Tick next event: keeps track of the tick time 30 + * Tick next event: keeps track of the tick time. It's updated by the 31 + * CPU which handles the tick and protected by jiffies_lock. There is 32 + * no requirement to write hold the jiffies seqcount for it. 31 33 */ 32 34 ktime_t tick_next_period; 33 - ktime_t tick_period; 34 35 35 36 /* 36 37 * tick_do_timer_cpu is a timer core internal variable which holds the CPU NR ··· 89 88 write_seqcount_begin(&jiffies_seq); 90 89 91 90 /* Keep track of the next tick event */ 92 - tick_next_period = ktime_add(tick_next_period, tick_period); 91 + tick_next_period = ktime_add_ns(tick_next_period, TICK_NSEC); 93 92 94 93 do_timer(1); 95 94 write_seqcount_end(&jiffies_seq); ··· 128 127 * Setup the next period for devices, which do not have 129 128 * periodic mode: 130 129 */ 131 - next = ktime_add(next, tick_period); 130 + next = ktime_add_ns(next, TICK_NSEC); 132 131 133 132 if (!clockevents_program_event(dev, next, false)) 134 133 return; ··· 174 173 for (;;) { 175 174 if (!clockevents_program_event(dev, next, false)) 176 175 return; 177 - next = ktime_add(next, tick_period); 176 + next = ktime_add_ns(next, TICK_NSEC); 178 177 } 179 178 } 180 179 } ··· 221 220 tick_do_timer_cpu = cpu; 222 221 223 222 tick_next_period = ktime_get(); 224 - tick_period = NSEC_PER_SEC / HZ; 225 223 #ifdef CONFIG_NO_HZ_FULL 226 224 /* 227 225 * The boot CPU may be nohz_full, in which case set
-1
kernel/time/tick-internal.h
··· 15 15 16 16 DECLARE_PER_CPU(struct tick_device, tick_cpu_device); 17 17 extern ktime_t tick_next_period; 18 - extern ktime_t tick_period; 19 18 extern int tick_do_timer_cpu __read_mostly; 20 19 21 20 extern void tick_setup_periodic(struct clock_event_device *dev, int broadcast);
+93 -43
kernel/time/tick-sched.c
··· 20 20 #include <linux/sched/clock.h> 21 21 #include <linux/sched/stat.h> 22 22 #include <linux/sched/nohz.h> 23 + #include <linux/sched/loadavg.h> 23 24 #include <linux/module.h> 24 25 #include <linux/irq_work.h> 25 26 #include <linux/posix-timers.h> ··· 45 44 46 45 #if defined(CONFIG_NO_HZ_COMMON) || defined(CONFIG_HIGH_RES_TIMERS) 47 46 /* 48 - * The time, when the last jiffy update happened. Protected by jiffies_lock. 47 + * The time, when the last jiffy update happened. Write access must hold 48 + * jiffies_lock and jiffies_seq. tick_nohz_next_event() needs to get a 49 + * consistent view of jiffies and last_jiffies_update. 49 50 */ 50 51 static ktime_t last_jiffies_update; 51 52 ··· 56 53 */ 57 54 static void tick_do_update_jiffies64(ktime_t now) 58 55 { 59 - unsigned long ticks = 0; 60 - ktime_t delta; 56 + unsigned long ticks = 1; 57 + ktime_t delta, nextp; 61 58 62 59 /* 63 - * Do a quick check without holding jiffies_lock: 64 - * The READ_ONCE() pairs with two updates done later in this function. 60 + * 64bit can do a quick check without holding jiffies lock and 61 + * without looking at the sequence count. The smp_load_acquire() 62 + * pairs with the update done later in this function. 63 + * 64 + * 32bit cannot do that because the store of tick_next_period 65 + * consists of two 32bit stores and the first store could move it 66 + * to a random point in the future. 65 67 */ 66 - delta = ktime_sub(now, READ_ONCE(last_jiffies_update)); 67 - if (delta < tick_period) 68 - return; 69 - 70 - /* Reevaluate with jiffies_lock held */ 71 - raw_spin_lock(&jiffies_lock); 72 - write_seqcount_begin(&jiffies_seq); 73 - 74 - delta = ktime_sub(now, last_jiffies_update); 75 - if (delta >= tick_period) { 76 - 77 - delta = ktime_sub(delta, tick_period); 78 - /* Pairs with the lockless read in this function. */ 79 - WRITE_ONCE(last_jiffies_update, 80 - ktime_add(last_jiffies_update, tick_period)); 81 - 82 - /* Slow path for long timeouts */ 83 - if (unlikely(delta >= tick_period)) { 84 - s64 incr = ktime_to_ns(tick_period); 85 - 86 - ticks = ktime_divns(delta, incr); 87 - 88 - /* Pairs with the lockless read in this function. */ 89 - WRITE_ONCE(last_jiffies_update, 90 - ktime_add_ns(last_jiffies_update, 91 - incr * ticks)); 92 - } 93 - do_timer(++ticks); 94 - 95 - /* Keep the tick_next_period variable up to date */ 96 - tick_next_period = ktime_add(last_jiffies_update, tick_period); 68 + if (IS_ENABLED(CONFIG_64BIT)) { 69 + if (ktime_before(now, smp_load_acquire(&tick_next_period))) 70 + return; 97 71 } else { 98 - write_seqcount_end(&jiffies_seq); 72 + unsigned int seq; 73 + 74 + /* 75 + * Avoid contention on jiffies_lock and protect the quick 76 + * check with the sequence count. 77 + */ 78 + do { 79 + seq = read_seqcount_begin(&jiffies_seq); 80 + nextp = tick_next_period; 81 + } while (read_seqcount_retry(&jiffies_seq, seq)); 82 + 83 + if (ktime_before(now, nextp)) 84 + return; 85 + } 86 + 87 + /* Quick check failed, i.e. update is required. */ 88 + raw_spin_lock(&jiffies_lock); 89 + /* 90 + * Reevaluate with the lock held. Another CPU might have done the 91 + * update already. 
92 + */ 93 + if (ktime_before(now, tick_next_period)) { 99 94 raw_spin_unlock(&jiffies_lock); 100 95 return; 101 96 } 97 + 98 + write_seqcount_begin(&jiffies_seq); 99 + 100 + delta = ktime_sub(now, tick_next_period); 101 + if (unlikely(delta >= TICK_NSEC)) { 102 + /* Slow path for long idle sleep times */ 103 + s64 incr = TICK_NSEC; 104 + 105 + ticks += ktime_divns(delta, incr); 106 + 107 + last_jiffies_update = ktime_add_ns(last_jiffies_update, 108 + incr * ticks); 109 + } else { 110 + last_jiffies_update = ktime_add_ns(last_jiffies_update, 111 + TICK_NSEC); 112 + } 113 + 114 + /* Advance jiffies to complete the jiffies_seq protected job */ 115 + jiffies_64 += ticks; 116 + 117 + /* 118 + * Keep the tick_next_period variable up to date. 119 + */ 120 + nextp = ktime_add_ns(last_jiffies_update, TICK_NSEC); 121 + 122 + if (IS_ENABLED(CONFIG_64BIT)) { 123 + /* 124 + * Pairs with smp_load_acquire() in the lockless quick 125 + * check above and ensures that the update to jiffies_64 is 126 + * not reordered vs. the store to tick_next_period, neither 127 + * by the compiler nor by the CPU. 128 + */ 129 + smp_store_release(&tick_next_period, nextp); 130 + } else { 131 + /* 132 + * A plain store is good enough on 32bit as the quick check 133 + * above is protected by the sequence count. 134 + */ 135 + tick_next_period = nextp; 136 + } 137 + 138 + /* 139 + * Release the sequence count. calc_global_load() below is not 140 + * protected by it, but jiffies_lock needs to be held to prevent 141 + * concurrent invocations. 142 + */ 102 143 write_seqcount_end(&jiffies_seq); 144 + 145 + calc_global_load(); 146 + 103 147 raw_spin_unlock(&jiffies_lock); 104 148 update_wall_time(); 105 149 } ··· 711 661 hrtimer_set_expires(&ts->sched_timer, ts->last_tick); 712 662 713 663 /* Forward the time to expire in the future */ 714 - hrtimer_forward(&ts->sched_timer, now, tick_period); 664 + hrtimer_forward(&ts->sched_timer, now, TICK_NSEC); 715 665 716 666 if (ts->nohz_mode == NOHZ_MODE_HIGHRES) { 717 667 hrtimer_start_expires(&ts->sched_timer, ··· 1280 1230 if (unlikely(ts->tick_stopped)) 1281 1231 return; 1282 1232 1283 - hrtimer_forward(&ts->sched_timer, now, tick_period); 1233 + hrtimer_forward(&ts->sched_timer, now, TICK_NSEC); 1284 1234 tick_program_event(hrtimer_get_expires(&ts->sched_timer), 1); 1285 1235 } 1286 1236 ··· 1317 1267 next = tick_init_jiffy_update(); 1318 1268 1319 1269 hrtimer_set_expires(&ts->sched_timer, next); 1320 - hrtimer_forward_now(&ts->sched_timer, tick_period); 1270 + hrtimer_forward_now(&ts->sched_timer, TICK_NSEC); 1321 1271 tick_program_event(hrtimer_get_expires(&ts->sched_timer), 1); 1322 1272 tick_nohz_activate(ts, NOHZ_MODE_LOWRES); 1323 1273 } ··· 1383 1333 if (unlikely(ts->tick_stopped)) 1384 1334 return HRTIMER_NORESTART; 1385 1335 1386 - hrtimer_forward(timer, now, tick_period); 1336 + hrtimer_forward(timer, now, TICK_NSEC); 1387 1337 1388 1338 return HRTIMER_RESTART; 1389 1339 } ··· 1417 1367 1418 1368 /* Offset the tick to avert jiffies_lock contention. 
*/ 1419 1369 if (sched_skew_tick) { 1420 - u64 offset = ktime_to_ns(tick_period) >> 1; 1370 + u64 offset = TICK_NSEC >> 1; 1421 1371 do_div(offset, num_possible_cpus()); 1422 1372 offset *= smp_processor_id(); 1423 1373 hrtimer_add_expires_ns(&ts->sched_timer, offset); 1424 1374 } 1425 1375 1426 - hrtimer_forward(&ts->sched_timer, now, tick_period); 1376 + hrtimer_forward(&ts->sched_timer, now, TICK_NSEC); 1427 1377 hrtimer_start_expires(&ts->sched_timer, HRTIMER_MODE_ABS_PINNED_HARD); 1428 1378 tick_nohz_activate(ts, NOHZ_MODE_HIGHRES); 1429 1379 }
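The reworked tick_do_update_jiffies64() above is built around a lockless quick check: on 64-bit the reader does an smp_load_acquire() of tick_next_period which pairs with the smp_store_release() in the updater, while 32-bit falls back to the jiffies seqcount because a 64-bit ktime_t store is not a single atomic store there. Only when the quick check fails is jiffies_lock taken, and the deadline is re-evaluated under the lock because another CPU may have won the race. A minimal user-space analogue of that quick-check/recheck pattern, using C11 atomics and a pthread mutex in place of the kernel primitives (all names are illustrative, not kernel APIs):

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdint.h>

        #define PERIOD_NS 1000000ull            /* stand-in for TICK_NSEC */

        static pthread_mutex_t period_lock = PTHREAD_MUTEX_INITIALIZER;
        static _Atomic uint64_t next_period;    /* stand-in for tick_next_period */
        static uint64_t jiffies_ctr;            /* protected by period_lock */

        static void update_jiffies(uint64_t now)
        {
                uint64_t next, ticks = 1;

                /* Lockless quick check, pairs with the release store below. */
                if (now < atomic_load_explicit(&next_period, memory_order_acquire))
                        return;

                pthread_mutex_lock(&period_lock);

                /* Re-evaluate under the lock, another thread may have updated. */
                next = atomic_load_explicit(&next_period, memory_order_relaxed);
                if (now < next) {
                        pthread_mutex_unlock(&period_lock);
                        return;
                }

                /* Slow path for long gaps: account all elapsed periods at once. */
                if (now - next >= PERIOD_NS)
                        ticks += (now - next) / PERIOD_NS;

                jiffies_ctr += ticks;

                /* Publish the new deadline only after the counter is updated. */
                atomic_store_explicit(&next_period, next + ticks * PERIOD_NS,
                                      memory_order_release);

                pthread_mutex_unlock(&period_lock);
        }

The release store matters: any reader whose acquire load observes the new deadline is guaranteed to also observe the counter update that preceded it, which is what makes skipping the lock on the fast path safe.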
+3 -3
kernel/time/timeconv.c
··· 70 70 /** 71 71 * time64_to_tm - converts the calendar time to local broken-down time 72 72 * 73 - * @totalsecs the number of seconds elapsed since 00:00:00 on January 1, 1970, 73 + * @totalsecs: the number of seconds elapsed since 00:00:00 on January 1, 1970, 74 74 * Coordinated Universal Time (UTC). 75 - * @offset offset seconds adding to totalsecs. 76 - * @result pointer to struct tm variable to receive broken-down time 75 + * @offset: offset seconds adding to totalsecs. 76 + * @result: pointer to struct tm variable to receive broken-down time 77 77 */ 78 78 void time64_to_tm(time64_t totalsecs, int offset, struct tm *result) 79 79 {
+49 -36
kernel/time/timekeeping.c
··· 407 407 /** 408 408 * update_fast_timekeeper - Update the fast and NMI safe monotonic timekeeper. 409 409 * @tkr: Timekeeping readout base from which we take the update 410 + * @tkf: Pointer to NMI safe timekeeper 410 411 * 411 412 * We want to use this from any context including NMI and tracing / 412 413 * instrumenting the timekeeping code itself. ··· 437 436 memcpy(base + 1, base, sizeof(*base)); 438 437 } 439 438 439 + static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf) 440 + { 441 + struct tk_read_base *tkr; 442 + unsigned int seq; 443 + u64 now; 444 + 445 + do { 446 + seq = raw_read_seqcount_latch(&tkf->seq); 447 + tkr = tkf->base + (seq & 0x01); 448 + now = ktime_to_ns(tkr->base); 449 + 450 + now += timekeeping_delta_to_ns(tkr, 451 + clocksource_delta( 452 + tk_clock_read(tkr), 453 + tkr->cycle_last, 454 + tkr->mask)); 455 + } while (read_seqcount_latch_retry(&tkf->seq, seq)); 456 + 457 + return now; 458 + } 459 + 440 460 /** 441 461 * ktime_get_mono_fast_ns - Fast NMI safe access to clock monotonic 442 462 * ··· 484 462 * 485 463 * So reader 6 will observe time going backwards versus reader 5. 486 464 * 487 - * While other CPUs are likely to be able observe that, the only way 465 + * While other CPUs are likely to be able to observe that, the only way 488 466 * for a CPU local observation is when an NMI hits in the middle of 489 467 * the update. Timestamps taken from that NMI context might be ahead 490 468 * of the following timestamps. Callers need to be aware of that and 491 469 * deal with it. 492 470 */ 493 - static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf) 494 - { 495 - struct tk_read_base *tkr; 496 - unsigned int seq; 497 - u64 now; 498 - 499 - do { 500 - seq = raw_read_seqcount_latch(&tkf->seq); 501 - tkr = tkf->base + (seq & 0x01); 502 - now = ktime_to_ns(tkr->base); 503 - 504 - now += timekeeping_delta_to_ns(tkr, 505 - clocksource_delta( 506 - tk_clock_read(tkr), 507 - tkr->cycle_last, 508 - tkr->mask)); 509 - } while (read_seqcount_latch_retry(&tkf->seq, seq)); 510 - 511 - return now; 512 - } 513 - 514 471 u64 ktime_get_mono_fast_ns(void) 515 472 { 516 473 return __ktime_get_fast_ns(&tk_fast_mono); 517 474 } 518 475 EXPORT_SYMBOL_GPL(ktime_get_mono_fast_ns); 519 476 477 + /** 478 + * ktime_get_raw_fast_ns - Fast NMI safe access to clock monotonic raw 479 + * 480 + * Contrary to ktime_get_mono_fast_ns() this is always correct because the 481 + * conversion factor is not affected by NTP/PTP correction. 482 + */ 520 483 u64 ktime_get_raw_fast_ns(void) 521 484 { 522 485 return __ktime_get_fast_ns(&tk_fast_raw); ··· 528 521 * (2) On 32-bit systems, the 64-bit boot offset (tk->offs_boot) may be 529 522 * partially updated. Since the tk->offs_boot update is a rare event, this 530 523 * should be a rare occurrence which postprocessing should be able to handle. 524 + * 525 + * The caveats vs. timestamp ordering as documented for ktime_get_fast_ns() 526 + * apply as well. 531 527 */ 532 528 u64 notrace ktime_get_boot_fast_ns(void) 533 529 { ··· 540 530 } 541 531 EXPORT_SYMBOL_GPL(ktime_get_boot_fast_ns); 542 532 543 - /* 544 - * See comment for __ktime_get_fast_ns() vs. timestamp ordering 545 - */ 546 533 static __always_inline u64 __ktime_get_real_fast(struct tk_fast *tkf, u64 *mono) 547 534 { 548 535 struct tk_read_base *tkr; ··· 564 557 565 558 /** 566 559 * ktime_get_real_fast_ns: - NMI safe and fast access to clock realtime. 560 + * 561 + * See ktime_get_fast_ns() for documentation of the time stamp ordering. 
567 562 */ 568 563 u64 ktime_get_real_fast_ns(void) 569 564 { ··· 663 654 664 655 /** 665 656 * pvclock_gtod_register_notifier - register a pvclock timedata update listener 657 + * @nb: Pointer to the notifier block to register 666 658 */ 667 659 int pvclock_gtod_register_notifier(struct notifier_block *nb) 668 660 { ··· 683 673 /** 684 674 * pvclock_gtod_unregister_notifier - unregister a pvclock 685 675 * timedata update listener 676 + * @nb: Pointer to the notifier block to unregister 686 677 */ 687 678 int pvclock_gtod_unregister_notifier(struct notifier_block *nb) 688 679 { ··· 774 763 775 764 /** 776 765 * timekeeping_forward_now - update clock to the current time 766 + * @tk: Pointer to the timekeeper to update 777 767 * 778 768 * Forward the current clock to update its state since the last call to 779 769 * update_wall_time(). This is useful before significant clock changes, ··· 1351 1339 1352 1340 /** 1353 1341 * timekeeping_inject_offset - Adds or subtracts from the current time. 1354 - * @tv: pointer to the timespec variable containing the offset 1342 + * @ts: Pointer to the timespec variable containing the offset 1355 1343 * 1356 1344 * Adds or subtracts an offset value from the current time. 1357 1345 */ ··· 1427 1415 } 1428 1416 } 1429 1417 1430 - /** 1418 + /* 1431 1419 * __timekeeping_set_tai_offset - Sets the TAI offset from UTC and monotonic 1432 - * 1433 1420 */ 1434 1421 static void __timekeeping_set_tai_offset(struct timekeeper *tk, s32 tai_offset) 1435 1422 { ··· 1436 1425 tk->offs_tai = ktime_add(tk->offs_real, ktime_set(tai_offset, 0)); 1437 1426 } 1438 1427 1439 - /** 1428 + /* 1440 1429 * change_clocksource - Swaps clocksources if a new one is available 1441 1430 * 1442 1431 * Accumulates current time interval and initializes new clocksource ··· 1559 1548 1560 1549 /** 1561 1550 * read_persistent_clock64 - Return time from the persistent clock. 1551 + * @ts: Pointer to the storage for the readout value 1562 1552 * 1563 1553 * Weak dummy function for arches that do not yet support it. 1564 1554 * Reads the time from the battery backed persistent clock. ··· 1578 1566 * from the boot. 1579 1567 * 1580 1568 * Weak dummy function for arches that do not yet support it. 1581 - * wall_time - current time as returned by persistent clock 1582 - * boot_offset - offset that is defined as wall_time - boot_time 1569 + * @wall_time: - current time as returned by persistent clock 1570 + * @boot_offset: - offset that is defined as wall_time - boot_time 1571 + * 1583 1572 * The default function calculates offset based on the current value of 1584 1573 * local_clock(). This way architectures that support sched_clock() but don't 1585 1574 * support dedicated boot time clock will provide the best estimate of the ··· 1665 1652 1666 1653 /** 1667 1654 * __timekeeping_inject_sleeptime - Internal function to add sleep interval 1668 - * @delta: pointer to a timespec delta value 1655 + * @tk: Pointer to the timekeeper to be updated 1656 + * @delta: Pointer to the delta value in timespec64 format 1669 1657 * 1670 1658 * Takes a timespec offset measuring a suspend interval and properly 1671 1659 * adds the sleep offset to the timekeeping variables. ··· 2037 2023 } 2038 2024 } 2039 2025 2040 - /** 2026 + /* 2041 2027 * accumulate_nsecs_to_secs - Accumulates nsecs into secs 2042 2028 * 2043 2029 * Helper function that accumulates the nsecs greater than a second 2044 2030 * from the xtime_nsec field to the xtime_secs field. 
2045 2031 * It also calls into the NTP code to handle leapsecond processing. 2046 - * 2047 2032 */ 2048 2033 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk) 2049 2034 { ··· 2084 2071 return clock_set; 2085 2072 } 2086 2073 2087 - /** 2074 + /* 2088 2075 * logarithmic_accumulation - shifted accumulation of cycles 2089 2076 * 2090 2077 * This functions accumulates a shifted interval of cycles into ··· 2327 2314 return base; 2328 2315 } 2329 2316 2330 - /** 2317 + /* 2331 2318 * timekeeping_validate_timex - Ensures the timex is ok for use in do_adjtimex 2332 2319 */ 2333 2320 static int timekeeping_validate_timex(const struct __kernel_timex *txc)
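The __ktime_get_fast_ns() reader shown above (now placed ahead of the kernel-doc block describing its ordering caveats) is built on a latch sequence counter: the timekeeping core keeps two copies of the readout base and flips a sequence count around each update, so NMI-safe readers never take a lock and only retry when they raced with a flip. A rough user-space sketch of that latch read pattern, kept deliberately simple with sequentially consistent C11 atomics where the kernel uses finer-grained barriers; the names and the base + cycles * rate conversion are placeholders, not the real timekeeper math:

        #include <stdatomic.h>
        #include <stdint.h>

        struct tk_copy {
                _Atomic uint64_t base_ns;       /* stand-in for tkr->base */
                _Atomic uint64_t ns_per_cycle;  /* grossly simplified conversion */
        };

        static _Atomic unsigned int latch_seq;
        static struct tk_copy copies[2];

        /* Writer side: updates are serialized externally by the caller. */
        static void latch_publish(uint64_t base_ns, uint64_t ns_per_cycle)
        {
                for (int i = 0; i < 2; i++) {
                        /* Flip readers onto the copy that is not being written. */
                        unsigned int seq = atomic_fetch_add(&latch_seq, 1) + 1;
                        struct tk_copy *c = &copies[(seq & 1) ^ 1];

                        atomic_store(&c->base_ns, base_ns);
                        atomic_store(&c->ns_per_cycle, ns_per_cycle);
                }
        }

        /* Reader side: lockless, usable from a signal handler (the NMI analogue). */
        static uint64_t latch_read_ns(uint64_t cycles)
        {
                unsigned int seq;
                uint64_t ns;

                do {
                        seq = atomic_load(&latch_seq);
                        ns = atomic_load(&copies[seq & 1].base_ns) +
                             cycles * atomic_load(&copies[seq & 1].ns_per_cycle);
                } while (seq != atomic_load(&latch_seq));

                return ns;
        }

The retry loop only guards against a concurrent flip; the ordering caveat documented above (an NMI hitting in the middle of an update can return a timestamp ahead of later ones) remains something the callers have to deal with.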
+1 -1
kernel/time/timekeeping.h
··· 26 26 extern void update_wall_time(void); 27 27 28 28 extern raw_spinlock_t jiffies_lock; 29 - extern seqcount_t jiffies_seq; 29 + extern seqcount_raw_spinlock_t jiffies_seq; 30 30 31 31 #define CS_NAME_LEN 32 32 32
+32 -25
kernel/time/timer.c
··· 1283 1283 u32 tf; 1284 1284 1285 1285 tf = READ_ONCE(timer->flags); 1286 - if (!(tf & TIMER_MIGRATING)) { 1286 + if (!(tf & (TIMER_MIGRATING | TIMER_IRQSAFE))) { 1287 1287 struct timer_base *base = get_timer_base(tf); 1288 1288 1289 1289 /* ··· 1366 1366 * could lead to deadlock. 1367 1367 */ 1368 1368 WARN_ON(in_irq() && !(timer->flags & TIMER_IRQSAFE)); 1369 + 1370 + /* 1371 + * Must be able to sleep on PREEMPT_RT because of the slowpath in 1372 + * del_timer_wait_running(). 1373 + */ 1374 + if (IS_ENABLED(CONFIG_PREEMPT_RT) && !(timer->flags & TIMER_IRQSAFE)) 1375 + lockdep_assert_preemption_enabled(); 1369 1376 1370 1377 do { 1371 1378 ret = try_to_del_timer_sync(timer); ··· 1700 1693 } 1701 1694 #endif 1702 1695 1703 - /* 1704 - * Called from the timer interrupt handler to charge one tick to the current 1705 - * process. user_tick is 1 if the tick is user time, 0 for system. 1706 - */ 1707 - void update_process_times(int user_tick) 1708 - { 1709 - struct task_struct *p = current; 1710 - 1711 - PRANDOM_ADD_NOISE(jiffies, user_tick, p, 0); 1712 - 1713 - /* Note: this timer irq context must be accounted for as well. */ 1714 - account_process_tick(p, user_tick); 1715 - run_local_timers(); 1716 - rcu_sched_clock_irq(user_tick); 1717 - #ifdef CONFIG_IRQ_WORK 1718 - if (in_irq()) 1719 - irq_work_tick(); 1720 - #endif 1721 - scheduler_tick(); 1722 - if (IS_ENABLED(CONFIG_POSIX_TIMERS)) 1723 - run_posix_cpu_timers(); 1724 - } 1725 - 1726 1696 /** 1727 1697 * __run_timers - run all expired timers (if any) on this CPU. 1728 1698 * @base: the timer vector to be processed. ··· 1749 1765 /* 1750 1766 * Called by the local, per-CPU timer interrupt on SMP. 1751 1767 */ 1752 - void run_local_timers(void) 1768 + static void run_local_timers(void) 1753 1769 { 1754 1770 struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]); 1755 1771 ··· 1764 1780 return; 1765 1781 } 1766 1782 raise_softirq(TIMER_SOFTIRQ); 1783 + } 1784 + 1785 + /* 1786 + * Called from the timer interrupt handler to charge one tick to the current 1787 + * process. user_tick is 1 if the tick is user time, 0 for system. 1788 + */ 1789 + void update_process_times(int user_tick) 1790 + { 1791 + struct task_struct *p = current; 1792 + 1793 + PRANDOM_ADD_NOISE(jiffies, user_tick, p, 0); 1794 + 1795 + /* Note: this timer irq context must be accounted for as well. */ 1796 + account_process_tick(p, user_tick); 1797 + run_local_timers(); 1798 + rcu_sched_clock_irq(user_tick); 1799 + #ifdef CONFIG_IRQ_WORK 1800 + if (in_irq()) 1801 + irq_work_tick(); 1802 + #endif 1803 + scheduler_tick(); 1804 + if (IS_ENABLED(CONFIG_POSIX_TIMERS)) 1805 + run_posix_cpu_timers(); 1767 1806 } 1768 1807 1769 1808 /*
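Two constraints are worth spelling out for the timer.c hunk above: del_timer_wait_running() no longer waits on the expiry lock for TIMER_IRQSAFE timers, and on PREEMPT_RT del_timer_sync() now asserts that preemption is enabled unless the timer is TIMER_IRQSAFE, because its slow path may have to sleep. A hypothetical driver sketch illustrating the resulting rule (not taken from this series; all names are made up):

        #include <linux/timer.h>

        struct my_dev {
                struct timer_list poll_timer;
        };

        static void my_poll(struct timer_list *t)
        {
                /* periodic work, re-armed with mod_timer() if needed */
        }

        static void my_init(struct my_dev *dev)
        {
                /*
                 * TIMER_IRQSAFE exempts the timer from the new PREEMPT_RT
                 * assertion; a plain timer must only be deleted synchronously
                 * from preemptible context.
                 */
                timer_setup(&dev->poll_timer, my_poll, TIMER_IRQSAFE);
        }

        static void my_teardown(struct my_dev *dev)
        {
                del_timer_sync(&dev->poll_timer);
        }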
+19 -47
kernel/time/timer_list.c
··· 42 42 va_end(args); 43 43 } 44 44 45 - static void print_name_offset(struct seq_file *m, void *sym) 46 - { 47 - char symname[KSYM_NAME_LEN]; 48 - 49 - if (lookup_symbol_name((unsigned long)sym, symname) < 0) 50 - SEQ_printf(m, "<%pK>", sym); 51 - else 52 - SEQ_printf(m, "%s", symname); 53 - } 54 - 55 45 static void 56 46 print_timer(struct seq_file *m, struct hrtimer *taddr, struct hrtimer *timer, 57 47 int idx, u64 now) 58 48 { 59 - SEQ_printf(m, " #%d: ", idx); 60 - print_name_offset(m, taddr); 61 - SEQ_printf(m, ", "); 62 - print_name_offset(m, timer->function); 49 + SEQ_printf(m, " #%d: <%pK>, %ps", idx, taddr, timer->function); 63 50 SEQ_printf(m, ", S:%02x", timer->state); 64 51 SEQ_printf(m, "\n"); 65 52 SEQ_printf(m, " # expires at %Lu-%Lu nsecs [in %Ld to %Ld nsecs]\n", ··· 103 116 104 117 SEQ_printf(m, " .resolution: %u nsecs\n", hrtimer_resolution); 105 118 106 - SEQ_printf(m, " .get_time: "); 107 - print_name_offset(m, base->get_time); 108 - SEQ_printf(m, "\n"); 119 + SEQ_printf(m, " .get_time: %ps\n", base->get_time); 109 120 #ifdef CONFIG_HIGH_RES_TIMERS 110 121 SEQ_printf(m, " .offset: %Lu nsecs\n", 111 122 (unsigned long long) ktime_to_ns(base->offset)); ··· 203 218 SEQ_printf(m, " next_event: %Ld nsecs\n", 204 219 (unsigned long long) ktime_to_ns(dev->next_event)); 205 220 206 - SEQ_printf(m, " set_next_event: "); 207 - print_name_offset(m, dev->set_next_event); 208 - SEQ_printf(m, "\n"); 221 + SEQ_printf(m, " set_next_event: %ps\n", dev->set_next_event); 209 222 210 - if (dev->set_state_shutdown) { 211 - SEQ_printf(m, " shutdown: "); 212 - print_name_offset(m, dev->set_state_shutdown); 213 - SEQ_printf(m, "\n"); 214 - } 223 + if (dev->set_state_shutdown) 224 + SEQ_printf(m, " shutdown: %ps\n", 225 + dev->set_state_shutdown); 215 226 216 - if (dev->set_state_periodic) { 217 - SEQ_printf(m, " periodic: "); 218 - print_name_offset(m, dev->set_state_periodic); 219 - SEQ_printf(m, "\n"); 220 - } 227 + if (dev->set_state_periodic) 228 + SEQ_printf(m, " periodic: %ps\n", 229 + dev->set_state_periodic); 221 230 222 - if (dev->set_state_oneshot) { 223 - SEQ_printf(m, " oneshot: "); 224 - print_name_offset(m, dev->set_state_oneshot); 225 - SEQ_printf(m, "\n"); 226 - } 231 + if (dev->set_state_oneshot) 232 + SEQ_printf(m, " oneshot: %ps\n", 233 + dev->set_state_oneshot); 227 234 228 - if (dev->set_state_oneshot_stopped) { 229 - SEQ_printf(m, " oneshot stopped: "); 230 - print_name_offset(m, dev->set_state_oneshot_stopped); 231 - SEQ_printf(m, "\n"); 232 - } 235 + if (dev->set_state_oneshot_stopped) 236 + SEQ_printf(m, " oneshot stopped: %ps\n", 237 + dev->set_state_oneshot_stopped); 233 238 234 - if (dev->tick_resume) { 235 - SEQ_printf(m, " resume: "); 236 - print_name_offset(m, dev->tick_resume); 237 - SEQ_printf(m, "\n"); 238 - } 239 + if (dev->tick_resume) 240 + SEQ_printf(m, " resume: %ps\n", 241 + dev->tick_resume); 239 242 240 - SEQ_printf(m, " event_handler: "); 241 - print_name_offset(m, dev->event_handler); 243 + SEQ_printf(m, " event_handler: %ps\n", dev->event_handler); 242 244 SEQ_printf(m, "\n"); 243 245 SEQ_printf(m, " retries: %lu\n", dev->retries); 244 246 SEQ_printf(m, "\n");
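The timer_list.c cleanup drops the open-coded print_name_offset()/lookup_symbol_name() helper in favour of the %ps printk format, which resolves a function pointer to its symbol name directly, while %pK is kept only for the timer address, which remains subject to pointer hashing. A minimal illustrative helper, not part of this diff:

        #include <linux/seq_file.h>

        /* Hypothetical helper: print a callback by name instead of by address. */
        static void show_callback(struct seq_file *m, const char *what, void *fn)
        {
                /* %ps prints the symbol name; %pS would add the offset as well. */
                seq_printf(m, "%s: %ps\n", what, fn);
        }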