Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'timers-core-2022-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer updates from Thomas Gleixner:
"Updates for timers, timekeeping and drivers:

Core:

- The timer_shutdown[_sync]() infrastructure:

Tearing down timers can be tedious when there are circular
dependencies on other things which need to be torn down. A prime
example is the combination of a timer and a workqueue, where the
timer schedules work and the work rearms the timer.

What needs to be prevented is that pending work which is drained
via destroy_workqueue() rearms the previously shutdown timer.
Nothing in that shutdown sequence relies on the timer being
functional.

The conclusion was that the semantics of timer_shutdown_sync()
should be:
- timer is not enqueued
- timer callback is not running
- timer cannot be rearmed

Preventing the rearming of shutdown timers is done by discarding
rearm attempts silently.

A warning for the case that a rearm attempt of a shutdown timer is
detected would not be really helpful because it is entirely unclear
how it should be acted upon. The only way to address such a case
would be to add 'if (in_shutdown)' conditionals all over the place.
This is error prone and in most cases of teardown not required at
all.

- The real fix for the bluetooth HCI teardown based on
timer_shutdown_sync().

A larger scale conversion to timer_shutdown_sync() is work in
progress.

- Consolidation of VDSO time namespace helper functions

- Small fixes for timer and timerqueue

Drivers:

- Prevent integer overflow on the XGene-1 TVAL register which causes
a never-ending interrupt storm.

- The usual set of new device tree bindings

- Small fixes and improvements all over the place"

* tag 'timers-core-2022-12-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (34 commits)
dt-bindings: timer: renesas,cmt: Add r8a779g0 CMT support
dt-bindings: timer: renesas,tmu: Add r8a779g0 support
clocksource/drivers/arm_arch_timer: Use kstrtobool() instead of strtobool()
clocksource/drivers/timer-ti-dm: Fix missing clk_disable_unprepare in dmtimer_systimer_init_clock()
clocksource/drivers/timer-ti-dm: Clear settings on probe and free
clocksource/drivers/timer-ti-dm: Make timer_get_irq static
clocksource/drivers/timer-ti-dm: Fix warning for omap_timer_match
clocksource/drivers/arm_arch_timer: Fix XGene-1 TVAL register math error
clocksource/drivers/timer-npcm7xx: Enable timer 1 clock before use
dt-bindings: timer: nuvoton,npcm7xx-timer: Allow specifying all clocks
dt-bindings: timer: rockchip: Add rockchip,rk3128-timer
clockevents: Repair kernel-doc for clockevent_delta2ns()
clocksource/drivers/ingenic-ost: Define pm functions properly in platform_driver struct
clocksource/drivers/sh_cmt: Access registers according to spec
vdso/timens: Refactor copy-pasted find_timens_vvar_page() helper into one copy
Bluetooth: hci_qca: Fix the teardown problem for real
timers: Update the documentation to reflect on the new timer_shutdown() API
timers: Provide timer_shutdown[_sync]()
timers: Add shutdown mechanism to the internal functions
timers: Split [try_to_]del_timer[_sync]() to prepare for shutdown mode
...

+530 -298
+1 -1
Documentation/RCU/Design/Requirements/Requirements.rst
··· 1858 1858 one of its functions results in a segmentation fault. The module-unload 1859 1859 functions must therefore cancel any delayed calls to loadable-module 1860 1860 functions, for example, any outstanding mod_timer() must be dealt 1861 - with via del_timer_sync() or similar. 1861 + with via timer_shutdown_sync() or similar. 1862 1862 1863 1863 Unfortunately, there is no way to cancel an RCU callback; once you 1864 1864 invoke call_rcu(), the callback function is eventually going to be
+1 -1
Documentation/core-api/local_ops.rst
··· 191 191 192 192 static void __exit test_exit(void) 193 193 { 194 - del_timer_sync(&test_timer); 194 + timer_shutdown_sync(&test_timer); 195 195 } 196 196 197 197 module_init(test_init);
+7 -1
Documentation/devicetree/bindings/timer/nuvoton,npcm7xx-timer.yaml
··· 25 25 - description: The timer interrupt of timer 0 26 26 27 27 clocks: 28 - maxItems: 1 28 + items: 29 + - description: The reference clock for timer 0 30 + - description: The reference clock for timer 1 31 + - description: The reference clock for timer 2 32 + - description: The reference clock for timer 3 33 + - description: The reference clock for timer 4 34 + minItems: 1 29 35 30 36 required: 31 37 - compatible
+2
Documentation/devicetree/bindings/timer/renesas,cmt.yaml
··· 102 102 - enum: 103 103 - renesas,r8a779a0-cmt0 # 32-bit CMT0 on R-Car V3U 104 104 - renesas,r8a779f0-cmt0 # 32-bit CMT0 on R-Car S4-8 105 + - renesas,r8a779g0-cmt0 # 32-bit CMT0 on R-Car V4H 105 106 - const: renesas,rcar-gen4-cmt0 # 32-bit CMT0 on R-Car Gen4 106 107 107 108 - items: 108 109 - enum: 109 110 - renesas,r8a779a0-cmt1 # 48-bit CMT on R-Car V3U 110 111 - renesas,r8a779f0-cmt1 # 48-bit CMT on R-Car S4-8 112 + - renesas,r8a779g0-cmt1 # 48-bit CMT on R-Car V4H 111 113 - const: renesas,rcar-gen4-cmt1 # 48-bit CMT on R-Car Gen4 112 114 113 115 reg:
+1
Documentation/devicetree/bindings/timer/renesas,tmu.yaml
··· 38 38 - renesas,tmu-r8a77995 # R-Car D3 39 39 - renesas,tmu-r8a779a0 # R-Car V3U 40 40 - renesas,tmu-r8a779f0 # R-Car S4-8 41 + - renesas,tmu-r8a779g0 # R-Car V4H 41 42 - const: renesas,tmu 42 43 43 44 reg:
+1
Documentation/devicetree/bindings/timer/rockchip,rk-timer.yaml
··· 18 18 - enum: 19 19 - rockchip,rv1108-timer 20 20 - rockchip,rk3036-timer 21 + - rockchip,rk3128-timer 21 22 - rockchip,rk3188-timer 22 23 - rockchip,rk3228-timer 23 24 - rockchip,rk3229-timer
+10 -7
Documentation/kernel-hacking/locking.rst
··· 967 967 968 968 while (list) { 969 969 struct foo *next = list->next; 970 - del_timer(&list->timer); 970 + timer_delete(&list->timer); 971 971 kfree(list); 972 972 list = next; 973 973 } ··· 981 981 the element (which has already been freed!). 982 982 983 983 This can be avoided by checking the result of 984 - del_timer(): if it returns 1, the timer has been deleted. 984 + timer_delete(): if it returns 1, the timer has been deleted. 985 985 If 0, it means (in this case) that it is currently running, so we can 986 986 do:: 987 987 ··· 990 990 991 991 while (list) { 992 992 struct foo *next = list->next; 993 - if (!del_timer(&list->timer)) { 993 + if (!timer_delete(&list->timer)) { 994 994 /* Give timer a chance to delete this */ 995 995 spin_unlock_bh(&list_lock); 996 996 goto retry; ··· 1005 1005 Another common problem is deleting timers which restart themselves (by 1006 1006 calling add_timer() at the end of their timer function). 1007 1007 Because this is a fairly common case which is prone to races, you should 1008 - use del_timer_sync() (``include/linux/timer.h``) to 1009 - handle this case. It returns the number of times the timer had to be 1010 - deleted before we finally stopped it from adding itself back in. 1008 + use timer_delete_sync() (``include/linux/timer.h``) to handle this case. 1009 + 1010 + Before freeing a timer, timer_shutdown() or timer_shutdown_sync() should be 1011 + called which will keep it from being rearmed. Any subsequent attempt to 1012 + rearm the timer will be silently ignored by the core code. 1013 + 1011 1014 1012 1015 Locking Speed 1013 1016 ============= ··· 1338 1335 1339 1336 - kfree() 1340 1337 1341 - - add_timer() and del_timer() 1338 + - add_timer() and timer_delete() 1342 1339 1343 1340 Mutex API reference 1344 1341 ===================
+1 -1
Documentation/timers/hrtimers.rst
··· 118 118 was not really a win, due to the different data structures. Also, the 119 119 hrtimer functions now have clearer behavior and clearer names - such as 120 120 hrtimer_try_to_cancel() and hrtimer_cancel() [which are roughly 121 - equivalent to del_timer() and del_timer_sync()] - so there's no direct 121 + equivalent to timer_delete() and timer_delete_sync()] - so there's no direct 122 122 1:1 mapping between them on the algorithmic level, and thus no real 123 123 potential for code sharing either. 124 124
+6 -8
Documentation/translations/it_IT/kernel-hacking/locking.rst
··· 990 990 991 991 while (list) { 992 992 struct foo *next = list->next; 993 - del_timer(&list->timer); 993 + timer_delete(&list->timer); 994 994 kfree(list); 995 995 list = next; 996 996 } ··· 1003 1003 di eliminare il suo oggetto (che però è già stato eliminato). 1004 1004 1005 1005 Questo può essere evitato controllando il valore di ritorno di 1006 - del_timer(): se ritorna 1, il temporizzatore è stato già 1006 + timer_delete(): se ritorna 1, il temporizzatore è stato già 1007 1007 rimosso. Se 0, significa (in questo caso) che il temporizzatore è in 1008 1008 esecuzione, quindi possiamo fare come segue:: 1009 1009 ··· 1012 1012 1013 1013 while (list) { 1014 1014 struct foo *next = list->next; 1015 - if (!del_timer(&list->timer)) { 1015 + if (!timer_delete(&list->timer)) { 1016 1016 /* Give timer a chance to delete this */ 1017 1017 spin_unlock_bh(&list_lock); 1018 1018 goto retry; ··· 1026 1026 Un altro problema è l'eliminazione dei temporizzatori che si riavviano 1027 1027 da soli (chiamando add_timer() alla fine della loro esecuzione). 1028 1028 Dato che questo è un problema abbastanza comune con una propensione 1029 - alle corse critiche, dovreste usare del_timer_sync() 1030 - (``include/linux/timer.h``) per gestire questo caso. Questa ritorna il 1031 - numero di volte che il temporizzatore è stato interrotto prima che 1032 - fosse in grado di fermarlo senza che si riavviasse. 1029 + alle corse critiche, dovreste usare timer_delete_sync() 1030 + (``include/linux/timer.h``) per gestire questo caso. 1033 1031 1034 1032 Velocità della sincronizzazione 1035 1033 =============================== ··· 1372 1374 1373 1375 - kfree() 1374 1376 1375 - - add_timer() e del_timer() 1377 + - add_timer() e timer_delete() 1376 1378 1377 1379 Riferimento per l'API dei Mutex 1378 1380 ===============================
+1 -1
Documentation/translations/zh_CN/core-api/local_ops.rst
··· 185 185 186 186 static void __exit test_exit(void) 187 187 { 188 - del_timer_sync(&test_timer); 188 + timer_shutdown_sync(&test_timer); 189 189 } 190 190 191 191 module_init(test_init);
+4 -4
arch/arm/mach-spear/time.c
··· 90 90 200, 16, clocksource_mmio_readw_up); 91 91 } 92 92 93 - static inline void timer_shutdown(struct clock_event_device *evt) 93 + static inline void spear_timer_shutdown(struct clock_event_device *evt) 94 94 { 95 95 u16 val = readw(gpt_base + CR(CLKEVT)); 96 96 ··· 101 101 102 102 static int spear_shutdown(struct clock_event_device *evt) 103 103 { 104 - timer_shutdown(evt); 104 + spear_timer_shutdown(evt); 105 105 106 106 return 0; 107 107 } ··· 111 111 u16 val; 112 112 113 113 /* stop the timer */ 114 - timer_shutdown(evt); 114 + spear_timer_shutdown(evt); 115 115 116 116 val = readw(gpt_base + CR(CLKEVT)); 117 117 val |= CTRL_ONE_SHOT; ··· 126 126 u16 val; 127 127 128 128 /* stop the timer */ 129 - timer_shutdown(evt); 129 + spear_timer_shutdown(evt); 130 130 131 131 period = clk_get_rate(gpt_clk) / HZ; 132 132 period >>= CTRL_PRESCALER16;
-22
arch/arm64/kernel/vdso.c
··· 151 151 mmap_read_unlock(mm); 152 152 return 0; 153 153 } 154 - 155 - static struct page *find_timens_vvar_page(struct vm_area_struct *vma) 156 - { 157 - if (likely(vma->vm_mm == current->mm)) 158 - return current->nsproxy->time_ns->vvar_page; 159 - 160 - /* 161 - * VM_PFNMAP | VM_IO protect .fault() handler from being called 162 - * through interfaces like /proc/$pid/mem or 163 - * process_vm_{readv,writev}() as long as there's no .access() 164 - * in special_mapping_vmops. 165 - * For more details check_vma_flags() and __access_remote_vm() 166 - */ 167 - WARN(1, "vvar_page accessed remotely"); 168 - 169 - return NULL; 170 - } 171 - #else 172 - static struct page *find_timens_vvar_page(struct vm_area_struct *vma) 173 - { 174 - return NULL; 175 - } 176 154 #endif 177 155 178 156 static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
-22
arch/powerpc/kernel/vdso.c
··· 129 129 130 130 return 0; 131 131 } 132 - 133 - static struct page *find_timens_vvar_page(struct vm_area_struct *vma) 134 - { 135 - if (likely(vma->vm_mm == current->mm)) 136 - return current->nsproxy->time_ns->vvar_page; 137 - 138 - /* 139 - * VM_PFNMAP | VM_IO protect .fault() handler from being called 140 - * through interfaces like /proc/$pid/mem or 141 - * process_vm_{readv,writev}() as long as there's no .access() 142 - * in special_mapping_vmops. 143 - * For more details check_vma_flags() and __access_remote_vm() 144 - */ 145 - WARN(1, "vvar_page accessed remotely"); 146 - 147 - return NULL; 148 - } 149 - #else 150 - static struct page *find_timens_vvar_page(struct vm_area_struct *vma) 151 - { 152 - return NULL; 153 - } 154 132 #endif 155 133 156 134 static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
-22
arch/riscv/kernel/vdso.c
··· 137 137 mmap_read_unlock(mm); 138 138 return 0; 139 139 } 140 - 141 - static struct page *find_timens_vvar_page(struct vm_area_struct *vma) 142 - { 143 - if (likely(vma->vm_mm == current->mm)) 144 - return current->nsproxy->time_ns->vvar_page; 145 - 146 - /* 147 - * VM_PFNMAP | VM_IO protect .fault() handler from being called 148 - * through interfaces like /proc/$pid/mem or 149 - * process_vm_{readv,writev}() as long as there's no .access() 150 - * in special_mapping_vmops. 151 - * For more details check_vma_flags() and __access_remote_vm() 152 - */ 153 - WARN(1, "vvar_page accessed remotely"); 154 - 155 - return NULL; 156 - } 157 - #else 158 - static struct page *find_timens_vvar_page(struct vm_area_struct *vma) 159 - { 160 - return NULL; 161 - } 162 140 #endif 163 141 164 142 static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
-20
arch/s390/kernel/vdso.c
··· 44 44 return (struct vdso_data *)(vvar_page); 45 45 } 46 46 47 - static struct page *find_timens_vvar_page(struct vm_area_struct *vma) 48 - { 49 - if (likely(vma->vm_mm == current->mm)) 50 - return current->nsproxy->time_ns->vvar_page; 51 - /* 52 - * VM_PFNMAP | VM_IO protect .fault() handler from being called 53 - * through interfaces like /proc/$pid/mem or 54 - * process_vm_{readv,writev}() as long as there's no .access() 55 - * in special_mapping_vmops(). 56 - * For more details check_vma_flags() and __access_remote_vm() 57 - */ 58 - WARN(1, "vvar_page accessed remotely"); 59 - return NULL; 60 - } 61 - 62 47 /* 63 48 * The VVAR page layout depends on whether a task belongs to the root or 64 49 * non-root time namespace. Whenever a task changes its namespace, the VVAR ··· 68 83 } 69 84 mmap_read_unlock(mm); 70 85 return 0; 71 - } 72 - #else 73 - static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma) 74 - { 75 - return NULL; 76 86 } 77 87 #endif 78 88
-23
arch/x86/entry/vdso/vma.c
··· 98 98 } 99 99 100 100 #ifdef CONFIG_TIME_NS 101 - static struct page *find_timens_vvar_page(struct vm_area_struct *vma) 102 - { 103 - if (likely(vma->vm_mm == current->mm)) 104 - return current->nsproxy->time_ns->vvar_page; 105 - 106 - /* 107 - * VM_PFNMAP | VM_IO protect .fault() handler from being called 108 - * through interfaces like /proc/$pid/mem or 109 - * process_vm_{readv,writev}() as long as there's no .access() 110 - * in special_mapping_vmops(). 111 - * For more details check_vma_flags() and __access_remote_vm() 112 - */ 113 - 114 - WARN(1, "vvar_page accessed remotely"); 115 - 116 - return NULL; 117 - } 118 - 119 101 /* 120 102 * The vvar page layout depends on whether a task belongs to the root or 121 103 * non-root time namespace. Whenever a task changes its namespace, the VVAR ··· 121 139 mmap_read_unlock(mm); 122 140 123 141 return 0; 124 - } 125 - #else 126 - static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma) 127 - { 128 - return NULL; 129 142 } 130 143 #endif 131 144
+8 -2
drivers/bluetooth/hci_qca.c
··· 696 696 skb_queue_purge(&qca->tx_wait_q); 697 697 skb_queue_purge(&qca->txq); 698 698 skb_queue_purge(&qca->rx_memdump_q); 699 + /* 700 + * Shut the timers down so they can't be rearmed when 701 + * destroy_workqueue() drains pending work which in turn might try 702 + * to arm a timer. After shutdown rearm attempts are silently 703 + * ignored by the timer core code. 704 + */ 705 + timer_shutdown_sync(&qca->tx_idle_timer); 706 + timer_shutdown_sync(&qca->wake_retrans_timer); 699 707 destroy_workqueue(qca->workqueue); 700 - del_timer_sync(&qca->tx_idle_timer); 701 - del_timer_sync(&qca->wake_retrans_timer); 702 708 qca->hu = NULL; 703 709 704 710 kfree_skb(qca->rx_skb);
+2 -2
drivers/char/tpm/tpm-dev-common.c
··· 155 155 out: 156 156 if (!priv->response_length) { 157 157 *off = 0; 158 - del_singleshot_timer_sync(&priv->user_read_timer); 158 + del_timer_sync(&priv->user_read_timer); 159 159 flush_work(&priv->timeout_work); 160 160 } 161 161 mutex_unlock(&priv->buffer_mutex); ··· 262 262 void tpm_common_release(struct file *file, struct file_priv *priv) 263 263 { 264 264 flush_work(&priv->async_work); 265 - del_singleshot_timer_sync(&priv->user_read_timer); 265 + del_timer_sync(&priv->user_read_timer); 266 266 flush_work(&priv->timeout_work); 267 267 file->private_data = NULL; 268 268 priv->response_length = 0;
+8 -7
drivers/clocksource/arm_arch_timer.c
··· 18 18 #include <linux/clocksource.h> 19 19 #include <linux/clocksource_ids.h> 20 20 #include <linux/interrupt.h> 21 + #include <linux/kstrtox.h> 21 22 #include <linux/of_irq.h> 22 23 #include <linux/of_address.h> 23 24 #include <linux/io.h> ··· 98 97 99 98 static int __init early_evtstrm_cfg(char *buf) 100 99 { 101 - return strtobool(buf, &evtstrm_enable); 100 + return kstrtobool(buf, &evtstrm_enable); 102 101 } 103 102 early_param("clocksource.arm_arch_timer.evtstrm", early_evtstrm_cfg); 104 103 ··· 688 687 return timer_handler(ARCH_TIMER_MEM_VIRT_ACCESS, evt); 689 688 } 690 689 691 - static __always_inline int timer_shutdown(const int access, 692 - struct clock_event_device *clk) 690 + static __always_inline int arch_timer_shutdown(const int access, 691 + struct clock_event_device *clk) 693 692 { 694 693 unsigned long ctrl; 695 694 ··· 702 701 703 702 static int arch_timer_shutdown_virt(struct clock_event_device *clk) 704 703 { 705 - return timer_shutdown(ARCH_TIMER_VIRT_ACCESS, clk); 704 + return arch_timer_shutdown(ARCH_TIMER_VIRT_ACCESS, clk); 706 705 } 707 706 708 707 static int arch_timer_shutdown_phys(struct clock_event_device *clk) 709 708 { 710 - return timer_shutdown(ARCH_TIMER_PHYS_ACCESS, clk); 709 + return arch_timer_shutdown(ARCH_TIMER_PHYS_ACCESS, clk); 711 710 } 712 711 713 712 static int arch_timer_shutdown_virt_mem(struct clock_event_device *clk) 714 713 { 715 - return timer_shutdown(ARCH_TIMER_MEM_VIRT_ACCESS, clk); 714 + return arch_timer_shutdown(ARCH_TIMER_MEM_VIRT_ACCESS, clk); 716 715 } 717 716 718 717 static int arch_timer_shutdown_phys_mem(struct clock_event_device *clk) 719 718 { 720 - return timer_shutdown(ARCH_TIMER_MEM_PHYS_ACCESS, clk); 719 + return arch_timer_shutdown(ARCH_TIMER_MEM_PHYS_ACCESS, clk); 721 720 } 722 721 723 722 static __always_inline void set_next_event(const int access, unsigned long evt,
+4 -6
drivers/clocksource/ingenic-ost.c
··· 141 141 return 0; 142 142 } 143 143 144 - static int __maybe_unused ingenic_ost_suspend(struct device *dev) 144 + static int ingenic_ost_suspend(struct device *dev) 145 145 { 146 146 struct ingenic_ost *ost = dev_get_drvdata(dev); 147 147 ··· 150 150 return 0; 151 151 } 152 152 153 - static int __maybe_unused ingenic_ost_resume(struct device *dev) 153 + static int ingenic_ost_resume(struct device *dev) 154 154 { 155 155 struct ingenic_ost *ost = dev_get_drvdata(dev); 156 156 157 157 return clk_enable(ost->clk); 158 158 } 159 159 160 - static const struct dev_pm_ops __maybe_unused ingenic_ost_pm_ops = { 160 + static const struct dev_pm_ops ingenic_ost_pm_ops = { 161 161 /* _noirq: We want the OST clock to be gated last / ungated first */ 162 162 .suspend_noirq = ingenic_ost_suspend, 163 163 .resume_noirq = ingenic_ost_resume, ··· 181 181 static struct platform_driver ingenic_ost_driver = { 182 182 .driver = { 183 183 .name = "ingenic-ost", 184 - #ifdef CONFIG_PM_SUSPEND 185 - .pm = &ingenic_ost_pm_ops, 186 - #endif 184 + .pm = pm_sleep_ptr(&ingenic_ost_pm_ops), 187 185 .of_match_table = ingenic_ost_of_match, 188 186 }, 189 187 };
+55 -33
drivers/clocksource/sh_cmt.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/interrupt.h> 15 15 #include <linux/io.h> 16 + #include <linux/iopoll.h> 16 17 #include <linux/ioport.h> 17 18 #include <linux/irq.h> 18 19 #include <linux/module.h> ··· 117 116 void __iomem *mapbase; 118 117 struct clk *clk; 119 118 unsigned long rate; 119 + unsigned int reg_delay; 120 120 121 121 raw_spinlock_t lock; /* Protect the shared start/stop register */ 122 122 ··· 249 247 250 248 static inline void sh_cmt_write_cmstr(struct sh_cmt_channel *ch, u32 value) 251 249 { 252 - if (ch->iostart) 253 - ch->cmt->info->write_control(ch->iostart, 0, value); 254 - else 255 - ch->cmt->info->write_control(ch->cmt->mapbase, 0, value); 250 + u32 old_value = sh_cmt_read_cmstr(ch); 251 + 252 + if (value != old_value) { 253 + if (ch->iostart) { 254 + ch->cmt->info->write_control(ch->iostart, 0, value); 255 + udelay(ch->cmt->reg_delay); 256 + } else { 257 + ch->cmt->info->write_control(ch->cmt->mapbase, 0, value); 258 + udelay(ch->cmt->reg_delay); 259 + } 260 + } 256 261 } 257 262 258 263 static inline u32 sh_cmt_read_cmcsr(struct sh_cmt_channel *ch) ··· 269 260 270 261 static inline void sh_cmt_write_cmcsr(struct sh_cmt_channel *ch, u32 value) 271 262 { 272 - ch->cmt->info->write_control(ch->ioctrl, CMCSR, value); 263 + u32 old_value = sh_cmt_read_cmcsr(ch); 264 + 265 + if (value != old_value) { 266 + ch->cmt->info->write_control(ch->ioctrl, CMCSR, value); 267 + udelay(ch->cmt->reg_delay); 268 + } 273 269 } 274 270 275 271 static inline u32 sh_cmt_read_cmcnt(struct sh_cmt_channel *ch) ··· 282 268 return ch->cmt->info->read_count(ch->ioctrl, CMCNT); 283 269 } 284 270 285 - static inline void sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value) 271 + static inline int sh_cmt_write_cmcnt(struct sh_cmt_channel *ch, u32 value) 286 272 { 273 + /* Tests showed that we need to wait 3 clocks here */ 274 + unsigned int cmcnt_delay = DIV_ROUND_UP(3 * ch->cmt->reg_delay, 2); 275 + u32 reg; 276 + 277 + if (ch->cmt->info->model > SH_CMT_16BIT) { 278 + int ret = read_poll_timeout_atomic(sh_cmt_read_cmcsr, reg, 279 + !(reg & SH_CMT32_CMCSR_WRFLG), 280 + 1, cmcnt_delay, false, ch); 281 + if (ret < 0) 282 + return ret; 283 + } 284 + 287 285 ch->cmt->info->write_count(ch->ioctrl, CMCNT, value); 286 + udelay(cmcnt_delay); 287 + return 0; 288 288 } 289 289 290 290 static inline void sh_cmt_write_cmcor(struct sh_cmt_channel *ch, u32 value) 291 291 { 292 - ch->cmt->info->write_count(ch->ioctrl, CMCOR, value); 292 + u32 old_value = ch->cmt->info->read_count(ch->ioctrl, CMCOR); 293 + 294 + if (value != old_value) { 295 + ch->cmt->info->write_count(ch->ioctrl, CMCOR, value); 296 + udelay(ch->cmt->reg_delay); 297 + } 293 298 } 294 299 295 300 static u32 sh_cmt_get_counter(struct sh_cmt_channel *ch, u32 *has_wrapped) ··· 352 319 353 320 static int sh_cmt_enable(struct sh_cmt_channel *ch) 354 321 { 355 - int k, ret; 322 + int ret; 356 323 357 324 dev_pm_syscore_device(&ch->cmt->pdev->dev, true); 358 325 ··· 380 347 } 381 348 382 349 sh_cmt_write_cmcor(ch, 0xffffffff); 383 - sh_cmt_write_cmcnt(ch, 0); 350 + ret = sh_cmt_write_cmcnt(ch, 0); 384 351 385 - /* 386 - * According to the sh73a0 user's manual, as CMCNT can be operated 387 - * only by the RCLK (Pseudo 32 kHz), there's one restriction on 388 - * modifying CMCNT register; two RCLK cycles are necessary before 389 - * this register is either read or any modification of the value 390 - * it holds is reflected in the LSI's actual operation. 391 - * 392 - * While at it, we're supposed to clear out the CMCNT as of this 393 - * moment, so make sure it's processed properly here. This will 394 - * take RCLKx2 at maximum. 395 - */ 396 - for (k = 0; k < 100; k++) { 397 - if (!sh_cmt_read_cmcnt(ch)) 398 - break; 399 - udelay(1); 400 - } 401 - 402 - if (sh_cmt_read_cmcnt(ch)) { 352 + if (ret || sh_cmt_read_cmcnt(ch)) { 403 353 dev_err(&ch->cmt->pdev->dev, "ch%u: cannot clear CMCNT\n", 404 354 ch->index); 405 355 ret = -ETIMEDOUT; ··· 1011 995 1012 996 static int sh_cmt_setup(struct sh_cmt_device *cmt, struct platform_device *pdev) 1013 997 { 1014 - unsigned int mask; 1015 - unsigned int i; 998 + unsigned int mask, i; 999 + unsigned long rate; 1016 1000 int ret; 1017 1001 1018 1002 cmt->pdev = pdev; ··· 1048 1032 if (ret < 0) 1049 1033 goto err_clk_unprepare; 1050 1034 1051 - if (cmt->info->width == 16) 1052 - cmt->rate = clk_get_rate(cmt->clk) / 512; 1053 - else 1054 - cmt->rate = clk_get_rate(cmt->clk) / 8; 1035 + rate = clk_get_rate(cmt->clk); 1036 + if (!rate) { 1037 + ret = -EINVAL; 1038 + goto err_clk_disable; 1039 + } 1040 + 1041 + /* We shall wait 2 input clks after register writes */ 1042 + if (cmt->info->model >= SH_CMT_48BIT) 1043 + cmt->reg_delay = DIV_ROUND_UP(2UL * USEC_PER_SEC, rate); 1044 + cmt->rate = rate / (cmt->info->width == 16 ? 512 : 8); 1055 1045 1056 1046 /* Map the memory resource(s). */ 1057 1047 ret = sh_cmt_map_memory(cmt);
+10
drivers/clocksource/timer-npcm7xx.c
··· 188 188 189 189 static int __init npcm7xx_timer_init(struct device_node *np) 190 190 { 191 + struct clk *clk; 191 192 int ret; 192 193 193 194 ret = timer_of_init(np, &npcm7xx_to); ··· 199 198 /* to the counter */ 200 199 npcm7xx_to.of_clk.rate = npcm7xx_to.of_clk.rate / 201 200 (NPCM7XX_Tx_MIN_PRESCALE + 1); 201 + 202 + /* Enable the clock for timer1, if it exists */ 203 + clk = of_clk_get(np, 1); 204 + if (clk) { 205 + if (!IS_ERR(clk)) 206 + clk_prepare_enable(clk); 207 + else 208 + pr_warn("%pOF: Failed to get clock for timer1: %pe", np, clk); 209 + } 202 210 203 211 npcm7xx_clocksource_init(); 204 212 npcm7xx_clockevents_init();
+3 -3
drivers/clocksource/timer-sp804.c
··· 155 155 return IRQ_HANDLED; 156 156 } 157 157 158 - static inline void timer_shutdown(struct clock_event_device *evt) 158 + static inline void evt_timer_shutdown(struct clock_event_device *evt) 159 159 { 160 160 writel(0, common_clkevt->ctrl); 161 161 } 162 162 163 163 static int sp804_shutdown(struct clock_event_device *evt) 164 164 { 165 - timer_shutdown(evt); 165 + evt_timer_shutdown(evt); 166 166 return 0; 167 167 } 168 168 ··· 171 171 unsigned long ctrl = TIMER_CTRL_32BIT | TIMER_CTRL_IE | 172 172 TIMER_CTRL_PERIODIC | TIMER_CTRL_ENABLE; 173 173 174 - timer_shutdown(evt); 174 + evt_timer_shutdown(evt); 175 175 writel(common_clkevt->reload, common_clkevt->load); 176 176 writel(ctrl, common_clkevt->ctrl); 177 177 return 0;
+3 -1
drivers/clocksource/timer-ti-dm-systimer.c
··· 345 345 return error; 346 346 347 347 r = clk_get_rate(clock); 348 - if (!r) 348 + if (!r) { 349 + clk_disable_unprepare(clock); 349 350 return -ENODEV; 351 + } 350 352 351 353 if (is_ick) 352 354 t->ick = clock;
+19 -2
drivers/clocksource/timer-ti-dm.c
··· 633 633 static int omap_dm_timer_free(struct omap_dm_timer *cookie) 634 634 { 635 635 struct dmtimer *timer; 636 + struct device *dev; 637 + int rc; 636 638 637 639 timer = to_dmtimer(cookie); 638 640 if (unlikely(!timer)) ··· 642 640 643 641 WARN_ON(!timer->reserved); 644 642 timer->reserved = 0; 643 + 644 + dev = &timer->pdev->dev; 645 + rc = pm_runtime_resume_and_get(dev); 646 + if (rc) 647 + return rc; 648 + 649 + /* Clear timer configuration */ 650 + dmtimer_write(timer, OMAP_TIMER_CTRL_REG, 0); 651 + 652 + pm_runtime_put_sync(dev); 653 + 645 654 return 0; 646 655 } 647 656 648 - int omap_dm_timer_get_irq(struct omap_dm_timer *cookie) 657 + static int omap_dm_timer_get_irq(struct omap_dm_timer *cookie) 649 658 { 650 659 struct dmtimer *timer = to_dmtimer(cookie); 651 660 if (timer) ··· 1148 1135 goto err_disable; 1149 1136 } 1150 1137 __omap_dm_timer_init_regs(timer); 1138 + 1139 + /* Clear timer configuration */ 1140 + dmtimer_write(timer, OMAP_TIMER_CTRL_REG, 0); 1141 + 1151 1142 pm_runtime_put(dev); 1152 1143 } 1153 1144 ··· 1275 1258 .remove = omap_dm_timer_remove, 1276 1259 .driver = { 1277 1260 .name = "omap_timer", 1278 - .of_match_table = of_match_ptr(omap_timer_match), 1261 + .of_match_table = omap_timer_match, 1279 1262 .pm = &omap_dm_timer_pm_ops, 1280 1263 }, 1281 1264 };
+2 -2
drivers/staging/wlan-ng/hfa384x_usb.c
··· 1116 1116 if (ctlx == get_active_ctlx(hw)) { 1117 1117 spin_unlock_irqrestore(&hw->ctlxq.lock, flags); 1118 1118 1119 - del_singleshot_timer_sync(&hw->reqtimer); 1120 - del_singleshot_timer_sync(&hw->resptimer); 1119 + del_timer_sync(&hw->reqtimer); 1120 + del_timer_sync(&hw->resptimer); 1121 1121 hw->req_timer_done = 1; 1122 1122 hw->resp_timer_done = 1; 1123 1123 usb_kill_urb(&hw->ctlx_urb);
+3 -3
drivers/staging/wlan-ng/prism2usb.c
··· 170 170 */ 171 171 prism2sta_ifstate(wlandev, P80211ENUM_ifstate_disable); 172 172 173 - del_singleshot_timer_sync(&hw->throttle); 174 - del_singleshot_timer_sync(&hw->reqtimer); 175 - del_singleshot_timer_sync(&hw->resptimer); 173 + del_timer_sync(&hw->throttle); 174 + del_timer_sync(&hw->reqtimer); 175 + del_timer_sync(&hw->resptimer); 176 176 177 177 /* Unlink all the URBs. This "removes the wheels" 178 178 * from the entire CTLX handling mechanism.
-2
include/clocksource/timer-ti-dm.h
··· 62 62 struct omap_dm_timer { 63 63 }; 64 64 65 - int omap_dm_timer_get_irq(struct omap_dm_timer *timer); 66 - 67 65 u32 omap_dm_timer_modify_idlect_mask(u32 inputmask); 68 66 69 67 /*
+6
include/linux/time_namespace.h
··· 45 45 void free_time_ns(struct time_namespace *ns); 46 46 void timens_on_fork(struct nsproxy *nsproxy, struct task_struct *tsk); 47 47 struct vdso_data *arch_get_vdso_data(void *vvar_page); 48 + struct page *find_timens_vvar_page(struct vm_area_struct *vma); 48 49 49 50 static inline void put_time_ns(struct time_namespace *ns) 50 51 { ··· 140 139 struct task_struct *tsk) 141 140 { 142 141 return; 142 + } 143 + 144 + static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma) 145 + { 146 + return NULL; 143 147 } 144 148 145 149 static inline void timens_add_monotonic(struct timespec64 *ts) { }
+28 -7
include/linux/timer.h
··· 169 169 } 170 170 171 171 extern void add_timer_on(struct timer_list *timer, int cpu); 172 - extern int del_timer(struct timer_list * timer); 173 172 extern int mod_timer(struct timer_list *timer, unsigned long expires); 174 173 extern int mod_timer_pending(struct timer_list *timer, unsigned long expires); 175 174 extern int timer_reduce(struct timer_list *timer, unsigned long expires); ··· 182 183 extern void add_timer(struct timer_list *timer); 183 184 184 185 extern int try_to_del_timer_sync(struct timer_list *timer); 186 + extern int timer_delete_sync(struct timer_list *timer); 187 + extern int timer_delete(struct timer_list *timer); 188 + extern int timer_shutdown_sync(struct timer_list *timer); 189 + extern int timer_shutdown(struct timer_list *timer); 185 190 186 - #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT) 187 - extern int del_timer_sync(struct timer_list *timer); 188 - #else 189 - # define del_timer_sync(t) del_timer(t) 190 - #endif 191 + /** 192 + * del_timer_sync - Delete a pending timer and wait for a running callback 193 + * @timer: The timer to be deleted 194 + * 195 + * See timer_delete_sync() for detailed explanation. 196 + * 197 + * Do not use in new code. Use timer_delete_sync() instead. 198 + */ 199 + static inline int del_timer_sync(struct timer_list *timer) 200 + { 201 + return timer_delete_sync(timer); 202 + } 191 203 192 - #define del_singleshot_timer_sync(t) del_timer_sync(t) 204 + /** 205 + * del_timer - Delete a pending timer 206 + * @timer: The timer to be deleted 207 + * 208 + * See timer_delete() for detailed explanation. 209 + * 210 + * Do not use in new code. Use timer_delete() instead. 211 + */ 212 + static inline int del_timer(struct timer_list *timer) 213 + { 214 + return timer_delete(timer); 215 + } 193 216 194 217 extern void init_timers(void); 195 218 struct hrtimer;
+1 -1
include/linux/timerqueue.h
··· 35 35 { 36 36 struct rb_node *leftmost = rb_first_cached(&head->rb_root); 37 37 38 - return rb_entry(leftmost, struct timerqueue_node, node); 38 + return rb_entry_safe(leftmost, struct timerqueue_node, node); 39 39 } 40 40 41 41 static inline void timerqueue_init(struct timerqueue_node *node)
+1 -1
kernel/time/clockevents.c
··· 76 76 } 77 77 78 78 /** 79 - * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds 79 + * clockevent_delta2ns - Convert a latch value (device ticks) to nanoseconds 80 80 * @latch: value to convert 81 81 * @evt: pointer to clock event device descriptor 82 82 *
+18
kernel/time/namespace.c
··· 192 192 offset[CLOCK_BOOTTIME_ALARM] = boottime; 193 193 } 194 194 195 + struct page *find_timens_vvar_page(struct vm_area_struct *vma) 196 + { 197 + if (likely(vma->vm_mm == current->mm)) 198 + return current->nsproxy->time_ns->vvar_page; 199 + 200 + /* 201 + * VM_PFNMAP | VM_IO protect the .fault() handler from being called 202 + * through interfaces like /proc/$pid/mem or 203 + * process_vm_{readv,writev}() as long as there's no .access() 204 + * in special_mapping_vmops(). 205 + * For more details see check_vma_flags() and __access_remote_vm() 206 + */ 207 + 208 + WARN(1, "vvar_page accessed remotely"); 209 + 210 + return NULL; 211 + } 212 + 195 213 /* 196 214 * Protects possibly multiple offsets writers racing each other 197 215 * and tasks entering the namespace.
+323 -92
kernel/time/timer.c
··· 1017 1017 unsigned int idx = UINT_MAX; 1018 1018 int ret = 0; 1019 1019 1020 - BUG_ON(!timer->function); 1020 + debug_assert_init(timer); 1021 1021 1022 1022 /* 1023 1023 * This is a common optimization triggered by the networking code - if ··· 1044 1044 * dequeue/enqueue dance. 1045 1045 */ 1046 1046 base = lock_timer_base(timer, &flags); 1047 + /* 1048 + * Has @timer been shutdown? This needs to be evaluated 1049 + * while holding base lock to prevent a race against the 1050 + * shutdown code. 1051 + */ 1052 + if (!timer->function) 1053 + goto out_unlock; 1054 + 1047 1055 forward_timer_base(base); 1048 1056 1049 1057 if (timer_pending(timer) && (options & MOD_TIMER_REDUCE) && ··· 1078 1070 } 1079 1071 } else { 1080 1072 base = lock_timer_base(timer, &flags); 1073 + /* 1074 + * Has @timer been shutdown? This needs to be evaluated 1075 + * while holding base lock to prevent a race against the 1076 + * shutdown code. 1077 + */ 1078 + if (!timer->function) 1079 + goto out_unlock; 1080 + 1081 1081 forward_timer_base(base); 1082 1082 } 1083 1083 ··· 1099 1083 /* 1100 1084 * We are trying to schedule the timer on the new base. 1101 1085 * However we can't change timer's base while it is running, 1102 - * otherwise del_timer_sync() can't detect that the timer's 1086 + * otherwise timer_delete_sync() can't detect that the timer's 1103 1087 * handler yet has not finished. This also guarantees that the 1104 1088 * timer is serialized wrt itself. 1105 1089 */ ··· 1137 1121 } 1138 1122 1139 1123 /** 1140 - * mod_timer_pending - modify a pending timer's timeout 1141 - * @timer: the pending timer to be modified 1142 - * @expires: new timeout in jiffies 1124 + * mod_timer_pending - Modify a pending timer's timeout 1125 + * @timer: The pending timer to be modified 1126 + * @expires: New absolute timeout in jiffies 1143 1127 * 1144 - * mod_timer_pending() is the same for pending timers as mod_timer(), 1145 - * but will not re-activate and modify already deleted timers. 
1128 + * mod_timer_pending() is the same for pending timers as mod_timer(), but 1129 + * will not activate inactive timers. 1146 1130 * 1147 - * It is useful for unserialized use of timers. 1131 + * If @timer->function == NULL then the start operation is silently 1132 + * discarded. 1133 + * 1134 + * Return: 1135 + * * %0 - The timer was inactive and not modified or was in 1136 + * shutdown state and the operation was discarded 1137 + * * %1 - The timer was active and requeued to expire at @expires 1148 1138 */ 1149 1139 int mod_timer_pending(struct timer_list *timer, unsigned long expires) 1150 1140 { ··· 1159 1137 EXPORT_SYMBOL(mod_timer_pending); 1160 1138 1161 1139 /** 1162 - * mod_timer - modify a timer's timeout 1163 - * @timer: the timer to be modified 1164 - * @expires: new timeout in jiffies 1165 - * 1166 - * mod_timer() is a more efficient way to update the expire field of an 1167 - * active timer (if the timer is inactive it will be activated) 1140 + * mod_timer - Modify a timer's timeout 1141 + * @timer: The timer to be modified 1142 + * @expires: New absolute timeout in jiffies 1168 1143 * 1169 1144 * mod_timer(timer, expires) is equivalent to: 1170 1145 * 1171 1146 * del_timer(timer); timer->expires = expires; add_timer(timer); 1172 1147 * 1148 + * mod_timer() is more efficient than the above open-coded sequence. If 1149 + * the timer is inactive, the del_timer() part is a NOP. The timer is 1150 + * in any case activated with the new expiry time @expires. 1151 + * 1173 1152 * Note that if there are multiple unserialized concurrent users of the 1174 1153 * same timer, then mod_timer() is the only safe way to modify the timeout, 1175 1154 * since add_timer() cannot modify an already running timer. 1176 1155 * 1177 - * The function returns whether it has modified a pending timer or not. 1178 - * (ie. mod_timer() of an inactive timer returns 0, mod_timer() of an 1179 - * active timer returns 1.) 
1156 + * If @timer->function == NULL then the start operation is silently 1157 + * discarded. In this case the return value is 0 and meaningless. 1158 + * 1159 + * Return: 1160 + * * %0 - The timer was inactive and started or was in shutdown 1161 + * state and the operation was discarded 1162 + * * %1 - The timer was active and requeued to expire at @expires or 1163 + * the timer was active and not modified because @expires did 1164 + * not change the effective expiry time 1180 1165 */ 1181 1166 int mod_timer(struct timer_list *timer, unsigned long expires) 1182 1167 { ··· 1194 1165 /** 1195 1166 * timer_reduce - Modify a timer's timeout if it would reduce the timeout 1196 1167 * @timer: The timer to be modified 1197 - * @expires: New timeout in jiffies 1168 + * @expires: New absolute timeout in jiffies 1198 1169 * 1199 1170 * timer_reduce() is very similar to mod_timer(), except that it will only 1200 - * modify a running timer if that would reduce the expiration time (it will 1201 - * start a timer that isn't running). 1171 + * modify an enqueued timer if that would reduce the expiration time. If 1172 + * @timer is not enqueued it starts the timer. 1173 + * 1174 + * If @timer->function == NULL then the start operation is silently 1175 + * discarded. 
1176 + * 1177 + * Return: 1178 + * * %0 - The timer was inactive and started or was in shutdown 1179 + * state and the operation was discarded 1180 + * * %1 - The timer was active and requeued to expire at @expires or 1181 + * the timer was active and not modified because @expires 1182 + * did not change the effective expiry time such that the 1183 + * timer would expire earlier than already scheduled 1202 1184 */ 1203 1185 int timer_reduce(struct timer_list *timer, unsigned long expires) 1204 1186 { ··· 1218 1178 EXPORT_SYMBOL(timer_reduce); 1219 1179 1220 1180 /** 1221 - * add_timer - start a timer 1222 - * @timer: the timer to be added 1181 + * add_timer - Start a timer 1182 + * @timer: The timer to be started 1223 1183 * 1224 - * The kernel will do a ->function(@timer) callback from the 1225 - * timer interrupt at the ->expires point in the future. The 1226 - * current time is 'jiffies'. 1184 + * Start @timer to expire at @timer->expires in the future. @timer->expires 1185 + * is the absolute expiry time measured in 'jiffies'. When the timer expires 1186 + * timer->function(timer) will be invoked from soft interrupt context. 1227 1187 * 1228 - * The timer's ->expires, ->function fields must be set prior calling this 1229 - * function. 1188 + * The @timer->expires and @timer->function fields must be set prior 1189 + * to calling this function. 1230 1190 * 1231 - * Timers with an ->expires field in the past will be executed in the next 1232 - * timer tick. 1191 + * If @timer->function == NULL then the start operation is silently 1192 + * discarded. 1193 + * 1194 + * If @timer->expires is already in the past @timer will be queued to 1195 + * expire at the next timer tick. 1196 + * 1197 + * This can only operate on an inactive timer. Attempts to invoke this on 1198 + * an active timer are rejected with a warning. 
1233 1199 */ 1234 1200 void add_timer(struct timer_list *timer) 1235 1201 { 1236 - BUG_ON(timer_pending(timer)); 1202 + if (WARN_ON_ONCE(timer_pending(timer))) 1203 + return; 1237 1204 __mod_timer(timer, timer->expires, MOD_TIMER_NOTPENDING); 1238 1205 } 1239 1206 EXPORT_SYMBOL(add_timer); 1240 1207 1241 1208 /** 1242 - * add_timer_on - start a timer on a particular CPU 1243 - * @timer: the timer to be added 1244 - * @cpu: the CPU to start it on 1209 + * add_timer_on - Start a timer on a particular CPU 1210 + * @timer: The timer to be started 1211 + * @cpu: The CPU to start it on 1245 1212 * 1246 - * This is not very scalable on SMP. Double adds are not possible. 1213 + * Same as add_timer() except that it starts the timer on the given CPU. 1214 + * 1215 + * See add_timer() for further details. 1247 1216 */ 1248 1217 void add_timer_on(struct timer_list *timer, int cpu) 1249 1218 { 1250 1219 struct timer_base *new_base, *base; 1251 1220 unsigned long flags; 1252 1221 1253 - BUG_ON(timer_pending(timer) || !timer->function); 1222 + debug_assert_init(timer); 1223 + 1224 + if (WARN_ON_ONCE(timer_pending(timer))) 1225 + return; 1254 1226 1255 1227 new_base = get_timer_cpu_base(timer->flags, cpu); 1256 1228 ··· 1272 1220 * wrong base locked. See lock_timer_base(). 1273 1221 */ 1274 1222 base = lock_timer_base(timer, &flags); 1223 + /* 1224 + * Has @timer been shutdown? This needs to be evaluated while 1225 + * holding base lock to prevent a race against the shutdown code. 1226 + */ 1227 + if (!timer->function) 1228 + goto out_unlock; 1229 + 1275 1230 if (base != new_base) { 1276 1231 timer->flags |= TIMER_MIGRATING; 1277 1232 ··· 1292 1233 1293 1234 debug_timer_activate(timer); 1294 1235 internal_add_timer(base, timer); 1236 + out_unlock: 1295 1237 raw_spin_unlock_irqrestore(&base->lock, flags); 1296 1238 } 1297 1239 EXPORT_SYMBOL_GPL(add_timer_on); 1298 1240 1299 1241 /** 1300 - * del_timer - deactivate a timer. 
1301 - * @timer: the timer to be deactivated 1242 + * __timer_delete - Internal function: Deactivate a timer 1243 + * @timer: The timer to be deactivated 1244 + * @shutdown: If true, this indicates that the timer is about to be 1245 + * shutdown permanently. 1302 1246 * 1303 - * del_timer() deactivates a timer - this works on both active and inactive 1304 - * timers. 1247 + * If @shutdown is true then @timer->function is set to NULL under the 1248 + * timer base lock which prevents further rearming of the timer. In that 1249 + * case any attempt to rearm @timer after this function returns will be 1250 + * silently ignored. 1305 1251 * 1306 - * The function returns whether it has deactivated a pending timer or not. 1307 - * (ie. del_timer() of an inactive timer returns 0, del_timer() of an 1308 - * active timer returns 1.) 1252 + * Return: 1253 + * * %0 - The timer was not pending 1254 + * * %1 - The timer was pending and deactivated 1309 1255 */ 1310 - int del_timer(struct timer_list *timer) 1256 + static int __timer_delete(struct timer_list *timer, bool shutdown) 1311 1257 { 1312 1258 struct timer_base *base; 1313 1259 unsigned long flags; ··· 1320 1256 1321 1257 debug_assert_init(timer); 1322 1258 1323 - if (timer_pending(timer)) { 1259 + /* 1260 + * If @shutdown is set then the lock has to be taken whether the 1261 + * timer is pending or not to protect against a concurrent rearm 1262 + * which might hit between the lockless pending check and the lock 1263 + * acquisition. By taking the lock it is ensured that such a newly 1264 + * enqueued timer is dequeued and cannot end up with 1265 + * timer->function == NULL in the expiry code. 1266 + * 1267 + * If timer->function is currently being executed, then this makes sure 1268 + * that the callback cannot requeue the timer. 
1269 + */ 1270 + if (timer_pending(timer) || shutdown) { 1324 1271 base = lock_timer_base(timer, &flags); 1325 1272 ret = detach_if_pending(timer, base, true); 1273 + if (shutdown) 1274 + timer->function = NULL; 1326 1275 raw_spin_unlock_irqrestore(&base->lock, flags); 1327 1276 } 1328 1277 1329 1278 return ret; 1330 1279 } 1331 - EXPORT_SYMBOL(del_timer); 1332 1280 1333 1281 /** 1334 - * try_to_del_timer_sync - Try to deactivate a timer 1335 - * @timer: timer to delete 1282 + * timer_delete - Deactivate a timer 1283 + * @timer: The timer to be deactivated 1336 1284 * 1337 - * This function tries to deactivate a timer. Upon successful (ret >= 0) 1338 - * exit the timer is not queued and the handler is not running on any CPU. 1285 + * The function only deactivates a pending timer, but contrary to 1286 + * timer_delete_sync() it does not take into account whether the timer's 1287 + * callback function is concurrently executed on a different CPU or not. 1288 + * Nor does it prevent rearming of the timer. If @timer can be rearmed 1289 + * concurrently then the return value of this function is meaningless. 1290 + * 1291 + * Return: 1292 + * * %0 - The timer was not pending 1293 + * * %1 - The timer was pending and deactivated 1339 1294 */ 1340 - int try_to_del_timer_sync(struct timer_list *timer) 1295 + int timer_delete(struct timer_list *timer) 1296 + { 1297 + return __timer_delete(timer, false); 1298 + } 1299 + EXPORT_SYMBOL(timer_delete); 1300 + 1301 + /** 1302 + * timer_shutdown - Deactivate a timer and prevent rearming 1303 + * @timer: The timer to be deactivated 1304 + * 1305 + * The function does not wait for a possibly running timer callback on a 1306 + * different CPU but it prevents rearming of the timer. Any attempt to arm 1307 + * @timer after this function returns will be silently ignored. 
1308 + * 1309 + * This function is useful for teardown code and should only be used when 1310 + * timer_shutdown_sync() cannot be invoked due to locking or context constraints. 1311 + * 1312 + * Return: 1313 + * * %0 - The timer was not pending 1314 + * * %1 - The timer was pending 1315 + */ 1316 + int timer_shutdown(struct timer_list *timer) 1317 + { 1318 + return __timer_delete(timer, true); 1319 + } 1320 + EXPORT_SYMBOL_GPL(timer_shutdown); 1321 + 1322 + /** 1323 + * __try_to_del_timer_sync - Internal function: Try to deactivate a timer 1324 + * @timer: Timer to deactivate 1325 + * @shutdown: If true, this indicates that the timer is about to be 1326 + * shutdown permanently. 1327 + * 1328 + * If @shutdown is true then @timer->function is set to NULL under the 1329 + * timer base lock which prevents further rearming of the timer. Any 1330 + * attempt to rearm @timer after this function returns will be silently 1331 + * ignored. 1332 + * 1333 + * This function cannot guarantee that the timer cannot be rearmed 1334 + * right after dropping the base lock if @shutdown is false. That 1335 + * needs to be prevented by the calling code if necessary. 1336 + * 1337 + * Return: 1338 + * * %0 - The timer was not pending 1339 + * * %1 - The timer was pending and deactivated 1340 + * * %-1 - The timer callback function is running on a different CPU 1341 + */ 1342 + static int __try_to_del_timer_sync(struct timer_list *timer, bool shutdown) 1341 1343 { 1342 1344 struct timer_base *base; 1343 1345 unsigned long flags; ··· 1415 1285 1416 1286 if (base->running_timer != timer) 1417 1287 ret = detach_if_pending(timer, base, true); 1288 + if (shutdown) 1289 + timer->function = NULL; 1418 1290 1419 1291 raw_spin_unlock_irqrestore(&base->lock, flags); 1420 1292 1421 1293 return ret; 1294 + } 1295 + 1296 + /** 1297 + * try_to_del_timer_sync - Try to deactivate a timer 1298 + * @timer: Timer to deactivate 1299 + * 1300 + * This function tries to deactivate a timer. 
On success the timer is not 1301 + * queued and the timer callback function is not running on any CPU. 1302 + * 1303 + * This function does not guarantee that the timer cannot be rearmed right 1304 + * after dropping the base lock. That needs to be prevented by the calling 1305 + * code if necessary. 1306 + * 1307 + * Return: 1308 + * * %0 - The timer was not pending 1309 + * * %1 - The timer was pending and deactivated 1310 + * * %-1 - The timer callback function is running on a different CPU 1311 + */ 1312 + int try_to_del_timer_sync(struct timer_list *timer) 1313 + { 1314 + return __try_to_del_timer_sync(timer, false); 1422 1315 } 1423 1316 EXPORT_SYMBOL(try_to_del_timer_sync); 1424 1317 ··· 1518 1365 static inline void del_timer_wait_running(struct timer_list *timer) { } 1519 1366 #endif 1520 1367 1521 - #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT) 1522 1368 /** 1523 - * del_timer_sync - deactivate a timer and wait for the handler to finish. 1524 - * @timer: the timer to be deactivated 1369 + * __timer_delete_sync - Internal function: Deactivate a timer and wait 1370 + * for the handler to finish. 1371 + * @timer: The timer to be deactivated 1372 + * @shutdown: If true, @timer->function will be set to NULL under the 1373 + * timer base lock which prevents rearming of @timer 1525 1374 * 1526 - * This function only differs from del_timer() on SMP: besides deactivating 1527 - * the timer it also makes sure the handler has finished executing on other 1528 - * CPUs. 1375 + * If @shutdown is not set the timer can be rearmed later. If the timer can 1376 + * be rearmed concurrently, i.e. after dropping the base lock then the 1377 + * return value is meaningless. 1529 1378 * 1530 - * Synchronization rules: Callers must prevent restarting of the timer, 1531 - * otherwise this function is meaningless. It must not be called from 1532 - * interrupt contexts unless the timer is an irqsafe one. 
The caller must 1533 - * not hold locks which would prevent completion of the timer's 1534 - * handler. The timer's handler must not call add_timer_on(). Upon exit the 1535 - * timer is not queued and the handler is not running on any CPU. 1379 + * If @shutdown is set then @timer->function is set to NULL under timer 1380 + * base lock which prevents rearming of the timer. Any attempt to rearm 1381 + * a shutdown timer is silently ignored. 1536 1382 * 1537 - * Note: For !irqsafe timers, you must not hold locks that are held in 1538 - * interrupt context while calling this function. Even if the lock has 1539 - * nothing to do with the timer in question. Here's why:: 1383 + * If the timer should be reused after shutdown it has to be initialized 1384 + * again. 1540 1385 * 1541 - * CPU0 CPU1 1542 - * ---- ---- 1543 - * <SOFTIRQ> 1544 - * call_timer_fn(); 1545 - * base->running_timer = mytimer; 1546 - * spin_lock_irq(somelock); 1547 - * <IRQ> 1548 - * spin_lock(somelock); 1549 - * del_timer_sync(mytimer); 1550 - * while (base->running_timer == mytimer); 1551 - * 1552 - * Now del_timer_sync() will never return and never release somelock. 1553 - * The interrupt on the other CPU is waiting to grab somelock but 1554 - * it has interrupted the softirq that CPU0 is waiting to finish. 1555 - * 1556 - * The function returns whether it has deactivated a pending timer or not. 1386 + * Return: 1387 + * * %0 - The timer was not pending 1388 + * * %1 - The timer was pending and deactivated 1557 1389 */ 1558 - int del_timer_sync(struct timer_list *timer) 1390 + static int __timer_delete_sync(struct timer_list *timer, bool shutdown) 1559 1391 { 1560 1392 int ret; 1561 1393 ··· 1560 1422 * don't use it in hardirq context, because it 1561 1423 * could lead to deadlock. 
1562 1424 */ 1563 - WARN_ON(in_irq() && !(timer->flags & TIMER_IRQSAFE)); 1425 + WARN_ON(in_hardirq() && !(timer->flags & TIMER_IRQSAFE)); 1564 1426 1565 1427 /* 1566 1428 * Must be able to sleep on PREEMPT_RT because of the slowpath in ··· 1570 1432 lockdep_assert_preemption_enabled(); 1571 1433 1572 1434 do { 1573 - ret = try_to_del_timer_sync(timer); 1435 + ret = __try_to_del_timer_sync(timer, shutdown); 1574 1436 1575 1437 if (unlikely(ret < 0)) { 1576 1438 del_timer_wait_running(timer); ··· 1580 1442 1581 1443 return ret; 1582 1444 } 1583 - EXPORT_SYMBOL(del_timer_sync); 1584 - #endif 1445 + 1446 + /** 1447 + * timer_delete_sync - Deactivate a timer and wait for the handler to finish. 1448 + * @timer: The timer to be deactivated 1449 + * 1450 + * Synchronization rules: Callers must prevent restarting of the timer, 1451 + * otherwise this function is meaningless. It must not be called from 1452 + * interrupt contexts unless the timer is an irqsafe one. The caller must 1453 + * not hold locks which would prevent completion of the timer's callback 1454 + * function. The timer's handler must not call add_timer_on(). Upon exit 1455 + * the timer is not queued and the handler is not running on any CPU. 1456 + * 1457 + * For !irqsafe timers, the caller must not hold locks that are held in 1458 + * interrupt context. Even if the lock has nothing to do with the timer in 1459 + * question. Here's why:: 1460 + * 1461 + * CPU0 CPU1 1462 + * ---- ---- 1463 + * <SOFTIRQ> 1464 + * call_timer_fn(); 1465 + * base->running_timer = mytimer; 1466 + * spin_lock_irq(somelock); 1467 + * <IRQ> 1468 + * spin_lock(somelock); 1469 + * timer_delete_sync(mytimer); 1470 + * while (base->running_timer == mytimer); 1471 + * 1472 + * Now timer_delete_sync() will never return and never release somelock. 1473 + * The interrupt on the other CPU is waiting to grab somelock but it has 1474 + * interrupted the softirq that CPU0 is waiting to finish. 
1475 + * 1476 + * This function cannot guarantee that the timer is not rearmed again by 1477 + * some concurrent or preempting code, right after it dropped the base 1478 + * lock. If there is the possibility of a concurrent rearm then the return 1479 + * value of the function is meaningless. 1480 + * 1481 + * If such a guarantee is needed, e.g. for teardown situations, then use 1482 + * timer_shutdown_sync() instead. 1483 + * 1484 + * Return: 1485 + * * %0 - The timer was not pending 1486 + * * %1 - The timer was pending and deactivated 1487 + */ 1488 + int timer_delete_sync(struct timer_list *timer) 1489 + { 1490 + return __timer_delete_sync(timer, false); 1491 + } 1492 + EXPORT_SYMBOL(timer_delete_sync); 1493 + 1494 + /** 1495 + * timer_shutdown_sync - Shutdown a timer and prevent rearming 1496 + * @timer: The timer to be shutdown 1497 + * 1498 + * When the function returns it is guaranteed that: 1499 + * - @timer is not queued 1500 + * - The callback function of @timer is not running 1501 + * - @timer cannot be enqueued again. Any attempt to rearm 1502 + * @timer is silently ignored. 1503 + * 1504 + * See timer_delete_sync() for synchronization rules. 1505 + * 1506 + * This function is useful for final teardown of an infrastructure where 1507 + * the timer is subject to a circular dependency problem. 1508 + * 1509 + * A common pattern for this is a timer and a workqueue where the timer can 1510 + * schedule work and work can arm the timer. On shutdown the workqueue must 1511 + * be destroyed and the timer must be prevented from rearming. Unless the 1512 + * code has conditionals like 'if (mything->in_shutdown)' to prevent that 1513 + * there is no way to get this correct with timer_delete_sync(). 1514 + * 1515 + * timer_shutdown_sync() solves the problem. 
The correct ordering of 1516 + * calls in this case is: 1517 + * 1518 + * timer_shutdown_sync(&mything->timer); 1519 + * destroy_workqueue(mything->workqueue); 1520 + * 1521 + * After this 'mything' can be safely freed. 1522 + * 1523 + * This obviously implies that the timer is not required to be functional 1524 + * for the rest of the shutdown operation. 1525 + * 1526 + * Return: 1527 + * * %0 - The timer was not pending 1528 + * * %1 - The timer was pending 1529 + */ 1530 + int timer_shutdown_sync(struct timer_list *timer) 1531 + { 1532 + return __timer_delete_sync(timer, true); 1533 + } 1534 + EXPORT_SYMBOL_GPL(timer_shutdown_sync); 1585 1535 1586 1536 static void call_timer_fn(struct timer_list *timer, 1587 1537 void (*fn)(struct timer_list *), ··· 1691 1465 #endif 1692 1466 /* 1693 1467 * Couple the lock chain with the lock chain at 1694 - * del_timer_sync() by acquiring the lock_map around the fn() 1695 - * call here and in del_timer_sync(). 1468 + * timer_delete_sync() by acquiring the lock_map around the fn() 1469 + * call here and in timer_delete_sync(). 1696 1470 */ 1697 1471 lock_map_acquire(&lockdep_map); 1698 1472 ··· 1734 1508 detach_timer(timer, true); 1735 1509 1736 1510 fn = timer->function; 1511 + 1512 + if (WARN_ON_ONCE(!fn)) { 1513 + /* Should never happen. Emphasis on should! 
*/ 1514 + base->running_timer = NULL; 1515 + continue; 1516 + } 1737 1517 1738 1518 if (timer->flags & TIMER_IRQSAFE) { 1739 1519 raw_spin_unlock(&base->lock); ··· 2165 1933 timer_setup_on_stack(&timer.timer, process_timeout, 0); 2166 1934 __mod_timer(&timer.timer, expire, MOD_TIMER_NOTPENDING); 2167 1935 schedule(); 2168 - del_singleshot_timer_sync(&timer.timer); 1936 + del_timer_sync(&timer.timer); 2169 1937 2170 1938 /* Remove the timer from the object tracker */ 2171 1939 destroy_timer_on_stack(&timer.timer); ··· 2249 2017 struct timer_base *new_base; 2250 2018 int b, i; 2251 2019 2252 - BUG_ON(cpu_online(cpu)); 2253 - 2254 2020 for (b = 0; b < NR_BASES; b++) { 2255 2021 old_base = per_cpu_ptr(&timer_bases[b], cpu); 2256 2022 new_base = get_cpu_ptr(&timer_bases[b]); ··· 2265 2035 */ 2266 2036 forward_timer_base(new_base); 2267 2037 2268 - BUG_ON(old_base->running_timer); 2038 + WARN_ON_ONCE(old_base->running_timer); 2039 + old_base->running_timer = NULL; 2269 2040 2270 2041 for (i = 0; i < WHEEL_SIZE; i++) 2271 2042 migrate_timer_list(new_base, old_base->vectors + i);
+1 -1
net/sunrpc/xprt.c
··· 1164 1164 spin_unlock(&xprt->queue_lock); 1165 1165 1166 1166 /* Turn off autodisconnect */ 1167 - del_singleshot_timer_sync(&xprt->timer); 1167 + del_timer_sync(&xprt->timer); 1168 1168 return 0; 1169 1169 } 1170 1170