Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'timers-core-2024-09-16' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer updates from Thomas Gleixner:
"Core:

- Overhaul of posix-timers in preparation for removing the workaround
for periodic timers which have signal delivery ignored.

- Remove the historical extra jiffy in msleep()

msleep() adds an extra jiffy to the timeout value to ensure
minimal sleep time. The timer wheel ensures minimal sleep time
since the large rewrite to a non-cascading wheel, but the extra
jiffy in msleep() remained unnoticed. Remove it.

- Make the timer slack handling correct for realtime tasks.

The procfs interface is inconsistent: it neither reflects
reality nor conforms to the man page. Show the correct 0 slack for
real time tasks and enforce it at the core level instead of having
inconsistent individual checks in various timer setup functions.

- The usual set of updates and enhancements all over the place.

Drivers:

- Allow the ACPI PM timer to be turned off during suspend

- No new drivers

- The usual updates and enhancements in various drivers"

* tag 'timers-core-2024-09-16' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (43 commits)
ntp: Make sure RTC is synchronized when time goes backwards
treewide: Fix wrong singular form of jiffies in comments
cpu: Use already existing usleep_range()
timers: Rename next_expiry_recalc() to be unique
platform/x86:intel/pmc: Fix comment for the pmc_core_acpi_pm_timer_suspend_resume function
clocksource/drivers/jcore: Use request_percpu_irq()
clocksource/drivers/cadence-ttc: Add missing clk_disable_unprepare in ttc_setup_clockevent
clocksource/drivers/asm9260: Add missing clk_disable_unprepare in asm9260_timer_init
clocksource/drivers/qcom: Add missing iounmap() on errors in msm_dt_timer_init()
clocksource/drivers/ingenic: Use devm_clk_get_enabled() helpers
platform/x86:intel/pmc: Enable the ACPI PM Timer to be turned off when suspended
clocksource: acpi_pm: Add external callback for suspend/resume
clocksource/drivers/arm_arch_timer: Using for_each_available_child_of_node_scoped()
dt-bindings: timer: rockchip: Add rk3576 compatible
timers: Annotate possible non critical data race of next_expiry
timers: Remove historical extra jiffie for timeout in msleep()
hrtimer: Use and report correct timerslack values for realtime tasks
hrtimer: Annotate hrtimer_cpu_base_.*_expiry() for sparse.
timers: Add sparse annotation for timer_sync_wait_running().
signal: Replace BUG_ON()s
...

+851 -380
+1 -1
Documentation/admin-guide/media/vivid.rst
···
 detail below.

 Special attention has been given to the rate at which new frames become
-available. The jitter will be around 1 jiffie (that depends on the HZ
+available. The jitter will be around 1 jiffy (that depends on the HZ
 configuration of your kernel, so usually 1/100, 1/250 or 1/1000 of a second),
 but the long-term behavior is exactly following the framerate. So a
 framerate of 59.94 Hz is really different from 60 Hz. If the framerate
+1
Documentation/devicetree/bindings/timer/rockchip,rk-timer.yaml
···
 - rockchip,rk3228-timer
 - rockchip,rk3229-timer
 - rockchip,rk3368-timer
+- rockchip,rk3576-timer
 - rockchip,rk3588-timer
 - rockchip,px30-timer
 - const: rockchip,rk3288-timer
+1 -1
Documentation/timers/timers-howto.rst
···

 ATOMIC CONTEXT:
	You must use the `*delay` family of functions. These
-	functions use the jiffie estimation of clock speed
+	functions use the jiffy estimation of clock speed
	and will busy wait for enough loop cycles to achieve
	the desired delay:

+1 -1
Documentation/translations/sp_SP/scheduler/sched-design-CFS.rst
···
 ==================================

 CFS usa una granularidad de nanosegundos y no depende de ningún
-jiffie o detalles como HZ. De este modo, el gestor de tareas CFS no tiene
+jiffy o detalles como HZ. De este modo, el gestor de tareas CFS no tiene
 noción de "ventanas de tiempo" de la forma en que tenía el gestor de
 tareas previo, y tampoco tiene heurísticos. Únicamente hay un parámetro
 central ajustable (se ha de cambiar en CONFIG_SCHED_DEBUG):
+1 -1
arch/arm/mach-versatile/spc.c
···

 /*
  * Even though the SPC takes max 3-5 ms to complete any OPP/COMMS
- * operation, the operation could start just before jiffie is about
+ * operation, the operation could start just before jiffy is about
  * to be incremented. So setting timeout value of 20ms = 2jiffies@100Hz
  */
 #define TIMEOUT_US 20000
+1 -1
arch/m68k/q40/q40ints.c
···
  * this stuff doesn't really belong here..
  */

-int ql_ticks; /* 200Hz ticks since last jiffie */
+int ql_ticks; /* 200Hz ticks since last jiffy */
 static int sound_ticks;

 #define SVOL 45
+1 -1
arch/x86/kernel/cpu/mce/dev-mcelog.c
···

	/*
	 * Need to give user space some time to set everything up,
-	 * so do it a jiffie or two later everywhere.
+	 * so do it a jiffy or two later everywhere.
	 */
	schedule_timeout(2);

+1 -1
drivers/char/ipmi/ipmi_ssif.c
···
		ipmi_ssif_unlock_cond(ssif_info, flags);
		start_get(ssif_info);
	} else {
-		/* Wait a jiffie then request the next message */
+		/* Wait a jiffy then request the next message */
		ssif_info->waiting_alert = true;
		ssif_info->retries_left = SSIF_RECV_RETRIES;
		if (!ssif_info->stopping)
+32
drivers/clocksource/acpi_pm.c
···
 #include <asm/io.h>
 #include <asm/time.h>

+static void *suspend_resume_cb_data;
+
+static void (*suspend_resume_callback)(void *data, bool suspend);
+
 /*
  * The I/O port the PMTMR resides at.
  * The location is detected during setup_arch(),
···
	return v2;
 }

+void acpi_pmtmr_register_suspend_resume_callback(void (*cb)(void *data, bool suspend), void *data)
+{
+	suspend_resume_callback = cb;
+	suspend_resume_cb_data = data;
+}
+EXPORT_SYMBOL_GPL(acpi_pmtmr_register_suspend_resume_callback);
+
+void acpi_pmtmr_unregister_suspend_resume_callback(void)
+{
+	suspend_resume_callback = NULL;
+	suspend_resume_cb_data = NULL;
+}
+EXPORT_SYMBOL_GPL(acpi_pmtmr_unregister_suspend_resume_callback);
+
+static void acpi_pm_suspend(struct clocksource *cs)
+{
+	if (suspend_resume_callback)
+		suspend_resume_callback(suspend_resume_cb_data, true);
+}
+
+static void acpi_pm_resume(struct clocksource *cs)
+{
+	if (suspend_resume_callback)
+		suspend_resume_callback(suspend_resume_cb_data, false);
+}
+
 static u64 acpi_pm_read(struct clocksource *cs)
 {
	return (u64)read_pmtmr();
···
	.read = acpi_pm_read,
	.mask = (u64)ACPI_PM_MASK,
	.flags = CLOCK_SOURCE_IS_CONTINUOUS,
+	.suspend = acpi_pm_suspend,
+	.resume = acpi_pm_resume,
 };

+3 -8
drivers/clocksource/arm_arch_timer.c
···
 {
	struct arch_timer_mem *timer_mem;
	struct arch_timer_mem_frame *frame;
-	struct device_node *frame_node;
	struct resource res;
	int ret = -EINVAL;
	u32 rate;
···
	timer_mem->cntctlbase = res.start;
	timer_mem->size = resource_size(&res);

-	for_each_available_child_of_node(np, frame_node) {
+	for_each_available_child_of_node_scoped(np, frame_node) {
		u32 n;
		struct arch_timer_mem_frame *frame;

		if (of_property_read_u32(frame_node, "frame-number", &n)) {
			pr_err(FW_BUG "Missing frame-number.\n");
-			of_node_put(frame_node);
			goto out;
		}
		if (n >= ARCH_TIMER_MEM_MAX_FRAMES) {
			pr_err(FW_BUG "Wrong frame-number, only 0-%u are permitted.\n",
			       ARCH_TIMER_MEM_MAX_FRAMES - 1);
-			of_node_put(frame_node);
			goto out;
		}
		frame = &timer_mem->frame[n];

		if (frame->valid) {
			pr_err(FW_BUG "Duplicated frame-number.\n");
-			of_node_put(frame_node);
			goto out;
		}

-		if (of_address_to_resource(frame_node, 0, &res)) {
-			of_node_put(frame_node);
+		if (of_address_to_resource(frame_node, 0, &res))
			goto out;
-		}
+
		frame->cntbase = res.start;
		frame->size = resource_size(&res);

+1
drivers/clocksource/asm9260_timer.c
···
			  DRIVER_NAME, &event_dev);
	if (ret) {
		pr_err("Failed to setup irq!\n");
+		clk_disable_unprepare(clk);
		return ret;
	}

+1 -6
drivers/clocksource/ingenic-ost.c
···
		return PTR_ERR(map);
	}

-	ost->clk = devm_clk_get(dev, "ost");
+	ost->clk = devm_clk_get_enabled(dev, "ost");
	if (IS_ERR(ost->clk))
		return PTR_ERR(ost->clk);
-
-	err = clk_prepare_enable(ost->clk);
-	if (err)
-		return err;

	/* Clear counter high/low registers */
	if (soc_info->is64bit)
···
	err = clocksource_register_hz(cs, rate);
	if (err) {
		dev_err(dev, "clocksource registration failed");
-		clk_disable_unprepare(ost->clk);
		return err;
	}

+3 -4
drivers/clocksource/jcore-pit.c
···

 static irqreturn_t jcore_timer_interrupt(int irq, void *dev_id)
 {
-	struct jcore_pit *pit = this_cpu_ptr(dev_id);
+	struct jcore_pit *pit = dev_id;

	if (clockevent_state_oneshot(&pit->ced))
		jcore_pit_disable(pit);
···
		return -ENOMEM;
	}

-	err = request_irq(pit_irq, jcore_timer_interrupt,
-			  IRQF_TIMER | IRQF_PERCPU,
-			  "jcore_pit", jcore_pit_percpu);
+	err = request_percpu_irq(pit_irq, jcore_timer_interrupt,
+				 "jcore_pit", jcore_pit_percpu);
	if (err) {
		pr_err("pit irq request failed: %d\n", err);
		free_percpu(jcore_pit_percpu);
+4 -2
drivers/clocksource/timer-cadence-ttc.c
···
					     &ttcce->ttc.clk_rate_change_nb);
	if (err) {
		pr_warn("Unable to register clock notifier.\n");
-		goto out_kfree;
+		goto out_clk_unprepare;
	}

	ttcce->ttc.freq = clk_get_rate(ttcce->ttc.clk);
···
	err = request_irq(irq, ttc_clock_event_interrupt,
			  IRQF_TIMER, ttcce->ce.name, ttcce);
	if (err)
-		goto out_kfree;
+		goto out_clk_unprepare;

	clockevents_config_and_register(&ttcce->ce,
					ttcce->ttc.freq / PRESCALE, 1, 0xfffe);

	return 0;

+out_clk_unprepare:
+	clk_disable_unprepare(ttcce->ttc.clk);
 out_kfree:
	kfree(ttcce);
	return err;
+6 -1
drivers/clocksource/timer-qcom.c
···
	}

	if (of_property_read_u32(np, "clock-frequency", &freq)) {
+		iounmap(cpu0_base);
		pr_err("Unknown frequency\n");
		return -EINVAL;
	}
···
		freq /= 4;
	writel_relaxed(DGT_CLK_CTL_DIV_4, source_base + DGT_CLK_CTL);

-	return msm_timer_init(freq, 32, irq, !!percpu_offset);
+	ret = msm_timer_init(freq, 32, irq, !!percpu_offset);
+	if (ret)
+		iounmap(cpu0_base);
+
+	return ret;
 }
 TIMER_OF_DECLARE(kpss_timer, "qcom,kpss-timer", msm_dt_timer_init);
 TIMER_OF_DECLARE(scss_timer, "qcom,scss-timer", msm_dt_timer_init);
+1 -1
drivers/dma-buf/st-dma-fence.c
···

	if (dma_fence_wait_timeout(wt.f, false, 2) == -ETIME) {
		if (timer_pending(&wt.timer)) {
-			pr_notice("Timer did not fire within the jiffie!\n");
+			pr_notice("Timer did not fire within the jiffy!\n");
			err = 0; /* not our fault! */
		} else {
			pr_err("Wait reported incomplete after timeout\n");
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_wait.c
···
		if (ret == -ETIME && !nsecs_to_jiffies(args->timeout_ns))
			args->timeout_ns = 0;

-		/* Asked to wait beyond the jiffie/scheduler precision? */
+		/* Asked to wait beyond the jiffy/scheduler precision? */
		if (ret == -ETIME && args->timeout_ns)
			ret = -EAGAIN;
	}
+2 -2
drivers/gpu/drm/i915/gt/selftest_execlists.c
···
		return -EINVAL;
	}

-	/* Give the request a jiffie to complete after flushing the worker */
+	/* Give the request a jiffy to complete after flushing the worker */
	if (i915_request_wait(rq, 0,
			      max(0l, (long)(timeout - jiffies)) + 1) < 0) {
		pr_err("%s: hanging request %llx:%lld did not complete\n",
···
		cpu_relax();

	saved_timeout = engine->props.preempt_timeout_ms;
-	engine->props.preempt_timeout_ms = 1; /* in ms, -> 1 jiffie */
+	engine->props.preempt_timeout_ms = 1; /* in ms, -> 1 jiffy */

	i915_request_get(rq);
	i915_request_add(rq);
+1 -1
drivers/gpu/drm/i915/i915_utils.c
···
	 * Paranoia to make sure the compiler computes the timeout before
	 * loading 'jiffies' as jiffies is volatile and may be updated in
	 * the background by a timer tick. All to reduce the complexity
-	 * of the addition and reduce the risk of losing a jiffie.
+	 * of the addition and reduce the risk of losing a jiffy.
	 */
	barrier();

+1 -1
drivers/gpu/drm/v3d/v3d_bo.c
···
	else
		args->timeout_ns = 0;

-	/* Asked to wait beyond the jiffie/scheduler precision? */
+	/* Asked to wait beyond the jiffy/scheduler precision? */
	if (ret == -ETIME && args->timeout_ns)
		ret = -EAGAIN;

+1 -1
drivers/isdn/mISDN/dsp_cmx.c
···
 * - has multiple clocks.
 * - has no usable clock due to jitter or packet loss (VoIP).
 *   In this case the system's clock is used. The clock resolution depends on
- *   the jiffie resolution.
+ *   the jiffy resolution.
 *
 * If a member joins a conference:
 *
+1 -1
drivers/net/ethernet/marvell/mvmdio.c
···
		return 0;
	} else {
		/* wait_event_timeout does not guarantee a delay of at
-		 * least one whole jiffie, so timeout must be no less
+		 * least one whole jiffy, so timeout must be no less
		 * than two.
		 */
		timeout = max(usecs_to_jiffies(MVMDIO_SMI_TIMEOUT), 2);
+2
drivers/platform/x86/intel/pmc/adl.c
···
	.ppfear_buckets = CNP_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = ADL_NUM_IP_IGN_ALLOWED,
	.lpm_num_modes = ADL_LPM_NUM_MODES,
	.lpm_num_maps = ADL_LPM_NUM_MAPS,
+2
drivers/platform/x86/intel/pmc/cnp.c
···
	.ppfear_buckets = CNP_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = CNP_NUM_IP_IGN_ALLOWED,
	.etr3_offset = ETR3_OFFSET,
 };
+45
drivers/platform/x86/intel/pmc/core.c
···

 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

+#include <linux/acpi_pmtmr.h>
 #include <linux/bitfield.h>
 #include <linux/debugfs.h>
 #include <linux/delay.h>
···
	return val == 1;
 }

+/*
+ * Enable or disable ACPI PM Timer
+ *
+ * This function is intended to be a callback for ACPI PM suspend/resume event.
+ * The ACPI PM Timer is enabled on resume only if it was enabled during suspend.
+ */
+static void pmc_core_acpi_pm_timer_suspend_resume(void *data, bool suspend)
+{
+	struct pmc_dev *pmcdev = data;
+	struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
+	const struct pmc_reg_map *map = pmc->map;
+	bool enabled;
+	u32 reg;
+
+	if (!map->acpi_pm_tmr_ctl_offset)
+		return;
+
+	guard(mutex)(&pmcdev->lock);
+
+	if (!suspend && !pmcdev->enable_acpi_pm_timer_on_resume)
+		return;
+
+	reg = pmc_core_reg_read(pmc, map->acpi_pm_tmr_ctl_offset);
+	enabled = !(reg & map->acpi_pm_tmr_disable_bit);
+	if (suspend)
+		reg |= map->acpi_pm_tmr_disable_bit;
+	else
+		reg &= ~map->acpi_pm_tmr_disable_bit;
+	pmc_core_reg_write(pmc, map->acpi_pm_tmr_ctl_offset, reg);
+
+	pmcdev->enable_acpi_pm_timer_on_resume = suspend && enabled;
+}

 static void pmc_core_dbgfs_unregister(struct pmc_dev *pmcdev)
 {
···
	struct pmc_dev *pmcdev;
	const struct x86_cpu_id *cpu_id;
	int (*core_init)(struct pmc_dev *pmcdev);
+	const struct pmc_reg_map *map;
	struct pmc *primary_pmc;
	int ret;
···
	pm_report_max_hw_sleep(FIELD_MAX(SLP_S0_RES_COUNTER_MASK) *
			       pmc_core_adjust_slp_s0_step(primary_pmc, 1));

+	map = primary_pmc->map;
+	if (map->acpi_pm_tmr_ctl_offset)
+		acpi_pmtmr_register_suspend_resume_callback(pmc_core_acpi_pm_timer_suspend_resume,
+							    pmcdev);
+
	device_initialized = true;
	dev_info(&pdev->dev, " initialized\n");
···
 static void pmc_core_remove(struct platform_device *pdev)
 {
	struct pmc_dev *pmcdev = platform_get_drvdata(pdev);
+	const struct pmc *pmc = pmcdev->pmcs[PMC_IDX_MAIN];
+	const struct pmc_reg_map *map = pmc->map;
+
+	if (map->acpi_pm_tmr_ctl_offset)
+		acpi_pmtmr_unregister_suspend_resume_callback();
+
	pmc_core_dbgfs_unregister(pmcdev);
	pmc_core_clean_structure(pdev);
 }
+8
drivers/platform/x86/intel/pmc/core.h
···
 #define SPT_PMC_LTR_SCC				0x3A0
 #define SPT_PMC_LTR_ISH				0x3A4

+#define SPT_PMC_ACPI_PM_TMR_CTL_OFFSET		0x18FC
+
 /* Sunrise Point: PGD PFET Enable Ack Status Registers */
 enum ppfear_regs {
	SPT_PMC_XRAM_PPFEAR0A = 0x590,
···

 #define SPT_PMC_VRIC1_SLPS0LVEN			BIT(13)
 #define SPT_PMC_VRIC1_XTALSDQDIS		BIT(22)
+
+#define SPT_PMC_BIT_ACPI_PM_TMR_DISABLE		BIT(1)

 /* Cannonlake Power Management Controller register offsets */
 #define CNP_PMC_SLPS0_DBG_OFFSET		0x10B4
···
	const u8 *lpm_reg_index;
	const u32 pson_residency_offset;
	const u32 pson_residency_counter_step;
+	const u32 acpi_pm_tmr_ctl_offset;
+	const u32 acpi_pm_tmr_disable_bit;
 };

 /**
···
	u32 die_c6_offset;
	struct telem_endpoint *punit_ep;
	struct pmc_info *regmap_list;
+
+	bool enable_acpi_pm_timer_on_resume;
 };

 enum pmc_index {
+2
drivers/platform/x86/intel/pmc/icl.c
···
	.ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = ICL_NUM_IP_IGN_ALLOWED,
	.etr3_offset = ETR3_OFFSET,
 };
+2
drivers/platform/x86/intel/pmc/mtl.c
···
	.ppfear_buckets = MTL_SOCM_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.lpm_num_maps = ADL_LPM_NUM_MAPS,
	.ltr_ignore_max = MTL_SOCM_NUM_IP_IGN_ALLOWED,
	.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
+2
drivers/platform/x86/intel/pmc/spt.c
···
	.ppfear_buckets = SPT_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = SPT_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = SPT_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = SPT_NUM_IP_IGN_ALLOWED,
	.pm_vric1_offset = SPT_PMC_VRIC1_OFFSET,
 };
+2
drivers/platform/x86/intel/pmc/tgl.c
···
	.ppfear_buckets = ICL_PPFEAR_NUM_ENTRIES,
	.pm_cfg_offset = CNP_PMC_PM_CFG_OFFSET,
	.pm_read_disable_bit = CNP_PMC_READ_DISABLE_BIT,
+	.acpi_pm_tmr_ctl_offset = SPT_PMC_ACPI_PM_TMR_CTL_OFFSET,
+	.acpi_pm_tmr_disable_bit = SPT_PMC_BIT_ACPI_PM_TMR_DISABLE,
	.ltr_ignore_max = TGL_NUM_IP_IGN_ALLOWED,
	.lpm_num_maps = TGL_LPM_NUM_MAPS,
	.lpm_res_counter_step_x2 = TGL_PMC_LPM_RES_COUNTER_STEP_X2,
+8 -7
fs/proc/base.c
···
	if (!tp->sighand)
		return ERR_PTR(-ESRCH);

-	return seq_list_start(&tp->task->signal->posix_timers, *pos);
+	return seq_hlist_start(&tp->task->signal->posix_timers, *pos);
 }

 static void *timers_next(struct seq_file *m, void *v, loff_t *pos)
 {
	struct timers_private *tp = m->private;
-	return seq_list_next(v, &tp->task->signal->posix_timers, pos);
+	return seq_hlist_next(v, &tp->task->signal->posix_timers, pos);
 }

 static void timers_stop(struct seq_file *m, void *v)
···
		[SIGEV_THREAD] = "thread",
	};

-	timer = list_entry((struct list_head *)v, struct k_itimer, list);
+	timer = hlist_entry((struct hlist_node *)v, struct k_itimer, list);
	notify = timer->it_sigev_notify;

	seq_printf(m, "ID: %d\n", timer->it_id);
···
	}

	task_lock(p);
-	if (slack_ns == 0)
-		p->timer_slack_ns = p->default_timer_slack_ns;
-	else
-		p->timer_slack_ns = slack_ns;
+	if (task_is_realtime(p))
+		slack_ns = 0;
+	else if (slack_ns == 0)
+		slack_ns = p->default_timer_slack_ns;
+	p->timer_slack_ns = slack_ns;
	task_unlock(p);

 out:
+4 -7
fs/select.c
···
 {
	u64 ret;
	struct timespec64 now;
+	u64 slack = current->timer_slack_ns;

-	/*
-	 * Realtime tasks get a slack of 0 for obvious reasons.
-	 */
-
-	if (rt_task(current))
+	if (slack == 0)
		return 0;

	ktime_get_ts64(&now);
	now = timespec64_sub(*tv, now);
	ret = __estimate_accuracy(&now);
-	if (ret < current->timer_slack_ns)
-		return current->timer_slack_ns;
+	if (ret < slack)
+		return slack;
	return ret;
 }

+2 -2
fs/signalfd.c
···
	DECLARE_WAITQUEUE(wait, current);

	spin_lock_irq(&current->sighand->siglock);
-	ret = dequeue_signal(current, &ctx->sigmask, info, &type);
+	ret = dequeue_signal(&ctx->sigmask, info, &type);
	switch (ret) {
	case 0:
		if (!nonblock)
···
	add_wait_queue(&current->sighand->signalfd_wqh, &wait);
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
-		ret = dequeue_signal(current, &ctx->sigmask, info, &type);
+		ret = dequeue_signal(&ctx->sigmask, info, &type);
		if (ret != 0)
			break;
		if (signal_pending(current)) {
+1 -1
fs/xfs/xfs_buf.h
···
 * success the write is considered to be failed permanently and the
 * iodone handler will take appropriate action.
 *
- * For retry timeouts, we record the jiffie of the first failure. This
+ * For retry timeouts, we record the jiffy of the first failure. This
 * means that we can change the retry timeout for buffers already under
 * I/O and thus avoid getting stuck in a retry loop with a long timeout.
 *
+13
include/linux/acpi_pmtmr.h
···
	return acpi_pm_read_verified() & ACPI_PM_MASK;
 }

+/**
+ * Register callback for suspend and resume event
+ *
+ * @cb Callback triggered on suspend and resume
+ * @data Data passed with the callback
+ */
+void acpi_pmtmr_register_suspend_resume_callback(void (*cb)(void *data, bool suspend), void *data);
+
+/**
+ * Remove registered callback for suspend and resume event
+ */
+void acpi_pmtmr_unregister_suspend_resume_callback(void);
+
 #else

 static inline u32 acpi_pm_read_early(void)
+1 -1
include/linux/jiffies.h
···
 #define NSEC_CONVERSION ((unsigned long)((((u64)1 << NSEC_JIFFIE_SC) +\
                                         TICK_NSEC -1) / (u64)TICK_NSEC))
 /*
- * The maximum jiffie value is (MAX_INT >> 1). Here we translate that
+ * The maximum jiffy value is (MAX_INT >> 1). Here we translate that
 * into seconds. The 64-bit case will overflow if we are not careful,
 * so use the messy SH_DIV macro to do it. Still all constants.
 */
+1 -1
include/linux/posix-timers.h
···
 * @rcu: RCU head for freeing the timer.
 */
 struct k_itimer {
-	struct list_head list;
+	struct hlist_node list;
	struct hlist_node t_hash;
	spinlock_t it_lock;
	const struct k_clock *kclock;
+3 -4
include/linux/sched/signal.h
···

	/* POSIX.1b Interval Timers */
	unsigned int next_posix_timer_id;
-	struct list_head posix_timers;
+	struct hlist_head posix_timers;

	/* ITIMER_REAL timer for the process */
	struct hrtimer real_timer;
···
 extern void flush_signals(struct task_struct *);
 extern void ignore_signals(struct task_struct *);
 extern void flush_signal_handlers(struct task_struct *, int force_default);
-extern int dequeue_signal(struct task_struct *task, sigset_t *mask,
-			  kernel_siginfo_t *info, enum pid_type *type);
+extern int dequeue_signal(sigset_t *mask, kernel_siginfo_t *info, enum pid_type *type);

 static inline int kernel_dequeue_signal(void)
 {
···
	int ret;

	spin_lock_irq(&task->sighand->siglock);
-	ret = dequeue_signal(task, &task->blocked, &__info, &__type);
+	ret = dequeue_signal(&task->blocked, &__info, &__type);
	spin_unlock_irq(&task->sighand->siglock);

	return ret;
+1 -1
include/linux/timekeeper_internal.h
···
 * @overflow_seen: Overflow warning flag (DEBUG_TIMEKEEPING)
 *
 * Note: For timespec(64) based interfaces wall_to_monotonic is what
- * we need to add to xtime (or xtime corrected for sub jiffie times)
+ * we need to add to xtime (or xtime corrected for sub jiffy times)
 * to get to monotonic time. Monotonic is pegged at zero at system
 * boot time, so wall_to_monotonic will be negative, however, we will
 * ALWAYS keep the tv_nsec part positive so we can use the usual
+1 -1
init/init_task.c
···
	.cred_guard_mutex = __MUTEX_INITIALIZER(init_signals.cred_guard_mutex),
	.exec_update_lock = __RWSEM_INITIALIZER(init_signals.exec_update_lock),
 #ifdef CONFIG_POSIX_TIMERS
-	.posix_timers = LIST_HEAD_INIT(init_signals.posix_timers),
+	.posix_timers = HLIST_HEAD_INIT,
	.cputimer = {
		.cputime_atomic = INIT_CPUTIME_ATOMIC,
	},
+1 -1
kernel/cpu.c
···
			/* Poll for one millisecond */
			arch_cpuhp_sync_state_poll();
		} else {
-			usleep_range_state(USEC_PER_MSEC, 2 * USEC_PER_MSEC, TASK_UNINTERRUPTIBLE);
+			usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
		}
		sync = atomic_read(st);
	}
+1 -1
kernel/fork.c
···
	prev_cputime_init(&sig->prev_cputime);

 #ifdef CONFIG_POSIX_TIMERS
-	INIT_LIST_HEAD(&sig->posix_timers);
+	INIT_HLIST_HEAD(&sig->posix_timers);
	hrtimer_init(&sig->real_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	sig->real_timer.function = it_real_fn;
 #endif
+8
kernel/sched/syscalls.c
···
	else if (fair_policy(policy))
		p->static_prio = NICE_TO_PRIO(attr->sched_nice);

+	/* rt-policy tasks do not have a timerslack */
+	if (task_is_realtime(p)) {
+		p->timer_slack_ns = 0;
+	} else if (p->timer_slack_ns == 0) {
+		/* when switching back to non-rt policy, restore timerslack */
+		p->timer_slack_ns = p->default_timer_slack_ns;
+	}
+
	/*
	 * __sched_setscheduler() ensures attr->sched_priority == 0 when
	 * !rt_policy. Always setting this ensures that things like
+17 -17
kernel/signal.c
···
 }

 /*
- * Dequeue a signal and return the element to the caller, which is
- * expected to free it.
- *
- * All callers have to hold the siglock.
+ * Try to dequeue a signal. If a deliverable signal is found fill in the
+ * caller provided siginfo and return the signal number. Otherwise return
+ * 0.
 */
-int dequeue_signal(struct task_struct *tsk, sigset_t *mask,
-		   kernel_siginfo_t *info, enum pid_type *type)
+int dequeue_signal(sigset_t *mask, kernel_siginfo_t *info, enum pid_type *type)
 {
+	struct task_struct *tsk = current;
	bool resched_timer = false;
	int signr;

-	/* We only dequeue private signals from ourselves, we don't let
-	 * signalfd steal them
-	 */
+	lockdep_assert_held(&tsk->sighand->siglock);
+
	*type = PIDTYPE_PID;
	signr = __dequeue_signal(&tsk->pending, mask, info, &resched_timer);
	if (!signr) {
···

 void sigqueue_free(struct sigqueue *q)
 {
-	unsigned long flags;
	spinlock_t *lock = &current->sighand->siglock;
+	unsigned long flags;

-	BUG_ON(!(q->flags & SIGQUEUE_PREALLOC));
+	if (WARN_ON_ONCE(!(q->flags & SIGQUEUE_PREALLOC)))
+		return;
	/*
	 * We must hold ->siglock while testing q->list
	 * to serialize with collect_signal() or with
···
	unsigned long flags;
	int ret, result;

-	BUG_ON(!(q->flags & SIGQUEUE_PREALLOC));
+	if (WARN_ON_ONCE(!(q->flags & SIGQUEUE_PREALLOC)))
+		return 0;
+	if (WARN_ON_ONCE(q->info.si_code != SI_TIMER))
+		return 0;

	ret = -1;
	rcu_read_lock();
···
		 * If an SI_TIMER entry is already queue just increment
		 * the overrun count.
		 */
-		BUG_ON(q->info.si_code != SI_TIMER);
		q->info.si_overrun++;
		result = TRACE_SIGNAL_ALREADY_PENDING;
		goto out;
···
		type = PIDTYPE_PID;
		signr = dequeue_synchronous_signal(&ksig->info);
		if (!signr)
-			signr = dequeue_signal(current, &current->blocked,
-					       &ksig->info, &type);
+			signr = dequeue_signal(&current->blocked, &ksig->info, &type);

		if (!signr)
			break; /* will return 0 */
···
	signotset(&mask);

	spin_lock_irq(&tsk->sighand->siglock);
-	sig = dequeue_signal(tsk, &mask, info, &type);
+	sig = dequeue_signal(&mask, info, &type);
	if (!sig && timeout) {
		/*
		 * None ready, temporarily unblock those we're interested
···
		spin_lock_irq(&tsk->sighand->siglock);
		__set_task_blocked(tsk, &tsk->real_blocked);
		sigemptyset(&tsk->real_blocked);
-		sig = dequeue_signal(tsk, &mask, info, &type);
+		sig = dequeue_signal(&mask, info, &type);
	}
	spin_unlock_irq(&tsk->sighand->siglock);

+2
kernel/sys.c
···
		error = current->timer_slack_ns;
		break;
	case PR_SET_TIMERSLACK:
+		if (task_is_realtime(current))
+			break;
		if (arg2 <= 0)
			current->timer_slack_ns =
					current->default_timer_slack_ns;
+2 -7
kernel/time/alarmtimer.c
···
		 * promised in the context of posix_timer_fn() never
		 * materialized, but someone should really work on it.
		 *
-		 * To prevent DOS fake @now to be 1 jiffie out which keeps
+		 * To prevent DOS fake @now to be 1 jiffy out which keeps
		 * the overrun accounting correct but creates an
		 * inconsistency vs. timer_gettime(2).
		 */
···
					    it.alarm.alarmtimer);
	enum alarmtimer_restart result = ALARMTIMER_NORESTART;
	unsigned long flags;
-	int si_private = 0;

	spin_lock_irqsave(&ptr->it_lock, flags);

-	ptr->it_active = 0;
-	if (ptr->it_interval)
-		si_private = ++ptr->it_requeue_pending;
-
-	if (posix_timer_event(ptr, si_private) && ptr->it_interval) {
+	if (posix_timer_queue_signal(ptr) && ptr->it_interval) {
		/*
		 * Handle ignored signals and rearm the timer. This will go
		 * away once we handle ignored signals proper. Ensure that
+1 -1
kernel/time/clockevents.c
··· 190 190 191 191 #ifdef CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST 192 192 193 - /* Limit min_delta to a jiffie */ 193 + /* Limit min_delta to a jiffy */ 194 194 #define MIN_DELTA_LIMIT (NSEC_PER_SEC / HZ) 195 195 196 196 /**
+6 -16
kernel/time/hrtimer.c
··· 1177 1177 /* 1178 1178 * CONFIG_TIME_LOW_RES indicates that the system has no way to return 1179 1179 * granular time values. For relative timers we add hrtimer_resolution 1180 - * (i.e. one jiffie) to prevent short timeouts. 1180 + * (i.e. one jiffy) to prevent short timeouts. 1181 1181 */ 1182 1182 timer->is_rel = mode & HRTIMER_MODE_REL; 1183 1183 if (timer->is_rel) ··· 1351 1351 } 1352 1352 1353 1353 static void hrtimer_cpu_base_lock_expiry(struct hrtimer_cpu_base *base) 1354 + __acquires(&base->softirq_expiry_lock) 1354 1355 { 1355 1356 spin_lock(&base->softirq_expiry_lock); 1356 1357 } 1357 1358 1358 1359 static void hrtimer_cpu_base_unlock_expiry(struct hrtimer_cpu_base *base) 1360 + __releases(&base->softirq_expiry_lock) 1359 1361 { 1360 1362 spin_unlock(&base->softirq_expiry_lock); 1361 1363 } ··· 2074 2072 struct restart_block *restart; 2075 2073 struct hrtimer_sleeper t; 2076 2074 int ret = 0; 2077 - u64 slack; 2078 - 2079 - slack = current->timer_slack_ns; 2080 - if (rt_task(current)) 2081 - slack = 0; 2082 2075 2083 2076 hrtimer_init_sleeper_on_stack(&t, clockid, mode); 2084 - hrtimer_set_expires_range_ns(&t.timer, rqtp, slack); 2077 + hrtimer_set_expires_range_ns(&t.timer, rqtp, current->timer_slack_ns); 2085 2078 ret = do_nanosleep(&t, mode); 2086 2079 if (ret != -ERESTART_RESTARTBLOCK) 2087 2080 goto out; ··· 2246 2249 /** 2247 2250 * schedule_hrtimeout_range_clock - sleep until timeout 2248 2251 * @expires: timeout value (ktime_t) 2249 - * @delta: slack in expires timeout (ktime_t) for SCHED_OTHER tasks 2252 + * @delta: slack in expires timeout (ktime_t) 2250 2253 * @mode: timer mode 2251 2254 * @clock_id: timer clock to be used 2252 2255 */ ··· 2273 2276 return -EINTR; 2274 2277 } 2275 2278 2276 - /* 2277 - * Override any slack passed by the user if under 2278 - * rt contraints. 
2279 - */ 2280 - if (rt_task(current)) 2281 - delta = 0; 2282 - 2283 2279 hrtimer_init_sleeper_on_stack(&t, clock_id, mode); 2284 2280 hrtimer_set_expires_range_ns(&t.timer, *expires, delta); 2285 2281 hrtimer_sleeper_start_expires(&t, mode); ··· 2292 2302 /** 2293 2303 * schedule_hrtimeout_range - sleep until timeout 2294 2304 * @expires: timeout value (ktime_t) 2295 - * @delta: slack in expires timeout (ktime_t) for SCHED_OTHER tasks 2305 + * @delta: slack in expires timeout (ktime_t) 2296 2306 * @mode: timer mode 2297 2307 * 2298 2308 * Make the current task sleep until the given expiry time has
+9 -1
kernel/time/ntp.c
··· 660 660 sched_sync_hw_clock(offset_nsec, res != 0); 661 661 } 662 662 663 - void ntp_notify_cmos_timer(void) 663 + void ntp_notify_cmos_timer(bool offset_set) 664 664 { 665 + /* 666 + * If the time jumped (using ADJ_SETOFFSET) cancels sync timer, 667 + * which may have been running if the time was synchronized 668 + * prior to the ADJ_SETOFFSET call. 669 + */ 670 + if (offset_set) 671 + hrtimer_cancel(&sync_hrtimer); 672 + 665 673 /* 666 674 * When the work is currently executed but has not yet the timer 667 675 * rearmed this queues the work immediately again. No big issue,
+2 -2
kernel/time/ntp_internal.h
··· 14 14 extern void __hardpps(const struct timespec64 *phase_ts, const struct timespec64 *raw_ts); 15 15 16 16 #if defined(CONFIG_GENERIC_CMOS_UPDATE) || defined(CONFIG_RTC_SYSTOHC) 17 - extern void ntp_notify_cmos_timer(void); 17 + extern void ntp_notify_cmos_timer(bool offset_set); 18 18 #else 19 - static inline void ntp_notify_cmos_timer(void) { } 19 + static inline void ntp_notify_cmos_timer(bool offset_set) { } 20 20 #endif 21 21 22 22 #endif /* _LINUX_NTP_INTERNAL_H */
+89 -126
kernel/time/posix-cpu-timers.c
··· 453 453 struct cpu_timer *ctmr = &timer->it.cpu; 454 454 struct posix_cputimer_base *base; 455 455 456 + timer->it_active = 0; 456 457 if (!cpu_timer_dequeue(ctmr)) 457 458 return; 458 459 ··· 560 559 struct cpu_timer *ctmr = &timer->it.cpu; 561 560 u64 newexp = cpu_timer_getexpires(ctmr); 562 561 562 + timer->it_active = 1; 563 563 if (!cpu_timer_enqueue(&base->tqhead, ctmr)) 564 564 return; 565 565 ··· 586 584 { 587 585 struct cpu_timer *ctmr = &timer->it.cpu; 588 586 589 - if ((timer->it_sigev_notify & ~SIGEV_THREAD_ID) == SIGEV_NONE) { 590 - /* 591 - * User don't want any signal. 592 - */ 593 - cpu_timer_setexpires(ctmr, 0); 594 - } else if (unlikely(timer->sigq == NULL)) { 587 + timer->it_active = 0; 588 + if (unlikely(timer->sigq == NULL)) { 595 589 /* 596 590 * This a special case for clock_nanosleep, 597 591 * not a normal timer from sys_timer_create. ··· 598 600 /* 599 601 * One-shot timer. Clear it as soon as it's fired. 600 602 */ 601 - posix_timer_event(timer, 0); 603 + posix_timer_queue_signal(timer); 602 604 cpu_timer_setexpires(ctmr, 0); 603 - } else if (posix_timer_event(timer, ++timer->it_requeue_pending)) { 605 + } else if (posix_timer_queue_signal(timer)) { 604 606 /* 605 607 * The signal did not get queued because the signal 606 608 * was ignored, so we won't get any callback to ··· 612 614 } 613 615 } 614 616 617 + static void __posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec64 *itp, u64 now); 618 + 615 619 /* 616 620 * Guts of sys_timer_settime for CPU timers. 617 621 * This is called with the timer locked and interrupts disabled. 
··· 623 623 static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags, 624 624 struct itimerspec64 *new, struct itimerspec64 *old) 625 625 { 626 + bool sigev_none = timer->it_sigev_notify == SIGEV_NONE; 626 627 clockid_t clkid = CPUCLOCK_WHICH(timer->it_clock); 627 - u64 old_expires, new_expires, old_incr, val; 628 628 struct cpu_timer *ctmr = &timer->it.cpu; 629 + u64 old_expires, new_expires, now; 629 630 struct sighand_struct *sighand; 630 631 struct task_struct *p; 631 632 unsigned long flags; ··· 663 662 return -ESRCH; 664 663 } 665 664 666 - /* 667 - * Disarm any old timer after extracting its expiry time. 668 - */ 669 - old_incr = timer->it_interval; 665 + /* Retrieve the current expiry time before disarming the timer */ 670 666 old_expires = cpu_timer_getexpires(ctmr); 671 667 672 668 if (unlikely(timer->it.cpu.firing)) { ··· 671 673 ret = TIMER_RETRY; 672 674 } else { 673 675 cpu_timer_dequeue(ctmr); 676 + timer->it_active = 0; 674 677 } 675 678 676 679 /* 677 - * We need to sample the current value to convert the new 678 - * value from to relative and absolute, and to convert the 679 - * old value from absolute to relative. To set a process 680 - * timer, we need a sample to balance the thread expiry 681 - * times (in arm_timer). With an absolute time, we must 682 - * check if it's already passed. In short, we need a sample. 680 + * Sample the current clock for saving the previous setting 681 + * and for rearming the timer. 683 682 */ 684 683 if (CPUCLOCK_PERTHREAD(timer->it_clock)) 685 - val = cpu_clock_sample(clkid, p); 684 + now = cpu_clock_sample(clkid, p); 686 685 else 687 - val = cpu_clock_sample_group(clkid, p, true); 686 + now = cpu_clock_sample_group(clkid, p, !sigev_none); 688 687 688 + /* Retrieve the previous expiry value if requested. */ 689 689 if (old) { 690 - if (old_expires == 0) { 691 - old->it_value.tv_sec = 0; 692 - old->it_value.tv_nsec = 0; 693 - } else { 694 - /* 695 - * Update the timer in case it has overrun already. 
696 - * If it has, we'll report it as having overrun and 697 - * with the next reloaded timer already ticking, 698 - * though we are swallowing that pending 699 - * notification here to install the new setting. 700 - */ 701 - u64 exp = bump_cpu_timer(timer, val); 702 - 703 - if (val < exp) { 704 - old_expires = exp - val; 705 - old->it_value = ns_to_timespec64(old_expires); 706 - } else { 707 - old->it_value.tv_nsec = 1; 708 - old->it_value.tv_sec = 0; 709 - } 710 - } 690 + old->it_value = (struct timespec64){ }; 691 + if (old_expires) 692 + __posix_cpu_timer_get(timer, old, now); 711 693 } 712 694 695 + /* Retry if the timer expiry is running concurrently */ 713 696 if (unlikely(ret)) { 714 - /* 715 - * We are colliding with the timer actually firing. 716 - * Punt after filling in the timer's old value, and 717 - * disable this firing since we are already reporting 718 - * it as an overrun (thanks to bump_cpu_timer above). 719 - */ 720 697 unlock_task_sighand(p, &flags); 721 698 goto out; 722 699 } 723 700 724 - if (new_expires != 0 && !(timer_flags & TIMER_ABSTIME)) { 725 - new_expires += val; 726 - } 701 + /* Convert relative expiry time to absolute */ 702 + if (new_expires && !(timer_flags & TIMER_ABSTIME)) 703 + new_expires += now; 704 + 705 + /* Set the new expiry time (might be 0) */ 706 + cpu_timer_setexpires(ctmr, new_expires); 727 707 728 708 /* 729 - * Install the new expiry time (or zero). 730 - * For a timer with no notification action, we don't actually 731 - * arm the timer (we'll just fake it for timer_gettime). 709 + * Arm the timer if it is not disabled, the new expiry value has 710 + * not yet expired and the timer requires signal delivery. 711 + * SIGEV_NONE timers are never armed. In case the timer is not 712 + * armed, enforce the reevaluation of the timer base so that the 713 + * process wide cputime counter can be disabled eventually. 
732 714 */ 733 - cpu_timer_setexpires(ctmr, new_expires); 734 - if (new_expires != 0 && val < new_expires) { 735 - arm_timer(timer, p); 715 + if (likely(!sigev_none)) { 716 + if (new_expires && now < new_expires) 717 + arm_timer(timer, p); 718 + else 719 + trigger_base_recalc_expires(timer, p); 736 720 } 737 721 738 722 unlock_task_sighand(p, &flags); 739 - /* 740 - * Install the new reload setting, and 741 - * set up the signal and overrun bookkeeping. 742 - */ 743 - timer->it_interval = timespec64_to_ktime(new->it_interval); 723 + 724 + posix_timer_set_common(timer, new); 744 725 745 726 /* 746 - * This acts as a modification timestamp for the timer, 747 - * so any automatic reload attempt will punt on seeing 748 - * that we have reset the timer manually. 727 + * If the new expiry time was already in the past the timer was not 728 + * queued. Fire it immediately even if the thread never runs to 729 + * accumulate more time on this clock. 749 730 */ 750 - timer->it_requeue_pending = (timer->it_requeue_pending + 2) & 751 - ~REQUEUE_PENDING; 752 - timer->it_overrun_last = 0; 753 - timer->it_overrun = -1; 754 - 755 - if (val >= new_expires) { 756 - if (new_expires != 0) { 757 - /* 758 - * The designated time already passed, so we notify 759 - * immediately, even if the thread never runs to 760 - * accumulate more time on this clock. 761 - */ 762 - cpu_timer_fire(timer); 763 - } 764 - 765 - /* 766 - * Make sure we don't keep around the process wide cputime 767 - * counter or the tick dependency if they are not necessary. 
768 - */ 769 - sighand = lock_task_sighand(p, &flags); 770 - if (!sighand) 771 - goto out; 772 - 773 - if (!cpu_timer_queued(ctmr)) 774 - trigger_base_recalc_expires(timer, p); 775 - 776 - unlock_task_sighand(p, &flags); 777 - } 778 - out: 731 + if (!sigev_none && new_expires && now >= new_expires) 732 + cpu_timer_fire(timer); 733 + out: 779 734 rcu_read_unlock(); 780 - if (old) 781 - old->it_interval = ns_to_timespec64(old_incr); 782 - 783 735 return ret; 736 + } 737 + 738 + static void __posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec64 *itp, u64 now) 739 + { 740 + bool sigev_none = timer->it_sigev_notify == SIGEV_NONE; 741 + u64 expires, iv = timer->it_interval; 742 + 743 + /* 744 + * Make sure that interval timers are moved forward for the 745 + * following cases: 746 + * - SIGEV_NONE timers which are never armed 747 + * - Timers which expired, but the signal has not yet been 748 + * delivered 749 + */ 750 + if (iv && ((timer->it_requeue_pending & REQUEUE_PENDING) || sigev_none)) 751 + expires = bump_cpu_timer(timer, now); 752 + else 753 + expires = cpu_timer_getexpires(&timer->it.cpu); 754 + 755 + /* 756 + * Expired interval timers cannot have a remaining time <= 0. 757 + * The kernel has to move them forward so that the next 758 + * timer expiry is > @now. 759 + */ 760 + if (now < expires) { 761 + itp->it_value = ns_to_timespec64(expires - now); 762 + } else { 763 + /* 764 + * A single shot SIGEV_NONE timer must return 0, when it is 765 + * expired! Timers which have a real signal delivery mode 766 + * must return a remaining time greater than 0 because the 767 + * signal has not yet been delivered. 
768 + */ 769 + if (!sigev_none) 770 + itp->it_value.tv_nsec = 1; 771 + } 784 772 } 785 773 786 774 static void posix_cpu_timer_get(struct k_itimer *timer, struct itimerspec64 *itp) 787 775 { 788 776 clockid_t clkid = CPUCLOCK_WHICH(timer->it_clock); 789 - struct cpu_timer *ctmr = &timer->it.cpu; 790 - u64 now, expires = cpu_timer_getexpires(ctmr); 791 777 struct task_struct *p; 778 + u64 now; 792 779 793 780 rcu_read_lock(); 794 781 p = cpu_timer_task_rcu(timer); 795 - if (!p) 796 - goto out; 782 + if (p && cpu_timer_getexpires(&timer->it.cpu)) { 783 + itp->it_interval = ktime_to_timespec64(timer->it_interval); 797 784 798 - /* 799 - * Easy part: convert the reload time. 800 - */ 801 - itp->it_interval = ktime_to_timespec64(timer->it_interval); 785 + if (CPUCLOCK_PERTHREAD(timer->it_clock)) 786 + now = cpu_clock_sample(clkid, p); 787 + else 788 + now = cpu_clock_sample_group(clkid, p, false); 802 789 803 - if (!expires) 804 - goto out; 805 - 806 - /* 807 - * Sample the clock to take the difference with the expiry time. 808 - */ 809 - if (CPUCLOCK_PERTHREAD(timer->it_clock)) 810 - now = cpu_clock_sample(clkid, p); 811 - else 812 - now = cpu_clock_sample_group(clkid, p, false); 813 - 814 - if (now < expires) { 815 - itp->it_value = ns_to_timespec64(expires - now); 816 - } else { 817 - /* 818 - * The timer should have expired already, but the firing 819 - * hasn't taken place yet. Say it's just about to expire. 820 - */ 821 - itp->it_value.tv_nsec = 1; 822 - itp->it_value.tv_sec = 0; 790 + __posix_cpu_timer_get(timer, itp, now); 823 791 } 824 - out: 825 792 rcu_read_unlock(); 826 793 } 827 794
+44 -29
kernel/time/posix-timers.c
··· 277 277 unlock_timer(timr, flags); 278 278 } 279 279 280 - int posix_timer_event(struct k_itimer *timr, int si_private) 280 + int posix_timer_queue_signal(struct k_itimer *timr) 281 281 { 282 + int ret, si_private = 0; 282 283 enum pid_type type; 283 - int ret; 284 + 285 + lockdep_assert_held(&timr->it_lock); 286 + 287 + timr->it_active = 0; 288 + if (timr->it_interval) 289 + si_private = ++timr->it_requeue_pending; 290 + 284 291 /* 285 292 * FIXME: if ->sigq is queued we can race with 286 293 * dequeue_signal()->posixtimer_rearm(). ··· 316 309 */ 317 310 static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer) 318 311 { 312 + struct k_itimer *timr = container_of(timer, struct k_itimer, it.real.timer); 319 313 enum hrtimer_restart ret = HRTIMER_NORESTART; 320 - struct k_itimer *timr; 321 314 unsigned long flags; 322 - int si_private = 0; 323 315 324 - timr = container_of(timer, struct k_itimer, it.real.timer); 325 316 spin_lock_irqsave(&timr->it_lock, flags); 326 317 327 - timr->it_active = 0; 328 - if (timr->it_interval != 0) 329 - si_private = ++timr->it_requeue_pending; 330 - 331 - if (posix_timer_event(timr, si_private)) { 318 + if (posix_timer_queue_signal(timr)) { 332 319 /* 333 320 * The signal was not queued due to SIG_IGN. As a 334 321 * consequence the timer is not going to be rearmed from ··· 339 338 * change to the signal handling code. 340 339 * 341 340 * For now let timers with an interval less than a 342 - * jiffie expire every jiffie and recheck for a 341 + * jiffy expire every jiffy and recheck for a 343 342 * valid signal handler. 344 343 * 345 344 * This avoids interrupt starvation in case of a 346 345 * very small interval, which would expire the 347 346 * timer immediately again. 
348 347 * 349 - * Moving now ahead of time by one jiffie tricks 348 + * Moving now ahead of time by one jiffy tricks 350 349 * hrtimer_forward() to expire the timer later, 351 350 * while it still maintains the overrun accuracy 352 351 * for the price of a slight inconsistency in the ··· 516 515 spin_lock_irq(&current->sighand->siglock); 517 516 /* This makes the timer valid in the hash table */ 518 517 WRITE_ONCE(new_timer->it_signal, current->signal); 519 - list_add(&new_timer->list, &current->signal->posix_timers); 518 + hlist_add_head(&new_timer->list, &current->signal->posix_timers); 520 519 spin_unlock_irq(&current->sighand->siglock); 521 520 /* 522 521 * After unlocking sighand::siglock @new_timer is subject to ··· 857 856 return lock_timer(timer_id, flags); 858 857 } 859 858 859 + /* 860 + * Set up the new interval and reset the signal delivery data 861 + */ 862 + void posix_timer_set_common(struct k_itimer *timer, struct itimerspec64 *new_setting) 863 + { 864 + if (new_setting->it_value.tv_sec || new_setting->it_value.tv_nsec) 865 + timer->it_interval = timespec64_to_ktime(new_setting->it_interval); 866 + else 867 + timer->it_interval = 0; 868 + 869 + /* Prevent reloading in case there is a signal pending */ 870 + timer->it_requeue_pending = (timer->it_requeue_pending + 2) & ~REQUEUE_PENDING; 871 + /* Reset overrun accounting */ 872 + timer->it_overrun_last = 0; 873 + timer->it_overrun = -1LL; 874 + } 875 + 860 876 /* Set a POSIX.1b interval timer. 
*/ 861 877 int common_timer_set(struct k_itimer *timr, int flags, 862 878 struct itimerspec64 *new_setting, ··· 896 878 return TIMER_RETRY; 897 879 898 880 timr->it_active = 0; 899 - timr->it_requeue_pending = (timr->it_requeue_pending + 2) & 900 - ~REQUEUE_PENDING; 901 - timr->it_overrun_last = 0; 881 + posix_timer_set_common(timr, new_setting); 902 882 903 - /* Switch off the timer when it_value is zero */ 883 + /* Keep timer disarmed when it_value is zero */ 904 884 if (!new_setting->it_value.tv_sec && !new_setting->it_value.tv_nsec) 905 885 return 0; 906 886 907 - timr->it_interval = timespec64_to_ktime(new_setting->it_interval); 908 887 expires = timespec64_to_ktime(new_setting->it_value); 909 888 if (flags & TIMER_ABSTIME) 910 889 expires = timens_ktime_to_host(timr->it_clock, expires); ··· 919 904 const struct k_clock *kc; 920 905 struct k_itimer *timr; 921 906 unsigned long flags; 922 - int error = 0; 907 + int error; 923 908 924 909 if (!timespec64_valid(&new_spec64->it_interval) || 925 910 !timespec64_valid(&new_spec64->it_value)) ··· 932 917 retry: 933 918 if (!timr) 934 919 return -EINVAL; 920 + 921 + if (old_spec64) 922 + old_spec64->it_interval = ktime_to_timespec64(timr->it_interval); 935 923 936 924 kc = timr->kclock; 937 925 if (WARN_ON_ONCE(!kc || !kc->timer_set)) ··· 1039 1021 } 1040 1022 1041 1023 spin_lock(&current->sighand->siglock); 1042 - list_del(&timer->list); 1024 + hlist_del(&timer->list); 1043 1025 spin_unlock(&current->sighand->siglock); 1044 1026 /* 1045 1027 * A concurrent lookup could check timer::it_signal lockless. 
It ··· 1089 1071 1090 1072 goto retry_delete; 1091 1073 } 1092 - list_del(&timer->list); 1074 + hlist_del(&timer->list); 1093 1075 1094 1076 /* 1095 1077 * Setting timer::it_signal to NULL is technically not required ··· 1110 1092 */ 1111 1093 void exit_itimers(struct task_struct *tsk) 1112 1094 { 1113 - struct list_head timers; 1114 - struct k_itimer *tmr; 1095 + struct hlist_head timers; 1115 1096 1116 - if (list_empty(&tsk->signal->posix_timers)) 1097 + if (hlist_empty(&tsk->signal->posix_timers)) 1117 1098 return; 1118 1099 1119 1100 /* Protect against concurrent read via /proc/$PID/timers */ 1120 1101 spin_lock_irq(&tsk->sighand->siglock); 1121 - list_replace_init(&tsk->signal->posix_timers, &timers); 1102 + hlist_move_list(&tsk->signal->posix_timers, &timers); 1122 1103 spin_unlock_irq(&tsk->sighand->siglock); 1123 1104 1124 1105 /* The timers are not longer accessible via tsk::signal */ 1125 - while (!list_empty(&timers)) { 1126 - tmr = list_first_entry(&timers, struct k_itimer, list); 1127 - itimer_delete(tmr); 1128 - } 1106 + while (!hlist_empty(&timers)) 1107 + itimer_delete(hlist_entry(timers.first, struct k_itimer, list)); 1129 1108 } 1130 1109 1131 1110 SYSCALL_DEFINE2(clock_settime, const clockid_t, which_clock,
+2 -1
kernel/time/posix-timers.h
··· 36 36 extern const struct k_clock clock_thread; 37 37 extern const struct k_clock alarm_clock; 38 38 39 - int posix_timer_event(struct k_itimer *timr, int si_private); 39 + int posix_timer_queue_signal(struct k_itimer *timr); 40 40 41 41 void common_timer_get(struct k_itimer *timr, struct itimerspec64 *cur_setting); 42 42 int common_timer_set(struct k_itimer *timr, int flags, 43 43 struct itimerspec64 *new_setting, 44 44 struct itimerspec64 *old_setting); 45 + void posix_timer_set_common(struct k_itimer *timer, struct itimerspec64 *new_setting); 45 46 int common_timer_del(struct k_itimer *timer);
+3 -1
kernel/time/timekeeping.c
··· 2553 2553 { 2554 2554 struct timekeeper *tk = &tk_core.timekeeper; 2555 2555 struct audit_ntp_data ad; 2556 + bool offset_set = false; 2556 2557 bool clock_set = false; 2557 2558 struct timespec64 ts; 2558 2559 unsigned long flags; ··· 2576 2575 if (ret) 2577 2576 return ret; 2578 2577 2578 + offset_set = delta.tv_sec != 0; 2579 2579 audit_tk_injoffset(delta); 2580 2580 } 2581 2581 ··· 2610 2608 if (clock_set) 2611 2609 clock_was_set(CLOCK_SET_WALL); 2612 2610 2613 - ntp_notify_cmos_timer(); 2611 + ntp_notify_cmos_timer(offset_set); 2614 2612 2615 2613 return ret; 2616 2614 }
+48 -14
kernel/time/timer.c
··· 365 365 rem = j % HZ; 366 366 367 367 /* 368 - * If the target jiffie is just after a whole second (which can happen 368 + * If the target jiffy is just after a whole second (which can happen 369 369 * due to delays of the timer irq, long irq off times etc etc) then 370 370 * we should round down to the whole second, not up. Use 1/4th second 371 371 * as cutoff for this rounding as an extreme upper bound for this. ··· 672 672 * Set the next expiry time and kick the CPU so it 673 673 * can reevaluate the wheel: 674 674 */ 675 - base->next_expiry = bucket_expiry; 675 + WRITE_ONCE(base->next_expiry, bucket_expiry); 676 676 base->timers_pending = true; 677 677 base->next_expiry_recalc = false; 678 678 trigger_dyntick_cpu(base, timer); ··· 1561 1561 * the waiter to acquire the lock and make progress. 1562 1562 */ 1563 1563 static void timer_sync_wait_running(struct timer_base *base) 1564 + __releases(&base->lock) __releases(&base->expiry_lock) 1565 + __acquires(&base->expiry_lock) __acquires(&base->lock) 1564 1566 { 1565 1567 if (atomic_read(&base->timer_waiters)) { 1566 1568 raw_spin_unlock_irq(&base->lock); ··· 1900 1898 * 1901 1899 * Store next expiry time in base->next_expiry. 1902 1900 */ 1903 - static void next_expiry_recalc(struct timer_base *base) 1901 + static void timer_recalc_next_expiry(struct timer_base *base) 1904 1902 { 1905 1903 unsigned long clk, next, adj; 1906 1904 unsigned lvl, offset = 0; ··· 1930 1928 * bits are zero, we look at the next level as is. If not we 1931 1929 * need to advance it by one because that's going to be the 1932 1930 * next expiring bucket in that level. base->clk is the next 1933 - * expiring jiffie. So in case of: 1931 + * expiring jiffy. 
So in case of: 1934 1932 * 1935 1933 * LVL5 LVL4 LVL3 LVL2 LVL1 LVL0 1936 1934 * 0 0 0 0 0 0 ··· 1966 1964 clk += adj; 1967 1965 } 1968 1966 1969 - base->next_expiry = next; 1967 + WRITE_ONCE(base->next_expiry, next); 1970 1968 base->next_expiry_recalc = false; 1971 1969 base->timers_pending = !(next == base->clk + NEXT_TIMER_MAX_DELTA); 1972 1970 } ··· 1995 1993 return basem; 1996 1994 1997 1995 /* 1998 - * Round up to the next jiffie. High resolution timers are 1996 + * Round up to the next jiffy. High resolution timers are 1999 1997 * off, so the hrtimers are expired in the tick and we need to 2000 1998 * make sure that this tick really expires the timer to avoid 2001 1999 * a ping pong of the nohz stop code. ··· 2009 2007 unsigned long basej) 2010 2008 { 2011 2009 if (base->next_expiry_recalc) 2012 - next_expiry_recalc(base); 2010 + timer_recalc_next_expiry(base); 2013 2011 2014 2012 /* 2015 2013 * Move next_expiry for the empty base into the future to prevent an ··· 2020 2018 * easy comparable to find out which base holds the first pending timer. 2021 2019 */ 2022 2020 if (!base->timers_pending) 2023 - base->next_expiry = basej + NEXT_TIMER_MAX_DELTA; 2021 + WRITE_ONCE(base->next_expiry, basej + NEXT_TIMER_MAX_DELTA); 2024 2022 2025 2023 return base->next_expiry; 2026 2024 } ··· 2254 2252 base_global, &tevt); 2255 2253 2256 2254 /* 2257 - * If the next event is only one jiffie ahead there is no need to call 2255 + * If the next event is only one jiffy ahead there is no need to call 2258 2256 * timer migration hierarchy related functions. The value for the next 2259 2257 * global timer in @tevt struct equals then KTIME_MAX. This is also 2260 2258 * true, when the timer base is idle. ··· 2413 2411 * jiffies to avoid endless requeuing to current jiffies. 
2414 2412 */ 2415 2413 base->clk++; 2416 - next_expiry_recalc(base); 2414 + timer_recalc_next_expiry(base); 2417 2415 2418 2416 while (levels--) 2419 2417 expire_timers(base, heads + levels); ··· 2464 2462 hrtimer_run_queues(); 2465 2463 2466 2464 for (int i = 0; i < NR_BASES; i++, base++) { 2467 - /* Raise the softirq only if required. */ 2468 - if (time_after_eq(jiffies, base->next_expiry) || 2465 + /* 2466 + * Raise the softirq only if required. 2467 + * 2468 + * timer_base::next_expiry can be written by a remote CPU while 2469 + * holding the lock. If this write happens at the same time than 2470 + * the lockless local read, sanity checker could complain about 2471 + * data corruption. 2472 + * 2473 + * There are two possible situations where 2474 + * timer_base::next_expiry is written by a remote CPU: 2475 + * 2476 + * 1. Remote CPU expires global timers of this CPU and updates 2477 + * timer_base::next_expiry of BASE_GLOBAL afterwards in 2478 + * next_timer_interrupt() or timer_recalc_next_expiry(). The 2479 + * worst outcome is a superfluous raise of the timer softirq 2480 + * when the not yet updated value is read. 2481 + * 2482 + * 2. A new first pinned timer is enqueued by a remote CPU 2483 + * and therefore timer_base::next_expiry of BASE_LOCAL is 2484 + * updated. When this update is missed, this isn't a 2485 + * problem, as an IPI is executed nevertheless when the CPU 2486 + * was idle before. When the CPU wasn't idle but the update 2487 + * is missed, then the timer would expire one jiffy late - 2488 + * bad luck. 2489 + * 2490 + * Those unlikely corner cases where the worst outcome is only a 2491 + * one jiffy delay or a superfluous raise of the softirq are 2492 + * not that expensive as doing the check always while holding 2493 + * the lock. 2494 + * 2495 + * Possible remote writers are using WRITE_ONCE(). Local reader 2496 + * uses therefore READ_ONCE(). 
2497 + */ 2498 + if (time_after_eq(jiffies, READ_ONCE(base->next_expiry)) || 2469 2499 (i == BASE_DEF && tmigr_requires_handle_remote())) { 2470 2500 raise_softirq(TIMER_SOFTIRQ); 2471 2501 return; ··· 2764 2730 */ 2765 2731 void msleep(unsigned int msecs) 2766 2732 { 2767 - unsigned long timeout = msecs_to_jiffies(msecs) + 1; 2733 + unsigned long timeout = msecs_to_jiffies(msecs); 2768 2734 2769 2735 while (timeout) 2770 2736 timeout = schedule_timeout_uninterruptible(timeout); ··· 2778 2744 */ 2779 2745 unsigned long msleep_interruptible(unsigned int msecs) 2780 2746 { 2781 - unsigned long timeout = msecs_to_jiffies(msecs) + 1; 2747 + unsigned long timeout = msecs_to_jiffies(msecs); 2782 2748 2783 2749 while (timeout && !signal_pending(current)) 2784 2750 timeout = schedule_timeout_interruptible(timeout);
+1 -1
lib/Kconfig.debug
··· 97 97 using "boot_delay=N". 98 98 99 99 It is likely that you would also need to use "lpj=M" to preset 100 - the "loops per jiffie" value. 100 + the "loops per jiffy" value. 101 101 See a previous boot log for the "lpj" value to use for your 102 102 system, and then set "lpj=M" before setting "boot_delay=N". 103 103 NOTE: Using this option may adversely affect SMP systems.
+1 -1
net/batman-adv/types.h
··· 287 287 /** @lock: lock to protect the list of fragments */ 288 288 spinlock_t lock; 289 289 290 - /** @timestamp: time (jiffie) of last received fragment */ 290 + /** @timestamp: time (jiffy) of last received fragment */ 291 291 unsigned long timestamp; 292 292 293 293 /** @seqno: sequence number of the fragments in the list */
+448 -100
tools/testing/selftests/timers/posix_timers.c
···
  *
  * Kernel loop code stolen from Steven Rostedt <srostedt@redhat.com>
  */
-
+#define _GNU_SOURCE
 #include <sys/time.h>
+#include <sys/types.h>
 #include <stdio.h>
 #include <signal.h>
+#include <stdint.h>
+#include <string.h>
 #include <unistd.h>
 #include <time.h>
 #include <pthread.h>
···
 
 #define DELAY 2
 #define USECS_PER_SEC 1000000
+#define NSECS_PER_SEC 1000000000
+
+static void __fatal_error(const char *test, const char *name, const char *what)
+{
+	char buf[64];
+
+	strerror_r(errno, buf, sizeof(buf));
+
+	if (name && strlen(name))
+		ksft_exit_fail_msg("%s %s %s %s\n", test, name, what, buf);
+	else
+		ksft_exit_fail_msg("%s %s %s\n", test, what, buf);
+}
+
+#define fatal_error(name, what)	__fatal_error(__func__, name, what)
 
 static volatile int done;
 
···
 	return 0;
 }
 
-static int check_itimer(int which)
+static void check_itimer(int which, const char *name)
 {
-	const char *name;
-	int err;
 	struct timeval start, end;
 	struct itimerval val = {
 		.it_value.tv_sec = DELAY,
 	};
-
-	if (which == ITIMER_VIRTUAL)
-		name = "ITIMER_VIRTUAL";
-	else if (which == ITIMER_PROF)
-		name = "ITIMER_PROF";
-	else if (which == ITIMER_REAL)
-		name = "ITIMER_REAL";
-	else
-		return -1;
 
 	done = 0;
 
···
 	else if (which == ITIMER_REAL)
 		signal(SIGALRM, sig_handler);
 
-	err = gettimeofday(&start, NULL);
-	if (err < 0) {
-		ksft_perror("Can't call gettimeofday()");
-		return -1;
-	}
+	if (gettimeofday(&start, NULL) < 0)
+		fatal_error(name, "gettimeofday()");
 
-	err = setitimer(which, &val, NULL);
-	if (err < 0) {
-		ksft_perror("Can't set timer");
-		return -1;
-	}
+	if (setitimer(which, &val, NULL) < 0)
+		fatal_error(name, "setitimer()");
 
 	if (which == ITIMER_VIRTUAL)
 		user_loop();
···
 	else if (which == ITIMER_REAL)
 		idle_loop();
 
-	err = gettimeofday(&end, NULL);
-	if (err < 0) {
-		ksft_perror("Can't call gettimeofday()");
-		return -1;
-	}
+	if (gettimeofday(&end, NULL) < 0)
+		fatal_error(name, "gettimeofday()");
 
 	ksft_test_result(check_diff(start, end) == 0, "%s\n", name);
-
-	return 0;
 }
 
-static int check_timer_create(int which)
+static void check_timer_create(int which, const char *name)
 {
-	const char *type;
-	int err;
-	timer_t id;
 	struct timeval start, end;
 	struct itimerspec val = {
 		.it_value.tv_sec = DELAY,
 	};
-
-	if (which == CLOCK_THREAD_CPUTIME_ID) {
-		type = "thread";
-	} else if (which == CLOCK_PROCESS_CPUTIME_ID) {
-		type = "process";
-	} else {
-		ksft_print_msg("Unknown timer_create() type %d\n", which);
-		return -1;
-	}
+	timer_t id;
 
 	done = 0;
-	err = timer_create(which, NULL, &id);
-	if (err < 0) {
-		ksft_perror("Can't create timer");
-		return -1;
-	}
-	signal(SIGALRM, sig_handler);
 
-	err = gettimeofday(&start, NULL);
-	if (err < 0) {
-		ksft_perror("Can't call gettimeofday()");
-		return -1;
-	}
+	if (timer_create(which, NULL, &id) < 0)
+		fatal_error(name, "timer_create()");
 
-	err = timer_settime(id, 0, &val, NULL);
-	if (err < 0) {
-		ksft_perror("Can't set timer");
-		return -1;
-	}
+	if (signal(SIGALRM, sig_handler) == SIG_ERR)
+		fatal_error(name, "signal()");
+
+	if (gettimeofday(&start, NULL) < 0)
+		fatal_error(name, "gettimeofday()");
+
+	if (timer_settime(id, 0, &val, NULL) < 0)
+		fatal_error(name, "timer_settime()");
 
 	user_loop();
 
-	err = gettimeofday(&end, NULL);
-	if (err < 0) {
-		ksft_perror("Can't call gettimeofday()");
-		return -1;
-	}
+	if (gettimeofday(&end, NULL) < 0)
+		fatal_error(name, "gettimeofday()");
 
 	ksft_test_result(check_diff(start, end) == 0,
-			 "timer_create() per %s\n", type);
-
-	return 0;
+			 "timer_create() per %s\n", name);
 }
 
 static pthread_t ctd_thread;
···
 
 	ctd_count = 100;
 	if (timer_create(CLOCK_PROCESS_CPUTIME_ID, NULL, &id))
-		return "Can't create timer\n";
+		fatal_error(NULL, "timer_create()");
 	if (timer_settime(id, 0, &val, NULL))
-		return "Can't set timer\n";
-
+		fatal_error(NULL, "timer_settime()");
 	while (ctd_count > 0 && !ctd_failed)
 		;
 
 	if (timer_delete(id))
-		return "Can't delete timer\n";
+		fatal_error(NULL, "timer_delete()");
 
 	return NULL;
 }
···
 /*
  * Test that only the running thread receives the timer signal.
  */
-static int check_timer_distribution(void)
+static void check_timer_distribution(void)
 {
-	const char *errmsg;
+	if (signal(SIGALRM, ctd_sighandler) == SIG_ERR)
+		fatal_error(NULL, "signal()");
 
-	signal(SIGALRM, ctd_sighandler);
-
-	errmsg = "Can't create thread\n";
 	if (pthread_create(&ctd_thread, NULL, ctd_thread_func, NULL))
-		goto err;
+		fatal_error(NULL, "pthread_create()");
 
-	errmsg = "Can't join thread\n";
-	if (pthread_join(ctd_thread, (void **)&errmsg) || errmsg)
-		goto err;
+	if (pthread_join(ctd_thread, NULL))
+		fatal_error(NULL, "pthread_join()");
 
 	if (!ctd_failed)
 		ksft_test_result_pass("check signal distribution\n");
···
 		ksft_test_result_fail("check signal distribution\n");
 	else
 		ksft_test_result_skip("check signal distribution (old kernel)\n");
-	return 0;
-err:
-	ksft_print_msg("%s", errmsg);
-	return -1;
+}
+
+struct tmrsig {
+	int signals;
+	int overruns;
+};
+
+static void siginfo_handler(int sig, siginfo_t *si, void *uc)
+{
+	struct tmrsig *tsig = si ? si->si_ptr : NULL;
+
+	if (tsig) {
+		tsig->signals++;
+		tsig->overruns += si->si_overrun;
+	}
+}
+
+static void *ignore_thread(void *arg)
+{
+	unsigned int *tid = arg;
+	sigset_t set;
+
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_BLOCK)");
+
+	*tid = gettid();
+	sleep(100);
+
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_UNBLOCK)");
+	return NULL;
+}
+
+static void check_sig_ign(int thread)
+{
+	struct tmrsig tsig = { };
+	struct itimerspec its;
+	unsigned int tid = 0;
+	struct sigaction sa;
+	struct sigevent sev;
+	pthread_t pthread;
+	timer_t timerid;
+	sigset_t set;
+
+	if (thread) {
+		if (pthread_create(&pthread, NULL, ignore_thread, &tid))
+			fatal_error(NULL, "pthread_create()");
+		sleep(1);
+	}
+
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = siginfo_handler;
+	sigemptyset(&sa.sa_mask);
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(NULL, "sigaction()");
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+	sev.sigev_value.sival_ptr = &tsig;
+	if (thread) {
+		sev.sigev_notify = SIGEV_THREAD_ID;
+		sev._sigev_un._tid = tid;
+	}
+
+	if (timer_create(CLOCK_MONOTONIC, &sev, &timerid))
+		fatal_error(NULL, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	timer_settime(timerid, 0, &its, NULL);
+
+	sleep(1);
+
+	/* Set the signal to be ignored */
+	if (signal(SIGUSR1, SIG_IGN) == SIG_ERR)
+		fatal_error(NULL, "signal(SIG_IGN)");
+
+	sleep(1);
+
+	if (thread) {
+		/* Stop the thread first. No signal should be delivered to it */
+		if (pthread_cancel(pthread))
+			fatal_error(NULL, "pthread_cancel()");
+		if (pthread_join(pthread, NULL))
+			fatal_error(NULL, "pthread_join()");
+	}
+
+	/* Restore the handler */
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(NULL, "sigaction()");
+
+	sleep(1);
+
+	/* Unblock it, which should deliver the signal in the !thread case */
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_UNBLOCK)");
+
+	if (timer_delete(timerid))
+		fatal_error(NULL, "timer_delete()");
+
+	if (!thread) {
+		ksft_test_result(tsig.signals == 1 && tsig.overruns == 29,
+				 "check_sig_ign SIGEV_SIGNAL\n");
+	} else {
+		ksft_test_result(tsig.signals == 0 && tsig.overruns == 0,
+				 "check_sig_ign SIGEV_THREAD_ID\n");
+	}
+}
+
+static void check_rearm(void)
+{
+	struct tmrsig tsig = { };
+	struct itimerspec its;
+	struct sigaction sa;
+	struct sigevent sev;
+	timer_t timerid;
+	sigset_t set;
+
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = siginfo_handler;
+	sigemptyset(&sa.sa_mask);
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(NULL, "sigaction()");
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+	sev.sigev_value.sival_ptr = &tsig;
+	if (timer_create(CLOCK_MONOTONIC, &sev, &timerid))
+		fatal_error(NULL, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(NULL, "timer_settime()");
+
+	sleep(1);
+
+	/* Reprogram the timer to single shot */
+	its.it_value.tv_sec = 10;
+	its.it_value.tv_nsec = 0;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 0;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(NULL, "timer_settime()");
+
+	/* Unblock it, which should not deliver a signal */
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_UNBLOCK)");
+
+	if (timer_delete(timerid))
+		fatal_error(NULL, "timer_delete()");
+
+	ksft_test_result(!tsig.signals, "check_rearm\n");
+}
+
+static void check_delete(void)
+{
+	struct tmrsig tsig = { };
+	struct itimerspec its;
+	struct sigaction sa;
+	struct sigevent sev;
+	timer_t timerid;
+	sigset_t set;
+
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = siginfo_handler;
+	sigemptyset(&sa.sa_mask);
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(NULL, "sigaction()");
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+	sev.sigev_value.sival_ptr = &tsig;
+	if (timer_create(CLOCK_MONOTONIC, &sev, &timerid))
+		fatal_error(NULL, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(NULL, "timer_settime()");
+
+	sleep(1);
+
+	if (timer_delete(timerid))
+		fatal_error(NULL, "timer_delete()");
+
+	/* Unblock it, which should not deliver a signal */
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(NULL, "sigprocmask(SIG_UNBLOCK)");
+
+	ksft_test_result(!tsig.signals, "check_delete\n");
+}
+
+static inline int64_t calcdiff_ns(struct timespec t1, struct timespec t2)
+{
+	int64_t diff;
+
+	diff = NSECS_PER_SEC * (int64_t)((int) t1.tv_sec - (int) t2.tv_sec);
+	diff += ((int) t1.tv_nsec - (int) t2.tv_nsec);
+	return diff;
+}
+
+static void check_sigev_none(int which, const char *name)
+{
+	struct timespec start, now;
+	struct itimerspec its;
+	struct sigevent sev;
+	timer_t timerid;
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_NONE;
+
+	if (timer_create(which, &sev, &timerid))
+		fatal_error(name, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	timer_settime(timerid, 0, &its, NULL);
+
+	if (clock_gettime(which, &start))
+		fatal_error(name, "clock_gettime()");
+
+	do {
+		if (clock_gettime(which, &now))
+			fatal_error(name, "clock_gettime()");
+	} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+
+	if (timer_gettime(timerid, &its))
+		fatal_error(name, "timer_gettime()");
+
+	if (timer_delete(timerid))
+		fatal_error(name, "timer_delete()");
+
+	ksft_test_result(its.it_value.tv_sec || its.it_value.tv_nsec,
+			 "check_sigev_none %s\n", name);
+}
+
+static void check_gettime(int which, const char *name)
+{
+	struct itimerspec its, prev;
+	struct timespec start, now;
+	struct sigevent sev;
+	timer_t timerid;
+	int wraps = 0;
+	sigset_t set;
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(name, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+
+	if (timer_create(which, &sev, &timerid))
+		fatal_error(name, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(name, "timer_settime()");
+
+	if (timer_gettime(timerid, &prev))
+		fatal_error(name, "timer_gettime()");
+
+	if (clock_gettime(which, &start))
+		fatal_error(name, "clock_gettime()");
+
+	do {
+		if (clock_gettime(which, &now))
+			fatal_error(name, "clock_gettime()");
+		if (timer_gettime(timerid, &its))
+			fatal_error(name, "timer_gettime()");
+		if (its.it_value.tv_nsec > prev.it_value.tv_nsec)
+			wraps++;
+		prev = its;
+
+	} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+
+	if (timer_delete(timerid))
+		fatal_error(name, "timer_delete()");
+
+	ksft_test_result(wraps > 1, "check_gettime %s\n", name);
+}
+
+static void check_overrun(int which, const char *name)
+{
+	struct timespec start, now;
+	struct tmrsig tsig = { };
+	struct itimerspec its;
+	struct sigaction sa;
+	struct sigevent sev;
+	timer_t timerid;
+	sigset_t set;
+
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = siginfo_handler;
+	sigemptyset(&sa.sa_mask);
+	if (sigaction(SIGUSR1, &sa, NULL))
+		fatal_error(name, "sigaction()");
+
+	/* Block the signal */
+	sigemptyset(&set);
+	sigaddset(&set, SIGUSR1);
+	if (sigprocmask(SIG_BLOCK, &set, NULL))
+		fatal_error(name, "sigprocmask(SIG_BLOCK)");
+
+	memset(&sev, 0, sizeof(sev));
+	sev.sigev_notify = SIGEV_SIGNAL;
+	sev.sigev_signo = SIGUSR1;
+	sev.sigev_value.sival_ptr = &tsig;
+	if (timer_create(which, &sev, &timerid))
+		fatal_error(name, "timer_create()");
+
+	/* Start the timer to expire in 100ms and 100ms intervals */
+	its.it_value.tv_sec = 0;
+	its.it_value.tv_nsec = 100000000;
+	its.it_interval.tv_sec = 0;
+	its.it_interval.tv_nsec = 100000000;
+	if (timer_settime(timerid, 0, &its, NULL))
+		fatal_error(name, "timer_settime()");
+
+	if (clock_gettime(which, &start))
+		fatal_error(name, "clock_gettime()");
+
+	do {
+		if (clock_gettime(which, &now))
+			fatal_error(name, "clock_gettime()");
+	} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+
+	/* Unblock it, which should deliver a signal */
+	if (sigprocmask(SIG_UNBLOCK, &set, NULL))
+		fatal_error(name, "sigprocmask(SIG_UNBLOCK)");
+
+	if (timer_delete(timerid))
+		fatal_error(name, "timer_delete()");
+
+	ksft_test_result(tsig.signals == 1 && tsig.overruns == 9,
+			 "check_overrun %s\n", name);
 }
 
 int main(int argc, char **argv)
 {
 	ksft_print_header();
-	ksft_set_plan(6);
+	ksft_set_plan(18);
 
 	ksft_print_msg("Testing posix timers. False negative may happen on CPU execution \n");
 	ksft_print_msg("based timers if other threads run on the CPU...\n");
 
-	if (check_itimer(ITIMER_VIRTUAL) < 0)
-		ksft_exit_fail();
-
-	if (check_itimer(ITIMER_PROF) < 0)
-		ksft_exit_fail();
-
-	if (check_itimer(ITIMER_REAL) < 0)
-		ksft_exit_fail();
-
-	if (check_timer_create(CLOCK_THREAD_CPUTIME_ID) < 0)
-		ksft_exit_fail();
+	check_itimer(ITIMER_VIRTUAL, "ITIMER_VIRTUAL");
+	check_itimer(ITIMER_PROF, "ITIMER_PROF");
+	check_itimer(ITIMER_REAL, "ITIMER_REAL");
+	check_timer_create(CLOCK_THREAD_CPUTIME_ID, "CLOCK_THREAD_CPUTIME_ID");
 
 	/*
 	 * It's unfortunately hard to reliably test a timer expiration
···
 	 * to ensure true parallelism. So test only one thread until we
 	 * find a better solution.
 	 */
-	if (check_timer_create(CLOCK_PROCESS_CPUTIME_ID) < 0)
-		ksft_exit_fail();
+	check_timer_create(CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID");
+	check_timer_distribution();
 
-	if (check_timer_distribution() < 0)
-		ksft_exit_fail();
+	check_sig_ign(0);
+	check_sig_ign(1);
+	check_rearm();
+	check_delete();
+	check_sigev_none(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
+	check_sigev_none(CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID");
+	check_gettime(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
+	check_gettime(CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID");
+	check_gettime(CLOCK_THREAD_CPUTIME_ID, "CLOCK_THREAD_CPUTIME_ID");
+	check_overrun(CLOCK_MONOTONIC, "CLOCK_MONOTONIC");
+	check_overrun(CLOCK_PROCESS_CPUTIME_ID, "CLOCK_PROCESS_CPUTIME_ID");
+	check_overrun(CLOCK_THREAD_CPUTIME_ID, "CLOCK_THREAD_CPUTIME_ID");
 
 	ksft_finished();
 }