Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

x86/vdso: Prepare introduction of struct vdso_clock

To support multiple PTP clocks, the VDSO data structure needs to be
reworked. All clock-specific data will end up in struct vdso_clock, and
struct vdso_time_data will contain an array of VDSO clocks. At the moment,
vdso_clock is simply a define which maps struct vdso_clock to struct
vdso_time_data.

To prepare for the rework of the data structures, replace the struct
vdso_time_data pointer with a struct vdso_clock pointer where applicable.

No functional change.

Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
Signed-off-by: Nam Cao <namcao@linutronix.de>
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20250303-vdso-clock-v1-15-c1b5c69a166f@linutronix.de

Authored by Anna-Maria Behnsen and committed by Thomas Gleixner
bf0eff81 5911e16c

+8 -8
arch/x86/include/asm/vdso/gettimeofday.h
···
 	return U64_MAX;
 }
 
-static inline bool arch_vdso_clocksource_ok(const struct vdso_time_data *vd)
+static inline bool arch_vdso_clocksource_ok(const struct vdso_clock *vc)
 {
 	return true;
 }
···
  * declares everything with the MSB/Sign-bit set as invalid. Therefore the
  * effective mask is S64_MAX.
  */
-static __always_inline u64 vdso_calc_ns(const struct vdso_time_data *vd, u64 cycles, u64 base)
+static __always_inline u64 vdso_calc_ns(const struct vdso_clock *vc, u64 cycles, u64 base)
 {
-	u64 delta = cycles - vd->cycle_last;
+	u64 delta = cycles - vc->cycle_last;
 
 	/*
 	 * Negative motion and deltas which can cause multiplication
 	 * overflow require special treatment. This check covers both as
-	 * negative motion is guaranteed to be greater than @vd::max_cycles
+	 * negative motion is guaranteed to be greater than @vc::max_cycles
 	 * due to unsigned comparison.
 	 *
 	 * Due to the MSB/Sign-bit being used as invalid marker (see
 	 * arch_vdso_cycles_ok() above), the effective mask is S64_MAX, but that
 	 * case is also unlikely and will also take the unlikely path here.
 	 */
-	if (unlikely(delta > vd->max_cycles)) {
+	if (unlikely(delta > vc->max_cycles)) {
 		/*
 		 * Due to the above mentioned TSC wobbles, filter out
 		 * negative motion. Per the above masking, the effective
 		 * sign bit is now bit 62.
 		 */
 		if (delta & (1ULL << 62))
-			return base >> vd->shift;
+			return base >> vc->shift;
 
 		/* Handle multiplication overflow gracefully */
-		return mul_u64_u32_add_u64_shr(delta & S64_MAX, vd->mult, base, vd->shift);
+		return mul_u64_u32_add_u64_shr(delta & S64_MAX, vc->mult, base, vc->shift);
 	}
 
-	return ((delta * vd->mult) + base) >> vd->shift;
+	return ((delta * vc->mult) + base) >> vc->shift;
 }
 #define vdso_calc_ns vdso_calc_ns