[IA64] Simplify and fix udelay()

The original ia64 udelay() was simple, but flawed for platforms without
synchronized ITCs: a preemption and migration to another CPU during the
while-loop likely resulted in too-early termination or very, very
lengthy looping.

The first fix (now in 2.6.15) broke the delay loop into smaller,
non-preemptible chunks, reenabling preemption between the chunks. This
fix is flawed in that the total udelay is computed to be the sum of just
the non-preemptible while-loop pieces, i.e., not counting the time spent
in the interim preemptible periods. If an interrupt or a migration
occurs during one of these interim periods, then that time is invisible
and only serves to lengthen the effective udelay().

This new fix backs out the current flawed fix and returns to a simple
udelay(), fully preemptible and interruptible. It implements two simple
alternative udelay() routines: one a default generic version that uses
ia64_get_itc(), and the other an sn-specific version that uses that
platform's RTC.

Signed-off-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>

Authored by hawkes@sgi.com, committed by Tony Luck (commits defbb2c9, 4c2cd966).

+38 -22
+17 -22
arch/ia64/kernel/time.c
···
 	set_normalized_timespec(&wall_to_monotonic, -xtime.tv_sec, -xtime.tv_nsec);
 }
 
-#define SMALLUSECS 100
+/*
+ * Generic udelay assumes that if preemption is allowed and the thread
+ * migrates to another CPU, that the ITC values are synchronized across
+ * all CPUs.
+ */
+static void
+ia64_itc_udelay (unsigned long usecs)
+{
+	unsigned long start = ia64_get_itc();
+	unsigned long end = start + usecs*local_cpu_data->cyc_per_usec;
+
+	while (time_before(ia64_get_itc(), end))
+		cpu_relax();
+}
+
+void (*ia64_udelay)(unsigned long usecs) = &ia64_itc_udelay;
 
 void
 udelay (unsigned long usecs)
 {
-	unsigned long start;
-	unsigned long cycles;
-	unsigned long smallusecs;
-
-	/*
-	 * Execute the non-preemptible delay loop (because the ITC might
-	 * not be synchronized between CPUS) in relatively short time
-	 * chunks, allowing preemption between the chunks.
-	 */
-	while (usecs > 0) {
-		smallusecs = (usecs > SMALLUSECS) ? SMALLUSECS : usecs;
-		preempt_disable();
-		cycles = smallusecs*local_cpu_data->cyc_per_usec;
-		start = ia64_get_itc();
-
-		while (ia64_get_itc() - start < cycles)
-			cpu_relax();
-
-		preempt_enable();
-		usecs -= smallusecs;
-	}
+	(*ia64_udelay)(usecs);
 }
 EXPORT_SYMBOL(udelay);
+19
arch/ia64/sn/kernel/sn2/timer.c
···
 
 #include <asm/hw_irq.h>
 #include <asm/system.h>
+#include <asm/timex.h>
 
 #include <asm/sn/leds.h>
 #include <asm/sn/shub_mmr.h>
···
 	.source = TIME_SOURCE_MMIO64
 };
 
+/*
+ * sn udelay uses the RTC instead of the ITC because the ITC is not
+ * synchronized across all CPUs, and the thread may migrate to another CPU
+ * if preemption is enabled.
+ */
+static void
+ia64_sn_udelay (unsigned long usecs)
+{
+	unsigned long start = rtc_time();
+	unsigned long end = start +
+			usecs * sn_rtc_cycles_per_second / 1000000;
+
+	while (time_before((unsigned long)rtc_time(), end))
+		cpu_relax();
+}
+
 void __init sn_timer_init(void)
 {
 	sn2_interpolator.frequency = sn_rtc_cycles_per_second;
 	sn2_interpolator.addr = RTC_COUNTER_ADDR;
 	register_time_interpolator(&sn2_interpolator);
+
+	ia64_udelay = &ia64_sn_udelay;
 }
+2
include/asm-ia64/timex.h
···
 
 typedef unsigned long cycles_t;
 
+extern void (*ia64_udelay)(unsigned long usecs);
+
 /*
  * For performance reasons, we don't want to define CLOCK_TICK_TRATE as
  * local_cpu_data->itc_rate. Fortunately, we don't have to, either: according to George