Merge tag 'pr-20141223-x86-vdso' of git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux into x86/urgent

Pull VDSO fix from Andy Lutomirski:

"This is hopefully the last vdso fix for 3.19. It should be very
safe (it just adds a volatile).

I don't think it fixes an actual bug (the __getcpu calls in the
pvclock code may not have been needed in the first place), but
discussion on that point is ongoing.

It also fixes a big performance issue in 3.18 and earlier in which
the lsl instructions in vclock_gettime got hoisted so far up the
function that they happened even when the function they were in was
never called. In 3.19, the performance issue seems to be gone due to
the whims of my compiler and some interaction with a branch that's
now gone.

I'll hopefully have a much bigger overhaul of the pvclock code
for 3.20, but it needs careful review."

Signed-off-by: Ingo Molnar <mingo@kernel.org>

Changed files
+4 -2
arch/x86/include/asm/vgtod.h
@@ -80,9 +80,11 @@
 
 	/*
 	 * Load per CPU data from GDT.  LSL is faster than RDTSCP and
-	 * works on all CPUs.
+	 * works on all CPUs.  This is volatile so that it orders
+	 * correctly wrt barrier() and to keep gcc from cleverly
+	 * hoisting it out of the calling function.
 	 */
-	asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+	asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
 
 	return p;
 }