x86: add rdtsc barrier to TSC sync check

Impact: fix incorrectly marked unstable TSC clock

Commit 0d12cdd ("sched: improve sched_clock() performance") introduced
a regression on one of the test systems here.

With the patch, I see:

checking TSC synchronization [CPU#0 -> CPU#1]:
Measured 28 cycles TSC warp between CPUs, turning off TSC clock.
Marking TSC unstable due to check_tsc_sync_source failed

Whereas without the patch, the syncs pass fine on all CPUs:

checking TSC synchronization [CPU#0 -> CPU#1]: passed.

Due to this, the TSC is marked unstable when it is not actually unstable.
This is because the serialization in check_tsc_warp() went away with that
commit.
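
check_tsc_warp() itself was not touched by that commit; its get_cycles()
calls had been picking up serialization implicitly from rdtsc_barrier()
calls in the low-level TSC read, which commit 0d12cdd removed to speed up
sched_clock(). A simplified sketch of that read as it looked before the
commit (plain inline asm standing in for the kernel's msr.h macros; the
details are recalled from memory, so treat them as approximate):

/* Simplified sketch: the low-level TSC read before commit 0d12cdd */
static __always_inline unsigned long long __native_read_tsc(void)
{
	unsigned int lo, hi;

	rdtsc_barrier();   /* don't let the rdtsc run ahead of earlier accesses */
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	rdtsc_barrier();   /* don't let later accesses run ahead of the rdtsc */

	return ((unsigned long long)hi << 32) | lo;
}

Dropping the two barriers is fine for sched_clock() itself, but it takes
away the ordering that the warp check had been relying on.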

As per the discussion on this thread, the correct way to fix this is to
add explicit rdtsc barriers, as below.
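
For reference, rdtsc_barrier() is the existing helper that stops RDTSC
speculation. On kernels of this vintage it lives in
arch/x86/include/asm/system.h and looks roughly like the sketch below
(the alternative()/CPU-feature details are from memory and may differ
slightly):

/* Rough shape of rdtsc_barrier(): patched at boot to MFENCE or LFENCE,
 * whichever fence the CPU needs to serialize RDTSC, and left as NOPs
 * otherwise. */
static __always_inline void rdtsc_barrier(void)
{
	alternative(ASM_NOP3, "mfence", X86_FEATURE_MFENCE_RDTSC);
	alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
}

Bracketing each get_cycles() call with these barriers pins the TSC read
to its place in program order, so the timestamps exchanged through
last_tsc under sync_lock really are taken in the order the code suggests,
and speculation alone cannot show up as an apparent warp.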

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Authored by Venki Pallipadi and committed by Ingo Molnar (93ce99e8, 26a3e991)

 arch/x86/kernel/tsc_sync.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kernel/tsc_sync.c b/arch/x86/kernel/tsc_sync.c
--- a/arch/x86/kernel/tsc_sync.c
+++ b/arch/x86/kernel/tsc_sync.c
@@ -46,7 +46,9 @@
 	cycles_t start, now, prev, end;
 	int i;
 
+	rdtsc_barrier();
 	start = get_cycles();
+	rdtsc_barrier();
 	/*
 	 * The measurement runs for 20 msecs:
 	 */
@@ -61,7 +63,9 @@
 		 */
 		__raw_spin_lock(&sync_lock);
 		prev = last_tsc;
+	rdtsc_barrier();
 		now = get_cycles();
+	rdtsc_barrier();
 		last_tsc = now;
 		__raw_spin_unlock(&sync_lock);
 