x86: Disable CLFLUSH support again

It turns out CLFLUSH support is still not complete; we
flush the wrong pages. Disable it again for the release.
Noticed by Jan Beulich, who then also spotted a stupid typo later.

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Andi Kleen, committed by Linus Torvalds (commits d3f3c934, 3f3f7b74)

+3 -2
+1 -1
arch/i386/mm/pageattr.c
@@ -82,7 +82,7 @@
 	struct page *p;
 
 	/* High level code is not ready for clflush yet */
-	if (cpu_has_clflush) {
+	if (0 && cpu_has_clflush) {
 		list_for_each_entry (p, lh, lru)
 			cache_flush_page(p);
 	} else if (boot_cpu_data.x86_model >= 4)
+2 -1
arch/x86_64/mm/pageattr.c
@@ -75,7 +75,8 @@
 
 	/* When clflush is available always use it because it is
 	   much cheaper than WBINVD. */
-	if (!cpu_has_clflush)
+	/* clflush is still broken. Disable for now. */
+	if (1 || !cpu_has_clflush)
 		asm volatile("wbinvd" ::: "memory");
 	else list_for_each_entry(pg, l, lru) {
 		void *adr = page_address(pg);