[PATCH] ppc64: Fix huge pages MMU mapping bug

The current kernel has a couple of sneaky bugs in the ppc64 hugetlb code
that can leave huge pages stale (improperly invalidated) in the hash table
and TLBs, with all the nasty consequences that can have.

One is that we forgot to set the "secondary" bit in the hash PTEs when
hashing a huge page in the secondary bucket (fortunately very rare).
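
To make that failure mode concrete, here is a small self-contained C sketch
of the two-bucket scheme. It is NOT the kernel's htab code: the single-slot
groups, the toy hash functions and the flag values are invented for
illustration, and only the role of the secondary bit is meant to be
faithful. The invalidation side decides where to look based on the bit
stored in the entry, so an entry parked in the secondary bucket without
HPTE_V_SECONDARY can never be found and flushed again:

/*
 * Toy model of the two-bucket hash PTE scheme -- NOT the kernel's htab code.
 * Single-slot groups, hash functions and flag values are invented; only the
 * role of the secondary bit is meant to be faithful.
 */
#include <stdio.h>
#include <stdbool.h>

#define HPTE_V_VALID     0x1u
#define HPTE_V_SECONDARY 0x2u	/* "this entry lives in the secondary bucket" */
#define NGROUPS          8u

struct toy_hpte { unsigned int v; unsigned long va; };
static struct toy_hpte htab[NGROUPS];

static unsigned int primary_group(unsigned long va)   { return va % NGROUPS; }
static unsigned int secondary_group(unsigned long va) { return (NGROUPS - 1) - va % NGROUPS; }

/* Insert into the primary group if free, else fall back to the secondary one
 * (the real code would evict an entry first).  The bug was the equivalent of
 * passing set_secondary_bit = false on the fallback path. */
static void toy_insert(unsigned long va, bool set_secondary_bit)
{
	unsigned int grp = primary_group(va);

	if (!(htab[grp].v & HPTE_V_VALID)) {
		htab[grp] = (struct toy_hpte){ .v = HPTE_V_VALID, .va = va };
		return;
	}
	grp = secondary_group(va);
	htab[grp] = (struct toy_hpte){
		.v = HPTE_V_VALID | (set_secondary_bit ? HPTE_V_SECONDARY : 0),
		.va = va,
	};
}

/* Invalidate: a non-secondary match is searched in the primary group, a
 * secondary-flagged match in the secondary group.  An entry sitting in the
 * secondary group without the flag is never found. */
static bool toy_invalidate(unsigned long va)
{
	unsigned int grp = primary_group(va);

	if ((htab[grp].v & HPTE_V_VALID) &&
	    !(htab[grp].v & HPTE_V_SECONDARY) && htab[grp].va == va) {
		htab[grp].v = 0;
		return true;
	}
	grp = secondary_group(va);
	if ((htab[grp].v & HPTE_V_VALID) &&
	    (htab[grp].v & HPTE_V_SECONDARY) && htab[grp].va == va) {
		htab[grp].v = 0;
		return true;
	}
	return false;		/* left stale: exactly the bug */
}

int main(void)
{
	toy_insert(1, true);	/* fills the primary group shared by va 1, 9, 17 */
	toy_insert(9, false);	/* collides, goes secondary, bit forgotten */
	printf("buggy insert, flush va=9:  %s\n",
	       toy_invalidate(9) ? "found" : "left stale");

	toy_insert(17, true);	/* collides again, this time with the bit set */
	printf("fixed insert, flush va=17: %s\n",
	       toy_invalidate(17) ? "found" : "left stale");
	return 0;
}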

The other is that on non-LPAR machines (like Apple G5s), flush_hash_range(),
which is used to flush a batch of PTEs, simply did not work for huge pages.
Historically, our huge page code didn't batch its hash flushes, but that was
changed without fixing this routine. This patch fixes both problems.
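
For the second problem, here is a stand-alone C sketch of the idea behind
the flush_hash_range() change. It is simplified and partly hypothetical: the
batch_entry struct, the bool field standing in for pte_huge(), and the demo
addresses are not the kernel's types. The point is that the huge-page
property must be derived per PTE while walking the batch, because the
virtual page number used to locate the hash entry is computed with
HPAGE_SHIFT for huge pages and PAGE_SHIFT otherwise; hardcoding "not huge"
made every huge-page lookup use the wrong VPN:

/*
 * Stand-alone sketch of the batched-flush fix (not kernel code).  The batch
 * entry struct and the hard-coded shifts stand in for the real ppc64
 * structures; 4K base pages and 16M huge pages are illustrative values.
 */
#include <stdio.h>
#include <stdbool.h>

#define PAGE_SHIFT	12	/* 4KB base pages */
#define HPAGE_SHIFT	24	/* 16MB huge pages */

struct batch_entry {
	unsigned long va;	/* virtual address being flushed */
	bool huge;		/* stands in for pte_huge(batch->pte[i]) */
};

/* Virtual page number used to locate the hash PTE that must be invalidated.
 * Before the fix, the batch flush effectively always took the else branch,
 * so huge-page entries were looked up with the wrong VPN and never
 * invalidated. */
static unsigned long flush_vpn(const struct batch_entry *e)
{
	if (e->huge)
		return e->va >> HPAGE_SHIFT;
	else
		return e->va >> PAGE_SHIFT;
}

int main(void)
{
	const struct batch_entry batch[] = {
		{ 0x10001000UL, false },	/* ordinary 4K mapping   */
		{ 0x11000000UL, true  },	/* 16M huge-page mapping */
	};

	for (unsigned int i = 0; i < sizeof(batch) / sizeof(batch[0]); i++)
		printf("va=0x%08lx huge=%d vpn=0x%lx\n",
		       batch[i].va, batch[i].huge, flush_vpn(&batch[i]));
	return 0;
}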

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Authored by Benjamin Herrenschmidt and committed by Linus Torvalds (67b10813, 2601c2e2)

 arch/ppc64/mm/hash_native.c | 5 ++---
 arch/ppc64/mm/hugetlbpage.c | 7 +++++--
 2 files changed, 7 insertions(+), 5 deletions(-)

--- a/arch/ppc64/mm/hash_native.c
+++ b/arch/ppc64/mm/hash_native.c
@@ -343,9 +343,7 @@
 	hpte_t *hptep;
 	unsigned long hpte_v;
 	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
-
-	/* XXX fix for large ptes */
-	unsigned long large = 0;
+	unsigned long large;
 
 	local_irq_save(flags);
 
@@ -358,6 +356,7 @@
 
 		va = (vsid << 28) | (batch->addr[i] & 0x0fffffff);
 		batch->vaddr[j] = va;
+		large = pte_huge(batch->pte[i]);
 		if (large)
 			vpn = va >> HPAGE_SHIFT;
 		else

--- a/arch/ppc64/mm/hugetlbpage.c
+++ b/arch/ppc64/mm/hugetlbpage.c
@@ -710,10 +710,13 @@
 		hpte_group = ((~hash & htab_hash_mask) *
 			      HPTES_PER_GROUP) & ~0x7UL;
 		slot = ppc_md.hpte_insert(hpte_group, va, prpn,
-					  HPTE_V_LARGE, rflags);
+					  HPTE_V_LARGE |
+					  HPTE_V_SECONDARY,
+					  rflags);
 		if (slot == -1) {
 			if (mftb() & 0x1)
-				hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
+				hpte_group = ((hash & htab_hash_mask) *
+					      HPTES_PER_GROUP)&~0x7UL;
 
 			ppc_md.hpte_remove(hpte_group);
 			goto repeat;