[PATCH] mm: powerpc ptlock comments

Update comments (only) on page_table_lock and mmap_sem in arch/powerpc.
Remove the comment on page_table_lock from hash_huge_page: since it no
longer takes page_table_lock itself, it is irrelevant whether others hold
it; but how it is safe (even against huge file truncation?) I can't say.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Authored by Hugh Dickins and committed by Linus Torvalds (01edcd89, cc3327e7)

 arch/powerpc/mm/hugetlbpage.c |    4 +---
 arch/powerpc/mm/mem.c         |    2 +-
 arch/powerpc/mm/tlb_32.c      |    6 ++++++
 arch/powerpc/mm/tlb_64.c      |    4 ++--
 4 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -754,9 +754,7 @@
 	}
 
 	/*
-	 * No need to use ldarx/stdcx here because all who
-	 * might be updating the pte will hold the
-	 * page_table_lock
+	 * No need to use ldarx/stdcx here
 	 */
 	*ptep = __pte(new_pte & ~_PAGE_BUSY);
 
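Why the plain store stays safe: hash_huge_page sets _PAGE_BUSY in the pte
with an atomic ldarx/stdcx. operation before touching the hash table, so by
the time it clears the bit it has exclusive ownership of the pte and an
ordinary store suffices. Below is a self-contained userspace sketch of that
busy-bit protocol using GCC atomic builtins; PAGE_BUSY, pte and pte_trylock
are illustrative stand-ins, not the kernel's symbols.

#include <stdint.h>
#include <stdio.h>

#define PAGE_BUSY (1ULL << 0)	/* stand-in for powerpc _PAGE_BUSY */

static uint64_t pte;		/* pretend page table entry */

/* Take ownership of the pte by setting the busy bit atomically;
 * the kernel does this with an ldarx/stdcx. loop. */
static int pte_trylock(uint64_t *p, uint64_t *old)
{
	uint64_t o = __atomic_load_n(p, __ATOMIC_RELAXED);

	if (o & PAGE_BUSY)	/* somebody else owns it */
		return 0;
	if (!__atomic_compare_exchange_n(p, &o, o | PAGE_BUSY, 0,
					 __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
		return 0;
	*old = o;
	return 1;
}

int main(void)
{
	uint64_t old;

	if (pte_trylock(&pte, &old)) {
		/* ... update the hash table entry here ... */

		/* The owner releases with a plain store: no ldarx/stdcx
		 * needed, which is the point of the new comment. */
		__atomic_store_n(&pte, old & ~PAGE_BUSY, __ATOMIC_RELEASE);
	}
	printf("pte = %#llx\n", (unsigned long long)pte);
	return 0;
}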
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -495,7 +495,7 @@
  * We use it to preload an HPTE into the hash table corresponding to
  * the updated linux PTE.
  *
- * This must always be called with the mm->page_table_lock held
+ * This must always be called with the pte lock held.
 */
 void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
 		      pte_t pte)
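The "pte lock" is the per-page-table lock introduced by this same ptlock
series: on small configurations it is still mm->page_table_lock, but with
split ptlocks each page-table page gets its own spinlock. Callers normally
reach update_mmu_cache() from generic code that has taken the lock via
pte_offset_map_lock(). A sketch of that calling convention, assuming the
usual mm/pmd/address/vma/entry variables from the surrounding caller:

	spinlock_t *ptl;
	pte_t *ptep;

	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
	set_pte_at(mm, address, ptep, entry);
	update_mmu_cache(vma, address, entry);	/* pte lock still held */
	pte_unmap_unlock(ptep, ptl);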
diff --git a/arch/powerpc/mm/tlb_32.c b/arch/powerpc/mm/tlb_32.c
--- a/arch/powerpc/mm/tlb_32.c
+++ b/arch/powerpc/mm/tlb_32.c
@@ -149,6 +149,12 @@
 		return;
 	}
 
+	/*
+	 * It is safe to go down the mm's list of vmas when called
+	 * from dup_mmap, holding mmap_sem.  It would also be safe from
+	 * unmap_region or exit_mmap, but not from vmtruncate on SMP -
+	 * but it seems dup_mmap is the only SMP case which gets here.
+	 */
 	for (mp = mm->mmap; mp != NULL; mp = mp->vm_next)
 		flush_range(mp->vm_mm, mp->vm_start, mp->vm_end);
 	FINISH_FLUSH;
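For reference, the dup_mmap() path the new comment relies on looks roughly
like this (kernel/fork.c of that era, heavily simplified; copy_one_vma is a
hypothetical placeholder for the vma-copying logic). The parent's mmap_sem
is held for writing across both the copy and the final TLB flush, so the
walk of mm->mmap in the flush cannot race with a concurrent unmap:

	down_write(&oldmm->mmap_sem);
	flush_cache_mm(oldmm);
	for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next)
		copy_one_vma(mm, mpnt);		/* hypothetical helper */
	flush_tlb_mm(oldmm);	/* walks oldmm->mmap under mmap_sem */
	up_write(&oldmm->mmap_sem);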
diff --git a/arch/powerpc/mm/tlb_64.c b/arch/powerpc/mm/tlb_64.c
--- a/arch/powerpc/mm/tlb_64.c
+++ b/arch/powerpc/mm/tlb_64.c
@@ -95,7 +95,7 @@
 
 void pgtable_free_tlb(struct mmu_gather *tlb, pgtable_free_t pgf)
 {
-	/* This is safe as we are holding page_table_lock */
+	/* This is safe since tlb_gather_mmu has disabled preemption */
 	cpumask_t local_cpumask = cpumask_of_cpu(smp_processor_id());
 	struct pte_freelist_batch **batchp = &__get_cpu_var(pte_freelist_cur);
 
@@ -206,7 +206,7 @@
 
 void pte_free_finish(void)
 {
-	/* This is safe as we are holding page_table_lock */
+	/* This is safe since tlb_gather_mmu has disabled preemption */
 	struct pte_freelist_batch **batchp = &__get_cpu_var(pte_freelist_cur);
 
 	if (*batchp == NULL)
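The preemption guarantee comes from tlb_gather_mmu() itself: at this point
in history it takes the per-CPU mmu_gather with get_cpu_var(), which
disables preemption until tlb_finish_mmu() drops it with put_cpu_var().
With preemption off, nothing else can run on this CPU and touch the per-cpu
pte_freelist_cur batch, so no lock is needed. A simplified sketch of that
bracket (not the literal kernel source):

	/* tlb_gather_mmu(), simplified: get_cpu_var() disables preemption */
	struct mmu_gather *tlb = &get_cpu_var(mmu_gathers);

	/* ... unmap pages; freed page-table pages are queued on the
	 * per-cpu pte_freelist_cur batch, which is safe because we
	 * cannot be preempted or migrated to another CPU here ... */

	/* tlb_finish_mmu(), simplified: re-enables preemption */
	put_cpu_var(mmu_gathers);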