Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

alpha: implement the new page table range API

Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_icache_pages().

Link: https://lkml.kernel.org/r/20230802151406.3735276-8-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Matthew Wilcox (Oracle) and committed by Andrew Morton (63497b71 bcc6cc83)

+18 -2

arch/alpha/include/asm/cacheflush.h (+10 -0)
@@ -57,6 +57,16 @@
 #define flush_icache_page(vma, page) \
 	flush_icache_user_page((vma), (page), 0, 0)
 
+/*
+ * Both implementations of flush_icache_user_page flush the entire
+ * address space, so one call, no matter how many pages.
+ */
+static inline void flush_icache_pages(struct vm_area_struct *vma,
+		struct page *page, unsigned int nr)
+{
+	flush_icache_user_page(vma, page, 0, 0);
+}
+
 #include <asm-generic/cacheflush.h>
 
 #endif /* _ALPHA_CACHEFLUSH_H */
arch/alpha/include/asm/pgtable.h (+8 -2)
@@ -26,7 +26,6 @@
  * hook is made available.
  */
 #define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval))
-#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
 
 /* PMD_SHIFT determines the size of the area a second-level page table can map */
 #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT-3))
@@ -188,7 +189,8 @@
  * and a page entry and page directory to the page they refer to.
  */
 #define page_to_pa(page)	(page_to_pfn(page) << PAGE_SHIFT)
-#define pte_pfn(pte)	(pte_val(pte) >> 32)
+#define PFN_PTE_SHIFT	32
+#define pte_pfn(pte)	(pte_val(pte) >> PFN_PTE_SHIFT)
 
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
 #define mk_pte(page, pgprot)						\
@@ -300,6 +300,12 @@
  */
 extern inline void update_mmu_cache(struct vm_area_struct * vma,
 	unsigned long address, pte_t *ptep)
+{
+}
+
+static inline void update_mmu_cache_range(struct vm_fault *vmf,
+		struct vm_area_struct *vma, unsigned long address,
+		pte_t *ptep, unsigned int nr)
 {
 }
 