Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/vmalloc: interchange the implementation of vmalloc_to_{pfn,page}

Currently vmalloc_to_pfn() is implemented as a wrapper around
vmalloc_to_page(), which works as follows:

1. walks the page tables to generate the corresponding pfn,
2. converts the pfn to a struct page,
3. returns that page.

vmalloc_to_pfn() then calls page_to_pfn() on the result, converting the
struct page straight back into the pfn that was just computed.

This round trip is needless, so this patch reverses the relationship:
vmalloc_to_pfn() now does the page-table walk and returns the pfn
directly, and vmalloc_to_page() becomes a thin wrapper that converts
that pfn with pfn_to_page(). This makes both functions slightly more
efficient.

No functional change.

Signed-off-by: Jianyu Zhan <nasa4836@gmail.com>
Cc: Vladimir Murzin <murzin.v@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Jianyu Zhan, committed by Linus Torvalds
commit ece86e22, parent d80be7c7

+14 -14
mm/vmalloc.c
@@ -220,12 +220,12 @@
 }
 
 /*
- * Walk a vmap address to the struct page it maps.
+ * Walk a vmap address to the physical pfn it maps to.
  */
-struct page *vmalloc_to_page(const void *vmalloc_addr)
+unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
 {
 	unsigned long addr = (unsigned long) vmalloc_addr;
-	struct page *page = NULL;
+	unsigned long pfn = 0;
 	pgd_t *pgd = pgd_offset_k(addr);
 
 	/*
@@ -244,20 +244,20 @@
 				ptep = pte_offset_map(pmd, addr);
 				pte = *ptep;
 				if (pte_present(pte))
-					page = pte_page(pte);
+					pfn = pte_pfn(pte);
 				pte_unmap(ptep);
 			}
 		}
 	}
-	return page;
-}
-EXPORT_SYMBOL(vmalloc_to_page);
-
-/*
- * Map a vmalloc()-space virtual address to the physical page frame number.
- */
-unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
-{
-	return page_to_pfn(vmalloc_to_page(vmalloc_addr));
+	return pfn;
 }
 EXPORT_SYMBOL(vmalloc_to_pfn);
+
+/*
+ * Map a vmalloc()-space virtual address to the struct page.
+ */
+struct page *vmalloc_to_page(const void *vmalloc_addr)
+{
+	return pfn_to_page(vmalloc_to_pfn(vmalloc_addr));
+}
+EXPORT_SYMBOL(vmalloc_to_page);
 
 
 /*** Global kva allocator ***/