[PATCH] ppc64: Remove redundant abs_to_phys() macro

abs_to_phys() is a macro that turns out to do nothing, and also has the
unfortunate property that it's not the inverse of phys_to_abs() on iSeries.

The following is for my benefit as much as everyone else's.

With CONFIG_MSCHUNKS enabled, the lmb code is changed such that it keeps
a physbase variable for each lmb region. This is used to take the possibly
discontiguous lmb regions and present them as a contiguous address space
beginning from zero.

In this context each lmb region's base address is its "absolute" base
address, and its physbase is its "physical" address (from Linux's point of
view). The abs_to_phys() macro does the mapping from "absolute" to "physical".

Note: This is not related to the iSeries mapping of physical to absolute
(ie. Hypervisor) addresses which is maintained with the msChunks structure.
And the msChunks structure is not controlled via CONFIG_MSCHUNKS.

Once upon a time you could compile for non-iSeries with CONFIG_MSCHUNKS
enabled. But these days CONFIG_MSCHUNKS depends on CONFIG_PPC_ISERIES, so
for non-iSeries code abs_to_phys() is a no-op.

On iSeries we always have one lmb region which spans from 0 to
systemcfg->physicalMemorySize (arch/ppc64/kernel/iSeries_setup.c line 383).
This region has a base (ie. absolute) address of 0, and a physbase address
of 0 (as calculated in lmb_analyze() (arch/ppc64/kernel/lmb.c line 144)).

On iSeries, abs_to_phys(aa) is defined as lmb_abs_to_phys(aa), which finds
the lmb region containing aa (and there's only one, ie. 0), and then does:

return lmb.memory.region[0].physbase + (aa - lmb.memory.region[0].base)

physbase == base == 0, so you're left with "return aa".

So remove abs_to_phys(), and lmb_abs_to_phys() which is the implementation
of abs_to_phys() for iSeries.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>

Authored by Michael Ellerman, committed by Paul Mackerras
e88bcd1b a4a0f970
2 files changed summary: +2 -27

arch/ppc64/kernel/lmb.c (-19):

--- a/arch/ppc64/kernel/lmb.c
+++ b/arch/ppc64/kernel/lmb.c
@@ -313,25 +313,6 @@
 	return 0;
 }
 
-unsigned long __init
-lmb_abs_to_phys(unsigned long aa)
-{
-	unsigned long i, pa = aa;
-	struct lmb *_lmb = &lmb;
-	struct lmb_region *_mem = &(_lmb->memory);
-
-	for (i=0; i < _mem->cnt; i++) {
-		unsigned long lmbbase = _mem->region[i].base;
-		unsigned long lmbsize = _mem->region[i].size;
-		if ( lmb_addrs_overlap(aa,1,lmbbase,lmbsize) ) {
-			pa = _mem->region[i].physbase + (aa - lmbbase);
-			break;
-		}
-	}
-
-	return pa;
-}
-
 /*
  * Truncate the lmb list to memory_limit if it's set
  * You must call lmb_analyze() after this.
arch/ppc64/mm/init.c (+1 -3):

--- a/arch/ppc64/mm/init.c
+++ b/arch/ppc64/mm/init.c
@@ -42,7 +42,6 @@
 
 #include <asm/pgalloc.h>
 #include <asm/page.h>
-#include <asm/abs_addr.h>
 #include <asm/prom.h>
 #include <asm/lmb.h>
 #include <asm/rtas.h>
@@ -167,7 +166,6 @@
 	ptep = pte_alloc_kernel(&init_mm, pmdp, ea);
 	if (!ptep)
 		return -ENOMEM;
-	pa = abs_to_phys(pa);
 	set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT,
 					      __pgprot(flags)));
 	spin_unlock(&init_mm.page_table_lock);
@@ -547,7 +545,7 @@
 	 */
 	bootmap_pages = bootmem_bootmap_pages(total_pages);
 
-	start = abs_to_phys(lmb_alloc(bootmap_pages<<PAGE_SHIFT, PAGE_SIZE));
+	start = lmb_alloc(bootmap_pages<<PAGE_SHIFT, PAGE_SIZE);
 	BUG_ON(!start);
 
 	boot_mapsize = init_bootmem(start >> PAGE_SHIFT, total_pages);
include/asm-ppc64/abs_addr.h (+1 -5):

--- a/include/asm-ppc64/abs_addr.h
+++ b/include/asm-ppc64/abs_addr.h
@@ -56,9 +56,6 @@
 	return chunk_to_addr(chunk) + (pa & MSCHUNKS_OFFSET_MASK);
 }
 
-/* A macro so it can take pointers or unsigned long. */
-#define abs_to_phys(aa) lmb_abs_to_phys((unsigned long)(aa))
-
 #else /* !CONFIG_MSCHUNKS */
 
 #define chunk_to_addr(chunk) ((unsigned long)(chunk))
@@ -68,12 +65,11 @@
 
 #define phys_to_abs(pa) (pa)
 #define physRpn_to_absRpn(rpn) (rpn)
-#define abs_to_phys(aa) (aa)
 
 #endif /* !CONFIG_MSCHUNKS */
 
 /* Convenience macros */
 #define virt_to_abs(va) phys_to_abs(__pa(va))
-#define abs_to_virt(aa) __va(abs_to_phys(aa))
+#define abs_to_virt(aa) __va(aa)
 
 #endif /* _ABS_ADDR_H */