
powerpc/book3s64/memhotplug: enable memmap on memory for radix

Radix vmemmap mapping can map things correctly at the PMD level or PTE
level based on different device boundary checks. Hence we can skip the
restriction that the vmemmap size be a multiple of PMD_SIZE. This also
makes the feature more widely useful, because a PMD_SIZE vmemmap area
would require a memory block size of 2GiB.

We can also use MHP_RESERVE_PAGES_MEMMAP_ON_MEMORY so that the feature
can work with a memory block size of 256MB, using the altmap.reserve
feature to align things correctly at pageblock granularity. We can end
up losing some pages in memory with this. For example, with a 256MiB
memory block size, we require 4 pages to map vmemmap pages; in order to
align things correctly we end up adding a reserve of 28 pages. That is,
for every 4096 pages, 28 pages get reserved.

Link: https://lkml.kernel.org/r/20230808091501.287660-6-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

arch/powerpc/Kconfig
@@ -157,6 +157,7 @@
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_KEEP_MEMBLOCK
+	select ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE if PPC_RADIX_MMU
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
arch/powerpc/include/asm/pgtable.h
@@ -161,6 +161,27 @@
 int __meminit vmemmap_populated(unsigned long vmemmap_addr, int vmemmap_map_size);
 bool altmap_cross_boundary(struct vmem_altmap *altmap, unsigned long start,
 			   unsigned long page_size);
+/*
+ * mm/memory_hotplug.c:mhp_supports_memmap_on_memory goes into details
+ * of some of the restrictions. We don't check for PMD_SIZE because our
+ * vmemmap allocation code can fall back correctly. The pageblock
+ * alignment requirement is met using altmap->reserve blocks.
+ */
+#define arch_supports_memmap_on_memory arch_supports_memmap_on_memory
+static inline bool arch_supports_memmap_on_memory(unsigned long vmemmap_size)
+{
+	if (!radix_enabled())
+		return false;
+	/*
+	 * With 4K page size and 2M PMD_SIZE, we can align
+	 * things better with memory block size value
+	 * starting from 128MB. Hence align things with PMD_SIZE.
+	 */
+	if (IS_ENABLED(CONFIG_PPC_4K_PAGES))
+		return IS_ALIGNED(vmemmap_size, PMD_SIZE);
+	return true;
+}
+
 #endif /* CONFIG_PPC64 */
 
 #endif /* __ASSEMBLY__ */
arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -637,7 +637,7 @@
 	nid = first_online_node;
 
 	/* Add the memory */
-	rc = __add_memory(nid, lmb->base_addr, block_sz, MHP_NONE);
+	rc = __add_memory(nid, lmb->base_addr, block_sz, MHP_MEMMAP_ON_MEMORY);
 	if (rc) {
 		invalidate_lmb_associativity_index(lmb);
 		return rc;