Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

alpha: fix hang caused by the bootmem removal

The conversion of alpha to memblock as the early memory manager caused
boot to hang as described at [1].

The issue occurs because, in the CONFIG_DISCONTIGMEM=y case, memblock_add()
is called with a memory start PFN that had been rounded down to the nearest
8Mb boundary, which caused memblock to see more memory than is actually
present in the system.

In addition, memblock allocates memory from high addresses while bootmem
used low memory, which broke the assumption that early allocations are
always accessible by the hardware.

This patch ensures that memblock_add() is using the correct PFN for the
memory start and forces memblock to use bottom-up allocations.

[1] https://lkml.org/lkml/2018/11/22/1032

Link: http://lkml.kernel.org/r/1543233216-25833-1-git-send-email-rppt@linux.ibm.com
Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Tested-by: Meelis Roos <mroos@linux.ee>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Mike Rapoport, committed by Linus Torvalds
5b526090 eb6cf9f8

+4 -3

arch/alpha/kernel/setup.c (+1)
--- a/arch/alpha/kernel/setup.c
+++ b/arch/alpha/kernel/setup.c
@@ -634,6 +634,7 @@
 
 	/* Find our memory.  */
 	setup_memory(kernel_end);
+	memblock_set_bottom_up(true);
 
 	/* First guess at cpu cache sizes.  Do this before init_arch.  */
 	determine_cpu_caches(cpu->type);
arch/alpha/mm/numa.c (+3 -3)
--- a/arch/alpha/mm/numa.c
+++ b/arch/alpha/mm/numa.c
@@ -144,13 +144,13 @@
 	if (!nid && (node_max_pfn < end_kernel_pfn || node_min_pfn > start_kernel_pfn))
 		panic("kernel loaded out of ram");
 
+	memblock_add(PFN_PHYS(node_min_pfn),
+		     (node_max_pfn - node_min_pfn) << PAGE_SHIFT);
+
 	/* Zone start phys-addr must be 2^(MAX_ORDER-1) aligned.
 	   Note that we round this down, not up - node memory
 	   has much larger alignment than 8Mb, so it's safe.  */
 	node_min_pfn &= ~((1UL << (MAX_ORDER-1))-1);
-
-	memblock_add(PFN_PHYS(node_min_pfn),
-		     (node_max_pfn - node_min_pfn) << PAGE_SHIFT);
 
 	NODE_DATA(nid)->node_start_pfn = node_min_pfn;
 	NODE_DATA(nid)->node_present_pages = node_max_pfn - node_min_pfn;