s390/mm: Prevent lowcore vs identity mapping overlap

The identity mapping position in virtual memory is randomized
together with the kernel mapping. That position can never
overlap with the lowcore even when the lowcore is relocated.

Prevent overlapping with the lowcore to allow independent
positioning of the identity mapping. With the current
alternative lowcore address of 0x70000, the overlap could
happen if the identity mapping is placed at address zero.

This is a prerequisite for uncoupling the randomization bases
of the kernel image and the identity mapping in virtual memory.

Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>

Authored by Alexander Gordeev, committed by Vasily Gorbik · a3ca27c4 de9c2c66

+18 -1
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -734,7 +734,23 @@
 }
 
 /*
- * Reserve memory used for lowcore/command line/kernel image.
+ * Reserve memory used for lowcore.
+ */
+static void __init reserve_lowcore(void)
+{
+	void *lowcore_start = get_lowcore();
+	void *lowcore_end = lowcore_start + sizeof(struct lowcore);
+	void *start, *end;
+
+	if ((void *)__identity_base < lowcore_end) {
+		start = max(lowcore_start, (void *)__identity_base);
+		end = min(lowcore_end, (void *)(__identity_base + ident_map_size));
+		memblock_reserve(__pa(start), __pa(end));
+	}
+}
+
+/*
+ * Reserve memory used for absolute lowcore/command line/kernel image.
  */
 static void __init reserve_kernel(void)
 {
@@ -918,6 +934,7 @@
 
 	/* Do some memory reservations *before* memory is added to memblock */
 	reserve_pgtables();
+	reserve_lowcore();
 	reserve_kernel();
 	reserve_initrd();
 	reserve_certificate_list();