
arm64: mm: use correct mapping granularity under DEBUG_RODATA

When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA
and resides at an offset that is not a multiple of 512 MB, the rounding
that occurs in __map_memblock() and fixup_executable() results in
incorrect regions being mapped.

The following snippet from /sys/kernel/debug/kernel_page_tables shows
how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000,
the first 2 MB of memory (which may be inaccessible from non-secure EL1
or just reserved by the firmware) is inadvertently mapped into the end of
the module region.

---[ Modules start ]---
0xfffffdffffe00000-0xfffffe0000000000 2M RW NX ... UXN MEM/NORMAL
---[ Modules end ]---
---[ Kernel Mapping ]---
0xfffffe0000000000-0xfffffe0000090000 576K RW NX ... UXN MEM/NORMAL
0xfffffe0000090000-0xfffffe0000200000 1472K ro x ... UXN MEM/NORMAL
0xfffffe0000200000-0xfffffe0000800000 6M ro x ... UXN MEM/NORMAL
0xfffffe0000800000-0xfffffe0000810000 64K ro x ... UXN MEM/NORMAL
0xfffffe0000810000-0xfffffe0000a00000 1984K RW NX ... UXN MEM/NORMAL
0xfffffe0000a00000-0xfffffe00ffe00000 4084M RW NX ... UXN MEM/NORMAL
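
To see where the stray mapping comes from, consider the old rounding. Below is a minimal user-space sketch of the arithmetic; the load address 0x40200000 (2 MB above the 0x40000000 DRAM base) is taken from the example above, and round_down() mirrors the kernel's power-of-two helper:

#include <stdio.h>
#include <stdint.h>

/* same semantics as the kernel's helper for power-of-two sizes */
#define round_down(x, y)	((x) & ~((y) - 1))

#define SZ_64K	(64ULL << 10)	/* SWAPPER_BLOCK_SIZE on a 64k pages kernel */
#define SZ_512M	(512ULL << 20)	/* SECTION_SIZE on a 64k pages kernel */

int main(void)
{
	uint64_t stext = 0x40200000ULL;	/* kernel text, 2 MB above DRAM base */

	/* old code: rounds down past the start of the kernel image,
	 * all the way to the DRAM base at 0x40000000 */
	printf("SECTION_SIZE:       0x%llx\n",
	       (unsigned long long)round_down(stext, SZ_512M));

	/* fixed code: stays on the 64 KB block that holds _stext */
	printf("SWAPPER_BLOCK_SIZE: 0x%llx\n",
	       (unsigned long long)round_down(stext, SZ_64K));

	return 0;
}

The 512 MB rounding lands on the DRAM base, so the first 2 MB of memory gets mapped 2 MB below the kernel's virtual base, i.e. into the tail of the module region shown in the dump above.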

The same issue is likely to occur on 16k pages kernels whose load
address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to
SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.
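
For reference, SWAPPER_BLOCK_SIZE comes from arch/arm64/include/asm/kernel-pgtable.h, roughly as follows (a sketch of the kernel sources of this era; the exact guards may differ between versions):

#if ARM64_SWAPPER_USES_SECTION_MAPS	/* 4k pages: section (block) mappings */
#define SWAPPER_BLOCK_SHIFT	SECTION_SHIFT
#define SWAPPER_BLOCK_SIZE	SECTION_SIZE
#else					/* 16k/64k pages: page mappings */
#define SWAPPER_BLOCK_SHIFT	PAGE_SHIFT
#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
#endif

On 16k and 64k pages kernels the swapper maps the kernel with pages rather than sections, so rounding to SWAPPER_BLOCK_SIZE matches the granularity the early mappings actually use and cannot sweep unrelated memory into the kernel's mapping.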

Fixes: da141706aea5 ("arm64: add better page protections to arm64")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Cc: <stable@vger.kernel.org> # 4.0+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

authored by Ard Biesheuvel, committed by Catalin Marinas
4fee9f36 bd1c6ff7

+6 -6
arch/arm64/mm/mmu.c
@@ -362,8 +362,8 @@
 	 * for now. This will get more fine grained later once all memory
 	 * is mapped
 	 */
-	unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
-	unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+	unsigned long kernel_x_start = round_down(__pa(_stext), SWAPPER_BLOCK_SIZE);
+	unsigned long kernel_x_end = round_up(__pa(__init_end), SWAPPER_BLOCK_SIZE);
 
 	if (end < kernel_x_start) {
 		create_mapping(start, __phys_to_virt(start),
@@ -451,18 +451,18 @@
 {
 #ifdef CONFIG_DEBUG_RODATA
 	/* now that we are actually fully mapped, make the start/end more fine grained */
-	if (!IS_ALIGNED((unsigned long)_stext, SECTION_SIZE)) {
+	if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) {
 		unsigned long aligned_start = round_down(__pa(_stext),
-							 SECTION_SIZE);
+							 SWAPPER_BLOCK_SIZE);
 
 		create_mapping(aligned_start, __phys_to_virt(aligned_start),
			       __pa(_stext) - aligned_start,
			       PAGE_KERNEL);
	}
 
-	if (!IS_ALIGNED((unsigned long)__init_end, SECTION_SIZE)) {
+	if (!IS_ALIGNED((unsigned long)__init_end, SWAPPER_BLOCK_SIZE)) {
		unsigned long aligned_end = round_up(__pa(__init_end),
-						     SECTION_SIZE);
+						     SWAPPER_BLOCK_SIZE);
		create_mapping(__pa(__init_end), (unsigned long)__init_end,
			       aligned_end - __pa(__init_end),
			       PAGE_KERNEL);
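
As a sanity check of the patched tail fixup, the same arithmetic in user space; the __init_end value here is hypothetical (chosen to land mid-block), and IS_ALIGNED()/round_up() reimplement the kernel's power-of-two helpers:

#include <stdio.h>
#include <stdint.h>

#define round_up(x, y)		(((x) + (y) - 1) & ~((y) - 1))
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

#define SWAPPER_BLOCK_SIZE	(64ULL << 10)	/* 64k pages kernel */

int main(void)
{
	uint64_t init_end = 0x40812000ULL;	/* hypothetical, mid-block */

	if (!IS_ALIGNED(init_end, SWAPPER_BLOCK_SIZE)) {
		uint64_t aligned_end = round_up(init_end, SWAPPER_BLOCK_SIZE);

		/* the patched fixup_executable() remaps exactly this
		 * 56 KB tail as PAGE_KERNEL (RW, non-executable) */
		printf("remap 0x%llx-0x%llx as PAGE_KERNEL\n",
		       (unsigned long long)init_end,
		       (unsigned long long)aligned_end);
	}
	return 0;
}

With SECTION_SIZE the rounded-up end could overshoot __init_end by up to 512 MB; with SWAPPER_BLOCK_SIZE the overshoot is bounded by a single 64 KB block.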