[ARM] Fix some corner cases in new mm initialisation

Document that the VMALLOC_END address must be aligned to 2MB since
it must align with a PGD boundary.

Allocate the vectors page early so that the flush_cache_all() later
will cause any dirty cache lines in the direct mapping to be safely
written back.

Move the flush_cache_all() to follow the second local_flush_tlb_all()
and remove the now-redundant first flush_cache_all()/local_flush_tlb_all()
pair.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Authored by Russell King and committed by Russell King (02b30839, 67a1901f)

2 files changed, 16 insertions(+), 12 deletions(-)
Documentation/arm/memory.txt (+3 -1)
--- a/Documentation/arm/memory.txt
+++ b/Documentation/arm/memory.txt
@@ -1,7 +1,7 @@
 		Kernel Memory Layout on ARM Linux

 		Russell King <rmk@arm.linux.org.uk>
-			May 21, 2004 (2.6.6)
+		     November 17, 2005 (2.6.15)

 This document describes the virtual memory layout which the Linux
 kernel uses for ARM processors.  It indicates which regions are
@@ -37,6 +37,8 @@
 				mapping region.

 VMALLOC_END	feffffff	Free for platform use, recommended.
+				VMALLOC_END must be aligned to a 2MB
+				boundary.

 VMALLOC_START	VMALLOC_END-1	vmalloc() / ioremap() space.
 				Memory returned by vmalloc/ioremap will
arch/arm/mm/init.c (+13 -11)
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -420,13 +420,20 @@
  * Set up device the mappings.  Since we clear out the page tables for all
  * mappings above VMALLOC_END, we will remove any debug device mappings.
  * This means you have to be careful how you debug this function, or any
- * called function.  (Do it by code inspection!)
+ * called function.  This means you can't use any function or debugging
+ * method which may touch any device, otherwise the kernel _will_ crash.
  */
 static void __init devicemaps_init(struct machine_desc *mdesc)
 {
 	struct map_desc map;
 	unsigned long addr;
 	void *vectors;
+
+	/*
+	 * Allocate the vector page early.
+	 */
+	vectors = alloc_bootmem_low_pages(PAGE_SIZE);
+	BUG_ON(!vectors);

 	for (addr = VMALLOC_END; addr; addr += PGDIR_SIZE)
 		pmd_clear(pmd_off_k(addr));
@@ -468,12 +461,6 @@
 	create_mapping(&map);
 #endif

-	flush_cache_all();
-	local_flush_tlb_all();
-
-	vectors = alloc_bootmem_low_pages(PAGE_SIZE);
-	BUG_ON(!vectors);
-
 	/*
 	 * Create a mapping for the machine vectors at the high-vectors
 	 * location (0xffff0000).  If we aren't using high-vectors, also
@@ -492,12 +491,13 @@
 	mdesc->map_io();

 	/*
-	 * Finally flush the tlb again - this ensures that we're in a
-	 * consistent state wrt the writebuffer if the writebuffer needs
-	 * draining.  After this point, we can start to touch devices
-	 * again.
+	 * Finally flush the caches and tlb to ensure that we're in a
+	 * consistent state wrt the writebuffer.  This also ensures that
+	 * any write-allocated cache lines in the vector page are written
+	 * back.  After this point, we can start to touch devices again.
 	 */
 	local_flush_tlb_all();
+	flush_cache_all();
 }

 /*