Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull kmap updates from Thomas Gleixner:
"The new preemptible kmap_local() implementation:

- Consolidate all kmap_atomic() internals into a generic
implementation which builds the base for the kmap_local() API, and
turn the kmap_atomic() interfaces into wrappers which handle the
disabling/enabling of preemption and pagefaults.

- Switch the storage from per-CPU to per-task and provide scheduler
support for clearing mappings when scheduling out and restoring them
when scheduling back in.

- Merge the migrate_disable/enable() code, which is also part of the
scheduler pull request. This was required to make the kmap_local()
interface available which does not disable preemption when a
mapping is established. It has to disable migration instead to
guarantee that the virtual address of the mapped slot is the same
across preemption.

- Provide better debug facilities: guard pages and enforced
utilization of the mapping mechanics on 64bit systems when the
architecture allows it.

- Provide the new kmap_local() API which can now be used to cleanup
the kmap_atomic() usage sites all over the place. Most of the usage
sites do not require the implicit disabling of preemption and
pagefaults so the penalty on 64bit and 32bit non-highmem systems is
removed and quite some of the code can be simplified. A wholesale
conversion is not possible because some usage depends on the
implicit side effects and some need to be cleaned up because they
work around these side effects.

The migrate disable side effect is only effective on highmem
systems and when enforced debugging is enabled. On 64bit and 32bit
non-highmem systems the overhead is completely avoided"
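The per-task storage and scheduler hand-off described above can be modeled in a few lines of ordinary C. This is an illustrative userspace sketch, not kernel code: the `struct task`, the fixed `slot[]` array, and all function names here are invented for the demo. Each task keeps its own stack of local mappings; on a context switch the outgoing task's slots are torn down and the incoming task's are re-installed at the same indices, which is why an address obtained from kmap_local() stays valid across preemption.

```c
#include <assert.h>
#include <stddef.h>

#define KM_MAX_IDX 16

struct task {
	unsigned long pfn[KM_MAX_IDX]; /* which page each slot maps */
	int idx;                       /* current stack depth */
};

/* The "CPU" fixmap slots; 0 means the slot's PTE is clear. */
static unsigned long slot[KM_MAX_IDX];

static int kmap_local(struct task *t, unsigned long pfn)
{
	assert(t->idx < KM_MAX_IDX);   /* nesting depth is bounded */
	t->pfn[t->idx] = pfn;          /* remember it in the task */
	slot[t->idx] = pfn;            /* "install" the PTE */
	return t->idx++;
}

static void kunmap_local(struct task *t)
{
	assert(t->idx > 0);            /* LIFO: undo the last mapping first */
	t->idx--;
	slot[t->idx] = 0;              /* clear the PTE */
}

static void schedule_switch(struct task *out, struct task *in)
{
	int i;

	for (i = 0; i < out->idx; i++)
		slot[i] = 0;           /* tear down outgoing task's mappings */
	for (i = 0; i < in->idx; i++)
		slot[i] = in->pfn[i];  /* restore incoming task's mappings */
}
```

Because the incoming task's mappings are restored at the same slot indices they were pushed at, the virtual address of each mapped slot is unchanged after the round trip, as the migrate-disable discussion above requires.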

* tag 'core-mm-2020-12-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (33 commits)
ARM: highmem: Fix cache_is_vivt() reference
x86/crashdump/32: Simplify copy_oldmem_page()
io-mapping: Provide iomap_local variant
mm/highmem: Provide kmap_local*
sched: highmem: Store local kmaps in task struct
x86: Support kmap_local() forced debugging
mm/highmem: Provide CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
mm/highmem: Provide and use CONFIG_DEBUG_KMAP_LOCAL
microblaze/mm/highmem: Add dropped #ifdef back
xtensa/mm/highmem: Make generic kmap_atomic() work correctly
mm/highmem: Take kmap_high_get() properly into account
highmem: High implementation details and document API
Documentation/io-mapping: Remove outdated blurb
io-mapping: Cleanup atomic iomap
mm/highmem: Remove the old kmap_atomic cruft
highmem: Get rid of kmap_types.h
xtensa/mm/highmem: Switch to generic kmap atomic
sparc/mm/highmem: Switch to generic kmap atomic
powerpc/mm/highmem: Switch to generic kmap atomic
nds32/mm/highmem: Switch to generic kmap atomic
...

+955 -1432
+45 -51
Documentation/driver-api/io-mapping.rst
···
   mappable, while 'size' indicates how large a mapping region to
   enable. Both are in bytes.
 
-  This _wc variant provides a mapping which may only be used
-  with the io_mapping_map_atomic_wc or io_mapping_map_wc.
+  This _wc variant provides a mapping which may only be used with
+  io_mapping_map_atomic_wc(), io_mapping_map_local_wc() or
+  io_mapping_map_wc().
 
-With this mapping object, individual pages can be mapped either atomically
-or not, depending on the necessary scheduling environment. Of course, atomic
-maps are more efficient::
+With this mapping object, individual pages can be mapped either temporarily
+or long term, depending on the requirements. Of course, temporary maps are
+more efficient. They come in two flavours::
+
+        void *io_mapping_map_local_wc(struct io_mapping *mapping,
+                                      unsigned long offset)
 
         void *io_mapping_map_atomic_wc(struct io_mapping *mapping,
                                        unsigned long offset)
 
-  'offset' is the offset within the defined mapping region.
-  Accessing addresses beyond the region specified in the
-  creation function yields undefined results. Using an offset
-  which is not page aligned yields an undefined result. The
-  return value points to a single page in CPU address space.
+  'offset' is the offset within the defined mapping region. Accessing
+  addresses beyond the region specified in the creation function yields
+  undefined results. Using an offset which is not page aligned yields an
+  undefined result. The return value points to a single page in CPU address
+  space.
 
-  This _wc variant returns a write-combining map to the
-  page and may only be used with mappings created by
-  io_mapping_create_wc
+  This _wc variant returns a write-combining map to the page and may only be
+  used with mappings created by io_mapping_create_wc()
 
-  Note that the task may not sleep while holding this page
-  mapped.
+  Temporary mappings are only valid in the context of the caller. The mapping
+  is not guaranteed to be globaly visible.
 
-::
+  io_mapping_map_local_wc() has a side effect on X86 32bit as it disables
+  migration to make the mapping code work. No caller can rely on this side
+  effect.
 
+  io_mapping_map_atomic_wc() has the side effect of disabling preemption and
+  pagefaults. Don't use in new code. Use io_mapping_map_local_wc() instead.
+
+Nested mappings need to be undone in reverse order because the mapping
+code uses a stack for keeping track of them::
+
+   addr1 = io_mapping_map_local_wc(map1, offset1);
+   addr2 = io_mapping_map_local_wc(map2, offset2);
+   ...
+   io_mapping_unmap_local(addr2);
+   io_mapping_unmap_local(addr1);
+
+The mappings are released with::
+
+        void io_mapping_unmap_local(void *vaddr)
         void io_mapping_unmap_atomic(void *vaddr)
 
-  'vaddr' must be the value returned by the last
-  io_mapping_map_atomic_wc call. This unmaps the specified
-  page and allows the task to sleep once again.
+  'vaddr' must be the value returned by the last io_mapping_map_local_wc() or
+  io_mapping_map_atomic_wc() call. This unmaps the specified mapping and
+  undoes the side effects of the mapping functions.
 
-If you need to sleep while holding the lock, you can use the non-atomic
-variant, although they may be significantly slower.
-
-::
+If you need to sleep while holding a mapping, you can use the regular
+variant, although this may be significantly slower::
 
         void *io_mapping_map_wc(struct io_mapping *mapping,
                                 unsigned long offset)
 
-  This works like io_mapping_map_atomic_wc except it allows
-  the task to sleep while holding the page mapped.
+  This works like io_mapping_map_atomic/local_wc() except it has no side
+  effects and the pointer is globaly visible.
 
-
-::
+The mappings are released with::
 
         void io_mapping_unmap(void *vaddr)
 
-  This works like io_mapping_unmap_atomic, except it is used
-  for pages mapped with io_mapping_map_wc.
+  Use for pages mapped with io_mapping_map_wc().
 
 At driver close time, the io_mapping object must be freed::
 
         void io_mapping_free(struct io_mapping *mapping)
-
-Current Implementation
-======================
-
-The initial implementation of these functions uses existing mapping
-mechanisms and so provides only an abstraction layer and no new
-functionality.
-
-On 64-bit processors, io_mapping_create_wc calls ioremap_wc for the whole
-range, creating a permanent kernel-visible mapping to the resource. The
-map_atomic and map functions add the requested offset to the base of the
-virtual address returned by ioremap_wc.
-
-On 32-bit processors with HIGHMEM defined, io_mapping_map_atomic_wc uses
-kmap_atomic_pfn to map the specified page in an atomic fashion;
-kmap_atomic_pfn isn't really supposed to be used with device pages, but it
-provides an efficient mapping for this usage.
-
-On 32-bit processors without HIGHMEM defined, io_mapping_map_atomic_wc and
-io_mapping_map_wc both use ioremap_wc, a terribly inefficient function which
-performs an IPI to inform all processors about the new mapping. This results
-in a significant performance penalty.
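The reverse-order rule for nested local mappings in the documentation above can be demonstrated with a tiny userspace model. This is an illustrative sketch, not the kernel implementation: `map_local()`, `unmap_local()`, and the `map_stack` array are invented names. The point is that local mappings are tracked on a small stack, so releasing anything but the most recent mapping is a bug.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_LOCAL 8

static void *map_stack[MAX_LOCAL];
static int map_top;

static void *map_local(void *page)
{
	assert(map_top < MAX_LOCAL);    /* nesting depth is bounded */
	map_stack[map_top++] = page;    /* returned address is tied to this slot */
	return page;
}

static void unmap_local(void *vaddr)
{
	/* Only the most recent mapping may be undone, mirroring the rule
	 * that io_mapping_unmap_local() calls must nest LIFO. */
	assert(map_top > 0 && map_stack[map_top - 1] == vaddr);
	map_stack[--map_top] = NULL;
}
```

Mapping two pages and unmapping them in creation order (addr1 before addr2) would trip the assertion; unmapping addr2 first, then addr1, unwinds the stack cleanly.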
-15
arch/alpha/include/asm/kmap_types.h
···
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_KMAP_TYPES_H
-#define _ASM_KMAP_TYPES_H
-
-/* Dummy header just to define km_type. */
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-#define __WITH_KM_FENCE
-#endif
-
-#include <asm-generic/kmap_types.h>
-
-#undef __WITH_KM_FENCE
-
-#endif
+1
arch/arc/Kconfig
···
 config HIGHMEM
 	bool "High Memory Support"
 	select ARCH_DISCONTIGMEM_ENABLE
+	select KMAP_LOCAL
 	help
 	  With ARC 2G:2G address split, only upper 2G is directly addressable by
 	  kernel. Enable this to potentially allow access to rest of 2G and PAE
+20 -6
arch/arc/include/asm/highmem.h
···
 #ifdef CONFIG_HIGHMEM
 
 #include <uapi/asm/page.h>
-#include <asm/kmap_types.h>
+#include <asm/kmap_size.h>
+
+#define FIXMAP_SIZE	PGDIR_SIZE
+#define PKMAP_SIZE	PGDIR_SIZE
 
 /* start after vmalloc area */
 #define FIXMAP_BASE	(PAGE_OFFSET - FIXMAP_SIZE - PKMAP_SIZE)
-#define FIXMAP_SIZE	PGDIR_SIZE	/* only 1 PGD worth */
-#define KM_TYPE_NR	((FIXMAP_SIZE >> PAGE_SHIFT)/NR_CPUS)
-#define FIXMAP_ADDR(nr)	(FIXMAP_BASE + ((nr) << PAGE_SHIFT))
+
+#define FIX_KMAP_SLOTS	(KM_MAX_IDX * NR_CPUS)
+#define FIX_KMAP_BEGIN	(0UL)
+#define FIX_KMAP_END	((FIX_KMAP_BEGIN + FIX_KMAP_SLOTS) - 1)
+
+#define FIXADDR_TOP	(FIXMAP_BASE + (FIX_KMAP_END << PAGE_SHIFT))
+
+/*
+ * This should be converted to the asm-generic version, but of course this
+ * is needlessly different from all other architectures. Sigh - tglx
+ */
+#define __fix_to_virt(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))
+#define __virt_to_fix(x)	(((FIXADDR_TOP - ((x) & PAGE_MASK))) >> PAGE_SHIFT)
 
 /* start after fixmap area */
 #define PKMAP_BASE	(FIXMAP_BASE + FIXMAP_SIZE)
-#define PKMAP_SIZE	PGDIR_SIZE
 #define LAST_PKMAP	(PKMAP_SIZE >> PAGE_SHIFT)
 #define LAST_PKMAP_MASK	(LAST_PKMAP - 1)
 #define PKMAP_ADDR(nr)	(PKMAP_BASE + ((nr) << PAGE_SHIFT))
···
 
 extern void kmap_init(void);
 
+#define arch_kmap_local_post_unmap(vaddr)			\
+	local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE)
+
 static inline void flush_cache_kmaps(void)
 {
 	flush_cache_all();
 }
-
 #endif
 
 #endif
-14
arch/arc/include/asm/kmap_types.h
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright (C) 2015 Synopsys, Inc. (www.synopsys.com)
- */
-
-#ifndef _ASM_KMAP_TYPES_H
-#define _ASM_KMAP_TYPES_H
-
-/*
- * We primarily need to define KM_TYPE_NR here but that in turn
- * is a function of PGDIR_SIZE etc.
- * To avoid circular deps issue, put everything in asm/highmem.h
- */
-#endif
+6 -50
arch/arc/mm/highmem.c
···
  * This means each only has 1 PGDIR_SIZE worth of kvaddr mappings, which means
  * 2M of kvaddr space for typical config (8K page and 11:8:13 traversal split)
  *
- * - fixmap anyhow needs a limited number of mappings. So 2M kvaddr == 256 PTE
- *   slots across NR_CPUS would be more than sufficient (generic code defines
- *   KM_TYPE_NR as 20).
+ * - The fixed KMAP slots for kmap_local/atomic() require KM_MAX_IDX slots per
+ *   CPU. So the number of CPUs sharing a single PTE page is limited.
  *
  * - pkmap being preemptible, in theory could do with more than 256 concurrent
  *   mappings. However, generic pkmap code: map_new_virtual(), doesn't traverse
···
  */
 
 extern pte_t * pkmap_page_table;
-static pte_t * fixmap_page_table;
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-	int idx, cpu_idx;
-	unsigned long vaddr;
-
-	cpu_idx = kmap_atomic_idx_push();
-	idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
-	vaddr = FIXMAP_ADDR(idx);
-
-	set_pte_at(&init_mm, vaddr, fixmap_page_table + idx,
-		   mk_pte(page, prot));
-
-	return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kv)
-{
-	unsigned long kvaddr = (unsigned long)kv;
-
-	if (kvaddr >= FIXMAP_BASE && kvaddr < (FIXMAP_BASE + FIXMAP_SIZE)) {
-
-		/*
-		 * Because preemption is disabled, this vaddr can be associated
-		 * with the current allocated index.
-		 * But in case of multiple live kmap_atomic(), it still relies on
-		 * callers to unmap in right order.
-		 */
-		int cpu_idx = kmap_atomic_idx();
-		int idx = cpu_idx + KM_TYPE_NR * smp_processor_id();
-
-		WARN_ON(kvaddr != FIXMAP_ADDR(idx));
-
-		pte_clear(&init_mm, kvaddr, fixmap_page_table + idx);
-		local_flush_tlb_kernel_range(kvaddr, kvaddr + PAGE_SIZE);
-
-		kmap_atomic_idx_pop();
-	}
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
 
 static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
 {
···
 {
 	/* Due to recursive include hell, we can't do this in processor.h */
 	BUILD_BUG_ON(PAGE_OFFSET < (VMALLOC_END + FIXMAP_SIZE + PKMAP_SIZE));
-
-	BUILD_BUG_ON(KM_TYPE_NR > PTRS_PER_PTE);
-	pkmap_page_table = alloc_kmap_pgtable(PKMAP_BASE);
-
 	BUILD_BUG_ON(LAST_PKMAP > PTRS_PER_PTE);
-	fixmap_page_table = alloc_kmap_pgtable(FIXMAP_BASE);
+	BUILD_BUG_ON(FIX_KMAP_SLOTS > PTRS_PER_PTE);
+
+	pkmap_page_table = alloc_kmap_pgtable(PKMAP_BASE);
+	alloc_kmap_pgtable(FIXMAP_BASE);
 }
+1
arch/arm/Kconfig
···
 config HIGHMEM
 	bool "High Memory Support"
 	depends on MMU
+	select KMAP_LOCAL
 	help
 	  The address space of ARM processors is only 4 Gigabytes large
 	  and it has to accommodate user address space, kernel address
+2 -2
arch/arm/include/asm/fixmap.h
···
 #define FIXADDR_TOP	(FIXADDR_END - PAGE_SIZE)
 
 #include <linux/pgtable.h>
-#include <asm/kmap_types.h>
+#include <asm/kmap_size.h>
 
 enum fixed_addresses {
 	FIX_EARLYCON_MEM_BASE,
 	__end_of_permanent_fixed_addresses,
 
 	FIX_KMAP_BEGIN = __end_of_permanent_fixed_addresses,
-	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
+	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
 
 	/* Support writing RO kernel text via kprobes, jump labels, etc. */
 	FIX_TEXT_POKE0,
+24 -10
arch/arm/include/asm/highmem.h
···
 #ifndef _ASM_HIGHMEM_H
 #define _ASM_HIGHMEM_H
 
-#include <asm/kmap_types.h>
+#include <asm/cachetype.h>
+#include <asm/fixmap.h>
 
 #define PKMAP_BASE	(PAGE_OFFSET - PMD_SIZE)
 #define LAST_PKMAP	PTRS_PER_PTE
···
 
 #ifdef ARCH_NEEDS_KMAP_HIGH_GET
 extern void *kmap_high_get(struct page *page);
-#else
+
+static inline void *arch_kmap_local_high_get(struct page *page)
+{
+	if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !cache_is_vivt())
+		return NULL;
+	return kmap_high_get(page);
+}
+#define arch_kmap_local_high_get arch_kmap_local_high_get
+
+#else /* ARCH_NEEDS_KMAP_HIGH_GET */
 static inline void *kmap_high_get(struct page *page)
 {
 	return NULL;
 }
-#endif
+#endif /* !ARCH_NEEDS_KMAP_HIGH_GET */
 
-/*
- * The following functions are already defined by <linux/highmem.h>
- * when CONFIG_HIGHMEM is not set.
- */
-#ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic_pfn(unsigned long pfn);
-#endif
+#define arch_kmap_local_post_map(vaddr, pteval)				\
+	local_flush_tlb_kernel_page(vaddr)
+
+#define arch_kmap_local_pre_unmap(vaddr)				\
+do {									\
+	if (cache_is_vivt())						\
+		__cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE);	\
+} while (0)
+
+#define arch_kmap_local_post_unmap(vaddr)				\
+	local_flush_tlb_kernel_page(vaddr)
 
 #endif
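The arm diff above wires architecture hooks (`arch_kmap_local_post_map`, `arch_kmap_local_pre_unmap`, `arch_kmap_local_post_unmap`) into the generic kmap_local code. The ordering in which the generic code invokes them around the PTE update can be sketched with a small userspace mock. The hook names mirror the macros in the diff; the log-based bodies and the two wrapper functions are invented for the demo.

```c
#include <assert.h>
#include <string.h>

static char log_buf[128];

static void log_ev(const char *ev)
{
	strcat(log_buf, ev);
	strcat(log_buf, " ");
}

/* Stand-ins for the per-arch hooks; a real arch would flush TLB/dcache. */
static void arch_kmap_local_post_map(void)   { log_ev("post_map"); }
static void arch_kmap_local_pre_unmap(void)  { log_ev("pre_unmap"); }
static void arch_kmap_local_post_unmap(void) { log_ev("post_unmap"); }

static void kmap_local_install(void)
{
	log_ev("set_pte");              /* install the fixmap PTE */
	arch_kmap_local_post_map();     /* e.g. flush the local TLB entry */
}

static void kmap_local_remove(void)
{
	arch_kmap_local_pre_unmap();    /* e.g. flush VIVT dcache on ARM */
	log_ev("clear_pte");            /* tear the PTE down */
	arch_kmap_local_post_unmap();   /* e.g. flush the local TLB entry */
}
```

One install/remove cycle produces the event sequence set_pte, post_map, pre_unmap, clear_pte, post_unmap: the post-map hook runs after the PTE exists, and the pre-unmap hook runs while it still exists.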
-10
arch/arm/include/asm/kmap_types.h
···
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ARM_KMAP_TYPES_H
-#define __ARM_KMAP_TYPES_H
-
-/*
- * This is the "bare minimum". AIO seems to require this.
- */
-#define KM_TYPE_NR 16
-
-#endif
-1
arch/arm/mm/Makefile
···
 obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
 
 obj-$(CONFIG_ALIGNMENT_TRAP)	+= alignment.o
-obj-$(CONFIG_HIGHMEM)		+= highmem.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_ARM_PV_FIXUP)	+= pv-fixup-asm.o
-121
arch/arm/mm/highmem.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * arch/arm/mm/highmem.c -- ARM highmem support
- *
- * Author:	Nicolas Pitre
- * Created:	september 8, 2008
- * Copyright:	Marvell Semiconductors Inc.
- */
-
-#include <linux/module.h>
-#include <linux/highmem.h>
-#include <linux/interrupt.h>
-#include <asm/fixmap.h>
-#include <asm/cacheflush.h>
-#include <asm/tlbflush.h>
-#include "mm.h"
-
-static inline void set_fixmap_pte(int idx, pte_t pte)
-{
-	unsigned long vaddr = __fix_to_virt(idx);
-	pte_t *ptep = virt_to_kpte(vaddr);
-
-	set_pte_ext(ptep, pte, 0);
-	local_flush_tlb_kernel_page(vaddr);
-}
-
-static inline pte_t get_fixmap_pte(unsigned long vaddr)
-{
-	pte_t *ptep = virt_to_kpte(vaddr);
-
-	return *ptep;
-}
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-	unsigned int idx;
-	unsigned long vaddr;
-	void *kmap;
-	int type;
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-	/*
-	 * There is no cache coherency issue when non VIVT, so force the
-	 * dedicated kmap usage for better debugging purposes in that case.
-	 */
-	if (!cache_is_vivt())
-		kmap = NULL;
-	else
-#endif
-		kmap = kmap_high_get(page);
-	if (kmap)
-		return kmap;
-
-	type = kmap_atomic_idx_push();
-
-	idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
-	vaddr = __fix_to_virt(idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-	/*
-	 * With debugging enabled, kunmap_atomic forces that entry to 0.
-	 * Make sure it was indeed properly unmapped.
-	 */
-	BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
-#endif
-	/*
-	 * When debugging is off, kunmap_atomic leaves the previous mapping
-	 * in place, so the contained TLB flush ensures the TLB is updated
-	 * with the new mapping.
-	 */
-	set_fixmap_pte(idx, mk_pte(page, prot));
-
-	return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-	unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-	int idx, type;
-
-	if (kvaddr >= (void *)FIXADDR_START) {
-		type = kmap_atomic_idx();
-		idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
-
-		if (cache_is_vivt())
-			__cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE);
-#ifdef CONFIG_DEBUG_HIGHMEM
-		BUG_ON(vaddr != __fix_to_virt(idx));
-		set_fixmap_pte(idx, __pte(0));
-#else
-		(void) idx;  /* to kill a warning */
-#endif
-		kmap_atomic_idx_pop();
-	} else if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) {
-		/* this address was obtained through kmap_high_get() */
-		kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)]));
-	}
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
-
-void *kmap_atomic_pfn(unsigned long pfn)
-{
-	unsigned long vaddr;
-	int idx, type;
-	struct page *page = pfn_to_page(pfn);
-
-	preempt_disable();
-	pagefault_disable();
-	if (!PageHighMem(page))
-		return page_address(page);
-
-	type = kmap_atomic_idx_push();
-	idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
-	vaddr = __fix_to_virt(idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-	BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
-#endif
-	set_fixmap_pte(idx, pfn_pte(pfn, kmap_prot));
-
-	return (void *)vaddr;
-}
+1
arch/csky/Kconfig
···
 config HIGHMEM
 	bool "High Memory Support"
 	depends on !CPU_CK610
+	select KMAP_LOCAL
 	default y
 
 config FORCE_MAX_ZONEORDER
+2 -2
arch/csky/include/asm/fixmap.h
···
 #include <asm/memory.h>
 #ifdef CONFIG_HIGHMEM
 #include <linux/threads.h>
-#include <asm/kmap_types.h>
+#include <asm/kmap_size.h>
 #endif
 
 enum fixed_addresses {
···
 #endif
 #ifdef CONFIG_HIGHMEM
 	FIX_KMAP_BEGIN,
-	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
+	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
 #endif
 	__end_of_fixed_addresses
 };
+4 -2
arch/csky/include/asm/highmem.h
···
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/uaccess.h>
-#include <asm/kmap_types.h>
+#include <asm/kmap_size.h>
 #include <asm/cache.h>
 
 /* undef for production */
···
 
 #define ARCH_HAS_KMAP_FLUSH_TLB
 extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic_pfn(unsigned long pfn);
 
 #define flush_cache_kmaps() do {} while (0)
+
+#define arch_kmap_local_post_map(vaddr, pteval)	kmap_flush_tlb(vaddr)
+#define arch_kmap_local_post_unmap(vaddr)	kmap_flush_tlb(vaddr)
 
 extern void kmap_init(void);
+1 -74
arch/csky/mm/highmem.c
···
 #include <asm/tlbflush.h>
 #include <asm/cacheflush.h>
 
-static pte_t *kmap_pte;
-
 unsigned long highstart_pfn, highend_pfn;
 
 void kmap_flush_tlb(unsigned long addr)
···
 }
 EXPORT_SYMBOL(kmap_flush_tlb);
 
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-	unsigned long vaddr;
-	int idx, type;
-
-	type = kmap_atomic_idx_push();
-	idx = type + KM_TYPE_NR*smp_processor_id();
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-	BUG_ON(!pte_none(*(kmap_pte - idx)));
-#endif
-	set_pte(kmap_pte-idx, mk_pte(page, prot));
-	flush_tlb_one((unsigned long)vaddr);
-
-	return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-	unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-	int idx;
-
-	if (vaddr < FIXADDR_START)
-		return;
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-	idx = KM_TYPE_NR*smp_processor_id() + kmap_atomic_idx();
-
-	BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-
-	pte_clear(&init_mm, vaddr, kmap_pte - idx);
-	flush_tlb_one(vaddr);
-#else
-	(void) idx; /* to kill a warning */
-#endif
-	kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
-
-/*
- * This is the same as kmap_atomic() but can map memory that doesn't
- * have a struct page associated with it.
- */
-void *kmap_atomic_pfn(unsigned long pfn)
-{
-	unsigned long vaddr;
-	int idx, type;
-
-	pagefault_disable();
-
-	type = kmap_atomic_idx_push();
-	idx = type + KM_TYPE_NR*smp_processor_id();
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-	set_pte(kmap_pte-idx, pfn_pte(pfn, PAGE_KERNEL));
-	flush_tlb_one(vaddr);
-
-	return (void *) vaddr;
-}
-
-static void __init kmap_pages_init(void)
+void __init kmap_init(void)
 {
 	unsigned long vaddr;
 	pgd_t *pgd;
···
 	pmd = pmd_offset(pud, vaddr);
 	pte = pte_offset_kernel(pmd, vaddr);
 	pkmap_page_table = pte;
-}
-
-void __init kmap_init(void)
-{
-	unsigned long vaddr;
-
-	kmap_pages_init();
-
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN);
-
-	kmap_pte = pte_offset_kernel((pmd_t *)pgd_offset_k(vaddr), vaddr);
 }
-13
arch/ia64/include/asm/kmap_types.h
···
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_IA64_KMAP_TYPES_H
-#define _ASM_IA64_KMAP_TYPES_H
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-#define __WITH_KM_FENCE
-#endif
-
-#include <asm-generic/kmap_types.h>
-
-#undef __WITH_KM_FENCE
-
-#endif /* _ASM_IA64_KMAP_TYPES_H */
+1
arch/microblaze/Kconfig
···
 config HIGHMEM
 	bool "High memory support"
 	depends on MMU
+	select KMAP_LOCAL
 	help
 	  The address space of Microblaze processors is only 4 Gigabytes large
 	  and it has to accommodate user address space, kernel address
+2 -2
arch/microblaze/include/asm/fixmap.h
···
 #include <asm/page.h>
 #ifdef CONFIG_HIGHMEM
 #include <linux/threads.h>
-#include <asm/kmap_types.h>
+#include <asm/kmap_size.h>
 #endif
 
 #define FIXADDR_TOP	((unsigned long)(-PAGE_SIZE))
···
 	FIX_HOLE,
 #ifdef CONFIG_HIGHMEM
 	FIX_KMAP_BEGIN,	/* reserved pte's for temporary kernel mappings */
-	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * num_possible_cpus()) - 1,
+	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * num_possible_cpus()) - 1,
 #endif
 	__end_of_fixed_addresses
 };
+5 -1
arch/microblaze/include/asm/highmem.h
···
 #include <linux/uaccess.h>
 #include <asm/fixmap.h>
 
-extern pte_t *kmap_pte;
 extern pte_t *pkmap_page_table;
 
 /*
···
 #define PKMAP_ADDR(nr)	(PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
 #define flush_cache_kmaps()	{ flush_icache(); flush_dcache(); }
+
+#define arch_kmap_local_post_map(vaddr, pteval)	\
+	local_flush_tlb_page(NULL, vaddr);
+#define arch_kmap_local_post_unmap(vaddr)	\
+	local_flush_tlb_page(NULL, vaddr);
 
 #endif /* __KERNEL__ */
-1
arch/microblaze/mm/Makefile
···
 obj-y := consistent.o init.o
 
 obj-$(CONFIG_MMU) += pgtable.o mmu_context.o fault.o
-obj-$(CONFIG_HIGHMEM) += highmem.o
-78
arch/microblaze/mm/highmem.c
···
-// SPDX-License-Identifier: GPL-2.0
-/*
- * highmem.c: virtual kernel memory mappings for high memory
- *
- * PowerPC version, stolen from the i386 version.
- *
- * Used in CONFIG_HIGHMEM systems for memory pages which
- * are not addressable by direct kernel virtual addresses.
- *
- * Copyright (C) 1999 Gerhard Wichert, Siemens AG
- *		      Gerhard.Wichert@pdb.siemens.de
- *
- *
- * Redesigned the x86 32-bit VM architecture to deal with
- * up to 16 Terrabyte physical memory. With current x86 CPUs
- * we now support up to 64 Gigabytes physical RAM.
- *
- * Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
- *
- * Reworked for PowerPC by various contributors. Moved from
- * highmem.h by Benjamin Herrenschmidt (c) 2009 IBM Corp.
- */
-
-#include <linux/export.h>
-#include <linux/highmem.h>
-
-/*
- * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
- * gives a more generic (and caching) interface. But kmap_atomic can
- * be used in IRQ contexts, so in some (very limited) cases we need
- * it.
- */
-#include <asm/tlbflush.h>
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-
-	unsigned long vaddr;
-	int idx, type;
-
-	type = kmap_atomic_idx_push();
-	idx = type + KM_TYPE_NR*smp_processor_id();
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-	BUG_ON(!pte_none(*(kmap_pte-idx)));
-#endif
-	set_pte_at(&init_mm, vaddr, kmap_pte-idx, mk_pte(page, prot));
-	local_flush_tlb_page(NULL, vaddr);
-
-	return (void *) vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-	unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-	int type;
-	unsigned int idx;
-
-	if (vaddr < __fix_to_virt(FIX_KMAP_END))
-		return;
-
-	type = kmap_atomic_idx();
-
-	idx = type + KM_TYPE_NR * smp_processor_id();
-#ifdef CONFIG_DEBUG_HIGHMEM
-	BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-#endif
-	/*
-	 * force other mappings to Oops if they'll try to access
-	 * this pte without first remap it
-	 */
-	pte_clear(&init_mm, vaddr, kmap_pte-idx);
-	local_flush_tlb_page(NULL, vaddr);
-
-	kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
-5
arch/microblaze/mm/init.c
···
 EXPORT_SYMBOL(max_low_pfn);
 
 #ifdef CONFIG_HIGHMEM
-pte_t *kmap_pte;
-EXPORT_SYMBOL(kmap_pte);
-
 static void __init highmem_init(void)
 {
 	pr_debug("%x\n", (u32)PKMAP_BASE);
 	map_page(PKMAP_BASE, 0, 0);	/* XXX gross */
 	pkmap_page_table = virt_to_kpte(PKMAP_BASE);
-
-	kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
 }
 
 static void highmem_setup(void)
+1
arch/mips/Kconfig
···
 config HIGHMEM
 	bool "High Memory Support"
 	depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM && !CPU_MIPS32_3_5_EVA
+	select KMAP_LOCAL
 
 config CPU_SUPPORTS_HIGHMEM
 	bool
+2 -2
arch/mips/include/asm/fixmap.h
···
 #include <spaces.h>
 #ifdef CONFIG_HIGHMEM
 #include <linux/threads.h>
-#include <asm/kmap_types.h>
+#include <asm/kmap_size.h>
 #endif
 
 /*
···
 #ifdef CONFIG_HIGHMEM
 	/* reserved pte's for temporary kernel mappings */
 	FIX_KMAP_BEGIN = FIX_CMAP_END + 1,
-	FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+	FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1,
 #endif
 	__end_of_fixed_addresses
 };
+3 -3
arch/mips/include/asm/highmem.h
···
 #include <linux/interrupt.h>
 #include <linux/uaccess.h>
 #include <asm/cpu-features.h>
-#include <asm/kmap_types.h>
+#include <asm/kmap_size.h>
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
···
 
 #define ARCH_HAS_KMAP_FLUSH_TLB
 extern void kmap_flush_tlb(unsigned long addr);
-extern void *kmap_atomic_pfn(unsigned long pfn);
 
 #define flush_cache_kmaps()	BUG_ON(cpu_has_dc_aliases)
 
-extern void kmap_init(void);
+#define arch_kmap_local_post_map(vaddr, pteval)	local_flush_tlb_one(vaddr)
+#define arch_kmap_local_post_unmap(vaddr)	local_flush_tlb_one(vaddr)
 
 #endif /* __KERNEL__ */
-13
arch/mips/include/asm/kmap_types.h
···
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_KMAP_TYPES_H
-#define _ASM_KMAP_TYPES_H
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-#define __WITH_KM_FENCE
-#endif
-
-#include <asm-generic/kmap_types.h>
-
-#undef __WITH_KM_FENCE
-
-#endif
-77
arch/mips/mm/highmem.c
···
 #include <asm/fixmap.h>
 #include <asm/tlbflush.h>
 
-static pte_t *kmap_pte;
-
 unsigned long highstart_pfn, highend_pfn;
 
 void kmap_flush_tlb(unsigned long addr)
···
 	flush_tlb_one(addr);
 }
 EXPORT_SYMBOL(kmap_flush_tlb);
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-	unsigned long vaddr;
-	int idx, type;
-
-	type = kmap_atomic_idx_push();
-	idx = type + KM_TYPE_NR*smp_processor_id();
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-	BUG_ON(!pte_none(*(kmap_pte - idx)));
-#endif
-	set_pte(kmap_pte-idx, mk_pte(page, prot));
-	local_flush_tlb_one((unsigned long)vaddr);
-
-	return (void*) vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-	unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-	int type __maybe_unused;
-
-	if (vaddr < FIXADDR_START)
-		return;
-
-	type = kmap_atomic_idx();
-#ifdef CONFIG_DEBUG_HIGHMEM
-	{
-		int idx = type + KM_TYPE_NR * smp_processor_id();
-
-		BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-
-		/*
-		 * force other mappings to Oops if they'll try to access
-		 * this pte without first remap it
-		 */
-		pte_clear(&init_mm, vaddr, kmap_pte-idx);
-		local_flush_tlb_one(vaddr);
-	}
-#endif
-	kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
-
-/*
- * This is the same as kmap_atomic() but can map memory that doesn't
- * have a struct page associated with it.
- */
-void *kmap_atomic_pfn(unsigned long pfn)
-{
-	unsigned long vaddr;
-	int idx, type;
-
-	preempt_disable();
-	pagefault_disable();
-
-	type = kmap_atomic_idx_push();
-	idx = type + KM_TYPE_NR*smp_processor_id();
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-	set_pte(kmap_pte-idx, pfn_pte(pfn, PAGE_KERNEL));
-	flush_tlb_one(vaddr);
-
-	return (void*) vaddr;
-}
-
-void __init kmap_init(void)
-{
-	unsigned long kmap_vstart;
-
-	/* cache the first kmap pte */
-	kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
-	kmap_pte = virt_to_kpte(kmap_vstart);
-}
-4
arch/mips/mm/init.c
··· 36 36 #include <asm/cachectl.h> 37 37 #include <asm/cpu.h> 38 38 #include <asm/dma.h> 39 - #include <asm/kmap_types.h> 40 39 #include <asm/maar.h> 41 40 #include <asm/mmu_context.h> 42 41 #include <asm/sections.h> ··· 401 402 402 403 pagetable_init(); 403 404 404 - #ifdef CONFIG_HIGHMEM 405 - kmap_init(); 406 - #endif 407 405 #ifdef CONFIG_ZONE_DMA 408 406 max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN; 409 407 #endif
+1
arch/nds32/Kconfig.cpu
··· 157 157 config HIGHMEM 158 158 bool "High Memory Support" 159 159 depends on MMU && !CPU_CACHE_ALIASING 160 + select KMAP_LOCAL 160 161 help 161 162 The address space of Andes processors is only 4 Gigabytes large 162 163 and it has to accommodate user address space, kernel address
+2 -2
arch/nds32/include/asm/fixmap.h
··· 6 6 7 7 #ifdef CONFIG_HIGHMEM 8 8 #include <linux/threads.h> 9 - #include <asm/kmap_types.h> 9 + #include <asm/kmap_size.h> 10 10 #endif 11 11 12 12 enum fixed_addresses { ··· 14 14 FIX_KMAP_RESERVED, 15 15 FIX_KMAP_BEGIN, 16 16 #ifdef CONFIG_HIGHMEM 17 - FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS), 17 + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, 18 18 #endif 19 19 FIX_EARLYCON_MEM_BASE, 20 20 __end_of_fixed_addresses
+16 -6
arch/nds32/include/asm/highmem.h
··· 5 5 #define _ASM_HIGHMEM_H 6 6 7 7 #include <asm/proc-fns.h> 8 - #include <asm/kmap_types.h> 9 8 #include <asm/fixmap.h> 10 9 11 10 /* ··· 44 45 extern void kmap_init(void); 45 46 46 47 /* 47 - * The following functions are already defined by <linux/highmem.h> 48 - * when CONFIG_HIGHMEM is not set. 48 + * FIXME: The below looks broken vs. a kmap_atomic() in task context which 49 + * is interrupted and another kmap_atomic() happens in interrupt context. 50 + * But what do I know about nds32. -- tglx 49 51 */ 50 - #ifdef CONFIG_HIGHMEM 51 - extern void *kmap_atomic_pfn(unsigned long pfn); 52 - #endif 52 + #define arch_kmap_local_post_map(vaddr, pteval) \ 53 + do { \ 54 + __nds32__tlbop_inv(vaddr); \ 55 + __nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN); \ 56 + __nds32__tlbop_rwr(pteval); \ 57 + __nds32__isb(); \ 58 + } while (0) 59 + 60 + #define arch_kmap_local_pre_unmap(vaddr) \ 61 + do { \ 62 + __nds32__tlbop_inv(vaddr); \ 63 + __nds32__isb(); \ 64 + } while (0) 53 65 54 66 #endif
-1
arch/nds32/mm/Makefile
··· 3 3 mm-nds32.o cacheflush.o proc.o 4 4 5 5 obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o 6 - obj-$(CONFIG_HIGHMEM) += highmem.o 7 6 8 7 ifdef CONFIG_FUNCTION_TRACER 9 8 CFLAGS_REMOVE_proc.o = $(CC_FLAGS_FTRACE)
-48
arch/nds32/mm/highmem.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/export.h> 5 - #include <linux/highmem.h> 6 - #include <linux/sched.h> 7 - #include <linux/smp.h> 8 - #include <linux/interrupt.h> 9 - #include <linux/memblock.h> 10 - #include <asm/fixmap.h> 11 - #include <asm/tlbflush.h> 12 - 13 - void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) 14 - { 15 - unsigned int idx; 16 - unsigned long vaddr, pte; 17 - int type; 18 - pte_t *ptep; 19 - 20 - type = kmap_atomic_idx_push(); 21 - 22 - idx = type + KM_TYPE_NR * smp_processor_id(); 23 - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); 24 - pte = (page_to_pfn(page) << PAGE_SHIFT) | prot; 25 - ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr); 26 - set_pte(ptep, pte); 27 - 28 - __nds32__tlbop_inv(vaddr); 29 - __nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN); 30 - __nds32__tlbop_rwr(pte); 31 - __nds32__isb(); 32 - return (void *)vaddr; 33 - } 34 - EXPORT_SYMBOL(kmap_atomic_high_prot); 35 - 36 - void kunmap_atomic_high(void *kvaddr) 37 - { 38 - if (kvaddr >= (void *)FIXADDR_START) { 39 - unsigned long vaddr = (unsigned long)kvaddr; 40 - pte_t *ptep; 41 - kmap_atomic_idx_pop(); 42 - __nds32__tlbop_inv(vaddr); 43 - __nds32__isb(); 44 - ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr); 45 - set_pte(ptep, 0); 46 - } 47 - } 48 - EXPORT_SYMBOL(kunmap_atomic_high);
-1
arch/openrisc/mm/init.c
··· 33 33 #include <asm/io.h> 34 34 #include <asm/tlb.h> 35 35 #include <asm/mmu_context.h> 36 - #include <asm/kmap_types.h> 37 36 #include <asm/fixmap.h> 38 37 #include <asm/tlbflush.h> 39 38 #include <asm/sections.h>
-1
arch/openrisc/mm/ioremap.c
··· 15 15 #include <linux/io.h> 16 16 #include <linux/pgtable.h> 17 17 #include <asm/pgalloc.h> 18 - #include <asm/kmap_types.h> 19 18 #include <asm/fixmap.h> 20 19 #include <asm/bug.h> 21 20 #include <linux/sched.h>
-13
arch/parisc/include/asm/kmap_types.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _ASM_KMAP_TYPES_H 3 - #define _ASM_KMAP_TYPES_H 4 - 5 - #ifdef CONFIG_DEBUG_HIGHMEM 6 - #define __WITH_KM_FENCE 7 - #endif 8 - 9 - #include <asm-generic/kmap_types.h> 10 - 11 - #undef __WITH_KM_FENCE 12 - 13 - #endif
+1
arch/powerpc/Kconfig
··· 410 410 config HIGHMEM 411 411 bool "High memory support" 412 412 depends on PPC32 413 + select KMAP_LOCAL 413 414 414 415 source "kernel/Kconfig.hz" 415 416
+2 -2
arch/powerpc/include/asm/fixmap.h
··· 20 20 #include <asm/page.h> 21 21 #ifdef CONFIG_HIGHMEM 22 22 #include <linux/threads.h> 23 - #include <asm/kmap_types.h> 23 + #include <asm/kmap_size.h> 24 24 #endif 25 25 26 26 #ifdef CONFIG_KASAN ··· 55 55 FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128K, PAGE_SIZE)/PAGE_SIZE)-1, 56 56 #ifdef CONFIG_HIGHMEM 57 57 FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */ 58 - FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1, 58 + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, 59 59 #endif 60 60 #ifdef CONFIG_PPC_8xx 61 61 /* For IMMR we need an aligned 512K area */
+5 -2
arch/powerpc/include/asm/highmem.h
··· 24 24 #ifdef __KERNEL__ 25 25 26 26 #include <linux/interrupt.h> 27 - #include <asm/kmap_types.h> 28 27 #include <asm/cacheflush.h> 29 28 #include <asm/page.h> 30 29 #include <asm/fixmap.h> 31 30 32 - extern pte_t *kmap_pte; 33 31 extern pte_t *pkmap_page_table; 34 32 35 33 /* ··· 57 59 #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT)) 58 60 59 61 #define flush_cache_kmaps() flush_cache_all() 62 + 63 + #define arch_kmap_local_post_map(vaddr, pteval) \ 64 + local_flush_tlb_page(NULL, vaddr) 65 + #define arch_kmap_local_post_unmap(vaddr) \ 66 + local_flush_tlb_page(NULL, vaddr) 60 67 61 68 #endif /* __KERNEL__ */ 62 69
-13
arch/powerpc/include/asm/kmap_types.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - #ifndef _ASM_POWERPC_KMAP_TYPES_H 3 - #define _ASM_POWERPC_KMAP_TYPES_H 4 - 5 - #ifdef __KERNEL__ 6 - 7 - /* 8 - */ 9 - 10 - #define KM_TYPE_NR 16 11 - 12 - #endif /* __KERNEL__ */ 13 - #endif /* _ASM_POWERPC_KMAP_TYPES_H */
-1
arch/powerpc/mm/Makefile
··· 16 16 obj-$(CONFIG_PPC_MM_SLICES) += slice.o 17 17 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o 18 18 obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o 19 - obj-$(CONFIG_HIGHMEM) += highmem.o 20 19 obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o 21 20 obj-$(CONFIG_PPC_PTDUMP) += ptdump/ 22 21 obj-$(CONFIG_KASAN) += kasan/
-67
arch/powerpc/mm/highmem.c
···
-// SPDX-License-Identifier: GPL-2.0
-/*
- * highmem.c: virtual kernel memory mappings for high memory
- *
- * PowerPC version, stolen from the i386 version.
- *
- * Used in CONFIG_HIGHMEM systems for memory pages which
- * are not addressable by direct kernel virtual addresses.
- *
- * Copyright (C) 1999 Gerhard Wichert, Siemens AG
- *		      Gerhard.Wichert@pdb.siemens.de
- *
- *
- * Redesigned the x86 32-bit VM architecture to deal with
- * up to 16 Terrabyte physical memory. With current x86 CPUs
- * we now support up to 64 Gigabytes physical RAM.
- *
- * Copyright (C) 1999 Ingo Molnar <mingo@redhat.com>
- *
- * Reworked for PowerPC by various contributors. Moved from
- * highmem.h by Benjamin Herrenschmidt (c) 2009 IBM Corp.
- */
-
-#include <linux/highmem.h>
-#include <linux/module.h>
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-	unsigned long vaddr;
-	int idx, type;
-
-	type = kmap_atomic_idx_push();
-	idx = type + KM_TYPE_NR*smp_processor_id();
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-	WARN_ON(IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !pte_none(*(kmap_pte - idx)));
-	__set_pte_at(&init_mm, vaddr, kmap_pte-idx, mk_pte(page, prot), 1);
-	local_flush_tlb_page(NULL, vaddr);
-
-	return (void*) vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-	unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-
-	if (vaddr < __fix_to_virt(FIX_KMAP_END))
-		return;
-
-	if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM)) {
-		int type = kmap_atomic_idx();
-		unsigned int idx;
-
-		idx = type + KM_TYPE_NR * smp_processor_id();
-		WARN_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-
-		/*
-		 * force other mappings to Oops if they'll try to access
-		 * this pte without first remap it
-		 */
-		pte_clear(&init_mm, vaddr, kmap_pte-idx);
-		local_flush_tlb_page(NULL, vaddr);
-	}
-
-	kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
-7
arch/powerpc/mm/mem.c
··· 62 62 unsigned long long memory_limit; 63 63 bool init_mem_is_free; 64 64 65 - #ifdef CONFIG_HIGHMEM 66 - pte_t *kmap_pte; 67 - EXPORT_SYMBOL(kmap_pte); 68 - #endif 69 - 70 65 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, 71 66 unsigned long size, pgprot_t vma_prot) 72 67 { ··· 231 236 232 237 map_kernel_page(PKMAP_BASE, 0, __pgprot(0)); /* XXX gross */ 233 238 pkmap_page_table = virt_to_kpte(PKMAP_BASE); 234 - 235 - kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN)); 236 239 #endif /* CONFIG_HIGHMEM */ 237 240 238 241 printk(KERN_DEBUG "Top of RAM: 0x%llx, Total RAM: 0x%llx\n",
-8
arch/sh/include/asm/fixmap.h
··· 13 13 #include <linux/kernel.h> 14 14 #include <linux/threads.h> 15 15 #include <asm/page.h> 16 - #ifdef CONFIG_HIGHMEM 17 - #include <asm/kmap_types.h> 18 - #endif 19 16 20 17 /* 21 18 * Here we define all the compile-time 'special' virtual ··· 49 52 #define FIX_N_COLOURS 8 50 53 FIX_CMAP_BEGIN, 51 54 FIX_CMAP_END = FIX_CMAP_BEGIN + (FIX_N_COLOURS * NR_CPUS) - 1, 52 - 53 - #ifdef CONFIG_HIGHMEM 54 - FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */ 55 - FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1, 56 - #endif 57 55 58 56 #ifdef CONFIG_IOREMAP_FIXED 59 57 /*
-15
arch/sh/include/asm/kmap_types.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef __SH_KMAP_TYPES_H 3 - #define __SH_KMAP_TYPES_H 4 - 5 - /* Dummy header just to define km_type. */ 6 - 7 - #ifdef CONFIG_DEBUG_HIGHMEM 8 - #define __WITH_KM_FENCE 9 - #endif 10 - 11 - #include <asm-generic/kmap_types.h> 12 - 13 - #undef __WITH_KM_FENCE 14 - 15 - #endif
-8
arch/sh/mm/init.c
··· 362 362 mem_init_print_info(NULL); 363 363 pr_info("virtual kernel memory layout:\n" 364 364 " fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n" 365 - #ifdef CONFIG_HIGHMEM 366 - " pkmap : 0x%08lx - 0x%08lx (%4ld kB)\n" 367 - #endif 368 365 " vmalloc : 0x%08lx - 0x%08lx (%4ld MB)\n" 369 366 " lowmem : 0x%08lx - 0x%08lx (%4ld MB) (cached)\n" 370 367 #ifdef CONFIG_UNCACHED_MAPPING ··· 372 375 " .text : 0x%08lx - 0x%08lx (%4ld kB)\n", 373 376 FIXADDR_START, FIXADDR_TOP, 374 377 (FIXADDR_TOP - FIXADDR_START) >> 10, 375 - 376 - #ifdef CONFIG_HIGHMEM 377 - PKMAP_BASE, PKMAP_BASE+LAST_PKMAP*PAGE_SIZE, 378 - (LAST_PKMAP*PAGE_SIZE) >> 10, 379 - #endif 380 378 381 379 (unsigned long)VMALLOC_START, VMALLOC_END, 382 380 (VMALLOC_END - VMALLOC_START) >> 20,
+1
arch/sparc/Kconfig
··· 139 139 config HIGHMEM 140 140 bool 141 141 default y if SPARC32 142 + select KMAP_LOCAL 142 143 143 144 config ZONE_DMA 144 145 bool
+5 -3
arch/sparc/include/asm/highmem.h
··· 24 24 #include <linux/interrupt.h> 25 25 #include <linux/pgtable.h> 26 26 #include <asm/vaddrs.h> 27 - #include <asm/kmap_types.h> 28 27 #include <asm/pgtsrmmu.h> 29 28 30 29 /* declarations for highmem.c */ ··· 31 32 32 33 #define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE) 33 34 extern pte_t *pkmap_page_table; 34 - 35 - void kmap_init(void) __init; 36 35 37 36 /* 38 37 * Right now we initialize only a single pte table. It can be extended ··· 49 52 #define PKMAP_END (PKMAP_ADDR(LAST_PKMAP)) 50 53 51 54 #define flush_cache_kmaps() flush_cache_all() 55 + 56 + /* FIXME: Use __flush_tlb_one(vaddr) instead of flush_cache_all() -- Anton */ 57 + #define arch_kmap_local_post_map(vaddr, pteval) flush_cache_all() 58 + #define arch_kmap_local_post_unmap(vaddr) flush_cache_all() 59 + 52 60 53 61 #endif /* __KERNEL__ */ 54 62
-11
arch/sparc/include/asm/kmap_types.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _ASM_KMAP_TYPES_H 3 - #define _ASM_KMAP_TYPES_H 4 - 5 - /* Dummy header just to define km_type. None of this 6 - * is actually used on sparc. -DaveM 7 - */ 8 - 9 - #include <asm-generic/kmap_types.h> 10 - 11 - #endif
+2 -2
arch/sparc/include/asm/vaddrs.h
··· 32 32 #define SRMMU_NOCACHE_ALCRATIO 64 /* 256 pages per 64MB of system RAM */ 33 33 34 34 #ifndef __ASSEMBLY__ 35 - #include <asm/kmap_types.h> 35 + #include <asm/kmap_size.h> 36 36 37 37 enum fixed_addresses { 38 38 FIX_HOLE, 39 39 #ifdef CONFIG_HIGHMEM 40 40 FIX_KMAP_BEGIN, 41 - FIX_KMAP_END = (KM_TYPE_NR * NR_CPUS), 41 + FIX_KMAP_END = (KM_MAX_IDX * NR_CPUS), 42 42 #endif 43 43 __end_of_fixed_addresses 44 44 };
-3
arch/sparc/mm/Makefile
··· 15 15 16 16 # Only used by sparc64 17 17 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o 18 - 19 - # Only used by sparc32 20 - obj-$(CONFIG_HIGHMEM) += highmem.o
-115
arch/sparc/mm/highmem.c
···
-// SPDX-License-Identifier: GPL-2.0
-/*
- * highmem.c: virtual kernel memory mappings for high memory
- *
- * Provides kernel-static versions of atomic kmap functions originally
- * found as inlines in include/asm-sparc/highmem.h. These became
- * needed as kmap_atomic() and kunmap_atomic() started getting
- * called from within modules.
- * -- Tomas Szepe <szepe@pinerecords.com>, September 2002
- *
- * But kmap_atomic() and kunmap_atomic() cannot be inlined in
- * modules because they are loaded with btfixup-ped functions.
- */
-
-/*
- * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
- * gives a more generic (and caching) interface. But kmap_atomic can
- * be used in IRQ contexts, so in some (very limited) cases we need it.
- *
- * XXX This is an old text. Actually, it's good to use atomic kmaps,
- * provided you remember that they are atomic and not try to sleep
- * with a kmap taken, much like a spinlock. Non-atomic kmaps are
- * shared by CPUs, and so precious, and establishing them requires IPI.
- * Atomic kmaps are lightweight and we may have NCPUS more of them.
- */
-#include <linux/highmem.h>
-#include <linux/export.h>
-#include <linux/mm.h>
-
-#include <asm/cacheflush.h>
-#include <asm/tlbflush.h>
-#include <asm/vaddrs.h>
-
-static pte_t *kmap_pte;
-
-void __init kmap_init(void)
-{
-	unsigned long address = __fix_to_virt(FIX_KMAP_BEGIN);
-
-	/* cache the first kmap pte */
-	kmap_pte = virt_to_kpte(address);
-}
-
-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
-{
-	unsigned long vaddr;
-	long idx, type;
-
-	type = kmap_atomic_idx_push();
-	idx = type + KM_TYPE_NR*smp_processor_id();
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-
-/* XXX Fix - Anton */
-#if 0
-	__flush_cache_one(vaddr);
-#else
-	flush_cache_all();
-#endif
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-	BUG_ON(!pte_none(*(kmap_pte-idx)));
-#endif
-	set_pte(kmap_pte-idx, mk_pte(page, prot));
-/* XXX Fix - Anton */
-#if 0
-	__flush_tlb_one(vaddr);
-#else
-	flush_tlb_all();
-#endif
-
-	return (void*) vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-	unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-	int type;
-
-	if (vaddr < FIXADDR_START)
-		return;
-
-	type = kmap_atomic_idx();
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-	{
-		unsigned long idx;
-
-		idx = type + KM_TYPE_NR * smp_processor_id();
-		BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN+idx));
-
-		/* XXX Fix - Anton */
-#if 0
-		__flush_cache_one(vaddr);
-#else
-		flush_cache_all();
-#endif
-
-		/*
-		 * force other mappings to Oops if they'll try to access
-		 * this pte without first remap it
-		 */
-		pte_clear(&init_mm, vaddr, kmap_pte-idx);
-		/* XXX Fix - Anton */
-#if 0
-		__flush_tlb_one(vaddr);
-#else
-		flush_tlb_all();
-#endif
-	}
-#endif
-
-	kmap_atomic_idx_pop();
-}
-EXPORT_SYMBOL(kunmap_atomic_high);
-2
arch/sparc/mm/srmmu.c
··· 971 971 972 972 sparc_context_init(num_contexts); 973 973 974 - kmap_init(); 975 - 976 974 { 977 975 unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 }; 978 976
-1
arch/um/include/asm/fixmap.h
··· 3 3 #define __UM_FIXMAP_H 4 4 5 5 #include <asm/processor.h> 6 - #include <asm/kmap_types.h> 7 6 #include <asm/archparam.h> 8 7 #include <asm/page.h> 9 8 #include <linux/threads.h>
-13
arch/um/include/asm/kmap_types.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Copyright (C) 2002 Jeff Dike (jdike@karaya.com) 4 - */ 5 - 6 - #ifndef __UM_KMAP_TYPES_H 7 - #define __UM_KMAP_TYPES_H 8 - 9 - /* No more #include "asm/arch/kmap_types.h" ! */ 10 - 11 - #define KM_TYPE_NR 14 12 - 13 - #endif
+3 -1
arch/x86/Kconfig
··· 14 14 select ARCH_WANT_IPC_PARSE_VERSION 15 15 select CLKSRC_I8253 16 16 select CLONE_BACKWARDS 17 + select GENERIC_VDSO_32 17 18 select HAVE_DEBUG_STACKOVERFLOW 19 + select KMAP_LOCAL 18 20 select MODULES_USE_ELF_REL 19 21 select OLD_SIGACTION 20 - select GENERIC_VDSO_32 21 22 22 23 config X86_64 23 24 def_bool y ··· 93 92 select ARCH_SUPPORTS_ACPI 94 93 select ARCH_SUPPORTS_ATOMIC_RMW 95 94 select ARCH_SUPPORTS_NUMA_BALANCING if X86_64 95 + select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP if NR_CPUS <= 4096 96 96 select ARCH_USE_BUILTIN_BSWAP 97 97 select ARCH_USE_QUEUED_RWLOCKS 98 98 select ARCH_USE_QUEUED_SPINLOCKS
+10 -5
arch/x86/include/asm/fixmap.h
··· 14 14 #ifndef _ASM_X86_FIXMAP_H 15 15 #define _ASM_X86_FIXMAP_H 16 16 17 + #include <asm/kmap_size.h> 18 + 17 19 /* 18 20 * Exposed to assembly code for setting up initial page tables. Cannot be 19 21 * calculated in assembly code (fixmap entries are an enum), but is sanity 20 22 * checked in the actual fixmap C code to make sure that the fixmap is 21 23 * covered fully. 22 24 */ 23 - #define FIXMAP_PMD_NUM 2 25 + #ifndef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP 26 + # define FIXMAP_PMD_NUM 2 27 + #else 28 + # define KM_PMDS (KM_MAX_IDX * ((CONFIG_NR_CPUS + 511) / 512)) 29 + # define FIXMAP_PMD_NUM (KM_PMDS + 2) 30 + #endif 24 31 /* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */ 25 32 #define FIXMAP_PMD_TOP 507 26 33 ··· 38 31 #include <asm/pgtable_types.h> 39 32 #ifdef CONFIG_X86_32 40 33 #include <linux/threads.h> 41 - #include <asm/kmap_types.h> 42 34 #else 43 35 #include <uapi/asm/vsyscall.h> 44 36 #endif ··· 98 92 FIX_IO_APIC_BASE_0, 99 93 FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS - 1, 100 94 #endif 101 - #ifdef CONFIG_X86_32 95 + #ifdef CONFIG_KMAP_LOCAL 102 96 FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */ 103 - FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1, 97 + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, 104 98 #ifdef CONFIG_PCI_MMCONFIG 105 99 FIX_PCIE_MCFG, 106 100 #endif ··· 157 151 158 152 extern int fixmaps_set; 159 153 160 - extern pte_t *kmap_pte; 161 154 extern pte_t *pkmap_page_table; 162 155 163 156 void __native_set_fixmap(enum fixed_addresses idx, pte_t pte);
+9 -4
arch/x86/include/asm/highmem.h
··· 23 23 24 24 #include <linux/interrupt.h> 25 25 #include <linux/threads.h> 26 - #include <asm/kmap_types.h> 27 26 #include <asm/tlbflush.h> 28 27 #include <asm/paravirt.h> 29 28 #include <asm/fixmap.h> ··· 57 58 #define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT) 58 59 #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT)) 59 60 60 - void *kmap_atomic_pfn(unsigned long pfn); 61 - void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot); 62 - 63 61 #define flush_cache_kmaps() do { } while (0) 62 + 63 + #define arch_kmap_local_post_map(vaddr, pteval) \ 64 + arch_flush_lazy_mmu_mode() 65 + 66 + #define arch_kmap_local_post_unmap(vaddr) \ 67 + do { \ 68 + flush_tlb_one_kernel((vaddr)); \ 69 + arch_flush_lazy_mmu_mode(); \ 70 + } while (0) 64 71 65 72 extern void add_highpages_with_active_regions(int nid, unsigned long start_pfn, 66 73 unsigned long end_pfn);
+4 -9
arch/x86/include/asm/iomap.h
··· 9 9 #include <linux/fs.h> 10 10 #include <linux/mm.h> 11 11 #include <linux/uaccess.h> 12 + #include <linux/highmem.h> 12 13 #include <asm/cacheflush.h> 13 14 #include <asm/tlbflush.h> 14 15 15 - void __iomem * 16 - iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot); 16 + void __iomem *__iomap_local_pfn_prot(unsigned long pfn, pgprot_t prot); 17 17 18 - void 19 - iounmap_atomic(void __iomem *kvaddr); 18 + int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot); 20 19 21 - int 22 - iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot); 23 - 24 - void 25 - iomap_free(resource_size_t base, unsigned long size); 20 + void iomap_free(resource_size_t base, unsigned long size); 26 21 27 22 #endif /* _ASM_X86_IOMAP_H */
-13
arch/x86/include/asm/kmap_types.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _ASM_X86_KMAP_TYPES_H 3 - #define _ASM_X86_KMAP_TYPES_H 4 - 5 - #if defined(CONFIG_X86_32) && defined(CONFIG_DEBUG_HIGHMEM) 6 - #define __WITH_KM_FENCE 7 - #endif 8 - 9 - #include <asm-generic/kmap_types.h> 10 - 11 - #undef __WITH_KM_FENCE 12 - 13 - #endif /* _ASM_X86_KMAP_TYPES_H */
-1
arch/x86/include/asm/paravirt_types.h
··· 41 41 #ifndef __ASSEMBLY__ 42 42 43 43 #include <asm/desc_defs.h> 44 - #include <asm/kmap_types.h> 45 44 #include <asm/pgtable_types.h> 46 45 #include <asm/nospec-branch.h> 47 46
+5 -1
arch/x86/include/asm/pgtable_64_types.h
··· 143 143 144 144 #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE) 145 145 /* The module sections ends with the start of the fixmap */ 146 - #define MODULES_END _AC(0xffffffffff000000, UL) 146 + #ifndef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP 147 + # define MODULES_END _AC(0xffffffffff000000, UL) 148 + #else 149 + # define MODULES_END _AC(0xfffffffffe000000, UL) 150 + #endif 147 151 #define MODULES_LEN (MODULES_END - MODULES_VADDR) 148 152 149 153 #define ESPFIX_PGD_ENTRY _AC(-2, UL)
+10 -38
arch/x86/kernel/crash_dump_32.c
···
 
 #include <linux/uaccess.h>
 
-static void *kdump_buf_page;
-
 static inline bool is_crashed_pfn_valid(unsigned long pfn)
 {
 #ifndef CONFIG_X86_PAE
···
  * @userbuf: if set, @buf is in user address space, use copy_to_user(),
  *	otherwise @buf is in kernel address space, use memcpy().
  *
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
- *
- * Calling copy_to_user() in atomic context is not desirable. Hence first
- * copying the data to a pre-allocated kernel page and then copying to user
- * space in non-atomic context.
+ * Copy a page from "oldmem". For this page, there might be no pte mapped
+ * in the current kernel.
  */
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
-			 size_t csize, unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+			 unsigned long offset, int userbuf)
 {
 	void *vaddr;
 
···
 	if (!is_crashed_pfn_valid(pfn))
 		return -EFAULT;
 
-	vaddr = kmap_atomic_pfn(pfn);
+	vaddr = kmap_local_pfn(pfn);
 
 	if (!userbuf) {
-		memcpy(buf, (vaddr + offset), csize);
-		kunmap_atomic(vaddr);
+		memcpy(buf, vaddr + offset, csize);
 	} else {
-		if (!kdump_buf_page) {
-			printk(KERN_WARNING "Kdump: Kdump buffer page not"
-				" allocated\n");
-			kunmap_atomic(vaddr);
-			return -EFAULT;
-		}
-		copy_page(kdump_buf_page, vaddr);
-		kunmap_atomic(vaddr);
-		if (copy_to_user(buf, (kdump_buf_page + offset), csize))
-			return -EFAULT;
+		if (copy_to_user(buf, vaddr + offset, csize))
+			csize = -EFAULT;
 	}
+
+	kunmap_local(vaddr);
 
 	return csize;
 }
-
-static int __init kdump_buf_page_init(void)
-{
-	int ret = 0;
-
-	kdump_buf_page = kmalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!kdump_buf_page) {
-		printk(KERN_WARNING "Kdump: Failed to allocate kdump buffer"
-			" page\n");
-		ret = -ENOMEM;
-	}
-
-	return ret;
-}
-arch_initcall(kdump_buf_page_init);
-59
arch/x86/mm/highmem_32.c
··· 4 4 #include <linux/swap.h> /* for totalram_pages */ 5 5 #include <linux/memblock.h> 6 6 7 - void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) 8 - { 9 - unsigned long vaddr; 10 - int idx, type; 11 - 12 - type = kmap_atomic_idx_push(); 13 - idx = type + KM_TYPE_NR*smp_processor_id(); 14 - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); 15 - BUG_ON(!pte_none(*(kmap_pte-idx))); 16 - set_pte(kmap_pte-idx, mk_pte(page, prot)); 17 - arch_flush_lazy_mmu_mode(); 18 - 19 - return (void *)vaddr; 20 - } 21 - EXPORT_SYMBOL(kmap_atomic_high_prot); 22 - 23 - /* 24 - * This is the same as kmap_atomic() but can map memory that doesn't 25 - * have a struct page associated with it. 26 - */ 27 - void *kmap_atomic_pfn(unsigned long pfn) 28 - { 29 - return kmap_atomic_prot_pfn(pfn, kmap_prot); 30 - } 31 - EXPORT_SYMBOL_GPL(kmap_atomic_pfn); 32 - 33 - void kunmap_atomic_high(void *kvaddr) 34 - { 35 - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; 36 - 37 - if (vaddr >= __fix_to_virt(FIX_KMAP_END) && 38 - vaddr <= __fix_to_virt(FIX_KMAP_BEGIN)) { 39 - int idx, type; 40 - 41 - type = kmap_atomic_idx(); 42 - idx = type + KM_TYPE_NR * smp_processor_id(); 43 - 44 - #ifdef CONFIG_DEBUG_HIGHMEM 45 - WARN_ON_ONCE(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); 46 - #endif 47 - /* 48 - * Force other mappings to Oops if they'll try to access this 49 - * pte without first remap it. Keeping stale mappings around 50 - * is a bad idea also, in case the page changes cacheability 51 - * attributes or becomes a protected page in a hypervisor. 52 - */ 53 - kpte_clear_flush(kmap_pte-idx, vaddr); 54 - kmap_atomic_idx_pop(); 55 - arch_flush_lazy_mmu_mode(); 56 - } 57 - #ifdef CONFIG_DEBUG_HIGHMEM 58 - else { 59 - BUG_ON(vaddr < PAGE_OFFSET); 60 - BUG_ON(vaddr >= (unsigned long)high_memory); 61 - } 62 - #endif 63 - } 64 - EXPORT_SYMBOL(kunmap_atomic_high); 65 - 66 7 void __init set_highmem_pages_init(void) 67 8 { 68 9 struct zone *zone;
-15
arch/x86/mm/init_32.c
··· 394 394 return last_map_addr; 395 395 } 396 396 397 - pte_t *kmap_pte; 398 - 399 - static void __init kmap_init(void) 400 - { 401 - unsigned long kmap_vstart; 402 - 403 - /* 404 - * Cache the first kmap pte: 405 - */ 406 - kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN); 407 - kmap_pte = virt_to_kpte(kmap_vstart); 408 - } 409 - 410 397 #ifdef CONFIG_HIGHMEM 411 398 static void __init permanent_kmaps_init(pgd_t *pgd_base) 412 399 { ··· 698 711 pagetable_init(); 699 712 700 713 __flush_tlb_all(); 701 - 702 - kmap_init(); 703 714 704 715 /* 705 716 * NOTE: at this point the bootmem allocator is fully available.
+3 -54
arch/x86/mm/iomap_32.c
···
 }
 EXPORT_SYMBOL_GPL(iomap_free);
 
-void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
-{
-	unsigned long vaddr;
-	int idx, type;
-
-	preempt_disable();
-	pagefault_disable();
-
-	type = kmap_atomic_idx_push();
-	idx = type + KM_TYPE_NR * smp_processor_id();
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-	set_pte(kmap_pte - idx, pfn_pte(pfn, prot));
-	arch_flush_lazy_mmu_mode();
-
-	return (void *)vaddr;
-}
-
-/*
- * Map 'pfn' using protections 'prot'
- */
-void __iomem *
-iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
+void __iomem *__iomap_local_pfn_prot(unsigned long pfn, pgprot_t prot)
 {
 	/*
 	 * For non-PAT systems, translate non-WB request to UC- just in
···
 	/* Filter out unsupported __PAGE_KERNEL* bits: */
 	pgprot_val(prot) &= __default_kernel_pte_mask;
 
-	return (void __force __iomem *) kmap_atomic_prot_pfn(pfn, prot);
+	return (void __force __iomem *)__kmap_local_pfn_prot(pfn, prot);
 }
-EXPORT_SYMBOL_GPL(iomap_atomic_prot_pfn);
-
-void
-iounmap_atomic(void __iomem *kvaddr)
-{
-	unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
-
-	if (vaddr >= __fix_to_virt(FIX_KMAP_END) &&
-	    vaddr <= __fix_to_virt(FIX_KMAP_BEGIN)) {
-		int idx, type;
-
-		type = kmap_atomic_idx();
-		idx = type + KM_TYPE_NR * smp_processor_id();
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-		WARN_ON_ONCE(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
-#endif
-		/*
-		 * Force other mappings to Oops if they'll try to access this
-		 * pte without first remap it.  Keeping stale mappings around
-		 * is a bad idea also, in case the page changes cacheability
-		 * attributes or becomes a protected page in a hypervisor.
-		 */
-		kpte_clear_flush(kmap_pte-idx, vaddr);
-		kmap_atomic_idx_pop();
-	}
-
-	pagefault_enable();
-	preempt_enable();
-}
-EXPORT_SYMBOL_GPL(iounmap_atomic);
+EXPORT_SYMBOL_GPL(__iomap_local_pfn_prot);
+1
arch/xtensa/Kconfig
··· 666 666 config HIGHMEM 667 667 bool "High Memory Support" 668 668 depends on MMU 669 + select KMAP_LOCAL 669 670 help 670 671 Linux can use the full amount of RAM in the system by 671 672 default. However, the default MMUv2 setup only maps the
+9 -50
arch/xtensa/include/asm/fixmap.h
···
 #ifdef CONFIG_HIGHMEM
 #include <linux/threads.h>
 #include <linux/pgtable.h>
-#include <asm/kmap_types.h>
-#endif
+#include <asm/kmap_size.h>
 
-/*
- * Here we define all the compile-time 'special' virtual
- * addresses. The point is to have a constant address at
- * compile time, but to set the physical address only
- * in the boot process. We allocate these special addresses
- * from the start of the consistent memory region upwards.
- * Also this lets us do fail-safe vmalloc(), we
- * can guarantee that these special addresses and
- * vmalloc()-ed addresses never overlap.
- *
- * these 'compile-time allocated' memory buffers are
- * fixed-size 4k pages. (or larger if used with an increment
- * higher than 1) use fixmap_set(idx,phys) to associate
- * physical memory with fixmap indices.
- */
+/* The map slots for temporary mappings via kmap_atomic/local(). */
 enum fixed_addresses {
-#ifdef CONFIG_HIGHMEM
-	/* reserved pte's for temporary kernel mappings */
 	FIX_KMAP_BEGIN,
 	FIX_KMAP_END = FIX_KMAP_BEGIN +
-		(KM_TYPE_NR * NR_CPUS * DCACHE_N_COLORS) - 1,
-#endif
+		(KM_MAX_IDX * NR_CPUS * DCACHE_N_COLORS) - 1,
 	__end_of_fixed_addresses
 };
 
-#define FIXADDR_TOP	(XCHAL_KSEG_CACHED_VADDR - PAGE_SIZE)
+#define FIXADDR_END	(XCHAL_KSEG_CACHED_VADDR - PAGE_SIZE)
 #define FIXADDR_SIZE	(__end_of_fixed_addresses << PAGE_SHIFT)
-#define FIXADDR_START	((FIXADDR_TOP - FIXADDR_SIZE) & PMD_MASK)
+/* Enforce that FIXADDR_START is PMD aligned to handle cache aliasing */
+#define FIXADDR_START	((FIXADDR_END - FIXADDR_SIZE) & PMD_MASK)
+#define FIXADDR_TOP	(FIXADDR_START + FIXADDR_SIZE - PAGE_SIZE)
 
-#define __fix_to_virt(x)	(FIXADDR_START + ((x) << PAGE_SHIFT))
-#define __virt_to_fix(x)	(((x) - FIXADDR_START) >> PAGE_SHIFT)
+#include <asm-generic/fixmap.h>
 
-#ifndef __ASSEMBLY__
-/*
- * 'index to address' translation. If anyone tries to use the idx
- * directly without translation, we catch the bug with a NULL-deference
- * kernel oops. Illegal ranges of incoming indices are caught too.
- */
-static __always_inline unsigned long fix_to_virt(const unsigned int idx)
-{
-	/* Check if this memory layout is broken because fixmap overlaps page
-	 * table.
-	 */
-	BUILD_BUG_ON(FIXADDR_START <
-		     TLBTEMP_BASE_1 + TLBTEMP_SIZE);
-	BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
-	return __fix_to_virt(idx);
-}
-
-static inline unsigned long virt_to_fix(const unsigned long vaddr)
-{
-	BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
-	return __virt_to_fix(vaddr);
-}
-
-#endif
-
+#endif /* CONFIG_HIGHMEM */
 #endif
+13 -2
arch/xtensa/include/asm/highmem.h
···
 #ifndef _XTENSA_HIGHMEM_H
 #define _XTENSA_HIGHMEM_H

+#ifdef CONFIG_HIGHMEM
 #include <linux/wait.h>
 #include <linux/pgtable.h>
 #include <asm/cacheflush.h>
 #include <asm/fixmap.h>
-#include <asm/kmap_types.h>

-#define PKMAP_BASE		((FIXADDR_START - \
+#define PKMAP_BASE	((FIXADDR_START -			\
 	  (LAST_PKMAP + 1) * PAGE_SIZE) & PMD_MASK)
 #define LAST_PKMAP	(PTRS_PER_PTE * DCACHE_N_COLORS)
 #define LAST_PKMAP_MASK	(LAST_PKMAP - 1)
···
 {
 	return pkmap_map_wait_arr + color;
 }
+
+enum fixed_addresses kmap_local_map_idx(int type, unsigned long pfn);
+#define arch_kmap_local_map_idx		kmap_local_map_idx
+
+enum fixed_addresses kmap_local_unmap_idx(int type, unsigned long addr);
+#define arch_kmap_local_unmap_idx	kmap_local_unmap_idx
+
 #endif

 extern pte_t *pkmap_page_table;
···
 	flush_cache_all();
 }

+#define arch_kmap_local_post_unmap(vaddr)	\
+	local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE)
+
 void kmap_init(void);

+#endif /* CONFIG_HIGHMEM */
 #endif
+17 -49
arch/xtensa/mm/highmem.c
···
 #include <linux/highmem.h>
 #include <asm/tlbflush.h>

-static pte_t *kmap_pte;
-
 #if DCACHE_WAY_SIZE > PAGE_SIZE
 unsigned int last_pkmap_nr_arr[DCACHE_N_COLORS];
 wait_queue_head_t pkmap_map_wait_arr[DCACHE_N_COLORS];
···
 	for (i = 0; i < ARRAY_SIZE(pkmap_map_wait_arr); ++i)
 		init_waitqueue_head(pkmap_map_wait_arr + i);
 }
-#else
-static inline void kmap_waitqueues_init(void)
-{
-}
-#endif

 static inline enum fixed_addresses kmap_idx(int type, unsigned long color)
 {
-	return (type + KM_TYPE_NR * smp_processor_id()) * DCACHE_N_COLORS +
-		color;
+	int idx = (type + KM_MAX_IDX * smp_processor_id()) * DCACHE_N_COLORS;
+
+	/*
+	 * The fixmap operates top down, so the color offset needs to be
+	 * reversed as well.
+	 */
+	return idx + DCACHE_N_COLORS - 1 - color;
 }

-void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
+enum fixed_addresses kmap_local_map_idx(int type, unsigned long pfn)
 {
-	enum fixed_addresses idx;
-	unsigned long vaddr;
+	return kmap_idx(type, DCACHE_ALIAS(pfn << PAGE_SHIFT));
+}

-	idx = kmap_idx(kmap_atomic_idx_push(),
-		       DCACHE_ALIAS(page_to_phys(page)));
-	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-#ifdef CONFIG_DEBUG_HIGHMEM
-	BUG_ON(!pte_none(*(kmap_pte + idx)));
+enum fixed_addresses kmap_local_unmap_idx(int type, unsigned long addr)
+{
+	return kmap_idx(type, DCACHE_ALIAS(addr));
+}
+
+#else
+static inline void kmap_waitqueues_init(void) { }
 #endif
-	set_pte(kmap_pte + idx, mk_pte(page, prot));
-
-	return (void *)vaddr;
-}
-EXPORT_SYMBOL(kmap_atomic_high_prot);
-
-void kunmap_atomic_high(void *kvaddr)
-{
-	if (kvaddr >= (void *)FIXADDR_START &&
-	    kvaddr < (void *)FIXADDR_TOP) {
-		int idx = kmap_idx(kmap_atomic_idx(),
-				   DCACHE_ALIAS((unsigned long)kvaddr));
-
-		/*
-		 * Force other mappings to Oops if they'll try to access this
-		 * pte without first remap it. Keeping stale mappings around
-		 * is a bad idea also, in case the page changes cacheability
-		 * attributes or becomes a protected page in a hypervisor.
-		 */
-		pte_clear(&init_mm, kvaddr, kmap_pte + idx);
-		local_flush_tlb_kernel_range((unsigned long)kvaddr,
-					     (unsigned long)kvaddr + PAGE_SIZE);
-
-		kmap_atomic_idx_pop();
-	}
-}
-EXPORT_SYMBOL(kunmap_atomic_high);

 void __init kmap_init(void)
 {
-	unsigned long kmap_vstart;
-
 	/* Check if this memory layout is broken because PKMAP overlaps
 	 * page table.
 	 */
 	BUILD_BUG_ON(PKMAP_BASE < TLBTEMP_BASE_1 + TLBTEMP_SIZE);
-	/* cache the first kmap pte */
-	kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN);
-	kmap_pte = virt_to_kpte(kmap_vstart);
 	kmap_waitqueues_init();
 }
+2 -2
arch/xtensa/mm/init.c
···
 #ifdef CONFIG_HIGHMEM
 		PKMAP_BASE, PKMAP_BASE + LAST_PKMAP * PAGE_SIZE,
 		(LAST_PKMAP*PAGE_SIZE) >> 10,
-		FIXADDR_START, FIXADDR_TOP,
-		(FIXADDR_TOP - FIXADDR_START) >> 10,
+		FIXADDR_START, FIXADDR_END,
+		(FIXADDR_END - FIXADDR_START) >> 10,
 #endif
 		PAGE_OFFSET, PAGE_OFFSET +
 		(max_low_pfn - min_low_pfn) * PAGE_SIZE,
+2 -1
arch/xtensa/mm/mmu.c
···

 static void __init fixedrange_init(void)
 {
-	init_pmd(__fix_to_virt(0), __end_of_fixed_addresses);
+	BUILD_BUG_ON(FIXADDR_START < TLBTEMP_BASE_1 + TLBTEMP_SIZE);
+	init_pmd(FIXADDR_START, __end_of_fixed_addresses);
 }
 #endif

-1
fs/aio.c
···
 #include <linux/mount.h>
 #include <linux/pseudo_fs.h>

-#include <asm/kmap_types.h>
 #include <linux/uaccess.h>
 #include <linux/nospec.h>

-1
fs/btrfs/ctree.h
···
 #include <linux/wait.h>
 #include <linux/slab.h>
 #include <trace/events/btrfs.h>
-#include <asm/kmap_types.h>
 #include <asm/unaligned.h>
 #include <linux/pagemap.h>
 #include <linux/btrfs.h>
+1 -1
include/asm-generic/Kbuild
···
 mandatory-y += irq_regs.h
 mandatory-y += irq_work.h
 mandatory-y += kdebug.h
-mandatory-y += kmap_types.h
+mandatory-y += kmap_size.h
 mandatory-y += kprobes.h
 mandatory-y += linkage.h
 mandatory-y += local.h
+12
include/asm-generic/kmap_size.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_GENERIC_KMAP_SIZE_H
+#define _ASM_GENERIC_KMAP_SIZE_H
+
+/* For debug this provides guard pages between the maps */
+#ifdef CONFIG_DEBUG_KMAP_LOCAL
+# define KM_MAX_IDX	33
+#else
+# define KM_MAX_IDX	16
+#endif
+
+#endif
-11
include/asm-generic/kmap_types.h
···
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_GENERIC_KMAP_TYPES_H
-#define _ASM_GENERIC_KMAP_TYPES_H
-
-#ifdef __WITH_KM_FENCE
-# define KM_TYPE_NR	41
-#else
-# define KM_TYPE_NR	20
-#endif
-
-#endif
+232
include/linux/highmem-internal.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_HIGHMEM_INTERNAL_H
+#define _LINUX_HIGHMEM_INTERNAL_H
+
+/*
+ * Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft.
+ */
+#ifdef CONFIG_KMAP_LOCAL
+void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot);
+void *__kmap_local_page_prot(struct page *page, pgprot_t prot);
+void kunmap_local_indexed(void *vaddr);
+void kmap_local_fork(struct task_struct *tsk);
+void __kmap_local_sched_out(void);
+void __kmap_local_sched_in(void);
+static inline void kmap_assert_nomap(void)
+{
+	DEBUG_LOCKS_WARN_ON(current->kmap_ctrl.idx);
+}
+#else
+static inline void kmap_local_fork(struct task_struct *tsk) { }
+static inline void kmap_assert_nomap(void) { }
+#endif
+
+#ifdef CONFIG_HIGHMEM
+#include <asm/highmem.h>
+
+#ifndef ARCH_HAS_KMAP_FLUSH_TLB
+static inline void kmap_flush_tlb(unsigned long addr) { }
+#endif
+
+#ifndef kmap_prot
+#define kmap_prot PAGE_KERNEL
+#endif
+
+void *kmap_high(struct page *page);
+void kunmap_high(struct page *page);
+void __kmap_flush_unused(void);
+struct page *__kmap_to_page(void *addr);
+
+static inline void *kmap(struct page *page)
+{
+	void *addr;
+
+	might_sleep();
+	if (!PageHighMem(page))
+		addr = page_address(page);
+	else
+		addr = kmap_high(page);
+	kmap_flush_tlb((unsigned long)addr);
+	return addr;
+}
+
+static inline void kunmap(struct page *page)
+{
+	might_sleep();
+	if (!PageHighMem(page))
+		return;
+	kunmap_high(page);
+}
+
+static inline struct page *kmap_to_page(void *addr)
+{
+	return __kmap_to_page(addr);
+}
+
+static inline void kmap_flush_unused(void)
+{
+	__kmap_flush_unused();
+}
+
+static inline void *kmap_local_page(struct page *page)
+{
+	return __kmap_local_page_prot(page, kmap_prot);
+}
+
+static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
+{
+	return __kmap_local_page_prot(page, prot);
+}
+
+static inline void *kmap_local_pfn(unsigned long pfn)
+{
+	return __kmap_local_pfn_prot(pfn, kmap_prot);
+}
+
+static inline void __kunmap_local(void *vaddr)
+{
+	kunmap_local_indexed(vaddr);
+}
+
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+	preempt_disable();
+	pagefault_disable();
+	return __kmap_local_page_prot(page, prot);
+}
+
+static inline void *kmap_atomic(struct page *page)
+{
+	return kmap_atomic_prot(page, kmap_prot);
+}
+
+static inline void *kmap_atomic_pfn(unsigned long pfn)
+{
+	preempt_disable();
+	pagefault_disable();
+	return __kmap_local_pfn_prot(pfn, kmap_prot);
+}
+
+static inline void __kunmap_atomic(void *addr)
+{
+	kunmap_local_indexed(addr);
+	pagefault_enable();
+	preempt_enable();
+}
+
+unsigned int __nr_free_highpages(void);
+extern atomic_long_t _totalhigh_pages;
+
+static inline unsigned int nr_free_highpages(void)
+{
+	return __nr_free_highpages();
+}
+
+static inline unsigned long totalhigh_pages(void)
+{
+	return (unsigned long)atomic_long_read(&_totalhigh_pages);
+}
+
+static inline void totalhigh_pages_inc(void)
+{
+	atomic_long_inc(&_totalhigh_pages);
+}
+
+static inline void totalhigh_pages_add(long count)
+{
+	atomic_long_add(count, &_totalhigh_pages);
+}
+
+#else /* CONFIG_HIGHMEM */
+
+static inline struct page *kmap_to_page(void *addr)
+{
+	return virt_to_page(addr);
+}
+
+static inline void *kmap(struct page *page)
+{
+	might_sleep();
+	return page_address(page);
+}
+
+static inline void kunmap_high(struct page *page) { }
+static inline void kmap_flush_unused(void) { }
+
+static inline void kunmap(struct page *page)
+{
+#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
+	kunmap_flush_on_unmap(page_address(page));
+#endif
+}
+
+static inline void *kmap_local_page(struct page *page)
+{
+	return page_address(page);
+}
+
+static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
+{
+	return kmap_local_page(page);
+}
+
+static inline void *kmap_local_pfn(unsigned long pfn)
+{
+	return kmap_local_page(pfn_to_page(pfn));
+}
+
+static inline void __kunmap_local(void *addr)
+{
+#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
+	kunmap_flush_on_unmap(addr);
+#endif
+}
+
+static inline void *kmap_atomic(struct page *page)
+{
+	preempt_disable();
+	pagefault_disable();
+	return page_address(page);
+}
+
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+	return kmap_atomic(page);
+}
+
+static inline void *kmap_atomic_pfn(unsigned long pfn)
+{
+	return kmap_atomic(pfn_to_page(pfn));
+}
+
+static inline void __kunmap_atomic(void *addr)
+{
+#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
+	kunmap_flush_on_unmap(addr);
+#endif
+	pagefault_enable();
+	preempt_enable();
+}
+
+static inline unsigned int nr_free_highpages(void) { return 0; }
+static inline unsigned long totalhigh_pages(void) { return 0UL; }
+
+#endif /* CONFIG_HIGHMEM */
+
+/*
+ * Prevent people trying to call kunmap_atomic() as if it were kunmap()
+ * kunmap_atomic() should get the return value of kmap_atomic, not the page.
+ */
+#define kunmap_atomic(__addr)					\
+do {								\
+	BUILD_BUG_ON(__same_type((__addr), struct page *));	\
+	__kunmap_atomic(__addr);				\
+} while (0)
+
+#define kunmap_local(__addr)					\
+do {								\
+	BUILD_BUG_ON(__same_type((__addr), struct page *));	\
+	__kunmap_local(__addr);					\
+} while (0)
+
+#endif
+113 -193
include/linux/highmem.h
···

 #include <asm/cacheflush.h>

+#include "highmem-internal.h"
+
+/**
+ * kmap - Map a page for long term usage
+ * @page: Pointer to the page to be mapped
+ *
+ * Returns: The virtual address of the mapping
+ *
+ * Can only be invoked from preemptible task context because on 32bit
+ * systems with CONFIG_HIGHMEM enabled this function might sleep.
+ *
+ * For systems with CONFIG_HIGHMEM=n and for pages in the low memory area
+ * this returns the virtual address of the direct kernel mapping.
+ *
+ * The returned virtual address is globally visible and valid up to the
+ * point where it is unmapped via kunmap(). The pointer can be handed to
+ * other contexts.
+ *
+ * For highmem pages on 32bit systems this can be slow as the mapping space
+ * is limited and protected by a global lock. In case that there is no
+ * mapping slot available the function blocks until a slot is released via
+ * kunmap().
+ */
+static inline void *kmap(struct page *page);
+
+/**
+ * kunmap - Unmap the virtual address mapped by kmap()
+ * @page: Pointer to the page which was mapped by kmap()
+ *
+ * Counterpart to kmap(). A NOOP for CONFIG_HIGHMEM=n and for mappings of
+ * pages in the low memory area.
+ */
+static inline void kunmap(struct page *page);
+
+/**
+ * kmap_to_page - Get the page for a kmap'ed address
+ * @addr: The address to look up
+ *
+ * Returns: The page which is mapped to @addr.
+ */
+static inline struct page *kmap_to_page(void *addr);
+
+/**
+ * kmap_flush_unused - Flush all unused kmap mappings in order to
+ *                     remove stray mappings
+ */
+static inline void kmap_flush_unused(void);
+
+/**
+ * kmap_local_page - Map a page for temporary usage
+ * @page: Pointer to the page to be mapped
+ *
+ * Returns: The virtual address of the mapping
+ *
+ * Can be invoked from any context.
+ *
+ * Requires careful handling when nesting multiple mappings because the map
+ * management is stack based. The unmap has to be in the reverse order of
+ * the map operation:
+ *
+ * addr1 = kmap_local_page(page1);
+ * addr2 = kmap_local_page(page2);
+ * ...
+ * kunmap_local(addr2);
+ * kunmap_local(addr1);
+ *
+ * Unmapping addr1 before addr2 is invalid and causes malfunction.
+ *
+ * Contrary to kmap() mappings the mapping is only valid in the context of
+ * the caller and cannot be handed to other contexts.
+ *
+ * On CONFIG_HIGHMEM=n kernels and for low memory pages this returns the
+ * virtual address of the direct mapping. Only real highmem pages are
+ * temporarily mapped.
+ *
+ * While it is significantly faster than kmap() for the highmem case it
+ * comes with restrictions about the pointer validity. Only use when really
+ * necessary.
+ *
+ * On HIGHMEM enabled systems mapping a highmem page has the side effect of
+ * disabling migration in order to keep the virtual address stable across
+ * preemption. No caller of kmap_local_page() can rely on this side effect.
+ */
+static inline void *kmap_local_page(struct page *page);
+
+/**
+ * kmap_atomic - Atomically map a page for temporary usage - Deprecated!
+ * @page: Pointer to the page to be mapped
+ *
+ * Returns: The virtual address of the mapping
+ *
+ * Effectively a wrapper around kmap_local_page() which disables pagefaults
+ * and preemption.
+ *
+ * Do not use in new code. Use kmap_local_page() instead.
+ */
+static inline void *kmap_atomic(struct page *page);
+
+/**
+ * kunmap_atomic - Unmap the virtual address mapped by kmap_atomic()
+ * @addr: Virtual address to be unmapped
+ *
+ * Counterpart to kmap_atomic().
+ *
+ * Effectively a wrapper around kunmap_local() which additionally undoes
+ * the side effects of kmap_atomic(), i.e. reenabling pagefaults and
+ * preemption.
+ */
+
+/* Highmem related interfaces for management code */
+static inline unsigned int nr_free_highpages(void);
+static inline unsigned long totalhigh_pages(void);
+
 #ifndef ARCH_HAS_FLUSH_ANON_PAGE
 static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
 {
···
 {
 }
 #endif
-
-#include <asm/kmap_types.h>
-
-#ifdef CONFIG_HIGHMEM
-extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
-extern void kunmap_atomic_high(void *kvaddr);
-#include <asm/highmem.h>
-
-#ifndef ARCH_HAS_KMAP_FLUSH_TLB
-static inline void kmap_flush_tlb(unsigned long addr) { }
-#endif
-
-#ifndef kmap_prot
-#define kmap_prot PAGE_KERNEL
-#endif
-
-void *kmap_high(struct page *page);
-static inline void *kmap(struct page *page)
-{
-	void *addr;
-
-	might_sleep();
-	if (!PageHighMem(page))
-		addr = page_address(page);
-	else
-		addr = kmap_high(page);
-	kmap_flush_tlb((unsigned long)addr);
-	return addr;
-}
-
-void kunmap_high(struct page *page);
-
-static inline void kunmap(struct page *page)
-{
-	might_sleep();
-	if (!PageHighMem(page))
-		return;
-	kunmap_high(page);
-}
-
-/*
- * kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because
- * no global lock is needed and because the kmap code must perform a global TLB
- * invalidation when the kmap pool wraps.
- *
- * However when holding an atomic kmap it is not legal to sleep, so atomic
- * kmaps are appropriate for short, tight code paths only.
- *
- * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap
- * gives a more generic (and caching) interface. But kmap_atomic can
- * be used in IRQ contexts, so in some (very limited) cases we need
- * it.
- */
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
-{
-	preempt_disable();
-	pagefault_disable();
-	if (!PageHighMem(page))
-		return page_address(page);
-	return kmap_atomic_high_prot(page, prot);
-}
-#define kmap_atomic(page)	kmap_atomic_prot(page, kmap_prot)
-
-/* declarations for linux/mm/highmem.c */
-unsigned int nr_free_highpages(void);
-extern atomic_long_t _totalhigh_pages;
-static inline unsigned long totalhigh_pages(void)
-{
-	return (unsigned long)atomic_long_read(&_totalhigh_pages);
-}
-
-static inline void totalhigh_pages_inc(void)
-{
-	atomic_long_inc(&_totalhigh_pages);
-}
-
-static inline void totalhigh_pages_dec(void)
-{
-	atomic_long_dec(&_totalhigh_pages);
-}
-
-static inline void totalhigh_pages_add(long count)
-{
-	atomic_long_add(count, &_totalhigh_pages);
-}
-
-static inline void totalhigh_pages_set(long val)
-{
-	atomic_long_set(&_totalhigh_pages, val);
-}
-
-void kmap_flush_unused(void);
-
-struct page *kmap_to_page(void *addr);
-
-#else /* CONFIG_HIGHMEM */
-
-static inline unsigned int nr_free_highpages(void) { return 0; }
-
-static inline struct page *kmap_to_page(void *addr)
-{
-	return virt_to_page(addr);
-}
-
-static inline unsigned long totalhigh_pages(void) { return 0UL; }
-
-static inline void *kmap(struct page *page)
-{
-	might_sleep();
-	return page_address(page);
-}
-
-static inline void kunmap_high(struct page *page)
-{
-}
-
-static inline void kunmap(struct page *page)
-{
-#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
-	kunmap_flush_on_unmap(page_address(page));
-#endif
-}
-
-static inline void *kmap_atomic(struct page *page)
-{
-	preempt_disable();
-	pagefault_disable();
-	return page_address(page);
-}
-#define kmap_atomic_prot(page, prot)	kmap_atomic(page)
-
-static inline void kunmap_atomic_high(void *addr)
-{
-	/*
-	 * Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
-	 * handles re-enabling faults + preemption
-	 */
-#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
-	kunmap_flush_on_unmap(addr);
-#endif
-}
-
-#define kmap_atomic_pfn(pfn)	kmap_atomic(pfn_to_page(pfn))
-
-#define kmap_flush_unused()	do {} while(0)
-
-#endif /* CONFIG_HIGHMEM */
-
-#if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
-
-DECLARE_PER_CPU(int, __kmap_atomic_idx);
-
-static inline int kmap_atomic_idx_push(void)
-{
-	int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
-
-#ifdef CONFIG_DEBUG_HIGHMEM
-	WARN_ON_ONCE(in_irq() && !irqs_disabled());
-	BUG_ON(idx >= KM_TYPE_NR);
-#endif
-	return idx;
-}
-
-static inline int kmap_atomic_idx(void)
-{
-	return __this_cpu_read(__kmap_atomic_idx) - 1;
-}
-
-static inline void kmap_atomic_idx_pop(void)
-{
-#ifdef CONFIG_DEBUG_HIGHMEM
-	int idx = __this_cpu_dec_return(__kmap_atomic_idx);
-
-	BUG_ON(idx < 0);
-#else
-	__this_cpu_dec(__kmap_atomic_idx);
-#endif
-}
-
-#endif
-
-/*
- * Prevent people trying to call kunmap_atomic() as if it were kunmap()
- * kunmap_atomic() should get the return value of kmap_atomic, not the page.
- */
-#define kunmap_atomic(addr)					\
-do {								\
-	BUILD_BUG_ON(__same_type((addr), struct page *));	\
-	kunmap_atomic_high(addr);				\
-	pagefault_enable();					\
-	preempt_enable();					\
-} while (0)
-

 /* when CONFIG_HIGHMEM is not set these will be plain clear/copy_page */
 #ifndef clear_user_highpage
+34 -4
include/linux/io-mapping.h
···
 	BUG_ON(offset >= mapping->size);
 	phys_addr = mapping->base + offset;
-	return iomap_atomic_prot_pfn(PHYS_PFN(phys_addr), mapping->prot);
+	preempt_disable();
+	pagefault_disable();
+	return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot);
 }

 static inline void
 io_mapping_unmap_atomic(void __iomem *vaddr)
 {
-	iounmap_atomic(vaddr);
+	kunmap_local_indexed((void __force *)vaddr);
+	pagefault_enable();
+	preempt_enable();
+}
+
+static inline void __iomem *
+io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset)
+{
+	resource_size_t phys_addr;
+
+	BUG_ON(offset >= mapping->size);
+	phys_addr = mapping->base + offset;
+	return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot);
+}
+
+static inline void io_mapping_unmap_local(void __iomem *vaddr)
+{
+	kunmap_local_indexed((void __force *)vaddr);
 }

 static inline void __iomem *
···
 	iounmap(vaddr);
 }

-#else
+#else /* HAVE_ATOMIC_IOMAP */

 #include <linux/uaccess.h>

···
 	preempt_enable();
 }

-#endif /* HAVE_ATOMIC_IOMAP */
+static inline void __iomem *
+io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset)
+{
+	return io_mapping_map_wc(mapping, offset, PAGE_SIZE);
+}
+
+static inline void io_mapping_unmap_local(void __iomem *vaddr)
+{
+	io_mapping_unmap(vaddr);
+}
+
+#endif /* !HAVE_ATOMIC_IOMAP */

 static inline struct io_mapping *
 io_mapping_create_wc(resource_size_t base,
+9
include/linux/sched.h
···
 #include <linux/rseq.h>
 #include <linux/seqlock.h>
 #include <linux/kcsan.h>
+#include <asm/kmap_size.h>

 /* task_struct member predeclarations (sorted alphabetically): */
 struct audit_context;
···

 struct wake_q_node {
 	struct wake_q_node *next;
+};
+
+struct kmap_ctrl {
+#ifdef CONFIG_KMAP_LOCAL
+	int				idx;
+	pte_t				pteval[KM_MAX_IDX];
+#endif
 };

 struct task_struct {
···
 	unsigned int			sequential_io;
 	unsigned int			sequential_io_avg;
 #endif
+	struct kmap_ctrl		kmap_ctrl;
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 	unsigned long			task_state_change;
 #endif
+2
kernel/entry/common.c
···

 #include <linux/context_tracking.h>
 #include <linux/entry-common.h>
+#include <linux/highmem.h>
 #include <linux/livepatch.h>
 #include <linux/audit.h>

···

 	/* Ensure that the address limit is intact and no locks are held */
 	addr_limit_user_check();
+	kmap_assert_nomap();
 	lockdep_assert_irqs_disabled();
 	lockdep_sys_exit();
 }
+1
kernel/fork.c
···
 	account_kernel_stack(tsk, 1);

 	kcov_task_init(tsk);
+	kmap_local_fork(tsk);

 #ifdef CONFIG_FAULT_INJECTION
 	tsk->fail_nth = 0;
+25
kernel/sched/core.c
···
 # define finish_arch_post_lock_switch()	do { } while (0)
 #endif

+static inline void kmap_local_sched_out(void)
+{
+#ifdef CONFIG_KMAP_LOCAL
+	if (unlikely(current->kmap_ctrl.idx))
+		__kmap_local_sched_out();
+#endif
+}
+
+static inline void kmap_local_sched_in(void)
+{
+#ifdef CONFIG_KMAP_LOCAL
+	if (unlikely(current->kmap_ctrl.idx))
+		__kmap_local_sched_in();
+#endif
+}
+
 /**
  * prepare_task_switch - prepare to switch tasks
  * @rq: the runqueue preparing to switch
···
 	perf_event_task_sched_out(prev, next);
 	rseq_preempt(prev);
 	fire_sched_out_preempt_notifiers(prev, next);
+	kmap_local_sched_out();
 	prepare_task(next);
 	prepare_arch_switch(next);
 }
···
 	finish_lock_switch(rq);
 	finish_arch_post_lock_switch();
 	kcov_finish_switch(current);
+	/*
+	 * kmap_local_sched_out() is invoked with rq::lock held and
+	 * interrupts disabled. There is no requirement for that, but the
+	 * sched out code does not have an interrupt enabled section.
+	 * Restoring the maps on sched in does not require interrupts being
+	 * disabled either.
+	 */
+	kmap_local_sched_in();

 	fire_sched_in_preempt_notifiers(current);
 	/*
+22
lib/Kconfig.debug
···

 	  Say N if unsure.

+config DEBUG_KMAP_LOCAL
+	bool "Debug kmap_local temporary mappings"
+	depends on DEBUG_KERNEL && KMAP_LOCAL
+	help
+	  This option enables additional error checking for the kmap_local
+	  infrastructure.  Disable for production use.
+
+config ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP
+	bool
+
+config DEBUG_KMAP_LOCAL_FORCE_MAP
+	bool "Enforce kmap_local temporary mappings"
+	depends on DEBUG_KERNEL && ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP
+	select KMAP_LOCAL
+	select DEBUG_KMAP_LOCAL
+	help
+	  This option enforces temporary mappings through the kmap_local
+	  mechanism for non-highmem pages and on non-highmem systems.
+	  Disable this for production systems!
+
 config DEBUG_HIGHMEM
 	bool "Highmem debugging"
 	depends on DEBUG_KERNEL && HIGHMEM
+	select DEBUG_KMAP_LOCAL_FORCE_MAP if ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP
+	select DEBUG_KMAP_LOCAL
 	help
 	  This option enables additional error checking for high memory
 	  systems.  Disable for production systems.
+3
mm/Kconfig
···
 config MAPPING_DIRTY_HELPERS
         bool

+config KMAP_LOCAL
+	bool
+
 endmenu
+257 -15
mm/highmem.c
··· 31 31 #include <asm/tlbflush.h> 32 32 #include <linux/vmalloc.h> 33 33 34 - #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32) 35 - DEFINE_PER_CPU(int, __kmap_atomic_idx); 36 - #endif 37 - 38 34 /* 39 35 * Virtual_count is not a pure "count". 40 36 * 0 means that it is not mapped, and has not been mapped ··· 104 108 atomic_long_t _totalhigh_pages __read_mostly; 105 109 EXPORT_SYMBOL(_totalhigh_pages); 106 110 107 - EXPORT_PER_CPU_SYMBOL(__kmap_atomic_idx); 108 - 109 - unsigned int nr_free_highpages (void) 111 + unsigned int __nr_free_highpages (void) 110 112 { 111 113 struct zone *zone; 112 114 unsigned int pages = 0; ··· 141 147 do { spin_unlock(&kmap_lock); (void)(flags); } while (0) 142 148 #endif 143 149 144 - struct page *kmap_to_page(void *vaddr) 150 + struct page *__kmap_to_page(void *vaddr) 145 151 { 146 152 unsigned long addr = (unsigned long)vaddr; 147 153 ··· 152 158 153 159 return virt_to_page(addr); 154 160 } 155 - EXPORT_SYMBOL(kmap_to_page); 161 + EXPORT_SYMBOL(__kmap_to_page); 156 162 157 163 static void flush_all_zero_pkmaps(void) 158 164 { ··· 194 200 flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP)); 195 201 } 196 202 197 - /** 198 - * kmap_flush_unused - flush all unused kmap mappings in order to remove stray mappings 199 - */ 200 - void kmap_flush_unused(void) 203 + void __kmap_flush_unused(void) 201 204 { 202 205 lock_kmap(); 203 206 flush_all_zero_pkmaps(); ··· 358 367 if (need_wakeup) 359 368 wake_up(pkmap_map_wait); 360 369 } 361 - 362 370 EXPORT_SYMBOL(kunmap_high); 363 - #endif /* CONFIG_HIGHMEM */ 371 + #endif /* CONFIG_HIGHMEM */ 372 + 373 + #ifdef CONFIG_KMAP_LOCAL 374 + 375 + #include <asm/kmap_size.h> 376 + 377 + /* 378 + * With DEBUG_KMAP_LOCAL the stack depth is doubled and every second 379 + * slot is unused which acts as a guard page 380 + */ 381 + #ifdef CONFIG_DEBUG_KMAP_LOCAL 382 + # define KM_INCR 2 383 + #else 384 + # define KM_INCR 1 385 + #endif 386 + 387 + static inline int kmap_local_idx_push(void) 388 
+ { 389 + WARN_ON_ONCE(in_irq() && !irqs_disabled()); 390 + current->kmap_ctrl.idx += KM_INCR; 391 + BUG_ON(current->kmap_ctrl.idx >= KM_MAX_IDX); 392 + return current->kmap_ctrl.idx - 1; 393 + } 394 + 395 + static inline int kmap_local_idx(void) 396 + { 397 + return current->kmap_ctrl.idx - 1; 398 + } 399 + 400 + static inline void kmap_local_idx_pop(void) 401 + { 402 + current->kmap_ctrl.idx -= KM_INCR; 403 + BUG_ON(current->kmap_ctrl.idx < 0); 404 + } 405 + 406 + #ifndef arch_kmap_local_post_map 407 + # define arch_kmap_local_post_map(vaddr, pteval) do { } while (0) 408 + #endif 409 + 410 + #ifndef arch_kmap_local_pre_unmap 411 + # define arch_kmap_local_pre_unmap(vaddr) do { } while (0) 412 + #endif 413 + 414 + #ifndef arch_kmap_local_post_unmap 415 + # define arch_kmap_local_post_unmap(vaddr) do { } while (0) 416 + #endif 417 + 418 + #ifndef arch_kmap_local_map_idx 419 + #define arch_kmap_local_map_idx(idx, pfn) kmap_local_calc_idx(idx) 420 + #endif 421 + 422 + #ifndef arch_kmap_local_unmap_idx 423 + #define arch_kmap_local_unmap_idx(idx, vaddr) kmap_local_calc_idx(idx) 424 + #endif 425 + 426 + #ifndef arch_kmap_local_high_get 427 + static inline void *arch_kmap_local_high_get(struct page *page) 428 + { 429 + return NULL; 430 + } 431 + #endif 432 + 433 + /* Unmap a local mapping which was obtained by kmap_high_get() */ 434 + static inline bool kmap_high_unmap_local(unsigned long vaddr) 435 + { 436 + #ifdef ARCH_NEEDS_KMAP_HIGH_GET 437 + if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) { 438 + kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)])); 439 + return true; 440 + } 441 + #endif 442 + return false; 443 + } 444 + 445 + static inline int kmap_local_calc_idx(int idx) 446 + { 447 + return idx + KM_MAX_IDX * smp_processor_id(); 448 + } 449 + 450 + static pte_t *__kmap_pte; 451 + 452 + static pte_t *kmap_get_pte(void) 453 + { 454 + if (!__kmap_pte) 455 + __kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN)); 456 + return __kmap_pte; 457 + } 
+
+void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot)
+{
+	pte_t pteval, *kmap_pte = kmap_get_pte();
+	unsigned long vaddr;
+	int idx;
+
+	/*
+	 * Disable migration so the resulting virtual address is stable
+	 * across preemption.
+	 */
+	migrate_disable();
+	preempt_disable();
+	idx = arch_kmap_local_map_idx(kmap_local_idx_push(), pfn);
+	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+	BUG_ON(!pte_none(*(kmap_pte - idx)));
+	pteval = pfn_pte(pfn, prot);
+	set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval);
+	arch_kmap_local_post_map(vaddr, pteval);
+	current->kmap_ctrl.pteval[kmap_local_idx()] = pteval;
+	preempt_enable();
+
+	return (void *)vaddr;
+}
+EXPORT_SYMBOL_GPL(__kmap_local_pfn_prot);
+
+void *__kmap_local_page_prot(struct page *page, pgprot_t prot)
+{
+	void *kmap;
+
+	/*
+	 * To broaden the usage of the actual kmap_local() machinery always map
+	 * pages when debugging is enabled and the architecture has no problems
+	 * with alias mappings.
+	 */
+	if (!IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) && !PageHighMem(page))
+		return page_address(page);
+
+	/* Try kmap_high_get() if the architecture has it enabled */
+	kmap = arch_kmap_local_high_get(page);
+	if (kmap)
+		return kmap;
+
+	return __kmap_local_pfn_prot(page_to_pfn(page), prot);
+}
+EXPORT_SYMBOL(__kmap_local_page_prot);
+
+void kunmap_local_indexed(void *vaddr)
+{
+	unsigned long addr = (unsigned long) vaddr & PAGE_MASK;
+	pte_t *kmap_pte = kmap_get_pte();
+	int idx;
+
+	if (addr < __fix_to_virt(FIX_KMAP_END) ||
+	    addr > __fix_to_virt(FIX_KMAP_BEGIN)) {
+		if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP)) {
+			/* This _should_ never happen! See above. */
+			WARN_ON_ONCE(1);
+			return;
+		}
+		/*
+		 * Handle mappings which were obtained by kmap_high_get()
+		 * first as the virtual address of such mappings is below
+		 * PAGE_OFFSET. Warn for all other addresses which are in
+		 * the user space part of the virtual address space.
+		 */
+		if (!kmap_high_unmap_local(addr))
+			WARN_ON_ONCE(addr < PAGE_OFFSET);
+		return;
+	}
+
+	preempt_disable();
+	idx = arch_kmap_local_unmap_idx(kmap_local_idx(), addr);
+	WARN_ON_ONCE(addr != __fix_to_virt(FIX_KMAP_BEGIN + idx));
+
+	arch_kmap_local_pre_unmap(addr);
+	pte_clear(&init_mm, addr, kmap_pte - idx);
+	arch_kmap_local_post_unmap(addr);
+	current->kmap_ctrl.pteval[kmap_local_idx()] = __pte(0);
+	kmap_local_idx_pop();
+	preempt_enable();
+	migrate_enable();
+}
+EXPORT_SYMBOL(kunmap_local_indexed);
+
+/*
+ * Invoked before switch_to(). This is safe even when during or after
+ * clearing the maps an interrupt which needs a kmap_local happens because
+ * the task::kmap_ctrl.idx is not modified by the unmapping code so a
+ * nested kmap_local will use the next unused index and restore the index
+ * on unmap. The already cleared kmaps of the outgoing task are irrelevant
+ * because the interrupt context does not know about them. The same applies
+ * when scheduling back in for an interrupt which happens before the
+ * restore is complete.
+ */
+void __kmap_local_sched_out(void)
+{
+	struct task_struct *tsk = current;
+	pte_t *kmap_pte = kmap_get_pte();
+	int i;
+
+	/* Clear kmaps */
+	for (i = 0; i < tsk->kmap_ctrl.idx; i++) {
+		pte_t pteval = tsk->kmap_ctrl.pteval[i];
+		unsigned long addr;
+		int idx;
+
+		/* With debug all even slots are unmapped and act as guards */
+		if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) {
+			WARN_ON_ONCE(!pte_none(pteval));
+			continue;
+		}
+		if (WARN_ON_ONCE(pte_none(pteval)))
+			continue;
+
+		/*
+		 * This is a horrible hack for XTENSA to calculate the
+		 * coloured PTE index. Uses the PFN encoded into the pteval
+		 * and the map index calculation because the actual mapped
+		 * virtual address is not stored in task::kmap_ctrl.
+		 * For any sane architecture this is optimized out.
+		 */
+		idx = arch_kmap_local_map_idx(i, pte_pfn(pteval));
+
+		addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+		arch_kmap_local_pre_unmap(addr);
+		pte_clear(&init_mm, addr, kmap_pte - idx);
+		arch_kmap_local_post_unmap(addr);
+	}
+}
+
+void __kmap_local_sched_in(void)
+{
+	struct task_struct *tsk = current;
+	pte_t *kmap_pte = kmap_get_pte();
+	int i;
+
+	/* Restore kmaps */
+	for (i = 0; i < tsk->kmap_ctrl.idx; i++) {
+		pte_t pteval = tsk->kmap_ctrl.pteval[i];
+		unsigned long addr;
+		int idx;
+
+		/* With debug all even slots are unmapped and act as guards */
+		if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) {
+			WARN_ON_ONCE(!pte_none(pteval));
+			continue;
+		}
+		if (WARN_ON_ONCE(pte_none(pteval)))
+			continue;
+
+		/* See comment in __kmap_local_sched_out() */
+		idx = arch_kmap_local_map_idx(i, pte_pfn(pteval));
+		addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+		set_pte_at(&init_mm, addr, kmap_pte - idx, pteval);
+		arch_kmap_local_post_map(addr, pteval);
+	}
+}
+
+void kmap_local_fork(struct task_struct *tsk)
+{
+	if (WARN_ON_ONCE(tsk->kmap_ctrl.idx))
+		memset(&tsk->kmap_ctrl, 0, sizeof(tsk->kmap_ctrl));
+}
+
+#endif
 
 #if defined(HASHED_PAGE_VIRTUAL)
 