
mm/highmem: make kmap cache coloring aware

User-visible effect:
Architectures that choose this method of maintaining cache coherency
(currently MIPS and xtensa) are able to use high memory on cores with
aliasing data caches. Without this fix such architectures cannot use
high memory (for xtensa this means that at most 128 MBytes of physical
memory are available).

The problem:
A VIPT cache with a way size larger than the MMU page size may suffer
from an aliasing problem: a single physical address accessed via
different virtual addresses may end up in multiple locations in the
cache. Virtual mappings of a physical address that always get cached in
different cache locations are said to have different colors. L1 caching
hardware usually doesn't handle this situation, leaving it up to
software. Software must avoid it, as it leads to data corruption.

What can be done:
One way to handle this is to flush and invalidate the data cache every
time a page mapping changes color. The other is to always map a
physical page at a virtual address with the same color. Low memory
pages already have this property. Giving the architecture a way to
control the color of high memory page mappings allows reuse of the
existing low memory cache alias handling code.

How this is done with this patch:
Provide hooks that allow architectures with aliasing caches to align
the mapping address of high pages according to their color. Such
architectures may enforce similar coloring of low- and high-memory page
mappings and reuse existing cache management functions to support
highmem.

This code is based on the implementation of similar feature for MIPS by
Leonid Yegoshin.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Marc Gauthier <marc@cadence.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Steven Hill <Steven.Hill@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

authored by Max Filippov, committed by Linus Torvalds
15de36a4 b972216e

+75 -11
mm/highmem.c
···
 */
 #ifdef CONFIG_HIGHMEM

+/*
+ * Architecture with aliasing data cache may define the following family of
+ * helper functions in its asm/highmem.h to control cache color of virtual
+ * addresses where physical memory pages are mapped by kmap.
+ */
+#ifndef get_pkmap_color
+
+/*
+ * Determine color of virtual address where the page should be mapped.
+ */
+static inline unsigned int get_pkmap_color(struct page *page)
+{
+	return 0;
+}
+#define get_pkmap_color get_pkmap_color
+
+/*
+ * Get next index for mapping inside PKMAP region for page with given color.
+ */
+static inline unsigned int get_next_pkmap_nr(unsigned int color)
+{
+	static unsigned int last_pkmap_nr;
+
+	last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
+	return last_pkmap_nr;
+}
+
+/*
+ * Determine if page index inside PKMAP region (pkmap_nr) of given color
+ * has wrapped around PKMAP region end. When this happens an attempt to
+ * flush all unused PKMAP slots is made.
+ */
+static inline int no_more_pkmaps(unsigned int pkmap_nr, unsigned int color)
+{
+	return pkmap_nr == 0;
+}
+
+/*
+ * Get the number of PKMAP entries of the given color. If no free slot is
+ * found after checking that many entries, kmap will sleep waiting for
+ * someone to call kunmap and free PKMAP slot.
+ */
+static inline int get_pkmap_entries_count(unsigned int color)
+{
+	return LAST_PKMAP;
+}
+
+/*
+ * Get head of a wait queue for PKMAP entries of the given color.
+ * Wait queues for different mapping colors should be independent to avoid
+ * unnecessary wakeups caused by freeing of slots of other colors.
+ */
+static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
+{
+	static DECLARE_WAIT_QUEUE_HEAD(pkmap_map_wait);
+
+	return &pkmap_map_wait;
+}
+#endif
+
 unsigned long totalhigh_pages __read_mostly;
 EXPORT_SYMBOL(totalhigh_pages);
···
 }

 static int pkmap_count[LAST_PKMAP];
-static unsigned int last_pkmap_nr;
 static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kmap_lock);

 pte_t * pkmap_page_table;
-
-static DECLARE_WAIT_QUEUE_HEAD(pkmap_map_wait);

 /*
  * Most architectures have no use for kmap_high_get(), so let's abstract
···
 {
 	unsigned long vaddr;
 	int count;
+	unsigned int last_pkmap_nr;
+	unsigned int color = get_pkmap_color(page);

 start:
-	count = LAST_PKMAP;
+	count = get_pkmap_entries_count(color);
 	/* Find an empty entry */
 	for (;;) {
-		last_pkmap_nr = (last_pkmap_nr + 1) & LAST_PKMAP_MASK;
-		if (!last_pkmap_nr) {
+		last_pkmap_nr = get_next_pkmap_nr(color);
+		if (no_more_pkmaps(last_pkmap_nr, color)) {
 			flush_all_zero_pkmaps();
-			count = LAST_PKMAP;
+			count = get_pkmap_entries_count(color);
 		}
 		if (!pkmap_count[last_pkmap_nr])
 			break;	/* Found a usable entry */
···
 	 */
 	{
 		DECLARE_WAITQUEUE(wait, current);
+		wait_queue_head_t *pkmap_map_wait =
+			get_pkmap_wait_queue_head(color);

 		__set_current_state(TASK_UNINTERRUPTIBLE);
-		add_wait_queue(&pkmap_map_wait, &wait);
+		add_wait_queue(pkmap_map_wait, &wait);
 		unlock_kmap();
 		schedule();
-		remove_wait_queue(&pkmap_map_wait, &wait);
+		remove_wait_queue(pkmap_map_wait, &wait);
 		lock_kmap();

 		/* Somebody else might have mapped it while we slept */
···
 	unsigned long nr;
 	unsigned long flags;
 	int need_wakeup;
+	unsigned int color = get_pkmap_color(page);
+	wait_queue_head_t *pkmap_map_wait;

 	lock_kmap_any(flags);
 	vaddr = (unsigned long)page_address(page);
···
 	 * no need for the wait-queue-head's lock. Simply
 	 * test if the queue is empty.
 	 */
-	need_wakeup = waitqueue_active(&pkmap_map_wait);
+	pkmap_map_wait = get_pkmap_wait_queue_head(color);
+	need_wakeup = waitqueue_active(pkmap_map_wait);
 	}
 	unlock_kmap_any(flags);

 	/* do wake-up, if needed, race-free outside of the spin lock */
 	if (need_wakeup)
-		wake_up(&pkmap_map_wait);
+		wake_up(pkmap_map_wait);
 }

 EXPORT_SYMBOL(kunmap_high);