                        Cache and TLB Flushing
                             Under Linux

                David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and what side effect is expected
after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension, in that you just extend the
definition such that the side effect for a particular interface occurs
on all processors in the system.  Don't let this scare you into
thinking SMP cache/tlb flushing must be so inefficient; this is in
fact an area where many optimizations are possible.  For example,
if it can be proven that a user address space has never executed
on a cpu (see vma->cpu_vm_mask), one need not perform a flush
for this address space on that cpu.

First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) void flush_tlb_all(void)

        The most severe flush of all.  After this interface runs,
        any previous page table modification whatsoever will be
        visible to the cpu.

        This is usually invoked when the kernel page tables are
        changed, since such translations are "global" in nature.

2) void flush_tlb_mm(struct mm_struct *mm)

        This interface flushes an entire user address space from
        the TLB.  After running, this interface must make sure that
        any previous page table modifications for the address space
        'mm' will be visible to the cpu.  That is, after running,
        there will be no entries in the TLB for 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during
        fork and exec.

3) void flush_tlb_range(struct vm_area_struct *vma,
                        unsigned long start, unsigned long end)

        Here we are flushing a specific range of (user) virtual
        address translations from the TLB.  After running, this
        interface must make sure that any previous page table
        modifications for the address space 'vma->vm_mm' in the range
        'start' to 'end-1' will be visible to the cpu.  That is, after
        running, there will be no entries in the TLB for 'vma->vm_mm'
        for virtual addresses in the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized translations from the TLB, instead of having the kernel
        call flush_tlb_page (see below) for each entry which may be
        modified.  (A naive fallback of that kind is sketched after
        the flush_tlb_page entry below.)

4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)

        This time we need to remove the PAGE_SIZE sized translation
        from the TLB.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction TLB' in
        split-tlb type setups).

        After running, this interface must make sure that any previous
        page table modification for address space 'vma->vm_mm' for
        user virtual address 'addr' will be visible to the cpu.  That
        is, after running, there will be no entries in the TLB for
        'vma->vm_mm' for virtual address 'addr'.

        This is used primarily during fault processing.
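As an illustration of the fallback mentioned under flush_tlb_range
above: a port with no ranged invalidation primitive in hardware might
simply loop over flush_tlb_page.  This is a minimal, hypothetical
sketch, not any particular port's implementation:

        /*
         * Hypothetical fallback: invalidate one PAGE_SIZE translation
         * at a time.  A real port would prefer a single ranged (or
         * context-wide) invalidate if the hardware provides one.
         */
        void flush_tlb_range(struct vm_area_struct *vma,
                             unsigned long start, unsigned long end)
        {
                unsigned long addr;

                for (addr = start & PAGE_MASK; addr < end;
                     addr += PAGE_SIZE)
                        flush_tlb_page(vma, addr);
        }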
5) void flush_tlb_pgtables(struct mm_struct *mm,
                           unsigned long start, unsigned long end)

        The software page tables for address space 'mm' for virtual
        addresses in the range 'start' to 'end-1' are being torn down.

        Some platforms cache the lowest level of the software page tables
        in a linear virtually mapped array, to make TLB miss processing
        more efficient.  On such platforms, since the TLB is caching the
        software page table structure, it needs to be flushed when parts
        of the software page table tree are unlinked/freed.

        Sparc64 is one example of a platform which does this.

        Usually, when munmap()'ing an area of user virtual address
        space, the kernel leaves the page table parts around and just
        marks the individual pte's as invalid.  However, if very large
        portions of the address space are unmapped, the kernel frees up
        those portions of the software page tables to prevent potential
        excessive kernel memory usage caused by erratic mmap/munmap
        sequences.  It is at these times that flush_tlb_pgtables will
        be invoked.

6) void update_mmu_cache(struct vm_area_struct *vma,
                         unsigned long address, pte_t pte)

        At the end of every page fault, this routine is invoked to
        tell the architecture specific code that a translation
        described by "pte" now exists in the software page tables at
        virtual address "address" for address space "vma->vm_mm".

        A port may use this information in any way it so chooses.
        For example, it could use this event to pre-load TLB
        translations for software managed TLB configurations.
        The sparc64 port currently does this.  (A sketch of such a
        pre-load appears after this list.)

7) void tlb_migrate_finish(struct mm_struct *mm)

        This interface is called at the end of an explicit
        process migration.  This interface provides a hook
        to allow a platform to update TLB or context-specific
        information for the address space.

        The ia64 sn2 platform is one example of a platform
        that uses this interface.

8) void lazy_mmu_prot_update(pte_t pte)

        This interface is called whenever the protection on
        any user PTE changes.  This interface provides a notification
        to architecture specific code to take appropriate action.
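Here is a hedged sketch of the kind of pre-load update_mmu_cache can
perform on a software-managed TLB.  tlb_preload_entry() is an invented
name standing in for whatever insertion primitive the architecture
actually has:

        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t pte)
        {
                /* Only valid translations are worth pre-loading. */
                if (!pte_present(pte))
                        return;

                /*
                 * tlb_preload_entry() is hypothetical: insert the new
                 * translation into the software-managed TLB so that
                 * the access which faulted does not immediately take
                 * a TLB miss as well.
                 */
                tlb_preload_entry(vma->vm_mm, address & PAGE_MASK, pte);
        }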
Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

        1) flush_cache_mm(mm);
           change_all_page_tables_of(mm);
           flush_tlb_mm(mm);

        2) flush_cache_range(vma, start, end);
           change_range_of_page_tables(mm, start, end);
           flush_tlb_range(vma, start, end);

        3) flush_cache_page(vma, addr, pfn);
           set_pte(pte_pointer, new_pte_val);
           flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed, physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.
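In such a fully coherent, physically indexed configuration the whole
family can collapse to no-ops.  A minimal sketch, assuming a cache of
that kind (the i386 port's asm/cacheflush.h takes this form):

        /*
         * Sketch for a coherent, physically indexed, physically
         * tagged cache: no flushing is ever required, so each
         * routine compiles away to nothing.
         */
        #define flush_cache_mm(mm)                      do { } while (0)
        #define flush_cache_range(vma, start, end)      do { } while (0)
        #define flush_cache_page(vma, vmaddr, pfn)      do { } while (0)

        /* ...and similarly for the rest of the routines below. */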
Here are the routines, one by one:

1) void flush_cache_mm(struct mm_struct *mm)

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during exit and exec.

2) void flush_cache_dup_mm(struct mm_struct *mm)

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during fork.

        This option is separate from flush_cache_mm to allow some
        optimizations for VIPT caches.

3) void flush_cache_range(struct vm_area_struct *vma,
                          unsigned long start, unsigned long end)

        Here we are flushing a specific range of (user) virtual
        addresses from the cache.  After running, there will be no
        entries in the cache for 'vma->vm_mm' for virtual addresses in
        the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized regions from the cache, instead of having the kernel
        call flush_cache_page (see below) for each entry which may be
        modified.

4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)

        This time we need to remove a PAGE_SIZE sized range
        from the cache.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction cache' in
        "Harvard" type cache layouts).

        The 'pfn' indicates the physical page frame (shift this value
        left by PAGE_SHIFT to get the physical address) that 'addr'
        translates to.  It is this mapping which should be removed from
        the cache.

        After running, there will be no entries in the cache for
        'vma->vm_mm' for virtual address 'addr' which translates
        to 'pfn'.

        This is used primarily during fault processing.

5) void flush_cache_kmaps(void)

        This routine need only be implemented if the platform utilizes
        highmem.  It will be called right before all of the kmaps
        are invalidated.

        After running, there will be no entries in the cache for
        the kernel virtual address range PKMAP_ADDR(0) to
        PKMAP_ADDR(LAST_PKMAP).

        This routine should be implemented in asm/highmem.h.

6) void flush_cache_vmap(unsigned long start, unsigned long end)
   void flush_cache_vunmap(unsigned long start, unsigned long end)

        Here in these two interfaces we are flushing a specific range
        of (kernel) virtual addresses from the cache.  After running,
        there will be no entries in the cache for the kernel address
        space for virtual addresses in the range 'start' to 'end-1'.

        The first of these two routines is invoked after map_vm_area()
        has installed the page table entries.  The second is invoked
        before unmap_vm_area() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

NOTE: This does not fix shared mmaps; check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).

Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

  void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
  void clear_user_page(void *to, unsigned long addr, struct page *page)

        These two routines store data in user anonymous or COW
        pages.  They allow a port to efficiently avoid D-cache alias
        issues between userspace and the kernel.

        For example, a port may temporarily map 'from' and 'to' to
        kernel virtual addresses during the copy.  The virtual address
        for these two pages is chosen in such a way that the kernel
        load/store instructions happen to virtual addresses which are
        of the same "color" as the user mapping of the page.  Sparc64,
        for example, uses this technique.

        The 'addr' parameter tells the virtual address where the
        user will ultimately have this page mapped, and the 'page'
        parameter gives a pointer to the struct page of the target.

        If D-cache aliasing is not an issue, these two routines may
        simply call memcpy/memset directly and do nothing more.
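A minimal sketch of that trivial case, assuming D-cache aliasing is
not an issue on the port (the 'addr' and 'page' arguments are then
simply ignored):

        void copy_user_page(void *to, void *from, unsigned long addr,
                            struct page *page)
        {
                memcpy(to, from, PAGE_SIZE);
        }

        void clear_user_page(void *to, unsigned long addr,
                             struct page *page)
        {
                memset(to, 0, PAGE_SIZE);
        }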
  void flush_dcache_page(struct page *page)

        Any time the kernel writes to a page cache page, _OR_
        the kernel is about to read from a page cache page and
        user space shared/writable mappings of this page potentially
        exist, this routine is called.

        NOTE: This routine need only be called for page cache pages
              which can potentially ever be mapped into the address
              space of a user process.  So for example, VFS layer code
              handling vfs symlinks in the page cache need not call
              this interface at all.

        The phrase "kernel writes to a page cache page" means,
        specifically, that the kernel executes store instructions
        that dirty data in that page at the page->virtual mapping
        of that page.  It is important to flush here to handle
        D-cache aliasing, to make sure these kernel stores are
        visible to user space mappings of that page.

        The corollary case is just as important: if there are users
        which have shared+writable mappings of this file, we must make
        sure that kernel reads of these pages will see the most recent
        stores done by the user.

        If D-cache aliasing is not an issue, this routine may
        simply be defined as a nop on that architecture.

        There is a bit set aside in page->flags (PG_arch_1) as
        "architecture private".  The kernel guarantees that,
        for pagecache pages, it will clear this bit when such
        a page first enters the pagecache.

        This allows these interfaces to be implemented much more
        efficiently.  It allows one to "defer" (perhaps indefinitely)
        the actual flush if there are currently no user processes
        mapping this page.  See sparc64's flush_dcache_page and
        update_mmu_cache implementations for an example of how to go
        about doing this.

        The idea is, first at flush_dcache_page() time, if
        page->mapping->i_mmap is an empty tree and ->i_mmap_nonlinear
        an empty list, just mark the architecture private page flag bit.
        Later, in update_mmu_cache(), a check is made of this flag bit,
        and if set the flush is done and the flag bit is cleared.

        IMPORTANT NOTE: It is often important, if you defer the flush,
                        that the actual flush occurs on the same CPU
                        which did the stores into the page that made
                        it dirty.  Again, see sparc64 for examples of
                        how to deal with this.
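The two halves of that deferral scheme might look as follows.  This
is a hedged sketch: __flush_dcache_page_all() is an invented name for
the port's real flush primitive, and mapping_mapped() stands in for
the i_mmap emptiness checks described above:

        void flush_dcache_page(struct page *page)
        {
                struct address_space *mapping = page_mapping(page);

                /* No user mappings yet: defer, just mark the page. */
                if (mapping && !mapping_mapped(mapping)) {
                        set_bit(PG_arch_1, &page->flags);
                        return;
                }

                __flush_dcache_page_all(page);  /* hypothetical */
        }

        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t pte)
        {
                struct page *page = pte_page(pte);

                /*
                 * Perform the deferred flush, if one is pending.  A
                 * real port would first validate that 'pte' maps a
                 * page cache page at all.
                 */
                if (test_and_clear_bit(PG_arch_1, &page->flags))
                        __flush_dcache_page_all(page);
        }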
  void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
                         unsigned long user_vaddr,
                         void *dst, void *src, int len)
  void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
                           unsigned long user_vaddr,
                           void *dst, void *src, int len)

        When the kernel needs to copy arbitrary data in and out
        of arbitrary user pages (e.g. for ptrace()) it will use
        these two routines.

        Any necessary cache flushing or other coherency operations
        that need to occur should happen here.  If the processor's
        instruction cache does not snoop cpu stores, it is very
        likely that you will need to flush the instruction cache
        for copy_to_user_page().

  void flush_anon_page(struct vm_area_struct *vma, struct page *page,
                       unsigned long vmaddr)

        When the kernel needs to access the contents of an anonymous
        page, it calls this function (currently only
        get_user_pages()).  Note: flush_dcache_page() deliberately
        doesn't work for an anonymous page.  The default
        implementation is a nop (and should remain so for all coherent
        architectures).  For incoherent architectures, it should flush
        the cache of the page at vmaddr.

  void flush_kernel_dcache_page(struct page *page)

        When the kernel needs to modify a user page it has obtained
        with kmap, it calls this function after all modifications are
        complete (but before kunmapping it) to bring the underlying
        page up to date.  It is assumed here that the user has no
        incoherent cached copies (i.e. the original page was obtained
        from a mechanism like get_user_pages()).  The default
        implementation is a nop and should remain so on all coherent
        architectures.  On incoherent architectures, this should flush
        the kernel cache for the page (using page_address(page)).

  void flush_icache_range(unsigned long start, unsigned long end)

        When the kernel stores into addresses that it will execute
        out of (e.g. when loading modules), this function is called.

        If the icache does not snoop stores then this routine will
        need to flush it.  (A hypothetical sketch appears at the end
        of this document.)

  void flush_icache_page(struct vm_area_struct *vma, struct page *page)

        All the functionality of flush_icache_page can be implemented in
        flush_dcache_page and update_mmu_cache.  In 2.7 the hope is to
        remove this interface completely.
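Finally, the flush_icache_range() sketch promised above, for a cpu
whose I-cache does not snoop stores.  Every line-level primitive here
is an invented name standing in for whatever instructions the
architecture actually provides:

        void flush_icache_range(unsigned long start, unsigned long end)
        {
                unsigned long addr;

                for (addr = start & ~(L1_CACHE_BYTES - 1UL); addr < end;
                     addr += L1_CACHE_BYTES) {
                        clean_dcache_line(addr);        /* hypothetical */
                        invalidate_icache_line(addr);   /* hypothetical */
                }

                /*
                 * Hypothetical barrier to discard any instructions
                 * already fetched before execution resumes.
                 */
                instruction_sync_barrier();
        }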