Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


[PATCH] unpaged: anon in VM_UNPAGED

copy_one_pte needs to copy the anonymous COWed pages in a VM_UNPAGED area,
zap_pte_range needs to free them, do_wp_page needs to COW them: just like
ordinary pages, not like the unpaged.

But recognizing them is a little subtle: because PageReserved is no longer a
condition for remap_pfn_range, we can now mmap all of /dev/mem (whether the
distro permits, and whether it's advisable on this or that architecture, is
another matter). So if we can see a PageAnon, it may not be ours to mess with
(or may be ours from elsewhere in the address space). I suspect there's an
entertaining insoluble self-referential problem here, but the page_is_anon
function does a good practical job, and MAP_PRIVATE PROT_WRITE VM_UNPAGED will
always be an odd choice.

While updating the comment on page_address_in_vma, I noticed a potential NULL
dereference in a path we don't actually take, and fixed it.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Authored by Hugh Dickins, committed by Linus Torvalds
ee498ed7 920fc356

2 files changed: +46 -24

mm/memory.c (+41 -22)
@@ -350,6 +350,22 @@
 }
 
 /*
+ * page_is_anon applies strict checks for an anonymous page belonging to
+ * this vma at this address.  It is used on VM_UNPAGED vmas, which are
+ * usually populated with shared originals (which must not be counted),
+ * but occasionally contain private COWed copies (when !VM_SHARED, or
+ * perhaps via ptrace when VM_SHARED).  An mmap of /dev/mem might window
+ * free pages, pages from other processes, or from other parts of this:
+ * it's tricky, but try not to be deceived by foreign anonymous pages.
+ */
+static inline int page_is_anon(struct page *page,
+			struct vm_area_struct *vma, unsigned long addr)
+{
+	return page && PageAnon(page) && page_mapped(page) &&
+		page_address_in_vma(page, vma) == addr;
+}
+
+/*
  * copy one vm_area from one task to the other. Assumes the page tables
  * already present in the new task to be cleared in the whole range
  * covered by this vma.
@@ -397,22 +381,21 @@
 		goto out_set_pte;
 	}
 
-	/* If the region is VM_UNPAGED, the mapping is not
-	 * mapped via rmap - duplicate the pte as is.
-	 */
-	if (vm_flags & VM_UNPAGED)
-		goto out_set_pte;
-
 	pfn = pte_pfn(pte);
-	/* If the pte points outside of valid memory but
+	page = pfn_valid(pfn)? pfn_to_page(pfn): NULL;
+
+	if (unlikely(vm_flags & VM_UNPAGED))
+		if (!page_is_anon(page, vma, addr))
+			goto out_set_pte;
+
+	/*
+	 * If the pte points outside of valid memory but
 	 * the region is not VM_UNPAGED, we have a problem.
 	 */
-	if (unlikely(!pfn_valid(pfn))) {
+	if (unlikely(!page)) {
 		print_bad_pte(vma, pte, addr);
 		goto out_set_pte; /* try to do something sane */
 	}
-
-	page = pfn_to_page(pfn);
 
 	/*
 	 * If it's a COW mapping, write protect it both
@@ -583,17 +568,20 @@
 			continue;
 		}
 		if (pte_present(ptent)) {
-			struct page *page = NULL;
+			struct page *page;
+			unsigned long pfn;
 
 			(*zap_work) -= PAGE_SIZE;
 
-			if (!(vma->vm_flags & VM_UNPAGED)) {
-				unsigned long pfn = pte_pfn(ptent);
-				if (unlikely(!pfn_valid(pfn)))
-					print_bad_pte(vma, ptent, addr);
-				else
-					page = pfn_to_page(pfn);
-			}
+			pfn = pte_pfn(ptent);
+			page = pfn_valid(pfn)? pfn_to_page(pfn): NULL;
+
+			if (unlikely(vma->vm_flags & VM_UNPAGED)) {
+				if (!page_is_anon(page, vma, addr))
+					page = NULL;
+			} else if (unlikely(!page))
+				print_bad_pte(vma, ptent, addr);
+
 			if (unlikely(details) && page) {
 				/*
 				 * unmap_shared_mapping_pages() wants to
@@ -1313,10 +1295,11 @@
 	old_page = pfn_to_page(pfn);
 	src_page = old_page;
 
-	if (unlikely(vma->vm_flags & VM_UNPAGED)) {
-		old_page = NULL;
-		goto gotten;
-	}
+	if (unlikely(vma->vm_flags & VM_UNPAGED))
+		if (!page_is_anon(old_page, vma, address)) {
+			old_page = NULL;
+			goto gotten;
+		}
 
 	if (PageAnon(old_page) && !TestSetPageLocked(old_page)) {
 		int reuse = can_share_swap_page(old_page);
mm/rmap.c (+5 -2)
@@ -225,7 +225,9 @@
 
 /*
  * At what user virtual address is page expected in vma? checking that the
- * page matches the vma: currently only used by unuse_process, on anon pages.
+ * page matches the vma: currently only used on anon pages, by unuse_vma;
+ * and by extraordinary checks on anon pages in VM_UNPAGED vmas, taking
+ * care that an mmap of /dev/mem might window free and foreign pages.
  */
 unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 {
@@ -236,7 +234,8 @@
 		    (void *)page->mapping - PAGE_MAPPING_ANON)
 			return -EFAULT;
 	} else if (page->mapping && !(vma->vm_flags & VM_NONLINEAR)) {
-		if (vma->vm_file->f_mapping != page->mapping)
+		if (!vma->vm_file ||
+		    vma->vm_file->f_mapping != page->mapping)
 			return -EFAULT;
 	} else
 		return -EFAULT;