[PATCH] unpaged: VM_NONLINEAR VM_RESERVED

There's one peculiar use of VM_RESERVED which the previous patch left
behind: because VM_NONLINEAR's try_to_unmap_cluster uses vm_private_data
as a swapout cursor, but should never meet VM_RESERVED vmas, the
(VM_NONLINEAR|VM_RESERVED) test in sys_remap_file_pages was a way of
extending nonlinear remapping to VM_RESERVED vmas whose vm_private_data
serves some other purpose.  But that's an empty set: such vmas don't
have the populate method the same check requires.  So just throw away
those VM_RESERVED tests.
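
To make the empty-set argument concrete, here is a sketch (simplified
from the mm/fremap.c test below, not kernel code verbatim) of why the
VM_RESERVED arm of the old check could never match:

	if (vma && (vma->vm_flags & VM_SHARED) &&
	    (!vma->vm_private_data ||
	     (vma->vm_flags & (VM_NONLINEAR|VM_RESERVED))) &&
	    vma->vm_ops && vma->vm_ops->populate) {
		/*
		 * A VM_RESERVED vma could only reach here if it had a
		 * vm_ops->populate method; driver mappings which use
		 * vm_private_data for their own purposes provide no
		 * populate, so the VM_RESERVED arm matches nothing.
		 */
	}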

But one more interesting test in rmap.c has to go too: try_to_unmap_one
may legitimately want to swap out an anonymous page from a VM_RESERVED
or VM_UNPAGED area.
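
In outline (a hedged paraphrase of the first mm/rmap.c hunk below), the
remaining test keeps only the conditions that still apply:

	/*
	 * Only an mlock()ed vma, or a recently referenced pte, should
	 * stop the swapout: an anonymous page may now sit in a
	 * VM_RESERVED or VM_UNPAGED area and still be swappable.
	 */
	if ((vma->vm_flags & VM_LOCKED) ||
	    ptep_clear_flush_young(vma, address, pte)) {
		ret = SWAP_FAIL;
		goto out_unmap;
	}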

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

---
 mm/fremap.c |  6 ++----
 mm/rmap.c   | 15 +++++----------
 2 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/mm/fremap.c b/mm/fremap.c
--- a/mm/fremap.c
+++ b/mm/fremap.c
@@ -204,12 +204,10 @@
 	 * Make sure the vma is shared, that it supports prefaulting,
 	 * and that the remapped range is valid and fully within
 	 * the single existing vma.  vm_private_data is used as a
-	 * swapout cursor in a VM_NONLINEAR vma (unless VM_RESERVED
-	 * or VM_LOCKED, but VM_LOCKED could be revoked later on).
+	 * swapout cursor in a VM_NONLINEAR vma.
 	 */
 	if (vma && (vma->vm_flags & VM_SHARED) &&
-		(!vma->vm_private_data ||
-			(vma->vm_flags & (VM_NONLINEAR|VM_RESERVED))) &&
+		(!vma->vm_private_data || (vma->vm_flags & VM_NONLINEAR)) &&
 		vma->vm_ops && vma->vm_ops->populate &&
 			end > start && start >= vma->vm_start &&
 				end <= vma->vm_end) {
diff --git a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -529,10 +529,8 @@
 	 * If the page is mlock()d, we cannot swap it out.
 	 * If it's recently referenced (perhaps page_referenced
 	 * skipped over this mm) then we should reactivate it.
-	 *
-	 * Pages belonging to VM_RESERVED regions should not happen here.
 	 */
-	if ((vma->vm_flags & (VM_LOCKED|VM_RESERVED)) ||
+	if ((vma->vm_flags & VM_LOCKED) ||
 			ptep_clear_flush_young(vma, address, pte)) {
 		ret = SWAP_FAIL;
 		goto out_unmap;
@@ -725,7 +727,7 @@
 
 	list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
 						shared.vm_set.list) {
-		if (vma->vm_flags & (VM_LOCKED|VM_RESERVED))
+		if (vma->vm_flags & VM_LOCKED)
 			continue;
 		cursor = (unsigned long) vma->vm_private_data;
 		if (cursor > max_nl_cursor)
@@ -759,7 +761,7 @@
 	do {
 		list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
 						shared.vm_set.list) {
-			if (vma->vm_flags & (VM_LOCKED|VM_RESERVED))
+			if (vma->vm_flags & VM_LOCKED)
 				continue;
 			cursor = (unsigned long) vma->vm_private_data;
 			while ( cursor < max_nl_cursor &&
@@ -781,11 +783,8 @@
 	 * in locked vmas).  Reset cursor on all unreserved nonlinear
 	 * vmas, now forgetting on which ones it had fallen behind.
 	 */
-	list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
-						shared.vm_set.list) {
-		if (!(vma->vm_flags & VM_RESERVED))
-			vma->vm_private_data = NULL;
-	}
+	list_for_each_entry(vma, &mapping->i_mmap_nonlinear, shared.vm_set.list)
+		vma->vm_private_data = NULL;
 out:
 	spin_unlock(&mapping->i_mmap_lock);
 	return ret;