[PATCH] unpaged: VM_NONLINEAR VM_RESERVED

There's one peculiar use of VM_RESERVED which the previous patch left behind:
because VM_NONLINEAR's try_to_unmap_cluster uses vm_private_data as a swapout
cursor, but should never meet VM_RESERVED vmas, the test was a way of extending
VM_NONLINEAR to VM_RESERVED vmas whose vm_private_data was already in use for
some other purpose.  But that's an empty set - such vmas don't have the
populate function required to reach this path.  So just throw away those
VM_RESERVED tests.
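
For reference, a simplified sketch of that cursor use, paraphrased from
try_to_unmap_file() in mm/rmap.c of this era (not the literal code):
vm_private_data holds the byte offset into the nonlinear vma up to which
clusters have already been scanned, so successive swapout passes resume
where the previous one stopped.

	list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
				shared.vm_set.list) {
		if (vma->vm_flags & VM_LOCKED)
			continue;
		/* resume from wherever the last pass left off */
		cursor = (unsigned long) vma->vm_private_data;
		while (cursor < max_nl_cursor &&
		       cursor < vma->vm_end - vma->vm_start) {
			try_to_unmap_cluster(cursor, &mapcount, vma);
			cursor += CLUSTER_SIZE;
			/* remember progress for the next pass */
			vma->vm_private_data = (void *) cursor;
		}
	}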

But one more interesting test in rmap.c has to go too: try_to_unmap_one will
want to swap out an anonymous page from a VM_RESERVED or VM_UNPAGED area
(anonymous pages land in such areas via copy-on-write of private mappings).
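
To see how an anonymous page gets into such an area, consider a private
mapping of a driver region that the kernel marks VM_RESERVED; the device
below is only illustrative, and whether a given driver accepts MAP_PRIVATE
varies:

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* hypothetical: a device whose mmap sets VM_RESERVED */
		int fd = open("/dev/mem", O_RDWR);
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE, fd, 0);
		if (fd < 0 || p == MAP_FAILED)
			return 1;
		/* write fault does COW: an anonymous page now lives
		 * inside the VM_RESERVED vma */
		p[0] = 1;
		/* under memory pressure, try_to_unmap_one should be
		 * free to swap that anonymous page out */
		pause();
		return 0;
	}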

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Authored by Hugh Dickins, committed by Linus Torvalds (101d2be7 → 0b14c179)

 mm/fremap.c |  6 ++----
 mm/rmap.c   | 15 +++++----------
 2 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/mm/fremap.c b/mm/fremap.c
--- a/mm/fremap.c
+++ b/mm/fremap.c
@@ -204,12 +204,10 @@
 	 * Make sure the vma is shared, that it supports prefaulting,
 	 * and that the remapped range is valid and fully within
 	 * the single existing vma.  vm_private_data is used as a
-	 * swapout cursor in a VM_NONLINEAR vma (unless VM_RESERVED
-	 * or VM_LOCKED, but VM_LOCKED could be revoked later on).
+	 * swapout cursor in a VM_NONLINEAR vma.
 	 */
 	if (vma && (vma->vm_flags & VM_SHARED) &&
-		(!vma->vm_private_data ||
-			(vma->vm_flags & (VM_NONLINEAR|VM_RESERVED))) &&
+		(!vma->vm_private_data || (vma->vm_flags & VM_NONLINEAR)) &&
 		vma->vm_ops && vma->vm_ops->populate &&
 			end > start && start >= vma->vm_start &&
 			end <= vma->vm_end) {
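
The populate requirement above is what makes "VM_RESERVED nonlinear" an
empty set: ordinary page-cache-backed files get a populate method from the
page cache, while remap_pfn_range-style driver mappings (the ones that set
VM_RESERVED) provide none.  Paraphrased from mm/filemap.c of the same era:

	static struct vm_operations_struct generic_file_vm_ops = {
		.nopage		= filemap_nopage,
		/* this is what sys_remap_file_pages() insists on */
		.populate	= filemap_populate,
	};

A driver vma set up by remap_pfn_range has no ->populate at all, so it could
never have passed the check even when the VM_RESERVED exemption was there.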
diff --git a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -529,10 +529,8 @@
 	 * If the page is mlock()d, we cannot swap it out.
 	 * If it's recently referenced (perhaps page_referenced
 	 * skipped over this mm) then we should reactivate it.
-	 *
-	 * Pages belonging to VM_RESERVED regions should not happen here.
 	 */
-	if ((vma->vm_flags & (VM_LOCKED|VM_RESERVED)) ||
+	if ((vma->vm_flags & VM_LOCKED) ||
 			ptep_clear_flush_young(vma, address, pte)) {
 		ret = SWAP_FAIL;
 		goto out_unmap;
@@ -727,7 +725,7 @@
 
 	list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
 						shared.vm_set.list) {
-		if (vma->vm_flags & (VM_LOCKED|VM_RESERVED))
+		if (vma->vm_flags & VM_LOCKED)
 			continue;
 		cursor = (unsigned long) vma->vm_private_data;
 		if (cursor > max_nl_cursor)
@@ -761,7 +759,7 @@
 	do {
 		list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
 						shared.vm_set.list) {
-			if (vma->vm_flags & (VM_LOCKED|VM_RESERVED))
+			if (vma->vm_flags & VM_LOCKED)
 				continue;
 			cursor = (unsigned long) vma->vm_private_data;
 			while ( cursor < max_nl_cursor &&
@@ -783,11 +781,8 @@
 	 * in locked vmas).  Reset cursor on all unreserved nonlinear
 	 * vmas, now forgetting on which ones it had fallen behind.
 	 */
-	list_for_each_entry(vma, &mapping->i_mmap_nonlinear,
-						shared.vm_set.list) {
-		if (!(vma->vm_flags & VM_RESERVED))
-			vma->vm_private_data = NULL;
-	}
+	list_for_each_entry(vma, &mapping->i_mmap_nonlinear, shared.vm_set.list)
+		vma->vm_private_data = NULL;
 out:
 	spin_unlock(&mapping->i_mmap_lock);
 	return ret;