Fix 'get_user_pages_fast()' with non-page-aligned start address

Alexey Dobriyan reported trouble with LTP with the new fast-gup code,
and Johannes Weiner debugged it to non-page-aligned addresses, where the
new get_user_pages_fast() code would do all the wrong things, including
just traversing past the end of the requested area due to 'addr' never
matching 'end' exactly.

This is not a pretty fix, and we may actually want to move the alignment
into generic code, leaving just the core code per-arch, but Alexey
verified that the vmsplice01 LTP test doesn't crash with this.

Reported-and-tested-by: Alexey Dobriyan <adobriyan@gmail.com>
Debugged-by: Johannes Weiner <hannes@saeurebad.de>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

+6 -3
arch/x86/mm/gup.c
@@ -223,14 +223,17 @@
 		struct page **pages)
 {
 	struct mm_struct *mm = current->mm;
-	unsigned long end = start + (nr_pages << PAGE_SHIFT);
-	unsigned long addr = start;
+	unsigned long addr, len, end;
 	unsigned long next;
 	pgd_t *pgdp;
 	int nr = 0;
 
+	start &= PAGE_MASK;
+	addr = start;
+	len = (unsigned long) nr_pages << PAGE_SHIFT;
+	end = start + len;
 	if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ,
-					start, nr_pages*PAGE_SIZE)))
+					start, len)))
 		goto slow_irqon;
 
 	/*