mm: fix page allocation for larger I/O segments

In some cases the IO subsystem is able to merge requests if the pages are
adjacent in physical memory. This was achieved in the allocator by having
expand() return pages in physically contiguous order in situations where a
large buddy was split. However, list-based anti-fragmentation changed the
order in which pages were returned, to avoid searching in buffered_rmqueue()
for a page of the appropriate migrate type.
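
To see why the order matters, note that pages whose page frame numbers (pfns)
ascend consecutively can be merged into a single IO segment, while any break
in contiguity forces a new segment. As a rough illustration in userspace C
(count_segments() is a hypothetical helper, not kernel code, and it assumes
the simplification that only ascending, physically adjacent pages coalesce):

	/*
	 * Count the IO segments needed for a run of pages, assuming only
	 * ascending, physically adjacent page frame numbers merge.
	 */
	static int count_segments(const unsigned long *pfns, int n)
	{
		int i, segs = n ? 1 : 0;

		for (i = 1; i < n; i++)
			if (pfns[i] != pfns[i - 1] + 1)
				segs++;
		return segs;
	}

With pages in physical order, e.g. pfns {512, 513, 514, 515}, this counts one
segment; the same four pages in reverse order count as four.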

This patch restores the old behaviour of rmqueue_bulk(): the physical order
of pages returned by the allocator is preserved, without incurring increased
search costs for anti-fragmentation.
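
The mechanics of the fix can be sketched outside the kernel. list_add()
always inserts immediately after the head it is given, so adding every page
at a fixed head links the pages in reverse order; advancing the head to the
page just added preserves the order in which expand() handed them back. The
following is a minimal userspace sketch, assuming simplified stand-ins for
the kernel's struct list_head, struct page, list_add() and list_entry();
fill() and dump() are hypothetical helpers for the demonstration:

	#include <stdio.h>
	#include <stddef.h>

	/* Simplified stand-ins for the kernel's list and page structures. */
	struct list_head { struct list_head *next, *prev; };
	struct page { unsigned long pfn; struct list_head lru; };

	static void init_list(struct list_head *h) { h->next = h->prev = h; }

	/* Kernel-style list_add(): link 'entry' in right after 'head'. */
	static void list_add(struct list_head *entry, struct list_head *head)
	{
		entry->next = head->next;
		entry->prev = head;
		head->next->prev = entry;
		head->next = entry;
	}

	#define list_entry(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	/* Add n pages to 'list'; with 'advance' set, mimic the fix by
	 * moving the insertion point to the page just added. */
	static void fill(struct page *pages, int n, struct list_head *list,
			 int advance)
	{
		struct list_head *head = list;
		int i;

		for (i = 0; i < n; i++) {
			list_add(&pages[i].lru, head);
			if (advance)
				head = &pages[i].lru; /* the one-line fix */
		}
	}

	static void dump(const char *tag, struct list_head *list)
	{
		struct list_head *pos;

		printf("%s:", tag);
		for (pos = list->next; pos != list; pos = pos->next)
			printf(" %lu",
			       list_entry(pos, struct page, lru)->pfn);
		printf("\n");
	}

	int main(void)
	{
		struct page a[4] = { { 512 }, { 513 }, { 514 }, { 515 } };
		struct page b[4] = { { 512 }, { 513 }, { 514 }, { 515 } };
		struct list_head fixed_head, moving_head;

		init_list(&fixed_head);
		init_list(&moving_head);

		fill(a, 4, &fixed_head, 0);	/* old behaviour */
		fill(b, 4, &moving_head, 1);	/* patched behaviour */

		dump("fixed head ", &fixed_head);	/* 515 514 513 512 */
		dump("moving head", &moving_head);	/* 512 513 514 515 */
		return 0;
	}

Advancing the insertion point costs a single pointer assignment per page,
which is how the ordering is restored without reintroducing the search that
the list-based scheme was designed to avoid.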

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Mark Lord <mlord@pobox.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -847,8 +847,19 @@
 		struct page *page = __rmqueue(zone, order, migratetype);
 		if (unlikely(page == NULL))
 			break;
+
+		/*
+		 * Split buddy pages returned by expand() are received here
+		 * in physical page order. The page is added to the caller's
+		 * list and the list head then moves forward. From the caller's
+		 * perspective, the linked list is ordered by page number under
+		 * some conditions. This is useful for IO devices that can
+		 * merge IO requests if the physical pages are ordered
+		 * properly.
+		 */
 		list_add(&page->lru, list);
 		set_page_private(page, migratetype);
+		list = &page->lru;
 	}
 	spin_unlock(&zone->lock);
 	return i;