
mm: compaction: encapsulate defer reset logic

Currently there are several functions to manipulate the deferred
compaction state variables. The remaining case where the variables are
touched directly is when a successful allocation occurs in direct
compaction, or is expected to be successful in the future by kswapd.
Here, the lowest order that is expected to fail is updated, and in the
case of a successful allocation, the deferred status and counter are
reset completely.

Create a new function compaction_defer_reset() to encapsulate this
functionality and make it easier to understand the code. No functional
change.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Vlastimil Babka, committed by Linus Torvalds
de6c60a6 0eb927c0

 include/linux/compaction.h | 16 ++++++++++++++++
 mm/compaction.c            |  9 ++++-----
 mm/page_alloc.c            |  5 +----
 3 files changed, 21 insertions(+), 9 deletions(-)

--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -62,6 +62,22 @@
 	return zone->compact_considered < defer_limit;
 }
 
+/*
+ * Update defer tracking counters after successful compaction of given order,
+ * which means an allocation either succeeded (alloc_success == true) or is
+ * expected to succeed.
+ */
+static inline void compaction_defer_reset(struct zone *zone, int order,
+		bool alloc_success)
+{
+	if (alloc_success) {
+		zone->compact_considered = 0;
+		zone->compact_defer_shift = 0;
+	}
+	if (order >= zone->compact_order_failed)
+		zone->compact_order_failed = order + 1;
+}
+
 /* Returns true if restarting compaction after many failures */
 static inline bool compaction_restarting(struct zone *zone, int order)
 {

--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1124,12 +1124,11 @@
 		compact_zone(zone, cc);
 
 		if (cc->order > 0) {
-			int ok = zone_watermark_ok(zone, cc->order,
-						low_wmark_pages(zone), 0, 0);
-			if (ok && cc->order >= zone->compact_order_failed)
-				zone->compact_order_failed = cc->order + 1;
+			if (zone_watermark_ok(zone, cc->order,
+						low_wmark_pages(zone), 0, 0))
+				compaction_defer_reset(zone, cc->order, false);
 			/* Currently async compaction is never deferred. */
-			else if (!ok && cc->sync)
+			else if (cc->sync)
 				defer_compaction(zone, cc->order);
 		}
 

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2235,10 +2235,7 @@
 					preferred_zone, migratetype);
 	if (page) {
 		preferred_zone->compact_blockskip_flush = false;
-		preferred_zone->compact_considered = 0;
-		preferred_zone->compact_defer_shift = 0;
-		if (order >= preferred_zone->compact_order_failed)
-			preferred_zone->compact_order_failed = order + 1;
+		compaction_defer_reset(preferred_zone, order, true);
 		count_vm_event(COMPACTSUCCESS);
 		return page;
 	}