Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantics

__GFP_REPEAT was designed to allow retry-but-eventually-fail semantics in
the page allocator. This has been true, but only for allocation requests
larger than PAGE_ALLOC_COSTLY_ORDER; it has always been ignored for
smaller sizes. This is unfortunate because there is no way to express
the same semantics for those requests, and they are considered too
important to fail, so they might end up looping in the page allocator
forever, similarly to GFP_NOFAIL requests.

Now that the whole tree has been cleaned up and accidental or misled
usage of the __GFP_REPEAT flag has been removed for !costly requests, we
can give the original flag a better name and, more importantly, more
useful semantics. Let's rename it to __GFP_RETRY_MAYFAIL, which tells
the user that the allocator will try really hard but there is no promise
of success. This works independent of the order and overrides the
default allocator behavior. Page allocator users have several levels of
guarantee vs. cost options (taking GFP_KERNEL as an example):

- GFP_KERNEL & ~__GFP_RECLAIM - optimistic allocation without _any_
  attempt to free memory at all. The most lightweight mode, which does
  not even kick the background reclaim. Should be used carefully because
  it might deplete the memory and the next user might hit the more
  aggressive reclaim.

- GFP_KERNEL & ~__GFP_DIRECT_RECLAIM (or GFP_NOWAIT) - optimistic
  allocation without any attempt to free memory from the current
  context, but it can wake kswapd to reclaim memory if the zone is below
  the low watermark. Can be used from atomic contexts or when the
  request is a performance optimization and there is another fallback
  for a slow path.

- (GFP_KERNEL|__GFP_HIGH) & ~__GFP_DIRECT_RECLAIM (aka GFP_ATOMIC) -
  non-sleeping allocation that can access some portion of memory
  reserves. Usually used from interrupt/bh context with an expensive
  slow path fallback.

- GFP_KERNEL - both background and direct reclaim are allowed and the
_default_ page allocator behavior is used. That means that !costly
allocation requests are basically nofail but there is no guarantee of
that behavior so failures have to be checked properly by callers
(e.g. OOM killer victim is allowed to fail currently).

- GFP_KERNEL | __GFP_NORETRY - overrides the default allocator behavior
and all allocation requests fail early rather than cause disruptive
reclaim (one round of reclaim in this implementation). The OOM killer
is not invoked.

- GFP_KERNEL | __GFP_RETRY_MAYFAIL - overrides the default allocator
behavior and all allocation requests try really hard. The request
will fail if the reclaim cannot make any progress. The OOM killer
won't be triggered.

- GFP_KERNEL | __GFP_NOFAIL - overrides the default allocator behavior
and all allocation requests will loop endlessly until they succeed.
This might be really dangerous especially for larger orders.

Existing users of __GFP_REPEAT are converted to __GFP_RETRY_MAYFAIL
because this matches the semantics they already relied on. No new users
are added. __alloc_pages_slowpath is changed to bail out for
__GFP_RETRY_MAYFAIL if there is no progress and we have already passed
the OOM point.

This means that all the reclaim opportunities have been exhausted except
the most disruptive one (the OOM killer), and a user-defined fallback
behavior is more sensible than to keep retrying in the page allocator.

[akpm@linux-foundation.org: fix arch/sparc/kernel/mdesc.c]
[mhocko@suse.com: semantic fix]
Link: http://lkml.kernel.org/r/20170626123847.GM11534@dhcp22.suse.cz
[mhocko@kernel.org: address other thing spotted by Vlastimil]
Link: http://lkml.kernel.org/r/20170626124233.GN11534@dhcp22.suse.cz
Link: http://lkml.kernel.org/r/20170623085345.11304-3-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Alex Belits <alex.belits@cavium.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: NeilBrown <neilb@suse.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Michal Hocko, committed by Linus Torvalds
dcda9b04 473738eb

+86 -47
+1 -1
Documentation/DMA-ISA-LPC.txt
@@ -42,7 +42,7 @@
 
 Unfortunately the memory available for ISA DMA is scarce so unless you
 allocate the memory during boot-up it's a good idea to also pass
-__GFP_REPEAT and __GFP_NOWARN to make the allocator try a bit harder.
+__GFP_RETRY_MAYFAIL and __GFP_NOWARN to make the allocator try a bit harder.
 
 (This scarcity also means that you should allocate the buffer as
 early as possible and not release it until the driver is unloaded.)
+1 -1
arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -56,7 +56,7 @@
 	return (pgd_t *)__get_free_page(pgtable_gfp_flags(mm, PGALLOC_GFP));
 #else
 	struct page *page;
-	page = alloc_pages(pgtable_gfp_flags(mm, PGALLOC_GFP | __GFP_REPEAT),
+	page = alloc_pages(pgtable_gfp_flags(mm, PGALLOC_GFP | __GFP_RETRY_MAYFAIL),
 			4);
 	if (!page)
 		return NULL;
+1 -1
arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -93,7 +93,7 @@
 	}
 
 	if (!hpt)
-		hpt = __get_free_pages(GFP_KERNEL|__GFP_ZERO|__GFP_REPEAT
+		hpt = __get_free_pages(GFP_KERNEL|__GFP_ZERO|__GFP_RETRY_MAYFAIL
 				       |__GFP_NOWARN, order - PAGE_SHIFT);
 
 	if (!hpt)
+1 -1
arch/sparc/kernel/mdesc.c
@@ -205,7 +205,7 @@
 	handle_size = (sizeof(struct mdesc_handle) -
 		       sizeof(struct mdesc_hdr) +
 		       mdesc_size);
-	base = kmalloc(handle_size + 15, GFP_KERNEL | __GFP_REPEAT);
+	base = kmalloc(handle_size + 15, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 	if (!base)
 		return NULL;
 
+1 -1
drivers/mmc/host/wbsd.c
@@ -1386,7 +1386,7 @@
 	 * order for ISA to be able to DMA to it.
 	 */
 	host->dma_buffer = kmalloc(WBSD_DMA_SIZE,
-		GFP_NOIO | GFP_DMA | __GFP_REPEAT | __GFP_NOWARN);
+		GFP_NOIO | GFP_DMA | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
 	if (!host->dma_buffer)
 		goto free;
 
+1 -1
drivers/s390/char/vmcp.c
@@ -98,7 +98,7 @@
 	}
 	if (!session->response)
 		session->response = (char *)__get_free_pages(GFP_KERNEL
-				| __GFP_REPEAT | GFP_DMA,
+				| __GFP_RETRY_MAYFAIL | GFP_DMA,
 				get_order(session->bufsize));
 	if (!session->response) {
 		mutex_unlock(&session->mutex);
+1 -1
drivers/target/target_core_transport.c
@@ -252,7 +252,7 @@
 	int rc;
 
 	se_sess->sess_cmd_map = kzalloc(tag_num * tag_size,
-					GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+					GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL);
 	if (!se_sess->sess_cmd_map) {
 		se_sess->sess_cmd_map = vzalloc(tag_num * tag_size);
 		if (!se_sess->sess_cmd_map) {
+1 -1
drivers/vhost/net.c
@@ -897,7 +897,7 @@
 	struct sk_buff **queue;
 	int i;
 
-	n = kvmalloc(sizeof *n, GFP_KERNEL | __GFP_REPEAT);
+	n = kvmalloc(sizeof *n, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 	if (!n)
 		return -ENOMEM;
 	vqs = kmalloc(VHOST_NET_VQ_MAX * sizeof(*vqs), GFP_KERNEL);
+1 -1
drivers/vhost/scsi.c
@@ -1404,7 +1404,7 @@
 	struct vhost_virtqueue **vqs;
 	int r = -ENOMEM, i;
 
-	vs = kzalloc(sizeof(*vs), GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+	vs = kzalloc(sizeof(*vs), GFP_KERNEL | __GFP_NOWARN | __GFP_RETRY_MAYFAIL);
 	if (!vs) {
 		vs = vzalloc(sizeof(*vs));
 		if (!vs)
+1 -1
drivers/vhost/vsock.c
@@ -508,7 +508,7 @@
 	/* This struct is large and allocation could fail, fall back to vmalloc
 	 * if there is no other way.
 	 */
-	vsock = kvmalloc(sizeof(*vsock), GFP_KERNEL | __GFP_REPEAT);
+	vsock = kvmalloc(sizeof(*vsock), GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 	if (!vsock)
 		return -ENOMEM;
 
+43 -13
include/linux/gfp.h
@@ -25,7 +25,7 @@
 #define ___GFP_FS		0x80u
 #define ___GFP_COLD		0x100u
 #define ___GFP_NOWARN		0x200u
-#define ___GFP_REPEAT		0x400u
+#define ___GFP_RETRY_MAYFAIL	0x400u
 #define ___GFP_NOFAIL		0x800u
 #define ___GFP_NORETRY		0x1000u
 #define ___GFP_MEMALLOC		0x2000u
@@ -136,26 +136,56 @@
  *
  * __GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
  *
- * __GFP_REPEAT: Try hard to allocate the memory, but the allocation attempt
- * _might_ fail. This depends upon the particular VM implementation.
+ * The default allocator behavior depends on the request size. We have a concept
+ * of so called costly allocations (with order > PAGE_ALLOC_COSTLY_ORDER).
+ * !costly allocations are too essential to fail so they are implicitly
+ * non-failing by default (with some exceptions like OOM victims might fail so
+ * the caller still has to check for failures) while costly requests try to be
+ * not disruptive and back off even without invoking the OOM killer.
+ * The following three modifiers might be used to override some of these
+ * implicit rules
+ *
+ * __GFP_NORETRY: The VM implementation will try only very lightweight
+ *   memory direct reclaim to get some memory under memory pressure (thus
+ *   it can sleep). It will avoid disruptive actions like OOM killer. The
+ *   caller must handle the failure which is quite likely to happen under
+ *   heavy memory pressure. The flag is suitable when failure can easily be
+ *   handled at small cost, such as reduced throughput
+ *
+ * __GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
+ *   procedures that have previously failed if there is some indication
+ *   that progress has been made else where.  It can wait for other
+ *   tasks to attempt high level approaches to freeing memory such as
+ *   compaction (which removes fragmentation) and page-out.
+ *   There is still a definite limit to the number of retries, but it is
+ *   a larger limit than with __GFP_NORETRY.
+ *   Allocations with this flag may fail, but only when there is
+ *   genuinely little unused memory. While these allocations do not
+ *   directly trigger the OOM killer, their failure indicates that
+ *   the system is likely to need to use the OOM killer soon.  The
+ *   caller must handle failure, but can reasonably do so by failing
+ *   a higher-level request, or completing it only in a much less
+ *   efficient manner.
+ *   If the allocation does fail, and the caller is in a position to
+ *   free some non-essential memory, doing so could benefit the system
+ *   as a whole.
  *
  * __GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
- * cannot handle allocation failures. New users should be evaluated carefully
- * (and the flag should be used only when there is no reasonable failure
- * policy) but it is definitely preferable to use the flag rather than
- * opencode endless loop around allocator.
- *
- * __GFP_NORETRY: The VM implementation must not retry indefinitely and will
- * return NULL when direct reclaim and memory compaction have failed to allow
- * the allocation to succeed. The OOM killer is not called with the current
- * implementation.
+ *   cannot handle allocation failures. The allocation could block
+ *   indefinitely but will never return with failure. Testing for
+ *   failure is pointless.
+ *   New users should be evaluated carefully (and the flag should be
+ *   used only when there is no reasonable failure policy) but it is
+ *   definitely preferable to use the flag rather than opencode endless
+ *   loop around allocator.
+ *   Using this flag for costly allocations is _highly_ discouraged.
  */
 #define __GFP_IO	((__force gfp_t)___GFP_IO)
 #define __GFP_FS	((__force gfp_t)___GFP_FS)
 #define __GFP_DIRECT_RECLAIM	((__force gfp_t)___GFP_DIRECT_RECLAIM) /* Caller can reclaim */
 #define __GFP_KSWAPD_RECLAIM	((__force gfp_t)___GFP_KSWAPD_RECLAIM) /* kswapd can wake */
 #define __GFP_RECLAIM ((__force gfp_t)(___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM))
-#define __GFP_REPEAT ((__force gfp_t)___GFP_REPEAT)
+#define __GFP_RETRY_MAYFAIL ((__force gfp_t)___GFP_RETRY_MAYFAIL)
 #define __GFP_NOFAIL	((__force gfp_t)___GFP_NOFAIL)
 #define __GFP_NORETRY	((__force gfp_t)___GFP_NORETRY)
+2 -1
include/linux/slab.h
@@ -471,7 +471,8 @@
  *
  * %__GFP_NOWARN - If allocation fails, don't issue any warnings.
  *
- * %__GFP_REPEAT - If allocation fails initially, try once more before failing.
+ * %__GFP_RETRY_MAYFAIL - Try really hard to succeed the allocation but fail
+ *   eventually.
  *
  * There are other flags available as well, but these are not intended
  * for general use, and so are not documented here. For a full list of
+1 -1
include/trace/events/mmflags.h
@@ -34,7 +34,7 @@
 	{(unsigned long)__GFP_FS,		"__GFP_FS"},		\
 	{(unsigned long)__GFP_COLD,		"__GFP_COLD"},		\
 	{(unsigned long)__GFP_NOWARN,		"__GFP_NOWARN"},	\
-	{(unsigned long)__GFP_REPEAT,		"__GFP_REPEAT"},	\
+	{(unsigned long)__GFP_RETRY_MAYFAIL,	"__GFP_RETRY_MAYFAIL"},	\
 	{(unsigned long)__GFP_NOFAIL,		"__GFP_NOFAIL"},	\
 	{(unsigned long)__GFP_NORETRY,		"__GFP_NORETRY"},	\
 	{(unsigned long)__GFP_COMP,		"__GFP_COMP"},		\
+2 -2
mm/hugetlb.c
@@ -1384,7 +1384,7 @@
 
 	page = __alloc_pages_node(nid,
 		htlb_alloc_mask(h)|__GFP_COMP|__GFP_THISNODE|
-						__GFP_REPEAT|__GFP_NOWARN,
+						__GFP_RETRY_MAYFAIL|__GFP_NOWARN,
 		huge_page_order(h));
 	if (page) {
 		prep_new_huge_page(h, page, nid);
@@ -1525,7 +1525,7 @@
 {
 	int order = huge_page_order(h);
 
-	gfp_mask |= __GFP_COMP|__GFP_REPEAT|__GFP_NOWARN;
+	gfp_mask |= __GFP_COMP|__GFP_RETRY_MAYFAIL|__GFP_NOWARN;
 	if (nid == NUMA_NO_NODE)
 		nid = numa_mem_id();
 	return __alloc_pages_nodemask(gfp_mask, order, nid, nmask);
+1 -1
mm/internal.h
@@ -23,7 +23,7 @@
  * hints such as HIGHMEM usage.
  */
 #define GFP_RECLAIM_MASK (__GFP_RECLAIM|__GFP_HIGH|__GFP_IO|__GFP_FS|\
-			__GFP_NOWARN|__GFP_REPEAT|__GFP_NOFAIL|\
+			__GFP_NOWARN|__GFP_RETRY_MAYFAIL|__GFP_NOFAIL|\
 			__GFP_NORETRY|__GFP_MEMALLOC|__GFP_NOMEMALLOC|\
 			__GFP_ATOMIC)
 
+11 -3
mm/page_alloc.c
@@ -3284,6 +3284,14 @@
 	/* The OOM killer will not help higher order allocs */
 	if (order > PAGE_ALLOC_COSTLY_ORDER)
 		goto out;
+	/*
+	 * We have already exhausted all our reclaim opportunities without any
+	 * success so it is time to admit defeat. We will skip the OOM killer
+	 * because it is very likely that the caller has a more reasonable
+	 * fallback than shooting a random task.
+	 */
+	if (gfp_mask & __GFP_RETRY_MAYFAIL)
+		goto out;
 	/* The OOM killer does not needlessly kill tasks for lowmem */
 	if (ac->high_zoneidx < ZONE_NORMAL)
 		goto out;
@@ -3413,7 +3421,7 @@
 	}
 
 	/*
-	 * !costly requests are much more important than __GFP_REPEAT
+	 * !costly requests are much more important than __GFP_RETRY_MAYFAIL
 	 * costly ones because they are de facto nofail and invoke OOM
 	 * killer to move on while costly can fail and users are ready
 	 * to cope with that. 1/4 retries is rather arbitrary but we
@@ -3920,9 +3928,9 @@
 
 	/*
 	 * Do not retry costly high order allocations unless they are
-	 * __GFP_REPEAT
+	 * __GFP_RETRY_MAYFAIL
 	 */
-	if (costly_order && !(gfp_mask & __GFP_REPEAT))
+	if (costly_order && !(gfp_mask & __GFP_RETRY_MAYFAIL))
 		goto nopage;
 
 	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
+2 -2
mm/sparse-vmemmap.c
@@ -56,11 +56,11 @@
 
 	if (node_state(node, N_HIGH_MEMORY))
 		page = alloc_pages_node(
-			node, GFP_KERNEL | __GFP_ZERO | __GFP_REPEAT,
+			node, GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL,
 			get_order(size));
 	else
 		page = alloc_pages(
-			GFP_KERNEL | __GFP_ZERO | __GFP_REPEAT,
+			GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL,
 			get_order(size));
 	if (page)
 		return page_address(page);
+3 -3
mm/util.c
@@ -339,7 +339,7 @@
  * Uses kmalloc to get the memory but if the allocation fails then falls back
  * to the vmalloc allocator. Use kvfree for freeing the memory.
  *
- * Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL are not supported. __GFP_REPEAT
+ * Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL are not supported. __GFP_RETRY_MAYFAIL
  * is supported only for large (>32kB) allocations, and it should be used only if
  * kmalloc is preferable to the vmalloc fallback, due to visible performance drawbacks.
  *
@@ -367,11 +367,11 @@
 		kmalloc_flags |= __GFP_NOWARN;
 
 	/*
-	 * We have to override __GFP_REPEAT by __GFP_NORETRY for !costly
+	 * We have to override __GFP_RETRY_MAYFAIL by __GFP_NORETRY for !costly
 	 * requests because there is no other way to tell the allocator
 	 * that we want to fail rather than retry endlessly.
 	 */
-	if (!(kmalloc_flags & __GFP_REPEAT) ||
+	if (!(kmalloc_flags & __GFP_RETRY_MAYFAIL) ||
 			(size <= PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
 		kmalloc_flags |= __GFP_NORETRY;
 }
+1 -1
mm/vmalloc.c
@@ -1795,7 +1795,7 @@
  * allocator with @gfp_mask flags.  Map them into contiguous
  * kernel virtual space, using a pagetable protection of @prot.
  *
- * Reclaim modifiers in @gfp_mask - __GFP_NORETRY, __GFP_REPEAT
+ * Reclaim modifiers in @gfp_mask - __GFP_NORETRY, __GFP_RETRY_MAYFAIL
  * and __GFP_NOFAIL are not supported
  *
  * Any use of gfp flags outside of GFP_KERNEL should be consulted
+4 -4
mm/vmscan.c
@@ -2506,18 +2506,18 @@
 		return false;
 
 	/* Consider stopping depending on scan and reclaim activity */
-	if (sc->gfp_mask & __GFP_REPEAT) {
+	if (sc->gfp_mask & __GFP_RETRY_MAYFAIL) {
 		/*
-		 * For __GFP_REPEAT allocations, stop reclaiming if the
+		 * For __GFP_RETRY_MAYFAIL allocations, stop reclaiming if the
 		 * full LRU list has been scanned and we are still failing
 		 * to reclaim pages. This full LRU scan is potentially
-		 * expensive but a __GFP_REPEAT caller really wants to succeed
+		 * expensive but a __GFP_RETRY_MAYFAIL caller really wants to succeed
 		 */
 		if (!nr_reclaimed && !nr_scanned)
 			return false;
 	} else {
 		/*
-		 * For non-__GFP_REPEAT allocations which can presumably
+		 * For non-__GFP_RETRY_MAYFAIL allocations which can presumably
 		 * fail without consequence, stop if we failed to reclaim
 		 * any pages from the last SWAP_CLUSTER_MAX number of
 		 * pages that were scanned. This will return to the
+3 -3
net/core/dev.c
@@ -7384,7 +7384,7 @@
 
 	BUG_ON(count < 1);
 
-	rx = kvzalloc(sz, GFP_KERNEL | __GFP_REPEAT);
+	rx = kvzalloc(sz, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 	if (!rx)
 		return -ENOMEM;
 
@@ -7424,7 +7424,7 @@
 	if (count < 1 || count > 0xffff)
 		return -EINVAL;
 
-	tx = kvzalloc(sz, GFP_KERNEL | __GFP_REPEAT);
+	tx = kvzalloc(sz, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 	if (!tx)
 		return -ENOMEM;
 
@@ -7965,7 +7965,7 @@
 	/* ensure 32-byte alignment of whole construct */
 	alloc_size += NETDEV_ALIGN - 1;
 
-	p = kvzalloc(alloc_size, GFP_KERNEL | __GFP_REPEAT);
+	p = kvzalloc(alloc_size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 	if (!p)
 		return NULL;
 
+1 -1
net/core/skbuff.c
@@ -4747,7 +4747,7 @@
 
 	gfp_head = gfp_mask;
 	if (gfp_head & __GFP_DIRECT_RECLAIM)
-		gfp_head |= __GFP_REPEAT;
+		gfp_head |= __GFP_RETRY_MAYFAIL;
 
 	*errcode = -ENOBUFS;
 	skb = alloc_skb(header_len, gfp_head);
+1 -1
net/sched/sch_fq.c
@@ -648,7 +648,7 @@
 		return 0;
 
 	/* If XPS was setup, we can allocate memory on right NUMA node */
-	array = kvmalloc_node(sizeof(struct rb_root) << log, GFP_KERNEL | __GFP_REPEAT,
+	array = kvmalloc_node(sizeof(struct rb_root) << log, GFP_KERNEL | __GFP_RETRY_MAYFAIL,
 			      netdev_queue_numa_node_read(sch->dev_queue));
 	if (!array)
 		return -ENOMEM;
+1 -1
tools/perf/builtin-kmem.c
@@ -643,7 +643,7 @@
 	{ "__GFP_FS",			"F" },
 	{ "__GFP_COLD",			"CO" },
 	{ "__GFP_NOWARN",		"NWR" },
-	{ "__GFP_REPEAT",		"R" },
+	{ "__GFP_RETRY_MAYFAIL",	"R" },
 	{ "__GFP_NOFAIL",		"NF" },
 	{ "__GFP_NORETRY",		"NR" },
 	{ "__GFP_COMP",			"C" },