
vduse: avoid using __GFP_NOFAIL

Patch series "mm/vdpa: correct misuse of non-direct-reclaim __GFP_NOFAIL
and improve related doc and warn", v4.

__GFP_NOFAIL carries the semantics of never failing, so its callers do not
check the return value:

%__GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
cannot handle allocation failures. The allocation could block
indefinitely but will never return with failure. Testing for
failure is pointless.

However, __GFP_NOFAIL can still fail if the requested size exceeds the
allocator's limits, or if it is combined with GFP_ATOMIC/GFP_NOWAIT in a
non-sleepable context.  This patchset handles the illegal use of
__GFP_NOFAIL together with GFP_ATOMIC, which lacks __GFP_DIRECT_RECLAIM
(without direct reclaim, nothing can be done to reclaim memory to satisfy
the nofail requirement), and improves the related documentation and
warnings.

The proper size limits for __GFP_NOFAIL will be handled separately after
more discussions.


This patch (of 3):

mm does not support non-blockable __GFP_NOFAIL allocation, because
persisting in providing __GFP_NOFAIL service to non-blocking users that
cannot perform direct memory reclaim could only result in an endless busy
loop.

Therefore, in such cases, the current mm-core may directly return a NULL
pointer:

static inline struct page *
__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
		       struct alloc_context *ac)
{
	...
	if (gfp_mask & __GFP_NOFAIL) {
		/*
		 * All existing users of the __GFP_NOFAIL are blockable, so warn
		 * of any new users that actually require GFP_NOWAIT
		 */
		if (WARN_ON_ONCE_GFP(!can_direct_reclaim, gfp_mask))
			goto fail;
		...
	}
	...
fail:
	warn_alloc(gfp_mask, ac->nodemask,
		   "page allocation failure: order:%u", order);
got_pg:
	return page;
}

Unfortunately, vduse does that nofail allocation under a non-sleepable
lock.  A possible way to fix it is to move the page allocation out of the
lock into the caller, but having to allocate a huge number of pages and an
auxiliary page array seems problematic as well, per Tetsuo: "You should
implement proper error handling instead of using __GFP_NOFAIL if count can
become large."

So I chose another way: do not release the kernel bounce pages when the
user tries to register userspace bounce pages.  Then we avoid allocating
in paths where failure is not expected (e.g. in the release path).  The
cost is higher memory usage, since the kernel bounce pages are kept
around, but further optimizations can be done on top.

[v-songbaohua@oppo.com: Refine the changelog]
Link: https://lkml.kernel.org/r/20240830202823.21478-1-21cnbao@gmail.com
Link: https://lkml.kernel.org/r/20240830202823.21478-2-21cnbao@gmail.com
Fixes: 6c77ed22880d ("vduse: Support using userspace pages as bounce buffer")
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Reviewed-by: Xie Yongji <xieyongji@bytedance.com>
Tested-by: Xie Yongji <xieyongji@bytedance.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Hailong.Liu <hailong.liu@oppo.com>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yafang Shao <laoar.shao@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: "Eugenio Pérez" <eperezma@redhat.com>
Cc: Kees Cook <kees@kernel.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Jason Wang and committed by Andrew Morton.
drivers/vdpa/vdpa_user/iova_domain.c:

 	enum dma_data_direction dir)
 {
 	struct vduse_bounce_map *map;
+	struct page *page;
 	unsigned int offset;
 	void *addr;
 	size_t sz;
...
 		    map->orig_phys == INVALID_PHYS_ADDR))
 			return;
 
-		addr = kmap_local_page(map->bounce_page);
+		page = domain->user_bounce_pages ?
+		       map->user_bounce_page : map->bounce_page;
+
+		addr = kmap_local_page(page);
 		do_bounce(map->orig_phys + offset, addr + offset, sz, dir);
 		kunmap_local(addr);
 		size -= sz;
...
 			memcpy_to_page(pages[i], 0,
 				       page_address(map->bounce_page),
 				       PAGE_SIZE);
-			__free_page(map->bounce_page);
 		}
-		map->bounce_page = pages[i];
+		map->user_bounce_page = pages[i];
 		get_page(pages[i]);
 	}
 	domain->user_bounce_pages = true;
...
 		struct page *page = NULL;
 
 		map = &domain->bounce_maps[i];
-		if (WARN_ON(!map->bounce_page))
+		if (WARN_ON(!map->user_bounce_page))
 			continue;
 
 		/* Copy user page to kernel page if it's in use */
 		if (map->orig_phys != INVALID_PHYS_ADDR) {
-			page = alloc_page(GFP_ATOMIC | __GFP_NOFAIL);
+			page = map->bounce_page;
 			memcpy_from_page(page_address(page),
-					 map->bounce_page, 0, PAGE_SIZE);
+					 map->user_bounce_page, 0, PAGE_SIZE);
 		}
-		put_page(map->bounce_page);
-		map->bounce_page = page;
+		put_page(map->user_bounce_page);
+		map->user_bounce_page = NULL;
 	}
 	domain->user_bounce_pages = false;
 out:

drivers/vdpa/vdpa_user/iova_domain.h:

 struct vduse_bounce_map {
 	struct page *bounce_page;
+	struct page *user_bounce_page;
 	u64 orig_phys;
 };