SUNRPC: Cleanup/fix initial rq_pages allocation

While investigating reports of memory-constrained NUMA machines failing
NFSv3 and NFSv4.0 mounts, we found that svc_init_buffer() was not
retrying allocations from the bulk page allocator. Under memory pressure
the bulk allocator typically returns only a single page, and the mount
attempt then fails with -ENOMEM. A retry would have allowed the mount
to succeed.

Additionally, the bulk allocation in svc_init_buffer() appears to be
redundant: svc_alloc_arg() performs the required allocation and already
retries it correctly.

The allocation call in svc_alloc_arg() drops the preferred node
argument, but we should still allocate on the preferred node because the
call happens within svc thread context, which selects the node with
memory closest to the current thread's execution.

This patch cleans out the bulk allocation in svc_init_buffer() to allow
svc_alloc_arg() to handle the allocation/retry logic for rq_pages.

Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Fixes: ed603bcf4fea ("sunrpc: Replace the rq_pages array with dynamically-allocated memory")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>

authored by Benjamin Coddington and committed by Chuck Lever 8b3ac9fa 32ce6b3a

+1 -5
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -638,8 +638,6 @@
 static bool
 svc_init_buffer(struct svc_rqst *rqstp, const struct svc_serv *serv, int node)
 {
-	unsigned long ret;
-
 	rqstp->rq_maxpages = svc_serv_maxpages(serv);
 
 	/* rq_pages' last entry is NULL for historical reasons. */
@@ -647,9 +645,7 @@
 	if (!rqstp->rq_pages)
 		return false;
 
-	ret = alloc_pages_bulk_node(GFP_KERNEL, node, rqstp->rq_maxpages,
-				    rqstp->rq_pages);
-	return ret == rqstp->rq_maxpages;
+	return true;
 }
 
 /*