Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

RDMA/core: WQ_PERCPU added to alloc_workqueue users

Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This change adds a new WQ_PERCPU flag so that callers can explicitly request
a per-cpu workqueue from alloc_workqueue() when WQ_UNBOUND has not been
specified.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Link: https://patch.msgid.link/20251101163121.78400-3-marco.crivellari@suse.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

authored by Marco Crivellari, committed by Leon Romanovsky
e60c5583 f673fb34

+3 -3

drivers/infiniband/core/cm.c (+1 -1)

@@ -4517,7 +4517,7 @@
 	get_random_bytes(&cm.random_id_operand, sizeof cm.random_id_operand);
 	INIT_LIST_HEAD(&cm.timewait_list);
 
-	cm.wq = alloc_workqueue("ib_cm", 0, 1);
+	cm.wq = alloc_workqueue("ib_cm", WQ_PERCPU, 1);
 	if (!cm.wq) {
 		ret = -ENOMEM;
 		goto error2;
drivers/infiniband/core/device.c (+2 -2)

@@ -3021,7 +3021,7 @@
 {
 	int ret = -ENOMEM;
 
-	ib_wq = alloc_workqueue("infiniband", 0, 0);
+	ib_wq = alloc_workqueue("infiniband", WQ_PERCPU, 0);
 	if (!ib_wq)
 		return -ENOMEM;
@@ -3031,7 +3031,7 @@
 		goto err;
 
 	ib_comp_wq = alloc_workqueue("ib-comp-wq",
-			WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
+			WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS | WQ_PERCPU, 0);
 	if (!ib_comp_wq)
 		goto err_unbound;