Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq

Currently, if a user enqueues a work item using schedule_delayed_work(), the
workqueue used is system_wq (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

system_unbound_wq should be the default workqueue so as not to enforce
locality constraints for random work whenever it's not required.

Add system_dfl_wq to encourage its use whenever unbound work is appropriate.

The old system_unbound_wq will be kept for a few release cycles.
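For context, a minimal sketch of the caller-side migration this patch performs (kernel-style C; the handler and work-item names here are hypothetical, and system_dfl_wq is assumed to be the default-workqueue alias introduced by this series):

```c
#include <linux/workqueue.h>

/* Hypothetical handler: may sleep, so it must run from workqueue context */
static void example_handler(struct work_struct *work)
{
	/* no CPU-locality requirement, so an unbound workqueue is fine */
}

static DECLARE_WORK(example_work, example_handler);

static void example_enqueue(void)
{
	/* Before this patch: explicitly target the unbound workqueue */
	queue_work(system_unbound_wq, &example_work);

	/* After this patch: the new default (still unbound) workqueue */
	queue_work(system_dfl_wq, &example_work);
}
```

The behavior is unchanged; only the workqueue symbol is swapped, since system_dfl_wq is intended as the preferred name going forward.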

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Link: https://patch.msgid.link/20251101163121.78400-2-marco.crivellari@suse.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

Authored by Marco Crivellari; committed by Leon Romanovsky
f673fb34 da58d422

+3 -3 total
+1 -1	drivers/infiniband/core/ucma.c
@@ -366,7 +366,7 @@
 	if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
 		xa_lock(&ctx_table);
 		if (xa_load(&ctx_table, ctx->id) == ctx)
-			queue_work(system_unbound_wq, &ctx->close_work);
+			queue_work(system_dfl_wq, &ctx->close_work);
 		xa_unlock(&ctx_table);
 	}
 	return 0;
+2 -2
drivers/infiniband/hw/mlx5/odp.c
@@ -265,7 +265,7 @@
 
 	/* Freeing a MR is a sleeping operation, so bounce to a work queue */
 	INIT_WORK(&mr->odp_destroy.work, free_implicit_child_mr_work);
-	queue_work(system_unbound_wq, &mr->odp_destroy.work);
+	queue_work(system_dfl_wq, &mr->odp_destroy.work);
 }
 
 static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
@@ -2093,6 +2093,6 @@
 		destroy_prefetch_work(work);
 		return rc;
 	}
-	queue_work(system_unbound_wq, &work->work);
+	queue_work(system_dfl_wq, &work->work);
 	return 0;
 }
··· 265 265 266 266 /* Freeing a MR is a sleeping operation, so bounce to a work queue */ 267 267 INIT_WORK(&mr->odp_destroy.work, free_implicit_child_mr_work); 268 - queue_work(system_unbound_wq, &mr->odp_destroy.work); 268 + queue_work(system_dfl_wq, &mr->odp_destroy.work); 269 269 } 270 270 271 271 static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, ··· 2093 2093 destroy_prefetch_work(work); 2094 2094 return rc; 2095 2095 } 2096 - queue_work(system_unbound_wq, &work->work); 2096 + queue_work(system_dfl_wq, &work->work); 2097 2097 return 0; 2098 2098 }