Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

drivers/block: WQ_PERCPU added to alloc_workqueue users

Currently, if a user enqueues a work item using schedule_delayed_work(),
the workqueue used is "system_wq" (a per-cpu wq), while
queue_delayed_work() uses WORK_CPU_UNBOUND (used when no CPU is
specified). The same applies to schedule_work(), which uses system_wq,
and queue_work(), which again makes use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This patch adds a new WQ_PERCPU flag to explicitly request the use of
the per-CPU behavior. Both flags coexist for one release cycle to allow
callers to transition their calls.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.

All existing users have been updated accordingly.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Authored by Marco Crivellari, committed by Jens Axboe
d7b1cdc9 456cefcb

+5 -5
+1 -1
drivers/block/aoe/aoemain.c
@@ -44,7 +44,7 @@
 {
 	int ret;
 
-	aoe_wq = alloc_workqueue("aoe_wq", 0, 0);
+	aoe_wq = alloc_workqueue("aoe_wq", WQ_PERCPU, 0);
 	if (!aoe_wq)
 		return -ENOMEM;
 
+1 -1
drivers/block/rbd.c
@@ -7389,7 +7389,7 @@
 	 * The number of active work items is limited by the number of
 	 * rbd devices * queue depth, so leave @max_active at default.
 	 */
-	rbd_wq = alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM, 0);
+	rbd_wq = alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!rbd_wq) {
 		rc = -ENOMEM;
 		goto err_out_slab;
+1 -1
drivers/block/rnbd/rnbd-clt.c
@@ -1809,7 +1809,7 @@
 		unregister_blkdev(rnbd_client_major, "rnbd");
 		return err;
 	}
-	rnbd_clt_wq = alloc_workqueue("rnbd_clt_wq", 0, 0);
+	rnbd_clt_wq = alloc_workqueue("rnbd_clt_wq", WQ_PERCPU, 0);
 	if (!rnbd_clt_wq) {
 		pr_err("Failed to load module, alloc_workqueue failed.\n");
 		rnbd_clt_destroy_sysfs_files();
+1 -1
drivers/block/sunvdc.c
@@ -1216,7 +1216,7 @@
 {
 	int err;
 
-	sunvdc_wq = alloc_workqueue("sunvdc", 0, 0);
+	sunvdc_wq = alloc_workqueue("sunvdc", WQ_PERCPU, 0);
 	if (!sunvdc_wq)
 		return -ENOMEM;
 
+1 -1
drivers/block/virtio_blk.c
@@ -1682,7 +1682,7 @@
 {
 	int error;
 
-	virtblk_wq = alloc_workqueue("virtio-blk", 0, 0);
+	virtblk_wq = alloc_workqueue("virtio-blk", WQ_PERCPU, 0);
 	if (!virtblk_wq)
 		return -ENOMEM;
 