Repository: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

scsi: message: fusion: Add WQ_PERCPU to alloc_workqueue() users

Currently, if a user enqueues a work item using schedule_delayed_work(),
the workqueue used is "system_wq" (a per-CPU wq), while
queue_delayed_work() uses WORK_CPU_UNBOUND (used when no CPU is
specified). The same applies to schedule_work(), which uses system_wq,
and queue_work(), which again uses WORK_CPU_UNBOUND. This lack of
consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This continues the effort to refactor workqueue APIs, which began with
the introduction of new workqueues and a new alloc_workqueue flag in:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

This change uses the new WQ_PERCPU flag to explicitly request per-CPU
behavior from alloc_workqueue() where WQ_UNBOUND has not been specified.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Link: https://patch.msgid.link/20251107141458.225119-1-marco.crivellari@suse.com
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

Authored by Marco Crivellari, committed by Martin K. Petersen (f0dc4417 84150ef0)

+5 -2
drivers/message/fusion/mptbase.c
@@ -1857,7 +1857,8 @@
 	INIT_DELAYED_WORK(&ioc->fault_reset_work, mpt_fault_reset_work);
 
 	ioc->reset_work_q =
-		alloc_workqueue("mpt_poll_%d", WQ_MEM_RECLAIM, 0, ioc->id);
+		alloc_workqueue("mpt_poll_%d", WQ_MEM_RECLAIM | WQ_PERCPU, 0,
+				ioc->id);
 	if (!ioc->reset_work_q) {
 		printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n",
 			ioc->name);
@@ -1985,7 +1986,9 @@
 
 	INIT_LIST_HEAD(&ioc->fw_event_list);
 	spin_lock_init(&ioc->fw_event_lock);
-	ioc->fw_event_q = alloc_workqueue("mpt/%d", WQ_MEM_RECLAIM, 0, ioc->id);
+	ioc->fw_event_q = alloc_workqueue("mpt/%d",
+					  WQ_MEM_RECLAIM | WQ_PERCPU, 0,
+					  ioc->id);
 	if (!ioc->fw_event_q) {
 		printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n",
 			ioc->name);