Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-CPU wq), while queue_delayed_work()
uses WORK_CPU_UNBOUND (used when a CPU is not specified). The same
applies to schedule_work(), which uses system_wq, and queue_work(),
which again makes use of WORK_CPU_UNBOUND.

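For context, the schedule_*() helpers are thin wrappers around the
queue_*() ones; roughly (simplified from include/linux/workqueue.h, so
treat this as a sketch rather than the exact definitions):

  static inline bool queue_delayed_work(struct workqueue_struct *wq,
                                        struct delayed_work *dwork,
                                        unsigned long delay)
  {
          /* caller picks the wq; the CPU is left as WORK_CPU_UNBOUND */
          return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
  }

  static inline bool schedule_delayed_work(struct delayed_work *dwork,
                                           unsigned long delay)
  {
          /* no wq argument; the per-cpu system_wq is implied */
          return queue_delayed_work(system_wq, dwork, delay);
  }

So queue_delayed_work(system_wq, ...) and schedule_delayed_work(...) end
up on the same per-CPU pool; the difference is only whether the wq is
named explicitly.
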
This lack of consistency cannot be addressed without refactoring the API.

This patch continues the effort to refactor the workqueue APIs, which
began with the changes that introduced new workqueues and a new
alloc_workqueue flag:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

Replace system_wq with system_percpu_wq, keeping the same old behavior.
The old wq (system_wq) will be kept for a few release cycles.

Suggested-by: Tejun Heo <[email protected]>
Signed-off-by: Marco Crivellari <[email protected]>
---
 drivers/nvdimm/security.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
index 4adce8c38870..e41f6951ca0f 100644
--- a/drivers/nvdimm/security.c
+++ b/drivers/nvdimm/security.c
@@ -424,7 +424,7 @@ static int security_overwrite(struct nvdimm *nvdimm, unsigned int keyid)
                 * query.
                 */
                get_device(dev);
-               queue_delayed_work(system_wq, &nvdimm->dwork, 0);
+               queue_delayed_work(system_percpu_wq, &nvdimm->dwork, 0);
        }
 
        return rc;
@@ -457,7 +457,7 @@ static void __nvdimm_security_overwrite_query(struct nvdimm *nvdimm)
 
                /* setup delayed work again */
                tmo += 10;
-               queue_delayed_work(system_wq, &nvdimm->dwork, tmo * HZ);
+               queue_delayed_work(system_percpu_wq, &nvdimm->dwork, tmo * HZ);
                nvdimm->sec.overwrite_tmo = min(15U * 60U, tmo);
                return;
        }
-- 
2.51.1