On 2022/06/10 23:35, Tetsuo Handa wrote:
> Use local wq in order to avoid flush_scheduled_work() usage.
>
> Signed-off-by: Tetsuo Handa <[email protected]>
> ---
> Changes in v2:
> Replace flush_scheduled_work() with flush_workqueue().
>
> Please see commit c4f135d643823a86 ("workqueue: Wrap flush_workqueue()
> using a macro") for background.
>
> This is a blind conversion, and is only compile tested.
>
> .../drm/bridge/cadence/cdns-mhdp8546-core.c | 32 ++++++++++++++++---
> .../drm/bridge/cadence/cdns-mhdp8546-core.h | 2 ++
> .../drm/bridge/cadence/cdns-mhdp8546-hdcp.c | 16 +++++-----
> 3 files changed, 37 insertions(+), 13 deletions(-)
>
I'm thinking about a flush_work() version, and I got confused.

Since the cdns-mhdp8546 driver uses 4 works

  mhdp->modeset_retry_work
  mhdp->hpd_work
  mhdp->hdcp.check_work
  mhdp->hdcp.prop_work

I assume that flush_scheduled_work() in cdns_mhdp_remove() needs to wait
for only these 4 works. And since mhdp->modeset_retry_work is already
covered by cancel_work_sync(), flush_scheduled_work() would need to wait
for only the remaining 3 works. Therefore, I guess that the flush_work()
version would look something like
diff --git a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
index 67f0f444b4e8..04b21752ab3f 100644
--- a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+++ b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
@@ -2603,7 +2603,11 @@ static int cdns_mhdp_remove(struct platform_device *pdev)
 	pm_runtime_disable(&pdev->dev);
 
 	cancel_work_sync(&mhdp->modeset_retry_work);
-	flush_scheduled_work();
+	flush_work(&mhdp->hpd_work);
+	if (mhdp->hdcp_supported) {
+		cancel_delayed_work_sync(&mhdp->hdcp.check_work);
+		flush_work(&mhdp->hdcp.prop_work);
+	}
 
 	clk_disable_unprepare(mhdp->clk);
but I came to wonder whether mhdp->hdcp.check_work should be flushed or
cancelled.
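
For context, the four works are set up roughly like this, if I'm reading
cdns-mhdp8546-core.c and cdns-mhdp8546-hdcp.c correctly (paraphrased from
memory, so please double-check the exact call sites):

	/* regular works */
	INIT_WORK(&mhdp->hpd_work, cdns_mhdp_hpd_work);
	INIT_WORK(&mhdp->modeset_retry_work, cdns_mhdp_modeset_retry_fn);

	/* HDCP: check_work is a delayed work, prop_work is a regular work */
	INIT_DELAYED_WORK(&mhdp->hdcp.check_work, cdns_mhdp_hdcp_check_work);
	INIT_WORK(&mhdp->hdcp.prop_work, cdns_mhdp_hdcp_prop_work);
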
While flush_scheduled_work() waits for completion of works which were
already queued to system_wq, mhdp->hdcp.check_work is a delayed work
(mhdp->modeset_retry_work is already covered by cancel_work_sync() above).
That is, the work won't be queued to system_wq until its timeout expires.
The current code will wait for mhdp->hdcp.check_work only if the timeout
has already expired. If the timeout has not expired yet,
flush_scheduled_work() will fail to wait for mhdp->hdcp.check_work, and
cdns_mhdp_hdcp_check_work(), which runs when mhdp->hdcp.check_work finally
fires, can schedule hdcp->prop_work and re-arm hdcp->check_work, which is
too late for flush_scheduled_work() to wait for completion of
cdns_mhdp_hdcp_prop_work().
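
For reference, the self-rearming pattern looks roughly like this (again
paraphrased from cdns-mhdp8546-hdcp.c, details from memory):

	static void cdns_mhdp_hdcp_check_work(struct work_struct *work)
	{
		struct delayed_work *d_work = to_delayed_work(work);
		struct cdns_mhdp_hdcp *hdcp =
			container_of(d_work, struct cdns_mhdp_hdcp, check_work);
		struct cdns_mhdp_device *mhdp =
			container_of(hdcp, struct cdns_mhdp_device, hdcp);

		/* the link check path may schedule hdcp->prop_work ... */
		if (!cdns_mhdp_hdcp_check_link(mhdp))
			/* ... and the work re-arms itself here */
			schedule_delayed_work(&hdcp->check_work,
					      DRM_HDCP_CHECK_PERIOD_MS);
	}
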
Thus, how do we want to handle this race window?
  (a) flush_delayed_work(&mhdp->hdcp.check_work) followed by
      flush_work(&mhdp->hdcp.prop_work) (i.e. flush as much as possible)?

  (b) cancel_delayed_work_sync(&mhdp->hdcp.check_work) followed by
      cancel_work_sync(&mhdp->hdcp.prop_work) (i.e. cancel as much as
      possible)?

  (c) do nothing (i.e. no need to flush or cancel mhdp->hdcp.check_work
      and mhdp->hdcp.prop_work)?
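
Spelled out (untested, just to make (a) and (b) concrete), the hdcp part
of cdns_mhdp_remove() would become either

	/* (a) flush as much as possible */
	flush_delayed_work(&mhdp->hdcp.check_work);
	flush_work(&mhdp->hdcp.prop_work);

or

	/* (b) cancel as much as possible */
	cancel_delayed_work_sync(&mhdp->hdcp.check_work);
	cancel_work_sync(&mhdp->hdcp.prop_work);

One thing worth noting: since check_work re-queues itself, (a) cannot
fully close the window either, for flush_delayed_work() only waits for the
currently queued instance, whereas cancel_work_sync() and
cancel_delayed_work_sync() are documented to work even if the work
re-queues itself.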