Christoph Hellwig <[email protected]> writes:

> Switch to using a preallocated flush_rq for blk-mq similar to what's done
> with the old request path.  This allows us to set up the request properly
> with a tag from the actually allowed range and ->rq_disk as needed by
> some drivers.  To make life easier we also switch to dynamic allocation
> of ->flush_rq for the old path.
>
> This effectively reverts most of
>
>     "blk-mq: fix for flush deadlock"
>
> and
>
>     "blk-mq: Don't reserve a tag for flush request"
>
> Signed-off-by: Christoph Hellwig <[email protected]>

[snip]

> -static void blk_mq_flush_data_insert(struct request *rq)
> +static bool blk_flush_queue_rq(struct request *rq)
>  {
> -     INIT_WORK(&rq->mq_flush_data, mq_flush_data_run);
> -     kblockd_schedule_work(rq->q, &rq->mq_flush_data);
> +     if (rq->q->mq_ops) {
> +             INIT_WORK(&rq->mq_flush_work, mq_flush_run);
> +             kblockd_schedule_work(rq->q, &rq->mq_flush_work);
> +             return false;
> +     } else {
> +             list_add_tail(&rq->queuelist, &rq->q->queue_head);
> +             return true;
> +     }
>  }
>  
>  /**
> @@ -187,12 +193,7 @@ static bool blk_flush_complete_seq(struct request *rq, unsigned int seq,
>  
>       case REQ_FSEQ_DATA:
>               list_move_tail(&rq->flush.list, &q->flush_data_in_flight);
> -             if (q->mq_ops)
> -                     blk_mq_flush_data_insert(rq);
> -             else {
> -                     list_add(&rq->queuelist, &q->queue_head);
> -                     queued = true;
> -             }
> +             queued = blk_flush_queue_rq(rq);
>               break;

Hi, Christoph,

Did you mean to switch from list_add to list_add_tail?  That's a
behavioral change (head vs. tail insertion into queue_head) that
warrants a mention in the changelog.

Cheers,
Jeff