On Thu, May 29, 2025 at 03:02:16PM -0500, Eric Blake wrote:
> On Wed, May 28, 2025 at 03:09:14PM -0400, Stefan Hajnoczi wrote:
> > Introduce the aio_add_sqe() API for submitting io_uring requests in the
> > current AioContext. This allows other components in QEMU, like the block
> > layer, to take advantage of io_uring features without creating their own
> > io_uring context.
> >
> > This API supports nested event loops just like file descriptor
> > monitoring and BHs do. This comes at a complexity cost: a BH is required
> > to dispatch CQE callbacks and they are placed on a list so that a nested
> > event loop can invoke its parent's pending CQE callbacks. If you're
> > wondering why CqeHandler exists instead of just a callback function
> > pointer, this is why.
> >
> > Signed-off-by: Stefan Hajnoczi <[email protected]>
> > ---
>
> Large patch. I found a couple of nits, but the overall design looks
> sound.
>
> Reviewed-by: Eric Blake <[email protected]>
>
> >  include/block/aio.h   |  82 ++++++++++++++++++++++++
> >  util/aio-posix.h      |   1 +
> >  util/aio-posix.c      |   9 +++
> >  util/fdmon-io_uring.c | 145 +++++++++++++++++++++++++++++++-----------
> >  4 files changed, 200 insertions(+), 37 deletions(-)
> >
> > diff --git a/include/block/aio.h b/include/block/aio.h
> > index d919d7c8f4..95beef28c3 100644
> > --- a/include/block/aio.h
> > +++ b/include/block/aio.h
> > @@ -61,6 +61,27 @@ typedef struct LuringState LuringState;
> >  /* Is polling disabled? */
> >  bool aio_poll_disabled(AioContext *ctx);
> >
> > +#ifdef CONFIG_LINUX_IO_URING
> > +/*
> > + * Each io_uring request must have a unique CqeHandler that processes the cqe.
> > + * The lifetime of a CqeHandler must be at least from aio_add_sqe() until
> > + * ->cb() invocation.
> > + */
> > +typedef struct CqeHandler CqeHandler;
> > +struct CqeHandler {
> > +    /* Called by the AioContext when the request has completed */
> > +    void (*cb)(CqeHandler *handler);
>
> I see an opaque callback pointer in prep_sqe below, but not one here.
> Is that because callers can write their own struct that includes a
> CqeHandler as its first member, if more state is needed?
Yes.
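The intent is that callers embed CqeHandler inside their own request
struct and recover it with container_of() in ->cb(). A minimal sketch
(illustrative only, not code from this patch; MyRequest and my_cb are
made-up names):

    typedef struct {
        CqeHandler handler;   /* must stay alive until ->cb() runs */
        int fd;               /* example per-request state... */
        void *buf;
        unsigned len;
        uint64_t offset;
    } MyRequest;

    static void my_cb(CqeHandler *handler)
    {
        MyRequest *req = container_of(handler, MyRequest, handler);

        /* handler->cqe has already been filled in at this point */
        if (handler->cqe.res < 0) {
            /* cqe.res holds a negative errno on failure */
        }
    }

Since the embedding struct carries whatever state the callback needs,
CqeHandler itself doesn't need an opaque pointer.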
>
> > +
> > +    /* Used internally, do not access this */
> > +    QSIMPLEQ_ENTRY(CqeHandler) next;
> > +
> > +    /* This field is filled in before ->cb() is called */
> > +    struct io_uring_cqe cqe;
> > +};
> > +
> > +typedef QSIMPLEQ_HEAD(, CqeHandler) CqeHandlerSimpleQ;
> > +#endif /* CONFIG_LINUX_IO_URING */
> > +
> >  /* Callbacks for file descriptor monitoring implementations */
> >  typedef struct {
> >      /*
> > @@ -138,6 +159,27 @@ typedef struct {
> >       * Called with list_lock incremented.
> >       */
> >      void (*gsource_dispatch)(AioContext *ctx, AioHandlerList *ready_list);
> > +
> > +#ifdef CONFIG_LINUX_IO_URING
> > +    /**
> > +     * aio_add_sqe: Add an io_uring sqe for submission.
> > +     * @prep_sqe: invoked with an sqe that should be prepared for submission
> > +     * @opaque: user-defined argument to @prep_sqe()
> > +     * @cqe_handler: the unique cqe handler associated with this request
> > +     *
> > +     * The caller's @prep_sqe() function is invoked to fill in the details of
> > +     * the sqe. Do not call io_uring_sqe_set_data() on this sqe.
> > +     *
> > +     * The kernel may see the sqe as soon as @pre_sqe() returns or it may take
>
> prep_sqe
Oops, will fix.
>
> > +     * until the next event loop iteration.
> > +     *
> > +     * This function is called from the current AioContext and is not
> > +     * thread-safe.
> > +     */
> > +    void (*add_sqe)(AioContext *ctx,
> > +                    void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
> > +                    void *opaque, CqeHandler *cqe_handler);
> > +#endif /* CONFIG_LINUX_IO_URING */
> > } FDMonOps;
> >
> > /*
> > @@ -255,6 +297,10 @@ struct AioContext {
> >      struct io_uring fdmon_io_uring;
> >      AioHandlerSList submit_list;
> >      gpointer io_uring_fd_tag;
> > +
> > +    /* Pending callback state for cqe handlers */
> > +    CqeHandlerSimpleQ cqe_handler_ready_list;
> > +    QEMUBH *cqe_handler_bh;
> >  #endif
>
> While here, is it worth adding a comment to state which matching #if
> it ends (similar to what you did above in FDMonOps add_sqe)?
Sounds good.
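Will make it:

    #endif /* CONFIG_LINUX_IO_URING */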
>
> >
> >  /* TimerLists for calling timers - one per clock type. Has its own
> > @@ -761,4 +807,40 @@ void aio_context_set_aio_params(AioContext *ctx, int64_t max_batch);
> >   */
> >  void aio_context_set_thread_pool_params(AioContext *ctx, int64_t min,
> >                                          int64_t max, Error **errp);
> > +
> > +#ifdef CONFIG_LINUX_IO_URING
> > +/**
> > + * aio_has_io_uring: Return whether io_uring is available.
> > + *
> > + * io_uring is either available in all AioContexts or in none, so this only
> > + * needs to be called once from within any thread's AioContext.
> > + */
> > +static inline bool aio_has_io_uring(void)
> > +{
> > +    AioContext *ctx = qemu_get_current_aio_context();
> > +    return ctx->fdmon_ops->add_sqe;
> > +}
> > +
> > +/**
> > + * aio_add_sqe: Add an io_uring sqe for submission.
> > + * @prep_sqe: invoked with an sqe that should be prepared for submission
> > + * @opaque: user-defined argument to @prep_sqe()
> > + * @cqe_handler: the unique cqe handler associated with this request
> > + *
> > + * The caller's @prep_sqe() function is invoked to fill in the details of the
> > + * sqe. Do not call io_uring_sqe_set_data() on this sqe.
> > + *
> > + * The sqe is submitted by the current AioContext. The kernel may see the sqe
> > + * as soon as @pre_sqe() returns or it may take until the next event loop
>
> prep_sqe
Will fix.
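
For the record, the intended call pattern looks roughly like this,
continuing the MyRequest sketch from above (still made-up names;
io_uring_prep_read() is the usual liburing prep helper):

    static void my_prep_sqe(struct io_uring_sqe *sqe, void *opaque)
    {
        MyRequest *req = opaque;

        /* Fill in the sqe. aio_add_sqe() owns the sqe's user data
         * slot, hence no io_uring_sqe_set_data() call here.
         */
        io_uring_prep_read(sqe, req->fd, req->buf, req->len, req->offset);
    }

    ...
        if (aio_has_io_uring()) {
            req->handler.cb = my_cb;
            aio_add_sqe(my_prep_sqe, req, &req->handler);
        }

my_cb() then runs in this thread's AioContext once the cqe arrives,
even when a nested event loop is what actually reaps it.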
