On Mon, 09/29 11:28, Stefan Hajnoczi wrote:
> On Mon, Sep 29, 2014 at 01:26:29PM +0800, Fam Zheng wrote:
> > +int qemu_epoll(GPollFD *fds, guint nfds, int64_t timeout)
> > +{
> > +/* A copy of last fd array, used to skip epoll_prepare when nothing
> > + * changed. */
> > +static GPollFD *last_fds;
> > +static guint last_nfds;
> ...
> > +static bool g_poll_fds_changed(const GPollFD *fds_a, const guint nfds_a,
> > +                               const GPollFD *fds_b, const guint nfds_b)
> ...
> > +static inline int g_io_condition_from_epoll_events(int e)
> > +{
>
> Please don't use ...
On Mon, 09/29 13:26, Fam Zheng wrote:
> A new implementation for qemu_poll_ns based on epoll is introduced here
> to address the slowness of g_poll and ppoll when the number of fds is
> high.
>
> On my laptop this would reduce the virtio-blk on top of null-aio
> device's response time from 32 us to 29 us with few fds (~10), and from
> 48 us to 32 us with more fds.