On Thu, Jul 27, 2023 at 4:57 PM Michael S. Tsirkin <[email protected]> wrote:

> On Thu, Jul 27, 2023 at 04:48:30PM +0200, Albert Esteve wrote:
> >
> >
> > On Mon, Jul 17, 2023 at 4:11 PM Michael S. Tsirkin <[email protected]>
> wrote:
> >
> >
> >
> >
> >
> >     On Mon, Jul 17, 2023 at 01:42:02PM +0200, Albert Esteve wrote:
> >     > Hi Michael,
> >     >
> >     > True. It may be a good idea to impose a limit in the number of
> entries
> >     that can
> >     > be added to the table.
> >     > And fail to add new entries once it reaches the limit.
> >     >
> >     > Not sure what would be a good limit though. For example,
> >     > https://www.kernel.org/doc/html/v4.9/media/uapi/v4l/vidioc-reqbufs.html#c.v4l2_requestbuffers
> >     > does not limit the number of buffers that can be allocated
> >     > simultaneously; it is an unsigned 32-bit value.
> >     > However, I guess 16 bits (65535) would suffice to cover the vast
> >     > majority of use cases. Or even lower, and it
> >     > can be adjusted later, as this API gets (more) used.
> >     >
> >     > Does that make sense?
> >     >
> >     > Thanks.
> >     > BR,
> >     > Albert
> >
> >     let's not top-post please.
> >
> >     Maybe. Another concern is qemu running out of FDs with a bad backend.
> >
> >     Question: why does qemu have to maintain these UUIDs in its memory?
> >
> >     Can't it query the backend with UUID and get the fd back?
> >
> >
> > In the end, we have one backend sharing an object with other backends.
> > From the importer's POV, it does not know who the exporter is, so it
> > cannot go poking other backends until it finds the one holding a
> > resource with the same UUID; it relies on qemu providing this
> > information.
> >
> > If we do not want qemu to hold the fds, we could, for instance, store
> > references to the backends that act as exporters. Then, once an
> > importer requests a specific object by its UUID, we ask the exporter(s)
> > for the fd, hoping to find it.
>
>
> right. I'd do this. and then the existing table can be regarded
> as a cache.
>

It is true that it is not easy to find a limit that fits all use cases,
and the cache proposal could result in a more maintainable
solution in the long term.

I'll explore this and post a proposal in the next version
of the patch. It will mean a bigger changeset, so
I'll try to keep it as clean as possible.
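To make the "existing table as a cache" idea concrete, here is a minimal sketch (not the actual QEMU implementation; names like `Exporter.get_fd` and `SharedObjectCache` are hypothetical, and plain ints stand in for real file descriptors): qemu keeps only a bounded UUID-to-fd cache plus references to the exporter backends, and on a cache miss it re-queries the exporters for the fd.

```python
from collections import OrderedDict


class Exporter:
    """Hypothetical stand-in for a backend that exports shared objects."""

    def __init__(self):
        self._objects = {}  # uuid -> fd

    def add_object(self, uuid, fd):
        self._objects[uuid] = fd

    def get_fd(self, uuid):
        # In the real design this would be a request to the backend,
        # not a local dictionary lookup.
        return self._objects.get(uuid)


class SharedObjectCache:
    """Bounded uuid -> fd cache; falls back to querying exporters on a miss."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self._cache = OrderedDict()  # uuid -> fd, kept in LRU order
        self._exporters = []         # references to exporter backends

    def register_exporter(self, exporter):
        self._exporters.append(exporter)

    def lookup(self, uuid):
        if uuid in self._cache:
            self._cache.move_to_end(uuid)  # refresh LRU position
            return self._cache[uuid]
        # Cache miss: ask each known exporter, hoping one holds the object.
        for exporter in self._exporters:
            fd = exporter.get_fd(uuid)
            if fd is not None:
                self._insert(uuid, fd)
                return fd
        return None

    def _insert(self, uuid, fd):
        if len(self._cache) >= self.capacity:
            # Drop the least-recently-used entry; it can always be
            # re-fetched from its exporter, so the number of fds held
            # at once stays bounded regardless of how many objects exist.
            self._cache.popitem(last=False)
        self._cache[uuid] = fd
```

The point of the sketch is that no hard limit on the total number of shared objects is needed: only the cache capacity bounds the fds qemu holds, and anything evicted can be recovered from its exporter.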

BR,
Albert


>
> > But the current solution sounds like a better fit for the virtio
> > shared objects feature.
> > I would be more keen to look into something like what Gerd suggested,
> > limiting the memory that we use.
> >
> > Nonetheless, in qemu we are storing fds, not mmapping the dmabufs.
> > So I think limiting the number of entries should suffice to ensure
> > that we do not run out of FDs or memory.
>
> My point is that you really don't know how much to limit it.
> If there is the ability to drop entries, then you can do this
> and cache things in memory.
>
>
> >
> >
> >     And then, the hash table in QEMU becomes just a cache
> >     to speed up lookups.
> >
> >     --
> >     MST
> >
> >
>
>
