On Tue, May 04, 2021 at 12:53:27PM -0300, Jason Gunthorpe wrote:
> On Tue, May 04, 2021 at 04:23:40PM +0200, Daniel Vetter wrote:
>
> > Just my 2cents from drm (where we deprecate old gunk uapi quite often):
> > Imo it's best to keep the uapi headers as-is, but exchange the
...the numbers (parameter extensions or whatever they are in this case)
are de facto reserved, because there are binaries (qemu in this case)
that have code acting on them out there.
The only exception where we completely nuke the structs and #defines is
when the uapi has only been used by testcases. Which we know, since we de
facto limit our stable uapi guarantee to the canonical open & upstream
userspace drivers only (at least for the driver-specific stuff; the
cross-driver interfaces are hopeless).
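As a purely hypothetical illustration of that policy (driver and macro
names invented), keeping a dead uapi number reserved can look like this:

/* foo_drm.h - hypothetical uapi header, names invented for illustration */

/*
 * Deprecated: the kernel no longer acts on this flag, but the value must
 * stay reserved forever because shipped userspace (e.g. qemu) still sets
 * it. Do not reuse this bit for anything else.
 */
#define FOO_PARAM_OLD_EXTENSION		(1 << 3)

/* New extensions pick fresh numbers instead of recycling the old one. */
#define FOO_PARAM_NEW_EXTENSION		(1 << 4)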
Anyway, feel free to ignore, since this might be different outside drivers/gpu.
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Mon, Apr 16, 2018 at 10:16:31AM +0300, Oleksandr Andrushchenko wrote:
> On 04/13/2018 06:37 PM, Daniel Vetter wrote:
> > On Wed, Apr 11, 2018 at 08:59:32AM +0300, Oleksandr Andrushchenko wrote:
> > > On 04/10/2018 08:26 PM, Dongwon Kim wrote:
> > > (...then propose a Xen helper library for sharing big buffers,
> > > so common code of the above drivers can use the same code w/o code
> > > duplication)
> > I think it is possible to use your functions for the memory-sharing part
> > of hyper_dmabuf's backend (this 'backend' means the layer that does page
> > sharing and inter-VM communication in a Xen-specific way), so why don't
> > we work on the "Xen helper library for sharing big buffers" first while
> > we continue our discussion on the common API layer that can cover any
> > dma-buf sharing case.
> >
> Well, I would love for us to reuse the code that I have, but I also
> understand that it was limited by my use-cases. So I do not
> insist that we have to ;)
> If we start designing and discussing the hyper-dmabuf protocol we can
> of course work on this helper library in parallel.
Imo code reuse is overrated. Adding new uapi is what freaks me out here
:-)
If we end up with duplicated implementations, even in upstream, meh, not
great, but also ok. New uapi, and similarly new hypervisor API like the
dma-buf forwarding that hyper-dmabuf does, is the kind of thing that will
lock us in for 10+ years (if we make a mistake).
> > > Thank you,
> > > Oleksandr
> > >
> > > P.S. All, is it a good idea to move this out of udmabuf thread into a
> > > dedicated one?
> > Either way is fine with me.
> So, if you can start designing the protocol, we may have a dedicated mail
> thread for that. I will try to help with the protocol as much as I can.
Please don't start with the protocol. Instead start with the concrete
use-cases, and then figure out why exactly you need new uapi. Once we have
that answered, we can start thinking about fleshing out the details.
Cheers, Daniel
>
> > > > > > cheers,
> > > > > >Gerd
> > > > > >
> > > > > Thank you,
> > > > > Oleksandr
> > > > >
> > > > > P.S. Sorry for making your original mail thread discuss things much
> > > > > broader than your RFC...
> > > > >
> > > [1] https://github.com/xen-troops/displ_be
> > > [2]
> > > https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/displif.h#L484
> > > [3]
> > > https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/sndif.h
> > >
> [1]
> https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/displif.h
> [2]
> https://lists.xenproject.org/archives/html/xen-devel/2018-04/msg00685.html
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
A "create a dma-buf from shmem/memfd" ioctl sounds like a good
idea, and generally useful.
We might want to limit it to memfd though, for semantic reasons: dma-bufs
have an invariant size, shmem not so much. memfds can be locked down so
that their size can't change anymore. And iirc the core mm page
invalidation protocol around truncate() is about as bad as
get_user_pages vs CoW :-)
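A minimal sketch of that memfd path, assuming the UDMABUF_CREATE ioctl and
struct udmabuf_create from <linux/udmabuf.h> plus a /dev/udmabuf node
(error handling and page-alignment checks omitted):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Create a size-sealed memfd and turn it into a dma-buf fd. */
static int create_udmabuf(size_t size)
{
	struct udmabuf_create create;
	int memfd, devfd, buffd;

	memfd = memfd_create("guest-buffer", MFD_ALLOW_SEALING);
	ftruncate(memfd, size);
	/* dma-bufs have an invariant size, so forbid any further resizing */
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW);

	devfd = open("/dev/udmabuf", O_RDWR);
	memset(&create, 0, sizeof(create));
	create.memfd  = memfd;
	create.offset = 0;
	create.size   = size;
	buffd = ioctl(devfd, UDMABUF_CREATE, &create);

	close(devfd);
	close(memfd);
	return buffd;	/* dma-buf fd, or -1 on failure */
}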
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Thu, Apr 05, 2018 at 05:11:17PM -0700, Matt Roper wrote:
> On Thu, Apr 05, 2018 at 10:32:04PM +0200, Daniel Vetter wrote:
> > Pulling this out of the shadows again.
> >
> > We now also have xen-zcopy from Oleksandr and the hyper dmabuf stuff
> > from Matt and Dongwon.
> ...so the guest
> would need to find a free spot and ask the host to place the resource
> there. Then the guest needs page structs covering the mapped resource,
> so it can work with it. Didn't investigate how difficult that is. Use
> memory hotplug maybe? Can we easily unmap ...
> ... dom0 exports.
>
>> Overall I like the idea, but too lazy to review.
>
> Cool. General comments on the idea were all I was looking for for the
> moment. Spare your review cycles for the next version ;)
>
>> Oh, some kselftests for this stuff would be lovely.
>
> I'll look into it.
>
> thanks,
> Gerd
>
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
> +
> + return 0;
> +}
> +
> +static void __exit udmabuf_dev_exit(void)
> +{
> + misc_deregister(&udmabuf_misc);
> +}
> +
> +module_init(udmabuf_dev_init)
> +module_exit(udmabuf_dev_exit)
> +
> +MODULE_LICENSE("GPL v2");
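For context, the init side this hunk pairs with is a plain misc-device
registration; a sketch of roughly what it presumably looks like
(udmabuf_ioctl is assumed to be the ioctl handler defined earlier in the
patch, not quoted here):

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

/* defined earlier in the patch; handles UDMABUF_CREATE */
static long udmabuf_ioctl(struct file *file, unsigned int cmd,
			  unsigned long arg);

static const struct file_operations udmabuf_fops = {
	.owner		= THIS_MODULE,
	.unlocked_ioctl	= udmabuf_ioctl,
};

static struct miscdevice udmabuf_misc = {
	.minor	= MISC_DYNAMIC_MINOR,
	.name	= "udmabuf",
	.fops	= &udmabuf_fops,
};

static int __init udmabuf_dev_init(void)
{
	int ret;

	/* exposes /dev/udmabuf with a dynamically assigned minor number */
	ret = misc_register(&udmabuf_misc);
	if (ret < 0)
		return ret;

	return 0;
}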
A bigger tree means you have more reasons for regular pull requests,
and that means patches land in drm-next faster. Which I think is good. And
I'm a bit on a crusade against boutique trees, for these reasons ;-)
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
...in time for 4.10, and the i915/kvmgt stuff needs to be postponed to
4.11.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
...we don't want to support frontbuffer rendering (maybe allow it for perf
reasons if the guest is stupid), which means you need buffer handles +
damage, and not a static region.
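As a purely hypothetical sketch (struct and field names invented), the
per-flip message would then carry a buffer handle plus the damaged
rectangle rather than describing one fixed region:

#include <stdint.h>

/* Hypothetical guest->host flip event, illustrative only. */
struct guest_flip_event {
	uint32_t buffer_handle;		/* guest buffer that just finished rendering */
	uint32_t damage_x, damage_y;	/* area the host actually has to update */
	uint32_t damage_width, damage_height;
};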
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
...comments are welcome.
Generally the kernel can't do gpu blits, since the required massive state
setup is only in the userspace side of the GL driver stack. But
glReadPixels can do tricks for detiling, and if you use pixel buffer
objects or something similar it'll even be amortized reasonably.
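Roughly what that amortized readback looks like with a pixel buffer object
(standard GL 2.1+ calls; a sketch with error handling and format
negotiation omitted):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/*
 * Read back (and implicitly detile) the current framebuffer through a
 * pixel buffer object. glReadPixels into a bound GL_PIXEL_PACK_BUFFER
 * returns immediately; the GPU copy is only waited for at map time.
 */
static void read_back_frame(int width, int height,
			    void (*consume)(const void *pixels))
{
	GLuint pbo;
	void *pixels;

	glGenBuffers(1, &pbo);
	glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
	glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL,
		     GL_STREAM_READ);

	/* with a PBO bound, the last argument is an offset, not a pointer */
	glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

	pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
	consume(pixels);
	glUnmapBuffer(GL_PIXEL_PACK_BUFFER);

	glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
	glDeleteBuffers(1, &pbo);
}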
This is mostly a problem for integrated gpus, since discrete ones usually
require contiguous vram for scanout. I think saying "don't do that" is a
valid option though, i.e. we're assuming that the page mappings for an
in-use scanout range never change on the guest side. That is true for at
least all the current linux drivers.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Fri, Jul 11, 2014 at 08:30:59PM +, Tian, Kevin wrote:
> > From: Konrad Rzeszutek Wilk [mailto:konrad.w...@oracle.com]
> > Sent: Friday, July 11, 2014 12:42 PM
> >
> > On Fri, Jul 11, 2014 at 08:29:56AM +0200, Daniel Vetter wrote:
> > > On Thu, Jul 10, 2014
...the majority case just works.
I guess we can do it; at least I haven't seen any strange combinations in
the wild outside of Intel ...
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
On Mon, Jul 07, 2014 at 04:57:45PM +0200, Paolo Bonzini wrote:
> On 07/07/2014 16:49, Daniel Vetter wrote:
> > So the correct fix to forward intel gpus to guests is indeed to somehow
> > fake the pch pci ids since the driver really needs them. Gross design,
> > but that's how the hardware works.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
> >...the ISA bridge, and our PCH is always
> >on device 31: func0 as far as I know. Looks good to me.
> >
> >Reviewed-by: Zhenyu Wang
> >
>
> Thanks for your review.
>
> Do you know when this can be applied?
I'll hold off merging until we have buy-in from upstream qemu on a given
approach (which should work for both linux and windows).
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
>> if (pch->vendor == PCI_VENDOR_ID_INTEL) {
>>         unsigned short id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
>>         dev_priv->pch_id = id;
>> @@ -462,10 +452,7 @@ void intel_detect_pch(struct drm_device *dev)
>>
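For context (reconstructed from memory of i915's intel_detect_pch(), not
quoted from this patch), the masked id is then matched against the known
PCH device-id types, which is why a passed-through GPU needs a fake ISA
bridge at 00:1f.0 carrying one of these IDs:

/* Illustrative sketch; constants and messages as recalled, not a quote. */
if (id == INTEL_PCH_IBX_DEVICE_ID_TYPE) {
        dev_priv->pch_type = PCH_IBX;
        DRM_DEBUG_KMS("Found Ibex Peak PCH\n");
} else if (id == INTEL_PCH_CPT_DEVICE_ID_TYPE) {
        dev_priv->pch_type = PCH_CPT;
        DRM_DEBUG_KMS("Found CougarPoint PCH\n");
} else if (id == INTEL_PCH_LPT_DEVICE_ID_TYPE) {
        dev_priv->pch_type = PCH_LPT;
        DRM_DEBUG_KMS("Found LynxPoint PCH\n");
}
/* ...more generations follow; without a match the driver can't program
 * the PCH-side display hardware.
 */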