Re: remove the nvlink2 pci_vfio subdriver v2

2021-05-04 Thread Daniel Vetter
On Tue, May 04, 2021 at 12:53:27PM -0300, Jason Gunthorpe wrote: > On Tue, May 04, 2021 at 04:23:40PM +0200, Daniel Vetter wrote: > > > Just my 2cents from drm (where we deprecate old gunk uapi quite often): > > Imo it's best to keep the uapi headers as-is, but exchange the

Re: remove the nvlink2 pci_vfio subdriver v2

2021-05-04 Thread Daniel Vetter
ameter extensions or whatever they are in this case) are de facto reserved, because there are binaries out there (qemu in this case) that have code acting on them. The only exception where we completely nuke the structs and #defines is when the uapi has only ever been used by testcases. Which we know, since we de facto limit our stable uapi guarantee to the canonical open & upstream userspace drivers only (at least for the driver-specific stuff; the cross-driver interfaces are hopeless).

Anyway, feel free to ignore, since this might be different than drivers/gpu.

Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-16 Thread Daniel Vetter
2018 at 12:14 PM, Oleksandr Andrushchenko wrote: > On 04/16/2018 12:32 PM, Daniel Vetter wrote: >> >> On Mon, Apr 16, 2018 at 10:22 AM, Oleksandr Andrushchenko >> wrote: >>> >>> On 04/16/2018 10:43 AM, Daniel Vetter wrote: >>>> >>>>

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-16 Thread Daniel Vetter
On Mon, Apr 16, 2018 at 10:22 AM, Oleksandr Andrushchenko wrote: > On 04/16/2018 10:43 AM, Daniel Vetter wrote: >> >> On Mon, Apr 16, 2018 at 10:16:31AM +0300, Oleksandr Andrushchenko wrote: >>> >>> On 04/13/2018 06:37 PM, Daniel Vetter wrote: >>>>

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-16 Thread Daniel Vetter
On Mon, Apr 16, 2018 at 10:16:31AM +0300, Oleksandr Andrushchenko wrote: > On 04/13/2018 06:37 PM, Daniel Vetter wrote: > > On Wed, Apr 11, 2018 at 08:59:32AM +0300, Oleksandr Andrushchenko wrote: > > > On 04/10/2018 08:26 PM, Dongwon Kim wrote: > > > > On Tue, Ap

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-13 Thread Daniel Vetter
en propose a Xen helper library for sharing big buffers,
> > > so common code of the above drivers can use the same code w/o code
> > > duplication)
> > I think it is possible to use your functions for memory sharing part in
> > hyper_dmabuf's backend (this 'backend' means the layer that does page
> > sharing and inter-vm communication with xen-specific way.), so why don't
> > we work on "Xen helper library for sharing big buffers" first while we
> > continue our discussion on the common API layer that can cover any dmabuf
> > sharing cases.
> Well, I would love we reuse the code that I have, but I also understand
> that it was limited by my use-cases. So, I do not insist we have to ;)
> If we start designing and discussing hyper-dmabuf protocol we of course
> can work on this helper library in parallel.

Imo code reuse is overrated. Adding new uapi is what freaks me out here
:-) If we end up with duplicated implementations, even in upstream, meh,
not great, but also ok. New uapi, and in a similar way, new hypervisor api
like the dma-buf forwarding that hyperdmabuf does, is the kind of thing
that will lock us in for 10+ years (if we make a mistake).

> > > Thank you,
> > > Oleksandr
> > >
> > > P.S. All, is it a good idea to move this out of udmabuf thread into a
> > > dedicated one?
> > Either way is fine with me.
> So, if you can start designing the protocol we may have a dedicated mail
> thread for that. I will try to help with the protocol as much as I can

Please don't start with the protocol. Instead start with the concrete
use-cases, and then figure out why exactly you need new uapi. Once we have
that answered, we can start thinking about fleshing out the details.

Cheers, Daniel

> > > > cheers,
> > > >   Gerd
> > >
> > > Thank you,
> > > Oleksandr
> > >
> > > P.S. Sorry for making your original mail thread to discuss things much
> > > broader than your RFC...
> > >
> > > [1] https://github.com/xen-troops/displ_be
> > > [2] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/displif.h#L484
> > > [3] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/sndif.h
>
> [1] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/displif.h
> [2] https://lists.xenproject.org/archives/html/xen-devel/2018-04/msg00685.html
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [Qemu-devel] [PATCH v2] Add udmabuf misc device

2018-04-09 Thread Daniel Vetter
m shmem/memfd" ioctl. Sounds like a good idea, and generally useful. We might want to limit it to memfd though, for semantic reasons: dma-bufs have an invariant size, shmem not so much. A memfd can be locked down so that its size no longer changes. And iirc the core mm page invalidation protocol around truncate() is about as bad as get_user_pages vs cow :-)

-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-09 Thread Daniel Vetter
On Thu, Apr 05, 2018 at 05:11:17PM -0700, Matt Roper wrote: > On Thu, Apr 05, 2018 at 10:32:04PM +0200, Daniel Vetter wrote: > > Pulling this out of the shadows again. > > > > We now also have xen-zcopy from Oleksandr and the hyper dmabuf stuff > > from Matt and Dongw

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-09 Thread Daniel Vetter
e, so the guest
> would need to find a free spot and ask the host to place the resource
> there. Then the guest needs page structs covering the mapped resource,
> so it can work with it. Didn't investigate how difficult that is. Use
> memory hotplug maybe? Can we easily unma

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-04-05 Thread Daniel Vetter
> dom0 exports.
> >> Overall I like the idea, but too lazy to review.
> > Cool. General comments on the idea was all I was looking for, for the
> > moment. Spare your review cycles for the next version ;)
> >> Oh, some kselftests for this stuff would be lovely.
> > I'll look into it.
>
> thanks,
> Gerd
>
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

Re: [Qemu-devel] [RfC PATCH] Add udmabuf misc device

2018-03-13 Thread Daniel Vetter
> +
> +	return 0;
> +}
> +
> +static void __exit udmabuf_dev_exit(void)
> +{
> +	misc_deregister(&udmabuf_misc);
> +}
> +
> +module_init(udmabuf_dev_init)
> +module_exit(udmabuf_dev_exit)
> +
> +MODULE_LICENSE("GPL v2");
> diff --gi

Re: [Qemu-devel] RfC: MAINTAINERS update for qemu drm drivers.

2016-11-21 Thread Daniel Vetter
. Bigger tree means you have more reasons for regular pull requests, and that means patches land in drm-next faster. Which I think is good. And I'm a bit on a crusade against boutique trees, for these reasons ;-) -Daniel -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch

Re: [Qemu-devel] [PATCH v14 00/22] Add Mediated device support

2016-11-18 Thread Daniel Vetter
s up in time for 4.10, and the i915/kvmgt stuff needs to be postponed to 4.11. -Daniel -- Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch

Re: [Qemu-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-24 Thread Daniel Vetter
t want to support (maybe allow for perf reasons if the guest is stupid) frontbuffer rendering, which means you need buffer handles + damage, and not a static region. -Daniel -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch

Re: [Qemu-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-24 Thread Daniel Vetter
comments are welcome. Generally the kernel can't do gpu blts since the required massive state setup is only in the userspace side of the GL driver stack. But glReadPixels can do tricks for detiling, and if you use pixel buffer objects or something similar it'll even be amortized reasonably.

Re: [Qemu-devel] [Intel-gfx] [Announcement] 2015-Q3 release of XenGT - a Mediated Graphics Passthrough Solution from Intel

2015-11-24 Thread Daniel Vetter
mostly a problem for integrated gpus, since discrete ones usually require contiguous vram for scanout. I think saying "don't do that" is a valid option though, i.e. we're assuming that page mappings for an in-use scanout range never change on the guest side. That is true for at least all the current linux drivers. -Daniel -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch

Re: [Qemu-devel] [Intel-gfx] [Xen-devel] [RFC][PATCH] gpu:drm:i915:intel_detect_pch: back to check devfn instead of check class type

2014-07-12 Thread Daniel Vetter
On Fri, Jul 11, 2014 at 08:30:59PM +, Tian, Kevin wrote: > > From: Konrad Rzeszutek Wilk [mailto:konrad.w...@oracle.com] > > Sent: Friday, July 11, 2014 12:42 PM > > > > On Fri, Jul 11, 2014 at 08:29:56AM +0200, Daniel Vetter wrote: > > > On Thu, Jul 10, 2014

Re: [Qemu-devel] [Intel-gfx] [RFC][PATCH] gpu:drm:i915:intel_detect_pch: back to check devfn instead of check class type

2014-07-10 Thread Daniel Vetter
e majority case just works. I guess we can do it, at least I haven't seen any strange combinations in the wild outside of Intel ... -Daniel -- Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch

Re: [Qemu-devel] [Intel-gfx] [RFC][PATCH] gpu:drm:i915:intel_detect_pch: back to check devfn instead of check class type

2014-07-07 Thread Daniel Vetter
On Mon, Jul 07, 2014 at 07:58:30PM +0200, Paolo Bonzini wrote: > On 07/07/2014 19:54, Daniel Vetter wrote: > >On Mon, Jul 07, 2014 at 04:57:45PM +0200, Paolo Bonzini wrote: > >>On 07/07/2014 16:49, Daniel Vetter wrote: > >>>So the correct fix to forward int

Re: [Qemu-devel] [Intel-gfx] [RFC][PATCH] gpu:drm:i915:intel_detect_pch: back to check devfn instead of check class type

2014-07-07 Thread Daniel Vetter
On Mon, Jul 07, 2014 at 04:57:45PM +0200, Paolo Bonzini wrote: > On 07/07/2014 16:49, Daniel Vetter wrote: > >So the correct fix to forward intel gpus to guests is indeed to somehow > >fake the pch pci ids since the driver really needs them. Gross design, but > >that

Re: [Qemu-devel] [RFC][PATCH] gpu:drm:i915:intel_detect_pch: back to check devfn instead of check class type

2014-07-07 Thread Daniel Vetter
how fake the pch pci ids since the driver really needs them. Gross design, but that's how the hardware works. -Daniel -- Daniel Vetter Software Engineer, Intel Corporation +41 (0) 79 365 57 48 - http://blog.ffwll.ch

Re: [Qemu-devel] [Intel-gfx] [RFC][PATCH] gpu:drm:i915:intel_detect_pch: back to check devfn instead of check class type

2014-07-07 Thread Daniel Vetter
bridge and our PCH is always
> > on device 31: func0 as far as I know. Looks good to me.
> >
> > Reviewed-by: Zhenyu Wang
> >
> Thanks for your review.
>
> Do you know when this can be applied?

I'll hold off merging until we have buy-in from upstream qemu on a given approach (which should work for both linux and windows).

-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

Re: [Qemu-devel] [RFC][PATCH] gpu:drm:i915:intel_detect_pch: back to check devfn instead of check class type

2014-06-20 Thread Daniel Vetter
if (pch->vendor == PCI_VENDOR_ID_INTEL) {
>>	unsigned short id = pch->device &
>>		INTEL_PCH_DEVICE_ID_MASK;
>>	dev_priv->pch_id = id;
>> @@ -462,10 +452,7 @@ void intel_detect_pch(struct drm_device *dev)
>>