+CC Jean-Philippe and the iommu list.
On Mon, 19 Nov 2018 20:29:39 -0700, Jason Gunthorpe <j...@ziepe.ca> wrote:

> On Tue, Nov 20, 2018 at 11:07:02AM +0800, Kenneth Lee wrote:
> > On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:
> > >
> > > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > >
> > > > If the hardware cannot share the page table with the CPU, we then
> > > > need some way to change the device page table. This is what
> > > > happens in ODP: it invalidates the page table in the device upon
> > > > an mmu_notifier callback. But this cannot solve the COW problem:
> > > > if user process A shares a page P with the device, and A forks a
> > > > new process B and then continues to write to the page, by COW
> > > > process B will keep the page P while A will get a new page P'.
> > > > But you have no way to let the device know it should use P'
> > > > rather than P.
> > >
> > > Is this true? I thought mmu_notifiers covered all these cases.
> > >
> > > The mmu_notifier for A should fire if B causes the physical address
> > > of A's pages to change via COW.
> > >
> > > And this causes the device page tables to re-synchronize.
> >
> > I don't see such code. The current do_cow_fault() implementation has
> > nothing to do with mmu_notifier.
>
> Well, that sure sounds like it would be a bug in mmu_notifiers..
>
> But considering Jean's SVA stuff seems based on mmu_notifiers, I have
> a hard time believing that it has any different behavior from RDMA's
> ODP, and if it does have different behavior, then it is probably just
> a bug in the ODP implementation.
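To make the mechanism under debate concrete: both ODP and the IOMMU SVA
code hang a device-invalidation hook off the process mm, roughly like the
minimal sketch below. The my_dev_* names and my_dev_flush_range() are
invented for illustration, not a real driver API, and whether this hook
actually fires on the COW path is exactly the open question above.

/*
 * Minimal sketch: register an mmu_notifier on a process mm and flush
 * the device's cached translations whenever the CPU page table changes
 * for a range.
 */
#include <linux/mmu_notifier.h>
#include <linux/mm_types.h>

struct my_dev_ctx {
        struct mmu_notifier mn;
        void *hw;       /* handle for the device's page table / TLB */
};

/* Hypothetical hardware hook, provided elsewhere by the driver. */
void my_dev_flush_range(void *hw, unsigned long start, unsigned long end);

static void my_dev_invalidate_range(struct mmu_notifier *mn,
                                    struct mm_struct *mm,
                                    unsigned long start, unsigned long end)
{
        struct my_dev_ctx *ctx = container_of(mn, struct my_dev_ctx, mn);

        /*
         * The CPU page table changed for [start, end). Drop the
         * device's cached translations; its next access re-faults and
         * picks up the current physical pages (e.g. P' instead of P).
         */
        my_dev_flush_range(ctx->hw, start, end);
}

static const struct mmu_notifier_ops my_dev_mn_ops = {
        .invalidate_range = my_dev_invalidate_range,
};

int my_dev_bind_mm(struct my_dev_ctx *ctx, struct mm_struct *mm)
{
        ctx->mn.ops = &my_dev_mn_ops;
        return mmu_notifier_register(&ctx->mn, mm);
}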
> > > > In WarpDrive/uacce, we make this simple: if you have an IOMMU
> > > > and it supports SVM/SVA, everything will be fine, just like ODP
> > > > implicit mode, and you don't need to write any code for that,
> > > > because it has already been done by the IOMMU framework. If it
> > >
> > > Looks like the IOMMU code uses mmu_notifier, so it is identical to
> > > IB's ODP. The only difference is that IB tends to have the IOMMU
> > > page table in the device, not in the CPU.
> > >
> > > The only case I know of that is different is the new-fangled CAPI
> > > stuff, where the IOMMU can directly use the CPU's page table and
> > > the IOMMU page table (in device or CPU) is eliminated.
> >
> > Yes. We are not focusing on the current implementation. As mentioned
> > in the cover letter, we are expecting Jean-Philippe's SVA patches:
> > git://linux-arm.org/linux-jpb.
>
> This SVA stuff does not look comparable to CAPI, as it still requires
> maintaining separate IOMMU page tables.
>
> Also, those patches from Jean have a lot of references to
> mmu_notifiers (i.e. look at iommu_mmu_notifier).
>
> Are you really sure it is actually any different at all?
>
> > > Anyhow, I don't think a single instance of hardware should justify
> > > an entire new subsystem. Subsystems are hard to make, and without
> > > multiple hardware examples there is no way to expect that it would
> > > cover any future use cases.
> >
> > Yes, that was our first expectation, and we could keep this within
> > our own driver. But there is no user-space driver support for any
> > accelerator in the mainline kernel; even the well-known QuickAssist
> > has to be maintained out of tree. So we are trying to see if people
> > are interested in working together to solve the problem.
>
> Well, you should come with patches ack'ed by these other groups.
>
> > > If all your driver needs is to mmap some PCI BAR space, route
> > > interrupts and do DMA mapping, then mediated VFIO is probably a
> > > good choice.
> >
> > Yes, that is what was done in our RFCv1/v2. But we accepted Jerome's
> > opinion and tried not to add complexity to the mm subsystem.
>
> Why would a mediated VFIO driver touch the mm subsystem? Sounds like
> you don't have a VFIO driver if it needs to do stuff like that...
>
> > > If it needs to do a bunch of other stuff, not related to PCI BAR
> > > space, interrupts and DMA mapping (i.e. special code for
> > > compression, crypto, AI, whatever), then you should probably do
> > > what Jerome said and make a drivers/char/hisillicon_foo_bar.c that
> > > exposes just what your hardware does.
> >
> > Yes, if no other accelerator driver writer is interested, that is
> > the expectation :)
>
> I don't think it matters what other drivers do.
>
> If your driver does not need any other kernel code then VFIO is
> sensible. In this kind of world you will probably have an RDMA-like
> userspace driver that can bring this to a common user-space API, even
> if one driver uses VFIO and a different driver uses something else.
>
> > You create some connections (queues) to the NIC, RSA, and AI
> > engines. Then you get data directly from the NIC and pass the
> > pointer to the RSA engine for decryption. The CPU then does some
> > processing of its own and passes the buffer on to the AI engine for
> > CNN calculation... This will need some way to maintain the same
> > address space across all of these engines.
>
> How is this any different from what we have today?
>
> SVA is not something even remotely new, IB has been doing various
> versions of it for 20 years.
>
> Jason
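For scale, the drivers/char option Jason mentions is genuinely small. A
minimal sketch of a misc char device that does nothing but expose a
hardware BAR to user space through mmap might look like the following;
the hisi_foo name and the BAR address/size are invented placeholders,
not taken from any real driver, and sanity checks are kept to a minimum:

#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Invented values standing in for a real device's BAR. */
static phys_addr_t hisi_foo_bar_base = 0xd0000000;
static size_t hisi_foo_bar_size = 0x10000;

static int hisi_foo_mmap(struct file *file, struct vm_area_struct *vma)
{
        size_t len = vma->vm_end - vma->vm_start;

        if (len > hisi_foo_bar_size)
                return -EINVAL;

        /* Map the device registers uncached into the process. */
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
        return remap_pfn_range(vma, vma->vm_start,
                               hisi_foo_bar_base >> PAGE_SHIFT,
                               len, vma->vm_page_prot);
}

static const struct file_operations hisi_foo_fops = {
        .owner = THIS_MODULE,
        .mmap  = hisi_foo_mmap,
};

static struct miscdevice hisi_foo_misc = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "hisi_foo",
        .fops  = &hisi_foo_fops,
};

module_misc_device(hisi_foo_misc);
MODULE_LICENSE("GPL");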
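And the NIC -> RSA -> AI pipeline Kenneth describes, seen from user
space under SVA, reduces to passing a single pointer between queues. A
purely hypothetical sketch follows; the device nodes and the wd_* calls
are invented for illustration and are not the uacce API, with stubs so
the example compiles:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical queue handle; a real library would talk to the kernel. */
struct wd_queue { const char *dev; };

static struct wd_queue *wd_open(const char *dev)
{
        struct wd_queue *q = malloc(sizeof(*q));
        q->dev = dev;   /* stub: would bind the queue to the current mm */
        return q;
}

static int wd_submit(struct wd_queue *q, void *buf, size_t len)
{
        (void)q; (void)buf; (void)len;  /* stub: would ring a doorbell */
        return 0;
}

static int wd_wait(struct wd_queue *q)
{
        (void)q;        /* stub: would block until completion */
        return 0;
}

int main(void)
{
        struct wd_queue *nic = wd_open("/dev/uacce/nic0");
        struct wd_queue *rsa = wd_open("/dev/uacce/rsa0");
        struct wd_queue *ai  = wd_open("/dev/uacce/ai0");

        /* Plain malloc'ed memory: with SVA every engine walks the same
         * process page table, so one pointer is valid everywhere and no
         * copies or explicit DMA mappings are needed. */
        void *buf = malloc(1 << 20);

        wd_submit(nic, buf, 1 << 20);   /* NIC writes packets into buf  */
        wd_wait(nic);
        wd_submit(rsa, buf, 1 << 20);   /* RSA engine decrypts in place */
        wd_wait(rsa);
        /* ... CPU post-processing on buf ... */
        wd_submit(ai, buf, 1 << 20);    /* AI engine runs the CNN       */
        wd_wait(ai);

        free(buf);
        return 0;
}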