On Wed, Oct 21, 2020 at 2:50 PM Jason Gunthorpe <[email protected]> wrote:
>
> On Wed, Oct 21, 2020 at 10:56:51AM +0200, Daniel Vetter wrote:
> > There are three ways to access PCI BARs from userspace: /dev/mem, sysfs
> > files, and the old proc interface. Two of them check against
> > iomem_is_exclusive(), proc never did. And with CONFIG_IO_STRICT_DEVMEM,
> > this starts to matter, since we don't want random userspace having
> > access to PCI BARs while a driver is loaded and using them.
> >
> > Fix this by adding the same iomem_is_exclusive() check we already have
> > on the sysfs side in pci_mmap_resource().
> >
> > References: 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> > Signed-off-by: Daniel Vetter <[email protected]>
> > Cc: Jason Gunthorpe <[email protected]>
> > Cc: Kees Cook <[email protected]>
> > Cc: Dan Williams <[email protected]>
> > Cc: Andrew Morton <[email protected]>
> > Cc: John Hubbard <[email protected]>
> > Cc: Jérôme Glisse <[email protected]>
> > Cc: Jan Kara <[email protected]>
> > Cc: [email protected]
> > Cc: [email protected]
> > Cc: [email protected]
> > Cc: [email protected]
> > Cc: Bjorn Helgaas <[email protected]>
> > Cc: [email protected]
>
> Maybe not for fixing in this series, but this access to
> IORESOURCE_BUSY doesn't have any locking.
>
> The write side holds the resource_lock at least..
>
> >  	ret = pci_mmap_page_range(dev, i, vma,
> >  				  fpriv->mmap_state, write_combine);
>
> At this point the vma isn't linked into the address space, so doesn't
> this happen?
>
>       CPU 0                          CPU 1
>                                      mmap_region()
>                                        vma = vm_area_alloc()
>                                        proc_bus_pci_mmap()
>                                          iomem_is_exclusive()
>                                          pci_mmap_page_range()
>       revoke_devmem()
>         unmap_mapping_range()
>         // vma is not linked to the
>         // address space here, so
>         // unmap doesn't find it
>                                        vma_link()
>                                        !!! The VMA gets mapped with
>                                            the revoked PTEs
>
> I couldn't find anything that prevents it at least: no mmap_sem on the
> unmap side, just the i_mmap_lock.
>
> Not seeing how address space revocation and pre-populating during mmap
> work together? Did I miss locking someplace?
>
> Not something to be fixed for this series, this is clearly an
> improvement, but it seems like another problem to tackle?
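For context, the check the patch adds mirrors the one the sysfs path
already has in pci_mmap_resource() in drivers/pci/pci-sysfs.c. Roughly
the following (a sketch, not the exact hunk; the BAR-index validation
that computes i in proc_bus_pci_mmap() is elided):

	/* sketch: drivers/pci/proc.c, in proc_bus_pci_mmap(), before the
	 * pci_mmap_page_range() call quoted above; i is the BAR index
	 * already validated earlier in the function */
	if (dev->resource[i].flags & IORESOURCE_MEM &&
	    iomem_is_exclusive(dev->resource[i].start))
		return -EINVAL;
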
Uh yes. In drivers/gpu this isn't a problem, because we only install
ptes from the vm_ops->fault handler. So no races there.

And I don't think you can fix this otherwise by holding locks: we
can't take mmap_sem on the revoke side, because before vma_link() we
don't even know which mm_struct is involved, so that can't close the
race. Plus it would be worse than the mm_take_all_locks dance used by
mmu notifiers. The address_space i_mmap_lock is also no good, since
it's not held during the ->mmap callback, which is where we write the
ptes. And the resource lock is even less useful, since we're certainly
not going to hold that at vma_link() time.

Hence delaying the pte writes until after vma_link(), which in
practice means ->fault time, looks like the only way to close this
gap. Trouble is I have no idea how to do that cleanly ...
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
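To make the fault-time pattern described above concrete, here is a
minimal sketch of deferring pte installation to ->fault. All driver
names here (my_bar_*, struct my_dev) are hypothetical, not an existing
in-tree user; it assumes the driver can recheck revocation and
recompute the pfn at fault time, and that file offset 0 maps the start
of the BAR:

	#include <linux/fs.h>
	#include <linux/mm.h>

	/* ptes are only written here, after vma_link() has run, so a
	 * concurrent revoke either finds the linked vma and unmaps it,
	 * or the recheck below fails and the fault returns SIGBUS */
	static vm_fault_t my_bar_fault(struct vm_fault *vmf)
	{
		struct my_dev *mydev = vmf->vma->vm_private_data;
		unsigned long pfn;

		if (my_bar_revoked(mydev))	/* hypothetical recheck */
			return VM_FAULT_SIGBUS;

		pfn = (my_bar_phys_addr(mydev) >> PAGE_SHIFT) + vmf->pgoff;
		return vmf_insert_pfn(vmf->vma, vmf->address, pfn);
	}

	static const struct vm_operations_struct my_bar_vm_ops = {
		.fault = my_bar_fault,
	};

	/* ->mmap only sets up the vma, it writes no ptes */
	static int my_bar_mmap(struct file *file, struct vm_area_struct *vma)
	{
		vma->vm_private_data = file->private_data;
		vma->vm_ops = &my_bar_vm_ops;
		/* VM_PFNMAP: raw pfns, no struct pages; VM_IO: device memory */
		vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
		return 0;
	}

The key property is that by the time ->fault runs, the vma is linked
into the address_space, so a revoke's unmap_mapping_range() will find
it; a fault racing with the revoke is then caught by the driver-side
recheck.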
