On Thu, Oct 20, 2022 at 06:11:06PM -0400, Alexander Bulekov wrote:
> On 220712 1034, Stefan Hajnoczi wrote:
> > On Tue, Jun 21, 2022 at 11:53:06AM -0400, Alexander Bulekov wrote:
> > > On 220621 1630, Peter Maydell wrote:
> > > > On Thu, 9 Jun 2022 at 14:59, Alexander Bulekov <[email protected]> wrote:
> > > > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> > > > > index 44dacfa224..ab1ad0f7a8 100644
> > > > > --- a/include/hw/pci/pci.h
> > > > > +++ b/include/hw/pci/pci.h
> > > > > @@ -834,8 +834,17 @@ static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
> > > > >                                       void *buf, dma_addr_t len,
> > > > >                                       DMADirection dir, MemTxAttrs attrs)
> > > > >  {
> > > > > -    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
> > > > > -                         dir, attrs);
> > > > > +    bool prior_engaged_state;
> > > > > +    MemTxResult result;
> > > > > +
> > > > > +    prior_engaged_state = dev->qdev.engaged_in_io;
> > > > > +
> > > > > +    dev->qdev.engaged_in_io = true;
> > > > > +    result = dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
> > > > > +                           dir, attrs);
> > > > > +    dev->qdev.engaged_in_io = prior_engaged_state;
> > > > > +
> > > > > +    return result;
> > > >
> > > > Why do we need to do something in this pci-specific function?
> > > > I was expecting this to only need changes at the generic-to-all-devices
> > > > level.
> > >
> > > Both of these handle the BH->DMA->MMIO case. Unlike MMIO, I don't think
> > > there is any neat way to set the engaged_in_io flag as we enter a BH. So
> > > instead, we try to set it when a device initiates DMA.
> > >
> > > The pci function lets us do that since we get a glimpse of the dev/qdev
> > > (unlike the dma_memory_... functions).
> >
> > ...
> > > > > @@ -302,6 +310,10 @@ static MemTxResult dma_buf_rw(void *buf, dma_addr_t len, dma_addr_t *residual,
> > > > >          xresidual -= xfer;
> > > > >      }
> > > > >
> > > > > +    if (dev) {
> > > > > +        dev->engaged_in_io = prior_engaged_state;
> > > > > +    }
> > > >
> > > > Not all DMA goes through dma_buf_rw() -- why does it need changes?
> > >
> > > This one has the same goal, but accesses the qdev through sg, instead of
> > > PCI.
> >
> > Should dma_*() APIs take a reentrancy guard argument so that all DMA
> > accesses are systematically covered?
> >
> > /* Define this in the memory API */
> > typedef struct {
> >     bool engaged_in_io;
> > } MemReentrancyGuard;
> >
> > /* Embed MemReentrancyGuard in DeviceState */
> > ...
> >
> > /* Require it in dma_*() APIs */
> > static inline MemTxResult dma_memory_rw(AddressSpace *as, dma_addr_t addr,
> >                                         void *buf, dma_addr_t len,
> >                                         DMADirection dir, MemTxAttrs attrs,
> >                                         MemReentrancyGuard *guard);
> >
> > /* Call dma_*() APIs like this... */
> > static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
> >                                      void *buf, dma_addr_t len,
> >                                      DMADirection dir, MemTxAttrs attrs)
> > {
> >     return dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
> >                          dir, attrs, &dev->qdev.reentrancy_guard);
> > }
>
> Taking a stab at this.
> Here is the list of DMA APIs that appear to need changes:
> dma_memory_valid (1 usage)
> dma_memory_rw (~5 uses)
> dma_memory_read (~92 uses)
> dma_memory_write (~71 uses)
> dma_memory_set (~4 uses)
> dma_memory_map (~18 uses)
> dma_memory_unmap (~21 uses)
> {ld,st}_{le,be}_{uw,l,q}_dma (~10 uses)
> ldub_dma (does not appear to be used anywhere)
> stb_dma (1 usage)
> dma_buf_read (~18 uses)
> dma_buf_write (~7 uses)
>
> These appear to be internal to the DMA API and probably don't need to be
> changed:
> dma_memory_read_relaxed (does not appear to be used anywhere)
> dma_memory_write_relaxed (does not appear to be used anywhere)
> dma_memory_rw_relaxed
>
> I don't think the sglist APIs need to be changed since we can get
> DeviceState from the QEMUSGList.
>
> Does this look more-or-less right?
That's along the lines of what I would expect. Interesting that map/unmap
is also on the list; it makes sense when considering bounce buffers.

Stefan
