On Thu, Nov 30, 2023 at 05:49:38PM +0800, Xuan Zhuo wrote:
> On Thu, 30 Nov 2023 04:45:58 -0500, "Michael S. Tsirkin" <m...@redhat.com> wrote:
> > On Thu, Aug 10, 2023 at 08:30:56PM +0800, Xuan Zhuo wrote:
> > > These APIs have been introduced:
> > >
> > > * virtqueue_dma_need_sync
> > > * virtqueue_dma_sync_single_range_for_cpu
> > > * virtqueue_dma_sync_single_range_for_device
> > >
> > > These APIs can be used together with the premapped mechanism to sync the
> > > DMA address.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanz...@linux.alibaba.com>
> > > ---
> > >  drivers/virtio/virtio_ring.c | 76 ++++++++++++++++++++++++++++++++++++
> > >  include/linux/virtio.h       |  8 ++++
> > >  2 files changed, 84 insertions(+)
> > >
> > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > index 916479c9c72c..81ecb29c88f1 100644
> > > --- a/drivers/virtio/virtio_ring.c
> > > +++ b/drivers/virtio/virtio_ring.c
> > > @@ -3175,4 +3175,80 @@ int virtqueue_dma_mapping_error(struct virtqueue *_vq, dma_addr_t addr)
> > >  }
> > >  EXPORT_SYMBOL_GPL(virtqueue_dma_mapping_error);
> > >
> > > +/**
> > > + * virtqueue_dma_need_sync - check if a dma address needs sync
> > > + * @_vq: the struct virtqueue we're talking about.
> > > + * @addr: DMA address
> > > + *
> > > + * Check if the dma address mapped by the virtqueue_dma_map_* APIs needs to be
> > > + * synchronized
> > > + *
> > > + * return bool
> > > + */
> > > +bool virtqueue_dma_need_sync(struct virtqueue *_vq, dma_addr_t addr)
> > > +{
> > > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > > +
> > > +	if (!vq->use_dma_api)
> > > +		return false;
> > > +
> > > +	return dma_need_sync(vring_dma_dev(vq), addr);
> > > +}
> > > +EXPORT_SYMBOL_GPL(virtqueue_dma_need_sync);
> > > +
> > > +/**
> > > + * virtqueue_dma_sync_single_range_for_cpu - dma sync for cpu
> > > + * @_vq: the struct virtqueue we're talking about.
> > > + * @addr: DMA address
> > > + * @offset: DMA address offset
> > > + * @size: buf size for sync
> > > + * @dir: DMA direction
> > > + *
> > > + * Before calling this function, use virtqueue_dma_need_sync() to confirm that
> > > + * the DMA address really needs to be synchronized
> > > + *
> > > + */
> > > +void virtqueue_dma_sync_single_range_for_cpu(struct virtqueue *_vq,
> > > +                                      dma_addr_t addr,
> > > +                                      unsigned long offset, size_t size,
> > > +                                      enum dma_data_direction dir)
> > > +{
> > > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > > +	struct device *dev = vring_dma_dev(vq);
> > > +
> > > +	if (!vq->use_dma_api)
> > > +		return;
> > > +
> > > +	dma_sync_single_range_for_cpu(dev, addr, offset, size,
> > > +				      DMA_BIDIRECTIONAL);
> > > +}
> >
> >
> > Why did you use DMA_BIDIRECTIONAL here?
> > Why is "dir" ignored?
> 
> This is a mistake.
> 
> I see Jason has a fix patch.

The one he sent in response to the bug report? It's incomplete, though.

> How can I help?
> 
> Thanks.

Develop a full patch with a commit log, an explanation of what the result of
the bug is, a Fixes tag, etc.; test it in some environment where dir makes a
difference, and post it.


> 
> >
> >
> > > +EXPORT_SYMBOL_GPL(virtqueue_dma_sync_single_range_for_cpu);
> > > +
> > > +/**
> > > + * virtqueue_dma_sync_single_range_for_device - dma sync for device
> > > + * @_vq: the struct virtqueue we're talking about.
> > > + * @addr: DMA address
> > > + * @offset: DMA address offset
> > > + * @size: buf size for sync
> > > + * @dir: DMA direction
> > > + *
> > > + * Before calling this function, use virtqueue_dma_need_sync() to confirm that
> > > + * the DMA address really needs to be synchronized
> > > + */
> > > +void virtqueue_dma_sync_single_range_for_device(struct virtqueue *_vq,
> > > +                                         dma_addr_t addr,
> > > +                                         unsigned long offset, size_t size,
> > > +                                         enum dma_data_direction dir)
> > > +{
> > > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > > +	struct device *dev = vring_dma_dev(vq);
> > > +
> > > +	if (!vq->use_dma_api)
> > > +		return;
> > > +
> > > +	dma_sync_single_range_for_device(dev, addr, offset, size,
> > > +					 DMA_BIDIRECTIONAL);
> > > +}
> > > +EXPORT_SYMBOL_GPL(virtqueue_dma_sync_single_range_for_device);
> > > +
> > >  MODULE_LICENSE("GPL");
> >
> > same question here.
> >
> > > diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> > > index 79e3c74391e0..1311a7fbe675 100644
> > > --- a/include/linux/virtio.h
> > > +++ b/include/linux/virtio.h
> > > @@ -220,4 +220,12 @@ void virtqueue_dma_unmap_single_attrs(struct virtqueue *_vq, dma_addr_t addr,
> > >                                 size_t size, enum dma_data_direction dir,
> > >                                 unsigned long attrs);
> > >  int virtqueue_dma_mapping_error(struct virtqueue *_vq, dma_addr_t addr);
> > > +
> > > +bool virtqueue_dma_need_sync(struct virtqueue *_vq, dma_addr_t addr);
> > > +void virtqueue_dma_sync_single_range_for_cpu(struct virtqueue *_vq, dma_addr_t addr,
> > > +                                      unsigned long offset, size_t size,
> > > +                                      enum dma_data_direction dir);
> > > +void virtqueue_dma_sync_single_range_for_device(struct virtqueue *_vq, dma_addr_t addr,
> > > +                                         unsigned long offset, size_t size,
> > > +                                         enum dma_data_direction dir);
> > >  #endif /* _LINUX_VIRTIO_H */
> > > --
> > > 2.32.0.3.g01195cf9f
> >
> >


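To clarify the intended usage of these helpers for anyone following the
thread: callers pass the direction the buffer was actually mapped with,
e.g. DMA_FROM_DEVICE for a premapped receive buffer, which is where the
hard-coded DMA_BIDIRECTIONAL above diverges from what the caller asked for.
A rough illustration only (the vq/addr/buf/len names are made up for the
example, not taken from this patch):

	/* The buffer was premapped by the driver, e.g. with
	 * virtqueue_dma_map_single_attrs(vq, buf, len, DMA_FROM_DEVICE, 0).
	 * Before the CPU reads what the device wrote, sync the range back
	 * to the CPU using the same direction the mapping used.
	 */
	if (virtqueue_dma_need_sync(vq, addr))
		virtqueue_dma_sync_single_range_for_cpu(vq, addr, 0, len,
							DMA_FROM_DEVICE);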