On Fri, 17 Feb 2023 13:23:14 +0800, Jason Wang <[email protected]> wrote:
> On Thu, Feb 16, 2023 at 7:50 PM Xuan Zhuo <[email protected]> wrote:
> >
> > On Thu, 16 Feb 2023 13:27:00 +0800, Jason Wang <[email protected]> wrote:
> > > > On Tue, Feb 14, 2023 at 3:27 PM Xuan Zhuo <[email protected]> wrote:
> > > >
> > > > XDP socket (AF_XDP) is an excellent kernel-bypass network framework.
> > > > The zero-copy feature of xsk (XDP socket) needs to be supported by the
> > > > driver, and the performance of zero copy is very good.
> > > >
> > > > ENV: Qemu with vhost.
> > > >
> > > >                    vhost cpu | Guest APP CPU | Guest Softirq CPU | PPS
> > > > -----------------------------|---------------|-------------------|------------
> > > > xmit by sockperf:     90%    |   100%        |                   |   318967
> > > > xmit by xsk:          100%   |   30%         |   33%             |  1192064
> > >
> > > What's the setup of this test?
> > >
> > > CPU model/frequency, packet size, zerocopy enabled or not.
> >
> > Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
> >
> > zerocopy: enabled
> >
> > size: 64
> >
> >
> > >
> > > (I remember I could get better performance with my old laptop through
> > > pktgen (about 2 Mpps).)
> >
> > Let's just compare with sockperf.
> >
> > The result of the test on Alibaba Cloud was 3.5M+ PPS at 60% CPU.
>
> Just to make sure I understand here, the above said:
>

sockperf: https://github.com/Mellanox/sockperf

That was probably my fault; I didn't make it clear.

sockperf uses the sendto() syscall to send UDP packets.
xsk sends UDP via AF_XDP; I wrote an app with AF_XDP for this test.
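
In case it is useful, the TX path of my app looks roughly like the code
below. It is only a minimal sketch against the libbpf/libxdp xsk helpers;
the socket/UMEM setup, error handling, and the code that builds the UDP
frame inside the UMEM are omitted:

#include <bpf/xsk.h>            /* <xdp/xsk.h> when building against libxdp */
#include <sys/socket.h>

/* Push one prepared UDP frame to the AF_XDP TX ring and kick the kernel.
 * frame_addr is the UMEM offset of a frame that already contains the
 * complete Ethernet/IP/UDP packet.
 */
static void xsk_send_one(struct xsk_ring_prod *tx, int xsk_fd,
                         __u64 frame_addr, __u32 frame_len)
{
        __u32 idx;

        if (xsk_ring_prod__reserve(tx, 1, &idx) != 1)
                return;                         /* TX ring full, retry later */

        xsk_ring_prod__tx_desc(tx, idx)->addr = frame_addr;
        xsk_ring_prod__tx_desc(tx, idx)->len  = frame_len;
        xsk_ring_prod__submit(tx, 1);

        /* With zero copy this sendto() is only a doorbell for the driver;
         * no packet data is copied, unlike sockperf's per-packet sendto()
         * through the whole UDP stack.
         */
        sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);
}

So the difference is mainly that sockperf pays the socket/UDP-stack cost on
every packet, while the AF_XDP app only updates the ring and issues one
syscall as a kick.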

Thanks.


>  xmit by sockperf:     90%    |   100%        |                  |  318967
>
> It's 0.3 Mpps; what's the difference between those two?
>
> Thanks
>
> >
> > Thanks.
> >
> >
> > >
> > > Thanks
> > >
> > > > recv by sockperf:     100%   |   68%         |   100%            |   692288
> > > > recv by xsk:          100%   |   33%         |   43%             |   771670
> > > >
> > > > Before implementing this function in virtio-net, we also have to make
> > > > the virtio core support these features:
> > > >
> > > > 1. virtio core support for premapped buffers
> > > > 2. virtio core support for per-queue reset
> > > > 3. new DMA APIs in the virtio core
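
To expand a little on the "premapped" part: the idea is that the driver does
the DMA mapping itself, and the virtio core then skips its own map/unmap on
the add path. A rough sketch of the intended usage follows; the function
names are taken from the patch subjects, but the exact signatures here are
guesses, so please check the patches for the real API:

static int post_rx_buf_premapped(struct virtio_device *vdev,
                                 struct virtqueue *vq, void *buf, u32 len)
{
        struct scatterlist sg;
        dma_addr_t addr;

        /* Map the buffer ourselves with the new virtio DMA API ... */
        addr = virtio_dma_map(&vdev->dev, buf, len, DMA_FROM_DEVICE);

        /* ... and hand the already-mapped buffer to the virtqueue, so
         * the virtio core does not map it again when adding it.
         */
        sg_init_one(&sg, buf, len);
        sg_dma_address(&sg) = addr;

        return virtqueue_add_inbuf_premapped(vq, &sg, 1, buf, NULL,
                                             GFP_ATOMIC);
}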
> > > >
> > > > Please review.
> > > >
> > > > Thanks.
> > > >
> > > > Xuan Zhuo (10):
> > > >   virtio_ring: split: refactor virtqueue_add_split() for premapped
> > > >   virtio_ring: packed: separate prepare code from
> > > >     virtqueue_add_indirect_packed()
> > > >   virtio_ring: packed: refactor virtqueue_add_packed() for premapped
> > > >   virtio_ring: split: introduce virtqueue_add_split_premapped()
> > > >   virtio_ring: packed: introduce virtqueue_add_packed_premapped()
> > > >   virtio_ring: introduce virtqueue_add_inbuf_premapped()
> > > >   virtio_ring: add api virtio_dma_map() for advanced dma
> > > >   virtio_ring: introduce dma sync api for virtio
> > > >   virtio_ring: correct the expression of the description of
> > > >     virtqueue_resize()
> > > >   virtio_ring: introduce virtqueue_reset()
> > > >
> > > >  drivers/virtio/virtio_ring.c | 792 ++++++++++++++++++++++++++++-------
> > > >  include/linux/virtio.h       |  29 ++
> > > >  2 files changed, 659 insertions(+), 162 deletions(-)
> > > >
> > > > --
> > > > 2.32.0.3.g01195cf9f
> > > >
> > >
> >
>