On Mon, Dec 30, 2024 at 5:51 PM Chia-Yu Chang (Nokia)
wrote:
>
> >From: Jason Wang
> >Sent: Monday, December 30, 2024 8:52 AM
> >To: Chia-Yu Chang (Nokia)
> >Cc: netdev@vger.kernel.org; dsah...@gmail.com; da...@davemloft.net;
> >eduma...@google.com; dsah
On Sat, Dec 28, 2024 at 3:13 AM wrote:
>
> From: Chia-Yu Chang
>
> Unlike RFC 3168 ECN, accurate ECN uses the CWR flag as part of the ACE
> field to count new packets with CE mark; however, it will be corrupted
> by the RFC 3168 ECN-aware TSO. Therefore, fallback shall be applied by
> setting NETI
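
A minimal sketch of the ACE accounting the fragment above alludes to, assuming the usual AccECN layout (the AE, CWR and ECE header bits together form a 3-bit counter of CE-marked packets); the helper name is illustrative and not from the patch:

static inline u32 accecn_ace_delta(u32 ace_now, u32 ace_prev)
{
	/* The ACE counter wraps modulo 8, so the delta must be masked;
	 * treating CWR as a standalone flag (as RFC 3168-aware TSO does)
	 * corrupts this count. */
	return (ace_now - ace_prev) & 0x7;
}
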
On Wed, Nov 13, 2024 at 5:19 AM Ilpo Järvinen wrote:
>
> Adding a few virtio people. Please see the virtio spec/flag question
> below.
>
> On Tue, 12 Nov 2024, Chia-Yu Chang (Nokia) wrote:
>
> > >From: Ilpo Järvinen
> > >Sent: Thursday, November 7, 2024 8:28 PM
> > >To: Eric Dumazet
> > >Cc: Chi
buffer to record this info.
> >
> > Signed-off-by: Xuan Zhuo
>
>
> Hi, Jason
>
> This also needs a review.
Sorry, I missed this.
Acked-by: Jason Wang
Thanks
)((unsigned long)ptr & ~VIRTIO_ORPHAN_FLAG);
> }
>
> +static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
> +{
> + sg_dma_address(sg) = addr;
> + sg_dma_len(sg) = len;
> +}
> +
In the future, we need to consider hiding those in the core.
Anyhow
Acked-by: Jason Wang
Thanks
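
A short usage sketch for the helper quoted above, under the assumption that the buffer was already DMA-mapped by the driver (variable names are illustrative):

	struct scatterlist sg;

	sg_init_table(&sg, 1);
	/* Premapped path: store the DMA address/length directly instead of
	 * a page, so the core does not map the buffer again. */
	sg_fill_dma(&sg, premapped_addr, premapped_len);
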
void *data,
> gfp_t gfp);
>
> Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
Thanks
On Thu, Nov 7, 2024 at 4:55 PM Xuan Zhuo wrote:
>
> The subsequent commit needs to know whether every indirect buffer is
> premapped or not. So we need to introduce an extra struct for every
> indirect buffer to record this info.
>
> Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
Thanks
On Wed, Nov 6, 2024 at 3:42 PM Michael S. Tsirkin wrote:
>
> On Wed, Nov 06, 2024 at 09:44:39AM +0800, Jason Wang wrote:
> > > > > while (vq->split.vring.desc[i].flags & nextflag) {
> > > > > - vring_unmap_one_split(vq, i);
> &g
On Tue, Nov 5, 2024 at 3:23 PM Xuan Zhuo wrote:
>
> On Tue, 5 Nov 2024 11:23:50 +0800, Jason Wang wrote:
> > On Wed, Oct 30, 2024 at 4:25 PM Xuan Zhuo
> > wrote:
> > >
> > > virtio-net rq submits premapped per-buffer by setting sg page to NULL;
On Tue, Nov 5, 2024 at 2:53 PM Xuan Zhuo wrote:
>
> On Tue, 5 Nov 2024 11:42:09 +0800, Jason Wang wrote:
> > On Wed, Oct 30, 2024 at 4:25 PM Xuan Zhuo
> > wrote:
> > >
> > > The subsequent commit needs to know whether every indirect buffer is
> > >
d to prevent misuse of these APIs.
>
> Tested-by: Darren Kenny
> Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
Thanks
and fix this
> bug directly.
>
> Here, when the frag size is not enough, we reduce the buffer len to fix
> this problem.
>
> Reported-by: "Si-Wei Liu"
> Tested-by: Darren Kenny
> Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
Thanks
Hi Jakub:
On Tue, Nov 5, 2024 at 10:46 AM Jakub Kicinski wrote:
>
> On Tue, 29 Oct 2024 16:46:11 +0800 Xuan Zhuo wrote:
> > In the last Linux version, we disabled this feature to fix the
> > regression[1].
> >
> > The patch set tries to fix the problem and re-enable it.
> >
> > More info:
> > http
On Wed, Oct 30, 2024 at 4:25 PM Xuan Zhuo wrote:
>
> Now, this API is useless. Remove it.
>
> Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
Thanks
On Wed, Oct 30, 2024 at 4:25 PM Xuan Zhuo wrote:
>
> Two APIs are introduced to submit premapped per-buffers.
>
> int virtqueue_add_inbuf_premapped(struct virtqueue *vq,
> struct scatterlist *sg, unsigned int num,
> void *data,
>
formation via sg,
so we can simply use dma_map_sg() in add_sgs() which allows various
optimizations in IOMMU layers.
>
> So we pass the new argument 'premapped' to indicate the buffers
> submitted to virtio are premapped in advance. Additionally,
> DMA unmap operations for thes
On Wed, Oct 30, 2024 at 4:25 PM Xuan Zhuo wrote:
>
> The subsequent commit needs to know whether every indirect buffer is
> premapped or not. So we need to introduce an extra struct for every
> indirect buffer to record this info.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/virtio/virtio_ring.c
On Wed, Oct 30, 2024 at 4:25 PM Xuan Zhuo wrote:
>
> virtio-net rq submits premapped per-buffer by setting sg page to NULL;
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 24 +---
> 1 file changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/ne
EM;
> }
>
> +static void virtnet_rq_set_premapped(struct virtnet_info *vi)
> +{
> + int i;
> +
> + /* disable for big mode */
> + if (vi->mode == VIRTNET_MODE_BIG)
> + return;
Nitpick: I would like such a check to be done at the
On Mon, Oct 14, 2024 at 11:12 AM Xuan Zhuo wrote:
>
> Now, if we want to determine the rx work mode, we have to use code like this:
>
> 1. merge mode: vi->mergeable_rx_bufs
> 2. big mode: vi->big_packets && !vi->mergeable_rx_bufs
> 3. small: !vi->big_packets && !vi->mergeable_rx_bufs
>
> This is inc
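
A rough sketch of the mode enum this series moves to, mirroring the VIRTNET_MODE_BIG check quoted earlier; everything beyond that name is an assumption rather than the actual patch:

enum virtnet_mode {
	VIRTNET_MODE_SMALL,
	VIRTNET_MODE_BIG,
	VIRTNET_MODE_MERGE,
};

static enum virtnet_mode virtnet_rx_mode(const struct virtnet_info *vi)
{
	/* Same predicates as the three checks listed above, folded into
	 * one place so callers no longer open-code them. */
	if (vi->mergeable_rx_bufs)
		return VIRTNET_MODE_MERGE;
	if (vi->big_packets)
		return VIRTNET_MODE_BIG;
	return VIRTNET_MODE_SMALL;
}
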
On Mon, Oct 14, 2024 at 11:12 AM Xuan Zhuo wrote:
>
> Now, the premapped mode can be enabled unconditionally.
>
> So we can remove the failover code for merge and small mode.
>
> Signed-off-by: Xuan Zhuo
Let's be more verbose here. For example, the virtnet_rq_xxx() helper
would only be used if t
On Mon, Oct 14, 2024 at 11:12 AM Xuan Zhuo wrote:
>
> When the frag just got a page, it may lead to a regression on VM.
> Especially if the sysctl net.core.high_order_alloc_disable value is 1,
> the frag always gets a page when doing refill.
>
> Which could see reliable crashes or scp failure (scp
sq - vi->sq),
> - false, &stats);
> + virtnet_free_old_xmit(sq, netdev_get_tx_queue(dev, sq - vi->sq),
> + false, &stats);
>
> for (i = 0; i < n; i++) {
> struct xdp_frame *xdpf = frames[i];
> @@ -2961,6 +3097,7 @@ static int virtnet_poll_tx(struct napi_struct *napi,
> int budget)
> struct virtnet_info *vi = sq->vq->vdev->priv;
> unsigned int index = vq2txq(sq->vq);
> struct netdev_queue *txq;
> + bool xsk_busy = false;
> int opaque;
> bool done;
>
> @@ -2973,7 +3110,11 @@ static int virtnet_poll_tx(struct napi_struct *napi,
> int budget)
> txq = netdev_get_tx_queue(vi->dev, index);
> __netif_tx_lock(txq, raw_smp_processor_id());
> virtqueue_disable_cb(sq->vq);
> - free_old_xmit(sq, txq, !!budget);
> +
> + if (sq->xsk_pool)
> + xsk_busy = virtnet_xsk_xmit(sq, sq->xsk_pool, budget);
I think we need a better name than "xsk_busy"; it looks like it means we
exceed the quota. Or just return the number of buffers received and
let the caller judge.
Otherwise looks good.
With this fixed.
Acked-by: Jason Wang
Thanks
On Tue, Sep 24, 2024 at 9:32 AM Xuan Zhuo wrote:
>
> The current configuration sets the virtqueue (vq) to premapped mode,
> implying that all buffers submitted to this queue must be mapped ahead
> of time. This presents a challenge for the virtnet send queue (sq): the
> virtnet driver would be req
>
> + err = virtnet_sq_bind_xsk_pool(vi, sq, pool);
> + if (err)
> + goto err_sq;
> +
> + /* Now, we do not support tx offset, so all the tx virtnet hdr is
> zero.
What did you mean by "tx offset" here? (Or I don't see the connection
with vnet hdr).
Anyhow the patch looks good.
Acked-by: Jason Wang
Thanks
unsigned long)ptr | VIRTIO_XDP_FLAG);
> -}
> +/* We use the last two bits of the pointer to distinguish the xmit type. */
> +#define VIRTNET_XMIT_TYPE_MASK (BIT(0) | BIT(1))
>
> -static struct xdp_frame *ptr_to_xdp(void *ptr)
> +static enum virtnet_xmit_type virtnet_xmit_ptr_strip(void **ptr)
Nit: not a native speaker but I think something like pack/unpack might
be better.
With those changes.
Acked-by: Jason Wang
Thanks
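
A hedged sketch of the pack/unpack naming suggested above; the two-low-bit layout follows the quoted VIRTNET_XMIT_TYPE_MASK, while the helper bodies are illustrative rather than the actual diff:

static void *virtnet_xmit_ptr_pack(void *ptr, enum virtnet_xmit_type type)
{
	/* The pointer is at least 4-byte aligned, so the two low bits are
	 * free to carry the xmit type. */
	return (void *)((unsigned long)ptr | type);
}

static enum virtnet_xmit_type virtnet_xmit_ptr_unpack(void **ptr)
{
	unsigned long p = (unsigned long)*ptr;

	/* Strip the type bits before the pointer is used as skb/xdp_frame. */
	*ptr = (void *)(p & ~VIRTNET_XMIT_TYPE_MASK);
	return p & VIRTNET_XMIT_TYPE_MASK;
}
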
On Tue, Sep 24, 2024 at 9:32 AM Xuan Zhuo wrote:
>
> The subsequent commit needs to know whether every indirect buffer is
> premapped or not. So we need to introduce an extra struct for every
> indirect buffer to record this info.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/virtio/virtio_ring.c
On Thu, Sep 12, 2024 at 3:43 PM Xuan Zhuo wrote:
>
> On Wed, 11 Sep 2024 11:54:25 +0800, Jason Wang wrote:
> > On Tue, Aug 20, 2024 at 3:33 PM Xuan Zhuo
> > wrote:
> > >
> > > The current configuration sets the virtqueue (vq) to premapped mode,
> > >
On Thu, Sep 12, 2024 at 3:54 PM Xuan Zhuo wrote:
>
> On Wed, 11 Sep 2024 12:04:16 +0800, Jason Wang wrote:
> > On Tue, Aug 20, 2024 at 3:33 PM Xuan Zhuo
> > wrote:
> > >
> > > Because the af-xdp will introduce a new xmit type, so I refactor the
> > &
On Thu, Sep 12, 2024 at 4:50 PM Xuan Zhuo wrote:
>
> On Wed, 11 Sep 2024 12:31:32 +0800, Jason Wang wrote:
> > On Tue, Aug 20, 2024 at 3:33 PM Xuan Zhuo
> > wrote:
> > >
> > > The driver's tx napi is very important for XSK. It is responsible for
&g
On Tue, Aug 20, 2024 at 3:33 PM Xuan Zhuo wrote:
>
> Now, we supported AF_XDP(xsk). Add NETDEV_XDP_ACT_XSK_ZEROCOPY to
Should be "support".
> xdp_features.
>
> Signed-off-by: Xuan Zhuo
Other than this.
Acked-by: Jason Wang
Thanks
On Tue, Aug 20, 2024 at 3:34 PM Xuan Zhuo wrote:
>
> virtnet_free_old_xmit distinguishes three types of ptr (skb, xdp frame, xsk
> buffer) by the last bits of the pointer.
>
> Signed-off-by: Xuan Zhuo
I suggest squashing this into the previous patch which looks more
logical and complete.
Thanks
On Tue, Aug 20, 2024 at 3:33 PM Xuan Zhuo wrote:
>
> The driver's tx napi is very important for XSK. It is responsible for
> obtaining data from the XSK queue and sending it out.
>
> At the beginning, we need to trigger tx napi.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 127
On Tue, Aug 20, 2024 at 3:33 PM Xuan Zhuo wrote:
>
> This patch implements the logic of bind/unbind xsk pool to sq and rq.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 54
> 1 file changed, 54 insertions(+)
>
> diff --git a/drivers/net/v
On Tue, Aug 20, 2024 at 3:33 PM Xuan Zhuo wrote:
>
> Because the af-xdp will introduce a new xmit type, I refactor the
> xmit type mechanism first.
>
> We use the last two bits of the pointer to distinguish the xmit type,
> so we can distinguish four xmit types. Now we have three types: skb,
>
work?
A quick glance told me AF_XDP seems to be safe as it uses
pin_user_pages(), but we need to check other possibilities.
Or we need to fall back to our previous idea, having new APIs.
> Additionally, DMA unmap operations
> for this buffer will be bypassed.
>
> Suggested-by: Ja
On Tue, Aug 20, 2024 at 3:33 PM Xuan Zhuo wrote:
>
> 1. this commit hardens dma unmap for indirect
I think we need to explain why we need such hardening. For example
indirect uses stream mapping, which is read-only from the device. So it
looks to me like it doesn't require hardening by itself.
> 2
On Mon, Sep 9, 2024 at 4:50 PM Xuan Zhuo wrote:
>
> On Mon, 9 Sep 2024 16:38:16 +0800, Jason Wang wrote:
> > On Fri, Sep 6, 2024 at 5:32 PM Xuan Zhuo wrote:
> > >
> > > On Fri, 6 Sep 2024 05:08:56 -0400, "Michael S. Tsirkin"
> > > wrote:
>
On Mon, Sep 9, 2024 at 4:52 PM Xuan Zhuo wrote:
>
> On Mon, 9 Sep 2024 16:47:02 +0800, Jason Wang wrote:
> > On Mon, Sep 9, 2024 at 11:16 AM Xuan Zhuo
> > wrote:
> > >
> > > On Sun, 8 Sep 2024 15:40:32 -0400, "Michael S. Tsirkin"
> > >
On Mon, Sep 9, 2024 at 11:16 AM Xuan Zhuo wrote:
>
> On Sun, 8 Sep 2024 15:40:32 -0400, "Michael S. Tsirkin"
> wrote:
> > On Tue, Aug 20, 2024 at 03:19:13PM +0800, Xuan Zhuo wrote:
> > > leads to regression on VM with the sysctl value of:
> > >
> > > - net.core.high_order_alloc_disable=1
> > >
>
On Fri, Sep 6, 2024 at 5:32 PM Xuan Zhuo wrote:
>
> On Fri, 6 Sep 2024 05:08:56 -0400, "Michael S. Tsirkin"
> wrote:
> > On Fri, Sep 06, 2024 at 04:53:38PM +0800, Xuan Zhuo wrote:
> > > On Fri, 6 Sep 2024 04:43:29 -0400, "Michael S. Tsirkin"
> > > wrote:
> > > > On Tue, Aug 20, 2024 at 03:19:1
On Wed, Aug 28, 2024 at 7:21 PM Xuan Zhuo wrote:
>
> On Tue, 27 Aug 2024 11:38:45 +0800, Jason Wang wrote:
> > On Tue, Aug 20, 2024 at 3:19 PM Xuan Zhuo
> > wrote:
> > >
> > > leads to regression on VM with the sysctl value of:
> > >
On Tue, Aug 20, 2024 at 3:19 PM Xuan Zhuo wrote:
>
> leads to regression on VM with the sysctl value of:
>
> - net.core.high_order_alloc_disable=1
>
> which could see reliable crashes or scp failure (scp a file 100M in size
> to VM):
>
> The issue is that the virtnet_rq_dma takes up 16 bytes at th
_completed().
> >
> > Move virtnet_napi_tx_enable(), which resets BQL counters, before RX
> > napi enable to avoid the issue.
> >
> > Reported-by: Marek Szyprowski
> > Closes:
> > https://lore.kernel.org/netdev/e632e378-d019-4de7-8f13-07c572ab3...@samsung.com/
> > Fixes: c8bd1f7f3e61 ("virtio_net: add support for Byte Queue Limits")
> > Tested-by: Marek Szyprowski
> > Signed-off-by: Jiri Pirko
>
> Acked-by: Michael S. Tsirkin
Acked-by: Jason Wang
Thanks
On Tue, Aug 13, 2024 at 10:40 PM Jakub Kicinski wrote:
>
> On Tue, 13 Aug 2024 11:43:43 +0800 Jason Wang wrote:
> > Hello netdev maintainers.
> >
> > Could we get this series merged?
>
> Repost it with the Fixes tag correctly included.
Ok, I've posted a new version.
Thanks
>
On Wed, Aug 7, 2024 at 9:51 PM Michael S. Tsirkin wrote:
>
> On Tue, Aug 06, 2024 at 10:22:20AM +0800, Jason Wang wrote:
> > Hi All:
> >
> > This series tries to synchronize the operstate with the admin state
> > which allows the lower virtio-net to propagate th
On Fri, Aug 2, 2024 at 6:00 PM Louis Peens wrote:
>
> From: Kyle Xu
>
> Add a new kernel module ‘nfp_vdpa’ for the NFP vDPA networking driver.
>
> The vDPA driver initializes the necessary resources on the VF and the
> data path will be offloaded. It also implements the ‘vdpa_config_ops’
> and th
default qlen 1000
link/ether b2:a9:c5:04:da:53 brd ff:ff:ff:ff:ff:ff
Cc: Venkat Venkatsubra
Cc: Gia-Khanh Nguyen
Signed-off-by: Jason Wang
---
drivers/net/virtio_net.c | 78 +---
1 file changed, 50 insertions(+), 28 deletions(-)
diff --git a/drivers/net
We calculate guest offloads during probe without the protection of
rtnl_lock. This leads to a race between probe and ndo_set_features. Fix
this by moving the calculation under the rtnl_lock.
Signed-off-by: Jason Wang
---
drivers/net/virtio_net.c | 10 +-
1 file changed, 5 insertions(+), 5
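
A minimal sketch of the fix described in this changelog, not the actual diff; virtnet_calc_guest_offloads() is a hypothetical stand-in for the feature-bit calculation done in probe:

static void virtnet_init_guest_offloads(struct virtnet_info *vi)
{
	/* Serialize with ndo_set_features, which also runs under rtnl. */
	rtnl_lock();
	vi->guest_offloads = virtnet_calc_guest_offloads(vi);
	rtnl_unlock();
}
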
driver. It is set to false by default to
keep the current semantics so we don't need to change any drivers.
The first user for this would be virtio-net.
Cc: Venkat Venkatsubra
Cc: Gia-Khanh Nguyen
Signed-off-by: Jason Wang
---
drivers/virtio/virtio.c
The following patch will allow the config interrupt to be disabled by a
specific driver via another boolean. So this patch renames
virtio_config_enabled and relevant helpers to
virtio_config_core_enabled.
Cc: Venkat Venkatsubra
Cc: Gia-Khanh Nguyen
Signed-off-by: Jason Wang
---
drivers/virtio
race with
ndo_open()
Changes since V2:
- introduce config_driver_disabled and helpers
- schedule config change work unconditionally
Thanks
Jason Wang (4):
virtio: rename virtio_config_enabled to virtio_config_core_enabled
virtio: allow driver to disable the configure change notification
t; So we add the feature negotiation check to
> virtnet_send_{r,t}x_ctrl_coal_vq_cmd as a basis for the next bugfix patch.
>
> Suggested-by: Michael S. Tsirkin
> Signed-off-by: Heng Qi
> ---
Acked-by: Jason Wang
Thanks
On Wed, Jul 31, 2024 at 9:20 AM Jakub Kicinski wrote:
>
> On Mon, 29 Jul 2024 20:47:55 +0800 Heng Qi wrote:
> > Subject: [PATCH net] virtio_net: Avoid sending unnecessary vq coalescing
> > commands
>
> subject currently reads like this is an optimization, could you
> rephrase?
It might be "virti
The driver must not send vq notification coalescing commands if
> VIRTIO_NET_F_VQ_NOTF_COAL is not negotiated. This limitation of course
> applies to vq resize.
>
> Fixes: f61fe5f081cf ("virtio-net: fix the vq coalescing setting for vq
> resize")
> Signed-off-by: Heng Qi
> ---
Acked-by: Jason Wang
Thanks
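
A hedged sketch of the negotiation guard being discussed (helper name illustrative); the point is simply to refuse per-vq coalescing commands when the feature was not negotiated:

static int virtnet_check_vq_coal(struct virtnet_info *vi)
{
	if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL))
		return -EOPNOTSUPP;
	return 0;
}
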
On Tue, Jul 16, 2024 at 2:46 PM Xuan Zhuo wrote:
>
> ## AF_XDP
>
> XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero
> copy feature of xsk (XDP socket) needs to be supported by the driver. The
> performance of zero copy is very good. mlx5 and Intel ixgbe already support
On Mon, Jul 8, 2024 at 7:25 PM Xuan Zhuo wrote:
>
> Implement the logic of filling rq with XSK buffers.
>
> Signed-off-by: Xuan Zhuo
> ---
Acked-by: Jason Wang
Thanks
On Fri, Jul 5, 2024 at 3:37 PM Xuan Zhuo wrote:
>
> Support AF-XDP for merge mode.
>
> Signed-off-by: Xuan Zhuo
> ---
>
Acked-by: Jason Wang
Thanks
On Mon, Jul 8, 2024 at 3:47 PM Xuan Zhuo wrote:
>
> On Mon, 8 Jul 2024 15:00:50 +0800, Jason Wang wrote:
> > On Fri, Jul 5, 2024 at 3:38 PM Xuan Zhuo wrote:
> > >
> > > In the process:
> > > 1. We may need to copy data to create skb for XDP_PASS.
> >
On Fri, Jul 5, 2024 at 3:38 PM Xuan Zhuo wrote:
>
> In the process:
> 1. We may need to copy data to create skb for XDP_PASS.
> 2. We may need to call xsk_buff_free() to release the buffer.
> 3. The handle for xdp_buff is different from the buffer.
>
> If we pushed this logic into existing receiv
On Fri, Jul 5, 2024 at 3:37 PM Xuan Zhuo wrote:
>
> Implement the logic of filling rq with XSK buffers.
>
> Signed-off-by: Xuan Zhuo
> ---
>
> v7:
>1. some small fixes
>
> drivers/net/virtio_net.c | 70 +---
> 1 file changed, 66 insertions(+), 4 deletions(
+* may use one buffer to receive from the rx and reuse this buffer to
> +* send via the tx. So the dma dev of sq and rq must be the same one.
> + *
> +* But vq->dma_dev allows every vq to have its respective dma dev. So I
> +* check that the dma dev of the rq and sq is the same dev.
> +*/
> + if (virtqueue_dma_dev(rq->vq) != virtqueue_dma_dev(sq->vq))
> + return -EPERM;
I think -EINVAL is better.
> +
> + dma_dev = virtqueue_dma_dev(rq->vq);
> + if (!dma_dev)
> + return -EPERM;
-EINVAL seems to be better.
With those fixed.
Acked-by: Jason Wang
Thanks
here we remove the
> VIRTIO_XDP_HEADROOM, and use XDP_PACKET_HEADROOM to replace it.
>
> Signed-off-by: Xuan Zhuo
Acked-by: Jason Wang
Thanks
On Mon, Jul 1, 2024 at 11:20 AM Jason Wang wrote:
>
> On Fri, Jun 28, 2024 at 1:48 PM Xuan Zhuo wrote:
> >
> > On Fri, 28 Jun 2024 10:19:41 +0800, Jason Wang wrote:
> > > On Tue, Jun 18, 2024 at 3:57 PM Xuan Zhuo
> > > wrote:
> > > >
> > &
On Fri, Jun 28, 2024 at 1:48 PM Xuan Zhuo wrote:
>
> On Fri, 28 Jun 2024 10:19:41 +0800, Jason Wang wrote:
> > On Tue, Jun 18, 2024 at 3:57 PM Xuan Zhuo
> > wrote:
> > >
> > > In the process:
> > > 1. We may need to copy data to create s
On Fri, Jun 28, 2024 at 1:44 PM Xuan Zhuo wrote:
>
> On Fri, 28 Jun 2024 10:19:37 +0800, Jason Wang wrote:
> > On Tue, Jun 18, 2024 at 3:57 PM Xuan Zhuo
> > wrote:
> > >
> > > Implement the logic of filling rq with XSK buffers.
> > >
> > >
On Tue, Jun 18, 2024 at 3:57 PM Xuan Zhuo wrote:
>
> Release the xsk buffer when the queue is being released or
> resized.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 5 +
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/
On Tue, Jun 18, 2024 at 3:57 PM Xuan Zhuo wrote:
>
> Support AF-XDP for merge mode.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 139 +++
> 1 file changed, 139 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.
On Tue, Jun 18, 2024 at 3:57 PM Xuan Zhuo wrote:
>
> In the process:
> 1. We may need to copy data to create skb for XDP_PASS.
> 2. We may need to call xsk_buff_free() to release the buffer.
> 3. The handle for xdp_buff is different from the buffer.
>
> If we pushed this logic into existing recei
On Tue, Jun 18, 2024 at 3:57 PM Xuan Zhuo wrote:
>
> Implement the logic of filling rq with XSK buffers.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 68 ++--
> 1 file changed, 66 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/vi
On Tue, Jun 18, 2024 at 3:57 PM Xuan Zhuo wrote:
>
> This patch implements the logic of bind/unbind xsk pool to rq.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 133 +++
> 1 file changed, 133 insertions(+)
>
> diff --git a/drivers/net/virtio_
On Thu, Jun 20, 2024 at 6:12 PM Michael S. Tsirkin wrote:
>
> On Thu, Jun 20, 2024 at 06:10:51AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Jun 20, 2024 at 05:53:15PM +0800, Heng Qi wrote:
> > > On Thu, 20 Jun 2024 16:26:05 +0800, Jason Wang
> > > wrote:
>
On Thu, Jun 20, 2024 at 4:32 PM Michael S. Tsirkin wrote:
>
> On Thu, Jun 20, 2024 at 03:29:15PM +0800, Heng Qi wrote:
> > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"
> > wrote:
> > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > > @@ -5312,7 +5315,7 @@ static int v
On Tue, Jun 18, 2024 at 11:17 AM Heng Qi wrote:
>
> On Tue, 18 Jun 2024 11:10:26 +0800, Jason Wang wrote:
> > On Mon, Jun 17, 2024 at 9:15 PM Heng Qi wrote:
> > >
> > > The XDP program can't correctly handle partially checksummed
> > > packets, b
On Wed, Jun 19, 2024 at 11:44 PM Heng Qi wrote:
>
>
> On 2024/6/19 at 11:08 PM, Jakub Kicinski wrote:
> > On Wed, 19 Jun 2024 10:02:58 +0800 Heng Qi wrote:
> Currently we do not allow RXCSUM to be disabled.
> >>> You don't have to disable checksumming in the device.
> >> Yes, it is up to the device it
On Thu, Jun 20, 2024 at 4:21 PM Jason Wang wrote:
>
> On Thu, Jun 20, 2024 at 3:35 PM Heng Qi wrote:
> >
> > On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"
> > wrote:
> > > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrot
On Thu, Jun 20, 2024 at 3:35 PM Heng Qi wrote:
>
> On Wed, 19 Jun 2024 17:19:12 -0400, "Michael S. Tsirkin"
> wrote:
> > On Thu, Jun 20, 2024 at 12:19:05AM +0800, Heng Qi wrote:
> > > @@ -5312,7 +5315,7 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> > >
> > > /* Parameters for con
On Wed, Jun 19, 2024 at 10:56 AM Li RongQing wrote:
>
> This place is fetching the stats, so u64_stats_fetch_begin
> and u64_stats_fetch_retry should be used
>
> Fixes: 6208799553a8 ("virtio-net: support rx netdim")
> Signed-off-by: Li RongQing
Acked-by: Jason Wang
Thanks
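
A minimal sketch of the reader-side pattern the fix applies; the stats struct and field names are illustrative, and only the u64_stats_fetch_begin()/u64_stats_fetch_retry() loop is the point:

static u64 virtnet_read_tx_bytes(const struct virtnet_sq_stats *stats)
{
	unsigned int start;
	u64 bytes;

	/* Retry until a consistent snapshot of the 64-bit counter is read. */
	do {
		start = u64_stats_fetch_begin(&stats->syncp);
		bytes = u64_stats_read(&stats->bytes);
	} while (u64_stats_fetch_retry(&stats->syncp, start));

	return bytes;
}
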
On Mon, Jun 17, 2024 at 9:15 PM Heng Qi wrote:
>
> The XDP program can't correctly handle partially checksummed
> packets, but works fine with fully checksummed packets.
Not sure this is true; if I am not wrong, XDP can try to calculate the checksum.
Thanks
> If the
> device has already validated
CSUM feature if GUEST_CSUM is
> available")
> Signed-off-by: Heng Qi
> ---
Acked-by: Jason Wang
(Should we manually do checksum if RXCSUM is disabled?)
Thanks
On Mon, Jun 17, 2024 at 4:08 PM Heng Qi wrote:
>
> On Mon, 17 Jun 2024 12:05:30 +0800, Jason Wang wrote:
> > On Thu, Jun 6, 2024 at 2:15 PM Heng Qi wrote:
> > >
> > > Currently, control vq handles commands synchronously,
> > > leading to increased del
On Tue, Jun 18, 2024 at 12:16 AM Michael S. Tsirkin wrote:
>
> On Mon, Jun 17, 2024 at 11:30:36AM +0200, Jiri Pirko wrote:
> > Mon, Jun 17, 2024 at 03:44:55AM CEST, jasow...@redhat.com wrote:
> > >On Mon, Jun 10, 2024 at 10:19 PM Michael S. Tsirkin
> > >wrote:
> > >>
> > >> On Fri, Jun 07, 2024
On Mon, Jun 17, 2024 at 3:54 PM Xuan Zhuo wrote:
>
> On Mon, 17 Jun 2024 14:30:07 +0800, Jason Wang wrote:
> > On Fri, Jun 14, 2024 at 2:40 PM Xuan Zhuo
> > wrote:
> > >
> > > The driver's tx napi is very important for XSK. It is responsible for
&g
On Mon, Jun 17, 2024 at 3:49 PM Xuan Zhuo wrote:
>
> On Mon, 17 Jun 2024 14:19:10 +0800, Jason Wang wrote:
> > On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo
> > wrote:
> > >
> > > This patch implements the logic of bind/unbind xsk pool to sq and rq.
On Mon, Jun 17, 2024 at 3:41 PM Xuan Zhuo wrote:
>
> On Mon, 17 Jun 2024 14:28:05 +0800, Jason Wang wrote:
> > On Mon, Jun 17, 2024 at 1:00 PM Jason Wang wrote:
> > >
> > > On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo
> > > wrote:
> > > >
> &
On Tue, Jun 18, 2024 at 8:57 AM Jason Wang wrote:
>
> On Mon, Jun 17, 2024 at 3:39 PM Xuan Zhuo wrote:
> >
> > On Mon, 17 Jun 2024 13:00:13 +0800, Jason Wang wrote:
> > > On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo
> > > wrote:
> > > >
> >
On Mon, Jun 17, 2024 at 3:39 PM Xuan Zhuo wrote:
>
> On Mon, 17 Jun 2024 13:00:13 +0800, Jason Wang wrote:
> > On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo
> > wrote:
> > >
> > > If the xsk is enabled, the xsk tx will share the send queue.
> > > But
On Mon, Jun 17, 2024 at 5:18 PM Jiri Pirko wrote:
>
> Mon, Jun 17, 2024 at 04:34:26AM CEST, jasow...@redhat.com wrote:
> >On Thu, Jun 13, 2024 at 1:09 AM Jiri Pirko wrote:
> >>
> >> From: Jiri Pirko
> >>
> >> Add support for Byte Queue Limits (BQL).
> >>
> >> Tested on qemu emulated virtio_net d
On Mon, Jun 17, 2024 at 5:30 PM Jiri Pirko wrote:
>
> Mon, Jun 17, 2024 at 03:44:55AM CEST, jasow...@redhat.com wrote:
> >On Mon, Jun 10, 2024 at 10:19 PM Michael S. Tsirkin wrote:
> >>
> >> On Fri, Jun 07, 2024 at 01:30:34PM +0200, Jiri Pirko wrote:
> >> > Fri, Jun 07, 2024 at 12:23:37PM CEST, m
On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo wrote:
>
> The virtnet_xdp_handler() is re-used. But
>
> 1. We need to copy data to create skb for XDP_PASS.
> 2. We need to call xsk_buff_free() to release the buffer.
> 3. The handle for xdp_buff is different.
>
> If we pushed this logic into existing r
by: Xuan Zhuo
Acked-by: Jason Wang
Thanks
> ---
> drivers/net/virtio_net.c | 24
> 1 file changed, 24 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 7e811f392768..9bfccef18e27 100644
> --- a/drivers/net/
On Fri, Jun 14, 2024 at 2:40 PM Xuan Zhuo wrote:
>
> The driver's tx napi is very important for XSK. It is responsible for
> obtaining data from the XSK queue and sending it out.
>
> At the beginning, we need to trigger tx napi.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 121
On Mon, Jun 17, 2024 at 1:00 PM Jason Wang wrote:
>
> On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo wrote:
> >
> > If the xsk is enabled, the xsk tx will share the send queue.
> > But the xsk requires that the send queue use the premapped mode.
> > So the send queue mu
On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo wrote:
>
> This patch implements the logic of bind/unbind xsk pool to sq and rq.
>
> Signed-off-by: Xuan Zhuo
> ---
> drivers/net/virtio_net.c | 201 ++-
> 1 file changed, 200 insertions(+), 1 deletion(-)
>
> diff --git
On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo wrote:
>
> If the xsk is enabled, the xsk tx will share the send queue.
> But the xsk requires that the send queue use the premapped mode.
> So the send queue must support premapped mode when it is bound to
> af-xdp.
>
> * virtnet_sq_set_premapped(sq, tru
On Fri, Jun 14, 2024 at 2:40 PM Xuan Zhuo wrote:
>
> Because the af-xdp and sq premapped mode will introduce two
> new xmit types, I refactor the xmit type mechanism first.
>
> We use the last two bits of the pointer to distinguish the xmit type,
> so we can distinguish four xmit types. Now we h
On Thu, Jun 6, 2024 at 2:15 PM Heng Qi wrote:
>
> Currently, control vq handles commands synchronously,
> leading to increased delays for dim commands during multi-queue
> VM configuration and directly impacting dim performance.
>
> To address this, we are shifting to asynchronous processing of
>
On Thu, Jun 13, 2024 at 1:09 AM Jiri Pirko wrote:
>
> From: Jiri Pirko
>
> Add support for Byte Queue Limits (BQL).
>
> Tested on qemu emulated virtio_net device with 1, 2 and 4 queues.
> Tested with fq_codel and pfifo_fast. Super netperf with 50 threads is
> running in background. Netperf TCP_RR
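
A rough sketch of the BQL hooks such a series wires up (illustrative, not the actual virtio_net diff): account bytes when a packet is queued and report completions so the stack can bound in-flight bytes per queue:

static void virtnet_bql_xmit(struct netdev_queue *txq, struct sk_buff *skb)
{
	netdev_tx_sent_queue(txq, skb->len);
}

static void virtnet_bql_complete(struct netdev_queue *txq,
				 unsigned int pkts, unsigned int bytes)
{
	netdev_tx_completed_queue(txq, pkts, bytes);
}
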
On Mon, Jun 10, 2024 at 10:19 PM Michael S. Tsirkin wrote:
>
> On Fri, Jun 07, 2024 at 01:30:34PM +0200, Jiri Pirko wrote:
> > Fri, Jun 07, 2024 at 12:23:37PM CEST, m...@redhat.com wrote:
> > >On Fri, Jun 07, 2024 at 11:57:37AM +0200, Jiri Pirko wrote:
> > >> >True. Personally, I would like to jus
t;On Thu, Jun 6, 2024 at 2:05 PM Michael S. Tsirkin wrote:
> >> >>
> >> >> On Thu, Jun 06, 2024 at 12:25:15PM +0800, Jason Wang wrote:
> >> >> > > If the codes of orphan mode don't have an impact when you enable
> >> >>