This patch adds a framework that allows GRO on tunneled packets.
Furthermore, it leverages that framework to provide GRO support for
VxLAN-encapsulated packets.
Supported VxLAN packets must have an outer IPv4 header, and contain an
inner TCP/IPv4 packet.
VxLAN GRO doesn't check if input packets h
VxLAN is one of the most widely used tunneling protocols. Providing GRO
support for VxLAN-encapsulated packets can benefit many per-packet
applications, like OVS.
This patchset is to support VxLAN GRO. The first patch cleans up the
current TCP/IPv4 GRO code in preparation for tunneled GRO support.
This patch updates TCP/IPv4 GRO as follows:
- remove the IP identification check when merging TCP/IPv4 packets
- extract common internal functions to support tunneled GRO
- rename internal functions and variables for clarity
- update comments
Signed-off-by: Jiayu Hu
---
lib/librte_gro/
Some machines appear to have buggy DMAR mappings. A typical mapping
error looks like:
DMAR: intel_iommu_map: iommu width (39) is not sufficient for the mapped
address (7fc4fa80)
DMAR: intel_iommu_map: iommu width (39) is not sufficient for the mapped
address (7fc4fa80)
DMAR:
On Fri, Nov 24, 2017 at 10:28:48AM +, Gonglei (Arei) wrote:
> Hi,
>
> Currently, the maximum number of supported memory regions for vhost-user
> backends is 8,
> and the maximum supported memory regions for vhost-net backends is determined
> by
" /sys/module/vhost/parameters/max_mem_regions".
-Original Message-
> Date: Fri, 24 Nov 2017 11:23:45 +
> From: liang.j...@intel.com
> To: jerin.ja...@caviumnetworks.com
> CC: dev@dpdk.org, harry.van.haa...@intel.com, bruce.richard...@intel.com,
> deepak.k.j...@intel.com, john.ge...@intel.com
> Subject: [RFC PATCH 0/7] RFC:EventDev O
From: Harish Patil
The qede firmware expects at least one Rx queue to be created; otherwise
it results in a firmware exception. So a check is added to prevent that.
Fixes: ec94dbc57362 ("qede: add base driver")
Cc: sta...@dpdk.org
Signed-off-by: Harish Patil
---
drivers/net/qede/qede_ethdev.c |
From: Shahed Shaikh
This patch refactors existing VXLAN tunneling offload code and enables
following features for GENEVE:
- destination UDP port configuration
- checksum offloads
- filter configuration
Signed-off-by: Shahed Shaikh
---
drivers/net/qede/qede_ethdev.c | 518 ++
From: Shahed Shaikh
Replace the rx_vxlan_port command with rx_tunnel_udp_port to support both
VXLAN and GENEVE UDP ports.
Signed-off-by: Shahed Shaikh
---
app/test-pmd/cmdline.c | 28 +++-
1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/app/test-pmd/cmdline
From: Harish Patil
Enable LRO feature to work with tunnel encapsulation protocols.
Fixes: 29540be7efce ("net/qede: support LRO/TSO offloads")
Cc: sta...@dpdk.org
Signed-off-by: Harish Patil
---
drivers/net/qede/qede_ethdev.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff -
From: Harish Patil
Add an optional cmdline parameter to enable LRO. This is useful for
testing the LRO feature, since Linux utilities like iperf can be run over
the KNI interface to generate consistent packet aggregation.
Signed-off-by: Harish Patil
---
examples/kni/main.c | 15 +--
1 fi
Hi,
This patch set adds enhancements and fixes for qede PMD.
Thanks!
Rasesh
Harish Patil (3):
net/qede: fix to enable LRO over tunnels
examples/kni: add optional parameter to enable LRO
net/qede: fix to reject config with no Rx queue
Shahed Shaikh (2):
app/testpmd: add configuration for
When performing live-migration with multiple queue pairs,
VHOST_USER_SET_LOG_BASE request is sent multiple times.
If packets are being processed by the PMD threads, it is
possible that they are setting bits in the dirty log map while
its region is being unmapped by the vhost-user protocol thread.
In VHOST_USER_SET_VRING_ADDR handling, don't invalidate the vring
if it has already been mapped and new addresses are identical.
When initiating live-migration, VHOST_USER_SET_VRING_ADDR is sent
again by QEMU, but the queues are enabled, so invalidating them
can result in NULL pointer de-referencing.
If VHOST_USER_SET_LOG_BASE request's message size is invalid,
the fd is leaked.
Fix this by closing the fd systematically as long as it is valid.
Fixes: 53af5b1e0ace ("vhost: fix leak of file descriptor")
Cc: sta...@dpdk.org
Signed-off-by: Maxime Coquelin
---
lib/librte_vhost/vhost_user.c | 13
Sorry, posted the wrong version. Only patch 2 changes in the v2:
the log_lock read-lock is moved after the VHOST_F_LOG_ALL
feature check, so that it does not degrade performance when
not doing live-migration.
This 3-patch series fixes issues met when doing live-migration
with multiple queue pairs.
This 3-patch series fixes issues met when doing live-migration
with multiple queue pairs.
Patch 1 is theoretical and unlikely to be reproduced in real use-cases,
so it may be safe not to pick it into stable trees.
Patch 2 reproduces quite often when lots of packets are being processed.
Easiest way
Hi all,
As you may already know, there is a page "News" on the web site.
The goal is to provide some highlights on projects or major achievements.
Unfortunately, there is not a lot of contributions to this section.
Remember that you can contribute by sending a patch to w...@dpdk.org.
I think it
Virtual machines hosted by Hyper-V/Azure platforms are fitted with
simplified virtual network devices named NetVSC that are used for fast
communication from VM to VM, from VM to hypervisor, and to the outside.
They appear as standard system netdevices to user-land applications, the
main difference bein
compressdev API
Signed-off-by: Trahe, Fiona
---
config/common_base |7 +
lib/Makefile |3 +
lib/librte_compressdev/Makefile| 54 +
lib/librte_compressdev/rte_comp.h | 565 +++
With the vast amounts of data being transported around networks and stored in
storage systems, reducing data size is becoming ever more important. There are
both software libraries and hardware devices available that provide
compression, but no common API. This RFC proposes a compression API for
From: "Artem V. Andreev"
Signed-off-by: Artem V. Andreev
Signed-off-by: Andrew Rybchenko
---
drivers/mempool/bucket/rte_mempool_bucket.c | 38 +
1 file changed, 38 insertions(+)
diff --git a/drivers/mempool/bucket/rte_mempool_bucket.c
b/drivers/mempool/bucket/rte_
The patch series adds a bucket mempool driver which allows allocation of
(both physically and virtually) contiguous blocks of objects, and adds
mempool API to do it. It is still capable of providing separate objects,
but it is definitely more heavy-weight than the ring/stack drivers.
The target usecase is de
From: "Artem V. Andreev"
Clustered allocation is required to simplify packaging objects into
buckets and lookup of the bucket control structure from an object.
Signed-off-by: Artem V. Andreev
Signed-off-by: Andrew Rybchenko
---
lib/librte_mempool/rte_mempool.c | 39 +
From: "Artem V. Andreev"
If the mempool manager supports object blocks (physically and virtually
contiguous sets of objects), it is sufficient to get the first
object only, and the function avoids filling in information
about each block member.
Signed-off-by: Artem V. Andreev
Signed-off-by:
From: "Artem V. Andreev"
The manager provides a way to allocate a physically and virtually
contiguous set of objects.
Note: due to the way objects are organized in the bucket manager,
get_avail_count may return fewer objects than were enqueued.
That breaks the expectation of mempool and mempool
From: "Artem V. Andreev"
The mempool get/put API manages the cache itself, but sometimes the
cache must be flushed explicitly.
A dedicated API also decouples cache flushing from the block get API (to
be added) and provides more fine-grained control.
Signed-off-by: Artem V. Andreev
Signed-off-by: An
From: "Artem V. Andreev"
Primarily, it is intended as a way for the mempool driver to provide
additional information on how it lays out objects inside the mempool.
Signed-off-by: Artem V. Andreev
Signed-off-by: Andrew Rybchenko
---
lib/librte_mempool/rte_mempool.h | 31
We are seeing the same performance drops, but in our case it is 16.11.3
compared against 17.05.2 and 17.08.
That is when DPDK is used with SR-IOV inside VMs, and the only change is
the DPDK version. Similar tests using SR-IOV in the host don't show such
a drop.
One change that could impact performance
Some apps can enable RSS but update neither the RETA table nor the hash.
This patch adds a default RETA table setup based on the total number of
configured Rx queues. The hash key depends on how the app configures the
rx_conf struct.
Signed-off-by: Alejandro Lucero
---
drivers/net/nfp/nfp_net.c | 1
On 11/23/2017 03:14 PM, Shahaf Shuler wrote:
Ethdev offloads API has changed since:
commit ce17eddefc20 ("ethdev: introduce Rx queue offloads API")
commit cba7f53b717d ("ethdev: introduce Tx queue offloads API")
This commit supports the new API.
Signed-off-by: Shahaf Shuler
---
examples/l2fw
The patchset adds support for input set configuration for
RSS/FDIR/FDIR flexible payload.
Beilei Xing (2):
net/i40e: support input set configuration
app/testpmd: add configuration for input set
app/test-pmd/cmdline.c| 124 ++
drivers/net/i40e/rte_pmd_i
This patch adds a command to configure the input set for
RSS/flow director/flow director flexible payload.
Signed-off-by: Beilei Xing
---
app/test-pmd/cmdline.c | 124 +
1 file changed, 124 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pm
This patch supports getting/setting input set info for
RSS/FDIR/FDIR flexible payload.
It also adds some helper functions for input set configuration.
Signed-off-by: Beilei Xing
---
drivers/net/i40e/rte_pmd_i40e.c | 141 ++
drivers/net/i40e/rte_pmd_i40e.h
NFP does CRC strip by default and it is not configurable. But even if an
app requests not to strip the CRC, that should not be a reason for PMD
configuration failure.
Fixes: defb9a5dd156 ("nfp: introduce driver initialization")
Signed-off-by: Alejandro Lucero
---
drivers/net/nfp/nfp_net.c | 6 ++
When jumbo frames are configured, the hardware MTU needs to be updated to
the specified max_rx_pkt_len. Also, changing the MTU should be avoided
once the PMD port has started.
Fixes: defb9a5dd156 ("nfp: introduce driver initialization")
Signed-off-by: Alejandro Lucero
---
drivers/net/nfp/nfp_net.c | 9 +
The wrong MTU length was used to configure the hardware, and the
reported max_rx_pktlen was also wrong.
Fixes: defb9a5dd156 ("nfp: introduce driver initialization")
Signed-off-by: Alejandro Lucero
---
drivers/net/nfp/nfp_net.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git
On Fri, Nov 24, 2017 at 01:01:08PM +0200, Roy Shterman wrote:
>
>
> Sent from my iPhone
>
> On 24 Nov 2017, at 12:03, Bruce Richardson
> wrote:
>
> >> On Fri, Nov 24, 2017 at 11:39:54AM +0200, roy wrote:
> >> Thanks for your answer, but I cannot understand the dimension of the ring
> >
Hi Akhil, Radu
Please see inline.
Thanks,
Anoob
On 11/24/2017 05:04 PM, Akhil Goyal wrote:
Hi Radu,
On 11/24/2017 4:47 PM, Radu Nicolau wrote:
On 11/24/2017 10:55 AM, Akhil Goyal wrote:
On 11/24/2017 3:09 PM, Radu Nicolau wrote:
Hi,
Comment inline
On 11/24/2017 8:50 AM, Akhil Goyal w
On Fri, Nov 24, 2017 at 03:03:52PM +0530, Akhil Goyal wrote:
> Hi Nelio,
> On 11/23/2017 3:32 PM, Nelio Laranjeiro wrote:
> > Device operation pointers should be constant to avoid any modification
> > while it is in use.
> >
> > Fixes: c261d1431bd8 ("security: introduce security API and framework"
On 11/24/2017 5:29 PM, Radu Nicolau wrote:
On 11/24/2017 11:34 AM, Akhil Goyal wrote:
Hi Radu,
On 11/24/2017 4:47 PM, Radu Nicolau wrote:
On 11/24/2017 10:55 AM, Akhil Goyal wrote:
On 11/24/2017 3:09 PM, Radu Nicolau wrote:
Hi,
Comment inline
On 11/24/2017 8:50 AM, Akhil Goyal wrote:
On 11/24/2017 11:34 AM, Akhil Goyal wrote:
Hi Radu,
On 11/24/2017 4:47 PM, Radu Nicolau wrote:
On 11/24/2017 10:55 AM, Akhil Goyal wrote:
On 11/24/2017 3:09 PM, Radu Nicolau wrote:
Hi,
Comment inline
On 11/24/2017 8:50 AM, Akhil Goyal wrote:
Hi Anoob, Radu,
On 11/23/2017 4:49 PM, Anoob
Hi All,
Significant performance degradation observed with DPDK 17.11
The Scenario is with TRex traffic generator
(https://github.com/cisco-system-traffic-generator/trex-core)
1. Stateless mode, 64B packet stream (multi-core/single core)
DPDK 17.02 - 37-39MPPS/core
DPDK 17.11 - 33.5MPPS/c
Hi Radu,
On 11/24/2017 4:47 PM, Radu Nicolau wrote:
On 11/24/2017 10:55 AM, Akhil Goyal wrote:
On 11/24/2017 3:09 PM, Radu Nicolau wrote:
Hi,
Comment inline
On 11/24/2017 8:50 AM, Akhil Goyal wrote:
Hi Anoob, Radu,
On 11/23/2017 4:49 PM, Anoob Joseph wrote:
In case of inline protocol proc
Re-sending
Hanoh
From: Hanoch Haim (hhaim)
Sent: Monday, November 20, 2017 5:19 PM
To: dev@dpdk.org
Cc: Wu, Jingjing (jingjing...@intel.com); Hanoch Haim (hhaim)
Subject: [dpdk-dev] net/i40e: latency issue due fix interrupt throttling
setting in PF
Hi All,
While integrating dpdk17.11 into TRex
From: Liang Ma
Add the description of the opdl PMD and example usage/description
Signed-off-by: Liang Ma
Signed-off-by: Peter, Mccarthy
---
doc/guides/eventdevs/index.rst | 1 +
doc/guides/eventdevs/opdl.rst | 162 +
.../sample_app_
From: Liang Ma
This commit adds an OPDL implementation of the eventdev API. The
implementation here is intended to enable the community to use
the OPDL infrastructure under the eventdev API.
The main components of the implementation are three files:
- opdl_evdev.c Creation, configuratio
From: Liang Ma
This adds the minimal changes to allow an OPDL eventdev implementation to
be compiled, linked and created at run time. The opdl eventdev does
nothing, but can be created via vdev on the command line, e.g.
sudo ./x86_64-native-linuxapp-gcc/app/test --vdev=event_opdl0
...
PMD: Creating
From: Liang Ma
This patch adds a sample app to the examples/ directory, which can
be used as a reference application and for general testing.
The application requires two ethdev ports and expects traffic to be
flowing. The application must be run with the --vdev flags as
follows to indicate to EA
From: Liang Ma
update the base config, add the OPDL event dev flag
update the driver/event Makefile to add the opdl subdir
update rte.app.mk to allow apps to link the PMD lib
Signed-off-by: Liang Ma
Signed-off-by: Peter, Mccarthy
---
config/common_base | 6 ++
drivers/event/Makefile | 1 +
m
From: Liang Ma
opdl_evdev.h includes the main data structures of the opdl device
and all the function prototypes that need to be exposed to support
the eventdev API.
opdl_evdev_init.c implements all initialization helper functions
Signed-off-by: Liang Ma
Signed-off-by: Peter, Mccarthy
---
drivers/event/opdl/o
From: Liang Ma
The OPDL (Ordered Packet Distribution Library) eventdev is a specific
implementation of the eventdev API. It is particularly suited to packet
processing workloads that have high throughput and low latency
requirements. All packets follow the same path through the device.
The order
From: Liang Ma
The OPDL ring is the core infrastructure of the OPDL PMD. The OPDL ring
library provides the core data structures and core helper function set.
The ring implements a single-ring, multi-port/stage pipelined packet
distribution mechanism. This mechanism has the following characteristics:
• No mult
On 11/24/2017 10:55 AM, Akhil Goyal wrote:
On 11/24/2017 3:09 PM, Radu Nicolau wrote:
Hi,
Comment inline
On 11/24/2017 8:50 AM, Akhil Goyal wrote:
Hi Anoob, Radu,
On 11/23/2017 4:49 PM, Anoob Joseph wrote:
In case of inline protocol processed ingress traffic, the packet
may not
have enoug
Sent from my iPhone
On 24 Nov 2017, at 12:03, Bruce Richardson
wrote:
>> On Fri, Nov 24, 2017 at 11:39:54AM +0200, roy wrote:
>> Thanks for your answer, but I cannot understand the dimension of the ring
>> and it is affected by the cache size.
>>
>>> On 24/11/17 11:30, Bruce Richardso
On 11/24/2017 3:09 PM, Radu Nicolau wrote:
Hi,
Comment inline
On 11/24/2017 8:50 AM, Akhil Goyal wrote:
Hi Anoob, Radu,
On 11/23/2017 4:49 PM, Anoob Joseph wrote:
In case of inline protocol processed ingress traffic, the packet may not
have enough information to determine the security parame
Hi Anoob,
On 11/24/2017 3:28 PM, Anoob wrote:
static inline void
route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t
nb_pkts)
{
	uint32_t hop[MAX_PKT_BURST * 2];
	uint32_t dst_ip[MAX_PKT_BURST * 2];
+	int32_t pkt_hop = 0;
	uint16_t i, offset;
+	uint16
Hi,
Currently, the maximum number of supported memory regions for vhost-user
backends is 8,
and the maximum supported memory regions for vhost-net backends is determined
by
" /sys/module/vhost/parameters/max_mem_regions".
In many scenarios, the vhost-user NIC will cause the memory region to
On Fri, Nov 24, 2017 at 11:39:54AM +0200, roy wrote:
> Thanks for your answer, but I cannot understand the dimension of the ring
> and it is affected by the cache size.
>
> On 24/11/17 11:30, Bruce Richardson wrote:
> > On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
> > > Hi,
> > >
Hi Akhil,
Please see inline.
Thanks,
Anoob
On 11/24/2017 02:58 PM, Akhil Goyal wrote:
Hi Anoob,
On 11/15/2017 3:11 PM, Anoob Joseph wrote:
When security offload is enabled, the packet should be forwarded on the
port configured in the SA. Security session will be configured on that
port onl
Thanks for your answer, but I cannot understand the dimension of the
ring and how it is affected by the cache size.
On 24/11/17 11:30, Bruce Richardson wrote:
On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
Hi,
In the documentation it says that:
* @param cache_size
* If cache
Hi,
Comment inline
On 11/24/2017 8:50 AM, Akhil Goyal wrote:
Hi Anoob, Radu,
On 11/23/2017 4:49 PM, Anoob Joseph wrote:
In case of inline protocol processed ingress traffic, the packet may not
have enough information to determine the security parameters with which
the packet was processed. In
Acked-by: Akhil Goyal
Hi Nelio,
On 11/23/2017 3:32 PM, Nelio Laranjeiro wrote:
Device operation pointers should be constant to avoid any modification
while it is in use.
Fixes: c261d1431bd8 ("security: introduce security API and framework")
Cc: akhil.go...@nxp.com
Cc: sta...@dpdk.org
Signed-off-by: Nelio Laranjeiro
On Thu, Nov 23, 2017 at 11:05:11PM +0200, Roy Shterman wrote:
> Hi,
>
> In the documentation it says that:
>
> * @param cache_size
> * If cache_size is non-zero, the rte_mempool library will try to
> * limit the accesses to the common lockless pool, by maintaining a
> * per-lcore object
Hi Anoob,
On 11/15/2017 3:11 PM, Anoob Joseph wrote:
When security offload is enabled, the packet should be forwarded on the
port configured in the SA. Security session will be configured on that
port only, and sending the packet on other ports could result in
unencrypted packets being sent out.
On Fri, Nov 24, 2017 at 10:08:18AM +0800, Jia He wrote:
> Hi Bruce
>
> I knew little about DPDK's milestone
>
> I read some hints from http://dpdk.org/dev/roadmap
>
> 18.02
>
> * Proposal deadline: November 24, 2017
> * Integration deadline: January 5, 2018
>
> I wonder whether I need to res
rte_flow is actually defined to include RSS, but until now, RSS has been
outside of rte_flow.
This patch moves the existing i40e RSS configuration to rte_flow.
This patch also enables queue region configuration
using the flow API for i40e.
Signed-off-by: Wei Zhao
---
drivers/net/i40e/i40e_ethdev.c | 91 +++
drivers/ne
Hi Anoob, Radu,
On 11/23/2017 4:49 PM, Anoob Joseph wrote:
In case of inline protocol processed ingress traffic, the packet may not
have enough information to determine the security parameters with which
the packet was processed. In such cases, application could get metadata
from the packet which
From: Junjie Chen
The driver can suppress interrupts when the VIRTIO_F_EVENT_IDX feature
bit is negotiated. The driver sets the vring flags to 0 and MAY use
used_event in the available ring to advise the device to delay interrupts
until the used index reaches the value specified by used_event. The
device ignores the lower bit of the vring flags,
rte_flow is actually defined to include RSS, but until now, RSS has been
outside of rte_flow.
This patch moves the existing igb RSS configuration to rte_flow.
Signed-off-by: Wei Zhao
---
drivers/net/e1000/e1000_ethdev.h | 20 +
drivers/net/e1000/igb_ethdev.c | 17 +
drivers/net/e1000/igb_flow.c | 160 +
rte_flow is actually defined to include RSS, but until now, RSS has been
outside of rte_flow.
This patch moves the existing ixgbe RSS configuration to rte_flow.
Signed-off-by: Wei Zhao
---
drivers/net/ixgbe/ixgbe_ethdev.c | 13 +++
drivers/net/ixgbe/ixgbe_ethdev.h | 10 +++
drivers/net/ixgbe/ixgbe_flow.c | 165 +++
The patches mainly implement the following functions:
1) igb move RSS to flow API
2) ixgbe move RSS to flow API
v2:
-fix bug for RSS flush code.
-fix patch check warning.
v3:
-fix bug for ixgbe rss restore.
root (2):
net/e1000: move RSS to flow API
net/ixgbe: move RSS to flow API
drivers/net/e100