Warshavsky <arnon at qwilt.com>, "dev at dpdk.org" <dev at dpdk.org>
Subject: Re: [dpdk-dev] Reshuffling of rte_mbuf structure.
On Mon, Nov 02, 2015 at 11:51:23PM +0100, Thomas Monjalon wrote:
But it is simpler to say that having an API depending on some options
is a "no-design" which could seriously slow down the DPDK adoption.
Also, there could be places in the code where we change a set of
contiguous fields in the mbuf. E.g. the ixgbe vector PMD receive function
takes advantage of 128-bit vector registers and fills out
rx_descriptor_fields1 with one instruction. But I guess there are other
places too, and they are really
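For reference, a minimal sketch of the single-store idea described above. This is not the actual ixgbe vector PMD code; the helper name and the exact field order (packet_type, pkt_len, data_len, vlan_tci, hash.rss behind the rx_descriptor_fields1 marker) are assumptions based on the 2015-era rte_mbuf layout.

/*
 * Illustrative only: pack the RX result fields into one XMM register and
 * write the whole contiguous block behind rx_descriptor_fields1 with a
 * single 16-byte store.  Moving any of these fields elsewhere in the
 * struct would break this kind of code path.
 */
#include <stdint.h>
#include <emmintrin.h>          /* SSE2: _mm_set_epi32, _mm_storeu_si128 */
#include <rte_mbuf.h>

static inline void
write_rx_descriptor_fields1(struct rte_mbuf *mb, uint32_t packet_type,
                            uint32_t pkt_len, uint16_t data_len,
                            uint16_t vlan_tci, uint32_t rss_hash)
{
        __m128i fields = _mm_set_epi32(
                (int)rss_hash,                              /* hash.rss           */
                (int)((uint32_t)vlan_tci << 16 | data_len), /* data_len, vlan_tci */
                (int)pkt_len,                               /* pkt_len            */
                (int)packet_type);                          /* packet_type        */

        /* one 16-byte store covers all five fields */
        _mm_storeu_si128((__m128i *)&mb->rx_descriptor_fields1, fields);
}

In the real driver the register is produced by shuffling the raw hardware descriptor, so the hot path never touches the fields one by one.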
On Mon, Nov 02, 2015 at 07:21:17PM -0500, Matthew Hall wrote:
> On Mon, Nov 02, 2015 at 11:51:23PM +0100, Thomas Monjalon wrote:
> > But it is simpler to say that having an API depending on some options
> > is a "no-design" which could seriously slow down the DPDK adoption.
>
> What about something similar to how Java JNI works?
On Tue, Nov 03, 2015 at 11:44:22AM, Zoltan Kiss wrote:
> Also, there could be places in the code where we change a set of
> contiguous fields in the mbuf. E.g. the ixgbe vector PMD receive
> function takes advantage of 128-bit vector registers and fills out
> rx_descriptor_fields1 with one instruction.
This discussion is about improving performance of specific use cases
by moving the mbuf fields when needed.
We could consider how to configure it and how complicated it would be to
write applications or drivers (especially vector ones) for such a moving
structure.
But it is simpler to say that having an API depending on some options
is a "no-design" which could seriously slow down the DPDK adoption.
Date: Monday, November 2, 2015 at 10:35 AM
To: Cisco Employee <shesha at cisco.com>
Cc: Stephen Hemminger <stephen at networkplumber.org>, "dev at dpdk.org" <dev at dpdk.org>
Subject: Re: [dpdk-dev] Reshuffling of rte_mbuf structure.
If
--
> - Thanks
> char * (*shesha) (uint64_t cache, uint8_t F00D)
> { return 0xC0DE; }
>
> From: Stephen Hemminger
> Date: Monday, November 2, 2015 at 8:24 AM
> To: Arnon Warshavsky
> Cc: Cisco Employee, "dev at dpdk.org"
> Subject: Re: [dpdk-dev] Reshuffling of rte_mbuf structure.
On Mon, Nov 02, 2015 at 11:51:23PM +0100, Thomas Monjalon wrote:
> But it is simpler to say that having an API depending on some options
> is a "no-design" which could seriously slow down the DPDK adoption.
What about something similar to how Java JNI works? It needed to support
multiple Java JRE
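One way to read the JNI analogy, as a purely hypothetical sketch rather than anything that exists in DPDK: like JNIEnv, hide the mbuf layout behind a table of accessors filled in at initialization, so different field arrangements could sit behind the same application-facing interface. All names below are made up for illustration.

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical accessor table; none of these names are DPDK APIs. */
struct mbuf_env {
        uint32_t (*pkt_len)(const struct rte_mbuf *m);
        uint32_t (*rss_hash)(const struct rte_mbuf *m);
        struct rte_mbuf *(*next_seg)(const struct rte_mbuf *m);
};

/* Backing implementation for the current field layout. */
static uint32_t cur_pkt_len(const struct rte_mbuf *m)  { return m->pkt_len; }
static uint32_t cur_rss_hash(const struct rte_mbuf *m) { return m->hash.rss; }
static struct rte_mbuf *cur_next_seg(const struct rte_mbuf *m) { return m->next; }

/* A build or init step would select the table matching the chosen layout. */
static const struct mbuf_env mbuf_env = {
        .pkt_len  = cur_pkt_len,
        .rss_hash = cur_rss_hash,
        .next_seg = cur_next_seg,
};

The obvious trade-off is an indirect call (or at least an extra load) on the receive fast path, which is presumably why the discussion keeps coming back to direct, inline field access.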
"dev at dpdk.org" <dev at dpdk.org>
Subject: Re: [dpdk-dev] Reshuffling of rte_mbuf structure.
On Sun, 1 Nov 2015 06:45:31 +0200
Arnon Warshavsky <arnon at qwilt.com> wrote:
My 2 cents,
This was brought up in the recent user space summit, and it seems that
indeed there is no one cache line arrangement that fits all.
On Sun, 1 Nov 2015 06:45:31 +0200
Arnon Warshavsky wrote:
> My 2 cents,
>
> This was brought up in the recent user space summit, and it seems that
> indeed there is no one cache line arrangement that fits all.
> OTOH, multiple compile-time options to satisfy all flavors would make it
> unpleasant to read, maintain, test and debug.
My 2 cents,
This was brought up in the recent user space summit, and it seems that
indeed there is no one cache line arrangement that fits all.
OTOH, multiple compile-time options to satisfy all flavors would make it
unpleasant to read, maintain, test and debug.
(I think there was quite a consensus
At Cisco, we are using DPDK for a very high-speed packet processor application.
We don't use NIC TCP offload / RSS hashing. Putting those fields in the first
cache line - and the obligatory mb->next datum in the second cache line -
causes significant LSU pressure and performance degradation. If
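A minimal sketch of the layout concern above, assuming 64-byte cache lines and the 2015-era field names (hash, next); the build-time checks just encode which line each field currently lands in.

#include <stddef.h>
#include <rte_mbuf.h>

#define CACHE_LINE      64      /* assumption: 64-byte cache lines */
#define MBUF_LINE_OF(f) (offsetof(struct rte_mbuf, f) / CACHE_LINE)

/* RX offload/hash results occupy the first cache line... */
_Static_assert(MBUF_LINE_OF(hash) == 0, "hash/offload fields expected in cache line 0");

/* ...while the segment chain pointer, touched on every chained-mbuf or free
 * operation, sits in the second line and costs an extra line fill. */
_Static_assert(MBUF_LINE_OF(next) == 1, "m->next expected in cache line 1");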