Hi Anatoly,
I was able to resolve the problem; it was a problem in our script.
Thanks and regards
Venu
On Fri, 6 Dec 2019 at 16:17, Burakov, Anatoly wrote:
> On 18-Nov-19 4:43 PM, Venumadhav Josyula wrote:
> > Hi Anatoly,
> >
> > After using iova-mode=va, I see my ports are not getting detected.
On 18-Nov-19 4:43 PM, Venumadhav Josyula wrote:
Hi Anatoly,
After using iova-mode=va, I see my ports are not getting detected. I
thought it was working, but I see the following problem.
What could be the problem?
i) I see allocation is faster.
ii) But my ports are not getting detected.
I take my word back that it is entirely working.
Please note I am using dpdk 18.11...
Thanks,
Regards,
Venu
Hi Anatoly,
> I would also suggest using --limit-mem if you desire to limit the
> maximum amount of memory DPDK will be able to allocate.
We are already using that.
Thanks and regards,
Venu
On Thu, 14 Nov 2019 at 15:19, Burakov, Anatoly wrote:
> On 14-Nov-19 8:12 AM, Venumadhav Josyula wrote:
Hi Anatoly,
Thanks for the quick response. We want to understand whether there will be
performance implications from iova-mode being va, specifically in terms of
the following:
- cache misses
- branch misses, etc.
- translation of VA address -> physical address when a packet is received
  (see the sketch below)
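As an aside, that last point can be checked directly: with iova-mode=va the
IOVA recorded for a mempool object is simply its virtual address, so nothing
is translated per packet on the datapath. A minimal sketch, assuming a pool
'mp' has already been created (DPDK 18.11 API; the callback name is ours):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_mempool.h>

    /* rte_mempool_obj_iter() callback: print VA and IOVA of the first
     * few objects. Under --iova-mode=va the two values are equal; under
     * iova-mode=pa the IOVA is the physical address instead. Either way
     * the IOVA is precomputed, not translated per packet. */
    static void
    print_obj_iova(struct rte_mempool *mp, void *opaque, void *obj,
                   unsigned obj_idx)
    {
        (void)mp;
        (void)opaque;
        if (obj_idx < 4) /* a few objects are enough to see the pattern */
            printf("obj %u: va=%p iova=0x%" PRIx64 "\n",
                   obj_idx, obj, rte_mempool_virt2iova(obj));
    }

    /* usage: rte_mempool_obj_iter(mp, print_obj_iova, NULL); */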
On 13-Nov-19 9:01 PM, Venumadhav Josyula wrote:
Hi Anatoly,
By default, without specifying the --iova-mode option, is iova-mode=pa the default?
Thanks
Venu
In 18.11, there is a very specific set of circumstances that will
default to IOVA as VA mode. Future releases have become more
aggressive...
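If it helps, the mode EAL actually selected can be queried after init
instead of being inferred from the defaults. A minimal sketch (18.11 API;
error handling elided):

    #include <stdio.h>
    #include <rte_eal.h>

    int
    main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* Report which IOVA mode EAL ended up selecting. */
        switch (rte_eal_iova_mode()) {
        case RTE_IOVA_PA:
            printf("iova-mode=pa\n");
            break;
        case RTE_IOVA_VA:
            printf("iova-mode=va\n");
            break;
        default:
            printf("iova mode not determined\n");
            break;
        }
        return 0;
    }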
Hi Olivier, Bruce,
- We were using the --socket-mem EAL flag.
- We did not want to go back to legacy mode.
- We also wanted to avoid 1 GB hugepages.
Thanks for your inputs. (A sketch of passing these flags at init follows below.)
Hi Anatoly,
We were using vfio with iommu, but by default it is iova-mode=pa; after
changing to iova-mode=va...
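For reference, a minimal sketch of passing those EAL options
programmatically rather than on the command line; the sizes and values here
are illustrative, not our production settings:

    #include <rte_common.h>
    #include <rte_eal.h>

    int
    main(void)
    {
        /* Same effect as the command-line flags discussed above:
         * --socket-mem reserves hugepage memory per NUMA socket (in MB),
         * --iova-mode=va overrides the pa default seen in our setup. */
        char *eal_argv[] = {
            "app",
            "--socket-mem=1024,1024", /* illustrative sizes */
            "--iova-mode=va",
        };

        if (rte_eal_init((int)RTE_DIM(eal_argv), eal_argv) < 0)
            return -1;
        return 0;
    }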
Hi Anatoly,
By default, without specifying the --iova-mode option, is iova-mode=pa the default?
Thanks
Venu
On Wed, 13 Nov 2019, 10:56 pm, Burakov, Anatoly wrote:
> On 13-Nov-19 9:19 AM, Bruce Richardson wrote:
> > On Wed, Nov 13, 2019 at 10:37:57AM +0530, Venumadhav Josyula wrote:
> >> Hi,
> >> We are using 'rte_mempool_create'...
Hi Olivier,
> Could you give some more details about your use case? (hugepage size,
> number of objects, object size, additional mempool flags, ...)
Ours is a telecom product; we support multiple RATs (radio access
technologies). Let us take the example of the 4G case, where we act as a
GTP-U proxy.
- Hugepage size: 2 MB
- ...
Hi,
A few more points:
Operating system: CentOS 7.6
Logging mechanism: syslog
We have logged using syslog before the call and again after the call.
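Concretely, the bracketing looks like this; a sketch where
create_flow_pool() is a hypothetical stand-in for our actual
rte_mempool_create() call (parameters omitted here):

    #include <syslog.h>
    #include <time.h>

    struct rte_mempool;

    /* Hypothetical stand-in for our actual rte_mempool_create() call. */
    extern struct rte_mempool *create_flow_pool(void);

    /* One syslog line before the call and one after, carrying the
     * elapsed wall-clock time of the allocation. */
    static struct rte_mempool *
    create_flow_pool_timed(void)
    {
        struct timespec t0, t1;
        struct rte_mempool *mp;

        syslog(LOG_INFO, "flow pool: calling rte_mempool_create");
        clock_gettime(CLOCK_MONOTONIC, &t0);
        mp = create_flow_pool();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        syslog(LOG_INFO, "flow pool: done in %.3f sec",
               (double)(t1.tv_sec - t0.tv_sec) +
               (double)(t1.tv_nsec - t0.tv_nsec) / 1e9);
        return mp;
    }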
Thanks & Regards
Venu
On Wed, 13 Nov 2019 at 10:37, Venumadhav Josyula wrote:
> Hi,
> We are using 'rte_mempool_create' for allocation of flow memory...
Hi,
We are using 'rte_mempool_create' for allocation of flow memory. This has
been there for a while. We just migrated to dpdk-18.11 from dpdk-17.05.
Here is the problem statement.
Problem statement:
In the new dpdk (18.11), 'rte_mempool_create' takes approximately 4.4 sec
for allocation, compared to dpdk-17.05...
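For reference, the call in question has this shape; each argument is
annotated, but the values are placeholders rather than our production sizes:

    #include <rte_mempool.h>

    static struct rte_mempool *
    create_flow_pool(void)
    {
        return rte_mempool_create(
                "flow_pool",    /* unique pool name (placeholder) */
                1 << 20,        /* n: number of elements */
                256,            /* elt_size: bytes per element */
                512,            /* cache_size: per-lcore object cache */
                0,              /* private_data_size */
                NULL, NULL,     /* mp_init, mp_init_arg */
                NULL, NULL,     /* obj_init, obj_init_arg */
                SOCKET_ID_ANY,  /* socket_id: allocate on any NUMA node */
                0);             /* flags: 0 = default MP/MC behaviour */
    }

With 2 MB hugepages, a pool of this size spans hundreds of pages, so any
per-page setup cost in the allocator multiplies accordingly.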