Hello,

I am trying to find the conclusion for this thread: [dpdk-dev] Question about
DPDK hugepage fd change.

Can someone please help me, as I do not see this patch anywhere? Was the lock
file usage changed so that a lower number of descriptors is used when the
--single-file-segments option is used? Currently, a .lock file is still
opened, so it still uses up a large number of fd's.

Thanks.
-- edwin
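(For reference, one quick way to see how many descriptors a DPDK process is
actually holding, with or without --single-file-segments, is to count the
entries under /proc/self/fd. The sketch below is an illustration and not part
of the original thread; it assumes a Linux target with procfs mounted.)

/* Count the open file descriptors of the current process. Illustration only. */
#include <dirent.h>
#include <stdio.h>

static int count_open_fds(void)
{
    DIR *d = opendir("/proc/self/fd");
    if (d == NULL)
        return -1;

    int count = 0;
    struct dirent *de;
    while ((de = readdir(d)) != NULL) {
        if (de->d_name[0] != '.')   /* skip "." and ".." */
            count++;
    }
    closedir(d);
    return count - 1;               /* exclude the fd opendir itself uses */
}

int main(void)
{
    printf("open fds: %d\n", count_open_fds());
    return 0;
}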
-----Original Message-----
From: Iain Barker
Sent: Wednesday, February 27, 2019 8:57 AM
To: Burakov, Anatoly; Wiles, Keith
Cc: dev@dpdk.org; Edwin Leung
Subject: RE: [dpdk-dev] Question about DPDK hugepage fd change
Original Message from: Burakov, Anatoly [mailto:anatoly.bura...@intel.com]
> I just realized that, unless you're using the --legacy-mem switch, one other
> way to alleviate the issue would be to use the --single-file-segments
> option. This will still store the fd's, however it will only do so per
> memseg list.
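(For context, the EAL options mentioned above are passed ahead of the
application's own arguments. The snippet below is a minimal sketch of
initializing EAL with --single-file-segments; the program name and the
core/channel arguments are placeholders, not taken from the thread.)

/* Minimal EAL init passing --single-file-segments; sketch only. */
#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    /* Equivalent to running: ./app -l 0-1 -n 4 --single-file-segments */
    char *eal_argv[] = {
        "app", "-l", "0-1", "-n", "4", "--single-file-segments", NULL
    };
    int eal_argc = 6;

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }

    /* ... application setup ... */
    return 0;
}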
> Can you use 1G hugepages instead of 2M pages or a combo of the two, not sure
> how dpdk handles having both in the system?
Unfortunately, no. Some of our customer deployments are tenancies on KVM hosts
and low-end appliances, which are not configurable by the end user to enable 1G
huge pages.
> Maybe I do not see the full problem here. If DPDK used poll instead of
> select it would solve the 1024 problem, as poll has a high limit to the
> number of file descriptors - at least that was my assumption.

Thanks Keith.

The issue is not whether DPDK is using poll or select on the fd's [...]
> Would poll work here instead?

Poll (or epoll) would definitely work - if we controlled the source and
compilation of all the libraries that the application links against.

But an app doesn't know how the libraries in the OS are implemented. We'd have
no way to ensure select() isn't called by one of those libraries.
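(To make the select() limitation concrete: an fd_set cannot hold a descriptor
numbered FD_SETSIZE, usually 1024, or higher, while poll() takes the fd value
directly in a struct and has no such cap. The sketch below is an illustration,
not code from the thread; it fabricates a high fd number with dup2 and assumes
the open-files limit is large enough for that to succeed.)

/* fds >= FD_SETSIZE are unusable with select()'s fd_set; poll() is not
 * affected by the fd's numeric value. Illustration only. */
#include <poll.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    /* Duplicate stdin onto a deliberately high descriptor number. */
    int high_fd = dup2(STDIN_FILENO, 4000);
    if (high_fd < 0) {
        perror("dup2");   /* fails if RLIMIT_NOFILE is below 4001 */
        return 1;
    }

    if (high_fd >= FD_SETSIZE)
        printf("fd %d does not fit in an fd_set (FD_SETSIZE=%d)\n",
               high_fd, FD_SETSIZE);

    /* poll() has no FD_SETSIZE restriction on the fd value itself. */
    struct pollfd pfd = { .fd = high_fd, .events = POLLIN };
    int ready = poll(&pfd, 1, 0);
    printf("poll() on fd %d returned %d\n", high_fd, ready);

    close(high_fd);
    return 0;
}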
Hi everyone,

We just updated our application from DPDK 17.11.4 (LTS) to DPDK 18.11 (LTS)
and we noticed a regression.

Our host platform is providing 2MB huge pages, so for an 8GB reservation this
means 4000 pages are allocated.

This worked fine in the prior LTS, but after upgrading DPDK what we are seeing
is that a file descriptor is now held open for every huge page.
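(For scale, and not part of the original mail: an 8GB reservation in 2MB pages
is 4096 pages (the mail rounds to 4000), while the default per-process soft
limit for open files on many Linux systems is 1024. A small sketch comparing
the two numbers with getrlimit(); the figures are taken from the thread.)

/* Compare the expected per-page fd count against RLIMIT_NOFILE. Sketch only. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    const long reservation_mb = 8 * 1024;   /* 8GB reservation, from the mail */
    const long hugepage_mb = 2;             /* 2MB huge pages */
    const long pages = reservation_mb / hugepage_mb;

    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    printf("huge pages (one fd per page): %ld\n", pages);
    printf("RLIMIT_NOFILE soft limit:     %llu\n",
           (unsigned long long)rl.rlim_cur);
    if ((unsigned long long)pages > (unsigned long long)rl.rlim_cur)
        printf("-> one fd per page would exceed the soft limit\n");

    return 0;
}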