Hi
> That means that each file with '#include will have its own
> copy
> of that function:
> $ objdump -d x86_64-native-linuxapp-gcc/app/testpmd | grep
> ':' | sort -u | wc -l
> 233
> Same story for rte_memcpy_ptr and rte_memcpy_DEFAULT, etc...
> Obviously we need (and want) only one copy of th
OK.
Won't touch it in next version.
Best Regards,
Xiaoyun Li
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Monday, October 2, 2017 08:08
> To: Li, Xiaoyun ; Richardson, Bruce
>
> Cc: Lu, Wenzhuo ; Zhang, Helin
> ; dev@dpdk.org
> Subject: RE: [PATCH v3 3/3] efd: run-time disp
>
> This patch dynamically selects x86 EFD functions at run-time.
I don't think it really does.
In fact, I am not sure that we need to touch EFD at all here -
from what I can see, it already does dynamic selection properly.
Konstantin
> This patch uses function pointer and binds it to the rel
On 9/30/2017 5:25 PM, Bill Bonaparte wrote:
Hi Jianfeng,
Thank you for replying, I appreciate it very much.
We are trying to run our DPDK application on the AWS cloud, which uses the
Xen platform.
In this case, what should I do to support the AWS cloud?
Is there any way to do this?
Sorry, I
On 9/30/2017 7:49 PM, Yuanhan Liu wrote:
On Sat, Sep 30, 2017 at 12:06:44PM , Jianfeng Tan wrote:
+ /* share callfd and kickfd */
+ params->type = VHOST_MSG_TYPE_SET_FDS;
+ vring_num = rte_vhost_get_vring_num(vid);
+ for (i = 0; i < vring_num; i++) {
+
On 9/30/2017 7:34 PM, Yuanhan Liu wrote:
On Thu, Sep 30, 2017 at 12:53:00PM +, Jianfeng Tan wrote:
On 9/30/2017 4:23 PM, Yuanhan Liu wrote:
On Thu, Sep 28, 2017 at 01:55:59PM +, Jianfeng Tan wrote:
+static int
new_device(int vid)
{
struct rte_eth_dev *eth_dev;
@@ -610,6
Hi Xiaoyun,
> This patch dynamically selects memcpy functions at run-time based
> on the CPU flags that the current machine supports. This patch uses function
> pointers which are bound to the relevant functions at constructor time.
> In addition, the AVX512 instruction set would be compiled only if users
>
On Sat, 30 Sep 2017 09:59:08 +0800
Xiaoyun Li wrote:
> To allow performance tuning on different NICs or platforms, we
> need to make the number of descriptors and the Rx/Tx thresholds arguments
> when starting the l3fwd application.
>
> Signed-off-by: Xiaoyun Li
Not sure about this. The po
HW pool managers, e.g. the Octeontx SoC, demand that s/w program the start and end
address of the pool. Currently, there is no such API in the external mempool.
Introduce the rte_mempool_ops_register_memory_area API, which will let the HW (pool
manager) know when the common layer selects a hugepage:
For each hugepage - Notify it
Some mempool HW, like the octeontx/fpa block, demands a block-size
(total_elem_sz) aligned object start address.
Introduce a MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS flag.
If this flag is set:
- Align object start address(vaddr) to a multiple of total_elt_sz.
- Allocate one additional object. Additional ob
The memory area containing all the objects must be physically
contiguous.
Introducing MEMPOOL_F_CAPA_PHYS_CONTIG flag for such use-case.
The flag is useful to detect whether the pool area has sufficient space
to fit all objects. If not, return -ENOSPC.
This way, we make sure that all object within a
xmem_size and xmem_usage need to know the status of the mempool flags,
so add a 'flags' arg to the _xmem_size/usage() API.
The following patch will make use of it.
Signed-off-by: Santosh Shukla
Signed-off-by: Jerin Jacob
Acked-by: Olivier Matz
---
drivers/net/xenvirt/rte_mempool_gntalloc.c | 7 ---
Removed mempool deprecation notice and
updated change info in release_17.11.
Signed-off-by: Santosh Shukla
Signed-off-by: Jerin Jacob
Acked-by: Olivier Matz
---
doc/guides/rel_notes/deprecation.rst | 9 -
doc/guides/rel_notes/release_17_11.rst | 7 +++
2 files changed, 7 insertio
Allow the mempool driver to advertise its pool capabilities.
For that purpose, an API (rte_mempool_ops_get_capabilities)
and a ->get_capabilities() handler have been introduced.
- Upon a ->get_capabilities() call, the mempool driver will advertise
its capabilities in the mempool flags param.
Signed-off-by: Santo
mp->flags is an int, but the mempool API writes an unsigned int
value into 'flags', so fix the 'flags' data type.
Signed-off-by: Santosh Shukla
Signed-off-by: Jerin Jacob
Acked-by: Olivier Matz
---
lib/librte_mempool/rte_mempool.c | 4 ++--
lib/librte_mempool/rte_mempool.h | 2 +-
2 files changed, 3 insert
* Remove redundant 'flags' API description from
- __mempool_generic_put
- __mempool_generic_get
- rte_mempool_generic_put
- rte_mempool_generic_get
* Remove unused 'flags' argument from
- rte_mempool_generic_put
- rte_mempool_generic_get
Signed-off-by: Santosh Shukla
Signed-off-by: J
v7:
Includes v6 minor review changes suggested by Olivier.
Patches are rebased on tip / upstream commit: 5dce9fcdb23
v6:
Includes v5 review changes, suggested by Olivier.
Patches rebased on tip, commit: 06791a4bcedf
v5:
Includes v4 review change, suggested by Olivier.
v4:
Include
- mempool deprec
Now that DPDK supports more than one mempool driver and
each mempool driver works best for a specific PMD, for example:
- sw ring based mempool for Intel PMD drivers.
- dpaa2 HW mempool manager for dpaa2 PMD driver.
- fpa HW mempool manager for Octeontx PMD driver.
Application would like to know the be
DPDK has support for both SW and HW mempools, but
currently the user is limited to the ring_mp_mc pool.
If the user wants to use another pool handle, they
need to update the config option RTE_MEMPOOL_OPS_DEFAULT, then
build and run with the desired pool handle.
Introduce an EAL option to override the default pool handle.
Now use
v5:
- Includes v4 minor review comment.
Patches rebased on upstream tip / commit id:5dce9fcdb2
v4:
- Includes v3 review comment changes.
Patches rebased on 06791a4bce: ethdev: get the supported pools for a port
v3:
- Rebased on top of v17.11-rc0
- Updated version.map entry to v17.11.
v2: