Hi all,
I have a question regarding DPDK checksum offload. If the hardware does not support
checksum offload, but the DPDK application is configured to let the hardware compute
the checksum, what will happen? Will the driver automatically detect that hardware
checksum is unsupported and use software to calculate the
Hi all,
I have a question related to SR-IOV. Two machines are directly connected. On one
machine, I create a VF and assign an IP address to it. From the other machine, I
ping it. The VF receives the packets: ifconfig shows the TX counter increasing, and
tcpdump shows the ARP reply. But the ethtool TX counter is 0. an
I do not have any locks or critical sections in my code.
I have logs that print the core ID, src port, dst port, and queue ID. Worker 0
runs on core 1 and does macswap, which is very light; its throughput is 4.5 Mpps.
Worker 1 runs on core 2 and is a heavy load balancer; its throughput is also
4.5 Mpps. This does
Hi all,
I have two threads that process packets in different ways. Thread A (core 0)
is very heavy; thread B (core 1) is very light. If I run each of them alone,
their throughputs differ hugely with small packets. Thread A polls queue 0 of
port 0, and thread B polls queue 1 of port 0. If I ru
Got it. Thanks for your guidance!
At 2016-09-20 22:41:36, "Andriy Berestovskyy" wrote:
>AFAIR Intel hardware should do the 10Gbit/s line rate (i.e. ~14,8
>MPPS) with one flow and LPM quite easily. Sorry, I don't have numbers
>to share at hand.
>
>Regarding the tool please see the pktgen-dpdk
Thanks so much for your reply! How do you usually test LPM performance with a
variety of destination addresses? Which tool do you use to send the traffic? How
many flow rules do you add? And what performance do you get?
At 2016-09-20 17:41:13, "Andriy Berestovskyy" wrote:
>Hey,
>You are correct.
Hi all,
Has anyone tested IPv4 lookup performance? If so, what throughput do you get? I can
get almost 10 Gbit/s with 64-byte packets, but before the test I expected it to
be less than 10G. I thought the performance would not be affected by the
number of rule entries, but the throughput seems to be relate
Hi all,
I am using a memory-safety tool, SAFECode (http://safecode.cs.illinois.edu/), to
compile a DPDK application. If I do not enable the memory-safety checking, it works
correctly. But my main aim is to evaluate the overhead when SAFECode protects the
memory.
The related compiling o
Please ignore this message. It works. I just made a mistake by myself. Sorry.
At 2016-08-01 06:10:24, "??" wrote:
>Hi,
>
>
>I want to compile and run dpdk example qos_meter. But it shows compile errors.
>
>
>qos_meter/rte_policer.h:34:20: error: #include nested too deeply
>
>qos_meter/r
Hi,
I want to compile and run dpdk example qos_meter. But it shows compile errors.
qos_meter/rte_policer.h:34:20: error: #include nested too deeply
qos_meter/rte_policer.h:35:25: error: #include nested too deeply
qos_meter/rte_policer.h:38:43: error: unknown type name 'uint32_t'
...
I am
Hi all,
When using the DPDK multi-process client-server example, I create many clients.
After the number of clients reaches 1239, I get this error:
EAL: memzone_reserve_aligned_thread_unsafe(): No more room in config
RING: Cannot reserve memory
EAL: Error - exiting with code: 1
Cause: Cannot create t
Thanks so much! That fixes my problem.
At 2016-04-19 15:39:16, "De Lara Guarch, Pablo" wrote:
>Hi,
>
>> -Original Message-
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of ??
>> Sent: Tuesday, April 19, 2016 5:58 AM
>> To: De Lara Guarch, Pablo
>> Cc: Thomas Monjalon; Go
Hi all,
In the multi-process environment, I previously hit a bug when calling
rte_hash_lookup_with_hash, and Dhana's patch fixed it. Now I need to remove a flow
in the multi-process environment, and the system crashes when calling the
rte_hash_del_key function. The following is the gdb t
Thanks so much for your patch! It solves my bug exactly. :)
At 2016-03-15 08:57:29, "Dhananjaya Eadala" wrote:
Hi
I looked at your info from gdb and source code.
Thanks for your reply! A patch that someone posted to the mailing list last night
solved my problem.
At 2016-03-14 21:02:13, "Kyle Larose" wrote:
>Hello,
>
>On Sun, Mar 13, 2016 at 10:38 AM, ?? wrote:
>> Hi all,
>> When I use the dpdk lookup function, I met the segment fault problem. Can
BTW, the following is my backtrace when the system crashes.
Program received signal SIGSEGV, Segmentation fault.
0x004883ab in rte_hash_reset (h=0x0)
at
/home/zhangwei1984/timopenNetVM/dpdk-2.2.0/lib/librte_hash/rte_cuckoo_hash.c:444
444        while (rte_ring_dequeue(h->free_slots, &ptr)
I hit a problem using the DPDK hash table across multiple processes: one process
started as the primary and the other as a secondary.
I based my code on the client-server multi-process example. My aim is for the
server to create a hash table and then share it with the client. The client will
read and writ
Hi all,
When I use the DPDK hash lookup function, I hit a segmentation fault. Can
anybody help me figure out why this happens? I will describe what I am trying to
do, along with the related piece of code and my debug messages.
This problem occurs in the DPDK multi-process client-server example,
dpdk-
Hi all,
Now I am using the DPDK multi-process example: client-server. I want the server
to create a hash table and then share it with the client. Currently, my
create_hash_table function returns a pointer to the flow table
(inside the create_hash_table function, I use rte_calloc
Hi all,
When running the multi-process example, does anybody know why the performance
drops as the number of mbufs increases?
In the multi-process example, there are two macros related to the number
of mbufs:
#define MBUFS_PER_CLIENT 1536
#define MBUFS_PER_PORT 1536
Hi all,
I want to ask whether anybody knows how the kernel can access information from
DPDK hugepages. My project requires the kernel to get some info from a DPDK
application. E.g., in the multi-process example, every client shares a ring
buffer with the server, and the shared ring contains some meta
Hi all,
I am using the DPDK example dpdk-1.8.0/examples/multi_process/client_server_mp
on Ubuntu 14.04. I need to disable batching. At first, I just changed the
macro in mp_server/main.c and mp_client/client.c:
#define PACKET_READ_SIZE 32 to 1
After that, the server and the client cannot receive any pa