Re: [dpdk-dev] [Bug 814] Intel PMD both i40 and iavf send OP_DISABLE_VLAN_STRIPPING and it not mandatory

2021-09-23 Thread spyroot
The second issue is in CONFIG_RSS_LUT.

VIRTCHNL_OP_CONFIG_RSS_LUT is optional, so if the PF rejects it and the call
returns an error, that error must be handled gracefully.


int
iavf_configure_rss_lut(struct iavf_adapter *adapter)
{
	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
	struct virtchnl_rss_lut *rss_lut;
	struct iavf_cmd_info args;
	int len, err = 0;

	len = sizeof(*rss_lut) + vf->vf_res->rss_lut_size - 1;
	rss_lut = rte_zmalloc("rss_lut", len, 0);
	if (!rss_lut)
		return -ENOMEM;

	rss_lut->vsi_id = vf->vsi_res->vsi_id;
	rss_lut->lut_entries = vf->vf_res->rss_lut_size;
	rte_memcpy(rss_lut->lut, vf->rss_lut, vf->vf_res->rss_lut_size);

	args.ops = VIRTCHNL_OP_CONFIG_RSS_LUT;
	args.in_args = (u8 *)rss_lut;
	args.in_args_size = len;
	args.out_buffer = vf->aq_resp;
	args.out_size = IAVF_AQ_BUF_SZ;

	err = iavf_execute_vf_cmd(adapter, &args);
	if (err)
		PMD_DRV_LOG(ERR,
			    "Failed to execute command of OP_CONFIG_RSS_LUT");

	rte_free(rss_lut);
	return err;
}
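
For illustration, here is a minimal sketch of how a caller could treat a
rejected optional op as non-fatal instead of aborting device configuration.
The wrapper name and exact logging are my own assumptions, not the upstream
fix:

/* Hypothetical wrapper, illustrative only: VIRTCHNL_OP_CONFIG_RSS_LUT is
 * optional, so a refusal from the PF is logged and not propagated as a
 * fatal configuration error. */
static int
iavf_config_rss_lut_optional(struct iavf_adapter *adapter)
{
	int err = iavf_configure_rss_lut(adapter);

	if (err)
		PMD_DRV_LOG(WARNING,
			    "OP_CONFIG_RSS_LUT rejected by PF, keeping PF default RSS LUT");

	/* The op is optional, so do not fail the caller. */
	return 0;
}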


On Fri, Sep 24, 2021 at 12:29 AM  wrote:

> https://bugs.dpdk.org/show_bug.cgi?id=814
>
> Bug ID: 814
>Summary: Intel PMD both i40 and iavf send
> OP_DISABLE_VLAN_STRIPPING and it not mandatory
>Product: DPDK
>Version: 20.11
>   Hardware: All
> OS: All
> Status: UNCONFIRMED
>   Severity: normal
>   Priority: Normal
>  Component: core
>   Assignee: dev@dpdk.org
>   Reporter: spyr...@gmail.com
>   Target Milestone: ---
>
> Hi Folks,
>
> There is an issue in the i40e and iavf PMDs: both drivers send optional
> commands that require trusted mode to be enabled, and if
> OP_DISABLE_VLAN_STRIPPING is discarded by the PF, the PMD returns an error
> instead of handling it gracefully.
>
>
> Example: iavf_disable_vlan_strip(); the same function is present in i40e to
> disable VLAN stripping.  The command is optional and not required, and its
> failure must be handled gracefully.
>
>
> EAL: Detected 4 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: No available hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: Probe PCI driver: net_iavf (8086:1889) device: :13:00.0 (socket 0)
> EAL: Error reading from file descriptor 26: Input/output error
> EAL: No legacy callbacks, legacy socket not created
> testpmd: create a new mbuf pool : n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port
> will
> pair with itself.
>
> Configuring Port 0 (socket 0)
> iavf_execute_vf_cmd(): No response for cmd 28
> iavf_disable_vlan_strip(): Failed to execute command of
> OP_DISABLE_VLAN_STRIPPING
> iavf_init_rss(): RSS is enabled by PF by default
> iavf_execute_vf_cmd(): No response for cmd 24
> iavf_configure_rss_lut(): Failed to execute command of OP_CONFIG_RSS_LUT
> iavf_dev_configure(): configure rss failed
> Port0 dev_configure = -1
> Fail to configure port 0
> EAL: Error - exiting with code: 1
>   Cause: Start ports failed
>
> Thank you,
> Mus>
>
> --
> You are receiving this mail because:
> You are the assignee for the bug.


[dpdk-dev] KNI compilation

2021-08-30 Thread spyroot
Hi Folks,

I have been troubleshooting for a couple of hours but it is still not clear
why exactly the build process is failing on the KNI module.  I've tried to
disable KNI, but it is the same issue.  Any ideas what the root cause might
be?  The target is x86_64.


[1/2] Generating rte_kni with a custom command
FAILED: kernel/linux/kni/rte_kni.ko
make -j4 -C /lib/modules/5.10.52-3.ph4-esx/build
M=/root/dpdk-stable-20.11.2/build/kernel/linux/kni
src=/root/dpdk-stable-20.11.2/kernel/linux/kni 'MODULE_CFLAGS=-include
/root/dpdk-stable-20.11.2/config/rte_config.h
-I/root/dpdk-stable-20.11.2/lib/librte_eal/include
-I/root/dpdk-stable-20.11.2/lib/librte_kni
-I/root/dpdk-stable-20.11.2/build
-I/root/dpdk-stable-20.11.2/kernel/linux/kni' modules
make: Entering directory '/usr/src/linux-headers-5.10.52-3.ph4-esx'
make[1]: *** No rule to make target
'/root/dpdk-stable-20.11.2/build/kernel/linux/kni/kni_misc.o', needed by
'/root/dpdk-stable-20.11.2/build/kernel/linux/kni/rte_kni.o'.  Stop.
make: *** [Makefile:1821: /root/dpdk-stable-20.11.2/build/kernel/linux/kni]
Error 2
make: Leaving directory '/usr/src/linux-headers-5.10.52-3.ph4-esx'
ninja: build stopped: subcommand failed.

Kind Regards,
Mus>


Re: [dpdk-dev] KNI compilation

2021-08-30 Thread spyroot
Hi Ferruh,

Sorry, I forgot to mention: yes, I was compiling with -Denable_kmods=true.

I have partially found the root cause.  The target system is Photon OS (a
VMware distro).  It ships four kernel variants: regular, RT, ESX, and
security.  I was using the ESX variant, which is optimized for ESXi, but it
looks like something is disabled in that kernel that prevents KNI from
compiling; when I use the regular kernel source tree, it does compile.

I was trying to figure out what KNI actually requires from the kernel source
tree, or what instrumentation I can use to see more verbose errors; ninja -v
doesn't really provide details.
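
As a rough, assumed sequence (the paths and kernel version are just the ones
from the failure log, shown as an example), the failing KNI step can be
re-run by hand with Kbuild's V=1 switch to get more verbose output than
ninja -v provides:

# configure with kernel modules enabled and build
meson setup build -Denable_kmods=true
ninja -C build

# on failure, re-run only the KNI kernel-module step verbosely; V=1 makes
# Kbuild print the full commands and include paths it is using before the
# terse "No rule to make target" message
make -C /lib/modules/5.10.52-3.ph4-esx/build \
     M=/root/dpdk-stable-20.11.2/build/kernel/linux/kni \
     src=/root/dpdk-stable-20.11.2/kernel/linux/kni V=1 modules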

Thank you very much

On Mon, Aug 30, 2021 at 9:44 PM Ferruh Yigit  wrote:

> On 8/30/2021 5:47 PM, spyroot wrote:
> > Hi Folks,
> >
> > I have been troubleshooting for a couple of hours but it is still not clear
> > why exactly the build process is failing on the KNI module.  I've tried to
> > disable KNI, but it is the same issue.  Any ideas what the root cause might
> > be?  The target is x86_64.
> >
> >
> > [1/2] Generating rte_kni with a custom command
> > FAILED: kernel/linux/kni/rte_kni.ko
> > make -j4 -C /lib/modules/5.10.52-3.ph4-esx/build
> > M=/root/dpdk-stable-20.11.2/build/kernel/linux/kni
> > src=/root/dpdk-stable-20.11.2/kernel/linux/kni 'MODULE_CFLAGS=-include
> > /root/dpdk-stable-20.11.2/config/rte_config.h
> > -I/root/dpdk-stable-20.11.2/lib/librte_eal/include
> > -I/root/dpdk-stable-20.11.2/lib/librte_kni
> > -I/root/dpdk-stable-20.11.2/build
> > -I/root/dpdk-stable-20.11.2/kernel/linux/kni' modules
> > make: Entering directory '/usr/src/linux-headers-5.10.52-3.ph4-esx'
> > make[1]: *** No rule to make target
> > '/root/dpdk-stable-20.11.2/build/kernel/linux/kni/kni_misc.o', needed by
> > '/root/dpdk-stable-20.11.2/build/kernel/linux/kni/rte_kni.o'.  Stop.
> > make: *** [Makefile:1821:
> /root/dpdk-stable-20.11.2/build/kernel/linux/kni]
> > Error 2
> > make: Leaving directory '/usr/src/linux-headers-5.10.52-3.ph4-esx'
> > ninja: build stopped: subcommand failed.
> >
> > Kind Regards,
> > Mus>
> >
>
> Hi spyroot (Mus> ?),
>
> Kernel modules are already disabled by default; you should be enabling them
> via the '-Denable_kmods=true' meson option. If you want to disable KNI,
> simply not enabling kmods should be enough.
>
> Btw, I have tried and am not able to reproduce the issue with 20.11.2 on my
> platform; KNI compiles fine.
> Can you confirm you have the kernel sources installed on your system?
> Or, to be more specific, does the '/usr/src/linux-headers-5.10.52-3.ph4-esx'
> path contain the kernel source code?
>
> And can you please share your full build commands?
>
> Cheers,
> ferruh
>


Re: [PATCH v3 2/2] net/vmxnet3: support larger MTU with version 6

2025-03-30 Thread spyroot
Hi Folks, how do you bump the VMXNET3 version, and do we have a VMXNET3 unit
test for RSS / max MTU and 32 queues?
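
For what it's worth, a minimal sketch (illustrative only, not an existing
unit test) of checking which MTU range the PMD advertises after this fix,
using the standard ethdev API:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the MTU range advertised in dev_info for a port; with virtual HW
 * version 6 the vmxnet3 PMD should now report the larger max MTU. */
static void
print_mtu_range(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return;
	printf("port %u: min_mtu=%u max_mtu=%u\n",
	       port_id, dev_info.min_mtu, dev_info.max_mtu);
}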

On Wed, Nov 6, 2024 at 8:00 AM Ferruh Yigit  wrote:

> On 11/4/2024 9:44 PM, Jochen Behrens wrote:
>
> > On 11/4/24 02:52, Morten Brørup wrote:
> >> Virtual hardware version 6 supports larger max MTU, but the device
> >> information (dev_info) did not reflect this, so it could not be used.
> >>
> >> Fixes: b1584dd0affe ("net/vmxnet3: support version 6")
> >>
> >> Signed-off-by: Morten Brørup 
> >
> > Acked-by: Jochen Behrens jochen.behr...@broadcom.com
> >
>
> Applied to dpdk-next-net/main, thanks.
>


Re: IAVF/ICEN Intel 810 (i.e SRIOV case) 16 queue and RSS dpdk-tested report single queue pps / stats

2025-04-22 Thread spyroot
Observation:

With two instances of testpmd, only one report shows correct stats when you
run 16 RX queues with the default RSS config (i.e., rss-ip, rss-udp, etc.);
you only see the counter for a single queue.

How do I know? I took the last report of the first testpmd instance at the
end of the run, summed all the bytes received, and, knowing how long the run
took, computed the pps; it correlated with the PPS seen on the actual switch.

So either ICE (the PF driver) or IAVF doesn't report per-queue stats (only
for queue 0).
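
To narrow this down, a small sketch (illustrative only; the helper name and
usage are mine) that dumps the per-queue RX counters from rte_eth_stats,
which are the same ethdev counters testpmd reads; note that q_ipackets[]
only covers the first RTE_ETHDEV_QUEUE_STAT_CNTRS queues (16 by default):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Dump per-queue RX counters to see whether traffic really lands on a
 * single queue or the per-queue stats simply are not filled in. */
static void
dump_rx_queue_stats(uint16_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_stats stats;
	uint16_t q;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;
	for (q = 0; q < nb_rxq && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
		printf("rxq %u: packets=%" PRIu64 " bytes=%" PRIu64 "\n",
		       q, stats.q_ipackets[q], stats.q_ibytes[q]);
}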




On Fri, Apr 18, 2025 at 10:30 PM spyroot  wrote:

> Hi Folks,
>
>
> I am observing that DPDK testpmd with the IAVF PMD (ICE PF driver) reports
> statistics incorrectly when the TX side generates a UDP flow that randomizes
> or increments IP/UDP header data (IP/port, etc.).
>
> I tested all 23.x stable releases and all branches.
>
>
> - If I use a *single* flow (on the TX side all tuples are the same, so the
> RX hash produces the same result for every packet), there is no issue.
>
> On the RX side I see zero packet drops and the correct pps value reported
> by testpmd.
>
>
> - If I increase the number of flows (varying IP/UDP, etc.), the PPS and
> byte/packet counters report only a single queue; i.e., it looks to me like
> only a default queue 0 is counted and the remaining 15 are skipped (in my
> case --rxq=16).  It could be IAVF doing that, or ICE reporting it; I'm not
> sure.
>
>
> For example, the counter I'm referring to is the testpmd Rx-pps counter.
>
>
> Rx-pps: 4158531 Rx-bps: 2129167832
>
>
> I'm also observing a PMD "failed to fetch stats" error message:
>
>
> iavf_query_stats(): fail to execute command OP_GET_STATS
>
> iavf_dev_stats_get(): Get statistics failed
>
>
> My question:
>
> If I have two instances,
>
> testpmd --allow X
>
> testpmd --allow Y
>
> where X is the PCI address of VF X and Y the PCI address of VF Y from the
> same PF, I expect to see the total stats (pps/bytes, etc., combined over
> all 16 queues of port 0), i.e. RX-PPS and bytes per port, on both
> instances.
>
> Yes/no?
>
>
> Has anyone had a similar issue in the past?
>
>
> Thank you,
>
> MB
>
>


810 VFIO , SRIOV, multi-process stats read DPDK testpmd.

2025-04-14 Thread spyroot
Hi Folks,

I'm observing some unexpected behavior related to how statistics are
retrieved from a Physical Function (PF) on an Intel E810 NIC.

*Scenario:* I have two dpdk-testpmd instances running in separate
Kubernetes pods (same worker node). Each instance uses the -a flag to bind
to a different VF (i.e., so each sees a consistent port ID 0).

*Questions:*

1. *PF Statistics and 64B Line Rate:*
   I'm noticing that the RX packet-per-second value reported on the PF side
   for a given VF is *higher than the theoretical maximum* for 64-byte
   packets.
   - Does the E810 PMD apply any kind of optimization, offloading, or fast
     path processing when two VFs (e.g., A and B) are on the same PF?

2. *Concurrent Stats Polling:*
   - When two separate dpdk-testpmd processes are running (in pod A and
     pod B), does the PMD or driver layer support concurrent reading of PF
     statistics?
   - Is there any locking or synchronization mechanism involved when
     multiple testpmd instances attempt to pull stats from the same PF
     simultaneously? (In essence, does the firmware/PF support concurrent
     reads?)

Thank you,

cd /usr/local/bin && dpdk-testpmd \
  --main-lcore \$main -l \$cores -n 4 \
  --socket-mem 2048 \
  --proc-type auto --file-prefix testpmd_rx0 \
  -a \$PCIDEVICE_INTEL_COM_DPDK \
  -- --forward-mode=rxonly --auto-start --stats-period 1'"

cd /usr/local/bin && dpdk-testpmd \
  --main-lcore \$main -l \$cores -n 4 \
  --socket-mem 2048 \
  --proc-type auto --file-prefix testpmd_rx1 \
  -a \$PCIDEVICE_INTEL_COM_DPDK \
  -- --forward-mode=rxonly --auto-start --stats-period 1'"


Re: 810 VFIO , SRIOV, multi-process stats read DPDK testpmd.

2025-04-29 Thread spyroot
Thank you very much, Bruce, for taking the time.

I'm observing odd behavior where the PF either does not show stats for some
VF or leaks VF stats from one VF to another.

I.e., generator A sent 100 packets to VF A and generator B sent 100 to VF B;
VF A's stats and the PF's per-VF (ice) stats show 200 packets.

(Note that each flow is destined to the corresponding VF's MAC, and prior to
generation the MAC entry is confirmed on the L2 switch and the TX VF, so the
packets are neither flooded nor unknown-unicast frames.)

In the second condition, the PF shows zero stats for one of the VFs, but the
two RX instances of testpmd show correct statistics (i.e., A should see 100
and B should see 100; A and B each report 100, but the PF reports 0 for
either VF A or VF B).

The results are the same whether tested on the same k8s node (i.e.,
co-located with the TX pods) or on two different k8s nodes.

Condition two is observed with a single flow (i.e., the hash on the RX side
lands packets on the same RX queue): the PF doesn't account for or report
correct values if there are two readers (i.e., two DPDK instances).

(Note: TX reports correct stats for both A and B.)

I.e., A sent 100, B sent 100, and the switch shows 200.

TX A - VF A -- (L2 switch) -- (RX side) VF A -- testpmd
TX B - VF B -- (L2 switch) -- (RX side) VF B -- testpmd

On the TX side there are three points of measurement:
VF TX stats, PF TX stats, and the L2 switch (total packets, pps, etc.) on
the TX port and the outgoing port RX.
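
As an extra cross-check from the PF host (an assumption on my side: this
relies on iproute2 and the ice PF driver exposing per-VF counters, and the
interface name is only an example), the per-VF counters can also be read
outside DPDK:

# example only; replace ens1f0 with the actual PF netdev name
ip -s -d link show dev ens1f0
# with per-VF stats support, each "vf N" entry lists its own RX/TX bytes
# and packets, which can be compared against the testpmd and switch
# counters above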

Kind Regards,
Mus

On Tue, Apr 29, 2025 at 5:22 PM Bruce Richardson 
wrote:

> On Mon, Apr 14, 2025 at 07:21:57PM +0400, spyroot wrote:
> >Hi Folks,
> >
> >I'm observing some unexpected behavior related to how statistics are
> >retrieved from a Physical Function (PF) on an Intel 810 NIC.
> >
> >Scenario: I have two dpdk-testpmd instances running in separate
> >Kubernetes pods (same worker node). Each instance uses the -a flag to
> >bind to a different VF. (i.e to have consistent port id 0)
> >
> >Questions:
> > 1. PF Statistics and 64B Line Rate:
> >    I'm noticing that the RX packet-per-second value reported on the PF
> >    side for a given VF is higher than the theoretical maximum for
> >    64-byte packets.
> >      + Does the E810 PMD apply any kind of optimization, offloading,
> >        or fast path processing when two VFs (e.g., A and B) are on the
> >        same PF?
>
> This wouldn't be something that the PMD does. The forwarding from VF to VF,
> or PF to VF would happen internally in the hardware.
>
> > 2. Concurrent Stats Polling:
> >      + When two separate dpdk-testpmd processes are running (in pod A
> >        and pod B), does the PMD or driver layer support concurrent
> >        reading of PF statistics?
>
> Looking at the iavf PMD, the reading of stats is done by sending an adminq
> message to the PF and reading the response. Any serialization of stats
> reading would then be done at the PF or adminq management level. The VF
> should not need to worry about whether another VF is reading the stats at
> the same time. [In fact it would be a serious bug if one VF needed to be
> aware of what other VFs were doing, since different VFs could be attached
> to different virtual machines which should be isolated from each other]
>
> >      + Is there any locking or synchronization mechanism involved when
> >        multiple testpmd instances attempt to pull stats from the same
> >        PF simultaneously? (In essence, does the firmware/PF support
> >        concurrent reads?)
> >
>
> See above.
>
> /Bruce
>
> >Thank you,
> > cd /usr/local/bin && dpdk-testpmd \
> >   --main-lcore \$main -l \$cores -n 4 \
> >   --socket-mem 2048 \
> >   --proc-type auto --file-prefix testpmd_rx0 \
> >   -a \$PCIDEVICE_INTEL_COM_DPDK \
> >   -- --forward-mode=rxonly --auto-start --stats-period 1'"
> >
> > cd /usr/local/bin && dpdk-testpmd \
> >   --main-lcore \$main -l \$cores -n 4 \
> >   --socket-mem 2048 \
> >   --proc-type auto --file-prefix testpmd_rx1 \
> >   -a \$PCIDEVICE_INTEL_COM_DPDK \
> >   -- --forward-mode=rxonly --auto-start --stats-period 1'"
>