Hi all,

These testing instructions describe how to emulate a soft-RoCE
device on a normal NIC (no RDMA hardware). We have finished a vhost-user RDMA
device demo, which supports RDMA features such as CM and the UC/UD
QP types.

The testing instructions for the demo are as follows:

1. Test Environment Configuration
Hardware Environment
Servers: 1 server

CPU: HUAWEI Kunpeng 920 (96 cores)

Memory: 3 TB DDR4

NIC: TAP (paired virtio-net device for RDMA)

Software Environment
Host OS: openEuler 23.09 (kernel 6.4.0-10.1.0.20.oe2309.aarch64)

Guest kernel: linux-6.16.8 (with the vrdma kernel module)

QEMU: 9.0.2 (compiled with vhost-user-rdma virtual device support)

DPDK: 24.07.0-rc2

Dependencies:

        rdma-core
        
        rdma_rxe

        libibverbs-dev
        
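Note on the rdma_rxe dependency: it is the in-kernel soft-RoCE driver. If a
soft-RoCE device on a plain NIC is needed (e.g. on the host side for
comparison), it can be attached with the iproute2 rdma tool. A minimal sketch,
assuming the interface name eth0 (a placeholder for your NIC):

# Load the soft-RoCE driver and attach it to a plain Ethernet interface
sudo modprobe rdma_rxe
sudo rdma link add rxe0 type rxe netdev eth0

# Verify the link was created
rdma link show
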
2. Test Procedures
a. Start the DPDK vhost-user-rdma backend first:
1). Configure Hugepages
   echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
2). Start the application
  /DPDKDIR/build/examples/dpdk-vhost_user_rdma -l 1-4 -n 4 --vdev "net_tap0" -- \
  --socket-file /tmp/vhost-rdma0
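
To confirm that the backend came up, a quick sanity check (paths match the
commands above):

# Hugepages actually reserved?
grep HugePages_Total /proc/meminfo

# vhost-user socket created by the backend?
ls -l /tmp/vhost-rdma0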

b. Boot the guest kernel with QEMU; command line:
...
-netdev tap,id=hostnet1,ifname=tap1,script=no,downscript=no \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:14:72:30,bus=pci.3,addr=0x0.0,multifunction=on \
-chardev socket,path=/tmp/vhost-rdma0,id=vurdma \
-device vhost-user-rdma-pci,bus=pci.3,addr=0x0.1,page-per-vq=on,disable-legacy=on,chardev=vurdma
...
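
Note that vhost-user devices require the guest RAM to be shared with the
backend process. If it is not already part of the elided command line above,
something along these lines is needed (a sketch; the object id, size, and
hugepage path are placeholders for your setup):

-object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0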

c. Guest Kernel Module Loading and Validation
# Load the vrdma kernel module
sudo modprobe vrdma

# Verify module loading
lsmod | grep vrdma

# Check kernel logs
dmesg | grep -i vrdma

# Expected output:
[    4.935473] vrdma_init_device: Initializing vRDMA device with max_cq=64, max_qp=64
[    4.949888] [vrdma_init_device]: Successfully initialized, last qp_vq index=192
[    4.949907] [vrdma_init_netdev]: Found paired net_device 'enp3s0f0' (on 0000:03:00.0)
[    4.949924] Bound vRDMA device to net_device 'enp3s0f0'
[    5.026032] vrdma virtio2: vrdma_alloc_pd: allocated PD 1
[    5.028006] Successfully registered vRDMA device as 'vrdma0'
[    5.028020] [vrdma_probe]: Successfully probed VirtIO RDMA device (index=2)
[    5.028104] VirtIO RDMA driver initialized successfully
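
As an additional check, the registered device should also appear in sysfs
(the standard location for RDMA devices):

# The vRDMA device registered above should be listed here
ls /sys/class/infiniband/
# Expected, per the dmesg output: vrdma0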

d. Inside the VM, the RDMA device nodes are generated in /dev/infiniband:
[root@localhost ~]# ll -h /dev/infiniband/
total 0
drwxr-xr-x. 2 root root       60 Dec 17 11:24 by-ibdev
drwxr-xr-x. 2 root root       60 Dec 17 11:24 by-path
crw-rw-rw-. 1 root root  10, 259 Dec 17 11:24 rdma_cm
crw-rw-rw-. 1 root root 231, 192 Dec 17 11:24 uverbs0
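
To confirm that userspace can open the device, the standard rdma-core
utilities can be used (output will vary with the device attributes):

# List userspace-visible RDMA devices (from rdma-core)
ibv_devices

# Query device attributes; 'vrdma0' is the name registered above
ibv_devinfo -d vrdma0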

e. The following are planned for a future version:
1). SRQ support
2). DPDK support for physical RDMA NICs to handle the datapath between the
frontend and backend
3). Reset of VirtQueue
4). Increase size of VirtQueue for PCI transport
5). Performance Testing

f. Test Results
1). Functional Test Results:
Test item               Result  Notes
Kernel module loading   PASS    Module loaded without errors
DPDK startup            PASS    vhost-user-rdma backend initialized
QEMU VM launch          PASS    VM booted using the RDMA device
Network connectivity    PASS    Host-VM communication established
RDMA device detection   PASS    Virtual RDMA device recognized
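
Since the demo supports CM and the UC/UD QP types, the standard rdma-core
example binaries make a quick functional smoke test. A sketch, assuming they
are installed on both ends; <server_ip> is a placeholder for your setup:

# RDMA CM smoke test: start the rping server on the peer ...
rping -s -a <server_ip> -v
# ... then run the client from the VM
rping -c -a <server_ip> -v -C 4

# UD QP smoke test with the libibverbs pingpong example
ibv_ud_pingpong -d vrdma0              # server side
ibv_ud_pingpong -d vrdma0 <server_ip>  # client side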

g. Test Conclusion
1). Full functional compliance with specifications
2). Stable operation under extended stress conditions

Recommendations:
1). Optimize memory copy paths for higher throughput
2). Enhance error handling and recovery mechanisms
