> > Are you using the same binding as mentioned in previous mail sent by you?
> > It might be caused by cpu contention between pktgen and vhost, could you
> > please try to run pktgen from another idle cpu by adjusting the binding?
I don't think that's the case -- I can cause pktgen to hang in the guest
without any cpu binding, and even with vhost disabled.

> BTW, did you see any improvement when running pktgen from the host if no
> regression was found? Since this can be reproduced with only 1 vcpu for
> guest, may you try this bind? This might help simplify the problem.
>
> vcpu0  -> cpu2
> vhost  -> cpu3
> pktgen -> cpu1

Yes -- I ran the pktgen test from host to guest with the binding described.
I see an approx 5% increase in throughput from 4.12 -> 4.13. Some numbers:

host-4.12: 1384486.2 pps  663.8 MB/sec
host-4.13: 1434598.6 pps  688.2 MB/sec
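
For reference, the pinning and the host-side pktgen run looked roughly like
the sketch below. This is only an illustration of the binding described
above; the qemu process match, tap device name, destination MAC/IP, packet
size and count are placeholders, not values from the actual test.

#!/bin/sh
# Sketch of the vcpu/vhost/pktgen binding for the host->guest run.

QEMU_PID=$(pgrep -f qemu-system | head -n 1)

# vcpu0 -> cpu2: with a single-vcpu guest, simply pin all qemu threads.
for tid in /proc/"$QEMU_PID"/task/*; do
    taskset -cp 2 "$(basename "$tid")"
done

# vhost -> cpu3: the vhost kernel thread is named vhost-<qemu pid>.
taskset -cp 3 "$(pgrep "vhost-$QEMU_PID" | head -n 1)"

# pktgen -> cpu1: drive the traffic from pktgen's per-cpu thread kpktgend_1.
modprobe pktgen
echo "add_device tap0"           > /proc/net/pktgen/kpktgend_1
echo "count 10000000"            > /proc/net/pktgen/tap0
echo "pkt_size 500"              > /proc/net/pktgen/tap0
echo "dst 192.168.122.2"         > /proc/net/pktgen/tap0
echo "dst_mac 52:54:00:12:34:56" > /proc/net/pktgen/tap0
echo "start"                     > /proc/net/pktgen/pgctrl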