I have added an option to configure the number of buffers
used to store the trace data for packet delays.
The complete command to start netem with a trace file is:

tc qdisc add dev eth1 root netem trace path/to/trace/file.bin buf 3 loops 1 0

where
  buf:   the number of buffers to use
  loops: the number of times to loop through the trace file

The last argument is optional and specifies whether the default
action is to drop packets or to 0-delay them.
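As a sketch, an invocation that uses 4 buffers and plays the trace
twice, with the optional last argument left out so the built-in
default applies, would look like this (the trace file path below is
just a placeholder):

# /tmp/delays.bin is a placeholder; substitute your own binary trace file
tc qdisc add dev eth1 root netem trace /tmp/delays.bin buf 4 loops 2

The qdisc can be removed again with the standard:

tc qdisc del dev eth1 root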

The patches are available at:
http://www.tcn.hypert.net/tcn_kernel_2_6_23_confbuf
http://www.tcn.hypert.net/tcn_iproute2_2_6_23_confbuf

I'm looking forward to your comments!
Thanks!
Ariane


Ben Greear wrote:
Ariane Keller wrote:

Yes, for short-term starvation it certainly helps.
But I'm still not convinced that it is really necessary to add more buffers, because I'm not sure whether the bottleneck is really the loading of data from user space to kernel space. Some basic tests have shown that the kernel starts losing packets at approximately the same packet rate regardless of whether we use plain netem or netem with the trace extension. But if you have contrary experience, I'm happy to add a parameter that defines the number of buffers.

I have no numbers, so if you think it works, then that is fine with me.

If you actually run out of the trace buffers, do you just continue to
run with the last settings?  If so, that would keep up throughput
even if you are out of trace buffers...

What rates do you see, btw?  (pps, bps).

Thanks,
Ben


--
Ariane Keller
Communication Systems Research Group, ETH Zurich
Web: http://www.csg.ethz.ch/people/arkeller
Office: ETZ G 60.1, Gloriastrasse 35, 8092 Zurich
