Re: [tcpdump-workers] Best OS / Distribution for gigabit capture?

2011-02-08 Thread M. V.

>> As I mentioned in my previous mail (with the title "HUGE packet-drop"), I'm
>> having a problem trying to dump gigabit traffic to hard disk with tcpdump on
>> Debian 5.0. I tried almost everything but had no success, so I decided to
>> start over:
>>
>> *) If anyone has experience with successful gigabit capture, what combination
>> of "Operating-System / Distribution / Kernel Version / libpcap version / ..."
>> do you suggest for maximum zero-packet-loss capture?

> What are you going to do with the packets?
> Can you process the packets that you capture with few enough
> CPU cycles that you never cause backlog?

hi,

Right now, no extra processing is being done on the packets. tcpdump (or
dumpcap) simply dumps the received packets (whole packets, with snaplen = 0)
into file(s). My HP server's HDD performance is good, and I also tried dumping
to SSD and RAM disk and increased the kernel buffer sizes, but my best
zero-packet-loss result is around 350 Mbps (this result is with libpcap 0.9.8;
I got much worse results with libpcap 1.0+).
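
For reference, the kind of invocation being described might look like the
following sketch; the interface name, buffer sizes, and file paths are
placeholders, not values from the original mail:

```shell
# Full packets (-s 0), a large kernel capture buffer (-B, in KiB for
# Linux tcpdump >= 4.0), and 1 GB file rotation across 10 files.
tcpdump -i eth0 -n -s 0 -B 524288 -C 1000 -W 10 -w /capture/trace

# dumpcap's buffer option (-B) takes MiB; -b filesize is in kB.
dumpcap -i eth0 -B 512 -b filesize:1000000 -w /capture/trace.pcap
```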



  -
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] HUGE packet-drop

2011-02-08 Thread rixed
-[ Mon, Feb 07, 2011 at 06:40:40PM +0100, ri...@happyleptic.org ]
> And I also experienced a huge packet drop whenever the throughput rose
> above 50MB/sec (up to 100MB/sec). By huge, I mean drops amounting to
> almost 50% of the received packets (i.e. one third of all packets were
> lost according to pcap stats).

So it appears that the packet drop was due to too much processing of the
packets. If I alleviate the per-packet processing, the number of dropped
packets can be made as low as desired.

So much for the noise.
