Re: [tcpdump-workers] reconstruct HTTP requests in custom sniffer

2011-01-07 Thread Cedric Cellier

> I am asked to write a custom sniffer with libpcap on Linux that has to
> handle a load of 50.000 packets per second. The sniffer has to detect all
> HTTP requests and dump the URI with additional information, such as
> request size and possibly response time/size.

Looks very similar to:

http://github.com/securactive/junkie

if you can live with the AGPL, maybe we could join forces?

> Regarding the load of 50.000 packets a second, is this expected to be a 
> problem?

Junkie handles this packet rate (quite a bit more, actually) on one of our test 
probes running on an 8-core PC, with plenty of CPU left. So I bet this is not a 
problem.

-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] reconstruct HTTP requests in custom sniffer

2011-01-10 Thread Cedric Cellier
-[ Sun, Jan 09, 2011 at 02:19:53PM +0900, Andrej van der Zee ]
> Is there anything to say about a rough time-schedule?

Support for TCP segmentation, as well as new parsers that use this
feature, should be pushed before the end of the week. Concerning the capture of
POST messages, we should probably start working on this in February (this
is a small company, so no schedule is ever definitive; no promise).

> In some of our projects, we are only interested in the length of HTTP
> requests and responses; therefore reassembling the whole requests would be
> overkill, as the segment lengths can be read from the TCP headers of packets
> in a TCP stream, obviously.

Yes, in theory we could follow the sizes associated with each request quite
precisely even with truncated packets, as long as the "Content-Length"
header lines are present. To be honest, truncated packets were
introduced very recently and have not been tested much (since we do not
require this feature), so I'm not certain junkie is very robust in this
regard; but I'm going to check.

> In other projects, we definitely have to access the POST data and need
> full reassembly. Depending on the project, a different parsing behavior
> is wanted. Will such behavior be configurable without having to write my
> own patches against junkie?

What we need here is to be able to tell junkie for which hosts we want to
keep all queries (including POST data). At first sight, I plan to
let junkie reassemble everything HTTP and copy all HTTP requests in
whole, then drop everything I do not need in the callback that's called
after the parse. I find this approach simpler, and I don't think we require
extra speed in the parse phase anyway. It will still be possible to
optimize this later.




Re: [tcpdump-workers] HUGE packet-drop

2011-01-24 Thread Cedric Cellier
Maybe you use a custom kernel lacking the option to enable mmap sharing
of packets from kernel to userland ?




Re: [tcpdump-workers] capturing on both interfaces simultaneously

2011-12-10 Thread Cedric Cellier
> I got it to work.
(...)
> > default:           /* We got traffic */
> > pcap_dispatch(pcap0,-1, (void *) packet_callback, NULL);
> > pcap_dispatch(pcap1,-1, (void *) packet_callback2, NULL);

So that others may benefit from it in the future, I
guess your fixed version looks like:

default:
  if (t == pcap0) pcap_dispatch(pcap0, ...);
  else if (t == pcap1) pcap_dispatch(pcap1, ...);

