[tcpdump-workers] Custom decoding offset? (for batman-adv)

2020-11-06 Thread Linus Lüssing via tcpdump-workers
--- Begin Message ---
Hi!

I would like to use tcpdump and libpcap to filter and examine
batman-adv packets. batman-adv is a mesh routing protocol which
encapsulates layer 2 ethernet frames.

I already know how to identify batman-adv packets via raw ether
filters. What I would additionally like to do is filter by fields
of the inner ethernet header.

I saw in the manpage that for various keywords the decoding offset
is modified for the remainder of the expression.

My question is, is there a way to specify a custom decoding offset
for an encapsulating protocol that is not known by libpcap yet,
like batman-adv?

Or would I need to add batman-adv support to libpcap?

Regards, Linus


PS: The closest I found online so far is this:

https://serverfault.com/questions/617066/tcpdump-decode-packet-starting-at-non-zero-offset

Which suggests something like:

$ tcpdump -i eth0 -w - | editcap -C 82 - - | tcpdump -r -

However, ideally I would like to use a custom offset in a project
based on libpcap:

https://github.com/lemoer/bpfcountd

Where the tcpdump/editcap approach would currently not work.

So native support for a custom decoding offset in a filter
expression would be great.
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] Custom decoding offset? (for batman-adv)

2020-11-16 Thread Linus Lüssing via tcpdump-workers
--- Begin Message ---
On Fri, Nov 06, 2020 at 02:36:13PM +, Denis Ovsienko via tcpdump-workers 
wrote:
> Date: Fri, 6 Nov 2020 14:36:13 +
> From: Denis Ovsienko 
> To: Linus Lüssing via tcpdump-workers 
> Subject: Re: [tcpdump-workers] Custom decoding offset? (for batman-adv)
> [...]
> editcap would possibly do as a one-time hack given every packet is a
> batman-adv packet, but a clean solution would likely need to introduce a
> keyword into pcap filter language along the lines of "pppoed" and
> "pppoes":
> 
>pppoes [session_id]
>   [...]

Thanks for the pointers! I got a simple "batadv" keyword working,
modelled on "pppoed", which checks for the ethertype in the same way.

Next I would like to further extend it with checks for two fields
in the batman-adv header, the version and the type field. From a
user perspective I would find the following syntax the easiest:

  batadv [version UINT8] [type UINT8] ...

Ideally it would be possible to give the version and type
attributes in either order. And the type attribute should only be
accepted if "version" is either 14 or 15 (these are the only two
versions in use these days, and 14 is deprecated).

Later I would also like to add more pairs, for instance [ttl
UINT8]. However, this one is only available for some version/type
combinations.


I couldn't find an example of this kind of syntax in the pcap-filter
manpage. Or is the only way supported (or preferred) by libpcap to
have a separate keyword for each field to test, as with "wlan addr1",
"wlan addr2", etc.?

Regards
--- End Message ---


[tcpdump-workers] Performance impact with multiple pcap handlers on Linux

2020-12-22 Thread Linus Lüssing via tcpdump-workers
--- Begin Message ---
Hi,

I was experimenting a bit with migrating from the use of
pcap_offline_filter() to pcap_setfilter().

I was a bit surprised that installing for instance 500 pcap
handlers with a BPF rule "arp" via pcap_setfilter() reduced
the TCP performance of iperf3 over veth interfaces from 73.8 Gbits/sec
to 5.39 Gbits/sec. Using only one or even five handlers seemed
fine (71.7 Gbits/sec and 70.3 Gbits/sec).

Is that expected?

Full test setup description and more detailed results can be found
here: https://github.com/lemoer/bpfcountd/pull/8

Regards, Linus

PS: And I was also surprised that there seems to be a limit of
only 510 pcap handlers on Linux.
--- End Message ---

Re: [tcpdump-workers] Performance impact with multiple pcap handlers on Linux

2020-12-22 Thread Linus Lüssing via tcpdump-workers
--- Begin Message ---
On Tue, Dec 22, 2020 at 02:28:17PM -0800, Guy Harris wrote:
> On Dec 22, 2020, at 2:05 PM, Linus Lüssing via tcpdump-workers 
>  wrote:
> 
> > I was experimenting a bit with migrating from the use of
> > pcap_offline_filter() to pcap_setfilter().
> > 
> > I was a bit surprised that installing for instance 500 pcap
> > handlers
> 
> What is a "pcap handler" in this context?  An open live-capture pcap_t?
> 
> > with a BPF rule "arp" via pcap_setfilter() reduced
> > the TCP performance of iperf3 over veth interfaces from 73.8 Gbits/sec
> > to 5.39 Gbits/sec. Using only one or even five handlers seemed
> > fine (71.7 Gbits/sec and 70.3 Gbits/sec).
> > 
> > Is that expected?
> > 
> > Full test setup description and more detailed results can be found
> > here: https://github.com/lemoer/bpfcountd/pull/8
> 
> That talks about numbers of "rules" rather than "handlers".  It does speak of 
> "pcap *handles*"; did you mean "handles", rather than "handlers"?

Sorry, right, I meant pcap handles everywhere.

So far the bpfcountd code uses one pcap_t handle created via a
single pcap_open_live() call. For each received packet it then
iterates over a list of user-specified filter expressions, applies
pcap_offline_filter() for each filter to the packet, and counts the
number of packets and packet bytes that matched each filter
expression.

> 
> Do those "rules" correspond to items in the filter expression that's compiled 
> into BPF code, or do they correspond to open `pcap_t`s?  If a "rule" 
> corresponds to a "handle", then does it correspond to an open pcap_t?
> 
> Or do they correspond to an entire filter expression?

What I meant by "rule" was an entire filter expression. The user
specifies a list of filter expressions, and bpfcountd counts, for
each of them, the number of packets and the sum of packet bytes
that matched.

Basically we want to do live measurements of the overhead of the mesh
routing protocol, and to measure and dissect the layer 2 broadcast
traffic: how much ARP, DHCP, ICMPv6 NS/NA/RS/RA, MDNS, LLDP overhead
etc. we have.

> 
> Does this change involve replacing a *single* pcap_t, on which you use 
> pcap_offline_filter() with multiple different filter expressions, with 
> *multiple* pcap_t's, with each one having a separate filter, set with 
> pcap_setfilter()?  If so, note that this involves replacing a single file 
> descriptor with multiple file descriptors, and replacing a single ring buffer 
> into which the kernel puts captured packets with multiple ring buffers into 
> *each* of which the kernel puts captured packets, which increases the amount 
> of work the kernel does.

Correct. I tried to replace the single pcap_t with multiple
pcap_t's, one for each filter expression the user specified, then
used pcap_setfilter() on each pcap_t and removed the userspace
filtering via pcap_offline_filter().

The idea was to improve performance by (a) avoiding copying the
actual packet data to userspace, and (b) hopefully ensuring that
traffic which does not match any filter expression is no longer
impacted much by running bpfcountd / libpcap.

Right, for matching, captured traffic the kernel probably does
more work with multiple ring buffers, as you described. But with
bpfcountd we only want to match, measure and dissect broadcast and
mesh protocol traffic, for which we expect matching traffic rates
of only about 100 to 500 kbit/s.

Unicast IP traffic at much higher rates will not be matched, and
the idea/hope behind these changes was to leave the IP unicast
performance mostly untouched while still measuring and dissecting
the other, non-IP-unicast traffic.

> 
> > PS: And I was also surprised that there seems to be a limit of
> > only 510 pcap handlers on Linux.
> 
> "handlers" or "handles"?
> 
> If it's "handles", as in "pcap_t's open for live capture", and if you're 
> switching from a single pcap_t to multiple pcap_t's, that means using more 
> file descriptors (so that you may eventually run out) and more ring buffers 
> (so that the kernel may eventually say "you're tying up too much wired memory 
> for all those ring buffers").
> 
> In either of those cases, the attempt to open a pcap_t will eventually get an 
> error; what is the error that's reported?

pcap_activate() returns "socket: Too many open files" for the
511th pcap_t and pcap_activate() call.

Ah! "ulimit -n" as root returns "1024" for me. Increasing that
limit helps; I can have more pcap_t handles then, thanks!

(As a non-root user "ulimit -n" returns 1048576 for me - interesting
that an unprivileged user may open more sockets than root by default;
I didn't expect that.)
--- End Message ---