Re: [tcpdump-workers] aclocal.m4 and openssl

2004-04-07 Thread Bill Fenner

I've been meaning to revisit aclocal.m4 and the autoconf setup for a
long time.  Much of it was hand-spun to get around bugs or limitations
in autoconf 2.9.  Unfortunately, I don't have access to many of the
"funny" systems to make sure that I don't delete something that looks
like cruft but is actually needed.

I'd start with a modern check for libcrypto: use AC_ARG_WITH to add
-L$with_libcrypto/lib to LDFLAGS and -I$with_libcrypto/include to CPPFLAGS
if $with_libcrypto is not "yes" or "no".  Then, if $with_libcrypto is not
"no", use AC_CHECK_LIB with either "main" or a more modern function than
the one the current autoconf check uses (the function that autoconf
currently checks for was turned into a compatibility macro in OpenSSL
0.9.7, I think, which is why the check usually fails).

I dunno whether we want to try to keep compatibility with older systems
that still ship SSLeay.

  Bill
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Why isn't 'ether proto \ip host host' a legal tcpdump expression?

2012-10-17 Thread Bill Fenner
On Wed, Oct 17, 2012 at 3:59 AM, Ezequiel Garzón wrote:
> Greetings! I'm trying to understand tcpdump expressions a bit more,
> and I'm confused about a basic example given in the pcap-filter man
> pages. They first state:
>
> | The filter expression consists of one or more primitives. Primitives
> usually consist of an id (name or number) preceded by one or more
> qualifiers.
>
> In turn, these qualifiers are type, dir and proto. So far so good, but
> further down we find this:
>
> |  ip host host
> | which is equivalent to:
> |  ether proto \ip and host host
>
> If I'm not mistaken, in the first case, ip and host are, respectively,
> proto and type. What pattern does 'ether proto \ip' follow? Isn't
> that, as a whole, a proto qualifier? If so, why isn't (a properly
> escaped) 'ether proto \ip host host' legal (without the keyword
> 'and')?

They're two separate primitives:

"ether proto \ip" is:   

"host host" is  

Concatenating two primitives requires "and".

(Don't get confused between "ether" being a <proto> and "proto" being
a <type>: that doesn't make "proto" a <proto>.)

  Bill
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] Why isn't 'ether proto \ip host host' a legal tcpdump expression?

2012-10-18 Thread Bill Fenner
On Oct 18, 2012, at 7:00 AM, Ezequiel Garzón  wrote:

> Thanks for your reply, Bill.
> 
>> "ether proto \ip" is:   
> 
> In what sense is "proto" here a <type>? <type>s are described as:
> "qualifiers say what kind of thing the id name or number refers to.
> Possible types are host, net, port and portrange." Not only is
> "proto" not given as an option, but it seems to me as if it belongs in
> another category entirely.

That part of the documentation is incomplete. "Proto" is just like "port" in 
the sense that it is saying "look in this part of the packet".

> This leads to the more central question of how to match "\ip" with
> <id>. <id>s are defined in passing as "(name or number)". How can one
> match conceptually "\ip" with an address?

\ip is turned into 0x800 via an internal name -> number lookup.

> 
> I'm sorry to insist on this open-ended issue. I know there must be
> something off with my understanding, and would like to fix it if
> possible!

"Ether proto ip" says "look in the Ethernet header, in the proto field, for the 
value 0x800".

"Host host" says "look up host in /etc/hosts or in DNS, get an IP address for 
it, and look for that IP address in the source or destination headers".

You have to use "and" to join any "look here for this value and look there for 
that value".

  Bill

> 
> Thanks again.
> 
> Best regards,
> 
> Ezequiel
> 
> On Wed, Oct 17, 2012 at 4:49 PM, Bill Fenner  wrote:
>> On Wed, Oct 17, 2012 at 3:59 AM, Ezequiel Garzón wrote:
>>> Greetings! I'm trying to understand tcpdump expressions a bit more,
>>> and I'm confused about a basic example given in the pcap-filter man
>>> pages. They first state:
>>> 
>>> | The filter expression consists of one or more primitives. Primitives
>>> usually consist of an id (name or number) preceded by one or more
>>> qualifiers.
>>> 
>>> In turn, these qualifiers are type, dir and proto. So far so good, but
>>> further down we find this:
>>> 
>>> |  ip host host
>>> | which is equivalent to:
>>> |  ether proto \ip and host host
>>> 
>>> If I'm not mistaken, in the first case, ip and host are, respectively,
>>> proto and type. What pattern does 'ether proto \ip' follow? Isn't
>>> that, as a whole, a proto qualifier? If so, why isn't (a properly
>>> escaped) 'ether proto \ip host host' legal (without the keyword
>>> 'and')?
>> 
>> They're two separate primitives:
>> 
>> "ether proto \ip" is:   
>> 
>> "host host" is  
>> 
>> Concatenating two primitives requires "and".
>> 
>> (Don't get confused between "ether" being a <proto> and "proto" being
>> a <type>: that doesn't make "proto" a <proto>.)
>> 
>>  Bill
> ___
> tcpdump-workers mailing list
> tcpdump-workers@lists.tcpdump.org
> https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] vlan tagged packets and libpcap breakage

2012-11-11 Thread Bill Fenner
On Wed, Oct 31, 2012 at 6:20 PM, Guy Harris  wrote:
>
> On Oct 31, 2012, at 2:50 PM, Ani Sinha  wrote:
>
>> pcap files that already have the tags reinserted should work with
>> current filter code. However for live traffic, one has to get the tags
>> from CMSG() and then reinsert it back to the packet for the current
>> filter to work.
>
> *Somebody* has to do that, at least to packets that pass the filter, before 
> they're handed to a libpcap-based application, for programs that expect to 
> see packets as they arrived from/were transmitted to the wire to work.
>
> I.e., the tags *should* be reinserted by libpcap, and, as I understand it, 
> that's what the
>
> #if defined(HAVE_PACKET_AUXDATA) && 
> defined(HAVE_LINUX_TPACKET_AUXDATA_TP_VLAN_TCI)
> ...
> #endif
>
> blocks of code in pcap-linux.c in libpcap are doing.
>
> Now, if filtering is being done in the *kernel*, and the tags aren't being 
> reinserted by the kernel, then filter code stuffed into the kernel would need 
> to differ from filter code run in userland.  There's already precedent for 
> that on Linux, with the "cooked mode" headers; those are synthesized by 
> libpcap from the metadata returned for PF_PACKET sockets, and the code that 
> attempts to hand the kernel a filter goes through the filter code, which was 
> generated under the assumption that the packet begins with a "cooked mode" 
> header, and modifies (a copy of) the code to, instead, use the special 
> Linux-BPF-interpreter offsets to access the metadata.
>
> The right thing to do here would be to, if possible, do the same, so that the 
> kernel doesn't have to reinsert VLAN tags for packets that aren't going to be 
> handed to userland.

In this case, it would be incredibly complicated to do this by just
postprocessing a set of bpf instructions.  The problem is that when
running the filter in the kernel, the IP header, etc. are not offset,
so "off_macpl" and "off_linktype" would be zero, not 4, while
generating the rest of the expression.  We would also have to insert
code when comparing the ethertype to 0x8100 to instead load the
vlan-tagged metadata, so all jumps crossing that point would have to
be adjusted, and if the "if-false" instruction was also testing the
ethertype, then the ethertype would have to be reloaded (again
inserting another instruction).

Basically, take a look at the output of "tcpdump -d tcp port 22 or
(vlan and tcp port 22)".  Are the IPv4 tcp ports at x+14/x+16, or at
x+18/x+20?  If we're filtering in the kernel, they're at x+14/x+16
whether the packet is vlan tagged or not.  If we're filtering on the
actual packet contents (from a savefile, for example), they're at
x+18/x+20 if the packet is vlan tagged.

Also, an expression such as 'tcp port 22' would have to have some
instructions added at the beginning, for "vlan-tagged == false", or it
would match both tagged and untagged packets.

This would be much more straightforward to deal with in the code
generation phase, except that until now the code generation phase hasn't
known whether the filter is headed for the kernel or not.

  Bill
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] PROBLEM: Software injected vlan tagged packets are unable to be identified using recent BPF modifications

2013-01-10 Thread Bill Fenner
On Mon, Jan 7, 2013 at 10:04 PM, Paul Pearce  wrote:
> However, raw vlan tagged packets that are *injected* into the
> interface using libpcap's pcap_inject() (which is just a fancy wrapper
> for the send() syscall) are not identified by filters using the recent
> BPF modifications.
>
> The bug manifests itself if you attempt to use the new BPF
> modifications to filter vlan tagged packets on a live interface. All
> packets from the medium show up, but all injected packets are dropped.

Given that the vlan tag metadata is supplied to userland with
PACKET_AUXDATA, does the symmetrical sendmsg() with PACKET_AUXDATA
work to put the vlan info in the metadata?  I.e., this would require
modifying pcap_inject() to parse the packet and extract the VLAN tag
info into a struct tpacket_auxdata, but obviously you could write a
little test program to test the underlying PF_PACKET socket behavior.

Even if it doesn't currently work, I think this may be a more
acceptable change ("provide a way to set PACKET_AUXDATA using
sendmsg") than having the packet send code munge/parse the vlan tag on
output.
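
For what it's worth, the "little test program" I have in mind would be
something like this rough sketch - the interface name, VLAN id, and dummy
frame contents are placeholders, it needs CAP_NET_RAW, and whether the
kernel accepts (or honors) the cmsg at all is exactly the open question:

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    unsigned char frame[60];            /* dummy untagged Ethernet frame */
    struct sockaddr_ll sll;
    struct tpacket_auxdata aux;
    struct iovec iov = { frame, sizeof(frame) };
    union {
        char buf[CMSG_SPACE(sizeof(struct tpacket_auxdata))];
        struct cmsghdr align;           /* force proper cmsg alignment */
    } u;
    struct msghdr msg;
    struct cmsghdr *cmsg;
    int fd;

    memset(frame, 0, sizeof(frame));
    fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&sll, 0, sizeof(sll));
    sll.sll_family = AF_PACKET;
    sll.sll_ifindex = if_nametoindex("eth0");   /* placeholder interface */

    memset(&aux, 0, sizeof(aux));
    aux.tp_vlan_tci = 100;              /* the VLAN tag we'd like attached */

    memset(&msg, 0, sizeof(msg));
    msg.msg_name = &sll;
    msg.msg_namelen = sizeof(sll);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof(u.buf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_PACKET;
    cmsg->cmsg_type = PACKET_AUXDATA;
    cmsg->cmsg_len = CMSG_LEN(sizeof(aux));
    memcpy(CMSG_DATA(cmsg), &aux, sizeof(aux));

    if (sendmsg(fd, &msg, 0) < 0)
        perror("sendmsg");      /* an error here would answer the question too */
    close(fd);
    return 0;
}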

  Bill
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] "not vlan" filter expression broken catastrophically!

2013-02-01 Thread Bill Fenner
On Thu, Jan 31, 2013 at 7:20 PM, Gianluca Varenni wrote:
> To be totally honest, I think the whole way in which vlans are managed in the 
> filters is quite nonsense. The underlying problem is that normally a BPF 
> filter is an "or" or "and" combination of disjoint filters, so if I write 
> "filterA" or "filterB" I assume that the two filters are disjoints, so
>
> "filterA or filterB" should be equivalent to "filterB or filterA"
>
> This is not true when using the "vlan" keyword. Vlan sticks globally and
> unconditionally increments the offset of the L3 header by two bytes, with
> no turning back.
>
> For example "ip or vlan 14" is different than "vlan 14 or ip"

We have wanted to fix the vlan support ever since it was added.  If I
remember right, we even talked about not adding it and waiting to do it
right.  It's definitely a hack; the vlan offset info should be
associative and apply only to anything that is "and"ed with the vlan
keyword.  Sadly, the current structure of the parser / code generator
does not lend itself to that.

The global nature of the vlan offset is something that nobody is happy
with.  All it will take to fix it is to rewrite the grammar parser and
filter generation code.

  Bill


> -Original Message-
> From: tcpdump-workers-boun...@lists.tcpdump.org 
> [mailto:tcpdump-workers-boun...@lists.tcpdump.org] On Behalf Of Ani Sinha
> Sent: Thursday, January 31, 2013 3:42 PM
> To: tcpdump-workers@lists.tcpdump.org
> Cc: Bill Fenner; Michael Richardson; Francesco Ruggeri
> Subject: [tcpdump-workers] "not vlan" filter expression broken 
> catastrophically!
>
> hello folks :
>
> As you guys have been aware, I have been hacking libpcap for a while. Bill
> and I noticed something seriously broken for any filter expression that has a "not
> vlan" in it. For example, take a look at the filter code generated by libpcap 
> with an expression like "not vlan and tcp port 80" :
>
> BpfExpression '(not vlan and tcp port 80)'
>   { 0x28,  0,  0, 0x000c }, //(000) ldh  [12]
>   { 0x15, 19,  0, 0x8100 }, //(001) jeq  #0x8100 jt 21  jf 2
>   { 0x28,  0,  0, 0x0010 }, //(002) ldh  [16]
>   { 0x15,  0,  6, 0x86dd }, //(003) jeq  #0x86dd jt 4   jf 10
>   { 0x30,  0,  0, 0x0018 }, //(004) ldb  [24]
>   { 0x15,  0, 15, 0x0006 }, //(005) jeq  #0x6jt 6   jf 21
>   { 0x28,  0,  0, 0x003a }, //(006) ldh  [58]
>   { 0x15, 12,  0, 0x0050 }, //(007) jeq  #0x50   jt 20  jf 8
>   { 0x28,  0,  0, 0x003c }, //(008) ldh  [60]
>   { 0x15, 10, 11, 0x0050 }, //(009) jeq  #0x50   jt 20  jf 21
>   { 0x15,  0, 10, 0x0800 }, //(010) jeq  #0x800  jt 11  jf 21
>   { 0x30,  0,  0, 0x001b }, //(011) ldb  [27]
>   { 0x15,  0,  8, 0x0006 }, //(012) jeq  #0x6jt 13  jf 21
>   { 0x28,  0,  0, 0x0018 }, //(013) ldh  [24]
>   { 0x45,  6,  0, 0x1fff }, //(014) jset #0x1fff jt 21  jf 15
>   { 0xb1,  0,  0, 0x0012 }, //(015) ldxb 4*([18]&0xf)
>   { 0x48,  0,  0, 0x0012 }, //(016) ldh  [x + 18]
>   { 0x15,  2,  0, 0x0050 }, //(017) jeq  #0x50   jt 20  jf 18
>   { 0x48,  0,  0, 0x0014 }, //(018) ldh  [x + 20]
>   { 0x15,  0,  1, 0x0050 }, //(019) jeq  #0x50   jt 20  jf 21
>   {  0x6,  0,  0, 0x }, //(020) ret  #65535
>   {  0x6,  0,  0, 0x }, //(021) ret  #0
>
>
> As you can see, it loads offset 12 (ethertype). For vlan packets, it jumps to 
> #21 and returns false right away. However, for packets that are not vlan 
> tagged, it goes to #2 which loads offset 16 in the packet. Notice that this 
> is wrong! The offsets should be incremented by 4 only for vlan tagged packets 
> and not for non-vlan packets. The problem is that in gencode.c, the 
> off_linktype increments by 4 unconditionally whether or not the packet 
> actually contains a vlan tag. We do not want to increment this offset if "not 
> vlan" is true. So the above filter code is generated wrong.
>
> I just wanted to point this out to folks who wish to dig in and fix it. I
> do not have time right now to think of a proper solution. It would seem that
> using unconditional increments of offsets like off_linktype below the parser
> is not going to work. How do you know whether the parser is going to take
> your code generated from the "vlan" expression and just negate it? Or maybe
> we can hack another rule in grammar.y. I don't know.
>
> cheers,
> ani
> ___
> tcpdump-workers mailing list

Re: [tcpdump-workers] "not vlan" filter expression broken catastrophically!

2013-02-01 Thread Bill Fenner
On Fri, Feb 1, 2013 at 3:50 PM, Paul Pearce  wrote:
> I'd like to point out that vlan filtering in general is completely
> broken under Linux 3.x (as discussed several times on this list).
>
> In Linux 3.x they began stripping the vlan headers off of RX packets
> and setting BPF ancillary flags, but not doing the same on TX packets.
> Since the vlan tags are missing when RX packets reach the kernel filter, it
> means that stock libpcap plus any Linux 3.x kernel can only see TX
> vlan tagged packets.
>
> A recent (3.8 I believe) patch added the ability to use BPF to poke at
> the vlan ancillary fields, and Ani RFC'd a patch on this list to
> shift vlan filtering to using the ancillary fields rather than offsetting
> into the header. But even with that patch, since RX and TX paths are
> different, it's still not fixed.
>
> You could imagine extending Ani's patch to check for the vlan
> ancillary fields and if not set then look at the headers

That was my proposal to Ani, since the kernel guys seemed to insist
that asymmetry was a virtue.

> but that
> would mean the filter:
>
> vlan X or vlan Y
>
> would have different behavior on RX vs TX packets because of the
> pointer into the header advancing when it encounters a vlan tag
> on TX, but not RX.

Well, that filter is broken anyway in the current world, since it
matches 'a packet on vlan X' or 'a double-tagged packet with inner
vlan Y' (or, a packet that happens to have the same bit pattern as a
double-tagged packet with inner vlan Y).

This is the kind of thing that would be fixed by making the vlan
modifications associative - the 'or' in that expression would
effectively reset the offset.

> In my humble (uneducated) opinion the correct fix is to get linux to
> move to setting the vlan ancillary fields on TX packets as they do now
> on RX packets, which would simplify things a lot for libpcap. But that
> idea got a lot of pushback on the net-dev list. I didn't fully understand
> their distinction as to why it was ok on RX vs TX, and they never
> answered when I asked.

We're on the same page on that topic.

  Bill

> -Paul
>
> On Fri, Feb 1, 2013 at 8:51 AM, Gianluca Varenni
>  wrote:
>> The problem is that if you change the behavior of the vlan keyword, you 
>> potentially break a lot of applications that are based on the old buggy 
>> behavior :-(
>>
>> -Original Message-
>> From: fen...@gmail.com [mailto:fen...@gmail.com] On Behalf Of Bill Fenner
>> Sent: Friday, February 01, 2013 4:49 AM
>> To: Gianluca Varenni
>> Cc: Ani Sinha; tcpdump-workers@lists.tcpdump.org; Michael Richardson; 
>> Francesco Ruggeri
>> Subject: Re: [tcpdump-workers] "not vlan" filter expression broken 
>> catastrophically!
>>
>> On Thu, Jan 31, 2013 at 7:20 PM, Gianluca Varenni wrote:
>>> To be totally honest, I think the whole way in which vlans are managed
>>> in the filters is quite nonsense. The underlying problem is that
>>> normally a BPF filter is an "or" or "and" combination of disjoint
>>> filters, so if I write "filterA" or "filterB" I assume that the two
>>> filters are disjoints, so
>>>
>>> "filterA or filterB" should be equivalent to "filterB or filterA"
>>>
>>> This is not true when using the "vlan" keyword. Vlan sticks globally and
>>> unconditionally increments the offset of the L3 header by two bytes, with
>>> no turning back.
>>>
>>> For example "ip or vlan 14" is different than "vlan 14 or ip"
>>
>> We have wanted to fix the vlan support ever since it was added.  If I
>> remember right, we even talked about not adding it and waiting to do it
>> right.  It's definitely a hack; the vlan offset info should be associative
>> and apply only to anything that is "and"ed with the vlan keyword.  Sadly,
>> the current structure of the parser / code generator does not lend itself
>> to that.
>>
>> The global nature of the vlan offset is something that nobody is happy with. 
>>  All it will take to fix it is to rewrite the grammar parser and filter 
>> generation code.
>>
>>   Bill
>>
>>
>>> -Original Message-
>>> From: tcpdump-workers-boun...@lists.tcpdump.org
>>> [mailto:tcpdump-workers-boun...@lists.tcpdump.org] On Behalf Of Ani
>>> Sinha
>>> Sent: Thursday, January 31, 2013 3:42 PM
>>> To: tcpdump-workers@lists.tcpdump.org
>>> Cc: Bill Fenner; Michael Richardson; Francesco Ruggeri
>>> Subject: [t

Re: [tcpdump-workers] "not vlan" filter expression broken catastrophically!

2013-02-04 Thread Bill Fenner
On Fri, Feb 1, 2013 at 8:07 PM, Michael Richardson  wrote:
>
>> "Ani" == Ani Sinha  writes:
> Ani> hello folks :
>
> Ani> As you guys have been aware, I have been hacking libpcap for a
> Ani> while. Bill and I noticed something seriously broken for any
> Ani> filter expression that has a "not vlan" in it. For example,
> Ani> take a look at the filter code generated by libpcap with an
> Ani> expression like "not vlan and tcp port 80" :
>
> Ani> BpfExpression '(not vlan and tcp port 80)' { 0x28, 0, 0,
>
> Do we have any way to test libpcap expression outputs other than -d
> options to tcpdump?  I'm thinking regression tests here.


All the bits are there inside libpcap, they just need to be plumbed together.

   pcap_t *pcap;
   struct bpf_program p;
   char *prog;

   pcap = pcap_open_dead(link, snaplen);    /* no live capture needed */
   /* todo: hook together argv to a single string */
   prog = argv[0];
   if (pcap_compile(pcap, &p, prog, optimize, 0) < 0) {
      fprintf(stderr, "%s\n", pcap_geterr(pcap));
      exit(1);
   }
   bpf_dump(&p, option);                    /* like "tcpdump -d"/-dd/-ddd */
   pcap_freecode(&p);
   pcap_close(pcap);

add some command-line arguments to set link, snaplen, optimize and
option and you've got part of a regression test engine! :-)

  Bill
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] "not vlan" filter expression broken catastrophically!

2013-02-04 Thread Bill Fenner
On Sat, Feb 2, 2013 at 12:26 AM, Gianluca Varenni wrote:
> What I'm talking about is not having a vlan keyword that has a global
> effect on the whole filter.
> At the moment we write something like "vlan and ip" to capture ip packets
> within a vlan, and a filter like "(vlan and ip) or udp" is actually compiled
> with the logic meaning "vlan and (ip or udp)".
> One proposal that I have is to support a syntax like the following
>
> vlan ip
>
> This is exactly like "vlan and ip", but it sticks *only* to the "ip" keyword. So a
> filter like "vlan ip or udp" means "accepts ip packets within a vlan tag, or 
> udp packets without a vlan tag". If you want to specify the vlan id, you use
>
> vlan 23 ip
>
> if you want to "stick" vlan to multiple filters, you use the syntax
>
> vlan 23 (udp or tcp)
>
> this is equivalent to
>
> vlan 23 udp or vlan 23 tcp
>
> Finally, if you want to filter QinQ packets, you use
>
> vlan vlan ip (ip packets within  2 vlan encapsulations)
>
>
> I know the syntax is not the most elegant (and I don't know how 
> easy/difficult it would be to parse), but I believe it solves the problem of 
> having the vlan keyword having a global effect during compilation.
>
>
> What do you guys think?

This sounds like my earlier suggestion: to make the vlan keyword
associative.  My syntax would have a few more "and"s than your
examples, such as "vlan 23 and ( udp or tcp )", or "vlan and vlan and
ip".

  Bill

>
> Have a nice day
> GV
>
> -Original Message-
> From: Guy Harris [mailto:g...@alum.mit.edu]
> Sent: Friday, February 01, 2013 6:19 PM
> To: Bill Fenner
> Cc: Gianluca Varenni; Michael Richardson; tcpdump-workers@lists.tcpdump.org; 
> Francesco Ruggeri
> Subject: Re: [tcpdump-workers] "not vlan" filter expression broken 
> catastrophically!
>
>
> On Feb 1, 2013, at 4:49 AM, Bill Fenner  wrote:
>
>> We have wanted to fix the vlan support ever since it was added.
>
> The "vlan" keyword serves two purposes:
>
> 1) matching VLAN-encapsulated packets or VLAN-encapsulated packets on 
> a particular VLAN;
>
> 2) handling the extra MAC-layer header length due to the VLAN header.
>
> That's also the case for "pppoed" and "mpls".
>
> 2), in the best of all possible worlds, would be done by having filter 
> programs that can, without much performance penalty, check for higher-level 
> protocol types in the presence of 
> VLAN/MPLS/PPPoE/GTP/fill-in-your-encapsulation-layering headers, so that "tcp 
> port 80" would find all packets on the network that are going to or from TCP 
> port 80, regardless of how IP is encapsulated.  If you wanted only 
> VLAN-encapsulated packets going to or from TCP port 80, you'd do "vlan and 
> tcp port 80"; if you only wanted *non*-VLAN-encapsulated packets going to or 
> from TCP port 80, you'd do "not vlan and tcp port 80".  "vlan" (and "pppoed" 
> and "mpls") would only handle 1) (and its equivalents).
>
> Unfortunately, that requires changes to the machine code language for filter 
> programs, so you'd have to somehow deal with systems where the kernel has a 
> filtering engine but it doesn't support those changes.
>
> ___
> tcpdump-workers mailing list
> tcpdump-workers@lists.tcpdump.org
> https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] tool to reorder packets of a pcap?

2013-02-21 Thread Bill Fenner
On Wed, Feb 6, 2013 at 4:08 AM,   wrote:
> Many people suggested reordercap from wireshark 1.9.
> Thank you, I was not aware of this tool.
>
> But looking at the code, it seems that this program loads the whole pcap
> before sorting it - this is not practical when the pcap is huge, as is
> often the case for me.
>
> So I wrote a small tool, but unfortunately it will be very impractical for
> anyone else to use since it uses a badly packaged, unpolished library of mine
> written in an alien technology[1]. It should be rewritten in C for max
> usability. The idea is merely to do one single pass with a small buffer of N
> packets that you can reorder, and check whether the buffer was enough to sort
> the pcap completely (so that you can ask for another pass). There probably are
> more intelligent ways to sort a stream inline, but this was enough for my need
> (I record in a single pcap from several threads with a huge mmap buffer, so the
> packets are somewhat intermixed but not completely random).
>
> [1]: http://github.com/rixed/robinet/blob/master/examples/pcap_reorder.ml

tcpslice already does time-based interleaving when you give it
multiple pcap files.  It might be reasonably straightforward to adapt
it to have a buffer of N packets (per pcap) to do local reordering
too.
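
For what it's worth, here's a rough, untested sketch (mine, not tcpslice)
of the single-pass, bounded-buffer idea described above, using plain
libpcap; NSLOTS is an arbitrary window size, and the warning at the end
tells you whether another pass (or a bigger window) is needed:

#include <pcap/pcap.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NSLOTS 1024     /* maximum reordering distance, in packets */

struct slot {
    struct pcap_pkthdr hdr;
    u_char *data;
};

static int earlier(const struct pcap_pkthdr *a, const struct pcap_pkthdr *b)
{
    if (a->ts.tv_sec != b->ts.tv_sec)
        return a->ts.tv_sec < b->ts.tv_sec;
    return a->ts.tv_usec < b->ts.tv_usec;
}

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    static struct slot buf[NSLOTS];
    struct pcap_pkthdr *hdr, last;
    const u_char *pkt;
    pcap_t *in;
    pcap_dumper_t *out;
    int used = 0, i, min, sorted = 1, emitted = 0;

    if (argc != 3) {
        fprintf(stderr, "usage: %s in.pcap out.pcap\n", argv[0]);
        return 1;
    }
    if ((in = pcap_open_offline(argv[1], errbuf)) == NULL) {
        fprintf(stderr, "%s\n", errbuf);
        return 1;
    }
    if ((out = pcap_dump_open(in, argv[2])) == NULL) {
        fprintf(stderr, "%s\n", pcap_geterr(in));
        return 1;
    }

    while (pcap_next_ex(in, &hdr, &pkt) == 1) {
        if (emitted && earlier(hdr, &last))
            sorted = 0;         /* NSLOTS was too small for this file */
        if (used == NSLOTS) {
            /* Window full: write out the earliest buffered packet and
             * free its slot for the packet we just read. */
            min = 0;
            for (i = 1; i < NSLOTS; i++)
                if (earlier(&buf[i].hdr, &buf[min].hdr))
                    min = i;
            pcap_dump((u_char *)out, &buf[min].hdr, buf[min].data);
            last = buf[min].hdr;
            emitted = 1;
            free(buf[min].data);
            buf[min] = buf[--used];     /* keep the array dense */
        }
        buf[used].hdr = *hdr;
        buf[used].data = malloc(hdr->caplen);
        memcpy(buf[used].data, pkt, hdr->caplen);
        used++;
    }
    /* Drain whatever is still buffered, in timestamp order. */
    while (used > 0) {
        min = 0;
        for (i = 1; i < used; i++)
            if (earlier(&buf[i].hdr, &buf[min].hdr))
                min = i;
        pcap_dump((u_char *)out, &buf[min].hdr, buf[min].data);
        free(buf[min].data);
        buf[min] = buf[--used];
    }
    pcap_dump_close(out);
    pcap_close(in);
    if (!sorted)
        fprintf(stderr, "a %d-packet window was not enough; run another pass\n",
            NSLOTS);
    return !sorted;
}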

  Bill
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] why the ethernet and ip header of packets, which are captured by libpcap function, are distorted

2013-03-21 Thread Bill Fenner
On Mon, Mar 18, 2013 at 11:08 PM, Wesley Shields  wrote:
> On Fri, Mar 15, 2013 at 06:37:25PM -0700, Guy Harris wrote:
>>
>> On Mar 15, 2013, at 2:45 PM, Michael Richardson  wrote:
>>
>> >
>> >> "wen" == wen lui  writes:
>> >wen> I used libpcap function pcap_next() to capture some tcp packets
>> >wen> I checked the bytes of the captured packets and notice that the
>> >wen> ethernet and ip header of packets are distorted, in a mess with
>> >wen> a lot 0's but the TCP header is fine
>> >
>> >wen> what are potential reasons for this?
>> >
>> > if you capture on Linux with the cooked mode interface.
>>
>> That probably won't happen if you're capturing on an Ethernet device,
>> but it *will* happen if you capture on the "any" device.
>>
>> However, yes, *NO* program using libpcap/WinPcap should simply
>> *assume* it's getting Ethernet packets; if it's looking at the
>> packets, not just blindly writing them to a file without examining the
>> contents, then, if it doesn't need to handle 802.11 and PPP and so on,
>> just Ethernet, it should at least call pcap_datalink() and fail if the
>> return value isn't DLT_EN10MB.  (If it's writing them to a pcap file,
>> pcap_dump_open() will call pcap_datalink() for you, to put the right
>> link-layer header type in the file header.)
>>
>> (Should we change libpcap so that if pcap_datalink() isn't called at
>> least once before calling pcap_next(), pcap_next_ex(),
>> pcap_dispatch(), or pcap_loop(), it prints a message to the standard
>> error saying "you're probably assuming all the world is Ethernet,
>> aren't you?" and calls abort(). :-))
>
> As I'm not sure if you're serious or not I decided to look into this to
> satisfy my own curiosity. In case you are serious:
>
> https://github.com/wxsBSD/libpcap/commit/70cbe36e2bd12498ca1622349ecb1716a874c376
>
> If you are serious and want this I'll submit a pull request.

Since pcap_compile() calls pcap_datalink(), I don't think that this
will have as much effect as Guy was imagining.

(Now introduce an argument to pcap_datalink() that says "I'm calling
you from pcap_compile()," and ... ;-)

  Bill
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] Valgrind fix

2019-09-25 Thread Bill Fenner
On Wed, Sep 25, 2019 at 6:50 AM P.B.  wrote:

> I would like to contribute a small fix for a valgrind issue with
> uninitialized bytes but I can't push a branch to the pcap repo. Any guidance
> on how to add it and create a pull request?
>

Hi Pawel,

Start at https://github.com/the-tcpdump-group/libpcap and use the "Fork"
button in the upper right corner to create a forked repo under your own
GitHub ID.  You can push to that repo, and when you push to it, GitHub will
tell you how to create a pull request.

  Bill
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


[tcpdump-workers] Please consider pull request for negative offsets in Linux filters on SLL sockets

2019-09-27 Thread Bill Fenner
Hi,

On Linux, the kernel filter code uses negative offsets for some purposes -
for example, "inbound" is implemented via "ether[-4092] = 4".  Using this
mechanism, the user can apply kernel filter methods for which there is no
pcap support.

When capturing on an SLL or SLL2 socket, these negative offsets specified
by the user are corrupted before installing the filter in the kernel, so
they do not mean what they are intended to mean.  This means that a filter
like "ether[-4092] = 4" will not work on an "-i any" capture, even though
it would work if it was not modified.  (I am using this mechanism to
capture on multiple interfaces, filtering by ifIndex inside the kernel.)

My pull request has been stalled since April.  I've been rebasing in order
to make it easier to accept.

https://github.com/the-tcpdump-group/libpcap/pull/820/

Can I request that it get some attention?

Thanks,
  Bill
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


[tcpdump-workers] Sharing code between print-icmp.c and print-icmp6.c

2024-02-24 Thread Bill Fenner
Hi,

I'm working on RFC8335 (PROBE) support for tcpdump - I've already submitted
the pull request for IPv4.  I'm working on IPv6 support, and it looks like
this is the first case where the packet format is identical between ICMP and
ICMPv6 but complex enough that it's worth reusing code.

My commit
https://github.com/fenner/tcpdump/commit/8590ce9d7c06f3db88f27a63a608484f9b2c04ae
is a first try at reusing code appropriately: it makes some "struct tok"'s
global, as well as the ICMP Extension Object parser, and puts them in a new
"icmp.h" (for which I took the "print-icmp.c" copyright statement, since
the code came from print-icmp.c).

Is this a reasonable way to proceed?

Thanks,
  Bill
___
tcpdump-workers mailing list -- tcpdump-workers@lists.tcpdump.org
To unsubscribe send an email to tcpdump-workers-le...@lists.tcpdump.org


[tcpdump-workers] Re: Sharing code between print-icmp.c and print-icmp6.c

2024-02-26 Thread Bill Fenner
On Sat, Feb 24, 2024 at 1:40 PM Guy Harris  wrote:

> On Feb 5, 2024, at 9:38 AM, Bill Fenner  wrote:
>
> > Is this a reasonable way to proceed?
>
> Yes.
>
> Perhaps have a file icmp-common.c or print-icmp-common.c with code and
> data structures common to ICMP(v4) and ICMPv6?
>

There's a bunch of stuff that would have to move to print-icmp-common.c;
for now if it's ok I'd rather stick with the code living in print-icmp.c.

Once someone implements RFC4884 for IPv6 (and hopefully updates the
implementation for IPv4 to match the RFC), it may make more sense to move
the code since there will be more shared code.  (Basically, the RFC8335
code calls print_icmp_multipart_ext_object(), and I don't want to move that
until there's a reason to - and it doesn't really make sense for
print-icmp-common.c to call into print-icmp.c)

My current proposal is the second commit in
https://github.com/the-tcpdump-group/tcpdump/pull/1131/ - right now it's
https://github.com/the-tcpdump-group/tcpdump/pull/1131/commits/a362775645ac012eeda9d4f66c47b595cee36e77
but that may change if I need to make any changes.

  Bill
___
tcpdump-workers mailing list -- tcpdump-workers@lists.tcpdump.org
To unsubscribe send an email to tcpdump-workers-le...@lists.tcpdump.org

[tcpdump-workers] Re: openwrt Conclusions from CVE-2024-3094 (libxz disaster)

2024-04-01 Thread Bill Fenner
mcr suggested:
> I wonder if we should nuke our own make tarball system.

The creation of a tarball and its signature gives a place to hang one's hat
about origin of code - "someone with the right key claims that this tarball
genuinely reflects what the project wants to distribute".  Is there a
similar mechanism for a git tag?

  Bill
___
tcpdump-workers mailing list -- tcpdump-workers@lists.tcpdump.org
To unsubscribe send an email to tcpdump-workers-le...@lists.tcpdump.org


[tcpdump-workers] Re: openwrt Conclusions from CVE-2024-3094 (libxz disaster)

2024-04-01 Thread Bill Fenner
On Mon, Apr 1, 2024 at 11:06 AM Michael Richardson  wrote:

>
> Bill Fenner  wrote:
> > mcr suggested:
> >> I wonder if we should nuke our own make tarball system.
>
> > The creation of a tarball and its signature gives a place to hang
> one's hat
> > about origin of code - "someone with the right key claims that this
> tarball
> > genuinely reflects what the project wants to distribute".  Is there a
> > similar mechanism for a git tag?
>
> Yes, git tag -s, lets you sign a commit with a PGP key.
>

Just trying to brainstorm about how this fits with build systems like
Arista's, where we store the tarball and check the signature at build time
- I suppose it just turns into "vendor the git tag into a local repo and
check the signature at build time".

I have no objection to either requiring people to have autotools, or going
cmake-only.  (I mean, I personally find cmake hard to use, but that
shouldn't influence what the project does.)

  Bill
___
tcpdump-workers mailing list -- tcpdump-workers@lists.tcpdump.org
To unsubscribe send an email to tcpdump-workers-le...@lists.tcpdump.org

[tcpdump-workers] Setting BPF_SPECIAL_VLAN_HANDLING on a "dead" handle

2025-07-04 Thread Bill Fenner
Hi all,

I have a little program that basically
calls pcap_open_dead(), pcap_compile(), and then dumps the instructions
like a C struct, so that we can include a bpf program in some other code
that will use it later.  (Like "tcpdump -dd", but with a little extra
formatting.)

We may know that we will be using this code on a kernel that
requires BPF_SPECIAL_VLAN_HANDLING, and so I'd like to be able to set that
flag on a "dead" handle.  Obviously, the current mechanism is to test this
dynamically when the handle is live.

Does anyone have any opinions about how to implement this, through the
generic pcap interface?  I can think of two possibilities:

1. implement set_special_vlan_handling_op, and have it implemented by dead
and linux, and have the default version return an error
2. implement set_bpf_codegen_flags_op, and have a generic implementation,
assuming that the caller completely knows what they are doing. Then move
the definition of BPF_SPECIAL_VLAN_HANDLING from pcap-int.h to the public
header.
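
To make that concrete, the caller-facing declarations might look something
like the following - these are hypothetical, nothing like them exists in
libpcap today, and option 2 also implies moving BPF_SPECIAL_VLAN_HANDLING
out of pcap-int.h:

#include <pcap/pcap.h>

/* Option 1: a dedicated operation, implemented for "dead" and Linux
 * handles, with a default implementation that returns an error. */
int pcap_set_special_vlan_handling(pcap_t *, int);

/* Option 2: a generic "set codegen flags" operation with a common
 * implementation, trusting the caller to know what they're doing. */
int pcap_set_bpf_codegen_flags(pcap_t *, unsigned int);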

Anyone have a preference of which one I should go with, and/or have a
better suggestion?

Thanks,
  Bill
___
tcpdump-workers mailing list -- tcpdump-workers@lists.tcpdump.org
To unsubscribe send an email to tcpdump-workers-le...@lists.tcpdump.org


Re: [tcpdump-workers] Compile libpcap with DLT_LINUX_SLL2

2020-03-13 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
The "-y" flag to tcpdump allows you to specify capturing with
DLT_LINUX_SLL2.

//tmp @fenner-t493.sjc% tcpdump -i any -y linux_sll2 udp port 53
tcpdump: data link type linux_sll2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
04:34:16.440349 ifindex 2 (e-a4c-281e9814) Out 8e:18:55:e1:02:4b (oui Unknown) ethertype IPv4 (0x0800), length 81: me.4 > dnsserver.domain: 53929+ A? www.tcpdump.org. (33)


  Bill

On Wed, Mar 11, 2020 at 2:49 PM Petr Vorel via tcpdump-workers <
tcpdump-workers@lists.tcpdump.org> wrote:

>
>
>
> -- Forwarded message --
> From: Petr Vorel 
> To: Guy Harris 
> Cc: tcpdump-workers@lists.tcpdump.org, Denis Ovsienko  >
> Bcc:
> Date: Wed, 11 Mar 2020 19:49:18 +0100
> Subject: Compile libpcap with DLT_LINUX_SLL2
> Hi Guy,
>
> some time ago we did together DLT_LINUX_SLL2 support for libpcap.
> I don't remember the details, but IMHO it was enabled by default.
> When now I compile libpcap and tcpdump, it's still using DLT_LINUX_SLL:
>
> tcpdump: listening on any, link-type LINUX_SLL (Linux cooked v1), ...
>
> What do I do wrong?
>
> Kind regards,
> Petr
>
>
>
> -- Forwarded message --
> From: Petr Vorel via tcpdump-workers 
> To: Guy Harris 
> Cc: tcpdump-workers@lists.tcpdump.org
> Bcc:
> Date: Wed, 11 Mar 2020 14:48:19 -0400 (EDT)
> Subject: [tcpdump-workers] Compile libpcap with DLT_LINUX_SLL2
> ___
> tcpdump-workers mailing list
> tcpdump-workers@lists.tcpdump.org
> https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers
>
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] Compile libpcap with DLT_LINUX_SLL2

2020-05-09 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
Since there's interest in SLL2 here, I'd like to raise the visibility of my
libpcap pull request for filtering on ifindex:
https://github.com/the-tcpdump-group/libpcap/pull/829

It filters on both live "any" captures (SLL or SLL2) and reading from a
saved SLL2 pcap.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


[tcpdump-workers] Using libnetdissect in other code, outside tcpdump source tree

2020-08-12 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
Hi,

Is there a plan for a public face for libnetdissect?  I've tried teasing it
out, and I ended up having to install:
funcattrs.h print.h config.h netdissect.h ip.h ip6.h compiler-tests.h
status-exit-codes.h
in /usr/include/tcpdump/ in order to compile a libnetdissect-using program
outside of the tcpdump source tree.

Also, netdissect.h likes to #define min() and max() macros, which makes
life interesting when you have, say, a struct with min and max elements.

Any pro tips from others who are reusing libnetdissect as a library?

Thanks,
  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] Using libnetdissect in other code, outside tcpdump source tree

2020-08-14 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Wed, Aug 12, 2020 at 6:22 PM Guy Harris  wrote:

> On Aug 12, 2020, at 1:31 PM, Guy Harris via tcpdump-workers <
> tcpdump-workers@lists.tcpdump.org> wrote:
>
> > We should probably have an include/libnetdissect directory in which we
> install netdissect.h and the headers it requires.
>
> Or include/netdissect.
>
> > However, API-declaring headers should *NEVER* require config.h (there
> was a particularly horrible case with OpenBSD's version of libz, forcing a
> painful workaround in Wireshark:
>
> ...
>
> > so if anything in netdissect.h depends on config.h definitions, we
> should try to fix that.
>
> It looks like it's just declaring replacements for strlcat(), strlcpy(),
> strdup(), and strsep() if the platform doesn't provide them.  That should
> be done in a non-public header.
>

The specific reason for needing config.h was

#ifndef HAVE___ATTRIBUTE__
#define __attribute__(x)
#endif

because there's other code that doesn't work right without defining the
right attributes.  Scrolling through netdissect.h, it relies on things
like __ATTRIBUTE___FORMAT_OK_FOR_FUNCTION_POINTERS to be able to fully
prototype the elements in netdissect_options.  (There's probably an
argument that netdissect_options should be opaque, with the right accessors
and mutators for an API consumer.  On the other hand, any API user probably
wants to provide ndo_printf, ndo_error, ndo_warning, which themselves have
the ndo as the first argument, hm.)

> That leaves ip.h and ip6.h; I'd have to check to see whether they should
> be considered part of the API or not.
>
> The comments are:
>
> #include "ip.h" /* struct ip for nextproto4_cksum() */
> #include "ip6.h" /* struct ip6 for nextproto6_cksum() */
>
> so what should probably be done is have a header for *users* of
> libnetdissect and a separate header for *components* of libnetdissect; the
> latter can define more things.  (The latter would be a non-public header,
> unless we decide to support third-party dissector plugins; that would also
> mean we'd probably want to have something like Wireshark's dissector tables
> to which those plugins would add themselves.)


For my use case, "pretty_print_packet()" (and ndo_set_function_pointers() )
is the only public interface I need.  I wonder if starting over with a
clean header file for API users would be a better start.
(Is sunrpcrequest_print() really part of the public api?)

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


[tcpdump-workers] CVE-2020-8037: memory allocation in ppp decapsulator

2020-11-30 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
I see that Red Hat/Fedora have released new packages to address
CVE-2020-8037 in tcpdump.  Does the tcpdump group have any message about
this CVE?  Is there a release from tcpdump.org with this CVE fixed?

See https://bugzilla.redhat.com/show_bug.cgi?id=1895080 for details
(pointing to a commit to the 4.9 branch from April).

Are there other CVEs that affect tcpdump-4.9.3 that vendors should be aware
of?

It looks like http://www.tcpdump.org/public-cve-list.txt hasn't been
updated since the 4.9.3 release (even though CVE-2020-8037 is a public cve).

I realize that http://www.tcpdump.org/security.html says there is no
commitment from the tcpdump group to release security fixes on any
timeframe whatsoever.  However, is there a way for someone who ships
tcpdump with their product to be made aware of unreleased security fixes,
or should we rely on Red Hat and others for that?

Thanks,
  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] CVE-2020-8037: memory allocation in ppp decapsulator

2020-11-30 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Mon, Nov 30, 2020 at 12:59 PM Michael Richardson 
wrote:

> Hi, CVE-2020-8037 causes a big amount of memory to be allocated (then
> freed),
> it does not cause an attack.


That's helpful information.  (On a low-memory device that actually requires
memory at malloc time, it might cause tcpdump to crash due to failure to
allocate memory, but on a system using, e.g., glibc, it won't).  I think
changing the availability impact from A:H to A:N results in reducing the
CVSS score from 7.5 to 0, which is probably worth pursuing if you want
people to not be freaking out about the severity here.

> I think that you are on the security@ list, and I think that this did go
> through that list at the time.
>

I'm not receiving any messages from security@, but let's take this off-list.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] [tcpdump] After setjmp/longjmp update

2021-01-04 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
I just wanted to share some of my thinking about how to proceed with the
truncation-related changes on the road to 5.0.0.

1. Improve code coverage for the printer that's being modified.  (This
ensures that the code being modified has a corresponding test pcap that can
be used by steps 2 and 3).
2. Use the trunc-o-matic tool from
https://github.com/fenner/tcpdump/tree/trunc-o-matic/tools to print out the
result at all possible truncation lengths.  (The current state of this tool
is just proof-of-concept; obviously before it is really useful it should at
least loop over all of the pcap files provided on the command line, and
allow specifying options like -v and -e, etc.)
3. Modify the code to use the new logic, using trunc-o-matic output to
ensure that the differences introduced are not regressions.

If step 3 results in no output differences, then there's no need to examine
the (extremely verbose) trunc-o-matic output.

I also think that community members would be willing to chip in if the
effort was coordinated (e.g., open a github ticket for each printer that
needs this conversion, have a wiki page that talks about the conversion
process, etc.).  There's no need for the maintainers to take on the work of
all of the protocols.

Proof of concept for coverage integration with travis:
https://coveralls.io/jobs/72964678

Sample file - print-ospf.c with 26% coverage -
https://coveralls.io/jobs/72964678/source_files/4635095477

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers

Re: [tcpdump-workers] [tcpdump] After setjmp/longjmp update

2021-01-06 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Tue, Jan 5, 2021 at 8:10 PM Denis Ovsienko via tcpdump-workers <
tcpdump-workers@lists.tcpdump.org> wrote:

> Bill Fenner via tcpdump-workers 
> wrote:
>
> > I just wanted to share some of my thinking about how to proceed with
> > the truncation-related changes on the road to 5.0.0.
> >
> > 1. Improve code coverage for the printer that's being modified.  (This
> > ensures that the code being modified has a corresponding test pcap
> > that can be used by steps 2 and 3).
>
> Hello Bill.
>
> It used to be 31 test in tcpdump 4.3.0, now it is 571 test, and the
> coverage is still very far from complete, even more so for less common
> protocols. There is a steady consensus that it would be nice to have
> more tests, contributions are welcome as always.
>

My apologies that this statement seems to have been misconstrued.  The
whole point of this statement about improving coverage was in the context
of using trunc-o-matic below.  The testing situation *is* improving very
impressively and I would not have made this statement standing alone.

The point here was meant to be: first you have to make sure you have a pcap
that will cause trunc-o-matic to run the code that you're going to change,
and then when you change it and trunc-o-matic gives the same result, you
can be sure that the code change that you made was correct.

> 2. Use the trunc-o-matic tool from
> [...]
>
> Thank you for proposing this new tool. I see a potential for false
> positives, let me have some time to try it and to see if that's actually
> the case.
>

Feedback is absolutely welcome.

> I also think that community members would be willing to chip in if the
> > effort was coordinated (e.g., open a github ticket for each printer
> > that needs this conversion, have a wiki page that talks about the
> > conversion process, etc.).  There's no need for the maintainers to
> > take on the work of all of the protocols.
>
> You mean well, but let me suggest after you walk a mile in these
> particular shoes you'll prefer a different wording, or maybe even a
> different idea.
>
> Francois-Xavier started this thread in September 2020 on the assumption
> that community members will want to contribute. It is January 2021 and
> except myself you are the first ever to discuss, thank you. Let's
> concur that in foreseeable future there will be a meaningful amount of
> work that will be never done unless the project maintainers do it.
>

Yes, there will always be a part that nobody will volunteer to do.
However, I believe that one reason that nobody has stepped up to
participate is that it is not very clear exactly what changes need to be
done and how to measure success.  My goal is to lower the bar for external
contributions - not to say that I think that there's someone out there who
will meaningfully fix print-wb.c, but that I think that there are people
out there who could contribute to ospf, bgp, isis, mpls, etc.

I realize that I am suggesting to do more work (documentation) in order to
reduce conversion work by an unknown amount, which could be zero, so as you
suggest I will take a first pass at writing the documentation.

A separate ticket for every file to me seems to be not worth the
> hassle considering how few people need to coordinate now. That said, a
> weekly/fortnightly status update on the list could be a useful addition.
>

Well, another reason that community members may not be participating could
be because it is not at all clear how to avoid duplicate effort.  So while
a ticket per file may be too much overhead, I do believe some level of
communication could help.  Right now this is opaque from the outside, so we
just end up leaving it all to Francois-Xavier.  He has done an amazing job
but also deserves some help.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-07 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Wed, Sep 9, 2020 at 12:08 PM Denis Ovsienko via tcpdump-workers <
tcpdump-workers@lists.tcpdump.org> wrote:

> Travis CI tcpdump builds have been failing for a while and I went to
> see why. It is easy to see that only the jobs that have
> "BUILD_LIBPCAP=yes CMAKE=yes" fail


These jobs are still failing, but now for a different reason.  The build
succeeds, but the tests fail - the tests that require the new libpcap.
However, if you augment TESTrun to print the version, it says
1.11.0-PRE-GIT but the tests that need the new libpcap still fail.  I can
not find anything that changes LD_LIBRARY_PATH, or anything, but shortly
after running

print "Running tests from ${testsdir}\n";

print "using ${TCPDUMP}\n";

system "${TCPDUMP} --version";

we run

$r = system "$TCPDUMP -# -n -r $input $options >tests/NEW/${outputbase}
2>${rawstderrlog}";

and that uses the wrong libpcap.

I am at loose ends as to how to debug further - a sample log from my
attempt to see what's going on is
https://api.travis-ci.com/v3/job/468210839/log.txt - the diffs in that run
are very simple (and the env. variable one is probably
incorrect/unnecessary) -
https://github.com/the-tcpdump-group/tcpdump/compare/master...fenner:cmake-pcap

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers

Re: [tcpdump-workers] Any way to filter ether address when type is LINUX_SLL?

2021-01-21 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
It would be perfectly reasonable (and fairly straightforward) to update
libpcap to be able to filter on the Ethernet address in DLT_LINUX_SLL or
DLT_LINUX_SLL2 mode.  There are already filters that match other offsets in
the SLL or SLL2 header.  However, I don't think it could be done on live
captures, only against a savefile.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers

[tcpdump-workers] What's the correct new API to request pcap_linux to not open an eventfd

2022-05-20 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
I'm helping to debug a system that uses many many pcap handles, and never
calls pcap_loop - only ever pcap_next.  We've found that each pcap handle
has an associated eventfd, which is used to make sure to wake up when
pcap_breakloop() is called.  Since this code doesn't call pcap_loop or
pcap_breakloop, the eventfd is unneeded.

I'm willing to write and test the code that skips creating the breakloop_fd
- but, I wanted to discuss what the API should be.  Should there be a
pcap.c "pcap_breakloop_not_needed( pcap_t * )" that dispatches to the
implementation, or should there be a linux-specific
"pcap_linux_dont_create_eventfd( pcap_t * )"?

For this use case, portability is not important, so I would be fine with
either. I'd also be fine with less ridiculous name suggestions :-)

Thanks,
  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] What's the correct new API to request pcap_linux to not open an eventfd

2022-05-20 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Fri, May 20, 2022 at 12:36 PM Guy Harris  wrote:

> If it's putting them in non-blocking mode, and using some
> select/poll/epoll/etc. mechanism in a single event loop, then the right
> name for the API is pcap_setnonblock().  There's no need for an eventfd to
> wake up the blocking poll() if there *is* no blocking poll(), so:
>
> if non-blocking mode is on before pcap_activate() is called, no
> eventfd should be opened, and poll_breakloop_fd should be set to -1;
>
> if non-blocking mode is turned on after pcap_activate() is called,
> the eventfd should be closed, and poll_breakloop_fd should be set to -1;
>
> if non-blocking mode is turned *off* afterwards, an eventfd should
> be opened, and poll_breakloop_fd should be set to it;
>
> if poll_breakloop_fd is -1, the poll() should only wait on the
> socket FD;
>
> so this can be handled without API changes.
>

Thank you for the excellent observation, Guy.  It is indeed setting
non-block before pcap_activate().  I'll work on this plan.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] What's the correct new API to request pcap_linux to not open an eventfd

2022-06-01 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Fri, May 20, 2022 at 6:10 PM Bill Fenner  wrote:

> On Fri, May 20, 2022 at 12:36 PM Guy Harris  wrote:
>
>> If it's putting them in non-blocking mode, and using some
>> select/poll/epoll/etc. mechanism in a single event loop, then the right
>> name for the API is pcap_setnonblock().  There's no need for an eventfd to
>> wake up the blocking poll() if there *is* no blocking poll(), so:
>>
>> if non-blocking mode is on before pcap_activate() is called, no
>> eventfd should be opened, and poll_breakloop_fd should be set to -1;
>>
>> if non-blocking mode is turned on after pcap_activate() is
>> called, the eventfd should be closed, and poll_breakloop_fd should be set
>> to -1;
>>
>> if non-blocking mode is turned *off* afterwards, an eventfd
>> should be opened, and poll_breakloop_fd should be set to it;
>>
>> if poll_breakloop_fd is -1, the poll() should only wait on the
>> socket FD;
>>
>> so this can be handled without API changes.
>>
>
> Thank you for the excellent observation, Guy.  It is indeed setting
> non-block before pcap_activate().  I'll work on this plan.
>

Actually, I confused myself.  It turns out that pcap_linux is buggy when
you set non-block before pcap_activate() -- it uses the handlep->timeout
value to remember whether or not non-block was set, but pcap_activate()
unconditionally overwrites handlep->timeout.  So it just comes down to, for
now, we always open the eventfd and then can close it when non-blocking
mode is turned on.  This just means the first item on your list is not
done, but the last 3 are enough.

(I think that the right fix for setting nonblock before activate could be
to add a bool to the handlep, to separate the nonblock status from the
timeout, but that can be a separate fix.)

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] What's the correct new API to request pcap_linux to not open an eventfd

2022-07-01 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Fri, May 20, 2022 at 6:10 PM Bill Fenner  wrote:

> On Fri, May 20, 2022 at 12:36 PM Guy Harris  wrote:
>
>> If it's putting them in non-blocking mode, and using some
>> select/poll/epoll/etc. mechanism in a single event loop, then the right
>> name for the API is pcap_setnonblock().  There's no need for an eventfd to
>> wake up the blocking poll() if there *is* no blocking poll(), so:
>>
>> if non-blocking mode is on before pcap_activate() is called, no
>> eventfd should be opened, and poll_breakloop_fd should be set to -1;
>>
>> if non-blocking mode is turned on after pcap_activate() is
>> called, the eventfd should be closed, and poll_breakloop_fd should be set
>> to -1;
>>
>> if non-blocking mode is turned *off* afterwards, an eventfd
>> should be opened, and poll_breakloop_fd should be set to it;
>>
>> if poll_breakloop_fd is -1, the poll() should only wait on the
>> socket FD;
>>
>> so this can be handled without API changes.
>>
>
> Thank you for the excellent observation, Guy.  It is indeed setting
> non-block before pcap_activate().  I'll work on this plan.
>

A slight variation of this plan is at
https://github.com/the-tcpdump-group/libpcap/pull/1113

I wrote a test program that doesn't do much, but does demonstrate that the
blocking-ness API on Linux is at least a little weird.  If we set
pcap_nonblock after pcap_create and before pcap_activate, we get -3 - which
I don't get at all, unless, -3 means "you didn't activate the pcap yet".
My naive reading of the Linux pcap_getnonblock code says it'll return the
integer value of a bool, and I don't know how that can be -3.

The sequence ends up being:
pcap_create() -> open eventfd
pcap_setnonblock() -> close eventfd
pcap_activate()

I didn't want to move the eventfd creation out of pcap_create without
having a deeper understanding of the strangeness around the nonblock API.

I ran 15,645 internal Arista tests using these changes, in an
infrastructure that relies on nonblocking pcap for packet exchange, and
they all passed.  Obviously this doesn't really say much about its general
applicability, and kind of is a no-brainer "when you don't need the eventfd
you don't notice if it's not there", but at least it says that things
aren't drastically broken.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] What's the correct new API to request pcap_linux to not open an eventfd

2022-07-05 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
Hi Denis,

Thanks for pointing out the manpage update.  I had old man pages (my work
is being done in the context of the 1.10 release).  What confused me is the
asymmetry of the API.  If you call pcap_setnonblock() on an
un-activated socket, it sets a flag and doesn't return an error.  But
pcap_getnonblock() returns an error, so you can not check the value on an
un-activated socket.

This is not a flaw, necessarily, just confusing.

Now that I understand this flow (I did not realize there were two different
implementations of pcap_setnonblock(), because I was focused on
pcap-linux.c and not on the important stuff in pcap.c) I think it's
straightforward to defer the opening of the eventfd to pcap_activate(), so
that we can avoid opening the eventfd altogether in nonblock mode.  I'll
see if I can update my changes accordingly.  Thank you for pointing out the
extra detail that helped me to understand.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] [tcpdump] About struct in_addr / struct in6_addr

2022-07-17 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Sun, Jul 17, 2022 at 3:30 PM Guy Harris via tcpdump-workers <
tcpdump-workers@lists.tcpdump.org> wrote:

>
> Should we care about it, or should we just drop support for OSes lacking
> native IPv6 support in 5.0?


IMO it is safe to drop support for OSes lacking native IPv6 support.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


[tcpdump-workers] Has anyone got a clang-format for the tcpdump style?

2023-01-04 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
Hi,

I know the tcpdump style follows a bunch of BSD patterns, since it came
from Berkeley in the first place.  Does anyone have a clang-format config
that reflects these coding conventions?  One of the problems I have in
upstreaming Arista-developed tcpdump code is making sure that the code fits
in well with its surroundings, and using clang-format for that purpose
would sure be easier.

Thanks,
  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] Has anyone got a clang-format for the tcpdump style?

2023-01-07 Thread Bill Fenner via tcpdump-workers
--- Begin Message ---
On Sat, Jan 7, 2023 at 12:38 PM Denis Ovsienko  wrote:

> On Wed, 4 Jan 2023 08:40:21 -0500
> Bill Fenner via tcpdump-workers 
> wrote:
>
> > Hi,
> >
> > I know the tcpdump style follows a bunch of bsd patterns, since it
> > came from Berkeley in the first place.  Does anyone have a
> > clang-format config that reflects these coding conventions?  One of
> > the problems I have in upstreaming Arista-developed tcpdump code is
> > making sure that the code fits in well with its surroundings, and
> > using clang-format for that purpose would sure be easier.
>
> Bill, do you know if it would be practicable to apply code style per
> file instead of doing a flag day on entire repository?
>
> Also it might help before enforcing the style to get tcpdump 5.0 ready
> (to remove backporting into 4.99 from the problem space) and to
> merge/close as many pull requests as possible.


Many people use incremental formatting: when you change or add some code,
use the autoformatter to make sure that it conforms to the appropriate
style. One such script is “clang-format-diff.py”. This is the approach that
I want to take on the diffs that I would like to create pull requests for:
make sure that the diff itself conforms to the style.

  Bill
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers