Re: [tcpdump-workers] timestamps and timezone

2004-03-28 Thread Jefferson Ogata
Maybe I'm dumb but it's taken me five successive postings to get past the 
impenetrably cryptic notifications ("The postblock flag is set for...", 
"Duplicate Partial Message Checksum") from the new list manager. And I'm 
assuming that this time I'll actually succeed in posting a message.

Guy Harris wrote:
On Mar 26, 2004, at 2:37 PM, alex medvedev wrote:

so what is the latest trend on the timestamp accuracy?
do you think nanoseconds are the future?
I think *allowing* nanosecond resolution would be a good idea.
It's the obvious choice since we're stuck with base 10 (even though it causes
roundoff error every time you crunch it in IEEE FP format), and nanoseconds is
the finest base 10 resolution that fits in 32 bits.
just wondering because struct timeval is usually defined in sec/usec.
We're not obliged to use "struct timeval"s in capture files.
Right; that's what libpcap is for. If you want to use the old API, libpcap can
give you the traditional struct timeval by dividing tv_nsec by 1000.
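
For example (sketch only, assuming a record header with hypothetical
ts_sec/ts_nsec fields; this is not an actual libpcap interface):

#include <sys/time.h>

/* Convert a hypothetical nanosecond-resolution timestamp into the
 * traditional struct timeval handed back by the old API. */
static struct timeval ns_to_timeval(long ts_sec, long ts_nsec)
{
    struct timeval tv;
    tv.tv_sec  = ts_sec;
    tv.tv_usec = ts_nsec / 1000;    /* nanoseconds -> microseconds */
    return tv;
}
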
also, what are the benefits of having the timezone in the dump file set
[to anything other than 0]?
i'd rather see the local time where the dump was collected than my local
time at that moment.
Unfortunately, that's not what you see now, by default - time stamps are 
UNIX-style, i.e. UTC...
Why would anyone want the time zone of the local system to affect the timestamps
in the cap file? If you want to see what they would have been, just set TZ. And
alex, you don't have to export TZ (if you're using any sensible Bourne-type
shell); just set it as a prefix on the same line:
$ TZ=US/Pacific tcpdump -n -r foo.pcap

The timestamp in the packet headers should be UTC, period. Otherwise you're
screwing around trying to synchronize cap files collected from systems in who
knows what time zones. Every time someone sends me output from system log files
I have to ask them what zone they're logging in (because no one ever bothers to
tell you up front). Thank the gods I /don't/ have to ask them about time zones
when they send me pcap files, because all pcap timestamps are in the correct
zone: UTC.
and if i do want to see what hour it was at my location, i can export TZ
before running tcpdump -r as was pointed out on this list back in 
december.
...which means that if you want to see the time stamps in the time zone 
of the site where it was captured, you have to set TZ if that machine is 
in a different time zone or has different DST rules.  Exporting TZ 
*isn't* necessary if you want to see what time it was in *your* locale.
Which is the way it should be, especially if you have enough sense to configure
all your systems in UTC. Let users set TZ in their .profiles if they want to see
a local zone.
--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Proposed new pcap format

2004-04-13 Thread Jefferson Ogata
Darren Reed wrote:
On the contrary, it's a trivial matter, really, to add more.

Is there a "default" hashing method for SSL ?
Or IPSec ?
Or S/MIME ?
No.
In each case the specification defines support for a number of different
hashes, of varying strengths and the choice is left to the end user to
decide on what they wish to use.  I don't see why libpcap should be any
different.
Something keeps bugging me, and I just want to throw it out there for 
the mad dogs to tear into little bloody pieces:

Given all the desirable options people are looking for in this, and the 
need for future growth, I think we should seriously consider an 
XML-based format. Besides making it easy, format-wise, to include many 
optional features and types of metadata, programs could also embed 
decoded frame and protocol information in appropriate elements, right 
within the capture file.

<capture version="..." byte-order="...">
  <interface name="..." linktype="ethernet"/>

  <packet timestamp="...">
    <ethernet>
      0003470102030003470405060800
      <ip>
        4580...
        <tcp>
          030d0801...
          <payload>
             ...
          </payload>
        </tcp>
      </ip>
    </ethernet>
  </packet>

  <packet timestamp="...">
    <raw>00034701020300034704050608004580...</raw>
  </packet>
  ...
</capture>

Yes, fully fledged decoded captures would use a lot of extra disk, but a 
raw no-frills capture could be recorded with maybe only 50% or so overhead.

Processors using xslt or custom code could pull out just what they're 
interested in using XPath expressions. Decoders for specific application 
protocols could be written as filters to produce decoded elements in the 
output XML.

And so on... mull it over for a minute before you start shredding.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Proposed new pcap format

2004-04-13 Thread Jefferson Ogata
Guy Harris wrote:
That might not require us to choose a default, however, as long as the 
kernel can tell libpcap which hash value it's providing (if any).  It 
might, however, mean that we should choose a hash value that, for kernel 
hashing, is considered "adequate", and recommend that capture mechanisms 
implement it.
Maybe I missed something, but why does the kernel have to generate the 
hash? Are we actually worried that the packet data will be corrupted 
between the kernel and userland?

Also, why is the list setting Reply-to: to 
<[EMAIL PROTECTED]>, rather than 
<[EMAIL PROTECTED]>, which is the advertised address of 
the list?

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Proposed new pcap format

2004-04-14 Thread Jefferson Ogata
Christian Kreibich wrote:
Are you suggesting an xml-based pcap, or xmlified tcpdump output?

If you mean the former, I think the problem with this approach is that
in order to be able to write out a file in the first place, the
structure of the packet content has to be understood by libpcap (so that
it knows which protocol elements to write) -- then the question becomes
what to do with unknown protocols etc.
I'm suggesting the pcap storage format be XML. A raw capture, without using 
protocol dissectors, would just be a sequence of base64-encoded (perhaps) frames 
and metadata.

Tools like the tcpdump protocol dissectors and tethereal could then just be XML 
filters that take a raw XML input frame and annotate it with protocol elements, 
as in the rough example I posted. Existing XML tools, e.g. xsltproc, could 
generate reports from the annotated XML using XSLT. The reports could as easily 
be HTML output as plain text or more XML.

Additional protocol dissectors for protocols unknown to tcpdump/tethereal could 
be written in any language with XML support (preferably event-based). In fact, 
many protocol analyzers could be written directly in XSLT/XPath and processed 
using xsltproc. Among other things, this provides many means to eliminate the 
continuing problem of buffer overflows. tcpdump could have a plugin architecture 
with an XML filter for each protocol/frame type.

I think what you're proposing should be provided by an xmlified tcpdump,
but not the capture library.
I'm suggesting that we use XML as the capture file format so that tcpdump 
becomes an extensible XML filter.

Or you can throw all that musing away. Just pay attention to the discussion for 
a little while -- it revolves around timestamp and metadata formats, sizes of 
fields, and other esoterica that are sounding a bit archaic in today's computing 
environment. I think we should take a hard look at whether it's really 
appropriate to define yet another hard binary file format when XML can provide 
the same functionality with modest storage overhead, and has many added benefits.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Proposed new pcap format

2004-04-14 Thread Jefferson Ogata
Fulvio Risso wrote:
[mailto:[EMAIL PROTECTED] Behalf Of Stephen
Donnelly
Jefferson Ogata wrote:
Yes, fully fledged decoded captures would use a lot of extra
disk, but a
raw no-frills capture could be recorded with maybe only 50% or
so overhead.

50% extra space and 50% extra disk bandwidth cost? So my 250
Megabyte per second
pcap stream to disk becomes 375MB/s?
No, more than 500 MB/s.
You have to transform everything into ASCII, so an 8-bit value becomes a 2-byte
ASCII value.
As I imagine you know, XML is not ASCII; it's Unicode.

Raw packet data would typically be base64-encoded. This expands data by 33%; 
three octets become four. You don't have to write one octet as two.

In any case, if you're trying to capture every packet off the wire, you might 
not want to use the newer binary pcap format under discussion either. It's 
looking to impose some not insignificant overhead as well.

Again, pay attention to the discussion; there are many optional features being 
suggested for the pcap storage format. What prompted my remark was the 
discussion about which hash algorithms to include in the storage format, what 
data gets hashed, and whether any particular algorithm is designated as a 
default. That's the kind of stuff that says, to me, that a binary file format is 
going to grow out of itself pretty fast.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Proposed new pcap format

2004-04-14 Thread Jefferson Ogata
Ronnie Sahlberg wrote:
I don't really see the benefit of using XML at all.
Usually I find that people who say that haven't used XML for anything significant.

It is just another file format, and other well-defined TLV file formats are
just as extensible.
The only applications ever reading pcap files are always going to do that
through libpcap anyway, so
XML or a better-suited binary file format would be invisible to the
application anyway.
As long as pcap files are a fixed binary format, then yes, the only applications 
reading them are going to do so through libpcap, pretty much by definition: i.e. 
your statement is a tautology.

If you store in XML, then many applications could read pcap files, because they 
wouldn't /have/ to use libpcap.

I think XML is the 21st century's response to C++.
I have no idea what you mean by that...

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Proposed new pcap format

2004-04-14 Thread Jefferson Ogata
Ronnie Sahlberg wrote:
Given all the desirable options people are looking for in this, and the
need for future growth, I think we should seriously consider an
XML-based format. Besides making it easy, format-wise, to include many
optional features and types of metadata, programs could also embed
decoded frame and protocol information in appropriate elements, right
within the capture file.

Please no.   All programs reading pcap files through the pcap library will
know how to translate the capture file into a dissected list of packets.
Again the tautology. Programs only need the pcap library to read capture files 
because of the file format. Make the file format into XML and any program that 
supports XML can read capture files. You don't need to compile pcap on new 
platforms just to read capture files; you can read pcap in Java or Perl or 
Python or PHP without finding a language port of libpcap. You can have your web 
browser display decoded capture files using an XSL stylesheet, without writing 
any new code. You can filter packets in decoded captures with XPath using 
expressions like "//*[dport = 53]" or "//arp" or "//ip[src = '127.0.0.1']" or 
"//frame[dir = 'inbound']" or "//frame[timestamp >= 1373849233]".

At the very least it argues for tcpdump in protocol dissection mode, and 
tethereal, to have XML output formats. Having the native capture file format be 
XML also, however, would turn protocol dissection into XML filtering, which 
would mean you could do it on raw capture files or preprocessed capture files 
alike. If you have two different file formats, your tools can only work on one 
or the other.

If this is absolutely necessary it can be done really well by an external
tool that reads a pcap file and expands it 1000 times into an XML file.
It does not have to be implemented inside pcap.
Typical expansion would be by a factor of about 1.5 for undecoded packets, not 
1000. Expansion for decoded packets would be somewhere between what tcpdump -v 
and tethereal do, more like 5 to 20.

NO xml in the kernel where pcap lives.
Huh? BPF lives in the kernel, on some platforms. pcap, and its file format, live 
in userland.

Also, some people actually work with pretty large files containing tens of
millions of packets.
Indeed. I am one of them. So what?

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


[tcpdump-workers] List management

2004-04-14 Thread Jefferson Ogata
Noise to get past repost filters.

Okay this is getting ridiculous. I just tried to post the below information, and 
it came back stalled with some nonsense about "GLOBAL ADMIN BODY" matching 
various stupid regexes. I've added hyphens in a few places throughout in an 
ATTEMPT to get past this. We'll see. It's a good thing I know how to read 
regexes or you can bet I wouldn't be able to post this at all. Apparently, you 
aren't allowed to post anything with any word that begins with the letters
u-n-s, or has the word c-h-a-n-g-e followed by the word a-d-d-r-e-s-s.

So I was posting the below just to say, hey, the new list is a little weird, and 
maybe a few things need fixing, but at this point I'm pretty pissed off at the 
waste of time and I'd like to say:

The posting semantics of the new list SUCK, and SUCK BADLY.

Here's what I originally wrote:

Just a couple of observations about management of the new tcpdump-workers list:

When the list switched hosts, my subscribed address from the old list was 
automatically added to the new list. This address was an alias, however, and the 
new list wouldn't accept mail from un-subscribed addresses (the old list did). 
So I couldn't post from my canonical address, which was a pain.

So I used the subscriber admin widget to modify my subscribed ad-dress from the 
old alias to the canonical one. I could finally post, after getting past various 
other repost rejections.

The next day, the list admin noticed discrepancies in the membership and 
re-added a lot of addresses, including my old alias. So now I was getting two 
copies of everything.

Last night I unregistered my old alias so I would get only one copy of list 
messages. Inexplicably, my canonical address got unregistered as well.

So I re-registered the canonical address. Meanwhile, I went to check the list 
archive to see if I had missed anything, since I'm in the middle of a discussion.

Only, guess what -- I can't find any archive. The only archive listed on 
tcpdump.org stops at December 2003. There's no apparent digest on lists.tcpdump.org.

So now I have these questions:

- Is the list still being archived? If so, where?

- Why does the tcpdump.org site still list the mailing list address as 
<[EMAIL PROTECTED]>? I wonder how many people are still trying to post 
to the old address...

- Why does the list set Reply-To: to <[EMAIL PROTECTED]> 
instead of <[EMAIL PROTECTED]>?

- What's with the weird regex restrictions on postings?

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] List management

2004-04-14 Thread Jefferson Ogata
Guy Harris wrote:
However, the page at

	http://www.tcpdump.org/lists/workers/

doesn't have any links to archives past 2003-12 - this needs to be
fixed.  Michael, is there some way to automatically generate, or at
least automatically update, that page?  Having to update it once a month
seems a bit awkward.
A webserver directory index would be fine, especially if accompanied by an 
inline README.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Proposed new pcap format

2004-04-14 Thread Jefferson Ogata
Stephen Donnelly wrote:
When capturing network data at hundreds of megabytes per second for 
extended periods and hence dealing with hundreds of gigabytes of 
captured data at a time, even 33% overhead is very expensive in storage 
space and disk bandwidth, as well as the cpu time required to perform 
XML output with base-64 encoding.

This is why my interest in the new format is to encourage keeping the 
fixed overhead per packet record small. This can be done by a) keeping 
per-packet meta data optional where possible, and b) keeping space 
efficiency in mind when encoding per packet (meta)data.
It can also be done by using your own personal file format for your 
actually quite rare application of long time-period, high-bandwidth 
capturing. I don't see why the general format would be tailored to this 
application. Most people aren't interested in saving off a whole OC48 
for any period of time. The more usual use is to try to identify 
problems with specific protocols, or perform IDS functions. In these 
cases, people are more interested in saving selected packets and metadata.

Another thing you might do with your own high-bandwidth capture format 
would be to design it to facilitate merging streams from multiple 
capture sources, which you might split up using a toplayer or similar 
box. Again, this is not an application for the general tcpdump user population.

It may well be true that for analysis XML is useful either internally 
for processing, or for results, but libpcap is primarily about packet 
capture.
I disagree, at least in the sense in which you appear to mean this. 
libpcap includes BPF and the pcap expression compiler, which are about 
packet filtering. And remember that this is the tcpdump-workers list.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Proposed new pcap format

2004-04-22 Thread Jefferson Ogata
Darren Reed wrote:
In some email I received from Michael Richardson, sie wrote:
 Proving what? That you aren't being lied to? By whom?
 What is the threat model for this? What does having the kernel digitally
sign stuff gain you? Who would lie to you in such a way that they
couldn't also have the kernel lie to you?
It's not about lying so much as data integrity within the
computer/application and being able to trust that to a very
high level.
Darren,

I'm still trying to understand an attack or failure scenario where having the 
kernel MD5 the packet is any more reliable than having userland do it. Can you 
describe such a scenario?

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Size limit on the filter string to libpcap?

2004-05-14 Thread Jefferson Ogata
[EMAIL PROTECTED] wrote:
Is there a limit on how big the string can be for specifying the filter to
pcap in pcap_compile? My filter needs to exclude a bunch of IP addresses
(e.g.: tcpdump host not 1.1.1.1 and host not 2.2.2.2 ... up to 50 addresses)
There are definitely limits for the operating systems which do in-kernel
filtering (FreeBSD in my case) - specifically, there is a limit to how
many instructions the kernel will accept for the BPF interpreter. FreeBSD
as of 4.9 has:
On Linux, you will also run into a limit on the maximum size of socket options, 
which is how the filter gets passed to the kernel. The Red Hat 2.4 kernel has 
BPF_MAXINSNS set to 4096, but the default socket option limit is 10240 bytes. 
With the socket option buffer, 4096 instructions (8 bytes each) come to 32800 bytes. You can 
adjust this limit at runtime, however, with "echo 32800 > 
/proc/sys/net/core/optmem_max" using an appropriate value, or put 
"net.core.optmem_max = 32800" in /etc/sysctl.conf.

You won't run anywhere near this limit with only 50 addresses, though.
If no such limit (other than reasonable buffer size and sanity checks) is
it safe and efficient to add that many (50) IP addresses to the filter?
Safe provided you can get the kernel to accept a sufficiently large
filter. Efficient? Maybe - if you need to compare with 50 addresses I
believe it will do a sequential comparison with address 1, address 2,
etc.
If you want efficiency you should use a binary space partition to pre-sort the 
input addresses before passing the filter to the compiler. Put the addresses 
into a balanced, sorted binary tree and then use a recursive >= operation. The 
optimizer does not handle this.
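
To illustrate (a hypothetical generator, untested; it just prints a nested
filter expression that binary-searches a sorted list of source addresses
instead of testing them one after another):

#include <stdio.h>
#include <stdint.h>

/* Emit a filter matching any source address in the sorted array a[lo..hi]. */
static void emit_range(const uint32_t *a, int lo, int hi, FILE *out)
{
    if (lo == hi) {
        fprintf(out, "ip[12:4] = 0x%08x", (unsigned)a[lo]);
        return;
    }
    int mid = (lo + hi) / 2;
    /* left half holds addresses <= a[mid], right half the rest */
    fprintf(out, "((ip[12:4] <= 0x%08x and (", (unsigned)a[mid]);
    emit_range(a, lo, mid, out);
    fprintf(out, ")) or (ip[12:4] > 0x%08x and (", (unsigned)a[mid]);
    emit_range(a, mid + 1, hi, out);
    fprintf(out, ")))");
}

int main(void)
{
    /* four example source addresses, already sorted */
    uint32_t addrs[] = { 0x0a000001, 0x0a000005, 0xc0a80201, 0xc0a80228 };
    emit_range(addrs, 0, 3, stdout);
    putchar('\n');
    return 0;
}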

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Ethernet type in wrong byte order

2004-06-23 Thread Jefferson Ogata
Claudio Lavecchia wrote:
/* Ethernet header */ 
struct sniff_ethernet {
u_char  ether_dhost[ETHER_ADDR_LEN];// Destination host address
u_char  ether_shost[ETHER_ADDR_LEN];// Source host address
u_short ether_type; // IP? ARP? RARP? etc
};

If I read ethernet encapsulation specifications, I find out that the 
value corresponding to a ethernet packet carrying ARP is 0x0806. If I 
invert the two bytes of this value I obtain 0x0608 which is 1644 in 
decimal notation. So that is obviously a problem in the byte order. If I 
sniff ARP packets using ethereal, the ethernet type value is correctly 
set to 0x0806, so that means that I have a byte order issue. I am not 
very familiar with this kind of issues, can anyone please explain me 
what is going on and possibly give me a hint on what is the correct way 
to handle this kind of issues?
Intel systems store ints in little-endian format. When you declare a structure 
field as u_short, the processor reads it in the native format, which is the 
opposite of how it came across the wire and was actually stored into memory.

Read the man pages for htons and ntohs.
Note that if you try to use structures for this kind of thing, you may 
eventually end up with alignment issues, where, for example, you are trying to 
read a 2- or 4-byte integer quantity on an odd byte-boundary. Some processor 
will hoark if you try to do this. So you might want to define handy functions 
for memcpying values into a short and long and doing the byte-order switch at 
the same time.
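
For example (a minimal sketch; the names are made up):

#include <string.h>
#include <sys/types.h>
#include <arpa/inet.h>

/* Copy out of the packet buffer -- so alignment never matters -- and do
 * the network-to-host byte swap in one step. */
static u_short get_u16(const u_char *p)
{
    u_short v;
    memcpy(&v, p, sizeof v);
    return ntohs(v);
}

static u_int32_t get_u32(const u_char *p)
{
    u_int32_t v;
    memcpy(&v, p, sizeof v);
    return ntohl(v);
}

/* e.g. u_short ether_type = get_u16(packet + 12); */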

BTW, packet dissectors are especially easy to write in Perl, using the unpack 
function, and are then invulnerable to pesky buffer overflows. Just install 
Net::Pcap.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] [PATCH] Drop unneeded capabilities

2004-06-24 Thread Jefferson Ogata
Pekka Savola wrote:
On Wed, 23 Jun 2004, Matt Beaumont wrote:
I've written a little patch to drop all but the CAP_NET_ADMIN and
CAP_NET_RAW capabilities immediately if tcpdump is running with root
privileges.  The idea is to limit the damage done by an exploit
against tcpdump.
Some of the inspiration for this patch came from here:
<http://www.dwheeler.com/secure-programs/Secure-Programs-HOWTO/minimize-privileges.html>
This is the first patch I've ever submitted, so I'd love to hear some
feedback :)
Have you checked the code in the CVS?  It already includes a 
"droproot" option.

Yours is slightly different, though, as it uses (Linux-specific?) 
capabilities.  I'm not sure if it's necessary when we already drop the 
root privileges.
Capabilities are a much better approach than simply dropping root. Dropping 
capabilities can restrict the process far more than simply having it run as a 
regular user. While it's true that some OSes are sorely behind the times and 
don't support capabilities, it's still useful to have the infrastructure in 
place for the modern ones that do.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] [PATCH] Drop unneeded capabilities

2004-06-24 Thread Jefferson Ogata
Michael Richardson wrote:
"Pekka" == Pekka Savola <[EMAIL PROTECTED]> writes:
Pekka> Have you checked the code in the CVS?  It already includes a
Pekka> "droproot" option.
Pekka> Yours is slightly different, though, as it uses
Pekka> (Linux-specific?) capabilities.  I'm not sure if it's
Pekka> necessary when we already drop the root privileges.
  Yes, they are Linux specific.
  We should have a file:
   droppriv-FOO.c
IRIX, for one, also supports capabilities.
--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] text format stability

2004-06-24 Thread Jefferson Ogata
Eddie Kohler wrote:
It would seem to me that the best approach here would be to design a new 
format that applied *only in those cases where it was required*: 
gre/l2tp/mpls tunneling.  And of course it doesn't matter how new 
protocols are printed, there are no backwards compatibility issues.
It would seem to me that the best approach would be to have a format 
configuration file with an entry for each dissected protocol. Local 
installations could tweak it however they like.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Corrupt files

2004-06-25 Thread Jefferson Ogata
Xavier Brouckaert wrote:
I have several corrupted pcap files.  The error message looks like this
when I try to reread the trace with tethereal :
This usually happens to me when I have a disk full condition while capturing. 
Captures stop getting flushed to disk until some space is cleared, and when they 
restart a header is no longer in the right place because a lot of buffered data 
was lost.

If this is what happened and the data is valuable to you, you can make the best 
of it by locating the next valid packet header by hand and stripping out the 
bogus info in the middle. This is not as hard as it might seem at first.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Corrupt files

2004-06-26 Thread Jefferson Ogata
Marco van den Bovenkamp wrote:
Xavier Brouckaert wrote:
How do you do that ?  Is there a tool for this ? editcap cannot remove 
a single broken packet.
No? Assuming it doesn't choke on the bogus packet, and you know its
sequence number, something like 'editcap <infile> <outfile> <# of bogus
packet>' should do it...
Not really.
The problem is usually that what follows some packet is not a valid packet 
header, for whatever reason -- in my case usually a transient disk full 
condition. You can't skip a packet if the header is invalid; you don't know how 
many bytes to skip to find the next valid packet header.

If you know where the problem is, though, you can split the file on various 
boundaries (say, using tail with a byte offset) until you find a valid packet header at the 
beginning.

Or if you open the file in a hex editor you'll have no problem finding a valid 
packet header, especially for captured ethernet data. The link headers are 
unmistakable.

Once you've found a sync point, you just need to strip out the data from the 
start of the problem area to your sync point.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Libpcap and Super User mode

2004-06-30 Thread Jefferson Ogata
[EMAIL PROTECTED] wrote:
Is it possible to write a program using libpcap that doesn't need to be run in
super-user mode, and if there is, how would that be done? Everything that I have
seen that uses libpcap has to be in super-user mode.
At least on BSD based systems, it depends on readability of the /dev/bpf*
devices and not on super user mode. Normally /dev/bpf* is only readable
by root, but you can change this.
More specifically, you can use libpcap as any user. On most systems, you have to 
be root, however, to monitor traffic on a network interface.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] text format stability

2004-07-01 Thread Jefferson Ogata
Michael Richardson wrote:
Guy> for the PDML specification.
  I think it is an abuse of XML... nothing is actually marked up. 

  Everything seems to be given as attributes, i.e.:

    <field name="..." value="45"/>

  rather than:

    <field name="...">0x45</field>
Using attributes makes it slightly easier to process stuff with XSLT. When you
use an <xsl:apply-templates/> you have to be more careful about
intercepting all subnodes with another template; otherwise you get all the text
nodes in the subtree instead of just the topmost one. When everything is marked
up as attributes, you just do an <xsl:value-of select="@..."/> and you're done.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Wrong tcp sequence numbers???

2004-09-21 Thread Jefferson Ogata
Claudio Lavecchia wrote:
I am using a libpcap based packet dissector to sniff WLAN traffic:
I read tcp packets using the structure:
struct sniff_tcp {
   u_short th_sport;   /* source port */
   u_short th_dport;   /* destination port */
   tcp_seq th_seq; /* sequence number */
   tcp_seq th_ack; /* acknowledgement number */
[snip]
1. What is the typedef for tcp_seq?
   //u_int th_seq;   /* sequence number */
   //u_int th_ack;   /* acknowledgement 
number */
[snip]
but in my code when I try to read the tcp sequence numbers, I get very 
odd values of sequence number. Here follows the code snippet I use to 
read sequence number. The values I get do not correspond to the ones I 
read using ethereal, for example.
2. What do you mean by "odd"?
// CODE SNIPPET
   /* This pointer points to the beginning of the IP packet */
   ip = (struct sniff_ip*)(packet + size_ethernet);
   /* This pointer points to the beginning of the TCP packet */
   tcp = (struct sniff_tcp*)(packet + size_ethernet + size_ip);
3. How do you calculate size_ip?
   // The payload represents the application data
   d_ip_packet->payload = (u_char *)(packet + size_ethernet + 
size_ip + size_tcp);

   /* Interesting portion of the IP header */
   d_ip_packet->src_ip_address = 
strcpy(d_ip_packet->src_ip_address,inet_ntoa(ip->ip_src));
   strcat(d_ip_packet->src_ip_address,"\0");
4. What are you trying to achieve here?
   d_ip_packet->dst_ip_address = 
strcpy(d_ip_packet->dst_ip_address,inet_ntoa(ip->ip_dst));
   strcat(d_ip_packet->src_ip_address,"\0");
5. And here?
   d_ip_packet->sequence_number = ntohl(tcp->th_seq); // BUG HERE! 
sequence number is not correct
6. Not correct, but how? Unrelated? Byte-swapped? Shifted?
--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Wrong tcp sequence numbers???

2004-09-22 Thread Jefferson Ogata
Claudio Lavecchia wrote:
Jefferson Ogata wrote:
Claudio Lavecchia wrote:
I am using a libpcap based packet dissector to sniff WLAN traffic:
I read tcp packets using the structure:
struct sniff_tcp {
   u_short th_sport;   /* source port */
   u_short th_dport;   /* destination port */
   tcp_seq th_seq; /* sequence number */
   tcp_seq th_ack; /* acknowledgement 
number */
[snip]
1. What is the typedef for tcp_seq?
Here follows the typedef
typedef u_int32_t tcp_seq;
Okay. I wouldn't use a typedef for that, personally, as it just means someone 
has to go find it when they read the code, and it will never change through 
protocol evolution.

   //u_int th_seq;   /* sequence number */
   //u_int th_ack;   /* acknowledgement 
number */
[snip]
but in my code when I try to read the tcp sequence numbers, I get 
very odd values of sequence number. Here follows the code snippet I 
use to read sequence number. The values I get do not correspond to 
the ones I read using ethereal, for example.
2. What do you mean by "odd"?
I mean that they are not the same that I can observe in Ethereal, 
moreover I mean that the same sequence number can appear a lot of times.
See below.
// CODE 
SNIPPET
   /* This pointer points to the beginning of the IP packet */
   ip = (struct sniff_ip*)(packet + size_ethernet);
   /* This pointer points to the beginning of the TCP packet */
   tcp = (struct sniff_tcp*)(packet + size_ethernet + size_ip);
3. How do you calculate size_ip?
int size_ip = sizeof(struct sniff_ip);
Where struct sniff_ip is the structure used to decode IP packets in the 
packet dissectors based on libpcap available on the web (cfr. sniffer.c)
Have you considered doing that correctly? I.e., size_ip = (ip_version_headerlen 
& 0xf) << 2?

Do values in the IP header match up, e.g. version, source IP, etc.?
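
In other words, compute the lengths from the packet bytes themselves rather
than from sizeof() on your structs -- roughly (untested sketch, reusing your
packet/size_ethernet names):

#include <sys/types.h>

static void header_sizes(const u_char *packet, int size_ethernet,
                         int *size_ip, int *size_tcp)
{
    const u_char *iph = packet + size_ethernet;
    *size_ip = (iph[0] & 0x0f) * 4;              /* IHL is in 32-bit words */
    const u_char *tcph = iph + *size_ip;
    *size_tcp = ((tcph[12] & 0xf0) >> 4) * 4;    /* TCP data offset, same units */
}
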
   // The payload represents the application data
   d_ip_packet->payload = (u_char *)(packet + size_ethernet + 
size_ip + size_tcp);

   /* Interesting portion of the IP header */
   d_ip_packet->src_ip_address = 
strcpy(d_ip_packet->src_ip_address,inet_ntoa(ip->ip_src));
   strcat(d_ip_packet->src_ip_address,"\0");
4. What are you trying to achieve here?
I inspect a packet at different ISO/OSI stack layers and copy some
interesting information (such as MAC source and destination, IP source
and destination, and in the case of a TCP packet the sequence number)
into a utility structure that I use later to process the packet
I was referring to strcating a null string onto an existing string. That's a 
null operation.

   d_ip_packet->dst_ip_address = 
strcpy(d_ip_packet->dst_ip_address,inet_ntoa(ip->ip_dst));
   strcat(d_ip_packet->src_ip_address,"\0");
5. And here?
   d_ip_packet->sequence_number = ntohl(tcp->th_seq); // BUG 
HERE! sequence number is not correct
Here I copy the TCP sequence number to my utility structure.
Another null strcat, this time using what appears to be the wrong destination 
field.
6. Not correct, but how? Unrelated? Byte-swapped? Shifted?
Well, I do not know how to answer this question. What I can say is
that a sequence number appears several times; a repeating TCP sequence
number that I got is, for example, 819974287
Look at the value in hex, in this case 30DFD08F. Now load the same packet in 
ethereal and look at the packet data in the bottom pane. See if you see those 
bytes, or their reversal. If so, locate the corresponding fields in the 
structural view. If you can do that you'll have some orientation and you can 
figure out where you're off.

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] tcpdump filter for HTTP GET

2004-11-08 Thread Jefferson Ogata
Robert Lowe wrote:
Anyone have a filter that will capture just HTTP GET requests?  I'm 
looking for
something more specific than just "dst host X and tcp dst port 80", but 
I'm not
worried about requests to non-standard ports.  I would suspect I could 
reference
tcp[N:3] = GET, but can N be an expression itself, e.g. the data offset 
in the
TCP header??
Yes.
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420
--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] tcpdump filter for HTTP GET

2004-11-08 Thread Jefferson Ogata
Robert Lowe wrote:
Jefferson Ogata wrote:
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420
Beautiful!  But wouldn't the bit-shift be for 4 bits?  Thanks
It would, but then you'd have to multiply by 4 since the offset is in 
multiples of 4. So >> 2 does the shift and multiply in one operation.
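
(Concretely: for a TCP header with no options, tcp[12:1] is 0x50; 0x50 & 0xf0
is 0x50, i.e. 80 decimal, and 80 >> 2 is 20. So the filter reads the four bytes
starting at offset 20 -- the first payload bytes -- and compares them with
0x47455420, which is "GET " in ASCII.)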

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Sniffing ranges of ips

2004-11-19 Thread Jefferson Ogata
MMatos wrote:
I want to write a little program that analyses packets within a given ip
range.

My current problem is to set a filter that works with ip ranges.
For example I want to dump all traffic that arrives at my box from ips
192.168.2.15 to 192.168.2.40.
I could write all the ips in the range but that's not a good solution,
so how can I implement that filter correctly using the range?

some kind of
$tcpdump "src 192.168.2.15/40"   :)
Use the attached perl scripts, e.g.:
tcpdump [options] `./genrange.pl 192.168.2.15 192.168.2.40 | 
./aggregate.pl | ./iptcpdump.pl src`

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
#!/usr/bin/perl -wT

my $low = shift;
my $high = shift;

die "usage: $0 low-addr high-addr" unless (defined ($high));

$low = &a2n ($low);
$high = &a2n ($high);

for (my $ip = $low; $ip <= $high; ++$ip)
{
print &n2a ($ip), "\n";
}

sub a2n
{
return unpack ('N', pack ('C4', split (/\./, $_[0])));
}

sub n2a
{
return join ('.', unpack ('C4', pack ('N', $_[0])));
}

#!/usr/bin/perl -wT

my %in;
my %out;

while (defined ($_ = <>))
{
chomp;
my $line = $_;
s/#.*$//;
s/\s+//;
next unless (length);
die (qq{$.:$line}) unless (/^([\d\.]+)(?:\/(\d+))?$/);
my ($ip, $bits) = ($1, $2);
$bits = 32 unless (defined ($bits));
$ip = &a2n ($ip);
$in{$ip} = $bits;
}

# Eliminate subnets.
foreach (keys (%in))
{
next unless (exists ($in{$_}));
my $mask = &mask ($in{$_});
foreach my $sub (keys (%in))
{
	next if ($sub == $_);
	if (($sub & $mask) == $_)
	{
	delete ($in{$sub});
	}
}
}

# Aggregate what's left.
while (scalar (keys (%in)))
{
foreach (sort (keys (%in)))
{
	next unless (exists ($in{$_}));
	my $bits = $in{$_};
	my $node = 1 << (32 - $bits);
	my $other = $_ ^ (1 << (32 - $bits));
	if (exists ($in{$other}))
	{
	delete ($in{$_});
	delete ($in{$other});
	my $super = $_ & &mask ($bits - 1);
	$in{$super} = $bits - 1;
	}
	else
	{
	$out{$_} = $bits;
	delete ($in{$_});
	}
}
}

foreach (sort (keys (%out)))
{
my $bits = $out{$_};
print &n2a ($_), '/', &n2a (&mask ($bits)), qq{\n};
}


sub a2n
{
return unpack ('N', pack ('C4', split (/\./, $_[0])));
}

sub n2a
{
return join ('.', unpack ('C4', pack ('N', $_[0])));
}

sub mask
{
my $bits = shift;
return 0xffffffff if ($bits > 32);
return 0 if ($bits < 1);
return ~((1 << (32 - $bits)) - 1);
}


#!/usr/bin/perl -wT

my @expr;

my $qualifier = shift;
if (defined ($qualifier))
{
$qualifier =~ s/^\s+//;
$qualifier =~ s/\s+$//;
$qualifier .= ' ';
}
else
{
$qualifier = '';
}

while (defined ($_ = <>))
{
chomp;
my $line = $_;
s/#.*$//;
s/\s+//;
next unless (length);
die (qq{$.:$line}) unless (/^([\d\.]+)\/([\d\.]+)$/);
my ($addr, $mask) = ($1, $2);
if ($mask eq '255.255.255.255')
{
	push (@expr, "${qualifier}host $addr");
}
else
{
	push (@expr, "(${qualifier}net $addr mask $mask)");
}
}

print join (' or ', @expr), "\n";

-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Sniffing ranges of ips

2004-11-19 Thread Jefferson Ogata
Jefferson Ogata wrote:
MMatos wrote:
I want to write a little program that analyses packets within a given 
ip range.

My current problem is to set a filter that work with ip ranges.
For example I want to dump all traffic that arrives to my box from ips 
192.168.2.15 to 192.168.2.40
I could write all the ips in the range but that's not a good solution, 
so how can implement that filter correctly using the range?

some kind of
$tcpdump "src 192.168.2.15/40"   :)

Use the attached perl scripts, e.g.:
tcpdump [options] `./genrange.pl 192.168.2.15 192.168.2.40 | 
./aggregate.pl | ./iptcpdump.pl src`
Or you can do something more utilitarian, such as:
tcpdump [options] '( ip[12:4] >= 0xc0a8020f ) and ( ip[12:4] <= 
0xc0a80228 )'

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Sniffing ranges of ips

2004-11-19 Thread Jefferson Ogata
MMatos wrote:
Jefferson Ogata wrote:
Jefferson Ogata wrote:
MMatos wrote:
For example I want to dump all traffic that arrives to my box from 
ips 192.168.2.15 to 192.168.2.40
I could write all the ips in the range but that's not a good 
solution, so how can implement that filter correctly using the range?
Use the attached perl scripts, e.g.:
tcpdump [options] `./genrange.pl 192.168.2.15 192.168.2.40 | 
./aggregate.pl | ./iptcpdump.pl src`
Or you can do something more utilitarian, such as:
tcpdump [options] '( ip[12:4] >= 0xc0a8020f ) and ( ip[12:4] <= 
0xc0a80228 )'
First of all thanks for the precious help you give me !
You're welcome.
I've been analysing the scripts and they expand the ranges to all ips
and then work around with the netmasks...
Correct.
Indeed I like the 2nd way you're suggesting but I have a little doubt:
Let's pick ip[12:4]
The ip is self-explanatory; the 4 represents the 4th word of the ip
datagram which corresponds to the source address (right?) but I'm unable
to find out the purpose of the number 12.
Can you enlighten me about that?
ip[12:4] means the four bytes starting at offset 12 in the IP header. 
tcpdump will extract these bytes as a 32-bit integer in network order. 
The four-byte value at offset 12 in the IP header is the IP source 
address. The destination IP address can be found at ip[16:4].
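
(For example, 192.168.2.15 is c0.a8.02.0f in hex, which is where the
0xc0a8020f in the filter above comes from.)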

The rest is simple integer comparisons with hex-encoded integers 
representing the IP range you're interested in. This technique is more 
efficient than the netmask technique, although the netmask technique is 
somewhat cooler, in a dark sunglasses kind of way, if you're into that 
sort of thing. You might want to keep the aggregation script around for 
other things. (Assuming it works correctly; I wrote it a long time ago 
and I don't remember if I ever actually finished and tested it. On the 
face of it it does what it's supposed to.)

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Sniffing ranges of ips

2004-11-20 Thread Jefferson Ogata
MMatos wrote:
Note: I'm resending this message because I sent it 20 hours ago and
it hasn't arrived on the list (at least I haven't received it yet).
I saw it yesterday.
Alexander Dupuy wrote:
Jefferson Ogata wrote:
Or you can do something more utilitarian, such as:
tcpdump [options] '( ip[12:4] >= 0xc0a8020f ) and ( ip[12:4] <= 
0xc0a80228 )'
This doesn't support non-power-of-two ranges; for example addresses 
between 192.168.1.10 and 192.168.1.19.  For something like that, with 
IPv4 you can use a hack like "(ip[12:4] >= 0x01020304) and (ip[12:4] 
<= 0x01020506)" to express that the source IP address should be within 
the range of 1.2.3.4 to 1.2.5.6 (inclusive).  No simple expression 
exists for non-power-of-two IPv6 address ranges, but you could 
probably cobble up something only fairly heinous by computing 
enclosing power-of-two ranges using an adaptation of Jefferson Ogata's 
genrange.pl and aggregate.pl scripts and doing something similar with 
comparisons on low-order four-byte pieces of the address.

Yes, solving that problem of unsupported non-power-of-two ranges wouldn't
be very difficult
The aggregate.pl script I sent earlier did in fact have bugs (I 
apparently hadn't actually tested it in days of yore), so attached find 
a more correct implementation.

How can I know that a given bpf filter is correct for a given range by
analysing its opcodes? Maybe a link to a doc lying somewhere?
Usually we trust it. But the code generator is a snarly rat's nest, and 
the optimizer is terrifying to behold. So it helps to know the virtual 
machine semantics. You can find them here, among other places:

http://www.tcpdump.org/papers/bpf-usenix93.pdf
http://www.freebsd.org/cgi/man.cgi?query=bpf&apropos=0&sektion=0&manpath=FreeBSD+5.3-RELEASE+and+Ports&format=html
--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
#!/usr/bin/perl -wT

my %in;
my %out;

my $me = $0;
$me =~ s/.*\///;

my $cidrOut = 0;

while (defined ($_ = shift))
{
if (s/^\-//)
{
	$cidrOut += s/c//g;

	next unless (length);
}

print STDERR "usage: $me [-c]\n";
exit 1;
}

while (defined ($_ = <>))
{
chomp;
my $line = $_;
s/#.*$//;
s/\s+//;
next unless (length);
die (qq{$.:$line}) unless (/^([\d\.]+)(?:\/(\d+))?$/);
my ($ip, $bits) = ($1, $2);
$bits = 32 unless (defined ($bits));
$mask = &mask ($bits);
$ip = &a2n ($ip) & $mask;

# Eliminate subnets.
foreach my $check (sort { $in{$b} <=> $in{$a}; } (keys (%in)))
{
	my $checkMask = &mask ($in{$check});

	if (($check & $mask) == $ip)
	{
	delete ($in{$check});
	}
	elsif (($ip & $checkMask) == $check)
	{
	$ip = undef;
	last;
	}
}
next unless (defined ($ip));

$in{$ip} = $bits;
}

# Aggregate what's left.
while (scalar (keys (%in)))
{
foreach (sort { $in{$b} <=> $in{$a}; } (keys (%in)))
{
	next unless (exists ($in{$_}));
	my $bits = $in{$_};
	my $other = $_ ^ (1 << (32 - $bits));
	if (exists ($in{$other}))
	{
	delete ($in{$_});
	delete ($in{$other});
	my $super = $_ & &mask ($bits - 1);
	$in{$super} = $bits - 1;
	}
	else
	{
	$out{$_} = $bits;
	delete ($in{$_});
	}
}
}

foreach (sort { $a <=> $b; } (keys (%out)))
{
my $addr = &n2a ($_);
my $mask = $cidrOut ? $out{$_} : &n2a (&mask ($out{$_}));
print qq($addr/$mask\n);
}


sub a2n
{
return unpack ('N', pack ('C4', split (/\./, $_[0])));
}

sub n2a
{
return join ('.', unpack ('C4', pack ('N', $_[0])));
}

sub mask
{
my $bits = shift;
return 0xffffffff if ($bits > 32);
return 0 if ($bits < 1);
return ~((1 << (32 - $bits)) - 1);
}


-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] BPF in hardware

2004-11-22 Thread Jefferson Ogata
Guy Harris wrote:
That obviates the need to design the expression tree representation (as 
I'd like to be able to hand expression trees *not* constructed by 
libpcap's parser to the filter installer, that should be designed well 
enough to be usable and extensible as necessary), but does mean you'd 
have to do a lot of work on the *existing* code generator to make it 
emit stuff other than a BPF program, and it might be a bit more 
intrusive than having separate code generators (code generator routines 
are called from the parser).
The obvious approach is to simply write a BPF-to-MTP assembler/compiler, 
crunch the result of pcap_compile(), and stuff that into the interface. 
This would be a lot easier than hacking on the code generator.
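
Roughly like this (a sketch only; translate_insn() stands in for whatever the
hardware interface actually wants):

#include <pcap.h>

/* Compile a filter expression with libpcap, then walk the resulting BPF
 * program and hand each instruction to a hardware-specific translator. */
static int compile_for_hw(const char *expr)
{
    struct bpf_program prog;
    pcap_t *p = pcap_open_dead(DLT_EN10MB, 65535);
    u_int i;

    if (p == NULL || pcap_compile(p, &prog, expr, 1, 0) < 0) {
        if (p != NULL)
            pcap_close(p);
        return -1;
    }
    for (i = 0; i < prog.bf_len; i++) {
        /* translate_insn(&prog.bf_insns[i]);  -- hardware-specific, not shown */
    }
    pcap_freecode(&prog);
    pcap_close(p);
    return 0;
}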

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Stopping DHCP request

2005-03-03 Thread Jefferson Ogata
aman Reddy wrote:
 I think you didn't understand my question exactly. The thing is that I have
written a program to capture packets on the interfaces attached to my laptop,
but without having an IP address assigned to those interfaces from the DHCP
server.

And one more thing: I don't want to assign a static IP, because whenever I
capture a packet of interest I want to get an IP address for the interface
through which I captured the interesting packet. But I have seen that even
though I put a static IP on some interface x, whenever I start the DHCP server
the x interface gets an IP from the DHCP server. It is strange to me.
You can configure a static address for your system on the DHCP server. 
Simply associate a fixed IP with the mac address of your system's NIC. 
Then your address will be static regardless of whether you get your 
address from local configuration or DHCP.
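
For example, if the server happens to be ISC dhcpd (an assumption on my part;
adjust for whatever server you actually run -- the MAC and address below are
placeholders):

host my-laptop {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.50;
}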

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Payload in HEX and ASCII..

2005-04-27 Thread Jefferson Ogata
Aaron Turner wrote:
> You'll either have to write your own function (not that hard) or you can
> fork tcpdump and pass the packets to it via a pipe.
> 
> For a full list of what features/functions libpcap comes with do a 'man
> pcap'.  Anything that isn't listed there you'll have to do yourself.
> 
> On Wed, Apr 27, 2005 at 11:04:17AM -, soumya r wrote:
>>> I am doing a sniffer program using "libpcap" as part of my project.
>>> How can I display the 'packet payload' in 'HEX' and 'ASCII' forms?
>>> Please advice.

This is so obvious a feature that it's truly incredible no one has added
it to tcpdump in all these years. It's no wonder someone would be
curious that the man page doesn't mention it.

I recommend using ethereal or tethereal, since they do this. Or filter
the output of tcpdump through the following:

#!/usr/bin/perl -w

my $maxLen = -1;
my $format = "\t%-s";

while (<>)
{
unless (/^\s/)
{
print;
next;
}

chomp;

s/^\s*//;
if (length ($_) > $maxLen)
{
$maxLen = length ($_);
$format = sprintf ("\t%%-%ds", $maxLen);
}
printf ($format, $_);
s/\s//g;
s/([0-9a-f]{2})/chr (hex ($1))/eg;
s/[^\040-\176]/./g;
print "\t$_\n";
}
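
(The script is meant to be fed the output of tcpdump -x: summary lines pass
through untouched, and each indented hex line gets an ASCII rendering appended
in a right-hand column.)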



-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Payload in HEX and ASCII..

2005-04-27 Thread Jefferson Ogata
Jefferson Ogata wrote:
> This is so obvious a feature that is truly incredible no one has added
> it to tcpdump in all these years. It's no wonder someone would be
> curious that the man page doesn't mention it.
> 
> I recommend using ethereal or tethereal, since they do this. Or filter
> the output of tcpdump through the following:

Or some versions of tcpdump have -X, I see. Had forgotten that.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Does option -w influence the packet capture?

2005-05-06 Thread Jefferson Ogata
David Rosal wrote:
> I'm using tcpdump-3.7.2 to capture ethernet traffic, and I'm wondering
> why it captures many fewer packets when I use option -w.
> 
> I have done the following test:
> 
> I've run "tcpdump -s0" many times for 10 seconds each time, and the
> average result is to capture about 100 packets.
> I've run "tcpdump -s0 -w dumpfile" many times for 10 seconds each time,
> and the average result is to capture only 70 or 80 packets.
> But both tests have been done in the same computer, at the same hour.
> 
> Is this behaviour expected?

When you perform live analysis, you may also be capturing DNS and other
related traffic initiated by tcpdump itself. When writing to a file, no
protocol analysis is done, so this traffic is absent.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] sniffex.c - libpcap example code proposal

2005-06-29 Thread Jefferson Ogata
Nathan Jennings wrote:
> There's one issue I've run into: after displaying certain packets (see
> function print_payload), my xterm/bash shell loses the ability to
> display newlines (i.e scroll lines). I suppose this is due to the
> display of a certain sequence of characters to my xterm/shell. Any ideas?

Escape all non-printing characters, especially anything outside [\040-\176].

If you are passing arbitrary binary data to your terminal, an attacker
may be able to instruct your terminal to insert characters into your
terminal stream to execute arbitrary commands.
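
For example (a minimal sketch, not taken from sniffex.c):

#include <stdio.h>

/* Print printable ASCII (octal 040-176) as-is and everything else as a dot,
 * so no raw control sequences ever reach the terminal. */
static void print_sanitized(const unsigned char *buf, size_t len)
{
    size_t i;

    for (i = 0; i < len; i++)
        putchar((buf[i] >= 0x20 && buf[i] <= 0x7e) ? buf[i] : '.');
    putchar('\n');
}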

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] lpcap not capturing non-accepted connections?

2005-07-19 Thread Jefferson Ogata
[EMAIL PROTECTED] wrote:
> Heya everyone, I'm trying to build a port knocker for fun using pcap and 
> basic C
> sockets. I've set up 10 sockets listening on ports 4000-4010 but not actually
> accepting.
> 
> I then set up a pcap filter for port 4000 (just to test it) to see if it would
> grab anything. When I try to telnet to port 4000, I can connect but I don't 
> see
> any packets grabbed by pcap, does pcap not grab them if I don't have an accept
> for my sockets?

Even if you're not even listening, pcap should see the traffic. The
client system has to send a SYN packet to even find out whether
something's listening.

Either there's something wrong with your capture program, you have a
firewall in the way, or you're capturing on the wrong interface.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] lpcap not capturing non-accepted connections?

2005-07-20 Thread Jefferson Ogata
[EMAIL PROTECTED] wrote:
> I found out the problem, for some reason it wasn't sniffing localhost 
> traffic, I
> had to telnet from a different computer for it to work. I have no idea why it
> does this, but hey, at least it works now.
> 
> Thanks for the help.

localhost traffic is routed over the loopback interface. Perhaps you
were not sniffing loopback.

Top-posting is evil.

> Quoting Jefferson Ogata <[EMAIL PROTECTED]>:
>>[EMAIL PROTECTED] wrote:
>>>Heya everyone, I'm trying to build a port knocker for fun using pcap and
>>basic C
>>>sockets. I've set up 10 sockets listenning on ports 4000-4010 but not
>>actually
>>>accepting.
>>>I then set up a pcap filter for port 4000 (just to test it) to see if it
>>would
>>>grab anything. When I try to telnet to port 4000, I can connect but I don't
>>see
>>>any packets grabbed by pcap, does pcap not grab them if I don't have an
>>accept
>>>for my sockets?
>>
>>Even if you're not even listening, pcap should see the traffic. The
>>client system has to send a SYN packet to even find out whether
>>something's listening.
>>
>>Either there's something wrong with your capture program, you have a
>>firewall in the way, or you're capturing on the wrong interface.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] not net problem

2005-08-08 Thread Jefferson Ogata
Black, Michael wrote:
> For example, I've got two networks linked:
> 
> 10.4.4 mask 255.255.255.0
> 10.1.1 mask 255.255.255.0
> 
> I want to monitor each network for traffic from the other so tried this
> on 10.4.4:
> 
> tcpdump not net 10.4.4
> 
> But when I do this it drops traffic from 10.1.1 also in addition to the
> requested 10.4.4

Yes, it will drop any traffic that matches "net 10.4.4". This includes
traffic from 10.4.4 to 10.1.1 and traffic from 10.1.1 to 10.4.4.

> I've tried specifying netmasks to no avail.
> 
> Am I doing something wrong or is this a bug that I've found?

If you want to see traffic that involves both networks, use "net 10.4.4
and net 10.1.1".

Or, if you want to see traffic leaving 10.4.4 bound for any other
destination, use "not dst net 10.4.4".

> ___
> Michael D. Black, MSIA, CISSP, IAM

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap: prob w/libnet making raw socket client

2005-10-04 Thread Jefferson Ogata
On 10/03/2005 04:56 PM, rh wrote:
> I'm using libnet 1.1.1 and pcaplib 0.8.3 (I believe).
> 
> Linux 2.4.20 / 2.6.11 (and later, FreeBSD 5.2).
> 
> GCC 3.3
> 
> Apologies if this is too off-topic an application for this list.
> 
> I'm attempting to use libnet and pcap together to write a client using raw
> sockets so that I can gain explicit control over the ip_p value in the IP
> header.  I need to test application-sensitive router configurations.
> 
> I'm failing at connection establishment.  I can squirt the packet out using
> libnet and get a reply using pcap, but the connection-initiating TCP seems
> to be generating a RST on my behalf before I can transmit the third packet
> of the handshake.

Is there some reason you don't simply synthesize packets using an IP
address that doesn't belong to a box on the network (but use a little
proxy arp glue)?
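
On Linux, for example, publishing a proxy ARP entry for a spare address
might look like this (the address and interface are placeholders for your
own values):

arp -Ds 10.0.0.99 eth0 pub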

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap file format documentation

2006-03-19 Thread Jefferson Ogata
On 03/20/2006 12:12 AM, Stephen Donnelly wrote:
[top-posted rat's nest cleaned up]
> On Sun, 2006-03-19 at 20:43 -0800, Don Morrison wrote:
>>Here's the problem.  I'm dealing with corrupted pcap files, where the
>>last packet was partially written, but it's not of interest and all I
>>want to do is truncate the last packet.  My assumption is that
>>libpcap's API will not allow me to deal with this since programs that
>>are dependent on it (tcpdump, ethereal) hang when attempting to open
>>any such file.  Is this assumption incorrect?
> 
> That sounds quite likely. This may well be a case where you need to edit
> the file directly, and it seems unlikely that the compatibility issues I
> mentioned would be a problem.

The trivial way to fix a truncated pcap file:

tcpdump -r broken.pcap -w clean.pcap

I suspect Ethereal's editcap and mergecap might accomplish pretty much
the same thing.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap file format documentation

2006-03-20 Thread Jefferson Ogata
On 03/20/2006 02:01 AM, Don Morrison wrote:
[top posting fixed again]
> On 3/19/06, Jefferson Ogata <[EMAIL PROTECTED]> wrote:
>>
>>The trivial way to fix a truncated pcap file:
>>
>>tcpdump -r broken.pcap -w clean.pcap
> 
> I tried this method, but it hangs tcpdump.

That would be a bug in tcpdump. Why don't you send an example pcap file
along that does this (or post it to a web or FTP site and send a URL),
and state what version of tcpdump you are using.

You did run tcpdump with no options other than -r and -w, right?

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Can I be able to use Libpcap for capturing packets on Unix socket by the following way described in the body of the mail

2006-03-20 Thread Jefferson Ogata
On 03/13/2006 01:28 AM, Santosh wrote:
> I need a clarification regarding Libpcap library. What I am doing is instead 
> of writing the packets on to ethernet interface, I am writing on to the Unix 
> socket.
> I am using Libnet library for building and injecting the packets. I have 
> modified the Libnet library for supporting Unix sockets. For capturing the 
> packets on unix sockets I am thinking of using Libpcap library.

The concept of "capturing" on UNIX-domain sockets doesn't really make
much sense. One doesn't use libpcap to capture on an Internet-domain
socket; one captures on an interface. Traffic from multiple
Internet-domain sockets, as well as non-socket-based traffic (e.g. ICMP
messages), is multiplexed over an interface by necessity, since the
interface is the egress for network traffic from the host. The interface
thus provides the observation point for capturing to occur.

There is no parallel with UNIX-domain sockets. There is no API I know of
for a third party to observe UNIX-domain datagrams as they traverse from
socket to socket.

In short, I don't understand what you are trying to achieve. If you want
to monitor stream-based UNIX-domain socket activity, the only way I know
of is to act as a proxy between your client and server.
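
For stream sockets, a generic relay such as socat can play that proxy
role; a sketch, with placeholder paths:

socat -v UNIX-LISTEN:/tmp/app.sock.tap,fork UNIX-CONNECT:/tmp/app.sock

Point the client at the .tap path and socat logs everything it passes
through.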

If you want to write a traffic log from your server or client, and wish
simply to use libpcap format, well, what's the point? There are no IP or
other protocol headers on UNIX-domain messages, so it's not as if you
will then be able to use other existing tools to analyze the traffic,
since your messages aren't IP packets.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Convert timeval to timestamp

2006-03-20 Thread Jefferson Ogata
On 03/20/2006 08:29 PM, Luis Del Pino wrote:
> What function do I have to use to convert a struct timeval (struct
> pcap_pkthdr {struct timeval ts;...}) to timestamp units(u_int32)?

Um, is this a trick question?

man 2 gettimeofday (Linux):

   struct timeval {
       time_t      tv_sec;   /* seconds */
       suseconds_t tv_usec;  /* microseconds */
   };

> i like calculating jitter in RTP streams.

I like jittering calculators in REM dreams.
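
Seriously, though: if what you want is RTP timestamp units, you have to
pick a clock rate first. A minimal sketch (the function name and the
8000 Hz example are mine, not any standard API):

#include <stdint.h>
#include <sys/time.h>

/* Convert a pcap timestamp to RTP clock units, e.g. clock_rate = 8000
 * for G.711 audio. The result wraps modulo 2^32, as RTP timestamps do. */
static uint32_t
timeval_to_rtp_units(const struct timeval *tv, uint32_t clock_rate)
{
    return (uint32_t)((uint64_t)tv->tv_sec * clock_rate
                      + (uint64_t)tv->tv_usec * clock_rate / 1000000);
}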

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Can BPF be used to filter on Unix Sockets ?

2006-03-23 Thread Jefferson Ogata
On 03/23/2006 05:25 AM, [EMAIL PROTECTED] wrote:
> Can I use BPF(BSD Packet Filter) for unix sockets. I don't think so it
> can be used. I just needed to confirm.
> I know its used to filter on any data link devices.

Did you read my response to you on your earlier related question posted
2006/03/20 09:56 UTC?

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap file format documentation

2006-03-23 Thread Jefferson Ogata
On 03/20/2006 04:18 AM, Don Morrison wrote:
[top posting fixed YET again]
> On 3/20/06, Jefferson Ogata <[EMAIL PROTECTED]> wrote:
>>On 03/20/2006 02:01 AM, Don Morrison wrote:
>>[top posting fixed again]
>>>I tried this method, but it hangs tcpdump.
>>
>>That would be a bug in tcpdump. Why don't you send an example pcap file
>>along that does this (or post it to a web or FTP site and send a URL),
>>and state what version of tcpdump you are using.
>>
> The files are at work, so I'll have to reply in the morning. -Don

Don, did you want to point us at one of your problem files?

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap file format documentation

2006-03-24 Thread Jefferson Ogata
On 03/24/2006 04:35 PM, Don Morrison wrote:
>>>>The trivial way to fix a truncated pcap file:
>>>>
>>>>tcpdump -r broken.pcap -w clean.pcap
>>>
>>>I tried this method, but it hangs tcpdump.
>>
>>That would be a bug in tcpdump. Why don't you send an example pcap file
>>along that does this (or post it to a web or FTP site and send a URL),
>>and state what version of tcpdump you are using.
>>
>>You did run tcpdump with no options other than -r and -w, right?
> 
> My apologies, what I said was incorrect.  Running the command does not
> crash tcpdump, but the outputfile ("clean.pcap") will crash Ethereal,
> so while both files are clean enough for tcpdump to display and not
> crash, not so for Ethereal.  

Offhand I'd say this has nothing to do with truncation, since the
truncated packet shouldn't be included in the clean pcap file. My guess
would be that you've found a bug in one of ethereal's protocol dissectors.

Just for grins, have you tried tethereal?

Also, have you identified exactly what packet ethereal/tethereal crashes
on? If so, extract just that packet from the pcap file into a separate
pcap and see if it still crashes ethereal.

There is at least one tool for noising up pcap files so that they're
fairly safe to release to others without fear of exposing private data.

>   Why am I using Ethereal? :) UMA decodes. 
> Unfortunately, I cannot send you the pcap file because it would be a
> violation of my contract with the telecom I work for.

Understood.

> Thanks very much for your help.

No problem.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Assumptions needed to get the same tcpdump

2006-04-12 Thread Jefferson Ogata
On 04/12/2006 07:07 AM, Hannes Gredler wrote:
> if your DNS is configured correct on both systems and you don't do any
> site local private adressing then you should get the identical output
> on both systems - if you specifiy the -n flag then tcpdump does not attempt
> to resolve names, you should be fine i.e. identical output irrespective
> how broken your DNS is.

What about differences in /etc/services?

> Latha G wrote:
>> Cann't we expect the output of tcpdump on different systems for the same
>> input file
>> to be same?
>> I am not getting the same output, in the sense it was differencing at the
>> hostnames..I suppose the problem might be DNS lookups,
>> one was using and the other one not.
>> Whether the both systems has to be DNS enabled or disabled?
>> Is this assumption is needed to get the same output?
>> Like wise , are there any other assumptions ? or it is impossible to
>> get the
>> same output on different systems?
>>
>> Thanks in advance.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers]

2006-04-29 Thread Jefferson Ogata
On 04/28/2006 09:53 PM, Jeremy Sheldon wrote:
> hello, i'm writing a little program.  this program attempts to monitor
> the linux system (via /proc) to discover if certain specified programs
> are running (just for the logged in user).  if they are, the program
> then attempts to discover if they have any external connections.
> 
> for tcp this is easy.  i just use /proc and some netstat code to
> discover the remote address.  however, for udp they are sometimes these
> "unconnected" connections.  so, i'd like the program to sniff a few
> packets on the udp source port gathered and determine the remote IP/port.
> 
> naturally, my first thought was libpcap.  i whipped up a quick little
> sniffer that grabs a couple packets and BAM.  it works great... as long
> as you're root.  well, this program shouldn't need root access.
> 
> does anyone have suggestions for either 1. how to determine the remote
> ip/port for the udp connection without using the libpcap "sniffer"
> technique?

ptrace(2) the process and trap send and sendto calls. Naturally you
won't be able to do this if some other process is already ptracing the
target process.

> or 2. how to use libpcap without require the program to run with root
> privlidges?

AFAIK on Linux this is not possible.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] RPM for TCPDUMP

2006-05-26 Thread Jefferson Ogata
On 05/26/2006 11:24 AM, Scott Krall wrote:
> I'm looking for an rpm for tcpdump that will run on a Red Hat Linux 7.2 
> system 

Well, you can download and build the SRPM for tcpdump for RHEL 2.1AS
from Red Hat.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] About pcap rules

2006-08-24 Thread Jefferson Ogata
the
subrules.

My hack was to add a comma and callback operator to the pcap compiler
and implement a callback opcode in the BPF engine, and do the packet
inspection in userland. If I did it again, I might do it differently,
but it works.

My main point in all this, however, is that when you start digging, the
question of "which subrule" is somewhat more subtle than it might seem
at first.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Permission denied writing second file when using

2006-09-13 Thread Jefferson Ogata
On 2006-09-13 17:16, John Bourke wrote:
> I have a problem with using tethereal to capture and write series of files.
> The first file writes ok, so there cannot be an issue with file access of
> any type.  The second file gives an error.

I doubt tethereal has anything to do with this.

> [EMAIL PROTECTED] root]# mkdir test
> [EMAIL PROTECTED] root]# cd test
> [EMAIL PROTECTED] test]# ls -l
> total 0
> [EMAIL PROTECTED] test]# /usr/sbin/tcpdump -i eth0 -w test.cap -s 0 -C 1
> tcpdump: listening on eth0
> tcpdump: test.cap2: Permission denied
> [EMAIL PROTECTED] test]# ls -l
> total 984
> -rw-r--r--1 root root  1000531 Sep 13 11:36 test.cap
> [EMAIL PROTECTED] test]#
> 
> Any ideas ?

The directory needs to be writable by the local tcpdump user, which may
be "pcap".

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Sniffing inbound ethernet frames only

2006-10-21 Thread Jefferson Ogata
On 2006-10-20 16:24, [EMAIL PROTECTED] wrote:
> I have a Linux box with two Fast Ethernet interfaces.
> In two separate windows on the desktop I want to see
> all inbound ethernet frames (from the wire), but not
> the ethernet frames coming down the local network stack.
> In the left window tcpdump should run to catch all
> incoming ethernet frames from interface eth0.
> In the right window tcpdump should run to catch all
> incoming ethernet frames from interface eth1.
> All outgoing ethernet frames must not be displayed.
> Both tcpdump processes must run in parallel.
> 
> The keyword inbound cannot be used with link level.
> Which tcpdump expression solves the problem?

Have you tried

left window: not ether src mac:addr:of:eth0
right window: not ether src mac:addr:of:eth1

?
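
As full commands, with placeholder MAC addresses:

left window:  tcpdump -n -e -i eth0 not ether src 00:11:22:33:44:00
right window: tcpdump -n -e -i eth1 not ether src 00:11:22:33:44:11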

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Sniffing inbound ethernet frames only

2006-10-23 Thread Jefferson Ogata
On 2006-10-23 15:13, [EMAIL PROTECTED] wrote:
> Jefferson Ogata wrote:
>> Have you tried
>>
>> left window: not ether src mac:addr:of:eth0
>> right window: not ether src mac:addr:of:eth1
> 
> Hello Jefferson,
> 
> thanks for the quick response.
> Is there a per process filtering or is there
> one kernel filter for all processes? In the latter
> case the filter rule of the second invocation
> of tcpdump would overwrite the rule of the
> first invocation of tcpdump, isn't it?

Filtering is per process, or really per raw socket.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] why not filtering at driver level ?

2006-10-23 Thread Jefferson Ogata
On 2006-10-24 05:11, Guy Harris wrote:
> 3. Raise the limit on the maximum number of BPF instructions.
> 
> You're going to have to add stuff to, or change stuff in, the kernel to
> implement this *anyway*, so you might as well just boost the maximum
> number of BPF instructions and not have to change libpcap *at all*.

I've lost track of what the original issue was, but if the maximum size
of the in-kernel BPF program is the sticking point, it's tunable at
runtime, or at least it used to be. Set /proc/sys/net/core/optmem_max to
32 + 8 * number-of-bpf-instructions. There's still an upper bound, but
the default value is much lower.
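
For example, to leave room for a filter of roughly 4096 BPF instructions
(32 + 8 * 4096 = 32800):

echo 32800 > /proc/sys/net/core/optmem_max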

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] why not filtering at driver level ?

2006-10-23 Thread Jefferson Ogata
On 2006-10-24 05:15, Guy Harris wrote:
> Guy Harris wrote:
> 
>> You forgot option 3:
>>
>> 3. Raise the limit on the maximum number of BPF instructions.
>>
> You might also have to raise the limit on socket options; see
> 
> http://www.tcpdump.org/lists/workers/2004/05/msg7.html

Well, this gives me the urge to quote Steven Wright: "I'm having deja vu
and amnesia at the same time." :^)

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] HTTP support in libpcap

2006-10-30 Thread Jefferson Ogata
On 2006-10-30 03:05, Ian McDonald wrote:
> On 10/29/06, Guy Harris <[EMAIL PROTECTED]> wrote:
>> abakash wrote:
>> > I am new to libpcap and just want to know whether libpcap has got any
>> > http support in it i.e. whether I can extract http header information
>> > from any packet.
>>
>> You can, if you choose, write code to extract HTTP header information
>> from any TCP segment captured by libpcap that contains HTTP header
>> information and that was captured with a "snapshot length" long enough
>> to contain the header information in question.
>>
>> Libpcap, however, won't do it for you; you will have to do it yourself.
>> -
> libtrace from our research group might be able to help:
> http://research.wand.net.nz/software/libtrace.php

Um, gee, is no one going to suggest wireshark?

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap files with file header snaplen < packet

2006-11-30 Thread Jefferson Ogata
On 2006-12-01 01:28, Guy Harris wrote:
> On Nov 30, 2006, at 1:08 PM, Aaron Turner wrote:
>> Unfortunately, I don't know where or how these pcap files were
>> generated, so I don't know what's causing this to happen or how
>> widespread it is.  Could this of been a bug in earlier versions of
>> libpcap??
> 
> I don't know - it might have come from some vendor-"improved" version of
> libpcap, or the bug might have been in the underlying packet capture
> mechanism that libpcap used on whatever platform the packet was
> captured, or it might have been written by something other than libpcap.

Is it possible they were the result of combining multiple pcaps via
something like mergecap?

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap files with file header snaplen < packet

2006-12-04 Thread Jefferson Ogata
On 2006-12-04 15:03, Harley Stenzel wrote:
> On 12/1/06, Jefferson Ogata <[EMAIL PROTECTED]> wrote:
>> Is it possible they were the result of combining multiple pcaps via
>> something like mergecap?
> 
> It would seem that for something like this to be generally usefull, a
> capture station identifier would be needed.  I suppose a source-file
> identifier could also do the trick.

Not sure I follow your response. It's not a proposal--mergecap exists as
part of wireshark (née ethereal). There are other tools for doing this as
well. Yes, something is lost, but something is gained. I use tools of
this ilk to merge together multiple capture files that were collected on
multiple identical, synchronized hosts that receive load-balanced
monitor traffic.

I was merely suggesting that perhaps one of the several tools available
for this purpose doesn't properly set snaplen on its output file to the
max of all input snaplens.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap files with file header snaplen < packet

2006-12-05 Thread Jefferson Ogata
Aaron Turner wrote:
> Storing (or processing) the snaplen seems to open the door for
> problems with little benefit (the cost of wasting a few thousand bytes
> or incurring the performance penalty of a realloc if the default is to
> small).  Actually, if you took the snaplen as merely a hint to the max
> stored packet size and did a realloc on demand, the problem would
> appear to be solved rather gracefully.

I see the benefit of not truncating as far as maximizing the utility of
a pcap file that is slightly degenerate. On the other hand, I see one
gnarly downside.

If the semantics were changed so that packets could be longer than snaplen,
then legacy programs that rely on snaplen as a convenient upper bound on
packet size would experience buffer overflows as soon as pcap started
returning longer packets.
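
To make the concern concrete, here's a minimal sketch (not taken from any
real program; the function is hypothetical) of the legacy pattern I mean.
The missing caplen check is exactly what would turn an over-long packet
into an overflow:

#include <pcap.h>
#include <stdlib.h>
#include <string.h>

void copy_packets(pcap_t *p)
{
    int snaplen = pcap_snapshot(p);
    unsigned char *buf = malloc(snaplen);   /* sized from snaplen... */
    struct pcap_pkthdr *hdr;
    const u_char *data;

    if (buf == NULL)
        return;
    while (pcap_next_ex(p, &hdr, &data) == 1) {
        /* ...but never checked against hdr->caplen: if libpcap returned
         * a packet longer than snaplen, this memcpy would overflow buf. */
        memcpy(buf, data, hdr->caplen);
    }
    free(buf);
}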

So I agree it would be better for libpcap never to have truncated
packets in the past, but turning off that behavior now is possibly
dangerous.

So, having said all that, I'll stay on the fence on this one.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] pcap files with file header snaplen < packet

2006-12-05 Thread Jefferson Ogata
Aaron Turner wrote:
> Perhaps I'm confused... how does an application using the libpcap API
> get access to the snaplen?   I don't see any way to do that.

int pcap_snapshot(pcap_t *p);

> Furthermore, all the libpcap functions seem to return a pointer to the
> packet buffer, and said buffer is allocated by libpcap, not the
> application.  I guess I don't see the danger.

Yes, but an application could have allocated another buffer to copy that
into based on snap. Of course it should check caplen, but there are a
lot of lousy programmers out there.

Like I say, I could go either way. But I think there is a potential problem.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Filter complexity and performance

2007-01-16 Thread Jefferson Ogata
On 2007-01-15 13:08, Dmitry Rubinstein wrote:
> We are trying to capture stuff using a relatively simple filter (on
> Linux, using Phil Wood's PCAP with ssldump on top of it). What we want
> is basically to capture the traffic to and from a specific port of a
> specific host (say, 10.0.0.1:80). So far we did it using the filter
> 'host 10.0.0.1 and port 80', but obviously that means we also see
> traffic originating from 10.0.0.1 to port 80 of other hosts. The simple
> way to prevent that would be to use a bit more elaborate filter: '(dst
> host 10.0.0.1 and dst port 80) or (src host 10.0.0.1 and src port 80)'.
> This means the filter has grown two fold in the number of clauses. What
> will be the implications upon the performance of the filtering code?
> Will we be able to capture twice as few packets (hopefully not)? I was
> hoping to kinda avoid the need to do this test if anyone has already did
> some sort of evaluation... 

If your packet filter is running in the kernel, reducing the number of
packets you match may actually improve your performance, even though
executing the filter is more work per packet, because you end up
transferring fewer packets from kernel memory to userland. If using a
slightly more complex filter eliminates 90% of the packets, you're
probably winning.

If you want to make that filter a little more efficient, add "ip and tcp
and ((dst host...". This will shorten the resulting BPF code a bit. You
can find the optimal filter with various options and ordering using
tcpdump -d to dump the BPF packet filter.
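
For example, to compare the generated code for the two candidate filters:

tcpdump -d 'host 10.0.0.1 and port 80'
tcpdump -d 'ip and tcp and ((dst host 10.0.0.1 and dst port 80) or (src host 10.0.0.1 and src port 80))'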

If you want to estimate the effect on filtering time, you can measure
the number of BPF instructions it takes to process various packets.
Based on a profile of your network traffic you could then estimate the
average number of BPF instructions spent on each packet.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Sending captured packets to a virtual nic

2007-04-22 Thread Jefferson Ogata
On 2007-04-22 16:50, Quan Doan wrote:
> Hi all,
> I have a problem. I had captured a lot packets from my box, which is a gateway
> of a LAN. Those packets are sent back to me. Now I have those packets, I would
> like to use the Ethereal for analyzing them. So, my idea is sending those
> packets to a virtual NIC and the Ethereal will get those packets on the 
> virtual
> NIC as well. I would like to do that as real-time capturing.
> Does anyone have idea and how to do that?

If you're still using ethereal, stop and switch to wireshark.

To answer your question: "wireshark -r
pcap-file-containing-captured-traffic". Or just start wireshark with no
arguments and go to the file menu to open your capture file.

You don't need a virtual NIC. RTFM.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] matching packets with tcpdump

2007-05-10 Thread Jefferson Ogata
On 2007-05-10 17:41, McDouglas wrote:
> Is it possible to match packets based on the data content? Say, for
> example match only packets with the first two bytes of the data being
> (hex) 01 1B ?

If by "the data" you mean the TCP payload, yes.

tcp[((tcp[12:1] & 0xf0) >> 2):2] = 0x011b

The high nybble of tcp[12:1] is the number of 32-bit words in the TCP
header. So tcp[12:1] >> 2 (the & 0xf0 is perhaps a no-op in the example
expression, but is there for clarity) gives you the actual size of the
TCP header. The payload thus begins at tcp[(tcp[12:1] & 0xf0) >> 2].
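
For instance, with no TCP options tcp[12:1] is 0x50, and (0x50 & 0xf0) >> 2
= 20, the standard 20-byte header, so the comparison above reads the two
bytes at tcp[20:2].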

You can do similar machinations for UDP or what have you.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


[tcpdump-workers] Packet capture performance comparison of quad-core Xeon vs Opteron

2007-06-27 Thread Jefferson Ogata
Greetings.

I've read Fabian Schneider's thesis "Performance evaluation of packet
capturing systems for high-speed networks", which compares capture
performance under a variety of test conditions and generally finds that dual-core
Opterons perform somewhat better under heavy capture load than dual-core
Xeons. But now that quad-core Xeons are available, I'm curious whether
anyone has measured capture improvement using four cores. I'd expect
four cores to do better, but I'd be interested in any empirical
results to that effect. I'm wondering, for example, how close a box with
a couple of dual-port PCIe Gb NICs (Endace or nPulse) and dual quad-core
processors could come to 4Gb/s aggregate capture speed, while writing
some packets to disk. Has anyone out there put together such a box and
come up with some performance statistics?

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Packet capture performance comparison of

2007-06-27 Thread Jefferson Ogata
Stephen Donnelly wrote:
> On Wed, 2007-06-27 at 22:00 +0000, Jefferson Ogata wrote:
>> some packets to disk. Has anyone out there put together such a box and
>> come up with some performance statistics?
[snip]
> Endace also offers disk capture appliances which provide this level of
> performance.
> 
> Unfortunately I'm not aware of any recent independent test publications.

Hmm. I wonder if that's because your company requires signing an NDA for
getting eval gear. :^/

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Capture filter help

2008-01-17 Thread Jefferson Ogata

On 2008-01-17 13:20, Moheed Moheed Ahmad wrote:

> The problem I am facing is the same interface sometimes gives the normal
> packet and sometimes with 12 bytes extra.
> So when I apply the normal capture filter those with normal packets get
> filtered out.


The length of the TCP header + options is encoded in the header in the 
upper nybble of octet 12; this nybble represents the number of longwords 
(4 octets) in the header. So if you want to match the beginning of the 
TCP payload, e.g. against 0xdeadbeef, you can do:


tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0xdeadbeef

To get the next four octets, use:

tcp[((tcp[12:1] & 0xf0) >> 2):4 + 4] = 0xdeadbeef

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] Capture filter help

2008-01-17 Thread Jefferson Ogata

On 2008-01-17 18:37, Jefferson Ogata wrote:

> To get the next four octets, use:
>
> tcp[((tcp[12:1] & 0xf0) >> 2):4 + 4] = 0xdeadbeef


Sorry, that latter case should have been:

tcp[(((tcp[12:1] & 0xf0) >> 2) + 4):4] = 0xdeadbeef

--
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] protochain, BPF_JA, and sk_chk_filter

2008-09-19 Thread Jefferson Ogata
On 2008-09-19 07:48, Guy Harris wrote:
> and 1) has no clue whether the program is being generated for the kernel
> or userland and 2) takes raw generated code, not a filter expression
> from which to generate code, as an argument, so there's no place to
> *tell* it what kind of code to generate.

There's really no need. The BPF engine can certainly be protected
against this. E.g. count each BPF instruction you execute and bail after
a threshold is reached. On bailing, you could also detach the filter, if
you want to set a very high threshold.

-- 
Jefferson Ogata <[EMAIL PROTECTED]>
NOAA Computer Incident Response Team (N-CIRT) <[EMAIL PROTECTED]>
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] MIME type for libpcap-format capture files

2008-12-11 Thread Jefferson Ogata

On 2008-12-11 20:26, Michael Richardson wrote:

"Phil" == Phil Vandry  writes:

Phil> We suggest that the type should be "application/libpcap-capture" and

application/pcap-capture

  makes more sense to me.


I agree. For one thing, another MIME type might eventually exist for 
filter specifications. It is not sufficient to describe a capture file 
as simply "pcap".


But what I think is missing is a version number. Given the talk in 
recent years about implementing the next version, I think the type 
should be application/pcap-capture-v1.


--
Jefferson Ogata 
NOAA Computer Incident Response Team (N-CIRT) 
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] MIME type for libpcap-format capture files

2008-12-12 Thread Jefferson Ogata

On 2008-12-12 16:01, Phil Vandry wrote:

> I agree with Guy, the version is not necesary. At most I could imagine
> a version parameter ("Content-Type: application/pcap-capture; version=1")
> but even that is not necesary: the version will surely be unambiguously
> distinguishable by examining the beginning of the file (magic number).


I still think current and "ng" pcap formats should be distinguished in 
MIME type name. The ng format carries a lot more metadata, and handling 
of it by applications is potentially quite different from handling of 
current pcap files. We're talking about something like the difference 
between a PCM audio file and AIFF. Applications might be interested 
specifically in metadata rather than packets, and should be able to 
indicate this preference in an Accept header.


I also think "ng" is fine for talking about the new format now, but 
someday, when we have been using ng for some time, we may well be 
looking at yet another version. Using "ng" as a notation doesn't help us 
then. Numbered, or at least absolutely named, versions indicate forethought.


But, you know, whatever. I'll be moderately surprised if ng ever comes 
to fruition, frankly.


--
Jefferson Ogata 
NOAA Computer Incident Response Team (N-CIRT) 
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] MIME type for libpcap-format capture files

2008-12-12 Thread Jefferson Ogata

On 2008-12-13 01:17, Guy Harris wrote:

> On Dec 12, 2008, at 5:02 PM, Jefferson Ogata wrote:
>> I still think current and "ng" pcap formats should be distinguished in
>> MIME type name.
>
> So do I, which is why I said it'd be something such as
> application/pcap-ng-capture.

I was responding, however, to Phil Vandry, who seemed to be indicating
that there was no need for a version designation in the MIME type, and
that versions would be distinguished only by parsing the file itself:

On 2008-12-12 16:01, Phil Vandry wrote:
> I agree with Guy, the version is not necesary. At most I could imagine
> a version parameter ("Content-Type: application/pcap-capture; version=1")
> but even that is not necesary: the version will surely be unambiguously
> distinguishable by examining the beginning of the file (magic number).


--
Jefferson Ogata 
NOAA Computer Incident Response Team (N-CIRT) 
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] How to print BOOTP/DHCP packets

2009-05-07 Thread Jefferson Ogata
On 2009-05-07 14:34, Javier Gálvez Guerrero wrote:
> I want to get the information included in bootp/dhcp packets captured
> through tcpdump. I tried adding -v, -vv and -vvv options to the issued
> command but all the information I got was like this:
> 
> pike:/home/dulceangustia/tcpdump-4.0.0# tcpdump -i ra0 port bootps -vvv
> tcpdump: listening on ra0, link-type EN10MB (Ethernet), capture size 96
> bytes

Try bumping up your snapshot size with the -s option.
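
For example (with -s 0 meaning "capture full packets"):

tcpdump -i ra0 -s 0 -vvv port bootps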

-- 
Jefferson Ogata 
NOAA Computer Incident Response Team (N-CIRT) 
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] local timestamp recovery of .cap files

2009-05-14 Thread Jefferson Ogata

On 2009-05-15 01:48, Guy Harris wrote:

> pcap-NG:
>
> http://www.winpcap.org/ntar/draft/PCAP-DumpFileFormat.html
>
> can store a 4-byte "Time zone for GMT support" value of unspecified
> interpretation (probably a seconds-from-GMT offset), although, if the
> capture crosses a standard time/summer time boundary either at the
> location where it's captured or the location at which it's read, that's
> not sufficient.  Unfortunately, there isn't a universal standard for
> specifying time zones - the Olson time zone names are a
> sort-of-standard, but not all OSes use them (many popular ones do, but
> the "most popular one", i.e. Windows, doesn't), and even for those that
> do some of them don't use the current names (Solaris is still living in
> the past there).
>
> It can also store, on a per-interface basis, the IPv4, IPv6, and MAC or
> EUI addresses for the interface, as well as storing name-to-IPv4-address
> and name-to-IPv6 address mappings.
>
> Of course, there is no *requirement* that any of that information be
> present, so you'd need to have the programs doing the capturing store
> the relevant information.


But the point of storing the mostly irrelevant zone data as metadata is 
so that it can be recorded when pcap timestamps are UTC, as they always 
should have been. I'd like to find the person who decided to store 
localtime instead of gmtime in the pcap timestamp field and smack him or 
her with a large sock filled with horse manure.


--
Jefferson Ogata 
NOAA Computer Incident Response Team (N-CIRT) 
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] local timestamp recovery of .cap files

2009-05-15 Thread Jefferson Ogata
On 2009-05-15 03:10, Guy Harris wrote:
> On May 14, 2009, at 7:20 PM, Jefferson Ogata wrote:
>> But the point of storing the mostly irrelevant zone data as metadata
>> is so that it can be recorded when pcap timestamps are UTC, as they
>> always should have been. I'd like to find the person who decided to
>> store localtime instead of gmtime in the pcap timestamp field and
>> smack him or her with a large sock filled with horse manure.
> 
> What application or applications make that mistake?

From the mere existence of this thread, I was assuming tcpdump does. :^)

This has come up before, back when we were talking about the NG format.
I guess I got confused by the current context; if pcap files are
natively UTC (which I had thought they were until this thread arose,
seeming to suggest they weren't), great. I configure all my systems in
UTC anyway, so I never have issues, and I wouldn't be able to tell
without tweaking $TZ.

Frankly, I don't understand why anyone configures a UNIX-like system in
anything other than UTC. That's what $TZ is for.

> However, even with standard pcap files, which have GMT time stamps, one
> might want to be able to display the time stamps in the time zone in
> which the capture was done rather than in the time zone in which it's
> being read; that's what the original poster wanted.  Storing time zone
> information in the file, rather than getting it out of band (e.g.,
> asking whoever sent you the file where they captured it) isn't a
> requirement, but it could be a convenience.

Storing offset from UTC as metadata can work even across DST changes by
dropping in a new offset metadata record when the zone change occurs. It
doesn't have to be global.

-- 
Jefferson Ogata 
NOAA Computer Incident Response Team (N-CIRT) 
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] local timestamp recovery of .cap files

2009-05-15 Thread Jefferson Ogata

On 2009-05-15 18:20, Guy Harris wrote:

> On May 15, 2009, at 12:43 AM, Jefferson Ogata wrote:
>> This has come up before, back when we were talking about the NG format.
>> I guess I got confused by the current context; if pcap files are
>> natively UTC (which I had thought they were until this thread arose,
>> seeming to suggest they weren't), great.
>
> They are.
>
> The issue in the thread is how to *display* the time stamps, especially
> if you want to know what *local* time, at the point of capture, a packet
> arrived, when you're reading it in a different time zone.  *That*
> requires that some form of time zone information for the point of
> capture be available, whether in the capture file or, for example, in an
> email to which the capture file was attached.  So there's a use for time
> zone information in a capture file even when the time stamps in the
> capture file are in UTC.


It seemed to me as if he was trying to go the other way 'round. I don't 
have the original message any more so I can't say why I got that impression.



>> I configure all my systems in
>> UTC anyway, so I never have issues, and I wouldn't be able to tell
>> without tweaking $TZ.
>>
>> Frankly, I don't understand why anyone configures a UNIX-like system in
>> anything other than UTC. That's what $TZ is for.
>
> There are two ways I see in which "configure a UNIX-like system for a
> particular time zone" could be read:
>
> 1) set the default time zone used by routines such as localtime()
> and mktime() to convert UTC to local time;
>
> 2) set the time zone of the value returned by
> time()/gettimeofday()/etc..


3) Set the time zone of the system to a local zone instead of UTC, e.g. 
by setting a global TZ value or copying an Olson zone file to 
/etc/localtime. This is what a lot of people do, and I don't see why.


Users who want their desktops to operate in a local zone can just set TZ 
for their environment.


One thing I hate having to deal with is syslog messages logged in a 
local time zone. There is no indication of zone in syslog messages. 
Furthermore, at DST end you can have syslog messages where it is 
impossible to determine the actual time something was logged. 
Correlating syslog messages from multiple systems is a royal PITA when 
people use local zones system-wide, and it's completely unnecessary to 
do so.


Anyway, this is off-topic. But as someone who has to correlate data from 
systems in 12 or so different time zones, it's something I care about.


--
Jefferson Ogata 
NOAA Computer Incident Response Team (N-CIRT) 
"Never try to retrieve anything from a bear."--National Park Service
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.


Re: [tcpdump-workers] reconstruct HTTP requests in custom sniffer

2010-12-28 Thread Jefferson Ogata
On 2010-12-28 17:22, Andrej van der Zee wrote:
> I am asked to write a custom sniffer with libpcap on Linux that has to
> handle a load of 50.000 packets per second. The sniffer has to detect all
> HTTP requests and dump the URI with additional information, such as request
> size and possibly response time/size. The packets, destined for the
> load-balancer, are duplicated by the switch using port-mirroring to my own
> machine. It is important that our solution is 100% non-intrusive to the web
> application being monitored.
> 
> Probably I need to access the POST data of certain HTTP requests. Because
> HTTP requests are, obviously, broken into multiple packets, is it feasible
> to reconstruct the whole HTTP request with POST data from multiple packets?
> 
> Regarding the load of 50.000 packets a second, is this expected to be a
> problem?
> 
> Any feedback is very appreciated!

See urlsnarf:

http://monkey.org/~dugsong/dsniff/

I don't think it does POST data but it may be a good starting point.

-- 
Jefferson Ogata 
National Oceanographic Data Center
You can't step into the same river twice. -- Herakleitos
-
This is the tcpdump-workers list.
Visit https://cod.sandelman.ca/ to unsubscribe.