On Wed, 6 May 2009 10:17:06 -0600 (MDT) Diana Eichert
<[email protected]> wrote:

> On Wed, 6 May 2009, openbsd misc wrote:
> 
> > On Wed, May 6, 2009 at 3:42 PM, Diana Eichert <[email protected]>
> > wrote:
> >
> >> We use physical taps at work, when I get the chance I'll take a
> >> look at the vendor.
> >>
> >> Also, you really think you can capture 10GE? Chuckle, good luck.
> >>
> >> diana
> >
> >
> >   NSA,MI(x)/GCHQ,ASIO and their vendor friends would beg to differ.
> >
> > I can't see any  black helicopters and my Tin Foil hat fits fine
> > thanks for asking.
> 
> Yeah, and I'm sure JC has equivalent resources of the acronym laden
> institutions you mention.  Do you have any idea how they capture
> packets at line rate?  I strongly doubt they are using off the shelf
> hardware, 

Well, a good number of the 10-Gbit/s Ethernet cards on the market
actually have dual 10GbE interfaces in one configuration or another.
The most typical configuration *I* have seen is the two interfaces
bonded (20-Gbit/s) into a single logical interface with fail-over
between the two physical connections. In short, to capture from a
single card, you basically need to be able to store 2-GByte/s
*somewhere*.

Yes, I'm intentionally skipping the overhead calculations and keeping
things overly generalized... --this is misc@ after all (;
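For the curious, the round figure above falls out of a one-line
conversion (a sketch; the text's 2-GByte/s working number is this raw
2.5-GByte/s rounded down, with framing overhead waved away):

```python
# Raw byte-rate for a dual-interface 10GbE card.
# Ethernet framing overhead (preamble, inter-frame gap, headers)
# is ignored here, just like in the text above.
bits_per_second = 2 * 10e9            # two bonded 10GbE interfaces
gbyte_per_second = bits_per_second / 8 / 1e9
print(gbyte_per_second)               # 2.5
```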

On the more modern Intel chipset systems (X58), your memory bandwidth
is about 64-GByte/s from RAM to proc, so if you stuff the box with
128-GByte of RAM, you can collect about a minute's worth of capture in
a sizable RAM disk. Of course, 128-GByte of 1333-MHz RAM will set you
back about $15-20 thousand USD.
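A quick sanity check on how long that RAM disk actually lasts at the
capture rate from above:

```python
# Duration of a 128-GByte RAM disk buffer at the ~2-GByte/s
# capture rate discussed above (raw figures, no overhead).
ram_gbyte = 128
capture_gbyte_per_s = 2.0
seconds = ram_gbyte / capture_gbyte_per_s
print(seconds)        # 64.0 -- about a minute of capture
```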

If you need more permanent storage (i.e. saved to "disk"), you only
have two options:

1.) A large stripe set of Intel X25-{M,E} devices. Both the X25-M and
X25-E are SATA II (3.0-Gbit/s) drives that can do about 250-MByte/s
read/write, so a RAID0 stripe set of 16 of them will get you to about
4-GByte/s.
Unfortunately, as far as *I* know, no SATA/RAID controller manufacturer
has a product that can support 16 SATA II drives *AND* has a 16-lane
PCIe Gen-1.0 interface (4-GByte/s), an 8-lane PCIe Gen-2.0 interface
(also 4-GByte/s), or a 4-lane PCIe Gen-3.0 interface (again 4-GByte/s),
so you'd be forced to use multiple controller cards and suffer a
performance hit. It would cost you about $12-16 thousand USD to build
such a beast, mainly due to the cost of the drives, but it's doable.
For your money, you'd get 2560-GByte (16 * 160-GByte) of rather
volatile storage due to the RAID0, or about 21 minutes of capture.
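The stripe-set arithmetic, spelled out (per-drive figures as assumed
in the text: ~250-MByte/s sustained, 160-GByte capacity):

```python
# Throughput, capacity, and capture time for the 16-drive X25 stripe,
# draining at the ~2-GByte/s rate of one dual-interface card.
drives = 16
per_drive_mbyte_s = 250                   # sustained read/write
per_drive_gbyte = 160                     # capacity per drive
stripe_gbyte_s = drives * per_drive_mbyte_s / 1000.0
capacity_gbyte = drives * per_drive_gbyte
minutes = capacity_gbyte / 2.0 / 60       # at 2-GByte/s capture
print(stripe_gbyte_s, capacity_gbyte, round(minutes))  # 4.0 2560 21
```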

2.) Due to the absolutely insane prices of the hardware, your other
option for non-custom hardware doesn't really qualify as "off the
shelf." The other option is to use a stripe set of Fusion-IO.com solid
state "disks" which can read/write at either 800-MByte/s (for the
320-GByte and below) or 1.5-GByte/s (for the 640-GByte and above
"double" disks) depending on the model you buy. The present capacity
limit is 640-GByte for their high end, "double" disk but that will hit
1.2-TByte by the end of the year (supposedly). Doing a stripe set
across a bunch of these is, ummm, an interesting endeavor because they
require very custom, closed source drivers and a system with 8-GByte
of RAM per device. Oh, and according to what I've been told, if you
have a power fault, you're totally screwed due to the way the mystery
driver works. Though you can buy these things off the shelf, it's a
very high shelf. The 320-GByte capacity, 800-MByte/s drives are about
$14,000 each retail, and you'd need at least four of them striped
together to keep up with a single 10-Gbit/s card (two interfaces,
20-Gbit/s).
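Where the "at least four" comes from, assuming the raw 20-Gbit/s rate
of a dual-interface card:

```python
import math

# Drives needed so the Fusion-io stripe keeps up with a dual-port
# 10GbE card at raw line rate (20 Gbit/s = 2500 MByte/s).
target_mbyte_s = 20_000 / 8           # 2500.0
per_drive_mbyte_s = 800               # the 320-GByte model
drives_needed = math.ceil(target_mbyte_s / per_drive_mbyte_s)
print(drives_needed)                  # 4
```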

Other than the three options above, I do not know of any other way to
capture 10- and/or 20-Gbit/s Ethernet at line speed with off the shelf
components. Also, I'll be the first to say the above is a bit dodgy,
but it would more or less work if one can afford it. And yep, you're
very much correct; attempting capture at these speeds is good for a
chuckle and even the three "cheap" off-the-shelf methods above are not
really affordable for home use. (;

If anyone here mistakenly thinks they can actually run *ANALYSIS* at
these speeds with off the shelf components...

        BAWAHAHAHAHAHAHAHA!

Diana, thanks for the link to the FPGA analysis stuff later in the
thread. I'll try to read it tomorrow, but the thought of someone doing
the *REQUIRED* over-clocking of an FPGA to get the needed throughput
sounds dangerously dodgy at best. Off the top of my head, other than
over-clocking a "half-baked" FPGA, I can't think of any other way they
could have done it without a serious performance impact on the link.

> but hey what would I know, I'm just a girl.
> 

CORRECTION: "... just a girl with technical super powers, and a lab that
makes everyone very, very jealous."

-- 
J.C. Roberts
