Re: [Beowulf] GPU (was Accelerator) for data compressing

2008-10-03 Thread Ellis Wilson
Bruno Coutinho wrote:
> You need to get data from machine main memory, compress and send
> results back several times.
> The bandwidth of PCI Express today is 8 GB/s, so this is the maximum
> data rate a GPU can compress.
> You can use some tricks like computation and i/o (to main memory) paral
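The trade-off above can be sketched numerically. Only the 8 GB/s PCIe figure comes from the message; the input size, GPU compression rate, and compression ratio below are illustrative assumptions, not numbers from the thread:

```shell
# Back-of-envelope: serial vs overlapped PCIe transfer + GPU compression.
# Assumed: 16 GB input, 8 GB/s PCIe each way, a hypothetical 4 GB/s
# on-GPU compression rate, 2:1 compression ratio.
awk 'BEGIN {
  size = 16; pcie = 8; comp = 4; ratio = 2
  t_in   = size / pcie             # host -> GPU transfer
  t_comp = size / comp             # on-GPU compression
  t_out  = (size / ratio) / pcie   # GPU -> host (compressed output)
  serial  = t_in + t_comp + t_out
  overlap = (t_in > t_comp ? t_in : t_comp)
  overlap = (overlap > t_out ? overlap : t_out)  # pipeline bound by slowest stage
  printf "serial: %.1f s  overlapped: %.1f s\n", serial, overlap
}'
```

With these numbers, overlapping transfers with compute hides the PCIe round trips entirely and the run is bound by the compression stage alone.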

Re: [Beowulf] GPU (was Accelerator) for data compressing

2008-10-03 Thread Bruno Coutinho
2008/10/3 Vincent Diepeveen <[EMAIL PROTECTED]>
> On Oct 3, 2008, at 5:45 PM, Joe Landman wrote:
>> GPU as a compression engine?  Interesting ...
>> Joe
>
> For great compression, it's rather hard to get that to work.
> With a lot of RAM some clever guys manage.
> GPU has a lot of

Re: [Beowulf] Compute Node OS on Local Disk vs. Ram Disk

2008-10-03 Thread Eric Thibodeau
Bogdan Costescu wrote:
> On Wed, 1 Oct 2008, Eric Thibodeau wrote:
>> the NFS root approach only does changes on the head node and changed
>> files don't need to be propagated and are accessed on an as-needed
>> basis, this might have significant impacts on large deployments
>
> NFS-root doesn't scale too w

RE: [Beowulf] Has DDR IB gone the way of the Dodo?

2008-10-03 Thread Gilad Shainer
Prentice wrote:
> In the ongoing saga that is my new cluster, we were just told today
> that Cisco is no longer manufacturing DDR IB cables, which we, uhh,
> need.
>
> Has DDR IB gone the way of the dodo bird and been supplanted by QDR?
>
> If so, why would anyone spec a brand new cluster with

Re: [Beowulf] Has DDR IB gone the way of the Dodo?

2008-10-03 Thread Joe Landman
Greg Lindahl wrote:
> Of course, your "latency and bandwidth" benchmark won't see this
> problem, because it only uses a single core, and it sends the same
> buffer over and over without touching it.

Yup.  We have a storage test using MPI that has each node generate
random numbers, and passes the b

Re: [Beowulf] Has DDR IB gone the way of the Dodo?

2008-10-03 Thread Greg Lindahl
On Fri, Oct 03, 2008 at 10:00:05AM -0400, Mark Hahn wrote:
> qdr would be 40 Gb (raw data rate, right? so 4 GB/s before any sort
> of packet overhead, etc.)  I don't really see why that's a problem -
> even a memory-constrained current-gen Intel box has about twice that
> much memory bandwidth

[Beowulf] MonetDB's lightweight compression; Re: Accelerator for data compressing

2008-10-03 Thread Andrew Piskorski
On Fri, Oct 03, 2008 at 01:49:04PM +0200, Carsten Aulbert wrote:
> No, quite on the contrary. I would like to use a compressor within a
> pipe to increase the throughput over the network, i.e. to get around
> the ~ 120 MB/s limit.

Carsten, it is probably not directly relevant to you, but you may
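The "compressor within a pipe" idea looks roughly like this; `remotehost` and the paths are placeholders, and whether it beats raw GbE depends entirely on whether the compressor keeps up with ~120 MB/s:

```shell
# Compress inside the transfer pipe.  A fast, light compressor
# (gzip -1, or lzop/pigz where installed) can win on compressible data;
# a slow, strong one (bzip2, 7-zip) usually becomes the bottleneck.
tar cf - /data/run42 | gzip -1 | ssh remotehost 'gzip -d | tar xf - -C /scratch'
```

The only knob that matters here is the compressor's throughput versus the wire: if it emits compressed bytes slower than the link can carry raw bytes, the pipe is a net loss.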

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Greg Lindahl
On Fri, Oct 03, 2008 at 02:17:52AM -0700, Bill Broadley wrote:
> Er, that makes no sense to me.  You aren't going to end up with a
> smaller file by encoding a file less efficiently.

I often find that floating-point data doesn't compress much, but that
ASCII representations of the same data comp
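The effect is easy to reproduce. The demo below uses /dev/urandom as an extreme stand-in for high-entropy binary float dumps (real float arrays have more structure), and awk-generated numeric text as the ASCII rendering:

```shell
# High-entropy binary barely shrinks under gzip; ASCII renderings of
# numbers shrink a lot, because digits and repeated formatting are
# highly redundant.
head -c 1000000 /dev/urandom > binary.dat
seq 1 100000 | awk '{ printf "%.6f\n", $1 * 0.001 }' > ascii.dat

gzip -c binary.dat | wc -c    # close to the original 1000000 bytes
gzip -c ascii.dat  | wc -c    # a small fraction of ascii.dat's size
```

The ASCII file is bigger before compression, but gzip recovers most of that redundancy; the random binary file gains essentially nothing.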

Re: [Beowulf] Streamlined standard Linux installation

2008-10-03 Thread Bruno Coutinho
2008/10/3 Alan Ward <[EMAIL PROTECTED]>
> Hi.
>
> Several days ago there was a thread on using a streamlined standard
> Linux distribution. These days I have been fiddling with the new
> Debian Lenny version (still beta, but it seems not for long).
>
> A "no frills, no X" install with vi, aptitu

Re: [Beowulf] Has DDR IB gone the way of the Dodo?

2008-10-03 Thread Scott Atchley
On Oct 3, 2008, at 2:24 PM, Bill Broadley wrote:
> QDR over fiber should be "reasonably priced", here's hoping that the
> days of Myrinet 250MB/sec optical cables will return.
> Corrections/comments welcome.

I am not in sales and I have no access to pricing besides our list
prices, but I am tol

Re: [Beowulf] Has DDR IB gone the way of the Dodo?

2008-10-03 Thread Bill Broadley
Alan Louis Scheinine wrote:
> NiftyOMPI Mitch wrote
>> QDR is interesting... in all likelihood the QDR game will be optical
>> for any link further away than a single rack.  Once IB goes optical
>> there will be a lot of reason to install IB in machine rooms and
>> campus sites that are just out of

[Beowulf] Streamlined standard Linux installation

2008-10-03 Thread Alan Ward
Hi.

Several days ago there was a thread on using a streamlined standard
Linux distribution. These days I have been fiddling with the new Debian
Lenny version (still beta, but it seems not for long).

A "no frills, no X" install with vi, aptitude and a few other rescue
tools fits comfortably i

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Prentice Bisbal
Carsten Aulbert wrote:
> Hi all
>
> Bill Broadley wrote:
>> Another example:
>> http://bbs.archlinux.org/viewtopic.php?t=11670
>>
>> 7zip compress:  19:41
>> Bzip2 compress:  8:56
>> Gzip compress:   3:00
>>
>> Again 7zip is a factor of 6 and change slower than gzip.

[Beowulf] Re: Accelerator for data compressing

2008-10-03 Thread David Mathog
Carsten Aulbert <[EMAIL PROTECTED]> wrote
> We have a Gbit network, i.e. for us this test is a null test, since it
> takes 7-zip close to 5 minutes to compress the data set of 311 MB
> which we could blow over the network in less than 5 seconds, i.e. in
> this case tar would be our favorite ;)

Ma

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Vincent Diepeveen
Well, that's obvious what happens there; that's not really new. Already
the good old pkzip stored files it couldn't compress as uncompressed.
Note that the types of files you mention can still be compressed quite
well with PPM, which really is nothing new anymore. All the old zippers
are LZ77/Huf

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Joe Landman
Vincent Diepeveen wrote:
> The question is Joe,
> Why are you storing it uncompressed?

Let's see now... Riddle me this Vincent: What happens when you compress
incompressible binary data?  Like RPMs, or .debs?  Or PDFs with images,
or images which have already been compressed?  Any idea?

Re: [Beowulf] GPU (was Accelerator) for data compressing

2008-10-03 Thread Vincent Diepeveen
On Oct 3, 2008, at 5:45 PM, Joe Landman wrote:
> GPU as a compression engine?  Interesting ...
> Joe

For great compression, it's rather hard to get that to work. With a lot
of RAM some clever guys manage. GPU has a lot of stream processors, yet
little RAM per stream processor. Additionally they

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Vincent Diepeveen
The question is Joe,

Why are you storing it uncompressed?

Vincent

On Oct 3, 2008, at 5:45 PM, Joe Landman wrote:
> Carsten Aulbert wrote:
>> If 7-zip can only compress data at a rate of less than say 5 MB/s
>> (input data) I can much much faster copy the data over uncompressed
>> regardless of how

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Joe Landman
Carsten Aulbert wrote:
> If 7-zip can only compress data at a rate of less than say 5 MB/s
> (input data) I can much much faster copy the data over uncompressed
> regardless of how many unused cores I have in the system. Exactly for
> these cases I would like to use all cores available to compress the d
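Carsten's 5 MB/s point has a simple break-even calculation behind it. The 311 MB size and ~120 MB/s GbE rate come from the thread; the 2.5:1 compression ratio is an illustrative assumption:

```shell
# Is compress-then-send faster than sending raw over GbE?
# size/net from the thread; comp rate and ratio are assumptions.
awk 'BEGIN {
  size = 311; net = 120        # MB, MB/s (GbE in practice)
  comp = 5;   ratio = 2.5      # MB/s compressor input rate, ratio

  raw  = size / net                          # send uncompressed
  zipd = size / comp + (size / ratio) / net  # compress, then send
  printf "raw: %.1f s  compressed: %.1f s\n", raw, zipd
}'
```

At 5 MB/s the compressor dominates completely, which is exactly why the reply favors either a much faster (parallel) compressor or no compression at all.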

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Vincent Diepeveen
Hi Carsten,

In your example the only thing that seems to matter to you is
*collecting* data speed; in short, the realtime compression speed that
tape streamers can get, to give one example. In your example you need
to compress stuff each time. That's not realistic, however. I'll give
yo

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Carsten Aulbert
Hi Vincent,

Vincent Diepeveen wrote:
> Ah you googled 2 seconds and found some oldie homepage.

Actually no, I just looked at my freshmeat watchlist of items still to
look at :)

> Look especially at compressed sizes and decompression times.

Yeah, I'm currently looking at http://www.maximumcom

Re: [Beowulf] Has DDR IB gone the way of the Dodo?

2008-10-03 Thread Mark Hahn
> Has DDR IB gone the way of the dodo bird and been supplanted by QDR?
>
> I don't think front side busses are fast enough for QDR yet.

qdr would be 40 Gb (raw data rate, right? so 4 GB/s before any sort of
packet overhead, etc.)  I don't really see why that's a problem - even
a memory-constrained
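One common way to reconcile Mark's "40 Gb -> 4 GB/s" arithmetic is to account for IB's 8b/10b line coding, which takes the signalling rate down to the usable data rate before any packet overhead:

```shell
# InfiniBand 4x link rates: signalling rate, data rate after 8b/10b
# line coding (8 data bits per 10 line bits), and the same in GB/s.
awk 'BEGIN {
  split("SDR DDR QDR", name); split("10 20 40", signal)
  for (i = 1; i <= 3; i++) {
    data_gb = signal[i] * 8 / 10   # strip 8b/10b coding overhead
    printf "%s: %d Gb/s signal -> %d Gb/s data = %d GB/s\n", name[i], signal[i], data_gb, data_gb / 8
  }
}'
```

So QDR's 4 GB/s per direction is the post-encoding figure, which is what gets compared against host memory bandwidth in the message above.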

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Vincent Diepeveen
Hi Carsten,

Ah you googled 2 seconds and found some oldie homepage. Try this
homepage: www.maximumcompression.com. Far better testing over there.
Note that it's the same testset there that gets compressed a lot. In
real life, database type data has all kinds of patterns which PPM type
co

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Vincent Diepeveen
Hi Bill,

7-zip is of course faster than bzip2 and a bit slower than gzip. Thing
is that after the files have been compressed you need to do less i/o;
the big bottleneck in most systems is the bandwidth from and to i/o. A
single disk at the start of the 90s did several megabytes per second,
up to 10 MB/

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Bill Broadley
For uncompressed TIFF files this might be of use:
http://optipng.sourceforge.net/features.html
It seems to mention the lossless compression of TIFF files.

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Carsten Aulbert
Hi all

Bill Broadley wrote:
> Another example:
> http://bbs.archlinux.org/viewtopic.php?t=11670
>
> 7zip compress:  19:41
> Bzip2 compress:  8:56
> Gzip compress:   3:00
>
> Again 7zip is a factor of 6 and change slower than gzip.

Have you looked into threaded/parallel bzip2? freshmeat has a
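The parallel compressors Carsten is hinting at (pbzip2 for bzip2, pigz for gzip) are drop-in replacements where installed. The same block-parallel idea can be sketched with only standard tools; `bigfile` is a placeholder name:

```shell
# Poor man's parallel gzip: split the input into chunks, compress the
# chunks in parallel, and concatenate.  Concatenated gzip members form
# a valid stream that gunzip reads back as one file.
split -b 64m -d bigfile chunk.
ls chunk.* | xargs -P "$(nproc)" -n 1 gzip
cat chunk.*.gz > bigfile.gz
gzip -t bigfile.gz    # integrity check
rm chunk.*.gz
```

This is essentially what pbzip2 does internally, at some cost in ratio (each chunk is compressed independently, so cross-chunk redundancy is lost).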

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Bill Broadley
Vincent Diepeveen wrote:
> Bzip2, gzip,
> Why do you guys keep quoting those total outdated compressors :)

Path of least resistance, not to mention python bindings.

> there is 7-zip for linux, it's open source and also part of LZMA. On
> average remnants are 2x smaller than what gzip/bzip2 is doing

Re: [Beowulf] Accelerator for data compressing

2008-10-03 Thread Vincent Diepeveen
Bzip2, gzip,

Why do you guys keep quoting those totally outdated compressors :)

There is 7-zip for linux; it's open source and based on LZMA. On
average its output is 2x smaller than what gzip/bzip2 gives you (so
bzip2/gzip is a factor 2 worse). 7-zip also works parallel, not sure
whet
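The ratio gap Vincent claims is easy to spot-check without 7-zip itself: xz implements the same LZMA family, so it gives a rough feel on a given input. The synthetic sample below is just for illustration; the actual gap varies a lot with the data:

```shell
# gzip vs LZMA (via xz) on the same input; highly structured text like
# this favors LZMA's larger dictionary and range coder.
seq 1 200000 > sample.txt
gzip -9 -c sample.txt | wc -c
xz   -9 -c sample.txt | wc -c    # usually clearly smaller here
```

On data with little long-range redundancy (already-compressed files, random binary) both land near the input size, which is Joe's counterpoint earlier in the thread.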