>
> 2D finite difference can be comm intensive if the mesh is too small for each
> processor to have a fair amount of work to do before needing the neighboring
> values from a "far" node.
>
Actually it seems that with the VX50 the same node may be the "far" node.
At least that's what I see from the NU…
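The quoted point is surface-to-volume arithmetic: per-tile work in a 2D stencil grows with the tile area while halo traffic grows only with its edge. A minimal sketch (the function name and the 5-flops-per-point figure are my own illustrative choices, not from the thread):

    # Compute-to-communication ratio for one step of a 2D 5-point stencil
    # on an n x n tile: interior work scales as n^2, halo exchange as 4n.
    def stencil_ratio(n, flops_per_point=5, bytes_per_value=8):
        work = n * n * flops_per_point      # updates done locally
        halo = 4 * n * bytes_per_value      # one-cell halo on four edges
        return work / halo

    for n in (32, 256, 2048):
        print(f"{n}x{n} tile: {stencil_ratio(n):.1f} flops per halo byte")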
Dmitri Chubarov wrote:
Hello,
we have a VX50 down here as well. We have observed very different
scalability across applications. With an OpenMP molecular dynamics
code we got over a 14-fold speedup, while on a 2D finite difference
scheme I could not get much beyond 3-fold.
I've read the entire thread up to now and noticed no one mentioned the
parallel bzip2 (pbzip2 http://compression.ca/pbzip2/) utility. Although
not strictly compatible with bzip2 when compressing large files, it's
still a valid compressor imho. More below...
Xu, Jerry wrote:
Hello,
Currently I generate nearly one TB of data every few days …
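For what it's worth, a minimal sketch of driving pbzip2 from Python (assumes pbzip2 is installed and on PATH; -p sets the thread count per pbzip2's documented options, and archive.tar is a placeholder filename):

    import subprocess

    # Compress with several cores; like bzip2, pbzip2 writes
    # archive.tar.bz2 alongside the input.
    def pbzip2_compress(path, threads=8):
        subprocess.run(["pbzip2", f"-p{threads}", "-9", path], check=True)

    pbzip2_compress("archive.tar")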
On Fri, Oct 03, 2008 at 12:58:16PM -0700, Greg Lindahl wrote:
> On Fri, Oct 03, 2008 at 02:17:52AM -0700, Bill Broadley wrote:
>
> > Er, that makes no sense to me. You aren't going to end up with a smaller
> > file by encoding a file less efficiently.
>
> I often find that floating-point data doesn't compress much, …
On Fri, Oct 03, 2008 at 02:17:52AM -0700, Bill Broadley wrote:
> Er, that makes no sense to me. You aren't going to end up with a smaller
> file by encoding a file less efficiently.
I often find that floating-point data doesn't compress much, but that
ASCII representations of the same data compress much better.
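That claim is easy to spot-check with the standard library (a minimal sketch; the random data and 6-decimal formatting are illustrative, and the rounding to 6 decimals is itself lossy, which is part of why the text form wins):

    import random
    import struct
    import zlib

    random.seed(0)
    values = [random.random() for _ in range(100_000)]

    # The same numbers as raw IEEE doubles and as decimal text.
    binary = struct.pack(f"{len(values)}d", *values)
    text = "\n".join(f"{v:.6f}" for v in values).encode()

    for label, blob in (("binary", binary), ("ascii", text)):
        out = zlib.compress(blob, 9)
        print(f"{label}: {len(blob)} -> {len(out)} bytes")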
Carsten Aulbert wrote:
> Hi all
>
> Bill Broadley wrote:
>> Another example:
>> http://bbs.archlinux.org/viewtopic.php?t=11670
>>
>> 7zip compress: 19:41
>> Bzip2 compress: 8:56
>> Gzip compress: 3:00
>>
>> Again 7zip is a factor of 6 and change slower than gzip.
Well, it's obvious what happens there; that's not really new. Even the
good old pkzip stored files it couldn't compress as-is, uncompressed.
Note that the types of files you mention can still be compressed quite
well with PPM, which is nothing new anymore.
All the old zippers are LZ77/Huffman …
Vincent Diepeveen wrote:
> The question is Joe,
> Why are you storing it uncompressed?

Let's see now...

Riddle me this, Vincent: what happens when you compress incompressible
binary data? Like RPMs, or .debs? Or PDFs with images, or images
which have already been compressed?

Any idea?
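The answer is easy to demonstrate (a minimal sketch; os.urandom stands in for already-compressed payloads like JPEGs or .deb contents, since both look statistically random):

    import os
    import zlib

    # A second compression pass over high-entropy data cannot shrink it;
    # the container overhead makes the output slightly *larger*.
    blob = os.urandom(1_000_000)
    out = zlib.compress(blob, 9)
    print(f"{len(blob)} -> {len(out)} bytes")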
--
The question is Joe,
Why are you storing it uncompressed?
Vincent
On Oct 3, 2008, at 5:45 PM, Joe Landman wrote:
> Carsten Aulbert wrote:
>> If 7-zip can only compress data at a rate of less than, say, 5 MB/s
>> (input data), I can copy the data over uncompressed much, much faster,
>> regardless of how …
Carsten Aulbert wrote:
> If 7-zip can only compress data at a rate of less than, say, 5 MB/s
> (input data), I can copy the data over uncompressed much, much faster,
> regardless of how many unused cores I have in the system. Exactly for
> these cases I would like to use all available cores to compress the data …
Hi Carsten,

In your example the only thing that seems to matter to you is the
*collecting* data speed; in short, the real-time compression speed that
tape streamers can get, to give one example. In your example you need
to compress fresh data every time. That's not realistic, however. I'll
give you …
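Carsten's 5 MB/s threshold can be made explicit (a minimal sketch, assuming the compress and send steps run serially; a pipelined compressor overlapping with the transfer would shift the break-even point):

    # Is compress-then-send faster than sending raw, per MB of input?
    def compress_pays_off(link_mb_s, comp_mb_s, ratio):
        """ratio = compressed_size / original_size, e.g. 0.5 for 2:1."""
        t_raw = 1.0 / link_mb_s                       # send as-is
        t_comp = 1.0 / comp_mb_s + ratio / link_mb_s  # squeeze, send less
        return t_comp < t_raw

    # A gigabit-class link against a 5 MB/s compressor at 2:1.
    print(compress_pays_off(link_mb_s=100, comp_mb_s=5, ratio=0.5))  # False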
Hi Vincent,
Vincent Diepeveen wrote:
> Ah, you googled for 2 seconds and found some oldie homepage.
Actually no, I just looked at my freshmeat watchlist of items still to
look at :)
>
> Look especially at compressed sizes and decompression times.
Yeah, I'm currently looking at
http://www.maximumcompression.com …
Hi Carsten,

Ah, you googled for 2 seconds and found some oldie homepage.

Try this homepage: www.maximumcompression.com

Far better testing over there. Note that it's the same test set there
that gets compressed a lot. In real life, database-type data has all
kinds of patterns which PPM-type compressors …
Hi Bill,

7-zip is of course faster than bzip2 and a bit slower than gzip. Thing
is, after the files have been compressed you need to do less I/O; the
big bottleneck in most systems is the bandwidth to and from I/O. A
single disk at the start of the 90s did several megabytes per second,
up to 10 MB/s …
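Vincent's I/O argument in one line (a minimal sketch; the disk and decompressor speeds are illustrative): data stored at 2:1 turns a 10 MB/s 90s-era disk into an effective 20 MB/s one, provided the decompressor keeps up.

    # Effective read bandwidth when files are stored compressed.
    def effective_read_mb_s(disk_mb_s, ratio, decomp_out_mb_s):
        """ratio = compressed/original size; decompressor output caps it."""
        return min(disk_mb_s / ratio, decomp_out_mb_s)

    print(effective_read_mb_s(10, 0.5, 200))  # 20.0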
For uncompressed TIFF files this might be of use:
http://optipng.sourceforge.net/features.html
It seems to mention the lossless compression of TIFF files.
Hi all
Bill Broadley wrote:
>
> Another example:
> http://bbs.archlinux.org/viewtopic.php?t=11670
>
> 7zip compress: 19:41
> Bzip2 compress: 8:56
> Gzip compress: 3:00
>
> Again 7zip is a factor of 6 and change slower than gzip.
Have you looked into threaded/parallel bzip2?
freshmeat has a …
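The reason bzip2 parallelizes so cleanly is that it compresses independent blocks. A toy sketch of the pbzip2 idea (chunk size and worker count are arbitrary; real pbzip2 is far more careful): standard bunzip2 decodes concatenated bzip2 streams, which is what makes the trick work.

    import bz2
    from concurrent.futures import ProcessPoolExecutor

    # Split the input, compress chunks in parallel processes, and join
    # the resulting bzip2 streams end to end.
    def parallel_bzip2(data, chunk=900_000, workers=4):
        chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return b"".join(pool.map(bz2.compress, chunks))

    if __name__ == "__main__":
        out = parallel_bzip2(b"some repetitive payload " * 400_000)
        print(len(out), "compressed bytes")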
Vincent Diepeveen wrote:
> Bzip2, gzip,
> Why do you guys keep quoting those totally outdated compressors :)

Path of least resistance, not to mention Python bindings.

> There is 7-zip for Linux; it's open source and based on LZMA. On
> average the results are 2x smaller than what gzip/bzip2 give you.
Bzip2, gzip,

Why do you guys keep quoting those totally outdated compressors :)

There is 7-zip for Linux; it's open source and based on LZMA. On
average the results are 2x smaller than what gzip/bzip2 produce for you
(so bzip2/gzip is a factor of 2 worse). 7-zip also works in parallel,
not sure whether …
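Python ships bindings for all three families, so the size claim is easy to spot-check (a minimal sketch; the synthetic word soup below is illustrative, and real ratios depend entirely on the input):

    import bz2
    import lzma
    import random
    import zlib

    random.seed(1)
    words = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]
    data = " ".join(random.choice(words) for _ in range(200_000)).encode()

    for name, compress in (("gzip/zlib", zlib.compress),
                           ("bzip2", bz2.compress),
                           ("lzma/7z", lzma.compress)):
        print(f"{name}: {len(data)} -> {len(compress(data))} bytes")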
2008/10/2 Bill Broadley <[EMAIL PROTECTED]>
<...>
> Why hardware? I have some python code that managed 10 MB/sec per CPU
> (or 80 MB on 8 CPUs if you prefer) that compresses with zlib, hashes
> with sha256, and encrypts with AES (256-bit key). Assuming the
> compression you want isn't substantial …

Currently I generate nearly one TB of data every few days and I need to pass it …
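Bill's actual code isn't posted in the thread, so the following is only a sketch of such a zlib + sha256 + AES pipeline under my own assumptions: AES-256-GCM from the third-party cryptography package (pip install cryptography), a 12-byte nonce prepended to each sealed chunk, and pack_chunk is a name I made up.

    import hashlib
    import os
    import zlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Compress, fingerprint, then encrypt one chunk of a data stream.
    def pack_chunk(chunk, key):
        digest = hashlib.sha256(chunk).hexdigest()  # hash of the original
        squeezed = zlib.compress(chunk, 6)
        nonce = os.urandom(12)
        sealed = nonce + AESGCM(key).encrypt(nonce, squeezed, None)
        return sealed, digest

    key = AESGCM.generate_key(bit_length=256)
    blob, sha = pack_chunk(b"example payload " * 4096, key)
    print(len(blob), sha)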
Bill's right - 6 MB/s is really not much to ask from even a complex WAN.
I think the first thing you should do is find the bottleneck. To me it
sounds like you have a sort of ropey path with a 100 Mbps hop somewhere.
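The 6 MB/s figure is just the arithmetic of the original post (reading "every few days" as two days, which is my assumption):

    # 1 TB every two days, expressed as a sustained rate.
    bytes_total = 1e12
    seconds = 2 * 24 * 3600
    print(f"{bytes_total / seconds / 1e6:.1f} MB/s")  # ~5.8 MB/s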
Xu, Jerry wrote:
> Hello,
> Currently I generate nearly one TB of data every few days and I need to
> pass it along the enterprise network to the storage center attached to
> my HPC system. I am thinking about compressing it (mostly TIFF format
> image data)

TIFF uncompressed, or TIFF compressed files? If uncompressed …
Hi Jerry,

I think HDF5 can help you in some way: http://www.hdfgroup.org/HDF5/

Rodrigo

2008/10/2 Xu, Jerry <[EMAIL PROTECTED]>
> Hello,
>
> Currently I generate nearly one TB of data every few days and I need to
> pass it along the enterprise network to the storage center attached to
> my …
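What the HDF5 suggestion buys you is chunked storage with transparent compression (a minimal sketch using the h5py bindings; the file name, dataset name, shape, and the zeros-filled stand-in for an image stack are all placeholders):

    import h5py
    import numpy as np

    # Store an image stack chunked per frame, gzip-compressed on write.
    frames = np.zeros((10, 1024, 1024), dtype=np.uint16)  # stand-in TIFFs
    with h5py.File("images.h5", "w") as f:
        f.create_dataset("frames", data=frames,
                         chunks=(1, 1024, 1024),
                         compression="gzip", compression_opts=4,
                         shuffle=True)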
Hi,

On 02.10.2008 at 22:09, Xu, Jerry wrote:
> Currently I generate nearly one TB of data every few days and I need to
> pass it along the enterprise network to the storage center attached to
> my HPC system. I am thinking about compressing it (mostly TIFF format
> image data)

Is it plain TIFF or already …
On Thu, Oct 02, 2008 at 04:09:36PM -0400, Xu, Jerry wrote:
>
> Currently I generate nearly one TB of data every few days and I need to
> pass it along the enterprise network to the storage center attached to
> my HPC system. I am thinking about compressing it (mostly TIFF format
> image data) as much as I can, …
On Thu, Oct 02, 2008 at 05:40:31PM -0400, Joe Landman wrote:
> I have heard of some "xml accelerators" in the past (back when XML was
> considered a good buzzword) that did on-the-fly compression.

Well, given how wordy the tags are, simply compressing those is
inexpensive and is a big win …
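The wordy-tags point is easy to see with the standard library (a minimal sketch; the record layout is made up):

    import zlib

    # Markup is extremely redundant, so generic compression recovers
    # most of the tag overhead cheaply.
    record = "<sample><id>%d</id><value>%.4f</value></sample>\n"
    xml = "".join(record % (i, i / 7.0) for i in range(50_000)).encode()
    packed = zlib.compress(xml, 6)
    print(f"{len(xml)} -> {len(packed)} bytes")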
Xu, Jerry wrote:
> Hello,
> Currently I generate nearly one TB of data every few days and I need to
> pass it along the enterprise network to the storage center attached to
> my HPC system. I am thinking about compressing it (mostly TIFF format
> image data) as much as I can, as fast as I can before I send it
> crossing the network …
Hello,

Currently I generate nearly one TB of data every few days and I need to
pass it along the enterprise network to the storage center attached to my
HPC system. I am thinking about compressing it (mostly TIFF format image
data) as much as I can, as fast as I can before I send it crossing the
network ...