DSA key id FFF1 seems fake and not listed on pgp.mit.edu.
% gpg --verify parallel-20110322.tar.bz2.sig parallel-20110322.tar.bz2
gpg: Signature made Mon Mar 21 19:03:34 2011 CDT using DSA key ID FFF1
gpg: can't open `/home/cmgreen/.gnupg/pubring.gpg'
gpg: keydb_search failed: file open error
On Mon, Mar 28, 2011 at 9:02 PM, Chris Green wrote:
> DSA key id FFF1 seems fake and not listed on pgp.mit.edu.
>
> % gpg --verify parallel-20110322.tar.bz2.sig parallel-20110322.tar.bz2
> gpg: Signature made Mon Mar 21 19:03:34 2011 CDT using DSA key ID FFF1
> gpg: can't open `/home/cmgreen/.gnupg/pubring.gpg'
I have a large gzipped tar archive containing many small files; just
untarring it takes a lot of time and space. I'd like to be able to
process each file in the archive, ideally without untarring the whole
thing first, and I'd like to process several files in parallel. Is
there a recipe for this?
On Tue, 29 Mar 2011, Jay Hacker wrote:
I have a large gzipped tar archive containing many small files; just
untarring it takes a lot of time and space. I'd like to be able to
process each file in the archive, ideally without untarring the
whole thing first, and I'd like to process several files in parallel.
Is there a recipe for this?
On Tue, 29 Mar 2011, Benjamin R. Haskell wrote:
On Tue, 29 Mar 2011, Hans Schou wrote:
tar xvf big-file.tar.gz | parallel echo "Proc this file {}"
Hans, you left off the 'z' in 'tar zxvf':
No, it's a new feature in tar (Ubuntu 10). If you specify a file, tar
will test the compression type itself.
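The auto-detection Hans mentions is easy to check directly. A quick demonstration with GNU tar (the file and directory names here are my own, not from the thread):

```shell
# Create a gzip-compressed archive, then extract it *without* the z flag.
# Modern GNU tar sniffs the compression format when reading from a file.
mkdir -p autodetect-demo && echo hi > autodetect-demo/x.txt
tar -czf demo.tar.gz autodetect-demo
rm -r autodetect-demo

tar -xf demo.tar.gz          # no -z: tar detects gzip on its own
cat autodetect-demo/x.txt    # the extracted file is back
```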
Hans,
That is a great idea. However, can I be sure the file is completely
written to disk before tar prints the filename? It seems to print the
filename first. Could that not lead to a race condition, or the
consumer reaching the "end" of the file before tar has finished
writing it?
On Tue, 29 Mar 2011, Jay Hacker wrote:
Hans,
That is a great idea. However, can I be sure the file is completely
written to disk before tar prints the filename?
You will tell us.
Write a script which handles the job. Let it start like this:
#!/bin/bash
S1=$( stat -c %s $1 )
sleep 1
S2=$( stat -c %s $1 )
On Tue, Mar 29, 2011 at 10:14 PM, Jay Hacker wrote:
> On Tue, Mar 29, 2011 at 11:20 AM, Hans Schou wrote:
>> On Tue, 29 Mar 2011, Jay Hacker wrote:
>>
>>> I have a large gzipped tar archive containing many small files; just
>>> untarring it takes a lot of time and space. I'd like to be able to p
Hmmm
use tar -t to list the filenames, pipe that into parallel to call tar
again to extract just that file, and pipe it to some other command:
tar -t big-file.tar.gz | parallel tar -f big-file.tar.gz - '|'
someCommandThatReadsFromStdIn
Malcolm Cook
Stowers Institute for Medical Research - Bio
On Tue, 29 Mar 2011, Ole Tange wrote:
While I loved Hans' idea, it does indeed have a race condition. This
should run 'ls -l' on each file after decompressing and clearly fails
now and then:
$ tar xvf ../i.tgz | parallel ls -l > ls-l
ls: cannot access 1792: No such file or directory
But you co
ooops, more like:
tar -t big-file.tar.gz | parallel tar -O -x -f big-file.tar.gz '|'
someCommandThatReadsFromStdIn
Malcolm Cook
Stowers Institute for Medical Research - Bioinformatics
Kansas City, Missouri USA
On Tue, Mar 29, 2011 at 11:41 PM, Hans Schou wrote:
> On Tue, 29 Mar 2011, Ole Tange wrote:
>
>> While I loved Hans' idea, it does indeed have a race condition. This
>> should run 'ls -l' on each file after decompressing and clearly fails
>> now and then:
>>
>> $ tar xvf ../i.tgz | parallel ls -l > ls-l
On Tue, Mar 29, 2011 at 11:41 PM, Cook, Malcolm wrote:
> ooops, more like:
>
> tar -t big-file.tar.gz | parallel tar -O -x -f big-file.tar.gz '|'
> someCommandThatReadsFromStdIn
You probably mean:
tar -tf big-file.tar.gz | parallel tar -O -x -f big-file.tar.gz {}
'|' someCommandThatReadsFromStdIn