Am I misunderstanding that command? How would that help? The limit here
would be hard disk throughput, I imagine, and you're still reading/writing
the same amount of data to the drive, no? Just use your first, simplest,
command (cp -pr * /mnt/nfs/dir/) and leave it running. If you had done that
to begin with you'd be at least half done by now.
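
If you want to sanity-check that the disk really is the bottleneck before
kicking it off, a quick-and-dirty sequential write test against the mount
would be something like this (the scratch filename is just an example):

    # write 1 GB of zeros to the NFS mount and force it out to disk
    dd if=/dev/zero of=/mnt/nfs/dir/ddtest bs=1M count=1024 conv=fsync
    rm /mnt/nfs/dir/ddtest

Compare the MB/s figure dd reports against a plain read off the source
disk; the slower of the two is about the best any copy scheme will manage.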

On Tue, Apr 14, 2009 at 5:11 AM, Glyn Astill <glynast...@yahoo.co.uk> wrote:

>
> --- On Tue, 14/4/09, Frank Bonnet <f.bon...@esiee.fr> wrote:
>
> > From: Frank Bonnet <f.bon...@esiee.fr>
> > Subject: Re: massive copy
> > To: glynast...@yahoo.co.uk
> > Cc: "Debian User List" <debian-user@lists.debian.org>
> > Date: Tuesday, 14 April, 2009, 10:02 AM
> > Glyn Astill wrote:
> > > --- On Tue, 14/4/09, Frank Bonnet <f.bon...@esiee.fr> wrote:
> > >
> > >> From: Frank Bonnet <f.bon...@esiee.fr>
> > >> Subject: massive copy
> > >> To: "Debian User List"
> > <debian-user@lists.debian.org>
> > >> Date: Tuesday, 14 April, 2009, 9:14 AM
> > >> Hello
> > >>
> > >> I have to copy around 250 GB from a server to a Netapp NFS server
> > >> and I wonder which would be faster?
> > >>
> > >> first solution
> > >>
> > >> cp -pr * /mnt/nfs/dir/
> > >>
> > >> second solution (26 cp processes running in parallel)
> > >>
> > >>
> > >> for i in a b c d e f g h i j k l m n o p q r s t u v w x y z
> > >> do
> > >>   cp -pr $i* /mnt/nfs/dir/ &
> > >> done
> > >>
> > >
> > > Perhaps you could try some sort of tar pipe if you've got a nice CPU?
> > >
> > > tar cf - * | (cd /mnt/nfs/dir/ ; tar xf - )
> > >
> >
> > Yes, the machine has nice CPUs and a lot of RAM.
> > Do you think it will be faster using tar rather than cp?
> >
>
> I'd like to think it would help. If the files are quite compressible,
> perhaps you could add a 'z' in there too...
>
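
PS: if you do end up trying the tar pipe with compression, the 'z' variant
would look something like this (same paths as in your example):

    tar czf - * | ( cd /mnt/nfs/dir/ ; tar xzf - )

Bear in mind both tars run on the same machine here, so the compressed
bytes never actually cross the network; the writes to /mnt/nfs/dir/ go out
uncompressed either way, and the 'z' mostly just costs you CPU.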
