On Sat 14 Oct 2006, Fabrice Lorrain wrote:

> Any progress on this bug?

I'm afraid not...
I'll talk to the upstream maintainer to see what possibilities there are
for extending the protocol to handle this.

> The way rsync handles sparse files is suboptimal. It leaves any
> backup policy based on rsync open to a trivial DoS with things like the
> following:
> 
> dd if=/dev/zero of=bigfake bs=1k count=1 seek=2000000000
> 
> rsync -e ssh -avS bigfake [EMAIL PROTECTED]:/tmp
> 
> At that point you wait for 2TB of useless zeros to be transferred
> between the src-server and the backup_server... Annoying.

I understand...

> I've been bitten by this behaviour twice already. Students botching some
> seek/lseek maths while writing to files... We ended up with several 100GB
> files to transfer during the backup at night...

Using -z will speed things up quite a lot, as the zeroes compress very
well. However, a better workaround in the meantime is probably to exclude
(student) files that are larger than a reasonable size via the
--max-size option.
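
For example (the destination, source path and the 10G cutoff below are
only illustrative placeholders, not values from the report above; as far
as I recall --max-size needs rsync 2.6.4 or later):

  # compress the stream: long runs of zeroes cost almost nothing on the wire
  rsync -e ssh -avSz bigfake user@backup_server:/tmp

  # or skip implausibly large (likely sparse/broken) files altogether
  rsync -e ssh -avS --max-size=10G /home/students/ user@backup_server:/backup/students/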


Paul Slootman

