> ----- Forwarded message from Marc Lehmann <[EMAIL PROTECTED]> -----
> 
> The .zsync files can get very big for large files (where they are most
> useful). A block size of 2048 (the maximum supported by zsync) is not
> realistic in these cases, as the HTTP requests and responses are easily
> in the same size range nowadays.

Well, a block size of 2048 makes the .zsync roughly 1% of the size of
the original file - and it is the relative size that matters. But
clearly it is worth making the block size selectable - my own testing
shows wide variation in the amount of data transferred depending on the
blocksize.
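
As a back-of-envelope illustration (the ~20 bytes of checksum metadata
per block is my rough assumption, not the exact zsync figure):

    blocks      = file_size / block_size
    .zsync size ~ blocks * bytes_per_block
                = (file_size / 2048) * 20  ~  1% of file_size

Doubling the block size halves the metadata, at the price of fetching
more data for every changed byte.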

> So here is my wish: zsync should support arbitrary (power-of-two,
> although I wonder where this limit comes from - rdiff, for example,
> doesn't have it either) blocksizes,

It does - zsyncmake(1) documents the -b option. One of my test cases is
an ISO file with a .zsync with blocksize 8192. If there is a limitation
that I am not aware of, please give an example. The only limitation I do
know of is that very large block sizes don't work for compressed files.
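
For example, something along these lines should give a .zsync with
8192-byte blocks (from memory, so check the man page for exact usage):

    zsyncmake -b 8192 image.iso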

The power-of-two limitation is there to save a multiplication in the
inner loop. It may be that this is not on the critical path and that
the restriction could easily be lifted. I may try that - but I doubt
that finer granularity than powers of two helps much.
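
The kind of saving involved is roughly this (a sketch of the arithmetic
only, not the actual zsync code; the function names are mine):

    #include <stddef.h>
    #include <sys/types.h>

    /* General block size: a multiply to find where a block starts,
     * and a modulo to find the offset within a block. */
    off_t block_start(size_t id, size_t bsize)
    {
        return (off_t)id * bsize;
    }
    size_t in_block(off_t off, size_t bsize)
    {
        return (size_t)(off % bsize);
    }

    /* Power-of-two block size: the same operations reduce to a shift
     * and a mask ('shift' is log2 of the block size). */
    off_t block_start_p2(size_t id, unsigned shift)
    {
        return (off_t)id << shift;
    }
    size_t in_block_p2(off_t off, size_t bsize)
    {
        return (size_t)(off & (bsize - 1));
    }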

-- 
Colin Phipps <[EMAIL PROTECTED]>

