Hi!

Ha, I remembered right that I had seen such a problem reported before...


On 2018-01-15T12:31:04+0100, Bruno Haible <br...@clisp.org> wrote:
> I tried:
>> # tar cf - directory | ssh bruno@10.0.2.2 tar xf -
>> It hangs after transferring 1.6 GB. I.e. no more data arrives within 15 
>> minutes.

You were lucky: for me it stopped much earlier (a few dozen MiB) -- the
file I'm trying to transfer is just 460 MiB in size.  ;-)

> Found a workaround: Throttling of the bandwidth.
> - Throttling at the network adapter level [1] is not applicable to Hurd.
> - The 'throttle' program [2] is no longer available.
> - But a replacement program [3] is available.

Or use the '--bwlimit' functionality of 'rsync', or the '--rate-limit'
functionality of 'pv', which are often already packaged and readily
available.

> The command that worked for me (it limits the bandwidth to 1 MB/sec):
>
> # tar cf - directory | ~/throttle.py --bandwidth 1024576 | ssh bruno@10.0.2.2 tar xf -
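(For the record, such a pipe throttler is simple to sketch. The following is a minimal, hypothetical reimplementation of what a 'throttle.py' like the one above might do -- copy stdin to stdout while sleeping as needed to stay under a byte-per-second budget; the function name and chunk size are my own choices, not Bruno's script:)

```python
import sys
import time

def throttle(src, dst, limit_bps, chunk=65536):
    """Copy src to dst, keeping throughput at or below limit_bps bytes/sec."""
    start = time.monotonic()
    sent = 0
    while True:
        data = src.read(chunk)
        if not data:
            break
        dst.write(data)
        sent += len(data)
        # If we are ahead of schedule, sleep until the average rate
        # drops back to the limit.
        expected = sent / limit_bps
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: tar cf - directory | python3 throttle.py 1048576 | ssh host tar xf -
    throttle(sys.stdin.buffer, sys.stdout.buffer, int(sys.argv[1]))
```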

I thus tried:

    $ pv --rate-limit 1M [file] | ssh [...] 'cat > [file]'

..., which crashed after 368 MiB.  After rebooting, an 'rsync -Pa
--inplace --bwlimit=500K' was then able to complete the transfer, and the
two files' checksums do match.


> But really, this is only a workaround. It smells like a bug in ssh or the 
> Hurd.

As all networking seems to go down, maybe it's that the GNU Hurd
networking stack ('pfinet', 'netdde') gets "overwhelmed" by that much
data.


(That was on a Debian GNU/Hurd installation that's more than a year
out of date, so there's a -- slight? ;-D -- chance that this has been
fixed already.)


Regards
 Thomas

