<sarcasm> I just checked my calendar, and it says November 19, 2017.  So I am 
going to say: 21st century! </sarcasm>

In all seriousness, I just ran some tests on my servers to see if SSH is still 
the bottleneck on rsync.  These servers have dual 10G NICs (linux bond), 3.6GHz 
CPU, and 32G RAM.  I found some interesting data points:

* Running the command "pv /dev/zero | ssh $REMOTE_SERVER 'cat > /dev/null'", I 
was able to get about 235MB/sec between two servers, with ssh pegged at 100% 
CPU usage. 

* I mounted an NFS share from one server on the other and was able to hit 
450MB/sec with rsync across the NFS mount (rsync pegged at 100% CPU).  

* Running the command "rsync --progress root@<remote_server>:/export/tmp/file1 
/mnt/ramdisk/file1" gave me 130MB/sec, with sshd pegged at 100% CPU and rsync 
at 22% CPU.
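
For reference, the three measurements above came from roughly the following
commands (a sketch; $REMOTE_SERVER, /export, and the mount point /mnt/nfs are
placeholders from my setup, adjust to yours):

```shell
# 1. Raw ssh throughput: stream zeroes through ssh and discard them remotely.
#    pv reports the local-side transfer rate; interrupt with Ctrl-C once
#    the rate stabilizes.
pv /dev/zero | ssh "$REMOTE_SERVER" 'cat > /dev/null'

# 2. rsync across an NFS mount (no ssh in the data path).
mount -t nfs "$REMOTE_SERVER":/export /mnt/nfs
rsync --progress /mnt/nfs/tmp/file1 /mnt/ramdisk/file1

# 3. rsync over ssh, pulling from the remote server directly.
rsync --progress "root@$REMOTE_SERVER:/export/tmp/file1" /mnt/ramdisk/file1
```

Writing to a ramdisk on the receiving side keeps local disk I/O out of the
numbers, so the bottleneck is the network path plus whatever is doing the
encryption.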


In the end, rsync over NFS (using 10G networking) is much faster than rsync 
over SSH in my environment.  Maybe your environment is different, or you use 
different ciphers?
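
If you want to see how much the cipher choice matters on your hardware, you
can force one with "ssh -c" and compare rates.  A sketch (the cipher names
below exist in recent OpenSSH, but check "ssh -Q cipher" for what your build
actually supports):

```shell
# List the ciphers the local ssh client supports.
ssh -Q cipher

# Push a fixed 1G of zeroes through ssh with each cipher and compare the
# rate pv reports. $REMOTE_SERVER is a placeholder.
for c in aes128-ctr aes128-gcm@openssh.com chacha20-poly1305@openssh.com; do
    echo "== $c =="
    head -c 1G /dev/zero | pv | ssh -c "$c" "$REMOTE_SERVER" 'cat > /dev/null'
done
```

On CPUs with AES-NI, the AES-GCM ciphers are usually the fastest, but the
single-threaded ssh process can still cap out well below a 10G link.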



As for not trusting the LAN with unencrypted traffic, I would argue either the 
security policies are not well enforced or the server uses insecure NFS mount 
options.  I have no reason not to trust my LAN.
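
For what it's worth, if the concern really is unencrypted NFS traffic on the
wire, Kerberized NFSv4 can add integrity and privacy at the mount level.  A
sketch, assuming a working Kerberos/NFSv4 setup (server name and export are
placeholders):

```shell
# sec=krb5  -> Kerberos authentication only
# sec=krb5i -> adds per-message integrity checking
# sec=krb5p -> adds privacy (encryption) of the NFS traffic
mount -t nfs4 -o sec=krb5p "$REMOTE_SERVER":/export /mnt/nfs
```

krb5p costs CPU just like ssh does, so it only makes sense where the LAN
genuinely cannot be trusted.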

-Ron





> On Nov 19, 2017, at 10:39 AM, Marat Khalili <[email protected]> wrote:
> 
>> My experience has shown rsync over ssh is by far the slowest because of the 
>> ssh cipher. 
> 
> What century is this experience from? Any modern hardware can encrypt at IO 
> speed several times over. Even LAN, on the other hand, cannot be trusted with 
> unencrypted data.
> -- 
> 
> With Best Regards,
> Marat Khalili
> _______________________________________________
> lxc-users mailing list
> [email protected]
> http://lists.linuxcontainers.org/listinfo/lxc-users
