Hi,

Did you solve your problem? Did you try copying without any encryption layer, with something like netcat/tnc (tnc combines nc, tar, and a bit of Perl)? I would test that first while reading from /dev/zero and writing to /dev/null, to rule out any HDD issue. If the results look good, then read from the real source and write to /dev/null, then write from /dev/zero to the destination disk, and finally read from the real source and write to the real destination. Then add encryption back: try a basic scp restricted to AES, or to any cipher the processor supports directly (grep aes /proc/cpuinfo).
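A minimal sketch of that staged test; "target" and port 9000 are placeholders, and only the first stage runs locally without touching disk or network:

```shell
# Stage 1: pure memory-to-memory copy, no disks, no network involved.
dd if=/dev/zero of=/dev/null bs=1M count=1024

# Stage 2: raw network path, no encryption (start the receiver first;
# "target" and port 9000 are placeholders, and the listen syntax is
# "nc -l 9000" or "nc -l -p 9000" depending on the nc variant):
#   target$ nc -l 9000 > /dev/null
#   source$ dd if=/dev/zero bs=1M count=1024 | nc target 9000

# Stage 3: substitute the real source file and destination disk, then
# re-add encryption with a cipher the CPU can accelerate:
#   grep -m1 -o aes /proc/cpuinfo        # does the CPU advertise AES support?
#   scp -c aes128-ctr bigfile target:/dest/
```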

A ceiling at 1 Gb/s could also mean the traffic is taking the wrong path when there are several interfaces and routes, but then iperf should show the same limited speed, unless something is misconfigured on the source or target. I would double-check the routes on both machines.
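A quick way to check both things at once; the peer address 192.0.2.10 is a placeholder:

```shell
# On each machine, see which route/interface the kernel actually picks
# for the peer (192.0.2.10 is a placeholder address):
#   ip route get 192.0.2.10
# and compare the full routing table and interface states on both ends:
ip route show
ip -br link show
# Then measure raw TCP throughput with no disks involved:
#   target$ iperf3 -s
#   source$ iperf3 -c 192.0.2.10
```

If iperf already tops out at 1 Gb/s, the problem is the network path, not rsync.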

Otherwise, maybe I missed the solution, and that would interest me :)

Best,

On 03/01/2020 at 01:24, David Mathog wrote:
On Thu, 2 Jan 2020 13:32:17 Michael Di Domenico wrote:
On Thu, Jan 2, 2020 at 12:44 PM David Mathog <mat...@caltech.edu> wrote:
1. Is a single large file transfer rate reasonable?
2. Ditto for several large files?

yes, if i transfer files outside of rsync performance is reasonable

Are you sure there is not a patrol read ongoing on one system or the
other?  That can cause this sort of disk head issue.

yes, i control both sides.  the client side is totally idle and the
lustre system is quiet.

Double checking - you queried the RAID card (if present) to see that it was not doing a patrol read or SMART analysis?  In my experience SMART commands do not light the disk activity lights, so physically looking at the array may show no or little activity when in fact the disks are working quite hard.


Also it might be this "hugepage" issue:
https://www.beowulf.org/pipermail/beowulf/2015-July/033282.html

ah forgot about that one.  tried it, no change

Hmm.  Let's see if you can take the file systems more or less out of the equation.  Something along these lines:

1. Create 100 FIFOs with matching names on each end in a similarly named directory.
2. On the receiving machine spin out 100 processes doing:

  dd if=/PATH/FIFOname12 of=/dev/null &

3. On the sending side spin out a similar process to write to each FIFO:

  dd if=/dev/zero of=/path/FIFOname12 bs=8192 count=10000 &

4. Start up rsync on the directory holding the FIFOs.

I never tried coercing rsync into working like that, but if it can be done then it emulates a storage-system-to-storage-system transfer without ever actually reading from or writing to any file system.
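Scaled down to one machine, the FIFO plumbing from the steps above might look like this (paths and counts are illustrative; here the "two ends" are just local background processes, so it only demonstrates the relay, not the network):

```shell
#!/bin/sh
# Sketch of the FIFO experiment, scaled down to 3 FIFOs on one machine.
mkdir -p /tmp/fifotest
for i in 1 2 3; do
  mkfifo "/tmp/fifotest/FIFOname$i"
done
# "Receiving" side: drain each FIFO into /dev/null
for i in 1 2 3; do
  dd if="/tmp/fifotest/FIFOname$i" of=/dev/null bs=8192 2>/dev/null &
done
# "Sending" side: feed zeros into each FIFO
for i in 1 2 3; do
  dd if=/dev/zero of="/tmp/fifotest/FIFOname$i" bs=8192 count=1000 2>/dev/null &
done
wait
rm -r /tmp/fifotest
echo "FIFO relay test complete"
```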

Regards,

David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf

--
Dernat Rémy
IT Infrastructure Engineer, CNRS
MBB Platform - ISEM Montpellier



