On 16/1/21 6:56 am, the...@sys-concept.com wrote:
> On 1/15/21 1:11 AM, Raffaele BELARDI wrote:
>>> -----Original Message-----
>>> From: the...@sys-concept.com <the...@sys-concept.com>
>>> Sent: Friday, January 15, 2021 07:57
>>> To: Gentoo mailing list <gentoo-user@lists.gentoo.org>
>>> Subject: [gentoo-user] network transfer speed
>>>
>>> On both of my systems the network card speed is showing 1000:
>>>
>>> cat /sys/class/net/enp4s0/speed
>>> 1000
>>>
>>> but when I rsync a large file I only see about 20 to 22MB/s. On my home
>>> network I get about 110MB/s between PCs.
>>>
>>> Both PCs have SSDs and the switch is Gigabit (I think).
>>> How do I find the bottleneck?
>>
>> If the PCs attached to the switch show 1000 then the switch _is_ gigabit.
>>
>> On my 1Gb home network I have an FTP transfer speed between Gentoo PCs A
>> and B of almost 900Mbps; the other way round is almost half of that. One
>> difference between the two systems is the disk: A uses a SATA-2 disk while
>> B has SATA-3.
>>
>> Does the 'B' in 110MB/s stand for byte? If so you have 880Mbps, which is
>> not bad, and the problem probably lies somewhere else. Otherwise you could
>> check the switch error count (if you have a managed switch) or the network
>> card error count, just to ensure you don't have a cabling/connector
>> problem.
>>
>> Have you tried other transfer methods just for comparison? I think FTP is
>> still the fastest way to transfer files, though insecure or inconvenient
>> as it might be. I have no experience with rsync.
>>
>> raffaele
>
> On the remote network I ran ethtool on both cards and both report 1000Mb/s:
>
> 1.)
> ethtool net0
> Settings for net0:
>         Supported ports: [ TP MII ]
>         Supported link modes:   10baseT/Half 10baseT/Full
>                                 100baseT/Half 100baseT/Full
>                                 1000baseT/Half 1000baseT/Full
>         Supported pause frame use: No
>         Supports auto-negotiation: Yes
>         Advertised link modes:  10baseT/Half 10baseT/Full
>                                 100baseT/Half 100baseT/Full
>                                 1000baseT/Half 1000baseT/Full
>         Advertised pause frame use: Symmetric Receive-only
>         Advertised auto-negotiation: Yes
>         Link partner advertised link modes: 10baseT/Half 10baseT/Full
>                                             100baseT/Half 100baseT/Full
>                                             1000baseT/Full
>         Link partner advertised pause frame use: Symmetric
>         Link partner advertised auto-negotiation: Yes
>         Speed: 1000Mb/s
>         Duplex: Full
>         Port: MII
>         PHYAD: 0
>         Transceiver: internal
>         Auto-negotiation: on
>         MDI-X: on (auto)
>
> 2.)
> ethtool enp4s0
> Settings for enp4s0:
>         Supported ports: [ TP ]
>         Supported link modes:   10baseT/Half 10baseT/Full
>                                 100baseT/Half 100baseT/Full
>                                 1000baseT/Full
>         Supported pause frame use: Symmetric
>         Supports auto-negotiation: Yes
>         Advertised link modes:  10baseT/Half 10baseT/Full
>                                 100baseT/Half 100baseT/Full
>                                 1000baseT/Full
>         Advertised pause frame use: Symmetric
>         Advertised auto-negotiation: Yes
>         Speed: 1000Mb/s
>         Duplex: Full
>         Port: Twisted Pair
>         PHYAD: 1
>         Transceiver: internal
>         Auto-negotiation: on
>         MDI-X: on (auto)

What brand/type of interface (Realtek, internal/external USB?)? There is an
obscure bug in the Realtek RTL8153 chip on USB2 with the r8152 kernel driver
that drops throughput to about a third. Some interfaces (e.g., most Raspberry
Pis) connect via internal USB, which has its own throughput problems. Probably
not the problem in this case, but worth mentioning.
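If you're not sure what chip and driver each card uses, ethtool and the usual
bus-listing tools will tell you. A quick check (the interface name enp4s0 is
just the one from your output above; substitute your own):

ethtool -i enp4s0            # kernel driver (r8169, r8152, ...), firmware, bus-info
lspci | grep -i ethernet     # PCI/PCIe adapters
lsusb                        # USB-attached adapters, e.g. an RTL8153 dongle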
Rsync is not a good tool for measuring throughput, as its optimisations are
problematic - and this can show up when comparing local to remote tests. On a
filesystem it will always transfer the whole file (and it considers a network
filesystem local), while over a network it will use the delta-transfer
algorithm, which varies depending on the command-line switches used and where
the remote file lives, and this can change between tests depending on
conditions. Whole-file copies have little overhead, whereas the algorithm's
overhead can actually make things much slower (which is why rsync picks
whole-file copies when it can).

To measure, try a largish file created with dd from /dev/random, sent over a
simple netcat-type transfer mechanism - see the example at the end of this
mail.

Note that encryption can severely limit transfers due to CPU limiting on a
low-powered system, and rsync over ssh uses encryption. And if a secure VPN is
in the mix it's orders of magnitude worse. Also note that something like an
inline IDS on a link can drop throughput by 50% or more (measured on an older
enterprise Cisco device), which further complicates local versus
remote-via-internet comparisons.

Good luck in finding the problem - it's really not simple and easy.

BillK
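For example, something along these lines - a rough, untested sketch; the
hostname 'desktop', port 5001 and the 1 GiB size are placeholders, and netcat
option syntax varies between the GNU, BSD and nmap variants, so check your
man page:

# on the receiving PC: listen and throw the data away
nc -l -p 5001 > /dev/null

# on the sending PC: build the test file first so disk and CPU don't
# skew the timing (/dev/urandom is faster to read than /dev/random on
# older kernels; either works), then push it through netcat
dd if=/dev/urandom of=/tmp/testfile bs=1M count=1024
time nc -q 0 desktop 5001 < /tmp/testfile

1 GiB divided by the elapsed time gives the raw wire throughput with no
rsync, ssh or encryption overhead - compare that against the 20-22MB/s rsync
figure to see how much is the network and how much is rsync.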