Hi, I'm trying to understand the throughput across the different links of my little home network, and am perplexed by the measured wireless throughput.
The three main devices I'm interested in:

- Router: Buffalo WZR-HP-G300NH running OpenWrt (Chaos Calmer 15.05). Gigabit WAN and LAN, 802.11bgn wireless. https://wiki.openwrt.org/toh/buffalo/wzr-hp-g300h
- Laptop: Thinkpad T61 running Debian Jessie 8.3. Gigabit ethernet, 802.11abgn wireless.
- NAS: Seagate GoFlex Net [STAK100] running Debian Jessie 8.3. https://archlinuxarm.org/platforms/armv5/seagate-goflex-net

All throughput measurements were taken with iperf (run three times, taking the median result), unless specified otherwise; the exact invocations are at the end of this post.

These first results are with the laptop connected to the router via Cat 5:

- Laptop - NAS: ~874 Mbps. I suppose this is close enough to the gigabit theoretical max that there isn't any significant bottleneck here.
- Router - NAS: ~217 Mbps
- Router - laptop: ~198 Mbps

Here the router CPU is apparently the bottleneck (top shows close to 100% CPU utilization by iperf for at least part of the 10-second iperf runs). I suppose this is due to the bits needing to be copied out of the kernel networking stack into iperf's userspace memory, or something like that. I don't understand why the NAS seems to be doing better, but I suppose it could be an artifact of the data.

Here's the part that baffles me - these are with the laptop connected to the router wirelessly:

- Laptop - router: ~11.8 Mbps. These numbers exhibit significant variance, but they're generally at least this much, and at most about 15-20 Mbps.
- Laptop - NAS: ~14.7 Mbps. Once again, these numbers vary widely, but are in line with the laptop - router numbers.

But here's the kicker: Ookla's speedtest (run on the laptop with speedtest-cli, also shown at the end of this post) reports 29.01/5.89 (down/up), and this is fairly consistent. I'm paying Comcast for 25/5, and they apparently provision at 31.25/6.25, so I'm getting quite close to the theoretical max even when the laptop is connected to the router wirelessly. Additionally, various Android phones also get close to the Comcast provisioned max when connecting wirelessly to the router.

So the wireless link can apparently push at least 30 Mbps or so - why are my local wireless throughput numbers so much lower?

I was originally using one of the common 1/6/11 channels, and I switched to 3 since I saw a lot of other stations on those channels (the commands for this are also below). That may have resulted in some improvement, but I'm still stuck locally as described above.

What's the explanation for this - how can I possibly be getting much better throughput to servers tens of miles away than to my local stations? Does iperf somehow work fundamentally differently from speedtest? If so, which is a better representation of actual throughput?
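For reference, the iperf numbers above were gathered along these lines (iperf 2 with default TCP settings; the 192.168.1.x address is a placeholder for the actual host under test):

```
# On one endpoint (e.g. the NAS), start the iperf server:
iperf -s

# On the other endpoint (e.g. the laptop), run three 10-second
# TCP tests and take the median of the reported bandwidths:
for i in 1 2 3; do
    iperf -c 192.168.1.10 -t 10
done

# During the runs where the router is an endpoint, on the router:
top    # iperf sits near 100% CPU for part of the run
```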
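The Ookla figures come from speedtest-cli on the laptop, with something like:

```
# Download/upload test against the nearest Ookla server:
speedtest-cli --simple
# Ping: ...
# Download: 29.01 Mbit/s
# Upload: 5.89 Mbit/s
```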
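And the channel survey and switch were done on the router roughly as follows (wlan0/radio0 are the names on my OpenWrt build; they may differ elsewhere):

```
# List nearby APs and the channels they occupy (run on the router):
iwinfo wlan0 scan | grep -E 'ESSID|Channel'

# Move the radio to channel 3 and reload the wireless config:
uci set wireless.radio0.channel=3
uci commit wireless
wifi
```

Celejar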