Why do I get such a difference in bitrate with iPerf3 when running a single connection compared to 10 simultaneous connections?
When I run iPerf with 10 simultaneous connections:
iperf3 -p 32770 -R -t 180 -P 10 -c <remote_ip>
I get the following output:
Connecting to host <remote_ip>, port 32770
Reverse mode, remote host <remote_ip> is sending
[ 6] local <local_ip> port 54283 connected to <remote_ip> port 32770
[ 8] local <local_ip> port 54284 connected to <remote_ip> port 32770
[ 10] local <local_ip> port 54285 connected to <remote_ip> port 32770
[ 12] local <local_ip> port 54286 connected to <remote_ip> port 32770
[ 14] local <local_ip> port 54287 connected to <remote_ip> port 32770
[ 16] local <local_ip> port 54288 connected to <remote_ip> port 32770
[ 18] local <local_ip> port 54289 connected to <remote_ip> port 32770
[ 20] local <local_ip> port 54290 connected to <remote_ip> port 32770
[ 22] local <local_ip> port 54291 connected to <remote_ip> port 32770
[ 24] local <local_ip> port 54292 connected to <remote_ip> port 32770
[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-180.03 sec 546 MBytes 25.5 Mbits/sec 689 sender
[ 6] 0.00-180.01 sec 546 MBytes 25.4 Mbits/sec receiver
[ 8] 0.00-180.03 sec 128 KBytes 5.82 Kbits/sec 109 sender
[ 8] 0.00-180.01 sec 0.00 Bytes 0.00 bits/sec receiver
[ 10] 0.00-180.03 sec 128 KBytes 5.82 Kbits/sec 16 sender
[ 10] 0.00-180.01 sec 0.00 Bytes 0.00 bits/sec receiver
[ 12] 0.00-180.03 sec 543 MBytes 25.3 Mbits/sec 730 sender
[ 12] 0.00-180.01 sec 542 MBytes 25.3 Mbits/sec receiver
[ 14] 0.00-180.03 sec 368 MBytes 17.2 Mbits/sec 846 sender
[ 14] 0.00-180.01 sec 367 MBytes 17.1 Mbits/sec receiver
[ 16] 0.00-180.03 sec 746 MBytes 34.7 Mbits/sec 634 sender
[ 16] 0.00-180.01 sec 744 MBytes 34.7 Mbits/sec receiver
[ 18] 0.00-180.03 sec 469 MBytes 21.8 Mbits/sec 727 sender
[ 18] 0.00-180.01 sec 468 MBytes 21.8 Mbits/sec receiver
[ 20] 0.00-180.03 sec 128 KBytes 5.82 Kbits/sec 112 sender
[ 20] 0.00-180.01 sec 128 KBytes 5.83 Kbits/sec receiver
[ 22] 0.00-180.03 sec 507 MBytes 23.6 Mbits/sec 624 sender
[ 22] 0.00-180.01 sec 506 MBytes 23.6 Mbits/sec receiver
[ 24] 0.00-180.03 sec 7.50 MBytes 349 Kbits/sec 671 sender
[ 24] 0.00-180.01 sec 7.38 MBytes 344 Kbits/sec receiver
[SUM] 0.00-180.03 sec 3.11 GBytes 148 Mbits/sec 5158 sender
[SUM] 0.00-180.01 sec 3.11 GBytes 148 Mbits/sec receiver
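The per-stream numbers above are easier to compare if iperf3 is asked for JSON output. A minimal sketch, assuming a recent iperf3 (for the -J flag) and jq are installed; the port and placeholder address are the same as in the test above:

# list the receiver-side bitrate of every stream, in Mbit/s
iperf3 -p 32770 -R -t 180 -P 10 -c <remote_ip> -J \
    | jq '.end.streams[].receiver.bits_per_second / 1e6'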
but when I run it with only one connection:
iperf3 -p 32770 -R -t 180 -P 1 -c <remote_ip>
I get the following:
Connecting to host <remote_ip>, port 32770
Reverse mode, remote host <remote_ip> is sending
[ 6] local <local_ip> port 55808 connected to <remote_ip> port 32770
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 6] 0.00-180.00 sec 7.00 MBytes 326 Kbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-180.02 sec 7.00 MBytes 326 Kbits/sec 635 sender
[ 6] 0.00-180.00 sec 7.00 MBytes 326 Kbits/sec receiver
I am currently experiencing intermittent speed issues with my ISP. I normally get 150 Mbps according to fast.net and speedtest.net, but my recent results vary between 325 Kbps and 150 Mbps. To investigate further, I set up an iPerf3 server in AWS and ran the iPerf3 client on my desktop and laptop. The results above are similar whether the client is on 5 GHz wireless, 1000BASE-T over Powerline, or 1000BASE-T plugged directly into the router.
I would expect the single connection to use the full 150 Mbps, or at least to match the bitrate of one of the streams in the 10-connection test.
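To capture the intermittency over time rather than in a single 180-second run, one option is to alternate short single-stream and 10-stream tests and log the summary bitrates. A rough sketch, assuming the AWS instance is still running iperf3 -s -p 32770 and that jq is available on the client; the 30-second test length, the 5-minute gap and the iperf_history.log file name are arbitrary choices:

# alternate 1-stream and 10-stream reverse tests and append the totals to a log
while true; do
    for streams in 1 10; do
        mbits=$(iperf3 -p 32770 -R -t 30 -P "$streams" -c <remote_ip> -J \
                  | jq '.end.sum_received.bits_per_second / 1e6')
        echo "$(date -u +%FT%TZ)  P=$streams  ${mbits} Mbit/s" >> iperf_history.log
    done
    sleep 300   # wait 5 minutes between rounds
done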
Unfortunately I do not have a clear answer for this, as my ISP resolved the issue. The ISP said there were issues with the switches within their data centres. Even though the BGP links were up on their side and showing the correct configuration and speed, they acknowledged that several properties in the area (hundreds, apparently) were experiencing symptoms similar to mine. The issue took three weeks to resolve.
If I run the iperf3 tests now, I get textbook output with similar bitrates in both the single-stream and multi-stream tests.