I am wondering why iperf reports much better throughput over TCP than over UDP. This question is very similar to this one.
UDP should be faster than TCP because it has no acknowledgements or congestion control. I am looking for an explanation.
$ iperf -u -c 127.0.0.1 -b10G
------------------------------------------------------------
Client connecting to 127.0.0.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 52064 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 962 MBytes 807 Mbits/sec
[ 3] Sent 686377 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 960 MBytes 805 Mbits/sec 0.004 ms 1662/686376 (0.24%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
$ iperf -c 127.0.0.1
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[ 3] local 127.0.0.1 port 60712 connected with 127.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 31.1 GBytes 26.7 Gbits/sec
I suspect you're using the old iperf version 2.0.5, which has known performance problems with UDP. I'd suggest upgrading to version 2.0.10.
Running iperf -v will show the version.
Note 1: The primary 2.0.5 issue behind this problem is mutex contention between the client thread and the reporter thread. The shared memory between these two threads was increased to address it.
Note 2: There are other performance-related enhancements in 2.0.10.
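The fix described above — enlarging the shared buffer between a fast sender thread and a slower reporter thread so the sender blocks on the mutex less often — can be sketched in Python. This is an illustrative producer/consumer model, not iperf's actual code; all names here are made up:

```python
import threading
import time
from collections import deque

class ReportQueue:
    """Bounded queue shared by a sender thread and a reporter thread.
    The larger the capacity, the less often the sender must block
    waiting for the reporter to drain it."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        lock = threading.Lock()
        self.not_full = threading.Condition(lock)
        self.not_empty = threading.Condition(lock)
        self.sender_waits = 0  # times the sender found the queue full

    def put(self, item):
        with self.not_full:
            while len(self.items) >= self.capacity:
                self.sender_waits += 1
                self.not_full.wait()   # sender stalls: contention hurts here
            self.items.append(item)
            self.not_empty.notify()

    def get(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()
            return item

def run(capacity, n=200):
    """Push n reports through a queue of the given capacity and
    return how often the (fast) sender had to block."""
    q = ReportQueue(capacity)

    def reporter():
        for _ in range(n):
            time.sleep(0.0005)  # the reporter is slower than the sender
            q.get()

    t = threading.Thread(target=reporter)
    t.start()
    for i in range(n):
        q.put(i)
    t.join()
    return q.sender_waits

if __name__ == "__main__":
    print("tiny buffer:", run(capacity=2), "sender stalls")
    print("big buffer :", run(capacity=200), "sender stalls")
```

With a capacity of 200 the sender never stalls, while with a capacity of 2 it blocks over and over behind the slow reporter — the same kind of stall that capped UDP throughput in 2.0.5.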
Bob