I'm currently unable to find where the bottleneck in my application is located. The issue isn't hardware-specific, as I ran tests on several computers with different hardware.
The application is developed in C++ with Qt 5.15, targeting Windows 10 or higher only.
The goal of the application is to connect, through an HTTP network connection, to a server (another application) which handles the datamodel of a controlled machine. The datamodel is a list of parameters representing the state of the machine. The server also allows retrieving some "heavier" data, like images to be displayed to the user.
The "Client" application uses the HTTP protocol and the PUT HTTP's method to send messages and read-back answers from the server. As I use Qt's libraries, I base my connection on QNetworkRequest
, QNetworkReply
and QNetworkAccessManager
classes to perform network operations.
The client application works with an autopoll (100 ms minimum between two polls) to get updates from the server. The autopoll simply PUTs a "GetUpdated" request; below is the exact data sent to the server to request a simple update of the datamodel.
<?xml version="1.0" encoding="UTF-8"?>
<Request requestType="GetUpdated"/>
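To make the setup concrete, here is a minimal sketch of such an autopoll PUT with the Qt classes named above. The server URL is hypothetical and this is not the application's actual code:

```cpp
#include <QCoreApplication>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QTimer>
#include <QUrl>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QNetworkAccessManager manager;
    // Hypothetical endpoint; the real URL depends on the server application.
    QNetworkRequest request(QUrl(QStringLiteral("http://192.168.1.10:8080/")));
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/xml");

    const QByteArray body =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
        "<Request requestType=\"GetUpdated\"/>";

    // Autopoll: one PUT every 100 ms (the minimum interval between two polls).
    QTimer poller;
    QObject::connect(&poller, &QTimer::timeout, [&]() {
        QNetworkReply *reply = manager.put(request, body);
        QObject::connect(reply, &QNetworkReply::finished, [reply]() {
            qDebug() << "Answer:" << reply->readAll();
            reply->deleteLater();
        });
    });
    poller.start(100);

    return app.exec();
}
```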
If the server has registered an update of one of the machine's parameters, the answer to this request will contain that specific parameter with its new value.
<?xml version="1.0" encoding="utf-8"?>
<Response responseType="GetUpdated">
<Device Name="ServerCore" type="0">
<Parameter Name="DateTime" type="String">
<Actual>2024-10-09T15:42:52</Actual>
</Parameter>
</Device>
</Response>
Above is an example of an update answer. The data size may vary depending on the number of parameters updated, but I based all my performance analysis on answers of roughly this size.
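For completeness, the client-side handling of such an answer can be sketched with Qt's QXmlStreamReader; the element layout comes from the example above, everything else is illustrative:

```cpp
#include <QXmlStreamReader>
#include <QDebug>

// Sketch: walk a GetUpdated response and print each updated parameter.
// Assumes the element layout shown in the example response above.
void handleGetUpdatedResponse(const QByteArray &xml)
{
    QXmlStreamReader reader(xml);
    QString device, parameter;
    while (!reader.atEnd()) {
        if (reader.readNext() != QXmlStreamReader::StartElement)
            continue;
        if (reader.name() == QLatin1String("Device"))
            device = reader.attributes().value("Name").toString();
        else if (reader.name() == QLatin1String("Parameter"))
            parameter = reader.attributes().value("Name").toString();
        else if (reader.name() == QLatin1String("Actual"))
            qDebug() << device << "/" << parameter << "=" << reader.readElementText();
    }
    if (reader.hasError())
        qWarning() << "XML error:" << reader.errorString();
}
```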
When a request is about to be sent to the server application, I create a QElapsedTimer to estimate the elapsed time between sending the request and handling the reply from the server.
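Roughly, the measurement looks like this (a sketch under the same assumptions as above, not the application's exact code):

```cpp
#include <QElapsedTimer>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>
#include <QDebug>

// Sketch: time one request/reply round trip with QElapsedTimer.
void timedPut(QNetworkAccessManager &manager,
              const QNetworkRequest &request, const QByteArray &body)
{
    auto *timer = new QElapsedTimer;
    timer->start();  // started just before the request is sent
    QNetworkReply *reply = manager.put(request, body);
    QObject::connect(reply, &QNetworkReply::finished, [reply, timer]() {
        // Elapsed time between sending the request and handling the reply.
        qDebug() << "Round trip:" << timer->elapsed() << "ms";
        delete timer;
        reply->deleteLater();
    });
}
```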
The issue is that when I use a "private network" architecture between the server and the client, the elapsed time is too high for my kind of application. Whether the network includes switches/routers/etc. or is the simplest possible one, a cable directly connecting the server to the client, the elapsed time is similar: ~50 ms maximum.
If I run the server and the client on the same computer (using 127.0.0.1), the elapsed time is more acceptable: ~3 ms maximum.
To verify whether the Data Link & Physical layers were the real bottleneck, I created a standalone Python application that connects to the server and performs the same operations as my client. My Python application is more efficient than my Qt one on private networks (~5 ms maximum) for the same requests. It uses the http.client library (https://docs.python.org/3/library/http.client.html).
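The Python test is essentially the following (a sketch with a hypothetical host and port; it only uses http.client from the standard library):

```python
import http.client
import time

# Hypothetical server address; the real one depends on the setup.
conn = http.client.HTTPConnection("192.168.1.10", 8080, timeout=5)
body = '<?xml version="1.0" encoding="UTF-8"?>\n<Request requestType="GetUpdated"/>'

# Send the same GetUpdated PUT and time the round trip.
start = time.perf_counter()
conn.request("PUT", "/", body, {"Content-Type": "application/xml"})
response = conn.getresponse()
data = response.read()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"{response.status} in {elapsed_ms:.1f} ms, {len(data)} bytes")
conn.close()
```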
So I concluded that Qt was the problem, but then why do I get an acceptable delay on the local network? I don't know if it's really Qt's fault.
To reduce the Qt delay, I tried to:

- use the blocking waitForReadyRead method of the QNetworkReply class (https://doc.qt.io/qt-5/qiodevice.html#waitForReadyRead);
- use a dedicated QThread for the communication process;
- set a Priority level on the requests (https://doc.qt.io/qt-5/qnetworkrequest.html#setPriority).

But none of them reduced the delay (a sketch of two of these attempts follows below). I don't know if it's really Qt's fault.
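For instance, the priority and blocking-wait attempts looked roughly like this (a sketch, not the real code; as noted, it did not reduce the delay):

```cpp
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QNetworkReply>

// Sketch of two of the attempts: request priority and blocking waits.
QByteArray blockingPut(QNetworkAccessManager &manager,
                       QNetworkRequest request, const QByteArray &body)
{
    // Attempt 1: raise the request priority.
    request.setPriority(QNetworkRequest::HighPriority);

    QNetworkReply *reply = manager.put(request, body);

    // Attempt 2: block on waitForReadyRead instead of relying on signals.
    // Note: Qt documents that this may fail on some platforms/subclasses.
    while (!reply->isFinished()) {
        if (!reply->waitForReadyRead(100))
            break;  // timed out or not supported
    }

    const QByteArray answer = reply->readAll();
    reply->deleteLater();
    return answer;
}
```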
If it's the NIC's fault, why can my Python application improve performance?
I sniffed the connection with Wireshark to quantify the network transfer time. I ran Wireshark on both computers and matched the corresponding packets to determine the transmission time. The result is:
My question about this new information is: at which layer is Wireshark sniffing?
I received an answer from the Qt support team about this problem: it is a Qt 5 problem, and they opened a bug report about this kind of issue.
Moving the code from Qt 5 to Qt 6 improved the performance and brought the answer delay down to ~5 ms, which is more acceptable for our kind of application.
Edit:
Qt's bug report: https://bugreports.qt.io/browse/QTBUG-130438