I am developing a client-server application in C++ to transfer an image file over a TCP connection. I have encountered an issue where the client does not appear to receive the complete image data from the server.
The server reports that it is sending the data in chunks as expected. However, the client receives an unexpectedly small number of bytes in the first read and then stalls, waiting indefinitely for the remaining data.
Here is the console output from the server and client during the transfer attempt.
Server Output: The server logs indicate that the requests are received and the image chunks are being sent sequentially.
Request received: img_size...
Image size sent!
Request received: image...
Start sending image.
1024 bytes sent (1)
2048 bytes sent (2)
3072 bytes sent (3)
...
Client Output: The client correctly receives the total image size. However, during the image data reception, the byte count is incorrect from the very first packet, and the process eventually hangs.
Get Imagesize: 75186 bytes
(0/75186)
(5/75186)
(80/75186)
...
(16150/75186)
(16227/75186)
<-- The client waits here forever -->
My primary point of confusion is why the client reports receiving only 5 bytes after the server has sent the first 1024-byte packet.
Below are the relevant snippets of my C++ code for the client's receiving logic and the server's sending function.
Client-Side Receiving Logic
This is the main loop on the client side, intended to receive the full image.
tcp.Send("image");
std::string msg = "";
std::stringstream ss;
while (msg.length() < size_img) {
    cout << "(" << msg.length() << "/" << size_img << ")" << endl;
    std::string g;
    if (msg.length() + 1024 < size_img) {
        g = tcp.receive(1024);
    } else {
        g = tcp.receive(size_img - msg.length());
    }
    // Append the received chunk to the main string
    ss << msg;
    ss << g;
    msg = ss.str();
    ss.str(""); // Clear the stringstream
}
Client receive Function
This function is called by the loop above to receive a chunk of data from the socket.
std::string TCPClient::receive(int size)
{
    char buffer[size]; // Note: This is a Variable Length Array (VLA), not standard C++
    memset(&buffer[0], 0, sizeof(buffer));
    size_t len = sizeof(buffer);
    char *p = buffer;
    ssize_t n;
    std::string reply;
    // Loop to ensure all 'size' bytes are received
    while (len > 0 && (n = recv(sock, p, len, 0)) > 0) {
        p += n;
        len -= (size_t)n;
    }
    if (len > 0 || n < 0) {
        cout << "receive failed!" << endl;
        return nullptr; // This should ideally be an empty string or throw an exception
    }
    reply = buffer; // Convert char array to std::string
    return reply;
}
Server-Side Sending Function
This is the function on the server responsible for sending the image data in 1024-byte chunks.
void TCPServer::Send_Bytes(unsigned char* msg, int laenge) // laenge = length in bytes
{
    for (int i = 0; i < laenge; i = i + 1024) {
        if (i + 1024 < laenge) {
            send(newsockfd, msg + i, 1024, 0);
        } else {
            send(newsockfd, msg + i, laenge - i, 0);
            break;
        }
        cout << i + 1024 << " bytes sent (" << (i / 1024) + 1 << ")" << endl;
        usleep(100000); // Small delay between chunks
    }
}
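A separate weakness, unrelated to the stall: the server ignores `send()`'s return value, and `send()` may legitimately transmit fewer bytes than requested (or fail). A hedged sketch of a send-all loop; the `sendAll` name is mine, not from the code above:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

// Hypothetical helper: keeps calling send() until all 'len' bytes
// have been written, or an error occurs. Returns false on error.
bool sendAll(int fd, const unsigned char* data, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, data + sent, len - sent, 0);
        if (n <= 0) {
            return false; // error; caller decides how to handle it
        }
        sent += (size_t)n;
    }
    return true;
}
```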
I suspect the issue is related to how I am handling the binary image data, particularly the use of std::string, which may be treating embedded null bytes (\0) as string terminators. However, I am unable to pinpoint the exact cause of the data truncation and the subsequent stall.
Could someone please help me identify the flaw in my implementation? Thank you.
Solution:
void TCPClient::receive_char(char* outStr, int size)
{
    char buffer[size];
    memset(&buffer[0], 0, sizeof(buffer));
    size_t len = sizeof(buffer);
    char *p = buffer;
    ssize_t n;
    while (len > 0 && (n = recv(sock, p, len, 0)) > 0) {
        p += n;
        len -= (size_t)n;
    }
    if (len > 0 || n < 0) {
        cout << "receive failed!" << endl;
    }
    // Copy the raw bytes out instead of assigning to a std::string,
    // so embedded '\0' bytes are no longer lost
    for (int i = 0; i < size; ++i) {
        outStr[i] = buffer[i];
    }
}
You can call with:
char data[4096];
memset(&data[0], 0, sizeof(data));
tcp.receive_char(data, 4096);
for (int i = 0; i < 4096; i++) {
    image_bytes[counter] = data[i];
    counter++;
}
Maybe there are better solutions... :D
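One such alternative, as a sketch: receive binary data into a std::vector<unsigned char> rather than a raw char array or a NUL-sensitive std::string (this also avoids the VLA, which is not standard C++). The function name and error convention here are my own choices, assuming the same blocking recv semantics as the code above:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>
#include <vector>

// Sketch: read exactly 'size' bytes from a connected socket into a
// vector. Returns an empty vector on error or premature EOF.
std::vector<unsigned char> receiveBytes(int fd, size_t size)
{
    std::vector<unsigned char> buf(size);
    size_t got = 0;
    while (got < size) {
        ssize_t n = recv(fd, buf.data() + got, size - got, 0);
        if (n <= 0) {
            return {}; // error, or connection closed before 'size' bytes
        }
        got += (size_t)n;
    }
    return buf; // embedded zero bytes are preserved
}
```

The vector carries its own length, so no manual memset, copy loop, or external counter is needed, and zero bytes in the payload cannot truncate anything.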