I have a single TCP connection to a server, but I may have multiple requests in flight at the same time. Most of the time the response is so big that it arrives as a constant stream of data chunks. I can check the accumulated data length to determine that I've reached the END OF STREAM. But with multiple requests, chunks sometimes get mixed with those of another request, which causes a lot of failures.
For example, for a normal request the chunks I receive all belong to one response:

request1 -> response1 chunk, response1 chunk, ... (complete)

In real life, with concurrent requests, the chunks arrive mixed together:

request1, request2 -> response1 chunk, response2 chunk, response1 chunk, ...
For now, my only idea is to serialize everything and serve the requests one after another. However, that can't fully solve my problem, because subscribed events sometimes come in outside of my control.
How can I solve this TCP problem in general?
I'm showing some of the code below (in case somebody knows Erlang and Elixir):
# Create TCP connection
{:ok, socket} =
:gen_tcp.connect(host_charlist, port, [:binary, active: true, keepalive: true])
# Send request
def handle_call({:send, msg}, from, state) do
  :ok = :gen_tcp.send(state.socket, msg)
  new_state = %{state | from: from, msg: ""}
  {:noreply, new_state}
end
# When it receives a packet
def handle_info({:tcp, _socket, msg}, state) do
  new_state = %{state | msg: state.msg <> msg}
  current_msg_size = byte_size(new_state.msg)
  # this size can be read from the first packet's header
  defined_msg_size = Response.get_body_size(new_state.msg)

  cond do
    current_msg_size == defined_msg_size ->
      GenServer.reply(new_state.from, {:ok, new_state.msg})
      {:noreply, %{new_state | msg: ""}}

    current_msg_size > defined_msg_size ->
      GenServer.reply(new_state.from, {:error, "Message size exceeded."})
      {:noreply, %{new_state | msg: ""}}

    true ->
      {:noreply, new_state}
  end
end
At the TCP level, within a connection, requests and responses do not exist; it is a single tube transferring bytes from one side to the other, in order.

To handle interleaving over a single connection, you have to handle it one level up the stack.
Possible solutions include:

- Serializing the requests: only send the next request once the previous response has been fully received. As you noted, this does not cover the subscription events the server pushes on its own.
- Tagging each request with an identifier that the server echoes back in every chunk of the corresponding response, so that you can reassemble each message (possibly across several receives) and assign it to the corresponding request.
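If you can shape the wire format, a minimal sketch of the second approach could look like the following. Everything here is an assumption, not an existing API: the 4-byte id / 4-byte length framing, the TaggedClient module name, and the split_frames/2 and handle_event/1 helpers would all need to be adapted to your actual protocol.

defmodule TaggedClient do
  use GenServer

  # Client API. `host` must be a charlist, as in the question.
  def start_link(host, port),
    do: GenServer.start_link(__MODULE__, {host, port}, name: __MODULE__)

  def request(payload), do: GenServer.call(__MODULE__, {:request, payload})

  @impl true
  def init({host, port}) do
    {:ok, socket} =
      :gen_tcp.connect(host, port, [:binary, active: true, keepalive: true])

    {:ok, %{socket: socket, next_id: 1, pending: %{}, buffer: <<>>}}
  end

  @impl true
  def handle_call({:request, payload}, from, state) do
    id = state.next_id
    # Hypothetical framing: 4-byte request id, 4-byte payload length, payload.
    frame = <<id::32, byte_size(payload)::32, payload::binary>>
    :ok = :gen_tcp.send(state.socket, frame)

    # Remember who is waiting for this id; reply when the full frame arrives.
    {:noreply, %{state | next_id: id + 1, pending: Map.put(state.pending, id, from)}}
  end

  @impl true
  def handle_info({:tcp, _socket, data}, state) do
    {frames, rest} = split_frames(state.buffer <> data, [])
    state = Enum.reduce(frames, state, &dispatch/2)
    {:noreply, %{state | buffer: rest}}
  end

  # A complete frame is buffered: peel it off and keep scanning.
  defp split_frames(<<id::32, size::32, payload::binary-size(size), rest::binary>>, acc),
    do: split_frames(rest, [{id, payload} | acc])

  # Not enough bytes for the next frame yet: keep the remainder as the buffer.
  defp split_frames(rest, acc), do: {Enum.reverse(acc), rest}

  defp dispatch({id, payload}, state) do
    case Map.pop(state.pending, id) do
      {nil, _pending} ->
        # No caller is waiting on this id: treat it as an unsolicited event.
        handle_event(payload)
        state

      {from, pending} ->
        GenServer.reply(from, {:ok, payload})
        %{state | pending: pending}
    end
  end

  # Placeholder for subscription events pushed by the server.
  defp handle_event(_payload), do: :ok
end

With this shape, chunks from different responses can interleave freely: each complete frame is routed by its id, and frames whose id has no waiting caller (your subscription events) are handled out of band instead of corrupting a pending reply.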