netty, project-reactor, keep-alive, reactor-netty, persistent-connection

Read a big JSON payload all at once with the Reactor Netty client, not in chunks, over a keep-alive, long-lived persistent connection


To describe the use case I am trying to solve/implement: we integrate with a 3rd-party service, and this service sends a continuous stream of data (JSON) over an HTTP/1.1 long-lived/persistent connection (which can stay open for up to 2-3 hours; that raises a separate question of how the connection pool will behave in this case, as all connections on the HTTP client go only to this host). The problem I have at the moment: I connect and subscribe to the data like the following

    client
        .get()
        .uri("/")
        .responseContent()
        .asString()
        // .aggregate() // 1
        .subscribe(content -> {
            logger.info("Running content on thread {}: {}",
                    Thread.currentThread().getName(), content);
        });

For small JSON payloads this works fine, but for large ones content contains only part of the full JSON, so by itself it is not valid JSON or complete data. Is it possible to always get the whole data in one chunk no matter the size (is there a way to set a large enough ByteBuf size to fit any JSON payload into one response?), or, if not, how can you wait for and combine multiple parts of the same JSON response into complete, valid JSON?

To give an example of what I mean (I reduced the size of the JSON payload in the example, just to demonstrate the idea), say the server sends the following JSON:

{"id":1, data: [1, 2, 3]}

And on the client side, I get the data/response in 3 chunks, i.e. each chunk is an arbitrary fragment of the JSON above that is not valid JSON on its own (and after chunk 3 I see READ COMPLETE in the log, see below).

If I enable aggregate(), nothing is printed at all; as I understand it, it waits for the connection to be closed, but as this is a persistent, long-lived connection, that will not work.

The interesting part is that if I enable .wiretap(true) on the client, then when a large JSON payload has been split into multiple ByteBufs, READ COMPLETE is printed in the logs only once the full JSON content is consumed (not after every individual ByteBuf part), which might mean that the client knows when the processing of a single data response from the server is over.

I have found there is .httpResponseDecoder, as it looked like the client limits the chunk size to 16 KB. I extended it to 2 MB, i.e. .httpResponseDecoder(spec -> spec.maxChunkSize(2 * 1024 * 1024)), but I still don't get the whole JSON as a single data chunk and still see a limit of 16 KB max per chunk.
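For reference, here is a minimal sketch of where that decoder spec hangs off the client builder, together with the wiretap(true) setting mentioned above; the base URL is a placeholder for the 3rd-party service:

    import reactor.netty.http.client.HttpClient;

    // Sketch: client construction with an enlarged max chunk size.
    // "http://example.com" is a placeholder, not the real service.
    HttpClient client = HttpClient.create()
            .baseUrl("http://example.com")
            .wiretap(true) // logs READ COMPLETE etc. as described above
            .httpResponseDecoder(spec -> spec.maxChunkSize(2 * 1024 * 1024));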

Any idea how I can achieve this or where to look?


Solution

  • On Gitter (https://gitter.im/reactor/reactor-netty), Violeta Georgieva suggested giving JsonObjectDecoder a try, so I added the following to the client:

    // JsonObjectDecoder is io.netty.handler.codec.json.JsonObjectDecoder (ships with Netty)
    client.doOnConnected(connection -> connection.addHandler(new JsonObjectDecoder()))
    

    And now I get the proper, complete JSON on each subscription callback. A full runnable sketch of the resulting client is below.
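Putting it together, a minimal end-to-end sketch under the same assumptions (placeholder base URL and root path; the logger is replaced with System.out for brevity):

    import io.netty.handler.codec.json.JsonObjectDecoder;
    import reactor.netty.http.client.HttpClient;

    public class StreamingJsonClient {

        public static void main(String[] args) throws InterruptedException {
            HttpClient client = HttpClient.create()
                    .baseUrl("http://example.com") // placeholder for the 3rd-party service
                    // Re-frame the inbound byte stream so that each element emitted
                    // downstream is exactly one complete JSON object.
                    .doOnConnected(connection ->
                            connection.addHandler(new JsonObjectDecoder()));

            client.get()
                    .uri("/")
                    .responseContent()
                    .asString()
                    .subscribe(json ->
                            System.out.println("Complete JSON object: " + json));

            // Keep the demo process alive; the persistent connection streams for hours.
            Thread.sleep(Long.MAX_VALUE);
        }
    }

Note that JsonObjectDecoder's default maxObjectLength is 1 MB; if a single JSON object can be larger, the int-argument constructor (e.g. new JsonObjectDecoder(2 * 1024 * 1024)) raises that limit.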