We are sending binary encoded data as the payload of a JSON request to our Netty backend. On smaller payloads (1-2 MB) everything is fine, but larger payloads fail with an HTTP 413 Request Entity Too Large.
We have found two places where this limit can be configured. We have set the first to a default (10 MB) well above our problem threshold (5 MB), and we don't use the latter at all, so we are unsure how to dig into this.
(BTW, we don't intend to keep using JSON for large binary payloads in the future, so "helpful" tips on changing the underlying architecture aren't required ;-) )
Pipeline setup
We have two stages of pipeline initialization: the first stage depends on the combination of HTTP protocol version and SSL, and the latter stage is only concerned with application-level handlers. PipelineInitializer is just an internal interface.
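For context, the two stages are chained together roughly like this (a minimal sketch; HttpChannelInitializer and the field names are illustrative, not our actual wiring):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.socket.SocketChannel;

public class HttpChannelInitializer extends ChannelInitializer<SocketChannel> {

    // Stage one: protocol-level handlers (codec, aggregator, compression, chunked writes).
    private final PipelineInitializer protocolInitializer = new Http1_1PipelineInitializer();
    // Stage two: application-level handlers (auth, routing, etc.).
    private final PipelineInitializer applicationInitializer;

    public HttpChannelInitializer(PipelineInitializer applicationInitializer) {
        this.applicationInitializer = applicationInitializer;
    }

    @Override
    protected void initChannel(SocketChannel channel) {
        ChannelPipeline pipeline = channel.pipeline();
        protocolInitializer.addHandlersToPipeline(pipeline);
        applicationInitializer.addHandlersToPipeline(pipeline);
    }
}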
/**
 * This class is concerned with setting up the handlers for the protocol level of the pipeline.
 * Only use it for the cases where you know the passed in traffic will be HTTP 1.1
 */
public class Http1_1PipelineInitializer implements PipelineInitializer {

    private final static int MAX_CONTENT_LENGTH = 10 * 1024 * 1024; // 10 MB

    @Override
    public void addHandlersToPipeline(ChannelPipeline pipeline) {
        pipeline.addLast(
                new HttpServerCodec(),
                new HttpObjectAggregator(MAX_CONTENT_LENGTH),
                new HttpChunkContentCompressor(),
                new ChunkedWriteHandler()
        );
    }
}
Application-level pipeline setup in our ApplicationPipelineInitializer. I don't think these handlers are that relevant, but they're included for completeness. Everything in this part is user-defined:
@Override
public void addHandlersToPipeline(final ChannelPipeline pipeline) {
    pipeline.addLast(
            new HttpLoggerHandler(),
            userRoleProvisioningHandler,
            authHandlerFactory.get(),
            new AuthenticatedUserHandler(services),
            createRoleHandlerFactory(configuration, services, externalAuthorizer).get(),
            buildInfoHandler,
            eventStreamEncoder,
            eventStreamDecoder,
            eventStreamHandler,
            methodEncoder,
            methodDecoder,
            methodHandler,
            fileServer,
            notFoundHandler,
            createInterruptOnErrorHandler());

    // Prepend the error handler to every entry in the pipeline. The intention behind this is to have a catch-all
    // outbound error handler and thereby avoid the need to attach a listener to every ctx.write(...).
    final OutboundErrorHandler outboundErrorHandler = new OutboundErrorHandler();
    for (Map.Entry<String, ChannelHandler> entry : pipeline) {
        pipeline.addBefore(entry.getKey(), entry.getKey() + "#OutboundErrorHandler", outboundErrorHandler);
    }
}
Netty version: 4.1.15
Sorry for wasting everybody's time: this wasn't Netty at all, hence the wild goose chase. After inspecting the output I found it was the Nginx reverse proxy in front that returned the HTTP 413, not Netty. Adding client_max_body_size 10M; did the trick, aligning Nginx's limit with Netty's.
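For reference, a minimal sketch of where that directive ended up (the server/location layout and names here are illustrative, not our actual config):

# Illustrative excerpt only - client_max_body_size is the relevant line.
server {
    listen 443 ssl;
    server_name example.com;              # placeholder

    location / {
        client_max_body_size 10M;         # allow request bodies up to 10 MB, matching Netty's HttpObjectAggregator limit
        proxy_pass http://127.0.0.1:8080; # placeholder for the Netty backend
    }
}

Nginx's default client_max_body_size is 1 MB, which explains why only the smaller payloads got through to Netty at all.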