Tags: rest, protocol-buffers, grpc, grpc-node

Why is gRPC so much slower than an HTTP API sending an array


I'm doing load tests between services implemented in Node.js, both running on the same machine and connected through localhost.

There are REST and gRPC client & server files. The main goal is to prove that gRPC is faster than an HTTP call because it uses HTTP/2 and protocol buffers, which should be more efficient to encode/decode than JSON...

But in my tests (sending an integer array) gRPC is much slower.

The code is very simple for both implementations. I have an auxiliary class to generate objects with sizes (in MB): 0.125, 0.25, 0.5, 1, 2, 5, 20. Both the REST and gRPC servers use this auxiliary class, so the object to send is the same.

The object sent in the payload looks like this:

{
  message: "Hello world",
  array: []
}

where the array is filled with numbers until it reaches the desired size.
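
A simplified sketch of such a generator (the real auxiliary class may differ; the 8-bytes-per-number estimate is just an assumption used to hit the target size):

function buildExample(sizeInMB) {
    // Rough assumption: ~8 bytes per JS number, fill until the target size is reached
    const targetBytes = sizeInMB * 1024 * 1024
    const array = []
    for (let bytes = 0; bytes < targetBytes; bytes += 8) {
        array.push(Math.floor(Math.random() * 1000))
    }
    return { message: 'Hello world', array }
}

// e.g. objects is built by calling buildExample once per size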

And my .proto is like this:

syntax = "proto3";

service ExampleService {
    rpc GetExample (Size) returns (Example) {}
}

message Size {
    int32 size = 1;
}

message Example {
    string message = 1;
    repeated int32 array = 2;
}

Also, each run of the application measures only one call, so I don't have to create a loop to compute the average and I don't have to deal with accumulating measured times across callbacks. Instead I run the application 10 times and calculate the average.

REST server:

const express = require('express')
const app = express()
app.get('/:size', (req, res) => {
    const size = req.params.size
    res.status(200).send(objects[size]) // objects: pre-generated payloads from the auxiliary class
})
app.listen(8080)

REST client:

const axios = require('axios')
const start = performance.now()
const response = await axios.get(`http://localhost:8080/${size}`)
const end = performance.now()

gRPC server:

getExample: (call, callback) => {
    // objects.objects holds the pre-generated payloads from the auxiliary class
    callback(null, objects.objects[call.request.size])
}
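
For context, this handler is registered on a server wired up roughly like this (a sketch using @grpc/grpc-js and @grpc/proto-loader; the proto file name and port are placeholders):

const grpc = require('@grpc/grpc-js')
const protoLoader = require('@grpc/proto-loader')

// Load the .proto shown above ('example.proto' is a placeholder name)
const packageDefinition = protoLoader.loadSync('example.proto')
const proto = grpc.loadPackageDefinition(packageDefinition)

const server = new grpc.Server()
server.addService(proto.ExampleService.service, {
    getExample: (call, callback) => {
        callback(null, objects.objects[call.request.size])
    }
})

server.bindAsync('localhost:50051', grpc.ServerCredentials.createInsecure(), () => {
    server.start() // a no-op on recent @grpc/grpc-js versions, required on older ones
})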

And gRPC client:

const start = performance.now()
client.getExample({ size: size }, (error, response) => {
    // stop the timer inside the callback, once the response has been received and decoded
    const end = performance.now()
})
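
The client is created from the same proto definition; promisifying the unary call is one way to measure without nesting the timing inside a callback (again a sketch; file name and port are placeholders):

const util = require('util')
const grpc = require('@grpc/grpc-js')
const protoLoader = require('@grpc/proto-loader')
const { performance } = require('perf_hooks')

const proto = grpc.loadPackageDefinition(protoLoader.loadSync('example.proto'))
const client = new proto.ExampleService('localhost:50051', grpc.credentials.createInsecure())

// bind() keeps the client as `this` when the promisified function is called
const getExample = util.promisify(client.getExample).bind(client)

async function measure(size) {
    const start = performance.now()
    const response = await getExample({ size: size })
    const end = performance.now()
    console.log(`size ${size}: ${end - start} ms (${response.array.length} elements)`)
}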

To make this more efficient I have tried:

  1. Compressing the data like this:
let server = new grpc.Server({
    'grpc.default_compression_level': 3, // 0 = None, 1 = Low, 2 = Medium, 3 = High
});
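
For reference, the compression algorithm can also be pinned explicitly with a channel argument (the values below are the C-core constants: 0 = none, 1 = deflate, 2 = gzip); whether both options are honoured depends on the gRPC package and version:

let server = new grpc.Server({
    'grpc.default_compression_algorithm': 2, // gzip
    'grpc.default_compression_level': 3
});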

I know I could use streaming to get the data and iterate over the array, but I want to compare the "same call" in both approaches.

And the difference is huge.

Another thing I've noticed is that the REST times are more "linear": the difference between runs is small, but with gRPC one call sending 2 MB can take 220 ms and the next one 500 ms.

Here is the final comparison; as you can see, the difference is considerable.

Data:

Size (MB)   REST (ms)            gRPC (ms)
0.125       37.98976998329162    35.5489800453186
0.25        40.03781998157501    46.077759981155396
0.5         51.35283002853394    59.37109994888306
1           63.4725800037384     166.7616500457128
2           95.76031665007274    394.2442199707031
5           261.9365399837494    804.1371199131012
20          713.1867599964141    5492.330539941788

(Chart of the data above: REST vs gRPC response time by payload size.)

But then I thought... maybe the array field can't be decoded efficiently, or maybe integers are just not heavy for JSON... I don't know, so I decided to try sending a string instead, a very large string.

So my proto file now looks like this:

syntax = "proto3";

service ExampleService {
    rpc GetExample (Size) returns (Example) {}
}

message Size {
    int32 size = 1;
}

message Example {
    string message = 1;
    string array = 2;
}

Now the object sent looks like this:

{
  message: "Hello world",
  array: "text to reach the desired MB"
}

And the results are very different: now gRPC is much more efficient.

Data:

Size (MB)   REST (ms)            gRPC (ms)
0.125       30.672580003738403   25.028959941864013
0.25        33.568540048599246   25.366739988327026
0.5         37.19938006401062    27.539460039138795
1           46.4020166794459     28.798949996630352
2           57.50188330809275    35.45066670576731
5           107.39933327833812   48.90079998970032
20          313.4138665994008    136.4138500293096

(Chart of the data above: REST vs gRPC response time by payload size.)

And the question: why is sending an integer array not as efficient as sending a string? Is it the way protobuf encodes/decodes arrays? Is sending repeated values inefficient? Is it related to the language (JS)?


Solution

The reason gRPC -- well, really protobufs -- doesn't scale well in your example is that every entry of your repeated field results in protobuf needing to decode a separate value, and there is per-element overhead related to that. You can see more details about the encoding of repeated fields in the protobuf encoding documentation. You're using proto3, so at least you don't need to specify the [packed=true] option, although that helps somewhat if you're on proto2.
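
For illustration, in proto2 the packed encoding has to be requested explicitly, while proto3 already packs repeated scalar numeric fields by default:

// proto2 only: without this option every element is written as its own tagged field;
// proto3 packs repeated scalar numeric fields like this automatically.
repeated int32 array = 2 [packed = true];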

The reason switching to a string or bytes field speeds it up so much is that there is only a constant decoding cost for this field which doesn't scale with the amount of data that's encoded in the field (not sure about JS though, which might need to create a copy of the data, but clearly that is still much faster than actually parsing the data). Just make sure your protocol defines what format / endianness the data in the field is :-)
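
As a rough sketch of what that could look like here (assuming the field is changed to bytes array = 2 and the protocol is fixed to little-endian 32-bit integers; this is not code from the question):

// Sender: pack the int32 array into a single bytes field (little-endian)
function packInt32Array(numbers) {
    const buf = Buffer.alloc(numbers.length * 4)
    numbers.forEach((n, i) => buf.writeInt32LE(n, i * 4))
    return buf
}

// Receiver: decode the bytes field back into numbers
function unpackInt32Array(buf) {
    const numbers = []
    for (let offset = 0; offset < buf.length; offset += 4) {
        numbers.push(buf.readInt32LE(offset))
    }
    return numbers
}

// e.g. callback(null, { message: 'Hello world', array: packInt32Array(numbers) })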

Answering your question at a higher level, sending multiple megabytes in a single API call is usually not an amazing idea anyway -- it ties up a thread on both the server and client for a long time which forces you to use multithreading or async code to get reasonable performance. (Admittedly might be less of an issue since you are used to writing async stuff on Node, but there's still only so many CPUs to burn on the server.)

Depending on what you're actually trying to do, a common pattern can be to write the data to a file in a shared storage system (S3, etc.) and pass the filename to the other service, which can then download it when it's actually needed.
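
A rough sketch of that pattern with the AWS SDK (the bucket name, key scheme and the use of @aws-sdk/client-s3 are assumptions, not something from your setup):

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3')

const s3 = new S3Client({}) // region/credentials come from the environment

async function publishLargePayload(payload) {
    const key = `payloads/${Date.now()}.json` // placeholder key scheme
    await s3.send(new PutObjectCommand({
        Bucket: 'my-example-bucket', // placeholder bucket
        Key: key,
        Body: JSON.stringify(payload)
    }))
    // Send only this small reference over gRPC; the consumer downloads the file when needed
    return key
}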