I was doing some testing and found that the maximum document size limits of Cosmos DB seem inconsistent with the documentation: https://learn.microsoft.com/en-us/azure/cosmos-db/concepts-limits#per-item-limits
Core (SQL) API:

< 2Mb
Writes succeed, as stated in the official documentation (link above).

> 2Mb
RequestEntityTooLarge error occurred: Microsoft.Azure.Cosmos.CosmosException : Response status code does not indicate success: RequestEntityTooLarge (413); Substatus: 0; ActivityId: c1977df8-ec39-40b9-bd69-6e6a40ff6c00; Reason: (Message: {"Errors":["Request size is too large"]}

Conclusion: Results match the documentation
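For reference, the Core (SQL) API side of this test can be reproduced with a loop along these lines. This is only a minimal sketch; the database, container, partition key, and payload sizes here are illustrative placeholders, not the ones from my actual test:

using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

class SqlApiSizeTest
{
    static async Task Main()
    {
        // Illustrative names: database "TestDb", container "Items" partitioned on /pk.
        var client = new CosmosClient(Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));
        var container = client.GetContainer("TestDb", "Items");

        foreach (var sizeMb in new[] { 1, 3 })
        {
            // Build a document whose serialized JSON is roughly sizeMb megabytes.
            var item = new
            {
                id = Guid.NewGuid().ToString(),
                pk = "size-test",
                payload = new string('x', sizeMb * 1024 * 1024)
            };

            try
            {
                await container.CreateItemAsync(item, new PartitionKey("size-test"));
                Console.WriteLine($"~{sizeMb}Mb write succeeded");
            }
            catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.RequestEntityTooLarge)
            {
                // The 413 shown above for documents over the limit.
                Console.WriteLine($"~{sizeMb}Mb write rejected with {(int)ex.StatusCode}");
            }
        }
    }
}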
API for MongoDB:

< 2Mb
Writes succeed.

> 2Mb
Writes can still succeed (I was able to store a document larger than 4Mb, verified below), and the documentation states differently (or I am missing something); other writes above the limit failed with:
ERROR: MongoDB.Driver.MongoWriteException: A write operation resulted in an error. WriteError: { Category : "Uncategorized", Code : 16, Message : "Error=16, Details='Response status code does not indicate success: RequestEntityTooLarge (413); Substatus: 0; ActivityId: 8f20b261-e1c5-4ca9-b4e6-6cbc5352ce7e; Reason: (Message: {"Errors":["Request size is too large"]}

Conclusion: Results do not match the documentation
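The API for MongoDB side can be exercised the same way with the official driver. Again a hedged sketch with made-up names and sizes, not my exact test code:

using System;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

class MongoApiSizeTest
{
    static async Task Main()
    {
        // Illustrative names: database "TestDb", collection "Items".
        var client = new MongoClient(Environment.GetEnvironmentVariable("MONGO_CONNECTION_STRING"));
        var collection = client.GetDatabase("TestDb").GetCollection<BsonDocument>("Items");

        foreach (var sizeMb in new[] { 1, 3, 5 })
        {
            var doc = new BsonDocument { { "payload", new string('x', sizeMb * 1024 * 1024) } };

            try
            {
                await collection.InsertOneAsync(doc);
                Console.WriteLine($"~{sizeMb}Mb insert succeeded, _id: {doc["_id"]}");
            }
            catch (MongoWriteException ex)
            {
                // Surfaces as the WriteError (Code 16) wrapping the 413 shown above.
                Console.WriteLine($"~{sizeMb}Mb insert failed: {ex.WriteError?.Message}");
            }
        }
    }
}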
The number of bytes was calculated in the following way:
// Uses Newtonsoft.Json and System.Text; measures the UTF-8 size of the serialized item
var itemStr = JsonConvert.SerializeObject(item);
var bytes = Encoding.UTF8.GetBytes(itemStr);
Console.WriteLine($"total num of bytes: {bytes.Length}");
Regarding the successful write of a large item through the API for MongoDB, I also verified via the mongo shell that the stored document is larger than 4Mb:
Object.bsonsize(db.Items.findOne({_id:ObjectId("61dec458316798c759091aef")}))
Any help is much appreciated, thanks!
Cosmos DB's API for MongoDB has a binary storage format that compresses the data. The amount of compression depends on the shape of the data within your documents: documents with deeper hierarchies tend to compress more than those that are flatter. As a result, you may be able to store documents whose uncompressed size exceeds the documented 2MB limit.
While it is possible to store more than 2MB of uncompressed data with Cosmos DB's API for MongoDB, I would not recommend relying on it, because you cannot know how well a document will compress before it is inserted.
I should also point out that, in general, you will get better performance in terms of cost and latency with a larger number of smaller documents than with fewer large documents (this applies to native MongoDB as well). Keep that in mind as you model your data for your application.
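To make that concrete, here is a purely hypothetical sketch (the collection and property names are made up) of splitting a large embedded array into many small documents that share a key, so each individual document stays small:

using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Driver;

// Hypothetical model: one small document per order line instead of a single
// large order document that embeds every line.
public class OrderLine
{
    public string Id { get; set; }
    public string OrderId { get; set; }
    public int LineNumber { get; set; }
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

public class OrderLineStore
{
    private readonly IMongoCollection<OrderLine> _lines;

    public OrderLineStore(IMongoDatabase db) => _lines = db.GetCollection<OrderLine>("OrderLines");

    // Each write is a small document, far below the per-document size limit.
    public Task AddLineAsync(OrderLine line) => _lines.InsertOneAsync(line);

    // Reassemble the large logical order only when it is actually needed.
    public Task<List<OrderLine>> GetOrderAsync(string orderId) =>
        _lines.Find(l => l.OrderId == orderId).ToListAsync();
}

Queried together by OrderId, the lines still behave like one logical order, but each individual read and write stays cheap.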