I have an application that uses this data model:
objects: {
  0: { name: 'foo', items: [...] },
  1: { name: 'bar', items: [...] },
  ...
  999: { name: 'z', items: [...] }
}
At first I didn't think it could grow fast, but now some documents have hundreds of entries and I'm being forced to switch my account to the Blaze plan.
I'm trying to see if there is a better approach to this model. Currently I'm using logic like this to modify some entries "without modifying the whole structure":
await setDoc(
  doc(getFirestore(), `/user/${user.uid}`),
  {
    objects: {
      [id]: {
        items: arrayUnion(...) // or arrayRemove
      }
    }
  },
  { merge: true }
)
// Or when I want to remove an object
await setDoc(
  doc(getFirestore(), `/user/${user.uid}`),
  {
    objects: {
      [id]: deleteField()
    }
  },
  { merge: true }
)
// Etc...
Let's say objects is huge (thousands of entries): is arrayUnion (or arrayRemove) clever enough to avoid updating the whole field? My understanding of the Google Cloud back-end is very limited; by "updating the whole field" I mean: does the internal reference to the data get rewritten (similar to this.objects = {...} in JS), and does that have any implications for billing, or will the cost be the same no matter the number of properties in objects?
If the answer is yes, could switching to a document-per-object architecture potentially reduce the billing as the number of entries grows?
Thanks for your help
Firestore bills on document reads, document writes, outgoing bandwidth, and storage used. For more on this, see the documentation on understanding Cloud Firestore billing, and this pricing example.
There is no charge for incoming bandwidth, so the size of a write operation has no impact on the cost of that operation. Of course, the size of the resulting documents does contribute to your storage cost.
The common alternative to consider is whether it wouldn't be better to store each of these objects in a document of its own. If price is your deciding factor, keep the price calculator handy to determine the impact of such a change.
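To make the trade-off concrete, here is a minimal sketch in plain JavaScript (no Firebase SDK, and the function name billableOps is made up for illustration). It rests on one assumption taken from the pricing model above: Firestore bills per document read and per document write, regardless of document size.

```javascript
// Compare billable operations for the two models:
//   "single-doc":  all objects live in one document's `objects` map
//   "per-object":  each object is its own document (e.g. a subcollection)
function billableOps(model, { objectsTouched, objectsNeeded }) {
  if (model === "single-doc") {
    return {
      // Updating any number of entries is still one write to the one document.
      writes: objectsTouched > 0 ? 1 : 0,
      // Reading even a single entry means fetching the whole document.
      reads: objectsNeeded > 0 ? 1 : 0,
    };
  }
  // per-object: every object you touch or fetch is billed individually.
  return { writes: objectsTouched, reads: objectsNeeded };
}

// Updating 1 entry: both models cost 1 write.
console.log(billableOps("single-doc", { objectsTouched: 1, objectsNeeded: 0 }));
console.log(billableOps("per-object", { objectsTouched: 1, objectsNeeded: 0 }));

// Listing all 1000 entries: 1 read vs 1000 reads.
console.log(billableOps("single-doc", { objectsTouched: 0, objectsNeeded: 1000 }));
console.log(billableOps("per-object", { objectsTouched: 0, objectsNeeded: 1000 }));
```

So the per-object model is cheaper on writes only in the sense that each write stays small and the document never hits the 1 MiB size limit; if you routinely read the whole collection, the single-document model costs far fewer reads. That is why the answer above points you at the price calculator rather than a blanket recommendation.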