Tags: firebase, google-cloud-platform, google-cloud-functions, memorycache

How do I lock a critical section in a Google Cloud Function under concurrency?


I want high concurrency for my Cloud Function, so I'm using Cloud Functions v2. But in one of the steps, only one process should do the work while the other processes wait for it.

Let's say that I have below logic for my function

if (notGenerated()) {
    generateSomething()
}
step2()
step3()
...

If it's not generated yet and 2 or more requests come in at the same time, how can I block so that only one process/request performs the generation?

Conventionally, I would use Redis as a memory cache to make sure that only one process can go through; the other processes could wait or retry.
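To illustrate, the conventional Redis approach boils down to an atomic "set if not exists, with TTL" (Redis `SET key token NX PX ttl`). The sketch below is hypothetical: it uses a synchronous in-memory stand-in with the same semantics so the logic is self-contained; with the real `redis` v4 client the acquire step would instead be `await client.set(key, token, { NX: true, PX: ttlMs })`.

```javascript
// In-memory stand-in with Redis SET NX PX semantics, for illustration only.
function inMemoryLockStore() {
  const data = new Map();
  return {
    setNx(key, token, ttlMs) {
      const entry = data.get(key);
      if (entry && entry.expires > Date.now()) return false; // already locked
      data.set(key, { token, expires: Date.now() + ttlMs });
      return true;
    },
    del(key, token) {
      const entry = data.get(key);
      if (entry && entry.token === token) data.delete(key); // only the owner unlocks
    },
  };
}

// Hypothetical helper: run criticalSection only if we win the lock.
// With real Redis this function (and the section) would be async/await.
function withLock(store, key, ttlMs, criticalSection) {
  const token = Math.random().toString(36).slice(2); // identifies this holder
  if (!store.setNx(key, token, ttlMs)) return false; // someone else holds it
  try {
    criticalSection(); // e.g. generateSomething()
    return true;
  } finally {
    store.del(key, token); // release only our own lock
  }
}
```

Losing callers get `false` back and can wait and retry, as described above.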

But to implement the same approach for my Cloud Function, I would need Memorystore for Redis and a Serverless VPC Access connector, which is not cheap. In particular, Memorystore bills for all 24 hours of the day, while I might only use it for 3-4 hours.

How do you use an in-memory cache in Cloud Functions v2? Could I use something like node-cache there? Do concurrent requests in Cloud Functions v2 share the same memory? And when Cloud Functions scales my function out, do the instances still share the same memory?
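For context on what per-instance memory can and cannot do: within a single Cloud Functions v2 instance, concurrent requests do share module scope, so a module-level variable can dedupe the generation step for that one instance. Separate instances each have their own memory, so this is not a global lock. A minimal sketch (`ensureGeneratedOnce` and `generate` are hypothetical names):

```javascript
// Lives as long as this function instance does; concurrent requests handled
// by the same instance all see the same variable.
let generationPromise = null;

function ensureGeneratedOnce(generate) {
  if (!generationPromise) {
    // First caller on this instance starts the work...
    generationPromise = Promise.resolve().then(generate);
  }
  // ...and every concurrent caller awaits the same promise.
  return generationPromise;
}
```

This gives you the node-cache-style behavior within one instance only; coordinating across scaled-out instances still needs an external store.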


Solution

  • AFAIK your Cloud Functions don't need to share, and probably will not share, the same memory or, in general, the same underlying physical hardware.

    More importantly, in my opinion, it is advisable never to think of your functions in this way: each is an isolated compute unit that should perform its job using only its own resources.

    If sharing resources is not an option, you could try implementing some type of distributed locking/mutex mechanism.

    I agree with you that Redis is very well suited to this purpose, although to avoid the costs associated with Memorystore you could try alternative approaches.

    I am not sure it will fit your requirements, but extrapolating the idea of filesystem-based file locking, you could try using, for example, a GCS bucket and creating "lock" blobs on demand. Unfortunately, there is no absolute guarantee that checking for the existence of the lock blob and then creating it will happen atomically, so a race condition could occur between those two steps. It may still be a workable solution, although, even in my own eyes, not an ideal one.

    Although I understand your requirements, please consider modifying your functions to avoid this type of interdependency; in a certain way it runs against their concept and the serverless approach. Try conceiving of them as idempotent services instead.
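If you do go down the GCS route, the "lock blob" idea above could be sketched as below. This is a hypothetical sketch assuming the Node.js @google-cloud/storage client; bucket and blob names are placeholders. Using the `ifGenerationMatch: 0` precondition makes the create atomic on the server side (it succeeds only if the object does not yet exist), which mitigates the check-then-create race mentioned above:

```javascript
async function tryAcquireGcsLock(bucketName, lockName) {
  const { Storage } = require('@google-cloud/storage');
  const lockFile = new Storage().bucket(bucketName).file(lockName);
  try {
    // Fails with HTTP 412 (Precondition Failed) if the blob already exists.
    await lockFile.save('locked', { preconditionOpts: { ifGenerationMatch: 0 } });
    return true; // we created the blob, so we hold the lock
  } catch (err) {
    if (err.code === 412) return false; // another caller got there first
    throw err;
  }
}

async function releaseGcsLock(bucketName, lockName) {
  const { Storage } = require('@google-cloud/storage');
  // Deleting the blob releases the lock; do this in a finally block, and
  // consider a bucket lifecycle rule to expire stale locks from crashed callers.
  await new Storage()
    .bucket(bucketName)
    .file(lockName)
    .delete({ ignoreNotFound: true });
}
```

Callers that get `false` back would retry with a delay, keeping in mind the caveats above about this not being an ideal mechanism.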