I'm trying to set up a serverless database on Cloud Run using a PostgreSQL image (or any other RDBMS image). I want to know whether it's possible to mount a Google Cloud Storage (GCS) bucket as a volume inside a Cloud Run container using gcsfuse.
Can someone provide guidance or share their experiences on whether it's possible to achieve this setup? If so, what steps or configurations are necessary to mount a GCS bucket as a volume within a Cloud Run container using gcsfuse? Are there any limitations or considerations I should be aware of?
Any insights or suggestions would be greatly appreciated. Thank you!
I have explored the documentation and various online resources, but I haven't found a clear answer to this question. I understand that Cloud Run provides a stateless and serverless environment for running containers, but I need to use a persistent storage solution like GCS for my database.
Here are the specific details of my setup:
- Cloud Run: I have already deployed my containerized application on Cloud Run, and it's working well with the default ephemeral storage provided by Cloud Run.
- PostgreSQL/RDBMS: I have chosen a PostgreSQL or any other RDBMS image as my database engine. I need to store the database files in a reliable and scalable storage solution like GCS.
- gcsfuse: I came across gcsfuse, which seems to be a tool for mounting GCS buckets as file systems. I wonder if I can use gcsfuse to mount the GCS bucket as a volume within my Cloud Run container, so that the PostgreSQL/RDBMS image can access and write data to that mounted volume.
Your question is an example of *is it possible* versus *is it practical*.
Yes, you can mount a Cloud Storage bucket as a network file system onto a Cloud Run service (gcsfuse relies on FUSE, which requires Cloud Run's second-generation execution environment). Applications can then reference objects within that file system, including database files.
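As a minimal sketch of that mount step, assuming gcsfuse is already installed in the container image, and with the mount point `/mnt/gcs` and the `BUCKET` environment variable as placeholder names of my own:

```python
import os
import subprocess

MOUNT_POINT = "/mnt/gcs"           # hypothetical mount directory
BUCKET = os.environ["BUCKET"]      # bucket name injected via env var (assumption)

os.makedirs(MOUNT_POINT, exist_ok=True)

# gcsfuse must already be baked into the container image.
# --implicit-dirs makes nested "directories" visible even when the bucket
# has no zero-byte placeholder objects for them.
subprocess.run(["gcsfuse", "--implicit-dirs", BUCKET, MOUNT_POINT], check=True)

# ...start the real application only after the mount has succeeded...
```

The Cloud Run service account also needs read/write access to the bucket; everything here other than the gcsfuse invocation itself is a placeholder to adapt to your setup.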
However, Cloud Storage objects are immutable. Each time you modify the database, or the database performs any update, the modified objects must be rewritten in their entirety. There are no partial updates: a read-modify-write of an object is not possible without re-uploading the entire object.
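To make that cost concrete, here is an illustrative sketch using the google-cloud-storage Python client (the bucket and object names are made up): changing even a few bytes forces a download and a full re-upload of the object.

```python
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()
blob = client.bucket("my-db-bucket").blob("pgdata/table.dat")  # hypothetical names

# There is no partial-write API: to change even one byte, the client has to
# download the object, patch it locally, and re-upload it in its entirety.
data = bytearray(blob.download_as_bytes())
data[0:4] = b"\x00\x01\x02\x03"        # "modify" a few bytes locally
blob.upload_from_string(bytes(data))   # rewrites the whole object
```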
One of the primary goals of a database is to get data onto disk rapidly and reliably: first the journal (write-ahead log) is updated, then the database tables. A key metric for a fast database is therefore IOPS (I/O operations per second).
Google Cloud Storage offers a very fast sequential read rate and a fairly good random read rate, but write performance will kill your use case. As a rough illustration, if each object rewrite takes on the order of 100 ms, a single writer tops out around ten writes per second, while even a modest local SSD delivers tens of thousands of IOPS. A database backed by Cloud Storage would therefore perform poorly, because every write forces whole objects to be rewritten over and over again.
I have implemented read-only databases on Cloud Run. For that use case I use SQLite; the databases are small (< 10 MB), and only reads (queries) are performed.
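For illustration, the startup pattern looks roughly like the sketch below; the file paths and table name are hypothetical. The database file is copied out of the mounted bucket once at startup and then queried from local storage, opened read-only.

```python
import shutil
import sqlite3

# Copy the small database from the gcsfuse mount into Cloud Run's in-memory
# file system once at startup, so queries never touch the network afterwards.
shutil.copy("/mnt/gcs/lookup.db", "/tmp/lookup.db")   # hypothetical paths

# Open with a read-only URI so any accidental write fails immediately.
conn = sqlite3.connect("file:/tmp/lookup.db?mode=ro", uri=True)
print(conn.execute("SELECT count(*) FROM items").fetchone())  # hypothetical table
```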