Being new to GCP, I have a question about which architecture to use in a particular case.
Suppose I have a Django website running on App Engine (flexible environment?). Users upload images to the website. I would like to first use the Google Vision API to perform label detection on the images, and then feed the labels and images to a VM with a GPU attached (all running on Google Cloud) for an additional, computationally costly job on the images. Once the VM finishes the job, the resulting images are made available for the user to download, or are sent to the user by email.
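For concreteness, here is a minimal sketch of the upload + labeling step I have in mind (bucket and object names are placeholders; it assumes the google-cloud-vision and google-cloud-storage client libraries and configured credentials):

```python
from google.cloud import storage, vision

storage_client = storage.Client()
vision_client = vision.ImageAnnotatorClient()

def handle_upload(image_bytes: bytes, filename: str) -> list[str]:
    # Run label detection on the raw image bytes.
    response = vision_client.label_detection(image=vision.Image(content=image_bytes))
    labels = [label.description for label in response.label_annotations]

    # Store the image and its labels in a bucket for the GPU VM to pick up
    # ("my-upload-bucket" and the "incoming/" prefix are placeholders).
    bucket = storage_client.bucket("my-upload-bucket")
    bucket.blob(f"incoming/{filename}").upload_from_string(
        image_bytes, content_type="image/jpeg")
    bucket.blob(f"incoming/{filename}.labels.txt").upload_from_string(
        "\n".join(labels), content_type="text/plain")
    return labels
```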
Because a relatively large amount of time is spent on the VM+GPU side, and because the website will be accessed by users globally, I would like to reduce the overall latency and pick the most efficient architecture for the job.
My first thought was to:

1. Have App Engine store the uploaded image in a Cloud Storage bucket.
2. Call the Vision API and store the resulting labels in the bucket as well.
3. Notify the VM, which pulls the image and labels from the bucket, runs the GPU job, and writes the result back to the bucket.
4. Have App Engine pick the result up and offer it to the user for download (or email it).
Now, that's a lot of bouncing back and forth between a storage bucket and App Engine plus the VM on either side. I was wondering if there is a 1) quicker and 2) more resource-efficient way to achieve the same goal.
If your website is accessed globally, App Engine is the wrong choice: an App Engine application can be deployed in only one region, not globally.
For the frontend, I recommend using Cloud Run instead (or VMs, but I don't like VMs), deployed in several regions, with a global HTTPS load balancer in front of them. The load balancer routes each user to the closest region, which reduces the physical network latency.
The files must also be stored close to the users, so use Cloud Storage buckets in the different regions.
And finally, duplicate the VM/GPU infrastructure in each region (it can be costly, but it's the best way to reduce latency).
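For example, a per-region naming convention (the names below are purely hypothetical) lets each regional frontend write to a same-region bucket, so an image uploaded in europe-west1 is also stored and processed in europe-west1:

```python
import os

# Sketch of per-region wiring: each regional deployment sets a REGION
# environment variable and derives same-region resource names from it
# (the naming convention here is hypothetical).
REGION = os.environ.get("REGION", "us-central1")
UPLOAD_BUCKET = f"image-uploads-{REGION}"  # regional Cloud Storage bucket
PUBSUB_TOPIC = f"image-events-{REGION}"    # topic the regional VM listens on
```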
Your process is the right one. I recommend exposing an API on your VM so that it can be notified when a file is ready. You can enable Pub/Sub notifications on Cloud Storage to sink the events into Pub/Sub, and then create a push subscription that invokes your VM directly (instead of a Cloud Function).
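As a sketch of that push endpoint on the VM (topic, subscription, and endpoint names are placeholders; Flask is assumed, and process_image is a hypothetical stand-in for your GPU job):

```python
# Sketch of a Pub/Sub push endpoint running on the GPU VM.
# One-time setup (placeholder names):
#   gsutil notification create -t image-events -f json gs://my-upload-bucket
#   gcloud pubsub subscriptions create image-sub --topic=image-events \
#       --push-endpoint=https://vm.example.com/pubsub
from flask import Flask, request

app = Flask(__name__)

def process_image(bucket: str, name: str) -> None:
    # Hypothetical placeholder: pull the image and labels from the
    # bucket, run the GPU job, and write the result back.
    ...

@app.route("/pubsub", methods=["POST"])
def pubsub_push():
    envelope = request.get_json()
    # Cloud Storage notifications carry the object info in the message attributes.
    attrs = envelope["message"]["attributes"]
    if attrs.get("eventType") == "OBJECT_FINALIZE":  # a new object was created
        process_image(attrs["bucketId"], attrs["objectId"])
    # Any 2xx response acknowledges the message so Pub/Sub stops retrying.
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```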
That way, you remove a component and perform all your processing on the VM side.
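For the final delivery step, one option is to hand the user a time-limited signed URL to the processed image instead of streaming the bytes through the frontend; a minimal sketch, assuming signing-capable credentials (bucket and object names are placeholders):

```python
from datetime import timedelta

from google.cloud import storage

def result_download_url(bucket_name: str, object_name: str) -> str:
    """Return a time-limited download link for a processed image."""
    blob = storage.Client().bucket(bucket_name).blob(object_name)
    # V4 signed URLs require credentials that can sign (e.g. a service
    # account key) and are valid for at most 7 days; 1 hour here.
    return blob.generate_signed_url(
        version="v4", expiration=timedelta(hours=1), method="GET")
```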