Tags: google-cloud-platform, google-compute-engine, google-cloud-tpu

How to utilize multiple Google Cloud TPUs to train a single model


I have been allocated multiple Google Cloud TPUs in the us-central1-f zone. All of them are v2-8 nodes.

How can I utilize all my TPUs to train a single model?

The us-central1-f zone doesn't support TPU Pods, so Pods don't seem to be the solution. Even if Pods were available, the number of v2-8 nodes I have doesn't match any of the Pod slice sizes (16, 64, 128, 256), so I couldn't use them all in a single Pod anyway.


Solution

  • I believe you cannot easily do this. Training a single model across multiple TPUs requires access to a zone with TPU Pods. Without Pods, the practical alternatives are the obvious ones: train the same model on each TPU with different hyperparameters as a manual grid search (see the launcher sketch below), or train several weak learners independently and combine their predictions into an ensemble (see the averaging sketch after that).
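
For the grid-search option, here is a minimal sketch of how one training process could attach itself to a single v2-8 node via TensorFlow's `TPUClusterResolver` and `TPUStrategy`. You would run one copy of this script per TPU. The TPU node names, learning-rate values, model, and synthetic data are all hypothetical stand-ins, not something from the question:

```python
"""Hypothetical sketch: run one copy per TPU node, each with a different
--tpu name and --lr value, as a manual hyperparameter grid search."""
import argparse
import tensorflow as tf

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--tpu", required=True)  # e.g. "my-tpu-0" (hypothetical node name)
    parser.add_argument("--lr", type=float, default=1e-3)
    args = parser.parse_args()

    # Attach this process to exactly one v2-8 node.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=args.tpu)
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Synthetic stand-in data so the sketch is self-contained.
    x = tf.random.normal((1024, 32))
    y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
    # drop_remainder=True keeps batch shapes static, which TPUs require.
    ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(128, drop_remainder=True)

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
            tf.keras.layers.Dense(10),
        ])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(args.lr),
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
            metrics=["accuracy"],
        )
    model.fit(ds, epochs=3)
    model.save(f"model-{args.tpu}-lr{args.lr}")  # one SavedModel per run

if __name__ == "__main__":
    main()
```

You would then launch one process per node, e.g. `python train.py --tpu my-tpu-0 --lr 1e-3` and `python train.py --tpu my-tpu-1 --lr 3e-4`, and keep whichever configuration validates best.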
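
For the weak-learner option, combining the independently trained models is just ensembling. A minimal sketch, assuming each model was saved by the hypothetical script above, is to average their logits at prediction time:

```python
"""Hypothetical sketch: average predictions of independently trained models."""
import tensorflow as tf

# Paths produced by the hypothetical training script; adjust to your runs.
MODEL_DIRS = ["model-my-tpu-0-lr0.001", "model-my-tpu-1-lr0.0003"]

models = [tf.keras.models.load_model(d) for d in MODEL_DIRS]

def ensemble_predict(x):
    # Average the members' logits; argmax gives the ensemble's class.
    logits = tf.add_n([m(x, training=False) for m in models]) / len(models)
    return tf.argmax(logits, axis=-1)

# Example: classify a batch of random inputs with the ensemble.
sample = tf.random.normal((8, 32))
print(ensemble_predict(sample).numpy())
```

Note this still never trains a single model across TPUs; it only pools the outputs of separately trained ones, which is the best you can do without Pod slices.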