The Google PaLM API (import google.generativeai as palm) works fine when I use a 2 CPU Colab runtime.
However, when I switch to an 8 CPU, 51 GB Colab runtime (via Colab Pro+), a simple PaLM API request fails with:
FailedPrecondition: 400 User location is not supported for the API use.
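For reference, this is a minimal sketch of the kind of request that fails (the model name and prompt are placeholders, and this assumes the legacy text-generation endpoint of google.generativeai, not necessarily exactly what my notebook calls):

    import google.generativeai as palm

    # Placeholder key; the real notebook loads it separately.
    palm.configure(api_key="YOUR_PALM_API_KEY")

    # Legacy PaLM text endpoint; the model name is an example.
    response = palm.generate_text(
        model="models/text-bison-001",
        prompt="Say hello",
    )
    print(response.result)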
What I tried:
When I switch back to the 2 CPU Colab runtime, everything works. It is only the high-RAM 51 GB runtime that fails.
I do restart the runtime after pip install google-generativeai, so that is not the issue; the same sequence works on the 2 CPU Colab runtime.
Note that some days it works and some days it seemingly randomly does not. I want to use the 8 CPU Colab runtime in order to speed up my notebook.
Here is a sample colab with the minimal code to reproduce the error: https://colab.research.google.com/drive/1fm4CZjj_axPssIOkBRi4V6JxX9q1Zt4p?usp=sharing
Note: to run the above Colab, you'll need to supply your own PaLM API key.
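One way to supply the key without hard-coding it in the notebook (just a sketch; the linked Colab may handle the key differently):

    from getpass import getpass
    import google.generativeai as palm

    # Prompt for the key interactively so it never appears in the notebook itself.
    palm.configure(api_key=getpass("Paste your PaLM API key: "))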
If you have this same issue, you can +1 the bug I created in the Google issue tracker here.
Use !curl ipinfo.io to check where your Colab instance is located.
In a quick test, 2 of the 3 "high RAM" instances I created landed in Belgium, and Belgium is not in the list of allowed regions for the PaLM API.
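If you prefer to check from Python rather than the shell, a small sketch along these lines (using the public ipinfo.io endpoint; the country check is illustrative, not an official allow-list) can warn you before you hit the 400:

    import requests

    # Query ipinfo.io for the public IP metadata of this Colab VM.
    info = requests.get("https://ipinfo.io/json", timeout=10).json()
    country = info.get("country")
    print(f"Colab instance appears to be in: {country} ({info.get('region')}, {info.get('city')})")

    # Illustrative check only: compare against the regions you know the PaLM API supports.
    if country == "BE":
        print("Warning: this instance is in Belgium, which is not a supported region for the PaLM API.")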