I have two services deployed to a Kubernetes cluster running locally on my machine: a NextJS app and Keycloak.
The NextJS app is basic: it uses NextAuth and the NextAuth Keycloak provider to allow the user to log in and view a protected page.
I have configured Keycloak with a realm and a web client. This all works fine.
Both services have ingress configurations:

- auth.starter.local
- starter.local

These DNS names are manually configured in my /etc/hosts file to resolve to 127.0.0.1.
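For reference, the /etc/hosts entries look something like this:

127.0.0.1   starter.local
127.0.0.1   auth.starter.local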
When I attempt to log in via the NextJS app (auth.starter.local), I get an error. This happens because the backend of the NextJS app is attempting to retrieve the .well-known configuration from Keycloak at its DNS name, which it cannot resolve inside the Kubernetes cluster.
How do I get this to work? Can I configure Kubernetes CoreDNS somehow so that these domain names resolve from services inside the cluster?
First things first: the /etc/hosts file on your local machine is neither shared with nor cascaded to the pods/containers inside your Kubernetes cluster. Each of them has its own /etc/hosts file. To add custom entries to those /etc/hosts files inside a pod/container, you need to use HostAliases.
You have to map the auth.starter.local custom domain, inside the pod/container of your NextJS app, to the cluster IP of the Keycloak service. Run kubectl describe on the service to obtain its IP.
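For example, assuming the Keycloak service is named keycloak and lives in the default namespace (adjust both to match your setup):

kubectl describe service keycloak                              # look for the "IP:" line
kubectl get service keycloak -o jsonpath='{.spec.clusterIP}'   # or print only the cluster IP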
Next, add the hostAliases setting to the deployment of your NextJS app (under .spec.template.spec):
hostAliases:
- ip: "x.x.x.x" # replace with the service's cluster IP
  hostnames:
  - "auth.starter.local"
Repeat in reverse if your Keycloak needs to connect to your NextJS app.
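On the Keycloak deployment, that reverse mapping would look something like this (assuming starter.local is the NextJS ingress hostname):

hostAliases:
- ip: "x.x.x.x" # replace with the NextJS service's cluster IP
  hostnames:
  - "starter.local"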