kubernetes, amazon-vpc, amazon-eks

DNS problem on AWS EKS when running in private subnets


I have an EKS cluster setup in a VPC. The worker nodes are launched in private subnets. I can successfully deploy pods and services.

However, I'm not able to perform DNS resolution from within the pods. (It works fine on the worker nodes, outside the container.)

Troubleshooting with https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ gives the following from nslookup (it times out after a minute or so):

Server:    172.20.0.10
Address 1: 172.20.0.10

nslookup: can't resolve 'kubernetes.default'
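
The test itself is a one-off busybox pod along these lines (the pod name is arbitrary; busybox:1.28 is commonly pinned because nslookup misbehaves in some newer busybox images):

    kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default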

When I launch the cluster in an all-public VPC, I don't have this problem. Am I missing any necessary steps for DNS resolution from within a private subnet?

Many thanks, Daniel


Solution

  • I feel like I have to give this a proper answer, because coming upon this question was the answer to 10 straight hours of debugging for me. As @Daniel said in his comment, the issue I found was my network ACL blocking outbound traffic on UDP port 53, which Kubernetes uses to resolve DNS records. (There's a sketch of the fix with the AWS CLI below this answer.)

    The process was especially confusing for me because one of my pods actually worked the entire time, since (I think?) it happened to be in the same availability zone as the Kubernetes DNS resolver. The kubectl check below shows where the DNS pods actually run.
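
    If it helps anyone, an egress rule for this can be added to a network ACL with the AWS CLI along the following lines. The ACL ID, rule number, and CIDR block are placeholders to adapt; protocol 17 is UDP, and you may want a matching rule for TCP 53 too, since DNS falls back to TCP for larger responses:

        # Allow outbound DNS (UDP 53). NACLs are stateless, so the
        # corresponding return traffic must be allowed inbound as well.
        aws ec2 create-network-acl-entry \
            --network-acl-id acl-0123456789abcdef0 \
            --egress \
            --rule-number 100 \
            --protocol 17 \
            --port-range From=53,To=53 \
            --cidr-block 0.0.0.0/0 \
            --rule-action allow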
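
    And to see which nodes (and therefore which subnets and availability zones) the cluster DNS pods landed on, this works on EKS, where CoreDNS still carries the k8s-app=kube-dns label:

        kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide

    The -o wide output includes each pod's node, which you can map back to a subnet and availability zone to see whether your pod happened to share one with the resolver.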