amazon-web-services, kubernetes, aws-fargate

EKS Fargate workloads unable to resolve DNS names


Why is a workload running on EKS Fargate unable to resolve DNS names, while the same workload can resolve them when run on an EC2 node? On Fargate, the workload can't resolve a Service running in the cluster, a DNS name in a private hosted zone, or a public DNS name.

/ $ nslookup my-api.my-namespace.svc.cluster.local
;; connection timed out; no servers could be reached

/ $ nslookup my-record.my-domain.com
;; connection timed out; no servers could be reached

/ $ nslookup www.google.co.uk
;; connection timed out; no servers could be reached

/ $

I've looked at the AWS Fargate Considerations, and I don't believe any of them are the issue.

Is anyone able to suggest why EKS Fargate is unable to resolve DNS names?


In case it's relevant, here is an example manifest I'm using to deploy a Job that will be scheduled on Fargate.

apiVersion: batch/v1
kind: Job
metadata:
  name: test-fargate
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 600
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: test-import-aixm
          image: my-container/image:latest
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]

Solution

  • A few things to check when a Fargate pod can't resolve DNS names:

    1. Check where your CoreDNS pods are running and verify that traffic is allowed between CoreDNS and the Fargate pod. For example, if CoreDNS is running on an EC2 worker node, make sure communication is allowed between the worker node security group and the Fargate pod's security group, i.e. the cluster security group (by default, the cluster security group is attached to Fargate pods). A sketch of this check is shown after this list.

    2. Check whether the Fargate pod can resolve names directly against the kube-dns Service (the CoreDNS Service) cluster IP, and also against each individual CoreDNS pod IP. The helper commands after this list show how to find these IPs.

    nslookup my-record.my-domain.com <kube-dns-service-cluster-ip>

    nslookup my-record.my-domain.com <coredns-pod-ip>
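    To find the IPs for the lookups in step 2, query the standard k8s-app=kube-dns label, which the EKS CoreDNS deployment uses; the -o wide output on the pods also shows which nodes CoreDNS is running on, which is useful for step 1:

    kubectl get svc kube-dns -n kube-system
    kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide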
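    For the security group check in step 1, an AWS CLI sketch like the following can help confirm that the cluster security group allows DNS traffic (UDP and TCP port 53); my-cluster and the security group ID are placeholders for your own values:

    # Find the cluster security group that is attached to Fargate pods by default
    aws eks describe-cluster --name my-cluster \
      --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text

    # Inspect its inbound rules; DNS needs UDP and TCP port 53 allowed
    aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
      --query 'SecurityGroups[0].IpPermissions'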