kubernetes, kvm, coredns

kubernetes pod cannot resolve local hostnames but can resolve external ones like google.com


I am trying out Kubernetes and seem to have hit a bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as wrkr1 or wrkr2 (machine hostnames on my network), but I can successfully resolve hostnames such as google.com or stackoverflow.com.

My cluster is a basic setup with one master and 2 worker nodes.

What works from within the pod:

What works from the node hosting the pod:

Note: the pod CIDR is completely different from the IP range used on the LAN.

The node contains a hosts file with an entry corresponding to wrkr1's IP address (I've checked that the node is able to resolve the hostname without it too, but I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry).

Kubernetes Version: 1.19.14

Ubuntu Version: 18.04 LTS

I need help understanding whether this is normal behavior, and what can be done if I want the pod to be able to resolve hostnames on the local LAN as well.


Solution

  • What happens

    I need help understanding whether this is normal behavior

    This is normal behaviour. There is no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster. It simply does not know what happens on your host, and in particular it cannot see /etc/hosts, because pods have no access to that file.

    I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry

    This is where things get tricky. There are four available DNS policies, which are applied per pod. We will take a look at the two that are usually used:

    • Default: the pod inherits the name resolution configuration from the node it runs on.

    • ClusterFirst: DNS queries that do not match the cluster domain suffix are forwarded to an upstream nameserver inherited from the node, while cluster-internal names are resolved by the cluster DNS.

    The trickiest part is this (from the same link above):

    Note: "Default" is not the default DNS policy. If dnsPolicy is not explicitly specified, then "ClusterFirst" is used.

    That means that all pods that do not have a DNS policy set will run with ClusterFirst, and they won't be able to see /etc/resolv.conf on the host. I tried changing this to Default and indeed, the pod could resolve everything the host can; however, internal (cluster) name resolution stopped working, so it's not an option.
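
    To see this in practice, you can look at /etc/resolv.conf inside a pod running with ClusterFirst. On a kubeadm cluster with default settings it typically looks something like the following (the service IP 10.96.0.10 and the search domains shown are the usual defaults; yours may differ):

    $ kubectl exec -it <some-pod> -- cat /etc/resolv.conf
    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5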

    For example, the coredns deployment runs with the Default dnsPolicy, which allows coredns to resolve names the host can resolve.
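
    For illustration, this is roughly how a dnsPolicy is set explicitly on a pod; the pod name and image below are arbitrary examples, not something taken from the cluster above:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-test                 # example name
    spec:
      dnsPolicy: ClusterFirst        # or Default / ClusterFirstWithHostNet / None
      containers:
      - name: shell
        image: busybox:1.28
        command: ["sleep", "3600"]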

    How this can be resolved

    1. Add the local domain to coreDNS

    This requires adding an A record per host. Here's a part of the edited coredns ConfigMap:

    This line should be within the .:53 { block:

    file /etc/coredns/local.record local
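
    For context, on a kubeadm-installed cluster the surrounding .:53 { block of the Corefile usually looks roughly like the sketch below (your Corefile may differ depending on how the cluster was set up); the file plugin line is the only addition:

    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        # added line:
        file /etc/coredns/local.record local
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }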
    

    This part goes right after the block above ends (the SOA information was taken from the example; it doesn't make any difference here):

    local.record: |
      local.            IN      SOA     sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
      wrkr1.            IN      A      172.10.10.10
      wrkr2.            IN      A      172.11.11.11
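
    Both snippets live in the coredns ConfigMap in the kube-system namespace (this is the default name on a kubeadm cluster; adjust if yours differs), which can be edited in place:

    $ kubectl edit configmap coredns -n kube-system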
    

    Then the coredns deployment should be edited to include this file:

    $ kubectl edit deploy coredns -n kube-system
          volumes:
          - configMap:
              defaultMode: 420
              items:
              - key: Corefile
                path: Corefile
              - key: local.record # 1st line to add
                path: local.record # 2nd line to add
              name: coredns
    

    And restart the coredns deployment:

    $ kubectl rollout restart deploy coredns -n kube-system
    

    Just in case, check that the coredns pods are running and ready:

    $ kubectl get pods -A | grep coredns
    kube-system   coredns-6ddbbfd76-mk2wv              1/1     Running            0                4h46m
    kube-system   coredns-6ddbbfd76-ngrmq              1/1     Running            0                4h46m
    

    If everything's done correctly, newly created pods will now be able to resolve hosts by their names. Please find an example in the coredns documentation.
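
    As a quick check, name resolution can be tested from a throwaway pod (busybox:1.28 is just a commonly used image for DNS debugging; any image with nslookup will do):

    $ kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- nslookup wrkr1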

    2. Set up a DNS server in the network

    While avahi looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from coredns to avahi, but it is possible to forward to a proper DNS server in the network, and this way everything will be resolved.
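
    If such a DNS server exists, coredns can forward lookups to it. A hedged sketch of a separate server block in the Corefile, assuming the server lives at 172.10.10.1 and serves a local zone called lan (both are made-up examples, substitute your own):

    lan:53 {
        errors
        cache 30
        forward . 172.10.10.1
    }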

    3. Deploy avahi to the kubernetes cluster

    There's a ready image with avahi here. If it's deployed into the cluster with dnsPolicy set to ClusterFirstWithHostNet and, most importantly, hostNetwork: true, it will be able to use the host's network adapter to discover all available hosts within the network.
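
    A minimal sketch of the relevant fields in such a deployment (the image reference is a placeholder for the linked image; everything else is an illustrative example):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: avahi
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: avahi
      template:
        metadata:
          labels:
            app: avahi
        spec:
          hostNetwork: true                    # use the host network adapter for mDNS discovery
          dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution while on the host network
          containers:
          - name: avahi
            image: <avahi-image>               # placeholder, use the image linked above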
