I'm new to Kubernetes and I'm having issues with DNS name resolution in my k3s cluster, running on a PC with an ARM architecture.
I've tried to debug it as the docs (https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/) suggest.
I installed k3s as follows:
sudo curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -
and applied the manifest for the debugging pod:
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
I've checked that the pod is running:
kubectl get pods dnsutils
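For reference, the docs show output roughly like this for a healthy pod (the age will differ, of course):
NAME       READY   STATUS    RESTARTS   AGE
dnsutils   1/1     Running   0          <age>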
Then I tried to run:
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
and expected something like this:
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: kubernetes.default
Address 1: 10.0.0.1
But I got:
;; connection timed out; no servers could be reached
command terminated with exit code 1
Any thoughts on how to debug this? It seems I'm messing something up...
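(In case someone asks: the same docs also suggest checking the pod's resolv.conf as a first step:
kubectl exec -ti dnsutils -- cat /etc/resolv.conf
It should point at the kube-dns ClusterIP, which in a default k3s install is nameserver 10.43.0.10, plus the cluster search domains.)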
UPD: I tried to debug as Rancher suggests (https://docs.ranchermanager.rancher.io/v2.5/troubleshooting/other-troubleshooting-tips/dns):
kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup kubernetes.default
And here is the output:
If you don't see a command prompt, try pressing enter.
Address 1: 10.43.0.10
nslookup: can't resolve 'kubernetes.default'
pod "busybox" deleted
pod default/busybox terminated (Error)
So I tried the next step:
for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
and the logs are:
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
.:53
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[INFO] plugin/reload: Running configuration SHA512 = b941b080e5322f6519009bb49349462c7ddb6317425b0f6a83e5451175b720703949e3f3b454a24e77f3ffe57fd5e9c6130e528a5a1dd00d9000e4afd6c1108d
CoreDNS-1.9.1
linux/arm64, go1.17.8, 4b597f8
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:39581->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:52272->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:41480->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:52059->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:46821->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:35222->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:38013->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:42222->8.8.8.8:53: i/o timeout
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:50612->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 4288512074117887106.1437335397389171032. HINFO: read udp 10.42.0.5:50341->8.8.8.8:53: i/o timeout
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
...
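Those timeouts look like CoreDNS itself can't reach its upstream (8.8.8.8) from the pod network. A quick way to confirm that, querying the upstream directly from the debug pod and bypassing CoreDNS entirely:
kubectl exec -i -t dnsutils -- nslookup kubernetes.io 8.8.8.8
If that also times out, the problem is pod egress in general, not the CoreDNS config.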
UPD2:
kubectl -n kube-system get cm coredns -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    import /etc/coredns/custom/*.server
  NodeHosts: |
    192.168.0.103 ubuntu
kind: ConfigMap
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4yQwWrzMBCEX0Xs2fEf20nsX9BDybH02lMva2kdq1Z2g6SkBJN3L8IUCiVtbyNGOzvfzoAn90IhOmHQcKmgAIsJQc+wl0CD8wQaSr1t1PzKSilFIUiIix4JfRoXHQjtdZHTuafAlCgq488xUSi9wK2AybEFDXvhwR2e8QQFHCnh50ZkloTJCcf8lP6NTIqUyuCkNJiSp9LJP5czoLjryztTWB0uE2iYmvjFuVSFenJsHx6tFf41gvGY6Y0Eshz/9D2e0OSZfIJVvMZExwzusSf/I9SIcQQNvaG6a+r/XVdV7abBddPtsN9W66Eedi0N7aberM22zaHf6t0tcPsIAAD//8Ix+PfoAQAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: coredns
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2022-09-23T09:06:05Z"
  labels:
    objectset.rio.cattle.io/hash: bce283298811743a0386ab510f2f67ef74240c57
  name: coredns
  namespace: kube-system
  resourceVersion: "315"
  uid: 33a8ccf6-511f-49c4-9752-424859d67d70
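Note the forward . /etc/resolv.conf line: CoreDNS forwards non-cluster queries to whatever resolvers its /etc/resolv.conf lists, which for the CoreDNS pod is taken from the node. That's presumably where the 8.8.8.8 in the logs above comes from; it can be checked on the host with:
cat /etc/resolv.conf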
UPD3:
kubectl -n kube-system get po -o wide
Output:
NAME                                      READY   STATUS      RESTARTS      AGE   IP          NODE     NOMINATED NODE   READINESS GATES
coredns-b96499967-sct84                   1/1     Running     1 (17h ago)   20h   10.42.0.6   ubuntu   <none>           <none>
helm-install-traefik-crd-wrh5b            0/1     Completed   0             20h   10.42.0.3   ubuntu   <none>           <none>
helm-install-traefik-wx7s2                0/1     Completed   1             20h   10.42.0.5   ubuntu   <none>           <none>
local-path-provisioner-7b7dc8d6f5-qxjvs   1/1     Running     1 (17h ago)   20h   10.42.0.3   ubuntu   <none>           <none>
metrics-server-668d979685-ngbmr           1/1     Running     1 (17h ago)   20h   10.42.0.5   ubuntu   <none>           <none>
svclb-traefik-67fcd721-mz6sd              2/2     Running     2 (17h ago)   20h   10.42.0.2   ubuntu   <none>           <none>
traefik-7cd4fcff68-j74gd                  1/1     Running     1 (17h ago)   20h   10.42.0.4   ubuntu   <none>           <none>
kubectl -n kube-system get svc
Output:
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
kube-dns         ClusterIP      10.43.0.10     <none>          53/UDP,53/TCP,9153/TCP       20h
metrics-server   ClusterIP      10.43.178.64   <none>          443/TCP                      20h
traefik          LoadBalancer   10.43.36.41    192.168.0.103   80:30268/TCP,443:30293/TCP   20h
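One more check from the Kubernetes debugging doc, to verify that the kube-dns service actually has endpoints:
kubectl get endpoints kube-dns --namespace=kube-system
It should list the CoreDNS pod IP (10.42.0.6 above) on port 53; if it comes back empty, the service isn't wired to the pod at all.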
Actually, I found a workaround. When installing k3s, one should use the flag --flannel-backend=ipsec:
curl -sfL https://get.k3s.io | sh -s - server --write-kubeconfig-mode 644 --flannel-backend=ipsec
By default it uses --flannel-backend=vxlan
I also tried --flannel-backend=host-gw, but it's --flannel-backend=ipsec that works well for me.
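For anyone hitting the same thing on an existing cluster: as far as I know the simplest way to switch backends is to uninstall and reinstall, using the uninstall script that the standard installer drops on the node:
/usr/local/bin/k3s-uninstall.sh
curl -sfL https://get.k3s.io | sh -s - server --write-kubeconfig-mode 644 --flannel-backend=ipsec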