I set up a k8s cluster using kubeadm init on a bare-metal server. I noticed the kube-apiserver is exposing its interface on a private IP:
# kubectl get pods kube-apiserver-cluster1 -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-apiserver-cluster1 1/1 Running 0 6d22h 10.11.1.99 cluster1 <none> <none>
Here's the kube config inside the cluster:
# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.11.1.99:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
This is fine for using kubectl locally on the cluster node, but I want to expose the kube-apiserver on an additional interface using the public IP address. Ultimately I'm trying to configure kubectl on a laptop to access the cluster remotely. How can I expose the kube-apiserver on an external IP address?
Execute the following command:
$ kubeadm init --pod-network-cidr=<ip-range> --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=<PRIVATE_IP>[,<PUBLIC_IP>,...]
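The --apiserver-cert-extra-sans flag is the important part here: it adds the listed IPs (and/or DNS names) to the API server's serving certificate, so a remote kubectl connection to the public IP passes TLS verification. A concrete sketch, assuming Flannel's default pod CIDR and a placeholder public IP (substitute your own private and public addresses):
$ kubeadm init --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=0.0.0.0 \
    --apiserver-cert-extra-sans=10.11.1.99,203.0.113.10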
Don't forget to replace the private IP with the public IP in the server: field of your .kube/config if you use kubectl remotely.
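On the laptop that could look something like this (a sketch, assuming the standard kubeadm admin kubeconfig at /etc/kubernetes/admin.conf and the placeholder public IP 203.0.113.10):
# Copy the admin kubeconfig from the master node to the laptop
$ scp root@203.0.113.10:/etc/kubernetes/admin.conf ~/.kube/config
# Point the "kubernetes" cluster entry at the public IP instead of the private one
$ kubectl config set-cluster kubernetes --server=https://203.0.113.10:6443
# Verify the remote connection works
$ kubectl get nodes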
You can also forward the private IP of the master node to the public IP of the master node on the worker nodes. Run this command on the worker node before running kubeadm join:
$ sudo iptables -t nat -A OUTPUT -d <Private IP of master node> -j DNAT --to-destination <Public IP of master node>
But keep in mind that you'll also have to forward the workers' private IPs the same way on the master node to make everything work correctly (if they suffer from the same issue of being behind a cloud provider's NAT).
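A sketch of that reverse mapping, run on the master node once per worker (the worker addresses are placeholders):
$ sudo iptables -t nat -A OUTPUT -d <Private IP of worker node> -j DNAT --to-destination <Public IP of worker node>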
See more: apiserver-ip, kube-apiserver.