Given a scenario where I have two Kubernetes clusters, one hosted on AWS EKS and the other on another cloud provider, I would like to manage the EKS cluster from the other cloud provider. What's the easiest way to authenticate such that I can do this?
Would it be reasonable to generate a kubeconfig on the cluster on the other cloud provider, where I embed the result from aws eks get-token (or something like that)? Or are these tokens not persistent?
Any help or guidance would be appreciated!
I believe the most correct approach is the one described in Create a kubeconfig for Amazon EKS.
Yes, you create a kubeconfig that uses aws eks get-token. Note that the tokens themselves are not persistent: they are short-lived, so the kubeconfig does not embed a static token. Instead, it configures an exec credential plugin that runs aws eks get-token each time kubectl needs credentials, so a fresh token is fetched automatically. Once created, add the new config to the KUBECONFIG environment variable, e.g.
export KUBECONFIG=$KUBECONFIG:~/.kube/config-aws
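For reference, a kubeconfig following that guide looks roughly like the sketch below. The cluster name, server endpoint, and region are placeholders you would replace with your own values (obtainable from aws eks describe-cluster); the key part is the exec block, which tells kubectl to call aws eks get-token on every invocation rather than storing a token:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    # Placeholders: copy the real endpoint and CA data from
    # `aws eks describe-cluster --name my-cluster`
    server: https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com
    certificate-authority-data: <base64-encoded-CA-cert>
  name: my-eks-cluster
contexts:
- context:
    cluster: my-eks-cluster
    user: aws-user
  name: my-eks-cluster
current-context: my-eks-cluster
users:
- name: aws-user
  user:
    exec:
      # kubectl runs this command to obtain a short-lived token
      # each time it authenticates to the EKS API server
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster
```

Because the token is regenerated on each call, the machine on the other cloud provider only needs valid AWS credentials (environment variables, a shared credentials file, or similar) available to the aws CLI.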
Or, for convenience, you can add it to your ~/.bash_profile:
echo 'export KUBECONFIG=$KUBECONFIG:~/.kube/config-aws' >> ~/.bash_profile
For detailed steps, please refer to the URL provided above.