Tags: kubectl, amazon-eks, eksctl

Connecting to existing EKS cluster using kubectl or eksctl


I have created a Kubernetes cluster on EKS using eksctl create cluster. I am able to access everything, which is great.

However, my colleague has created another cluster, and I am wondering how I can generate or get a kubeconfig so that I can point kubectl to the cluster that my colleague has created.


Solution

  • Accessing a private-only API server

    If you have disabled public access for your cluster's Kubernetes API server endpoint, you can only access the API server from within your VPC or a connected network, for example from a bastion host or a Cloud9 IDE running inside that VPC.
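
    To confirm how a cluster's endpoint is actually configured (for example, whether your colleague disabled public access), you can inspect it with the AWS CLI. A minimal sketch, assuming the cluster is named second-cluster and lives in us-east-1 (both are placeholders, substitute your own values):

    # Show whether the public and private API server endpoints are enabled
    aws eks describe-cluster --name second-cluster --region us-east-1 \
        --query "cluster.resourcesVpcConfig.{public:endpointPublicAccess,private:endpointPrivateAccess}" \
        --output table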

    Isolation through multiple clusters

    A possible alternative is to use multiple single-tenant Amazon EKS clusters. With this strategy, each tenant can use its own Kubernetes cluster, either within a shared AWS account or in dedicated accounts within an Organization for large enterprises. Once the clusters are deployed, you might want an overview of all of them so you can monitor each tenant, make sure each one is running the latest version of the EKS control plane, and operate at scale. Rancher is a popular open-source tool used to manage multiple Kubernetes clusters; make sure to check out this article on the Open Source blog for details on how to deploy and use it.
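
    For a quick overview of which clusters exist in a Region and which control plane version each one runs, the AWS CLI and eksctl already cover the basics. A minimal sketch, assuming the us-east-1 Region and a cluster named first-cluster (placeholders):

    # List all EKS clusters in the Region (either command works)
    eksctl get cluster --region us-east-1
    aws eks list-clusters --region us-east-1

    # Check the Kubernetes control plane version of a specific cluster
    aws eks describe-cluster --name first-cluster --region us-east-1 \
        --query "cluster.version" --output text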

    Clusters in the same VPC

    If your colleague's cluster is in the same VPC, I advise you to use AWS App Mesh. App Mesh is a service mesh that lets you control and monitor services spanning two clusters deployed in the same VPC.

    Architecture: (diagram from the howto-k8s-cross-cluster walkthrough, not reproduced here)

    Prerequisites

    In order to successfully carry out the base deployment, note that this walkthrough assumes you are operating in the us-east-1 Region and that both clusters are up and running.

    Update the KUBECONFIG environment variable for each cluster according to the eksctl output. Run the following in the respective terminal tabs:

    export KUBECONFIG=~/.kube/eksctl/clusters/first-cluster 
    
    export KUBECONFIG=~/.kube/eksctl/clusters/second-cluster
    

    You have now set up the two clusters and pointed kubectl at the respective clusters.
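
    If you do not have a local kubeconfig for one of the clusters because somebody else created it (as in the original question), you can generate one yourself, provided your IAM user or role has been granted access to that cluster (for example via its aws-auth ConfigMap). A minimal sketch, assuming the colleague's cluster is named second-cluster in us-east-1 (both placeholders):

    # Option 1: let the AWS CLI write/merge the kubeconfig entry
    aws eks update-kubeconfig --name second-cluster --region us-east-1 \
        --kubeconfig ~/.kube/eksctl/clusters/second-cluster

    # Option 2: let eksctl write it instead
    eksctl utils write-kubeconfig --cluster=second-cluster --region us-east-1 \
        --kubeconfig ~/.kube/eksctl/clusters/second-cluster

    # Verify that kubectl is talking to the right cluster
    export KUBECONFIG=~/.kube/eksctl/clusters/second-cluster
    kubectl config current-context
    kubectl get nodes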

    Now it is time to deploy the App Mesh custom components.

    To automatically inject the App Mesh components and proxies on pod creation, you need to create some custom resources on the clusters. Use Helm for that: install Tiller on both clusters and then run the following commands on both clusters.

    Download the App Mesh examples repo

    >> git clone https://github.com/aws/aws-app-mesh-examples.git
    >> cd aws-app-mesh-examples/walkthroughs/howto-k8s-cross-cluster
    
    

    Install Helm

    >> brew install kubernetes-helm
    

    Install tiller

    Using Helm (v2) requires a server-side component called Tiller to be installed on the cluster. Follow the instructions in the Helm documentation to install Tiller on both clusters; a minimal sketch is shown below.
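
    A minimal sketch of a Tiller installation with a cluster-admin service account, assuming Helm v2 (Helm v3 has no Tiller and does not need this step); run it against each cluster:

    # Create a service account for Tiller and give it cluster-admin rights
    kubectl -n kube-system create serviceaccount tiller
    kubectl create clusterrolebinding tiller \
        --clusterrole=cluster-admin \
        --serviceaccount=kube-system:tiller

    # Install Tiller into the cluster using that service account
    helm init --service-account tiller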

    Verify tiller installation

    >> kubectl get po -n kube-system | grep -i tiller
    tiller-deploy-6d65d78679-whwzn 1/1 Running 0 5h35m
    
    

    Install App Mesh Components

    Run the following set of commands to install the App Mesh controller and Injector components.

    helm repo add eks https://aws.github.io/eks-charts
    kubectl create ns appmesh-system
    kubectl apply -f https://raw.githubusercontent.com/aws/eks-charts/master/stable/appmesh-controller/crds/crds.yaml
    helm upgrade -i appmesh-controller eks/appmesh-controller --namespace appmesh-system
    helm upgrade -i appmesh-inject eks/appmesh-inject --namespace appmesh-system --set mesh.create=true --set mesh.name=global
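
    Before moving on, it is worth checking that the controller and injector came up cleanly on both clusters. A minimal sketch (pod names and counts will differ in your output):

    # The appmesh-controller and appmesh-inject pods should be Running
    kubectl get pods -n appmesh-system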
    
    

    You are now ready to deploy the example front-end and colorapp applications to their respective clusters, along with the mesh, which will span both clusters.

    Deploy services and mesh constructs

    1. You should be in the walkthroughs/howto-k8s-cross-cluster folder; all commands will be run from this location.

    2. Your AWS account ID:

    export AWS_ACCOUNT_ID=<your_account_id>
    
    3. Region, e.g., us-east-1:
    export AWS_DEFAULT_REGION=us-east-1
    
    4. The ENVOY_IMAGE environment variable is set to the App Mesh Envoy image; see Envoy:
    export ENVOY_IMAGE=...
    
    5. The VPC_ID environment variable is set to the VPC where the Kubernetes pods are launched. The VPC will be used to
      set up a private DNS namespace in AWS using the create-private-dns-namespace API. To find the VPC of an EKS cluster,
      you can use aws eks describe-cluster (a sketch is shown after this list). See the walkthrough for why an AWS Cloud Map
      PrivateDnsNamespace is required.
    export VPC_ID=...
    
    6. The CLUSTER environment variables are set to the two cluster names from your kubeconfig setup:
    export CLUSTER1=first-cluster
    export CLUSTER2=second-cluster
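
    If you prefer to derive these values instead of looking them up manually, the account ID and VPC ID can be pulled straight from the AWS CLI. A minimal sketch, assuming the first cluster is named first-cluster (placeholder) and the us-east-1 Region:

    # Account ID of the currently authenticated principal
    export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

    # VPC in which the first cluster's pods are launched
    export VPC_ID=$(aws eks describe-cluster --name first-cluster --region us-east-1 \
        --query "cluster.resourcesVpcConfig.vpcId" --output text)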
    

    Deploy

    ./deploy.sh
    

    Finally, remember to verify the deployment; a minimal check is sketched below. You can find more information here: app-mesh-eks.
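
    A minimal sketch of such a check, assuming the App Mesh CRDs installed earlier are in place (resource names and namespaces depend on what deploy.sh created in your environment):

    # The mesh and the generated App Mesh resources should exist
    kubectl get meshes
    kubectl get virtualnodes,virtualservices --all-namespaces

    # Application pods should be Running, each with an injected Envoy sidecar
    kubectl get pods --all-namespaces -o wide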