Tags: amazon-web-services, kubernetes, amazon-elb, kops, bare-metal-server

Problem running an AWS-cloud Kubernetes project in a local 2-node Kubernetes cluster


I am trying to run this project, which is configured for AWS cloud deployment, in my local cluster environment. In the cloud deployment, it is set up using the Kubernetes CLI (kubectl) and kops (a tool to create and manage Kubernetes clusters on public cloud infrastructure) on AWS.

In short, my question is: is it not possible to run this application in my local 2-node Kubernetes cluster for testing (as I do not have AWS cloud)?

More details:

In the cloud setup, there is a cluster-creation script; after the cluster is created, we get the URLs of two AWS ELBs, which the client can use to interact with the two services exposed through the cloud load balancer (one backend service + one frontend service).

My problem: I am trying to run the project on a 2-node Kubernetes cluster hosted in my lab. I have set up the cluster using kubeadm instead of the native AWS setup (kops + k8s) given in the GitHub link, and I have modified the script to remove the references to kops and AWS. In this local Kubernetes cluster, I use MetalLB as the load balancer for the services.

At the end of the script, instead of the two AWS ELB addresses that the client would use to interface with the system services in the AWS deployment, we get two public IPs of the physical nodes, xxx.xxx.80.72 and xxx.xxx.12.58, assigned to the two services by the MetalLB load balancer (from a pre-configured address pool).
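For reference, here is a minimal sketch of how the external address of such a LoadBalancer service can be read programmatically. It is not the project's original script: the helper, the service name (taken from the listing further down), and the use of .ip (MetalLB) instead of .hostname (AWS ELB) are assumptions on my part.

import subprocess

# Sketch: look up the external address assigned to a LoadBalancer service.
# With MetalLB the address is reported under .ip, whereas an AWS ELB
# appears under .hostname.
def get_external_ip(service_name):
    out = subprocess.check_output([
        'kubectl', 'get', 'svc', service_name,
        '-o', 'jsonpath={.status.loadBalancer.ingress[0].ip}',
    ])
    return out.decode().strip()

print(get_external_ip('routing-service'))   # e.g. xxx.xxx.80.72
print(get_external_ip('function-service'))  # e.g. xxx.xxx.12.58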

As expected, all the pods are in the Running state.

$ kubectl get all -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE   NOMINATED NODE   READINESS GATES
pod/benchmark-nodes-qccl6   4/4     Running   0          22h   xxx.xxx.12.58    srl1   <none>           <none>
pod/benchmark-nodes-s2rqj   4/4     Running   0          22h   xxx.xxx.80.72    srl2   <none>           <none>
pod/function-nodes-ct7jm    4/4     Running   17         22h   xxx.xxx.12.58    srl1   <none>           <none>
pod/function-nodes-d5r6w    4/4     Running   7          22h   xxx.xxx.80.72    srl2   <none>           <none>
pod/management-pod          1/1     Running   0          22h   192.168.120.66   srl1   <none>           <none>
pod/memory-nodes-7dhsv      1/1     Running   1          22h   xxx.xxx.80.72    srl2   <none>           <none>
pod/memory-nodes-v8s2c      1/1     Running   1          22h   xxx.xxx.12.58    srl1   <none>           <none>
pod/monitoring-pod          1/1     Running   1          22h   192.168.120.84   srl1   <none>           <none>
pod/routing-nodes-lc62q     1/1     Running   1          22h   xxx.xxx.80.72    srl2   <none>           <none>
pod/routing-nodes-xm8n2     1/1     Running   1          22h   xxx.xxx.12.58    srl1   <none>           <none>
pod/scheduler-nodes-495kj   1/1     Running   0          22h   xxx.xxx.80.72    srl2   <none>           <none>
pod/scheduler-nodes-pjb9w   1/1     Running   0          22h   xxx.xxx.12.58    srl1   <none>           <none>

$ kubectl get svc -A
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                                                                                    AGE   SELECTOR
service/function-service   LoadBalancer   10.108.79.97    xxx.xxx.12.58   5000:32427/TCP,5001:30516/TCP,5002:30830/TCP,5003:31430/TCP,5004:32448/TCP,5005:30177/TCP,5006:30892/TCP   22h   role=scheduler
service/kubernetes         ClusterIP      10.96.0.1       <none>          443/TCP                                                                                                    20d   <none>
service/routing-service    LoadBalancer   10.107.63.188   xxx.xxx.80.72   6450:31251/TCP,6451:31374/TCP,6452:30037/TCP,6453:32030/TCP                                                22h   role=routing
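A basic way to check whether MetalLB is actually forwarding traffic to those external IPs, independent of the application, is a plain TCP connect test like the sketch below. The addresses and ports are placeholders taken from the listing above, not real values.

import socket

# Placeholder endpoints copied from the service listing above; substitute
# the real external IPs reported by `kubectl get svc`.
ENDPOINTS = [
    ('xxx.xxx.12.58', 5000),   # function-service
    ('xxx.xxx.80.72', 6450),   # routing-service
]

for host, port in ENDPOINTS:
    try:
        # Plain TCP connect with a short timeout, closed immediately.
        with socket.create_connection((host, port), timeout=3):
            print('%s:%d reachable' % (host, port))
    except OSError as err:
        print('%s:%d NOT reachable: %s' % (host, port, err))

If this connect test fails, the problem is in the service/MetalLB layer rather than in the application itself.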

However, when I try to connect to the services from a client in the cluster, the connection fails. Execution always goes into the exception branch of the if-else condition in the _connect method (code given below):

At this point, can someone please give me some pointers on what might be the issue with connecting to these services in my bare-metal 2-node cluster?

def _connect(self):
    # self.context is a zmq.Context created elsewhere in the class;
    # self.service_addr is an address template filled in with CONNECT_PORT.
    sckt = self.context.socket(zmq.REQ)
    sckt.setsockopt(zmq.RCVTIMEO, 1000)  # give recv() up to 1 second
    sckt.connect(self.service_addr % CONNECT_PORT)

    sckt.send_string('')

    try:
        result = sckt.recv_string()
        return result
    except zmq.ZMQError as e:
        if e.errno == zmq.EAGAIN:
            # No reply arrived within RCVTIMEO.
            return None
        else:
            raise e
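To separate ZMQ behaviour from cluster networking, a bare-bones REQ/REP echo test like the following can be used: run the server part inside a pod (or on a node) backing one of the services, and run the client part from the client machine against the MetalLB external IP. This is only a minimal sketch; the port 5000 and the address are assumptions/placeholders, and none of it is part of the project.

import sys
import zmq

def echo_server(port=5000):
    # Run inside a pod (or on a node) that backs the service.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind('tcp://*:%d' % port)
    while True:
        msg = sock.recv_string()
        sock.send_string('echo: ' + msg)

def echo_client(addr='tcp://xxx.xxx.12.58:5000'):
    # Run from the client machine against the MetalLB external IP.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.setsockopt(zmq.RCVTIMEO, 1000)  # same 1 s timeout as _connect
    sock.connect(addr)
    sock.send_string('ping')
    try:
        print(sock.recv_string())
    except zmq.Again:
        print('timed out -- the request never reached the server')

if __name__ == '__main__':
    if len(sys.argv) > 1 and sys.argv[1] == 'server':
        echo_server()
    else:
        echo_client()

If the echo request times out against the external IP but succeeds against the pod IP directly, the issue is in the Service/MetalLB path rather than in the application code.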



Solution

  • Is it not possible to run this application in my local 2-node Kubernetes cluster for testing (as I do not have AWS cloud)?

    It is not a recommended approach and will not work as expected.

    By trying to run this setup in a non-AWS environment, you are not meeting the prerequisites:

    We assume you are running inside an EC2 linux VM on AWS, where you have Python3 installed (preferably Python3.6 or later -- we have not tested with earlier versions). AWS has default quotas on resources that can be allocated for accounts. The cluster to create in this doc will exceed the default vCPU limit (32) for a regular AWS account. Please make sure this limit is lifted before proceeding.

    For learning and testing purposes locally, it would be better for you to follow Creating a cluster with kubeadm:

    The kubeadm tool is good if you need:

    • A simple way for you to try out Kubernetes, possibly for the first time.
    • A way for existing users to automate setting up a cluster and test their application.
    • A building block in other ecosystem and/or installer tools with a larger scope.