google-kubernetes-engine, cass-operator

What does 2/2 mean in the output of kubectl get all -n cass-operator?


I have a 3-node Kubernetes cluster and I have set up Cassandra on it using Cass-Operator. I am following the instructions from here - https://github.com/datastax/cass-operator

What does the 2/2 mean in the output of the following command?

kubectl get all -n cass-operator
NAME                                READY   STATUS    RESTARTS   AGE
pod/cass-operator-78c6469c6-6qhsb   1/1     Running   0          139m
pod/cluster1-dc1-default-sts-0      2/2     Running   0          138m
pod/cluster1-dc1-default-sts-1      2/2     Running   0          138m
pod/cluster1-dc1-default-sts-2      2/2     Running   0          138m

Does it mean that there are 3 data centres, each running 2 Cassandra nodes? That would make sense, since my K8S cluster has only 3 nodes.

manuchadha25@cloudshell:~ (copper-frame-262317)$ gcloud compute instances list
NAME                                              ZONE            MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
gke-cassandra-cluster-default-pool-92d544da-6fq8  europe-west4-a  n1-standard-1               10.164.0.26  34.91.214.233  RUNNING
gke-cassandra-cluster-default-pool-92d544da-g0b5  europe-west4-a  n1-standard-1               10.164.0.25  34.91.101.218  RUNNING
gke-cassandra-cluster-default-pool-92d544da-l87v  europe-west4-a  n1-standard-1               10.164.0.27  34.91.86.10    RUNNING

Or is Cass-Operator running two containers per K8S node?


Solution

  • When you deploy an application, a single pod can have more than one container inside. If you check the Kubernetes Pod docs, you will find two types:

    Pods that run a single container.

    The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.

    Pods that run multiple containers that need to work together.

    A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service--one container serving files from a shared volume to the public, while a separate "sidecar" container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.

    You can find more information in those docs.

    An example of what a Pod configuration YAML with 2 containers looks like can be found here. In .spec.containers you can specify 2 or more containers; a minimal sketch follows below.
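    As a minimal sketch of such a manifest (the pod name, container names and images below are hypothetical, picked only to illustrate the shape of .spec.containers; this is not the manifest cass-operator generates), a two-container Pod can be applied like this:

    $ cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: two-container-demo        # hypothetical name, for illustration only
    spec:
      containers:
      - name: app                     # first container
        image: nginx
      - name: sidecar                 # second container in the same pod
        image: busybox
        command: ["sh", "-c", "sleep 3600"]   # keep the sidecar alive
    EOF

    Once both containers are up, kubectl get po two-container-demo reports READY 2/2, because both of the pod's containers are ready.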

    I've deployed those YAMLs.

    $ kubectl get po -n cass-operator
    NAME                             READY   STATUS    RESTARTS   AGE
    cass-operator-78c9999797-gb88g   1/1     Running   0          4m26s
    cluster1-dc1-default-sts-0       2/2     Running   0          4m12s
    cluster1-dc1-default-sts-1       2/2     Running   0          4m12s
    cluster1-dc1-default-sts-2       2/2     Running   0          4m12s
    

    Now describe the pod. In my example it's:

    $ kubectl describe po cluster1-dc1-default-sts-0 -n cass-operator
    

    And under Containers: you can find details like image, ports, state, mounts, etc.

    Containers:
      cassandra:
        Container ID:   docker://49b58eacc380da6c29928677e84082373d4330a91c29b29f3f3b021e43c21a38
        Image:          datastax/cassandra-mgmtapi-3_11_6:v0.1.5
        Image ID:       docker-pullable://datastax/cassandra-mgmtapi-3_11_6@sha256:aa7d6072607e60b1dfddd5877dcdf436660bacd31dd4aa6c8c2b85978c9fd170
       ....
      server-system-logger:
        Container ID:  docker://d0b572e767236e2baab7b67d5ad0fc6656b862fc4e463aa1836de80d34f608ea
        Image:         busybox
        Image ID:      docker-pullable://busybox@sha256:2131f09e4044327fd101ca1fd4043e6f3ad921ae7ee901e9142e6e36b354a907
        Port:          <none>
    

    So this pod runs 2 containers: cassandra and server-system-logger. That is exactly what the READY column reports - ready containers / total containers in the pod - so 2/2 means both containers are ready.
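
    If you only need the container names without the whole describe output, a jsonpath query is quicker (a sketch against the same pod; for this pod it prints cassandra server-system-logger):

    $ kubectl get pod cluster1-dc1-default-sts-0 -n cass-operator \
        -o jsonpath='{.spec.containers[*].name}'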

    What about when a pod shows 1/2?

    It means that only 1 of the 2 containers in this specific pod is ready. Container states are Waiting, Running and Terminated. You can find more information here.
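
    To check each container's readiness directly from the pod status, you can range over .status.containerStatuses (a sketch, assuming the same pod and namespace):

    $ kubectl get pod cluster1-dc1-default-sts-0 -n cass-operator \
        -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\tready="}{.ready}{"\n"}{end}'

    In a healthy 2/2 pod both containers print ready=true; in a 1/2 pod one of them prints ready=false.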

    Use case? You can check the logs of a specific container with the -c flag.

    $ kubectl logs cluster1-dc1-default-sts-0 -n cass-operator -c cassandra
    Starting Management API
    /docker-entrypoint.sh: line 74: [: missing `]'
    Running java -Xms128m -Xmx128m -jar /opt/mgmtapi/datastax-mgmtapi-server-0.1.0-SNAPSHOT.jar --cassandra-socket /tmp/cassandra.sock --host tcp://0.0.0.0:8080 --host file:///tmp/oss-mgmt.sock --explicit-start true --cassandra-home /var/lib/cassandra/
    INFO  [main] 2020-07-03 13:43:08,199 Cli.java:343 - Cassandra Version 3.11.6
    INFO  [main] 2020-07-03 13:43:08,709 ResteasyDeploymentImpl.java:551 - RESTEASY002225: Deploying javax.ws.rs.core.Application: class com.datastax.mgmtapi.ManagementApplication
    ...
    

    Or

    $ kubectl logs cluster1-dc1-default-sts-0 -n cass-operator -c server-system-logger
    INFO  [main] 2020-07-03 13:44:04,588 YamlConfigurationLoader.java:89 - Configuration location: file:/etc/cassandra/cassandra.yaml
    INFO  [main] 2020-07-03 13:44:06,137 Config.java:516 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=org.apache.cassandra.auth.PasswordAuthenticator; authorizer=org.apache.cassandra.auth.CassandraAuthorizer; auto_bootstrap=true; auto_snapshot=true;
    ...
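
    If you want the logs of both containers at once, kubectl logs also accepts the --all-containers flag (a sketch with the same pod):

    $ kubectl logs cluster1-dc1-default-sts-0 -n cass-operator --all-containers=true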
    

    You can also get this pod's YAML to verify. In this example:

    $ kubectl get po cluster1-dc1-default-sts-0 -n cass-operator -o yaml
    

    In addition, regarding your question:

    Or is Cassandra-operator running two containers per K8S Node?

    It's running two containers per pod, not per node. You can check which pod was scheduled to which node with:

    $ kubectl get pods -n cass-operator -o wide
    NAME                             READY   STATUS    RESTARTS   AGE   IP          NODE                                       NOMINATED NODE   READINESS GATES
    cass-operator-78c9999797-gb88g   1/1     Running   0          20m   10.44.1.4   gke-cluster-2-default-pool-5aa60336-n3hr   <none>           <none>
    cluster1-dc1-default-sts-0       2/2     Running   0          19m   10.44.1.5   gke-cluster-2-default-pool-5aa60336-n3hr   <none>           <none>
    cluster1-dc1-default-sts-1       2/2     Running   0          19m   10.44.2.3   gke-cluster-2-default-pool-5aa60336-dl2g   <none>           <none>
    cluster1-dc1-default-sts-2       2/2     Running   0          19m   10.44.0.9   gke-cluster-2-default-pool-5aa60336-m7ms   <none>           <none>
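
    To confirm at a glance that each Cassandra pod landed on a different node, you can count pods per node from the same wide output (a shell sketch; NODE is the 7th column of kubectl get pods -o wide):

    $ kubectl get pods -n cass-operator -o wide --no-headers | awk '{print $7}' | sort | uniq -c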