kubernetes · redis · redis-sentinel

Redis Sentinel HA on Kubernetes


I am trying to run 1 Redis master with 2 Redis replicas, watched by a 3-Sentinel quorum, on Kubernetes. I am very new to Kubernetes.

My initial plan was to have the master running in a pod tied to one Kubernetes Service (SVC) and the 2 replicas running in their own pods tied to another SVC. Finally, the 3 Sentinel pods would be tied to their own SVC. The replicas would be pointed at the master SVC (because without an SVC, the pod IP will change). The Sentinels would also be configured with the master and replica SVCs. But I'm not sure this is feasible, because when the master pod crashes, how would one of the replica pods move behind the master SVC and become the master? Is that possible?

The second approach I had was to wrap the Redis pods in a replication controller, and do the same for the Sentinels. However, I'm not sure how to make one of the pods the master and the others replicas with a replication controller.

Would any of the two approaches work? If not, is there a better design that I can adopt? Any leads would be appreciated.


Solution

  • You can deploy Redis Sentinel using the Helm package manager and the Redis Helm Chart.
    If you don't have Helm 3 installed yet, you can use this documentation to install it.

    I will provide a few explanations to illustrate how it works.


    First we need to get the values.yaml file from the Redis Helm Chart to customize our installation:

    $ wget https://raw.githubusercontent.com/bitnami/charts/master/bitnami/redis/values.yaml
    

    We can configure a lot of parameters in the values.yaml file, but for demonstration purposes I only enabled Sentinel and set the Redis password:
    NOTE: For a list of parameters that can be configured during installation, see the Redis Helm Chart Parameters documentation.

    # values.yaml
    
    global:
      redis:
        password: redispassword
    ...
    replica:
      replicaCount: 3
    ...
    sentinel:
      enabled: true
    ...
    

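    Before installing, make sure the Bitnami chart repository is registered with Helm (this is the standard Bitnami repository URL):

    $ helm repo add bitnami https://charts.bitnami.com/bitnami
    $ helm repo update
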
    Then we can deploy Redis using the configuration from the values.yaml file:
    NOTE: It will deploy a three-Pod cluster (one master and two slaves) managed by a StatefulSet, with a sentinel container running inside each Pod.

    $ helm install redis-sentinel bitnami/redis --values values.yaml
    

    Be sure to carefully read the NOTES section of the chart installation output. It contains a lot of useful information (e.g. how to connect to your database from outside the cluster).
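
    If you need them again later, Helm can re-print the release notes:

    $ helm get notes redis-sentinel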

    After installation, check the Redis StatefulSet, Pods and Services (the headless Service can be used for internal access):

    $ kubectl get pods -o wide
    NAME                    READY   STATUS    RESTARTS   AGE     IP
    redis-sentinel-node-0   2/2     Running   0          2m13s   10.4.2.21
    redis-sentinel-node-1   2/2     Running   0          86s     10.4.0.10
    redis-sentinel-node-2   2/2     Running   0          47s     10.4.1.10
    
    
    $ kubectl get sts
    NAME                  READY   AGE
    redis-sentinel-node   3/3     2m41s
    
    $ kubectl get svc
    NAME                      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)              AGE
    redis-sentinel            ClusterIP   10.8.15.252   <none>        6379/TCP,26379/TCP   2m
    redis-sentinel-headless   ClusterIP   None          <none>        6379/TCP,26379/TCP   2m
    

    As you can see, each redis-sentinel-node Pod contains the redis and sentinel containers:

    $ kubectl get pods redis-sentinel-node-0 -o jsonpath='{.spec.containers[*].name}'
    redis sentinel
    

    We can check the sentinel container logs to find out which redis-sentinel-node is the master:

    $ kubectl logs -f redis-sentinel-node-0 sentinel
    ...
    1:X 09 Jun 2021 09:52:01.017 # Configuration loaded
    1:X 09 Jun 2021 09:52:01.019 * monotonic clock: POSIX clock_gettime
    1:X 09 Jun 2021 09:52:01.019 * Running mode=sentinel, port=26379.
    1:X 09 Jun 2021 09:52:01.026 # Sentinel ID is 1bad9439401e44e749e2bf5868ad9ec7787e914e
    1:X 09 Jun 2021 09:52:01.026 # +monitor master mymaster 10.4.2.21 6379 quorum 2
    ...
    1:X 09 Jun 2021 09:53:21.429 * +slave slave 10.4.0.10:6379 10.4.0.10 6379 @ mymaster 10.4.2.21 6379
    1:X 09 Jun 2021 09:53:21.435 * +slave slave 10.4.1.10:6379 10.4.1.10 6379 @ mymaster 10.4.2.21 6379
    ...
    

    As you can see from the logs above, the redis-sentinel-node-0 Pod (10.4.2.21, matching the +monitor line) is the master, and the redis-sentinel-node-1 & redis-sentinel-node-2 Pods are slaves.
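
    Instead of reading the logs, you can also ask Sentinel directly for the current master address (a quick check; the master set name mymaster and port 26379 are the chart defaults visible in the logs above, and depending on your configuration Sentinel may additionally require the password via -a):

    $ kubectl exec redis-sentinel-node-0 -c sentinel -- redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
    1) "10.4.2.21"
    2) "6379"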

    For testing, let's delete the master and check if sentinel will switch the master role to one of the slaves:

    $ kubectl delete pod redis-sentinel-node-0
    pod "redis-sentinel-node-0" deleted

    $ kubectl logs -f redis-sentinel-node-1 sentinel
    ...
    1:X 09 Jun 2021 09:55:20.902 # Executing user requested FAILOVER of 'mymaster'
    ...
    1:X 09 Jun 2021 09:55:22.666 # +switch-master mymaster 10.4.2.21 6379 10.4.1.10 6379
    ...
    1:X 09 Jun 2021 09:55:50.626 * +slave slave 10.4.0.10:6379 10.4.0.10 6379 @ mymaster 10.4.1.10 6379
    1:X 09 Jun 2021 09:55:50.632 * +slave slave 10.4.2.22:6379 10.4.2.22 6379 @ mymaster 10.4.1.10 6379
    

    A new master (redis-sentinel-node-2, 10.4.1.10) has been selected, so everything works as expected. Note the +slave 10.4.2.22 entry: that is the deleted Pod, recreated by the StatefulSet with a new IP and rejoined as a slave of the new master.
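
    We can confirm that the StatefulSet recreated the deleted Pod (illustrative output; the new IP matches the +slave line above):

    $ kubectl get pods -o wide
    NAME                    READY   STATUS    RESTARTS   AGE     IP
    redis-sentinel-node-0   2/2     Running   0          60s     10.4.2.22
    redis-sentinel-node-1   2/2     Running   0          5m      10.4.0.10
    redis-sentinel-node-2   2/2     Running   0          4m      10.4.1.10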

    Additionally, we can display more information by connecting to one of the Redis nodes:

    $ kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=redispassword --image docker.io/bitnami/redis:6.2.1-debian-10-r47 --command -- sleep infinity
    pod/redis-client created
    $ kubectl exec --tty -i redis-client --namespace default -- bash
    I have no name!@redis-client:/$ redis-cli -h redis-sentinel-node-1.redis-sentinel-headless -p 6379 -a $REDIS_PASSWORD
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    redis-sentinel-node-1.redis-sentinel-headless:6379> info replication
    # Replication
    role:slave
    master_host:10.4.1.10
    master_port:6379
    master_link_status:up
    ...
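
    To complete the picture, we can connect to the new master (redis-sentinel-node-2) from the same client pod and confirm its role (illustrative output; connected_slaves should count redis-sentinel-node-1 and the rejoined redis-sentinel-node-0):

    I have no name!@redis-client:/$ redis-cli -h redis-sentinel-node-2.redis-sentinel-headless -p 6379 -a $REDIS_PASSWORD info replication
    # Replication
    role:master
    connected_slaves:2
    ...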