Tags: docker, kubernetes, redis, kubernetes-helm, redislabs

Production-ready Redis on Kubernetes


I want to run Redis with the ReJSON module on a production Kubernetes cluster.

Right now in staging, I am running a single Redis pod as a StatefulSet.

Is there a Helm chart available for this? Can anyone please share it?

I have tried editing stable/redis and stable/redis-ha with the redislabs/rejson image, but it's not working.

What I have done:

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
  repository: redislabs/rejson
  tag: latest
  pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3

## Kubernetes priorityClass name for the redis-ha-server pod
# priorityClassName: ""

## Custom labels for the redis pod
labels: {}

## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the redis-ha.fullname template
  # name:

## Enables a HA Proxy for better LoadBalancing / Sentinel Master support. Automatically proxies to Redis master.
## Recommended for externally exposed Redis clusters.
## ref: https://cbonte.github.io/haproxy-dconv/1.9/intro.html
haproxy:
  enabled: false
  # Enable if you want a dedicated port in haproxy for redis-slaves
  readOnly:
    enabled: false
    port: 6380
  replicas: 1
  image:
    repository: haproxy
    tag: 2.0.4
    pullPolicy: IfNotPresent
  annotations: {}
  resources: {}
  ## Kubernetes priorityClass name for the haproxy pod
  # priorityClassName: ""
  ## Service type for HAProxy
  ##
  service:
    type: ClusterIP
    loadBalancerIP:
    annotations: {}
  serviceAccount:
    create: true
  ## Prometheus metric exporter for HAProxy.
  ##
  exporter:
    image:
      repository: quay.io/prometheus/haproxy-exporter
      tag: v0.9.0
    enabled: false
    port: 9101
  init:
    resources: {}
  timeout:
    connect: 4s
    server: 30s
    client: 30s


## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  create: true

sysctlImage:
  enabled: false
  command: []
  registry: docker.io
  repository: bitnami/minideb
  tag: latest
  pullPolicy: Always
  mountHostSys: false

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-replicas-to-write: 1
    min-replicas-max-lag: 5   # Value in seconds
    maxmemory: "0"       # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru"  # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"


  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: {}
  #  requests:
  #    memory: 200Mi
  #    cpu: 100m
  #  limits:
  #    memory: 700Mi

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    ## Additional sentinel conf options can be added below. Only options that
    ## are expressed in the format similar to 'sentinel xxx mymaster xxx' will
    ## be properly templated.
    ## For available options see http://download.redis.io/redis-stable/sentinel.conf
    down-after-milliseconds: 10000
    ## Failover timeout value in milliseconds
    failover-timeout: 180000
    parallel-syncs: 5

  ## Custom sentinel.conf files used to override default settings. If this file is
  ## specified then the sentinel.config above will be ignored.
  # customConfig: |-
      # Define configuration here

  resources: {}
  #  requests:
  #    memory: 200Mi
  #    cpu: 100m
  #  limits:
  #    memory: 200Mi

securityContext:
  runAsUser: 1000
  fsGroup: 1000
  runAsNonRoot: true

## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
nodeSelector: {}

## Whether the Redis server pods should be forced to run on separate nodes.
## This is accomplished by setting their AntiAffinity with requiredDuringSchedulingIgnoredDuringExecution as opposed to preferred.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
##
hardAntiAffinity: true

## Additional affinities to add to the Redis server pods.
##
## Example:
##   nodeAffinity:
##     preferredDuringSchedulingIgnoredDuringExecution:
##       - weight: 50
##         preference:
##           matchExpressions:
##             - key: spot
##               operator: NotIn
##               values:
##                 - "true"
##
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
additionalAffinities: {}

## Override all other affinity settings for the Redis server pods with a string.
##
## Example:
## affinity: |
##   podAntiAffinity:
##     requiredDuringSchedulingIgnoredDuringExecution:
##       - labelSelector:
##           matchLabels:
##             app: {{ template "redis-ha.name" . }}
##             release: {{ .Release.Name }}
##         topologyKey: kubernetes.io/hostname
##     preferredDuringSchedulingIgnoredDuringExecution:
##       - weight: 100
##         podAffinityTerm:
##           labelSelector:
##             matchLabels:
##               app:  {{ template "redis-ha.name" . }}
##               release: {{ .Release.Name }}
##           topologyKey: failure-domain.beta.kubernetes.io/zone
##
affinity: |

# Prometheus exporter specific configuration options
exporter:
  enabled: false
  image: oliver006/redis_exporter
  tag: v0.31.0
  pullPolicy: IfNotPresent

  # prometheus port & scrape path
  port: 9121
  scrapePath: /metrics

  # cpu/memory resource limits/requests
  resources: {}

  # Additional args for redis exporter
  extraArgs: {}

podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 1

## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:

## Use existing secret containing key `authKey` (ignores redisPassword)
# existingSecret:

## Defines the key holding the redis password in existing secret.
authKey: auth

persistentVolume:
  enabled: true
  ## redis-ha data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  annotations: {}
init:
  resources: {}

# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
  ## path is evaluated as template so placeholders are replaced
  # path: "/data/{{ .Release.Name }}"

  # if chown is true, an init-container with root permissions is launched to
  # change the owner of the hostPath folder to the user defined in the
  # security context
  chown: true

In the redis-ha chart, I have updated two lines to change the image repository and tag:

image:
  repository: redislabs/rejson
  tag: latest
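
For reference, instead of editing values.yaml in place, the same two-line override can be passed on the command line when installing the chart (the release name redis-rejson is just a placeholder for whatever I use locally):

helm upgrade --install redis-rejson ./stable/redis-ha \
  --set image.repository=redislabs/rejson \
  --set image.tag=latest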

The pod starts, but when I log in with redis-cli it does not accept JSON commands.
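
One way to check whether the module was actually loaded is to list the Redis modules from inside the pod (the pod name below is a placeholder for whatever name the StatefulSet generates):

kubectl exec -it <redis-ha-server-pod> -- redis-cli MODULE LIST

If ReJSON loaded correctly, the output contains an entry for it; only then are JSON.SET / JSON.GET commands accepted.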


Solution

  • It looks like Helm stable/redis has support for ReJSON, as stated in the following PR (#7745):

    This allows stable/redis to offer a higher degree of flexibility for those who may need to run images containing redis modules or based on a different linux distribution than what is currently offered by bitnami.

    Several interesting test cases:

    • [...]
    • helm upgrade --install redis-test ./stable/redis --set image.repository=redislabs/rejson --set image.tag=latest
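
    The same override can also be kept in a small values file and applied with -f; this is only a sketch assuming the image.repository and image.tag keys used in that test command, and the file name is arbitrary:

      # rejson-values.yaml
      image:
        repository: redislabs/rejson
        tag: latest

      helm upgrade --install redis-test ./stable/redis -f rejson-values.yaml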

    The stable/redis-ha chart also has a PR (#7323) that may make it compatible with ReJSON:

    This also removes dependencies on very specific redis images thus allowing for use of any redis images.
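
    Whichever chart is used, a quick smoke test confirms the module is active once the redislabs/rejson image is running (the pod name is a placeholder):

      kubectl exec -it <redis-pod> -- redis-cli JSON.SET greeting . '"hello"'
      kubectl exec -it <redis-pod> -- redis-cli JSON.GET greeting

    JSON.SET and JSON.GET are ReJSON commands, so a plain redis image without the module would reject them with an unknown-command error.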