kubernetes, configmap, spring-cloud-kubernetes

Kubernetes pod-level configuration externalization in a Spring Boot app


I need some help from the community; I'm still new to Kubernetes and Spring Boot. Thanks in advance.
What I need is to have 4 pods running in a Kubernetes environment, each with a slightly different configuration. For example, I have a property called regions in one of my Java classes; it gets its value from application.yml, like this:

@Value("${regions}")
private String regions;

Now, after deploying to the Kubernetes environment, I want to have 4 pods running (I can configure that in the Helm chart), and in each pod the regions field should have a different value. Is this achievable? Can anyone please give any advice?


Solution

  • If you want to run 4 pods with different configurations, you have to create 4 different Deployments in Kubernetes.

    You can create a separate ConfigMap for each one, storing either the whole application.yaml file or individual environment variables, and inject it into the corresponding Deployment.

    How to store the whole application.yaml inside a ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: yaml-region-first
    data:
      application.yaml: |-
        data: test
        regions: first-region
    

    In the same way, you can create the ConfigMap for the second Deployment:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: yaml-region-second
    data:
      application.yaml: |-
        data: test
        regions: second-region
    

    You can inject this ConfigMap into each Deployment as a volume.

    Example:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: hello-app
      name: hello-app
      namespace: default
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: hello-app
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: hello-app
        spec:
          containers:
          - name: nginx
            image: nginx
            imagePullPolicy: IfNotPresent
            volumeMounts:
              # mount only the application.yaml key as a file via subPath
              - mountPath: /etc/nginx/application.yaml
                subPath: application.yaml
                name: yaml-file
                readOnly: true
          volumes:
          - configMap:
              name: yaml-region-second
              optional: false
            name: yaml-file
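
    For the Spring Boot container itself, mounting the file is only half the story: the application also has to be told where to read it from. Here is a minimal sketch of the container section, assuming the ConfigMap is mounted under a /config directory (the image name and mount path are illustrative, not from the example above):

    containers:
      - name: spring-boot-app              # illustrative name
        image: my-registry/my-app:1.0      # illustrative image
        env:
          # relaxed binding maps this to spring.config.additional-location,
          # so Spring Boot also loads /config/application.yaml
          - name: SPRING_CONFIG_ADDITIONAL_LOCATION
            value: "/config/"
        volumeMounts:
          - mountPath: /config
            name: yaml-file
            readOnly: true

    With a directory mount, every key of the ConfigMap shows up as a file, so application.yaml ends up at /config/application.yaml.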
    

    Accordingly, you can also create a Helm chart for this; see the sketch below.
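
    For example, a minimal Helm template sketch, assuming a chart value named region and a file templates/configmap.yaml (both names are illustrative):

    # templates/configmap.yaml (illustrative)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-application-yaml
    data:
      application.yaml: |-
        regions: {{ .Values.region }}

    Installing the chart once per region, for example with helm install region-first ./chart --set region=first-region, then gives you a separate ConfigMap for each release.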

    If you just want to pass a single environment variable instead of storing the whole file inside a ConfigMap, you can add the value directly to the Deployment (or Pod).

    Example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: print-greeting
    spec:
      containers:
      - name: env-print-demo
        image: bash
        env:
        - name: REGION
          value: "one"
        - name: HONORIFIC
          value: "The Most Honorable"
        - name: NAME
          value: "Kubernetes"
        command: ["echo"]
        args: ["$(REGION) $(HONORIFIC) $(NAME)"]
    

    https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/

    For each Deployment, the environment will be different, and with Helm you can also set or override these values dynamically from the CLI, as sketched below.
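
    For instance, a Helm deployment-template excerpt might set the variable from a chart value (again, the value name region and the file path are illustrative); Spring Boot's relaxed binding maps the REGIONS environment variable to the ${regions} property used in the Java code:

    # templates/deployment.yaml (illustrative excerpt of the container spec)
    env:
      # REGIONS binds to the ${regions} property via relaxed binding
      - name: REGIONS
        value: {{ .Values.region | quote }}

    You can then install or upgrade each of the four releases with its own value, e.g. helm install region-first ./chart --set region=first-region, or change it later with helm upgrade region-first ./chart --set region=another-region.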