google-cloud-platform, google-kubernetes-engine, kubernetes-helm, google-cloud-sql, cloud-sql-proxy

GKE with Cloud SQL using Cloud SQL Proxy: job never reaches COMPLETED status


I am trying to connect to Cloud SQL from a Kubernetes cluster using the Cloud SQL Proxy with the sidecar container pattern. Both the proxy container and the job container are in the same pod, and both run successfully.

However, the Cloud SQL Proxy container stays in "Running" status, so the Job never reaches the Completed state. Because of this, the subsequent jobs are not triggered.

What is the best way to deal with this?


Please also find my .yml template below:

  restartPolicy: Never
  securityContext:
    {{- toYaml .Values.mysqlSetupJob.podSecurityContext | nindent 8 }}
  containers:
    - name: mysql-setup-job
      image: "{{ .Values.mysqlSetupJob.image.repository }}:{{ .Values.mysqlSetupJob.image.tag }}"
      imagePullPolicy: {{ .Values.mysqlSetupJob.imagePullPolicy | default "IfNotPresent" }}
      env:
        - name: MYSQL_USERNAME
          value: {{ .Values.global.sql.datasource.username | quote }}
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "{{ .Values.global.sql.datasource.password.secretRef }}"
              key: "{{ .Values.global.sql.datasource.password.secretKey }}"
        - name: MYSQL_HOST
          value: {{ .Values.global.sql.datasource.hostForMysqlClient | quote }}
        - name: MYSQL_PORT
          value: {{ .Values.global.sql.datasource.port | quote }}
      {{- with .Values.mysqlSetupJob.extraEnvs }}
        {{- toYaml . | nindent 12 }}
      {{- end }}
      securityContext:
        {{- toYaml .Values.mysqlSetupJob.securityContext | nindent 12 }}
      volumeMounts:
      {{- with .Values.mysqlSetupJob.extraVolumeMounts }}
        {{- toYaml . | nindent 12 }}
      {{- end }}
      resources:
        limits:
          cpu: 500m
          memory: 512Mi
        requests:
          cpu: 300m
          memory: 256Mi
    {{- if .Values.cloudsqlProxy.required }}
    - name: cloud-sql-proxy
      image: {{ .Values.cloudsqlProxy.image }}
      command:
        - "/cloud_sql_proxy"
        - "-instances={{ .Values.cloudsqlProxy.instance_connection_name }}=tcp:{{ .Values.cloudsqlProxy.port }}"
        {{- if .Values.gcp.serviceAccount.secretName }}
        - "-credential_file={{ .Values.gcp.serviceAccount.mountPoint }}/{{ .Values.gcp.serviceAccount.secretKey }}"
        {{- end }}
      securityContext:
        runAsNonRoot: true
      resources:
        {{- toYaml .Values.cloudsqlProxy.resources | nindent 12 }}
      {{- if .Values.gcp.serviceAccount.secretName }}
      volumeMounts:
        - name: serviceaccount
          mountPath: {{ .Values.gcp.serviceAccount.mountPoint }}
          readOnly: true
      {{- end }}
    {{- end }}

  {{- with .Values.mysqlSetupJob.nodeSelector }}
  nodeSelector:
    {{- toYaml . | nindent 8 }}
  {{- end }}
  {{- with .Values.mysqlSetupJob.affinity }}
  affinity:
    {{- toYaml . | nindent 8 }}
  {{- end }}
  {{- with .Values.mysqlSetupJob.tolerations }}
  tolerations:
    {{- toYaml . | nindent 8 }}
  {{- end }}

{{- end -}}


Solution

  • The issue is solved. Because the proxy container runs indefinitely, the Job's pod never terminates and the Job never completes. What you can do instead is run the proxy outside of the Job, for example as its own Deployment. You then cannot use localhost to connect to it anymore, but you can still expose the proxy cluster-internally with a Service and reach it through its Kubernetes DNS name. See the sketch below.
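
    A minimal sketch of that approach, assuming the legacy v1 proxy image from the question; the Deployment/Service name, image tag, instance connection name, and port 3306 are placeholders you would adapt to your values. Note that when the proxy is no longer a sidecar it must listen on 0.0.0.0 instead of the default 127.0.0.1, otherwise other pods cannot reach it:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cloud-sql-proxy
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cloud-sql-proxy
      template:
        metadata:
          labels:
            app: cloud-sql-proxy
        spec:
          containers:
            - name: cloud-sql-proxy
              # same v1 proxy as in the question; tag is a placeholder
              image: gcr.io/cloudsql-docker/gce-proxy:1.33.2
              command:
                - "/cloud_sql_proxy"
                # replace with your own instance connection name;
                # bind to 0.0.0.0 so other pods can connect
                - "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:0.0.0.0:3306"
              securityContext:
                runAsNonRoot: true
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: cloud-sql-proxy
    spec:
      selector:
        app: cloud-sql-proxy
      ports:
        - port: 3306
          targetPort: 3306

    The sidecar container can then be dropped from the Job so its pod terminates normally, and the Job's MYSQL_HOST can point at the Service instead, e.g. cloud-sql-proxy within the same namespace or cloud-sql-proxy.<namespace>.svc.cluster.local from elsewhere in the cluster.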