kubernetes, amazon-s3, bitbucket, argo-workflows, argo

How do I mount my ssh key for a BitBucket Repo to my container to use ssh git links on Argo Workflows?


I have a workflow that:

  1. clones a repository, tars it, and uploads it to an S3 bucket
  2. builds a Docker image using Kaniko
  3. clones a different repository, tars it, and uploads it to an S3 bucket
  4. runs a Python step that pulls the script from the bucket (step 3), updates a tag, tars it, and sends it back
  5. runs a git container step that should do a git push, which is where my issue lies. I tried to do it the same way I cloned the repository (reference for cloning: https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml)

I get this error:

Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Error: exit status 128

How do I push my changes to my repo after the python script runs? How do I mount my ssh key to the container?

I tried adding my SSH key using https://github.com/argoproj/argo-workflows/blob/main/examples/input-artifact-git.yaml as a reference. I expected the push step to use the SSH key and push the changes made by my Python script.


Solution

  • I solved the problem; here is how:

    1. Create a Kubernetes secret from your key pair (e.g. kubectl create secret generic ssh-keys --from-file=id_rsa=ssh-key --from-file=id_rsa.pub=ssh-key.pub), then mount it as a volume. Note that defaultMode is octal, so write 0400, not 400:

      - name: ssh-keys
        secret:
          secretName: ssh-keys
          defaultMode: 0400
      
      
    2. Mount it into the container that needs the SSH keys:

         volumeMounts:
         - name: ssh-keys
           readOnly: true
           mountPath: /root/.ssh
      
    3. In the container's arguments, add a script that creates a temporary directory (/tmp/ssh) and copies the SSH keys into it: the secret volume mount is read-only, so you cannot write a known_hosts file into the mounted .ssh directory. Make the copied private key readable only by its owner (ssh refuses keys with looser permissions), run ssh-keyscan to add your git provider's hostname to known_hosts, and export GIT_SSH_COMMAND so git uses both files. (NOTE: your container needs ssh-keyscan; if it lacks it, add an Alpine init container to the step.) If it does have ssh-keyscan, prepend this to the arguments:

       mkdir -p /tmp/ssh && cp /root/.ssh/id_rsa /tmp/ssh/id_rsa && chmod
       600 /tmp/ssh/id_rsa && ssh-keyscan -H <your git provider> >>
       /tmp/ssh/known_hosts && export GIT_SSH_COMMAND="ssh -i
       /tmp/ssh/id_rsa -o UserKnownHostsFile=/tmp/ssh/known_hosts" && <rest of your arguments>
      

    If it does not have ssh-keyscan, run the setup in an Alpine init container instead. Two caveats: variables exported in an init container do not carry over to the main container, so set GIT_SSH_COMMAND in the main container itself; and /tmp/ssh must be a shared writable volume (e.g. an emptyDir named ssh-tmp) mounted in both containers, or the copied key is lost when the init container exits:

      initContainers:
        - name: ssh-setup
          image: alpine
          command:
            - /bin/sh
            - '-c'
          args:
            # note: a variable exported here would not reach the main
            # container, so GIT_SSH_COMMAND is set there instead
            - >
              apk add --no-cache openssh-client && mkdir -p /tmp/ssh && cp
              /root/.ssh/id_rsa /tmp/ssh/id_rsa && chmod 600 /tmp/ssh/id_rsa &&
              ssh-keyscan -H bitbucket.org >> /tmp/ssh/known_hosts
          resources: {}
          volumeMounts:
            - name: ssh-keys
              readOnly: true
              mountPath: /root/.ssh
            - name: ssh-tmp   # shared emptyDir so /tmp/ssh survives into the main container
              mountPath: /tmp/ssh
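Putting the steps together, a sketch of what the full push template might look like. The template name git-push, the image alpine/git, the ssh-tmp emptyDir, and the final git command are placeholders and assumptions, not from the original workflow; adapt them to your setup:

```yaml
  - name: git-push
    volumes:
      - name: ssh-keys              # the secret created in step 1
        secret:
          secretName: ssh-keys
          defaultMode: 0400
      - name: ssh-tmp               # writable scratch space shared between containers
        emptyDir: {}
    initContainers:
      - name: ssh-setup
        image: alpine
        command: ['/bin/sh', '-c']
        args:
          - >
            apk add --no-cache openssh-client && mkdir -p /tmp/ssh && cp
            /root/.ssh/id_rsa /tmp/ssh/id_rsa && chmod 600 /tmp/ssh/id_rsa &&
            ssh-keyscan -H bitbucket.org >> /tmp/ssh/known_hosts
        volumeMounts:
          - name: ssh-keys
            readOnly: true
            mountPath: /root/.ssh
          - name: ssh-tmp
            mountPath: /tmp/ssh
    container:
      image: alpine/git             # any image with git works
      env:
        - name: GIT_SSH_COMMAND     # set here, not in the init container
          value: ssh -i /tmp/ssh/id_rsa -o UserKnownHostsFile=/tmp/ssh/known_hosts
      command: ['/bin/sh', '-c']
      args:
        - git push origin main      # placeholder for your actual push command
      volumeMounts:
        - name: ssh-tmp
          mountPath: /tmp/ssh
```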
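The copy/permissions/GIT_SSH_COMMAND dance from step 3 can be tried outside the cluster. This is a minimal sketch with a throwaway key: SSH_SRC stands in for the read-only /root/.ssh secret mount, /tmp/ssh-demo stands in for /tmp/ssh, and the network call to ssh-keyscan is stubbed out with an empty file.

```shell
#!/bin/sh
set -e

# Stand-in for the read-only secret mount at /root/.ssh (throwaway key).
SSH_SRC=$(mktemp -d)
printf 'fake-private-key\n' > "$SSH_SRC/id_rsa"

# The secret mount is read-only, so copy the key somewhere writable first.
SSH_DIR=/tmp/ssh-demo
mkdir -p "$SSH_DIR"
cp "$SSH_SRC/id_rsa" "$SSH_DIR/id_rsa"
chmod 600 "$SSH_DIR/id_rsa"   # ssh refuses group/world-readable private keys

# In the real container this would be: ssh-keyscan -H bitbucket.org >> "$SSH_DIR/known_hosts"
: > "$SSH_DIR/known_hosts"

# Point git at the copied key and the freshly built known_hosts file.
GIT_SSH_COMMAND="ssh -i $SSH_DIR/id_rsa -o UserKnownHostsFile=$SSH_DIR/known_hosts"
export GIT_SSH_COMMAND
echo "$GIT_SSH_COMMAND"
# prints: ssh -i /tmp/ssh-demo/id_rsa -o UserKnownHostsFile=/tmp/ssh-demo/known_hosts
```

In the real step, git push then picks up GIT_SSH_COMMAND automatically; no extra flags are needed.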