What I want is for ArgoCD to approve a CircleCI job that is sitting at an approval step once a deployment succeeds.
However, this has been a multi-week journey with no results. I first looked at this:
https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/
I spent the time building out an entire Argo workflow and job, then set up a pipeline to capture the waiting job id (before updating the yaml file) and run the workflow with that id. That would generate a PostSync hook which could use the id to approve the waiting CircleCI job only AFTER a successful deploy, then destroy itself. I also had one for SyncFail. My hopes were dashed when I realized this was generic: I couldn't see a way to run the hook only for a specific application in ArgoCD; it appears to run on ANY sync event. (What??)
Now I'm looking at this: https://argo-cd.readthedocs.io/en/stable/operator-manual/notifications/
However, yet again I see hiccups before I've even started. There seems to be no way to run custom code, so I can't call the API to approve my job. Has anyone gotten a callback system for a specific app working? I could try to hack something together by calling back to a pipeline that runs the API, but that just seems dumb. There has to be a better way....
Nearly a month of searching, building, and digging through code later, I have something working. It isn't exactly what I wanted above, but I think it's probably better in the long run anyway.
Instead of a direct callback mechanism I went with an async callback, although I'll give you some ideas below for ways you can implement the callback-approval mechanism.
I'm using the following services:

- `argocd`
- `argocd-notifications`
- `argo-workflows`

By default `argocd-notifications` comes bundled with `argocd`, while `argo-workflows` does not and requires a separate install (all done via the community Helm charts).

On the notifications side you need two resources:

- `argocd-notifications-cm` (the ConfigMap MUST be named this; `argocd` will auto-pick-up this ConfigMap by name)
- `argocd-notifications-secret` (the Secret MUST be named this for the same reason)
Please note: if you apply this as-is you will overwrite the ConfigMap that is generated by default. You can append this to the existing map if you want; just be sure to capture that information BEFORE applying your own.
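For example, a quick way to capture the defaults before you apply (assuming the default names and the `argocd` namespace):

```shell
# Back up the generated ConfigMap and Secret so you can merge their
# data back into your own manifests instead of losing it.
kubectl get configmap argocd-notifications-cm -n argocd -o yaml > argocd-notifications-cm.bak.yaml
kubectl get secret argocd-notifications-secret -n argocd -o yaml > argocd-notifications-secret.bak.yaml
```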
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  template.run-github-pipeline: |
    webhook:
      trigger-workflow:
        method: POST
        path: /submit
        body: |
          {
            "resourceKind": "WorkflowTemplate",
            "namespace": "argo-workflows",
            "resourceName": "scripts-trigger-pipelines",
            "submitOptions": {
              "entryPoint": "github",
              "generateName": "{{ `{{.app.metadata.name}}` }}-trigger-github-pipeline-",
              "parameters": [
                "app_name={{ `{{.app.metadata.name}}` }}",
                "trigger_with_data={{ `{{.recipient}}` }}"
              ],
              "labels": "workflows.argoproj.io/workflow-template=scripts-trigger-pipelines"
            }
          }
  trigger.on-deployment-success-github: |
    - when: app.status.sync.status in ['Synced'] and app.status.operationState != nil and app.status.operationState.phase in ['Succeeded'] and app.status.health.status in ['Healthy']
      oncePer: app.status.operationState.syncResult.revision
      send: [run-github-pipeline]
  service.webhook.trigger-workflow: |
    url: "https://{{ .Values.workflows.endpoint }}/api/v1/workflows/argo-workflows"
    headers:
      - name: "Authorization"
        value: "Bearer $ARGO_WORKFLOWS_API_TOKEN"
      - name: "Content-Type"
        value: "application/json"
    insecureSkipVerify: false
```
This secret must also exist so that the notifications controller knows what `$ARGO_WORKFLOWS_API_TOKEN` is:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: argocd-notifications-secret
  namespace: argocd
spec:
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager
  refreshInterval: 1m
  target:
    creationPolicy: Owner
    deletionPolicy: Delete
  # map secret key/value pairs to env vars
  data:
    - secretKey: ARGO_WORKFLOWS_API_TOKEN
      remoteRef:
        key: my-secret-name
        property: argo_workflows_api_token
```
In the above I'm using an ExternalSecret so I can maintain the secret elsewhere and have it synced to kubernetes, but you can also just use a plain Secret if you want.

Subscription

Now this one got me for a while: you can ONLY add these annotations to the kubernetes resource `kind: Application`. This is something that only argocd understands. You add it like:

```yaml
notifications.argoproj.io/subscribe.on-deployment-success-github.trigger-workflow: 'repo:my_test_repo|eventtype:my-repository-dispatch-event|payload:{app_name:sample}'
```

This takes advantage of the `{{.recipient}}` defined in the ConfigMap above. Before firing the event, the annotation value is substituted into the `{{.recipient}}` entry of that ConfigMap:

```
repo:my_test_repo|eventtype:my-repository-dispatch-event|payload:{app_name:sample}
```
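Whatever consumes `{{.recipient}}` has to split this pipe-delimited string back into its parts. A minimal bash sketch of that parsing (the field names `repo`, `eventtype`, and `payload` are just the ones from my annotation format):

```shell
# Example recipient string as delivered by argocd-notifications
RECIPIENT='repo:my_test_repo|eventtype:my-repository-dispatch-event|payload:{app_name:sample}'

# Split on '|' into one field per line, then pull each named field;
# `cut -f2-` keeps everything after the FIRST colon, so colons inside
# the payload value survive intact.
REPO=$(echo "$RECIPIENT"       | tr '|' '\n' | grep '^repo:'      | cut -d: -f2-)
EVENT_TYPE=$(echo "$RECIPIENT" | tr '|' '\n' | grep '^eventtype:' | cut -d: -f2-)
PAYLOAD=$(echo "$RECIPIENT"    | tr '|' '\n' | grep '^payload:'   | cut -d: -f2-)

echo "REPO=$REPO EVENT_TYPE=$EVENT_TYPE PAYLOAD=$PAYLOAD"
```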
Now the `argocd-notifications` side is complete. Next we need to set up the `argo-workflows` side.

You will need some way to communicate with `argo-workflows`, which means you will need to set up a webhook endpoint for its server (sorry, I won't go into detail on how to do that here; just know you need to have it).
Get API Token

Next we need to set up a service account, role, and rolebinding that we will use to access the `argo-workflows` API. In order to get the API token that we use in the notifications ConfigMap above, we need to generate a secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: argo-workflows-pipelines.service-account-token
  annotations:
    kubernetes.io/service-account.name: argo-workflows-pipelines
type: kubernetes.io/service-account-token
```
Doing this then allows us to run the following command to get the API token and place it either into the ExternalSecret or the `argocd-notifications-secret` directly:

```shell
kubectl get secret argo-workflows-pipelines.service-account-token -o=jsonpath='{.data.token}' -n argo-workflows | base64 --decode
```
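The service account, role, and rolebinding themselves aren't shown above; here is a minimal sketch of what they might look like. The names mirror the secret's annotation, but the resources/verbs are my assumption of what submitting a WorkflowTemplate through the API requires, so tighten them for your setup:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argo-workflows-pipelines
  namespace: argo-workflows
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-workflows-pipelines
  namespace: argo-workflows
rules:
  # Assumed minimum to submit workflows from a WorkflowTemplate via the API
  - apiGroups: ["argoproj.io"]
    resources: ["workflows", "workflowtemplates"]
    verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-workflows-pipelines
  namespace: argo-workflows
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argo-workflows-pipelines
subjects:
  - kind: ServiceAccount
    name: argo-workflows-pipelines
    namespace: argo-workflows
```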
Now that we have the token, we can set up a job manifest that will be triggered by the notification and will be in charge of calling back to any pipeline we want (CircleCI, GitHub, Bitbucket, etc.). In this example I'm using `github`:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: scripts-trigger-pipelines
spec:
  ttlStrategy:
    secondsAfterCompletion: 320 # Auto-delete the workflow ~5 minutes after it completes
  entrypoint: github
  arguments:
    parameters:
      - name: trigger_with_data
        value: "{}"
      - name: app_name
        value: ""
  templates:
    - name: github
      serviceAccountName: <my-sa-name>
      inputs:
        parameters:
          - name: trigger_with_data
          - name: app_name
      metadata:
        annotations:
          kubectl.kubernetes.io/default-container: main
      resource:
        action: create
        setOwnerReference: true
        manifest: |
          apiVersion: batch/v1
          kind: Job
          metadata:
            generateName: "{{ `{{inputs.parameters.app_name}}` }}-trigger-github-pipeline-"
          spec:
            template:
              spec:
                serviceAccountName: argo-workflows-pipelines
                volumes:
                  - name: secret-volume
                    secret:
                      secretName: <my-secret-volume>
                containers:
                  - name: trigger-github-pipeline
                    image: {{ .Values.awsAccountID }}.dkr.ecr.us-east-1.amazonaws.com/{{ .Values.pipelineCallbacks.docker.image }}:{{ .Values.pipelineCallbacks.docker.tag }}
                    command: ["/bin/bash", "-c"]
                    args:
                      - |-
                        echo "Input App: {{ `{{inputs.parameters.app_name}}` }}"
                        INPUT_DATA='{{ `{{inputs.parameters.trigger_with_data}}` }}'
                        ...
                        # Trigger target github repository dispatch event with the generated token and parsed data
                        echo "Issuing curl to url:https://api.github.com/repos/<owner>/$REPO/dispatches -- with data: {\"event_type\":\"$EVENT_TYPE\",\"client_payload\":{\"dispatch_payload\":$PAYLOAD}}"
                        curl \
                          -X POST \
                          -H "Accept: application/vnd.github+json" \
                          -H "Authorization: Bearer $GITHUB_TOKEN" \
                          -H "X-GitHub-Api-Version: 2022-11-28" \
                          https://api.github.com/repos/<owner>/$REPO/dispatches \
                          -d "{\"event_type\":\"$EVENT_TYPE\",\"client_payload\":$PAYLOAD}"
                    volumeMounts:
                      - name: secret-volume
                        readOnly: true
                        mountPath: "/secrets"
                restartPolicy: Never
            backoffLimit: 2
```
Note: I don't spell everything out for you here; you will still need to get your `GITHUB_TOKEN` into the workflow. I would just use my provided example of mounting secrets via a volume. I would also use a GitHub App to dynamically generate the token, which is more secure.
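For illustration only, here is the kind of read the job script would do, simulated with a temp directory (the secret key name `github_token` is my assumption; in the real Job the directory is `/secrets`, provided by the secret-volume mount):

```shell
# Simulate the mounted secret volume with a temp dir so this runs anywhere
SECRETS_DIR=$(mktemp -d)
echo -n "ghp_example" > "$SECRETS_DIR/github_token"

# The line the real job script would run (with SECRETS_DIR=/secrets)
GITHUB_TOKEN=$(cat "$SECRETS_DIR/github_token")
echo "loaded token of length ${#GITHUB_TOKEN}"
```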
NOTE 2: while first building this out, I would remove the following until you're ready for it:

```yaml
ttlStrategy:
  secondsAfterCompletion: 320
```

Github Setup
Now that we have `argocd-notifications` and `argo-workflows` set up, we can set up a repository dispatch event inside our `github` pipeline (or the equivalent for other pipelines):
```yaml
name: "Argo Callback"
on:
  repository_dispatch:
    types: [my-repository-dispatch-event]
jobs:
  argo-callback:
    name: "Init Callback"
    runs-on: ubuntu-latest
    steps:
      - name: Output payload
        id: verify
        run: |
          echo "RECEIVED CALLBACK FOR APP: ${{ github.event.client_payload.app_name }}"
          echo "${{ github.event.client_payload }}"
```

(Note that `repository_dispatch` does not take `inputs:`; everything arrives via `github.event.client_payload`.)
As long as I didn't typo something and you filled everything out, it should be working. However, what if it isn't? Let's talk about debugging.

Debugging

```shell
kubectl get pods -n argocd | grep argocd-notifications
```

should spit out the notifications controller. Then you can look at its logs:

```shell
kubectl logs <pod_name> -n argocd --tail=100 -f
```

I added `-f` in case you want to follow the logs in realtime.

Argo-Workflows

```shell
kubectl get pods -n argo-workflows
```

There is a server and a controller, just like `argocd-notifications`. Look at those logs using the same method. That should give you all the information you need to succeed! Good luck!
The argocd-notifications documentation was hell to work through. Here is some helpful info on under-documented things like `app.status`:
https://github.com/argoproj/gitops-engine/blob/master/pkg/sync/common/types.go#L54
With all the above you can also achieve true callback approval if you use a database, something like DynamoDB, to store the hash that is triggering the deployment. Use the undocumented `app.status.operationState.syncResult.revision` parameter (basically the short hash for the commit) inside your ConfigMap to send it to `argo-workflows`. Then `argo-workflows` can use that hash to look up, in the database, who it should respond to.
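A rough AWS CLI sketch of that idea (the table name `deploy-callbacks` and attribute names are hypothetical; `$SHORT_SHA`, `$WORKFLOW_ID`, and `$REVISION` come from your pipeline and from the notification parameters):

```shell
# At deploy-trigger time: record which CircleCI workflow is waiting on this commit
aws dynamodb put-item --table-name deploy-callbacks \
  --item '{"revision": {"S": "'"$SHORT_SHA"'"}, "circleci_workflow_id": {"S": "'"$WORKFLOW_ID"'"}}'

# Later, in the argo-workflows job: use the revision passed from
# app.status.operationState.syncResult.revision to find who to approve
aws dynamodb get-item --table-name deploy-callbacks \
  --key '{"revision": {"S": "'"$REVISION"'"}}' \
  --query 'Item.circleci_workflow_id.S' --output text
```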
I know this is going to be a super valuable resource for anyone working with workflows or notifications.