Using Grafana 9.2.2 with VictoriaMetrics as the data source to send alerts when certain criteria are met. An external service delivers the alerts: an API is configured as a webhook contact point, the payload is sent to it, and it is processed further for delivery on Slack.
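For reference, the forwarding service essentially reshapes Grafana's webhook payload into a Slack message body. A minimal sketch, assuming the Alertmanager-compatible format that Grafana's webhook contact point sends (a top-level "status" plus an "alerts" list with "labels" and "annotations"); the function name and the Slack text layout are my own:

```python
import json


def grafana_to_slack(payload: dict) -> dict:
    """Reshape a Grafana webhook alert payload into a Slack message body.

    Assumes the Alertmanager-compatible shape Grafana's webhook contact
    point sends: each entry in "alerts" carries "status", "labels",
    and "annotations".
    """
    lines = []
    for alert in payload.get("alerts", []):
        name = alert.get("labels", {}).get("alertname", "unknown")
        summary = alert.get("annotations", {}).get("summary", "")
        lines.append(f"[{alert.get('status', '?')}] {name}: {summary}")
    # Slack's incoming-webhook endpoint accepts {"text": "..."}.
    return {"text": "\n".join(lines)}


# Illustrative payload, not captured from a real Grafana instance.
example = {
    "status": "firing",
    "alerts": [
        {
            "status": "firing",
            "labels": {"alertname": "HighLatency"},
            "annotations": {"summary": "p99 latency above threshold"},
        }
    ],
}
print(json.dumps(grafana_to_slack(example)))
```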
Alert evaluation behaviour is set to: Evaluate every 1h for 0s. I want the alert to fire as soon as the condition is met, and to evaluate every 1h because that is the frequency of new data points.
Expected behaviour: alert once every 24 hours after the condition is met.
Actual behaviour: once the condition is met, the alert gets triggered (as it should). However, the same alert gets sent again within 5 minutes.
How to handle this?
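One thing worth checking: the re-send cadence is governed by the notification policy, not by the rule's evaluation interval, and Grafana's default group_interval is 5m, which matches the duplicate showing up within 5 minutes. A sketch of a file-provisioned policy with repeat_interval raised to 24h (receiver name and values are illustrative, not taken from the setup above):

```yaml
# provisioning/alerting/policies.yaml (illustrative)
apiVersion: 1
policies:
  - orgId: 1
    receiver: slack-webhook          # placeholder contact point name
    group_by: ['grafana_folder', 'alertname']
    group_wait: 30s                  # wait before first notification of a group
    group_interval: 5m               # wait before notifying about new alerts in a group
    repeat_interval: 24h             # re-send a still-firing alert at most once per day
```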
Options tried: grouping by alertname and grafana-folder in the notification policy didn't help. I also tried to group by alert_uid, but that label was not interpreted. Am I trying the wrong combination of timings (combined with the alert evaluation behaviour period)?

Update: The issue was multiple instances of Grafana running independently of each other. We had 2 pods of Grafana running, and both were serving requests, hence the duplication. Need to check how to run Grafana in cluster mode in the future.
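For the record, Grafana's unified alerting has an HA mode in which the pods' embedded Alertmanagers gossip state so notifications are deduplicated rather than sent once per pod. A sketch of the relevant grafana.ini settings, assuming a Kubernetes headless service so the pods can reach each other (hostnames are placeholders):

```ini
[unified_alerting]
# Each Grafana pod must be able to reach its peers so the embedded
# Alertmanagers can gossip and deduplicate notifications.
ha_listen_address = "0.0.0.0:9094"
ha_advertise_address = "grafana-0.grafana-headless:9094"
ha_peers = "grafana-0.grafana-headless:9094,grafana-1.grafana-headless:9094"
```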