I have a Prometheus server that uses service discovery for Azure VMs, which are running the WMI exporter. In Grafana I am using dashboard variables for filtering (see screenshot).
On the VMs I have created a custom exporter that outputs a metric with a value of 1 for each server, and each server pushes its values to a single Pushgateway, which is configured in /etc/prometheus/prometheus.yaml:
- job_name: 'push-gateway'
  static_configs:
    - targets: ['localhost:9091']
When I look at the scraped metrics I always see instance: localhost:9091 and job: push-gateway, regardless of which server the metric came from. If I add those labels to the pushed metrics manually, I see the "exported" prefix (see screenshot).
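For example, a scraped series ends up looking like this (hypothetical metric and label values, for illustration):

my_custom_metric{job="push-gateway", instance="localhost:9091", exported_instance="myvm01"} 1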
What I am confused about is: how can I ensure that the job and instance labels on the custom metric match the server that generated it, so that I can use the dashboard variables to pull the correct data for the selected server?
You can use metric_relabel_configs to rewrite labels on scraped samples before they are ingested. An example:
- job_name: pushgateway
  # This is necessary when metrics in the Pushgateway carry their own "job"
  # or "instance" labels. With the default "honor_labels: false", Prometheus
  # renames those conflicting labels to "exported_job" and "exported_instance"
  # and attaches its own server-side values instead.
  honor_labels: false
  static_configs:
    - targets:
        - my-pushgateway.com:9091
  metric_relabel_configs:
    # copy the Pushgateway address from the "instance" label to a "source" label
    - source_labels: [instance]
      target_label: source
    # replace the "instance" label value with the one from "exported_instance";
    # the "(.+)" regex leaves samples without an "exported_instance" label untouched
    - source_labels: [exported_instance]
      regex: '(.+)'
      target_label: instance
    # remove the now-redundant "exported_instance" label
    # (labeldrop matches label names via "regex", not "source_labels")
    - regex: exported_instance
      action: labeldrop
If you previously had metrics like this:
my_metric{job="pushgateway", instance="my-pushgateway.com:9091", exported_instance="example.com"}
then with the configuration from the example above they will look like this:
my_metric{job="pushgateway", instance="example.com", source="my-pushgateway.com:9091"}
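If you also want the job label to match, the same pattern works for the exported_job label. A minimal sketch to extend the metric_relabel_configs above (assuming the pushed metrics carry their own job label, which Prometheus renames to exported_job):

    # replace the scrape job name with the "job" pushed by the server
    - source_labels: [exported_job]
      regex: '(.+)'
      target_label: job
    # remove the now-redundant "exported_job" label
    - regex: exported_job
      action: labeldrop

Keep in mind that exported_instance and exported_job only appear if the pushed metrics carry instance and job labels in the first place, e.g. because the exporter includes them in the Pushgateway grouping key (the /metrics/job/<job>/instance/<instance> part of the push URL). Once the labels line up, a Grafana query such as my_custom_metric{instance=~"$instance"} (variable name assumed here) will select the same server as the WMI exporter metrics.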