prometheus, uptime, prometheus-blackbox-exporter, probe, google-managed-prometheus

How to Configure Blackbox-exporter with Google Managed Prometheus?


I am migrating to Google Managed Prometheus. I've been using a Helm-deployed Prometheus to monitor my Kubernetes cluster. That Prometheus supports a variety of monitoring resources, including PodMonitors, ServiceMonitors, and Probes. Google Managed Prometheus only has pod monitors (PodMonitoring). I'm using Google Managed Prometheus with managed collection.

I'd like to keep using my blackbox-exporter probes for uptime metrics. I configured this with a "kind: Probe" resource on my existing Prometheus. However, with Google Managed Prometheus only supporting PodMonitoring, I'm not sure that the blackbox-exporter is compatible.

I like the blackbox-exporter because I can configure it to check all my ingress hosts without having to manually create a probe for each one. I'm frequently adding new ingresses with new endpoints to my cluster, so this automation has been great.
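
For reference, here's a sketch of the Probe resource I mean (the exporter Service name is illustrative); prometheus-operator discovers the Ingress targets via the selector:

apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: ingress-probes
spec:
  prober:
    # Illustrative Service host:port for the blackbox-exporter
    url: blackbox-exporter:9115
  module: http_2xx
  targets:
    ingress:
      # Probe every Ingress in every namespace
      selector: {}
      namespaceSelector:
        any: true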

Has anyone configured the blackbox-exporter with Google Managed Prometheus?

I've tried port-forwarding to the actual blackbox-exporter pod to see what metrics it exposes, but that doesn't show all the metrics I'd like.


Solution

  • There can only (!?) be one endpoints item per PodMonitoring, and the query string for /probe?... must be defined statically via params. See the examples below.

    This restriction appears to contradict the CRD, which defines endpoints as a list.

    To probe multiple targets (e.g. Ingresses), you can run a single Blackbox Exporter Deployment, but you'd need one PodMonitoring per target, e.g.:

    podmonitorings.yaml:

    apiVersion: v1
    kind: List
    metadata: {}
    items:
    - kind: PodMonitoring
      apiVersion: monitoring.googleapis.com/v1
      metadata:
        name: google-com
      spec:
        selector:
          matchLabels:
            app: blackbox
            type: exporter
        endpoints:
        - interval: 30s
          path: /probe
          params:
            target:
            - google.com
            module:
            - http_2xx
          port: 9115
        targetLabels:
          metadata:
          - pod
          - container
    - kind: PodMonitoring
      apiVersion: monitoring.googleapis.com/v1
      metadata:
        name: stackoverflow-com
      spec:
        selector:
          matchLabels:
            app: blackbox
            type: exporter
        endpoints:
        - interval: 30s
          path: /probe
          params:
            target:
            - stackoverflow.com
            module:
            - http_2xx
          port: 9115
        targetLabels:
          metadata:
          - pod
          - container
    - kind: PodMonitoring
      apiVersion: monitoring.googleapis.com/v1
      metadata:
        name: prometheus-io
      spec:
        selector:
          matchLabels:
            app: blackbox
            type: exporter
        endpoints:
        - interval: 30s
          path: /probe
          params:
            target:
            - prometheus.io
            module:
            - http_2xx
          port: 9115
        targetLabels:
          metadata:
          - pod
          - container
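
    Since these PodMonitorings differ only in name and target, generating them can keep pace as new targets appear. A minimal sketch in bash (the target list is an example; pipe the output to kubectl apply --filename=-):

    #!/usr/bin/env bash
    # Sketch: emit one PodMonitoring per probe target (example list)
    TARGETS=("google.com" "stackoverflow.com" "prometheus.io")

    for TARGET in "${TARGETS[@]}"; do
      # Derive a DNS-safe resource name, e.g. google.com -> google-com
      NAME="${TARGET//./-}"
      cat <<EOF
    ---
    apiVersion: monitoring.googleapis.com/v1
    kind: PodMonitoring
    metadata:
      name: ${NAME}
    spec:
      selector:
        matchLabels:
          app: blackbox
          type: exporter
      endpoints:
      - interval: 30s
        path: /probe
        params:
          target:
          - ${TARGET}
          module:
          - http_2xx
        port: 9115
      targetLabels:
        metadata:
        - pod
        - container
    EOF
    done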
    
    

    blackbox-exporter.yaml defines:

    1. a ConfigMap holding the Blackbox Exporter config
      • using the documented http_2xx module
      • plus an added foo module (a gRPC prober)
    2. a Deployment of the Blackbox Exporter that mounts the ConfigMap

    blackbox-exporter.yaml:

    apiVersion: v1
    kind: List
    metadata: {}
    items:
    - kind: ConfigMap
      apiVersion: v1
      metadata:
        name: blackbox-exporter
      data:
        blackbox.yml: |+
          modules:
            http_2xx:
              prober: http
              timeout: 5s
              http:
                valid_http_versions: ["HTTP/1.1", "HTTP/2.0"]
                valid_status_codes: []  # Defaults to 2xx
                method: GET
                follow_redirects: true
                fail_if_ssl: false
                fail_if_not_ssl: false
                tls_config:
                  insecure_skip_verify: false
                preferred_ip_protocol: "ip4" # defaults to "ip6"
                ip_protocol_fallback: false  # no fallback to "ip6"
            foo:
              prober: grpc
              grpc:
                service: ""
                preferred_ip_protocol: ip4
                tls: true
                tls_config:
                  insecure_skip_verify: false
    - kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: blackbox-exporter
      spec:
        selector:
          matchLabels:
            app: blackbox
            type: exporter
        template:
          metadata:
            labels:
              app: blackbox
              type: exporter
          spec:
            containers:
            - name: blackbox-exporter
              image: docker.io/prom/blackbox-exporter:v0.24.0
              args:
              - --config.file=/config/blackbox.yml
              ports:
              - name: http
                containerPort: 9115
              volumeMounts:
              - name: vol
                mountPath: /config
            volumes:
            - name: vol
              configMap:
                name: blackbox-exporter
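
    Note that the Blackbox Exporter doesn't watch its config file: after changing the ConfigMap (and once the kubelet has synced the mounted file, which can take a minute), you can trigger a reload via its /-/reload endpoint instead of restarting the Pod, e.g. through the port-forward shown at the end:

    # Reload the Blackbox Exporter config (assumes 9115 is port-forwarded)
    curl --request POST http://localhost:9115/-/reload

    Deploy everything into a test namespace: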
    
    NAMESPACE="test"
    
    kubectl create namespace ${NAMESPACE}
    
    # Deploy Blackbox exporter
    kubectl apply \
    --filename=${PWD}/blackbox-exporter.yaml \
    --namespace=${NAMESPACE}
    
    # Apply PodMonitorings
    kubectl apply \
    --filename=${PWD}/podmonitorings.yaml \
    --namespace=${NAMESPACE}
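
    To confirm that the operator picked up the monitors, list them and inspect the status it reports:

    # List the PodMonitorings in the namespace
    kubectl get podmonitorings --namespace=${NAMESPACE}

    # Inspect one; scrape status appears under .status
    kubectl describe podmonitoring/google-com --namespace=${NAMESPACE}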
    

    Then you'll need to patch operatorconfig/config in the gmp-public namespace. There are several ways to do this; an interactive edit is shown first, a non-interactive patch after it. Don't delete any existing content from this resource, but ensure that the matchOneOf list includes the newly expected metric names:

    KUBE_EDITOR=nano kubectl edit operatorconfig/config \
    --namespace=gmp-public
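
    Alternatively, apply the same change non-interactively with a merge patch. Note that this replaces the whole matchOneOf list, so include any matchers that are already present:

    kubectl patch operatorconfig/config \
    --namespace=gmp-public \
    --type=merge \
    --patch='{"collection":{"filter":{"matchOneOf":["{__name__=~\"blackbox_exporter_.+\"}","{__name__=~\"probe_.+\"}"]}}}'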
    

    Whichever way you apply it, the goal is to add matchers for the metrics that you want to ingest:

    apiVersion: monitoring.googleapis.com/v1
    collection:
      filter:
        matchOneOf:
        - '{__name__=~"blackbox_exporter_.+"}'
        - '{__name__=~"probe_.+"}'
      kubeletScraping:
        interval: 30s
    kind: OperatorConfig
    metadata:
      name: config
    

    Once scraped (after a minute or so), blackbox_exporter_* and probe_* metrics will be listed under Metrics Diagnostics.


    And you'll be able to query them in Google's Metrics Explorer and the Google-managed Prometheus frontend.


    https://console.cloud.google.com/monitoring/metrics-diagnostics;duration=PT1H?project={project}
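
    With the metrics ingested you can chart uptime. For example (a sample PromQL query; managed collection sets the job label to the PodMonitoring name, so it distinguishes the targets):

    # Fraction of successful probes per monitored target over the last day
    avg by (job) (avg_over_time(probe_success[1d]))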

    You can port-forward to the Blackbox Exporter to execute probes:

    kubectl port-forward deployment/blackbox-exporter \
    --namespace=${NAMESPACE} \
    9115:9115
    

    And then (locally):

    # Blackbox Exporter's own metrics
    curl --get http://localhost:9115/metrics

    # Blackbox Exporter's HTTP probe for e.g. Google
    # (quote the URL: an unquoted '&' would background the command)
    curl --get "http://localhost:9115/probe?target=google.com&module=http_2xx"

    # Blackbox Exporter's gRPC probe for Google's Cloud Profiler
    curl --get "http://localhost:9115/probe?target=cloudprofiler.googleapis.com:443&module=foo"
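
    Appending debug=true to a probe URL makes the Blackbox Exporter return a detailed log of the probe (module used, redirects, TLS, timings), which helps when a module misbehaves:

    # Verbose probe output for troubleshooting
    curl --get "http://localhost:9115/probe?target=google.com&module=http_2xx&debug=true"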