apache-spark, databricks, monitoring, datadog, databricks-workflows

How to monitor different Spark jobs on the same cluster/SparkContext on Databricks?


I want to have a monitoring and alerting system in place (with a tool such as Datadog) that can fetch metrics and logs from my Spark applications in Databricks. The thing is, to avoid having to spin up, run, and kill hundreds or even thousands of job clusters every day, it is better to re-use existing clusters for similar data-extraction jobs.

To fetch the metrics from Databricks and Spark in Datadog, I have tried the following:

  1. Changing SparkSession.builder.appName within each notebook: doesn't work, since the app name cannot be changed after the cluster has started. It will always be "Databricks Shell" (roughly the snippet after this list).
  2. Setting a cluster-wide tag and unsetting it after the job has ended: this can lead to mismatched tags when jobs run concurrently. Also, I didn't find a clear way to "append" a tag there.
  3. Somehow fetching the Databricks Job/Run ID from Datadog: I have no clue how to do this.
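For reference, attempt 1 looks roughly like the snippet below (the app name is a made-up placeholder). Because the cluster already has a running SparkSession, getOrCreate() simply returns that session and the appName setting is ignored:

```python
# Roughly what attempt 1 looks like inside a Databricks notebook. A SparkSession
# already exists on the cluster, so getOrCreate() returns that session and the
# appName below is silently ignored.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily-extraction-run")  # placeholder name; has no effect here
    .getOrCreate()
)

print(spark.sparkContext.appName)     # still "Databricks Shell"
```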

Option 3 seems feasible to me, since every Spark job on the same SparkSession carries my Databricks Job/Run ID in its name. I just have to understand how to identify it in Datadog.
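One way to make that ID explicit, instead of relying on the default naming, is to set the Spark job group from the notebook itself. A minimal sketch, assuming the job passes its run ID to the notebook as a parameter named run_id (a name chosen here for illustration) and using placeholder table names:

```python
# Minimal sketch: tag every Spark job this notebook triggers with the Databricks
# run ID, so the Spark UI (and anything keyed on job group/description) carries it.
# Assumes a job parameter named "run_id"; dbutils is only available on Databricks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

run_id = dbutils.widgets.get("run_id")  # hypothetical parameter name

# All Spark jobs triggered after this call belong to the group "run-<id>".
spark.sparkContext.setJobGroup(f"run-{run_id}", f"extraction run {run_id}")

# Placeholder workload.
df = spark.read.table("source_table")
df.write.mode("append").saveAsTable("target_table")
```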

Thoughts? Anything silly I might be missing to achieve this?


Solution

  • There are several points here:

    So I really would recommend using separate automated (job) clusters. If you want to reuse nodes and have shorter startup times, you can use instance pools (see the sketch at the end of this answer).

    If you want to monitor resource usage, etc., I would recommend looking into the Overwatch project, which can collect data from different sources (cluster logs, APIs, and so on) and build a unified view of performance, costs, etc. One of its advantages is that you can attribute costs, resource load, and so on down to individual users, notebooks, and Spark jobs. It's not a "classical" real-time monitoring tool, but it is already used by many customers.
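As a rough illustration of the instance-pool approach above (not a drop-in solution: workspace URL, token, pool ID, notebook path, and tags are placeholders), a job whose runs use a pool-backed job cluster could be created like this. Each run gets its own short-lived cluster, and its custom_tags propagate to the underlying cloud instances, which a tool like Datadog can typically pick up:

```python
# Minimal sketch, assuming the Databricks Jobs API 2.1 and an existing instance
# pool. Workspace URL, token, pool ID, notebook path, and tags are placeholders.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<PERSONAL_ACCESS_TOKEN>"

job_spec = {
    "name": "daily-extraction",
    "tasks": [
        {
            "task_key": "extract",
            "notebook_task": {"notebook_path": "/Repos/etl/extract"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",  # example runtime version
                "num_workers": 2,
                # Nodes come from a warm pool, so startup stays short even
                # though every run gets its own cluster.
                "instance_pool_id": "<POOL_ID>",
                # Per-job tags end up on the cloud instances backing the cluster.
                "custom_tags": {"pipeline": "extraction"},
            },
        }
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"job_id": 123}
```

Because each run has its own cluster, metrics and logs are naturally scoped per job instead of being mixed together on a shared all-purpose cluster.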