Tags: logstash, kibana, elastic-stack, statsd, telegraf

Use the ELK Stack to visualise metrics from Telegraf or StatsD


I am aggregating my logs using the ELK stack. Now I would also like to use it to show metrics and create alerts, such as current CPU usage, number of requests handled, number of DB queries, etc.

I can collect the metrics using Telegraf or StatsD, but how do I plug them into Logstash? There is no Logstash input for either of these two.

Does this approach even make sense, or should I aggregate time-series data in a different system? I would like to have everything under one hood.


Solution

  • I can give you some insight into how to accomplish this with Telegraf:

    Option 1: have Telegraf send its output over TCP to Logstash. This is what I do personally, because I like to have all of my data go through Logstash for tagging and mutations. The two snippets below show the minimal wiring; a fuller end-to-end pipeline sketch follows them.

    Telegraf output config:

    [[outputs.socket_writer]]
      ## URL to connect to; $LOGSTASH_IP is a placeholder for your Logstash host
      address = "tcp://$LOGSTASH_IP:8094"
    

    Logstash input config:

    # Goes inside the input { } section of your pipeline config
    tcp {
      port => 8094
    }
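
    For completeness, here is a minimal sketch of what the whole Logstash pipeline could look like once the metrics should end up in Elasticsearch. The Elasticsearch host, the index name and the "telegraf" tag are placeholders for illustration, not something Telegraf requires:

    input {
      tcp {
        port => 8094
      }
    }

    filter {
      mutate {
        # Tag the events so metric data is easy to tell apart from log data
        add_tag => ["telegraf"]
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        # Write metrics into a daily index
        index => "telegraf-%{+YYYY.MM.dd}"
      }
    }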
    

    Option 2: send Telegraf output directly to Elasticsearch. The docs for Telegraf's Elasticsearch output plugin are good and should tell you what to do!
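
    If you go this route, the output section of the Telegraf config looks roughly like the sketch below. This is a minimal sketch based on the Elasticsearch output plugin's common options; the URL and index name are placeholders, so check the plugin docs for the exact settings available in your Telegraf version:

    [[outputs.elasticsearch]]
      ## Elasticsearch node(s) to write to (placeholder URL)
      urls = ["http://localhost:9200"]
      ## Daily index to write metrics into
      index_name = "telegraf-%Y.%m.%d"
      ## Let Telegraf create and maintain an index template for its metrics
      manage_template = true
      template_name = "telegraf"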

    From an ideological perspective, inserting metrics into the ELK stack may or may not be the right thing to do - it depends on your use case. I switched to using Telegraf/InfluxDB because I had a lot of metrics and my consumers preferred the Influx query syntax for time-series data and some other Influx features such as rollups.

    But there is something to be said for reducing complexity by having all of your data "under one hood". Elastic is also making a push toward better support for time-series data with Timelion, and there were a few talks at Elasticon about storing time-series data in Elasticsearch. Here's one. I would say that storing your metrics in ELK is a completely reasonable thing to do. :)

    Let me know if this helps.