I'm trying to solve a Logstash RAM problem: memory usage is nearly 100%. I've searched a lot, but nothing I found worked for me. Below is my logstash.conf file; I think it only needs small touches.
Logstash.conf:
input {
  file {
    path => ["c:/mylogs/*.txt"]
    start_position => "beginning"
    discover_interval => 10
    stat_interval => 10
    sincedb_write_interval => 10
    close_older => 10
    codec => "json"
  }
}
filter {
  date {
    match => ["mydate", "yyyy-MM-dd HH:mm:ss.SSSS"]
    timezone => "UTC"
  }
  date {
    match => ["TimeStamp", "ISO8601"]
  }
  json {
    source => "request"
    target => "parsedJson"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "log-%{+YYYY.MM}"
  }
}
Logstash runs on a JVM. The JVM options used by Logstash can be configured in the jvm.options file, in the config folder of your Logstash installation (see the doc). In that file you can set the -Xmx option to set the maximum heap size, which caps the memory the JVM will use.
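For example, to cap the heap at 1 GB, the relevant lines in jvm.options would look like this (the 1g value is just an illustration; size it for your machine, and the Logstash docs recommend setting -Xms and -Xmx to the same value):

```
## config/jvm.options
-Xms1g
-Xmx1g
```

After editing the file, restart Logstash for the new heap settings to take effect.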
From the Logstash tuning page, you can also configure the batch size and the number of workers to reduce the number of in-flight events, which should reduce RAM usage, but will also reduce Logstash's throughput.
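These settings live in logstash.yml (or can be passed as -w and -b command-line flags). A sketch with deliberately small values; the numbers here are hypothetical and should be tuned against your workload:

```
# config/logstash.yml
pipeline.workers: 2      # fewer workers => fewer in-flight batches
pipeline.batch.size: 50  # smaller batches => fewer events held in memory at once
pipeline.batch.delay: 50 # ms to wait before flushing an undersized batch
```

Roughly, in-flight events scale with workers × batch size, so lowering either reduces memory pressure at the cost of throughput.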