Tags: elasticsearch, logging, kibana, fluentd, efk

Troubleshooting Fluentd as a Log Aggregator: Connectivity Issues


I am having trouble sending API logs from one Fluentd server to another, and then on to Elasticsearch. When I tried sending logs directly from the first Fluentd server to Elasticsearch, it worked fine, so I suspect there is a connectivity problem between the two Fluentd servers. Both Fluentd servers are set up to listen for logs on UDP and TCP across all interfaces. How can I resolve this issue?

Fluentd Server 1 configuration

<source>
  @type tail
  @id input_log
  <parse>
    @type json
  </parse>
  path /home/ubuntu/OT-attendance/access.log
  tag api.log
  read_from_head true
</source>

<match api.log>
  @type forward
  @id output_system_forward
  <server>
    host 18.212.132.77
    port 24225
  </server>
</match>

Fluentd Server 2 configuration

<match api.log>
  @type elasticsearch
  host localhost
  port 9200
  index_name fluentd.${tag}
</match>
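
The configuration above only shows the output side of Server 2. For Server 1's forward output to be received, Server 2 also needs a forward input listening on port 24225. That source section is not shown in the original post, so the following is an assumed minimal sketch:

# Assumed forward input on Fluentd Server 2 (not shown above): accepts
# records from Server 1's forward output on port 24225, bound to all
# interfaces.
<source>
  @type forward
  @id input_forward
  port 24225
  bind 0.0.0.0
</source>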

Solution

  • Here are some recommendations to solve your problem.

    1. If possible, remove one of the Fluentd instances and send the logs directly to Elasticsearch. Each additional hop adds latency and another point of failure; see the direct-to-Elasticsearch sketch after this list.

    2. Sending data directly from a Fluentd server to Elasticsearch already works fine for you, so whenever there is more than one hop, test each hop one by one. In your setup, the data flows through the following architecture:

      Fluentd => Fluentd => Elasticsearch

      For debugging, replace the Elasticsearch output on the second Fluentd with a stdout output and check the result in the terminal; see the stdout sketch after this list. If no records appear, the problem is between the two Fluentd servers; if records do appear, forwarding works and the Elasticsearch output is what needs attention. Narrowing it down this way will let you fix your problem.
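
A minimal sketch of recommendation 1, combining the tail source from the Server 1 configuration with a direct Elasticsearch output so the Fluentd-to-Fluentd hop disappears entirely. The host value and the simplified index_name are assumptions; adjust them to your environment:

# Sketch: a single Fluentd reads the access log and writes straight to
# Elasticsearch, removing the intermediate forward hop.
<source>
  @type tail
  @id input_log
  <parse>
    @type json
  </parse>
  path /home/ubuntu/OT-attendance/access.log
  tag api.log
  read_from_head true
</source>

<match api.log>
  @type elasticsearch
  host localhost        # assumes Elasticsearch runs on the same host
  port 9200
  index_name fluentd
</match>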
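
For the stdout debugging step in recommendation 2, temporarily swap the Elasticsearch output on the second Fluentd for a stdout output, so every record that arrives over the forward input is printed to the terminal:

# Temporary debugging output on Fluentd Server 2: print every record
# tagged api.log to stdout instead of sending it to Elasticsearch.
<match api.log>
  @type stdout
</match>

If nothing is printed, verify basic reachability from Server 1 first, for example with: nc -vz 18.212.132.77 24225, and confirm that any firewall in between allows traffic on port 24225 over TCP (and over UDP as well, if you use UDP heartbeats).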