elasticsearch, kubernetes, efk

How to handle Elasticsearch data when it fills up its dedicated volume


I am creating an EFK stack on a k8s cluster, using an EFK Helm chart (described here). The chart creates two PVCs: one for es-master and one for es-data.

Let's say I allocate 50Gi for each of these PVCs. When they eventually fill up, the desired behavior is for new data to start overwriting the old data, with the old data archived to, for example, an S3 bucket. How can I configure Elasticsearch to do this?


Solution

  • One tool that makes this easy is Elasticsearch Curator: https://www.elastic.co/guide/en/elasticsearch/client/curator/5.5/actions.html

    You can use it to:

    1. Roll over the indices that hold the data, by size or time. Each PVC then holds only a handful of time-based indices.
    2. Snapshot the rolled-over indices and back them up to S3.
    3. Delete the oldest indices by age, freeing up space for new indices.

    Curator can do all of this from a single action file; a sketch of the configuration is shown below.
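Here is a sketch of one Curator action file covering all three steps. The alias name (logs-write), index prefix (logs-), repository name (s3_backup), and the size/age thresholds are assumptions you would adapt to your setup. The snapshot step assumes an S3-backed snapshot repository has already been registered in Elasticsearch (repository-s3 plugin plus the snapshot API), and rollover requires that your log shipper writes through an alias (here logs-write) pointing at an index such as logs-000001:

```yaml
# action.yml -- sketch only; alias, prefix, repository name and thresholds are assumptions
actions:
  1:
    action: rollover
    description: >-
      Roll over the index behind the 'logs-write' alias once it grows too big
      or too old (the max_size condition requires Elasticsearch 6.1+).
    options:
      name: logs-write
      conditions:
        max_size: 5gb
        max_age: 1d
  2:
    action: snapshot
    description: >-
      Snapshot the log indices into an S3-backed repository that was
      registered beforehand (assumed name: s3_backup).
    options:
      repository: s3_backup
      name: curator-%Y%m%d%H%M%S
      wait_for_completion: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logs-
  3:
    action: delete_indices
    description: >-
      Delete indices older than 7 days so the PVC never fills up completely.
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logs-
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 7
```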
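Curator also needs a small client configuration pointing at your Elasticsearch service. The hostname below (the Service name created by the chart) is an assumption:

```yaml
# config.yml -- Curator client configuration (hostname is an assumption)
client:
  hosts:
    - elasticsearch-master   # replace with the Elasticsearch Service name from your chart
  port: 9200
  use_ssl: False
  timeout: 60

logging:
  loglevel: INFO
  logformat: default
```

With both files in place, a run is simply `curator --config config.yml action.yml`.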
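Since you are on Kubernetes, the usual way to run this on a schedule is a CronJob that mounts both files from a ConfigMap. This is only a sketch: the image name and ConfigMap name are assumptions, and any image that bundles Curator will do (batch/v1 CronJobs need Kubernetes 1.21+; older clusters use batch/v1beta1):

```yaml
# curator-cronjob.yml -- sketch; image and ConfigMap names are assumptions
apiVersion: batch/v1            # use batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: curator
spec:
  schedule: "0 1 * * *"         # once a day at 01:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: curator
            image: untergeek/curator:5.8.4   # assumption: any image that ships Curator
            command: ["curator", "--config", "/etc/curator/config.yml", "/etc/curator/action.yml"]
            volumeMounts:
            - name: curator-config
              mountPath: /etc/curator
          volumes:
          - name: curator-config
            configMap:
              name: curator-config           # ConfigMap holding config.yml and action.yml
```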