elasticsearch, elasticsearch-6

Creating snapshot from multi-node elasticsearch cluster, restoring on single-node, shards red


We have a running Elasticsearch 6.6 instance with several indices, and I took a snapshot of the two indices I am interested in. I set up a new dockerized single-node Elasticsearch 6.6 instance and attempted to restore the snapshot there using curl. The indices were restored, but all 10 of their shards were red. So I deleted the two restored indices and ran the restore again, this time through Kibana. After that restore, from the SAME snapshot, the shards were all green and my application that queries Elasticsearch was working!
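For reference, the snapshot on the source cluster was taken with calls roughly like the following; the repository, snapshot, and index names here are placeholders rather than the actual ones:

```
# register a filesystem snapshot repository on the source cluster
curl -X PUT "localhost:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/backups" } }'

# snapshot only the two indices of interest, without the global cluster state
curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true" \
  -H 'Content-Type: application/json' \
  -d '{ "indices": "index_one,index_two", "include_global_state": false }'
```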

I apologize for not having the output at hand; I have left work for the week, so I can't yet post the specifics of my snapshot and restore commands. Does anyone have suggestions about what might have caused the restore via curl to appear to work while leaving the shards red, and why deleting the indices and re-restoring via Kibana went better? I definitely set include_global_state to false when taking the snapshot. If it's still not clear why this happened, I will post more specifics on Monday. Thanks in advance!
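On the dockerized single-node instance, the curl-based restore was, roughly, a repository registration followed by a _restore call; again, the names below are placeholders:

```
# point the new node at the directory where the snapshot files were placed
curl -X PUT "localhost:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/backups" } }'

# restore the two indices from the snapshot
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" \
  -H 'Content-Type: application/json' \
  -d '{ "indices": "index_one,index_two", "include_global_state": false }'

# this is where the shards showed up red
curl "localhost:9200/_cat/shards?v"
```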


Solution

  • It appears that this was simply a permissions issue. I brought the container up with docker-compose and then ran docker-compose exec my_elastic_container /bin/bash /scripts/import-data.sh. That script extracts the gzipped tar file containing the Elasticsearch snapshot from the other cluster. Running it through docker-compose exec means the extraction is done by the container's root user, but the snapshot restore itself is performed by Elasticsearch, which runs as the elasticsearch user. If I run chown -R elasticsearch:root /backups/* after extracting the archive and then make the restore call, everything works, as sketched below. I will do more thorough testing tomorrow and edit this answer if I missed anything.
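A minimal sketch of the sequence that ended up working; the container name and script path are the ones quoted above, while the repository, snapshot, and index names are the same placeholders used earlier:

```
# extract the snapshot archive; docker-compose exec runs this as the container's root user
docker-compose exec my_elastic_container /bin/bash /scripts/import-data.sh

# hand the extracted snapshot files over to the elasticsearch user before restoring
docker-compose exec my_elastic_container chown -R elasticsearch:root /backups/*

# now the restore call succeeds and the shards come up green
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" \
  -H 'Content-Type: application/json' \
  -d '{ "indices": "index_one,index_two", "include_global_state": false }'
```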