amazon-web-services, amazon-elb, marklogic-9, aws-ebs

Instance in AWS Auto Scaling group failed the health check (wasn't terminated), but its EBS volume is still attached


We have a 3-node MarkLogic 9 setup in AWS (we have set up an ELB and an Auto Scaling group). Whenever an instance fails the ELB health check, the EBS volume stays attached to the failed instance. Because of this, MarkLogic is unable to start on the newly spawned instance. Has anyone come across this, and do you have any idea how to resolve it?


Solution

  • If an instance fails an Elastic Load Balancing health check, the load balancer simply stops sending traffic to that instance. It keeps performing the health check and resumes sending traffic once the instance passes again.

    The load balancer itself never terminates an instance. Termination and replacement are handled by the Auto Scaling group, and only when the group's health check type is set to ELB; with the default EC2 health check type, an instance that merely fails the ELB check keeps running, with its EBS volume still attached.
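As a rough sketch of how to act on this, the Auto Scaling group can be told to treat ELB health check failures as instance failures, and a volume stuck on a dead instance can be force-detached and reattached to the replacement node. All names here (`my-marklogic-asg`, `vol-0abc123`, `i-0def456`, `/dev/sdf`) are placeholders to substitute with your own:

```shell
# 1. Make the Auto Scaling group terminate and replace instances
#    that fail the ELB health check (default is EC2-only checks):
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-marklogic-asg \
    --health-check-type ELB \
    --health-check-grace-period 300

# 2. Free the EBS volume still attached to the failed instance
#    so the new instance can pick it up:
aws ec2 detach-volume --volume-id vol-0abc123 --force

# Wait until the volume is actually available before reattaching:
aws ec2 wait volume-available --volume-ids vol-0abc123

# 3. Attach the volume to the replacement instance:
aws ec2 attach-volume \
    --volume-id vol-0abc123 \
    --instance-id i-0def456 \
    --device /dev/sdf
```

Note that `--force` detaches without giving the old instance a chance to flush buffers, which is acceptable here only because that instance is already unhealthy; in an automated setup, steps 2-3 would typically live in the new instance's user-data or a lifecycle hook rather than being run by hand.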