There are a number of Windows EC2 instances running certain legacy applications that write application logs to the default path, so we've attached a secondary volume (a 200 GB D: drive for the application and its logs). I'm now trying to create CloudWatch alarms for disk space using Terraform, but although an alarm is created for each instance, they are all stuck in the INSUFFICIENT_DATA state forever.
The Terraform snippet for the CloudWatch alarm is as follows:
data "aws_instances" "this" {
filter {
name = "image-id"
values = [data.aws_ami.this["windows"].image_id]
}
}
resource "aws_cloudwatch_metric_alarm" "this" {
for_each = toset(data.aws_instances.this.ids)
alarm_name = "Disk-space-${each.value}"
comparison_operator = "LessThanOrEqualToThreshold"
evaluation_periods = "1"
metric_name = "LogicalDisk % Free Space"
namespace = "CWAgent"
period = "180"
statistic = "Average"
threshold = "20"
alarm_description = "This metric monitors free space on application drive"
actions_enabled = "true"
alarm_actions = ["arn:aws:sns:xxxxxxx]
insufficient_data_actions = []
#treat_missing_data = "notBreaching"
dimensions = {
InstanceId = each.value
Instance = "D:"
}
}
I'm guessing I've got the dimensions wrong. I also tried including path = "/" and device = "xvda" in the dimensions, but it still does not work. Any suggestions, please?
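For what it's worth, path and device look like dimensions from the agent's Linux disk plugin. For the Windows LogicalDisk performance counter, I'd expect the per-drive datapoints to carry dimensions along these lines (a sketch only; the "instance" and "objectname" names are assumed from the agent's Windows counter behaviour, not confirmed against my setup):

  # Hypothetical per-drive dimensions for the Windows "LogicalDisk % Free Space"
  # metric. "instance" and "objectname" are assumed dimension names; InstanceId
  # is only present if the agent config appends it via append_dimensions.
  dimensions = {
    InstanceId = each.value
    instance   = "D:"
    objectname = "LogicalDisk"
  }

An alarm only leaves INSUFFICIENT_DATA when its dimensions match a published metric series exactly, so a per-drive alarm has to use the full dimension set the agent actually emits.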
After a bit of R&D, apparently, what works for metric_name = "LogicalDisk % Free Space"
is:
dimensions = {
InstanceId = each.value
}
No other dimensions are accepted. The above evaluates the total logicaldisk free space %age of the windows ec2 instance and triggers actions.
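That behaviour would make sense if the agent is rolling the per-disk datapoints up to a single InstanceId dimension, which is what the agent wizard's default aggregation setting does. A minimal sketch of the relevant agent config, written here as a Terraform-managed SSM parameter (the parameter name and the D:-only resources list are my assumptions, not values confirmed from our environment):

resource "aws_ssm_parameter" "cw_agent_config" {
  # Hypothetical parameter name; instances would load it with the agent's
  # fetch-config action pointed at ssm:<parameter name>.
  name = "/cloudwatch-agent/windows-disk"
  type = "String"

  value = jsonencode({
    metrics = {
      # Adds InstanceId to every datapoint the agent publishes.
      # "$$" escapes Terraform interpolation so the agent receives the
      # literal ${aws:InstanceId} placeholder.
      append_dimensions = {
        InstanceId = "$${aws:InstanceId}"
      }
      # Also publishes an aggregated series carrying ONLY the InstanceId
      # dimension - the series the InstanceId-only alarm above matches.
      aggregation_dimensions = [["InstanceId"]]
      metrics_collected = {
        LogicalDisk = {
          measurement                 = ["% Free Space"]
          metrics_collection_interval = 60
          # Assumed: only the application drive; "*" would cover all drives.
          resources = ["D:"]
        }
      }
    }
  })
}

If your agent config looks roughly like this, the InstanceId-only alarm tracks the aggregated series; alarming on the D: drive specifically would still require matching the full per-drive dimension set the agent emits.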
I have an open ticket with AWS support and will update here if they provide a different solution.