I'm hitting this error:
│ Error: 1 error occurred:
│ * kinesis_settings must be set when engine_name = "kinesis"
│
│
│
│ with module.replicate_oltp.aws_dms_endpoint.this["rates_kinesis_target"],
│ on .terraform/modules/replicate_oltp/main.tf line 160, in resource "aws_dms_endpoint" "this":
│ 160: resource "aws_dms_endpoint" "this" {
I'm trying to figure out why the terraform-aws-dms module isn't picking up my kinesis_settings variable. Hell, I'm trying to understand how Terraform can even see the structure of the aws_dms_endpoint resource that the module produces dynamically during the plan step.
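For context on how the module consumes this: as far as I can tell, these terraform-aws-modules projects accept the endpoints map loosely typed and pick individual attributes out with lookup()/try(), so nothing validates the shape of kinesis_settings at the variable boundary. A hedged sketch of what the variable declaration presumably looks like; the module's actual variables.tf may differ:

variable "endpoints" {
  description = "Map of objects that define the endpoints to be created"
  type        = any
  default     = {}
}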
Module version: 1.5.1
terraform -version
Terraform v1.1.9
on linux_amd64
Providers required by configuration:
.
├── provider[registry.terraform.io/hashicorp/aws]
├── provider[registry.terraform.io/hashicorp/random]
├── module.defaults
│ ├── provider[registry.terraform.io/hashicorp/aws]
│ └── provider[terraform.io/builtin/terraform]
├── module.dms_write_to_rating_data_sync_policy
│ └── provider[registry.terraform.io/hashicorp/aws] >= 3.35.0
├── module.postgres_dms_instance_access
│ └── provider[registry.terraform.io/hashicorp/aws] >= 3.0.0
└── module.replicate_oltp
└── provider[registry.terraform.io/hashicorp/aws] >= 4.6.0
endpoints = {
rates_kinesis_target = {
endpoint_id = "${local.name}-rates-kinesis-target"
endpoint_type = "target"
engine_name = "kinesis"
kinesis_settings = {
# These options are described in
# https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html
service_access_role_arn = aws_iam_role.dms_write_to_rating_data_sync.arn
stream_arn = data.aws_kinesis_stream.rating_replication_stream.arn
partition_include_schema_table = true
include_partition_value = true
}
  }
}
terraform init -backend-config=envs/dev-main/backend.tf
terraform plan
Terraform should be able to find the kinesis_settings
as described in the documentation.
resource "random_integer" "rates_mapping_rule_id" {
min = 1
max = 500000
}
module "replicate_oltp" {
source = "terraform-aws-modules/dms/aws"
# Question: Why is a brand new module using an old version of a library?
# Answer: Shippo-tf-services has an AWS module (according to @shippo-eric)
# that can't use the latest AWS provider version. Therefore, we need
# to use this out-of-date DMS module version.
version = "1.5.1"
# "normal" is in comparison to peak season for this instances description and
# purpose
repl_subnet_group_name = "${local.name}-normal-season-replication"
repl_subnet_group_description = "The Shippo default VPC for ${var.env_name}"
repl_subnet_group_subnet_ids = aws_db_subnet_group.oltp_primary.subnet_ids
# Instance
repl_instance_allocated_storage = 20
repl_instance_auto_minor_version_upgrade = true
repl_instance_allow_major_version_upgrade = true
repl_instance_apply_immediately = true
repl_instance_engine_version = "3.4.6"
repl_instance_multi_az = false
repl_instance_preferred_maintenance_window = "sun:10:30-sun:14:30"
repl_instance_publicly_accessible = false
repl_instance_class = "dms.t3.medium"
repl_instance_id = "${var.env_name}-normal-season-replication"
repl_instance_vpc_security_group_ids = [module.postgres_dms_instance_access.security_group_id]
# This saves us from `EntityAlreadyExists: Role with name dms-cloudwatch-logs-role already exists.`
# errors on three potential IAM roles shared between many services. These roles
# were created in 2019 in dev-main, and dev-qa, and prod-data but have not
# been confirmed in prod.
create_iam_roles = false
endpoints = {
rates_kinesis_target = {
endpoint_id = "${local.name}-rates-kinesis-target"
endpoint_type = "target"
engine_name = "kinesis"
kinesis_settings = {
service_access_role_arn = aws_iam_role.dms_write_to_rating_data_sync.arn
stream_arn = data.aws_kinesis_stream.rating_replication_stream.arn
partition_include_schema_table = true
include_partition_value = true
}
}
postgresql_source = {
database_name = var.source_db_name
endpoint_id = "${local.name}-postgresql-source"
extra_connection_attributes = "heartbeatFrequency=1;"
endpoint_type = "source"
engine_name = "aurora-postgresql"
port = 5432
username = jsondecode(data.aws_secretsmanager_secret_version.replication_user.secret_string)["login"]
password = jsondecode(data.aws_secretsmanager_secret_version.replication_user.secret_string)["password"]
server_name = data.aws_rds_cluster.oltp.endpoint
# TODO: Setting this to "none" for now, but we should test with `require` to see
# whether we need a root CA cert.
ssl_mode = "none"
}
}
replication_tasks = {
cdc_postgresql_to_kinesis = {
replication_task_id = "${local.name}-postgresql-cdc-to-kinesis-for-rates"
migration_type = "cdc"
replication_task_settings = file("${path.cwd}/task-templates/default_cdc_settings.json")
table_mappings = templatefile("${path.cwd}/task-templates/api_shipment_rate_query_table_mapping.tmpl",
{ tables_to_replicate = [{
rule_id = random_integer.rates_mapping_rule_id.id,
rule_name = "replicate_api_shipment_rate_query",
target_table = "api_shipmentratequery" }] })
source_endpoint_key = "postgresql_source"
target_endpoint_key = "rates_kinesis_target"
}
}
}
At a fundamental level I don't get how endpoint_type and engine_name are associated with the dynamic block in this module. I also don't understand why the dynamic block iterates over a list of objects; that makes it look like the module supports multiple kinesis_settings arguments (see the sketch after the module source below).
resource "aws_dms_endpoint" "this" {
for_each = { for k, v in var.endpoints : k => v if var.create }
certificate_arn = try(aws_dms_certificate.this[each.value.certificate_key].certificate_arn, null)
database_name = lookup(each.value, "database_name", null)
endpoint_id = each.value.endpoint_id
endpoint_type = each.value.endpoint_type
engine_name = each.value.engine_name
extra_connection_attributes = lookup(each.value, "extra_connection_attributes", null)
kms_key_arn = lookup(each.value, "kms_key_arn", null)
password = lookup(each.value, "password", null)
port = lookup(each.value, "port", null)
server_name = lookup(each.value, "server_name", null)
service_access_role = lookup(each.value, "service_access_role", null)
ssl_mode = lookup(each.value, "ssl_mode", null)
username = lookup(each.value, "username", null)
# Skipping elasticsearch and kafka settings
# https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.Kinesis.html
dynamic "kinesis_settings" {
for_each = try([each.value.kinesis_settings], [])
content {
include_control_details = lookup(kinesis_settings.value, "include_control_details", null)
include_null_and_empty = lookup(kinesis_settings.value, "include_null_and_empty", null)
include_partition_value = lookup(kinesis_settings.value, "include_partition_value", null)
include_table_alter_operations = lookup(kinesis_settings.value, "include_table_alter_operations", null)
include_transaction_details = lookup(kinesis_settings.value, "include_transaction_details", null)
message_format = lookup(kinesis_settings.value, "message_format", null)
partition_include_schema_table = lookup(kinesis_settings.value, "partition_include_schema_table", null)
service_access_role_arn = lookup(kinesis_settings.value, "service_access_role_arn", null)
stream_arn = lookup(kinesis_settings.value, "stream_arn", null)
}
  }
}
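Answering my own question about the list, at least partially: try([each.value.kinesis_settings], []) is the standard zero-or-one pattern for dynamic blocks. A dynamic block always iterates a collection, so the single settings object gets wrapped in a one-element list when the key exists, and try() falls back to an empty list when it doesn't, meaning the nested block renders exactly once or not at all, never multiple times. A minimal sketch of the same pattern, using a real resource but hypothetical names and values:

variable "ingress_rule" {
  type    = any
  default = null
}

resource "aws_security_group" "example" {
  name = "dynamic-block-demo"

  # One-element list when var.ingress_rule is set, empty list when null:
  # the nested ingress block is emitted exactly once or not at all.
  dynamic "ingress" {
    for_each = var.ingress_rule != null ? [var.ingress_rule] : []
    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}

(The module uses try() instead of a null check because reading a missing map key raises an error, which try() catches.)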
2022-06-22T23:50:40.484Z [WARN] Provider "registry.terraform.io/hashicorp/aws" produced an invalid plan for module.replicate_oltp.aws_dms_endpoint.this["postgresql_source"], but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .tags: planned value cty.NullVal(cty.Map(cty.String)) does not match config value cty.MapValEmpty(cty.String)
- .redshift_settings: attribute representing nested block must not be unknown itself; set nested attribute values to unknown instead
2022-06-22T23:50:40.485Z [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for module.replicate_oltp.aws_dms_endpoint.this["postgresql_source"]
2022-06-22T23:50:40.485Z [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for module.replicate_oltp.aws_dms_endpoint.this["postgresql_source"]
2022-06-22T23:50:40.485Z [ERROR] vertex "module.replicate_oltp.aws_dms_endpoint.this[\"rates_kinesis_target\"]" error: 1 error occurred:
* kinesis_settings must be set when engine_name = "kinesis"
2022-06-22T23:50:40.485Z [TRACE] vertex "module.replicate_oltp.aws_dms_endpoint.this[\"rates_kinesis_target\"]": visit complete, with errors
2022-06-22T23:50:40.485Z [TRACE] writeChange: recorded Create change for module.replicate_oltp.aws_dms_endpoint.this["postgresql_source"]
2022-06-22T23:50:40.486Z [TRACE] vertex "module.replicate_oltp.aws_dms_endpoint.this[\"postgresql_source\"]": visit complete
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "root" errored, so skipping
2022-06-22T23:50:40.485Z [TRACE] provider.terraform-provider-aws_v4.19.0_x5: Served request: @caller=github.com/hashicorp/terraform-plugin-go@v0.9.1/tfprotov5/tf5server/server.go:791 @module=sdk.proto tf_proto_version=5.2 tf_provider_addr=provider tf_resource_type=aws_dms_endpoint tf_req_id=4fa6f44f-c38e-fbf5-37af-3b50cbc67640 tf_rpc=PlanResourceChange timestamp=2022-06-22T23:50:40.485Z
2022-06-22T23:50:40.486Z [TRACE] vertex "module.replicate_oltp.aws_dms_endpoint.this": dynamic subgraph encountered errors: 1 error occurred:
* kinesis_settings must be set when engine_name = "kinesis"
2022-06-22T23:50:40.486Z [ERROR] vertex "module.replicate_oltp.aws_dms_endpoint.this" error: 1 error occurred:
* kinesis_settings must be set when engine_name = "kinesis"
2022-06-22T23:50:40.486Z [TRACE] vertex "module.replicate_oltp.aws_dms_endpoint.this": visit complete, with errors
2022-06-22T23:50:40.486Z [TRACE] vertex "module.replicate_oltp.aws_dms_endpoint.this (expand)": dynamic subgraph encountered errors: 1 error occurred:
* kinesis_settings must be set when engine_name = "kinesis"
2022-06-22T23:50:40.486Z [ERROR] vertex "module.replicate_oltp.aws_dms_endpoint.this (expand)" error: 1 error occurred:
* kinesis_settings must be set when engine_name = "kinesis"
2022-06-22T23:50:40.486Z [TRACE] vertex "module.replicate_oltp.aws_dms_endpoint.this (expand)": visit complete, with errors
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "module.replicate_oltp.output.endpoints (expand)" errored, so skipping
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "module.replicate_oltp.aws_dms_replication_task.this (expand)" errored, so skipping
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "module.replicate_oltp.output.replication_tasks (expand)" errored, so skipping
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "module.replicate_oltp.aws_dms_event_subscription.this (expand)" errored, so skipping
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "module.replicate_oltp.output.event_subscriptions (expand)" errored, so skipping
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "module.replicate_oltp (close)" errored, so skipping
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/aws\"] (close)" errored, so skipping
2022-06-22T23:50:40.486Z [TRACE] dag/walk: upstream of "root" errored, so skipping
2022-06-22T23:50:40.486Z [INFO] backend/local: plan operation completed
╷
│ Error: 1 error occurred:
│ * kinesis_settings must be set when engine_name = "kinesis"
│
│
│
│ with module.replicate_oltp.aws_dms_endpoint.this["rates_kinesis_target"],
│ on .terraform/modules/replicate_oltp/main.tf line 160, in resource "aws_dms_endpoint" "this":
│ 160: resource "aws_dms_endpoint" "this" {
After a lot of digging yesterday, a co-worker and I found a modification that built a plan correctly: commenting out service_access_role_arn, whose value is unknown at plan time because the IAM role it references hasn't been created yet (compare the "must not be unknown" warning about redshift_settings in the trace above).
kinesis_settings = {
# service_access_role_arn = aws_iam_role.dms_write_to_rating_data_sync.arn
stream_arn = data.aws_kinesis_stream.rating_replication_stream.arn
partition_include_schema_table = true
include_partition_value = true
}
That led to this GitHub issue, https://github.com/hashicorp/terraform/issues/30937,
and finally to this Terraform setup, where the role ARN is assembled from values that are already known at plan time:
kinesis_settings = {
service_access_role_arn = "arn:aws:iam::${module.defaults.aws_account_id}:role/${var.env_name}-dms-assume-kinesis-write-role
stream_arn = data.aws_kinesis_stream.rating_replication_stream.arn
partition_include_schema_table = true
include_partition_value = true
}
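For posterity: this presumably works because every part of that ARN string is known during plan, so the provider sees a fully populated kinesis_settings block instead of one containing an unknown value. If you don't have a defaults module exposing the account ID, a data source gets you the same thing; a hedged sketch (the role name here is our hypothetical one):

data "aws_caller_identity" "current" {}

locals {
  # Known during plan, unlike aws_iam_role.<name>.arn for a role that
  # Terraform hasn't created yet.
  dms_kinesis_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${var.env_name}-dms-assume-kinesis-write-role"
}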