My MongoDB Atlas module contains the following section:
terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "1.8.0"
    }
  }
}
Within the module I create various resources related to MongoDB Atlas, such as:
provider "mongodbatlas" {
  public_key  = data.azurerm_key_vault_secret.mongodb_public_api_key.value
  private_key = data.azurerm_key_vault_secret.mongodb_private_api_key.value
}
resource "mongodbatlas_project" "mongodb_project" {
  name   = "arm-${local.suffix}"
  org_id = var.mongodbatlas_org_id

  teams {
    team_id    = mongodbatlas_team.arm_devOps_team.team_id
    role_names = ["GROUP_OWNER"]
  }
}
resource "mongodbatlas_team" "arm_devOps_team" {
  org_id    = var.mongodbatlas_org_id
  name      = var.mongodb_atlas_team
  usernames = ["user"]
}
resource "mongodbatlas_project_api_key" "api_key_assignment" {
  description = mongodbatlas_api_key.project_api_key.description
  project_id  = mongodbatlas_project.mongodb_project.id
  role_names  = ["GROUP_OWNER"]

  depends_on = [
    mongodbatlas_project.mongodb_project
  ]
}
resource "mongodbatlas_advanced_cluster" "mongodb_cluster" {
  project_id     = mongodbatlas_project.mongodb_project.id
  name           = "mongo-${local.mongo_db_name_variable_segment}-${local.suffix}"
  cluster_type   = "REPLICASET"
  backup_enabled = var.mongodb_backup_enabled
  pit_enabled    = var.mongodb_continous_cloud_backup_enabled

  replication_specs {
    num_shards = 1
    zone_name  = "Zone 1"

    region_configs {
      provider_name = "AZURE"
      priority      = 7
      region_name   = var.mongodb_region_name

      auto_scaling {
        compute_enabled            = true
        compute_max_instance_size  = "M40"
        compute_min_instance_size  = "M10"
        compute_scale_down_enabled = true
        disk_gb_enabled            = true
      }

      electable_specs {
        disk_iops     = 0
        instance_size = "M10"
        node_count    = 3
      }

      read_only_specs {
        disk_iops     = 0
        instance_size = "M10"
        node_count    = 0
      }
    }
  }
}
When I test the module from a standalone Terraform file, I use the following configuration:
provider "azurerm" {
  subscription_id = "b02457c4-592b-4165-bd92-38b3d6162a31"
  features {}
}

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.117.1"
    }
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "1.8.0"
    }
  }
}
module "mongodb_atlas" {
  source = "../"

  environment_name              = "qa"
  environment                   = "qa"
  instance_number               = "001"
  app_name                      = "bogus-name"
  location                      = "eastus2"
  admin_group_names             = ["ARM DevOps"]
  mongodbatlas_org_id           = "ID"
  mongodbatlas_public_key       = "PUBLIC_KEY"
  mongodb_atlas_team            = "Arm-DevOps-Test"
  mongodbatlas_private_key_name = "PRIVATE_KEY"
  atlas_mongo_cidr              = "192.XXX.XXX.0/21"
  mongodb_region_name           = "US_EAST_2"
  vega_ips                      = {}
  certificate_name              = "arm-cloud-sailpoint-com"
  shared_key_vault_id           = "id"
  skip_secrets_generation       = true
  cors_addresses                = ["www.google.com"]
  shared_vnet                   = local.vnet
  shared_sub_key_vault          = "kv-arm-shared-qa"
  shared_sub_key_vault_rg       = "rg-arm-shared-qa"
  supported_azure_regions       = { eastus2 = "f" }
}
terraform plan works as expected. My issue starts when this module is integrated into a Terragrunt-driven repo. In that repo, the terragrunt.hcl already generates a required_providers section, so when I try to integrate the Mongo module as a top-level module, terragrunt plan fails with the following:
Error: Duplicate required providers configuration
│
│ on tg_auto_provider.tf line 48, in terraform:
│ 48: required_providers {
│
│ A module may have only one required providers configuration. The required
│ providers were previously configured at required_providers.tf:2,3-21.
I already have similar use cases in my code where modules define a required_providers block, but in all those cases, the modules were used as nested modules, not as top-level modules in Terragrunt.
I understand that if I create a wrapper module around the Mongo module and call that wrapper from Terragrunt, it will avoid the aforementioned error. Is that the correct and recommended way to resolve this issue?
Yes, the standard solution is to create a wrapper module around the Mongo module, like:
module "mongodb_atlas" {
  source = "../../../modules/mongodb-atlas"

  environment_name              = "qa"
  environment                   = "qa"
  instance_number               = "001"
  app_name                      = "bogus-name"
  location                      = "eastus2"
  admin_group_names             = ["ARM DevOps"]
  mongodbatlas_org_id           = "ID"
  mongodbatlas_public_key       = "PUBLIC_KEY"
  mongodb_atlas_team            = "Arm-DevOps-Test"
  mongodbatlas_private_key_name = "PRIVATE_KEY"
  atlas_mongo_cidr              = "192.XXX.XXX.0/21"
  mongodb_region_name           = "US_EAST_2"
  vega_ips                      = {}
  certificate_name              = "arm-cloud-sailpoint-com"
  shared_key_vault_id           = "id"
  skip_secrets_generation       = true
  cors_addresses                = ["www.google.com"]
  shared_vnet                   = local.vnet
  shared_sub_key_vault          = "kv-arm-shared-qa"
  shared_sub_key_vault_rg       = "rg-arm-shared-qa"
  supported_azure_regions       = { eastus2 = "f" }
}
The source points to your existing module. Then, in terragrunt.hcl, set terraform.source to the wrapper directory. The reusable module is now a nested module, so its required_providers block no longer conflicts with the Terragrunt-generated root configuration (the tg_auto_provider.tf shown in your error). The wrapper itself should not declare a required_providers block of its own, since Terragrunt already generates one at the root.
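As a minimal sketch (the wrapper path and the root include layout are assumptions about your repo structure), the component's terragrunt.hcl might look like:

```hcl
# terragrunt.hcl for the MongoDB Atlas component (sketch; paths are assumptions)

# Pull in the root terragrunt.hcl, which generates the provider and
# required_providers files (e.g. tg_auto_provider.tf) at the root level.
include "root" {
  path = find_in_parent_folders()
}

# Point Terragrunt at the wrapper directory, not at the reusable module.
# The wrapper calls the reusable module, so the module's required_providers
# block is evaluated in a nested module and no longer clashes with the
# generated root configuration.
terraform {
  source = "../../../modules/mongodb-atlas-wrapper"
}
```

With this layout, only the Terragrunt-generated files declare providers at the root, and every version constraint in nested modules is merged rather than duplicated.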