I'm new to Terraform, so pardon my inexperience. I have a Terraform file that creates multiple BigQuery tables; the table configuration is specified in a `tables` array. Since this is a consolidated repository, the number of tables in the array now exceeds 200. I need to divide this module into sub-modules, where a parent module calls the sub-modules. Sample code is below; in the real file the `tables = []` list contains more than 200 objects.
Code:

```hcl
module "bq_module_tables" {
  source              = "tfregistry..XXX.com/XXX/bigquery/gcp"
  depends_on          = [module.bq_module_dset]
  project_id          = var.gcp_project_id
  deletion_protection = false #var.deletion_protection
  tables = [
    {
      table_id            = var.tbl_id
      dataset_id          = var.dataset_id
      schema              = "${path.module}/schemas/ssrin113.json"
      clustering          = []
      expiration_time     = null
      deletion_protection = false #var.deletion_protection
      range_partitioning  = null
      time_partitioning = {
        type                     = "DAY",
        field                    = null,
        require_partition_filter = false,
        expiration_ms            = null,
      },
      labels = {
        env      = var.app_environment
        billable = "false"
      }
    },
    {
      table_id            = var.tbl_id_1
      dataset_id          = var.dataset_id_1
      schema              = "${path.module}/schemas/na_formatted_messages.json"
      clustering          = []
      expiration_time     = null
      deletion_protection = false #var.deletion_protection
      range_partitioning  = null
      time_partitioning = {
        type                     = "DAY",
        field                    = null,
        require_partition_filter = false,
        expiration_ms            = null,
      },
      labels = {
        env      = var.app_environment
        billable = "false"
      }
    }
  ]
}
```
I found the solution. Terraform treats all the `.tf` files in a directory as a single configuration when processing, so we can create new `.tf` files in the same folder with different module names to divide a single module into multiple files. The downside is that the tables whose module address changes will be destroyed and recreated. One way to avoid that is to use the `terraform state mv` command to move the existing state entries from the old module address to the new one (see the sketch below).
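For illustration, the split could look roughly like this. It's a minimal sketch, assuming hypothetical file names (`bq_tables_part1.tf`, `bq_tables_part2.tf`) and a second module name `bq_module_tables_2`, none of which come from the original configuration:

```hcl
# bq_tables_part1.tf (hypothetical file name) — first batch of tables
module "bq_module_tables" {
  source              = "tfregistry..XXX.com/XXX/bigquery/gcp"
  depends_on          = [module.bq_module_dset]
  project_id          = var.gcp_project_id
  deletion_protection = false
  tables = [
    # ... roughly the first half of the existing table objects ...
  ]
}

# bq_tables_part2.tf (hypothetical file name) — tables moved out of the original module
module "bq_module_tables_2" {
  source              = "tfregistry..XXX.com/XXX/bigquery/gcp"
  depends_on          = [module.bq_module_dset]
  project_id          = var.gcp_project_id
  deletion_protection = false
  tables = [
    # ... the remaining table objects ...
  ]
}
```

After the split, the tables moved into `bq_module_tables_2` would otherwise be planned for destroy/recreate because their state addresses change. Run `terraform state list` to see the exact addresses the registry module creates, then move each one with something like `terraform state mv 'module.bq_module_tables.<resource address>' 'module.bq_module_tables_2.<resource address>'`; the internal resource naming depends on how the registry module is written, so take the addresses from the state list output rather than from this sketch.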