I've been working on a complex Terraform setup where we have multiple environments (e.g., dev, staging, and prod), each using a different workspace.
Key challenges:
- How do we safely and consistently handle Terraform state across environments with multiple workspaces and varying backend configurations?
- What's the best way to manage shared resources (e.g., VPC, RDS) between environments without the risk of one environment's state interfering with another's?
- How do we implement a blue-green deployment pattern in Terraform for critical services like the ALB and EC2, while ensuring that no downtime occurs during the cutover?
I've explored options like using terraform import for shared resources, but I'm concerned about the maintainability and safety of this approach.
How can we ensure that each environment is isolated yet still shares common modules, and how should state be handled in this scenario?
To answer these questions:
- There are multiple ways to set boundaries; however, I would not recommend Terraform workspaces unless you know what you are doing. Try to set boundaries in AWS instead. A good recommendation is to separate the dev, staging, and prod environments into different AWS accounts. If you set permissions correctly, this provides a natural, safe boundary for your deployments.
- You could create a separate environment called shared and give the other environments read-only access to its state file. Those environments can then read its outputs with a terraform_remote_state data source but cannot change the state; a minimal sketch follows below.
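Here is a minimal sketch of that pattern, assuming the shared environment keeps its state in an S3 bucket and exposes a subnet ID as an output; the bucket name, state key, region, and output name are all hypothetical:

```hcl
# shared/outputs.tf -- expose the values other environments may consume.
output "app_subnet_id" {
  value = aws_subnet.app.id
}

# dev/main.tf -- read the shared state. The data source only reads outputs;
# it can never modify the shared state.
data "terraform_remote_state" "shared" {
  backend = "s3"

  config = {
    bucket = "my-tf-state"              # hypothetical bucket name
    key    = "shared/terraform.tfstate" # hypothetical state key
    region = "eu-west-1"
  }
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.shared.outputs.app_subnet_id
}
```

Note that the read-only guarantee comes from IAM, not from Terraform itself: grant the dev, staging, and prod roles read access (e.g., s3:GetObject) on the shared state object and no write permissions.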
There is an excellent tutorial about this specific topic on the HashiCorp website.
Check out bullet number two above for my preferred way of sharing resources between environments.
Each environment can be isolated at the AWS account level, as I recommended under the first bullet. This is one of the easiest methods of isolating resources.
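As a rough sketch of what that account-level isolation can look like, assuming one AWS account per environment and an IAM role named terraform in each account (the role name, account ID, bucket, and region are all hypothetical): each environment gets its own backend and its own provider, so a plan in dev can never touch prod.

```hcl
# dev/backend.tf -- each environment keeps its own state, e.g. in a bucket
# that lives in that environment's account.
terraform {
  backend "s3" {
    bucket = "dev-tf-state" # hypothetical bucket in the dev account
    key    = "dev/terraform.tfstate"
    region = "eu-west-1"
  }
}

# dev/providers.tf -- assume a role in the dev account; IAM guarantees these
# credentials cannot reach the staging or prod accounts.
provider "aws" {
  region = "eu-west-1"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform" # hypothetical dev account
  }
}

# The same module source can be reused by every environment, so dev, staging,
# and prod stay isolated while still sharing common modules.
module "network" {
  source      = "../modules/network" # hypothetical shared module
  environment = "dev"
}
```

Staging and prod repeat the same layout with their own account IDs and state buckets.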