We have a common set of variables in our AWS infrastructure that several modules are meant to share: subnet IDs, the VPC ID, and so on.
I want to avoid duplicating these variables in each module's *.tfvars files. Is it possible to make them available to all modules while keeping the modules isolated from each other?
I'm thinking of a kind of core module that could be imported from everywhere. But I'm not sure a module is the right tool, since modules are intended to contain resources, and I simply want to expose a few variables. Is using a module to share variables the right approach? Or how do you cope with this problem? Is this a common pattern, or a bad approach in Terraform?
If you have a set of expressions (including hard-coded literal values) that you want to reuse, then it is valid to write a module which contains only input variable declarations, local values, and output values as a way to model that.
The simplest form of this would be a module that only contains output
blocks whose values are hard-coded literal values, like this:
output "example" {
  value = "result"
}
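For instance, a data-only module holding your shared network values could live in a local directory and be consumed by any configuration that needs it. A minimal sketch (the directory name and all IDs here are hypothetical placeholders):

```hcl
# modules/network-shared/outputs.tf (hypothetical path and values)
output "vpc_id" {
  value = "vpc-0123456789abcdef0"
}

output "subnet_ids" {
  value = [
    "subnet-0123456789abcdef0",
    "subnet-0fedcba9876543210",
  ]
}
```

Any configuration can then call the module and reference its outputs, rather than re-declaring the same values in its own *.tfvars:

```hcl
module "network" {
  source = "../modules/network-shared"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
  subnet_id     = module.network.subnet_ids[0]
}
```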
The official module hashicorp/subnets/cidr
is an example of that: it doesn't declare any resources of its own, and instead it just encapsulates some logic for calculating a set of subnets based on some input variables.
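A call to that module looks roughly like this (the CIDR layout here is arbitrary; see the module's registry page for its exact inputs and outputs):

```hcl
module "subnet_addrs" {
  source = "hashicorp/subnets/cidr"

  base_cidr_block = "10.0.0.0/16"
  networks = [
    { name = "a", new_bits = 8 },
    { name = "b", new_bits = 8 },
  ]
}

# module.subnet_addrs.network_cidr_blocks is a map of name => CIDR block,
# computed entirely from the inputs above; no resources are created.
```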
This is a special case of Data-only Modules where the data comes from inside the module itself, rather than from data sources. A nice thing about modelling shared data in this way is that if you later decide to calculate those results automatically based on data sources then you'll be able to do so, while keeping the details encapsulated. For example, if you define a module which takes an environment name as an input variable and returns derived settings about that environment, the module could contain local logic to calculate those results today but could later determine some of those settings by fetching them from a prescribed remote location, such as AWS SSM Parameter Store, if the need arises.
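A sketch of such an environment-settings module might look like this (all names and values are hypothetical):

```hcl
variable "environment" {
  type = string
}

locals {
  # Hard-coded per-environment settings for now. These lookups could later
  # be replaced by data sources (e.g. aws_ssm_parameter) without changing
  # the module's interface, so callers would not need to change at all.
  settings = {
    staging = {
      instance_type = "t3.small"
      min_nodes     = 1
    }
    production = {
      instance_type = "m5.large"
      min_nodes     = 3
    }
  }
}

output "instance_type" {
  value = local.settings[var.environment].instance_type
}

output "min_nodes" {
  value = local.settings[var.environment].min_nodes
}
```

Callers would write `module "env" { source = "..." environment = "production" }` and reference `module.env.instance_type`, staying insulated from where the settings actually come from.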