terraform, terraform-provider-azure, terraform-template-file

Terraform Organization


I'm new to Terraform and have been spending some time reading up, and I have a couple of questions about the best way to structure the code. My plan is to store all my Terraform code for all my projects in a single repository. The infrastructure (Azure) consists of virtual machines and an AKS cluster. 98% of our VMs are exactly the same except for subscription (uat/prod), size, resource group, name, and maybe a few differences in data disks. My first idea was to create a single environment for virtual machines with a .tfvars file for each VM I want to manage, like this:

└── terraform/
    ├── virtual_machines/
    │   ├── main.tf
    │   ├── variables.tf
    │   ├── outputs.tf
    │   ├── vm1.tfvars
    │   ├── vm2.tfvars
    │   └── vm3.tfvars
    └── aks/
        └── aks_project/
            ├── main.tf
            ├── variables.tf
            └── outputs.tf

Then I can just specify which .tfvars file to point to when applying?
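
If I've understood the workflow right, applying one VM would look something like this (I'm assuming a workspace per VM here so each VM keeps its own state, since otherwise every apply would share one state file):

terraform workspace select vm1           # hypothetical per-VM workspace to isolate state
terraform apply -var-file="vm1.tfvars"   # apply with vm1's values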

The second idea would be to put the shared VM code in a module, then create a directory for each VM, like this:

└── terraform/
    ├── virtual_machines/
    │   ├── modules/
    │   │   └── virtual_machines/
    │   │       ├── main.tf
    │   │       ├── variables.tf
    │   │       └── outputs.tf
    │   ├── vm1/
    │   │   ├── main.tf
    │   │   ├── variables.tf
    │   │   └── outputs.tf
    │   ├── vm2/
    │   │   ├── main.tf
    │   │   ├── variables.tf
    │   │   └── outputs.tf
    │   └── vm3/
    │       ├── main.tf
    │       ├── variables.tf
    │       └── outputs.tf
    └── aks/
        └── aks_project/
            ├── main.tf
            ├── variables.tf
            └── outputs.tf

With each VM directory sourcing the shared module and supplying the variable values needed for that VM?
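
As I picture it, vm1/main.tf would be little more than a module call, something like this (the variable names are just placeholders):

module "vm" {
  source = "../modules/virtual_machines"

  name           = "vm1"
  size           = "Standard_D2s_v3"   # example Azure VM size
  resource_group = "rg-vm1-prod"
}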

Do either of these approaches make sense for what I'm looking to do? What would you suggest or use for organizing your Terraform code?


Solution

  • Your second approach looks like the better start, as it follows HashiCorp's recommendation to use modules whenever possible.

    Then you can define each VM in a separate .tf file (or group them all in one). Each of these files should call your local VM module, which sets your defaults, and you then provide one .tfvars file per environment.

    So you would have something like:

    .
    ├── main.tf
    └── virtual_machines
        ├── modules
        │   └── virtual_machines
        │       ├── README.md
        │       ├── main.tf
        │       ├── outputs.tf
        │       └── variables.tf
        ├── dev.tfvars
        ├── uat.tfvars
        ├── prod.tfvars
        ├── variables.tf
        ├── locals.tf
        ├── vm1.tf
        └── vm2.tf
    

    Where vm1.tf calls your local module and sets its custom parameters, for example:

    module "vm1" {
      source = "./modules/virtual_machines"
    
      name        = "my-vm1-name"
      environment = var.environment
    }
    

    Here var.environment is set by the .tfvars file picked for the target environment: if I'm working on dev, I run my plan or apply with dev.tfvars.
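
    For instance, the variable declaration and the per-environment value would look roughly like this (names are illustrative):

    # variables.tf
    variable "environment" {
      type        = string
      description = "Target environment (dev, uat or prod)"
    }

    # dev.tfvars
    environment = "dev"

    Then you select the file on the command line:

    terraform plan  -var-file="dev.tfvars"
    terraform apply -var-file="dev.tfvars"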

    Also, you may want to consider a tool like Terragrunt to keep this kind of configuration DRY.
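
    With Terragrunt, each VM or environment folder holds only a small terragrunt.hcl that points at the shared module; a rough sketch (paths and inputs are illustrative):

    # live/dev/vm1/terragrunt.hcl
    terraform {
      source = "../../../modules/virtual_machines"
    }

    inputs = {
      name        = "my-vm1-name"
      environment = "dev"
    }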


    I hope this answer helps; it's based on my opinions and experience. You may have a different use case and a different setup, and you may get other answers from smarter people suggesting different approaches, since there's no such thing as a "best Terraform project structure".