I have a project in which I am using Terraform to manage DNS on its own and to deploy code to staging and production. I am using GitHub Actions for CI/CD and deploying to AWS.
My initial setup was working fine; however, because DNS was managed in the same Terraform config, running destroy on staging also knocked out the DNS records for production. A minor rub, since I could just re-run the production job to recreate them, but not ideal.
Instead I split out the DNS config into a separate module, structure shown below.
├── deploy
│   ├── dns.tf
│   ├── ecs.tf
│   ├── load_balancer.tf
│   ├── main.tf
│   ├── modules
│   │   └── dns
│   │       ├── backend.tf
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       └── variables.tf
│   ├── network.tf
│   ├── outputs.tf
│   ├── templates
│   │   └── ecs
│   │       ├── task-assume-role-policy.json
│   │       ├── task-execution-role-policy.json
│   │       └── task-ssm-policy.json
│   └── variables.tf
├── docker-compose.yml
└── setup
    ├── ecr.tf
    ├── iam.tf
    ├── main.tf
    ├── outputs.tf
    └── variables.tf
Within these files I am trying to cross-reference the states so that the outputs from the DNS config can be used within the infrastructure setup. I've included the contents of dns/backend.tf below, as well as the matching remote state config in deploy/main.tf. This setup runs from deploy.yml.
dns/backend.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.89.0"
    }
  }

  backend "s3" {
    bucket               = "rsec-main-website-tf-state"
    key                  = "dns"
    workspace_key_prefix = "tf-state-deploy-env"
    encrypt              = true
    dynamodb_table       = "rsec-main-website-tf-lock"
    region               = "eu-west-2"
  }
}
deploy/main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.89.0"
    }
  }

  backend "s3" {
    bucket               = "rsec-main-website-tf-state"
    key                  = "tf-state-deploy"
    workspace_key_prefix = "tf-state-deploy-env"
    encrypt              = true
    dynamodb_table       = "rsec-main-website-tf-lock"
  }
}

provider "aws" {
  region = "eu-west-2"

  default_tags {
    tags = {
      Environment = terraform.workspace
      Project     = var.project
      Contact     = var.contact
      ManageBy    = "Terraform/deploy"
    }
  }
}

locals {
  prefix = "${var.prefix}-${terraform.workspace}"
}

data "terraform_remote_state" "dns" {
  backend = "s3"
  config = {
    bucket               = "rsec-main-website-tf-state"
    key                  = "dns"
    region               = "eu-west-2"
    workspace_key_prefix = "tf-state-deploy-env"
  }
}

data "aws_region" "current" {}
deploy.yml
name: Build and push images. Cache build artifacts.

on:
  workflow_call:
    inputs:
      version:
        required: true
        type: string

permissions:
  id-token: write
  contents: read

jobs:
  build_and_push_to_ecr:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set Vars
        run: |
          if [[ $GITHUB_REF == 'refs/heads/production' ]]; then
            echo "production" > .workspace
          else
            echo "staging" > .workspace
          fi

      ...SNIP...

      - name: Terraform Apply
        run: |
          export TF_VAR_ecr_app_image="${{ vars.ECR_REPO_APP }}:${{ steps.compute_tags.outputs.app_tag }}"
          export TF_VAR_ecr_proxy_image="${{ vars.ECR_REPO_PROXY }}:${{ steps.compute_tags.outputs.proxy_tag }}"
          workspace=$(cat .workspace)
          cd infra/
          docker compose run --rm terraform -chdir=deploy/modules/dns init
          docker compose run --rm terraform -chdir=deploy/modules/dns workspace select -or-create $workspace
          docker compose run --rm terraform -chdir=deploy/modules/dns apply -auto-approve
          docker compose run --rm terraform -chdir=deploy/ init
          docker compose run --rm terraform -chdir=deploy/ workspace select -or-create $workspace
          docker compose run --rm terraform -chdir=deploy/ apply -auto-approve
The path seems to be created fine in AWS (see below). However, when my pipeline runs, I get this error. The error is triggered after the DNS deployment has completed successfully and the infra deployment has performed its checks/diffs to work out what needs to be created or destroyed. The infra deployment then fails before the changes can be applied.
Does anybody know what I am missing here?
╷
│ Error: Unable to find remote state
│
│ with data.terraform_remote_state.dns,
│ on main.tf line 34, in data "terraform_remote_state" "dns":
│ 34: data "terraform_remote_state" "dns" {
│
│ No stored state was found for the given workspace in the given backend.
==== EDIT ====
As was asked below, the dns path is populated with the expected DNS records and with the outputs defined in my dns/outputs.tf, shown below.
output "cert_arns" {
description = "Map of certificate ARNs for each domain"
value = {
for domain, cert in aws_acm_certificate_validation.cert : domain => cert.certificate_arn
}
}
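For completeness, the intention is to consume those outputs in deploy via the remote state data source, roughly along these lines; the listener and target group resources and the domain key here are placeholders rather than my actual config:
# Illustrative sketch only – the real usage lives in load_balancer.tf, and the
# resource names and domain key are placeholders.
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"

  # Certificate ARN read from the DNS config's remote state outputs.
  certificate_arn = data.terraform_remote_state.dns.outputs.cert_arns["example.com"]

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.main.arn
  }
}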
Hail, future travellers. I'm sure someone can improve on this answer, but this was what I eventually came up with.
The issue here was that Terraform was trying to read from the wrong workspace. This was resolved by adding workspace = terraform.workspace to the terraform_remote_state block in my deploy/main.tf. When a backend supports workspaces, it keeps a separate state file for each one, and terraform_remote_state reads from a specific workspace rather than automatically following the one you are currently in. So whilst both the deployment and DNS tasks were running against the same environment, they were effectively using different workspaces and thus looked for state in different locations.
The DNS module was creating resources and storing state in the staging workspace path, but the main deployment was trying to read state from the default workspace path where nothing existed.
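Concretely, with workspace_key_prefix set as above, the S3 backend stores each non-default workspace's state at <workspace_key_prefix>/<workspace>/<key>, while the default workspace uses the bare key, so the state objects end up roughly like this:
tf-state-deploy-env/staging/dns      <- written by the DNS apply in the staging workspace
tf-state-deploy-env/production/dns   <- written in the production workspace
dns                                  <- the default-workspace path the data source was reading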
Adding workspace = terraform.workspace lets TF know which state file to read from. This keeps the state location consistent between the DNS and deployment configs.
data "terraform_remote_state" "dns" {
backend = "s3"
workspace = terraform.workspace # Added this
config = {
bucket = "rsec-main-website-tf-state"
key = "dns"
region = "eu-west-2"
workspace_key_prefix = "tf-state-deploy-env"
}
}
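Note that the config map still has to mirror the DNS module's own backend block (same bucket, key and workspace_key_prefix); the workspace argument just selects the per-environment path within that backend, and when it is omitted the data source falls back to the default workspace, which is exactly what was happening above.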