aws-lambda, terraform, amazon-rds, endpoint, localstack

Lambda function cannot "translate" RDS endpoint despite pointing directly at it?


I have some infrastructure deployed in LocalStack, running in a Docker container. In particular, there is a Lambda function and an RDS instance, defined below via Terraform.
RDS.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region                  = "us-east-1"
  access_key              = "test"
  secret_key              = "test"
  skip_credentials_validation = true
  skip_metadata_api_check = true
  skip_requesting_account_id = true
  endpoints {
    rds = "http://localhost:4566"  # LocalStack RDS endpoint
  }
}
resource "aws_db_instance" "main_RDS" {
  identifier        = "main-rds"
  allocated_storage = 20
  storage_type      = "gp2"
  engine            = "postgres"
  engine_version    = "14.1"  # Replace with the latest version supported by LocalStack
  instance_class    = "db.t3.micro"  # Adjust if needed
  name              = "veni_vidi_db"
  username          = "admin"
  password          = "admin123"
  skip_final_snapshot = true
}

output "db_instance_endpoint" {
    value = aws_db_instance.main_RDS.endpoint
}

Note that the terraform output of this yields "localhost.localstack.cloud:4510", and the LocalStack browser UI shows the same endpoint. 1. Why does it not sit on port 4566 as defined in the Terraform? And 2. if I instead set rds = "localhost.localstack.cloud:4510" in the Terraform and run terraform apply, the creation hangs endlessly on "aws_db_instance.main_RDS: Refreshing state... [id=main-rds]".

Lambda.tf:

provider "aws" {
  region                  = "us-east-1"
  access_key              = "test"
  secret_key              = "test"
  skip_credentials_validation = true
  skip_requesting_account_id = true

  endpoints {
    rds         = "http://localhost:4566"  # LocalStack RDS endpoint
    lambda      = "http://localhost:4566"
    iam         = "http://localhost:4566"
  }
}

resource "aws_iam_role" "lambda_execution_role" {
  name = "lambda_execution_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "lambda_rds_access" {
  name   = "lambda_rds_access"
  role   = aws_iam_role.lambda_execution_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = [
          "rds:*",
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Effect   = "Allow",
        Resource = "*"
      },
      {
        Action   = "lambda:InvokeFunction",
        Effect   = "Allow",
        Resource = aws_lambda_function.add_user_post.arn
      }
    ]
  })
}

resource "aws_lambda_function" "add_user_post" {
  function_name = "AddUserPost"
  handler       = "addUserPostFunction.lambda_handler"
  runtime       = "python3.9"
  filename      = data.archive_file.lambda_function_add_user_post.output_path
  role          = aws_iam_role.lambda_execution_role.arn
  source_code_hash = filebase64sha256(data.archive_file.lambda_function_add_user_post.output_path)
  timeout       = 30

  layers = [aws_lambda_layer_version.psycopg2_layer.arn]

  environment {
    variables = {
      DB_ENDPOINT = "http://localhost:4566"  # Replace with your actual RDS endpoint
      DB_USER     = "admin"
      DB_PASSWORD = "admin123"
      DB_NAME     = "veni_vidi_db"
    }
  }
}

resource "aws_lambda_layer_version" "psycopg2_layer" {
  filename = "${path.module}/layer/psycopg2-layer.zip"
  layer_name = "psycopg2-layer"
  compatible_runtimes = ["python3.9"]
}

data "archive_file" "lambda_function_add_user_post" {
  type         = "zip"
  source_dir   = "${path.module}/Source"
  output_path  = "${path.module}/lambda_function_add_user_post.zip"
}

The lambda.py function:

import os
import psycopg2

def lambda_handler(event, context):
    try:
        # Database connection details
        db_endpoint = os.environ['DB_ENDPOINT']
        db_user = os.environ['DB_USER']
        db_password = os.environ['DB_PASSWORD']
        db_name = os.environ['DB_NAME']

        # Establish a connection to the database
        conn = psycopg2.connect(
            dbname=db_name, user=db_user, password=db_password, host=db_endpoint
        )
        cur = conn.cursor()

        # Sample data to be added to the RDS table
        user_id = 1
        post_id = 1
        content = 'Yay!'

        # Insert data into the table
        cur.execute(
            "INSERT INTO user_posts_table (UserID, PostID, Content) VALUES (%s, %s, %s)",
            (user_id, post_id, content)
        )
        conn.commit()

        # Close the database connection
        cur.close()
        conn.close()

        return {
            'statusCode': 200,
            'body': 'Item added to RDS successfully',
        }
    except Exception as e:
        print('Error adding item to RDS:', e)

        # Ensure the connection is closed in case of error
        if 'conn' in locals() and conn is not None:
            conn.close()

        return {
            'statusCode': 500,
            'body': 'Error adding item to RDS',
        }

Both infrastructure components deploy correctly. However, running awslocal lambda invoke --function-name AddUserPost output.txt leads to "Error adding item to RDS: could not translate host name "http://localhost:4566" to address: Name or service not known" in the LocalStack logs.

I have tried many different combinations for the RDS endpoint values in RDS.tf and lambda.tf, as well as for the DB_ENDPOINT variable. All of them lead to the same failure to add the data to the RDS: "could not translate host name "[whichever value I tried]" to address: Name or service not known".

I don't know what I'm doing wrong.


Solution

  • The issue was that the lambda.py function's connection call was missing an argument. I was defining

    conn = psycopg2.connect(
        dbname=db_name, user=db_user, password=db_password, host=db_endpoint
    )
    

    by setting host = "host:port" instead of passing the host and the port as two separate arguments:

    db_host = os.environ['DB_HOST']
    db_port = int(os.environ['DB_PORT'])
    db_user = os.environ['DB_USER']
    db_password = os.environ['DB_PASSWORD']
    db_name = os.environ['DB_NAME']

    # Establish a connection to the database
    conn = psycopg2.connect(
        dbname=db_name, user=db_user, password=db_password, host=db_host, port=db_port
    )
    cur = conn.cursor()
    

    The values in the variables.tf are DB_HOST = "localhost.localstack.cloud" and DB_PORT = "4510". In short: the endpoint definitions in the provider sections of the .tf files take a single "host:port"-style URL, but the connection configuration in the .py function needs the host and the port as separate host = ... and port = ... arguments. (This also touches question 1: as far as I can tell, 4566 is only LocalStack's edge port for AWS API calls, while each RDS instance listens on its own port from a separate range, 4510 in this case, which is why the endpoint output never shows 4566.)
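
    A minimal sketch of how this can be wired up without hard-coding the values at all (my own illustration, not from the original variables.tf; it assumes the RDS and Lambda resources are applied as a single Terraform configuration so the resource reference resolves):

    # Hypothetical replacement for the environment block in Lambda.tf.
    # aws_db_instance exports the hostname and the port as separate
    # attributes, so the Lambda never sees a combined "host:port" string.
    environment {
      variables = {
        DB_HOST     = aws_db_instance.main_RDS.address        # hostname only
        DB_PORT     = tostring(aws_db_instance.main_RDS.port) # e.g. "4510"
        DB_USER     = "admin"
        DB_PASSWORD = "admin123"
        DB_NAME     = "veni_vidi_db"
      }
    }

    With this, the host/port pair stays in sync with whatever port LocalStack happens to allocate to the database, rather than being pinned to 4510 in variables.tf.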