amazon-web-services, docker, rust, docker-compose, amazon-dynamodb

Running AWS DynamoDB inside a Dev Container


I'm trying to set up a dev container for a Rust project that uses DynamoDB as its database.

This is the compose file (adapted from the AWS docs):

services:
  api:    
    build: 
      context: .
      dockerfile: Dockerfile
    tty: true
    volumes:  
      - ../:/workspace:cached
    depends_on:  
      - dynamodb
    env_file:
      - devcontainer.env
  dynamodb:
    command: "-jar DynamoDBLocal.jar -sharedDb -dbPath ./data"
    image: "amazon/dynamodb-local:latest"
    ports:
      - "8000:8000"
    volumes:
      - "./data/dynamodb:/home/dynamodblocal/data"
    working_dir: /home/dynamodblocal

And this is the devcontainer.json

{
  "name": "api",
  "dockerComposeFile": "docker-compose.yaml",
  "service": "api",
  "workspaceFolder": "/workspace",
  "customizations": {
    "vscode": {
      "extensions": [
        "rust-lang.rust-analyzer",
        "tamasfe.even-better-toml",
        "eamodio.gitlens",
        "usernamehw.errorlens",
        "esbenp.prettier-vscode",
        "ms-azuretools.vscode-containers"
      ]
    }
  },
  "remoteUser": "dev",
  "postCreateCommand": "bash .devcontainer/post.sh",
  "forwardPorts": [3000]
}

Now I'm trying to run the following code (taken straight from the AWS docs; I only replaced the hostname):

/// Lists your tables from a local DynamoDB instance by setting the SDK Config's
/// endpoint_url and test_credentials.
#[tokio::main]
async fn main() {
  tracing_subscriber::fmt::init();

  let config = aws_config::defaults(aws_config::BehaviorVersion::latest())
    .test_credentials()
    // DynamoDB run locally uses port 8000 by default.
    .endpoint_url("http://dynamodb:8000")
    .load()
    .await;
  let dynamodb_local_config = aws_sdk_dynamodb::config::Builder::from(&config).build();

  let client = aws_sdk_dynamodb::Client::from_conf(dynamodb_local_config);

  let list_resp = client.list_tables().send().await;
  match list_resp {
    Ok(resp) => {
      println!("Found {} tables", resp.table_names().len());
      for name in resp.table_names() {
        println!("  {}", name);
      }
    }
    Err(err) => eprintln!("Failed to list local dynamodb tables: {err:?}"),
  }
}
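
For reference, the relevant part of my Cargo.toml looks roughly like this (crate versions and feature flags from memory, so treat them as approximate):

# Approximate dependencies for the listing above -- versions/features may differ.
[dependencies]
aws-config = { version = "1", features = ["behavior-version-latest", "test-util"] } # test-util provides test_credentials()
aws-sdk-dynamodb = "1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] } # needed for #[tokio::main]
tracing-subscriber = "0.3"                                          # for tracing_subscriber::fmt::init()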

And it does not work! It just hangs. When I tried a sample boto3 client in a Python REPL inside the same api container, I eventually got a connection timeout error.

What's more confusing is that I CAN cURL the service just fine from the api container.

Running:

curl http://dynamodb:8000

gave me:

{
  "__type": "com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken",
  "Message": "Request must contain either a valid (registered) AWS access key ID or X.509 certificate."
}

What am I missing here?


Solution

  • Finally figured out what was going wrong.

    In the docker-compose.yaml I bind-mount ./data/dynamodb into the dynamodb container to persist its DB file. But when I booted the dev container that folder didn't exist yet, so Docker created it automatically.

    The newly created folder was owned by root instead of my local (host) user. Because bind mounts on Linux keep the host's ownership, the directory was still owned by root inside the DynamoDB container. The user in that container (dynamodblocal) was trying to create its SQLite file in the folder and didn't have permission to do so.

    So to solve this, all I had to do was create the mount directory BEFORE starting the dev container. That way the folder is owned by my local user (UID 1000), and inside the container it is readable and writable by the dynamodblocal user.

    I used the initializeCommand lifecycle hook of the dev container to make sure the directory exists before anything boots; a sketch is below.
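
    A minimal sketch of that hook in devcontainer.json (the mkdir path assumes the compose file and its ./data/dynamodb mount live under .devcontainer/, so adjust it to your layout):

    {
      // ...rest of devcontainer.json unchanged...
      // initializeCommand runs on the HOST before the containers are created,
      // so the bind-mounted folder ends up owned by the host user, not root.
      "initializeCommand": "mkdir -p .devcontainer/data/dynamodb"
    }

    An alternative that sidesteps the ownership problem entirely is to use a named Docker volume instead of a bind mount, at the cost of the data being harder to inspect from the host.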