amazon-web-services, ssh, amazon-ec2, sdn, terraform

Why can't Terraform SSH into an EC2 instance using the supplied example?


I'm using the AWS Two-tier example and I've copied and pasted the whole thing verbatim. terraform apply works right up to the point where it tries to SSH into the newly created EC2 instance. It loops several times, giving this output, before finally failing.

aws_instance.web (remote-exec): Connecting to remote host via SSH...
aws_instance.web (remote-exec):   Host: 54.174.8.144
aws_instance.web (remote-exec):   User: ubuntu
aws_instance.web (remote-exec):   Password: false
aws_instance.web (remote-exec):   Private key: false
aws_instance.web (remote-exec):   SSH Agent: true

Ultimately, it fails with:

Error applying plan:

1 error(s) occurred:

* ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

I've searched around and seen some older posts/issues saying to flip agent = false, and I've tried that as well, with no change and no success. I'm skeptical that this example is broken out of the box, yet I've made no tailoring or modifications that could have broken it. I'm using Terraform 0.6.11, installed via Homebrew, on OS X 10.10.5.

Additional detail:

resource "aws_instance" "web" {
  # The connection block tells our provisioner how to
  # communicate with the resource (instance)
  connection {
    # The default username for our AMI
    user = "ubuntu"

    # The example relies on the local SSH agent for authentication
    # by default; agent = false was added here while troubleshooting,
    # per the older posts mentioned above.
    agent = false
  }

  instance_type = "t1.micro"

  # Lookup the correct AMI based on the region
  # we specified
  ami = "${lookup(var.aws_amis, var.aws_region)}"

  # The name of our SSH keypair we created above.
  key_name = "${aws_key_pair.auth.id}"

  # Our Security group to allow HTTP and SSH access
  vpc_security_group_ids = ["${aws_security_group.default.id}"]

  # We're going to launch into the same subnet as our ELB. In a production
  # environment it's more common to have a separate private subnet for
  # backend instances.
  subnet_id = "${aws_subnet.default.id}"

  # We run a remote provisioner on the instance after creating it.
  # In this case, we just install nginx and start it. By default,
  # this should be on port 80
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
      "sudo apt-get -y install nginx",
      "sudo service nginx start"
    ]
  }
}

And from the variables .tf file:

variable "key_name" {
  description = "Desired name of AWS key pair"
  default = "test-keypair"
}

variable "key_path" {
  description = "key location"
  default = "/Users/n8/dev/play/.ssh/terraform.pub"
}
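
For reference, these variables feed the key pair that key_name = "${aws_key_pair.auth.id}" points at in the instance resource above. The example creates it roughly like this; a sketch reconstructed from the variable names, so check your copy for the exact resource:

resource "aws_key_pair" "auth" {
  # Registers the public half of the key with AWS under the
  # name from var.key_name; file() reads the .pub file found
  # at var.key_path.
  key_name   = "${var.key_name}"
  public_key = "${file(var.key_path)}"
}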

But I can SSH in with this command:

ssh -i ../.ssh/terraform ubuntu@w.x.y.z

Solution

  • You have two possibilities:

    1. Add your key to your ssh-agent:

      ssh-add ../.ssh/terraform
      

      and set agent = true in your configuration. That should work for you (see the sketch after this list).

    2. Modify your configuration to point Terraform at the key file directly with

      key_file = "../.ssh/terraform"
      

      (key_file is the connection attribute on Terraform 0.6.x; newer releases use private_key with the key's contents instead). Please consult the connection documentation for the exact syntax for your version; both variants are sketched below.
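
In Terraform terms, the two possibilities map onto connection blocks roughly like the ones below. This is a sketch against Terraform 0.6.x, meant to sit inside the aws_instance "web" resource; the key path is the one from the question and is assumed to resolve from wherever terraform apply is run:

# Option 1: let the local ssh-agent supply the key
# (requires that ssh-add ../.ssh/terraform was run first)
connection {
  user  = "ubuntu"
  agent = true
}

# Option 2: hand Terraform the private key file directly.
# key_file is the 0.6.x attribute; newer releases replace it
# with private_key, which takes the key's contents via file().
connection {
  user     = "ubuntu"
  agent    = false
  key_file = "../.ssh/terraform"
}

Either way, the point is to make the private half of test-keypair available to Terraform; the manual ssh -i command above already shows that the instance accepts that key.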