I am using the Ansible Packer provisioner to execute a playbook on one or more AWS instances brought up by Packer to build a custom Amazon Machine Image (AMI).
The instances are brought up in parallel, but when the Ansible provisioner runs it errors out in cases where it tries to reinstall roles that are already installed, and it asks me to use --ignore-errors:
amazon-ebs.build_ami: [WARNING]: - ansible_role_<REDACTED> was NOT installed successfully: the
amazon-ebs.build_ami: Starting galaxy role install process
amazon-ebs.build_ami: specified role ansible_role_<REDACTED> appears to already exist. Use
amazon-ebs.build_ami: --force to replace it.
amazon-ebs.build_ami: - extracting ansible_role_<REDACTED> to <REDACTED>/.ansible/roles/ansible_role_<REDACTED>
amazon-ebs.build_ami: ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
==> amazon-ebs.build_ami: Provisioning step had errors: Running the cleanup provisioner, if present...
==> amazon-ebs.build_ami: Terminating the source AWS instance...
Is there any way to stop the provisioner from trying to install the roles multiple times, or to pass --ignore-errors to the ansible-galaxy command used by the provisioner?
I have tried setting this up using the galaxy_command option but could not figure it out.
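For reference, this is roughly what that attempt looked like (the flag placement here is my own guess, and it did not work for me):

provisioner "ansible" {
  playbook_file  = "playbook.yml"
  galaxy_file    = "requirements.yml"
  # Guessed placement of --ignore-errors; I could not get this approach to work.
  galaxy_command = "ansible-galaxy --ignore-errors"
}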
I also tried setting a custom roles_path and collections_path, but the provisioner still installed the roles and collections to Ansible's default location.
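This is the sort of thing I tried (the paths below are placeholders, not my actual ones):

provisioner "ansible" {
  playbook_file    = "playbook.yml"
  galaxy_file      = "requirements.yml"
  # Placeholder per-build paths; roles and collections still ended up under
  # the default ~/.ansible directory instead.
  roles_path       = "${path.root}/galaxy/${source.name}/roles"
  collections_path = "${path.root}/galaxy/${source.name}/collections"
}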
I would like to keep building the AMIs in parallel.
Here is a simplified version of the build config:
build {
  dynamic "source" {
    for_each = local.images
    labels   = ["amazon-ebs.ami"]
    content {
      name        = "build_${source.value.value_one}_${source.value.value_two}_${source.value.value_three}"
      value_one   = source.value.value_one
      value_two   = source.value.value_two
      value_three = source.value.value_three
    }
  }

  # ... provisioner blocks (shown below) live inside this build block ...
}
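For context, local.images is a map along these lines (the keys and values are illustrative placeholders, not my real ones); each entry becomes one build running in parallel:

locals {
  images = {
    # Illustrative entries only
    image_a = { value_one = "al2023", value_two = "x86_64", value_three = "base" }
    image_b = { value_one = "al2023", value_two = "arm64", value_three = "base" }
  }
}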
And here is a simplified version of the Ansible provisioner config:
provisioner "ansible" {
playbook_file = "playbook.yml"
galaxy_file = "requirements.yml"
use_proxy = false
ansible_env_vars = [
"ANSIBLE_FORCE_COLOR=1", # Force colored output
"ANSIBLE_PYTHON_INTERPRETER=auto_silent" # Silence warning about Python discovery
]
extra_arguments = [
"--extra-vars", "\"ansible_python_interpreter=/usr/bin/env python3\"", # Find Python 3 on PATH
]
}
I hope someone can suggest a way to either pass --ignore-errors (which is not ideal) or some other way to get around this issue.
This is something of a known conflict between ansible-galaxy and packer-plugin-ansible due to caching issues. I usually solve it with the galaxy_force_install parameter:
provisioner "ansible" {
playbook_file = "playbook.yml"
galaxy_file = "requirements.yml"
galaxy_force_install = true
use_proxy = false
ansible_env_vars = [
"ANSIBLE_FORCE_COLOR=1", # Force colored output
"ANSIBLE_PYTHON_INTERPRETER=auto_silent" # Silence warning about Python discovery
]
extra_arguments = [
"--extra-vars", "\"ansible_python_interpreter=/usr/bin/env python3\"", # Find Python 3 on PATH
]
}
There is a chance this may still cause issues for you with role and collection dependencies; in that situation I would recommend going full force with galaxy_force_with_deps:
provisioner "ansible" {
playbook_file = "playbook.yml"
galaxy_file = "requirements.yml"
galaxy_force_with_deps = true
use_proxy = false
ansible_env_vars = [
"ANSIBLE_FORCE_COLOR=1", # Force colored output
"ANSIBLE_PYTHON_INTERPRETER=auto_silent" # Silence warning about Python discovery
]
extra_arguments = [
"--extra-vars", "\"ansible_python_interpreter=/usr/bin/env python3\"", # Find Python 3 on PATH
]
}
You can find more information about these optional parameters in the Packer Ansible provisioner documentation.