I am running awx-operator on a k3s cluster. I have a role, `update_host_group`, that tries to use the AWX REST API to manipulate Ansible groups:
```yaml
---
- name: Update Ansible Tower inventory
  uri:
    url: https://awx.example.com/api/v2/groups/36/hosts
    user: "{{ admin_username }}"
    password: "{{ admin_password }}"
    method: 'POST'
    body: '{ "name" : "{{ update_ati_fqdn }}" }'
    force_basic_auth: yes
    status_code: 201
    body_format: json
    validate_certs: no
```
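As an aside, if the `awx.awx` collection is available in the execution environment, the same change can be made without hand-rolling the REST call. A sketch using the `awx.awx.group` module (the group name and the `controller_*` values are assumptions; adjust to your setup):

```yaml
- name: Add host to group via the awx.awx collection
  awx.awx.group:
    name: my_group                     # assumed group name (group 36 above)
    inventory: "AWX Meta Inventory"
    hosts:
      - "{{ update_ati_fqdn }}"
    preserve_existing_hosts: true      # don't replace hosts already in the group
    controller_host: https://awx.example.com
    controller_username: "{{ admin_username }}"
    controller_password: "{{ admin_password }}"
    validate_certs: no
```

Note this still needs the pod to reach the controller by name, so it would hit the same DNS problem described below.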
I created an inventory named "AWX Meta Inventory" which has just one host, localhost. I also created a job template, "Run Update Host Group Playbook", that executes a playbook that uses the role `update_host_group`. When I launch this template I get this error:
```
TASK [update_host_group : Update Ansible Tower inventory] **********************
fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 0, "msg": "Status code was
-1 and not [201]: Request failed: <urlopen error [Errno -2] Name or service not known>",
"redirected": false, "status": -1, "url": "https://awx.example.com"}
```
Note: obviously, don't actually use example.com; I use my real domain name.
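For context, `Errno -2` in that traceback is the resolver's `EAI_NONAME` ("Name or service not known"): the pod running the job cannot resolve the controller's hostname at all, so this is a DNS problem rather than an AWX API problem. A minimal sketch showing the distinction (the `.invalid` hostname is a deliberately unresolvable placeholder):

```python
import socket

def resolves(host: str) -> bool:
    """Return True if the system resolver can resolve host."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        # On glibc, errno -2 (EAI_NONAME) is exactly the
        # "Name or service not known" seen in the task output.
        return False

print(resolves("localhost"))            # resolvable on any machine
print(resolves("awx.example.invalid"))  # RFC 2606 .invalid never resolves
```

Running the same check from inside the job pod (rather than your workstation) tells you whether the cluster's DNS knows about your AWX hostname.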
UPDATE:
I have tried adding `hostAliases` to my Kubernetes resources, but I am not well versed in k8s administration. I looked at the docs here: https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
But of course that doesn't tell me where in the awx deployment to add the `hostAliases` stanza, so I am just guessing and breaking things. Does anyone know where I need to put my `hostAliases`?
This is more of a workaround; I am sure there is probably a better solution. I read the documentation here: https://docs.ansible.com/automation-controller/latest/html/administration/containers_instance_groups.html#customize-the-pod-spec
Then I edited the custom pod spec and added a `hostAliases` stanza alongside the `containers:` line:

```yaml
hostAliases:
  - ip: 10.0.3.52
    hostnames:
      - awx.example.com
```

Then I clicked the apply button, and my task worked (I had to update `status_code` to look for status code 204).
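For anyone else hunting for the right spot: `hostAliases` is a pod-spec-level field, a sibling of `containers`, so in the container group's custom pod spec it belongs at the top level of `spec`. A minimal sketch of the whole spec (the IP, hostname, and execution-environment image are placeholders from my setup; yours will differ):

```yaml
apiVersion: v1
kind: Pod
spec:
  hostAliases:              # same indentation level as containers
    - ip: 10.0.3.52
      hostnames:
        - awx.example.com
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest
```

Nesting `hostAliases` inside a container entry instead of at the `spec` level is the easy mistake to make here, and the API server will reject it.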