Hi, I keep getting this error when using Ansible via Kubespray, and I am wondering how to overcome it.
TASK [bootstrap-os : Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)] ********************************************************************************************************************************************************************************************************
task path: /home/dc/xcp-projects/kubespray/roles/bootstrap-os/tasks/main.yml:50
<192.168.10.55> (1, b'\x1b[1;31m==== AUTHENTICATING FOR org.freedesktop.hostname1.set-hostname ===\r\n\x1b[0mAuthentication is required to set the local host name.\r\nMultiple identities can be used for authentication:\r\n 1. test\r\n 2. provision\r\n 3. dc\r\nChoose identity to authenticate as (1-3): \r\n{"msg": "Command failed rc=1, out=, err=\\u001b[0;1;31mCould not set property: Connection timed out\\u001b[0m\\n", "failed": true, "invocation": {"module_args": {"name": "node3", "use": null}}}\r\n', b'Shared connection to 192.168.10.55 closed.\r\n')
<192.168.10.55> Failed to connect to the host via ssh: Shared connection to 192.168.10.55 closed.
<192.168.10.55> ESTABLISH SSH CONNECTION FOR USER: provision
<192.168.10.55> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="provision"' -o ConnectTimeout=10 -oStrictHostKeyChecking=no -o ControlPath=/home/dc/.ansible/cp/c6d70a0b7d 192.168.10.55 '/bin/sh -c '"'"'rm -f -r /home/provision/.ansible/tmp/ansible-tmp-1614373378.5434802-17760837116436/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.10.56> (0, b'', b'')
fatal: [node2]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"name": "node2",
"use": null
}
},
"msg": "Command failed rc=1, out=, err=\u001b[0;1;31mCould not set property: Method call timed out\u001b[0m\n"
}
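The highlighted part of the log is a polkit prompt: setting the hostname goes through the org.freedesktop.hostname1 D-Bus service, and without root privileges polkit asks interactively which identity to authenticate as, which then times out in a non-interactive Ansible run. As a rough manual check (a sketch, assuming the provision user and key paths shown below), the same behaviour can be reproduced over SSH:

# Hypothetical manual check: run hostnamectl as the unprivileged SSH user.
# This should trigger the same polkit identity prompt seen in the Ansible log.
ssh -i /home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key provision@192.168.10.55 \
    'hostnamectl set-hostname node3'
# With sudo it should succeed silently, pointing to a privilege escalation issue:
ssh -i /home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key provision@192.168.10.55 \
    'sudo hostnamectl set-hostname node3'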
My inventory file is as follows:
all:
  hosts:
    node1:
      ansible_host: 192.168.10.54
      ip: 192.168.10.54
      access_ip: 192.168.10.54
    node2:
      ansible_host: 192.168.10.56
      ip: 192.168.10.56
      access_ip: 192.168.10.56
    node3:
      ansible_host: 192.168.10.55
      ip: 192.168.10.55
      access_ip: 192.168.10.55
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
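As a quick sanity check (a sketch, assuming the hosts.yaml path used in the commands further down), the inventory can be parsed and the group membership inspected with ansible-inventory:

# Verify the inventory parses and the groups resolve as expected:
ansible-inventory -i kubespray/inventory/mycluster/hosts.yaml --list
# Or as a tree of groups and hosts:
ansible-inventory -i kubespray/inventory/mycluster/hosts.yaml --graph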
I also have a file which provisions the users in the following manner:
- name: Add a new user named provision
  user:
    name: provision
    create_home: true
    shell: /bin/bash
    password: "{{ provision_password }}"
    groups: sudo
    append: yes

- name: Add a new user named dc
  user:
    name: dc
    create_home: true
    shell: /bin/bash
    password: "{{ provision_password }}"
    groups: sudo
    append: yes

- name: Add provision user to the sudoers
  copy:
    dest: "/etc/sudoers.d/provision"
    content: "provision ALL=(ALL) NOPASSWD: ALL"

- name: Add dc user to the sudoers
  copy:
    dest: "/etc/sudoers.d/dc"
    content: "dc ALL=(ALL) NOPASSWD: ALL"

- name: Disable Root Login
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: "PermitRootLogin no"
    state: present
    backup: yes
  notify:
    - Restart ssh
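Since the sudoers drop-ins above grant NOPASSWD, a quick way to confirm they are actually in effect (a sketch, assuming the same host IPs as in the question) is:

# Check the drop-in for syntax errors on the target host:
ssh provision@192.168.10.55 'sudo visudo -cf /etc/sudoers.d/provision'
# 'sudo -n' fails immediately if a password would be required,
# so this prints OK only when passwordless sudo works:
ssh provision@192.168.10.55 'sudo -n true && echo OK'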
I have run the Ansible command in the following manner:
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" kubespray/cluster.yml -vvv
as well as
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml --user="provision" --ssh-extra-args="-oStrictHostKeyChecking=no" --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" --become-user="provision" kubespray/cluster.yml -vv
Both yield the same error, and interestingly, escalation seems to succeed at earlier points in the run.
After reading this article, https://askubuntu.com/questions/542397/change-default-user-for-authentication, I decided to add the users to the sudo group, but the error still persists.
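For reference, the group membership can be checked on a node like this (a sketch, assuming the users from the provisioning playbook above):

# Confirm both users ended up in the sudo group;
# 'sudo' should appear in each list of supplementary groups:
ssh provision@192.168.10.55 'id provision; id dc'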
Looking at the position in main.yml suggested by the error, it seems this code is possibly causing the issue:
# Workaround for https://github.com/ansible/ansible/issues/42726
# (1/3)
- name: Gather host facts to get ansible_os_family
  setup:
    gather_subset: '!all'
    filter: ansible_*

- name: Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux)
  hostname:
    name: "{{ inventory_hostname }}"
  when:
    - override_system_hostname
    - ansible_os_family not in ['Suse', 'Flatcar Container Linux by Kinvolk', 'ClearLinux'] and not is_fedora_coreos
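To isolate that task from the rest of the playbook, the hostname module can also be run ad hoc against a single node (a sketch, reusing the inventory and key paths from the commands above; the --become flag is the thing being tested here):

# Run the same hostname module ad hoc against node3, with privilege escalation:
ansible -i kubespray/inventory/mycluster/hosts.yaml node3 \
    --user=provision --key-file /home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key \
    --become -m hostname -a 'name=node3' -vvv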
The OS of the hosts is Ubuntu 20.04.2 Server. Is there anything more I can do?
From the Kubespray documentation:
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
As stated, --become is mandatory: it enables the privilege escalation needed for most of the system modifications (like setting the hostname) that Kubespray performs.
With --user=provision you are only setting the SSH user; the tasks will still need privilege escalation.
With --become-user=provision you are only saying that privilege escalation should escalate to the 'provision' user (and you would still need --become for the escalation to happen at all).
In either case, unless the 'provision' user has root permissions (putting it in the root group is probably not enough), the hostname task will keep failing.
For the 'provision' user alone to be sufficient, you would need to make sure it can run hostnamectl set-hostname <some-new-host> without being asked for authentication.
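Putting that together, a corrected invocation would look like this (a sketch, reusing the inventory, user and key paths from the question; --become-user defaults to root, so it can be left implicit):

# Same command as before, plus --become so tasks like hostname can escalate to root:
ansible-playbook -i kubespray/inventory/mycluster/hosts.yaml \
    --user=provision --become \
    --ssh-extra-args="-oStrictHostKeyChecking=no" \
    --key-file "/home/dc/.ssh/xcp_server_k8s_nodes/xcp-k8s-provision-key" \
    kubespray/cluster.yml -vvv

Since the provisioning playbook already grants 'provision' passwordless sudo, no additional --ask-become-pass should be needed.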