We are using a Packstack installation of OpenStack Train with NFS as the backend driver for Cinder.
We are unable to create VMs from qcow2 images that were built from a Cinder volume.
Message: Build of instance 8056996a-487c-4730-9ee0-f55dbf2fc320 aborted: Volume 7506ee68-6c9f-427c-bb37-ab6213de1b8e did not finish being created even after we waited 80 seconds or 27 attempts. And its status is error.
Code: 500
Details:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2168, in _do_build_and_run_instance
    filter_properties, request_spec)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2478, in _build_and_run_instance
    bdms=block_device_mapping, tb=tb)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2428, in _build_and_run_instance
    request_group_resource_providers_mapping) as resources:
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2656, in _build_resources
    reason=e.format_message())
BuildAbortException: Build of instance 8056996a-487c-4730-9ee0-f55dbf2fc320 aborted: Volume 7506ee68-6c9f-427c-bb37-ab6213de1b8e did not finish being created even after we waited 80 seconds or 27 attempts. And its status is error.
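For reference, the failure can be reproduced with a boot-from-volume request along these lines (the flavor, network, image name, and size below are placeholders, not our exact values):

openstack server create --flavor m1.small --network private \
    --image my-qcow2-image --boot-from-volume 20 test-vm

The instance goes to ERROR with the BuildAbortException above, and the volume that Nova asked Cinder to create from the image ends up in error status.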
Here are the key configuration options set in cinder.conf (/etc/cinder/cinder.conf):
[DEFAULT]
enabled_backends = nfs
default_volume_type = nfstype

[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfsbackend
nfs_shares_config = /etc/cinder/nfs_shares.txt
nfs_mount_options = rw
nfs_qcow2_volumes = true
nfs_snapshot_support = false
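For reference, nfs_shares_config points to a plain-text file listing one NFS export per line in host:/export form. A minimal sketch of /etc/cinder/nfs_shares.txt, assuming a hypothetical NFS server at 192.168.1.50 exporting /srv/cinder:

192.168.1.50:/srv/cinder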
The strange part of this issue is that these same qcow2 images can be used when we rebuild OpenStack from scratch; however, any image created on the new setup continues to have the same issue.
I am unable to post the entire cinder.conf or the log files due to size limitations. Please let me know if I can post them some other way.
Managed to fix this by setting the following parameter in the cinder.conf file (/etc/cinder/cinder.conf):
verify_glance_signatures = disabled
It turns out that Cinder has trouble verifying the signature metadata of the images created in Glance. Setting this parameter to disabled resolves the error for any new images you create.
To fix the issue for an existing image, use the command below:
openstack image unset --property signature_verified <glance-image-id>
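For example, assuming the affected image is named my-qcow2-image (a placeholder), you can look up its ID, confirm the property is present, and then remove it:

IMAGE_ID=$(openstack image show my-qcow2-image -f value -c id)
openstack image show "$IMAGE_ID" -c properties
openstack image unset --property signature_verified "$IMAGE_ID"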
To allow snapshots of NFS volumes, use the following parameter:
nas_secure_file_operations = False
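Putting the two Cinder changes together, the additions to /etc/cinder/cinder.conf look roughly like this (verify_glance_signatures is a [DEFAULT] option, while nas_secure_file_operations is a per-backend option, so it goes in our [nfs] section):

[DEFAULT]
verify_glance_signatures = disabled

[nfs]
nas_secure_file_operations = False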
Restart the Cinder services:
- openstack-cinder-api
- openstack-cinder-scheduler
- openstack-cinder-volume
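On a Packstack (CentOS) install these are systemd units, so for example:

systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume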
If you have a problem loading a raw image, try the following parameters in the [DEFAULT] section of the nova.conf file (/etc/nova/nova.conf):
block_device_allocate_retries=400
block_device_allocate_retries_interval=3
This gives Nova more time to wait for large volumes to finish being created (400 retries at a 3-second interval, i.e. up to 20 minutes) instead of aborting the build.
Restart the Nova services:
- openstack-nova-compute
- openstack-nova-scheduler
- openstack-nova-conductor
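Likewise for Nova, followed by a quick check that a volume built from a qcow2 image now reaches the available status (image name and size are placeholders):

systemctl restart openstack-nova-compute openstack-nova-scheduler openstack-nova-conductor
openstack volume create --image my-qcow2-image --size 20 test-vol
openstack volume show test-vol -c status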