Nova-compute unable to copy-on-write clone from a Glance image

help-needed
openstack

#1

Hi,

With a normal, correct deployment of OpenStack using Ceph as the storage backend, you can use Ceph's copy-on-write cloning to near-instantly clone a Glance image and boot an instance with Nova.
This requires that cephx is configured correctly and that the user client.nova-compute has "rwx" access to the glance pool.
I am also using RAW images, since copy-on-write cloning only works for RAW.
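
For reference, this is roughly the configuration that enables the clone path; a minimal sketch assuming default pool and user names, not my exact files:

# glance-api.conf -- expose direct RBD locations so nova can clone
[DEFAULT]
show_image_direct_url = True

# nova.conf on the compute nodes -- back instance disks with RBD clones
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = nova-compute
rbd_secret_uuid = <libvirt secret uuid>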

However, I have verified that my config is correct, and it still does not work. I have also verified it against a separate staging cluster.

Example of my ceph auth output (keys redacted):

client.cinder-ceph
  key: foobar1==
  caps: [mon] allow r; allow command "osd blacklist"
  caps: [osd] allow rwx
client.glance
  key: foobar2==
  caps: [mon] allow r; allow command "osd blacklist"
  caps: [osd] allow rwx
client.nova-compute
  key: foobar3==
  caps: [mon] allow r; allow command "osd blacklist"
  caps: [osd] allow rwx
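
To double-check the caps on a node, something like this should work (a sketch; the entity name and cap strings are taken from the listing above):

# show the caps currently assigned to the nova user
ceph auth get client.nova-compute

# (re)apply the expected caps if they differ
ceph auth caps client.nova-compute \
  mon 'allow r; allow command "osd blacklist"' \
  osd 'allow rwx'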

The auth setup is identical in our production and staging environments, yet it only fails in production.
I get these messages from nova-compute.log (DEBUG level, but they show the clone being rejected):

2019-09-25 12:24:34.452 87448 DEBUG nova.virt.libvirt.imagebackend [req-bdf268b7-ce68-4a0d-94e8-a040d4d182c7 dcfc6807c683426697a698b2b1db2aa8 809617a6db2641a9b219a32548a9a75e - 2eb5f30ff25e4a7fae1e6a059a6a8587 2eb5f30ff25e4a7fae1e6a059a6a8587] Image locations are: [{'url': 'rbd://b%273bc43fec-6042-11e9-9012-00163e6d2e65%27/glance/106b5da6-e82c-49f3-9ce7-bdba47d5fec4/snap', 'metadata': {}}, {'url': 'rbd://b%273bc43fec-6042-11e9-9012-00163e6d2e65%27/glance/106b5da6-e82c-49f3-9ce7-bdba47d5fec4/snap', 'metadata': {}}] clone /usr/lib/python3/dist-packages/nova/virt/libvirt/imagebackend.py:916
2019-09-25 12:24:34.472 87448 DEBUG nova.virt.libvirt.storage.rbd_utils [req-bdf268b7-ce68-4a0d-94e8-a040d4d182c7 dcfc6807c683426697a698b2b1db2aa8 809617a6db2641a9b219a32548a9a75e - 2eb5f30ff25e4a7fae1e6a059a6a8587 2eb5f30ff25e4a7fae1e6a059a6a8587] rbd://b%273bc43fec-6042-11e9-9012-00163e6d2e65%27/glance/106b5da6-e82c-49f3-9ce7-bdba47d5fec4/snap is in a different ceph cluster is_cloneable /usr/lib/python3/dist-packages/nova/virt/libvirt/storage/rbd_utils.py:209
2019-09-25 12:24:34.487 87448 DEBUG nova.virt.libvirt.storage.rbd_utils [req-bdf268b7-ce68-4a0d-94e8-a040d4d182c7 dcfc6807c683426697a698b2b1db2aa8 809617a6db2641a9b219a32548a9a75e - 2eb5f30ff25e4a7fae1e6a059a6a8587 2eb5f30ff25e4a7fae1e6a059a6a8587] rbd://b%273bc43fec-6042-11e9-9012-00163e6d2e65%27/glance/106b5da6-e82c-49f3-9ce7-bdba47d5fec4/snap is in a different ceph cluster is_cloneable /usr/lib/python3/dist-packages/nova/virt/libvirt/storage/rbd_utils.py:209
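
What stands out is the fsid in those image location URLs: it is percent-encoded, and once decoded it is wrapped in b'...' like a stringified Python 3 bytes object, so it can never equal the plain fsid string that ceph fsid reports. A quick sketch to illustrate, using the value from the log (the expected cluster fsid is my assumption, read off the same URL):

from urllib.parse import unquote

# fsid segment copied from the rbd:// URL in the log above
url_fsid = unquote('b%273bc43fec-6042-11e9-9012-00163e6d2e65%27')
print(url_fsid)  # b'3bc43fec-6042-11e9-9012-00163e6d2e65'

# the plain fsid that ceph fsid would report on this cluster (assumed)
cluster_fsid = '3bc43fec-6042-11e9-9012-00163e6d2e65'

# nova's is_cloneable compares the two, so the clone path is rejected
print(url_fsid == cluster_fsid)  # False -> "is in a different ceph cluster"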

Please help :slight_smile:


#2

Update on this.
I found the bug below, which describes something very similar. This leads me to believe that I am hitting a known bug and need to update.
https://bugs.launchpad.net/cinder/+bug/1816468

My plan is to patch the compute nodes and controllers and try again.
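
Before and after patching I'll compare the installed package versions against the versions listed as fixed on the bug; roughly like this (the exact package names are my assumption, adjust for your deployment):

# list the installed OpenStack packages that could carry the fix
dpkg -l | grep -E 'python3?-(nova|cinder|glance|os-brick)'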


#3

That bug does appear to be affecting you, but it doesn't look like the fix has been released yet for you to apply directly. You'll want to wait until the bug is marked Fix Released for your OpenStack revision and Ubuntu release before patching and re-attempting.
Edit: I may be mistaken about the release status.