OpenStack on LXD Demo


Hi folks,

A bit of self-promotion, but also because I tried a whole bunch of OpenStack-on-LXD guides and struggled to find one that worked until @beisner and @thedac pointed me to

As such, I figured I’d dump it into a video for people crawling youtube:

So thanks to everyone who made the LXD stuff work, we’ve been making great use of it recently.



Awesome - nice work!

Regarding the conjure-up approach: in the time since you ran into issues, we’ve been hammering on it pretty hard in a few different labs. Fixes have been released in the conjure-up snap, and this week we’ll be releasing updates to the corresponding docs at, resolving multiple issues in that wizard-driven approach.
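For anyone who wants to retry the wizard-driven path, this is roughly what getting started looks like (a sketch; assumes snapd is available on the host):

```shell
# Install the conjure-up snap (classic confinement is required)
sudo snap install conjure-up --classic

# Launch the OpenStack spell and follow the interactive wizard
conjure-up openstack
```

The wizard then walks you through choosing a cloud (including localhost/LXD) and deploying the charms.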

For perhaps the more nerdy among us, the charm-guide (long form procedure) is a fun journey.


Cool, i’ll give the conjure-up way another go next week and see how I fair. I do like the nerdy way, you learn a thing or 5 along the way.


Thanks for posting this. It was very helpful. I do have some questions, though I’m not sure if this is the right place to ask.

I was able to get a test OpenStack cluster up and running by following the directions you linked to. I want to set up and test live migration of instances, so I used `juju add-unit nova-compute` to add another Nova compute node. So far so good. However, when I try to live migrate, I run into an issue: the nodes are not using shared storage. Is there a way to change or reconfigure Ceph to be shared between the two compute nodes?
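For reference, the steps I took were roughly (a sketch of the commands involved):

```shell
# Scale out the compute service with a second unit
juju add-unit nova-compute

# Watch the new unit until it reaches "active"
juju status nova-compute

# Confirm the new hypervisor is registered with OpenStack
openstack hypervisor list
```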

Also when looking into the necessary configurations for live migration, one of the things it mentions is that DNS has to be working between the compute nodes. When creating this cluster via juju, I’m not sure DNS is working. The cluster uses the subnet, and the nodes are configured to use as a DNS server. But I don’t think there is any DNS running at all. Any ideas about how to get DNS working among all the juju machines?
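One way to spot-check name resolution between the compute units is something like the following (a sketch; `juju-abc123-7` is a placeholder for whatever hostname the first command actually prints):

```shell
# Get the hostname of one compute unit
juju run --unit nova-compute/1 -- hostname
# Suppose it prints "juju-abc123-7"; then, from the other unit,
# check that the name resolves:
juju ssh nova-compute/0 -- getent hosts juju-abc123-7
```

If `getent` returns nothing, the units cannot resolve each other by name, which would block live migration.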


The “openstack-on-lxd” dev/test scenario does have limitations, as a very dense, all-in-one machine deployment.

For live migration testing, I would recommend a multiple-node deployment instead of an all-in-one machine approach. In our software stack, MAAS plays an important infrastructure role, including providing DNS to the Juju units.

Also, if you have multiple Nova Compute “nodes” on a single machine, they all really point to the same KVM host anyway, so live-migration behavior/success is unpredictable in that setup.

Regarding Ceph-backed instance storage: I’m finding that it is not very well documented, and I have raised a bug to track that documentation gap. To clarify: if you want your instance storage backed by Ceph, and you have adequate network bandwidth to sustain it, the following should do. Keep in mind that every disk read or write becomes a network operation in that case.

  • Add the relation `nova-compute:ceph ceph-mon:client` to set up Ceph as a back-end.
  • Add the relation `nova-compute:ceph-access cinder:ceph-access` to allow Cinder to see those devices.
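As Juju CLI commands, the two relations above would be added like so (a sketch; the `libvirt-image-backend` setting is my assumption about how the nova-compute charm is told to place instance disks in Ceph, so verify it against the charm’s config options):

```shell
# Wire nova-compute up to the Ceph monitors
juju add-relation nova-compute:ceph ceph-mon:client

# Let Cinder-attached Ceph volumes be visible to nova-compute
juju add-relation nova-compute:ceph-access cinder:ceph-access

# (Assumption) Tell nova-compute to back instance disks with RBD
juju config nova-compute libvirt-image-backend=rbd
```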