Juju Deploy Units to New Containers in New MAAS-Deployed Machines


I’ve been working through the process of spinning up a DevOps cloud, backed by MAAS machines, with services running in LXD containers. The kind folk over at the MAAS Discourse have cleared up some initial misconceptions I had about the process and gotten me on a clear path, but now I’m hitting a couple of bumps that are more in Juju’s territory. Here goes:

I deployed with this command:
juju deploy bionic/ceph-mon -n 3 --to lxd
In the debug logs I see Juju complaining that the default LXD bridge ‘lxdbr0’ doesn’t exist, and this seems to be blocking any container creation. The MAAS folk believe that Juju should be able to create this bridge during the process, but I don’t know what requirement I’m failing to meet for it to do so. (In MAAS I had added a manual bridge to each node prior to deployment; perhaps that’s the issue?)
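For what it’s worth, I could try creating the bridge by hand on each node before retrying (this is a sketch of a workaround, assuming Juju only needs the bridge to exist and doesn’t insist on creating it itself — I haven’t confirmed that):

```shell
# List the existing LXD networks to confirm lxdbr0 is really absent
lxc network list

# Create the default bridge manually (assumption: a pre-existing
# lxdbr0 satisfies Juju's container networking check)
lxc network create lxdbr0
```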
The end result looks like this:

Every 2.0s: juju status --color maas-hl: Fri Oct 19 10:13:13 2018

Model          Controller    Cloud/Region  Version    SLA          Timestamp
ceph-lxd-maas  maas-homelab  maas-homelab  2.5-beta1  unsupported  10:13:13-07:00

App       Version  Status   Scale  Charm     Store       Rev  OS      Charm version  Notes
ceph-mon  12.2.7   blocked    2/3  ceph-mon  jujucharms   27  ubuntu

Unit         Workload  Agent       Machine  Public address  Ports  Message
ceph-mon/0   waiting   allocating  0/lxd/0                         waiting for machine
ceph-mon/1   blocked   idle        1                               Insufficient peer units to bootstrap cluster (require 3)
ceph-mon/2*  blocked   idle        2                               Insufficient peer units to bootstrap cluster (require 3)

Machine  State    DNS  Inst id  Series  AZ       Message
0        started       xm8wq4   bionic  default  Deployed
0/lxd/0  pending       pending  bionic
1        started       cxwtaw   bionic  default  Deployed
2        started       8qcqnb   bionic  default  Deployed

So it appears only the first unit was assigned to a container (and failed to deploy due to the missing lxdbr0 interface), while the other two ended up directly on the machines’ bare metal.



Should I have used explicit placement directives instead, e.g.:
juju deploy ceph-mon -n 3 --to lxd:0,lxd:1,lxd:2
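And if the current misplacement stands, would something like the following be a sane recovery path? (A sketch only, assuming the ceph-mon cluster hasn’t bootstrapped yet and the units are safe to remove:)

```shell
# Remove the two units that landed on bare metal
juju remove-unit ceph-mon/1 ceph-mon/2

# Re-add them as LXD containers on the same machines
juju add-unit ceph-mon -n 2 --to lxd:1,lxd:2
```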