LXD cluster with Juju

Hi,
I am using Juju with LXD clustering. I set up three virtual machines (node1, node2, node3) as cluster nodes on the same VirtualBox host.
The cluster was created with lxd init. Then, on node1, I installed Juju (snap install juju --classic). From node1 it is not possible to create a controller on node2 or node3. Likewise, deploying an application works fine on node1, but it is not possible to deploy any application on node2 or node3.

Is there any issue?

Hi there, thanks for trying out the LXD cluster work. We’ve got a post walking through the setup here:

Typically you’d set up the cluster and have a trust password configured on the nodes. From there you’d have the Juju client on another machine (your main laptop?) that can reach the cluster over the network, and go through the add-cloud steps for talking to the cluster. Note that it’s not the same as bootstrapping “localhost”. If you go through add-cloud and add the cluster, then spreading the deployed things (controllers, workloads, etc.) is handled by the cluster itself. Juju just asks the cluster API for a new machine and it comes up somewhere in the cluster.
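
For reference, a minimal sketch of that flow from the client machine (the cloud name lxd-remote and the endpoint address are placeholders; the exact prompts depend on your Juju version):

# On the client machine, not on one of the cluster nodes
juju add-cloud                    # interactive: choose the "lxd" cloud type and give the
                                  # cluster endpoint, e.g. https://<cluster-node-ip>:8443
juju add-credential lxd-remote    # supply the cluster's trust password
juju bootstrap lxd-remote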

Does that help?

I am following this one: https://docs.jujucharms.com/2.5/en/clouds-lxd-advanced. Is that okay as well? I think I have understood it so far… let me check the whole procedure again. Thank you for your reply :slight_smile:

@rick_h

At the prompt “Enter the API endpoint url for the remote LXD server:” I entered https://10.55.60.244:8443, using the IP address of one of the cluster nodes. Then I added a credential for ‘lxd-remote’, but when I run the juju bootstrap lxd-remote command, the following happens:

root@node1:~# juju bootstrap lxd-remote
Creating Juju controller “lxd-remote-default” on lxd-remote/default
Looking for packaged Juju agent version 2.7-beta1 for amd64
No packaged binary found, preparing local Juju agent binary
To configure your system to better support LXD containers, please see: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
Launching controller instance(s) on lxd-remote/default…

  • juju-e5250d-0 (arch=amd64)
    Installing Juju agent on bootstrap instance

It has been stuck at this point for almost 15 minutes, with nothing proceeding past the last line. Any kind of help with this one? I even tried with --to zone=“node_name” as well… nothing happened.

Hmm, so it looks like Juju is trying to get it going, and that will involve LXD pulling down the image for the machine. I’d check the LXD logs on the cluster instance and see if you can see what’s up. Another thing is to look at the debug output by running juju bootstrap --debug and see if there are any more details in there. Your first time bringing up a machine on a cluster might take a while depending on things.
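
For example (the instance name is taken from the output above; adjust it to whatever juju-* container LXD actually created):

juju bootstrap lxd-remote --debug     # much more detail about where bootstrap is stuck

# On one of the cluster nodes:
lxc list                              # is the bootstrap container running, and on which node?
lxc info --show-log juju-e5250d-0     # console log of the bootstrap container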


juju bootstrap works on the existing Juju node, but when I try to use a different node I get the following log via --debug:

Attempting to connect to 10.101.153.77:22
18:56:06 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:14 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:22 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:30 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:38 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host
18:56:46 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.101.153.77 failed: ssh: connect to host 10.101.153.77 port 22: No route to host

As far as I know, Juju is installed on only one node in the cluster. So I have Juju on node1, but node2 is part of the cluster without Juju.
Any suggestion?

There’s an LXD-Cluster charm that used to make this process trivial, alas it’s fallen into disrepair. I might go back and look at it again sometime.

So is it a problem on your side, or did I make some mistake? And of course, thank you for your reply :slight_smile:

That looks a lot like we are being told an IP address of a container which is not routable from the machine you are running “juju” on (e.g. you are talking to an LXD agent that is exposed on the network, but that cluster creates containers that are only exposed on a local bridge inside the machine).
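
A quick way to confirm this from the machine running the Juju client (assuming a Linux client; the address is the one from the debug log above):

ip route get 10.101.153.77       # shows which interface/gateway would be used to reach it, if any
nc -vz -w 5 10.101.153.77 22     # "No route to host" here means it is plain networking, not Juju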

Yes, as @jameinel points out, it looks as though you have containers deploying onto the default LXD bridge (lxdbr0) inside the nodes, which means they do not have ingress from outside.

If this is the issue, the solution is to manually bridge whatever device on the nodes is providing external connectivity, and specify that as the bridge to use when you run “lxd init”. Do this on each node of the cluster so they are homogeneous.

There is more detail on manually bridging for LXD clusters here.

I first created the LXD bridge (lxc network create lxdbr0), then assigned that bridge through lxd init on all nodes. Is there any difference between a dynamically and a manually created bridge network?

@jameinel please have a look at this answer.

These are my bridge network details from the juju5 node in the cluster:

root@juju5:~# lxc network show lxdbr0
config:
  ipv4.address: 10.78.54.1/24
  ipv4.nat: "true"
  ipv6.address: none
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by: []
managed: true
status: Created
locations:
- juju1
- juju5

But when I try to bootstrap onto the juju5 node from the juju1 node, it shows the following debug output:

Attempting to connect to 10.78.54.137:22
10:52:09 DEBUG juju.provider.common bootstrap.go:576 connection attempt for 10.78.54.137 failed: ssh: connect to host 10.78.54.137 port 22: No route to host

It is also possible to create a normal LXC container on either of the two cluster nodes, but it causes problems when I use Juju.
I am totally confused here; how should I continue?

@rick_h is any kind of help possible here?

This is a NAT bridge (ipv4.nat: true) using lxdbr0, which is local to only the machine it is on. That means the containers can get to the outside world (via NAT), but the outside world cannot see them.
If you want the containers externally visible, you need to create a bridge on your network interface (ens0, etc.) and use that as the bridge instead of lxdbr0. (We usually use a name like br-ens0 to indicate it is the bridge with ens0 on it.)
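
One way to check what new containers will attach to on a cluster node (a sketch; output columns vary by LXD version):

lxc network list            # lxdbr0 shows as a managed bridge (the NAT one); a host bridge like br-ens0 would not be NAT'd
lxc profile show default    # the eth0 device's "parent" is the bridge new containers attach to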

To be honest, it is very confusing. My cluster is connected with lxdbr0, so why do I need another bridge network? If I can create an LXC container in the cluster without Juju, why does Juju not work on the same lxdbr0 network? Sorry to say, I am drawing a blank here.

By the way, thank you for your reply, but I don’t understand why I am so confused here!

They are not on the same network. There is a separate lxdbr0 network inside each node.

The LXD servers themselves are connected by the network that nodes are on. So the cluster can coordinate and create containers, but those containers are on bridges inside the nodes that the world outside knows nothing about.

So you need to manually bridge the node devices that communicate with the outside world, so that traffic can be routed to the containers from outside.

Are the nodes running Bionic? If so, post the contents of one of the /etc/netplan/{your config}.yaml files and we can lend a hand regarding how to set up the bridge.

@manadart Yes, Bionic. The YAML file:

# network: {config: disabled}
network:
    ethernets:
        enp0s3:
            dhcp4: true
    version: 2

I think this will work. Caveat: if it does not, you may not be able to get into those nodes any longer.

Change the file so that it looks like this:

network:
    version: 2
    ethernets:
        enp0s3:
            dhcp4: false
    bridges:
        br0:
            interfaces: [enp0s3]
            dhcp4: true
            parameters:
                forward-delay: 15
                stp: false

Then run “sudo netplan apply”.

If all is well, re-run “lxd init” and use “br0” as the existing bridge.
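
For reference, the relevant lxd init prompts look roughly like this (exact wording varies by LXD version):

Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: br0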

If anyone else can vet this, by all means please chime in.

Edit: Config correction for future readers of the post.

Yes, each node in the cluster needs to be set up so that containers on that node will be externally accessible.

Error in network definition /etc/netplan/50-cloud-init.yaml line 3 column 15: expected mapping

Is anything needed after enp0s3? It is showing an error.

There is an error in the YAML… see the update; you need to add the MAC address for it to work.

network:
    version: 2
    ethernets:
        enp0s3:
            match:
                macaddress: ???
            mtu: 1500
            set-name: enp0s3
    bridges:
        br0:
            interfaces: [enp0s3]
            dhcp4: true
            parameters:
                forward-delay: 15
                stp: false
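
To find the MAC address to put in place of ???, on the node itself:

cat /sys/class/net/enp0s3/address
# or
ip link show enp0s3 | grep link/ether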

Finally, it’s done. Thanks to all. Here is what I did, written down for the next person like me:

If you are using VirtualBox: go to VM settings -> Network -> Advanced -> Promiscuous Mode -> Allow All.
Install Juju: snap install juju --edge --classic
Then change /etc/netplan/(yaml file name).yaml on every node to:

network:
    version: 2
    renderer: networkd
    ethernets:
        enp0s3:
            dhcp4: false
    bridges:
        br0:
            interfaces: [enp0s3]
            dhcp4: true
            parameters:
                stp: false
                forward-delay: 0

Then run ‘sudo netplan apply’. You will have to log in again using the new IP address assigned to br0.
[Keep in mind: do this on all nodes before initializing the cluster on the first node.]

Then run lxd init on the first cluster node and use the newly created br0 network. Do the same on all nodes.
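
A quick sanity check after running lxd init on every node (the node name here is a placeholder):

lxc cluster list                               # all nodes should be listed as online
lxc launch ubuntu:18.04 test --target node2    # confirm containers can be placed on a specific node
lxc delete -f test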

Then do the other things as in the documentation, like juju add-cloud and so on…
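
For example, roughly (after juju add-cloud and juju add-credential as described earlier in the thread):

juju bootstrap lxd-remote
juju deploy ubuntu -n 3     # units should spread across the cluster nodes
juju status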
Have fun with Juju on an LXD cluster.
