Juju not correctly setting up network interfaces

Some background.
I have a MAAS-configured cloud with 4 systems successfully set up and PXE booting.
Each system has the same network setup as follows:

MAAS nodes, with a different IP for each node, referenced below as ##:


name      type      subnet        ipaddress
eno1      Physical  10.41.0.0/20   10.41.0.##
eno1.100  VLAN      10.100.0.0/20  10.100.0.##
eno1.150  VLAN      10.150.0.0/20  10.150.0.##
eno1.30   VLAN      10.30.0.0/20   10.30.0.##
eno2      Physical  10.99.0.0/20   unconfigured
eno2.200  VLAN      10.200.0.0/20  10.200.0.##
eno2.250  VLAN      10.250.0.0/20  10.250.0.##
eno2.50   VLAN      10.50.0.0/20   10.50.0.##

Whenever I attempt to deploy configurations like the one below using Juju, the charm fails because it's not getting both IP addresses. The charm hangs at "config-changed" or "leader-changed". When I run `juju debug-hooks ceph-mon/#` I find that the unit only has one of the configured IP addresses.


ceph-mon:
  annotations:
    gui-x: '750'
    gui-y: '500'
  charm: cs:ceph-mon
  num_units: 3
  options:
    ceph-cluster-network: 10.30.0.0/20
    ceph-public-network: 10.50.0.0/20
    expected-osd-count: *expected-osd-count
    monitor-count: *expected-mon-count
    source: *openstack-origin
  to:
  - 'lxd:1'
  - 'lxd:2'
  - 'lxd:3'


The error varies, but from what I can tell the charm (in this case ceph-mon) is only getting one of the two configured IP addresses.

So how do I go about fixing or diagnosing this?
Do I change the MAAS nodes from VLANs to aliases? Each node has an IP address on each of the networks and VLANs and is reachable from all of the other nodes on each of the configured networks, so networking itself is working. Do I need to enable VLANs on the containers, and if so, how do I do that? During deployment, other charms do detect and get set up under the other configured IPs, but it's always just one of the configured IPs.

What is the proper way to make the containers, for lack of a better term, multi-NIC aware?
Other details:

Regardless of which IPs and networks are available on the host node, the containers only seem to see one of the node's configured IPs: the host gets configured with an IP on one subnet, and all of the containers get configured on a different subnet, but it's always the same one.

If there is a "correct" setup for this to work, either in MAAS or Juju, I'm open to suggestions.

The reason I'm asking for help here rather than the MAAS forum is that when I deploy the nodes with MAAS as just plain nodes, all networking works without issue.

Thank you.


It sounds like you need to get the lowdown on spaces and endpoint bindings. Juju will only set up the network devices needed in the container based on how the endpoints of the charm are bound to "spaces".

The idea is that you tell Juju what to tell the charm when it asks “hey, what network info do I need to use to manage things over my admin endpoint, or my public-service endpoint, or data-plane endpoints.”

The idea would be if you had different endpoints bound to different spaces then Juju knows which networks need to be made available in the container. If you don’t specify then it goes a safer/more secure route and doesn’t map every network into every container.

Some light reading on spaces and using the --bind flag to help manage these:
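As a rough sketch of what that looks like in practice (assuming spaces named `public-space` and `cluster-space` have been defined in MAAS over the 10.50.0.0/20 and 10.30.0.0/20 subnets from your post; `public` and `cluster` are the endpoint names the ceph-mon charm exposes for this):

```
# With a MAAS cloud, spaces are defined in MAAS; refresh the model's view of them
juju reload-spaces
juju spaces

# Then bind the charm's endpoints to those spaces at deploy time
juju deploy ceph-mon --bind "public=public-space cluster=cluster-space"
```

With those bindings in place, Juju knows the container needs an interface on both networks.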



Thank you very much for your kind reply and the useful resources.
I've read those resources, but in an attempt to prevent a "wall of text" I was a bit shorter than I probably should have been.

I have attempted the following:
I've attempted this with no spaces set up, with one space covering all the networks, and with a different space for each network.

I’ve also attempted bindings as follows:


ceph-mon:
  bindings:
    "": *public-space

and

ceph-mon:
  bindings:
    "": *public-space
    "": *public-space

and

ceph-mon:
  bindings:
    "": *cluster-space
    "": *public-space


I’ve attempted the binding setup in the bundle.yaml
and
openstack-base-spaces-overlay.yaml

The only thing I've not tried is CLI arguments like the one below.


juju deploy --bind "db=db-space db-admin=admin-space" mysql


mostly because it was my understanding that CLI options were unnecessary since I'm supplying those configurations in the YAML. If I have to do both, what is the proper way to configure an OpenStack bundle with multiple applications, and what would be the proper syntax? Do I do something like following the application order in the bundle?

Thank you.


Hi @nathan-flowers, if you want to specify multiple different network spaces in the bundle besides the default space "", then you will have to specify the endpoint keys by name.

The ceph-mon charm has multiple different relationship endpoints (see the relations table here), but they also have a couple specific endpoints listed in the network space support section in the description.

Example copied from above:

ceph-mon:
  charm: cs:xenial/ceph-mon
  num_units: 1
  bindings:
    public: data-space
    cluster: cluster-space

I highly recommend sticking to specifying bindings only in bundles, though they need a bit of fine-tuning to get going for different setups.


As @szeestraten points out the issue is the binding used. Each charm provides an array of “endpoints” that can be bound to a space (network) and the empty string is just a default for the endpoints not specified. So if you do

bindings:
  "": *public-space
  "": *public-space

you've just stated "if I don't tell you otherwise, use the public-space for any endpoints in this charm",

where you probably want something more like

juju deploy --bind “db=db-space db-admin=admin-space” mysql

which would map to

bindings:
  "": public-space
  db: db-space

This means that the db endpoint would get an address on the db-space and any others (the db-admin endpoint in this case) would use the default value which is public-space. So in this deployment to a container Juju knows that this container needs access to two networks, public-space and db-space.
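As a quick sanity check after deployment, you can ask a unit what address each binding actually resolved to (a sketch assuming a Juju 2.x controller; `public` is one of ceph-mon's named endpoints):

```
# List the spaces the model knows about
juju spaces

# Ask ceph-mon/0 which address its "public" binding resolved to
juju run --unit ceph-mon/0 -- network-get public --ingress-address
```

If the binding resolved to the wrong space, the returned address will be on the wrong subnet, which makes misconfigured bindings easy to spot.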

Hopefully that provides a bit more context.

When specifying bindings, should one also remove the networking subnets from the options?
Example using ceph-mon; please ignore the formatting issues.


ceph-mon:
  bindings:
    ceph-cluster-network: *cluster-space
    ceph-public-network: *public-space
  annotations:
    gui-x: '750'
    gui-y: '500'
  charm: cs:ceph-mon
  num_units: 3
  options:
    ceph-cluster-network: 10.30.0.0/20
    ceph-public-network: 10.50.0.0/20
    expected-osd-count: *expected-osd-count
    monitor-count: *expected-mon-count
    source: *openstack-origin
  to:
  - 'lxd:1'
  - 'lxd:2'
  - 'lxd:3'

vs

ceph-mon:
  bindings:
    ceph-cluster-network: *cluster-space
    ceph-public-network: *public-space
  annotations:
    gui-x: '750'
    gui-y: '500'
  charm: cs:ceph-mon
  num_units: 3
  options:
    expected-osd-count: *expected-osd-count
    monitor-count: *expected-mon-count
    source: *openstack-origin
  to:
  - 'lxd:1'
  - 'lxd:2'
  - 'lxd:3'


thanks.


Yes, you should remove the options. The ceph-mon charm says the following:

NOTE: Existing deployments using ceph-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.
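Putting the thread together, a sketch of the corrected stanza would drop the `ceph-*-network` options and key the bindings by the charm's endpoint names (`public` and `cluster`, per the network space support section of the charm docs) rather than by option names. The space aliases here are the ones used earlier in this thread:

```
ceph-mon:
  charm: cs:ceph-mon
  num_units: 3
  bindings:
    public: *public-space
    cluster: *cluster-space
  options:
    expected-osd-count: *expected-osd-count
    monitor-count: *expected-mon-count
    source: *openstack-origin
  to:
  - 'lxd:1'
  - 'lxd:2'
  - 'lxd:3'
```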