Juju 2.4.0 has been released

The Juju team is proud to release version 2.4. This release greatly improves running and operating production infrastructure at scale. Improvements to the juju status output, easier maintenance of proper HA, and the ability to guide Juju to the correct management network all help keep your infrastructure running smoothly.

Bionic support

Juju 2.4 fully supports running controllers and workloads on Ubuntu 18.04 LTS (Bionic), including leveraging netplan for network management.
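As a sketch, assuming a localhost (LXD) cloud is available, a Bionic controller and a Bionic workload could be brought up like this (the charm choice is illustrative):

juju bootstrap localhost --bootstrap-series bionic
juju deploy ubuntu --series bionic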

LXD enhancements

LXD functionality has been updated to support the latest LXD 3.0. Juju supports LXD installed as a snap and uses the snap-installed LXD by default when it is present.

A basic model of LXD clustering is now supported with the following conditions:

  • The juju bootstrap of the localhost cloud must be performed on a cluster member (see the example below).
  • Bridge networking on clustered machines must be set up to allow egress traffic to the controller container(s).
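For example, a minimal sketch, assuming the current machine has already joined the LXD cluster (the controller name is illustrative):

# Run on a machine that is already a member of the LXD cluster
juju bootstrap localhost cluster-controller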

Status UX cleanup

The status output, including its ‘Relations’ section, has been cleaned up:

  • When filtering by application name, only direct relations are shown.
  • In tabular format, the ‘relations’ section is no longer visible by default (bug #1633972). Use the ‘--relations’ option to see the section.
  • Empty status output is now clarified: it states whether the model is empty or whether a provided filter did not match anything in the model (bugs #1255786, #1696245 and #1594883).
  • A timestamp has been added to the status output (bug #1765404).
  • The status model table has been reordered to improve consistency between model updates.
  • Status now shows application endpoint binding information (in YAML and JSON formats). For each endpoint, the space to which it is bound is listed.
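For example, to include the relations section in tabular output, or to see endpoint bindings in the richer formats:

juju status --relations
juju status --format yaml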

Controller configuration options for network spaces

Two new controller configuration settings have been introduced (see https://docs.jujucharms.com/2.4/en/controllers-config). These are:

  • juju-mgmt-space
  • juju-ha-space

juju-mgmt-space is the name of the network space used by agents to communicate with controllers. Setting a value for this item limits the IP addresses of controller API endpoints in agent config to those in the space. If the value is misconfigured so that no addresses are exposed to agents, Juju falls back to all available addresses. Juju client communication with controllers is unaffected by this value.

juju-ha-space is the name of the network space used for MongoDB replica-set communication in high availability (HA) setups. It replaces the previously auto-detected space used for such communication. When enabling HA, this value must be set if any member machine in the HA set has more than one IP address available for MongoDB use; otherwise an error is reported. Existing HA replica sets with multiple available addresses report a warning instead of an error, provided the members and addresses remain unchanged.

Using either of these options during bootstrap or enable-ha effectively adds constraints to machine provisioning. The commands will fail with an error if such constraints cannot be satisfied.
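For example, a minimal sketch of a bootstrap using both options (the cloud and space names are illustrative):

juju bootstrap maas --config juju-mgmt-space=mgmt-space --config juju-ha-space=ha-space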

Updates to ‘juju enable-ha’

In Juju 2.4, you can no longer use ‘juju enable-ha’ to demote controllers. Instead, use the usual ‘juju remove-machine X’ command, targeting a controller machine. This gracefully removes the machine as a controller and removes it from the database replica set. This method does allow you to end up with an even number of controllers, which is not a recommended configuration, so after removing a controller it is recommended to run ‘juju enable-ha’ to restore proper redundancy. ‘juju remove-machine --force’ is also available for when the machine is gone and not available to run its own teardown and cleanup. See https://docs.jujucharms.com/2.4/en/controllers-ha.
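For example (the machine ID is illustrative):

juju remove-machine 2   # gracefully demote and remove controller machine 2
juju enable-ha          # restore an odd number of controllers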

New charming tool: Charm Goal State

Charm goal state allows charms to discover relevant information about their deployment that might affect their behavior. For instance, a charm may choose to wait to set up its configuration because it knows there will be enough units to build an HA cluster, rather than first setting itself up as a standalone deployment and then redoing all the configuration to join a cluster once it sees later units.

The key pieces of information a charm needs to discover are:

  • what other peer units have been deployed and their status
  • what remote units exist on the other end of each endpoint, and their status

Charms use a new hook command, goal-state, to query information about their deployment. This hook command prints only YAML or JSON output (default: YAML):

goal-state --format yaml

The output will be a subset of that produced by the juju status command. There will be output for sibling (peer) units and relation state per unit.
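As an illustrative sketch, assuming a peer endpoint named ‘cluster’ (the unit names, endpoint name and timestamps are hypothetical):

units:
  mysql/0:
    status: active
    since: 2018-06-26 17:11:05Z
relations:
  cluster:
    mysql/1:
      status: joining
      since: 2018-06-26 17:11:15Z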

The unit status values are the workload status of the (sibling) peer units. We also use a unit status value of dying when the unit’s life becomes dying. Thus unit status is one of:

  • allocating
  • active
  • waiting
  • blocked
  • error
  • dying

The relation status values are determined per unit and depend on whether the unit has entered or left scope. The possible values are:

  • joining (relation created but unit not yet entered scope)
  • joined (unit has entered scope and relation is active)
  • broken (unit has left, or is preparing to leave scope)
  • suspended (parent cross model relation is suspended)
  • error

By reporting error state, the charm has a chance to determine that goal state may not be reached due to some external cause. As with status, we will report the time since the status changed to allow the charm to empirically guess that a peer may have become stuck if it has not yet reached active state.

Model owner changes

The concept of a model ‘owner’ is becoming obsolete. Models now support multiple users with admin access. None of those users is special.

Cloud credential changes

Cloud credentials are used by models to authenticate communications with the underlying provider as well as to perform authorised operations on this provider.

Juju has always dealt with both cloud credentials stored locally on a user’s client machine and cloud credentials stored remotely on a bootstrapped Juju controller. The distinction was not made clear previously, and this release addresses that ambiguity.

Basic cloud credential information, such as its name and owner, has been added to the show-model command output. The new section looks like:

mymodel:
  <snip>
  … existing model output...
  <snip>
  credential:
    name: default
    owner: admin
    cloud: aws

A new command, show-credential, has been added; it shows a logged-in user their remotely stored cloud credentials, along with the models that use them.

juju show-credential ...

See command help for more information.
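For example, a hypothetical invocation for a credential named ‘default’ on the aws cloud:

juju show-credential aws default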

New Proxy config settings

New model config values have been added for the new proxy behaviour. The existing juju proxy model config values remain unchanged, and existing models and controllers should not notice any change at all. The four new model config properties are:

  • juju-http-proxy
  • juju-https-proxy
  • juju-ftp-proxy
  • juju-no-proxy

These proxy values are used by the model for downloading charms, but are not set as the normal proxy environment variables for charm hook contexts, nor written as default systemd config values.

The juju-no-proxy value can, and should, contain CIDRs for subnets. The controller machines are not added automatically to the juju-no-proxy value, so if other proxies are set, the internal network in use should be included in juju-no-proxy.
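For example, a minimal sketch (the proxy address and subnets are illustrative):

juju model-config juju-http-proxy=http://proxy.internal:3128 juju-no-proxy=10.0.0.0/8,127.0.0.1,localhost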

The new proxy values are passed in to the charm hook contexts, but as the following environment variables:

  • JUJU_CHARM_HTTP_PROXY
  • JUJU_CHARM_HTTPS_PROXY
  • JUJU_CHARM_FTP_PROXY
  • JUJU_CHARM_NO_PROXY

The charm helpers library will be gaining the ability to use proxies for certain activities. This is new behaviour and still being developed.

The rationale behind this change is to better support proxies in situations where there are larger subnets, or multiple subnets, that should not be proxied. The traditional no_proxy values cannot contain CIDRs, as they are not understood by many tools.

HA controller improvements

Upgrading across release streams (devel, released, etc.) is improved, as the juju upgrade-juju command now takes an --agent-stream argument.
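For example, to move a controller’s agents to the devel stream:

juju upgrade-juju --agent-stream=devel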

Backup Restore behavior changes

Multiple backups saved on a controller can now be removed in one go, while keeping the latest backup, using juju remove-backup --keep-latest.

More detailed help for backup/restore is available, including instructions on how to restore a backup in an HA configuration; juju restore-backup provides the updated instructions.
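For example:

juju remove-backup --keep-latest   # remove stored backups, keeping only the most recent
juju restore-backup --help         # show the updated restore instructions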

Important Fixes

  • network-get now reports CIDR information correctly.
  • Creating backups now works on Bionic controllers (bug #1772887).
  • Change in behaviour in status output: the ‘Relations’ section is not visible by default in tabular format (bug #1633972). Use the ‘--relations’ option to see the section.
  • Fixes for machines where /var, /etc and /tmp are on different partitions.
  • Various networking-related fixes.
  • ‘juju resolve’ fixes and improvements:
      • added support for ‘--all’ (https://bugs.launchpad.net/bugs/1755141)
      • fixed the inverted ‘--no-retry’ behaviour (https://bugs.launchpad.net/bugs/1762979)
  • Support for st1 and sc1 volume types on AWS.
  • Support for new AWS instance types.

How do I get it?

The best way to get your hands on this release of Juju is to install it as a
snap package:

sudo snap install juju --classic

Other packages are available for a variety of platforms; please see the online
documentation at https://jujucharms.com/docs/stable/reference-install. Those
subscribed to a snap channel should be upgraded automatically. If you’re using
the PPA or Homebrew, you should see an upgrade available.

Feedback Appreciated!

We encourage everyone to let us know how you’re using Juju. Join us on Discourse at https://discourse.jujucharms.com/, send us a
message on Twitter using the hashtag #jujucharms, and join us at #juju on freenode.

More information

To learn more about Juju please visit https://jujucharms.com.
