Juju 2.6 beta 2 release notes

The Juju team is pleased to announce the release of Juju 2.6-beta2.

This is expected to be the final beta release prior to a 2.6 release candidate.
The release contains some great new features as well as a number of bug fixes.

Noteworthy changes

Login to controller by specifying any API endpoint

The juju login command now supports logging into controllers by specifying any of the controller endpoints as an argument to juju login. For example: juju login 127.0.0.1:17070 -c mycontroller. This allows users who have already set up their controller account on one machine using a juju register command to log in to the same controller from another machine via the controller IP and their credentials.
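
As a minimal sketch of the two-machine flow (10.0.0.5:17070 is a hypothetical endpoint; the account is assumed to have been created via juju register on the first machine):

# on the second machine, log in to the same controller by endpoint
juju login 10.0.0.5:17070 -c mycontroller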

When attempting to log in to an unknown controller, juju login will first obtain the CA certificate from the remote server, print out its fingerprint, and prompt the user to trust it before the login attempt can proceed.

Users can obtain the CA cert fingerprint (a SHA-256 hash of the CA cert) for any locally known controller by running juju show-controller.
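
To compare fingerprints before trusting an unknown endpoint, you might run the following on a machine that already knows the controller and check the reported fingerprint:

juju show-controller mycontroller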

Caveat: this type of login is not supported for controllers running versions earlier than 2.6.

k8s support

bootstrap

Bootstrap is now known to work on the following substrates:

  • microk8s
  • AKS
  • GKE
  • CDK deployed to the AWS, Azure, and Google clouds

It’s expected that CDK deployed to OpenStack and MAAS will also work, but these have not been tested yet.

To bootstrap, you first need to use juju add-k8s to import the cluster definition as a local cloud, viewable using juju clouds. Juju will attempt to detect the type of cluster, in particular the underlying cloud and region on which the cluster runs (ec2, azure, gce, openstack, maas). This is needed to correctly configure the cluster with the preferred storage. Sometimes this isn’t possible, so you’ll need to specify it manually using the --region option.
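
As a minimal sketch of that flow (assuming kubectl on the same machine is already configured to talk to the target cluster, and myk8s is just an illustrative cloud name):

# import the cluster definition from the local kubeconfig as a cloud named myk8s
juju add-k8s myk8s
# the new cloud should now appear in the local cloud list
juju clouds
# bootstrap a controller into the cluster
juju bootstrap myk8s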

For example, right now Juju cannot detect the cloud CDK is running on (a new CDK release will soon fix this). Until then, if you’ve deployed CDK to the Google cloud, you need to do something like this:

juju add-k8s --region gce/us-central1 myk8s

or for AWS

juju add-k8s --region ec2/us-east-1 myk8s

Storage will be correctly configured out of the box, but if you want to specify your own storage class, first create it in the cluster and then use the --storage option:

juju add-k8s --storage mysc myk8s
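
A sketch of creating such a storage class first, assuming a GKE-backed cluster (the provisioner and parameters below are illustrative; substitute whatever suits your cluster):

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF

juju add-k8s --storage mysc myk8s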

The storage class is recorded as a model config option (workload-storage), so it can also be changed later on a per-model basis.
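
For example, to point an existing model at a different (hypothetical) storage class:

juju model-config workload-storage=anothersc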

upgrades

A bug was fixed in upgrade-controller: previously, all models would be upgraded, not just the controller.
In this release, upgrade-model is broken but will be fixed prior to the release candidate.

Deletion of stuck applications and units

This is a long-requested feature which has finally landed. There are a lot of possible corner cases, so we welcome as much testing and feedback as possible prior to the final release.

If a unit ever got wedged (due to a hook error, a storage issue, the agent being killed, or the machine being stopped), it was impossible to remove it from the Juju model. In turn, the application could not be removed and the model could not be destroyed.

There is now a --force option for remove-application and remove-unit. There’s also a --no-wait option, which is not done yet. The way it works is that Juju will try to remove/destroy entities and allow for proper cleanup, but after a set time will resort to forceful removal. If you don’t want Juju to wait because you know things are totally broken, you can use the --no-wait option.

Note: in beta2, Juju always defaults to --no-wait behaviour, as the timed wait is not yet fully implemented.
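
For example, to forcefully remove a wedged unit (mysql/0 here is purely illustrative):

juju remove-unit mysql/0 --force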

If you need to destroy a model containing stuck applications or units, first remove the applications with --force. This release does not yet support destroy-model --force.
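
A sketch of that sequence, with hypothetical application and model names:

juju remove-application mysql --force
juju destroy-model mymodel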

Note: remove-machine already supports --force, but it does not cover all cases. Work is being done to improve this, but it won’t be available until the release candidate.

TODO

The feature is not quite complete. Still to do before the release candidate:

  • destroy-model --force
  • support removal of stuck machines in more cases
  • implement the --no-wait option
  • surfacing of units which had cleanup errors during destroy-model

Tracking where migrated models went

Juju will now track which controller a model was migrated to. If a client requests details about the model from the old controller, Juju will help update the client and redirect it to the new location.

This new tracking is only available on 2.6 controllers, so the model has to be moved from a 2.6 controller to a controller running 2.6 or greater for Juju to be able to direct the client.
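
A hypothetical flow, with illustrative controller and model names:

# migrate the model to another controller
juju migrate mymodel newcontroller
# a client still pointing at the old controller is redirected to the new one
juju status -m oldcontroller:mymodel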

All changes and fixes

Every change and fix to this release is tracked on a per-bug basis on Launchpad.

The most important fixes are listed below:

  • LP #1825068 - Juju should display FQDN when MAAS does not report a node’s hostname
  • LP #1814633 - juju support for nested virtualisation images on GCP

All bugs corresponding to changes and fixes to this release are listed on the 2.6-beta2 milestone page.

Known issues

  • the user needs to specify the host cloud / region when using add-k8s with CDK deployments
  • destroy-model --force and remove-relation --force are not fully implemented, even though the CLI options are available
  • sometimes destroy-model can get stuck due to a race condition when removing applications; you can run remove-application --force from another shell and destroy-model will complete

Install Juju

Install Juju using the snap:

sudo snap install juju --classic

Users already on the ‘stable’ snap channel (the default, as per the above command) should be upgraded automatically. Other packages are available for a variety of platforms (see the install documentation).

Feedback Appreciated

Let us know how you’re using Juju, or ask any questions you may have. You can join us on Discourse, send us a message on Twitter (hashtag #jujucharms), or talk to us in the #juju IRC channel on Freenode.