Charmed Kubernetes users: action needed before upgrading models to 2.6.4


Ahoy friends! A bug was recently discovered in the kubernetes-master charm that could affect people upgrading a model to Juju 2.6.4:

https://bugs.launchpad.net/charm-kubernetes-master/+bug/1833089

The gist is that kubernetes-master followers were incorrectly calling leader-set, which only the leader unit is allowed to do. In earlier Juju releases this was merely a logged warning, but in 2.6.4 it is a hard error. If you have kubernetes-master deployed in a model, please ensure you upgrade it to revision 695 or later:

$ juju upgrade-charm kubernetes-master
Added charm "cs:~containers/kubernetes-master-695" to the model.

$ juju status kubernetes-master  ## rev >= 695 means you're good
...
kubernetes-master  1.14.3   active      2  kubernetes-master  jujucharms  695  ubuntu
...
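
For background, leader-set is one of the Juju hook tools, and only the elected leader unit may run it; a follower calling it is what now triggers the hard error. The guard the fix enforces looks roughly like this bash sketch against the hook tools (illustrative only: the real charm is Python, the snippet only works inside a hook context, and cluster-token is a made-up key):

# Only the leader may write leader data; followers can only read it.
if [ "$(is-leader)" = "True" ]; then
    leader-set cluster-token="$new_token"
else
    new_token="$(leader-get cluster-token)"
fi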

It’s worth noting a few ways that this does not affect you:

  • As of June 19th, no action is required for new deployments of kubernetes charms/bundles because all stable artifacts include the required fix.
  • Upgrading your juju client to 2.6.4 (or having snapd upgrade it for you) is perfectly fine. Existing models will function as they always have; upgrading a model with an older kubernetes-master charm is where this issue becomes relevant (see the quick check after this list).
  • Single kubernetes-master deployments are not affected; it’s hard to have a follower when there’s only one of you.
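
If you're not sure which side of that line you're on, these two commands tell you quickly (illustrative; both assume the current model, so add -m <model-name> if yours is not the active one):

$ juju show-model | grep agent-version   ## 2.6.4 or later is where the error can appear
$ juju status kubernetes-master          ## Rev >= 695 means you already have the fix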

To know if this has affected you, have a look at juju status. If non-leader kubernetes-master units are in error, you likely have an older revision deployed in a 2.6.4 model. Never fear! The solution is just an upgrade and a few times around the resolved track:

$ juju upgrade-charm kubernetes-master
Added charm "cs:~containers/kubernetes-master-695" to the model.

$ juju status ## <-- identify units in error

$ juju resolved --no-retry kubernetes-master/<unit-number-from-above>

You may have to run that last command multiple times to get the charm to process the upgrade and eventually settle itself. Keep watching juju status between resolved calls to see when the deployment becomes healthy again.
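
If you'd rather not babysit it by hand, something like the following does the rounds for you (a rough sketch: it assumes three units numbered kubernetes-master/0 through /2, so adjust to your deployment, and the || true simply skips units that are not in an error state):

$ watch -n 10 juju status kubernetes-master    ## in one terminal
$ for unit in kubernetes-master/{0,1,2}; do juju resolved --no-retry "$unit" || true; done    ## in another, repeat as needed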

Feel free to reach out here or in Freenode #juju with questions/concerns. And finally, a big THANKS to the Juju and QA teams for helping us identify and fix this issue quickly!