New 2.5 feature: Kubernetes workload support


Now that 2.5 is released I’d like to call out some of the new features for
those who haven’t been following the development this cycle.

In this post I’ll be covering the Kubernetes workload support.

Kubernetes workload support

Juju has been able to install a Kubernetes cluster for a while now. However,
not until the 2.5 release has Juju been able to take a pre-existing cluster
and add it to its list of backing clouds. This makes the cluster available
for charm deployment. Kubernetes-specific charms are naturally required.

The benefit here is that if you’re adding Kubernetes to your list of tools and
using Juju already you can use a single workflow for everything.

In the idealised scenario presented here, we assume the following:

  • A Kubernetes cluster is pre-existing.
  • The Juju controller has been configured to use a charm store that contains
    Kubernetes charms.
  • The cluster’s configuration file is saved as ~/.kube/config.
  • The charm we’ll use does not itself have storage requirements.

Add the cluster (which we’ve called ‘k8s-cloud’) to Juju and create a model
(called ‘k8s-model’):

juju add-k8s k8s-cloud
juju add-model k8s-model k8s-cloud

Set up a Juju storage pool for operator storage backed by statically
provisioned persistent volumes:

kubectl create -f operator-storage.yaml
juju create-storage-pool operator-storage kubernetes \
        storage-class=juju-operator-storage

Deploy a Kubernetes charm:

juju deploy gitlab-k8s

This was a speedy overview, but I hope the main thrust of the utility of
Kubernetes workload support has been demonstrated. See the documentation page
Using Kubernetes with Juju for full coverage.


Can you give an example of what the operator-storage.yaml looks like? Thanks!


Ahh hmm, possibly creating the operator storage with kubectl create -f operator-storage.yaml is replaced with

juju create-storage-pool operator-storage kubernetes storage-class=juju-operator-storage parameters.type=gp2

As such, users need not concern themselves with creating the storage with the kubectl create command, and can instead use Juju storage to create the operator storage - I think that's the ticket.


Yes. My example makes use of statically provisioned persistent volumes. If you use a storage type other than no-provisioner, as in your sample command, then you're indeed using dynamically provisioned PVs and do not need to set up the PVs in advance (which is what the kubectl create step does).
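
To answer the earlier question, here's a rough sketch of what such an operator-storage.yaml could look like for the static case: a no-provisioner StorageClass plus a manually created PersistentVolume bound to it. All names, the capacity, and the hostPath below are illustrative, not something Juju mandates:

```yaml
# Illustrative sketch only; names, size, and path are assumptions.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: juju-operator-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: operator-pv-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: juju-operator-storage
  hostPath:
    path: /mnt/disks/operator-pv-1
```

With dynamic provisioning (e.g. parameters.type=gp2 on EBS) none of this is needed, since the cluster creates the PVs on demand.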