Hello World Juju Kubernetes
Here’s a basic set of instructions for setting up Juju to work with a given Kubernetes cloud.
First, you’ll need to be running the Juju 2.5 candidate snap.
sudo snap install juju --classic --candidate
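You can quickly confirm the installed version and snap channel (standard snap and juju commands):
juju version
snap list juju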
Limitations and Prerequisites
- The Juju GUI can display a Kubernetes model but anything else - creating units, deploying charms, status etc - is currently unsupported.
Setting Up The Juju Controller
You need a running Juju controller and also a Kubernetes cluster to work with. For the Kubernetes cluster, there are several choices:
- use MicroK8s
- deploy your own Kubernetes cluster
- use a Kubernetes cluster on a public cloud (eg GKE)
Once a Juju controller is up and running and you have your Kubernetes cluster, you need to import the cluster and user credential into Juju. The Juju
add-k8s command extracts the information it needs from the Kubernetes configuration file (the same one that
kubectl uses). Once imported, the cluster appears as a cloud. See below for instructions relevant to each of the above scenarios.
Note: add-k8s will import whatever credential values exist in ~/.kube/config. There are plans to add support for juju add-credential so that you can add arbitrary k8s clusters as named Juju clouds.
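If you want to check which cluster and user details will be picked up before importing, you can inspect the active kubectl context using standard kubectl commands (nothing Juju specific):
kubectl config current-context
kubectl config view --minify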
Local setup with MicroK8s
If you want to try a simple deployment on your own server or laptop, the MicroK8s option is perhaps the easiest. See this post for how to get things set up. The TL;DR is that you'll need to enable dns and storage on MicroK8s.
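As a rough sketch, assuming a snap-based MicroK8s install, the setup looks something like this:
sudo snap install microk8s --classic
microk8s.enable dns storage
Then bootstrap a controller (here on the lxd cloud) and register MicroK8s as a cloud: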
juju bootstrap lxd
microk8s.config | juju add-k8s myk8scloud
You can use your own cloud name in place of myk8scloud.
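You can confirm the new cloud has been registered:
juju clouds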
Running your own Kubernetes cluster
You can deploy your own Kubernetes cluster using conjure-up, by deploying the kubernetes-core bundle, or by deploying a production-ready Canonical Distribution of Kubernetes. Kubernetes may be run locally on a Juju LXD cloud, or on a public cloud like AWS.
Assume we want to run Kubernetes on AWS using Juju.
juju bootstrap aws
juju deploy kubernetes-core
juju deploy cs:~containers/aws-integrator
juju trust aws-integrator
juju relate aws-integrator kubernetes-master
juju relate aws-integrator kubernetes-worker
The juju trust command sets up the necessary configuration to allow Juju to request dynamic persistent volumes to use for storage (covered in another topic).
You’ll now need to wait for things to stabilise before taking the next step. You can run
watch -c juju status --color and wait for everything to go green.
Finally, you need to copy the
kubectl config file for the cluster to your local machine and register the cluster as a cloud known to Juju using:
juju scp kubernetes-master/0:config ~/.kube/config
juju add-k8s myk8scloud
You can use your own cloud name in place of myk8scloud.
Creating a Kubernetes model in Juju
Now that the Juju controller is set up and the Kubernetes cluster has been registered as a cloud known to Juju, you can create a new Juju model on that cloud.
juju add-model myk8smodel myk8scloud
The cloud name is whatever was used previously with
add-k8s. The model name is whatever works for you. Juju will create a Kubernetes namespace in the cluster to host all of the pods and other resources for that model. The namespace is used to separate resources from different models.
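As a quick sanity check (assuming kubectl is pointed at the same cluster), you should see a namespace matching the model name:
juju models
kubectl get namespaces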
Optionally set up storage
Now that the model is created, you may also need to configure a storage pool to provide storage for the charm operator pods. If Juju is running on a cluster that has suitable storage already configured, then you don't need to do anything and Juju will use that storage class (Juju on MicroK8s requires no additional setup apart from ensuring MicroK8s storage is enabled via
microk8s.enable storage). But you may want to set up a bespoke storage class per application, or for the model itself. See this post for more detail.
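As an illustrative sketch only (the pool name mypool is a placeholder, and microk8s-hostpath stands in for whatever storage class exists on your cluster), creating a storage pool backed by an existing Kubernetes storage class looks something like:
juju create-storage-pool mypool kubernetes storage-class=microk8s-hostpath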
Deploying a Kubernetes charm
There are some very early proof of concept charms and bundles in the staging charm store.
These charms are not for production and are not complete. They are proof of concept only.
Let’s deploy a gitlab and mariadb charm and relate them. Ensure that the Kubernetes model is in focus.
We’ll need storage for the mariadb charm - if there’s a default storage class already set up for the cluster, there’s no need to do anything and you’ll get some storage allocated out of the box. If you want to configure some bespoke storage, see this post.
If we’re deploying to MicroK8s, that comes with a default
hostpath storage class, so there’s no need to create a storage pool for mariadb. But we may want to override the default 1GiB storage allocation and only ask for 10MiB.
Now deploy and relate the charms:
juju switch myk8smodel
juju deploy cs:~juju/mariadb-k8s --storage database=10M
juju deploy cs:~juju/gitlab-k8s
juju relate gitlab-k8s mariadb-k8s
You can use
juju status to watch the progress of the deployment. Note that even after Juju status indicates that things have finished, the gitlab image is still churning away setting up the database tables it needs. We don't currently have a way of exposing this to Juju status.
Juju status will surface the current pod status for each unit and the last error message (if any). You can also use kubectl to describe relevant pods or other artefacts if more detail is required.
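For example, assuming the model name used above (the namespace Juju creates matches the model name), you could run:
kubectl get pods -n myk8smodel
kubectl describe pod -n myk8smodel <pod-name>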
To be able to connect to gitlab externally with a web browser, it needs to be exposed. The means to do this depends on the underlying cloud on which Kubernetes is running and how the deployment was set up.
If the Kubernetes bundle was deployed on AWS using the aws-integrator charm, then an AWS Elastic Load Balancer is automatically configured to route external traffic to the gitlab service. Use
juju status to see the FQDN of the service and point the browser at that address.
Similarly, when using MicroK8s, you can simply access the workload using the IP address of the Kubernetes service resource (shown as the application address in Juju status).
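To cross-check the service address that Juju reports (again assuming the namespace matches the model name):
kubectl get svc -n myk8smodel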
For deployments on other substrates, you’ll need to
juju expose gitlab and also supply a hostname for the service. The easiest way to get a hostname to test with is to use the facility provided by xip.io. For our simple test deployment, using the kubernetes-core bundle, there’s only 1 worker node and that’s where gitlab will be running. The IP address of the worker node is what’s needed. Run
juju status on the Juju model hosting the actual Kubernetes deployment and note the IP address of the worker node.
Use this Juju command to configure the gitlab application:
juju config gitlab juju-external-hostname=10.112.143.15.xip.io
Obviously replace 10.112.143.15 with the correct IP address.
Now gitlab can be exposed:
juju expose gitlab
Note: it may take a minute for the exposed workload to become available. Until then, you'll get an nginx error page when trying to view the gitlab web page.
Persistent storage for charms is supported. See this topic for more details.
You can specify a placement directive using the standard Juju --to syntax.
Right now, we support a node selector placement based on matching labels. The labels can be either a built-in label or any user defined label added to the node.
juju deploy mariadb-k8s --to kubernetes.io/hostname=somehost
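To see which labels are available on your nodes, including the built-in kubernetes.io/hostname label used above, you can use standard kubectl:
kubectl get nodes --show-labels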
You can specify resource limits for memory and cpu. The cpu units are millicpus. The standard Juju constraint syntax is used.
Note: right now, the constraint values are mapped to resource limits. There’s no support for resource requests. This conforms to the behaviour of LXD constraints.
juju deploy mariadb-k8s --constraints "mem=4G cpu-power=500"
The cpu-power value specified is an integer, and the implicit unit is millicpus.
Kubernetes requires a value plus unit, so 500 is translated to "500m" when passed to Kubernetes.
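If you want to confirm what was passed through, you can inspect the resulting pod spec; the pod name below is hypothetical and will depend on your deployment:
kubectl describe pod -n myk8smodel mariadb-k8s-0
The Limits section of the output should show the memory and cpu values derived from the constraints.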