Getting Started


#1

Hello World Juju Kubernetes

Here’s a basic set of instructions for setting up Juju to work with a given Kubernetes cloud.

First, you’ll need to be running the Juju 2.5 edge snap.

sudo snap install juju --classic --edge

This functionality is evolving rapidly and the edge snap will always contain the latest functionality.

Limitations and Prerequisites

Note that this feature is under development and there are areas needing improvement:

  1. Surfacing of errors
    If a pod cannot be created, this is not reflected in juju status; kubectl is needed to inspect what went wrong directly at the source (see the kubectl sketch after this list).
  2. Operator logs
    An operator pod is used to run the Kubernetes charms. The operator logs are not exported to Juju and so are not visible using debug-log; again, kubectl is needed to inspect them.
  3. Juju GUI
    The Juju GUI can display a Kubernetes model, but anything else - creating units, deploying charms, status etc - is currently unsupported.
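
As a minimal sketch of that kubectl inspection (the namespace matches your Juju model name; the pod names below are placeholders):

kubectl -n <model-name> get pods
kubectl -n <model-name> describe pod <failing-pod>
kubectl -n <model-name> logs <operator-pod>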

The production charm store does not yet support Kubernetes charms. If you want to deploy Kubernetes charms written and published by others, you’ll need to use the staging charm store. This will need to be set up when the Juju controller is bootstrapped.

You only need to use the staging charm store if you want to deploy Kubernetes charms published by others. If you have local, on-disk copies of those charms, you can bootstrap the Juju controller using just the production charm store and deploy them as local charms.
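
As a sketch of the local charm case (the path below is hypothetical):

juju deploy /path/to/mysql-k8s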

To bootstrap a Juju controller using the staging charm store, do this:

juju bootstrap <cloudname> --config charmstore-url=https://api.staging.jujucharms.com/charmstore

Setting Up The Juju Controller

You need a running Juju controller and a Kubernetes cluster to work with. For the Kubernetes cluster, there are several choices:

  1. use microk8s
  2. deploy a bespoke Kubernetes
  3. use a Kubernetes cluster on a public cloud (e.g. GKE)

Once a Juju controller is up and running and you have your Kubernetes cluster, you need to import the cluster and user credential into Juju. The Juju add-k8s command extracts the information it needs from the Kubernetes configuration file (the same one that kubectl uses). Once imported, the cluster appears as a cloud. See below for instructions relevant to each of the above scenarios.

Note: add-k8s will import whatever credential values exist in ~/.kube/config. There are plans to add support for juju add-credential so that you can add arbitrary k8s clusters as named Juju clouds.
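
Once the cluster is imported, you can confirm it shows up using the standard listing commands:

juju clouds
juju credentials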

Local setup with microk8s

If you want to try a simple deployment on your own server or laptop, the microk8s option is perhaps the easiest. See this post for how to get things set up. The TL;DR is that you’ll need to enable dns and storage on microk8s.
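
A minimal sketch of that microk8s preparation (addon names as provided by microk8s):

microk8s.enable dns storage
microk8s.status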

juju bootstrap lxd --config charmstore-url=https://api.staging.jujucharms.com/charmstore
microk8s.config | juju add-k8s myk8scloud

You can use your own cloud name in place of myk8scloud.

Running a bespoke Kubernetes

A bespoke Kubernetes can be deployed using conjure-up, by deploying the kubernetes-core bundle, or by deploying a production-ready Canonical Distribution of Kubernetes (CDK). Kubernetes may be run locally on a Juju LXD cloud, or on a public cloud like AWS.

Assume we want to run Kubernetes on a Juju instance running on AWS. We’ll configure the controller to use the staging charm store. Using the staging charm store means that the production kubernetes-core and CDK bundles are not available to deploy. However, an alternative Kubernetes bundle has been uploaded which can be used instead.

juju bootstrap aws --config charmstore-url=https://api.staging.jujucharms.com/charmstore
juju deploy cs:~johnsca/kube-core-aws && juju trust aws-integrator

The juju trust command sets up the necessary configuration to allow Juju to request dynamic persistent volumes to use for storage (covered in another topic).

You’ll now need to wait for things to stabilise before taking the next step. You can run watch -c juju status --color and wait for everything to go green.

Finally, you need to copy the kubectl config file for the cluster to your local machine and register the cluster as a cloud known to Juju using add-k8s.

juju scp kubernetes-master/0:config ~/.kube/config
juju add-k8s myk8scloud

You can use your own cloud name in place of myk8scloud.

Creating a Kubernetes model in Juju

Now that the Juju controller is set up and the Kubernetes cluster has been registered as a cloud known to Juju, you can create a new Juju model on that cloud.

juju add-model myk8smodel myk8scloud

The cloud name is whatever was used previously with add-k8s. The model name is whatever works for you. Juju will create a Kubernetes namespace in the cluster to host all of the pods and other resources for that model. The namespace is used to separate resources from different models.
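
As a quick sanity check with plain kubectl, a namespace named after the model should now exist:

kubectl get namespaces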

Set up storage

Now that the model is created, you’ll also need to configure a storage pool to provide storage for the charm operator pods. See this post for more detail.

For microk8s:
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath

For Kubernetes deployed on AWS:
juju create-storage-pool operator-storage kubernetes storage-class=juju-ebs storage-provisioner=kubernetes.io/aws-ebs parameters.type=gp2

Deploying a Kubernetes charm

There are some very early proof of concept charms and bundles in the staging charm store.

These charms are not for production and are not complete. They are proof of concept only.

Let’s deploy the gitlab and mysql charms and relate them. We use mysql because we’ve not set up storage support yet. Ensure that the Kubernetes model is in focus.

juju switch myk8smodel
juju deploy cs:~wallyworld/mysql-k8s
juju deploy cs:~wallyworld/gitlab-k8s
juju relate gitlab-k8s mysql-k8s

You can use juju status to watch the progress of the deployment. Note that even after juju status indicates that things have finished, the gitlab image is still churning away setting up the database tables it needs. We don’t currently have a way of exposing this in juju status.
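
If you want to follow that setup work directly, a rough approach is to tail the gitlab pod’s logs with kubectl (the pod name below is a placeholder; list the pods first to find it):

kubectl -n myk8smodel get pods
kubectl -n myk8smodel logs -f <gitlab-pod-name>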

Note that we haven’t specified any persistent storage for the mysql charm (and the mysql charm doesn’t support storage). This means that when the pod goes away or is restarted, any data is lost. We can overcome this by using Kubernetes persistent storage with the mariadb charm, or by relating gitlab to a mysql running in a different cloud which is configured to use storage. These topics are covered separately.

As well as watching juju status, it’s useful to look at the Kubernetes cluster to see the status of the pods to ensure everything is happening as expected:

kubectl -n myk8smodel get all

This will become unnecessary once pod status is properly surfaced in Juju.

Exposing gitlab

To be able to connect to gitlab externally with a web browser, it needs to be exposed. The means to do this depends on the underlying cloud on which Kubernetes is running and how the deployment was set up.

If the Kubernetes bundle was deployed on AWS using the aws-integrator charm

juju deploy cs:~johnsca/kube-core-aws && juju trust aws-integrator

then an AWS Elastic Load Balancer is automatically configured to route external traffic to the gitlab service. Use juju status to see the FQDN of the service and point the browser at that address.

For deployments on other substrates, or when using microk8s, you’ll need to juju expose gitlab and also supply a hostname for the service. The easiest way to get a hostname to test with is to use the facility provided by xip.io. For our simple test deployment, using the kubernetes-core bundle or microk8s, there’s only one worker node and that’s where gitlab will be running. The IP address of that worker node is what’s needed. Run juju status on the Juju model hosting the actual Kubernetes deployment and note the IP address of the worker node.
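
For example, assuming the cluster was deployed into a model called k8s (the model name here is illustrative), something like this shows the worker’s address:

juju status -m k8s kubernetes-worker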

Use this Juju command to configure the gitlab application:

juju config gitlab juju-external-hostname=10.112.143.15.xip.io

Obviously replace 10.112.143.15 with the correct IP address.

Now gitlab can be exposed:

juju expose gitlab

Note: it may take a minute for the exposed workload to become available. Until then, you’ll get an nginx error page when trying to view the gitlab web page.

Using Storage

Persistent storage for charms is supported. See this topic for more details.

Placement

You can specify a placement directive using the standard Juju --to syntax.

Right now, we support a node selector placement based on matching labels. The labels can be either a built-in label or any user defined label added to the node.
Example:
juju deploy mariadb-k8s --to kubernetes.io/hostname=somehost

Constraints

You can specify resource limits for memory and CPU. The CPU units are milli CPUs. The standard Juju constraint syntax is used.

Note: right now, the constraint values are mapped to resource limits. There’s no support for resource requests. This conforms to the behaviour of LXD constraints.
Example:

juju deploy mariadb-k8s --constraints "mem=4G cpu-power=500"

The cpu-power value specified is an int, and the implicit unit is “milli CPUs”.
K8s requires a value plus unit, so 500 is translated to “500m” to pass to Kubernetes.
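
To sanity-check what was actually requested from Kubernetes, you can inspect the pod spec (the pod name below is a placeholder; the Limits section should show the translated values):

kubectl -n myk8smodel get pods
kubectl -n myk8smodel describe pod <mariadb-pod> | grep -A 3 Limits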


Google GKE now supported!
#2

#3

This command resulted in a failed hook error for me:

juju deploy cs:~johnsca/kube-core-aws && juju trust aws-integrator

#4

I deployed Kubernetes using the canonical-kubernetes bundle on the localhost cloud and I’m stuck setting up storage. Apparently I need to create a storage pool by supplying a value for a Kubernetes storage class but I don’t see anything for LXD.


#5

The storage backends supported by Kubernetes are listed here https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes

For LXD, there’s no native Kubernetes dynamic persistent volume support as far as I know.
You’ll need to use static persistent volumes instead as described in the post on storage.
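
As a rough sketch only (the volume name, size, path and especially the storageClassName are assumptions; the storage post covers the class name Juju expects), a static hostPath volume can be created like this:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: operator-volume-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <class-expected-by-juju>
  hostPath:
    path: /mnt/data/operator-volume-1
EOF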


#6

The status output provided doesn’t include enough info to diagnose the issue, but typically you will get an error with the aws-integrator if the account being used doesn’t have sufficient permissions to support the IAM profiles needed.


#7

I’ve followed these instructions (on a vsphere setup) and I’m currently stuck after deploying a k8s charm. It seems that there are some authentication issues between the charm agent and the controller.

2018-10-16 13:24:35 DEBUG juju.worker.apicaller connect.go:125 connecting with old password
2018-10-16 13:24:35 DEBUG juju.api apiclient.go:877 successfully dialed “wss://10.10.139.233:17070/model/e4d97d49-c87a-4720-831c-ca51d17295ad/api”
2018-10-16 13:24:35 INFO juju.api apiclient.go:599 connection established to “wss://10.10.139.233:17070/model/e4d97d49-c87a-4720-831c-ca51d17295ad/api”
2018-10-16 13:24:35 DEBUG juju.worker.apicaller connect.go:152 failed to connect
2018-10-16 13:24:35 DEBUG juju.worker.dependency engine.go:538 “api-caller” manifold worker stopped: cannot open api: invalid entity name or password (unauthorized access)

Did I miss something here? Juju version is 2.5-beta1. Charm used is mysql-k8s.


#8

It may be that the cached docker jujud image in your k8s cluster is stale (we use PullIfNotPresent). Because we’re still in development, APIs can change. This won’t be a problem once the final release goes out as the tagged docker image will be stable.

You can try deleting the caas-jujud-operator docker image from your k8s cluster. I’ve just tested with a clean microk8s setup with the 2.5 edge Juju snap and things were fine.
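
On a node where docker is the container runtime, something along these lines should find and remove the cached image (the image id is a placeholder):

docker images | grep caas-jujud-operator
docker rmi <image-id>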


#9

I’ve had this happen as well. In each case, I’d removed the charm from Juju, but did not delete the operator pod’s persistent volume. It seems that they are not automatically removed (as of juju 2.5-beta1) if you juju remove-application the charm.


#10

As with cloud deployments, we specifically do not remove storage when deleting an application unless the user asks for it with --destroy-storage.

But, we should remove the operator volume. That’s a bug that needs fixing.
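
In other words, mirroring the behaviour on other clouds, removing the charm’s storage along with the application needs something like this (application name illustrative):

juju remove-application mariadb-k8s --destroy-storage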


#11

I’ve just landed a fix so that the operator storage is deleted when the operator is deleted.


Trouble with aws-integrator
#12

I tried this today with cdk on gce with the gcp-integrator charm.

I have the cdk up and running and set up the operator storage using the gce example in the storage post.

juju list-storage-pools
operator-storage  kubernetes  parameters.type=pd-standard storage-class=juju-operator-storage storage-provisioner=kubernetes.io/gce-pd

When I deploy mysql/gitlab charms I get the following:

pod has unbound immediate PersistentVolumeClaims

This appears on both of the applications and I’m unsure how to proceed.


#13

pod has unbound immediate PersistentVolumeClaims means that the underlying cloud could not provision the requested storage. This may be due to an issue with the integrator charm, or it could be an account limitation, or something else. It’s not a juju-k8s issue per se but something with the substrate on which Kubernetes is running. You’ll need to kubectl describe the affected PVC to dig into the details of why the claim could not be satisfied.
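
A sketch of that inspection (the namespace matches the Juju model and the claim name is a placeholder):

kubectl -n <model-namespace> get pvc
kubectl -n <model-namespace> describe pvc <claim-name>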


#14

Also, don’t use mysql - use mariadb instead. There’s an issue with the upstream mysql docker image that causes the db not to come up correctly.

