Getting Started


#1

Hello World Juju Kubernetes

Here’s a basic set of instructions for setting up Juju to work with a given Kubernetes cloud.

First, you’ll need to be running the Juju 2.5 candidate snap.

sudo snap install juju --classic --candidate

Limitations and Prerequisites

  1. The Juju GUI can display a Kubernetes model, but anything else (creating units, deploying charms, status, etc.) is currently unsupported.

Setting Up The Juju Controller

You need a running Juju controller and also a Kubernetes cluster to work with. For the Kubernetes cluster, there are several choices:

  1. use microk8s.
  2. deploy a bespoke Kubernetes
  3. use a Kubernetes cluster on a public cloud (e.g. GKE)

Once a Juju controller is up and running and you have your Kubernetes cluster, you need to import the cluster and user credential into Juju. The Juju add-k8s command extracts the information it needs from the Kubernetes configuration file (the same one that kubectl uses). Once imported, the cluster appears as a cloud. See below for instructions relevant to each of the above scenarios.

Note: add-k8s will import whatever credential values exist in ~/.kube/config. There are plans to add support for juju add-credential so that you can add arbitrary k8s clusters as named Juju clouds.
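If you want to double-check which cluster and user add-k8s will pick up, you can inspect the configuration kubectl is currently using before importing it, for example:

kubectl config current-context
kubectl config view --minify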

Local setup with microk8s

If you want to try a simple deployment on your own server or laptop, the microk8s option is perhaps the easiest. See this post for how to get things set up. The TL;DR is that you’ll need to enable dns and storage on microk8s.
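As a rough sketch (assuming the microk8s snap; see the linked post for the full details), the microk8s side looks something like:

sudo snap install microk8s --classic
microk8s.status                # check the cluster is up
microk8s.enable dns storage    # enable the dns and storage addons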

juju bootstrap lxd
microk8s.config | juju add-k8s myk8scloud

You can use your own cloud name in place of myk8scloud.

Running a bespoke Kubernetes

A bespoke Kubernetes can be deployed using conjure-up, by deploying the kubernetes-core bundle, or by deploying a production-ready Canonical Distribution of Kubernetes. Kubernetes may be run locally on a Juju LXD cloud, or on a public cloud like AWS.

Assume we want to run Kubernetes on Juju-managed instances on AWS.

juju bootstrap aws
juju deploy kubernetes-core
juju deploy aws-integrator
juju trust aws-integrator

The juju trust command sets up the necessary configuration to allow Juju to request dynamic persistent volumes to use for storage (covered in another topic).

You’ll now need to wait for things to stabilise before taking the next step. You can run watch -c juju status --color and wait for everything to go green.

Finally, you need to copy the kubectl config file for the cluster to your local machine and register the cluster as a cloud known to Juju using add-k8s.

juju scp kubernetes-master/0:config ~/.kube/config
juju add-k8s myk8scloud

You can use your own cloud name in place of myk8scloud.

Creating a Kubernetes model in Juju

Now that the Juju controller is set up and the Kubernetes cluster has been registered as a cloud known to Juju, you can create a new Juju model on that cloud.

juju add-model myk8smodel myk8scloud

The cloud name is whatever was used previously with add-k8s. The model name is whatever works for you. Juju will create a Kubernetes namespace in the cluster to host all of the pods and other resources for that model. The namespace is used to separate resources from different models.
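If kubectl is pointed at the cluster, you can see the namespace Juju created for the model, for example:

kubectl get namespaces

A namespace named after the model (myk8smodel here) should be listed.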

Set up storage

Now that the model is created, you’ll also need to configure a storage pool to provide storage for the charm operator pods. See this post for more detail.

For microk8s:
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath

For Kubernetes deployed on AWS:
juju create-storage-pool operator-storage kubernetes storage-class=juju-ebs storage-provisioner=kubernetes.io/aws-ebs parameters.type=gp2
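Whichever cloud you’re on, you can confirm the pool exists by listing the storage pools for the model:

juju list-storage-pools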

Deploying a Kubernetes charm

There are some very early proof of concept charms and bundles in the staging charm store.

These charms are not for production and are not complete. They are proof of concept only.

Let’s deploy the gitlab and mariadb charms and relate them. Ensure that the Kubernetes model is in focus.
We’ll need storage for the mariadb charm - let’s assume we’re running on microk8s and set up a suitable storage pool for mariadb database storage:

juju create-storage-pool mariadb-pv kubernetes storage-class=microk8s-hostpath

Now deploy and relate the charms:

juju switch myk8smodel
juju deploy cs:~juju/mariadb-k8s --storage database=10M,mariadb-pv
juju deploy cs:~juju/gitlab-k8s
juju relate gitlab-k8s mariadb-k8s

You can use juju status to watch the progress of the deployment. Note that even after Juju status indicates that things have finished, the gitlab image is churning away setting up the database tables it needs. We don’t currently have a way of exposing this to Juju status.

Juju status will surface the current pod status for each unit and the last error message (if any). You can also use kubectl to describe relevant pods or other artefacts if more detail is required.
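For example, something along these lines (the namespace matches the Juju model name, myk8smodel in this walkthrough):

kubectl -n myk8smodel get pods
kubectl -n myk8smodel describe pods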

Exposing gitlab

To be able to connect to gitlab externally with a web browser, it needs to be exposed. The means to do this depends on the underlying cloud on which Kubernetes is running and how the deployment was set up.

If the Kubernetes bundle was deployed on AWS using the aws-integrator charm, then an AWS Elastic Load Balancer is automatically configured to route external traffic to the gitlab service. Use juju status to see the FQDN of the service and point the browser at that address.

Similarly, when using microk8s, you can simply access the workload using the IP address of the Kubernetes service resource (shown as the application address in Juju status).

For deployments on other substrates, you’ll need to juju expose gitlab and also supply a hostname for the service. The easiest way to get a hostname to test with is to use the facility provided by xip.io. For our simple test deployment, using the kubernetes-core bundle, there’s only one worker node and that’s where gitlab will be running. The IP address of the worker node is what’s needed. Run juju status on the Juju model hosting the actual Kubernetes deployment and note the IP address of the worker node.
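For example, assuming the Kubernetes bundle was deployed into the controller’s default model, something like:

juju status -m default kubernetes-worker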

Use this Juju command to configure the gitlab application:

juju config gitlab juju-external-hostname=10.112.143.15.xip.io

Obviously replace 10.112.143.15 with the correct IP address.

Now gitlab can be exposed:

juju expose gitlab

Note: it may take a minute for the exposed workload to become available. Until then, you’ll get an nginx error page when trying to view the gitlab web page.

Using Storage

Persistent storage for charms is supported. See this topic for more details.

Placement

You can specify a placement directive using the standard Juju --to syntax.

Right now, we support a node selector placement based on matching labels. The label can be either a built-in label or any user-defined label added to the node.
Example:
juju deploy mariadb-k8s --to kubernetes.io/hostname=somehost
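To see which labels are available to match against, you can list them with kubectl, for example:

kubectl get nodes --show-labels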

Constraints

You can specify resource limits for memory and cpu. The cpu units are milli CPUs. The standard Juju constraint syntax is used.

Note: right now, the constraint values are mapped to resource limits. There’s no support for resource requests. This conforms to the behaviour of LXD constraints.
Example:

juju deploy mariadb-k8s --constraints "mem=4G cpu-power=500"

The cpu-power value specified is an int, and the implicit unit is “milli CPUs”.
K8s requires a value plus unit, so 500 is translated to “500m” to pass to Kubernetes.
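If you want to verify what was actually applied, you can inspect the resource limits on the workload pods with kubectl, for example (the namespace name is the model name used earlier and is illustrative):

kubectl -n myk8smodel describe pods | grep -A 3 Limits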


Update: Google GKE is now supported, and Kubeflow charms are now available.
#2

#3

This command resulted in a failed hook error for me:

juju deploy cs:~johnsca/kube-core-aws && juju trust aws-integrator

#4

I deployed Kubernetes using the canonical-kubernetes bundle on the localhost cloud and I’m stuck setting up storage. Apparently I need to create a storage pool by supplying a value for a Kubernetes storage class but I don’t see anything for LXD.


#5

The storage backends supported by Kubernetes are listed here https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes

For LXD, there’s no native Kubernetes dynamic persistent volume support as far as I know.
You’ll need to use static persistent volumes instead as described in the post on storage.
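As a minimal illustrative sketch (the name, size, storage class and hostPath location are made up; check the storage post for the storage class naming Juju expects), a static hostPath persistent volume can be created with something like:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: operator-pv-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual-operator-storage
  hostPath:
    path: /mnt/data/operator-pv-1
EOF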


#6

The status output provided doesn’t include enough info to diagnose the issue, but typically you will get an error with the aws-integrator if the account being used doesn’t have sufficient permissions to support the IAM profiles needed.


#7

I’ve followed these instructions (on a vsphere setup) and I’m currently stuck after deploying a k8s charm. It seems that there are some authentication issues between the charm agent and the controller.

2018-10-16 13:24:35 DEBUG juju.worker.apicaller connect.go:125 connecting with old password
2018-10-16 13:24:35 DEBUG juju.api apiclient.go:877 successfully dialed "wss://10.10.139.233:17070/model/e4d97d49-c87a-4720-831c-ca51d17295ad/api"
2018-10-16 13:24:35 INFO juju.api apiclient.go:599 connection established to "wss://10.10.139.233:17070/model/e4d97d49-c87a-4720-831c-ca51d17295ad/api"
2018-10-16 13:24:35 DEBUG juju.worker.apicaller connect.go:152 failed to connect
2018-10-16 13:24:35 DEBUG juju.worker.dependency engine.go:538 "api-caller" manifold worker stopped: cannot open api: invalid entity name or password (unauthorized access)

Did I miss something here? Juju version is 2.5-beta1. Charm used is mysql-k8s.


#8

It may be that the cached docker jujud image in your k8s cluster is stale (the image pull policy we use is IfNotPresent). Because we’re still in development, APIs can change. This won’t be a problem once the final release goes out as the tagged docker image will be stable.

You can try deleting the caas-jujud-operator docker image from your k8s cluster. I’ve just tested with a clean microk8s setup with the 2.5 edge Juju snap and things were fine.
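For example, if your cluster nodes use Docker as the container runtime, something along these lines on each node (the image id shown here is a placeholder):

docker images | grep jujud
docker rmi <image-id>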


#9

I’ve had this happen as well. In each case, I’d removed the charm from Juju, but did not delete the operator pod’s persistent volume. It seems that they are not automatically removed (as of juju 2.5-beta1) if you juju remove-application the charm.


#10

As with cloud deployments, we specifically do not remove storage when deleting an application unless the user asks for it with --destroy-storage.

But, we should remove the operator volume. That’s a bug that needs fixing.


#11

I’ve just landed a fix so that the operator storage is deleted when the operator is deleted.


Trouble with aws-integrator
#12

I tried this today with CDK on GCE with the gcp-integrator charm.

I have CDK up and running and I set up the operator storage using the GCE example from the storage post.

juju list-storage-pools
operator-storage  kubernetes  parameters.type=pd-standard storage-class=juju-operator-storage storage-provisioner=kubernetes.io/gce-pd

When I deploy mysql/gitlab charms I get the following:

pod has unbound immediate PersistentVolumeClaims

This happens on both of the applications, and I’m unsure how to proceed.


#13

pod has unbound immediate PersistentVolumeClaims means that the underlying cloud could not provision the requested storage. This may be due to an issue with the integrator charm, or it could be an account limitation, or something else. It’s not a juju-k8s issue per se but something with the substrate on which Kubernetes is running. You’ll need to kubectl describe the affected PVC to dig into the details of why the claim could not be satisfied.
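For example (the namespace is the Juju model name and the claim name will vary, so both are illustrative here):

kubectl -n myk8smodel get pvc
kubectl -n myk8smodel describe pvc <claim-name>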


#14

Also, don’t use mysql - use mariadb instead. There’s an issue with the upstream mysql docker image that causes the db not to come up correctly.


#15

How do I write a stateful set container with a Kubernetes charm? Can you point me to some documents or examples?


#16

Juju automatically creates a k8s stateful set if your charm requires storage; otherwise a deployment controller is used to manage the pods. The stateful set allows the storage PVs to be correctly re-attached to new pods which are spun up to replace existing pods. The creation of a stateful set or deployment controller is done automatically by Juju; it’s not something the charm author needs to worry about.

So, based on the above, the mariadb charm is an example of a charm which uses a stateful set; the gitlab charm will use a deployment controller.
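You can see which controller was created for each application by querying the model’s namespace, for example (namespace name illustrative):

kubectl -n myk8smodel get statefulsets,deployments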

There’s also this post on writing k8s charms in general.


#17

If I want to deploy a pod on a particular kubernetes-worker with a specific label, how can I do it?


#18

This post explains how you can do what you want: