Writing a Kubernetes charm


#1

Introduction

Kubernetes charms are similar to traditional cloud charms. The same model is used: an application has units, and each unit has its own data bag in each relation. The same hooks are invoked, but unlike traditional charms there’s no expectation that the install hook be implemented. This matters less anyway when using the reactive framework.

The only mandatory task for the charm is to tell Juju the key pod/container configuration. Kubernetes charms are recommended to be written using the reactive framework, and there’s a base layer similar to the one used for traditional charms:

https://github.com/juju-solutions/layer-caas-base

The basic flow for how the charm operates is:

  1. charm calls config-get to retrieve the charm settings from Juju

  2. charm translates settings to create pod configuration

  3. charm calls pod-spec-set to tell Juju how to create pods/units

  4. charm can use status-set or juju-log or any other hook command the same as for traditional charms

  5. charm can implement hooks the same as for traditional charms

There’s no need for the charm to apt install anything - the operator docker image has all the necessary reactive and charm helper libraries baked in.

The charm can call pod-spec-set at any time and Juju will update any running pods with the new pod spec. This may be done in response to the config-changed hook due to the user changing charm settings, or when relations are joined etc. Juju will check for actual changes before restarting pods so the call is idempotent.
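
For illustration, a reactive handler that re-sends the spec when charm settings change might look something like the sketch below. The mysql.configured flag and port setting mirror the mariadb sample shown later, the config.changed flag is the one managed automatically by the reactive framework, and a real spec would also include the image, config, files and so on rather than just a port:

from charmhelpers.core.hookenv import config, log, status_set
from charms.reactive import when
from charms import layer


@when('config.changed', 'mysql.configured')
def update_pod_spec():
    # Step 1: read the current charm settings (config-get under the hood).
    cfg = config()

    # Step 2: translate the settings into a (deliberately minimal) pod spec.
    spec = """
containers:
  - name: mariadb
    ports:
    - containerPort: %(port)s
      protocol: TCP
""" % {'port': cfg['port']}

    # Step 3: hand the spec to Juju. Juju compares it with the running spec
    # and only restarts pods when something actually changed.
    log('updating pod spec:\n{}'.format(spec))
    layer.caas_base.pod_spec_set(spec)
    status_set('active', '')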

Note: the pod spec applies for the application as a whole. All pods are homogeneous.

Sample gitlab and mariadb charms are at https://github.com/wallyworld/caas. These charms are proof-of-concept only and are not production quality.

There are also Kubeflow charms in development at https://github.com/juju-solutions.

Kubernetes charm store

There are some early prototype charms already uploaded to the production charm store.

There are also some charms hosted at the staging charm store.

Container images

Charms specify that they need a container image by including a resource definition.

resources:
  mysql_image:
    type: oci-image
    description: Image used for mysql pod.

oci-image is a new type of charm resource (we already have file).

The image is attached to a charm and hosted by the charm store’s inbuilt docker repo. Standard Juju resource semantics apply. A charm is released (published) as a tuple of (charm revision, resource version). This allows the charm and associated image to be published as a known working configuration.

Example workflow

To build and push a charm to the charm store, ensure you have the charm snap installed.
After hacking on the charm and running charm build to fully generate it, you push, attach, release:

cd <build dir>
charm push . mariadb-k8s
docker pull mariadb
charm attach cs:~wallyworld/mariadb-k8s-8 mysql_image=mariadb
charm release cs:~wallyworld/mariadb-k8s-9 --resource mysql_image-0

See:
charm help push
charm help attach
charm help release

Charms in more detail

Use the information below in addition to looking at the charms already written to see how this all hangs together.

To illustrate how a charm tells Juju how to configure a unit’s pod, here’s the template YAML snippet used by the Kubernetes mariadb charm. Note the placeholders which are filled in from the charm config obtained via config-get.

ports:
- containerPort: %(port)s
  protocol: TCP
config:
 MYSQL_ROOT_PASSWORD: %(rootpassword)s
 MYSQL_USER: %(user)s
 MYSQL_PASSWORD: %(password)s
 MYSQL_DATABASE: %(database)s
files:
 - name: configurations
   mountPath: /etc/mysql/conf.d
   files:
     custom_mysql.cnf: |
       [mysqld]
       skip-host-cache
       skip-name-resolve         
       query_cache_limit = 1M
       query_cache_size = %(query-cache-size)s

The charm simply sends this YAML snippet to Juju using the pod_spec_set() charm helper.
Here’s a code snippet from the mariadb charm.

from charms.reactive import when, when_not
from charms.reactive.flags import set_flag, get_state, clear_flag
from charmhelpers.core.hookenv import (
    log,
    metadata,
    status_set,
    config,
    network_get,
    relation_id,
)

from charms import layer

@when_not('layer.docker-resource.mysql_image.fetched')
def fetch_image():
    layer.docker_resource.fetch('mysql_image')

@when('mysql.configured')
def mariadb_active():
    status_set('active', '')

@when('layer.docker-resource.mysql_image.available')
@when_not('mysql.configured')
def config_mariadb():
    status_set('maintenance', 'Configuring mysql container')

    spec = make_pod_spec()
    log('set pod spec:\n{}'.format(spec))
    layer.caas_base.pod_spec_set(spec)

    set_flag('mysql.configured')
....
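
The make_pod_spec() helper isn’t shown in the snippet above. A minimal sketch of what it could look like, assuming the template from earlier is shipped with the charm and that the charm’s config option names match the template placeholders (the template path here is illustrative, not necessarily what the real mariadb charm uses):

def make_pod_spec():
    """Render the pod spec template with the current charm settings."""
    # Illustrative template location; the real charm may store it differently.
    with open('reactive/spec_template.yaml') as spec_file:
        pod_spec_template = spec_file.read()

    cfg = config()  # charm settings, via config-get (imported above)
    data = {
        'port': cfg['port'],
        'rootpassword': cfg['rootpassword'],
        'user': cfg['user'],
        'password': cfg['password'],
        'database': cfg['database'],
        'query-cache-size': cfg['query-cache-size'],
    }
    # The %(...)s placeholders in the template correspond to the keys above.
    return pod_spec_template % data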

Important Difference With Cloud Charms

Charms such as databases, which have a provides endpoint, often need to set in relation data the IP address to which related charms can connect. The IP address is obtained using network-get, often with something like this:

@when('mysql.configured')
@when('server.database.requested')
def provide_database(mysql):
    info = network_get('server', relation_id())
    # Advertise the ingress address returned by network-get to related units.
    host = info.get('ingress-addresses', [''])[0]

    for request, application in mysql.database_requests().items():
        database_name = get_state('database')
        user = get_state('user')
        password = get_state('password')

        mysql.provide_database(
            request_id=request,
            host=host,
            port=3306,
            database_name=database_name,
            user=user,
            password=password,
        )
        clear_flag('server.database.requested')

Workload Status

Currently, there’s no well-defined way for a Kubernetes charm to query the status of the workload it is managing. So although the charm can reasonably set status to, say, blocked when it’s waiting for a required relation to be created, or maintenance when the pod spec is being set up, there’s no real way for the charm to know when to set active.

Juju helps solve this problem by looking at the pod status and using it in conjunction with the status reported by the charm to determine what to display to the user. Workload status values of waiting, blocked, maintenance, or any error conditions, are always reported directly. However, if the charm sets status to active, this is not shown as such until the pod is reported as Running. So all the charm has to do is set status to active when all of its initial setup is complete and the pod spec has been sent to Juju, and Juju will “Do The Right Thing” from then on. Both the gitlab and mariadb sample charms illustrate how workload status can be set correctly.

A future enhancement will be to allow the charm to directly query the workload status and the above workaround will become unnecessary.

Kubernetes Specific Pod Config

It’s possible to specify Kubernetes-specific pod configuration in the pod spec YAML created by the charm. The supported container attributes are:

  • livenessProbe
  • readinessProbe
  • imagePullPolicy

The syntax used is standard k8s pod spec syntax.

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

It’s also possible to set pod-level attributes:

  • activeDeadlineSeconds
  • serviceAccountName
  • restartPolicy
  • terminationGracePeriodSeconds
  • automountServiceAccountToken
  • securityContext
  • hostname
  • subdomain
  • priorityClassName
  • priority
  • dnsConfig
  • readinessGates

Again, standard k8s syntax is used for the above attributes.

You can also specify the command to run when starting the container, along with its args and working directory; these are set per container, as in the full container example later in this thread:

command: ["sh", "-c"]
args: ["doIt", "--debug"]
workingDir: "/path/to/here"

Here’s an example:

activeDeadlineSeconds: 10
serviceAccountName: serviceAccount
restartPolicy: OnFailure
terminationGracePeriodSeconds: 20
automountServiceAccountToken: true
securityContext:
  runAsNonRoot: true
hostname: host
subdomain: sub
priorityClassName: top
priority: 30
dnsConfig: 
  nameservers: [ns1, ns2]
readinessGates:
  - conditionType: PodScheduled
containers:
  - name: gitlab
    imagePullPolicy: Always
    ports:
    - containerPort: 80
      protocol: TCP
    livenessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /ping
        port: 8080
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /pingReady
        port: www
    config:
      attr: foo=bar; fred=blogs


#3

How can I deploy a k8s charm from a local path, like:
$ juju deploy ./builds/k8s/mariadb-k8s --resource mysql_image=mariadb


#4

What you have typed is exactly what’s needed. It works right now with Juju 2.5 candidate.
Your example is missing the --storage requirement (mariadb uses storage) but is otherwise 100% correct.


#5

Where can I find all the keywords that the charm pod spec supports? From the example above I can see ports, config, files, livenessProbe, readinessProbe, and imagePullPolicy. Is that all?

One of my specific requirements is:
a pod with an emptyDir volume whose emptyDir.medium field is set to “Memory”. Docs: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir


#6

In metadata.yaml

storage:
  database:
    type: filesystem
    location: /var/lib/mysql

What other storage types are there besides filesystem?


#7

Juju also supports block storage devices for IAAS models but not k8s models yet. Block device support in Kubernetes itself was only relatively recently added. It will be supported at some point but for now it’s just filesystem storage.


#8

emptyDir in-memory support is not implemented yet, but we can look at it for this cycle.
It would be done using standard Juju storage primitives. Juju does have a tmpfs storage type which for k8s models would be mapped to emptyDir.medium=“Memory”:

juju deploy mycharm --storage mydata=100M,tmpfs

#9

$ juju deploy cs:~juju/mariadb-k8s abc --storage database=10M,tmpfs ;
ERROR pool “tmpfs” not found

Should some special values be defined in the metadata?


#10

It’s not implemented yet. But it’s something we can try and look at this cycle.


#11

@wallyworld I followed the steps outlined above and thought it all came together pretty smoothly. I hit a few bumps along the way (more just my lack of knowledge of k8s), but a little googling and I got things working. I documented most of the process here.

Might you have an idea of how this workflow extends to getting ingress to the applications deployed in k8s?


#12

Glad it worked, and thanks for documenting your experience. You are right that when things go wrong, you do need some k8s knowledge to debug.

With the jenkins permission issue, I’m hoping that one of the guys with more practical k8s experience than I have can chime in. If there’s something we need to allow for in the k8s charm setup we can add it in. @kwmonroe @cory_fu @knobby any insight?

With regards to ingress to an application, juju expose works - see Exposing Gitlab.
But the exact mechanics depend on the underlying cluster setup. In my experience, deploying CDK on AWS with the integrator charm results in a load-balanced service with a public IP being created out of the box, so the application is exposed automatically. But it appears that’s not the case for you, so the juju expose step is necessary.

Also, again depending on the underlying k8s setup, you can use advanced application config to configure the deployed application with various k8s-specific settings that relate to ingress, e.g. external IPs, load balancer IP, etc. Ideally we’ll get to documenting a deployment recipe for various scenarios, but for now there’s a bit of k8s operations knowledge needed to get things going in this type of setup.


#13

Exactly what I was missing. Thanks @wallyworld


Translating Deployment Configurations to K8S Charms
#14

If I need to import a Python package in my reactive file, how can I install it with k8s charms? I used to create a wheelhouse.txt file in the charm and it would get pulled into the wheelhouse after charm build, but that doesn’t seem to work here.


#15

@sborny,

The wheelhouse.txt file is still used, but due to technical differences in the way k8s charms work vs machine charms, the dependencies are installed into the image at build time rather than being put into a wheelhouse/ directory as source wheels or eggs. You should be able to see your packages in the built charm under the lib/ directory, which is automatically added to the PYTHONPATH by layer:caas-base so it should function seamlessly in the way you would expect from machine charm development.

If you’re having import issues in your charm, please start a new thread (if you don’t mind) so we can dive into debugging it there. Thanks!


#16

@cory_fu
Thanks for the write-up. The wheelhouse.txt file does indeed still work. The Python lib that I needed didn’t install via the wheelhouse.txt method and needed a pip_install from the charmhelpers library.
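
For reference, something along these lines did the trick for me; the import path has moved between charmhelpers releases, and the package and flag names here are just placeholders:

from charms.reactive import when_not
from charms.reactive.flags import set_flag

# The helper lives in a different module depending on the charmhelpers release.
try:
    from charmhelpers.fetch.python.packages import pip_install
except ImportError:
    from charmhelpers.contrib.python.packages import pip_install


@when_not('mycharm.deps.installed')
def install_deps():
    # Placeholder package name; install whatever the reactive code imports.
    pip_install('some-python-package')
    set_flag('mycharm.deps.installed')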


#17

I see the various bits of K8S stuff you can override. One obvious thing that I can’t seem to find in the documentation or a charm is: can you override the command, or do you have to use the one provided by the container?


#18

k8s charms do support configuring the command to run when starting the container, plus args, working dir, etc.:

containers:
  - name: mycharm
    imageDetails:
      imagePath: mycharm
      username: fred
      password: secret
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
    - containerPort: 80
      protocol: TCP

#19

Cool thanks! I’ll give it a test.