Translating Deployment Configurations to K8S Charms


Given a deployment configuration, I can identify a few things that will fit into the pod spec, but I have a difficult time working out how to apply the other necessary objects in the deployment configuration to the Juju-deployed application.

I feel it may be helpful if we cut this enterprise-gateway.yaml apart and depict how each piece fits into the Juju CAAS workflow.



I would definitely like to document the changes necessary, and there has even been some discussion about creating a charm create template that would attempt to map the deployment configuration over for you, giving you a starting point for your charm. However, we do have to be a little careful with that: the charm will almost certainly need some manual intervention after the templating process to function correctly, let alone to really take advantage of what Juju adds.


The Writing a Kubernetes Charm post pretty much describes what’s currently supported in terms of setting up a k8s workload. In a nutshell, Juju will create these k8s resources in response to a juju deploy operation:

Charms without storage:

  • deployment controller / replica set
  • service
  • pod(s)
  • ingress resource (when app is exposed)

Charms with storage:

  • stateful set / replica set
  • service
  • pod(s)
  • persistent volume claim(s)
  • persistent volume(s)
  • storage class
  • ingress resource (when app is exposed)

The bits that can be configured by the user (the devops engineer) are:

  • service properties - at deploy time using juju application config
    • service type
    • target port
    • annotations
    • external ips
    • load balancer ip
    • load balancer source ranges
    • ingress hostname


juju deploy mycharm --config kubernetes-service-annotations="a=b c=d"
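As a sketch, the same settings can also be supplied from a YAML config file at deploy time (juju deploy accepts --config pointing at a file keyed by application name). Note that kubernetes-service-annotations is taken from the command above; the other key names shown are assumptions and should be checked against your Juju version:

```yaml
# mycharm-config.yaml -- deploy-time service settings
# (key names other than kubernetes-service-annotations are assumptions;
#  check the supported application config keys for your Juju version)
mycharm:
  kubernetes-service-type: LoadBalancer
  kubernetes-service-annotations: "a=b c=d"
```

Then: juju deploy mycharm --config mycharm-config.yaml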

The bits specified by the charm author and augmented by charm config (tweakable by the devops engineer) are:

  • workload pod configuration (pod spec)
  • files injected into the pod’s docker image
  • custom resource definition

The charm does the above by sending a YAML snippet to Juju. The YAML snippet is at a higher level of abstraction than raw k8s YAML and absolves the user from having to worry about all the boilerplate normally associated with setting up config maps for file mapping, PVCs for storage claims etc.

The above refers specifically to configuring the workload pods and the docker images that are run, and also custom resource definitions.

Here’s an example YAML file that a charm might produce, covering all currently supported attributes. The charm would pass the YAML below to the pod-spec-set hook command.

The example YAML results in workload pods containing 2 containers, plus a custom resource definition. The 2 containers each get their docker images from a private or public repo:

  1. gitlab - docker image comes from registry requiring credential
  2. gitlab-helper - public docker image

Note: juju charms use resources to store the image registry details separately from the charm, and the charms substitute those details into the YAML template prior to sending it to the controller with pod-spec-set. There’s a reactive layer to help with this. See also the example charms for guidance.
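To illustrate the shape of that substitution (the placeholder names here are purely illustrative, not the actual keys used by the reactive layer), a charm’s pod-spec template might look like:

```yaml
containers:
  - name: mycharm
    imageDetails:
      imagePath: {docker_image_path}        # filled in from the charm's image resource
      username: {docker_registry_username}  # only needed for a private registry
      password: {docker_registry_password}
```

The charm renders this template with the registry details fetched from the attached resource, then passes the result to pod-spec-set.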

You can see that aside from the expected image and container port details, a charm can configure:

  • the docker command and args
  • working dir
  • docker env vars (see config section)
  • text files to mount inside the docker container (see files section)
  • liveness and readiness probes

The liveness and readiness probe YAML syntax is just the native k8s pod spec syntax.

containers:
  - name: gitlab
    imageDetails:
      imagePath: testing/gitlab@sha256:deed-beef
      username: docker-registry
      password: hunter2
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    livenessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /ping
        port: 8080
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /pingReady
        port: www
    config:
      attr: foo=bar; name['fred']='blogs';
      foo: bar
      restricted: 'yes'
      switch: on
    files:
      - name: configuration
        mountPath: /var/lib/foo
        files:
          file1: |
            foo: bar
  - name: gitlab-helper
    imageDetails:
      imagePath: testing/no-secrets-needed@sha256:deed-beef

customResourceDefinition:
  - group: kubeflow.org
    version: v1alpha2
    scope: Namespaced
    kind: TFJob
    validation:
      properties:
        tfReplicaSpecs:
          properties:
            Worker:
              properties:
                replicas:
                  type: integer
                  minimum: 1
            PS:
              properties:
                replicas:
                  type: integer
                  minimum: 1
            Chief:
              properties:
                replicas:
                  type: integer
                  minimum: 1
                  maximum: 1

So with the above knowledge, and looking at the enterprise gateway example referenced, the k8s resources that are not modelled by Juju and need to be created with kubectl are:

  • ServiceAccount
  • ClusterRole
  • ClusterRoleBinding

Juju creates a namespace with the same name as the model, and also the deployment and service.
The enterprise gateway charm would produce a pod-spec YAML containing just the bits from the deployment template spec.
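For example, those RBAC pieces could be created by hand along these lines. The names and rules below are illustrative, not taken from the actual enterprise-gateway.yaml; the namespace must match the Juju model name:

```yaml
# rbac.yaml -- apply with: kubectl apply -f rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: enterprise-gateway-sa        # illustrative name
  namespace: my-model                # Juju model name == k8s namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: enterprise-gateway-controller
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: enterprise-gateway-controller
subjects:
  - kind: ServiceAccount
    name: enterprise-gateway-sa
    namespace: my-model
roleRef:
  kind: ClusterRole
  name: enterprise-gateway-controller
  apiGroup: rbac.authorization.k8s.io
```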

The gitlab and mariadb demo charms are decent enough examples to crib from to see how to cover the various bits in the enterprise gateway example.

What’s Not Supported

Right now, we don’t support the serviceAccountName for the pod. That means that the default service account is used and the pod can’t be granted different privileges. That can be fixed.
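For reference, this is the native k8s pod spec field in question (the names below are illustrative); with support added, Juju would need to set something equivalent on the workload pods it creates:

```yaml
# native k8s pod spec (not Juju pod-spec YAML)
spec:
  serviceAccountName: enterprise-gateway-sa  # illustrative name
  containers:
    - name: enterprise-gateway
      image: example/image               # placeholder
```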

Meta: Collected topics and docs for k8s charms

Let me see what I can get going given this information. Many thanks :pray::pray::pray::pray::pray: @wallyworld


The fact that this question needed to be asked means there needs to be more doc besides my “Getting Started” posts, which are a guide to kicking the tyres rather than good reference doc. Doc takes time, but it is something we’re working on. Hopefully we can take the material in this thread and polish it up into some good reference doc.

The best reference doc we have right now are the various k8s charms that have been done - the gitlab and mariadb examples, plus the Kubeflow ones (including redis). All of these are in the ~juju namespace on the charm store.


I’m trying to use securityContext like here and here

You haven’t listed securityContext above, I’m assuming this means it is not supported (not sure if ^ is meant to be an exhaustive example or not). Would it be feasible to add securityContext if it isn’t already supported?
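For reference, this is the native k8s pod spec syntax being asked about (values illustrative); securityContext can appear at both the pod and container level:

```yaml
# native k8s pod spec syntax (not currently in Juju's pod-spec YAML)
spec:
  securityContext:          # pod-level
    fsGroup: 100
  containers:
    - name: app
      securityContext:      # container-level
        runAsUser: 1000
        allowPrivilegeEscalation: false
```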



This would be awesome! thanks!