The Writing a Kubernetes Charm post pretty much describes what’s currently supported in terms of setting up a k8s workload. In a nutshell, Juju will create these k8s resources in response to a juju deploy operation:
Charms without storage:
- deployment controller / replica set
- service
- pod(s)
- ingress resource (when app is exposed)
Charms with storage:
- stateful set / replica set
- service
- pod(s)
- persistent volume claim(s)
- persistent volume(s)
- storage class
- ingress resource (when app is exposed)
The bits that can be configured by the user (the devops engineer) are:
- service properties - at deploy time using juju application config
- service type
- target port
- annotations
- external ips
- load balancer ip
- load balancer source ranges
- ingress hostname
e.g.
juju deploy mycharm --config kubernetes-service-annotations="a=b c=d"
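For illustration, with the config above Juju would create a Service carrying those annotations, roughly like this (a sketch only — the metadata name, service type and port here are illustrative, not what Juju necessarily emits):

```yaml
# Sketch of the Service Juju would generate for the deploy above
apiVersion: v1
kind: Service
metadata:
  name: mycharm
  annotations:
    a: b
    c: d
spec:
  type: ClusterIP      # the service type is also settable via config
  ports:
    - port: 80
```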
The bits specified by the charm author and augmented by charm config (tweakable by the devops engineer):
- workload pod configuration (pod spec)
- files injected into the pod’s docker image
- custom resource definition
The charm does the above by sending a YAML snippet to Juju. The YAML snippet is at a higher level of abstraction than raw k8s YAML and absolves the user from having to worry about all the boilerplate normally associated with setting up config maps for file mapping, PVCs for storage claims etc.
The above refers specifically to configuring the workload pods and the docker images that are run, and also custom resource definitions.
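To make the boilerplate saving concrete: a single files entry in the Juju pod spec stands in for a ConfigMap plus the volume/volumeMounts wiring you’d otherwise write by hand in raw k8s YAML. A sketch of the raw equivalent (names here are illustrative):

```yaml
# Raw k8s equivalent of one Juju "files" entry (illustrative names)
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycharm-configuration
data:
  file1: |
    [config]
    foo: bar
# ...plus, in the pod spec itself:
# volumes:
#   - name: configuration
#     configMap:
#       name: mycharm-configuration
# containers[].volumeMounts:
#   - name: configuration
#     mountPath: /var/lib/foo
```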
Here’s an example YAML file which a charm may produce, covering all currently supported attributes. The charm would pass the YAML below to the pod-spec-set hook command.
The example YAML results in workload pods containing 2 containers, plus a custom resource definition. The 2 containers each get their docker images from a private or public repo:
- gitlab - docker image comes from registry requiring credential
- gitlab-helper - public docker image
Note: juju charms use resources to store the image registry details separately from the charm, and the charms substitute those details into the YAML template prior to sending it to the controller with pod-spec-set. There’s a reactive layer to help with this. See also the example charms for guidance.
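As a sketch of how that looks (resource name illustrative; the oci-image resource type is what the k8s demo charms use), the image details are declared as a resource in the charm’s metadata.yaml:

```yaml
# metadata.yaml (fragment) - declare the image as a charm resource
resources:
  gitlab-image:
    type: oci-image
    description: Docker image for the gitlab workload
```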
You can see that aside from the expected image and container port details, a charm can configure:
- the docker command and args
- working dir
- docker env vars (see config section)
- text files to mount inside the docker container (see files section)
- liveness and readiness probes
The liveness and readiness probe YAML syntax is just the native k8s pod spec syntax.
```yaml
containers:
  - name: gitlab
    imageDetails:
      imagePath: staging.registry.org/testing/testing-image@sha256:deed-beef
      username: docker-registry
      password: hunter2
    imagePullPolicy: Always
    command: ["sh", "-c"]
    args: ["doIt", "--debug"]
    workingDir: "/path/to/here"
    ports:
      - containerPort: 80
        name: fred
        protocol: TCP
      - containerPort: 443
        name: mary
    livenessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /ping
        port: 8080
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /pingReady
        port: www
    config:
      attr: foo=bar; name['fred']='blogs';
      foo: bar
      restricted: 'yes'
      switch: on
    files:
      - name: configuration
        mountPath: /var/lib/foo
        files:
          file1: |
            [config]
            foo: bar
  - name: gitlab-helper
    imageDetails:
      imagePath: testing/no-secrets-needed@sha256:deed-beef
customResourceDefinition:
  - group: kubeflow.org
    version: v1alpha2
    scope: Namespaced
    kind: TFJob
    validation:
      properties:
        tfReplicaSpecs:
          properties:
            Worker:
              properties:
                replicas:
                  type: integer
                  minimum: 1
            PS:
              properties:
                replicas:
                  type: integer
                  minimum: 1
            Chief:
              properties:
                replicas:
                  type: integer
                  minimum: 1
                  maximum: 1
```
So with the above knowledge, and looking at the enterprise gateway example referenced, the k8s resources that are not modelled by Juju and need to be created with kubectl are:
- ServiceAccount
- ClusterRole
- ClusterRoleBinding
Juju creates a namespace with the same name as the model, and also the deployment and service.
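A sketch of those manually-created RBAC objects, applied with kubectl apply -f (all names and rules here are illustrative — the real enterprise gateway needs its own permissions; note the namespace matches the Juju model name):

```yaml
# Illustrative RBAC objects that Juju does not model and must be
# created by hand with kubectl
apiVersion: v1
kind: ServiceAccount
metadata:
  name: enterprise-gateway
  namespace: mymodel            # Juju namespace == model name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: enterprise-gateway
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: enterprise-gateway
subjects:
  - kind: ServiceAccount
    name: enterprise-gateway
    namespace: mymodel
roleRef:
  kind: ClusterRole
  name: enterprise-gateway
  apiGroup: rbac.authorization.k8s.io
```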
The enterprise gateway charm would produce a pod-spec YAML containing just the bits from the deployment template spec.
The gitlab and mariadb demo charms are decent enough examples to crib off to see how to cover the various bits in the enterprise gateway example.
What’s Not Supported
Right now, we don’t support setting the serviceAccountName for the pod. That means the default service account is used and the pod can’t be granted different privileges. That can be fixed.
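For reference, this is the native k8s pod-spec field in question — the one a charm can’t yet express through pod-spec-set (account and image names here are illustrative):

```yaml
# Native k8s pod spec (fragment) - the field Juju can't yet set
spec:
  serviceAccountName: enterprise-gateway   # illustrative name
  containers:
    - name: gateway
      image: example/gateway:latest        # illustrative image
```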