K8s Spec v3 changes

Available in the 2.8 edge snap…

New extended volume support

It’s now possible to configure volumes backed by:

  • config map
  • secret
  • host path
  • empty dir

To do this, you’ll need to mark your YAML as version 3. Version 3 also:

  • renames the config block to envConfig (to better reflect its purpose);
  • renames the files block to volumeConfig;
  • allows a file mode to be specified.

With secret and config map volumes, the resources must be defined elsewhere in the YAML handed to Juju: you can’t reference existing resources that weren’t created by the charm. If you leave out the files block, the entire secret or config map is mounted. path is optional; if it’s not specified, the file is created with the same name as key.

The path for each file is created relative to the overall mount point.
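For instance, here’s a minimal sketch of mounting a whole secret (the secret name is illustrative); because files is omitted, every key in the secret’s data becomes a file under the mount point:

    volumeConfig:
      - name: mysecret
        mountPath: /etc/mysecret
        secret:
          name: mysecret
          # files omitted: the entire secret is mounted,
          # one file per key, each named after its key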

Here’s an example of what’s possible when creating the new volume types.

version: 3
...
    # renamed from config
    envConfig:
      MYSQL_ROOT_PASSWORD: %(root_password)s
      MYSQL_USER: %(user)s
      MYSQL_PASSWORD: %(password)s
      MYSQL_DATABASE: %(database)s
      MY_NODE_NAME:
        field:
          path: spec.nodeName
          api-version: v1
      build-robot-secret:
        secret:
          name: build-robot-secret
          key: config.yaml
    # Here's where the new volumes types are set up
    # This block was called "files" in v2
    volumeConfig:
      # This is what was supported previously (simple text files)
      - name: configurations
        mountPath: /etc/mysql/conf.d
        files:
          - path: custom_mysql.cnf
            content: |
              [mysqld]
              skip-host-cache
              skip-name-resolve
              query_cache_limit = 1M
              query_cache_size = %(query-cache-size)s
              query_cache_type = %(query-cache-type)s
      # host path
      - name: myhostpath1
        mountPath: /var/log1
        hostPath:
          path: /var/log
          type: Directory
      - name: myhostpath2
        mountPath: /var/log2
        hostPath:
          path: /var/log
          # see https://kubernetes.io/docs/concepts/storage/volumes/#hostpath for other types
          type: Directory
      # empty dir
      - name: cache-volume
        mountPath: /empty-dir
        emptyDir:
          medium: Memory # defaults to disk
      # secret
      - name: another-build-robot-secret
        mountPath: /opt/another-build-robot-secret
        secret:
          name: another-build-robot-secret
          defaultMode: 511
          files:
            - key: username
              path: my-group/username
              mode: 511
            - key: password
              path: my-group/password
              mode: 511
      # config map
      - name: log-config
        mountPath: /log-config
        configMap:
          name: log-config
          defaultMode: 511
          files:
            - key: log_level
              path: log_level
              mode: 511

The lifecycle of CRDs

This release introduces CRD lifecycle management: charmers can now decide when a CRD gets deleted by setting the juju-resource-lifecycle label.

{
    "juju-resource-lifecycle": "model | persistent"
}
  1. If no juju-resource-lifecycle label is set, the CRD is deleted together with the application.

  2. If juju-resource-lifecycle is set to model, the CRD is not deleted when the application is removed; it persists until the model is destroyed.

  3. If juju-resource-lifecycle is set to persistent, the CRD is never deleted by Juju, even after the model is gone.

Deploy a charm with the spec below:

version: 3
kubernetesResources:
  customResourceDefinitions:
    - name: tfjobs.kubeflow.org
      labels:
        foo: bar  # deleted with the app;
      spec:
        ...
    - name: tfjob1s.kubeflow.org1
      labels:
        foo: bar
        juju-resource-lifecycle: model  # deleted with the model;
      spec:
        ...
    - name: tfjob2s.kubeflow.org2
      labels:
        foo: bar
        juju-resource-lifecycle: persistent  # never gets deleted;
      spec:
        ...

$ juju deploy /tmp/charm-builds/mariadb-k8s/ --debug  --resource mysql_image=mariadb -n1

$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob1s.kubeflow.org1",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "model",
    "juju-model": "t1"
  }
]
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]
[
  "tfjobs.kubeflow.org",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-model": "t1"
  }
]

$ juju remove-application mariadb-k8s -m k1:t1 --destroy-storage --force
removing application mariadb-k8s
- will remove storage database/0

$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob1s.kubeflow.org1",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "model",
    "juju-model": "t1"
  }
]
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]

$ juju destroy-model t1 --destroy-storage -y --debug --force

$ mkubectl get crds -o json | jq '.items[] | .metadata | [.name,.labels]'
[
  "tfjob2s.kubeflow.org2",
  {
    "foo": "bar",
    "juju-app": "mariadb-k8s",
    "juju-resource-lifecycle": "persistent",
    "juju-model": "t1"
  }
]

The lifecycle of CRs

Custom resources (CRs) created by the charm honour the same juju-resource-lifecycle label, as the following walkthrough shows.
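As a reference point, here’s a hedged sketch of how the labels might be declared on the CRs in the spec (the customResources layout and resource bodies are illustrative, chosen to match the names in the output below):

version: 3
kubernetesResources:
  customResources:
    tfjob1s.kubeflow.org1:
      - kind: TFJob1
        metadata:
          name: dist-mnist-for-e2e-test12
          labels:
            juju-resource-lifecycle: model  # deleted with the model
        spec:
          ...
      - kind: TFJob1
        metadata:
          name: dist-mnist-for-e2e-test13
          labels:
            juju-resource-lifecycle: persistent  # never deleted by Juju
        spec:
          ...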

$ juju deploy /tmp/charm-builds/mariadb-k8s/ --debug  --resource mysql_image=mariadb

$ mkubectl get crds tfjob1s.kubeflow.org1 -o json | jq ' .metadata | {name: .name,"juju-resource-lifecycle": (.labels | ."juju-resource-lifecycle")}'
{
  "name": "tfjob1s.kubeflow.org1",
  "juju-resource-lifecycle": "persistent"
}

$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test11",
  "juju-resource-lifecycle": null
}
{
  "name": "dist-mnist-for-e2e-test12",
  "juju-resource-lifecycle": "model"
}
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}

$ juju remove-application mariadb-k8s -m k1:t1 --destroy-storage --force
removing application mariadb-k8s
- will remove storage database/0

$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test12",
  "juju-resource-lifecycle": "model"
}
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}

$ juju destroy-model t1 --destroy-storage -y --debug --force

$ mkubectl get tfjob1s.kubeflow.org1 -o json | jq '.items[] | .metadata | {name: .name,"juju-resource-lifecycle":(.labels | ."juju-resource-lifecycle")}'
{
  "name": "dist-mnist-for-e2e-test13",
  "juju-resource-lifecycle": "persistent"
}

Webhook names can now be fixed

  • The webhooks section changed from a map to a slice.

  • By default, Juju prefixes the namespace to the names of global webhook resources.
    Charmers can now keep the name fixed by specifying an annotation like:

{
    "juju.io/disable-name-prefix": "true"
}
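Here’s a hedged sketch of how that might look in the charm’s spec (webhook bodies elided; names chosen to match the output below):

kubernetesResources:
  mutatingWebhookConfigurations:
    - name: mutatingwebhook-will-change  # no annotation: Juju adds the namespace prefix
      webhooks:
        ...
    - name: mutatingwebhook-will-keep
      annotations:
        juju.io/disable-name-prefix: "true"  # name is kept as-is
      webhooks:
        ...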
$ yml2json /tmp/charm-builds/mariadb-k8s/reactive/k8s_resources.yaml --pretty | jq '.kubernetesResources | .mutatingWebhookConfigurations[],.validatingWebhookConfigurations[] | {name: .name, annotations: .annotations}'
{
  "name": "mutatingwebhook-will-change",
  "annotations": null
}
{
  "name": "mutatingwebhook-will-keep",
  "annotations": {
    "juju.io/disable-name-prefix": "true"
  }
}
{
  "name": "validatingwebhook-will-keep",
  "annotations": {
    "juju.io/disable-name-prefix": "true"
  }
}
{
  "name": "validatingwebhook-will-change",
  "annotations": null
}

$ mkubectl get mutatingWebhookConfigurations,validatingWebhookConfigurations -n t1 -o json | jq '.items[].metadata | {name: .name, annotations: .annotations}'
{
  "name": "mutatingwebhook-will-keep",
  "annotations": {
    "juju.io/controller": "f8917560-4288-46b7-87e4-56fce849bf6b",
    "juju.io/disable-name-prefix": "true",
    "juju.io/model": "a271b010-7f50-4254-8b1d-eda1f0c62081"
  }
}
{
  "name": "t1-mutatingwebhook-will-change",
  "annotations": {
    "juju.io/controller": "f8917560-4288-46b7-87e4-56fce849bf6b",
    "juju.io/model": "a271b010-7f50-4254-8b1d-eda1f0c62081"
  }
}
{
  "name": "t1-validatingwebhook-will-change",
  "annotations": {
    "juju.io/controller": "f8917560-4288-46b7-87e4-56fce849bf6b",
    "juju.io/model": "a271b010-7f50-4254-8b1d-eda1f0c62081"
  }
}
{
  "name": "validatingwebhook-will-keep",
  "annotations": {
    "juju.io/controller": "f8917560-4288-46b7-87e4-56fce849bf6b",
    "juju.io/disable-name-prefix": "true",
    "juju.io/model": "a271b010-7f50-4254-8b1d-eda1f0c62081"
  }
}

Update strategy support

The update strategy can now be defined in the .service section of the pod spec. Here is the detailed configuration for each Kubernetes deployment type:
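For example, a stateful app’s pod spec might declare the strategy like this (a minimal sketch matching the pod-spec-get output below):

version: 3
service:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 10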

Stateful app:

$ yml2json /tmp/charm-builds/mariadb-k8s/metadata.yaml | jq .deployment
{
  "type": "stateful",
  "min-version": "1.10.1",
  "service": "omit"
}

$ juju run --unit mariadb-k8s/0  pod-spec-get | yml2json | jq .service
{
  "updateStrategy": {
    "rollingUpdate": {
      "partition": 10
    },
    "type": "RollingUpdate"
  }
}

Stateless app:

$ yml2json /tmp/charm-builds/mariadb-k8s/metadata.yaml | jq .deployment
{
  "type": "stateless",
  "min-version": "1.10.1",
  "service": "omit"
}

$ juju run --unit mariadb-k8s/0  pod-spec-get | yml2json | jq .service
{
  "updateStrategy": {
    "rollingUpdate": {
      "maxUnavailable": 10
    },
    "type": "RollingUpdate"
  }
}

Daemon app:

$ yml2json /tmp/charm-builds/mariadb-k8s/metadata.yaml | jq .deployment
{
  "type": "daemon",
  "min-version": "1.10.1",
  "service": "omit"
}

$ juju run --unit mariadb-k8s/0  pod-spec-get | yml2json | jq .service
{
  "updateStrategy": {
    "rollingUpdate": {
      "maxUnavailable": 10
    },
    "type": "RollingUpdate"
  }
}

I’m not sure I understand how to attach a Kubernetes secret to a container config from this example. It looks like the secret is added as an environment variable, rather than coming from a secret created by the spec.

My use case is the following: my charm spec creates a secret like this:

'kubernetesResources': {
    'secrets': [
        {
            'name': 'mssql',
            'type': 'Opaque',
            'data': {
                'SA_PASSWORD': b64encode(
                    'MyC0m9l&xP@ssw0rd'.encode('utf-8')).decode('utf-8'),
            }
        }
    ]
}

So, my goal is to attach this secret to my container. How would I do it without it being part of envConfig? i.e.

'containers': [
    {
        'name': self.framework.model.app.name,
        'image': config["image"],
        'ports': ports,
        'envConfig': container_config,
    }
],

Hi @camille.rodriguez1
You can mount the secret into the pod’s filesystem using volumeConfig, like:

    volumeConfig:
      - name: another-build-robot-secret
        mountPath: /opt/another-build-robot-secret
        secret:
          name: another-build-robot-secret
          defaultMode: 511
          files:
            - key: username
              path: my-group/username
              mode: 511
            - key: password
              path: my-group/password
              mode: 511

or expose it as an environment variable using envConfig, like:

   envConfig:
     build-robot-secret:
        secret:
          name: build-robot-secret
          key: config.yaml
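Here key refers to a key inside the secret’s data map, so for the mssql secret defined earlier in this thread a sketch (hedged, untested) would be:

envConfig:
  mssql-secret:
    secret:
      name: mssql
      key: SA_PASSWORD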

Hi @kelvin.liu,

I’ve tried both ways, and both lead to config-changed hook errors.
In my case, what would the "key" be: "data" or "SA_PASSWORD"? Anyway, I’ve tried both, and this is the result. It might be a problem with the operator framework not being able to process this new feature, I’m not sure…

application-mssql: 10:15:32 INFO unit.mssql/0.juju-log Ran on_config_changed hook
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed Traceback (most recent call last):
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed File "/var/lib/juju/agents/unit-mssql-0/charm/hooks/config-changed", line 204, in
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed main(Charm)
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed File "lib/ops/main.py", line 183, in main
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed _emit_charm_event(charm, juju_event_name)
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed File "lib/ops/main.py", line 114, in _emit_charm_event
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed event_to_emit.emit(*args, **kwargs)
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed File "lib/ops/framework.py", line 177, in emit
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed framework._emit(event)
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed File "lib/ops/framework.py", line 582, in _emit
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed self._reemit(event_path)
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed File "lib/ops/framework.py", line 617, in _reemit
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed custom_handler(event)
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed File "/var/lib/juju/agents/unit-mssql-0/charm/hooks/config-changed", line 62, in on_config_changed
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed if self.state.spec != new_spec:
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed File "lib/ops/framework.py", line 692, in getattr
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed raise AttributeError(f"attribute '{key}' is not stored")
application-mssql: 10:15:33 DEBUG unit.mssql/0.config-changed AttributeError: attribute 'spec' is not stored
application-mssql: 10:15:33 ERROR juju.worker.uniter.operation hook "config-changed" (via explicit, bespoke hook script) failed: exit status 1

I tried a few different ways to build my spec template, such as

'containers': [
    {
        'name': self.framework.model.app.name,
        'image': config["image"],
        'ports': ports,
        'envConfig': {
            'mssql-secret': {
                'secret': {
                    'name': 'mssql',
                    'key': 'data'
                },
            }
        },
    },
],

or something more like

'containers': [
    {
        'name': self.framework.model.app.name,
        'image': config["image"],
        'ports': ports,
        'envConfig': container_config,
        'volumeConfig': {
            'name': 'mssql-secret',
            'mountPath': '/opt/secret',
            'secret': {
                'name': 'mssql',
                'defaultMode': 511,
            }
        },
    }
],

Edit: I talked to the operator framework developers, and it seems the issue is on their side; they do not support the new secrets integration yet.

Looking at this traceback, the issue isn’t how you’re using the secret, or whether the feature is supported by the charm framework. It’s just an initialization issue. You are doing a check to see "has the pod spec changed", but you forgot to initialise self.state.spec.

You’ll want to have something like:

from ops.charm import CharmBase
from ops.framework import StoredState

class MyCharm(CharmBase):
    state = StoredState()

    def __init__(self, parent, key):
        super().__init__(parent, key)
        # Give state a 'spec' attribute on first run, without
        # clobbering it on subsequent hook invocations.
        self.state.set_default(spec=None)

That will guarantee that the state has a 'spec' attribute that is initialized to None, but won’t be overwritten in the next hook (versus doing self.state.spec = None directly in __init__, which would reset the value on every hook).


Are these new fields renamed in a backwards-compatible way, e.g. will the old keys continue to work?

If you use v3, you’ll need to migrate to the new field names.
v2 will continue to accept the original field names.
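For example, a minimal sketch of the rename (container name is illustrative):

# v2: old names still accepted
version: 2
containers:
  - name: myapp
    config:
      FOO: bar

# v3: new names required
version: 3
containers:
  - name: myapp
    envConfig:
      FOO: bar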

Please could someone provide an example of using a plain-old environment variable, i.e. I want to set "basic_auth": "false" but haven’t been able to do that yet, everything I’ve tried gives syntax errors.

Just a simple example would be most appreciated, I couldn’t see that in this thread.

Is there also a formal schema or spec we can read with examples?

Wouldn’t that be:

envConfig:
  basic_auth: False

?

I’ve been using the postgres charm as my template for understanding most concepts, but I think that’s the same as the top of this thread:
https://git.launchpad.net/charm-k8s-postgresql/tree/src/charm.py#n116
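For completeness, here’s a minimal hedged sketch of a full v3 spec setting a plain environment variable (container name and image are placeholders; quote the value if you need the literal string "false" rather than a boolean):

version: 3
containers:
  - name: myapp
    image: myapp:latest
    envConfig:
      basic_auth: "false"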

The issue was the undocumented "version": "3" requirement. We’ve got past that now, thank you for the reply.

To be fair, the very first post in this thread did contain what was needed, including the version attribute and an example of setting an env var or two 🙂

Here’s an example of what’s possible when creating the new volume types.

version: 3
...
   # renamed from config
   envConfig:
     MYSQL_ROOT_PASSWORD: %(root_password)s
     MYSQL_USER: %(user)s
     MYSQL_PASSWORD: %(password)s
     MYSQL_DATABASE: %(database)s
     MY_NODE_NAME:

Thanks @alexellisuk and @wallyworld, we’re working to make sure this is clear in the documentation!