Workload status

Are there any caveats with workload status?

In the examples (mysql, gitlab, mariadb) I’ve tried, as well as my own k8s charms, the workload status of the units is set to maintenance before calling pod_spec_set, and never changes to active.

I can check the status of the pod via kubectl, but is there a way to carry over that state into the Juju model?

Note firstly that those charms are proof-of-concept (POC) and not production ready.

The general principle is that charms are expected to set their own workload status. So a charm would start out setting “maintenance” while it sets things up, then “active” when the workload is ready, or “blocked” if there’s a problem that needs human intervention, etc.
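
As a rough illustration, a minimal sketch of that pattern using the operator (ops) framework’s status classes might look like the following; the charm class, the hook observed, and the `required-option` config key are all hypothetical:

```python
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, BlockedStatus, MaintenanceStatus


class MyCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.config_changed, self._on_config_changed)

    def _on_config_changed(self, event):
        # Tell Juju we are busy setting things up.
        self.unit.status = MaintenanceStatus("configuring workload")

        if not self.config.get("required-option"):
            # A problem that needs human intervention.
            self.unit.status = BlockedStatus("required-option must be set")
            return

        # ... do the actual configuration work here ...

        # The workload is ready.
        self.unit.status = ActiveStatus()


if __name__ == "__main__":
    main(MyCharm)
```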

For k8s charms, however, it’s not currently possible for the charm hooks to directly query the workload status, as the charm operator runs in a different pod from the workload. This is something that will be looked at next cycle. For now, what we do is fudge the workload status based on the pod status. So long as the charm has not said it’s doing maintenance, we’ll reflect the workload pod status as the Juju workload status. If the workload pod is reported as “running”, we’ll display that as “active”. If the workload pod can’t come up and is in error, we’ll display that as “blocked”, etc.

For now, a k8s charm could report “maintenance” when it first starts, and then “active” as soon as the pod spec yaml is sent to Juju. Juju will report a sensible workload status based on the pod status.
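
As a rough example, a pod-spec k8s charm following that advice might look like the sketch below, assuming the ops framework’s `self.model.pod.set_spec` call; the spec contents, image and port are placeholders:

```python
from ops.charm import CharmBase
from ops.main import main
from ops.model import ActiveStatus, MaintenanceStatus


class MyK8sCharm(CharmBase):
    def __init__(self, *args):
        super().__init__(*args)
        self.framework.observe(self.on.start, self._configure_pod)
        self.framework.observe(self.on.config_changed, self._configure_pod)

    def _configure_pod(self, event):
        if not self.unit.is_leader():
            # Only the leader sets the pod spec; other units have nothing to do.
            self.unit.status = ActiveStatus()
            return

        # Report "maintenance" while we prepare and send the spec.
        self.unit.status = MaintenanceStatus("building pod spec")

        spec = {
            "version": 3,
            "containers": [{
                "name": self.app.name,
                "image": "example/app:latest",  # placeholder image
                "ports": [{"containerPort": 8080, "protocol": "TCP"}],
            }],
        }
        self.model.pod.set_spec(spec)

        # From here on Juju reflects the pod status: a "running" pod shows
        # as "active", a pod stuck in error shows as "blocked", and so on.
        self.unit.status = ActiveStatus()


if __name__ == "__main__":
    main(MyK8sCharm)
```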