Kubernetes remove unit

Hi folks, I’m not sure whether this is broken because Juju on k8s is new, because I’m running the wrong command, or because the unit is in error and giving bad feedback:

ubuntu@bionic-test:~$ juju status
Model  Controller  Cloud/Region              Version  SLA          Timestamp
saiku  k8s         k8s-test-cloud/RegionOne  2.6.8    unsupported  23:42:32Z

App            Version  Status      Scale  Charm          Store       Rev  OS          Address     Notes
s2                      active          1  saiku-k8s      jujucharms    1  kubernetes
saiku-k8s               active          2  saiku-k8s      jujucharms    1  kubernetes  10.10.8.26
zookeeper-k8s           allocating    0/1  zookeeper-k8s  jujucharms   16  kubernetes              Successfully assigned saiku/zookeeper-k8s-operator-0 to juju-e7f111-k8s-9

Unit             Workload  Agent       Address     Ports     Message
s2/0*            active    idle        10.1.60.18  8080/TCP
saiku-k8s/1*     active    idle        10.1.60.17  8080/TCP
saiku-k8s/2      error     idle        10.1.18.18  8080/TCP  hook failed: "leader-settings-changed"
zookeeper-k8s/0  waiting   allocating                        agent initializing

ubuntu@bionic-test:~$ juju remove-unit "saiku-k8s/2"
ERROR application name "saiku-k8s/2" not valid
ubuntu@bionic-test:~$ juju remove-unit saiku-k8s
ERROR removing 0 units not valid
ubuntu@bionic-test:~$ juju remove-unit saiku-k8s/2
ERROR application name "saiku-k8s/2" not valid

I mean, it looks right according to the help, but it’s failing.

Removing units in a k8s model is a bit different. There is the ‘scale’ method, but you don’t get control over which unit is removed. Am I right @wallyworld?

For k8s, a Kubernetes deployment controller (or stateful set) is used to manage the scale-out of an application. On that basis, you can specify the number of units you want, but you cannot remove them individually, as Juju has no control over how the cluster ultimately manages the pods.

The Juju CLI provides operations like:

Set total number of units/pods to 3
$ juju scale-application mariadb-k8s 3

Increase the number of units/pods by 2
$ juju add-unit mariadb-k8s --num-units 2

Decrease the number of units/pods by 1
$ juju remove-unit mariadb-k8s --num-units 1

juju help remove-unit mentions the k8s-specific behaviour, but the feedback when running the command definitely should be improved, and we’ll do that.

I can see that in your case you perhaps want to delete unit 2 because it’s in error and start again. The issue is that the k8s docs are pretty clear that directly deleting a stateful set pod can be dangerous: https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/
Also, because pod identity is stable with stateful sets, even if you were to use kubectl to delete the pod corresponding to unit 2, the unit on the Juju side would be marked as “terminated”, k8s would recreate a new pod with the same name, and the same unit on the Juju side would become active again. This is needed so that the k8s cluster can restart stateful set pods (in response to nodes going down, for example) while the Juju units remain stable across such events.
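To illustrate (not recommend) that behaviour, here is roughly what it would look like from the kubectl side. The namespace saiku comes from the status output above; the pod name saiku-k8s-2 is an assumption about how the stateful set names the pod backing unit 2:

Delete the pod assumed to back unit saiku-k8s/2
$ kubectl -n saiku delete pod saiku-k8s-2

Watch the stateful set recreate a pod with the identical name; once it’s running, the same Juju unit goes back to active
$ kubectl -n saiku get pods -w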

The current answer would be to upgrade the charm to fix the logic error in the hook and allow the unit to come good again.
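As a rough sketch of that workflow, assuming a fixed revision of the charm has been published (juju resolved simply marks the error resolved and retries the failed hook):

Upgrade the application to a charm revision with the hook fixed
$ juju upgrade-charm saiku-k8s

Retry the failed hook on the unit in error
$ juju resolved saiku-k8s/2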

Thanks @wallyworld, that makes sense. I didn’t check the docs as it wasn’t freaking out about an incorrect command, so it never crossed my mind.

Tom

I think it is pretty clear that our error messages don’t really point you in the right direction. It shouldn’t tell you that a unit name is an invalid application name; it should tell you that you cannot remove specific units from Kubernetes applications. And ideally we would point you towards ‘--num-units’ if you supply an application name but no count. (We sort of do, but we don’t say ‘num-units’ in those words.)

Indeed, hence my note in the original reply that we would update the command help text :slight_smile: