Minimizing CaaS operator pod volume usage


Copied over from the original issue, since this is a better venue for it.

The background / motivating use case is this: The Kubeflow bundle comprises approximately 30 charms, with more slated for inclusion. Right now, Juju creates a PV/PVC per operator pod, and one operator pod per charm. This collides with the fact that many providers, such as AWS, limit how many volumes can be mounted on a single node instance. Concretely, when I spin up a Charmed Kubernetes stack on AWS and deploy Kubeflow to it, any one node can only have 26 volumes mounted on it.
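(For anyone reproducing this: the limit the scheduler sees is reported on the node object itself. An excerpt might look like the following; the values are illustrative, assuming the in-tree AWS EBS setup.)

```yaml
# Excerpt of a node's status as shown by `kubectl get node <name> -o yaml`
# (illustrative values). The scheduler counts attached EBS volumes against
# the attachable-volumes-aws-ebs allocatable figure.
status:
  allocatable:
    attachable-volumes-aws-ebs: "25"
    cpu: "4"
    memory: 16116152Ki
```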

The end result is that I’m unable to scale up with Charmed Kubernetes / Juju; my only option is to scale out, since all of the charms can’t fit onto a single node instance. This isn’t the worst thing in the world, but in a microservices world it can translate to a lot of wasted capacity if you can’t place enough microservices onto a node to maximize utilization.

There are two solutions that seem like they would solve this issue for me. Neither one seems particularly easy, hence this being a feature request:

  1. The ability to run operator pods in a stateless manner, with no volumes.

    This would probably require some work to make the operator code idempotent, which won’t be easy, but it would bring some pretty great fault-tolerance benefits.

  2. The ability to coalesce multiple operators into fewer stateful pods.

    This would probably be easier than the first option, but may run into complexities around logging, multi-threading, etc.


Hey @knkski

I’ve had a quick look into how Kubernetes handles volume limits.

It looks like this may be an issue with CDK not adding the instance-type node label for AWS (I haven’t confirmed this). For in-tree provisioners, getMaxVolumeFunc shows that the scheduler keys off that label to pick the attach limit it uses for EBS volumes (39 by default, 25 for Nitro-based instance types).
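To make that concrete, this is the kind of node label the limit logic reads (a sketch; the label value is illustrative). If it’s absent, the scheduler can’t tell Nitro instances apart from older generations and falls back to the generic default:

```yaml
# Node metadata excerpt (illustrative). The in-tree volume limit logic
# reads beta.kubernetes.io/instance-type to decide which EBS attach
# limit applies; without it, the generic default is used.
metadata:
  labels:
    beta.kubernetes.io/instance-type: m5.large
```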

The Kubernetes scheduler should take volume limits into account when selecting a node for a pod, but in this case it’s just selecting the wrong volume limit due to misconfiguration.

A possible temporary solution is to use a custom StorageClass with the annotation “juju.io/operator-storage” set to “true” and choose an alternative storage mechanism.
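Roughly what I mean, as a sketch: the provisioner below is a placeholder, not a real provisioner name; substitute whatever alternative storage mechanism sidesteps the EBS attach limits (NFS, local volumes, etc.):

```yaml
# Sketch of a custom StorageClass for Juju operator storage.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: juju-operator-storage
  annotations:
    # Juju looks for this annotation when choosing operator storage.
    juju.io/operator-storage: "true"
# Placeholder provisioner -- swap in e.g. an NFS provisioner so operator
# PVCs don't count against the node's EBS attach limit.
provisioner: example.com/nfs
```

If I’m reading the provider code right, Juju should then prefer this class for operator PVCs over the cluster default.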