"How to Deploy Applications at Scale in Kubernetes"

Spicule has posted a fantastic blog post and an associated video discussing one of Juju’s “new superpowers” - managing Kubernetes workloads:

Kubernetes is the hot new property on the block, but its tooling leaves a little to be desired. Kubectl is okay for checking the state of your cluster and deploying basic Docker containers (pods), but what happens if you want to deploy a multi-container application that can scale, distribute and understand the environment in which it runs? The current incumbent is Helm, which solves some of these problems, but not all of them. For the end user it often involves editing a bunch of YAML to configure the application correctly, and Helm charts can be a little hit and miss in terms of their stability and the version of the software they deploy.
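To make that concrete, here is roughly what that workflow looks like today; the chart repository and application names below are hypothetical placeholders:

```
# Deploying a basic container (pod) with kubectl is straightforward.
kubectl run nginx --image=nginx:1.25    # start a single pod
kubectl get pods                        # check the state of the cluster

# For a multi-component application you typically reach for Helm,
# which usually means dumping and hand-editing a values file first.
helm repo add examplerepo https://charts.example.com   # hypothetical repo
helm show values examplerepo/my-app > values.yaml      # dump the defaults
#   ...edit values.yaml: replica counts, image tags, service endpoints...
helm install my-app examplerepo/my-app -f values.yaml
```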

What about software in multiple locations? - Software at scale

Kubernetes is great, but not everything runs in Kubernetes, and some workloads are simply not designed for containerisation. So what do we do with these? We can deploy them into a public or private cloud, or onto bare metal somewhere, but in both cases they live outside the container ecosystem, and the containers will need configuring to point at these services.
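As a hypothetical illustration, an in-cluster application that depends on a database running on bare metal outside Kubernetes still has to be told where that service lives, and kept up to date by hand when it moves:

```
# Hypothetical example: point an in-cluster deployment at a database
# that lives outside Kubernetes (bare metal or a cloud VM).
kubectl set env deployment/my-app \
    DB_HOST=db01.internal.example.com \
    DB_PORT=5432

# If the external endpoint changes, the configuration has to be
# updated and rolled out again manually.
kubectl rollout restart deployment/my-app
```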

Let me introduce Juju to you. Juju is a software orchestration platform from Canonical which for a number of years has happily supported deploying software of pretty much any variety to public cloud services, OpenStack, bare metal, LXD and more. Just recently, though, it developed a new superpower: the ability to manage software in Kubernetes. This includes configuring containers, updating container configuration, creating the required services for exposing the containers, and so on.
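A minimal sketch of what that looks like in practice follows; the cluster, model, charm and option names are placeholders, not a specific recommendation:

```
# Register an existing Kubernetes cluster with Juju
# (assumes kubectl is already configured for that cluster).
juju add-k8s my-cluster
juju bootstrap my-cluster

# Create a model to hold the workload, then deploy a charm into it.
juju add-model my-apps
juju deploy my-app-k8s                 # hypothetical charm name

# Juju then manages configuration, scaling and exposure of the workload.
juju config my-app-k8s some-option=value   # hypothetical config option
juju scale-application my-app-k8s 3
juju expose my-app-k8s
```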