Understanding the API server traffic configuration in Charmed Kubernetes

The API server is perhaps the single most central component of a Kubernetes cluster. It routes all of the internal traffic between components of the cluster, as well as management requests from kubectl clients interacting with the cluster from the outside. Given the differing natures of those two use cases, as well as the many different ways a cluster admin might want that traffic routed, it can be confusing to understand how Charmed Kubernetes manages the various addresses that the API server might need to present to the various consumers of those addresses. As part of my work to add support for the OpenStack Integrator charm to provide load balancing for the API server in Charmed Kubernetes via native OpenStack load balancers, I realized I didn’t understand how those addresses were managed and communicated quite as well as I thought I did. So this is my attempt to document my current understanding and get clarification on where it might be wrong.

Internal Components

The components of the cluster need to know how to talk to the API server, so that they can communicate with the rest of the cluster. This is done via the kube-api-server relation endpoint on the kubernetes-master and kubernetes-worker charms. On this relation, the master charm advertises the addresses it thinks the other components of the cluster should use to talk to the API server, while the worker charm listens for those addresses.

In the simplest case, where the master and worker are directly related, each master advertises its ingress address and the workers pick one to use. This is because Kubernetes only supports a single API server address, so the workers have to decide which one to use, and they all try to use the same one and stick with it as long as it remains available.
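For reference, a minimal sketch of what that direct relation looks like with the Juju CLI, using the endpoint name described above (exact endpoint names may vary between charm revisions):

```bash
# Sketch: relate the masters and workers directly, so each master advertises
# its own ingress address and the workers pick one of them to use.
juju add-relation kubernetes-master:kube-api-server kubernetes-worker:kube-api-server
```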

Alternatively, another charm can be inserted between the master and worker on this relation to provide some form of load balancing. In the default Charmed Kubernetes bundle, this is done with the kubeapi-load-balancer charm. While this does provide round-robin distribution of API server traffic, it is not an HA solution; keepalived can be combined with the kubeapi-load-balancer to add HA fail-over. Other potential options which can be inserted on this relation are haproxy (untested) and (soon) one or more of the cloud integrator charms.
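As a rough sketch, inserting the load balancer between the masters and workers looks something like the following; the endpoint names on the load balancer side are left for Juju to infer and may need to be spelled out explicitly depending on the charm revision:

```bash
# Sketch: put kubeapi-load-balancer between the masters and the workers so the
# workers see the load balancer's address instead of the masters' own addresses.
juju deploy kubeapi-load-balancer
juju add-relation kubernetes-master:kube-api-server kubeapi-load-balancer
juju add-relation kubeapi-load-balancer kubernetes-worker:kube-api-server
```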

A third option is to configure the master with either an externally managed VIP, which can then be managed with hacluster, or an externally managed load balancer via the loadbalancer_ips config option. Both of these options override what addresses the master advertises over the kube-api-server relation.
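A hedged sketch of what that looks like: loadbalancer_ips is the option named above, ha-cluster-vip is my assumption about how the VIP is set (the option name may differ by charm revision), and the addresses are made up:

```bash
# Sketch: point the masters at an externally managed load balancer
# (hypothetical address).
juju config kubernetes-master loadbalancer_ips="192.0.2.10"

# Or: assumed VIP option, typically used together with the hacluster charm.
juju config kubernetes-master ha-cluster-vip="192.0.2.20"
```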

External Clients

External clients, primarily kubectl, also need to talk to the API server. The master charm determines what address external clients should use and puts that information into the config file that it creates for kubectl to use.

As with internal components, the simplest case is for the master to put its API server address directly into the file, although here it only uses the leader’s public address. Of course, this requires clients to manually update their config if anything changes with the cluster, so it is not recommended.
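For context, this is the config file that is typically copied down from a master unit, for example (unit name and destination path are just illustrative):

```bash
# Sketch: fetch the generated kubeconfig from a master unit for use with kubectl.
juju scp kubernetes-master/0:config ~/.kube/config
```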

Instead, the master can be related via the loadbalancer relation endpoint to a charm which provides some form of load balancing service and returns the address of the load balancer to the master, which is then passed on to the clients via the config file. In the default Charmed Kubernetes bundle, this is also handled by the kubeapi-load-balancer charm, which then uses the same round-robin load balancer for both internal and external traffic. However, this could be replaced by another charm, or even another instance of kubeapi-load-balancer if the traffic needed to be separated.
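A sketch of that relation, using the loadbalancer endpoint named above (the endpoint name on the kubeapi-load-balancer side is left for Juju to infer and may differ between revisions):

```bash
# Sketch: have the master hand out the load balancer's address to external clients.
juju add-relation kubernetes-master:loadbalancer kubeapi-load-balancer
```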

Finally, if the master charm is configured to use an external VIP or externally managed load balancer for internal traffic, it will use that same configuration for external traffic as well. There is currently no way to specify a separate VIP or external load balancer for external traffic vs internal traffic.


I think we should add some detail here as to why it isn’t an HA solution. The reason is that the kubeapi-load-balancer is a single point of failure in the default bundle, and if you use multiple ones, the workers will still just pick one of them and talk to it forever. This is the root of the issue.

We test and use HACluster and I don’t believe we currently test keepalived. Maybe we should mention that first here instead? Also, keepalived uses a virtual IP as well, so this section might need some changes.