The certificate does come from the microk8s kubeconfig file: for microk8s it’s the cluster `certificate-authority-data` value, which is base64 encoded. The Kubernetes APIs used to parse the kubeconfig automatically decode it before Juju reads the PEM-encoded cert value, so you’ll just need to base64-decode the data yourself.
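If you’re driving this from Python anyway, the decode is a one-liner. A minimal sketch (the certificate value below is a made-up placeholder, not a real cert):

```python
import base64

# Hypothetical certificate-authority-data value as it would appear in the
# microk8s kubeconfig: base64 of a PEM certificate (contents are dummy data).
ca_data_b64 = base64.b64encode(
    b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
).decode()

# Juju wants the PEM-encoded certificate, so decode before handing it over.
ca_pem = base64.b64decode(ca_data_b64).decode()
print(ca_pem.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```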
Is there a reason for using microk8s with a LXD controller rather than just bootstrapping directly to microk8s? It’s rather unnecessary unless you also want to run non-k8s workloads and, say, cross-model relate.
I know one of the issues is driving this from libjuju, and we’ve kept a firm line that we don’t support bootstrap in the other clients due to the complex code paths and the difficulty of keeping them up to date and in sync.
You still need to bootstrap the LXD controller though; either way, you need to bootstrap something. So why not bootstrap directly to microk8s, even if via a script, and then use libjuju to drive things? If the workloads are k8s, why introduce the complexity of having to manage something outside of k8s, unless you want an HA controller or need to cross-model relate to a VM workload? Even though a k8s controller is not strictly HA, k8s will restart it if it goes down. And microk8s on a single node is hardly a production scenario, I would have thought, so the HA-ness is perhaps moot.
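For reference, the “bootstrap via a script” path is a single command, since Juju ships microk8s as a built-in cloud when the snap is installed (the controller name here is just an example):

```shell
# Bootstrap a Juju controller straight into the local microk8s cluster.
# "my-k8s-controller" is an illustrative name; pick whatever suits.
juju bootstrap microk8s my-k8s-controller
```

From there libjuju can connect to that controller the same way it would to a LXD one.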
+1 I’d also be interested in more info about the use case. I assume there’s already a controller around because this is typically going on top of tools on an existing cloud of some sort, but that’s totally an assumption on my end.
Juju is used in a non-traditional way within OSM. OSM has its own resource orchestrator, so we use Juju, bootstrapped to LXD; typically, we deploy so-called “proxy” charms which operate against a remote machine via SSH. What we’d normally think of as a charm, aka a “machine” charm, is supported by adding the remote machine to Juju via the manual provisioner and deploying the charm to that machine.
Juju is bootstrapped during the installation process, and then driven via libjuju running in a container (docker or k8s). Access to the CLI tools is limited.
With Kubernetes, we want to ingest a kube config of an existing K8s cluster, add it as a cloud to Juju, and deploy a bundle to it.
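That ingest-and-deploy flow maps onto the existing `juju add-k8s` command. A hedged sketch, assuming CLI access (the cloud, model, and bundle names are illustrative):

```shell
# Point Juju at the existing cluster's kubeconfig (path is illustrative).
export KUBECONFIG="$HOME/.kube/config"

# Register the cluster as a cloud on the running controller,
# then create a model on it and deploy the bundle.
juju add-k8s my-k8s --controller my-controller
juju add-model my-model my-k8s
juju deploy ./bundle.yaml
```

If CLI access really is off the table, the same steps would need to go through the controller API via libjuju instead, which is presumably where the certificate-decoding question comes in.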