Where is the add-k8s command getting the cacert from?

help-needed

#1

Hello,

I will try to explain the issue I am facing as clearly as possible:

Scenario:

  • lxd juju controller bootstrapped
  • microk8s installed

Objective:

  • Add the microk8s as a cloud to the lxd controller
  • Do it with libjuju

Juju CLI command:

The juju command that does what I am trying to achieve with libjuju is the following:

davigar15@Canonical:~$ cd ~
davigar15@Canonical:~$ microk8s.config > kubeconfig
davigar15@Canonical:~$ KUBECONFIG=~/kubeconfig juju add-k8s myk8scloud --controller lxd-controller
davigar15@Canonical:~$ juju show-cloud myk8scloud
defined: local
type: k8s
description: A Kubernetes Cluster
auth-types: [userpass]
endpoint: https://192.168.0.161:16443
regions:
  localhost: {}
config:
  operator-storage: microk8s-hostpath
  workload-storage: microk8s-hostpath
ca-credentials:
- |
  -----BEGIN CERTIFICATE-----
  MIIDCTCCAfGgAwIBAgIUEZlNANCT1JLq2tBMMbkYDNu40LAwDQYJKoZIhvcNAQEL
  BQAwFDESMBAGA1UEAwwJMTI3LjAuMC4xMB4XDTE5MTAwOTExMzMwNVoXDTQ3MDIy
  NDExMzMwNVowFDESMBAGA1UEAwwJMTI3LjAuMC4xMIIBIjANBgkqhkiG9w0BAQEF
  AAOCAQ8AMIIBCgKCAQEAr6eI9tonu+6rBsvM8qLNtOScE6JUqpO/6a1DhRgDLlzd
  3gnyz2eTDLMW2IJVv5Jbvor1fiCKOCkthW5k1X878IXCG7U5p+Jy0G9nZ9RM1h5h
  TmlzecFG070enc+/xDNUexiWPnkhnN5CkLBMg8Rf/usSJsGg4CR6rXKspwOtgQBY
  JXkWOnpXn5t0k+7//2DYKw/sVfek9dXZ/KG3peoa3CIHJ9SXzNRhn1UHRlXh3R29
  bfFpM0ysXVakekwoy0V3FTnEYYfKJxM/kmCrY2RwaxIshXfhgoJJezg4XB43YGdK
  dIr7HUPR2WUcuy8daOT3B14fk942yULk6v6h7nzgRwIDAQABo1MwUTAdBgNVHQ4E
  FgQU8trhQlnF1ISWLzgaKyYzOGK9574wHwYDVR0jBBgwFoAU8trhQlnF1ISWLzga
  KyYzOGK9574wDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAFzxn
  78hmYN8L3tYmK94UB8E/pEyohGrXraXdxfXQUGDJzxlzhnnLWlR+pU7BvGoTvrFS
  og0oaBkwH1A9U5+RA8frGjXys56tUQcnUya8Uj3O4xpBsHI7ZX0kfgSJdyQqyGCs
  h4ixUsTlbqLBty5yfdXhto7qacUvgOzjJp6Y7++hlR2xMiLFAHVqhcyxpBnHfyXD
  wmdjcAcTJkOog4m94SYUqTZIC5wCPgwtZGc8u0V6upLjoCOXpUHuVubb9l3htLA1
  LR8Ixu94DLSe8PeGO3VVgbbTEhfY3LF/GJaJVYFE3FZXcyOl1foynQggrz+2Kbdb
  rhFB4v+H0q8mVCcnQQ==
  -----END CERTIFICATE-----

Extra notes:

  • That CA certificate is the same one that is generated if I run juju bootstrap microk8s

Observations:

  • That certificate is not the one inside the microk8s kubeconfig
  • That certificate is not the same as the one located in ~/.local/share/juju/controllers.yaml (lxd controller cacert)

Question:

Where does this cacert come from? How does the juju add-k8s command get it?

Thanks,
David Garcia


#2

The certificate does come from the microk8s kubeconfig file. For microk8s it’s the cluster’s certificate-authority-data value, which is base64 encoded. The k8s APIs used to parse the kubeconfig decode it automatically before Juju reads the PEM-encoded cert value. So you’ll just need to base64-decode the data.
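To illustrate, here is a minimal stdlib-only sketch of that decoding step. The PEM body below is a shortened placeholder, not a real certificate; a real kubeconfig carries the full cert under clusters[0].cluster.certificate-authority-data:

```python
import base64

# Placeholder PEM cert (a real one is much longer).
pem_cert = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIIDCTCCAfGgAwIBAgIU...\n"
    "-----END CERTIFICATE-----\n"
)

# What microk8s.config emits for certificate-authority-data:
# the PEM blob, base64 encoded.
ca_data = base64.b64encode(pem_cert.encode("ascii")).decode("ascii")

# What juju add-k8s effectively sees after decoding: the PEM cert
# that then appears under ca-credentials in show-cloud output.
decoded = base64.b64decode(ca_data).decode("ascii")
assert decoded == pem_cert
print(decoded.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```

The round trip is exact, which is why the cert shown by `juju show-cloud` will not match the kubeconfig value byte-for-byte until you decode it.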

Is there a reason for using microk8s with a LXD controller rather than just bootstrapping directly to microk8s? It’s rather unnecessary unless you also want to run non-k8s workloads and, say, use cross-model relations.


#3

I know one of the issues is driving this from libjuju. We’ve kept a firm line that we don’t support bootstrap from the other clients, due to the complexity of the code paths and the difficulty of keeping them up to date and in sync.


#4

You still need to bootstrap the LXD controller though; either way, you need to bootstrap something. So why not bootstrap directly to microk8s, even if via a script, and then use libjuju to drive things? If the workloads are k8s, why introduce the complexity of having to manage something outside of k8s, unless you want an HA controller or need to cross-model relate to a VM workload? Even though a k8s controller is not strictly HA, k8s will restart it if it goes down. But microk8s on a single node is hardly a production scenario, I would have thought, so the HA-ness is perhaps moot.


#5

+1 I’d also be interested in more info about the use case. I assume there’s already a controller around because this typically goes on top of tooling on an existing cloud of some sort, but that’s entirely an assumption on my end.


#6

To give a bit more context to the question:

Juju is used in a non-traditional way within OSM. OSM has its own resource orchestrator, so we use Juju bootstrapped to LXD; typically, we deploy so-called “proxy” charms, which operate against a remote machine via SSH. What we’d normally think of as a charm, i.e. a “machine” charm, is supported by adding the remote machine to Juju via the manual provisioner and deploying the charm to that machine.

Juju is bootstrapped during the installation process, and then driven via libjuju running in a container (Docker or k8s). Access to the CLI tools is limited.

With Kubernetes, we want to ingest a kube config of an existing K8s cluster, add it as a cloud to Juju, and deploy a bundle to it.
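For the ingest step, the information `add-k8s` pulls out of a kubeconfig boils down to the API endpoint plus the decoded CA cert, as in the `show-cloud` output above. A minimal stdlib-only sketch under stated assumptions: the kubeconfig dict below is a hypothetical stand-in for what a YAML parser would return from `microk8s.config` output, the cert body is a placeholder, and `cloud_definition` is an illustrative helper, not a libjuju API:

```python
import base64

# Hypothetical parsed kubeconfig (placeholder values).
kubeconfig = {
    "clusters": [{
        "name": "microk8s-cluster",
        "cluster": {
            "server": "https://192.168.0.161:16443",
            "certificate-authority-data": base64.b64encode(
                b"-----BEGIN CERTIFICATE-----\nMIID...\n-----END CERTIFICATE-----\n"
            ).decode("ascii"),
        },
    }],
}

def cloud_definition(cfg: dict) -> dict:
    """Build an add-k8s-style cloud definition from a parsed kubeconfig."""
    cluster = cfg["clusters"][0]["cluster"]
    return {
        "type": "k8s",
        "auth-types": ["userpass"],
        "endpoint": cluster["server"],
        # Decode the base64 certificate-authority-data back to PEM,
        # which is what show-cloud lists under ca-credentials.
        "ca-certificates": [
            base64.b64decode(cluster["certificate-authority-data"]).decode("ascii")
        ],
    }

cloud = cloud_definition(kubeconfig)
print(cloud["endpoint"])  # https://192.168.0.161:16443
```

The resulting dict could then be fed to whatever cloud-registration call your libjuju version exposes; check the libjuju docs for the exact signature on your release.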