Modifying a charmed-kubernetes deployment to have access to an LXD storage pool

Hi Folks,

Still early days for me with Juju and conjure-up, so bear with me.

So far I have an LXD host properly configured, I’ve deployed a Kubernetes cluster with conjure-up, and it’s working. I can schedule workloads that don’t require storage. Cool. But now I need persistent storage, and after reviewing the cluster deployed by conjure-up, I see that there are no storage classes out of the box.

Adding storage has been problematic: nothing I have tried works. Each command I attempt from the docs seems to be unaware of the model I’ve deployed and fails. Working with models that are already deployed seems to be a major hole in the docs, by the way; they all cover creating models from scratch, but not how to select and modify deployed ones.
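
(For what it’s worth, the commands I eventually needed to list and select an already-deployed model were roughly the ones below; I work this out properly further down.)

# List the models known to the current controller, then switch to one of them.
juju models
juju switch conjure-up-localhost-091:admin/conjure-charmed-kubernet-e31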

Currently, my Juju client sees this model:

routhinator@andromeda:~$ juju show-model
kubernetes:
  name: admin/kubernetes
  short-name: kubernetes
  model-uuid: 366a3fec-0af6-4e57-8da5-5d11bf1fc49f
  model-type: iaas
  controller-uuid: f59cb3fd-74b8-4562-812c-61ef298537b6
  controller-name: conjure-up-localhost-091
  is-controller: false
  owner: admin
  cloud: localhost
  region: localhost
  type: lxd
  life: alive
  status:
    current: available
    since: 23 hours ago
  users:
    admin:
      display-name: admin
      access: admin
      last-connection: 22 seconds ago
  sla: unsupported
  agent-version: 2.6.5
  credential:
    name: localhost
    owner: admin
    cloud: localhost

It also sees my controller:

routhinator@andromeda:~$ juju show-controller
conjure-up-localhost-091:
  details:
    uuid: f59cb3fd-74b8-4562-812c-61ef298537b6
    controller-uuid: f59cb3fd-74b8-4562-812c-61ef298537b6
    api-endpoints: ['192.168.52.196:17070']
    cloud: localhost
    region: localhost
    agent-version: 2.6.5
    mongo-version: 3.6.3
    ca-fingerprint: FD:8E:24:D6:D7:6C:10:CC:E8:58:3E:A4:38:1F:A1:29:D3:9E:01:66:64:41:34:CD:9A:32:AD:1A:0E:22:ED:24
    ca-cert: |
      -----BEGIN CERTIFICATE-----
      MIIDrTCCApWgAwIBAgIVAK6xP44ko0kgszYRvfq5ifd7+c/+MA0GCSqGSIb3DQEB
      CwUAMG4xDTALBgNVBAoTBGp1anUxLjAsBgNVBAMMJWp1anUtZ2VuZXJhdGVkIENB
      IGZvciBtb2RlbCAianVqdS1jYSIxLTArBgNVBAUTJDU5N2M4ZDI0LTQzZmQtNGEy
      Yy04NTNiLWJiOWM1YjMwMmQ1MzAeFw0xOTA5MjIyMDU5MTVaFw0yOTA5MjkyMDU5
      MTRaMG4xDTALBgNVBAoTBGp1anUxLjAsBgNVBAMMJWp1anUtZ2VuZXJhdGVkIENB
      IGZvciBtb2RlbCAianVqdS1jYSIxLTArBgNVBAUTJDU5N2M4ZDI0LTQzZmQtNGEy
      Yy04NTNiLWJiOWM1YjMwMmQ1MzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
      ggEBAM0dlHZPpr6frgf+udcg4ULX00tLIJbJsb/ZYHoRJrA9oL7gcEiRZKcOySDa
      4aQ9yVeFGfYOMAW/zKyVyq30J9ELF+g8LZX6VIq+dd7ci2d9f71k/W7ekDoGPacr
      HXsc8pUFagcFbP/J1cpUsn8iqMqdkofndmcMCCAeSSiHCCC+/cPW5uJ/9tX+errV
      MuyGQ6eNUn1Q+Gk2kHM/MXlaME6R4QwqkEEEYEq8UIXKKJfrLDdJ13GPVWzUpuZY
      vhYTmzUjwV74ZLyR4EhCuzC7zLpf6bAE2Tg2/iJfXvZ0ij/yX93uFbeL9shaReYz
      TTXE2Kv6fdZKBVh3TmH5n7OA56UCAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgKkMA8G
      A1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFFmmyXOz16Uy4b8Kz3sFyVSThoJOMA0G
      CSqGSIb3DQEBCwUAA4IBAQAt0zTEH8QCETI27rBhiyc+93wMmMprVzj7Nd3s3fPs
      sDFRe2uB5+72tURsVzEXtM4SVhB7LvOa0Rh+nNY4BzpoMDFsMmYfQb/zuemSvlF5
      KuYqjMYnMMJBOKovXBGiws3ZKgxnVcFBs70Tusm1UuRvaG1LeQOvuvEf/s1OaiWY
      +mmkFozzoUVbZPs/KdDlCM8eDtjW4+rOptbKnt6Hdl1Xh9koApYKd4PuHLZp+KT5
      Y2LsgBU9n2yRdiJRkNFgADNHyCkV5J5EUICqhKSRDV8dI75MbyidbXwjwNkIAlRT
      0v9ZBAkHi91n8vLeNTQ5NFVWgAZT1d0gr8/pEzed0IMs
      -----END CERTIFICATE-----
  controller-machines:
    "0":
      instance-id: juju-337606-0
  models:
    conjure-charmed-kubernet-e31:
      uuid: 08548267-242a-41ad-8d34-1248894a6e82
      model-uuid: 08548267-242a-41ad-8d34-1248894a6e82
      machine-count: 5
      core-count: 6
    controller:
      uuid: b2499f8d-ecc4-4332-83cd-aeadb1337606
      model-uuid: b2499f8d-ecc4-4332-83cd-aeadb1337606
      machine-count: 1
    kubernetes:
      uuid: 366a3fec-0af6-4e57-8da5-5d11bf1fc49f
      model-uuid: 366a3fec-0af6-4e57-8da5-5d11bf1fc49f
  current-model: admin/kubernetes
  account:
    user: admin
    access: superuser

routhinator@andromeda:~$ juju list-controllers
Use --refresh option with this command to see the latest information.

Controller                 Model       User   Access     Cloud/Region         Models  Nodes    HA  Version
conjure-up-localhost-091*  kubernetes  admin  superuser  localhost/localhost       3      6  none  2.6.5 

However, even though it sees the Kubernetes model that’s been deployed, whenever I attempt to add storage to my cluster I get an error indicating the Juju controller doesn’t recognize this as a Kubernetes model:

routhinator@andromeda:~$ juju create-storage-pool operator-storage kubernetes \
>     storage-class=microk8s-hostpath
ERROR storage provider "kubernetes" not found

I’m not sure exactly how to fix this.

Ultimately this is a single-node LXD host meant for a staging/review cluster, so I just want to pass in an LXD storage pool that all the Kubernetes nodes can access and share. I’m testing the hostpath provisioner for now, but would ultimately like to add either LXD dir storage or something lightweight like ZFS. Ceph is complete overkill resource-wise and not really useful as a result.

OK, I’ve finally worked out model manipulation a bit more and realized I had the wrong model selected. I’ve switched to the model that represents my deployment; however, Juju still doesn’t recognize it as a Kubernetes model:

routhinator@andromeda:~$ juju switch :conjure-charmed-kubernet-e31
conjure-up-localhost-091:admin/kubernetes -> conjure-up-localhost-091:admin/conjure-charmed-kubernet-e31
routhinator@andromeda:~$ juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
ERROR storage provider "kubernetes" not found
routhinator@andromeda:~$ juju show-model
conjure-charmed-kubernet-e31:
  name: admin/conjure-charmed-kubernet-e31
  short-name: conjure-charmed-kubernet-e31
  model-uuid: 08548267-242a-41ad-8d34-1248894a6e82
  model-type: iaas
  controller-uuid: f59cb3fd-74b8-4562-812c-61ef298537b6
  controller-name: conjure-up-localhost-091
  is-controller: false
  owner: admin
  cloud: localhost
  region: localhost
  type: lxd
  life: alive
  status:
    current: available
    since: "2019-09-29"
  users:
    admin:
      display-name: admin
      access: admin
      last-connection: 10 seconds ago
  machines:
    "0":
      cores: 0
    "1":
      cores: 0
    "2":
      cores: 0
    "3":
      cores: 2
    "4":
      cores: 4
  sla: unsupported
  agent-version: 2.6.5
  credential:
    name: localhost
    owner: admin
    cloud: localhost

No idea if I’m on the right track here, but I managed to create storage pools by changing the provider to lxd instead of kubernetes as shown in the docs.

I just have no idea how to attach the resulting pools to Kube and add a storage class so they can be used… help?

routhinator@andromeda:~$ juju create-storage-pool workload-storage lxd driver=dir storage-class=microk8s-hostpath lxd-pool=juju-workload-storage
routhinator@andromeda:~$ lxc storage list
+-----------------------+-------------+--------+---------+---------+
|         NAME          | DESCRIPTION | DRIVER |  STATE  | USED BY |
+-----------------------+-------------+--------+---------+---------+
| juju                  |             | dir    | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| juju-btrfs            |             | btrfs  | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| juju-workload-storage |             | dir    | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| juju-zfs              |             | zfs    | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| local                 |             | dir    | CREATED | 7       |
+-----------------------+-------------+--------+---------+---------+
routhinator@andromeda:~$ juju create-storage-pool operator-storage lxd driver=dir storage-class=microk8s-hostpath lxd-pool=juju-operator-storage
routhinator@andromeda:~$ lxc storage list
+-----------------------+-------------+--------+---------+---------+
|         NAME          | DESCRIPTION | DRIVER |  STATE  | USED BY |
+-----------------------+-------------+--------+---------+---------+
| juju                  |             | dir    | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| juju-btrfs            |             | btrfs  | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| juju-operator-storage |             | dir    | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| juju-workload-storage |             | dir    | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| juju-zfs              |             | zfs    | CREATED | 0       |
+-----------------------+-------------+--------+---------+---------+
| local                 |             | dir    | CREATED | 7       |
+-----------------------+-------------+--------+---------+---------+
routhinator@andromeda:~$ juju storage-pools
Name              Provider  Attrs
loop              loop      
lxd               lxd       
lxd-btrfs         lxd       driver=btrfs lxd-pool=juju-btrfs
lxd-zfs           lxd       driver=zfs lxd-pool=juju-zfs zfs.pool_name=juju-lxd
operator-storage  lxd       driver=dir lxd-pool=juju-operator-storage storage-class=microk8s-hostpath
rootfs            rootfs    
tmpfs             tmpfs     
workload-storage  lxd       driver=dir lxd-pool=juju-workload-storage storage-class=microk8s-hostpath

Storage on Kubernetes can feel like a bit of a dark art. Juju has a pretty good story here, but it can take a little time to build a firm mental model of what Juju is doing under the hood.

I seem to recall that there are some known issues with deploying Kubernetes onto LXD.

Here are some links that might provide a little guidance until someone with more expertise (pinging @hpidcock @wallyworld) can find time to help you out.

To summarise my understanding of the issue: you want to deploy CDK on top of LXD and make LXD storage pools available to the k8s cluster, so that a k8s storage class can provision volumes from those LXD storage pools. This is a question I’m hoping some of the CDK folks like @tvansteenburgh or @knobby can help with.

The confusion appears to be in trying to set this up by creating a Juju storage pool.

Creating a storage pool of type “kubernetes” is only relevant for Juju models created on top of a k8s cluster, i.e. models used for deploying k8s-specific charms. “kubernetes” storage pools are not relevant when the model happens to be the deployment of CDK itself. The CDK deployment is simply a non-k8s Juju model deployed on a cloud, which in this case happens to be LXD. Creating a storage pool for a non-k8s model requires using storage relevant to the cloud on which the model is deployed: for AWS that would be an EBS volume; for LXD there are built-in pools (zfs, dir, etc.) which don’t need to be created, since Juju does that automatically.
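
To make that concrete, in an iaas model on LXD you would consume those built-in pools with the --storage flag at deploy time. This is just an illustrative sketch; the charm and storage names here are placeholders, not anything from your deployment:

# Built-in LXD pools (lxd, lxd-zfs, lxd-btrfs) already exist, as your
# `juju storage-pools` output shows. A charm that declares storage would
# use one of them like this (charm and storage-label names are placeholders):
juju deploy some-charm --storage data=lxd-zfs,10G

# Inspect what Juju provisioned from the LXD pool.
juju storage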

Setting up operator or workload storage is only relevant for k8s models. It seems this was being done for the model used to deploy CDK itself, which is a so-called “iaas” (non-k8s) model. To get k8s workloads deployed, what you want to do, after deploying CDK, is use the add-k8s command to register that k8s cluster with the controller as a named cloud. Then you can create a k8s model on that cluster (cloud), and for that k8s model you can set the operator-storage or workload-storage model config attributes to tell Juju how to provision storage for the model. You’d point these attributes at a named k8s storage class that has been set up to provision storage from the underlying LXD cloud (the topic of the first paragraph). In fact, adding the k8s cluster with add-k8s requires you to use the --storage option to tell Juju at that point which storage class to use. You can change that later via the operator or workload storage model config, but it needs to be set up in a workable state initially.
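
A rough sketch of that flow, assuming CDK is already up and that the storage class passed to --storage already exists in the cluster (cloud and model names here are illustrative):

# Copy the kubeconfig from the CDK master so the Juju client can reach the cluster
# (the usual CDK step; the unit name and path may differ in your deployment).
juju scp kubernetes-master/0:config ~/.kube/config

# Register the CDK cluster with the controller as a named cloud, telling Juju
# which existing k8s storage class to use for operator/workload storage.
juju add-k8s my-cdk --storage=<existing-storage-class>

# Create a k8s model on that cloud; k8s-specific charms get deployed here.
juju add-model my-k8s-model my-cdk

# If needed, point the model at a different storage class later.
juju model-config operator-storage=<storage-class> workload-storage=<storage-class>

Depending on your Juju version, add-k8s may need extra arguments to identify the underlying cloud, so check `juju help add-k8s`.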

Aha, that connects the dots for me!

I did manage to get a workable solution by adding an NFS charm to the deployment and relating it to the cluster, which added the NFS provisioner.
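
For anyone finding this later, the NFS route was roughly the following (charm name as published at the time; double-check the relation endpoint against the current charm docs):

# Deploy the nfs charm alongside CDK and relate it to the workers;
# this is what added the NFS provisioner and storage class for me.
juju deploy nfs
juju add-relation nfs kubernetes-worker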

I will add the k8s cluster as you suggested and look at what else I can add. I have a NAS with ZFS-backed iSCSI, so it would be great to use that.

I was looking at presenting the LXD dir storage pool to the k8s workers and somehow getting the microk8s.hostpath provisioner to use the mounted directory, but the NFS approach probably makes more sense as a short-term solution.

Thanks for your replies and for clarifying my misunderstanding!

These are the pages I was originally looking at, and what led to my confusion. Ceph would be nice, but I don’t have enough physical nodes or resources to deploy it on my home dev cluster due to its minimum three-node requirement, and I somehow originally missed the NFS suggestion in the docs.

Really glad to hear that you’re moving forward!