Adding Storage to Charmed Kubernetes - Getting an Error

I am following the instructions here:

I get the error:

No available machine matches constraints: [('agent_name', ['0fddafcc-4ad5-4a45-869b-ff4dd03f30b3']), ('storage', ['root:0,36:32,37:32,38:8']), ('zone', ['default'])] (resolved to "storage=root:0,36:32,37:32,38:8 zone=default")

I know I need to create a storage pool, per these instructions:

I still don’t know exactly how.

I tried
juju create-storage-pool default maas
and deleted that when it didn’t work.
I also tried the following unsuccessfully:
juju create-storage-pool default rootfs

No dice.

Any tips?

Can you clarify what you’re trying to do?

Do you want to set up your k8s cluster so that it can provision storage as needed for k8s workloads deployed by Juju to the cluster? If so, you don’t need to create any Juju storage pools to use k8s-provisioned storage with Juju. You can if you need advanced options, but in most cases all you need is simply a default Storage Class in your cluster. When you run juju add-k8s, that default storage class is used to set the workload-storage model config attribute, which is then used as needed.
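For example, something like this (a sketch — the cluster name and storage class name here are made up):

# check whether any class is annotated as the cluster default
kubectl get sc

# if one is, add-k8s picks it up; otherwise name a class explicitly
juju add-k8s mycluster --storage=mystorageclass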

Or are you asking about how to set up the Storage Class itself with a suitable provisioner on MAAS?

I am trying to do everything I usually do with manifests and Helm charts, using Juju as much as possible.

To clarify: I want to set up the Storage Class in Kubernetes, and I have challenged myself to do it with Juju alone. As far as I can see the Charmed Kubernetes documentation shows how to do that; I just haven’t been able to get it to work yet.

By the time I am done running something like
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --storage osd-devices=32G,2 --storage osd-journals=8G,1
juju add-relation ceph-osd ceph-mon
juju add-relation ceph-mon:admin kubernetes-master
juju add-relation ceph-mon:client kubernetes-master
per the docs it should all be done.
kubectl get sc will show storage classes (maybe with no default, but I can set that with kubectl if I have to…)

Do you want to set up your k8s cluster so that it can provision storage as needed for k8s workloads deployed by Juju to the cluster?

Yes. I am ok with the simplest version of this to start.

I would also like to have dynamic provisioning of Persistent Volumes set up later. I have used GlusterFS and Ceph for this in the past: when I make a PersistentVolumeClaim, it generates a new suitable PersistentVolume with a UUID.
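For the record, the flow I mean looks roughly like this (a sketch — it assumes a default storage class already exists, and the claim name is made up):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# a dynamically provisioned pvc-<uuid> volume should appear
kubectl get pv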

Or are you asking about how to set up the Storage Class itself with a suitable provisioner on MAAS?

Yes I want this.

I may also need it for juju add-k8s to succeed here:

dw@maas-effect:~$ juju add-k8s
ERROR missing k8s name.
dw@maas-effect:~$ juju add-k8s charmed
...
ERROR 	Juju needs to know what storage class to use to provision workload and operator storage.
	Run add-k8s again, using --storage=<name> to specify the storage class to use.
dw@maas-effect:~$ juju add-k8s charmed --storage=nostorageclassyet
...
ERROR storage class "nostorageclassyet" not found

Thanks for the clarifications!

I’m surprised Juju doesn’t automatically use the k8s cluster default SC. It will on Juju 2.7.x, provided the SC is correctly annotated as the default. Can you check those things?
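Something like this should confirm both (the storage class name is a placeholder):

juju version
kubectl get sc
# the default SC should have this annotation set to "true"
kubectl get sc mystorageclass -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'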

As for the CK setup etc., that’s best answered by @tvansteenburgh and his team, as I’m not across that level of detail.

After deploying the Charmed Kubernetes bundle there is no default Storage Class:
dw@maas-effect:~$ kubectl get sc
No resources found in default namespace.

I see from the docs that I can add those with Juju by running more charms.

Still stuck trying to add a storage pool to Juju or just solve the previous error:

No available machine matches constraints: [('agent_name', ['0fddafcc-4ad5-4a45-869b-ff4dd03f30b3']), ('storage', ['root:0,36:32,37:32,38:8']), ('zone', ['default'])] (resolved to "storage=root:0,36:32,37:32,38:8 zone=default")

Insight?

I’m stuck making the machines for the ceph-osd Juju units.
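One thing I still mean to try, if I understand the storage directive syntax correctly, is naming an explicit pool instead of relying on the provider default — roughly like this (the pool name is made up):

juju create-storage-pool osd-pool maas
juju deploy -n 3 ceph-osd --storage osd-devices=osd-pool,2,32G --storage osd-journals=osd-pool,1,8G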

It seems like I might just skip the official Charmed Kubernetes docs for adding storage and add Rook tomorrow if I can’t get this working soon.

Thanks!

I ended up using Rook. A note for anyone who does that: you will need to set allow-privileged to true on the Kubernetes masters to enable the regular setup of the Rook operator.
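On Charmed Kubernetes that is a single config change (this uses the kubernetes-master charm option; adjust the application name if yours differs):

juju config kubernetes-master allow-privileged=true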

I set things up this way, after cloning the Rook project linked from rook.io:

cd cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml

## verify the rook-ceph-operator is in the `Running` state before proceeding
kubectl -n rook-ceph get pod -w

kubectl create -f ./csi/rbd/storageclass.yaml

kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Result

dw@maas-effect:~/src/rook$ kubectl get sc
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   2m43s
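One caveat: the manifest layout moves around between Rook releases. On newer versions the CRDs are split out of common.yaml into their own file, and the cluster itself also needs creating — going from the upstream quickstart, the sequence is roughly this (treat the exact file names as version-dependent):

kubectl create -f crds.yaml
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster.yaml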

I’m now going to try the Gitlab k8s charm. Thanks for your time and attention. I’ll report on any success or experience.

I’m a little late to this party, but I ran into the exact same problem you described above with the Ceph deployment.

I tried your Rook solution but can’t quite get there. When I created the storage class I got this error:

unable to recognize "./csi/rbd/storageclass.yaml": no matches for kind "CephBlockPool" in version "ceph.rook.io/v1"

But it created the storage class anyway, and it seemed to accept the patch command. The rook-ceph-operator pod makes it to the Running state for a few seconds, then crashes with:

failed to run the controller-runtime manager: no matches for kind "CephObjectStore" in version "ceph.rook.io/v1"

Just wondering if you have any suggestions.

I’m also hitting this issue. Running the Canonical Kubernetes charm and the openstack-integrator does not create a default storage class; "kubectl get sc" returns "No resources found", so I can’t run "juju add-k8s". Any suggestions? This is Juju version 2.8.10.

So long as a storage class exists, Juju will use it. If it is flagged as the cluster default storage class, Juju will prefer it over any others. I thought that the openstack integrator did make a suitable storage class backed by cinder but maybe that’s not true. So you just need to kubectl create a storage class backed by whatever provider you want to use and then you should be able to add-k8s.
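For example, something along these lines (a sketch — whether the in-tree cinder provisioner or the CSI one applies depends on your cluster, so treat the provisioner name as an assumption):

cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/cinder
EOF

juju add-k8s mycluster --storage=cinder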