Assorted questions

  1. Kubernetes has native support for certain volume types (e.g. awsElasticBlockStore, cephfs, nfs, etc.) but does Juju itself need to have internal support for them in order to use them? For instance, does Juju allow for storage type ‘azureDisk’ if I’m using the Azure cloud?

  2. If internal storage support is needed, which types does Juju currently support other than ‘awsElasticBlockStore’ and ‘hostPath’?

  3. What is the mapping of ‘storage-provisioner’ values to those Kubernetes volume types when creating a Juju storage pool? For example, this is possible for an AWS cloud (type ‘awsElasticBlockStore’):

    storage-provisioner=kubernetes.io/aws-ebs

    But how does one know to use ‘aws-ebs’?

  4. Whenever a k8s charm is used with AWS, is the aws-integrator charm always required when using dynamically provisioned persistent volumes (PVs)? Same question for GCE and the gcp-integrator charm. For example, can I use GCE as my backing cloud and use type gcePersistentDisk to create a storage pool? If so, like in #3, what is the bit I use for ‘storage-provisioner’?

  5. My understanding was that operator storage and unit/charm storage required a storage class name of ‘juju-operator-storage’ and ‘juju-unit-storage’ respectively. However, I see in this post that charm storage uses a class of ‘juju-ebs’. Is ‘juju-unit-storage’ only needed for static PVs? How does one know to use ‘juju-ebs’?



  1. Kubernetes has native support for certain [volume types][k8s-volume-types] (e.g. awsElasticBlockStore, cephfs, nfs, etc.) but does Juju itself need to have internal support for them in order to use them? For instance, does Juju allow for storage type ‘azureDisk’ if I’m using the Azure cloud?

Juju k8s storage is documented here.
In a nutshell, the storage provider argument to create-storage-pool is always “kubernetes”. Which backend Kubernetes uses is determined by the storage-provisioner attribute with which the pool is created; for Azure managed disks, for example, storage-provisioner would be set to kubernetes.io/azure-disk, and likewise for whatever backend is required.

  2. If internal storage support is needed, which types does Juju currently support other than ‘awsElasticBlockStore’ and ‘hostPath’?

What do you mean by “internal” storage support? The above values are possible choices for the storage-provisioner attribute, each representing a k8s storage backend. hostPath is a supported storage backend on microk8s but not anywhere else; the storage backends supported are very cloud-specific.

  3. What is the mapping of ‘storage-provisioner’ values to those Kubernetes volume types when creating a Juju storage pool? For example, this is possible for an AWS cloud (type ‘awsElasticBlockStore’):

    storage-provisioner=kubernetes.io/aws-ebs

    But how does one know to use ‘aws-ebs’?

You need to know the capabilities of the cloud on which the k8s cluster is running. kubernetes.io/aws-ebs is a documented storage provisioner option when running the cluster on AWS.
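As a sketch for an AWS-backed cluster (the pool name ‘ebs-pool’ and the gp2 volume type are illustrative):

```shell
# 'ebs-pool' is an arbitrary pool name; parameters.* key/values are
# passed through to the storage class backed by the in-tree AWS EBS
# provisioner.
juju create-storage-pool ebs-pool kubernetes \
    storage-provisioner=kubernetes.io/aws-ebs \
    parameters.type=gp2
```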

  4. Whenever a k8s charm is used with AWS, is the aws-integrator charm always required when using dynamically provisioned persistent volumes (PVs)? Same question for GCE and the gcp-integrator charm. For example, can I use GCE as my backing cloud and use type gcePersistentDisk to create a storage pool? If so, like in #3, what is the bit I use for ‘storage-provisioner’?

Yes, the <cloud>-integrator charm is always required as it gives the k8s nodes the necessary permissions to create and attach storage.
For GCE, you use kubernetes.io/gce-pd. Possibilities for other clouds are in the k8s storage doc.
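For example (the pool name ‘gce-pool’ and the pd-standard disk type are illustrative assumptions):

```shell
# kubernetes.io/gce-pd is the in-tree GCE persistent disk provisioner;
# pd-standard is one of its disk types.
juju create-storage-pool gce-pool kubernetes \
    storage-provisioner=kubernetes.io/gce-pd \
    parameters.type=pd-standard
```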

  5. My understanding was that operator storage and unit/charm storage required a storage class name of ‘juju-operator-storage’ and ‘juju-unit-storage’ respectively. However, I see in [this post][post-155] that charm storage uses a class of ‘juju-ebs’. Is ‘juju-unit-storage’ only needed for static PVs? How does one know to use ‘juju-ebs’?

Juju requires that a storage pool called operator-storage be defined, which it will use to provision storage for any application operator. How this storage pool is configured is totally up to you. If the pool is configured with a storage-class attribute, that k8s storage class is the one that will be used to create the storage. The k8s storage class may be created ahead of time, in which case specifying storage-class in the storage pool will use that pre-existing class. If the class doesn’t exist, Juju will create it using the attributes specified in the storage pool. So the choices are:

  1. the devops engineer decides they want Juju to use storage classes that have been set up ahead of time - the storage pool just has a storage-class value and nothing else
  2. the devops engineer wants to control what the storage class is called and how it is configured using a storage pool - the storage pool has a storage-class value and the necessary storage class attributes
  3. the devops engineer wants to control how the storage class is configured and is happy for Juju to name it - the storage pool has the necessary storage class attributes

The most common use case in the various examples for CDK is option 3. For microk8s, option 1 is used, as microk8s is configured out of the box with a storage class called microk8s-hostpath.
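The three choices above might be sketched as follows (the AWS provisioner and gp2 type are illustrative; the pool is named operator-storage since that is the pool Juju looks for):

```shell
# Option 1: point at a pre-existing storage class; nothing else.
juju create-storage-pool operator-storage kubernetes \
    storage-class=microk8s-hostpath

# Option 2: name the storage class and control how Juju creates it.
juju create-storage-pool operator-storage kubernetes \
    storage-class=juju-ebs \
    storage-provisioner=kubernetes.io/aws-ebs \
    parameters.type=gp2

# Option 3: configure the class but let Juju choose its name.
juju create-storage-pool operator-storage kubernetes \
    storage-provisioner=kubernetes.io/aws-ebs \
    parameters.type=gp2
```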

With reference to the juju-ebs question, and in light of the explanation above - that’s just a name. It could have been “mary”.


Thanks for all this good information.

Except for static PVs, right? In that case you need to give the storage class a name and then reference it when you define the PVs manually?

Yes, that’s right. For static PVs, the PVs themselves have a storageClassName attribute which needs to match what’s defined in the storage pool, albeit with the model name as the prefix.
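As a config sketch of such a static PV (the ‘mymodel-’ prefix, PV name, and hostPath path are assumptions for illustration):

```yaml
# Illustrative static PV. storageClassName must match the class the
# Juju storage pool references, with the model name as the prefix.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: operator-pv-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: mymodel-juju-unit-storage
  hostPath:
    path: /mnt/data/operator-pv-1
```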