Adding additional disks to a machine

We are trying to deploy Ceph for Kubernetes persistent storage via Juju. If nothing is specified, the ceph-osd charm seems to create OSD devices on the rootfs via loopback devices (/dev/loopX), which is not optimal from a performance perspective.

According to the ceph-osd (ceph osd | Juju) documentation, you are able to specify OSD devices via the “osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde” option. I have tried to do that via the CLI for now, which I suppose is equivalent:

juju deploy -n 3 ceph-osd --storage osd-devices="/dev/sdb,16G /dev/sdc,16G" --storage osd-journals=loop,1G,1 --constraints "mem=4G root-disk=81G"

However, these devices do not seem to be created on the OSD unit machines. Maybe the syntax is wrong?
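For reference, the charm-config route described in the documentation would look something like this (a sketch; the device paths are examples, and the devices must already exist on the unit machines):

```shell
# Point the ceph-osd charm at specific block devices via its config option
# (device paths here are illustrative)
juju config ceph-osd osd-devices='/dev/sdb /dev/sdc'
```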

In the vSphere provider's git folder I see an OVF file which, if I understand correctly, is used as a template for machine deployment. It has only one disk. Can it be customized? Or is there any other way to add storage to a machine via Juju? We chose Juju to avoid the complexity of Ansible and similar automation tools, but it seems limited in what it is able to do.

Cloud provider: VSphere
Juju version: 2.7-rc6-bionic-amd64


This is interesting for me as well. I’m using vsphere too and still have much to discover about how to use such a cloud substrate.

I have a few questions, for example how to add “centos” to vsphere and how to use storage in general with vsphere. Do you have any experience here you could share with me? Perhaps open a separate thread on vsphere and how to work with it in some use-cases?

@timClicks might also know if there is any existing condensed material on this?

Some bad news for the immediate term. --storage directives for the vSphere provider are not yet supported.

To add block devices to VMs, you can use the vSphere console directly, or one of the VMware equivalents. I know that’s not optimal. [Edit: this actually won’t help here, because Juju won’t be informed of the devices’ presence]

We have relied on vSAN support to provide high-availability for any Kubernetes clusters deployed within a datacenter. Deploying Ceph in vSphere is an interesting idea though… I’ve filed a bug.

@elvinas, @erik-lonroth you are both very welcome to add your comments there, which will increase the priority of this task.

Technically yes… but that would mean recompiling Juju with your custom patch. We use code generation to copy the contents of that file into the juju (client) and jujud (agent) binaries.

One option would be to adjust the template in vSphere itself. You will find it in the juju-vmdks folder at the root of your vSphere instance.

@babbageclunk, @wallyworld could you two please take a look at my advice here and add any supplementary info if required?

OK, I will skip the recompiling part, as it is not a feasible solution from a support perspective.

By stating that the --storage option is not supported for the vSphere provider, do you mean that these options are not passed on to Ceph at all? If they are just not reaching vSphere, but Ceph is still instructed to initialize the /dev/sdb and /dev/sdc devices, it should pick them up once they appear on a machine.

Regarding Ceph in vCenter: this kind of “poor man’s SAN” solution lets us utilize the local storage available on the ESXi hosts and avoid cloud vendor lock-in. We are using the same approach in another project on Azure, just with Ceph replaced by GlusterFS. As a result, the deployment can be ported to AWS or to on-premises bare metal without a full redesign. In the case of Azure, this choice was mostly due to the limit on the number of data disks allowed per node, as initially we thought to just use Azure disks as Persistent Volumes, which are nicely supported in Kubernetes.

We make use of “NFS”-style storage AND would benefit from being able to use disks from multiple datastores in our vsphere environment(s).

We are deploying “slurm” clusters that we could place in virtual environments (like vsphere) without needing extra auxiliary charms for providing NFS etc., which in turn would need to be maintained in the charm space.

Leveraging vsphere’s ability to provide storage primitives would get us much closer to running HPC workloads in the vsphere context far more easily than today… and there are also performance considerations with running NAS/DAS storage external to vsphere.

Hey guys,

I found a way to get this to work:

Deploy ceph:

juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd
juju add-relation ceph-osd ceph-mon

Wait for the deployment to settle; the charm will report that the number of OSD devices is too low.
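While waiting, you can poll for that blocked message with something like (sketch):

```shell
# Re-run juju status every 10 seconds until ceph-osd reports its state
watch -n 10 juju status ceph-osd
```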

Add disks to the OSD machines via PowerCLI or manually, reboot the machines, and run:

juju run-action ceph-osd/0 add-disk osd-devices='/dev/sdb /dev/sdc ...' --wait
juju run-action ceph-osd/1 add-disk osd-devices='/dev/sdb /dev/sdc ...' --wait
juju run-action ceph-osd/2 add-disk osd-devices='/dev/sdb /dev/sdc ...' --wait

Continue with the relations mentioned in the documentation.
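The disk-attachment step above can also be scripted rather than done through PowerCLI or the console. A hedged sketch using the govc CLI from the govmomi project (the vCenter URL, credentials, and VM names below are assumptions for illustration):

```shell
# Connection details for govc (hypothetical values)
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='secret'

# Attach one 16G disk to each OSD VM (VM names are hypothetical)
for vm in juju-machine-0 juju-machine-1 juju-machine-2; do
  govc vm.disk.create -vm "$vm" -name "$vm/osd-disk-1" -size 16G
done
```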



I would be grateful for a description of how to build custom images for vsphere and how to add them there, especially for centos, which we have learned how to do for MAAS and LXD. I would like to be able to reproduce this for vsphere and centos…

This isn’t a supported feature, so would require custom code development.

The template VM that Juju creates during the bootstrap phase is defined within an embedded OVF file. Tweaking that will change the base image that vSphere creates.

However, that won’t allow you to have multiple image types.

To support multiple instance types, perhaps Juju could inspect the folder in which the template VMs are kept and then select the correct template via constraints… vSphere admins could create a template in the web interface and then it would be accessible to models.

Any thoughts on if that would be practical @babbageclunk, @wallyworld ?


We have avoided custom images in general (for any cloud) because then it’s not official Ubuntu and that can lead to unexpected consequences. There is a feature flag that can be enabled to work with the public clouds that allows a cloud specific image id to be used to bootstrap, and also subsequently be deployed to host workloads. There’s no support for that in vSphere. It would entail a bit of work to spec it up and implement. At this stage, the only vSphere work we have scheduled is to add proper support for multi-tenant clusters.

That’s not entirely accurate though. Or is it? My understanding is that although we have avoided supporting custom images, we haven’t been tied to Ubuntu for a long time. It’s possible to use CentOS and Windows images on other providers.

To clarify, I was talking about Ubuntu.

The deployed image wouldn’t need changes for that. You should add the ability to attach hard disks with the vsphere provider. If the vsphere provider is able to deploy a vmdk, why shouldn’t it be possible to add volumes to it? :smiley:

Being able to deploy other OSes with juju is, to me, a key feature for adoption across a broader linux community.

After all, a lot of people might love juju and not be able to use ubuntu.

Domains where Ubuntu is not big, to my knowledge, include embedded, security, and HPC (centos/redhat), to give a few examples.

… and many of those still might have a vsphere environment around.
