We are trying to deploy Ceph for Kubernetes persistent storage via Juju. If nothing is specified, the ceph-osd charm seems to create OSD devices on the rootfs via loopback devices (/dev/loopX), which is not optimal from a performance perspective.
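For context, this is roughly the default deployment we started from, plus how we spotted the loop devices (running lsblk over juju ssh is just how we happened to check; any block-device listing would do):

```
# default deployment, no storage directives at all
juju deploy -n 3 ceph-osd

# on a unit, the OSDs show up backed by /dev/loopX files on the root filesystem
juju ssh ceph-osd/0 -- lsblk
```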
According to the ceph-osd charm documentation (ceph osd | Juju), you can specify OSD devices via the "osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde" option. I have tried to do that via the CLI for now, which I suppose is equivalent:
juju deploy -n 3 ceph-osd --storage osd-devices="/dev/sdb,16G /dev/sdc,16G" --storage osd-journals=loop,1G,1 --constraints "mem=4G root-disk=81G"
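For comparison, the config-option route described in the docs would presumably look like the sketch below; I am not certain the storage directive above and the config option are interchangeable, which is partly what I am asking (the device paths are just examples):

```
# set the charm config option instead of using Juju storage directives
juju deploy -n 3 ceph-osd --config osd-devices='/dev/sdb /dev/sdc'

# or on an already-deployed application
juju config ceph-osd osd-devices='/dev/sdb /dev/sdc'
```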
However, these devices do not seem to be created on the OSD unit machines. Maybe the syntax is wrong?
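Here is how we tried to verify it (the storage ID below is hypothetical, and I am not sure these are even the right checks):

```
# what storage does Juju think it has provisioned?
juju storage
juju show-storage osd-devices/0   # assuming such an instance exists

# what block devices actually exist on an OSD unit?
juju ssh ceph-osd/0 -- lsblk
```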
In the vSphere provider's git folder I see an OVF file which, if I understand correctly, is used as a template for machine deployment. It has only one disk. Can it be customized? Or is there some other way to add storage to a machine via Juju? We chose Juju to avoid the complexity of Ansible and similar automation tools, but it seems limited in what it can do.
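To frame that last question: my understanding is that dynamic storage on a given cloud goes through storage pools, so this is what I would expect to run if the vsphere provider offered such a pool (the add-storage line is a sketch and presumably fails on our setup):

```
# list the storage pools/providers available in this model
juju storage-pools

# if a dynamic pool existed, attaching an extra disk to a running unit
# would presumably look like this:
juju add-storage ceph-osd/0 osd-devices=32G
```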
Cloud provider: vSphere
Juju version: 2.7-rc6-bionic-amd64