The doc is broken where it describes bringing up CDK on AWS with the integrator charm, because it sets up no relations between the worker, master, and aws-integrator charms.
To add to that, I believe best practice is to use a bundle overlay when using an integrator charm; it's supposed to be a more robust way of doing it. There is a tutorial on using the AWS integrator charm in this way.
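For reference, a minimal overlay along these lines might look like the sketch below. The charm and application names (`aws-integrator`, `kubernetes-master`, `kubernetes-worker`) and the bundle name in the deploy command are assumptions based on the CDK charms of this era; check the store for current names and revisions.

```yaml
# aws-overlay.yaml — hypothetical overlay sketch, names assumed
applications:
  aws-integrator:
    charm: cs:~containers/aws-integrator
    num_units: 1
    trust: true   # lets the charm use the model's cloud credentials
relations:
  # the relations the base doc was missing
  - ['aws-integrator', 'kubernetes-master']
  - ['aws-integrator', 'kubernetes-worker']
```

You would then deploy with something like `juju deploy canonical-kubernetes --overlay aws-overlay.yaml` (bundle name assumed), so the relations come up as part of the bundle rather than being added by hand afterwards.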
I corrected the original post. Here's my final `juju status --relations` output if it helps anyone.
The post says to optionally set up storage, but the mariadb k8s charm then requires storage. Doesn't seem particularly optional.
Also, I’m probably just looking in the wrong place but how do you provision storage that it can use on microk8s?
The doc was updated to match 2.5.1 which improves behaviour on microk8s by automatically using the out of the box hostpath storage built into microk8s. Unfortunately this was premature as 2.5.1 is not quite released. It’s close - we hope to have it out by the end of the week. It’s currently in the 2.5/candidate channel.
When microk8s is installed, the storage provisioner is not enabled by default. You run `microk8s.enable storage dns` to set things up to work with Juju. Also, `microk8s.status` shows the optional services which you may want to enable when using microk8s.
Alright so I did a snap switch and snap refresh and restarted microk8s.
It also has storage enabled. Same error. Do I need to rebootstrap or something?
Hmmm, I even tried removing microk8s and reinstalling, and it's not picking it up. Am I just missing a provisioning step or something? I went and reread that part of the doc.
@magicaltrout Juju 2.5.0 requires that the user set up a storage pool so that Juju knows how to provision storage for charm state (`operator-storage`). So in 2.5.0, after adding a model, you need to do this:
```
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
```
If you want to deploy a charm which requires storage for the workload, you’ll need a storage pool for that also, eg
```
juju create-storage-pool mariadb-storage kubernetes storage-class=microk8s-hostpath
```
When you deploy mariadb-k8s, you then use the storage pool:
```
juju deploy cs:~juju/mariadb-k8s --storage database=10M,mariadb-storage
```
Now, in Juju 2.5.1, it works better with microk8s by using the hostpath storage provisioner automatically. No need for the steps to create the storage pools (unless you want to customise things).
Refresh your Juju to use the 2.5/candidate snap and bootstrap again.
```
juju bootstrap lxd
microk8s.config | juju add-k8s k8stest
juju add-model test k8stest
juju deploy cs:~juju/mariadb-k8s
```
`Channel candidate for juju is closed; temporarily forwarding to stable.` < what does that mean?
You ran this command?
```
snap refresh juju --channel 2.5/candidate
```
Okay, I give up trying to understand… I thought a clean removal and install would be better, so I just threw it away and got my error; then I refreshed and it works… FML
Literally copied your command block verbatim…
I did then copy the create-storage-pool line you wrote. That worked. And now MariaDB is doing something…
I dunno, both my snaps are on the candidate branch and storage is enabled. The automatic magic didn't happen, but that may be due to the weird upgrade path or something, I guess.
With the snap error, my guess is you left out the "2.5" track when specifying `--channel`, so snap operated on the latest track, where the candidate channel has nothing published to it. If you ask for a candidate snap in that case, snapd falls back to the next lower level of risk, which is stable.
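In other words (a sketch of the two invocations; assumes snapd is installed, so not something to paste blindly):

```shell
# Track omitted: this targets latest/candidate, which has nothing
# published, so snapd falls back to latest/stable and prints
# "Channel candidate for juju is closed; temporarily forwarding to stable."
snap refresh juju --channel candidate

# Track included: this targets the 2.5 track's candidate channel,
# which is where the 2.5.1 pre-release actually lives.
snap refresh juju --channel 2.5/candidate
```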
Did you rebootstrap? Or upgrade your controller? The controller needs to be running 2.5.1.
oooh, the controller didn't go away when I threw the old Juju in the bin… I assumed that would wipe out local controllers; clearly not.
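Right, removing the snap doesn't touch existing controllers; their machines and state live in the backing cloud. A rough cleanup sequence (controller name is a placeholder, and this assumes you don't need anything on the old controller):

```shell
# See which controllers still exist from the old install
juju controllers

# Tear down the stale controller (destroy-controller is the gentler
# option; kill-controller forces it if the controller is unreachable)
juju kill-controller <controller-name>

# Bootstrap fresh with the upgraded client so the controller runs 2.5.1
juju bootstrap lxd
```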
Hurrah! Thanks @wallyworld … candidate snaps… reminds me of the first time in Ghent when I thought it was a good idea to go home and migrate from juju 1.x to 2.x-alpha-x… I sometimes make interesting life choices.
I do have an exciting data project to build with this though, so I’m looking forward to diving in and figuring it all out.
Thanks for the help.
No worries, awesome you got it sorted. We're still working through the best way to do stuff like share (read-only) storage between pods, if that's a thing, or provide a nice way to get training data for kubeflow to where it needs to go, etc. Because operators (i.e. the unit agent which runs the charm) live in a different pod from the workloads, there are challenges there that don't exist on cloud/vm based workloads, where the agent is co-located with the workload, so when it downloads a resource that data is also available to the workload. And k8s is still evolving; things like storage snapshots as a way to satisfy a volume claim have only relatively recently been implemented.