The status output provided doesn’t include enough info to diagnose the issue, but typically you will get an error with the aws-integrator if the account being used doesn’t have sufficient permissions to support the IAM profiles needed.
I’ve followed these instructions (on a vSphere setup) and I’m currently stuck after deploying a k8s charm. It seems there are some authentication issues between the charm agent and the controller.
2018-10-16 13:24:35 DEBUG juju.worker.apicaller connect.go:125 connecting with old password
2018-10-16 13:24:35 DEBUG juju.api apiclient.go:877 successfully dialed "wss://10.10.139.233:17070/model/e4d97d49-c87a-4720-831c-ca51d17295ad/api"
2018-10-16 13:24:35 INFO juju.api apiclient.go:599 connection established to "wss://10.10.139.233:17070/model/e4d97d49-c87a-4720-831c-ca51d17295ad/api"
2018-10-16 13:24:35 DEBUG juju.worker.apicaller connect.go:152 failed to connect
2018-10-16 13:24:35 DEBUG juju.worker.dependency engine.go:538 “api-caller” manifold worker stopped: cannot open api: invalid entity name or password (unauthorized access)
Did I miss something here? Juju version is 2.5-beta1. Charm used is mysql-k8s.
It may be that the cached docker jujud image in your k8s cluster is stale (we use PullIfNotPresent). Because we’re still in development, APIs can change. This won’t be a problem once the final release goes out as the tagged docker image will be stable.
You can try deleting the caas-jujud-operator docker image from your k8s cluster. I’ve just tested with a clean microk8s setup and the 2.5 edge Juju snap, and things were fine.
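One way to force a fresh pull is to remove the cached image from the node; a sketch, assuming a docker-backed cluster (the image name comes from the post above — check the listing for the exact repository and tag on your node):

```shell
# Find the cached jujud operator image (name/tag may vary by release).
docker images | grep caas-jujud-operator

# Remove it so the next operator pod start pulls a fresh copy
# (substitute the IMAGE ID from the listing above).
docker rmi <image-id>
```

With PullIfNotPresent, the new image is fetched the next time an operator pod is scheduled.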
I’ve had this happen as well. In each case, I’d removed the charm from Juju but did not delete the operator pod’s persistent volume. It seems that they are not automatically removed (as of juju 2.5-beta1) if you juju remove-application the charm.
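Until that behaviour changes, the leftover operator volume can be removed by hand; a sketch with kubectl (the namespace matches the Juju model name, and the claim name is a placeholder — list first to find the real one):

```shell
# Operator PVCs live in the namespace named after the Juju model.
kubectl get pvc -n <model-name>

# Delete the leftover claim; the bound PV is also released if the
# storage class reclaim policy is Delete.
kubectl delete pvc <operator-pvc-name> -n <model-name>
```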
As with cloud deployments, we specifically do not remove storage when deleting an application unless the user asks for it with --destroy-storage.
But, we should remove the operator volume. That’s a bug that needs fixing.
I’ve just landed a fix so that the operator storage is deleted when the operator is deleted.
Trouble with aws-integrator
I tried this today with cdk on gce with the gcp-integrator charm.
I have the cdk up and running and set up the operator storage using the gce example in the storage post.
juju list-storage-pools
operator-storage kubernetes parameters.type=pd-standard storage-class=juju-operator-storage storage-provisioner=kubernetes.io/gce-pd
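For reference, a pool with those attributes would have been created with juju create-storage-pool; this sketch just mirrors the attributes shown in the listing above:

```shell
# Create an operator storage pool backed by GCE persistent disks.
juju create-storage-pool operator-storage kubernetes \
    storage-class=juju-operator-storage \
    storage-provisioner=kubernetes.io/gce-pd \
    parameters.type=pd-standard
```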
When I deploy mysql/gitlab charms I get the following:
pod has unbound immediate PersistentVolumeClaims
On both of the applications and I’m unsure how to proceed.
pod has unbound immediate PersistentVolumeClaims means that the underlying cloud could not provision the requested storage. This may be due to an issue with the integrator charm, or it could be an account limitation, or something else. It’s not a juju-k8s issue per se but something with the substrate on which Kubernetes is running. You’ll need to kubectl describe the affected PVC to dig into the details of why the claim could not be satisfied.
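Something along these lines (names are placeholders; the namespace matches the Juju model name):

```shell
# Find the pending claim across namespaces.
kubectl get pvc --all-namespaces

# The Events section at the bottom usually explains why
# provisioning failed (quota, missing class, permissions, etc).
kubectl describe pvc <pvc-name> -n <model-name>
```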
Also, don’t use mysql - use mariadb instead. There’s an issue with the upstream mysql docker image that causes the db not to come up correctly.
How do I write a stateful set container with a Kubernetes charm? Can you point me to some documents or examples?
Juju automatically creates a k8s stateful set if your charm requires storage; otherwise a deployment controller is used to manage the pods. The stateful set allows the storage PVs to be correctly re-attached to new pods which are spun up to replace existing ones. The creation of a stateful set or deployment controller is done automatically by Juju; it’s not something the charm author needs to worry about.
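To make that concrete: declaring storage in the charm’s metadata.yaml is what triggers the stateful set. A minimal, hypothetical stanza (the storage name and mount location here are invented for illustration):

```yaml
# metadata.yaml (excerpt) - requesting filesystem storage
storage:
  database:
    type: filesystem
    location: /var/lib/mysql
```

A charm with no storage stanza gets a plain deployment controller instead.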
There’s also this post on writing k8s charms in general.
If I want to deploy a pod on a particular kubernetes-worker with a specific tag, how can I do it?
This post explains how you can do what you want:
I’m using a full Kubernetes cluster built with kubespray. Can I create storage with “storage-class=microk8s-hostpath”? Specifically:
- how do I create hostpath storage for high disk-IO requirements?
- how do I create storage on a distributed storage system (e.g. Ceph)?
“hostpath” is a class of storage set up specifically by microk8s.
To create storage for any given k8s cluster, you need to use what’s supported by the underlying cloud or cluster itself as the storage class provisioner. Juju just uses what it’s told to - knowledge of the underlying k8s cluster is necessary to know how to set things up.
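For example, on a cluster with a Ceph RBD backend, you would define a StorageClass using the cluster’s own provisioner and then point a Juju storage pool at it. A hypothetical sketch (the class name, monitor address, and pool are invented; the real parameters depend on your Ceph setup):

```yaml
# ceph-storageclass.yaml - hypothetical Ceph RBD storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juju-ceph-storage
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.0.1:6789
  pool: rbd
```

Apply it with kubectl apply -f ceph-storageclass.yaml, then create a pool with juju create-storage-pool k8s-ceph kubernetes storage-class=juju-ceph-storage and reference that pool when deploying.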
Jenkins-K8S Charm Summary/Feedback
The doc is broken where it describes bringing up CDK on AWS with the integrator charm, because there are no relations between worker, master or aws-integrator.
To add on, I believe the best practice is to use a bundle overlay when using an integrator charm. It is supposed to be a more robust way of doing it. There is a tutorial on using the AWS integrator charm in this way.
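An overlay is a small YAML fragment merged into the bundle at deploy time. A hypothetical aws-overlay.yaml adding the integrator and the missing relations might look like this (the charm URL and endpoint names are illustrative; check the tutorial for the exact form):

```yaml
applications:
  aws-integrator:
    charm: cs:~containers/aws-integrator
    num_units: 1
relations:
- ['aws-integrator', 'kubernetes-master']
- ['aws-integrator', 'kubernetes-worker']
```

It would then be deployed with something like juju deploy canonical-kubernetes --overlay aws-overlay.yaml, keeping the base bundle untouched.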
I corrected the original post. This is my final juju status --relations output if it helps anyone.
The post says to optionally set up storage, but then the mariadb k8s charm requires storage. Doesn’t seem particularly optional.
Also, I’m probably just looking in the wrong place but how do you provision storage that it can use on microk8s?
The doc was updated to match 2.5.1 which improves behaviour on microk8s by automatically using the out of the box hostpath storage built into microk8s. Unfortunately this was premature as 2.5.1 is not quite released. It’s close - we hope to have it out by the end of the week. It’s currently in the 2.5/candidate channel.
When microk8s is installed, the storage provisioner is not enabled by default. Run microk8s.enable storage dns to set things up to work with Juju. Also, microk8s.status shows the optional services which you may want to enable when using microk8s.
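Putting it together, a minimal sequence on microk8s might look like this (the pool name, charm, and storage name are illustrative — check the charm’s metadata for the storage name it actually declares):

```shell
# Enable the hostpath provisioner and DNS in microk8s.
microk8s.enable storage dns

# Create a Juju storage pool backed by the microk8s hostpath class.
juju create-storage-pool mariadb-pv kubernetes storage-class=microk8s-hostpath

# Deploy the charm, satisfying its storage requirement from that pool.
juju deploy cs:~juju/mariadb-k8s --storage database=mariadb-pv,10M
```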