[outdated] Juju, Kubernetes and microk8s

Warning: This tutorial is outdated. Please use the Juju with MicroK8s tutorial instead.

microk8s is awesome

If you want to quickly and easily do some Kubernetes charm development, or hack on Juju’s Kubernetes support, do yourself a favour and snap install microk8s.

I have previously described how to run a Kubernetes demo on AWS. It’s great and all, but between bootstrapping, deploying a Kubernetes bundle, and running the trust command to set up the necessary permissions, you can be waiting 20 minutes or so to get started. And often you’ll need to tear things down and start again as part of the development process. And you need to be online.

With microk8s, you have a local, fully compliant Kubernetes deployment with dynamic persistent volume support and a running ingress controller.

Ensure microk8s is set up correctly

After installing microk8s, you’ll want to enable dns and storage.

sudo snap install microk8s --edge --classic
microk8s.enable dns storage

I normally also alias microk8s.kubectl:
sudo snap alias microk8s.kubectl mkubectl
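
If you want to confirm the add-ons are up before going further, a couple of optional checks (not part of the original steps) are to look at the default storage class and the kube-system pods:

microk8s.kubectl get storageclass
microk8s.kubectl get pods -n kube-system

Once the add-ons have settled you should see the microk8s-hostpath storage class and the dns and hostpath-provisioner pods.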

Juju time!

You start by bootstrapping an LXD controller:

juju bootstrap lxd

Now, register your microk8s cloud with Juju and add a model:

microk8s.config | juju add-k8s k8stest
juju add-model test k8stest

That’s all the setup we need. In the time taken to bootstrap an LXD controller, we’re ready to deploy Kubernetes workloads with Juju.
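
If you’d like to double-check the registration first, listing clouds and models is a quick optional sanity check (not part of the original steps):

juju clouds
juju models

k8stest should appear as a cloud and test as the current model.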

Deploy mariadb

Let’s deploy mariadb. It requires storage, but microk8s has a built-in storage class which will be used. To deploy with the default hostpath storage and a 1GiB allocation:

juju deploy cs:~juju/mariadb-k8s

Or you may want to deploy with a smaller storage allocation, say 10MiB:

juju deploy cs:~juju/mariadb-k8s --storage database=10M

juju status

Model  Controller  Cloud/Region  Version      SLA          Timestamp
test   ian         k8stest       2.5-beta1    unsupported  14:56:31+10:00

App      Version  Status  Scale  Charm        Store       Rev  OS          Address        Charm version  Notes
mariadb           active      1  mariadb-k8s  jujucharms    7  kubernetes  10.152.183.94                 

Unit        Workload  Agent  Address    Ports     Message
mariadb/0*  active    idle   10.1.1.32  3306/TCP  Started container

juju storage --filesystem

[Filesystems]
Unit       Storage     Id  Provider id           Mountpoint      Size   State     Message
mariadb/0  database/0  0   database-0-mariadb-0  /var/lib/mysql  34MiB  attached  Successfully provisioned volume pvc-39a6de3a-b255-11e8-a2de-80fa5b27f2bf

The actual physical storage location is at /var/snap/microk8s/common/default-storage/ so you can easily inspect the files created by both the Juju mariadb operator and the database itself.

ls /var/snap/microk8s/common/default-storage/
database-0-mariadb-0-pvc-39a6de3a-b255-11e8-a2de-80fa5b27f2bf  charm-operator-mariadb-0-pvc-33d08565-b255-11e8-a2de-80fa5b27f2bf

Full Reset

If for whatever reason you need to completely reset everything and want to start from scratch, it’s easy enough just to do:

juju kill-controller <name> -y -t 0
microk8s.reset
juju remove-cloud k8stest

I had to use:

microk8s.enable dns storage

to get storage enabled.

This command gave me an error:

juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
ERROR storage provider "kubernetes" not found (not found)

You’ll get that error if you attempt to create a kubernetes storage pool on a model that is not a Kubernetes model.
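In other words, make sure you are on the Kubernetes model before creating the pool. A minimal sketch, assuming the model is named test as in the post above:

juju switch test
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath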

On a fresh machine, I get

/snap/microk8s/251/microk8s-config.wrapper: line 38: netstat: command not found
/snap/microk8s/251/microk8s-config.wrapper: line 39: ifconfig: command not found

So net-tools needs to be installed on the client, for example with ‘sudo apt install net-tools’.

@tvansteenburgh @kos.tsakalozos can you provide input here?

@wallyworld, @anastasia-macmood, thank you for spotting this problem.

We need to package net-tools within microk8s. A workaround for now is to ‘sudo apt install net-tools’ as you suggested.

Just added the following issue: https://github.com/ubuntu/microk8s/issues/148. We will get to it soon.

Thank you again.

If you are adding a k8s model to a LXD controller as in this example, and you have UFW enabled, you might get an error like:

ERROR failed to list namespaces: Get http://192.168.1.16:8080/api/v1/namespaces?includeUninitialized=true: dial tcp 192.168.1.16:8080: i/o timeout

Granting access to port 8080 from the LXD bridge subnet solves this. For me it was:

sudo ufw allow from 10.7.95.1/24 to any port 8080
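
If you’re not sure which subnet your LXD bridge is on, you can look it up first; this assumes the default bridge name lxdbr0, which may differ on your machine:

lxc network show lxdbr0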

I bootstrapped a Juju controller and microk8s following this discourse post, but I’m blocked when deploying mariadb. The unit always shows the message “waiting for container” and the storage filesystem is always “pending”. Can you give me a hand? Details below.

$ juju storage --filesystem
Unit           Storage id  Id  Provider id  Mountpoint  Size  State    Message
mariadb-k8s/0  database/0  0                                  pending  
$  juju status
Model  Controller  Cloud/Region  Version  SLA          Timestamp
test   manual      k8stest       2.5-rc1  unsupported  12:04:40+08:00

App          Version  Status   Scale  Charm        Store       Rev  OS          Address  Notes
mariadb-k8s           blocked    0/1  mariadb-k8s  jujucharms   13  kubernetes           pod has unbound immediate PersistentVolumeClaims

Unit           Workload  Agent       Address  Ports  Message
mariadb-k8s/0  waiting   allocating                  waiting for container

Juju was installed with “snap install juju --candidate --classic”.
microk8s was installed with “snap install microk8s --edge --classic”.

The tool versions:

$  juju version 
2.5-rc1-xenial-amd64
$ microk8s.kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

We’ll need a little more information to be able to help.
The message in juju status says
pod has unbound immediate PersistentVolumeClaims

This means that microk8s cannot allocate the storage needed for the charm (or is in the process of doing so).
Did you:

  • enable storage in microk8s?
  • set up an operator storage pool using hostpath storage?
  • set up a storage pool for mariadb using hostpath storage?
  • deploy the mariadb charm with the --storage option?

If those steps are done, microk8s normally has no problem allocating storage as needed.
You may need to provide the exact commands run.
You will also want to use microk8s.kubectl to inspect the pod and pvc to look at detailed error information.
Note that sometimes it can take a minute for the storage to be allocated and the PVC will eventually get bound.
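
For example, against the test model from this thread, the inspection might look something like this (the PVC name is a placeholder; use whatever the get command lists):

microk8s.kubectl -n test get pods,pvc
microk8s.kubectl -n test describe pvc <pvc-name>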

  • enable storage in microk8s? Yes. Re-running the command shows:
   $ microk8s.enable dns storage
    Enabling DNS
    Applying manifest
    service/kube-dns unchanged
    serviceaccount/kube-dns unchanged
    configmap/kube-dns unchanged
    deployment.extensions/kube-dns configured
    Restarting kubelet
    DNS is enabled
    Enabling default storage class
    deployment.extensions/hostpath-provisioner unchanged
    storageclass.storage.k8s.io/microk8s-hostpath unchanged
    Storage will be available soon
  • set up an operator storage pool using hostpath storage? Yes, with a command like this:
    juju create-storage-pool mariadb-pv kubernetes storage-class=microk8s-hostpath
    Re-running it shows an error:
    ERROR creating pool "operator-storage": cannot overwrite existing settings
  • set up a storage pool for mariadb using hostpath storage? Yes:
    juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
    Re-running it shows the same error as above.
  • deploy the mariadb charm with the --storage option? Yes:
    juju deploy cs:~wallyworld/mariadb-k8s --storage database=10M,mariadb-pv

And juju storage does show a line for mariadb:

$ juju storage --filesystem
Unit           Storage id  Id  Provider id  Mountpoint  Size  State    Message
mariadb-k8s/0  database/0  0                                  pending  

I wanted to research further, so I read the code on GitHub at tag 2.5-rc1. I found that the create-storage-pool command only saves a storage pool object, without doing anything on k8s, so I guessed that deploying with --storage would run the code below.
caas/kubernetes/provider/storage.go:178

func (v *volumeSource) CreateVolumes(ctx context.ProviderCallContext, params []storage.VolumeParams) (_ []storage.CreateVolumesResult, err error) {
	// noop
	return nil, nil
}

caas/kubernetes/provider/storage.go:254

// AttachVolumes is specified on the storage.VolumeSource interface.
func (v *volumeSource) AttachVolumes(ctx context.ProviderCallContext, attachParams []storage.VolumeAttachmentParams) ([]storage.AttachVolumesResult, error) {
	// noop
	return nil, nil
}

But this code does nothing. I want to know where I should add logger.info calls, or which code actually has an effect, but I can’t figure it out.

You likely don’t need to look at the Juju code; Juju appears to be correctly setting up the PVC. What’s needed is to inspect the k8s resources, like the PVCs, using microk8s.kubectl to get to the root cause. The methods you are looking at in the Juju code are correctly no-ops because storage is configured slightly differently in k8s compared to other clouds.
For the mariadb deployment, there should be in k8s:

  • an operator pod
  • a pvc and pv for the operator
  • a mariadb stateful set and pod
  • a pvc and pv for mariadb pod

If the operator pvc can’t be satisfied, then the operator won’t run and the mariadb pod will not be set up.

Depending on what kubectl describe shows when inspecting the k8s resources, you may need to perform further troubleshooting as described here.

I’d start by looking at the k8s resources for the model:

microk8s.kubectl -n <model> get all,pvc,pv,sc

Look at the PVCs and PVs etc. that are there and run microk8s.kubectl describe <resource> to look at the error history and see why things are not being allocated.

As a last resort, you can reset microk8s using microk8s.reset - sometimes things can get stuck. It can take a while for the reset to finish. You’ll need to enable storage again afterwards.
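
In command form, the reset and re-enable are just the two commands already shown earlier in the thread:

microk8s.reset
microk8s.enable dns storage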


[Resolved]

# microk8s.kubectl -n test get all,pvc,pv,sc
NAME                              READY   STATUS    RESTARTS   AGE
pod/juju-operator-mariadb-k8s-0   0/1     Pending   0          2d3h

NAME                                         READY   AGE
statefulset.apps/juju-operator-mariadb-k8s   0/1     2d3h

NAME                                                                            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
persistentvolumeclaim/mariadb-k8s-operator-volume-juju-operator-mariadb-k8s-0   Pending                                      microk8s-hostpath   2d3h

NAME                                                      PROVISIONER            AGE
storageclass.storage.k8s.io/microk8s-hostpath (default)   microk8s.io/hostpath   2d4h

I think the PV should be ready before the pod, so I described the “persistentvolumeclaim/mariadb-k8s-operator-volume-juju-operator-mariadb-k8s-0”:

# microk8s.kubectl -n test describe  persistentvolumeclaim/mariadb-k8s-operator-volume-juju-operator-mariadb-k8s-0 
Name:          mariadb-k8s-operator-volume-juju-operator-mariadb-k8s-0
Namespace:     test
StorageClass:  microk8s-hostpath
Status:        Pending
Volume:        
Labels:        juju-operator=mariadb-k8s
Annotations:   volume.beta.kubernetes.io/storage-provisioner: microk8s.io/hostpath
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason                Age                      From                         Message
  ----       ------                ----                     ----                         -------
  Normal     ExternalProvisioning  3m4s (x12364 over 2d3h)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "microk8s.io/hostpath" or manually created by system administrator
Mounted By:  juju-operator-mariadb-k8s-0

I searched for “waiting for a volume to be created, either by external provisioner "microk8s.io/hostpath" or manually created by system administrator” on Google.

And then I found that my kube-system pods had not come up properly.

# microk8s.kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
hostpath-provisioner-599db8d5fb-g2jf8   0/1     Running   0          3d
kube-dns-6ccd496668-47szk               0/3     Running   0          19h

# microk8s.kubectl describe pod/hostpath-provisioner-599db8d5fb-g2jf8

failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

So I edited /var/snap/microk8s/354/args/dockerd-env and enabled the https_proxy setting.

That fixed the problem. Thanks, wallyworld.
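
For anyone else hitting the same image-pull timeout behind a proxy, the edit is roughly as follows; the proxy URL is a placeholder, the snap revision directory will differ on your machine, and the exact variable lines in the file may vary:

# /var/snap/microk8s/<revision>/args/dockerd-env
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128

Then restart microk8s (for example with microk8s.stop and microk8s.start, or by rebooting) so the docker daemon picks up the new environment.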


Awesome that you got it working!

I did it again, and this time it’s even worse: the namespace was not created.

juju status
Model  Controller  Cloud/Region  Version  SLA          Timestamp
test   manual      k8stest       2.5-rc2  unsupported  20:34:44+08:00

App          Version  Status   Scale  Charm        Store       Rev  OS          Address  Notes
mariadb-k8s           waiting    0/1  mariadb-k8s  jujucharms    0  kubernetes           waiting for container

Unit           Workload  Agent       Address  Ports  Message
mariadb-k8s/0  waiting   allocating                  waiting for container
# microk8s.kubectl get namespace
NAME          STATUS   AGE
default       Active   10h
kube-public   Active   10h
kube-system   Active   10h

You’ll need to provide more information.
See the microk8s troubleshooting guidelines. Maybe microk8s needs a reset.
juju debug-log should also show any issues with why the namespace could not be created.
Depending on what shows up, you may need to file a microk8s bug.
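
A couple of places to look that usually help narrow this down (a suggestion beyond the reply above):

juju debug-log -m controller
microk8s.inspect

The first pulls the controller model’s logs in case the error happened there rather than in the workload model; the second (if your microk8s version ships it) collects an inspection report of the microk8s services.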