Installing Charmed OSM with MicroK8s

This guide will walk you through installing the Charmed Distribution of OSM.

Requirements

We suggest the following minimum requirements:

  • Ubuntu 18.04 LTS (Bionic)
  • 4 CPUs
  • 8 GB RAM
  • 50 GB free disk space

Getting Started

Install the basic prerequisites.

sudo snap install juju --classic
sudo snap install osmclient --edge

Alias

The osm command will be available via osmclient.osm. You can create an alias to osm with the following command:

$ sudo snap alias osmclient.osm osm
Added:
  - osmclient.osm as osm

Connect Snap Interface

Snaps are confined by default and must be granted permission to read hidden files in your home directory. Connecting this interface allows the osmclient snap to access your Juju configuration.

sudo snap connect osmclient:juju-client-observe
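You can verify that the interface was connected with snapd's standard `snap connections` command:

```shell
# The juju-client-observe plug should appear with a non-empty slot column
snap connections osmclient
```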

Bootstrap Juju on LXD

NOTE: It is not necessary to use sudo with any juju command. Doing so may lead to permission denied errors.

Before bootstrapping, make sure LXD is installed and configured correctly to work with Charmed OSM. Then bootstrap the Juju controller, on LXD, that OSM will use to deploy proxy charms:

juju bootstrap localhost osm-lxd
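If LXD is not already set up, a minimal non-interactive configuration might look like the sketch below; run it before the bootstrap above. The snap channel and the default answers are assumptions, so adjust them for your environment:

```shell
# Assumed minimal LXD setup; adjust storage/network answers as needed
sudo snap install lxd
sudo lxd init --auto                        # accept default storage and network settings
lxc network set lxdbr0 ipv6.address none    # disabling IPv6 on the default bridge is commonly recommended for OSM
```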

Install and configure MicroK8s

MicroK8s is a fast, lightweight, and certified distribution of Kubernetes that is made for developers. It’s a great choice if you want Kubernetes within minutes.

sudo snap install microk8s --channel 1.14/stable --classic
sudo usermod -a -G microk8s $USER
newgrp microk8s
microk8s.status --wait-ready
microk8s.enable storage dns

# For easier access, create an alias to microk8s's kubectl command
sudo snap alias microk8s.kubectl kubectl

# Bootstrap the Kubernetes cloud
juju bootstrap microk8s osm-on-k8s

# Add a new model for OSM
juju add-model osm

Install OSM

Generate a bundle overlay containing the credentials of the OSM Juju controller bootstrapped earlier; this overlay is used when deploying OSM below.

osmclient.overlay
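The generated vca-overlay.yaml wires the OSM components to the Juju controller. Its exact contents depend on your controller; the sketch below only illustrates the general shape, and the option names in it are assumptions rather than authoritative (inspect the generated file for the real ones):

```yaml
# Illustrative only: option names are assumptions, check your generated vca-overlay.yaml
applications:
  lcm-k8s:
    options:
      vca_host: <controller IP>     # address of the osm-lxd controller
      vca_port: 17070               # default Juju API port
      vca_user: admin
      vca_password: <password>
```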

Deployment

Choose how you would like Charmed OSM to be deployed.

Standalone

The standalone version is perfect for evaluation and development purposes. Each component is installed as a single instance, making it ideal for running on a laptop or workstation, and it pairs well with MicroK8s.

juju deploy osm --overlay vca-overlay.yaml

High-Availability

For production use, we offer a high-availability version of Charmed OSM. Each component is deployed in a cluster of three units set up with failover, which requires significantly more resources to operate.

juju deploy osm-ha --overlay vca-overlay.yaml

Status

It can take several minutes or longer to install, depending on your machine and bandwidth. To monitor the progress of the installation, you can watch the output of juju status:

$ watch -c juju status --color
Every 2.0s: juju status --color        micro-osm: Fri Aug 23 17:03:25 2019

Model  Controller  Cloud/Region        Version  SLA          Timestamp
osm    osm-on-k8s  microk8s/localhost  2.6.6    unsupported  17:03:27Z

App             Version  Status  Scale  Charm           Store       Rev  OS          Address         Notes 
grafana-k8s              active      1  grafana-k8s     jujucharms   15  kubernetes  10.152.183.122 
kafka-k8s                active      1  kafka-k8s       jujucharms    1  kubernetes  10.152.183.90     
lcm-k8s                  active      1  lcm-k8s         jujucharms   21  kubernetes  10.152.183.44 
mariadb-k8s              active      1  mariadb-k8s     jujucharms   16  kubernetes  10.152.183.75     
mon-k8s                  active      1  mon-k8s         jujucharms   14  kubernetes  10.152.183.231   
mongodb-k8s              active      1  mongodb-k8s     jujucharms   15  kubernetes  10.152.183.15     
nbi-k8s                  active      1  nbi-k8s         jujucharms   24  kubernetes  10.152.183.104 
pol-k8s                  active      1  pol-k8s         jujucharms   14  kubernetes  10.152.183.230 
prometheus-k8s           active      1  prometheus-k8s  jujucharms   14  kubernetes  10.152.183.13    
ro-k8s                   active      1  ro-k8s          jujucharms   20  kubernetes  10.152.183.56     
ui-k8s                   active      1  ui-k8s          jujucharms   28  kubernetes  10.152.183.7     
zookeeper-k8s            active      1  zookeeper-k8s   jujucharms   16  kubernetes  10.152.183.140     

Unit               Workload  Agent  Address    Ports                       Message   
grafana-k8s/0*     active    idle   10.1.1.39  3000/TCP                    configured   
kafka-k8s/0*       active    idle   10.1.1.32  9092/TCP                    configured   
lcm-k8s/0*         active    idle   10.1.1.36  80/TCP                      configured   
mariadb-k8s/0*     active    idle   10.1.1.24  3306/TCP                    ready   
mon-k8s/0*         active    idle   10.1.1.33  8000/TCP                    configured   
mongodb-k8s/0*     active    idle   10.1.1.26  27017/TCP                   configured   
nbi-k8s/0*         active    idle   10.1.1.35  9999/TCP                    configured   
pol-k8s/0*         active    idle   10.1.1.34  80/TCP                      configured   
prometheus-k8s/0*  active    idle   10.1.1.38  9090/TCP                    configured
ro-k8s/0*          active    idle   10.1.1.30  9090/TCP                    configured   
ui-k8s/0*          active    idle   10.1.1.37  80/TCP                      configured   
zookeeper-k8s/0*   active    idle   10.1.1.31  2181/TCP,2888/TCP,3888/TCP  configured   

When every application's Status and every unit's Workload are shown as active, your installation of OSM is ready to use.

Post-deployment

Once Charmed OSM has been successfully installed, set the OSM_HOSTNAME environment variable.

First, get the IP address of the nbi-k8s application.

$ juju status nbi-k8s
Model  Controller  Cloud/Region        Version  SLA          Timestamp
osm    osm-on-k8s  microk8s/localhost  2.6.6    unsupported  17:15:10Z

App      Version  Status  Scale  Charm    Store       Rev  OS          Address         Notes
nbi-k8s           active      1  nbi-k8s  jujucharms   24  kubernetes  10.152.183.104  

Unit        Workload  Agent  Address    Ports     Message
nbi-k8s/0*  active    idle   10.1.1.35  9999/TCP  configured

Next, export the OSM_HOSTNAME variable and confirm that the platform is operational:

$ export OSM_HOSTNAME=10.152.183.104

You may now interact with Charmed OSM via the osm command. To make this persistent across sessions, add the export line to your ~/.bashrc.
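The lookup and export can be scripted. A minimal sketch, assuming python3 is available and that `juju status --format=json` exposes the application address under applications.nbi-k8s.address (check the output of your Juju version):

```shell
# Extract the nbi-k8s application address (JSON field path assumed) and export it
OSM_HOSTNAME=$(juju status nbi-k8s --format=json |
  python3 -c 'import json,sys; print(json.load(sys.stdin)["applications"]["nbi-k8s"]["address"])')
export OSM_HOSTNAME
# Persist across sessions:
echo "export OSM_HOSTNAME=$OSM_HOSTNAME" >> ~/.bashrc
```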

$ osm vim-list
+----------+------+
| vim name | uuid |
+----------+------+
+----------+------+

$ osm user-list
+-------+--------------------------------------+
| name  | id                                   |
+-------+--------------------------------------+
| admin | 51b369ed-942e-4a61-a031-64eaa15e8cff |
+-------+--------------------------------------+

Hi @aisrael, how can I debug an app that keeps “waiting”?

ubuntu@osm-rel6-k8s:~$ juju status
Model  Controller  Cloud/Region        Version  SLA          Timestamp
osm    osm-on-k8s  microk8s/localhost  2.6.6    unsupported  18:46:58Z

App             Version  Status   Scale  Charm           Store       Rev  OS          Address         Notes
grafana-k8s              active       1  grafana-k8s     jujucharms   15  kubernetes  10.152.183.192
kafka-k8s                active       1  kafka-k8s       jujucharms    1  kubernetes  10.152.183.135
lcm-k8s                  waiting      1  lcm-k8s         jujucharms   21  kubernetes  10.152.183.229
mariadb-k8s              waiting      1  mariadb-k8s     jujucharms   16  kubernetes  10.152.183.138
mon-k8s                  active       1  mon-k8s         jujucharms   14  kubernetes  10.152.183.7
mongodb-k8s              active       1  mongodb-k8s     jujucharms   15  kubernetes  10.152.183.65
nbi-k8s                  active       1  nbi-k8s         jujucharms   24  kubernetes  10.152.183.112
pol-k8s                  active       1  pol-k8s         jujucharms   14  kubernetes  10.152.183.81
prometheus-k8s           active       1  prometheus-k8s  jujucharms   14  kubernetes  10.152.183.9
ro-k8s                   active       1  ro-k8s          jujucharms   20  kubernetes  10.152.183.173
ui-k8s                   active       1  ui-k8s          jujucharms   28  kubernetes  10.152.183.45
zookeeper-k8s            active       1  zookeeper-k8s   jujucharms   16  kubernetes  10.152.183.2

Unit               Workload  Agent  Address    Ports                       Message
grafana-k8s/0*     active    idle   10.1.1.12  3000/TCP                    configured
kafka-k8s/0*       active    idle   10.1.1.21  9092/TCP                    configured
lcm-k8s/0*         active    idle   10.1.1.14  80/TCP                      configured
mariadb-k8s/0*     active    idle   10.1.1.31  3306/TCP                    ready
mon-k8s/0*         active    idle   10.1.1.11  8000/TCP                    configured
mongodb-k8s/0*     active    idle   10.1.1.28  27017/TCP                   configured
nbi-k8s/0*         active    idle   10.1.1.33  9999/TCP                    configured
pol-k8s/0*         active    idle   10.1.1.24  80/TCP                      configured
prometheus-k8s/0*  active    idle   10.1.1.25  9090/TCP                    configured
ro-k8s/0*          active    idle   10.1.1.32  9090/TCP                    configured
ui-k8s/0*          active    idle   10.1.1.17  80/TCP                      configured
zookeeper-k8s/0*   active    idle   10.1.1.16  2181/TCP,2888/TCP,3888/TCP  configured

The waiting state means that Juju expects to be able to proceed once something it needs to install or configure has completed. No action should be required from you.

It is possible, though, that lcm-k8s and mariadb-k8s are waiting for a relation to be established. Perhaps the overlay bundle step in the “Install OSM” section needs more explanation.
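To check whether relations are the blocker, the Juju side can be inspected directly with standard Juju commands (unit names taken from the status output above):

```shell
juju status --relations                            # list established relations alongside app status
juju show-status-log mariadb-k8s/0                 # history of the unit's status changes
juju debug-log --replay --include mariadb-k8s/0    # replay agent logs for the stuck unit
```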

Hello @guilag,

First of all, thanks for installing Charmed OSM.

When an application is in waiting status, it’s generally because the workload's Kubernetes pod is in a CrashLoopBackOff state or not yet ready. This problem usually has to do with the mariadb-k8s application not starting properly. Because of that, ro-k8s is not properly initialized, and lcm-k8s, which has a relation with ro-k8s, fails too.

To debug, I recommend executing the following commands:

kubectl -n osm get pods # list all the pods
kubectl -n osm logs mariadb-k8s-0 # show logs of the mariadb-k8s-0 pod
kubectl -n osm describe pods mariadb-k8s-0 # show pod details
kubectl -n osm get pods mariadb-k8s-0 -o yaml # show the deployment yaml

I would appreciate it if you could open a bug here and attach the logs from the commands above. It would also help to specify the OS version you’re using, whether you are installing in a VM, and your available memory and disk space.

Thanks,
David García

Hi @davigar15,
thanks, I will try your commands.

Here is my ubuntu version:

ubuntu@osm-rel6-k8s:~$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
...
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

Here are my resources (running in a VM in OpenStack):

  • RAM: 12 GB
  • vCPUs: 20
  • Disk: 100 GB

Hello @guilag,

Okay, it’s weird, because I just tried it in the same environment as you, and with fewer resources.

We’ll try to figure out why this is happening. In the meantime, try the following:

juju destroy-model osm --destroy-storage -y
juju add-model osm
juju create-storage-pool osm-pv kubernetes storage-class=microk8s-hostpath
juju create-storage-pool packages-pv kubernetes storage-class=microk8s-hostpath
juju deploy osm --overlay vca-overlay.yaml --channel edge
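Before redeploying, it may also help to confirm that the microk8s-hostpath storage class (enabled earlier via microk8s.enable storage) is actually available, and that the claims bind:

```shell
kubectl get storageclass    # should list microk8s-hostpath
kubectl -n osm get pvc      # persistent volume claims should show STATUS Bound
```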

Hi, same status so far after running your commands:

ubuntu@osm-rel6-k8s:~$ juju status
Model  Controller  Cloud/Region        Version  SLA          Timestamp
osm    osm-on-k8s  microk8s/localhost  2.6.6    unsupported  19:28:30Z

App             Version  Status   Scale  Charm           Store       Rev  OS          Address         Notes
grafana-k8s              active       1  grafana-k8s     jujucharms   19  kubernetes  10.152.183.242
kafka-k8s                active       1  kafka-k8s       jujucharms    4  kubernetes  10.152.183.199
lcm-k8s                  active       1  lcm-k8s         jujucharms   24  kubernetes  10.152.183.128
mariadb-k8s              waiting      1  mariadb-k8s     jujucharms   19  kubernetes  10.152.183.180
mon-k8s                  active       1  mon-k8s         jujucharms   18  kubernetes  10.152.183.125
mongodb-k8s              active       1  mongodb-k8s     jujucharms   18  kubernetes  10.152.183.154
nbi-k8s                  active       1  nbi-k8s         jujucharms   27  kubernetes  10.152.183.152
pol-k8s                  active       1  pol-k8s         jujucharms   18  kubernetes  10.152.183.127
prometheus-k8s           active       1  prometheus-k8s  jujucharms   18  kubernetes  10.152.183.230
ro-k8s                   waiting      1  ro-k8s          jujucharms   24  kubernetes  10.152.183.109
ui-k8s                   waiting      1  ui-k8s          jujucharms   32  kubernetes  10.152.183.126
zookeeper-k8s            active       1  zookeeper-k8s   jujucharms   25  kubernetes  10.152.183.47

Unit               Workload  Agent  Address    Ports                       Message
grafana-k8s/0*     active    idle   10.1.1.51  3000/TCP                    ready
kafka-k8s/0*       active    idle   10.1.1.50  9092/TCP                    ready
lcm-k8s/0*         active    idle   10.1.1.55  9999/TCP                    ready
mariadb-k8s/0*     active    idle   10.1.1.44  3306/TCP                    ready
mon-k8s/0*         active    idle   10.1.1.54  8000/TCP                    ready
mongodb-k8s/0*     active    idle   10.1.1.46  27017/TCP                   ready
nbi-k8s/0*         active    idle   10.1.1.53  9999/TCP                    ready
pol-k8s/0*         active    idle   10.1.1.52  80/TCP                      ready
prometheus-k8s/0*  active    idle   10.1.1.47  9090/TCP                    ready
ro-k8s/0*          active    idle   10.1.1.49  9090/TCP                    ready
ui-k8s/0*          active    idle   10.1.1.56  80/TCP                      ready
zookeeper-k8s/0*   active    idle   10.1.1.48  2181/TCP,2888/TCP,3888/TCP  ready

Hi @aisrael,
I followed your installation, but “juju add-model osm” returns:
ERROR ownership of the file is not the same as the current user: open /home/admin/.local/share/juju/controllers.yaml: permission denied

Do you have any suggestions?

Thank you!
Luca