First steps with the Canonical Distribution of Open Source MANO


#1

Canonical Distribution of OSM

Welcome to Canonical Distribution of OSM!

The objective of this page is to give an overview of the first steps to get up and running with Canonical Distribution of OSM (CDO).

Requirements

  • OS: Ubuntu 18.04 LTS
  • MINIMUM:
    • 2 CPUs
    • 4 GB RAM
    • 40 GB disk
    • Single interface with Internet access
  • RECOMMENDED:
    • 4 CPUs
    • 8 GB RAM
    • 80 GB disk
    • Single interface with Internet access

User Guide

Installing OSM has never been easier. With a few commands, you can deploy OSM into an empty environment using MicroK8s.

First of all, clone the Canonical Distribution of OSM repository:

git clone https://git.launchpad.net/canonical-osm 
cd ./canonical-osm/

Install

To install the Canonical Distribution of OSM into a local MicroK8s, execute the following commands:

sudo snap install microk8s --classic
sudo snap install juju --classic
microk8s.status --wait-ready
microk8s.enable dashboard storage dns
./setup_lxd.sh
echo "./update_lxc_juju_images.sh" | at now
juju bootstrap localhost osm-lxd
juju bootstrap microk8s osm-on-k8s
juju add-model osm
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
juju create-storage-pool osm-pv kubernetes storage-class=microk8s-hostpath
juju create-storage-pool packages-pv kubernetes storage-class=microk8s-hostpath
./vca_config.sh
juju deploy cs:~charmed-osm/osm --overlay overlay.yaml

Checking the status

Once the juju deploy command has been executed, it will take several minutes for OSM to come up. To see the status of the deployment, execute watch -c juju status --color. You can also execute watch kubectl -n osm get pods to see the status of the Kubernetes pods.

When the deployment is finished, the juju status command will show output like this:

juju status
Model  Controller  Cloud/Region        Version  SLA          Timestamp
osm    osm-on-k8s  microk8s/localhost  2.6.5    unsupported  11:10:16Z

App             Version  Status  Scale  Charm           Store       Rev  OS          Address         Notes
grafana-k8s              active      1  grafana-k8s     jujucharms   15  kubernetes  10.152.183.164
kafka-k8s                active      1  kafka-k8s       jujucharms    1  kubernetes  10.152.183.216
lcm-k8s                  active      1  lcm-k8s         jujucharms   20  kubernetes  10.152.183.176
mariadb-k8s              active      1  mariadb-k8s     jujucharms   13  kubernetes  10.152.183.73
mon-k8s                  active      1  mon-k8s         jujucharms   14  kubernetes  10.152.183.59
mongodb-k8s              active      1  mongodb-k8s     jujucharms   14  kubernetes  10.152.183.44
nbi-k8s                  active      1  nbi-k8s         jujucharms   21  kubernetes  10.152.183.53
pol-k8s                  active      1  pol-k8s         jujucharms   14  kubernetes  10.152.183.106
prometheus-k8s           active      1  prometheus-k8s  jujucharms   12  kubernetes  10.152.183.126
ro-k8s                   active      1  ro-k8s          jujucharms   17  kubernetes  10.152.183.146
ui-k8s                   active      1  ui-k8s          jujucharms   23  kubernetes  10.152.183.229
zookeeper-k8s            active      1  zookeeper-k8s   jujucharms   16  kubernetes  10.152.183.252

Unit               Workload  Agent  Address    Ports                                Message
grafana-k8s/0*     active    idle   10.1.1.45  3000/TCP                             configured
kafka-k8s/0*       active    idle   10.1.1.39  9092/TCP                             configured
lcm-k8s/0*         active    idle   10.1.1.42  80/TCP                               configured
mariadb-k8s/0*     active    idle   10.1.1.33  3306/TCP,4444/TCP,4567/TCP,4568/TCP  configured
mon-k8s/0*         active    idle   10.1.1.41  8000/TCP                             configured
mongodb-k8s/0*     active    idle   10.1.1.34  27017/TCP                            configured
nbi-k8s/0*         active    idle   10.1.1.43  9999/TCP                             configured
pol-k8s/0*         active    idle   10.1.1.40  80/TCP                               configured
prometheus-k8s/0*  active    idle   10.1.1.44  9090/TCP                             configured
ro-k8s/0*          active    idle   10.1.1.37  9090/TCP                             configured
ui-k8s/0*          active    idle   10.1.1.35  80/TCP                               configured
zookeeper-k8s/0*   active    idle   10.1.1.36  2181/TCP,2888/TCP,3888/TCP           configured

OSM Client

If you want to interact with OSM using the OSM client, just execute the following after the deployment is finished:

./install_osm_client.sh
source ~/.osmclient.env

If you want to see the commands available for the OSM client, execute osm --help.
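As a quick check that the client can reach OSM's northbound interface, you can list the registered VIM accounts and descriptors. The subcommands below are standard OSM client commands; on a fresh deployment the lists will simply be empty.

```shell
# List VIM accounts registered in OSM
osm vim-list

# List onboarded network service descriptors and running network services
osm nsd-list
osm ns-list
```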

Uninstall

To uninstall the Canonical Distribution of OSM, execute the following command:

./uninstall.sh

Start playing with OSM

After the deployment is finished, check the address of the ui-k8s application in the juju status output and open it in your web browser (in the example output above, http://10.152.183.229).
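The address can also be read from the command line. This is a sketch that assumes Juju names the Kubernetes service after the application (ui-k8s):

```shell
# Show only the UI application in the juju status output
juju status ui-k8s

# Or read the cluster IP of the corresponding Kubernetes service
kubectl -n osm get service ui-k8s
```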

Access services from outside

If you have installed OSM on an external machine, or in a VM, you can access it by enabling the ingress module on MicroK8s and exposing the applications. Enter the following commands, where <ip> is the external IP of the machine on which OSM has been installed.

microk8s.enable ingress
juju config ui-k8s juju-external-hostname=osm.<ip>.xip.io
juju expose ui-k8s
juju config prometheus-k8s juju-external-hostname=prometheus.<ip>.xip.io
juju expose prometheus-k8s
juju config grafana-k8s juju-external-hostname=grafana.<ip>.xip.io
juju expose grafana-k8s
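Once the ingress is configured, you can verify that the endpoints respond with a quick curl check (replace <ip> with your machine's external IP, as above):

```shell
# Each request should return HTTP response headers (e.g. 200 or a redirect)
curl -I http://osm.<ip>.xip.io
curl -I http://prometheus.<ip>.xip.io
curl -I http://grafana.<ip>.xip.io
```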

The ingress module uses NGINX, which by default sets the proxy-body-size option to 1m. This is a problem when uploading a VNF package larger than 1 MB. To solve it, add an annotation to the ingress:

kubectl -n osm edit ingress ui-k8s

# Add the following line in the annotations
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
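Alternatively, the same annotation can be applied non-interactively with kubectl annotate:

```shell
# Disable the request-body size limit on the UI ingress
kubectl -n osm annotate ingress ui-k8s \
  nginx.ingress.kubernetes.io/proxy-body-size="0" --overwrite
```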

You can now access these services:

  • OSM: http://osm.<ip>.xip.io
  • Prometheus: http://prometheus.<ip>.xip.io
  • Grafana: http://grafana.<ip>.xip.io

Known issues

Microk8s not starting on reboot

After a reboot, MicroK8s may not start properly. The bug has already been reported: https://github.com/ubuntu/microk8s/issues/531

Workaround: execute microk8s.start after each reboot. Pods will be available after a few minutes.
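If you want this workaround applied automatically, one possible approach (a sketch, not part of the official installer) is an @reboot entry in the user's crontab that starts MicroK8s at boot:

```shell
# Append an @reboot entry to the current user's crontab
(crontab -l 2>/dev/null; echo "@reboot /snap/bin/microk8s.start") | crontab -
```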

Troubleshooting

If you have any trouble with the installation, please contact us; we will be glad to answer your questions:


#2

In environments with restricted network access, you may encounter an error similar to this:

ERROR cannot deploy bundle: cannot add charm "cs:~charmed-osm/grafana-k8s-13": cannot retrieve charm "cs:~charmed-osm/grafana-k8s-13": cannot get archive: Get https://api.jujucharms.com/charmstore/v5/~charmed-osm/grafana-k8s-13/archive?channel=edge: dial tcp: lookup api.jujucharms.com on 10.152.183.10:53: read udp 10.1.1.12:55949->10.152.183.10:53: i/o timeout

To solve this, edit the kube-dns configuration to point to your DNS servers, updating both sets of upstream DNS addresses accordingly:

microk8s.kubectl -n kube-system edit configmap/kube-dns
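If you prefer a non-interactive change, the upstream nameservers can also be set with kubectl patch. The addresses below (8.8.8.8, 8.8.4.4) are only examples; replace them with your own DNS servers:

```shell
# Point kube-dns at explicit upstream nameservers (example addresses)
microk8s.kubectl -n kube-system patch configmap/kube-dns --type merge \
  -p '{"data":{"upstreamNameservers":"[\"8.8.8.8\", \"8.8.4.4\"]"}}'
```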

kube-dns will reload the configuration automatically, so re-run the juju deploy command and verify that the error is resolved.

If the error persists, get the name of the kube-dns pod:

$ kubectl -n kube-system get pods
NAME                                              READY   STATUS    RESTARTS   AGE
heapster-v1.5.2-6b5d7b57f9-c9vln                  4/4     Running   0          67m
hostpath-provisioner-6d744c4f7c-cr9br             1/1     Running   0          71m
kube-dns-6bfbdd666c-xrnnb                         3/3     Running   3          71m
kubernetes-dashboard-6fd7f9c494-zx6s9             1/1     Running   0          71m
monitoring-influxdb-grafana-v4-78777c64c8-lsh8l   2/2     Running   2          71m

Check the logs of the dnsmasq container in the pod:

$ kubectl -n kube-system logs kube-dns-6bfbdd666c-xrnnb dnsmasq

Once dnsmasq is able to resolve hostnames, you can continue with the installation.