First steps with the Canonical Distribution of Open Source MANO


Canonical Distribution of OSM

The objective of this page is to give an overview of the first steps to get up and running with Canonical Distribution of OSM (CDO).


  • OS: Ubuntu 16.04/18.04 LTS
  • Minimum requirements:
    • 2 CPUs
    • 4 GB RAM
    • 20 GB disk
    • Single interface with Internet access
  • Recommended requirements:
    • 4 CPUs
    • 8 GB RAM
    • 40 GB disk
    • Single interface with Internet access

User Guide

Installing OSM has never been easier. With a few commands, you will be able to deploy OSM in an empty environment using microk8s.

First of all, let’s clone the repository of the Canonical Distribution of OSM.

git clone 
cd ./canonical-osm/


To install the Canonical Distribution of OSM locally, execute the following commands; OSM will be deployed on a local microk8s.

sudo snap install microk8s --classic
sudo snap install juju --classic
microk8s.status --wait-ready
microk8s.enable dashboard storage dns
echo "./" | at now
juju bootstrap localhost osm-lxd
juju bootstrap microk8s osm-on-k8s
juju add-model osm
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
juju create-storage-pool osm-pv kubernetes storage-class=microk8s-hostpath
juju create-storage-pool packages-pv kubernetes storage-class=nfs-hostpath
juju deploy cs:~charmed-osm/bundle/canonical-osm --overlay overlay.yaml

Checking the status

When the juju deploy command is executed, it will take several minutes for OSM to be up and running. To check the status of the deployment, run watch -c juju status --color. You can also run watch kubectl -n osm get pods to see the status of the Kubernetes pods.
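If you prefer a single blocking command over polling with watch, the sketch below waits until every pod in the osm namespace reports Ready. It assumes kubectl is configured to talk to the microk8s cluster (for example via a snap alias of microk8s.kubectl):

```shell
# Block until all pods in the OSM namespace are Ready, or give up
# after 30 minutes. NAMESPACE matches the "juju add-model osm" step above.
NAMESPACE=osm
kubectl -n "$NAMESPACE" wait --for=condition=Ready pod --all --timeout=1800s
```

When the command returns successfully, the deployment is ready for use.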

You will see this output from the juju status command when the deployment is finished.

$ juju status

Model  Controller  Cloud/Region  Version  SLA          Timestamp
osm    osm-on-k8s  microk8s      2.6.2    unsupported  17:40:14Z

App             Version  Status  Scale  Charm           Store       Rev  OS          Address         Notes
grafana-k8s              active      1  grafana-k8s     jujucharms   12  kubernetes    
kafka-k8s                active      1  kafka-k8s       jujucharms    7  kubernetes   
keystone-k8s             active      1  keystone-k8s    jujucharms    7  kubernetes   
lcm-k8s                  active      1  lcm-k8s         jujucharms    9  kubernetes   
manodb                   active      1  mariadb-k8s     jujucharms    8  kubernetes    
mon-k8s                  active      1  mon-k8s         jujucharms    8  kubernetes   
mongo-k8s                active      1  mongodb-k8s     jujucharms    7  kubernetes    
nbi-k8s                  active      1  nbi-k8s         jujucharms    9  kubernetes   
pol-k8s                  active      1  pol-k8s         jujucharms    8  kubernetes   
prometheus-k8s           active      1  prometheus-k8s  jujucharms    8  kubernetes    
ro-k8s                   active      1  ro-k8s          jujucharms    8  kubernetes 
ui-k8s                   active      1  ui-k8s          jujucharms   16  kubernetes    
vimdb                    active      1  mariadb-k8s     jujucharms    8  kubernetes    
zookeeper-k8s            active      1  zookeeper-k8s   jujucharms   10  kubernetes   

Unit               Workload  Agent  Address    Ports                       Message
grafana-k8s/0*     active    idle  3000/TCP                    configured
kafka-k8s/0*       active    idle  9092/TCP                    configured
keystone-k8s/0*    active    idle  5000/TCP                    configured
lcm-k8s/0*         active    idle  80/TCP                      configured
manodb/0*          active    idle  3306/TCP                    configured
mon-k8s/0*         active    idle  8000/TCP                    configured
mongo-k8s/0*       active    idle  27017/TCP                   configured
nbi-k8s/0*         active    idle  9999/TCP                    configured
pol-k8s/0*         active    idle  80/TCP                      configured
prometheus-k8s/0*  active    idle  9090/TCP                    configured
ro-k8s/0*          active    idle  9090/TCP                    configured
ui-k8s/0*          active    idle  80/TCP                      configured
vimdb/0*           active    idle  3306/TCP                    configured
zookeeper-k8s/0*   active    idle  2181/TCP,2888/TCP,3888/TCP  configured

OSM Client

If you want to interact with OSM using the OSM client, just execute the following after the deployment is finished:

$ ./

If you want to see the commands available for the OSM client, execute osm --help.
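As a starting point, a few common client commands are sketched below. The VIM details are placeholders for your own infrastructure, not values from this guide:

```shell
# List what the fresh deployment knows about (all lists start empty):
osm vim-list      # VIM accounts registered with OSM
osm nsd-list      # onboarded network service descriptors
osm ns-list       # instantiated network services

# Register an OpenStack VIM account; every value here is a placeholder.
VIM_NAME=my-openstack-site
osm vim-create --name "$VIM_NAME" --account_type openstack \
  --auth_url http://10.0.0.10:5000/v3 \
  --user admin --password admin --tenant admin
```

After the VIM is registered, it should appear in the output of osm vim-list.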


To uninstall the Canonical Distribution of OSM, execute the following command.


Start playing with OSM

After the deployment is finished, check the IP of the ui-k8s application in the juju status output and open it in your web browser.

  • User: admin
  • Password: admin
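Besides the web UI, you can also talk to the North Bound Interface (the nbi-k8s application, port 9999 in the status output above) directly. A minimal sketch, assuming the NBI address below is replaced with the one from juju status and the default admin/admin credentials are still in place:

```shell
# Placeholder NBI endpoint; substitute the nbi-k8s address from "juju status".
NBI=https://127.0.0.1:9999

# Request an authentication token from the NBI (-k skips TLS verification,
# which is acceptable for a local test deployment only).
curl -k -X POST "$NBI/osm/admin/v1/tokens" \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -d '{"username": "admin", "password": "admin"}'
```

The returned token can then be passed in the Authorization header of later NBI requests.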


If you have any trouble with the installation, please contact us; we will be glad to answer your questions:


In environments with restricted network access, you may encounter an error similar to this:

ERROR cannot deploy bundle: cannot add charm "cs:~charmed-osm/grafana-k8s-13": cannot retrieve charm "cs:~charmed-osm/grafana-k8s-13": cannot get archive: Get dial tcp: lookup on read udp> i/o timeout

To solve this, edit the kube-dns configuration so it points at your DNS servers, updating both sets of upstream DNS addresses accordingly:

microk8s.kubectl -n kube-system edit configmap/kube-dns

kube-dns reloads the configuration automatically, so re-run the juju deploy command and verify that the error is resolved.
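If you prefer a non-interactive change, the same edit can be sketched as a patch. The resolver addresses below are placeholders for your own DNS servers:

```shell
DNS1=10.0.0.2   # placeholder: your first upstream resolver
DNS2=10.0.0.3   # placeholder: your second upstream resolver

# Set the upstreamNameservers key of the kube-dns ConfigMap (the value is a
# JSON array encoded as a string, which is the format kube-dns expects).
microk8s.kubectl -n kube-system patch configmap/kube-dns --type merge \
  -p '{"data": {"upstreamNameservers": "[\"'"$DNS1"'\", \"'"$DNS2"'\"]"}}'
```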

Get the name of the kube-dns pod:

$ kubectl -n kube-system get pods
NAME                                              READY   STATUS    RESTARTS   AGE
heapster-v1.5.2-6b5d7b57f9-c9vln                  4/4     Running   0          67m
hostpath-provisioner-6d744c4f7c-cr9br             1/1     Running   0          71m
kube-dns-6bfbdd666c-xrnnb                         3/3     Running   3          71m
kubernetes-dashboard-6fd7f9c494-zx6s9             1/1     Running   0          71m
monitoring-influxdb-grafana-v4-78777c64c8-lsh8l   2/2     Running   2          71m

Check the logs of the dnsmasq container in the pod:

$ kubectl -n kube-system logs kube-dns-6bfbdd666c-xrnnb dnsmasq

Once dnsmasq is able to resolve hostnames, you can continue with the installation.
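One way to confirm this, assuming nslookup is available on the host, is to query the kube-dns service address directly for a hostname the installation needs (api.jujucharms.com is used here as an example charm-store hostname):

```shell
# Look up the cluster IP of the kube-dns service, then resolve a hostname
# through it. A successful answer means dnsmasq is resolving external names.
HOST=api.jujucharms.com
DNS_IP=$(kubectl -n kube-system get svc kube-dns \
  -o jsonpath='{.spec.clusterIP}')
nslookup "$HOST" "$DNS_IP"
```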