Canonical Distribution of OSM
Welcome to Canonical Distribution of OSM!
The objective of this page is to give an overview of the first steps to get up and running with Canonical Distribution of OSM (CDO).
Requirements

- OS: Ubuntu 18.04 LTS

Minimum:

- 2 CPUs
- 4 GB RAM
- 40 GB disk
- A single interface with Internet access

Recommended:

- 4 CPUs
- 8 GB RAM
- 80 GB disk
- A single interface with Internet access
Installing OSM has never been easier. With a few commands, you will be able to deploy OSM in an empty environment using microk8s.
First of all, let’s download the repository of Canonical Distribution of OSM.
```
git clone https://git.launchpad.net/canonical-osm
cd ./canonical-osm/
```
To install Canonical Distribution of OSM locally, execute the following commands; OSM will be installed in a local MicroK8s.
```
sudo snap install microk8s --classic
sudo snap install juju --classic
microk8s.status --wait-ready
microk8s.enable dashboard storage dns
./setup_lxd.sh
echo "./update_lxc_juju_images.sh" | at now
juju bootstrap localhost osm-lxd
juju bootstrap microk8s osm-on-k8s
juju add-model osm
juju create-storage-pool operator-storage kubernetes storage-class=microk8s-hostpath
juju create-storage-pool osm-pv kubernetes storage-class=microk8s-hostpath
juju create-storage-pool packages-pv kubernetes storage-class=microk8s-hostpath
./vca_config.sh
juju deploy cs:~charmed-osm/osm --overlay overlay.yaml
```
Checking the status
After the `juju deploy` command is executed, it will take several minutes for OSM to be up and running. To see the status of the deployment, execute `watch -c juju status --color`. You can also execute `watch kubectl -n osm get pods` to see the status of the Kubernetes pods.
You will see the following output from the `juju status` command when the deployment is finished.
```
Model  Controller  Cloud/Region        Version  SLA          Timestamp
osm    osm-on-k8s  microk8s/localhost  2.6.5    unsupported  11:10:16Z

App             Version  Status  Scale  Charm           Store       Rev  OS          Address         Notes
grafana-k8s              active      1  grafana-k8s     jujucharms   15  kubernetes  10.152.183.164
kafka-k8s                active      1  kafka-k8s       jujucharms    1  kubernetes  10.152.183.216
lcm-k8s                  active      1  lcm-k8s         jujucharms   20  kubernetes  10.152.183.176
mariadb-k8s              active      1  mariadb-k8s     jujucharms   13  kubernetes  10.152.183.73
mon-k8s                  active      1  mon-k8s         jujucharms   14  kubernetes  10.152.183.59
mongodb-k8s              active      1  mongodb-k8s     jujucharms   14  kubernetes  10.152.183.44
nbi-k8s                  active      1  nbi-k8s         jujucharms   21  kubernetes  10.152.183.53
pol-k8s                  active      1  pol-k8s         jujucharms   14  kubernetes  10.152.183.106
prometheus-k8s           active      1  prometheus-k8s  jujucharms   12  kubernetes  10.152.183.126
ro-k8s                   active      1  ro-k8s          jujucharms   17  kubernetes  10.152.183.146
ui-k8s                   active      1  ui-k8s          jujucharms   23  kubernetes  10.152.183.229
zookeeper-k8s            active      1  zookeeper-k8s   jujucharms   16  kubernetes  10.152.183.252

Unit               Workload  Agent  Address    Ports                                Message
grafana-k8s/0*     active    idle   10.1.1.45  3000/TCP                             configured
kafka-k8s/0*       active    idle   10.1.1.39  9092/TCP                             configured
lcm-k8s/0*         active    idle   10.1.1.42  80/TCP                               configured
mariadb-k8s/0*     active    idle   10.1.1.33  3306/TCP,4444/TCP,4567/TCP,4568/TCP  configured
mon-k8s/0*         active    idle   10.1.1.41  8000/TCP                             configured
mongodb-k8s/0*     active    idle   10.1.1.34  27017/TCP                            configured
nbi-k8s/0*         active    idle   10.1.1.43  9999/TCP                             configured
pol-k8s/0*         active    idle   10.1.1.40  80/TCP                               configured
prometheus-k8s/0*  active    idle   10.1.1.44  9090/TCP                             configured
ro-k8s/0*          active    idle   10.1.1.37  9090/TCP                             configured
ui-k8s/0*          active    idle   10.1.1.35  80/TCP                               configured
zookeeper-k8s/0*   active    idle   10.1.1.36  2181/TCP,2888/TCP,3888/TCP           configured
```
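If you want to script the readiness check instead of watching the table, a minimal sketch is to count units whose Workload column is not `active`. The snippet below parses an inline sample in the layout shown above; in practice you would pipe real `juju status` output, and column positions can vary between Juju versions.

```shell
# Count units whose Workload column (field 2 of the Unit section)
# is not "active". Sample data mimics the table above.
status='grafana-k8s/0* active idle 10.1.1.45
lcm-k8s/0* active idle 10.1.1.42
ro-k8s/0* waiting idle 10.1.1.37'

not_ready=$(printf '%s\n' "$status" | awk '$2 != "active" {n++} END {print n+0}')
echo "units not active: $not_ready"   # 0 means the deployment is ready
```

With real output you would replace the sample with, for example, `juju status | sed -n '/^Unit/,$p' | tail -n +2`.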
If you want to interact with OSM using the OSM client, just execute the following after the deployment is finished:
```
./install_osm_client.sh
source ~/.osmclient.env
```
If you want to see the commands available for the OSM client, execute `osm --help`.
To uninstall Canonical Distribution of OSM you should execute the following command.
Start playing with OSM
After the deployment is finished, check the IP of the ui-k8s application in the `juju status` output, and go to that address in your web browser (http://10.152.183.229 in the example above).
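The lookup can also be scripted. The sketch below extracts the Address column for ui-k8s from a sample App line copied from the status table above; since the Version column is empty there, the address is field 8 (adjust if your layout differs).

```shell
# Extract the Address column for the ui-k8s application.
# Sample line taken from the `juju status` table above;
# pipe real `juju status` output in practice.
sample='ui-k8s active 1 ui-k8s jujucharms 23 kubernetes 10.152.183.229'

ip=$(printf '%s\n' "$sample" | awk '$1 == "ui-k8s" {print $8}')
echo "http://$ip"   # → http://10.152.183.229
```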
Access services from outside
If you have installed OSM on an external machine, or in a VM, you can access it by enabling the ingress module on MicroK8s and exposing the application. Enter the following commands, where `<ip>` is the external IP of the machine on which OSM has been installed.
```
microk8s.enable ingress
juju config ui-k8s juju-external-hostname=osm.<ip>.xip.io
juju expose ui-k8s
juju config prometheus-k8s juju-external-hostname=prometheus.<ip>.xip.io
juju expose prometheus-k8s
juju config grafana-k8s juju-external-hostname=grafana.<ip>.xip.io
juju expose grafana-k8s
```
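The xip.io service resolves hostnames of the form `name.<ip>.xip.io` to `<ip>`, so no DNS setup is needed. As a quick sanity check, the hostnames used above can be built from the machine's external IP (the IP below is a hypothetical example):

```shell
# Build the xip.io hostnames from the machine's external IP.
IP="203.0.113.10"   # hypothetical; replace with your OSM machine's external IP
for svc in osm prometheus grafana; do
  echo "${svc}.${IP}.xip.io"
done
```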
The ingress module uses NGINX. By default, its `proxy-body-size` limit is `1m`, which is a problem if a VNF package larger than 1 MB is uploaded. To solve this, we only have to add an annotation to the ingress.
```
kubectl -n osm edit ingress ui-k8s
# Add the following line in the annotations:
#   nginx.ingress.kubernetes.io/proxy-body-size: "0"
```
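After the edit, the annotations section of the ingress resource should contain the new line, roughly like this (surrounding fields elided):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
```

Setting the value to `"0"` disables the body-size check entirely, so packages of any size can be uploaded.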
You can access now these services:
- OSM: http://osm.<ip>.xip.io
- Prometheus: http://prometheus.<ip>.xip.io
- Grafana: http://grafana.<ip>.xip.io
Microk8s not starting on reboot
After a reboot, MicroK8s does not start properly. The bug is already reported: https://github.com/ubuntu/microk8s/issues/531

Workaround: just execute `microk8s.start` after the reboot. Pods will be available after a few minutes.
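Until the bug is fixed, one way to automate the workaround (an untested sketch; the path assumes the snap install above) is a cron `@reboot` entry:

```
# Add to root's crontab (sudo crontab -e) so microk8s starts after every boot.
@reboot /snap/bin/microk8s.start
```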
If you have any trouble with the installation, please contact us; we will be glad to answer your questions: