Over the last few days I have carved out the initial bits for an elastic-operator k8s charm.
I’m currently at a crossroads where I don’t really know how to proceed, so I’m going to put it on blast. Here goes.
The Deets:
- The elastic-operator charm successfully deploys the elastic-operator to k8s!
- Following the operator deploy, I can deploy an Elasticsearch object via kubectl using the juju deployed elastic-operator.
- I’m currently stuck on understanding how to model the Elasticsearch, Kibana, and ApmServer CRD objects via Juju.
Install and configure microk8s
sudo snap install microk8s --classic
microk8s.enable storage
microk8s.enable dns
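Before bootstrapping, it can help to wait for microk8s to report itself ready (a minimal sketch; microk8s.status blocks until all services are up when given --wait-ready):

```shell
# Block until all microk8s services report ready before bootstrapping Juju
microk8s.status --wait-ready
```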
Bootstrap Juju, Add Model, Deploy K8S Operator
juju bootstrap microk8s
juju add-model bdx
juju deploy cs:~omnivector/elastic-operator-k8s
Check Juju Status
See Juju’s view of the successful operator deployment:
$ juju status
Model  Controller          Cloud/Region        Version    SLA          Timestamp
bdx    microk8s-localhost  microk8s/localhost  2.7-beta1  unsupported  16:51:00Z

App                   Version  Status  Scale  Charm                 Store       Rev  OS          Address         Notes
elastic-operator-k8s           active      1  elastic-operator-k8s  jujucharms    4  kubernetes  10.152.183.186

Unit                     Workload  Agent  Address     Ports     Message
elastic-operator-k8s/0*  active    idle   10.1.35.13  9876/TCP
Check kubectl
Verify the running pods via kubectl.
$ microk8s.kubectl get pods --namespace bdx
NAME READY STATUS RESTARTS AGE
elastic-operator-k8s-0 1/1 Running 0 15m
elastic-operator-k8s-operator-0 1/1 Running 0 15m
Follow elastic-operator logs
Follow the elastic-operator logs if you are interested.
# Follow the operator logs to see the successful deployment and reconciliations
microk8s.kubectl logs -f elastic-operator-k8s-0 --namespace bdx
Deploy Elasticsearch
At this point the elastic-operator should be running and the CRDs installed in your model namespace. Run the code below to provision an Elasticsearch object.
The bit below is what I am having trouble understanding how we will model with Juju.
cat <<EOF | microk8s.kubectl apply -n bdx -f -
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.4.0
  nodeSets:
  - name: juju-testing-default
    config:
      # most Elasticsearch configuration parameters are possible to set, e.g:
      node.attr.attr_name: attr_value
      node.master: true
      node.data: true
      node.ingest: true
      node.ml: true
      node.store.allow_mmap: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 4Gi
              cpu: 1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    count: 1
EOF
After running the command above, the operator will schedule, assemble, and reconcile the new Elasticsearch object into existence in the namespace of the model, in this case, “bdx”.
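To watch the operator drive the new object into existence, you can also query the custom resource itself rather than the pods (assuming the elasticsearch resource name registered by the operator’s CRDs; the operator fills in the status subresource as it reconciles, so health and phase update over time):

```shell
# The operator updates the Elasticsearch resource's status as it reconciles;
# its health and phase columns change as the cluster comes up.
microk8s.kubectl get elasticsearch elasticsearch-sample -n bdx
```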
Inspect the logs of the deployed Elasticsearch object
microk8s.kubectl logs -f elasticsearch-sample-es-juju-testing-default-0 -n bdx
List pods via kubectl
Check that the pods we expect to be up and running are up and running.
$ microk8s.kubectl get pods --namespace bdx
NAME READY STATUS RESTARTS AGE
elastic-operator-k8s-0 1/1 Running 0 15m
elastic-operator-k8s-operator-0 1/1 Running 0 15m
elasticsearch-sample-es-juju-testing-default-0 1/1 Running 0 55s
juju status
$ juju status
Model  Controller          Cloud/Region        Version    SLA          Timestamp
bdx    microk8s-localhost  microk8s/localhost  2.7-beta1  unsupported  17:00:48Z

App                   Version  Status  Scale  Charm                 Store       Rev  OS          Address         Notes
elastic-operator-k8s           active      1  elastic-operator-k8s  jujucharms    4  kubernetes  10.152.183.186

Unit                     Workload  Agent  Address     Ports     Message
elastic-operator-k8s/0*  active    idle   10.1.35.13  9876/TCP
From the microk8s.kubectl get pods output above, we see that the elasticsearch-sample-es-juju-testing-default-0 pod only exists in the context of the k8s namespace, not the Juju model. This is because the deployment was created via kubectl; Juju is not tracking these resources.
How will we approach modeling custom k8s objects like this via Juju?