Vault and ingress, how does it work!?

Hey guys,

I need some help. I am new to Charmed Kubernetes and to Kubernetes in general, so I am struggling a bit.

Before Kubernetes I had a Docker daemon and Docker applications running on localhost:port. I used nginx as a reverse proxy with a Let’s Encrypt certificate and certbot auto-renewal, and all of that worked pretty well.

So now I am a bit stranded in this Kubernetes world. First, here’s my current setup:
I use Charmed Kubernetes with Calico, added keepalived with a virtual IP, and added Vault as described in the docs for setting up a “production-grade cluster”. I added the DNS name and the VIP to the SANs, and my cluster is currently reachable via “cluster.k8s.local”, which points to the virtual IP from keepalived. I can access the cluster via (https://)cluster.k8s.local/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default and I can see that the certificate comes from Vault. Up to this point everything is clear and set up.

Now to my problem:
I do not want to access the dashboard via the long link above; I want to use this link instead: https://cluster.k8s.local/dashboard
I understand that for this I need to set up an Ingress which forwards the request (like my old reverse proxy did) to the correct service. I have tried hundreds of different YAML configurations, but none of them work. Here are two of them: Ubuntu Pastebin

How can I get this link (https://)cluster.k8s.local/dashboard to work!?

I also understand that it is possible to manually create the tls.crt and tls.key and import them as a secret. But this is one of the reasons I installed Vault in the first place: to have something that handles auto-renewal for me, like certbot did back then…

Additionally, once the redirection from /dashboard works, how can I get new certs for subdomains like (https://)dashboard.cluster.k8s.local from Vault? And how do I have to configure my Ingress for that?

I hope someone can help me out.

Best regards,
Panda

It’s important to note that Vault is a Certificate Authority which issues multiple certificates for the entire Kubernetes cluster, so that all communication to and between components is secured. This includes the client certificate used to communicate with the cluster and presumably the certificate presented by the Dashboard.

Vault does not currently support automatic renewal of the certificates that it generates, but you can easily renew the certificates for the entire cluster with:

juju run-action vault/0 --wait reissue-certificates

As a CA, it is at the same “level” as Let’s Encrypt itself. That said, Vault supports two modes of operation: acting as a root CA, or acting as an intermediate CA.

Acting as a root CA

When acting as a root CA, there is no further chain of trust, so you must explicitly add Vault’s public cert to your chain of trust. You can get the public cert with:

juju run-action vault/0 --wait get-root-ca
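
If you are on Ubuntu, one way to add that cert to the system trust store looks roughly like this (assuming you have saved the PEM output of the action to a file called vault-root-ca.crt, which is just a name I made up):

# Copy the Vault root CA into the local trust store and rebuild it
sudo cp vault-root-ca.crt /usr/local/share/ca-certificates/vault-root-ca.crt
sudo update-ca-certificates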

This mode is enabled by either setting the auto-generate-root-ca-cert config option (which I believe is used in the Charmed Kubernetes docs), or by running the generate-root-ca action, which allows you to tweak various parameters for the root CA cert:

juju run-action vault/0 --wait generate-root-ca [params...]

Acting as an intermediate CA

When acting as an intermediate CA, Vault has a chain of trust up to a higher CA, which could be a public CA such as VeriSign, GoDaddy, or even Let’s Encrypt, or a corporate CA that you presumably already have in your trust chain. This mode of operation requires you to generate a Certificate Signing Request (CSR), submit it to the higher CA for signing, and then hand the signed public cert back to Vault. This can be done with:

juju run-action vault/0 --wait get-csr [params...]

Followed by:

juju run-action vault/0 --wait upload-signed-csr pem=... root-ca=... [other params...]

Note: Information on these and other actions supported by Vault can be found with:

juju list-actions vault [--schema]

Unfortunately, there isn’t a way to enable SSL passthrough properly with configuration changes in CDK right now. I’ve filed this bug to fix that. It isn’t a hard fix, but it will take some time before it is released. In the meantime, you can disable the built-in ingress and deploy one yourself with the proper options, or you could build your own cdk-addons snap with that hacked in and attach it as a resource. If that sounds like gobbledygook, don’t worry about it; just disable the built-in ingress with

juju config kubernetes-worker enable-ingress=false

and then deploy the ingress yourself with something like the following pile of yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx-kubernetes-worker
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role-kubernetes-worker
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount-kubernetes-worker
    namespace: ingress-nginx-kubernetes-worker
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole-kubernetes-worker
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount-kubernetes-worker
    namespace: ingress-nginx-kubernetes-worker

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-kubernetes-worker
      app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-kubernetes-worker
        app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount-kubernetes-worker
      terminationGracePeriodSeconds: 60
      # hostPort doesn't work with CNI, so we have to use hostNetwork instead
      # see https://github.com/kubernetes/kubernetes/issues/23920
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller-kubernetes-worker
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.22.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration-kubernetes-worker
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services-kubernetes-worker
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services-kubernetes-worker
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --enable-ssl-chain-completion=False
            - --enable-ssl-passthrough
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---
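
If you save all of that to a file and apply it, you can check that the controller actually came up before moving on. The file name here is just an example:

kubectl apply -f ingress-nginx-passthrough.yaml
kubectl -n ingress-nginx-kubernetes-worker get daemonset,pods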

The most important part of that yaml is the command-line option --enable-ssl-passthrough, which allows you to do what you want. I hate that it isn’t just a config option, and hopefully that bug will get fixed soon. After that, you just need a few annotations on your Ingress. I used this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: dashboard.cluster.k8s.local
      http:
        paths:
        - path: /
          backend:
            serviceName: kubernetes-dashboard
            servicePort: 443

Note that you can’t use a path with SSL passthrough, so I had to move the dashboard onto its own subdomain instead of /dashboard. I assume this isn’t a big issue.
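
Applying and inspecting that Ingress is the usual kubectl workflow; the file name here is only an example:

kubectl apply -f dashboard-ingress.yaml
kubectl -n kube-system describe ingress kubernetes-dashboard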

As for TLS termination at the ingress, I would suggest setting up cert-manager, which is very similar to the certbot setup you had before. You can generate certs from Let’s Encrypt or from Vault to secure your ingress. Note that this isn’t going to be the cert that the dashboard itself presents, but instead the one used for terminating the SSL connection at your ingress. This is probably fine for you as well, but I’m not sure. If it is, you don’t need passthrough at all; you just need to tell nginx that the backend is secure, which the backend-protocol annotation handles.

If you do it that way, you can have SSL terminate at your ingress and then a separate SSL session going to the backend. This means you can keep the ingress controller CDK sets up for you and just add the secure-backend annotation. When I tried it, though, the dashboard itself didn’t like being served from anywhere other than the root path, and the links to its JavaScript files and other assets broke. So I would suggest a domain-based approach here again.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    certmanager.k8s.io/cluster-issuer: my-vault-issuer
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: tls-dash
  namespace: kube-system
spec:
  rules:
  - host: dashboard.cluster.k8s.local
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
  tls:
  - hosts:
    - dashboard.cluster.k8s.local
    secretName: dashboard-tls
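
For completeness, the my-vault-issuer referenced in the annotations above would be a cert-manager ClusterIssuer pointed at Vault’s PKI backend. This is only a rough sketch using the API group cert-manager had at the time; the server address, PKI sign path, and token secret name are placeholders you would have to adapt to your Vault setup:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: my-vault-issuer
spec:
  vault:
    # Placeholder address of your Vault and the sign endpoint of a PKI role
    server: https://vault.example.com:8200
    path: pki/sign/example-role
    auth:
      # Token stored in a Secret; for a ClusterIssuer this Secret is looked up
      # in the namespace cert-manager itself runs in
      tokenSecretRef:
        name: vault-token
        key: token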

Note that you need a way to resolve dashboard.cluster.k8s.local to your machines. I personally am using metallb to broadcast a virtual IP for my ingress service, but as long as the traffic ends up on your ingress IP you can do it any way you want.
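
For reference, the metallb part of that is just an address pool in its config ConfigMap; something along these lines, with the address range being whatever spare IPs you have on that network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Placeholder range; pick unused addresses on your network
      - 10.0.0.240-10.0.0.250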

Sorry to slam you with information, but these are the two ways I would do it and I think I would lean toward the second way.

As an aside, I’m not sure how access to your cluster is set up, but if it can be reached from outside I would suggest at a minimum putting a whitelist on the ingress. There is no ingress configured for the dashboard by default for a reason: you don’t want arbitrary people to be able to reach it, and there have been security issues with it in the past. An annotation such as the following on your Ingress would help secure it more than the default:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24"

You will need to adjust the range to your internal IP addresses, and then I would test to make sure that source IPs aren’t being rewritten to internal addresses for all requests. It doesn’t do much good to limit access to an IP range if every external request appears to come from that range.
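
On that note, if you expose the ingress through a LoadBalancer-type service (for example via metallb as above), setting externalTrafficPolicy: Local on that service is the usual way to keep the original client IP instead of having it SNATed to a node IP, so the whitelist actually means something. A rough sketch, with the selector matching the manifests above but the name and ports otherwise just examples:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
spec:
  type: LoadBalancer
  # Preserve the client source IP so whitelist-source-range can see it
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443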

Thank you all! I will read through your information carefully.
BR
Panda