Unfortunately, there isn't a way to enable SSL passthrough properly with configuration changes in CDK right now. I've filed a bug to fix that. It isn't a hard fix, but it will take some time before it is released. In the meantime, you can disable the bundled ingress and deploy it yourself with the proper options, or you could build your own cdk-addons snap that has that hacked in and then attach it as a resource. If that sounds like gobbledygook, don't worry about it and just disable the built-in ingress:

```shell
juju config kubernetes-worker enable-ingress=false
```

and then deploy the ingress yourself with something like the following pile of YAML:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx-kubernetes-worker
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role-kubernetes-worker
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount-kubernetes-worker
    namespace: ingress-nginx-kubernetes-worker
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole-kubernetes-worker
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount-kubernetes-worker
    namespace: ingress-nginx-kubernetes-worker
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  name: nginx-ingress-controller-kubernetes-worker
  namespace: ingress-nginx-kubernetes-worker
  labels:
    app.kubernetes.io/name: ingress-nginx-kubernetes-worker
    app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-kubernetes-worker
      app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-kubernetes-worker
        app.kubernetes.io/part-of: ingress-nginx-kubernetes-worker
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount-kubernetes-worker
      terminationGracePeriodSeconds: 60
      # hostPort doesn't work with CNI, so we have to use hostNetwork instead
      # see https://github.com/kubernetes/kubernetes/issues/23920
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller-kubernetes-worker
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.22.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration-kubernetes-worker
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services-kubernetes-worker
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services-kubernetes-worker
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --enable-ssl-chain-completion=false
            - --enable-ssl-passthrough
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
```
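To roll it out, save the manifests above to a file and apply them with kubectl, assuming your kubeconfig already points at the CDK cluster (the file name here is just a placeholder):

```shell
# Save the manifests above as ingress-nginx.yaml, then apply them:
kubectl apply -f ingress-nginx.yaml

# Since it's a DaemonSet, you should see one controller pod per worker:
kubectl get pods -n ingress-nginx-kubernetes-worker -o wide
```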
The important part of that YAML is the command-line option --enable-ssl-passthrough, which allows you to do what you want. I hate that it isn't just a charm option, and hopefully that bug will get fixed soon. After that, you just need a few annotations on your ingress. I used this:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: dashboard.cluster.k8s.local
      http:
        paths:
          - path: /
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
```
Note that you can't use a path with SSL passthrough, so I had to put the dashboard on its own hostname under your domain. I assume this isn't a big issue.
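You can sanity-check that passthrough is actually working by looking at which certificate gets presented for that hostname; with passthrough it should be the dashboard's own certificate rather than the nginx default/fake one (this assumes dashboard.cluster.k8s.local already resolves to your ingress):

```shell
# Inspect the certificate presented for the dashboard hostname. With
# passthrough enabled, the subject should be the dashboard's own cert,
# not "Kubernetes Ingress Controller Fake Certificate".
echo | openssl s_client -connect dashboard.cluster.k8s.local:443 \
  -servername dashboard.cluster.k8s.local 2>/dev/null \
  | openssl x509 -noout -subject
```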
As for TLS termination at the ingress, I would suggest setting up cert-manager, which is very similar to the certbot setup you had before. You can generate certificates from Let's Encrypt or from Vault to secure your ingress. Note that this won't be the certificate the dashboard itself uses, but rather the one terminating the SSL connection at your ingress. This is probably fine for you as well, but I'm not sure. If it is, you don't need passthrough at all; you just need to tell nginx that there is a secure backend, which the backend-protocol annotation will handle.
If you do it that way, SSL terminates at your ingress and a separate SSL session goes to the backend. This means you can keep the ingress controller CDK sets up for you and just add the backend-protocol annotation. When I did it this way, the dashboard itself didn't like not being at the root, and the links to javascript files and such didn't pan out, so I would suggest a host-based approach here again.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: http01
    certmanager.k8s.io/cluster-issuer: my-vault-issuer
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: tls-dash
  namespace: kube-system
spec:
  rules:
    - host: dashboard.cluster.k8s.local
      http:
        paths:
          - backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
            path: /
  tls:
    - hosts:
        - dashboard.cluster.k8s.local
      secretName: dashboard-tls
```
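For completeness: the my-vault-issuer referenced in the cluster-issuer annotation has to exist as a cert-manager ClusterIssuer. The exact spec depends on whether you go with Vault or Let's Encrypt; as a rough sketch only (the name and email are placeholders, an ACME issuer is shown for illustration, and this uses the old certmanager.k8s.io API group matching the annotations above):

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: my-vault-issuer  # placeholder; must match the cluster-issuer annotation
spec:
  # ACME (Let's Encrypt) shown for illustration; a Vault-backed issuer
  # would use a `vault:` stanza here instead of `acme:`.
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com  # placeholder
    privateKeySecretRef:
      name: my-vault-issuer-account-key
    http01: {}
```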
Note that you need a way to resolve dashboard.cluster.k8s.local to your machines. I personally use MetalLB to broadcast a virtual IP for my ingress service, but as long as the traffic ends up on your ingress IP, you can do it any way you want.
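If you want to try the MetalLB route, layer 2 mode is the simplest. A minimal config sketch, assuming MetalLB is already installed in the metallb-system namespace (the address range is a placeholder you'd swap for a free range on your network):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250  # placeholder: unused IPs on your LAN
```

With that in place, point a DNS record (or /etc/hosts entry) for dashboard.cluster.k8s.local at whichever IP MetalLB assigns to your ingress service.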
Sorry to slam you with information, but these are the two ways I would do it and I think I would lean toward the second way.