HA Controllers, FQDNs, Let's Encrypt/User-Provided Key/Cert

Looking at re-provisioning a few public-facing controllers, I'm wondering how to tell Juju to bootstrap and enable-ha for a public-facing HA controller configuration that uses either Let's Encrypt or a user-provided key/cert?

I know how to configure HA controllers, and separately how to configure a singleton controller to use Let's Encrypt; I'm just not sure how to tie it all together, and I'm unsure of what is actually available for controllers serving public-facing SSL.

I brought this to light in this docs topic where it is probably best suited.


Hmm, this is interesting. I'll have to tinker with it and see how to do the LE version in HA. I think it's just a matter of adding DNS entries for the other servers so the hosts resolve properly, but I'm not 100% sure.


To add a bit more on my use case, and how I'm going about it:

Goal:

  1. Deploy HA public-facing Juju controllers using my own key/cert/CA.

Steps to accomplish goal:

  1. Deploy HA Juju controllers that use my own key/cert by providing juju bootstrap with the correct config values.
    a) Identify the config values that matter for this use case; they can be found here.
    - ca-cert
    - autocert-dns-name
    - autocert-url
    b) Formulate the bootstrap command with the correct values.

     juju bootstrap aws/us-west-2 -n 3 --config ca-cert="$(cat my-ca.pem)" \
         --config autocert-dns-name="juju-controller-fqdn.example.com" \
         --config autocert-url="https://public-s3-bucket.com/cert"
    

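For the HA part specifically, my working assumption (untested) is that the controller gets bootstrapped on its own first and then grown with enable-ha afterwards, rather than HA being baked into the bootstrap itself. Roughly:

    # sketch: once the bootstrap above completes, grow the controller to 3 machines
    juju enable-ha -n 3
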
Considering the bootstrap command and my use case, I'm still a little unsure about a few things:

  1. Should the autocert-dns-name FQDN point to a load balancer that sits in front of the Juju controllers, or should the FQDN assigned to autocert-dns-name resolve to all three controllers' public IP addresses?

  2. What are the actual requirements of autocert-url? I'm guessing this is probably SSL/DNS knowledge external to Juju that I'm just unfamiliar with? Possibly if I knew more about autocert-url the rest would make more sense.
    Looking at the default value of autocert-url leads me to believe I would need to run a platform similar to Let's Encrypt if I want to use this for my own SSL infrastructure.

Possibly it's good enough to just provide a public key in a directory on S3? I need to do some more research around this.

Like adding the 2nd and 3rd IP addresses for the other controllers to the A record?
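Something like this is what I'm picturing; the name and addresses below are just placeholders, and I haven't verified how the autocert challenge behaves with multiple records:

    # hypothetical zone entries: one A record per controller on the same name
    #   juju-controller-fqdn.example.com.  A  203.0.113.10
    #   juju-controller-fqdn.example.com.  A  203.0.113.11
    #   juju-controller-fqdn.example.com.  A  203.0.113.12
    # quick check that the name resolves to all three public IPs:
    dig +short juju-controller-fqdn.example.com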

@rick_h I recall seeing something you posted about HA controllers with Let's Encrypt certs. Is there a separate post for that?
Is that even supported at all?

I'm considering bootstrapping new controllers and stretching the cluster between our two data centers, but I'm unsure how to round-robin the DNS to make sure that the Let's Encrypt challenges are matched.

Hmm, I'll have to test that. I would think that if you set up the DNS so that each controller's IP is valid and resolves to the same DNS name, you'd be OK. Honestly, though, at that point fronting the controllers with an HAProxy or nginx proxy and setting up Let's Encrypt on that is probably a more sustainable model, as it lets you do things like change the controllers over time without breaking client access.
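If I were sketching the proxy route, the simplest starting point is probably a plain TCP passthrough in front of the controllers' API port (17070 by default). Very rough, with placeholder addresses, and it doesn't cover the Let's Encrypt termination piece:

    # rough sketch: append a TCP passthrough for the Juju API to haproxy.cfg
    sudo tee -a /etc/haproxy/haproxy.cfg <<'EOF'
    frontend juju_api
        bind *:17070
        mode tcp
        default_backend juju_controllers
    backend juju_controllers
        mode tcp
        balance roundrobin
        server controller-0 203.0.113.10:17070 check
        server controller-1 203.0.113.11:17070 check
        server controller-2 203.0.113.12:17070 check
    EOF
    sudo systemctl restart haproxy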

I'm actually thinking of using an Azure Traffic Manager profile. That way I can do a tablecloth trick with the hosts behind the name.