Kubernetes-core

Hello everyone.

I’m deploying the kubernetes-core bundle behind a proxy. I followed the instructions in the official docs about proxies, but the latest charms (724 for kubernetes-master and 571 for kubernetes-worker) no longer support setting http_proxy etc. on kubernetes-worker via juju config…

From the docs:

After deploying the bundle, you need to configure the kubernetes-worker charm
to use your proxy:

$ juju config kubernetes-worker http_proxy=http://squid.internal:3128 
$ juju config kubernetes-worker http_proxy=http://X.X.X.X:XXXX
ERROR unknown option "http_proxy"

The cluster is still stuck in the waiting state, waiting to start 7 pods.

Sorry about this, @cardax. I’ve encountered the same problem: http_proxy and https_proxy are being reported as unknown options.

Looking at the charm’s config, it appears that the snap_proxy setting is deprecated. Perhaps the other two have already been removed in favour of the model config settings?

$ juju config kubernetes-worker
  snap_proxy:
   default: ""
   description: |
     DEPRECATED. Use snap-http-proxy and snap-https-proxy model configuration settings. HTTP/HTTPS web proxy for Snappy to use when accessing the snap store.

Perhaps try setting the juju-http-proxy and juju-https-proxy model configuration settings? Juju will detect that they have changed and will retry.

juju model-config juju-http-proxy=<proxy-url> juju-https-proxy=<proxy-url>

If that doesn’t work, consider also setting the http-proxy and https-proxy settings. It’s possible that this may confuse etcd, but you will be further along than before.

juju model-config http-proxy=<proxy-url> https-proxy=<proxy-url>

If etcd becomes stuck, you may need to add the relevant subnet(s) to the no-proxy settings.

juju model-config juju-no-proxy="127.0.0.1,localhost,::1,<subnet>[,<subnet>]" 
juju model-config no-proxy="127.0.0.1,localhost,::1,<subnet>[,<subnet>]"
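
The no-proxy value above can be assembled from your cluster subnets. A small sketch, assuming hypothetical example subnets (use your own pod/service CIDRs):

```shell
# Build a no-proxy list from a set of cluster subnets.
# The subnets below are placeholders, not from this deployment.
SUBNETS="10.1.0.0/16 10.152.183.0/24"
NO_PROXY="127.0.0.1,localhost,::1"
for s in $SUBNETS; do
  NO_PROXY="$NO_PROXY,$s"
done
echo "$NO_PROXY"
# prints 127.0.0.1,localhost,::1,10.1.0.0/16,10.152.183.0/24
# Then apply it, e.g.:
# juju model-config juju-no-proxy="$NO_PROXY" no-proxy="$NO_PROXY"
```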

Sorry about this, and thanks for reporting it to us with links. We need to update those docs and mark those options as deprecated. As of the latest release, http-proxy and https-proxy have moved to the runtime charms, which are docker and containerd. The docs should all have been updated to reflect that change, and it was called out in the release notes. Our current docs live at Kubernetes documentation | Ubuntu

Hello @timClicks.
I already tried all the possible model-config proxy settings (snap, http, juju); none of them changes the final behaviour. The master is still waiting to start seven pods.
I’m now trying to debug directly inside the machines, worker and master, setting the proxy inside the systemd units for kubelet and containerd. This also has no effect.
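For reference, setting a proxy for containerd’s systemd unit is usually done with a drop-in like this sketch (the proxy URL and NO_PROXY list are placeholders; note that charm-managed configuration may be rewritten by the charm, so this is only useful for debugging):

```shell
# Sketch: systemd drop-in that sets proxy environment variables for containerd.
# Placeholder proxy URL; adjust NO_PROXY for your cluster subnets.
sudo mkdir -p /etc/systemd/system/containerd.service.d
sudo tee /etc/systemd/system/containerd.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://squid.proxy.local:3129"
Environment="HTTPS_PROXY=http://squid.proxy.local:3129"
Environment="NO_PROXY=127.0.0.1,localhost,::1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart containerd
```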
I saw that the cdk-addons service is responsible for starting the pods via a Python script; I’m going to try to debug that next.
I’d be glad of any advice.
Fabrizio

Hello everyone.
The problem with the 7 pods stuck waiting to start seems to be related to the containerd service.

  1. It doesn’t pick up the model-config http-proxy and https-proxy settings.
  2. Behind an organization firewall and proxy, the registry URL image.canonical.com:5000 is also unreachable. Reading the charm’s documentation, there is no way to change the registry URL via juju.

Fabrizio

  1. containerd will respect the model-config proxy settings, but you can also override them by setting the http_proxy and https_proxy config options on the containerd charm.
  2. You can change the registry for the cluster by setting the image-registry config on kubernetes-master.

Hello @tvansteenburgh, you’re right, there was a setting in kubernetes-master for the image-registry.
Anyway, in case anyone else behind a corporate proxy needs to make kubernetes-core work:

before deploy

juju model-config apt-http-proxy=http://squid.proxy.local:3129
juju model-config apt-https-proxy=http://squid.proxy.local:3129
juju model-config https-proxy=http://squid.proxy.local:3129
juju model-config http-proxy=http://squid.proxy.local:3129
juju model-config snap-http-proxy=http://squid.proxy.local:3129
juju model-config snap-https-proxy=http://squid.proxy.local:3129

after deploy

juju config containerd  http_proxy=http://squid.proxy.local:3129
juju config containerd  https_proxy=http://squid.proxy.local:3129

change the registry URL from port 5000 to the default port 80

juju config kubernetes-master image-registry=image-registry.canonical.com/cdk
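
As a sanity check after these steps (standard juju and kubectl commands, assuming kubectl is already configured against the cluster):

```shell
# Read back the registry setting and watch the kube-system pods start.
juju config kubernetes-master image-registry
juju status --format short
kubectl get pods -n kube-system
```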

bye
Fabrizio


Hiya @cardax,

Thanks for following up here for the benefit of others. Glad you got things working. I’m surprised, though, that you had to explicitly set proxy config on containerd; containerd should inherit the proxy settings from the model. Are you sure that was necessary? If it wasn’t, we may have a bug there.

/cc @joeborg

Thank you very much, @cardax, for your cooperation.

At the moment, we will need to set this in both places, as the stable branch for containerd doesn’t yet have the fix (https://github.com/charmed-kubernetes/layer-container-runtime-common/pull/4). It is in edge, though, so it should land with the next release.


Hi @tvansteenburgh.
100% sure. I redeployed a fresh kubernetes-core without changing the proxy settings of containerd, and the end result was again “waiting for 7 kube-system pods to start”.

After adding the proxy to containerd, the pods started.

This looks very similar to what I was seeing behind the firewall. I’ll try changing the settings as you have to see if I can move forward.