I’m deploying the kubernetes-core bundle behind a proxy. I followed the official documentation on proxy configuration; however, the latest charm revisions (724 for kubernetes-master and 571 for kubernetes-worker) no longer support setting http_proxy etc. via juju config kubernetes-worker…
Sorry about this @cardax. I’m encountering the same problem: http_proxy and https_proxy are reported as unknown.
Looking at the charm’s config, it appears that the snap_proxy setting is deprecated. Perhaps the other two have already been removed in favour of the model config settings?
$ juju config kubernetes-worker
snap_proxy:
  default: ""
  description: |
    DEPRECATED. Use snap-http-proxy and snap-https-proxy model configuration
    settings. HTTP/HTTPS web proxy for Snappy to use when accessing the snap store.
Perhaps try setting the juju-http-proxy and juju-https-proxy model configuration settings? Juju will detect that they have changed and will retry.
If that doesn’t work, consider also setting the http-proxy and https-proxy settings. Although it’s possible that this may cause etcd to get confused, you will be further along than before.
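To spell that out, here is a sketch of the commands I have in mind; the proxy address is a placeholder you would replace with your organization’s proxy:

```shell
# Placeholder proxy address; substitute your own.
juju model-config juju-http-proxy=http://squid.internal:3128
juju model-config juju-https-proxy=http://squid.internal:3128

# If that doesn't help, also try the older model keys:
juju model-config http-proxy=http://squid.internal:3128
juju model-config https-proxy=http://squid.internal:3128
```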
Sorry about this, and thanks for reporting it to us with links. We need to update those docs and mark those settings as deprecated. As of the latest release, http-proxy and https-proxy have moved to the runtime charms, which are docker and containerd. The docs should all have been updated to reflect that change, and it was called out in the release notes. Our current docs live at Kubernetes documentation | Ubuntu
Hello @timClicks.
I already tried all the possible model-config settings (snap, http, and juju proxy variants); none of them change the final behaviour. The master is still waiting for seven pods to start.
I’m now trying to debug directly inside the machines, worker and master, setting the proxy inside the systemd units for kubelet and containerd. This also has no effect.
I saw that the cdk-addons service is responsible for starting the pods via a Python script. I’m going to try debugging that next.
I’d be glad of any advice.
Fabrizio
Hello everyone.
The problem with the seven pods hanging on start seems to be related to the containerd service: it doesn’t pick up the model-config http-proxy and https-proxy settings.
Behind an organization firewall and proxy, the registry URL image.canonical.com:5000 is also unreachable. Reading the charm’s documentation, there seems to be no way to change the registry URL via juju.
containerd will respect the model-config proxy settings, but you can also override them by setting the http_proxy and https_proxy config options on the containerd charm.
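Concretely, something like this (option names as above; the proxy address is a placeholder):

```shell
# Override the model-level proxy just for the containerd charm:
juju config containerd http_proxy=http://squid.internal:3128
juju config containerd https_proxy=http://squid.internal:3128
```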
You can change the registry for the cluster by setting the image-registry config on kubernetes-master.
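For example (the registry hostname here is hypothetical; use a mirror your cluster can reach):

```shell
# Point the cluster at an internal, reachable registry mirror:
juju config kubernetes-master image-registry=registry.internal:5000
```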
Hello @tvansteenburgh, you’re right, there was a value on kubernetes-master for image-registry.
Anyway, for anyone else behind a corporate proxy trying to get kubernetes-core working:
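Here is a sketch of the steps that worked for me, pulled together from this thread; the proxy and registry addresses are placeholders for your own environment:

```shell
# 1. Model-wide proxy settings (placeholder address):
juju model-config juju-http-proxy=http://proxy.corp.example:3128
juju model-config juju-https-proxy=http://proxy.corp.example:3128
juju model-config juju-no-proxy=localhost,127.0.0.1,10.0.0.0/8

# 2. Explicit proxy on the containerd charm (this was needed in my case):
juju config containerd http_proxy=http://proxy.corp.example:3128
juju config containerd https_proxy=http://proxy.corp.example:3128

# 3. A reachable image registry on kubernetes-master (hypothetical mirror):
juju config kubernetes-master image-registry=registry.internal:5000
```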
Thanks for following up here for the benefit of others. Glad you got things working. Although, I’m surprised that you had to explicitly set proxy config on containerd. Containerd should inherit the proxy settings from the model. Are you sure that was necessary? If that’s not the case we may have a bug there.
Hi @tvansteenburgh.
100% sure. I redeployed a fresh kubernetes-core without changing the containerd proxy settings, and the end result was again “waiting for 7 kube-system pods to start”.