Can't create pod with container from a custom registry

I’m really new to Juju, containers, and Kubernetes, so please bear with me.

I am using my own MAAS cloud with plenty of resources. With Juju, I deploy Charmed Kubernetes. I have a private Docker registry that I am trying to pull my test containers from.

Some details:

ubuntu@golang-project:~$ juju --version
2.6.6-bionic-amd64
ubuntu@golang-project:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T17:09:13Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T17:06:39Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"linux/amd64"}

Create cluster:

ubuntu@golang-project:~$ juju add-model k8s
ubuntu@golang-project:~$ juju deploy charmed-kubernetes --model k8s
ubuntu@golang-project:~$ juju scp kubernetes-master/0:config ~/.kube/config

Everything looks good:

ubuntu@golang-project:~$ juju status
Model  Controller  Cloud/Region  Version  SLA          Timestamp
k8s    maas-cloud  maas-cloud    2.6.5    unsupported  18:13:31Z

App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
containerd                      active      5  containerd             jujucharms    2  ubuntu
easyrsa                3.0.1    active      1  easyrsa                jujucharms  254  ubuntu
etcd                   3.2.10   active      3  etcd                   jujucharms  434  ubuntu
flannel                0.10.0   active      5  flannel                jujucharms  425  ubuntu
kubeapi-load-balancer  1.14.0   active      1  kubeapi-load-balancer  jujucharms  649  ubuntu  exposed
kubernetes-master      1.15.2   active      2  kubernetes-master      jujucharms  700  ubuntu
kubernetes-worker      1.15.2   active      3  kubernetes-worker      jujucharms  552  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0        192.168.1.32                    Certificate Authority connected.
etcd/0*                   active    idle   1        192.168.1.33    2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   2        192.168.1.34    2379/tcp        Healthy with 3 known peers
etcd/2                    active    idle   3        192.168.1.48    2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4        192.168.1.35    443/tcp         Loadbalancer ready.
kubernetes-master/0       active    idle   5        192.168.1.37    6443/tcp        Kubernetes master running.
  containerd/4            active    idle            192.168.1.37                    Container runtime available.
  flannel/4               active    idle            192.168.1.37                    Flannel subnet 10.1.75.1/24
kubernetes-master/1*      active    idle   6        192.168.1.46    6443/tcp        Kubernetes master running.
  containerd/3            active    idle            192.168.1.46                    Container runtime available.
  flannel/3               active    idle            192.168.1.46                    Flannel subnet 10.1.78.1/24
kubernetes-worker/0       active    idle   7        192.168.1.45    80/tcp,443/tcp  Kubernetes worker running.
  containerd/0            active    idle            192.168.1.45                    Container runtime available.
  flannel/0               active    idle            192.168.1.45                    Flannel subnet 10.1.2.1/24
kubernetes-worker/1*      active    idle   8        192.168.1.49    80/tcp,443/tcp  Kubernetes worker running.
  containerd/2*           active    idle            192.168.1.49                    Container runtime available.
  flannel/2*              active    idle            192.168.1.49                    Flannel subnet 10.1.40.1/24
kubernetes-worker/2       active    idle   9        192.168.1.47    80/tcp,443/tcp  Kubernetes worker running.
  containerd/1            active    idle            192.168.1.47                    Container runtime available.
  flannel/1               active    idle            192.168.1.47                    Flannel subnet 10.1.85.1/24

Machine  State    DNS           Inst id        Series  AZ       Message
0        started  192.168.1.32  up-bee         bionic  default  Deployed
1        started  192.168.1.33  master-iguana  bionic  default  Deployed
2        started  192.168.1.34  open-liger     bionic  default  Deployed
3        started  192.168.1.48  handy-goblin   bionic  default  Deployed
4        started  192.168.1.35  normal-goat    bionic  default  Deployed
5        started  192.168.1.37  heroic-civet   bionic  default  Deployed
6        started  192.168.1.46  driven-ape     bionic  default  Deployed
7        started  192.168.1.45  superb-duck    bionic  default  Deployed
8        started  192.168.1.49  vital-caiman   bionic  default  Deployed
9        started  192.168.1.47  fond-chow      bionic  default  Deployed

I compiled my simple Go app, built it into an Alpine container, and pushed it to my Docker registry. Mind you, the registry is insecure: plain HTTP, no HTTPS.
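For completeness, the build-and-tag step is roughly the following (Dockerfile details omitted); tagging the image with the registry host and port is what routes the push below:

ubuntu@golang-project:~$ docker build -t 192.168.1.44:5000/goapp:latest .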

ubuntu@golang-project:~$ docker push 192.168.1.44:5000/goapp:latest
The push refers to repository [192.168.1.44:5000/goapp]
8e5d795c7ed6: Pushed
latest: digest: sha256:ddb2459d8b5deb384f00852ad93038ea52430996a5ef565cbc3002b864380746 size: 528

Tell Juju about the custom registry (docs):

ubuntu@golang-project:~$ juju config containerd custom_registries='[{"url": "http://192.168.1.44:5000", "username": "admin", "password": "password01"}]'
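To confirm the setting took, it can be read back with:

ubuntu@golang-project:~$ juju config containerd custom_registries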

Create a pod YAML at .kube/goapp.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: gopod
spec:
  containers:
    - name: goapp
      image: 192.168.1.44:5000/goapp:latest
      imagePullPolicy: Always

Send it!

ubuntu@golang-project:~$ kubectl apply -f .kube/goapp.yaml

Watch it not work…

ubuntu@golang-project:~$ kubectl describe pod gopod
Name:         gopod
Namespace:    default
Node:         superb-duck/192.168.1.45
Start Time:   Thu, 08 Aug 2019 17:22:30 +0000
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"gopod","namespace":"default"},"spec":{"containers":[{"image":"192.168...
Status:       Pending
IP:           10.1.2.76
Containers:
  goapp:
    Container ID:
    Image:          192.168.1.44:5000/goapp:latest
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-87vvj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-87vvj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-87vvj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                  Message
  ----     ------     ----                 ----                  -------
  Normal   Scheduled  45m                  default-scheduler     Successfully assigned default/gopod to superb-duck
  Normal   Pulling    44m (x4 over 45m)    kubelet, superb-duck  Pulling image "192.168.1.44:5000/goapp:latest"
  Warning  Failed     44m (x4 over 45m)    kubelet, superb-duck  Failed to pull image "192.168.1.44:5000/goapp:latest": rpc error: code = Unknown desc = failed to resolve image "192.168.1.44:5000/goapp:latest": no available registry endpoint: failed to do request: Head https://192.168.1.44:5000/v2/goapp/manifests/latest: http: server gave HTTP response to HTTPS client
  Warning  Failed     44m (x4 over 45m)    kubelet, superb-duck  Error: ErrImagePull
  Warning  Failed     10m (x152 over 45m)  kubelet, superb-duck  Error: ImagePullBackOff
  Normal   BackOff    45s (x196 over 45m)  kubelet, superb-duck  Back-off pulling image "192.168.1.44:5000/goapp:latest"

Manually use the containerd CLI (ctr) to test a container pull on a Kubernetes node:

ubuntu@superb-duck:~$ sudo ctr image pull -k --plain-http -user admin 192.168.1.44:5000/goapp:latest
Password:
192.168.1.44:5000/goapp:latest:                                                   resolved       |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:ddb2459d8b5deb384f00852ad93038ea52430996a5ef565cbc3002b864380746: done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:231e98b468c142d1f9bbec3b5a09df2d8bcae0cfe8ea4323fd2dc72ae699ae1d:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:385f52d7a9c13c560007d881cf4ac040cb07a350438f7d4bcea9e32191ebdefc:   done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.2 s                                                                    total:  1.2 Mi (5.9 MiB/s)                       
unpacking linux/amd64 sha256:ddb2459d8b5deb384f00852ad93038ea52430996a5ef565cbc3002b864380746...
done

I am at a loss now. I apologize ahead of time; I think my lack of experience is probably hampering my ability to dig deeper. My assumption is that the custom_registries containerd config isn’t being respected, or that HTTPS is being forced and Juju isn’t accounting for the possibility of a plain-HTTP registry.

Can anyone assist?

Hey @nbowman,

If there’s nothing confidential, could you please dump the contents of /etc/containerd/config.toml?

You can do this with juju run --unit containerd/0 -- cat /etc/containerd/config.toml.

I’ll try to reproduce it in the meantime.

Thanks!

Sure, here you go, @joeborg:

root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[plugins]
  [plugins.cgroups]
    no_prometheus = false
  [plugins.cri]
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    enable_selinux = false
    sandbox_image = "k8s.gcr.io/pause:3.1"
    stats_collect_period = 10
    systemd_cgroup = false
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      no_pivot = false
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = ""
        runtime_root = ""
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
    [plugins.cri.cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]

        [plugins.cri.registry.mirrors."http://192.168.1.44:5000"]
          endpoint = ["http://192.168.1.44:5000"]

      [plugins.cri.registry.auths]

        [plugins.cri.registry.auths."http://192.168.1.44:5000"]
          username = "admin"
          password = "password01"


    [plugins.cri.x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins.diff-service]
    default = ["walking"]
  [plugins.linux]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins.opt]
    path = "/opt/containerd"
  [plugins.restart]
    interval = "10s"
  [plugins.scheduler]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"

To set up a private registry, I followed the Docker directions here: Deploy a registry server | Docker Documentation

I started it with:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
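As a sanity check that it’s up and answering over plain HTTP, you can hit the standard Registry v2 catalog endpoint directly (add -u admin:password01 if auth is enforced):

curl http://192.168.1.44:5000/v2/_catalog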

@nbowman that all looks good to my eye. It seems to correspond with https://github.com/containerd/cri/blob/master/docs/registry.md

Out of interest, does sudo ctr image pull 192.168.1.44:5000/goapp:latest (i.e. without the explicit auth) work?

@joeborg I get the HTTPS error when I execute that:

$ sudo ctr image pull 192.168.1.44:5000/goapp:latest
ctr: failed to resolve reference "192.168.1.44:5000/goapp:latest": failed to do request: Head https://192.168.1.44:5000/v2/goapp/manifests/latest: http: server gave HTTP response to HTTPS client

I grabbed a PCAP, and it doesn’t look like an HTTPS session is even attempted. The capture decodes as a protocol I’ve never heard of before, the Radio Signalling Link protocol.

The Docker registry replies with a 400.

Oddly, I’m getting the opposite :joy:

$ sudo docker push 172.31.58.254:5000/nginx:latest
The push refers to repository [172.31.58.254:5000/nginx]
Get https://172.31.58.254:5000/v2/: http: server gave HTTP response to HTTPS client

Let me do some digging; we’re probably making a similar mistake. We might have to explicitly mark the registry as insecure, but I need to work out how.

@joeborg lol how…

I guess the solution on my end would probably be to pony up and put certs on the Docker registry, but that leads me down the path of generating my own root CA, then pushing TLS certs to the nodes.

It’s a hassle I was hoping to avoid until I smartened up on everything else.

Do you think easyrsa would be the answer to the certs problem?

You could also try doing it via the registry charm: https://github.com/CanonicalLtd/docker-registry-charm

After deploying it, you can connect it to containerd via juju add-relation containerd docker-registry.
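Something like the following, assuming the charm’s published store name (check the repo’s README for the exact source):

juju deploy cs:~containers/docker-registry
juju add-relation containerd docker-registry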


BTW @nbowman, this fixed my pushing issue: https://github.com/docker/distribution/issues/1874#issuecomment-237194314

Will let you know if it fixes the pull problem too (unless you beat me to it).
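For anyone following along: the fix in that comment amounts to listing the registry as insecure in the Docker daemon config and restarting dockerd. Adapted to my host, /etc/docker/daemon.json becomes:

{
  "insecure-registries": ["172.31.58.254:5000"]
}

followed by sudo systemctl restart docker. Note that this only affects the Docker client side (my push); kubelet pulls go through containerd, which has its own registry config.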

@joeborg

Ah yes, that was a requirement I handled originally. But I eventually learned that the new Charmed Kubernetes from Juju moved to containerd, which started my saga…

Ah, I forgot that stable probably doesn’t have support for relating registries yet, but the edge charm does. It might be buggy, obviously. Feel free to try it and I’ll help out where I can:

juju upgrade-charm containerd --channel edge

Ah, interesting. I’ll try that out tomorrow and check back in.

juju upgrade-charm containerd --channel edge gives me the same result

I think I might be SOL…

I’ll try the docker-registry charm today

Sorry, I don’t think I explained that well. Bringing containerd to edge will allow you to attach a docker-registry charm; stable doesn’t have the code in place yet.

I’ve managed to reproduce the original error, though, so I’ll see if we can fix it easily. If you have a Launchpad account, could you please log a bug here? If not, let me know and I’ll do it for you.

Sure, can do.

Also, I’m midway through setting up docker-registry and, as predicted, it’s a hot mess of adding trusted certs.

I can add details here or post another thread.

Please open a new thread (or bug) for that.

I think we’re hitting this bug https://github.com/containerd/cri/issues/1201.
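If I’m reading that issue right, the generated config keys the mirror by full URL (scheme included), but containerd looks mirrors up by bare registry host, so the entry never matches and pulls fall back to the default HTTPS endpoint. A sketch of what the charm would presumably need to write instead (not verified):

[plugins.cri.registry.mirrors."192.168.1.44:5000"]
  endpoint = ["http://192.168.1.44:5000"]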

Let me know when you’ve opened the bug on LP. Many thanks!
