CAAS charms installing resources and secrets

I’ve been looking at CAAS charming support, and one of the first use cases I’d like to understand is a typical ingress with a reverse proxy in front of a service.

If I have, for example, a charmed service, it will be installed in the model namespace. I need an ingress charm, nginx-ingress for example, that I can relate to it to forward traffic to my service.

Different ingress controllers operate differently, and my service really shouldn’t know anything about that. It should just relate to the charm that handles it. Looking at the nginx service, I need to install an Ingress resource in the same namespace as the charm requesting it.

  1. Is there already an example or production-ready ingress charm to handle this common use case?
  2. Is there a way for CAAS charms to install custom resources like this Ingress resource, which seems to be a common activity for K8s services?
  3. Is there a way for CAAS charms to create and access secrets? For example, if this ingress charm is using TLS, it will need to create a secret and then provide that secret’s name in the resource it creates.

I’ve looked through the examples, and I’m following the various service types I can declare, but I was expecting to see functions to handle secrets, resources, etc. similar to the pod_spec_set command, and I’m not finding them. I’m not sure if it’s not yet implemented or if there is a helper library (like charmhelpers) that I’m missing. A lot of K8s configuration requires managing secrets, resources, and other objects that I would like the charm to handle.
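For concreteness, the kind of objects I’d want the ingress charm to be able to create look roughly like this (names, hosts, and the namespace are illustrative, not from a real deployment):

```yaml
# Illustrative sketch only: a TLS secret plus an Ingress that references it.
apiVersion: v1
kind: Secret
metadata:
  name: my-service-tls          # hypothetical name
  namespace: my-model           # the requesting charm's model namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded cert>
  tls.key: <base64-encoded key>
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress      # hypothetical name
  namespace: my-model
spec:
  tls:
    - hosts:
        - my-service.example.com
      secretName: my-service-tls   # ties the Ingress to the secret above
  rules:
    - host: my-service.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 80
```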

Thanks,
Chris

Pinging @wallyworld, @hpidcock, @kelvin.liu

Support for k8s charms creating secrets is in development and should land in the Juju 2.7 edge snap within the next week.

Support for custom resource definitions already exists:
https://discourse.jujucharms.com/t/whats-new-in-juju-k8s-for-2-6

Support for custom resources is under development. Hopefully it will land in the next few weeks.


Thanks!

I’ve read through that post before, and even on this pass it took me a while to find it. I didn’t even catch that there was a custom resource at the bottom of the example pod spec. I’ll give this a go; you might have secrets implemented before I’m ready to try it out anyway.

Just some early user feedback: even being familiar with charms already, I didn’t expect a ‘pod spec’ to contain custom resource definitions. If it’s going to remain this way, maybe pod_spec isn’t the best name for it?

We wanted a single-word name for the YAML that the charms hand over to Juju that best represented the content. “PodSpec” was the best we could come up with, as it describes the majority of the content; naming stuff is hard :slight_smile:
Having said that, we’re in the middle of an iteration on the YAML handling which splits out the k8s-specific stuff that we don’t (yet) model, like CustomResourceDefinitions, into a separate YAML file. That work should land in the 2.7 edge snap next week.


Makes sense. Splitting out non-pod YAML seems like a good idea. I’m probably well ahead of myself here, but taking the above example, there’s also a situation where you would need to specify the namespace in which to install the objects.

As I understand it today, all charms are installed in a namespace based on the model name. So with the example I was describing originally, if we had an Ingress charm and a Server charm related to it, the Ingress charm would need to install some Ingress objects when the relation is made. Those objects need to be in the namespace of the Server charm (not the Ingress charm). That means the custom objects are only valid if the Ingress charm is in the same model as the Server charm. If, however, we had several models each hosting different workloads and used a CMR to relate to the Ingress provider, the Ingress provider would need to install those objects in the namespace of the Server model. This would mean an Ingress charm would need to install different objects in different namespaces, so it would need to support specifying the namespace for each object.

At least, I think that’s how it would play out today. Perhaps Juju shouldn’t force an assumption about namespaces but default to the model name if one is not otherwise provided? This would be a lot cleaner if objects were not part of the pod spec; installing objects in different namespaces in the same YAML spec would probably be confusing as well.
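Purely as a sketch of what I mean (this syntax is hypothetical, not something Juju supports today): each object the charm requests could carry an optional namespace that defaults to the charm’s own model:

```yaml
# Hypothetical syntax only: a per-object namespace override.
customResources:                   # made-up section name
  server-ingress:
    namespace: server-model        # optional; defaults to this charm's model namespace
    apiVersion: extensions/v1beta1
    kind: Ingress
    spec:
      rules:
        - host: server.example.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: server
                  servicePort: 80
```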

Sounds like I need to find time to charm nginx-ingress so I can really understand if this plays out like I think it does.

I’ve been trying out the CRD today, and I have something wrong with the YAML but I can’t tell what. Deploying without the CRD deploys a pod as expected, but once the CRD is in the YAML, neither the CRD nor the pod is created. juju debug-log shows that pod_spec_set is called; I’m just not sure of any way to determine what Juju doesn’t like about the YAML.

This is the template for the spec. I’ve just lifted the tested and working CRD, added the ‘customResourceDefinitions’ section, and put it as the only resource in that section.

containers:
    - name: {{ name }}
      imageDetails:
          imagePath: {{ registry_path }}
          username: {{ image_username }}
          password: {{ image_password }}
      ports:
        - containerPort: 80
          name: http
        - containerPort: 3012
          name: websocket
customResourceDefinitions:
    ingress.bitwarden:
        apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: haproxy-ingress-bitwarden
          namespace: mk8s
        spec:
          tls:
            - hosts:
              - stable.mk8s.lxd 
          rules:
            - host: stable.mk8s.lxd
              http:
                paths:
                - path: /
                  backend:
                    serviceName: bitwarden
                    servicePort: 80

Can anyone tell me what I’m misunderstanding about the YAML format? Also, is there any way to validate the YAML so I can see what Juju doesn’t like about these files when they fail?

Juju should parse the YAML and check for any syntax errors (but note that just because the YAML parses doesn’t mean there aren’t k8s-specific semantic errors, i.e. it’s really application-specific whether a given resource definition is defined correctly). The YAML seems OK looking at it.
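If you want to rule out basic syntax problems before handing the file to Juju, you can also run it through a YAML parser yourself; a rough sketch using PyYAML (file paths are illustrative):

```python
# Rough sketch: catch pure YAML syntax errors locally before calling
# pod-spec-set. This only checks syntax, not k8s semantics.
import sys

import yaml


def check_yaml_text(text):
    """Return (True, None) if text parses as YAML, else (False, message)."""
    try:
        yaml.safe_load(text)
    except yaml.YAMLError as err:
        return False, str(err)
    return True, None


if __name__ == "__main__":
    # e.g. python check_yaml.py podspec.yaml
    with open(sys.argv[1]) as f:
        ok, msg = check_yaml_text(f.read())
    if not ok:
        sys.exit("YAML error: %s" % msg)
```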

So the logs show pod-spec-set is called, and you can see the YAML printed out and passed to Juju? Does the application definitely have a non-zero scale? I ask because, if deployed from a bundle, you need to ensure the scale is set to 1 or more. Does juju status show a scale > 0? Or are there any errors logged? The only times I’ve ever seen pods not get created are due to a 0 scale or an error in the YAML (and any error Juju sees is logged).

On a side note, some brief info on using the new pod spec features, including specifying secrets, has been posted:
Updated podspec YAML - new features

I’ll re-run this and capture juju debug-log to be sure I’m remembering correctly. What I recall testing was:

  • Deploy just the charm (not a bundle) with the above YAML file, and get no Ingress object or pod.
  • Comment out the customResourceDefinitions, rebuild the charm, clean microk8s, and do the deploy again. This time I get a pod.

But I’ll do that and grab juju debug-log just to be sure. Also, thanks for the new link; I’ll look through that as well. I presume the new feature requires an update to the base layer to support that field. If that’s available, I’m happy to try that method too.

OK, a quick validation of what I saw before. Before each deploy, microk8s.reset is run, then dns/storage is enabled, Juju is bootstrapped, a model is added, and the single charm is deployed.

The only difference between the two charms is commenting/uncommenting the CRD section of the pod spec.

With the CRD, you’ll see the pod_spec, and then debug-log stops: it retrieves the pod but never starts it.
debug-log: Ubuntu Pastebin
pods: Ubuntu Pastebin

Without the CRD, you can see the commented-out section in debug-log, and it proceeds to start the service.
debug-log: Ubuntu Pastebin
pods: Ubuntu Pastebin

I can’t explain why adding a CRD would affect this unless something goes wrong during processing of the spec, but I’m not seeing any errors; it just seems to stop. Anything else I can test or provide to debug what’s going on here?

Juju is 2.6.9-bionic-amd64 from the snap, if that matters.

To see if I could get this working, I’ve also tried the new v2 spec. First, it seems that’s not supported on the stable channel, so I’ve switched to edge with the 2.7.x series.

Using the same YAML as above I get the following: “ERROR yaml: line 11: mapping values are not allowed in this context”

The YAML for the k8s-resource split out is: Ubuntu Pastebin
Line 11 appears to be the ‘rules’ line. I’m not a YAML expert, but this same format works as an actual CRD with K8s, just not in this context. Does YAML disallow a certain level of nesting? Am I converting this from my K8s CRD to Juju incorrectly?
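For what it’s worth, that error class is reproducible outside Juju; PyYAML words it slightly differently (“mapping values are not allowed here”, versus go-yaml’s “in this context”). One classic trigger is an unquoted colon-plus-space inside a value; mis-indented sibling keys can produce similar failures. A minimal, illustrative reproduction:

```python
# Illustrative reproduction of the "mapping values are not allowed" error.
import yaml

# An unquoted ": " inside a plain scalar starts a second mapping where one
# is not allowed, which trips the scanner.
bad = "host: stable.mk8s.lxd: 80\n"
try:
    yaml.safe_load(bad)
except yaml.YAMLError as err:
    print(err)

# Quoting the value fixes it.
good = 'host: "stable.mk8s.lxd: 80"\n'
print(yaml.safe_load(good))
```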

Are you using podspec v2?
Looking at the debug log, I don’t see a “version: 2” in the podspec.
Also, for v2, the CRD section is under a “kubernetesResources” section in a separate YAML file, passed to the pod-spec-set command via --k8s-resources, or via an additional parameter to the base layer’s pod_spec_set() func.

But having said all that, your YAML looks like you’re trying to create a k8s Ingress resource. That’s not a resource of type CustomResourceDefinition, which is what Juju is designed to support for this feature: you basically take the spec portion of a k8s CustomResourceDefinition YAML file and pass that to Juju.

e.g. if the k8s YAML looked like this:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tfjobs.kubeflow.org
spec:
  group: kubeflow.org
  version: v1alpha2
  names:
    kind: TFJob
    plural: tfjobs
    singular: tfjob
  scope: Namespaced
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
...

the Juju yaml would be:

kubernetesResources:
  customResourceDefinitions:
    tfjobs.kubeflow.org:
      group: kubeflow.org
      version: v1alpha2
      scope: Namespaced
      names:
        plural: "tfjobs"
        singular: "tfjob"
        kind: TFJob
      validation:
        openAPIV3Schema:
          properties:
...

Juju doesn’t currently support creating Ingress resources directly. It will create one for an application when that application is exposed via juju expose. We currently support just setting the Rules attribute, though. There’s a little documentation here:

It’s certain that not all use cases are supported yet. Can you expand on what you need? Can we look at implementing something in terms of juju expose for a given application?

Thanks Wally!

I did not specify v2 in the spec anywhere; I was using subprocess in the charm to explicitly call out to the new --k8s-resources flag with a file. Although it sounds like, for an Ingress object, that’s not going to work regardless.

You are correct, I’ve been trying to set up an Ingress resource. I’ll take a look at the documentation you’ve provided; I don’t see any reference to ‘Rules’ in that link. Maybe upon further inspection I’ll figure it out, but I’m curious how that works; it might provide enough of what I want for now.

The primary use case here is that I want to charm an Ingress controller and allow applications related to it to receive traffic. Given that multiple Ingress controllers can co-exist as long as the controller type is specified in the Ingress resource, the plan was to try out HAProxy and Ambassador to see how they compare. I don’t want indecision around Ingress to bleed into my other charms (obviously bitwarden being my first candidate).

Coming from non-CAAS charms, having a charm to control all of the possible Ingress configurations seems like a natural fit. Taking a look at something like the HAProxy Helm chart, there is already a controller, and it has many configuration options, which seems like a good fit to manage with a charm. I suspect more advanced Ingress options like Ambassador would benefit even more from having a charm and relations.

So while exposing the rules is a decent start, what I’m really after is the ability to charm Ingress as a first-class citizen for K8s. I don’t want to enable Ingress as an add-on or deploy it outside of Juju. It’s an incredibly important part of the puzzle, one of the first I start with, and I’d like to have a charm with interfaces for my other charms. If I started with HAProxy today and switched to Ambassador in the future, I wouldn’t want any of the other charms to care, so long as the new Ingress supports the necessary relations. I do this today with ‘normal’ charms and HAProxy: I could switch to Nginx and the rest of the charms wouldn’t know or care.

I’ll take a look at this documentation; in the long run, I don’t see any reason to prevent charms from being written for Ingress. I think the best long-term solution is to allow arbitrary objects (Ingress in this instance) to be created with the k8s-resources. That covers charming Ingress, but it also covers any newly emergent resources that become a standard via community development. That’s my understanding of how Ingress became an object, and why the Ingress objects for different Ingress options don’t always conform to a common spec (even though ideally they should).

Thinking ahead, the other feature I would need to be truly feature-complete for Ingress is multiple namespaces. A single Ingress should be able to install Ingress objects in other CMR namespaces as well. It shouldn’t be necessary for every model (namespace) to install its own Ingress charm. With non-CAAS charms, I use CMR in a fairly limited capacity. However, with K8s, namespaces are cheap, and I would expect CMR to become the norm.

Juju doesn’t currently allow k8s Ingress resources to be created directly by a charm. There are a number of k8s resources that potentially fall into the same category.

To pick up another point, Juju doesn’t allow cluster-wide resources to be created: a Juju model corresponds to a namespace, and all artefacts from that model go in that namespace. Allowing cluster-scoped artefacts is not something we have modelled, and it gets potentially messy very quickly.

So it is true that today Juju cannot do everything you can with, say, Helm. The trick is to carefully consider how to plug the gaps: can we implement a holistic, model-driven approach that scales well and is manageable; do we need a tasteful, pragmatic solution until we get more data to help drive a model-driven approach; are there some things that will always be k8s-specific that we need to just allow?

Maybe we can add a curated set of k8s resources (like Ingress) to the k8s-specific YAML file the charm can produce. We already have Custom Resource Definitions and Secrets in there. The caveat would be that it would not be a free-for-all, and at some stage we may deprecate some things in there (like secrets) when we do get to model such things.


OK, I can see what you’re trying to do with specific and intentional objects. That being the case, I guess what I really need is for the Ingress object to be modeled. As you said, that can just be a pass-through of fields for now, as long as secrets are taken care of in the future once they are modeled in Juju.

I don’t think the Ingress object needs to be a global resource, but it does need to exist in the requesting application’s model. That’s why I keep talking about CMR. For example, I would expect to run my supporting ‘services’ like Ingress in a model exposing the reverse-proxy relation for other models to consume. When another application makes the relation, the Ingress charm will need to install the appropriate Ingress object in the other application’s namespace (model). So the ‘namespace’ field is necessary for the Ingress object; I’m assuming I can get the model name over a CMR?

What I like about this flow is that it lets the Ingress controller deal with all of the not-quite-conforming configuration options and provides central configuration of what ingress is allowed in the cluster. It also lets each charm specify per-application settings without user configuration. An example is an application that requires a URL rewrite because it doesn’t support a base URL: the requesting charm can ask for the rewrite over the relation, and the user doesn’t need to configure anything. This doesn’t work without a charmed Ingress controller, because the consuming charm would have to know how to configure bespoke options on every possible ingress controller.
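As a sketch of that flow (the relation key below is made up; the annotation it maps to is real nginx-ingress syntax, used only as an example target): the requesting charm publishes what it needs on the relation, and the Ingress charm translates that into whatever its controller understands:

```python
# Hypothetical interface sketch: translate generic relation data into
# controller-specific Ingress configuration. The "rewrite-target" relation
# key is invented for illustration; the nginx annotation is real
# nginx-ingress syntax.

def nginx_annotations(relation_data):
    """Map generic relation settings to nginx-ingress annotations."""
    annotations = {}
    if "rewrite-target" in relation_data:
        annotations["nginx.ingress.kubernetes.io/rewrite-target"] = (
            relation_data["rewrite-target"])
    return annotations


# An app that doesn't support a base URL asks for a rewrite to /;
# the user configures nothing.
print(nginx_annotations({"rewrite-target": "/"}))
```

Swapping HAProxy or Ambassador in would mean only swapping this translation layer, not the consuming charms.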

If Ingress is added in a preview or beta, let me know. I’ll hold off on this for now until a path for Ingress objects is laid out.

It’s an interesting use case. Ideally, with CMR, there’s an argument that Juju could be expected to manage connectivity between models automatically. Certainly, for VM clouds, the primitives exist for that:

  • Juju manages firewalls on each model as needed
  • --via option to add-relation to deal with NATed services
  • enhancements to network-info output to provide ingress, egress, bind addresses to the charm
  • Juju will use a public scoped IP address if available for cross model network info

So what to do for k8s, where the infrastructure is radically different? Would Juju be expected to create Ingress resources “automagically”? Does it have enough information at hand to do that correctly in all cases, and if not, how do we allow the charm to specify what it needs? What role would CNI and other such technologies play? What’s a tasteful short-term approach until a model-driven approach can be implemented? Some thought is needed.