I need to deploy several containers to a Kubernetes cluster. The objective is to automate the deployment of Kafka, Kafka Connect, PostgreSQL, and others. Some of them already provide a Helm operator that we could use. So my question is: can we somehow use those Helm operators inside our own operator? If so, what would be the best approach?
The only method I can think of so far is calling the helm setup console commands from within a deployment app. Another approach, without using those Helm files, would be to implement the functionality of each operator in my own operator, which doesn't seem to make much sense since what I need has already been developed and is public.
I'm very new to operator development so please excuse me if this is a silly question.
Edit: The main purpose of the operator is to deploy X databases. Along with that, we would like to have a single operator/bundle that deploys the whole system right away. Does it even make sense to use an operator for bundling, even if we have additional tasks for some of the containers? With this, the user would specify in the YAML file:
```yaml
databases:
  - type: "postgres"
    name: "users"
  - type: "postgres"
    name: "purchases"
```
and two PostgreSQL databases would be created. Those databases could then be referenced in other YAML files, or further down in the same YAML file. Concrete case: the information from the databases will be pulled by Debezium (another container), so Debezium needs to know their addresses. The operator should therefore create a Service and associate the Service address with the database name.
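For illustration, something like the following Service is what we have in mind for the "users" database; the label key is just a placeholder for this example, and Debezium would simply use the Service name as the host:

```yaml
# Sketch: a Service the operator could create for the database named "users".
# Debezium could then reach it at "users.<namespace>.svc" (or just "users"
# from within the same namespace).
apiVersion: v1
kind: Service
metadata:
  name: users                             # derived from the "name" field in the spec above
spec:
  selector:
    etl.example.com/database: users       # placeholder label; must match the PostgreSQL pods
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
```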
This is part of an ETL system. The idea is that the operator would allow easy deployment of the whole system by taking care of most of the configuration. With this in mind, we were wondering whether it would be possible to pick up existing Helm operators (or another kind of operator) and deploy them with small modifications to their configuration, such as different ports for different databases.
But after reading F1ko's reply I gained a new perspective. Perhaps this is not possible with an operator as initially expected?
Edit2: Clarification of edit1.
Just for clarification purposes:
Helm is a package manager with which you can install an application onto the cluster in a bundled manner: it basically provides you with all the necessary YAMLs, such as ConfigMaps, Services, Deployments, and whatever else is needed to get the desired application up and running in a proper way.
An Operator is essentially a controller. In Kubernetes there are lots of different controllers that define the "logic" whenever you do something (e.g. the replication controller adds more replicas of a Pod if you decide to increment the replicas field). There are simply too many controllers to list them all and run them individually, which is why they are compiled into a single binary known as the kube-controller-manager.
Custom-built controllers are called operators for easier distinction. These operators simply watch over the state of certain "things" and are going to perform an action if needed. Most of the time these "things" are going to be CustomResources (CRs) which are essentially new Kubernetes objects that were introduced to the cluster by applying CustomResourceDefinitions (CRDs).
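As a rough illustration of that CRD/CR relationship, here is a minimal sketch; the API group, kind, and fields are invented for the example and do not belong to any real operator:

```yaml
# Hypothetical CustomResourceDefinition: teaches the cluster about a new
# "Database" object that a custom operator could watch.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.etl.example.com      # must be <plural>.<group>
spec:
  group: etl.example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                type:
                  type: string
                name:
                  type: string
---
# A CustomResource of that new kind; the operator reacts whenever one is applied.
apiVersion: etl.example.com/v1alpha1
kind: Database
metadata:
  name: users
spec:
  type: postgres
  name: users
```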
With that being said, it is not uncommon to use helm to deploy operators; however, try to avoid the term "helm operator", as it actually refers to a very specific operator and may lead to confusion in the future: https://github.com/fluxcd/helm-operator
So my question is, can we somehow use those helm operators inside our operator?
Although you may build your own operator with the operator-sdk, which then lets you deploy other operators or trigger certain events in them (e.g. by editing their CRs), there is no reason to do so.
The only method I can think of so far is calling the helm setup console commands from within a deployment app.
Most likely what you are looking for is a proper CI/CD workflow.
Simply commit the helm chart and values.yaml files that you are using during helm install into a Git repository and have a CI/CD tool (such as GitLab) deploy them to your cluster every time you make a new commit.
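A minimal sketch of what such a pipeline could look like, assuming a GitLab runner that is already authorized against the cluster and a chart kept in ./chart; the release name, namespace, and image tag are placeholders:

```yaml
# .gitlab-ci.yml (sketch): roll out the chart on every commit to the main branch.
stages:
  - deploy

deploy-etl:
  stage: deploy
  image: alpine/helm:3.14.0   # any image that ships the helm CLI works here
  script:
    - helm upgrade --install etl-stack ./chart --namespace etl --create-namespace --values values.yaml
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```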
Update: As the OP edited the question and left a comment, I decided to update this post:
The main purpose of the operator is to deploy X databases. Along with that we would like to have a single operator/bundle that deploys the whole system right away.
Do you think it makes sense to bundle operators together in another operator, as one would do with Helm?
No, it does not make sense at all; that's exactly what helm is there for. With helm you can bundle things together, and you can even bundle multiple helm charts together, which may be what you are actually looking for. You can have one helm chart that passes the needed values down to the actual operator helm charts and therefore use something like the service name in multiple locations.
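A rough sketch of that "parent chart" idea, with placeholder chart names, versions, and repository URLs:

```yaml
# Chart.yaml of a hypothetical parent chart that bundles the sub-charts.
apiVersion: v2
name: etl-stack
version: 0.1.0
dependencies:
  - name: postgresql
    version: "12.x.x"                          # placeholder version
    repository: "https://charts.example.com"   # placeholder repository
  - name: kafka
    version: "23.x.x"                          # placeholder version
    repository: "https://charts.example.com"   # placeholder repository
```

In the parent chart's values.yaml, anything nested under postgresql: or kafka: is passed down to that sub-chart (and values under global: are visible to all of them), which is how something like a service name can be maintained in one place.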
In the case of operators inside operators, is it still necessary to configure every sub-operator individually when configuring the operator?
As mentioned above, it does not make any sense to do it like that; it is just an over-engineered approach. However, if you truly want to go with the operator approach, there are basically two things you could do: have your operator deploy the other operators itself, or have it drive them by creating and editing their CRs. Either way your operator ends up depending on their APIs (whenever they release a new apiVersion with breaking changes, introduce new CRs or anything of that kind, you will have to adapt again). Hopefully it is clear by now that building your own operator for "operating" other operators comes with a lot of painful dependencies and should not be the way to go.
Is it possible to deploy different configurations of images? Such as databases configured with different ports?
Good operators and helm charts let you do that out of the box, either via a respective CR / ConfigMap or a values.yaml file; however, that now depends on what solutions you are going to use. So in general the answer is: yes, it is possible if supported.
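As a concrete illustration of the values.yaml route, you would keep one values file per database release; the key names below are made up, the real ones depend on the chart you choose:

```yaml
# values-users.yaml (sketch): overrides for the first PostgreSQL release.
fullnameOverride: users     # placeholder key; many charts use something like this for the resource/service name
service:
  port: 5432

# A second file, e.g. values-purchases.yaml, would set a different name and
# e.g. "port: 5433", and each file is passed to its own release via
# "helm install <release> <chart> -f <values-file>".
```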