kubernetes · rbac · kubernetes-operator · kubebuilder · kubernetes-rbac

How to add RBAC roles to a Controller for a different kind of resource in Kubebuilder


I am creating a new Operator with Kubebuilder to deploy a Kubernetes controller that manages a new Custom Resource Definition (CRD).

This new CRD (let's say it is called MyNewResource) needs to list/create/delete CronJobs.

So in the Controller Go code, where the Reconcile(...) method is defined, I added a new RBAC marker comment to allow the reconciler to work on CronJobs (see here):

//+kubebuilder:rbac:groups=batch,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete

However, after building, pushing, and deploying the controller image (repo myrepo: make manifests, then make install, then make docker-build docker-push, then make deploy), I still see this in the logs:

E0111 09:35:18.785523       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.1/tools/cache/reflector.go:167: Failed to watch *v1beta1.CronJob: failed to list *v1beta1.CronJob: cronjobs.batch is forbidden: User "system:serviceaccount:myrepo-system:myrepo-controller-manager" cannot list resource "cronjobs" in API group "batch" at the cluster scope

I also see errors about the cache, though I am not sure they are related:

2022-01-11T09:35:57.857Z        ERROR   controller.mynewresource        Could not wait for Cache to sync        {"reconciler group": "mygroup.mydomain.com", "reconciler kind": "MyNewResource", "error": "failed to wait for mynewresource caches to sync: timed out waiting for cache to be synced"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.0/pkg/internal/controller/controller.go:234
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.0/pkg/manager/internal.go:696
2022-01-11T09:35:57.858Z        ERROR   error received after stop sequence was engaged  {"error": "leader election lost"}
2022-01-11T09:35:57.858Z        ERROR   setup   problem running manager {"error": "failed to wait for mynewresource caches to sync: timed out waiting for cache to be synced"}

How can I allow my new Operator to deal with CronJobs resources?

At the moment I am basically unable to create new CronJobs programmatically (from Go code) when I provide YAML for a new instance of my CRD by invoking:

kubectl create -f mynewresource-project/config/samples/
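For context, the CronJob creation inside my Reconcile looks roughly like the following. This is only a sketch using the controller-runtime v0.10 / batch v1beta1 APIs that appear in my logs; the resource names, module path, and schedule are placeholders:

```go
package controllers

import (
	"context"

	batchv1beta1 "k8s.io/api/batch/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	mygroupv1 "mydomain.com/mynewresource-project/api/v1" // hypothetical module path
)

// ensureCronJob creates or updates the CronJob owned by the given
// MyNewResource. Called from Reconcile; without the RBAC permissions
// this is what fails with "cronjobs.batch is forbidden".
func (r *MyNewResourceReconciler) ensureCronJob(ctx context.Context, res *mygroupv1.MyNewResource) error {
	cron := &batchv1beta1.CronJob{
		ObjectMeta: metav1.ObjectMeta{
			Name:      res.Name + "-cron", // placeholder naming scheme
			Namespace: res.Namespace,
		},
	}
	_, err := controllerutil.CreateOrUpdate(ctx, r.Client, cron, func() error {
		cron.Spec.Schedule = "*/5 * * * *" // placeholder schedule
		// Own the CronJob so it is garbage-collected with the CR.
		return controllerutil.SetControllerReference(res, cron, r.Scheme)
	})
	return err
}
```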

Solution

  • You need to create a new Role or ClusterRole (depending on whether you want the permissions to be namespaced or cluster-wide) and bind it to your system:serviceaccount:myrepo-system:myrepo-controller-manager user using a RoleBinding/ClusterRoleBinding. I will provide examples for a cluster-wide configuration.

    ClusterRole:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: cronjobs-role
    rules:
    - apiGroups: ["batch"]
      resources: ["cronjobs"]
      verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
    

    Then, bind that using ClusterRoleBinding:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cronjobs-rolebinding
    subjects:
    - kind: User
      name: system:serviceaccount:myrepo-system:myrepo-controller-manager
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: cronjobs-role
      apiGroup: rbac.authorization.k8s.io
    
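    Equivalently, instead of a User subject with the long system:serviceaccount:... name, you can bind directly to the ServiceAccount (a fragment, using the names from your error message):

    ```yaml
    subjects:
    - kind: ServiceAccount
      name: myrepo-controller-manager
      namespace: myrepo-system
    ```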

    CronJobs live in the batch API group, as the forbidden error in your logs shows (the core group "" will not match them), so make sure apiGroups lists "batch". More about k8s RBAC here.
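    You can verify that the binding took effect with kubectl auth can-i, impersonating the controller's service account (names taken from your error message):

    ```shell
    kubectl auth can-i list cronjobs.batch \
      --as=system:serviceaccount:myrepo-system:myrepo-controller-manager \
      --all-namespaces
    # should answer "yes" once the ClusterRoleBinding is in place
    ```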

    Kubebuilder

    With Kubebuilder, the ClusterRole and ClusterRoleBinding YAML is autogenerated from the RBAC markers and stored in the config/rbac/ directory when you run make manifests.

    To grant the permissions on all API groups (rather than just batch), you can use an asterisk in the Go marker comment:

    //+kubebuilder:rbac:groups=*,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
    

    This will change the autogenerated YAML for the ClusterRole to:

    rules:
     - apiGroups:
       - '*' # instead of simply: batch
    

    After deploying the updated operator, the controller should be able to list/create/delete CronJobs.
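    Regenerating and redeploying uses the same commands from your question:

    ```shell
    make manifests            # regenerate config/rbac/ from the RBAC markers
    make docker-build docker-push
    make deploy
    ```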

    See here for a reference on Kubebuilder RBAC markers.