
How to create a Role for a ServiceAccount and use it with a context


I want to create a Role for a ServiceAccount, together with a context.
My goal is to be able to run kubectl get pods in the context of that service account.

To do this I need to:

  1. Create a service account
  2. Create a role
  3. Create a role binding
  4. Create a context

I created a service account:

kubectl create serviceaccount myservice
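Note that the Role and RoleBinding below are scoped to the development namespace, so the service account should be created there as well, otherwise the binding's subject will not match it. A sketch, assuming the development namespace already exists:

```shell
# Create the service account in the same namespace the Role will live in.
# Assumption: the "development" namespace already exists.
kubectl create serviceaccount myservice --namespace=development
```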

Role.yaml:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata: 
  namespace: development
  name: my-role
rules: 
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods"]
  verbs: ["get"] 
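For reference, the same Role can also be created imperatively. A sketch equivalent to the manifest above (the manifest also lists the extensions and apps API groups, but pods live in the core group, so this grants the same access):

```shell
# Imperative equivalent of Role.yaml: allow "get" on pods in "development".
kubectl create role my-role \
  --verb=get \
  --resource=pods \
  --namespace=development
```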

BindRole.yaml:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata: 
  name: my-role-binding
  namespace: development
subjects: 
- kind: ServiceAccount 
  name: myservice
  namespace: development
  apiGroup: ""
roleRef: 
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
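The binding, too, has an imperative equivalent; a sketch matching the manifest above:

```shell
# Imperative equivalent of BindRole.yaml: bind my-role to the service account.
kubectl create rolebinding my-role-binding \
  --role=my-role \
  --serviceaccount=development:myservice \
  --namespace=development
```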

I want to be able to run kubectl get pods in the context of the service account myservice.

To create the context I need something like this:

kubectl config set-context myservice-context --cluster=kubernetes --user=???

But I can't pass the service account to --user.
So how can I do it?

I thought of using kubectl config set-credentials, but it just creates a user, and I already have the service account.
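In fact, set-credentials can register a kubeconfig user backed by the service account's token, and the context can then point at that user. A sketch, assuming the service account lives in the development namespace and its token secret was auto-created (Kubernetes before 1.24; on newer clusters, kubectl create token myservice issues a token instead):

```shell
# Look up the auto-created token secret for the service account
# (the field paths are standard, but the secret name itself is generated).
SECRET=$(kubectl -n development get serviceaccount myservice \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n development get secret "$SECRET" \
  -o jsonpath='{.data.token}' | base64 --decode)

# Register a kubeconfig user backed by that token...
kubectl config set-credentials myservice-user --token="$TOKEN"

# ...and point a context at it.
kubectl config set-context myservice-context \
  --cluster=kubernetes \
  --namespace=development \
  --user=myservice-user

kubectl --context=myservice-context get pods
```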

EDIT:
Here is my attempt to create a user with the service account's token and then use it with kubectl --context=myservice-context get pods, but it failed:
(screenshot of the failed attempt omitted)


Solution

  • It appears the cluster may be missing from your ~/.kube/config file. If it were a permissions issue, I would expect to see either error: You must be logged in to the server (Unauthorized) or Error from server (Forbidden).

    The error you are seeing The connection to the server localhost:8080 was refused - did you specify the right host or port? implies that there is no cluster with the name you specified in your kubeconfig.

    I'd check that your kubeconfig includes a cluster named kubernetes, with certificate-authority-data and the corresponding server entry.
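A couple of quick ways to check that (standard kubectl config subcommands; the cluster name kubernetes is taken from the question):

```shell
# List the clusters kubectl knows about; "kubernetes" should appear here.
kubectl config get-clusters

# Print the API server URL recorded for that cluster, if any.
kubectl config view \
  -o jsonpath='{.clusters[?(@.name=="kubernetes")].cluster.server}'
```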

    For example, here I am attempting to use a non-existent service account, first with an invalid cluster, then again with a cluster that does exist in my kubeconfig.

    Bad cluster name:

    kubectl config set-context service-context \
      --cluster=doesnotexist \
      --namespace=default \
      --user=service-account
    Context "service-context" modified.
    ➜  ~ kubectl --context=service-context get pods
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    

    Good cluster name:

    kubectl config set-context service-context \
    --cluster=exists \
    --namespace=default \
    --user=service-account
    Context "service-context" modified.
    ➜  ~ kubectl --context=service-context get pods
    error: You must be logged in to the server (Unauthorized)
    

    The latter error would suggest there is something wrong with your user/permissions. The former would suggest the cluster does not exist in your kubeconfig.

    EDIT:

    Also remember that when you use sudo, kubectl reads /root/.kube/config, which may not be what you want.
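If sudo is unavoidable, you can point it at your own kubeconfig explicitly (paths shown are the defaults; adjust as needed):

```shell
# Either pass the kubeconfig flag through sudo...
sudo kubectl --kubeconfig="$HOME/.kube/config" \
  --context=myservice-context get pods

# ...or set KUBECONFIG for the sudo'd command
# ($HOME expands in your shell, before sudo runs).
sudo KUBECONFIG="$HOME/.kube/config" \
  kubectl --context=myservice-context get pods
```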