kubernetes rbac

Kubernetes namespace default service account


If not specified, pods are run under a default service account.

Environment: Kubernetes 1.12, with RBAC


Solution

    1. A default service account is automatically created for each namespace.

      $ kubectl get serviceaccount
      NAME      SECRETS   AGE
      default   1         1d
      
    2. Service accounts can be added when required. Each pod is associated with exactly one service account, but multiple pods can use the same service account.
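
      For example, an additional service account can be created with kubectl (the name my-sa below is just a placeholder):

      $ kubectl create serviceaccount my-sa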

    3. A pod can only use one service account from the same namespace.

    4. Service accounts are assigned to a pod by specifying the account's name in the pod manifest. If you don't assign one explicitly, the pod uses the default service account.
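
      A minimal pod manifest assigning a service account might look like this (the pod, namespace and service account names are placeholders, and the service account must already exist in that namespace):

      apiVersion: v1
      kind: Pod
      metadata:
        name: test
        namespace: foo
      spec:
        serviceAccountName: my-sa
        containers:
        - name: main
          image: alpine
          command: ["sleep", "3600"]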

    5. The default permissions for a service account don't allow it to list or modify any resources. The default service account isn't allowed to view cluster state, let alone modify it in any way.

    6. By default, the default service account in a namespace has no permissions other than those of an unauthenticated user.

    7. Therefore, pods by default can't even view cluster state. It's up to you to grant them the appropriate permissions to do that.

      $ kubectl exec -it test -n foo sh
      / # curl localhost:8001/api/v1/namespaces/foo/services
      {
        "kind": "Status",
        "apiVersion": "v1",
        "metadata": {},
        "status": "Failure",
        "message": "services is forbidden: User \"system:serviceaccount:foo:default\" cannot list resource \"services\" in API group \"\" in the namespace \"foo\"",
        "reason": "Forbidden",
        "details": {
          "kind": "services"
        },
        "code": 403
      }

      As can be seen above, the default service account cannot list services.

      But when given a proper Role and RoleBinding like the ones below:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        creationTimestamp: null
        name: foo-role
        namespace: foo
      rules:
      - apiGroups:
        - ""
        resources:
        - services
        verbs:
        - get
        - list
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        creationTimestamp: null
        name: test-foo
        namespace: foo
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: foo-role
      subjects:
      - kind: ServiceAccount
        name: default
        namespace: foo
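
      The same Role and RoleBinding can also be created imperatively with kubectl, for example:

      $ kubectl create role foo-role --verb=get,list --resource=services -n foo
      $ kubectl create rolebinding test-foo --role=foo-role --serviceaccount=foo:default -n foo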
      

      Now I am able to list services:

      $ kubectl exec -it test -n foo sh
      / # curl localhost:8001/api/v1/namespaces/foo/services
      {
        "kind": "ServiceList",
        "apiVersion": "v1",
        "metadata": {
          "selfLink": "/api/v1/namespaces/bar/services",
          "resourceVersion": "457324"
        },
        "items": []
      
    8. Giving all your service accounts the cluster-admin ClusterRole is a bad idea. It is best to give everyone only the permissions they need to do their job, and not a single permission more.

    9. It’s a good idea to create a specific service account for each pod and then associate it with a tailor-made role or a ClusterRole through a RoleBinding.

    10. If one of your pods only needs to read pods while the other also needs to modify them, then create two different service accounts and make those pods use them by specifying the serviceAccountName property in the pod spec, for example:
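
      A rough sketch of that setup using kubectl's imperative commands (all names below are placeholders):

      # a service account for each pod
      kubectl create serviceaccount pod-reader-sa -n foo
      kubectl create serviceaccount pod-editor-sa -n foo

      # a read-only role and a role that also allows modification
      kubectl create role pod-reader --verb=get,list,watch --resource=pods -n foo
      kubectl create role pod-editor --verb=get,list,watch,update,patch,delete --resource=pods -n foo

      # bind each service account to its role
      kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=foo:pod-reader-sa -n foo
      kubectl create rolebinding pod-editor-binding --role=pod-editor --serviceaccount=foo:pod-editor-sa -n foo

      Each pod then sets serviceAccountName to pod-reader-sa or pod-editor-sa in its spec.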

    You can refer to the link below for an in-depth explanation.

    Service account example with roles

    You can check kubectl explain serviceaccount.automountServiceAccountToken and then edit the service account to disable automatic token mounting:

    kubectl edit serviceaccount default -o yaml

    apiVersion: v1
    automountServiceAccountToken: false
    kind: ServiceAccount
    metadata:
      creationTimestamp: 2018-10-14T08:26:37Z
      name: default
      namespace: default
      resourceVersion: "459688"
      selfLink: /api/v1/namespaces/default/serviceaccounts/default
      uid: de71e624-cf8a-11e8-abce-0642c77524e8
    secrets:
    - name: default-token-q66j4
    

    Once this change is made, any pod you spawn will not have a service account token mounted, as can be seen below.

    kubectl exec tp -it bash
    root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
    bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
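
    Note that automountServiceAccountToken can also be set per pod rather than on the service account; a pod-level value takes precedence. A minimal sketch (pod name and image are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      automountServiceAccountToken: false
      containers:
      - name: main
        image: nginx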