This is a follow-up to the question No field manager for service account with kubectl create, but here with kubectl apply as well.
A Service (or Pod) created with "kubectl apply" and one created with "kubectl create" in a CAPI (Cluster API) cluster get the same managedFields (apart from expected differences such as the last-applied-configuration annotation). But this does not seem to be the case for a ServiceAccount, as described in the question above.
kubectl create
$ k get svc my-service -oyaml --show-managed-fields
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-01-26T18:40:15Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:internalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl-create
    operation: Update
    time: "2024-01-26T18:40:15Z"
  name: my-service
  namespace: default
  resourceVersion: "548809"
  uid: 7d57743d-9b8d-4f04-850b-7ca6d5e1347a
spec:
  clusterIP: 172.19.186.161
  clusterIPs:
  - 172.19.186.161
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app.kubernetes.io/name: MyApp
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
kubectl apply
$ k get svc my-service -oyaml --show-managed-fields
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-service","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":9376}],"selector":{"app.kubernetes.io/name":"MyApp"}}}
  creationTimestamp: "2024-01-26T18:41:00Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:internalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2024-01-26T18:41:00Z"
  name: my-service
  namespace: default
  resourceVersion: "548931"
  uid: 8a378bcc-b70e-441b-8b55-463f7700e1f3
spec:
  clusterIP: 172.19.141.7
  clusterIPs:
  - 172.19.141.7
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app.kubernetes.io/name: MyApp
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Result: the managedFields are (almost) the same, differing only in the field manager name and the last-applied-configuration annotation.
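To make the comparison concrete, here is a small Python sketch (no cluster needed; the object contents are trimmed copies of the dumps above, with only the fields relevant to ownership kept) that reduces each object to the set of (manager, operation) pairs in its managedFields:

```python
def managers(obj):
    """Return the set of (manager, operation) pairs that own fields on obj."""
    return {(e["manager"], e["operation"])
            for e in obj.get("metadata", {}).get("managedFields", [])}

# Trimmed from the two Service dumps above.
svc_created = {"metadata": {"managedFields": [
    {"manager": "kubectl-create", "operation": "Update"},
]}}
svc_applied = {"metadata": {"managedFields": [
    {"manager": "kubectl-client-side-apply", "operation": "Update"},
]}}

print(managers(svc_created))  # {('kubectl-create', 'Update')}
print(managers(svc_applied))  # {('kubectl-client-side-apply', 'Update')}

# The ServiceAccount created with "kubectl create" on the affected cluster
# had no managedFields at all, so the same helper yields an empty set:
sa_created = {"metadata": {"name": "build-robot"}}
print(managers(sa_created))   # set()
```

For the two Services the sets differ only in the manager name, which is the expected difference between the two commands; for the ServiceAccount the set is empty, which is the anomaly this question is about.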
kubectl create
$ k get sa -oyaml --show-managed-fields
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2024-01-24T15:11:24Z"
    name: build-robot
    namespace: default
    resourceVersion: "7337504"
    uid: e2414d28-d897-4099-ac5d-699c89835615
  secrets:
  - name: build-robot-token-77p6d
kubectl apply
$ k get sa -oyaml --show-managed-fields
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"build-robot","namespace":"default"}}
    creationTimestamp: "2024-01-24T15:10:55Z"
    managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:secrets:
          .: {}
          k:{"name":"build-robot-token-8rqgq"}: {}
      manager: kube-controller-manager
      operation: Update
      time: "2024-01-24T15:10:55Z"
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: "2024-01-24T15:10:55Z"
    name: build-robot
    namespace: default
    resourceVersion: "7337399"
    uid: 0bac2513-844f-4526-b374-3642bdf26838
  secrets:
  - name: build-robot-token-8rqgq
Result: the managedFields differ; they are completely absent in the case of "kubectl create". Why? What changed?
I saw this behavior with server v1.23.10 and client v1.25.13.
I can't reproduce it with server v1.28.4 and client v1.29.1.