I have set up EFS and am trying to use it as a statically provisioned volume, without an access point, so that multiple pods can write to it. The storage resources are set up according to this doc:
https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/multiple_pods
I'm using the resource manifests from the above GitHub example.
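For context, the static-provisioning resources from that example look roughly like this (condensed; `fs-12345678` stands in for the real file system id):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678   # placeholder for the real file system id
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```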
When the pod is up, the EFS mount succeeds, but when I open an interactive shell into the pod I can see that the mounted directory is owned by root. I can read its contents but can't write to it; any write attempt fails with an Access Denied error. The container runs as a non-root user.
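For example, something like this (using the `app1` pod and `/data` mount path from the example; the output shown is illustrative):

```sh
$ kubectl exec -it app1 -- sh
/ $ ls -ld /data
drwxr-xr-x  2 root root 6144 Jan  1 00:00 /data
/ $ touch /data/test.txt
touch: /data/test.txt: Permission denied
```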
I have this exact same setup running in a different namespace and it works perfectly, but it was created six months ago on a different Kubernetes version (I think 1.19 then, versus 1.21 now). In that setup the mounted EFS directory is owned by the non-root user.
I have also attempted to create an access point, assigning uid 1000 (my non-root user's id) and gid 1000, and giving its root path 777 permissions, but this had no effect.
How can I fix this and mount as non-root, or otherwise grant the non-root user write access with statically provisioned EFS?
I couldn't manage to fix this without using access points.
I finally attempted to set up access points once again, and this time it worked: I used uid 1000 (the user id configured in the container) in basically every field and set the permissions to 0777.
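For reference, creating an access point with those settings via the AWS CLI looks roughly like this (a sketch; the file system id and root directory path are placeholders):

```sh
aws efs create-access-point \
  --file-system-id fs-12345678 \
  --posix-user Uid=1000,Gid=1000 \
  --root-directory 'Path=/data,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0777}'
```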
It also wasn't working with the examples for setting up the PVC through an access point (found here: https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/access_points). After recreating the PVCs using the example here:
https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/multiple_pods
and modifying just this part of the PV to include the access point:

volumeHandle: [FileSystemId]::[AccessPointId]

it started working.
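Filled in, the csi section of the working PV ends up looking like this (both ids are placeholders for the real file system and access point ids; the empty middle field is the driver's optional subpath):

```yaml
csi:
  driver: efs.csi.aws.com
  volumeHandle: fs-12345678::fsap-0123456789abcdef0
```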