Is it possible to run commands with root or sudo privileges inside a pod container's shell in K3s (not K8s)?
As far as I know, K3s is a lightweight Kubernetes distribution with some functionality reduced.
I was wondering whether I could make root/sudo changes in a pod container on my PC running TrueNAS Scale, which uses K3s instead of full K8s Kubernetes.
Is it possible to enable the sudo command inside a K3s pod container?
Or is it possible to turn off the read-only filesystem option inside the pod container and make it writable?
I tried running commands from this thread without much success:
Exec commands on kubernetes pods with root access
sudo k3s kubectl -n namespace_here exec -it -u root pod_container_name_id_here /bin/sh
error: unknown shorthand flag: 'u' in -u
See 'kubectl exec --help' for usage.
sudo runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 container_id_here /bin/sh
sudo: runc: command not found
sudo k3s ctr task exec -t --exec-id myshell --user root container_id_here /bin/sh
This command correctly enters the pod container's shell as the root user (id 0), but I still can't make many changes with root privileges.
For example, I still get a Read-only error, just as when I log in as an ordinary user:
mkdir test
mkdir: can't create directory 'test': Read-only file system
sudo
/bin/sh: sudo: not found
sudo k3s kubectl exec-as -it -u root pod_container_name_id_here /bin/sh -n namespace_here
error: unknown command "exec-as" for "kubectl"
I tried installing the krew plugin for K3s kubectl; the installation reported success, but kubectl still does not recognize it:
Installed plugin: krew
sudo k3s kubectl krew install exec-as
error: unknown command "krew" for "kubectl"
sudo kubelet
sudo: kubelet: command not found
I found some workaround solutions myself, but other suggestions are still appreciated.
All pods, deployments, services and other objects in Kubernetes are defined in YAML, which can be edited easily with a command like the one below (it opens the vi editor on the YAML definition). After a deployment is edited, its pods are automatically recreated from the newly edited definition.
sudo k3s kubectl edit deployment deployment_name_here -n namespace_here
The problem with the read-only file system is generally caused by this setting under the securityContext section of the pod/deployment YAML:
readOnlyRootFilesystem: true
So the solution is to edit the YAML definition of this pod/deployment and set this option to false.
Then, after redeployment, changes to the root filesystem are possible when logged in as root inside the pod container:
sudo k3s ctr task exec -t --exec-id myshell --user root container_id_here /bin/sh
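For reference, a minimal sketch of what the edited container section might look like in the deployment YAML (the container name and image are just placeholders; only readOnlyRootFilesystem matters here):
containers:
  - name: app-container              # illustrative name
    image: nginx:1.14.2              # illustrative image
    securityContext:
      readOnlyRootFilesystem: false  # was: true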
There is also a way to change the default user that is used to log into the pod container, in the same securityContext section. The default user can be set to root (0) to make it easier to log into the pod container as root.
By default in TrueNAS Scale, the apps user (568) is used to log into pod containers, and most pods/deployments have the settings below under the securityContext section:
runAsUser: 568
runAsGroup: 568
runAsNonRoot: true
So the solution is to edit the YAML file of this pod/deployment and change these settings as follows:
runAsUser: 0
runAsGroup: 0
runAsNonRoot: false
Then, when using the standard command to log into the pod container's shell, the root user (0) is selected automatically:
sudo k3s kubectl -n namespace_here exec -it pod_container_name_id_here -- /bin/sh
id
uid=0(root) gid=0(root) groups=0(root)
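Note that securityContext can appear both at the pod level and at the container level in the deployment YAML; the sketch below (with illustrative names) shows where these user settings can sit, with container-level values overriding pod-level ones:
spec:
  template:
    spec:
      securityContext:            # pod-level, applies to all containers
        runAsUser: 0
        runAsGroup: 0
        runAsNonRoot: false
      containers:
        - name: app-container     # illustrative name
          image: nginx:1.14.2     # illustrative image
          securityContext:        # container-level, overrides the pod-level values
            runAsUser: 0
            runAsNonRoot: false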
Problems like permission denied can still occur for some commands even when running as root inside the pod container.
The solution is to change the following settings in the same securityContext section:
allowPrivilegeEscalation: false
privileged: false
into
#allowPrivilegeEscalation: false
privileged: true
It is also possible to specify exactly which Linux capabilities are allowed, in the securityContext.capabilities section:
capabilities:
  drop:
    - ALL
  add:
    - CHOWN
    - ADDUSER
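To check which capabilities the container's main process actually has, one option (assuming a standard Linux /proc filesystem inside the container) is:
grep Cap /proc/1/status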
In the YAML definition of the pod/deployment it is also possible to mount directories from the host computer into the pod container and make them writable from inside the pod container:
volumes:
  - hostPath:
      path: /any_directory_on_host_computer
      type: Directory
    name: host-directory
# inside container settings
volumeMounts:
  - mountPath: "/mnt/host_directory"
    name: host-directory
    readOnly: false
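After redeployment, a quick way to check that the host mount is writable from inside the container (namespace and pod name are placeholders, as above):
sudo k3s kubectl -n namespace_here exec -it pod_container_name_id_here -- /bin/sh
touch /mnt/host_directory/test_file
ls -l /mnt/host_directory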
Also, if the pod container is missing important features like sudo, apt, or some apt packages, then a custom Docker image can be used instead of the default image of the TrueNAS K3s application, in the container settings of the pod/deployment YAML file. A custom Dockerfile can be created to build a custom image based on the default one (a sketch follows the snippet below).
containers:
  #- image: nginx:1.14.2
  - image: docker-image-nginx-modified:latest
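A minimal sketch of such a Dockerfile, assuming the default image is the Debian-based nginx image and that sudo and vim are the packages to add (adjust to the actual base image and package manager):
# Dockerfile_nginx (illustrative)
FROM nginx:1.14.2
RUN apt-get update && \
    apt-get install -y sudo vim && \
    rm -rf /var/lib/apt/lists/*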
A slightly odd aspect of the k3s ctr containerd utility is that there is no command for building a custom Docker image: there is no ctr image build command.
A somewhat specific workaround is to build the Docker image with the traditional docker build command, save the image to a tar file with docker save, and then import it from the tar file with k3s ctr image import:
sudo docker build -t docker-image-nginx-modified -f ./Dockerfile_nginx .
sudo docker save docker-image-nginx-modified:latest -o docker-image-nginx-modified.tar
sudo k3s ctr image import docker-image-nginx-modified.tar
sudo k3s ctr images list
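One more detail worth noting (assuming the image exists only locally in containerd and not in a registry): make sure the image pull policy in the deployment does not force a pull from a registry, for example:
containers:
  - image: docker-image-nginx-modified:latest
    imagePullPolicy: IfNotPresent   # or Never, so the locally imported image is used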
An important note: in TrueNAS, reinstalling or updating an application resets all manual changes made to the pod/deployment YAML definitions, so the changes above have to be made again.
Also, the TrueNAS WebGUI application settings sometimes offer dedicated options for making custom changes like the ones above to pod/deployment YAML settings, without manually editing the YAML definitions.
There is also an option/button in the TrueNAS WebGUI Apps panel to install a custom application based on an original or custom Docker image. Alternatively, a custom deployment can be created on the K3s node alongside the TrueNAS apps, without involving the TrueNAS WebGUI at all, like this:
sudo k3s kubectl create -f nginx-deployment.yaml
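A minimal sketch of such an nginx-deployment.yaml, using the custom image imported earlier (all names are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-custom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-custom
  template:
    metadata:
      labels:
        app: nginx-custom
    spec:
      containers:
        - name: nginx
          image: docker-image-nginx-modified:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80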