My container command consists of multiple sub-commands. The YAML file looks like this (there is only one container in the pod):
apiVersion: batch/v1
kind: Job
metadata:
  name: t-j
  namespace: tom # job and pvc should be in the same namespace
spec:
  template:
    metadata:
      labels:
        app: test-job
    spec:
      containers:
      - name: test-job
        image: <URL>/tom/test:v001
        command: ["bash", "-c", "tail /proc/cpuinfo -n 28 &>> job.log; python3 setup_util_cpp.py build_ext --inplace &>> job.log; python3 script_01.py &>> job.log; python3 script_02.py &>> job.log; echo '--------------Purpose: test ----------------' &>> job.log; mkdir result ; mv *.csv result; mv job.log result; tar -cjf result.bz2 result/ ; aws s3 cp --endpoint http://<another URL> /result.bz2 s3://mybucket01/ --no-progress --only-show-errors"]
        resources:
          <some other specifications>
The sub-command python3 script_01.py has been running for days. I don't need the results from this script, and I'd like the other sub-commands to run as planned. Is it possible to kill the current sub-command without terminating the whole flow?
I tried docker kill --signal='SIGTERM' test-job, but got Error: no container with name or ID "test-job" found: no such container. docker container ls returned an empty list. kubectl get pods returned t-j-abcde, and docker kill --signal='SIGTERM' t-j-abcde did not work either.
The image is based on Ubuntu 22.04, and docker on this machine is actually podman 4.6.1.
If you want to kill a process inside the container, try these commands:
1. kubectl exec -it -n $NAMESPACE $POD_NAME -- ps -ef # find the PID
2. kubectl exec -it -n $NAMESPACE $POD_NAME -- kill $PID # kill it
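For the pod in the question, that would look something like the following (the namespace tom and pod name t-j-abcde are taken from the question; this assumes ps is present in the image, which is not guaranteed for minimal images):

kubectl exec -it -n tom t-j-abcde -- ps -ef | grep script_01   # locate the PID of script_01.py
kubectl exec -it -n tom t-j-abcde -- kill 1234                 # replace 1234 with the PID found above; kill sends SIGTERM by default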
But the operations above will cause the pod to restart, and then script_01.py will run again :)
You might then want to modify spec.containers.command directly, but that also leads to a pod restart.
As far as I know, the only way to manage a long-running process such as python3 script_01.py is to use a sidecar container, with shareProcessNamespace configured in the pod spec. Note that a sidecar can also be implemented with the OpenKruise SidecarSet.
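A minimal sketch of that setup, assuming a hypothetical busybox debug sidecar (everything except the fields copied from your Job is an assumption): with shareProcessNamespace: true, all containers in the pod share one PID namespace, so the sidecar can see and signal processes started by test-job.

apiVersion: batch/v1
kind: Job
metadata:
  name: t-j
  namespace: tom
spec:
  template:
    spec:
      shareProcessNamespace: true        # all containers share one PID namespace
      containers:
      - name: test-job
        image: <URL>/tom/test:v001
        command: ["bash", "-c", "..."]   # the original command chain
      - name: debug-sidecar              # hypothetical helper container
        image: busybox:1.36
        command: ["sleep", "1000000"]    # keep the sidecar alive
      restartPolicy: Never

You can then signal the script from the sidecar without touching the main container's PID 1, so the bash -c chain in test-job moves on to the next sub-command:

kubectl exec -it -n tom t-j-abcde -c debug-sidecar -- pkill -f script_01.py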