I have Sidekiq Enterprise. My app runs on Elastic Beanstalk (a web and a worker environment), and I use the ElastiCache service for Redis. The application previously ran on the Linux platform; I have now switched to the Docker platform.
After switching to Docker, I started having problems with Sidekiq (I use sidekiqswarm). If jobs are running while we deploy the application, then after a successful deployment of the web and worker (Sidekiq) environments the jobs simply disappear, and the CloudWatch logs are empty. On the Sidekiq dashboard I see 2 processes; after a deployment 2 new ones appear, the old ones are destroyed, and along with them the jobs disappear. Has anyone run into the same problem?
Dockerfile:
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["tini", "--"]
CMD ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
case $DEPLOYMENT_TYPE in
  "web")
    bundle exec rails db:migrate
    bundle exec puma
    ;;
  "worker")
    SIDEKIQ_PRELOAD= bundle exec sidekiqswarm -C ./config/sidekiq.yml
    ;;
  *)
    echo "DEPLOYMENT_TYPE is invalid" 1>&2
    exit 1
    ;;
esac
I suspect the problem is with Docker. I have a Dockerfile plus a docker-compose file. The image is rebuilt on every deployment; could that be why the jobs are completely lost?
services:
  my-app:
    restart: always
    env_file:
      - .env
    build:
      context: .
      args:
        ....
    expose:
      - ....
    volumes:
      - ...
    .....
  nginx-proxy:
    restart: always
    image: nginx:latest
    depends_on:
      - my-app
..
I found the resolution. You need to use exec when launching the long-running process from the entrypoint script (as Mike states in the documentation), and for the container in the docker-compose file you need to set stop_grace_period: .. and stop_signal: SIGTERM. In the default configuration Docker Compose uses SIGINT, and that signal interrupts and kills all the processes, so our jobs are lost. You need to explicitly set stop_signal: SIGTERM so that Sidekiq shuts down gracefully.
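And a sketch of the matching compose settings for the worker container (the 30s grace period is an assumption; align it with your sidekiqswarm shutdown timeout so jobs can finish or be re-enqueued before SIGKILL):

```yaml
services:
  my-app:
    restart: always
    # Send SIGTERM explicitly so sidekiqswarm starts a graceful shutdown
    stop_signal: SIGTERM
    # How long Docker waits after SIGTERM before sending SIGKILL
    # (30s is an assumption; tune it to your longest-running jobs)
    stop_grace_period: 30s
```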
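For anyone hitting the same thing, here is a sketch of the corrected entrypoint.sh (the exec lines are the fix; everything else matches the script above). Without exec, bash stays in front of sidekiqswarm and does not forward SIGTERM to it, so after the grace period Docker kills everything with SIGKILL and in-flight jobs are lost. With exec, the process replaces the shell and receives the signal directly from tini:

```shell
#!/bin/bash
case $DEPLOYMENT_TYPE in
  "web")
    bundle exec rails db:migrate
    # exec replaces the shell, so Puma receives SIGTERM directly
    exec bundle exec puma
    ;;
  "worker")
    # exec lets sidekiqswarm receive SIGTERM itself and shut its
    # child Sidekiq processes down gracefully
    SIDEKIQ_PRELOAD= exec bundle exec sidekiqswarm -C ./config/sidekiq.yml
    ;;
  *)
    echo "DEPLOYMENT_TYPE is invalid" 1>&2
    exit 1
    ;;
esac
```

The environment assignment before exec still applies to the exec'd command, so SIDEKIQ_PRELOAD= behaves the same as before.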