I have a Python app with Celery running in Docker containers. I want to have several workers consuming from different queues. For example:
celery worker -c 3 -Q queue1
celery worker -c 7 -Q queue2,queue3
But I don't know how to do this in docker-compose. I found out about celery multi and tried to use it:
version: '3.2'
services:
  app:
    image: "app"
    build:
      context: .
    networks:
      - net
    ports:
      - 5004:5000
    stdin_open: true
    tty: true
    environment:
      FLASK_APP: app/app.py
      FLASK_DEBUG: 1
    volumes:
      - .:/home/app
  app__celery:
    image: "app"
    build:
      context: .
    command: sh -c 'celery multi start 2 -l INFO -c:1 3 -c:2 7 -Q:1 queue1 -Q:2 queue2,queue3'
But I get this:
app__celery_1 | > celery1@1ab37081acb9: OK
app__celery_1 | > celery2@1ab37081acb9: OK
app__celery_1 exited with code 0
And my celery container exits. How do I keep it from closing, and how do I get its logs?
UPD: celery multi starts the workers as background (daemonized) processes, so the container's main process exits. How can I run celery multi in the foreground?
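celery multi always detaches the workers, so there is no real foreground mode. A common workaround is to keep the container's main process alive by tailing the log files the workers write; a sketch, where the --pidfile/--logfile paths are just an example:
command: sh -c 'celery multi start 2 -l INFO -c:1 3 -c:2 7 -Q:1 queue1 -Q:2 queue2,queue3 --pidfile=/tmp/%n.pid --logfile=/tmp/%n%I.log && tail -f /tmp/*.log'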
I used supervisord instead of celery multi. Supervisord runs in the foreground, so my container does not exit.
command: supervisord -c supervisord.conf
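In the compose file the worker service then looks roughly like this (a sketch; the volume mount and working_dir come from my setup and may differ for you):
app__celery:
  image: "app"
  build:
    context: .
  networks:
    - net
  volumes:
    - .:/home/app
  working_dir: /home/app
  command: supervisord -c supervisord.conf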
And I added all the queues to supervisord.conf:
[program:celery]
command = celery worker -A app.celery.celery -l INFO -c 3 -Q q1
directory = %(here)s
startsecs = 5
autostart = true
autorestart = true
stopwaitsecs = 300
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0

[program:beat]
command = celery -A app.celery.celery beat -l INFO --pidfile=/tmp/beat.pid
directory = %(here)s
startsecs = 5
autostart = true
autorestart = true
stopwaitsecs = 300
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0

[supervisord]
loglevel = info
nodaemon = true
pidfile = /tmp/supervisord.pid
logfile = /dev/null
logfile_maxbytes = 0
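The snippet above only shows the worker for q1; the second worker from the original example (concurrency 7, the other two queues) gets its own program section in the same file. A sketch, assuming the other queues are named q2 and q3:
[program:celery2]
; second worker, mirroring "celery worker -c 7 -Q queue2,queue3" from the question
command = celery worker -A app.celery.celery -l INFO -c 7 -Q q2,q3
directory = %(here)s
startsecs = 5
autostart = true
autorestart = true
stopwaitsecs = 300
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0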