I have this working but was wondering if there are any potential side effects, or even a better way to do this. The example below is generic.
I have a docker-compose file with two containers (container_1 and container_2).

container_1 exposes a volume that contains various config files that it uses to run the installed service.

container_2 mounts the volume from container_1 and periodically runs a script that pulls files and updates the config of the service running in container_1.

Every time the configs are updated I want to restart the service in container_1 without having to use cron or some of the other methods I have seen discussed.
My solution:
I put a script on container_1 that checks whether the config file has been updated (the file is initially empty, and its md5sum is stored in a separate file); if the file has changed based on the md5sum, the script updates the stored hash and kills the process.
In the compose file I have a healthcheck that runs the script periodically, and restart is set to always. When the script in container_2 runs and updates the config files in container_1, the monitor_config.sh script on container_1 will kill the process of the service, and the container will be restarted and reload the configs.
monitor_config.sh
#!/bin/sh
# current_file_hash initially contains the md5sum of the empty config file
echo "Checking if config has updated"
config_hash=$(md5sum /path/to/config_file)
current_hash=$(cat /path/to/current_file_hash)
if [ "$config_hash" != "$current_hash" ]
then
    echo "config has been updated, restarting service"
    md5sum /path/to/config_file > /path/to/current_file_hash
    kill $(pgrep service)
else
    echo "config unchanged"
fi
docker-compose.yml
version: '3.2'
services:
  service_1:
    build:
      context: /path/to/Dockerfile1
    healthcheck:
      test: ["CMD-SHELL", "/usr/bin/monitor_config.sh"]
      interval: 1m30s
      timeout: 10s
      retries: 1
    restart: always
    volumes:
      - type: volume
        source: conf_volume
        target: /etc/dir_from_1
  service_2:
    build:
      context: /path/to/Dockerfile2
    depends_on:
      - service_1
    volumes:
      - type: volume
        source: conf_volume
        target: /etc/dir_from_1
volumes:
  conf_volume:
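If you want to confirm that the healthcheck is actually firing on schedule (not part of the setup itself, just a standard Docker command), you can inspect the container's health log:

docker inspect --format '{{json .State.Health}}' <container_name>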
I know this is not the intended use of healthcheck, but it seemed like the cleanest way to get the desired effect while still maintaining only one running process in each container.

I have tried with and without tini in container_1 and it seems to work as expected in both cases.

I plan on extending the interval of the healthcheck to 24 hours, as the script in container_2 only runs once a day.
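For reference, that change would just mean bumping the interval in the healthcheck stanza; the Compose duration format accepts hour-based values:

healthcheck:
  test: ["CMD-SHELL", "/usr/bin/monitor_config.sh"]
  interval: 24h
  timeout: 10s
  retries: 1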
Use case
I'm running Suricata in container_1 and pulledpork in container_2 to update the rules for Suricata. I want to run pulledpork once a day and, if the rules have been updated, restart Suricata to load the new rules.
You may want to look at how tools like confd work, which would run as your container_1 entrypoint. It runs in the foreground, polls an external configuration source, and upon a change it rewrites the config files inside the container and restarts the spawned application.
To make your own tool like confd you'd need to include your restart trigger, maybe your health monitoring script, and then pass stdin/stdout/stderr through along with any signals, so that your restart tool becomes transparent inside the container.
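A very rough sketch of such a wrapper entrypoint in shell (the config path, the service command, and the 60-second poll interval are all placeholders, and it glosses over details like signal delivery while sleep is running) could look like this:

#!/bin/sh
# Hypothetical confd-style wrapper: run the real service as a child process,
# forward termination signals to it, and restart it when the config changes.

CONFIG=/path/to/config_file                      # placeholder path
SERVICE_CMD="/usr/bin/service --config $CONFIG"  # placeholder service command

start_service() {
    $SERVICE_CMD &
    SERVICE_PID=$!
}

# Pass TERM/INT on to the child so "docker stop" still works through the wrapper
trap 'kill "$SERVICE_PID" 2>/dev/null; wait "$SERVICE_PID"; exit 0' TERM INT

start_service
last_hash=$(md5sum "$CONFIG")

while true; do
    sleep 60                                     # placeholder poll interval
    new_hash=$(md5sum "$CONFIG")
    if [ "$new_hash" != "$last_hash" ]; then
        echo "config changed, restarting service"
        kill "$SERVICE_PID"
        wait "$SERVICE_PID"
        start_service
        last_hash=$new_hash
    fi
done

Because the service runs as a child of the wrapper, it inherits stdin/stdout/stderr, and the trap forwards TERM/INT so the container still shuts down cleanly (subject to the poll granularity).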