I am converting my infrastructure to containers. I have a couple of daemons that currently live in rc.local, but I want to do this the Docker way.
Here are the commands:
sudo /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --daemonize --busy-time 30 --kill --print h=db-1,u=master,p=password,P=3306
sudo /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --daemonize --busy-time 30 --kill --print h=db-2,u=master,p=password,P=3306
What is the proper way to do this via docker?
AFAIK Percona doesn't provide an official Docker image for the toolkit, but, as suggested by @VonC in his answer as well, you could try using the Dockerfile provided in their GitHub repository. It will give you a base image with the necessary tools installed, including pt-kill. To run pt-kill, you will need to provide the necessary command when running your Docker container, or extend the image by including a CMD in your Dockerfile with the necessary information. For reference, I built the aforementioned Dockerfile:
docker build -t local/pt:3.5.0-5.el8 .
and was able to use pt-kill against a local Docker-based MySQL database by running the following command from my terminal:
docker run -d local/pt:3.5.0-5.el8 /usr/bin/pt-kill --match-command Query --victims all --busy-time 5s --print h=172.17.0.2,D=local,u=local,p=local,P=3306
I tested it by running the following statement from MySQL Workbench:
SELECT SLEEP(10)
Which produces the following output from pt-kill:
# 2023-01-01T22:12:33 KILL 16 (Query 5 sec) SELECT SLEEP(10)
LIMIT 0, 1000
# 2023-01-01T22:12:35 KILL 16 (Query 7 sec) SELECT SLEEP(10)
LIMIT 0, 1000
# 2023-01-01T22:12:37 KILL 16 (Query 9 sec) SELECT SLEEP(10)
LIMIT 0, 1000
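As mentioned above, instead of passing the command to docker run you could extend the image with a CMD. A minimal sketch of that approach (reusing the local/pt:3.5.0-5.el8 tag from the build above and the db-1 connection values from your question; adjust as needed) could be:

FROM local/pt:3.5.0-5.el8

# pt-kill stays in the foreground; Docker supervises the process,
# so --daemonize is deliberately left out.
CMD ["/usr/bin/pt-kill", "--rds", "--match-command", "Query", "--victims", "all", "--match-user", "phppoint", "--busy-time", "30", "--kill", "--print", "h=db-1,u=master,p=password,P=3306"]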
The way in which this container could be run will depend on your actual infrastructure.
I assume by the existence of the --rds flag in your command that you are connecting to an Amazon RDS instance.
There are many ways to run containers in AWS (see for instance this blog entry, which names some of them).
In your use case the way to go would probably be ECS running on EC2 compute instances (the Fargate serverless option doesn't make sense this time), or even EKS, although I think that would be overkill.
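Just as an orientation, a minimal ECS task definition for one of the daemons could look roughly like the following sketch (the family name, image URI and memory value are placeholders, not taken from your setup):

{
  "family": "pt-kill-db-1",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "pt-kill-db-1",
      "image": "aws_account_id.dkr.ecr.region.amazonaws.com/pt:3.5.0-5.el8",
      "command": ["/usr/bin/pt-kill", "--rds", "--match-command", "Query", "--victims", "all", "--match-user", "phppoint", "--busy-time", "30", "--kill", "--print", "h=db-1,u=master,p=password,P=3306"],
      "memory": 128,
      "essential": true
    }
  ]
}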
You could also provision an EC2 instance, install docker, and deploy your containers locally on it, but that would probably be a less reliable solution than using ECS.
Just in case, and the same applies if you run your containers from an on-premise machine, you will need to launch the containers at startup. In my original answer I stated that in the end you would probably need to use rc.local or systemd to run your container, perhaps by invoking an intermediate shell script that launches the actual container using docker run, but thinking about it I realized that the dependency on the Docker daemon - it must already be running before your container can start - could be a problem. Although some kind of automation could still be required, consider running your container with always or unless-stopped as the --restart policy.
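For instance, one of your two daemons could be run that way as follows (a sketch that reuses the local/pt:3.5.0-5.el8 image built above and drops --daemonize so the process stays in the foreground for Docker to supervise):

docker run -d \
  --name pt-kill-db-1 \
  --restart unless-stopped \
  local/pt:3.5.0-5.el8 \
  /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-1,u=master,p=password,P=3306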
As you suggested, you could use docker-compose to define both daemons too. The following docker-compose.yaml file could be of help:
version: '3'
x-pt-kill-common:
  &pt-kill-common
  build: .
  restart: always
services:
  pt-kill-db-1:
    <<: *pt-kill-common
    command: /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-1,u=master,p=password,P=3306
  pt-kill-db-2:
    <<: *pt-kill-common
    command: /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-2,u=master,p=password,P=3306
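With that file next to the Dockerfile, both daemons can then be built and started with something like:

docker-compose up -d --build

(or docker compose up -d --build if you use the newer Compose plugin).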
We are building the Docker image in Compose itself: this assumes that the aforementioned Percona Toolkit Dockerfile exists in the same directory in which the docker-compose.yaml file is located. As an alternative, if you prefer, you can build the image, publish it to ECR or wherever you see fit, and use it in your docker-compose.yaml file:
version: '3'
x-pt-kill-common:
  &pt-kill-common
  image: aws_account_id.dkr.ecr.region.amazonaws.com/pt:3.5.0-5.el8
  restart: always
services:
  pt-kill-db-1:
    <<: *pt-kill-common
    command: /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-1,u=master,p=password,P=3306
  pt-kill-db-2:
    <<: *pt-kill-common
    command: /usr/bin/pt-kill --rds --match-command Query --victims all --match-user phppoint --busy-time 30 --kill --print h=db-2,u=master,p=password,P=3306
In order to reuse as much code as possible, the example uses extension fields, although of course you can repeat the service definition if necessary.
Note as well that we get rid of the --daemonize option in the command definition: the process needs to stay in the foreground so that Docker can supervise it.
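If you go the ECR route, the publishing step could look roughly like the following sketch (it assumes the local/pt:3.5.0-5.el8 tag built earlier, an already existing pt repository in your registry, and the same placeholder aws_account_id and region values used in the compose file):

aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
docker tag local/pt:3.5.0-5.el8 aws_account_id.dkr.ecr.region.amazonaws.com/pt:3.5.0-5.el8
docker push aws_account_id.dkr.ecr.region.amazonaws.com/pt:3.5.0-5.el8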
In any case, you will need to configure your security groups to allow communication with the RDS database.
Having said all that, in my opinion your current solution is a good one: although ECS in particular could be a valid approach, provisioning a minimal EC2 instance with the necessary tools installed will probably be a cheaper and simpler option than running them in containers.