docker, load-balancing

Consul and Tomcat in the same Docker container


This is a two-part question.

First part: What is the best approach to running Consul and Tomcat in the same Docker container?

I've built my own image, installing both Tomcat and Consul correctly, but I am not sure how to start them. I tried putting both calls as CMD instructions in the Dockerfile, but had no success. I also tried putting Consul as an ENTRYPOINT in the Dockerfile and calling Tomcat in the docker run command. It could be vice versa, but I have a feeling that neither is a good approach.

The containers will run in one AWS instance. Each Docker container would run Consul as a server and register itself with my main Consul server in another AWS instance. Consul and Consul-template will be integrated to provide proper load balancing. This way, my HAproxy instance will be able to correctly forward requests as I plug or unplug containers.

Second part: In individual tests, the Docker container was able to reach my main Consul server (the leader), but it failed to register itself as an "alive" node.

Reading the logs on the Consul server, I think it is a matter of which ports I am exposing and publishing. In AWS, I have already allowed communication on all TCP and UDP ports between the instances in this particular Security Group.

Do you know which ports I should be exposing and publishing to allow proper communication between a standalone Consul (AWS instance) and Consul servers (running inside Docker containers on another AWS instance)? What is the command to run the Docker container? docker run -p 8300:8300 .........
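For reference, the defaults in the Consul documentation are 8300 (server RPC), 8301 TCP and UDP (Serf LAN gossip), 8302 TCP and UDP (Serf WAN gossip), 8500 (HTTP API), and 8600 TCP and UDP (DNS), so I assume the command would need to publish something like this (the image name is a placeholder):

    docker run -d --name consul-tomcat \
        -p 8300:8300 \
        -p 8301:8301 -p 8301:8301/udp \
        -p 8302:8302 -p 8302:8302/udp \
        -p 8500:8500 \
        -p 8600:8600 -p 8600:8600/udp \
        my-consul-tomcat-image

Is that correct?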

Thank you.


Solution

  • I would use ENTRYPOINT to kick off a script on docker run.

    Something like:

        ENTRYPOINT ["/myamazingbashscript.sh"]

    The exact path depends on where you copy the script into your image, but you get the idea.
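    A slightly fuller sketch of the relevant Dockerfile lines, assuming you copy the script into the image yourself (the destination path is just an example):

        COPY myamazingbashscript.sh /myamazingbashscript.sh
        RUN chmod +x /myamazingbashscript.sh
        ENTRYPOINT ["/myamazingbashscript.sh"]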

    The script should start both services and finally tail -f the Tomcat logs (or any logs).

    tail -f will prevent the container from exiting, since the tail -f command never returns, and it will also let you see what Tomcat is doing.
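    A minimal version of such a script, assuming conventional install paths (the -join address, data dir, and Tomcat paths are placeholders to adjust for your image):

        #!/bin/bash
        # Start the Consul agent in server mode in the background;
        # -join points at the main Consul server's address.
        consul agent -server -data-dir=/tmp/consul -join=10.0.0.10 &

        # Launch Tomcat; catalina.sh start returns after spawning it.
        /usr/local/tomcat/bin/catalina.sh start

        # Never exits: keeps the container alive and streams the log.
        tail -f /usr/local/tomcat/logs/catalina.out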

    Do docker logs -f <containerName> to watch the logs after a docker run.

    Note that because the container doesn't exit, you can exec into it with docker exec -it containerName bash.

    This lets you have a sniff around inside the container.

    It's generally not the best approach to have two services in one container, because it breaks separation of concerns and reusability, but you may have valid reasons.

    To build, use docker build; then run with docker run as you stated.
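    For example (image and container names are placeholders):

        docker build -t consul-tomcat .
        docker run -d --name consul-tomcat-1 consul-tomcat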

    If you decide to go for a two-container solution, then you will need to expose ports between the containers to allow them to talk to each other. You could share files between containers using volumes_from; a rough sketch of that variant follows.
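    A sketch using the legacy CLI flags (image names are placeholders):

        # Consul in its own container, with the HTTP API published.
        docker run -d --name consul -p 8500:8500 my-consul-image

        # Tomcat in a second container; --link lets it reach the consul
        # container by name, and --volumes-from mounts its volumes.
        docker run -d --name tomcat --link consul:consul --volumes-from consul my-tomcat-image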