Tags: docker, dockerfile, amazon-ecs, docker-entrypoint

Is it "bad" to install a system package in a docker entryPoint script?


I've got an AWS ECS service that includes several containers that are nearly identical. I initially built this with a separate Dockerfile for each image, and it works fine, but the builds are slow, with a lot of duplicated effort. Building a single base image that all the containers extend seems like the "Docker way" to do this, but it adds overhead and feels like overkill, so I've been avoiding it.

I can solve this more easily by starting each container with a different entrypoint script, but that means at least one of the entrypoints would install a system package or two (things like apt-get install ...), which feels a little "dirty." Still, I can't think of any real problem with this beyond somewhat slower startup.
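For concreteness, the kind of entrypoint I mean would look roughly like this (imagemagick is just a placeholder for whatever that one container needs):

    #!/bin/sh
    set -e
    # Install the extra packages this particular container needs, at startup.
    apt-get update
    apt-get install -y --no-install-recommends imagemagick
    # Hand control to the container's main command so it runs as PID 1.
    exec "$@"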

Is this a terrible idea? Are there other issues I should be considering before installing packages in my entrypoint instead of in the Docker image build?


Solution

  • You should include all of the packages you need in your image build.

    Trying to install things in your image's entrypoint script means every container launch depends on the remote package repositories being reachable, takes longer to start, and runs software that never went through your build or tests.

    If everything is included in your image and there's an outage in the remote repositories, you won't be able to rebuild the image until they come back, but you'll still be able to launch new containers, scale up, scale down, and so on. If a package changes in the remote repository, you won't pick it up until it passes through your CI system and you've had a chance to run system tests on it. (A sketch of baking the packages in at build time follows at the end of this answer.)

    Depending on how similar your images really are, you might be able to share one slightly larger image across multiple containers. A common example is a web server and a background worker sharing much of the same code base: the worker doesn't need the web server per se, but if it can use the same image and just override the command, you save a build. Likewise, a couple of extra OS packages in a shared Docker layer usually isn't a meaningful space cost. (The task-definition sketch below shows one way to set up the override.)
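
    As a minimal sketch of the build-time approach, assuming a Debian-based image and using imagemagick, curl, and web_server.py purely as placeholder names:

        FROM python:3.12-slim
        # Install the union of OS packages the containers need, in one shared, cached layer.
        # (imagemagick and curl stand in for the real packages.)
        RUN apt-get update \
            && apt-get install -y --no-install-recommends imagemagick curl \
            && rm -rf /var/lib/apt/lists/*
        WORKDIR /app
        COPY . .
        # Default command; individual containers can override it.
        CMD ["python", "web_server.py"]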
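
    To run several nearly identical containers from that one image, override the command per container. In ECS that's the command field of each container definition (shown here in a single task definition, though separate task definitions per service work the same way); every name and the ECR URI below are hypothetical:

        {
          "family": "myapp",
          "containerDefinitions": [
            {
              "name": "web",
              "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
              "command": ["python", "web_server.py"],
              "essential": true,
              "memory": 512
            },
            {
              "name": "worker",
              "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
              "command": ["python", "worker.py"],
              "essential": true,
              "memory": 512
            }
          ]
        }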