I want to use a centralized log collector like Logentries, but I don't want its agent running inside every one of my containers. My plan is for each service to log to its container's stdout, which is then forwarded to Logentries via the Docker API or a dedicated logging container.
The question: How do I handle a container that needs to output two logs? How do I keep them clean and separate without introducing another logging mechanism?
The scenario: I have a PHP app, which means three components: Nginx, PHP-FPM, and my code. I can put Nginx and PHP-FPM in separate Docker containers, so each gets its own log stream; we're good there. But my PHP code has to be in the same container as Nginx so that it can be served, right?
When my app needs to log something (using Monolog), I can send it to the stdout of the container (e.g., by making the log file a symlink to /dev/stdout), but then I can't keep the Nginx logs and my app's logs separate.
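For reference, the symlink trick looks like this in a Dockerfile; the path `/var/log/app.log` is just a placeholder for wherever the app writes (the official nginx image does the same thing for its own access/error logs):

```dockerfile
# Redirect the app's log file to the container's stdout.
# /var/log/app.log is a hypothetical path for illustration.
RUN ln -sf /dev/stdout /var/log/app.log
```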
Is there a way to do that? Or am I looking at this all wrong? Is there a better way to run Nginx + PHP in Docker?
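For context, here's a rough sketch of the split I have in mind, as a docker-compose file. The image names and paths are placeholders; the application code is shared into both containers via a host volume, and Nginx would hand `.php` requests to the `php` container over FastCGI:

```yaml
# Hypothetical layout: Nginx and PHP-FPM in separate containers,
# each logging to its own stdout, with the code mounted into both.
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html:ro
  php:
    image: php:fpm-alpine
    volumes:
      - ./src:/var/www/html
```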
Having found no better solution, I ended up having Laravel/Monolog log to a file in a mounted volume. The Logentries agent then collects the log from the container's host. This keeps my container as clean as possible (no Supervisor or logging agent installed inside it), and it lets whatever is running the container access the log with minimal effort.
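In Laravel this amounts to pointing a log channel at a path inside the mounted volume. A minimal sketch, assuming the volume is mounted at /var/log/app (the path and channel name are my own, not anything standard):

```php
// config/logging.php (fragment) – hypothetical "mounted" channel
// writing to a file on a host-mounted volume.
'channels' => [
    'mounted' => [
        'driver' => 'single',
        'path'   => '/var/log/app/laravel.log',
        'level'  => 'debug',
    ],
],
```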
Logging to stdout turned out not to be an option, because PHP-FPM prefixes each line of a worker's output with its own warning metadata, which makes the lines difficult to parse and, in the case of JSON logs, entirely useless. (See https://groups.google.com/forum/#!topic/highload-php-en/VXDN8-Ox9-M)
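Side note: newer PHP-FPM versions (7.3+) added a pool directive to turn that wrapping off, which may make stdout logging viable again; I haven't verified this myself, so treat it as a pointer rather than a confirmed fix:

```ini
; PHP-FPM pool config fragment (e.g. www.conf):
; forward worker stdout/stderr to FPM's own output streams,
; without the "WARNING: [pool www] child ... said into stdout" decoration.
catch_workers_output = yes
decorate_workers_output = no   ; available since PHP 7.3
```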