elasticsearch, logstash, redhat, filebeat, podman

Sending logs of (rootless) podman containers to ELK?


I have been struggling with this question for a while and it is time to ask for some guidance. We have 10+ containers that run on different RHEL VMs, deployed via Ansible as systemd services (in other words, there is no Kubernetes or other container orchestration service installed on top of / next to Podman). These containers run as rootless containers. Unfortunately the Docker (Podman) socket is not active on the VMs (although we can turn it on). All the VMs have Filebeat installed. We have an ELK stack deployed separately. What I have found while working on this:

I am looking for a solution that makes centralized logging work for this setup in the easiest way. For now I would be grateful for some hints about a working concept.

Thank you in advance.


Solution

  • There is a journald input in Filebeat, so you don't need to deploy Journalbeat (which is deprecated) separately.

    journald natively supports structured data in logs (you can run journalctl -o json to see which fields are in your logs). So if the journald log driver passes container metadata through to journald, then the events produced by Filebeat will contain fields like container.{id,name,image.tag} (assuming it follows the same field naming as the Docker journald log driver). If it uses a custom naming convention instead, Filebeat will produce events with fields like journald.custom.*.
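If the container metadata does end up under a journald.custom.* style prefix rather than the ECS container.* fields, it can be mapped with Filebeat's rename processor. A minimal sketch — the exact source field names here are an assumption and should be checked against a real event from your setup first:

```yaml
processors:
  - rename:
      fields:
        # Source field names are assumptions; inspect an actual
        # Filebeat event to confirm how journald fields are prefixed.
        - from: "journald.custom.CONTAINER_ID_FULL"
          to: "container.id"
        - from: "journald.custom.CONTAINER_NAME"
          to: "container.name"
      ignore_missing: true
      fail_on_error: false
```

With ignore_missing and fail_on_error set this way, events that lack the fields (e.g. non-container journal entries) pass through unchanged.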

    Here's what the podman logs look like on my machine when running journalctl -o json | jq -S .

    {
      "CODE_FILE": "src/ctr_logging.c",
      "CODE_FUNC": "write_journald",
      "CODE_LINE": "264",
      "CONTAINER_ID": "ee059a097566",
      "CONTAINER_ID_FULL": "ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac",
      "CONTAINER_NAME": "modest_heyrovsky",
      "CONTAINER_PARTIAL_MESSAGE": "true",
      "MESSAGE": "10.0.2.100 - - [29/Aug/2023:16:46:50 +0000] \"GET / HTTP/1.1\" 200 45",
      "PRIORITY": "6",
      "SYSLOG_IDENTIFIER": "conmon",
      "_AUDIT_LOGINUID": "1000",
      "_AUDIT_SESSION": "4",
      "_BOOT_ID": "f5f4c28d95df4cfaaac4d7ec5e33eeb5",
      "_CAP_EFFECTIVE": "1ffffffffff",
      "_CMDLINE": "/usr/bin/conmon --api-version 1 -c ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac -u ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac -r /usr/bin/crun -b /home/ubuntu/.local/share/containers/storage/overlay-containers/ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac/userdata -p /run/user/1000/containers/overlay-containers/ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac/userdata/pidfile -n modest_heyrovsky --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level warning --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/overlay-containers/ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac/userdata/oci-log -t --conmon-pidfile /run/user/1000/containers/overlay-containers/ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/ubuntu/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac",
      "_COMM": "conmon",
      "_EXE": "/usr/bin/conmon",
      "_GID": "1000",
      "_HOSTNAME": "linux",
      "_MACHINE_ID": "0829fb9723294fceb0eddbf4c45b197b",
      "_PID": "6088",
      "_SELINUX_CONTEXT": "unconfined\n",
      "_SOURCE_REALTIME_TIMESTAMP": "1693327610600063",
      "_SYSTEMD_CGROUP": "/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-conmon-ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac.scope",
      "_SYSTEMD_INVOCATION_ID": "98a1d7a85cf048eaa726b6e5aadbb043",
      "_SYSTEMD_OWNER_UID": "1000",
      "_SYSTEMD_SLICE": "user-1000.slice",
      "_SYSTEMD_UNIT": "user@1000.service",
      "_SYSTEMD_USER_SLICE": "user.slice",
      "_SYSTEMD_USER_UNIT": "libpod-conmon-ee059a097566fdc5ac9141bfcdfbed0c972163da891de076e0849d7b53597aac.scope",
      "_TRANSPORT": "journal",
      "_UID": "1000",
      "__CURSOR": "s=66bb92cf8741499ca3d9dd4ef84dd0a4;i=2427;b=f5f4c28d95df4cfaaac4d7ec5e33eeb5;m=18f10791ee;t=604128eb7daa0;x=41277d78955a13c3",
      "__MONOTONIC_TIMESTAMP": "107123020270",
      "__REALTIME_TIMESTAMP": "1693327610600096"
    }
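Before wiring up Filebeat, it can be handy to spot-check these fields on the VM itself. As a sketch, the snippet below pulls individual values out of a journalctl -o json event line with plain sed, assuming a minimal RHEL host where jq may not be installed; the sample line is a trimmed version of the record above:

```shell
# Trimmed sample of one `journalctl -o json` event (see full record above).
line='{"CONTAINER_ID":"ee059a097566","CONTAINER_NAME":"modest_heyrovsky","MESSAGE":"GET / HTTP/1.1 200"}'

# Extract the value of a given top-level string field from a JSON line.
# (Quick-and-dirty: assumes the value contains no escaped quotes.)
json_field() {
  printf '%s' "$2" | sed -n "s/.*\"$1\":\"\\([^\"]*\\)\".*/\\1/p"
}

json_field CONTAINER_NAME "$line"   # -> modest_heyrovsky
json_field CONTAINER_ID   "$line"   # -> ee059a097566
```

On a live host the same idea applies to the real stream, e.g. journalctl -o json _COMM=conmon piped into the extraction (for rootless sessions, remember the entries live in the per-user journal).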
    

    Additionally, when using the Filebeat journald input you might want to apply a filter to only ingest data from Podman. The configuration below produces the same data as journalctl _COMM=conmon.

    filebeat.inputs:
    - type: journald
      id: podman-container-logs
      include_matches.match:
        - _COMM=conmon
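One caveat specific to rootless containers: conmon writes to the per-user journal of the container's owner. A Filebeat running as root and reading the default local journal will usually see the merged user journals, but if the container events don't show up, the journald input's paths option can point at the user journal explicitly. A sketch, assuming persistent journald storage and UID 1000 — both the placeholder machine-id and the UID must be adjusted to your hosts:

```yaml
filebeat.inputs:
- type: journald
  id: podman-container-logs
  # Assumption: persistent journald storage; replace <machine-id>
  # with the host's actual /etc/machine-id value.
  paths:
    - /var/log/journal/<machine-id>/user-1000.journal
  include_matches.match:
    - _COMM=conmon
```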