I have two applications (Application1, Application2) running on a Kubernetes cluster. I would like to collect the logs from my applications from outside of the Kubernetes cluster and save them in separate directories (e.g. /var/log/application1/application1-YYYYMMDD.log and /var/log/application2/application2-YYYYMMDD.log).
Therefore I deploy a Filebeat DaemonSet on the Kubernetes cluster to fetch the logs from my applications (Application1, Application2), and run a Logstash service on the instance where I want to save the log files (outside of the Kubernetes cluster).
I created two filebeat.yml files (filebeat-application1.yml and filebeat-application2.yml) in ConfigMaps and then feed both files as args to the DaemonSet (docker.elastic.co/beats/filebeat:7.10.1) as below.
....
      - name: filebeat-application1
        image: docker.elastic.co/beats/filebeat:7.10.1
        args: [
          "-c", "/etc/filebeat-application1.yml",
          "-c", "/etc/filebeat-application2.yml",
          "-e",
        ]
.....
But only /etc/filebeat-application2.yml takes effect, so I get logs only from application2.
Can you please help me with how to feed two Filebeat configuration files into the docker.elastic.co/beats/filebeat DaemonSet? Or how can I configure two "filebeat.autodiscover:" rules with two separate "output.logstash:" sections?
Below is my complete filebeat-kubernetes-whatsapp.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-application1
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat-application1.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application1"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.pod.name}*.log

    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:

    output.logstash:
      hosts: ["IP:5045"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-application2
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  filebeat-application2.yml: |-
    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          node: ${NODE_NAME}
          templates:
            - condition:
                equals:
                  kubernetes.namespace: default
            - condition:
                contains:
                  kubernetes.pod.name: "application2"
              config:
                - type: container
                  paths:
                    - /var/log/containers/*${data.kubernetes.pod.name}*.log

    processors:
      - add_locale:
          format: offset
      - add_kubernetes_metadata:

    output.logstash:
      hosts: ["IP:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: filebeat-application1
          image: docker.elastic.co/beats/filebeat:7.10.1
          args: [
            "-c", "/etc/filebeat-application1.yml",
            "-c", "/etc/filebeat-application2.yml",
            "-e",
          ]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            #privileged: true
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
          volumeMounts:
            - name: config-application1
              mountPath: /etc/filebeat-application1.yml
              readOnly: true
              subPath: filebeat-application1.yml
            - name: config-application2
              mountPath: /etc/filebeat-application2.yml
              readOnly: true
              subPath: filebeat-application2.yml
            - name: data
              mountPath: /usr/share/filebeat/data
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: config-application1
          configMap:
            defaultMode: 0640
            name: filebeat-config-application1
        - name: config-application2
          configMap:
            defaultMode: 0640
            name: filebeat-config-application2
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: varlog
          hostPath:
            path: /var/log
        # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
        - name: data
          hostPath:
            # When filebeat runs as non-root user, this directory needs to be writable by group (g+w).
            path: /var/lib/filebeat-data
            type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
  labels:
    k8s-app: filebeat
It is not possible; Filebeat supports only one output.
From the documentation:
Only a single output may be defined.
You will need to send your logs to the same Logstash instance and route the output based on some field.
For example, assuming that the events sent to Logstash contain the field kubernetes.pod.name,
you could use something like this.
output {
  if [kubernetes][pod][name] =~ /application1/ {
    # your output for the application1 logs, e.g. a dated file
    file {
      path => "/var/log/application1/application1-%{+yyyyMMdd}.log"
    }
  }
  if [kubernetes][pod][name] =~ /application2/ {
    # your output for the application2 logs
    file {
      path => "/var/log/application2/application2-%{+yyyyMMdd}.log"
    }
  }
}
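On the Filebeat side this means collapsing the two ConfigMaps into one config file with both autodiscover templates and a single output. A minimal sketch, assembled from the question's own ConfigMaps (the single Logstash port 5044 is an assumption; pick whichever port your pipeline listens on):

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      templates:
        - condition:
            contains:
              kubernetes.pod.name: "application1"
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.pod.name}*.log
        - condition:
            contains:
              kubernetes.pod.name: "application2"
          config:
            - type: container
              paths:
                - /var/log/containers/*${data.kubernetes.pod.name}*.log

processors:
  - add_locale:
      format: offset
  # keeps kubernetes.pod.name on each event so Logstash can route on it
  - add_kubernetes_metadata:

# single output; per-application routing happens in the Logstash pipeline
output.logstash:
  hosts: ["IP:5044"]
```

With this single file you also pass only one "-c" argument to the container, which avoids the config-override problem described in the question.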