I’m working with Kubernetes and I often need to check my application logs for specific keywords, but only within the most recent log entries.
For example, I can get the logs from my pod using:
kubectl logs my-pod-name
I know I can search for a pattern using grep, like:
kubectl logs my-pod-name | grep "ERROR"
However, this gives me all matching lines from the logs, not just the most recent ones.
What I want is something like:
- Only fetch the last few minutes or last few lines of logs
- Filter those logs for a keyword (e.g., "ERROR" or "timeout")
I’ve seen --since=5m and --tail=100 options in kubectl logs, but I’m not sure how to combine them effectively with pattern matching while still being efficient.
How can I use kubectl to only get logs from, say, the last 5 minutes and filter for a keyword?
Is there a built-in way in kubectl to do this without relying entirely on grep?
If not, what’s the best practice for efficiently doing this in production environments?
Yes. kubectl itself has no built-in keyword filtering, so if you're only using kubectl, the standard approach is to combine --since or --tail with grep to narrow the output to the relevant log lines. For example:
# Last 200 lines + keyword
kubectl logs POD -n NS --tail=200 | grep -Ei 'error|timeout'
# Follow from last 5 minutes
kubectl logs POD -n NS --since=5m -f | grep -Ei 'error|timeout'
# All containers in pod
kubectl logs POD -n NS --since=5m --all-containers | grep -Ei 'error|timeout'
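Since kubectl only limits *which* lines are fetched (by time or count) and grep does the actual matching, it is worth sanity-checking the grep pattern on its own. The log lines below are made up purely for illustration:

```shell
# Feed sample log lines through the same case-insensitive filter;
# -E enables extended regex (the 'error|timeout' alternation),
# -i makes the match case-insensitive.
printf '%s\n' \
  '2024-05-01T12:00:00Z INFO  request served in 12ms' \
  '2024-05-01T12:00:01Z ERROR db connection refused' \
  '2024-05-01T12:00:02Z WARN  upstream Timeout after 30s' \
  | grep -Ei 'error|timeout'
# Matches the ERROR and Timeout lines, but not the INFO line.
```

Because the filtering happens client-side, prefer --since/--tail to keep the amount of log data transferred from the API server small before piping into grep.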
There are also external tools for working with Kubernetes logs, such as stern and kail, which let you conveniently tail logs from multiple pods at once and filter them in real time. Both are read-only clients, so they are safe to run against production clusters for faster troubleshooting.
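As a sketch of how stern covers this use case (the pod pattern and namespace below are placeholders, and the regex syntax is Go/RE2, where (?i) means case-insensitive):

```shell
# stern matches pods by regex and streams their logs together;
# --since limits to the last 5 minutes, --include keeps only matching lines.
stern 'my-app.*' -n my-namespace --since 5m --include '(?i)error|timeout'
```

This replaces both the per-pod kubectl invocation and the grep pipe in a single command, which is convenient when the workload runs as multiple replicas.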
Alternatively, for production environments the best practice is to ship logs to a centralized store such as the ELK stack or Grafana Loki, which supports indexed, time-bounded searches and retains logs even after pods restart or are deleted, something kubectl logs cannot do.
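In Loki, for instance, the whole kubectl-plus-grep pipeline collapses into a single LogQL query, with the time range (e.g., "last 5 minutes") chosen in Grafana or via the query API. The app label here is hypothetical; use whichever labels your log collector actually attaches:

```
{app="my-app"} |~ "(?i)(error|timeout)"
```

The |~ operator is a regex line filter, so this is the server-side equivalent of the grep -Ei pattern above.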