A pod was crashing and I needed to see what went wrong.
kubectl logs
View current logs:
kubectl logs my-pod-abc123
View logs from a crashed container (the previous instance):
kubectl logs my-pod-abc123 -p
The -p flag (short for --previous) is the key: without it you can’t see why the pod crashed, because kubectl logs only shows the current container instance. For CrashLoopBackOff pods, this is often the only way to see the actual error.
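To avoid scrolling through the whole previous log, you can combine -p with --tail (a standard kubectl logs flag) to show just the last lines before the crash — the pod name here is a placeholder:
kubectl logs my-pod-abc123 -p --tail=50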
Follow logs in real time:
kubectl logs my-pod-abc123 -f
If the pod has multiple containers:
kubectl logs my-pod-abc123 -c my-container
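If you don’t know which container is the problem, kubectl also supports dumping every container in the pod at once (--all-containers is a standard flag; pod name is a placeholder):
kubectl logs my-pod-abc123 --all-containers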
stern for multiple pods
kubectl logs only handles one pod at a time. For multiple replicas, use stern:
brew install stern
Tail all pods matching a pattern:
stern my-app
The query is a regular expression matched against pod names, so stern checkout would match checkout-api-abc123 and checkout-page-xyz789.
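Since the query is a regex, you can narrow the match if a plain substring catches too many pods — these names are illustrative:
stern "checkout-(api|page)"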
Filter logs:
stern my-app -i "error" # include pattern
stern my-app -e "health" # exclude pattern
Specify namespace:
stern my-app -n production
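The flags combine, which is how I typically run it during an incident — --since is a real stern flag that limits how far back to read, and the namespace and pattern here are placeholders:
stern my-app -n production --since 15m -i "error"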
Is stern worth installing? Yes. Once you have multiple replicas, kubectl logs becomes painful. stern is one of the first things I install on a new machine.
When to use what
- kubectl logs - Quick check of a single pod, or when you need previous logs (-p)
- stern - Multiple pods, real-time tailing during debugging
- Logging platform (Loki, CloudWatch, etc.) - Historical queries, searching across services, alerting
For production issues, I usually start with stern for real-time debugging, then switch to the logging platform to search history.
describe for pod events
To see what’s happening with the pod itself:
kubectl describe pod/my-pod-abc123
This shows events, restart counts, and why containers failed.
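If you only want the events without the rest of the describe output, kubectl can query them directly — --field-selector is a standard flag, and the pod name is a placeholder:
kubectl get events --field-selector involvedObject.name=my-pod-abc123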
For the full debugging workflow, see kubectl debugging commands.