I was waiting for pods to come up and wanted to see resource usage in real time.

watch for live updates

Basic watch:

watch kubectl get pods

By default, watch refreshes every 2 seconds. For a faster refresh:

watch -n 0.5 kubectl get pods

Highlight differences:

watch -d kubectl get pods

Filter with grep (note the quotes):

watch 'kubectl get pods | grep my-app'
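kubectl also has a built-in --watch flag, which streams changes instead of re-running the command. Combined with a label selector, it can replace the grep pipeline (app=my-app is a placeholder label):

```shell
# Stream pod changes as they happen; new lines are appended
# rather than the screen being redrawn
kubectl get pods --watch

# Narrow to one app with a label selector instead of grep
kubectl get pods --watch -l app=my-app
```

Unlike watch, this shows events as they arrive, so you also see the intermediate states a pod passes through.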

kubectl top for resource usage

Pod resource usage:

kubectl top pods

Node resource usage:

kubectl top nodes

Filter by label:

kubectl top pods -l app=my-app
kubectl top nodes -l workload=backend

Combine them

Monitor resource usage live:

watch kubectl top pods

Watch a specific app:

watch 'kubectl top pods | grep my-app'

Watch nodes during an upgrade:

watch kubectl get nodes

When to use this vs proper monitoring

kubectl top is for quick checks - “is this pod using more memory than expected?” For anything beyond that, use Prometheus/Grafana or your cloud provider’s monitoring.

Limitations of kubectl top:

  • No history - you only see current values
  • Numbers can lag by 30-60 seconds
  • No alerting
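If you need a rough history without standing up Prometheus, one stopgap is appending timestamped snapshots to a file. A quick sketch; the 15-second interval and the log filename are arbitrary choices:

```shell
# Append a timestamped kubectl top snapshot every 15 seconds
# (Ctrl-C to stop; top-history.log is an arbitrary filename)
while true; do
  echo "--- $(date -Is)" >> top-history.log
  kubectl top pods >> top-history.log
  sleep 15
done
```

This gives you something to grep through after the fact, but it is no substitute for real time-series storage.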

These commands are ideal for real-time debugging. For capacity planning, trend analysis, or alerting, Prometheus is the better choice.

Typical resource limits

For reference, these are reasonable starting points:

  • API services: 256Mi-512Mi memory, 100m-500m CPU
  • Background workers: 512Mi-1Gi memory, 200m-1000m CPU
  • Databases: At least 1Gi memory, depends heavily on workload

Always start low and adjust based on actual usage from monitoring.
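To apply starting points like these without editing YAML, kubectl can set them imperatively (the deployment name my-app is a placeholder):

```shell
# Set requests and limits on an existing deployment
kubectl set resources deployment my-app \
  --requests=cpu=100m,memory=256Mi \
  --limits=cpu=500m,memory=512Mi
```

After the change, compare kubectl top pods against these numbers to see whether the limits are realistic.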

Note: kubectl top requires metrics-server to be running in your cluster. Most managed Kubernetes clusters have it enabled by default.
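To check whether metrics-server is running, and install it if it is missing:

```shell
# Check for metrics-server in the usual namespace
kubectl get deployment metrics-server -n kube-system

# Install it from the official manifest if absent
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```

If kubectl top returns "Metrics API not available", a missing or unhealthy metrics-server is the usual cause.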

For detailed debugging, see kubectl debugging commands.