Introduction
You deploy your application, and then suddenly you see that the pod status is Pending. Have you ever come across this situation? I am sure you have.
Most of us start debugging by running kubectl describe pod.
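A minimal first step looks like this (pod name and namespace are placeholders):

```shell
# Inspect the Pending pod; the Events section at the bottom explains
# why the scheduler could not place it
kubectl describe pod <pod-name> -n <namespace>
```

The Events section typically contains a FailedScheduling message such as `0/3 nodes are available: 3 Insufficient memory.`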
Why Does a Kubernetes Pod Show the Insufficient Memory Error?
Here, the Kubernetes scheduler is essentially saying:
“I don’t have enough memory on any node to run your pod.”
Even worse:
- The Kubernetes scheduler tried preemption
- It couldn't evict anything
- So the scheduler gave up
Why does this happen? The scheduler places pods based on:
- Allocatable memory on each node (not total node memory)
- Memory already requested by existing workloads
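You can compare allocatable memory (what the scheduler actually counts) against total capacity for each node; a quick sketch, assuming kubectl access to the cluster:

```shell
# Allocatable vs. total capacity per node -- the scheduler only uses allocatable
kubectl get nodes -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.memory,ALLOCATABLE:.status.allocatable.memory

# How much memory is already requested on a node (node name is a placeholder)
kubectl describe node <node-name> | grep -A 8 "Allocated resources"
```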
Real-Life Scenario 1: Over-Provisioned Requests
Immediate fix (used in production):
- Lower the memory request to match actual usage; once the request is updated, redeploy your Helm package.
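For example, tightening an over-provisioned request might look like this (the values are illustrative; measure real usage with kubectl top pod first):

```yaml
# Illustrative: request what the app actually uses, cap bursts with a limit
resources:
  requests:
    memory: "512Mi"   # was e.g. "4Gi", far above real usage
    cpu: "250m"
  limits:
    memory: "1Gi"
```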
Most “insufficient memory” issues are actually bad YAML, not bad infrastructure
Real-Life Scenario 2: AKS Node Pool Too Small
Fix: move the workload to a node pool with a larger VM size, for example one labeled for it:
workload: database
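On AKS, the fix is usually a new node pool with a larger VM size rather than more small nodes. A sketch using the Azure CLI (resource group, cluster, and pool names are placeholders, and the workload=database label matches the label above):

```shell
# Add a memory-optimized node pool (names and VM SKU are illustrative)
az aks nodepool add \
  --resource-group <rg-name> \
  --cluster-name <aks-cluster> \
  --name dbpool \
  --node-count 2 \
  --node-vm-size Standard_E8s_v5 \
  --labels workload=database
```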
- Scaling node count ≠ solving memory issues
- VM size matters more.
Real-Life Scenario 3: No Preemption Victims Found
Sometimes it is better to evict low-priority pods when a high-priority pod is stuck in Pending.
Fix:
Preemption only works if you define pod priorities.
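A minimal sketch of defining a priority and attaching it to a pod (class name, value, and pod details are illustrative):

```yaml
# Illustrative PriorityClass -- higher value means scheduled first,
# and the scheduler may preempt lower-priority pods to make room
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-workload
value: 1000000
globalDefault: false
description: "High priority for business-critical pods"
---
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
spec:
  priorityClassName: critical-workload   # enables preemption of lower-priority pods
  containers:
  - name: app
    image: nginx   # placeholder image
```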
How I Debug Pod Pending Status in Real Life
When I see this issue, I don’t guess.
I run:
kubectl describe pod <pod>
kubectl top nodes
kubectl get events --sort-by=.metadata.creationTimestamp
Then I check:
- Is memory really full?
- Or just requested badly?
- Can scaling fix it?
- Or do I need a new node pool?
Conclusion
This error:
Insufficient memory
does NOT mean:
“You need more memory”
It usually means:
“Your cluster is poorly planned or badly configured”
Frequently Asked Questions (FAQ)
Why is my Kubernetes pod stuck in Pending with insufficient memory?
This happens when no node in the cluster has enough allocatable memory to satisfy the pod's requested resources. Even if total memory exists, Kubernetes scheduling depends on available allocatable memory.
Does increasing node count fix insufficient memory issues in AKS?
Not always. If your node VM size is too small, adding more nodes may not solve the issue. In such cases, upgrading to a larger VM size is required.
What does “No preemption victims found” mean in Kubernetes?
This means Kubernetes cannot evict any existing pods to free enough memory for the new pod. This usually happens when all pods have similar priority or insufficient reclaimable resources.
What is the fastest way to fix a pod stuck in Pending?
The quickest fixes are reducing memory requests in your pod configuration or scaling your AKS node pool to provide more available memory.
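A sketch of the node-pool scaling option with the Azure CLI (resource group, cluster, and pool names are placeholders):

```shell
# Scale an existing AKS node pool to add capacity
az aks nodepool scale \
  --resource-group <rg-name> \
  --cluster-name <aks-cluster> \
  --name nodepool1 \
  --node-count 5
```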
How do I check why my pod is stuck in Pending?
Run the command kubectl describe pod <pod-name> to view scheduling errors and identify whether memory, CPU, or other constraints are causing the issue.
Can wrong resource requests cause insufficient memory errors?
Yes. Overestimated resource requests in your Kubernetes manifest can block scheduling even if actual usage is low. Proper resource tuning is essential.
Is autoscaler required in AKS to avoid Pending pods?
Not strictly required, but highly recommended. Enabling the cluster autoscaler automatically adds nodes when resources are insufficient, preventing pods from staying in the Pending state due to lack of capacity.