
Learn DevOps through step-by-step tutorials and guides to fixing real-world issues.

Welcome to Py-Bucket, your go-to blog for DevOps tutorials and guides to fixing production issues.

  • ✔ Beginner-friendly DevOps guides
  • ✔ Real-world production issues and fixes

Kubernetes Pod Pending in AKS? Fix the “Insufficient Memory” Error (Real Production Fixes)


Introduction

So you deploy a pod on Kubernetes… everything looks fine…
and then suddenly you see that the pod status is Pending. Ever come across this situation? I'm sure you have.
You probably started debugging by running the command below:

    kubectl describe pod <pod-name> -n <namespace-name>

and in the pod's description you saw an “Insufficient memory” message.
Your pod just sits there… Pending forever.
This is one of the most common Kubernetes scheduling issues in production environments.
Let’s break this down the real DevOps way — not theory, but what actually works in production.

Why Does a Kubernetes Pod Show the Insufficient Memory Error?

Here the Kubernetes scheduler is basically saying:

“I don’t have enough memory on any node to run your pod.”  

 Even worse:

  • The Kubernetes scheduler tried preemption
  • Couldn’t evict anything
  • So it gave up

Why does this happen?

Before jumping to fixes, check why the pod is in the Pending state. Run the command below:
    kubectl top nodes
Look at:
  • Allocatable memory (not total)
  • Existing workload usage

Now check what you requested and what limit you set in the Kubernetes deployment file:
    kubectl get pod <pod-name> -o yaml
Check the resources section in the manifest file (Deployment, StatefulSet, etc.).
From both command results, the conclusion is simple: no node has the memory the pod requested.
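To make that comparison concrete, here is a small illustrative Python sketch (not from the original post; all names are made up) that parses Kubernetes memory quantities and checks whether a pod's request fits in a node's remaining allocatable memory:

```python
# Hypothetical helper: convert Kubernetes memory quantities (e.g. "256Mi",
# "4Gi") to bytes, then check whether a pod request fits on a node.

UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4,
         "K": 1000, "M": 1000**2, "G": 1000**3, "T": 1000**4}

def parse_quantity(q: str) -> int:
    """Parse a Kubernetes memory quantity string into bytes."""
    # Try two-letter suffixes (Mi, Gi, ...) before one-letter ones (M, G, ...).
    for suffix, factor in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if q.endswith(suffix):
            return int(float(q[: -len(suffix)]) * factor)
    return int(q)  # plain bytes, no suffix

def fits(request: str, allocatable: str, already_requested: str = "0") -> bool:
    """True if the pod's memory request fits in the node's remaining memory."""
    free = parse_quantity(allocatable) - parse_quantity(already_requested)
    return parse_quantity(request) <= free

print(fits("512Mi", "4Gi", "3Gi"))  # 1Gi free, 512Mi fits -> True
print(fits("2Gi", "4Gi", "3Gi"))    # only 1Gi free -> False
```

This is exactly the arithmetic the scheduler performs against allocatable (not total) memory, which is why `kubectl top nodes` plus the manifest's requests tells you the whole story.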

Real Scenario 1: Over-Provisioned Requests

Immediate fix (Used in production)

- Open your manifest file and update the resources section with the memory your workload actually needs.
    resources:
      requests:
        memory: "256Mi"
      limits:
        memory: "512Mi"

- Once the memory request is updated, redeploy your Helm package.
- If you use a plain manifest file, delete the pod and redeploy. Run the command below:
    kubectl apply -f <deployment.yml>
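For context, here is a minimal illustrative Deployment fragment showing where the resources block lives; the names and image are placeholders, not from the original post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25   # placeholder image
          resources:
            requests:
              memory: "256Mi"   # what the scheduler reserves
            limits:
              memory: "512Mi"   # hard cap at runtime
```

The scheduler only looks at requests when placing the pod; limits matter later, at runtime.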
Lesson

Most “insufficient memory” issues are actually bad YAML, not bad infrastructure.

Real-Life Scenario 2: AKS Node Pool Too Small

Consider a cluster you created in Azure (AKS). For the initial workload you created a node pool with small VMs, say Standard_B2s, which has only 4 GB of RAM. As you keep scheduling more pods (monitoring pods, newly introduced service pods), you are definitely going to hit a memory crunch.
The problem: even if you scale out the node count, the Insufficient Memory issue can persist, because every node is still too small for the larger pods.
What to do in this situation?

Fix:

Create a new node pool with a larger VM size. You can do this with the Azure CLI:
    az aks nodepool add \
      --resource-group <rg> \
      --cluster-name <aks> \
      --name highmemory \
      --node-vm-size Standard_D4s_v3 \
      --node-count 2

Here highmemory is the node pool name.
Move the pods with large memory requests to this pool using a nodeSelector in the Kubernetes manifest (this assumes the pool's nodes carry a matching label, e.g. added with --labels workload=database when creating the pool):
    nodeSelector:
        workload: database
Lesson

  • Scaling node count ≠ solving memory issues
  • VM size matters more.

 

Real-Life Scenario 3: No Preemption Victims Found

All the pods we run have the same priority, so Kubernetes can't decide which pods to keep and which low-priority pods can be removed.
Sometimes it is better to evict low-priority pods when a high-priority pod is stuck in Pending.

Fix

Create priority class:
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: db-critical
    value: 100000
Apply it to the pod you want to mark as high priority:
    priorityClassName: db-critical
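Putting the two pieces together, a minimal illustrative manifest looks like this; the pod name and image are placeholders, not from the original post:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: db-critical
value: 100000
globalDefault: false
description: "High priority for database workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: db-pod              # placeholder name
spec:
  priorityClassName: db-critical
  containers:
    - name: db
      image: postgres:16    # placeholder image
```

With this in place, the scheduler can evict lower-priority pods to make room for db-pod instead of reporting “No preemption victims found”.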

Lesson

Preemption only works if you define priorities


How I Debug a Pending Pod in Real Life

When I see this issue, I don’t guess.

I run:

    kubectl describe pod <pod>

    kubectl top nodes

    kubectl get events --sort-by=.metadata.creationTimestamp

Then I check:

  • Is memory really full?
  • Or just requested badly?
  • Can scaling fix it?
  • Or do I need a new node pool?
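The checklist above can be sketched as a tiny triage helper. This is an illustrative Python snippet (not from the original post) that classifies the scheduler message found in `kubectl describe pod` events:

```python
import re

# Illustrative triage: map common scheduler messages (as seen in
# `kubectl describe pod` events) to a likely next step.
RULES = [
    (r"Insufficient memory",
     "Check allocatable memory vs. pod requests (kubectl top nodes)"),
    (r"No preemption victims found",
     "Define PriorityClass objects so preemption can work"),
    (r"didn't match Pod's node affinity/selector",
     "Check nodeSelector/affinity labels on nodes"),
    (r"Insufficient cpu",
     "Check CPU requests vs. allocatable CPU"),
]

def triage(describe_output: str) -> str:
    """Return the first matching hint, or a default suggestion."""
    for pattern, hint in RULES:
        if re.search(pattern, describe_output):
            return hint
    return "No known pattern matched; inspect events with kubectl get events"

sample = "0/3 nodes are available: 3 Insufficient memory."
print(triage(sample))
# -> Check allocatable memory vs. pod requests (kubectl top nodes)
```

It is just pattern matching on the event text, but it mirrors the order in which I actually rule things out.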

Conclusion

This error:

    Insufficient memory


does NOT mean:

“You need more memory”

It usually means:

“Your cluster is poorly planned or badly configured”


Frequently Asked Questions (FAQ)

Why is my Kubernetes pod stuck in Pending with insufficient memory?

This happens when no node in the cluster has enough allocatable memory to satisfy the pod's requested resources. Even if total memory exists, Kubernetes scheduling depends on available allocatable memory.

Does increasing node count fix insufficient memory issues in AKS?

Not always. If your node VM size is too small, adding more nodes may not solve the issue. In such cases, upgrading to a larger VM size is required.

What does “No preemption victims found” mean in Kubernetes?

This means Kubernetes cannot evict any existing pods to free enough memory for the new pod. This usually happens when all pods have similar priority or insufficient reclaimable resources.

What is the fastest way to fix a pod stuck in Pending?

The quickest fixes are reducing memory requests in your pod configuration or scaling your AKS node pool to provide more available memory.

How do I check why my pod is stuck in Pending?

Run the command kubectl describe pod <pod-name> to view scheduling errors and identify whether memory, CPU, or other constraints are causing the issue.

Can wrong resource requests cause insufficient memory errors?

Yes. Overestimated resource requests in your Kubernetes manifest can block scheduling even if actual usage is low. Proper resource tuning is essential.

Is autoscaler required in AKS to avoid Pending pods?

Not strictly required, but highly recommended. Enabling the cluster autoscaler ensures new nodes are added automatically when resources run short, which helps prevent pods from staying in the Pending state (provided the VM size itself is adequate).
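As a sketch, the autoscaler can be enabled on an existing AKS node pool with the Azure CLI; the resource group, cluster, pool names, and count bounds below are placeholders:

```shell
# Illustrative only: enable the cluster autoscaler on one node pool.
az aks nodepool update \
  --resource-group <rg> \
  --cluster-name <aks> \
  --name <nodepool> \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```

Remember the lesson from Scenario 2: the autoscaler adds more nodes of the same size, so it cannot help if no single node is big enough for the pod.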



Author Details

Hi, I'm Prashant — a full-time software engineer with a passion for automation, DevOps, and sharing what I learn. I started Py-Bucket to document my journey through tools like Docker, Kubernetes, Azure DevOps, and PowerShell scripting — and to help others navigate the same path. When I’m not coding or writing, I’m experimenting with side projects, exploring productivity hacks, or learning how to build passive income streams online. This blog is my sandbox — and you're welcome to explore it with me. Get in touch or follow me for future updates!