
Sunday, 15 March 2026

How to Recover a Failed Terraform Deployment Without Breaking Production




What should you do when terraform apply fails?

Infrastructure automation using Terraform is powerful, but sometimes deployments fail in the middle of execution. When this happens, your infrastructure may be partially created, and Terraform state may become inconsistent with the actual cloud resources.

Many DevOps engineers panic at this point and try random fixes, which can damage production infrastructure.

In this guide, you will learn how to safely recover Terraform infrastructure when terraform apply fails halfway.



Problem:

A Terraform deployment stops in the middle while creating or updating infrastructure.

Some resources are created successfully while others fail.

Now you have a dangerous situation:

Terraform state ≠ Actual infrastructure

This means Terraform and the cloud provider no longer agree on the current infrastructure state.

This problem commonly occurs when working with cloud providers such as Microsoft Azure, as the error examples below show.

Example Error Messages

Typical Terraform failure messages look like this:

AuthorizationFailed — service principal missing role

The service principal running Terraform doesn't have permission to perform the ARM action. Most common when applying across subscription scopes or creating role assignments.

OperationNotAllowed — vCPU quota exceeded

Your Azure subscription has hit its regional vCPU limit. This fires mid-apply when creating VMs, AKS node pools, or VMSS. Resources before this point remain live.

ParentResourceNotFound — missing depends_on

A child resource (SQL database, subnet, diagnostic setting) was deployed before its parent finished provisioning. Azure's ARM API returns 404 on the parent reference.  
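As an illustrative sketch (the resource names below are hypothetical, not from this post), an explicit depends_on forces ordering when Terraform cannot infer it on its own:

```hcl
# Hypothetical example: a subnet that must wait for its parent VNet.
# A direct attribute reference (virtual_network_name = azurerm_virtual_network.vnet.name)
# already creates an implicit dependency; explicit depends_on is for cases
# where the parent is referenced only as a plain string.
resource "azurerm_subnet" "app" {
  name                 = "snet-app"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = "vnet-prod"   # plain string: no implicit dependency

  address_prefixes = ["10.0.1.0/24"]

  depends_on = [azurerm_virtual_network.vnet]
}
```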

Or Terraform may simply stop with:

Error: failed to create resource

When this happens, the infrastructure might be partially created.

Why Do Partial Failures Happen?

Terraform builds a dependency graph and walks it concurrently. When one node fails, Terraform stops scheduling new work but does not roll back completed nodes. In simple words, Terraform is declarative, but it is not transactional.

There is no:

  • Undo

  • Rollback

  • Atomic execution
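Since Terraform will not undo the partial work for you, a safe recovery usually starts by reconciling state with reality before touching any resource. A hedged sketch of that triage (standard Terraform CLI commands; the first two are read-only):

```shell
# 1. See what Terraform believes exists after the failed apply
terraform state list

# 2. Refresh-only plan: detect drift between state and real infrastructure
#    without proposing any create/destroy actions
terraform plan -refresh-only

# 3. If the refresh-only plan looks correct, sync state to reality
terraform apply -refresh-only

# 4. Re-plan; Terraform should now propose creating only the resources that failed
terraform plan
```

If a resource exists in the cloud but is missing from state, `terraform import` can adopt it into state instead of recreating it.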

Terraform follows a simple philosophy:

Friday, 13 March 2026

Run Oracle Database Non-CDB in Docker – Step-by-Step Setup + Fix Common Errors




Introduction

Many enterprise applications still depend on the traditional Non-CDB database architecture used in Oracle Database before the multitenant model was introduced. While modern Oracle versions focus on container databases, many legacy systems still require a Non-CDB database environment for compatibility, testing, or learning purposes.

Setting up an Oracle database manually can be complex and time-consuming. This is where Docker becomes extremely useful. Docker allows developers and database administrators to quickly create isolated environments where databases can run inside containers without installing Oracle directly on the host machine.

By running Oracle Non-CDB databases in Docker, you can easily create repeatable test environments, experiment with database configurations, and work with legacy applications safely.

This guide is designed for DBAs, developers, DevOps engineers, and students who want to understand how to run an Oracle Non-CDB database inside a Docker container.

By the end of this tutorial, you will learn how to:

  • Create an Oracle Non-CDB database in Docker

  • Run and manage the containerized database

  • Verify that the database is working correctly


Understanding Non-CDB vs CDB in Oracle Database

In Oracle Database, there are two main database architectures:

  • Non-CDB (Non-Container Database)

  • CDB (Container Database)

Understanding the difference between CDB and Non-CDB is important when working with legacy applications or when deploying databases in containerized environments such as Docker.

What is a Non-CDB Database?

A Non-CDB (Non-Container Database) is the traditional Oracle database architecture used before Oracle introduced the multitenant model.

In this architecture:

  • A single database instance manages everything

  • Users, schemas, tables, and data exist in one standalone database

  • There are no pluggable databases (PDBs)

Because of its simplicity, many legacy enterprise applications were built specifically for Non-CDB environments. As a result, many organizations still maintain Non-CDB databases for compatibility and testing purposes.

What is a CDB Database?

A CDB (Container Database) is part of Oracle’s multitenant architecture.

A single CDB can host multiple databases known as Pluggable Databases (PDBs).

Each PDB behaves like an independent database, but they share the same Oracle instance and system resources.

This architecture offers several advantages:

  • Better resource utilization

  • Easier database management

  • Ability to run multiple databases in one instance
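To tell which architecture a given database uses, one common check (a standard data dictionary query, available in 12c and later) is:

```sql
-- Returns YES for a CDB, NO for a Non-CDB
SELECT name, cdb FROM v$database;
```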

Quick Comparison: CDB vs Non-CDB

Feature          | Non-CDB                    | CDB
Architecture     | Single standalone database | Container with multiple PDBs
Resource sharing | No                         | Yes
Scalability      | Limited                    | High
Primary use      | Legacy applications        | Modern deployments


Why Are Non-CDB Databases Still Used?

Even though Oracle promotes the multitenant architecture, Non-CDB databases are still widely used in many environments.

Here are some common reasons:
1. Legacy Applications
Many enterprise systems were originally built to run on Non-CDB architecture, making migration difficult or costly.
2. Learning and Training
Students and beginners often start with Non-CDB databases to understand the core structure of Oracle databases.
3. Development and Testing
Developers sometimes require a simple standalone Oracle database to test applications or reproduce issues.


Why did Oracle remove Non-CDB support in 21c? Big question, right? Please refer to: Link

Step-by-Step: Pull the Oracle Docker Image

1. Open the Oracle Container Registry: link
2. If you have not created an account previously, create one for the registry.
3. Select an image and click Continue. Here I am selecting Enterprise and accepting the terms.

4. After this, click on your profile, select Auth, and generate a secret key:
Secret key

5. Copy the secret key and store it safely.
6. Run the Docker command to log in to the registry:
docker login container-registry.oracle.com
Provide your username and the secret key.

7. Pull the Docker image using the pull command:
docker pull container-registry.oracle.com/database/enterprise:19.3.0.0

Step-by-Step: Run the Non-CDB Container
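As a hedged sketch of the run step (container name, SID, and password below are placeholders; ORACLE_SID and ORACLE_PWD are standard environment variables for this image):

```shell
# Illustrative only: run the 19.3 enterprise image with placeholder values.
# Note: this image creates a CDB by default; producing a true Non-CDB typically
# requires customizing the image's setup scripts or running DBCA manually
# inside the container.
docker run -d --name oracle19c \
  -p 1521:1521 -p 5500:5500 \
  -e ORACLE_SID=ORCL \
  -e ORACLE_PWD=MySecurePwd123 \
  container-registry.oracle.com/database/enterprise:19.3.0.0
```

First startup can take several minutes while the database is created; `docker logs -f oracle19c` shows progress.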

Wednesday, 11 March 2026

How to Prepare for DevOps Interviews (Complete Guide for DevOps, SRE & Production Engineers)




DevOps interviews can be challenging because they test knowledge across multiple domains such as Linux, cloud infrastructure, automation, scripting, and system design.

If you are preparing for a DevOps Engineer, Site Reliability Engineer (SRE), or Production Engineer role, you need a clear strategy to cover the most important technical areas.

In this guide, we will explain how to prepare for DevOps interviews step-by-step, including the key skills you should develop, practical preparation strategies, and common interview scenarios used by companies today.

Skills Required for DevOps Interviews

To succeed in a DevOps interview, you must develop strong skills in several technical areas. Below are the most important ones.

Linux Skills Required for DevOps Engineers

Every DevOps engineer should have a deep understanding of at least one operating system. In most cases, that operating system should be Linux.

Linux is widely used in:

  • Cloud infrastructure

  • Containers

  • Web servers

  • Production systems

Because of this, most DevOps job descriptions require Linux knowledge.

How deep should your Linux knowledge be?

Linux and programming should be your strongest technical skills.

You should understand:

  • Process management

  • Filesystem structure

  • Networking basics

  • System logs

  • Performance monitoring

  • Troubleshooting techniques

During DevOps interviews, candidates are often asked to diagnose system issues, so understanding how the operating system works internally is extremely important.
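As an illustrative example (not an exhaustive list, and the thresholds you look for depend on the host), a first-pass triage of a slow Linux machine often looks like:

```shell
# First-pass triage of a slow or misbehaving Linux host
uptime                           # load averages vs. number of CPUs
df -h                            # any filesystem near 100%?
free -m                          # memory and swap pressure
ps aux --sort=-%cpu | head -5    # top CPU consumers right now
```

Being able to explain what each number means (for example, load average relative to CPU count) matters more in interviews than memorizing the commands.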


Programming Skills Needed for DevOps Engineers

Programming is another essential skill for DevOps engineers.

With programming knowledge, you can:

  • Automate repetitive tasks

  • Build internal tools

  • Improve existing infrastructure

  • Create custom deployment scripts

In DevOps, automation is everything, and programming enables that automation.

The required level of programming varies between companies.

Some organizations only expect candidates to write automation scripts, while others may test algorithms and data structures.

How to practice coding for DevOps interviews

The best way to improve coding skills is through regular practice.

You can practice by building:

  • automation scripts

  • command-line tools

  • small web applications

  • DevOps utilities

Coding challenge platforms are also useful for interview preparation.

Recommended platforms include:

Important tip:

If an interview allows you to choose your programming language, always select the language you are most comfortable with.

Using a familiar language increases your chances of solving problems correctly.


System Design Questions in DevOps Interviews

DevOps engineers are often responsible for designing infrastructure and deployment systems.

Because of this, many interviews include system design discussions.

You may be asked to design systems such as:

Friday, 20 February 2026

Azure DevOps Tutorial: Automate Windows VM Deployment Using Terraform


Introduction

Deploying infrastructure as code (IaC) is a modern and scalable way to manage your cloud resources. In this guide, we'll walk step-by-step through creating a Windows Virtual Machine in Azure using Terraform.

This tutorial is ideal for beginners and intermediate users who want a repeatable and automated way to spin up Windows VMs in Azure.

What You Will Learn

✔ Install and configure Terraform
✔ Write Terraform code to deploy a Windows VM
✔ Output VM details after deployment
✔ Set up Azure Service Principal

Prerequisites

Before starting, make sure you have:
✔ An Azure Subscription
✔ Terraform installed on your machine
✔ Azure CLI installed (optional but recommended)
✔ Basic understanding of IaC and Azure resources

1. Install Terraform

Download Terraform from the official website:
https://www.terraform.io/downloads
After installation, verify with:
terraform version

2. Configure Azure CLI & Login

Login to Azure using Azure CLI:

az login

(Optional) Set your desired subscription:

az account set --subscription "YOUR_SUBSCRIPTION_NAME"

3. Create Azure Service Principal

A Service Principal gives Terraform permission to provision resources in Azure.

Run:

az ad sp create-for-rbac --name "TerraformSP" --role="Contributor" --sdk-auth
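The command prints a JSON credential block. One common way to hand those credentials to Terraform is via the azurerm provider's standard ARM_* environment variables (the values below are placeholders; copy the real ones from the JSON output):

```shell
# Placeholders: substitute the values printed by az ad sp create-for-rbac
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<client-secret-from-output>"
export ARM_SUBSCRIPTION_ID="<your-subscription-id>"
export ARM_TENANT_ID="<your-tenant-id>"
```

With these set, Terraform authenticates as the service principal without any credentials in your .tf files.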

4. Create Terraform Project Folder

Create a new folder:

mkdir azure-windows-vm
cd azure-windows-vm

Create the following files:

main.tf
variables.tf
outputs.tf






5. Define Provider — main.tf

First, let's configure the provider. In Terraform, a provider is a plugin that acts as the bridge between Terraform and the target platform. In today's demo, the target platform is Azure.
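A minimal provider configuration sketch for main.tf (the version constraint below is an assumption; pin whatever version you have tested):

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"   # assumption: pin to the version you have tested
    }
  }
}

# The features block is required by the azurerm provider, even when empty
provider "azurerm" {
  features {}
}
```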

Wednesday, 4 February 2026

Azure Tutorial: Automate Linux Virtual Machine Creation in Azure Using Terraform

Step-by-step guide: Create a Linux Virtual Machine in Azure Using Terraform



Introduction

Infrastructure as Code (IaC) helps automate cloud resource provisioning in a reliable and repeatable way. Terraform is one of the most widely used IaC tools for managing cloud infrastructure.

In this blog, you will learn how to create a Linux Virtual Machine in Microsoft Azure using Terraform, configure a remote backend for state management, and connect to the VM using SSH.

By the end of this guide, you will be able to:

1. Configure Azure provider in Terraform

2. Store Terraform state in Azure Storage Account

3. Create networking resources (VNet, Subnet, NSG, Public IP)

4. Deploy a Linux Virtual Machine

5. Connect securely to the machine


Prerequisites:

Before starting, ensure that you have:

  1. An active Azure subscription

  2. Terraform installed on your machine

  3. An existing Azure Storage Account for Terraform state

  4. An SSH key pair on your local system

To generate an SSH key (if not already available):

ssh-keygen -t rsa -b 4096
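To store Terraform state in the Azure Storage Account (goal 2 above), a backend block sketch; the resource group, account, and container names below are placeholders for your existing storage:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"        # placeholder
    storage_account_name = "tfstateaccount"    # placeholder: must already exist
    container_name       = "tfstate"
    key                  = "linuxvm.terraform.tfstate"
  }
}
```

Backend values cannot use variables, so they are hard-coded or passed via `terraform init -backend-config`.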

Project Structure

Create a new project directory and organize files as follows:

Tuesday, 27 January 2026

Custom task in Azure Devops

 Create Custom task using powershell in Azure DevOps



What are extensions?

Extensions are simple add-ons that customize and extend your Azure DevOps experience.
Extensions provide new capabilities when they are installed in an Azure DevOps organization.

Pre-requisite:

1. Azure DevOps organization
2. Code editor (Visual Studio Code is suggested)
3. Node.js
4. Azure DevOps CLI (tfx-cli)


Installation

Here I am using macOS (please follow the equivalent steps for your OS).

1. Node.js
Run the command below to install Node.js:

brew install node

2. tfx-cli (Azure DevOps extension CLI)
Run the command below to install the CLI:

npm install -g tfx-cli

3. Azure DevOps Extension SDK
Run the command below to install the SDK:

npm install azure-devops-extension-sdk --save


Folder structure

Follow the folder structure below:

Friday, 18 April 2025

Azure DevOps YAML Pipeline: Key Concepts, Hierarchy, and Best Practices

Azure DevOps YAML Pipeline Key Concepts | CI/CD Best Practices




Azure DevOps pipeline flow
 


Azure DevOps YAML pipelines provide a powerful and flexible way to automate your CI/CD workflows. Understanding the hierarchy, variable passing, triggers, and conditions is essential for building efficient pipelines.

YAML Pipeline Hierarchy

A YAML pipeline is structured in a hierarchical manner:

  1. Stages – The top-level division in a pipeline (e.g., "Build," "Test," "Deploy").

  2. Jobs – A sequence of steps that run sequentially or in parallel within a stage.

  3. Steps – The smallest executable unit, which can be a script or a predefined task.


1. Stages

In a YAML pipeline, the top level is stages. Each stage can contain multiple jobs, and each job consists of a series of steps.
Stages define logical boundaries in a pipeline. Even if not explicitly defined, every pipeline has at least one stage.

Example:

stages:
- stage: BuildApp
  displayName: "Build App"   # stage identifiers cannot contain spaces; use displayName
  jobs:
  - job: BuildJob
    steps:
    - script: echo "Building the app..."


2. Jobs

A job is a set of steps that run together. Jobs can:

  • Run sequentially or in parallel.
  • Have dependencies (e.g., Job2 depends on Job1).
  • Read more about job here

Example:

jobs:  
- job: JobA  
  steps:  
    - script: echo "Running Job A"  

- job: JobB  
  dependsOn: JobA  
  steps:  
    - script: echo "Running Job B after Job A" 
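Variable passing between jobs (mentioned in the introduction) builds on this same dependsOn mechanism. A minimal sketch using an output variable (job, step, and variable names here are illustrative):

```yaml
jobs:
- job: JobA
  steps:
  - script: echo "##vso[task.setvariable variable=buildTag;isOutput=true]v1.2.3"
    name: setTag    # the step name is required to address the output variable
- job: JobB
  dependsOn: JobA
  variables:
    tagFromA: $[ dependencies.JobA.outputs['setTag.buildTag'] ]
  steps:
  - script: echo "Deploying $(tagFromA)"
```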

Saturday, 12 April 2025

NuGet Pack, Spec & Push to Azure Artifacts Explained




What is Nuget?

NuGet is the essential package management tool for .NET development, simplifying the creation, sharing, and consumption of reusable code packages. Below, we break down key NuGet commands—Pack, Push, and Spec—along with practical examples and best practices.
For the basics, please refer to: Microsoft Document

NuGet Pack: Creating a NuGet Package

The nuget pack command generates a .nupkg file from your project, bundling compiled output and dependencies for seamless distribution.



Key Features:

1. Converts project files (.csproj) into a deployable NuGet package.

2. Automatically includes required dependencies.

3. Supports metadata customization via a .nuspec file or command-line arguments.

Example Usage:

nuget pack MyProject.csproj  
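To round out the Pack/Spec/Push trio from the title, a hedged sketch (the ORG and FEED values are placeholders; Azure Artifacts feeds typically authenticate via a credential provider or a PAT, with the -ApiKey value being largely a formality):

```shell
# Generate a .nuspec skeleton you can edit to customize package metadata
nuget spec

# Pack with an explicit version
nuget pack MyProject.csproj -Version 1.0.0

# Push to an Azure Artifacts feed (ORG and FEED are placeholders)
nuget push MyProject.1.0.0.nupkg -Source "https://pkgs.dev.azure.com/ORG/_packaging/FEED/nuget/v3/index.json" -ApiKey az
```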

Friday, 11 April 2025

How to Set Up Docker for .NET Applications on Windows (Without Docker Desktop)



Introduction

This guide explains how to use Docker and set up a Windows container for a .NET application without Docker Desktop. Follow these steps to install Docker, configure your environment, and deploy your .NET app in a container.


Prerequisites

Before starting, ensure:

Hyper-V is enabled on your Windows machine.

Saturday, 16 December 2023

Docker: The Magic Wand for Modern Software Development & Deployment




The Rise of Docker

In the ever-evolving landscape of software development and deployment, one technology has emerged as a transformative force—Docker. Like a magic wand for developers, Docker has revolutionized the way applications are built, shipped, and deployed. Let’s explore why Docker has become the cornerstone of modern software development.

Understanding Docker Containers

At the heart of Docker lies the concept of containers—lightweight, standalone, and executable packages that include everything needed to run an application: code, runtime, libraries, and system tools. These containers operate in isolation, eliminating the infamous "it works on my machine" dilemma and ensuring consistency from development to production.

The Docker Workflow

Docker introduces a streamlined workflow that simplifies development and deployment:

  1. Dockerfile – A script defining the steps to build a Docker image, serving as a blueprint for the application.

  2. Image – A portable snapshot of the application and its dependencies, ensuring consistency across environments.
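A minimal Dockerfile sketch illustrates the blueprint idea (the Python app and file names here are hypothetical, purely for illustration):

```dockerfile
# Hypothetical Python web app: the Dockerfile is the build blueprint
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` turns this blueprint into an image, and `docker run myapp` starts a container from it.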

Kubernetes: The Ultimate Guide to Container Orchestration & Scalability





Introduction: The Rise of Kubernetes

In the vast and ever-expanding universe of technology, the need for efficient and scalable application deployment has led to the rise of container orchestration tools. Among these, Kubernetes (K8s) shines brightest—guiding developers, operators, and organizations toward seamless container management.

The Prelude to Kubernetes

Before diving into Kubernetes, let’s revisit containerization. Containers revolutionized software development by bundling applications with their dependencies. But as container usage grew, managing them at scale became a challenge—requiring a powerful orchestration system.

Enter Kubernetes

Kubernetes (K8s), an open-source container orchestration platform, was originally developed by Google and later donated to the Cloud Native Computing Foundation (CNCF). Like a skilled captain, Kubernetes manages, scales, and deploys containerized applications across diverse environments with precision.

Understanding Kubernetes Core Concepts

To master Kubernetes, you need to know its key components:

1. Nodes

  • The foundation of Kubernetes, nodes are machines (physical or virtual) where containers run.

2. Pods

  • The smallest deployable units, pods group one or more containers, sharing network and storage resources.

3. Services

  • Services enable communication between pods, providing a stable IP and DNS name despite pod changes.

4. ReplicaSets

  • Ensure high availability by maintaining a set number of identical pod replicas, replacing failed ones automatically.

5. Deployments

  • Manage application updates, scaling, and rollbacks declaratively, ensuring the desired state is always maintained.
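The concepts above come together in a Deployment manifest. A minimal sketch (names, labels, and the nginx image are illustrative choices, not from the original post):

```yaml
# Minimal illustrative Deployment: three replicas of an nginx pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # the ReplicaSet it creates keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` declares the desired state; Kubernetes then creates and maintains the pods automatically.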

The Kubernetes Ecosystem

Kubernetes thrives with supporting tools like:

Why Kubernetes? Key Benefits

✅ Scalability – Automatically scale applications up or down based on demand.
✅ Portability – Run seamlessly across cloud, hybrid, or on-premises environments.
✅ Resilience – Self-healing capabilities restart failed containers and reschedule them.
✅ Declarative Configuration – Define the desired state, and Kubernetes makes it happen.

The Future of Kubernetes

Kubernetes continues to evolve, with innovations in:
🔹 Edge Computing – Deploying containers closer to data sources.
🔹 Serverless Kubernetes (Knative) – Simplifying serverless workloads.
🔹 AI/ML Integration – Enhancing machine learning deployments.

Conclusion

Kubernetes isn’t just a tool—it’s a game-changer in modern software deployment. By mastering scalability, resilience, and portability, it empowers businesses to navigate the future of cloud-native applications with confidence.

Author Details

Hi, I'm Prashant — a full-time software engineer with a passion for automation, DevOps, and sharing what I learn. I started Py-Bucket to document my journey through tools like Docker, Kubernetes, Azure DevOps, and PowerShell scripting — and to help others navigate the same path. When I’m not coding or writing, I’m experimenting with side projects, exploring productivity hacks, or learning how to build passive income streams online. This blog is my sandbox — and you're welcome to explore it with me. Get in touch or follow me for future updates!