DevOps & Container Labs - KCNA Foundation

Master Kubernetes, Docker, and cloud-native fundamentals with hands-on labs aligned to the KCNA certification objectives. Build real container workloads through interactive scenarios.

πŸ† These labs cover all DevOps & container certifications including:

☸️ KCNA 🐳 DCA πŸ”’ KCSA 🌍 Terraform Associate βš™οΈ CKAD
πŸ› οΈ CKA πŸ›‘οΈ CKS ☁️ AWS DevOps Professional πŸ”· Azure DevOps AZ-400 🌐 Google Prof. Cloud DevOps

KCNA Foundation Labs - Module 1

Start your Kubernetes journey with fundamental cloud-native concepts and hands-on container orchestration.

Lab 1: Kubernetes Pod & Deployment Fundamentals
Kubernetes / Beginner
Scenario: First Kubernetes Deployment
WebApp Co. is migrating their application to Kubernetes. As a junior DevOps engineer, you need to deploy your first set of pods and deployments. Create namespaces, deploy pods, manage deployments, and scale workloads in a Kubernetes cluster.
KCNA Lab

Learning Objectives:

  • Namespaces: Create and manage Kubernetes namespaces for resource isolation
  • Pods: Deploy and inspect individual pods with kubectl
  • Deployments: Create deployments with replica management
  • Scaling: Scale workloads up and down and verify status

πŸ“‹ Step-by-Step Instructions

  1. Step 1: Create a Namespace
    🎯 Goal: Create a dedicated namespace to isolate your lab resources

    πŸ“ What is a Namespace?
    A Namespace is a virtual cluster inside your Kubernetes cluster. It provides isolation so teams or applications don't interfere with each other. Think of it like separate folders for different projects.

    πŸ’» Command:
    kubectl create namespace webapp-lab

    πŸ” What happens:
    β€’ Kubernetes creates an isolated workspace named "webapp-lab"
    β€’ Resources inside won't collide with other namespaces
    β€’ You can set resource quotas per namespace later
    πŸ’‘ Tip: Always use namespaces in production! The "default" namespace is fine for testing, but real workloads should be organized into named namespaces.
    πŸ“– KCNA Objective: Understand Kubernetes resource organization and isolation through namespaces (Kubernetes Fundamentals domain, 46% of exam).
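The same namespace can also be created declaratively, which is how production resources are usually managed. A minimal manifest sketch (equivalent to the command above):

```yaml
# Declarative equivalent of `kubectl create namespace webapp-lab`.
# Apply with: kubectl apply -f namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: webapp-lab
```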
  2. Step 2: Deploy Your First Pod
    🎯 Goal: Run your first container as a Kubernetes pod

    πŸ“ What is a Pod?
    A Pod is the smallest deployable unit in Kubernetes. It wraps one or more containers and gives them shared networking and storage. Every container runs inside a pod.

    πŸ’» Command:
    kubectl run webapp-pod --image=nginx:latest --namespace=webapp-lab

    πŸ” What happens:
    β€’ Kubernetes pulls the nginx image from Docker Hub
    β€’ Creates a pod named "webapp-pod" in the webapp-lab namespace
    β€’ The pod gets an internal cluster IP address
    β€’ Pod status transitions: Pending β†’ ContainerCreating β†’ Running
    πŸ’‘ Tip: Use kubectl get pods -n webapp-lab after this step to verify your pod is Running. If it shows "ImagePullBackOff", the image name may be wrong.
    πŸŽ“ Hint: The --namespace flag (or -n) tells kubectl which namespace to target. Without it, you'd create resources in the "default" namespace.
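The `kubectl run` command above can be expressed as a manifest, which makes the pod's configuration explicit and reviewable. A minimal sketch (the container name is chosen here for illustration):

```yaml
# Declarative equivalent of the `kubectl run` command above.
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
  namespace: webapp-lab
spec:
  containers:
    - name: webapp-pod   # container name; kubectl run reuses the pod name
      image: nginx:latest
```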
  3. Step 3: Check Pod Status
    🎯 Goal: Verify the pod is running and inspect its details

    πŸ’» List Pods:
    kubectl get pods -n webapp-lab

    πŸ” What to look for:
    β€’ STATUS should be "Running"
    β€’ READY column should show "1/1" (1 container, 1 ready)
    β€’ RESTARTS should be 0 (no crashes)
    πŸ’‘ Tip: If STATUS is "Pending", the cluster might be pulling the image or low on resources. Use kubectl describe pod webapp-pod -n webapp-lab for more details (not required for this lab).
  4. Step 4: Create a Deployment
    🎯 Goal: Create a Deployment that manages multiple replicas of your app

    πŸ“ Pod vs Deployment:
A bare pod has no self-healing beyond container restarts: if its node fails or the pod is deleted, nothing recreates it. A Deployment keeps your desired number of replicas running, replaces lost pods automatically, and enables rolling updates.

    πŸ’» Command:
    kubectl create deployment webapp-deploy --image=nginx:latest --replicas=3 --namespace=webapp-lab

    πŸ” What happens:
    β€’ Creates a Deployment object with 3 replica pods
    β€’ A ReplicaSet is automatically created to manage the pods
    β€’ If any pod crashes, Kubernetes replaces it automatically
    β€’ All 3 pods share the same image and configuration
    πŸ’‘ Tip: In production, always use Deployments instead of bare pods. Deployments give you self-healing, rolling updates, and easy scaling.
    ⚠️ Common Mistake: Don't create bare pods in production. If the node fails, a bare pod is lost forever. Deployments recreate pods on healthy nodes.
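The `kubectl create deployment` command generates a manifest like the sketch below. The `app: webapp-deploy` label wiring between the selector and the pod template is what lets the Deployment find and manage its pods:

```yaml
# Declarative equivalent of the `kubectl create deployment` command above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deploy
  namespace: webapp-lab
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp-deploy   # must match the pod template labels below
  template:
    metadata:
      labels:
        app: webapp-deploy
    spec:
      containers:
        - name: nginx
          image: nginx:latest
```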
  5. Step 5: Scale the Deployment
    🎯 Goal: Scale your app from 3 to 5 replicas to handle more traffic

    πŸ“ Why Scale?
    Scaling adds more pod replicas to handle increased load. Kubernetes distributes pods across available nodes for high availability.

    πŸ’» Command:
    kubectl scale deployment webapp-deploy --replicas=5 --namespace=webapp-lab

    πŸ” Verification:
    After scaling, check with: kubectl get deployment webapp-deploy -n webapp-lab
    READY column should show 5/5.
    πŸ’‘ Tip: You can also scale down by specifying fewer replicas. Kubernetes gracefully terminates extra pods.
  6. Step 6: Verify Deployment Status
    🎯 Goal: Confirm all 5 replicas are healthy and running

    πŸ’» Command:
    kubectl get deployment webapp-deploy -n webapp-lab

    πŸ” Expected output:
    β€’ READY: 5/5
    β€’ UP-TO-DATE: 5
    β€’ AVAILABLE: 5
    This confirms all replicas are healthy and serving traffic.
    πŸŽ“ Learning Checkpoint: You've just deployed and scaled your first Kubernetes workload! In a real environment, a load balancer would distribute traffic across all 5 replicas.

Kubernetes Lab Environment

πŸŽ‰ After Completing All Steps - Review Your Work:

1. Validate Your Configuration:
Click "Validate Configuration" to check all resources. The dashboard shows completion % and which tasks still need work.
2. View Cluster Diagram:
Click "View Architecture" to see a visual diagram of pods, deployments, and namespaces you created.
3. Switch to K8s Dashboard tab to see live resource counts update as you complete tasks.

Lab 2: Kubernetes Services & Networking
Kubernetes / Beginner
Scenario: Exposing Applications to Traffic
Your webapp pods are running, but nobody can reach them yet! You need to create Kubernetes Services to expose your application internally and externally. Configure ClusterIP, NodePort, and LoadBalancer services to route traffic to your pods.
KCNA Lab

Learning Objectives:

  • ClusterIP Service: Create internal-only service for pod-to-pod communication
  • NodePort Service: Expose application on a static port on each node
  • LoadBalancer: Create cloud load balancer for external access
  • Service Discovery: Understand DNS-based service discovery

πŸ“‹ Step-by-Step Instructions

  1. Step 1: Create a ClusterIP Service
    🎯 Goal: Create an internal service to allow pod-to-pod traffic

    πŸ“ What is ClusterIP?
    ClusterIP is the default service type. It gives your pods a stable internal IP address. Other pods in the cluster can reach your app using this IP or the service name, but it's NOT accessible from outside the cluster.

    πŸ’» Command:
    kubectl expose deployment webapp-deploy --name=webapp-clusterip --port=80 --target-port=80 --type=ClusterIP --namespace=webapp-lab

    πŸ” What happens:
    β€’ Creates a Service named "webapp-clusterip"
    β€’ Assigns a stable internal IP (e.g. 10.96.x.x)
    β€’ Routes traffic on port 80 to pod port 80
    β€’ Load balances across all 5 replicas
    πŸ’‘ Tip: ClusterIP is ideal for internal microservices (e.g., a backend API that only your frontend pods call). It's the most common service type.
    πŸ“– KCNA Objective: Understand Kubernetes service types and how they facilitate network communication (Container Orchestration domain, 22%).
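The `kubectl expose` command above generates a Service like the sketch below. The selector assumes the `app: webapp-deploy` label that `kubectl create deployment` applied to the pods in Lab 1:

```yaml
# Declarative equivalent of the `kubectl expose` command above.
apiVersion: v1
kind: Service
metadata:
  name: webapp-clusterip
  namespace: webapp-lab
spec:
  type: ClusterIP
  selector:
    app: webapp-deploy   # routes to pods carrying this label
  ports:
    - port: 80         # port the service listens on
      targetPort: 80   # port on the pods
```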
  2. Step 2: Create a NodePort Service
    🎯 Goal: Expose your app on a static port accessible from outside the cluster

    πŸ“ ClusterIP vs NodePort:
    NodePort opens a static port (30000-32767) on EVERY node in the cluster. Anyone who can reach a node's IP on that port can access your app. It builds on top of ClusterIP.

    πŸ’» Command:
    kubectl expose deployment webapp-deploy --name=webapp-nodeport --port=80 --target-port=80 --type=NodePort --namespace=webapp-lab

    πŸ” Access pattern:
    http://<any-node-ip>:<nodeport> β†’ routes to your pods
    πŸ’‘ Tip: NodePort is great for development and testing. In production, you'd typically use a LoadBalancer or Ingress instead.
    πŸŽ“ Exam Hint: Remember the port range! NodePort services use ports in the 30000-32767 range by default. This is a common exam question.
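As a manifest, a NodePort service adds one field on top of the ClusterIP shape. The `nodePort` value below is an illustrative choice within the default range; if you omit it, Kubernetes picks one for you:

```yaml
# Declarative equivalent of the NodePort expose command above.
apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport
  namespace: webapp-lab
spec:
  type: NodePort
  selector:
    app: webapp-deploy
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # optional; must fall within 30000-32767 by default
```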
  3. Step 3: Create a LoadBalancer Service
    🎯 Goal: Create a cloud-integrated load balancer for production-grade external access

    πŸ“ What is a LoadBalancer Service?
    LoadBalancer is the production way to expose apps. It provisions a real cloud load balancer (AWS ELB, Azure LB, GCP LB) that distributes traffic to your pods. It builds on NodePort + ClusterIP.

    πŸ’» Command:
    kubectl expose deployment webapp-deploy --name=webapp-lb --port=80 --target-port=80 --type=LoadBalancer --namespace=webapp-lab
    πŸ’‘ Tip: In cloud environments (EKS, AKS, GKE), this automatically creates a real load balancer. In minikube/local clusters, the external IP shows "Pending".
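In manifest form, only the `type` changes from the previous services; the cloud provider's controller does the rest:

```yaml
# Declarative equivalent of the LoadBalancer expose command above.
apiVersion: v1
kind: Service
metadata:
  name: webapp-lb
  namespace: webapp-lab
spec:
  type: LoadBalancer   # triggers cloud LB provisioning on EKS/AKS/GKE
  selector:
    app: webapp-deploy
  ports:
    - port: 80
      targetPort: 80
```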
  4. Step 4: List All Services
    🎯 Goal: Verify all 3 services are running and inspect their endpoints

    πŸ’» Command:
    kubectl get services -n webapp-lab

    πŸ” What to check:
    β€’ ClusterIP service: should have internal IP, no external IP
    β€’ NodePort: should have internal IP + NodePort number
    β€’ LoadBalancer: should have internal IP + external IP (or Pending)
    πŸ’‘ Tip: Add -o wide flag for more details including selectors and endpoints.
  5. Step 5: Test Service DNS
    🎯 Goal: Verify Kubernetes internal DNS resolves service names

    πŸ“ How DNS Works in K8s:
    CoreDNS is the default DNS server in most Kubernetes clusters. It automatically creates DNS records for services, so any pod can reach a service by name: service-name.namespace.svc.cluster.local

    πŸ’» Command:
    kubectl run dns-test --image=busybox --restart=Never --namespace=webapp-lab -- nslookup webapp-clusterip.webapp-lab.svc.cluster.local
    πŸ’‘ Tip: DNS-based service discovery is automatic. You never need to hardcode IPs between microservices.
  6. Step 6: Verify Network Connectivity
    🎯 Goal: Test end-to-end connectivity by curling the service

    πŸ’» Command:
    kubectl run curl-test --image=curlimages/curl --restart=Never --namespace=webapp-lab -- curl -s webapp-clusterip.webapp-lab.svc.cluster.local
    πŸŽ“ Learning Checkpoint: You now know 3 service types! ClusterIP for internal, NodePort for dev/test, LoadBalancer for production. In a real cluster, you'd add an Ingress controller for path-based routing.

Kubernetes Lab Environment

πŸŽ‰ After Completing All Steps:

1. Click "Validate Configuration" for completion feedback.
2. Click "View Network Topology" for a diagram of services and traffic flow.
3. Switch to the "Service Map" tab to see live service counts.

Lab 3: Cloud Native Observability & Monitoring
Kubernetes / Beginner
Scenario: Monitoring Your Kubernetes Cluster
Your deployments are running, but how do you know they're healthy? Implement monitoring and observability for your Kubernetes cluster. Deploy metrics-server, check resource usage, view pod logs, and set up basic alerts.
KCNA Lab

Learning Objectives:

  • Metrics Server: Deploy and verify cluster metrics collection
  • Resource Monitoring: Check CPU and memory usage of pods
  • Logging: View and analyze pod logs for troubleshooting
  • Observability: Understand the three pillars: metrics, logs, traces

πŸ“‹ Step-by-Step Instructions

  1. Step 1: Deploy Metrics Server
    🎯 Goal: Install the metrics-server to enable resource monitoring

    πŸ“ What is Metrics Server?
    Metrics Server collects CPU and memory usage from all nodes and pods. It's required for kubectl top commands and the Horizontal Pod Autoscaler (HPA). Without it, you're flying blind!

    πŸ’» Command:
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    πŸ’‘ Tip: Metrics Server takes 1-2 minutes to start collecting data. Wait before running kubectl top.
    πŸ“– KCNA Objective: Understand cloud native observability concepts including metrics, logging, and tracing (Cloud Native Observability domain, 8%).
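Since Metrics Server is what powers the Horizontal Pod Autoscaler, here is a minimal HPA sketch for the lab deployment. The name, replica bounds, and the 50% CPU target are illustrative values, not part of the lab tasks:

```yaml
# Hypothetical HPA using the metrics that Metrics Server provides.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
  namespace: webapp-lab
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp-deploy
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when avg CPU exceeds 50% of requests
```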
  2. Step 2: Check Node Resource Usage
    🎯 Goal: Monitor CPU and memory at the node level

    πŸ’» Command:
    kubectl top nodes

    πŸ” What to look for:
    β€’ CPU(cores) β€” CPU cores in use (e.g. 250m = 0.25 cores)
    β€’ CPU% β€” percentage of the node's CPU capacity in use
    β€’ MEMORY(bytes) β€” memory in use (shown in Mi/Gi)
    β€’ MEMORY% β€” percentage of the node's memory in use
    πŸ’‘ Tip: If CPU% or Memory% is above 80%, your cluster needs more nodes or you need to optimize workloads.
  3. Step 3: Monitor Pod Resource Usage
    🎯 Goal: Check CPU and memory consumption of individual pods

    πŸ’» Command:
    kubectl top pods -n webapp-lab

    πŸ” What it shows:
    Per-pod CPU and memory usage. Compare this with your resource requests and limits to see if pods are over/under-provisioned.
    πŸŽ“ Hint: Run this command periodically to identify "noisy neighbor" pods that consume too many resources and may starve other pods.
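To compare `kubectl top pods` output against requests and limits, those values must be set on the container spec. A sketch with illustrative values (the lab's nginx deployment does not set these by default):

```yaml
# Fragment of a pod/deployment container spec with resource bounds.
# Values are illustrative; tune them against observed usage.
spec:
  containers:
    - name: nginx
      image: nginx:latest
      resources:
        requests:        # what the scheduler reserves for the pod
          cpu: 100m
          memory: 128Mi
        limits:          # hard ceiling; exceeding memory gets the pod OOM-killed
          cpu: 250m
          memory: 256Mi
```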
  4. Step 4: View Pod Logs
    🎯 Goal: Access application logs for debugging and analysis

    πŸ“ Why Logs Matter:
    Logs are one of the three pillars of observability (metrics, logs, traces). They show what your app is actually doing β€” errors, requests, warnings, and more.

    πŸ’» Command:
    kubectl logs deployment/webapp-deploy -n webapp-lab --tail=20
    πŸ’‘ Tip: Use --tail=N to show last N lines. Use -f flag for live streaming (like tail -f).
  5. Step 5: Describe Pod for Events
    🎯 Goal: View Kubernetes events for troubleshooting scheduling and health issues

    πŸ’» Command:
    kubectl describe deployment webapp-deploy -n webapp-lab

    πŸ” What to look for in Events section:
    β€’ ScalingReplicaSet events show when pods were added/removed
    β€’ FailedScheduling means not enough resources
    β€’ ImagePullBackOff means the image couldn't be pulled β€” often a wrong image name, a missing tag, or a registry/auth problem
    πŸ’‘ Tip: The Events section at the bottom of describe output is the #1 debugging tool. Always check it when something goes wrong.
  6. Step 6: Check Cluster Health Overview
    🎯 Goal: Get a complete health overview of the entire cluster

    πŸ’» Command:
    kubectl get all -n webapp-lab

    πŸ” What this shows:
    Lists ALL resource types in the namespace: pods, services, deployments, replicasets β€” a quick health snapshot of everything you've built.
    πŸŽ“ Learning Checkpoint: You've used all three pillars of observability! Metrics (kubectl top), Logs (kubectl logs), Events (kubectl describe). In production, you'd use Prometheus + Grafana + Loki for a full observability stack.

Monitoring Lab Environment

πŸŽ‰ After Completing All Steps:

1. Click "Validate Configuration" to see your monitoring completeness.
2. Switch to "Grafana Dashboard" tab to see live CPU/memory metrics update in real-time.
3. Click "View Architecture" to see the full observability stack diagram.
