Master Kubernetes, Docker, and cloud-native fundamentals with hands-on labs aligned to the KCNA certification objectives. Build real container workloads through interactive scenarios.
📋 These labs cover all DevOps & container certifications, including:
CKA, CKS, AWS DevOps Professional, Azure DevOps AZ-400, Google Professional Cloud DevOps
KCNA Foundation Labs - Module 1
Start your Kubernetes journey with fundamental cloud-native concepts and hands-on container orchestration.
Lab 1: Kubernetes Pod & Deployment Fundamentals
Kubernetes / Beginner
Scenario: First Kubernetes Deployment
WebApp Co. is migrating their application to Kubernetes. As a junior DevOps engineer, you need to deploy your first set of pods and deployments. Create namespaces, deploy pods, manage deployments, and scale workloads in a Kubernetes cluster.
KCNA Lab
Learning Objectives:
Namespaces: Create and manage Kubernetes namespaces for resource isolation
Pods: Deploy and inspect individual pods with kubectl
Deployments: Create deployments with replica management
Scaling: Scale workloads up and down and verify status
📋 Step-by-Step Instructions
Step 1: Create a Namespace
🎯 Goal: Create a dedicated namespace to isolate your lab resources
📘 What is a Namespace?
A Namespace is a virtual cluster inside your Kubernetes cluster. It provides isolation so teams or applications don't interfere with each other. Think of it like separate folders for different projects.
💻 Command: kubectl create namespace webapp-lab
🔍 What happens:
• Kubernetes creates an isolated workspace named "webapp-lab"
• Resources inside won't collide with other namespaces
• You can set resource quotas per namespace later
💡 Tip: Always use namespaces in production! The "default" namespace is fine for testing, but real workloads should be organized into named namespaces.
📚 KCNA Objective: Understand Kubernetes resource organization and isolation through namespaces (Kubernetes Fundamentals domain, 46% of the exam).
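The namespace can also be created declaratively. A minimal manifest sketch, equivalent to the kubectl create namespace command in this step:

```yaml
# namespace.yaml -- declarative equivalent of `kubectl create namespace webapp-lab`
apiVersion: v1
kind: Namespace
metadata:
  name: webapp-lab
```

Apply it with kubectl apply -f namespace.yaml. Declarative manifests are easier to version-control than imperative commands.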
Step 2: Deploy Your First Pod
🎯 Goal: Run your first container as a Kubernetes pod
📘 What is a Pod?
A Pod is the smallest deployable unit in Kubernetes. It wraps one or more containers and gives them shared networking and storage. Every container runs inside a pod.
💻 Command: kubectl run webapp-pod --image=nginx:latest --namespace=webapp-lab
🔍 What happens:
• Kubernetes pulls the nginx image from Docker Hub
• Creates a pod named "webapp-pod" in the webapp-lab namespace
• The pod gets an internal cluster IP address
• Pod status transitions: Pending → ContainerCreating → Running
💡 Tip: Run kubectl get pods -n webapp-lab after this step to verify your pod is Running. If it shows "ImagePullBackOff", the image name may be wrong.
📝 Hint: The --namespace flag (or -n) tells kubectl which namespace to target. Without it, you'd create resources in the "default" namespace.
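The kubectl run command above is shorthand for a Pod manifest. A sketch of the declarative equivalent (the container name "webapp" is an illustrative assumption, not part of the lab):

```yaml
# pod.yaml -- declarative equivalent of the kubectl run command
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
  namespace: webapp-lab
spec:
  containers:
    - name: webapp          # container name is an assumption for illustration
      image: nginx:latest
```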
Step 3: Check Pod Status
🎯 Goal: Verify the pod is running and inspect its details
💻 List Pods: kubectl get pods -n webapp-lab
🔍 What to look for:
• STATUS should be "Running"
• READY column should show "1/1" (1 container, 1 ready)
• RESTARTS should be 0 (no crashes)
💡 Tip: If STATUS is "Pending", the cluster might be pulling the image or low on resources. Use kubectl describe pod webapp-pod -n webapp-lab for more details (not required for this lab).
Step 4: Create a Deployment
🎯 Goal: Create a Deployment that manages multiple replicas of your app
📘 Pod vs Deployment:
A bare pod has no self-healing: if it crashes, it stays dead. A Deployment keeps your desired number of replicas running, automatically replacing crashed pods, and it enables rolling updates.
💻 Command: kubectl create deployment webapp-deploy --image=nginx:latest --replicas=3 -n webapp-lab
🔍 What happens:
• Creates a Deployment object with 3 replica pods
• A ReplicaSet is automatically created to manage the pods
• If any pod crashes, Kubernetes replaces it automatically
• All 3 pods share the same image and configuration
💡 Tip: In production, always use Deployments instead of bare pods. Deployments give you self-healing, rolling updates, and easy scaling.
⚠️ Common Mistake: Don't create bare pods in production. If the node fails, a bare pod is lost for good; a Deployment recreates its pods on healthy nodes.
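For reference, a Deployment like the one described here can be written as a manifest. This is a sketch: the name, image, and replica count come from the lab, while the pod label app: webapp and container name are illustrative assumptions:

```yaml
# deployment.yaml -- sketch of the lab's Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deploy
  namespace: webapp-lab
spec:
  replicas: 3               # desired number of identical pods
  selector:
    matchLabels:
      app: webapp           # must match the pod template labels below
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:latest
```

The selector is what ties the Deployment (via its ReplicaSet) to the pods it manages.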
Step 5: Scale the Deployment
🎯 Goal: Scale your app from 3 to 5 replicas to handle more traffic
📘 Why Scale?
Scaling adds more pod replicas to handle increased load. Kubernetes distributes pods across available nodes for high availability.
💻 Command: kubectl scale deployment webapp-deploy --replicas=5 -n webapp-lab
🔍 Verification:
After scaling, check with: kubectl get deployment webapp-deploy -n webapp-lab
The READY column should show 5/5.
💡 Tip: You can also scale down by specifying fewer replicas. Kubernetes gracefully terminates the extra pods.
Step 6: Verify Deployment Status
🎯 Goal: Confirm all 5 replicas are healthy and running
💻 Command: kubectl get deployment webapp-deploy -n webapp-lab
🔍 Expected output:
• READY: 5/5
• UP-TO-DATE: 5
• AVAILABLE: 5
This confirms all replicas are healthy and serving traffic.
🎓 Learning Checkpoint: You've just deployed and scaled your first Kubernetes workload! In a real environment, a load balancer would distribute traffic across all 5 replicas.
✅ After Completing All Steps - Review Your Work:
1. Validate Your Configuration: Click "Validate Configuration" to check all resources. The dashboard shows completion % and which tasks still need work.
2. View Cluster Diagram: Click "View Architecture" to see a visual diagram of the pods, deployments, and namespaces you created.
3. Switch to the K8s Dashboard tab to see live resource counts update as you complete tasks.
Lab 2: Kubernetes Services & Networking
Kubernetes / Beginner
Scenario: Exposing Applications to Traffic
Your webapp pods are running, but nobody can reach them yet! You need to create Kubernetes Services to expose your application internally and externally. Configure ClusterIP, NodePort, and LoadBalancer services to route traffic to your pods.
KCNA Lab
Learning Objectives:
ClusterIP Service: Create internal-only service for pod-to-pod communication
NodePort Service: Expose application on a static port on each node
LoadBalancer: Create cloud load balancer for external access
Service Discovery: Understand DNS-based service discovery
📋 Step-by-Step Instructions
Step 1: Create a ClusterIP Service
🎯 Goal: Create an internal service to allow pod-to-pod traffic
📘 What is ClusterIP?
ClusterIP is the default service type. It gives your pods a stable internal IP address. Other pods in the cluster can reach your app using this IP or the service name, but it is NOT accessible from outside the cluster.
🔍 What happens:
• Creates a Service named "webapp-clusterip"
• Assigns a stable internal IP (e.g. 10.96.x.x)
• Routes traffic on port 80 to pod port 80
• Load balances across all 5 replicas
💡 Tip: ClusterIP is ideal for internal microservices (e.g., a backend API that only your frontend pods call). It's the most common service type.
📚 KCNA Objective: Understand Kubernetes service types and how they facilitate network communication (Container Orchestration domain, 22%).
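A ClusterIP Service for this lab could look like the following sketch (the selector app: webapp is an assumption about how the deployment's pods are labeled):

```yaml
# clusterip.yaml -- sketch of the internal service
apiVersion: v1
kind: Service
metadata:
  name: webapp-clusterip
  namespace: webapp-lab
spec:
  type: ClusterIP           # default type; shown explicitly for clarity
  selector:
    app: webapp             # assumed pod label
  ports:
    - port: 80              # port the service listens on
      targetPort: 80        # container port traffic is forwarded to
```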
Step 2: Create a NodePort Service
🎯 Goal: Expose your app on a static port accessible from outside the cluster
📘 ClusterIP vs NodePort:
NodePort opens a static port (30000-32767) on EVERY node in the cluster. Anyone who can reach a node's IP on that port can access your app. It builds on top of ClusterIP.
🔍 Access pattern:
http://<any-node-ip>:<nodeport> → routes to your pods
💡 Tip: NodePort is great for development and testing. In production, you'd typically use a LoadBalancer or Ingress instead.
📝 Exam Hint: Remember the port range! NodePort uses ports 30000-32767 by default. This is a common exam question.
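A NodePort variant differs only in the type and the optional nodePort field. A sketch (the service name and pod label are assumptions):

```yaml
# nodeport.yaml -- sketch of a NodePort service
apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport     # assumed name, not specified in the lab
  namespace: webapp-lab
spec:
  type: NodePort
  selector:
    app: webapp             # assumed pod label
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080       # optional; must fall within 30000-32767
```

If you omit nodePort, Kubernetes picks a free port from the range for you.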
Step 3: Create a LoadBalancer Service
🎯 Goal: Create a cloud-integrated load balancer for production-grade external access
📘 What is a LoadBalancer Service?
LoadBalancer is the production way to expose apps. It provisions a real cloud load balancer (AWS ELB, Azure LB, GCP LB) that distributes traffic to your pods. It builds on NodePort and ClusterIP.
💡 Tip: In cloud environments (EKS, AKS, GKE), this automatically creates a real load balancer. In minikube or other local clusters, the external IP stays "Pending".
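A LoadBalancer manifest is nearly identical to the other service types; only the type changes. A sketch (the service name and pod label are assumptions):

```yaml
# loadbalancer.yaml -- sketch of a LoadBalancer service
apiVersion: v1
kind: Service
metadata:
  name: webapp-lb           # assumed name, not specified in the lab
  namespace: webapp-lab
spec:
  type: LoadBalancer        # the cloud provider provisions the external LB
  selector:
    app: webapp             # assumed pod label
  ports:
    - port: 80
      targetPort: 80
```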
Step 4: List All Services
🎯 Goal: Verify all 3 services are running and inspect their endpoints
💻 Command: kubectl get services -n webapp-lab
🔍 What to check:
• ClusterIP service: should have an internal IP, no external IP
• NodePort: should have an internal IP plus a NodePort number
• LoadBalancer: should have an internal IP plus an external IP (or Pending)
💡 Tip: Add the -o wide flag for more details, including each service's selector.
Step 5: Test Service DNS
🎯 Goal: Verify Kubernetes internal DNS resolves service names
📘 How DNS Works in K8s:
CoreDNS is the cluster DNS add-on in most clusters. It automatically creates DNS records for services, so any pod can reach a service by name: service-name.namespace.svc.cluster.local (for example, webapp-clusterip.webapp-lab.svc.cluster.local).
🎓 Learning Checkpoint: You now know 3 service types! ClusterIP for internal traffic, NodePort for dev/test, LoadBalancer for production. In a real cluster, you'd add an Ingress controller for path-based routing.
✅ After Completing All Steps:
1. Click "Validate Configuration" for completion feedback.
2. Click "View Network Topology" for a diagram of services and traffic flow.
3. Switch to the "Service Map" tab to see live service counts.
Lab 3: Cloud Native Observability & Monitoring
Kubernetes / Beginner
Scenario: Monitoring Your Kubernetes Cluster
Your deployments are running, but how do you know they're healthy? Implement monitoring and observability for your Kubernetes cluster. Deploy metrics-server, check resource usage, view pod logs, and set up basic alerts.
KCNA Lab
Learning Objectives:
Metrics Server: Deploy and verify cluster metrics collection
Resource Monitoring: Check CPU and memory usage of pods
Logging: View and analyze pod logs for troubleshooting
Observability: Understand the three pillars: metrics, logs, traces
📋 Step-by-Step Instructions
Step 1: Deploy Metrics Server
🎯 Goal: Install the metrics-server to enable resource monitoring
📘 What is Metrics Server?
Metrics Server collects CPU and memory usage from all nodes and pods. It's required for kubectl top commands and the Horizontal Pod Autoscaler (HPA). Without it, you're flying blind!
💻 Command: kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
💡 Tip: Metrics Server takes 1-2 minutes to start collecting data. Wait before running kubectl top.
📚 KCNA Objective: Understand cloud native observability concepts including metrics, logging, and tracing (Cloud Native Observability domain, 8%).
Step 2: Check Node Resource Usage
🎯 Goal: Monitor CPU and memory at the node level
💻 Command: kubectl top nodes
🔍 What to look for:
• CPU(cores): how many CPU cores are in use
• CPU%: percentage of total CPU capacity used
• MEMORY(bytes): memory in use
• MEMORY%: percentage of total memory used
💡 Tip: If CPU% or MEMORY% is above 80%, your cluster needs more nodes or you need to optimize workloads.
Step 3: Monitor Pod Resource Usage
🎯 Goal: Check CPU and memory consumption of individual pods
💻 Command: kubectl top pods -n webapp-lab
🔍 What it shows:
Per-pod CPU and memory usage. Compare this with your resource requests and limits to see whether pods are over- or under-provisioned.
📝 Hint: Run this command periodically to identify "noisy neighbor" pods that consume too many resources and may starve other pods.
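The numbers from kubectl top are most meaningful when compared against declared requests and limits. A sketch of how those are set on a container (the values and container name are illustrative, not from the lab):

```yaml
# Fragment of a pod template spec showing resource requests and limits
containers:
  - name: webapp            # assumed container name
    image: nginx:latest
    resources:
      requests:             # what the scheduler reserves for the pod
        cpu: 100m
        memory: 128Mi
      limits:               # hard ceiling; CPU is throttled, memory overuse gets the pod OOM-killed
        cpu: 250m
        memory: 256Mi
```

A pod whose usage consistently sits near its limit is a candidate for a higher limit or for horizontal scaling.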
Step 4: View Pod Logs
🎯 Goal: Access application logs for debugging and analysis
📘 Why Logs Matter:
Logs are one of the three pillars of observability (metrics, logs, traces). They show what your app is actually doing: errors, requests, warnings, and more.
💻 Command: kubectl logs <pod-name> -n webapp-lab
Step 5: Describe Pods and Check Events
🎯 Goal: Inspect pod details and recent events for troubleshooting
💻 Command: kubectl describe pod <pod-name> -n webapp-lab
🔍 What to look for in the Events section:
• ScalingReplicaSet events show when pods were added or removed
• FailedScheduling means there aren't enough resources
• ImagePullBackOff means the image name is wrong
💡 Tip: The Events section at the bottom of the describe output is the #1 debugging tool. Always check it when something goes wrong.
Step 6: Check Cluster Health Overview
🎯 Goal: Get a complete health overview of the entire cluster
💻 Command: kubectl get all -n webapp-lab
🔍 What this shows:
Lists the common resource types in the namespace: pods, services, deployments, replicasets. A quick health snapshot of everything you've built.
🎓 Learning Checkpoint: You've worked with metrics (kubectl top), logs (kubectl logs), and events (kubectl describe). That covers two of the three observability pillars; traces are the third. In production, you'd use Prometheus + Grafana + Loki for a full observability stack.
✅ After Completing All Steps:
1. Click "Validate Configuration" to see your monitoring completeness.
2. Switch to the "Grafana Dashboard" tab to see live CPU/memory metrics update in real time.
3. Click "View Architecture" to see the full observability stack diagram.