Master Kubernetes application deployment with hands-on, PBQ-style labs: Deployment & ConfigMap management, multi-container pod design patterns, and Service/Ingress networking - aligned to the Certified Kubernetes Application Developer (CKAD) exam objectives.
These labs complement our other DevOps & container certification tracks, including:
KCNA, DCA, KCSA, Terraform Associate, CKAD, CKA, CKS, AWS DevOps Professional, Azure DevOps AZ-400, and Google Professional Cloud DevOps.
CKAD Application Labs - Module 5
GUI-first labs with deployment consoles, pod designers, and service managers - build real Kubernetes application skills: deployments, configmaps, multi-container pods, services, and ingress.
Lab 13: Deployment & ConfigMap Manager
Kubernetes / GUI
Scenario: Application Rollout
AppStream needs to deploy a web application to the staging namespace with environment-specific configuration. Create a ConfigMap for application settings, deploy the app with 3 replicas, and verify the rollout status and pod health.
CKAD Lab
Learning Objectives:
ConfigMaps: create and mount configuration data into pods
Deployments: create deployments with replica management
Rollouts: monitor deployment rollout status
Pod health: verify running pods and readiness
Step-by-Step Instructions
Step 1: Set deployment target
In the Deployment Console tab, set:
Namespace = staging
App Name = web-frontend
Image = nginx:1.25-alpine
Then click Set Target.
Tip: Always use specific image tags, never :latest in production.
Step 2: Create ConfigMap
Set the ConfigMap values:
APP_ENV = staging
LOG_LEVEL = info
Then click Create ConfigMap.
Tip: ConfigMaps decouple configuration from container images for portability.
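The same ConfigMap can also be created imperatively from the terminal. This is a sketch: the ConfigMap name `web-frontend-config` is an assumption, and the lab console may use a different name.

```shell
# Create the ConfigMap from literal key/value pairs
# (name "web-frontend-config" is an assumed name)
kubectl create configmap web-frontend-config \
  --from-literal=APP_ENV=staging \
  --from-literal=LOG_LEVEL=info \
  -n staging

# Inspect the stored data
kubectl get configmap web-frontend-config -n staging -o yaml
```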
Step 3: Set replicas
Set Replicas = 3 and click Save Replicas.
Step 4: Generate deployment YAML
Click Generate YAML to render the Deployment + ConfigMap manifest.
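The rendered manifest should look roughly like the sketch below. The ConfigMap name (`web-frontend-config`) and the `app: web-frontend` labels are assumptions; the console may generate slightly different names.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-frontend-config     # assumed name
  namespace: staging
data:
  APP_ENV: staging
  LOG_LEVEL: info
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  namespace: staging
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web-frontend
          image: nginx:1.25-alpine   # pinned tag, never :latest
          envFrom:
            - configMapRef:
                name: web-frontend-config   # injects APP_ENV and LOG_LEVEL as env vars
```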
Step 5: Apply deployment
Click Apply to create the deployment and configmap in the cluster.
Step 6: Verify via terminal
In the Terminal tab, run:
kubectl rollout status deployment/web-frontend -n staging
Expected: deployment successfully rolled out with 3/3 replicas ready.
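Beyond the rollout status, you can confirm the replica count and pod readiness directly. The label selector below assumes the deployment's pods carry the label `app=web-frontend`.

```shell
# Deployment summary: READY column should show 3/3
kubectl get deployment web-frontend -n staging

# List the pods backing the deployment
# (assumes the pod template uses the label app=web-frontend)
kubectl get pods -n staging -l app=web-frontend -o wide
```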
CKAD Lab Environment
Deployment Console
Terminal
Deployment Console (Cluster: ckad-lab)
Deployment Target
ConfigMap
Deployment Status
App: --
Namespace: --
Replicas: 0
Ready: 0/0
ConfigMap
Created: No
APP_ENV: --
LOG_LEVEL: --
Applied: No
Activity Log
[system] Deployment console ready. No deployment configured.
Progress: 0/6 tasks completed
Score: 0/100
After Completing All Steps:
1. Click "Validate Configuration" for a deployment checklist.
2. Click "View Architecture" to see the deployment topology.
3. Reset the lab to re-practice deployment patterns.
Lab Completed!
Deployment rolled out with ConfigMap.
Lab 14: Multi-Container Pod Designer
Kubernetes / GUI + Terminal
Scenario: Sidecar Logging Pipeline
LogPipe Inc. needs a pod with a main application container and a sidecar log-shipper. The app writes logs to a shared volume; the sidecar reads and forwards them. You will also configure an init container that pre-populates configuration before the main containers start.
CKAD Lab
Learning Objectives:
Sidecar pattern: shared volumes between containers in a pod
Init containers: run setup tasks before app containers start
Volume mounts: emptyDir for inter-container communication
Pod lifecycle: understand init -> main container ordering
Step-by-Step Instructions
Step 1: Configure the main container
In the Pod Designer tab, set:
Pod Name = log-pipeline
Main Container = app (nginx:1.25-alpine)
Then click Set Main Container.
Step 2: Add sidecar container
Add the sidecar:
Sidecar = log-shipper (busybox:1.36)
Volume Mount = /var/log/app
Then click Add Sidecar.
Tip: The sidecar shares the same emptyDir volume as the main container.
Step 3: Add init container
Add the init container:
Init = config-loader (busybox:1.36)
Then click Add Init Container.
Tip: Init containers run to completion before any app containers start.
Step 4: Generate pod YAML
Click Generate YAML to render the multi-container pod spec.
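The generated pod spec should look roughly like the sketch below. The container `command` values are illustrative assumptions (the lab may run different setup and log-shipping commands); the shared `emptyDir` volume and the init/main ordering are the parts that matter.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: log-pipeline
spec:
  initContainers:
    - name: config-loader            # runs to completion before app containers start
      image: busybox:1.36
      # illustrative setup task: pre-populate config on the shared volume
      command: ["sh", "-c", "echo 'log_dir=/var/log/app' > /var/log/app/app.conf"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
  containers:
    - name: app                      # main application container
      image: nginx:1.25-alpine
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-shipper              # sidecar: reads logs from the shared volume
      image: busybox:1.36
      # illustrative forwarder loop; a real shipper (fluent-bit etc.) would go here
      command: ["sh", "-c", "while true; do cat /var/log/app/*.log 2>/dev/null; sleep 10; done"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
  volumes:
    - name: app-logs
      emptyDir: {}                   # scratch volume shared by all containers, lives with the pod
```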
Step 5: Apply pod
Click Apply Pod to create the multi-container pod.
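After applying, you can confirm the init-then-main ordering and the sidecar from the terminal:

```shell
# READY should show 2/2 once the init container has completed
kubectl get pod log-pipeline

# Init Containers section shows config-loader as Terminated/Completed
kubectl describe pod log-pipeline

# Read output from a specific container with -c
kubectl logs log-pipeline -c log-shipper
```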
After Completing All Steps:
1. Click "Validate" to see the pod configuration checklist.
2. Click "View Architecture" to see the multi-container pod diagram.
3. Use Reset to re-practice sidecar and init patterns.
Lab Completed!
Multi-container pod with sidecar and init.
Lab 15: Service & Ingress Networking
Kubernetes / Terminal + GUI
Scenario: Expose and Route Traffic
WebScale Corp. has a frontend deployment running in the cluster. Your task is to create a ClusterIP Service for internal communication, a NodePort Service for external access, and configure an Ingress resource to route HTTP traffic by path to different backends.
CKAD Lab
Learning Objectives:
Services: ClusterIP, NodePort, and their use cases
Selectors: label-based pod selection for services
Ingress: path-based routing and host rules
DNS: in-cluster service discovery patterns
Step-by-Step Instructions
Step 1: Create a ClusterIP Service
In the Terminal, run:
kubectl create service clusterip web-frontend --tcp=80:80 -n staging
Tip: ClusterIP is the default service type - only accessible within the cluster.
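You can exercise the ClusterIP service from inside the cluster using a throwaway pod. This is a sketch: the short name `web-frontend` resolves via in-cluster DNS because the pod runs in the same namespace (the fully qualified name would be `web-frontend.staging.svc.cluster.local`).

```shell
# Run a one-off pod and fetch the service over in-cluster DNS
kubectl run tmp --rm -it --restart=Never --image=busybox:1.36 -n staging \
  -- wget -qO- http://web-frontend
```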
Step 2: Create a NodePort Service
Run:
kubectl create service nodeport web-external --tcp=80:80 --node-port=30080 -n staging
Tip: NodePort exposes the service on each node's IP at a static port (30000-32767).
Step 3: Configure the ingress rule
In the Ingress Manager tab, set Host = app.lab.local and Backend = web-frontend, and define the routing path for the rule.
Step 4: Generate ingress YAML
Click Generate YAML to render the ingress manifest.
Step 5: Apply ingress
Click Apply Ingress to create the ingress resource.
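The applied ingress should look roughly like the sketch below. The resource name and the `/` path are assumptions; the host and backend service match the expected output of this lab.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-frontend          # assumed ingress name
  namespace: staging
spec:
  rules:
    - host: app.lab.local     # host rule: only requests for this host match
      http:
        paths:
          - path: /           # assumed routing path
            pathType: Prefix
            backend:
              service:
                name: web-frontend   # routes matching traffic to the ClusterIP service
                port:
                  number: 80
```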
Step 6: Verify ingress via terminal
Run:
kubectl get ingress -n staging
Expected: ingress created with host app.lab.local routing to web-frontend backend.
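To test path routing end to end, send a request through the ingress controller with the expected Host header. `<ingress-ip>` is a placeholder for the controller's address in your environment; a NodePort-exposed controller would use `<node-ip>:<port>` instead.

```shell
# The Host header must match the ingress rule (app.lab.local)
curl -H "Host: app.lab.local" http://<ingress-ip>/
```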
CKAD Service & Ingress Lab
Terminal
Ingress Manager
Ingress Manager (Namespace: staging)
Ingress Rule
Services
ClusterIP: --
NodePort: --
Total Services: 0
Ingress
Host: --
Path: --
Backend: --
Applied: No
Activity Log
[system] Ingress manager ready. No services or ingress configured.
Progress: 0/6 tasks completed
Score: 0/100
After Completing All Steps:
1. Click "Validate" to see the service + ingress checklist.
2. Click "View Architecture" to see the traffic routing diagram.
3. Use Reset to re-practice networking patterns.