Master Kubernetes cluster administration with hands-on, PBQ-style labs: node management & maintenance, RBAC & ServiceAccount configuration, and PersistentVolume storage management - aligned to the Certified Kubernetes Administrator (CKA) exam objectives.
These labs cover all DevOps & container certifications, including:
KCNA, DCA, KCSA, Terraform Associate, CKAD, CKA, CKS, AWS DevOps Professional, Azure DevOps AZ-400, and Google Professional Cloud DevOps
CKA Administration Labs - Module 6
Cluster administration labs with node management consoles, RBAC builders, and storage managers - build real Kubernetes admin skills: node maintenance, role-based access control, and persistent storage.
Lab 16: Node Management & Cluster Maintenance
Kubernetes / Terminal + GUI
Scenario: Scheduled Node Maintenance
The operations team needs to perform maintenance on worker-node-2. You must safely cordon the node, drain all workloads, verify pods have been rescheduled, perform the maintenance (OS upgrade simulation), then uncordon the node and confirm the cluster is healthy.
CKA Lab
Learning Objectives:
Node cordon: mark a node as unschedulable
Node drain: safely evict all pods from a node
Pod rescheduling: verify workloads moved to other nodes
Node uncordon: return node to schedulable state
Step-by-Step Instructions
Step 1: Check cluster nodes
In the Terminal tab, run:
kubectl get nodes
Confirm all 3 nodes are Ready before maintenance.
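The Ready check can also be scripted so it is machine-verifiable rather than eyeballed. A minimal sketch; the sample output below is assumed (node ages and versions are illustrative), matching this lab's three-node cluster:

```shell
# Assumed sample of `kubectl get nodes` output for this lab's cluster
cat <<'EOF' > /tmp/nodes.txt
NAME            STATUS   ROLES           AGE   VERSION
control-plane   Ready    control-plane   30d   v1.29.0
worker-node-1   Ready    <none>          30d   v1.29.0
worker-node-2   Ready    <none>          30d   v1.29.0
EOF
# Count nodes whose STATUS column is exactly "Ready" (skip the header line)
awk 'NR > 1 && $2 == "Ready"' /tmp/nodes.txt | wc -l
```

Against a live cluster you would pipe `kubectl get nodes` directly into the same `awk` filter and expect a count of 3 before starting maintenance.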
Step 2: Cordon the node
Mark the node as unschedulable:
kubectl cordon worker-node-2
Tip: Cordoning prevents new pods from being scheduled but does not evict existing pods.
Step 3: Drain the node
Evict all workloads from the node:
kubectl drain worker-node-2 --ignore-daemonsets --delete-emptydir-data
Tip: --ignore-daemonsets is required because DaemonSet pods cannot be evicted; the DaemonSet controller would immediately recreate them, so drain skips them instead of failing.
Step 4: Verify pod rescheduling
In the Node Console tab, click Verify Drain to confirm no application pods remain on worker-node-2.
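The same verification can be done from the Terminal: --field-selector filters pods by the node they are scheduled on. A sketch against this lab's cluster:

```shell
# List every pod still scheduled on worker-node-2, across all namespaces.
# After a successful drain, only DaemonSet-managed pods should remain.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=worker-node-2
```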
Step 5: Perform maintenance
Click Run OS Upgrade in the Node Console to simulate the maintenance task.
Step 6: Uncordon and verify
In the Terminal, run:
kubectl uncordon worker-node-2
Expected: the node's status changes from Ready,SchedulingDisabled back to Ready.
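The full maintenance flow from Steps 1-6 can be captured as a short runbook script; a sketch, where the commented-out middle step stands in for whatever the operations team actually runs:

```shell
#!/bin/sh
set -e
NODE="worker-node-2"

kubectl cordon "$NODE"                                    # stop new scheduling
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
# ... perform the real maintenance here (OS upgrade, reboot, etc.) ...
kubectl uncordon "$NODE"                                  # allow scheduling again
kubectl get node "$NODE"                                  # should report Ready
```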
CKA Lab Environment
Terminal
Node Console
root@k8s-admin:~#
Node Management Console (Cluster: cka-lab)
Cluster Nodes
control-plane: Ready
worker-node-1: Ready
worker-node-2: Ready
Maintenance Status
Target Node: worker-node-2
Cordoned: No
Drained: No
OS Upgrade: Pending
Maintenance Actions
Activity Log
[system] Node management console ready. 3 nodes in cluster.
Progress: 0/6 tasks completed
Score: 0/100
After Completing All Steps:
1. Click "Validate Configuration" for a maintenance checklist.
2. Click "View Architecture" to see the cluster node topology.
3. Reset the lab to re-practice node maintenance patterns.
Lab 17: RBAC & ServiceAccount Configuration
Kubernetes / GUI
Scenario: Developer Access Control
SecureOps requires a new ServiceAccount for the development team with specific permissions. Create a ServiceAccount named dev-deployer, define a Role allowing get/list/create on pods and deployments in the dev namespace, then bind the role to the service account and verify the access.
CKA Lab
Learning Objectives:
ServiceAccounts: create and configure service accounts
Roles: define namespace-scoped permission sets
RoleBindings: bind roles to service accounts
Auth verification: test access with kubectl auth can-i
Step-by-Step Instructions
Step 1: Create ServiceAccount
In the RBAC Console tab, set:
Name = dev-deployer
Namespace = dev
Then click Create ServiceAccount.
Step 2: Define Role
Configure the role:
Role Name = pod-manager
Resources = pods, deployments
Verbs = get, list, create
Then click Create Role.
Tip: Roles are namespace-scoped. Use ClusterRoles for cluster-wide permissions.
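For reference, the same three objects the console builds can be created imperatively from the Terminal. A sketch using standard kubectl create subcommands; the binding name pod-manager-binding is an assumption, since the lab does not specify one:

```shell
kubectl create namespace dev 2>/dev/null || true          # ensure the namespace exists
kubectl create serviceaccount dev-deployer -n dev
kubectl create role pod-manager -n dev \
  --verb=get,list,create --resource=pods,deployments
kubectl create rolebinding pod-manager-binding -n dev \
  --role=pod-manager --serviceaccount=dev:dev-deployer
```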
Step 3: Create RoleBinding
In the RoleBinding tab, bind the pod-manager role to the dev-deployer ServiceAccount, then click Create RoleBinding.
Step 4: Generate RBAC YAML
Click Generate YAML to render the ServiceAccount + Role + RoleBinding manifest.
Step 5: Apply RBAC resources
Click Apply to create all RBAC resources in the cluster.
Step 6: Verify access
In the Terminal tab, run:
kubectl auth can-i create pods --as=system:serviceaccount:dev:dev-deployer -n dev
Expected: "yes" - confirming the service account has pod creation access.
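It is worth probing permissions the role should not grant as well, since an over-permissive role also fails the SecureOps requirement. A sketch of positive and negative checks:

```shell
SA="system:serviceaccount:dev:dev-deployer"
kubectl auth can-i list deployments --as="$SA" -n dev      # expect: yes
kubectl auth can-i delete pods      --as="$SA" -n dev      # expect: no (verb not granted)
kubectl auth can-i create pods      --as="$SA" -n default  # expect: no (Role is namespace-scoped)
```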
CKA Lab Environment
RBAC Console
Terminal
RBAC Manager (Namespace: dev)
ServiceAccount
Role
RoleBinding
ServiceAccount
Name: --
Namespace: --
Created: No
Role & Binding
Role: --
Binding: --
Applied: No
Verified: No
Activity Log
[system] RBAC console ready. No service accounts or roles configured.
root@k8s-admin:~#
Progress: 0/6 tasks completed
Score: 0/100
After Completing All Steps:
1. Click "Validate Configuration" for an RBAC checklist.
2. Click "View Architecture" to see the RBAC relationship diagram.
3. Reset the lab to re-practice RBAC patterns.
Lab 18: PersistentVolume & Storage Management
Kubernetes / GUI + Terminal
Scenario: Database Storage Provisioning
DataVault needs persistent storage for a PostgreSQL database. Create a PersistentVolume (5Gi, ReadWriteOnce, hostPath), a matching PersistentVolumeClaim, and mount it to a database pod. Verify the PVC is bound and the pod can write to the volume.
CKA Lab
Learning Objectives:
PersistentVolumes: create PVs with capacity and access modes
PersistentVolumeClaims: request storage that binds to a matching PV
Volume mounts: attach claimed storage to a pod
Binding verification: confirm PV/PVC status with kubectl
Step-by-Step Instructions
Step 1: Create PersistentVolume
In the Storage Console tab, create the PV: Capacity = 5Gi, Access Mode = ReadWriteOnce, Type = hostPath. Then click Create PV.
Step 2: Create PersistentVolumeClaim
Configure the claim:
Name = db-pvc
Request = 5Gi
Access Mode = ReadWriteOnce
Then click Create PVC.
Tip: PVC request must not exceed PV capacity, and access modes must match.
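The declarative equivalent of what the console generates might look like the following. This is a sketch: the PV name, hostPath location, and storageClassName are assumptions not specified by the lab, while db-pvc, 5Gi, and ReadWriteOnce come from the scenario:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv                    # assumed name; not given in the lab
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual       # assumed; makes the PV/PVC match explicit
  hostPath:
    path: /mnt/data/postgres     # assumed host directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc
  namespace: database
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 5Gi               # must not exceed the PV's 5Gi capacity
EOF
```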
Step 3: Generate storage YAML
Click Generate YAML to render PV + PVC + Pod manifest.
Step 4: Apply storage resources
Click Apply to create PV, PVC, and database pod.
Step 5: Verify PVC binding
In the Terminal tab, run:
kubectl get pv,pvc -n database
Expected: PV status = Available -> Bound, PVC status = Bound.
Step 6: Verify pod and volume
Run:
kubectl describe pod postgres-db -n database
Expected: Pod running with volume mounted at /var/lib/postgresql/data.
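A quick write/read probe from the Terminal confirms the pod can actually write to the mounted volume, not just that the mount appears in describe output:

```shell
# Write a marker file into the mounted volume, then read it back
kubectl exec postgres-db -n database -- \
  sh -c 'echo ok > /var/lib/postgresql/data/.probe && cat /var/lib/postgresql/data/.probe'
# Because the data lives on the PV, the .probe file should also survive
# deleting and recreating the pod.
```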
CKA Lab Environment
Storage Console
Terminal
Storage Manager (Namespace: database)
PersistentVolume
PersistentVolumeClaim
PersistentVolume
Name: --
Capacity: --
Access Mode: --
Status: --
PVC & Pod
PVC Name: --
PVC Status: --
Pod: --
Applied: No
Activity Log
[system] Storage manager ready. No PVs or PVCs configured.
root@k8s-admin:~#
Progress: 0/6 tasks completed
Score: 0/100
After Completing All Steps:
1. Click "Validate Configuration" for a storage + pod checklist.
2. Click "View Architecture" to see the PV/PVC/Pod binding diagram.
3. Reset the lab to re-practice storage patterns.