DevOps & Kubernetes Admin Labs - CKA

Master Kubernetes cluster administration with hands-on, PBQ-style labs: node management & maintenance, RBAC & ServiceAccount configuration, and PersistentVolume storage management - aligned to the Certified Kubernetes Administrator (CKA) exam objectives.

These labs span the major DevOps & container certification tracks, including:

KCNA, DCA, KCSA, Terraform Associate, CKAD, CKA, CKS, AWS DevOps Professional, Azure DevOps (AZ-400), Google Professional Cloud DevOps

CKA Administration Labs - Module 6

Cluster administration labs with node management consoles, RBAC builders, and storage managers - build real Kubernetes admin skills: node maintenance, role-based access control, and persistent storage.

Lab 16: Node Management & Cluster Maintenance
Kubernetes / Terminal + GUI
Scenario: Scheduled Node Maintenance
The operations team needs to perform maintenance on worker-node-2. You must safely cordon the node, drain all workloads, verify pods have been rescheduled, perform the maintenance (OS upgrade simulation), then uncordon the node and confirm the cluster is healthy.
CKA Lab

Learning Objectives:

  • Node cordon: mark a node as unschedulable
  • Node drain: safely evict all pods from a node
  • Pod rescheduling: verify workloads moved to other nodes
  • Node uncordon: return node to schedulable state

Step-by-Step Instructions

  1. Step 1: Check cluster nodes
    In the Terminal tab, run:
    kubectl get nodes
    Confirm all 3 nodes are Ready before maintenance.
  2. Step 2: Cordon the node
    Mark the node as unschedulable:
    kubectl cordon worker-node-2
    Tip: Cordoning prevents new pods from being scheduled but does not evict existing pods.
  3. Step 3: Drain the node
    Evict all pods from the node:
    kubectl drain worker-node-2 --ignore-daemonsets --delete-emptydir-data
    Tip: --ignore-daemonsets is required because drain cannot evict DaemonSet-managed pods (their controller would immediately recreate them on the node); --delete-emptydir-data acknowledges that pods using emptyDir volumes lose that data when evicted.
  4. Step 4: Verify pod rescheduling
    In the Node Console tab, click Verify Drain to confirm no application pods remain on worker-node-2.
  5. Step 5: Perform maintenance
    Click Run OS Upgrade in the Node Console to simulate the maintenance task.
  6. Step 6: Uncordon and verify
    In the Terminal, run:
    kubectl uncordon worker-node-2
    Expected: the node's status changes from Ready,SchedulingDisabled back to Ready.
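The six steps above can be sketched as a single terminal session (a sketch for this lab's cluster; node name and flags mirror the scenario):

```shell
# 1. Confirm all nodes are Ready before starting maintenance
kubectl get nodes

# 2. Cordon: mark worker-node-2 unschedulable (existing pods keep running)
kubectl cordon worker-node-2

# 3. Drain: evict all evictable pods; skip DaemonSet pods and
#    accept that emptyDir data is deleted along with its pods
kubectl drain worker-node-2 --ignore-daemonsets --delete-emptydir-data

# 4. Verify nothing but DaemonSet pods remains on the node
kubectl get pods --all-namespaces --field-selector spec.nodeName=worker-node-2

# 5. ...perform the OS upgrade here...

# 6. Uncordon: allow scheduling again and confirm the node is Ready
kubectl uncordon worker-node-2
kubectl get node worker-node-2
```

Note that uncordoning does not move pods back: workloads rescheduled during the drain stay where they landed until they are rescheduled for some other reason.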

CKA Lab Environment

Terminal
Node Console
root@k8s-admin:~#
Node Management Console - Cluster: cka-lab
Cluster Nodes
control-plane: Ready
worker-node-1: Ready
worker-node-2: Ready
Maintenance Status
Target Node: worker-node-2
Cordoned: No
Drained: No
OS Upgrade: Pending
Maintenance Actions
Activity Log
[system] Node management console ready. 3 nodes in cluster.
Progress: 0/6 tasks completed
Score: 0/100
After Completing All Steps:

1. Click "Validate Configuration" for a maintenance checklist.
2. Click "View Architecture" to see the cluster node topology.
3. Reset the lab to re-practice node maintenance patterns.

Lab Completed!

Node maintenance performed successfully.

Lab 17: RBAC & ServiceAccount Configuration
Kubernetes / GUI
Scenario: Developer Access Control
SecureOps requires a new ServiceAccount for the development team with specific permissions. Create a ServiceAccount named dev-deployer, define a Role allowing get/list/create on pods and deployments in the dev namespace, then bind the role to the service account and verify access.
CKA Lab

Learning Objectives:

  • ServiceAccounts: create and configure service accounts
  • Roles: define namespace-scoped permission sets
  • RoleBindings: bind roles to service accounts
  • Auth verification: test access with kubectl auth can-i

Step-by-Step Instructions

  1. Step 1: Create ServiceAccount
    In the RBAC Console tab, set:
    Name = dev-deployer Namespace = dev
    Then click Create ServiceAccount.
  2. Step 2: Define Role
    Configure the role:
    Role Name = pod-manager Resources = pods, deployments Verbs = get, list, create
    Then click Create Role.
    Tip: Roles are namespace-scoped. Use ClusterRoles for cluster-wide permissions.
  3. Step 3: Create RoleBinding
    Bind the role to the service account:
    Binding = pod-manager-binding Role = pod-manager Subject = dev-deployer
    Then click Create RoleBinding.
  4. Step 4: Generate RBAC YAML
    Click Generate YAML to render all RBAC resources.
  5. Step 5: Apply RBAC
    Click Apply to create all RBAC resources in the cluster.
  6. Step 6: Verify access
    In the Terminal tab, run:
    kubectl auth can-i create pods --as=system:serviceaccount:dev:dev-deployer -n dev
    Expected: "yes" - confirming the service account has pod creation access.
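The Generate YAML step should render manifests equivalent to the following sketch (field values mirror the lab's inputs; note that pods live in the core API group while deployments live in apps, so the Role needs two rules):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev-deployer
  namespace: dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-manager
  namespace: dev
rules:
  - apiGroups: [""]            # core API group: pods
    resources: ["pods"]
    verbs: ["get", "list", "create"]
  - apiGroups: ["apps"]        # apps API group: deployments
    resources: ["deployments"]
    verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-manager-binding
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: dev-deployer
    namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-manager
```

Applying this with kubectl apply -f and then running the auth can-i check from Step 6 should return "yes" for create pods, and "no" for any verb or resource the Role does not grant (e.g. delete pods).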

CKA Lab Environment

RBAC Console
Terminal
RBAC Manager - Namespace: dev
ServiceAccount
Role
RoleBinding
ServiceAccount
Name: --
Namespace: --
Created: No
Role & Binding
Role: --
Binding: --
Applied: No
Verified: No
Activity Log
[system] RBAC console ready. No service accounts or roles configured.
root@k8s-admin:~#
Progress: 0/6 tasks completed
Score: 0/100
After Completing All Steps:

1. Click "Validate Configuration" for an RBAC checklist.
2. Click "View Architecture" to see the RBAC relationship diagram.
3. Reset the lab to re-practice RBAC patterns.

Lab Completed!

RBAC configured with ServiceAccount, Role, and RoleBinding.

Lab 18: PersistentVolume & Storage Management
Kubernetes / GUI + Terminal
Scenario: Database Storage Provisioning
DataVault needs persistent storage for a PostgreSQL database. Create a PersistentVolume (5Gi, ReadWriteOnce, hostPath), a matching PersistentVolumeClaim, and mount it to a database pod. Verify the PVC is bound and the pod can write to the volume.
CKA Lab

Learning Objectives:

  • PersistentVolumes: create PVs with capacity and access modes
  • PersistentVolumeClaims: request storage matching PV specs
  • Volume mounting: attach PVCs to pods
  • Storage verification: confirm bound state and data persistence

Step-by-Step Instructions

  1. Step 1: Create PersistentVolume
    In the Storage Console tab, configure the PV:
    Name = db-pv Capacity = 5Gi Access Mode = ReadWriteOnce Type = hostPath (/mnt/data/postgres)
    Then click Create PV.
  2. Step 2: Create PersistentVolumeClaim
    Configure the PVC:
    Name = db-pvc Request = 5Gi Access Mode = ReadWriteOnce
    Then click Create PVC.
    Tip: PVC request must not exceed PV capacity, and access modes must match.
  3. Step 3: Generate storage YAML
    Click Generate YAML to render PV + PVC + Pod manifest.
  4. Step 4: Apply storage resources
    Click Apply to create PV, PVC, and database pod.
  5. Step 5: Verify PVC binding
    In the Terminal tab, run:
    kubectl get pv,pvc -n database
    Expected: PV status = Available -> Bound, PVC status = Bound.
  6. Step 6: Verify pod and volume
    Run:
    kubectl describe pod postgres-db -n database
    Expected: Pod running with volume mounted at /var/lib/postgresql/data.
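The Generate YAML step should produce manifests along these lines (a sketch matching the lab's inputs; the postgres image tag is an assumption, and storageClassName is omitted so the PVC binds to the PV by matching capacity and access mode):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv          # PVs are cluster-scoped, so no namespace
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/postgres
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc
  namespace: database
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi     # must not exceed the PV's capacity
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres-db
  namespace: database
spec:
  containers:
    - name: postgres
      image: postgres:16        # image tag is an assumption
      volumeMounts:
        - name: db-storage
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: db-storage
      persistentVolumeClaim:
        claimName: db-pvc
```

hostPath volumes tie the data to a single node's filesystem, which is fine for this lab but not for multi-node production clusters, where a networked or CSI-provisioned volume would be used instead.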

CKA Lab Environment

Storage Console
Terminal
Storage Manager - Namespace: database
PersistentVolume
PersistentVolumeClaim
PersistentVolume
Name: --
Capacity: --
Access Mode: --
Status: --
PVC & Pod
PVC Name: --
PVC Status: --
Pod: --
Applied: No
Activity Log
[system] Storage manager ready. No PVs or PVCs configured.
root@k8s-admin:~#
Progress: 0/6 tasks completed
Score: 0/100
After Completing All Steps:

1. Click "Validate Configuration" for a storage + pod checklist.
2. Click "View Architecture" to see the PV/PVC/Pod binding diagram.
3. Reset the lab to re-practice storage patterns.

Lab Completed!

PersistentVolume bound and database pod running.