DevOps & Container Labs - DCA Docker

Master Docker image management, container networking, storage drivers, and Docker Swarm orchestration through hands-on labs aligned to the Docker Certified Associate (DCA) exam objectives.

πŸ† These labs cover all DevOps & container certifications including:

☸️ KCNA 🐳 DCA πŸ”’ KCSA 🌍 Terraform Associate βš™οΈ CKAD
πŸ› οΈ CKA πŸ›‘οΈ CKS ☁️ AWS DevOps Professional πŸ”· Azure DevOps AZ-400 🌐 Google Prof. Cloud DevOps

DCA Docker Labs - Module 2

Build production-ready Docker skills covering images, registries, networking, storage, and Swarm orchestration.

Lab 4: Docker Image Management & Registry
Docker / Intermediate
Scenario: Container Image Pipeline
MicroApps Inc. needs a containerized image workflow. As a Docker engineer, build custom images with Dockerfiles, tag and manage image layers, push to a private registry, and inspect container image metadata. This lab covers the DCA "Image Creation, Management, and Registry" domain (20% of exam).
DCA Lab

Learning Objectives:

  • Dockerfile: Build custom images using multi-stage Dockerfiles
  • Tagging: Tag images with semantic versioning for registries
  • Registry: Push and pull images from a private Docker registry
  • Inspection: Analyze image layers, history, and metadata

πŸ“‹ Step-by-Step Instructions

  1. Step 1: Build a Docker Image from Dockerfile
    🎯 Goal: Build a custom nginx-based image using a Dockerfile

    πŸ“ What is a Dockerfile?
    A Dockerfile is a text file with instructions for building a Docker image. Each instruction creates a new layer. Common instructions include FROM (base image), COPY (add files), RUN (execute commands), and EXPOSE (document ports).

    πŸ’» Command:
    docker build -t webapp:v1.0 .

    πŸ” What happens:
    β€’ Docker reads the Dockerfile in the current directory
    β€’ Each instruction creates a layer (cached for efficiency)
    β€’ The final image is tagged as "webapp:v1.0"
    β€’ You can see each layer's size with docker history
    πŸ’‘ Tip: Always use specific base image tags (e.g., nginx:1.25-alpine) instead of :latest for reproducible builds.
    πŸ“– DCA Objective: Describe and demonstrate how to create an efficient image via a Dockerfile (Image Creation domain, 20% of exam).
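The Dockerfile itself isn't shown in the lab; a minimal multi-stage sketch that would build with the command above (the build stage, file paths, and package steps are illustrative assumptions, not lab requirements):

```dockerfile
# Stage 1: build static assets with a throwaway toolchain (hypothetical app)
FROM node:20-alpine AS builder
WORKDIR /src
COPY . .
RUN npm ci && npm run build

# Stage 2: copy only the build output into a pinned nginx base
FROM nginx:1.25-alpine
COPY --from=builder /src/dist /usr/share/nginx/html
EXPOSE 80
```

Only the final stage's layers end up in webapp:v1.0; the node toolchain from the builder stage is discarded, which keeps the image small.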
  2. Step 2: Tag the Image for Registry
    🎯 Goal: Tag the image with the registry URL for pushing

    πŸ“ Why Tag?
    Docker images need to be tagged with the full registry path before pushing. The format is: registry-url/repository:tag. This tells Docker where to push/pull the image.

    πŸ’» Command:
    docker tag webapp:v1.0 registry.local:5000/webapp:v1.0

    πŸ” What happens:
    β€’ Creates a new tag pointing to the same image layers
    β€’ No data is copied β€” tags are just pointers
    β€’ The image is now addressable by the registry path
    πŸ’‘ Tip: You can have multiple tags pointing to the same image ID. Use docker images to see all tags.
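Semantic versioning in practice usually means several registry tags pointing at the same image ID; a sketch using this lab's registry (the extra v1 and latest aliases are illustrative):

```shell
docker tag webapp:v1.0 registry.local:5000/webapp:v1.0
docker tag webapp:v1.0 registry.local:5000/webapp:v1      # movable minor-version alias
docker tag webapp:v1.0 registry.local:5000/webapp:latest  # convenience alias
docker images registry.local:5000/webapp                  # all three share one IMAGE ID
```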
  3. Step 3: Push Image to Private Registry
    🎯 Goal: Push the tagged image to the private Docker registry

    πŸ’» Command:
    docker push registry.local:5000/webapp:v1.0

    πŸ” What happens:
    β€’ Docker uploads each layer that the registry doesn't already have
    β€’ Layers are deduplicated β€” shared layers are pushed only once
    β€’ The manifest (image metadata) is pushed last
    β€’ Other hosts can now pull this image from the registry
    πŸ’‘ Tip: If you get "connection refused", the registry container may not be running. Use docker ps to check.
    πŸŽ“ Hint: In production, you'd use Docker Hub, AWS ECR, GCR, or Harbor. This lab uses a local registry for simplicity.
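If the registry isn't running, a lab-style local registry can be started from the official registry image; the container name and port mapping here are assumptions chosen to match registry.local:5000:

```shell
docker run -d --name registry -p 5000:5000 registry:2   # v2 registry API on port 5000
docker ps --filter name=registry                        # confirm it's up before pushing
```

Because this registry speaks plain HTTP, the Docker daemon may also need registry.local:5000 listed under "insecure-registries" in /etc/docker/daemon.json.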
  4. Step 4: List Local Images
    🎯 Goal: View all local Docker images and their sizes

    πŸ’» Command:
    docker images

    πŸ” What to look for:
    β€’ REPOSITORY and TAG columns show how images are named
    β€’ IMAGE ID is a unique hash of the image
    β€’ SIZE shows the total disk usage per image
    β€’ Multiple tags with the same ID share layers
    πŸ’‘ Tip: Use docker images --filter dangling=true to find untagged images that waste disk space.
  5. Step 5: Inspect Image Layers
    🎯 Goal: Examine the build history and layers of the image

    πŸ’» Command:
    docker history webapp:v1.0

    πŸ” What to look for:
    β€’ Each row is a layer from a Dockerfile instruction
    β€’ SIZE column shows how much each layer adds
    β€’ <missing> in the IMAGE column marks layers inherited from the base image (their intermediate IDs aren't stored locally)
    β€’ Smaller layers = more efficient image
    πŸ’‘ Tip: To reduce image size, combine RUN commands with &&, use .dockerignore, and choose alpine base images.
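The size tips above map directly onto Dockerfile patterns; a sketch (the packages installed are illustrative):

```dockerfile
FROM alpine:3.19
# One RUN = one layer: chain commands with && and clean caches in the same layer,
# otherwise files deleted by a later RUN still occupy space in the earlier layer.
RUN apk add --no-cache curl ca-certificates \
    && rm -rf /tmp/*
```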
  6. Step 6: Inspect Image Metadata
    🎯 Goal: View detailed metadata including environment variables, ports, and entrypoint

    πŸ’» Command:
    docker inspect webapp:v1.0

    πŸ” Key metadata fields:
    β€’ Config.Env β€” environment variables baked into the image
    β€’ Config.ExposedPorts β€” documented ports (not published)
    β€’ Config.Entrypoint β€” default process that runs
    β€’ RootFS.Layers β€” list of layer digests
    πŸŽ“ Learning Checkpoint: You've built, tagged, pushed, and inspected Docker images! Understanding image layers is key for optimizing build times and reducing image sizes.
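Full docker inspect output is a large JSON document; Go templates via --format extract just the fields listed above:

```shell
docker inspect --format '{{.Config.Entrypoint}}' webapp:v1.0
docker inspect --format '{{json .Config.ExposedPorts}}' webapp:v1.0
docker inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' webapp:v1.0
```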

Docker Lab Environment

[Interactive lab environment: a Terminal tab and a Docker Dashboard tab track images, tags, registry pushes, and inspected layers. The local registry runs at registry.local:5000; progress starts at 0/6 tasks, score 0/100.]
πŸŽ‰ After Completing All Steps:

1. Click "Validate Configuration" to see your image pipeline completeness.
2. Switch to "Docker Dashboard" tab to see image/registry stats.
3. Click "View Architecture" to see the Docker image pipeline diagram.


Lab 5: Docker Networking & Storage
Docker / Intermediate
Scenario: Multi-Container Application Stack
DataFlow Corp. is deploying a multi-container application with a web frontend and database backend. As a Docker engineer, configure custom bridge networks for container isolation, create persistent volumes for database storage, and verify inter-container communication. This covers the DCA "Networking" (15%) and "Storage" (10%) domains.
DCA Lab

Learning Objectives:

  • Bridge Networks: Create custom bridge networks for container isolation
  • DNS: Leverage Docker's built-in DNS for service discovery
  • Volumes: Create and manage persistent storage volumes
  • Connectivity: Test and verify inter-container communication

πŸ“‹ Step-by-Step Instructions

  1. Step 1: Create a Custom Bridge Network
    🎯 Goal: Create an isolated bridge network for your application containers

    πŸ“ What is a Bridge Network?
    Docker's default bridge network doesn't support DNS-based discovery. A custom bridge network gives containers automatic DNS resolution by name, network isolation, and better security. Containers on different networks can't communicate unless explicitly connected.

    πŸ’» Command:
    docker network create --driver bridge --subnet 172.20.0.0/16 app-network

    πŸ” What happens:
    β€’ Creates a new bridge network named "app-network"
    β€’ Assigns the 172.20.0.0/16 subnet for container IPs
    β€’ Enables DNS-based container name resolution
    β€’ Isolates traffic from other networks
    πŸ’‘ Tip: Always use custom bridge networks instead of the default bridge. Custom bridges support DNS, while the default bridge requires --link (deprecated).
    πŸ“– DCA Objective: Create a Docker bridge network for container communication (Networking domain, 15% of exam).
  2. Step 2: Create a Persistent Volume
    🎯 Goal: Create a named volume for database data persistence

    πŸ“ Why Volumes?
    Container filesystems are ephemeral β€” data is lost when the container is removed. Docker volumes store data outside the container's writable layer, surviving container recreation. They're managed by Docker and are the preferred way to persist data.

    πŸ’» Command:
    docker volume create db-data

    πŸ” What happens:
    β€’ Creates a named volume stored on the host under /var/lib/docker/volumes/db-data/_data
    β€’ Volume persists even after containers using it are removed
    β€’ Can be mounted into any container with -v db-data:/path
    πŸ’‘ Tip: Named volumes are preferred over bind mounts in production. They're managed by Docker, portable, and work with volume drivers for cloud storage.
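docker volume inspect confirms where the data lives on the host:

```shell
docker volume inspect --format '{{.Mountpoint}}' db-data
# on a default install this is /var/lib/docker/volumes/db-data/_data
```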
  3. Step 3: Run Database Container with Volume
    🎯 Goal: Start a PostgreSQL container on the custom network with persistent storage

    πŸ’» Command:
    docker run -d --name db-server --network app-network -v db-data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=labpass123 postgres:15-alpine

    πŸ” Flag breakdown:
    β€’ -d β€” Run in detached (background) mode
    β€’ --name db-server β€” Container name (also its DNS hostname)
    β€’ --network app-network β€” Attach to custom bridge
    β€’ -v db-data:/var/lib/postgresql/data β€” Mount volume
    β€’ -e POSTGRES_PASSWORD=labpass123 β€” Set env variable
    πŸ’‘ Tip: The container name "db-server" becomes its DNS hostname on the custom network. Other containers can reach it at db-server:5432.
    ⚠️ Common Mistake: Don't use the default bridge network for multi-container apps. DNS resolution won't work, and you'll have to use container IPs which change on restart.
  4. Step 4: Run Web Frontend Container
    🎯 Goal: Start an nginx frontend on the same network, publishing port 8080

    πŸ’» Command:
    docker run -d --name web-frontend --network app-network -p 8080:80 nginx:alpine

    πŸ” What happens:
    β€’ Runs nginx container on app-network alongside db-server
    β€’ -p 8080:80 publishes container port 80 to host port 8080
    β€’ web-frontend can reach db-server by name via DNS
    β€’ External traffic reaches nginx at http://localhost:8080
    πŸ’‘ Tip: Port publishing (-p) maps host:container. Only published ports are accessible from outside Docker.
  5. Step 5: Test Container Connectivity
    🎯 Goal: Verify that containers can communicate via DNS on the custom network

    πŸ’» Command:
    docker exec web-frontend ping -c 3 db-server

    πŸ” What to look for:
    β€’ DNS resolves "db-server" to its container IP (172.20.x.x)
    β€’ ICMP replies confirm network connectivity
    β€’ 0% packet loss means the network is working perfectly
    β€’ Round-trip time should be <1ms (same host)
    πŸ’‘ Tip: If ping fails with "Name or service not known", containers are on different networks. Use docker network inspect app-network to check membership.
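If the ping fails because a container landed on the wrong network, it can be attached without recreating it (nslookup is provided by busybox in alpine-based images):

```shell
docker network connect app-network web-frontend   # attach a running container
docker exec web-frontend nslookup db-server       # verify name resolution
```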
  6. Step 6: Inspect the Network
    🎯 Goal: Examine the network configuration and connected containers

    πŸ’» Command:
    docker network inspect app-network

    πŸ” Key sections:
    β€’ Containers β€” shows all connected containers and their IPs
    β€’ IPAM.Config β€” shows subnet and gateway
    β€’ Driver β€” shows "bridge" network type
    β€’ Internal β€” shows if external access is blocked
    πŸŽ“ Learning Checkpoint: You've built a complete multi-container stack with custom networking and persistent storage. In production, Docker Compose automates this entire process with a single YAML file.
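As the checkpoint notes, Docker Compose captures this stack in one file; a sketch of an equivalent docker-compose.yml (not a lab task):

```yaml
services:
  db-server:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: labpass123
    volumes:
      - db-data:/var/lib/postgresql/data
    networks: [app-network]
  web-frontend:
    image: nginx:alpine
    ports:
      - "8080:80"
    networks: [app-network]

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
```

Running docker compose up -d then creates the network, the volume, and both containers in dependency order.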

Docker Networking Lab

[Interactive lab environment: a Terminal tab and a Network Topology tab track networks, volumes, containers, and connectivity tests. Default networks at start: bridge, host, none; progress starts at 0/6 tasks, score 0/100.]
πŸŽ‰ After Completing All Steps:

1. Click "Validate Configuration" to see your networking and storage completeness.
2. Switch to "Network Topology" tab to see connected containers and subnet info.
3. Click "View Architecture" to see the network topology diagram.


Lab 6: Docker Swarm Orchestration & Security
Docker / Advanced
Scenario: Production Swarm Cluster
CloudScale Ltd. is setting up a Docker Swarm cluster for production workloads. As the lead Docker engineer, initialize a Swarm cluster, deploy services with replicas, create encrypted overlay networks, manage secrets securely, and perform rolling updates. This covers the DCA "Orchestration" (25%) and "Security" (15%) domains.
DCA Lab

Learning Objectives:

  • Swarm Init: Initialize a Docker Swarm cluster and manage node roles
  • Services: Deploy replicated services across the swarm
  • Overlay Networks: Create encrypted overlay networks for service communication
  • Secrets: Store and manage sensitive data with Docker secrets

πŸ“‹ Step-by-Step Instructions

  1. Step 1: Initialize Docker Swarm
    🎯 Goal: Initialize this node as a Docker Swarm manager

    πŸ“ What is Docker Swarm?
    Docker Swarm is Docker's native orchestration engine. It turns a pool of Docker hosts into a single virtual host. A Swarm has managers (control plane) and workers (run containers). Managers use Raft consensus for high availability.

    πŸ’» Command:
    docker swarm init --advertise-addr <MANAGER-IP>

    πŸ” What happens:
    β€’ This node becomes the first Swarm manager
    β€’ A join token is generated for adding workers
    β€’ An internal PKI is created for TLS communication
    β€’ The Raft consensus database is initialized
    πŸ’‘ Tip: The --advertise-addr tells other nodes which IP to use to reach this manager. Use the host's main interface IP.
    πŸ“– DCA Objective: Set up a Swarm, add nodes, and describe the Raft consensus protocol (Orchestration domain, 25% of exam).
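After init, the manager prints a ready-made join command; the tokens can be re-printed at any time from a manager:

```shell
docker swarm join-token worker    # shows the full "docker swarm join" command for workers
docker swarm join-token manager   # separate token for adding more managers
docker node ls                    # manager-only: list nodes, roles, and availability
```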
  2. Step 2: Create an Overlay Network
    🎯 Goal: Create an encrypted overlay network for service-to-service communication

    πŸ“ Why Overlay Networks?
    Overlay networks span across multiple Docker hosts in a Swarm. They use VXLAN tunneling to enable containers on different hosts to communicate as if on the same network. The --opt encrypted flag adds IPsec encryption for data-in-transit security.

    πŸ’» Command:
    docker network create --driver overlay --opt encrypted prod-overlay

    πŸ” What happens:
    β€’ Creates a multi-host overlay network
    β€’ IPsec encryption protects all traffic between nodes
    β€’ Services on this network get automatic DNS load balancing
    β€’ Only Swarm services can attach to overlay networks (unless the network is created with --attachable)
    πŸ’‘ Tip: Always use --opt encrypted in production for security compliance. Without it, traffic between nodes is in clear text.
  3. Step 3: Create a Docker Secret
    🎯 Goal: Store the database password as a Swarm secret

    πŸ“ What are Docker Secrets?
    Docker secrets securely store sensitive data (passwords, TLS certs, API keys) in the Swarm's encrypted Raft log. Secrets are mounted as files inside containers at /run/secrets/<name>. They're never stored on disk on worker nodes.

    πŸ’» Command:
    echo "SuperSecurePass2024!" | docker secret create db_password -

    πŸ” What happens:
    β€’ The secret is encrypted and stored in the Raft log
    β€’ Only services explicitly granted access can read it
    β€’ The secret value is NEVER exposed in docker inspect
    β€’ Workers only receive secrets for services they're running
    πŸ’‘ Tip: Never put passwords in environment variables or Dockerfiles. Use secrets for all sensitive data. They're encrypted at rest and in transit.
    ⚠️ Security Best Practice: In production, pipe the secret from a file (docker secret create db_password ./password.txt) instead of using echo to avoid shell history exposure.
  4. Step 4: Deploy a Swarm Service
    🎯 Goal: Deploy a replicated service on the overlay network with the secret

    πŸ’» Command:
    docker service create --name webapp-svc --replicas 3 --network prod-overlay --secret db_password -p 80:80 nginx:alpine

    πŸ” Flag breakdown:
    β€’ --replicas 3 β€” Run 3 identical task containers
    β€’ --network prod-overlay β€” Attach to encrypted overlay
    β€’ --secret db_password β€” Mount secret at /run/secrets/db_password
    β€’ -p 80:80 β€” Publish port with routing mesh
    β€’ Swarm load-balances traffic across all 3 replicas
    πŸ’‘ Tip: Swarm's routing mesh means any node in the cluster can accept traffic on published ports, even if the task isn't running on that node.
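To confirm the secret arrives as a file rather than an environment variable, read it from one of the service's local task containers (the lookup pipeline is a sketch):

```shell
CID=$(docker ps --filter name=webapp-svc -q | head -n 1)  # any local task of the service
docker exec "$CID" cat /run/secrets/db_password           # prints the secret value
```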
  5. Step 5: Scale and Update the Service
    🎯 Goal: Scale the service and perform a rolling update

    πŸ’» Command:
    docker service update --replicas 5 --update-parallelism 2 --update-delay 10s webapp-svc

    πŸ” What happens:
    β€’ Scales from 3 to 5 replicas
    β€’ Sets rolling update policy: 2 tasks at a time with 10s delay
    β€’ Swarm performs zero-downtime updates
    β€’ If an update fails, Swarm pauses the rollout by default; set --update-failure-action to change this
    πŸ’‘ Tip: Use --update-failure-action rollback in production to auto-rollback if tasks fail during an update.
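Two related commands round out the update workflow:

```shell
docker service ps webapp-svc        # per-task view: which replicas run the new spec
docker service rollback webapp-svc  # manually revert to the previous service definition
```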
  6. Step 6: List Swarm Services and Tasks
    🎯 Goal: Verify the service is running with all replicas healthy

    πŸ’» Command:
    docker service ls

    πŸ” What to look for:
    β€’ REPLICAS should show 5/5 (all running)
    β€’ MODE should be "replicated"
    β€’ PORTS show the published port mapping
    β€’ Use docker service ps webapp-svc to see individual tasks
    πŸŽ“ Learning Checkpoint: You've built a production-ready Docker Swarm! You've covered cluster init, encrypted networks, secrets management, service deployment, and rolling updates β€” all key DCA exam topics.

Docker Swarm Lab Environment

[Interactive lab environment: a Terminal tab and a Swarm Visualizer tab track cluster nodes, services, replicas, and secrets. Swarm is not initialized at start; progress starts at 0/6 tasks, score 0/100.]
πŸŽ‰ After Completing All Steps:

1. Click "Validate Configuration" to see your Swarm completeness.
2. Switch to "Swarm Visualizer" tab to see cluster nodes, services, and replicas.
3. Click "View Architecture" to see the full Swarm cluster architecture.

